Nov 25 15:12:24 localhost kernel: Linux version 5.14.0-642.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025
Nov 25 15:12:24 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Nov 25 15:12:24 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 25 15:12:24 localhost kernel: BIOS-provided physical RAM map:
Nov 25 15:12:24 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 25 15:12:24 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 25 15:12:24 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 25 15:12:24 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Nov 25 15:12:24 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Nov 25 15:12:24 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 25 15:12:24 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 25 15:12:24 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Nov 25 15:12:24 localhost kernel: NX (Execute Disable) protection: active
Nov 25 15:12:24 localhost kernel: APIC: Static calls initialized
Nov 25 15:12:24 localhost kernel: SMBIOS 2.8 present.
Nov 25 15:12:24 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Nov 25 15:12:24 localhost kernel: Hypervisor detected: KVM
Nov 25 15:12:24 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 25 15:12:24 localhost kernel: kvm-clock: using sched offset of 5109129684 cycles
Nov 25 15:12:24 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 25 15:12:24 localhost kernel: tsc: Detected 2799.998 MHz processor
Nov 25 15:12:24 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 25 15:12:24 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 25 15:12:24 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Nov 25 15:12:24 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 25 15:12:24 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 25 15:12:24 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Nov 25 15:12:24 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Nov 25 15:12:24 localhost kernel: Using GB pages for direct mapping
Nov 25 15:12:24 localhost kernel: RAMDISK: [mem 0x2ed25000-0x3368afff]
Nov 25 15:12:24 localhost kernel: ACPI: Early table checksum verification disabled
Nov 25 15:12:24 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 25 15:12:24 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 15:12:24 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 15:12:24 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 15:12:24 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Nov 25 15:12:24 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 15:12:24 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 15:12:24 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Nov 25 15:12:24 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Nov 25 15:12:24 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Nov 25 15:12:24 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Nov 25 15:12:24 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Nov 25 15:12:24 localhost kernel: No NUMA configuration found
Nov 25 15:12:24 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Nov 25 15:12:24 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Nov 25 15:12:24 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Nov 25 15:12:24 localhost kernel: Zone ranges:
Nov 25 15:12:24 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 25 15:12:24 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 25 15:12:24 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Nov 25 15:12:24 localhost kernel:   Device   empty
Nov 25 15:12:24 localhost kernel: Movable zone start for each node
Nov 25 15:12:24 localhost kernel: Early memory node ranges
Nov 25 15:12:24 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 25 15:12:24 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Nov 25 15:12:24 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Nov 25 15:12:24 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Nov 25 15:12:24 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 25 15:12:24 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 25 15:12:24 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Nov 25 15:12:24 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Nov 25 15:12:24 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 25 15:12:24 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 25 15:12:24 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 25 15:12:24 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 25 15:12:24 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 25 15:12:24 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 25 15:12:24 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 25 15:12:24 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 25 15:12:24 localhost kernel: TSC deadline timer available
Nov 25 15:12:24 localhost kernel: CPU topo: Max. logical packages:   8
Nov 25 15:12:24 localhost kernel: CPU topo: Max. logical dies:       8
Nov 25 15:12:24 localhost kernel: CPU topo: Max. dies per package:   1
Nov 25 15:12:24 localhost kernel: CPU topo: Max. threads per core:   1
Nov 25 15:12:24 localhost kernel: CPU topo: Num. cores per package:     1
Nov 25 15:12:24 localhost kernel: CPU topo: Num. threads per package:   1
Nov 25 15:12:24 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Nov 25 15:12:24 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 25 15:12:24 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 25 15:12:24 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 25 15:12:24 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 25 15:12:24 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 25 15:12:24 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Nov 25 15:12:24 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Nov 25 15:12:24 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 25 15:12:24 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 25 15:12:24 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 25 15:12:24 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Nov 25 15:12:24 localhost kernel: Booting paravirtualized kernel on KVM
Nov 25 15:12:24 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 25 15:12:24 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Nov 25 15:12:24 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Nov 25 15:12:24 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Nov 25 15:12:24 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Nov 25 15:12:24 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 25 15:12:24 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 25 15:12:24 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64", will be passed to user space.
Nov 25 15:12:24 localhost kernel: random: crng init done
Nov 25 15:12:24 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 25 15:12:24 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 25 15:12:24 localhost kernel: Fallback order for Node 0: 0 
Nov 25 15:12:24 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Nov 25 15:12:24 localhost kernel: Policy zone: Normal
Nov 25 15:12:24 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 25 15:12:24 localhost kernel: software IO TLB: area num 8.
Nov 25 15:12:24 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Nov 25 15:12:24 localhost kernel: ftrace: allocating 49313 entries in 193 pages
Nov 25 15:12:24 localhost kernel: ftrace: allocated 193 pages with 3 groups
Nov 25 15:12:24 localhost kernel: Dynamic Preempt: voluntary
Nov 25 15:12:24 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 25 15:12:24 localhost kernel: rcu:         RCU event tracing is enabled.
Nov 25 15:12:24 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Nov 25 15:12:24 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Nov 25 15:12:24 localhost kernel:         Rude variant of Tasks RCU enabled.
Nov 25 15:12:24 localhost kernel:         Tracing variant of Tasks RCU enabled.
Nov 25 15:12:24 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 25 15:12:24 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Nov 25 15:12:24 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 25 15:12:24 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 25 15:12:24 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 25 15:12:24 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Nov 25 15:12:24 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 25 15:12:24 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Nov 25 15:12:24 localhost kernel: Console: colour VGA+ 80x25
Nov 25 15:12:24 localhost kernel: printk: console [ttyS0] enabled
Nov 25 15:12:24 localhost kernel: ACPI: Core revision 20230331
Nov 25 15:12:24 localhost kernel: APIC: Switch to symmetric I/O mode setup
Nov 25 15:12:24 localhost kernel: x2apic enabled
Nov 25 15:12:24 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Nov 25 15:12:24 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 25 15:12:24 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Nov 25 15:12:24 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 25 15:12:24 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 25 15:12:24 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 25 15:12:24 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 25 15:12:24 localhost kernel: Spectre V2 : Mitigation: Retpolines
Nov 25 15:12:24 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 25 15:12:24 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 25 15:12:24 localhost kernel: RETBleed: Mitigation: untrained return thunk
Nov 25 15:12:24 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 25 15:12:24 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 25 15:12:24 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 25 15:12:24 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 25 15:12:24 localhost kernel: x86/bugs: return thunk changed
Nov 25 15:12:24 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 25 15:12:24 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 25 15:12:24 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 25 15:12:24 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 25 15:12:24 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 25 15:12:24 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 25 15:12:24 localhost kernel: Freeing SMP alternatives memory: 40K
Nov 25 15:12:24 localhost kernel: pid_max: default: 32768 minimum: 301
Nov 25 15:12:24 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Nov 25 15:12:24 localhost kernel: landlock: Up and running.
Nov 25 15:12:24 localhost kernel: Yama: becoming mindful.
Nov 25 15:12:24 localhost kernel: SELinux:  Initializing.
Nov 25 15:12:24 localhost kernel: LSM support for eBPF active
Nov 25 15:12:24 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 25 15:12:24 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 25 15:12:24 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 25 15:12:24 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 25 15:12:24 localhost kernel: ... version:                0
Nov 25 15:12:24 localhost kernel: ... bit width:              48
Nov 25 15:12:24 localhost kernel: ... generic registers:      6
Nov 25 15:12:24 localhost kernel: ... value mask:             0000ffffffffffff
Nov 25 15:12:24 localhost kernel: ... max period:             00007fffffffffff
Nov 25 15:12:24 localhost kernel: ... fixed-purpose events:   0
Nov 25 15:12:24 localhost kernel: ... event mask:             000000000000003f
Nov 25 15:12:24 localhost kernel: signal: max sigframe size: 1776
Nov 25 15:12:24 localhost kernel: rcu: Hierarchical SRCU implementation.
Nov 25 15:12:24 localhost kernel: rcu:         Max phase no-delay instances is 400.
Nov 25 15:12:24 localhost kernel: smp: Bringing up secondary CPUs ...
Nov 25 15:12:24 localhost kernel: smpboot: x86: Booting SMP configuration:
Nov 25 15:12:24 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Nov 25 15:12:24 localhost kernel: smp: Brought up 1 node, 8 CPUs
Nov 25 15:12:24 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Nov 25 15:12:24 localhost kernel: node 0 deferred pages initialised in 8ms
Nov 25 15:12:24 localhost kernel: Memory: 7776412K/8388068K available (16384K kernel code, 5787K rwdata, 13900K rodata, 4192K init, 7172K bss, 605564K reserved, 0K cma-reserved)
Nov 25 15:12:24 localhost kernel: devtmpfs: initialized
Nov 25 15:12:24 localhost kernel: x86/mm: Memory block size: 128MB
Nov 25 15:12:24 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 25 15:12:24 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Nov 25 15:12:24 localhost kernel: pinctrl core: initialized pinctrl subsystem
Nov 25 15:12:24 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 25 15:12:24 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Nov 25 15:12:24 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 25 15:12:24 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 25 15:12:24 localhost kernel: audit: initializing netlink subsys (disabled)
Nov 25 15:12:24 localhost kernel: audit: type=2000 audit(1764083543.228:1): state=initialized audit_enabled=0 res=1
Nov 25 15:12:24 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Nov 25 15:12:24 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 25 15:12:24 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 25 15:12:24 localhost kernel: cpuidle: using governor menu
Nov 25 15:12:24 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 25 15:12:24 localhost kernel: PCI: Using configuration type 1 for base access
Nov 25 15:12:24 localhost kernel: PCI: Using configuration type 1 for extended access
Nov 25 15:12:24 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 25 15:12:24 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 25 15:12:24 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 25 15:12:24 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 25 15:12:24 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 25 15:12:24 localhost kernel: Demotion targets for Node 0: null
Nov 25 15:12:24 localhost kernel: cryptd: max_cpu_qlen set to 1000
Nov 25 15:12:24 localhost kernel: ACPI: Added _OSI(Module Device)
Nov 25 15:12:24 localhost kernel: ACPI: Added _OSI(Processor Device)
Nov 25 15:12:24 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 25 15:12:24 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 25 15:12:24 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 25 15:12:24 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 25 15:12:24 localhost kernel: ACPI: Interpreter enabled
Nov 25 15:12:24 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Nov 25 15:12:24 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Nov 25 15:12:24 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 25 15:12:24 localhost kernel: PCI: Using E820 reservations for host bridge windows
Nov 25 15:12:24 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 25 15:12:24 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 25 15:12:24 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Nov 25 15:12:24 localhost kernel: acpiphp: Slot [3] registered
Nov 25 15:12:24 localhost kernel: acpiphp: Slot [4] registered
Nov 25 15:12:24 localhost kernel: acpiphp: Slot [5] registered
Nov 25 15:12:24 localhost kernel: acpiphp: Slot [6] registered
Nov 25 15:12:24 localhost kernel: acpiphp: Slot [7] registered
Nov 25 15:12:24 localhost kernel: acpiphp: Slot [8] registered
Nov 25 15:12:24 localhost kernel: acpiphp: Slot [9] registered
Nov 25 15:12:24 localhost kernel: acpiphp: Slot [10] registered
Nov 25 15:12:24 localhost kernel: acpiphp: Slot [11] registered
Nov 25 15:12:24 localhost kernel: acpiphp: Slot [12] registered
Nov 25 15:12:24 localhost kernel: acpiphp: Slot [13] registered
Nov 25 15:12:24 localhost kernel: acpiphp: Slot [14] registered
Nov 25 15:12:24 localhost kernel: acpiphp: Slot [15] registered
Nov 25 15:12:24 localhost kernel: acpiphp: Slot [16] registered
Nov 25 15:12:24 localhost kernel: acpiphp: Slot [17] registered
Nov 25 15:12:24 localhost kernel: acpiphp: Slot [18] registered
Nov 25 15:12:24 localhost kernel: acpiphp: Slot [19] registered
Nov 25 15:12:24 localhost kernel: acpiphp: Slot [20] registered
Nov 25 15:12:24 localhost kernel: acpiphp: Slot [21] registered
Nov 25 15:12:24 localhost kernel: acpiphp: Slot [22] registered
Nov 25 15:12:24 localhost kernel: acpiphp: Slot [23] registered
Nov 25 15:12:24 localhost kernel: acpiphp: Slot [24] registered
Nov 25 15:12:24 localhost kernel: acpiphp: Slot [25] registered
Nov 25 15:12:24 localhost kernel: acpiphp: Slot [26] registered
Nov 25 15:12:24 localhost kernel: acpiphp: Slot [27] registered
Nov 25 15:12:24 localhost kernel: acpiphp: Slot [28] registered
Nov 25 15:12:24 localhost kernel: acpiphp: Slot [29] registered
Nov 25 15:12:24 localhost kernel: acpiphp: Slot [30] registered
Nov 25 15:12:24 localhost kernel: acpiphp: Slot [31] registered
Nov 25 15:12:24 localhost kernel: PCI host bridge to bus 0000:00
Nov 25 15:12:24 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 25 15:12:24 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 25 15:12:24 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 25 15:12:24 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 25 15:12:24 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Nov 25 15:12:24 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 25 15:12:24 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 25 15:12:24 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 25 15:12:24 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 25 15:12:24 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Nov 25 15:12:24 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Nov 25 15:12:24 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Nov 25 15:12:24 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Nov 25 15:12:24 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Nov 25 15:12:24 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 25 15:12:24 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Nov 25 15:12:24 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 25 15:12:24 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Nov 25 15:12:24 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Nov 25 15:12:24 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 25 15:12:24 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 25 15:12:24 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 25 15:12:24 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Nov 25 15:12:24 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Nov 25 15:12:24 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 25 15:12:24 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 25 15:12:24 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Nov 25 15:12:24 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Nov 25 15:12:24 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 25 15:12:24 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Nov 25 15:12:24 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 25 15:12:24 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Nov 25 15:12:24 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Nov 25 15:12:24 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 25 15:12:24 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 25 15:12:24 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Nov 25 15:12:24 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 25 15:12:24 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 25 15:12:24 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Nov 25 15:12:24 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 25 15:12:24 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 25 15:12:24 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 25 15:12:24 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 25 15:12:24 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 25 15:12:24 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 25 15:12:24 localhost kernel: iommu: Default domain type: Translated
Nov 25 15:12:24 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 25 15:12:24 localhost kernel: SCSI subsystem initialized
Nov 25 15:12:24 localhost kernel: ACPI: bus type USB registered
Nov 25 15:12:24 localhost kernel: usbcore: registered new interface driver usbfs
Nov 25 15:12:24 localhost kernel: usbcore: registered new interface driver hub
Nov 25 15:12:24 localhost kernel: usbcore: registered new device driver usb
Nov 25 15:12:24 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 25 15:12:24 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Nov 25 15:12:24 localhost kernel: PTP clock support registered
Nov 25 15:12:24 localhost kernel: EDAC MC: Ver: 3.0.0
Nov 25 15:12:24 localhost kernel: NetLabel: Initializing
Nov 25 15:12:24 localhost kernel: NetLabel:  domain hash size = 128
Nov 25 15:12:24 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 25 15:12:24 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Nov 25 15:12:24 localhost kernel: PCI: Using ACPI for IRQ routing
Nov 25 15:12:24 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 25 15:12:24 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 25 15:12:24 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Nov 25 15:12:24 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 25 15:12:24 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 25 15:12:24 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 25 15:12:24 localhost kernel: vgaarb: loaded
Nov 25 15:12:24 localhost kernel: clocksource: Switched to clocksource kvm-clock
Nov 25 15:12:24 localhost kernel: VFS: Disk quotas dquot_6.6.0
Nov 25 15:12:24 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 25 15:12:24 localhost kernel: pnp: PnP ACPI init
Nov 25 15:12:24 localhost kernel: pnp 00:03: [dma 2]
Nov 25 15:12:24 localhost kernel: pnp: PnP ACPI: found 5 devices
Nov 25 15:12:24 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 25 15:12:24 localhost kernel: NET: Registered PF_INET protocol family
Nov 25 15:12:24 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 25 15:12:24 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 25 15:12:24 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 25 15:12:24 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 25 15:12:24 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 25 15:12:24 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 25 15:12:24 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Nov 25 15:12:24 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 25 15:12:24 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 25 15:12:24 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 25 15:12:24 localhost kernel: NET: Registered PF_XDP protocol family
Nov 25 15:12:24 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 25 15:12:24 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 25 15:12:24 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 25 15:12:24 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Nov 25 15:12:24 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Nov 25 15:12:24 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 25 15:12:24 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 25 15:12:24 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 25 15:12:24 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 70934 usecs
Nov 25 15:12:24 localhost kernel: PCI: CLS 0 bytes, default 64
Nov 25 15:12:24 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 25 15:12:24 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Nov 25 15:12:24 localhost kernel: ACPI: bus type thunderbolt registered
Nov 25 15:12:24 localhost kernel: Trying to unpack rootfs image as initramfs...
Nov 25 15:12:24 localhost kernel: Initialise system trusted keyrings
Nov 25 15:12:24 localhost kernel: Key type blacklist registered
Nov 25 15:12:24 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Nov 25 15:12:24 localhost kernel: zbud: loaded
Nov 25 15:12:24 localhost kernel: integrity: Platform Keyring initialized
Nov 25 15:12:24 localhost kernel: integrity: Machine keyring initialized
Nov 25 15:12:24 localhost kernel: Freeing initrd memory: 75160K
Nov 25 15:12:24 localhost kernel: NET: Registered PF_ALG protocol family
Nov 25 15:12:24 localhost kernel: xor: automatically using best checksumming function   avx       
Nov 25 15:12:24 localhost kernel: Key type asymmetric registered
Nov 25 15:12:24 localhost kernel: Asymmetric key parser 'x509' registered
Nov 25 15:12:24 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 25 15:12:24 localhost kernel: io scheduler mq-deadline registered
Nov 25 15:12:24 localhost kernel: io scheduler kyber registered
Nov 25 15:12:24 localhost kernel: io scheduler bfq registered
Nov 25 15:12:24 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 25 15:12:24 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Nov 25 15:12:24 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 25 15:12:24 localhost kernel: ACPI: button: Power Button [PWRF]
Nov 25 15:12:24 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 25 15:12:24 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 25 15:12:24 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 25 15:12:24 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 25 15:12:24 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 25 15:12:24 localhost kernel: Non-volatile memory driver v1.3
Nov 25 15:12:24 localhost kernel: rdac: device handler registered
Nov 25 15:12:24 localhost kernel: hp_sw: device handler registered
Nov 25 15:12:24 localhost kernel: emc: device handler registered
Nov 25 15:12:24 localhost kernel: alua: device handler registered
Nov 25 15:12:24 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 25 15:12:24 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 25 15:12:24 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 25 15:12:24 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Nov 25 15:12:24 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 25 15:12:24 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 25 15:12:24 localhost kernel: usb usb1: Product: UHCI Host Controller
Nov 25 15:12:24 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-642.el9.x86_64 uhci_hcd
Nov 25 15:12:24 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Nov 25 15:12:24 localhost kernel: hub 1-0:1.0: USB hub found
Nov 25 15:12:24 localhost kernel: hub 1-0:1.0: 2 ports detected
Nov 25 15:12:24 localhost kernel: usbcore: registered new interface driver usbserial_generic
Nov 25 15:12:24 localhost kernel: usbserial: USB Serial support registered for generic
Nov 25 15:12:24 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 25 15:12:24 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 25 15:12:24 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 25 15:12:24 localhost kernel: mousedev: PS/2 mouse device common for all mice
Nov 25 15:12:24 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 25 15:12:24 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 25 15:12:24 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 25 15:12:24 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 25 15:12:24 localhost kernel: rtc_cmos 00:04: registered as rtc0
Nov 25 15:12:24 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-11-25T15:12:23 UTC (1764083543)
Nov 25 15:12:24 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 25 15:12:24 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 25 15:12:24 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 25 15:12:24 localhost kernel: usbcore: registered new interface driver usbhid
Nov 25 15:12:24 localhost kernel: usbhid: USB HID core driver
Nov 25 15:12:24 localhost kernel: drop_monitor: Initializing network drop monitor service
Nov 25 15:12:24 localhost kernel: Initializing XFRM netlink socket
Nov 25 15:12:24 localhost kernel: NET: Registered PF_INET6 protocol family
Nov 25 15:12:24 localhost kernel: Segment Routing with IPv6
Nov 25 15:12:24 localhost kernel: NET: Registered PF_PACKET protocol family
Nov 25 15:12:24 localhost kernel: mpls_gso: MPLS GSO support
Nov 25 15:12:24 localhost kernel: IPI shorthand broadcast: enabled
Nov 25 15:12:24 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Nov 25 15:12:24 localhost kernel: AES CTR mode by8 optimization enabled
Nov 25 15:12:24 localhost kernel: sched_clock: Marking stable (1210007323, 153660005)->(1438978824, -75311496)
Nov 25 15:12:24 localhost kernel: registered taskstats version 1
Nov 25 15:12:24 localhost kernel: Loading compiled-in X.509 certificates
Nov 25 15:12:24 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 25 15:12:24 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 25 15:12:24 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 25 15:12:24 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Nov 25 15:12:24 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Nov 25 15:12:24 localhost kernel: Demotion targets for Node 0: null
Nov 25 15:12:24 localhost kernel: page_owner is disabled
Nov 25 15:12:24 localhost kernel: Key type .fscrypt registered
Nov 25 15:12:24 localhost kernel: Key type fscrypt-provisioning registered
Nov 25 15:12:24 localhost kernel: Key type big_key registered
Nov 25 15:12:24 localhost kernel: Key type encrypted registered
Nov 25 15:12:24 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 25 15:12:24 localhost kernel: Loading compiled-in module X.509 certificates
Nov 25 15:12:24 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 25 15:12:24 localhost kernel: ima: Allocated hash algorithm: sha256
Nov 25 15:12:24 localhost kernel: ima: No architecture policies found
Nov 25 15:12:24 localhost kernel: evm: Initialising EVM extended attributes:
Nov 25 15:12:24 localhost kernel: evm: security.selinux
Nov 25 15:12:24 localhost kernel: evm: security.SMACK64 (disabled)
Nov 25 15:12:24 localhost kernel: evm: security.SMACK64EXEC (disabled)
Nov 25 15:12:24 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 25 15:12:24 localhost kernel: evm: security.SMACK64MMAP (disabled)
Nov 25 15:12:24 localhost kernel: evm: security.apparmor (disabled)
Nov 25 15:12:24 localhost kernel: evm: security.ima
Nov 25 15:12:24 localhost kernel: evm: security.capability
Nov 25 15:12:24 localhost kernel: evm: HMAC attrs: 0x1
Nov 25 15:12:24 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 25 15:12:24 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 25 15:12:24 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 25 15:12:24 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Nov 25 15:12:24 localhost kernel: usb 1-1: Manufacturer: QEMU
Nov 25 15:12:24 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Nov 25 15:12:24 localhost kernel: Running certificate verification RSA selftest
Nov 25 15:12:24 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 25 15:12:24 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 25 15:12:24 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Nov 25 15:12:24 localhost kernel: Running certificate verification ECDSA selftest
Nov 25 15:12:24 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Nov 25 15:12:24 localhost kernel: clk: Disabling unused clocks
Nov 25 15:12:24 localhost kernel: Freeing unused decrypted memory: 2028K
Nov 25 15:12:24 localhost kernel: Freeing unused kernel image (initmem) memory: 4192K
Nov 25 15:12:24 localhost kernel: Write protecting the kernel read-only data: 30720k
Nov 25 15:12:24 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Nov 25 15:12:24 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 25 15:12:24 localhost kernel: Run /init as init process
Nov 25 15:12:24 localhost kernel:   with arguments:
Nov 25 15:12:24 localhost kernel:     /init
Nov 25 15:12:24 localhost kernel:   with environment:
Nov 25 15:12:24 localhost kernel:     HOME=/
Nov 25 15:12:24 localhost kernel:     TERM=linux
Nov 25 15:12:24 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64
Nov 25 15:12:24 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 25 15:12:24 localhost systemd[1]: Detected virtualization kvm.
Nov 25 15:12:24 localhost systemd[1]: Detected architecture x86-64.
Nov 25 15:12:24 localhost systemd[1]: Running in initrd.
Nov 25 15:12:24 localhost systemd[1]: No hostname configured, using default hostname.
Nov 25 15:12:24 localhost systemd[1]: Hostname set to <localhost>.
Nov 25 15:12:24 localhost systemd[1]: Initializing machine ID from VM UUID.
Nov 25 15:12:24 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Nov 25 15:12:24 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 25 15:12:24 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 25 15:12:24 localhost systemd[1]: Reached target Initrd /usr File System.
Nov 25 15:12:24 localhost systemd[1]: Reached target Local File Systems.
Nov 25 15:12:24 localhost systemd[1]: Reached target Path Units.
Nov 25 15:12:24 localhost systemd[1]: Reached target Slice Units.
Nov 25 15:12:24 localhost systemd[1]: Reached target Swaps.
Nov 25 15:12:24 localhost systemd[1]: Reached target Timer Units.
Nov 25 15:12:24 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 25 15:12:24 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Nov 25 15:12:24 localhost systemd[1]: Listening on Journal Socket.
Nov 25 15:12:24 localhost systemd[1]: Listening on udev Control Socket.
Nov 25 15:12:24 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 25 15:12:24 localhost systemd[1]: Reached target Socket Units.
Nov 25 15:12:24 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 25 15:12:24 localhost systemd[1]: Starting Journal Service...
Nov 25 15:12:24 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 25 15:12:24 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 25 15:12:24 localhost systemd[1]: Starting Create System Users...
Nov 25 15:12:24 localhost systemd[1]: Starting Setup Virtual Console...
Nov 25 15:12:24 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 25 15:12:24 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 25 15:12:24 localhost systemd[1]: Finished Create System Users.
Nov 25 15:12:24 localhost systemd-journald[305]: Journal started
Nov 25 15:12:24 localhost systemd-journald[305]: Runtime Journal (/run/log/journal/3ad80417845649f6921921501d8909bb) is 8.0M, max 153.6M, 145.6M free.
Nov 25 15:12:24 localhost systemd-sysusers[310]: Creating group 'users' with GID 100.
Nov 25 15:12:24 localhost systemd-sysusers[310]: Creating group 'dbus' with GID 81.
Nov 25 15:12:24 localhost systemd-sysusers[310]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 25 15:12:24 localhost systemd[1]: Started Journal Service.
Nov 25 15:12:24 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 25 15:12:24 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 25 15:12:24 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 25 15:12:24 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 25 15:12:24 localhost systemd[1]: Finished Setup Virtual Console.
Nov 25 15:12:24 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 25 15:12:24 localhost systemd[1]: Starting dracut cmdline hook...
Nov 25 15:12:24 localhost dracut-cmdline[325]: dracut-9 dracut-057-102.git20250818.el9
Nov 25 15:12:24 localhost dracut-cmdline[325]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 25 15:12:24 localhost systemd[1]: Finished dracut cmdline hook.
Nov 25 15:12:24 localhost systemd[1]: Starting dracut pre-udev hook...
Nov 25 15:12:24 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 25 15:12:24 localhost kernel: device-mapper: uevent: version 1.0.3
Nov 25 15:12:24 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Nov 25 15:12:24 localhost kernel: RPC: Registered named UNIX socket transport module.
Nov 25 15:12:24 localhost kernel: RPC: Registered udp transport module.
Nov 25 15:12:24 localhost kernel: RPC: Registered tcp transport module.
Nov 25 15:12:24 localhost kernel: RPC: Registered tcp-with-tls transport module.
Nov 25 15:12:24 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 25 15:12:24 localhost rpc.statd[443]: Version 2.5.4 starting
Nov 25 15:12:25 localhost rpc.statd[443]: Initializing NSM state
Nov 25 15:12:25 localhost rpc.idmapd[448]: Setting log level to 0
Nov 25 15:12:25 localhost systemd[1]: Finished dracut pre-udev hook.
Nov 25 15:12:25 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 25 15:12:25 localhost systemd-udevd[461]: Using default interface naming scheme 'rhel-9.0'.
Nov 25 15:12:25 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 25 15:12:25 localhost systemd[1]: Starting dracut pre-trigger hook...
Nov 25 15:12:25 localhost systemd[1]: Finished dracut pre-trigger hook.
Nov 25 15:12:25 localhost systemd[1]: Starting Coldplug All udev Devices...
Nov 25 15:12:25 localhost systemd[1]: Created slice Slice /system/modprobe.
Nov 25 15:12:25 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 25 15:12:25 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 25 15:12:25 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 25 15:12:25 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 25 15:12:25 localhost systemd[1]: Mounting Kernel Configuration File System...
Nov 25 15:12:25 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 25 15:12:25 localhost systemd[1]: Reached target Network.
Nov 25 15:12:25 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 25 15:12:25 localhost systemd[1]: Starting dracut initqueue hook...
Nov 25 15:12:25 localhost systemd[1]: Mounted Kernel Configuration File System.
Nov 25 15:12:25 localhost systemd[1]: Reached target System Initialization.
Nov 25 15:12:25 localhost systemd[1]: Reached target Basic System.
Nov 25 15:12:25 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Nov 25 15:12:25 localhost kernel: libata version 3.00 loaded.
Nov 25 15:12:25 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Nov 25 15:12:25 localhost systemd-udevd[488]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 15:12:25 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Nov 25 15:12:25 localhost kernel: scsi host0: ata_piix
Nov 25 15:12:25 localhost kernel: scsi host1: ata_piix
Nov 25 15:12:25 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Nov 25 15:12:25 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Nov 25 15:12:25 localhost kernel:  vda: vda1
Nov 25 15:12:25 localhost kernel: ata1: found unknown device (class 0)
Nov 25 15:12:25 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 25 15:12:25 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 25 15:12:25 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 25 15:12:25 localhost systemd[1]: Found device /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 25 15:12:25 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 25 15:12:25 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 25 15:12:25 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Nov 25 15:12:25 localhost systemd[1]: Reached target Initrd Root Device.
Nov 25 15:12:25 localhost systemd[1]: Finished dracut initqueue hook.
Nov 25 15:12:25 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Nov 25 15:12:25 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Nov 25 15:12:25 localhost systemd[1]: Reached target Remote File Systems.
Nov 25 15:12:25 localhost systemd[1]: Starting dracut pre-mount hook...
Nov 25 15:12:25 localhost systemd[1]: Finished dracut pre-mount hook.
Nov 25 15:12:25 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709...
Nov 25 15:12:25 localhost systemd-fsck[554]: /usr/sbin/fsck.xfs: XFS file system.
Nov 25 15:12:25 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 25 15:12:25 localhost systemd[1]: Mounting /sysroot...
Nov 25 15:12:26 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 25 15:12:26 localhost kernel: XFS (vda1): Mounting V5 Filesystem 47e3724e-7a1b-439a-9543-b98c9a290709
Nov 25 15:12:26 localhost kernel: XFS (vda1): Ending clean mount
Nov 25 15:12:26 localhost systemd[1]: Mounted /sysroot.
Nov 25 15:12:26 localhost systemd[1]: Reached target Initrd Root File System.
Nov 25 15:12:26 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 25 15:12:26 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 25 15:12:26 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 25 15:12:26 localhost systemd[1]: Reached target Initrd File Systems.
Nov 25 15:12:26 localhost systemd[1]: Reached target Initrd Default Target.
Nov 25 15:12:26 localhost systemd[1]: Starting dracut mount hook...
Nov 25 15:12:26 localhost systemd[1]: Finished dracut mount hook.
Nov 25 15:12:26 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 25 15:12:26 localhost rpc.idmapd[448]: exiting on signal 15
Nov 25 15:12:26 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 25 15:12:26 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 25 15:12:26 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 25 15:12:26 localhost systemd[1]: Stopped target Network.
Nov 25 15:12:26 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 25 15:12:26 localhost systemd[1]: Stopped target Timer Units.
Nov 25 15:12:26 localhost systemd[1]: dbus.socket: Deactivated successfully.
Nov 25 15:12:26 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 25 15:12:26 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 25 15:12:26 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 25 15:12:26 localhost systemd[1]: Stopped target Initrd Default Target.
Nov 25 15:12:26 localhost systemd[1]: Stopped target Basic System.
Nov 25 15:12:26 localhost systemd[1]: Stopped target Initrd Root Device.
Nov 25 15:12:26 localhost systemd[1]: Stopped target Initrd /usr File System.
Nov 25 15:12:26 localhost systemd[1]: Stopped target Path Units.
Nov 25 15:12:26 localhost systemd[1]: Stopped target Remote File Systems.
Nov 25 15:12:26 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 25 15:12:26 localhost systemd[1]: Stopped target Slice Units.
Nov 25 15:12:26 localhost systemd[1]: Stopped target Socket Units.
Nov 25 15:12:26 localhost systemd[1]: Stopped target System Initialization.
Nov 25 15:12:26 localhost systemd[1]: Stopped target Local File Systems.
Nov 25 15:12:26 localhost systemd[1]: Stopped target Swaps.
Nov 25 15:12:26 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 25 15:12:26 localhost systemd[1]: Stopped dracut mount hook.
Nov 25 15:12:26 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 25 15:12:26 localhost systemd[1]: Stopped dracut pre-mount hook.
Nov 25 15:12:26 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Nov 25 15:12:26 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 25 15:12:26 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 25 15:12:26 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 25 15:12:26 localhost systemd[1]: Stopped dracut initqueue hook.
Nov 25 15:12:26 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 25 15:12:26 localhost systemd[1]: Stopped Apply Kernel Variables.
Nov 25 15:12:26 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 25 15:12:26 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Nov 25 15:12:26 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 25 15:12:26 localhost systemd[1]: Stopped Coldplug All udev Devices.
Nov 25 15:12:26 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 25 15:12:26 localhost systemd[1]: Stopped dracut pre-trigger hook.
Nov 25 15:12:26 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 25 15:12:26 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 25 15:12:26 localhost systemd[1]: Stopped Setup Virtual Console.
Nov 25 15:12:26 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 25 15:12:26 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 25 15:12:26 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 25 15:12:26 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 25 15:12:26 localhost systemd[1]: systemd-udevd.service: Consumed 1.074s CPU time.
Nov 25 15:12:26 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 25 15:12:26 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 25 15:12:26 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 25 15:12:26 localhost systemd[1]: Closed udev Control Socket.
Nov 25 15:12:26 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 25 15:12:26 localhost systemd[1]: Closed udev Kernel Socket.
Nov 25 15:12:26 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 25 15:12:26 localhost systemd[1]: Stopped dracut pre-udev hook.
Nov 25 15:12:26 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 25 15:12:26 localhost systemd[1]: Stopped dracut cmdline hook.
Nov 25 15:12:26 localhost systemd[1]: Starting Cleanup udev Database...
Nov 25 15:12:26 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 25 15:12:26 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 25 15:12:26 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 25 15:12:26 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Nov 25 15:12:26 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 25 15:12:26 localhost systemd[1]: Stopped Create System Users.
Nov 25 15:12:26 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 25 15:12:26 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Nov 25 15:12:26 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 25 15:12:26 localhost systemd[1]: Finished Cleanup udev Database.
Nov 25 15:12:26 localhost systemd[1]: Reached target Switch Root.
Nov 25 15:12:26 localhost systemd[1]: Starting Switch Root...
Nov 25 15:12:26 localhost systemd[1]: Switching root.
Nov 25 15:12:26 localhost systemd-journald[305]: Journal stopped
Nov 25 16:09:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9802f0b552e10a5e0872efe96186064f83fff4b402fe68d6f30c350a0c4e7bc-merged.mount: Deactivated successfully.
Nov 25 16:09:58 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 16:09:58 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 16:09:58 compute-0 sudo[225728]: pam_unix(sudo:session): session closed for user root
Nov 25 16:09:58 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 16:09:58 compute-0 podman[225557]: 2025-11-25 16:09:58.116379424 +0000 UTC m=+0.959514491 container remove 075162efe33646661fb7a41de999347f723a3949fb540a05f8e8dfcb88e03b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_carver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:09:58 compute-0 systemd[1]: libpod-conmon-075162efe33646661fb7a41de999347f723a3949fb540a05f8e8dfcb88e03b37.scope: Deactivated successfully.
Nov 25 16:09:58 compute-0 sudo[225298]: pam_unix(sudo:session): session closed for user root
Nov 25 16:09:58 compute-0 sudo[225773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:09:58 compute-0 sudo[225773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:09:58 compute-0 sudo[225773]: pam_unix(sudo:session): session closed for user root
Nov 25 16:09:58 compute-0 sudo[225798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:09:58 compute-0 sudo[225798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:09:58 compute-0 sudo[225798]: pam_unix(sudo:session): session closed for user root
Nov 25 16:09:58 compute-0 sudo[225846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:09:58 compute-0 sudo[225846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:09:58 compute-0 sudo[225846]: pam_unix(sudo:session): session closed for user root
Nov 25 16:09:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v628: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:09:58 compute-0 sudo[225900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 16:09:58 compute-0 sudo[225900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:09:58 compute-0 podman[225967]: 2025-11-25 16:09:58.697463644 +0000 UTC m=+0.037744331 container create 293314731c903ffdbde6391bf4ebe6bb780618d08a21588a08828dd56a39dd98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:09:58 compute-0 systemd[1]: Started libpod-conmon-293314731c903ffdbde6391bf4ebe6bb780618d08a21588a08828dd56a39dd98.scope.
Nov 25 16:09:58 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:09:58 compute-0 podman[225967]: 2025-11-25 16:09:58.774573669 +0000 UTC m=+0.114854376 container init 293314731c903ffdbde6391bf4ebe6bb780618d08a21588a08828dd56a39dd98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_beaver, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:09:58 compute-0 podman[225967]: 2025-11-25 16:09:58.681720854 +0000 UTC m=+0.022001561 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:09:58 compute-0 podman[225967]: 2025-11-25 16:09:58.783668347 +0000 UTC m=+0.123949044 container start 293314731c903ffdbde6391bf4ebe6bb780618d08a21588a08828dd56a39dd98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_beaver, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:09:58 compute-0 podman[225967]: 2025-11-25 16:09:58.787370227 +0000 UTC m=+0.127650914 container attach 293314731c903ffdbde6391bf4ebe6bb780618d08a21588a08828dd56a39dd98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:09:58 compute-0 competent_beaver[226014]: 167 167
Nov 25 16:09:58 compute-0 systemd[1]: libpod-293314731c903ffdbde6391bf4ebe6bb780618d08a21588a08828dd56a39dd98.scope: Deactivated successfully.
Nov 25 16:09:58 compute-0 conmon[226014]: conmon 293314731c903ffdbde6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-293314731c903ffdbde6391bf4ebe6bb780618d08a21588a08828dd56a39dd98.scope/container/memory.events
Nov 25 16:09:58 compute-0 podman[225967]: 2025-11-25 16:09:58.790323818 +0000 UTC m=+0.130604505 container died 293314731c903ffdbde6391bf4ebe6bb780618d08a21588a08828dd56a39dd98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Nov 25 16:09:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-abe6caf93d954f6449c7f160b494fac4113788f10a5fb30efdfce59efee8a789-merged.mount: Deactivated successfully.
Nov 25 16:09:58 compute-0 podman[225967]: 2025-11-25 16:09:58.829598511 +0000 UTC m=+0.169879198 container remove 293314731c903ffdbde6391bf4ebe6bb780618d08a21588a08828dd56a39dd98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_beaver, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:09:58 compute-0 systemd[1]: libpod-conmon-293314731c903ffdbde6391bf4ebe6bb780618d08a21588a08828dd56a39dd98.scope: Deactivated successfully.
Nov 25 16:09:58 compute-0 sudo[226071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tintfpidkinfujxpfiqjvnzmajmixnih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764086998.277872-127-129997093884375/AnsiballZ_systemd_service.py'
Nov 25 16:09:58 compute-0 sudo[226071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:09:58 compute-0 podman[226081]: 2025-11-25 16:09:58.996511616 +0000 UTC m=+0.041093753 container create e48fc12052444e64910b9dd2b3c9abc486c62f96913d3694e7ce2353597d2606 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ellis, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 16:09:59 compute-0 systemd[1]: Started libpod-conmon-e48fc12052444e64910b9dd2b3c9abc486c62f96913d3694e7ce2353597d2606.scope.
Nov 25 16:09:59 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:09:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edf95ce340c21bf24190f5ac31fbed38925ce35e3b30d76aa6ec25b73d0d4a62/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:09:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edf95ce340c21bf24190f5ac31fbed38925ce35e3b30d76aa6ec25b73d0d4a62/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:09:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edf95ce340c21bf24190f5ac31fbed38925ce35e3b30d76aa6ec25b73d0d4a62/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:09:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edf95ce340c21bf24190f5ac31fbed38925ce35e3b30d76aa6ec25b73d0d4a62/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:09:59 compute-0 podman[226081]: 2025-11-25 16:09:59.072152931 +0000 UTC m=+0.116735058 container init e48fc12052444e64910b9dd2b3c9abc486c62f96913d3694e7ce2353597d2606 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ellis, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:09:59 compute-0 podman[226081]: 2025-11-25 16:09:58.978446533 +0000 UTC m=+0.023028690 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:09:59 compute-0 podman[226081]: 2025-11-25 16:09:59.080349214 +0000 UTC m=+0.124931341 container start e48fc12052444e64910b9dd2b3c9abc486c62f96913d3694e7ce2353597d2606 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 16:09:59 compute-0 podman[226081]: 2025-11-25 16:09:59.083594303 +0000 UTC m=+0.128176450 container attach e48fc12052444e64910b9dd2b3c9abc486c62f96913d3694e7ce2353597d2606 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ellis, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 16:09:59 compute-0 python3.9[226075]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 16:09:59 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 25 16:09:59 compute-0 sudo[226071]: pam_unix(sudo:session): session closed for user root
Nov 25 16:09:59 compute-0 ceph-mon[74985]: pgmap v628: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:09:59 compute-0 sudo[226257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwmrpavpmpbhpeaxvimnhduwlzlrjtzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764086999.4398367-135-14338036218570/AnsiballZ_systemd_service.py'
Nov 25 16:09:59 compute-0 sudo[226257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:09:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:09:59 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Nov 25 16:09:59 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:09:59.889494) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 16:09:59 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Nov 25 16:09:59 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086999889580, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1521, "num_deletes": 253, "total_data_size": 2458656, "memory_usage": 2491128, "flush_reason": "Manual Compaction"}
Nov 25 16:09:59 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Nov 25 16:09:59 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086999903479, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1400692, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11873, "largest_seqno": 13393, "table_properties": {"data_size": 1395509, "index_size": 2451, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13064, "raw_average_key_size": 20, "raw_value_size": 1384147, "raw_average_value_size": 2129, "num_data_blocks": 113, "num_entries": 650, "num_filter_entries": 650, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764086834, "oldest_key_time": 1764086834, "file_creation_time": 1764086999, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:09:59 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 13996 microseconds, and 7412 cpu microseconds.
Nov 25 16:09:59 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:09:59 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:09:59.903516) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1400692 bytes OK
Nov 25 16:09:59 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:09:59.903533) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Nov 25 16:09:59 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:09:59.905690) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Nov 25 16:09:59 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:09:59.905706) EVENT_LOG_v1 {"time_micros": 1764086999905701, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 16:09:59 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:09:59.905725) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 16:09:59 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 2452028, prev total WAL file size 2452028, number of live WAL files 2.
Nov 25 16:09:59 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:09:59 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:09:59.906503) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323531' seq:72057594037927935, type:22 .. '6D67727374617400353035' seq:0, type:0; will stop at (end)
Nov 25 16:09:59 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 16:09:59 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1367KB)], [29(8085KB)]
Nov 25 16:09:59 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086999906565, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9680411, "oldest_snapshot_seqno": -1}
Nov 25 16:09:59 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4012 keys, 7366962 bytes, temperature: kUnknown
Nov 25 16:09:59 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086999962904, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7366962, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7338550, "index_size": 17294, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 96353, "raw_average_key_size": 24, "raw_value_size": 7264537, "raw_average_value_size": 1810, "num_data_blocks": 752, "num_entries": 4012, "num_filter_entries": 4012, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764086999, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:09:59 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:09:59 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:09:59.963146) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7366962 bytes
Nov 25 16:09:59 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:09:59.964376) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 172.4 rd, 131.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 7.9 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(12.2) write-amplify(5.3) OK, records in: 4452, records dropped: 440 output_compression: NoCompression
Nov 25 16:09:59 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:09:59.964392) EVENT_LOG_v1 {"time_micros": 1764086999964383, "job": 12, "event": "compaction_finished", "compaction_time_micros": 56150, "compaction_time_cpu_micros": 16020, "output_level": 6, "num_output_files": 1, "total_output_size": 7366962, "num_input_records": 4452, "num_output_records": 4012, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 16:09:59 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:09:59 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086999964673, "job": 12, "event": "table_file_deletion", "file_number": 31}
Nov 25 16:09:59 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:09:59 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764086999965885, "job": 12, "event": "table_file_deletion", "file_number": 29}
Nov 25 16:09:59 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:09:59.906412) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:09:59 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:09:59.965957) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:09:59 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:09:59.965962) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:09:59 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:09:59.965964) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:09:59 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:09:59.965965) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:09:59 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:09:59.965966) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:09:59 compute-0 goofy_ellis[226098]: {
Nov 25 16:09:59 compute-0 goofy_ellis[226098]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 16:09:59 compute-0 goofy_ellis[226098]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:09:59 compute-0 goofy_ellis[226098]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 16:09:59 compute-0 goofy_ellis[226098]:         "osd_id": 1,
Nov 25 16:09:59 compute-0 goofy_ellis[226098]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:09:59 compute-0 goofy_ellis[226098]:         "type": "bluestore"
Nov 25 16:09:59 compute-0 goofy_ellis[226098]:     },
Nov 25 16:09:59 compute-0 goofy_ellis[226098]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 16:09:59 compute-0 goofy_ellis[226098]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:09:59 compute-0 goofy_ellis[226098]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 16:09:59 compute-0 goofy_ellis[226098]:         "osd_id": 2,
Nov 25 16:09:59 compute-0 goofy_ellis[226098]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:09:59 compute-0 goofy_ellis[226098]:         "type": "bluestore"
Nov 25 16:09:59 compute-0 goofy_ellis[226098]:     },
Nov 25 16:09:59 compute-0 goofy_ellis[226098]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 16:09:59 compute-0 goofy_ellis[226098]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:09:59 compute-0 goofy_ellis[226098]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 16:09:59 compute-0 goofy_ellis[226098]:         "osd_id": 0,
Nov 25 16:09:59 compute-0 goofy_ellis[226098]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:09:59 compute-0 goofy_ellis[226098]:         "type": "bluestore"
Nov 25 16:09:59 compute-0 goofy_ellis[226098]:     }
Nov 25 16:09:59 compute-0 goofy_ellis[226098]: }
Nov 25 16:10:00 compute-0 python3.9[226260]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 16:10:00 compute-0 systemd[1]: libpod-e48fc12052444e64910b9dd2b3c9abc486c62f96913d3694e7ce2353597d2606.scope: Deactivated successfully.
Nov 25 16:10:00 compute-0 podman[226081]: 2025-11-25 16:10:00.019255221 +0000 UTC m=+1.063837348 container died e48fc12052444e64910b9dd2b3c9abc486c62f96913d3694e7ce2353597d2606 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ellis, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Nov 25 16:10:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-edf95ce340c21bf24190f5ac31fbed38925ce35e3b30d76aa6ec25b73d0d4a62-merged.mount: Deactivated successfully.
Nov 25 16:10:00 compute-0 systemd[1]: Reloading.
Nov 25 16:10:00 compute-0 podman[226081]: 2025-11-25 16:10:00.072298769 +0000 UTC m=+1.116880896 container remove e48fc12052444e64910b9dd2b3c9abc486c62f96913d3694e7ce2353597d2606 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ellis, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 16:10:00 compute-0 sudo[225900]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:10:00 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:10:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:10:00 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:10:00 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev fbbd77ce-3655-42aa-9e92-1c4d81f4814c does not exist
Nov 25 16:10:00 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 9b45cf01-092a-4150-9248-f84004e1f464 does not exist
Nov 25 16:10:00 compute-0 systemd-rc-local-generator[226330]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 16:10:00 compute-0 systemd-sysv-generator[226333]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 16:10:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v629: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:00 compute-0 systemd[1]: libpod-conmon-e48fc12052444e64910b9dd2b3c9abc486c62f96913d3694e7ce2353597d2606.scope: Deactivated successfully.
Nov 25 16:10:00 compute-0 sudo[226308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:10:00 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 25 16:10:00 compute-0 sudo[226308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:10:00 compute-0 sudo[226308]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:00 compute-0 systemd[1]: Starting Open-iSCSI...
Nov 25 16:10:00 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Nov 25 16:10:00 compute-0 sudo[226362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 16:10:00 compute-0 systemd[1]: Started Open-iSCSI.
Nov 25 16:10:00 compute-0 sudo[226362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:10:00 compute-0 sudo[226362]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:00 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Nov 25 16:10:00 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Nov 25 16:10:00 compute-0 sudo[226257]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:01 compute-0 sudo[226544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykkabdfqcvjywpjmvuqtpbafnpijgmje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087000.8176672-146-1685676101704/AnsiballZ_service_facts.py'
Nov 25 16:10:01 compute-0 sudo[226544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:01 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:10:01 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:10:01 compute-0 ceph-mon[74985]: pgmap v629: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:01 compute-0 python3.9[226546]: ansible-ansible.builtin.service_facts Invoked
Nov 25 16:10:01 compute-0 network[226563]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 16:10:01 compute-0 network[226564]: 'network-scripts' will be removed from distribution in near future.
Nov 25 16:10:01 compute-0 network[226565]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 16:10:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v630: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:03 compute-0 ceph-mon[74985]: pgmap v630: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v631: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:10:05 compute-0 sudo[226544]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:05 compute-0 ceph-mon[74985]: pgmap v631: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:05 compute-0 sudo[226835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-giucmbptvkzwxfsirbijouzghsdbhyav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087005.6404073-156-51678850480132/AnsiballZ_file.py'
Nov 25 16:10:05 compute-0 sudo[226835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:06 compute-0 python3.9[226837]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 25 16:10:06 compute-0 sudo[226835]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v632: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:06 compute-0 sudo[226987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzuubcqmfxgcliptnqqchbqpewsdpgpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087006.2810245-164-213215616684299/AnsiballZ_modprobe.py'
Nov 25 16:10:06 compute-0 sudo[226987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:06 compute-0 python3.9[226989]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Nov 25 16:10:06 compute-0 sudo[226987]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:07 compute-0 ceph-mon[74985]: pgmap v632: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:07 compute-0 sudo[227143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxrcssmutudsxpgsctnxfddmbzotejqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087007.126995-172-55631441861680/AnsiballZ_stat.py'
Nov 25 16:10:07 compute-0 sudo[227143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:07 compute-0 python3.9[227145]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 16:10:07 compute-0 sudo[227143]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:07 compute-0 sudo[227266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duvhajscqzyzbgksyapimpeghfskdjzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087007.126995-172-55631441861680/AnsiballZ_copy.py'
Nov 25 16:10:07 compute-0 sudo[227266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:08 compute-0 python3.9[227268]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764087007.126995-172-55631441861680/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 16:10:08 compute-0 sudo[227266]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v633: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:08 compute-0 sudo[227418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htuosjbxooxvaopfcnzeybhvokhkvndl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087008.4264216-188-60223211684378/AnsiballZ_lineinfile.py'
Nov 25 16:10:08 compute-0 sudo[227418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:09 compute-0 python3.9[227420]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 16:10:09 compute-0 sudo[227418]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:09 compute-0 ceph-mon[74985]: pgmap v633: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:09 compute-0 sudo[227570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-soopmfvqibdrnjrxndbbjiazzgjrlmrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087009.2219796-196-166750662417054/AnsiballZ_systemd.py'
Nov 25 16:10:09 compute-0 sudo[227570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:10:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:10:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:10:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:10:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:10:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:10:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:10:10 compute-0 python3.9[227572]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 16:10:10 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 25 16:10:10 compute-0 systemd[1]: Stopped Load Kernel Modules.
Nov 25 16:10:10 compute-0 systemd[1]: Stopping Load Kernel Modules...
Nov 25 16:10:10 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 25 16:10:10 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 25 16:10:10 compute-0 sudo[227570]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v634: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:10 compute-0 sudo[227726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwagovqaqlbsncqwxneocdiqweehjiqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087010.3984153-204-185128794618067/AnsiballZ_file.py'
Nov 25 16:10:10 compute-0 sudo[227726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:10 compute-0 python3.9[227728]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 16:10:10 compute-0 sudo[227726]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:11 compute-0 ceph-mon[74985]: pgmap v634: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:11 compute-0 sudo[227878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgpqhfmlyfsjmkbufsemczupnakbwhtv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087011.1966343-213-275996480569741/AnsiballZ_stat.py'
Nov 25 16:10:11 compute-0 sudo[227878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:11 compute-0 python3.9[227880]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 16:10:11 compute-0 sudo[227878]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:12 compute-0 sudo[228030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyoruasvovpkwhtnyfgcbjohjtzfycef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087011.8777616-222-16513799918576/AnsiballZ_stat.py'
Nov 25 16:10:12 compute-0 sudo[228030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v635: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:12 compute-0 python3.9[228032]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 16:10:12 compute-0 sudo[228030]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:13 compute-0 sudo[228182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-leipbkeliissggaxskpnrfahhpgkpmtq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087012.6032295-230-148085352226994/AnsiballZ_stat.py'
Nov 25 16:10:13 compute-0 sudo[228182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:13 compute-0 python3.9[228184]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 16:10:13 compute-0 sudo[228182]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:13 compute-0 ceph-mon[74985]: pgmap v635: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:13 compute-0 sudo[228305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-weqyfdvoycosqklzxwfisbjypotfjpjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087012.6032295-230-148085352226994/AnsiballZ_copy.py'
Nov 25 16:10:13 compute-0 sudo[228305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:10:13.574 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:10:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:10:13.576 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:10:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:10:13.576 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:10:13 compute-0 python3.9[228307]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764087012.6032295-230-148085352226994/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 16:10:13 compute-0 sudo[228305]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:14 compute-0 sudo[228470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umgzywtugorqyfifurnautqvsrpphapt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087013.9061937-245-177279574446446/AnsiballZ_command.py'
Nov 25 16:10:14 compute-0 sudo[228470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:14 compute-0 podman[228431]: 2025-11-25 16:10:14.269283962 +0000 UTC m=+0.106877728 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 16:10:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v636: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:14 compute-0 python3.9[228475]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 16:10:14 compute-0 sudo[228470]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:14 compute-0 sudo[228636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrvbbbgzoyxarlffkyoqhsarmfetbdzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087014.5959308-253-32681157655762/AnsiballZ_lineinfile.py'
Nov 25 16:10:14 compute-0 sudo[228636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:10:15 compute-0 python3.9[228638]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 16:10:15 compute-0 sudo[228636]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:15 compute-0 ceph-mon[74985]: pgmap v636: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:15 compute-0 sudo[228788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghdjwtovmsbalubkqdhidwswjxepouie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087015.2112453-261-251355374185150/AnsiballZ_replace.py'
Nov 25 16:10:15 compute-0 sudo[228788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:15 compute-0 python3.9[228790]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 16:10:15 compute-0 sudo[228788]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:16 compute-0 sudo[228940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muguldbqmypglypevnvjajmryoirvadn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087016.011382-269-66080619211504/AnsiballZ_replace.py'
Nov 25 16:10:16 compute-0 sudo[228940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v637: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:16 compute-0 python3.9[228942]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 16:10:16 compute-0 sudo[228940]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:16 compute-0 sudo[229092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoxyopwuoftpwqiqyfcegyxvzwfoszxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087016.6623697-278-101908403469172/AnsiballZ_lineinfile.py'
Nov 25 16:10:16 compute-0 sudo[229092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:17 compute-0 python3.9[229094]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 16:10:17 compute-0 sudo[229092]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:17 compute-0 ceph-mon[74985]: pgmap v637: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:17 compute-0 sudo[229244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msvrybrgteblchhyksntirlrwxoishre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087017.3804827-278-77952840696069/AnsiballZ_lineinfile.py'
Nov 25 16:10:17 compute-0 sudo[229244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:17 compute-0 podman[229246]: 2025-11-25 16:10:17.737775541 +0000 UTC m=+0.057376657 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 16:10:17 compute-0 python3.9[229247]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 16:10:17 compute-0 sudo[229244]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:18 compute-0 sudo[229414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfvufhuzcdwwdgekcmamnjttqzevllqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087017.9891431-278-134120588741867/AnsiballZ_lineinfile.py'
Nov 25 16:10:18 compute-0 sudo[229414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v638: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:18 compute-0 python3.9[229416]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 16:10:18 compute-0 sudo[229414]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:18 compute-0 sudo[229566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsvggxedbtmysyushtjewndwlomarjrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087018.5693164-278-189746665629562/AnsiballZ_lineinfile.py'
Nov 25 16:10:18 compute-0 sudo[229566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:18 compute-0 python3.9[229568]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 16:10:19 compute-0 sudo[229566]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:19 compute-0 sudo[229718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgdtaaucwqthibsbkrsldaccwhfcawil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087019.1461754-307-25226843017919/AnsiballZ_stat.py'
Nov 25 16:10:19 compute-0 sudo[229718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:19 compute-0 ceph-mon[74985]: pgmap v638: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:19 compute-0 python3.9[229720]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 16:10:19 compute-0 sudo[229718]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:10:20 compute-0 sudo[229872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyjtubtgrdnmpgmepcuktjlcdxvusova ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087019.8769598-315-109379698353033/AnsiballZ_file.py'
Nov 25 16:10:20 compute-0 sudo[229872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v639: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:20 compute-0 python3.9[229874]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 16:10:20 compute-0 sudo[229872]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:20 compute-0 sudo[230024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruhcwqqozfkcvbdqddipdtjhowbnbbse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087020.7075477-324-182245045884248/AnsiballZ_file.py'
Nov 25 16:10:20 compute-0 sudo[230024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:21 compute-0 python3.9[230026]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 16:10:21 compute-0 sudo[230024]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:21 compute-0 ceph-mon[74985]: pgmap v639: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:21 compute-0 sudo[230176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iapkkfbmrfavxcfapmmeqnmyntyydhua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087021.3517091-332-37374820523064/AnsiballZ_stat.py'
Nov 25 16:10:21 compute-0 sudo[230176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:21 compute-0 python3.9[230178]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 16:10:21 compute-0 sudo[230176]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:22 compute-0 sudo[230254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ardwyufsvehqphvxvdjsvvqviydckxuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087021.3517091-332-37374820523064/AnsiballZ_file.py'
Nov 25 16:10:22 compute-0 sudo[230254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:22 compute-0 python3.9[230256]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 16:10:22 compute-0 sudo[230254]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v640: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:22 compute-0 sudo[230406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvqanajqxbnkngxoaapenguvqzmflllk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087022.3923917-332-22276073027174/AnsiballZ_stat.py'
Nov 25 16:10:22 compute-0 sudo[230406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:22 compute-0 python3.9[230408]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 16:10:22 compute-0 sudo[230406]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:23 compute-0 sudo[230484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgtiojatlotgjoqjmxxvenqeuwysrxjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087022.3923917-332-22276073027174/AnsiballZ_file.py'
Nov 25 16:10:23 compute-0 sudo[230484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:23 compute-0 python3.9[230486]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 16:10:23 compute-0 sudo[230484]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:23 compute-0 ceph-mon[74985]: pgmap v640: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:23 compute-0 sudo[230636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyyakickjkrpvfaksockvxlnkoexewup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087023.5175514-355-104849638629774/AnsiballZ_file.py'
Nov 25 16:10:23 compute-0 sudo[230636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:23 compute-0 python3.9[230638]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 16:10:23 compute-0 sudo[230636]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v641: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:24 compute-0 sudo[230788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnljsbudwqqytenffwolahjpfrjbraqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087024.1509125-363-106195892458249/AnsiballZ_stat.py'
Nov 25 16:10:24 compute-0 sudo[230788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:24 compute-0 python3.9[230790]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 16:10:24 compute-0 sudo[230788]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:10:24 compute-0 sudo[230866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtapcdobhwugggywnmcltnnsebnafcwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087024.1509125-363-106195892458249/AnsiballZ_file.py'
Nov 25 16:10:24 compute-0 sudo[230866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:25 compute-0 python3.9[230868]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 16:10:25 compute-0 sudo[230866]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:25 compute-0 ceph-mon[74985]: pgmap v641: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:25 compute-0 sudo[231018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aoqiuwlumeujmhjlyeszlljdzufevckd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087025.339648-375-91564040730869/AnsiballZ_stat.py'
Nov 25 16:10:25 compute-0 sudo[231018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:25 compute-0 python3.9[231020]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 16:10:25 compute-0 sudo[231018]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:26 compute-0 sudo[231096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrchjszdrezgxuabqfryplxotyefnkfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087025.339648-375-91564040730869/AnsiballZ_file.py'
Nov 25 16:10:26 compute-0 sudo[231096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v642: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:26 compute-0 python3.9[231098]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 16:10:26 compute-0 sudo[231096]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:26 compute-0 sudo[231248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odqumlxebkkgptiztzqfhryjqidxtccf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087026.570406-387-262949597943095/AnsiballZ_systemd.py'
Nov 25 16:10:26 compute-0 sudo[231248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:27 compute-0 python3.9[231250]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 16:10:27 compute-0 systemd[1]: Reloading.
Nov 25 16:10:27 compute-0 systemd-rc-local-generator[231276]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 16:10:27 compute-0 systemd-sysv-generator[231280]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 16:10:27 compute-0 sudo[231248]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:27 compute-0 ceph-mon[74985]: pgmap v642: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:27 compute-0 sudo[231436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oelsyqbwpgwxxpbkjxtowewomouiznuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087027.630672-395-135345217615076/AnsiballZ_stat.py'
Nov 25 16:10:27 compute-0 sudo[231436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:28 compute-0 python3.9[231438]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 16:10:28 compute-0 sudo[231436]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:28 compute-0 sudo[231514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvkgixulinccrkdfvukyrlpceostaysk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087027.630672-395-135345217615076/AnsiballZ_file.py'
Nov 25 16:10:28 compute-0 sudo[231514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v643: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:28 compute-0 python3.9[231516]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 16:10:28 compute-0 sudo[231514]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:29 compute-0 sudo[231666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntjlyvmjutfwazmtyrrkizjrhlhruine ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087028.7332933-407-63816020370533/AnsiballZ_stat.py'
Nov 25 16:10:29 compute-0 sudo[231666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:29 compute-0 python3.9[231668]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 16:10:29 compute-0 sudo[231666]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:29 compute-0 sudo[231744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-waigxpiqvbninbglyzmxhmsljdanumuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087028.7332933-407-63816020370533/AnsiballZ_file.py'
Nov 25 16:10:29 compute-0 sudo[231744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:29 compute-0 ceph-mon[74985]: pgmap v643: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:29 compute-0 python3.9[231746]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 16:10:29 compute-0 sudo[231744]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:10:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v644: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:30 compute-0 sudo[231897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdyfrjkgzxtdfuakfbeyohdlqywmtxjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087030.0773807-419-29789005572562/AnsiballZ_systemd.py'
Nov 25 16:10:30 compute-0 sudo[231897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:30 compute-0 python3.9[231899]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 16:10:30 compute-0 systemd[1]: Reloading.
Nov 25 16:10:30 compute-0 systemd-rc-local-generator[231924]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 16:10:30 compute-0 systemd-sysv-generator[231929]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 16:10:31 compute-0 systemd[1]: Starting Create netns directory...
Nov 25 16:10:31 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 25 16:10:31 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 25 16:10:31 compute-0 systemd[1]: Finished Create netns directory.
Nov 25 16:10:31 compute-0 sudo[231897]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:31 compute-0 ceph-mon[74985]: pgmap v644: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:31 compute-0 sudo[232090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eaebamhkvgcpxiblgleybhwhqawoivqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087031.541526-429-10299803104307/AnsiballZ_file.py'
Nov 25 16:10:31 compute-0 sudo[232090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:32 compute-0 python3.9[232092]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 16:10:32 compute-0 sudo[232090]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v645: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:32 compute-0 sudo[232242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwirlmemgmhfwfmfwnbuejvydzgsygup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087032.238688-437-229986239869302/AnsiballZ_stat.py'
Nov 25 16:10:32 compute-0 sudo[232242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:32 compute-0 python3.9[232244]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 16:10:32 compute-0 sudo[232242]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:33 compute-0 sudo[232365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chiejjyruhlabzhfrdtvupeviqtnsgdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087032.238688-437-229986239869302/AnsiballZ_copy.py'
Nov 25 16:10:33 compute-0 sudo[232365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:33 compute-0 python3.9[232367]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764087032.238688-437-229986239869302/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 16:10:33 compute-0 sudo[232365]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:33 compute-0 ceph-mon[74985]: pgmap v645: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:33 compute-0 sudo[232517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fivecyouhzuzcqasmvgxvbjimyxumqfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087033.70426-454-25685183509772/AnsiballZ_file.py'
Nov 25 16:10:33 compute-0 sudo[232517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:34 compute-0 python3.9[232519]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 16:10:34 compute-0 sudo[232517]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v646: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:34 compute-0 sudo[232669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-midpmxiknenmhdqjphdvgaouoepefekn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087034.302232-462-52544286662836/AnsiballZ_stat.py'
Nov 25 16:10:34 compute-0 sudo[232669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:34 compute-0 python3.9[232671]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 16:10:34 compute-0 sudo[232669]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:10:35 compute-0 sudo[232792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oysskacmtspvwpmzcemnyysjmpbloztu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087034.302232-462-52544286662836/AnsiballZ_copy.py'
Nov 25 16:10:35 compute-0 sudo[232792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:35 compute-0 python3.9[232794]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764087034.302232-462-52544286662836/.source.json _original_basename=.gy4z6kjq follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 16:10:35 compute-0 sudo[232792]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:35 compute-0 ceph-mon[74985]: pgmap v646: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:35 compute-0 sudo[232944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmyqllgpreiqsfndfmqovtioskeeftez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087035.4272454-477-94951767378818/AnsiballZ_file.py'
Nov 25 16:10:35 compute-0 sudo[232944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:35 compute-0 python3.9[232946]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 16:10:35 compute-0 sudo[232944]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:36 compute-0 sudo[233096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auuuwucisehghxtspajafimjwxnvrjfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087036.07846-485-249126268799619/AnsiballZ_stat.py'
Nov 25 16:10:36 compute-0 sudo[233096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v647: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:36 compute-0 sudo[233096]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:36 compute-0 sudo[233219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygfgnwhilbnxedefzdddddbldfyhbqew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087036.07846-485-249126268799619/AnsiballZ_copy.py'
Nov 25 16:10:36 compute-0 sudo[233219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:37 compute-0 sudo[233219]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:37 compute-0 ceph-mon[74985]: pgmap v647: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:37 compute-0 sudo[233371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybbjpjrrkevpdwsnsrccrcloymvhfoyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087037.3167646-502-162317690860885/AnsiballZ_container_config_data.py'
Nov 25 16:10:37 compute-0 sudo[233371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:37 compute-0 python3.9[233373]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Nov 25 16:10:37 compute-0 sudo[233371]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v648: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:38 compute-0 sudo[233523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iifysuhiwkgnitwiluipvupkdzbwsnjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087038.1616807-511-63045453754511/AnsiballZ_container_config_hash.py'
Nov 25 16:10:38 compute-0 sudo[233523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:38 compute-0 ceph-mon[74985]: pgmap v648: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:38 compute-0 python3.9[233525]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 16:10:38 compute-0 sudo[233523]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:39 compute-0 sudo[233675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sprxyafnibbcvvabbkzjtkyjgpgrzmgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087039.0374854-520-234756450679898/AnsiballZ_podman_container_info.py'
Nov 25 16:10:39 compute-0 sudo[233675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:39 compute-0 python3.9[233677]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 25 16:10:39 compute-0 sudo[233675]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:10:39 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:10:39
Nov 25 16:10:39 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:10:39 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:10:39 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.control', 'images', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'default.rgw.meta', 'vms', '.mgr', 'cephfs.cephfs.meta', 'backups', '.rgw.root']
Nov 25 16:10:39 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:10:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:10:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:10:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:10:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:10:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:10:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:10:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:10:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:10:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:10:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:10:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:10:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:10:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:10:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:10:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:10:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:10:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v649: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:40 compute-0 sudo[233853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcidxpnajycqxrtpabsgabscfjgianfh ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764087040.433486-533-137734582020202/AnsiballZ_edpm_container_manage.py'
Nov 25 16:10:40 compute-0 sudo[233853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:41 compute-0 python3[233855]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 16:10:41 compute-0 ceph-mon[74985]: pgmap v649: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:42 compute-0 podman[233869]: 2025-11-25 16:10:42.25749629 +0000 UTC m=+1.027557252 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24
Nov 25 16:10:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v650: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:42 compute-0 podman[233926]: 2025-11-25 16:10:42.403273251 +0000 UTC m=+0.050521404 container create 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.schema-version=1.0)
Nov 25 16:10:42 compute-0 podman[233926]: 2025-11-25 16:10:42.376374145 +0000 UTC m=+0.023622338 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24
Nov 25 16:10:42 compute-0 python3[233855]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24
Nov 25 16:10:42 compute-0 sudo[233853]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:43 compute-0 sudo[234115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynwxtakbwvkabqlythkwxbvtxplhbfxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087042.7130523-541-107827879337700/AnsiballZ_stat.py'
Nov 25 16:10:43 compute-0 sudo[234115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:43 compute-0 python3.9[234117]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 16:10:43 compute-0 sudo[234115]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:43 compute-0 ceph-mon[74985]: pgmap v650: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:43 compute-0 sudo[234269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxwzcfkyxznrnuckufbfpeccvtjjaqht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087043.4524717-550-191233347842070/AnsiballZ_file.py'
Nov 25 16:10:43 compute-0 sudo[234269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:43 compute-0 python3.9[234271]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 16:10:43 compute-0 sudo[234269]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:44 compute-0 sudo[234345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofifmbitpucicodpsnywqxwxddvstknd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087043.4524717-550-191233347842070/AnsiballZ_stat.py'
Nov 25 16:10:44 compute-0 sudo[234345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:44 compute-0 python3.9[234347]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 16:10:44 compute-0 sudo[234345]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v651: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:44 compute-0 podman[234400]: 2025-11-25 16:10:44.704493888 +0000 UTC m=+0.120430669 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 16:10:44 compute-0 sudo[234522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixkhaykiynpkqnivbqeyfnqinhfblftz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087044.4149652-550-124698999493805/AnsiballZ_copy.py'
Nov 25 16:10:44 compute-0 sudo[234522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:10:45 compute-0 python3.9[234524]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764087044.4149652-550-124698999493805/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 16:10:45 compute-0 sudo[234522]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:45 compute-0 sudo[234598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzwcsslxthzzibiruedlkodocfvihmsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087044.4149652-550-124698999493805/AnsiballZ_systemd.py'
Nov 25 16:10:45 compute-0 sudo[234598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:45 compute-0 ceph-mon[74985]: pgmap v651: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:45 compute-0 python3.9[234600]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 16:10:45 compute-0 systemd[1]: Reloading.
Nov 25 16:10:45 compute-0 systemd-rc-local-generator[234626]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 16:10:45 compute-0 systemd-sysv-generator[234630]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 16:10:45 compute-0 sudo[234598]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:46 compute-0 sudo[234710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbbhplqjournqctlhwqszvlyciiseico ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087044.4149652-550-124698999493805/AnsiballZ_systemd.py'
Nov 25 16:10:46 compute-0 sudo[234710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v652: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:46 compute-0 python3.9[234712]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 16:10:46 compute-0 systemd[1]: Reloading.
Nov 25 16:10:46 compute-0 systemd-sysv-generator[234744]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 16:10:46 compute-0 systemd-rc-local-generator[234741]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 16:10:46 compute-0 systemd[1]: Starting multipathd container...
Nov 25 16:10:46 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:10:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb8efd9b773fa530b238c7385ecc63712f6cf35e346eff28052eb8c2d915006/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 25 16:10:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb8efd9b773fa530b238c7385ecc63712f6cf35e346eff28052eb8c2d915006/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 25 16:10:46 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe.
Nov 25 16:10:46 compute-0 podman[234752]: 2025-11-25 16:10:46.966327644 +0000 UTC m=+0.102210528 container init 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd)
Nov 25 16:10:46 compute-0 multipathd[234768]: + sudo -E kolla_set_configs
Nov 25 16:10:46 compute-0 podman[234752]: 2025-11-25 16:10:46.989020556 +0000 UTC m=+0.124903410 container start 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd)
Nov 25 16:10:46 compute-0 podman[234752]: multipathd
Nov 25 16:10:46 compute-0 systemd[1]: Started multipathd container.
Nov 25 16:10:47 compute-0 sudo[234774]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 25 16:10:47 compute-0 sudo[234774]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 25 16:10:47 compute-0 sudo[234774]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 25 16:10:47 compute-0 sudo[234710]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:47 compute-0 multipathd[234768]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 16:10:47 compute-0 multipathd[234768]: INFO:__main__:Validating config file
Nov 25 16:10:47 compute-0 multipathd[234768]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 16:10:47 compute-0 multipathd[234768]: INFO:__main__:Writing out command to execute
Nov 25 16:10:47 compute-0 sudo[234774]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:47 compute-0 multipathd[234768]: ++ cat /run_command
Nov 25 16:10:47 compute-0 multipathd[234768]: + CMD='/usr/sbin/multipathd -d'
Nov 25 16:10:47 compute-0 multipathd[234768]: + ARGS=
Nov 25 16:10:47 compute-0 multipathd[234768]: + sudo kolla_copy_cacerts
Nov 25 16:10:47 compute-0 podman[234775]: 2025-11-25 16:10:47.064388429 +0000 UTC m=+0.059158906 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 25 16:10:47 compute-0 systemd[1]: 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe-4d17e05f0b7691ba.service: Main process exited, code=exited, status=1/FAILURE
Nov 25 16:10:47 compute-0 systemd[1]: 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe-4d17e05f0b7691ba.service: Failed with result 'exit-code'.
Nov 25 16:10:47 compute-0 sudo[234811]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 25 16:10:47 compute-0 sudo[234811]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 25 16:10:47 compute-0 sudo[234811]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 25 16:10:47 compute-0 sudo[234811]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:47 compute-0 multipathd[234768]: + [[ ! -n '' ]]
Nov 25 16:10:47 compute-0 multipathd[234768]: + . kolla_extend_start
Nov 25 16:10:47 compute-0 multipathd[234768]: Running command: '/usr/sbin/multipathd -d'
Nov 25 16:10:47 compute-0 multipathd[234768]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 25 16:10:47 compute-0 multipathd[234768]: + umask 0022
Nov 25 16:10:47 compute-0 multipathd[234768]: + exec /usr/sbin/multipathd -d
Nov 25 16:10:47 compute-0 multipathd[234768]: 3504.738920 | --------start up--------
Nov 25 16:10:47 compute-0 multipathd[234768]: 3504.738941 | read /etc/multipath.conf
Nov 25 16:10:47 compute-0 multipathd[234768]: 3504.744411 | path checkers start up
Nov 25 16:10:47 compute-0 ceph-mon[74985]: pgmap v652: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:47 compute-0 python3.9[234957]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 16:10:48 compute-0 sudo[235126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtfsugpdtcnxzqqpqtulifdiupczdtog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087047.7479088-586-226920021796626/AnsiballZ_command.py'
Nov 25 16:10:48 compute-0 podman[235083]: 2025-11-25 16:10:48.035590619 +0000 UTC m=+0.055007054 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 25 16:10:48 compute-0 sudo[235126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:48 compute-0 python3.9[235130]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 16:10:48 compute-0 sudo[235126]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v653: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:48 compute-0 ceph-mon[74985]: pgmap v653: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:48 compute-0 sudo[235293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icejyxymtaufnuocdliownwgejffkfiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087048.465574-594-92181589425436/AnsiballZ_systemd.py'
Nov 25 16:10:48 compute-0 sudo[235293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:49 compute-0 python3.9[235295]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 16:10:49 compute-0 systemd[1]: Stopping multipathd container...
Nov 25 16:10:49 compute-0 multipathd[234768]: 3506.878211 | exit (signal)
Nov 25 16:10:49 compute-0 multipathd[234768]: 3506.878267 | --------shut down-------
Nov 25 16:10:49 compute-0 systemd[1]: libpod-917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe.scope: Deactivated successfully.
Nov 25 16:10:49 compute-0 podman[235299]: 2025-11-25 16:10:49.263604136 +0000 UTC m=+0.163639925 container died 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd)
Nov 25 16:10:49 compute-0 systemd[1]: 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe-4d17e05f0b7691ba.timer: Deactivated successfully.
Nov 25 16:10:49 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe.
Nov 25 16:10:49 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe-userdata-shm.mount: Deactivated successfully.
Nov 25 16:10:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-ceb8efd9b773fa530b238c7385ecc63712f6cf35e346eff28052eb8c2d915006-merged.mount: Deactivated successfully.
Nov 25 16:10:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:10:49 compute-0 podman[235299]: 2025-11-25 16:10:49.97572893 +0000 UTC m=+0.875764719 container cleanup 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible)
Nov 25 16:10:49 compute-0 podman[235299]: multipathd
Nov 25 16:10:50 compute-0 podman[235327]: multipathd
Nov 25 16:10:50 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Nov 25 16:10:50 compute-0 systemd[1]: Stopped multipathd container.
Nov 25 16:10:50 compute-0 systemd[1]: Starting multipathd container...
Nov 25 16:10:50 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:10:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb8efd9b773fa530b238c7385ecc63712f6cf35e346eff28052eb8c2d915006/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 25 16:10:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb8efd9b773fa530b238c7385ecc63712f6cf35e346eff28052eb8c2d915006/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 25 16:10:50 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe.
Nov 25 16:10:50 compute-0 podman[235340]: 2025-11-25 16:10:50.192397922 +0000 UTC m=+0.123280075 container init 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 16:10:50 compute-0 multipathd[235356]: + sudo -E kolla_set_configs
Nov 25 16:10:50 compute-0 sudo[235362]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 25 16:10:50 compute-0 sudo[235362]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 25 16:10:50 compute-0 sudo[235362]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 25 16:10:50 compute-0 podman[235340]: 2025-11-25 16:10:50.219226346 +0000 UTC m=+0.150108499 container start 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 16:10:50 compute-0 multipathd[235356]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 16:10:50 compute-0 multipathd[235356]: INFO:__main__:Validating config file
Nov 25 16:10:50 compute-0 multipathd[235356]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 16:10:50 compute-0 multipathd[235356]: INFO:__main__:Writing out command to execute
Nov 25 16:10:50 compute-0 sudo[235362]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:50 compute-0 multipathd[235356]: ++ cat /run_command
Nov 25 16:10:50 compute-0 multipathd[235356]: + CMD='/usr/sbin/multipathd -d'
Nov 25 16:10:50 compute-0 multipathd[235356]: + ARGS=
Nov 25 16:10:50 compute-0 multipathd[235356]: + sudo kolla_copy_cacerts
Nov 25 16:10:50 compute-0 sudo[235378]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 25 16:10:50 compute-0 sudo[235378]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 25 16:10:50 compute-0 sudo[235378]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 25 16:10:50 compute-0 sudo[235378]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:50 compute-0 podman[235340]: multipathd
Nov 25 16:10:50 compute-0 multipathd[235356]: + [[ ! -n '' ]]
Nov 25 16:10:50 compute-0 multipathd[235356]: + . kolla_extend_start
Nov 25 16:10:50 compute-0 multipathd[235356]: Running command: '/usr/sbin/multipathd -d'
Nov 25 16:10:50 compute-0 multipathd[235356]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 25 16:10:50 compute-0 multipathd[235356]: + umask 0022
Nov 25 16:10:50 compute-0 multipathd[235356]: + exec /usr/sbin/multipathd -d
Nov 25 16:10:50 compute-0 systemd[1]: Started multipathd container.
Nov 25 16:10:50 compute-0 multipathd[235356]: 3507.940134 | --------start up--------
Nov 25 16:10:50 compute-0 multipathd[235356]: 3507.940152 | read /etc/multipath.conf
Nov 25 16:10:50 compute-0 multipathd[235356]: 3507.945432 | path checkers start up
Nov 25 16:10:50 compute-0 sudo[235293]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:50 compute-0 podman[235363]: 2025-11-25 16:10:50.340700321 +0000 UTC m=+0.110983603 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 16:10:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v654: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:50 compute-0 sudo[235544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lemktgvdefawlwvzxnnmsnsgqhmqhyiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087050.5064397-602-143729812641858/AnsiballZ_file.py'
Nov 25 16:10:50 compute-0 sudo[235544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:10:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:10:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:10:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:10:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:10:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:10:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:10:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:10:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:10:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:10:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:10:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:10:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 16:10:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:10:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:10:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:10:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 16:10:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:10:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 16:10:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:10:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:10:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:10:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 16:10:50 compute-0 python3.9[235546]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 16:10:50 compute-0 sudo[235544]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:51 compute-0 ceph-mon[74985]: pgmap v654: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:51 compute-0 sudo[235696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obmjhvcnvecjzjrdngxaeybejczoptrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087051.3074722-614-206100804230132/AnsiballZ_file.py'
Nov 25 16:10:51 compute-0 sudo[235696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:51 compute-0 python3.9[235698]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 25 16:10:51 compute-0 sudo[235696]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:52 compute-0 sudo[235848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcpmjudozlbahrklxlimgaelvzrygxxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087051.9363437-622-247331873736619/AnsiballZ_modprobe.py'
Nov 25 16:10:52 compute-0 sudo[235848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v655: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:52 compute-0 python3.9[235850]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Nov 25 16:10:52 compute-0 kernel: Key type psk registered
Nov 25 16:10:52 compute-0 sudo[235848]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:52 compute-0 sudo[236009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puephacwbtbttkmvdbhxlimgnajepcjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087052.6311238-630-142437976849660/AnsiballZ_stat.py'
Nov 25 16:10:52 compute-0 sudo[236009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:53 compute-0 python3.9[236011]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 16:10:53 compute-0 sudo[236009]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:53 compute-0 sudo[236132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlmbfzqrpdxtsuhpqizkfvbsmufmxzrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087052.6311238-630-142437976849660/AnsiballZ_copy.py'
Nov 25 16:10:53 compute-0 sudo[236132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:53 compute-0 ceph-mon[74985]: pgmap v655: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:53 compute-0 python3.9[236134]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764087052.6311238-630-142437976849660/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 16:10:53 compute-0 sudo[236132]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:54 compute-0 sudo[236284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cggcjwfsxoachdxpusfqypheuzqlqzlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087053.8080823-646-248449891241653/AnsiballZ_lineinfile.py'
Nov 25 16:10:54 compute-0 sudo[236284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:54 compute-0 python3.9[236286]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 16:10:54 compute-0 sudo[236284]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v656: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:54 compute-0 ceph-mon[74985]: pgmap v656: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:54 compute-0 sudo[236436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxegpbgsjmaweqmuquxuyebowqvglfsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087054.4245002-654-178626867748454/AnsiballZ_systemd.py'
Nov 25 16:10:54 compute-0 sudo[236436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:10:55 compute-0 python3.9[236438]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 16:10:55 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 25 16:10:55 compute-0 systemd[1]: Stopped Load Kernel Modules.
Nov 25 16:10:55 compute-0 systemd[1]: Stopping Load Kernel Modules...
Nov 25 16:10:55 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 25 16:10:55 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 25 16:10:55 compute-0 sudo[236436]: pam_unix(sudo:session): session closed for user root
Nov 25 16:10:55 compute-0 sudo[236592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdoxxsfoetugqyuztgwuvjvgoglvtoqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087055.3857265-662-261266905622648/AnsiballZ_dnf.py'
Nov 25 16:10:55 compute-0 sudo[236592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:10:55 compute-0 python3.9[236594]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 16:10:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v657: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:57 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Nov 25 16:10:57 compute-0 ceph-mon[74985]: pgmap v657: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v658: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:58 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 25 16:10:58 compute-0 systemd[1]: Reloading.
Nov 25 16:10:58 compute-0 ceph-mon[74985]: pgmap v658: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:10:58 compute-0 systemd-rc-local-generator[236629]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 16:10:58 compute-0 systemd-sysv-generator[236632]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 16:10:59 compute-0 systemd[1]: Reloading.
Nov 25 16:10:59 compute-0 systemd-rc-local-generator[236662]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 16:10:59 compute-0 systemd-sysv-generator[236666]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 16:10:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:11:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v659: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v660: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v661: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:05 compute-0 sudo[236672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:11:05 compute-0 sudo[236672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:11:05 compute-0 sudo[236672]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:11:05 compute-0 sudo[236698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:11:05 compute-0 sudo[236698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:11:05 compute-0 sudo[236698]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:05 compute-0 sudo[236724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:11:05 compute-0 sudo[236724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:11:05 compute-0 sudo[236724]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:05 compute-0 systemd-logind[791]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 25 16:11:05 compute-0 ceph-mon[74985]: pgmap v659: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:05 compute-0 ceph-mon[74985]: pgmap v660: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:05 compute-0 ceph-mon[74985]: pgmap v661: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:05 compute-0 systemd-logind[791]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 25 16:11:05 compute-0 sudo[236782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 16:11:05 compute-0 sudo[236782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:11:05 compute-0 lvm[236807]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 25 16:11:05 compute-0 lvm[236808]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 25 16:11:05 compute-0 lvm[236808]: VG ceph_vg2 finished
Nov 25 16:11:05 compute-0 lvm[236807]: VG ceph_vg1 finished
Nov 25 16:11:05 compute-0 lvm[236806]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 16:11:05 compute-0 lvm[236806]: VG ceph_vg0 finished
Nov 25 16:11:05 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 16:11:05 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 25 16:11:05 compute-0 systemd[1]: Reloading.
Nov 25 16:11:05 compute-0 systemd-sysv-generator[236884]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 16:11:05 compute-0 systemd-rc-local-generator[236881]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 16:11:05 compute-0 sudo[236782]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:11:06 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:11:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 16:11:06 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:11:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 16:11:06 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:11:06 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 3f817085-ecee-4285-bb52-8b093f634a35 does not exist
Nov 25 16:11:06 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev aacf5c1e-ae45-41e8-8b8b-48437e0bd015 does not exist
Nov 25 16:11:06 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev eeb1ee12-4bfd-43e6-bc12-00f74711af2c does not exist
Nov 25 16:11:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 16:11:06 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:11:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 16:11:06 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:11:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:11:06 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:11:06 compute-0 sudo[236961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:11:06 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 16:11:06 compute-0 sudo[236961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:11:06 compute-0 sudo[236961]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:06 compute-0 sudo[237060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:11:06 compute-0 sudo[237060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:11:06 compute-0 sudo[237060]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:06 compute-0 sudo[237144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:11:06 compute-0 sudo[237144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:11:06 compute-0 sudo[237144]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:06 compute-0 sudo[237236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 16:11:06 compute-0 sudo[237236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:11:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v662: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:06 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:11:06 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:11:06 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:11:06 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:11:06 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:11:06 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:11:06 compute-0 podman[237689]: 2025-11-25 16:11:06.610074103 +0000 UTC m=+0.023946477 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:11:06 compute-0 podman[237689]: 2025-11-25 16:11:06.750288244 +0000 UTC m=+0.164160598 container create a9910340b88422db2c1d0beadc3c25539ffc4d454b319a80b86e7b141b128a54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 16:11:06 compute-0 systemd[1]: Started libpod-conmon-a9910340b88422db2c1d0beadc3c25539ffc4d454b319a80b86e7b141b128a54.scope.
Nov 25 16:11:06 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:11:07 compute-0 podman[237689]: 2025-11-25 16:11:07.009128904 +0000 UTC m=+0.423001298 container init a9910340b88422db2c1d0beadc3c25539ffc4d454b319a80b86e7b141b128a54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_snyder, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 16:11:07 compute-0 podman[237689]: 2025-11-25 16:11:07.019417682 +0000 UTC m=+0.433290036 container start a9910340b88422db2c1d0beadc3c25539ffc4d454b319a80b86e7b141b128a54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_snyder, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 16:11:07 compute-0 intelligent_snyder[238080]: 167 167
Nov 25 16:11:07 compute-0 systemd[1]: libpod-a9910340b88422db2c1d0beadc3c25539ffc4d454b319a80b86e7b141b128a54.scope: Deactivated successfully.
Nov 25 16:11:07 compute-0 sudo[236592]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:07 compute-0 podman[237689]: 2025-11-25 16:11:07.086939292 +0000 UTC m=+0.500811666 container attach a9910340b88422db2c1d0beadc3c25539ffc4d454b319a80b86e7b141b128a54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_snyder, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:11:07 compute-0 podman[237689]: 2025-11-25 16:11:07.087675922 +0000 UTC m=+0.501548286 container died a9910340b88422db2c1d0beadc3c25539ffc4d454b319a80b86e7b141b128a54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_snyder, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 16:11:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-0253add0847978ea0206d19779a58f354e9fcd7cf9a3f7a450fee5ac5798a93a-merged.mount: Deactivated successfully.
Nov 25 16:11:07 compute-0 podman[237689]: 2025-11-25 16:11:07.390723875 +0000 UTC m=+0.804596229 container remove a9910340b88422db2c1d0beadc3c25539ffc4d454b319a80b86e7b141b128a54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:11:07 compute-0 systemd[1]: libpod-conmon-a9910340b88422db2c1d0beadc3c25539ffc4d454b319a80b86e7b141b128a54.scope: Deactivated successfully.
Nov 25 16:11:07 compute-0 sudo[238364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exxcnnbvcafkurnjkmvgmcgwpznjlkht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087067.247203-670-11285570896183/AnsiballZ_systemd_service.py'
Nov 25 16:11:07 compute-0 sudo[238364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:07 compute-0 ceph-mon[74985]: pgmap v662: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:07 compute-0 podman[238362]: 2025-11-25 16:11:07.519795216 +0000 UTC m=+0.021017918 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:11:07 compute-0 podman[238362]: 2025-11-25 16:11:07.627610123 +0000 UTC m=+0.128832805 container create d9fab8f023a421749bef8ea7d96e558fc2b5e74bc0728f9f8c3ad7eabcdbf9ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:11:07 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Nov 25 16:11:07 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 25 16:11:07 compute-0 systemd[1]: Started libpod-conmon-d9fab8f023a421749bef8ea7d96e558fc2b5e74bc0728f9f8c3ad7eabcdbf9ae.scope.
Nov 25 16:11:07 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:11:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c01b7b8e04939f2289be80158ce3f5b5b6ea9a8cbe5f3166fc521c1e51070b3d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:11:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c01b7b8e04939f2289be80158ce3f5b5b6ea9a8cbe5f3166fc521c1e51070b3d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:11:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c01b7b8e04939f2289be80158ce3f5b5b6ea9a8cbe5f3166fc521c1e51070b3d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:11:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c01b7b8e04939f2289be80158ce3f5b5b6ea9a8cbe5f3166fc521c1e51070b3d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:11:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c01b7b8e04939f2289be80158ce3f5b5b6ea9a8cbe5f3166fc521c1e51070b3d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 16:11:07 compute-0 python3.9[238372]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 16:11:07 compute-0 podman[238362]: 2025-11-25 16:11:07.826227789 +0000 UTC m=+0.327450471 container init d9fab8f023a421749bef8ea7d96e558fc2b5e74bc0728f9f8c3ad7eabcdbf9ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:11:07 compute-0 podman[238362]: 2025-11-25 16:11:07.835784746 +0000 UTC m=+0.337007438 container start d9fab8f023a421749bef8ea7d96e558fc2b5e74bc0728f9f8c3ad7eabcdbf9ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 16:11:07 compute-0 systemd[1]: Stopping Open-iSCSI...
Nov 25 16:11:07 compute-0 iscsid[226360]: iscsid shutting down.
Nov 25 16:11:07 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Nov 25 16:11:07 compute-0 systemd[1]: Stopped Open-iSCSI.
Nov 25 16:11:07 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 25 16:11:07 compute-0 systemd[1]: Starting Open-iSCSI...
Nov 25 16:11:07 compute-0 systemd[1]: Started Open-iSCSI.
Nov 25 16:11:07 compute-0 podman[238362]: 2025-11-25 16:11:07.8778304 +0000 UTC m=+0.379053112 container attach d9fab8f023a421749bef8ea7d96e558fc2b5e74bc0728f9f8c3ad7eabcdbf9ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wescoff, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 25 16:11:07 compute-0 sudo[238364]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v663: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:08 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 16:11:08 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 25 16:11:08 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.501s CPU time.
Nov 25 16:11:08 compute-0 systemd[1]: run-r2fae04956c864fab901613558534bf94.service: Deactivated successfully.
Nov 25 16:11:08 compute-0 python3.9[238541]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 16:11:08 compute-0 ceph-mon[74985]: pgmap v663: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:08 compute-0 reverent_wescoff[238383]: --> passed data devices: 0 physical, 3 LVM
Nov 25 16:11:08 compute-0 reverent_wescoff[238383]: --> relative data size: 1.0
Nov 25 16:11:08 compute-0 reverent_wescoff[238383]: --> All data devices are unavailable
Nov 25 16:11:08 compute-0 systemd[1]: libpod-d9fab8f023a421749bef8ea7d96e558fc2b5e74bc0728f9f8c3ad7eabcdbf9ae.scope: Deactivated successfully.
Nov 25 16:11:08 compute-0 podman[238362]: 2025-11-25 16:11:08.878564358 +0000 UTC m=+1.379787050 container died d9fab8f023a421749bef8ea7d96e558fc2b5e74bc0728f9f8c3ad7eabcdbf9ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wescoff, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 16:11:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-c01b7b8e04939f2289be80158ce3f5b5b6ea9a8cbe5f3166fc521c1e51070b3d-merged.mount: Deactivated successfully.
Nov 25 16:11:09 compute-0 podman[238362]: 2025-11-25 16:11:09.014832372 +0000 UTC m=+1.516055054 container remove d9fab8f023a421749bef8ea7d96e558fc2b5e74bc0728f9f8c3ad7eabcdbf9ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 16:11:09 compute-0 systemd[1]: libpod-conmon-d9fab8f023a421749bef8ea7d96e558fc2b5e74bc0728f9f8c3ad7eabcdbf9ae.scope: Deactivated successfully.
Nov 25 16:11:09 compute-0 sudo[237236]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:09 compute-0 sudo[238609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:11:09 compute-0 sudo[238609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:11:09 compute-0 sudo[238609]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:09 compute-0 sudo[238668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:11:09 compute-0 sudo[238668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:11:09 compute-0 sudo[238668]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:09 compute-0 sudo[238714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:11:09 compute-0 sudo[238714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:11:09 compute-0 sudo[238714]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:09 compute-0 sudo[238757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 16:11:09 compute-0 sudo[238757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:11:09 compute-0 sudo[238832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzdmgleirkyuqdcqdzcahszbejmqjwix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087069.101725-688-162152382740509/AnsiballZ_file.py'
Nov 25 16:11:09 compute-0 sudo[238832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:09 compute-0 python3.9[238834]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 16:11:09 compute-0 sudo[238832]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:09 compute-0 podman[238873]: 2025-11-25 16:11:09.600924837 +0000 UTC m=+0.039652670 container create a9a81b144a65487fdf6e2ab4d3715882b302b921cd6d86123c02b94937ea6cb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wu, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 16:11:09 compute-0 systemd[1]: Started libpod-conmon-a9a81b144a65487fdf6e2ab4d3715882b302b921cd6d86123c02b94937ea6cb4.scope.
Nov 25 16:11:09 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:11:09 compute-0 podman[238873]: 2025-11-25 16:11:09.582983414 +0000 UTC m=+0.021711267 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:11:09 compute-0 podman[238873]: 2025-11-25 16:11:09.686454964 +0000 UTC m=+0.125182807 container init a9a81b144a65487fdf6e2ab4d3715882b302b921cd6d86123c02b94937ea6cb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:11:09 compute-0 podman[238873]: 2025-11-25 16:11:09.695160849 +0000 UTC m=+0.133888662 container start a9a81b144a65487fdf6e2ab4d3715882b302b921cd6d86123c02b94937ea6cb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wu, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 16:11:09 compute-0 eager_wu[238913]: 167 167
Nov 25 16:11:09 compute-0 systemd[1]: libpod-a9a81b144a65487fdf6e2ab4d3715882b302b921cd6d86123c02b94937ea6cb4.scope: Deactivated successfully.
Nov 25 16:11:09 compute-0 conmon[238913]: conmon a9a81b144a65487fdf6e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a9a81b144a65487fdf6e2ab4d3715882b302b921cd6d86123c02b94937ea6cb4.scope/container/memory.events
Nov 25 16:11:09 compute-0 podman[238873]: 2025-11-25 16:11:09.711849799 +0000 UTC m=+0.150577612 container attach a9a81b144a65487fdf6e2ab4d3715882b302b921cd6d86123c02b94937ea6cb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wu, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:11:09 compute-0 podman[238873]: 2025-11-25 16:11:09.712606339 +0000 UTC m=+0.151334242 container died a9a81b144a65487fdf6e2ab4d3715882b302b921cd6d86123c02b94937ea6cb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 16:11:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f0663495e1a81ce5737f5bd5534eebe8fdd8b9ed089ff3d9caa04f20f7e8338-merged.mount: Deactivated successfully.
Nov 25 16:11:09 compute-0 podman[238873]: 2025-11-25 16:11:09.753730929 +0000 UTC m=+0.192458752 container remove a9a81b144a65487fdf6e2ab4d3715882b302b921cd6d86123c02b94937ea6cb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 16:11:09 compute-0 systemd[1]: libpod-conmon-a9a81b144a65487fdf6e2ab4d3715882b302b921cd6d86123c02b94937ea6cb4.scope: Deactivated successfully.
Nov 25 16:11:09 compute-0 podman[238941]: 2025-11-25 16:11:09.910539817 +0000 UTC m=+0.050731429 container create 92c4b6eda7f4427a467c5d0ef6474f076eb8ef8ed616b612137d2522900f95d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_carson, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 16:11:09 compute-0 systemd[1]: Started libpod-conmon-92c4b6eda7f4427a467c5d0ef6474f076eb8ef8ed616b612137d2522900f95d3.scope.
Nov 25 16:11:09 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:11:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bafb2a79185e7b626f99926ac55a7480219c068f3d9e303b1c26e2a734cfc057/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:11:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bafb2a79185e7b626f99926ac55a7480219c068f3d9e303b1c26e2a734cfc057/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:11:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bafb2a79185e7b626f99926ac55a7480219c068f3d9e303b1c26e2a734cfc057/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:11:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bafb2a79185e7b626f99926ac55a7480219c068f3d9e303b1c26e2a734cfc057/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:11:09 compute-0 podman[238941]: 2025-11-25 16:11:09.888359899 +0000 UTC m=+0.028551531 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:11:10 compute-0 podman[238941]: 2025-11-25 16:11:10.008713825 +0000 UTC m=+0.148905457 container init 92c4b6eda7f4427a467c5d0ef6474f076eb8ef8ed616b612137d2522900f95d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:11:10 compute-0 podman[238941]: 2025-11-25 16:11:10.01521166 +0000 UTC m=+0.155403272 container start 92c4b6eda7f4427a467c5d0ef6474f076eb8ef8ed616b612137d2522900f95d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_carson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:11:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:11:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:11:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:11:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:11:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:11:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:11:10 compute-0 podman[238941]: 2025-11-25 16:11:10.051751465 +0000 UTC m=+0.191943097 container attach 92c4b6eda7f4427a467c5d0ef6474f076eb8ef8ed616b612137d2522900f95d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 16:11:10 compute-0 sudo[239088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddauacofvddgccowuuktntbxviuthipq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087069.9170911-699-19247219397476/AnsiballZ_systemd_service.py'
Nov 25 16:11:10 compute-0 sudo[239088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:11:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v664: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:10 compute-0 python3.9[239090]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 16:11:10 compute-0 systemd[1]: Reloading.
Nov 25 16:11:10 compute-0 systemd-rc-local-generator[239120]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 16:11:10 compute-0 systemd-sysv-generator[239123]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 16:11:10 compute-0 jolly_carson[238989]: {
Nov 25 16:11:10 compute-0 jolly_carson[238989]:     "0": [
Nov 25 16:11:10 compute-0 jolly_carson[238989]:         {
Nov 25 16:11:10 compute-0 jolly_carson[238989]:             "devices": [
Nov 25 16:11:10 compute-0 jolly_carson[238989]:                 "/dev/loop3"
Nov 25 16:11:10 compute-0 jolly_carson[238989]:             ],
Nov 25 16:11:10 compute-0 jolly_carson[238989]:             "lv_name": "ceph_lv0",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:             "lv_size": "21470642176",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:             "name": "ceph_lv0",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:             "tags": {
Nov 25 16:11:10 compute-0 jolly_carson[238989]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:                 "ceph.cluster_name": "ceph",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:                 "ceph.crush_device_class": "",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:                 "ceph.encrypted": "0",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:                 "ceph.osd_id": "0",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:                 "ceph.type": "block",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:                 "ceph.vdo": "0"
Nov 25 16:11:10 compute-0 jolly_carson[238989]:             },
Nov 25 16:11:10 compute-0 jolly_carson[238989]:             "type": "block",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:             "vg_name": "ceph_vg0"
Nov 25 16:11:10 compute-0 jolly_carson[238989]:         }
Nov 25 16:11:10 compute-0 jolly_carson[238989]:     ],
Nov 25 16:11:10 compute-0 jolly_carson[238989]:     "1": [
Nov 25 16:11:10 compute-0 jolly_carson[238989]:         {
Nov 25 16:11:10 compute-0 jolly_carson[238989]:             "devices": [
Nov 25 16:11:10 compute-0 jolly_carson[238989]:                 "/dev/loop4"
Nov 25 16:11:10 compute-0 jolly_carson[238989]:             ],
Nov 25 16:11:10 compute-0 jolly_carson[238989]:             "lv_name": "ceph_lv1",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:             "lv_size": "21470642176",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:             "name": "ceph_lv1",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:             "tags": {
Nov 25 16:11:10 compute-0 jolly_carson[238989]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:                 "ceph.cluster_name": "ceph",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:                 "ceph.crush_device_class": "",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:                 "ceph.encrypted": "0",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:                 "ceph.osd_id": "1",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:                 "ceph.type": "block",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:                 "ceph.vdo": "0"
Nov 25 16:11:10 compute-0 jolly_carson[238989]:             },
Nov 25 16:11:10 compute-0 jolly_carson[238989]:             "type": "block",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:             "vg_name": "ceph_vg1"
Nov 25 16:11:10 compute-0 jolly_carson[238989]:         }
Nov 25 16:11:10 compute-0 jolly_carson[238989]:     ],
Nov 25 16:11:10 compute-0 jolly_carson[238989]:     "2": [
Nov 25 16:11:10 compute-0 jolly_carson[238989]:         {
Nov 25 16:11:10 compute-0 jolly_carson[238989]:             "devices": [
Nov 25 16:11:10 compute-0 jolly_carson[238989]:                 "/dev/loop5"
Nov 25 16:11:10 compute-0 jolly_carson[238989]:             ],
Nov 25 16:11:10 compute-0 jolly_carson[238989]:             "lv_name": "ceph_lv2",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:             "lv_size": "21470642176",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:             "name": "ceph_lv2",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:             "tags": {
Nov 25 16:11:10 compute-0 jolly_carson[238989]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:                 "ceph.cluster_name": "ceph",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:                 "ceph.crush_device_class": "",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:                 "ceph.encrypted": "0",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:                 "ceph.osd_id": "2",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:                 "ceph.type": "block",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:                 "ceph.vdo": "0"
Nov 25 16:11:10 compute-0 jolly_carson[238989]:             },
Nov 25 16:11:10 compute-0 jolly_carson[238989]:             "type": "block",
Nov 25 16:11:10 compute-0 jolly_carson[238989]:             "vg_name": "ceph_vg2"
Nov 25 16:11:10 compute-0 jolly_carson[238989]:         }
Nov 25 16:11:10 compute-0 jolly_carson[238989]:     ]
Nov 25 16:11:10 compute-0 jolly_carson[238989]: }
Nov 25 16:11:10 compute-0 podman[238941]: 2025-11-25 16:11:10.823010074 +0000 UTC m=+0.963201696 container died 92c4b6eda7f4427a467c5d0ef6474f076eb8ef8ed616b612137d2522900f95d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_carson, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 16:11:10 compute-0 systemd[1]: libpod-92c4b6eda7f4427a467c5d0ef6474f076eb8ef8ed616b612137d2522900f95d3.scope: Deactivated successfully.
Nov 25 16:11:10 compute-0 sudo[239088]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:10 compute-0 sshd-session[238918]: Connection closed by authenticating user root 171.244.51.45 port 49788 [preauth]
Nov 25 16:11:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-bafb2a79185e7b626f99926ac55a7480219c068f3d9e303b1c26e2a734cfc057-merged.mount: Deactivated successfully.
Nov 25 16:11:11 compute-0 podman[238941]: 2025-11-25 16:11:11.189590821 +0000 UTC m=+1.329782433 container remove 92c4b6eda7f4427a467c5d0ef6474f076eb8ef8ed616b612137d2522900f95d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_carson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 16:11:11 compute-0 systemd[1]: libpod-conmon-92c4b6eda7f4427a467c5d0ef6474f076eb8ef8ed616b612137d2522900f95d3.scope: Deactivated successfully.
Nov 25 16:11:11 compute-0 sudo[238757]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:11 compute-0 sudo[239278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:11:11 compute-0 sudo[239278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:11:11 compute-0 sudo[239278]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:11 compute-0 sudo[239320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:11:11 compute-0 sudo[239320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:11:11 compute-0 sudo[239320]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:11 compute-0 sudo[239345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:11:11 compute-0 sudo[239345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:11:11 compute-0 sudo[239345]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:11 compute-0 python3.9[239311]: ansible-ansible.builtin.service_facts Invoked
Nov 25 16:11:11 compute-0 ceph-mon[74985]: pgmap v664: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:11 compute-0 sudo[239370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 16:11:11 compute-0 sudo[239370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:11:11 compute-0 network[239411]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 16:11:11 compute-0 network[239412]: 'network-scripts' will be removed from distribution in near future.
Nov 25 16:11:11 compute-0 network[239413]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 16:11:11 compute-0 podman[239459]: 2025-11-25 16:11:11.748473953 +0000 UTC m=+0.041367607 container create c9a94decb1acb0a4c6735162b38353b7038ae3512f115d22807bbd28e6c37329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 16:11:11 compute-0 podman[239459]: 2025-11-25 16:11:11.727546689 +0000 UTC m=+0.020440363 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:11:12 compute-0 systemd[1]: Started libpod-conmon-c9a94decb1acb0a4c6735162b38353b7038ae3512f115d22807bbd28e6c37329.scope.
Nov 25 16:11:12 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:11:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v665: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:12 compute-0 podman[239459]: 2025-11-25 16:11:12.379728646 +0000 UTC m=+0.672622320 container init c9a94decb1acb0a4c6735162b38353b7038ae3512f115d22807bbd28e6c37329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_edison, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 16:11:12 compute-0 podman[239459]: 2025-11-25 16:11:12.38651637 +0000 UTC m=+0.679410024 container start c9a94decb1acb0a4c6735162b38353b7038ae3512f115d22807bbd28e6c37329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_edison, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 16:11:12 compute-0 podman[239459]: 2025-11-25 16:11:12.389330085 +0000 UTC m=+0.682223769 container attach c9a94decb1acb0a4c6735162b38353b7038ae3512f115d22807bbd28e6c37329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_edison, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 16:11:12 compute-0 systemd[1]: libpod-c9a94decb1acb0a4c6735162b38353b7038ae3512f115d22807bbd28e6c37329.scope: Deactivated successfully.
Nov 25 16:11:12 compute-0 compassionate_edison[239477]: 167 167
Nov 25 16:11:12 compute-0 conmon[239477]: conmon c9a94decb1acb0a4c673 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c9a94decb1acb0a4c6735162b38353b7038ae3512f115d22807bbd28e6c37329.scope/container/memory.events
Nov 25 16:11:12 compute-0 podman[239459]: 2025-11-25 16:11:12.392957583 +0000 UTC m=+0.685851237 container died c9a94decb1acb0a4c6735162b38353b7038ae3512f115d22807bbd28e6c37329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_edison, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:11:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-41c331054c6a3e6f590608a04b983851ca81139fcb04c6aced9dbe8cc43b1b7c-merged.mount: Deactivated successfully.
Nov 25 16:11:12 compute-0 podman[239459]: 2025-11-25 16:11:12.47145155 +0000 UTC m=+0.764345204 container remove c9a94decb1acb0a4c6735162b38353b7038ae3512f115d22807bbd28e6c37329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_edison, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 16:11:12 compute-0 systemd[1]: libpod-conmon-c9a94decb1acb0a4c6735162b38353b7038ae3512f115d22807bbd28e6c37329.scope: Deactivated successfully.
Nov 25 16:11:12 compute-0 podman[239515]: 2025-11-25 16:11:12.664304191 +0000 UTC m=+0.050518284 container create 2247f66f8bbe7099d99d3090d55f98cdc2f687bb23477b8f99fc11dcdc2348bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_benz, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:11:12 compute-0 systemd[1]: Started libpod-conmon-2247f66f8bbe7099d99d3090d55f98cdc2f687bb23477b8f99fc11dcdc2348bb.scope.
Nov 25 16:11:12 compute-0 podman[239515]: 2025-11-25 16:11:12.636328516 +0000 UTC m=+0.022542639 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:11:12 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:11:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f53a3446eb82a3e4a9baca21382fb83363846e63759cb7459668ec2c76d30eb4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:11:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f53a3446eb82a3e4a9baca21382fb83363846e63759cb7459668ec2c76d30eb4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:11:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f53a3446eb82a3e4a9baca21382fb83363846e63759cb7459668ec2c76d30eb4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:11:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f53a3446eb82a3e4a9baca21382fb83363846e63759cb7459668ec2c76d30eb4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:11:12 compute-0 podman[239515]: 2025-11-25 16:11:12.778004227 +0000 UTC m=+0.164218320 container init 2247f66f8bbe7099d99d3090d55f98cdc2f687bb23477b8f99fc11dcdc2348bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_benz, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Nov 25 16:11:12 compute-0 podman[239515]: 2025-11-25 16:11:12.785100428 +0000 UTC m=+0.171314521 container start 2247f66f8bbe7099d99d3090d55f98cdc2f687bb23477b8f99fc11dcdc2348bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_benz, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:11:12 compute-0 podman[239515]: 2025-11-25 16:11:12.824191042 +0000 UTC m=+0.210405135 container attach 2247f66f8bbe7099d99d3090d55f98cdc2f687bb23477b8f99fc11dcdc2348bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_benz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 16:11:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:11:13.576 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:11:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:11:13.578 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:11:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:11:13.578 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:11:13 compute-0 friendly_benz[239538]: {
Nov 25 16:11:13 compute-0 friendly_benz[239538]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 16:11:13 compute-0 friendly_benz[239538]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:11:13 compute-0 friendly_benz[239538]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 16:11:13 compute-0 friendly_benz[239538]:         "osd_id": 1,
Nov 25 16:11:13 compute-0 friendly_benz[239538]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:11:13 compute-0 friendly_benz[239538]:         "type": "bluestore"
Nov 25 16:11:13 compute-0 friendly_benz[239538]:     },
Nov 25 16:11:13 compute-0 friendly_benz[239538]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 16:11:13 compute-0 friendly_benz[239538]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:11:13 compute-0 friendly_benz[239538]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 16:11:13 compute-0 friendly_benz[239538]:         "osd_id": 2,
Nov 25 16:11:13 compute-0 friendly_benz[239538]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:11:13 compute-0 friendly_benz[239538]:         "type": "bluestore"
Nov 25 16:11:13 compute-0 friendly_benz[239538]:     },
Nov 25 16:11:13 compute-0 friendly_benz[239538]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 16:11:13 compute-0 friendly_benz[239538]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:11:13 compute-0 friendly_benz[239538]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 16:11:13 compute-0 friendly_benz[239538]:         "osd_id": 0,
Nov 25 16:11:13 compute-0 friendly_benz[239538]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:11:13 compute-0 friendly_benz[239538]:         "type": "bluestore"
Nov 25 16:11:13 compute-0 friendly_benz[239538]:     }
Nov 25 16:11:13 compute-0 friendly_benz[239538]: }
Nov 25 16:11:13 compute-0 systemd[1]: libpod-2247f66f8bbe7099d99d3090d55f98cdc2f687bb23477b8f99fc11dcdc2348bb.scope: Deactivated successfully.
Nov 25 16:11:13 compute-0 podman[239515]: 2025-11-25 16:11:13.711038289 +0000 UTC m=+1.097252382 container died 2247f66f8bbe7099d99d3090d55f98cdc2f687bb23477b8f99fc11dcdc2348bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_benz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 16:11:13 compute-0 ceph-mon[74985]: pgmap v665: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-f53a3446eb82a3e4a9baca21382fb83363846e63759cb7459668ec2c76d30eb4-merged.mount: Deactivated successfully.
Nov 25 16:11:14 compute-0 podman[239515]: 2025-11-25 16:11:14.078138098 +0000 UTC m=+1.464352191 container remove 2247f66f8bbe7099d99d3090d55f98cdc2f687bb23477b8f99fc11dcdc2348bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 16:11:14 compute-0 systemd[1]: libpod-conmon-2247f66f8bbe7099d99d3090d55f98cdc2f687bb23477b8f99fc11dcdc2348bb.scope: Deactivated successfully.
Nov 25 16:11:14 compute-0 sudo[239370]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:11:14 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:11:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:11:14 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:11:14 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 9cdfae03-cca0-4ae2-87bc-b6499a8a0ec5 does not exist
Nov 25 16:11:14 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 9b832b82-ff17-4a48-a3a6-ada0773ce301 does not exist
Nov 25 16:11:14 compute-0 sudo[239674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:11:14 compute-0 sudo[239674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:11:14 compute-0 sudo[239674]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:14 compute-0 sudo[239703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 16:11:14 compute-0 sudo[239703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:11:14 compute-0 sudo[239703]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v666: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:14 compute-0 sudo[239890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukozfnolwdvjbhyeervkbfeheilnmhgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087074.6605296-718-220823076608141/AnsiballZ_systemd_service.py'
Nov 25 16:11:14 compute-0 sudo[239890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:14 compute-0 podman[239854]: 2025-11-25 16:11:14.952473297 +0000 UTC m=+0.080263286 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:11:15 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:11:15 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:11:15 compute-0 ceph-mon[74985]: pgmap v666: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:15 compute-0 python3.9[239901]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 16:11:15 compute-0 sudo[239890]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:11:15 compute-0 sudo[240061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eoevdtifqtptxyyjdtgmjxrluxervquk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087075.3898993-718-27027922839390/AnsiballZ_systemd_service.py'
Nov 25 16:11:15 compute-0 sudo[240061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:15 compute-0 python3.9[240063]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 16:11:15 compute-0 sudo[240061]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:16 compute-0 sudo[240214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvtcbgagdtyjgjpnmkxknqslekojsawi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087076.0978336-718-21555531471502/AnsiballZ_systemd_service.py'
Nov 25 16:11:16 compute-0 sudo[240214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v667: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:16 compute-0 python3.9[240216]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 16:11:16 compute-0 sudo[240214]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:17 compute-0 sudo[240367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuebvsjkbxuoxorjnatcgsfxavmkhcue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087076.8116157-718-214043787028822/AnsiballZ_systemd_service.py'
Nov 25 16:11:17 compute-0 sudo[240367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:17 compute-0 python3.9[240369]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 16:11:17 compute-0 sudo[240367]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:17 compute-0 ceph-mon[74985]: pgmap v667: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:17 compute-0 sudo[240520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqhtjyzgbhucbyhxdfewxgzpvhgfhvpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087077.5097606-718-68867582979984/AnsiballZ_systemd_service.py'
Nov 25 16:11:17 compute-0 sudo[240520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:18 compute-0 python3.9[240522]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 16:11:18 compute-0 sudo[240520]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:18 compute-0 podman[240524]: 2025-11-25 16:11:18.142610348 +0000 UTC m=+0.068623382 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:11:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v668: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:18 compute-0 sudo[240692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykjmjeggutjbwvcuhvqiwlmovxlbrhbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087078.2038145-718-16843360027884/AnsiballZ_systemd_service.py'
Nov 25 16:11:18 compute-0 sudo[240692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:18 compute-0 python3.9[240694]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 16:11:18 compute-0 sudo[240692]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:19 compute-0 sudo[240845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvrdeaeoqmzaxrglwjabmacdjfkmrhfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087078.8836126-718-22207252948525/AnsiballZ_systemd_service.py'
Nov 25 16:11:19 compute-0 sudo[240845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:19 compute-0 python3.9[240847]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 16:11:19 compute-0 sudo[240845]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:19 compute-0 ceph-mon[74985]: pgmap v668: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:19 compute-0 sudo[240998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbpacponhagzfjasakmnfdofclrdizbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087079.5747542-718-77354549020266/AnsiballZ_systemd_service.py'
Nov 25 16:11:19 compute-0 sudo[240998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:20 compute-0 python3.9[241000]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 16:11:20 compute-0 sudo[240998]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:11:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v669: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:20 compute-0 ceph-mon[74985]: pgmap v669: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:20 compute-0 sudo[241171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmklaufrhssyuwrpxiecyugqmwnknnfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087080.4106116-777-103386922946889/AnsiballZ_file.py'
Nov 25 16:11:20 compute-0 sudo[241171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:20 compute-0 podman[241102]: 2025-11-25 16:11:20.656427508 +0000 UTC m=+0.079043573 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 16:11:20 compute-0 python3.9[241173]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 16:11:20 compute-0 sudo[241171]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:21 compute-0 sudo[241323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flqwninjdvgbrixvazilejcbryixihuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087080.9776144-777-119367716433452/AnsiballZ_file.py'
Nov 25 16:11:21 compute-0 sudo[241323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:21 compute-0 python3.9[241325]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 16:11:21 compute-0 sudo[241323]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:21 compute-0 sudo[241475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eabcjsaczqvuywkjhrdrhypxtwuqqgju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087081.6624749-777-221081601543789/AnsiballZ_file.py'
Nov 25 16:11:21 compute-0 sudo[241475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:22 compute-0 python3.9[241477]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 16:11:22 compute-0 sudo[241475]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v670: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:22 compute-0 sudo[241627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzxovnkasezxmkiudjrhgbeodrhukpav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087082.1883316-777-60798229463176/AnsiballZ_file.py'
Nov 25 16:11:22 compute-0 sudo[241627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:22 compute-0 python3.9[241629]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 16:11:22 compute-0 sudo[241627]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:22 compute-0 sudo[241779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uutolkrrzjhjphhrfpszfhzvtxfpbmvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087082.7392066-777-228333060182282/AnsiballZ_file.py'
Nov 25 16:11:22 compute-0 sudo[241779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:23 compute-0 python3.9[241781]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 16:11:23 compute-0 sudo[241779]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:23 compute-0 ceph-mon[74985]: pgmap v670: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:23 compute-0 sudo[241931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhzkiwowenmoirpntrfogirsgrmquoyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087083.3242927-777-163280383054736/AnsiballZ_file.py'
Nov 25 16:11:23 compute-0 sudo[241931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:23 compute-0 python3.9[241933]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 16:11:23 compute-0 sudo[241931]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v671: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:24 compute-0 sudo[242083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbdhszchxqqgekciibnymuzulhobvtne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087083.911124-777-258128031599774/AnsiballZ_file.py'
Nov 25 16:11:24 compute-0 sudo[242083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:24 compute-0 python3.9[242085]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 16:11:24 compute-0 sudo[242083]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:25 compute-0 sudo[242235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uimcruniiwamfqbysinmkxawhqgdrguj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087084.7346282-777-168215990594075/AnsiballZ_file.py'
Nov 25 16:11:25 compute-0 sudo[242235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:25 compute-0 python3.9[242237]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 16:11:25 compute-0 sudo[242235]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:11:25 compute-0 ceph-mon[74985]: pgmap v671: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:25 compute-0 sudo[242387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzjpwskemfwljixbltcwhrkgxvfvctym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087085.6053455-834-269217077348237/AnsiballZ_file.py'
Nov 25 16:11:25 compute-0 sudo[242387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:26 compute-0 python3.9[242389]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 16:11:26 compute-0 sudo[242387]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v672: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:26 compute-0 sudo[242539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmzmbehuicgofurqvzlsqcykqdbgcfiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087086.223006-834-105568023148718/AnsiballZ_file.py'
Nov 25 16:11:26 compute-0 sudo[242539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:26 compute-0 python3.9[242541]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 16:11:26 compute-0 sudo[242539]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:26 compute-0 ceph-mon[74985]: pgmap v672: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:27 compute-0 sudo[242691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erbymratjtjgnbrxznmialitqtezzrym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087086.8583736-834-226004352029878/AnsiballZ_file.py'
Nov 25 16:11:27 compute-0 sudo[242691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:27 compute-0 python3.9[242693]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 16:11:27 compute-0 sudo[242691]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:27 compute-0 sudo[242843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbaipssysdxzmwgiblwhydwoocopmrkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087087.4655433-834-139149250791701/AnsiballZ_file.py'
Nov 25 16:11:27 compute-0 sudo[242843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:27 compute-0 python3.9[242845]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 16:11:27 compute-0 sudo[242843]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v673: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:28 compute-0 sudo[242995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kckhncwrqyickjtlngbmhfotmvqftzkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087088.1693773-834-147822616445684/AnsiballZ_file.py'
Nov 25 16:11:28 compute-0 sudo[242995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:28 compute-0 python3.9[242997]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 16:11:28 compute-0 sudo[242995]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:29 compute-0 sudo[243147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aovvubnbaxqmreabyiurgmtxkrriolbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087088.7309892-834-101845565412508/AnsiballZ_file.py'
Nov 25 16:11:29 compute-0 sudo[243147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:29 compute-0 python3.9[243149]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 16:11:29 compute-0 sudo[243147]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:30 compute-0 sudo[243299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zknnqhqbmndtokbtsjsuluokwweztrcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087089.5853999-834-77283282101643/AnsiballZ_file.py'
Nov 25 16:11:30 compute-0 sudo[243299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:30 compute-0 ceph-mon[74985]: pgmap v673: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:30 compute-0 python3.9[243301]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 16:11:30 compute-0 sudo[243299]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:11:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v674: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:30 compute-0 sudo[243451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihanhtqivpnafpierwrxpugrbubqsire ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087090.4079702-834-100710021848670/AnsiballZ_file.py'
Nov 25 16:11:30 compute-0 sudo[243451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:30 compute-0 python3.9[243453]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 16:11:30 compute-0 sudo[243451]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:31 compute-0 ceph-mon[74985]: pgmap v674: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:31 compute-0 sudo[243603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzlsoymccgywejmovebnstnwuyypyvyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087091.201056-892-60714557706395/AnsiballZ_command.py'
Nov 25 16:11:31 compute-0 sudo[243603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:31 compute-0 python3.9[243605]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 16:11:31 compute-0 sudo[243603]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v675: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:32 compute-0 python3.9[243757]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 16:11:33 compute-0 sudo[243907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czxldakeucqlbmndoqkhipvvdddqkmso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087092.8054512-910-123801923149293/AnsiballZ_systemd_service.py'
Nov 25 16:11:33 compute-0 sudo[243907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:33 compute-0 python3.9[243909]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 16:11:33 compute-0 systemd[1]: Reloading.
Nov 25 16:11:33 compute-0 ceph-mon[74985]: pgmap v675: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:33 compute-0 systemd-rc-local-generator[243934]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 16:11:33 compute-0 systemd-sysv-generator[243938]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 16:11:33 compute-0 sudo[243907]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:34 compute-0 sudo[244094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuzvsnbewbwkdbozlxeeuavegfhupbsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087093.8800447-918-20532726089319/AnsiballZ_command.py'
Nov 25 16:11:34 compute-0 sudo[244094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:34 compute-0 python3.9[244096]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 16:11:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v676: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:34 compute-0 sudo[244094]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:34 compute-0 sudo[244247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvzukkrvtklfgxinbfohfojpgxvyhvvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087094.518105-918-252451470479199/AnsiballZ_command.py'
Nov 25 16:11:34 compute-0 sudo[244247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:34 compute-0 python3.9[244249]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 16:11:35 compute-0 sudo[244247]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:11:35 compute-0 sudo[244400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yisnclpukxtseucjmtzffvpsqcrpgnde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087095.1404085-918-222322936426127/AnsiballZ_command.py'
Nov 25 16:11:35 compute-0 sudo[244400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:35 compute-0 ceph-mon[74985]: pgmap v676: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:35 compute-0 python3.9[244402]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 16:11:35 compute-0 sudo[244400]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:36 compute-0 sudo[244553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxytbkxxyrlqjigblualbngqafsdjdhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087095.8160925-918-69844862913128/AnsiballZ_command.py'
Nov 25 16:11:36 compute-0 sudo[244553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:36 compute-0 python3.9[244555]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 16:11:36 compute-0 sudo[244553]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v677: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:36 compute-0 sudo[244706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phpuynrkvslwpzgdcpiarngfudshiqir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087096.4252257-918-74013833333866/AnsiballZ_command.py'
Nov 25 16:11:36 compute-0 sudo[244706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:36 compute-0 ceph-mon[74985]: pgmap v677: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:36 compute-0 python3.9[244708]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 16:11:36 compute-0 sudo[244706]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:37 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 16:11:37 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 3173 writes, 14K keys, 3173 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 3173 writes, 3173 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1273 writes, 5543 keys, 1273 commit groups, 1.0 writes per commit group, ingest: 8.48 MB, 0.01 MB/s
                                           Interval WAL: 1274 writes, 1274 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     14.4      0.99              0.05         6    0.165       0      0       0.0       0.0
                                             L6      1/0    7.03 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.4     99.3     82.2      0.42              0.09         5    0.085     20K   2214       0.0       0.0
                                            Sum      1/0    7.03 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.4     29.8     34.8      1.41              0.14        11    0.128     20K   2214       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.5     23.3     23.9      1.15              0.08         6    0.191     12K   1468       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     99.3     82.2      0.42              0.09         5    0.085     20K   2214       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     15.1      0.94              0.05         5    0.188       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.1      0.05              0.00         1    0.047       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.014, interval 0.006
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.05 GB write, 0.04 MB/s write, 0.04 GB read, 0.04 MB/s read, 1.4 seconds
                                           Interval compaction: 0.03 GB write, 0.05 MB/s write, 0.03 GB read, 0.04 MB/s read, 1.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563cc83831f0#2 capacity: 308.00 MB usage: 1.39 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(88,1.21 MB,0.391789%) FilterBlock(12,63.23 KB,0.0200495%) IndexBlock(12,126.14 KB,0.0399949%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 25 16:11:37 compute-0 sudo[244859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prejkgsqclemgpppajhuswakrkahgsbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087097.0535698-918-220417175718874/AnsiballZ_command.py'
Nov 25 16:11:37 compute-0 sudo[244859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:37 compute-0 python3.9[244861]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 16:11:37 compute-0 sudo[244859]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:38 compute-0 sudo[245012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnwfufrmuancrxggwhlxxplihebnyjro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087097.8492796-918-170099770858962/AnsiballZ_command.py'
Nov 25 16:11:38 compute-0 sudo[245012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:38 compute-0 python3.9[245014]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 16:11:38 compute-0 sudo[245012]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v678: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:38 compute-0 sudo[245165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwvfydljwkktedtgohiyvtdlazduwplm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087098.4057145-918-122412535036617/AnsiballZ_command.py'
Nov 25 16:11:38 compute-0 sudo[245165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:38 compute-0 python3.9[245167]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 16:11:38 compute-0 sudo[245165]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:39 compute-0 ceph-mon[74985]: pgmap v678: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:39 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:11:39
Nov 25 16:11:39 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:11:39 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:11:39 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'backups', '.mgr', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', 'images']
Nov 25 16:11:39 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:11:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:11:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:11:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:11:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:11:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:11:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:11:40 compute-0 sudo[245318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zaaprxykbsvcxzcalyhbtnjtqpydnxcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087099.7944527-997-73250601459442/AnsiballZ_file.py'
Nov 25 16:11:40 compute-0 sudo[245318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:11:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:11:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:11:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:11:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:11:40 compute-0 python3.9[245320]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 16:11:40 compute-0 sudo[245318]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:11:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:11:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:11:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:11:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:11:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:11:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v679: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:40 compute-0 ceph-mon[74985]: pgmap v679: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:40 compute-0 sudo[245470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhumileqeyflzmvdmtlctzhzhfutxxdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087100.4095263-997-278335668052831/AnsiballZ_file.py'
Nov 25 16:11:40 compute-0 sudo[245470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:41 compute-0 python3.9[245472]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 16:11:41 compute-0 sudo[245470]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:41 compute-0 sudo[245622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afvbloszrcmkxjaecfycihmiggzunggk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087101.373266-997-84288528989846/AnsiballZ_file.py'
Nov 25 16:11:41 compute-0 sudo[245622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:41 compute-0 python3.9[245624]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 16:11:41 compute-0 sudo[245622]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:42 compute-0 sudo[245774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmxigjmjhkdmcfaifgccmmxmhcwmvlfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087102.0061913-1019-602143145434/AnsiballZ_file.py'
Nov 25 16:11:42 compute-0 sudo[245774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v680: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:42 compute-0 python3.9[245776]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 16:11:42 compute-0 sudo[245774]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:43 compute-0 sudo[245926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzjgbofrihoupdiyzuwljpmkmhjcddtt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087102.6422462-1019-42194240393553/AnsiballZ_file.py'
Nov 25 16:11:43 compute-0 sudo[245926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:43 compute-0 python3.9[245928]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 16:11:43 compute-0 sudo[245926]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:43 compute-0 ceph-mon[74985]: pgmap v680: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:43 compute-0 sudo[246078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dunumaoxvthfavjkcnhwvujdiwzeynae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087103.373233-1019-260393635645141/AnsiballZ_file.py'
Nov 25 16:11:43 compute-0 sudo[246078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:44 compute-0 python3.9[246080]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 16:11:44 compute-0 sudo[246078]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v681: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:44 compute-0 sudo[246230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odaaukqdafakdqfochrrenxmtagwdiyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087104.2031538-1019-138419368785433/AnsiballZ_file.py'
Nov 25 16:11:44 compute-0 sudo[246230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:44 compute-0 ceph-mon[74985]: pgmap v681: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:44 compute-0 python3.9[246232]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 16:11:44 compute-0 sudo[246230]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:45 compute-0 sudo[246395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ushosoarrkytkojttwhnxegjbetpyfzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087104.957616-1019-179181469933575/AnsiballZ_file.py'
Nov 25 16:11:45 compute-0 sudo[246395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:45 compute-0 podman[246356]: 2025-11-25 16:11:45.319579718 +0000 UTC m=+0.090710668 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:11:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:11:45 compute-0 python3.9[246403]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 16:11:45 compute-0 sudo[246395]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:46 compute-0 sudo[246560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmcziaemwrhmvtiywslvhmomoqchtxad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087105.6258276-1019-54697316817240/AnsiballZ_file.py'
Nov 25 16:11:46 compute-0 sudo[246560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:46 compute-0 python3.9[246562]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 16:11:46 compute-0 sudo[246560]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v682: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:46 compute-0 sudo[246712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-labwoqiyqdvqlrbpudgtndhwotcetijv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087106.4552243-1019-171588316287442/AnsiballZ_file.py'
Nov 25 16:11:46 compute-0 sudo[246712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:46 compute-0 python3.9[246714]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 16:11:46 compute-0 sudo[246712]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:47 compute-0 ceph-mon[74985]: pgmap v682: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v683: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:48 compute-0 podman[246739]: 2025-11-25 16:11:48.638092389 +0000 UTC m=+0.050158593 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 16:11:48 compute-0 ceph-mon[74985]: pgmap v683: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:11:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v684: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:11:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:11:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:11:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:11:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:11:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:11:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:11:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:11:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:11:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:11:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:11:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:11:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 16:11:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:11:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:11:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:11:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 16:11:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:11:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 16:11:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:11:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:11:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:11:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 16:11:51 compute-0 ceph-mon[74985]: pgmap v684: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:51 compute-0 podman[246758]: 2025-11-25 16:11:51.692861338 +0000 UTC m=+0.100396868 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Nov 25 16:11:52 compute-0 sshd-session[246778]: banner exchange: Connection from 165.22.126.29 port 53540: invalid format
Nov 25 16:11:52 compute-0 sshd-session[246779]: banner exchange: Connection from 165.22.126.29 port 53552: invalid format
Nov 25 16:11:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v685: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:52 compute-0 sudo[246905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecezzfnminoaytyuedjqngfjcwuephcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087112.494116-1208-80566943531411/AnsiballZ_getent.py'
Nov 25 16:11:52 compute-0 sudo[246905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:53 compute-0 python3.9[246907]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Nov 25 16:11:53 compute-0 sudo[246905]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:53 compute-0 ceph-mon[74985]: pgmap v685: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:53 compute-0 sudo[247058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltqxvpwosdzzpzzjzxeyfkqmvhdduaoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087113.3138583-1216-220263636056995/AnsiballZ_group.py'
Nov 25 16:11:53 compute-0 sudo[247058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:53 compute-0 python3.9[247060]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 16:11:53 compute-0 groupadd[247061]: group added to /etc/group: name=nova, GID=42436
Nov 25 16:11:53 compute-0 groupadd[247061]: group added to /etc/gshadow: name=nova
Nov 25 16:11:53 compute-0 groupadd[247061]: new group: name=nova, GID=42436
Nov 25 16:11:53 compute-0 sudo[247058]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v686: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:54 compute-0 sudo[247216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpbfgxvtjwsgisnuhkdymyzjdawckuiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087114.1093047-1224-194217694546168/AnsiballZ_user.py'
Nov 25 16:11:54 compute-0 sudo[247216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:11:54 compute-0 python3.9[247218]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 25 16:11:54 compute-0 useradd[247220]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Nov 25 16:11:54 compute-0 useradd[247220]: add 'nova' to group 'libvirt'
Nov 25 16:11:54 compute-0 useradd[247220]: add 'nova' to shadow group 'libvirt'
Nov 25 16:11:54 compute-0 sudo[247216]: pam_unix(sudo:session): session closed for user root
Nov 25 16:11:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:11:55 compute-0 ceph-mon[74985]: pgmap v686: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:55 compute-0 sshd-session[247251]: Accepted publickey for zuul from 192.168.122.30 port 36192 ssh2: ECDSA SHA256:9KqzpXmppnMwGwVHF2wOKwwhXNcutlJnRXXU19Lreu4
Nov 25 16:11:55 compute-0 systemd-logind[791]: New session 51 of user zuul.
Nov 25 16:11:55 compute-0 systemd[1]: Started Session 51 of User zuul.
Nov 25 16:11:55 compute-0 sshd-session[247251]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 16:11:56 compute-0 sshd-session[247254]: Received disconnect from 192.168.122.30 port 36192:11: disconnected by user
Nov 25 16:11:56 compute-0 sshd-session[247254]: Disconnected from user zuul 192.168.122.30 port 36192
Nov 25 16:11:56 compute-0 sshd-session[247251]: pam_unix(sshd:session): session closed for user zuul
Nov 25 16:11:56 compute-0 systemd[1]: session-51.scope: Deactivated successfully.
Nov 25 16:11:56 compute-0 systemd-logind[791]: Session 51 logged out. Waiting for processes to exit.
Nov 25 16:11:56 compute-0 systemd-logind[791]: Removed session 51.
Nov 25 16:11:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v687: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:56 compute-0 python3.9[247404]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 16:11:57 compute-0 python3.9[247525]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764087116.1825483-1249-150606599717485/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 16:11:57 compute-0 ceph-mon[74985]: pgmap v687: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:57 compute-0 python3.9[247675]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 16:11:58 compute-0 python3.9[247751]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 16:11:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v688: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:58 compute-0 python3.9[247901]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 16:11:59 compute-0 python3.9[248022]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764087118.377047-1249-89315532051845/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 16:11:59 compute-0 ceph-mon[74985]: pgmap v688: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:11:59 compute-0 python3.9[248172]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 16:12:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:12:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v689: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:00 compute-0 python3.9[248293]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764087119.5167482-1249-233657873537978/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 16:12:01 compute-0 python3.9[248443]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 16:12:01 compute-0 ceph-mon[74985]: pgmap v689: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:01 compute-0 python3.9[248564]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764087120.6722069-1249-104652293965098/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 16:12:02 compute-0 python3.9[248714]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 16:12:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v690: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:02 compute-0 python3.9[248835]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764087121.8277698-1249-43994199884388/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 16:12:03 compute-0 ceph-mon[74985]: pgmap v690: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:03 compute-0 sudo[248985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slkhmlsvbkxaenlnvyjigaraqrsydxmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087123.2033434-1332-157153773500484/AnsiballZ_file.py'
Nov 25 16:12:03 compute-0 sudo[248985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:12:03 compute-0 python3.9[248987]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 16:12:03 compute-0 sudo[248985]: pam_unix(sudo:session): session closed for user root
Nov 25 16:12:04 compute-0 sudo[249137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpmizkamzidukhjcltfyydyqicjmuuig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087123.906296-1340-188024752130334/AnsiballZ_copy.py'
Nov 25 16:12:04 compute-0 sudo[249137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:12:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v691: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:04 compute-0 python3.9[249139]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 16:12:04 compute-0 sudo[249137]: pam_unix(sudo:session): session closed for user root
Nov 25 16:12:04 compute-0 sudo[249289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brdvyeebubirdymywmdgmwwnnkwxuqxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087124.6676347-1348-25257821347169/AnsiballZ_stat.py'
Nov 25 16:12:04 compute-0 sudo[249289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:12:05 compute-0 python3.9[249291]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 16:12:05 compute-0 sudo[249289]: pam_unix(sudo:session): session closed for user root
Nov 25 16:12:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:12:05 compute-0 ceph-mon[74985]: pgmap v691: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:05 compute-0 sudo[249441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmebhpcrswkoazmpxjdxtetpwgswfbjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087125.4237113-1356-157217534128131/AnsiballZ_stat.py'
Nov 25 16:12:05 compute-0 sudo[249441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:12:05 compute-0 python3.9[249443]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 16:12:05 compute-0 sudo[249441]: pam_unix(sudo:session): session closed for user root
Nov 25 16:12:06 compute-0 sudo[249564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gczkvdiqqnvbdprlxxhnxiwxnwqkshbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087125.4237113-1356-157217534128131/AnsiballZ_copy.py'
Nov 25 16:12:06 compute-0 sudo[249564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:12:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v692: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:06 compute-0 python3.9[249566]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1764087125.4237113-1356-157217534128131/.source _original_basename=.6h8mq2xx follow=False checksum=01c0e30e0516984f93c650504b23ca47535007dd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Nov 25 16:12:06 compute-0 sudo[249564]: pam_unix(sudo:session): session closed for user root
Nov 25 16:12:07 compute-0 python3.9[249718]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 16:12:07 compute-0 ceph-mon[74985]: pgmap v692: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:07 compute-0 python3.9[249870]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 16:12:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v693: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:08 compute-0 python3.9[249991]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764087127.5072236-1382-42388292095463/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=4c77b2c041a7564aa2c84115117dc8517e9bb9ef backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 16:12:09 compute-0 python3.9[250141]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 16:12:09 compute-0 ceph-mon[74985]: pgmap v693: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:09 compute-0 python3.9[250262]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764087128.6540806-1397-219940988914914/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=941d5739094d046b86479403aeaaf0441b82ba11 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 16:12:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:12:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:12:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:12:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:12:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:12:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:12:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:12:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v694: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:10 compute-0 sudo[250412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfptvxavqudqizwbpfpwevpstrgdwsan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087130.1876497-1414-228106366051279/AnsiballZ_container_config_data.py'
Nov 25 16:12:10 compute-0 sudo[250412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:12:10 compute-0 python3.9[250414]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Nov 25 16:12:10 compute-0 sudo[250412]: pam_unix(sudo:session): session closed for user root
Nov 25 16:12:11 compute-0 sudo[250564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxdifemnxcwwlaxlwpilzszgymmqwhxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087130.9403424-1423-200806947382954/AnsiballZ_container_config_hash.py'
Nov 25 16:12:11 compute-0 sudo[250564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:12:11 compute-0 python3.9[250566]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 16:12:11 compute-0 sudo[250564]: pam_unix(sudo:session): session closed for user root
Nov 25 16:12:11 compute-0 ceph-mon[74985]: pgmap v694: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:12 compute-0 sudo[250716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlyumrcvlytxilqcyqntjysmyvaorajb ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764087131.7635071-1433-58956376724472/AnsiballZ_edpm_container_manage.py'
Nov 25 16:12:12 compute-0 sudo[250716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:12:12 compute-0 python3[250718]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 16:12:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v695: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:13 compute-0 ceph-mon[74985]: pgmap v695: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:12:13.577 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:12:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:12:13.578 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:12:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:12:13.578 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:12:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v696: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:14 compute-0 sudo[250755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:12:14 compute-0 sudo[250755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:12:14 compute-0 sudo[250755]: pam_unix(sudo:session): session closed for user root
Nov 25 16:12:14 compute-0 sudo[250780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:12:14 compute-0 sudo[250780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:12:14 compute-0 sudo[250780]: pam_unix(sudo:session): session closed for user root
Nov 25 16:12:14 compute-0 sudo[250805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:12:14 compute-0 sudo[250805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:12:14 compute-0 sudo[250805]: pam_unix(sudo:session): session closed for user root
Nov 25 16:12:14 compute-0 sudo[250830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 16:12:14 compute-0 sudo[250830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:12:15 compute-0 sudo[250830]: pam_unix(sudo:session): session closed for user root
Nov 25 16:12:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:12:15 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:12:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 16:12:15 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:12:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 16:12:15 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:12:15 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 1acfdcd6-e1ac-496b-bfb7-6c88178d902b does not exist
Nov 25 16:12:15 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 5ae80b77-32d3-4614-b173-003b2c92dd2f does not exist
Nov 25 16:12:15 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev d2a91fdc-f571-4453-a228-ca06abb7b400 does not exist
Nov 25 16:12:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 16:12:15 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:12:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 16:12:15 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:12:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:12:15 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:12:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:12:15 compute-0 sudo[250901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:12:15 compute-0 sudo[250901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:12:15 compute-0 sudo[250901]: pam_unix(sudo:session): session closed for user root
Nov 25 16:12:15 compute-0 sudo[250932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:12:15 compute-0 sudo[250932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:12:15 compute-0 sudo[250932]: pam_unix(sudo:session): session closed for user root
Nov 25 16:12:15 compute-0 sudo[250958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:12:15 compute-0 sudo[250958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:12:15 compute-0 sudo[250958]: pam_unix(sudo:session): session closed for user root
Nov 25 16:12:15 compute-0 ceph-mon[74985]: pgmap v696: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:15 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:12:15 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:12:15 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:12:15 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:12:15 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:12:15 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:12:15 compute-0 sudo[250983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 16:12:15 compute-0 sudo[250983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:12:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v697: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:16 compute-0 ceph-mon[74985]: pgmap v697: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:18 compute-0 podman[250925]: 2025-11-25 16:12:18.26027166 +0000 UTC m=+2.874613042 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:12:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v698: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:18 compute-0 ceph-mon[74985]: pgmap v698: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:12:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v699: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:22 compute-0 ceph-mon[74985]: pgmap v699: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:22 compute-0 podman[251055]: 2025-11-25 16:12:22.178952345 +0000 UTC m=+2.595929535 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:12:22 compute-0 podman[251068]: 2025-11-25 16:12:22.23178805 +0000 UTC m=+0.072009952 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3)
Nov 25 16:12:22 compute-0 podman[250732]: 2025-11-25 16:12:22.254556274 +0000 UTC m=+9.848608780 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076
Nov 25 16:12:22 compute-0 podman[251123]: 2025-11-25 16:12:22.346808122 +0000 UTC m=+0.042519958 container create 8621b749e2a54d12d6326448f5051ed0c0471dbdfac5588545f2f181478abe3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lichterman, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 16:12:22 compute-0 systemd[1]: Started libpod-conmon-8621b749e2a54d12d6326448f5051ed0c0471dbdfac5588545f2f181478abe3f.scope.
Nov 25 16:12:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v700: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:22 compute-0 podman[251155]: 2025-11-25 16:12:22.414472096 +0000 UTC m=+0.055592929 container create 8df2cdd7351c7e05ad950da90d63c3feb28b0240d2ab1eaef94281878f2bd4a0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init)
Nov 25 16:12:22 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:12:22 compute-0 podman[251155]: 2025-11-25 16:12:22.386349778 +0000 UTC m=+0.027470671 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076
Nov 25 16:12:22 compute-0 python3[250718]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076 bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Nov 25 16:12:22 compute-0 podman[251123]: 2025-11-25 16:12:22.32932164 +0000 UTC m=+0.025033496 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:12:22 compute-0 podman[251123]: 2025-11-25 16:12:22.431677201 +0000 UTC m=+0.127389097 container init 8621b749e2a54d12d6326448f5051ed0c0471dbdfac5588545f2f181478abe3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:12:22 compute-0 podman[251123]: 2025-11-25 16:12:22.440233892 +0000 UTC m=+0.135945728 container start 8621b749e2a54d12d6326448f5051ed0c0471dbdfac5588545f2f181478abe3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lichterman, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:12:22 compute-0 podman[251123]: 2025-11-25 16:12:22.443821808 +0000 UTC m=+0.139533654 container attach 8621b749e2a54d12d6326448f5051ed0c0471dbdfac5588545f2f181478abe3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:12:22 compute-0 hungry_lichterman[251167]: 167 167
Nov 25 16:12:22 compute-0 systemd[1]: libpod-8621b749e2a54d12d6326448f5051ed0c0471dbdfac5588545f2f181478abe3f.scope: Deactivated successfully.
Nov 25 16:12:22 compute-0 podman[251123]: 2025-11-25 16:12:22.450432856 +0000 UTC m=+0.146144692 container died 8621b749e2a54d12d6326448f5051ed0c0471dbdfac5588545f2f181478abe3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lichterman, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:12:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-288bcc85ba6a52f4a763c9ec57ba834355384d04aeef569d67f59b8a5176b29b-merged.mount: Deactivated successfully.
Nov 25 16:12:22 compute-0 podman[251123]: 2025-11-25 16:12:22.532767467 +0000 UTC m=+0.228479303 container remove 8621b749e2a54d12d6326448f5051ed0c0471dbdfac5588545f2f181478abe3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lichterman, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 16:12:22 compute-0 systemd[1]: libpod-conmon-8621b749e2a54d12d6326448f5051ed0c0471dbdfac5588545f2f181478abe3f.scope: Deactivated successfully.
Nov 25 16:12:22 compute-0 sudo[250716]: pam_unix(sudo:session): session closed for user root
Nov 25 16:12:22 compute-0 podman[251238]: 2025-11-25 16:12:22.75980678 +0000 UTC m=+0.105334362 container create dfbdd7fb99336963a3cfb84b78223a5137ae0dfdba431306788ed7267efe6748 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_heisenberg, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:12:22 compute-0 podman[251238]: 2025-11-25 16:12:22.674950291 +0000 UTC m=+0.020477903 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:12:22 compute-0 systemd[1]: Started libpod-conmon-dfbdd7fb99336963a3cfb84b78223a5137ae0dfdba431306788ed7267efe6748.scope.
Nov 25 16:12:22 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80a827b66ad80f1685463763c9ca55f6b8d66c93ea28783ec3700b138b60d6c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80a827b66ad80f1685463763c9ca55f6b8d66c93ea28783ec3700b138b60d6c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80a827b66ad80f1685463763c9ca55f6b8d66c93ea28783ec3700b138b60d6c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80a827b66ad80f1685463763c9ca55f6b8d66c93ea28783ec3700b138b60d6c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80a827b66ad80f1685463763c9ca55f6b8d66c93ea28783ec3700b138b60d6c1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 16:12:22 compute-0 podman[251238]: 2025-11-25 16:12:22.854826972 +0000 UTC m=+0.200354574 container init dfbdd7fb99336963a3cfb84b78223a5137ae0dfdba431306788ed7267efe6748 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_heisenberg, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:12:22 compute-0 podman[251238]: 2025-11-25 16:12:22.863051304 +0000 UTC m=+0.208578896 container start dfbdd7fb99336963a3cfb84b78223a5137ae0dfdba431306788ed7267efe6748 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_heisenberg, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 16:12:22 compute-0 podman[251238]: 2025-11-25 16:12:22.866535738 +0000 UTC m=+0.212063310 container attach dfbdd7fb99336963a3cfb84b78223a5137ae0dfdba431306788ed7267efe6748 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_heisenberg, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 16:12:23 compute-0 sudo[251391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyjluqxqbmmoctizlaqruzcqxmktlshp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087142.7732763-1441-97168249769435/AnsiballZ_stat.py'
Nov 25 16:12:23 compute-0 sudo[251391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:12:23 compute-0 ceph-mon[74985]: pgmap v700: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:23 compute-0 python3.9[251393]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 16:12:23 compute-0 sudo[251391]: pam_unix(sudo:session): session closed for user root
Nov 25 16:12:23 compute-0 angry_heisenberg[251289]: --> passed data devices: 0 physical, 3 LVM
Nov 25 16:12:23 compute-0 angry_heisenberg[251289]: --> relative data size: 1.0
Nov 25 16:12:23 compute-0 angry_heisenberg[251289]: --> All data devices are unavailable
Nov 25 16:12:23 compute-0 systemd[1]: libpod-dfbdd7fb99336963a3cfb84b78223a5137ae0dfdba431306788ed7267efe6748.scope: Deactivated successfully.
Nov 25 16:12:23 compute-0 podman[251238]: 2025-11-25 16:12:23.958112804 +0000 UTC m=+1.303640386 container died dfbdd7fb99336963a3cfb84b78223a5137ae0dfdba431306788ed7267efe6748 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_heisenberg, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True)
Nov 25 16:12:23 compute-0 systemd[1]: libpod-dfbdd7fb99336963a3cfb84b78223a5137ae0dfdba431306788ed7267efe6748.scope: Consumed 1.020s CPU time.
Nov 25 16:12:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-80a827b66ad80f1685463763c9ca55f6b8d66c93ea28783ec3700b138b60d6c1-merged.mount: Deactivated successfully.
Nov 25 16:12:24 compute-0 podman[251238]: 2025-11-25 16:12:24.011257477 +0000 UTC m=+1.356785059 container remove dfbdd7fb99336963a3cfb84b78223a5137ae0dfdba431306788ed7267efe6748 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:12:24 compute-0 systemd[1]: libpod-conmon-dfbdd7fb99336963a3cfb84b78223a5137ae0dfdba431306788ed7267efe6748.scope: Deactivated successfully.
Nov 25 16:12:24 compute-0 sudo[250983]: pam_unix(sudo:session): session closed for user root
Nov 25 16:12:24 compute-0 sudo[251581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oypkcvrjybxbfvjlburrgksjywrnvvxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087143.7803576-1453-113859602614882/AnsiballZ_container_config_data.py'
Nov 25 16:12:24 compute-0 sudo[251581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:12:24 compute-0 sudo[251582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:12:24 compute-0 sudo[251582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:12:24 compute-0 sudo[251582]: pam_unix(sudo:session): session closed for user root
Nov 25 16:12:24 compute-0 sudo[251609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:12:24 compute-0 sudo[251609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:12:24 compute-0 sudo[251609]: pam_unix(sudo:session): session closed for user root
Nov 25 16:12:24 compute-0 sudo[251634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:12:24 compute-0 sudo[251634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:12:24 compute-0 sudo[251634]: pam_unix(sudo:session): session closed for user root
Nov 25 16:12:24 compute-0 sudo[251659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 16:12:24 compute-0 sudo[251659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:12:24 compute-0 python3.9[251591]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Nov 25 16:12:24 compute-0 sudo[251581]: pam_unix(sudo:session): session closed for user root
Nov 25 16:12:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v701: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:24 compute-0 podman[251770]: 2025-11-25 16:12:24.562729269 +0000 UTC m=+0.041749967 container create 349ea515a29b5a2ba9f0b5210cf96c0531a3d5e127427b19a6a8bca2477fb4e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 16:12:24 compute-0 systemd[1]: Started libpod-conmon-349ea515a29b5a2ba9f0b5210cf96c0531a3d5e127427b19a6a8bca2477fb4e5.scope.
Nov 25 16:12:24 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:12:24 compute-0 podman[251770]: 2025-11-25 16:12:24.542495514 +0000 UTC m=+0.021516242 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:12:24 compute-0 podman[251770]: 2025-11-25 16:12:24.641791752 +0000 UTC m=+0.120812480 container init 349ea515a29b5a2ba9f0b5210cf96c0531a3d5e127427b19a6a8bca2477fb4e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_nobel, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:12:24 compute-0 podman[251770]: 2025-11-25 16:12:24.648443381 +0000 UTC m=+0.127464089 container start 349ea515a29b5a2ba9f0b5210cf96c0531a3d5e127427b19a6a8bca2477fb4e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_nobel, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:12:24 compute-0 podman[251770]: 2025-11-25 16:12:24.651785191 +0000 UTC m=+0.130805899 container attach 349ea515a29b5a2ba9f0b5210cf96c0531a3d5e127427b19a6a8bca2477fb4e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_nobel, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 16:12:24 compute-0 zealous_nobel[251820]: 167 167
Nov 25 16:12:24 compute-0 systemd[1]: libpod-349ea515a29b5a2ba9f0b5210cf96c0531a3d5e127427b19a6a8bca2477fb4e5.scope: Deactivated successfully.
Nov 25 16:12:24 compute-0 podman[251770]: 2025-11-25 16:12:24.654663059 +0000 UTC m=+0.133683767 container died 349ea515a29b5a2ba9f0b5210cf96c0531a3d5e127427b19a6a8bca2477fb4e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:12:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b07b6c2d9e8d982758bde88981358e3ef9716715afc03e791b7ff0dd9738182-merged.mount: Deactivated successfully.
Nov 25 16:12:24 compute-0 podman[251770]: 2025-11-25 16:12:24.685966183 +0000 UTC m=+0.164986891 container remove 349ea515a29b5a2ba9f0b5210cf96c0531a3d5e127427b19a6a8bca2477fb4e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_nobel, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 16:12:24 compute-0 systemd[1]: libpod-conmon-349ea515a29b5a2ba9f0b5210cf96c0531a3d5e127427b19a6a8bca2477fb4e5.scope: Deactivated successfully.
Nov 25 16:12:24 compute-0 sudo[251907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvoshwdfhapynixkeufdkjwgwomgkkvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087144.5084066-1462-197029700857978/AnsiballZ_container_config_hash.py'
Nov 25 16:12:24 compute-0 sudo[251907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:12:24 compute-0 podman[251915]: 2025-11-25 16:12:24.844368995 +0000 UTC m=+0.041809649 container create 31c34bd315703e23384388ddb780e031fdb43b25a4a192c8d6982a187fe5ba2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:12:24 compute-0 systemd[1]: Started libpod-conmon-31c34bd315703e23384388ddb780e031fdb43b25a4a192c8d6982a187fe5ba2b.scope.
Nov 25 16:12:24 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:12:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/035be13ec855940a7248334cdb599c0f885c1a828ea8d3ccdcc3843f3f79713f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:12:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/035be13ec855940a7248334cdb599c0f885c1a828ea8d3ccdcc3843f3f79713f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:12:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/035be13ec855940a7248334cdb599c0f885c1a828ea8d3ccdcc3843f3f79713f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:12:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/035be13ec855940a7248334cdb599c0f885c1a828ea8d3ccdcc3843f3f79713f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:12:24 compute-0 podman[251915]: 2025-11-25 16:12:24.82712121 +0000 UTC m=+0.024561884 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:12:24 compute-0 podman[251915]: 2025-11-25 16:12:24.924549166 +0000 UTC m=+0.121989840 container init 31c34bd315703e23384388ddb780e031fdb43b25a4a192c8d6982a187fe5ba2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:12:24 compute-0 podman[251915]: 2025-11-25 16:12:24.930746944 +0000 UTC m=+0.128187598 container start 31c34bd315703e23384388ddb780e031fdb43b25a4a192c8d6982a187fe5ba2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 16:12:24 compute-0 podman[251915]: 2025-11-25 16:12:24.933950121 +0000 UTC m=+0.131390775 container attach 31c34bd315703e23384388ddb780e031fdb43b25a4a192c8d6982a187fe5ba2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_blackwell, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:12:24 compute-0 python3.9[251910]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 16:12:24 compute-0 sudo[251907]: pam_unix(sudo:session): session closed for user root
Nov 25 16:12:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:12:25 compute-0 ceph-mon[74985]: pgmap v701: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:25 compute-0 sudo[252086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhntnpcxjtdnaiepgssdbjsrmgjojmog ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764087145.2676241-1472-179689715002300/AnsiballZ_edpm_container_manage.py'
Nov 25 16:12:25 compute-0 sudo[252086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:12:25 compute-0 bold_blackwell[251932]: {
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:     "0": [
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:         {
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:             "devices": [
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:                 "/dev/loop3"
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:             ],
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:             "lv_name": "ceph_lv0",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:             "lv_size": "21470642176",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:             "name": "ceph_lv0",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:             "tags": {
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:                 "ceph.cluster_name": "ceph",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:                 "ceph.crush_device_class": "",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:                 "ceph.encrypted": "0",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:                 "ceph.osd_id": "0",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:                 "ceph.type": "block",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:                 "ceph.vdo": "0"
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:             },
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:             "type": "block",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:             "vg_name": "ceph_vg0"
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:         }
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:     ],
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:     "1": [
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:         {
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:             "devices": [
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:                 "/dev/loop4"
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:             ],
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:             "lv_name": "ceph_lv1",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:             "lv_size": "21470642176",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:             "name": "ceph_lv1",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:             "tags": {
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:                 "ceph.cluster_name": "ceph",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:                 "ceph.crush_device_class": "",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:                 "ceph.encrypted": "0",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:                 "ceph.osd_id": "1",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:                 "ceph.type": "block",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:                 "ceph.vdo": "0"
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:             },
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:             "type": "block",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:             "vg_name": "ceph_vg1"
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:         }
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:     ],
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:     "2": [
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:         {
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:             "devices": [
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:                 "/dev/loop5"
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:             ],
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:             "lv_name": "ceph_lv2",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:             "lv_size": "21470642176",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:             "name": "ceph_lv2",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:             "tags": {
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:                 "ceph.cluster_name": "ceph",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:                 "ceph.crush_device_class": "",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:                 "ceph.encrypted": "0",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:                 "ceph.osd_id": "2",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:                 "ceph.type": "block",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:                 "ceph.vdo": "0"
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:             },
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:             "type": "block",
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:             "vg_name": "ceph_vg2"
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:         }
Nov 25 16:12:25 compute-0 bold_blackwell[251932]:     ]
Nov 25 16:12:25 compute-0 bold_blackwell[251932]: }
Nov 25 16:12:25 compute-0 systemd[1]: libpod-31c34bd315703e23384388ddb780e031fdb43b25a4a192c8d6982a187fe5ba2b.scope: Deactivated successfully.
Nov 25 16:12:25 compute-0 podman[251915]: 2025-11-25 16:12:25.819752488 +0000 UTC m=+1.017193172 container died 31c34bd315703e23384388ddb780e031fdb43b25a4a192c8d6982a187fe5ba2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_blackwell, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 16:12:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-035be13ec855940a7248334cdb599c0f885c1a828ea8d3ccdcc3843f3f79713f-merged.mount: Deactivated successfully.
Nov 25 16:12:25 compute-0 python3[252088]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 16:12:25 compute-0 podman[251915]: 2025-11-25 16:12:25.901916484 +0000 UTC m=+1.099357138 container remove 31c34bd315703e23384388ddb780e031fdb43b25a4a192c8d6982a187fe5ba2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_blackwell, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:12:25 compute-0 systemd[1]: libpod-conmon-31c34bd315703e23384388ddb780e031fdb43b25a4a192c8d6982a187fe5ba2b.scope: Deactivated successfully.
Nov 25 16:12:25 compute-0 sudo[251659]: pam_unix(sudo:session): session closed for user root
Nov 25 16:12:25 compute-0 sudo[252126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:12:26 compute-0 sudo[252126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:12:26 compute-0 sudo[252126]: pam_unix(sudo:session): session closed for user root
Nov 25 16:12:26 compute-0 podman[252163]: 2025-11-25 16:12:26.048344163 +0000 UTC m=+0.044685426 container create 38f7b30ba70212106819855e16a59af501ec966381aae6819da9d878aaf4e508 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=nova_compute, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:12:26 compute-0 podman[252163]: 2025-11-25 16:12:26.026618417 +0000 UTC m=+0.022959700 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076
Nov 25 16:12:26 compute-0 python3[252088]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076 kolla_start
Nov 25 16:12:26 compute-0 sudo[252170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:12:26 compute-0 sudo[252170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:12:26 compute-0 sudo[252170]: pam_unix(sudo:session): session closed for user root
Nov 25 16:12:26 compute-0 sudo[252208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:12:26 compute-0 sudo[252208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:12:26 compute-0 sudo[252208]: pam_unix(sudo:session): session closed for user root
Nov 25 16:12:26 compute-0 sudo[252086]: pam_unix(sudo:session): session closed for user root
Nov 25 16:12:26 compute-0 sudo[252252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 16:12:26 compute-0 sudo[252252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:12:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v702: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:26 compute-0 podman[252395]: 2025-11-25 16:12:26.463503889 +0000 UTC m=+0.042918949 container create 64992192d2928295bb06e14a175c54ec5bd417e8e393f010b4b1e2649d1b0388 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feistel, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Nov 25 16:12:26 compute-0 systemd[1]: Started libpod-conmon-64992192d2928295bb06e14a175c54ec5bd417e8e393f010b4b1e2649d1b0388.scope.
Nov 25 16:12:26 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:12:26 compute-0 podman[252395]: 2025-11-25 16:12:26.535325505 +0000 UTC m=+0.114740575 container init 64992192d2928295bb06e14a175c54ec5bd417e8e393f010b4b1e2649d1b0388 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feistel, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 16:12:26 compute-0 podman[252395]: 2025-11-25 16:12:26.445820532 +0000 UTC m=+0.025235612 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:12:26 compute-0 podman[252395]: 2025-11-25 16:12:26.541682357 +0000 UTC m=+0.121097417 container start 64992192d2928295bb06e14a175c54ec5bd417e8e393f010b4b1e2649d1b0388 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 16:12:26 compute-0 elastic_feistel[252455]: 167 167
Nov 25 16:12:26 compute-0 systemd[1]: libpod-64992192d2928295bb06e14a175c54ec5bd417e8e393f010b4b1e2649d1b0388.scope: Deactivated successfully.
Nov 25 16:12:26 compute-0 sudo[252495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcqiopgcnwidysabkxdooxkarbyrhmah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087146.3224127-1480-11530487809910/AnsiballZ_stat.py'
Nov 25 16:12:26 compute-0 sudo[252495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:12:26 compute-0 podman[252395]: 2025-11-25 16:12:26.585626732 +0000 UTC m=+0.165041822 container attach 64992192d2928295bb06e14a175c54ec5bd417e8e393f010b4b1e2649d1b0388 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:12:26 compute-0 podman[252395]: 2025-11-25 16:12:26.586440484 +0000 UTC m=+0.165855544 container died 64992192d2928295bb06e14a175c54ec5bd417e8e393f010b4b1e2649d1b0388 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feistel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:12:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-e607a99af8139fda1589e500f910cfe39671e112179191b52fad7a41f88212c9-merged.mount: Deactivated successfully.
Nov 25 16:12:26 compute-0 podman[252395]: 2025-11-25 16:12:26.725252577 +0000 UTC m=+0.304667637 container remove 64992192d2928295bb06e14a175c54ec5bd417e8e393f010b4b1e2649d1b0388 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feistel, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:12:26 compute-0 systemd[1]: libpod-conmon-64992192d2928295bb06e14a175c54ec5bd417e8e393f010b4b1e2649d1b0388.scope: Deactivated successfully.
Nov 25 16:12:26 compute-0 python3.9[252502]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 16:12:26 compute-0 sudo[252495]: pam_unix(sudo:session): session closed for user root
Nov 25 16:12:26 compute-0 podman[252536]: 2025-11-25 16:12:26.884262325 +0000 UTC m=+0.039015213 container create 889a1caf64b9df49150d9aef63919b8f70ba416d491adf8947f0a8c298fbb7c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_shamir, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:12:26 compute-0 systemd[1]: Started libpod-conmon-889a1caf64b9df49150d9aef63919b8f70ba416d491adf8947f0a8c298fbb7c7.scope.
Nov 25 16:12:26 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:12:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/807611bc39022569e8e4418628a2048314e1404b1fe8c3e100e824b5ca92eebc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:12:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/807611bc39022569e8e4418628a2048314e1404b1fe8c3e100e824b5ca92eebc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:12:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/807611bc39022569e8e4418628a2048314e1404b1fe8c3e100e824b5ca92eebc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:12:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/807611bc39022569e8e4418628a2048314e1404b1fe8c3e100e824b5ca92eebc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:12:26 compute-0 podman[252536]: 2025-11-25 16:12:26.961248702 +0000 UTC m=+0.116001610 container init 889a1caf64b9df49150d9aef63919b8f70ba416d491adf8947f0a8c298fbb7c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 16:12:26 compute-0 podman[252536]: 2025-11-25 16:12:26.868770498 +0000 UTC m=+0.023523406 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:12:26 compute-0 podman[252536]: 2025-11-25 16:12:26.969418711 +0000 UTC m=+0.124171599 container start 889a1caf64b9df49150d9aef63919b8f70ba416d491adf8947f0a8c298fbb7c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:12:27 compute-0 podman[252536]: 2025-11-25 16:12:27.00312003 +0000 UTC m=+0.157872918 container attach 889a1caf64b9df49150d9aef63919b8f70ba416d491adf8947f0a8c298fbb7c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_shamir, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:12:27 compute-0 sudo[252683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uheoajonvcgjlkrukgsmcgpzlhbcflqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087147.0006323-1489-167763001008387/AnsiballZ_file.py'
Nov 25 16:12:27 compute-0 sudo[252683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:12:27 compute-0 ceph-mon[74985]: pgmap v702: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:27 compute-0 python3.9[252685]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 16:12:27 compute-0 sudo[252683]: pam_unix(sudo:session): session closed for user root
Nov 25 16:12:28 compute-0 funny_shamir[252553]: {
Nov 25 16:12:28 compute-0 funny_shamir[252553]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 16:12:28 compute-0 funny_shamir[252553]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:12:28 compute-0 funny_shamir[252553]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 16:12:28 compute-0 funny_shamir[252553]:         "osd_id": 1,
Nov 25 16:12:28 compute-0 funny_shamir[252553]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:12:28 compute-0 funny_shamir[252553]:         "type": "bluestore"
Nov 25 16:12:28 compute-0 funny_shamir[252553]:     },
Nov 25 16:12:28 compute-0 funny_shamir[252553]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 16:12:28 compute-0 funny_shamir[252553]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:12:28 compute-0 funny_shamir[252553]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 16:12:28 compute-0 funny_shamir[252553]:         "osd_id": 2,
Nov 25 16:12:28 compute-0 funny_shamir[252553]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:12:28 compute-0 funny_shamir[252553]:         "type": "bluestore"
Nov 25 16:12:28 compute-0 funny_shamir[252553]:     },
Nov 25 16:12:28 compute-0 funny_shamir[252553]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 16:12:28 compute-0 funny_shamir[252553]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:12:28 compute-0 funny_shamir[252553]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 16:12:28 compute-0 funny_shamir[252553]:         "osd_id": 0,
Nov 25 16:12:28 compute-0 funny_shamir[252553]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:12:28 compute-0 funny_shamir[252553]:         "type": "bluestore"
Nov 25 16:12:28 compute-0 funny_shamir[252553]:     }
Nov 25 16:12:28 compute-0 funny_shamir[252553]: }
Nov 25 16:12:28 compute-0 systemd[1]: libpod-889a1caf64b9df49150d9aef63919b8f70ba416d491adf8947f0a8c298fbb7c7.scope: Deactivated successfully.
Nov 25 16:12:28 compute-0 podman[252536]: 2025-11-25 16:12:28.031076532 +0000 UTC m=+1.185829420 container died 889a1caf64b9df49150d9aef63919b8f70ba416d491adf8947f0a8c298fbb7c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_shamir, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:12:28 compute-0 systemd[1]: libpod-889a1caf64b9df49150d9aef63919b8f70ba416d491adf8947f0a8c298fbb7c7.scope: Consumed 1.065s CPU time.
Nov 25 16:12:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-807611bc39022569e8e4418628a2048314e1404b1fe8c3e100e824b5ca92eebc-merged.mount: Deactivated successfully.
Nov 25 16:12:28 compute-0 podman[252536]: 2025-11-25 16:12:28.090722381 +0000 UTC m=+1.245475269 container remove 889a1caf64b9df49150d9aef63919b8f70ba416d491adf8947f0a8c298fbb7c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_shamir, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:12:28 compute-0 sudo[252876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plfqklfyvnzcauzciulgjyvjleeluiwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087147.711346-1489-154206535093666/AnsiballZ_copy.py'
Nov 25 16:12:28 compute-0 sudo[252876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:12:28 compute-0 systemd[1]: libpod-conmon-889a1caf64b9df49150d9aef63919b8f70ba416d491adf8947f0a8c298fbb7c7.scope: Deactivated successfully.
Nov 25 16:12:28 compute-0 sudo[252252]: pam_unix(sudo:session): session closed for user root
Nov 25 16:12:28 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:12:28 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:12:28 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:12:28 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:12:28 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 95e38b67-6b4e-45fc-bd71-bbec646b7ccd does not exist
Nov 25 16:12:28 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 62f5cb10-3cd6-4bb3-aa87-bfa700226505 does not exist
Nov 25 16:12:28 compute-0 sudo[252879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:12:28 compute-0 sudo[252879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:12:28 compute-0 sudo[252879]: pam_unix(sudo:session): session closed for user root
Nov 25 16:12:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v703: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:28 compute-0 sudo[252904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 16:12:28 compute-0 sudo[252904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:12:28 compute-0 sudo[252904]: pam_unix(sudo:session): session closed for user root
Nov 25 16:12:28 compute-0 python3.9[252878]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764087147.711346-1489-154206535093666/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 16:12:28 compute-0 sudo[252876]: pam_unix(sudo:session): session closed for user root
Nov 25 16:12:28 compute-0 sudo[253002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbkzubacdhuperkwnkvbyocaprniqeqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087147.711346-1489-154206535093666/AnsiballZ_systemd.py'
Nov 25 16:12:28 compute-0 sudo[253002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:12:29 compute-0 python3.9[253004]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 16:12:29 compute-0 systemd[1]: Reloading.
Nov 25 16:12:29 compute-0 systemd-sysv-generator[253033]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 16:12:29 compute-0 systemd-rc-local-generator[253028]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 16:12:29 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:12:29 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:12:29 compute-0 ceph-mon[74985]: pgmap v703: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:29 compute-0 sudo[253002]: pam_unix(sudo:session): session closed for user root
Nov 25 16:12:29 compute-0 sudo[253112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofkxgfcogimqritimnjeuxgbainzlppm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087147.711346-1489-154206535093666/AnsiballZ_systemd.py'
Nov 25 16:12:29 compute-0 sudo[253112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:12:29 compute-0 python3.9[253114]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 16:12:30 compute-0 systemd[1]: Reloading.
Nov 25 16:12:30 compute-0 systemd-rc-local-generator[253143]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 16:12:30 compute-0 systemd-sysv-generator[253146]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 16:12:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:12:30 compute-0 systemd[1]: Starting nova_compute container...
Nov 25 16:12:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v704: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:30 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f48d95bd7e7ef61ab60e0f179ecbdf79c9b613c267d2605d61c45cb8f57f0a/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 25 16:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f48d95bd7e7ef61ab60e0f179ecbdf79c9b613c267d2605d61c45cb8f57f0a/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 25 16:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f48d95bd7e7ef61ab60e0f179ecbdf79c9b613c267d2605d61c45cb8f57f0a/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 25 16:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f48d95bd7e7ef61ab60e0f179ecbdf79c9b613c267d2605d61c45cb8f57f0a/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 25 16:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f48d95bd7e7ef61ab60e0f179ecbdf79c9b613c267d2605d61c45cb8f57f0a/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 25 16:12:30 compute-0 podman[253154]: 2025-11-25 16:12:30.489629683 +0000 UTC m=+0.091509459 container init 38f7b30ba70212106819855e16a59af501ec966381aae6819da9d878aaf4e508 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:12:30 compute-0 podman[253154]: 2025-11-25 16:12:30.497874825 +0000 UTC m=+0.099754601 container start 38f7b30ba70212106819855e16a59af501ec966381aae6819da9d878aaf4e508 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, container_name=nova_compute, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Nov 25 16:12:30 compute-0 podman[253154]: nova_compute
Nov 25 16:12:30 compute-0 nova_compute[253170]: + sudo -E kolla_set_configs
Nov 25 16:12:30 compute-0 systemd[1]: Started nova_compute container.
Nov 25 16:12:30 compute-0 sudo[253112]: pam_unix(sudo:session): session closed for user root
Nov 25 16:12:30 compute-0 nova_compute[253170]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 16:12:30 compute-0 nova_compute[253170]: INFO:__main__:Validating config file
Nov 25 16:12:30 compute-0 nova_compute[253170]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 16:12:30 compute-0 nova_compute[253170]: INFO:__main__:Copying service configuration files
Nov 25 16:12:30 compute-0 nova_compute[253170]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 25 16:12:30 compute-0 nova_compute[253170]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 25 16:12:30 compute-0 nova_compute[253170]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 25 16:12:30 compute-0 nova_compute[253170]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 25 16:12:30 compute-0 nova_compute[253170]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 25 16:12:30 compute-0 nova_compute[253170]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 25 16:12:30 compute-0 nova_compute[253170]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 25 16:12:30 compute-0 nova_compute[253170]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 16:12:30 compute-0 nova_compute[253170]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 16:12:30 compute-0 nova_compute[253170]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 16:12:30 compute-0 nova_compute[253170]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 16:12:30 compute-0 nova_compute[253170]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 16:12:30 compute-0 nova_compute[253170]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 16:12:30 compute-0 nova_compute[253170]: INFO:__main__:Deleting /etc/ceph
Nov 25 16:12:30 compute-0 nova_compute[253170]: INFO:__main__:Creating directory /etc/ceph
Nov 25 16:12:30 compute-0 nova_compute[253170]: INFO:__main__:Setting permission for /etc/ceph
Nov 25 16:12:30 compute-0 nova_compute[253170]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 25 16:12:30 compute-0 nova_compute[253170]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 25 16:12:30 compute-0 nova_compute[253170]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 25 16:12:30 compute-0 nova_compute[253170]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 25 16:12:30 compute-0 nova_compute[253170]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 25 16:12:30 compute-0 nova_compute[253170]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 25 16:12:30 compute-0 nova_compute[253170]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 25 16:12:30 compute-0 nova_compute[253170]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 25 16:12:30 compute-0 nova_compute[253170]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 25 16:12:30 compute-0 nova_compute[253170]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 25 16:12:30 compute-0 nova_compute[253170]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 25 16:12:30 compute-0 nova_compute[253170]: INFO:__main__:Writing out command to execute
Nov 25 16:12:30 compute-0 nova_compute[253170]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 25 16:12:30 compute-0 nova_compute[253170]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 25 16:12:30 compute-0 nova_compute[253170]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 25 16:12:30 compute-0 nova_compute[253170]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 25 16:12:30 compute-0 nova_compute[253170]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 25 16:12:30 compute-0 nova_compute[253170]: ++ cat /run_command
Nov 25 16:12:30 compute-0 nova_compute[253170]: + CMD=nova-compute
Nov 25 16:12:30 compute-0 nova_compute[253170]: + ARGS=
Nov 25 16:12:30 compute-0 nova_compute[253170]: + sudo kolla_copy_cacerts
Nov 25 16:12:30 compute-0 nova_compute[253170]: + [[ ! -n '' ]]
Nov 25 16:12:30 compute-0 nova_compute[253170]: + . kolla_extend_start
Nov 25 16:12:30 compute-0 nova_compute[253170]: Running command: 'nova-compute'
Nov 25 16:12:30 compute-0 nova_compute[253170]: + echo 'Running command: '\''nova-compute'\'''
Nov 25 16:12:30 compute-0 nova_compute[253170]: + umask 0022
Nov 25 16:12:30 compute-0 nova_compute[253170]: + exec nova-compute
Nov 25 16:12:31 compute-0 ceph-mon[74985]: pgmap v704: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:31 compute-0 python3.9[253331]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 16:12:32 compute-0 python3.9[253482]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 16:12:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v705: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:32 compute-0 python3.9[253632]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 16:12:33 compute-0 nova_compute[253170]: 2025-11-25 16:12:33.089 253174 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 25 16:12:33 compute-0 nova_compute[253170]: 2025-11-25 16:12:33.089 253174 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 25 16:12:33 compute-0 nova_compute[253170]: 2025-11-25 16:12:33.089 253174 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 25 16:12:33 compute-0 nova_compute[253170]: 2025-11-25 16:12:33.090 253174 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Nov 25 16:12:33 compute-0 nova_compute[253170]: 2025-11-25 16:12:33.242 253174 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:12:33 compute-0 nova_compute[253170]: 2025-11-25 16:12:33.258 253174 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:12:33 compute-0 nova_compute[253170]: 2025-11-25 16:12:33.258 253174 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Nov 25 16:12:33 compute-0 ceph-mon[74985]: pgmap v705: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:33 compute-0 sudo[253786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxfcsihbwummafzqqjwjnqxzlweihree ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087153.1493416-1549-56030294505191/AnsiballZ_podman_container.py'
Nov 25 16:12:33 compute-0 sudo[253786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:12:33 compute-0 nova_compute[253170]: 2025-11-25 16:12:33.921 253174 INFO nova.virt.driver [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Nov 25 16:12:33 compute-0 python3.9[253788]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 25 16:12:34 compute-0 sudo[253786]: pam_unix(sudo:session): session closed for user root
Nov 25 16:12:34 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.073 253174 INFO nova.compute.provider_config [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.089 253174 DEBUG oslo_concurrency.lockutils [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.089 253174 DEBUG oslo_concurrency.lockutils [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.090 253174 DEBUG oslo_concurrency.lockutils [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.090 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.090 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.091 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.091 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.091 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.091 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.091 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.091 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.091 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.092 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.092 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.092 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.092 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.092 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.092 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.092 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.093 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.093 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.093 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.093 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.093 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.093 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.093 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.093 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.094 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.094 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.094 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.094 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.094 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.094 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.095 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.095 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.095 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.095 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.095 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.095 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.096 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.096 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.096 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.096 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.096 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.096 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.097 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.097 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.097 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.097 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.097 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.097 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.097 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.098 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.098 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.098 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.098 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.098 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.098 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.099 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.099 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.099 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.099 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.099 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.099 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.099 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.099 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.100 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.100 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.100 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.100 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.100 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.100 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.101 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.101 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.101 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.101 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.101 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.102 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.102 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.102 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.102 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.102 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.102 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.103 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.103 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.103 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.103 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.103 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.103 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.103 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.103 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.104 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.104 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.104 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.104 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.104 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.104 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.104 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.105 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.105 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.105 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.105 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.105 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.106 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.106 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.106 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.106 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.106 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.106 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.107 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.107 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.107 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.107 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.107 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.107 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.107 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.108 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.108 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.108 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.108 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.108 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.108 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.108 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.109 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.109 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.109 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.109 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.109 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.109 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.109 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.110 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.110 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.110 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.110 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.110 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.110 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.110 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.111 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.111 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.111 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.111 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.111 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.111 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.111 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.112 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.112 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.112 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.112 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.112 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.112 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.112 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.113 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.113 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.113 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.113 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.113 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.113 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.113 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.114 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.114 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.114 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.114 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.114 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.115 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.115 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.115 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.115 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.115 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.115 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.116 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.116 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.116 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.116 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.116 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.116 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.117 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.117 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.117 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.117 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.117 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.117 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.117 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.118 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.118 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.118 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.118 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.118 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.118 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.118 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.118 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.119 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.119 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.119 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.119 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.119 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.119 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.119 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.120 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.120 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.120 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.120 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.121 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.121 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.121 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.121 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.121 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.121 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.121 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.122 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.122 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.122 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.122 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.122 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.122 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.122 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.123 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.123 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.123 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.123 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.123 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.123 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.123 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.124 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.124 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.124 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.124 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.124 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.124 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.124 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.125 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.125 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.125 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.125 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.125 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.125 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.125 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.126 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.126 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.126 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.126 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.126 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.126 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.126 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.127 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.127 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.127 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.127 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.127 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.127 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.127 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.128 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.128 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.128 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.128 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.128 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.128 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.128 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.128 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.129 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.129 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.129 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.129 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.129 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.129 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.129 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.130 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.130 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.130 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.130 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.130 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.131 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.131 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.131 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.131 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.131 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.132 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.132 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.132 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.132 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.132 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.132 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.132 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.133 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.133 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.133 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.133 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.133 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.133 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.133 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.133 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.134 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.134 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.134 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.134 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.134 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.134 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.134 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.135 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.135 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.135 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.135 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.135 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.135 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.135 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.136 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.136 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.136 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.136 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.136 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.136 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.136 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.137 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.137 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.137 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.137 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.137 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.137 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.137 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.137 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.138 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.138 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.138 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.138 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.138 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.138 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.139 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.139 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.139 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.139 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.139 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.140 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.140 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.140 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.140 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.140 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.140 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.140 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.141 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.141 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.141 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.141 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.141 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.141 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.141 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.141 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.142 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.142 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.142 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.142 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.142 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.142 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.143 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.143 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.143 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.143 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.143 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.143 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.143 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.144 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.144 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.144 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.144 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.144 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.144 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.145 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.145 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.145 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.145 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.145 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.145 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.145 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.146 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.146 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.146 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.146 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.146 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.146 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.146 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.147 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.147 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.147 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.147 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.147 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.147 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.147 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.147 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.148 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.148 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.148 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.148 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.148 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.148 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.148 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.149 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.149 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.149 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.149 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.149 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.149 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.149 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.150 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.150 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.150 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.150 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.150 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.150 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.150 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.151 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.151 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.151 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.151 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.151 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.151 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.151 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.152 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.152 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.152 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.152 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.152 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.152 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.152 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.153 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.153 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.153 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.153 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.153 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.153 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.153 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.154 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.154 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.154 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.154 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.154 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.154 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.154 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.155 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.155 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.155 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.155 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.155 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.155 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.155 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.155 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.156 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.156 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.156 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.156 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.156 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.156 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.157 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.157 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.157 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.157 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.157 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.157 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.157 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.158 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.158 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.158 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.158 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.158 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.158 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.159 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.159 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.159 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.159 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.159 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.159 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.159 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.160 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.160 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.160 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.160 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.160 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.161 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.161 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.161 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.161 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.161 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.162 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.162 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.162 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.162 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.162 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.162 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.162 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.163 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.163 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.163 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.163 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.163 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.163 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.164 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.164 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.164 253174 WARNING oslo_config.cfg [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 25 16:12:34 compute-0 nova_compute[253170]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 25 16:12:34 compute-0 nova_compute[253170]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 25 16:12:34 compute-0 nova_compute[253170]: and ``live_migration_inbound_addr`` respectively.
Nov 25 16:12:34 compute-0 nova_compute[253170]: ).  Its value may be silently ignored in the future.
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.164 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.165 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.165 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.165 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.165 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.165 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.166 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.166 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.166 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.166 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.166 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.167 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.167 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.167 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.167 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.167 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.167 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.168 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.168 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.rbd_secret_uuid        = d82baeae-c742-50a4-b8f6-b5257c8a2c92 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.168 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.168 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.168 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.168 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.169 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.169 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.169 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.169 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.169 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.169 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.169 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.170 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.170 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.170 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.170 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.170 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.170 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.171 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.171 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.171 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.171 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.171 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.171 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.172 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.172 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.172 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.172 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.172 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.172 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.172 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.173 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.173 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.173 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.173 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.173 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.173 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.173 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.174 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.174 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.174 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.174 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.174 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.174 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.174 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.175 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.175 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.175 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.175 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.175 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.175 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.175 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.176 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.176 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.176 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.176 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.176 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.176 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.176 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.176 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.177 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.177 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.177 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.177 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.177 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.177 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.178 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.178 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.178 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.178 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.178 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.178 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.178 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.178 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.179 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.179 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.179 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.179 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.179 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.179 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.179 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.180 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.180 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.180 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.180 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.180 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.180 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.180 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.180 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.181 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.181 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.181 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.181 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.181 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.181 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.182 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.182 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.182 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.182 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.182 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.182 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.182 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.183 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.183 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.183 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.183 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.183 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.183 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.183 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.184 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.184 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.184 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.184 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.184 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.184 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.184 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.185 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.185 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.185 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.185 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.185 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.185 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.186 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.186 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.186 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.186 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.186 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.186 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.187 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.187 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.187 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.187 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.187 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.187 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.187 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.188 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.188 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.188 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.188 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.188 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.188 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.188 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.189 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.189 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.189 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.189 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.189 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.189 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.189 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.190 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.190 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.190 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.190 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.190 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.190 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.190 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.191 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.191 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.191 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.191 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.191 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.191 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.191 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.192 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.192 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.192 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.192 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.192 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.192 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.192 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.193 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.193 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.193 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.193 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.193 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.193 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.193 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.193 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.194 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.194 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.194 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.194 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.194 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.194 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.195 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.195 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.195 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.195 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.195 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.195 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.195 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.195 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.196 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.196 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.196 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.196 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.196 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.196 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.196 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.197 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.197 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.197 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.197 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.197 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.197 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.197 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.198 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.198 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.198 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.198 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.198 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.198 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.198 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.199 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.199 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.199 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.199 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.199 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.199 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.199 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.199 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.200 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.200 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.200 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.200 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.200 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.200 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.200 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.201 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.201 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.201 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.201 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.201 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.202 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.202 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.202 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.202 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.202 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.202 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.203 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.203 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.203 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.203 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.203 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.203 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.203 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.204 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.204 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.204 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.204 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.204 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.204 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.204 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.205 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.205 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.205 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.205 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.205 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.205 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.205 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.206 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.206 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.206 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.206 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.206 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.206 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.206 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.207 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.207 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.207 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.207 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.207 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.207 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.207 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.208 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.208 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.208 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.208 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.208 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.208 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.208 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.209 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.209 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.209 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.209 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.209 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.209 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.210 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.210 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.210 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.210 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.210 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.210 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.211 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.211 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.211 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.211 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.211 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.211 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.211 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.212 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.212 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.212 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.212 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.212 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.212 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.212 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.213 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.213 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.213 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.213 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.213 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.213 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.214 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.214 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.214 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.214 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.214 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.214 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.214 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.215 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.215 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.215 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.215 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.215 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.215 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.216 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.216 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.216 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.216 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.216 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.216 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.216 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.217 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.217 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.217 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.217 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.217 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.217 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.218 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.218 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.218 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.218 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.218 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.218 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.218 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.219 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.219 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.219 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.219 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.219 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.219 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.220 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.220 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.220 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.220 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.220 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.220 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.221 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.221 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.221 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.221 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.221 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.221 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.222 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.222 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.222 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.222 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.222 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.222 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.222 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.223 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.223 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.223 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.223 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.223 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.223 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.224 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.224 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.224 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.224 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.224 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.224 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.225 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.225 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.225 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.225 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.225 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.225 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.225 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.226 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.226 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.226 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.226 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.226 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.226 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.227 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.227 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.227 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.227 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.227 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.227 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.228 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.228 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.228 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.228 253174 DEBUG oslo_service.service [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.229 253174 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.242 253174 DEBUG nova.virt.libvirt.host [None req-39439ec2-045f-4e2a-a8ed-f642f8fc9cc6 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.242 253174 DEBUG nova.virt.libvirt.host [None req-39439ec2-045f-4e2a-a8ed-f642f8fc9cc6 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.242 253174 DEBUG nova.virt.libvirt.host [None req-39439ec2-045f-4e2a-a8ed-f642f8fc9cc6 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.243 253174 DEBUG nova.virt.libvirt.host [None req-39439ec2-045f-4e2a-a8ed-f642f8fc9cc6 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 25 16:12:34 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Nov 25 16:12:34 compute-0 systemd[1]: Started libvirt QEMU daemon.
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.324 253174 DEBUG nova.virt.libvirt.host [None req-39439ec2-045f-4e2a-a8ed-f642f8fc9cc6 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7ff5f3564910> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.327 253174 DEBUG nova.virt.libvirt.host [None req-39439ec2-045f-4e2a-a8ed-f642f8fc9cc6 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7ff5f3564910> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.328 253174 INFO nova.virt.libvirt.driver [None req-39439ec2-045f-4e2a-a8ed-f642f8fc9cc6 - - - - - -] Connection event '1' reason 'None'
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.342 253174 WARNING nova.virt.libvirt.driver [None req-39439ec2-045f-4e2a-a8ed-f642f8fc9cc6 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.343 253174 DEBUG nova.virt.libvirt.volume.mount [None req-39439ec2-045f-4e2a-a8ed-f642f8fc9cc6 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 25 16:12:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v706: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:34 compute-0 sudo[254013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evonrcmqdcqvgrujqmpugxktzrtarqux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087154.2616932-1557-180375281614805/AnsiballZ_systemd.py'
Nov 25 16:12:34 compute-0 sudo[254013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:12:34 compute-0 python3.9[254015]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 16:12:34 compute-0 systemd[1]: Stopping nova_compute container...
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.910 253174 DEBUG oslo_concurrency.lockutils [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.910 253174 DEBUG oslo_concurrency.lockutils [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:12:34 compute-0 nova_compute[253170]: 2025-11-25 16:12:34.910 253174 DEBUG oslo_concurrency.lockutils [None req-41c86e5d-f0ff-49c9-815f-79b4cf6ca924 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:12:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:12:35 compute-0 virtqemud[253880]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 25 16:12:35 compute-0 virtqemud[253880]: hostname: compute-0
Nov 25 16:12:35 compute-0 virtqemud[253880]: End of file while reading data: Input/output error
Nov 25 16:12:35 compute-0 systemd[1]: libpod-38f7b30ba70212106819855e16a59af501ec966381aae6819da9d878aaf4e508.scope: Deactivated successfully.
Nov 25 16:12:35 compute-0 systemd[1]: libpod-38f7b30ba70212106819855e16a59af501ec966381aae6819da9d878aaf4e508.scope: Consumed 3.098s CPU time.
Nov 25 16:12:35 compute-0 podman[254027]: 2025-11-25 16:12:35.533302463 +0000 UTC m=+0.657304886 container died 38f7b30ba70212106819855e16a59af501ec966381aae6819da9d878aaf4e508 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, container_name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 16:12:35 compute-0 ceph-mon[74985]: pgmap v706: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:35 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-38f7b30ba70212106819855e16a59af501ec966381aae6819da9d878aaf4e508-userdata-shm.mount: Deactivated successfully.
Nov 25 16:12:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-26f48d95bd7e7ef61ab60e0f179ecbdf79c9b613c267d2605d61c45cb8f57f0a-merged.mount: Deactivated successfully.
Nov 25 16:12:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v707: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:37 compute-0 ceph-mon[74985]: pgmap v707: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:37 compute-0 podman[254027]: 2025-11-25 16:12:37.036038904 +0000 UTC m=+2.160041327 container cleanup 38f7b30ba70212106819855e16a59af501ec966381aae6819da9d878aaf4e508 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 16:12:37 compute-0 podman[254027]: nova_compute
Nov 25 16:12:37 compute-0 podman[254064]: nova_compute
Nov 25 16:12:37 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Nov 25 16:12:37 compute-0 systemd[1]: Stopped nova_compute container.
Nov 25 16:12:37 compute-0 systemd[1]: Starting nova_compute container...
Nov 25 16:12:37 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:12:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f48d95bd7e7ef61ab60e0f179ecbdf79c9b613c267d2605d61c45cb8f57f0a/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 25 16:12:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f48d95bd7e7ef61ab60e0f179ecbdf79c9b613c267d2605d61c45cb8f57f0a/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 25 16:12:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f48d95bd7e7ef61ab60e0f179ecbdf79c9b613c267d2605d61c45cb8f57f0a/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 25 16:12:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f48d95bd7e7ef61ab60e0f179ecbdf79c9b613c267d2605d61c45cb8f57f0a/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 25 16:12:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f48d95bd7e7ef61ab60e0f179ecbdf79c9b613c267d2605d61c45cb8f57f0a/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 25 16:12:37 compute-0 podman[254076]: 2025-11-25 16:12:37.21039194 +0000 UTC m=+0.092324553 container init 38f7b30ba70212106819855e16a59af501ec966381aae6819da9d878aaf4e508 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=edpm, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 16:12:37 compute-0 podman[254076]: 2025-11-25 16:12:37.215906189 +0000 UTC m=+0.097838772 container start 38f7b30ba70212106819855e16a59af501ec966381aae6819da9d878aaf4e508 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, container_name=nova_compute)
Nov 25 16:12:37 compute-0 podman[254076]: nova_compute
Nov 25 16:12:37 compute-0 nova_compute[254092]: + sudo -E kolla_set_configs
Nov 25 16:12:37 compute-0 systemd[1]: Started nova_compute container.
Nov 25 16:12:37 compute-0 sudo[254013]: pam_unix(sudo:session): session closed for user root
Nov 25 16:12:37 compute-0 nova_compute[254092]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 16:12:37 compute-0 nova_compute[254092]: INFO:__main__:Validating config file
Nov 25 16:12:37 compute-0 nova_compute[254092]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 16:12:37 compute-0 nova_compute[254092]: INFO:__main__:Copying service configuration files
Nov 25 16:12:37 compute-0 nova_compute[254092]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 25 16:12:37 compute-0 nova_compute[254092]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 25 16:12:37 compute-0 nova_compute[254092]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 25 16:12:37 compute-0 nova_compute[254092]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Nov 25 16:12:37 compute-0 nova_compute[254092]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 25 16:12:37 compute-0 nova_compute[254092]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 25 16:12:37 compute-0 nova_compute[254092]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 25 16:12:37 compute-0 nova_compute[254092]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 25 16:12:37 compute-0 nova_compute[254092]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 25 16:12:37 compute-0 nova_compute[254092]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 16:12:37 compute-0 nova_compute[254092]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 16:12:37 compute-0 nova_compute[254092]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 16:12:37 compute-0 nova_compute[254092]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 16:12:37 compute-0 nova_compute[254092]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 16:12:37 compute-0 nova_compute[254092]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 16:12:37 compute-0 nova_compute[254092]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 16:12:37 compute-0 nova_compute[254092]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 16:12:37 compute-0 nova_compute[254092]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 16:12:37 compute-0 nova_compute[254092]: INFO:__main__:Deleting /etc/ceph
Nov 25 16:12:37 compute-0 nova_compute[254092]: INFO:__main__:Creating directory /etc/ceph
Nov 25 16:12:37 compute-0 nova_compute[254092]: INFO:__main__:Setting permission for /etc/ceph
Nov 25 16:12:37 compute-0 nova_compute[254092]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 25 16:12:37 compute-0 nova_compute[254092]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 25 16:12:37 compute-0 nova_compute[254092]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 25 16:12:37 compute-0 nova_compute[254092]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 25 16:12:37 compute-0 nova_compute[254092]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Nov 25 16:12:37 compute-0 nova_compute[254092]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 25 16:12:37 compute-0 nova_compute[254092]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 25 16:12:37 compute-0 nova_compute[254092]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Nov 25 16:12:37 compute-0 nova_compute[254092]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 25 16:12:37 compute-0 nova_compute[254092]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 25 16:12:37 compute-0 nova_compute[254092]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 25 16:12:37 compute-0 nova_compute[254092]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 25 16:12:37 compute-0 nova_compute[254092]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 25 16:12:37 compute-0 nova_compute[254092]: INFO:__main__:Writing out command to execute
Nov 25 16:12:37 compute-0 nova_compute[254092]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 25 16:12:37 compute-0 nova_compute[254092]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 25 16:12:37 compute-0 nova_compute[254092]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 25 16:12:37 compute-0 nova_compute[254092]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 25 16:12:37 compute-0 nova_compute[254092]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 25 16:12:37 compute-0 nova_compute[254092]: ++ cat /run_command
Nov 25 16:12:37 compute-0 nova_compute[254092]: + CMD=nova-compute
Nov 25 16:12:37 compute-0 nova_compute[254092]: + ARGS=
Nov 25 16:12:37 compute-0 nova_compute[254092]: + sudo kolla_copy_cacerts
Nov 25 16:12:37 compute-0 nova_compute[254092]: + [[ ! -n '' ]]
Nov 25 16:12:37 compute-0 nova_compute[254092]: + . kolla_extend_start
Nov 25 16:12:37 compute-0 nova_compute[254092]: + echo 'Running command: '\''nova-compute'\'''
Nov 25 16:12:37 compute-0 nova_compute[254092]: Running command: 'nova-compute'
Nov 25 16:12:37 compute-0 nova_compute[254092]: + umask 0022
Nov 25 16:12:37 compute-0 nova_compute[254092]: + exec nova-compute
Nov 25 16:12:37 compute-0 sudo[254253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jauitaovowottsxuypsceigpjbydgzgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764087157.465325-1566-29873808332290/AnsiballZ_podman_container.py'
Nov 25 16:12:37 compute-0 sudo[254253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 16:12:37 compute-0 python3.9[254255]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 25 16:12:38 compute-0 systemd[1]: Started libpod-conmon-8df2cdd7351c7e05ad950da90d63c3feb28b0240d2ab1eaef94281878f2bd4a0.scope.
Nov 25 16:12:38 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:12:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c733aae186e559d4abc5cea6746395833e7d89c14b4a22fe40cb951bbde7677/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Nov 25 16:12:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c733aae186e559d4abc5cea6746395833e7d89c14b4a22fe40cb951bbde7677/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 25 16:12:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c733aae186e559d4abc5cea6746395833e7d89c14b4a22fe40cb951bbde7677/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Nov 25 16:12:38 compute-0 podman[254280]: 2025-11-25 16:12:38.165905556 +0000 UTC m=+0.102117777 container init 8df2cdd7351c7e05ad950da90d63c3feb28b0240d2ab1eaef94281878f2bd4a0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, io.buildah.version=1.41.3)
Nov 25 16:12:38 compute-0 podman[254280]: 2025-11-25 16:12:38.172951907 +0000 UTC m=+0.109164128 container start 8df2cdd7351c7e05ad950da90d63c3feb28b0240d2ab1eaef94281878f2bd4a0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 16:12:38 compute-0 python3.9[254255]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Nov 25 16:12:38 compute-0 nova_compute_init[254301]: INFO:nova_statedir:Applying nova statedir ownership
Nov 25 16:12:38 compute-0 nova_compute_init[254301]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Nov 25 16:12:38 compute-0 nova_compute_init[254301]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Nov 25 16:12:38 compute-0 nova_compute_init[254301]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Nov 25 16:12:38 compute-0 nova_compute_init[254301]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Nov 25 16:12:38 compute-0 nova_compute_init[254301]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Nov 25 16:12:38 compute-0 nova_compute_init[254301]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Nov 25 16:12:38 compute-0 nova_compute_init[254301]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Nov 25 16:12:38 compute-0 nova_compute_init[254301]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Nov 25 16:12:38 compute-0 nova_compute_init[254301]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Nov 25 16:12:38 compute-0 nova_compute_init[254301]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Nov 25 16:12:38 compute-0 nova_compute_init[254301]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Nov 25 16:12:38 compute-0 nova_compute_init[254301]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Nov 25 16:12:38 compute-0 nova_compute_init[254301]: INFO:nova_statedir:Nova statedir ownership complete
Nov 25 16:12:38 compute-0 systemd[1]: libpod-8df2cdd7351c7e05ad950da90d63c3feb28b0240d2ab1eaef94281878f2bd4a0.scope: Deactivated successfully.
Nov 25 16:12:38 compute-0 podman[254314]: 2025-11-25 16:12:38.269709809 +0000 UTC m=+0.031343437 container died 8df2cdd7351c7e05ad950da90d63c3feb28b0240d2ab1eaef94281878f2bd4a0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, org.label-schema.schema-version=1.0, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 16:12:38 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8df2cdd7351c7e05ad950da90d63c3feb28b0240d2ab1eaef94281878f2bd4a0-userdata-shm.mount: Deactivated successfully.
Nov 25 16:12:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c733aae186e559d4abc5cea6746395833e7d89c14b4a22fe40cb951bbde7677-merged.mount: Deactivated successfully.
Nov 25 16:12:38 compute-0 podman[254314]: 2025-11-25 16:12:38.301284852 +0000 UTC m=+0.062918440 container cleanup 8df2cdd7351c7e05ad950da90d63c3feb28b0240d2ab1eaef94281878f2bd4a0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 16:12:38 compute-0 systemd[1]: libpod-conmon-8df2cdd7351c7e05ad950da90d63c3feb28b0240d2ab1eaef94281878f2bd4a0.scope: Deactivated successfully.
Nov 25 16:12:38 compute-0 sudo[254253]: pam_unix(sudo:session): session closed for user root
Nov 25 16:12:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v708: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:38 compute-0 sshd-session[223242]: Connection closed by 192.168.122.30 port 50946
Nov 25 16:12:38 compute-0 sshd-session[223239]: pam_unix(sshd:session): session closed for user zuul
Nov 25 16:12:38 compute-0 systemd-logind[791]: Session 50 logged out. Waiting for processes to exit.
Nov 25 16:12:38 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Nov 25 16:12:38 compute-0 systemd[1]: session-50.scope: Consumed 2min 13.311s CPU time.
Nov 25 16:12:38 compute-0 systemd-logind[791]: Removed session 50.
Nov 25 16:12:39 compute-0 nova_compute[254092]: 2025-11-25 16:12:39.327 254096 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 25 16:12:39 compute-0 nova_compute[254092]: 2025-11-25 16:12:39.328 254096 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 25 16:12:39 compute-0 nova_compute[254092]: 2025-11-25 16:12:39.328 254096 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 25 16:12:39 compute-0 nova_compute[254092]: 2025-11-25 16:12:39.328 254096 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Nov 25 16:12:39 compute-0 ceph-mon[74985]: pgmap v708: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:39 compute-0 nova_compute[254092]: 2025-11-25 16:12:39.466 254096 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:12:39 compute-0 nova_compute[254092]: 2025-11-25 16:12:39.486 254096 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:12:39 compute-0 nova_compute[254092]: 2025-11-25 16:12:39.486 254096 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Nov 25 16:12:39 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:12:39
Nov 25 16:12:39 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:12:39 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:12:39 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['vms', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'backups', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'volumes']
Nov 25 16:12:39 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:12:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:12:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:12:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:12:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:12:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:12:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.094 254096 INFO nova.virt.driver [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Nov 25 16:12:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:12:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:12:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:12:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:12:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.239 254096 INFO nova.compute.provider_config [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.255 254096 DEBUG oslo_concurrency.lockutils [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.255 254096 DEBUG oslo_concurrency.lockutils [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.255 254096 DEBUG oslo_concurrency.lockutils [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.256 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.256 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.256 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.256 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.256 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.256 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.256 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.257 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.257 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.257 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.257 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.257 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.257 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.257 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.258 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.258 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.258 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.258 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.258 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.258 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.259 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.259 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.259 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.259 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.259 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.260 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.260 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.260 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.260 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.260 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.261 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.261 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.261 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.261 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.261 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.261 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.261 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.262 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.262 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.262 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.262 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.262 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.262 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.263 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.263 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.263 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.263 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.263 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.263 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.264 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.264 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.264 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.264 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.264 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.264 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.264 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.265 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.265 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.265 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.265 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.265 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.265 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.265 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.266 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.266 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.266 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.266 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.266 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.266 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.266 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.267 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.267 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.267 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.267 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.267 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.267 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.268 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.268 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.268 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.268 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.268 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.269 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.269 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.269 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.270 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.270 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.270 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.270 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.270 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.270 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.270 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.271 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.271 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.271 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.271 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.271 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.271 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.272 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.272 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.272 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.272 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.272 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.272 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.272 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.272 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.273 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.273 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.273 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.273 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.273 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.273 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.273 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.274 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.274 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.274 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.274 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.274 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.274 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.274 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.275 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.275 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.275 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.275 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.275 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.275 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.275 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.275 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.276 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.276 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.276 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.276 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.276 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.276 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.276 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.277 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.277 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.277 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.277 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.277 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.277 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.277 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.277 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.278 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.278 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.278 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.278 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.278 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.278 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.278 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.279 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.279 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.279 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.279 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.279 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.279 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.279 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.280 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.280 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.280 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.280 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.280 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.280 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.280 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.281 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.281 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.281 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.281 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.281 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.281 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.281 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.282 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.282 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.282 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.282 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.282 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.282 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.282 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.283 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.283 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.283 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.283 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.283 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.283 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.283 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.284 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.284 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.284 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.284 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.284 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.284 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.284 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.285 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.285 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.285 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.285 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.285 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.285 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.285 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.285 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.286 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.286 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.286 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.286 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.286 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.286 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.287 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.287 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.287 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.287 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.287 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.287 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.288 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.288 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.288 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.288 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.288 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.288 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.288 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.289 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.289 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.289 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.289 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.289 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.289 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.289 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.289 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.290 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.290 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.290 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.290 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.290 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.290 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.290 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.291 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.291 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.291 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.291 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.291 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.291 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.291 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.292 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.292 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.292 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.292 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.292 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.292 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.292 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.293 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.293 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.293 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.293 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.293 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.293 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.293 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.294 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.294 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.294 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.294 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.294 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.294 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.294 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.295 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.295 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.295 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.295 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.295 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.295 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.296 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.296 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.296 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.296 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.296 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.296 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.297 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.297 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.297 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.297 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.297 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.297 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.297 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.298 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.298 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.298 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.298 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.298 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.298 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.298 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.299 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.299 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.299 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.299 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.299 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.299 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.299 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.300 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.300 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.300 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.300 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.300 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.300 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.300 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.301 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.301 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.301 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.301 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.301 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.301 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.301 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.302 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.302 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.302 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.302 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.302 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.302 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.302 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.303 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.303 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.303 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.303 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.303 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.303 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.303 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.304 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.304 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.304 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.304 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.304 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.304 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.304 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.305 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.305 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.305 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.305 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.305 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.305 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.305 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.306 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.306 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.306 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.306 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.306 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.306 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.306 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.307 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.307 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.307 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.307 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.307 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.308 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.308 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.308 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.308 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.308 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.308 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.308 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.309 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.309 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.309 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.309 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.309 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.309 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.310 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.310 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.310 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.310 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.310 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.310 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.310 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.311 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.311 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.311 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.311 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.311 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.311 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.311 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.311 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.312 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.312 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.312 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.312 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.312 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.312 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.312 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.313 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.313 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.313 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.313 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.313 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.313 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.313 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.314 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.314 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.314 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.314 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.314 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.314 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.314 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.314 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.315 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.315 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.315 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.315 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.315 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.315 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.315 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.316 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.316 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.316 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.316 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.316 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.316 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.316 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.316 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.317 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.317 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.317 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.317 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.317 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.317 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.317 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.318 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.318 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.318 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.318 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.318 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.318 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.318 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.319 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.319 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.319 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.319 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.319 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.319 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.319 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.320 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.320 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.320 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.320 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.320 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.320 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.321 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.321 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.321 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.321 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.321 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.322 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.322 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.322 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.322 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.322 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.322 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.323 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.323 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.323 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.323 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.323 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.324 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.324 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.324 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.324 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.324 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.324 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.325 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.325 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.325 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.325 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.325 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.326 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.326 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.326 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.326 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.326 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.326 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.327 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.327 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.327 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.327 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.327 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.327 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.327 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.328 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.328 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.328 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.328 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.328 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.328 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.328 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.329 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.329 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.329 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.329 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.329 254096 WARNING oslo_config.cfg [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 25 16:12:40 compute-0 nova_compute[254092]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 25 16:12:40 compute-0 nova_compute[254092]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 25 16:12:40 compute-0 nova_compute[254092]: and ``live_migration_inbound_addr`` respectively.
Nov 25 16:12:40 compute-0 nova_compute[254092]: ).  Its value may be silently ignored in the future.
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.329 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.330 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.330 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.330 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.330 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.330 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.330 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.331 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.331 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.331 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.331 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.331 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.331 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.332 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.332 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.332 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.332 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.332 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.333 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.rbd_secret_uuid        = d82baeae-c742-50a4-b8f6-b5257c8a2c92 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.333 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.333 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.333 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.333 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.333 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.333 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.334 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.334 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.334 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.334 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.334 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.334 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.335 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.335 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.335 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.335 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.335 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.335 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.335 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.336 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.336 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.336 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.336 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.336 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.336 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.337 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.337 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.337 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.337 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.337 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.337 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.337 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.338 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.338 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.338 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.338 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.338 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.338 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.338 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.338 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.339 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.339 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.339 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.339 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.339 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.339 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.339 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.340 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.340 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.340 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.340 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.340 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.340 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.340 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.341 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.341 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.341 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.341 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.341 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.341 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.341 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.342 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.342 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.342 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.342 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.342 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.342 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.343 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.343 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.343 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.343 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.343 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.343 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.344 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.344 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.344 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.344 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.344 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.344 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.345 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.345 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.345 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.345 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.345 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.345 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.345 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.346 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.346 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.346 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.346 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.346 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.346 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.347 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.347 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.347 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.347 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.347 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.347 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.347 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.348 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.348 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.348 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.348 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.348 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.348 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.349 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.349 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.349 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.349 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.349 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.349 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.350 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.350 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.350 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.350 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.350 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.350 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.350 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.351 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.351 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.351 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.351 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.351 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.351 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.352 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.352 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.352 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.352 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.352 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.352 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.352 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.353 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.353 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.353 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.353 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.353 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.354 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.355 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.355 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.355 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.356 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.356 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.356 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.356 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.356 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.357 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.357 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.357 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.357 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.357 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.358 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.358 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.358 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.358 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.358 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.358 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.358 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.359 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.359 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.359 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.359 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.359 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.360 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.360 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.360 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.360 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.360 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.360 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.361 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.361 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.361 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.361 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.361 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.361 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.361 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.362 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.362 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.362 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.362 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.362 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.363 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.363 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.363 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.363 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.363 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.363 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.363 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.364 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.364 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.364 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.364 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.364 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.364 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.365 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.365 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.365 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.365 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.365 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.365 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.365 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.366 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.366 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.366 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.366 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.366 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.366 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.366 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.367 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.367 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.367 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.367 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.367 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.367 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.368 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.368 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.368 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.368 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.368 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.368 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.368 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.369 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.369 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.369 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.369 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.369 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.369 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.370 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.370 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.370 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.370 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.371 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.371 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.371 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.371 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.371 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.371 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.372 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.372 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.372 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.372 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.372 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.372 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.372 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.373 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.373 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.373 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.373 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.373 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.373 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.373 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.373 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.374 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.374 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.374 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.374 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.374 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.374 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.374 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.375 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.375 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.375 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.375 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.375 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.375 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.375 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.376 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.376 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.376 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.376 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.376 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.376 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.377 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.377 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.377 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.377 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.377 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.377 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.378 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.378 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.378 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.378 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.378 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.378 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.378 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.379 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.379 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.379 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.379 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.379 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.379 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.380 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.380 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.380 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.380 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.380 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.380 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.380 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.381 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.381 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.381 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.381 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.381 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.381 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.382 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.382 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.382 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.382 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.382 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.382 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.382 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.383 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.383 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.383 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.383 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.383 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.383 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.383 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.384 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.384 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.384 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.384 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.384 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.385 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.385 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.385 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.385 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.385 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.385 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.385 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.386 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.386 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.386 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.386 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.386 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.386 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.386 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.387 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.387 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.387 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.387 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.387 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.387 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.388 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.388 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.388 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.388 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.388 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.388 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.388 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.389 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.389 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.389 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.389 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.389 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.389 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.390 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.390 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.390 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.390 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.390 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.390 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.390 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.391 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.391 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.391 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.391 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.391 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.391 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.391 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.392 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.392 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.392 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.392 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.392 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.392 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.393 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.393 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.393 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.393 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.393 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.393 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.393 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.394 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.394 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.394 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.394 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.394 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.394 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.394 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.395 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.395 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.395 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.395 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.395 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.395 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.395 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.396 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.396 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.396 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.396 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.396 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.396 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.396 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.397 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.397 254096 DEBUG oslo_service.service [None req-dde4c166-f9de-4591-addb-6b81c4955eb0 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.398 254096 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 25 16:12:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v709: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.416 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.416 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.416 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.417 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.427 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f649a344610> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.429 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f649a344610> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.430 254096 INFO nova.virt.libvirt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Connection event '1' reason 'None'
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.439 254096 INFO nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Libvirt host capabilities <capabilities>
Nov 25 16:12:40 compute-0 nova_compute[254092]: 
Nov 25 16:12:40 compute-0 nova_compute[254092]:   <host>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <uuid>3ad80417-8456-49f6-9219-21501d8909bb</uuid>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <cpu>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <arch>x86_64</arch>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model>EPYC-Rome-v4</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <vendor>AMD</vendor>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <microcode version='16777317'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <signature family='23' model='49' stepping='0'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <maxphysaddr mode='emulate' bits='40'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature name='x2apic'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature name='tsc-deadline'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature name='osxsave'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature name='hypervisor'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature name='tsc_adjust'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature name='spec-ctrl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature name='stibp'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature name='arch-capabilities'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature name='ssbd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature name='cmp_legacy'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature name='topoext'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature name='virt-ssbd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature name='lbrv'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature name='tsc-scale'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature name='vmcb-clean'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature name='pause-filter'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature name='pfthreshold'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature name='svme-addr-chk'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature name='rdctl-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature name='skip-l1dfl-vmentry'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature name='mds-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature name='pschange-mc-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <pages unit='KiB' size='4'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <pages unit='KiB' size='2048'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <pages unit='KiB' size='1048576'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </cpu>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <power_management>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <suspend_mem/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </power_management>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <iommu support='no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <migration_features>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <live/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <uri_transports>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <uri_transport>tcp</uri_transport>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <uri_transport>rdma</uri_transport>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </uri_transports>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </migration_features>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <topology>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <cells num='1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <cell id='0'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:           <memory unit='KiB'>7864320</memory>
Nov 25 16:12:40 compute-0 nova_compute[254092]:           <pages unit='KiB' size='4'>1966080</pages>
Nov 25 16:12:40 compute-0 nova_compute[254092]:           <pages unit='KiB' size='2048'>0</pages>
Nov 25 16:12:40 compute-0 nova_compute[254092]:           <pages unit='KiB' size='1048576'>0</pages>
Nov 25 16:12:40 compute-0 nova_compute[254092]:           <distances>
Nov 25 16:12:40 compute-0 nova_compute[254092]:             <sibling id='0' value='10'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:           </distances>
Nov 25 16:12:40 compute-0 nova_compute[254092]:           <cpus num='8'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:           </cpus>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         </cell>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </cells>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </topology>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <cache>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </cache>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <secmodel>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model>selinux</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <doi>0</doi>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </secmodel>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <secmodel>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model>dac</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <doi>0</doi>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <baselabel type='kvm'>+107:+107</baselabel>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <baselabel type='qemu'>+107:+107</baselabel>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </secmodel>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   </host>
Nov 25 16:12:40 compute-0 nova_compute[254092]: 
Nov 25 16:12:40 compute-0 nova_compute[254092]:   <guest>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <os_type>hvm</os_type>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <arch name='i686'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <wordsize>32</wordsize>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <domain type='qemu'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <domain type='kvm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </arch>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <features>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <pae/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <nonpae/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <acpi default='on' toggle='yes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <apic default='on' toggle='no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <cpuselection/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <deviceboot/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <disksnapshot default='on' toggle='no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <externalSnapshot/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </features>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   </guest>
Nov 25 16:12:40 compute-0 nova_compute[254092]: 
Nov 25 16:12:40 compute-0 nova_compute[254092]:   <guest>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <os_type>hvm</os_type>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <arch name='x86_64'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <wordsize>64</wordsize>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <domain type='qemu'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <domain type='kvm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </arch>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <features>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <acpi default='on' toggle='yes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <apic default='on' toggle='no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <cpuselection/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <deviceboot/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <disksnapshot default='on' toggle='no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <externalSnapshot/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </features>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   </guest>
Nov 25 16:12:40 compute-0 nova_compute[254092]: 
Nov 25 16:12:40 compute-0 nova_compute[254092]: </capabilities>
Nov 25 16:12:40 compute-0 nova_compute[254092]: 
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.446 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.447 254096 WARNING nova.virt.libvirt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.447 254096 DEBUG nova.virt.libvirt.volume.mount [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.469 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 25 16:12:40 compute-0 nova_compute[254092]: <domainCapabilities>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   <path>/usr/libexec/qemu-kvm</path>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   <domain>kvm</domain>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   <arch>i686</arch>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   <vcpu max='4096'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   <iothreads supported='yes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   <os supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <enum name='firmware'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <loader supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='type'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>rom</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>pflash</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='readonly'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>yes</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>no</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='secure'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>no</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </loader>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   </os>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   <cpu>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <mode name='host-passthrough' supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='hostPassthroughMigratable'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>on</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>off</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </mode>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <mode name='maximum' supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='maximumMigratable'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>on</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>off</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </mode>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <mode name='host-model' supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <vendor>AMD</vendor>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='x2apic'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='tsc-deadline'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='hypervisor'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='tsc_adjust'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='spec-ctrl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='stibp'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='ssbd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='cmp_legacy'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='overflow-recov'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='succor'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='ibrs'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='amd-ssbd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='virt-ssbd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='lbrv'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='tsc-scale'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='vmcb-clean'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='flushbyasid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='pause-filter'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='pfthreshold'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='svme-addr-chk'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='disable' name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </mode>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <mode name='custom' supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Broadwell'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Broadwell-IBRS'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Broadwell-noTSX'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Broadwell-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Broadwell-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Broadwell-v3'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Broadwell-v4'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Cascadelake-Server'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Cascadelake-Server-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Cascadelake-Server-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Cascadelake-Server-v3'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Cascadelake-Server-v4'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Cascadelake-Server-v5'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Cooperlake'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Cooperlake-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Cooperlake-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Denverton'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='mpx'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Denverton-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='mpx'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Denverton-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Denverton-v3'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Dhyana-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='EPYC-Genoa'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amd-psfd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='auto-ibrs'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='no-nested-data-bp'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='null-sel-clr-base'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='stibp-always-on'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='EPYC-Genoa-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amd-psfd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='auto-ibrs'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='no-nested-data-bp'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='null-sel-clr-base'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='stibp-always-on'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='EPYC-Milan'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='EPYC-Milan-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='EPYC-Milan-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amd-psfd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='no-nested-data-bp'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='null-sel-clr-base'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='stibp-always-on'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='EPYC-Rome'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='EPYC-Rome-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='EPYC-Rome-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='EPYC-Rome-v3'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='EPYC-v3'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='EPYC-v4'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='GraniteRapids'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-fp16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-int8'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-tile'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-fp16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='bus-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fbsdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrc'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrs'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fzrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='mcdt-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pbrsb-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='prefetchiti'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='psdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='sbdr-ssdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='serialize'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='tsx-ldtrk'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xfd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='GraniteRapids-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-fp16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-int8'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-tile'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-fp16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='bus-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fbsdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrc'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrs'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fzrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='mcdt-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pbrsb-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='prefetchiti'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='psdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='sbdr-ssdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='serialize'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='tsx-ldtrk'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xfd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='GraniteRapids-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-fp16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-int8'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-tile'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx10'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx10-128'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx10-256'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx10-512'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-fp16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='bus-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='cldemote'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fbsdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrc'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrs'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fzrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='mcdt-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdir64b'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdiri'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pbrsb-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='prefetchiti'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='psdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='sbdr-ssdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='serialize'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ss'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='tsx-ldtrk'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xfd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Haswell'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Haswell-IBRS'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Haswell-noTSX'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Haswell-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Haswell-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Haswell-v3'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Haswell-v4'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Icelake-Server'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Icelake-Server-noTSX'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Icelake-Server-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Icelake-Server-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Icelake-Server-v3'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Icelake-Server-v4'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Icelake-Server-v5'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Icelake-Server-v6'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Icelake-Server-v7'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='IvyBridge'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='IvyBridge-IBRS'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='IvyBridge-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='IvyBridge-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='KnightsMill'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-4fmaps'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-4vnniw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512er'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512pf'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ss'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='KnightsMill-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-4fmaps'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-4vnniw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512er'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512pf'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ss'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Opteron_G4'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fma4'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xop'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Opteron_G4-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fma4'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xop'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Opteron_G5'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fma4'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='tbm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xop'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Opteron_G5-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fma4'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='tbm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xop'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='SapphireRapids'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-int8'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-tile'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-fp16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='bus-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrc'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrs'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fzrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='serialize'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='tsx-ldtrk'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xfd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='SapphireRapids-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-int8'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-tile'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-fp16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='bus-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrc'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrs'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fzrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='serialize'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='tsx-ldtrk'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xfd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='SapphireRapids-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-int8'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-tile'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-fp16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='bus-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fbsdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrc'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrs'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fzrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='psdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='sbdr-ssdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='serialize'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='tsx-ldtrk'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xfd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='SapphireRapids-v3'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-int8'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-tile'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-fp16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='bus-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='cldemote'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fbsdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrc'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrs'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fzrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdir64b'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdiri'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='psdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='sbdr-ssdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='serialize'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ss'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='tsx-ldtrk'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xfd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='SierraForest'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-ne-convert'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-vnni-int8'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='bus-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='cmpccxadd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fbsdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrs'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='mcdt-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pbrsb-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='psdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='sbdr-ssdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='serialize'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='SierraForest-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-ne-convert'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-vnni-int8'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='bus-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='cmpccxadd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fbsdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrs'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='mcdt-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pbrsb-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='psdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='sbdr-ssdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='serialize'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Client'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Client-IBRS'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Client-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Client-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Client-v3'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Client-v4'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Server'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Server-IBRS'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Server-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Server-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Server-v3'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Server-v4'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Server-v5'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Snowridge'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='cldemote'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='core-capability'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdir64b'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdiri'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='mpx'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='split-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Snowridge-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='cldemote'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='core-capability'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdir64b'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdiri'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='mpx'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='split-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Snowridge-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='cldemote'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='core-capability'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdir64b'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdiri'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='split-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Snowridge-v3'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='cldemote'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='core-capability'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdir64b'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdiri'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='split-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Snowridge-v4'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='cldemote'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdir64b'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdiri'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='athlon'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='3dnow'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='3dnowext'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='athlon-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='3dnow'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='3dnowext'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='core2duo'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ss'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='core2duo-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ss'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='coreduo'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ss'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='coreduo-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ss'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='n270'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ss'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='n270-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ss'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='phenom'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='3dnow'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='3dnowext'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='phenom-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='3dnow'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='3dnowext'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </mode>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   <memoryBacking supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <enum name='sourceType'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <value>file</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <value>anonymous</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <value>memfd</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   </memoryBacking>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <disk supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='diskDevice'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>disk</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>cdrom</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>floppy</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>lun</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='bus'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>fdc</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>scsi</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>virtio</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>usb</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>sata</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='model'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>virtio</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>virtio-transitional</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>virtio-non-transitional</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <graphics supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='type'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>vnc</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>egl-headless</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>dbus</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </graphics>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <video supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='modelType'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>vga</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>cirrus</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>virtio</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>none</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>bochs</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>ramfb</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </video>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <hostdev supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='mode'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>subsystem</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='startupPolicy'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>default</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>mandatory</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>requisite</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>optional</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='subsysType'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>usb</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>pci</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>scsi</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='capsType'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='pciBackend'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </hostdev>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <rng supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='model'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>virtio</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>virtio-transitional</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>virtio-non-transitional</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='backendModel'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>random</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>egd</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>builtin</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <filesystem supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='driverType'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>path</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>handle</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>virtiofs</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </filesystem>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <tpm supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='model'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>tpm-tis</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>tpm-crb</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='backendModel'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>emulator</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>external</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='backendVersion'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>2.0</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </tpm>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <redirdev supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='bus'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>usb</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </redirdev>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <channel supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='type'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>pty</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>unix</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </channel>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <crypto supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='model'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='type'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>qemu</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='backendModel'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>builtin</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </crypto>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <interface supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='backendType'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>default</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>passt</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <panic supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='model'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>isa</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>hyperv</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </panic>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <console supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='type'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>null</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>vc</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>pty</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>dev</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>file</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>pipe</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>stdio</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>udp</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>tcp</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>unix</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>qemu-vdagent</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>dbus</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </console>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   <features>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <gic supported='no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <vmcoreinfo supported='yes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <genid supported='yes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <backingStoreInput supported='yes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <backup supported='yes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <async-teardown supported='yes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <ps2 supported='yes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <sev supported='no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <sgx supported='no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <hyperv supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='features'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>relaxed</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>vapic</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>spinlocks</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>vpindex</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>runtime</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>synic</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>stimer</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>reset</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>vendor_id</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>frequencies</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>reenlightenment</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>tlbflush</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>ipi</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>avic</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>emsr_bitmap</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>xmm_input</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <defaults>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <spinlocks>4095</spinlocks>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <stimer_direct>on</stimer_direct>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <tlbflush_direct>on</tlbflush_direct>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <tlbflush_extended>on</tlbflush_extended>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </defaults>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </hyperv>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <launchSecurity supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='sectype'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>tdx</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </launchSecurity>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   </features>
Nov 25 16:12:40 compute-0 nova_compute[254092]: </domainCapabilities>
Nov 25 16:12:40 compute-0 nova_compute[254092]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.474 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 25 16:12:40 compute-0 nova_compute[254092]: <domainCapabilities>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   <path>/usr/libexec/qemu-kvm</path>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   <domain>kvm</domain>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   <arch>i686</arch>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   <vcpu max='240'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   <iothreads supported='yes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   <os supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <enum name='firmware'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <loader supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='type'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>rom</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>pflash</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='readonly'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>yes</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>no</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='secure'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>no</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </loader>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   </os>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   <cpu>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <mode name='host-passthrough' supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='hostPassthroughMigratable'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>on</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>off</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </mode>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <mode name='maximum' supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='maximumMigratable'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>on</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>off</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </mode>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <mode name='host-model' supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <vendor>AMD</vendor>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='x2apic'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='tsc-deadline'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='hypervisor'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='tsc_adjust'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='spec-ctrl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='stibp'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='ssbd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='cmp_legacy'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='overflow-recov'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='succor'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='ibrs'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='amd-ssbd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='virt-ssbd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='lbrv'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='tsc-scale'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='vmcb-clean'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='flushbyasid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='pause-filter'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='pfthreshold'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='svme-addr-chk'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='disable' name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </mode>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <mode name='custom' supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Broadwell'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Broadwell-IBRS'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Broadwell-noTSX'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Broadwell-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Broadwell-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Broadwell-v3'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Broadwell-v4'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Cascadelake-Server'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Cascadelake-Server-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Cascadelake-Server-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Cascadelake-Server-v3'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Cascadelake-Server-v4'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Cascadelake-Server-v5'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Cooperlake'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Cooperlake-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Cooperlake-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Denverton'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='mpx'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Denverton-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='mpx'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Denverton-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Denverton-v3'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Dhyana-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='EPYC-Genoa'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amd-psfd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='auto-ibrs'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='no-nested-data-bp'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='null-sel-clr-base'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='stibp-always-on'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='EPYC-Genoa-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amd-psfd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='auto-ibrs'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='no-nested-data-bp'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='null-sel-clr-base'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='stibp-always-on'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='EPYC-Milan'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='EPYC-Milan-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='EPYC-Milan-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amd-psfd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='no-nested-data-bp'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='null-sel-clr-base'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='stibp-always-on'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='EPYC-Rome'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='EPYC-Rome-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='EPYC-Rome-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='EPYC-Rome-v3'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='EPYC-v3'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='EPYC-v4'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='GraniteRapids'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-fp16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-int8'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-tile'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-fp16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='bus-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fbsdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrc'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrs'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fzrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='mcdt-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pbrsb-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='prefetchiti'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='psdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='sbdr-ssdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='serialize'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='tsx-ldtrk'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xfd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='GraniteRapids-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-fp16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-int8'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-tile'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-fp16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='bus-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fbsdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrc'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrs'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fzrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='mcdt-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pbrsb-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='prefetchiti'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='psdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='sbdr-ssdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='serialize'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='tsx-ldtrk'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xfd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='GraniteRapids-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-fp16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-int8'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-tile'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx10'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx10-128'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx10-256'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx10-512'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-fp16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='bus-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='cldemote'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fbsdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrc'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrs'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fzrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='mcdt-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdir64b'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdiri'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pbrsb-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='prefetchiti'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='psdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='sbdr-ssdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='serialize'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ss'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='tsx-ldtrk'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xfd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Haswell'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Haswell-IBRS'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Haswell-noTSX'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Haswell-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Haswell-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Haswell-v3'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Haswell-v4'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Icelake-Server'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Icelake-Server-noTSX'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Icelake-Server-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Icelake-Server-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Icelake-Server-v3'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Icelake-Server-v4'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Icelake-Server-v5'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Icelake-Server-v6'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Icelake-Server-v7'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='IvyBridge'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='IvyBridge-IBRS'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='IvyBridge-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='IvyBridge-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='KnightsMill'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-4fmaps'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-4vnniw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512er'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512pf'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ss'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='KnightsMill-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-4fmaps'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-4vnniw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512er'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512pf'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ss'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Opteron_G4'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fma4'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xop'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Opteron_G4-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fma4'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xop'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Opteron_G5'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fma4'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='tbm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xop'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Opteron_G5-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fma4'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='tbm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xop'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='SapphireRapids'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-int8'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-tile'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-fp16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='bus-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrc'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrs'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fzrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='serialize'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='tsx-ldtrk'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xfd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='SapphireRapids-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-int8'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-tile'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-fp16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='bus-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrc'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrs'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fzrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='serialize'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='tsx-ldtrk'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xfd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='SapphireRapids-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-int8'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-tile'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-fp16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='bus-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fbsdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrc'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrs'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fzrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='psdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='sbdr-ssdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='serialize'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='tsx-ldtrk'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xfd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='SapphireRapids-v3'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-int8'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-tile'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-fp16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='bus-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='cldemote'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fbsdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrc'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrs'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fzrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdir64b'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdiri'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='psdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='sbdr-ssdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='serialize'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ss'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='tsx-ldtrk'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xfd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='SierraForest'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-ne-convert'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-vnni-int8'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='bus-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='cmpccxadd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fbsdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrs'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='mcdt-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pbrsb-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='psdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='sbdr-ssdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='serialize'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='SierraForest-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-ne-convert'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-vnni-int8'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='bus-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='cmpccxadd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fbsdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrs'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='mcdt-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pbrsb-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='psdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='sbdr-ssdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='serialize'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Client'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Client-IBRS'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Client-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Client-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Client-v3'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Client-v4'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Server'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Server-IBRS'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Server-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Server-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Server-v3'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Server-v4'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Server-v5'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Snowridge'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='cldemote'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='core-capability'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdir64b'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdiri'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='mpx'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='split-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Snowridge-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='cldemote'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='core-capability'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdir64b'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdiri'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='mpx'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='split-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Snowridge-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='cldemote'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='core-capability'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdir64b'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdiri'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='split-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Snowridge-v3'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='cldemote'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='core-capability'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdir64b'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdiri'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='split-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Snowridge-v4'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='cldemote'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdir64b'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdiri'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='athlon'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='3dnow'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='3dnowext'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='athlon-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='3dnow'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='3dnowext'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='core2duo'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ss'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='core2duo-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ss'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='coreduo'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ss'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='coreduo-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ss'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='n270'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ss'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='n270-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ss'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='phenom'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='3dnow'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='3dnowext'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='phenom-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='3dnow'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='3dnowext'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </mode>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   <memoryBacking supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <enum name='sourceType'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <value>file</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <value>anonymous</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <value>memfd</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   </memoryBacking>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <disk supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='diskDevice'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>disk</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>cdrom</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>floppy</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>lun</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='bus'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>ide</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>fdc</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>scsi</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>virtio</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>usb</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>sata</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='model'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>virtio</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>virtio-transitional</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>virtio-non-transitional</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <graphics supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='type'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>vnc</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>egl-headless</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>dbus</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </graphics>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <video supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='modelType'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>vga</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>cirrus</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>virtio</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>none</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>bochs</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>ramfb</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </video>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <hostdev supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='mode'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>subsystem</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='startupPolicy'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>default</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>mandatory</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>requisite</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>optional</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='subsysType'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>usb</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>pci</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>scsi</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='capsType'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='pciBackend'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </hostdev>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <rng supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='model'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>virtio</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>virtio-transitional</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>virtio-non-transitional</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='backendModel'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>random</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>egd</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>builtin</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <filesystem supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='driverType'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>path</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>handle</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>virtiofs</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </filesystem>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <tpm supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='model'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>tpm-tis</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>tpm-crb</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='backendModel'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>emulator</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>external</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='backendVersion'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>2.0</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </tpm>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <redirdev supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='bus'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>usb</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </redirdev>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <channel supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='type'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>pty</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>unix</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </channel>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <crypto supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='model'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='type'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>qemu</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='backendModel'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>builtin</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </crypto>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <interface supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='backendType'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>default</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>passt</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <panic supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='model'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>isa</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>hyperv</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </panic>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <console supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='type'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>null</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>vc</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>pty</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>dev</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>file</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>pipe</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>stdio</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>udp</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>tcp</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>unix</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>qemu-vdagent</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>dbus</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </console>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   <features>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <gic supported='no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <vmcoreinfo supported='yes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <genid supported='yes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <backingStoreInput supported='yes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <backup supported='yes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <async-teardown supported='yes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <ps2 supported='yes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <sev supported='no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <sgx supported='no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <hyperv supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='features'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>relaxed</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>vapic</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>spinlocks</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>vpindex</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>runtime</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>synic</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>stimer</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>reset</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>vendor_id</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>frequencies</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>reenlightenment</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>tlbflush</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>ipi</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>avic</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>emsr_bitmap</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>xmm_input</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <defaults>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <spinlocks>4095</spinlocks>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <stimer_direct>on</stimer_direct>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <tlbflush_direct>on</tlbflush_direct>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <tlbflush_extended>on</tlbflush_extended>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </defaults>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </hyperv>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <launchSecurity supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='sectype'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>tdx</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </launchSecurity>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   </features>
Nov 25 16:12:40 compute-0 nova_compute[254092]: </domainCapabilities>
Nov 25 16:12:40 compute-0 nova_compute[254092]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.501 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.504 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 25 16:12:40 compute-0 nova_compute[254092]: <domainCapabilities>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   <path>/usr/libexec/qemu-kvm</path>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   <domain>kvm</domain>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   <arch>x86_64</arch>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   <vcpu max='4096'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   <iothreads supported='yes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   <os supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <enum name='firmware'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <value>efi</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <loader supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='type'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>rom</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>pflash</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='readonly'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>yes</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>no</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='secure'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>yes</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>no</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </loader>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   </os>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   <cpu>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <mode name='host-passthrough' supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='hostPassthroughMigratable'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>on</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>off</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </mode>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <mode name='maximum' supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='maximumMigratable'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>on</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>off</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </mode>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <mode name='host-model' supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <vendor>AMD</vendor>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='x2apic'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='tsc-deadline'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='hypervisor'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='tsc_adjust'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='spec-ctrl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='stibp'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='ssbd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='cmp_legacy'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='overflow-recov'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='succor'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='ibrs'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='amd-ssbd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='virt-ssbd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='lbrv'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='tsc-scale'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='vmcb-clean'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='flushbyasid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='pause-filter'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='pfthreshold'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='svme-addr-chk'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='disable' name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </mode>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <mode name='custom' supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Broadwell'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Broadwell-IBRS'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Broadwell-noTSX'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Broadwell-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Broadwell-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Broadwell-v3'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Broadwell-v4'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Cascadelake-Server'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Cascadelake-Server-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Cascadelake-Server-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Cascadelake-Server-v3'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Cascadelake-Server-v4'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Cascadelake-Server-v5'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Cooperlake'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Cooperlake-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Cooperlake-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Denverton'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='mpx'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Denverton-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='mpx'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Denverton-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Denverton-v3'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Dhyana-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='EPYC-Genoa'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amd-psfd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='auto-ibrs'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='no-nested-data-bp'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='null-sel-clr-base'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='stibp-always-on'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='EPYC-Genoa-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amd-psfd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='auto-ibrs'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='no-nested-data-bp'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='null-sel-clr-base'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='stibp-always-on'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='EPYC-Milan'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='EPYC-Milan-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='EPYC-Milan-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amd-psfd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='no-nested-data-bp'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='null-sel-clr-base'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='stibp-always-on'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='EPYC-Rome'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='EPYC-Rome-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='EPYC-Rome-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='EPYC-Rome-v3'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='EPYC-v3'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='EPYC-v4'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='GraniteRapids'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-fp16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-int8'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-tile'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-fp16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='bus-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fbsdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrc'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrs'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fzrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='mcdt-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pbrsb-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='prefetchiti'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='psdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='sbdr-ssdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='serialize'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='tsx-ldtrk'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xfd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='GraniteRapids-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-fp16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-int8'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-tile'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-fp16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='bus-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fbsdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrc'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrs'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fzrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='mcdt-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pbrsb-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='prefetchiti'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='psdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='sbdr-ssdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='serialize'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='tsx-ldtrk'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xfd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='GraniteRapids-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-fp16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-int8'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-tile'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx10'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx10-128'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx10-256'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx10-512'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-fp16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='bus-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='cldemote'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fbsdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrc'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrs'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fzrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='mcdt-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdir64b'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdiri'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pbrsb-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='prefetchiti'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='psdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='sbdr-ssdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='serialize'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ss'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='tsx-ldtrk'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xfd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Haswell'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Haswell-IBRS'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Haswell-noTSX'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Haswell-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Haswell-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Haswell-v3'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Haswell-v4'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Icelake-Server'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Icelake-Server-noTSX'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Icelake-Server-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Icelake-Server-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Icelake-Server-v3'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Icelake-Server-v4'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Icelake-Server-v5'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Icelake-Server-v6'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Icelake-Server-v7'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='IvyBridge'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='IvyBridge-IBRS'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='IvyBridge-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='IvyBridge-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='KnightsMill'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-4fmaps'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-4vnniw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512er'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512pf'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ss'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='KnightsMill-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-4fmaps'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-4vnniw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512er'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512pf'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ss'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Opteron_G4'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fma4'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xop'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Opteron_G4-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fma4'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xop'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Opteron_G5'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fma4'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='tbm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xop'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Opteron_G5-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fma4'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='tbm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xop'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='SapphireRapids'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-int8'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-tile'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-fp16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='bus-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrc'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrs'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fzrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='serialize'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='tsx-ldtrk'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xfd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='SapphireRapids-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-int8'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-tile'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-fp16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='bus-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrc'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrs'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fzrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='serialize'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='tsx-ldtrk'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xfd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='SapphireRapids-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-int8'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-tile'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-fp16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='bus-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fbsdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrc'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrs'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fzrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='psdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='sbdr-ssdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='serialize'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='tsx-ldtrk'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xfd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='SapphireRapids-v3'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-int8'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-tile'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-fp16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='bus-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='cldemote'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fbsdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrc'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrs'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fzrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdir64b'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdiri'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='psdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='sbdr-ssdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='serialize'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ss'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='tsx-ldtrk'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xfd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='SierraForest'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-ne-convert'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-vnni-int8'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='bus-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='cmpccxadd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fbsdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrs'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='mcdt-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pbrsb-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='psdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='sbdr-ssdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='serialize'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='SierraForest-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-ne-convert'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-vnni-int8'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='bus-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='cmpccxadd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fbsdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrs'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='mcdt-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pbrsb-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='psdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='sbdr-ssdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='serialize'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Client'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Client-IBRS'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Client-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Client-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Client-v3'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Client-v4'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Server'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Server-IBRS'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Server-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Server-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Server-v3'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Server-v4'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Server-v5'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Snowridge'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='cldemote'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='core-capability'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdir64b'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdiri'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='mpx'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='split-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Snowridge-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='cldemote'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='core-capability'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdir64b'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdiri'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='mpx'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='split-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Snowridge-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='cldemote'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='core-capability'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdir64b'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdiri'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='split-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Snowridge-v3'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='cldemote'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='core-capability'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdir64b'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdiri'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='split-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Snowridge-v4'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='cldemote'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdir64b'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdiri'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='athlon'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='3dnow'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='3dnowext'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='athlon-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='3dnow'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='3dnowext'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='core2duo'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ss'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='core2duo-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ss'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='coreduo'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ss'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='coreduo-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ss'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='n270'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ss'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='n270-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ss'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='phenom'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='3dnow'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='3dnowext'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='phenom-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='3dnow'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='3dnowext'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </mode>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   <memoryBacking supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <enum name='sourceType'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <value>file</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <value>anonymous</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <value>memfd</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   </memoryBacking>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <disk supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='diskDevice'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>disk</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>cdrom</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>floppy</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>lun</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='bus'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>fdc</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>scsi</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>virtio</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>usb</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>sata</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='model'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>virtio</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>virtio-transitional</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>virtio-non-transitional</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <graphics supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='type'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>vnc</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>egl-headless</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>dbus</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </graphics>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <video supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='modelType'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>vga</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>cirrus</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>virtio</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>none</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>bochs</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>ramfb</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </video>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <hostdev supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='mode'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>subsystem</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='startupPolicy'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>default</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>mandatory</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>requisite</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>optional</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='subsysType'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>usb</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>pci</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>scsi</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='capsType'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='pciBackend'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </hostdev>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <rng supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='model'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>virtio</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>virtio-transitional</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>virtio-non-transitional</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='backendModel'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>random</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>egd</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>builtin</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <filesystem supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='driverType'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>path</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>handle</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>virtiofs</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </filesystem>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <tpm supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='model'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>tpm-tis</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>tpm-crb</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='backendModel'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>emulator</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>external</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='backendVersion'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>2.0</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </tpm>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <redirdev supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='bus'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>usb</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </redirdev>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <channel supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='type'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>pty</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>unix</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </channel>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <crypto supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='model'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='type'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>qemu</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='backendModel'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>builtin</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </crypto>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <interface supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='backendType'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>default</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>passt</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <panic supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='model'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>isa</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>hyperv</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </panic>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <console supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='type'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>null</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>vc</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>pty</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>dev</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>file</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>pipe</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>stdio</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>udp</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>tcp</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>unix</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>qemu-vdagent</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>dbus</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </console>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   <features>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <gic supported='no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <vmcoreinfo supported='yes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <genid supported='yes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <backingStoreInput supported='yes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <backup supported='yes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <async-teardown supported='yes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <ps2 supported='yes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <sev supported='no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <sgx supported='no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <hyperv supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='features'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>relaxed</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>vapic</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>spinlocks</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>vpindex</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>runtime</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>synic</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>stimer</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>reset</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>vendor_id</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>frequencies</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>reenlightenment</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>tlbflush</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>ipi</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>avic</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>emsr_bitmap</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>xmm_input</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <defaults>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <spinlocks>4095</spinlocks>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <stimer_direct>on</stimer_direct>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <tlbflush_direct>on</tlbflush_direct>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <tlbflush_extended>on</tlbflush_extended>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </defaults>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </hyperv>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <launchSecurity supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='sectype'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>tdx</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </launchSecurity>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   </features>
Nov 25 16:12:40 compute-0 nova_compute[254092]: </domainCapabilities>
Nov 25 16:12:40 compute-0 nova_compute[254092]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.581 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 25 16:12:40 compute-0 nova_compute[254092]: <domainCapabilities>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   <path>/usr/libexec/qemu-kvm</path>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   <domain>kvm</domain>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   <arch>x86_64</arch>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   <vcpu max='240'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   <iothreads supported='yes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   <os supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <enum name='firmware'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <loader supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='type'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>rom</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>pflash</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='readonly'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>yes</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>no</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='secure'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>no</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </loader>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   </os>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   <cpu>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <mode name='host-passthrough' supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='hostPassthroughMigratable'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>on</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>off</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </mode>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <mode name='maximum' supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='maximumMigratable'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>on</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>off</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </mode>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <mode name='host-model' supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <vendor>AMD</vendor>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='x2apic'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='tsc-deadline'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='hypervisor'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='tsc_adjust'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='spec-ctrl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='stibp'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='ssbd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='cmp_legacy'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='overflow-recov'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='succor'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='ibrs'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='amd-ssbd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='virt-ssbd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='lbrv'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='tsc-scale'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='vmcb-clean'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='flushbyasid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='pause-filter'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='pfthreshold'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='svme-addr-chk'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <feature policy='disable' name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </mode>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <mode name='custom' supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Broadwell'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Broadwell-IBRS'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Broadwell-noTSX'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Broadwell-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Broadwell-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Broadwell-v3'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Broadwell-v4'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Cascadelake-Server'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Cascadelake-Server-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Cascadelake-Server-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Cascadelake-Server-v3'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Cascadelake-Server-v4'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Cascadelake-Server-v5'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Cooperlake'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Cooperlake-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Cooperlake-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Denverton'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='mpx'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Denverton-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='mpx'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Denverton-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Denverton-v3'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Dhyana-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='EPYC-Genoa'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amd-psfd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='auto-ibrs'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='no-nested-data-bp'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='null-sel-clr-base'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='stibp-always-on'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='EPYC-Genoa-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amd-psfd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='auto-ibrs'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='no-nested-data-bp'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='null-sel-clr-base'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='stibp-always-on'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='EPYC-Milan'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='EPYC-Milan-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='EPYC-Milan-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amd-psfd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='no-nested-data-bp'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='null-sel-clr-base'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='stibp-always-on'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='EPYC-Rome'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='EPYC-Rome-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='EPYC-Rome-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='EPYC-Rome-v3'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='EPYC-v3'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='EPYC-v4'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='GraniteRapids'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-fp16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-int8'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-tile'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-fp16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='bus-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fbsdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrc'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrs'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fzrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='mcdt-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pbrsb-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='prefetchiti'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='psdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='sbdr-ssdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='serialize'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='tsx-ldtrk'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xfd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='GraniteRapids-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-fp16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-int8'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-tile'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-fp16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='bus-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fbsdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrc'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrs'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fzrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='mcdt-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pbrsb-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='prefetchiti'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='psdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='sbdr-ssdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='serialize'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='tsx-ldtrk'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xfd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='GraniteRapids-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-fp16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-int8'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-tile'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx10'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx10-128'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx10-256'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx10-512'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-fp16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='bus-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='cldemote'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fbsdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrc'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrs'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fzrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='mcdt-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdir64b'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdiri'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pbrsb-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='prefetchiti'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='psdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='sbdr-ssdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='serialize'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ss'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='tsx-ldtrk'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xfd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Haswell'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Haswell-IBRS'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Haswell-noTSX'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Haswell-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Haswell-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Haswell-v3'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Haswell-v4'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Icelake-Server'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Icelake-Server-noTSX'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Icelake-Server-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Icelake-Server-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Icelake-Server-v3'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Icelake-Server-v4'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Icelake-Server-v5'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Icelake-Server-v6'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Icelake-Server-v7'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='IvyBridge'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='IvyBridge-IBRS'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='IvyBridge-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='IvyBridge-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='KnightsMill'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-4fmaps'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-4vnniw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512er'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512pf'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ss'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='KnightsMill-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-4fmaps'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-4vnniw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512er'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512pf'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ss'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Opteron_G4'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fma4'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xop'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Opteron_G4-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fma4'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xop'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Opteron_G5'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fma4'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='tbm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xop'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Opteron_G5-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fma4'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='tbm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xop'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='SapphireRapids'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-int8'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-tile'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-fp16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='bus-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrc'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrs'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fzrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='serialize'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='tsx-ldtrk'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xfd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='SapphireRapids-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-int8'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-tile'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-fp16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='bus-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrc'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrs'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fzrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='serialize'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='tsx-ldtrk'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xfd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='SapphireRapids-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-int8'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-tile'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-fp16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='bus-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fbsdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrc'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrs'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fzrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='psdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='sbdr-ssdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='serialize'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='tsx-ldtrk'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xfd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='SapphireRapids-v3'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-int8'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='amx-tile'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-bf16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-fp16'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512-vpopcntdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bitalg'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vbmi2'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='bus-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='cldemote'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fbsdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrc'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrs'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fzrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='la57'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdir64b'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdiri'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='psdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='sbdr-ssdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='serialize'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ss'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='taa-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='tsx-ldtrk'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xfd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='SierraForest'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-ne-convert'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-vnni-int8'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='bus-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='cmpccxadd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fbsdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrs'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='mcdt-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pbrsb-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='psdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='sbdr-ssdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='serialize'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='SierraForest-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-ifma'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-ne-convert'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-vnni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx-vnni-int8'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='bus-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='cmpccxadd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fbsdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='fsrs'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ibrs-all'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='mcdt-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pbrsb-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='psdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='sbdr-ssdp-no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='serialize'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vaes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='vpclmulqdq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Client'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Client-IBRS'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Client-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Client-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Client-v3'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Client-v4'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Server'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Server-IBRS'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Server-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Server-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='hle'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='rtm'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Server-v3'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Server-v4'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Skylake-Server-v5'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512bw'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512cd'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512dq'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512f'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='avx512vl'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='invpcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pcid'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='pku'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Snowridge'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='cldemote'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='core-capability'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdir64b'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdiri'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='mpx'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='split-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Snowridge-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='cldemote'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='core-capability'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdir64b'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdiri'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='mpx'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='split-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Snowridge-v2'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='cldemote'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='core-capability'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdir64b'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdiri'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='split-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Snowridge-v3'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='cldemote'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='core-capability'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdir64b'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdiri'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='split-lock-detect'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='Snowridge-v4'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='cldemote'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='erms'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='gfni'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdir64b'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='movdiri'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='xsaves'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='athlon'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='3dnow'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='3dnowext'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='athlon-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='3dnow'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='3dnowext'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='core2duo'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ss'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='core2duo-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ss'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='coreduo'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ss'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='coreduo-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ss'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='n270'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ss'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='n270-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='ss'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='phenom'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='3dnow'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='3dnowext'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <blockers model='phenom-v1'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='3dnow'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <feature name='3dnowext'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </blockers>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </mode>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   <memoryBacking supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <enum name='sourceType'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <value>file</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <value>anonymous</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <value>memfd</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   </memoryBacking>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <disk supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='diskDevice'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>disk</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>cdrom</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>floppy</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>lun</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='bus'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>ide</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>fdc</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>scsi</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>virtio</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>usb</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>sata</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='model'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>virtio</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>virtio-transitional</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>virtio-non-transitional</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <graphics supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='type'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>vnc</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>egl-headless</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>dbus</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </graphics>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <video supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='modelType'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>vga</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>cirrus</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>virtio</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>none</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>bochs</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>ramfb</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </video>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <hostdev supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='mode'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>subsystem</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='startupPolicy'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>default</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>mandatory</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>requisite</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>optional</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='subsysType'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>usb</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>pci</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>scsi</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='capsType'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='pciBackend'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </hostdev>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <rng supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='model'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>virtio</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>virtio-transitional</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>virtio-non-transitional</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='backendModel'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>random</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>egd</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>builtin</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <filesystem supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='driverType'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>path</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>handle</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>virtiofs</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </filesystem>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <tpm supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='model'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>tpm-tis</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>tpm-crb</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='backendModel'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>emulator</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>external</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='backendVersion'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>2.0</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </tpm>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <redirdev supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='bus'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>usb</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </redirdev>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <channel supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='type'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>pty</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>unix</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </channel>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <crypto supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='model'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='type'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>qemu</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='backendModel'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>builtin</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </crypto>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <interface supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='backendType'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>default</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>passt</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <panic supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='model'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>isa</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>hyperv</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </panic>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <console supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='type'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>null</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>vc</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>pty</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>dev</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>file</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>pipe</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>stdio</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>udp</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>tcp</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>unix</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>qemu-vdagent</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>dbus</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </console>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   <features>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <gic supported='no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <vmcoreinfo supported='yes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <genid supported='yes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <backingStoreInput supported='yes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <backup supported='yes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <async-teardown supported='yes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <ps2 supported='yes'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <sev supported='no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <sgx supported='no'/>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <hyperv supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='features'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>relaxed</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>vapic</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>spinlocks</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>vpindex</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>runtime</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>synic</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>stimer</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>reset</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>vendor_id</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>frequencies</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>reenlightenment</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>tlbflush</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>ipi</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>avic</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>emsr_bitmap</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>xmm_input</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <defaults>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <spinlocks>4095</spinlocks>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <stimer_direct>on</stimer_direct>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <tlbflush_direct>on</tlbflush_direct>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <tlbflush_extended>on</tlbflush_extended>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </defaults>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </hyperv>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     <launchSecurity supported='yes'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       <enum name='sectype'>
Nov 25 16:12:40 compute-0 nova_compute[254092]:         <value>tdx</value>
Nov 25 16:12:40 compute-0 nova_compute[254092]:       </enum>
Nov 25 16:12:40 compute-0 nova_compute[254092]:     </launchSecurity>
Nov 25 16:12:40 compute-0 nova_compute[254092]:   </features>
Nov 25 16:12:40 compute-0 nova_compute[254092]: </domainCapabilities>
Nov 25 16:12:40 compute-0 nova_compute[254092]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.667 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.667 254096 INFO nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Secure Boot support detected
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.670 254096 INFO nova.virt.libvirt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.670 254096 INFO nova.virt.libvirt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.678 254096 DEBUG nova.virt.libvirt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.716 254096 INFO nova.virt.node [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Determined node identity 4f066da7-306c-41d7-8522-9a9189cacc49 from /var/lib/nova/compute_id
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.733 254096 WARNING nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Compute nodes ['4f066da7-306c-41d7-8522-9a9189cacc49'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.767 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.802 254096 WARNING nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.802 254096 DEBUG oslo_concurrency.lockutils [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.802 254096 DEBUG oslo_concurrency.lockutils [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.803 254096 DEBUG oslo_concurrency.lockutils [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.803 254096 DEBUG nova.compute.resource_tracker [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 16:12:40 compute-0 nova_compute[254092]: 2025-11-25 16:12:40.803 254096 DEBUG oslo_concurrency.processutils [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:12:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:12:41 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2880014724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:12:41 compute-0 nova_compute[254092]: 2025-11-25 16:12:41.200 254096 DEBUG oslo_concurrency.processutils [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.397s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:12:41 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Nov 25 16:12:41 compute-0 systemd[1]: Started libvirt nodedev daemon.
Nov 25 16:12:41 compute-0 ceph-mon[74985]: pgmap v709: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:41 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2880014724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:12:41 compute-0 nova_compute[254092]: 2025-11-25 16:12:41.482 254096 WARNING nova.virt.libvirt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:12:41 compute-0 nova_compute[254092]: 2025-11-25 16:12:41.483 254096 DEBUG nova.compute.resource_tracker [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5161MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 16:12:41 compute-0 nova_compute[254092]: 2025-11-25 16:12:41.483 254096 DEBUG oslo_concurrency.lockutils [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:12:41 compute-0 nova_compute[254092]: 2025-11-25 16:12:41.484 254096 DEBUG oslo_concurrency.lockutils [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:12:41 compute-0 nova_compute[254092]: 2025-11-25 16:12:41.496 254096 WARNING nova.compute.resource_tracker [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] No compute node record for compute-0.ctlplane.example.com:4f066da7-306c-41d7-8522-9a9189cacc49: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 4f066da7-306c-41d7-8522-9a9189cacc49 could not be found.
Nov 25 16:12:41 compute-0 nova_compute[254092]: 2025-11-25 16:12:41.517 254096 INFO nova.compute.resource_tracker [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 4f066da7-306c-41d7-8522-9a9189cacc49
Nov 25 16:12:41 compute-0 nova_compute[254092]: 2025-11-25 16:12:41.585 254096 DEBUG nova.compute.resource_tracker [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 16:12:41 compute-0 nova_compute[254092]: 2025-11-25 16:12:41.585 254096 DEBUG nova.compute.resource_tracker [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 16:12:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v710: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:42 compute-0 nova_compute[254092]: 2025-11-25 16:12:42.414 254096 INFO nova.scheduler.client.report [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [req-a439584f-a634-409a-9edc-28be634c7cef] Created resource provider record via placement API for resource provider with UUID 4f066da7-306c-41d7-8522-9a9189cacc49 and name compute-0.ctlplane.example.com.
Nov 25 16:12:42 compute-0 nova_compute[254092]: 2025-11-25 16:12:42.775 254096 DEBUG oslo_concurrency.processutils [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:12:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:12:43 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3305549128' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:12:43 compute-0 nova_compute[254092]: 2025-11-25 16:12:43.223 254096 DEBUG oslo_concurrency.processutils [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:12:43 compute-0 nova_compute[254092]: 2025-11-25 16:12:43.228 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Nov 25 16:12:43 compute-0 nova_compute[254092]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Nov 25 16:12:43 compute-0 nova_compute[254092]: 2025-11-25 16:12:43.229 254096 INFO nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] kernel doesn't support AMD SEV
Nov 25 16:12:43 compute-0 nova_compute[254092]: 2025-11-25 16:12:43.229 254096 DEBUG nova.compute.provider_tree [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 16:12:43 compute-0 nova_compute[254092]: 2025-11-25 16:12:43.230 254096 DEBUG nova.virt.libvirt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:12:43 compute-0 nova_compute[254092]: 2025-11-25 16:12:43.308 254096 DEBUG nova.scheduler.client.report [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Updated inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Nov 25 16:12:43 compute-0 nova_compute[254092]: 2025-11-25 16:12:43.308 254096 DEBUG nova.compute.provider_tree [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Updating resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 25 16:12:43 compute-0 nova_compute[254092]: 2025-11-25 16:12:43.308 254096 DEBUG nova.compute.provider_tree [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 16:12:43 compute-0 nova_compute[254092]: 2025-11-25 16:12:43.395 254096 DEBUG nova.compute.provider_tree [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Updating resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 25 16:12:43 compute-0 nova_compute[254092]: 2025-11-25 16:12:43.430 254096 DEBUG nova.compute.resource_tracker [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 16:12:43 compute-0 nova_compute[254092]: 2025-11-25 16:12:43.431 254096 DEBUG oslo_concurrency.lockutils [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.947s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:12:43 compute-0 nova_compute[254092]: 2025-11-25 16:12:43.431 254096 DEBUG nova.service [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Nov 25 16:12:43 compute-0 nova_compute[254092]: 2025-11-25 16:12:43.475 254096 DEBUG nova.service [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Nov 25 16:12:43 compute-0 nova_compute[254092]: 2025-11-25 16:12:43.476 254096 DEBUG nova.servicegroup.drivers.db [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Nov 25 16:12:43 compute-0 ceph-mon[74985]: pgmap v710: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:43 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3305549128' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:12:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v711: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:12:45 compute-0 ceph-mon[74985]: pgmap v711: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v712: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:47 compute-0 ceph-mon[74985]: pgmap v712: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v713: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:49 compute-0 ceph-mon[74985]: pgmap v713: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:12:50 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Nov 25 16:12:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:12:50.354413) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 16:12:50 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Nov 25 16:12:50 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087170354481, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1772, "num_deletes": 507, "total_data_size": 2507206, "memory_usage": 2540232, "flush_reason": "Manual Compaction"}
Nov 25 16:12:50 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Nov 25 16:12:50 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087170407326, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 2462496, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13394, "largest_seqno": 15165, "table_properties": {"data_size": 2454720, "index_size": 4206, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 17980, "raw_average_key_size": 18, "raw_value_size": 2437367, "raw_average_value_size": 2487, "num_data_blocks": 192, "num_entries": 980, "num_filter_entries": 980, "num_deletions": 507, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764087000, "oldest_key_time": 1764087000, "file_creation_time": 1764087170, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:12:50 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 52949 microseconds, and 6710 cpu microseconds.
Nov 25 16:12:50 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:12:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v714: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:12:50.407374) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 2462496 bytes OK
Nov 25 16:12:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:12:50.407393) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Nov 25 16:12:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:12:50.413148) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Nov 25 16:12:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:12:50.413199) EVENT_LOG_v1 {"time_micros": 1764087170413188, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 16:12:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:12:50.413220) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 16:12:50 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 2498567, prev total WAL file size 2498567, number of live WAL files 2.
Nov 25 16:12:50 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:12:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:12:50.414032) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323533' seq:0, type:0; will stop at (end)
Nov 25 16:12:50 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 16:12:50 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(2404KB)], [32(7194KB)]
Nov 25 16:12:50 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087170414079, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 9829458, "oldest_snapshot_seqno": -1}
Nov 25 16:12:50 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3965 keys, 7796220 bytes, temperature: kUnknown
Nov 25 16:12:50 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087170478447, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7796220, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7767428, "index_size": 17802, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9925, "raw_key_size": 97356, "raw_average_key_size": 24, "raw_value_size": 7693324, "raw_average_value_size": 1940, "num_data_blocks": 756, "num_entries": 3965, "num_filter_entries": 3965, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764087170, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:12:50 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:12:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:12:50.478735) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7796220 bytes
Nov 25 16:12:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:12:50.479849) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.6 rd, 121.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 7.0 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(7.2) write-amplify(3.2) OK, records in: 4992, records dropped: 1027 output_compression: NoCompression
Nov 25 16:12:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:12:50.479864) EVENT_LOG_v1 {"time_micros": 1764087170479857, "job": 14, "event": "compaction_finished", "compaction_time_micros": 64434, "compaction_time_cpu_micros": 22725, "output_level": 6, "num_output_files": 1, "total_output_size": 7796220, "num_input_records": 4992, "num_output_records": 3965, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 16:12:50 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:12:50 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087170480279, "job": 14, "event": "table_file_deletion", "file_number": 34}
Nov 25 16:12:50 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:12:50 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087170481432, "job": 14, "event": "table_file_deletion", "file_number": 32}
Nov 25 16:12:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:12:50.413963) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:12:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:12:50.481532) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:12:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:12:50.481537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:12:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:12:50.481538) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:12:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:12:50.481540) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:12:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:12:50.481541) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:12:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:12:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:12:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:12:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:12:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:12:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:12:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:12:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:12:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:12:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:12:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:12:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:12:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 16:12:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:12:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:12:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:12:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 16:12:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:12:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 16:12:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:12:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:12:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:12:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 16:12:51 compute-0 ceph-mon[74985]: pgmap v714: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v715: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:52 compute-0 podman[254462]: 2025-11-25 16:12:52.641457445 +0000 UTC m=+0.061726407 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 16:12:52 compute-0 podman[254463]: 2025-11-25 16:12:52.661542318 +0000 UTC m=+0.081909683 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:12:52 compute-0 podman[254464]: 2025-11-25 16:12:52.669884673 +0000 UTC m=+0.089128777 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:12:53 compute-0 ceph-mon[74985]: pgmap v715: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v716: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:54 compute-0 ceph-mon[74985]: pgmap v716: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:12:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v717: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:57 compute-0 ceph-mon[74985]: pgmap v717: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v718: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:12:59 compute-0 ceph-mon[74985]: pgmap v718: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:13:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v719: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:00 compute-0 ceph-mon[74985]: pgmap v719: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v720: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:03 compute-0 ceph-mon[74985]: pgmap v720: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v721: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:04 compute-0 ceph-mon[74985]: pgmap v721: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:13:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v722: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 16:13:06 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/327896286' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:13:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 16:13:06 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/327896286' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:13:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 16:13:06 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1651973424' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:13:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 16:13:06 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1651973424' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:13:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 16:13:07 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/106676261' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:13:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 16:13:07 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/106676261' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:13:07 compute-0 ceph-mon[74985]: pgmap v722: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:07 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/327896286' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:13:07 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/327896286' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:13:07 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/1651973424' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:13:07 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/1651973424' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:13:07 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/106676261' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:13:07 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/106676261' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:13:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v723: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:09 compute-0 ceph-mon[74985]: pgmap v723: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:13:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:13:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:13:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:13:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:13:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:13:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:13:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v724: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:10 compute-0 ceph-mon[74985]: pgmap v724: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v725: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:13:13.579 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:13:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:13:13.579 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:13:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:13:13.579 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:13:13 compute-0 ceph-mon[74985]: pgmap v725: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v726: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:15 compute-0 ceph-mon[74985]: pgmap v726: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:13:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v727: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:17 compute-0 ceph-mon[74985]: pgmap v727: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v728: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:18 compute-0 ceph-mon[74985]: pgmap v728: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:13:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v729: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:21 compute-0 ceph-mon[74985]: pgmap v729: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v730: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:23 compute-0 ceph-mon[74985]: pgmap v730: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:23 compute-0 podman[254530]: 2025-11-25 16:13:23.634496071 +0000 UTC m=+0.048451638 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Nov 25 16:13:23 compute-0 podman[254529]: 2025-11-25 16:13:23.659483456 +0000 UTC m=+0.077176475 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 16:13:23 compute-0 podman[254531]: 2025-11-25 16:13:23.66373018 +0000 UTC m=+0.075554431 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 16:13:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v731: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:13:25 compute-0 ceph-mon[74985]: pgmap v731: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v732: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:27 compute-0 ceph-mon[74985]: pgmap v732: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v733: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:28 compute-0 sudo[254590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:13:28 compute-0 sudo[254590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:13:28 compute-0 sudo[254590]: pam_unix(sudo:session): session closed for user root
Nov 25 16:13:28 compute-0 sudo[254615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:13:28 compute-0 sudo[254615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:13:28 compute-0 sudo[254615]: pam_unix(sudo:session): session closed for user root
Nov 25 16:13:28 compute-0 sudo[254640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:13:28 compute-0 sudo[254640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:13:28 compute-0 sudo[254640]: pam_unix(sudo:session): session closed for user root
Nov 25 16:13:28 compute-0 sudo[254665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 25 16:13:28 compute-0 sudo[254665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:13:28 compute-0 sudo[254665]: pam_unix(sudo:session): session closed for user root
Nov 25 16:13:28 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:13:28 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:13:28 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:13:28 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:13:28 compute-0 sudo[254710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:13:28 compute-0 sudo[254710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:13:28 compute-0 sudo[254710]: pam_unix(sudo:session): session closed for user root
Nov 25 16:13:29 compute-0 sudo[254735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:13:29 compute-0 sudo[254735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:13:29 compute-0 sudo[254735]: pam_unix(sudo:session): session closed for user root
Nov 25 16:13:29 compute-0 sudo[254760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:13:29 compute-0 sudo[254760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:13:29 compute-0 sudo[254760]: pam_unix(sudo:session): session closed for user root
Nov 25 16:13:29 compute-0 sudo[254785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 16:13:29 compute-0 sudo[254785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:13:29 compute-0 sudo[254785]: pam_unix(sudo:session): session closed for user root
Nov 25 16:13:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:13:29 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:13:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 16:13:29 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:13:29 compute-0 ceph-mon[74985]: pgmap v733: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:29 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:13:29 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:13:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 16:13:29 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:13:29 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 9ae456da-3f1e-444c-b28f-abba3dcf71c1 does not exist
Nov 25 16:13:29 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev b3d3ec1a-03d8-4200-b489-c4dbc8eb79e4 does not exist
Nov 25 16:13:29 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev bf74416e-8e2e-460c-a4b8-c4be4ee165df does not exist
Nov 25 16:13:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 16:13:29 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:13:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 16:13:29 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:13:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:13:29 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:13:29 compute-0 sudo[254842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:13:29 compute-0 sudo[254842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:13:29 compute-0 sudo[254842]: pam_unix(sudo:session): session closed for user root
Nov 25 16:13:29 compute-0 sudo[254867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:13:29 compute-0 sudo[254867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:13:29 compute-0 sudo[254867]: pam_unix(sudo:session): session closed for user root
Nov 25 16:13:29 compute-0 sudo[254892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:13:29 compute-0 sudo[254892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:13:29 compute-0 sudo[254892]: pam_unix(sudo:session): session closed for user root
Nov 25 16:13:29 compute-0 sudo[254917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 16:13:29 compute-0 sudo[254917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:13:30 compute-0 podman[254982]: 2025-11-25 16:13:30.332441746 +0000 UTC m=+0.050564686 container create 3f3928b8fb4d3a076f298f734398dd32fec4092e3ccee55e6cd1c5a37d3d5b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:13:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:13:30 compute-0 systemd[1]: Started libpod-conmon-3f3928b8fb4d3a076f298f734398dd32fec4092e3ccee55e6cd1c5a37d3d5b56.scope.
Nov 25 16:13:30 compute-0 podman[254982]: 2025-11-25 16:13:30.308534711 +0000 UTC m=+0.026657671 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:13:30 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:13:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v734: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:30 compute-0 podman[254982]: 2025-11-25 16:13:30.426066334 +0000 UTC m=+0.144189304 container init 3f3928b8fb4d3a076f298f734398dd32fec4092e3ccee55e6cd1c5a37d3d5b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_merkle, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:13:30 compute-0 podman[254982]: 2025-11-25 16:13:30.436416513 +0000 UTC m=+0.154539453 container start 3f3928b8fb4d3a076f298f734398dd32fec4092e3ccee55e6cd1c5a37d3d5b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_merkle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Nov 25 16:13:30 compute-0 podman[254982]: 2025-11-25 16:13:30.440186495 +0000 UTC m=+0.158309465 container attach 3f3928b8fb4d3a076f298f734398dd32fec4092e3ccee55e6cd1c5a37d3d5b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 16:13:30 compute-0 practical_merkle[254998]: 167 167
Nov 25 16:13:30 compute-0 systemd[1]: libpod-3f3928b8fb4d3a076f298f734398dd32fec4092e3ccee55e6cd1c5a37d3d5b56.scope: Deactivated successfully.
Nov 25 16:13:30 compute-0 podman[254982]: 2025-11-25 16:13:30.445003145 +0000 UTC m=+0.163126085 container died 3f3928b8fb4d3a076f298f734398dd32fec4092e3ccee55e6cd1c5a37d3d5b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_merkle, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Nov 25 16:13:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d88346ed80f00c4e2a1b43c7bbaa410f199ca2a18b247574ef9e72989b9731d-merged.mount: Deactivated successfully.
Nov 25 16:13:30 compute-0 podman[254982]: 2025-11-25 16:13:30.499163967 +0000 UTC m=+0.217286907 container remove 3f3928b8fb4d3a076f298f734398dd32fec4092e3ccee55e6cd1c5a37d3d5b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:13:30 compute-0 systemd[1]: libpod-conmon-3f3928b8fb4d3a076f298f734398dd32fec4092e3ccee55e6cd1c5a37d3d5b56.scope: Deactivated successfully.
Nov 25 16:13:30 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:13:30 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:13:30 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:13:30 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:13:30 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:13:30 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:13:30 compute-0 ceph-mon[74985]: pgmap v734: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:30 compute-0 podman[255021]: 2025-11-25 16:13:30.659954118 +0000 UTC m=+0.042338514 container create 4ada6b7a27f09b46a725ee84d90ab15d46f5b37779c9ae32ec441fd6ba5f716c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 16:13:30 compute-0 systemd[1]: Started libpod-conmon-4ada6b7a27f09b46a725ee84d90ab15d46f5b37779c9ae32ec441fd6ba5f716c.scope.
Nov 25 16:13:30 compute-0 podman[255021]: 2025-11-25 16:13:30.640254327 +0000 UTC m=+0.022638743 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:13:30 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:13:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e158fa746b81b21804795e731dc448baea4f962311adbbec4c7440b462c5f1e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:13:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e158fa746b81b21804795e731dc448baea4f962311adbbec4c7440b462c5f1e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:13:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e158fa746b81b21804795e731dc448baea4f962311adbbec4c7440b462c5f1e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:13:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e158fa746b81b21804795e731dc448baea4f962311adbbec4c7440b462c5f1e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:13:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e158fa746b81b21804795e731dc448baea4f962311adbbec4c7440b462c5f1e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 16:13:30 compute-0 podman[255021]: 2025-11-25 16:13:30.771707766 +0000 UTC m=+0.154092172 container init 4ada6b7a27f09b46a725ee84d90ab15d46f5b37779c9ae32ec441fd6ba5f716c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:13:30 compute-0 podman[255021]: 2025-11-25 16:13:30.782315131 +0000 UTC m=+0.164699527 container start 4ada6b7a27f09b46a725ee84d90ab15d46f5b37779c9ae32ec441fd6ba5f716c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:13:30 compute-0 podman[255021]: 2025-11-25 16:13:30.789237339 +0000 UTC m=+0.171621735 container attach 4ada6b7a27f09b46a725ee84d90ab15d46f5b37779c9ae32ec441fd6ba5f716c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_chandrasekhar, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:13:31 compute-0 elated_chandrasekhar[255037]: --> passed data devices: 0 physical, 3 LVM
Nov 25 16:13:31 compute-0 elated_chandrasekhar[255037]: --> relative data size: 1.0
Nov 25 16:13:31 compute-0 elated_chandrasekhar[255037]: --> All data devices are unavailable
Nov 25 16:13:31 compute-0 systemd[1]: libpod-4ada6b7a27f09b46a725ee84d90ab15d46f5b37779c9ae32ec441fd6ba5f716c.scope: Deactivated successfully.
Nov 25 16:13:31 compute-0 podman[255066]: 2025-11-25 16:13:31.822768441 +0000 UTC m=+0.020467263 container died 4ada6b7a27f09b46a725ee84d90ab15d46f5b37779c9ae32ec441fd6ba5f716c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 16:13:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e158fa746b81b21804795e731dc448baea4f962311adbbec4c7440b462c5f1e-merged.mount: Deactivated successfully.
Nov 25 16:13:31 compute-0 podman[255066]: 2025-11-25 16:13:31.874586299 +0000 UTC m=+0.072285101 container remove 4ada6b7a27f09b46a725ee84d90ab15d46f5b37779c9ae32ec441fd6ba5f716c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 16:13:31 compute-0 systemd[1]: libpod-conmon-4ada6b7a27f09b46a725ee84d90ab15d46f5b37779c9ae32ec441fd6ba5f716c.scope: Deactivated successfully.
Nov 25 16:13:31 compute-0 sudo[254917]: pam_unix(sudo:session): session closed for user root
Nov 25 16:13:31 compute-0 sudo[255082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:13:31 compute-0 sudo[255082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:13:31 compute-0 sudo[255082]: pam_unix(sudo:session): session closed for user root
Nov 25 16:13:32 compute-0 sudo[255107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:13:32 compute-0 sudo[255107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:13:32 compute-0 sudo[255107]: pam_unix(sudo:session): session closed for user root
Nov 25 16:13:32 compute-0 sudo[255132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:13:32 compute-0 sudo[255132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:13:32 compute-0 sudo[255132]: pam_unix(sudo:session): session closed for user root
Nov 25 16:13:32 compute-0 sudo[255157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 16:13:32 compute-0 sudo[255157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:13:32 compute-0 podman[255224]: 2025-11-25 16:13:32.404173116 +0000 UTC m=+0.039019173 container create 0fad9594f823c68c4a2873f4da86c1af514cf50106dbf7a6635d759d2d2ccc82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 16:13:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v735: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:32 compute-0 systemd[1]: Started libpod-conmon-0fad9594f823c68c4a2873f4da86c1af514cf50106dbf7a6635d759d2d2ccc82.scope.
Nov 25 16:13:32 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:13:32 compute-0 podman[255224]: 2025-11-25 16:13:32.468572595 +0000 UTC m=+0.103418682 container init 0fad9594f823c68c4a2873f4da86c1af514cf50106dbf7a6635d759d2d2ccc82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_agnesi, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 16:13:32 compute-0 podman[255224]: 2025-11-25 16:13:32.474431714 +0000 UTC m=+0.109277771 container start 0fad9594f823c68c4a2873f4da86c1af514cf50106dbf7a6635d759d2d2ccc82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_agnesi, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:13:32 compute-0 amazing_agnesi[255240]: 167 167
Nov 25 16:13:32 compute-0 podman[255224]: 2025-11-25 16:13:32.478158414 +0000 UTC m=+0.113004531 container attach 0fad9594f823c68c4a2873f4da86c1af514cf50106dbf7a6635d759d2d2ccc82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_agnesi, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 16:13:32 compute-0 systemd[1]: libpod-0fad9594f823c68c4a2873f4da86c1af514cf50106dbf7a6635d759d2d2ccc82.scope: Deactivated successfully.
Nov 25 16:13:32 compute-0 podman[255224]: 2025-11-25 16:13:32.47913563 +0000 UTC m=+0.113981687 container died 0fad9594f823c68c4a2873f4da86c1af514cf50106dbf7a6635d759d2d2ccc82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 16:13:32 compute-0 podman[255224]: 2025-11-25 16:13:32.389067559 +0000 UTC m=+0.023913616 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:13:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-74d78250b15acd04d23e7ef844ab5347e921fa48ba875b60d59b2a2f5679bfcc-merged.mount: Deactivated successfully.
Nov 25 16:13:32 compute-0 podman[255224]: 2025-11-25 16:13:32.5087395 +0000 UTC m=+0.143585557 container remove 0fad9594f823c68c4a2873f4da86c1af514cf50106dbf7a6635d759d2d2ccc82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:13:32 compute-0 systemd[1]: libpod-conmon-0fad9594f823c68c4a2873f4da86c1af514cf50106dbf7a6635d759d2d2ccc82.scope: Deactivated successfully.
Nov 25 16:13:32 compute-0 podman[255262]: 2025-11-25 16:13:32.669476799 +0000 UTC m=+0.055474059 container create 5a9427174ca53dea3de34f420f34051cb13fdd05f61c260a8d3c68faec53d621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:13:32 compute-0 systemd[1]: Started libpod-conmon-5a9427174ca53dea3de34f420f34051cb13fdd05f61c260a8d3c68faec53d621.scope.
Nov 25 16:13:32 compute-0 podman[255262]: 2025-11-25 16:13:32.636314634 +0000 UTC m=+0.022311924 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:13:32 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:13:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/655a9cbc12e8c858895ff111ca450f67a6b760c4b0bd270cab6fdd5ab12bb75c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:13:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/655a9cbc12e8c858895ff111ca450f67a6b760c4b0bd270cab6fdd5ab12bb75c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:13:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/655a9cbc12e8c858895ff111ca450f67a6b760c4b0bd270cab6fdd5ab12bb75c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:13:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/655a9cbc12e8c858895ff111ca450f67a6b760c4b0bd270cab6fdd5ab12bb75c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:13:32 compute-0 podman[255262]: 2025-11-25 16:13:32.82397456 +0000 UTC m=+0.209971840 container init 5a9427174ca53dea3de34f420f34051cb13fdd05f61c260a8d3c68faec53d621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 16:13:32 compute-0 podman[255262]: 2025-11-25 16:13:32.831673918 +0000 UTC m=+0.217671218 container start 5a9427174ca53dea3de34f420f34051cb13fdd05f61c260a8d3c68faec53d621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_leavitt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Nov 25 16:13:32 compute-0 podman[255262]: 2025-11-25 16:13:32.836504509 +0000 UTC m=+0.222501849 container attach 5a9427174ca53dea3de34f420f34051cb13fdd05f61c260a8d3c68faec53d621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_leavitt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 16:13:33 compute-0 ceph-mon[74985]: pgmap v735: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:33 compute-0 competent_leavitt[255279]: {
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:     "0": [
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:         {
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:             "devices": [
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:                 "/dev/loop3"
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:             ],
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:             "lv_name": "ceph_lv0",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:             "lv_size": "21470642176",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:             "name": "ceph_lv0",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:             "tags": {
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:                 "ceph.cluster_name": "ceph",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:                 "ceph.crush_device_class": "",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:                 "ceph.encrypted": "0",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:                 "ceph.osd_id": "0",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:                 "ceph.type": "block",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:                 "ceph.vdo": "0"
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:             },
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:             "type": "block",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:             "vg_name": "ceph_vg0"
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:         }
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:     ],
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:     "1": [
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:         {
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:             "devices": [
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:                 "/dev/loop4"
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:             ],
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:             "lv_name": "ceph_lv1",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:             "lv_size": "21470642176",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:             "name": "ceph_lv1",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:             "tags": {
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:                 "ceph.cluster_name": "ceph",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:                 "ceph.crush_device_class": "",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:                 "ceph.encrypted": "0",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:                 "ceph.osd_id": "1",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:                 "ceph.type": "block",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:                 "ceph.vdo": "0"
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:             },
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:             "type": "block",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:             "vg_name": "ceph_vg1"
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:         }
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:     ],
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:     "2": [
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:         {
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:             "devices": [
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:                 "/dev/loop5"
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:             ],
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:             "lv_name": "ceph_lv2",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:             "lv_size": "21470642176",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:             "name": "ceph_lv2",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:             "tags": {
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:                 "ceph.cluster_name": "ceph",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:                 "ceph.crush_device_class": "",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:                 "ceph.encrypted": "0",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:                 "ceph.osd_id": "2",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:                 "ceph.type": "block",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:                 "ceph.vdo": "0"
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:             },
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:             "type": "block",
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:             "vg_name": "ceph_vg2"
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:         }
Nov 25 16:13:33 compute-0 competent_leavitt[255279]:     ]
Nov 25 16:13:33 compute-0 competent_leavitt[255279]: }
Nov 25 16:13:33 compute-0 systemd[1]: libpod-5a9427174ca53dea3de34f420f34051cb13fdd05f61c260a8d3c68faec53d621.scope: Deactivated successfully.
Nov 25 16:13:33 compute-0 podman[255262]: 2025-11-25 16:13:33.636117395 +0000 UTC m=+1.022114655 container died 5a9427174ca53dea3de34f420f34051cb13fdd05f61c260a8d3c68faec53d621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 16:13:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-655a9cbc12e8c858895ff111ca450f67a6b760c4b0bd270cab6fdd5ab12bb75c-merged.mount: Deactivated successfully.
Nov 25 16:13:33 compute-0 podman[255262]: 2025-11-25 16:13:33.696398453 +0000 UTC m=+1.082395753 container remove 5a9427174ca53dea3de34f420f34051cb13fdd05f61c260a8d3c68faec53d621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_leavitt, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:13:33 compute-0 systemd[1]: libpod-conmon-5a9427174ca53dea3de34f420f34051cb13fdd05f61c260a8d3c68faec53d621.scope: Deactivated successfully.
Nov 25 16:13:33 compute-0 sudo[255157]: pam_unix(sudo:session): session closed for user root
Nov 25 16:13:33 compute-0 sudo[255302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:13:33 compute-0 sudo[255302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:13:33 compute-0 sudo[255302]: pam_unix(sudo:session): session closed for user root
Nov 25 16:13:33 compute-0 sudo[255327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:13:33 compute-0 sudo[255327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:13:33 compute-0 sudo[255327]: pam_unix(sudo:session): session closed for user root
Nov 25 16:13:33 compute-0 sudo[255352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:13:33 compute-0 sudo[255352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:13:33 compute-0 sudo[255352]: pam_unix(sudo:session): session closed for user root
Nov 25 16:13:33 compute-0 sudo[255377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 16:13:33 compute-0 sudo[255377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:13:34 compute-0 podman[255442]: 2025-11-25 16:13:34.316270528 +0000 UTC m=+0.055734716 container create a63bcb3a47322d9deac9b9da4653abaec3b3d1a63631487c47e9db3822336bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 16:13:34 compute-0 systemd[1]: Started libpod-conmon-a63bcb3a47322d9deac9b9da4653abaec3b3d1a63631487c47e9db3822336bae.scope.
Nov 25 16:13:34 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:13:34 compute-0 podman[255442]: 2025-11-25 16:13:34.29040405 +0000 UTC m=+0.029868298 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:13:34 compute-0 podman[255442]: 2025-11-25 16:13:34.391283373 +0000 UTC m=+0.130747531 container init a63bcb3a47322d9deac9b9da4653abaec3b3d1a63631487c47e9db3822336bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 16:13:34 compute-0 podman[255442]: 2025-11-25 16:13:34.397138771 +0000 UTC m=+0.136602919 container start a63bcb3a47322d9deac9b9da4653abaec3b3d1a63631487c47e9db3822336bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_pascal, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 16:13:34 compute-0 podman[255442]: 2025-11-25 16:13:34.399934477 +0000 UTC m=+0.139398665 container attach a63bcb3a47322d9deac9b9da4653abaec3b3d1a63631487c47e9db3822336bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 16:13:34 compute-0 pedantic_pascal[255458]: 167 167
Nov 25 16:13:34 compute-0 systemd[1]: libpod-a63bcb3a47322d9deac9b9da4653abaec3b3d1a63631487c47e9db3822336bae.scope: Deactivated successfully.
Nov 25 16:13:34 compute-0 podman[255442]: 2025-11-25 16:13:34.403364329 +0000 UTC m=+0.142828487 container died a63bcb3a47322d9deac9b9da4653abaec3b3d1a63631487c47e9db3822336bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_pascal, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 16:13:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-aaff202f9479d468926b2db9c4f4902c3cba11e794dcfdf71ad74d4a2259fee1-merged.mount: Deactivated successfully.
Nov 25 16:13:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v736: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:34 compute-0 podman[255442]: 2025-11-25 16:13:34.433512553 +0000 UTC m=+0.172976711 container remove a63bcb3a47322d9deac9b9da4653abaec3b3d1a63631487c47e9db3822336bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_pascal, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:13:34 compute-0 systemd[1]: libpod-conmon-a63bcb3a47322d9deac9b9da4653abaec3b3d1a63631487c47e9db3822336bae.scope: Deactivated successfully.
Nov 25 16:13:34 compute-0 nova_compute[254092]: 2025-11-25 16:13:34.477 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:13:34 compute-0 nova_compute[254092]: 2025-11-25 16:13:34.507 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:13:34 compute-0 podman[255481]: 2025-11-25 16:13:34.591873348 +0000 UTC m=+0.037376880 container create 6750461d784e1dbc69c13732c4d8089aa675ca9e68ac283765c93334810bc5b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_cori, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 16:13:34 compute-0 systemd[1]: Started libpod-conmon-6750461d784e1dbc69c13732c4d8089aa675ca9e68ac283765c93334810bc5b8.scope.
Nov 25 16:13:34 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:13:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efaa5d88526a5288cbe0fbd9c952ecae33b4d3fce8d737d5bc18a93f6a24c5e5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:13:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efaa5d88526a5288cbe0fbd9c952ecae33b4d3fce8d737d5bc18a93f6a24c5e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:13:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efaa5d88526a5288cbe0fbd9c952ecae33b4d3fce8d737d5bc18a93f6a24c5e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:13:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efaa5d88526a5288cbe0fbd9c952ecae33b4d3fce8d737d5bc18a93f6a24c5e5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:13:34 compute-0 podman[255481]: 2025-11-25 16:13:34.664129389 +0000 UTC m=+0.109632921 container init 6750461d784e1dbc69c13732c4d8089aa675ca9e68ac283765c93334810bc5b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_cori, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 16:13:34 compute-0 podman[255481]: 2025-11-25 16:13:34.670874921 +0000 UTC m=+0.116378463 container start 6750461d784e1dbc69c13732c4d8089aa675ca9e68ac283765c93334810bc5b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_cori, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 16:13:34 compute-0 podman[255481]: 2025-11-25 16:13:34.576609276 +0000 UTC m=+0.022112828 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:13:34 compute-0 podman[255481]: 2025-11-25 16:13:34.751392614 +0000 UTC m=+0.196896176 container attach 6750461d784e1dbc69c13732c4d8089aa675ca9e68ac283765c93334810bc5b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:13:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:13:35 compute-0 ceph-mon[74985]: pgmap v736: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:35 compute-0 sleepy_cori[255498]: {
Nov 25 16:13:35 compute-0 sleepy_cori[255498]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 16:13:35 compute-0 sleepy_cori[255498]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:13:35 compute-0 sleepy_cori[255498]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 16:13:35 compute-0 sleepy_cori[255498]:         "osd_id": 1,
Nov 25 16:13:35 compute-0 sleepy_cori[255498]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:13:35 compute-0 sleepy_cori[255498]:         "type": "bluestore"
Nov 25 16:13:35 compute-0 sleepy_cori[255498]:     },
Nov 25 16:13:35 compute-0 sleepy_cori[255498]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 16:13:35 compute-0 sleepy_cori[255498]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:13:35 compute-0 sleepy_cori[255498]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 16:13:35 compute-0 sleepy_cori[255498]:         "osd_id": 2,
Nov 25 16:13:35 compute-0 sleepy_cori[255498]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:13:35 compute-0 sleepy_cori[255498]:         "type": "bluestore"
Nov 25 16:13:35 compute-0 sleepy_cori[255498]:     },
Nov 25 16:13:35 compute-0 sleepy_cori[255498]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 16:13:35 compute-0 sleepy_cori[255498]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:13:35 compute-0 sleepy_cori[255498]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 16:13:35 compute-0 sleepy_cori[255498]:         "osd_id": 0,
Nov 25 16:13:35 compute-0 sleepy_cori[255498]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:13:35 compute-0 sleepy_cori[255498]:         "type": "bluestore"
Nov 25 16:13:35 compute-0 sleepy_cori[255498]:     }
Nov 25 16:13:35 compute-0 sleepy_cori[255498]: }
Nov 25 16:13:35 compute-0 systemd[1]: libpod-6750461d784e1dbc69c13732c4d8089aa675ca9e68ac283765c93334810bc5b8.scope: Deactivated successfully.
Nov 25 16:13:35 compute-0 podman[255531]: 2025-11-25 16:13:35.636725216 +0000 UTC m=+0.021658355 container died 6750461d784e1dbc69c13732c4d8089aa675ca9e68ac283765c93334810bc5b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_cori, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:13:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-efaa5d88526a5288cbe0fbd9c952ecae33b4d3fce8d737d5bc18a93f6a24c5e5-merged.mount: Deactivated successfully.
Nov 25 16:13:35 compute-0 podman[255531]: 2025-11-25 16:13:35.693788937 +0000 UTC m=+0.078722046 container remove 6750461d784e1dbc69c13732c4d8089aa675ca9e68ac283765c93334810bc5b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_cori, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:13:35 compute-0 systemd[1]: libpod-conmon-6750461d784e1dbc69c13732c4d8089aa675ca9e68ac283765c93334810bc5b8.scope: Deactivated successfully.
Nov 25 16:13:35 compute-0 sudo[255377]: pam_unix(sudo:session): session closed for user root
Nov 25 16:13:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:13:35 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:13:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:13:35 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:13:35 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 54690f2b-7ecf-487d-8d86-1f7b432247cf does not exist
Nov 25 16:13:35 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev ffcf5784-979d-44cf-a754-097a2e0067d3 does not exist
Nov 25 16:13:35 compute-0 sudo[255546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:13:35 compute-0 sudo[255546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:13:35 compute-0 sudo[255546]: pam_unix(sudo:session): session closed for user root
Nov 25 16:13:35 compute-0 sudo[255571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 16:13:35 compute-0 sudo[255571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:13:35 compute-0 sudo[255571]: pam_unix(sudo:session): session closed for user root
Nov 25 16:13:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v737: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:36 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:13:36 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:13:36 compute-0 ceph-mon[74985]: pgmap v737: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v738: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:39 compute-0 ceph-mon[74985]: pgmap v738: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:39 compute-0 nova_compute[254092]: 2025-11-25 16:13:39.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:13:39 compute-0 nova_compute[254092]: 2025-11-25 16:13:39.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:13:39 compute-0 nova_compute[254092]: 2025-11-25 16:13:39.498 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 16:13:39 compute-0 nova_compute[254092]: 2025-11-25 16:13:39.498 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 16:13:39 compute-0 nova_compute[254092]: 2025-11-25 16:13:39.512 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 16:13:39 compute-0 nova_compute[254092]: 2025-11-25 16:13:39.512 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:13:39 compute-0 nova_compute[254092]: 2025-11-25 16:13:39.513 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:13:39 compute-0 nova_compute[254092]: 2025-11-25 16:13:39.513 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:13:39 compute-0 nova_compute[254092]: 2025-11-25 16:13:39.513 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:13:39 compute-0 nova_compute[254092]: 2025-11-25 16:13:39.513 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:13:39 compute-0 nova_compute[254092]: 2025-11-25 16:13:39.513 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:13:39 compute-0 nova_compute[254092]: 2025-11-25 16:13:39.514 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 16:13:39 compute-0 nova_compute[254092]: 2025-11-25 16:13:39.514 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:13:39 compute-0 nova_compute[254092]: 2025-11-25 16:13:39.534 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:13:39 compute-0 nova_compute[254092]: 2025-11-25 16:13:39.534 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:13:39 compute-0 nova_compute[254092]: 2025-11-25 16:13:39.534 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:13:39 compute-0 nova_compute[254092]: 2025-11-25 16:13:39.534 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 16:13:39 compute-0 nova_compute[254092]: 2025-11-25 16:13:39.535 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:13:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:13:39 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/478020630' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:13:39 compute-0 nova_compute[254092]: 2025-11-25 16:13:39.963 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:13:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:13:39
Nov 25 16:13:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:13:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:13:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', '.mgr', 'images', '.rgw.root', 'cephfs.cephfs.data', 'backups', 'volumes', 'default.rgw.log']
Nov 25 16:13:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:13:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:13:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:13:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:13:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:13:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:13:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:13:40 compute-0 nova_compute[254092]: 2025-11-25 16:13:40.154 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:13:40 compute-0 nova_compute[254092]: 2025-11-25 16:13:40.156 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5175MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 16:13:40 compute-0 nova_compute[254092]: 2025-11-25 16:13:40.156 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:13:40 compute-0 nova_compute[254092]: 2025-11-25 16:13:40.156 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:13:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:13:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:13:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:13:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:13:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:13:40 compute-0 nova_compute[254092]: 2025-11-25 16:13:40.238 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 16:13:40 compute-0 nova_compute[254092]: 2025-11-25 16:13:40.238 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 16:13:40 compute-0 nova_compute[254092]: 2025-11-25 16:13:40.262 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:13:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:13:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:13:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:13:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:13:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:13:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:13:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v739: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:40 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/478020630' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:13:40 compute-0 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 16:13:40 compute-0 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Cumulative writes: 5544 writes, 23K keys, 5544 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5544 writes, 828 syncs, 6.70 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
                                           Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f6487b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f6487b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f6487b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f6487b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f6487b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f6487b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f6487b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f6487b090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f6487b090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f6487b090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f6487b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f6487b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 16:13:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:13:40 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1037459494' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:13:40 compute-0 nova_compute[254092]: 2025-11-25 16:13:40.742 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:13:40 compute-0 nova_compute[254092]: 2025-11-25 16:13:40.748 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:13:40 compute-0 nova_compute[254092]: 2025-11-25 16:13:40.764 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:13:40 compute-0 nova_compute[254092]: 2025-11-25 16:13:40.766 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 16:13:40 compute-0 nova_compute[254092]: 2025-11-25 16:13:40.766 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:13:41 compute-0 ceph-mon[74985]: pgmap v739: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:41 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1037459494' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:13:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v740: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:43 compute-0 ceph-mon[74985]: pgmap v740: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v741: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:13:45 compute-0 ceph-mon[74985]: pgmap v741: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v742: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:47 compute-0 ceph-mon[74985]: pgmap v742: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v743: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Nov 25 16:13:48 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/16247239' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 25 16:13:48 compute-0 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 25 16:13:48 compute-0 ceph-mgr[75280]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 25 16:13:48 compute-0 ceph-mgr[75280]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 25 16:13:49 compute-0 ceph-mon[74985]: pgmap v743: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:49 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/16247239' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 25 16:13:50 compute-0 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 16:13:50 compute-0 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1201.2 total, 600.0 interval
                                           Cumulative writes: 6612 writes, 27K keys, 6612 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 6612 writes, 1143 syncs, 5.78 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 278 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.16              0.00         1    0.155       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.16              0.00         1    0.155       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.16              0.00         1    0.155       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5618da7511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5618da7511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5618da7511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5618da7511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.19              0.00         1    0.193       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.19              0.00         1    0.193       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.19              0.00         1    0.193       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5618da7511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5618da7511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5618da7511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5618da751090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5618da751090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.20              0.00         1    0.200       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.20              0.00         1    0.200       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.20              0.00         1    0.200       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5618da751090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.10              0.00         1    0.100       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.10              0.00         1    0.100       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.10              0.00         1    0.100       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5618da7511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5618da7511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 16:13:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:13:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v744: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:50 compute-0 ceph-mon[74985]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 25 16:13:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:13:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:13:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:13:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:13:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:13:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:13:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:13:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:13:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:13:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:13:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:13:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:13:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 16:13:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:13:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:13:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:13:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 16:13:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:13:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 16:13:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:13:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:13:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:13:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 16:13:51 compute-0 ceph-mon[74985]: pgmap v744: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v745: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:53 compute-0 ceph-mon[74985]: pgmap v745: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v746: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:54 compute-0 podman[255640]: 2025-11-25 16:13:54.652606945 +0000 UTC m=+0.073052153 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, config_id=multipathd)
Nov 25 16:13:54 compute-0 podman[255641]: 2025-11-25 16:13:54.666392437 +0000 UTC m=+0.082843237 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 25 16:13:54 compute-0 podman[255642]: 2025-11-25 16:13:54.713540531 +0000 UTC m=+0.122419707 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 25 16:13:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:13:55 compute-0 ceph-mon[74985]: pgmap v746: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v747: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:56 compute-0 ceph-mon[74985]: pgmap v747: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v748: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:13:59 compute-0 ceph-mon[74985]: pgmap v748: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:14:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v749: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:01 compute-0 ceph-mon[74985]: pgmap v749: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v750: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:03 compute-0 ceph-mon[74985]: pgmap v750: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v751: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:14:05 compute-0 ceph-mon[74985]: pgmap v751: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v752: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:07 compute-0 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 16:14:07 compute-0 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1201.5 total, 600.0 interval
                                           Cumulative writes: 5678 writes, 24K keys, 5678 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5678 writes, 844 syncs, 6.73 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.10              0.00         1    0.097       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.10              0.00         1    0.097       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.10              0.00         1    0.097       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55750a5b2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55750a5b2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55750a5b2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55750a5b2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55750a5b2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55750a5b2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55750a5b2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55750a5b2430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55750a5b2430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.12              0.00         1    0.118       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.12              0.00         1    0.118       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.12              0.00         1    0.118       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55750a5b2430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.15              0.00         1    0.146       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.15              0.00         1    0.146       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.15              0.00         1    0.146       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55750a5b2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55750a5b2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 16:14:07 compute-0 ceph-mon[74985]: pgmap v752: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v753: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:09 compute-0 ceph-mon[74985]: pgmap v753: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:14:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:14:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:14:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:14:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:14:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:14:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:14:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v754: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:10 compute-0 ceph-mon[74985]: pgmap v754: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v755: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:13 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Nov 25 16:14:13 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/204269038' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 25 16:14:13 compute-0 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.14353 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 25 16:14:13 compute-0 ceph-mgr[75280]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 25 16:14:13 compute-0 ceph-mgr[75280]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 25 16:14:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:14:13.580 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:14:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:14:13.580 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:14:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:14:13.581 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:14:13 compute-0 ceph-mon[74985]: pgmap v755: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:13 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/204269038' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 25 16:14:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v756: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:14 compute-0 ceph-mon[74985]: from='client.14353 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 25 16:14:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:14:15 compute-0 ceph-mon[74985]: pgmap v756: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v757: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:16 compute-0 ceph-mon[74985]: pgmap v757: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v758: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:19 compute-0 ceph-mon[74985]: pgmap v758: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:14:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v759: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:21 compute-0 ceph-mon[74985]: pgmap v759: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v760: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:23 compute-0 ceph-mon[74985]: pgmap v760: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v761: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:25 compute-0 ceph-mgr[75280]: [devicehealth INFO root] Check health
Nov 25 16:14:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:14:25 compute-0 ceph-mon[74985]: pgmap v761: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:25 compute-0 podman[255705]: 2025-11-25 16:14:25.639850513 +0000 UTC m=+0.056807614 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:14:25 compute-0 podman[255704]: 2025-11-25 16:14:25.68495393 +0000 UTC m=+0.101360097 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 16:14:25 compute-0 podman[255706]: 2025-11-25 16:14:25.685316 +0000 UTC m=+0.100373231 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 16:14:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v762: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:27 compute-0 ceph-mon[74985]: pgmap v762: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v763: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:28 compute-0 ceph-mon[74985]: pgmap v763: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:14:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v764: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:31 compute-0 ceph-mon[74985]: pgmap v764: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v765: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:33 compute-0 ceph-mon[74985]: pgmap v765: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v766: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:34 compute-0 ceph-mon[74985]: pgmap v766: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:14:36 compute-0 sudo[255771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:14:36 compute-0 sudo[255771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:14:36 compute-0 sudo[255771]: pam_unix(sudo:session): session closed for user root
Nov 25 16:14:36 compute-0 sudo[255796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:14:36 compute-0 sudo[255796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:14:36 compute-0 sudo[255796]: pam_unix(sudo:session): session closed for user root
Nov 25 16:14:36 compute-0 sudo[255821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:14:36 compute-0 sudo[255821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:14:36 compute-0 sudo[255821]: pam_unix(sudo:session): session closed for user root
Nov 25 16:14:36 compute-0 sudo[255846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 16:14:36 compute-0 sudo[255846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:14:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v767: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:36 compute-0 sudo[255846]: pam_unix(sudo:session): session closed for user root
Nov 25 16:14:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:14:36 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:14:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 16:14:36 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:14:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 16:14:36 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:14:36 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 1f5c59a4-f631-4012-a929-302685491e83 does not exist
Nov 25 16:14:36 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 04fddb78-95a1-4a10-b485-713d0cdb6911 does not exist
Nov 25 16:14:36 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev a501fadc-fbcb-4853-86c8-bbe49be75768 does not exist
Nov 25 16:14:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 16:14:36 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:14:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 16:14:36 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:14:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:14:36 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:14:36 compute-0 sudo[255902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:14:36 compute-0 sudo[255902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:14:36 compute-0 sudo[255902]: pam_unix(sudo:session): session closed for user root
Nov 25 16:14:36 compute-0 sudo[255927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:14:36 compute-0 sudo[255927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:14:36 compute-0 sudo[255927]: pam_unix(sudo:session): session closed for user root
Nov 25 16:14:36 compute-0 sudo[255952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:14:36 compute-0 sudo[255952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:14:36 compute-0 sudo[255952]: pam_unix(sudo:session): session closed for user root
Nov 25 16:14:37 compute-0 sudo[255977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 16:14:37 compute-0 sudo[255977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:14:37 compute-0 podman[256044]: 2025-11-25 16:14:37.331243875 +0000 UTC m=+0.020172685 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:14:37 compute-0 podman[256044]: 2025-11-25 16:14:37.517617767 +0000 UTC m=+0.206546557 container create 45cab3f4b2bc18242e654df3a1d6760a575254ea64623511d1d3ac12a42a6bfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:14:37 compute-0 ceph-mon[74985]: pgmap v767: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:37 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:14:37 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:14:37 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:14:37 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:14:37 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:14:37 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:14:37 compute-0 systemd[1]: Started libpod-conmon-45cab3f4b2bc18242e654df3a1d6760a575254ea64623511d1d3ac12a42a6bfc.scope.
Nov 25 16:14:37 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:14:37 compute-0 podman[256044]: 2025-11-25 16:14:37.820288699 +0000 UTC m=+0.509217529 container init 45cab3f4b2bc18242e654df3a1d6760a575254ea64623511d1d3ac12a42a6bfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:14:37 compute-0 podman[256044]: 2025-11-25 16:14:37.82628147 +0000 UTC m=+0.515210280 container start 45cab3f4b2bc18242e654df3a1d6760a575254ea64623511d1d3ac12a42a6bfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_noyce, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:14:37 compute-0 keen_noyce[256060]: 167 167
Nov 25 16:14:37 compute-0 systemd[1]: libpod-45cab3f4b2bc18242e654df3a1d6760a575254ea64623511d1d3ac12a42a6bfc.scope: Deactivated successfully.
Nov 25 16:14:37 compute-0 podman[256044]: 2025-11-25 16:14:37.953281298 +0000 UTC m=+0.642210128 container attach 45cab3f4b2bc18242e654df3a1d6760a575254ea64623511d1d3ac12a42a6bfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:14:37 compute-0 podman[256044]: 2025-11-25 16:14:37.954568944 +0000 UTC m=+0.643497734 container died 45cab3f4b2bc18242e654df3a1d6760a575254ea64623511d1d3ac12a42a6bfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_noyce, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:14:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-cecc8d6e0434490aa99fb5ac0c2c5dff76faa9395d8a1c1f5e95a4faf40a81f6-merged.mount: Deactivated successfully.
Nov 25 16:14:38 compute-0 podman[256044]: 2025-11-25 16:14:38.102125617 +0000 UTC m=+0.791054447 container remove 45cab3f4b2bc18242e654df3a1d6760a575254ea64623511d1d3ac12a42a6bfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:14:38 compute-0 systemd[1]: libpod-conmon-45cab3f4b2bc18242e654df3a1d6760a575254ea64623511d1d3ac12a42a6bfc.scope: Deactivated successfully.
Nov 25 16:14:38 compute-0 podman[256084]: 2025-11-25 16:14:38.275088337 +0000 UTC m=+0.045932931 container create 9036a62ac4884fcb1f87a4d4b73587b62373deab3d5b990eb00df387d941c621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hypatia, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:14:38 compute-0 systemd[1]: Started libpod-conmon-9036a62ac4884fcb1f87a4d4b73587b62373deab3d5b990eb00df387d941c621.scope.
Nov 25 16:14:38 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:14:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a0c273da2351cd45d8b82e4bc152bfebf900c21b6ff2d2cb0d1b02ba59b3b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:14:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a0c273da2351cd45d8b82e4bc152bfebf900c21b6ff2d2cb0d1b02ba59b3b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:14:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a0c273da2351cd45d8b82e4bc152bfebf900c21b6ff2d2cb0d1b02ba59b3b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:14:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a0c273da2351cd45d8b82e4bc152bfebf900c21b6ff2d2cb0d1b02ba59b3b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:14:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a0c273da2351cd45d8b82e4bc152bfebf900c21b6ff2d2cb0d1b02ba59b3b9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 16:14:38 compute-0 podman[256084]: 2025-11-25 16:14:38.326063153 +0000 UTC m=+0.096907767 container init 9036a62ac4884fcb1f87a4d4b73587b62373deab3d5b990eb00df387d941c621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hypatia, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 16:14:38 compute-0 podman[256084]: 2025-11-25 16:14:38.333630067 +0000 UTC m=+0.104474671 container start 9036a62ac4884fcb1f87a4d4b73587b62373deab3d5b990eb00df387d941c621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hypatia, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:14:38 compute-0 podman[256084]: 2025-11-25 16:14:38.337023609 +0000 UTC m=+0.107868223 container attach 9036a62ac4884fcb1f87a4d4b73587b62373deab3d5b990eb00df387d941c621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hypatia, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:14:38 compute-0 podman[256084]: 2025-11-25 16:14:38.256433753 +0000 UTC m=+0.027278397 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:14:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v768: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:38 compute-0 ceph-mon[74985]: pgmap v768: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:39 compute-0 hopeful_hypatia[256102]: --> passed data devices: 0 physical, 3 LVM
Nov 25 16:14:39 compute-0 hopeful_hypatia[256102]: --> relative data size: 1.0
Nov 25 16:14:39 compute-0 hopeful_hypatia[256102]: --> All data devices are unavailable
Nov 25 16:14:39 compute-0 systemd[1]: libpod-9036a62ac4884fcb1f87a4d4b73587b62373deab3d5b990eb00df387d941c621.scope: Deactivated successfully.
Nov 25 16:14:39 compute-0 podman[256084]: 2025-11-25 16:14:39.38201239 +0000 UTC m=+1.152857034 container died 9036a62ac4884fcb1f87a4d4b73587b62373deab3d5b990eb00df387d941c621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hypatia, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Nov 25 16:14:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8a0c273da2351cd45d8b82e4bc152bfebf900c21b6ff2d2cb0d1b02ba59b3b9-merged.mount: Deactivated successfully.
Nov 25 16:14:39 compute-0 podman[256084]: 2025-11-25 16:14:39.432173335 +0000 UTC m=+1.203017929 container remove 9036a62ac4884fcb1f87a4d4b73587b62373deab3d5b990eb00df387d941c621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 16:14:39 compute-0 systemd[1]: libpod-conmon-9036a62ac4884fcb1f87a4d4b73587b62373deab3d5b990eb00df387d941c621.scope: Deactivated successfully.
Nov 25 16:14:39 compute-0 sudo[255977]: pam_unix(sudo:session): session closed for user root
Nov 25 16:14:39 compute-0 sudo[256145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:14:39 compute-0 sudo[256145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:14:39 compute-0 sudo[256145]: pam_unix(sudo:session): session closed for user root
Nov 25 16:14:39 compute-0 sudo[256170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:14:39 compute-0 sudo[256170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:14:39 compute-0 sudo[256170]: pam_unix(sudo:session): session closed for user root
Nov 25 16:14:39 compute-0 sudo[256195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:14:39 compute-0 sudo[256195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:14:39 compute-0 sudo[256195]: pam_unix(sudo:session): session closed for user root
Nov 25 16:14:39 compute-0 sudo[256220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 16:14:39 compute-0 sudo[256220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:14:39 compute-0 podman[256287]: 2025-11-25 16:14:39.962862472 +0000 UTC m=+0.038870500 container create f6b626c208f702644547af1fe545af6b9f6c0b3874c1addf335a77a8e643be51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:14:40 compute-0 systemd[1]: Started libpod-conmon-f6b626c208f702644547af1fe545af6b9f6c0b3874c1addf335a77a8e643be51.scope.
Nov 25 16:14:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:14:40
Nov 25 16:14:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:14:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:14:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['backups', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', '.mgr', 'cephfs.cephfs.data', 'images', '.rgw.root']
Nov 25 16:14:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:14:40 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:14:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:14:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:14:40 compute-0 podman[256287]: 2025-11-25 16:14:40.026101859 +0000 UTC m=+0.102109897 container init f6b626c208f702644547af1fe545af6b9f6c0b3874c1addf335a77a8e643be51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bhabha, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:14:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:14:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:14:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:14:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:14:40 compute-0 podman[256287]: 2025-11-25 16:14:40.031810663 +0000 UTC m=+0.107818671 container start f6b626c208f702644547af1fe545af6b9f6c0b3874c1addf335a77a8e643be51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 16:14:40 compute-0 wonderful_bhabha[256303]: 167 167
Nov 25 16:14:40 compute-0 systemd[1]: libpod-f6b626c208f702644547af1fe545af6b9f6c0b3874c1addf335a77a8e643be51.scope: Deactivated successfully.
Nov 25 16:14:40 compute-0 podman[256287]: 2025-11-25 16:14:40.035915054 +0000 UTC m=+0.111923122 container attach f6b626c208f702644547af1fe545af6b9f6c0b3874c1addf335a77a8e643be51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bhabha, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 16:14:40 compute-0 podman[256287]: 2025-11-25 16:14:40.03650915 +0000 UTC m=+0.112517188 container died f6b626c208f702644547af1fe545af6b9f6c0b3874c1addf335a77a8e643be51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 16:14:40 compute-0 podman[256287]: 2025-11-25 16:14:39.948254927 +0000 UTC m=+0.024262955 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:14:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-f85442d45cf649ca5f022e9314ae62cb0d96b1eaa062696c0f399e14d6568d20-merged.mount: Deactivated successfully.
Nov 25 16:14:40 compute-0 podman[256287]: 2025-11-25 16:14:40.067563449 +0000 UTC m=+0.143571477 container remove f6b626c208f702644547af1fe545af6b9f6c0b3874c1addf335a77a8e643be51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 16:14:40 compute-0 systemd[1]: libpod-conmon-f6b626c208f702644547af1fe545af6b9f6c0b3874c1addf335a77a8e643be51.scope: Deactivated successfully.
Nov 25 16:14:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:14:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:14:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:14:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:14:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:14:40 compute-0 podman[256326]: 2025-11-25 16:14:40.223692764 +0000 UTC m=+0.034886233 container create 383f63cf72c2a40837e8e3a70d4b0224adacb5d1753de72d4677157f00f1065d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ptolemy, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 16:14:40 compute-0 systemd[1]: Started libpod-conmon-383f63cf72c2a40837e8e3a70d4b0224adacb5d1753de72d4677157f00f1065d.scope.
Nov 25 16:14:40 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:14:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f6600ae8a0c6b639aaeba7af3a0bc8eadd598540b253900700cf6cf02831ebc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:14:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f6600ae8a0c6b639aaeba7af3a0bc8eadd598540b253900700cf6cf02831ebc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:14:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f6600ae8a0c6b639aaeba7af3a0bc8eadd598540b253900700cf6cf02831ebc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:14:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f6600ae8a0c6b639aaeba7af3a0bc8eadd598540b253900700cf6cf02831ebc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:14:40 compute-0 podman[256326]: 2025-11-25 16:14:40.300463556 +0000 UTC m=+0.111657045 container init 383f63cf72c2a40837e8e3a70d4b0224adacb5d1753de72d4677157f00f1065d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 16:14:40 compute-0 podman[256326]: 2025-11-25 16:14:40.210199289 +0000 UTC m=+0.021392778 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:14:40 compute-0 podman[256326]: 2025-11-25 16:14:40.311256948 +0000 UTC m=+0.122450417 container start 383f63cf72c2a40837e8e3a70d4b0224adacb5d1753de72d4677157f00f1065d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ptolemy, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 16:14:40 compute-0 podman[256326]: 2025-11-25 16:14:40.313997792 +0000 UTC m=+0.125191281 container attach 383f63cf72c2a40837e8e3a70d4b0224adacb5d1753de72d4677157f00f1065d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ptolemy, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 16:14:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:14:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:14:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:14:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:14:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:14:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:14:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v769: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:40 compute-0 nova_compute[254092]: 2025-11-25 16:14:40.759 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:14:40 compute-0 nova_compute[254092]: 2025-11-25 16:14:40.774 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:14:40 compute-0 nova_compute[254092]: 2025-11-25 16:14:40.775 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 16:14:40 compute-0 nova_compute[254092]: 2025-11-25 16:14:40.775 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 16:14:40 compute-0 nova_compute[254092]: 2025-11-25 16:14:40.785 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 16:14:40 compute-0 nova_compute[254092]: 2025-11-25 16:14:40.786 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:14:40 compute-0 nova_compute[254092]: 2025-11-25 16:14:40.786 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:14:40 compute-0 nova_compute[254092]: 2025-11-25 16:14:40.786 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:14:40 compute-0 nova_compute[254092]: 2025-11-25 16:14:40.787 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:14:40 compute-0 nova_compute[254092]: 2025-11-25 16:14:40.787 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:14:40 compute-0 nova_compute[254092]: 2025-11-25 16:14:40.787 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 16:14:40 compute-0 nova_compute[254092]: 2025-11-25 16:14:40.787 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:14:40 compute-0 nova_compute[254092]: 2025-11-25 16:14:40.805 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:14:40 compute-0 nova_compute[254092]: 2025-11-25 16:14:40.806 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:14:40 compute-0 nova_compute[254092]: 2025-11-25 16:14:40.806 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:14:40 compute-0 nova_compute[254092]: 2025-11-25 16:14:40.806 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 16:14:40 compute-0 nova_compute[254092]: 2025-11-25 16:14:40.806 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]: {
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:     "0": [
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:         {
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:             "devices": [
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:                 "/dev/loop3"
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:             ],
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:             "lv_name": "ceph_lv0",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:             "lv_size": "21470642176",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:             "name": "ceph_lv0",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:             "tags": {
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:                 "ceph.cluster_name": "ceph",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:                 "ceph.crush_device_class": "",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:                 "ceph.encrypted": "0",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:                 "ceph.osd_id": "0",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:                 "ceph.type": "block",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:                 "ceph.vdo": "0"
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:             },
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:             "type": "block",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:             "vg_name": "ceph_vg0"
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:         }
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:     ],
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:     "1": [
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:         {
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:             "devices": [
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:                 "/dev/loop4"
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:             ],
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:             "lv_name": "ceph_lv1",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:             "lv_size": "21470642176",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:             "name": "ceph_lv1",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:             "tags": {
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:                 "ceph.cluster_name": "ceph",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:                 "ceph.crush_device_class": "",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:                 "ceph.encrypted": "0",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:                 "ceph.osd_id": "1",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:                 "ceph.type": "block",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:                 "ceph.vdo": "0"
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:             },
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:             "type": "block",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:             "vg_name": "ceph_vg1"
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:         }
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:     ],
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:     "2": [
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:         {
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:             "devices": [
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:                 "/dev/loop5"
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:             ],
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:             "lv_name": "ceph_lv2",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:             "lv_size": "21470642176",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:             "name": "ceph_lv2",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:             "tags": {
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:                 "ceph.cluster_name": "ceph",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:                 "ceph.crush_device_class": "",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:                 "ceph.encrypted": "0",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:                 "ceph.osd_id": "2",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:                 "ceph.type": "block",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:                 "ceph.vdo": "0"
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:             },
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:             "type": "block",
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:             "vg_name": "ceph_vg2"
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:         }
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]:     ]
Nov 25 16:14:41 compute-0 competent_ptolemy[256341]: }
Nov 25 16:14:41 compute-0 systemd[1]: libpod-383f63cf72c2a40837e8e3a70d4b0224adacb5d1753de72d4677157f00f1065d.scope: Deactivated successfully.
Nov 25 16:14:41 compute-0 podman[256326]: 2025-11-25 16:14:41.06958134 +0000 UTC m=+0.880774809 container died 383f63cf72c2a40837e8e3a70d4b0224adacb5d1753de72d4677157f00f1065d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ptolemy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:14:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f6600ae8a0c6b639aaeba7af3a0bc8eadd598540b253900700cf6cf02831ebc-merged.mount: Deactivated successfully.
Nov 25 16:14:41 compute-0 podman[256326]: 2025-11-25 16:14:41.126487906 +0000 UTC m=+0.937681375 container remove 383f63cf72c2a40837e8e3a70d4b0224adacb5d1753de72d4677157f00f1065d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ptolemy, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:14:41 compute-0 systemd[1]: libpod-conmon-383f63cf72c2a40837e8e3a70d4b0224adacb5d1753de72d4677157f00f1065d.scope: Deactivated successfully.
Nov 25 16:14:41 compute-0 sudo[256220]: pam_unix(sudo:session): session closed for user root
Nov 25 16:14:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:14:41 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2652670263' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:14:41 compute-0 sudo[256381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:14:41 compute-0 sudo[256381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:14:41 compute-0 nova_compute[254092]: 2025-11-25 16:14:41.226 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:14:41 compute-0 sudo[256381]: pam_unix(sudo:session): session closed for user root
Nov 25 16:14:41 compute-0 sudo[256408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:14:41 compute-0 sudo[256408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:14:41 compute-0 sudo[256408]: pam_unix(sudo:session): session closed for user root
Nov 25 16:14:41 compute-0 sudo[256433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:14:41 compute-0 sudo[256433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:14:41 compute-0 sudo[256433]: pam_unix(sudo:session): session closed for user root
Nov 25 16:14:41 compute-0 nova_compute[254092]: 2025-11-25 16:14:41.370 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:14:41 compute-0 nova_compute[254092]: 2025-11-25 16:14:41.371 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5166MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 16:14:41 compute-0 nova_compute[254092]: 2025-11-25 16:14:41.371 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:14:41 compute-0 nova_compute[254092]: 2025-11-25 16:14:41.371 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:14:41 compute-0 sudo[256458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 16:14:41 compute-0 sudo[256458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:14:41 compute-0 nova_compute[254092]: 2025-11-25 16:14:41.438 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 16:14:41 compute-0 nova_compute[254092]: 2025-11-25 16:14:41.439 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 16:14:41 compute-0 nova_compute[254092]: 2025-11-25 16:14:41.454 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:14:41 compute-0 ceph-mon[74985]: pgmap v769: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:41 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2652670263' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:14:41 compute-0 podman[256543]: 2025-11-25 16:14:41.743404014 +0000 UTC m=+0.046606367 container create 6fc2eee5cb8e68d88ddc1a95c4b3627198ec2ae94d8752fdc17ec0d9c86f7f77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 16:14:41 compute-0 systemd[1]: Started libpod-conmon-6fc2eee5cb8e68d88ddc1a95c4b3627198ec2ae94d8752fdc17ec0d9c86f7f77.scope.
Nov 25 16:14:41 compute-0 podman[256543]: 2025-11-25 16:14:41.725440224 +0000 UTC m=+0.028642557 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:14:41 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:14:41 compute-0 podman[256543]: 2025-11-25 16:14:41.83672801 +0000 UTC m=+0.139930413 container init 6fc2eee5cb8e68d88ddc1a95c4b3627198ec2ae94d8752fdc17ec0d9c86f7f77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_taussig, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Nov 25 16:14:41 compute-0 podman[256543]: 2025-11-25 16:14:41.845015721 +0000 UTC m=+0.148218034 container start 6fc2eee5cb8e68d88ddc1a95c4b3627198ec2ae94d8752fdc17ec0d9c86f7f77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:14:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:14:41 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3728102570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:14:41 compute-0 podman[256543]: 2025-11-25 16:14:41.849317756 +0000 UTC m=+0.152520149 container attach 6fc2eee5cb8e68d88ddc1a95c4b3627198ec2ae94d8752fdc17ec0d9c86f7f77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:14:41 compute-0 angry_taussig[256559]: 167 167
Nov 25 16:14:41 compute-0 systemd[1]: libpod-6fc2eee5cb8e68d88ddc1a95c4b3627198ec2ae94d8752fdc17ec0d9c86f7f77.scope: Deactivated successfully.
Nov 25 16:14:41 compute-0 podman[256543]: 2025-11-25 16:14:41.853790036 +0000 UTC m=+0.156992349 container died 6fc2eee5cb8e68d88ddc1a95c4b3627198ec2ae94d8752fdc17ec0d9c86f7f77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_taussig, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 16:14:41 compute-0 nova_compute[254092]: 2025-11-25 16:14:41.873 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:14:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-323e7eb0510db34d8c153a4f08b82a7a122cb7b73cdd4ba5caef1edbcde9b061-merged.mount: Deactivated successfully.
Nov 25 16:14:41 compute-0 nova_compute[254092]: 2025-11-25 16:14:41.884 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:14:41 compute-0 podman[256543]: 2025-11-25 16:14:41.890471137 +0000 UTC m=+0.193673450 container remove 6fc2eee5cb8e68d88ddc1a95c4b3627198ec2ae94d8752fdc17ec0d9c86f7f77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_taussig, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 16:14:41 compute-0 systemd[1]: libpod-conmon-6fc2eee5cb8e68d88ddc1a95c4b3627198ec2ae94d8752fdc17ec0d9c86f7f77.scope: Deactivated successfully.
Nov 25 16:14:41 compute-0 nova_compute[254092]: 2025-11-25 16:14:41.903 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:14:41 compute-0 nova_compute[254092]: 2025-11-25 16:14:41.907 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 16:14:41 compute-0 nova_compute[254092]: 2025-11-25 16:14:41.907 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.536s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:14:42 compute-0 podman[256585]: 2025-11-25 16:14:42.052454149 +0000 UTC m=+0.041435719 container create 472c837be3a9292608935b3a44f325a12e250155c9c5f3fa86c5ddcf20c7b197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_pike, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 16:14:42 compute-0 systemd[1]: Started libpod-conmon-472c837be3a9292608935b3a44f325a12e250155c9c5f3fa86c5ddcf20c7b197.scope.
Nov 25 16:14:42 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:14:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a252270fb6ea73da7bd09317d3649c0f70c3fc332de4650e88dde473e9826e99/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:14:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a252270fb6ea73da7bd09317d3649c0f70c3fc332de4650e88dde473e9826e99/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:14:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a252270fb6ea73da7bd09317d3649c0f70c3fc332de4650e88dde473e9826e99/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:14:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a252270fb6ea73da7bd09317d3649c0f70c3fc332de4650e88dde473e9826e99/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:14:42 compute-0 podman[256585]: 2025-11-25 16:14:42.117823057 +0000 UTC m=+0.106804657 container init 472c837be3a9292608935b3a44f325a12e250155c9c5f3fa86c5ddcf20c7b197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_pike, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 16:14:42 compute-0 podman[256585]: 2025-11-25 16:14:42.126497278 +0000 UTC m=+0.115478848 container start 472c837be3a9292608935b3a44f325a12e250155c9c5f3fa86c5ddcf20c7b197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:14:42 compute-0 podman[256585]: 2025-11-25 16:14:42.035417142 +0000 UTC m=+0.024398732 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:14:42 compute-0 podman[256585]: 2025-11-25 16:14:42.130444034 +0000 UTC m=+0.119425624 container attach 472c837be3a9292608935b3a44f325a12e250155c9c5f3fa86c5ddcf20c7b197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_pike, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Nov 25 16:14:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v770: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:42 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3728102570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:14:42 compute-0 nova_compute[254092]: 2025-11-25 16:14:42.618 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:14:42 compute-0 nova_compute[254092]: 2025-11-25 16:14:42.618 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:14:43 compute-0 sharp_pike[256601]: {
Nov 25 16:14:43 compute-0 sharp_pike[256601]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 16:14:43 compute-0 sharp_pike[256601]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:14:43 compute-0 sharp_pike[256601]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 16:14:43 compute-0 sharp_pike[256601]:         "osd_id": 1,
Nov 25 16:14:43 compute-0 sharp_pike[256601]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:14:43 compute-0 sharp_pike[256601]:         "type": "bluestore"
Nov 25 16:14:43 compute-0 sharp_pike[256601]:     },
Nov 25 16:14:43 compute-0 sharp_pike[256601]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 16:14:43 compute-0 sharp_pike[256601]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:14:43 compute-0 sharp_pike[256601]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 16:14:43 compute-0 sharp_pike[256601]:         "osd_id": 2,
Nov 25 16:14:43 compute-0 sharp_pike[256601]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:14:43 compute-0 sharp_pike[256601]:         "type": "bluestore"
Nov 25 16:14:43 compute-0 sharp_pike[256601]:     },
Nov 25 16:14:43 compute-0 sharp_pike[256601]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 16:14:43 compute-0 sharp_pike[256601]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:14:43 compute-0 sharp_pike[256601]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 16:14:43 compute-0 sharp_pike[256601]:         "osd_id": 0,
Nov 25 16:14:43 compute-0 sharp_pike[256601]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:14:43 compute-0 sharp_pike[256601]:         "type": "bluestore"
Nov 25 16:14:43 compute-0 sharp_pike[256601]:     }
Nov 25 16:14:43 compute-0 sharp_pike[256601]: }
Nov 25 16:14:43 compute-0 systemd[1]: libpod-472c837be3a9292608935b3a44f325a12e250155c9c5f3fa86c5ddcf20c7b197.scope: Deactivated successfully.
Nov 25 16:14:43 compute-0 podman[256585]: 2025-11-25 16:14:43.051159665 +0000 UTC m=+1.040141235 container died 472c837be3a9292608935b3a44f325a12e250155c9c5f3fa86c5ddcf20c7b197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_pike, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:14:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-a252270fb6ea73da7bd09317d3649c0f70c3fc332de4650e88dde473e9826e99-merged.mount: Deactivated successfully.
Nov 25 16:14:43 compute-0 podman[256585]: 2025-11-25 16:14:43.102306852 +0000 UTC m=+1.091288422 container remove 472c837be3a9292608935b3a44f325a12e250155c9c5f3fa86c5ddcf20c7b197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_pike, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:14:43 compute-0 systemd[1]: libpod-conmon-472c837be3a9292608935b3a44f325a12e250155c9c5f3fa86c5ddcf20c7b197.scope: Deactivated successfully.
Nov 25 16:14:43 compute-0 sudo[256458]: pam_unix(sudo:session): session closed for user root
Nov 25 16:14:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:14:43 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:14:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:14:43 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:14:43 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev dccdd01a-05bb-4ede-b510-4daac279521a does not exist
Nov 25 16:14:43 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 225a0fbe-878d-4a21-b708-07bea7aec32f does not exist
Nov 25 16:14:43 compute-0 sudo[256648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:14:43 compute-0 sudo[256648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:14:43 compute-0 sudo[256648]: pam_unix(sudo:session): session closed for user root
Nov 25 16:14:43 compute-0 sudo[256673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 16:14:43 compute-0 sudo[256673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:14:43 compute-0 sudo[256673]: pam_unix(sudo:session): session closed for user root
Nov 25 16:14:43 compute-0 ceph-mon[74985]: pgmap v770: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:43 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:14:43 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:14:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v771: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:14:45 compute-0 ceph-mon[74985]: pgmap v771: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v772: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:47 compute-0 ceph-mon[74985]: pgmap v772: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v773: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:49 compute-0 ceph-mon[74985]: pgmap v773: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:14:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v774: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:14:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:14:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:14:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:14:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:14:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:14:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:14:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:14:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:14:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:14:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:14:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:14:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 16:14:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:14:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:14:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:14:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 16:14:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:14:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 16:14:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:14:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:14:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:14:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 16:14:51 compute-0 ceph-mon[74985]: pgmap v774: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v775: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:53 compute-0 ceph-mon[74985]: pgmap v775: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v776: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 16:14:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/768575814' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:14:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 16:14:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/768575814' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:14:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:14:55 compute-0 ceph-mon[74985]: pgmap v776: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/768575814' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:14:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/768575814' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:14:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v777: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:56 compute-0 podman[256699]: 2025-11-25 16:14:56.6402554 +0000 UTC m=+0.053871351 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 16:14:56 compute-0 podman[256698]: 2025-11-25 16:14:56.640034915 +0000 UTC m=+0.061030303 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118)
Nov 25 16:14:56 compute-0 podman[256700]: 2025-11-25 16:14:56.675593125 +0000 UTC m=+0.088691832 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:14:56 compute-0 ceph-mon[74985]: pgmap v777: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v778: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:14:59 compute-0 ceph-mon[74985]: pgmap v778: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:15:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v779: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:01 compute-0 ceph-mon[74985]: pgmap v779: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:15:01.986 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:15:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:15:01.987 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 16:15:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:15:01.987 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:15:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v780: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:02 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Nov 25 16:15:02 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:15:02.533841) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 16:15:02 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Nov 25 16:15:02 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087302533887, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1305, "num_deletes": 251, "total_data_size": 2042198, "memory_usage": 2066504, "flush_reason": "Manual Compaction"}
Nov 25 16:15:02 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Nov 25 16:15:02 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087302545846, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2001882, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15166, "largest_seqno": 16470, "table_properties": {"data_size": 1995681, "index_size": 3468, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 12916, "raw_average_key_size": 19, "raw_value_size": 1983240, "raw_average_value_size": 3023, "num_data_blocks": 159, "num_entries": 656, "num_filter_entries": 656, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764087171, "oldest_key_time": 1764087171, "file_creation_time": 1764087302, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:15:02 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 12042 microseconds, and 5439 cpu microseconds.
Nov 25 16:15:02 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:15:02 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:15:02.545882) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2001882 bytes OK
Nov 25 16:15:02 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:15:02.545899) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Nov 25 16:15:02 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:15:02.547201) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Nov 25 16:15:02 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:15:02.547218) EVENT_LOG_v1 {"time_micros": 1764087302547212, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 16:15:02 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:15:02.547237) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 16:15:02 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2036316, prev total WAL file size 2036316, number of live WAL files 2.
Nov 25 16:15:02 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:15:02 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:15:02.547944) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Nov 25 16:15:02 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 16:15:02 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(1954KB)], [35(7613KB)]
Nov 25 16:15:02 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087302547987, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9798102, "oldest_snapshot_seqno": -1}
Nov 25 16:15:02 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4107 keys, 8019197 bytes, temperature: kUnknown
Nov 25 16:15:02 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087302599157, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 8019197, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7989288, "index_size": 18535, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10309, "raw_key_size": 100845, "raw_average_key_size": 24, "raw_value_size": 7912456, "raw_average_value_size": 1926, "num_data_blocks": 785, "num_entries": 4107, "num_filter_entries": 4107, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764087302, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:15:02 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:15:02 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:15:02.599386) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 8019197 bytes
Nov 25 16:15:02 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:15:02.600885) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 191.2 rd, 156.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 7.4 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(8.9) write-amplify(4.0) OK, records in: 4621, records dropped: 514 output_compression: NoCompression
Nov 25 16:15:02 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:15:02.600904) EVENT_LOG_v1 {"time_micros": 1764087302600894, "job": 16, "event": "compaction_finished", "compaction_time_micros": 51241, "compaction_time_cpu_micros": 15941, "output_level": 6, "num_output_files": 1, "total_output_size": 8019197, "num_input_records": 4621, "num_output_records": 4107, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 16:15:02 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:15:02 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087302601295, "job": 16, "event": "table_file_deletion", "file_number": 37}
Nov 25 16:15:02 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:15:02 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087302602761, "job": 16, "event": "table_file_deletion", "file_number": 35}
Nov 25 16:15:02 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:15:02.547840) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:15:02 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:15:02.602821) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:15:02 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:15:02.602825) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:15:02 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:15:02.602826) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:15:02 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:15:02.602828) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:15:02 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:15:02.602829) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:15:03 compute-0 ceph-mon[74985]: pgmap v780: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v781: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:15:05 compute-0 ceph-mon[74985]: pgmap v781: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v782: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:06 compute-0 ceph-mon[74985]: pgmap v782: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v783: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:09 compute-0 ceph-mon[74985]: pgmap v783: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:15:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:15:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:15:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:15:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:15:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:15:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:15:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v784: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:11 compute-0 ceph-mon[74985]: pgmap v784: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v785: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:13 compute-0 ceph-mon[74985]: pgmap v785: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:15:13.582 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:15:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:15:13.582 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:15:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:15:13.582 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:15:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v786: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:15:15 compute-0 ceph-mon[74985]: pgmap v786: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v787: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:17 compute-0 ceph-mon[74985]: pgmap v787: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v788: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:19 compute-0 ceph-mon[74985]: pgmap v788: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:15:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v789: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:21 compute-0 ceph-mon[74985]: pgmap v789: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v790: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:22 compute-0 ceph-mon[74985]: pgmap v790: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v791: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:15:25 compute-0 ceph-mon[74985]: pgmap v791: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:25 compute-0 sshd-session[256763]: Connection closed by authenticating user root 171.244.51.45 port 51562 [preauth]
Nov 25 16:15:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v792: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:27 compute-0 ceph-mon[74985]: pgmap v792: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:27 compute-0 podman[256765]: 2025-11-25 16:15:27.639336553 +0000 UTC m=+0.059475533 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 16:15:27 compute-0 podman[256766]: 2025-11-25 16:15:27.654317083 +0000 UTC m=+0.073698992 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 16:15:27 compute-0 podman[256767]: 2025-11-25 16:15:27.655665368 +0000 UTC m=+0.071638076 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 16:15:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v793: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:29 compute-0 ceph-mon[74985]: pgmap v793: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:15:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v794: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:31 compute-0 ceph-mon[74985]: pgmap v794: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v795: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:33 compute-0 ceph-mon[74985]: pgmap v795: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v796: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:15:35 compute-0 ceph-mon[74985]: pgmap v796: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v797: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:36 compute-0 ceph-mon[74985]: pgmap v797: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v798: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:39 compute-0 nova_compute[254092]: 2025-11-25 16:15:39.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:15:39 compute-0 nova_compute[254092]: 2025-11-25 16:15:39.530 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:15:39 compute-0 nova_compute[254092]: 2025-11-25 16:15:39.530 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:15:39 compute-0 nova_compute[254092]: 2025-11-25 16:15:39.531 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:15:39 compute-0 nova_compute[254092]: 2025-11-25 16:15:39.531 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 16:15:39 compute-0 nova_compute[254092]: 2025-11-25 16:15:39.532 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:15:39 compute-0 ceph-mon[74985]: pgmap v798: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:15:39 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2616823395' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:15:39 compute-0 nova_compute[254092]: 2025-11-25 16:15:39.982 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:15:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:15:40
Nov 25 16:15:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:15:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:15:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'backups', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'vms', 'images', 'cephfs.cephfs.meta', '.mgr']
Nov 25 16:15:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:15:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:15:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:15:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:15:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:15:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:15:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:15:40 compute-0 nova_compute[254092]: 2025-11-25 16:15:40.148 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:15:40 compute-0 nova_compute[254092]: 2025-11-25 16:15:40.150 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5185MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 16:15:40 compute-0 nova_compute[254092]: 2025-11-25 16:15:40.150 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:15:40 compute-0 nova_compute[254092]: 2025-11-25 16:15:40.150 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:15:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:15:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:15:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:15:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:15:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:15:40 compute-0 nova_compute[254092]: 2025-11-25 16:15:40.230 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 16:15:40 compute-0 nova_compute[254092]: 2025-11-25 16:15:40.231 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 16:15:40 compute-0 nova_compute[254092]: 2025-11-25 16:15:40.256 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:15:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:15:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:15:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:15:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:15:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:15:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:15:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v799: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:15:40 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3945270563' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:15:40 compute-0 nova_compute[254092]: 2025-11-25 16:15:40.659 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.402s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:15:40 compute-0 nova_compute[254092]: 2025-11-25 16:15:40.663 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:15:40 compute-0 nova_compute[254092]: 2025-11-25 16:15:40.679 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:15:40 compute-0 nova_compute[254092]: 2025-11-25 16:15:40.681 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 16:15:40 compute-0 nova_compute[254092]: 2025-11-25 16:15:40.681 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.531s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:15:40 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2616823395' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:15:40 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3945270563' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:15:41 compute-0 nova_compute[254092]: 2025-11-25 16:15:41.681 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:15:41 compute-0 nova_compute[254092]: 2025-11-25 16:15:41.681 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 16:15:41 compute-0 nova_compute[254092]: 2025-11-25 16:15:41.681 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 16:15:41 compute-0 nova_compute[254092]: 2025-11-25 16:15:41.694 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 16:15:41 compute-0 nova_compute[254092]: 2025-11-25 16:15:41.695 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:15:41 compute-0 nova_compute[254092]: 2025-11-25 16:15:41.695 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:15:41 compute-0 nova_compute[254092]: 2025-11-25 16:15:41.695 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:15:41 compute-0 ceph-mon[74985]: pgmap v799: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v800: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:42 compute-0 nova_compute[254092]: 2025-11-25 16:15:42.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:15:42 compute-0 nova_compute[254092]: 2025-11-25 16:15:42.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:15:42 compute-0 nova_compute[254092]: 2025-11-25 16:15:42.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:15:42 compute-0 nova_compute[254092]: 2025-11-25 16:15:42.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 16:15:42 compute-0 ceph-mon[74985]: pgmap v800: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:43 compute-0 sudo[256875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:15:43 compute-0 sudo[256875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:15:43 compute-0 sudo[256875]: pam_unix(sudo:session): session closed for user root
Nov 25 16:15:43 compute-0 sudo[256900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:15:43 compute-0 sudo[256900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:15:43 compute-0 sudo[256900]: pam_unix(sudo:session): session closed for user root
Nov 25 16:15:43 compute-0 sudo[256925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:15:43 compute-0 sudo[256925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:15:43 compute-0 sudo[256925]: pam_unix(sudo:session): session closed for user root
Nov 25 16:15:43 compute-0 sudo[256950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 16:15:43 compute-0 sudo[256950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:15:43 compute-0 nova_compute[254092]: 2025-11-25 16:15:43.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:15:43 compute-0 sudo[256950]: pam_unix(sudo:session): session closed for user root
Nov 25 16:15:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 16:15:43 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 16:15:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:15:43 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:15:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 16:15:43 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:15:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 16:15:43 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:15:43 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev be3640c4-23fb-4043-b2fd-93b684932138 does not exist
Nov 25 16:15:43 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 6731975a-b50c-4aa5-9093-5431f5d20487 does not exist
Nov 25 16:15:43 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 5fd52b80-9d7d-4aa4-98d7-57a56344630c does not exist
Nov 25 16:15:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 16:15:44 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:15:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 16:15:44 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:15:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:15:44 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:15:44 compute-0 sudo[257006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:15:44 compute-0 sudo[257006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:15:44 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 16:15:44 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:15:44 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:15:44 compute-0 sudo[257006]: pam_unix(sudo:session): session closed for user root
Nov 25 16:15:44 compute-0 sudo[257031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:15:44 compute-0 sudo[257031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:15:44 compute-0 sudo[257031]: pam_unix(sudo:session): session closed for user root
Nov 25 16:15:44 compute-0 sudo[257056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:15:44 compute-0 sudo[257056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:15:44 compute-0 sudo[257056]: pam_unix(sudo:session): session closed for user root
Nov 25 16:15:44 compute-0 sudo[257081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 16:15:44 compute-0 sudo[257081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:15:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v801: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:44 compute-0 podman[257147]: 2025-11-25 16:15:44.698836587 +0000 UTC m=+0.099678926 container create 2058c28a6da524fb9223b10b3c68c9869633d99a9a0476e4a7a8856e393b7b7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:15:44 compute-0 podman[257147]: 2025-11-25 16:15:44.619351482 +0000 UTC m=+0.020193831 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:15:44 compute-0 systemd[1]: Started libpod-conmon-2058c28a6da524fb9223b10b3c68c9869633d99a9a0476e4a7a8856e393b7b7b.scope.
Nov 25 16:15:44 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:15:45 compute-0 podman[257147]: 2025-11-25 16:15:45.025521644 +0000 UTC m=+0.426363973 container init 2058c28a6da524fb9223b10b3c68c9869633d99a9a0476e4a7a8856e393b7b7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:15:45 compute-0 podman[257147]: 2025-11-25 16:15:45.032575182 +0000 UTC m=+0.433417501 container start 2058c28a6da524fb9223b10b3c68c9869633d99a9a0476e4a7a8856e393b7b7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_noether, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:15:45 compute-0 zen_noether[257164]: 167 167
Nov 25 16:15:45 compute-0 systemd[1]: libpod-2058c28a6da524fb9223b10b3c68c9869633d99a9a0476e4a7a8856e393b7b7b.scope: Deactivated successfully.
Nov 25 16:15:45 compute-0 podman[257147]: 2025-11-25 16:15:45.126892204 +0000 UTC m=+0.527734553 container attach 2058c28a6da524fb9223b10b3c68c9869633d99a9a0476e4a7a8856e393b7b7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:15:45 compute-0 podman[257147]: 2025-11-25 16:15:45.128079865 +0000 UTC m=+0.528922194 container died 2058c28a6da524fb9223b10b3c68c9869633d99a9a0476e4a7a8856e393b7b7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 16:15:45 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:15:45 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:15:45 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:15:45 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:15:45 compute-0 ceph-mon[74985]: pgmap v801: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:15:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-1cb58db6c1a0616e4655dbaed01b45b5efa27a1d9fe23b35cbd5780d409cc450-merged.mount: Deactivated successfully.
Nov 25 16:15:45 compute-0 podman[257147]: 2025-11-25 16:15:45.549108065 +0000 UTC m=+0.949950394 container remove 2058c28a6da524fb9223b10b3c68c9869633d99a9a0476e4a7a8856e393b7b7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_noether, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:15:45 compute-0 systemd[1]: libpod-conmon-2058c28a6da524fb9223b10b3c68c9869633d99a9a0476e4a7a8856e393b7b7b.scope: Deactivated successfully.
Nov 25 16:15:45 compute-0 podman[257188]: 2025-11-25 16:15:45.705544798 +0000 UTC m=+0.045691193 container create a500e7eec215f12489648af104620e624f0b6b6ab1b5cb6e5496691f149784ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_wilson, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:15:45 compute-0 podman[257188]: 2025-11-25 16:15:45.680033175 +0000 UTC m=+0.020179590 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:15:45 compute-0 systemd[1]: Started libpod-conmon-a500e7eec215f12489648af104620e624f0b6b6ab1b5cb6e5496691f149784ee.scope.
Nov 25 16:15:45 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:15:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9f4b6ebbba05ac49996392b14c6041bfb0d49a74455f9b907c8cd507d17afc0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:15:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9f4b6ebbba05ac49996392b14c6041bfb0d49a74455f9b907c8cd507d17afc0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:15:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9f4b6ebbba05ac49996392b14c6041bfb0d49a74455f9b907c8cd507d17afc0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:15:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9f4b6ebbba05ac49996392b14c6041bfb0d49a74455f9b907c8cd507d17afc0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:15:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9f4b6ebbba05ac49996392b14c6041bfb0d49a74455f9b907c8cd507d17afc0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 16:15:45 compute-0 podman[257188]: 2025-11-25 16:15:45.857004818 +0000 UTC m=+0.197151213 container init a500e7eec215f12489648af104620e624f0b6b6ab1b5cb6e5496691f149784ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_wilson, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 16:15:45 compute-0 podman[257188]: 2025-11-25 16:15:45.863627795 +0000 UTC m=+0.203774180 container start a500e7eec215f12489648af104620e624f0b6b6ab1b5cb6e5496691f149784ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_wilson, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:15:45 compute-0 podman[257188]: 2025-11-25 16:15:45.889799495 +0000 UTC m=+0.229945920 container attach a500e7eec215f12489648af104620e624f0b6b6ab1b5cb6e5496691f149784ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 16:15:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v802: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:46 compute-0 elated_wilson[257205]: --> passed data devices: 0 physical, 3 LVM
Nov 25 16:15:46 compute-0 elated_wilson[257205]: --> relative data size: 1.0
Nov 25 16:15:46 compute-0 elated_wilson[257205]: --> All data devices are unavailable
Nov 25 16:15:46 compute-0 systemd[1]: libpod-a500e7eec215f12489648af104620e624f0b6b6ab1b5cb6e5496691f149784ee.scope: Deactivated successfully.
Nov 25 16:15:46 compute-0 podman[257188]: 2025-11-25 16:15:46.857183174 +0000 UTC m=+1.197329589 container died a500e7eec215f12489648af104620e624f0b6b6ab1b5cb6e5496691f149784ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_wilson, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:15:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9f4b6ebbba05ac49996392b14c6041bfb0d49a74455f9b907c8cd507d17afc0-merged.mount: Deactivated successfully.
Nov 25 16:15:47 compute-0 podman[257188]: 2025-11-25 16:15:47.386805247 +0000 UTC m=+1.726951642 container remove a500e7eec215f12489648af104620e624f0b6b6ab1b5cb6e5496691f149784ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:15:47 compute-0 sudo[257081]: pam_unix(sudo:session): session closed for user root
Nov 25 16:15:47 compute-0 systemd[1]: libpod-conmon-a500e7eec215f12489648af104620e624f0b6b6ab1b5cb6e5496691f149784ee.scope: Deactivated successfully.
Nov 25 16:15:47 compute-0 sudo[257247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:15:47 compute-0 sudo[257247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:15:47 compute-0 sudo[257247]: pam_unix(sudo:session): session closed for user root
Nov 25 16:15:47 compute-0 sudo[257272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:15:47 compute-0 sudo[257272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:15:47 compute-0 sudo[257272]: pam_unix(sudo:session): session closed for user root
Nov 25 16:15:47 compute-0 sudo[257297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:15:47 compute-0 sudo[257297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:15:47 compute-0 sudo[257297]: pam_unix(sudo:session): session closed for user root
Nov 25 16:15:47 compute-0 sudo[257322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 16:15:47 compute-0 sudo[257322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:15:47 compute-0 ceph-mon[74985]: pgmap v802: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:48 compute-0 podman[257386]: 2025-11-25 16:15:47.913982084 +0000 UTC m=+0.023287794 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:15:48 compute-0 podman[257386]: 2025-11-25 16:15:48.013273528 +0000 UTC m=+0.122579258 container create ee8149f4d916878bf95fd5ba7e49f11b1bd38a0f4a4a1597ef12d851350045d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_rubin, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:15:48 compute-0 systemd[1]: Started libpod-conmon-ee8149f4d916878bf95fd5ba7e49f11b1bd38a0f4a4a1597ef12d851350045d2.scope.
Nov 25 16:15:48 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:15:48 compute-0 podman[257386]: 2025-11-25 16:15:48.22160204 +0000 UTC m=+0.330907740 container init ee8149f4d916878bf95fd5ba7e49f11b1bd38a0f4a4a1597ef12d851350045d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_rubin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:15:48 compute-0 podman[257386]: 2025-11-25 16:15:48.228394041 +0000 UTC m=+0.337699751 container start ee8149f4d916878bf95fd5ba7e49f11b1bd38a0f4a4a1597ef12d851350045d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Nov 25 16:15:48 compute-0 quirky_rubin[257402]: 167 167
Nov 25 16:15:48 compute-0 systemd[1]: libpod-ee8149f4d916878bf95fd5ba7e49f11b1bd38a0f4a4a1597ef12d851350045d2.scope: Deactivated successfully.
Nov 25 16:15:48 compute-0 podman[257386]: 2025-11-25 16:15:48.278610524 +0000 UTC m=+0.387916244 container attach ee8149f4d916878bf95fd5ba7e49f11b1bd38a0f4a4a1597ef12d851350045d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_rubin, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Nov 25 16:15:48 compute-0 podman[257386]: 2025-11-25 16:15:48.279055556 +0000 UTC m=+0.388361256 container died ee8149f4d916878bf95fd5ba7e49f11b1bd38a0f4a4a1597ef12d851350045d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_rubin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:15:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4da6d6ffc8d4a87437c7ad09430340e6fd5b764b68c857a3c795d27a78bdeb4-merged.mount: Deactivated successfully.
Nov 25 16:15:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v803: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:48 compute-0 podman[257386]: 2025-11-25 16:15:48.62885019 +0000 UTC m=+0.738155890 container remove ee8149f4d916878bf95fd5ba7e49f11b1bd38a0f4a4a1597ef12d851350045d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_rubin, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Nov 25 16:15:48 compute-0 systemd[1]: libpod-conmon-ee8149f4d916878bf95fd5ba7e49f11b1bd38a0f4a4a1597ef12d851350045d2.scope: Deactivated successfully.
Nov 25 16:15:48 compute-0 podman[257427]: 2025-11-25 16:15:48.864853771 +0000 UTC m=+0.109567881 container create 87de8b782be5c4de323bd53daebcb4f61b148c6c9d86f2da10dd14e554e30b67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ishizaka, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:15:48 compute-0 podman[257427]: 2025-11-25 16:15:48.776518009 +0000 UTC m=+0.021232139 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:15:49 compute-0 systemd[1]: Started libpod-conmon-87de8b782be5c4de323bd53daebcb4f61b148c6c9d86f2da10dd14e554e30b67.scope.
Nov 25 16:15:49 compute-0 ceph-mon[74985]: pgmap v803: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:49 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:15:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dfd23ac4767cf97aec9dc9b722ac7f421fef9c29b49610284d0d6fa9a6e9cad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:15:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dfd23ac4767cf97aec9dc9b722ac7f421fef9c29b49610284d0d6fa9a6e9cad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:15:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dfd23ac4767cf97aec9dc9b722ac7f421fef9c29b49610284d0d6fa9a6e9cad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:15:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dfd23ac4767cf97aec9dc9b722ac7f421fef9c29b49610284d0d6fa9a6e9cad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:15:49 compute-0 podman[257427]: 2025-11-25 16:15:49.396267912 +0000 UTC m=+0.640982042 container init 87de8b782be5c4de323bd53daebcb4f61b148c6c9d86f2da10dd14e554e30b67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ishizaka, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:15:49 compute-0 podman[257427]: 2025-11-25 16:15:49.403142435 +0000 UTC m=+0.647856545 container start 87de8b782be5c4de323bd53daebcb4f61b148c6c9d86f2da10dd14e554e30b67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 16:15:49 compute-0 podman[257427]: 2025-11-25 16:15:49.493899243 +0000 UTC m=+0.738613363 container attach 87de8b782be5c4de323bd53daebcb4f61b148c6c9d86f2da10dd14e554e30b67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]: {
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:     "0": [
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:         {
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:             "devices": [
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:                 "/dev/loop3"
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:             ],
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:             "lv_name": "ceph_lv0",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:             "lv_size": "21470642176",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:             "name": "ceph_lv0",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:             "tags": {
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:                 "ceph.cluster_name": "ceph",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:                 "ceph.crush_device_class": "",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:                 "ceph.encrypted": "0",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:                 "ceph.osd_id": "0",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:                 "ceph.type": "block",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:                 "ceph.vdo": "0"
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:             },
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:             "type": "block",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:             "vg_name": "ceph_vg0"
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:         }
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:     ],
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:     "1": [
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:         {
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:             "devices": [
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:                 "/dev/loop4"
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:             ],
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:             "lv_name": "ceph_lv1",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:             "lv_size": "21470642176",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:             "name": "ceph_lv1",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:             "tags": {
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:                 "ceph.cluster_name": "ceph",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:                 "ceph.crush_device_class": "",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:                 "ceph.encrypted": "0",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:                 "ceph.osd_id": "1",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:                 "ceph.type": "block",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:                 "ceph.vdo": "0"
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:             },
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:             "type": "block",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:             "vg_name": "ceph_vg1"
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:         }
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:     ],
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:     "2": [
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:         {
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:             "devices": [
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:                 "/dev/loop5"
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:             ],
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:             "lv_name": "ceph_lv2",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:             "lv_size": "21470642176",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:             "name": "ceph_lv2",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:             "tags": {
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:                 "ceph.cluster_name": "ceph",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:                 "ceph.crush_device_class": "",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:                 "ceph.encrypted": "0",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:                 "ceph.osd_id": "2",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:                 "ceph.type": "block",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:                 "ceph.vdo": "0"
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:             },
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:             "type": "block",
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:             "vg_name": "ceph_vg2"
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:         }
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]:     ]
Nov 25 16:15:50 compute-0 exciting_ishizaka[257443]: }
Nov 25 16:15:50 compute-0 systemd[1]: libpod-87de8b782be5c4de323bd53daebcb4f61b148c6c9d86f2da10dd14e554e30b67.scope: Deactivated successfully.
Nov 25 16:15:50 compute-0 podman[257452]: 2025-11-25 16:15:50.208305216 +0000 UTC m=+0.027160947 container died 87de8b782be5c4de323bd53daebcb4f61b148c6c9d86f2da10dd14e554e30b67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 16:15:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:15:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v804: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-3dfd23ac4767cf97aec9dc9b722ac7f421fef9c29b49610284d0d6fa9a6e9cad-merged.mount: Deactivated successfully.
Nov 25 16:15:50 compute-0 podman[257452]: 2025-11-25 16:15:50.832412966 +0000 UTC m=+0.651268617 container remove 87de8b782be5c4de323bd53daebcb4f61b148c6c9d86f2da10dd14e554e30b67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ishizaka, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 16:15:50 compute-0 systemd[1]: libpod-conmon-87de8b782be5c4de323bd53daebcb4f61b148c6c9d86f2da10dd14e554e30b67.scope: Deactivated successfully.
Nov 25 16:15:50 compute-0 sudo[257322]: pam_unix(sudo:session): session closed for user root
Nov 25 16:15:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:15:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:15:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:15:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:15:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:15:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:15:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:15:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:15:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:15:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:15:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:15:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:15:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 16:15:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:15:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:15:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:15:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 16:15:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:15:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 16:15:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:15:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:15:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:15:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 16:15:50 compute-0 sudo[257467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:15:50 compute-0 sudo[257467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:15:50 compute-0 sudo[257467]: pam_unix(sudo:session): session closed for user root
Nov 25 16:15:51 compute-0 sudo[257492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:15:51 compute-0 sudo[257492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:15:51 compute-0 sudo[257492]: pam_unix(sudo:session): session closed for user root
Nov 25 16:15:51 compute-0 sudo[257517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:15:51 compute-0 sudo[257517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:15:51 compute-0 sudo[257517]: pam_unix(sudo:session): session closed for user root
Nov 25 16:15:51 compute-0 sudo[257542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 16:15:51 compute-0 sudo[257542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:15:51 compute-0 podman[257607]: 2025-11-25 16:15:51.537094389 +0000 UTC m=+0.104685630 container create 425ec6f8f68ab5aa63743327aeb628f4287277c343445d823d421f2689a33c8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:15:51 compute-0 podman[257607]: 2025-11-25 16:15:51.452481617 +0000 UTC m=+0.020072888 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:15:51 compute-0 ceph-mon[74985]: pgmap v804: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:51 compute-0 systemd[1]: Started libpod-conmon-425ec6f8f68ab5aa63743327aeb628f4287277c343445d823d421f2689a33c8e.scope.
Nov 25 16:15:51 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:15:51 compute-0 podman[257607]: 2025-11-25 16:15:51.905793539 +0000 UTC m=+0.473384790 container init 425ec6f8f68ab5aa63743327aeb628f4287277c343445d823d421f2689a33c8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:15:51 compute-0 podman[257607]: 2025-11-25 16:15:51.913921666 +0000 UTC m=+0.481512897 container start 425ec6f8f68ab5aa63743327aeb628f4287277c343445d823d421f2689a33c8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:15:51 compute-0 xenodochial_stonebraker[257623]: 167 167
Nov 25 16:15:51 compute-0 systemd[1]: libpod-425ec6f8f68ab5aa63743327aeb628f4287277c343445d823d421f2689a33c8e.scope: Deactivated successfully.
Nov 25 16:15:51 compute-0 podman[257607]: 2025-11-25 16:15:51.963553673 +0000 UTC m=+0.531144904 container attach 425ec6f8f68ab5aa63743327aeb628f4287277c343445d823d421f2689a33c8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 16:15:51 compute-0 podman[257607]: 2025-11-25 16:15:51.964141289 +0000 UTC m=+0.531732520 container died 425ec6f8f68ab5aa63743327aeb628f4287277c343445d823d421f2689a33c8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_stonebraker, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 16:15:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-9064734063b702ac7b02467e929cc20e79c430d84ec8b36b7a667e1da2c0e0e7-merged.mount: Deactivated successfully.
Nov 25 16:15:52 compute-0 podman[257607]: 2025-11-25 16:15:52.036911705 +0000 UTC m=+0.604502936 container remove 425ec6f8f68ab5aa63743327aeb628f4287277c343445d823d421f2689a33c8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_stonebraker, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:15:52 compute-0 systemd[1]: libpod-conmon-425ec6f8f68ab5aa63743327aeb628f4287277c343445d823d421f2689a33c8e.scope: Deactivated successfully.
Nov 25 16:15:52 compute-0 podman[257647]: 2025-11-25 16:15:52.19569364 +0000 UTC m=+0.041510481 container create df4cb368b8a4ecf70afdcc237f44375e6fad4e7fe150ac577e5cc6a4e53aa8d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_beaver, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:15:52 compute-0 systemd[1]: Started libpod-conmon-df4cb368b8a4ecf70afdcc237f44375e6fad4e7fe150ac577e5cc6a4e53aa8d7.scope.
Nov 25 16:15:52 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:15:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4e6efedb27ef86e0e6915f3bb7477149243bf534faf4957215f826057754224/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:15:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4e6efedb27ef86e0e6915f3bb7477149243bf534faf4957215f826057754224/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:15:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4e6efedb27ef86e0e6915f3bb7477149243bf534faf4957215f826057754224/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:15:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4e6efedb27ef86e0e6915f3bb7477149243bf534faf4957215f826057754224/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:15:52 compute-0 podman[257647]: 2025-11-25 16:15:52.260986467 +0000 UTC m=+0.106803338 container init df4cb368b8a4ecf70afdcc237f44375e6fad4e7fe150ac577e5cc6a4e53aa8d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_beaver, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 16:15:52 compute-0 podman[257647]: 2025-11-25 16:15:52.267743507 +0000 UTC m=+0.113560348 container start df4cb368b8a4ecf70afdcc237f44375e6fad4e7fe150ac577e5cc6a4e53aa8d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_beaver, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:15:52 compute-0 podman[257647]: 2025-11-25 16:15:52.271523688 +0000 UTC m=+0.117340549 container attach df4cb368b8a4ecf70afdcc237f44375e6fad4e7fe150ac577e5cc6a4e53aa8d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_beaver, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:15:52 compute-0 podman[257647]: 2025-11-25 16:15:52.17995363 +0000 UTC m=+0.025770491 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:15:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v805: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:53 compute-0 eloquent_beaver[257664]: {
Nov 25 16:15:53 compute-0 eloquent_beaver[257664]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 16:15:53 compute-0 eloquent_beaver[257664]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:15:53 compute-0 eloquent_beaver[257664]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 16:15:53 compute-0 eloquent_beaver[257664]:         "osd_id": 1,
Nov 25 16:15:53 compute-0 eloquent_beaver[257664]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:15:53 compute-0 eloquent_beaver[257664]:         "type": "bluestore"
Nov 25 16:15:53 compute-0 eloquent_beaver[257664]:     },
Nov 25 16:15:53 compute-0 eloquent_beaver[257664]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 16:15:53 compute-0 eloquent_beaver[257664]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:15:53 compute-0 eloquent_beaver[257664]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 16:15:53 compute-0 eloquent_beaver[257664]:         "osd_id": 2,
Nov 25 16:15:53 compute-0 eloquent_beaver[257664]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:15:53 compute-0 eloquent_beaver[257664]:         "type": "bluestore"
Nov 25 16:15:53 compute-0 eloquent_beaver[257664]:     },
Nov 25 16:15:53 compute-0 eloquent_beaver[257664]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 16:15:53 compute-0 eloquent_beaver[257664]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:15:53 compute-0 eloquent_beaver[257664]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 16:15:53 compute-0 eloquent_beaver[257664]:         "osd_id": 0,
Nov 25 16:15:53 compute-0 eloquent_beaver[257664]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:15:53 compute-0 eloquent_beaver[257664]:         "type": "bluestore"
Nov 25 16:15:53 compute-0 eloquent_beaver[257664]:     }
Nov 25 16:15:53 compute-0 eloquent_beaver[257664]: }
Nov 25 16:15:53 compute-0 systemd[1]: libpod-df4cb368b8a4ecf70afdcc237f44375e6fad4e7fe150ac577e5cc6a4e53aa8d7.scope: Deactivated successfully.
Nov 25 16:15:53 compute-0 conmon[257664]: conmon df4cb368b8a4ecf70afd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-df4cb368b8a4ecf70afdcc237f44375e6fad4e7fe150ac577e5cc6a4e53aa8d7.scope/container/memory.events
Nov 25 16:15:53 compute-0 podman[257647]: 2025-11-25 16:15:53.216072917 +0000 UTC m=+1.061889758 container died df4cb368b8a4ecf70afdcc237f44375e6fad4e7fe150ac577e5cc6a4e53aa8d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:15:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4e6efedb27ef86e0e6915f3bb7477149243bf534faf4957215f826057754224-merged.mount: Deactivated successfully.
Nov 25 16:15:53 compute-0 podman[257647]: 2025-11-25 16:15:53.275046784 +0000 UTC m=+1.120863635 container remove df4cb368b8a4ecf70afdcc237f44375e6fad4e7fe150ac577e5cc6a4e53aa8d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_beaver, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 16:15:53 compute-0 systemd[1]: libpod-conmon-df4cb368b8a4ecf70afdcc237f44375e6fad4e7fe150ac577e5cc6a4e53aa8d7.scope: Deactivated successfully.
Nov 25 16:15:53 compute-0 sudo[257542]: pam_unix(sudo:session): session closed for user root
Nov 25 16:15:53 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:15:53 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:15:53 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:15:53 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:15:53 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 435e1140-df1d-43ae-8989-a8ea973be967 does not exist
Nov 25 16:15:53 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev f8c11953-e7f8-4a26-babb-521a4030b5a9 does not exist
Nov 25 16:15:53 compute-0 sudo[257711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:15:53 compute-0 sudo[257711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:15:53 compute-0 sudo[257711]: pam_unix(sudo:session): session closed for user root
Nov 25 16:15:53 compute-0 sudo[257736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 16:15:53 compute-0 sudo[257736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:15:53 compute-0 sudo[257736]: pam_unix(sudo:session): session closed for user root
Nov 25 16:15:53 compute-0 ceph-mon[74985]: pgmap v805: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:53 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:15:53 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:15:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v806: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 16:15:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3531795733' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:15:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 16:15:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3531795733' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:15:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:15:55 compute-0 ceph-mon[74985]: pgmap v806: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/3531795733' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:15:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/3531795733' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:15:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v807: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:56 compute-0 ceph-mon[74985]: pgmap v807: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v808: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:15:58 compute-0 podman[257762]: 2025-11-25 16:15:58.643813029 +0000 UTC m=+0.060366284 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:15:58 compute-0 podman[257763]: 2025-11-25 16:15:58.671558112 +0000 UTC m=+0.081979343 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 16:15:58 compute-0 podman[257761]: 2025-11-25 16:15:58.671740626 +0000 UTC m=+0.088141607 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 16:15:59 compute-0 ceph-mon[74985]: pgmap v808: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:16:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:16:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v809: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Nov 25 16:16:00 compute-0 ceph-mon[74985]: pgmap v809: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Nov 25 16:16:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v810: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s
Nov 25 16:16:03 compute-0 ceph-mon[74985]: pgmap v810: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s
Nov 25 16:16:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v811: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s
Nov 25 16:16:04 compute-0 ceph-mon[74985]: pgmap v811: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s
Nov 25 16:16:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:16:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v812: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 9 op/s
Nov 25 16:16:06 compute-0 ceph-mon[74985]: pgmap v812: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 9 op/s
Nov 25 16:16:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v813: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 7.4 KiB/s rd, 0 B/s wr, 12 op/s
Nov 25 16:16:08 compute-0 ceph-mon[74985]: pgmap v813: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 7.4 KiB/s rd, 0 B/s wr, 12 op/s
Nov 25 16:16:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:16:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:16:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:16:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:16:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:16:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:16:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:16:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v814: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 17 op/s
Nov 25 16:16:11 compute-0 ceph-mon[74985]: pgmap v814: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 17 op/s
Nov 25 16:16:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v815: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Nov 25 16:16:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:16:13.583 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:16:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:16:13.584 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:16:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:16:13.584 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:16:13 compute-0 ceph-mon[74985]: pgmap v815: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Nov 25 16:16:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v816: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 9.6 KiB/s rd, 0 B/s wr, 15 op/s
Nov 25 16:16:14 compute-0 ceph-mon[74985]: pgmap v816: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 9.6 KiB/s rd, 0 B/s wr, 15 op/s
Nov 25 16:16:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:16:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v817: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 35 op/s
Nov 25 16:16:17 compute-0 ceph-mon[74985]: pgmap v817: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 35 op/s
Nov 25 16:16:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v818: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 40 op/s
Nov 25 16:16:19 compute-0 ceph-mon[74985]: pgmap v818: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 40 op/s
Nov 25 16:16:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:16:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v819: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 47 op/s
Nov 25 16:16:21 compute-0 ceph-mon[74985]: pgmap v819: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 47 op/s
Nov 25 16:16:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v820: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 42 op/s
Nov 25 16:16:23 compute-0 ceph-mon[74985]: pgmap v820: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 42 op/s
Nov 25 16:16:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v821: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 41 op/s
Nov 25 16:16:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:16:25 compute-0 ceph-mon[74985]: pgmap v821: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 41 op/s
Nov 25 16:16:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v822: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 41 op/s
Nov 25 16:16:26 compute-0 ceph-mon[74985]: pgmap v822: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 41 op/s
Nov 25 16:16:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v823: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 21 op/s
Nov 25 16:16:29 compute-0 ceph-mon[74985]: pgmap v823: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 21 op/s
Nov 25 16:16:29 compute-0 podman[257823]: 2025-11-25 16:16:29.665384828 +0000 UTC m=+0.071778411 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:16:29 compute-0 podman[257822]: 2025-11-25 16:16:29.667428962 +0000 UTC m=+0.077704478 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 16:16:29 compute-0 podman[257824]: 2025-11-25 16:16:29.690796087 +0000 UTC m=+0.101850255 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 16:16:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:16:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v824: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 0 B/s wr, 9 op/s
Nov 25 16:16:31 compute-0 ceph-mon[74985]: pgmap v824: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 0 B/s wr, 9 op/s
Nov 25 16:16:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v825: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:16:32 compute-0 ceph-mon[74985]: pgmap v825: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:16:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v826: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:16:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:16:35 compute-0 ceph-mon[74985]: pgmap v826: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:16:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v827: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:16:37 compute-0 ceph-mon[74985]: pgmap v827: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:16:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v828: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:16:39 compute-0 ceph-mon[74985]: pgmap v828: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:16:39 compute-0 nova_compute[254092]: 2025-11-25 16:16:39.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:16:39 compute-0 nova_compute[254092]: 2025-11-25 16:16:39.523 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:16:39 compute-0 nova_compute[254092]: 2025-11-25 16:16:39.524 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:16:39 compute-0 nova_compute[254092]: 2025-11-25 16:16:39.524 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:16:39 compute-0 nova_compute[254092]: 2025-11-25 16:16:39.524 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 16:16:39 compute-0 nova_compute[254092]: 2025-11-25 16:16:39.525 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:16:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:16:39 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2919789275' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:16:40 compute-0 nova_compute[254092]: 2025-11-25 16:16:40.002 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:16:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:16:40
Nov 25 16:16:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:16:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:16:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'backups', 'vms', 'volumes', 'images', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta']
Nov 25 16:16:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:16:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:16:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:16:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:16:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:16:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:16:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:16:40 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2919789275' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:16:40 compute-0 nova_compute[254092]: 2025-11-25 16:16:40.171 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:16:40 compute-0 nova_compute[254092]: 2025-11-25 16:16:40.172 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5203MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 16:16:40 compute-0 nova_compute[254092]: 2025-11-25 16:16:40.172 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:16:40 compute-0 nova_compute[254092]: 2025-11-25 16:16:40.173 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:16:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:16:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:16:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:16:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:16:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:16:40 compute-0 nova_compute[254092]: 2025-11-25 16:16:40.225 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 16:16:40 compute-0 nova_compute[254092]: 2025-11-25 16:16:40.225 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 16:16:40 compute-0 nova_compute[254092]: 2025-11-25 16:16:40.240 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:16:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:16:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:16:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:16:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:16:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:16:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:16:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v829: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:16:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:16:40 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3638642740' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:16:40 compute-0 nova_compute[254092]: 2025-11-25 16:16:40.671 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:16:40 compute-0 nova_compute[254092]: 2025-11-25 16:16:40.678 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:16:40 compute-0 nova_compute[254092]: 2025-11-25 16:16:40.704 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:16:40 compute-0 nova_compute[254092]: 2025-11-25 16:16:40.706 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 16:16:40 compute-0 nova_compute[254092]: 2025-11-25 16:16:40.706 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.534s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:16:41 compute-0 ceph-mon[74985]: pgmap v829: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:16:41 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3638642740' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:16:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v830: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:16:43 compute-0 ceph-mon[74985]: pgmap v830: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:16:43 compute-0 nova_compute[254092]: 2025-11-25 16:16:43.707 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:16:43 compute-0 nova_compute[254092]: 2025-11-25 16:16:43.707 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:16:43 compute-0 nova_compute[254092]: 2025-11-25 16:16:43.725 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:16:43 compute-0 nova_compute[254092]: 2025-11-25 16:16:43.726 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 16:16:43 compute-0 nova_compute[254092]: 2025-11-25 16:16:43.726 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 16:16:43 compute-0 nova_compute[254092]: 2025-11-25 16:16:43.739 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 16:16:43 compute-0 nova_compute[254092]: 2025-11-25 16:16:43.739 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:16:43 compute-0 nova_compute[254092]: 2025-11-25 16:16:43.739 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:16:43 compute-0 nova_compute[254092]: 2025-11-25 16:16:43.739 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:16:43 compute-0 nova_compute[254092]: 2025-11-25 16:16:43.739 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:16:43 compute-0 nova_compute[254092]: 2025-11-25 16:16:43.739 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:16:44 compute-0 nova_compute[254092]: 2025-11-25 16:16:44.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:16:44 compute-0 nova_compute[254092]: 2025-11-25 16:16:44.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 16:16:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v831: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:16:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:16:45 compute-0 ceph-mon[74985]: pgmap v831: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:16:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v832: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:16:47 compute-0 ceph-mon[74985]: pgmap v832: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:16:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v833: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:16:49 compute-0 ceph-mon[74985]: pgmap v833: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:16:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:16:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v834: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:16:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:16:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:16:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:16:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:16:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:16:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:16:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:16:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:16:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:16:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:16:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:16:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:16:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 16:16:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:16:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:16:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:16:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 16:16:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:16:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 16:16:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:16:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:16:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:16:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 16:16:51 compute-0 ceph-mon[74985]: pgmap v834: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:16:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v835: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:16:52 compute-0 ceph-mon[74985]: pgmap v835: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:16:53 compute-0 sudo[257926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:16:53 compute-0 sudo[257926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:16:53 compute-0 sudo[257926]: pam_unix(sudo:session): session closed for user root
Nov 25 16:16:53 compute-0 sudo[257951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:16:53 compute-0 sudo[257951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:16:53 compute-0 sudo[257951]: pam_unix(sudo:session): session closed for user root
Nov 25 16:16:53 compute-0 sudo[257976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:16:53 compute-0 sudo[257976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:16:53 compute-0 sudo[257976]: pam_unix(sudo:session): session closed for user root
Nov 25 16:16:53 compute-0 sudo[258001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 25 16:16:53 compute-0 sudo[258001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:16:54 compute-0 podman[258100]: 2025-11-25 16:16:54.343144037 +0000 UTC m=+0.224650168 container exec 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 16:16:54 compute-0 podman[258100]: 2025-11-25 16:16:54.478079981 +0000 UTC m=+0.359586062 container exec_died 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:16:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v836: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:16:55 compute-0 sudo[258001]: pam_unix(sudo:session): session closed for user root
Nov 25 16:16:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:16:55 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:16:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:16:55 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:16:55 compute-0 sudo[258259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:16:55 compute-0 sudo[258259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:16:55 compute-0 sudo[258259]: pam_unix(sudo:session): session closed for user root
Nov 25 16:16:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 16:16:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4042639771' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:16:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 16:16:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4042639771' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:16:55 compute-0 sudo[258284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:16:55 compute-0 sudo[258284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:16:55 compute-0 sudo[258284]: pam_unix(sudo:session): session closed for user root
Nov 25 16:16:55 compute-0 sudo[258309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:16:55 compute-0 sudo[258309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:16:55 compute-0 sudo[258309]: pam_unix(sudo:session): session closed for user root
Nov 25 16:16:55 compute-0 sudo[258334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 16:16:55 compute-0 sudo[258334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:16:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:16:55 compute-0 ceph-mon[74985]: pgmap v836: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:16:55 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:16:55 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:16:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/4042639771' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:16:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/4042639771' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:16:55 compute-0 sudo[258334]: pam_unix(sudo:session): session closed for user root
Nov 25 16:16:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:16:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:16:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 16:16:55 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:16:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 16:16:55 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:16:55 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 4f2f4a8d-95b5-433c-87af-7d1e66878b0b does not exist
Nov 25 16:16:55 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 5a09352a-8517-4f5b-be66-1cbe5361a957 does not exist
Nov 25 16:16:55 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 696dbcea-ec2b-4744-a7ed-f924ca7f1309 does not exist
Nov 25 16:16:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 16:16:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:16:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 16:16:55 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:16:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:16:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:16:55 compute-0 sudo[258389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:16:55 compute-0 sudo[258389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:16:55 compute-0 sudo[258389]: pam_unix(sudo:session): session closed for user root
Nov 25 16:16:55 compute-0 sudo[258414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:16:55 compute-0 sudo[258414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:16:55 compute-0 sudo[258414]: pam_unix(sudo:session): session closed for user root
Nov 25 16:16:55 compute-0 sudo[258439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:16:55 compute-0 sudo[258439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:16:55 compute-0 sudo[258439]: pam_unix(sudo:session): session closed for user root
Nov 25 16:16:56 compute-0 sudo[258464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 16:16:56 compute-0 sudo[258464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:16:56 compute-0 podman[258528]: 2025-11-25 16:16:56.366771008 +0000 UTC m=+0.036175818 container create 450b1ba791138371aaa1c110738215d4f0d814da8acb839634e4058ddd0b1170 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:16:56 compute-0 systemd[1]: Started libpod-conmon-450b1ba791138371aaa1c110738215d4f0d814da8acb839634e4058ddd0b1170.scope.
Nov 25 16:16:56 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:16:56 compute-0 podman[258528]: 2025-11-25 16:16:56.349133302 +0000 UTC m=+0.018538142 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:16:56 compute-0 podman[258528]: 2025-11-25 16:16:56.453254734 +0000 UTC m=+0.122659594 container init 450b1ba791138371aaa1c110738215d4f0d814da8acb839634e4058ddd0b1170 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_swartz, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 16:16:56 compute-0 podman[258528]: 2025-11-25 16:16:56.460984663 +0000 UTC m=+0.130389493 container start 450b1ba791138371aaa1c110738215d4f0d814da8acb839634e4058ddd0b1170 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 16:16:56 compute-0 inspiring_swartz[258544]: 167 167
Nov 25 16:16:56 compute-0 systemd[1]: libpod-450b1ba791138371aaa1c110738215d4f0d814da8acb839634e4058ddd0b1170.scope: Deactivated successfully.
Nov 25 16:16:56 compute-0 podman[258528]: 2025-11-25 16:16:56.473186622 +0000 UTC m=+0.142591492 container attach 450b1ba791138371aaa1c110738215d4f0d814da8acb839634e4058ddd0b1170 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_swartz, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:16:56 compute-0 podman[258528]: 2025-11-25 16:16:56.474062226 +0000 UTC m=+0.143467046 container died 450b1ba791138371aaa1c110738215d4f0d814da8acb839634e4058ddd0b1170 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 16:16:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec3719d3e3750e0d855951d557fd1341c96ae0c8cf45a708348e48ecd796e87f-merged.mount: Deactivated successfully.
Nov 25 16:16:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v837: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:16:56 compute-0 podman[258528]: 2025-11-25 16:16:56.513968944 +0000 UTC m=+0.183373754 container remove 450b1ba791138371aaa1c110738215d4f0d814da8acb839634e4058ddd0b1170 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_swartz, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:16:56 compute-0 systemd[1]: libpod-conmon-450b1ba791138371aaa1c110738215d4f0d814da8acb839634e4058ddd0b1170.scope: Deactivated successfully.
Nov 25 16:16:56 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:16:56 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:16:56 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:16:56 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:16:56 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:16:56 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:16:56 compute-0 podman[258570]: 2025-11-25 16:16:56.660376567 +0000 UTC m=+0.042835557 container create 25cc7c2379857d18d29ce21e8b7ca1f54bae2d994620344675cae9c4a667a6be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hamilton, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:16:56 compute-0 systemd[1]: Started libpod-conmon-25cc7c2379857d18d29ce21e8b7ca1f54bae2d994620344675cae9c4a667a6be.scope.
Nov 25 16:16:56 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:16:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6185b0064981d6bb878ce7e864795fb1cb7004bdf738142f1f679ad0db58da11/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:16:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6185b0064981d6bb878ce7e864795fb1cb7004bdf738142f1f679ad0db58da11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:16:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6185b0064981d6bb878ce7e864795fb1cb7004bdf738142f1f679ad0db58da11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:16:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6185b0064981d6bb878ce7e864795fb1cb7004bdf738142f1f679ad0db58da11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:16:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6185b0064981d6bb878ce7e864795fb1cb7004bdf738142f1f679ad0db58da11/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 16:16:56 compute-0 podman[258570]: 2025-11-25 16:16:56.640357387 +0000 UTC m=+0.022816417 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:16:56 compute-0 podman[258570]: 2025-11-25 16:16:56.734802188 +0000 UTC m=+0.117261188 container init 25cc7c2379857d18d29ce21e8b7ca1f54bae2d994620344675cae9c4a667a6be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hamilton, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:16:56 compute-0 podman[258570]: 2025-11-25 16:16:56.746546055 +0000 UTC m=+0.129005035 container start 25cc7c2379857d18d29ce21e8b7ca1f54bae2d994620344675cae9c4a667a6be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hamilton, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 16:16:56 compute-0 podman[258570]: 2025-11-25 16:16:56.749901965 +0000 UTC m=+0.132360965 container attach 25cc7c2379857d18d29ce21e8b7ca1f54bae2d994620344675cae9c4a667a6be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:16:57 compute-0 ceph-mon[74985]: pgmap v837: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:16:57 compute-0 beautiful_hamilton[258586]: --> passed data devices: 0 physical, 3 LVM
Nov 25 16:16:57 compute-0 beautiful_hamilton[258586]: --> relative data size: 1.0
Nov 25 16:16:57 compute-0 beautiful_hamilton[258586]: --> All data devices are unavailable
Nov 25 16:16:57 compute-0 systemd[1]: libpod-25cc7c2379857d18d29ce21e8b7ca1f54bae2d994620344675cae9c4a667a6be.scope: Deactivated successfully.
Nov 25 16:16:57 compute-0 podman[258615]: 2025-11-25 16:16:57.801064223 +0000 UTC m=+0.024586525 container died 25cc7c2379857d18d29ce21e8b7ca1f54bae2d994620344675cae9c4a667a6be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hamilton, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:16:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-6185b0064981d6bb878ce7e864795fb1cb7004bdf738142f1f679ad0db58da11-merged.mount: Deactivated successfully.
Nov 25 16:16:57 compute-0 podman[258615]: 2025-11-25 16:16:57.851714251 +0000 UTC m=+0.075236463 container remove 25cc7c2379857d18d29ce21e8b7ca1f54bae2d994620344675cae9c4a667a6be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_hamilton, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Nov 25 16:16:57 compute-0 systemd[1]: libpod-conmon-25cc7c2379857d18d29ce21e8b7ca1f54bae2d994620344675cae9c4a667a6be.scope: Deactivated successfully.
Nov 25 16:16:57 compute-0 sudo[258464]: pam_unix(sudo:session): session closed for user root
Nov 25 16:16:57 compute-0 sudo[258630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:16:57 compute-0 sudo[258630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:16:57 compute-0 sudo[258630]: pam_unix(sudo:session): session closed for user root
Nov 25 16:16:58 compute-0 sudo[258655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:16:58 compute-0 sudo[258655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:16:58 compute-0 sudo[258655]: pam_unix(sudo:session): session closed for user root
Nov 25 16:16:58 compute-0 sudo[258680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:16:58 compute-0 sudo[258680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:16:58 compute-0 sudo[258680]: pam_unix(sudo:session): session closed for user root
Nov 25 16:16:58 compute-0 sudo[258705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 16:16:58 compute-0 sudo[258705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:16:58 compute-0 podman[258771]: 2025-11-25 16:16:58.433949166 +0000 UTC m=+0.039832127 container create 519ea69c6c7ad135483d6ff10d3ad8397ab18aeaac53946dd28d60289ebb1bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 16:16:58 compute-0 systemd[1]: Started libpod-conmon-519ea69c6c7ad135483d6ff10d3ad8397ab18aeaac53946dd28d60289ebb1bb0.scope.
Nov 25 16:16:58 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:16:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v838: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:16:58 compute-0 podman[258771]: 2025-11-25 16:16:58.416609077 +0000 UTC m=+0.022492058 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:16:58 compute-0 podman[258771]: 2025-11-25 16:16:58.525388985 +0000 UTC m=+0.131271966 container init 519ea69c6c7ad135483d6ff10d3ad8397ab18aeaac53946dd28d60289ebb1bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wiles, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 16:16:58 compute-0 podman[258771]: 2025-11-25 16:16:58.532100266 +0000 UTC m=+0.137983227 container start 519ea69c6c7ad135483d6ff10d3ad8397ab18aeaac53946dd28d60289ebb1bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:16:58 compute-0 podman[258771]: 2025-11-25 16:16:58.535424076 +0000 UTC m=+0.141307067 container attach 519ea69c6c7ad135483d6ff10d3ad8397ab18aeaac53946dd28d60289ebb1bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 16:16:58 compute-0 flamboyant_wiles[258787]: 167 167
Nov 25 16:16:58 compute-0 systemd[1]: libpod-519ea69c6c7ad135483d6ff10d3ad8397ab18aeaac53946dd28d60289ebb1bb0.scope: Deactivated successfully.
Nov 25 16:16:58 compute-0 podman[258771]: 2025-11-25 16:16:58.536380582 +0000 UTC m=+0.142263533 container died 519ea69c6c7ad135483d6ff10d3ad8397ab18aeaac53946dd28d60289ebb1bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wiles, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:16:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-84649de9f5f5473245159c4bb29f41c65e63f1ef7a6e5c65c7fa08866390eb70-merged.mount: Deactivated successfully.
Nov 25 16:16:58 compute-0 podman[258771]: 2025-11-25 16:16:58.567248806 +0000 UTC m=+0.173131767 container remove 519ea69c6c7ad135483d6ff10d3ad8397ab18aeaac53946dd28d60289ebb1bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Nov 25 16:16:58 compute-0 systemd[1]: libpod-conmon-519ea69c6c7ad135483d6ff10d3ad8397ab18aeaac53946dd28d60289ebb1bb0.scope: Deactivated successfully.
Nov 25 16:16:58 compute-0 podman[258810]: 2025-11-25 16:16:58.727826302 +0000 UTC m=+0.040709330 container create 366931a2a1276bd6f73d9ce7c332accf442e6d1b57e017abfae9ec5a7a9c475b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_vaughan, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 16:16:58 compute-0 systemd[1]: Started libpod-conmon-366931a2a1276bd6f73d9ce7c332accf442e6d1b57e017abfae9ec5a7a9c475b.scope.
Nov 25 16:16:58 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:16:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ddd40827672870740fb159b34337d8ef911f18304a9697725a17c09fa4ce2e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:16:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ddd40827672870740fb159b34337d8ef911f18304a9697725a17c09fa4ce2e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:16:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ddd40827672870740fb159b34337d8ef911f18304a9697725a17c09fa4ce2e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:16:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ddd40827672870740fb159b34337d8ef911f18304a9697725a17c09fa4ce2e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:16:58 compute-0 podman[258810]: 2025-11-25 16:16:58.808052769 +0000 UTC m=+0.120935867 container init 366931a2a1276bd6f73d9ce7c332accf442e6d1b57e017abfae9ec5a7a9c475b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:16:58 compute-0 podman[258810]: 2025-11-25 16:16:58.71292021 +0000 UTC m=+0.025803258 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:16:58 compute-0 podman[258810]: 2025-11-25 16:16:58.815359266 +0000 UTC m=+0.128242304 container start 366931a2a1276bd6f73d9ce7c332accf442e6d1b57e017abfae9ec5a7a9c475b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 16:16:58 compute-0 podman[258810]: 2025-11-25 16:16:58.818279915 +0000 UTC m=+0.131162973 container attach 366931a2a1276bd6f73d9ce7c332accf442e6d1b57e017abfae9ec5a7a9c475b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_vaughan, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]: {
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:     "0": [
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:         {
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:             "devices": [
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:                 "/dev/loop3"
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:             ],
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:             "lv_name": "ceph_lv0",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:             "lv_size": "21470642176",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:             "name": "ceph_lv0",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:             "tags": {
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:                 "ceph.cluster_name": "ceph",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:                 "ceph.crush_device_class": "",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:                 "ceph.encrypted": "0",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:                 "ceph.osd_id": "0",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:                 "ceph.type": "block",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:                 "ceph.vdo": "0"
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:             },
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:             "type": "block",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:             "vg_name": "ceph_vg0"
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:         }
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:     ],
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:     "1": [
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:         {
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:             "devices": [
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:                 "/dev/loop4"
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:             ],
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:             "lv_name": "ceph_lv1",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:             "lv_size": "21470642176",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:             "name": "ceph_lv1",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:             "tags": {
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:                 "ceph.cluster_name": "ceph",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:                 "ceph.crush_device_class": "",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:                 "ceph.encrypted": "0",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:                 "ceph.osd_id": "1",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:                 "ceph.type": "block",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:                 "ceph.vdo": "0"
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:             },
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:             "type": "block",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:             "vg_name": "ceph_vg1"
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:         }
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:     ],
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:     "2": [
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:         {
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:             "devices": [
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:                 "/dev/loop5"
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:             ],
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:             "lv_name": "ceph_lv2",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:             "lv_size": "21470642176",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:             "name": "ceph_lv2",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:             "tags": {
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:                 "ceph.cluster_name": "ceph",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:                 "ceph.crush_device_class": "",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:                 "ceph.encrypted": "0",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:                 "ceph.osd_id": "2",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:                 "ceph.type": "block",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:                 "ceph.vdo": "0"
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:             },
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:             "type": "block",
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:             "vg_name": "ceph_vg2"
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:         }
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]:     ]
Nov 25 16:16:59 compute-0 nervous_vaughan[258827]: }
Nov 25 16:16:59 compute-0 ceph-mon[74985]: pgmap v838: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:16:59 compute-0 systemd[1]: libpod-366931a2a1276bd6f73d9ce7c332accf442e6d1b57e017abfae9ec5a7a9c475b.scope: Deactivated successfully.
Nov 25 16:16:59 compute-0 podman[258810]: 2025-11-25 16:16:59.597952992 +0000 UTC m=+0.910836020 container died 366931a2a1276bd6f73d9ce7c332accf442e6d1b57e017abfae9ec5a7a9c475b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_vaughan, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 16:16:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ddd40827672870740fb159b34337d8ef911f18304a9697725a17c09fa4ce2e1-merged.mount: Deactivated successfully.
Nov 25 16:16:59 compute-0 podman[258810]: 2025-11-25 16:16:59.654118499 +0000 UTC m=+0.967001527 container remove 366931a2a1276bd6f73d9ce7c332accf442e6d1b57e017abfae9ec5a7a9c475b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_vaughan, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:16:59 compute-0 systemd[1]: libpod-conmon-366931a2a1276bd6f73d9ce7c332accf442e6d1b57e017abfae9ec5a7a9c475b.scope: Deactivated successfully.
Nov 25 16:16:59 compute-0 sudo[258705]: pam_unix(sudo:session): session closed for user root
Nov 25 16:16:59 compute-0 sudo[258848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:16:59 compute-0 sudo[258848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:16:59 compute-0 sudo[258848]: pam_unix(sudo:session): session closed for user root
Nov 25 16:16:59 compute-0 sudo[258891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:16:59 compute-0 podman[258872]: 2025-11-25 16:16:59.840905352 +0000 UTC m=+0.058855150 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 25 16:16:59 compute-0 sudo[258891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:16:59 compute-0 sudo[258891]: pam_unix(sudo:session): session closed for user root
Nov 25 16:16:59 compute-0 podman[258874]: 2025-11-25 16:16:59.863435371 +0000 UTC m=+0.081254856 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 25 16:16:59 compute-0 podman[258873]: 2025-11-25 16:16:59.86706954 +0000 UTC m=+0.085194023 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Nov 25 16:16:59 compute-0 sudo[258959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:16:59 compute-0 sudo[258959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:16:59 compute-0 sudo[258959]: pam_unix(sudo:session): session closed for user root
Nov 25 16:16:59 compute-0 sudo[258986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 16:16:59 compute-0 sudo[258986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:17:00 compute-0 podman[259051]: 2025-11-25 16:17:00.288795229 +0000 UTC m=+0.038222323 container create fe861682cbd8fb111d8132aa69c71aef9eb6f8b7e89bad33e8c3da1c0f3bb688 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_rhodes, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:17:00 compute-0 systemd[1]: Started libpod-conmon-fe861682cbd8fb111d8132aa69c71aef9eb6f8b7e89bad33e8c3da1c0f3bb688.scope.
Nov 25 16:17:00 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:17:00 compute-0 podman[259051]: 2025-11-25 16:17:00.356745144 +0000 UTC m=+0.106172268 container init fe861682cbd8fb111d8132aa69c71aef9eb6f8b7e89bad33e8c3da1c0f3bb688 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:17:00 compute-0 podman[259051]: 2025-11-25 16:17:00.363054125 +0000 UTC m=+0.112481219 container start fe861682cbd8fb111d8132aa69c71aef9eb6f8b7e89bad33e8c3da1c0f3bb688 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_rhodes, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:17:00 compute-0 focused_rhodes[259067]: 167 167
Nov 25 16:17:00 compute-0 systemd[1]: libpod-fe861682cbd8fb111d8132aa69c71aef9eb6f8b7e89bad33e8c3da1c0f3bb688.scope: Deactivated successfully.
Nov 25 16:17:00 compute-0 podman[259051]: 2025-11-25 16:17:00.366689502 +0000 UTC m=+0.116116616 container attach fe861682cbd8fb111d8132aa69c71aef9eb6f8b7e89bad33e8c3da1c0f3bb688 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_rhodes, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:17:00 compute-0 podman[259051]: 2025-11-25 16:17:00.367846383 +0000 UTC m=+0.117273487 container died fe861682cbd8fb111d8132aa69c71aef9eb6f8b7e89bad33e8c3da1c0f3bb688 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_rhodes, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:17:00 compute-0 podman[259051]: 2025-11-25 16:17:00.27218008 +0000 UTC m=+0.021607204 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:17:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-66a83a510060c37154acf89506e391d13f0882b4143be5ba704df03d4648fe56-merged.mount: Deactivated successfully.
Nov 25 16:17:00 compute-0 podman[259051]: 2025-11-25 16:17:00.399784006 +0000 UTC m=+0.149211100 container remove fe861682cbd8fb111d8132aa69c71aef9eb6f8b7e89bad33e8c3da1c0f3bb688 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_rhodes, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 16:17:00 compute-0 systemd[1]: libpod-conmon-fe861682cbd8fb111d8132aa69c71aef9eb6f8b7e89bad33e8c3da1c0f3bb688.scope: Deactivated successfully.
Nov 25 16:17:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:17:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v839: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:00 compute-0 podman[259091]: 2025-11-25 16:17:00.548155553 +0000 UTC m=+0.036423664 container create 735a1319522f660cc0b5cff404c61e09078bc66fb8e75c8f26e05d7d920d3a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 16:17:00 compute-0 systemd[1]: Started libpod-conmon-735a1319522f660cc0b5cff404c61e09078bc66fb8e75c8f26e05d7d920d3a09.scope.
Nov 25 16:17:00 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:17:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/612c6468b70989205208bded1a753eabda9d6e2ac9f51f23aee652cc8b8499e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:17:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/612c6468b70989205208bded1a753eabda9d6e2ac9f51f23aee652cc8b8499e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:17:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/612c6468b70989205208bded1a753eabda9d6e2ac9f51f23aee652cc8b8499e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:17:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/612c6468b70989205208bded1a753eabda9d6e2ac9f51f23aee652cc8b8499e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:17:00 compute-0 podman[259091]: 2025-11-25 16:17:00.621681979 +0000 UTC m=+0.109950100 container init 735a1319522f660cc0b5cff404c61e09078bc66fb8e75c8f26e05d7d920d3a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_leakey, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:17:00 compute-0 podman[259091]: 2025-11-25 16:17:00.628494763 +0000 UTC m=+0.116762874 container start 735a1319522f660cc0b5cff404c61e09078bc66fb8e75c8f26e05d7d920d3a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_leakey, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:17:00 compute-0 podman[259091]: 2025-11-25 16:17:00.5343504 +0000 UTC m=+0.022618531 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:17:00 compute-0 podman[259091]: 2025-11-25 16:17:00.631577567 +0000 UTC m=+0.119845688 container attach 735a1319522f660cc0b5cff404c61e09078bc66fb8e75c8f26e05d7d920d3a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:17:01 compute-0 funny_leakey[259107]: {
Nov 25 16:17:01 compute-0 funny_leakey[259107]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 16:17:01 compute-0 funny_leakey[259107]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:17:01 compute-0 funny_leakey[259107]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 16:17:01 compute-0 funny_leakey[259107]:         "osd_id": 1,
Nov 25 16:17:01 compute-0 funny_leakey[259107]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:17:01 compute-0 funny_leakey[259107]:         "type": "bluestore"
Nov 25 16:17:01 compute-0 funny_leakey[259107]:     },
Nov 25 16:17:01 compute-0 funny_leakey[259107]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 16:17:01 compute-0 funny_leakey[259107]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:17:01 compute-0 funny_leakey[259107]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 16:17:01 compute-0 funny_leakey[259107]:         "osd_id": 2,
Nov 25 16:17:01 compute-0 funny_leakey[259107]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:17:01 compute-0 funny_leakey[259107]:         "type": "bluestore"
Nov 25 16:17:01 compute-0 funny_leakey[259107]:     },
Nov 25 16:17:01 compute-0 funny_leakey[259107]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 16:17:01 compute-0 funny_leakey[259107]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:17:01 compute-0 funny_leakey[259107]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 16:17:01 compute-0 funny_leakey[259107]:         "osd_id": 0,
Nov 25 16:17:01 compute-0 funny_leakey[259107]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:17:01 compute-0 funny_leakey[259107]:         "type": "bluestore"
Nov 25 16:17:01 compute-0 funny_leakey[259107]:     }
Nov 25 16:17:01 compute-0 funny_leakey[259107]: }
Nov 25 16:17:01 compute-0 systemd[1]: libpod-735a1319522f660cc0b5cff404c61e09078bc66fb8e75c8f26e05d7d920d3a09.scope: Deactivated successfully.
Nov 25 16:17:01 compute-0 podman[259140]: 2025-11-25 16:17:01.597660806 +0000 UTC m=+0.020430432 container died 735a1319522f660cc0b5cff404c61e09078bc66fb8e75c8f26e05d7d920d3a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 16:17:01 compute-0 ceph-mon[74985]: pgmap v839: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-612c6468b70989205208bded1a753eabda9d6e2ac9f51f23aee652cc8b8499e9-merged.mount: Deactivated successfully.
Nov 25 16:17:01 compute-0 podman[259140]: 2025-11-25 16:17:01.673609208 +0000 UTC m=+0.096378834 container remove 735a1319522f660cc0b5cff404c61e09078bc66fb8e75c8f26e05d7d920d3a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_leakey, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 16:17:01 compute-0 systemd[1]: libpod-conmon-735a1319522f660cc0b5cff404c61e09078bc66fb8e75c8f26e05d7d920d3a09.scope: Deactivated successfully.
Nov 25 16:17:01 compute-0 sudo[258986]: pam_unix(sudo:session): session closed for user root
Nov 25 16:17:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:17:01 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:17:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:17:01 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:17:01 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev ba3947af-46a3-4f74-afff-8023cd600b1c does not exist
Nov 25 16:17:01 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 484b273f-f87d-4f23-b532-954490120cfb does not exist
Nov 25 16:17:01 compute-0 sudo[259155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:17:01 compute-0 sudo[259155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:17:01 compute-0 sudo[259155]: pam_unix(sudo:session): session closed for user root
Nov 25 16:17:01 compute-0 sudo[259180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 16:17:01 compute-0 sudo[259180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:17:01 compute-0 sudo[259180]: pam_unix(sudo:session): session closed for user root
Nov 25 16:17:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v840: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:02 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:17:02 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:17:03 compute-0 ceph-mon[74985]: pgmap v840: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v841: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:04 compute-0 ceph-mon[74985]: pgmap v841: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:17:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v842: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:07 compute-0 ceph-mon[74985]: pgmap v842: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v843: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:09 compute-0 ceph-mon[74985]: pgmap v843: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:17:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:17:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:17:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:17:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:17:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:17:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:17:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v844: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:11 compute-0 ceph-mon[74985]: pgmap v844: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v845: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:17:13.585 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:17:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:17:13.586 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:17:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:17:13.586 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:17:13 compute-0 ceph-mon[74985]: pgmap v845: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v846: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:17:15 compute-0 ceph-mon[74985]: pgmap v846: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v847: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:17 compute-0 ceph-mon[74985]: pgmap v847: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v848: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:19 compute-0 ceph-mon[74985]: pgmap v848: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:17:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v849: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:21 compute-0 ceph-mon[74985]: pgmap v849: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v850: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:23 compute-0 ceph-mon[74985]: pgmap v850: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v851: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:17:25 compute-0 ceph-mon[74985]: pgmap v851: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v852: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:27 compute-0 ceph-mon[74985]: pgmap v852: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v853: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:29 compute-0 ceph-mon[74985]: pgmap v853: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:17:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v854: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:30 compute-0 podman[259206]: 2025-11-25 16:17:30.666742976 +0000 UTC m=+0.076818245 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd)
Nov 25 16:17:30 compute-0 podman[259207]: 2025-11-25 16:17:30.675718439 +0000 UTC m=+0.085982753 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 16:17:30 compute-0 podman[259208]: 2025-11-25 16:17:30.685521224 +0000 UTC m=+0.089734694 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 25 16:17:31 compute-0 ceph-mon[74985]: pgmap v854: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v855: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:33 compute-0 ceph-mon[74985]: pgmap v855: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v856: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:17:35 compute-0 ceph-mon[74985]: pgmap v856: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v857: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:37 compute-0 ceph-mon[74985]: pgmap v857: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v858: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:39 compute-0 nova_compute[254092]: 2025-11-25 16:17:39.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:17:39 compute-0 nova_compute[254092]: 2025-11-25 16:17:39.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 16:17:39 compute-0 nova_compute[254092]: 2025-11-25 16:17:39.533 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 16:17:39 compute-0 nova_compute[254092]: 2025-11-25 16:17:39.533 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:17:39 compute-0 nova_compute[254092]: 2025-11-25 16:17:39.533 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 16:17:39 compute-0 nova_compute[254092]: 2025-11-25 16:17:39.559 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:17:39 compute-0 ceph-mon[74985]: pgmap v858: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:17:40
Nov 25 16:17:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:17:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:17:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.meta', 'vms', 'volumes', 'backups', 'images', 'default.rgw.meta', '.mgr', '.rgw.root']
Nov 25 16:17:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:17:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:17:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:17:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:17:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:17:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:17:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:17:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:17:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:17:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:17:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:17:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:17:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:17:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:17:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:17:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:17:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:17:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:17:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v859: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:40 compute-0 nova_compute[254092]: 2025-11-25 16:17:40.578 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:17:40 compute-0 nova_compute[254092]: 2025-11-25 16:17:40.616 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:17:40 compute-0 nova_compute[254092]: 2025-11-25 16:17:40.616 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:17:40 compute-0 nova_compute[254092]: 2025-11-25 16:17:40.617 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:17:40 compute-0 nova_compute[254092]: 2025-11-25 16:17:40.617 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 16:17:40 compute-0 nova_compute[254092]: 2025-11-25 16:17:40.617 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:17:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:17:41 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/520662425' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:17:41 compute-0 nova_compute[254092]: 2025-11-25 16:17:41.040 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:17:41 compute-0 nova_compute[254092]: 2025-11-25 16:17:41.206 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:17:41 compute-0 nova_compute[254092]: 2025-11-25 16:17:41.207 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5184MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 16:17:41 compute-0 nova_compute[254092]: 2025-11-25 16:17:41.208 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:17:41 compute-0 nova_compute[254092]: 2025-11-25 16:17:41.208 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:17:41 compute-0 nova_compute[254092]: 2025-11-25 16:17:41.274 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 16:17:41 compute-0 nova_compute[254092]: 2025-11-25 16:17:41.274 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 16:17:41 compute-0 nova_compute[254092]: 2025-11-25 16:17:41.294 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:17:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:17:41 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3592436445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:17:41 compute-0 ceph-mon[74985]: pgmap v859: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:41 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/520662425' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:17:41 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3592436445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:17:41 compute-0 nova_compute[254092]: 2025-11-25 16:17:41.764 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:17:41 compute-0 nova_compute[254092]: 2025-11-25 16:17:41.773 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:17:41 compute-0 nova_compute[254092]: 2025-11-25 16:17:41.791 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:17:41 compute-0 nova_compute[254092]: 2025-11-25 16:17:41.794 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 16:17:41 compute-0 nova_compute[254092]: 2025-11-25 16:17:41.794 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:17:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v860: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:43 compute-0 nova_compute[254092]: 2025-11-25 16:17:43.712 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:17:43 compute-0 nova_compute[254092]: 2025-11-25 16:17:43.712 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:17:43 compute-0 nova_compute[254092]: 2025-11-25 16:17:43.713 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:17:43 compute-0 nova_compute[254092]: 2025-11-25 16:17:43.713 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:17:43 compute-0 nova_compute[254092]: 2025-11-25 16:17:43.713 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:17:43 compute-0 ceph-mon[74985]: pgmap v860: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:44 compute-0 nova_compute[254092]: 2025-11-25 16:17:44.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:17:44 compute-0 nova_compute[254092]: 2025-11-25 16:17:44.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 16:17:44 compute-0 nova_compute[254092]: 2025-11-25 16:17:44.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 16:17:44 compute-0 nova_compute[254092]: 2025-11-25 16:17:44.513 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 16:17:44 compute-0 nova_compute[254092]: 2025-11-25 16:17:44.514 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:17:44 compute-0 nova_compute[254092]: 2025-11-25 16:17:44.514 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 16:17:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v861: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:44 compute-0 ceph-mon[74985]: pgmap v861: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:17:45 compute-0 nova_compute[254092]: 2025-11-25 16:17:45.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:17:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v862: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:47 compute-0 ceph-mon[74985]: pgmap v862: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v863: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:49 compute-0 ceph-mon[74985]: pgmap v863: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:17:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v864: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:17:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:17:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:17:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:17:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:17:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:17:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:17:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:17:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:17:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:17:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:17:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:17:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 16:17:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:17:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:17:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:17:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 16:17:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:17:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 16:17:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:17:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:17:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:17:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 16:17:51 compute-0 ceph-mon[74985]: pgmap v864: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v865: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:53 compute-0 ceph-mon[74985]: pgmap v865: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v866: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 16:17:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1225908457' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:17:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 16:17:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1225908457' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:17:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:17:55 compute-0 ceph-mon[74985]: pgmap v866: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/1225908457' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:17:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/1225908457' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:17:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v867: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:57 compute-0 ceph-mon[74985]: pgmap v867: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v868: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:17:59 compute-0 ceph-mon[74985]: pgmap v868: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:18:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v869: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:01 compute-0 ceph-mon[74985]: pgmap v869: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:01 compute-0 podman[259315]: 2025-11-25 16:18:01.68253941 +0000 UTC m=+0.091384259 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:18:01 compute-0 podman[259317]: 2025-11-25 16:18:01.699857518 +0000 UTC m=+0.102899181 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:18:01 compute-0 podman[259316]: 2025-11-25 16:18:01.701724928 +0000 UTC m=+0.104550925 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 16:18:02 compute-0 sudo[259382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:18:02 compute-0 sudo[259382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:18:02 compute-0 sudo[259382]: pam_unix(sudo:session): session closed for user root
Nov 25 16:18:02 compute-0 sudo[259407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:18:02 compute-0 sudo[259407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:18:02 compute-0 sudo[259407]: pam_unix(sudo:session): session closed for user root
Nov 25 16:18:02 compute-0 sudo[259432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:18:02 compute-0 sudo[259432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:18:02 compute-0 sudo[259432]: pam_unix(sudo:session): session closed for user root
Nov 25 16:18:02 compute-0 sudo[259457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 16:18:02 compute-0 sudo[259457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:18:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v870: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:02 compute-0 sudo[259457]: pam_unix(sudo:session): session closed for user root
Nov 25 16:18:02 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:18:02 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:18:02 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 16:18:02 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:18:02 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 16:18:02 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:18:02 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 11f74cfa-aa01-4b8e-b12d-9e912439092a does not exist
Nov 25 16:18:02 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 22df2f84-7673-4fc4-9617-f5ad03d5d0b3 does not exist
Nov 25 16:18:02 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 21e659e9-0922-4c62-9c5a-446a5ae6ff69 does not exist
Nov 25 16:18:02 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 16:18:02 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:18:02 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 16:18:02 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:18:02 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:18:02 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:18:02 compute-0 sudo[259512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:18:02 compute-0 sudo[259512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:18:02 compute-0 sudo[259512]: pam_unix(sudo:session): session closed for user root
Nov 25 16:18:03 compute-0 sudo[259537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:18:03 compute-0 sudo[259537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:18:03 compute-0 sudo[259537]: pam_unix(sudo:session): session closed for user root
Nov 25 16:18:03 compute-0 sudo[259562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:18:03 compute-0 sudo[259562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:18:03 compute-0 sudo[259562]: pam_unix(sudo:session): session closed for user root
Nov 25 16:18:03 compute-0 sudo[259587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 16:18:03 compute-0 sudo[259587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:18:03 compute-0 podman[259651]: 2025-11-25 16:18:03.567471906 +0000 UTC m=+0.052033157 container create 016d08efd3cab79c94893525f732a5e13519764a5e970338c9e3c77646f857a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_davinci, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:18:03 compute-0 systemd[1]: Started libpod-conmon-016d08efd3cab79c94893525f732a5e13519764a5e970338c9e3c77646f857a5.scope.
Nov 25 16:18:03 compute-0 podman[259651]: 2025-11-25 16:18:03.542385938 +0000 UTC m=+0.026947279 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:18:03 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:18:03 compute-0 podman[259651]: 2025-11-25 16:18:03.658017271 +0000 UTC m=+0.142578582 container init 016d08efd3cab79c94893525f732a5e13519764a5e970338c9e3c77646f857a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_davinci, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:18:03 compute-0 podman[259651]: 2025-11-25 16:18:03.664650819 +0000 UTC m=+0.149212060 container start 016d08efd3cab79c94893525f732a5e13519764a5e970338c9e3c77646f857a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_davinci, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 16:18:03 compute-0 podman[259651]: 2025-11-25 16:18:03.667761423 +0000 UTC m=+0.152322734 container attach 016d08efd3cab79c94893525f732a5e13519764a5e970338c9e3c77646f857a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_davinci, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:18:03 compute-0 ceph-mon[74985]: pgmap v870: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:03 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:18:03 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:18:03 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:18:03 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:18:03 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:18:03 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:18:03 compute-0 boring_davinci[259667]: 167 167
Nov 25 16:18:03 compute-0 systemd[1]: libpod-016d08efd3cab79c94893525f732a5e13519764a5e970338c9e3c77646f857a5.scope: Deactivated successfully.
Nov 25 16:18:03 compute-0 podman[259651]: 2025-11-25 16:18:03.66985312 +0000 UTC m=+0.154414371 container died 016d08efd3cab79c94893525f732a5e13519764a5e970338c9e3c77646f857a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_davinci, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:18:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-2bc8817d10a833dc6061df6bd8aa5bec2cd6f3e931c3eda80506e4d48a81cd75-merged.mount: Deactivated successfully.
Nov 25 16:18:03 compute-0 podman[259651]: 2025-11-25 16:18:03.708867324 +0000 UTC m=+0.193428565 container remove 016d08efd3cab79c94893525f732a5e13519764a5e970338c9e3c77646f857a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_davinci, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:18:03 compute-0 systemd[1]: libpod-conmon-016d08efd3cab79c94893525f732a5e13519764a5e970338c9e3c77646f857a5.scope: Deactivated successfully.
Nov 25 16:18:03 compute-0 podman[259691]: 2025-11-25 16:18:03.920136039 +0000 UTC m=+0.085651224 container create 65deb9e9a81cbfeeb1a42eb8fd35fc7406a80833629d89d3171d7ee76d463608 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:18:03 compute-0 systemd[1]: Started libpod-conmon-65deb9e9a81cbfeeb1a42eb8fd35fc7406a80833629d89d3171d7ee76d463608.scope.
Nov 25 16:18:03 compute-0 podman[259691]: 2025-11-25 16:18:03.902615906 +0000 UTC m=+0.068131171 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:18:04 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:18:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b270f2baf505896dee0313214b2d1cc96845c1513fd4d6f5477f56fe2838e2b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:18:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b270f2baf505896dee0313214b2d1cc96845c1513fd4d6f5477f56fe2838e2b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:18:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b270f2baf505896dee0313214b2d1cc96845c1513fd4d6f5477f56fe2838e2b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:18:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b270f2baf505896dee0313214b2d1cc96845c1513fd4d6f5477f56fe2838e2b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:18:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b270f2baf505896dee0313214b2d1cc96845c1513fd4d6f5477f56fe2838e2b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 16:18:04 compute-0 podman[259691]: 2025-11-25 16:18:04.024972061 +0000 UTC m=+0.190487276 container init 65deb9e9a81cbfeeb1a42eb8fd35fc7406a80833629d89d3171d7ee76d463608 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_montalcini, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:18:04 compute-0 podman[259691]: 2025-11-25 16:18:04.035121785 +0000 UTC m=+0.200636980 container start 65deb9e9a81cbfeeb1a42eb8fd35fc7406a80833629d89d3171d7ee76d463608 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_montalcini, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:18:04 compute-0 podman[259691]: 2025-11-25 16:18:04.03865587 +0000 UTC m=+0.204171065 container attach 65deb9e9a81cbfeeb1a42eb8fd35fc7406a80833629d89d3171d7ee76d463608 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:18:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v871: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:05 compute-0 nice_montalcini[259707]: --> passed data devices: 0 physical, 3 LVM
Nov 25 16:18:05 compute-0 nice_montalcini[259707]: --> relative data size: 1.0
Nov 25 16:18:05 compute-0 nice_montalcini[259707]: --> All data devices are unavailable
Nov 25 16:18:05 compute-0 systemd[1]: libpod-65deb9e9a81cbfeeb1a42eb8fd35fc7406a80833629d89d3171d7ee76d463608.scope: Deactivated successfully.
Nov 25 16:18:05 compute-0 podman[259691]: 2025-11-25 16:18:05.084075234 +0000 UTC m=+1.249590429 container died 65deb9e9a81cbfeeb1a42eb8fd35fc7406a80833629d89d3171d7ee76d463608 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 16:18:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b270f2baf505896dee0313214b2d1cc96845c1513fd4d6f5477f56fe2838e2b-merged.mount: Deactivated successfully.
Nov 25 16:18:05 compute-0 podman[259691]: 2025-11-25 16:18:05.21278196 +0000 UTC m=+1.378297155 container remove 65deb9e9a81cbfeeb1a42eb8fd35fc7406a80833629d89d3171d7ee76d463608 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_montalcini, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:18:05 compute-0 systemd[1]: libpod-conmon-65deb9e9a81cbfeeb1a42eb8fd35fc7406a80833629d89d3171d7ee76d463608.scope: Deactivated successfully.
Nov 25 16:18:05 compute-0 sudo[259587]: pam_unix(sudo:session): session closed for user root
Nov 25 16:18:05 compute-0 sudo[259750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:18:05 compute-0 sudo[259750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:18:05 compute-0 sudo[259750]: pam_unix(sudo:session): session closed for user root
Nov 25 16:18:05 compute-0 sudo[259775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:18:05 compute-0 sudo[259775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:18:05 compute-0 sudo[259775]: pam_unix(sudo:session): session closed for user root
Nov 25 16:18:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:18:05 compute-0 sudo[259800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:18:05 compute-0 sudo[259800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:18:05 compute-0 sudo[259800]: pam_unix(sudo:session): session closed for user root
Nov 25 16:18:05 compute-0 sudo[259825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 16:18:05 compute-0 sudo[259825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:18:05 compute-0 ceph-mon[74985]: pgmap v871: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:05 compute-0 podman[259891]: 2025-11-25 16:18:05.948114748 +0000 UTC m=+0.050102184 container create ad8d7e60fa15d57b910b2cd85180e83ed715c37c3ffcd229b36d3457cc9b6983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bassi, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:18:05 compute-0 systemd[1]: Started libpod-conmon-ad8d7e60fa15d57b910b2cd85180e83ed715c37c3ffcd229b36d3457cc9b6983.scope.
Nov 25 16:18:06 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:18:06 compute-0 podman[259891]: 2025-11-25 16:18:05.92450889 +0000 UTC m=+0.026496426 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:18:06 compute-0 podman[259891]: 2025-11-25 16:18:06.024498091 +0000 UTC m=+0.126485547 container init ad8d7e60fa15d57b910b2cd85180e83ed715c37c3ffcd229b36d3457cc9b6983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bassi, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:18:06 compute-0 podman[259891]: 2025-11-25 16:18:06.036020902 +0000 UTC m=+0.138008378 container start ad8d7e60fa15d57b910b2cd85180e83ed715c37c3ffcd229b36d3457cc9b6983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 16:18:06 compute-0 systemd[1]: libpod-ad8d7e60fa15d57b910b2cd85180e83ed715c37c3ffcd229b36d3457cc9b6983.scope: Deactivated successfully.
Nov 25 16:18:06 compute-0 great_bassi[259907]: 167 167
Nov 25 16:18:06 compute-0 conmon[259907]: conmon ad8d7e60fa15d57b910b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ad8d7e60fa15d57b910b2cd85180e83ed715c37c3ffcd229b36d3457cc9b6983.scope/container/memory.events
Nov 25 16:18:06 compute-0 podman[259891]: 2025-11-25 16:18:06.040988357 +0000 UTC m=+0.142975813 container attach ad8d7e60fa15d57b910b2cd85180e83ed715c37c3ffcd229b36d3457cc9b6983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 16:18:06 compute-0 podman[259891]: 2025-11-25 16:18:06.041363267 +0000 UTC m=+0.143350713 container died ad8d7e60fa15d57b910b2cd85180e83ed715c37c3ffcd229b36d3457cc9b6983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bassi, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:18:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d0a9caadac5e3d9bb71a1d14cf9b16f08eb120a8a38baeefd86855d5243c920-merged.mount: Deactivated successfully.
Nov 25 16:18:06 compute-0 podman[259891]: 2025-11-25 16:18:06.079942228 +0000 UTC m=+0.181929684 container remove ad8d7e60fa15d57b910b2cd85180e83ed715c37c3ffcd229b36d3457cc9b6983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:18:06 compute-0 systemd[1]: libpod-conmon-ad8d7e60fa15d57b910b2cd85180e83ed715c37c3ffcd229b36d3457cc9b6983.scope: Deactivated successfully.
Nov 25 16:18:06 compute-0 podman[259929]: 2025-11-25 16:18:06.250290759 +0000 UTC m=+0.040663039 container create 6880f02b4bee1eab3f254dbc28d49071f4360f45dea1229462ebf0b6daf5bf3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:18:06 compute-0 systemd[1]: Started libpod-conmon-6880f02b4bee1eab3f254dbc28d49071f4360f45dea1229462ebf0b6daf5bf3a.scope.
Nov 25 16:18:06 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:18:06 compute-0 podman[259929]: 2025-11-25 16:18:06.230984457 +0000 UTC m=+0.021356727 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:18:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2c504af9623e83db56251e27408fb00ecf6fcc1c0fdac9fc7834a409c1d14a5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:18:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2c504af9623e83db56251e27408fb00ecf6fcc1c0fdac9fc7834a409c1d14a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:18:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2c504af9623e83db56251e27408fb00ecf6fcc1c0fdac9fc7834a409c1d14a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:18:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2c504af9623e83db56251e27408fb00ecf6fcc1c0fdac9fc7834a409c1d14a5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:18:06 compute-0 podman[259929]: 2025-11-25 16:18:06.347986317 +0000 UTC m=+0.138358587 container init 6880f02b4bee1eab3f254dbc28d49071f4360f45dea1229462ebf0b6daf5bf3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_vaughan, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:18:06 compute-0 podman[259929]: 2025-11-25 16:18:06.360161827 +0000 UTC m=+0.150534107 container start 6880f02b4bee1eab3f254dbc28d49071f4360f45dea1229462ebf0b6daf5bf3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_vaughan, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:18:06 compute-0 podman[259929]: 2025-11-25 16:18:06.363798205 +0000 UTC m=+0.154170465 container attach 6880f02b4bee1eab3f254dbc28d49071f4360f45dea1229462ebf0b6daf5bf3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:18:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v872: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]: {
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:     "0": [
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:         {
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:             "devices": [
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:                 "/dev/loop3"
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:             ],
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:             "lv_name": "ceph_lv0",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:             "lv_size": "21470642176",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:             "name": "ceph_lv0",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:             "tags": {
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:                 "ceph.cluster_name": "ceph",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:                 "ceph.crush_device_class": "",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:                 "ceph.encrypted": "0",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:                 "ceph.osd_id": "0",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:                 "ceph.type": "block",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:                 "ceph.vdo": "0"
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:             },
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:             "type": "block",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:             "vg_name": "ceph_vg0"
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:         }
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:     ],
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:     "1": [
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:         {
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:             "devices": [
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:                 "/dev/loop4"
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:             ],
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:             "lv_name": "ceph_lv1",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:             "lv_size": "21470642176",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:             "name": "ceph_lv1",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:             "tags": {
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:                 "ceph.cluster_name": "ceph",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:                 "ceph.crush_device_class": "",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:                 "ceph.encrypted": "0",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:                 "ceph.osd_id": "1",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:                 "ceph.type": "block",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:                 "ceph.vdo": "0"
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:             },
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:             "type": "block",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:             "vg_name": "ceph_vg1"
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:         }
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:     ],
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:     "2": [
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:         {
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:             "devices": [
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:                 "/dev/loop5"
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:             ],
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:             "lv_name": "ceph_lv2",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:             "lv_size": "21470642176",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:             "name": "ceph_lv2",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:             "tags": {
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:                 "ceph.cluster_name": "ceph",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:                 "ceph.crush_device_class": "",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:                 "ceph.encrypted": "0",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:                 "ceph.osd_id": "2",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:                 "ceph.type": "block",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:                 "ceph.vdo": "0"
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:             },
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:             "type": "block",
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:             "vg_name": "ceph_vg2"
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:         }
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]:     ]
Nov 25 16:18:07 compute-0 distracted_vaughan[259946]: }
Nov 25 16:18:07 compute-0 systemd[1]: libpod-6880f02b4bee1eab3f254dbc28d49071f4360f45dea1229462ebf0b6daf5bf3a.scope: Deactivated successfully.
Nov 25 16:18:07 compute-0 podman[259929]: 2025-11-25 16:18:07.11934094 +0000 UTC m=+0.909713240 container died 6880f02b4bee1eab3f254dbc28d49071f4360f45dea1229462ebf0b6daf5bf3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_vaughan, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:18:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2c504af9623e83db56251e27408fb00ecf6fcc1c0fdac9fc7834a409c1d14a5-merged.mount: Deactivated successfully.
Nov 25 16:18:07 compute-0 podman[259929]: 2025-11-25 16:18:07.183929644 +0000 UTC m=+0.974301924 container remove 6880f02b4bee1eab3f254dbc28d49071f4360f45dea1229462ebf0b6daf5bf3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_vaughan, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 16:18:07 compute-0 systemd[1]: libpod-conmon-6880f02b4bee1eab3f254dbc28d49071f4360f45dea1229462ebf0b6daf5bf3a.scope: Deactivated successfully.
Nov 25 16:18:07 compute-0 sudo[259825]: pam_unix(sudo:session): session closed for user root
Nov 25 16:18:07 compute-0 sudo[259970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:18:07 compute-0 sudo[259970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:18:07 compute-0 sudo[259970]: pam_unix(sudo:session): session closed for user root
Nov 25 16:18:07 compute-0 sudo[259995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:18:07 compute-0 sudo[259995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:18:07 compute-0 sudo[259995]: pam_unix(sudo:session): session closed for user root
Nov 25 16:18:07 compute-0 sudo[260020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:18:07 compute-0 sudo[260020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:18:07 compute-0 sudo[260020]: pam_unix(sudo:session): session closed for user root
Nov 25 16:18:07 compute-0 sudo[260045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 16:18:07 compute-0 sudo[260045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:18:07 compute-0 ceph-mon[74985]: pgmap v872: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:07 compute-0 podman[260111]: 2025-11-25 16:18:07.977820264 +0000 UTC m=+0.060092514 container create 0981ed85df86dd9e5438914051364902158a65f5d9eb44a2d993039f859ac310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_almeida, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Nov 25 16:18:08 compute-0 systemd[1]: Started libpod-conmon-0981ed85df86dd9e5438914051364902158a65f5d9eb44a2d993039f859ac310.scope.
Nov 25 16:18:08 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:18:08 compute-0 podman[260111]: 2025-11-25 16:18:07.958019189 +0000 UTC m=+0.040291429 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:18:08 compute-0 podman[260111]: 2025-11-25 16:18:08.066480159 +0000 UTC m=+0.148752429 container init 0981ed85df86dd9e5438914051364902158a65f5d9eb44a2d993039f859ac310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_almeida, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:18:08 compute-0 podman[260111]: 2025-11-25 16:18:08.076083198 +0000 UTC m=+0.158355448 container start 0981ed85df86dd9e5438914051364902158a65f5d9eb44a2d993039f859ac310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 16:18:08 compute-0 podman[260111]: 2025-11-25 16:18:08.079374086 +0000 UTC m=+0.161646386 container attach 0981ed85df86dd9e5438914051364902158a65f5d9eb44a2d993039f859ac310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_almeida, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:18:08 compute-0 thirsty_almeida[260127]: 167 167
Nov 25 16:18:08 compute-0 systemd[1]: libpod-0981ed85df86dd9e5438914051364902158a65f5d9eb44a2d993039f859ac310.scope: Deactivated successfully.
Nov 25 16:18:08 compute-0 podman[260111]: 2025-11-25 16:18:08.084021082 +0000 UTC m=+0.166293342 container died 0981ed85df86dd9e5438914051364902158a65f5d9eb44a2d993039f859ac310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_almeida, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 16:18:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-098a43f94108413b595874b99fc9c95e16b8d326155f37e6224ade0d2ee92ed4-merged.mount: Deactivated successfully.
Nov 25 16:18:08 compute-0 podman[260111]: 2025-11-25 16:18:08.120504707 +0000 UTC m=+0.202776927 container remove 0981ed85df86dd9e5438914051364902158a65f5d9eb44a2d993039f859ac310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_almeida, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 16:18:08 compute-0 systemd[1]: libpod-conmon-0981ed85df86dd9e5438914051364902158a65f5d9eb44a2d993039f859ac310.scope: Deactivated successfully.
Nov 25 16:18:08 compute-0 podman[260150]: 2025-11-25 16:18:08.344396304 +0000 UTC m=+0.046800625 container create 0bd5f5d416857effe0a80bbff92948a62df27def2fceeb04d1495e7d79a88452 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_colden, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:18:08 compute-0 systemd[1]: Started libpod-conmon-0bd5f5d416857effe0a80bbff92948a62df27def2fceeb04d1495e7d79a88452.scope.
Nov 25 16:18:08 compute-0 podman[260150]: 2025-11-25 16:18:08.320416976 +0000 UTC m=+0.022821307 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:18:08 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:18:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e16366b52a8072a077d04be21d6db0e4c888c3c2065d52c50e47df6772da9b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:18:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e16366b52a8072a077d04be21d6db0e4c888c3c2065d52c50e47df6772da9b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:18:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e16366b52a8072a077d04be21d6db0e4c888c3c2065d52c50e47df6772da9b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:18:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e16366b52a8072a077d04be21d6db0e4c888c3c2065d52c50e47df6772da9b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:18:08 compute-0 podman[260150]: 2025-11-25 16:18:08.454409315 +0000 UTC m=+0.156813636 container init 0bd5f5d416857effe0a80bbff92948a62df27def2fceeb04d1495e7d79a88452 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_colden, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:18:08 compute-0 podman[260150]: 2025-11-25 16:18:08.460096998 +0000 UTC m=+0.162501289 container start 0bd5f5d416857effe0a80bbff92948a62df27def2fceeb04d1495e7d79a88452 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_colden, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:18:08 compute-0 podman[260150]: 2025-11-25 16:18:08.463936112 +0000 UTC m=+0.166340413 container attach 0bd5f5d416857effe0a80bbff92948a62df27def2fceeb04d1495e7d79a88452 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Nov 25 16:18:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v873: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:09 compute-0 gracious_colden[260166]: {
Nov 25 16:18:09 compute-0 gracious_colden[260166]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 16:18:09 compute-0 gracious_colden[260166]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:18:09 compute-0 gracious_colden[260166]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 16:18:09 compute-0 gracious_colden[260166]:         "osd_id": 1,
Nov 25 16:18:09 compute-0 gracious_colden[260166]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:18:09 compute-0 gracious_colden[260166]:         "type": "bluestore"
Nov 25 16:18:09 compute-0 gracious_colden[260166]:     },
Nov 25 16:18:09 compute-0 gracious_colden[260166]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 16:18:09 compute-0 gracious_colden[260166]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:18:09 compute-0 gracious_colden[260166]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 16:18:09 compute-0 gracious_colden[260166]:         "osd_id": 2,
Nov 25 16:18:09 compute-0 gracious_colden[260166]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:18:09 compute-0 gracious_colden[260166]:         "type": "bluestore"
Nov 25 16:18:09 compute-0 gracious_colden[260166]:     },
Nov 25 16:18:09 compute-0 gracious_colden[260166]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 16:18:09 compute-0 gracious_colden[260166]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:18:09 compute-0 gracious_colden[260166]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 16:18:09 compute-0 gracious_colden[260166]:         "osd_id": 0,
Nov 25 16:18:09 compute-0 gracious_colden[260166]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:18:09 compute-0 gracious_colden[260166]:         "type": "bluestore"
Nov 25 16:18:09 compute-0 gracious_colden[260166]:     }
Nov 25 16:18:09 compute-0 gracious_colden[260166]: }
Nov 25 16:18:09 compute-0 systemd[1]: libpod-0bd5f5d416857effe0a80bbff92948a62df27def2fceeb04d1495e7d79a88452.scope: Deactivated successfully.
Nov 25 16:18:09 compute-0 systemd[1]: libpod-0bd5f5d416857effe0a80bbff92948a62df27def2fceeb04d1495e7d79a88452.scope: Consumed 1.144s CPU time.
Nov 25 16:18:09 compute-0 podman[260150]: 2025-11-25 16:18:09.599337896 +0000 UTC m=+1.301742247 container died 0bd5f5d416857effe0a80bbff92948a62df27def2fceeb04d1495e7d79a88452 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_colden, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:18:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e16366b52a8072a077d04be21d6db0e4c888c3c2065d52c50e47df6772da9b4-merged.mount: Deactivated successfully.
Nov 25 16:18:09 compute-0 podman[260150]: 2025-11-25 16:18:09.67503285 +0000 UTC m=+1.377437151 container remove 0bd5f5d416857effe0a80bbff92948a62df27def2fceeb04d1495e7d79a88452 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 16:18:09 compute-0 systemd[1]: libpod-conmon-0bd5f5d416857effe0a80bbff92948a62df27def2fceeb04d1495e7d79a88452.scope: Deactivated successfully.
Nov 25 16:18:09 compute-0 ceph-mon[74985]: pgmap v873: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:09 compute-0 sudo[260045]: pam_unix(sudo:session): session closed for user root
Nov 25 16:18:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:18:09 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:18:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:18:09 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:18:09 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev a0fef568-ee18-44f0-a441-b573c17694eb does not exist
Nov 25 16:18:09 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 77f1e62b-04d1-487c-82e8-3e3e9288c046 does not exist
Nov 25 16:18:09 compute-0 sudo[260212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:18:09 compute-0 sudo[260212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:18:09 compute-0 sudo[260212]: pam_unix(sudo:session): session closed for user root
Nov 25 16:18:09 compute-0 sudo[260237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 16:18:09 compute-0 sudo[260237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:18:09 compute-0 sudo[260237]: pam_unix(sudo:session): session closed for user root
Nov 25 16:18:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:18:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:18:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:18:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:18:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:18:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:18:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:18:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v874: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:10 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:18:10 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:18:11 compute-0 ceph-mon[74985]: pgmap v874: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v875: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:18:13.586 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:18:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:18:13.587 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:18:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:18:13.588 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:18:13 compute-0 ceph-mon[74985]: pgmap v875: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v876: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:18:15 compute-0 ceph-mon[74985]: pgmap v876: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v877: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:17 compute-0 ceph-mon[74985]: pgmap v877: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v878: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:18 compute-0 ceph-mon[74985]: pgmap v878: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:18:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v879: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:21 compute-0 ceph-mon[74985]: pgmap v879: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v880: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:23 compute-0 ceph-mon[74985]: pgmap v880: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v881: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:18:25 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Nov 25 16:18:25 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:25.584992) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 16:18:25 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Nov 25 16:18:25 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087505585036, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 1827, "num_deletes": 252, "total_data_size": 3039018, "memory_usage": 3084856, "flush_reason": "Manual Compaction"}
Nov 25 16:18:25 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Nov 25 16:18:25 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087505611892, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 1730282, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16471, "largest_seqno": 18297, "table_properties": {"data_size": 1724297, "index_size": 2996, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15145, "raw_average_key_size": 20, "raw_value_size": 1711044, "raw_average_value_size": 2281, "num_data_blocks": 139, "num_entries": 750, "num_filter_entries": 750, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764087303, "oldest_key_time": 1764087303, "file_creation_time": 1764087505, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:18:25 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 26998 microseconds, and 5801 cpu microseconds.
Nov 25 16:18:25 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:18:25 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:25.611989) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 1730282 bytes OK
Nov 25 16:18:25 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:25.612015) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Nov 25 16:18:25 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:25.613794) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Nov 25 16:18:25 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:25.613824) EVENT_LOG_v1 {"time_micros": 1764087505613814, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 16:18:25 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:25.613851) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 16:18:25 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3031257, prev total WAL file size 3031257, number of live WAL files 2.
Nov 25 16:18:25 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:18:25 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:25.615782) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353034' seq:72057594037927935, type:22 .. '6D67727374617400373537' seq:0, type:0; will stop at (end)
Nov 25 16:18:25 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 16:18:25 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(1689KB)], [38(7831KB)]
Nov 25 16:18:25 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087505615881, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 9749479, "oldest_snapshot_seqno": -1}
Nov 25 16:18:25 compute-0 ceph-mon[74985]: pgmap v881: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:25 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4436 keys, 7672410 bytes, temperature: kUnknown
Nov 25 16:18:25 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087505711551, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 7672410, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7642237, "index_size": 17967, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11141, "raw_key_size": 107826, "raw_average_key_size": 24, "raw_value_size": 7561507, "raw_average_value_size": 1704, "num_data_blocks": 765, "num_entries": 4436, "num_filter_entries": 4436, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764087505, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:18:25 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:18:25 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:25.711823) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 7672410 bytes
Nov 25 16:18:25 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:25.714271) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 101.8 rd, 80.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 7.6 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(10.1) write-amplify(4.4) OK, records in: 4857, records dropped: 421 output_compression: NoCompression
Nov 25 16:18:25 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:25.714287) EVENT_LOG_v1 {"time_micros": 1764087505714279, "job": 18, "event": "compaction_finished", "compaction_time_micros": 95789, "compaction_time_cpu_micros": 20177, "output_level": 6, "num_output_files": 1, "total_output_size": 7672410, "num_input_records": 4857, "num_output_records": 4436, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 16:18:25 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:25.615610) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:18:25 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:25.714529) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:18:25 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:25.714535) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:18:25 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:25.714536) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:18:25 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:25.714538) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:18:25 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:25.714539) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:18:25 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:18:25 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087505716083, "job": 0, "event": "table_file_deletion", "file_number": 40}
Nov 25 16:18:25 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:18:25 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087505717865, "job": 0, "event": "table_file_deletion", "file_number": 38}
Nov 25 16:18:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v882: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:27 compute-0 ceph-mon[74985]: pgmap v882: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v883: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:29 compute-0 ceph-mon[74985]: pgmap v883: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v884: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:30 compute-0 sshd-session[260262]: Connection closed by 20.40.250.19 port 57488
Nov 25 16:18:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:18:30 compute-0 sshd-session[260263]: banner exchange: Connection from 20.40.250.19 port 46500: invalid format
Nov 25 16:18:31 compute-0 ceph-mon[74985]: pgmap v884: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v885: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:32 compute-0 podman[260265]: 2025-11-25 16:18:32.698943461 +0000 UTC m=+0.094771881 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 25 16:18:32 compute-0 podman[260264]: 2025-11-25 16:18:32.70853278 +0000 UTC m=+0.107260158 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:18:32 compute-0 podman[260266]: 2025-11-25 16:18:32.734391978 +0000 UTC m=+0.126005334 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Nov 25 16:18:33 compute-0 ceph-mon[74985]: pgmap v885: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v886: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:34 compute-0 ceph-mon[74985]: pgmap v886: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:18:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v887: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:37 compute-0 ceph-mon[74985]: pgmap v887: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v888: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:39 compute-0 ceph-mon[74985]: pgmap v888: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:18:40
Nov 25 16:18:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:18:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:18:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'vms', 'default.rgw.control', 'volumes', 'backups', 'default.rgw.log', 'images', 'default.rgw.meta', 'cephfs.cephfs.data']
Nov 25 16:18:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:18:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:18:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:18:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:18:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:18:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:18:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:18:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:18:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:18:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:18:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:18:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:18:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:18:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:18:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:18:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:18:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:18:40 compute-0 nova_compute[254092]: 2025-11-25 16:18:40.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:18:40 compute-0 nova_compute[254092]: 2025-11-25 16:18:40.535 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:18:40 compute-0 nova_compute[254092]: 2025-11-25 16:18:40.536 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:18:40 compute-0 nova_compute[254092]: 2025-11-25 16:18:40.536 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:18:40 compute-0 nova_compute[254092]: 2025-11-25 16:18:40.536 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 16:18:40 compute-0 nova_compute[254092]: 2025-11-25 16:18:40.537 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:18:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v889: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:18:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:18:40 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2422752470' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:18:41 compute-0 nova_compute[254092]: 2025-11-25 16:18:41.003 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:18:41 compute-0 nova_compute[254092]: 2025-11-25 16:18:41.146 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:18:41 compute-0 nova_compute[254092]: 2025-11-25 16:18:41.148 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5181MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 16:18:41 compute-0 nova_compute[254092]: 2025-11-25 16:18:41.148 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:18:41 compute-0 nova_compute[254092]: 2025-11-25 16:18:41.148 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:18:41 compute-0 nova_compute[254092]: 2025-11-25 16:18:41.419 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 16:18:41 compute-0 nova_compute[254092]: 2025-11-25 16:18:41.420 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 16:18:41 compute-0 nova_compute[254092]: 2025-11-25 16:18:41.522 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing inventories for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 16:18:41 compute-0 nova_compute[254092]: 2025-11-25 16:18:41.614 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating ProviderTree inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 16:18:41 compute-0 nova_compute[254092]: 2025-11-25 16:18:41.614 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 16:18:41 compute-0 nova_compute[254092]: 2025-11-25 16:18:41.631 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing aggregate associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 16:18:41 compute-0 nova_compute[254092]: 2025-11-25 16:18:41.658 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing trait associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 16:18:41 compute-0 nova_compute[254092]: 2025-11-25 16:18:41.687 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:18:41 compute-0 ceph-mon[74985]: pgmap v889: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:41 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2422752470' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:18:42 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:18:42 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/192207903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:18:42 compute-0 nova_compute[254092]: 2025-11-25 16:18:42.094 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:18:42 compute-0 nova_compute[254092]: 2025-11-25 16:18:42.100 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:18:42 compute-0 nova_compute[254092]: 2025-11-25 16:18:42.118 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:18:42 compute-0 nova_compute[254092]: 2025-11-25 16:18:42.120 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 16:18:42 compute-0 nova_compute[254092]: 2025-11-25 16:18:42.120 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.972s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:18:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v890: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:42 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/192207903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:18:43 compute-0 ceph-mon[74985]: pgmap v890: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:44 compute-0 nova_compute[254092]: 2025-11-25 16:18:44.120 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:18:44 compute-0 nova_compute[254092]: 2025-11-25 16:18:44.134 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:18:44 compute-0 nova_compute[254092]: 2025-11-25 16:18:44.134 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:18:44 compute-0 nova_compute[254092]: 2025-11-25 16:18:44.134 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:18:44 compute-0 nova_compute[254092]: 2025-11-25 16:18:44.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:18:44 compute-0 nova_compute[254092]: 2025-11-25 16:18:44.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:18:44 compute-0 nova_compute[254092]: 2025-11-25 16:18:44.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 16:18:44 compute-0 nova_compute[254092]: 2025-11-25 16:18:44.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 16:18:44 compute-0 nova_compute[254092]: 2025-11-25 16:18:44.513 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 16:18:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v891: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:45 compute-0 nova_compute[254092]: 2025-11-25 16:18:45.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:18:45 compute-0 nova_compute[254092]: 2025-11-25 16:18:45.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:18:45 compute-0 nova_compute[254092]: 2025-11-25 16:18:45.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:18:45 compute-0 nova_compute[254092]: 2025-11-25 16:18:45.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 16:18:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:18:45 compute-0 ceph-mon[74985]: pgmap v891: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v892: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:47 compute-0 ceph-mon[74985]: pgmap v892: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v893: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:49 compute-0 ceph-mon[74985]: pgmap v893: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v894: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:18:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:18:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:18:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:18:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:18:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:18:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:18:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:18:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:18:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:18:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:18:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:18:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:18:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 16:18:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:18:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:18:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:18:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 16:18:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:18:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 16:18:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:18:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:18:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:18:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 16:18:51 compute-0 ceph-mon[74985]: pgmap v894: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v895: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:53 compute-0 ceph-mon[74985]: pgmap v895: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:53 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Nov 25 16:18:53 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:53.782023) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 16:18:53 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Nov 25 16:18:53 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087533782069, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 467, "num_deletes": 251, "total_data_size": 404699, "memory_usage": 413096, "flush_reason": "Manual Compaction"}
Nov 25 16:18:53 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Nov 25 16:18:53 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087533787914, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 400889, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18298, "largest_seqno": 18764, "table_properties": {"data_size": 398229, "index_size": 696, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6270, "raw_average_key_size": 18, "raw_value_size": 393012, "raw_average_value_size": 1169, "num_data_blocks": 33, "num_entries": 336, "num_filter_entries": 336, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764087505, "oldest_key_time": 1764087505, "file_creation_time": 1764087533, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:18:53 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 5941 microseconds, and 2775 cpu microseconds.
Nov 25 16:18:53 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:18:53 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:53.787965) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 400889 bytes OK
Nov 25 16:18:53 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:53.787984) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Nov 25 16:18:53 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:53.790157) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Nov 25 16:18:53 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:53.790179) EVENT_LOG_v1 {"time_micros": 1764087533790172, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 16:18:53 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:53.790198) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 16:18:53 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 401923, prev total WAL file size 401923, number of live WAL files 2.
Nov 25 16:18:53 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:18:53 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:53.790789) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Nov 25 16:18:53 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 16:18:53 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(391KB)], [41(7492KB)]
Nov 25 16:18:53 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087533790836, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 8073299, "oldest_snapshot_seqno": -1}
Nov 25 16:18:53 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4263 keys, 6317089 bytes, temperature: kUnknown
Nov 25 16:18:53 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087533831217, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 6317089, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6289431, "index_size": 15901, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10693, "raw_key_size": 104877, "raw_average_key_size": 24, "raw_value_size": 6213033, "raw_average_value_size": 1457, "num_data_blocks": 669, "num_entries": 4263, "num_filter_entries": 4263, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764087533, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:18:53 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:18:53 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:53.831509) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 6317089 bytes
Nov 25 16:18:53 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:53.833130) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 199.5 rd, 156.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 7.3 +0.0 blob) out(6.0 +0.0 blob), read-write-amplify(35.9) write-amplify(15.8) OK, records in: 4772, records dropped: 509 output_compression: NoCompression
Nov 25 16:18:53 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:53.833157) EVENT_LOG_v1 {"time_micros": 1764087533833145, "job": 20, "event": "compaction_finished", "compaction_time_micros": 40469, "compaction_time_cpu_micros": 27856, "output_level": 6, "num_output_files": 1, "total_output_size": 6317089, "num_input_records": 4772, "num_output_records": 4263, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 16:18:53 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:18:53 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087533833420, "job": 20, "event": "table_file_deletion", "file_number": 43}
Nov 25 16:18:53 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:18:53 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087533835701, "job": 20, "event": "table_file_deletion", "file_number": 41}
Nov 25 16:18:53 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:53.790674) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:18:53 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:53.835836) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:18:53 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:53.835846) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:18:53 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:53.835850) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:18:53 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:53.835853) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:18:53 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:18:53.835856) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:18:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v896: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:54 compute-0 ceph-mon[74985]: pgmap v896: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 16:18:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3904060504' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:18:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 16:18:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3904060504' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:18:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:18:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/3904060504' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:18:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/3904060504' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:18:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v897: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:56 compute-0 ceph-mon[74985]: pgmap v897: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v898: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:18:59 compute-0 ceph-mon[74985]: pgmap v898: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v899: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:19:01 compute-0 ceph-mon[74985]: pgmap v899: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v900: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:03 compute-0 podman[260368]: 2025-11-25 16:19:03.635528249 +0000 UTC m=+0.055019110 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 16:19:03 compute-0 ceph-mon[74985]: pgmap v900: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:03 compute-0 podman[260367]: 2025-11-25 16:19:03.668979984 +0000 UTC m=+0.089801340 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:19:03 compute-0 podman[260369]: 2025-11-25 16:19:03.680618028 +0000 UTC m=+0.092795190 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 25 16:19:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v901: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:19:05 compute-0 ceph-mon[74985]: pgmap v901: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v902: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:07 compute-0 ceph-mon[74985]: pgmap v902: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v903: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:09 compute-0 ceph-mon[74985]: pgmap v903: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:19:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:19:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:19:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:19:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:19:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:19:10 compute-0 sudo[260426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:19:10 compute-0 sudo[260426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:19:10 compute-0 sudo[260426]: pam_unix(sudo:session): session closed for user root
Nov 25 16:19:10 compute-0 sudo[260451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:19:10 compute-0 sudo[260451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:19:10 compute-0 sudo[260451]: pam_unix(sudo:session): session closed for user root
Nov 25 16:19:10 compute-0 sudo[260476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:19:10 compute-0 sudo[260476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:19:10 compute-0 sudo[260476]: pam_unix(sudo:session): session closed for user root
Nov 25 16:19:10 compute-0 sudo[260501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 16:19:10 compute-0 sudo[260501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:19:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v904: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:19:10 compute-0 sudo[260501]: pam_unix(sudo:session): session closed for user root
Nov 25 16:19:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:19:10 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:19:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 16:19:10 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:19:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 16:19:10 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:19:10 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 366c5ad1-a771-4256-bbbd-410c879c56bd does not exist
Nov 25 16:19:10 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 71a72332-ba7d-4f78-b278-87048da49e50 does not exist
Nov 25 16:19:10 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 38d26483-42a2-43fa-aeea-f3060b6b91d9 does not exist
Nov 25 16:19:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 16:19:10 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:19:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 16:19:10 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:19:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:19:10 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:19:10 compute-0 sudo[260557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:19:10 compute-0 sudo[260557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:19:10 compute-0 sudo[260557]: pam_unix(sudo:session): session closed for user root
Nov 25 16:19:11 compute-0 sudo[260582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:19:11 compute-0 sudo[260582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:19:11 compute-0 sudo[260582]: pam_unix(sudo:session): session closed for user root
Nov 25 16:19:11 compute-0 sudo[260607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:19:11 compute-0 sudo[260607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:19:11 compute-0 sudo[260607]: pam_unix(sudo:session): session closed for user root
Nov 25 16:19:11 compute-0 sudo[260632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 16:19:11 compute-0 sudo[260632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:19:11 compute-0 podman[260697]: 2025-11-25 16:19:11.51916062 +0000 UTC m=+0.068594126 container create ab0d6aa742e77e35217af76375f80eb8a57f9f6555f824c0c038f51e225bcb55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 16:19:11 compute-0 systemd[1]: Started libpod-conmon-ab0d6aa742e77e35217af76375f80eb8a57f9f6555f824c0c038f51e225bcb55.scope.
Nov 25 16:19:11 compute-0 podman[260697]: 2025-11-25 16:19:11.491954394 +0000 UTC m=+0.041387960 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:19:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:19:11 compute-0 podman[260697]: 2025-11-25 16:19:11.622140566 +0000 UTC m=+0.171574162 container init ab0d6aa742e77e35217af76375f80eb8a57f9f6555f824c0c038f51e225bcb55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 16:19:11 compute-0 podman[260697]: 2025-11-25 16:19:11.63635736 +0000 UTC m=+0.185790896 container start ab0d6aa742e77e35217af76375f80eb8a57f9f6555f824c0c038f51e225bcb55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lichterman, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Nov 25 16:19:11 compute-0 frosty_lichterman[260713]: 167 167
Nov 25 16:19:11 compute-0 podman[260697]: 2025-11-25 16:19:11.642255689 +0000 UTC m=+0.191689225 container attach ab0d6aa742e77e35217af76375f80eb8a57f9f6555f824c0c038f51e225bcb55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lichterman, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:19:11 compute-0 systemd[1]: libpod-ab0d6aa742e77e35217af76375f80eb8a57f9f6555f824c0c038f51e225bcb55.scope: Deactivated successfully.
Nov 25 16:19:11 compute-0 podman[260697]: 2025-11-25 16:19:11.64743322 +0000 UTC m=+0.196866756 container died ab0d6aa742e77e35217af76375f80eb8a57f9f6555f824c0c038f51e225bcb55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 16:19:11 compute-0 ceph-mon[74985]: pgmap v904: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:11 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:19:11 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:19:11 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:19:11 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:19:11 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:19:11 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:19:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c5fc41bb0bd1f7c7ca8b4752f96f2708e488648e4d1bc85f3567c5b5fa863df-merged.mount: Deactivated successfully.
Nov 25 16:19:11 compute-0 podman[260697]: 2025-11-25 16:19:11.725018848 +0000 UTC m=+0.274452354 container remove ab0d6aa742e77e35217af76375f80eb8a57f9f6555f824c0c038f51e225bcb55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:19:11 compute-0 systemd[1]: libpod-conmon-ab0d6aa742e77e35217af76375f80eb8a57f9f6555f824c0c038f51e225bcb55.scope: Deactivated successfully.
Nov 25 16:19:11 compute-0 podman[260737]: 2025-11-25 16:19:11.951776912 +0000 UTC m=+0.062917724 container create 380429c8cb9e2d92f0cebd86fc6c551ddcf677755ec52aa5a972b0b68083b638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_gould, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:19:11 compute-0 systemd[1]: Started libpod-conmon-380429c8cb9e2d92f0cebd86fc6c551ddcf677755ec52aa5a972b0b68083b638.scope.
Nov 25 16:19:12 compute-0 podman[260737]: 2025-11-25 16:19:11.922015826 +0000 UTC m=+0.033156648 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:19:12 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:19:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df7373f333286f4afabc1ef3d9793473d02518cea371d929dab61ca3293964c7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:19:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df7373f333286f4afabc1ef3d9793473d02518cea371d929dab61ca3293964c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:19:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df7373f333286f4afabc1ef3d9793473d02518cea371d929dab61ca3293964c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:19:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df7373f333286f4afabc1ef3d9793473d02518cea371d929dab61ca3293964c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:19:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df7373f333286f4afabc1ef3d9793473d02518cea371d929dab61ca3293964c7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 16:19:12 compute-0 podman[260737]: 2025-11-25 16:19:12.065748174 +0000 UTC m=+0.176888986 container init 380429c8cb9e2d92f0cebd86fc6c551ddcf677755ec52aa5a972b0b68083b638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_gould, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 16:19:12 compute-0 podman[260737]: 2025-11-25 16:19:12.074295785 +0000 UTC m=+0.185436597 container start 380429c8cb9e2d92f0cebd86fc6c551ddcf677755ec52aa5a972b0b68083b638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_gould, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:19:12 compute-0 podman[260737]: 2025-11-25 16:19:12.087826671 +0000 UTC m=+0.198967533 container attach 380429c8cb9e2d92f0cebd86fc6c551ddcf677755ec52aa5a972b0b68083b638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:19:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v905: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:13 compute-0 xenodochial_gould[260753]: --> passed data devices: 0 physical, 3 LVM
Nov 25 16:19:13 compute-0 xenodochial_gould[260753]: --> relative data size: 1.0
Nov 25 16:19:13 compute-0 xenodochial_gould[260753]: --> All data devices are unavailable
Nov 25 16:19:13 compute-0 systemd[1]: libpod-380429c8cb9e2d92f0cebd86fc6c551ddcf677755ec52aa5a972b0b68083b638.scope: Deactivated successfully.
Nov 25 16:19:13 compute-0 podman[260737]: 2025-11-25 16:19:13.090771758 +0000 UTC m=+1.201912610 container died 380429c8cb9e2d92f0cebd86fc6c551ddcf677755ec52aa5a972b0b68083b638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_gould, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:19:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-df7373f333286f4afabc1ef3d9793473d02518cea371d929dab61ca3293964c7-merged.mount: Deactivated successfully.
Nov 25 16:19:13 compute-0 podman[260737]: 2025-11-25 16:19:13.147328098 +0000 UTC m=+1.258468910 container remove 380429c8cb9e2d92f0cebd86fc6c551ddcf677755ec52aa5a972b0b68083b638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 16:19:13 compute-0 systemd[1]: libpod-conmon-380429c8cb9e2d92f0cebd86fc6c551ddcf677755ec52aa5a972b0b68083b638.scope: Deactivated successfully.
Nov 25 16:19:13 compute-0 sudo[260632]: pam_unix(sudo:session): session closed for user root
Nov 25 16:19:13 compute-0 sudo[260794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:19:13 compute-0 sudo[260794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:19:13 compute-0 sudo[260794]: pam_unix(sudo:session): session closed for user root
Nov 25 16:19:13 compute-0 sudo[260819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:19:13 compute-0 sudo[260819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:19:13 compute-0 sudo[260819]: pam_unix(sudo:session): session closed for user root
Nov 25 16:19:13 compute-0 sudo[260844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:19:13 compute-0 sudo[260844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:19:13 compute-0 sudo[260844]: pam_unix(sudo:session): session closed for user root
Nov 25 16:19:13 compute-0 sudo[260869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 16:19:13 compute-0 sudo[260869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:19:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:19:13.587 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:19:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:19:13.588 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:19:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:19:13.588 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:19:13 compute-0 ceph-mon[74985]: pgmap v905: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:13 compute-0 podman[260936]: 2025-11-25 16:19:13.981064598 +0000 UTC m=+0.063302963 container create 83d2e471f3e04754c222315ebb39dc2a9e6e9247ad85dff31b520bf5c3935d37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:19:14 compute-0 systemd[1]: Started libpod-conmon-83d2e471f3e04754c222315ebb39dc2a9e6e9247ad85dff31b520bf5c3935d37.scope.
Nov 25 16:19:14 compute-0 podman[260936]: 2025-11-25 16:19:13.955286301 +0000 UTC m=+0.037524746 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:19:14 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:19:14 compute-0 podman[260936]: 2025-11-25 16:19:14.104796885 +0000 UTC m=+0.187035250 container init 83d2e471f3e04754c222315ebb39dc2a9e6e9247ad85dff31b520bf5c3935d37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shannon, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 16:19:14 compute-0 podman[260936]: 2025-11-25 16:19:14.111583848 +0000 UTC m=+0.193822213 container start 83d2e471f3e04754c222315ebb39dc2a9e6e9247ad85dff31b520bf5c3935d37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 16:19:14 compute-0 podman[260936]: 2025-11-25 16:19:14.115057743 +0000 UTC m=+0.197296138 container attach 83d2e471f3e04754c222315ebb39dc2a9e6e9247ad85dff31b520bf5c3935d37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shannon, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:19:14 compute-0 stoic_shannon[260953]: 167 167
Nov 25 16:19:14 compute-0 systemd[1]: libpod-83d2e471f3e04754c222315ebb39dc2a9e6e9247ad85dff31b520bf5c3935d37.scope: Deactivated successfully.
Nov 25 16:19:14 compute-0 podman[260936]: 2025-11-25 16:19:14.118948198 +0000 UTC m=+0.201186603 container died 83d2e471f3e04754c222315ebb39dc2a9e6e9247ad85dff31b520bf5c3935d37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shannon, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:19:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-06656087c859aa69e6f9bf0d7a06480b1a1e74163fc2ad274eb5536768e6aa9e-merged.mount: Deactivated successfully.
Nov 25 16:19:14 compute-0 podman[260936]: 2025-11-25 16:19:14.16155309 +0000 UTC m=+0.243791455 container remove 83d2e471f3e04754c222315ebb39dc2a9e6e9247ad85dff31b520bf5c3935d37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:19:14 compute-0 systemd[1]: libpod-conmon-83d2e471f3e04754c222315ebb39dc2a9e6e9247ad85dff31b520bf5c3935d37.scope: Deactivated successfully.
Nov 25 16:19:14 compute-0 podman[260978]: 2025-11-25 16:19:14.397879172 +0000 UTC m=+0.073538531 container create 351b3d0d80be2252a64e054b953b5102445674fc1db94cf7f3e1ec8cad8a5b53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_neumann, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:19:14 compute-0 systemd[1]: Started libpod-conmon-351b3d0d80be2252a64e054b953b5102445674fc1db94cf7f3e1ec8cad8a5b53.scope.
Nov 25 16:19:14 compute-0 podman[260978]: 2025-11-25 16:19:14.368253011 +0000 UTC m=+0.043912440 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:19:14 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:19:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d3809063c84c114ac816c4612422e630d27c7a8131653591d6919121ddf2478/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:19:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d3809063c84c114ac816c4612422e630d27c7a8131653591d6919121ddf2478/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:19:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d3809063c84c114ac816c4612422e630d27c7a8131653591d6919121ddf2478/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:19:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d3809063c84c114ac816c4612422e630d27c7a8131653591d6919121ddf2478/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:19:14 compute-0 podman[260978]: 2025-11-25 16:19:14.49727362 +0000 UTC m=+0.172933029 container init 351b3d0d80be2252a64e054b953b5102445674fc1db94cf7f3e1ec8cad8a5b53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_neumann, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:19:14 compute-0 podman[260978]: 2025-11-25 16:19:14.509147641 +0000 UTC m=+0.184806990 container start 351b3d0d80be2252a64e054b953b5102445674fc1db94cf7f3e1ec8cad8a5b53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:19:14 compute-0 podman[260978]: 2025-11-25 16:19:14.513300804 +0000 UTC m=+0.188960153 container attach 351b3d0d80be2252a64e054b953b5102445674fc1db94cf7f3e1ec8cad8a5b53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 16:19:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v906: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:15 compute-0 quirky_neumann[260994]: {
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:     "0": [
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:         {
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:             "devices": [
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:                 "/dev/loop3"
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:             ],
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:             "lv_name": "ceph_lv0",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:             "lv_size": "21470642176",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:             "name": "ceph_lv0",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:             "tags": {
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:                 "ceph.cluster_name": "ceph",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:                 "ceph.crush_device_class": "",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:                 "ceph.encrypted": "0",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:                 "ceph.osd_id": "0",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:                 "ceph.type": "block",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:                 "ceph.vdo": "0"
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:             },
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:             "type": "block",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:             "vg_name": "ceph_vg0"
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:         }
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:     ],
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:     "1": [
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:         {
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:             "devices": [
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:                 "/dev/loop4"
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:             ],
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:             "lv_name": "ceph_lv1",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:             "lv_size": "21470642176",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:             "name": "ceph_lv1",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:             "tags": {
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:                 "ceph.cluster_name": "ceph",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:                 "ceph.crush_device_class": "",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:                 "ceph.encrypted": "0",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:                 "ceph.osd_id": "1",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:                 "ceph.type": "block",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:                 "ceph.vdo": "0"
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:             },
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:             "type": "block",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:             "vg_name": "ceph_vg1"
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:         }
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:     ],
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:     "2": [
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:         {
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:             "devices": [
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:                 "/dev/loop5"
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:             ],
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:             "lv_name": "ceph_lv2",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:             "lv_size": "21470642176",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:             "name": "ceph_lv2",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:             "tags": {
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:                 "ceph.cluster_name": "ceph",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:                 "ceph.crush_device_class": "",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:                 "ceph.encrypted": "0",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:                 "ceph.osd_id": "2",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:                 "ceph.type": "block",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:                 "ceph.vdo": "0"
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:             },
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:             "type": "block",
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:             "vg_name": "ceph_vg2"
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:         }
Nov 25 16:19:15 compute-0 quirky_neumann[260994]:     ]
Nov 25 16:19:15 compute-0 quirky_neumann[260994]: }
Nov 25 16:19:15 compute-0 systemd[1]: libpod-351b3d0d80be2252a64e054b953b5102445674fc1db94cf7f3e1ec8cad8a5b53.scope: Deactivated successfully.
Nov 25 16:19:15 compute-0 podman[260978]: 2025-11-25 16:19:15.357145927 +0000 UTC m=+1.032805256 container died 351b3d0d80be2252a64e054b953b5102445674fc1db94cf7f3e1ec8cad8a5b53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_neumann, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:19:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d3809063c84c114ac816c4612422e630d27c7a8131653591d6919121ddf2478-merged.mount: Deactivated successfully.
Nov 25 16:19:15 compute-0 podman[260978]: 2025-11-25 16:19:15.442007852 +0000 UTC m=+1.117667201 container remove 351b3d0d80be2252a64e054b953b5102445674fc1db94cf7f3e1ec8cad8a5b53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_neumann, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 16:19:15 compute-0 systemd[1]: libpod-conmon-351b3d0d80be2252a64e054b953b5102445674fc1db94cf7f3e1ec8cad8a5b53.scope: Deactivated successfully.
Nov 25 16:19:15 compute-0 sudo[260869]: pam_unix(sudo:session): session closed for user root
Nov 25 16:19:15 compute-0 sudo[261018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:19:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:19:15 compute-0 sudo[261018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:19:15 compute-0 sudo[261018]: pam_unix(sudo:session): session closed for user root
Nov 25 16:19:15 compute-0 sudo[261043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:19:15 compute-0 sudo[261043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:19:15 compute-0 sudo[261043]: pam_unix(sudo:session): session closed for user root
Nov 25 16:19:15 compute-0 ceph-mon[74985]: pgmap v906: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:15 compute-0 sudo[261068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:19:15 compute-0 sudo[261068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:19:15 compute-0 sudo[261068]: pam_unix(sudo:session): session closed for user root
Nov 25 16:19:15 compute-0 sudo[261093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 16:19:15 compute-0 sudo[261093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:19:16 compute-0 podman[261159]: 2025-11-25 16:19:16.202601445 +0000 UTC m=+0.062825181 container create c00b6c1d68a4c1d03786010b18210bdffbe9983c3e108ae992fcdcf36b2c4207 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_williamson, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:19:16 compute-0 systemd[1]: Started libpod-conmon-c00b6c1d68a4c1d03786010b18210bdffbe9983c3e108ae992fcdcf36b2c4207.scope.
Nov 25 16:19:16 compute-0 podman[261159]: 2025-11-25 16:19:16.177369612 +0000 UTC m=+0.037593398 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:19:16 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:19:16 compute-0 podman[261159]: 2025-11-25 16:19:16.293591196 +0000 UTC m=+0.153815002 container init c00b6c1d68a4c1d03786010b18210bdffbe9983c3e108ae992fcdcf36b2c4207 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:19:16 compute-0 podman[261159]: 2025-11-25 16:19:16.301420997 +0000 UTC m=+0.161644743 container start c00b6c1d68a4c1d03786010b18210bdffbe9983c3e108ae992fcdcf36b2c4207 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:19:16 compute-0 podman[261159]: 2025-11-25 16:19:16.30517905 +0000 UTC m=+0.165402796 container attach c00b6c1d68a4c1d03786010b18210bdffbe9983c3e108ae992fcdcf36b2c4207 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_williamson, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 16:19:16 compute-0 distracted_williamson[261176]: 167 167
Nov 25 16:19:16 compute-0 systemd[1]: libpod-c00b6c1d68a4c1d03786010b18210bdffbe9983c3e108ae992fcdcf36b2c4207.scope: Deactivated successfully.
Nov 25 16:19:16 compute-0 podman[261159]: 2025-11-25 16:19:16.308446218 +0000 UTC m=+0.168669954 container died c00b6c1d68a4c1d03786010b18210bdffbe9983c3e108ae992fcdcf36b2c4207 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 16:19:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-55405a4156b501754d0afaf0258ef8c50723a73eaa74bf90a872ae92d8e4ad22-merged.mount: Deactivated successfully.
Nov 25 16:19:16 compute-0 podman[261159]: 2025-11-25 16:19:16.360082454 +0000 UTC m=+0.220306200 container remove c00b6c1d68a4c1d03786010b18210bdffbe9983c3e108ae992fcdcf36b2c4207 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:19:16 compute-0 systemd[1]: libpod-conmon-c00b6c1d68a4c1d03786010b18210bdffbe9983c3e108ae992fcdcf36b2c4207.scope: Deactivated successfully.
Nov 25 16:19:16 compute-0 podman[261202]: 2025-11-25 16:19:16.537790511 +0000 UTC m=+0.061829934 container create af5c49602abeeb58e1378d02b711525dbbed70094019883bf97472e230490826 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_gates, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 16:19:16 compute-0 podman[261202]: 2025-11-25 16:19:16.515466897 +0000 UTC m=+0.039506330 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:19:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v907: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:16 compute-0 systemd[1]: Started libpod-conmon-af5c49602abeeb58e1378d02b711525dbbed70094019883bf97472e230490826.scope.
Nov 25 16:19:16 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:19:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a689ea0f57c5f3c2ddfbaf44d6b60bd1be0405a4e782916860def25e08d955ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:19:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a689ea0f57c5f3c2ddfbaf44d6b60bd1be0405a4e782916860def25e08d955ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:19:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a689ea0f57c5f3c2ddfbaf44d6b60bd1be0405a4e782916860def25e08d955ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:19:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a689ea0f57c5f3c2ddfbaf44d6b60bd1be0405a4e782916860def25e08d955ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:19:16 compute-0 podman[261202]: 2025-11-25 16:19:16.680662566 +0000 UTC m=+0.204701969 container init af5c49602abeeb58e1378d02b711525dbbed70094019883bf97472e230490826 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_gates, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:19:16 compute-0 podman[261202]: 2025-11-25 16:19:16.688131067 +0000 UTC m=+0.212170450 container start af5c49602abeeb58e1378d02b711525dbbed70094019883bf97472e230490826 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_gates, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:19:16 compute-0 podman[261202]: 2025-11-25 16:19:16.691217961 +0000 UTC m=+0.215257334 container attach af5c49602abeeb58e1378d02b711525dbbed70094019883bf97472e230490826 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_gates, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 16:19:17 compute-0 loving_gates[261218]: {
Nov 25 16:19:17 compute-0 loving_gates[261218]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 16:19:17 compute-0 loving_gates[261218]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:19:17 compute-0 loving_gates[261218]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 16:19:17 compute-0 loving_gates[261218]:         "osd_id": 1,
Nov 25 16:19:17 compute-0 loving_gates[261218]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:19:17 compute-0 loving_gates[261218]:         "type": "bluestore"
Nov 25 16:19:17 compute-0 loving_gates[261218]:     },
Nov 25 16:19:17 compute-0 loving_gates[261218]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 16:19:17 compute-0 loving_gates[261218]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:19:17 compute-0 loving_gates[261218]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 16:19:17 compute-0 loving_gates[261218]:         "osd_id": 2,
Nov 25 16:19:17 compute-0 loving_gates[261218]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:19:17 compute-0 loving_gates[261218]:         "type": "bluestore"
Nov 25 16:19:17 compute-0 loving_gates[261218]:     },
Nov 25 16:19:17 compute-0 loving_gates[261218]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 16:19:17 compute-0 loving_gates[261218]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:19:17 compute-0 loving_gates[261218]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 16:19:17 compute-0 loving_gates[261218]:         "osd_id": 0,
Nov 25 16:19:17 compute-0 loving_gates[261218]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:19:17 compute-0 loving_gates[261218]:         "type": "bluestore"
Nov 25 16:19:17 compute-0 loving_gates[261218]:     }
Nov 25 16:19:17 compute-0 loving_gates[261218]: }
Nov 25 16:19:17 compute-0 systemd[1]: libpod-af5c49602abeeb58e1378d02b711525dbbed70094019883bf97472e230490826.scope: Deactivated successfully.
Nov 25 16:19:17 compute-0 systemd[1]: libpod-af5c49602abeeb58e1378d02b711525dbbed70094019883bf97472e230490826.scope: Consumed 1.026s CPU time.
Nov 25 16:19:17 compute-0 podman[261202]: 2025-11-25 16:19:17.70604526 +0000 UTC m=+1.230084673 container died af5c49602abeeb58e1378d02b711525dbbed70094019883bf97472e230490826 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_gates, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 16:19:17 compute-0 ceph-mon[74985]: pgmap v907: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-a689ea0f57c5f3c2ddfbaf44d6b60bd1be0405a4e782916860def25e08d955ca-merged.mount: Deactivated successfully.
Nov 25 16:19:17 compute-0 podman[261202]: 2025-11-25 16:19:17.89239888 +0000 UTC m=+1.416438303 container remove af5c49602abeeb58e1378d02b711525dbbed70094019883bf97472e230490826 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_gates, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Nov 25 16:19:17 compute-0 systemd[1]: libpod-conmon-af5c49602abeeb58e1378d02b711525dbbed70094019883bf97472e230490826.scope: Deactivated successfully.
Nov 25 16:19:17 compute-0 sudo[261093]: pam_unix(sudo:session): session closed for user root
Nov 25 16:19:17 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:19:17 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:19:17 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:19:17 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:19:17 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev b8ba95c7-4f0d-463c-a6eb-f83c1aa3d63a does not exist
Nov 25 16:19:17 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev de1d9ecc-0d5f-42ea-b0e3-44d75af26b3a does not exist
Nov 25 16:19:18 compute-0 sudo[261265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:19:18 compute-0 sudo[261265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:19:18 compute-0 sudo[261265]: pam_unix(sudo:session): session closed for user root
Nov 25 16:19:18 compute-0 sudo[261290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 16:19:18 compute-0 sudo[261290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:19:18 compute-0 sudo[261290]: pam_unix(sudo:session): session closed for user root
Nov 25 16:19:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v908: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:18 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:19:18 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:19:18 compute-0 ceph-mon[74985]: pgmap v908: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:19:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v909: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:21 compute-0 ceph-mon[74985]: pgmap v909: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v910: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:23 compute-0 ceph-mon[74985]: pgmap v910: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v911: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:19:25 compute-0 ceph-mon[74985]: pgmap v911: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v912: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:27 compute-0 ceph-mon[74985]: pgmap v912: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v913: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:29 compute-0 ceph-mon[74985]: pgmap v913: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v914: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:19:31 compute-0 ceph-mon[74985]: pgmap v914: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v915: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:33 compute-0 ceph-mon[74985]: pgmap v915: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v916: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:34 compute-0 podman[261316]: 2025-11-25 16:19:34.682609653 +0000 UTC m=+0.084705843 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 25 16:19:34 compute-0 podman[261315]: 2025-11-25 16:19:34.689715614 +0000 UTC m=+0.094677412 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 25 16:19:34 compute-0 podman[261317]: 2025-11-25 16:19:34.745816712 +0000 UTC m=+0.148613831 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 16:19:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v917: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v918: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:19:40
Nov 25 16:19:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:19:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:19:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['.mgr', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', 'vms', 'default.rgw.log', 'backups', '.rgw.root', 'cephfs.cephfs.data']
Nov 25 16:19:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:19:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:19:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:19:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:19:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:19:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:19:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:19:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:19:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:19:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:19:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:19:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:19:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:19:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:19:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:19:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:19:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:19:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v919: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).mds e5 check_health: resetting beacon timeouts due to mon delay (slow election?) of 11.1459 seconds
Nov 25 16:19:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:19:41 compute-0 ceph-mon[74985]: pgmap v916: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:42 compute-0 nova_compute[254092]: 2025-11-25 16:19:42.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:19:42 compute-0 nova_compute[254092]: 2025-11-25 16:19:42.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:19:42 compute-0 nova_compute[254092]: 2025-11-25 16:19:42.518 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:19:42 compute-0 nova_compute[254092]: 2025-11-25 16:19:42.519 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:19:42 compute-0 nova_compute[254092]: 2025-11-25 16:19:42.519 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:19:42 compute-0 nova_compute[254092]: 2025-11-25 16:19:42.519 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 16:19:42 compute-0 nova_compute[254092]: 2025-11-25 16:19:42.519 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:19:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v920: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:42 compute-0 ceph-mon[74985]: pgmap v917: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:42 compute-0 ceph-mon[74985]: pgmap v918: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:42 compute-0 ceph-mon[74985]: pgmap v919: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:42 compute-0 ceph-mon[74985]: pgmap v920: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:42 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:19:42 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/806324696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:19:42 compute-0 nova_compute[254092]: 2025-11-25 16:19:42.940 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:19:43 compute-0 nova_compute[254092]: 2025-11-25 16:19:43.098 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:19:43 compute-0 nova_compute[254092]: 2025-11-25 16:19:43.099 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5166MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 16:19:43 compute-0 nova_compute[254092]: 2025-11-25 16:19:43.099 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:19:43 compute-0 nova_compute[254092]: 2025-11-25 16:19:43.100 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:19:43 compute-0 nova_compute[254092]: 2025-11-25 16:19:43.191 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 16:19:43 compute-0 nova_compute[254092]: 2025-11-25 16:19:43.191 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 16:19:43 compute-0 nova_compute[254092]: 2025-11-25 16:19:43.214 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:19:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:19:43 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1179185872' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:19:43 compute-0 nova_compute[254092]: 2025-11-25 16:19:43.604 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.390s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:19:43 compute-0 nova_compute[254092]: 2025-11-25 16:19:43.609 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:19:43 compute-0 nova_compute[254092]: 2025-11-25 16:19:43.628 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:19:43 compute-0 nova_compute[254092]: 2025-11-25 16:19:43.629 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 16:19:43 compute-0 nova_compute[254092]: 2025-11-25 16:19:43.629 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.529s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:19:43 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/806324696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:19:43 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1179185872' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:19:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v921: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:44 compute-0 nova_compute[254092]: 2025-11-25 16:19:44.630 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:19:44 compute-0 nova_compute[254092]: 2025-11-25 16:19:44.630 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:19:45 compute-0 ceph-mon[74985]: pgmap v921: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:45 compute-0 nova_compute[254092]: 2025-11-25 16:19:45.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:19:45 compute-0 nova_compute[254092]: 2025-11-25 16:19:45.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:19:45 compute-0 nova_compute[254092]: 2025-11-25 16:19:45.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 16:19:46 compute-0 nova_compute[254092]: 2025-11-25 16:19:46.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:19:46 compute-0 nova_compute[254092]: 2025-11-25 16:19:46.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:19:46 compute-0 nova_compute[254092]: 2025-11-25 16:19:46.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 16:19:46 compute-0 nova_compute[254092]: 2025-11-25 16:19:46.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 16:19:46 compute-0 nova_compute[254092]: 2025-11-25 16:19:46.518 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 16:19:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v922: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:19:46 compute-0 ceph-mon[74985]: pgmap v922: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:47 compute-0 nova_compute[254092]: 2025-11-25 16:19:47.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:19:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v923: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:49 compute-0 ceph-mon[74985]: pgmap v923: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v924: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:19:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:19:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:19:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:19:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:19:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:19:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:19:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:19:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:19:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:19:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:19:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:19:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 16:19:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:19:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:19:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:19:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 16:19:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:19:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 16:19:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:19:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:19:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:19:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 16:19:51 compute-0 ceph-mon[74985]: pgmap v924: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:19:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v925: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:53 compute-0 ceph-mon[74985]: pgmap v925: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v926: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:54 compute-0 ceph-mon[74985]: pgmap v926: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 16:19:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2254086920' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:19:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 16:19:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2254086920' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:19:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/2254086920' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:19:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/2254086920' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:19:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v927: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:19:57 compute-0 ceph-mon[74985]: pgmap v927: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v928: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:19:59 compute-0 ceph-mon[74985]: pgmap v928: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v929: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:00 compute-0 ceph-mon[74985]: pgmap v929: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:20:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v930: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:03 compute-0 ceph-mon[74985]: pgmap v930: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v931: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:05 compute-0 podman[261426]: 2025-11-25 16:20:05.626609797 +0000 UTC m=+0.049576522 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:20:05 compute-0 podman[261425]: 2025-11-25 16:20:05.629821964 +0000 UTC m=+0.054384562 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:20:05 compute-0 ceph-mon[74985]: pgmap v931: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:05 compute-0 podman[261427]: 2025-11-25 16:20:05.722415989 +0000 UTC m=+0.132425483 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 16:20:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v932: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:20:07 compute-0 ceph-mon[74985]: pgmap v932: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v933: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:09 compute-0 ceph-mon[74985]: pgmap v933: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:20:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:20:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:20:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:20:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:20:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:20:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v934: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:11 compute-0 ceph-mon[74985]: pgmap v934: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:20:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v935: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:13 compute-0 ceph-mon[74985]: pgmap v935: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:20:13.588 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:20:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:20:13.588 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:20:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:20:13.589 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:20:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v936: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:15 compute-0 ceph-mon[74985]: pgmap v936: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v937: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:20:17 compute-0 ceph-mon[74985]: pgmap v937: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:18 compute-0 sudo[261483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:20:18 compute-0 sudo[261483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:20:18 compute-0 sudo[261483]: pam_unix(sudo:session): session closed for user root
Nov 25 16:20:18 compute-0 sudo[261508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:20:18 compute-0 sudo[261508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:20:18 compute-0 sudo[261508]: pam_unix(sudo:session): session closed for user root
Nov 25 16:20:18 compute-0 sudo[261533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:20:18 compute-0 sudo[261533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:20:18 compute-0 sudo[261533]: pam_unix(sudo:session): session closed for user root
Nov 25 16:20:18 compute-0 sudo[261558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 16:20:18 compute-0 sudo[261558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:20:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v938: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:18 compute-0 sudo[261558]: pam_unix(sudo:session): session closed for user root
Nov 25 16:20:18 compute-0 ceph-mon[74985]: pgmap v938: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:18 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:20:18 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:20:18 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 16:20:18 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:20:18 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 16:20:19 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:20:19 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev b54bf14e-b5ce-4991-b67f-99c3dbd8d493 does not exist
Nov 25 16:20:19 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev d673727d-e49d-4b9e-aa56-1f1256f62d03 does not exist
Nov 25 16:20:19 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 8fe345b5-9bca-479a-86f5-e861b1629950 does not exist
Nov 25 16:20:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 16:20:19 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:20:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 16:20:19 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:20:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:20:19 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:20:19 compute-0 sudo[261615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:20:19 compute-0 sudo[261615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:20:19 compute-0 sudo[261615]: pam_unix(sudo:session): session closed for user root
Nov 25 16:20:19 compute-0 sudo[261640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:20:19 compute-0 sudo[261640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:20:19 compute-0 sudo[261640]: pam_unix(sudo:session): session closed for user root
Nov 25 16:20:19 compute-0 sudo[261665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:20:19 compute-0 sudo[261665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:20:19 compute-0 sudo[261665]: pam_unix(sudo:session): session closed for user root
Nov 25 16:20:19 compute-0 sudo[261690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 16:20:19 compute-0 sudo[261690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:20:19 compute-0 podman[261755]: 2025-11-25 16:20:19.669263153 +0000 UTC m=+0.079743167 container create 76179ae8392914f09777890a34678af1f1abf26b66ce5a9e9454692a444e10d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jones, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 16:20:19 compute-0 podman[261755]: 2025-11-25 16:20:19.615438998 +0000 UTC m=+0.025919062 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:20:19 compute-0 systemd[1]: Started libpod-conmon-76179ae8392914f09777890a34678af1f1abf26b66ce5a9e9454692a444e10d9.scope.
Nov 25 16:20:19 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:20:19 compute-0 podman[261755]: 2025-11-25 16:20:19.797108962 +0000 UTC m=+0.207589036 container init 76179ae8392914f09777890a34678af1f1abf26b66ce5a9e9454692a444e10d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:20:19 compute-0 podman[261755]: 2025-11-25 16:20:19.808256623 +0000 UTC m=+0.218736657 container start 76179ae8392914f09777890a34678af1f1abf26b66ce5a9e9454692a444e10d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jones, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:20:19 compute-0 blissful_jones[261771]: 167 167
Nov 25 16:20:19 compute-0 systemd[1]: libpod-76179ae8392914f09777890a34678af1f1abf26b66ce5a9e9454692a444e10d9.scope: Deactivated successfully.
Nov 25 16:20:19 compute-0 conmon[261771]: conmon 76179ae8392914f09777 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-76179ae8392914f09777890a34678af1f1abf26b66ce5a9e9454692a444e10d9.scope/container/memory.events
Nov 25 16:20:19 compute-0 podman[261755]: 2025-11-25 16:20:19.821872012 +0000 UTC m=+0.232352046 container attach 76179ae8392914f09777890a34678af1f1abf26b66ce5a9e9454692a444e10d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jones, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 16:20:19 compute-0 podman[261755]: 2025-11-25 16:20:19.822258272 +0000 UTC m=+0.232738296 container died 76179ae8392914f09777890a34678af1f1abf26b66ce5a9e9454692a444e10d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jones, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:20:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7b87003fdf557c52f5c07cdf42576c3deab79caa833fb99a9474f5df05377a3-merged.mount: Deactivated successfully.
Nov 25 16:20:20 compute-0 podman[261755]: 2025-11-25 16:20:20.052812377 +0000 UTC m=+0.463292391 container remove 76179ae8392914f09777890a34678af1f1abf26b66ce5a9e9454692a444e10d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 16:20:20 compute-0 systemd[1]: libpod-conmon-76179ae8392914f09777890a34678af1f1abf26b66ce5a9e9454692a444e10d9.scope: Deactivated successfully.
Nov 25 16:20:20 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:20:20 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:20:20 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:20:20 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:20:20 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:20:20 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:20:20 compute-0 podman[261797]: 2025-11-25 16:20:20.227631186 +0000 UTC m=+0.051045602 container create 10c03ec02ea7292c03ca674bcc95ebb785c83e3135aada9abe27f0fee9da2cf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:20:20 compute-0 systemd[1]: Started libpod-conmon-10c03ec02ea7292c03ca674bcc95ebb785c83e3135aada9abe27f0fee9da2cf4.scope.
Nov 25 16:20:20 compute-0 podman[261797]: 2025-11-25 16:20:20.199245659 +0000 UTC m=+0.022660095 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:20:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:20:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/335619e06d96364fa70366f6e73abf6655aea65b0c9fed246e9868e730de05ec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:20:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/335619e06d96364fa70366f6e73abf6655aea65b0c9fed246e9868e730de05ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:20:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/335619e06d96364fa70366f6e73abf6655aea65b0c9fed246e9868e730de05ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:20:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/335619e06d96364fa70366f6e73abf6655aea65b0c9fed246e9868e730de05ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:20:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/335619e06d96364fa70366f6e73abf6655aea65b0c9fed246e9868e730de05ec/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 16:20:20 compute-0 podman[261797]: 2025-11-25 16:20:20.326978823 +0000 UTC m=+0.150393239 container init 10c03ec02ea7292c03ca674bcc95ebb785c83e3135aada9abe27f0fee9da2cf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_dewdney, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:20:20 compute-0 podman[261797]: 2025-11-25 16:20:20.335800051 +0000 UTC m=+0.159214467 container start 10c03ec02ea7292c03ca674bcc95ebb785c83e3135aada9abe27f0fee9da2cf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 16:20:20 compute-0 podman[261797]: 2025-11-25 16:20:20.338808203 +0000 UTC m=+0.162222639 container attach 10c03ec02ea7292c03ca674bcc95ebb785c83e3135aada9abe27f0fee9da2cf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:20:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v939: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:21 compute-0 ceph-mon[74985]: pgmap v939: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:21 compute-0 sleepy_dewdney[261813]: --> passed data devices: 0 physical, 3 LVM
Nov 25 16:20:21 compute-0 sleepy_dewdney[261813]: --> relative data size: 1.0
Nov 25 16:20:21 compute-0 sleepy_dewdney[261813]: --> All data devices are unavailable
Nov 25 16:20:21 compute-0 systemd[1]: libpod-10c03ec02ea7292c03ca674bcc95ebb785c83e3135aada9abe27f0fee9da2cf4.scope: Deactivated successfully.
Nov 25 16:20:21 compute-0 podman[261797]: 2025-11-25 16:20:21.309720864 +0000 UTC m=+1.133135270 container died 10c03ec02ea7292c03ca674bcc95ebb785c83e3135aada9abe27f0fee9da2cf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_dewdney, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 16:20:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-335619e06d96364fa70366f6e73abf6655aea65b0c9fed246e9868e730de05ec-merged.mount: Deactivated successfully.
Nov 25 16:20:21 compute-0 podman[261797]: 2025-11-25 16:20:21.357218779 +0000 UTC m=+1.180633195 container remove 10c03ec02ea7292c03ca674bcc95ebb785c83e3135aada9abe27f0fee9da2cf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 16:20:21 compute-0 systemd[1]: libpod-conmon-10c03ec02ea7292c03ca674bcc95ebb785c83e3135aada9abe27f0fee9da2cf4.scope: Deactivated successfully.
Nov 25 16:20:21 compute-0 sudo[261690]: pam_unix(sudo:session): session closed for user root
Nov 25 16:20:21 compute-0 sudo[261853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:20:21 compute-0 sudo[261853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:20:21 compute-0 sudo[261853]: pam_unix(sudo:session): session closed for user root
Nov 25 16:20:21 compute-0 sudo[261878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:20:21 compute-0 sudo[261878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:20:21 compute-0 sudo[261878]: pam_unix(sudo:session): session closed for user root
Nov 25 16:20:21 compute-0 sudo[261903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:20:21 compute-0 sudo[261903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:20:21 compute-0 sudo[261903]: pam_unix(sudo:session): session closed for user root
Nov 25 16:20:21 compute-0 sudo[261928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 16:20:21 compute-0 sudo[261928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:20:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:20:22 compute-0 podman[261994]: 2025-11-25 16:20:22.067908141 +0000 UTC m=+0.066322085 container create 20a3b2d95d7cd2c02e888e325516c99421b61166cb5e78241f495c23d5610a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_kowalevski, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:20:22 compute-0 systemd[1]: Started libpod-conmon-20a3b2d95d7cd2c02e888e325516c99421b61166cb5e78241f495c23d5610a3a.scope.
Nov 25 16:20:22 compute-0 podman[261994]: 2025-11-25 16:20:22.042596037 +0000 UTC m=+0.041010071 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:20:22 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:20:22 compute-0 podman[261994]: 2025-11-25 16:20:22.161453601 +0000 UTC m=+0.159867585 container init 20a3b2d95d7cd2c02e888e325516c99421b61166cb5e78241f495c23d5610a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_kowalevski, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:20:22 compute-0 podman[261994]: 2025-11-25 16:20:22.170169807 +0000 UTC m=+0.168583751 container start 20a3b2d95d7cd2c02e888e325516c99421b61166cb5e78241f495c23d5610a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:20:22 compute-0 podman[261994]: 2025-11-25 16:20:22.172996753 +0000 UTC m=+0.171410737 container attach 20a3b2d95d7cd2c02e888e325516c99421b61166cb5e78241f495c23d5610a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 16:20:22 compute-0 hardcore_kowalevski[262011]: 167 167
Nov 25 16:20:22 compute-0 systemd[1]: libpod-20a3b2d95d7cd2c02e888e325516c99421b61166cb5e78241f495c23d5610a3a.scope: Deactivated successfully.
Nov 25 16:20:22 compute-0 podman[261994]: 2025-11-25 16:20:22.178865162 +0000 UTC m=+0.177279116 container died 20a3b2d95d7cd2c02e888e325516c99421b61166cb5e78241f495c23d5610a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 16:20:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-30ed20ba18a1959df37f034d121dd6bb39748403a6930d3c69bc8c44a004179c-merged.mount: Deactivated successfully.
Nov 25 16:20:22 compute-0 podman[261994]: 2025-11-25 16:20:22.214034753 +0000 UTC m=+0.212448697 container remove 20a3b2d95d7cd2c02e888e325516c99421b61166cb5e78241f495c23d5610a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 16:20:22 compute-0 systemd[1]: libpod-conmon-20a3b2d95d7cd2c02e888e325516c99421b61166cb5e78241f495c23d5610a3a.scope: Deactivated successfully.
Nov 25 16:20:22 compute-0 podman[262033]: 2025-11-25 16:20:22.394316059 +0000 UTC m=+0.050426055 container create b9d1ddf38d4d554d5507cedc3a81d016ea22169255bead54b5232897499d5e65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_haibt, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:20:22 compute-0 systemd[1]: Started libpod-conmon-b9d1ddf38d4d554d5507cedc3a81d016ea22169255bead54b5232897499d5e65.scope.
Nov 25 16:20:22 compute-0 podman[262033]: 2025-11-25 16:20:22.373392344 +0000 UTC m=+0.029502390 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:20:22 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:20:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bfbee3b953b1b9f200a545771cc3348283f42f195d958986d4cb7afaac0d033/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:20:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bfbee3b953b1b9f200a545771cc3348283f42f195d958986d4cb7afaac0d033/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:20:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bfbee3b953b1b9f200a545771cc3348283f42f195d958986d4cb7afaac0d033/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:20:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bfbee3b953b1b9f200a545771cc3348283f42f195d958986d4cb7afaac0d033/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:20:22 compute-0 podman[262033]: 2025-11-25 16:20:22.493281106 +0000 UTC m=+0.149391132 container init b9d1ddf38d4d554d5507cedc3a81d016ea22169255bead54b5232897499d5e65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_haibt, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 16:20:22 compute-0 podman[262033]: 2025-11-25 16:20:22.501804976 +0000 UTC m=+0.157914972 container start b9d1ddf38d4d554d5507cedc3a81d016ea22169255bead54b5232897499d5e65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 16:20:22 compute-0 podman[262033]: 2025-11-25 16:20:22.505032054 +0000 UTC m=+0.161142060 container attach b9d1ddf38d4d554d5507cedc3a81d016ea22169255bead54b5232897499d5e65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_haibt, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:20:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v940: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:23 compute-0 recursing_haibt[262050]: {
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:     "0": [
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:         {
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:             "devices": [
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:                 "/dev/loop3"
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:             ],
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:             "lv_name": "ceph_lv0",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:             "lv_size": "21470642176",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:             "name": "ceph_lv0",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:             "tags": {
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:                 "ceph.cluster_name": "ceph",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:                 "ceph.crush_device_class": "",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:                 "ceph.encrypted": "0",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:                 "ceph.osd_id": "0",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:                 "ceph.type": "block",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:                 "ceph.vdo": "0"
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:             },
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:             "type": "block",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:             "vg_name": "ceph_vg0"
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:         }
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:     ],
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:     "1": [
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:         {
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:             "devices": [
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:                 "/dev/loop4"
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:             ],
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:             "lv_name": "ceph_lv1",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:             "lv_size": "21470642176",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:             "name": "ceph_lv1",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:             "tags": {
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:                 "ceph.cluster_name": "ceph",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:                 "ceph.crush_device_class": "",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:                 "ceph.encrypted": "0",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:                 "ceph.osd_id": "1",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:                 "ceph.type": "block",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:                 "ceph.vdo": "0"
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:             },
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:             "type": "block",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:             "vg_name": "ceph_vg1"
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:         }
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:     ],
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:     "2": [
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:         {
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:             "devices": [
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:                 "/dev/loop5"
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:             ],
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:             "lv_name": "ceph_lv2",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:             "lv_size": "21470642176",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:             "name": "ceph_lv2",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:             "tags": {
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:                 "ceph.cluster_name": "ceph",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:                 "ceph.crush_device_class": "",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:                 "ceph.encrypted": "0",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:                 "ceph.osd_id": "2",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:                 "ceph.type": "block",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:                 "ceph.vdo": "0"
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:             },
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:             "type": "block",
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:             "vg_name": "ceph_vg2"
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:         }
Nov 25 16:20:23 compute-0 recursing_haibt[262050]:     ]
Nov 25 16:20:23 compute-0 recursing_haibt[262050]: }
Nov 25 16:20:23 compute-0 systemd[1]: libpod-b9d1ddf38d4d554d5507cedc3a81d016ea22169255bead54b5232897499d5e65.scope: Deactivated successfully.
Nov 25 16:20:23 compute-0 podman[262033]: 2025-11-25 16:20:23.216999931 +0000 UTC m=+0.873109967 container died b9d1ddf38d4d554d5507cedc3a81d016ea22169255bead54b5232897499d5e65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 16:20:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-9bfbee3b953b1b9f200a545771cc3348283f42f195d958986d4cb7afaac0d033-merged.mount: Deactivated successfully.
Nov 25 16:20:23 compute-0 podman[262033]: 2025-11-25 16:20:23.277218039 +0000 UTC m=+0.933328045 container remove b9d1ddf38d4d554d5507cedc3a81d016ea22169255bead54b5232897499d5e65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_haibt, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 16:20:23 compute-0 systemd[1]: libpod-conmon-b9d1ddf38d4d554d5507cedc3a81d016ea22169255bead54b5232897499d5e65.scope: Deactivated successfully.
Nov 25 16:20:23 compute-0 sudo[261928]: pam_unix(sudo:session): session closed for user root
Nov 25 16:20:23 compute-0 sudo[262072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:20:23 compute-0 sudo[262072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:20:23 compute-0 sudo[262072]: pam_unix(sudo:session): session closed for user root
Nov 25 16:20:23 compute-0 sudo[262099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:20:23 compute-0 sudo[262099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:20:23 compute-0 sudo[262099]: pam_unix(sudo:session): session closed for user root
Nov 25 16:20:23 compute-0 sudo[262124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:20:23 compute-0 sudo[262124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:20:23 compute-0 sudo[262124]: pam_unix(sudo:session): session closed for user root
Nov 25 16:20:23 compute-0 sudo[262149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 16:20:23 compute-0 sudo[262149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:20:23 compute-0 ceph-mon[74985]: pgmap v940: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:23 compute-0 podman[262216]: 2025-11-25 16:20:23.913113349 +0000 UTC m=+0.047976039 container create a7104d55b1be0dc645455a4871d05ca8891b89d46dc699acee30b5845b41005a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_gates, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 16:20:23 compute-0 systemd[1]: Started libpod-conmon-a7104d55b1be0dc645455a4871d05ca8891b89d46dc699acee30b5845b41005a.scope.
Nov 25 16:20:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:20:23 compute-0 podman[262216]: 2025-11-25 16:20:23.898784881 +0000 UTC m=+0.033647581 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:20:24 compute-0 podman[262216]: 2025-11-25 16:20:24.004589113 +0000 UTC m=+0.139451833 container init a7104d55b1be0dc645455a4871d05ca8891b89d46dc699acee30b5845b41005a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_gates, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:20:24 compute-0 podman[262216]: 2025-11-25 16:20:24.01555595 +0000 UTC m=+0.150418640 container start a7104d55b1be0dc645455a4871d05ca8891b89d46dc699acee30b5845b41005a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_gates, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:20:24 compute-0 podman[262216]: 2025-11-25 16:20:24.018658874 +0000 UTC m=+0.153521604 container attach a7104d55b1be0dc645455a4871d05ca8891b89d46dc699acee30b5845b41005a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:20:24 compute-0 gallant_gates[262232]: 167 167
Nov 25 16:20:24 compute-0 systemd[1]: libpod-a7104d55b1be0dc645455a4871d05ca8891b89d46dc699acee30b5845b41005a.scope: Deactivated successfully.
Nov 25 16:20:24 compute-0 podman[262216]: 2025-11-25 16:20:24.020465203 +0000 UTC m=+0.155327893 container died a7104d55b1be0dc645455a4871d05ca8891b89d46dc699acee30b5845b41005a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_gates, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 16:20:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-1638d8160de29c4e82e18a24cbc52faf2c00f04239feb8a30ec749e6b6872530-merged.mount: Deactivated successfully.
Nov 25 16:20:24 compute-0 podman[262216]: 2025-11-25 16:20:24.061090871 +0000 UTC m=+0.195953561 container remove a7104d55b1be0dc645455a4871d05ca8891b89d46dc699acee30b5845b41005a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:20:24 compute-0 systemd[1]: libpod-conmon-a7104d55b1be0dc645455a4871d05ca8891b89d46dc699acee30b5845b41005a.scope: Deactivated successfully.
Nov 25 16:20:24 compute-0 podman[262257]: 2025-11-25 16:20:24.2447736 +0000 UTC m=+0.044869445 container create f9850d0c0ba0ab7ca27625f242612b0c81b30ab852c5555700d074ac6b661e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_moore, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 16:20:24 compute-0 systemd[1]: Started libpod-conmon-f9850d0c0ba0ab7ca27625f242612b0c81b30ab852c5555700d074ac6b661e63.scope.
Nov 25 16:20:24 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:20:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd6c81217dbe27d059ab8438cb72bf246c64ac25c63cae56f961d4ebb68588de/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:20:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd6c81217dbe27d059ab8438cb72bf246c64ac25c63cae56f961d4ebb68588de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:20:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd6c81217dbe27d059ab8438cb72bf246c64ac25c63cae56f961d4ebb68588de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:20:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd6c81217dbe27d059ab8438cb72bf246c64ac25c63cae56f961d4ebb68588de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:20:24 compute-0 podman[262257]: 2025-11-25 16:20:24.227495262 +0000 UTC m=+0.027591117 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:20:24 compute-0 podman[262257]: 2025-11-25 16:20:24.324431484 +0000 UTC m=+0.124527409 container init f9850d0c0ba0ab7ca27625f242612b0c81b30ab852c5555700d074ac6b661e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_moore, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:20:24 compute-0 podman[262257]: 2025-11-25 16:20:24.340324384 +0000 UTC m=+0.140420219 container start f9850d0c0ba0ab7ca27625f242612b0c81b30ab852c5555700d074ac6b661e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_moore, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:20:24 compute-0 podman[262257]: 2025-11-25 16:20:24.344402474 +0000 UTC m=+0.144498309 container attach f9850d0c0ba0ab7ca27625f242612b0c81b30ab852c5555700d074ac6b661e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_moore, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Nov 25 16:20:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v941: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:24 compute-0 sshd-session[262096]: Connection closed by authenticating user root 171.244.51.45 port 52658 [preauth]
Nov 25 16:20:25 compute-0 cool_moore[262274]: {
Nov 25 16:20:25 compute-0 cool_moore[262274]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 16:20:25 compute-0 cool_moore[262274]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:20:25 compute-0 cool_moore[262274]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 16:20:25 compute-0 cool_moore[262274]:         "osd_id": 1,
Nov 25 16:20:25 compute-0 cool_moore[262274]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:20:25 compute-0 cool_moore[262274]:         "type": "bluestore"
Nov 25 16:20:25 compute-0 cool_moore[262274]:     },
Nov 25 16:20:25 compute-0 cool_moore[262274]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 16:20:25 compute-0 cool_moore[262274]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:20:25 compute-0 cool_moore[262274]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 16:20:25 compute-0 cool_moore[262274]:         "osd_id": 2,
Nov 25 16:20:25 compute-0 cool_moore[262274]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:20:25 compute-0 cool_moore[262274]:         "type": "bluestore"
Nov 25 16:20:25 compute-0 cool_moore[262274]:     },
Nov 25 16:20:25 compute-0 cool_moore[262274]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 16:20:25 compute-0 cool_moore[262274]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:20:25 compute-0 cool_moore[262274]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 16:20:25 compute-0 cool_moore[262274]:         "osd_id": 0,
Nov 25 16:20:25 compute-0 cool_moore[262274]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:20:25 compute-0 cool_moore[262274]:         "type": "bluestore"
Nov 25 16:20:25 compute-0 cool_moore[262274]:     }
Nov 25 16:20:25 compute-0 cool_moore[262274]: }
Nov 25 16:20:25 compute-0 systemd[1]: libpod-f9850d0c0ba0ab7ca27625f242612b0c81b30ab852c5555700d074ac6b661e63.scope: Deactivated successfully.
Nov 25 16:20:25 compute-0 podman[262257]: 2025-11-25 16:20:25.458029545 +0000 UTC m=+1.258125390 container died f9850d0c0ba0ab7ca27625f242612b0c81b30ab852c5555700d074ac6b661e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 16:20:25 compute-0 systemd[1]: libpod-f9850d0c0ba0ab7ca27625f242612b0c81b30ab852c5555700d074ac6b661e63.scope: Consumed 1.119s CPU time.
Nov 25 16:20:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd6c81217dbe27d059ab8438cb72bf246c64ac25c63cae56f961d4ebb68588de-merged.mount: Deactivated successfully.
Nov 25 16:20:25 compute-0 podman[262257]: 2025-11-25 16:20:25.524619526 +0000 UTC m=+1.324715371 container remove f9850d0c0ba0ab7ca27625f242612b0c81b30ab852c5555700d074ac6b661e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_moore, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:20:25 compute-0 systemd[1]: libpod-conmon-f9850d0c0ba0ab7ca27625f242612b0c81b30ab852c5555700d074ac6b661e63.scope: Deactivated successfully.
Nov 25 16:20:25 compute-0 sudo[262149]: pam_unix(sudo:session): session closed for user root
Nov 25 16:20:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:20:25 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:20:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:20:25 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:20:25 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 3cd9a751-4fe2-4bdc-b888-9f00d40ca504 does not exist
Nov 25 16:20:25 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev d1d7a164-b58b-4f85-94b7-60cdf6f28c4f does not exist
Nov 25 16:20:25 compute-0 sudo[262321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:20:25 compute-0 sudo[262321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:20:25 compute-0 sudo[262321]: pam_unix(sudo:session): session closed for user root
Nov 25 16:20:26 compute-0 ceph-mon[74985]: pgmap v941: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:26 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:20:26 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:20:26 compute-0 sudo[262346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 16:20:26 compute-0 sudo[262346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:20:26 compute-0 sudo[262346]: pam_unix(sudo:session): session closed for user root
Nov 25 16:20:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v942: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:20:27 compute-0 ceph-mon[74985]: pgmap v942: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v943: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:29 compute-0 ceph-mon[74985]: pgmap v943: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v944: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:31 compute-0 ceph-mon[74985]: pgmap v944: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:20:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v945: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:33 compute-0 ceph-mon[74985]: pgmap v945: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v946: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:35 compute-0 ceph-mon[74985]: pgmap v946: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v947: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:36 compute-0 podman[262371]: 2025-11-25 16:20:36.665687351 +0000 UTC m=+0.072499472 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true, container_name=multipathd)
Nov 25 16:20:36 compute-0 podman[262372]: 2025-11-25 16:20:36.698088927 +0000 UTC m=+0.103837269 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent)
Nov 25 16:20:36 compute-0 podman[262373]: 2025-11-25 16:20:36.711669855 +0000 UTC m=+0.116485112 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:20:36 compute-0 ceph-mon[74985]: pgmap v947: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:20:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v948: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:39 compute-0 ceph-mon[74985]: pgmap v948: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:20:40
Nov 25 16:20:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:20:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:20:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'volumes', 'vms', 'default.rgw.log', 'default.rgw.meta', 'backups', '.rgw.root', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control']
Nov 25 16:20:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:20:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:20:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:20:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:20:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:20:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:20:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:20:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:20:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:20:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:20:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:20:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:20:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:20:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:20:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:20:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:20:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:20:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v949: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:40 compute-0 ceph-mon[74985]: pgmap v949: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:20:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v950: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:43 compute-0 ceph-mon[74985]: pgmap v950: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:44 compute-0 nova_compute[254092]: 2025-11-25 16:20:44.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:20:44 compute-0 nova_compute[254092]: 2025-11-25 16:20:44.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:20:44 compute-0 nova_compute[254092]: 2025-11-25 16:20:44.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:20:44 compute-0 nova_compute[254092]: 2025-11-25 16:20:44.530 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:20:44 compute-0 nova_compute[254092]: 2025-11-25 16:20:44.531 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:20:44 compute-0 nova_compute[254092]: 2025-11-25 16:20:44.531 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:20:44 compute-0 nova_compute[254092]: 2025-11-25 16:20:44.532 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 16:20:44 compute-0 nova_compute[254092]: 2025-11-25 16:20:44.532 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:20:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v951: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:20:44 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/238979290' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:20:45 compute-0 nova_compute[254092]: 2025-11-25 16:20:45.013 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:20:45 compute-0 ceph-mon[74985]: pgmap v951: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:45 compute-0 nova_compute[254092]: 2025-11-25 16:20:45.187 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:20:45 compute-0 nova_compute[254092]: 2025-11-25 16:20:45.188 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5183MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 16:20:45 compute-0 nova_compute[254092]: 2025-11-25 16:20:45.188 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:20:45 compute-0 nova_compute[254092]: 2025-11-25 16:20:45.188 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:20:45 compute-0 nova_compute[254092]: 2025-11-25 16:20:45.283 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 16:20:45 compute-0 nova_compute[254092]: 2025-11-25 16:20:45.283 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 16:20:45 compute-0 nova_compute[254092]: 2025-11-25 16:20:45.305 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:20:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:20:45 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1703015220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:20:45 compute-0 nova_compute[254092]: 2025-11-25 16:20:45.741 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:20:45 compute-0 nova_compute[254092]: 2025-11-25 16:20:45.747 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:20:45 compute-0 nova_compute[254092]: 2025-11-25 16:20:45.763 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:20:45 compute-0 nova_compute[254092]: 2025-11-25 16:20:45.765 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 16:20:45 compute-0 nova_compute[254092]: 2025-11-25 16:20:45.765 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:20:46 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/238979290' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:20:46 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1703015220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:20:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v952: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:46 compute-0 nova_compute[254092]: 2025-11-25 16:20:46.764 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:20:46 compute-0 nova_compute[254092]: 2025-11-25 16:20:46.764 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:20:46 compute-0 nova_compute[254092]: 2025-11-25 16:20:46.765 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:20:46 compute-0 nova_compute[254092]: 2025-11-25 16:20:46.765 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 16:20:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:20:46 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Nov 25 16:20:46 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:20:46.889451) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 16:20:46 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Nov 25 16:20:46 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087646889519, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 1104, "num_deletes": 256, "total_data_size": 1601280, "memory_usage": 1626496, "flush_reason": "Manual Compaction"}
Nov 25 16:20:46 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Nov 25 16:20:46 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087646959037, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 1575714, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18765, "largest_seqno": 19868, "table_properties": {"data_size": 1570421, "index_size": 2753, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 10952, "raw_average_key_size": 18, "raw_value_size": 1559700, "raw_average_value_size": 2670, "num_data_blocks": 126, "num_entries": 584, "num_filter_entries": 584, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764087534, "oldest_key_time": 1764087534, "file_creation_time": 1764087646, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:20:46 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 69670 microseconds, and 8093 cpu microseconds.
Nov 25 16:20:46 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:20:46 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:20:46.959117) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 1575714 bytes OK
Nov 25 16:20:46 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:20:46.959147) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Nov 25 16:20:46 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:20:46.981177) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Nov 25 16:20:46 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:20:46.981202) EVENT_LOG_v1 {"time_micros": 1764087646981194, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 16:20:46 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:20:46.981225) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 16:20:46 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 1596128, prev total WAL file size 1596128, number of live WAL files 2.
Nov 25 16:20:46 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:20:46 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:20:46.982352) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323532' seq:72057594037927935, type:22 .. '6C6F676D00353034' seq:0, type:0; will stop at (end)
Nov 25 16:20:46 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 16:20:46 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(1538KB)], [44(6169KB)]
Nov 25 16:20:46 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087646982388, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 7892803, "oldest_snapshot_seqno": -1}
Nov 25 16:20:47 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4323 keys, 7751701 bytes, temperature: kUnknown
Nov 25 16:20:47 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087647251028, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 7751701, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7721711, "index_size": 18104, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10821, "raw_key_size": 107197, "raw_average_key_size": 24, "raw_value_size": 7642311, "raw_average_value_size": 1767, "num_data_blocks": 762, "num_entries": 4323, "num_filter_entries": 4323, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764087646, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:20:47 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:20:47 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:20:47.251262) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 7751701 bytes
Nov 25 16:20:47 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:20:47.282979) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 29.4 rd, 28.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 6.0 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(9.9) write-amplify(4.9) OK, records in: 4847, records dropped: 524 output_compression: NoCompression
Nov 25 16:20:47 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:20:47.283017) EVENT_LOG_v1 {"time_micros": 1764087647282990, "job": 22, "event": "compaction_finished", "compaction_time_micros": 268731, "compaction_time_cpu_micros": 26522, "output_level": 6, "num_output_files": 1, "total_output_size": 7751701, "num_input_records": 4847, "num_output_records": 4323, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 16:20:47 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:20:47 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087647283363, "job": 22, "event": "table_file_deletion", "file_number": 46}
Nov 25 16:20:47 compute-0 ceph-mon[74985]: pgmap v952: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:47 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:20:47 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087647285015, "job": 22, "event": "table_file_deletion", "file_number": 44}
Nov 25 16:20:47 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:20:46.982250) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:20:47 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:20:47.285136) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:20:47 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:20:47.285145) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:20:47 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:20:47.285150) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:20:47 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:20:47.285154) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:20:47 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:20:47.285158) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:20:47 compute-0 nova_compute[254092]: 2025-11-25 16:20:47.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:20:48 compute-0 nova_compute[254092]: 2025-11-25 16:20:48.490 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:20:48 compute-0 nova_compute[254092]: 2025-11-25 16:20:48.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:20:48 compute-0 nova_compute[254092]: 2025-11-25 16:20:48.514 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:20:48 compute-0 nova_compute[254092]: 2025-11-25 16:20:48.515 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 16:20:48 compute-0 nova_compute[254092]: 2025-11-25 16:20:48.515 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 16:20:48 compute-0 nova_compute[254092]: 2025-11-25 16:20:48.525 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 16:20:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v953: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:49 compute-0 ceph-mon[74985]: pgmap v953: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:20:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Nov 25 16:20:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Nov 25 16:20:50 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Nov 25 16:20:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v955: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 102 B/s wr, 0 op/s
Nov 25 16:20:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:20:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:20:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:20:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:20:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:20:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:20:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:20:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:20:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:20:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:20:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 16:20:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:20:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 16:20:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:20:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:20:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:20:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 16:20:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:20:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 16:20:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:20:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:20:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:20:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 16:20:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Nov 25 16:20:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Nov 25 16:20:51 compute-0 ceph-mon[74985]: osdmap e119: 3 total, 3 up, 3 in
Nov 25 16:20:51 compute-0 ceph-mon[74985]: pgmap v955: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 102 B/s wr, 0 op/s
Nov 25 16:20:51 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Nov 25 16:20:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:20:52 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Nov 25 16:20:52 compute-0 ceph-mon[74985]: osdmap e120: 3 total, 3 up, 3 in
Nov 25 16:20:52 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Nov 25 16:20:52 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Nov 25 16:20:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v958: 321 pgs: 321 active+clean; 8.4 MiB data, 148 MiB used, 60 GiB / 60 GiB avail; 9.5 KiB/s rd, 1.3 MiB/s wr, 13 op/s
Nov 25 16:20:53 compute-0 ceph-mon[74985]: osdmap e121: 3 total, 3 up, 3 in
Nov 25 16:20:53 compute-0 ceph-mon[74985]: pgmap v958: 321 pgs: 321 active+clean; 8.4 MiB data, 148 MiB used, 60 GiB / 60 GiB avail; 9.5 KiB/s rd, 1.3 MiB/s wr, 13 op/s
Nov 25 16:20:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Nov 25 16:20:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Nov 25 16:20:54 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Nov 25 16:20:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v960: 321 pgs: 321 active+clean; 8.4 MiB data, 148 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.8 MiB/s wr, 18 op/s
Nov 25 16:20:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 16:20:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/800492187' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:20:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 16:20:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/800492187' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:20:55 compute-0 ceph-mon[74985]: osdmap e122: 3 total, 3 up, 3 in
Nov 25 16:20:55 compute-0 ceph-mon[74985]: pgmap v960: 321 pgs: 321 active+clean; 8.4 MiB data, 148 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.8 MiB/s wr, 18 op/s
Nov 25 16:20:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/800492187' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:20:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/800492187' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:20:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v961: 321 pgs: 321 active+clean; 13 MiB data, 153 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 2.1 MiB/s wr, 37 op/s
Nov 25 16:20:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:20:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Nov 25 16:20:57 compute-0 ceph-mon[74985]: pgmap v961: 321 pgs: 321 active+clean; 13 MiB data, 153 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 2.1 MiB/s wr, 37 op/s
Nov 25 16:20:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Nov 25 16:20:57 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Nov 25 16:20:58 compute-0 ceph-mon[74985]: osdmap e123: 3 total, 3 up, 3 in
Nov 25 16:20:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v963: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 5.2 MiB/s wr, 47 op/s
Nov 25 16:20:59 compute-0 ceph-mon[74985]: pgmap v963: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 5.2 MiB/s wr, 47 op/s
Nov 25 16:21:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v964: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.1 MiB/s wr, 37 op/s
Nov 25 16:21:01 compute-0 ceph-mon[74985]: pgmap v964: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.1 MiB/s wr, 37 op/s
Nov 25 16:21:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:21:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v965: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 3.9 MiB/s wr, 35 op/s
Nov 25 16:21:04 compute-0 ceph-mon[74985]: pgmap v965: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 3.9 MiB/s wr, 35 op/s
Nov 25 16:21:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v966: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.3 MiB/s wr, 29 op/s
Nov 25 16:21:05 compute-0 ceph-mon[74985]: pgmap v966: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.3 MiB/s wr, 29 op/s
Nov 25 16:21:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v967: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 9.6 KiB/s rd, 2.8 MiB/s wr, 15 op/s
Nov 25 16:21:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:21:07 compute-0 ceph-mon[74985]: pgmap v967: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 9.6 KiB/s rd, 2.8 MiB/s wr, 15 op/s
Nov 25 16:21:07 compute-0 podman[262480]: 2025-11-25 16:21:07.71118104 +0000 UTC m=+0.103214823 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 16:21:07 compute-0 podman[262481]: 2025-11-25 16:21:07.711311464 +0000 UTC m=+0.102679349 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 25 16:21:07 compute-0 podman[262482]: 2025-11-25 16:21:07.773866565 +0000 UTC m=+0.158666193 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 16:21:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v968: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:21:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:21:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:21:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:21:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:21:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:21:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:21:10 compute-0 ceph-mon[74985]: pgmap v968: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:21:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v969: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:21:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:21:11 compute-0 ceph-mon[74985]: pgmap v969: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:21:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v970: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:21:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:21:13.589 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:21:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:21:13.590 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:21:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:21:13.590 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:21:13 compute-0 ceph-mon[74985]: pgmap v970: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:21:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v971: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:21:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Nov 25 16:21:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Nov 25 16:21:14 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Nov 25 16:21:14 compute-0 ceph-mon[74985]: pgmap v971: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:21:16 compute-0 ceph-mon[74985]: osdmap e124: 3 total, 3 up, 3 in
Nov 25 16:21:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v973: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 op/s
Nov 25 16:21:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:21:17 compute-0 ceph-mon[74985]: pgmap v973: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 op/s
Nov 25 16:21:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v974: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 715 B/s wr, 17 op/s
Nov 25 16:21:19 compute-0 ceph-mon[74985]: pgmap v974: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 715 B/s wr, 17 op/s
Nov 25 16:21:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v975: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 1021 B/s wr, 18 op/s
Nov 25 16:21:20 compute-0 ceph-mon[74985]: pgmap v975: 321 pgs: 321 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 1021 B/s wr, 18 op/s
Nov 25 16:21:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:21:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Nov 25 16:21:22 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Nov 25 16:21:22 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Nov 25 16:21:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v977: 321 pgs: 321 active+clean; 21 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Nov 25 16:21:23 compute-0 ceph-mon[74985]: osdmap e125: 3 total, 3 up, 3 in
Nov 25 16:21:23 compute-0 ceph-mon[74985]: pgmap v977: 321 pgs: 321 active+clean; 21 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Nov 25 16:21:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v978: 321 pgs: 321 active+clean; 21 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Nov 25 16:21:25 compute-0 ceph-mon[74985]: pgmap v978: 321 pgs: 321 active+clean; 21 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Nov 25 16:21:26 compute-0 sudo[262544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:21:26 compute-0 sudo[262544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:21:26 compute-0 sudo[262544]: pam_unix(sudo:session): session closed for user root
Nov 25 16:21:26 compute-0 sudo[262569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:21:26 compute-0 sudo[262569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:21:26 compute-0 sudo[262569]: pam_unix(sudo:session): session closed for user root
Nov 25 16:21:26 compute-0 sudo[262594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:21:26 compute-0 sudo[262594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:21:26 compute-0 sudo[262594]: pam_unix(sudo:session): session closed for user root
Nov 25 16:21:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v979: 321 pgs: 321 active+clean; 21 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.6 KiB/s wr, 31 op/s
Nov 25 16:21:26 compute-0 sudo[262619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 16:21:26 compute-0 sudo[262619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:21:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Nov 25 16:21:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Nov 25 16:21:26 compute-0 ceph-mon[74985]: pgmap v979: 321 pgs: 321 active+clean; 21 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.6 KiB/s wr, 31 op/s
Nov 25 16:21:27 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Nov 25 16:21:27 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:21:27 compute-0 sudo[262619]: pam_unix(sudo:session): session closed for user root
Nov 25 16:21:27 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:21:27 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:21:27 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 16:21:27 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:21:27 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 16:21:27 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:21:27 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 2f4d3432-2689-4749-a06c-7940df3d1ce0 does not exist
Nov 25 16:21:27 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 551d0327-0173-49cb-a2d8-a9093e29a2f5 does not exist
Nov 25 16:21:27 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 87856292-307e-4334-8dd0-16fb9e0f3968 does not exist
Nov 25 16:21:27 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 16:21:27 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:21:27 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 16:21:27 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:21:27 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:21:27 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:21:27 compute-0 sudo[262674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:21:27 compute-0 sudo[262674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:21:27 compute-0 sudo[262674]: pam_unix(sudo:session): session closed for user root
Nov 25 16:21:27 compute-0 sudo[262699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:21:27 compute-0 sudo[262699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:21:27 compute-0 sudo[262699]: pam_unix(sudo:session): session closed for user root
Nov 25 16:21:27 compute-0 sudo[262724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:21:27 compute-0 sudo[262724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:21:27 compute-0 sudo[262724]: pam_unix(sudo:session): session closed for user root
Nov 25 16:21:27 compute-0 sudo[262749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 16:21:27 compute-0 sudo[262749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:21:28 compute-0 podman[262816]: 2025-11-25 16:21:28.010979037 +0000 UTC m=+0.025773568 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:21:28 compute-0 ceph-mon[74985]: osdmap e126: 3 total, 3 up, 3 in
Nov 25 16:21:28 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:21:28 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:21:28 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:21:28 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:21:28 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:21:28 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:21:28 compute-0 podman[262816]: 2025-11-25 16:21:28.31981197 +0000 UTC m=+0.334606481 container create ac0b0264c5f27c421ef9a91d68846a00087cf3cce8d37c7fe8a01fe160f6dc08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hermann, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:21:28 compute-0 systemd[1]: Started libpod-conmon-ac0b0264c5f27c421ef9a91d68846a00087cf3cce8d37c7fe8a01fe160f6dc08.scope.
Nov 25 16:21:28 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:21:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v981: 321 pgs: 321 active+clean; 21 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 895 B/s wr, 17 op/s
Nov 25 16:21:28 compute-0 podman[262816]: 2025-11-25 16:21:28.743454149 +0000 UTC m=+0.758248680 container init ac0b0264c5f27c421ef9a91d68846a00087cf3cce8d37c7fe8a01fe160f6dc08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:21:28 compute-0 podman[262816]: 2025-11-25 16:21:28.753004357 +0000 UTC m=+0.767798868 container start ac0b0264c5f27c421ef9a91d68846a00087cf3cce8d37c7fe8a01fe160f6dc08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hermann, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 16:21:28 compute-0 clever_hermann[262832]: 167 167
Nov 25 16:21:28 compute-0 systemd[1]: libpod-ac0b0264c5f27c421ef9a91d68846a00087cf3cce8d37c7fe8a01fe160f6dc08.scope: Deactivated successfully.
Nov 25 16:21:28 compute-0 podman[262816]: 2025-11-25 16:21:28.811824888 +0000 UTC m=+0.826619419 container attach ac0b0264c5f27c421ef9a91d68846a00087cf3cce8d37c7fe8a01fe160f6dc08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hermann, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 16:21:28 compute-0 podman[262816]: 2025-11-25 16:21:28.812832855 +0000 UTC m=+0.827627366 container died ac0b0264c5f27c421ef9a91d68846a00087cf3cce8d37c7fe8a01fe160f6dc08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hermann, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:21:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7cd781ac9ba902579107db221c77b9871038cb5e6a309ecb4b2d33a6202890e-merged.mount: Deactivated successfully.
Nov 25 16:21:29 compute-0 ceph-mon[74985]: pgmap v981: 321 pgs: 321 active+clean; 21 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 895 B/s wr, 17 op/s
Nov 25 16:21:29 compute-0 podman[262816]: 2025-11-25 16:21:29.671002726 +0000 UTC m=+1.685797237 container remove ac0b0264c5f27c421ef9a91d68846a00087cf3cce8d37c7fe8a01fe160f6dc08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 16:21:29 compute-0 systemd[1]: libpod-conmon-ac0b0264c5f27c421ef9a91d68846a00087cf3cce8d37c7fe8a01fe160f6dc08.scope: Deactivated successfully.
Nov 25 16:21:29 compute-0 podman[262857]: 2025-11-25 16:21:29.890130073 +0000 UTC m=+0.025674655 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:21:30 compute-0 podman[262857]: 2025-11-25 16:21:30.077985424 +0000 UTC m=+0.213529956 container create c6d2791f0d23a0e35e507b77ef6d751537d88d2c0c265b4d2cfc214ee24f0e5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_cerf, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:21:30 compute-0 systemd[1]: Started libpod-conmon-c6d2791f0d23a0e35e507b77ef6d751537d88d2c0c265b4d2cfc214ee24f0e5b.scope.
Nov 25 16:21:30 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:21:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32a723869097ce6c4b8f654e91b79ca17f4321beacc5f25057818ddb77cd6d1f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:21:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32a723869097ce6c4b8f654e91b79ca17f4321beacc5f25057818ddb77cd6d1f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:21:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32a723869097ce6c4b8f654e91b79ca17f4321beacc5f25057818ddb77cd6d1f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:21:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32a723869097ce6c4b8f654e91b79ca17f4321beacc5f25057818ddb77cd6d1f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:21:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32a723869097ce6c4b8f654e91b79ca17f4321beacc5f25057818ddb77cd6d1f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 16:21:30 compute-0 podman[262857]: 2025-11-25 16:21:30.348060178 +0000 UTC m=+0.483604820 container init c6d2791f0d23a0e35e507b77ef6d751537d88d2c0c265b4d2cfc214ee24f0e5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:21:30 compute-0 podman[262857]: 2025-11-25 16:21:30.355069548 +0000 UTC m=+0.490614140 container start c6d2791f0d23a0e35e507b77ef6d751537d88d2c0c265b4d2cfc214ee24f0e5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 16:21:30 compute-0 podman[262857]: 2025-11-25 16:21:30.496919514 +0000 UTC m=+0.632464076 container attach c6d2791f0d23a0e35e507b77ef6d751537d88d2c0c265b4d2cfc214ee24f0e5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:21:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v982: 321 pgs: 321 active+clean; 21 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.2 KiB/s wr, 19 op/s
Nov 25 16:21:31 compute-0 ceph-mon[74985]: pgmap v982: 321 pgs: 321 active+clean; 21 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.2 KiB/s wr, 19 op/s
Nov 25 16:21:31 compute-0 thirsty_cerf[262873]: --> passed data devices: 0 physical, 3 LVM
Nov 25 16:21:31 compute-0 thirsty_cerf[262873]: --> relative data size: 1.0
Nov 25 16:21:31 compute-0 thirsty_cerf[262873]: --> All data devices are unavailable
Nov 25 16:21:31 compute-0 systemd[1]: libpod-c6d2791f0d23a0e35e507b77ef6d751537d88d2c0c265b4d2cfc214ee24f0e5b.scope: Deactivated successfully.
Nov 25 16:21:31 compute-0 podman[262903]: 2025-11-25 16:21:31.407883824 +0000 UTC m=+0.023555398 container died c6d2791f0d23a0e35e507b77ef6d751537d88d2c0c265b4d2cfc214ee24f0e5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_cerf, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 16:21:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:21:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-32a723869097ce6c4b8f654e91b79ca17f4321beacc5f25057818ddb77cd6d1f-merged.mount: Deactivated successfully.
Nov 25 16:21:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v983: 321 pgs: 321 active+clean; 4.9 MiB data, 153 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.4 KiB/s wr, 21 op/s
Nov 25 16:21:33 compute-0 podman[262903]: 2025-11-25 16:21:33.304005429 +0000 UTC m=+1.919677013 container remove c6d2791f0d23a0e35e507b77ef6d751537d88d2c0c265b4d2cfc214ee24f0e5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 16:21:33 compute-0 systemd[1]: libpod-conmon-c6d2791f0d23a0e35e507b77ef6d751537d88d2c0c265b4d2cfc214ee24f0e5b.scope: Deactivated successfully.
Nov 25 16:21:33 compute-0 sudo[262749]: pam_unix(sudo:session): session closed for user root
Nov 25 16:21:33 compute-0 ceph-mon[74985]: pgmap v983: 321 pgs: 321 active+clean; 4.9 MiB data, 153 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.4 KiB/s wr, 21 op/s
Nov 25 16:21:33 compute-0 sudo[262918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:21:33 compute-0 sudo[262918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:21:33 compute-0 sudo[262918]: pam_unix(sudo:session): session closed for user root
Nov 25 16:21:33 compute-0 sudo[262943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:21:33 compute-0 sudo[262943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:21:33 compute-0 sudo[262943]: pam_unix(sudo:session): session closed for user root
Nov 25 16:21:33 compute-0 sudo[262968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:21:33 compute-0 sudo[262968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:21:33 compute-0 sudo[262968]: pam_unix(sudo:session): session closed for user root
Nov 25 16:21:33 compute-0 sudo[262993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 16:21:33 compute-0 sudo[262993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:21:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v984: 321 pgs: 321 active+clean; 4.9 MiB data, 153 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.4 KiB/s wr, 21 op/s
Nov 25 16:21:34 compute-0 podman[263058]: 2025-11-25 16:21:34.864770415 +0000 UTC m=+0.023222670 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:21:35 compute-0 podman[263058]: 2025-11-25 16:21:35.176931088 +0000 UTC m=+0.335383273 container create 3519e6cde79395f2a82fe2af2508779bc0715eee7f91519f288008e4d31f17f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:21:35 compute-0 ceph-mon[74985]: pgmap v984: 321 pgs: 321 active+clean; 4.9 MiB data, 153 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.4 KiB/s wr, 21 op/s
Nov 25 16:21:35 compute-0 systemd[1]: Started libpod-conmon-3519e6cde79395f2a82fe2af2508779bc0715eee7f91519f288008e4d31f17f3.scope.
Nov 25 16:21:35 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:21:35 compute-0 podman[263058]: 2025-11-25 16:21:35.679452069 +0000 UTC m=+0.837904264 container init 3519e6cde79395f2a82fe2af2508779bc0715eee7f91519f288008e4d31f17f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:21:35 compute-0 podman[263058]: 2025-11-25 16:21:35.685994847 +0000 UTC m=+0.844447022 container start 3519e6cde79395f2a82fe2af2508779bc0715eee7f91519f288008e4d31f17f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 16:21:35 compute-0 practical_hamilton[263074]: 167 167
Nov 25 16:21:35 compute-0 systemd[1]: libpod-3519e6cde79395f2a82fe2af2508779bc0715eee7f91519f288008e4d31f17f3.scope: Deactivated successfully.
Nov 25 16:21:35 compute-0 podman[263058]: 2025-11-25 16:21:35.794574974 +0000 UTC m=+0.953027149 container attach 3519e6cde79395f2a82fe2af2508779bc0715eee7f91519f288008e4d31f17f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hamilton, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:21:35 compute-0 podman[263058]: 2025-11-25 16:21:35.794901113 +0000 UTC m=+0.953353288 container died 3519e6cde79395f2a82fe2af2508779bc0715eee7f91519f288008e4d31f17f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hamilton, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:21:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c789636d04f310e5c001129b47925e28adbd5cdbdadd7b8b1fd5d680eebc906-merged.mount: Deactivated successfully.
Nov 25 16:21:36 compute-0 podman[263058]: 2025-11-25 16:21:36.476326013 +0000 UTC m=+1.634778188 container remove 3519e6cde79395f2a82fe2af2508779bc0715eee7f91519f288008e4d31f17f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hamilton, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 16:21:36 compute-0 systemd[1]: libpod-conmon-3519e6cde79395f2a82fe2af2508779bc0715eee7f91519f288008e4d31f17f3.scope: Deactivated successfully.
Nov 25 16:21:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v985: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 18 op/s
Nov 25 16:21:36 compute-0 podman[263099]: 2025-11-25 16:21:36.728155684 +0000 UTC m=+0.120874950 container create 99bfcec57058ec6067854999aeb720cce9a54076d1ec1a1c261154a981fd80af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 16:21:36 compute-0 podman[263099]: 2025-11-25 16:21:36.634270554 +0000 UTC m=+0.026989830 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:21:36 compute-0 systemd[1]: Started libpod-conmon-99bfcec57058ec6067854999aeb720cce9a54076d1ec1a1c261154a981fd80af.scope.
Nov 25 16:21:36 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:21:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/256600f77a54c29f82f093dab29168124ddbe2c6e02dd58a257052d121994446/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:21:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/256600f77a54c29f82f093dab29168124ddbe2c6e02dd58a257052d121994446/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:21:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/256600f77a54c29f82f093dab29168124ddbe2c6e02dd58a257052d121994446/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:21:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/256600f77a54c29f82f093dab29168124ddbe2c6e02dd58a257052d121994446/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:21:36 compute-0 ceph-mon[74985]: pgmap v985: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 18 op/s
Nov 25 16:21:36 compute-0 podman[263099]: 2025-11-25 16:21:36.965461612 +0000 UTC m=+0.358180928 container init 99bfcec57058ec6067854999aeb720cce9a54076d1ec1a1c261154a981fd80af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chebyshev, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 16:21:36 compute-0 podman[263099]: 2025-11-25 16:21:36.974744744 +0000 UTC m=+0.367464010 container start 99bfcec57058ec6067854999aeb720cce9a54076d1ec1a1c261154a981fd80af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chebyshev, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:21:36 compute-0 podman[263099]: 2025-11-25 16:21:36.992916135 +0000 UTC m=+0.385635461 container attach 99bfcec57058ec6067854999aeb720cce9a54076d1ec1a1c261154a981fd80af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chebyshev, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:21:37 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:21:37 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Nov 25 16:21:37 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Nov 25 16:21:37 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Nov 25 16:21:37 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 16:21:37 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 4492 writes, 20K keys, 4492 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 4492 writes, 4492 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1319 writes, 6006 keys, 1319 commit groups, 1.0 writes per commit group, ingest: 8.58 MB, 0.01 MB/s
                                           Interval WAL: 1319 writes, 1319 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     19.1      1.16              0.08        11    0.105       0      0       0.0       0.0
                                             L6      1/0    7.39 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2     90.3     74.8      0.94              0.20        10    0.094     44K   5209       0.0       0.0
                                            Sum      1/0    7.39 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2     40.6     44.1      2.10              0.28        21    0.100     44K   5209       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.6     62.8     63.4      0.69              0.14        10    0.069     24K   2995       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     90.3     74.8      0.94              0.20        10    0.094     44K   5209       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     19.8      1.11              0.08        10    0.111       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.1      0.05              0.00         1    0.047       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.022, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.09 GB write, 0.05 MB/s write, 0.08 GB read, 0.05 MB/s read, 2.1 seconds
                                           Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563cc83831f0#2 capacity: 308.00 MB usage: 6.53 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 6.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(421,6.18 MB,2.00493%) FilterBlock(22,130.30 KB,0.0413127%) IndexBlock(22,237.86 KB,0.0754171%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]: {
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:     "0": [
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:         {
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:             "devices": [
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:                 "/dev/loop3"
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:             ],
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:             "lv_name": "ceph_lv0",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:             "lv_size": "21470642176",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:             "name": "ceph_lv0",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:             "tags": {
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:                 "ceph.cluster_name": "ceph",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:                 "ceph.crush_device_class": "",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:                 "ceph.encrypted": "0",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:                 "ceph.osd_id": "0",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:                 "ceph.type": "block",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:                 "ceph.vdo": "0"
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:             },
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:             "type": "block",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:             "vg_name": "ceph_vg0"
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:         }
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:     ],
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:     "1": [
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:         {
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:             "devices": [
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:                 "/dev/loop4"
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:             ],
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:             "lv_name": "ceph_lv1",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:             "lv_size": "21470642176",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:             "name": "ceph_lv1",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:             "tags": {
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:                 "ceph.cluster_name": "ceph",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:                 "ceph.crush_device_class": "",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:                 "ceph.encrypted": "0",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:                 "ceph.osd_id": "1",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:                 "ceph.type": "block",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:                 "ceph.vdo": "0"
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:             },
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:             "type": "block",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:             "vg_name": "ceph_vg1"
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:         }
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:     ],
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:     "2": [
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:         {
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:             "devices": [
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:                 "/dev/loop5"
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:             ],
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:             "lv_name": "ceph_lv2",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:             "lv_size": "21470642176",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:             "name": "ceph_lv2",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:             "tags": {
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:                 "ceph.cluster_name": "ceph",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:                 "ceph.crush_device_class": "",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:                 "ceph.encrypted": "0",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:                 "ceph.osd_id": "2",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:                 "ceph.type": "block",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:                 "ceph.vdo": "0"
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:             },
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:             "type": "block",
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:             "vg_name": "ceph_vg2"
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:         }
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]:     ]
Nov 25 16:21:37 compute-0 nervous_chebyshev[263116]: }
Nov 25 16:21:37 compute-0 systemd[1]: libpod-99bfcec57058ec6067854999aeb720cce9a54076d1ec1a1c261154a981fd80af.scope: Deactivated successfully.
Nov 25 16:21:37 compute-0 podman[263125]: 2025-11-25 16:21:37.767369982 +0000 UTC m=+0.030274510 container died 99bfcec57058ec6067854999aeb720cce9a54076d1ec1a1c261154a981fd80af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:21:38 compute-0 ceph-mon[74985]: osdmap e127: 3 total, 3 up, 3 in
Nov 25 16:21:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-256600f77a54c29f82f093dab29168124ddbe2c6e02dd58a257052d121994446-merged.mount: Deactivated successfully.
Nov 25 16:21:38 compute-0 podman[263125]: 2025-11-25 16:21:38.677665393 +0000 UTC m=+0.940569911 container remove 99bfcec57058ec6067854999aeb720cce9a54076d1ec1a1c261154a981fd80af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chebyshev, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 16:21:38 compute-0 systemd[1]: libpod-conmon-99bfcec57058ec6067854999aeb720cce9a54076d1ec1a1c261154a981fd80af.scope: Deactivated successfully.
Nov 25 16:21:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v987: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 1.1 KiB/s wr, 17 op/s
Nov 25 16:21:38 compute-0 sudo[262993]: pam_unix(sudo:session): session closed for user root
Nov 25 16:21:38 compute-0 podman[263135]: 2025-11-25 16:21:38.771222533 +0000 UTC m=+1.009298969 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent)
Nov 25 16:21:38 compute-0 podman[263155]: 2025-11-25 16:21:38.773628559 +0000 UTC m=+0.470678232 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 16:21:38 compute-0 podman[263126]: 2025-11-25 16:21:38.787321529 +0000 UTC m=+1.022899068 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 16:21:38 compute-0 sudo[263194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:21:38 compute-0 sudo[263194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:21:38 compute-0 sudo[263194]: pam_unix(sudo:session): session closed for user root
Nov 25 16:21:38 compute-0 sudo[263226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:21:38 compute-0 sudo[263226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:21:38 compute-0 sudo[263226]: pam_unix(sudo:session): session closed for user root
Nov 25 16:21:38 compute-0 sudo[263251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:21:38 compute-0 sudo[263251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:21:38 compute-0 sudo[263251]: pam_unix(sudo:session): session closed for user root
Nov 25 16:21:39 compute-0 sudo[263276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 16:21:39 compute-0 sudo[263276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:21:39 compute-0 ceph-mon[74985]: pgmap v987: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 1.1 KiB/s wr, 17 op/s
Nov 25 16:21:39 compute-0 podman[263344]: 2025-11-25 16:21:39.446884708 +0000 UTC m=+0.096800139 container create 810eaf85b4a0380eed1a0fb39aa1fd6aeca3ce80ef425ae31e2176399fd5956f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:21:39 compute-0 podman[263344]: 2025-11-25 16:21:39.37301131 +0000 UTC m=+0.022926761 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:21:39 compute-0 systemd[1]: Started libpod-conmon-810eaf85b4a0380eed1a0fb39aa1fd6aeca3ce80ef425ae31e2176399fd5956f.scope.
Nov 25 16:21:39 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:21:39 compute-0 podman[263344]: 2025-11-25 16:21:39.602077876 +0000 UTC m=+0.251993317 container init 810eaf85b4a0380eed1a0fb39aa1fd6aeca3ce80ef425ae31e2176399fd5956f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_herschel, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:21:39 compute-0 podman[263344]: 2025-11-25 16:21:39.610439653 +0000 UTC m=+0.260355084 container start 810eaf85b4a0380eed1a0fb39aa1fd6aeca3ce80ef425ae31e2176399fd5956f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_herschel, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Nov 25 16:21:39 compute-0 heuristic_herschel[263360]: 167 167
Nov 25 16:21:39 compute-0 systemd[1]: libpod-810eaf85b4a0380eed1a0fb39aa1fd6aeca3ce80ef425ae31e2176399fd5956f.scope: Deactivated successfully.
Nov 25 16:21:39 compute-0 podman[263344]: 2025-11-25 16:21:39.652232173 +0000 UTC m=+0.302147634 container attach 810eaf85b4a0380eed1a0fb39aa1fd6aeca3ce80ef425ae31e2176399fd5956f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_herschel, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 16:21:39 compute-0 podman[263344]: 2025-11-25 16:21:39.653180568 +0000 UTC m=+0.303095999 container died 810eaf85b4a0380eed1a0fb39aa1fd6aeca3ce80ef425ae31e2176399fd5956f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:21:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c24668aed7df2498639b47e73faf931df573333db12691c08e0d016541d3c8f-merged.mount: Deactivated successfully.
Nov 25 16:21:39 compute-0 podman[263344]: 2025-11-25 16:21:39.900425675 +0000 UTC m=+0.550341096 container remove 810eaf85b4a0380eed1a0fb39aa1fd6aeca3ce80ef425ae31e2176399fd5956f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 16:21:39 compute-0 systemd[1]: libpod-conmon-810eaf85b4a0380eed1a0fb39aa1fd6aeca3ce80ef425ae31e2176399fd5956f.scope: Deactivated successfully.
Nov 25 16:21:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:21:40
Nov 25 16:21:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:21:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:21:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'images', 'cephfs.cephfs.meta', 'backups', 'default.rgw.control', 'vms', '.rgw.root']
Nov 25 16:21:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:21:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:21:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:21:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:21:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:21:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:21:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:21:40 compute-0 podman[263386]: 2025-11-25 16:21:40.097226708 +0000 UTC m=+0.083125839 container create ba83182eed52360c53baca3df7db2b067c3d8672eab61a7e66d1c38adac40020 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_aryabhata, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:21:40 compute-0 podman[263386]: 2025-11-25 16:21:40.037594586 +0000 UTC m=+0.023493737 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:21:40 compute-0 systemd[1]: Started libpod-conmon-ba83182eed52360c53baca3df7db2b067c3d8672eab61a7e66d1c38adac40020.scope.
Nov 25 16:21:40 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:21:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44a50b71254882b8a8d88f36dee7fed31a94f653785b1e9d9c470c5fba49a42f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:21:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44a50b71254882b8a8d88f36dee7fed31a94f653785b1e9d9c470c5fba49a42f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:21:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44a50b71254882b8a8d88f36dee7fed31a94f653785b1e9d9c470c5fba49a42f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:21:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44a50b71254882b8a8d88f36dee7fed31a94f653785b1e9d9c470c5fba49a42f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:21:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:21:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:21:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:21:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:21:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:21:40 compute-0 podman[263386]: 2025-11-25 16:21:40.302800118 +0000 UTC m=+0.288699249 container init ba83182eed52360c53baca3df7db2b067c3d8672eab61a7e66d1c38adac40020 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_aryabhata, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 16:21:40 compute-0 podman[263386]: 2025-11-25 16:21:40.309127019 +0000 UTC m=+0.295026150 container start ba83182eed52360c53baca3df7db2b067c3d8672eab61a7e66d1c38adac40020 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_aryabhata, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:21:40 compute-0 podman[263386]: 2025-11-25 16:21:40.397748937 +0000 UTC m=+0.383648088 container attach ba83182eed52360c53baca3df7db2b067c3d8672eab61a7e66d1c38adac40020 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:21:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:21:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:21:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:21:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:21:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:21:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v988: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s rd, 409 B/s wr, 8 op/s
Nov 25 16:21:41 compute-0 ecstatic_aryabhata[263403]: {
Nov 25 16:21:41 compute-0 ecstatic_aryabhata[263403]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 16:21:41 compute-0 ecstatic_aryabhata[263403]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:21:41 compute-0 ecstatic_aryabhata[263403]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 16:21:41 compute-0 ecstatic_aryabhata[263403]:         "osd_id": 1,
Nov 25 16:21:41 compute-0 ecstatic_aryabhata[263403]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:21:41 compute-0 ecstatic_aryabhata[263403]:         "type": "bluestore"
Nov 25 16:21:41 compute-0 ecstatic_aryabhata[263403]:     },
Nov 25 16:21:41 compute-0 ecstatic_aryabhata[263403]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 16:21:41 compute-0 ecstatic_aryabhata[263403]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:21:41 compute-0 ecstatic_aryabhata[263403]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 16:21:41 compute-0 ecstatic_aryabhata[263403]:         "osd_id": 2,
Nov 25 16:21:41 compute-0 ecstatic_aryabhata[263403]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:21:41 compute-0 ecstatic_aryabhata[263403]:         "type": "bluestore"
Nov 25 16:21:41 compute-0 ecstatic_aryabhata[263403]:     },
Nov 25 16:21:41 compute-0 ecstatic_aryabhata[263403]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 16:21:41 compute-0 ecstatic_aryabhata[263403]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:21:41 compute-0 ecstatic_aryabhata[263403]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 16:21:41 compute-0 ecstatic_aryabhata[263403]:         "osd_id": 0,
Nov 25 16:21:41 compute-0 ecstatic_aryabhata[263403]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:21:41 compute-0 ecstatic_aryabhata[263403]:         "type": "bluestore"
Nov 25 16:21:41 compute-0 ecstatic_aryabhata[263403]:     }
Nov 25 16:21:41 compute-0 ecstatic_aryabhata[263403]: }
Nov 25 16:21:41 compute-0 systemd[1]: libpod-ba83182eed52360c53baca3df7db2b067c3d8672eab61a7e66d1c38adac40020.scope: Deactivated successfully.
Nov 25 16:21:41 compute-0 systemd[1]: libpod-ba83182eed52360c53baca3df7db2b067c3d8672eab61a7e66d1c38adac40020.scope: Consumed 1.060s CPU time.
Nov 25 16:21:41 compute-0 ceph-mon[74985]: pgmap v988: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s rd, 409 B/s wr, 8 op/s
Nov 25 16:21:41 compute-0 podman[263436]: 2025-11-25 16:21:41.412543754 +0000 UTC m=+0.029916860 container died ba83182eed52360c53baca3df7db2b067c3d8672eab61a7e66d1c38adac40020 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_aryabhata, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:21:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-44a50b71254882b8a8d88f36dee7fed31a94f653785b1e9d9c470c5fba49a42f-merged.mount: Deactivated successfully.
Nov 25 16:21:41 compute-0 podman[263436]: 2025-11-25 16:21:41.827178179 +0000 UTC m=+0.444551305 container remove ba83182eed52360c53baca3df7db2b067c3d8672eab61a7e66d1c38adac40020 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:21:41 compute-0 systemd[1]: libpod-conmon-ba83182eed52360c53baca3df7db2b067c3d8672eab61a7e66d1c38adac40020.scope: Deactivated successfully.
Nov 25 16:21:41 compute-0 sudo[263276]: pam_unix(sudo:session): session closed for user root
Nov 25 16:21:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:21:41 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:21:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:21:42 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:21:42 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 62b6d10b-b5b3-47cc-be66-235c1611caba does not exist
Nov 25 16:21:42 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev ca0f7dc1-3370-48c4-af13-4647302b9017 does not exist
Nov 25 16:21:42 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:21:42 compute-0 sudo[263451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:21:42 compute-0 sudo[263451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:21:42 compute-0 sudo[263451]: pam_unix(sudo:session): session closed for user root
Nov 25 16:21:42 compute-0 sudo[263476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 16:21:42 compute-0 sudo[263476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:21:42 compute-0 sudo[263476]: pam_unix(sudo:session): session closed for user root
Nov 25 16:21:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v989: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 0 B/s wr, 3 op/s
Nov 25 16:21:43 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:21:43 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:21:43 compute-0 ceph-mon[74985]: pgmap v989: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 0 B/s wr, 3 op/s
Nov 25 16:21:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v990: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 0 B/s wr, 3 op/s
Nov 25 16:21:45 compute-0 nova_compute[254092]: 2025-11-25 16:21:45.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:21:45 compute-0 nova_compute[254092]: 2025-11-25 16:21:45.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:21:45 compute-0 nova_compute[254092]: 2025-11-25 16:21:45.533 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:21:45 compute-0 nova_compute[254092]: 2025-11-25 16:21:45.534 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:21:45 compute-0 nova_compute[254092]: 2025-11-25 16:21:45.534 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:21:45 compute-0 nova_compute[254092]: 2025-11-25 16:21:45.535 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 16:21:45 compute-0 nova_compute[254092]: 2025-11-25 16:21:45.535 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:21:45 compute-0 ceph-mon[74985]: pgmap v990: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 0 B/s wr, 3 op/s
Nov 25 16:21:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:21:45 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3562051711' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:21:45 compute-0 nova_compute[254092]: 2025-11-25 16:21:45.938 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.403s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:21:46 compute-0 nova_compute[254092]: 2025-11-25 16:21:46.079 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:21:46 compute-0 nova_compute[254092]: 2025-11-25 16:21:46.080 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5154MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 16:21:46 compute-0 nova_compute[254092]: 2025-11-25 16:21:46.080 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:21:46 compute-0 nova_compute[254092]: 2025-11-25 16:21:46.080 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:21:46 compute-0 nova_compute[254092]: 2025-11-25 16:21:46.165 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 16:21:46 compute-0 nova_compute[254092]: 2025-11-25 16:21:46.166 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 16:21:46 compute-0 nova_compute[254092]: 2025-11-25 16:21:46.190 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:21:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:21:46 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1669472854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:21:46 compute-0 nova_compute[254092]: 2025-11-25 16:21:46.605 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:21:46 compute-0 nova_compute[254092]: 2025-11-25 16:21:46.610 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:21:46 compute-0 nova_compute[254092]: 2025-11-25 16:21:46.623 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:21:46 compute-0 nova_compute[254092]: 2025-11-25 16:21:46.624 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 16:21:46 compute-0 nova_compute[254092]: 2025-11-25 16:21:46.625 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.544s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:21:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v991: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Nov 25 16:21:46 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3562051711' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:21:46 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1669472854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:21:47 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:21:47 compute-0 nova_compute[254092]: 2025-11-25 16:21:47.625 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:21:47 compute-0 nova_compute[254092]: 2025-11-25 16:21:47.626 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:21:47 compute-0 nova_compute[254092]: 2025-11-25 16:21:47.626 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:21:47 compute-0 nova_compute[254092]: 2025-11-25 16:21:47.626 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 16:21:47 compute-0 ceph-mon[74985]: pgmap v991: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Nov 25 16:21:48 compute-0 nova_compute[254092]: 2025-11-25 16:21:48.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:21:48 compute-0 nova_compute[254092]: 2025-11-25 16:21:48.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:21:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v992: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:21:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Nov 25 16:21:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Nov 25 16:21:49 compute-0 ceph-mon[74985]: pgmap v992: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:21:49 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Nov 25 16:21:49 compute-0 nova_compute[254092]: 2025-11-25 16:21:49.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:21:49 compute-0 nova_compute[254092]: 2025-11-25 16:21:49.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:21:49 compute-0 nova_compute[254092]: 2025-11-25 16:21:49.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 16:21:49 compute-0 nova_compute[254092]: 2025-11-25 16:21:49.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 16:21:49 compute-0 nova_compute[254092]: 2025-11-25 16:21:49.518 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 16:21:50 compute-0 ceph-mon[74985]: osdmap e128: 3 total, 3 up, 3 in
Nov 25 16:21:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v994: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 716 B/s wr, 3 op/s
Nov 25 16:21:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:21:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:21:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:21:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:21:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:21:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:21:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:21:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:21:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:21:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:21:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Nov 25 16:21:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:21:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 16:21:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:21:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:21:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:21:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 16:21:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:21:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 16:21:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:21:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:21:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:21:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 16:21:51 compute-0 ceph-mon[74985]: pgmap v994: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 716 B/s wr, 3 op/s
Nov 25 16:21:52 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:21:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v995: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 25 16:21:52 compute-0 ceph-mon[74985]: pgmap v995: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 25 16:21:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v996: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 25 16:21:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 16:21:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4064269425' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:21:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 16:21:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4064269425' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:21:55 compute-0 ceph-mon[74985]: pgmap v996: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 25 16:21:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/4064269425' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:21:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/4064269425' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:21:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v997: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 25 16:21:56 compute-0 ceph-mon[74985]: pgmap v997: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 25 16:21:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:21:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v998: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 25 16:21:59 compute-0 ceph-mon[74985]: pgmap v998: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 25 16:22:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v999: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 8.9 KiB/s rd, 1.4 KiB/s wr, 12 op/s
Nov 25 16:22:01 compute-0 ceph-mon[74985]: pgmap v999: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 8.9 KiB/s rd, 1.4 KiB/s wr, 12 op/s
Nov 25 16:22:02 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:22:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1000: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 767 B/s wr, 9 op/s
Nov 25 16:22:03 compute-0 ceph-mon[74985]: pgmap v1000: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 767 B/s wr, 9 op/s
Nov 25 16:22:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1001: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:05 compute-0 ceph-mon[74985]: pgmap v1001: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1002: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:06 compute-0 ceph-mon[74985]: pgmap v1002: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:22:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1003: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:08 compute-0 ceph-mon[74985]: pgmap v1003: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:09 compute-0 podman[263547]: 2025-11-25 16:22:09.640373003 +0000 UTC m=+0.061234077 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:22:09 compute-0 podman[263548]: 2025-11-25 16:22:09.665695348 +0000 UTC m=+0.080180669 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 16:22:09 compute-0 podman[263546]: 2025-11-25 16:22:09.672888533 +0000 UTC m=+0.093553471 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, config_id=multipathd, org.label-schema.vendor=CentOS)
Nov 25 16:22:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:22:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:22:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:22:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:22:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:22:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:22:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1004: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:11 compute-0 ceph-mon[74985]: pgmap v1004: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:12 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:22:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1005: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:12 compute-0 ceph-mon[74985]: pgmap v1005: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:22:13.589 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:22:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:22:13.590 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:22:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:22:13.590 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:22:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1006: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:15 compute-0 ceph-mon[74985]: pgmap v1006: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1007: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:17 compute-0 ceph-mon[74985]: pgmap v1007: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:17 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:22:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1008: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:18 compute-0 ceph-mon[74985]: pgmap v1008: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1009: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:21 compute-0 ceph-mon[74985]: pgmap v1009: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:22 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:22:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1010: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:23 compute-0 ceph-mon[74985]: pgmap v1010: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1011: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:25 compute-0 ceph-mon[74985]: pgmap v1011: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1012: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:27 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:22:27 compute-0 ceph-mon[74985]: pgmap v1012: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1013: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:29 compute-0 ceph-mon[74985]: pgmap v1013: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1014: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:31 compute-0 ceph-mon[74985]: pgmap v1014: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:22:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1015: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:33 compute-0 ceph-mon[74985]: pgmap v1015: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1016: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:34 compute-0 ceph-mon[74985]: pgmap v1016: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1017: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:37 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:22:37 compute-0 ceph-mon[74985]: pgmap v1017: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1018: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:39 compute-0 ceph-mon[74985]: pgmap v1018: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:22:40
Nov 25 16:22:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:22:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:22:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['backups', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', '.mgr', 'images', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta']
Nov 25 16:22:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:22:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:22:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:22:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:22:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:22:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:22:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:22:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:22:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:22:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:22:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:22:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:22:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:22:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:22:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:22:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:22:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:22:40 compute-0 podman[263612]: 2025-11-25 16:22:40.650304892 +0000 UTC m=+0.055201434 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 16:22:40 compute-0 podman[263611]: 2025-11-25 16:22:40.661955277 +0000 UTC m=+0.069700556 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, config_id=multipathd, org.label-schema.schema-version=1.0)
Nov 25 16:22:40 compute-0 podman[263613]: 2025-11-25 16:22:40.718466716 +0000 UTC m=+0.116467961 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Nov 25 16:22:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1019: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:40 compute-0 ceph-mgr[75280]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1812945073
Nov 25 16:22:41 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Nov 25 16:22:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:22:41.174195) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 16:22:41 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Nov 25 16:22:41 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087761174233, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1253, "num_deletes": 254, "total_data_size": 1845245, "memory_usage": 1869920, "flush_reason": "Manual Compaction"}
Nov 25 16:22:41 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Nov 25 16:22:41 compute-0 ceph-mon[74985]: pgmap v1019: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:41 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087761340887, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1825986, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19869, "largest_seqno": 21121, "table_properties": {"data_size": 1819921, "index_size": 3396, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12744, "raw_average_key_size": 20, "raw_value_size": 1807736, "raw_average_value_size": 2842, "num_data_blocks": 154, "num_entries": 636, "num_filter_entries": 636, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764087647, "oldest_key_time": 1764087647, "file_creation_time": 1764087761, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:22:41 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 166728 microseconds, and 5764 cpu microseconds.
Nov 25 16:22:41 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:22:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:22:41.340922) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1825986 bytes OK
Nov 25 16:22:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:22:41.340941) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Nov 25 16:22:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:22:41.376165) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Nov 25 16:22:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:22:41.376204) EVENT_LOG_v1 {"time_micros": 1764087761376193, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 16:22:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:22:41.376225) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 16:22:41 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 1839532, prev total WAL file size 1839532, number of live WAL files 2.
Nov 25 16:22:41 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:22:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:22:41.377146) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Nov 25 16:22:41 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 16:22:41 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1783KB)], [47(7570KB)]
Nov 25 16:22:41 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087761377277, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9577687, "oldest_snapshot_seqno": -1}
Nov 25 16:22:41 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4438 keys, 7799225 bytes, temperature: kUnknown
Nov 25 16:22:41 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087761510359, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7799225, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7768250, "index_size": 18769, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11141, "raw_key_size": 110265, "raw_average_key_size": 24, "raw_value_size": 7686540, "raw_average_value_size": 1731, "num_data_blocks": 786, "num_entries": 4438, "num_filter_entries": 4438, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764087761, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:22:41 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:22:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:22:41.510592) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7799225 bytes
Nov 25 16:22:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:22:41.517084) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 71.9 rd, 58.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 7.4 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(9.5) write-amplify(4.3) OK, records in: 4959, records dropped: 521 output_compression: NoCompression
Nov 25 16:22:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:22:41.517116) EVENT_LOG_v1 {"time_micros": 1764087761517101, "job": 24, "event": "compaction_finished", "compaction_time_micros": 133126, "compaction_time_cpu_micros": 28947, "output_level": 6, "num_output_files": 1, "total_output_size": 7799225, "num_input_records": 4959, "num_output_records": 4438, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 16:22:41 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:22:41 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087761517937, "job": 24, "event": "table_file_deletion", "file_number": 49}
Nov 25 16:22:41 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:22:41 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087761520780, "job": 24, "event": "table_file_deletion", "file_number": 47}
Nov 25 16:22:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:22:41.376957) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:22:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:22:41.520852) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:22:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:22:41.520857) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:22:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:22:41.520859) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:22:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:22:41.520861) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:22:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:22:41.520863) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:22:42 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:22:42 compute-0 sudo[263677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:22:42 compute-0 sudo[263677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:22:42 compute-0 sudo[263677]: pam_unix(sudo:session): session closed for user root
Nov 25 16:22:42 compute-0 sudo[263702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:22:42 compute-0 sudo[263702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:22:42 compute-0 sudo[263702]: pam_unix(sudo:session): session closed for user root
Nov 25 16:22:42 compute-0 sudo[263727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:22:42 compute-0 sudo[263727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:22:42 compute-0 sudo[263727]: pam_unix(sudo:session): session closed for user root
Nov 25 16:22:42 compute-0 sudo[263752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 16:22:42 compute-0 sudo[263752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:22:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1020: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:42 compute-0 sudo[263752]: pam_unix(sudo:session): session closed for user root
Nov 25 16:22:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:22:43 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:22:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 16:22:43 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:22:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 16:22:43 compute-0 ceph-mon[74985]: pgmap v1020: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:43 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:22:43 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev a45ac62c-cd05-4507-93b2-392f60410e38 does not exist
Nov 25 16:22:43 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 5d45e04c-b164-49a7-9267-44ff4c21784a does not exist
Nov 25 16:22:43 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 7db5c926-8408-4eb1-b3ac-2b1d3607e2e7 does not exist
Nov 25 16:22:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 16:22:43 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:22:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 16:22:43 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:22:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:22:43 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:22:43 compute-0 sudo[263808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:22:43 compute-0 sudo[263808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:22:43 compute-0 sudo[263808]: pam_unix(sudo:session): session closed for user root
Nov 25 16:22:43 compute-0 sudo[263833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:22:43 compute-0 sudo[263833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:22:43 compute-0 sudo[263833]: pam_unix(sudo:session): session closed for user root
Nov 25 16:22:43 compute-0 sudo[263858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:22:43 compute-0 sudo[263858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:22:43 compute-0 sudo[263858]: pam_unix(sudo:session): session closed for user root
Nov 25 16:22:43 compute-0 sudo[263883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 16:22:43 compute-0 sudo[263883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:22:43 compute-0 podman[263948]: 2025-11-25 16:22:43.711064237 +0000 UTC m=+0.038629045 container create 91c782d4ff45b4b72275def270a705e361c1f3338fe8a5a0d7ef69434c28de37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 16:22:43 compute-0 systemd[1]: Started libpod-conmon-91c782d4ff45b4b72275def270a705e361c1f3338fe8a5a0d7ef69434c28de37.scope.
Nov 25 16:22:43 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:22:43 compute-0 podman[263948]: 2025-11-25 16:22:43.694273443 +0000 UTC m=+0.021838261 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:22:43 compute-0 podman[263948]: 2025-11-25 16:22:43.807675821 +0000 UTC m=+0.135240649 container init 91c782d4ff45b4b72275def270a705e361c1f3338fe8a5a0d7ef69434c28de37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cori, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:22:43 compute-0 podman[263948]: 2025-11-25 16:22:43.816240113 +0000 UTC m=+0.143804921 container start 91c782d4ff45b4b72275def270a705e361c1f3338fe8a5a0d7ef69434c28de37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cori, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 16:22:43 compute-0 podman[263948]: 2025-11-25 16:22:43.820296412 +0000 UTC m=+0.147861240 container attach 91c782d4ff45b4b72275def270a705e361c1f3338fe8a5a0d7ef69434c28de37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 16:22:43 compute-0 epic_cori[263964]: 167 167
Nov 25 16:22:43 compute-0 systemd[1]: libpod-91c782d4ff45b4b72275def270a705e361c1f3338fe8a5a0d7ef69434c28de37.scope: Deactivated successfully.
Nov 25 16:22:43 compute-0 podman[263948]: 2025-11-25 16:22:43.822688937 +0000 UTC m=+0.150253775 container died 91c782d4ff45b4b72275def270a705e361c1f3338fe8a5a0d7ef69434c28de37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 16:22:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-af0fdc1beafaee6c5bc6ebeb48caf38ecb3c04a583dcdba31f857127f99c9e4c-merged.mount: Deactivated successfully.
Nov 25 16:22:44 compute-0 podman[263948]: 2025-11-25 16:22:44.008464032 +0000 UTC m=+0.336028870 container remove 91c782d4ff45b4b72275def270a705e361c1f3338fe8a5a0d7ef69434c28de37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cori, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:22:44 compute-0 systemd[1]: libpod-conmon-91c782d4ff45b4b72275def270a705e361c1f3338fe8a5a0d7ef69434c28de37.scope: Deactivated successfully.
Nov 25 16:22:44 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:22:44 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:22:44 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:22:44 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:22:44 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:22:44 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:22:44 compute-0 podman[263991]: 2025-11-25 16:22:44.243258022 +0000 UTC m=+0.075243295 container create 2df964b07f1b1c88a58256402bc038b811a5785e2da7589f18065f3b18e45c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 16:22:44 compute-0 systemd[1]: Started libpod-conmon-2df964b07f1b1c88a58256402bc038b811a5785e2da7589f18065f3b18e45c29.scope.
Nov 25 16:22:44 compute-0 podman[263991]: 2025-11-25 16:22:44.210335192 +0000 UTC m=+0.042320485 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:22:44 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:22:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4752d39ffde2042832595f4f45a6729fd1a9d570c0166c735e2e2a69f6c5b8ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:22:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4752d39ffde2042832595f4f45a6729fd1a9d570c0166c735e2e2a69f6c5b8ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:22:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4752d39ffde2042832595f4f45a6729fd1a9d570c0166c735e2e2a69f6c5b8ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:22:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4752d39ffde2042832595f4f45a6729fd1a9d570c0166c735e2e2a69f6c5b8ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:22:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4752d39ffde2042832595f4f45a6729fd1a9d570c0166c735e2e2a69f6c5b8ed/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 16:22:44 compute-0 podman[263991]: 2025-11-25 16:22:44.335858567 +0000 UTC m=+0.167843840 container init 2df964b07f1b1c88a58256402bc038b811a5785e2da7589f18065f3b18e45c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:22:44 compute-0 podman[263991]: 2025-11-25 16:22:44.351395447 +0000 UTC m=+0.183380700 container start 2df964b07f1b1c88a58256402bc038b811a5785e2da7589f18065f3b18e45c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_davinci, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:22:44 compute-0 podman[263991]: 2025-11-25 16:22:44.374648076 +0000 UTC m=+0.206633369 container attach 2df964b07f1b1c88a58256402bc038b811a5785e2da7589f18065f3b18e45c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_davinci, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 16:22:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1021: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:45 compute-0 ceph-mon[74985]: pgmap v1021: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:45 compute-0 pedantic_davinci[264007]: --> passed data devices: 0 physical, 3 LVM
Nov 25 16:22:45 compute-0 pedantic_davinci[264007]: --> relative data size: 1.0
Nov 25 16:22:45 compute-0 pedantic_davinci[264007]: --> All data devices are unavailable
Nov 25 16:22:45 compute-0 systemd[1]: libpod-2df964b07f1b1c88a58256402bc038b811a5785e2da7589f18065f3b18e45c29.scope: Deactivated successfully.
Nov 25 16:22:45 compute-0 podman[264036]: 2025-11-25 16:22:45.401345516 +0000 UTC m=+0.021831182 container died 2df964b07f1b1c88a58256402bc038b811a5785e2da7589f18065f3b18e45c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:22:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-4752d39ffde2042832595f4f45a6729fd1a9d570c0166c735e2e2a69f6c5b8ed-merged.mount: Deactivated successfully.
Nov 25 16:22:45 compute-0 nova_compute[254092]: 2025-11-25 16:22:45.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:22:45 compute-0 podman[264036]: 2025-11-25 16:22:45.505061251 +0000 UTC m=+0.125546937 container remove 2df964b07f1b1c88a58256402bc038b811a5785e2da7589f18065f3b18e45c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_davinci, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 16:22:45 compute-0 systemd[1]: libpod-conmon-2df964b07f1b1c88a58256402bc038b811a5785e2da7589f18065f3b18e45c29.scope: Deactivated successfully.
Nov 25 16:22:45 compute-0 nova_compute[254092]: 2025-11-25 16:22:45.515 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:22:45 compute-0 nova_compute[254092]: 2025-11-25 16:22:45.516 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:22:45 compute-0 nova_compute[254092]: 2025-11-25 16:22:45.516 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:22:45 compute-0 nova_compute[254092]: 2025-11-25 16:22:45.516 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 16:22:45 compute-0 nova_compute[254092]: 2025-11-25 16:22:45.516 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:22:45 compute-0 sudo[263883]: pam_unix(sudo:session): session closed for user root
Nov 25 16:22:45 compute-0 sudo[264052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:22:45 compute-0 sudo[264052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:22:45 compute-0 sudo[264052]: pam_unix(sudo:session): session closed for user root
Nov 25 16:22:45 compute-0 sudo[264096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:22:45 compute-0 sudo[264096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:22:45 compute-0 sudo[264096]: pam_unix(sudo:session): session closed for user root
Nov 25 16:22:45 compute-0 sudo[264121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:22:45 compute-0 sudo[264121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:22:45 compute-0 sudo[264121]: pam_unix(sudo:session): session closed for user root
Nov 25 16:22:45 compute-0 sudo[264146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 16:22:45 compute-0 sudo[264146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:22:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:22:45 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1471127480' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:22:45 compute-0 nova_compute[254092]: 2025-11-25 16:22:45.941 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:22:46 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1471127480' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:22:46 compute-0 nova_compute[254092]: 2025-11-25 16:22:46.113 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:22:46 compute-0 nova_compute[254092]: 2025-11-25 16:22:46.115 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5156MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 16:22:46 compute-0 nova_compute[254092]: 2025-11-25 16:22:46.115 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:22:46 compute-0 nova_compute[254092]: 2025-11-25 16:22:46.116 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:22:46 compute-0 nova_compute[254092]: 2025-11-25 16:22:46.171 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 16:22:46 compute-0 nova_compute[254092]: 2025-11-25 16:22:46.172 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 16:22:46 compute-0 nova_compute[254092]: 2025-11-25 16:22:46.192 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:22:46 compute-0 podman[264213]: 2025-11-25 16:22:46.233988296 +0000 UTC m=+0.037219177 container create 539b44ad46547c0871caf2f211f3d08e61360cb25f08e1a765b7eb57b9e856f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Nov 25 16:22:46 compute-0 systemd[1]: Started libpod-conmon-539b44ad46547c0871caf2f211f3d08e61360cb25f08e1a765b7eb57b9e856f6.scope.
Nov 25 16:22:46 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:22:46 compute-0 podman[264213]: 2025-11-25 16:22:46.310772703 +0000 UTC m=+0.114003634 container init 539b44ad46547c0871caf2f211f3d08e61360cb25f08e1a765b7eb57b9e856f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_gould, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 16:22:46 compute-0 podman[264213]: 2025-11-25 16:22:46.217929102 +0000 UTC m=+0.021160023 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:22:46 compute-0 podman[264213]: 2025-11-25 16:22:46.319343335 +0000 UTC m=+0.122574246 container start 539b44ad46547c0871caf2f211f3d08e61360cb25f08e1a765b7eb57b9e856f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:22:46 compute-0 brave_gould[264231]: 167 167
Nov 25 16:22:46 compute-0 podman[264213]: 2025-11-25 16:22:46.323999601 +0000 UTC m=+0.127230532 container attach 539b44ad46547c0871caf2f211f3d08e61360cb25f08e1a765b7eb57b9e856f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_gould, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 16:22:46 compute-0 systemd[1]: libpod-539b44ad46547c0871caf2f211f3d08e61360cb25f08e1a765b7eb57b9e856f6.scope: Deactivated successfully.
Nov 25 16:22:46 compute-0 podman[264213]: 2025-11-25 16:22:46.325747669 +0000 UTC m=+0.128978550 container died 539b44ad46547c0871caf2f211f3d08e61360cb25f08e1a765b7eb57b9e856f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 16:22:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-0aaaac24547034cc81cd47812e125a26b0fa20ec4e8a0e78c1e256531718edba-merged.mount: Deactivated successfully.
Nov 25 16:22:46 compute-0 podman[264213]: 2025-11-25 16:22:46.372062791 +0000 UTC m=+0.175293682 container remove 539b44ad46547c0871caf2f211f3d08e61360cb25f08e1a765b7eb57b9e856f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:22:46 compute-0 systemd[1]: libpod-conmon-539b44ad46547c0871caf2f211f3d08e61360cb25f08e1a765b7eb57b9e856f6.scope: Deactivated successfully.
Nov 25 16:22:46 compute-0 podman[264273]: 2025-11-25 16:22:46.518712058 +0000 UTC m=+0.036655202 container create bd70cde9d99078ba2a1c6256504591387a1b6be78254d7d561bf18749a0651d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_dirac, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 16:22:46 compute-0 systemd[1]: Started libpod-conmon-bd70cde9d99078ba2a1c6256504591387a1b6be78254d7d561bf18749a0651d1.scope.
Nov 25 16:22:46 compute-0 podman[264273]: 2025-11-25 16:22:46.503045974 +0000 UTC m=+0.020989138 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:22:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:22:46 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2006967742' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:22:46 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:22:46 compute-0 nova_compute[254092]: 2025-11-25 16:22:46.629 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:22:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c8eb69dceacb11e6b73f7fbf9bbdf0ce90e18125428dc4332057a889ba2eafe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:22:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c8eb69dceacb11e6b73f7fbf9bbdf0ce90e18125428dc4332057a889ba2eafe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:22:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c8eb69dceacb11e6b73f7fbf9bbdf0ce90e18125428dc4332057a889ba2eafe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:22:46 compute-0 nova_compute[254092]: 2025-11-25 16:22:46.636 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:22:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c8eb69dceacb11e6b73f7fbf9bbdf0ce90e18125428dc4332057a889ba2eafe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:22:46 compute-0 podman[264273]: 2025-11-25 16:22:46.652795065 +0000 UTC m=+0.170738239 container init bd70cde9d99078ba2a1c6256504591387a1b6be78254d7d561bf18749a0651d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_dirac, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:22:46 compute-0 nova_compute[254092]: 2025-11-25 16:22:46.656 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:22:46 compute-0 nova_compute[254092]: 2025-11-25 16:22:46.658 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 16:22:46 compute-0 nova_compute[254092]: 2025-11-25 16:22:46.658 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.542s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:22:46 compute-0 nova_compute[254092]: 2025-11-25 16:22:46.659 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:22:46 compute-0 nova_compute[254092]: 2025-11-25 16:22:46.659 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 16:22:46 compute-0 podman[264273]: 2025-11-25 16:22:46.660088541 +0000 UTC m=+0.178031685 container start bd70cde9d99078ba2a1c6256504591387a1b6be78254d7d561bf18749a0651d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_dirac, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:22:46 compute-0 podman[264273]: 2025-11-25 16:22:46.663469693 +0000 UTC m=+0.181412857 container attach bd70cde9d99078ba2a1c6256504591387a1b6be78254d7d561bf18749a0651d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_dirac, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:22:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1022: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:47 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:22:47 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2006967742' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:22:47 compute-0 ceph-mon[74985]: pgmap v1022: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]: {
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:     "0": [
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:         {
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:             "devices": [
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:                 "/dev/loop3"
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:             ],
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:             "lv_name": "ceph_lv0",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:             "lv_size": "21470642176",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:             "name": "ceph_lv0",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:             "tags": {
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:                 "ceph.cluster_name": "ceph",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:                 "ceph.crush_device_class": "",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:                 "ceph.encrypted": "0",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:                 "ceph.osd_id": "0",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:                 "ceph.type": "block",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:                 "ceph.vdo": "0"
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:             },
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:             "type": "block",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:             "vg_name": "ceph_vg0"
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:         }
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:     ],
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:     "1": [
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:         {
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:             "devices": [
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:                 "/dev/loop4"
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:             ],
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:             "lv_name": "ceph_lv1",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:             "lv_size": "21470642176",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:             "name": "ceph_lv1",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:             "tags": {
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:                 "ceph.cluster_name": "ceph",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:                 "ceph.crush_device_class": "",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:                 "ceph.encrypted": "0",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:                 "ceph.osd_id": "1",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:                 "ceph.type": "block",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:                 "ceph.vdo": "0"
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:             },
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:             "type": "block",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:             "vg_name": "ceph_vg1"
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:         }
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:     ],
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:     "2": [
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:         {
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:             "devices": [
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:                 "/dev/loop5"
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:             ],
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:             "lv_name": "ceph_lv2",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:             "lv_size": "21470642176",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:             "name": "ceph_lv2",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:             "tags": {
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:                 "ceph.cluster_name": "ceph",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:                 "ceph.crush_device_class": "",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:                 "ceph.encrypted": "0",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:                 "ceph.osd_id": "2",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:                 "ceph.type": "block",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:                 "ceph.vdo": "0"
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:             },
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:             "type": "block",
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:             "vg_name": "ceph_vg2"
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:         }
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]:     ]
Nov 25 16:22:47 compute-0 wonderful_dirac[264289]: }
Nov 25 16:22:47 compute-0 systemd[1]: libpod-bd70cde9d99078ba2a1c6256504591387a1b6be78254d7d561bf18749a0651d1.scope: Deactivated successfully.
Nov 25 16:22:47 compute-0 podman[264273]: 2025-11-25 16:22:47.404194768 +0000 UTC m=+0.922137912 container died bd70cde9d99078ba2a1c6256504591387a1b6be78254d7d561bf18749a0651d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_dirac, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 16:22:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c8eb69dceacb11e6b73f7fbf9bbdf0ce90e18125428dc4332057a889ba2eafe-merged.mount: Deactivated successfully.
Nov 25 16:22:47 compute-0 podman[264273]: 2025-11-25 16:22:47.469110533 +0000 UTC m=+0.987053677 container remove bd70cde9d99078ba2a1c6256504591387a1b6be78254d7d561bf18749a0651d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_dirac, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:22:47 compute-0 systemd[1]: libpod-conmon-bd70cde9d99078ba2a1c6256504591387a1b6be78254d7d561bf18749a0651d1.scope: Deactivated successfully.
Nov 25 16:22:47 compute-0 sudo[264146]: pam_unix(sudo:session): session closed for user root
Nov 25 16:22:47 compute-0 nova_compute[254092]: 2025-11-25 16:22:47.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:22:47 compute-0 nova_compute[254092]: 2025-11-25 16:22:47.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:22:47 compute-0 nova_compute[254092]: 2025-11-25 16:22:47.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:22:47 compute-0 nova_compute[254092]: 2025-11-25 16:22:47.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:22:47 compute-0 nova_compute[254092]: 2025-11-25 16:22:47.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 16:22:47 compute-0 nova_compute[254092]: 2025-11-25 16:22:47.512 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 16:22:47 compute-0 nova_compute[254092]: 2025-11-25 16:22:47.513 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:22:47 compute-0 sudo[264313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:22:47 compute-0 sudo[264313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:22:47 compute-0 sudo[264313]: pam_unix(sudo:session): session closed for user root
Nov 25 16:22:47 compute-0 sudo[264338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:22:47 compute-0 sudo[264338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:22:47 compute-0 sudo[264338]: pam_unix(sudo:session): session closed for user root
Nov 25 16:22:47 compute-0 sudo[264363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:22:47 compute-0 sudo[264363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:22:47 compute-0 sudo[264363]: pam_unix(sudo:session): session closed for user root
Nov 25 16:22:47 compute-0 sudo[264388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 16:22:47 compute-0 sudo[264388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:22:48 compute-0 podman[264453]: 2025-11-25 16:22:48.025449471 +0000 UTC m=+0.039311685 container create 2b7e543fc8d6fa96756f95312735daab1ee2f30b1260abb32645bbec33899ab2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 16:22:48 compute-0 systemd[1]: Started libpod-conmon-2b7e543fc8d6fa96756f95312735daab1ee2f30b1260abb32645bbec33899ab2.scope.
Nov 25 16:22:48 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:22:48 compute-0 podman[264453]: 2025-11-25 16:22:48.091098757 +0000 UTC m=+0.104960991 container init 2b7e543fc8d6fa96756f95312735daab1ee2f30b1260abb32645bbec33899ab2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_nash, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:22:48 compute-0 podman[264453]: 2025-11-25 16:22:48.100137171 +0000 UTC m=+0.113999385 container start 2b7e543fc8d6fa96756f95312735daab1ee2f30b1260abb32645bbec33899ab2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_nash, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 16:22:48 compute-0 podman[264453]: 2025-11-25 16:22:48.009215502 +0000 UTC m=+0.023077736 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:22:48 compute-0 podman[264453]: 2025-11-25 16:22:48.102824404 +0000 UTC m=+0.116686618 container attach 2b7e543fc8d6fa96756f95312735daab1ee2f30b1260abb32645bbec33899ab2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 16:22:48 compute-0 frosty_nash[264469]: 167 167
Nov 25 16:22:48 compute-0 systemd[1]: libpod-2b7e543fc8d6fa96756f95312735daab1ee2f30b1260abb32645bbec33899ab2.scope: Deactivated successfully.
Nov 25 16:22:48 compute-0 podman[264453]: 2025-11-25 16:22:48.108815955 +0000 UTC m=+0.122678169 container died 2b7e543fc8d6fa96756f95312735daab1ee2f30b1260abb32645bbec33899ab2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_nash, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:22:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-654d2e50ba428c8447f1e9879b79299950a43263e7891cf77f3e32a8464a5643-merged.mount: Deactivated successfully.
Nov 25 16:22:48 compute-0 podman[264453]: 2025-11-25 16:22:48.147678627 +0000 UTC m=+0.161540881 container remove 2b7e543fc8d6fa96756f95312735daab1ee2f30b1260abb32645bbec33899ab2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_nash, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:22:48 compute-0 systemd[1]: libpod-conmon-2b7e543fc8d6fa96756f95312735daab1ee2f30b1260abb32645bbec33899ab2.scope: Deactivated successfully.
Nov 25 16:22:48 compute-0 podman[264493]: 2025-11-25 16:22:48.320135032 +0000 UTC m=+0.043986291 container create 9e3094993b554076d394ad5871dba11a044a9c8d3f2676611e498ddf00fee43a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_babbage, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:22:48 compute-0 systemd[1]: Started libpod-conmon-9e3094993b554076d394ad5871dba11a044a9c8d3f2676611e498ddf00fee43a.scope.
Nov 25 16:22:48 compute-0 podman[264493]: 2025-11-25 16:22:48.304823807 +0000 UTC m=+0.028675086 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:22:48 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:22:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ba32637828d8a677475e9015f50df89dea23b5624857540b510de463d95766a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:22:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ba32637828d8a677475e9015f50df89dea23b5624857540b510de463d95766a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:22:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ba32637828d8a677475e9015f50df89dea23b5624857540b510de463d95766a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:22:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ba32637828d8a677475e9015f50df89dea23b5624857540b510de463d95766a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:22:48 compute-0 podman[264493]: 2025-11-25 16:22:48.431216815 +0000 UTC m=+0.155068094 container init 9e3094993b554076d394ad5871dba11a044a9c8d3f2676611e498ddf00fee43a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_babbage, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 16:22:48 compute-0 podman[264493]: 2025-11-25 16:22:48.44134255 +0000 UTC m=+0.165193819 container start 9e3094993b554076d394ad5871dba11a044a9c8d3f2676611e498ddf00fee43a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_babbage, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:22:48 compute-0 podman[264493]: 2025-11-25 16:22:48.445364508 +0000 UTC m=+0.169215797 container attach 9e3094993b554076d394ad5871dba11a044a9c8d3f2676611e498ddf00fee43a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 16:22:48 compute-0 nova_compute[254092]: 2025-11-25 16:22:48.518 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:22:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1023: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:49 compute-0 peaceful_babbage[264510]: {
Nov 25 16:22:49 compute-0 peaceful_babbage[264510]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 16:22:49 compute-0 peaceful_babbage[264510]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:22:49 compute-0 peaceful_babbage[264510]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 16:22:49 compute-0 peaceful_babbage[264510]:         "osd_id": 1,
Nov 25 16:22:49 compute-0 peaceful_babbage[264510]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:22:49 compute-0 peaceful_babbage[264510]:         "type": "bluestore"
Nov 25 16:22:49 compute-0 peaceful_babbage[264510]:     },
Nov 25 16:22:49 compute-0 peaceful_babbage[264510]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 16:22:49 compute-0 peaceful_babbage[264510]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:22:49 compute-0 peaceful_babbage[264510]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 16:22:49 compute-0 peaceful_babbage[264510]:         "osd_id": 2,
Nov 25 16:22:49 compute-0 peaceful_babbage[264510]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:22:49 compute-0 peaceful_babbage[264510]:         "type": "bluestore"
Nov 25 16:22:49 compute-0 peaceful_babbage[264510]:     },
Nov 25 16:22:49 compute-0 peaceful_babbage[264510]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 16:22:49 compute-0 peaceful_babbage[264510]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:22:49 compute-0 peaceful_babbage[264510]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 16:22:49 compute-0 peaceful_babbage[264510]:         "osd_id": 0,
Nov 25 16:22:49 compute-0 peaceful_babbage[264510]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:22:49 compute-0 peaceful_babbage[264510]:         "type": "bluestore"
Nov 25 16:22:49 compute-0 peaceful_babbage[264510]:     }
Nov 25 16:22:49 compute-0 peaceful_babbage[264510]: }
Nov 25 16:22:49 compute-0 systemd[1]: libpod-9e3094993b554076d394ad5871dba11a044a9c8d3f2676611e498ddf00fee43a.scope: Deactivated successfully.
Nov 25 16:22:49 compute-0 conmon[264510]: conmon 9e3094993b554076d394 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9e3094993b554076d394ad5871dba11a044a9c8d3f2676611e498ddf00fee43a.scope/container/memory.events
Nov 25 16:22:49 compute-0 podman[264493]: 2025-11-25 16:22:49.367608623 +0000 UTC m=+1.091459892 container died 9e3094993b554076d394ad5871dba11a044a9c8d3f2676611e498ddf00fee43a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:22:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ba32637828d8a677475e9015f50df89dea23b5624857540b510de463d95766a-merged.mount: Deactivated successfully.
Nov 25 16:22:49 compute-0 podman[264493]: 2025-11-25 16:22:49.429265021 +0000 UTC m=+1.153116300 container remove 9e3094993b554076d394ad5871dba11a044a9c8d3f2676611e498ddf00fee43a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 16:22:49 compute-0 systemd[1]: libpod-conmon-9e3094993b554076d394ad5871dba11a044a9c8d3f2676611e498ddf00fee43a.scope: Deactivated successfully.
Nov 25 16:22:49 compute-0 sudo[264388]: pam_unix(sudo:session): session closed for user root
Nov 25 16:22:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:22:49 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:22:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:22:49 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:22:49 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 99a15ce4-0995-4fa5-8ff1-e03efa2333e6 does not exist
Nov 25 16:22:49 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 365e2596-a0e6-41ed-b63d-e9846e2ab0a4 does not exist
Nov 25 16:22:49 compute-0 nova_compute[254092]: 2025-11-25 16:22:49.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:22:49 compute-0 nova_compute[254092]: 2025-11-25 16:22:49.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:22:49 compute-0 nova_compute[254092]: 2025-11-25 16:22:49.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 16:22:49 compute-0 sudo[264556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:22:49 compute-0 sudo[264556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:22:49 compute-0 sudo[264556]: pam_unix(sudo:session): session closed for user root
Nov 25 16:22:49 compute-0 sudo[264581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 16:22:49 compute-0 sudo[264581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:22:49 compute-0 sudo[264581]: pam_unix(sudo:session): session closed for user root
Nov 25 16:22:49 compute-0 ceph-mon[74985]: pgmap v1023: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:49 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:22:49 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:22:50 compute-0 nova_compute[254092]: 2025-11-25 16:22:50.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:22:50 compute-0 nova_compute[254092]: 2025-11-25 16:22:50.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 16:22:50 compute-0 nova_compute[254092]: 2025-11-25 16:22:50.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 16:22:50 compute-0 nova_compute[254092]: 2025-11-25 16:22:50.535 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 16:22:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1024: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:22:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:22:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:22:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:22:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:22:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:22:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:22:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:22:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:22:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:22:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.1795353910268934e-07 of space, bias 1.0, pg target 9.53860617308068e-05 quantized to 32 (current 32)
Nov 25 16:22:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:22:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 16:22:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:22:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:22:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:22:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 16:22:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:22:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 16:22:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:22:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:22:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:22:50 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 16:22:51 compute-0 nova_compute[254092]: 2025-11-25 16:22:51.529 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:22:51 compute-0 ceph-mon[74985]: pgmap v1024: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:52 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:22:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1025: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:53 compute-0 nova_compute[254092]: 2025-11-25 16:22:53.490 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:22:53 compute-0 ceph-mon[74985]: pgmap v1025: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1026: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 16:22:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2752993815' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:22:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 16:22:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2752993815' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:22:55 compute-0 ceph-mon[74985]: pgmap v1026: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/2752993815' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:22:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/2752993815' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:22:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1027: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:22:57 compute-0 ceph-mon[74985]: pgmap v1027: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1028: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:22:59 compute-0 ceph-mon[74985]: pgmap v1028: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:23:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1029: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:23:01 compute-0 ceph-mon[74985]: pgmap v1029: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:23:02 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:23:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1030: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:23:02 compute-0 ceph-mon[74985]: pgmap v1030: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:23:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1031: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:23:05 compute-0 ceph-mon[74985]: pgmap v1031: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:23:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1032: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:23:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:23:07 compute-0 ceph-mon[74985]: pgmap v1032: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:23:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1033: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:23:09 compute-0 ceph-mon[74985]: pgmap v1033: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:23:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:23:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:23:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:23:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:23:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:23:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:23:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1034: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:23:11 compute-0 podman[264607]: 2025-11-25 16:23:11.64949398 +0000 UTC m=+0.060769345 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:23:11 compute-0 podman[264606]: 2025-11-25 16:23:11.680373955 +0000 UTC m=+0.091554757 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 16:23:11 compute-0 podman[264608]: 2025-11-25 16:23:11.680447227 +0000 UTC m=+0.087198330 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, config_id=ovn_controller)
Nov 25 16:23:11 compute-0 ceph-mon[74985]: pgmap v1034: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:23:12 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:23:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1035: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:23:13 compute-0 ceph-mon[74985]: pgmap v1035: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:23:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:23:13.591 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:23:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:23:13.592 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:23:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:23:13.592 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:23:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1036: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:23:14 compute-0 ceph-mon[74985]: pgmap v1036: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:23:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1037: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:23:17 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:23:17 compute-0 ceph-mon[74985]: pgmap v1037: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:23:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1038: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:23:19 compute-0 ceph-mon[74985]: pgmap v1038: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:23:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1039: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:23:21 compute-0 ceph-mon[74985]: pgmap v1039: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:23:22 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:23:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1040: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:23:22 compute-0 ceph-mon[74985]: pgmap v1040: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:23:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1041: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:23:25 compute-0 ceph-mon[74985]: pgmap v1041: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:23:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1042: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:23:27 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:23:27 compute-0 ceph-mon[74985]: pgmap v1042: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:23:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1043: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:23:28 compute-0 ceph-mon[74985]: pgmap v1043: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:23:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1044: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:23:31 compute-0 ceph-mon[74985]: pgmap v1044: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:23:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:23:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1045: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:23:33 compute-0 ceph-mon[74985]: pgmap v1045: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:23:34 compute-0 nova_compute[254092]: 2025-11-25 16:23:34.480 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:23:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1046: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:23:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Nov 25 16:23:35 compute-0 ceph-mon[74985]: pgmap v1046: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:23:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Nov 25 16:23:35 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Nov 25 16:23:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1048: 321 pgs: 321 active+clean; 21 MiB data, 161 MiB used, 60 GiB / 60 GiB avail; 5.8 KiB/s rd, 2.0 MiB/s wr, 7 op/s
Nov 25 16:23:36 compute-0 ceph-mon[74985]: osdmap e129: 3 total, 3 up, 3 in
Nov 25 16:23:37 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:23:37 compute-0 ceph-mon[74985]: pgmap v1048: 321 pgs: 321 active+clean; 21 MiB data, 161 MiB used, 60 GiB / 60 GiB avail; 5.8 KiB/s rd, 2.0 MiB/s wr, 7 op/s
Nov 25 16:23:37 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Nov 25 16:23:37 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Nov 25 16:23:37 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Nov 25 16:23:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1050: 321 pgs: 321 active+clean; 21 MiB data, 161 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 2.6 MiB/s wr, 22 op/s
Nov 25 16:23:38 compute-0 ceph-mon[74985]: osdmap e130: 3 total, 3 up, 3 in
Nov 25 16:23:38 compute-0 ceph-mon[74985]: pgmap v1050: 321 pgs: 321 active+clean; 21 MiB data, 161 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 2.6 MiB/s wr, 22 op/s
Nov 25 16:23:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:23:40
Nov 25 16:23:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:23:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:23:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'default.rgw.log', 'volumes', 'images', 'default.rgw.meta', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta', '.mgr', '.rgw.root']
Nov 25 16:23:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:23:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:23:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:23:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:23:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:23:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:23:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:23:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:23:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:23:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:23:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:23:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:23:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:23:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:23:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:23:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:23:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:23:40 compute-0 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 16:23:40 compute-0 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.2 total, 600.0 interval
                                           Cumulative writes: 5920 writes, 24K keys, 5920 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 5920 writes, 990 syncs, 5.98 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 376 writes, 869 keys, 376 commit groups, 1.0 writes per commit group, ingest: 0.50 MB, 0.00 MB/s
                                           Interval WAL: 376 writes, 162 syncs, 2.32 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 16:23:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1051: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 5.1 MiB/s wr, 46 op/s
Nov 25 16:23:40 compute-0 ceph-mon[74985]: pgmap v1051: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 5.1 MiB/s wr, 46 op/s
Nov 25 16:23:42 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:23:42 compute-0 podman[264670]: 2025-11-25 16:23:42.637613817 +0000 UTC m=+0.054243888 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 16:23:42 compute-0 podman[264671]: 2025-11-25 16:23:42.677140243 +0000 UTC m=+0.090574217 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 16:23:42 compute-0 podman[264669]: 2025-11-25 16:23:42.691899015 +0000 UTC m=+0.112102413 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:23:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1052: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Nov 25 16:23:43 compute-0 ceph-mon[74985]: pgmap v1052: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Nov 25 16:23:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1053: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.3 MiB/s wr, 33 op/s
Nov 25 16:23:45 compute-0 ceph-mon[74985]: pgmap v1053: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.3 MiB/s wr, 33 op/s
Nov 25 16:23:46 compute-0 nova_compute[254092]: 2025-11-25 16:23:46.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:23:46 compute-0 nova_compute[254092]: 2025-11-25 16:23:46.524 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:23:46 compute-0 nova_compute[254092]: 2025-11-25 16:23:46.524 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:23:46 compute-0 nova_compute[254092]: 2025-11-25 16:23:46.524 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:23:46 compute-0 nova_compute[254092]: 2025-11-25 16:23:46.525 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 16:23:46 compute-0 nova_compute[254092]: 2025-11-25 16:23:46.525 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:23:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1054: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.0 MiB/s wr, 30 op/s
Nov 25 16:23:46 compute-0 ceph-mon[74985]: pgmap v1054: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.0 MiB/s wr, 30 op/s
Nov 25 16:23:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:23:46 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4289942496' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:23:46 compute-0 nova_compute[254092]: 2025-11-25 16:23:46.939 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:23:47 compute-0 nova_compute[254092]: 2025-11-25 16:23:47.060 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:23:47 compute-0 nova_compute[254092]: 2025-11-25 16:23:47.061 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5170MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 16:23:47 compute-0 nova_compute[254092]: 2025-11-25 16:23:47.061 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:23:47 compute-0 nova_compute[254092]: 2025-11-25 16:23:47.061 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:23:47 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:23:47 compute-0 nova_compute[254092]: 2025-11-25 16:23:47.242 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 16:23:47 compute-0 nova_compute[254092]: 2025-11-25 16:23:47.243 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 16:23:47 compute-0 nova_compute[254092]: 2025-11-25 16:23:47.311 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing inventories for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 16:23:47 compute-0 nova_compute[254092]: 2025-11-25 16:23:47.375 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating ProviderTree inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 16:23:47 compute-0 nova_compute[254092]: 2025-11-25 16:23:47.375 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 16:23:47 compute-0 nova_compute[254092]: 2025-11-25 16:23:47.391 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing aggregate associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 16:23:47 compute-0 nova_compute[254092]: 2025-11-25 16:23:47.413 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing trait associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 16:23:47 compute-0 nova_compute[254092]: 2025-11-25 16:23:47.428 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:23:47 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:23:47 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1417170017' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:23:47 compute-0 nova_compute[254092]: 2025-11-25 16:23:47.837 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:23:47 compute-0 nova_compute[254092]: 2025-11-25 16:23:47.843 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:23:47 compute-0 nova_compute[254092]: 2025-11-25 16:23:47.859 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:23:47 compute-0 nova_compute[254092]: 2025-11-25 16:23:47.860 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 16:23:47 compute-0 nova_compute[254092]: 2025-11-25 16:23:47.860 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:23:47 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4289942496' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:23:47 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1417170017' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:23:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1055: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 1.9 MiB/s wr, 18 op/s
Nov 25 16:23:49 compute-0 ceph-mon[74985]: pgmap v1055: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 1.9 MiB/s wr, 18 op/s
Nov 25 16:23:49 compute-0 sudo[264773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:23:49 compute-0 sudo[264773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:23:49 compute-0 sudo[264773]: pam_unix(sudo:session): session closed for user root
Nov 25 16:23:49 compute-0 sudo[264798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:23:49 compute-0 sudo[264798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:23:49 compute-0 sudo[264798]: pam_unix(sudo:session): session closed for user root
Nov 25 16:23:49 compute-0 sudo[264823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:23:49 compute-0 sudo[264823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:23:49 compute-0 sudo[264823]: pam_unix(sudo:session): session closed for user root
Nov 25 16:23:49 compute-0 sudo[264848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 25 16:23:49 compute-0 sudo[264848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:23:49 compute-0 nova_compute[254092]: 2025-11-25 16:23:49.860 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:23:49 compute-0 nova_compute[254092]: 2025-11-25 16:23:49.860 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:23:49 compute-0 nova_compute[254092]: 2025-11-25 16:23:49.860 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:23:49 compute-0 nova_compute[254092]: 2025-11-25 16:23:49.860 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:23:49 compute-0 nova_compute[254092]: 2025-11-25 16:23:49.860 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:23:50 compute-0 sudo[264848]: pam_unix(sudo:session): session closed for user root
Nov 25 16:23:50 compute-0 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 16:23:50 compute-0 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1801.2 total, 600.0 interval
                                           Cumulative writes: 7071 writes, 28K keys, 7071 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7071 writes, 1329 syncs, 5.32 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 459 writes, 1272 keys, 459 commit groups, 1.0 writes per commit group, ingest: 0.61 MB, 0.00 MB/s
                                           Interval WAL: 459 writes, 186 syncs, 2.47 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 16:23:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:23:50 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:23:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:23:50 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:23:50 compute-0 sudo[264892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:23:50 compute-0 sudo[264892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:23:50 compute-0 sudo[264892]: pam_unix(sudo:session): session closed for user root
Nov 25 16:23:50 compute-0 sudo[264917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:23:50 compute-0 sudo[264917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:23:50 compute-0 sudo[264917]: pam_unix(sudo:session): session closed for user root
Nov 25 16:23:50 compute-0 sudo[264942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:23:50 compute-0 sudo[264942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:23:50 compute-0 sudo[264942]: pam_unix(sudo:session): session closed for user root
Nov 25 16:23:50 compute-0 sudo[264967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 16:23:50 compute-0 sudo[264967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:23:50 compute-0 nova_compute[254092]: 2025-11-25 16:23:50.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:23:50 compute-0 nova_compute[254092]: 2025-11-25 16:23:50.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 16:23:50 compute-0 sudo[264967]: pam_unix(sudo:session): session closed for user root
Nov 25 16:23:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1056: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.7 MiB/s wr, 17 op/s
Nov 25 16:23:50 compute-0 sudo[265021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:23:50 compute-0 sudo[265021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:23:50 compute-0 sudo[265021]: pam_unix(sudo:session): session closed for user root
Nov 25 16:23:50 compute-0 sudo[265046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:23:50 compute-0 sudo[265046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:23:50 compute-0 sudo[265046]: pam_unix(sudo:session): session closed for user root
Nov 25 16:23:50 compute-0 sudo[265071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:23:50 compute-0 sudo[265071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:23:50 compute-0 sudo[265071]: pam_unix(sudo:session): session closed for user root
Nov 25 16:23:50 compute-0 sudo[265096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Nov 25 16:23:50 compute-0 sudo[265096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:23:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:23:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:23:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:23:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:23:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:23:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:23:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:23:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:23:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:23:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:23:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 25 16:23:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:23:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 16:23:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:23:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:23:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:23:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 16:23:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:23:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 16:23:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:23:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:23:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:23:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 16:23:51 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:23:51 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:23:51 compute-0 ceph-mon[74985]: pgmap v1056: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.7 MiB/s wr, 17 op/s
Nov 25 16:23:51 compute-0 sudo[265096]: pam_unix(sudo:session): session closed for user root
Nov 25 16:23:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:23:51 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:23:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:23:51 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:23:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:23:51 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:23:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 16:23:51 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:23:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 16:23:51 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:23:51 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 8fe984d4-d710-4024-850c-250280033c1a does not exist
Nov 25 16:23:51 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 86d7d757-0822-4c60-b80c-b2565f9da7f1 does not exist
Nov 25 16:23:51 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 9190ca86-5d2c-4c55-aa11-f0cb26456ac0 does not exist
Nov 25 16:23:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 16:23:51 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:23:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 16:23:51 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:23:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:23:51 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:23:51 compute-0 sudo[265139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:23:51 compute-0 sudo[265139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:23:51 compute-0 sudo[265139]: pam_unix(sudo:session): session closed for user root
Nov 25 16:23:51 compute-0 sudo[265164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:23:51 compute-0 sudo[265164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:23:51 compute-0 sudo[265164]: pam_unix(sudo:session): session closed for user root
Nov 25 16:23:51 compute-0 sudo[265189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:23:51 compute-0 sudo[265189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:23:51 compute-0 sudo[265189]: pam_unix(sudo:session): session closed for user root
Nov 25 16:23:51 compute-0 sudo[265214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 16:23:51 compute-0 sudo[265214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:23:51 compute-0 nova_compute[254092]: 2025-11-25 16:23:51.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:23:51 compute-0 nova_compute[254092]: 2025-11-25 16:23:51.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 16:23:51 compute-0 nova_compute[254092]: 2025-11-25 16:23:51.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 16:23:51 compute-0 nova_compute[254092]: 2025-11-25 16:23:51.518 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 16:23:51 compute-0 podman[265279]: 2025-11-25 16:23:51.704664302 +0000 UTC m=+0.038585891 container create ba502668c30b01f70fde0bc21051756ea880e8dee6bb382a7812124356eada94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 16:23:51 compute-0 systemd[1]: Started libpod-conmon-ba502668c30b01f70fde0bc21051756ea880e8dee6bb382a7812124356eada94.scope.
Nov 25 16:23:51 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:23:51 compute-0 podman[265279]: 2025-11-25 16:23:51.77220116 +0000 UTC m=+0.106122759 container init ba502668c30b01f70fde0bc21051756ea880e8dee6bb382a7812124356eada94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_neumann, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:23:51 compute-0 podman[265279]: 2025-11-25 16:23:51.77806735 +0000 UTC m=+0.111988939 container start ba502668c30b01f70fde0bc21051756ea880e8dee6bb382a7812124356eada94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_neumann, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 16:23:51 compute-0 podman[265279]: 2025-11-25 16:23:51.781076411 +0000 UTC m=+0.114998010 container attach ba502668c30b01f70fde0bc21051756ea880e8dee6bb382a7812124356eada94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 16:23:51 compute-0 funny_neumann[265295]: 167 167
Nov 25 16:23:51 compute-0 systemd[1]: libpod-ba502668c30b01f70fde0bc21051756ea880e8dee6bb382a7812124356eada94.scope: Deactivated successfully.
Nov 25 16:23:51 compute-0 podman[265279]: 2025-11-25 16:23:51.783160188 +0000 UTC m=+0.117081777 container died ba502668c30b01f70fde0bc21051756ea880e8dee6bb382a7812124356eada94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_neumann, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:23:51 compute-0 podman[265279]: 2025-11-25 16:23:51.690338352 +0000 UTC m=+0.024259971 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:23:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-707d5b6f2d5d46115eb5bc15d4b667507bb01f26a5339a99ff78ce93a5404597-merged.mount: Deactivated successfully.
Nov 25 16:23:51 compute-0 podman[265279]: 2025-11-25 16:23:51.822013936 +0000 UTC m=+0.155935525 container remove ba502668c30b01f70fde0bc21051756ea880e8dee6bb382a7812124356eada94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Nov 25 16:23:51 compute-0 systemd[1]: libpod-conmon-ba502668c30b01f70fde0bc21051756ea880e8dee6bb382a7812124356eada94.scope: Deactivated successfully.
Nov 25 16:23:51 compute-0 podman[265319]: 2025-11-25 16:23:51.977209509 +0000 UTC m=+0.040295197 container create 2c4832077fe2e14c62d23145ed544111180b03b2ab0721e5733ed69d6fd1ffd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_payne, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:23:52 compute-0 systemd[1]: Started libpod-conmon-2c4832077fe2e14c62d23145ed544111180b03b2ab0721e5733ed69d6fd1ffd1.scope.
Nov 25 16:23:52 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:23:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/948ecabf3467c20c5b41f8f921222a90918e092de46f6d03f1a724d88046052d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:23:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/948ecabf3467c20c5b41f8f921222a90918e092de46f6d03f1a724d88046052d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:23:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/948ecabf3467c20c5b41f8f921222a90918e092de46f6d03f1a724d88046052d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:23:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/948ecabf3467c20c5b41f8f921222a90918e092de46f6d03f1a724d88046052d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:23:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/948ecabf3467c20c5b41f8f921222a90918e092de46f6d03f1a724d88046052d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 16:23:52 compute-0 podman[265319]: 2025-11-25 16:23:51.958948133 +0000 UTC m=+0.022033851 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:23:52 compute-0 podman[265319]: 2025-11-25 16:23:52.073434537 +0000 UTC m=+0.136520245 container init 2c4832077fe2e14c62d23145ed544111180b03b2ab0721e5733ed69d6fd1ffd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_payne, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 16:23:52 compute-0 podman[265319]: 2025-11-25 16:23:52.079526704 +0000 UTC m=+0.142612412 container start 2c4832077fe2e14c62d23145ed544111180b03b2ab0721e5733ed69d6fd1ffd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_payne, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:23:52 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:23:52 compute-0 podman[265319]: 2025-11-25 16:23:52.142752714 +0000 UTC m=+0.205838432 container attach 2c4832077fe2e14c62d23145ed544111180b03b2ab0721e5733ed69d6fd1ffd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_payne, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:23:52 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:23:52 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:23:52 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:23:52 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:23:52 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:23:52 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:23:52 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:23:52 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:23:52 compute-0 nova_compute[254092]: 2025-11-25 16:23:52.512 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:23:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1057: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 170 B/s wr, 0 op/s
Nov 25 16:23:53 compute-0 condescending_payne[265336]: --> passed data devices: 0 physical, 3 LVM
Nov 25 16:23:53 compute-0 condescending_payne[265336]: --> relative data size: 1.0
Nov 25 16:23:53 compute-0 condescending_payne[265336]: --> All data devices are unavailable
Nov 25 16:23:53 compute-0 systemd[1]: libpod-2c4832077fe2e14c62d23145ed544111180b03b2ab0721e5733ed69d6fd1ffd1.scope: Deactivated successfully.
Nov 25 16:23:53 compute-0 podman[265365]: 2025-11-25 16:23:53.112399462 +0000 UTC m=+0.023104099 container died 2c4832077fe2e14c62d23145ed544111180b03b2ab0721e5733ed69d6fd1ffd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 16:23:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-948ecabf3467c20c5b41f8f921222a90918e092de46f6d03f1a724d88046052d-merged.mount: Deactivated successfully.
Nov 25 16:23:53 compute-0 podman[265365]: 2025-11-25 16:23:53.162759503 +0000 UTC m=+0.073464120 container remove 2c4832077fe2e14c62d23145ed544111180b03b2ab0721e5733ed69d6fd1ffd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_payne, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:23:53 compute-0 systemd[1]: libpod-conmon-2c4832077fe2e14c62d23145ed544111180b03b2ab0721e5733ed69d6fd1ffd1.scope: Deactivated successfully.
Nov 25 16:23:53 compute-0 ceph-mon[74985]: pgmap v1057: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 170 B/s wr, 0 op/s
Nov 25 16:23:53 compute-0 sudo[265214]: pam_unix(sudo:session): session closed for user root
Nov 25 16:23:53 compute-0 sudo[265379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:23:53 compute-0 sudo[265379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:23:53 compute-0 sudo[265379]: pam_unix(sudo:session): session closed for user root
Nov 25 16:23:53 compute-0 sudo[265404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:23:53 compute-0 sudo[265404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:23:53 compute-0 sudo[265404]: pam_unix(sudo:session): session closed for user root
Nov 25 16:23:53 compute-0 sudo[265429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:23:53 compute-0 sudo[265429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:23:53 compute-0 sudo[265429]: pam_unix(sudo:session): session closed for user root
Nov 25 16:23:53 compute-0 sudo[265454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 16:23:53 compute-0 sudo[265454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:23:53 compute-0 podman[265518]: 2025-11-25 16:23:53.7990852 +0000 UTC m=+0.045747816 container create 7c89b2ae454fde6b428eee538feabdb41e34828b1619a5b0bfcb4b44abeabd90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 16:23:53 compute-0 systemd[1]: Started libpod-conmon-7c89b2ae454fde6b428eee538feabdb41e34828b1619a5b0bfcb4b44abeabd90.scope.
Nov 25 16:23:53 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:23:53 compute-0 podman[265518]: 2025-11-25 16:23:53.86596213 +0000 UTC m=+0.112624776 container init 7c89b2ae454fde6b428eee538feabdb41e34828b1619a5b0bfcb4b44abeabd90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:23:53 compute-0 podman[265518]: 2025-11-25 16:23:53.778233823 +0000 UTC m=+0.024896499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:23:53 compute-0 podman[265518]: 2025-11-25 16:23:53.874064661 +0000 UTC m=+0.120727277 container start 7c89b2ae454fde6b428eee538feabdb41e34828b1619a5b0bfcb4b44abeabd90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_diffie, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 16:23:53 compute-0 modest_diffie[265534]: 167 167
Nov 25 16:23:53 compute-0 systemd[1]: libpod-7c89b2ae454fde6b428eee538feabdb41e34828b1619a5b0bfcb4b44abeabd90.scope: Deactivated successfully.
Nov 25 16:23:53 compute-0 podman[265518]: 2025-11-25 16:23:53.888685328 +0000 UTC m=+0.135347944 container attach 7c89b2ae454fde6b428eee538feabdb41e34828b1619a5b0bfcb4b44abeabd90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 16:23:53 compute-0 podman[265518]: 2025-11-25 16:23:53.889182643 +0000 UTC m=+0.135845259 container died 7c89b2ae454fde6b428eee538feabdb41e34828b1619a5b0bfcb4b44abeabd90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_diffie, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Nov 25 16:23:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-ded201f0456a0665df37a41a439310ab6948e2900fac73bd341dd4d45c225d5d-merged.mount: Deactivated successfully.
Nov 25 16:23:53 compute-0 podman[265518]: 2025-11-25 16:23:53.980145458 +0000 UTC m=+0.226808104 container remove 7c89b2ae454fde6b428eee538feabdb41e34828b1619a5b0bfcb4b44abeabd90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_diffie, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 16:23:53 compute-0 systemd[1]: libpod-conmon-7c89b2ae454fde6b428eee538feabdb41e34828b1619a5b0bfcb4b44abeabd90.scope: Deactivated successfully.
Nov 25 16:23:54 compute-0 podman[265559]: 2025-11-25 16:23:54.176552443 +0000 UTC m=+0.043224337 container create f4db5b643e52e32987eba82e64399bc11c1d02cbd32a570475ac801d132083af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_pasteur, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 16:23:54 compute-0 systemd[1]: Started libpod-conmon-f4db5b643e52e32987eba82e64399bc11c1d02cbd32a570475ac801d132083af.scope.
Nov 25 16:23:54 compute-0 podman[265559]: 2025-11-25 16:23:54.161210696 +0000 UTC m=+0.027882610 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:23:54 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:23:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fb21c233a3714087fc5e07c0e0f55023ab399006edcb16938e0ea942bce51b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:23:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fb21c233a3714087fc5e07c0e0f55023ab399006edcb16938e0ea942bce51b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:23:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fb21c233a3714087fc5e07c0e0f55023ab399006edcb16938e0ea942bce51b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:23:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fb21c233a3714087fc5e07c0e0f55023ab399006edcb16938e0ea942bce51b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:23:54 compute-0 podman[265559]: 2025-11-25 16:23:54.282552988 +0000 UTC m=+0.149224902 container init f4db5b643e52e32987eba82e64399bc11c1d02cbd32a570475ac801d132083af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:23:54 compute-0 podman[265559]: 2025-11-25 16:23:54.294956205 +0000 UTC m=+0.161628099 container start f4db5b643e52e32987eba82e64399bc11c1d02cbd32a570475ac801d132083af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_pasteur, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 16:23:54 compute-0 podman[265559]: 2025-11-25 16:23:54.303295372 +0000 UTC m=+0.169967286 container attach f4db5b643e52e32987eba82e64399bc11c1d02cbd32a570475ac801d132083af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_pasteur, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:23:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1058: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]: {
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:     "0": [
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:         {
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:             "devices": [
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:                 "/dev/loop3"
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:             ],
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:             "lv_name": "ceph_lv0",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:             "lv_size": "21470642176",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:             "name": "ceph_lv0",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:             "tags": {
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:                 "ceph.cluster_name": "ceph",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:                 "ceph.crush_device_class": "",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:                 "ceph.encrypted": "0",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:                 "ceph.osd_id": "0",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:                 "ceph.type": "block",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:                 "ceph.vdo": "0"
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:             },
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:             "type": "block",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:             "vg_name": "ceph_vg0"
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:         }
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:     ],
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:     "1": [
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:         {
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:             "devices": [
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:                 "/dev/loop4"
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:             ],
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:             "lv_name": "ceph_lv1",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:             "lv_size": "21470642176",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:             "name": "ceph_lv1",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:             "tags": {
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:                 "ceph.cluster_name": "ceph",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:                 "ceph.crush_device_class": "",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:                 "ceph.encrypted": "0",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:                 "ceph.osd_id": "1",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:                 "ceph.type": "block",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:                 "ceph.vdo": "0"
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:             },
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:             "type": "block",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:             "vg_name": "ceph_vg1"
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:         }
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:     ],
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:     "2": [
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:         {
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:             "devices": [
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:                 "/dev/loop5"
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:             ],
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:             "lv_name": "ceph_lv2",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:             "lv_size": "21470642176",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:             "name": "ceph_lv2",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:             "tags": {
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:                 "ceph.cluster_name": "ceph",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:                 "ceph.crush_device_class": "",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:                 "ceph.encrypted": "0",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:                 "ceph.osd_id": "2",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:                 "ceph.type": "block",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:                 "ceph.vdo": "0"
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:             },
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:             "type": "block",
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:             "vg_name": "ceph_vg2"
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:         }
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]:     ]
Nov 25 16:23:54 compute-0 gifted_pasteur[265575]: }
Nov 25 16:23:55 compute-0 systemd[1]: libpod-f4db5b643e52e32987eba82e64399bc11c1d02cbd32a570475ac801d132083af.scope: Deactivated successfully.
Nov 25 16:23:55 compute-0 podman[265559]: 2025-11-25 16:23:55.019066952 +0000 UTC m=+0.885738846 container died f4db5b643e52e32987eba82e64399bc11c1d02cbd32a570475ac801d132083af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:23:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-9fb21c233a3714087fc5e07c0e0f55023ab399006edcb16938e0ea942bce51b0-merged.mount: Deactivated successfully.
Nov 25 16:23:55 compute-0 podman[265559]: 2025-11-25 16:23:55.093287051 +0000 UTC m=+0.959958945 container remove f4db5b643e52e32987eba82e64399bc11c1d02cbd32a570475ac801d132083af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 16:23:55 compute-0 systemd[1]: libpod-conmon-f4db5b643e52e32987eba82e64399bc11c1d02cbd32a570475ac801d132083af.scope: Deactivated successfully.
Nov 25 16:23:55 compute-0 sudo[265454]: pam_unix(sudo:session): session closed for user root
Nov 25 16:23:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 16:23:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4284690352' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:23:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 16:23:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4284690352' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:23:55 compute-0 sudo[265598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:23:55 compute-0 sudo[265598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:23:55 compute-0 sudo[265598]: pam_unix(sudo:session): session closed for user root
Nov 25 16:23:55 compute-0 sudo[265623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:23:55 compute-0 sudo[265623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:23:55 compute-0 sudo[265623]: pam_unix(sudo:session): session closed for user root
Nov 25 16:23:55 compute-0 sudo[265648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:23:55 compute-0 sudo[265648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:23:55 compute-0 sudo[265648]: pam_unix(sudo:session): session closed for user root
Nov 25 16:23:55 compute-0 sudo[265673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 16:23:55 compute-0 sudo[265673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:23:55 compute-0 podman[265737]: 2025-11-25 16:23:55.7586944 +0000 UTC m=+0.022677058 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:23:55 compute-0 podman[265737]: 2025-11-25 16:23:55.933861148 +0000 UTC m=+0.197843826 container create 1f18fc416e480548640ef5713d9c60a1a7d8241380d9c516a051352d2a76a178 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:23:55 compute-0 ceph-mon[74985]: pgmap v1058: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:23:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/4284690352' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:23:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/4284690352' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:23:56 compute-0 systemd[1]: Started libpod-conmon-1f18fc416e480548640ef5713d9c60a1a7d8241380d9c516a051352d2a76a178.scope.
Nov 25 16:23:56 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:23:56 compute-0 podman[265737]: 2025-11-25 16:23:56.145116677 +0000 UTC m=+0.409099365 container init 1f18fc416e480548640ef5713d9c60a1a7d8241380d9c516a051352d2a76a178 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_nobel, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:23:56 compute-0 podman[265737]: 2025-11-25 16:23:56.162785367 +0000 UTC m=+0.426768015 container start 1f18fc416e480548640ef5713d9c60a1a7d8241380d9c516a051352d2a76a178 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_nobel, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 16:23:56 compute-0 crazy_nobel[265753]: 167 167
Nov 25 16:23:56 compute-0 systemd[1]: libpod-1f18fc416e480548640ef5713d9c60a1a7d8241380d9c516a051352d2a76a178.scope: Deactivated successfully.
Nov 25 16:23:56 compute-0 conmon[265753]: conmon 1f18fc416e480548640e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1f18fc416e480548640ef5713d9c60a1a7d8241380d9c516a051352d2a76a178.scope/container/memory.events
Nov 25 16:23:56 compute-0 podman[265737]: 2025-11-25 16:23:56.210447944 +0000 UTC m=+0.474430632 container attach 1f18fc416e480548640ef5713d9c60a1a7d8241380d9c516a051352d2a76a178 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_nobel, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:23:56 compute-0 podman[265737]: 2025-11-25 16:23:56.215001829 +0000 UTC m=+0.478984477 container died 1f18fc416e480548640ef5713d9c60a1a7d8241380d9c516a051352d2a76a178 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_nobel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:23:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-78d34f77d87ecb5a90f171ca3b6988f234925c5781bf76649025e8295f17f41c-merged.mount: Deactivated successfully.
Nov 25 16:23:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1059: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:23:56 compute-0 podman[265737]: 2025-11-25 16:23:56.760745961 +0000 UTC m=+1.024728599 container remove 1f18fc416e480548640ef5713d9c60a1a7d8241380d9c516a051352d2a76a178 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 16:23:56 compute-0 systemd[1]: libpod-conmon-1f18fc416e480548640ef5713d9c60a1a7d8241380d9c516a051352d2a76a178.scope: Deactivated successfully.
Nov 25 16:23:57 compute-0 podman[265778]: 2025-11-25 16:23:56.950317189 +0000 UTC m=+0.043354280 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:23:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:23:57 compute-0 ceph-mon[74985]: pgmap v1059: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:23:57 compute-0 podman[265778]: 2025-11-25 16:23:57.171294313 +0000 UTC m=+0.264331354 container create 6f57398b76dffc683e9540a10a89a0e918f313e0fc2fd26340c8fe392a11ca62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_chatterjee, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 16:23:57 compute-0 systemd[1]: Started libpod-conmon-6f57398b76dffc683e9540a10a89a0e918f313e0fc2fd26340c8fe392a11ca62.scope.
Nov 25 16:23:57 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:23:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/345a871ad51d8d5b14a34a523658979c36a9adba575c8a027962d85df2674166/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:23:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/345a871ad51d8d5b14a34a523658979c36a9adba575c8a027962d85df2674166/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:23:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/345a871ad51d8d5b14a34a523658979c36a9adba575c8a027962d85df2674166/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:23:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/345a871ad51d8d5b14a34a523658979c36a9adba575c8a027962d85df2674166/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:23:57 compute-0 podman[265778]: 2025-11-25 16:23:57.724864168 +0000 UTC m=+0.817901199 container init 6f57398b76dffc683e9540a10a89a0e918f313e0fc2fd26340c8fe392a11ca62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:23:57 compute-0 podman[265778]: 2025-11-25 16:23:57.732139246 +0000 UTC m=+0.825176277 container start 6f57398b76dffc683e9540a10a89a0e918f313e0fc2fd26340c8fe392a11ca62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_chatterjee, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:23:57 compute-0 podman[265778]: 2025-11-25 16:23:57.809150012 +0000 UTC m=+0.902187043 container attach 6f57398b76dffc683e9540a10a89a0e918f313e0fc2fd26340c8fe392a11ca62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_chatterjee, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 16:23:58 compute-0 nifty_chatterjee[265795]: {
Nov 25 16:23:58 compute-0 nifty_chatterjee[265795]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 16:23:58 compute-0 nifty_chatterjee[265795]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:23:58 compute-0 nifty_chatterjee[265795]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 16:23:58 compute-0 nifty_chatterjee[265795]:         "osd_id": 1,
Nov 25 16:23:58 compute-0 nifty_chatterjee[265795]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:23:58 compute-0 nifty_chatterjee[265795]:         "type": "bluestore"
Nov 25 16:23:58 compute-0 nifty_chatterjee[265795]:     },
Nov 25 16:23:58 compute-0 nifty_chatterjee[265795]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 16:23:58 compute-0 nifty_chatterjee[265795]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:23:58 compute-0 nifty_chatterjee[265795]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 16:23:58 compute-0 nifty_chatterjee[265795]:         "osd_id": 2,
Nov 25 16:23:58 compute-0 nifty_chatterjee[265795]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:23:58 compute-0 nifty_chatterjee[265795]:         "type": "bluestore"
Nov 25 16:23:58 compute-0 nifty_chatterjee[265795]:     },
Nov 25 16:23:58 compute-0 nifty_chatterjee[265795]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 16:23:58 compute-0 nifty_chatterjee[265795]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:23:58 compute-0 nifty_chatterjee[265795]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 16:23:58 compute-0 nifty_chatterjee[265795]:         "osd_id": 0,
Nov 25 16:23:58 compute-0 nifty_chatterjee[265795]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:23:58 compute-0 nifty_chatterjee[265795]:         "type": "bluestore"
Nov 25 16:23:58 compute-0 nifty_chatterjee[265795]:     }
Nov 25 16:23:58 compute-0 nifty_chatterjee[265795]: }
Nov 25 16:23:58 compute-0 systemd[1]: libpod-6f57398b76dffc683e9540a10a89a0e918f313e0fc2fd26340c8fe392a11ca62.scope: Deactivated successfully.
Nov 25 16:23:58 compute-0 podman[265778]: 2025-11-25 16:23:58.676834196 +0000 UTC m=+1.769871267 container died 6f57398b76dffc683e9540a10a89a0e918f313e0fc2fd26340c8fe392a11ca62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_chatterjee, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 16:23:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1060: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:23:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-345a871ad51d8d5b14a34a523658979c36a9adba575c8a027962d85df2674166-merged.mount: Deactivated successfully.
Nov 25 16:23:58 compute-0 ceph-mon[74985]: pgmap v1060: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:23:58 compute-0 podman[265778]: 2025-11-25 16:23:58.957334539 +0000 UTC m=+2.050371570 container remove 6f57398b76dffc683e9540a10a89a0e918f313e0fc2fd26340c8fe392a11ca62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_chatterjee, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:23:58 compute-0 systemd[1]: libpod-conmon-6f57398b76dffc683e9540a10a89a0e918f313e0fc2fd26340c8fe392a11ca62.scope: Deactivated successfully.
Nov 25 16:23:58 compute-0 sudo[265673]: pam_unix(sudo:session): session closed for user root
Nov 25 16:23:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:23:59 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:23:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:23:59 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:23:59 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 79a5725d-a130-4503-9325-d2a045057ab4 does not exist
Nov 25 16:23:59 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev b697e559-9e54-4b37-8c86-82bd0df39313 does not exist
Nov 25 16:23:59 compute-0 sudo[265842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:23:59 compute-0 sudo[265842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:23:59 compute-0 sudo[265842]: pam_unix(sudo:session): session closed for user root
Nov 25 16:23:59 compute-0 sudo[265867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 16:23:59 compute-0 sudo[265867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:23:59 compute-0 sudo[265867]: pam_unix(sudo:session): session closed for user root
Nov 25 16:24:00 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:24:00 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:24:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1061: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:24:01 compute-0 ceph-mon[74985]: pgmap v1061: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:24:02 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:24:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1062: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:24:02 compute-0 ceph-mon[74985]: pgmap v1062: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:24:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1063: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:24:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:24:04.812 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:24:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:24:04.814 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 16:24:05 compute-0 ceph-mon[74985]: pgmap v1063: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:24:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1064: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:24:06 compute-0 ceph-mon[74985]: pgmap v1064: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:24:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:24:07 compute-0 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 16:24:07 compute-0 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1801.5 total, 600.0 interval
                                           Cumulative writes: 6266 writes, 25K keys, 6266 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6266 writes, 1100 syncs, 5.70 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 588 writes, 1697 keys, 588 commit groups, 1.0 writes per commit group, ingest: 0.89 MB, 0.00 MB/s
                                           Interval WAL: 588 writes, 256 syncs, 2.30 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 16:24:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:24:07.816 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:24:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1065: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:24:09 compute-0 ceph-mon[74985]: pgmap v1065: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:24:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:24:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:24:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:24:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:24:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:24:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:24:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1066: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:24:10 compute-0 ceph-mon[74985]: pgmap v1066: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:24:12 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:24:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1067: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:24:12 compute-0 ceph-mon[74985]: pgmap v1067: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:24:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:24:13.592 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:24:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:24:13.593 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:24:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:24:13.593 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:24:13 compute-0 podman[265893]: 2025-11-25 16:24:13.639906998 +0000 UTC m=+0.057819665 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 16:24:13 compute-0 podman[265892]: 2025-11-25 16:24:13.66457251 +0000 UTC m=+0.076968167 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3)
Nov 25 16:24:13 compute-0 podman[265894]: 2025-11-25 16:24:13.712446602 +0000 UTC m=+0.129725271 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 16:24:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1068: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:24:15 compute-0 ceph-mon[74985]: pgmap v1068: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:24:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1069: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:24:17 compute-0 ceph-mon[74985]: pgmap v1069: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:24:17 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:24:18 compute-0 nova_compute[254092]: 2025-11-25 16:24:18.360 254096 DEBUG oslo_concurrency.lockutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Acquiring lock "a04ee12e-fa6a-4458-9472-b68930d7ba89" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:24:18 compute-0 nova_compute[254092]: 2025-11-25 16:24:18.361 254096 DEBUG oslo_concurrency.lockutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lock "a04ee12e-fa6a-4458-9472-b68930d7ba89" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:24:18 compute-0 nova_compute[254092]: 2025-11-25 16:24:18.433 254096 DEBUG nova.compute.manager [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:24:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1070: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:24:18 compute-0 nova_compute[254092]: 2025-11-25 16:24:18.852 254096 DEBUG oslo_concurrency.lockutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:24:18 compute-0 nova_compute[254092]: 2025-11-25 16:24:18.853 254096 DEBUG oslo_concurrency.lockutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:24:18 compute-0 nova_compute[254092]: 2025-11-25 16:24:18.860 254096 DEBUG nova.virt.hardware [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:24:18 compute-0 nova_compute[254092]: 2025-11-25 16:24:18.861 254096 INFO nova.compute.claims [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:24:18 compute-0 nova_compute[254092]: 2025-11-25 16:24:18.986 254096 DEBUG oslo_concurrency.processutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:24:19 compute-0 ceph-mon[74985]: pgmap v1070: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:24:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:24:19 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2954788808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:24:19 compute-0 nova_compute[254092]: 2025-11-25 16:24:19.429 254096 DEBUG oslo_concurrency.processutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:24:19 compute-0 nova_compute[254092]: 2025-11-25 16:24:19.436 254096 DEBUG nova.compute.provider_tree [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:24:19 compute-0 nova_compute[254092]: 2025-11-25 16:24:19.452 254096 DEBUG nova.scheduler.client.report [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:24:19 compute-0 nova_compute[254092]: 2025-11-25 16:24:19.498 254096 DEBUG oslo_concurrency.lockutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:24:19 compute-0 nova_compute[254092]: 2025-11-25 16:24:19.499 254096 DEBUG nova.compute.manager [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:24:19 compute-0 nova_compute[254092]: 2025-11-25 16:24:19.643 254096 DEBUG nova.compute.manager [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:24:19 compute-0 nova_compute[254092]: 2025-11-25 16:24:19.643 254096 DEBUG nova.network.neutron [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:24:19 compute-0 nova_compute[254092]: 2025-11-25 16:24:19.691 254096 INFO nova.virt.libvirt.driver [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:24:19 compute-0 nova_compute[254092]: 2025-11-25 16:24:19.720 254096 DEBUG nova.compute.manager [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:24:19 compute-0 nova_compute[254092]: 2025-11-25 16:24:19.916 254096 DEBUG nova.compute.manager [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:24:19 compute-0 nova_compute[254092]: 2025-11-25 16:24:19.918 254096 DEBUG nova.virt.libvirt.driver [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:24:19 compute-0 nova_compute[254092]: 2025-11-25 16:24:19.918 254096 INFO nova.virt.libvirt.driver [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Creating image(s)
Nov 25 16:24:19 compute-0 nova_compute[254092]: 2025-11-25 16:24:19.944 254096 DEBUG nova.storage.rbd_utils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] rbd image a04ee12e-fa6a-4458-9472-b68930d7ba89_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:24:19 compute-0 nova_compute[254092]: 2025-11-25 16:24:19.968 254096 DEBUG nova.storage.rbd_utils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] rbd image a04ee12e-fa6a-4458-9472-b68930d7ba89_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:24:19 compute-0 nova_compute[254092]: 2025-11-25 16:24:19.988 254096 DEBUG nova.storage.rbd_utils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] rbd image a04ee12e-fa6a-4458-9472-b68930d7ba89_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:24:19 compute-0 nova_compute[254092]: 2025-11-25 16:24:19.991 254096 DEBUG oslo_concurrency.lockutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:24:19 compute-0 nova_compute[254092]: 2025-11-25 16:24:19.992 254096 DEBUG oslo_concurrency.lockutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:24:20 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2954788808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:24:20 compute-0 nova_compute[254092]: 2025-11-25 16:24:20.267 254096 DEBUG nova.virt.libvirt.imagebackend [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Image locations are: [{'url': 'rbd://d82baeae-c742-50a4-b8f6-b5257c8a2c92/images/8b512c8e-2281-41de-a668-eb983e174ba0/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://d82baeae-c742-50a4-b8f6-b5257c8a2c92/images/8b512c8e-2281-41de-a668-eb983e174ba0/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 25 16:24:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1071: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:24:21 compute-0 nova_compute[254092]: 2025-11-25 16:24:21.270 254096 DEBUG nova.network.neutron [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 25 16:24:21 compute-0 nova_compute[254092]: 2025-11-25 16:24:21.270 254096 DEBUG nova.compute.manager [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:24:21 compute-0 ceph-mon[74985]: pgmap v1071: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:24:22 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:24:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1072: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:24:23 compute-0 ceph-mon[74985]: pgmap v1072: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:24:23 compute-0 nova_compute[254092]: 2025-11-25 16:24:23.450 254096 DEBUG oslo_concurrency.processutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:24:23 compute-0 nova_compute[254092]: 2025-11-25 16:24:23.510 254096 DEBUG oslo_concurrency.processutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169.part --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:24:23 compute-0 nova_compute[254092]: 2025-11-25 16:24:23.511 254096 DEBUG nova.virt.images [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] 8b512c8e-2281-41de-a668-eb983e174ba0 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Nov 25 16:24:23 compute-0 nova_compute[254092]: 2025-11-25 16:24:23.516 254096 DEBUG nova.privsep.utils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 25 16:24:23 compute-0 nova_compute[254092]: 2025-11-25 16:24:23.516 254096 DEBUG oslo_concurrency.processutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169.part /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:24:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1073: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:24:24 compute-0 nova_compute[254092]: 2025-11-25 16:24:24.896 254096 DEBUG oslo_concurrency.lockutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Acquiring lock "3bb45e40-37dd-4cae-a966-ecbd9260eb35" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:24:24 compute-0 nova_compute[254092]: 2025-11-25 16:24:24.896 254096 DEBUG oslo_concurrency.lockutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Lock "3bb45e40-37dd-4cae-a966-ecbd9260eb35" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:24:24 compute-0 nova_compute[254092]: 2025-11-25 16:24:24.923 254096 DEBUG nova.compute.manager [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:24:25 compute-0 nova_compute[254092]: 2025-11-25 16:24:25.059 254096 DEBUG oslo_concurrency.lockutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:24:25 compute-0 nova_compute[254092]: 2025-11-25 16:24:25.059 254096 DEBUG oslo_concurrency.lockutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:24:25 compute-0 nova_compute[254092]: 2025-11-25 16:24:25.065 254096 DEBUG nova.virt.hardware [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:24:25 compute-0 nova_compute[254092]: 2025-11-25 16:24:25.065 254096 INFO nova.compute.claims [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:24:25 compute-0 nova_compute[254092]: 2025-11-25 16:24:25.200 254096 DEBUG oslo_concurrency.processutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:24:25 compute-0 ceph-mon[74985]: pgmap v1073: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:24:25 compute-0 ceph-mgr[75280]: [devicehealth INFO root] Check health
Nov 25 16:24:25 compute-0 nova_compute[254092]: 2025-11-25 16:24:25.409 254096 DEBUG oslo_concurrency.processutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169.part /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169.converted" returned: 0 in 1.892s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:24:25 compute-0 nova_compute[254092]: 2025-11-25 16:24:25.413 254096 DEBUG oslo_concurrency.processutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:24:25 compute-0 nova_compute[254092]: 2025-11-25 16:24:25.481 254096 DEBUG oslo_concurrency.processutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169.converted --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:24:25 compute-0 nova_compute[254092]: 2025-11-25 16:24:25.482 254096 DEBUG oslo_concurrency.lockutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 5.490s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:24:25 compute-0 nova_compute[254092]: 2025-11-25 16:24:25.504 254096 DEBUG nova.storage.rbd_utils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] rbd image a04ee12e-fa6a-4458-9472-b68930d7ba89_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:24:25 compute-0 nova_compute[254092]: 2025-11-25 16:24:25.507 254096 DEBUG oslo_concurrency.processutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 a04ee12e-fa6a-4458-9472-b68930d7ba89_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:24:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:24:25 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4085360644' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:24:25 compute-0 nova_compute[254092]: 2025-11-25 16:24:25.634 254096 DEBUG oslo_concurrency.processutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:24:25 compute-0 nova_compute[254092]: 2025-11-25 16:24:25.639 254096 DEBUG nova.compute.provider_tree [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 16:24:25 compute-0 nova_compute[254092]: 2025-11-25 16:24:25.674 254096 ERROR nova.scheduler.client.report [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [req-fd1b6266-fde5-4532-8e04-c3b0cd76e047] Failed to update inventory to [{'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 4f066da7-306c-41d7-8522-9a9189cacc49.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-fd1b6266-fde5-4532-8e04-c3b0cd76e047"}]}
Nov 25 16:24:25 compute-0 nova_compute[254092]: 2025-11-25 16:24:25.692 254096 DEBUG nova.scheduler.client.report [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Refreshing inventories for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 16:24:25 compute-0 nova_compute[254092]: 2025-11-25 16:24:25.707 254096 DEBUG nova.scheduler.client.report [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Updating ProviderTree inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 16:24:25 compute-0 nova_compute[254092]: 2025-11-25 16:24:25.708 254096 DEBUG nova.compute.provider_tree [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 16:24:25 compute-0 nova_compute[254092]: 2025-11-25 16:24:25.722 254096 DEBUG nova.scheduler.client.report [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Refreshing aggregate associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 16:24:25 compute-0 nova_compute[254092]: 2025-11-25 16:24:25.744 254096 DEBUG nova.scheduler.client.report [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Refreshing trait associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 16:24:25 compute-0 nova_compute[254092]: 2025-11-25 16:24:25.821 254096 DEBUG oslo_concurrency.processutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:24:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:24:26 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1582008969' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:24:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Nov 25 16:24:26 compute-0 nova_compute[254092]: 2025-11-25 16:24:26.231 254096 DEBUG oslo_concurrency.processutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:24:26 compute-0 nova_compute[254092]: 2025-11-25 16:24:26.237 254096 DEBUG nova.compute.provider_tree [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 16:24:26 compute-0 nova_compute[254092]: 2025-11-25 16:24:26.328 254096 DEBUG nova.scheduler.client.report [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Updated inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with generation 8 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Nov 25 16:24:26 compute-0 nova_compute[254092]: 2025-11-25 16:24:26.329 254096 DEBUG nova.compute.provider_tree [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Updating resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 generation from 8 to 9 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 25 16:24:26 compute-0 nova_compute[254092]: 2025-11-25 16:24:26.330 254096 DEBUG nova.compute.provider_tree [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 16:24:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Nov 25 16:24:26 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Nov 25 16:24:26 compute-0 nova_compute[254092]: 2025-11-25 16:24:26.576 254096 DEBUG oslo_concurrency.lockutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.517s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:24:26 compute-0 nova_compute[254092]: 2025-11-25 16:24:26.578 254096 DEBUG nova.compute.manager [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:24:26 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4085360644' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:24:26 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1582008969' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:24:26 compute-0 nova_compute[254092]: 2025-11-25 16:24:26.633 254096 DEBUG nova.compute.manager [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Nov 25 16:24:26 compute-0 nova_compute[254092]: 2025-11-25 16:24:26.664 254096 INFO nova.virt.libvirt.driver [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:24:26 compute-0 nova_compute[254092]: 2025-11-25 16:24:26.682 254096 DEBUG nova.compute.manager [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:24:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1075: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 8 op/s
Nov 25 16:24:26 compute-0 nova_compute[254092]: 2025-11-25 16:24:26.805 254096 DEBUG nova.compute.manager [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:24:26 compute-0 nova_compute[254092]: 2025-11-25 16:24:26.806 254096 DEBUG nova.virt.libvirt.driver [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:24:26 compute-0 nova_compute[254092]: 2025-11-25 16:24:26.806 254096 INFO nova.virt.libvirt.driver [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Creating image(s)
Nov 25 16:24:26 compute-0 nova_compute[254092]: 2025-11-25 16:24:26.875 254096 DEBUG nova.storage.rbd_utils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] rbd image 3bb45e40-37dd-4cae-a966-ecbd9260eb35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:24:26 compute-0 nova_compute[254092]: 2025-11-25 16:24:26.895 254096 DEBUG nova.storage.rbd_utils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] rbd image 3bb45e40-37dd-4cae-a966-ecbd9260eb35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:24:26 compute-0 nova_compute[254092]: 2025-11-25 16:24:26.921 254096 DEBUG nova.storage.rbd_utils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] rbd image 3bb45e40-37dd-4cae-a966-ecbd9260eb35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:24:26 compute-0 nova_compute[254092]: 2025-11-25 16:24:26.926 254096 DEBUG oslo_concurrency.processutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:24:27 compute-0 nova_compute[254092]: 2025-11-25 16:24:27.006 254096 DEBUG oslo_concurrency.processutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:24:27 compute-0 nova_compute[254092]: 2025-11-25 16:24:27.007 254096 DEBUG oslo_concurrency.lockutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:24:27 compute-0 nova_compute[254092]: 2025-11-25 16:24:27.008 254096 DEBUG oslo_concurrency.lockutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:24:27 compute-0 nova_compute[254092]: 2025-11-25 16:24:27.008 254096 DEBUG oslo_concurrency.lockutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:24:27 compute-0 nova_compute[254092]: 2025-11-25 16:24:27.029 254096 DEBUG nova.storage.rbd_utils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] rbd image 3bb45e40-37dd-4cae-a966-ecbd9260eb35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:24:27 compute-0 nova_compute[254092]: 2025-11-25 16:24:27.033 254096 DEBUG oslo_concurrency.processutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 3bb45e40-37dd-4cae-a966-ecbd9260eb35_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:24:27 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:24:27 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Nov 25 16:24:27 compute-0 ceph-mon[74985]: osdmap e131: 3 total, 3 up, 3 in
Nov 25 16:24:27 compute-0 ceph-mon[74985]: pgmap v1075: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 8 op/s
Nov 25 16:24:28 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Nov 25 16:24:28 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Nov 25 16:24:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1077: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 10 op/s
Nov 25 16:24:29 compute-0 ceph-mon[74985]: osdmap e132: 3 total, 3 up, 3 in
Nov 25 16:24:29 compute-0 ceph-mon[74985]: pgmap v1077: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 10 op/s
Nov 25 16:24:29 compute-0 nova_compute[254092]: 2025-11-25 16:24:29.873 254096 DEBUG oslo_concurrency.processutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 3bb45e40-37dd-4cae-a966-ecbd9260eb35_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.840s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:24:29 compute-0 nova_compute[254092]: 2025-11-25 16:24:29.919 254096 DEBUG nova.storage.rbd_utils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] resizing rbd image 3bb45e40-37dd-4cae-a966-ecbd9260eb35_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:24:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1078: 321 pgs: 321 active+clean; 76 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.1 MiB/s wr, 33 op/s
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.150 254096 DEBUG nova.objects.instance [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Lazy-loading 'migration_context' on Instance uuid 3bb45e40-37dd-4cae-a966-ecbd9260eb35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.161 254096 DEBUG nova.virt.libvirt.driver [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.162 254096 DEBUG nova.virt.libvirt.driver [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Ensure instance console log exists: /var/lib/nova/instances/3bb45e40-37dd-4cae-a966-ecbd9260eb35/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.162 254096 DEBUG oslo_concurrency.lockutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.163 254096 DEBUG oslo_concurrency.lockutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.163 254096 DEBUG oslo_concurrency.lockutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.164 254096 DEBUG nova.virt.libvirt.driver [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.168 254096 WARNING nova.virt.libvirt.driver [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:24:31 compute-0 ceph-mon[74985]: pgmap v1078: 321 pgs: 321 active+clean; 76 MiB data, 206 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.1 MiB/s wr, 33 op/s
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.171 254096 DEBUG nova.virt.libvirt.host [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.172 254096 DEBUG nova.virt.libvirt.host [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.174 254096 DEBUG nova.virt.libvirt.host [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.175 254096 DEBUG nova.virt.libvirt.host [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.175 254096 DEBUG nova.virt.libvirt.driver [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.175 254096 DEBUG nova.virt.hardware [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.176 254096 DEBUG nova.virt.hardware [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.176 254096 DEBUG nova.virt.hardware [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.176 254096 DEBUG nova.virt.hardware [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.176 254096 DEBUG nova.virt.hardware [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.177 254096 DEBUG nova.virt.hardware [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.177 254096 DEBUG nova.virt.hardware [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.177 254096 DEBUG nova.virt.hardware [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.177 254096 DEBUG nova.virt.hardware [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.177 254096 DEBUG nova.virt.hardware [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.178 254096 DEBUG nova.virt.hardware [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.193 254096 DEBUG nova.privsep.utils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.193 254096 DEBUG oslo_concurrency.processutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:24:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:24:31 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/715378179' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.625 254096 DEBUG oslo_concurrency.processutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.704 254096 DEBUG nova.storage.rbd_utils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] rbd image 3bb45e40-37dd-4cae-a966-ecbd9260eb35_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.708 254096 DEBUG oslo_concurrency.processutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.778 254096 DEBUG oslo_concurrency.processutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 a04ee12e-fa6a-4458-9472-b68930d7ba89_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 6.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.822 254096 DEBUG nova.storage.rbd_utils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] resizing rbd image a04ee12e-fa6a-4458-9472-b68930d7ba89_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.932 254096 DEBUG nova.objects.instance [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lazy-loading 'migration_context' on Instance uuid a04ee12e-fa6a-4458-9472-b68930d7ba89 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.962 254096 DEBUG nova.virt.libvirt.driver [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.963 254096 DEBUG nova.virt.libvirt.driver [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Ensure instance console log exists: /var/lib/nova/instances/a04ee12e-fa6a-4458-9472-b68930d7ba89/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.964 254096 DEBUG oslo_concurrency.lockutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.965 254096 DEBUG oslo_concurrency.lockutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.965 254096 DEBUG oslo_concurrency.lockutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.968 254096 DEBUG nova.virt.libvirt.driver [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.973 254096 WARNING nova.virt.libvirt.driver [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.979 254096 DEBUG nova.virt.libvirt.host [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.980 254096 DEBUG nova.virt.libvirt.host [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.985 254096 DEBUG nova.virt.libvirt.host [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.986 254096 DEBUG nova.virt.libvirt.host [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.987 254096 DEBUG nova.virt.libvirt.driver [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.987 254096 DEBUG nova.virt.hardware [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.988 254096 DEBUG nova.virt.hardware [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.989 254096 DEBUG nova.virt.hardware [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.989 254096 DEBUG nova.virt.hardware [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.990 254096 DEBUG nova.virt.hardware [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.990 254096 DEBUG nova.virt.hardware [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.991 254096 DEBUG nova.virt.hardware [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.992 254096 DEBUG nova.virt.hardware [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.992 254096 DEBUG nova.virt.hardware [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.993 254096 DEBUG nova.virt.hardware [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.993 254096 DEBUG nova.virt.hardware [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:24:31 compute-0 nova_compute[254092]: 2025-11-25 16:24:31.998 254096 DEBUG oslo_concurrency.processutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:24:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:24:32 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3258906684' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:24:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:24:32 compute-0 nova_compute[254092]: 2025-11-25 16:24:32.138 254096 DEBUG oslo_concurrency.processutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:24:32 compute-0 nova_compute[254092]: 2025-11-25 16:24:32.140 254096 DEBUG nova.objects.instance [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3bb45e40-37dd-4cae-a966-ecbd9260eb35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:24:32 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/715378179' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:24:32 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3258906684' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:24:32 compute-0 nova_compute[254092]: 2025-11-25 16:24:32.207 254096 DEBUG nova.virt.libvirt.driver [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:24:32 compute-0 nova_compute[254092]:   <uuid>3bb45e40-37dd-4cae-a966-ecbd9260eb35</uuid>
Nov 25 16:24:32 compute-0 nova_compute[254092]:   <name>instance-00000002</name>
Nov 25 16:24:32 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:24:32 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:24:32 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <nova:name>tempest-AutoAllocateNetworkTest-server-1989432593</nova:name>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:24:31</nova:creationTime>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:24:32 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:24:32 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:24:32 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:24:32 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:24:32 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:24:32 compute-0 nova_compute[254092]:         <nova:user uuid="cb43f8dd9fea4dfb9a472f26cde44200">tempest-AutoAllocateNetworkTest-168723719-project-member</nova:user>
Nov 25 16:24:32 compute-0 nova_compute[254092]:         <nova:project uuid="fdc770ac85b7451fbb50764fcc8bf038">tempest-AutoAllocateNetworkTest-168723719</nova:project>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <nova:ports/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:24:32 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:24:32 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <system>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <entry name="serial">3bb45e40-37dd-4cae-a966-ecbd9260eb35</entry>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <entry name="uuid">3bb45e40-37dd-4cae-a966-ecbd9260eb35</entry>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     </system>
Nov 25 16:24:32 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:24:32 compute-0 nova_compute[254092]:   <os>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:   </os>
Nov 25 16:24:32 compute-0 nova_compute[254092]:   <features>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:   </features>
Nov 25 16:24:32 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:24:32 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:24:32 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/3bb45e40-37dd-4cae-a966-ecbd9260eb35_disk">
Nov 25 16:24:32 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       </source>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:24:32 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/3bb45e40-37dd-4cae-a966-ecbd9260eb35_disk.config">
Nov 25 16:24:32 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       </source>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:24:32 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/3bb45e40-37dd-4cae-a966-ecbd9260eb35/console.log" append="off"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <video>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     </video>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:24:32 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:24:32 compute-0 nova_compute[254092]: </domain>
Nov 25 16:24:32 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:24:32 compute-0 nova_compute[254092]: 2025-11-25 16:24:32.252 254096 DEBUG nova.virt.libvirt.driver [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:24:32 compute-0 nova_compute[254092]: 2025-11-25 16:24:32.252 254096 DEBUG nova.virt.libvirt.driver [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:24:32 compute-0 nova_compute[254092]: 2025-11-25 16:24:32.253 254096 INFO nova.virt.libvirt.driver [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Using config drive
Nov 25 16:24:32 compute-0 nova_compute[254092]: 2025-11-25 16:24:32.272 254096 DEBUG nova.storage.rbd_utils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] rbd image 3bb45e40-37dd-4cae-a966-ecbd9260eb35_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:24:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:24:32 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2658841496' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:24:32 compute-0 nova_compute[254092]: 2025-11-25 16:24:32.445 254096 DEBUG oslo_concurrency.processutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:24:32 compute-0 nova_compute[254092]: 2025-11-25 16:24:32.477 254096 DEBUG nova.storage.rbd_utils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] rbd image a04ee12e-fa6a-4458-9472-b68930d7ba89_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:24:32 compute-0 nova_compute[254092]: 2025-11-25 16:24:32.481 254096 DEBUG oslo_concurrency.processutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:24:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1079: 321 pgs: 321 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 35 op/s
Nov 25 16:24:32 compute-0 nova_compute[254092]: 2025-11-25 16:24:32.839 254096 INFO nova.virt.libvirt.driver [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Creating config drive at /var/lib/nova/instances/3bb45e40-37dd-4cae-a966-ecbd9260eb35/disk.config
Nov 25 16:24:32 compute-0 nova_compute[254092]: 2025-11-25 16:24:32.849 254096 DEBUG oslo_concurrency.processutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3bb45e40-37dd-4cae-a966-ecbd9260eb35/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnvz1az8n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:24:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:24:32 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3230360792' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:24:32 compute-0 nova_compute[254092]: 2025-11-25 16:24:32.915 254096 DEBUG oslo_concurrency.processutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:24:32 compute-0 nova_compute[254092]: 2025-11-25 16:24:32.917 254096 DEBUG nova.objects.instance [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lazy-loading 'pci_devices' on Instance uuid a04ee12e-fa6a-4458-9472-b68930d7ba89 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:24:32 compute-0 nova_compute[254092]: 2025-11-25 16:24:32.931 254096 DEBUG nova.virt.libvirt.driver [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:24:32 compute-0 nova_compute[254092]:   <uuid>a04ee12e-fa6a-4458-9472-b68930d7ba89</uuid>
Nov 25 16:24:32 compute-0 nova_compute[254092]:   <name>instance-00000001</name>
Nov 25 16:24:32 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:24:32 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:24:32 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <nova:name>tempest-DeleteServersAdminTestJSON-server-577439965</nova:name>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:24:31</nova:creationTime>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:24:32 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:24:32 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:24:32 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:24:32 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:24:32 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:24:32 compute-0 nova_compute[254092]:         <nova:user uuid="d72f101cd6d049f694a1d30145a4ed24">tempest-DeleteServersAdminTestJSON-1062463304-project-member</nova:user>
Nov 25 16:24:32 compute-0 nova_compute[254092]:         <nova:project uuid="185de66120c0404eb338d72909c776df">tempest-DeleteServersAdminTestJSON-1062463304</nova:project>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <nova:ports/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:24:32 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:24:32 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <system>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <entry name="serial">a04ee12e-fa6a-4458-9472-b68930d7ba89</entry>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <entry name="uuid">a04ee12e-fa6a-4458-9472-b68930d7ba89</entry>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     </system>
Nov 25 16:24:32 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:24:32 compute-0 nova_compute[254092]:   <os>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:   </os>
Nov 25 16:24:32 compute-0 nova_compute[254092]:   <features>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:   </features>
Nov 25 16:24:32 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:24:32 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:24:32 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/a04ee12e-fa6a-4458-9472-b68930d7ba89_disk">
Nov 25 16:24:32 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       </source>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:24:32 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/a04ee12e-fa6a-4458-9472-b68930d7ba89_disk.config">
Nov 25 16:24:32 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       </source>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:24:32 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/a04ee12e-fa6a-4458-9472-b68930d7ba89/console.log" append="off"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <video>
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     </video>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:24:32 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:24:32 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:24:32 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:24:32 compute-0 nova_compute[254092]: </domain>
Nov 25 16:24:32 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:24:32 compute-0 nova_compute[254092]: 2025-11-25 16:24:32.970 254096 DEBUG nova.virt.libvirt.driver [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:24:32 compute-0 nova_compute[254092]: 2025-11-25 16:24:32.971 254096 DEBUG nova.virt.libvirt.driver [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:24:32 compute-0 nova_compute[254092]: 2025-11-25 16:24:32.971 254096 INFO nova.virt.libvirt.driver [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Using config drive
Nov 25 16:24:32 compute-0 nova_compute[254092]: 2025-11-25 16:24:32.991 254096 DEBUG nova.storage.rbd_utils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] rbd image a04ee12e-fa6a-4458-9472-b68930d7ba89_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:24:32 compute-0 nova_compute[254092]: 2025-11-25 16:24:32.995 254096 DEBUG oslo_concurrency.processutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3bb45e40-37dd-4cae-a966-ecbd9260eb35/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnvz1az8n" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:24:33 compute-0 nova_compute[254092]: 2025-11-25 16:24:33.014 254096 DEBUG nova.storage.rbd_utils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] rbd image 3bb45e40-37dd-4cae-a966-ecbd9260eb35_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:24:33 compute-0 nova_compute[254092]: 2025-11-25 16:24:33.018 254096 DEBUG oslo_concurrency.processutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3bb45e40-37dd-4cae-a966-ecbd9260eb35/disk.config 3bb45e40-37dd-4cae-a966-ecbd9260eb35_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:24:33 compute-0 nova_compute[254092]: 2025-11-25 16:24:33.143 254096 DEBUG oslo_concurrency.processutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3bb45e40-37dd-4cae-a966-ecbd9260eb35/disk.config 3bb45e40-37dd-4cae-a966-ecbd9260eb35_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:24:33 compute-0 nova_compute[254092]: 2025-11-25 16:24:33.144 254096 INFO nova.virt.libvirt.driver [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Deleting local config drive /var/lib/nova/instances/3bb45e40-37dd-4cae-a966-ecbd9260eb35/disk.config because it was imported into RBD.
Nov 25 16:24:33 compute-0 systemd[1]: Starting libvirt secret daemon...
Nov 25 16:24:33 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2658841496' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:24:33 compute-0 ceph-mon[74985]: pgmap v1079: 321 pgs: 321 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 35 op/s
Nov 25 16:24:33 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3230360792' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:24:33 compute-0 systemd[1]: Started libvirt secret daemon.
Nov 25 16:24:33 compute-0 systemd-machined[216343]: New machine qemu-1-instance-00000002.
Nov 25 16:24:33 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000002.
Nov 25 16:24:33 compute-0 nova_compute[254092]: 2025-11-25 16:24:33.303 254096 INFO nova.virt.libvirt.driver [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Creating config drive at /var/lib/nova/instances/a04ee12e-fa6a-4458-9472-b68930d7ba89/disk.config
Nov 25 16:24:33 compute-0 nova_compute[254092]: 2025-11-25 16:24:33.308 254096 DEBUG oslo_concurrency.processutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a04ee12e-fa6a-4458-9472-b68930d7ba89/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpimj7_php execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:24:33 compute-0 nova_compute[254092]: 2025-11-25 16:24:33.428 254096 DEBUG oslo_concurrency.processutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a04ee12e-fa6a-4458-9472-b68930d7ba89/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpimj7_php" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:24:33 compute-0 nova_compute[254092]: 2025-11-25 16:24:33.447 254096 DEBUG nova.storage.rbd_utils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] rbd image a04ee12e-fa6a-4458-9472-b68930d7ba89_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:24:33 compute-0 nova_compute[254092]: 2025-11-25 16:24:33.451 254096 DEBUG oslo_concurrency.processutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a04ee12e-fa6a-4458-9472-b68930d7ba89/disk.config a04ee12e-fa6a-4458-9472-b68930d7ba89_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:24:34 compute-0 nova_compute[254092]: 2025-11-25 16:24:34.458 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764087874.4577172, 3bb45e40-37dd-4cae-a966-ecbd9260eb35 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:24:34 compute-0 nova_compute[254092]: 2025-11-25 16:24:34.459 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] VM Resumed (Lifecycle Event)
Nov 25 16:24:34 compute-0 nova_compute[254092]: 2025-11-25 16:24:34.462 254096 DEBUG nova.compute.manager [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:24:34 compute-0 nova_compute[254092]: 2025-11-25 16:24:34.463 254096 DEBUG nova.virt.libvirt.driver [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:24:34 compute-0 nova_compute[254092]: 2025-11-25 16:24:34.467 254096 INFO nova.virt.libvirt.driver [-] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Instance spawned successfully.
Nov 25 16:24:34 compute-0 nova_compute[254092]: 2025-11-25 16:24:34.468 254096 DEBUG nova.virt.libvirt.driver [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:24:34 compute-0 nova_compute[254092]: 2025-11-25 16:24:34.507 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:24:34 compute-0 nova_compute[254092]: 2025-11-25 16:24:34.510 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:24:34 compute-0 nova_compute[254092]: 2025-11-25 16:24:34.553 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:24:34 compute-0 nova_compute[254092]: 2025-11-25 16:24:34.554 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764087874.4614913, 3bb45e40-37dd-4cae-a966-ecbd9260eb35 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:24:34 compute-0 nova_compute[254092]: 2025-11-25 16:24:34.554 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] VM Started (Lifecycle Event)
Nov 25 16:24:34 compute-0 nova_compute[254092]: 2025-11-25 16:24:34.576 254096 DEBUG nova.virt.libvirt.driver [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:24:34 compute-0 nova_compute[254092]: 2025-11-25 16:24:34.576 254096 DEBUG nova.virt.libvirt.driver [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:24:34 compute-0 nova_compute[254092]: 2025-11-25 16:24:34.577 254096 DEBUG nova.virt.libvirt.driver [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:24:34 compute-0 nova_compute[254092]: 2025-11-25 16:24:34.577 254096 DEBUG nova.virt.libvirt.driver [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:24:34 compute-0 nova_compute[254092]: 2025-11-25 16:24:34.578 254096 DEBUG nova.virt.libvirt.driver [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:24:34 compute-0 nova_compute[254092]: 2025-11-25 16:24:34.578 254096 DEBUG nova.virt.libvirt.driver [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:24:34 compute-0 nova_compute[254092]: 2025-11-25 16:24:34.583 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:24:34 compute-0 nova_compute[254092]: 2025-11-25 16:24:34.586 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:24:34 compute-0 nova_compute[254092]: 2025-11-25 16:24:34.635 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:24:34 compute-0 nova_compute[254092]: 2025-11-25 16:24:34.760 254096 INFO nova.compute.manager [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Took 7.96 seconds to spawn the instance on the hypervisor.
Nov 25 16:24:34 compute-0 nova_compute[254092]: 2025-11-25 16:24:34.762 254096 DEBUG nova.compute.manager [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:24:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1080: 321 pgs: 321 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.6 MiB/s wr, 24 op/s
Nov 25 16:24:34 compute-0 nova_compute[254092]: 2025-11-25 16:24:34.923 254096 INFO nova.compute.manager [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Took 9.89 seconds to build instance.
Nov 25 16:24:35 compute-0 nova_compute[254092]: 2025-11-25 16:24:35.138 254096 DEBUG oslo_concurrency.lockutils [None req-096aa075-da3b-47ed-9b50-daaed7cd60dd cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Lock "3bb45e40-37dd-4cae-a966-ecbd9260eb35" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.242s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:24:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1081: 321 pgs: 321 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 602 KiB/s rd, 4.3 MiB/s wr, 100 op/s
Nov 25 16:24:37 compute-0 ceph-mon[74985]: pgmap v1080: 321 pgs: 321 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.6 MiB/s wr, 24 op/s
Nov 25 16:24:37 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:24:37 compute-0 nova_compute[254092]: 2025-11-25 16:24:37.908 254096 DEBUG oslo_concurrency.processutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a04ee12e-fa6a-4458-9472-b68930d7ba89/disk.config a04ee12e-fa6a-4458-9472-b68930d7ba89_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:24:37 compute-0 nova_compute[254092]: 2025-11-25 16:24:37.909 254096 INFO nova.virt.libvirt.driver [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Deleting local config drive /var/lib/nova/instances/a04ee12e-fa6a-4458-9472-b68930d7ba89/disk.config because it was imported into RBD.
Nov 25 16:24:37 compute-0 systemd-machined[216343]: New machine qemu-2-instance-00000001.
Nov 25 16:24:37 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000001.
Nov 25 16:24:38 compute-0 ceph-mon[74985]: pgmap v1081: 321 pgs: 321 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 602 KiB/s rd, 4.3 MiB/s wr, 100 op/s
Nov 25 16:24:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1082: 321 pgs: 321 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 583 KiB/s rd, 4.1 MiB/s wr, 97 op/s
Nov 25 16:24:38 compute-0 nova_compute[254092]: 2025-11-25 16:24:38.903 254096 DEBUG oslo_concurrency.lockutils [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Acquiring lock "3bb45e40-37dd-4cae-a966-ecbd9260eb35" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:24:38 compute-0 nova_compute[254092]: 2025-11-25 16:24:38.904 254096 DEBUG oslo_concurrency.lockutils [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Lock "3bb45e40-37dd-4cae-a966-ecbd9260eb35" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:24:38 compute-0 nova_compute[254092]: 2025-11-25 16:24:38.904 254096 DEBUG oslo_concurrency.lockutils [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Acquiring lock "3bb45e40-37dd-4cae-a966-ecbd9260eb35-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:24:38 compute-0 nova_compute[254092]: 2025-11-25 16:24:38.905 254096 DEBUG oslo_concurrency.lockutils [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Lock "3bb45e40-37dd-4cae-a966-ecbd9260eb35-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:24:38 compute-0 nova_compute[254092]: 2025-11-25 16:24:38.905 254096 DEBUG oslo_concurrency.lockutils [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Lock "3bb45e40-37dd-4cae-a966-ecbd9260eb35-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:24:38 compute-0 nova_compute[254092]: 2025-11-25 16:24:38.906 254096 INFO nova.compute.manager [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Terminating instance
Nov 25 16:24:38 compute-0 nova_compute[254092]: 2025-11-25 16:24:38.907 254096 DEBUG oslo_concurrency.lockutils [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Acquiring lock "refresh_cache-3bb45e40-37dd-4cae-a966-ecbd9260eb35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:24:38 compute-0 nova_compute[254092]: 2025-11-25 16:24:38.908 254096 DEBUG oslo_concurrency.lockutils [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Acquired lock "refresh_cache-3bb45e40-37dd-4cae-a966-ecbd9260eb35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:24:38 compute-0 nova_compute[254092]: 2025-11-25 16:24:38.908 254096 DEBUG nova.network.neutron [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:24:39 compute-0 ceph-mon[74985]: pgmap v1082: 321 pgs: 321 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 583 KiB/s rd, 4.1 MiB/s wr, 97 op/s
Nov 25 16:24:39 compute-0 nova_compute[254092]: 2025-11-25 16:24:39.276 254096 DEBUG nova.network.neutron [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:24:39 compute-0 nova_compute[254092]: 2025-11-25 16:24:39.326 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764087879.3258498, a04ee12e-fa6a-4458-9472-b68930d7ba89 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:24:39 compute-0 nova_compute[254092]: 2025-11-25 16:24:39.326 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] VM Resumed (Lifecycle Event)
Nov 25 16:24:39 compute-0 nova_compute[254092]: 2025-11-25 16:24:39.329 254096 DEBUG nova.compute.manager [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:24:39 compute-0 nova_compute[254092]: 2025-11-25 16:24:39.329 254096 DEBUG nova.virt.libvirt.driver [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:24:39 compute-0 nova_compute[254092]: 2025-11-25 16:24:39.332 254096 INFO nova.virt.libvirt.driver [-] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Instance spawned successfully.
Nov 25 16:24:39 compute-0 nova_compute[254092]: 2025-11-25 16:24:39.332 254096 DEBUG nova.virt.libvirt.driver [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:24:39 compute-0 nova_compute[254092]: 2025-11-25 16:24:39.353 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:24:39 compute-0 nova_compute[254092]: 2025-11-25 16:24:39.356 254096 DEBUG nova.virt.libvirt.driver [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:24:39 compute-0 nova_compute[254092]: 2025-11-25 16:24:39.357 254096 DEBUG nova.virt.libvirt.driver [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:24:39 compute-0 nova_compute[254092]: 2025-11-25 16:24:39.357 254096 DEBUG nova.virt.libvirt.driver [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:24:39 compute-0 nova_compute[254092]: 2025-11-25 16:24:39.358 254096 DEBUG nova.virt.libvirt.driver [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:24:39 compute-0 nova_compute[254092]: 2025-11-25 16:24:39.358 254096 DEBUG nova.virt.libvirt.driver [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:24:39 compute-0 nova_compute[254092]: 2025-11-25 16:24:39.358 254096 DEBUG nova.virt.libvirt.driver [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:24:39 compute-0 nova_compute[254092]: 2025-11-25 16:24:39.362 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:24:39 compute-0 nova_compute[254092]: 2025-11-25 16:24:39.415 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:24:39 compute-0 nova_compute[254092]: 2025-11-25 16:24:39.415 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764087879.328936, a04ee12e-fa6a-4458-9472-b68930d7ba89 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:24:39 compute-0 nova_compute[254092]: 2025-11-25 16:24:39.416 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] VM Started (Lifecycle Event)
Nov 25 16:24:39 compute-0 nova_compute[254092]: 2025-11-25 16:24:39.428 254096 INFO nova.compute.manager [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Took 19.51 seconds to spawn the instance on the hypervisor.
Nov 25 16:24:39 compute-0 nova_compute[254092]: 2025-11-25 16:24:39.429 254096 DEBUG nova.compute.manager [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:24:39 compute-0 nova_compute[254092]: 2025-11-25 16:24:39.434 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:24:39 compute-0 nova_compute[254092]: 2025-11-25 16:24:39.436 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:24:39 compute-0 nova_compute[254092]: 2025-11-25 16:24:39.467 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:24:39 compute-0 nova_compute[254092]: 2025-11-25 16:24:39.509 254096 INFO nova.compute.manager [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Took 20.71 seconds to build instance.
Nov 25 16:24:39 compute-0 nova_compute[254092]: 2025-11-25 16:24:39.541 254096 DEBUG oslo_concurrency.lockutils [None req-93cc8ba6-56e3-4a8a-b22e-9daccdcce9f8 d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lock "a04ee12e-fa6a-4458-9472-b68930d7ba89" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:24:39 compute-0 nova_compute[254092]: 2025-11-25 16:24:39.829 254096 DEBUG nova.network.neutron [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:24:39 compute-0 nova_compute[254092]: 2025-11-25 16:24:39.843 254096 DEBUG oslo_concurrency.lockutils [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Releasing lock "refresh_cache-3bb45e40-37dd-4cae-a966-ecbd9260eb35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:24:39 compute-0 nova_compute[254092]: 2025-11-25 16:24:39.843 254096 DEBUG nova.compute.manager [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:24:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:24:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:24:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:24:40
Nov 25 16:24:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:24:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:24:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['.mgr', 'backups', 'cephfs.cephfs.meta', 'volumes', 'vms', '.rgw.root', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'images']
Nov 25 16:24:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:24:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:24:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:24:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:24:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:24:40 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Deactivated successfully.
Nov 25 16:24:40 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Consumed 5.866s CPU time.
Nov 25 16:24:40 compute-0 systemd-machined[216343]: Machine qemu-1-instance-00000002 terminated.
Nov 25 16:24:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:24:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:24:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:24:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:24:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:24:40 compute-0 nova_compute[254092]: 2025-11-25 16:24:40.257 254096 INFO nova.virt.libvirt.driver [-] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Instance destroyed successfully.
Nov 25 16:24:40 compute-0 nova_compute[254092]: 2025-11-25 16:24:40.258 254096 DEBUG nova.objects.instance [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Lazy-loading 'resources' on Instance uuid 3bb45e40-37dd-4cae-a966-ecbd9260eb35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:24:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:24:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:24:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:24:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:24:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:24:40 compute-0 nova_compute[254092]: 2025-11-25 16:24:40.567 254096 INFO nova.virt.libvirt.driver [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Deleting instance files /var/lib/nova/instances/3bb45e40-37dd-4cae-a966-ecbd9260eb35_del
Nov 25 16:24:40 compute-0 nova_compute[254092]: 2025-11-25 16:24:40.567 254096 INFO nova.virt.libvirt.driver [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Deletion of /var/lib/nova/instances/3bb45e40-37dd-4cae-a966-ecbd9260eb35_del complete
Nov 25 16:24:40 compute-0 nova_compute[254092]: 2025-11-25 16:24:40.663 254096 DEBUG nova.virt.libvirt.host [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Nov 25 16:24:40 compute-0 nova_compute[254092]: 2025-11-25 16:24:40.664 254096 INFO nova.virt.libvirt.host [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] UEFI support detected
Nov 25 16:24:40 compute-0 nova_compute[254092]: 2025-11-25 16:24:40.666 254096 INFO nova.compute.manager [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Took 0.82 seconds to destroy the instance on the hypervisor.
Nov 25 16:24:40 compute-0 nova_compute[254092]: 2025-11-25 16:24:40.666 254096 DEBUG oslo.service.loopingcall [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:24:40 compute-0 nova_compute[254092]: 2025-11-25 16:24:40.667 254096 DEBUG nova.compute.manager [-] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:24:40 compute-0 nova_compute[254092]: 2025-11-25 16:24:40.667 254096 DEBUG nova.network.neutron [-] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:24:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1083: 321 pgs: 321 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 136 op/s
Nov 25 16:24:40 compute-0 nova_compute[254092]: 2025-11-25 16:24:40.963 254096 DEBUG nova.network.neutron [-] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:24:40 compute-0 nova_compute[254092]: 2025-11-25 16:24:40.974 254096 DEBUG nova.network.neutron [-] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:24:40 compute-0 nova_compute[254092]: 2025-11-25 16:24:40.986 254096 INFO nova.compute.manager [-] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Took 0.32 seconds to deallocate network for instance.
Nov 25 16:24:41 compute-0 nova_compute[254092]: 2025-11-25 16:24:41.032 254096 DEBUG oslo_concurrency.lockutils [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:24:41 compute-0 nova_compute[254092]: 2025-11-25 16:24:41.032 254096 DEBUG oslo_concurrency.lockutils [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:24:41 compute-0 nova_compute[254092]: 2025-11-25 16:24:41.103 254096 DEBUG oslo_concurrency.processutils [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:24:41 compute-0 nova_compute[254092]: 2025-11-25 16:24:41.402 254096 DEBUG oslo_concurrency.lockutils [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] Acquiring lock "a04ee12e-fa6a-4458-9472-b68930d7ba89" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:24:41 compute-0 nova_compute[254092]: 2025-11-25 16:24:41.403 254096 DEBUG oslo_concurrency.lockutils [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] Lock "a04ee12e-fa6a-4458-9472-b68930d7ba89" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:24:41 compute-0 nova_compute[254092]: 2025-11-25 16:24:41.403 254096 DEBUG oslo_concurrency.lockutils [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] Acquiring lock "a04ee12e-fa6a-4458-9472-b68930d7ba89-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:24:41 compute-0 nova_compute[254092]: 2025-11-25 16:24:41.403 254096 DEBUG oslo_concurrency.lockutils [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] Lock "a04ee12e-fa6a-4458-9472-b68930d7ba89-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:24:41 compute-0 nova_compute[254092]: 2025-11-25 16:24:41.403 254096 DEBUG oslo_concurrency.lockutils [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] Lock "a04ee12e-fa6a-4458-9472-b68930d7ba89-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:24:41 compute-0 nova_compute[254092]: 2025-11-25 16:24:41.405 254096 INFO nova.compute.manager [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Terminating instance
Nov 25 16:24:41 compute-0 nova_compute[254092]: 2025-11-25 16:24:41.405 254096 DEBUG oslo_concurrency.lockutils [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] Acquiring lock "refresh_cache-a04ee12e-fa6a-4458-9472-b68930d7ba89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:24:41 compute-0 nova_compute[254092]: 2025-11-25 16:24:41.405 254096 DEBUG oslo_concurrency.lockutils [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] Acquired lock "refresh_cache-a04ee12e-fa6a-4458-9472-b68930d7ba89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:24:41 compute-0 nova_compute[254092]: 2025-11-25 16:24:41.406 254096 DEBUG nova.network.neutron [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:24:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:24:41 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4051828136' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:24:41 compute-0 nova_compute[254092]: 2025-11-25 16:24:41.586 254096 DEBUG oslo_concurrency.processutils [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:24:41 compute-0 nova_compute[254092]: 2025-11-25 16:24:41.591 254096 DEBUG nova.compute.provider_tree [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:24:41 compute-0 nova_compute[254092]: 2025-11-25 16:24:41.606 254096 DEBUG nova.scheduler.client.report [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:24:41 compute-0 nova_compute[254092]: 2025-11-25 16:24:41.627 254096 DEBUG oslo_concurrency.lockutils [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:24:41 compute-0 nova_compute[254092]: 2025-11-25 16:24:41.650 254096 INFO nova.scheduler.client.report [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Deleted allocations for instance 3bb45e40-37dd-4cae-a966-ecbd9260eb35
Nov 25 16:24:41 compute-0 nova_compute[254092]: 2025-11-25 16:24:41.664 254096 DEBUG nova.network.neutron [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:24:41 compute-0 nova_compute[254092]: 2025-11-25 16:24:41.721 254096 DEBUG oslo_concurrency.lockutils [None req-376cf988-1759-4fb3-a5e8-0ca2550fbd71 cb43f8dd9fea4dfb9a472f26cde44200 fdc770ac85b7451fbb50764fcc8bf038 - - default default] Lock "3bb45e40-37dd-4cae-a966-ecbd9260eb35" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.817s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:24:41 compute-0 ceph-mon[74985]: pgmap v1083: 321 pgs: 321 active+clean; 134 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 136 op/s
Nov 25 16:24:41 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4051828136' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:24:41 compute-0 nova_compute[254092]: 2025-11-25 16:24:41.953 254096 DEBUG nova.network.neutron [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:24:41 compute-0 nova_compute[254092]: 2025-11-25 16:24:41.971 254096 DEBUG oslo_concurrency.lockutils [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] Releasing lock "refresh_cache-a04ee12e-fa6a-4458-9472-b68930d7ba89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:24:41 compute-0 nova_compute[254092]: 2025-11-25 16:24:41.971 254096 DEBUG nova.compute.manager [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:24:42 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000001.scope: Deactivated successfully.
Nov 25 16:24:42 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000001.scope: Consumed 4.036s CPU time.
Nov 25 16:24:42 compute-0 systemd-machined[216343]: Machine qemu-2-instance-00000001 terminated.
Nov 25 16:24:42 compute-0 nova_compute[254092]: 2025-11-25 16:24:42.193 254096 INFO nova.virt.libvirt.driver [-] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Instance destroyed successfully.
Nov 25 16:24:42 compute-0 nova_compute[254092]: 2025-11-25 16:24:42.194 254096 DEBUG nova.objects.instance [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] Lazy-loading 'resources' on Instance uuid a04ee12e-fa6a-4458-9472-b68930d7ba89 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:24:42 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:24:42 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Nov 25 16:24:42 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Nov 25 16:24:42 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Nov 25 16:24:42 compute-0 nova_compute[254092]: 2025-11-25 16:24:42.646 254096 INFO nova.virt.libvirt.driver [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Deleting instance files /var/lib/nova/instances/a04ee12e-fa6a-4458-9472-b68930d7ba89_del
Nov 25 16:24:42 compute-0 nova_compute[254092]: 2025-11-25 16:24:42.647 254096 INFO nova.virt.libvirt.driver [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Deletion of /var/lib/nova/instances/a04ee12e-fa6a-4458-9472-b68930d7ba89_del complete
Nov 25 16:24:42 compute-0 nova_compute[254092]: 2025-11-25 16:24:42.729 254096 INFO nova.compute.manager [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Took 0.76 seconds to destroy the instance on the hypervisor.
Nov 25 16:24:42 compute-0 nova_compute[254092]: 2025-11-25 16:24:42.730 254096 DEBUG oslo.service.loopingcall [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:24:42 compute-0 nova_compute[254092]: 2025-11-25 16:24:42.731 254096 DEBUG nova.compute.manager [-] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:24:42 compute-0 nova_compute[254092]: 2025-11-25 16:24:42.731 254096 DEBUG nova.network.neutron [-] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:24:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1085: 321 pgs: 321 active+clean; 122 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 204 op/s
Nov 25 16:24:43 compute-0 nova_compute[254092]: 2025-11-25 16:24:43.045 254096 DEBUG nova.network.neutron [-] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:24:43 compute-0 nova_compute[254092]: 2025-11-25 16:24:43.058 254096 DEBUG nova.network.neutron [-] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:24:43 compute-0 nova_compute[254092]: 2025-11-25 16:24:43.079 254096 INFO nova.compute.manager [-] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Took 0.35 seconds to deallocate network for instance.
Nov 25 16:24:43 compute-0 nova_compute[254092]: 2025-11-25 16:24:43.122 254096 DEBUG oslo_concurrency.lockutils [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:24:43 compute-0 nova_compute[254092]: 2025-11-25 16:24:43.122 254096 DEBUG oslo_concurrency.lockutils [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:24:43 compute-0 nova_compute[254092]: 2025-11-25 16:24:43.167 254096 DEBUG oslo_concurrency.processutils [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:24:43 compute-0 ceph-mon[74985]: osdmap e133: 3 total, 3 up, 3 in
Nov 25 16:24:43 compute-0 ceph-mon[74985]: pgmap v1085: 321 pgs: 321 active+clean; 122 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 204 op/s
Nov 25 16:24:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:24:43 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3814541778' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:24:43 compute-0 nova_compute[254092]: 2025-11-25 16:24:43.664 254096 DEBUG oslo_concurrency.processutils [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:24:43 compute-0 nova_compute[254092]: 2025-11-25 16:24:43.670 254096 DEBUG nova.compute.provider_tree [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:24:43 compute-0 nova_compute[254092]: 2025-11-25 16:24:43.686 254096 DEBUG nova.scheduler.client.report [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:24:43 compute-0 nova_compute[254092]: 2025-11-25 16:24:43.710 254096 DEBUG oslo_concurrency.lockutils [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:24:43 compute-0 nova_compute[254092]: 2025-11-25 16:24:43.736 254096 INFO nova.scheduler.client.report [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] Deleted allocations for instance a04ee12e-fa6a-4458-9472-b68930d7ba89
Nov 25 16:24:43 compute-0 nova_compute[254092]: 2025-11-25 16:24:43.968 254096 DEBUG oslo_concurrency.lockutils [None req-193e74ab-b6b8-487a-8876-4c84aa2b7ea0 1aa3ad5e797745f995cc58cae812fa94 35cb4a8f46914042b989742fb4a09457 - - default default] Lock "a04ee12e-fa6a-4458-9472-b68930d7ba89" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:24:44 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3814541778' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:24:44 compute-0 podman[266826]: 2025-11-25 16:24:44.653518811 +0000 UTC m=+0.066406539 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 25 16:24:44 compute-0 podman[266825]: 2025-11-25 16:24:44.680588227 +0000 UTC m=+0.093819324 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:24:44 compute-0 podman[266827]: 2025-11-25 16:24:44.695627537 +0000 UTC m=+0.104342141 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 16:24:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1086: 321 pgs: 321 active+clean; 122 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 204 op/s
Nov 25 16:24:45 compute-0 ceph-mon[74985]: pgmap v1086: 321 pgs: 321 active+clean; 122 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 204 op/s
Nov 25 16:24:46 compute-0 nova_compute[254092]: 2025-11-25 16:24:46.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:24:46 compute-0 nova_compute[254092]: 2025-11-25 16:24:46.520 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:24:46 compute-0 nova_compute[254092]: 2025-11-25 16:24:46.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:24:46 compute-0 nova_compute[254092]: 2025-11-25 16:24:46.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:24:46 compute-0 nova_compute[254092]: 2025-11-25 16:24:46.521 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 16:24:46 compute-0 nova_compute[254092]: 2025-11-25 16:24:46.521 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:24:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1087: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 17 KiB/s wr, 204 op/s
Nov 25 16:24:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:24:46 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1832746166' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:24:46 compute-0 nova_compute[254092]: 2025-11-25 16:24:46.959 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:24:47 compute-0 nova_compute[254092]: 2025-11-25 16:24:47.151 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:24:47 compute-0 nova_compute[254092]: 2025-11-25 16:24:47.154 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5063MB free_disk=59.950958251953125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 16:24:47 compute-0 nova_compute[254092]: 2025-11-25 16:24:47.154 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:24:47 compute-0 nova_compute[254092]: 2025-11-25 16:24:47.154 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:24:47 compute-0 nova_compute[254092]: 2025-11-25 16:24:47.238 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 16:24:47 compute-0 nova_compute[254092]: 2025-11-25 16:24:47.239 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 16:24:47 compute-0 nova_compute[254092]: 2025-11-25 16:24:47.258 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:24:47 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:24:47 compute-0 ceph-mon[74985]: pgmap v1087: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 17 KiB/s wr, 204 op/s
Nov 25 16:24:47 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1832746166' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:24:47 compute-0 nova_compute[254092]: 2025-11-25 16:24:47.608 254096 DEBUG oslo_concurrency.lockutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Acquiring lock "d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:24:47 compute-0 nova_compute[254092]: 2025-11-25 16:24:47.608 254096 DEBUG oslo_concurrency.lockutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lock "d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:24:47 compute-0 nova_compute[254092]: 2025-11-25 16:24:47.630 254096 DEBUG nova.compute.manager [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:24:47 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:24:47 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/824501766' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:24:47 compute-0 nova_compute[254092]: 2025-11-25 16:24:47.707 254096 DEBUG oslo_concurrency.lockutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:24:47 compute-0 nova_compute[254092]: 2025-11-25 16:24:47.711 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:24:47 compute-0 nova_compute[254092]: 2025-11-25 16:24:47.716 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:24:47 compute-0 nova_compute[254092]: 2025-11-25 16:24:47.729 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:24:47 compute-0 sshd-session[266888]: Connection closed by authenticating user root 171.244.51.45 port 47664 [preauth]
Nov 25 16:24:47 compute-0 nova_compute[254092]: 2025-11-25 16:24:47.750 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 16:24:47 compute-0 nova_compute[254092]: 2025-11-25 16:24:47.751 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:24:47 compute-0 nova_compute[254092]: 2025-11-25 16:24:47.752 254096 DEBUG oslo_concurrency.lockutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.045s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:24:47 compute-0 nova_compute[254092]: 2025-11-25 16:24:47.758 254096 DEBUG nova.virt.hardware [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:24:47 compute-0 nova_compute[254092]: 2025-11-25 16:24:47.759 254096 INFO nova.compute.claims [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:24:47 compute-0 nova_compute[254092]: 2025-11-25 16:24:47.895 254096 DEBUG oslo_concurrency.processutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:24:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:24:48 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2684486207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:24:48 compute-0 nova_compute[254092]: 2025-11-25 16:24:48.333 254096 DEBUG oslo_concurrency.processutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:24:48 compute-0 nova_compute[254092]: 2025-11-25 16:24:48.340 254096 DEBUG nova.compute.provider_tree [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:24:48 compute-0 nova_compute[254092]: 2025-11-25 16:24:48.353 254096 DEBUG nova.scheduler.client.report [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:24:48 compute-0 nova_compute[254092]: 2025-11-25 16:24:48.378 254096 DEBUG oslo_concurrency.lockutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:24:48 compute-0 nova_compute[254092]: 2025-11-25 16:24:48.379 254096 DEBUG nova.compute.manager [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:24:48 compute-0 nova_compute[254092]: 2025-11-25 16:24:48.439 254096 DEBUG nova.compute.manager [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:24:48 compute-0 nova_compute[254092]: 2025-11-25 16:24:48.440 254096 DEBUG nova.network.neutron [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:24:48 compute-0 nova_compute[254092]: 2025-11-25 16:24:48.463 254096 INFO nova.virt.libvirt.driver [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:24:48 compute-0 nova_compute[254092]: 2025-11-25 16:24:48.519 254096 DEBUG nova.compute.manager [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:24:48 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/824501766' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:24:48 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2684486207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:24:48 compute-0 nova_compute[254092]: 2025-11-25 16:24:48.611 254096 DEBUG nova.compute.manager [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:24:48 compute-0 nova_compute[254092]: 2025-11-25 16:24:48.613 254096 DEBUG nova.virt.libvirt.driver [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:24:48 compute-0 nova_compute[254092]: 2025-11-25 16:24:48.613 254096 INFO nova.virt.libvirt.driver [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Creating image(s)
Nov 25 16:24:48 compute-0 nova_compute[254092]: 2025-11-25 16:24:48.634 254096 DEBUG nova.storage.rbd_utils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] rbd image d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:24:48 compute-0 nova_compute[254092]: 2025-11-25 16:24:48.657 254096 DEBUG nova.storage.rbd_utils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] rbd image d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:24:48 compute-0 nova_compute[254092]: 2025-11-25 16:24:48.678 254096 DEBUG nova.storage.rbd_utils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] rbd image d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:24:48 compute-0 nova_compute[254092]: 2025-11-25 16:24:48.682 254096 DEBUG oslo_concurrency.processutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:24:48 compute-0 nova_compute[254092]: 2025-11-25 16:24:48.769 254096 DEBUG oslo_concurrency.processutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:24:48 compute-0 nova_compute[254092]: 2025-11-25 16:24:48.770 254096 DEBUG oslo_concurrency.lockutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:24:48 compute-0 nova_compute[254092]: 2025-11-25 16:24:48.770 254096 DEBUG oslo_concurrency.lockutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:24:48 compute-0 nova_compute[254092]: 2025-11-25 16:24:48.771 254096 DEBUG oslo_concurrency.lockutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:24:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1088: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 17 KiB/s wr, 204 op/s
Nov 25 16:24:48 compute-0 nova_compute[254092]: 2025-11-25 16:24:48.793 254096 DEBUG nova.storage.rbd_utils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] rbd image d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:24:48 compute-0 nova_compute[254092]: 2025-11-25 16:24:48.797 254096 DEBUG oslo_concurrency.processutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:24:49 compute-0 nova_compute[254092]: 2025-11-25 16:24:49.545 254096 DEBUG oslo_concurrency.processutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.748s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:24:49 compute-0 ceph-mon[74985]: pgmap v1088: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 17 KiB/s wr, 204 op/s
Nov 25 16:24:49 compute-0 nova_compute[254092]: 2025-11-25 16:24:49.597 254096 DEBUG nova.storage.rbd_utils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] resizing rbd image d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:24:49 compute-0 nova_compute[254092]: 2025-11-25 16:24:49.753 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:24:49 compute-0 nova_compute[254092]: 2025-11-25 16:24:49.754 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:24:49 compute-0 nova_compute[254092]: 2025-11-25 16:24:49.793 254096 DEBUG nova.objects.instance [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lazy-loading 'migration_context' on Instance uuid d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:24:49 compute-0 nova_compute[254092]: 2025-11-25 16:24:49.810 254096 DEBUG nova.virt.libvirt.driver [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:24:49 compute-0 nova_compute[254092]: 2025-11-25 16:24:49.810 254096 DEBUG nova.virt.libvirt.driver [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Ensure instance console log exists: /var/lib/nova/instances/d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:24:49 compute-0 nova_compute[254092]: 2025-11-25 16:24:49.811 254096 DEBUG oslo_concurrency.lockutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:24:49 compute-0 nova_compute[254092]: 2025-11-25 16:24:49.811 254096 DEBUG oslo_concurrency.lockutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:24:49 compute-0 nova_compute[254092]: 2025-11-25 16:24:49.811 254096 DEBUG oslo_concurrency.lockutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:24:50 compute-0 nova_compute[254092]: 2025-11-25 16:24:50.023 254096 DEBUG nova.network.neutron [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 25 16:24:50 compute-0 nova_compute[254092]: 2025-11-25 16:24:50.024 254096 DEBUG nova.compute.manager [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:24:50 compute-0 nova_compute[254092]: 2025-11-25 16:24:50.025 254096 DEBUG nova.virt.libvirt.driver [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:24:50 compute-0 nova_compute[254092]: 2025-11-25 16:24:50.031 254096 WARNING nova.virt.libvirt.driver [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:24:50 compute-0 nova_compute[254092]: 2025-11-25 16:24:50.042 254096 DEBUG nova.virt.libvirt.host [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:24:50 compute-0 nova_compute[254092]: 2025-11-25 16:24:50.042 254096 DEBUG nova.virt.libvirt.host [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:24:50 compute-0 nova_compute[254092]: 2025-11-25 16:24:50.046 254096 DEBUG nova.virt.libvirt.host [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:24:50 compute-0 nova_compute[254092]: 2025-11-25 16:24:50.046 254096 DEBUG nova.virt.libvirt.host [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:24:50 compute-0 nova_compute[254092]: 2025-11-25 16:24:50.047 254096 DEBUG nova.virt.libvirt.driver [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:24:50 compute-0 nova_compute[254092]: 2025-11-25 16:24:50.047 254096 DEBUG nova.virt.hardware [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:24:50 compute-0 nova_compute[254092]: 2025-11-25 16:24:50.048 254096 DEBUG nova.virt.hardware [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:24:50 compute-0 nova_compute[254092]: 2025-11-25 16:24:50.049 254096 DEBUG nova.virt.hardware [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:24:50 compute-0 nova_compute[254092]: 2025-11-25 16:24:50.049 254096 DEBUG nova.virt.hardware [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:24:50 compute-0 nova_compute[254092]: 2025-11-25 16:24:50.049 254096 DEBUG nova.virt.hardware [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:24:50 compute-0 nova_compute[254092]: 2025-11-25 16:24:50.050 254096 DEBUG nova.virt.hardware [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:24:50 compute-0 nova_compute[254092]: 2025-11-25 16:24:50.050 254096 DEBUG nova.virt.hardware [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:24:50 compute-0 nova_compute[254092]: 2025-11-25 16:24:50.051 254096 DEBUG nova.virt.hardware [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:24:50 compute-0 nova_compute[254092]: 2025-11-25 16:24:50.051 254096 DEBUG nova.virt.hardware [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:24:50 compute-0 nova_compute[254092]: 2025-11-25 16:24:50.051 254096 DEBUG nova.virt.hardware [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:24:50 compute-0 nova_compute[254092]: 2025-11-25 16:24:50.052 254096 DEBUG nova.virt.hardware [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:24:50 compute-0 nova_compute[254092]: 2025-11-25 16:24:50.056 254096 DEBUG oslo_concurrency.processutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:24:50 compute-0 nova_compute[254092]: 2025-11-25 16:24:50.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:24:50 compute-0 nova_compute[254092]: 2025-11-25 16:24:50.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:24:50 compute-0 nova_compute[254092]: 2025-11-25 16:24:50.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:24:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:24:50 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/59151704' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:24:50 compute-0 nova_compute[254092]: 2025-11-25 16:24:50.601 254096 DEBUG oslo_concurrency.processutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:24:50 compute-0 nova_compute[254092]: 2025-11-25 16:24:50.630 254096 DEBUG nova.storage.rbd_utils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] rbd image d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:24:50 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/59151704' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:24:50 compute-0 nova_compute[254092]: 2025-11-25 16:24:50.634 254096 DEBUG oslo_concurrency.processutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:24:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1089: 321 pgs: 321 active+clean; 76 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.3 MiB/s wr, 172 op/s
Nov 25 16:24:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:24:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:24:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:24:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:24:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00021105755925636519 of space, bias 1.0, pg target 0.06331726777690956 quantized to 32 (current 32)
Nov 25 16:24:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:24:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:24:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:24:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:24:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:24:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 25 16:24:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:24:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 16:24:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:24:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:24:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:24:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 16:24:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:24:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 16:24:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:24:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:24:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:24:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 16:24:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:24:51 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1875928217' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:24:51 compute-0 nova_compute[254092]: 2025-11-25 16:24:51.090 254096 DEBUG oslo_concurrency.processutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:24:51 compute-0 nova_compute[254092]: 2025-11-25 16:24:51.093 254096 DEBUG nova.objects.instance [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lazy-loading 'pci_devices' on Instance uuid d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:24:51 compute-0 nova_compute[254092]: 2025-11-25 16:24:51.112 254096 DEBUG nova.virt.libvirt.driver [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:24:51 compute-0 nova_compute[254092]:   <uuid>d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce</uuid>
Nov 25 16:24:51 compute-0 nova_compute[254092]:   <name>instance-00000003</name>
Nov 25 16:24:51 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:24:51 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:24:51 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:24:51 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:       <nova:name>tempest-DeleteServersAdminTestJSON-server-1684340739</nova:name>
Nov 25 16:24:51 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:24:50</nova:creationTime>
Nov 25 16:24:51 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:24:51 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:24:51 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:24:51 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:24:51 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:24:51 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:24:51 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:24:51 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:24:51 compute-0 nova_compute[254092]:         <nova:user uuid="d72f101cd6d049f694a1d30145a4ed24">tempest-DeleteServersAdminTestJSON-1062463304-project-member</nova:user>
Nov 25 16:24:51 compute-0 nova_compute[254092]:         <nova:project uuid="185de66120c0404eb338d72909c776df">tempest-DeleteServersAdminTestJSON-1062463304</nova:project>
Nov 25 16:24:51 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:24:51 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:       <nova:ports/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:24:51 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:24:51 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:24:51 compute-0 nova_compute[254092]:     <system>
Nov 25 16:24:51 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:24:51 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:24:51 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:24:51 compute-0 nova_compute[254092]:       <entry name="serial">d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce</entry>
Nov 25 16:24:51 compute-0 nova_compute[254092]:       <entry name="uuid">d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce</entry>
Nov 25 16:24:51 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     </system>
Nov 25 16:24:51 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:24:51 compute-0 nova_compute[254092]:   <os>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:   </os>
Nov 25 16:24:51 compute-0 nova_compute[254092]:   <features>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:   </features>
Nov 25 16:24:51 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:24:51 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:24:51 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:24:51 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:24:51 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:24:51 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce_disk">
Nov 25 16:24:51 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:       </source>
Nov 25 16:24:51 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:24:51 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:24:51 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:24:51 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce_disk.config">
Nov 25 16:24:51 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:       </source>
Nov 25 16:24:51 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:24:51 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:24:51 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:24:51 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce/console.log" append="off"/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     <video>
Nov 25 16:24:51 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     </video>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:24:51 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:24:51 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:24:51 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:24:51 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:24:51 compute-0 nova_compute[254092]: </domain>
Nov 25 16:24:51 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:24:51 compute-0 nova_compute[254092]: 2025-11-25 16:24:51.214 254096 DEBUG nova.virt.libvirt.driver [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:24:51 compute-0 nova_compute[254092]: 2025-11-25 16:24:51.215 254096 DEBUG nova.virt.libvirt.driver [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:24:51 compute-0 nova_compute[254092]: 2025-11-25 16:24:51.216 254096 INFO nova.virt.libvirt.driver [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Using config drive
Nov 25 16:24:51 compute-0 nova_compute[254092]: 2025-11-25 16:24:51.238 254096 DEBUG nova.storage.rbd_utils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] rbd image d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:24:51 compute-0 nova_compute[254092]: 2025-11-25 16:24:51.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:24:51 compute-0 nova_compute[254092]: 2025-11-25 16:24:51.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 16:24:51 compute-0 ceph-mon[74985]: pgmap v1089: 321 pgs: 321 active+clean; 76 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.3 MiB/s wr, 172 op/s
Nov 25 16:24:51 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1875928217' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:24:51 compute-0 nova_compute[254092]: 2025-11-25 16:24:51.738 254096 INFO nova.virt.libvirt.driver [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Creating config drive at /var/lib/nova/instances/d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce/disk.config
Nov 25 16:24:51 compute-0 nova_compute[254092]: 2025-11-25 16:24:51.748 254096 DEBUG oslo_concurrency.processutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxpnwewa6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:24:51 compute-0 nova_compute[254092]: 2025-11-25 16:24:51.883 254096 DEBUG oslo_concurrency.processutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxpnwewa6" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:24:51 compute-0 nova_compute[254092]: 2025-11-25 16:24:51.915 254096 DEBUG nova.storage.rbd_utils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] rbd image d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:24:51 compute-0 nova_compute[254092]: 2025-11-25 16:24:51.920 254096 DEBUG oslo_concurrency.processutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce/disk.config d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:24:52 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:24:52 compute-0 nova_compute[254092]: 2025-11-25 16:24:52.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:24:52 compute-0 nova_compute[254092]: 2025-11-25 16:24:52.498 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 16:24:52 compute-0 nova_compute[254092]: 2025-11-25 16:24:52.498 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 16:24:52 compute-0 nova_compute[254092]: 2025-11-25 16:24:52.524 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 25 16:24:52 compute-0 nova_compute[254092]: 2025-11-25 16:24:52.524 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 16:24:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1090: 321 pgs: 321 active+clean; 88 MiB data, 209 MiB used, 60 GiB / 60 GiB avail; 566 KiB/s rd, 2.1 MiB/s wr, 108 op/s
Nov 25 16:24:52 compute-0 ceph-mon[74985]: pgmap v1090: 321 pgs: 321 active+clean; 88 MiB data, 209 MiB used, 60 GiB / 60 GiB avail; 566 KiB/s rd, 2.1 MiB/s wr, 108 op/s
Nov 25 16:24:53 compute-0 nova_compute[254092]: 2025-11-25 16:24:53.078 254096 DEBUG oslo_concurrency.processutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce/disk.config d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:24:53 compute-0 nova_compute[254092]: 2025-11-25 16:24:53.079 254096 INFO nova.virt.libvirt.driver [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Deleting local config drive /var/lib/nova/instances/d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce/disk.config because it was imported into RBD.
Nov 25 16:24:53 compute-0 systemd-machined[216343]: New machine qemu-3-instance-00000003.
Nov 25 16:24:53 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Nov 25 16:24:53 compute-0 nova_compute[254092]: 2025-11-25 16:24:53.663 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764087893.662957, d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:24:53 compute-0 nova_compute[254092]: 2025-11-25 16:24:53.664 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] VM Resumed (Lifecycle Event)
Nov 25 16:24:53 compute-0 nova_compute[254092]: 2025-11-25 16:24:53.667 254096 DEBUG nova.compute.manager [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:24:53 compute-0 nova_compute[254092]: 2025-11-25 16:24:53.668 254096 DEBUG nova.virt.libvirt.driver [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:24:53 compute-0 nova_compute[254092]: 2025-11-25 16:24:53.672 254096 INFO nova.virt.libvirt.driver [-] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Instance spawned successfully.
Nov 25 16:24:53 compute-0 nova_compute[254092]: 2025-11-25 16:24:53.673 254096 DEBUG nova.virt.libvirt.driver [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:24:53 compute-0 nova_compute[254092]: 2025-11-25 16:24:53.693 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:24:53 compute-0 nova_compute[254092]: 2025-11-25 16:24:53.698 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:24:53 compute-0 nova_compute[254092]: 2025-11-25 16:24:53.702 254096 DEBUG nova.virt.libvirt.driver [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:24:53 compute-0 nova_compute[254092]: 2025-11-25 16:24:53.702 254096 DEBUG nova.virt.libvirt.driver [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:24:53 compute-0 nova_compute[254092]: 2025-11-25 16:24:53.703 254096 DEBUG nova.virt.libvirt.driver [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:24:53 compute-0 nova_compute[254092]: 2025-11-25 16:24:53.703 254096 DEBUG nova.virt.libvirt.driver [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:24:53 compute-0 nova_compute[254092]: 2025-11-25 16:24:53.703 254096 DEBUG nova.virt.libvirt.driver [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:24:53 compute-0 nova_compute[254092]: 2025-11-25 16:24:53.704 254096 DEBUG nova.virt.libvirt.driver [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:24:53 compute-0 nova_compute[254092]: 2025-11-25 16:24:53.726 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:24:53 compute-0 nova_compute[254092]: 2025-11-25 16:24:53.726 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764087893.6641448, d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:24:53 compute-0 nova_compute[254092]: 2025-11-25 16:24:53.727 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] VM Started (Lifecycle Event)
Nov 25 16:24:53 compute-0 nova_compute[254092]: 2025-11-25 16:24:53.761 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:24:53 compute-0 nova_compute[254092]: 2025-11-25 16:24:53.765 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:24:53 compute-0 nova_compute[254092]: 2025-11-25 16:24:53.789 254096 INFO nova.compute.manager [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Took 5.18 seconds to spawn the instance on the hypervisor.
Nov 25 16:24:53 compute-0 nova_compute[254092]: 2025-11-25 16:24:53.790 254096 DEBUG nova.compute.manager [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:24:53 compute-0 nova_compute[254092]: 2025-11-25 16:24:53.798 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:24:53 compute-0 nova_compute[254092]: 2025-11-25 16:24:53.869 254096 INFO nova.compute.manager [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Took 6.19 seconds to build instance.
Nov 25 16:24:53 compute-0 nova_compute[254092]: 2025-11-25 16:24:53.884 254096 DEBUG oslo_concurrency.lockutils [None req-a0deedec-665e-4a48-b5de-bb880d28669e d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lock "d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.276s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:24:54 compute-0 nova_compute[254092]: 2025-11-25 16:24:54.123 254096 DEBUG oslo_concurrency.lockutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Acquiring lock "71cf0ae0-6191-4b64-9a81-a955d807ceb4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:24:54 compute-0 nova_compute[254092]: 2025-11-25 16:24:54.124 254096 DEBUG oslo_concurrency.lockutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "71cf0ae0-6191-4b64-9a81-a955d807ceb4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:24:54 compute-0 nova_compute[254092]: 2025-11-25 16:24:54.140 254096 DEBUG nova.compute.manager [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:24:54 compute-0 nova_compute[254092]: 2025-11-25 16:24:54.215 254096 DEBUG oslo_concurrency.lockutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:24:54 compute-0 nova_compute[254092]: 2025-11-25 16:24:54.215 254096 DEBUG oslo_concurrency.lockutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:24:54 compute-0 nova_compute[254092]: 2025-11-25 16:24:54.221 254096 DEBUG nova.virt.hardware [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:24:54 compute-0 nova_compute[254092]: 2025-11-25 16:24:54.221 254096 INFO nova.compute.claims [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:24:54 compute-0 nova_compute[254092]: 2025-11-25 16:24:54.347 254096 DEBUG oslo_concurrency.processutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:24:54 compute-0 nova_compute[254092]: 2025-11-25 16:24:54.518 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:24:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1091: 321 pgs: 321 active+clean; 88 MiB data, 209 MiB used, 60 GiB / 60 GiB avail; 488 KiB/s rd, 1.8 MiB/s wr, 93 op/s
Nov 25 16:24:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:24:54 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/438810855' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:24:54 compute-0 nova_compute[254092]: 2025-11-25 16:24:54.829 254096 DEBUG oslo_concurrency.processutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:24:54 compute-0 nova_compute[254092]: 2025-11-25 16:24:54.835 254096 DEBUG nova.compute.provider_tree [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:24:54 compute-0 nova_compute[254092]: 2025-11-25 16:24:54.850 254096 DEBUG nova.scheduler.client.report [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:24:54 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/438810855' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:24:54 compute-0 nova_compute[254092]: 2025-11-25 16:24:54.873 254096 DEBUG oslo_concurrency.lockutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:24:54 compute-0 nova_compute[254092]: 2025-11-25 16:24:54.873 254096 DEBUG nova.compute.manager [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:24:54 compute-0 nova_compute[254092]: 2025-11-25 16:24:54.913 254096 DEBUG nova.compute.manager [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:24:54 compute-0 nova_compute[254092]: 2025-11-25 16:24:54.914 254096 DEBUG nova.network.neutron [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:24:54 compute-0 nova_compute[254092]: 2025-11-25 16:24:54.931 254096 INFO nova.virt.libvirt.driver [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:24:54 compute-0 nova_compute[254092]: 2025-11-25 16:24:54.952 254096 DEBUG nova.compute.manager [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:24:55 compute-0 nova_compute[254092]: 2025-11-25 16:24:55.039 254096 DEBUG nova.compute.manager [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:24:55 compute-0 nova_compute[254092]: 2025-11-25 16:24:55.040 254096 DEBUG nova.virt.libvirt.driver [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:24:55 compute-0 nova_compute[254092]: 2025-11-25 16:24:55.040 254096 INFO nova.virt.libvirt.driver [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Creating image(s)
Nov 25 16:24:55 compute-0 nova_compute[254092]: 2025-11-25 16:24:55.068 254096 DEBUG nova.storage.rbd_utils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] rbd image 71cf0ae0-6191-4b64-9a81-a955d807ceb4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:24:55 compute-0 nova_compute[254092]: 2025-11-25 16:24:55.098 254096 DEBUG nova.storage.rbd_utils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] rbd image 71cf0ae0-6191-4b64-9a81-a955d807ceb4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:24:55 compute-0 nova_compute[254092]: 2025-11-25 16:24:55.126 254096 DEBUG nova.storage.rbd_utils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] rbd image 71cf0ae0-6191-4b64-9a81-a955d807ceb4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:24:55 compute-0 nova_compute[254092]: 2025-11-25 16:24:55.133 254096 DEBUG oslo_concurrency.processutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:24:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 16:24:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3176779923' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:24:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 16:24:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3176779923' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:24:55 compute-0 nova_compute[254092]: 2025-11-25 16:24:55.200 254096 DEBUG oslo_concurrency.processutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:24:55 compute-0 nova_compute[254092]: 2025-11-25 16:24:55.202 254096 DEBUG oslo_concurrency.lockutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:24:55 compute-0 nova_compute[254092]: 2025-11-25 16:24:55.204 254096 DEBUG oslo_concurrency.lockutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:24:55 compute-0 nova_compute[254092]: 2025-11-25 16:24:55.204 254096 DEBUG oslo_concurrency.lockutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:24:55 compute-0 nova_compute[254092]: 2025-11-25 16:24:55.229 254096 DEBUG nova.storage.rbd_utils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] rbd image 71cf0ae0-6191-4b64-9a81-a955d807ceb4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:24:55 compute-0 nova_compute[254092]: 2025-11-25 16:24:55.233 254096 DEBUG oslo_concurrency.processutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 71cf0ae0-6191-4b64-9a81-a955d807ceb4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:24:55 compute-0 nova_compute[254092]: 2025-11-25 16:24:55.256 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764087880.2558208, 3bb45e40-37dd-4cae-a966-ecbd9260eb35 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:24:55 compute-0 nova_compute[254092]: 2025-11-25 16:24:55.257 254096 INFO nova.compute.manager [-] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] VM Stopped (Lifecycle Event)
Nov 25 16:24:55 compute-0 nova_compute[254092]: 2025-11-25 16:24:55.288 254096 DEBUG nova.compute.manager [None req-70e7ad0d-7ea4-4a83-b23d-4cc07433c264 - - - - - -] [instance: 3bb45e40-37dd-4cae-a966-ecbd9260eb35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:24:55 compute-0 nova_compute[254092]: 2025-11-25 16:24:55.385 254096 WARNING oslo_policy.policy [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Nov 25 16:24:55 compute-0 nova_compute[254092]: 2025-11-25 16:24:55.385 254096 WARNING oslo_policy.policy [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Nov 25 16:24:55 compute-0 nova_compute[254092]: 2025-11-25 16:24:55.389 254096 DEBUG nova.policy [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '787cb8b4238c4926a4466f3421db09ef', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '95df0d15c889499aba411e805ea145a5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:24:55 compute-0 nova_compute[254092]: 2025-11-25 16:24:55.537 254096 DEBUG oslo_concurrency.processutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 71cf0ae0-6191-4b64-9a81-a955d807ceb4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.304s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:24:55 compute-0 nova_compute[254092]: 2025-11-25 16:24:55.606 254096 DEBUG nova.storage.rbd_utils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] resizing rbd image 71cf0ae0-6191-4b64-9a81-a955d807ceb4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:24:55 compute-0 ceph-mon[74985]: pgmap v1091: 321 pgs: 321 active+clean; 88 MiB data, 209 MiB used, 60 GiB / 60 GiB avail; 488 KiB/s rd, 1.8 MiB/s wr, 93 op/s
Nov 25 16:24:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/3176779923' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:24:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/3176779923' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:24:56 compute-0 nova_compute[254092]: 2025-11-25 16:24:56.004 254096 DEBUG nova.objects.instance [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lazy-loading 'migration_context' on Instance uuid 71cf0ae0-6191-4b64-9a81-a955d807ceb4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:24:56 compute-0 nova_compute[254092]: 2025-11-25 16:24:56.016 254096 DEBUG nova.virt.libvirt.driver [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:24:56 compute-0 nova_compute[254092]: 2025-11-25 16:24:56.017 254096 DEBUG nova.virt.libvirt.driver [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Ensure instance console log exists: /var/lib/nova/instances/71cf0ae0-6191-4b64-9a81-a955d807ceb4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:24:56 compute-0 nova_compute[254092]: 2025-11-25 16:24:56.017 254096 DEBUG oslo_concurrency.lockutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:24:56 compute-0 nova_compute[254092]: 2025-11-25 16:24:56.018 254096 DEBUG oslo_concurrency.lockutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:24:56 compute-0 nova_compute[254092]: 2025-11-25 16:24:56.020 254096 DEBUG oslo_concurrency.lockutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:24:56 compute-0 nova_compute[254092]: 2025-11-25 16:24:56.207 254096 DEBUG oslo_concurrency.lockutils [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Acquiring lock "d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:24:56 compute-0 nova_compute[254092]: 2025-11-25 16:24:56.208 254096 DEBUG oslo_concurrency.lockutils [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lock "d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:24:56 compute-0 nova_compute[254092]: 2025-11-25 16:24:56.209 254096 DEBUG oslo_concurrency.lockutils [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Acquiring lock "d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:24:56 compute-0 nova_compute[254092]: 2025-11-25 16:24:56.209 254096 DEBUG oslo_concurrency.lockutils [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lock "d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:24:56 compute-0 nova_compute[254092]: 2025-11-25 16:24:56.209 254096 DEBUG oslo_concurrency.lockutils [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lock "d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:24:56 compute-0 nova_compute[254092]: 2025-11-25 16:24:56.211 254096 INFO nova.compute.manager [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Terminating instance
Nov 25 16:24:56 compute-0 nova_compute[254092]: 2025-11-25 16:24:56.212 254096 DEBUG oslo_concurrency.lockutils [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Acquiring lock "refresh_cache-d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:24:56 compute-0 nova_compute[254092]: 2025-11-25 16:24:56.212 254096 DEBUG oslo_concurrency.lockutils [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Acquired lock "refresh_cache-d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:24:56 compute-0 nova_compute[254092]: 2025-11-25 16:24:56.212 254096 DEBUG nova.network.neutron [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:24:56 compute-0 nova_compute[254092]: 2025-11-25 16:24:56.579 254096 DEBUG nova.network.neutron [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:24:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1092: 321 pgs: 321 active+clean; 99 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 169 op/s
Nov 25 16:24:57 compute-0 ceph-mon[74985]: pgmap v1092: 321 pgs: 321 active+clean; 99 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 169 op/s
Nov 25 16:24:57 compute-0 nova_compute[254092]: 2025-11-25 16:24:57.191 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764087882.1905575, a04ee12e-fa6a-4458-9472-b68930d7ba89 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:24:57 compute-0 nova_compute[254092]: 2025-11-25 16:24:57.192 254096 INFO nova.compute.manager [-] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] VM Stopped (Lifecycle Event)
Nov 25 16:24:57 compute-0 nova_compute[254092]: 2025-11-25 16:24:57.194 254096 DEBUG nova.network.neutron [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:24:57 compute-0 nova_compute[254092]: 2025-11-25 16:24:57.228 254096 DEBUG nova.compute.manager [None req-8f4e641a-2212-4597-9c36-f3165288a8d8 - - - - - -] [instance: a04ee12e-fa6a-4458-9472-b68930d7ba89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:24:57 compute-0 nova_compute[254092]: 2025-11-25 16:24:57.229 254096 DEBUG oslo_concurrency.lockutils [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Releasing lock "refresh_cache-d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:24:57 compute-0 nova_compute[254092]: 2025-11-25 16:24:57.229 254096 DEBUG nova.compute.manager [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:24:57 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Nov 25 16:24:57 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 4.044s CPU time.
Nov 25 16:24:57 compute-0 systemd-machined[216343]: Machine qemu-3-instance-00000003 terminated.
Nov 25 16:24:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:24:57 compute-0 nova_compute[254092]: 2025-11-25 16:24:57.448 254096 INFO nova.virt.libvirt.driver [-] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Instance destroyed successfully.
Nov 25 16:24:57 compute-0 nova_compute[254092]: 2025-11-25 16:24:57.449 254096 DEBUG nova.objects.instance [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lazy-loading 'resources' on Instance uuid d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:24:57 compute-0 nova_compute[254092]: 2025-11-25 16:24:57.490 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:24:57 compute-0 nova_compute[254092]: 2025-11-25 16:24:57.657 254096 DEBUG nova.network.neutron [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Successfully created port: 7adfcb53-33cb-482b-ba39-82d0ee72c4ea _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:24:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1093: 321 pgs: 321 active+clean; 99 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 102 op/s
Nov 25 16:24:59 compute-0 sudo[267511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:24:59 compute-0 sudo[267511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:24:59 compute-0 sudo[267511]: pam_unix(sudo:session): session closed for user root
Nov 25 16:24:59 compute-0 ceph-mon[74985]: pgmap v1093: 321 pgs: 321 active+clean; 99 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 102 op/s
Nov 25 16:24:59 compute-0 sudo[267536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:24:59 compute-0 sudo[267536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:24:59 compute-0 sudo[267536]: pam_unix(sudo:session): session closed for user root
Nov 25 16:24:59 compute-0 sudo[267561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:24:59 compute-0 sudo[267561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:24:59 compute-0 sudo[267561]: pam_unix(sudo:session): session closed for user root
Nov 25 16:24:59 compute-0 nova_compute[254092]: 2025-11-25 16:24:59.350 254096 DEBUG nova.network.neutron [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Successfully updated port: 7adfcb53-33cb-482b-ba39-82d0ee72c4ea _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:24:59 compute-0 nova_compute[254092]: 2025-11-25 16:24:59.393 254096 DEBUG oslo_concurrency.lockutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Acquiring lock "refresh_cache-71cf0ae0-6191-4b64-9a81-a955d807ceb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:24:59 compute-0 nova_compute[254092]: 2025-11-25 16:24:59.394 254096 DEBUG oslo_concurrency.lockutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Acquired lock "refresh_cache-71cf0ae0-6191-4b64-9a81-a955d807ceb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:24:59 compute-0 nova_compute[254092]: 2025-11-25 16:24:59.394 254096 DEBUG nova.network.neutron [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:24:59 compute-0 sudo[267586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 16:24:59 compute-0 sudo[267586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:24:59 compute-0 nova_compute[254092]: 2025-11-25 16:24:59.416 254096 DEBUG oslo_concurrency.processutils [None req-b0b49039-76ee-4344-ad31-434a905e3834 602139a4fa0b4ea6a916c8bf9b9b8ee3 ec8a4d65ee6043d18b0c81f32d05ce40 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:24:59 compute-0 nova_compute[254092]: 2025-11-25 16:24:59.481 254096 DEBUG oslo_concurrency.processutils [None req-b0b49039-76ee-4344-ad31-434a905e3834 602139a4fa0b4ea6a916c8bf9b9b8ee3 ec8a4d65ee6043d18b0c81f32d05ce40 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:24:59 compute-0 nova_compute[254092]: 2025-11-25 16:24:59.590 254096 DEBUG nova.network.neutron [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:24:59 compute-0 sudo[267586]: pam_unix(sudo:session): session closed for user root
Nov 25 16:24:59 compute-0 sudo[267642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:24:59 compute-0 sudo[267642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:24:59 compute-0 sudo[267642]: pam_unix(sudo:session): session closed for user root
Nov 25 16:24:59 compute-0 sudo[267667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:24:59 compute-0 sudo[267667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:24:59 compute-0 sudo[267667]: pam_unix(sudo:session): session closed for user root
Nov 25 16:24:59 compute-0 sudo[267692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:24:59 compute-0 sudo[267692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:25:00 compute-0 sudo[267692]: pam_unix(sudo:session): session closed for user root
Nov 25 16:25:00 compute-0 sudo[267717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- inventory --format=json-pretty --filter-for-batch
Nov 25 16:25:00 compute-0 sudo[267717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:25:00 compute-0 podman[267783]: 2025-11-25 16:25:00.351218574 +0000 UTC m=+0.021540347 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:25:00 compute-0 podman[267783]: 2025-11-25 16:25:00.481247163 +0000 UTC m=+0.151568866 container create 0c7da885237b3d76217945694c483fa8fa2646794c61d0c401ba4c12e82d6168 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 16:25:00 compute-0 systemd[1]: Started libpod-conmon-0c7da885237b3d76217945694c483fa8fa2646794c61d0c401ba4c12e82d6168.scope.
Nov 25 16:25:00 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:25:00 compute-0 podman[267783]: 2025-11-25 16:25:00.663105943 +0000 UTC m=+0.333427666 container init 0c7da885237b3d76217945694c483fa8fa2646794c61d0c401ba4c12e82d6168 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_villani, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 16:25:00 compute-0 podman[267783]: 2025-11-25 16:25:00.669793774 +0000 UTC m=+0.340115477 container start 0c7da885237b3d76217945694c483fa8fa2646794c61d0c401ba4c12e82d6168 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 16:25:00 compute-0 cranky_villani[267800]: 167 167
Nov 25 16:25:00 compute-0 systemd[1]: libpod-0c7da885237b3d76217945694c483fa8fa2646794c61d0c401ba4c12e82d6168.scope: Deactivated successfully.
Nov 25 16:25:00 compute-0 podman[267783]: 2025-11-25 16:25:00.789538533 +0000 UTC m=+0.459860256 container attach 0c7da885237b3d76217945694c483fa8fa2646794c61d0c401ba4c12e82d6168 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_villani, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Nov 25 16:25:00 compute-0 podman[267783]: 2025-11-25 16:25:00.790138349 +0000 UTC m=+0.460460052 container died 0c7da885237b3d76217945694c483fa8fa2646794c61d0c401ba4c12e82d6168 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:25:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1094: 321 pgs: 321 active+clean; 134 MiB data, 227 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 134 op/s
Nov 25 16:25:01 compute-0 ceph-mon[74985]: pgmap v1094: 321 pgs: 321 active+clean; 134 MiB data, 227 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 134 op/s
Nov 25 16:25:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-4995e226f290018d4d77c41ed4c88168a60bf1aaf3e3b94d12eaf03a61ef2dd9-merged.mount: Deactivated successfully.
Nov 25 16:25:01 compute-0 podman[267783]: 2025-11-25 16:25:01.307300554 +0000 UTC m=+0.977622267 container remove 0c7da885237b3d76217945694c483fa8fa2646794c61d0c401ba4c12e82d6168 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:25:01 compute-0 systemd[1]: libpod-conmon-0c7da885237b3d76217945694c483fa8fa2646794c61d0c401ba4c12e82d6168.scope: Deactivated successfully.
Nov 25 16:25:01 compute-0 podman[267826]: 2025-11-25 16:25:01.513116725 +0000 UTC m=+0.062807460 container create 6f8f7b2d27bf0edbe3574fb9da7bf5d466f855419ca47c6986264e18b5beff18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_pike, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 16:25:01 compute-0 podman[267826]: 2025-11-25 16:25:01.474858213 +0000 UTC m=+0.024548938 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:25:01 compute-0 systemd[1]: Started libpod-conmon-6f8f7b2d27bf0edbe3574fb9da7bf5d466f855419ca47c6986264e18b5beff18.scope.
Nov 25 16:25:01 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:25:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8a88e13924a7eb658be72504548eb3088edd996290fd5896d92458bbb68a62f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:25:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8a88e13924a7eb658be72504548eb3088edd996290fd5896d92458bbb68a62f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:25:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8a88e13924a7eb658be72504548eb3088edd996290fd5896d92458bbb68a62f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:25:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8a88e13924a7eb658be72504548eb3088edd996290fd5896d92458bbb68a62f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:25:01 compute-0 podman[267826]: 2025-11-25 16:25:01.637115569 +0000 UTC m=+0.186806314 container init 6f8f7b2d27bf0edbe3574fb9da7bf5d466f855419ca47c6986264e18b5beff18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_pike, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 16:25:01 compute-0 podman[267826]: 2025-11-25 16:25:01.644348576 +0000 UTC m=+0.194039281 container start 6f8f7b2d27bf0edbe3574fb9da7bf5d466f855419ca47c6986264e18b5beff18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 16:25:01 compute-0 nova_compute[254092]: 2025-11-25 16:25:01.645 254096 INFO nova.virt.libvirt.driver [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Deleting instance files /var/lib/nova/instances/d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce_del
Nov 25 16:25:01 compute-0 nova_compute[254092]: 2025-11-25 16:25:01.646 254096 INFO nova.virt.libvirt.driver [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Deletion of /var/lib/nova/instances/d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce_del complete
Nov 25 16:25:01 compute-0 podman[267826]: 2025-11-25 16:25:01.659997502 +0000 UTC m=+0.209688247 container attach 6f8f7b2d27bf0edbe3574fb9da7bf5d466f855419ca47c6986264e18b5beff18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:25:01 compute-0 nova_compute[254092]: 2025-11-25 16:25:01.676 254096 DEBUG nova.compute.manager [req-d391eb5e-3f09-47b3-baba-8f2ff85b8154 req-0c61d149-3cdb-490e-bdce-24a8c4a80fb1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Received event network-changed-7adfcb53-33cb-482b-ba39-82d0ee72c4ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:25:01 compute-0 nova_compute[254092]: 2025-11-25 16:25:01.677 254096 DEBUG nova.compute.manager [req-d391eb5e-3f09-47b3-baba-8f2ff85b8154 req-0c61d149-3cdb-490e-bdce-24a8c4a80fb1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Refreshing instance network info cache due to event network-changed-7adfcb53-33cb-482b-ba39-82d0ee72c4ea. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:25:01 compute-0 nova_compute[254092]: 2025-11-25 16:25:01.677 254096 DEBUG oslo_concurrency.lockutils [req-d391eb5e-3f09-47b3-baba-8f2ff85b8154 req-0c61d149-3cdb-490e-bdce-24a8c4a80fb1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-71cf0ae0-6191-4b64-9a81-a955d807ceb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:25:01 compute-0 nova_compute[254092]: 2025-11-25 16:25:01.738 254096 INFO nova.compute.manager [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Took 4.51 seconds to destroy the instance on the hypervisor.
Nov 25 16:25:01 compute-0 nova_compute[254092]: 2025-11-25 16:25:01.739 254096 DEBUG oslo.service.loopingcall [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:25:01 compute-0 nova_compute[254092]: 2025-11-25 16:25:01.739 254096 DEBUG nova.compute.manager [-] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:25:01 compute-0 nova_compute[254092]: 2025-11-25 16:25:01.740 254096 DEBUG nova.network.neutron [-] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:25:01 compute-0 nova_compute[254092]: 2025-11-25 16:25:01.983 254096 DEBUG nova.network.neutron [-] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:25:01 compute-0 nova_compute[254092]: 2025-11-25 16:25:01.996 254096 DEBUG nova.network.neutron [-] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:25:02 compute-0 nova_compute[254092]: 2025-11-25 16:25:02.009 254096 INFO nova.compute.manager [-] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Took 0.27 seconds to deallocate network for instance.
Nov 25 16:25:02 compute-0 nova_compute[254092]: 2025-11-25 16:25:02.098 254096 DEBUG oslo_concurrency.lockutils [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:25:02 compute-0 nova_compute[254092]: 2025-11-25 16:25:02.099 254096 DEBUG oslo_concurrency.lockutils [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:25:02 compute-0 nova_compute[254092]: 2025-11-25 16:25:02.190 254096 DEBUG oslo_concurrency.processutils [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:25:02 compute-0 nova_compute[254092]: 2025-11-25 16:25:02.288 254096 DEBUG nova.network.neutron [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Updating instance_info_cache with network_info: [{"id": "7adfcb53-33cb-482b-ba39-82d0ee72c4ea", "address": "fa:16:3e:7d:06:61", "network": {"id": "dd4097f8-dcdf-451c-8fbb-2057e86e375d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-256857767-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95df0d15c889499aba411e805ea145a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7adfcb53-33", "ovs_interfaceid": "7adfcb53-33cb-482b-ba39-82d0ee72c4ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:25:02 compute-0 nova_compute[254092]: 2025-11-25 16:25:02.339 254096 DEBUG oslo_concurrency.lockutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Releasing lock "refresh_cache-71cf0ae0-6191-4b64-9a81-a955d807ceb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:25:02 compute-0 nova_compute[254092]: 2025-11-25 16:25:02.340 254096 DEBUG nova.compute.manager [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Instance network_info: |[{"id": "7adfcb53-33cb-482b-ba39-82d0ee72c4ea", "address": "fa:16:3e:7d:06:61", "network": {"id": "dd4097f8-dcdf-451c-8fbb-2057e86e375d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-256857767-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95df0d15c889499aba411e805ea145a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7adfcb53-33", "ovs_interfaceid": "7adfcb53-33cb-482b-ba39-82d0ee72c4ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:25:02 compute-0 nova_compute[254092]: 2025-11-25 16:25:02.341 254096 DEBUG oslo_concurrency.lockutils [req-d391eb5e-3f09-47b3-baba-8f2ff85b8154 req-0c61d149-3cdb-490e-bdce-24a8c4a80fb1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-71cf0ae0-6191-4b64-9a81-a955d807ceb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:25:02 compute-0 nova_compute[254092]: 2025-11-25 16:25:02.342 254096 DEBUG nova.network.neutron [req-d391eb5e-3f09-47b3-baba-8f2ff85b8154 req-0c61d149-3cdb-490e-bdce-24a8c4a80fb1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Refreshing network info cache for port 7adfcb53-33cb-482b-ba39-82d0ee72c4ea _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:25:02 compute-0 nova_compute[254092]: 2025-11-25 16:25:02.345 254096 DEBUG nova.virt.libvirt.driver [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Start _get_guest_xml network_info=[{"id": "7adfcb53-33cb-482b-ba39-82d0ee72c4ea", "address": "fa:16:3e:7d:06:61", "network": {"id": "dd4097f8-dcdf-451c-8fbb-2057e86e375d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-256857767-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95df0d15c889499aba411e805ea145a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7adfcb53-33", "ovs_interfaceid": "7adfcb53-33cb-482b-ba39-82d0ee72c4ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:25:02 compute-0 nova_compute[254092]: 2025-11-25 16:25:02.351 254096 WARNING nova.virt.libvirt.driver [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:25:02 compute-0 nova_compute[254092]: 2025-11-25 16:25:02.362 254096 DEBUG nova.virt.libvirt.host [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:25:02 compute-0 nova_compute[254092]: 2025-11-25 16:25:02.363 254096 DEBUG nova.virt.libvirt.host [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:25:02 compute-0 nova_compute[254092]: 2025-11-25 16:25:02.367 254096 DEBUG nova.virt.libvirt.host [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:25:02 compute-0 nova_compute[254092]: 2025-11-25 16:25:02.367 254096 DEBUG nova.virt.libvirt.host [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:25:02 compute-0 nova_compute[254092]: 2025-11-25 16:25:02.368 254096 DEBUG nova.virt.libvirt.driver [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:25:02 compute-0 nova_compute[254092]: 2025-11-25 16:25:02.368 254096 DEBUG nova.virt.hardware [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:24:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='964051212',id=9,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_0-1974927111',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:25:02 compute-0 nova_compute[254092]: 2025-11-25 16:25:02.369 254096 DEBUG nova.virt.hardware [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:25:02 compute-0 nova_compute[254092]: 2025-11-25 16:25:02.369 254096 DEBUG nova.virt.hardware [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:25:02 compute-0 nova_compute[254092]: 2025-11-25 16:25:02.369 254096 DEBUG nova.virt.hardware [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:25:02 compute-0 nova_compute[254092]: 2025-11-25 16:25:02.369 254096 DEBUG nova.virt.hardware [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:25:02 compute-0 nova_compute[254092]: 2025-11-25 16:25:02.369 254096 DEBUG nova.virt.hardware [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:25:02 compute-0 nova_compute[254092]: 2025-11-25 16:25:02.370 254096 DEBUG nova.virt.hardware [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:25:02 compute-0 nova_compute[254092]: 2025-11-25 16:25:02.370 254096 DEBUG nova.virt.hardware [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:25:02 compute-0 nova_compute[254092]: 2025-11-25 16:25:02.370 254096 DEBUG nova.virt.hardware [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:25:02 compute-0 nova_compute[254092]: 2025-11-25 16:25:02.370 254096 DEBUG nova.virt.hardware [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:25:02 compute-0 nova_compute[254092]: 2025-11-25 16:25:02.370 254096 DEBUG nova.virt.hardware [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:25:02 compute-0 nova_compute[254092]: 2025-11-25 16:25:02.373 254096 DEBUG oslo_concurrency.processutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:25:02 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:25:02 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:25:02 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3042876011' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:25:02 compute-0 nova_compute[254092]: 2025-11-25 16:25:02.681 254096 DEBUG oslo_concurrency.processutils [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:25:02 compute-0 nova_compute[254092]: 2025-11-25 16:25:02.688 254096 DEBUG nova.compute.provider_tree [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:25:02 compute-0 nova_compute[254092]: 2025-11-25 16:25:02.713 254096 DEBUG nova.scheduler.client.report [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:25:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1095: 321 pgs: 321 active+clean; 122 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 109 op/s
Nov 25 16:25:02 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:25:02 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/136750213' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:25:02 compute-0 nova_compute[254092]: 2025-11-25 16:25:02.822 254096 DEBUG oslo_concurrency.processutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:25:02 compute-0 nova_compute[254092]: 2025-11-25 16:25:02.843 254096 DEBUG nova.storage.rbd_utils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] rbd image 71cf0ae0-6191-4b64-9a81-a955d807ceb4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:25:02 compute-0 nova_compute[254092]: 2025-11-25 16:25:02.846 254096 DEBUG oslo_concurrency.processutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:25:02 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3042876011' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:25:02 compute-0 nova_compute[254092]: 2025-11-25 16:25:02.937 254096 DEBUG oslo_concurrency.lockutils [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:25:03 compute-0 nova_compute[254092]: 2025-11-25 16:25:03.018 254096 INFO nova.scheduler.client.report [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Deleted allocations for instance d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce
Nov 25 16:25:03 compute-0 strange_pike[267844]: [
Nov 25 16:25:03 compute-0 strange_pike[267844]:     {
Nov 25 16:25:03 compute-0 strange_pike[267844]:         "available": false,
Nov 25 16:25:03 compute-0 strange_pike[267844]:         "ceph_device": false,
Nov 25 16:25:03 compute-0 strange_pike[267844]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 25 16:25:03 compute-0 strange_pike[267844]:         "lsm_data": {},
Nov 25 16:25:03 compute-0 strange_pike[267844]:         "lvs": [],
Nov 25 16:25:03 compute-0 strange_pike[267844]:         "path": "/dev/sr0",
Nov 25 16:25:03 compute-0 strange_pike[267844]:         "rejected_reasons": [
Nov 25 16:25:03 compute-0 strange_pike[267844]:             "Has a FileSystem",
Nov 25 16:25:03 compute-0 strange_pike[267844]:             "Insufficient space (<5GB)"
Nov 25 16:25:03 compute-0 strange_pike[267844]:         ],
Nov 25 16:25:03 compute-0 strange_pike[267844]:         "sys_api": {
Nov 25 16:25:03 compute-0 strange_pike[267844]:             "actuators": null,
Nov 25 16:25:03 compute-0 strange_pike[267844]:             "device_nodes": "sr0",
Nov 25 16:25:03 compute-0 strange_pike[267844]:             "devname": "sr0",
Nov 25 16:25:03 compute-0 strange_pike[267844]:             "human_readable_size": "482.00 KB",
Nov 25 16:25:03 compute-0 strange_pike[267844]:             "id_bus": "ata",
Nov 25 16:25:03 compute-0 strange_pike[267844]:             "model": "QEMU DVD-ROM",
Nov 25 16:25:03 compute-0 strange_pike[267844]:             "nr_requests": "2",
Nov 25 16:25:03 compute-0 strange_pike[267844]:             "parent": "/dev/sr0",
Nov 25 16:25:03 compute-0 strange_pike[267844]:             "partitions": {},
Nov 25 16:25:03 compute-0 strange_pike[267844]:             "path": "/dev/sr0",
Nov 25 16:25:03 compute-0 strange_pike[267844]:             "removable": "1",
Nov 25 16:25:03 compute-0 strange_pike[267844]:             "rev": "2.5+",
Nov 25 16:25:03 compute-0 strange_pike[267844]:             "ro": "0",
Nov 25 16:25:03 compute-0 strange_pike[267844]:             "rotational": "1",
Nov 25 16:25:03 compute-0 strange_pike[267844]:             "sas_address": "",
Nov 25 16:25:03 compute-0 strange_pike[267844]:             "sas_device_handle": "",
Nov 25 16:25:03 compute-0 strange_pike[267844]:             "scheduler_mode": "mq-deadline",
Nov 25 16:25:03 compute-0 strange_pike[267844]:             "sectors": 0,
Nov 25 16:25:03 compute-0 strange_pike[267844]:             "sectorsize": "2048",
Nov 25 16:25:03 compute-0 strange_pike[267844]:             "size": 493568.0,
Nov 25 16:25:03 compute-0 strange_pike[267844]:             "support_discard": "2048",
Nov 25 16:25:03 compute-0 strange_pike[267844]:             "type": "disk",
Nov 25 16:25:03 compute-0 strange_pike[267844]:             "vendor": "QEMU"
Nov 25 16:25:03 compute-0 strange_pike[267844]:         }
Nov 25 16:25:03 compute-0 strange_pike[267844]:     }
Nov 25 16:25:03 compute-0 strange_pike[267844]: ]
Nov 25 16:25:03 compute-0 systemd[1]: libpod-6f8f7b2d27bf0edbe3574fb9da7bf5d466f855419ca47c6986264e18b5beff18.scope: Deactivated successfully.
Nov 25 16:25:03 compute-0 podman[267826]: 2025-11-25 16:25:03.075973107 +0000 UTC m=+1.625663822 container died 6f8f7b2d27bf0edbe3574fb9da7bf5d466f855419ca47c6986264e18b5beff18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_pike, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:25:03 compute-0 systemd[1]: libpod-6f8f7b2d27bf0edbe3574fb9da7bf5d466f855419ca47c6986264e18b5beff18.scope: Consumed 1.437s CPU time.
Nov 25 16:25:03 compute-0 nova_compute[254092]: 2025-11-25 16:25:03.114 254096 DEBUG oslo_concurrency.lockutils [None req-604e6525-4441-4d7e-939f-36dbeff653bb d72f101cd6d049f694a1d30145a4ed24 185de66120c0404eb338d72909c776df - - default default] Lock "d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.906s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:25:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8a88e13924a7eb658be72504548eb3088edd996290fd5896d92458bbb68a62f-merged.mount: Deactivated successfully.
Nov 25 16:25:03 compute-0 podman[267826]: 2025-11-25 16:25:03.194228655 +0000 UTC m=+1.743919360 container remove 6f8f7b2d27bf0edbe3574fb9da7bf5d466f855419ca47c6986264e18b5beff18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:25:03 compute-0 systemd[1]: libpod-conmon-6f8f7b2d27bf0edbe3574fb9da7bf5d466f855419ca47c6986264e18b5beff18.scope: Deactivated successfully.
Nov 25 16:25:03 compute-0 sudo[267717]: pam_unix(sudo:session): session closed for user root
Nov 25 16:25:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:25:03 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:25:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:25:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:25:03 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2323743500' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:25:03 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:25:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:25:03 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:25:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 16:25:03 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:25:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 16:25:03 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:25:03 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 85f74c62-98ea-4cab-ae2d-c55962a1ccf6 does not exist
Nov 25 16:25:03 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 5d5878a4-9993-4157-86cc-b51ee86754f3 does not exist
Nov 25 16:25:03 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev e6173d6f-bcdb-4fd2-ad90-45d13513c2b8 does not exist
Nov 25 16:25:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 16:25:03 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:25:03 compute-0 nova_compute[254092]: 2025-11-25 16:25:03.272 254096 DEBUG oslo_concurrency.processutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:25:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 16:25:03 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:25:03 compute-0 nova_compute[254092]: 2025-11-25 16:25:03.274 254096 DEBUG nova.virt.libvirt.vif [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:24:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-531690935',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-531690935',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(9),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-531690935',id=4,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=9,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLSmbQTCEowz9Br3mA84W52215mjLyCGHTHsUJ1HsTz8qleFdGWCmNHBOWPSc/8w+47nHDrMi2gOITQ7Dn+rkiEiUtK0/z3HXn3ebs4Ev7qOse6EBs27GTPhFXHM2eKGPA==',key_name='tempest-keypair-494555026',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='95df0d15c889499aba411e805ea145a5',ramdisk_id='',reservation_id='r-88lf5wc3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-341738122',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-341738122-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:24:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='787cb8b4238c4926a4466f3421db09ef',uuid=71cf0ae0-6191-4b64-9a81-a955d807ceb4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7adfcb53-33cb-482b-ba39-82d0ee72c4ea", "address": "fa:16:3e:7d:06:61", "network": {"id": "dd4097f8-dcdf-451c-8fbb-2057e86e375d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-256857767-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95df0d15c889499aba411e805ea145a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7adfcb53-33", "ovs_interfaceid": "7adfcb53-33cb-482b-ba39-82d0ee72c4ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:25:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:25:03 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:25:03 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 16:25:03 compute-0 nova_compute[254092]: 2025-11-25 16:25:03.277 254096 DEBUG nova.network.os_vif_util [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Converting VIF {"id": "7adfcb53-33cb-482b-ba39-82d0ee72c4ea", "address": "fa:16:3e:7d:06:61", "network": {"id": "dd4097f8-dcdf-451c-8fbb-2057e86e375d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-256857767-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95df0d15c889499aba411e805ea145a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7adfcb53-33", "ovs_interfaceid": "7adfcb53-33cb-482b-ba39-82d0ee72c4ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:25:03 compute-0 nova_compute[254092]: 2025-11-25 16:25:03.278 254096 DEBUG nova.network.os_vif_util [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:06:61,bridge_name='br-int',has_traffic_filtering=True,id=7adfcb53-33cb-482b-ba39-82d0ee72c4ea,network=Network(dd4097f8-dcdf-451c-8fbb-2057e86e375d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7adfcb53-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:25:03 compute-0 nova_compute[254092]: 2025-11-25 16:25:03.280 254096 DEBUG nova.objects.instance [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 71cf0ae0-6191-4b64-9a81-a955d807ceb4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:25:03 compute-0 nova_compute[254092]: 2025-11-25 16:25:03.304 254096 DEBUG nova.virt.libvirt.driver [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:25:03 compute-0 nova_compute[254092]:   <uuid>71cf0ae0-6191-4b64-9a81-a955d807ceb4</uuid>
Nov 25 16:25:03 compute-0 nova_compute[254092]:   <name>instance-00000004</name>
Nov 25 16:25:03 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:25:03 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:25:03 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:25:03 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:       <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-531690935</nova:name>
Nov 25 16:25:03 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:25:02</nova:creationTime>
Nov 25 16:25:03 compute-0 nova_compute[254092]:       <nova:flavor name="tempest-flavor_with_ephemeral_0-1974927111">
Nov 25 16:25:03 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:25:03 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:25:03 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:25:03 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:25:03 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:25:03 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:25:03 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:25:03 compute-0 nova_compute[254092]:         <nova:user uuid="787cb8b4238c4926a4466f3421db09ef">tempest-ServersWithSpecificFlavorTestJSON-341738122-project-member</nova:user>
Nov 25 16:25:03 compute-0 nova_compute[254092]:         <nova:project uuid="95df0d15c889499aba411e805ea145a5">tempest-ServersWithSpecificFlavorTestJSON-341738122</nova:project>
Nov 25 16:25:03 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:25:03 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:25:03 compute-0 nova_compute[254092]:         <nova:port uuid="7adfcb53-33cb-482b-ba39-82d0ee72c4ea">
Nov 25 16:25:03 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:25:03 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:25:03 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:25:03 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:25:03 compute-0 nova_compute[254092]:     <system>
Nov 25 16:25:03 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:25:03 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:25:03 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:25:03 compute-0 nova_compute[254092]:       <entry name="serial">71cf0ae0-6191-4b64-9a81-a955d807ceb4</entry>
Nov 25 16:25:03 compute-0 nova_compute[254092]:       <entry name="uuid">71cf0ae0-6191-4b64-9a81-a955d807ceb4</entry>
Nov 25 16:25:03 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     </system>
Nov 25 16:25:03 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:25:03 compute-0 nova_compute[254092]:   <os>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:   </os>
Nov 25 16:25:03 compute-0 nova_compute[254092]:   <features>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:   </features>
Nov 25 16:25:03 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:25:03 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:25:03 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:25:03 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:25:03 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:25:03 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/71cf0ae0-6191-4b64-9a81-a955d807ceb4_disk">
Nov 25 16:25:03 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:       </source>
Nov 25 16:25:03 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:25:03 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:25:03 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:25:03 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/71cf0ae0-6191-4b64-9a81-a955d807ceb4_disk.config">
Nov 25 16:25:03 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:       </source>
Nov 25 16:25:03 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:25:03 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:25:03 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:25:03 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:7d:06:61"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:       <target dev="tap7adfcb53-33"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:25:03 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/71cf0ae0-6191-4b64-9a81-a955d807ceb4/console.log" append="off"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     <video>
Nov 25 16:25:03 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     </video>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:25:03 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:25:03 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:25:03 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:25:03 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:25:03 compute-0 nova_compute[254092]: </domain>
Nov 25 16:25:03 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:25:03 compute-0 nova_compute[254092]: 2025-11-25 16:25:03.304 254096 DEBUG nova.compute.manager [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Preparing to wait for external event network-vif-plugged-7adfcb53-33cb-482b-ba39-82d0ee72c4ea prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:25:03 compute-0 nova_compute[254092]: 2025-11-25 16:25:03.304 254096 DEBUG oslo_concurrency.lockutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Acquiring lock "71cf0ae0-6191-4b64-9a81-a955d807ceb4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:25:03 compute-0 nova_compute[254092]: 2025-11-25 16:25:03.305 254096 DEBUG oslo_concurrency.lockutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "71cf0ae0-6191-4b64-9a81-a955d807ceb4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:25:03 compute-0 nova_compute[254092]: 2025-11-25 16:25:03.305 254096 DEBUG oslo_concurrency.lockutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "71cf0ae0-6191-4b64-9a81-a955d807ceb4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:25:03 compute-0 nova_compute[254092]: 2025-11-25 16:25:03.306 254096 DEBUG nova.virt.libvirt.vif [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:24:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-531690935',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-531690935',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(9),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-531690935',id=4,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=9,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLSmbQTCEowz9Br3mA84W52215mjLyCGHTHsUJ1HsTz8qleFdGWCmNHBOWPSc/8w+47nHDrMi2gOITQ7Dn+rkiEiUtK0/z3HXn3ebs4Ev7qOse6EBs27GTPhFXHM2eKGPA==',key_name='tempest-keypair-494555026',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='95df0d15c889499aba411e805ea145a5',ramdisk_id='',reservation_id='r-88lf5wc3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-341738122',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-341738122-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:24:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='787cb8b4238c4926a4466f3421db09ef',uuid=71cf0ae0-6191-4b64-9a81-a955d807ceb4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7adfcb53-33cb-482b-ba39-82d0ee72c4ea", "address": "fa:16:3e:7d:06:61", "network": {"id": "dd4097f8-dcdf-451c-8fbb-2057e86e375d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-256857767-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95df0d15c889499aba411e805ea145a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7adfcb53-33", "ovs_interfaceid": "7adfcb53-33cb-482b-ba39-82d0ee72c4ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:25:03 compute-0 nova_compute[254092]: 2025-11-25 16:25:03.306 254096 DEBUG nova.network.os_vif_util [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Converting VIF {"id": "7adfcb53-33cb-482b-ba39-82d0ee72c4ea", "address": "fa:16:3e:7d:06:61", "network": {"id": "dd4097f8-dcdf-451c-8fbb-2057e86e375d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-256857767-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95df0d15c889499aba411e805ea145a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7adfcb53-33", "ovs_interfaceid": "7adfcb53-33cb-482b-ba39-82d0ee72c4ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:25:03 compute-0 nova_compute[254092]: 2025-11-25 16:25:03.308 254096 DEBUG nova.network.os_vif_util [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:06:61,bridge_name='br-int',has_traffic_filtering=True,id=7adfcb53-33cb-482b-ba39-82d0ee72c4ea,network=Network(dd4097f8-dcdf-451c-8fbb-2057e86e375d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7adfcb53-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:25:03 compute-0 nova_compute[254092]: 2025-11-25 16:25:03.308 254096 DEBUG os_vif [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:06:61,bridge_name='br-int',has_traffic_filtering=True,id=7adfcb53-33cb-482b-ba39-82d0ee72c4ea,network=Network(dd4097f8-dcdf-451c-8fbb-2057e86e375d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7adfcb53-33') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:25:03 compute-0 sudo[269972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:25:03 compute-0 sudo[269972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:25:03 compute-0 sudo[269972]: pam_unix(sudo:session): session closed for user root
Nov 25 16:25:03 compute-0 nova_compute[254092]: 2025-11-25 16:25:03.389 254096 DEBUG ovsdbapp.backend.ovs_idl [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 16:25:03 compute-0 nova_compute[254092]: 2025-11-25 16:25:03.389 254096 DEBUG ovsdbapp.backend.ovs_idl [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 16:25:03 compute-0 nova_compute[254092]: 2025-11-25 16:25:03.389 254096 DEBUG ovsdbapp.backend.ovs_idl [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 16:25:03 compute-0 nova_compute[254092]: 2025-11-25 16:25:03.390 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 25 16:25:03 compute-0 nova_compute[254092]: 2025-11-25 16:25:03.390 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [POLLOUT] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:03 compute-0 nova_compute[254092]: 2025-11-25 16:25:03.391 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 25 16:25:03 compute-0 nova_compute[254092]: 2025-11-25 16:25:03.391 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:03 compute-0 nova_compute[254092]: 2025-11-25 16:25:03.392 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:03 compute-0 nova_compute[254092]: 2025-11-25 16:25:03.394 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:03 compute-0 nova_compute[254092]: 2025-11-25 16:25:03.402 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:03 compute-0 nova_compute[254092]: 2025-11-25 16:25:03.403 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:25:03 compute-0 sudo[269997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:25:03 compute-0 nova_compute[254092]: 2025-11-25 16:25:03.403 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:25:03 compute-0 nova_compute[254092]: 2025-11-25 16:25:03.404 254096 INFO oslo.privsep.daemon [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmphzcrj_n1/privsep.sock']
Nov 25 16:25:03 compute-0 sudo[269997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:25:03 compute-0 sudo[269997]: pam_unix(sudo:session): session closed for user root
Nov 25 16:25:03 compute-0 sudo[270023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:25:03 compute-0 sudo[270023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:25:03 compute-0 sudo[270023]: pam_unix(sudo:session): session closed for user root
Nov 25 16:25:03 compute-0 sudo[270050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 16:25:03 compute-0 sudo[270050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:25:03 compute-0 podman[270116]: 2025-11-25 16:25:03.868468684 +0000 UTC m=+0.063399396 container create 382394c60cdeb39fa80f8424a2a2554aa085b12559d038e87eec55ecac463bde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_edison, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:25:03 compute-0 ceph-mon[74985]: pgmap v1095: 321 pgs: 321 active+clean; 122 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 109 op/s
Nov 25 16:25:03 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/136750213' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:25:03 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:25:03 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2323743500' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:25:03 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:25:03 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:25:03 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:25:03 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:25:03 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:25:03 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:25:03 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:25:03 compute-0 systemd[1]: Started libpod-conmon-382394c60cdeb39fa80f8424a2a2554aa085b12559d038e87eec55ecac463bde.scope.
Nov 25 16:25:03 compute-0 podman[270116]: 2025-11-25 16:25:03.830948363 +0000 UTC m=+0.025879095 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:25:03 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:25:03 compute-0 podman[270116]: 2025-11-25 16:25:03.983483664 +0000 UTC m=+0.178414406 container init 382394c60cdeb39fa80f8424a2a2554aa085b12559d038e87eec55ecac463bde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 16:25:03 compute-0 podman[270116]: 2025-11-25 16:25:03.992982042 +0000 UTC m=+0.187912764 container start 382394c60cdeb39fa80f8424a2a2554aa085b12559d038e87eec55ecac463bde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_edison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 16:25:03 compute-0 eager_edison[270132]: 167 167
Nov 25 16:25:03 compute-0 podman[270116]: 2025-11-25 16:25:03.9987792 +0000 UTC m=+0.193709932 container attach 382394c60cdeb39fa80f8424a2a2554aa085b12559d038e87eec55ecac463bde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_edison, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 16:25:03 compute-0 systemd[1]: libpod-382394c60cdeb39fa80f8424a2a2554aa085b12559d038e87eec55ecac463bde.scope: Deactivated successfully.
Nov 25 16:25:04 compute-0 podman[270116]: 2025-11-25 16:25:04.000248761 +0000 UTC m=+0.195179483 container died 382394c60cdeb39fa80f8424a2a2554aa085b12559d038e87eec55ecac463bde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_edison, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 16:25:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f62538e57847b148cc866dc33ad70edf69d1d49d96287a30462150cfd2ab06a-merged.mount: Deactivated successfully.
Nov 25 16:25:04 compute-0 podman[270116]: 2025-11-25 16:25:04.11744899 +0000 UTC m=+0.312379722 container remove 382394c60cdeb39fa80f8424a2a2554aa085b12559d038e87eec55ecac463bde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:25:04 compute-0 systemd[1]: libpod-conmon-382394c60cdeb39fa80f8424a2a2554aa085b12559d038e87eec55ecac463bde.scope: Deactivated successfully.
Nov 25 16:25:04 compute-0 nova_compute[254092]: 2025-11-25 16:25:04.151 254096 DEBUG nova.network.neutron [req-d391eb5e-3f09-47b3-baba-8f2ff85b8154 req-0c61d149-3cdb-490e-bdce-24a8c4a80fb1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Updated VIF entry in instance network info cache for port 7adfcb53-33cb-482b-ba39-82d0ee72c4ea. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:25:04 compute-0 nova_compute[254092]: 2025-11-25 16:25:04.152 254096 DEBUG nova.network.neutron [req-d391eb5e-3f09-47b3-baba-8f2ff85b8154 req-0c61d149-3cdb-490e-bdce-24a8c4a80fb1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Updating instance_info_cache with network_info: [{"id": "7adfcb53-33cb-482b-ba39-82d0ee72c4ea", "address": "fa:16:3e:7d:06:61", "network": {"id": "dd4097f8-dcdf-451c-8fbb-2057e86e375d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-256857767-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95df0d15c889499aba411e805ea145a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7adfcb53-33", "ovs_interfaceid": "7adfcb53-33cb-482b-ba39-82d0ee72c4ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:25:04 compute-0 nova_compute[254092]: 2025-11-25 16:25:04.169 254096 DEBUG oslo_concurrency.lockutils [req-d391eb5e-3f09-47b3-baba-8f2ff85b8154 req-0c61d149-3cdb-490e-bdce-24a8c4a80fb1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-71cf0ae0-6191-4b64-9a81-a955d807ceb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:25:04 compute-0 podman[270155]: 2025-11-25 16:25:04.342966137 +0000 UTC m=+0.100385423 container create 963d66119e0c3de356b09d40406c733f301c5954b8b1720d302431b8a386272c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 16:25:04 compute-0 podman[270155]: 2025-11-25 16:25:04.265693295 +0000 UTC m=+0.023112601 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:25:04 compute-0 systemd[1]: Started libpod-conmon-963d66119e0c3de356b09d40406c733f301c5954b8b1720d302431b8a386272c.scope.
Nov 25 16:25:04 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:25:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f61647d6267785a13ccd896b13757cfa49dc88a581617aff1515ee66249ee12b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:25:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f61647d6267785a13ccd896b13757cfa49dc88a581617aff1515ee66249ee12b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:25:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f61647d6267785a13ccd896b13757cfa49dc88a581617aff1515ee66249ee12b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:25:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f61647d6267785a13ccd896b13757cfa49dc88a581617aff1515ee66249ee12b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:25:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f61647d6267785a13ccd896b13757cfa49dc88a581617aff1515ee66249ee12b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 16:25:04 compute-0 nova_compute[254092]: 2025-11-25 16:25:04.455 254096 INFO oslo.privsep.daemon [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Spawned new privsep daemon via rootwrap
Nov 25 16:25:04 compute-0 nova_compute[254092]: 2025-11-25 16:25:04.327 270169 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 25 16:25:04 compute-0 nova_compute[254092]: 2025-11-25 16:25:04.331 270169 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 25 16:25:04 compute-0 nova_compute[254092]: 2025-11-25 16:25:04.333 270169 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Nov 25 16:25:04 compute-0 nova_compute[254092]: 2025-11-25 16:25:04.333 270169 INFO oslo.privsep.daemon [-] privsep daemon running as pid 270169
Nov 25 16:25:04 compute-0 podman[270155]: 2025-11-25 16:25:04.528424614 +0000 UTC m=+0.285843920 container init 963d66119e0c3de356b09d40406c733f301c5954b8b1720d302431b8a386272c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_ritchie, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:25:04 compute-0 podman[270155]: 2025-11-25 16:25:04.53523065 +0000 UTC m=+0.292649946 container start 963d66119e0c3de356b09d40406c733f301c5954b8b1720d302431b8a386272c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:25:04 compute-0 podman[270155]: 2025-11-25 16:25:04.558877113 +0000 UTC m=+0.316296419 container attach 963d66119e0c3de356b09d40406c733f301c5954b8b1720d302431b8a386272c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_ritchie, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Nov 25 16:25:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1096: 321 pgs: 321 active+clean; 122 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Nov 25 16:25:04 compute-0 nova_compute[254092]: 2025-11-25 16:25:04.818 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:04 compute-0 nova_compute[254092]: 2025-11-25 16:25:04.819 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7adfcb53-33, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:25:04 compute-0 nova_compute[254092]: 2025-11-25 16:25:04.820 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7adfcb53-33, col_values=(('external_ids', {'iface-id': '7adfcb53-33cb-482b-ba39-82d0ee72c4ea', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7d:06:61', 'vm-uuid': '71cf0ae0-6191-4b64-9a81-a955d807ceb4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:25:04 compute-0 NetworkManager[48891]: <info>  [1764087904.8220] manager: (tap7adfcb53-33): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Nov 25 16:25:04 compute-0 nova_compute[254092]: 2025-11-25 16:25:04.824 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:25:04 compute-0 nova_compute[254092]: 2025-11-25 16:25:04.832 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:04 compute-0 nova_compute[254092]: 2025-11-25 16:25:04.833 254096 INFO os_vif [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:06:61,bridge_name='br-int',has_traffic_filtering=True,id=7adfcb53-33cb-482b-ba39-82d0ee72c4ea,network=Network(dd4097f8-dcdf-451c-8fbb-2057e86e375d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7adfcb53-33')
Nov 25 16:25:04 compute-0 ceph-mon[74985]: pgmap v1096: 321 pgs: 321 active+clean; 122 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Nov 25 16:25:05 compute-0 nova_compute[254092]: 2025-11-25 16:25:05.058 254096 DEBUG nova.virt.libvirt.driver [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:25:05 compute-0 nova_compute[254092]: 2025-11-25 16:25:05.058 254096 DEBUG nova.virt.libvirt.driver [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:25:05 compute-0 nova_compute[254092]: 2025-11-25 16:25:05.058 254096 DEBUG nova.virt.libvirt.driver [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] No VIF found with MAC fa:16:3e:7d:06:61, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:25:05 compute-0 nova_compute[254092]: 2025-11-25 16:25:05.059 254096 INFO nova.virt.libvirt.driver [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Using config drive
Nov 25 16:25:05 compute-0 nova_compute[254092]: 2025-11-25 16:25:05.078 254096 DEBUG nova.storage.rbd_utils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] rbd image 71cf0ae0-6191-4b64-9a81-a955d807ceb4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:25:06 compute-0 hardcore_ritchie[270172]: --> passed data devices: 0 physical, 3 LVM
Nov 25 16:25:06 compute-0 hardcore_ritchie[270172]: --> relative data size: 1.0
Nov 25 16:25:06 compute-0 hardcore_ritchie[270172]: --> All data devices are unavailable
Nov 25 16:25:06 compute-0 nova_compute[254092]: 2025-11-25 16:25:06.015 254096 INFO nova.virt.libvirt.driver [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Creating config drive at /var/lib/nova/instances/71cf0ae0-6191-4b64-9a81-a955d807ceb4/disk.config
Nov 25 16:25:06 compute-0 nova_compute[254092]: 2025-11-25 16:25:06.024 254096 DEBUG oslo_concurrency.processutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/71cf0ae0-6191-4b64-9a81-a955d807ceb4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxvoratgj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:25:06 compute-0 systemd[1]: libpod-963d66119e0c3de356b09d40406c733f301c5954b8b1720d302431b8a386272c.scope: Deactivated successfully.
Nov 25 16:25:06 compute-0 podman[270155]: 2025-11-25 16:25:06.044003479 +0000 UTC m=+1.801422765 container died 963d66119e0c3de356b09d40406c733f301c5954b8b1720d302431b8a386272c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_ritchie, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 16:25:06 compute-0 systemd[1]: libpod-963d66119e0c3de356b09d40406c733f301c5954b8b1720d302431b8a386272c.scope: Consumed 1.119s CPU time.
Nov 25 16:25:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:06.047 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:25:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:06.048 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 16:25:06 compute-0 nova_compute[254092]: 2025-11-25 16:25:06.048 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-f61647d6267785a13ccd896b13757cfa49dc88a581617aff1515ee66249ee12b-merged.mount: Deactivated successfully.
Nov 25 16:25:06 compute-0 nova_compute[254092]: 2025-11-25 16:25:06.153 254096 DEBUG oslo_concurrency.processutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/71cf0ae0-6191-4b64-9a81-a955d807ceb4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxvoratgj" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:25:06 compute-0 nova_compute[254092]: 2025-11-25 16:25:06.175 254096 DEBUG nova.storage.rbd_utils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] rbd image 71cf0ae0-6191-4b64-9a81-a955d807ceb4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:25:06 compute-0 nova_compute[254092]: 2025-11-25 16:25:06.180 254096 DEBUG oslo_concurrency.processutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/71cf0ae0-6191-4b64-9a81-a955d807ceb4/disk.config 71cf0ae0-6191-4b64-9a81-a955d807ceb4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:25:06 compute-0 podman[270155]: 2025-11-25 16:25:06.255432424 +0000 UTC m=+2.012851710 container remove 963d66119e0c3de356b09d40406c733f301c5954b8b1720d302431b8a386272c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:25:06 compute-0 systemd[1]: libpod-conmon-963d66119e0c3de356b09d40406c733f301c5954b8b1720d302431b8a386272c.scope: Deactivated successfully.
Nov 25 16:25:06 compute-0 sudo[270050]: pam_unix(sudo:session): session closed for user root
Nov 25 16:25:06 compute-0 sudo[270277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:25:06 compute-0 sudo[270277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:25:06 compute-0 sudo[270277]: pam_unix(sudo:session): session closed for user root
Nov 25 16:25:06 compute-0 sudo[270305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:25:06 compute-0 sudo[270305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:25:06 compute-0 sudo[270305]: pam_unix(sudo:session): session closed for user root
Nov 25 16:25:06 compute-0 sudo[270330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:25:06 compute-0 sudo[270330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:25:06 compute-0 sudo[270330]: pam_unix(sudo:session): session closed for user root
Nov 25 16:25:06 compute-0 nova_compute[254092]: 2025-11-25 16:25:06.514 254096 DEBUG oslo_concurrency.processutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/71cf0ae0-6191-4b64-9a81-a955d807ceb4/disk.config 71cf0ae0-6191-4b64-9a81-a955d807ceb4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:25:06 compute-0 nova_compute[254092]: 2025-11-25 16:25:06.515 254096 INFO nova.virt.libvirt.driver [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Deleting local config drive /var/lib/nova/instances/71cf0ae0-6191-4b64-9a81-a955d807ceb4/disk.config because it was imported into RBD.
Nov 25 16:25:06 compute-0 sudo[270355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 16:25:06 compute-0 sudo[270355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:25:06 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Nov 25 16:25:06 compute-0 kernel: tap7adfcb53-33: entered promiscuous mode
Nov 25 16:25:06 compute-0 NetworkManager[48891]: <info>  [1764087906.5995] manager: (tap7adfcb53-33): new Tun device (/org/freedesktop/NetworkManager/Devices/22)
Nov 25 16:25:06 compute-0 nova_compute[254092]: 2025-11-25 16:25:06.603 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:06 compute-0 ovn_controller[153477]: 2025-11-25T16:25:06Z|00027|binding|INFO|Claiming lport 7adfcb53-33cb-482b-ba39-82d0ee72c4ea for this chassis.
Nov 25 16:25:06 compute-0 ovn_controller[153477]: 2025-11-25T16:25:06Z|00028|binding|INFO|7adfcb53-33cb-482b-ba39-82d0ee72c4ea: Claiming fa:16:3e:7d:06:61 10.100.0.3
Nov 25 16:25:06 compute-0 nova_compute[254092]: 2025-11-25 16:25:06.607 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:06.633 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:06:61 10.100.0.3'], port_security=['fa:16:3e:7d:06:61 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '71cf0ae0-6191-4b64-9a81-a955d807ceb4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dd4097f8-dcdf-451c-8fbb-2057e86e375d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '95df0d15c889499aba411e805ea145a5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b9e0e128-a918-470b-9624-970f7d13e397', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89f0418d-ac4b-47f9-993a-6ad9c6384b37, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=7adfcb53-33cb-482b-ba39-82d0ee72c4ea) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:25:06 compute-0 systemd-udevd[270394]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:25:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:06.634 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 7adfcb53-33cb-482b-ba39-82d0ee72c4ea in datapath dd4097f8-dcdf-451c-8fbb-2057e86e375d bound to our chassis
Nov 25 16:25:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:06.636 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dd4097f8-dcdf-451c-8fbb-2057e86e375d
Nov 25 16:25:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:06.637 163338 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmp4a18eijb/privsep.sock']
Nov 25 16:25:06 compute-0 NetworkManager[48891]: <info>  [1764087906.6565] device (tap7adfcb53-33): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:25:06 compute-0 NetworkManager[48891]: <info>  [1764087906.6576] device (tap7adfcb53-33): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:25:06 compute-0 systemd-machined[216343]: New machine qemu-4-instance-00000004.
Nov 25 16:25:06 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Nov 25 16:25:06 compute-0 nova_compute[254092]: 2025-11-25 16:25:06.699 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:06 compute-0 ovn_controller[153477]: 2025-11-25T16:25:06Z|00029|binding|INFO|Setting lport 7adfcb53-33cb-482b-ba39-82d0ee72c4ea ovn-installed in OVS
Nov 25 16:25:06 compute-0 ovn_controller[153477]: 2025-11-25T16:25:06Z|00030|binding|INFO|Setting lport 7adfcb53-33cb-482b-ba39-82d0ee72c4ea up in Southbound
Nov 25 16:25:06 compute-0 nova_compute[254092]: 2025-11-25 16:25:06.705 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1097: 321 pgs: 321 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Nov 25 16:25:06 compute-0 podman[270450]: 2025-11-25 16:25:06.916845214 +0000 UTC m=+0.042463647 container create 049a4cd55a917e57a80f8ad1ff700ab486bba93de3c39f2e4ac30c4f4befc56d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bell, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:25:06 compute-0 nova_compute[254092]: 2025-11-25 16:25:06.954 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:06 compute-0 systemd[1]: Started libpod-conmon-049a4cd55a917e57a80f8ad1ff700ab486bba93de3c39f2e4ac30c4f4befc56d.scope.
Nov 25 16:25:06 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:25:06 compute-0 podman[270450]: 2025-11-25 16:25:06.899570034 +0000 UTC m=+0.025188487 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:25:06 compute-0 podman[270450]: 2025-11-25 16:25:06.998500476 +0000 UTC m=+0.124118929 container init 049a4cd55a917e57a80f8ad1ff700ab486bba93de3c39f2e4ac30c4f4befc56d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bell, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:25:07 compute-0 podman[270450]: 2025-11-25 16:25:07.007430818 +0000 UTC m=+0.133049251 container start 049a4cd55a917e57a80f8ad1ff700ab486bba93de3c39f2e4ac30c4f4befc56d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bell, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:25:07 compute-0 busy_bell[270468]: 167 167
Nov 25 16:25:07 compute-0 systemd[1]: libpod-049a4cd55a917e57a80f8ad1ff700ab486bba93de3c39f2e4ac30c4f4befc56d.scope: Deactivated successfully.
Nov 25 16:25:07 compute-0 conmon[270468]: conmon 049a4cd55a917e57a80f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-049a4cd55a917e57a80f8ad1ff700ab486bba93de3c39f2e4ac30c4f4befc56d.scope/container/memory.events
Nov 25 16:25:07 compute-0 podman[270450]: 2025-11-25 16:25:07.015027216 +0000 UTC m=+0.140645649 container attach 049a4cd55a917e57a80f8ad1ff700ab486bba93de3c39f2e4ac30c4f4befc56d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 16:25:07 compute-0 podman[270450]: 2025-11-25 16:25:07.018397927 +0000 UTC m=+0.144016380 container died 049a4cd55a917e57a80f8ad1ff700ab486bba93de3c39f2e4ac30c4f4befc56d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 16:25:07 compute-0 nova_compute[254092]: 2025-11-25 16:25:07.141 254096 DEBUG nova.compute.manager [req-e585552f-32be-4233-bfe8-7e1f8c977c46 req-2ee4a55e-b650-4e8c-ab0f-ca02af47e95f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Received event network-vif-plugged-7adfcb53-33cb-482b-ba39-82d0ee72c4ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:25:07 compute-0 nova_compute[254092]: 2025-11-25 16:25:07.143 254096 DEBUG oslo_concurrency.lockutils [req-e585552f-32be-4233-bfe8-7e1f8c977c46 req-2ee4a55e-b650-4e8c-ab0f-ca02af47e95f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "71cf0ae0-6191-4b64-9a81-a955d807ceb4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:25:07 compute-0 nova_compute[254092]: 2025-11-25 16:25:07.143 254096 DEBUG oslo_concurrency.lockutils [req-e585552f-32be-4233-bfe8-7e1f8c977c46 req-2ee4a55e-b650-4e8c-ab0f-ca02af47e95f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "71cf0ae0-6191-4b64-9a81-a955d807ceb4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:25:07 compute-0 nova_compute[254092]: 2025-11-25 16:25:07.144 254096 DEBUG oslo_concurrency.lockutils [req-e585552f-32be-4233-bfe8-7e1f8c977c46 req-2ee4a55e-b650-4e8c-ab0f-ca02af47e95f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "71cf0ae0-6191-4b64-9a81-a955d807ceb4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:25:07 compute-0 nova_compute[254092]: 2025-11-25 16:25:07.144 254096 DEBUG nova.compute.manager [req-e585552f-32be-4233-bfe8-7e1f8c977c46 req-2ee4a55e-b650-4e8c-ab0f-ca02af47e95f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Processing event network-vif-plugged-7adfcb53-33cb-482b-ba39-82d0ee72c4ea _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:25:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-6adf3984ab92d16509350a16b052e04d81ded97682c0c05264e9d4fd0bb704ff-merged.mount: Deactivated successfully.
Nov 25 16:25:07 compute-0 podman[270450]: 2025-11-25 16:25:07.219818559 +0000 UTC m=+0.345436992 container remove 049a4cd55a917e57a80f8ad1ff700ab486bba93de3c39f2e4ac30c4f4befc56d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:25:07 compute-0 systemd[1]: libpod-conmon-049a4cd55a917e57a80f8ad1ff700ab486bba93de3c39f2e4ac30c4f4befc56d.scope: Deactivated successfully.
Nov 25 16:25:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:07.362 163338 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 25 16:25:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:07.364 163338 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp4a18eijb/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 25 16:25:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:07.242 270486 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 25 16:25:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:07.245 270486 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 25 16:25:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:07.247 270486 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Nov 25 16:25:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:07.248 270486 INFO oslo.privsep.daemon [-] privsep daemon running as pid 270486
Nov 25 16:25:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:07.369 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[096b207e-37a7-4cfd-adf6-66a3318c3cbf]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:25:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:25:07 compute-0 podman[270510]: 2025-11-25 16:25:07.462943205 +0000 UTC m=+0.105041780 container create d72f1ec93be9a2e3fa5f49f954fd6dfbf4ecbb34b8cb3248d73b722f88a0a5c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_banzai, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 16:25:07 compute-0 podman[270510]: 2025-11-25 16:25:07.383179965 +0000 UTC m=+0.025278580 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:25:07 compute-0 systemd[1]: Started libpod-conmon-d72f1ec93be9a2e3fa5f49f954fd6dfbf4ecbb34b8cb3248d73b722f88a0a5c5.scope.
Nov 25 16:25:07 compute-0 nova_compute[254092]: 2025-11-25 16:25:07.526 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764087907.5262318, 71cf0ae0-6191-4b64-9a81-a955d807ceb4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:25:07 compute-0 nova_compute[254092]: 2025-11-25 16:25:07.527 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] VM Started (Lifecycle Event)
Nov 25 16:25:07 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:25:07 compute-0 nova_compute[254092]: 2025-11-25 16:25:07.532 254096 DEBUG nova.compute.manager [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:25:07 compute-0 nova_compute[254092]: 2025-11-25 16:25:07.535 254096 DEBUG nova.virt.libvirt.driver [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:25:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3a9a3eda90ca4825fd1b22179e9178c90ef190419bcfeb981f8bfc62c732abc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:25:07 compute-0 nova_compute[254092]: 2025-11-25 16:25:07.538 254096 INFO nova.virt.libvirt.driver [-] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Instance spawned successfully.
Nov 25 16:25:07 compute-0 nova_compute[254092]: 2025-11-25 16:25:07.539 254096 DEBUG nova.virt.libvirt.driver [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:25:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3a9a3eda90ca4825fd1b22179e9178c90ef190419bcfeb981f8bfc62c732abc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:25:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3a9a3eda90ca4825fd1b22179e9178c90ef190419bcfeb981f8bfc62c732abc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:25:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3a9a3eda90ca4825fd1b22179e9178c90ef190419bcfeb981f8bfc62c732abc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:25:07 compute-0 nova_compute[254092]: 2025-11-25 16:25:07.557 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:25:07 compute-0 nova_compute[254092]: 2025-11-25 16:25:07.564 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:25:07 compute-0 nova_compute[254092]: 2025-11-25 16:25:07.566 254096 DEBUG nova.virt.libvirt.driver [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:25:07 compute-0 nova_compute[254092]: 2025-11-25 16:25:07.567 254096 DEBUG nova.virt.libvirt.driver [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:25:07 compute-0 nova_compute[254092]: 2025-11-25 16:25:07.567 254096 DEBUG nova.virt.libvirt.driver [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:25:07 compute-0 nova_compute[254092]: 2025-11-25 16:25:07.567 254096 DEBUG nova.virt.libvirt.driver [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:25:07 compute-0 nova_compute[254092]: 2025-11-25 16:25:07.568 254096 DEBUG nova.virt.libvirt.driver [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:25:07 compute-0 nova_compute[254092]: 2025-11-25 16:25:07.568 254096 DEBUG nova.virt.libvirt.driver [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:25:07 compute-0 podman[270510]: 2025-11-25 16:25:07.583950738 +0000 UTC m=+0.226049333 container init d72f1ec93be9a2e3fa5f49f954fd6dfbf4ecbb34b8cb3248d73b722f88a0a5c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_banzai, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:25:07 compute-0 podman[270510]: 2025-11-25 16:25:07.593961531 +0000 UTC m=+0.236060086 container start d72f1ec93be9a2e3fa5f49f954fd6dfbf4ecbb34b8cb3248d73b722f88a0a5c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_banzai, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 16:25:07 compute-0 nova_compute[254092]: 2025-11-25 16:25:07.595 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:25:07 compute-0 nova_compute[254092]: 2025-11-25 16:25:07.595 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764087907.526333, 71cf0ae0-6191-4b64-9a81-a955d807ceb4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:25:07 compute-0 nova_compute[254092]: 2025-11-25 16:25:07.596 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] VM Paused (Lifecycle Event)
Nov 25 16:25:07 compute-0 podman[270510]: 2025-11-25 16:25:07.604100457 +0000 UTC m=+0.246199052 container attach d72f1ec93be9a2e3fa5f49f954fd6dfbf4ecbb34b8cb3248d73b722f88a0a5c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_banzai, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:25:07 compute-0 nova_compute[254092]: 2025-11-25 16:25:07.629 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:25:07 compute-0 nova_compute[254092]: 2025-11-25 16:25:07.631 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764087907.535144, 71cf0ae0-6191-4b64-9a81-a955d807ceb4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:25:07 compute-0 nova_compute[254092]: 2025-11-25 16:25:07.632 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] VM Resumed (Lifecycle Event)
Nov 25 16:25:07 compute-0 nova_compute[254092]: 2025-11-25 16:25:07.646 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:25:07 compute-0 nova_compute[254092]: 2025-11-25 16:25:07.649 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:25:07 compute-0 nova_compute[254092]: 2025-11-25 16:25:07.658 254096 INFO nova.compute.manager [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Took 12.62 seconds to spawn the instance on the hypervisor.
Nov 25 16:25:07 compute-0 nova_compute[254092]: 2025-11-25 16:25:07.659 254096 DEBUG nova.compute.manager [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:25:07 compute-0 nova_compute[254092]: 2025-11-25 16:25:07.684 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:25:07 compute-0 nova_compute[254092]: 2025-11-25 16:25:07.772 254096 INFO nova.compute.manager [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Took 13.58 seconds to build instance.
Nov 25 16:25:07 compute-0 nova_compute[254092]: 2025-11-25 16:25:07.816 254096 DEBUG oslo_concurrency.lockutils [None req-0763e648-b430-44be-be89-ad51a2dd2a47 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "71cf0ae0-6191-4b64-9a81-a955d807ceb4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:25:07 compute-0 ceph-mon[74985]: pgmap v1097: 321 pgs: 321 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Nov 25 16:25:08 compute-0 nervous_banzai[270553]: {
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:     "0": [
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:         {
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:             "devices": [
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:                 "/dev/loop3"
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:             ],
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:             "lv_name": "ceph_lv0",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:             "lv_size": "21470642176",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:             "name": "ceph_lv0",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:             "tags": {
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:                 "ceph.cluster_name": "ceph",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:                 "ceph.crush_device_class": "",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:                 "ceph.encrypted": "0",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:                 "ceph.osd_id": "0",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:                 "ceph.type": "block",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:                 "ceph.vdo": "0"
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:             },
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:             "type": "block",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:             "vg_name": "ceph_vg0"
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:         }
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:     ],
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:     "1": [
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:         {
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:             "devices": [
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:                 "/dev/loop4"
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:             ],
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:             "lv_name": "ceph_lv1",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:             "lv_size": "21470642176",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:             "name": "ceph_lv1",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:             "tags": {
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:                 "ceph.cluster_name": "ceph",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:                 "ceph.crush_device_class": "",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:                 "ceph.encrypted": "0",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:                 "ceph.osd_id": "1",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:                 "ceph.type": "block",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:                 "ceph.vdo": "0"
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:             },
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:             "type": "block",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:             "vg_name": "ceph_vg1"
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:         }
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:     ],
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:     "2": [
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:         {
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:             "devices": [
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:                 "/dev/loop5"
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:             ],
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:             "lv_name": "ceph_lv2",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:             "lv_size": "21470642176",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:             "name": "ceph_lv2",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:             "tags": {
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:                 "ceph.cluster_name": "ceph",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:                 "ceph.crush_device_class": "",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:                 "ceph.encrypted": "0",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:                 "ceph.osd_id": "2",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:                 "ceph.type": "block",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:                 "ceph.vdo": "0"
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:             },
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:             "type": "block",
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:             "vg_name": "ceph_vg2"
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:         }
Nov 25 16:25:08 compute-0 nervous_banzai[270553]:     ]
Nov 25 16:25:08 compute-0 nervous_banzai[270553]: }
Nov 25 16:25:08 compute-0 systemd[1]: libpod-d72f1ec93be9a2e3fa5f49f954fd6dfbf4ecbb34b8cb3248d73b722f88a0a5c5.scope: Deactivated successfully.
Nov 25 16:25:08 compute-0 podman[270510]: 2025-11-25 16:25:08.372835678 +0000 UTC m=+1.014934243 container died d72f1ec93be9a2e3fa5f49f954fd6dfbf4ecbb34b8cb3248d73b722f88a0a5c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:25:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3a9a3eda90ca4825fd1b22179e9178c90ef190419bcfeb981f8bfc62c732abc-merged.mount: Deactivated successfully.
Nov 25 16:25:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1098: 321 pgs: 321 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 1.4 MiB/s wr, 51 op/s
Nov 25 16:25:09 compute-0 ceph-mon[74985]: pgmap v1098: 321 pgs: 321 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 1.4 MiB/s wr, 51 op/s
Nov 25 16:25:09 compute-0 podman[270510]: 2025-11-25 16:25:09.447121023 +0000 UTC m=+2.089219588 container remove d72f1ec93be9a2e3fa5f49f954fd6dfbf4ecbb34b8cb3248d73b722f88a0a5c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:25:09 compute-0 nova_compute[254092]: 2025-11-25 16:25:09.452 254096 DEBUG nova.compute.manager [req-20851e4c-cd8c-4852-a0e3-2e12d5699f0a req-306f0094-3b6f-4aca-aef4-f9126c3015ac a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Received event network-vif-plugged-7adfcb53-33cb-482b-ba39-82d0ee72c4ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:25:09 compute-0 nova_compute[254092]: 2025-11-25 16:25:09.452 254096 DEBUG oslo_concurrency.lockutils [req-20851e4c-cd8c-4852-a0e3-2e12d5699f0a req-306f0094-3b6f-4aca-aef4-f9126c3015ac a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "71cf0ae0-6191-4b64-9a81-a955d807ceb4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:25:09 compute-0 nova_compute[254092]: 2025-11-25 16:25:09.452 254096 DEBUG oslo_concurrency.lockutils [req-20851e4c-cd8c-4852-a0e3-2e12d5699f0a req-306f0094-3b6f-4aca-aef4-f9126c3015ac a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "71cf0ae0-6191-4b64-9a81-a955d807ceb4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:25:09 compute-0 nova_compute[254092]: 2025-11-25 16:25:09.453 254096 DEBUG oslo_concurrency.lockutils [req-20851e4c-cd8c-4852-a0e3-2e12d5699f0a req-306f0094-3b6f-4aca-aef4-f9126c3015ac a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "71cf0ae0-6191-4b64-9a81-a955d807ceb4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:25:09 compute-0 nova_compute[254092]: 2025-11-25 16:25:09.453 254096 DEBUG nova.compute.manager [req-20851e4c-cd8c-4852-a0e3-2e12d5699f0a req-306f0094-3b6f-4aca-aef4-f9126c3015ac a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] No waiting events found dispatching network-vif-plugged-7adfcb53-33cb-482b-ba39-82d0ee72c4ea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:25:09 compute-0 nova_compute[254092]: 2025-11-25 16:25:09.453 254096 WARNING nova.compute.manager [req-20851e4c-cd8c-4852-a0e3-2e12d5699f0a req-306f0094-3b6f-4aca-aef4-f9126c3015ac a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Received unexpected event network-vif-plugged-7adfcb53-33cb-482b-ba39-82d0ee72c4ea for instance with vm_state active and task_state None.
Nov 25 16:25:09 compute-0 sudo[270355]: pam_unix(sudo:session): session closed for user root
Nov 25 16:25:09 compute-0 systemd[1]: libpod-conmon-d72f1ec93be9a2e3fa5f49f954fd6dfbf4ecbb34b8cb3248d73b722f88a0a5c5.scope: Deactivated successfully.
Nov 25 16:25:09 compute-0 sudo[270574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:25:09 compute-0 sudo[270574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:25:09 compute-0 sudo[270574]: pam_unix(sudo:session): session closed for user root
Nov 25 16:25:09 compute-0 sudo[270599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:25:09 compute-0 sudo[270599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:25:09 compute-0 sudo[270599]: pam_unix(sudo:session): session closed for user root
Nov 25 16:25:09 compute-0 sudo[270624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:25:09 compute-0 sudo[270624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:25:09 compute-0 sudo[270624]: pam_unix(sudo:session): session closed for user root
Nov 25 16:25:09 compute-0 sudo[270649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 16:25:09 compute-0 sudo[270649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:25:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:09.788 270486 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:25:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:09.788 270486 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:25:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:09.789 270486 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:25:09 compute-0 nova_compute[254092]: 2025-11-25 16:25:09.821 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:25:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:25:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:25:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:25:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:25:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:25:10 compute-0 podman[270713]: 2025-11-25 16:25:10.081731214 +0000 UTC m=+0.022307918 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:25:10 compute-0 podman[270713]: 2025-11-25 16:25:10.198711808 +0000 UTC m=+0.139288492 container create d904565878cdc0c0ce84a109b54ea40179b9895a866edc60f1a0e4ad99ed5ff1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 16:25:10 compute-0 systemd[1]: Started libpod-conmon-d904565878cdc0c0ce84a109b54ea40179b9895a866edc60f1a0e4ad99ed5ff1.scope.
Nov 25 16:25:10 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:25:10 compute-0 podman[270713]: 2025-11-25 16:25:10.529335436 +0000 UTC m=+0.469912140 container init d904565878cdc0c0ce84a109b54ea40179b9895a866edc60f1a0e4ad99ed5ff1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_brattain, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 16:25:10 compute-0 podman[270713]: 2025-11-25 16:25:10.540559751 +0000 UTC m=+0.481136435 container start d904565878cdc0c0ce84a109b54ea40179b9895a866edc60f1a0e4ad99ed5ff1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Nov 25 16:25:10 compute-0 unruffled_brattain[270730]: 167 167
Nov 25 16:25:10 compute-0 systemd[1]: libpod-d904565878cdc0c0ce84a109b54ea40179b9895a866edc60f1a0e4ad99ed5ff1.scope: Deactivated successfully.
Nov 25 16:25:10 compute-0 podman[270713]: 2025-11-25 16:25:10.597019718 +0000 UTC m=+0.537596402 container attach d904565878cdc0c0ce84a109b54ea40179b9895a866edc60f1a0e4ad99ed5ff1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_brattain, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 16:25:10 compute-0 podman[270713]: 2025-11-25 16:25:10.597539372 +0000 UTC m=+0.538116056 container died d904565878cdc0c0ce84a109b54ea40179b9895a866edc60f1a0e4ad99ed5ff1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:25:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1099: 321 pgs: 321 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.4 MiB/s wr, 102 op/s
Nov 25 16:25:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-acc10c22d8a9af846670dfb51fa57b11724cc369e4fdcc87e904b26a01d70e37-merged.mount: Deactivated successfully.
Nov 25 16:25:11 compute-0 ceph-mon[74985]: pgmap v1099: 321 pgs: 321 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.4 MiB/s wr, 102 op/s
Nov 25 16:25:11 compute-0 podman[270713]: 2025-11-25 16:25:11.214925183 +0000 UTC m=+1.155501887 container remove d904565878cdc0c0ce84a109b54ea40179b9895a866edc60f1a0e4ad99ed5ff1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_brattain, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:25:11 compute-0 systemd[1]: libpod-conmon-d904565878cdc0c0ce84a109b54ea40179b9895a866edc60f1a0e4ad99ed5ff1.scope: Deactivated successfully.
Nov 25 16:25:11 compute-0 podman[270754]: 2025-11-25 16:25:11.367383753 +0000 UTC m=+0.025497095 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:25:11 compute-0 podman[270754]: 2025-11-25 16:25:11.640459034 +0000 UTC m=+0.298572366 container create a4a136145a5e5189e95a83833dac3146bc380f0d36bb070a24fe2884ed003e9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 16:25:11 compute-0 systemd[1]: Started libpod-conmon-a4a136145a5e5189e95a83833dac3146bc380f0d36bb070a24fe2884ed003e9a.scope.
Nov 25 16:25:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:25:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ffa76d947d3ed534ef7c0b422f915df4cbbeb6d3192be88e9d5efe22d9b64f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:25:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ffa76d947d3ed534ef7c0b422f915df4cbbeb6d3192be88e9d5efe22d9b64f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:25:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ffa76d947d3ed534ef7c0b422f915df4cbbeb6d3192be88e9d5efe22d9b64f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:25:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ffa76d947d3ed534ef7c0b422f915df4cbbeb6d3192be88e9d5efe22d9b64f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:25:11 compute-0 podman[270754]: 2025-11-25 16:25:11.881443372 +0000 UTC m=+0.539556814 container init a4a136145a5e5189e95a83833dac3146bc380f0d36bb070a24fe2884ed003e9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:25:11 compute-0 podman[270754]: 2025-11-25 16:25:11.89314685 +0000 UTC m=+0.551260192 container start a4a136145a5e5189e95a83833dac3146bc380f0d36bb070a24fe2884ed003e9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:25:11 compute-0 nova_compute[254092]: 2025-11-25 16:25:11.957 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:11 compute-0 podman[270754]: 2025-11-25 16:25:11.964050791 +0000 UTC m=+0.622164203 container attach a4a136145a5e5189e95a83833dac3146bc380f0d36bb070a24fe2884ed003e9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 16:25:12 compute-0 sshd-session[270768]: Received disconnect from 80.94.93.119 port 53728:11:  [preauth]
Nov 25 16:25:12 compute-0 sshd-session[270768]: Disconnected from authenticating user root 80.94.93.119 port 53728 [preauth]
Nov 25 16:25:12 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:25:12 compute-0 nova_compute[254092]: 2025-11-25 16:25:12.447 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764087897.445856, d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:25:12 compute-0 nova_compute[254092]: 2025-11-25 16:25:12.448 254096 INFO nova.compute.manager [-] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] VM Stopped (Lifecycle Event)
Nov 25 16:25:12 compute-0 nova_compute[254092]: 2025-11-25 16:25:12.471 254096 DEBUG nova.compute.manager [None req-d0378137-4191-4176-a6b2-cd3084e3fd56 - - - - - -] [instance: d2ac1be4-c9e7-4e7b-8142-b00fa9c40bce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:25:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1100: 321 pgs: 321 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 93 op/s
Nov 25 16:25:12 compute-0 hopeful_wozniak[270773]: {
Nov 25 16:25:12 compute-0 hopeful_wozniak[270773]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 16:25:12 compute-0 hopeful_wozniak[270773]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:25:12 compute-0 hopeful_wozniak[270773]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 16:25:12 compute-0 hopeful_wozniak[270773]:         "osd_id": 1,
Nov 25 16:25:12 compute-0 hopeful_wozniak[270773]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:25:12 compute-0 hopeful_wozniak[270773]:         "type": "bluestore"
Nov 25 16:25:12 compute-0 hopeful_wozniak[270773]:     },
Nov 25 16:25:12 compute-0 hopeful_wozniak[270773]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 16:25:12 compute-0 hopeful_wozniak[270773]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:25:12 compute-0 hopeful_wozniak[270773]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 16:25:12 compute-0 hopeful_wozniak[270773]:         "osd_id": 2,
Nov 25 16:25:12 compute-0 hopeful_wozniak[270773]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:25:12 compute-0 hopeful_wozniak[270773]:         "type": "bluestore"
Nov 25 16:25:12 compute-0 hopeful_wozniak[270773]:     },
Nov 25 16:25:12 compute-0 hopeful_wozniak[270773]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 16:25:12 compute-0 hopeful_wozniak[270773]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:25:12 compute-0 hopeful_wozniak[270773]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 16:25:12 compute-0 hopeful_wozniak[270773]:         "osd_id": 0,
Nov 25 16:25:12 compute-0 hopeful_wozniak[270773]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:25:12 compute-0 hopeful_wozniak[270773]:         "type": "bluestore"
Nov 25 16:25:12 compute-0 hopeful_wozniak[270773]:     }
Nov 25 16:25:12 compute-0 hopeful_wozniak[270773]: }
Nov 25 16:25:12 compute-0 systemd[1]: libpod-a4a136145a5e5189e95a83833dac3146bc380f0d36bb070a24fe2884ed003e9a.scope: Deactivated successfully.
Nov 25 16:25:12 compute-0 podman[270754]: 2025-11-25 16:25:12.870406436 +0000 UTC m=+1.528519768 container died a4a136145a5e5189e95a83833dac3146bc380f0d36bb070a24fe2884ed003e9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:25:13 compute-0 ceph-mon[74985]: pgmap v1100: 321 pgs: 321 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 93 op/s
Nov 25 16:25:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:13.050 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:25:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ffa76d947d3ed534ef7c0b422f915df4cbbeb6d3192be88e9d5efe22d9b64f4-merged.mount: Deactivated successfully.
Nov 25 16:25:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:13.594 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:25:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:13.595 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:25:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:13.595 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:25:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:13.650 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[16b7479b-b138-4b79-a3a4-39fbfcfea5f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:25:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:13.652 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdd4097f8-d1 in ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:25:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:13.655 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdd4097f8-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:25:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:13.655 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a7472398-17db-421c-8487-db756f277a6a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:25:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:13.663 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9180679f-3ce7-48d9-8ba1-99500dc808d6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:25:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:13.692 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[bc48e43e-990d-46e4-9174-53d573296656]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:25:13 compute-0 podman[270754]: 2025-11-25 16:25:13.697445894 +0000 UTC m=+2.355559226 container remove a4a136145a5e5189e95a83833dac3146bc380f0d36bb070a24fe2884ed003e9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 16:25:13 compute-0 systemd[1]: libpod-conmon-a4a136145a5e5189e95a83833dac3146bc380f0d36bb070a24fe2884ed003e9a.scope: Deactivated successfully.
Nov 25 16:25:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:13.730 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[af713821-28d1-45cd-8b95-cd7c28e2d619]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:25:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:13.733 163338 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmp6in_ubzq/privsep.sock']
Nov 25 16:25:13 compute-0 sudo[270649]: pam_unix(sudo:session): session closed for user root
Nov 25 16:25:13 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:25:13 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:25:13 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:25:13 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:25:13 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 2fcbd9e6-acb0-40ab-b451-385104887e48 does not exist
Nov 25 16:25:13 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev e34d79f6-4496-4189-ad3c-1307eb1b79f1 does not exist
Nov 25 16:25:13 compute-0 sudo[270828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:25:13 compute-0 sudo[270828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:25:13 compute-0 sudo[270828]: pam_unix(sudo:session): session closed for user root
Nov 25 16:25:13 compute-0 sudo[270853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 16:25:13 compute-0 sudo[270853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:25:13 compute-0 sudo[270853]: pam_unix(sudo:session): session closed for user root
Nov 25 16:25:14 compute-0 NetworkManager[48891]: <info>  [1764087914.3632] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/23)
Nov 25 16:25:14 compute-0 NetworkManager[48891]: <info>  [1764087914.3640] device (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 16:25:14 compute-0 NetworkManager[48891]: <info>  [1764087914.3653] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/24)
Nov 25 16:25:14 compute-0 NetworkManager[48891]: <info>  [1764087914.3656] device (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 16:25:14 compute-0 NetworkManager[48891]: <info>  [1764087914.3667] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Nov 25 16:25:14 compute-0 NetworkManager[48891]: <info>  [1764087914.3673] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Nov 25 16:25:14 compute-0 NetworkManager[48891]: <info>  [1764087914.3678] device (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 25 16:25:14 compute-0 NetworkManager[48891]: <info>  [1764087914.3682] device (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 25 16:25:14 compute-0 nova_compute[254092]: 2025-11-25 16:25:14.368 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:14.409 163338 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 25 16:25:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:14.409 163338 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp6in_ubzq/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 25 16:25:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:14.256 270879 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 25 16:25:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:14.260 270879 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 25 16:25:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:14.262 270879 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 25 16:25:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:14.262 270879 INFO oslo.privsep.daemon [-] privsep daemon running as pid 270879
Nov 25 16:25:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:14.411 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c865618a-0e95-402b-a7a1-e63b26b61398]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:25:14 compute-0 nova_compute[254092]: 2025-11-25 16:25:14.434 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:14 compute-0 nova_compute[254092]: 2025-11-25 16:25:14.441 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:14 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:25:14 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:25:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1101: 321 pgs: 321 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 92 op/s
Nov 25 16:25:14 compute-0 nova_compute[254092]: 2025-11-25 16:25:14.822 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:14 compute-0 nova_compute[254092]: 2025-11-25 16:25:14.834 254096 DEBUG nova.compute.manager [req-9196bf2b-5c4c-4496-bf47-679c2a53564c req-ac871e68-e630-4de7-a702-0cd5af040182 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Received event network-changed-7adfcb53-33cb-482b-ba39-82d0ee72c4ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:25:14 compute-0 nova_compute[254092]: 2025-11-25 16:25:14.835 254096 DEBUG nova.compute.manager [req-9196bf2b-5c4c-4496-bf47-679c2a53564c req-ac871e68-e630-4de7-a702-0cd5af040182 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Refreshing instance network info cache due to event network-changed-7adfcb53-33cb-482b-ba39-82d0ee72c4ea. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:25:14 compute-0 nova_compute[254092]: 2025-11-25 16:25:14.835 254096 DEBUG oslo_concurrency.lockutils [req-9196bf2b-5c4c-4496-bf47-679c2a53564c req-ac871e68-e630-4de7-a702-0cd5af040182 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-71cf0ae0-6191-4b64-9a81-a955d807ceb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:25:14 compute-0 nova_compute[254092]: 2025-11-25 16:25:14.836 254096 DEBUG oslo_concurrency.lockutils [req-9196bf2b-5c4c-4496-bf47-679c2a53564c req-ac871e68-e630-4de7-a702-0cd5af040182 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-71cf0ae0-6191-4b64-9a81-a955d807ceb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:25:14 compute-0 nova_compute[254092]: 2025-11-25 16:25:14.836 254096 DEBUG nova.network.neutron [req-9196bf2b-5c4c-4496-bf47-679c2a53564c req-ac871e68-e630-4de7-a702-0cd5af040182 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Refreshing network info cache for port 7adfcb53-33cb-482b-ba39-82d0ee72c4ea _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:25:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:14.902 270879 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:25:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:14.902 270879 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:25:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:14.902 270879 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:25:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:15.481 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[80c7ef4e-2350-4a55-9b84-e2cdfdf1506c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:25:15 compute-0 NetworkManager[48891]: <info>  [1764087915.5047] manager: (tapdd4097f8-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/27)
Nov 25 16:25:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:15.504 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c6b6c04d-8c48-43aa-8720-513a82e22255]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:25:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:15.544 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[7bc06131-587b-4d1c-ab5b-d3dda791679b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:25:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:15.548 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1c210419-c4e9-46dc-b5fe-9e1071fad83b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:25:15 compute-0 NetworkManager[48891]: <info>  [1764087915.5749] device (tapdd4097f8-d0): carrier: link connected
Nov 25 16:25:15 compute-0 systemd-udevd[270928]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:25:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:15.581 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[2203ecf4-8a44-45fa-8273-0382fb7ae8f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:25:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:15.612 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8cfdb9dd-add5-42df-acb8-439e2cf270df]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdd4097f8-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:97:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437316, 'reachable_time': 34477, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270941, 'error': None, 'target': 'ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:25:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:15.629 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[637a82c1-4e0b-46d2-9036-460c2bd7afe1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feeb:9711'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437316, 'tstamp': 437316}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270968, 'error': None, 'target': 'ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:25:15 compute-0 podman[270889]: 2025-11-25 16:25:15.634391447 +0000 UTC m=+0.102875671 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:25:15 compute-0 podman[270892]: 2025-11-25 16:25:15.634463509 +0000 UTC m=+0.097846104 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 16:25:15 compute-0 podman[270891]: 2025-11-25 16:25:15.638314923 +0000 UTC m=+0.106928971 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Nov 25 16:25:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:15.647 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[63e668a0-ce4b-4267-9b6b-17015e45132e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdd4097f8-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:97:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437316, 'reachable_time': 34477, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 270972, 'error': None, 'target': 'ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:25:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:15.674 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f6a9f54f-eb58-4f25-ae9c-735a56da72cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:25:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:15.727 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a680a899-f1e2-49a1-a617-e13101be109d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:25:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:15.729 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdd4097f8-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:25:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:15.729 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:25:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:15.730 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdd4097f8-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:25:15 compute-0 kernel: tapdd4097f8-d0: entered promiscuous mode
Nov 25 16:25:15 compute-0 NetworkManager[48891]: <info>  [1764087915.7348] manager: (tapdd4097f8-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Nov 25 16:25:15 compute-0 nova_compute[254092]: 2025-11-25 16:25:15.731 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:15 compute-0 nova_compute[254092]: 2025-11-25 16:25:15.733 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:15.737 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdd4097f8-d0, col_values=(('external_ids', {'iface-id': '65cb2392-e609-45e8-bc45-ba0ce2e7d527'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:25:15 compute-0 ovn_controller[153477]: 2025-11-25T16:25:15Z|00031|binding|INFO|Releasing lport 65cb2392-e609-45e8-bc45-ba0ce2e7d527 from this chassis (sb_readonly=0)
Nov 25 16:25:15 compute-0 nova_compute[254092]: 2025-11-25 16:25:15.738 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:15.742 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/dd4097f8-dcdf-451c-8fbb-2057e86e375d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/dd4097f8-dcdf-451c-8fbb-2057e86e375d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:25:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:15.742 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a8187a8e-cdaa-4a3e-8c70-e2f676b9131a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:25:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:15.743 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:25:15 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:25:15 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:25:15 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-dd4097f8-dcdf-451c-8fbb-2057e86e375d
Nov 25 16:25:15 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:25:15 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:25:15 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:25:15 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/dd4097f8-dcdf-451c-8fbb-2057e86e375d.pid.haproxy
Nov 25 16:25:15 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:25:15 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:25:15 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:25:15 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:25:15 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:25:15 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:25:15 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:25:15 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:25:15 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:25:15 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:25:15 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:25:15 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:25:15 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:25:15 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:25:15 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:25:15 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:25:15 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:25:15 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:25:15 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:25:15 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:25:15 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID dd4097f8-dcdf-451c-8fbb-2057e86e375d
Nov 25 16:25:15 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:25:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:15.744 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d', 'env', 'PROCESS_TAG=haproxy-dd4097f8-dcdf-451c-8fbb-2057e86e375d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/dd4097f8-dcdf-451c-8fbb-2057e86e375d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:25:15 compute-0 nova_compute[254092]: 2025-11-25 16:25:15.751 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:15 compute-0 ceph-mon[74985]: pgmap v1101: 321 pgs: 321 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 92 op/s
Nov 25 16:25:16 compute-0 podman[271006]: 2025-11-25 16:25:16.081484195 +0000 UTC m=+0.053652142 container create c6bcd89d170844ae7a042ab4a203f7d0a5b973be6409f12115f7a36b45ad048e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 25 16:25:16 compute-0 systemd[1]: Started libpod-conmon-c6bcd89d170844ae7a042ab4a203f7d0a5b973be6409f12115f7a36b45ad048e.scope.
Nov 25 16:25:16 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:25:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a5ecbe109b46af30ce696e568d65890d385bb2e36f742ce025f714d7cdb26e1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:25:16 compute-0 podman[271006]: 2025-11-25 16:25:16.048871237 +0000 UTC m=+0.021039204 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:25:16 compute-0 podman[271006]: 2025-11-25 16:25:16.151631054 +0000 UTC m=+0.123799051 container init c6bcd89d170844ae7a042ab4a203f7d0a5b973be6409f12115f7a36b45ad048e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:25:16 compute-0 podman[271006]: 2025-11-25 16:25:16.156696251 +0000 UTC m=+0.128864198 container start c6bcd89d170844ae7a042ab4a203f7d0a5b973be6409f12115f7a36b45ad048e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:25:16 compute-0 neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d[271021]: [NOTICE]   (271025) : New worker (271027) forked
Nov 25 16:25:16 compute-0 neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d[271021]: [NOTICE]   (271025) : Loading success.
Nov 25 16:25:16 compute-0 nova_compute[254092]: 2025-11-25 16:25:16.493 254096 DEBUG nova.network.neutron [req-9196bf2b-5c4c-4496-bf47-679c2a53564c req-ac871e68-e630-4de7-a702-0cd5af040182 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Updated VIF entry in instance network info cache for port 7adfcb53-33cb-482b-ba39-82d0ee72c4ea. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:25:16 compute-0 nova_compute[254092]: 2025-11-25 16:25:16.493 254096 DEBUG nova.network.neutron [req-9196bf2b-5c4c-4496-bf47-679c2a53564c req-ac871e68-e630-4de7-a702-0cd5af040182 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Updating instance_info_cache with network_info: [{"id": "7adfcb53-33cb-482b-ba39-82d0ee72c4ea", "address": "fa:16:3e:7d:06:61", "network": {"id": "dd4097f8-dcdf-451c-8fbb-2057e86e375d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-256857767-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95df0d15c889499aba411e805ea145a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7adfcb53-33", "ovs_interfaceid": "7adfcb53-33cb-482b-ba39-82d0ee72c4ea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:25:16 compute-0 nova_compute[254092]: 2025-11-25 16:25:16.516 254096 DEBUG oslo_concurrency.lockutils [req-9196bf2b-5c4c-4496-bf47-679c2a53564c req-ac871e68-e630-4de7-a702-0cd5af040182 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-71cf0ae0-6191-4b64-9a81-a955d807ceb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:25:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1102: 321 pgs: 321 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 92 op/s
Nov 25 16:25:17 compute-0 nova_compute[254092]: 2025-11-25 16:25:17.011 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:17 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:25:17 compute-0 ceph-mon[74985]: pgmap v1102: 321 pgs: 321 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 92 op/s
Nov 25 16:25:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1103: 321 pgs: 321 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Nov 25 16:25:19 compute-0 ceph-mon[74985]: pgmap v1103: 321 pgs: 321 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Nov 25 16:25:19 compute-0 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 25 16:25:19 compute-0 nova_compute[254092]: 2025-11-25 16:25:19.823 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1104: 321 pgs: 321 active+clean; 95 MiB data, 217 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 996 KiB/s wr, 91 op/s
Nov 25 16:25:21 compute-0 ceph-mon[74985]: pgmap v1104: 321 pgs: 321 active+clean; 95 MiB data, 217 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 996 KiB/s wr, 91 op/s
Nov 25 16:25:22 compute-0 nova_compute[254092]: 2025-11-25 16:25:22.014 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:22 compute-0 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 25 16:25:22 compute-0 ovn_controller[153477]: 2025-11-25T16:25:22Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7d:06:61 10.100.0.3
Nov 25 16:25:22 compute-0 ovn_controller[153477]: 2025-11-25T16:25:22Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7d:06:61 10.100.0.3
Nov 25 16:25:22 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:25:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1105: 321 pgs: 321 active+clean; 107 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 599 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Nov 25 16:25:22 compute-0 ceph-mon[74985]: pgmap v1105: 321 pgs: 321 active+clean; 107 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 599 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Nov 25 16:25:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1106: 321 pgs: 321 active+clean; 107 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Nov 25 16:25:24 compute-0 nova_compute[254092]: 2025-11-25 16:25:24.825 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:25 compute-0 ceph-mon[74985]: pgmap v1106: 321 pgs: 321 active+clean; 107 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Nov 25 16:25:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1107: 321 pgs: 321 active+clean; 121 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 16:25:27 compute-0 nova_compute[254092]: 2025-11-25 16:25:27.018 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:27 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:25:27 compute-0 ceph-mon[74985]: pgmap v1107: 321 pgs: 321 active+clean; 121 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 16:25:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1108: 321 pgs: 321 active+clean; 121 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 16:25:28 compute-0 ceph-mon[74985]: pgmap v1108: 321 pgs: 321 active+clean; 121 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 16:25:29 compute-0 nova_compute[254092]: 2025-11-25 16:25:29.869 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1109: 321 pgs: 321 active+clean; 121 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 16:25:31 compute-0 nova_compute[254092]: 2025-11-25 16:25:31.314 254096 DEBUG oslo_concurrency.lockutils [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Acquiring lock "71cf0ae0-6191-4b64-9a81-a955d807ceb4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:25:31 compute-0 nova_compute[254092]: 2025-11-25 16:25:31.314 254096 DEBUG oslo_concurrency.lockutils [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "71cf0ae0-6191-4b64-9a81-a955d807ceb4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:25:31 compute-0 nova_compute[254092]: 2025-11-25 16:25:31.315 254096 DEBUG oslo_concurrency.lockutils [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Acquiring lock "71cf0ae0-6191-4b64-9a81-a955d807ceb4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:25:31 compute-0 nova_compute[254092]: 2025-11-25 16:25:31.315 254096 DEBUG oslo_concurrency.lockutils [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "71cf0ae0-6191-4b64-9a81-a955d807ceb4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:25:31 compute-0 nova_compute[254092]: 2025-11-25 16:25:31.315 254096 DEBUG oslo_concurrency.lockutils [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "71cf0ae0-6191-4b64-9a81-a955d807ceb4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:25:31 compute-0 nova_compute[254092]: 2025-11-25 16:25:31.316 254096 INFO nova.compute.manager [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Terminating instance
Nov 25 16:25:31 compute-0 nova_compute[254092]: 2025-11-25 16:25:31.317 254096 DEBUG nova.compute.manager [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:25:31 compute-0 kernel: tap7adfcb53-33 (unregistering): left promiscuous mode
Nov 25 16:25:31 compute-0 NetworkManager[48891]: <info>  [1764087931.4212] device (tap7adfcb53-33): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:25:31 compute-0 ovn_controller[153477]: 2025-11-25T16:25:31Z|00032|binding|INFO|Releasing lport 7adfcb53-33cb-482b-ba39-82d0ee72c4ea from this chassis (sb_readonly=0)
Nov 25 16:25:31 compute-0 ovn_controller[153477]: 2025-11-25T16:25:31Z|00033|binding|INFO|Setting lport 7adfcb53-33cb-482b-ba39-82d0ee72c4ea down in Southbound
Nov 25 16:25:31 compute-0 ovn_controller[153477]: 2025-11-25T16:25:31Z|00034|binding|INFO|Removing iface tap7adfcb53-33 ovn-installed in OVS
Nov 25 16:25:31 compute-0 nova_compute[254092]: 2025-11-25 16:25:31.432 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:31 compute-0 nova_compute[254092]: 2025-11-25 16:25:31.450 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:31.452 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:06:61 10.100.0.3'], port_security=['fa:16:3e:7d:06:61 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '71cf0ae0-6191-4b64-9a81-a955d807ceb4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dd4097f8-dcdf-451c-8fbb-2057e86e375d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '95df0d15c889499aba411e805ea145a5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b9e0e128-a918-470b-9624-970f7d13e397', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.244'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89f0418d-ac4b-47f9-993a-6ad9c6384b37, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=7adfcb53-33cb-482b-ba39-82d0ee72c4ea) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:25:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:31.454 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 7adfcb53-33cb-482b-ba39-82d0ee72c4ea in datapath dd4097f8-dcdf-451c-8fbb-2057e86e375d unbound from our chassis
Nov 25 16:25:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:31.456 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dd4097f8-dcdf-451c-8fbb-2057e86e375d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:25:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:31.457 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[40c03e34-eb65-4bbc-aa55-a7ad2b94e6af]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:25:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:31.457 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d namespace which is not needed anymore
Nov 25 16:25:31 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Nov 25 16:25:31 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 13.700s CPU time.
Nov 25 16:25:31 compute-0 systemd-machined[216343]: Machine qemu-4-instance-00000004 terminated.
Nov 25 16:25:31 compute-0 nova_compute[254092]: 2025-11-25 16:25:31.553 254096 INFO nova.virt.libvirt.driver [-] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Instance destroyed successfully.
Nov 25 16:25:31 compute-0 nova_compute[254092]: 2025-11-25 16:25:31.554 254096 DEBUG nova.objects.instance [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lazy-loading 'resources' on Instance uuid 71cf0ae0-6191-4b64-9a81-a955d807ceb4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:25:31 compute-0 nova_compute[254092]: 2025-11-25 16:25:31.566 254096 DEBUG nova.virt.libvirt.vif [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:24:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-531690935',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-531690935',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(9),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-531690935',id=4,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=9,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLSmbQTCEowz9Br3mA84W52215mjLyCGHTHsUJ1HsTz8qleFdGWCmNHBOWPSc/8w+47nHDrMi2gOITQ7Dn+rkiEiUtK0/z3HXn3ebs4Ev7qOse6EBs27GTPhFXHM2eKGPA==',key_name='tempest-keypair-494555026',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:25:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='95df0d15c889499aba411e805ea145a5',ramdisk_id='',reservation_id='r-88lf5wc3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-341738122',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-341738122-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:25:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='787cb8b4238c4926a4466f3421db09ef',uuid=71cf0ae0-6191-4b64-9a81-a955d807ceb4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7adfcb53-33cb-482b-ba39-82d0ee72c4ea", "address": "fa:16:3e:7d:06:61", "network": {"id": "dd4097f8-dcdf-451c-8fbb-2057e86e375d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-256857767-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95df0d15c889499aba411e805ea145a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7adfcb53-33", "ovs_interfaceid": "7adfcb53-33cb-482b-ba39-82d0ee72c4ea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:25:31 compute-0 nova_compute[254092]: 2025-11-25 16:25:31.567 254096 DEBUG nova.network.os_vif_util [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Converting VIF {"id": "7adfcb53-33cb-482b-ba39-82d0ee72c4ea", "address": "fa:16:3e:7d:06:61", "network": {"id": "dd4097f8-dcdf-451c-8fbb-2057e86e375d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-256857767-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95df0d15c889499aba411e805ea145a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7adfcb53-33", "ovs_interfaceid": "7adfcb53-33cb-482b-ba39-82d0ee72c4ea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:25:31 compute-0 nova_compute[254092]: 2025-11-25 16:25:31.568 254096 DEBUG nova.network.os_vif_util [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7d:06:61,bridge_name='br-int',has_traffic_filtering=True,id=7adfcb53-33cb-482b-ba39-82d0ee72c4ea,network=Network(dd4097f8-dcdf-451c-8fbb-2057e86e375d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7adfcb53-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:25:31 compute-0 nova_compute[254092]: 2025-11-25 16:25:31.568 254096 DEBUG os_vif [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7d:06:61,bridge_name='br-int',has_traffic_filtering=True,id=7adfcb53-33cb-482b-ba39-82d0ee72c4ea,network=Network(dd4097f8-dcdf-451c-8fbb-2057e86e375d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7adfcb53-33') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:25:31 compute-0 nova_compute[254092]: 2025-11-25 16:25:31.570 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:31 compute-0 nova_compute[254092]: 2025-11-25 16:25:31.571 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7adfcb53-33, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:25:31 compute-0 nova_compute[254092]: 2025-11-25 16:25:31.573 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:31 compute-0 nova_compute[254092]: 2025-11-25 16:25:31.575 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:25:31 compute-0 nova_compute[254092]: 2025-11-25 16:25:31.577 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:31 compute-0 nova_compute[254092]: 2025-11-25 16:25:31.580 254096 INFO os_vif [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7d:06:61,bridge_name='br-int',has_traffic_filtering=True,id=7adfcb53-33cb-482b-ba39-82d0ee72c4ea,network=Network(dd4097f8-dcdf-451c-8fbb-2057e86e375d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7adfcb53-33')
Nov 25 16:25:31 compute-0 neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d[271021]: [NOTICE]   (271025) : haproxy version is 2.8.14-c23fe91
Nov 25 16:25:31 compute-0 neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d[271021]: [NOTICE]   (271025) : path to executable is /usr/sbin/haproxy
Nov 25 16:25:31 compute-0 neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d[271021]: [WARNING]  (271025) : Exiting Master process...
Nov 25 16:25:31 compute-0 neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d[271021]: [WARNING]  (271025) : Exiting Master process...
Nov 25 16:25:31 compute-0 neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d[271021]: [ALERT]    (271025) : Current worker (271027) exited with code 143 (Terminated)
Nov 25 16:25:31 compute-0 neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d[271021]: [WARNING]  (271025) : All workers exited. Exiting... (0)
Nov 25 16:25:31 compute-0 systemd[1]: libpod-c6bcd89d170844ae7a042ab4a203f7d0a5b973be6409f12115f7a36b45ad048e.scope: Deactivated successfully.
Nov 25 16:25:31 compute-0 podman[271067]: 2025-11-25 16:25:31.612003919 +0000 UTC m=+0.054410791 container died c6bcd89d170844ae7a042ab4a203f7d0a5b973be6409f12115f7a36b45ad048e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:25:31 compute-0 nova_compute[254092]: 2025-11-25 16:25:31.857 254096 DEBUG nova.compute.manager [req-6c02a99f-5f26-4798-b3af-8da82d0e7f3f req-66a58530-dc06-4e7d-88f4-1f1a3d32c4e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Received event network-vif-unplugged-7adfcb53-33cb-482b-ba39-82d0ee72c4ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:25:31 compute-0 nova_compute[254092]: 2025-11-25 16:25:31.858 254096 DEBUG oslo_concurrency.lockutils [req-6c02a99f-5f26-4798-b3af-8da82d0e7f3f req-66a58530-dc06-4e7d-88f4-1f1a3d32c4e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "71cf0ae0-6191-4b64-9a81-a955d807ceb4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:25:31 compute-0 nova_compute[254092]: 2025-11-25 16:25:31.858 254096 DEBUG oslo_concurrency.lockutils [req-6c02a99f-5f26-4798-b3af-8da82d0e7f3f req-66a58530-dc06-4e7d-88f4-1f1a3d32c4e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "71cf0ae0-6191-4b64-9a81-a955d807ceb4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:25:31 compute-0 nova_compute[254092]: 2025-11-25 16:25:31.858 254096 DEBUG oslo_concurrency.lockutils [req-6c02a99f-5f26-4798-b3af-8da82d0e7f3f req-66a58530-dc06-4e7d-88f4-1f1a3d32c4e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "71cf0ae0-6191-4b64-9a81-a955d807ceb4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:25:31 compute-0 nova_compute[254092]: 2025-11-25 16:25:31.858 254096 DEBUG nova.compute.manager [req-6c02a99f-5f26-4798-b3af-8da82d0e7f3f req-66a58530-dc06-4e7d-88f4-1f1a3d32c4e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] No waiting events found dispatching network-vif-unplugged-7adfcb53-33cb-482b-ba39-82d0ee72c4ea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:25:31 compute-0 nova_compute[254092]: 2025-11-25 16:25:31.859 254096 DEBUG nova.compute.manager [req-6c02a99f-5f26-4798-b3af-8da82d0e7f3f req-66a58530-dc06-4e7d-88f4-1f1a3d32c4e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Received event network-vif-unplugged-7adfcb53-33cb-482b-ba39-82d0ee72c4ea for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:25:31 compute-0 ceph-mon[74985]: pgmap v1109: 321 pgs: 321 active+clean; 121 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 16:25:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c6bcd89d170844ae7a042ab4a203f7d0a5b973be6409f12115f7a36b45ad048e-userdata-shm.mount: Deactivated successfully.
Nov 25 16:25:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a5ecbe109b46af30ce696e568d65890d385bb2e36f742ce025f714d7cdb26e1-merged.mount: Deactivated successfully.
Nov 25 16:25:32 compute-0 podman[271067]: 2025-11-25 16:25:32.00187171 +0000 UTC m=+0.444278582 container cleanup c6bcd89d170844ae7a042ab4a203f7d0a5b973be6409f12115f7a36b45ad048e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 16:25:32 compute-0 systemd[1]: libpod-conmon-c6bcd89d170844ae7a042ab4a203f7d0a5b973be6409f12115f7a36b45ad048e.scope: Deactivated successfully.
Nov 25 16:25:32 compute-0 nova_compute[254092]: 2025-11-25 16:25:32.019 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:32 compute-0 podman[271122]: 2025-11-25 16:25:32.088824896 +0000 UTC m=+0.051111863 container remove c6bcd89d170844ae7a042ab4a203f7d0a5b973be6409f12115f7a36b45ad048e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 25 16:25:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:32.097 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c7d0d960-4bee-42a8-9e56-4c4c4ee49bf5]: (4, ('Tue Nov 25 04:25:31 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d (c6bcd89d170844ae7a042ab4a203f7d0a5b973be6409f12115f7a36b45ad048e)\nc6bcd89d170844ae7a042ab4a203f7d0a5b973be6409f12115f7a36b45ad048e\nTue Nov 25 04:25:32 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d (c6bcd89d170844ae7a042ab4a203f7d0a5b973be6409f12115f7a36b45ad048e)\nc6bcd89d170844ae7a042ab4a203f7d0a5b973be6409f12115f7a36b45ad048e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:25:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:32.098 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5dca94b7-84a1-41fd-a1e2-b96cabe8e208]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:25:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:32.100 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdd4097f8-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:25:32 compute-0 nova_compute[254092]: 2025-11-25 16:25:32.101 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:32 compute-0 kernel: tapdd4097f8-d0: left promiscuous mode
Nov 25 16:25:32 compute-0 nova_compute[254092]: 2025-11-25 16:25:32.116 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:32.120 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c29476c9-ab07-4a13-b4dc-b3b62e093c36]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:25:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:32.137 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[40584d9b-b8fc-4e94-9684-defc47cec3d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:25:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:32.139 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9286c903-c3e4-42b7-a328-9f0b004e7f5a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:25:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:32.158 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[720c546e-bdd9-40db-b66c-7c958b8a0329]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437306, 'reachable_time': 16528, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271137, 'error': None, 'target': 'ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:25:32 compute-0 systemd[1]: run-netns-ovnmeta\x2ddd4097f8\x2ddcdf\x2d451c\x2d8fbb\x2d2057e86e375d.mount: Deactivated successfully.
Nov 25 16:25:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:32.172 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:25:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:32.173 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[c250be03-34aa-4250-8f5d-9f7c93c1d52a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:25:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:25:32 compute-0 nova_compute[254092]: 2025-11-25 16:25:32.428 254096 INFO nova.virt.libvirt.driver [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Deleting instance files /var/lib/nova/instances/71cf0ae0-6191-4b64-9a81-a955d807ceb4_del
Nov 25 16:25:32 compute-0 nova_compute[254092]: 2025-11-25 16:25:32.430 254096 INFO nova.virt.libvirt.driver [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Deletion of /var/lib/nova/instances/71cf0ae0-6191-4b64-9a81-a955d807ceb4_del complete
Nov 25 16:25:32 compute-0 nova_compute[254092]: 2025-11-25 16:25:32.486 254096 INFO nova.compute.manager [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Took 1.17 seconds to destroy the instance on the hypervisor.
Nov 25 16:25:32 compute-0 nova_compute[254092]: 2025-11-25 16:25:32.487 254096 DEBUG oslo.service.loopingcall [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:25:32 compute-0 nova_compute[254092]: 2025-11-25 16:25:32.487 254096 DEBUG nova.compute.manager [-] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:25:32 compute-0 nova_compute[254092]: 2025-11-25 16:25:32.487 254096 DEBUG nova.network.neutron [-] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:25:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1110: 321 pgs: 321 active+clean; 121 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 296 KiB/s rd, 1.2 MiB/s wr, 46 op/s
Nov 25 16:25:32 compute-0 ceph-mon[74985]: pgmap v1110: 321 pgs: 321 active+clean; 121 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 296 KiB/s rd, 1.2 MiB/s wr, 46 op/s
Nov 25 16:25:34 compute-0 nova_compute[254092]: 2025-11-25 16:25:34.612 254096 DEBUG nova.network.neutron [-] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:25:34 compute-0 nova_compute[254092]: 2025-11-25 16:25:34.641 254096 DEBUG nova.compute.manager [req-010b4e39-3288-42db-9c56-ece27d39e4c6 req-50a21057-d3a8-4710-bc81-4822a765c0fd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Received event network-vif-plugged-7adfcb53-33cb-482b-ba39-82d0ee72c4ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:25:34 compute-0 nova_compute[254092]: 2025-11-25 16:25:34.641 254096 DEBUG oslo_concurrency.lockutils [req-010b4e39-3288-42db-9c56-ece27d39e4c6 req-50a21057-d3a8-4710-bc81-4822a765c0fd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "71cf0ae0-6191-4b64-9a81-a955d807ceb4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:25:34 compute-0 nova_compute[254092]: 2025-11-25 16:25:34.641 254096 DEBUG oslo_concurrency.lockutils [req-010b4e39-3288-42db-9c56-ece27d39e4c6 req-50a21057-d3a8-4710-bc81-4822a765c0fd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "71cf0ae0-6191-4b64-9a81-a955d807ceb4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:25:34 compute-0 nova_compute[254092]: 2025-11-25 16:25:34.642 254096 DEBUG oslo_concurrency.lockutils [req-010b4e39-3288-42db-9c56-ece27d39e4c6 req-50a21057-d3a8-4710-bc81-4822a765c0fd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "71cf0ae0-6191-4b64-9a81-a955d807ceb4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:25:34 compute-0 nova_compute[254092]: 2025-11-25 16:25:34.642 254096 DEBUG nova.compute.manager [req-010b4e39-3288-42db-9c56-ece27d39e4c6 req-50a21057-d3a8-4710-bc81-4822a765c0fd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] No waiting events found dispatching network-vif-plugged-7adfcb53-33cb-482b-ba39-82d0ee72c4ea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:25:34 compute-0 nova_compute[254092]: 2025-11-25 16:25:34.642 254096 WARNING nova.compute.manager [req-010b4e39-3288-42db-9c56-ece27d39e4c6 req-50a21057-d3a8-4710-bc81-4822a765c0fd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Received unexpected event network-vif-plugged-7adfcb53-33cb-482b-ba39-82d0ee72c4ea for instance with vm_state active and task_state deleting.
Nov 25 16:25:34 compute-0 nova_compute[254092]: 2025-11-25 16:25:34.644 254096 INFO nova.compute.manager [-] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Took 2.16 seconds to deallocate network for instance.
Nov 25 16:25:34 compute-0 nova_compute[254092]: 2025-11-25 16:25:34.694 254096 DEBUG oslo_concurrency.lockutils [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:25:34 compute-0 nova_compute[254092]: 2025-11-25 16:25:34.695 254096 DEBUG oslo_concurrency.lockutils [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:25:34 compute-0 nova_compute[254092]: 2025-11-25 16:25:34.773 254096 DEBUG oslo_concurrency.processutils [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:25:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1111: 321 pgs: 321 active+clean; 121 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 225 KiB/s rd, 306 KiB/s wr, 38 op/s
Nov 25 16:25:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:25:35 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1046537941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:25:35 compute-0 nova_compute[254092]: 2025-11-25 16:25:35.201 254096 DEBUG oslo_concurrency.processutils [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:25:35 compute-0 nova_compute[254092]: 2025-11-25 16:25:35.207 254096 DEBUG nova.compute.provider_tree [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:25:35 compute-0 nova_compute[254092]: 2025-11-25 16:25:35.228 254096 DEBUG nova.scheduler.client.report [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:25:35 compute-0 nova_compute[254092]: 2025-11-25 16:25:35.270 254096 DEBUG oslo_concurrency.lockutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Acquiring lock "2c261173-944d-4c35-8d16-b066436572bb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:25:35 compute-0 nova_compute[254092]: 2025-11-25 16:25:35.270 254096 DEBUG oslo_concurrency.lockutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Lock "2c261173-944d-4c35-8d16-b066436572bb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:25:35 compute-0 nova_compute[254092]: 2025-11-25 16:25:35.294 254096 DEBUG oslo_concurrency.lockutils [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:25:35 compute-0 nova_compute[254092]: 2025-11-25 16:25:35.299 254096 DEBUG nova.compute.manager [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:25:35 compute-0 nova_compute[254092]: 2025-11-25 16:25:35.364 254096 DEBUG nova.compute.manager [req-3c791b89-86ee-45dd-b52f-a19df22003de req-fe0cfe79-4bba-4be0-ad13-74ed382e2f10 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Received event network-vif-deleted-7adfcb53-33cb-482b-ba39-82d0ee72c4ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:25:35 compute-0 nova_compute[254092]: 2025-11-25 16:25:35.384 254096 INFO nova.scheduler.client.report [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Deleted allocations for instance 71cf0ae0-6191-4b64-9a81-a955d807ceb4
Nov 25 16:25:35 compute-0 nova_compute[254092]: 2025-11-25 16:25:35.442 254096 DEBUG oslo_concurrency.lockutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:25:35 compute-0 nova_compute[254092]: 2025-11-25 16:25:35.443 254096 DEBUG oslo_concurrency.lockutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:25:35 compute-0 nova_compute[254092]: 2025-11-25 16:25:35.450 254096 DEBUG nova.virt.hardware [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:25:35 compute-0 nova_compute[254092]: 2025-11-25 16:25:35.450 254096 INFO nova.compute.claims [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:25:35 compute-0 nova_compute[254092]: 2025-11-25 16:25:35.488 254096 DEBUG oslo_concurrency.lockutils [None req-0beffc8c-5ef9-48bf-afe0-f319fd1ca5e0 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "71cf0ae0-6191-4b64-9a81-a955d807ceb4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.173s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:25:35 compute-0 nova_compute[254092]: 2025-11-25 16:25:35.568 254096 DEBUG oslo_concurrency.processutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:25:35 compute-0 ceph-mon[74985]: pgmap v1111: 321 pgs: 321 active+clean; 121 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 225 KiB/s rd, 306 KiB/s wr, 38 op/s
Nov 25 16:25:35 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1046537941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:25:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:25:35 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1868291142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:25:35 compute-0 nova_compute[254092]: 2025-11-25 16:25:35.997 254096 DEBUG oslo_concurrency.processutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:25:36 compute-0 nova_compute[254092]: 2025-11-25 16:25:36.004 254096 DEBUG nova.compute.provider_tree [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:25:36 compute-0 nova_compute[254092]: 2025-11-25 16:25:36.023 254096 DEBUG nova.scheduler.client.report [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:25:36 compute-0 nova_compute[254092]: 2025-11-25 16:25:36.052 254096 DEBUG oslo_concurrency.lockutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:25:36 compute-0 nova_compute[254092]: 2025-11-25 16:25:36.053 254096 DEBUG nova.compute.manager [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:25:36 compute-0 nova_compute[254092]: 2025-11-25 16:25:36.119 254096 DEBUG nova.compute.manager [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:25:36 compute-0 nova_compute[254092]: 2025-11-25 16:25:36.120 254096 DEBUG nova.network.neutron [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:25:36 compute-0 nova_compute[254092]: 2025-11-25 16:25:36.138 254096 INFO nova.virt.libvirt.driver [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:25:36 compute-0 nova_compute[254092]: 2025-11-25 16:25:36.153 254096 DEBUG oslo_concurrency.lockutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Acquiring lock "4b2c6795-15b5-424c-b7c5-4b695a348f41" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:25:36 compute-0 nova_compute[254092]: 2025-11-25 16:25:36.154 254096 DEBUG oslo_concurrency.lockutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "4b2c6795-15b5-424c-b7c5-4b695a348f41" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:25:36 compute-0 nova_compute[254092]: 2025-11-25 16:25:36.167 254096 DEBUG nova.compute.manager [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:25:36 compute-0 nova_compute[254092]: 2025-11-25 16:25:36.187 254096 DEBUG nova.compute.manager [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:25:36 compute-0 nova_compute[254092]: 2025-11-25 16:25:36.286 254096 DEBUG nova.compute.manager [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:25:36 compute-0 nova_compute[254092]: 2025-11-25 16:25:36.288 254096 DEBUG nova.virt.libvirt.driver [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:25:36 compute-0 nova_compute[254092]: 2025-11-25 16:25:36.289 254096 INFO nova.virt.libvirt.driver [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Creating image(s)
Nov 25 16:25:36 compute-0 nova_compute[254092]: 2025-11-25 16:25:36.325 254096 DEBUG nova.storage.rbd_utils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] rbd image 2c261173-944d-4c35-8d16-b066436572bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:25:36 compute-0 nova_compute[254092]: 2025-11-25 16:25:36.363 254096 DEBUG nova.storage.rbd_utils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] rbd image 2c261173-944d-4c35-8d16-b066436572bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:25:36 compute-0 nova_compute[254092]: 2025-11-25 16:25:36.386 254096 DEBUG nova.storage.rbd_utils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] rbd image 2c261173-944d-4c35-8d16-b066436572bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:25:36 compute-0 nova_compute[254092]: 2025-11-25 16:25:36.389 254096 DEBUG oslo_concurrency.processutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:25:36 compute-0 nova_compute[254092]: 2025-11-25 16:25:36.411 254096 DEBUG oslo_concurrency.lockutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:25:36 compute-0 nova_compute[254092]: 2025-11-25 16:25:36.413 254096 DEBUG oslo_concurrency.lockutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:25:36 compute-0 nova_compute[254092]: 2025-11-25 16:25:36.426 254096 DEBUG nova.virt.hardware [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:25:36 compute-0 nova_compute[254092]: 2025-11-25 16:25:36.427 254096 INFO nova.compute.claims [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:25:36 compute-0 nova_compute[254092]: 2025-11-25 16:25:36.446 254096 DEBUG oslo_concurrency.processutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:25:36 compute-0 nova_compute[254092]: 2025-11-25 16:25:36.446 254096 DEBUG oslo_concurrency.lockutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:25:36 compute-0 nova_compute[254092]: 2025-11-25 16:25:36.447 254096 DEBUG oslo_concurrency.lockutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:25:36 compute-0 nova_compute[254092]: 2025-11-25 16:25:36.448 254096 DEBUG oslo_concurrency.lockutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:25:36 compute-0 nova_compute[254092]: 2025-11-25 16:25:36.470 254096 DEBUG nova.storage.rbd_utils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] rbd image 2c261173-944d-4c35-8d16-b066436572bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:25:36 compute-0 nova_compute[254092]: 2025-11-25 16:25:36.473 254096 DEBUG oslo_concurrency.processutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 2c261173-944d-4c35-8d16-b066436572bb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:25:36 compute-0 nova_compute[254092]: 2025-11-25 16:25:36.575 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:36 compute-0 nova_compute[254092]: 2025-11-25 16:25:36.577 254096 DEBUG oslo_concurrency.processutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:25:36 compute-0 nova_compute[254092]: 2025-11-25 16:25:36.615 254096 DEBUG nova.network.neutron [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 25 16:25:36 compute-0 nova_compute[254092]: 2025-11-25 16:25:36.616 254096 DEBUG nova.compute.manager [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:25:36 compute-0 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 25 16:25:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1112: 321 pgs: 321 active+clean; 41 MiB data, 225 MiB used, 60 GiB / 60 GiB avail; 245 KiB/s rd, 307 KiB/s wr, 66 op/s
Nov 25 16:25:36 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1868291142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:25:36 compute-0 nova_compute[254092]: 2025-11-25 16:25:36.879 254096 DEBUG oslo_concurrency.processutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 2c261173-944d-4c35-8d16-b066436572bb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:25:36 compute-0 nova_compute[254092]: 2025-11-25 16:25:36.929 254096 DEBUG nova.storage.rbd_utils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] resizing rbd image 2c261173-944d-4c35-8d16-b066436572bb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.015 254096 DEBUG nova.objects.instance [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Lazy-loading 'migration_context' on Instance uuid 2c261173-944d-4c35-8d16-b066436572bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.029 254096 DEBUG nova.virt.libvirt.driver [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.030 254096 DEBUG nova.virt.libvirt.driver [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Ensure instance console log exists: /var/lib/nova/instances/2c261173-944d-4c35-8d16-b066436572bb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.031 254096 DEBUG oslo_concurrency.lockutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.031 254096 DEBUG oslo_concurrency.lockutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.031 254096 DEBUG oslo_concurrency.lockutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.033 254096 DEBUG nova.virt.libvirt.driver [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.038 254096 WARNING nova.virt.libvirt.driver [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.043 254096 DEBUG nova.virt.libvirt.host [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.043 254096 DEBUG nova.virt.libvirt.host [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:25:37 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:25:37 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2298148670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.068 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.073 254096 DEBUG nova.virt.libvirt.host [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.073 254096 DEBUG nova.virt.libvirt.host [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.074 254096 DEBUG nova.virt.libvirt.driver [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.074 254096 DEBUG nova.virt.hardware [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.075 254096 DEBUG nova.virt.hardware [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.075 254096 DEBUG nova.virt.hardware [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.075 254096 DEBUG nova.virt.hardware [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.076 254096 DEBUG nova.virt.hardware [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.076 254096 DEBUG nova.virt.hardware [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.076 254096 DEBUG nova.virt.hardware [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.077 254096 DEBUG nova.virt.hardware [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.077 254096 DEBUG nova.virt.hardware [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.077 254096 DEBUG nova.virt.hardware [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.078 254096 DEBUG nova.virt.hardware [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.080 254096 DEBUG oslo_concurrency.processutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.100 254096 DEBUG oslo_concurrency.processutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.108 254096 DEBUG nova.compute.provider_tree [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.123 254096 DEBUG nova.scheduler.client.report [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.194 254096 DEBUG oslo_concurrency.lockutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.196 254096 DEBUG nova.compute.manager [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.260 254096 DEBUG nova.compute.manager [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.260 254096 DEBUG nova.network.neutron [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.304 254096 INFO nova.virt.libvirt.driver [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.342 254096 DEBUG nova.compute.manager [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.421 254096 DEBUG nova.compute.manager [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.422 254096 DEBUG nova.virt.libvirt.driver [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.423 254096 INFO nova.virt.libvirt.driver [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Creating image(s)
Nov 25 16:25:37 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.440 254096 DEBUG nova.storage.rbd_utils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] rbd image 4b2c6795-15b5-424c-b7c5-4b695a348f41_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.458 254096 DEBUG nova.storage.rbd_utils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] rbd image 4b2c6795-15b5-424c-b7c5-4b695a348f41_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.479 254096 DEBUG nova.storage.rbd_utils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] rbd image 4b2c6795-15b5-424c-b7c5-4b695a348f41_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.483 254096 DEBUG oslo_concurrency.processutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:25:37 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:25:37 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3358463041' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.506 254096 DEBUG nova.policy [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '787cb8b4238c4926a4466f3421db09ef', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '95df0d15c889499aba411e805ea145a5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.515 254096 DEBUG oslo_concurrency.processutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.536 254096 DEBUG nova.storage.rbd_utils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] rbd image 2c261173-944d-4c35-8d16-b066436572bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.540 254096 DEBUG oslo_concurrency.processutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.556 254096 DEBUG oslo_concurrency.processutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.557 254096 DEBUG oslo_concurrency.lockutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.557 254096 DEBUG oslo_concurrency.lockutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.557 254096 DEBUG oslo_concurrency.lockutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.583 254096 DEBUG nova.storage.rbd_utils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] rbd image 4b2c6795-15b5-424c-b7c5-4b695a348f41_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:25:37 compute-0 nova_compute[254092]: 2025-11-25 16:25:37.587 254096 DEBUG oslo_concurrency.processutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 4b2c6795-15b5-424c-b7c5-4b695a348f41_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:25:38 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:25:38 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1951548492' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:25:38 compute-0 nova_compute[254092]: 2025-11-25 16:25:38.087 254096 DEBUG oslo_concurrency.processutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:25:38 compute-0 nova_compute[254092]: 2025-11-25 16:25:38.089 254096 DEBUG nova.objects.instance [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2c261173-944d-4c35-8d16-b066436572bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:25:38 compute-0 nova_compute[254092]: 2025-11-25 16:25:38.106 254096 DEBUG nova.virt.libvirt.driver [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:25:38 compute-0 nova_compute[254092]:   <uuid>2c261173-944d-4c35-8d16-b066436572bb</uuid>
Nov 25 16:25:38 compute-0 nova_compute[254092]:   <name>instance-00000005</name>
Nov 25 16:25:38 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:25:38 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:25:38 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:25:38 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:       <nova:name>tempest-ServerExternalEventsTest-server-583466621</nova:name>
Nov 25 16:25:38 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:25:37</nova:creationTime>
Nov 25 16:25:38 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:25:38 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:25:38 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:25:38 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:25:38 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:25:38 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:25:38 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:25:38 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:25:38 compute-0 nova_compute[254092]:         <nova:user uuid="c56561c97a2d48ffa5ee1c65800dc0fa">tempest-ServerExternalEventsTest-65175838-project-member</nova:user>
Nov 25 16:25:38 compute-0 nova_compute[254092]:         <nova:project uuid="b3a17f41085a44e38251177c55db1ed1">tempest-ServerExternalEventsTest-65175838</nova:project>
Nov 25 16:25:38 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:25:38 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:       <nova:ports/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:25:38 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:25:38 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:25:38 compute-0 nova_compute[254092]:     <system>
Nov 25 16:25:38 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:25:38 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:25:38 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:25:38 compute-0 nova_compute[254092]:       <entry name="serial">2c261173-944d-4c35-8d16-b066436572bb</entry>
Nov 25 16:25:38 compute-0 nova_compute[254092]:       <entry name="uuid">2c261173-944d-4c35-8d16-b066436572bb</entry>
Nov 25 16:25:38 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     </system>
Nov 25 16:25:38 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:25:38 compute-0 nova_compute[254092]:   <os>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:   </os>
Nov 25 16:25:38 compute-0 nova_compute[254092]:   <features>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:   </features>
Nov 25 16:25:38 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:25:38 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:25:38 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:25:38 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:25:38 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:25:38 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/2c261173-944d-4c35-8d16-b066436572bb_disk">
Nov 25 16:25:38 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:       </source>
Nov 25 16:25:38 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:25:38 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:25:38 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:25:38 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/2c261173-944d-4c35-8d16-b066436572bb_disk.config">
Nov 25 16:25:38 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:       </source>
Nov 25 16:25:38 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:25:38 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:25:38 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:25:38 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/2c261173-944d-4c35-8d16-b066436572bb/console.log" append="off"/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     <video>
Nov 25 16:25:38 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     </video>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:25:38 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:25:38 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:25:38 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:25:38 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:25:38 compute-0 nova_compute[254092]: </domain>
Nov 25 16:25:38 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:25:38 compute-0 ceph-mon[74985]: pgmap v1112: 321 pgs: 321 active+clean; 41 MiB data, 225 MiB used, 60 GiB / 60 GiB avail; 245 KiB/s rd, 307 KiB/s wr, 66 op/s
Nov 25 16:25:38 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2298148670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:25:38 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3358463041' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:25:38 compute-0 nova_compute[254092]: 2025-11-25 16:25:38.443 254096 DEBUG nova.virt.libvirt.driver [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:25:38 compute-0 nova_compute[254092]: 2025-11-25 16:25:38.444 254096 DEBUG nova.virt.libvirt.driver [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:25:38 compute-0 nova_compute[254092]: 2025-11-25 16:25:38.445 254096 INFO nova.virt.libvirt.driver [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Using config drive
Nov 25 16:25:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1113: 321 pgs: 321 active+clean; 41 MiB data, 225 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Nov 25 16:25:38 compute-0 nova_compute[254092]: 2025-11-25 16:25:38.861 254096 DEBUG nova.storage.rbd_utils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] rbd image 2c261173-944d-4c35-8d16-b066436572bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:25:39 compute-0 nova_compute[254092]: 2025-11-25 16:25:39.123 254096 INFO nova.virt.libvirt.driver [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Creating config drive at /var/lib/nova/instances/2c261173-944d-4c35-8d16-b066436572bb/disk.config
Nov 25 16:25:39 compute-0 nova_compute[254092]: 2025-11-25 16:25:39.129 254096 DEBUG oslo_concurrency.processutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2c261173-944d-4c35-8d16-b066436572bb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw_hcf6r_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:25:39 compute-0 nova_compute[254092]: 2025-11-25 16:25:39.256 254096 DEBUG oslo_concurrency.processutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2c261173-944d-4c35-8d16-b066436572bb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw_hcf6r_" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:25:39 compute-0 nova_compute[254092]: 2025-11-25 16:25:39.467 254096 DEBUG nova.storage.rbd_utils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] rbd image 2c261173-944d-4c35-8d16-b066436572bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:25:39 compute-0 nova_compute[254092]: 2025-11-25 16:25:39.471 254096 DEBUG oslo_concurrency.processutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2c261173-944d-4c35-8d16-b066436572bb/disk.config 2c261173-944d-4c35-8d16-b066436572bb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:25:39 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1951548492' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:25:39 compute-0 ceph-mon[74985]: pgmap v1113: 321 pgs: 321 active+clean; 41 MiB data, 225 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Nov 25 16:25:40 compute-0 nova_compute[254092]: 2025-11-25 16:25:40.022 254096 DEBUG nova.network.neutron [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Successfully created port: 05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:25:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:25:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:25:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:25:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:25:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:25:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:25:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:25:40
Nov 25 16:25:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:25:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:25:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images', 'backups', 'volumes', 'default.rgw.log', 'default.rgw.control']
Nov 25 16:25:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:25:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:25:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:25:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:25:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:25:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:25:40 compute-0 nova_compute[254092]: 2025-11-25 16:25:40.266 254096 DEBUG oslo_concurrency.processutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 4b2c6795-15b5-424c-b7c5-4b695a348f41_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.678s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:25:40 compute-0 nova_compute[254092]: 2025-11-25 16:25:40.317 254096 DEBUG nova.storage.rbd_utils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] resizing rbd image 4b2c6795-15b5-424c-b7c5-4b695a348f41_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:25:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:25:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:25:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:25:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:25:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:25:40 compute-0 nova_compute[254092]: 2025-11-25 16:25:40.646 254096 DEBUG oslo_concurrency.processutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2c261173-944d-4c35-8d16-b066436572bb/disk.config 2c261173-944d-4c35-8d16-b066436572bb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:25:40 compute-0 nova_compute[254092]: 2025-11-25 16:25:40.647 254096 INFO nova.virt.libvirt.driver [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Deleting local config drive /var/lib/nova/instances/2c261173-944d-4c35-8d16-b066436572bb/disk.config because it was imported into RBD.
Nov 25 16:25:40 compute-0 nova_compute[254092]: 2025-11-25 16:25:40.653 254096 DEBUG nova.objects.instance [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lazy-loading 'migration_context' on Instance uuid 4b2c6795-15b5-424c-b7c5-4b695a348f41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:25:40 compute-0 nova_compute[254092]: 2025-11-25 16:25:40.694 254096 DEBUG nova.storage.rbd_utils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] rbd image 4b2c6795-15b5-424c-b7c5-4b695a348f41_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:25:40 compute-0 systemd-machined[216343]: New machine qemu-5-instance-00000005.
Nov 25 16:25:40 compute-0 nova_compute[254092]: 2025-11-25 16:25:40.721 254096 DEBUG nova.storage.rbd_utils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] rbd image 4b2c6795-15b5-424c-b7c5-4b695a348f41_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:25:40 compute-0 nova_compute[254092]: 2025-11-25 16:25:40.725 254096 DEBUG oslo_concurrency.lockutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:25:40 compute-0 nova_compute[254092]: 2025-11-25 16:25:40.726 254096 DEBUG oslo_concurrency.lockutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:25:40 compute-0 nova_compute[254092]: 2025-11-25 16:25:40.726 254096 DEBUG oslo_concurrency.processutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:25:40 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Nov 25 16:25:40 compute-0 nova_compute[254092]: 2025-11-25 16:25:40.753 254096 DEBUG oslo_concurrency.processutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:25:40 compute-0 nova_compute[254092]: 2025-11-25 16:25:40.754 254096 DEBUG oslo_concurrency.processutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:25:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1114: 321 pgs: 321 active+clean; 81 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 1.6 MiB/s wr, 66 op/s
Nov 25 16:25:40 compute-0 nova_compute[254092]: 2025-11-25 16:25:40.928 254096 DEBUG oslo_concurrency.processutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:25:40 compute-0 nova_compute[254092]: 2025-11-25 16:25:40.930 254096 DEBUG oslo_concurrency.lockutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.204s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:25:40 compute-0 nova_compute[254092]: 2025-11-25 16:25:40.950 254096 DEBUG nova.storage.rbd_utils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] rbd image 4b2c6795-15b5-424c-b7c5-4b695a348f41_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:25:40 compute-0 nova_compute[254092]: 2025-11-25 16:25:40.954 254096 DEBUG oslo_concurrency.processutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 4b2c6795-15b5-424c-b7c5-4b695a348f41_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:25:40 compute-0 ceph-mon[74985]: pgmap v1114: 321 pgs: 321 active+clean; 81 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 1.6 MiB/s wr, 66 op/s
Nov 25 16:25:41 compute-0 nova_compute[254092]: 2025-11-25 16:25:41.139 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764087941.1388347, 2c261173-944d-4c35-8d16-b066436572bb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:25:41 compute-0 nova_compute[254092]: 2025-11-25 16:25:41.140 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2c261173-944d-4c35-8d16-b066436572bb] VM Resumed (Lifecycle Event)
Nov 25 16:25:41 compute-0 nova_compute[254092]: 2025-11-25 16:25:41.146 254096 DEBUG nova.compute.manager [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:25:41 compute-0 nova_compute[254092]: 2025-11-25 16:25:41.146 254096 DEBUG nova.virt.libvirt.driver [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:25:41 compute-0 nova_compute[254092]: 2025-11-25 16:25:41.167 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:25:41 compute-0 nova_compute[254092]: 2025-11-25 16:25:41.169 254096 INFO nova.virt.libvirt.driver [-] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Instance spawned successfully.
Nov 25 16:25:41 compute-0 nova_compute[254092]: 2025-11-25 16:25:41.170 254096 DEBUG nova.virt.libvirt.driver [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:25:41 compute-0 nova_compute[254092]: 2025-11-25 16:25:41.173 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:25:41 compute-0 nova_compute[254092]: 2025-11-25 16:25:41.300 254096 DEBUG nova.virt.libvirt.driver [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:25:41 compute-0 nova_compute[254092]: 2025-11-25 16:25:41.300 254096 DEBUG nova.virt.libvirt.driver [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:25:41 compute-0 nova_compute[254092]: 2025-11-25 16:25:41.301 254096 DEBUG nova.virt.libvirt.driver [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:25:41 compute-0 nova_compute[254092]: 2025-11-25 16:25:41.301 254096 DEBUG nova.virt.libvirt.driver [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:25:41 compute-0 nova_compute[254092]: 2025-11-25 16:25:41.302 254096 DEBUG nova.virt.libvirt.driver [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:25:41 compute-0 nova_compute[254092]: 2025-11-25 16:25:41.303 254096 DEBUG nova.virt.libvirt.driver [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:25:41 compute-0 nova_compute[254092]: 2025-11-25 16:25:41.472 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2c261173-944d-4c35-8d16-b066436572bb] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:25:41 compute-0 nova_compute[254092]: 2025-11-25 16:25:41.472 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764087941.1415298, 2c261173-944d-4c35-8d16-b066436572bb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:25:41 compute-0 nova_compute[254092]: 2025-11-25 16:25:41.473 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2c261173-944d-4c35-8d16-b066436572bb] VM Started (Lifecycle Event)
Nov 25 16:25:41 compute-0 nova_compute[254092]: 2025-11-25 16:25:41.504 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:25:41 compute-0 nova_compute[254092]: 2025-11-25 16:25:41.507 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:25:41 compute-0 nova_compute[254092]: 2025-11-25 16:25:41.529 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2c261173-944d-4c35-8d16-b066436572bb] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:25:41 compute-0 nova_compute[254092]: 2025-11-25 16:25:41.544 254096 INFO nova.compute.manager [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Took 5.26 seconds to spawn the instance on the hypervisor.
Nov 25 16:25:41 compute-0 nova_compute[254092]: 2025-11-25 16:25:41.545 254096 DEBUG nova.compute.manager [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:25:41 compute-0 nova_compute[254092]: 2025-11-25 16:25:41.578 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:41 compute-0 nova_compute[254092]: 2025-11-25 16:25:41.619 254096 INFO nova.compute.manager [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Took 6.21 seconds to build instance.
Nov 25 16:25:41 compute-0 nova_compute[254092]: 2025-11-25 16:25:41.644 254096 DEBUG oslo_concurrency.lockutils [None req-96531d1f-e2b3-4c6f-bfbc-7df427140555 c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Lock "2c261173-944d-4c35-8d16-b066436572bb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.374s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:25:41 compute-0 nova_compute[254092]: 2025-11-25 16:25:41.840 254096 DEBUG oslo_concurrency.processutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 4b2c6795-15b5-424c-b7c5-4b695a348f41_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.886s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:25:41 compute-0 nova_compute[254092]: 2025-11-25 16:25:41.926 254096 DEBUG nova.virt.libvirt.driver [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:25:41 compute-0 nova_compute[254092]: 2025-11-25 16:25:41.927 254096 DEBUG nova.virt.libvirt.driver [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Ensure instance console log exists: /var/lib/nova/instances/4b2c6795-15b5-424c-b7c5-4b695a348f41/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:25:41 compute-0 nova_compute[254092]: 2025-11-25 16:25:41.928 254096 DEBUG oslo_concurrency.lockutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:25:41 compute-0 nova_compute[254092]: 2025-11-25 16:25:41.928 254096 DEBUG oslo_concurrency.lockutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:25:41 compute-0 nova_compute[254092]: 2025-11-25 16:25:41.929 254096 DEBUG oslo_concurrency.lockutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:25:42 compute-0 nova_compute[254092]: 2025-11-25 16:25:42.068 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:42 compute-0 nova_compute[254092]: 2025-11-25 16:25:42.071 254096 DEBUG nova.network.neutron [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Successfully updated port: 05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:25:42 compute-0 nova_compute[254092]: 2025-11-25 16:25:42.087 254096 DEBUG oslo_concurrency.lockutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Acquiring lock "refresh_cache-4b2c6795-15b5-424c-b7c5-4b695a348f41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:25:42 compute-0 nova_compute[254092]: 2025-11-25 16:25:42.087 254096 DEBUG oslo_concurrency.lockutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Acquired lock "refresh_cache-4b2c6795-15b5-424c-b7c5-4b695a348f41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:25:42 compute-0 nova_compute[254092]: 2025-11-25 16:25:42.088 254096 DEBUG nova.network.neutron [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:25:42 compute-0 nova_compute[254092]: 2025-11-25 16:25:42.412 254096 DEBUG nova.compute.manager [req-8f5fd358-e5fb-44b6-83a1-16f1b24497f6 req-662ad87b-5658-4bb3-98cd-71bc4d257cf6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Received event network-changed-05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:25:42 compute-0 nova_compute[254092]: 2025-11-25 16:25:42.412 254096 DEBUG nova.compute.manager [req-8f5fd358-e5fb-44b6-83a1-16f1b24497f6 req-662ad87b-5658-4bb3-98cd-71bc4d257cf6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Refreshing instance network info cache due to event network-changed-05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:25:42 compute-0 nova_compute[254092]: 2025-11-25 16:25:42.412 254096 DEBUG oslo_concurrency.lockutils [req-8f5fd358-e5fb-44b6-83a1-16f1b24497f6 req-662ad87b-5658-4bb3-98cd-71bc4d257cf6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-4b2c6795-15b5-424c-b7c5-4b695a348f41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:25:42 compute-0 nova_compute[254092]: 2025-11-25 16:25:42.413 254096 DEBUG nova.network.neutron [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:25:42 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:25:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1115: 321 pgs: 321 active+clean; 106 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 2.5 MiB/s wr, 75 op/s
Nov 25 16:25:42 compute-0 ceph-mon[74985]: pgmap v1115: 321 pgs: 321 active+clean; 106 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 2.5 MiB/s wr, 75 op/s
Nov 25 16:25:43 compute-0 nova_compute[254092]: 2025-11-25 16:25:43.812 254096 DEBUG nova.compute.manager [None req-38d03fd4-c9d8-48ce-b3c5-868edacb831a 4bc2d486c45d43c486b3742df4e3db23 bb9474f3b8fb4ff8837fd300da3e51e1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Received event network-changed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:25:43 compute-0 nova_compute[254092]: 2025-11-25 16:25:43.813 254096 DEBUG nova.compute.manager [None req-38d03fd4-c9d8-48ce-b3c5-868edacb831a 4bc2d486c45d43c486b3742df4e3db23 bb9474f3b8fb4ff8837fd300da3e51e1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Refreshing instance network info cache due to event network-changed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:25:43 compute-0 nova_compute[254092]: 2025-11-25 16:25:43.813 254096 DEBUG oslo_concurrency.lockutils [None req-38d03fd4-c9d8-48ce-b3c5-868edacb831a 4bc2d486c45d43c486b3742df4e3db23 bb9474f3b8fb4ff8837fd300da3e51e1 - - default default] Acquiring lock "refresh_cache-2c261173-944d-4c35-8d16-b066436572bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:25:43 compute-0 nova_compute[254092]: 2025-11-25 16:25:43.813 254096 DEBUG oslo_concurrency.lockutils [None req-38d03fd4-c9d8-48ce-b3c5-868edacb831a 4bc2d486c45d43c486b3742df4e3db23 bb9474f3b8fb4ff8837fd300da3e51e1 - - default default] Acquired lock "refresh_cache-2c261173-944d-4c35-8d16-b066436572bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:25:43 compute-0 nova_compute[254092]: 2025-11-25 16:25:43.814 254096 DEBUG nova.network.neutron [None req-38d03fd4-c9d8-48ce-b3c5-868edacb831a 4bc2d486c45d43c486b3742df4e3db23 bb9474f3b8fb4ff8837fd300da3e51e1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:25:43 compute-0 nova_compute[254092]: 2025-11-25 16:25:43.997 254096 DEBUG nova.network.neutron [None req-38d03fd4-c9d8-48ce-b3c5-868edacb831a 4bc2d486c45d43c486b3742df4e3db23 bb9474f3b8fb4ff8837fd300da3e51e1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:25:44 compute-0 nova_compute[254092]: 2025-11-25 16:25:44.094 254096 DEBUG oslo_concurrency.lockutils [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Acquiring lock "2c261173-944d-4c35-8d16-b066436572bb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:25:44 compute-0 nova_compute[254092]: 2025-11-25 16:25:44.094 254096 DEBUG oslo_concurrency.lockutils [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Lock "2c261173-944d-4c35-8d16-b066436572bb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:25:44 compute-0 nova_compute[254092]: 2025-11-25 16:25:44.094 254096 DEBUG oslo_concurrency.lockutils [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Acquiring lock "2c261173-944d-4c35-8d16-b066436572bb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:25:44 compute-0 nova_compute[254092]: 2025-11-25 16:25:44.095 254096 DEBUG oslo_concurrency.lockutils [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Lock "2c261173-944d-4c35-8d16-b066436572bb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:25:44 compute-0 nova_compute[254092]: 2025-11-25 16:25:44.095 254096 DEBUG oslo_concurrency.lockutils [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Lock "2c261173-944d-4c35-8d16-b066436572bb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:25:44 compute-0 nova_compute[254092]: 2025-11-25 16:25:44.096 254096 INFO nova.compute.manager [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Terminating instance
Nov 25 16:25:44 compute-0 nova_compute[254092]: 2025-11-25 16:25:44.096 254096 DEBUG oslo_concurrency.lockutils [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Acquiring lock "refresh_cache-2c261173-944d-4c35-8d16-b066436572bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:25:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1116: 321 pgs: 321 active+clean; 106 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 2.4 MiB/s wr, 75 op/s
Nov 25 16:25:45 compute-0 nova_compute[254092]: 2025-11-25 16:25:45.042 254096 DEBUG nova.network.neutron [None req-38d03fd4-c9d8-48ce-b3c5-868edacb831a 4bc2d486c45d43c486b3742df4e3db23 bb9474f3b8fb4ff8837fd300da3e51e1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:25:45 compute-0 nova_compute[254092]: 2025-11-25 16:25:45.072 254096 DEBUG oslo_concurrency.lockutils [None req-38d03fd4-c9d8-48ce-b3c5-868edacb831a 4bc2d486c45d43c486b3742df4e3db23 bb9474f3b8fb4ff8837fd300da3e51e1 - - default default] Releasing lock "refresh_cache-2c261173-944d-4c35-8d16-b066436572bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:25:45 compute-0 nova_compute[254092]: 2025-11-25 16:25:45.073 254096 DEBUG oslo_concurrency.lockutils [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Acquired lock "refresh_cache-2c261173-944d-4c35-8d16-b066436572bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:25:45 compute-0 nova_compute[254092]: 2025-11-25 16:25:45.073 254096 DEBUG nova.network.neutron [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:25:45 compute-0 nova_compute[254092]: 2025-11-25 16:25:45.082 254096 DEBUG nova.network.neutron [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Updating instance_info_cache with network_info: [{"id": "05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8", "address": "fa:16:3e:fb:ac:25", "network": {"id": "dd4097f8-dcdf-451c-8fbb-2057e86e375d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-256857767-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95df0d15c889499aba411e805ea145a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cb1f2f-ce", "ovs_interfaceid": "05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:25:45 compute-0 nova_compute[254092]: 2025-11-25 16:25:45.117 254096 DEBUG oslo_concurrency.lockutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Releasing lock "refresh_cache-4b2c6795-15b5-424c-b7c5-4b695a348f41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:25:45 compute-0 nova_compute[254092]: 2025-11-25 16:25:45.118 254096 DEBUG nova.compute.manager [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Instance network_info: |[{"id": "05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8", "address": "fa:16:3e:fb:ac:25", "network": {"id": "dd4097f8-dcdf-451c-8fbb-2057e86e375d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-256857767-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95df0d15c889499aba411e805ea145a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cb1f2f-ce", "ovs_interfaceid": "05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:25:45 compute-0 nova_compute[254092]: 2025-11-25 16:25:45.118 254096 DEBUG oslo_concurrency.lockutils [req-8f5fd358-e5fb-44b6-83a1-16f1b24497f6 req-662ad87b-5658-4bb3-98cd-71bc4d257cf6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-4b2c6795-15b5-424c-b7c5-4b695a348f41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:25:45 compute-0 nova_compute[254092]: 2025-11-25 16:25:45.119 254096 DEBUG nova.network.neutron [req-8f5fd358-e5fb-44b6-83a1-16f1b24497f6 req-662ad87b-5658-4bb3-98cd-71bc4d257cf6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Refreshing network info cache for port 05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:25:45 compute-0 nova_compute[254092]: 2025-11-25 16:25:45.122 254096 DEBUG nova.virt.libvirt.driver [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Start _get_guest_xml network_info=[{"id": "05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8", "address": "fa:16:3e:fb:ac:25", "network": {"id": "dd4097f8-dcdf-451c-8fbb-2057e86e375d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-256857767-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95df0d15c889499aba411e805ea145a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cb1f2f-ce", "ovs_interfaceid": "05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [{'size': 1, 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vdb', 'encryption_format': None, 'encryption_options': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:25:45 compute-0 nova_compute[254092]: 2025-11-25 16:25:45.126 254096 WARNING nova.virt.libvirt.driver [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:25:45 compute-0 nova_compute[254092]: 2025-11-25 16:25:45.137 254096 DEBUG nova.virt.libvirt.host [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:25:45 compute-0 nova_compute[254092]: 2025-11-25 16:25:45.138 254096 DEBUG nova.virt.libvirt.host [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:25:45 compute-0 nova_compute[254092]: 2025-11-25 16:25:45.148 254096 DEBUG nova.virt.libvirt.host [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:25:45 compute-0 nova_compute[254092]: 2025-11-25 16:25:45.149 254096 DEBUG nova.virt.libvirt.host [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:25:45 compute-0 nova_compute[254092]: 2025-11-25 16:25:45.149 254096 DEBUG nova.virt.libvirt.driver [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:25:45 compute-0 nova_compute[254092]: 2025-11-25 16:25:45.150 254096 DEBUG nova.virt.hardware [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:24:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={hw_rng:allowed='True'},flavorid='991195749',id=8,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_1-223579632',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:25:45 compute-0 nova_compute[254092]: 2025-11-25 16:25:45.150 254096 DEBUG nova.virt.hardware [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:25:45 compute-0 nova_compute[254092]: 2025-11-25 16:25:45.150 254096 DEBUG nova.virt.hardware [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:25:45 compute-0 nova_compute[254092]: 2025-11-25 16:25:45.151 254096 DEBUG nova.virt.hardware [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:25:45 compute-0 nova_compute[254092]: 2025-11-25 16:25:45.151 254096 DEBUG nova.virt.hardware [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:25:45 compute-0 nova_compute[254092]: 2025-11-25 16:25:45.151 254096 DEBUG nova.virt.hardware [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:25:45 compute-0 nova_compute[254092]: 2025-11-25 16:25:45.152 254096 DEBUG nova.virt.hardware [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:25:45 compute-0 nova_compute[254092]: 2025-11-25 16:25:45.152 254096 DEBUG nova.virt.hardware [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:25:45 compute-0 nova_compute[254092]: 2025-11-25 16:25:45.152 254096 DEBUG nova.virt.hardware [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:25:45 compute-0 nova_compute[254092]: 2025-11-25 16:25:45.152 254096 DEBUG nova.virt.hardware [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:25:45 compute-0 nova_compute[254092]: 2025-11-25 16:25:45.153 254096 DEBUG nova.virt.hardware [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:25:45 compute-0 nova_compute[254092]: 2025-11-25 16:25:45.155 254096 DEBUG oslo_concurrency.processutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:25:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:25:45 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4135733257' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:25:45 compute-0 nova_compute[254092]: 2025-11-25 16:25:45.604 254096 DEBUG oslo_concurrency.processutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:25:45 compute-0 nova_compute[254092]: 2025-11-25 16:25:45.605 254096 DEBUG oslo_concurrency.processutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:25:45 compute-0 ceph-mon[74985]: pgmap v1116: 321 pgs: 321 active+clean; 106 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 2.4 MiB/s wr, 75 op/s
Nov 25 16:25:45 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4135733257' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:25:45 compute-0 nova_compute[254092]: 2025-11-25 16:25:45.881 254096 DEBUG nova.network.neutron [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:25:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:25:46 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1400848585' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:25:46 compute-0 nova_compute[254092]: 2025-11-25 16:25:46.022 254096 DEBUG oslo_concurrency.processutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:25:46 compute-0 nova_compute[254092]: 2025-11-25 16:25:46.044 254096 DEBUG nova.storage.rbd_utils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] rbd image 4b2c6795-15b5-424c-b7c5-4b695a348f41_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:25:46 compute-0 nova_compute[254092]: 2025-11-25 16:25:46.047 254096 DEBUG oslo_concurrency.processutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:25:46 compute-0 nova_compute[254092]: 2025-11-25 16:25:46.201 254096 DEBUG nova.network.neutron [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:25:46 compute-0 nova_compute[254092]: 2025-11-25 16:25:46.220 254096 DEBUG oslo_concurrency.lockutils [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Releasing lock "refresh_cache-2c261173-944d-4c35-8d16-b066436572bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:25:46 compute-0 nova_compute[254092]: 2025-11-25 16:25:46.221 254096 DEBUG nova.compute.manager [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:25:46 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Nov 25 16:25:46 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 5.476s CPU time.
Nov 25 16:25:46 compute-0 systemd-machined[216343]: Machine qemu-5-instance-00000005 terminated.
Nov 25 16:25:46 compute-0 podman[271929]: 2025-11-25 16:25:46.347873671 +0000 UTC m=+0.061776752 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:25:46 compute-0 podman[271930]: 2025-11-25 16:25:46.36842325 +0000 UTC m=+0.082010313 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 25 16:25:46 compute-0 podman[271931]: 2025-11-25 16:25:46.380539959 +0000 UTC m=+0.086835364 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 16:25:46 compute-0 nova_compute[254092]: 2025-11-25 16:25:46.438 254096 INFO nova.virt.libvirt.driver [-] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Instance destroyed successfully.
Nov 25 16:25:46 compute-0 nova_compute[254092]: 2025-11-25 16:25:46.438 254096 DEBUG nova.objects.instance [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Lazy-loading 'resources' on Instance uuid 2c261173-944d-4c35-8d16-b066436572bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:25:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:25:46 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1197307260' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:25:46 compute-0 nova_compute[254092]: 2025-11-25 16:25:46.468 254096 DEBUG oslo_concurrency.processutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:25:46 compute-0 nova_compute[254092]: 2025-11-25 16:25:46.470 254096 DEBUG nova.virt.libvirt.vif [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:25:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-423978472',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-423978472',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(8),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-423978472',id=6,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=8,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLSmbQTCEowz9Br3mA84W52215mjLyCGHTHsUJ1HsTz8qleFdGWCmNHBOWPSc/8w+47nHDrMi2gOITQ7Dn+rkiEiUtK0/z3HXn3ebs4Ev7qOse6EBs27GTPhFXHM2eKGPA==',key_name='tempest-keypair-494555026',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='95df0d15c889499aba411e805ea145a5',ramdisk_id='',reservation_id='r-236be33o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-341738122',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-341738122-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:25:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='787cb8b4238c4926a4466f3421db09ef',uuid=4b2c6795-15b5-424c-b7c5-4b695a348f41,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8", "address": "fa:16:3e:fb:ac:25", "network": {"id": "dd4097f8-dcdf-451c-8fbb-2057e86e375d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-256857767-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95df0d15c889499aba411e805ea145a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cb1f2f-ce", "ovs_interfaceid": "05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:25:46 compute-0 nova_compute[254092]: 2025-11-25 16:25:46.470 254096 DEBUG nova.network.os_vif_util [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Converting VIF {"id": "05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8", "address": "fa:16:3e:fb:ac:25", "network": {"id": "dd4097f8-dcdf-451c-8fbb-2057e86e375d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-256857767-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95df0d15c889499aba411e805ea145a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cb1f2f-ce", "ovs_interfaceid": "05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:25:46 compute-0 nova_compute[254092]: 2025-11-25 16:25:46.471 254096 DEBUG nova.network.os_vif_util [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:ac:25,bridge_name='br-int',has_traffic_filtering=True,id=05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8,network=Network(dd4097f8-dcdf-451c-8fbb-2057e86e375d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05cb1f2f-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:25:46 compute-0 nova_compute[254092]: 2025-11-25 16:25:46.472 254096 DEBUG nova.objects.instance [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4b2c6795-15b5-424c-b7c5-4b695a348f41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:25:46 compute-0 nova_compute[254092]: 2025-11-25 16:25:46.488 254096 DEBUG nova.virt.libvirt.driver [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:25:46 compute-0 nova_compute[254092]:   <uuid>4b2c6795-15b5-424c-b7c5-4b695a348f41</uuid>
Nov 25 16:25:46 compute-0 nova_compute[254092]:   <name>instance-00000006</name>
Nov 25 16:25:46 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:25:46 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:25:46 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:25:46 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:       <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-423978472</nova:name>
Nov 25 16:25:46 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:25:45</nova:creationTime>
Nov 25 16:25:46 compute-0 nova_compute[254092]:       <nova:flavor name="tempest-flavor_with_ephemeral_1-223579632">
Nov 25 16:25:46 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:25:46 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:25:46 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:25:46 compute-0 nova_compute[254092]:         <nova:ephemeral>1</nova:ephemeral>
Nov 25 16:25:46 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:25:46 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:25:46 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:25:46 compute-0 nova_compute[254092]:         <nova:user uuid="787cb8b4238c4926a4466f3421db09ef">tempest-ServersWithSpecificFlavorTestJSON-341738122-project-member</nova:user>
Nov 25 16:25:46 compute-0 nova_compute[254092]:         <nova:project uuid="95df0d15c889499aba411e805ea145a5">tempest-ServersWithSpecificFlavorTestJSON-341738122</nova:project>
Nov 25 16:25:46 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:25:46 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:25:46 compute-0 nova_compute[254092]:         <nova:port uuid="05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8">
Nov 25 16:25:46 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:25:46 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:25:46 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:25:46 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:25:46 compute-0 nova_compute[254092]:     <system>
Nov 25 16:25:46 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:25:46 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:25:46 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:25:46 compute-0 nova_compute[254092]:       <entry name="serial">4b2c6795-15b5-424c-b7c5-4b695a348f41</entry>
Nov 25 16:25:46 compute-0 nova_compute[254092]:       <entry name="uuid">4b2c6795-15b5-424c-b7c5-4b695a348f41</entry>
Nov 25 16:25:46 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     </system>
Nov 25 16:25:46 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:25:46 compute-0 nova_compute[254092]:   <os>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:   </os>
Nov 25 16:25:46 compute-0 nova_compute[254092]:   <features>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:   </features>
Nov 25 16:25:46 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:25:46 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:25:46 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:25:46 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:25:46 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:25:46 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/4b2c6795-15b5-424c-b7c5-4b695a348f41_disk">
Nov 25 16:25:46 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:       </source>
Nov 25 16:25:46 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:25:46 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:25:46 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:25:46 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/4b2c6795-15b5-424c-b7c5-4b695a348f41_disk.eph0">
Nov 25 16:25:46 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:       </source>
Nov 25 16:25:46 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:25:46 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:25:46 compute-0 nova_compute[254092]:       <target dev="vdb" bus="virtio"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:25:46 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/4b2c6795-15b5-424c-b7c5-4b695a348f41_disk.config">
Nov 25 16:25:46 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:       </source>
Nov 25 16:25:46 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:25:46 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:25:46 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:25:46 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:fb:ac:25"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:       <target dev="tap05cb1f2f-ce"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:25:46 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/4b2c6795-15b5-424c-b7c5-4b695a348f41/console.log" append="off"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     <video>
Nov 25 16:25:46 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     </video>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:25:46 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:25:46 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:25:46 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:25:46 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:25:46 compute-0 nova_compute[254092]: </domain>
Nov 25 16:25:46 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:25:46 compute-0 nova_compute[254092]: 2025-11-25 16:25:46.490 254096 DEBUG nova.compute.manager [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Preparing to wait for external event network-vif-plugged-05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:25:46 compute-0 nova_compute[254092]: 2025-11-25 16:25:46.490 254096 DEBUG oslo_concurrency.lockutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Acquiring lock "4b2c6795-15b5-424c-b7c5-4b695a348f41-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:25:46 compute-0 nova_compute[254092]: 2025-11-25 16:25:46.491 254096 DEBUG oslo_concurrency.lockutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "4b2c6795-15b5-424c-b7c5-4b695a348f41-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:25:46 compute-0 nova_compute[254092]: 2025-11-25 16:25:46.491 254096 DEBUG oslo_concurrency.lockutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "4b2c6795-15b5-424c-b7c5-4b695a348f41-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:25:46 compute-0 nova_compute[254092]: 2025-11-25 16:25:46.492 254096 DEBUG nova.virt.libvirt.vif [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:25:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-423978472',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-423978472',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(8),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-423978472',id=6,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=8,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLSmbQTCEowz9Br3mA84W52215mjLyCGHTHsUJ1HsTz8qleFdGWCmNHBOWPSc/8w+47nHDrMi2gOITQ7Dn+rkiEiUtK0/z3HXn3ebs4Ev7qOse6EBs27GTPhFXHM2eKGPA==',key_name='tempest-keypair-494555026',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='95df0d15c889499aba411e805ea145a5',ramdisk_id='',reservation_id='r-236be33o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-341738122',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-341738122-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:25:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='787cb8b4238c4926a4466f3421db09ef',uuid=4b2c6795-15b5-424c-b7c5-4b695a348f41,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8", "address": "fa:16:3e:fb:ac:25", "network": {"id": "dd4097f8-dcdf-451c-8fbb-2057e86e375d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-256857767-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95df0d15c889499aba411e805ea145a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cb1f2f-ce", "ovs_interfaceid": "05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:25:46 compute-0 nova_compute[254092]: 2025-11-25 16:25:46.492 254096 DEBUG nova.network.os_vif_util [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Converting VIF {"id": "05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8", "address": "fa:16:3e:fb:ac:25", "network": {"id": "dd4097f8-dcdf-451c-8fbb-2057e86e375d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-256857767-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95df0d15c889499aba411e805ea145a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cb1f2f-ce", "ovs_interfaceid": "05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:25:46 compute-0 nova_compute[254092]: 2025-11-25 16:25:46.493 254096 DEBUG nova.network.os_vif_util [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:ac:25,bridge_name='br-int',has_traffic_filtering=True,id=05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8,network=Network(dd4097f8-dcdf-451c-8fbb-2057e86e375d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05cb1f2f-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:25:46 compute-0 nova_compute[254092]: 2025-11-25 16:25:46.493 254096 DEBUG os_vif [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:ac:25,bridge_name='br-int',has_traffic_filtering=True,id=05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8,network=Network(dd4097f8-dcdf-451c-8fbb-2057e86e375d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05cb1f2f-ce') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:25:46 compute-0 nova_compute[254092]: 2025-11-25 16:25:46.494 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:46 compute-0 nova_compute[254092]: 2025-11-25 16:25:46.495 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:25:46 compute-0 nova_compute[254092]: 2025-11-25 16:25:46.495 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:25:46 compute-0 nova_compute[254092]: 2025-11-25 16:25:46.498 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:46 compute-0 nova_compute[254092]: 2025-11-25 16:25:46.499 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap05cb1f2f-ce, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:25:46 compute-0 nova_compute[254092]: 2025-11-25 16:25:46.499 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap05cb1f2f-ce, col_values=(('external_ids', {'iface-id': '05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fb:ac:25', 'vm-uuid': '4b2c6795-15b5-424c-b7c5-4b695a348f41'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:25:46 compute-0 NetworkManager[48891]: <info>  [1764087946.5015] manager: (tap05cb1f2f-ce): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Nov 25 16:25:46 compute-0 nova_compute[254092]: 2025-11-25 16:25:46.501 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:46 compute-0 nova_compute[254092]: 2025-11-25 16:25:46.504 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:25:46 compute-0 nova_compute[254092]: 2025-11-25 16:25:46.507 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:46 compute-0 nova_compute[254092]: 2025-11-25 16:25:46.508 254096 INFO os_vif [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:ac:25,bridge_name='br-int',has_traffic_filtering=True,id=05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8,network=Network(dd4097f8-dcdf-451c-8fbb-2057e86e375d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05cb1f2f-ce')
Nov 25 16:25:46 compute-0 nova_compute[254092]: 2025-11-25 16:25:46.551 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764087931.550435, 71cf0ae0-6191-4b64-9a81-a955d807ceb4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:25:46 compute-0 nova_compute[254092]: 2025-11-25 16:25:46.551 254096 INFO nova.compute.manager [-] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] VM Stopped (Lifecycle Event)
Nov 25 16:25:46 compute-0 nova_compute[254092]: 2025-11-25 16:25:46.576 254096 DEBUG nova.virt.libvirt.driver [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:25:46 compute-0 nova_compute[254092]: 2025-11-25 16:25:46.577 254096 DEBUG nova.virt.libvirt.driver [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:25:46 compute-0 nova_compute[254092]: 2025-11-25 16:25:46.577 254096 DEBUG nova.virt.libvirt.driver [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:25:46 compute-0 nova_compute[254092]: 2025-11-25 16:25:46.577 254096 DEBUG nova.virt.libvirt.driver [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] No VIF found with MAC fa:16:3e:fb:ac:25, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:25:46 compute-0 nova_compute[254092]: 2025-11-25 16:25:46.578 254096 INFO nova.virt.libvirt.driver [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Using config drive
Nov 25 16:25:46 compute-0 nova_compute[254092]: 2025-11-25 16:25:46.594 254096 DEBUG nova.storage.rbd_utils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] rbd image 4b2c6795-15b5-424c-b7c5-4b695a348f41_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:25:46 compute-0 nova_compute[254092]: 2025-11-25 16:25:46.599 254096 DEBUG nova.compute.manager [None req-7eee63e9-4bf5-46db-a0d7-0467786475cf - - - - - -] [instance: 71cf0ae0-6191-4b64-9a81-a955d807ceb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:25:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1117: 321 pgs: 321 active+clean; 136 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 168 op/s
Nov 25 16:25:46 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1400848585' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:25:46 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1197307260' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:25:47 compute-0 nova_compute[254092]: 2025-11-25 16:25:47.014 254096 INFO nova.virt.libvirt.driver [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Deleting instance files /var/lib/nova/instances/2c261173-944d-4c35-8d16-b066436572bb_del
Nov 25 16:25:47 compute-0 nova_compute[254092]: 2025-11-25 16:25:47.015 254096 INFO nova.virt.libvirt.driver [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Deletion of /var/lib/nova/instances/2c261173-944d-4c35-8d16-b066436572bb_del complete
Nov 25 16:25:47 compute-0 nova_compute[254092]: 2025-11-25 16:25:47.124 254096 INFO nova.virt.libvirt.driver [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Creating config drive at /var/lib/nova/instances/4b2c6795-15b5-424c-b7c5-4b695a348f41/disk.config
Nov 25 16:25:47 compute-0 nova_compute[254092]: 2025-11-25 16:25:47.133 254096 DEBUG oslo_concurrency.processutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4b2c6795-15b5-424c-b7c5-4b695a348f41/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0gpv9x0o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:25:47 compute-0 nova_compute[254092]: 2025-11-25 16:25:47.151 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:47 compute-0 nova_compute[254092]: 2025-11-25 16:25:47.156 254096 INFO nova.compute.manager [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Took 0.93 seconds to destroy the instance on the hypervisor.
Nov 25 16:25:47 compute-0 nova_compute[254092]: 2025-11-25 16:25:47.157 254096 DEBUG oslo.service.loopingcall [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:25:47 compute-0 nova_compute[254092]: 2025-11-25 16:25:47.157 254096 DEBUG nova.compute.manager [-] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:25:47 compute-0 nova_compute[254092]: 2025-11-25 16:25:47.157 254096 DEBUG nova.network.neutron [-] [instance: 2c261173-944d-4c35-8d16-b066436572bb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:25:47 compute-0 nova_compute[254092]: 2025-11-25 16:25:47.192 254096 DEBUG nova.network.neutron [req-8f5fd358-e5fb-44b6-83a1-16f1b24497f6 req-662ad87b-5658-4bb3-98cd-71bc4d257cf6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Updated VIF entry in instance network info cache for port 05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:25:47 compute-0 nova_compute[254092]: 2025-11-25 16:25:47.193 254096 DEBUG nova.network.neutron [req-8f5fd358-e5fb-44b6-83a1-16f1b24497f6 req-662ad87b-5658-4bb3-98cd-71bc4d257cf6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Updating instance_info_cache with network_info: [{"id": "05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8", "address": "fa:16:3e:fb:ac:25", "network": {"id": "dd4097f8-dcdf-451c-8fbb-2057e86e375d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-256857767-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95df0d15c889499aba411e805ea145a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cb1f2f-ce", "ovs_interfaceid": "05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:25:47 compute-0 nova_compute[254092]: 2025-11-25 16:25:47.207 254096 DEBUG oslo_concurrency.lockutils [req-8f5fd358-e5fb-44b6-83a1-16f1b24497f6 req-662ad87b-5658-4bb3-98cd-71bc4d257cf6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-4b2c6795-15b5-424c-b7c5-4b695a348f41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:25:47 compute-0 nova_compute[254092]: 2025-11-25 16:25:47.259 254096 DEBUG oslo_concurrency.processutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4b2c6795-15b5-424c-b7c5-4b695a348f41/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0gpv9x0o" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:25:47 compute-0 nova_compute[254092]: 2025-11-25 16:25:47.286 254096 DEBUG nova.storage.rbd_utils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] rbd image 4b2c6795-15b5-424c-b7c5-4b695a348f41_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:25:47 compute-0 nova_compute[254092]: 2025-11-25 16:25:47.290 254096 DEBUG oslo_concurrency.processutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4b2c6795-15b5-424c-b7c5-4b695a348f41/disk.config 4b2c6795-15b5-424c-b7c5-4b695a348f41_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:25:47 compute-0 nova_compute[254092]: 2025-11-25 16:25:47.343 254096 DEBUG nova.network.neutron [-] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:25:47 compute-0 nova_compute[254092]: 2025-11-25 16:25:47.370 254096 DEBUG nova.network.neutron [-] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:25:47 compute-0 nova_compute[254092]: 2025-11-25 16:25:47.396 254096 INFO nova.compute.manager [-] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Took 0.24 seconds to deallocate network for instance.
Nov 25 16:25:47 compute-0 nova_compute[254092]: 2025-11-25 16:25:47.414 254096 DEBUG oslo_concurrency.processutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4b2c6795-15b5-424c-b7c5-4b695a348f41/disk.config 4b2c6795-15b5-424c-b7c5-4b695a348f41_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:25:47 compute-0 nova_compute[254092]: 2025-11-25 16:25:47.415 254096 INFO nova.virt.libvirt.driver [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Deleting local config drive /var/lib/nova/instances/4b2c6795-15b5-424c-b7c5-4b695a348f41/disk.config because it was imported into RBD.
Nov 25 16:25:47 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:25:47 compute-0 kernel: tap05cb1f2f-ce: entered promiscuous mode
Nov 25 16:25:47 compute-0 NetworkManager[48891]: <info>  [1764087947.4619] manager: (tap05cb1f2f-ce): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Nov 25 16:25:47 compute-0 ovn_controller[153477]: 2025-11-25T16:25:47Z|00035|binding|INFO|Claiming lport 05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 for this chassis.
Nov 25 16:25:47 compute-0 ovn_controller[153477]: 2025-11-25T16:25:47Z|00036|binding|INFO|05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8: Claiming fa:16:3e:fb:ac:25 10.100.0.6
Nov 25 16:25:47 compute-0 nova_compute[254092]: 2025-11-25 16:25:47.462 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:47 compute-0 systemd-udevd[271959]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.470 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:ac:25 10.100.0.6'], port_security=['fa:16:3e:fb:ac:25 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '4b2c6795-15b5-424c-b7c5-4b695a348f41', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dd4097f8-dcdf-451c-8fbb-2057e86e375d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '95df0d15c889499aba411e805ea145a5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b9e0e128-a918-470b-9624-970f7d13e397', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89f0418d-ac4b-47f9-993a-6ad9c6384b37, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.471 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 in datapath dd4097f8-dcdf-451c-8fbb-2057e86e375d bound to our chassis
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.472 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dd4097f8-dcdf-451c-8fbb-2057e86e375d
Nov 25 16:25:47 compute-0 NetworkManager[48891]: <info>  [1764087947.4778] device (tap05cb1f2f-ce): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:25:47 compute-0 nova_compute[254092]: 2025-11-25 16:25:47.478 254096 DEBUG oslo_concurrency.lockutils [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:25:47 compute-0 NetworkManager[48891]: <info>  [1764087947.4798] device (tap05cb1f2f-ce): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:25:47 compute-0 nova_compute[254092]: 2025-11-25 16:25:47.478 254096 DEBUG oslo_concurrency.lockutils [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:25:47 compute-0 nova_compute[254092]: 2025-11-25 16:25:47.480 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:47 compute-0 ovn_controller[153477]: 2025-11-25T16:25:47Z|00037|binding|INFO|Setting lport 05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 ovn-installed in OVS
Nov 25 16:25:47 compute-0 ovn_controller[153477]: 2025-11-25T16:25:47Z|00038|binding|INFO|Setting lport 05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 up in Southbound
Nov 25 16:25:47 compute-0 nova_compute[254092]: 2025-11-25 16:25:47.483 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:47 compute-0 nova_compute[254092]: 2025-11-25 16:25:47.485 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.487 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c997e0c8-5af4-4e23-9ade-6c4ae7613a8a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.488 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdd4097f8-d1 in ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.490 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdd4097f8-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.490 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8d95897d-d6da-49a0-8a35-d5b5876034ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.491 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[37cc2450-5225-4e3d-9202-0dc4fe7a444b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:25:47 compute-0 nova_compute[254092]: 2025-11-25 16:25:47.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:25:47 compute-0 systemd-machined[216343]: New machine qemu-6-instance-00000006.
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.503 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[d69b81f3-23d6-4485-904a-1c40d5ddb2ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.514 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cd2141af-cc82-46f0-9ccd-dc66583ae169]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:25:47 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Nov 25 16:25:47 compute-0 nova_compute[254092]: 2025-11-25 16:25:47.533 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.545 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[b6b466f7-3d5e-44a1-9257-1d86609796c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.551 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[775822b3-e1bc-4a8e-8ead-2ea5bb2ef461]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:25:47 compute-0 NetworkManager[48891]: <info>  [1764087947.5525] manager: (tapdd4097f8-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/31)
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.582 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[712265ee-9224-433b-a2d6-4ce673d3216c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.585 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a00456f0-0be2-4011-8bff-fafbe4ea8d21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:25:47 compute-0 nova_compute[254092]: 2025-11-25 16:25:47.588 254096 DEBUG oslo_concurrency.processutils [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:25:47 compute-0 NetworkManager[48891]: <info>  [1764087947.6118] device (tapdd4097f8-d0): carrier: link connected
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.616 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[3bcce62a-93cf-46c4-a5ce-d462d5711357]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.634 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e7efb996-00cb-4066-b320-e0b4d77d1152]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdd4097f8-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:97:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 440520, 'reachable_time': 24391, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272120, 'error': None, 'target': 'ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.647 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[03e5a1c8-8a2c-45b6-977a-6c640a89fbd4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feeb:9711'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 440520, 'tstamp': 440520}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272121, 'error': None, 'target': 'ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.665 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b5548975-2b95-445d-bffe-5e03d077df77]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdd4097f8-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:97:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 440520, 'reachable_time': 24391, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 272122, 'error': None, 'target': 'ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.695 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[69bb0f8b-8f04-47ec-b62b-4231db6b9349]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.758 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[87d9f297-0721-48a6-aab5-093ee8584293]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.759 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdd4097f8-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.759 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.760 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdd4097f8-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:25:47 compute-0 kernel: tapdd4097f8-d0: entered promiscuous mode
Nov 25 16:25:47 compute-0 NetworkManager[48891]: <info>  [1764087947.7627] manager: (tapdd4097f8-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Nov 25 16:25:47 compute-0 nova_compute[254092]: 2025-11-25 16:25:47.764 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.767 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdd4097f8-d0, col_values=(('external_ids', {'iface-id': '65cb2392-e609-45e8-bc45-ba0ce2e7d527'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:25:47 compute-0 ovn_controller[153477]: 2025-11-25T16:25:47Z|00039|binding|INFO|Releasing lport 65cb2392-e609-45e8-bc45-ba0ce2e7d527 from this chassis (sb_readonly=0)
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.773 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/dd4097f8-dcdf-451c-8fbb-2057e86e375d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/dd4097f8-dcdf-451c-8fbb-2057e86e375d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.775 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2b5327ea-c198-464a-94be-743c85e0cb83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.776 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-dd4097f8-dcdf-451c-8fbb-2057e86e375d
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/dd4097f8-dcdf-451c-8fbb-2057e86e375d.pid.haproxy
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID dd4097f8-dcdf-451c-8fbb-2057e86e375d
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:25:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:25:47.776 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d', 'env', 'PROCESS_TAG=haproxy-dd4097f8-dcdf-451c-8fbb-2057e86e375d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/dd4097f8-dcdf-451c-8fbb-2057e86e375d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:25:47 compute-0 nova_compute[254092]: 2025-11-25 16:25:47.785 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:47 compute-0 ceph-mon[74985]: pgmap v1117: 321 pgs: 321 active+clean; 136 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 168 op/s
Nov 25 16:25:47 compute-0 nova_compute[254092]: 2025-11-25 16:25:47.957 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764087947.9565694, 4b2c6795-15b5-424c-b7c5-4b695a348f41 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:25:47 compute-0 nova_compute[254092]: 2025-11-25 16:25:47.957 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] VM Started (Lifecycle Event)
Nov 25 16:25:47 compute-0 nova_compute[254092]: 2025-11-25 16:25:47.985 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:25:47 compute-0 nova_compute[254092]: 2025-11-25 16:25:47.989 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764087947.9566867, 4b2c6795-15b5-424c-b7c5-4b695a348f41 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:25:47 compute-0 nova_compute[254092]: 2025-11-25 16:25:47.989 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] VM Paused (Lifecycle Event)
Nov 25 16:25:47 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:25:47 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3617436220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:25:48 compute-0 nova_compute[254092]: 2025-11-25 16:25:48.015 254096 DEBUG oslo_concurrency.processutils [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:25:48 compute-0 nova_compute[254092]: 2025-11-25 16:25:48.018 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:25:48 compute-0 nova_compute[254092]: 2025-11-25 16:25:48.021 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:25:48 compute-0 nova_compute[254092]: 2025-11-25 16:25:48.026 254096 DEBUG nova.compute.provider_tree [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:25:48 compute-0 nova_compute[254092]: 2025-11-25 16:25:48.049 254096 DEBUG nova.scheduler.client.report [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:25:48 compute-0 nova_compute[254092]: 2025-11-25 16:25:48.054 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:25:48 compute-0 nova_compute[254092]: 2025-11-25 16:25:48.082 254096 DEBUG oslo_concurrency.lockutils [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:25:48 compute-0 nova_compute[254092]: 2025-11-25 16:25:48.084 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.551s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:25:48 compute-0 nova_compute[254092]: 2025-11-25 16:25:48.085 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:25:48 compute-0 nova_compute[254092]: 2025-11-25 16:25:48.085 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 16:25:48 compute-0 nova_compute[254092]: 2025-11-25 16:25:48.085 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:25:48 compute-0 nova_compute[254092]: 2025-11-25 16:25:48.139 254096 INFO nova.scheduler.client.report [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Deleted allocations for instance 2c261173-944d-4c35-8d16-b066436572bb
Nov 25 16:25:48 compute-0 nova_compute[254092]: 2025-11-25 16:25:48.151 254096 DEBUG nova.compute.manager [req-0c0b888c-26ed-46f5-aaa6-a3e93ed41b68 req-43caaaa9-b06a-4d5c-ab35-2bd4720189b7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Received event network-vif-plugged-05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:25:48 compute-0 nova_compute[254092]: 2025-11-25 16:25:48.151 254096 DEBUG oslo_concurrency.lockutils [req-0c0b888c-26ed-46f5-aaa6-a3e93ed41b68 req-43caaaa9-b06a-4d5c-ab35-2bd4720189b7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "4b2c6795-15b5-424c-b7c5-4b695a348f41-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:25:48 compute-0 nova_compute[254092]: 2025-11-25 16:25:48.152 254096 DEBUG oslo_concurrency.lockutils [req-0c0b888c-26ed-46f5-aaa6-a3e93ed41b68 req-43caaaa9-b06a-4d5c-ab35-2bd4720189b7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4b2c6795-15b5-424c-b7c5-4b695a348f41-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:25:48 compute-0 nova_compute[254092]: 2025-11-25 16:25:48.152 254096 DEBUG oslo_concurrency.lockutils [req-0c0b888c-26ed-46f5-aaa6-a3e93ed41b68 req-43caaaa9-b06a-4d5c-ab35-2bd4720189b7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4b2c6795-15b5-424c-b7c5-4b695a348f41-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:25:48 compute-0 nova_compute[254092]: 2025-11-25 16:25:48.152 254096 DEBUG nova.compute.manager [req-0c0b888c-26ed-46f5-aaa6-a3e93ed41b68 req-43caaaa9-b06a-4d5c-ab35-2bd4720189b7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Processing event network-vif-plugged-05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:25:48 compute-0 nova_compute[254092]: 2025-11-25 16:25:48.153 254096 DEBUG nova.compute.manager [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:25:48 compute-0 nova_compute[254092]: 2025-11-25 16:25:48.158 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764087948.1584089, 4b2c6795-15b5-424c-b7c5-4b695a348f41 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:25:48 compute-0 nova_compute[254092]: 2025-11-25 16:25:48.159 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] VM Resumed (Lifecycle Event)
Nov 25 16:25:48 compute-0 nova_compute[254092]: 2025-11-25 16:25:48.160 254096 DEBUG nova.virt.libvirt.driver [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:25:48 compute-0 nova_compute[254092]: 2025-11-25 16:25:48.167 254096 INFO nova.virt.libvirt.driver [-] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Instance spawned successfully.
Nov 25 16:25:48 compute-0 nova_compute[254092]: 2025-11-25 16:25:48.167 254096 DEBUG nova.virt.libvirt.driver [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:25:48 compute-0 podman[272236]: 2025-11-25 16:25:48.187399562 +0000 UTC m=+0.069708158 container create 7e95a915560e1b90366f3da9741754f56abb6cc3fca7df71495dc810c5080f7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:25:48 compute-0 nova_compute[254092]: 2025-11-25 16:25:48.210 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:25:48 compute-0 nova_compute[254092]: 2025-11-25 16:25:48.219 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:25:48 compute-0 systemd[1]: Started libpod-conmon-7e95a915560e1b90366f3da9741754f56abb6cc3fca7df71495dc810c5080f7d.scope.
Nov 25 16:25:48 compute-0 podman[272236]: 2025-11-25 16:25:48.139115589 +0000 UTC m=+0.021424195 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:25:48 compute-0 nova_compute[254092]: 2025-11-25 16:25:48.239 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:25:48 compute-0 nova_compute[254092]: 2025-11-25 16:25:48.250 254096 DEBUG oslo_concurrency.lockutils [None req-785ea770-1765-4340-8781-63c74e7eaf5b c56561c97a2d48ffa5ee1c65800dc0fa b3a17f41085a44e38251177c55db1ed1 - - default default] Lock "2c261173-944d-4c35-8d16-b066436572bb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.156s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:25:48 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:25:48 compute-0 nova_compute[254092]: 2025-11-25 16:25:48.256 254096 DEBUG nova.virt.libvirt.driver [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:25:48 compute-0 nova_compute[254092]: 2025-11-25 16:25:48.258 254096 DEBUG nova.virt.libvirt.driver [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:25:48 compute-0 nova_compute[254092]: 2025-11-25 16:25:48.258 254096 DEBUG nova.virt.libvirt.driver [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:25:48 compute-0 nova_compute[254092]: 2025-11-25 16:25:48.259 254096 DEBUG nova.virt.libvirt.driver [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:25:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/565d56c1463fe42daf52e3cc2c80f46ee13778cefe2ff5f841cf7e2212503eba/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:25:48 compute-0 nova_compute[254092]: 2025-11-25 16:25:48.259 254096 DEBUG nova.virt.libvirt.driver [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:25:48 compute-0 nova_compute[254092]: 2025-11-25 16:25:48.260 254096 DEBUG nova.virt.libvirt.driver [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:25:48 compute-0 podman[272236]: 2025-11-25 16:25:48.273404863 +0000 UTC m=+0.155713459 container init 7e95a915560e1b90366f3da9741754f56abb6cc3fca7df71495dc810c5080f7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 16:25:48 compute-0 podman[272236]: 2025-11-25 16:25:48.279211741 +0000 UTC m=+0.161520327 container start 7e95a915560e1b90366f3da9741754f56abb6cc3fca7df71495dc810c5080f7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:25:48 compute-0 neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d[272268]: [NOTICE]   (272273) : New worker (272275) forked
Nov 25 16:25:48 compute-0 neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d[272268]: [NOTICE]   (272273) : Loading success.
Nov 25 16:25:48 compute-0 nova_compute[254092]: 2025-11-25 16:25:48.321 254096 INFO nova.compute.manager [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Took 10.90 seconds to spawn the instance on the hypervisor.
Nov 25 16:25:48 compute-0 nova_compute[254092]: 2025-11-25 16:25:48.322 254096 DEBUG nova.compute.manager [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:25:48 compute-0 nova_compute[254092]: 2025-11-25 16:25:48.416 254096 INFO nova.compute.manager [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Took 12.16 seconds to build instance.
Nov 25 16:25:48 compute-0 nova_compute[254092]: 2025-11-25 16:25:48.430 254096 DEBUG oslo_concurrency.lockutils [None req-53268db5-97bd-44fb-acf2-1ff1d08257b4 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "4b2c6795-15b5-424c-b7c5-4b695a348f41" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.276s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:25:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:25:48 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1745681765' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:25:48 compute-0 nova_compute[254092]: 2025-11-25 16:25:48.540 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:25:48 compute-0 nova_compute[254092]: 2025-11-25 16:25:48.620 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:25:48 compute-0 nova_compute[254092]: 2025-11-25 16:25:48.620 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:25:48 compute-0 nova_compute[254092]: 2025-11-25 16:25:48.621 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:25:48 compute-0 nova_compute[254092]: 2025-11-25 16:25:48.781 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:25:48 compute-0 nova_compute[254092]: 2025-11-25 16:25:48.783 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4673MB free_disk=59.94660568237305GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 16:25:48 compute-0 nova_compute[254092]: 2025-11-25 16:25:48.783 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:25:48 compute-0 nova_compute[254092]: 2025-11-25 16:25:48.783 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:25:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1118: 321 pgs: 321 active+clean; 136 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 140 op/s
Nov 25 16:25:48 compute-0 nova_compute[254092]: 2025-11-25 16:25:48.843 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 4b2c6795-15b5-424c-b7c5-4b695a348f41 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:25:48 compute-0 nova_compute[254092]: 2025-11-25 16:25:48.844 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 16:25:48 compute-0 nova_compute[254092]: 2025-11-25 16:25:48.844 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 16:25:48 compute-0 nova_compute[254092]: 2025-11-25 16:25:48.904 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:25:48 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3617436220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:25:48 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1745681765' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:25:48 compute-0 ceph-mon[74985]: pgmap v1118: 321 pgs: 321 active+clean; 136 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 140 op/s
Nov 25 16:25:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:25:49 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/731796822' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:25:49 compute-0 nova_compute[254092]: 2025-11-25 16:25:49.350 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:25:49 compute-0 nova_compute[254092]: 2025-11-25 16:25:49.357 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:25:49 compute-0 nova_compute[254092]: 2025-11-25 16:25:49.378 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:25:49 compute-0 nova_compute[254092]: 2025-11-25 16:25:49.425 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 16:25:49 compute-0 nova_compute[254092]: 2025-11-25 16:25:49.426 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:25:49 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/731796822' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:25:50 compute-0 nova_compute[254092]: 2025-11-25 16:25:50.481 254096 DEBUG nova.compute.manager [req-b94bdc6b-e7af-4eb9-9256-dff782db33a3 req-b6e3b556-7e9b-48cb-a822-42c0c4fe428d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Received event network-vif-plugged-05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:25:50 compute-0 nova_compute[254092]: 2025-11-25 16:25:50.483 254096 DEBUG oslo_concurrency.lockutils [req-b94bdc6b-e7af-4eb9-9256-dff782db33a3 req-b6e3b556-7e9b-48cb-a822-42c0c4fe428d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "4b2c6795-15b5-424c-b7c5-4b695a348f41-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:25:50 compute-0 nova_compute[254092]: 2025-11-25 16:25:50.484 254096 DEBUG oslo_concurrency.lockutils [req-b94bdc6b-e7af-4eb9-9256-dff782db33a3 req-b6e3b556-7e9b-48cb-a822-42c0c4fe428d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4b2c6795-15b5-424c-b7c5-4b695a348f41-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:25:50 compute-0 nova_compute[254092]: 2025-11-25 16:25:50.484 254096 DEBUG oslo_concurrency.lockutils [req-b94bdc6b-e7af-4eb9-9256-dff782db33a3 req-b6e3b556-7e9b-48cb-a822-42c0c4fe428d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4b2c6795-15b5-424c-b7c5-4b695a348f41-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:25:50 compute-0 nova_compute[254092]: 2025-11-25 16:25:50.485 254096 DEBUG nova.compute.manager [req-b94bdc6b-e7af-4eb9-9256-dff782db33a3 req-b6e3b556-7e9b-48cb-a822-42c0c4fe428d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] No waiting events found dispatching network-vif-plugged-05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:25:50 compute-0 nova_compute[254092]: 2025-11-25 16:25:50.485 254096 WARNING nova.compute.manager [req-b94bdc6b-e7af-4eb9-9256-dff782db33a3 req-b6e3b556-7e9b-48cb-a822-42c0c4fe428d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Received unexpected event network-vif-plugged-05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 for instance with vm_state active and task_state None.
Nov 25 16:25:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1119: 321 pgs: 321 active+clean; 103 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.6 MiB/s wr, 203 op/s
Nov 25 16:25:50 compute-0 ceph-mon[74985]: pgmap v1119: 321 pgs: 321 active+clean; 103 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.6 MiB/s wr, 203 op/s
Nov 25 16:25:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:25:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:25:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:25:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:25:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0004066625765123397 of space, bias 1.0, pg target 0.1219987729537019 quantized to 32 (current 32)
Nov 25 16:25:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:25:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:25:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:25:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:25:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:25:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 25 16:25:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:25:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 16:25:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:25:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:25:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:25:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 16:25:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:25:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 16:25:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:25:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:25:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:25:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 16:25:51 compute-0 nova_compute[254092]: 2025-11-25 16:25:51.501 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:52 compute-0 nova_compute[254092]: 2025-11-25 16:25:52.125 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:52 compute-0 nova_compute[254092]: 2025-11-25 16:25:52.426 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:25:52 compute-0 nova_compute[254092]: 2025-11-25 16:25:52.427 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:25:52 compute-0 nova_compute[254092]: 2025-11-25 16:25:52.427 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:25:52 compute-0 nova_compute[254092]: 2025-11-25 16:25:52.427 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:25:52 compute-0 nova_compute[254092]: 2025-11-25 16:25:52.427 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:25:52 compute-0 nova_compute[254092]: 2025-11-25 16:25:52.427 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:25:52 compute-0 nova_compute[254092]: 2025-11-25 16:25:52.427 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 16:25:52 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:25:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1120: 321 pgs: 321 active+clean; 90 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.9 MiB/s wr, 204 op/s
Nov 25 16:25:53 compute-0 ceph-mon[74985]: pgmap v1120: 321 pgs: 321 active+clean; 90 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.9 MiB/s wr, 204 op/s
Nov 25 16:25:53 compute-0 nova_compute[254092]: 2025-11-25 16:25:53.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:25:53 compute-0 nova_compute[254092]: 2025-11-25 16:25:53.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 16:25:53 compute-0 nova_compute[254092]: 2025-11-25 16:25:53.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 16:25:53 compute-0 nova_compute[254092]: 2025-11-25 16:25:53.590 254096 DEBUG nova.compute.manager [req-fe883fc1-1ddf-420b-be04-b1e25ebc5a70 req-c9d5ad96-fe23-4bd3-ac78-89ebb8905d2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Received event network-changed-05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:25:53 compute-0 nova_compute[254092]: 2025-11-25 16:25:53.591 254096 DEBUG nova.compute.manager [req-fe883fc1-1ddf-420b-be04-b1e25ebc5a70 req-c9d5ad96-fe23-4bd3-ac78-89ebb8905d2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Refreshing instance network info cache due to event network-changed-05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:25:53 compute-0 nova_compute[254092]: 2025-11-25 16:25:53.591 254096 DEBUG oslo_concurrency.lockutils [req-fe883fc1-1ddf-420b-be04-b1e25ebc5a70 req-c9d5ad96-fe23-4bd3-ac78-89ebb8905d2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-4b2c6795-15b5-424c-b7c5-4b695a348f41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:25:53 compute-0 nova_compute[254092]: 2025-11-25 16:25:53.592 254096 DEBUG oslo_concurrency.lockutils [req-fe883fc1-1ddf-420b-be04-b1e25ebc5a70 req-c9d5ad96-fe23-4bd3-ac78-89ebb8905d2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-4b2c6795-15b5-424c-b7c5-4b695a348f41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:25:53 compute-0 nova_compute[254092]: 2025-11-25 16:25:53.592 254096 DEBUG nova.network.neutron [req-fe883fc1-1ddf-420b-be04-b1e25ebc5a70 req-c9d5ad96-fe23-4bd3-ac78-89ebb8905d2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Refreshing network info cache for port 05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:25:54 compute-0 nova_compute[254092]: 2025-11-25 16:25:54.126 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-4b2c6795-15b5-424c-b7c5-4b695a348f41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:25:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1121: 321 pgs: 321 active+clean; 90 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.1 MiB/s wr, 195 op/s
Nov 25 16:25:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 16:25:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1942886310' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:25:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 16:25:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1942886310' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:25:55 compute-0 ceph-mon[74985]: pgmap v1121: 321 pgs: 321 active+clean; 90 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.1 MiB/s wr, 195 op/s
Nov 25 16:25:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/1942886310' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:25:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/1942886310' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:25:56 compute-0 nova_compute[254092]: 2025-11-25 16:25:56.245 254096 DEBUG nova.network.neutron [req-fe883fc1-1ddf-420b-be04-b1e25ebc5a70 req-c9d5ad96-fe23-4bd3-ac78-89ebb8905d2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Updated VIF entry in instance network info cache for port 05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:25:56 compute-0 nova_compute[254092]: 2025-11-25 16:25:56.246 254096 DEBUG nova.network.neutron [req-fe883fc1-1ddf-420b-be04-b1e25ebc5a70 req-c9d5ad96-fe23-4bd3-ac78-89ebb8905d2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Updating instance_info_cache with network_info: [{"id": "05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8", "address": "fa:16:3e:fb:ac:25", "network": {"id": "dd4097f8-dcdf-451c-8fbb-2057e86e375d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-256857767-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95df0d15c889499aba411e805ea145a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cb1f2f-ce", "ovs_interfaceid": "05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:25:56 compute-0 nova_compute[254092]: 2025-11-25 16:25:56.269 254096 DEBUG oslo_concurrency.lockutils [req-fe883fc1-1ddf-420b-be04-b1e25ebc5a70 req-c9d5ad96-fe23-4bd3-ac78-89ebb8905d2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-4b2c6795-15b5-424c-b7c5-4b695a348f41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:25:56 compute-0 nova_compute[254092]: 2025-11-25 16:25:56.269 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-4b2c6795-15b5-424c-b7c5-4b695a348f41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:25:56 compute-0 nova_compute[254092]: 2025-11-25 16:25:56.269 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 16:25:56 compute-0 nova_compute[254092]: 2025-11-25 16:25:56.270 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4b2c6795-15b5-424c-b7c5-4b695a348f41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:25:56 compute-0 nova_compute[254092]: 2025-11-25 16:25:56.502 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1122: 321 pgs: 321 active+clean; 90 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.1 MiB/s wr, 195 op/s
Nov 25 16:25:56 compute-0 ceph-mon[74985]: pgmap v1122: 321 pgs: 321 active+clean; 90 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.1 MiB/s wr, 195 op/s
Nov 25 16:25:57 compute-0 nova_compute[254092]: 2025-11-25 16:25:57.130 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:25:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:25:58 compute-0 nova_compute[254092]: 2025-11-25 16:25:58.594 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Updating instance_info_cache with network_info: [{"id": "05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8", "address": "fa:16:3e:fb:ac:25", "network": {"id": "dd4097f8-dcdf-451c-8fbb-2057e86e375d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-256857767-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95df0d15c889499aba411e805ea145a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cb1f2f-ce", "ovs_interfaceid": "05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:25:58 compute-0 nova_compute[254092]: 2025-11-25 16:25:58.611 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-4b2c6795-15b5-424c-b7c5-4b695a348f41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:25:58 compute-0 nova_compute[254092]: 2025-11-25 16:25:58.612 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 16:25:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1123: 321 pgs: 321 active+clean; 90 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 102 op/s
Nov 25 16:26:00 compute-0 ceph-mon[74985]: pgmap v1123: 321 pgs: 321 active+clean; 90 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 102 op/s
Nov 25 16:26:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1124: 321 pgs: 321 active+clean; 95 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 923 KiB/s wr, 117 op/s
Nov 25 16:26:01 compute-0 ceph-mon[74985]: pgmap v1124: 321 pgs: 321 active+clean; 95 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 923 KiB/s wr, 117 op/s
Nov 25 16:26:01 compute-0 nova_compute[254092]: 2025-11-25 16:26:01.437 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764087946.4354992, 2c261173-944d-4c35-8d16-b066436572bb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:26:01 compute-0 nova_compute[254092]: 2025-11-25 16:26:01.438 254096 INFO nova.compute.manager [-] [instance: 2c261173-944d-4c35-8d16-b066436572bb] VM Stopped (Lifecycle Event)
Nov 25 16:26:01 compute-0 nova_compute[254092]: 2025-11-25 16:26:01.487 254096 DEBUG nova.compute.manager [None req-2934ee08-b3a5-41c6-b3e8-5b4df85e7bdb - - - - - -] [instance: 2c261173-944d-4c35-8d16-b066436572bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:26:01 compute-0 nova_compute[254092]: 2025-11-25 16:26:01.505 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:26:01 compute-0 nova_compute[254092]: 2025-11-25 16:26:01.606 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:26:01 compute-0 nova_compute[254092]: 2025-11-25 16:26:01.943 254096 DEBUG oslo_concurrency.lockutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Acquiring lock "e0ce39e6-663b-4ff2-84e5-98aa54955701" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:26:01 compute-0 nova_compute[254092]: 2025-11-25 16:26:01.944 254096 DEBUG oslo_concurrency.lockutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "e0ce39e6-663b-4ff2-84e5-98aa54955701" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:26:01 compute-0 nova_compute[254092]: 2025-11-25 16:26:01.959 254096 DEBUG nova.compute.manager [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:26:02 compute-0 nova_compute[254092]: 2025-11-25 16:26:02.040 254096 DEBUG oslo_concurrency.lockutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:26:02 compute-0 nova_compute[254092]: 2025-11-25 16:26:02.041 254096 DEBUG oslo_concurrency.lockutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:26:02 compute-0 nova_compute[254092]: 2025-11-25 16:26:02.049 254096 DEBUG nova.virt.hardware [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:26:02 compute-0 nova_compute[254092]: 2025-11-25 16:26:02.050 254096 INFO nova.compute.claims [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:26:02 compute-0 nova_compute[254092]: 2025-11-25 16:26:02.132 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:26:02 compute-0 nova_compute[254092]: 2025-11-25 16:26:02.207 254096 DEBUG oslo_concurrency.processutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:26:02 compute-0 ovn_controller[153477]: 2025-11-25T16:26:02Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fb:ac:25 10.100.0.6
Nov 25 16:26:02 compute-0 ovn_controller[153477]: 2025-11-25T16:26:02Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fb:ac:25 10.100.0.6
Nov 25 16:26:02 compute-0 nova_compute[254092]: 2025-11-25 16:26:02.263 254096 DEBUG oslo_concurrency.lockutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Acquiring lock "e9e96b2e-62f4-4f02-96ec-5306f9e39ca8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:26:02 compute-0 nova_compute[254092]: 2025-11-25 16:26:02.263 254096 DEBUG oslo_concurrency.lockutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Lock "e9e96b2e-62f4-4f02-96ec-5306f9e39ca8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:26:02 compute-0 nova_compute[254092]: 2025-11-25 16:26:02.278 254096 DEBUG nova.compute.manager [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:26:02 compute-0 nova_compute[254092]: 2025-11-25 16:26:02.346 254096 DEBUG oslo_concurrency.lockutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:26:02 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:26:02 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:26:02 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2519270304' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:26:02 compute-0 nova_compute[254092]: 2025-11-25 16:26:02.650 254096 DEBUG oslo_concurrency.processutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:26:02 compute-0 nova_compute[254092]: 2025-11-25 16:26:02.655 254096 DEBUG nova.compute.provider_tree [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:26:02 compute-0 nova_compute[254092]: 2025-11-25 16:26:02.678 254096 DEBUG nova.scheduler.client.report [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:26:02 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2519270304' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:26:02 compute-0 nova_compute[254092]: 2025-11-25 16:26:02.707 254096 DEBUG oslo_concurrency.lockutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:26:02 compute-0 nova_compute[254092]: 2025-11-25 16:26:02.707 254096 DEBUG nova.compute.manager [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:26:02 compute-0 nova_compute[254092]: 2025-11-25 16:26:02.710 254096 DEBUG oslo_concurrency.lockutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.364s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:26:02 compute-0 nova_compute[254092]: 2025-11-25 16:26:02.717 254096 DEBUG nova.virt.hardware [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:26:02 compute-0 nova_compute[254092]: 2025-11-25 16:26:02.717 254096 INFO nova.compute.claims [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:26:02 compute-0 nova_compute[254092]: 2025-11-25 16:26:02.801 254096 DEBUG nova.compute.manager [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:26:02 compute-0 nova_compute[254092]: 2025-11-25 16:26:02.802 254096 DEBUG nova.network.neutron [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:26:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1125: 321 pgs: 321 active+clean; 103 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 632 KiB/s rd, 1.6 MiB/s wr, 60 op/s
Nov 25 16:26:02 compute-0 nova_compute[254092]: 2025-11-25 16:26:02.915 254096 INFO nova.virt.libvirt.driver [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:26:02 compute-0 nova_compute[254092]: 2025-11-25 16:26:02.949 254096 DEBUG oslo_concurrency.processutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:26:03 compute-0 nova_compute[254092]: 2025-11-25 16:26:03.097 254096 DEBUG nova.compute.manager [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:26:03 compute-0 nova_compute[254092]: 2025-11-25 16:26:03.172 254096 DEBUG nova.compute.manager [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:26:03 compute-0 nova_compute[254092]: 2025-11-25 16:26:03.173 254096 DEBUG nova.virt.libvirt.driver [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:26:03 compute-0 nova_compute[254092]: 2025-11-25 16:26:03.174 254096 INFO nova.virt.libvirt.driver [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Creating image(s)
Nov 25 16:26:03 compute-0 nova_compute[254092]: 2025-11-25 16:26:03.192 254096 DEBUG nova.storage.rbd_utils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] rbd image e0ce39e6-663b-4ff2-84e5-98aa54955701_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:26:03 compute-0 nova_compute[254092]: 2025-11-25 16:26:03.210 254096 DEBUG nova.storage.rbd_utils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] rbd image e0ce39e6-663b-4ff2-84e5-98aa54955701_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:26:03 compute-0 nova_compute[254092]: 2025-11-25 16:26:03.228 254096 DEBUG nova.storage.rbd_utils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] rbd image e0ce39e6-663b-4ff2-84e5-98aa54955701_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:26:03 compute-0 nova_compute[254092]: 2025-11-25 16:26:03.231 254096 DEBUG oslo_concurrency.processutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:26:03 compute-0 nova_compute[254092]: 2025-11-25 16:26:03.249 254096 DEBUG nova.network.neutron [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 25 16:26:03 compute-0 nova_compute[254092]: 2025-11-25 16:26:03.250 254096 DEBUG nova.compute.manager [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:26:03 compute-0 nova_compute[254092]: 2025-11-25 16:26:03.289 254096 DEBUG oslo_concurrency.processutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:26:03 compute-0 nova_compute[254092]: 2025-11-25 16:26:03.290 254096 DEBUG oslo_concurrency.lockutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:26:03 compute-0 nova_compute[254092]: 2025-11-25 16:26:03.290 254096 DEBUG oslo_concurrency.lockutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:26:03 compute-0 nova_compute[254092]: 2025-11-25 16:26:03.291 254096 DEBUG oslo_concurrency.lockutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:26:03 compute-0 nova_compute[254092]: 2025-11-25 16:26:03.308 254096 DEBUG nova.storage.rbd_utils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] rbd image e0ce39e6-663b-4ff2-84e5-98aa54955701_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:26:03 compute-0 nova_compute[254092]: 2025-11-25 16:26:03.311 254096 DEBUG oslo_concurrency.processutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 e0ce39e6-663b-4ff2-84e5-98aa54955701_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:26:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:26:03 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1301432048' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:26:03 compute-0 nova_compute[254092]: 2025-11-25 16:26:03.461 254096 DEBUG oslo_concurrency.processutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:26:03 compute-0 nova_compute[254092]: 2025-11-25 16:26:03.466 254096 DEBUG nova.compute.provider_tree [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:26:03 compute-0 nova_compute[254092]: 2025-11-25 16:26:03.488 254096 DEBUG nova.scheduler.client.report [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:26:03 compute-0 nova_compute[254092]: 2025-11-25 16:26:03.551 254096 DEBUG oslo_concurrency.lockutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.842s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:26:03 compute-0 nova_compute[254092]: 2025-11-25 16:26:03.553 254096 DEBUG nova.compute.manager [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:26:03 compute-0 nova_compute[254092]: 2025-11-25 16:26:03.637 254096 DEBUG nova.compute.manager [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:26:03 compute-0 nova_compute[254092]: 2025-11-25 16:26:03.637 254096 DEBUG nova.network.neutron [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:26:03 compute-0 nova_compute[254092]: 2025-11-25 16:26:03.671 254096 INFO nova.virt.libvirt.driver [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:26:03 compute-0 ceph-mon[74985]: pgmap v1125: 321 pgs: 321 active+clean; 103 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 632 KiB/s rd, 1.6 MiB/s wr, 60 op/s
Nov 25 16:26:03 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1301432048' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:26:03 compute-0 nova_compute[254092]: 2025-11-25 16:26:03.691 254096 DEBUG nova.compute.manager [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:26:03 compute-0 nova_compute[254092]: 2025-11-25 16:26:03.706 254096 DEBUG oslo_concurrency.processutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 e0ce39e6-663b-4ff2-84e5-98aa54955701_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.395s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:26:03 compute-0 nova_compute[254092]: 2025-11-25 16:26:03.765 254096 DEBUG nova.storage.rbd_utils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] resizing rbd image e0ce39e6-663b-4ff2-84e5-98aa54955701_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:26:03 compute-0 nova_compute[254092]: 2025-11-25 16:26:03.853 254096 DEBUG nova.compute.manager [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:26:03 compute-0 nova_compute[254092]: 2025-11-25 16:26:03.854 254096 DEBUG nova.virt.libvirt.driver [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:26:03 compute-0 nova_compute[254092]: 2025-11-25 16:26:03.855 254096 INFO nova.virt.libvirt.driver [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Creating image(s)
Nov 25 16:26:03 compute-0 nova_compute[254092]: 2025-11-25 16:26:03.877 254096 DEBUG nova.storage.rbd_utils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] rbd image e9e96b2e-62f4-4f02-96ec-5306f9e39ca8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:26:03 compute-0 nova_compute[254092]: 2025-11-25 16:26:03.896 254096 DEBUG nova.storage.rbd_utils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] rbd image e9e96b2e-62f4-4f02-96ec-5306f9e39ca8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:26:03 compute-0 nova_compute[254092]: 2025-11-25 16:26:03.925 254096 DEBUG nova.storage.rbd_utils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] rbd image e9e96b2e-62f4-4f02-96ec-5306f9e39ca8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:26:03 compute-0 nova_compute[254092]: 2025-11-25 16:26:03.930 254096 DEBUG oslo_concurrency.processutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:26:03 compute-0 nova_compute[254092]: 2025-11-25 16:26:03.959 254096 DEBUG nova.objects.instance [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lazy-loading 'migration_context' on Instance uuid e0ce39e6-663b-4ff2-84e5-98aa54955701 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:26:03 compute-0 nova_compute[254092]: 2025-11-25 16:26:03.979 254096 DEBUG nova.virt.libvirt.driver [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:26:03 compute-0 nova_compute[254092]: 2025-11-25 16:26:03.980 254096 DEBUG nova.virt.libvirt.driver [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Ensure instance console log exists: /var/lib/nova/instances/e0ce39e6-663b-4ff2-84e5-98aa54955701/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:26:03 compute-0 nova_compute[254092]: 2025-11-25 16:26:03.981 254096 DEBUG oslo_concurrency.lockutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:26:03 compute-0 nova_compute[254092]: 2025-11-25 16:26:03.981 254096 DEBUG oslo_concurrency.lockutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:26:03 compute-0 nova_compute[254092]: 2025-11-25 16:26:03.982 254096 DEBUG oslo_concurrency.lockutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:26:03 compute-0 nova_compute[254092]: 2025-11-25 16:26:03.984 254096 DEBUG nova.virt.libvirt.driver [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:26:03 compute-0 nova_compute[254092]: 2025-11-25 16:26:03.989 254096 WARNING nova.virt.libvirt.driver [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:26:03 compute-0 nova_compute[254092]: 2025-11-25 16:26:03.992 254096 DEBUG oslo_concurrency.processutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:26:03 compute-0 nova_compute[254092]: 2025-11-25 16:26:03.993 254096 DEBUG oslo_concurrency.lockutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:26:03 compute-0 nova_compute[254092]: 2025-11-25 16:26:03.993 254096 DEBUG oslo_concurrency.lockutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:26:03 compute-0 nova_compute[254092]: 2025-11-25 16:26:03.994 254096 DEBUG oslo_concurrency.lockutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:26:04 compute-0 nova_compute[254092]: 2025-11-25 16:26:04.020 254096 DEBUG nova.storage.rbd_utils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] rbd image e9e96b2e-62f4-4f02-96ec-5306f9e39ca8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:26:04 compute-0 nova_compute[254092]: 2025-11-25 16:26:04.024 254096 DEBUG oslo_concurrency.processutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 e9e96b2e-62f4-4f02-96ec-5306f9e39ca8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:26:04 compute-0 nova_compute[254092]: 2025-11-25 16:26:04.054 254096 DEBUG nova.virt.libvirt.host [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:26:04 compute-0 nova_compute[254092]: 2025-11-25 16:26:04.055 254096 DEBUG nova.virt.libvirt.host [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:26:04 compute-0 nova_compute[254092]: 2025-11-25 16:26:04.058 254096 DEBUG nova.virt.libvirt.host [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:26:04 compute-0 nova_compute[254092]: 2025-11-25 16:26:04.059 254096 DEBUG nova.virt.libvirt.host [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:26:04 compute-0 nova_compute[254092]: 2025-11-25 16:26:04.060 254096 DEBUG nova.virt.libvirt.driver [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:26:04 compute-0 nova_compute[254092]: 2025-11-25 16:26:04.060 254096 DEBUG nova.virt.hardware [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:26:04 compute-0 nova_compute[254092]: 2025-11-25 16:26:04.061 254096 DEBUG nova.virt.hardware [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:26:04 compute-0 nova_compute[254092]: 2025-11-25 16:26:04.062 254096 DEBUG nova.virt.hardware [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:26:04 compute-0 nova_compute[254092]: 2025-11-25 16:26:04.062 254096 DEBUG nova.virt.hardware [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:26:04 compute-0 nova_compute[254092]: 2025-11-25 16:26:04.063 254096 DEBUG nova.virt.hardware [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:26:04 compute-0 nova_compute[254092]: 2025-11-25 16:26:04.063 254096 DEBUG nova.virt.hardware [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:26:04 compute-0 nova_compute[254092]: 2025-11-25 16:26:04.064 254096 DEBUG nova.virt.hardware [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:26:04 compute-0 nova_compute[254092]: 2025-11-25 16:26:04.064 254096 DEBUG nova.virt.hardware [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:26:04 compute-0 nova_compute[254092]: 2025-11-25 16:26:04.065 254096 DEBUG nova.virt.hardware [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:26:04 compute-0 nova_compute[254092]: 2025-11-25 16:26:04.065 254096 DEBUG nova.virt.hardware [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:26:04 compute-0 nova_compute[254092]: 2025-11-25 16:26:04.065 254096 DEBUG nova.virt.hardware [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:26:04 compute-0 nova_compute[254092]: 2025-11-25 16:26:04.071 254096 DEBUG oslo_concurrency.processutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:26:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:26:04 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1539899894' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:26:04 compute-0 nova_compute[254092]: 2025-11-25 16:26:04.510 254096 DEBUG oslo_concurrency.processutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 e9e96b2e-62f4-4f02-96ec-5306f9e39ca8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:26:04 compute-0 nova_compute[254092]: 2025-11-25 16:26:04.537 254096 DEBUG oslo_concurrency.processutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:26:04 compute-0 nova_compute[254092]: 2025-11-25 16:26:04.556 254096 DEBUG nova.storage.rbd_utils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] rbd image e0ce39e6-663b-4ff2-84e5-98aa54955701_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:26:04 compute-0 nova_compute[254092]: 2025-11-25 16:26:04.560 254096 DEBUG oslo_concurrency.processutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:26:04 compute-0 nova_compute[254092]: 2025-11-25 16:26:04.608 254096 DEBUG nova.storage.rbd_utils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] resizing rbd image e9e96b2e-62f4-4f02-96ec-5306f9e39ca8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:26:04 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1539899894' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:26:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1126: 321 pgs: 321 active+clean; 103 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 1.6 MiB/s wr, 21 op/s
Nov 25 16:26:04 compute-0 nova_compute[254092]: 2025-11-25 16:26:04.866 254096 DEBUG nova.objects.instance [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Lazy-loading 'migration_context' on Instance uuid e9e96b2e-62f4-4f02-96ec-5306f9e39ca8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:26:04 compute-0 nova_compute[254092]: 2025-11-25 16:26:04.883 254096 DEBUG nova.virt.libvirt.driver [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:26:04 compute-0 nova_compute[254092]: 2025-11-25 16:26:04.883 254096 DEBUG nova.virt.libvirt.driver [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Ensure instance console log exists: /var/lib/nova/instances/e9e96b2e-62f4-4f02-96ec-5306f9e39ca8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:26:04 compute-0 nova_compute[254092]: 2025-11-25 16:26:04.884 254096 DEBUG oslo_concurrency.lockutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:26:04 compute-0 nova_compute[254092]: 2025-11-25 16:26:04.884 254096 DEBUG oslo_concurrency.lockutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:26:04 compute-0 nova_compute[254092]: 2025-11-25 16:26:04.885 254096 DEBUG oslo_concurrency.lockutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:26:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:26:05 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3510083338' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:26:05 compute-0 nova_compute[254092]: 2025-11-25 16:26:05.024 254096 DEBUG oslo_concurrency.processutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:26:05 compute-0 nova_compute[254092]: 2025-11-25 16:26:05.026 254096 DEBUG nova.objects.instance [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lazy-loading 'pci_devices' on Instance uuid e0ce39e6-663b-4ff2-84e5-98aa54955701 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:26:05 compute-0 nova_compute[254092]: 2025-11-25 16:26:05.040 254096 DEBUG nova.virt.libvirt.driver [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:26:05 compute-0 nova_compute[254092]:   <uuid>e0ce39e6-663b-4ff2-84e5-98aa54955701</uuid>
Nov 25 16:26:05 compute-0 nova_compute[254092]:   <name>instance-00000007</name>
Nov 25 16:26:05 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:26:05 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:26:05 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:26:05 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:       <nova:name>tempest-ServersAdminNegativeTestJSON-server-1320978474</nova:name>
Nov 25 16:26:05 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:26:03</nova:creationTime>
Nov 25 16:26:05 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:26:05 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:26:05 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:26:05 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:26:05 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:26:05 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:26:05 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:26:05 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:26:05 compute-0 nova_compute[254092]:         <nova:user uuid="820b9ab982364678b8e75b2c9cc4cfed">tempest-ServersAdminNegativeTestJSON-1866473487-project-member</nova:user>
Nov 25 16:26:05 compute-0 nova_compute[254092]:         <nova:project uuid="511bc9af98844c8995c27adbee1a3d4c">tempest-ServersAdminNegativeTestJSON-1866473487</nova:project>
Nov 25 16:26:05 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:26:05 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:       <nova:ports/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:26:05 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:26:05 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:26:05 compute-0 nova_compute[254092]:     <system>
Nov 25 16:26:05 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:26:05 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:26:05 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:26:05 compute-0 nova_compute[254092]:       <entry name="serial">e0ce39e6-663b-4ff2-84e5-98aa54955701</entry>
Nov 25 16:26:05 compute-0 nova_compute[254092]:       <entry name="uuid">e0ce39e6-663b-4ff2-84e5-98aa54955701</entry>
Nov 25 16:26:05 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     </system>
Nov 25 16:26:05 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:26:05 compute-0 nova_compute[254092]:   <os>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:   </os>
Nov 25 16:26:05 compute-0 nova_compute[254092]:   <features>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:   </features>
Nov 25 16:26:05 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:26:05 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:26:05 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:26:05 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:26:05 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:26:05 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/e0ce39e6-663b-4ff2-84e5-98aa54955701_disk">
Nov 25 16:26:05 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:       </source>
Nov 25 16:26:05 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:26:05 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:26:05 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:26:05 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/e0ce39e6-663b-4ff2-84e5-98aa54955701_disk.config">
Nov 25 16:26:05 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:       </source>
Nov 25 16:26:05 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:26:05 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:26:05 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:26:05 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/e0ce39e6-663b-4ff2-84e5-98aa54955701/console.log" append="off"/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     <video>
Nov 25 16:26:05 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     </video>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:26:05 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:26:05 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:26:05 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:26:05 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:26:05 compute-0 nova_compute[254092]: </domain>
Nov 25 16:26:05 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:26:05 compute-0 nova_compute[254092]: 2025-11-25 16:26:05.141 254096 DEBUG nova.virt.libvirt.driver [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:26:05 compute-0 nova_compute[254092]: 2025-11-25 16:26:05.142 254096 DEBUG nova.virt.libvirt.driver [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:26:05 compute-0 nova_compute[254092]: 2025-11-25 16:26:05.142 254096 INFO nova.virt.libvirt.driver [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Using config drive
Nov 25 16:26:05 compute-0 nova_compute[254092]: 2025-11-25 16:26:05.161 254096 DEBUG nova.storage.rbd_utils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] rbd image e0ce39e6-663b-4ff2-84e5-98aa54955701_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:26:06 compute-0 ceph-mon[74985]: pgmap v1126: 321 pgs: 321 active+clean; 103 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 1.6 MiB/s wr, 21 op/s
Nov 25 16:26:06 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3510083338' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:26:06 compute-0 nova_compute[254092]: 2025-11-25 16:26:06.110 254096 DEBUG nova.network.neutron [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 25 16:26:06 compute-0 nova_compute[254092]: 2025-11-25 16:26:06.111 254096 DEBUG nova.compute.manager [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:26:06 compute-0 nova_compute[254092]: 2025-11-25 16:26:06.112 254096 DEBUG nova.virt.libvirt.driver [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:26:06 compute-0 nova_compute[254092]: 2025-11-25 16:26:06.116 254096 WARNING nova.virt.libvirt.driver [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:26:06 compute-0 nova_compute[254092]: 2025-11-25 16:26:06.121 254096 DEBUG nova.virt.libvirt.host [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:26:06 compute-0 nova_compute[254092]: 2025-11-25 16:26:06.121 254096 DEBUG nova.virt.libvirt.host [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:26:06 compute-0 nova_compute[254092]: 2025-11-25 16:26:06.124 254096 DEBUG nova.virt.libvirt.host [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:26:06 compute-0 nova_compute[254092]: 2025-11-25 16:26:06.124 254096 DEBUG nova.virt.libvirt.host [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:26:06 compute-0 nova_compute[254092]: 2025-11-25 16:26:06.124 254096 DEBUG nova.virt.libvirt.driver [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:26:06 compute-0 nova_compute[254092]: 2025-11-25 16:26:06.124 254096 DEBUG nova.virt.hardware [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:26:06 compute-0 nova_compute[254092]: 2025-11-25 16:26:06.125 254096 DEBUG nova.virt.hardware [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:26:06 compute-0 nova_compute[254092]: 2025-11-25 16:26:06.125 254096 DEBUG nova.virt.hardware [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:26:06 compute-0 nova_compute[254092]: 2025-11-25 16:26:06.125 254096 DEBUG nova.virt.hardware [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:26:06 compute-0 nova_compute[254092]: 2025-11-25 16:26:06.125 254096 DEBUG nova.virt.hardware [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:26:06 compute-0 nova_compute[254092]: 2025-11-25 16:26:06.126 254096 DEBUG nova.virt.hardware [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:26:06 compute-0 nova_compute[254092]: 2025-11-25 16:26:06.126 254096 DEBUG nova.virt.hardware [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:26:06 compute-0 nova_compute[254092]: 2025-11-25 16:26:06.126 254096 DEBUG nova.virt.hardware [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:26:06 compute-0 nova_compute[254092]: 2025-11-25 16:26:06.126 254096 DEBUG nova.virt.hardware [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:26:06 compute-0 nova_compute[254092]: 2025-11-25 16:26:06.126 254096 DEBUG nova.virt.hardware [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:26:06 compute-0 nova_compute[254092]: 2025-11-25 16:26:06.127 254096 DEBUG nova.virt.hardware [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:26:06 compute-0 nova_compute[254092]: 2025-11-25 16:26:06.129 254096 DEBUG oslo_concurrency.processutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:26:06 compute-0 nova_compute[254092]: 2025-11-25 16:26:06.507 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:26:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:26:06 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2000459050' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:26:06 compute-0 nova_compute[254092]: 2025-11-25 16:26:06.554 254096 DEBUG oslo_concurrency.processutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:26:06 compute-0 nova_compute[254092]: 2025-11-25 16:26:06.572 254096 DEBUG nova.storage.rbd_utils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] rbd image e9e96b2e-62f4-4f02-96ec-5306f9e39ca8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:26:06 compute-0 nova_compute[254092]: 2025-11-25 16:26:06.576 254096 DEBUG oslo_concurrency.processutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:26:06 compute-0 nova_compute[254092]: 2025-11-25 16:26:06.812 254096 INFO nova.virt.libvirt.driver [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Creating config drive at /var/lib/nova/instances/e0ce39e6-663b-4ff2-84e5-98aa54955701/disk.config
Nov 25 16:26:06 compute-0 nova_compute[254092]: 2025-11-25 16:26:06.817 254096 DEBUG oslo_concurrency.processutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e0ce39e6-663b-4ff2-84e5-98aa54955701/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprhbazlr9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:26:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1127: 321 pgs: 321 active+clean; 215 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 498 KiB/s rd, 5.7 MiB/s wr, 187 op/s
Nov 25 16:26:06 compute-0 nova_compute[254092]: 2025-11-25 16:26:06.941 254096 DEBUG oslo_concurrency.processutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e0ce39e6-663b-4ff2-84e5-98aa54955701/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprhbazlr9" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:26:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:26:06 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1280467783' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:26:06 compute-0 nova_compute[254092]: 2025-11-25 16:26:06.972 254096 DEBUG nova.storage.rbd_utils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] rbd image e0ce39e6-663b-4ff2-84e5-98aa54955701_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:26:06 compute-0 nova_compute[254092]: 2025-11-25 16:26:06.976 254096 DEBUG oslo_concurrency.processutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e0ce39e6-663b-4ff2-84e5-98aa54955701/disk.config e0ce39e6-663b-4ff2-84e5-98aa54955701_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:26:06 compute-0 nova_compute[254092]: 2025-11-25 16:26:06.996 254096 DEBUG oslo_concurrency.processutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:26:06 compute-0 nova_compute[254092]: 2025-11-25 16:26:06.998 254096 DEBUG nova.objects.instance [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Lazy-loading 'pci_devices' on Instance uuid e9e96b2e-62f4-4f02-96ec-5306f9e39ca8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:26:07 compute-0 nova_compute[254092]: 2025-11-25 16:26:07.014 254096 DEBUG nova.virt.libvirt.driver [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:26:07 compute-0 nova_compute[254092]:   <uuid>e9e96b2e-62f4-4f02-96ec-5306f9e39ca8</uuid>
Nov 25 16:26:07 compute-0 nova_compute[254092]:   <name>instance-00000008</name>
Nov 25 16:26:07 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:26:07 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:26:07 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:26:07 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:       <nova:name>tempest-ServerDiagnosticsNegativeTest-server-1796428729</nova:name>
Nov 25 16:26:07 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:26:06</nova:creationTime>
Nov 25 16:26:07 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:26:07 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:26:07 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:26:07 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:26:07 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:26:07 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:26:07 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:26:07 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:26:07 compute-0 nova_compute[254092]:         <nova:user uuid="65b0c04bdd69400aa1e7ba74ddab759e">tempest-ServerDiagnosticsNegativeTest-1808094176-project-member</nova:user>
Nov 25 16:26:07 compute-0 nova_compute[254092]:         <nova:project uuid="e8bfbfb2d0e640eea61c1a51c8c5f71e">tempest-ServerDiagnosticsNegativeTest-1808094176</nova:project>
Nov 25 16:26:07 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:26:07 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:       <nova:ports/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:26:07 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:26:07 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:26:07 compute-0 nova_compute[254092]:     <system>
Nov 25 16:26:07 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:26:07 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:26:07 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:26:07 compute-0 nova_compute[254092]:       <entry name="serial">e9e96b2e-62f4-4f02-96ec-5306f9e39ca8</entry>
Nov 25 16:26:07 compute-0 nova_compute[254092]:       <entry name="uuid">e9e96b2e-62f4-4f02-96ec-5306f9e39ca8</entry>
Nov 25 16:26:07 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     </system>
Nov 25 16:26:07 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:26:07 compute-0 nova_compute[254092]:   <os>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:   </os>
Nov 25 16:26:07 compute-0 nova_compute[254092]:   <features>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:   </features>
Nov 25 16:26:07 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:26:07 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:26:07 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:26:07 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:26:07 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:26:07 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/e9e96b2e-62f4-4f02-96ec-5306f9e39ca8_disk">
Nov 25 16:26:07 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:       </source>
Nov 25 16:26:07 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:26:07 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:26:07 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:26:07 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/e9e96b2e-62f4-4f02-96ec-5306f9e39ca8_disk.config">
Nov 25 16:26:07 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:       </source>
Nov 25 16:26:07 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:26:07 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:26:07 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:26:07 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/e9e96b2e-62f4-4f02-96ec-5306f9e39ca8/console.log" append="off"/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     <video>
Nov 25 16:26:07 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     </video>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:26:07 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:26:07 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:26:07 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:26:07 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:26:07 compute-0 nova_compute[254092]: </domain>
Nov 25 16:26:07 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:26:07 compute-0 nova_compute[254092]: 2025-11-25 16:26:07.040 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:26:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:26:07.040 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:26:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:26:07.042 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 16:26:07 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2000459050' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:26:07 compute-0 ceph-mon[74985]: pgmap v1127: 321 pgs: 321 active+clean; 215 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 498 KiB/s rd, 5.7 MiB/s wr, 187 op/s
Nov 25 16:26:07 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1280467783' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:26:07 compute-0 nova_compute[254092]: 2025-11-25 16:26:07.061 254096 DEBUG nova.virt.libvirt.driver [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:26:07 compute-0 nova_compute[254092]: 2025-11-25 16:26:07.061 254096 DEBUG nova.virt.libvirt.driver [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:26:07 compute-0 nova_compute[254092]: 2025-11-25 16:26:07.062 254096 INFO nova.virt.libvirt.driver [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Using config drive
Nov 25 16:26:07 compute-0 nova_compute[254092]: 2025-11-25 16:26:07.081 254096 DEBUG nova.storage.rbd_utils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] rbd image e9e96b2e-62f4-4f02-96ec-5306f9e39ca8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:26:07 compute-0 nova_compute[254092]: 2025-11-25 16:26:07.131 254096 DEBUG oslo_concurrency.processutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e0ce39e6-663b-4ff2-84e5-98aa54955701/disk.config e0ce39e6-663b-4ff2-84e5-98aa54955701_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:26:07 compute-0 nova_compute[254092]: 2025-11-25 16:26:07.132 254096 INFO nova.virt.libvirt.driver [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Deleting local config drive /var/lib/nova/instances/e0ce39e6-663b-4ff2-84e5-98aa54955701/disk.config because it was imported into RBD.
Nov 25 16:26:07 compute-0 nova_compute[254092]: 2025-11-25 16:26:07.133 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:26:07 compute-0 systemd-machined[216343]: New machine qemu-7-instance-00000007.
Nov 25 16:26:07 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Nov 25 16:26:07 compute-0 nova_compute[254092]: 2025-11-25 16:26:07.295 254096 INFO nova.virt.libvirt.driver [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Creating config drive at /var/lib/nova/instances/e9e96b2e-62f4-4f02-96ec-5306f9e39ca8/disk.config
Nov 25 16:26:07 compute-0 nova_compute[254092]: 2025-11-25 16:26:07.303 254096 DEBUG oslo_concurrency.processutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e9e96b2e-62f4-4f02-96ec-5306f9e39ca8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6ygwn9w9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:26:07 compute-0 nova_compute[254092]: 2025-11-25 16:26:07.431 254096 DEBUG oslo_concurrency.processutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e9e96b2e-62f4-4f02-96ec-5306f9e39ca8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6ygwn9w9" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:26:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:26:07 compute-0 nova_compute[254092]: 2025-11-25 16:26:07.461 254096 DEBUG nova.storage.rbd_utils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] rbd image e9e96b2e-62f4-4f02-96ec-5306f9e39ca8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:26:07 compute-0 nova_compute[254092]: 2025-11-25 16:26:07.466 254096 DEBUG oslo_concurrency.processutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e9e96b2e-62f4-4f02-96ec-5306f9e39ca8/disk.config e9e96b2e-62f4-4f02-96ec-5306f9e39ca8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:26:07 compute-0 nova_compute[254092]: 2025-11-25 16:26:07.554 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764087967.5539758, e0ce39e6-663b-4ff2-84e5-98aa54955701 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:26:07 compute-0 nova_compute[254092]: 2025-11-25 16:26:07.555 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] VM Resumed (Lifecycle Event)
Nov 25 16:26:07 compute-0 nova_compute[254092]: 2025-11-25 16:26:07.558 254096 DEBUG nova.compute.manager [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:26:07 compute-0 nova_compute[254092]: 2025-11-25 16:26:07.559 254096 DEBUG nova.virt.libvirt.driver [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:26:07 compute-0 nova_compute[254092]: 2025-11-25 16:26:07.562 254096 INFO nova.virt.libvirt.driver [-] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Instance spawned successfully.
Nov 25 16:26:07 compute-0 nova_compute[254092]: 2025-11-25 16:26:07.563 254096 DEBUG nova.virt.libvirt.driver [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:26:07 compute-0 nova_compute[254092]: 2025-11-25 16:26:07.575 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:26:07 compute-0 nova_compute[254092]: 2025-11-25 16:26:07.584 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:26:07 compute-0 nova_compute[254092]: 2025-11-25 16:26:07.590 254096 DEBUG nova.virt.libvirt.driver [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:26:07 compute-0 nova_compute[254092]: 2025-11-25 16:26:07.590 254096 DEBUG nova.virt.libvirt.driver [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:26:07 compute-0 nova_compute[254092]: 2025-11-25 16:26:07.591 254096 DEBUG nova.virt.libvirt.driver [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:26:07 compute-0 nova_compute[254092]: 2025-11-25 16:26:07.591 254096 DEBUG nova.virt.libvirt.driver [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:26:07 compute-0 nova_compute[254092]: 2025-11-25 16:26:07.592 254096 DEBUG nova.virt.libvirt.driver [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:26:07 compute-0 nova_compute[254092]: 2025-11-25 16:26:07.593 254096 DEBUG nova.virt.libvirt.driver [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:26:07 compute-0 nova_compute[254092]: 2025-11-25 16:26:07.636 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:26:07 compute-0 nova_compute[254092]: 2025-11-25 16:26:07.636 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764087967.558507, e0ce39e6-663b-4ff2-84e5-98aa54955701 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:26:07 compute-0 nova_compute[254092]: 2025-11-25 16:26:07.637 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] VM Started (Lifecycle Event)
Nov 25 16:26:07 compute-0 nova_compute[254092]: 2025-11-25 16:26:07.640 254096 DEBUG oslo_concurrency.processutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e9e96b2e-62f4-4f02-96ec-5306f9e39ca8/disk.config e9e96b2e-62f4-4f02-96ec-5306f9e39ca8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:26:07 compute-0 nova_compute[254092]: 2025-11-25 16:26:07.641 254096 INFO nova.virt.libvirt.driver [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Deleting local config drive /var/lib/nova/instances/e9e96b2e-62f4-4f02-96ec-5306f9e39ca8/disk.config because it was imported into RBD.
Nov 25 16:26:07 compute-0 nova_compute[254092]: 2025-11-25 16:26:07.681 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:26:07 compute-0 nova_compute[254092]: 2025-11-25 16:26:07.691 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:26:07 compute-0 nova_compute[254092]: 2025-11-25 16:26:07.697 254096 INFO nova.compute.manager [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Took 4.52 seconds to spawn the instance on the hypervisor.
Nov 25 16:26:07 compute-0 nova_compute[254092]: 2025-11-25 16:26:07.698 254096 DEBUG nova.compute.manager [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:26:07 compute-0 systemd-machined[216343]: New machine qemu-8-instance-00000008.
Nov 25 16:26:07 compute-0 nova_compute[254092]: 2025-11-25 16:26:07.732 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:26:07 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Nov 25 16:26:07 compute-0 nova_compute[254092]: 2025-11-25 16:26:07.782 254096 INFO nova.compute.manager [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Took 5.78 seconds to build instance.
Nov 25 16:26:07 compute-0 nova_compute[254092]: 2025-11-25 16:26:07.826 254096 DEBUG oslo_concurrency.lockutils [None req-844763bd-de63-49a9-894d-c9fa16712a66 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "e0ce39e6-663b-4ff2-84e5-98aa54955701" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.882s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:26:08 compute-0 nova_compute[254092]: 2025-11-25 16:26:08.169 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764087968.1687279, e9e96b2e-62f4-4f02-96ec-5306f9e39ca8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:26:08 compute-0 nova_compute[254092]: 2025-11-25 16:26:08.169 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] VM Resumed (Lifecycle Event)
Nov 25 16:26:08 compute-0 nova_compute[254092]: 2025-11-25 16:26:08.172 254096 DEBUG nova.compute.manager [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:26:08 compute-0 nova_compute[254092]: 2025-11-25 16:26:08.173 254096 DEBUG nova.virt.libvirt.driver [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:26:08 compute-0 nova_compute[254092]: 2025-11-25 16:26:08.176 254096 INFO nova.virt.libvirt.driver [-] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Instance spawned successfully.
Nov 25 16:26:08 compute-0 nova_compute[254092]: 2025-11-25 16:26:08.176 254096 DEBUG nova.virt.libvirt.driver [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:26:08 compute-0 nova_compute[254092]: 2025-11-25 16:26:08.187 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:26:08 compute-0 nova_compute[254092]: 2025-11-25 16:26:08.193 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:26:08 compute-0 nova_compute[254092]: 2025-11-25 16:26:08.196 254096 DEBUG nova.virt.libvirt.driver [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:26:08 compute-0 nova_compute[254092]: 2025-11-25 16:26:08.197 254096 DEBUG nova.virt.libvirt.driver [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:26:08 compute-0 nova_compute[254092]: 2025-11-25 16:26:08.197 254096 DEBUG nova.virt.libvirt.driver [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:26:08 compute-0 nova_compute[254092]: 2025-11-25 16:26:08.197 254096 DEBUG nova.virt.libvirt.driver [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:26:08 compute-0 nova_compute[254092]: 2025-11-25 16:26:08.198 254096 DEBUG nova.virt.libvirt.driver [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:26:08 compute-0 nova_compute[254092]: 2025-11-25 16:26:08.198 254096 DEBUG nova.virt.libvirt.driver [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:26:08 compute-0 nova_compute[254092]: 2025-11-25 16:26:08.223 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:26:08 compute-0 nova_compute[254092]: 2025-11-25 16:26:08.224 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764087968.1698375, e9e96b2e-62f4-4f02-96ec-5306f9e39ca8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:26:08 compute-0 nova_compute[254092]: 2025-11-25 16:26:08.224 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] VM Started (Lifecycle Event)
Nov 25 16:26:08 compute-0 nova_compute[254092]: 2025-11-25 16:26:08.247 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:26:08 compute-0 nova_compute[254092]: 2025-11-25 16:26:08.250 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:26:08 compute-0 nova_compute[254092]: 2025-11-25 16:26:08.280 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:26:08 compute-0 nova_compute[254092]: 2025-11-25 16:26:08.281 254096 INFO nova.compute.manager [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Took 4.43 seconds to spawn the instance on the hypervisor.
Nov 25 16:26:08 compute-0 nova_compute[254092]: 2025-11-25 16:26:08.282 254096 DEBUG nova.compute.manager [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:26:08 compute-0 nova_compute[254092]: 2025-11-25 16:26:08.344 254096 INFO nova.compute.manager [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Took 6.02 seconds to build instance.
Nov 25 16:26:08 compute-0 nova_compute[254092]: 2025-11-25 16:26:08.359 254096 DEBUG oslo_concurrency.lockutils [None req-ac249225-6dd0-4b78-adf4-82016b3db4eb 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Lock "e9e96b2e-62f4-4f02-96ec-5306f9e39ca8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.096s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:26:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1128: 321 pgs: 321 active+clean; 215 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 497 KiB/s rd, 5.7 MiB/s wr, 187 op/s
Nov 25 16:26:08 compute-0 ceph-mon[74985]: pgmap v1128: 321 pgs: 321 active+clean; 215 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 497 KiB/s rd, 5.7 MiB/s wr, 187 op/s
Nov 25 16:26:09 compute-0 nova_compute[254092]: 2025-11-25 16:26:09.338 254096 DEBUG oslo_concurrency.lockutils [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Acquiring lock "e9e96b2e-62f4-4f02-96ec-5306f9e39ca8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:26:09 compute-0 nova_compute[254092]: 2025-11-25 16:26:09.338 254096 DEBUG oslo_concurrency.lockutils [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Lock "e9e96b2e-62f4-4f02-96ec-5306f9e39ca8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:26:09 compute-0 nova_compute[254092]: 2025-11-25 16:26:09.339 254096 DEBUG oslo_concurrency.lockutils [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Acquiring lock "e9e96b2e-62f4-4f02-96ec-5306f9e39ca8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:26:09 compute-0 nova_compute[254092]: 2025-11-25 16:26:09.339 254096 DEBUG oslo_concurrency.lockutils [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Lock "e9e96b2e-62f4-4f02-96ec-5306f9e39ca8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:26:09 compute-0 nova_compute[254092]: 2025-11-25 16:26:09.339 254096 DEBUG oslo_concurrency.lockutils [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Lock "e9e96b2e-62f4-4f02-96ec-5306f9e39ca8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:26:09 compute-0 nova_compute[254092]: 2025-11-25 16:26:09.340 254096 INFO nova.compute.manager [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Terminating instance
Nov 25 16:26:09 compute-0 nova_compute[254092]: 2025-11-25 16:26:09.341 254096 DEBUG oslo_concurrency.lockutils [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Acquiring lock "refresh_cache-e9e96b2e-62f4-4f02-96ec-5306f9e39ca8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:26:09 compute-0 nova_compute[254092]: 2025-11-25 16:26:09.341 254096 DEBUG oslo_concurrency.lockutils [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Acquired lock "refresh_cache-e9e96b2e-62f4-4f02-96ec-5306f9e39ca8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:26:09 compute-0 nova_compute[254092]: 2025-11-25 16:26:09.342 254096 DEBUG nova.network.neutron [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:26:09 compute-0 nova_compute[254092]: 2025-11-25 16:26:09.874 254096 DEBUG nova.network.neutron [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:26:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:26:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:26:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:26:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:26:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:26:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:26:10 compute-0 nova_compute[254092]: 2025-11-25 16:26:10.217 254096 DEBUG oslo_concurrency.lockutils [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Acquiring lock "4b2c6795-15b5-424c-b7c5-4b695a348f41" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:26:10 compute-0 nova_compute[254092]: 2025-11-25 16:26:10.218 254096 DEBUG oslo_concurrency.lockutils [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "4b2c6795-15b5-424c-b7c5-4b695a348f41" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:26:10 compute-0 nova_compute[254092]: 2025-11-25 16:26:10.218 254096 DEBUG oslo_concurrency.lockutils [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Acquiring lock "4b2c6795-15b5-424c-b7c5-4b695a348f41-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:26:10 compute-0 nova_compute[254092]: 2025-11-25 16:26:10.218 254096 DEBUG oslo_concurrency.lockutils [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "4b2c6795-15b5-424c-b7c5-4b695a348f41-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:26:10 compute-0 nova_compute[254092]: 2025-11-25 16:26:10.218 254096 DEBUG oslo_concurrency.lockutils [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "4b2c6795-15b5-424c-b7c5-4b695a348f41-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:26:10 compute-0 nova_compute[254092]: 2025-11-25 16:26:10.219 254096 INFO nova.compute.manager [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Terminating instance
Nov 25 16:26:10 compute-0 nova_compute[254092]: 2025-11-25 16:26:10.221 254096 DEBUG nova.compute.manager [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:26:10 compute-0 nova_compute[254092]: 2025-11-25 16:26:10.276 254096 DEBUG nova.network.neutron [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:26:10 compute-0 nova_compute[254092]: 2025-11-25 16:26:10.288 254096 DEBUG oslo_concurrency.lockutils [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Releasing lock "refresh_cache-e9e96b2e-62f4-4f02-96ec-5306f9e39ca8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:26:10 compute-0 nova_compute[254092]: 2025-11-25 16:26:10.289 254096 DEBUG nova.compute.manager [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:26:10 compute-0 kernel: tap05cb1f2f-ce (unregistering): left promiscuous mode
Nov 25 16:26:10 compute-0 NetworkManager[48891]: <info>  [1764087970.3112] device (tap05cb1f2f-ce): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:26:10 compute-0 nova_compute[254092]: 2025-11-25 16:26:10.322 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:26:10 compute-0 ovn_controller[153477]: 2025-11-25T16:26:10Z|00040|binding|INFO|Releasing lport 05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 from this chassis (sb_readonly=0)
Nov 25 16:26:10 compute-0 ovn_controller[153477]: 2025-11-25T16:26:10Z|00041|binding|INFO|Setting lport 05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 down in Southbound
Nov 25 16:26:10 compute-0 ovn_controller[153477]: 2025-11-25T16:26:10Z|00042|binding|INFO|Removing iface tap05cb1f2f-ce ovn-installed in OVS
Nov 25 16:26:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:26:10.330 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:ac:25 10.100.0.6'], port_security=['fa:16:3e:fb:ac:25 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '4b2c6795-15b5-424c-b7c5-4b695a348f41', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dd4097f8-dcdf-451c-8fbb-2057e86e375d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '95df0d15c889499aba411e805ea145a5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b9e0e128-a918-470b-9624-970f7d13e397', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.244'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89f0418d-ac4b-47f9-993a-6ad9c6384b37, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:26:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:26:10.331 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 in datapath dd4097f8-dcdf-451c-8fbb-2057e86e375d unbound from our chassis
Nov 25 16:26:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:26:10.332 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dd4097f8-dcdf-451c-8fbb-2057e86e375d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:26:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:26:10.332 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[335251c7-440d-474d-b030-7278a0479d37]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:26:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:26:10.333 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d namespace which is not needed anymore
Nov 25 16:26:10 compute-0 nova_compute[254092]: 2025-11-25 16:26:10.347 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:26:10 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Nov 25 16:26:10 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 13.172s CPU time.
Nov 25 16:26:10 compute-0 systemd-machined[216343]: Machine qemu-6-instance-00000006 terminated.
Nov 25 16:26:10 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Nov 25 16:26:10 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 2.564s CPU time.
Nov 25 16:26:10 compute-0 systemd-machined[216343]: Machine qemu-8-instance-00000008 terminated.
Nov 25 16:26:10 compute-0 NetworkManager[48891]: <info>  [1764087970.4427] manager: (tap05cb1f2f-ce): new Tun device (/org/freedesktop/NetworkManager/Devices/33)
Nov 25 16:26:10 compute-0 nova_compute[254092]: 2025-11-25 16:26:10.459 254096 INFO nova.virt.libvirt.driver [-] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Instance destroyed successfully.
Nov 25 16:26:10 compute-0 nova_compute[254092]: 2025-11-25 16:26:10.461 254096 DEBUG nova.objects.instance [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lazy-loading 'resources' on Instance uuid 4b2c6795-15b5-424c-b7c5-4b695a348f41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:26:10 compute-0 nova_compute[254092]: 2025-11-25 16:26:10.476 254096 DEBUG nova.virt.libvirt.vif [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:25:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-423978472',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-423978472',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(8),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-423978472',id=6,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=8,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLSmbQTCEowz9Br3mA84W52215mjLyCGHTHsUJ1HsTz8qleFdGWCmNHBOWPSc/8w+47nHDrMi2gOITQ7Dn+rkiEiUtK0/z3HXn3ebs4Ev7qOse6EBs27GTPhFXHM2eKGPA==',key_name='tempest-keypair-494555026',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:25:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='95df0d15c889499aba411e805ea145a5',ramdisk_id='',reservation_id='r-236be33o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-341738122',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-341738122-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:25:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='787cb8b4238c4926a4466f3421db09ef',uuid=4b2c6795-15b5-424c-b7c5-4b695a348f41,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8", "address": "fa:16:3e:fb:ac:25", "network": {"id": "dd4097f8-dcdf-451c-8fbb-2057e86e375d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-256857767-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95df0d15c889499aba411e805ea145a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cb1f2f-ce", "ovs_interfaceid": "05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:26:10 compute-0 nova_compute[254092]: 2025-11-25 16:26:10.477 254096 DEBUG nova.network.os_vif_util [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Converting VIF {"id": "05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8", "address": "fa:16:3e:fb:ac:25", "network": {"id": "dd4097f8-dcdf-451c-8fbb-2057e86e375d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-256857767-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95df0d15c889499aba411e805ea145a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05cb1f2f-ce", "ovs_interfaceid": "05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:26:10 compute-0 nova_compute[254092]: 2025-11-25 16:26:10.477 254096 DEBUG nova.network.os_vif_util [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fb:ac:25,bridge_name='br-int',has_traffic_filtering=True,id=05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8,network=Network(dd4097f8-dcdf-451c-8fbb-2057e86e375d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05cb1f2f-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:26:10 compute-0 nova_compute[254092]: 2025-11-25 16:26:10.478 254096 DEBUG os_vif [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fb:ac:25,bridge_name='br-int',has_traffic_filtering=True,id=05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8,network=Network(dd4097f8-dcdf-451c-8fbb-2057e86e375d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05cb1f2f-ce') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:26:10 compute-0 nova_compute[254092]: 2025-11-25 16:26:10.480 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:26:10 compute-0 nova_compute[254092]: 2025-11-25 16:26:10.481 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05cb1f2f-ce, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:26:10 compute-0 nova_compute[254092]: 2025-11-25 16:26:10.484 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:26:10 compute-0 nova_compute[254092]: 2025-11-25 16:26:10.486 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:26:10 compute-0 nova_compute[254092]: 2025-11-25 16:26:10.488 254096 INFO os_vif [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fb:ac:25,bridge_name='br-int',has_traffic_filtering=True,id=05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8,network=Network(dd4097f8-dcdf-451c-8fbb-2057e86e375d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05cb1f2f-ce')
Nov 25 16:26:10 compute-0 neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d[272268]: [NOTICE]   (272273) : haproxy version is 2.8.14-c23fe91
Nov 25 16:26:10 compute-0 neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d[272268]: [NOTICE]   (272273) : path to executable is /usr/sbin/haproxy
Nov 25 16:26:10 compute-0 neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d[272268]: [WARNING]  (272273) : Exiting Master process...
Nov 25 16:26:10 compute-0 neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d[272268]: [ALERT]    (272273) : Current worker (272275) exited with code 143 (Terminated)
Nov 25 16:26:10 compute-0 neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d[272268]: [WARNING]  (272273) : All workers exited. Exiting... (0)
Nov 25 16:26:10 compute-0 systemd[1]: libpod-7e95a915560e1b90366f3da9741754f56abb6cc3fca7df71495dc810c5080f7d.scope: Deactivated successfully.
Nov 25 16:26:10 compute-0 podman[273063]: 2025-11-25 16:26:10.511187592 +0000 UTC m=+0.070277443 container died 7e95a915560e1b90366f3da9741754f56abb6cc3fca7df71495dc810c5080f7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 16:26:10 compute-0 nova_compute[254092]: 2025-11-25 16:26:10.524 254096 INFO nova.virt.libvirt.driver [-] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Instance destroyed successfully.
Nov 25 16:26:10 compute-0 nova_compute[254092]: 2025-11-25 16:26:10.525 254096 DEBUG nova.objects.instance [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Lazy-loading 'resources' on Instance uuid e9e96b2e-62f4-4f02-96ec-5306f9e39ca8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:26:10 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7e95a915560e1b90366f3da9741754f56abb6cc3fca7df71495dc810c5080f7d-userdata-shm.mount: Deactivated successfully.
Nov 25 16:26:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-565d56c1463fe42daf52e3cc2c80f46ee13778cefe2ff5f841cf7e2212503eba-merged.mount: Deactivated successfully.
Nov 25 16:26:10 compute-0 podman[273063]: 2025-11-25 16:26:10.556620818 +0000 UTC m=+0.115710669 container cleanup 7e95a915560e1b90366f3da9741754f56abb6cc3fca7df71495dc810c5080f7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:26:10 compute-0 systemd[1]: libpod-conmon-7e95a915560e1b90366f3da9741754f56abb6cc3fca7df71495dc810c5080f7d.scope: Deactivated successfully.
Nov 25 16:26:10 compute-0 podman[273136]: 2025-11-25 16:26:10.652811077 +0000 UTC m=+0.066858921 container remove 7e95a915560e1b90366f3da9741754f56abb6cc3fca7df71495dc810c5080f7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:26:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:26:10.662 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8649169c-4b61-4ad2-b9b7-14a4368ed177]: (4, ('Tue Nov 25 04:26:10 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d (7e95a915560e1b90366f3da9741754f56abb6cc3fca7df71495dc810c5080f7d)\n7e95a915560e1b90366f3da9741754f56abb6cc3fca7df71495dc810c5080f7d\nTue Nov 25 04:26:10 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d (7e95a915560e1b90366f3da9741754f56abb6cc3fca7df71495dc810c5080f7d)\n7e95a915560e1b90366f3da9741754f56abb6cc3fca7df71495dc810c5080f7d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:26:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:26:10.665 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c92dff68-9acc-40f8-91e5-8a505bb8877c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:26:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:26:10.666 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdd4097f8-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:26:10 compute-0 nova_compute[254092]: 2025-11-25 16:26:10.668 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:26:10 compute-0 kernel: tapdd4097f8-d0: left promiscuous mode
Nov 25 16:26:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:26:10.678 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d7c1fd12-264f-4f6c-a5f9-5373c6725963]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:26:10 compute-0 nova_compute[254092]: 2025-11-25 16:26:10.691 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:26:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:26:10.692 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6f58de09-0dee-4237-9063-81558d16e37f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:26:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:26:10.693 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0cfe09fb-4926-4119-9d33-9377cec45126]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:26:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:26:10.711 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2043c817-0c18-47cd-91b3-4f305d5212ed]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 440513, 'reachable_time': 26511, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273151, 'error': None, 'target': 'ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:26:10 compute-0 systemd[1]: run-netns-ovnmeta\x2ddd4097f8\x2ddcdf\x2d451c\x2d8fbb\x2d2057e86e375d.mount: Deactivated successfully.
Nov 25 16:26:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:26:10.719 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-dd4097f8-dcdf-451c-8fbb-2057e86e375d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:26:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:26:10.719 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[4a15ec11-9fca-4e2a-81b2-f5b4711721fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:26:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1129: 321 pgs: 321 active+clean; 216 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 5.7 MiB/s wr, 302 op/s
Nov 25 16:26:11 compute-0 nova_compute[254092]: 2025-11-25 16:26:11.058 254096 INFO nova.virt.libvirt.driver [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Deleting instance files /var/lib/nova/instances/e9e96b2e-62f4-4f02-96ec-5306f9e39ca8_del
Nov 25 16:26:11 compute-0 nova_compute[254092]: 2025-11-25 16:26:11.059 254096 INFO nova.virt.libvirt.driver [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Deletion of /var/lib/nova/instances/e9e96b2e-62f4-4f02-96ec-5306f9e39ca8_del complete
Nov 25 16:26:11 compute-0 nova_compute[254092]: 2025-11-25 16:26:11.133 254096 INFO nova.compute.manager [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Took 0.84 seconds to destroy the instance on the hypervisor.
Nov 25 16:26:11 compute-0 nova_compute[254092]: 2025-11-25 16:26:11.134 254096 DEBUG oslo.service.loopingcall [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:26:11 compute-0 nova_compute[254092]: 2025-11-25 16:26:11.134 254096 DEBUG nova.compute.manager [-] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:26:11 compute-0 nova_compute[254092]: 2025-11-25 16:26:11.135 254096 DEBUG nova.network.neutron [-] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:26:11 compute-0 nova_compute[254092]: 2025-11-25 16:26:11.171 254096 INFO nova.virt.libvirt.driver [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Deleting instance files /var/lib/nova/instances/4b2c6795-15b5-424c-b7c5-4b695a348f41_del
Nov 25 16:26:11 compute-0 nova_compute[254092]: 2025-11-25 16:26:11.172 254096 INFO nova.virt.libvirt.driver [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Deletion of /var/lib/nova/instances/4b2c6795-15b5-424c-b7c5-4b695a348f41_del complete
Nov 25 16:26:11 compute-0 nova_compute[254092]: 2025-11-25 16:26:11.213 254096 DEBUG nova.compute.manager [req-b8a5922b-e339-494b-b112-dbd369f0c831 req-ad5dff4b-36e6-4c51-adbc-26be0902b2c3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Received event network-vif-unplugged-05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:26:11 compute-0 nova_compute[254092]: 2025-11-25 16:26:11.213 254096 DEBUG oslo_concurrency.lockutils [req-b8a5922b-e339-494b-b112-dbd369f0c831 req-ad5dff4b-36e6-4c51-adbc-26be0902b2c3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "4b2c6795-15b5-424c-b7c5-4b695a348f41-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:26:11 compute-0 nova_compute[254092]: 2025-11-25 16:26:11.214 254096 DEBUG oslo_concurrency.lockutils [req-b8a5922b-e339-494b-b112-dbd369f0c831 req-ad5dff4b-36e6-4c51-adbc-26be0902b2c3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4b2c6795-15b5-424c-b7c5-4b695a348f41-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:26:11 compute-0 nova_compute[254092]: 2025-11-25 16:26:11.214 254096 DEBUG oslo_concurrency.lockutils [req-b8a5922b-e339-494b-b112-dbd369f0c831 req-ad5dff4b-36e6-4c51-adbc-26be0902b2c3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4b2c6795-15b5-424c-b7c5-4b695a348f41-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:26:11 compute-0 nova_compute[254092]: 2025-11-25 16:26:11.214 254096 DEBUG nova.compute.manager [req-b8a5922b-e339-494b-b112-dbd369f0c831 req-ad5dff4b-36e6-4c51-adbc-26be0902b2c3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] No waiting events found dispatching network-vif-unplugged-05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:26:11 compute-0 nova_compute[254092]: 2025-11-25 16:26:11.215 254096 DEBUG nova.compute.manager [req-b8a5922b-e339-494b-b112-dbd369f0c831 req-ad5dff4b-36e6-4c51-adbc-26be0902b2c3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Received event network-vif-unplugged-05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:26:11 compute-0 nova_compute[254092]: 2025-11-25 16:26:11.229 254096 INFO nova.compute.manager [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Took 1.01 seconds to destroy the instance on the hypervisor.
Nov 25 16:26:11 compute-0 nova_compute[254092]: 2025-11-25 16:26:11.230 254096 DEBUG oslo.service.loopingcall [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:26:11 compute-0 nova_compute[254092]: 2025-11-25 16:26:11.230 254096 DEBUG nova.compute.manager [-] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:26:11 compute-0 nova_compute[254092]: 2025-11-25 16:26:11.231 254096 DEBUG nova.network.neutron [-] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:26:11 compute-0 nova_compute[254092]: 2025-11-25 16:26:11.308 254096 DEBUG nova.network.neutron [-] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:26:11 compute-0 nova_compute[254092]: 2025-11-25 16:26:11.324 254096 DEBUG nova.network.neutron [-] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:26:11 compute-0 nova_compute[254092]: 2025-11-25 16:26:11.334 254096 INFO nova.compute.manager [-] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Took 0.20 seconds to deallocate network for instance.
Nov 25 16:26:11 compute-0 nova_compute[254092]: 2025-11-25 16:26:11.378 254096 DEBUG oslo_concurrency.lockutils [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:26:11 compute-0 nova_compute[254092]: 2025-11-25 16:26:11.379 254096 DEBUG oslo_concurrency.lockutils [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:26:11 compute-0 nova_compute[254092]: 2025-11-25 16:26:11.481 254096 DEBUG oslo_concurrency.processutils [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:26:11 compute-0 nova_compute[254092]: 2025-11-25 16:26:11.856 254096 DEBUG oslo_concurrency.lockutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Acquiring lock "25540ca1-3029-48b2-8ab3-c800a16c8175" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:26:11 compute-0 nova_compute[254092]: 2025-11-25 16:26:11.857 254096 DEBUG oslo_concurrency.lockutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "25540ca1-3029-48b2-8ab3-c800a16c8175" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:26:11 compute-0 ceph-mon[74985]: pgmap v1129: 321 pgs: 321 active+clean; 216 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 5.7 MiB/s wr, 302 op/s
Nov 25 16:26:11 compute-0 nova_compute[254092]: 2025-11-25 16:26:11.886 254096 DEBUG nova.compute.manager [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:26:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:26:11 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/279493066' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:26:11 compute-0 nova_compute[254092]: 2025-11-25 16:26:11.933 254096 DEBUG oslo_concurrency.processutils [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:26:11 compute-0 nova_compute[254092]: 2025-11-25 16:26:11.952 254096 DEBUG nova.compute.provider_tree [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:26:11 compute-0 nova_compute[254092]: 2025-11-25 16:26:11.971 254096 DEBUG nova.scheduler.client.report [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:26:11 compute-0 nova_compute[254092]: 2025-11-25 16:26:11.982 254096 DEBUG oslo_concurrency.lockutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:26:12 compute-0 nova_compute[254092]: 2025-11-25 16:26:12.045 254096 DEBUG oslo_concurrency.lockutils [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:26:12 compute-0 nova_compute[254092]: 2025-11-25 16:26:12.049 254096 DEBUG oslo_concurrency.lockutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:26:12 compute-0 nova_compute[254092]: 2025-11-25 16:26:12.059 254096 DEBUG nova.virt.hardware [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:26:12 compute-0 nova_compute[254092]: 2025-11-25 16:26:12.059 254096 INFO nova.compute.claims [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:26:12 compute-0 nova_compute[254092]: 2025-11-25 16:26:12.080 254096 INFO nova.scheduler.client.report [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Deleted allocations for instance e9e96b2e-62f4-4f02-96ec-5306f9e39ca8
Nov 25 16:26:12 compute-0 nova_compute[254092]: 2025-11-25 16:26:12.135 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:26:12 compute-0 nova_compute[254092]: 2025-11-25 16:26:12.170 254096 DEBUG oslo_concurrency.lockutils [None req-e53709cc-c5d0-4a4c-9538-ff13d2051525 65b0c04bdd69400aa1e7ba74ddab759e e8bfbfb2d0e640eea61c1a51c8c5f71e - - default default] Lock "e9e96b2e-62f4-4f02-96ec-5306f9e39ca8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:26:12 compute-0 nova_compute[254092]: 2025-11-25 16:26:12.237 254096 DEBUG oslo_concurrency.processutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:26:12 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:26:12 compute-0 nova_compute[254092]: 2025-11-25 16:26:12.638 254096 DEBUG nova.network.neutron [-] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:26:12 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:26:12 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2718175817' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:26:12 compute-0 nova_compute[254092]: 2025-11-25 16:26:12.664 254096 INFO nova.compute.manager [-] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Took 1.43 seconds to deallocate network for instance.
Nov 25 16:26:12 compute-0 nova_compute[254092]: 2025-11-25 16:26:12.678 254096 DEBUG oslo_concurrency.processutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:26:12 compute-0 nova_compute[254092]: 2025-11-25 16:26:12.684 254096 DEBUG nova.compute.provider_tree [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:26:12 compute-0 nova_compute[254092]: 2025-11-25 16:26:12.705 254096 DEBUG nova.scheduler.client.report [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:26:12 compute-0 nova_compute[254092]: 2025-11-25 16:26:12.720 254096 DEBUG oslo_concurrency.lockutils [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:26:12 compute-0 nova_compute[254092]: 2025-11-25 16:26:12.734 254096 DEBUG oslo_concurrency.lockutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:26:12 compute-0 nova_compute[254092]: 2025-11-25 16:26:12.735 254096 DEBUG nova.compute.manager [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:26:12 compute-0 nova_compute[254092]: 2025-11-25 16:26:12.743 254096 DEBUG oslo_concurrency.lockutils [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.023s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:26:12 compute-0 nova_compute[254092]: 2025-11-25 16:26:12.794 254096 DEBUG nova.compute.manager [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:26:12 compute-0 nova_compute[254092]: 2025-11-25 16:26:12.796 254096 DEBUG nova.network.neutron [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:26:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1130: 321 pgs: 321 active+clean; 181 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 4.8 MiB/s wr, 323 op/s
Nov 25 16:26:12 compute-0 nova_compute[254092]: 2025-11-25 16:26:12.826 254096 INFO nova.virt.libvirt.driver [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:26:12 compute-0 nova_compute[254092]: 2025-11-25 16:26:12.853 254096 DEBUG nova.compute.manager [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:26:12 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/279493066' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:26:12 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2718175817' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:26:12 compute-0 nova_compute[254092]: 2025-11-25 16:26:12.894 254096 DEBUG oslo_concurrency.processutils [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:26:12 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Nov 25 16:26:12 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:26:12.896299) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 16:26:12 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Nov 25 16:26:12 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087972896335, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2099, "num_deletes": 251, "total_data_size": 3406324, "memory_usage": 3457280, "flush_reason": "Manual Compaction"}
Nov 25 16:26:12 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Nov 25 16:26:12 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087972934026, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3317966, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21122, "largest_seqno": 23220, "table_properties": {"data_size": 3308560, "index_size": 5900, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19580, "raw_average_key_size": 20, "raw_value_size": 3289453, "raw_average_value_size": 3394, "num_data_blocks": 266, "num_entries": 969, "num_filter_entries": 969, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764087762, "oldest_key_time": 1764087762, "file_creation_time": 1764087972, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:26:12 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 37798 microseconds, and 12014 cpu microseconds.
Nov 25 16:26:12 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:26:12 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:26:12.934090) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3317966 bytes OK
Nov 25 16:26:12 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:26:12.934116) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Nov 25 16:26:12 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:26:12.938087) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Nov 25 16:26:12 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:26:12.938143) EVENT_LOG_v1 {"time_micros": 1764087972938130, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 16:26:12 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:26:12.938169) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 16:26:12 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3397464, prev total WAL file size 3397464, number of live WAL files 2.
Nov 25 16:26:12 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:26:12 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:26:12.939302) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Nov 25 16:26:12 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 16:26:12 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3240KB)], [50(7616KB)]
Nov 25 16:26:12 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087972939345, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 11117191, "oldest_snapshot_seqno": -1}
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.011 254096 DEBUG nova.compute.manager [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.016 254096 DEBUG nova.virt.libvirt.driver [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.017 254096 INFO nova.virt.libvirt.driver [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Creating image(s)
Nov 25 16:26:13 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4889 keys, 9374666 bytes, temperature: kUnknown
Nov 25 16:26:13 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087973020108, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 9374666, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9339338, "index_size": 22034, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12229, "raw_key_size": 120310, "raw_average_key_size": 24, "raw_value_size": 9248326, "raw_average_value_size": 1891, "num_data_blocks": 926, "num_entries": 4889, "num_filter_entries": 4889, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764087972, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:26:13 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:26:13 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:26:13.020368) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9374666 bytes
Nov 25 16:26:13 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:26:13.021596) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 137.5 rd, 115.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.4 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(6.2) write-amplify(2.8) OK, records in: 5407, records dropped: 518 output_compression: NoCompression
Nov 25 16:26:13 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:26:13.021615) EVENT_LOG_v1 {"time_micros": 1764087973021606, "job": 26, "event": "compaction_finished", "compaction_time_micros": 80865, "compaction_time_cpu_micros": 22878, "output_level": 6, "num_output_files": 1, "total_output_size": 9374666, "num_input_records": 5407, "num_output_records": 4889, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 16:26:13 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:26:13 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087973022443, "job": 26, "event": "table_file_deletion", "file_number": 52}
Nov 25 16:26:13 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:26:13 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764087973023951, "job": 26, "event": "table_file_deletion", "file_number": 50}
Nov 25 16:26:13 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:26:12.939237) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:26:13 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:26:13.024048) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:26:13 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:26:13.024055) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:26:13 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:26:13.024058) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:26:13 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:26:13.024062) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:26:13 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:26:13.024066) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.045 254096 DEBUG nova.storage.rbd_utils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] rbd image 25540ca1-3029-48b2-8ab3-c800a16c8175_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.070 254096 DEBUG nova.storage.rbd_utils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] rbd image 25540ca1-3029-48b2-8ab3-c800a16c8175_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.100 254096 DEBUG nova.storage.rbd_utils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] rbd image 25540ca1-3029-48b2-8ab3-c800a16c8175_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.105 254096 DEBUG oslo_concurrency.processutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.182 254096 DEBUG oslo_concurrency.processutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.183 254096 DEBUG oslo_concurrency.lockutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.186 254096 DEBUG oslo_concurrency.lockutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.186 254096 DEBUG oslo_concurrency.lockutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.210 254096 DEBUG nova.storage.rbd_utils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] rbd image 25540ca1-3029-48b2-8ab3-c800a16c8175_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.213 254096 DEBUG oslo_concurrency.processutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 25540ca1-3029-48b2-8ab3-c800a16c8175_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:26:13 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:26:13 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3005132630' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.379 254096 DEBUG oslo_concurrency.processutils [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.386 254096 DEBUG nova.compute.provider_tree [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.402 254096 DEBUG nova.scheduler.client.report [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.422 254096 DEBUG oslo_concurrency.lockutils [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.482 254096 DEBUG nova.network.neutron [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.483 254096 DEBUG nova.compute.manager [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.485 254096 INFO nova.scheduler.client.report [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Deleted allocations for instance 4b2c6795-15b5-424c-b7c5-4b695a348f41
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.510 254096 DEBUG oslo_concurrency.processutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 25540ca1-3029-48b2-8ab3-c800a16c8175_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.296s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.585 254096 DEBUG nova.storage.rbd_utils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] resizing rbd image 25540ca1-3029-48b2-8ab3-c800a16c8175_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:26:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:26:13.595 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:26:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:26:13.595 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:26:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:26:13.596 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.623 254096 DEBUG oslo_concurrency.lockutils [None req-f596057f-2886-4f1e-9bcd-0312fb891e63 787cb8b4238c4926a4466f3421db09ef 95df0d15c889499aba411e805ea145a5 - - default default] Lock "4b2c6795-15b5-424c-b7c5-4b695a348f41" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.405s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.678 254096 DEBUG nova.objects.instance [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lazy-loading 'migration_context' on Instance uuid 25540ca1-3029-48b2-8ab3-c800a16c8175 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.689 254096 DEBUG nova.virt.libvirt.driver [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.690 254096 DEBUG nova.virt.libvirt.driver [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Ensure instance console log exists: /var/lib/nova/instances/25540ca1-3029-48b2-8ab3-c800a16c8175/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.690 254096 DEBUG oslo_concurrency.lockutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.690 254096 DEBUG oslo_concurrency.lockutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.691 254096 DEBUG oslo_concurrency.lockutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.692 254096 DEBUG nova.virt.libvirt.driver [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.697 254096 WARNING nova.virt.libvirt.driver [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.701 254096 DEBUG nova.virt.libvirt.host [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.702 254096 DEBUG nova.virt.libvirt.host [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.705 254096 DEBUG nova.virt.libvirt.host [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.705 254096 DEBUG nova.virt.libvirt.host [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.706 254096 DEBUG nova.virt.libvirt.driver [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.706 254096 DEBUG nova.virt.hardware [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.706 254096 DEBUG nova.virt.hardware [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.707 254096 DEBUG nova.virt.hardware [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.707 254096 DEBUG nova.virt.hardware [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.707 254096 DEBUG nova.virt.hardware [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.707 254096 DEBUG nova.virt.hardware [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.708 254096 DEBUG nova.virt.hardware [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.708 254096 DEBUG nova.virt.hardware [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.708 254096 DEBUG nova.virt.hardware [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.709 254096 DEBUG nova.virt.hardware [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.709 254096 DEBUG nova.virt.hardware [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.712 254096 DEBUG oslo_concurrency.processutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.739 254096 DEBUG nova.compute.manager [req-d818d64e-31bf-4249-a850-86aeef82cb24 req-146dbadf-e427-4feb-8625-2ccf67846f68 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Received event network-vif-plugged-05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.740 254096 DEBUG oslo_concurrency.lockutils [req-d818d64e-31bf-4249-a850-86aeef82cb24 req-146dbadf-e427-4feb-8625-2ccf67846f68 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "4b2c6795-15b5-424c-b7c5-4b695a348f41-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.740 254096 DEBUG oslo_concurrency.lockutils [req-d818d64e-31bf-4249-a850-86aeef82cb24 req-146dbadf-e427-4feb-8625-2ccf67846f68 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4b2c6795-15b5-424c-b7c5-4b695a348f41-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.741 254096 DEBUG oslo_concurrency.lockutils [req-d818d64e-31bf-4249-a850-86aeef82cb24 req-146dbadf-e427-4feb-8625-2ccf67846f68 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4b2c6795-15b5-424c-b7c5-4b695a348f41-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.741 254096 DEBUG nova.compute.manager [req-d818d64e-31bf-4249-a850-86aeef82cb24 req-146dbadf-e427-4feb-8625-2ccf67846f68 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] No waiting events found dispatching network-vif-plugged-05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.741 254096 WARNING nova.compute.manager [req-d818d64e-31bf-4249-a850-86aeef82cb24 req-146dbadf-e427-4feb-8625-2ccf67846f68 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Received unexpected event network-vif-plugged-05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 for instance with vm_state deleted and task_state None.
Nov 25 16:26:13 compute-0 nova_compute[254092]: 2025-11-25 16:26:13.741 254096 DEBUG nova.compute.manager [req-d818d64e-31bf-4249-a850-86aeef82cb24 req-146dbadf-e427-4feb-8625-2ccf67846f68 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Received event network-vif-deleted-05cb1f2f-cef4-43d4-a4ca-2b128aa5e3e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:26:13 compute-0 ceph-mon[74985]: pgmap v1130: 321 pgs: 321 active+clean; 181 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 4.8 MiB/s wr, 323 op/s
Nov 25 16:26:13 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3005132630' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:26:13 compute-0 sudo[273406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:26:13 compute-0 sudo[273406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:26:13 compute-0 sudo[273406]: pam_unix(sudo:session): session closed for user root
Nov 25 16:26:14 compute-0 sudo[273431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:26:14 compute-0 sudo[273431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:26:14 compute-0 sudo[273431]: pam_unix(sudo:session): session closed for user root
Nov 25 16:26:14 compute-0 sudo[273456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:26:14 compute-0 sudo[273456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:26:14 compute-0 sudo[273456]: pam_unix(sudo:session): session closed for user root
Nov 25 16:26:14 compute-0 sudo[273481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 16:26:14 compute-0 sudo[273481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:26:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:26:14 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1796005807' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:26:14 compute-0 nova_compute[254092]: 2025-11-25 16:26:14.206 254096 DEBUG oslo_concurrency.processutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:26:14 compute-0 nova_compute[254092]: 2025-11-25 16:26:14.230 254096 DEBUG nova.storage.rbd_utils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] rbd image 25540ca1-3029-48b2-8ab3-c800a16c8175_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:26:14 compute-0 nova_compute[254092]: 2025-11-25 16:26:14.234 254096 DEBUG oslo_concurrency.processutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:26:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:26:14 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2767150283' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:26:14 compute-0 nova_compute[254092]: 2025-11-25 16:26:14.733 254096 DEBUG oslo_concurrency.processutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:26:14 compute-0 nova_compute[254092]: 2025-11-25 16:26:14.735 254096 DEBUG nova.objects.instance [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lazy-loading 'pci_devices' on Instance uuid 25540ca1-3029-48b2-8ab3-c800a16c8175 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:26:14 compute-0 nova_compute[254092]: 2025-11-25 16:26:14.751 254096 DEBUG nova.virt.libvirt.driver [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:26:14 compute-0 nova_compute[254092]:   <uuid>25540ca1-3029-48b2-8ab3-c800a16c8175</uuid>
Nov 25 16:26:14 compute-0 nova_compute[254092]:   <name>instance-00000009</name>
Nov 25 16:26:14 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:26:14 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:26:14 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:26:14 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:       <nova:name>tempest-ServersAdminNegativeTestJSON-server-4808240</nova:name>
Nov 25 16:26:14 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:26:13</nova:creationTime>
Nov 25 16:26:14 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:26:14 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:26:14 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:26:14 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:26:14 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:26:14 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:26:14 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:26:14 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:26:14 compute-0 nova_compute[254092]:         <nova:user uuid="820b9ab982364678b8e75b2c9cc4cfed">tempest-ServersAdminNegativeTestJSON-1866473487-project-member</nova:user>
Nov 25 16:26:14 compute-0 nova_compute[254092]:         <nova:project uuid="511bc9af98844c8995c27adbee1a3d4c">tempest-ServersAdminNegativeTestJSON-1866473487</nova:project>
Nov 25 16:26:14 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:26:14 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:       <nova:ports/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:26:14 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:26:14 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:26:14 compute-0 nova_compute[254092]:     <system>
Nov 25 16:26:14 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:26:14 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:26:14 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:26:14 compute-0 nova_compute[254092]:       <entry name="serial">25540ca1-3029-48b2-8ab3-c800a16c8175</entry>
Nov 25 16:26:14 compute-0 nova_compute[254092]:       <entry name="uuid">25540ca1-3029-48b2-8ab3-c800a16c8175</entry>
Nov 25 16:26:14 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     </system>
Nov 25 16:26:14 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:26:14 compute-0 nova_compute[254092]:   <os>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:   </os>
Nov 25 16:26:14 compute-0 nova_compute[254092]:   <features>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:   </features>
Nov 25 16:26:14 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:26:14 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:26:14 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:26:14 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:26:14 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:26:14 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/25540ca1-3029-48b2-8ab3-c800a16c8175_disk">
Nov 25 16:26:14 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:       </source>
Nov 25 16:26:14 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:26:14 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:26:14 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:26:14 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/25540ca1-3029-48b2-8ab3-c800a16c8175_disk.config">
Nov 25 16:26:14 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:       </source>
Nov 25 16:26:14 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:26:14 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:26:14 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:26:14 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/25540ca1-3029-48b2-8ab3-c800a16c8175/console.log" append="off"/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     <video>
Nov 25 16:26:14 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     </video>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:26:14 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:26:14 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:26:14 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:26:14 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:26:14 compute-0 nova_compute[254092]: </domain>
Nov 25 16:26:14 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:26:14 compute-0 sudo[273481]: pam_unix(sudo:session): session closed for user root
Nov 25 16:26:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 16:26:14 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 16:26:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:26:14 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:26:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 16:26:14 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:26:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 16:26:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1131: 321 pgs: 321 active+clean; 181 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 4.1 MiB/s wr, 316 op/s
Nov 25 16:26:14 compute-0 nova_compute[254092]: 2025-11-25 16:26:14.829 254096 DEBUG nova.virt.libvirt.driver [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:26:14 compute-0 nova_compute[254092]: 2025-11-25 16:26:14.829 254096 DEBUG nova.virt.libvirt.driver [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:26:14 compute-0 nova_compute[254092]: 2025-11-25 16:26:14.830 254096 INFO nova.virt.libvirt.driver [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Using config drive
Nov 25 16:26:14 compute-0 nova_compute[254092]: 2025-11-25 16:26:14.871 254096 DEBUG nova.storage.rbd_utils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] rbd image 25540ca1-3029-48b2-8ab3-c800a16c8175_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:26:14 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:26:14 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev f9496648-21d8-46f8-9a07-7aff6cca85b9 does not exist
Nov 25 16:26:14 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 9b0474e6-86bf-435b-9a50-1471161836ab does not exist
Nov 25 16:26:14 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 53b3a34d-41e3-44a4-926b-4c4a96c36d4d does not exist
Nov 25 16:26:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 16:26:14 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:26:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 16:26:14 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:26:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:26:14 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:26:15 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1796005807' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:26:15 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2767150283' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:26:15 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 16:26:15 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:26:15 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:26:15 compute-0 ceph-mon[74985]: pgmap v1131: 321 pgs: 321 active+clean; 181 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 4.1 MiB/s wr, 316 op/s
Nov 25 16:26:15 compute-0 sudo[273598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:26:15 compute-0 sudo[273598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:26:15 compute-0 sudo[273598]: pam_unix(sudo:session): session closed for user root
Nov 25 16:26:15 compute-0 sudo[273623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:26:15 compute-0 sudo[273623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:26:15 compute-0 sudo[273623]: pam_unix(sudo:session): session closed for user root
Nov 25 16:26:15 compute-0 sudo[273648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:26:15 compute-0 sudo[273648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:26:15 compute-0 sudo[273648]: pam_unix(sudo:session): session closed for user root
Nov 25 16:26:15 compute-0 nova_compute[254092]: 2025-11-25 16:26:15.192 254096 INFO nova.virt.libvirt.driver [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Creating config drive at /var/lib/nova/instances/25540ca1-3029-48b2-8ab3-c800a16c8175/disk.config
Nov 25 16:26:15 compute-0 nova_compute[254092]: 2025-11-25 16:26:15.199 254096 DEBUG oslo_concurrency.processutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/25540ca1-3029-48b2-8ab3-c800a16c8175/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp10rbe7lh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:26:15 compute-0 sudo[273673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 16:26:15 compute-0 sudo[273673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:26:15 compute-0 nova_compute[254092]: 2025-11-25 16:26:15.325 254096 DEBUG oslo_concurrency.processutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/25540ca1-3029-48b2-8ab3-c800a16c8175/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp10rbe7lh" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:26:15 compute-0 nova_compute[254092]: 2025-11-25 16:26:15.347 254096 DEBUG nova.storage.rbd_utils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] rbd image 25540ca1-3029-48b2-8ab3-c800a16c8175_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:26:15 compute-0 nova_compute[254092]: 2025-11-25 16:26:15.349 254096 DEBUG oslo_concurrency.processutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/25540ca1-3029-48b2-8ab3-c800a16c8175/disk.config 25540ca1-3029-48b2-8ab3-c800a16c8175_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:26:15 compute-0 nova_compute[254092]: 2025-11-25 16:26:15.484 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:26:15 compute-0 nova_compute[254092]: 2025-11-25 16:26:15.492 254096 DEBUG oslo_concurrency.processutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/25540ca1-3029-48b2-8ab3-c800a16c8175/disk.config 25540ca1-3029-48b2-8ab3-c800a16c8175_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:26:15 compute-0 nova_compute[254092]: 2025-11-25 16:26:15.492 254096 INFO nova.virt.libvirt.driver [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Deleting local config drive /var/lib/nova/instances/25540ca1-3029-48b2-8ab3-c800a16c8175/disk.config because it was imported into RBD.
Nov 25 16:26:15 compute-0 systemd-machined[216343]: New machine qemu-9-instance-00000009.
Nov 25 16:26:15 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Nov 25 16:26:15 compute-0 podman[273781]: 2025-11-25 16:26:15.589215109 +0000 UTC m=+0.048621325 container create 8a3b3d90fd44f2d70bf85e64ca39f7c2b99ed2b40266b169af08c238e1c83650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 16:26:15 compute-0 systemd[1]: Started libpod-conmon-8a3b3d90fd44f2d70bf85e64ca39f7c2b99ed2b40266b169af08c238e1c83650.scope.
Nov 25 16:26:15 compute-0 podman[273781]: 2025-11-25 16:26:15.574013575 +0000 UTC m=+0.033419811 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:26:15 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:26:15 compute-0 podman[273781]: 2025-11-25 16:26:15.685492209 +0000 UTC m=+0.144898455 container init 8a3b3d90fd44f2d70bf85e64ca39f7c2b99ed2b40266b169af08c238e1c83650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_pike, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:26:15 compute-0 podman[273781]: 2025-11-25 16:26:15.6917799 +0000 UTC m=+0.151186116 container start 8a3b3d90fd44f2d70bf85e64ca39f7c2b99ed2b40266b169af08c238e1c83650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:26:15 compute-0 podman[273781]: 2025-11-25 16:26:15.694699379 +0000 UTC m=+0.154105625 container attach 8a3b3d90fd44f2d70bf85e64ca39f7c2b99ed2b40266b169af08c238e1c83650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_pike, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:26:15 compute-0 systemd[1]: libpod-8a3b3d90fd44f2d70bf85e64ca39f7c2b99ed2b40266b169af08c238e1c83650.scope: Deactivated successfully.
Nov 25 16:26:15 compute-0 magical_pike[273806]: 167 167
Nov 25 16:26:15 compute-0 conmon[273806]: conmon 8a3b3d90fd44f2d70bf8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8a3b3d90fd44f2d70bf85e64ca39f7c2b99ed2b40266b169af08c238e1c83650.scope/container/memory.events
Nov 25 16:26:15 compute-0 podman[273781]: 2025-11-25 16:26:15.698702879 +0000 UTC m=+0.158109095 container died 8a3b3d90fd44f2d70bf85e64ca39f7c2b99ed2b40266b169af08c238e1c83650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_pike, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:26:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-95a3ad12d5de1a989277333a70de65cb8078a9d27f91cb250dd33e640af23259-merged.mount: Deactivated successfully.
Nov 25 16:26:15 compute-0 podman[273781]: 2025-11-25 16:26:15.73550963 +0000 UTC m=+0.194915846 container remove 8a3b3d90fd44f2d70bf85e64ca39f7c2b99ed2b40266b169af08c238e1c83650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_pike, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:26:15 compute-0 systemd[1]: libpod-conmon-8a3b3d90fd44f2d70bf85e64ca39f7c2b99ed2b40266b169af08c238e1c83650.scope: Deactivated successfully.
Nov 25 16:26:15 compute-0 podman[273870]: 2025-11-25 16:26:15.898612949 +0000 UTC m=+0.039851486 container create 948f66006549e597b03531f43e6efe545091d59b8fc67bbe42605ed3ea2ab2ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mendel, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 16:26:15 compute-0 nova_compute[254092]: 2025-11-25 16:26:15.901 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764087975.9009004, 25540ca1-3029-48b2-8ab3-c800a16c8175 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:26:15 compute-0 nova_compute[254092]: 2025-11-25 16:26:15.902 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] VM Resumed (Lifecycle Event)
Nov 25 16:26:15 compute-0 nova_compute[254092]: 2025-11-25 16:26:15.904 254096 DEBUG nova.compute.manager [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:26:15 compute-0 nova_compute[254092]: 2025-11-25 16:26:15.905 254096 DEBUG nova.virt.libvirt.driver [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:26:15 compute-0 nova_compute[254092]: 2025-11-25 16:26:15.908 254096 INFO nova.virt.libvirt.driver [-] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Instance spawned successfully.
Nov 25 16:26:15 compute-0 nova_compute[254092]: 2025-11-25 16:26:15.909 254096 DEBUG nova.virt.libvirt.driver [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:26:15 compute-0 nova_compute[254092]: 2025-11-25 16:26:15.922 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:26:15 compute-0 nova_compute[254092]: 2025-11-25 16:26:15.932 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:26:15 compute-0 nova_compute[254092]: 2025-11-25 16:26:15.941 254096 DEBUG nova.virt.libvirt.driver [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:26:15 compute-0 nova_compute[254092]: 2025-11-25 16:26:15.942 254096 DEBUG nova.virt.libvirt.driver [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:26:15 compute-0 nova_compute[254092]: 2025-11-25 16:26:15.942 254096 DEBUG nova.virt.libvirt.driver [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:26:15 compute-0 nova_compute[254092]: 2025-11-25 16:26:15.943 254096 DEBUG nova.virt.libvirt.driver [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:26:15 compute-0 nova_compute[254092]: 2025-11-25 16:26:15.943 254096 DEBUG nova.virt.libvirt.driver [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:26:15 compute-0 nova_compute[254092]: 2025-11-25 16:26:15.943 254096 DEBUG nova.virt.libvirt.driver [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:26:15 compute-0 systemd[1]: Started libpod-conmon-948f66006549e597b03531f43e6efe545091d59b8fc67bbe42605ed3ea2ab2ef.scope.
Nov 25 16:26:15 compute-0 nova_compute[254092]: 2025-11-25 16:26:15.968 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:26:15 compute-0 nova_compute[254092]: 2025-11-25 16:26:15.969 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764087975.9017322, 25540ca1-3029-48b2-8ab3-c800a16c8175 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:26:15 compute-0 nova_compute[254092]: 2025-11-25 16:26:15.969 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] VM Started (Lifecycle Event)
Nov 25 16:26:15 compute-0 podman[273870]: 2025-11-25 16:26:15.882284344 +0000 UTC m=+0.023522911 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:26:15 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:26:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f7452a4c8cff1fba31a917b08ff91b70538e1e9f1a9b94b3cc671953a66f525/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:26:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f7452a4c8cff1fba31a917b08ff91b70538e1e9f1a9b94b3cc671953a66f525/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:26:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f7452a4c8cff1fba31a917b08ff91b70538e1e9f1a9b94b3cc671953a66f525/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:26:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f7452a4c8cff1fba31a917b08ff91b70538e1e9f1a9b94b3cc671953a66f525/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:26:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f7452a4c8cff1fba31a917b08ff91b70538e1e9f1a9b94b3cc671953a66f525/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 16:26:16 compute-0 nova_compute[254092]: 2025-11-25 16:26:16.000 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:26:16 compute-0 nova_compute[254092]: 2025-11-25 16:26:16.005 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:26:16 compute-0 nova_compute[254092]: 2025-11-25 16:26:16.007 254096 INFO nova.compute.manager [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Took 3.00 seconds to spawn the instance on the hypervisor.
Nov 25 16:26:16 compute-0 nova_compute[254092]: 2025-11-25 16:26:16.007 254096 DEBUG nova.compute.manager [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:26:16 compute-0 podman[273870]: 2025-11-25 16:26:16.015932732 +0000 UTC m=+0.157171279 container init 948f66006549e597b03531f43e6efe545091d59b8fc67bbe42605ed3ea2ab2ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mendel, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 16:26:16 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:26:16 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:26:16 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:26:16 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:26:16 compute-0 podman[273870]: 2025-11-25 16:26:16.022878641 +0000 UTC m=+0.164117178 container start 948f66006549e597b03531f43e6efe545091d59b8fc67bbe42605ed3ea2ab2ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mendel, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:26:16 compute-0 podman[273870]: 2025-11-25 16:26:16.02543273 +0000 UTC m=+0.166671267 container attach 948f66006549e597b03531f43e6efe545091d59b8fc67bbe42605ed3ea2ab2ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:26:16 compute-0 nova_compute[254092]: 2025-11-25 16:26:16.043 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:26:16 compute-0 nova_compute[254092]: 2025-11-25 16:26:16.091 254096 INFO nova.compute.manager [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Took 4.15 seconds to build instance.
Nov 25 16:26:16 compute-0 nova_compute[254092]: 2025-11-25 16:26:16.114 254096 DEBUG oslo_concurrency.lockutils [None req-c5ad8812-5dd7-4fe8-9816-63127dc4c41b 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "25540ca1-3029-48b2-8ab3-c800a16c8175" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.257s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:26:16 compute-0 podman[273894]: 2025-11-25 16:26:16.648950799 +0000 UTC m=+0.057455514 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:26:16 compute-0 podman[273893]: 2025-11-25 16:26:16.663885105 +0000 UTC m=+0.072420371 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 16:26:16 compute-0 podman[273895]: 2025-11-25 16:26:16.68353554 +0000 UTC m=+0.090503774 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 16:26:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1132: 321 pgs: 321 active+clean; 134 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 5.9 MiB/s wr, 407 op/s
Nov 25 16:26:17 compute-0 ceph-mon[74985]: pgmap v1132: 321 pgs: 321 active+clean; 134 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 5.9 MiB/s wr, 407 op/s
Nov 25 16:26:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:26:17.043 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:26:17 compute-0 jovial_mendel[273888]: --> passed data devices: 0 physical, 3 LVM
Nov 25 16:26:17 compute-0 jovial_mendel[273888]: --> relative data size: 1.0
Nov 25 16:26:17 compute-0 jovial_mendel[273888]: --> All data devices are unavailable
Nov 25 16:26:17 compute-0 systemd[1]: libpod-948f66006549e597b03531f43e6efe545091d59b8fc67bbe42605ed3ea2ab2ef.scope: Deactivated successfully.
Nov 25 16:26:17 compute-0 podman[273870]: 2025-11-25 16:26:17.091686678 +0000 UTC m=+1.232925205 container died 948f66006549e597b03531f43e6efe545091d59b8fc67bbe42605ed3ea2ab2ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mendel, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:26:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f7452a4c8cff1fba31a917b08ff91b70538e1e9f1a9b94b3cc671953a66f525-merged.mount: Deactivated successfully.
Nov 25 16:26:17 compute-0 nova_compute[254092]: 2025-11-25 16:26:17.136 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:26:17 compute-0 podman[273870]: 2025-11-25 16:26:17.150692953 +0000 UTC m=+1.291931490 container remove 948f66006549e597b03531f43e6efe545091d59b8fc67bbe42605ed3ea2ab2ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mendel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:26:17 compute-0 systemd[1]: libpod-conmon-948f66006549e597b03531f43e6efe545091d59b8fc67bbe42605ed3ea2ab2ef.scope: Deactivated successfully.
Nov 25 16:26:17 compute-0 sudo[273673]: pam_unix(sudo:session): session closed for user root
Nov 25 16:26:17 compute-0 sudo[273989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:26:17 compute-0 sudo[273989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:26:17 compute-0 sudo[273989]: pam_unix(sudo:session): session closed for user root
Nov 25 16:26:17 compute-0 sudo[274014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:26:17 compute-0 sudo[274014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:26:17 compute-0 sudo[274014]: pam_unix(sudo:session): session closed for user root
Nov 25 16:26:17 compute-0 sudo[274039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:26:17 compute-0 sudo[274039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:26:17 compute-0 sudo[274039]: pam_unix(sudo:session): session closed for user root
Nov 25 16:26:17 compute-0 sudo[274064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 16:26:17 compute-0 sudo[274064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:26:17 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:26:17 compute-0 podman[274128]: 2025-11-25 16:26:17.717209861 +0000 UTC m=+0.036805333 container create 30762f74104887e8a5eb3b207c5250cb9647d961f62d2cee7a52d3ffe69361fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chatelet, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 16:26:17 compute-0 systemd[1]: Started libpod-conmon-30762f74104887e8a5eb3b207c5250cb9647d961f62d2cee7a52d3ffe69361fd.scope.
Nov 25 16:26:17 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:26:17 compute-0 podman[274128]: 2025-11-25 16:26:17.787956457 +0000 UTC m=+0.107551959 container init 30762f74104887e8a5eb3b207c5250cb9647d961f62d2cee7a52d3ffe69361fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chatelet, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:26:17 compute-0 podman[274128]: 2025-11-25 16:26:17.796040907 +0000 UTC m=+0.115636379 container start 30762f74104887e8a5eb3b207c5250cb9647d961f62d2cee7a52d3ffe69361fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chatelet, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 16:26:17 compute-0 podman[274128]: 2025-11-25 16:26:17.701801952 +0000 UTC m=+0.021397444 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:26:17 compute-0 podman[274128]: 2025-11-25 16:26:17.799021458 +0000 UTC m=+0.118616930 container attach 30762f74104887e8a5eb3b207c5250cb9647d961f62d2cee7a52d3ffe69361fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chatelet, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:26:17 compute-0 pedantic_chatelet[274144]: 167 167
Nov 25 16:26:17 compute-0 systemd[1]: libpod-30762f74104887e8a5eb3b207c5250cb9647d961f62d2cee7a52d3ffe69361fd.scope: Deactivated successfully.
Nov 25 16:26:17 compute-0 podman[274128]: 2025-11-25 16:26:17.802763259 +0000 UTC m=+0.122358731 container died 30762f74104887e8a5eb3b207c5250cb9647d961f62d2cee7a52d3ffe69361fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 16:26:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-41fa1edb85ed9328c94f361cb3c43dcd9cb90f741a7319f0bc423c598f553c9c-merged.mount: Deactivated successfully.
Nov 25 16:26:17 compute-0 podman[274128]: 2025-11-25 16:26:17.861086777 +0000 UTC m=+0.180682249 container remove 30762f74104887e8a5eb3b207c5250cb9647d961f62d2cee7a52d3ffe69361fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:26:17 compute-0 systemd[1]: libpod-conmon-30762f74104887e8a5eb3b207c5250cb9647d961f62d2cee7a52d3ffe69361fd.scope: Deactivated successfully.
Nov 25 16:26:18 compute-0 podman[274168]: 2025-11-25 16:26:18.047217242 +0000 UTC m=+0.060745754 container create 79138f3f08d6759274c85e3771f7b467fe13ef8f0a716edf2f72ddda5e52edc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 16:26:18 compute-0 systemd[1]: Started libpod-conmon-79138f3f08d6759274c85e3771f7b467fe13ef8f0a716edf2f72ddda5e52edc6.scope.
Nov 25 16:26:18 compute-0 podman[274168]: 2025-11-25 16:26:18.019418825 +0000 UTC m=+0.032947377 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:26:18 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:26:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c34880268d60184cc95be1c77bcd5b00b2ca45ec2d13cee08f46d7ed3725ce08/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:26:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c34880268d60184cc95be1c77bcd5b00b2ca45ec2d13cee08f46d7ed3725ce08/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:26:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c34880268d60184cc95be1c77bcd5b00b2ca45ec2d13cee08f46d7ed3725ce08/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:26:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c34880268d60184cc95be1c77bcd5b00b2ca45ec2d13cee08f46d7ed3725ce08/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:26:18 compute-0 podman[274168]: 2025-11-25 16:26:18.128619997 +0000 UTC m=+0.142148519 container init 79138f3f08d6759274c85e3771f7b467fe13ef8f0a716edf2f72ddda5e52edc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Nov 25 16:26:18 compute-0 podman[274168]: 2025-11-25 16:26:18.136282736 +0000 UTC m=+0.149811248 container start 79138f3f08d6759274c85e3771f7b467fe13ef8f0a716edf2f72ddda5e52edc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Nov 25 16:26:18 compute-0 podman[274168]: 2025-11-25 16:26:18.139189955 +0000 UTC m=+0.152718457 container attach 79138f3f08d6759274c85e3771f7b467fe13ef8f0a716edf2f72ddda5e52edc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_grothendieck, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:26:18 compute-0 nova_compute[254092]: 2025-11-25 16:26:18.291 254096 DEBUG nova.objects.instance [None req-bc560204-8aeb-47da-b23d-3b7879e6f0ba 002b88c8dbb14a3b9516bcf2c1ec67e4 9b9d5cbb2ff14e48aa429ebc506b3a74 - - default default] Lazy-loading 'pci_devices' on Instance uuid 25540ca1-3029-48b2-8ab3-c800a16c8175 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:26:18 compute-0 nova_compute[254092]: 2025-11-25 16:26:18.315 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764087978.3094976, 25540ca1-3029-48b2-8ab3-c800a16c8175 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:26:18 compute-0 nova_compute[254092]: 2025-11-25 16:26:18.315 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] VM Paused (Lifecycle Event)
Nov 25 16:26:18 compute-0 nova_compute[254092]: 2025-11-25 16:26:18.333 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:26:18 compute-0 nova_compute[254092]: 2025-11-25 16:26:18.337 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:26:18 compute-0 nova_compute[254092]: 2025-11-25 16:26:18.371 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] During sync_power_state the instance has a pending task (suspending). Skip.
Nov 25 16:26:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1133: 321 pgs: 321 active+clean; 134 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 242 op/s
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]: {
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:     "0": [
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:         {
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:             "devices": [
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:                 "/dev/loop3"
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:             ],
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:             "lv_name": "ceph_lv0",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:             "lv_size": "21470642176",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:             "name": "ceph_lv0",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:             "tags": {
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:                 "ceph.cluster_name": "ceph",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:                 "ceph.crush_device_class": "",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:                 "ceph.encrypted": "0",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:                 "ceph.osd_id": "0",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:                 "ceph.type": "block",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:                 "ceph.vdo": "0"
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:             },
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:             "type": "block",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:             "vg_name": "ceph_vg0"
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:         }
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:     ],
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:     "1": [
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:         {
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:             "devices": [
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:                 "/dev/loop4"
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:             ],
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:             "lv_name": "ceph_lv1",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:             "lv_size": "21470642176",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:             "name": "ceph_lv1",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:             "tags": {
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:                 "ceph.cluster_name": "ceph",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:                 "ceph.crush_device_class": "",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:                 "ceph.encrypted": "0",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:                 "ceph.osd_id": "1",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:                 "ceph.type": "block",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:                 "ceph.vdo": "0"
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:             },
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:             "type": "block",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:             "vg_name": "ceph_vg1"
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:         }
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:     ],
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:     "2": [
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:         {
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:             "devices": [
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:                 "/dev/loop5"
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:             ],
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:             "lv_name": "ceph_lv2",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:             "lv_size": "21470642176",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:             "name": "ceph_lv2",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:             "tags": {
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:                 "ceph.cluster_name": "ceph",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:                 "ceph.crush_device_class": "",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:                 "ceph.encrypted": "0",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:                 "ceph.osd_id": "2",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:                 "ceph.type": "block",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:                 "ceph.vdo": "0"
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:             },
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:             "type": "block",
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:             "vg_name": "ceph_vg2"
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:         }
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]:     ]
Nov 25 16:26:18 compute-0 zealous_grothendieck[274185]: }
Nov 25 16:26:18 compute-0 systemd[1]: libpod-79138f3f08d6759274c85e3771f7b467fe13ef8f0a716edf2f72ddda5e52edc6.scope: Deactivated successfully.
Nov 25 16:26:18 compute-0 podman[274168]: 2025-11-25 16:26:18.924058765 +0000 UTC m=+0.937587277 container died 79138f3f08d6759274c85e3771f7b467fe13ef8f0a716edf2f72ddda5e52edc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_grothendieck, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 16:26:19 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Nov 25 16:26:19 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 2.882s CPU time.
Nov 25 16:26:19 compute-0 systemd-machined[216343]: Machine qemu-9-instance-00000009 terminated.
Nov 25 16:26:19 compute-0 ceph-mon[74985]: pgmap v1133: 321 pgs: 321 active+clean; 134 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 242 op/s
Nov 25 16:26:19 compute-0 nova_compute[254092]: 2025-11-25 16:26:19.258 254096 DEBUG nova.compute.manager [None req-bc560204-8aeb-47da-b23d-3b7879e6f0ba 002b88c8dbb14a3b9516bcf2c1ec67e4 9b9d5cbb2ff14e48aa429ebc506b3a74 - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:26:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-c34880268d60184cc95be1c77bcd5b00b2ca45ec2d13cee08f46d7ed3725ce08-merged.mount: Deactivated successfully.
Nov 25 16:26:19 compute-0 podman[274168]: 2025-11-25 16:26:19.611624007 +0000 UTC m=+1.625152519 container remove 79138f3f08d6759274c85e3771f7b467fe13ef8f0a716edf2f72ddda5e52edc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:26:19 compute-0 systemd[1]: libpod-conmon-79138f3f08d6759274c85e3771f7b467fe13ef8f0a716edf2f72ddda5e52edc6.scope: Deactivated successfully.
Nov 25 16:26:19 compute-0 sudo[274064]: pam_unix(sudo:session): session closed for user root
Nov 25 16:26:19 compute-0 sudo[274212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:26:19 compute-0 sudo[274212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:26:19 compute-0 sudo[274212]: pam_unix(sudo:session): session closed for user root
Nov 25 16:26:19 compute-0 sudo[274237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:26:19 compute-0 sudo[274237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:26:19 compute-0 sudo[274237]: pam_unix(sudo:session): session closed for user root
Nov 25 16:26:19 compute-0 sudo[274262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:26:19 compute-0 sudo[274262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:26:19 compute-0 sudo[274262]: pam_unix(sudo:session): session closed for user root
Nov 25 16:26:19 compute-0 sudo[274287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 16:26:19 compute-0 sudo[274287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:26:20 compute-0 podman[274350]: 2025-11-25 16:26:20.21880102 +0000 UTC m=+0.045549690 container create 151c14e1faeb67f9ec44873f762e24cd6db482e406c0a50f706c8f1fa1160342 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:26:20 compute-0 systemd[1]: Started libpod-conmon-151c14e1faeb67f9ec44873f762e24cd6db482e406c0a50f706c8f1fa1160342.scope.
Nov 25 16:26:20 compute-0 podman[274350]: 2025-11-25 16:26:20.192422263 +0000 UTC m=+0.019170963 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:26:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:26:20 compute-0 podman[274350]: 2025-11-25 16:26:20.316783937 +0000 UTC m=+0.143532667 container init 151c14e1faeb67f9ec44873f762e24cd6db482e406c0a50f706c8f1fa1160342 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:26:20 compute-0 podman[274350]: 2025-11-25 16:26:20.324766834 +0000 UTC m=+0.151515504 container start 151c14e1faeb67f9ec44873f762e24cd6db482e406c0a50f706c8f1fa1160342 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_allen, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:26:20 compute-0 podman[274350]: 2025-11-25 16:26:20.327743705 +0000 UTC m=+0.154492365 container attach 151c14e1faeb67f9ec44873f762e24cd6db482e406c0a50f706c8f1fa1160342 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_allen, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:26:20 compute-0 tender_allen[274366]: 167 167
Nov 25 16:26:20 compute-0 systemd[1]: libpod-151c14e1faeb67f9ec44873f762e24cd6db482e406c0a50f706c8f1fa1160342.scope: Deactivated successfully.
Nov 25 16:26:20 compute-0 conmon[274366]: conmon 151c14e1faeb67f9ec44 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-151c14e1faeb67f9ec44873f762e24cd6db482e406c0a50f706c8f1fa1160342.scope/container/memory.events
Nov 25 16:26:20 compute-0 podman[274350]: 2025-11-25 16:26:20.331682423 +0000 UTC m=+0.158431093 container died 151c14e1faeb67f9ec44873f762e24cd6db482e406c0a50f706c8f1fa1160342 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_allen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 16:26:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c2e79a6d24c61f4120a499bc58c6468acbd8d0ed6a1fecf4a29a995d55d6346-merged.mount: Deactivated successfully.
Nov 25 16:26:20 compute-0 podman[274350]: 2025-11-25 16:26:20.364161206 +0000 UTC m=+0.190909876 container remove 151c14e1faeb67f9ec44873f762e24cd6db482e406c0a50f706c8f1fa1160342 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 16:26:20 compute-0 systemd[1]: libpod-conmon-151c14e1faeb67f9ec44873f762e24cd6db482e406c0a50f706c8f1fa1160342.scope: Deactivated successfully.
Nov 25 16:26:20 compute-0 podman[274390]: 2025-11-25 16:26:20.526305619 +0000 UTC m=+0.041968123 container create d2b7939650747b2b0709d403f87267f53b1957eb18cba767de2606fbe7797bcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 16:26:20 compute-0 nova_compute[254092]: 2025-11-25 16:26:20.551 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:26:20 compute-0 systemd[1]: Started libpod-conmon-d2b7939650747b2b0709d403f87267f53b1957eb18cba767de2606fbe7797bcd.scope.
Nov 25 16:26:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:26:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/583f1eb9b0a733d86938ec86bb03e623d4c7127f4a3d1a1dc467427003feb12e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:26:20 compute-0 podman[274390]: 2025-11-25 16:26:20.507721364 +0000 UTC m=+0.023383888 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:26:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/583f1eb9b0a733d86938ec86bb03e623d4c7127f4a3d1a1dc467427003feb12e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:26:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/583f1eb9b0a733d86938ec86bb03e623d4c7127f4a3d1a1dc467427003feb12e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:26:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/583f1eb9b0a733d86938ec86bb03e623d4c7127f4a3d1a1dc467427003feb12e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:26:20 compute-0 podman[274390]: 2025-11-25 16:26:20.615394844 +0000 UTC m=+0.131057358 container init d2b7939650747b2b0709d403f87267f53b1957eb18cba767de2606fbe7797bcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mirzakhani, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 16:26:20 compute-0 podman[274390]: 2025-11-25 16:26:20.621724636 +0000 UTC m=+0.137387140 container start d2b7939650747b2b0709d403f87267f53b1957eb18cba767de2606fbe7797bcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mirzakhani, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 16:26:20 compute-0 podman[274390]: 2025-11-25 16:26:20.625732615 +0000 UTC m=+0.141395109 container attach d2b7939650747b2b0709d403f87267f53b1957eb18cba767de2606fbe7797bcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mirzakhani, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:26:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1134: 321 pgs: 321 active+clean; 144 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 3.0 MiB/s wr, 323 op/s
Nov 25 16:26:21 compute-0 nova_compute[254092]: 2025-11-25 16:26:21.291 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:26:21 compute-0 nova_compute[254092]: 2025-11-25 16:26:21.442 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:26:21 compute-0 compassionate_mirzakhani[274406]: {
Nov 25 16:26:21 compute-0 compassionate_mirzakhani[274406]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 16:26:21 compute-0 compassionate_mirzakhani[274406]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:26:21 compute-0 compassionate_mirzakhani[274406]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 16:26:21 compute-0 compassionate_mirzakhani[274406]:         "osd_id": 1,
Nov 25 16:26:21 compute-0 compassionate_mirzakhani[274406]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:26:21 compute-0 compassionate_mirzakhani[274406]:         "type": "bluestore"
Nov 25 16:26:21 compute-0 compassionate_mirzakhani[274406]:     },
Nov 25 16:26:21 compute-0 compassionate_mirzakhani[274406]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 16:26:21 compute-0 compassionate_mirzakhani[274406]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:26:21 compute-0 compassionate_mirzakhani[274406]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 16:26:21 compute-0 compassionate_mirzakhani[274406]:         "osd_id": 2,
Nov 25 16:26:21 compute-0 compassionate_mirzakhani[274406]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:26:21 compute-0 compassionate_mirzakhani[274406]:         "type": "bluestore"
Nov 25 16:26:21 compute-0 compassionate_mirzakhani[274406]:     },
Nov 25 16:26:21 compute-0 compassionate_mirzakhani[274406]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 16:26:21 compute-0 compassionate_mirzakhani[274406]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:26:21 compute-0 compassionate_mirzakhani[274406]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 16:26:21 compute-0 compassionate_mirzakhani[274406]:         "osd_id": 0,
Nov 25 16:26:21 compute-0 compassionate_mirzakhani[274406]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:26:21 compute-0 compassionate_mirzakhani[274406]:         "type": "bluestore"
Nov 25 16:26:21 compute-0 compassionate_mirzakhani[274406]:     }
Nov 25 16:26:21 compute-0 compassionate_mirzakhani[274406]: }
Nov 25 16:26:21 compute-0 systemd[1]: libpod-d2b7939650747b2b0709d403f87267f53b1957eb18cba767de2606fbe7797bcd.scope: Deactivated successfully.
Nov 25 16:26:21 compute-0 podman[274390]: 2025-11-25 16:26:21.581002622 +0000 UTC m=+1.096665126 container died d2b7939650747b2b0709d403f87267f53b1957eb18cba767de2606fbe7797bcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mirzakhani, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:26:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-583f1eb9b0a733d86938ec86bb03e623d4c7127f4a3d1a1dc467427003feb12e-merged.mount: Deactivated successfully.
Nov 25 16:26:21 compute-0 podman[274390]: 2025-11-25 16:26:21.634283702 +0000 UTC m=+1.149946206 container remove d2b7939650747b2b0709d403f87267f53b1957eb18cba767de2606fbe7797bcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 16:26:21 compute-0 systemd[1]: libpod-conmon-d2b7939650747b2b0709d403f87267f53b1957eb18cba767de2606fbe7797bcd.scope: Deactivated successfully.
Nov 25 16:26:21 compute-0 sudo[274287]: pam_unix(sudo:session): session closed for user root
Nov 25 16:26:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:26:21 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:26:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:26:21 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:26:21 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 15ad8e01-bfac-4eb8-b1b1-eaee2825df45 does not exist
Nov 25 16:26:21 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 6d4df03b-122f-4ae2-9440-2db2ba8163ab does not exist
Nov 25 16:26:21 compute-0 sudo[274453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:26:21 compute-0 sudo[274453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:26:21 compute-0 sudo[274453]: pam_unix(sudo:session): session closed for user root
Nov 25 16:26:21 compute-0 nova_compute[254092]: 2025-11-25 16:26:21.789 254096 DEBUG oslo_concurrency.lockutils [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Acquiring lock "25540ca1-3029-48b2-8ab3-c800a16c8175" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:26:21 compute-0 nova_compute[254092]: 2025-11-25 16:26:21.790 254096 DEBUG oslo_concurrency.lockutils [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "25540ca1-3029-48b2-8ab3-c800a16c8175" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:26:21 compute-0 nova_compute[254092]: 2025-11-25 16:26:21.790 254096 DEBUG oslo_concurrency.lockutils [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Acquiring lock "25540ca1-3029-48b2-8ab3-c800a16c8175-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:26:21 compute-0 nova_compute[254092]: 2025-11-25 16:26:21.791 254096 DEBUG oslo_concurrency.lockutils [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "25540ca1-3029-48b2-8ab3-c800a16c8175-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:26:21 compute-0 nova_compute[254092]: 2025-11-25 16:26:21.791 254096 DEBUG oslo_concurrency.lockutils [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "25540ca1-3029-48b2-8ab3-c800a16c8175-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:26:21 compute-0 nova_compute[254092]: 2025-11-25 16:26:21.792 254096 INFO nova.compute.manager [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Terminating instance
Nov 25 16:26:21 compute-0 nova_compute[254092]: 2025-11-25 16:26:21.793 254096 DEBUG oslo_concurrency.lockutils [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Acquiring lock "refresh_cache-25540ca1-3029-48b2-8ab3-c800a16c8175" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:26:21 compute-0 nova_compute[254092]: 2025-11-25 16:26:21.793 254096 DEBUG oslo_concurrency.lockutils [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Acquired lock "refresh_cache-25540ca1-3029-48b2-8ab3-c800a16c8175" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:26:21 compute-0 nova_compute[254092]: 2025-11-25 16:26:21.794 254096 DEBUG nova.network.neutron [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:26:21 compute-0 sudo[274478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 16:26:21 compute-0 sudo[274478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:26:21 compute-0 sudo[274478]: pam_unix(sudo:session): session closed for user root
Nov 25 16:26:21 compute-0 ceph-mon[74985]: pgmap v1134: 321 pgs: 321 active+clean; 144 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 3.0 MiB/s wr, 323 op/s
Nov 25 16:26:21 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:26:21 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:26:21 compute-0 nova_compute[254092]: 2025-11-25 16:26:21.998 254096 DEBUG nova.network.neutron [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:26:22 compute-0 nova_compute[254092]: 2025-11-25 16:26:22.138 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:26:22 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:26:22 compute-0 nova_compute[254092]: 2025-11-25 16:26:22.463 254096 DEBUG nova.network.neutron [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:26:22 compute-0 nova_compute[254092]: 2025-11-25 16:26:22.487 254096 DEBUG oslo_concurrency.lockutils [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Releasing lock "refresh_cache-25540ca1-3029-48b2-8ab3-c800a16c8175" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:26:22 compute-0 nova_compute[254092]: 2025-11-25 16:26:22.487 254096 DEBUG nova.compute.manager [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:26:22 compute-0 nova_compute[254092]: 2025-11-25 16:26:22.493 254096 INFO nova.virt.libvirt.driver [-] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Instance destroyed successfully.
Nov 25 16:26:22 compute-0 nova_compute[254092]: 2025-11-25 16:26:22.493 254096 DEBUG nova.objects.instance [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lazy-loading 'resources' on Instance uuid 25540ca1-3029-48b2-8ab3-c800a16c8175 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:26:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1135: 321 pgs: 321 active+clean; 159 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.8 MiB/s wr, 231 op/s
Nov 25 16:26:22 compute-0 ceph-mon[74985]: pgmap v1135: 321 pgs: 321 active+clean; 159 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.8 MiB/s wr, 231 op/s
Nov 25 16:26:23 compute-0 nova_compute[254092]: 2025-11-25 16:26:23.020 254096 INFO nova.virt.libvirt.driver [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Deleting instance files /var/lib/nova/instances/25540ca1-3029-48b2-8ab3-c800a16c8175_del
Nov 25 16:26:23 compute-0 nova_compute[254092]: 2025-11-25 16:26:23.021 254096 INFO nova.virt.libvirt.driver [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Deletion of /var/lib/nova/instances/25540ca1-3029-48b2-8ab3-c800a16c8175_del complete
Nov 25 16:26:23 compute-0 nova_compute[254092]: 2025-11-25 16:26:23.090 254096 INFO nova.compute.manager [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Took 0.60 seconds to destroy the instance on the hypervisor.
Nov 25 16:26:23 compute-0 nova_compute[254092]: 2025-11-25 16:26:23.091 254096 DEBUG oslo.service.loopingcall [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:26:23 compute-0 nova_compute[254092]: 2025-11-25 16:26:23.092 254096 DEBUG nova.compute.manager [-] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:26:23 compute-0 nova_compute[254092]: 2025-11-25 16:26:23.092 254096 DEBUG nova.network.neutron [-] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:26:23 compute-0 nova_compute[254092]: 2025-11-25 16:26:23.228 254096 DEBUG nova.network.neutron [-] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:26:23 compute-0 nova_compute[254092]: 2025-11-25 16:26:23.241 254096 DEBUG nova.network.neutron [-] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:26:23 compute-0 nova_compute[254092]: 2025-11-25 16:26:23.252 254096 INFO nova.compute.manager [-] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Took 0.16 seconds to deallocate network for instance.
Nov 25 16:26:23 compute-0 nova_compute[254092]: 2025-11-25 16:26:23.300 254096 DEBUG oslo_concurrency.lockutils [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:26:23 compute-0 nova_compute[254092]: 2025-11-25 16:26:23.301 254096 DEBUG oslo_concurrency.lockutils [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:26:23 compute-0 nova_compute[254092]: 2025-11-25 16:26:23.382 254096 DEBUG oslo_concurrency.processutils [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:26:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:26:23 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1474540718' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:26:23 compute-0 nova_compute[254092]: 2025-11-25 16:26:23.853 254096 DEBUG oslo_concurrency.processutils [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:26:23 compute-0 nova_compute[254092]: 2025-11-25 16:26:23.860 254096 DEBUG nova.compute.provider_tree [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:26:23 compute-0 nova_compute[254092]: 2025-11-25 16:26:23.892 254096 DEBUG nova.scheduler.client.report [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:26:23 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1474540718' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:26:23 compute-0 nova_compute[254092]: 2025-11-25 16:26:23.926 254096 DEBUG oslo_concurrency.lockutils [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:26:23 compute-0 nova_compute[254092]: 2025-11-25 16:26:23.991 254096 INFO nova.scheduler.client.report [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Deleted allocations for instance 25540ca1-3029-48b2-8ab3-c800a16c8175
Nov 25 16:26:24 compute-0 nova_compute[254092]: 2025-11-25 16:26:24.065 254096 DEBUG oslo_concurrency.lockutils [None req-b6551a2f-c68e-47e0-b13c-1a95253958a4 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "25540ca1-3029-48b2-8ab3-c800a16c8175" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.275s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:26:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1136: 321 pgs: 321 active+clean; 159 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.8 MiB/s wr, 195 op/s
Nov 25 16:26:24 compute-0 ceph-mon[74985]: pgmap v1136: 321 pgs: 321 active+clean; 159 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.8 MiB/s wr, 195 op/s
Nov 25 16:26:25 compute-0 nova_compute[254092]: 2025-11-25 16:26:25.456 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764087970.4554572, 4b2c6795-15b5-424c-b7c5-4b695a348f41 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:26:25 compute-0 nova_compute[254092]: 2025-11-25 16:26:25.456 254096 INFO nova.compute.manager [-] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] VM Stopped (Lifecycle Event)
Nov 25 16:26:25 compute-0 nova_compute[254092]: 2025-11-25 16:26:25.483 254096 DEBUG nova.compute.manager [None req-0a44f5b5-ad17-4f00-91df-95b561184e57 - - - - - -] [instance: 4b2c6795-15b5-424c-b7c5-4b695a348f41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:26:25 compute-0 nova_compute[254092]: 2025-11-25 16:26:25.520 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764087970.5175343, e9e96b2e-62f4-4f02-96ec-5306f9e39ca8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:26:25 compute-0 nova_compute[254092]: 2025-11-25 16:26:25.520 254096 INFO nova.compute.manager [-] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] VM Stopped (Lifecycle Event)
Nov 25 16:26:25 compute-0 nova_compute[254092]: 2025-11-25 16:26:25.551 254096 DEBUG nova.compute.manager [None req-0e010ba8-300d-4e71-b4be-31cde3740ee5 - - - - - -] [instance: e9e96b2e-62f4-4f02-96ec-5306f9e39ca8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:26:25 compute-0 nova_compute[254092]: 2025-11-25 16:26:25.554 254096 DEBUG oslo_concurrency.lockutils [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Acquiring lock "e0ce39e6-663b-4ff2-84e5-98aa54955701" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:26:25 compute-0 nova_compute[254092]: 2025-11-25 16:26:25.555 254096 DEBUG oslo_concurrency.lockutils [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "e0ce39e6-663b-4ff2-84e5-98aa54955701" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:26:25 compute-0 nova_compute[254092]: 2025-11-25 16:26:25.555 254096 DEBUG oslo_concurrency.lockutils [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Acquiring lock "e0ce39e6-663b-4ff2-84e5-98aa54955701-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:26:25 compute-0 nova_compute[254092]: 2025-11-25 16:26:25.556 254096 DEBUG oslo_concurrency.lockutils [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "e0ce39e6-663b-4ff2-84e5-98aa54955701-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:26:25 compute-0 nova_compute[254092]: 2025-11-25 16:26:25.556 254096 DEBUG oslo_concurrency.lockutils [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "e0ce39e6-663b-4ff2-84e5-98aa54955701-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:26:25 compute-0 nova_compute[254092]: 2025-11-25 16:26:25.557 254096 INFO nova.compute.manager [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Terminating instance
Nov 25 16:26:25 compute-0 nova_compute[254092]: 2025-11-25 16:26:25.559 254096 DEBUG oslo_concurrency.lockutils [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Acquiring lock "refresh_cache-e0ce39e6-663b-4ff2-84e5-98aa54955701" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:26:25 compute-0 nova_compute[254092]: 2025-11-25 16:26:25.559 254096 DEBUG oslo_concurrency.lockutils [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Acquired lock "refresh_cache-e0ce39e6-663b-4ff2-84e5-98aa54955701" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:26:25 compute-0 nova_compute[254092]: 2025-11-25 16:26:25.559 254096 DEBUG nova.network.neutron [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:26:25 compute-0 nova_compute[254092]: 2025-11-25 16:26:25.561 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:26:26 compute-0 nova_compute[254092]: 2025-11-25 16:26:26.810 254096 DEBUG nova.network.neutron [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:26:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1137: 321 pgs: 321 active+clean; 121 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 254 op/s
Nov 25 16:26:26 compute-0 ceph-mon[74985]: pgmap v1137: 321 pgs: 321 active+clean; 121 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 254 op/s
Nov 25 16:26:27 compute-0 nova_compute[254092]: 2025-11-25 16:26:27.141 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:26:27 compute-0 nova_compute[254092]: 2025-11-25 16:26:27.152 254096 DEBUG nova.network.neutron [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:26:27 compute-0 nova_compute[254092]: 2025-11-25 16:26:27.166 254096 DEBUG oslo_concurrency.lockutils [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Releasing lock "refresh_cache-e0ce39e6-663b-4ff2-84e5-98aa54955701" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:26:27 compute-0 nova_compute[254092]: 2025-11-25 16:26:27.167 254096 DEBUG nova.compute.manager [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:26:27 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Nov 25 16:26:27 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 13.078s CPU time.
Nov 25 16:26:27 compute-0 systemd-machined[216343]: Machine qemu-7-instance-00000007 terminated.
Nov 25 16:26:27 compute-0 nova_compute[254092]: 2025-11-25 16:26:27.399 254096 INFO nova.virt.libvirt.driver [-] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Instance destroyed successfully.
Nov 25 16:26:27 compute-0 nova_compute[254092]: 2025-11-25 16:26:27.400 254096 DEBUG nova.objects.instance [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lazy-loading 'resources' on Instance uuid e0ce39e6-663b-4ff2-84e5-98aa54955701 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:26:27 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:26:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1138: 321 pgs: 321 active+clean; 121 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 163 op/s
Nov 25 16:26:28 compute-0 nova_compute[254092]: 2025-11-25 16:26:28.836 254096 INFO nova.virt.libvirt.driver [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Deleting instance files /var/lib/nova/instances/e0ce39e6-663b-4ff2-84e5-98aa54955701_del
Nov 25 16:26:28 compute-0 nova_compute[254092]: 2025-11-25 16:26:28.837 254096 INFO nova.virt.libvirt.driver [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Deletion of /var/lib/nova/instances/e0ce39e6-663b-4ff2-84e5-98aa54955701_del complete
Nov 25 16:26:28 compute-0 ceph-mon[74985]: pgmap v1138: 321 pgs: 321 active+clean; 121 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 163 op/s
Nov 25 16:26:28 compute-0 nova_compute[254092]: 2025-11-25 16:26:28.961 254096 INFO nova.compute.manager [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Took 1.79 seconds to destroy the instance on the hypervisor.
Nov 25 16:26:28 compute-0 nova_compute[254092]: 2025-11-25 16:26:28.961 254096 DEBUG oslo.service.loopingcall [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:26:28 compute-0 nova_compute[254092]: 2025-11-25 16:26:28.962 254096 DEBUG nova.compute.manager [-] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:26:28 compute-0 nova_compute[254092]: 2025-11-25 16:26:28.962 254096 DEBUG nova.network.neutron [-] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:26:29 compute-0 nova_compute[254092]: 2025-11-25 16:26:29.245 254096 DEBUG nova.network.neutron [-] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:26:29 compute-0 nova_compute[254092]: 2025-11-25 16:26:29.260 254096 DEBUG nova.network.neutron [-] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:26:29 compute-0 nova_compute[254092]: 2025-11-25 16:26:29.277 254096 INFO nova.compute.manager [-] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Took 0.31 seconds to deallocate network for instance.
Nov 25 16:26:29 compute-0 nova_compute[254092]: 2025-11-25 16:26:29.376 254096 DEBUG oslo_concurrency.lockutils [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:26:29 compute-0 nova_compute[254092]: 2025-11-25 16:26:29.376 254096 DEBUG oslo_concurrency.lockutils [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:26:29 compute-0 nova_compute[254092]: 2025-11-25 16:26:29.428 254096 DEBUG oslo_concurrency.processutils [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:26:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:26:29 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2649592809' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:26:29 compute-0 nova_compute[254092]: 2025-11-25 16:26:29.831 254096 DEBUG oslo_concurrency.processutils [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:26:29 compute-0 nova_compute[254092]: 2025-11-25 16:26:29.836 254096 DEBUG nova.compute.provider_tree [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:26:29 compute-0 nova_compute[254092]: 2025-11-25 16:26:29.856 254096 DEBUG nova.scheduler.client.report [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:26:30 compute-0 nova_compute[254092]: 2025-11-25 16:26:30.087 254096 DEBUG oslo_concurrency.lockutils [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:26:30 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2649592809' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:26:30 compute-0 nova_compute[254092]: 2025-11-25 16:26:30.574 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:26:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1139: 321 pgs: 321 active+clean; 70 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 189 op/s
Nov 25 16:26:30 compute-0 nova_compute[254092]: 2025-11-25 16:26:30.971 254096 INFO nova.scheduler.client.report [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Deleted allocations for instance e0ce39e6-663b-4ff2-84e5-98aa54955701
Nov 25 16:26:31 compute-0 nova_compute[254092]: 2025-11-25 16:26:31.056 254096 DEBUG oslo_concurrency.lockutils [None req-7d96aee5-fa5b-48e0-a3fb-a64fff32ae0d 820b9ab982364678b8e75b2c9cc4cfed 511bc9af98844c8995c27adbee1a3d4c - - default default] Lock "e0ce39e6-663b-4ff2-84e5-98aa54955701" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.501s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:26:31 compute-0 ceph-mon[74985]: pgmap v1139: 321 pgs: 321 active+clean; 70 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 189 op/s
Nov 25 16:26:32 compute-0 nova_compute[254092]: 2025-11-25 16:26:32.143 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:26:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:26:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1140: 321 pgs: 321 active+clean; 41 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 692 KiB/s rd, 967 KiB/s wr, 110 op/s
Nov 25 16:26:33 compute-0 ceph-mon[74985]: pgmap v1140: 321 pgs: 321 active+clean; 41 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 692 KiB/s rd, 967 KiB/s wr, 110 op/s
Nov 25 16:26:34 compute-0 nova_compute[254092]: 2025-11-25 16:26:34.259 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764087979.2575266, 25540ca1-3029-48b2-8ab3-c800a16c8175 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:26:34 compute-0 nova_compute[254092]: 2025-11-25 16:26:34.259 254096 INFO nova.compute.manager [-] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] VM Stopped (Lifecycle Event)
Nov 25 16:26:34 compute-0 nova_compute[254092]: 2025-11-25 16:26:34.279 254096 DEBUG nova.compute.manager [None req-8ba24401-7337-4a14-ab4a-f09f834090bc - - - - - -] [instance: 25540ca1-3029-48b2-8ab3-c800a16c8175] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:26:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1141: 321 pgs: 321 active+clean; 41 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 297 KiB/s rd, 100 KiB/s wr, 87 op/s
Nov 25 16:26:34 compute-0 ceph-mon[74985]: pgmap v1141: 321 pgs: 321 active+clean; 41 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 297 KiB/s rd, 100 KiB/s wr, 87 op/s
Nov 25 16:26:35 compute-0 nova_compute[254092]: 2025-11-25 16:26:35.575 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:26:35 compute-0 nova_compute[254092]: 2025-11-25 16:26:35.854 254096 DEBUG oslo_concurrency.lockutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Acquiring lock "cf8226e4-d68b-425a-8419-e273b162e9ee" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:26:35 compute-0 nova_compute[254092]: 2025-11-25 16:26:35.854 254096 DEBUG oslo_concurrency.lockutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Lock "cf8226e4-d68b-425a-8419-e273b162e9ee" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:26:35 compute-0 nova_compute[254092]: 2025-11-25 16:26:35.880 254096 DEBUG nova.compute.manager [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:26:36 compute-0 nova_compute[254092]: 2025-11-25 16:26:36.033 254096 DEBUG oslo_concurrency.lockutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:26:36 compute-0 nova_compute[254092]: 2025-11-25 16:26:36.034 254096 DEBUG oslo_concurrency.lockutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:26:36 compute-0 nova_compute[254092]: 2025-11-25 16:26:36.041 254096 DEBUG nova.virt.hardware [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:26:36 compute-0 nova_compute[254092]: 2025-11-25 16:26:36.041 254096 INFO nova.compute.claims [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:26:36 compute-0 nova_compute[254092]: 2025-11-25 16:26:36.197 254096 DEBUG oslo_concurrency.processutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:26:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:26:36 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/904941502' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:26:36 compute-0 nova_compute[254092]: 2025-11-25 16:26:36.657 254096 DEBUG oslo_concurrency.processutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:26:36 compute-0 nova_compute[254092]: 2025-11-25 16:26:36.662 254096 DEBUG nova.compute.provider_tree [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:26:36 compute-0 nova_compute[254092]: 2025-11-25 16:26:36.679 254096 DEBUG nova.scheduler.client.report [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:26:36 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/904941502' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:26:36 compute-0 nova_compute[254092]: 2025-11-25 16:26:36.699 254096 DEBUG oslo_concurrency.lockutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:26:36 compute-0 nova_compute[254092]: 2025-11-25 16:26:36.700 254096 DEBUG nova.compute.manager [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:26:36 compute-0 nova_compute[254092]: 2025-11-25 16:26:36.749 254096 DEBUG nova.compute.manager [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Nov 25 16:26:36 compute-0 nova_compute[254092]: 2025-11-25 16:26:36.766 254096 INFO nova.virt.libvirt.driver [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:26:36 compute-0 nova_compute[254092]: 2025-11-25 16:26:36.783 254096 DEBUG nova.compute.manager [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:26:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1142: 321 pgs: 321 active+clean; 41 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 297 KiB/s rd, 100 KiB/s wr, 87 op/s
Nov 25 16:26:36 compute-0 nova_compute[254092]: 2025-11-25 16:26:36.877 254096 DEBUG nova.compute.manager [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:26:36 compute-0 nova_compute[254092]: 2025-11-25 16:26:36.879 254096 DEBUG nova.virt.libvirt.driver [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:26:36 compute-0 nova_compute[254092]: 2025-11-25 16:26:36.879 254096 INFO nova.virt.libvirt.driver [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Creating image(s)
Nov 25 16:26:36 compute-0 nova_compute[254092]: 2025-11-25 16:26:36.904 254096 DEBUG nova.storage.rbd_utils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] rbd image cf8226e4-d68b-425a-8419-e273b162e9ee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:26:36 compute-0 nova_compute[254092]: 2025-11-25 16:26:36.930 254096 DEBUG nova.storage.rbd_utils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] rbd image cf8226e4-d68b-425a-8419-e273b162e9ee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:26:36 compute-0 nova_compute[254092]: 2025-11-25 16:26:36.949 254096 DEBUG nova.storage.rbd_utils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] rbd image cf8226e4-d68b-425a-8419-e273b162e9ee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:26:36 compute-0 nova_compute[254092]: 2025-11-25 16:26:36.952 254096 DEBUG oslo_concurrency.processutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:26:37 compute-0 nova_compute[254092]: 2025-11-25 16:26:37.024 254096 DEBUG oslo_concurrency.processutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:26:37 compute-0 nova_compute[254092]: 2025-11-25 16:26:37.025 254096 DEBUG oslo_concurrency.lockutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:26:37 compute-0 nova_compute[254092]: 2025-11-25 16:26:37.025 254096 DEBUG oslo_concurrency.lockutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:26:37 compute-0 nova_compute[254092]: 2025-11-25 16:26:37.026 254096 DEBUG oslo_concurrency.lockutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:26:37 compute-0 nova_compute[254092]: 2025-11-25 16:26:37.046 254096 DEBUG nova.storage.rbd_utils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] rbd image cf8226e4-d68b-425a-8419-e273b162e9ee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:26:37 compute-0 nova_compute[254092]: 2025-11-25 16:26:37.050 254096 DEBUG oslo_concurrency.processutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 cf8226e4-d68b-425a-8419-e273b162e9ee_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:26:37 compute-0 nova_compute[254092]: 2025-11-25 16:26:37.145 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:26:37 compute-0 nova_compute[254092]: 2025-11-25 16:26:37.289 254096 DEBUG oslo_concurrency.processutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 cf8226e4-d68b-425a-8419-e273b162e9ee_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.239s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:26:37 compute-0 nova_compute[254092]: 2025-11-25 16:26:37.344 254096 DEBUG nova.storage.rbd_utils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] resizing rbd image cf8226e4-d68b-425a-8419-e273b162e9ee_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:26:37 compute-0 nova_compute[254092]: 2025-11-25 16:26:37.432 254096 DEBUG nova.objects.instance [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Lazy-loading 'migration_context' on Instance uuid cf8226e4-d68b-425a-8419-e273b162e9ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:26:37 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:26:37 compute-0 nova_compute[254092]: 2025-11-25 16:26:37.451 254096 DEBUG nova.virt.libvirt.driver [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:26:37 compute-0 nova_compute[254092]: 2025-11-25 16:26:37.451 254096 DEBUG nova.virt.libvirt.driver [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Ensure instance console log exists: /var/lib/nova/instances/cf8226e4-d68b-425a-8419-e273b162e9ee/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:26:37 compute-0 nova_compute[254092]: 2025-11-25 16:26:37.452 254096 DEBUG oslo_concurrency.lockutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:26:37 compute-0 nova_compute[254092]: 2025-11-25 16:26:37.452 254096 DEBUG oslo_concurrency.lockutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:26:37 compute-0 nova_compute[254092]: 2025-11-25 16:26:37.452 254096 DEBUG oslo_concurrency.lockutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:26:37 compute-0 nova_compute[254092]: 2025-11-25 16:26:37.454 254096 DEBUG nova.virt.libvirt.driver [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:26:37 compute-0 nova_compute[254092]: 2025-11-25 16:26:37.459 254096 WARNING nova.virt.libvirt.driver [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:26:37 compute-0 nova_compute[254092]: 2025-11-25 16:26:37.463 254096 DEBUG nova.virt.libvirt.host [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:26:37 compute-0 nova_compute[254092]: 2025-11-25 16:26:37.463 254096 DEBUG nova.virt.libvirt.host [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:26:37 compute-0 nova_compute[254092]: 2025-11-25 16:26:37.466 254096 DEBUG nova.virt.libvirt.host [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:26:37 compute-0 nova_compute[254092]: 2025-11-25 16:26:37.466 254096 DEBUG nova.virt.libvirt.host [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:26:37 compute-0 nova_compute[254092]: 2025-11-25 16:26:37.466 254096 DEBUG nova.virt.libvirt.driver [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:26:37 compute-0 nova_compute[254092]: 2025-11-25 16:26:37.467 254096 DEBUG nova.virt.hardware [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:26:37 compute-0 nova_compute[254092]: 2025-11-25 16:26:37.467 254096 DEBUG nova.virt.hardware [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:26:37 compute-0 nova_compute[254092]: 2025-11-25 16:26:37.468 254096 DEBUG nova.virt.hardware [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:26:37 compute-0 nova_compute[254092]: 2025-11-25 16:26:37.468 254096 DEBUG nova.virt.hardware [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:26:37 compute-0 nova_compute[254092]: 2025-11-25 16:26:37.468 254096 DEBUG nova.virt.hardware [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:26:37 compute-0 nova_compute[254092]: 2025-11-25 16:26:37.468 254096 DEBUG nova.virt.hardware [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:26:37 compute-0 nova_compute[254092]: 2025-11-25 16:26:37.469 254096 DEBUG nova.virt.hardware [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:26:37 compute-0 nova_compute[254092]: 2025-11-25 16:26:37.469 254096 DEBUG nova.virt.hardware [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:26:37 compute-0 nova_compute[254092]: 2025-11-25 16:26:37.469 254096 DEBUG nova.virt.hardware [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:26:37 compute-0 nova_compute[254092]: 2025-11-25 16:26:37.469 254096 DEBUG nova.virt.hardware [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:26:37 compute-0 nova_compute[254092]: 2025-11-25 16:26:37.470 254096 DEBUG nova.virt.hardware [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:26:37 compute-0 nova_compute[254092]: 2025-11-25 16:26:37.473 254096 DEBUG oslo_concurrency.processutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:26:37 compute-0 ceph-mon[74985]: pgmap v1142: 321 pgs: 321 active+clean; 41 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 297 KiB/s rd, 100 KiB/s wr, 87 op/s
Nov 25 16:26:37 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:26:37 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/471529986' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:26:37 compute-0 nova_compute[254092]: 2025-11-25 16:26:37.970 254096 DEBUG oslo_concurrency.processutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:26:37 compute-0 nova_compute[254092]: 2025-11-25 16:26:37.994 254096 DEBUG nova.storage.rbd_utils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] rbd image cf8226e4-d68b-425a-8419-e273b162e9ee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:26:37 compute-0 nova_compute[254092]: 2025-11-25 16:26:37.998 254096 DEBUG oslo_concurrency.processutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:26:38 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:26:38 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4019693713' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:26:38 compute-0 nova_compute[254092]: 2025-11-25 16:26:38.432 254096 DEBUG oslo_concurrency.processutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:26:38 compute-0 nova_compute[254092]: 2025-11-25 16:26:38.434 254096 DEBUG nova.objects.instance [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Lazy-loading 'pci_devices' on Instance uuid cf8226e4-d68b-425a-8419-e273b162e9ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:26:38 compute-0 nova_compute[254092]: 2025-11-25 16:26:38.465 254096 DEBUG nova.virt.libvirt.driver [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:26:38 compute-0 nova_compute[254092]:   <uuid>cf8226e4-d68b-425a-8419-e273b162e9ee</uuid>
Nov 25 16:26:38 compute-0 nova_compute[254092]:   <name>instance-0000000a</name>
Nov 25 16:26:38 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:26:38 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:26:38 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:26:38 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:       <nova:name>tempest-ServerDiagnosticsV248Test-server-1206401827</nova:name>
Nov 25 16:26:38 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:26:37</nova:creationTime>
Nov 25 16:26:38 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:26:38 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:26:38 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:26:38 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:26:38 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:26:38 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:26:38 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:26:38 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:26:38 compute-0 nova_compute[254092]:         <nova:user uuid="ef9096d47e8b4fceb4fdb347f45e82ea">tempest-ServerDiagnosticsV248Test-1494605572-project-member</nova:user>
Nov 25 16:26:38 compute-0 nova_compute[254092]:         <nova:project uuid="3c8b74363ca84877a8f0a40f07822af8">tempest-ServerDiagnosticsV248Test-1494605572</nova:project>
Nov 25 16:26:38 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:26:38 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:       <nova:ports/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:26:38 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:26:38 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:26:38 compute-0 nova_compute[254092]:     <system>
Nov 25 16:26:38 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:26:38 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:26:38 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:26:38 compute-0 nova_compute[254092]:       <entry name="serial">cf8226e4-d68b-425a-8419-e273b162e9ee</entry>
Nov 25 16:26:38 compute-0 nova_compute[254092]:       <entry name="uuid">cf8226e4-d68b-425a-8419-e273b162e9ee</entry>
Nov 25 16:26:38 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     </system>
Nov 25 16:26:38 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:26:38 compute-0 nova_compute[254092]:   <os>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:   </os>
Nov 25 16:26:38 compute-0 nova_compute[254092]:   <features>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:   </features>
Nov 25 16:26:38 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:26:38 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:26:38 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:26:38 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:26:38 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:26:38 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/cf8226e4-d68b-425a-8419-e273b162e9ee_disk">
Nov 25 16:26:38 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:       </source>
Nov 25 16:26:38 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:26:38 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:26:38 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:26:38 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/cf8226e4-d68b-425a-8419-e273b162e9ee_disk.config">
Nov 25 16:26:38 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:       </source>
Nov 25 16:26:38 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:26:38 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:26:38 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:26:38 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/cf8226e4-d68b-425a-8419-e273b162e9ee/console.log" append="off"/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     <video>
Nov 25 16:26:38 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     </video>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:26:38 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:26:38 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:26:38 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:26:38 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:26:38 compute-0 nova_compute[254092]: </domain>
Nov 25 16:26:38 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:26:38 compute-0 nova_compute[254092]: 2025-11-25 16:26:38.537 254096 DEBUG nova.virt.libvirt.driver [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:26:38 compute-0 nova_compute[254092]: 2025-11-25 16:26:38.538 254096 DEBUG nova.virt.libvirt.driver [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:26:38 compute-0 nova_compute[254092]: 2025-11-25 16:26:38.538 254096 INFO nova.virt.libvirt.driver [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Using config drive
Nov 25 16:26:38 compute-0 nova_compute[254092]: 2025-11-25 16:26:38.558 254096 DEBUG nova.storage.rbd_utils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] rbd image cf8226e4-d68b-425a-8419-e273b162e9ee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:26:38 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/471529986' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:26:38 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4019693713' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:26:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1143: 321 pgs: 321 active+clean; 41 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 12 KiB/s wr, 28 op/s
Nov 25 16:26:38 compute-0 nova_compute[254092]: 2025-11-25 16:26:38.943 254096 INFO nova.virt.libvirt.driver [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Creating config drive at /var/lib/nova/instances/cf8226e4-d68b-425a-8419-e273b162e9ee/disk.config
Nov 25 16:26:38 compute-0 nova_compute[254092]: 2025-11-25 16:26:38.948 254096 DEBUG oslo_concurrency.processutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cf8226e4-d68b-425a-8419-e273b162e9ee/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6kyk9p23 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:26:39 compute-0 nova_compute[254092]: 2025-11-25 16:26:39.080 254096 DEBUG oslo_concurrency.processutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cf8226e4-d68b-425a-8419-e273b162e9ee/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6kyk9p23" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:26:39 compute-0 nova_compute[254092]: 2025-11-25 16:26:39.105 254096 DEBUG nova.storage.rbd_utils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] rbd image cf8226e4-d68b-425a-8419-e273b162e9ee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:26:39 compute-0 nova_compute[254092]: 2025-11-25 16:26:39.110 254096 DEBUG oslo_concurrency.processutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cf8226e4-d68b-425a-8419-e273b162e9ee/disk.config cf8226e4-d68b-425a-8419-e273b162e9ee_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:26:39 compute-0 nova_compute[254092]: 2025-11-25 16:26:39.271 254096 DEBUG oslo_concurrency.processutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cf8226e4-d68b-425a-8419-e273b162e9ee/disk.config cf8226e4-d68b-425a-8419-e273b162e9ee_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:26:39 compute-0 nova_compute[254092]: 2025-11-25 16:26:39.272 254096 INFO nova.virt.libvirt.driver [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Deleting local config drive /var/lib/nova/instances/cf8226e4-d68b-425a-8419-e273b162e9ee/disk.config because it was imported into RBD.
Nov 25 16:26:39 compute-0 systemd-machined[216343]: New machine qemu-10-instance-0000000a.
Nov 25 16:26:39 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-0000000a.
Nov 25 16:26:39 compute-0 ceph-mon[74985]: pgmap v1143: 321 pgs: 321 active+clean; 41 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 12 KiB/s wr, 28 op/s
Nov 25 16:26:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:26:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:26:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:26:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:26:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:26:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:26:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:26:40
Nov 25 16:26:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:26:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:26:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', 'vms', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'images', 'volumes', 'backups']
Nov 25 16:26:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:26:40 compute-0 nova_compute[254092]: 2025-11-25 16:26:40.112 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088000.112107, cf8226e4-d68b-425a-8419-e273b162e9ee => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:26:40 compute-0 nova_compute[254092]: 2025-11-25 16:26:40.114 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] VM Resumed (Lifecycle Event)
Nov 25 16:26:40 compute-0 nova_compute[254092]: 2025-11-25 16:26:40.116 254096 DEBUG nova.compute.manager [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:26:40 compute-0 nova_compute[254092]: 2025-11-25 16:26:40.116 254096 DEBUG nova.virt.libvirt.driver [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:26:40 compute-0 nova_compute[254092]: 2025-11-25 16:26:40.119 254096 INFO nova.virt.libvirt.driver [-] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Instance spawned successfully.
Nov 25 16:26:40 compute-0 nova_compute[254092]: 2025-11-25 16:26:40.120 254096 DEBUG nova.virt.libvirt.driver [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:26:40 compute-0 nova_compute[254092]: 2025-11-25 16:26:40.150 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:26:40 compute-0 nova_compute[254092]: 2025-11-25 16:26:40.154 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:26:40 compute-0 nova_compute[254092]: 2025-11-25 16:26:40.157 254096 DEBUG nova.virt.libvirt.driver [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:26:40 compute-0 nova_compute[254092]: 2025-11-25 16:26:40.158 254096 DEBUG nova.virt.libvirt.driver [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:26:40 compute-0 nova_compute[254092]: 2025-11-25 16:26:40.158 254096 DEBUG nova.virt.libvirt.driver [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:26:40 compute-0 nova_compute[254092]: 2025-11-25 16:26:40.158 254096 DEBUG nova.virt.libvirt.driver [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:26:40 compute-0 nova_compute[254092]: 2025-11-25 16:26:40.159 254096 DEBUG nova.virt.libvirt.driver [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:26:40 compute-0 nova_compute[254092]: 2025-11-25 16:26:40.159 254096 DEBUG nova.virt.libvirt.driver [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:26:40 compute-0 nova_compute[254092]: 2025-11-25 16:26:40.196 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:26:40 compute-0 nova_compute[254092]: 2025-11-25 16:26:40.196 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088000.1132455, cf8226e4-d68b-425a-8419-e273b162e9ee => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:26:40 compute-0 nova_compute[254092]: 2025-11-25 16:26:40.197 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] VM Started (Lifecycle Event)
Nov 25 16:26:40 compute-0 nova_compute[254092]: 2025-11-25 16:26:40.224 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:26:40 compute-0 nova_compute[254092]: 2025-11-25 16:26:40.227 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:26:40 compute-0 nova_compute[254092]: 2025-11-25 16:26:40.240 254096 INFO nova.compute.manager [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Took 3.36 seconds to spawn the instance on the hypervisor.
Nov 25 16:26:40 compute-0 nova_compute[254092]: 2025-11-25 16:26:40.241 254096 DEBUG nova.compute.manager [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:26:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:26:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:26:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:26:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:26:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:26:40 compute-0 nova_compute[254092]: 2025-11-25 16:26:40.253 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:26:40 compute-0 nova_compute[254092]: 2025-11-25 16:26:40.300 254096 INFO nova.compute.manager [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Took 4.30 seconds to build instance.
Nov 25 16:26:40 compute-0 nova_compute[254092]: 2025-11-25 16:26:40.319 254096 DEBUG oslo_concurrency.lockutils [None req-da7014b9-f7fd-43d2-a5d0-d587c5df702f ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Lock "cf8226e4-d68b-425a-8419-e273b162e9ee" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.465s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:26:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:26:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:26:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:26:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:26:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:26:40 compute-0 nova_compute[254092]: 2025-11-25 16:26:40.578 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:26:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1144: 321 pgs: 321 active+clean; 81 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 1.7 MiB/s wr, 57 op/s
Nov 25 16:26:41 compute-0 ceph-mon[74985]: pgmap v1144: 321 pgs: 321 active+clean; 81 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 1.7 MiB/s wr, 57 op/s
Nov 25 16:26:41 compute-0 nova_compute[254092]: 2025-11-25 16:26:41.349 254096 DEBUG nova.compute.manager [None req-501c19a4-dd2d-4f7c-a52d-05d7197f4554 785c35f34b5848c589bb5d9b05f39b6a 6290220a7fce4e0e92afb6b7d48fbc5e - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:26:41 compute-0 nova_compute[254092]: 2025-11-25 16:26:41.351 254096 INFO nova.compute.manager [None req-501c19a4-dd2d-4f7c-a52d-05d7197f4554 785c35f34b5848c589bb5d9b05f39b6a 6290220a7fce4e0e92afb6b7d48fbc5e - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Retrieving diagnostics
Nov 25 16:26:42 compute-0 nova_compute[254092]: 2025-11-25 16:26:42.147 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:26:42 compute-0 nova_compute[254092]: 2025-11-25 16:26:42.397 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764087987.395677, e0ce39e6-663b-4ff2-84e5-98aa54955701 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:26:42 compute-0 nova_compute[254092]: 2025-11-25 16:26:42.398 254096 INFO nova.compute.manager [-] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] VM Stopped (Lifecycle Event)
Nov 25 16:26:42 compute-0 nova_compute[254092]: 2025-11-25 16:26:42.421 254096 DEBUG nova.compute.manager [None req-fd08d703-3f26-4ec9-a35a-49f8cd5ccc9e - - - - - -] [instance: e0ce39e6-663b-4ff2-84e5-98aa54955701] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:26:42 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:26:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1145: 321 pgs: 321 active+clean; 88 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Nov 25 16:26:42 compute-0 ceph-mon[74985]: pgmap v1145: 321 pgs: 321 active+clean; 88 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Nov 25 16:26:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1146: 321 pgs: 321 active+clean; 88 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 25 16:26:44 compute-0 nova_compute[254092]: 2025-11-25 16:26:44.923 254096 DEBUG oslo_concurrency.lockutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Acquiring lock "138942ff-b720-4101-8dcf-38958751745b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:26:44 compute-0 nova_compute[254092]: 2025-11-25 16:26:44.924 254096 DEBUG oslo_concurrency.lockutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Lock "138942ff-b720-4101-8dcf-38958751745b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:26:44 compute-0 nova_compute[254092]: 2025-11-25 16:26:44.997 254096 DEBUG nova.compute.manager [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:26:45 compute-0 ceph-mon[74985]: pgmap v1146: 321 pgs: 321 active+clean; 88 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 25 16:26:45 compute-0 nova_compute[254092]: 2025-11-25 16:26:45.102 254096 DEBUG oslo_concurrency.lockutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:26:45 compute-0 nova_compute[254092]: 2025-11-25 16:26:45.103 254096 DEBUG oslo_concurrency.lockutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:26:45 compute-0 nova_compute[254092]: 2025-11-25 16:26:45.110 254096 DEBUG nova.virt.hardware [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:26:45 compute-0 nova_compute[254092]: 2025-11-25 16:26:45.111 254096 INFO nova.compute.claims [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:26:45 compute-0 nova_compute[254092]: 2025-11-25 16:26:45.336 254096 DEBUG oslo_concurrency.processutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:26:45 compute-0 nova_compute[254092]: 2025-11-25 16:26:45.580 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:26:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:26:45 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4085521449' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:26:45 compute-0 nova_compute[254092]: 2025-11-25 16:26:45.821 254096 DEBUG oslo_concurrency.processutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:26:45 compute-0 nova_compute[254092]: 2025-11-25 16:26:45.827 254096 DEBUG nova.compute.provider_tree [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:26:45 compute-0 nova_compute[254092]: 2025-11-25 16:26:45.847 254096 DEBUG nova.scheduler.client.report [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:26:45 compute-0 nova_compute[254092]: 2025-11-25 16:26:45.930 254096 DEBUG oslo_concurrency.lockutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.827s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:26:45 compute-0 nova_compute[254092]: 2025-11-25 16:26:45.932 254096 DEBUG nova.compute.manager [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:26:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Nov 25 16:26:46 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4085521449' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:26:46 compute-0 nova_compute[254092]: 2025-11-25 16:26:46.118 254096 DEBUG nova.compute.manager [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:26:46 compute-0 nova_compute[254092]: 2025-11-25 16:26:46.119 254096 DEBUG nova.network.neutron [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:26:46 compute-0 nova_compute[254092]: 2025-11-25 16:26:46.465 254096 DEBUG nova.policy [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dda8ef18e79e4220b420023d65ccb78a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '33fbb668df82403b9f379e45132213fd', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:26:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Nov 25 16:26:46 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Nov 25 16:26:46 compute-0 nova_compute[254092]: 2025-11-25 16:26:46.530 254096 INFO nova.virt.libvirt.driver [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:26:46 compute-0 nova_compute[254092]: 2025-11-25 16:26:46.551 254096 DEBUG nova.compute.manager [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:26:46 compute-0 nova_compute[254092]: 2025-11-25 16:26:46.652 254096 DEBUG nova.compute.manager [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:26:46 compute-0 nova_compute[254092]: 2025-11-25 16:26:46.653 254096 DEBUG nova.virt.libvirt.driver [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:26:46 compute-0 nova_compute[254092]: 2025-11-25 16:26:46.654 254096 INFO nova.virt.libvirt.driver [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Creating image(s)
Nov 25 16:26:46 compute-0 nova_compute[254092]: 2025-11-25 16:26:46.683 254096 DEBUG nova.storage.rbd_utils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] rbd image 138942ff-b720-4101-8dcf-38958751745b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:26:46 compute-0 nova_compute[254092]: 2025-11-25 16:26:46.711 254096 DEBUG nova.storage.rbd_utils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] rbd image 138942ff-b720-4101-8dcf-38958751745b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:26:46 compute-0 nova_compute[254092]: 2025-11-25 16:26:46.736 254096 DEBUG nova.storage.rbd_utils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] rbd image 138942ff-b720-4101-8dcf-38958751745b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:26:46 compute-0 nova_compute[254092]: 2025-11-25 16:26:46.740 254096 DEBUG oslo_concurrency.processutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:26:46 compute-0 nova_compute[254092]: 2025-11-25 16:26:46.798 254096 DEBUG oslo_concurrency.processutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:26:46 compute-0 nova_compute[254092]: 2025-11-25 16:26:46.799 254096 DEBUG oslo_concurrency.lockutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:26:46 compute-0 nova_compute[254092]: 2025-11-25 16:26:46.800 254096 DEBUG oslo_concurrency.lockutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:26:46 compute-0 nova_compute[254092]: 2025-11-25 16:26:46.800 254096 DEBUG oslo_concurrency.lockutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:26:46 compute-0 nova_compute[254092]: 2025-11-25 16:26:46.823 254096 DEBUG nova.storage.rbd_utils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] rbd image 138942ff-b720-4101-8dcf-38958751745b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:26:46 compute-0 nova_compute[254092]: 2025-11-25 16:26:46.828 254096 DEBUG oslo_concurrency.processutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 138942ff-b720-4101-8dcf-38958751745b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:26:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1148: 321 pgs: 321 active+clean; 88 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 124 op/s
Nov 25 16:26:47 compute-0 nova_compute[254092]: 2025-11-25 16:26:47.144 254096 DEBUG oslo_concurrency.processutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 138942ff-b720-4101-8dcf-38958751745b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.316s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:26:47 compute-0 nova_compute[254092]: 2025-11-25 16:26:47.213 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:26:47 compute-0 nova_compute[254092]: 2025-11-25 16:26:47.240 254096 DEBUG nova.network.neutron [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Successfully created port: f445c9f8-c211-4af8-a66d-21cacc81fdc5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:26:47 compute-0 nova_compute[254092]: 2025-11-25 16:26:47.246 254096 DEBUG nova.storage.rbd_utils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] resizing rbd image 138942ff-b720-4101-8dcf-38958751745b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:26:47 compute-0 nova_compute[254092]: 2025-11-25 16:26:47.332 254096 DEBUG nova.objects.instance [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Lazy-loading 'migration_context' on Instance uuid 138942ff-b720-4101-8dcf-38958751745b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:26:47 compute-0 nova_compute[254092]: 2025-11-25 16:26:47.344 254096 DEBUG nova.virt.libvirt.driver [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:26:47 compute-0 nova_compute[254092]: 2025-11-25 16:26:47.344 254096 DEBUG nova.virt.libvirt.driver [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Ensure instance console log exists: /var/lib/nova/instances/138942ff-b720-4101-8dcf-38958751745b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:26:47 compute-0 nova_compute[254092]: 2025-11-25 16:26:47.345 254096 DEBUG oslo_concurrency.lockutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:26:47 compute-0 nova_compute[254092]: 2025-11-25 16:26:47.345 254096 DEBUG oslo_concurrency.lockutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:26:47 compute-0 nova_compute[254092]: 2025-11-25 16:26:47.346 254096 DEBUG oslo_concurrency.lockutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:26:47 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:26:47 compute-0 ceph-mon[74985]: osdmap e134: 3 total, 3 up, 3 in
Nov 25 16:26:47 compute-0 ceph-mon[74985]: pgmap v1148: 321 pgs: 321 active+clean; 88 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 124 op/s
Nov 25 16:26:47 compute-0 podman[275143]: 2025-11-25 16:26:47.645903056 +0000 UTC m=+0.060956720 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 16:26:47 compute-0 podman[275142]: 2025-11-25 16:26:47.653428651 +0000 UTC m=+0.068431864 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd)
Nov 25 16:26:47 compute-0 podman[275144]: 2025-11-25 16:26:47.687791416 +0000 UTC m=+0.089002543 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller)
Nov 25 16:26:47 compute-0 nova_compute[254092]: 2025-11-25 16:26:47.900 254096 DEBUG nova.network.neutron [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Successfully updated port: f445c9f8-c211-4af8-a66d-21cacc81fdc5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:26:47 compute-0 nova_compute[254092]: 2025-11-25 16:26:47.915 254096 DEBUG oslo_concurrency.lockutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Acquiring lock "refresh_cache-138942ff-b720-4101-8dcf-38958751745b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:26:47 compute-0 nova_compute[254092]: 2025-11-25 16:26:47.916 254096 DEBUG oslo_concurrency.lockutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Acquired lock "refresh_cache-138942ff-b720-4101-8dcf-38958751745b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:26:47 compute-0 nova_compute[254092]: 2025-11-25 16:26:47.917 254096 DEBUG nova.network.neutron [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:26:48 compute-0 nova_compute[254092]: 2025-11-25 16:26:48.002 254096 DEBUG nova.compute.manager [req-c8eb34c9-159d-42de-8718-e824baefb791 req-c34c4ed0-b441-419e-b1e9-350353a5b8bf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Received event network-changed-f445c9f8-c211-4af8-a66d-21cacc81fdc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:26:48 compute-0 nova_compute[254092]: 2025-11-25 16:26:48.003 254096 DEBUG nova.compute.manager [req-c8eb34c9-159d-42de-8718-e824baefb791 req-c34c4ed0-b441-419e-b1e9-350353a5b8bf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Refreshing instance network info cache due to event network-changed-f445c9f8-c211-4af8-a66d-21cacc81fdc5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:26:48 compute-0 nova_compute[254092]: 2025-11-25 16:26:48.003 254096 DEBUG oslo_concurrency.lockutils [req-c8eb34c9-159d-42de-8718-e824baefb791 req-c34c4ed0-b441-419e-b1e9-350353a5b8bf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-138942ff-b720-4101-8dcf-38958751745b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:26:48 compute-0 nova_compute[254092]: 2025-11-25 16:26:48.071 254096 DEBUG nova.network.neutron [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:26:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Nov 25 16:26:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Nov 25 16:26:48 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Nov 25 16:26:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1150: 321 pgs: 321 active+clean; 88 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 221 KiB/s wr, 112 op/s
Nov 25 16:26:48 compute-0 nova_compute[254092]: 2025-11-25 16:26:48.967 254096 DEBUG nova.network.neutron [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Updating instance_info_cache with network_info: [{"id": "f445c9f8-c211-4af8-a66d-21cacc81fdc5", "address": "fa:16:3e:f7:94:8b", "network": {"id": "12698a0a-7c9a-41c0-97e4-92c265b0a639", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1381578390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33fbb668df82403b9f379e45132213fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf445c9f8-c2", "ovs_interfaceid": "f445c9f8-c211-4af8-a66d-21cacc81fdc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:26:48 compute-0 nova_compute[254092]: 2025-11-25 16:26:48.990 254096 DEBUG oslo_concurrency.lockutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Releasing lock "refresh_cache-138942ff-b720-4101-8dcf-38958751745b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:26:48 compute-0 nova_compute[254092]: 2025-11-25 16:26:48.990 254096 DEBUG nova.compute.manager [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Instance network_info: |[{"id": "f445c9f8-c211-4af8-a66d-21cacc81fdc5", "address": "fa:16:3e:f7:94:8b", "network": {"id": "12698a0a-7c9a-41c0-97e4-92c265b0a639", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1381578390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33fbb668df82403b9f379e45132213fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf445c9f8-c2", "ovs_interfaceid": "f445c9f8-c211-4af8-a66d-21cacc81fdc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:26:48 compute-0 nova_compute[254092]: 2025-11-25 16:26:48.991 254096 DEBUG oslo_concurrency.lockutils [req-c8eb34c9-159d-42de-8718-e824baefb791 req-c34c4ed0-b441-419e-b1e9-350353a5b8bf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-138942ff-b720-4101-8dcf-38958751745b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:26:48 compute-0 nova_compute[254092]: 2025-11-25 16:26:48.992 254096 DEBUG nova.network.neutron [req-c8eb34c9-159d-42de-8718-e824baefb791 req-c34c4ed0-b441-419e-b1e9-350353a5b8bf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Refreshing network info cache for port f445c9f8-c211-4af8-a66d-21cacc81fdc5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:26:48 compute-0 nova_compute[254092]: 2025-11-25 16:26:48.995 254096 DEBUG nova.virt.libvirt.driver [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Start _get_guest_xml network_info=[{"id": "f445c9f8-c211-4af8-a66d-21cacc81fdc5", "address": "fa:16:3e:f7:94:8b", "network": {"id": "12698a0a-7c9a-41c0-97e4-92c265b0a639", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1381578390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33fbb668df82403b9f379e45132213fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf445c9f8-c2", "ovs_interfaceid": "f445c9f8-c211-4af8-a66d-21cacc81fdc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:26:48 compute-0 nova_compute[254092]: 2025-11-25 16:26:48.999 254096 WARNING nova.virt.libvirt.driver [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.004 254096 DEBUG nova.virt.libvirt.host [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.004 254096 DEBUG nova.virt.libvirt.host [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.009 254096 DEBUG nova.virt.libvirt.host [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.010 254096 DEBUG nova.virt.libvirt.host [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.010 254096 DEBUG nova.virt.libvirt.driver [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.011 254096 DEBUG nova.virt.hardware [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.011 254096 DEBUG nova.virt.hardware [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.011 254096 DEBUG nova.virt.hardware [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.012 254096 DEBUG nova.virt.hardware [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.012 254096 DEBUG nova.virt.hardware [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.012 254096 DEBUG nova.virt.hardware [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.012 254096 DEBUG nova.virt.hardware [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.013 254096 DEBUG nova.virt.hardware [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.013 254096 DEBUG nova.virt.hardware [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.013 254096 DEBUG nova.virt.hardware [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.014 254096 DEBUG nova.virt.hardware [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.016 254096 DEBUG oslo_concurrency.processutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:26:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:26:49 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2867437894' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.464 254096 DEBUG oslo_concurrency.processutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.484 254096 DEBUG nova.storage.rbd_utils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] rbd image 138942ff-b720-4101-8dcf-38958751745b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.488 254096 DEBUG oslo_concurrency.processutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.506 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:26:49 compute-0 ceph-mon[74985]: osdmap e135: 3 total, 3 up, 3 in
Nov 25 16:26:49 compute-0 ceph-mon[74985]: pgmap v1150: 321 pgs: 321 active+clean; 88 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 221 KiB/s wr, 112 op/s
Nov 25 16:26:49 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2867437894' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.533 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.534 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.534 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.534 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.535 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:26:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:26:49 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3188071052' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.947 254096 DEBUG oslo_concurrency.processutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.950 254096 DEBUG nova.virt.libvirt.vif [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:26:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-964763500',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-964763500',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-964763500',id=11,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='33fbb668df82403b9f379e45132213fd',ramdisk_id='',reservation_id='r-7p1lpp0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNegativeTes
tJSON-1669688315',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-1669688315-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:26:46Z,user_data=None,user_id='dda8ef18e79e4220b420023d65ccb78a',uuid=138942ff-b720-4101-8dcf-38958751745b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f445c9f8-c211-4af8-a66d-21cacc81fdc5", "address": "fa:16:3e:f7:94:8b", "network": {"id": "12698a0a-7c9a-41c0-97e4-92c265b0a639", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1381578390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33fbb668df82403b9f379e45132213fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf445c9f8-c2", "ovs_interfaceid": "f445c9f8-c211-4af8-a66d-21cacc81fdc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.950 254096 DEBUG nova.network.os_vif_util [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Converting VIF {"id": "f445c9f8-c211-4af8-a66d-21cacc81fdc5", "address": "fa:16:3e:f7:94:8b", "network": {"id": "12698a0a-7c9a-41c0-97e4-92c265b0a639", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1381578390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33fbb668df82403b9f379e45132213fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf445c9f8-c2", "ovs_interfaceid": "f445c9f8-c211-4af8-a66d-21cacc81fdc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.952 254096 DEBUG nova.network.os_vif_util [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:94:8b,bridge_name='br-int',has_traffic_filtering=True,id=f445c9f8-c211-4af8-a66d-21cacc81fdc5,network=Network(12698a0a-7c9a-41c0-97e4-92c265b0a639),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf445c9f8-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.954 254096 DEBUG nova.objects.instance [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Lazy-loading 'pci_devices' on Instance uuid 138942ff-b720-4101-8dcf-38958751745b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.969 254096 DEBUG nova.virt.libvirt.driver [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:26:49 compute-0 nova_compute[254092]:   <uuid>138942ff-b720-4101-8dcf-38958751745b</uuid>
Nov 25 16:26:49 compute-0 nova_compute[254092]:   <name>instance-0000000b</name>
Nov 25 16:26:49 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:26:49 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:26:49 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:26:49 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:       <nova:name>tempest-FloatingIPsAssociationNegativeTestJSON-server-964763500</nova:name>
Nov 25 16:26:49 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:26:49</nova:creationTime>
Nov 25 16:26:49 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:26:49 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:26:49 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:26:49 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:26:49 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:26:49 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:26:49 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:26:49 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:26:49 compute-0 nova_compute[254092]:         <nova:user uuid="dda8ef18e79e4220b420023d65ccb78a">tempest-FloatingIPsAssociationNegativeTestJSON-1669688315-project-member</nova:user>
Nov 25 16:26:49 compute-0 nova_compute[254092]:         <nova:project uuid="33fbb668df82403b9f379e45132213fd">tempest-FloatingIPsAssociationNegativeTestJSON-1669688315</nova:project>
Nov 25 16:26:49 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:26:49 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:26:49 compute-0 nova_compute[254092]:         <nova:port uuid="f445c9f8-c211-4af8-a66d-21cacc81fdc5">
Nov 25 16:26:49 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:26:49 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:26:49 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:26:49 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:26:49 compute-0 nova_compute[254092]:     <system>
Nov 25 16:26:49 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:26:49 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:26:49 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:26:49 compute-0 nova_compute[254092]:       <entry name="serial">138942ff-b720-4101-8dcf-38958751745b</entry>
Nov 25 16:26:49 compute-0 nova_compute[254092]:       <entry name="uuid">138942ff-b720-4101-8dcf-38958751745b</entry>
Nov 25 16:26:49 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     </system>
Nov 25 16:26:49 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:26:49 compute-0 nova_compute[254092]:   <os>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:   </os>
Nov 25 16:26:49 compute-0 nova_compute[254092]:   <features>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:   </features>
Nov 25 16:26:49 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:26:49 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:26:49 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:26:49 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:26:49 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:26:49 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/138942ff-b720-4101-8dcf-38958751745b_disk">
Nov 25 16:26:49 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:       </source>
Nov 25 16:26:49 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:26:49 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:26:49 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:26:49 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/138942ff-b720-4101-8dcf-38958751745b_disk.config">
Nov 25 16:26:49 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:       </source>
Nov 25 16:26:49 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:26:49 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:26:49 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:26:49 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:f7:94:8b"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:       <target dev="tapf445c9f8-c2"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:26:49 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/138942ff-b720-4101-8dcf-38958751745b/console.log" append="off"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     <video>
Nov 25 16:26:49 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     </video>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:26:49 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:26:49 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:26:49 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:26:49 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:26:49 compute-0 nova_compute[254092]: </domain>
Nov 25 16:26:49 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:26:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:26:49 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2615863893' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.973 254096 DEBUG nova.compute.manager [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Preparing to wait for external event network-vif-plugged-f445c9f8-c211-4af8-a66d-21cacc81fdc5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.974 254096 DEBUG oslo_concurrency.lockutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Acquiring lock "138942ff-b720-4101-8dcf-38958751745b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.974 254096 DEBUG oslo_concurrency.lockutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Lock "138942ff-b720-4101-8dcf-38958751745b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.974 254096 DEBUG oslo_concurrency.lockutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Lock "138942ff-b720-4101-8dcf-38958751745b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.975 254096 DEBUG nova.virt.libvirt.vif [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:26:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-964763500',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-964763500',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-964763500',id=11,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='33fbb668df82403b9f379e45132213fd',ramdisk_id='',reservation_id='r-7p1lpp0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationN
egativeTestJSON-1669688315',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-1669688315-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:26:46Z,user_data=None,user_id='dda8ef18e79e4220b420023d65ccb78a',uuid=138942ff-b720-4101-8dcf-38958751745b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f445c9f8-c211-4af8-a66d-21cacc81fdc5", "address": "fa:16:3e:f7:94:8b", "network": {"id": "12698a0a-7c9a-41c0-97e4-92c265b0a639", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1381578390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33fbb668df82403b9f379e45132213fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf445c9f8-c2", "ovs_interfaceid": "f445c9f8-c211-4af8-a66d-21cacc81fdc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.975 254096 DEBUG nova.network.os_vif_util [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Converting VIF {"id": "f445c9f8-c211-4af8-a66d-21cacc81fdc5", "address": "fa:16:3e:f7:94:8b", "network": {"id": "12698a0a-7c9a-41c0-97e4-92c265b0a639", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1381578390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33fbb668df82403b9f379e45132213fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf445c9f8-c2", "ovs_interfaceid": "f445c9f8-c211-4af8-a66d-21cacc81fdc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.975 254096 DEBUG nova.network.os_vif_util [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:94:8b,bridge_name='br-int',has_traffic_filtering=True,id=f445c9f8-c211-4af8-a66d-21cacc81fdc5,network=Network(12698a0a-7c9a-41c0-97e4-92c265b0a639),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf445c9f8-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.976 254096 DEBUG os_vif [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:94:8b,bridge_name='br-int',has_traffic_filtering=True,id=f445c9f8-c211-4af8-a66d-21cacc81fdc5,network=Network(12698a0a-7c9a-41c0-97e4-92c265b0a639),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf445c9f8-c2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.976 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.977 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.977 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.980 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.980 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf445c9f8-c2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.980 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf445c9f8-c2, col_values=(('external_ids', {'iface-id': 'f445c9f8-c211-4af8-a66d-21cacc81fdc5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f7:94:8b', 'vm-uuid': '138942ff-b720-4101-8dcf-38958751745b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.982 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:26:49 compute-0 NetworkManager[48891]: <info>  [1764088009.9830] manager: (tapf445c9f8-c2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.987 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.989 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.990 254096 INFO os_vif [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:94:8b,bridge_name='br-int',has_traffic_filtering=True,id=f445c9f8-c211-4af8-a66d-21cacc81fdc5,network=Network(12698a0a-7c9a-41c0-97e4-92c265b0a639),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf445c9f8-c2')
Nov 25 16:26:49 compute-0 nova_compute[254092]: 2025-11-25 16:26:49.992 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:26:50 compute-0 nova_compute[254092]: 2025-11-25 16:26:50.148 254096 DEBUG nova.virt.libvirt.driver [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:26:50 compute-0 nova_compute[254092]: 2025-11-25 16:26:50.148 254096 DEBUG nova.virt.libvirt.driver [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:26:50 compute-0 nova_compute[254092]: 2025-11-25 16:26:50.149 254096 DEBUG nova.virt.libvirt.driver [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] No VIF found with MAC fa:16:3e:f7:94:8b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:26:50 compute-0 nova_compute[254092]: 2025-11-25 16:26:50.149 254096 INFO nova.virt.libvirt.driver [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Using config drive
Nov 25 16:26:50 compute-0 nova_compute[254092]: 2025-11-25 16:26:50.165 254096 DEBUG nova.storage.rbd_utils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] rbd image 138942ff-b720-4101-8dcf-38958751745b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:26:50 compute-0 nova_compute[254092]: 2025-11-25 16:26:50.196 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:26:50 compute-0 nova_compute[254092]: 2025-11-25 16:26:50.197 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:26:50 compute-0 nova_compute[254092]: 2025-11-25 16:26:50.202 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:26:50 compute-0 nova_compute[254092]: 2025-11-25 16:26:50.202 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:26:50 compute-0 nova_compute[254092]: 2025-11-25 16:26:50.368 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:26:50 compute-0 nova_compute[254092]: 2025-11-25 16:26:50.368 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4547MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 16:26:50 compute-0 nova_compute[254092]: 2025-11-25 16:26:50.369 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:26:50 compute-0 nova_compute[254092]: 2025-11-25 16:26:50.369 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:26:50 compute-0 nova_compute[254092]: 2025-11-25 16:26:50.456 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance cf8226e4-d68b-425a-8419-e273b162e9ee actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:26:50 compute-0 nova_compute[254092]: 2025-11-25 16:26:50.456 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 138942ff-b720-4101-8dcf-38958751745b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:26:50 compute-0 nova_compute[254092]: 2025-11-25 16:26:50.456 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 16:26:50 compute-0 nova_compute[254092]: 2025-11-25 16:26:50.456 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 16:26:50 compute-0 nova_compute[254092]: 2025-11-25 16:26:50.519 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:26:50 compute-0 nova_compute[254092]: 2025-11-25 16:26:50.580 254096 DEBUG nova.network.neutron [req-c8eb34c9-159d-42de-8718-e824baefb791 req-c34c4ed0-b441-419e-b1e9-350353a5b8bf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Updated VIF entry in instance network info cache for port f445c9f8-c211-4af8-a66d-21cacc81fdc5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:26:50 compute-0 nova_compute[254092]: 2025-11-25 16:26:50.581 254096 DEBUG nova.network.neutron [req-c8eb34c9-159d-42de-8718-e824baefb791 req-c34c4ed0-b441-419e-b1e9-350353a5b8bf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Updating instance_info_cache with network_info: [{"id": "f445c9f8-c211-4af8-a66d-21cacc81fdc5", "address": "fa:16:3e:f7:94:8b", "network": {"id": "12698a0a-7c9a-41c0-97e4-92c265b0a639", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1381578390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33fbb668df82403b9f379e45132213fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf445c9f8-c2", "ovs_interfaceid": "f445c9f8-c211-4af8-a66d-21cacc81fdc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:26:50 compute-0 nova_compute[254092]: 2025-11-25 16:26:50.602 254096 DEBUG oslo_concurrency.lockutils [req-c8eb34c9-159d-42de-8718-e824baefb791 req-c34c4ed0-b441-419e-b1e9-350353a5b8bf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-138942ff-b720-4101-8dcf-38958751745b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:26:50 compute-0 nova_compute[254092]: 2025-11-25 16:26:50.664 254096 INFO nova.virt.libvirt.driver [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Creating config drive at /var/lib/nova/instances/138942ff-b720-4101-8dcf-38958751745b/disk.config
Nov 25 16:26:50 compute-0 nova_compute[254092]: 2025-11-25 16:26:50.671 254096 DEBUG oslo_concurrency.processutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/138942ff-b720-4101-8dcf-38958751745b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcix2h5r3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:26:50 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3188071052' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:26:50 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2615863893' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:26:50 compute-0 nova_compute[254092]: 2025-11-25 16:26:50.801 254096 DEBUG oslo_concurrency.processutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/138942ff-b720-4101-8dcf-38958751745b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcix2h5r3" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:26:50 compute-0 nova_compute[254092]: 2025-11-25 16:26:50.835 254096 DEBUG nova.storage.rbd_utils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] rbd image 138942ff-b720-4101-8dcf-38958751745b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:26:50 compute-0 nova_compute[254092]: 2025-11-25 16:26:50.838 254096 DEBUG oslo_concurrency.processutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/138942ff-b720-4101-8dcf-38958751745b/disk.config 138942ff-b720-4101-8dcf-38958751745b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:26:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1151: 321 pgs: 321 active+clean; 126 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.1 MiB/s wr, 187 op/s
Nov 25 16:26:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:26:50 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1315259713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:26:51 compute-0 nova_compute[254092]: 2025-11-25 16:26:51.020 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:26:51 compute-0 nova_compute[254092]: 2025-11-25 16:26:51.028 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:26:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:26:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:26:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:26:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:26:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0006243971600898613 of space, bias 1.0, pg target 0.1873191480269584 quantized to 32 (current 32)
Nov 25 16:26:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:26:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:26:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:26:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:26:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:26:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 25 16:26:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:26:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 16:26:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:26:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:26:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:26:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 16:26:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:26:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 16:26:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:26:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:26:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:26:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 16:26:51 compute-0 nova_compute[254092]: 2025-11-25 16:26:51.042 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:26:51 compute-0 nova_compute[254092]: 2025-11-25 16:26:51.163 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 16:26:51 compute-0 nova_compute[254092]: 2025-11-25 16:26:51.164 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:26:51 compute-0 nova_compute[254092]: 2025-11-25 16:26:51.591 254096 DEBUG nova.compute.manager [None req-6938c30d-9e94-4bb2-ac62-c4a87fe747ce 785c35f34b5848c589bb5d9b05f39b6a 6290220a7fce4e0e92afb6b7d48fbc5e - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:26:51 compute-0 nova_compute[254092]: 2025-11-25 16:26:51.594 254096 INFO nova.compute.manager [None req-6938c30d-9e94-4bb2-ac62-c4a87fe747ce 785c35f34b5848c589bb5d9b05f39b6a 6290220a7fce4e0e92afb6b7d48fbc5e - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Retrieving diagnostics
Nov 25 16:26:51 compute-0 ceph-mon[74985]: pgmap v1151: 321 pgs: 321 active+clean; 126 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.1 MiB/s wr, 187 op/s
Nov 25 16:26:51 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1315259713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:26:51 compute-0 nova_compute[254092]: 2025-11-25 16:26:51.743 254096 DEBUG oslo_concurrency.processutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/138942ff-b720-4101-8dcf-38958751745b/disk.config 138942ff-b720-4101-8dcf-38958751745b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.906s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:26:51 compute-0 nova_compute[254092]: 2025-11-25 16:26:51.744 254096 INFO nova.virt.libvirt.driver [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Deleting local config drive /var/lib/nova/instances/138942ff-b720-4101-8dcf-38958751745b/disk.config because it was imported into RBD.
Nov 25 16:26:51 compute-0 kernel: tapf445c9f8-c2: entered promiscuous mode
Nov 25 16:26:51 compute-0 nova_compute[254092]: 2025-11-25 16:26:51.810 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:26:51 compute-0 ovn_controller[153477]: 2025-11-25T16:26:51Z|00043|binding|INFO|Claiming lport f445c9f8-c211-4af8-a66d-21cacc81fdc5 for this chassis.
Nov 25 16:26:51 compute-0 ovn_controller[153477]: 2025-11-25T16:26:51Z|00044|binding|INFO|f445c9f8-c211-4af8-a66d-21cacc81fdc5: Claiming fa:16:3e:f7:94:8b 10.100.0.3
Nov 25 16:26:51 compute-0 NetworkManager[48891]: <info>  [1764088011.8121] manager: (tapf445c9f8-c2): new Tun device (/org/freedesktop/NetworkManager/Devices/35)
Nov 25 16:26:51 compute-0 nova_compute[254092]: 2025-11-25 16:26:51.818 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:26:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:26:51.827 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:94:8b 10.100.0.3'], port_security=['fa:16:3e:f7:94:8b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '138942ff-b720-4101-8dcf-38958751745b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-12698a0a-7c9a-41c0-97e4-92c265b0a639', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '33fbb668df82403b9f379e45132213fd', 'neutron:revision_number': '2', 'neutron:security_group_ids': '554076c7-aad0-4f65-8aee-4a40c468d6fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8958ae03-9d40-4b3e-bee3-6d4c69009647, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=f445c9f8-c211-4af8-a66d-21cacc81fdc5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:26:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:26:51.830 163338 INFO neutron.agent.ovn.metadata.agent [-] Port f445c9f8-c211-4af8-a66d-21cacc81fdc5 in datapath 12698a0a-7c9a-41c0-97e4-92c265b0a639 bound to our chassis
Nov 25 16:26:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:26:51.831 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 12698a0a-7c9a-41c0-97e4-92c265b0a639
Nov 25 16:26:51 compute-0 systemd-udevd[275380]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:26:51 compute-0 systemd-machined[216343]: New machine qemu-11-instance-0000000b.
Nov 25 16:26:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:26:51.852 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b603723e-3d49-435b-b324-2146012ffa08]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:26:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:26:51.853 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap12698a0a-71 in ovnmeta-12698a0a-7c9a-41c0-97e4-92c265b0a639 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:26:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:26:51.856 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap12698a0a-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:26:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:26:51.856 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d76e2cbe-8859-4c19-9646-8b7078842931]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:26:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:26:51.857 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e80c5c3f-93e5-4a79-9701-933bf0715c6d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:26:51 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-0000000b.
Nov 25 16:26:51 compute-0 NetworkManager[48891]: <info>  [1764088011.8664] device (tapf445c9f8-c2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:26:51 compute-0 NetworkManager[48891]: <info>  [1764088011.8689] device (tapf445c9f8-c2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:26:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:26:51.871 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[3ce65b14-ecb1-4110-a2c5-5d099df517d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:26:51 compute-0 nova_compute[254092]: 2025-11-25 16:26:51.888 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:26:51 compute-0 ovn_controller[153477]: 2025-11-25T16:26:51Z|00045|binding|INFO|Setting lport f445c9f8-c211-4af8-a66d-21cacc81fdc5 ovn-installed in OVS
Nov 25 16:26:51 compute-0 ovn_controller[153477]: 2025-11-25T16:26:51Z|00046|binding|INFO|Setting lport f445c9f8-c211-4af8-a66d-21cacc81fdc5 up in Southbound
Nov 25 16:26:51 compute-0 nova_compute[254092]: 2025-11-25 16:26:51.893 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:26:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:26:51.896 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[eb8a3912-b0df-41bb-8d4d-aa8aa34c0912]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:26:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:26:51.935 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1463890e-cbeb-40ee-88e2-9317639666d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:26:51 compute-0 systemd-udevd[275383]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:26:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:26:51.942 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9eed15f8-e462-4192-b31f-4301975f28d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:26:51 compute-0 NetworkManager[48891]: <info>  [1764088011.9452] manager: (tap12698a0a-70): new Veth device (/org/freedesktop/NetworkManager/Devices/36)
Nov 25 16:26:51 compute-0 nova_compute[254092]: 2025-11-25 16:26:51.959 254096 DEBUG oslo_concurrency.lockutils [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Acquiring lock "cf8226e4-d68b-425a-8419-e273b162e9ee" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:26:51 compute-0 nova_compute[254092]: 2025-11-25 16:26:51.959 254096 DEBUG oslo_concurrency.lockutils [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Lock "cf8226e4-d68b-425a-8419-e273b162e9ee" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:26:51 compute-0 nova_compute[254092]: 2025-11-25 16:26:51.959 254096 DEBUG oslo_concurrency.lockutils [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Acquiring lock "cf8226e4-d68b-425a-8419-e273b162e9ee-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:26:51 compute-0 nova_compute[254092]: 2025-11-25 16:26:51.960 254096 DEBUG oslo_concurrency.lockutils [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Lock "cf8226e4-d68b-425a-8419-e273b162e9ee-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:26:51 compute-0 nova_compute[254092]: 2025-11-25 16:26:51.960 254096 DEBUG oslo_concurrency.lockutils [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Lock "cf8226e4-d68b-425a-8419-e273b162e9ee-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:26:51 compute-0 nova_compute[254092]: 2025-11-25 16:26:51.961 254096 INFO nova.compute.manager [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Terminating instance
Nov 25 16:26:51 compute-0 nova_compute[254092]: 2025-11-25 16:26:51.961 254096 DEBUG oslo_concurrency.lockutils [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Acquiring lock "refresh_cache-cf8226e4-d68b-425a-8419-e273b162e9ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:26:51 compute-0 nova_compute[254092]: 2025-11-25 16:26:51.962 254096 DEBUG oslo_concurrency.lockutils [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Acquired lock "refresh_cache-cf8226e4-d68b-425a-8419-e273b162e9ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:26:51 compute-0 nova_compute[254092]: 2025-11-25 16:26:51.962 254096 DEBUG nova.network.neutron [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:26:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:26:51.976 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[0f4297cf-fd20-4a72-a6f5-56ffc42aaeba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:26:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:26:51.980 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8e27aa2d-1da2-4bac-9ba3-8874d77ac949]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:26:52 compute-0 NetworkManager[48891]: <info>  [1764088012.0023] device (tap12698a0a-70): carrier: link connected
Nov 25 16:26:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:26:52.007 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c700ef8f-fb93-4263-b301-ca16043b9b8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:26:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:26:52.023 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4f6b1d56-d018-4dba-aaf7-8d5517486119]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap12698a0a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:89:d7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 446959, 'reachable_time': 16021, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275412, 'error': None, 'target': 'ovnmeta-12698a0a-7c9a-41c0-97e4-92c265b0a639', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:26:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:26:52.037 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a86f6cfc-7cc5-4d78-a9ec-0f29c853edb1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2e:89d7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 446959, 'tstamp': 446959}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275414, 'error': None, 'target': 'ovnmeta-12698a0a-7c9a-41c0-97e4-92c265b0a639', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:26:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:26:52.052 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[990e5676-5747-4d5e-b215-bf85ac252c2f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap12698a0a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:89:d7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 446959, 'reachable_time': 16021, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 275415, 'error': None, 'target': 'ovnmeta-12698a0a-7c9a-41c0-97e4-92c265b0a639', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:26:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:26:52.085 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1c1e2e5f-4bf1-4a0e-a186-8f5afb464420]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:26:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:26:52.149 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b63c14e8-9454-47e1-bb90-a06098414c5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:26:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:26:52.150 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap12698a0a-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:26:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:26:52.150 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:26:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:26:52.151 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap12698a0a-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:26:52 compute-0 nova_compute[254092]: 2025-11-25 16:26:52.152 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:26:52 compute-0 NetworkManager[48891]: <info>  [1764088012.1531] manager: (tap12698a0a-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Nov 25 16:26:52 compute-0 nova_compute[254092]: 2025-11-25 16:26:52.154 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:26:52 compute-0 nova_compute[254092]: 2025-11-25 16:26:52.154 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:26:52 compute-0 nova_compute[254092]: 2025-11-25 16:26:52.154 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:26:52 compute-0 nova_compute[254092]: 2025-11-25 16:26:52.154 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:26:52 compute-0 nova_compute[254092]: 2025-11-25 16:26:52.155 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 16:26:52 compute-0 kernel: tap12698a0a-70: entered promiscuous mode
Nov 25 16:26:52 compute-0 nova_compute[254092]: 2025-11-25 16:26:52.156 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:26:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:26:52.157 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap12698a0a-70, col_values=(('external_ids', {'iface-id': '0d00cd0f-f859-441b-b6f5-82cb8f44f315'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:26:52 compute-0 nova_compute[254092]: 2025-11-25 16:26:52.158 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:26:52 compute-0 ovn_controller[153477]: 2025-11-25T16:26:52Z|00047|binding|INFO|Releasing lport 0d00cd0f-f859-441b-b6f5-82cb8f44f315 from this chassis (sb_readonly=0)
Nov 25 16:26:52 compute-0 nova_compute[254092]: 2025-11-25 16:26:52.176 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:26:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:26:52.177 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/12698a0a-7c9a-41c0-97e4-92c265b0a639.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/12698a0a-7c9a-41c0-97e4-92c265b0a639.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:26:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:26:52.177 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[da86e0a0-4822-4aeb-a78a-f1d7ab92fb64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:26:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:26:52.178 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:26:52 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:26:52 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:26:52 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-12698a0a-7c9a-41c0-97e4-92c265b0a639
Nov 25 16:26:52 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:26:52 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:26:52 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:26:52 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/12698a0a-7c9a-41c0-97e4-92c265b0a639.pid.haproxy
Nov 25 16:26:52 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:26:52 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:26:52 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:26:52 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:26:52 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:26:52 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:26:52 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:26:52 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:26:52 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:26:52 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:26:52 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:26:52 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:26:52 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:26:52 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:26:52 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:26:52 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:26:52 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:26:52 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:26:52 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:26:52 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:26:52 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 12698a0a-7c9a-41c0-97e4-92c265b0a639
Nov 25 16:26:52 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:26:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:26:52.179 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-12698a0a-7c9a-41c0-97e4-92c265b0a639', 'env', 'PROCESS_TAG=haproxy-12698a0a-7c9a-41c0-97e4-92c265b0a639', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/12698a0a-7c9a-41c0-97e4-92c265b0a639.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:26:52 compute-0 nova_compute[254092]: 2025-11-25 16:26:52.195 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:26:52 compute-0 nova_compute[254092]: 2025-11-25 16:26:52.346 254096 DEBUG nova.network.neutron [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:26:52 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:26:52 compute-0 nova_compute[254092]: 2025-11-25 16:26:52.468 254096 DEBUG nova.compute.manager [req-d7d69bb2-560c-4b36-ab8a-5dd60d3e2790 req-ddf3d933-e8a4-4fd1-a03e-72050810e546 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Received event network-vif-plugged-f445c9f8-c211-4af8-a66d-21cacc81fdc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:26:52 compute-0 nova_compute[254092]: 2025-11-25 16:26:52.469 254096 DEBUG oslo_concurrency.lockutils [req-d7d69bb2-560c-4b36-ab8a-5dd60d3e2790 req-ddf3d933-e8a4-4fd1-a03e-72050810e546 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "138942ff-b720-4101-8dcf-38958751745b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:26:52 compute-0 nova_compute[254092]: 2025-11-25 16:26:52.470 254096 DEBUG oslo_concurrency.lockutils [req-d7d69bb2-560c-4b36-ab8a-5dd60d3e2790 req-ddf3d933-e8a4-4fd1-a03e-72050810e546 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "138942ff-b720-4101-8dcf-38958751745b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:26:52 compute-0 nova_compute[254092]: 2025-11-25 16:26:52.471 254096 DEBUG oslo_concurrency.lockutils [req-d7d69bb2-560c-4b36-ab8a-5dd60d3e2790 req-ddf3d933-e8a4-4fd1-a03e-72050810e546 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "138942ff-b720-4101-8dcf-38958751745b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:26:52 compute-0 nova_compute[254092]: 2025-11-25 16:26:52.472 254096 DEBUG nova.compute.manager [req-d7d69bb2-560c-4b36-ab8a-5dd60d3e2790 req-ddf3d933-e8a4-4fd1-a03e-72050810e546 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Processing event network-vif-plugged-f445c9f8-c211-4af8-a66d-21cacc81fdc5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:26:52 compute-0 nova_compute[254092]: 2025-11-25 16:26:52.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:26:52 compute-0 nova_compute[254092]: 2025-11-25 16:26:52.600 254096 DEBUG nova.network.neutron [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:26:52 compute-0 nova_compute[254092]: 2025-11-25 16:26:52.613 254096 DEBUG oslo_concurrency.lockutils [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Releasing lock "refresh_cache-cf8226e4-d68b-425a-8419-e273b162e9ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:26:52 compute-0 nova_compute[254092]: 2025-11-25 16:26:52.614 254096 DEBUG nova.compute.manager [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:26:52 compute-0 podman[275447]: 2025-11-25 16:26:52.647108631 +0000 UTC m=+0.074096017 container create 15b1183ce451190b88380abbb8981e29cf1ae3c50ce72d80046ae7dbfccc5582 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-12698a0a-7c9a-41c0-97e4-92c265b0a639, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 25 16:26:52 compute-0 podman[275447]: 2025-11-25 16:26:52.603697651 +0000 UTC m=+0.030685077 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:26:52 compute-0 systemd[1]: Started libpod-conmon-15b1183ce451190b88380abbb8981e29cf1ae3c50ce72d80046ae7dbfccc5582.scope.
Nov 25 16:26:52 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Nov 25 16:26:52 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Consumed 11.737s CPU time.
Nov 25 16:26:52 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:26:52 compute-0 systemd-machined[216343]: Machine qemu-10-instance-0000000a terminated.
Nov 25 16:26:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5912db2bca04cbf22b9662e16eec824478d559ed5d99feb271afa3bc163b746/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:26:52 compute-0 podman[275447]: 2025-11-25 16:26:52.762160802 +0000 UTC m=+0.189148148 container init 15b1183ce451190b88380abbb8981e29cf1ae3c50ce72d80046ae7dbfccc5582 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-12698a0a-7c9a-41c0-97e4-92c265b0a639, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 25 16:26:52 compute-0 podman[275447]: 2025-11-25 16:26:52.768567048 +0000 UTC m=+0.195554394 container start 15b1183ce451190b88380abbb8981e29cf1ae3c50ce72d80046ae7dbfccc5582 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-12698a0a-7c9a-41c0-97e4-92c265b0a639, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 16:26:52 compute-0 neutron-haproxy-ovnmeta-12698a0a-7c9a-41c0-97e4-92c265b0a639[275462]: [NOTICE]   (275466) : New worker (275468) forked
Nov 25 16:26:52 compute-0 neutron-haproxy-ovnmeta-12698a0a-7c9a-41c0-97e4-92c265b0a639[275462]: [NOTICE]   (275466) : Loading success.
Nov 25 16:26:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1152: 321 pgs: 321 active+clean; 134 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 194 op/s
Nov 25 16:26:52 compute-0 nova_compute[254092]: 2025-11-25 16:26:52.843 254096 INFO nova.virt.libvirt.driver [-] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Instance destroyed successfully.
Nov 25 16:26:52 compute-0 nova_compute[254092]: 2025-11-25 16:26:52.844 254096 DEBUG nova.objects.instance [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Lazy-loading 'resources' on Instance uuid cf8226e4-d68b-425a-8419-e273b162e9ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:26:53 compute-0 nova_compute[254092]: 2025-11-25 16:26:53.295 254096 INFO nova.virt.libvirt.driver [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Deleting instance files /var/lib/nova/instances/cf8226e4-d68b-425a-8419-e273b162e9ee_del
Nov 25 16:26:53 compute-0 nova_compute[254092]: 2025-11-25 16:26:53.297 254096 INFO nova.virt.libvirt.driver [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Deletion of /var/lib/nova/instances/cf8226e4-d68b-425a-8419-e273b162e9ee_del complete
Nov 25 16:26:53 compute-0 nova_compute[254092]: 2025-11-25 16:26:53.339 254096 INFO nova.compute.manager [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Took 0.72 seconds to destroy the instance on the hypervisor.
Nov 25 16:26:53 compute-0 nova_compute[254092]: 2025-11-25 16:26:53.341 254096 DEBUG oslo.service.loopingcall [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:26:53 compute-0 nova_compute[254092]: 2025-11-25 16:26:53.341 254096 DEBUG nova.compute.manager [-] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:26:53 compute-0 nova_compute[254092]: 2025-11-25 16:26:53.341 254096 DEBUG nova.network.neutron [-] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:26:53 compute-0 ceph-mon[74985]: pgmap v1152: 321 pgs: 321 active+clean; 134 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 194 op/s
Nov 25 16:26:53 compute-0 nova_compute[254092]: 2025-11-25 16:26:53.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:26:53 compute-0 nova_compute[254092]: 2025-11-25 16:26:53.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 16:26:53 compute-0 nova_compute[254092]: 2025-11-25 16:26:53.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 16:26:53 compute-0 nova_compute[254092]: 2025-11-25 16:26:53.516 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Nov 25 16:26:53 compute-0 nova_compute[254092]: 2025-11-25 16:26:53.517 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 138942ff-b720-4101-8dcf-38958751745b] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 25 16:26:53 compute-0 nova_compute[254092]: 2025-11-25 16:26:53.518 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 16:26:53 compute-0 nova_compute[254092]: 2025-11-25 16:26:53.518 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:26:53 compute-0 nova_compute[254092]: 2025-11-25 16:26:53.579 254096 DEBUG nova.network.neutron [-] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:26:53 compute-0 nova_compute[254092]: 2025-11-25 16:26:53.594 254096 DEBUG nova.network.neutron [-] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:26:53 compute-0 nova_compute[254092]: 2025-11-25 16:26:53.608 254096 INFO nova.compute.manager [-] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Took 0.27 seconds to deallocate network for instance.
Nov 25 16:26:53 compute-0 nova_compute[254092]: 2025-11-25 16:26:53.650 254096 DEBUG oslo_concurrency.lockutils [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:26:53 compute-0 nova_compute[254092]: 2025-11-25 16:26:53.651 254096 DEBUG oslo_concurrency.lockutils [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:26:53 compute-0 nova_compute[254092]: 2025-11-25 16:26:53.719 254096 DEBUG oslo_concurrency.processutils [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:26:54 compute-0 nova_compute[254092]: 2025-11-25 16:26:54.184 254096 DEBUG nova.compute.manager [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:26:54 compute-0 nova_compute[254092]: 2025-11-25 16:26:54.186 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088014.1832747, 138942ff-b720-4101-8dcf-38958751745b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:26:54 compute-0 nova_compute[254092]: 2025-11-25 16:26:54.187 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 138942ff-b720-4101-8dcf-38958751745b] VM Started (Lifecycle Event)
Nov 25 16:26:54 compute-0 nova_compute[254092]: 2025-11-25 16:26:54.197 254096 DEBUG nova.virt.libvirt.driver [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:26:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:26:54 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/728620109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:26:54 compute-0 nova_compute[254092]: 2025-11-25 16:26:54.205 254096 INFO nova.virt.libvirt.driver [-] [instance: 138942ff-b720-4101-8dcf-38958751745b] Instance spawned successfully.
Nov 25 16:26:54 compute-0 nova_compute[254092]: 2025-11-25 16:26:54.206 254096 DEBUG nova.virt.libvirt.driver [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:26:54 compute-0 nova_compute[254092]: 2025-11-25 16:26:54.219 254096 DEBUG oslo_concurrency.processutils [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:26:54 compute-0 nova_compute[254092]: 2025-11-25 16:26:54.226 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 138942ff-b720-4101-8dcf-38958751745b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:26:54 compute-0 nova_compute[254092]: 2025-11-25 16:26:54.236 254096 DEBUG nova.compute.provider_tree [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:26:54 compute-0 nova_compute[254092]: 2025-11-25 16:26:54.238 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 138942ff-b720-4101-8dcf-38958751745b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:26:54 compute-0 nova_compute[254092]: 2025-11-25 16:26:54.247 254096 DEBUG nova.virt.libvirt.driver [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:26:54 compute-0 nova_compute[254092]: 2025-11-25 16:26:54.248 254096 DEBUG nova.virt.libvirt.driver [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:26:54 compute-0 nova_compute[254092]: 2025-11-25 16:26:54.249 254096 DEBUG nova.virt.libvirt.driver [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:26:54 compute-0 nova_compute[254092]: 2025-11-25 16:26:54.250 254096 DEBUG nova.virt.libvirt.driver [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:26:54 compute-0 nova_compute[254092]: 2025-11-25 16:26:54.251 254096 DEBUG nova.virt.libvirt.driver [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:26:54 compute-0 nova_compute[254092]: 2025-11-25 16:26:54.252 254096 DEBUG nova.virt.libvirt.driver [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:26:54 compute-0 nova_compute[254092]: 2025-11-25 16:26:54.263 254096 DEBUG nova.scheduler.client.report [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:26:54 compute-0 nova_compute[254092]: 2025-11-25 16:26:54.268 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 138942ff-b720-4101-8dcf-38958751745b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:26:54 compute-0 nova_compute[254092]: 2025-11-25 16:26:54.269 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088014.1856723, 138942ff-b720-4101-8dcf-38958751745b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:26:54 compute-0 nova_compute[254092]: 2025-11-25 16:26:54.269 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 138942ff-b720-4101-8dcf-38958751745b] VM Paused (Lifecycle Event)
Nov 25 16:26:54 compute-0 nova_compute[254092]: 2025-11-25 16:26:54.325 254096 DEBUG oslo_concurrency.lockutils [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:26:54 compute-0 nova_compute[254092]: 2025-11-25 16:26:54.334 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 138942ff-b720-4101-8dcf-38958751745b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:26:54 compute-0 nova_compute[254092]: 2025-11-25 16:26:54.338 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088014.190129, 138942ff-b720-4101-8dcf-38958751745b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:26:54 compute-0 nova_compute[254092]: 2025-11-25 16:26:54.338 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 138942ff-b720-4101-8dcf-38958751745b] VM Resumed (Lifecycle Event)
Nov 25 16:26:54 compute-0 nova_compute[254092]: 2025-11-25 16:26:54.352 254096 INFO nova.compute.manager [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Took 7.70 seconds to spawn the instance on the hypervisor.
Nov 25 16:26:54 compute-0 nova_compute[254092]: 2025-11-25 16:26:54.353 254096 DEBUG nova.compute.manager [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:26:54 compute-0 nova_compute[254092]: 2025-11-25 16:26:54.359 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 138942ff-b720-4101-8dcf-38958751745b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:26:54 compute-0 nova_compute[254092]: 2025-11-25 16:26:54.361 254096 INFO nova.scheduler.client.report [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Deleted allocations for instance cf8226e4-d68b-425a-8419-e273b162e9ee
Nov 25 16:26:54 compute-0 nova_compute[254092]: 2025-11-25 16:26:54.363 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 138942ff-b720-4101-8dcf-38958751745b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:26:54 compute-0 nova_compute[254092]: 2025-11-25 16:26:54.391 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 138942ff-b720-4101-8dcf-38958751745b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:26:54 compute-0 nova_compute[254092]: 2025-11-25 16:26:54.429 254096 INFO nova.compute.manager [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Took 9.37 seconds to build instance.
Nov 25 16:26:54 compute-0 nova_compute[254092]: 2025-11-25 16:26:54.458 254096 DEBUG oslo_concurrency.lockutils [None req-06ffc5b8-e1e8-46b8-8a54-f000031d472b dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Lock "138942ff-b720-4101-8dcf-38958751745b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.534s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:26:54 compute-0 nova_compute[254092]: 2025-11-25 16:26:54.461 254096 DEBUG oslo_concurrency.lockutils [None req-a45075c7-8c64-463d-abdc-8b972f6b76d6 ef9096d47e8b4fceb4fdb347f45e82ea 3c8b74363ca84877a8f0a40f07822af8 - - default default] Lock "cf8226e4-d68b-425a-8419-e273b162e9ee" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.502s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:26:54 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/728620109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:26:54 compute-0 nova_compute[254092]: 2025-11-25 16:26:54.630 254096 DEBUG nova.compute.manager [req-964365d0-5018-4a31-a9bd-87f5c2d02148 req-85b623b7-eaaa-4baa-812b-41b080043fb5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Received event network-vif-plugged-f445c9f8-c211-4af8-a66d-21cacc81fdc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:26:54 compute-0 nova_compute[254092]: 2025-11-25 16:26:54.631 254096 DEBUG oslo_concurrency.lockutils [req-964365d0-5018-4a31-a9bd-87f5c2d02148 req-85b623b7-eaaa-4baa-812b-41b080043fb5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "138942ff-b720-4101-8dcf-38958751745b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:26:54 compute-0 nova_compute[254092]: 2025-11-25 16:26:54.631 254096 DEBUG oslo_concurrency.lockutils [req-964365d0-5018-4a31-a9bd-87f5c2d02148 req-85b623b7-eaaa-4baa-812b-41b080043fb5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "138942ff-b720-4101-8dcf-38958751745b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:26:54 compute-0 nova_compute[254092]: 2025-11-25 16:26:54.632 254096 DEBUG oslo_concurrency.lockutils [req-964365d0-5018-4a31-a9bd-87f5c2d02148 req-85b623b7-eaaa-4baa-812b-41b080043fb5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "138942ff-b720-4101-8dcf-38958751745b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:26:54 compute-0 nova_compute[254092]: 2025-11-25 16:26:54.632 254096 DEBUG nova.compute.manager [req-964365d0-5018-4a31-a9bd-87f5c2d02148 req-85b623b7-eaaa-4baa-812b-41b080043fb5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] No waiting events found dispatching network-vif-plugged-f445c9f8-c211-4af8-a66d-21cacc81fdc5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:26:54 compute-0 nova_compute[254092]: 2025-11-25 16:26:54.633 254096 WARNING nova.compute.manager [req-964365d0-5018-4a31-a9bd-87f5c2d02148 req-85b623b7-eaaa-4baa-812b-41b080043fb5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Received unexpected event network-vif-plugged-f445c9f8-c211-4af8-a66d-21cacc81fdc5 for instance with vm_state active and task_state None.
Nov 25 16:26:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1153: 321 pgs: 321 active+clean; 134 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 76 KiB/s rd, 2.6 MiB/s wr, 86 op/s
Nov 25 16:26:54 compute-0 nova_compute[254092]: 2025-11-25 16:26:54.985 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:26:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 16:26:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3732839102' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:26:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 16:26:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3732839102' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:26:55 compute-0 ceph-mon[74985]: pgmap v1153: 321 pgs: 321 active+clean; 134 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 76 KiB/s rd, 2.6 MiB/s wr, 86 op/s
Nov 25 16:26:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/3732839102' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:26:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/3732839102' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:26:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1154: 321 pgs: 321 active+clean; 88 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 4.6 MiB/s wr, 194 op/s
Nov 25 16:26:57 compute-0 nova_compute[254092]: 2025-11-25 16:26:57.210 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:26:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:26:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Nov 25 16:26:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Nov 25 16:26:57 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Nov 25 16:26:58 compute-0 ceph-mon[74985]: pgmap v1154: 321 pgs: 321 active+clean; 88 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 4.6 MiB/s wr, 194 op/s
Nov 25 16:26:58 compute-0 ceph-mon[74985]: osdmap e136: 3 total, 3 up, 3 in
Nov 25 16:26:58 compute-0 nova_compute[254092]: 2025-11-25 16:26:58.514 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:26:58 compute-0 nova_compute[254092]: 2025-11-25 16:26:58.515 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:26:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1156: 321 pgs: 321 active+clean; 88 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 4.6 MiB/s wr, 194 op/s
Nov 25 16:26:59 compute-0 ceph-mon[74985]: pgmap v1156: 321 pgs: 321 active+clean; 88 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 4.6 MiB/s wr, 194 op/s
Nov 25 16:26:59 compute-0 nova_compute[254092]: 2025-11-25 16:26:59.988 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:27:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1157: 321 pgs: 321 active+clean; 88 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.9 MiB/s wr, 181 op/s
Nov 25 16:27:00 compute-0 ceph-mon[74985]: pgmap v1157: 321 pgs: 321 active+clean; 88 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.9 MiB/s wr, 181 op/s
Nov 25 16:27:02 compute-0 nova_compute[254092]: 2025-11-25 16:27:02.210 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:27:02 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:27:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1158: 321 pgs: 321 active+clean; 88 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.4 MiB/s wr, 175 op/s
Nov 25 16:27:02 compute-0 ceph-mon[74985]: pgmap v1158: 321 pgs: 321 active+clean; 88 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.4 MiB/s wr, 175 op/s
Nov 25 16:27:04 compute-0 sshd-session[275562]: Connection reset by 198.235.24.165 port 58336 [preauth]
Nov 25 16:27:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1159: 321 pgs: 321 active+clean; 88 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.4 MiB/s wr, 175 op/s
Nov 25 16:27:04 compute-0 ceph-mon[74985]: pgmap v1159: 321 pgs: 321 active+clean; 88 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.4 MiB/s wr, 175 op/s
Nov 25 16:27:04 compute-0 nova_compute[254092]: 2025-11-25 16:27:04.992 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:27:05 compute-0 nova_compute[254092]: 2025-11-25 16:27:05.280 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:27:05 compute-0 NetworkManager[48891]: <info>  [1764088025.2810] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Nov 25 16:27:05 compute-0 NetworkManager[48891]: <info>  [1764088025.2821] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Nov 25 16:27:05 compute-0 nova_compute[254092]: 2025-11-25 16:27:05.402 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:27:05 compute-0 ovn_controller[153477]: 2025-11-25T16:27:05Z|00048|binding|INFO|Releasing lport 0d00cd0f-f859-441b-b6f5-82cb8f44f315 from this chassis (sb_readonly=0)
Nov 25 16:27:05 compute-0 nova_compute[254092]: 2025-11-25 16:27:05.412 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:27:05 compute-0 nova_compute[254092]: 2025-11-25 16:27:05.835 254096 DEBUG nova.compute.manager [req-63c4e6b3-7e5e-40ed-a033-68c32ec21bc1 req-a52b818e-3eb7-4e92-af61-2ba1a82b7a46 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Received event network-changed-f445c9f8-c211-4af8-a66d-21cacc81fdc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:27:05 compute-0 nova_compute[254092]: 2025-11-25 16:27:05.835 254096 DEBUG nova.compute.manager [req-63c4e6b3-7e5e-40ed-a033-68c32ec21bc1 req-a52b818e-3eb7-4e92-af61-2ba1a82b7a46 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Refreshing instance network info cache due to event network-changed-f445c9f8-c211-4af8-a66d-21cacc81fdc5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:27:05 compute-0 nova_compute[254092]: 2025-11-25 16:27:05.835 254096 DEBUG oslo_concurrency.lockutils [req-63c4e6b3-7e5e-40ed-a033-68c32ec21bc1 req-a52b818e-3eb7-4e92-af61-2ba1a82b7a46 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-138942ff-b720-4101-8dcf-38958751745b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:27:05 compute-0 nova_compute[254092]: 2025-11-25 16:27:05.836 254096 DEBUG oslo_concurrency.lockutils [req-63c4e6b3-7e5e-40ed-a033-68c32ec21bc1 req-a52b818e-3eb7-4e92-af61-2ba1a82b7a46 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-138942ff-b720-4101-8dcf-38958751745b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:27:05 compute-0 nova_compute[254092]: 2025-11-25 16:27:05.836 254096 DEBUG nova.network.neutron [req-63c4e6b3-7e5e-40ed-a033-68c32ec21bc1 req-a52b818e-3eb7-4e92-af61-2ba1a82b7a46 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Refreshing network info cache for port f445c9f8-c211-4af8-a66d-21cacc81fdc5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:27:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1160: 321 pgs: 321 active+clean; 95 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 649 KiB/s wr, 66 op/s
Nov 25 16:27:06 compute-0 ceph-mon[74985]: pgmap v1160: 321 pgs: 321 active+clean; 95 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 649 KiB/s wr, 66 op/s
Nov 25 16:27:07 compute-0 ovn_controller[153477]: 2025-11-25T16:27:07Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f7:94:8b 10.100.0.3
Nov 25 16:27:07 compute-0 ovn_controller[153477]: 2025-11-25T16:27:07Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f7:94:8b 10.100.0.3
Nov 25 16:27:07 compute-0 nova_compute[254092]: 2025-11-25 16:27:07.213 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:27:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:27:07 compute-0 nova_compute[254092]: 2025-11-25 16:27:07.838 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088012.8369787, cf8226e4-d68b-425a-8419-e273b162e9ee => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:27:07 compute-0 nova_compute[254092]: 2025-11-25 16:27:07.839 254096 INFO nova.compute.manager [-] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] VM Stopped (Lifecycle Event)
Nov 25 16:27:07 compute-0 nova_compute[254092]: 2025-11-25 16:27:07.866 254096 DEBUG nova.compute.manager [None req-a5c9ebac-bd5a-4ba6-bdda-03ca66c240b9 - - - - - -] [instance: cf8226e4-d68b-425a-8419-e273b162e9ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:27:08 compute-0 nova_compute[254092]: 2025-11-25 16:27:08.197 254096 DEBUG nova.network.neutron [req-63c4e6b3-7e5e-40ed-a033-68c32ec21bc1 req-a52b818e-3eb7-4e92-af61-2ba1a82b7a46 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Updated VIF entry in instance network info cache for port f445c9f8-c211-4af8-a66d-21cacc81fdc5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:27:08 compute-0 nova_compute[254092]: 2025-11-25 16:27:08.198 254096 DEBUG nova.network.neutron [req-63c4e6b3-7e5e-40ed-a033-68c32ec21bc1 req-a52b818e-3eb7-4e92-af61-2ba1a82b7a46 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Updating instance_info_cache with network_info: [{"id": "f445c9f8-c211-4af8-a66d-21cacc81fdc5", "address": "fa:16:3e:f7:94:8b", "network": {"id": "12698a0a-7c9a-41c0-97e4-92c265b0a639", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1381578390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33fbb668df82403b9f379e45132213fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf445c9f8-c2", "ovs_interfaceid": "f445c9f8-c211-4af8-a66d-21cacc81fdc5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:27:08 compute-0 nova_compute[254092]: 2025-11-25 16:27:08.212 254096 DEBUG oslo_concurrency.lockutils [req-63c4e6b3-7e5e-40ed-a033-68c32ec21bc1 req-a52b818e-3eb7-4e92-af61-2ba1a82b7a46 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-138942ff-b720-4101-8dcf-38958751745b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:27:08 compute-0 nova_compute[254092]: 2025-11-25 16:27:08.756 254096 DEBUG oslo_concurrency.lockutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Acquiring lock "5832b7f9-d051-4ddb-a8a0-920b2d3cfab2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:27:08 compute-0 nova_compute[254092]: 2025-11-25 16:27:08.758 254096 DEBUG oslo_concurrency.lockutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lock "5832b7f9-d051-4ddb-a8a0-920b2d3cfab2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:27:08 compute-0 nova_compute[254092]: 2025-11-25 16:27:08.777 254096 DEBUG nova.compute.manager [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:27:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1161: 321 pgs: 321 active+clean; 95 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 581 KiB/s wr, 59 op/s
Nov 25 16:27:08 compute-0 nova_compute[254092]: 2025-11-25 16:27:08.849 254096 DEBUG oslo_concurrency.lockutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:27:08 compute-0 nova_compute[254092]: 2025-11-25 16:27:08.849 254096 DEBUG oslo_concurrency.lockutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:27:08 compute-0 nova_compute[254092]: 2025-11-25 16:27:08.859 254096 DEBUG nova.virt.hardware [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:27:08 compute-0 nova_compute[254092]: 2025-11-25 16:27:08.860 254096 INFO nova.compute.claims [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:27:08 compute-0 ceph-mon[74985]: pgmap v1161: 321 pgs: 321 active+clean; 95 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 581 KiB/s wr, 59 op/s
Nov 25 16:27:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:27:08.948 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:27:08 compute-0 nova_compute[254092]: 2025-11-25 16:27:08.949 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:27:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:27:08.949 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 16:27:08 compute-0 nova_compute[254092]: 2025-11-25 16:27:08.997 254096 DEBUG oslo_concurrency.processutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:27:09 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4100195329' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:27:09 compute-0 nova_compute[254092]: 2025-11-25 16:27:09.411 254096 DEBUG oslo_concurrency.processutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:09 compute-0 nova_compute[254092]: 2025-11-25 16:27:09.419 254096 DEBUG nova.compute.provider_tree [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:27:09 compute-0 nova_compute[254092]: 2025-11-25 16:27:09.433 254096 DEBUG nova.scheduler.client.report [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:27:09 compute-0 nova_compute[254092]: 2025-11-25 16:27:09.459 254096 DEBUG oslo_concurrency.lockutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:27:09 compute-0 nova_compute[254092]: 2025-11-25 16:27:09.460 254096 DEBUG nova.compute.manager [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:27:09 compute-0 nova_compute[254092]: 2025-11-25 16:27:09.507 254096 DEBUG nova.compute.manager [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Nov 25 16:27:09 compute-0 nova_compute[254092]: 2025-11-25 16:27:09.520 254096 INFO nova.virt.libvirt.driver [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:27:09 compute-0 nova_compute[254092]: 2025-11-25 16:27:09.534 254096 DEBUG nova.compute.manager [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:27:09 compute-0 nova_compute[254092]: 2025-11-25 16:27:09.636 254096 DEBUG nova.compute.manager [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:27:09 compute-0 nova_compute[254092]: 2025-11-25 16:27:09.638 254096 DEBUG nova.virt.libvirt.driver [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:27:09 compute-0 nova_compute[254092]: 2025-11-25 16:27:09.638 254096 INFO nova.virt.libvirt.driver [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Creating image(s)
Nov 25 16:27:09 compute-0 nova_compute[254092]: 2025-11-25 16:27:09.657 254096 DEBUG nova.storage.rbd_utils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] rbd image 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:27:09 compute-0 nova_compute[254092]: 2025-11-25 16:27:09.677 254096 DEBUG nova.storage.rbd_utils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] rbd image 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:27:09 compute-0 nova_compute[254092]: 2025-11-25 16:27:09.696 254096 DEBUG nova.storage.rbd_utils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] rbd image 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:27:09 compute-0 nova_compute[254092]: 2025-11-25 16:27:09.699 254096 DEBUG oslo_concurrency.processutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:09 compute-0 nova_compute[254092]: 2025-11-25 16:27:09.753 254096 DEBUG oslo_concurrency.processutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:09 compute-0 nova_compute[254092]: 2025-11-25 16:27:09.754 254096 DEBUG oslo_concurrency.lockutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:27:09 compute-0 nova_compute[254092]: 2025-11-25 16:27:09.754 254096 DEBUG oslo_concurrency.lockutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:27:09 compute-0 nova_compute[254092]: 2025-11-25 16:27:09.755 254096 DEBUG oslo_concurrency.lockutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:27:09 compute-0 nova_compute[254092]: 2025-11-25 16:27:09.772 254096 DEBUG nova.storage.rbd_utils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] rbd image 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:27:09 compute-0 nova_compute[254092]: 2025-11-25 16:27:09.775 254096 DEBUG oslo_concurrency.processutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:09 compute-0 nova_compute[254092]: 2025-11-25 16:27:09.827 254096 DEBUG oslo_concurrency.lockutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Acquiring lock "38463335-bf41-4609-b81c-08bc8da299af" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:27:09 compute-0 nova_compute[254092]: 2025-11-25 16:27:09.828 254096 DEBUG oslo_concurrency.lockutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "38463335-bf41-4609-b81c-08bc8da299af" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:27:09 compute-0 nova_compute[254092]: 2025-11-25 16:27:09.845 254096 DEBUG nova.compute.manager [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:27:09 compute-0 nova_compute[254092]: 2025-11-25 16:27:09.924 254096 DEBUG oslo_concurrency.lockutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:27:09 compute-0 nova_compute[254092]: 2025-11-25 16:27:09.925 254096 DEBUG oslo_concurrency.lockutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:27:09 compute-0 nova_compute[254092]: 2025-11-25 16:27:09.937 254096 DEBUG nova.virt.hardware [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:27:09 compute-0 nova_compute[254092]: 2025-11-25 16:27:09.938 254096 INFO nova.compute.claims [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:27:09 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4100195329' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:27:09 compute-0 nova_compute[254092]: 2025-11-25 16:27:09.995 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:27:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:27:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:27:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:27:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:27:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:27:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:27:10 compute-0 nova_compute[254092]: 2025-11-25 16:27:10.101 254096 DEBUG oslo_concurrency.processutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.326s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:10 compute-0 nova_compute[254092]: 2025-11-25 16:27:10.173 254096 DEBUG oslo_concurrency.processutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:10 compute-0 nova_compute[254092]: 2025-11-25 16:27:10.211 254096 DEBUG nova.storage.rbd_utils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] resizing rbd image 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:27:10 compute-0 nova_compute[254092]: 2025-11-25 16:27:10.305 254096 DEBUG nova.objects.instance [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lazy-loading 'migration_context' on Instance uuid 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:27:10 compute-0 nova_compute[254092]: 2025-11-25 16:27:10.320 254096 DEBUG nova.virt.libvirt.driver [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:27:10 compute-0 nova_compute[254092]: 2025-11-25 16:27:10.320 254096 DEBUG nova.virt.libvirt.driver [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Ensure instance console log exists: /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:27:10 compute-0 nova_compute[254092]: 2025-11-25 16:27:10.321 254096 DEBUG oslo_concurrency.lockutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:27:10 compute-0 nova_compute[254092]: 2025-11-25 16:27:10.321 254096 DEBUG oslo_concurrency.lockutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:27:10 compute-0 nova_compute[254092]: 2025-11-25 16:27:10.322 254096 DEBUG oslo_concurrency.lockutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:27:10 compute-0 nova_compute[254092]: 2025-11-25 16:27:10.324 254096 DEBUG nova.virt.libvirt.driver [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:27:10 compute-0 nova_compute[254092]: 2025-11-25 16:27:10.328 254096 WARNING nova.virt.libvirt.driver [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:27:10 compute-0 nova_compute[254092]: 2025-11-25 16:27:10.333 254096 DEBUG nova.virt.libvirt.host [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:27:10 compute-0 nova_compute[254092]: 2025-11-25 16:27:10.334 254096 DEBUG nova.virt.libvirt.host [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:27:10 compute-0 nova_compute[254092]: 2025-11-25 16:27:10.336 254096 DEBUG nova.virt.libvirt.host [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:27:10 compute-0 nova_compute[254092]: 2025-11-25 16:27:10.337 254096 DEBUG nova.virt.libvirt.host [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:27:10 compute-0 nova_compute[254092]: 2025-11-25 16:27:10.337 254096 DEBUG nova.virt.libvirt.driver [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:27:10 compute-0 nova_compute[254092]: 2025-11-25 16:27:10.338 254096 DEBUG nova.virt.hardware [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:27:10 compute-0 nova_compute[254092]: 2025-11-25 16:27:10.338 254096 DEBUG nova.virt.hardware [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:27:10 compute-0 nova_compute[254092]: 2025-11-25 16:27:10.339 254096 DEBUG nova.virt.hardware [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:27:10 compute-0 nova_compute[254092]: 2025-11-25 16:27:10.339 254096 DEBUG nova.virt.hardware [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:27:10 compute-0 nova_compute[254092]: 2025-11-25 16:27:10.339 254096 DEBUG nova.virt.hardware [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:27:10 compute-0 nova_compute[254092]: 2025-11-25 16:27:10.339 254096 DEBUG nova.virt.hardware [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:27:10 compute-0 nova_compute[254092]: 2025-11-25 16:27:10.340 254096 DEBUG nova.virt.hardware [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:27:10 compute-0 nova_compute[254092]: 2025-11-25 16:27:10.340 254096 DEBUG nova.virt.hardware [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:27:10 compute-0 nova_compute[254092]: 2025-11-25 16:27:10.340 254096 DEBUG nova.virt.hardware [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:27:10 compute-0 nova_compute[254092]: 2025-11-25 16:27:10.341 254096 DEBUG nova.virt.hardware [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:27:10 compute-0 nova_compute[254092]: 2025-11-25 16:27:10.341 254096 DEBUG nova.virt.hardware [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:27:10 compute-0 nova_compute[254092]: 2025-11-25 16:27:10.344 254096 DEBUG oslo_concurrency.processutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:27:10 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2548401150' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:27:10 compute-0 nova_compute[254092]: 2025-11-25 16:27:10.653 254096 DEBUG oslo_concurrency.processutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:10 compute-0 nova_compute[254092]: 2025-11-25 16:27:10.661 254096 DEBUG nova.compute.provider_tree [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:27:10 compute-0 nova_compute[254092]: 2025-11-25 16:27:10.677 254096 DEBUG nova.scheduler.client.report [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:27:10 compute-0 nova_compute[254092]: 2025-11-25 16:27:10.693 254096 DEBUG oslo_concurrency.lockutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:27:10 compute-0 nova_compute[254092]: 2025-11-25 16:27:10.694 254096 DEBUG nova.compute.manager [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:27:10 compute-0 nova_compute[254092]: 2025-11-25 16:27:10.736 254096 DEBUG nova.compute.manager [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:27:10 compute-0 nova_compute[254092]: 2025-11-25 16:27:10.736 254096 DEBUG nova.network.neutron [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:27:10 compute-0 nova_compute[254092]: 2025-11-25 16:27:10.750 254096 INFO nova.virt.libvirt.driver [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:27:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:27:10 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1895172410' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:27:10 compute-0 nova_compute[254092]: 2025-11-25 16:27:10.765 254096 DEBUG nova.compute.manager [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:27:10 compute-0 nova_compute[254092]: 2025-11-25 16:27:10.780 254096 DEBUG oslo_concurrency.processutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:10 compute-0 nova_compute[254092]: 2025-11-25 16:27:10.805 254096 DEBUG nova.storage.rbd_utils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] rbd image 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:27:10 compute-0 nova_compute[254092]: 2025-11-25 16:27:10.811 254096 DEBUG oslo_concurrency.processutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1162: 321 pgs: 321 active+clean; 117 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 108 op/s
Nov 25 16:27:10 compute-0 nova_compute[254092]: 2025-11-25 16:27:10.898 254096 DEBUG nova.compute.manager [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:27:10 compute-0 nova_compute[254092]: 2025-11-25 16:27:10.899 254096 DEBUG nova.virt.libvirt.driver [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:27:10 compute-0 nova_compute[254092]: 2025-11-25 16:27:10.900 254096 INFO nova.virt.libvirt.driver [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Creating image(s)
Nov 25 16:27:10 compute-0 nova_compute[254092]: 2025-11-25 16:27:10.917 254096 DEBUG nova.storage.rbd_utils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] rbd image 38463335-bf41-4609-b81c-08bc8da299af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:27:10 compute-0 nova_compute[254092]: 2025-11-25 16:27:10.934 254096 DEBUG nova.storage.rbd_utils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] rbd image 38463335-bf41-4609-b81c-08bc8da299af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:27:10 compute-0 nova_compute[254092]: 2025-11-25 16:27:10.952 254096 DEBUG nova.storage.rbd_utils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] rbd image 38463335-bf41-4609-b81c-08bc8da299af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:27:10 compute-0 nova_compute[254092]: 2025-11-25 16:27:10.955 254096 DEBUG oslo_concurrency.processutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:10 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2548401150' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:27:10 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1895172410' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:27:10 compute-0 ceph-mon[74985]: pgmap v1162: 321 pgs: 321 active+clean; 117 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 108 op/s
Nov 25 16:27:11 compute-0 nova_compute[254092]: 2025-11-25 16:27:11.012 254096 DEBUG oslo_concurrency.processutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:11 compute-0 nova_compute[254092]: 2025-11-25 16:27:11.013 254096 DEBUG oslo_concurrency.lockutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:27:11 compute-0 nova_compute[254092]: 2025-11-25 16:27:11.014 254096 DEBUG oslo_concurrency.lockutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:27:11 compute-0 nova_compute[254092]: 2025-11-25 16:27:11.014 254096 DEBUG oslo_concurrency.lockutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:27:11 compute-0 nova_compute[254092]: 2025-11-25 16:27:11.035 254096 DEBUG nova.storage.rbd_utils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] rbd image 38463335-bf41-4609-b81c-08bc8da299af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:27:11 compute-0 nova_compute[254092]: 2025-11-25 16:27:11.038 254096 DEBUG oslo_concurrency.processutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 38463335-bf41-4609-b81c-08bc8da299af_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:11 compute-0 nova_compute[254092]: 2025-11-25 16:27:11.281 254096 DEBUG nova.network.neutron [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 25 16:27:11 compute-0 nova_compute[254092]: 2025-11-25 16:27:11.281 254096 DEBUG nova.compute.manager [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:27:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:27:11 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1411282142' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:27:11 compute-0 nova_compute[254092]: 2025-11-25 16:27:11.309 254096 DEBUG oslo_concurrency.processutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 38463335-bf41-4609-b81c-08bc8da299af_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.271s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:11 compute-0 nova_compute[254092]: 2025-11-25 16:27:11.332 254096 DEBUG oslo_concurrency.processutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:11 compute-0 nova_compute[254092]: 2025-11-25 16:27:11.334 254096 DEBUG nova.objects.instance [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lazy-loading 'pci_devices' on Instance uuid 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:27:11 compute-0 nova_compute[254092]: 2025-11-25 16:27:11.362 254096 DEBUG nova.virt.libvirt.driver [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:27:11 compute-0 nova_compute[254092]:   <uuid>5832b7f9-d051-4ddb-a8a0-920b2d3cfab2</uuid>
Nov 25 16:27:11 compute-0 nova_compute[254092]:   <name>instance-0000000c</name>
Nov 25 16:27:11 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:27:11 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:27:11 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:27:11 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:       <nova:name>tempest-ServersAdmin275Test-server-1909956679</nova:name>
Nov 25 16:27:11 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:27:10</nova:creationTime>
Nov 25 16:27:11 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:27:11 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:27:11 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:27:11 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:27:11 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:27:11 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:27:11 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:27:11 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:27:11 compute-0 nova_compute[254092]:         <nova:user uuid="f16e272341774153991e7ed856e34188">tempest-ServersAdmin275Test-1636693528-project-member</nova:user>
Nov 25 16:27:11 compute-0 nova_compute[254092]:         <nova:project uuid="a9a4c2749782401c89e06948356b0e0a">tempest-ServersAdmin275Test-1636693528</nova:project>
Nov 25 16:27:11 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:27:11 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:       <nova:ports/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:27:11 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:27:11 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:27:11 compute-0 nova_compute[254092]:     <system>
Nov 25 16:27:11 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:27:11 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:27:11 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:27:11 compute-0 nova_compute[254092]:       <entry name="serial">5832b7f9-d051-4ddb-a8a0-920b2d3cfab2</entry>
Nov 25 16:27:11 compute-0 nova_compute[254092]:       <entry name="uuid">5832b7f9-d051-4ddb-a8a0-920b2d3cfab2</entry>
Nov 25 16:27:11 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     </system>
Nov 25 16:27:11 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:27:11 compute-0 nova_compute[254092]:   <os>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:   </os>
Nov 25 16:27:11 compute-0 nova_compute[254092]:   <features>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:   </features>
Nov 25 16:27:11 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:27:11 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:27:11 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:27:11 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:27:11 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:27:11 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk">
Nov 25 16:27:11 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:       </source>
Nov 25 16:27:11 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:27:11 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:27:11 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:27:11 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk.config">
Nov 25 16:27:11 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:       </source>
Nov 25 16:27:11 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:27:11 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:27:11 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:27:11 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2/console.log" append="off"/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     <video>
Nov 25 16:27:11 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     </video>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:27:11 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:27:11 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:27:11 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:27:11 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:27:11 compute-0 nova_compute[254092]: </domain>
Nov 25 16:27:11 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:27:11 compute-0 nova_compute[254092]: 2025-11-25 16:27:11.368 254096 DEBUG nova.storage.rbd_utils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] resizing rbd image 38463335-bf41-4609-b81c-08bc8da299af_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:27:11 compute-0 nova_compute[254092]: 2025-11-25 16:27:11.442 254096 DEBUG nova.virt.libvirt.driver [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:27:11 compute-0 nova_compute[254092]: 2025-11-25 16:27:11.443 254096 DEBUG nova.virt.libvirt.driver [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:27:11 compute-0 nova_compute[254092]: 2025-11-25 16:27:11.443 254096 INFO nova.virt.libvirt.driver [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Using config drive
Nov 25 16:27:11 compute-0 nova_compute[254092]: 2025-11-25 16:27:11.464 254096 DEBUG nova.storage.rbd_utils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] rbd image 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:27:11 compute-0 nova_compute[254092]: 2025-11-25 16:27:11.616 254096 DEBUG nova.objects.instance [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lazy-loading 'migration_context' on Instance uuid 38463335-bf41-4609-b81c-08bc8da299af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:27:11 compute-0 nova_compute[254092]: 2025-11-25 16:27:11.630 254096 DEBUG nova.virt.libvirt.driver [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:27:11 compute-0 nova_compute[254092]: 2025-11-25 16:27:11.630 254096 DEBUG nova.virt.libvirt.driver [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Ensure instance console log exists: /var/lib/nova/instances/38463335-bf41-4609-b81c-08bc8da299af/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:27:11 compute-0 nova_compute[254092]: 2025-11-25 16:27:11.631 254096 DEBUG oslo_concurrency.lockutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:27:11 compute-0 nova_compute[254092]: 2025-11-25 16:27:11.631 254096 DEBUG oslo_concurrency.lockutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:27:11 compute-0 nova_compute[254092]: 2025-11-25 16:27:11.631 254096 DEBUG oslo_concurrency.lockutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:27:11 compute-0 nova_compute[254092]: 2025-11-25 16:27:11.632 254096 DEBUG nova.virt.libvirt.driver [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:27:11 compute-0 nova_compute[254092]: 2025-11-25 16:27:11.638 254096 WARNING nova.virt.libvirt.driver [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:27:11 compute-0 nova_compute[254092]: 2025-11-25 16:27:11.643 254096 DEBUG nova.virt.libvirt.host [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:27:11 compute-0 nova_compute[254092]: 2025-11-25 16:27:11.644 254096 DEBUG nova.virt.libvirt.host [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:27:11 compute-0 nova_compute[254092]: 2025-11-25 16:27:11.646 254096 DEBUG nova.virt.libvirt.host [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:27:11 compute-0 nova_compute[254092]: 2025-11-25 16:27:11.647 254096 DEBUG nova.virt.libvirt.host [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:27:11 compute-0 nova_compute[254092]: 2025-11-25 16:27:11.647 254096 DEBUG nova.virt.libvirt.driver [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:27:11 compute-0 nova_compute[254092]: 2025-11-25 16:27:11.647 254096 DEBUG nova.virt.hardware [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:27:11 compute-0 nova_compute[254092]: 2025-11-25 16:27:11.647 254096 DEBUG nova.virt.hardware [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:27:11 compute-0 nova_compute[254092]: 2025-11-25 16:27:11.648 254096 DEBUG nova.virt.hardware [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:27:11 compute-0 nova_compute[254092]: 2025-11-25 16:27:11.648 254096 DEBUG nova.virt.hardware [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:27:11 compute-0 nova_compute[254092]: 2025-11-25 16:27:11.648 254096 DEBUG nova.virt.hardware [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:27:11 compute-0 nova_compute[254092]: 2025-11-25 16:27:11.648 254096 DEBUG nova.virt.hardware [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:27:11 compute-0 nova_compute[254092]: 2025-11-25 16:27:11.648 254096 DEBUG nova.virt.hardware [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:27:11 compute-0 nova_compute[254092]: 2025-11-25 16:27:11.648 254096 DEBUG nova.virt.hardware [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:27:11 compute-0 nova_compute[254092]: 2025-11-25 16:27:11.648 254096 DEBUG nova.virt.hardware [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:27:11 compute-0 nova_compute[254092]: 2025-11-25 16:27:11.649 254096 DEBUG nova.virt.hardware [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:27:11 compute-0 nova_compute[254092]: 2025-11-25 16:27:11.649 254096 DEBUG nova.virt.hardware [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:27:11 compute-0 nova_compute[254092]: 2025-11-25 16:27:11.651 254096 DEBUG oslo_concurrency.processutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:11 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1411282142' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:27:12 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:27:12 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3165087591' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:27:12 compute-0 nova_compute[254092]: 2025-11-25 16:27:12.147 254096 DEBUG oslo_concurrency.processutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:12 compute-0 nova_compute[254092]: 2025-11-25 16:27:12.165 254096 DEBUG nova.storage.rbd_utils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] rbd image 38463335-bf41-4609-b81c-08bc8da299af_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:27:12 compute-0 nova_compute[254092]: 2025-11-25 16:27:12.170 254096 DEBUG oslo_concurrency.processutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:12 compute-0 nova_compute[254092]: 2025-11-25 16:27:12.193 254096 INFO nova.virt.libvirt.driver [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Creating config drive at /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2/disk.config
Nov 25 16:27:12 compute-0 nova_compute[254092]: 2025-11-25 16:27:12.198 254096 DEBUG oslo_concurrency.processutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9ra_u8gw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:12 compute-0 nova_compute[254092]: 2025-11-25 16:27:12.217 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:27:12 compute-0 nova_compute[254092]: 2025-11-25 16:27:12.327 254096 DEBUG oslo_concurrency.processutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9ra_u8gw" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:12 compute-0 nova_compute[254092]: 2025-11-25 16:27:12.345 254096 DEBUG nova.storage.rbd_utils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] rbd image 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:27:12 compute-0 nova_compute[254092]: 2025-11-25 16:27:12.348 254096 DEBUG oslo_concurrency.processutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2/disk.config 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:12 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:27:12 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:27:12 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1945289862' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:27:12 compute-0 nova_compute[254092]: 2025-11-25 16:27:12.590 254096 DEBUG oslo_concurrency.processutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:12 compute-0 nova_compute[254092]: 2025-11-25 16:27:12.592 254096 DEBUG nova.objects.instance [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lazy-loading 'pci_devices' on Instance uuid 38463335-bf41-4609-b81c-08bc8da299af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:27:12 compute-0 nova_compute[254092]: 2025-11-25 16:27:12.610 254096 DEBUG nova.virt.libvirt.driver [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:27:12 compute-0 nova_compute[254092]:   <uuid>38463335-bf41-4609-b81c-08bc8da299af</uuid>
Nov 25 16:27:12 compute-0 nova_compute[254092]:   <name>instance-0000000d</name>
Nov 25 16:27:12 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:27:12 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:27:12 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:27:12 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:       <nova:name>tempest-LiveMigrationNegativeTest-server-1273486075</nova:name>
Nov 25 16:27:12 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:27:11</nova:creationTime>
Nov 25 16:27:12 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:27:12 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:27:12 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:27:12 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:27:12 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:27:12 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:27:12 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:27:12 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:27:12 compute-0 nova_compute[254092]:         <nova:user uuid="6fecc7ec96b94801b693d75b96da5cca">tempest-LiveMigrationNegativeTest-1263337402-project-member</nova:user>
Nov 25 16:27:12 compute-0 nova_compute[254092]:         <nova:project uuid="200751433d7c4e9994df0ea449a6cb48">tempest-LiveMigrationNegativeTest-1263337402</nova:project>
Nov 25 16:27:12 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:27:12 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:       <nova:ports/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:27:12 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:27:12 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:27:12 compute-0 nova_compute[254092]:     <system>
Nov 25 16:27:12 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:27:12 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:27:12 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:27:12 compute-0 nova_compute[254092]:       <entry name="serial">38463335-bf41-4609-b81c-08bc8da299af</entry>
Nov 25 16:27:12 compute-0 nova_compute[254092]:       <entry name="uuid">38463335-bf41-4609-b81c-08bc8da299af</entry>
Nov 25 16:27:12 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     </system>
Nov 25 16:27:12 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:27:12 compute-0 nova_compute[254092]:   <os>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:   </os>
Nov 25 16:27:12 compute-0 nova_compute[254092]:   <features>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:   </features>
Nov 25 16:27:12 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:27:12 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:27:12 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:27:12 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:27:12 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:27:12 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/38463335-bf41-4609-b81c-08bc8da299af_disk">
Nov 25 16:27:12 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:       </source>
Nov 25 16:27:12 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:27:12 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:27:12 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:27:12 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/38463335-bf41-4609-b81c-08bc8da299af_disk.config">
Nov 25 16:27:12 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:       </source>
Nov 25 16:27:12 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:27:12 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:27:12 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:27:12 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/38463335-bf41-4609-b81c-08bc8da299af/console.log" append="off"/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     <video>
Nov 25 16:27:12 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     </video>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:27:12 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:27:12 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:27:12 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:27:12 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:27:12 compute-0 nova_compute[254092]: </domain>
Nov 25 16:27:12 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:27:12 compute-0 nova_compute[254092]: 2025-11-25 16:27:12.710 254096 DEBUG nova.virt.libvirt.driver [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:27:12 compute-0 nova_compute[254092]: 2025-11-25 16:27:12.711 254096 DEBUG nova.virt.libvirt.driver [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:27:12 compute-0 nova_compute[254092]: 2025-11-25 16:27:12.712 254096 INFO nova.virt.libvirt.driver [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Using config drive
Nov 25 16:27:12 compute-0 nova_compute[254092]: 2025-11-25 16:27:12.734 254096 DEBUG nova.storage.rbd_utils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] rbd image 38463335-bf41-4609-b81c-08bc8da299af_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:27:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1163: 321 pgs: 321 active+clean; 134 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 340 KiB/s rd, 2.8 MiB/s wr, 79 op/s
Nov 25 16:27:12 compute-0 nova_compute[254092]: 2025-11-25 16:27:12.872 254096 DEBUG oslo_concurrency.processutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2/disk.config 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:12 compute-0 nova_compute[254092]: 2025-11-25 16:27:12.873 254096 INFO nova.virt.libvirt.driver [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Deleting local config drive /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2/disk.config because it was imported into RBD.
Nov 25 16:27:12 compute-0 systemd-machined[216343]: New machine qemu-12-instance-0000000c.
Nov 25 16:27:12 compute-0 systemd[1]: Started Virtual Machine qemu-12-instance-0000000c.
Nov 25 16:27:12 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3165087591' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:27:12 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1945289862' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:27:12 compute-0 ceph-mon[74985]: pgmap v1163: 321 pgs: 321 active+clean; 134 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 340 KiB/s rd, 2.8 MiB/s wr, 79 op/s
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.162 254096 INFO nova.virt.libvirt.driver [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Creating config drive at /var/lib/nova/instances/38463335-bf41-4609-b81c-08bc8da299af/disk.config
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.168 254096 DEBUG oslo_concurrency.processutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/38463335-bf41-4609-b81c-08bc8da299af/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2f_isdis execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.216 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088033.2164097, 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.217 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] VM Resumed (Lifecycle Event)
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.220 254096 DEBUG nova.compute.manager [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.221 254096 DEBUG nova.virt.libvirt.driver [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.224 254096 INFO nova.virt.libvirt.driver [-] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Instance spawned successfully.
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.225 254096 DEBUG nova.virt.libvirt.driver [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.238 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.243 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.247 254096 DEBUG nova.virt.libvirt.driver [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.247 254096 DEBUG nova.virt.libvirt.driver [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.248 254096 DEBUG nova.virt.libvirt.driver [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.248 254096 DEBUG nova.virt.libvirt.driver [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.249 254096 DEBUG nova.virt.libvirt.driver [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.249 254096 DEBUG nova.virt.libvirt.driver [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.274 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.274 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088033.2174766, 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.275 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] VM Started (Lifecycle Event)
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.295 254096 DEBUG oslo_concurrency.processutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/38463335-bf41-4609-b81c-08bc8da299af/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2f_isdis" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.317 254096 DEBUG nova.storage.rbd_utils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] rbd image 38463335-bf41-4609-b81c-08bc8da299af_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.319 254096 DEBUG oslo_concurrency.processutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/38463335-bf41-4609-b81c-08bc8da299af/disk.config 38463335-bf41-4609-b81c-08bc8da299af_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.343 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.345 254096 INFO nova.compute.manager [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Took 3.71 seconds to spawn the instance on the hypervisor.
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.346 254096 DEBUG nova.compute.manager [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.349 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.379 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.426 254096 INFO nova.compute.manager [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Took 4.60 seconds to build instance.
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.449 254096 DEBUG oslo_concurrency.lockutils [None req-67007586-e331-465d-8646-752b972f9ee5 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lock "5832b7f9-d051-4ddb-a8a0-920b2d3cfab2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.481 254096 DEBUG oslo_concurrency.processutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/38463335-bf41-4609-b81c-08bc8da299af/disk.config 38463335-bf41-4609-b81c-08bc8da299af_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.482 254096 INFO nova.virt.libvirt.driver [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Deleting local config drive /var/lib/nova/instances/38463335-bf41-4609-b81c-08bc8da299af/disk.config because it was imported into RBD.
Nov 25 16:27:13 compute-0 systemd-machined[216343]: New machine qemu-13-instance-0000000d.
Nov 25 16:27:13 compute-0 systemd[1]: Started Virtual Machine qemu-13-instance-0000000d.
Nov 25 16:27:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:27:13.599 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:27:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:27:13.599 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:27:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:27:13.600 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.865 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088033.8648262, 38463335-bf41-4609-b81c-08bc8da299af => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.868 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 38463335-bf41-4609-b81c-08bc8da299af] VM Resumed (Lifecycle Event)
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.870 254096 DEBUG nova.compute.manager [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.871 254096 DEBUG nova.virt.libvirt.driver [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.876 254096 INFO nova.virt.libvirt.driver [-] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Instance spawned successfully.
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.876 254096 DEBUG nova.virt.libvirt.driver [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.889 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.896 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.901 254096 DEBUG nova.virt.libvirt.driver [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.902 254096 DEBUG nova.virt.libvirt.driver [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.903 254096 DEBUG nova.virt.libvirt.driver [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.904 254096 DEBUG nova.virt.libvirt.driver [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.904 254096 DEBUG nova.virt.libvirt.driver [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.905 254096 DEBUG nova.virt.libvirt.driver [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.928 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 38463335-bf41-4609-b81c-08bc8da299af] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.929 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088033.8650072, 38463335-bf41-4609-b81c-08bc8da299af => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.930 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 38463335-bf41-4609-b81c-08bc8da299af] VM Started (Lifecycle Event)
Nov 25 16:27:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:27:13.951 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.951 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.956 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.973 254096 INFO nova.compute.manager [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Took 3.07 seconds to spawn the instance on the hypervisor.
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.974 254096 DEBUG nova.compute.manager [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:27:13 compute-0 nova_compute[254092]: 2025-11-25 16:27:13.983 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 38463335-bf41-4609-b81c-08bc8da299af] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:27:14 compute-0 nova_compute[254092]: 2025-11-25 16:27:14.033 254096 INFO nova.compute.manager [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Took 4.14 seconds to build instance.
Nov 25 16:27:14 compute-0 nova_compute[254092]: 2025-11-25 16:27:14.048 254096 DEBUG oslo_concurrency.lockutils [None req-0d76f67d-2cfa-4cc0-a62f-d9d240b7692e 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "38463335-bf41-4609-b81c-08bc8da299af" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.221s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:27:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1164: 321 pgs: 321 active+clean; 134 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 340 KiB/s rd, 2.8 MiB/s wr, 79 op/s
Nov 25 16:27:14 compute-0 ceph-mon[74985]: pgmap v1164: 321 pgs: 321 active+clean; 134 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 340 KiB/s rd, 2.8 MiB/s wr, 79 op/s
Nov 25 16:27:15 compute-0 nova_compute[254092]: 2025-11-25 16:27:14.999 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:27:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1165: 321 pgs: 321 active+clean; 214 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 5.7 MiB/s wr, 243 op/s
Nov 25 16:27:16 compute-0 ceph-mon[74985]: pgmap v1165: 321 pgs: 321 active+clean; 214 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 5.7 MiB/s wr, 243 op/s
Nov 25 16:27:17 compute-0 nova_compute[254092]: 2025-11-25 16:27:17.263 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:27:17 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:27:18 compute-0 nova_compute[254092]: 2025-11-25 16:27:18.166 254096 INFO nova.compute.manager [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Rebuilding instance
Nov 25 16:27:18 compute-0 nova_compute[254092]: 2025-11-25 16:27:18.299 254096 DEBUG nova.compute.manager [req-f57213d4-7a66-4d58-a3e5-4dae2b93fde4 req-41108bf1-f054-4641-b870-01ca23eebc9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Received event network-changed-f445c9f8-c211-4af8-a66d-21cacc81fdc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:27:18 compute-0 nova_compute[254092]: 2025-11-25 16:27:18.300 254096 DEBUG nova.compute.manager [req-f57213d4-7a66-4d58-a3e5-4dae2b93fde4 req-41108bf1-f054-4641-b870-01ca23eebc9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Refreshing instance network info cache due to event network-changed-f445c9f8-c211-4af8-a66d-21cacc81fdc5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:27:18 compute-0 nova_compute[254092]: 2025-11-25 16:27:18.301 254096 DEBUG oslo_concurrency.lockutils [req-f57213d4-7a66-4d58-a3e5-4dae2b93fde4 req-41108bf1-f054-4641-b870-01ca23eebc9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-138942ff-b720-4101-8dcf-38958751745b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:27:18 compute-0 nova_compute[254092]: 2025-11-25 16:27:18.301 254096 DEBUG oslo_concurrency.lockutils [req-f57213d4-7a66-4d58-a3e5-4dae2b93fde4 req-41108bf1-f054-4641-b870-01ca23eebc9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-138942ff-b720-4101-8dcf-38958751745b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:27:18 compute-0 nova_compute[254092]: 2025-11-25 16:27:18.301 254096 DEBUG nova.network.neutron [req-f57213d4-7a66-4d58-a3e5-4dae2b93fde4 req-41108bf1-f054-4641-b870-01ca23eebc9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Refreshing network info cache for port f445c9f8-c211-4af8-a66d-21cacc81fdc5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:27:18 compute-0 nova_compute[254092]: 2025-11-25 16:27:18.562 254096 DEBUG nova.objects.instance [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lazy-loading 'trusted_certs' on Instance uuid 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:27:18 compute-0 nova_compute[254092]: 2025-11-25 16:27:18.583 254096 DEBUG nova.compute.manager [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:27:18 compute-0 podman[276298]: 2025-11-25 16:27:18.652318868 +0000 UTC m=+0.061027962 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:27:18 compute-0 nova_compute[254092]: 2025-11-25 16:27:18.654 254096 DEBUG nova.objects.instance [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lazy-loading 'pci_requests' on Instance uuid 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:27:18 compute-0 podman[276297]: 2025-11-25 16:27:18.659103763 +0000 UTC m=+0.068285640 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 16:27:18 compute-0 nova_compute[254092]: 2025-11-25 16:27:18.666 254096 DEBUG nova.objects.instance [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lazy-loading 'pci_devices' on Instance uuid 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:27:18 compute-0 nova_compute[254092]: 2025-11-25 16:27:18.683 254096 DEBUG nova.objects.instance [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lazy-loading 'resources' on Instance uuid 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:27:18 compute-0 podman[276299]: 2025-11-25 16:27:18.691515035 +0000 UTC m=+0.094255576 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 25 16:27:18 compute-0 nova_compute[254092]: 2025-11-25 16:27:18.695 254096 DEBUG nova.objects.instance [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lazy-loading 'migration_context' on Instance uuid 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:27:18 compute-0 nova_compute[254092]: 2025-11-25 16:27:18.715 254096 DEBUG nova.objects.instance [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 16:27:18 compute-0 nova_compute[254092]: 2025-11-25 16:27:18.718 254096 DEBUG nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 16:27:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1166: 321 pgs: 321 active+clean; 214 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 5.2 MiB/s wr, 231 op/s
Nov 25 16:27:18 compute-0 ceph-mon[74985]: pgmap v1166: 321 pgs: 321 active+clean; 214 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 5.2 MiB/s wr, 231 op/s
Nov 25 16:27:20 compute-0 nova_compute[254092]: 2025-11-25 16:27:20.003 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:27:20 compute-0 nova_compute[254092]: 2025-11-25 16:27:20.009 254096 DEBUG nova.network.neutron [req-f57213d4-7a66-4d58-a3e5-4dae2b93fde4 req-41108bf1-f054-4641-b870-01ca23eebc9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Updated VIF entry in instance network info cache for port f445c9f8-c211-4af8-a66d-21cacc81fdc5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:27:20 compute-0 nova_compute[254092]: 2025-11-25 16:27:20.010 254096 DEBUG nova.network.neutron [req-f57213d4-7a66-4d58-a3e5-4dae2b93fde4 req-41108bf1-f054-4641-b870-01ca23eebc9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Updating instance_info_cache with network_info: [{"id": "f445c9f8-c211-4af8-a66d-21cacc81fdc5", "address": "fa:16:3e:f7:94:8b", "network": {"id": "12698a0a-7c9a-41c0-97e4-92c265b0a639", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1381578390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33fbb668df82403b9f379e45132213fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf445c9f8-c2", "ovs_interfaceid": "f445c9f8-c211-4af8-a66d-21cacc81fdc5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:27:20 compute-0 nova_compute[254092]: 2025-11-25 16:27:20.025 254096 DEBUG oslo_concurrency.lockutils [req-f57213d4-7a66-4d58-a3e5-4dae2b93fde4 req-41108bf1-f054-4641-b870-01ca23eebc9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-138942ff-b720-4101-8dcf-38958751745b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:27:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1167: 321 pgs: 321 active+clean; 214 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 5.2 MiB/s wr, 254 op/s
Nov 25 16:27:20 compute-0 ceph-mon[74985]: pgmap v1167: 321 pgs: 321 active+clean; 214 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 5.2 MiB/s wr, 254 op/s
Nov 25 16:27:21 compute-0 nova_compute[254092]: 2025-11-25 16:27:21.345 254096 DEBUG oslo_concurrency.lockutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Acquiring lock "c058218b-7732-4ced-b6a3-bb04203967d4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:27:21 compute-0 nova_compute[254092]: 2025-11-25 16:27:21.345 254096 DEBUG oslo_concurrency.lockutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "c058218b-7732-4ced-b6a3-bb04203967d4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:27:21 compute-0 nova_compute[254092]: 2025-11-25 16:27:21.372 254096 DEBUG nova.compute.manager [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:27:21 compute-0 nova_compute[254092]: 2025-11-25 16:27:21.487 254096 DEBUG oslo_concurrency.lockutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:27:21 compute-0 nova_compute[254092]: 2025-11-25 16:27:21.487 254096 DEBUG oslo_concurrency.lockutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:27:21 compute-0 nova_compute[254092]: 2025-11-25 16:27:21.495 254096 DEBUG nova.virt.hardware [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:27:21 compute-0 nova_compute[254092]: 2025-11-25 16:27:21.495 254096 INFO nova.compute.claims [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:27:21 compute-0 nova_compute[254092]: 2025-11-25 16:27:21.725 254096 DEBUG oslo_concurrency.processutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:21 compute-0 sudo[276356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:27:21 compute-0 sudo[276356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:27:21 compute-0 sudo[276356]: pam_unix(sudo:session): session closed for user root
Nov 25 16:27:21 compute-0 sudo[276400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:27:21 compute-0 sudo[276400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:27:21 compute-0 sudo[276400]: pam_unix(sudo:session): session closed for user root
Nov 25 16:27:22 compute-0 sudo[276425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:27:22 compute-0 sudo[276425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:27:22 compute-0 sudo[276425]: pam_unix(sudo:session): session closed for user root
Nov 25 16:27:22 compute-0 sudo[276450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 25 16:27:22 compute-0 sudo[276450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:27:22 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:27:22 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3332587435' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:27:22 compute-0 nova_compute[254092]: 2025-11-25 16:27:22.223 254096 DEBUG oslo_concurrency.processutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:22 compute-0 nova_compute[254092]: 2025-11-25 16:27:22.231 254096 DEBUG nova.compute.provider_tree [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:27:22 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3332587435' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:27:22 compute-0 nova_compute[254092]: 2025-11-25 16:27:22.261 254096 DEBUG nova.scheduler.client.report [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:27:22 compute-0 nova_compute[254092]: 2025-11-25 16:27:22.268 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:27:22 compute-0 nova_compute[254092]: 2025-11-25 16:27:22.291 254096 DEBUG oslo_concurrency.lockutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:27:22 compute-0 nova_compute[254092]: 2025-11-25 16:27:22.294 254096 DEBUG nova.compute.manager [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:27:22 compute-0 nova_compute[254092]: 2025-11-25 16:27:22.348 254096 DEBUG nova.compute.manager [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:27:22 compute-0 nova_compute[254092]: 2025-11-25 16:27:22.349 254096 DEBUG nova.network.neutron [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:27:22 compute-0 nova_compute[254092]: 2025-11-25 16:27:22.370 254096 INFO nova.virt.libvirt.driver [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:27:22 compute-0 nova_compute[254092]: 2025-11-25 16:27:22.391 254096 DEBUG nova.compute.manager [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:27:22 compute-0 nova_compute[254092]: 2025-11-25 16:27:22.506 254096 DEBUG nova.compute.manager [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:27:22 compute-0 nova_compute[254092]: 2025-11-25 16:27:22.508 254096 DEBUG nova.virt.libvirt.driver [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:27:22 compute-0 nova_compute[254092]: 2025-11-25 16:27:22.509 254096 INFO nova.virt.libvirt.driver [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Creating image(s)
Nov 25 16:27:22 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:27:22 compute-0 nova_compute[254092]: 2025-11-25 16:27:22.541 254096 DEBUG nova.storage.rbd_utils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] rbd image c058218b-7732-4ced-b6a3-bb04203967d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:27:22 compute-0 nova_compute[254092]: 2025-11-25 16:27:22.565 254096 DEBUG nova.storage.rbd_utils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] rbd image c058218b-7732-4ced-b6a3-bb04203967d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:27:22 compute-0 nova_compute[254092]: 2025-11-25 16:27:22.588 254096 DEBUG nova.storage.rbd_utils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] rbd image c058218b-7732-4ced-b6a3-bb04203967d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:27:22 compute-0 nova_compute[254092]: 2025-11-25 16:27:22.592 254096 DEBUG oslo_concurrency.processutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:22 compute-0 podman[276582]: 2025-11-25 16:27:22.655935614 +0000 UTC m=+0.075136985 container exec 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:27:22 compute-0 nova_compute[254092]: 2025-11-25 16:27:22.669 254096 DEBUG oslo_concurrency.processutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:22 compute-0 nova_compute[254092]: 2025-11-25 16:27:22.670 254096 DEBUG oslo_concurrency.lockutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:27:22 compute-0 nova_compute[254092]: 2025-11-25 16:27:22.670 254096 DEBUG oslo_concurrency.lockutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:27:22 compute-0 nova_compute[254092]: 2025-11-25 16:27:22.671 254096 DEBUG oslo_concurrency.lockutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:27:22 compute-0 nova_compute[254092]: 2025-11-25 16:27:22.697 254096 DEBUG nova.storage.rbd_utils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] rbd image c058218b-7732-4ced-b6a3-bb04203967d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:27:22 compute-0 nova_compute[254092]: 2025-11-25 16:27:22.703 254096 DEBUG oslo_concurrency.processutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 c058218b-7732-4ced-b6a3-bb04203967d4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:22 compute-0 podman[276582]: 2025-11-25 16:27:22.742212682 +0000 UTC m=+0.161414043 container exec_died 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 16:27:22 compute-0 nova_compute[254092]: 2025-11-25 16:27:22.795 254096 DEBUG nova.network.neutron [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 25 16:27:22 compute-0 nova_compute[254092]: 2025-11-25 16:27:22.796 254096 DEBUG nova.compute.manager [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:27:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1168: 321 pgs: 321 active+clean; 214 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 201 op/s
Nov 25 16:27:22 compute-0 nova_compute[254092]: 2025-11-25 16:27:22.945 254096 DEBUG oslo_concurrency.lockutils [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Acquiring lock "138942ff-b720-4101-8dcf-38958751745b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:27:22 compute-0 nova_compute[254092]: 2025-11-25 16:27:22.945 254096 DEBUG oslo_concurrency.lockutils [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Lock "138942ff-b720-4101-8dcf-38958751745b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:27:22 compute-0 nova_compute[254092]: 2025-11-25 16:27:22.946 254096 DEBUG oslo_concurrency.lockutils [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Acquiring lock "138942ff-b720-4101-8dcf-38958751745b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:27:22 compute-0 nova_compute[254092]: 2025-11-25 16:27:22.946 254096 DEBUG oslo_concurrency.lockutils [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Lock "138942ff-b720-4101-8dcf-38958751745b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:27:22 compute-0 nova_compute[254092]: 2025-11-25 16:27:22.946 254096 DEBUG oslo_concurrency.lockutils [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Lock "138942ff-b720-4101-8dcf-38958751745b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:27:22 compute-0 nova_compute[254092]: 2025-11-25 16:27:22.950 254096 INFO nova.compute.manager [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Terminating instance
Nov 25 16:27:22 compute-0 nova_compute[254092]: 2025-11-25 16:27:22.952 254096 DEBUG nova.compute.manager [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:27:23 compute-0 ovn_controller[153477]: 2025-11-25T16:27:23Z|00049|binding|INFO|Releasing lport 0d00cd0f-f859-441b-b6f5-82cb8f44f315 from this chassis (sb_readonly=0)
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.024 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.073 254096 DEBUG oslo_concurrency.processutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 c058218b-7732-4ced-b6a3-bb04203967d4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.369s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:23 compute-0 kernel: tapf445c9f8-c2 (unregistering): left promiscuous mode
Nov 25 16:27:23 compute-0 NetworkManager[48891]: <info>  [1764088043.0869] device (tapf445c9f8-c2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:27:23 compute-0 ovn_controller[153477]: 2025-11-25T16:27:23Z|00050|binding|INFO|Releasing lport 0d00cd0f-f859-441b-b6f5-82cb8f44f315 from this chassis (sb_readonly=0)
Nov 25 16:27:23 compute-0 ovn_controller[153477]: 2025-11-25T16:27:23Z|00051|binding|INFO|Releasing lport f445c9f8-c211-4af8-a66d-21cacc81fdc5 from this chassis (sb_readonly=0)
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.119 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.124 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:27:23 compute-0 ovn_controller[153477]: 2025-11-25T16:27:23Z|00052|binding|INFO|Removing iface tapf445c9f8-c2 ovn-installed in OVS
Nov 25 16:27:23 compute-0 ovn_controller[153477]: 2025-11-25T16:27:23Z|00053|binding|INFO|Setting lport f445c9f8-c211-4af8-a66d-21cacc81fdc5 down in Southbound
Nov 25 16:27:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:27:23.128 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:94:8b 10.100.0.3'], port_security=['fa:16:3e:f7:94:8b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '138942ff-b720-4101-8dcf-38958751745b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-12698a0a-7c9a-41c0-97e4-92c265b0a639', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '33fbb668df82403b9f379e45132213fd', 'neutron:revision_number': '4', 'neutron:security_group_ids': '554076c7-aad0-4f65-8aee-4a40c468d6fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8958ae03-9d40-4b3e-bee3-6d4c69009647, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=f445c9f8-c211-4af8-a66d-21cacc81fdc5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:27:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:27:23.129 163338 INFO neutron.agent.ovn.metadata.agent [-] Port f445c9f8-c211-4af8-a66d-21cacc81fdc5 in datapath 12698a0a-7c9a-41c0-97e4-92c265b0a639 unbound from our chassis
Nov 25 16:27:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:27:23.129 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 12698a0a-7c9a-41c0-97e4-92c265b0a639, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:27:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:27:23.131 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2c3229cc-74fe-4090-88e2-c5ac1a07adc7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:27:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:27:23.131 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-12698a0a-7c9a-41c0-97e4-92c265b0a639 namespace which is not needed anymore
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.141 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:27:23 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Nov 25 16:27:23 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Consumed 14.723s CPU time.
Nov 25 16:27:23 compute-0 systemd-machined[216343]: Machine qemu-11-instance-0000000b terminated.
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.205 254096 DEBUG nova.storage.rbd_utils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] resizing rbd image c058218b-7732-4ced-b6a3-bb04203967d4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.233 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:27:23 compute-0 ceph-mon[74985]: pgmap v1168: 321 pgs: 321 active+clean; 214 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 201 op/s
Nov 25 16:27:23 compute-0 neutron-haproxy-ovnmeta-12698a0a-7c9a-41c0-97e4-92c265b0a639[275462]: [NOTICE]   (275466) : haproxy version is 2.8.14-c23fe91
Nov 25 16:27:23 compute-0 neutron-haproxy-ovnmeta-12698a0a-7c9a-41c0-97e4-92c265b0a639[275462]: [NOTICE]   (275466) : path to executable is /usr/sbin/haproxy
Nov 25 16:27:23 compute-0 neutron-haproxy-ovnmeta-12698a0a-7c9a-41c0-97e4-92c265b0a639[275462]: [WARNING]  (275466) : Exiting Master process...
Nov 25 16:27:23 compute-0 neutron-haproxy-ovnmeta-12698a0a-7c9a-41c0-97e4-92c265b0a639[275462]: [ALERT]    (275466) : Current worker (275468) exited with code 143 (Terminated)
Nov 25 16:27:23 compute-0 neutron-haproxy-ovnmeta-12698a0a-7c9a-41c0-97e4-92c265b0a639[275462]: [WARNING]  (275466) : All workers exited. Exiting... (0)
Nov 25 16:27:23 compute-0 systemd[1]: libpod-15b1183ce451190b88380abbb8981e29cf1ae3c50ce72d80046ae7dbfccc5582.scope: Deactivated successfully.
Nov 25 16:27:23 compute-0 podman[276827]: 2025-11-25 16:27:23.305882722 +0000 UTC m=+0.066343336 container died 15b1183ce451190b88380abbb8981e29cf1ae3c50ce72d80046ae7dbfccc5582 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-12698a0a-7c9a-41c0-97e4-92c265b0a639, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.327 254096 DEBUG nova.objects.instance [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lazy-loading 'migration_context' on Instance uuid c058218b-7732-4ced-b6a3-bb04203967d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.341 254096 DEBUG nova.virt.libvirt.driver [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.342 254096 DEBUG nova.virt.libvirt.driver [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Ensure instance console log exists: /var/lib/nova/instances/c058218b-7732-4ced-b6a3-bb04203967d4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.342 254096 DEBUG oslo_concurrency.lockutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.343 254096 DEBUG oslo_concurrency.lockutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.343 254096 DEBUG oslo_concurrency.lockutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.344 254096 DEBUG nova.virt.libvirt.driver [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.347 254096 DEBUG nova.compute.manager [req-8bb379fa-51aa-4bfa-95ac-6486a51216e2 req-19c85634-0495-425d-993e-b7b9bdc99d27 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Received event network-vif-unplugged-f445c9f8-c211-4af8-a66d-21cacc81fdc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.347 254096 DEBUG oslo_concurrency.lockutils [req-8bb379fa-51aa-4bfa-95ac-6486a51216e2 req-19c85634-0495-425d-993e-b7b9bdc99d27 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "138942ff-b720-4101-8dcf-38958751745b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.348 254096 DEBUG oslo_concurrency.lockutils [req-8bb379fa-51aa-4bfa-95ac-6486a51216e2 req-19c85634-0495-425d-993e-b7b9bdc99d27 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "138942ff-b720-4101-8dcf-38958751745b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.348 254096 DEBUG oslo_concurrency.lockutils [req-8bb379fa-51aa-4bfa-95ac-6486a51216e2 req-19c85634-0495-425d-993e-b7b9bdc99d27 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "138942ff-b720-4101-8dcf-38958751745b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.348 254096 DEBUG nova.compute.manager [req-8bb379fa-51aa-4bfa-95ac-6486a51216e2 req-19c85634-0495-425d-993e-b7b9bdc99d27 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] No waiting events found dispatching network-vif-unplugged-f445c9f8-c211-4af8-a66d-21cacc81fdc5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.348 254096 DEBUG nova.compute.manager [req-8bb379fa-51aa-4bfa-95ac-6486a51216e2 req-19c85634-0495-425d-993e-b7b9bdc99d27 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Received event network-vif-unplugged-f445c9f8-c211-4af8-a66d-21cacc81fdc5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:27:23 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-15b1183ce451190b88380abbb8981e29cf1ae3c50ce72d80046ae7dbfccc5582-userdata-shm.mount: Deactivated successfully.
Nov 25 16:27:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5912db2bca04cbf22b9662e16eec824478d559ed5d99feb271afa3bc163b746-merged.mount: Deactivated successfully.
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.355 254096 WARNING nova.virt.libvirt.driver [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.361 254096 DEBUG nova.virt.libvirt.host [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.362 254096 DEBUG nova.virt.libvirt.host [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.366 254096 DEBUG nova.virt.libvirt.host [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.366 254096 DEBUG nova.virt.libvirt.host [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.367 254096 DEBUG nova.virt.libvirt.driver [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.367 254096 DEBUG nova.virt.hardware [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.367 254096 DEBUG nova.virt.hardware [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.368 254096 DEBUG nova.virt.hardware [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.368 254096 DEBUG nova.virt.hardware [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.368 254096 DEBUG nova.virt.hardware [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.368 254096 DEBUG nova.virt.hardware [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.368 254096 DEBUG nova.virt.hardware [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.369 254096 DEBUG nova.virt.hardware [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.369 254096 DEBUG nova.virt.hardware [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.369 254096 DEBUG nova.virt.hardware [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.369 254096 DEBUG nova.virt.hardware [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.372 254096 DEBUG oslo_concurrency.processutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:23 compute-0 podman[276827]: 2025-11-25 16:27:23.382518738 +0000 UTC m=+0.142979352 container cleanup 15b1183ce451190b88380abbb8981e29cf1ae3c50ce72d80046ae7dbfccc5582 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-12698a0a-7c9a-41c0-97e4-92c265b0a639, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.391 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:27:23 compute-0 systemd[1]: libpod-conmon-15b1183ce451190b88380abbb8981e29cf1ae3c50ce72d80046ae7dbfccc5582.scope: Deactivated successfully.
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.412 254096 INFO nova.virt.libvirt.driver [-] [instance: 138942ff-b720-4101-8dcf-38958751745b] Instance destroyed successfully.
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.412 254096 DEBUG nova.objects.instance [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Lazy-loading 'resources' on Instance uuid 138942ff-b720-4101-8dcf-38958751745b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.427 254096 DEBUG nova.virt.libvirt.vif [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:26:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-964763500',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-964763500',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-964763500',id=11,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:26:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='33fbb668df82403b9f379e45132213fd',ramdisk_id='',reservation_id='r-7p1lpp0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-1669688315',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-1669688315-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:26:54Z,user_data=None,user_id='dda8ef18e79e4220b420023d65ccb78a',uuid=138942ff-b720-4101-8dcf-38958751745b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f445c9f8-c211-4af8-a66d-21cacc81fdc5", "address": "fa:16:3e:f7:94:8b", "network": {"id": "12698a0a-7c9a-41c0-97e4-92c265b0a639", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1381578390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33fbb668df82403b9f379e45132213fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf445c9f8-c2", "ovs_interfaceid": "f445c9f8-c211-4af8-a66d-21cacc81fdc5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.428 254096 DEBUG nova.network.os_vif_util [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Converting VIF {"id": "f445c9f8-c211-4af8-a66d-21cacc81fdc5", "address": "fa:16:3e:f7:94:8b", "network": {"id": "12698a0a-7c9a-41c0-97e4-92c265b0a639", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1381578390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33fbb668df82403b9f379e45132213fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf445c9f8-c2", "ovs_interfaceid": "f445c9f8-c211-4af8-a66d-21cacc81fdc5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.429 254096 DEBUG nova.network.os_vif_util [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f7:94:8b,bridge_name='br-int',has_traffic_filtering=True,id=f445c9f8-c211-4af8-a66d-21cacc81fdc5,network=Network(12698a0a-7c9a-41c0-97e4-92c265b0a639),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf445c9f8-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.429 254096 DEBUG os_vif [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f7:94:8b,bridge_name='br-int',has_traffic_filtering=True,id=f445c9f8-c211-4af8-a66d-21cacc81fdc5,network=Network(12698a0a-7c9a-41c0-97e4-92c265b0a639),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf445c9f8-c2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.432 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.432 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf445c9f8-c2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:27:23 compute-0 sudo[276450]: pam_unix(sudo:session): session closed for user root
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.437 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.440 254096 INFO os_vif [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f7:94:8b,bridge_name='br-int',has_traffic_filtering=True,id=f445c9f8-c211-4af8-a66d-21cacc81fdc5,network=Network(12698a0a-7c9a-41c0-97e4-92c265b0a639),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf445c9f8-c2')
Nov 25 16:27:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:27:23 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:27:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:27:23 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:27:23 compute-0 podman[276907]: 2025-11-25 16:27:23.485999464 +0000 UTC m=+0.065496574 container remove 15b1183ce451190b88380abbb8981e29cf1ae3c50ce72d80046ae7dbfccc5582 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-12698a0a-7c9a-41c0-97e4-92c265b0a639, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:27:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:27:23.493 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[540ef775-ff86-45a7-bacc-8f01a308776e]: (4, ('Tue Nov 25 04:27:23 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-12698a0a-7c9a-41c0-97e4-92c265b0a639 (15b1183ce451190b88380abbb8981e29cf1ae3c50ce72d80046ae7dbfccc5582)\n15b1183ce451190b88380abbb8981e29cf1ae3c50ce72d80046ae7dbfccc5582\nTue Nov 25 04:27:23 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-12698a0a-7c9a-41c0-97e4-92c265b0a639 (15b1183ce451190b88380abbb8981e29cf1ae3c50ce72d80046ae7dbfccc5582)\n15b1183ce451190b88380abbb8981e29cf1ae3c50ce72d80046ae7dbfccc5582\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:27:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:27:23.496 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e0eeddf2-7612-450f-b3b3-71b725ed639d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:27:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:27:23.497 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap12698a0a-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:27:23 compute-0 kernel: tap12698a0a-70: left promiscuous mode
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.502 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:27:23 compute-0 sudo[276936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.514 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:27:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:27:23.515 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c9951429-7cd1-4ee8-9cc7-5b688fcf33e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:27:23 compute-0 sudo[276936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:27:23 compute-0 sudo[276936]: pam_unix(sudo:session): session closed for user root
Nov 25 16:27:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:27:23.537 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c7849fbc-aab9-4fe8-b422-f8d97210aca4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:27:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:27:23.539 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f8b1689e-9031-4fe4-8e25-be936af12294]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:27:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:27:23.556 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[dffe8853-6e40-4936-8469-b8fbaed47c9a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 446951, 'reachable_time': 44104, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276999, 'error': None, 'target': 'ovnmeta-12698a0a-7c9a-41c0-97e4-92c265b0a639', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:27:23 compute-0 systemd[1]: run-netns-ovnmeta\x2d12698a0a\x2d7c9a\x2d41c0\x2d97e4\x2d92c265b0a639.mount: Deactivated successfully.
Nov 25 16:27:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:27:23.560 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-12698a0a-7c9a-41c0-97e4-92c265b0a639 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:27:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:27:23.561 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[d8dee2a3-8ac7-4ae8-b69f-73003447c8ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:27:23 compute-0 sudo[276986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:27:23 compute-0 sudo[276986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:27:23 compute-0 sudo[276986]: pam_unix(sudo:session): session closed for user root
Nov 25 16:27:23 compute-0 sudo[277013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:27:23 compute-0 sudo[277013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:27:23 compute-0 sudo[277013]: pam_unix(sudo:session): session closed for user root
Nov 25 16:27:23 compute-0 sudo[277038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 16:27:23 compute-0 sudo[277038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:27:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:27:23 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2165044617' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.869 254096 DEBUG oslo_concurrency.processutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.903 254096 DEBUG nova.storage.rbd_utils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] rbd image c058218b-7732-4ced-b6a3-bb04203967d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.919 254096 DEBUG oslo_concurrency.processutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.954 254096 INFO nova.virt.libvirt.driver [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Deleting instance files /var/lib/nova/instances/138942ff-b720-4101-8dcf-38958751745b_del
Nov 25 16:27:23 compute-0 nova_compute[254092]: 2025-11-25 16:27:23.955 254096 INFO nova.virt.libvirt.driver [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Deletion of /var/lib/nova/instances/138942ff-b720-4101-8dcf-38958751745b_del complete
Nov 25 16:27:24 compute-0 nova_compute[254092]: 2025-11-25 16:27:24.050 254096 INFO nova.compute.manager [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Took 1.10 seconds to destroy the instance on the hypervisor.
Nov 25 16:27:24 compute-0 nova_compute[254092]: 2025-11-25 16:27:24.051 254096 DEBUG oslo.service.loopingcall [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:27:24 compute-0 nova_compute[254092]: 2025-11-25 16:27:24.051 254096 DEBUG nova.compute.manager [-] [instance: 138942ff-b720-4101-8dcf-38958751745b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:27:24 compute-0 nova_compute[254092]: 2025-11-25 16:27:24.051 254096 DEBUG nova.network.neutron [-] [instance: 138942ff-b720-4101-8dcf-38958751745b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:27:24 compute-0 sudo[277038]: pam_unix(sudo:session): session closed for user root
Nov 25 16:27:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:27:24 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:27:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 16:27:24 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:27:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 16:27:24 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:27:24 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev de185658-503a-4bf6-8832-8651140d8904 does not exist
Nov 25 16:27:24 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev e7c1a43a-83fa-4d74-8b3b-7f624197e110 does not exist
Nov 25 16:27:24 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 7542f6f8-f3c9-4dbd-88e7-473bb591cb63 does not exist
Nov 25 16:27:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 16:27:24 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:27:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 16:27:24 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:27:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:27:24 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:27:24 compute-0 sudo[277133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:27:24 compute-0 sudo[277133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:27:24 compute-0 sudo[277133]: pam_unix(sudo:session): session closed for user root
Nov 25 16:27:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:27:24 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3603418679' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:27:24 compute-0 sudo[277158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:27:24 compute-0 sudo[277158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:27:24 compute-0 sudo[277158]: pam_unix(sudo:session): session closed for user root
Nov 25 16:27:24 compute-0 nova_compute[254092]: 2025-11-25 16:27:24.387 254096 DEBUG oslo_concurrency.processutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:24 compute-0 nova_compute[254092]: 2025-11-25 16:27:24.388 254096 DEBUG nova.objects.instance [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lazy-loading 'pci_devices' on Instance uuid c058218b-7732-4ced-b6a3-bb04203967d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:27:24 compute-0 nova_compute[254092]: 2025-11-25 16:27:24.403 254096 DEBUG nova.virt.libvirt.driver [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:27:24 compute-0 nova_compute[254092]:   <uuid>c058218b-7732-4ced-b6a3-bb04203967d4</uuid>
Nov 25 16:27:24 compute-0 nova_compute[254092]:   <name>instance-0000000e</name>
Nov 25 16:27:24 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:27:24 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:27:24 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:27:24 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:       <nova:name>tempest-LiveMigrationNegativeTest-server-3347679</nova:name>
Nov 25 16:27:24 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:27:23</nova:creationTime>
Nov 25 16:27:24 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:27:24 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:27:24 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:27:24 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:27:24 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:27:24 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:27:24 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:27:24 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:27:24 compute-0 nova_compute[254092]:         <nova:user uuid="6fecc7ec96b94801b693d75b96da5cca">tempest-LiveMigrationNegativeTest-1263337402-project-member</nova:user>
Nov 25 16:27:24 compute-0 nova_compute[254092]:         <nova:project uuid="200751433d7c4e9994df0ea449a6cb48">tempest-LiveMigrationNegativeTest-1263337402</nova:project>
Nov 25 16:27:24 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:27:24 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:       <nova:ports/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:27:24 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:27:24 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:27:24 compute-0 nova_compute[254092]:     <system>
Nov 25 16:27:24 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:27:24 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:27:24 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:27:24 compute-0 nova_compute[254092]:       <entry name="serial">c058218b-7732-4ced-b6a3-bb04203967d4</entry>
Nov 25 16:27:24 compute-0 nova_compute[254092]:       <entry name="uuid">c058218b-7732-4ced-b6a3-bb04203967d4</entry>
Nov 25 16:27:24 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     </system>
Nov 25 16:27:24 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:27:24 compute-0 nova_compute[254092]:   <os>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:   </os>
Nov 25 16:27:24 compute-0 nova_compute[254092]:   <features>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:   </features>
Nov 25 16:27:24 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:27:24 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:27:24 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:27:24 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:27:24 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:27:24 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/c058218b-7732-4ced-b6a3-bb04203967d4_disk">
Nov 25 16:27:24 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:       </source>
Nov 25 16:27:24 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:27:24 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:27:24 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:27:24 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/c058218b-7732-4ced-b6a3-bb04203967d4_disk.config">
Nov 25 16:27:24 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:       </source>
Nov 25 16:27:24 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:27:24 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:27:24 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:27:24 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/c058218b-7732-4ced-b6a3-bb04203967d4/console.log" append="off"/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     <video>
Nov 25 16:27:24 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     </video>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:27:24 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:27:24 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:27:24 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:27:24 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:27:24 compute-0 nova_compute[254092]: </domain>
Nov 25 16:27:24 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:27:24 compute-0 sudo[277185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:27:24 compute-0 sudo[277185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:27:24 compute-0 sudo[277185]: pam_unix(sudo:session): session closed for user root
Nov 25 16:27:24 compute-0 sudo[277211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 16:27:24 compute-0 sudo[277211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:27:24 compute-0 nova_compute[254092]: 2025-11-25 16:27:24.497 254096 DEBUG nova.virt.libvirt.driver [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:27:24 compute-0 nova_compute[254092]: 2025-11-25 16:27:24.497 254096 DEBUG nova.virt.libvirt.driver [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:27:24 compute-0 nova_compute[254092]: 2025-11-25 16:27:24.498 254096 INFO nova.virt.libvirt.driver [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Using config drive
Nov 25 16:27:24 compute-0 nova_compute[254092]: 2025-11-25 16:27:24.517 254096 DEBUG nova.storage.rbd_utils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] rbd image c058218b-7732-4ced-b6a3-bb04203967d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:27:24 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:27:24 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:27:24 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2165044617' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:27:24 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:27:24 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:27:24 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:27:24 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:27:24 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:27:24 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:27:24 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3603418679' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:27:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1169: 321 pgs: 321 active+clean; 214 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.9 MiB/s wr, 186 op/s
Nov 25 16:27:24 compute-0 podman[277292]: 2025-11-25 16:27:24.860266634 +0000 UTC m=+0.079735341 container create e797a4d1149c5c358875f2b365a76d396ee5907cda43ca73fd0521c47df465d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lehmann, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:27:24 compute-0 systemd[1]: Started libpod-conmon-e797a4d1149c5c358875f2b365a76d396ee5907cda43ca73fd0521c47df465d2.scope.
Nov 25 16:27:24 compute-0 podman[277292]: 2025-11-25 16:27:24.814313014 +0000 UTC m=+0.033781741 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:27:24 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:27:24 compute-0 podman[277292]: 2025-11-25 16:27:24.949317157 +0000 UTC m=+0.168785945 container init e797a4d1149c5c358875f2b365a76d396ee5907cda43ca73fd0521c47df465d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lehmann, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:27:24 compute-0 podman[277292]: 2025-11-25 16:27:24.956300857 +0000 UTC m=+0.175769564 container start e797a4d1149c5c358875f2b365a76d396ee5907cda43ca73fd0521c47df465d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lehmann, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Nov 25 16:27:24 compute-0 podman[277292]: 2025-11-25 16:27:24.960308167 +0000 UTC m=+0.179776964 container attach e797a4d1149c5c358875f2b365a76d396ee5907cda43ca73fd0521c47df465d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lehmann, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:27:24 compute-0 naughty_lehmann[277309]: 167 167
Nov 25 16:27:24 compute-0 systemd[1]: libpod-e797a4d1149c5c358875f2b365a76d396ee5907cda43ca73fd0521c47df465d2.scope: Deactivated successfully.
Nov 25 16:27:24 compute-0 podman[277292]: 2025-11-25 16:27:24.962839756 +0000 UTC m=+0.182308453 container died e797a4d1149c5c358875f2b365a76d396ee5907cda43ca73fd0521c47df465d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lehmann, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:27:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a928615c3f69d9f80e997e222642240b35ff7882a670b333c0c791bf417cfe0-merged.mount: Deactivated successfully.
Nov 25 16:27:24 compute-0 podman[277292]: 2025-11-25 16:27:24.995317379 +0000 UTC m=+0.214786096 container remove e797a4d1149c5c358875f2b365a76d396ee5907cda43ca73fd0521c47df465d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:27:25 compute-0 systemd[1]: libpod-conmon-e797a4d1149c5c358875f2b365a76d396ee5907cda43ca73fd0521c47df465d2.scope: Deactivated successfully.
Nov 25 16:27:25 compute-0 podman[277333]: 2025-11-25 16:27:25.196176136 +0000 UTC m=+0.071972340 container create 52be2a0773d208fb30806a820171df3a3a1a17b086ef0335ae01bd2c92cf8927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:27:25 compute-0 podman[277333]: 2025-11-25 16:27:25.1797866 +0000 UTC m=+0.055582834 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:27:25 compute-0 systemd[1]: Started libpod-conmon-52be2a0773d208fb30806a820171df3a3a1a17b086ef0335ae01bd2c92cf8927.scope.
Nov 25 16:27:25 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:27:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e68ad2a68104bca78eb9b3a5d33b3e93b3ec61e3d71ca24cfab96b9c057b908e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:27:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e68ad2a68104bca78eb9b3a5d33b3e93b3ec61e3d71ca24cfab96b9c057b908e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:27:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e68ad2a68104bca78eb9b3a5d33b3e93b3ec61e3d71ca24cfab96b9c057b908e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:27:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e68ad2a68104bca78eb9b3a5d33b3e93b3ec61e3d71ca24cfab96b9c057b908e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:27:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e68ad2a68104bca78eb9b3a5d33b3e93b3ec61e3d71ca24cfab96b9c057b908e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 16:27:25 compute-0 podman[277333]: 2025-11-25 16:27:25.302205982 +0000 UTC m=+0.178002206 container init 52be2a0773d208fb30806a820171df3a3a1a17b086ef0335ae01bd2c92cf8927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:27:25 compute-0 podman[277333]: 2025-11-25 16:27:25.307834945 +0000 UTC m=+0.183631149 container start 52be2a0773d208fb30806a820171df3a3a1a17b086ef0335ae01bd2c92cf8927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_heyrovsky, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:27:25 compute-0 podman[277333]: 2025-11-25 16:27:25.310533329 +0000 UTC m=+0.186329553 container attach 52be2a0773d208fb30806a820171df3a3a1a17b086ef0335ae01bd2c92cf8927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_heyrovsky, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 16:27:25 compute-0 nova_compute[254092]: 2025-11-25 16:27:25.393 254096 INFO nova.virt.libvirt.driver [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Creating config drive at /var/lib/nova/instances/c058218b-7732-4ced-b6a3-bb04203967d4/disk.config
Nov 25 16:27:25 compute-0 nova_compute[254092]: 2025-11-25 16:27:25.400 254096 DEBUG oslo_concurrency.processutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c058218b-7732-4ced-b6a3-bb04203967d4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9iqo5hjb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:25 compute-0 nova_compute[254092]: 2025-11-25 16:27:25.543 254096 DEBUG oslo_concurrency.processutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c058218b-7732-4ced-b6a3-bb04203967d4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9iqo5hjb" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:25 compute-0 nova_compute[254092]: 2025-11-25 16:27:25.646 254096 DEBUG nova.storage.rbd_utils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] rbd image c058218b-7732-4ced-b6a3-bb04203967d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:27:25 compute-0 nova_compute[254092]: 2025-11-25 16:27:25.649 254096 DEBUG oslo_concurrency.processutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c058218b-7732-4ced-b6a3-bb04203967d4/disk.config c058218b-7732-4ced-b6a3-bb04203967d4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:25 compute-0 ceph-mon[74985]: pgmap v1169: 321 pgs: 321 active+clean; 214 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.9 MiB/s wr, 186 op/s
Nov 25 16:27:26 compute-0 nova_compute[254092]: 2025-11-25 16:27:26.086 254096 DEBUG oslo_concurrency.processutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c058218b-7732-4ced-b6a3-bb04203967d4/disk.config c058218b-7732-4ced-b6a3-bb04203967d4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:26 compute-0 nova_compute[254092]: 2025-11-25 16:27:26.088 254096 INFO nova.virt.libvirt.driver [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Deleting local config drive /var/lib/nova/instances/c058218b-7732-4ced-b6a3-bb04203967d4/disk.config because it was imported into RBD.
Nov 25 16:27:26 compute-0 systemd-machined[216343]: New machine qemu-14-instance-0000000e.
Nov 25 16:27:26 compute-0 systemd[1]: Started Virtual Machine qemu-14-instance-0000000e.
Nov 25 16:27:26 compute-0 nova_compute[254092]: 2025-11-25 16:27:26.176 254096 DEBUG nova.compute.manager [req-8927c24a-3d68-4cdb-9d54-06f7e6b981c6 req-e3445eb7-55ad-45ff-b564-052857d5bc4f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Received event network-vif-plugged-f445c9f8-c211-4af8-a66d-21cacc81fdc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:27:26 compute-0 nova_compute[254092]: 2025-11-25 16:27:26.176 254096 DEBUG oslo_concurrency.lockutils [req-8927c24a-3d68-4cdb-9d54-06f7e6b981c6 req-e3445eb7-55ad-45ff-b564-052857d5bc4f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "138942ff-b720-4101-8dcf-38958751745b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:27:26 compute-0 nova_compute[254092]: 2025-11-25 16:27:26.177 254096 DEBUG oslo_concurrency.lockutils [req-8927c24a-3d68-4cdb-9d54-06f7e6b981c6 req-e3445eb7-55ad-45ff-b564-052857d5bc4f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "138942ff-b720-4101-8dcf-38958751745b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:27:26 compute-0 nova_compute[254092]: 2025-11-25 16:27:26.177 254096 DEBUG oslo_concurrency.lockutils [req-8927c24a-3d68-4cdb-9d54-06f7e6b981c6 req-e3445eb7-55ad-45ff-b564-052857d5bc4f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "138942ff-b720-4101-8dcf-38958751745b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:27:26 compute-0 nova_compute[254092]: 2025-11-25 16:27:26.177 254096 DEBUG nova.compute.manager [req-8927c24a-3d68-4cdb-9d54-06f7e6b981c6 req-e3445eb7-55ad-45ff-b564-052857d5bc4f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] No waiting events found dispatching network-vif-plugged-f445c9f8-c211-4af8-a66d-21cacc81fdc5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:27:26 compute-0 nova_compute[254092]: 2025-11-25 16:27:26.177 254096 WARNING nova.compute.manager [req-8927c24a-3d68-4cdb-9d54-06f7e6b981c6 req-e3445eb7-55ad-45ff-b564-052857d5bc4f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Received unexpected event network-vif-plugged-f445c9f8-c211-4af8-a66d-21cacc81fdc5 for instance with vm_state active and task_state deleting.
Nov 25 16:27:26 compute-0 unruffled_heyrovsky[277350]: --> passed data devices: 0 physical, 3 LVM
Nov 25 16:27:26 compute-0 unruffled_heyrovsky[277350]: --> relative data size: 1.0
Nov 25 16:27:26 compute-0 unruffled_heyrovsky[277350]: --> All data devices are unavailable
Nov 25 16:27:26 compute-0 systemd[1]: libpod-52be2a0773d208fb30806a820171df3a3a1a17b086ef0335ae01bd2c92cf8927.scope: Deactivated successfully.
Nov 25 16:27:26 compute-0 systemd[1]: libpod-52be2a0773d208fb30806a820171df3a3a1a17b086ef0335ae01bd2c92cf8927.scope: Consumed 1.010s CPU time.
Nov 25 16:27:26 compute-0 conmon[277350]: conmon 52be2a0773d208fb3080 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-52be2a0773d208fb30806a820171df3a3a1a17b086ef0335ae01bd2c92cf8927.scope/container/memory.events
Nov 25 16:27:26 compute-0 podman[277333]: 2025-11-25 16:27:26.414253006 +0000 UTC m=+1.290049210 container died 52be2a0773d208fb30806a820171df3a3a1a17b086ef0335ae01bd2c92cf8927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_heyrovsky, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:27:26 compute-0 nova_compute[254092]: 2025-11-25 16:27:26.429 254096 DEBUG nova.network.neutron [-] [instance: 138942ff-b720-4101-8dcf-38958751745b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:27:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-e68ad2a68104bca78eb9b3a5d33b3e93b3ec61e3d71ca24cfab96b9c057b908e-merged.mount: Deactivated successfully.
Nov 25 16:27:26 compute-0 nova_compute[254092]: 2025-11-25 16:27:26.464 254096 INFO nova.compute.manager [-] [instance: 138942ff-b720-4101-8dcf-38958751745b] Took 2.41 seconds to deallocate network for instance.
Nov 25 16:27:26 compute-0 podman[277333]: 2025-11-25 16:27:26.476365466 +0000 UTC m=+1.352161670 container remove 52be2a0773d208fb30806a820171df3a3a1a17b086ef0335ae01bd2c92cf8927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_heyrovsky, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 16:27:26 compute-0 nova_compute[254092]: 2025-11-25 16:27:26.504 254096 DEBUG oslo_concurrency.lockutils [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:27:26 compute-0 nova_compute[254092]: 2025-11-25 16:27:26.504 254096 DEBUG oslo_concurrency.lockutils [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:27:26 compute-0 systemd[1]: libpod-conmon-52be2a0773d208fb30806a820171df3a3a1a17b086ef0335ae01bd2c92cf8927.scope: Deactivated successfully.
Nov 25 16:27:26 compute-0 sudo[277211]: pam_unix(sudo:session): session closed for user root
Nov 25 16:27:26 compute-0 nova_compute[254092]: 2025-11-25 16:27:26.512 254096 DEBUG nova.compute.manager [req-52d1e66c-1561-4a55-bf6e-96a0474b88d3 req-3a4f7140-adb9-4e8d-a1be-f86090c77745 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 138942ff-b720-4101-8dcf-38958751745b] Received event network-vif-deleted-f445c9f8-c211-4af8-a66d-21cacc81fdc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:27:26 compute-0 sudo[277444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:27:26 compute-0 sudo[277444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:27:26 compute-0 sudo[277444]: pam_unix(sudo:session): session closed for user root
Nov 25 16:27:26 compute-0 nova_compute[254092]: 2025-11-25 16:27:26.601 254096 DEBUG oslo_concurrency.processutils [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:26 compute-0 sudo[277476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:27:26 compute-0 sudo[277476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:27:26 compute-0 sudo[277476]: pam_unix(sudo:session): session closed for user root
Nov 25 16:27:26 compute-0 sudo[277528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:27:26 compute-0 sudo[277528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:27:26 compute-0 sudo[277528]: pam_unix(sudo:session): session closed for user root
Nov 25 16:27:26 compute-0 sudo[277561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 16:27:26 compute-0 sudo[277561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:27:26 compute-0 nova_compute[254092]: 2025-11-25 16:27:26.750 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088046.7499244, c058218b-7732-4ced-b6a3-bb04203967d4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:27:26 compute-0 nova_compute[254092]: 2025-11-25 16:27:26.751 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] VM Resumed (Lifecycle Event)
Nov 25 16:27:26 compute-0 nova_compute[254092]: 2025-11-25 16:27:26.756 254096 DEBUG nova.compute.manager [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:27:26 compute-0 nova_compute[254092]: 2025-11-25 16:27:26.757 254096 DEBUG nova.virt.libvirt.driver [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:27:26 compute-0 nova_compute[254092]: 2025-11-25 16:27:26.761 254096 INFO nova.virt.libvirt.driver [-] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Instance spawned successfully.
Nov 25 16:27:26 compute-0 nova_compute[254092]: 2025-11-25 16:27:26.762 254096 DEBUG nova.virt.libvirt.driver [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:27:26 compute-0 nova_compute[254092]: 2025-11-25 16:27:26.790 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:27:26 compute-0 nova_compute[254092]: 2025-11-25 16:27:26.796 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:27:26 compute-0 nova_compute[254092]: 2025-11-25 16:27:26.809 254096 DEBUG nova.virt.libvirt.driver [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:27:26 compute-0 nova_compute[254092]: 2025-11-25 16:27:26.809 254096 DEBUG nova.virt.libvirt.driver [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:27:26 compute-0 nova_compute[254092]: 2025-11-25 16:27:26.810 254096 DEBUG nova.virt.libvirt.driver [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:27:26 compute-0 nova_compute[254092]: 2025-11-25 16:27:26.810 254096 DEBUG nova.virt.libvirt.driver [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:27:26 compute-0 nova_compute[254092]: 2025-11-25 16:27:26.811 254096 DEBUG nova.virt.libvirt.driver [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:27:26 compute-0 nova_compute[254092]: 2025-11-25 16:27:26.812 254096 DEBUG nova.virt.libvirt.driver [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:27:26 compute-0 nova_compute[254092]: 2025-11-25 16:27:26.821 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:27:26 compute-0 nova_compute[254092]: 2025-11-25 16:27:26.822 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088046.755951, c058218b-7732-4ced-b6a3-bb04203967d4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:27:26 compute-0 nova_compute[254092]: 2025-11-25 16:27:26.822 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] VM Started (Lifecycle Event)
Nov 25 16:27:26 compute-0 nova_compute[254092]: 2025-11-25 16:27:26.854 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:27:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1170: 321 pgs: 321 active+clean; 206 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 7.0 MiB/s wr, 272 op/s
Nov 25 16:27:26 compute-0 nova_compute[254092]: 2025-11-25 16:27:26.859 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:27:26 compute-0 nova_compute[254092]: 2025-11-25 16:27:26.898 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:27:26 compute-0 nova_compute[254092]: 2025-11-25 16:27:26.915 254096 INFO nova.compute.manager [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Took 4.41 seconds to spawn the instance on the hypervisor.
Nov 25 16:27:26 compute-0 nova_compute[254092]: 2025-11-25 16:27:26.916 254096 DEBUG nova.compute.manager [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:27:26 compute-0 ceph-mon[74985]: pgmap v1170: 321 pgs: 321 active+clean; 206 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 7.0 MiB/s wr, 272 op/s
Nov 25 16:27:26 compute-0 nova_compute[254092]: 2025-11-25 16:27:26.981 254096 INFO nova.compute.manager [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Took 5.52 seconds to build instance.
Nov 25 16:27:27 compute-0 nova_compute[254092]: 2025-11-25 16:27:27.002 254096 DEBUG oslo_concurrency.lockutils [None req-ec4acb48-2096-4feb-8868-e0a5a4513623 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "c058218b-7732-4ced-b6a3-bb04203967d4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:27:27 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:27:27 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1595038871' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:27:27 compute-0 nova_compute[254092]: 2025-11-25 16:27:27.112 254096 DEBUG oslo_concurrency.processutils [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:27 compute-0 podman[277643]: 2025-11-25 16:27:27.113252809 +0000 UTC m=+0.041745477 container create 85d379235eb467404601c1ec186a322e66ed8e1567a0f0d2c78fb94f2d1e7cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:27:27 compute-0 nova_compute[254092]: 2025-11-25 16:27:27.120 254096 DEBUG nova.compute.provider_tree [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:27:27 compute-0 nova_compute[254092]: 2025-11-25 16:27:27.142 254096 DEBUG nova.scheduler.client.report [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:27:27 compute-0 nova_compute[254092]: 2025-11-25 16:27:27.161 254096 DEBUG oslo_concurrency.lockutils [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:27:27 compute-0 systemd[1]: Started libpod-conmon-85d379235eb467404601c1ec186a322e66ed8e1567a0f0d2c78fb94f2d1e7cd7.scope.
Nov 25 16:27:27 compute-0 podman[277643]: 2025-11-25 16:27:27.093347757 +0000 UTC m=+0.021840465 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:27:27 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:27:27 compute-0 nova_compute[254092]: 2025-11-25 16:27:27.211 254096 INFO nova.scheduler.client.report [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Deleted allocations for instance 138942ff-b720-4101-8dcf-38958751745b
Nov 25 16:27:27 compute-0 podman[277643]: 2025-11-25 16:27:27.218708069 +0000 UTC m=+0.147200757 container init 85d379235eb467404601c1ec186a322e66ed8e1567a0f0d2c78fb94f2d1e7cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_dirac, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:27:27 compute-0 podman[277643]: 2025-11-25 16:27:27.226022938 +0000 UTC m=+0.154515606 container start 85d379235eb467404601c1ec186a322e66ed8e1567a0f0d2c78fb94f2d1e7cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_dirac, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 16:27:27 compute-0 distracted_dirac[277661]: 167 167
Nov 25 16:27:27 compute-0 systemd[1]: libpod-85d379235eb467404601c1ec186a322e66ed8e1567a0f0d2c78fb94f2d1e7cd7.scope: Deactivated successfully.
Nov 25 16:27:27 compute-0 podman[277643]: 2025-11-25 16:27:27.250207916 +0000 UTC m=+0.178700594 container attach 85d379235eb467404601c1ec186a322e66ed8e1567a0f0d2c78fb94f2d1e7cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Nov 25 16:27:27 compute-0 podman[277643]: 2025-11-25 16:27:27.250899405 +0000 UTC m=+0.179392073 container died 85d379235eb467404601c1ec186a322e66ed8e1567a0f0d2c78fb94f2d1e7cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 16:27:27 compute-0 nova_compute[254092]: 2025-11-25 16:27:27.268 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:27:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d2094a39dfbda8b681b880ee4017f5b653a163c6362aba76a267c57cddcb640-merged.mount: Deactivated successfully.
Nov 25 16:27:27 compute-0 podman[277643]: 2025-11-25 16:27:27.298069168 +0000 UTC m=+0.226561836 container remove 85d379235eb467404601c1ec186a322e66ed8e1567a0f0d2c78fb94f2d1e7cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_dirac, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:27:27 compute-0 systemd[1]: libpod-conmon-85d379235eb467404601c1ec186a322e66ed8e1567a0f0d2c78fb94f2d1e7cd7.scope: Deactivated successfully.
Nov 25 16:27:27 compute-0 nova_compute[254092]: 2025-11-25 16:27:27.313 254096 DEBUG oslo_concurrency.lockutils [None req-a11e8ac4-aa80-4ce9-9c82-d233ecc4d651 dda8ef18e79e4220b420023d65ccb78a 33fbb668df82403b9f379e45132213fd - - default default] Lock "138942ff-b720-4101-8dcf-38958751745b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.367s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:27:27 compute-0 podman[277685]: 2025-11-25 16:27:27.496972371 +0000 UTC m=+0.060741103 container create 5dc8936969dc5805193146082640b254e703d94943590b8ea964862d2bdaa689 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:27:27 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:27:27 compute-0 systemd[1]: Started libpod-conmon-5dc8936969dc5805193146082640b254e703d94943590b8ea964862d2bdaa689.scope.
Nov 25 16:27:27 compute-0 podman[277685]: 2025-11-25 16:27:27.478075378 +0000 UTC m=+0.041844110 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:27:27 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:27:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd4cda011432aba8a2ed89616d7f594f6ee879d32a40bdcfb0ffd75e574d3854/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:27:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd4cda011432aba8a2ed89616d7f594f6ee879d32a40bdcfb0ffd75e574d3854/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:27:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd4cda011432aba8a2ed89616d7f594f6ee879d32a40bdcfb0ffd75e574d3854/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:27:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd4cda011432aba8a2ed89616d7f594f6ee879d32a40bdcfb0ffd75e574d3854/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:27:27 compute-0 podman[277685]: 2025-11-25 16:27:27.627066712 +0000 UTC m=+0.190835494 container init 5dc8936969dc5805193146082640b254e703d94943590b8ea964862d2bdaa689 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 16:27:27 compute-0 podman[277685]: 2025-11-25 16:27:27.637830195 +0000 UTC m=+0.201598937 container start 5dc8936969dc5805193146082640b254e703d94943590b8ea964862d2bdaa689 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_merkle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 16:27:27 compute-0 podman[277685]: 2025-11-25 16:27:27.64207312 +0000 UTC m=+0.205841862 container attach 5dc8936969dc5805193146082640b254e703d94943590b8ea964862d2bdaa689 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_merkle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:27:27 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1595038871' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:27:28 compute-0 serene_merkle[277701]: {
Nov 25 16:27:28 compute-0 serene_merkle[277701]:     "0": [
Nov 25 16:27:28 compute-0 serene_merkle[277701]:         {
Nov 25 16:27:28 compute-0 serene_merkle[277701]:             "devices": [
Nov 25 16:27:28 compute-0 serene_merkle[277701]:                 "/dev/loop3"
Nov 25 16:27:28 compute-0 serene_merkle[277701]:             ],
Nov 25 16:27:28 compute-0 serene_merkle[277701]:             "lv_name": "ceph_lv0",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:             "lv_size": "21470642176",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:             "name": "ceph_lv0",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:             "tags": {
Nov 25 16:27:28 compute-0 serene_merkle[277701]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:                 "ceph.cluster_name": "ceph",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:                 "ceph.crush_device_class": "",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:                 "ceph.encrypted": "0",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:                 "ceph.osd_id": "0",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:                 "ceph.type": "block",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:                 "ceph.vdo": "0"
Nov 25 16:27:28 compute-0 serene_merkle[277701]:             },
Nov 25 16:27:28 compute-0 serene_merkle[277701]:             "type": "block",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:             "vg_name": "ceph_vg0"
Nov 25 16:27:28 compute-0 serene_merkle[277701]:         }
Nov 25 16:27:28 compute-0 serene_merkle[277701]:     ],
Nov 25 16:27:28 compute-0 serene_merkle[277701]:     "1": [
Nov 25 16:27:28 compute-0 serene_merkle[277701]:         {
Nov 25 16:27:28 compute-0 serene_merkle[277701]:             "devices": [
Nov 25 16:27:28 compute-0 serene_merkle[277701]:                 "/dev/loop4"
Nov 25 16:27:28 compute-0 serene_merkle[277701]:             ],
Nov 25 16:27:28 compute-0 serene_merkle[277701]:             "lv_name": "ceph_lv1",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:             "lv_size": "21470642176",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:             "name": "ceph_lv1",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:             "tags": {
Nov 25 16:27:28 compute-0 serene_merkle[277701]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:                 "ceph.cluster_name": "ceph",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:                 "ceph.crush_device_class": "",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:                 "ceph.encrypted": "0",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:                 "ceph.osd_id": "1",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:                 "ceph.type": "block",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:                 "ceph.vdo": "0"
Nov 25 16:27:28 compute-0 serene_merkle[277701]:             },
Nov 25 16:27:28 compute-0 serene_merkle[277701]:             "type": "block",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:             "vg_name": "ceph_vg1"
Nov 25 16:27:28 compute-0 serene_merkle[277701]:         }
Nov 25 16:27:28 compute-0 serene_merkle[277701]:     ],
Nov 25 16:27:28 compute-0 serene_merkle[277701]:     "2": [
Nov 25 16:27:28 compute-0 serene_merkle[277701]:         {
Nov 25 16:27:28 compute-0 serene_merkle[277701]:             "devices": [
Nov 25 16:27:28 compute-0 serene_merkle[277701]:                 "/dev/loop5"
Nov 25 16:27:28 compute-0 serene_merkle[277701]:             ],
Nov 25 16:27:28 compute-0 serene_merkle[277701]:             "lv_name": "ceph_lv2",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:             "lv_size": "21470642176",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:             "name": "ceph_lv2",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:             "tags": {
Nov 25 16:27:28 compute-0 serene_merkle[277701]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:                 "ceph.cluster_name": "ceph",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:                 "ceph.crush_device_class": "",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:                 "ceph.encrypted": "0",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:                 "ceph.osd_id": "2",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:                 "ceph.type": "block",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:                 "ceph.vdo": "0"
Nov 25 16:27:28 compute-0 serene_merkle[277701]:             },
Nov 25 16:27:28 compute-0 serene_merkle[277701]:             "type": "block",
Nov 25 16:27:28 compute-0 serene_merkle[277701]:             "vg_name": "ceph_vg2"
Nov 25 16:27:28 compute-0 serene_merkle[277701]:         }
Nov 25 16:27:28 compute-0 serene_merkle[277701]:     ]
Nov 25 16:27:28 compute-0 serene_merkle[277701]: }
Nov 25 16:27:28 compute-0 systemd[1]: libpod-5dc8936969dc5805193146082640b254e703d94943590b8ea964862d2bdaa689.scope: Deactivated successfully.
Nov 25 16:27:28 compute-0 podman[277685]: 2025-11-25 16:27:28.403555544 +0000 UTC m=+0.967324296 container died 5dc8936969dc5805193146082640b254e703d94943590b8ea964862d2bdaa689 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:27:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd4cda011432aba8a2ed89616d7f594f6ee879d32a40bdcfb0ffd75e574d3854-merged.mount: Deactivated successfully.
Nov 25 16:27:28 compute-0 nova_compute[254092]: 2025-11-25 16:27:28.435 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:27:28 compute-0 podman[277685]: 2025-11-25 16:27:28.449832484 +0000 UTC m=+1.013601216 container remove 5dc8936969dc5805193146082640b254e703d94943590b8ea964862d2bdaa689 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_merkle, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Nov 25 16:27:28 compute-0 systemd[1]: libpod-conmon-5dc8936969dc5805193146082640b254e703d94943590b8ea964862d2bdaa689.scope: Deactivated successfully.
Nov 25 16:27:28 compute-0 nova_compute[254092]: 2025-11-25 16:27:28.471 254096 DEBUG nova.objects.instance [None req-4733b19c-fa6c-4ae0-86c7-09d9ba1fa2e8 c35aca60808846df92d36162b59d2ba0 e0ca320f9ba34efda13ff2001e7d6cdc - - default default] Lazy-loading 'pci_devices' on Instance uuid c058218b-7732-4ced-b6a3-bb04203967d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:27:28 compute-0 sudo[277561]: pam_unix(sudo:session): session closed for user root
Nov 25 16:27:28 compute-0 nova_compute[254092]: 2025-11-25 16:27:28.489 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088048.487984, c058218b-7732-4ced-b6a3-bb04203967d4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:27:28 compute-0 nova_compute[254092]: 2025-11-25 16:27:28.489 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] VM Paused (Lifecycle Event)
Nov 25 16:27:28 compute-0 nova_compute[254092]: 2025-11-25 16:27:28.503 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:27:28 compute-0 nova_compute[254092]: 2025-11-25 16:27:28.506 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:27:28 compute-0 nova_compute[254092]: 2025-11-25 16:27:28.520 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] During sync_power_state the instance has a pending task (suspending). Skip.
Nov 25 16:27:28 compute-0 sudo[277722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:27:28 compute-0 sudo[277722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:27:28 compute-0 sudo[277722]: pam_unix(sudo:session): session closed for user root
Nov 25 16:27:28 compute-0 sudo[277750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:27:28 compute-0 sudo[277750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:27:28 compute-0 sudo[277750]: pam_unix(sudo:session): session closed for user root
Nov 25 16:27:28 compute-0 sudo[277775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:27:28 compute-0 sudo[277775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:27:28 compute-0 sudo[277775]: pam_unix(sudo:session): session closed for user root
Nov 25 16:27:28 compute-0 sudo[277800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 16:27:28 compute-0 sudo[277800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:27:28 compute-0 nova_compute[254092]: 2025-11-25 16:27:28.792 254096 DEBUG nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 16:27:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1171: 321 pgs: 321 active+clean; 206 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 794 KiB/s rd, 4.1 MiB/s wr, 108 op/s
Nov 25 16:27:28 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Nov 25 16:27:28 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Consumed 2.358s CPU time.
Nov 25 16:27:28 compute-0 systemd-machined[216343]: Machine qemu-14-instance-0000000e terminated.
Nov 25 16:27:28 compute-0 ceph-mon[74985]: pgmap v1171: 321 pgs: 321 active+clean; 206 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 794 KiB/s rd, 4.1 MiB/s wr, 108 op/s
Nov 25 16:27:29 compute-0 nova_compute[254092]: 2025-11-25 16:27:29.082 254096 DEBUG nova.compute.manager [None req-4733b19c-fa6c-4ae0-86c7-09d9ba1fa2e8 c35aca60808846df92d36162b59d2ba0 e0ca320f9ba34efda13ff2001e7d6cdc - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:27:29 compute-0 podman[277866]: 2025-11-25 16:27:29.092804471 +0000 UTC m=+0.054191345 container create 0a28274acd0aa918c2fa3eff1710f3fc15f5170cbc6a0dbc9d6d4fea9f2c5ca8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:27:29 compute-0 systemd[1]: Started libpod-conmon-0a28274acd0aa918c2fa3eff1710f3fc15f5170cbc6a0dbc9d6d4fea9f2c5ca8.scope.
Nov 25 16:27:29 compute-0 podman[277866]: 2025-11-25 16:27:29.068747586 +0000 UTC m=+0.030134490 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:27:29 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:27:29 compute-0 podman[277866]: 2025-11-25 16:27:29.189573976 +0000 UTC m=+0.150960850 container init 0a28274acd0aa918c2fa3eff1710f3fc15f5170cbc6a0dbc9d6d4fea9f2c5ca8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_lamarr, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 16:27:29 compute-0 podman[277866]: 2025-11-25 16:27:29.197165761 +0000 UTC m=+0.158552665 container start 0a28274acd0aa918c2fa3eff1710f3fc15f5170cbc6a0dbc9d6d4fea9f2c5ca8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 16:27:29 compute-0 podman[277866]: 2025-11-25 16:27:29.202473106 +0000 UTC m=+0.163859970 container attach 0a28274acd0aa918c2fa3eff1710f3fc15f5170cbc6a0dbc9d6d4fea9f2c5ca8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_lamarr, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:27:29 compute-0 awesome_lamarr[277885]: 167 167
Nov 25 16:27:29 compute-0 systemd[1]: libpod-0a28274acd0aa918c2fa3eff1710f3fc15f5170cbc6a0dbc9d6d4fea9f2c5ca8.scope: Deactivated successfully.
Nov 25 16:27:29 compute-0 podman[277866]: 2025-11-25 16:27:29.206110706 +0000 UTC m=+0.167497570 container died 0a28274acd0aa918c2fa3eff1710f3fc15f5170cbc6a0dbc9d6d4fea9f2c5ca8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 16:27:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c74a36c89d0d601c55a7c55070f438a7f662bf51dec763f711a4a8fea7b1461-merged.mount: Deactivated successfully.
Nov 25 16:27:29 compute-0 podman[277866]: 2025-11-25 16:27:29.248050956 +0000 UTC m=+0.209437820 container remove 0a28274acd0aa918c2fa3eff1710f3fc15f5170cbc6a0dbc9d6d4fea9f2c5ca8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_lamarr, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:27:29 compute-0 systemd[1]: libpod-conmon-0a28274acd0aa918c2fa3eff1710f3fc15f5170cbc6a0dbc9d6d4fea9f2c5ca8.scope: Deactivated successfully.
Nov 25 16:27:29 compute-0 podman[277910]: 2025-11-25 16:27:29.450226059 +0000 UTC m=+0.057079525 container create 55e0417685f6a8d44cb6e91289ff24437bf456d7decd6905f0ac0163912b8280 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_joliot, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 16:27:29 compute-0 systemd[1]: Started libpod-conmon-55e0417685f6a8d44cb6e91289ff24437bf456d7decd6905f0ac0163912b8280.scope.
Nov 25 16:27:29 compute-0 podman[277910]: 2025-11-25 16:27:29.421353422 +0000 UTC m=+0.028206908 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:27:29 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3d630c2ac8485141ee5e76edfb3638a6bdef6add8c2e2ab830db593cce71f83/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3d630c2ac8485141ee5e76edfb3638a6bdef6add8c2e2ab830db593cce71f83/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3d630c2ac8485141ee5e76edfb3638a6bdef6add8c2e2ab830db593cce71f83/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3d630c2ac8485141ee5e76edfb3638a6bdef6add8c2e2ab830db593cce71f83/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:27:29 compute-0 podman[277910]: 2025-11-25 16:27:29.544843713 +0000 UTC m=+0.151697189 container init 55e0417685f6a8d44cb6e91289ff24437bf456d7decd6905f0ac0163912b8280 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 16:27:29 compute-0 podman[277910]: 2025-11-25 16:27:29.553949222 +0000 UTC m=+0.160802688 container start 55e0417685f6a8d44cb6e91289ff24437bf456d7decd6905f0ac0163912b8280 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_joliot, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:27:29 compute-0 podman[277910]: 2025-11-25 16:27:29.556768848 +0000 UTC m=+0.163622314 container attach 55e0417685f6a8d44cb6e91289ff24437bf456d7decd6905f0ac0163912b8280 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:27:30 compute-0 clever_joliot[277926]: {
Nov 25 16:27:30 compute-0 clever_joliot[277926]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 16:27:30 compute-0 clever_joliot[277926]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:27:30 compute-0 clever_joliot[277926]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 16:27:30 compute-0 clever_joliot[277926]:         "osd_id": 1,
Nov 25 16:27:30 compute-0 clever_joliot[277926]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:27:30 compute-0 clever_joliot[277926]:         "type": "bluestore"
Nov 25 16:27:30 compute-0 clever_joliot[277926]:     },
Nov 25 16:27:30 compute-0 clever_joliot[277926]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 16:27:30 compute-0 clever_joliot[277926]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:27:30 compute-0 clever_joliot[277926]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 16:27:30 compute-0 clever_joliot[277926]:         "osd_id": 2,
Nov 25 16:27:30 compute-0 clever_joliot[277926]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:27:30 compute-0 clever_joliot[277926]:         "type": "bluestore"
Nov 25 16:27:30 compute-0 clever_joliot[277926]:     },
Nov 25 16:27:30 compute-0 clever_joliot[277926]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 16:27:30 compute-0 clever_joliot[277926]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:27:30 compute-0 clever_joliot[277926]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 16:27:30 compute-0 clever_joliot[277926]:         "osd_id": 0,
Nov 25 16:27:30 compute-0 clever_joliot[277926]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:27:30 compute-0 clever_joliot[277926]:         "type": "bluestore"
Nov 25 16:27:30 compute-0 clever_joliot[277926]:     }
Nov 25 16:27:30 compute-0 clever_joliot[277926]: }
Nov 25 16:27:30 compute-0 systemd[1]: libpod-55e0417685f6a8d44cb6e91289ff24437bf456d7decd6905f0ac0163912b8280.scope: Deactivated successfully.
Nov 25 16:27:30 compute-0 podman[277910]: 2025-11-25 16:27:30.574944887 +0000 UTC m=+1.181798353 container died 55e0417685f6a8d44cb6e91289ff24437bf456d7decd6905f0ac0163912b8280 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:27:30 compute-0 systemd[1]: libpod-55e0417685f6a8d44cb6e91289ff24437bf456d7decd6905f0ac0163912b8280.scope: Consumed 1.021s CPU time.
Nov 25 16:27:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3d630c2ac8485141ee5e76edfb3638a6bdef6add8c2e2ab830db593cce71f83-merged.mount: Deactivated successfully.
Nov 25 16:27:30 compute-0 podman[277910]: 2025-11-25 16:27:30.64302789 +0000 UTC m=+1.249881356 container remove 55e0417685f6a8d44cb6e91289ff24437bf456d7decd6905f0ac0163912b8280 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_joliot, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 16:27:30 compute-0 systemd[1]: libpod-conmon-55e0417685f6a8d44cb6e91289ff24437bf456d7decd6905f0ac0163912b8280.scope: Deactivated successfully.
Nov 25 16:27:30 compute-0 sudo[277800]: pam_unix(sudo:session): session closed for user root
Nov 25 16:27:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:27:30 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:27:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:27:30 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:27:30 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 574c18aa-dfe6-4b0c-848e-60314bff8d82 does not exist
Nov 25 16:27:30 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 86b7a3df-3283-4689-8d2a-ed740b75ef7d does not exist
Nov 25 16:27:30 compute-0 sudo[277973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:27:30 compute-0 sudo[277973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:27:30 compute-0 sudo[277973]: pam_unix(sudo:session): session closed for user root
Nov 25 16:27:30 compute-0 sudo[277998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 16:27:30 compute-0 sudo[277998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:27:30 compute-0 sudo[277998]: pam_unix(sudo:session): session closed for user root
Nov 25 16:27:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1172: 321 pgs: 321 active+clean; 242 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 6.0 MiB/s wr, 236 op/s
Nov 25 16:27:31 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Nov 25 16:27:31 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Consumed 13.213s CPU time.
Nov 25 16:27:31 compute-0 systemd-machined[216343]: Machine qemu-12-instance-0000000c terminated.
Nov 25 16:27:31 compute-0 nova_compute[254092]: 2025-11-25 16:27:31.641 254096 DEBUG oslo_concurrency.lockutils [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Acquiring lock "c058218b-7732-4ced-b6a3-bb04203967d4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:27:31 compute-0 nova_compute[254092]: 2025-11-25 16:27:31.642 254096 DEBUG oslo_concurrency.lockutils [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "c058218b-7732-4ced-b6a3-bb04203967d4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:27:31 compute-0 nova_compute[254092]: 2025-11-25 16:27:31.642 254096 DEBUG oslo_concurrency.lockutils [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Acquiring lock "c058218b-7732-4ced-b6a3-bb04203967d4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:27:31 compute-0 nova_compute[254092]: 2025-11-25 16:27:31.642 254096 DEBUG oslo_concurrency.lockutils [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "c058218b-7732-4ced-b6a3-bb04203967d4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:27:31 compute-0 nova_compute[254092]: 2025-11-25 16:27:31.642 254096 DEBUG oslo_concurrency.lockutils [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "c058218b-7732-4ced-b6a3-bb04203967d4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:27:31 compute-0 nova_compute[254092]: 2025-11-25 16:27:31.643 254096 INFO nova.compute.manager [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Terminating instance
Nov 25 16:27:31 compute-0 nova_compute[254092]: 2025-11-25 16:27:31.644 254096 DEBUG oslo_concurrency.lockutils [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Acquiring lock "refresh_cache-c058218b-7732-4ced-b6a3-bb04203967d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:27:31 compute-0 nova_compute[254092]: 2025-11-25 16:27:31.644 254096 DEBUG oslo_concurrency.lockutils [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Acquired lock "refresh_cache-c058218b-7732-4ced-b6a3-bb04203967d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:27:31 compute-0 nova_compute[254092]: 2025-11-25 16:27:31.644 254096 DEBUG nova.network.neutron [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:27:31 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:27:31 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:27:31 compute-0 ceph-mon[74985]: pgmap v1172: 321 pgs: 321 active+clean; 242 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 6.0 MiB/s wr, 236 op/s
Nov 25 16:27:31 compute-0 nova_compute[254092]: 2025-11-25 16:27:31.811 254096 INFO nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Instance shutdown successfully after 13 seconds.
Nov 25 16:27:31 compute-0 nova_compute[254092]: 2025-11-25 16:27:31.816 254096 INFO nova.virt.libvirt.driver [-] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Instance destroyed successfully.
Nov 25 16:27:31 compute-0 nova_compute[254092]: 2025-11-25 16:27:31.820 254096 INFO nova.virt.libvirt.driver [-] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Instance destroyed successfully.
Nov 25 16:27:32 compute-0 nova_compute[254092]: 2025-11-25 16:27:32.161 254096 INFO nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Deleting instance files /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_del
Nov 25 16:27:32 compute-0 nova_compute[254092]: 2025-11-25 16:27:32.162 254096 INFO nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Deletion of /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_del complete
Nov 25 16:27:32 compute-0 nova_compute[254092]: 2025-11-25 16:27:32.270 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:27:32 compute-0 nova_compute[254092]: 2025-11-25 16:27:32.308 254096 DEBUG nova.network.neutron [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:27:32 compute-0 nova_compute[254092]: 2025-11-25 16:27:32.523 254096 DEBUG nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:27:32 compute-0 nova_compute[254092]: 2025-11-25 16:27:32.524 254096 INFO nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Creating image(s)
Nov 25 16:27:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:27:32 compute-0 nova_compute[254092]: 2025-11-25 16:27:32.555 254096 DEBUG nova.storage.rbd_utils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] rbd image 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:27:32 compute-0 nova_compute[254092]: 2025-11-25 16:27:32.590 254096 DEBUG nova.storage.rbd_utils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] rbd image 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:27:32 compute-0 nova_compute[254092]: 2025-11-25 16:27:32.630 254096 DEBUG nova.storage.rbd_utils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] rbd image 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:27:32 compute-0 nova_compute[254092]: 2025-11-25 16:27:32.634 254096 DEBUG oslo_concurrency.lockutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Acquiring lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:27:32 compute-0 nova_compute[254092]: 2025-11-25 16:27:32.635 254096 DEBUG oslo_concurrency.lockutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:27:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1173: 321 pgs: 321 active+clean; 246 MiB data, 369 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.1 MiB/s wr, 252 op/s
Nov 25 16:27:32 compute-0 ceph-mon[74985]: pgmap v1173: 321 pgs: 321 active+clean; 246 MiB data, 369 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.1 MiB/s wr, 252 op/s
Nov 25 16:27:33 compute-0 nova_compute[254092]: 2025-11-25 16:27:33.072 254096 DEBUG nova.virt.libvirt.imagebackend [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Image locations are: [{'url': 'rbd://d82baeae-c742-50a4-b8f6-b5257c8a2c92/images/a4aa3708-bb73-4b5a-b3f3-42153358021e/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://d82baeae-c742-50a4-b8f6-b5257c8a2c92/images/a4aa3708-bb73-4b5a-b3f3-42153358021e/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 25 16:27:33 compute-0 nova_compute[254092]: 2025-11-25 16:27:33.209 254096 DEBUG nova.network.neutron [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:27:33 compute-0 nova_compute[254092]: 2025-11-25 16:27:33.230 254096 DEBUG oslo_concurrency.lockutils [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Releasing lock "refresh_cache-c058218b-7732-4ced-b6a3-bb04203967d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:27:33 compute-0 nova_compute[254092]: 2025-11-25 16:27:33.230 254096 DEBUG nova.compute.manager [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:27:33 compute-0 nova_compute[254092]: 2025-11-25 16:27:33.237 254096 INFO nova.virt.libvirt.driver [-] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Instance destroyed successfully.
Nov 25 16:27:33 compute-0 nova_compute[254092]: 2025-11-25 16:27:33.237 254096 DEBUG nova.objects.instance [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lazy-loading 'resources' on Instance uuid c058218b-7732-4ced-b6a3-bb04203967d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:27:33 compute-0 nova_compute[254092]: 2025-11-25 16:27:33.437 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:27:33 compute-0 nova_compute[254092]: 2025-11-25 16:27:33.693 254096 INFO nova.virt.libvirt.driver [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Deleting instance files /var/lib/nova/instances/c058218b-7732-4ced-b6a3-bb04203967d4_del
Nov 25 16:27:33 compute-0 nova_compute[254092]: 2025-11-25 16:27:33.693 254096 INFO nova.virt.libvirt.driver [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Deletion of /var/lib/nova/instances/c058218b-7732-4ced-b6a3-bb04203967d4_del complete
Nov 25 16:27:33 compute-0 nova_compute[254092]: 2025-11-25 16:27:33.885 254096 INFO nova.compute.manager [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Took 0.65 seconds to destroy the instance on the hypervisor.
Nov 25 16:27:33 compute-0 nova_compute[254092]: 2025-11-25 16:27:33.885 254096 DEBUG oslo.service.loopingcall [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:27:33 compute-0 nova_compute[254092]: 2025-11-25 16:27:33.886 254096 DEBUG nova.compute.manager [-] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:27:33 compute-0 nova_compute[254092]: 2025-11-25 16:27:33.886 254096 DEBUG nova.network.neutron [-] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:27:34 compute-0 nova_compute[254092]: 2025-11-25 16:27:34.188 254096 DEBUG nova.network.neutron [-] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:27:34 compute-0 nova_compute[254092]: 2025-11-25 16:27:34.200 254096 DEBUG nova.network.neutron [-] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:27:34 compute-0 nova_compute[254092]: 2025-11-25 16:27:34.211 254096 INFO nova.compute.manager [-] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Took 0.32 seconds to deallocate network for instance.
Nov 25 16:27:34 compute-0 nova_compute[254092]: 2025-11-25 16:27:34.249 254096 DEBUG oslo_concurrency.lockutils [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:27:34 compute-0 nova_compute[254092]: 2025-11-25 16:27:34.250 254096 DEBUG oslo_concurrency.lockutils [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:27:34 compute-0 nova_compute[254092]: 2025-11-25 16:27:34.357 254096 DEBUG oslo_concurrency.processutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:34 compute-0 nova_compute[254092]: 2025-11-25 16:27:34.377 254096 DEBUG oslo_concurrency.processutils [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:34 compute-0 nova_compute[254092]: 2025-11-25 16:27:34.422 254096 DEBUG oslo_concurrency.processutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6.part --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:34 compute-0 nova_compute[254092]: 2025-11-25 16:27:34.424 254096 DEBUG nova.virt.images [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] a4aa3708-bb73-4b5a-b3f3-42153358021e was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Nov 25 16:27:34 compute-0 nova_compute[254092]: 2025-11-25 16:27:34.425 254096 DEBUG nova.privsep.utils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 25 16:27:34 compute-0 nova_compute[254092]: 2025-11-25 16:27:34.425 254096 DEBUG oslo_concurrency.processutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6.part /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:34 compute-0 nova_compute[254092]: 2025-11-25 16:27:34.713 254096 DEBUG oslo_concurrency.processutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6.part /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6.converted" returned: 0 in 0.287s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:34 compute-0 nova_compute[254092]: 2025-11-25 16:27:34.717 254096 DEBUG oslo_concurrency.processutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:34 compute-0 nova_compute[254092]: 2025-11-25 16:27:34.770 254096 DEBUG oslo_concurrency.processutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6.converted --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:34 compute-0 nova_compute[254092]: 2025-11-25 16:27:34.772 254096 DEBUG oslo_concurrency.lockutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.136s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:27:34 compute-0 nova_compute[254092]: 2025-11-25 16:27:34.788 254096 DEBUG nova.storage.rbd_utils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] rbd image 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:27:34 compute-0 nova_compute[254092]: 2025-11-25 16:27:34.791 254096 DEBUG oslo_concurrency.processutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:27:34 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2378797721' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:27:34 compute-0 nova_compute[254092]: 2025-11-25 16:27:34.817 254096 DEBUG oslo_concurrency.processutils [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:34 compute-0 nova_compute[254092]: 2025-11-25 16:27:34.823 254096 DEBUG nova.compute.provider_tree [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:27:34 compute-0 nova_compute[254092]: 2025-11-25 16:27:34.840 254096 DEBUG nova.scheduler.client.report [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:27:34 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2378797721' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:27:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1174: 321 pgs: 321 active+clean; 246 MiB data, 369 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.1 MiB/s wr, 252 op/s
Nov 25 16:27:34 compute-0 nova_compute[254092]: 2025-11-25 16:27:34.867 254096 DEBUG oslo_concurrency.lockutils [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:27:34 compute-0 nova_compute[254092]: 2025-11-25 16:27:34.898 254096 INFO nova.scheduler.client.report [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Deleted allocations for instance c058218b-7732-4ced-b6a3-bb04203967d4
Nov 25 16:27:34 compute-0 nova_compute[254092]: 2025-11-25 16:27:34.973 254096 DEBUG oslo_concurrency.lockutils [None req-2be97849-f35c-446f-b098-097e9cdf7ab4 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "c058218b-7732-4ced-b6a3-bb04203967d4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.331s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:27:35 compute-0 nova_compute[254092]: 2025-11-25 16:27:35.103 254096 DEBUG oslo_concurrency.processutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.312s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:35 compute-0 nova_compute[254092]: 2025-11-25 16:27:35.166 254096 DEBUG nova.storage.rbd_utils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] resizing rbd image 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:27:35 compute-0 nova_compute[254092]: 2025-11-25 16:27:35.255 254096 DEBUG nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:27:35 compute-0 nova_compute[254092]: 2025-11-25 16:27:35.255 254096 DEBUG nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Ensure instance console log exists: /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:27:35 compute-0 nova_compute[254092]: 2025-11-25 16:27:35.256 254096 DEBUG oslo_concurrency.lockutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:27:35 compute-0 nova_compute[254092]: 2025-11-25 16:27:35.256 254096 DEBUG oslo_concurrency.lockutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:27:35 compute-0 nova_compute[254092]: 2025-11-25 16:27:35.256 254096 DEBUG oslo_concurrency.lockutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:27:35 compute-0 nova_compute[254092]: 2025-11-25 16:27:35.258 254096 DEBUG nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:36Z,direct_url=<?>,disk_format='qcow2',id=a4aa3708-bb73-4b5a-b3f3-42153358021e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:27:35 compute-0 nova_compute[254092]: 2025-11-25 16:27:35.260 254096 WARNING nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Nov 25 16:27:35 compute-0 nova_compute[254092]: 2025-11-25 16:27:35.269 254096 DEBUG nova.virt.libvirt.host [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:27:35 compute-0 nova_compute[254092]: 2025-11-25 16:27:35.270 254096 DEBUG nova.virt.libvirt.host [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:27:35 compute-0 nova_compute[254092]: 2025-11-25 16:27:35.272 254096 DEBUG nova.virt.libvirt.host [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:27:35 compute-0 nova_compute[254092]: 2025-11-25 16:27:35.273 254096 DEBUG nova.virt.libvirt.host [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:27:35 compute-0 nova_compute[254092]: 2025-11-25 16:27:35.273 254096 DEBUG nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:27:35 compute-0 nova_compute[254092]: 2025-11-25 16:27:35.273 254096 DEBUG nova.virt.hardware [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:36Z,direct_url=<?>,disk_format='qcow2',id=a4aa3708-bb73-4b5a-b3f3-42153358021e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:27:35 compute-0 nova_compute[254092]: 2025-11-25 16:27:35.274 254096 DEBUG nova.virt.hardware [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:27:35 compute-0 nova_compute[254092]: 2025-11-25 16:27:35.274 254096 DEBUG nova.virt.hardware [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:27:35 compute-0 nova_compute[254092]: 2025-11-25 16:27:35.275 254096 DEBUG nova.virt.hardware [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:27:35 compute-0 nova_compute[254092]: 2025-11-25 16:27:35.275 254096 DEBUG nova.virt.hardware [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:27:35 compute-0 nova_compute[254092]: 2025-11-25 16:27:35.275 254096 DEBUG nova.virt.hardware [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:27:35 compute-0 nova_compute[254092]: 2025-11-25 16:27:35.276 254096 DEBUG nova.virt.hardware [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:27:35 compute-0 nova_compute[254092]: 2025-11-25 16:27:35.276 254096 DEBUG nova.virt.hardware [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:27:35 compute-0 nova_compute[254092]: 2025-11-25 16:27:35.276 254096 DEBUG nova.virt.hardware [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:27:35 compute-0 nova_compute[254092]: 2025-11-25 16:27:35.277 254096 DEBUG nova.virt.hardware [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:27:35 compute-0 nova_compute[254092]: 2025-11-25 16:27:35.277 254096 DEBUG nova.virt.hardware [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:27:35 compute-0 nova_compute[254092]: 2025-11-25 16:27:35.277 254096 DEBUG nova.objects.instance [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lazy-loading 'vcpu_model' on Instance uuid 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:27:35 compute-0 nova_compute[254092]: 2025-11-25 16:27:35.313 254096 DEBUG oslo_concurrency.processutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:35 compute-0 nova_compute[254092]: 2025-11-25 16:27:35.499 254096 DEBUG oslo_concurrency.lockutils [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Acquiring lock "38463335-bf41-4609-b81c-08bc8da299af" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:27:35 compute-0 nova_compute[254092]: 2025-11-25 16:27:35.499 254096 DEBUG oslo_concurrency.lockutils [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "38463335-bf41-4609-b81c-08bc8da299af" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:27:35 compute-0 nova_compute[254092]: 2025-11-25 16:27:35.500 254096 DEBUG oslo_concurrency.lockutils [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Acquiring lock "38463335-bf41-4609-b81c-08bc8da299af-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:27:35 compute-0 nova_compute[254092]: 2025-11-25 16:27:35.500 254096 DEBUG oslo_concurrency.lockutils [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "38463335-bf41-4609-b81c-08bc8da299af-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:27:35 compute-0 nova_compute[254092]: 2025-11-25 16:27:35.500 254096 DEBUG oslo_concurrency.lockutils [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "38463335-bf41-4609-b81c-08bc8da299af-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:27:35 compute-0 nova_compute[254092]: 2025-11-25 16:27:35.501 254096 INFO nova.compute.manager [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Terminating instance
Nov 25 16:27:35 compute-0 nova_compute[254092]: 2025-11-25 16:27:35.502 254096 DEBUG oslo_concurrency.lockutils [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Acquiring lock "refresh_cache-38463335-bf41-4609-b81c-08bc8da299af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:27:35 compute-0 nova_compute[254092]: 2025-11-25 16:27:35.502 254096 DEBUG oslo_concurrency.lockutils [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Acquired lock "refresh_cache-38463335-bf41-4609-b81c-08bc8da299af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:27:35 compute-0 nova_compute[254092]: 2025-11-25 16:27:35.502 254096 DEBUG nova.network.neutron [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:27:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:27:35 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3791826067' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:27:35 compute-0 nova_compute[254092]: 2025-11-25 16:27:35.731 254096 DEBUG oslo_concurrency.processutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:35 compute-0 nova_compute[254092]: 2025-11-25 16:27:35.762 254096 DEBUG nova.storage.rbd_utils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] rbd image 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:27:35 compute-0 nova_compute[254092]: 2025-11-25 16:27:35.766 254096 DEBUG oslo_concurrency.processutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:35 compute-0 nova_compute[254092]: 2025-11-25 16:27:35.785 254096 DEBUG nova.network.neutron [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:27:35 compute-0 ceph-mon[74985]: pgmap v1174: 321 pgs: 321 active+clean; 246 MiB data, 369 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.1 MiB/s wr, 252 op/s
Nov 25 16:27:35 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3791826067' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:27:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:27:36 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2376679774' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:27:36 compute-0 nova_compute[254092]: 2025-11-25 16:27:36.211 254096 DEBUG oslo_concurrency.processutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:36 compute-0 nova_compute[254092]: 2025-11-25 16:27:36.213 254096 DEBUG nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:27:36 compute-0 nova_compute[254092]:   <uuid>5832b7f9-d051-4ddb-a8a0-920b2d3cfab2</uuid>
Nov 25 16:27:36 compute-0 nova_compute[254092]:   <name>instance-0000000c</name>
Nov 25 16:27:36 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:27:36 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:27:36 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:27:36 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:       <nova:name>tempest-ServersAdmin275Test-server-1909956679</nova:name>
Nov 25 16:27:36 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:27:35</nova:creationTime>
Nov 25 16:27:36 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:27:36 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:27:36 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:27:36 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:27:36 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:27:36 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:27:36 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:27:36 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:27:36 compute-0 nova_compute[254092]:         <nova:user uuid="f16e272341774153991e7ed856e34188">tempest-ServersAdmin275Test-1636693528-project-member</nova:user>
Nov 25 16:27:36 compute-0 nova_compute[254092]:         <nova:project uuid="a9a4c2749782401c89e06948356b0e0a">tempest-ServersAdmin275Test-1636693528</nova:project>
Nov 25 16:27:36 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:27:36 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="a4aa3708-bb73-4b5a-b3f3-42153358021e"/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:       <nova:ports/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:27:36 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:27:36 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:27:36 compute-0 nova_compute[254092]:     <system>
Nov 25 16:27:36 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:27:36 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:27:36 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:27:36 compute-0 nova_compute[254092]:       <entry name="serial">5832b7f9-d051-4ddb-a8a0-920b2d3cfab2</entry>
Nov 25 16:27:36 compute-0 nova_compute[254092]:       <entry name="uuid">5832b7f9-d051-4ddb-a8a0-920b2d3cfab2</entry>
Nov 25 16:27:36 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     </system>
Nov 25 16:27:36 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:27:36 compute-0 nova_compute[254092]:   <os>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:   </os>
Nov 25 16:27:36 compute-0 nova_compute[254092]:   <features>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:   </features>
Nov 25 16:27:36 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:27:36 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:27:36 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:27:36 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:27:36 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:27:36 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk">
Nov 25 16:27:36 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:       </source>
Nov 25 16:27:36 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:27:36 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:27:36 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:27:36 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk.config">
Nov 25 16:27:36 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:       </source>
Nov 25 16:27:36 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:27:36 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:27:36 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:27:36 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2/console.log" append="off"/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     <video>
Nov 25 16:27:36 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     </video>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:27:36 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:27:36 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:27:36 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:27:36 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:27:36 compute-0 nova_compute[254092]: </domain>
Nov 25 16:27:36 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:27:36 compute-0 nova_compute[254092]: 2025-11-25 16:27:36.259 254096 DEBUG nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:27:36 compute-0 nova_compute[254092]: 2025-11-25 16:27:36.259 254096 DEBUG nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:27:36 compute-0 nova_compute[254092]: 2025-11-25 16:27:36.259 254096 INFO nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Using config drive
Nov 25 16:27:36 compute-0 nova_compute[254092]: 2025-11-25 16:27:36.281 254096 DEBUG nova.storage.rbd_utils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] rbd image 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:27:36 compute-0 nova_compute[254092]: 2025-11-25 16:27:36.300 254096 DEBUG nova.objects.instance [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lazy-loading 'ec2_ids' on Instance uuid 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:27:36 compute-0 nova_compute[254092]: 2025-11-25 16:27:36.375 254096 DEBUG nova.objects.instance [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lazy-loading 'keypairs' on Instance uuid 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:27:36 compute-0 nova_compute[254092]: 2025-11-25 16:27:36.519 254096 DEBUG nova.network.neutron [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:27:36 compute-0 nova_compute[254092]: 2025-11-25 16:27:36.770 254096 DEBUG oslo_concurrency.lockutils [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Releasing lock "refresh_cache-38463335-bf41-4609-b81c-08bc8da299af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:27:36 compute-0 nova_compute[254092]: 2025-11-25 16:27:36.771 254096 DEBUG nova.compute.manager [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:27:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1175: 321 pgs: 321 active+clean; 124 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 6.3 MiB/s wr, 326 op/s
Nov 25 16:27:36 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Nov 25 16:27:36 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Consumed 13.091s CPU time.
Nov 25 16:27:36 compute-0 systemd-machined[216343]: Machine qemu-13-instance-0000000d terminated.
Nov 25 16:27:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Nov 25 16:27:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Nov 25 16:27:36 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2376679774' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:27:36 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Nov 25 16:27:36 compute-0 nova_compute[254092]: 2025-11-25 16:27:36.900 254096 INFO nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Creating config drive at /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2/disk.config
Nov 25 16:27:36 compute-0 nova_compute[254092]: 2025-11-25 16:27:36.911 254096 DEBUG oslo_concurrency.processutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpseyo0wf2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:36 compute-0 nova_compute[254092]: 2025-11-25 16:27:36.993 254096 INFO nova.virt.libvirt.driver [-] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Instance destroyed successfully.
Nov 25 16:27:36 compute-0 nova_compute[254092]: 2025-11-25 16:27:36.993 254096 DEBUG nova.objects.instance [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lazy-loading 'resources' on Instance uuid 38463335-bf41-4609-b81c-08bc8da299af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:27:37 compute-0 nova_compute[254092]: 2025-11-25 16:27:37.057 254096 DEBUG oslo_concurrency.processutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpseyo0wf2" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:37 compute-0 nova_compute[254092]: 2025-11-25 16:27:37.078 254096 DEBUG nova.storage.rbd_utils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] rbd image 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:27:37 compute-0 nova_compute[254092]: 2025-11-25 16:27:37.081 254096 DEBUG oslo_concurrency.processutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2/disk.config 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:37 compute-0 nova_compute[254092]: 2025-11-25 16:27:37.211 254096 DEBUG oslo_concurrency.processutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2/disk.config 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:37 compute-0 nova_compute[254092]: 2025-11-25 16:27:37.212 254096 INFO nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Deleting local config drive /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2/disk.config because it was imported into RBD.
Nov 25 16:27:37 compute-0 nova_compute[254092]: 2025-11-25 16:27:37.322 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:27:37 compute-0 systemd-machined[216343]: New machine qemu-15-instance-0000000c.
Nov 25 16:27:37 compute-0 systemd[1]: Started Virtual Machine qemu-15-instance-0000000c.
Nov 25 16:27:37 compute-0 nova_compute[254092]: 2025-11-25 16:27:37.379 254096 INFO nova.virt.libvirt.driver [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Deleting instance files /var/lib/nova/instances/38463335-bf41-4609-b81c-08bc8da299af_del
Nov 25 16:27:37 compute-0 nova_compute[254092]: 2025-11-25 16:27:37.380 254096 INFO nova.virt.libvirt.driver [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Deletion of /var/lib/nova/instances/38463335-bf41-4609-b81c-08bc8da299af_del complete
Nov 25 16:27:37 compute-0 nova_compute[254092]: 2025-11-25 16:27:37.424 254096 INFO nova.compute.manager [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Took 0.65 seconds to destroy the instance on the hypervisor.
Nov 25 16:27:37 compute-0 nova_compute[254092]: 2025-11-25 16:27:37.425 254096 DEBUG oslo.service.loopingcall [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:27:37 compute-0 nova_compute[254092]: 2025-11-25 16:27:37.425 254096 DEBUG nova.compute.manager [-] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:27:37 compute-0 nova_compute[254092]: 2025-11-25 16:27:37.426 254096 DEBUG nova.network.neutron [-] [instance: 38463335-bf41-4609-b81c-08bc8da299af] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:27:37 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:27:37 compute-0 nova_compute[254092]: 2025-11-25 16:27:37.567 254096 DEBUG nova.network.neutron [-] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:27:37 compute-0 nova_compute[254092]: 2025-11-25 16:27:37.581 254096 DEBUG nova.network.neutron [-] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:27:37 compute-0 nova_compute[254092]: 2025-11-25 16:27:37.592 254096 INFO nova.compute.manager [-] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Took 0.17 seconds to deallocate network for instance.
Nov 25 16:27:37 compute-0 nova_compute[254092]: 2025-11-25 16:27:37.641 254096 DEBUG oslo_concurrency.lockutils [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:27:37 compute-0 nova_compute[254092]: 2025-11-25 16:27:37.642 254096 DEBUG oslo_concurrency.lockutils [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:27:37 compute-0 nova_compute[254092]: 2025-11-25 16:27:37.727 254096 DEBUG oslo_concurrency.processutils [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:37 compute-0 nova_compute[254092]: 2025-11-25 16:27:37.849 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 16:27:37 compute-0 nova_compute[254092]: 2025-11-25 16:27:37.849 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088057.848299, 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:27:37 compute-0 nova_compute[254092]: 2025-11-25 16:27:37.850 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] VM Resumed (Lifecycle Event)
Nov 25 16:27:37 compute-0 nova_compute[254092]: 2025-11-25 16:27:37.854 254096 DEBUG nova.compute.manager [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:27:37 compute-0 nova_compute[254092]: 2025-11-25 16:27:37.854 254096 DEBUG nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:27:37 compute-0 nova_compute[254092]: 2025-11-25 16:27:37.859 254096 INFO nova.virt.libvirt.driver [-] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Instance spawned successfully.
Nov 25 16:27:37 compute-0 nova_compute[254092]: 2025-11-25 16:27:37.860 254096 DEBUG nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:27:37 compute-0 nova_compute[254092]: 2025-11-25 16:27:37.875 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:27:37 compute-0 nova_compute[254092]: 2025-11-25 16:27:37.883 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:27:37 compute-0 ceph-mon[74985]: pgmap v1175: 321 pgs: 321 active+clean; 124 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 6.3 MiB/s wr, 326 op/s
Nov 25 16:27:37 compute-0 ceph-mon[74985]: osdmap e137: 3 total, 3 up, 3 in
Nov 25 16:27:37 compute-0 nova_compute[254092]: 2025-11-25 16:27:37.889 254096 DEBUG nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:27:37 compute-0 nova_compute[254092]: 2025-11-25 16:27:37.890 254096 DEBUG nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:27:37 compute-0 nova_compute[254092]: 2025-11-25 16:27:37.890 254096 DEBUG nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:27:37 compute-0 nova_compute[254092]: 2025-11-25 16:27:37.891 254096 DEBUG nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:27:37 compute-0 nova_compute[254092]: 2025-11-25 16:27:37.891 254096 DEBUG nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:27:37 compute-0 nova_compute[254092]: 2025-11-25 16:27:37.892 254096 DEBUG nova.virt.libvirt.driver [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:27:37 compute-0 nova_compute[254092]: 2025-11-25 16:27:37.914 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 16:27:37 compute-0 nova_compute[254092]: 2025-11-25 16:27:37.914 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088057.8529596, 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:27:37 compute-0 nova_compute[254092]: 2025-11-25 16:27:37.914 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] VM Started (Lifecycle Event)
Nov 25 16:27:37 compute-0 nova_compute[254092]: 2025-11-25 16:27:37.933 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:27:37 compute-0 nova_compute[254092]: 2025-11-25 16:27:37.936 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:27:37 compute-0 nova_compute[254092]: 2025-11-25 16:27:37.974 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 16:27:37 compute-0 nova_compute[254092]: 2025-11-25 16:27:37.983 254096 DEBUG nova.compute.manager [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:27:38 compute-0 nova_compute[254092]: 2025-11-25 16:27:38.052 254096 DEBUG oslo_concurrency.lockutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:27:38 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:27:38 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1062311794' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:27:38 compute-0 nova_compute[254092]: 2025-11-25 16:27:38.168 254096 DEBUG oslo_concurrency.processutils [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:38 compute-0 nova_compute[254092]: 2025-11-25 16:27:38.172 254096 DEBUG nova.compute.provider_tree [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:27:38 compute-0 nova_compute[254092]: 2025-11-25 16:27:38.189 254096 DEBUG nova.scheduler.client.report [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:27:38 compute-0 nova_compute[254092]: 2025-11-25 16:27:38.204 254096 DEBUG oslo_concurrency.lockutils [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:27:38 compute-0 nova_compute[254092]: 2025-11-25 16:27:38.206 254096 DEBUG oslo_concurrency.lockutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.155s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:27:38 compute-0 nova_compute[254092]: 2025-11-25 16:27:38.207 254096 DEBUG nova.objects.instance [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 16:27:38 compute-0 nova_compute[254092]: 2025-11-25 16:27:38.236 254096 INFO nova.scheduler.client.report [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Deleted allocations for instance 38463335-bf41-4609-b81c-08bc8da299af
Nov 25 16:27:38 compute-0 nova_compute[254092]: 2025-11-25 16:27:38.269 254096 DEBUG oslo_concurrency.lockutils [None req-367bd5b5-4607-4aa3-b802-43e9ae93b103 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:27:38 compute-0 nova_compute[254092]: 2025-11-25 16:27:38.312 254096 DEBUG oslo_concurrency.lockutils [None req-79d9b90d-9db3-47cc-8c94-13a6be975f44 6fecc7ec96b94801b693d75b96da5cca 200751433d7c4e9994df0ea449a6cb48 - - default default] Lock "38463335-bf41-4609-b81c-08bc8da299af" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.813s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:27:38 compute-0 nova_compute[254092]: 2025-11-25 16:27:38.400 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088043.3983412, 138942ff-b720-4101-8dcf-38958751745b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:27:38 compute-0 nova_compute[254092]: 2025-11-25 16:27:38.400 254096 INFO nova.compute.manager [-] [instance: 138942ff-b720-4101-8dcf-38958751745b] VM Stopped (Lifecycle Event)
Nov 25 16:27:38 compute-0 nova_compute[254092]: 2025-11-25 16:27:38.419 254096 DEBUG nova.compute.manager [None req-433e2c21-1419-4e89-b385-481cde54bb28 - - - - - -] [instance: 138942ff-b720-4101-8dcf-38958751745b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:27:38 compute-0 nova_compute[254092]: 2025-11-25 16:27:38.451 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:27:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1177: 321 pgs: 321 active+clean; 124 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.6 MiB/s wr, 289 op/s
Nov 25 16:27:38 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1062311794' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:27:39 compute-0 ceph-mon[74985]: pgmap v1177: 321 pgs: 321 active+clean; 124 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.6 MiB/s wr, 289 op/s
Nov 25 16:27:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Nov 25 16:27:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Nov 25 16:27:39 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Nov 25 16:27:40 compute-0 nova_compute[254092]: 2025-11-25 16:27:40.030 254096 INFO nova.compute.manager [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Rebuilding instance
Nov 25 16:27:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:27:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:27:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:27:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:27:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:27:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:27:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:27:40
Nov 25 16:27:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:27:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:27:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'images', 'backups', '.mgr', 'default.rgw.control', 'default.rgw.log']
Nov 25 16:27:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:27:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:27:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:27:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:27:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:27:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:27:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:27:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:27:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:27:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:27:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:27:40 compute-0 nova_compute[254092]: 2025-11-25 16:27:40.665 254096 DEBUG nova.objects.instance [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Lazy-loading 'trusted_certs' on Instance uuid 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:27:40 compute-0 nova_compute[254092]: 2025-11-25 16:27:40.683 254096 DEBUG nova.compute.manager [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:27:40 compute-0 nova_compute[254092]: 2025-11-25 16:27:40.725 254096 DEBUG nova.objects.instance [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Lazy-loading 'pci_requests' on Instance uuid 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:27:40 compute-0 nova_compute[254092]: 2025-11-25 16:27:40.735 254096 DEBUG nova.objects.instance [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Lazy-loading 'pci_devices' on Instance uuid 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:27:40 compute-0 nova_compute[254092]: 2025-11-25 16:27:40.746 254096 DEBUG nova.objects.instance [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Lazy-loading 'resources' on Instance uuid 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:27:40 compute-0 nova_compute[254092]: 2025-11-25 16:27:40.753 254096 DEBUG nova.objects.instance [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Lazy-loading 'migration_context' on Instance uuid 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:27:40 compute-0 nova_compute[254092]: 2025-11-25 16:27:40.773 254096 DEBUG nova.objects.instance [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 16:27:40 compute-0 nova_compute[254092]: 2025-11-25 16:27:40.777 254096 DEBUG nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 16:27:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1179: 321 pgs: 321 active+clean; 102 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 5.3 MiB/s rd, 2.7 MiB/s wr, 292 op/s
Nov 25 16:27:40 compute-0 ceph-mon[74985]: osdmap e138: 3 total, 3 up, 3 in
Nov 25 16:27:40 compute-0 ceph-mon[74985]: pgmap v1179: 321 pgs: 321 active+clean; 102 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 5.3 MiB/s rd, 2.7 MiB/s wr, 292 op/s
Nov 25 16:27:42 compute-0 nova_compute[254092]: 2025-11-25 16:27:42.324 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:27:42 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 16:27:42 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Nov 25 16:27:42 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:27:42.548097) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 16:27:42 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Nov 25 16:27:42 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088062548132, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1393, "num_deletes": 507, "total_data_size": 1527975, "memory_usage": 1563504, "flush_reason": "Manual Compaction"}
Nov 25 16:27:42 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Nov 25 16:27:42 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088062556683, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1318115, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23221, "largest_seqno": 24613, "table_properties": {"data_size": 1312367, "index_size": 2503, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 16401, "raw_average_key_size": 19, "raw_value_size": 1298370, "raw_average_value_size": 1520, "num_data_blocks": 112, "num_entries": 854, "num_filter_entries": 854, "num_deletions": 507, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764087973, "oldest_key_time": 1764087973, "file_creation_time": 1764088062, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:27:42 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 8635 microseconds, and 4440 cpu microseconds.
Nov 25 16:27:42 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:27:42 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:27:42.556728) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1318115 bytes OK
Nov 25 16:27:42 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:27:42.556749) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Nov 25 16:27:42 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:27:42.558101) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Nov 25 16:27:42 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:27:42.558115) EVENT_LOG_v1 {"time_micros": 1764088062558110, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 16:27:42 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:27:42.558133) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 16:27:42 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1520609, prev total WAL file size 1520609, number of live WAL files 2.
Nov 25 16:27:42 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:27:42 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:27:42.558866) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353033' seq:72057594037927935, type:22 .. '6C6F676D00373536' seq:0, type:0; will stop at (end)
Nov 25 16:27:42 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 16:27:42 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1287KB)], [53(9154KB)]
Nov 25 16:27:42 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088062558906, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 10692781, "oldest_snapshot_seqno": -1}
Nov 25 16:27:42 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 4725 keys, 7478442 bytes, temperature: kUnknown
Nov 25 16:27:42 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088062610701, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 7478442, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7446774, "index_size": 18758, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11845, "raw_key_size": 118648, "raw_average_key_size": 25, "raw_value_size": 7361270, "raw_average_value_size": 1557, "num_data_blocks": 779, "num_entries": 4725, "num_filter_entries": 4725, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764088062, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:27:42 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:27:42 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:27:42.610980) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 7478442 bytes
Nov 25 16:27:42 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:27:42.612739) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 206.0 rd, 144.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 8.9 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(13.8) write-amplify(5.7) OK, records in: 5743, records dropped: 1018 output_compression: NoCompression
Nov 25 16:27:42 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:27:42.612767) EVENT_LOG_v1 {"time_micros": 1764088062612754, "job": 28, "event": "compaction_finished", "compaction_time_micros": 51905, "compaction_time_cpu_micros": 20620, "output_level": 6, "num_output_files": 1, "total_output_size": 7478442, "num_input_records": 5743, "num_output_records": 4725, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 16:27:42 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:27:42 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088062613172, "job": 28, "event": "table_file_deletion", "file_number": 55}
Nov 25 16:27:42 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:27:42 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088062615029, "job": 28, "event": "table_file_deletion", "file_number": 53}
Nov 25 16:27:42 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:27:42.558750) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:27:42 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:27:42.615110) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:27:42 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:27:42.615116) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:27:42 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:27:42.615118) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:27:42 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:27:42.615120) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:27:42 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:27:42.615122) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:27:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1180: 321 pgs: 321 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 5.6 MiB/s rd, 2.7 MiB/s wr, 321 op/s
Nov 25 16:27:43 compute-0 nova_compute[254092]: 2025-11-25 16:27:43.490 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:27:43 compute-0 ceph-mon[74985]: pgmap v1180: 321 pgs: 321 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 5.6 MiB/s rd, 2.7 MiB/s wr, 321 op/s
Nov 25 16:27:44 compute-0 nova_compute[254092]: 2025-11-25 16:27:44.085 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088049.0827372, c058218b-7732-4ced-b6a3-bb04203967d4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:27:44 compute-0 nova_compute[254092]: 2025-11-25 16:27:44.085 254096 INFO nova.compute.manager [-] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] VM Stopped (Lifecycle Event)
Nov 25 16:27:44 compute-0 nova_compute[254092]: 2025-11-25 16:27:44.105 254096 DEBUG nova.compute.manager [None req-2792c046-fba4-4e57-9936-5be4edc4fec1 - - - - - -] [instance: c058218b-7732-4ced-b6a3-bb04203967d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:27:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Nov 25 16:27:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Nov 25 16:27:44 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Nov 25 16:27:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1182: 321 pgs: 321 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.4 MiB/s wr, 210 op/s
Nov 25 16:27:45 compute-0 ceph-mon[74985]: osdmap e139: 3 total, 3 up, 3 in
Nov 25 16:27:45 compute-0 ceph-mon[74985]: pgmap v1182: 321 pgs: 321 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.4 MiB/s wr, 210 op/s
Nov 25 16:27:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1183: 321 pgs: 321 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.4 MiB/s wr, 229 op/s
Nov 25 16:27:46 compute-0 ceph-mon[74985]: pgmap v1183: 321 pgs: 321 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.4 MiB/s wr, 229 op/s
Nov 25 16:27:47 compute-0 nova_compute[254092]: 2025-11-25 16:27:47.326 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:27:47 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:27:48 compute-0 nova_compute[254092]: 2025-11-25 16:27:48.493 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:27:48 compute-0 nova_compute[254092]: 2025-11-25 16:27:48.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:27:48 compute-0 nova_compute[254092]: 2025-11-25 16:27:48.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 16:27:48 compute-0 nova_compute[254092]: 2025-11-25 16:27:48.726 254096 DEBUG oslo_concurrency.lockutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Acquiring lock "12043b7f-9853-45a8-b963-ae96713754b4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:27:48 compute-0 nova_compute[254092]: 2025-11-25 16:27:48.727 254096 DEBUG oslo_concurrency.lockutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "12043b7f-9853-45a8-b963-ae96713754b4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:27:48 compute-0 nova_compute[254092]: 2025-11-25 16:27:48.755 254096 DEBUG nova.compute.manager [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:27:48 compute-0 nova_compute[254092]: 2025-11-25 16:27:48.846 254096 DEBUG oslo_concurrency.lockutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:27:48 compute-0 nova_compute[254092]: 2025-11-25 16:27:48.846 254096 DEBUG oslo_concurrency.lockutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:27:48 compute-0 nova_compute[254092]: 2025-11-25 16:27:48.854 254096 DEBUG nova.virt.hardware [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:27:48 compute-0 nova_compute[254092]: 2025-11-25 16:27:48.855 254096 INFO nova.compute.claims [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:27:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1184: 321 pgs: 321 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 230 KiB/s rd, 3.3 KiB/s wr, 43 op/s
Nov 25 16:27:48 compute-0 ceph-mon[74985]: pgmap v1184: 321 pgs: 321 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 230 KiB/s rd, 3.3 KiB/s wr, 43 op/s
Nov 25 16:27:49 compute-0 nova_compute[254092]: 2025-11-25 16:27:49.007 254096 DEBUG oslo_concurrency.processutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:27:49 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1001850020' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:27:49 compute-0 nova_compute[254092]: 2025-11-25 16:27:49.429 254096 DEBUG oslo_concurrency.processutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:49 compute-0 nova_compute[254092]: 2025-11-25 16:27:49.435 254096 DEBUG nova.compute.provider_tree [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:27:49 compute-0 nova_compute[254092]: 2025-11-25 16:27:49.456 254096 DEBUG nova.scheduler.client.report [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:27:49 compute-0 nova_compute[254092]: 2025-11-25 16:27:49.497 254096 DEBUG oslo_concurrency.lockutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:27:49 compute-0 nova_compute[254092]: 2025-11-25 16:27:49.498 254096 DEBUG nova.compute.manager [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:27:49 compute-0 nova_compute[254092]: 2025-11-25 16:27:49.567 254096 DEBUG nova.compute.manager [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:27:49 compute-0 nova_compute[254092]: 2025-11-25 16:27:49.568 254096 DEBUG nova.network.neutron [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:27:49 compute-0 nova_compute[254092]: 2025-11-25 16:27:49.597 254096 INFO nova.virt.libvirt.driver [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:27:49 compute-0 nova_compute[254092]: 2025-11-25 16:27:49.618 254096 DEBUG nova.compute.manager [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:27:49 compute-0 podman[278509]: 2025-11-25 16:27:49.650534755 +0000 UTC m=+0.065728378 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:27:49 compute-0 podman[278510]: 2025-11-25 16:27:49.670866618 +0000 UTC m=+0.082812233 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:27:49 compute-0 podman[278516]: 2025-11-25 16:27:49.689483204 +0000 UTC m=+0.080620003 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 16:27:49 compute-0 nova_compute[254092]: 2025-11-25 16:27:49.707 254096 DEBUG nova.compute.manager [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:27:49 compute-0 nova_compute[254092]: 2025-11-25 16:27:49.708 254096 DEBUG nova.virt.libvirt.driver [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:27:49 compute-0 nova_compute[254092]: 2025-11-25 16:27:49.709 254096 INFO nova.virt.libvirt.driver [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Creating image(s)
Nov 25 16:27:49 compute-0 nova_compute[254092]: 2025-11-25 16:27:49.726 254096 DEBUG nova.storage.rbd_utils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] rbd image 12043b7f-9853-45a8-b963-ae96713754b4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:27:49 compute-0 nova_compute[254092]: 2025-11-25 16:27:49.743 254096 DEBUG nova.storage.rbd_utils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] rbd image 12043b7f-9853-45a8-b963-ae96713754b4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:27:49 compute-0 nova_compute[254092]: 2025-11-25 16:27:49.759 254096 DEBUG nova.storage.rbd_utils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] rbd image 12043b7f-9853-45a8-b963-ae96713754b4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:27:49 compute-0 nova_compute[254092]: 2025-11-25 16:27:49.762 254096 DEBUG oslo_concurrency.processutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:49 compute-0 nova_compute[254092]: 2025-11-25 16:27:49.814 254096 DEBUG oslo_concurrency.processutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:49 compute-0 nova_compute[254092]: 2025-11-25 16:27:49.816 254096 DEBUG oslo_concurrency.lockutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:27:49 compute-0 nova_compute[254092]: 2025-11-25 16:27:49.816 254096 DEBUG oslo_concurrency.lockutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:27:49 compute-0 nova_compute[254092]: 2025-11-25 16:27:49.817 254096 DEBUG oslo_concurrency.lockutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:27:49 compute-0 nova_compute[254092]: 2025-11-25 16:27:49.832 254096 DEBUG nova.storage.rbd_utils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] rbd image 12043b7f-9853-45a8-b963-ae96713754b4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:27:49 compute-0 nova_compute[254092]: 2025-11-25 16:27:49.835 254096 DEBUG oslo_concurrency.processutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 12043b7f-9853-45a8-b963-ae96713754b4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:50 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1001850020' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:27:50 compute-0 nova_compute[254092]: 2025-11-25 16:27:50.007 254096 DEBUG nova.network.neutron [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 25 16:27:50 compute-0 nova_compute[254092]: 2025-11-25 16:27:50.007 254096 DEBUG nova.compute.manager [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:27:50 compute-0 nova_compute[254092]: 2025-11-25 16:27:50.245 254096 DEBUG oslo_concurrency.processutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 12043b7f-9853-45a8-b963-ae96713754b4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:50 compute-0 nova_compute[254092]: 2025-11-25 16:27:50.300 254096 DEBUG nova.storage.rbd_utils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] resizing rbd image 12043b7f-9853-45a8-b963-ae96713754b4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:27:50 compute-0 nova_compute[254092]: 2025-11-25 16:27:50.376 254096 DEBUG nova.objects.instance [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lazy-loading 'migration_context' on Instance uuid 12043b7f-9853-45a8-b963-ae96713754b4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:27:50 compute-0 nova_compute[254092]: 2025-11-25 16:27:50.396 254096 DEBUG nova.virt.libvirt.driver [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:27:50 compute-0 nova_compute[254092]: 2025-11-25 16:27:50.396 254096 DEBUG nova.virt.libvirt.driver [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Ensure instance console log exists: /var/lib/nova/instances/12043b7f-9853-45a8-b963-ae96713754b4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:27:50 compute-0 nova_compute[254092]: 2025-11-25 16:27:50.397 254096 DEBUG oslo_concurrency.lockutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:27:50 compute-0 nova_compute[254092]: 2025-11-25 16:27:50.398 254096 DEBUG oslo_concurrency.lockutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:27:50 compute-0 nova_compute[254092]: 2025-11-25 16:27:50.398 254096 DEBUG oslo_concurrency.lockutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:27:50 compute-0 nova_compute[254092]: 2025-11-25 16:27:50.401 254096 DEBUG nova.virt.libvirt.driver [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:27:50 compute-0 nova_compute[254092]: 2025-11-25 16:27:50.407 254096 WARNING nova.virt.libvirt.driver [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:27:50 compute-0 nova_compute[254092]: 2025-11-25 16:27:50.421 254096 DEBUG nova.virt.libvirt.host [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:27:50 compute-0 nova_compute[254092]: 2025-11-25 16:27:50.422 254096 DEBUG nova.virt.libvirt.host [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:27:50 compute-0 nova_compute[254092]: 2025-11-25 16:27:50.425 254096 DEBUG nova.virt.libvirt.host [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:27:50 compute-0 nova_compute[254092]: 2025-11-25 16:27:50.426 254096 DEBUG nova.virt.libvirt.host [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:27:50 compute-0 nova_compute[254092]: 2025-11-25 16:27:50.426 254096 DEBUG nova.virt.libvirt.driver [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:27:50 compute-0 nova_compute[254092]: 2025-11-25 16:27:50.426 254096 DEBUG nova.virt.hardware [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:27:50 compute-0 nova_compute[254092]: 2025-11-25 16:27:50.427 254096 DEBUG nova.virt.hardware [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:27:50 compute-0 nova_compute[254092]: 2025-11-25 16:27:50.427 254096 DEBUG nova.virt.hardware [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:27:50 compute-0 nova_compute[254092]: 2025-11-25 16:27:50.427 254096 DEBUG nova.virt.hardware [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:27:50 compute-0 nova_compute[254092]: 2025-11-25 16:27:50.427 254096 DEBUG nova.virt.hardware [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:27:50 compute-0 nova_compute[254092]: 2025-11-25 16:27:50.428 254096 DEBUG nova.virt.hardware [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:27:50 compute-0 nova_compute[254092]: 2025-11-25 16:27:50.428 254096 DEBUG nova.virt.hardware [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:27:50 compute-0 nova_compute[254092]: 2025-11-25 16:27:50.428 254096 DEBUG nova.virt.hardware [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:27:50 compute-0 nova_compute[254092]: 2025-11-25 16:27:50.428 254096 DEBUG nova.virt.hardware [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:27:50 compute-0 nova_compute[254092]: 2025-11-25 16:27:50.428 254096 DEBUG nova.virt.hardware [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:27:50 compute-0 nova_compute[254092]: 2025-11-25 16:27:50.429 254096 DEBUG nova.virt.hardware [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:27:50 compute-0 nova_compute[254092]: 2025-11-25 16:27:50.431 254096 DEBUG oslo_concurrency.processutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:50 compute-0 sshd-session[278704]: error: kex_exchange_identification: read: Connection reset by peer
Nov 25 16:27:50 compute-0 sshd-session[278704]: Connection reset by 45.140.17.97 port 11879
Nov 25 16:27:50 compute-0 nova_compute[254092]: 2025-11-25 16:27:50.510 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:27:50 compute-0 nova_compute[254092]: 2025-11-25 16:27:50.537 254096 DEBUG oslo_concurrency.lockutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Acquiring lock "2174ef15-55fa-4734-8cc2-89064853919b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:27:50 compute-0 nova_compute[254092]: 2025-11-25 16:27:50.537 254096 DEBUG oslo_concurrency.lockutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "2174ef15-55fa-4734-8cc2-89064853919b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:27:50 compute-0 nova_compute[254092]: 2025-11-25 16:27:50.543 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:27:50 compute-0 nova_compute[254092]: 2025-11-25 16:27:50.544 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:27:50 compute-0 nova_compute[254092]: 2025-11-25 16:27:50.544 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:27:50 compute-0 nova_compute[254092]: 2025-11-25 16:27:50.544 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 16:27:50 compute-0 nova_compute[254092]: 2025-11-25 16:27:50.545 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:50 compute-0 nova_compute[254092]: 2025-11-25 16:27:50.567 254096 DEBUG nova.compute.manager [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:27:50 compute-0 nova_compute[254092]: 2025-11-25 16:27:50.694 254096 DEBUG oslo_concurrency.lockutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:27:50 compute-0 nova_compute[254092]: 2025-11-25 16:27:50.695 254096 DEBUG oslo_concurrency.lockutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:27:50 compute-0 nova_compute[254092]: 2025-11-25 16:27:50.701 254096 DEBUG nova.virt.hardware [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:27:50 compute-0 nova_compute[254092]: 2025-11-25 16:27:50.701 254096 INFO nova.compute.claims [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:27:50 compute-0 nova_compute[254092]: 2025-11-25 16:27:50.848 254096 DEBUG oslo_concurrency.processutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:27:50 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1665008175' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:27:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1185: 321 pgs: 321 active+clean; 104 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 499 KiB/s rd, 1.8 MiB/s wr, 83 op/s
Nov 25 16:27:50 compute-0 nova_compute[254092]: 2025-11-25 16:27:50.868 254096 DEBUG nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 16:27:50 compute-0 nova_compute[254092]: 2025-11-25 16:27:50.870 254096 DEBUG oslo_concurrency.processutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:50 compute-0 nova_compute[254092]: 2025-11-25 16:27:50.890 254096 DEBUG nova.storage.rbd_utils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] rbd image 12043b7f-9853-45a8-b963-ae96713754b4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:27:50 compute-0 nova_compute[254092]: 2025-11-25 16:27:50.894 254096 DEBUG oslo_concurrency.processutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:27:50 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2960485916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:27:50 compute-0 nova_compute[254092]: 2025-11-25 16:27:50.961 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:51 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1665008175' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:27:51 compute-0 ceph-mon[74985]: pgmap v1185: 321 pgs: 321 active+clean; 104 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 499 KiB/s rd, 1.8 MiB/s wr, 83 op/s
Nov 25 16:27:51 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2960485916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:27:51 compute-0 nova_compute[254092]: 2025-11-25 16:27:51.023 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:27:51 compute-0 nova_compute[254092]: 2025-11-25 16:27:51.023 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:27:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:27:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:27:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:27:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:27:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000645763637917562 of space, bias 1.0, pg target 0.19372909137526861 quantized to 32 (current 32)
Nov 25 16:27:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:27:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:27:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:27:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:27:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:27:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006668757529139806 of space, bias 1.0, pg target 0.20006272587419416 quantized to 32 (current 32)
Nov 25 16:27:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:27:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 16:27:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:27:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:27:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:27:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 16:27:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:27:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 16:27:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:27:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:27:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:27:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 16:27:51 compute-0 nova_compute[254092]: 2025-11-25 16:27:51.160 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:27:51 compute-0 nova_compute[254092]: 2025-11-25 16:27:51.161 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4445MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 16:27:51 compute-0 nova_compute[254092]: 2025-11-25 16:27:51.161 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:27:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:27:51 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2831315958' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:27:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:27:51 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2059842282' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:27:51 compute-0 nova_compute[254092]: 2025-11-25 16:27:51.332 254096 DEBUG oslo_concurrency.processutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:51 compute-0 nova_compute[254092]: 2025-11-25 16:27:51.333 254096 DEBUG nova.objects.instance [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lazy-loading 'pci_devices' on Instance uuid 12043b7f-9853-45a8-b963-ae96713754b4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:27:51 compute-0 nova_compute[254092]: 2025-11-25 16:27:51.337 254096 DEBUG oslo_concurrency.processutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:51 compute-0 nova_compute[254092]: 2025-11-25 16:27:51.342 254096 DEBUG nova.compute.provider_tree [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:27:51 compute-0 nova_compute[254092]: 2025-11-25 16:27:51.358 254096 DEBUG nova.virt.libvirt.driver [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:27:51 compute-0 nova_compute[254092]:   <uuid>12043b7f-9853-45a8-b963-ae96713754b4</uuid>
Nov 25 16:27:51 compute-0 nova_compute[254092]:   <name>instance-0000000f</name>
Nov 25 16:27:51 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:27:51 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:27:51 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:27:51 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:       <nova:name>tempest-ListImageFiltersTestJSON-server-842538226</nova:name>
Nov 25 16:27:51 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:27:50</nova:creationTime>
Nov 25 16:27:51 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:27:51 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:27:51 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:27:51 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:27:51 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:27:51 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:27:51 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:27:51 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:27:51 compute-0 nova_compute[254092]:         <nova:user uuid="435d184bc01d4b1b878995bce4319f96">tempest-ListImageFiltersTestJSON-1919749506-project-member</nova:user>
Nov 25 16:27:51 compute-0 nova_compute[254092]:         <nova:project uuid="b791fb0a74ad43e9b9270c33338d5556">tempest-ListImageFiltersTestJSON-1919749506</nova:project>
Nov 25 16:27:51 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:27:51 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:       <nova:ports/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:27:51 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:27:51 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:27:51 compute-0 nova_compute[254092]:     <system>
Nov 25 16:27:51 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:27:51 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:27:51 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:27:51 compute-0 nova_compute[254092]:       <entry name="serial">12043b7f-9853-45a8-b963-ae96713754b4</entry>
Nov 25 16:27:51 compute-0 nova_compute[254092]:       <entry name="uuid">12043b7f-9853-45a8-b963-ae96713754b4</entry>
Nov 25 16:27:51 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     </system>
Nov 25 16:27:51 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:27:51 compute-0 nova_compute[254092]:   <os>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:   </os>
Nov 25 16:27:51 compute-0 nova_compute[254092]:   <features>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:   </features>
Nov 25 16:27:51 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:27:51 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:27:51 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:27:51 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:27:51 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:27:51 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/12043b7f-9853-45a8-b963-ae96713754b4_disk">
Nov 25 16:27:51 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:       </source>
Nov 25 16:27:51 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:27:51 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:27:51 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:27:51 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/12043b7f-9853-45a8-b963-ae96713754b4_disk.config">
Nov 25 16:27:51 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:       </source>
Nov 25 16:27:51 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:27:51 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:27:51 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:27:51 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/12043b7f-9853-45a8-b963-ae96713754b4/console.log" append="off"/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     <video>
Nov 25 16:27:51 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     </video>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:27:51 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:27:51 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:27:51 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:27:51 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:27:51 compute-0 nova_compute[254092]: </domain>
Nov 25 16:27:51 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:27:51 compute-0 nova_compute[254092]: 2025-11-25 16:27:51.365 254096 DEBUG nova.scheduler.client.report [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:27:51 compute-0 nova_compute[254092]: 2025-11-25 16:27:51.411 254096 DEBUG oslo_concurrency.lockutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:27:51 compute-0 nova_compute[254092]: 2025-11-25 16:27:51.412 254096 DEBUG nova.compute.manager [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:27:51 compute-0 nova_compute[254092]: 2025-11-25 16:27:51.414 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.252s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:27:51 compute-0 nova_compute[254092]: 2025-11-25 16:27:51.446 254096 DEBUG nova.virt.libvirt.driver [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:27:51 compute-0 nova_compute[254092]: 2025-11-25 16:27:51.446 254096 DEBUG nova.virt.libvirt.driver [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:27:51 compute-0 nova_compute[254092]: 2025-11-25 16:27:51.447 254096 INFO nova.virt.libvirt.driver [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Using config drive
Nov 25 16:27:51 compute-0 nova_compute[254092]: 2025-11-25 16:27:51.463 254096 DEBUG nova.storage.rbd_utils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] rbd image 12043b7f-9853-45a8-b963-ae96713754b4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:27:51 compute-0 nova_compute[254092]: 2025-11-25 16:27:51.471 254096 DEBUG nova.compute.manager [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:27:51 compute-0 nova_compute[254092]: 2025-11-25 16:27:51.472 254096 DEBUG nova.network.neutron [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:27:51 compute-0 nova_compute[254092]: 2025-11-25 16:27:51.510 254096 INFO nova.virt.libvirt.driver [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:27:51 compute-0 nova_compute[254092]: 2025-11-25 16:27:51.527 254096 DEBUG nova.compute.manager [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:27:51 compute-0 nova_compute[254092]: 2025-11-25 16:27:51.540 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:27:51 compute-0 nova_compute[254092]: 2025-11-25 16:27:51.540 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 12043b7f-9853-45a8-b963-ae96713754b4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:27:51 compute-0 nova_compute[254092]: 2025-11-25 16:27:51.540 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 2174ef15-55fa-4734-8cc2-89064853919b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:27:51 compute-0 nova_compute[254092]: 2025-11-25 16:27:51.540 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 16:27:51 compute-0 nova_compute[254092]: 2025-11-25 16:27:51.540 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 16:27:51 compute-0 nova_compute[254092]: 2025-11-25 16:27:51.625 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:51 compute-0 nova_compute[254092]: 2025-11-25 16:27:51.645 254096 DEBUG nova.compute.manager [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:27:51 compute-0 nova_compute[254092]: 2025-11-25 16:27:51.647 254096 DEBUG nova.virt.libvirt.driver [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:27:51 compute-0 nova_compute[254092]: 2025-11-25 16:27:51.647 254096 INFO nova.virt.libvirt.driver [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Creating image(s)
Nov 25 16:27:51 compute-0 nova_compute[254092]: 2025-11-25 16:27:51.667 254096 DEBUG nova.storage.rbd_utils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] rbd image 2174ef15-55fa-4734-8cc2-89064853919b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:27:51 compute-0 nova_compute[254092]: 2025-11-25 16:27:51.687 254096 DEBUG nova.storage.rbd_utils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] rbd image 2174ef15-55fa-4734-8cc2-89064853919b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:27:51 compute-0 nova_compute[254092]: 2025-11-25 16:27:51.707 254096 DEBUG nova.storage.rbd_utils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] rbd image 2174ef15-55fa-4734-8cc2-89064853919b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:27:51 compute-0 nova_compute[254092]: 2025-11-25 16:27:51.710 254096 DEBUG oslo_concurrency.processutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:51 compute-0 nova_compute[254092]: 2025-11-25 16:27:51.770 254096 DEBUG oslo_concurrency.processutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:51 compute-0 nova_compute[254092]: 2025-11-25 16:27:51.771 254096 DEBUG oslo_concurrency.lockutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:27:51 compute-0 nova_compute[254092]: 2025-11-25 16:27:51.771 254096 DEBUG oslo_concurrency.lockutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:27:51 compute-0 nova_compute[254092]: 2025-11-25 16:27:51.772 254096 DEBUG oslo_concurrency.lockutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:27:51 compute-0 nova_compute[254092]: 2025-11-25 16:27:51.796 254096 DEBUG nova.storage.rbd_utils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] rbd image 2174ef15-55fa-4734-8cc2-89064853919b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:27:51 compute-0 nova_compute[254092]: 2025-11-25 16:27:51.800 254096 DEBUG oslo_concurrency.processutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 2174ef15-55fa-4734-8cc2-89064853919b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:51 compute-0 nova_compute[254092]: 2025-11-25 16:27:51.993 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088056.9909072, 38463335-bf41-4609-b81c-08bc8da299af => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:27:51 compute-0 nova_compute[254092]: 2025-11-25 16:27:51.993 254096 INFO nova.compute.manager [-] [instance: 38463335-bf41-4609-b81c-08bc8da299af] VM Stopped (Lifecycle Event)
Nov 25 16:27:52 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2831315958' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:27:52 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2059842282' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.035 254096 DEBUG nova.compute.manager [None req-975bc03c-ea7a-4416-bc0c-14cb8d121734 - - - - - -] [instance: 38463335-bf41-4609-b81c-08bc8da299af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:27:52 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:27:52 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3128952328' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.059 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.063 254096 DEBUG oslo_concurrency.processutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 2174ef15-55fa-4734-8cc2-89064853919b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.263s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.091 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.121 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.128 254096 DEBUG nova.storage.rbd_utils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] resizing rbd image 2174ef15-55fa-4734-8cc2-89064853919b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.155 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.155 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.211 254096 DEBUG nova.objects.instance [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lazy-loading 'migration_context' on Instance uuid 2174ef15-55fa-4734-8cc2-89064853919b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.223 254096 DEBUG nova.virt.libvirt.driver [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.223 254096 DEBUG nova.virt.libvirt.driver [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Ensure instance console log exists: /var/lib/nova/instances/2174ef15-55fa-4734-8cc2-89064853919b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.224 254096 DEBUG oslo_concurrency.lockutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.224 254096 DEBUG oslo_concurrency.lockutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.224 254096 DEBUG oslo_concurrency.lockutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.257 254096 INFO nova.virt.libvirt.driver [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Creating config drive at /var/lib/nova/instances/12043b7f-9853-45a8-b963-ae96713754b4/disk.config
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.261 254096 DEBUG oslo_concurrency.processutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/12043b7f-9853-45a8-b963-ae96713754b4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe2jw8rai execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.328 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.384 254096 DEBUG oslo_concurrency.processutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/12043b7f-9853-45a8-b963-ae96713754b4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe2jw8rai" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.408 254096 DEBUG nova.storage.rbd_utils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] rbd image 12043b7f-9853-45a8-b963-ae96713754b4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.411 254096 DEBUG oslo_concurrency.processutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/12043b7f-9853-45a8-b963-ae96713754b4/disk.config 12043b7f-9853-45a8-b963-ae96713754b4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.431 254096 DEBUG nova.network.neutron [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.432 254096 DEBUG nova.compute.manager [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.433 254096 DEBUG nova.virt.libvirt.driver [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.438 254096 WARNING nova.virt.libvirt.driver [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.443 254096 DEBUG nova.virt.libvirt.host [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.443 254096 DEBUG nova.virt.libvirt.host [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.446 254096 DEBUG nova.virt.libvirt.host [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.446 254096 DEBUG nova.virt.libvirt.host [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.447 254096 DEBUG nova.virt.libvirt.driver [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.447 254096 DEBUG nova.virt.hardware [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.447 254096 DEBUG nova.virt.hardware [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.448 254096 DEBUG nova.virt.hardware [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.448 254096 DEBUG nova.virt.hardware [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.448 254096 DEBUG nova.virt.hardware [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.448 254096 DEBUG nova.virt.hardware [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.449 254096 DEBUG nova.virt.hardware [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.449 254096 DEBUG nova.virt.hardware [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.449 254096 DEBUG nova.virt.hardware [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.449 254096 DEBUG nova.virt.hardware [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.450 254096 DEBUG nova.virt.hardware [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.453 254096 DEBUG oslo_concurrency.processutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.544 254096 DEBUG oslo_concurrency.processutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/12043b7f-9853-45a8-b963-ae96713754b4/disk.config 12043b7f-9853-45a8-b963-ae96713754b4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.545 254096 INFO nova.virt.libvirt.driver [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Deleting local config drive /var/lib/nova/instances/12043b7f-9853-45a8-b963-ae96713754b4/disk.config because it was imported into RBD.
Nov 25 16:27:52 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:27:52 compute-0 systemd-machined[216343]: New machine qemu-16-instance-0000000f.
Nov 25 16:27:52 compute-0 systemd[1]: Started Virtual Machine qemu-16-instance-0000000f.
Nov 25 16:27:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1186: 321 pgs: 321 active+clean; 135 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 423 KiB/s rd, 3.6 MiB/s wr, 94 op/s
Nov 25 16:27:52 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:27:52 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2489301489' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.901 254096 DEBUG oslo_concurrency.processutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.924 254096 DEBUG nova.storage.rbd_utils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] rbd image 2174ef15-55fa-4734-8cc2-89064853919b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:27:52 compute-0 nova_compute[254092]: 2025-11-25 16:27:52.928 254096 DEBUG oslo_concurrency.processutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:53 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3128952328' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:27:53 compute-0 ceph-mon[74985]: pgmap v1186: 321 pgs: 321 active+clean; 135 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 423 KiB/s rd, 3.6 MiB/s wr, 94 op/s
Nov 25 16:27:53 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2489301489' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:27:53 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Nov 25 16:27:53 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000c.scope: Consumed 12.603s CPU time.
Nov 25 16:27:53 compute-0 systemd-machined[216343]: Machine qemu-15-instance-0000000c terminated.
Nov 25 16:27:53 compute-0 nova_compute[254092]: 2025-11-25 16:27:53.151 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088073.1514132, 12043b7f-9853-45a8-b963-ae96713754b4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:27:53 compute-0 nova_compute[254092]: 2025-11-25 16:27:53.152 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] VM Resumed (Lifecycle Event)
Nov 25 16:27:53 compute-0 nova_compute[254092]: 2025-11-25 16:27:53.155 254096 DEBUG nova.compute.manager [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:27:53 compute-0 nova_compute[254092]: 2025-11-25 16:27:53.155 254096 DEBUG nova.virt.libvirt.driver [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:27:53 compute-0 nova_compute[254092]: 2025-11-25 16:27:53.158 254096 INFO nova.virt.libvirt.driver [-] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Instance spawned successfully.
Nov 25 16:27:53 compute-0 nova_compute[254092]: 2025-11-25 16:27:53.158 254096 DEBUG nova.virt.libvirt.driver [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:27:53 compute-0 nova_compute[254092]: 2025-11-25 16:27:53.172 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:27:53 compute-0 nova_compute[254092]: 2025-11-25 16:27:53.176 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:27:53 compute-0 nova_compute[254092]: 2025-11-25 16:27:53.184 254096 DEBUG nova.virt.libvirt.driver [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:27:53 compute-0 nova_compute[254092]: 2025-11-25 16:27:53.184 254096 DEBUG nova.virt.libvirt.driver [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:27:53 compute-0 nova_compute[254092]: 2025-11-25 16:27:53.184 254096 DEBUG nova.virt.libvirt.driver [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:27:53 compute-0 nova_compute[254092]: 2025-11-25 16:27:53.185 254096 DEBUG nova.virt.libvirt.driver [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:27:53 compute-0 nova_compute[254092]: 2025-11-25 16:27:53.185 254096 DEBUG nova.virt.libvirt.driver [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:27:53 compute-0 nova_compute[254092]: 2025-11-25 16:27:53.185 254096 DEBUG nova.virt.libvirt.driver [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:27:53 compute-0 nova_compute[254092]: 2025-11-25 16:27:53.204 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:27:53 compute-0 nova_compute[254092]: 2025-11-25 16:27:53.204 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088073.1544726, 12043b7f-9853-45a8-b963-ae96713754b4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:27:53 compute-0 nova_compute[254092]: 2025-11-25 16:27:53.204 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] VM Started (Lifecycle Event)
Nov 25 16:27:53 compute-0 nova_compute[254092]: 2025-11-25 16:27:53.230 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:27:53 compute-0 nova_compute[254092]: 2025-11-25 16:27:53.233 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:27:53 compute-0 nova_compute[254092]: 2025-11-25 16:27:53.249 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:27:53 compute-0 nova_compute[254092]: 2025-11-25 16:27:53.287 254096 INFO nova.compute.manager [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Took 3.58 seconds to spawn the instance on the hypervisor.
Nov 25 16:27:53 compute-0 nova_compute[254092]: 2025-11-25 16:27:53.287 254096 DEBUG nova.compute.manager [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:27:53 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:27:53 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/607220910' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:27:53 compute-0 nova_compute[254092]: 2025-11-25 16:27:53.362 254096 DEBUG oslo_concurrency.processutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:53 compute-0 nova_compute[254092]: 2025-11-25 16:27:53.363 254096 DEBUG nova.objects.instance [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2174ef15-55fa-4734-8cc2-89064853919b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:27:53 compute-0 nova_compute[254092]: 2025-11-25 16:27:53.377 254096 INFO nova.compute.manager [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Took 4.58 seconds to build instance.
Nov 25 16:27:53 compute-0 nova_compute[254092]: 2025-11-25 16:27:53.385 254096 DEBUG nova.virt.libvirt.driver [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:27:53 compute-0 nova_compute[254092]:   <uuid>2174ef15-55fa-4734-8cc2-89064853919b</uuid>
Nov 25 16:27:53 compute-0 nova_compute[254092]:   <name>instance-00000010</name>
Nov 25 16:27:53 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:27:53 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:27:53 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:27:53 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:       <nova:name>tempest-ListImageFiltersTestJSON-server-1252766420</nova:name>
Nov 25 16:27:53 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:27:52</nova:creationTime>
Nov 25 16:27:53 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:27:53 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:27:53 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:27:53 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:27:53 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:27:53 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:27:53 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:27:53 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:27:53 compute-0 nova_compute[254092]:         <nova:user uuid="435d184bc01d4b1b878995bce4319f96">tempest-ListImageFiltersTestJSON-1919749506-project-member</nova:user>
Nov 25 16:27:53 compute-0 nova_compute[254092]:         <nova:project uuid="b791fb0a74ad43e9b9270c33338d5556">tempest-ListImageFiltersTestJSON-1919749506</nova:project>
Nov 25 16:27:53 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:27:53 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:       <nova:ports/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:27:53 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:27:53 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:27:53 compute-0 nova_compute[254092]:     <system>
Nov 25 16:27:53 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:27:53 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:27:53 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:27:53 compute-0 nova_compute[254092]:       <entry name="serial">2174ef15-55fa-4734-8cc2-89064853919b</entry>
Nov 25 16:27:53 compute-0 nova_compute[254092]:       <entry name="uuid">2174ef15-55fa-4734-8cc2-89064853919b</entry>
Nov 25 16:27:53 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     </system>
Nov 25 16:27:53 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:27:53 compute-0 nova_compute[254092]:   <os>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:   </os>
Nov 25 16:27:53 compute-0 nova_compute[254092]:   <features>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:   </features>
Nov 25 16:27:53 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:27:53 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:27:53 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:27:53 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:27:53 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:27:53 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/2174ef15-55fa-4734-8cc2-89064853919b_disk">
Nov 25 16:27:53 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:       </source>
Nov 25 16:27:53 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:27:53 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:27:53 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:27:53 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/2174ef15-55fa-4734-8cc2-89064853919b_disk.config">
Nov 25 16:27:53 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:       </source>
Nov 25 16:27:53 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:27:53 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:27:53 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:27:53 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/2174ef15-55fa-4734-8cc2-89064853919b/console.log" append="off"/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     <video>
Nov 25 16:27:53 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     </video>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:27:53 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:27:53 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:27:53 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:27:53 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:27:53 compute-0 nova_compute[254092]: </domain>
Nov 25 16:27:53 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:27:53 compute-0 nova_compute[254092]: 2025-11-25 16:27:53.431 254096 DEBUG oslo_concurrency.lockutils [None req-80511f95-2179-4849-bc69-8d6b21ced6ee 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "12043b7f-9853-45a8-b963-ae96713754b4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:27:53 compute-0 nova_compute[254092]: 2025-11-25 16:27:53.471 254096 DEBUG nova.virt.libvirt.driver [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:27:53 compute-0 nova_compute[254092]: 2025-11-25 16:27:53.472 254096 DEBUG nova.virt.libvirt.driver [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:27:53 compute-0 nova_compute[254092]: 2025-11-25 16:27:53.473 254096 INFO nova.virt.libvirt.driver [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Using config drive
Nov 25 16:27:53 compute-0 nova_compute[254092]: 2025-11-25 16:27:53.503 254096 DEBUG nova.storage.rbd_utils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] rbd image 2174ef15-55fa-4734-8cc2-89064853919b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:27:53 compute-0 nova_compute[254092]: 2025-11-25 16:27:53.508 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:27:53 compute-0 nova_compute[254092]: 2025-11-25 16:27:53.511 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:27:53 compute-0 nova_compute[254092]: 2025-11-25 16:27:53.512 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:27:53 compute-0 nova_compute[254092]: 2025-11-25 16:27:53.794 254096 INFO nova.virt.libvirt.driver [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Creating config drive at /var/lib/nova/instances/2174ef15-55fa-4734-8cc2-89064853919b/disk.config
Nov 25 16:27:53 compute-0 nova_compute[254092]: 2025-11-25 16:27:53.820 254096 DEBUG oslo_concurrency.processutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2174ef15-55fa-4734-8cc2-89064853919b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpruwhl31l execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:53 compute-0 nova_compute[254092]: 2025-11-25 16:27:53.945 254096 DEBUG oslo_concurrency.processutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2174ef15-55fa-4734-8cc2-89064853919b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpruwhl31l" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:53 compute-0 nova_compute[254092]: 2025-11-25 16:27:53.972 254096 DEBUG nova.storage.rbd_utils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] rbd image 2174ef15-55fa-4734-8cc2-89064853919b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:27:53 compute-0 nova_compute[254092]: 2025-11-25 16:27:53.975 254096 DEBUG oslo_concurrency.processutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2174ef15-55fa-4734-8cc2-89064853919b/disk.config 2174ef15-55fa-4734-8cc2-89064853919b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:54 compute-0 nova_compute[254092]: 2025-11-25 16:27:54.002 254096 INFO nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Instance shutdown successfully after 13 seconds.
Nov 25 16:27:54 compute-0 nova_compute[254092]: 2025-11-25 16:27:54.008 254096 INFO nova.virt.libvirt.driver [-] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Instance destroyed successfully.
Nov 25 16:27:54 compute-0 nova_compute[254092]: 2025-11-25 16:27:54.012 254096 INFO nova.virt.libvirt.driver [-] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Instance destroyed successfully.
Nov 25 16:27:54 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/607220910' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:27:54 compute-0 nova_compute[254092]: 2025-11-25 16:27:54.174 254096 DEBUG oslo_concurrency.processutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2174ef15-55fa-4734-8cc2-89064853919b/disk.config 2174ef15-55fa-4734-8cc2-89064853919b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:54 compute-0 nova_compute[254092]: 2025-11-25 16:27:54.174 254096 INFO nova.virt.libvirt.driver [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Deleting local config drive /var/lib/nova/instances/2174ef15-55fa-4734-8cc2-89064853919b/disk.config because it was imported into RBD.
Nov 25 16:27:54 compute-0 systemd-machined[216343]: New machine qemu-17-instance-00000010.
Nov 25 16:27:54 compute-0 systemd[1]: Started Virtual Machine qemu-17-instance-00000010.
Nov 25 16:27:54 compute-0 nova_compute[254092]: 2025-11-25 16:27:54.397 254096 INFO nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Deleting instance files /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_del
Nov 25 16:27:54 compute-0 nova_compute[254092]: 2025-11-25 16:27:54.398 254096 INFO nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Deletion of /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_del complete
Nov 25 16:27:54 compute-0 nova_compute[254092]: 2025-11-25 16:27:54.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:27:54 compute-0 nova_compute[254092]: 2025-11-25 16:27:54.668 254096 DEBUG nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:27:54 compute-0 nova_compute[254092]: 2025-11-25 16:27:54.669 254096 INFO nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Creating image(s)
Nov 25 16:27:54 compute-0 nova_compute[254092]: 2025-11-25 16:27:54.688 254096 DEBUG nova.storage.rbd_utils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] rbd image 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:27:54 compute-0 nova_compute[254092]: 2025-11-25 16:27:54.708 254096 DEBUG nova.storage.rbd_utils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] rbd image 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:27:54 compute-0 nova_compute[254092]: 2025-11-25 16:27:54.727 254096 DEBUG nova.storage.rbd_utils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] rbd image 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:27:54 compute-0 nova_compute[254092]: 2025-11-25 16:27:54.730 254096 DEBUG oslo_concurrency.processutils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:54 compute-0 nova_compute[254092]: 2025-11-25 16:27:54.766 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088074.7655294, 2174ef15-55fa-4734-8cc2-89064853919b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:27:54 compute-0 nova_compute[254092]: 2025-11-25 16:27:54.766 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] VM Resumed (Lifecycle Event)
Nov 25 16:27:54 compute-0 nova_compute[254092]: 2025-11-25 16:27:54.778 254096 DEBUG nova.compute.manager [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:27:54 compute-0 nova_compute[254092]: 2025-11-25 16:27:54.778 254096 DEBUG nova.virt.libvirt.driver [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:27:54 compute-0 nova_compute[254092]: 2025-11-25 16:27:54.781 254096 INFO nova.virt.libvirt.driver [-] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Instance spawned successfully.
Nov 25 16:27:54 compute-0 nova_compute[254092]: 2025-11-25 16:27:54.782 254096 DEBUG nova.virt.libvirt.driver [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:27:54 compute-0 nova_compute[254092]: 2025-11-25 16:27:54.786 254096 DEBUG oslo_concurrency.processutils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:54 compute-0 nova_compute[254092]: 2025-11-25 16:27:54.787 254096 DEBUG oslo_concurrency.lockutils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:27:54 compute-0 nova_compute[254092]: 2025-11-25 16:27:54.788 254096 DEBUG oslo_concurrency.lockutils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:27:54 compute-0 nova_compute[254092]: 2025-11-25 16:27:54.788 254096 DEBUG oslo_concurrency.lockutils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:27:54 compute-0 nova_compute[254092]: 2025-11-25 16:27:54.805 254096 DEBUG nova.storage.rbd_utils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] rbd image 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:27:54 compute-0 nova_compute[254092]: 2025-11-25 16:27:54.808 254096 DEBUG oslo_concurrency.processutils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:54 compute-0 nova_compute[254092]: 2025-11-25 16:27:54.839 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:27:54 compute-0 nova_compute[254092]: 2025-11-25 16:27:54.841 254096 DEBUG nova.virt.libvirt.driver [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:27:54 compute-0 nova_compute[254092]: 2025-11-25 16:27:54.842 254096 DEBUG nova.virt.libvirt.driver [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:27:54 compute-0 nova_compute[254092]: 2025-11-25 16:27:54.842 254096 DEBUG nova.virt.libvirt.driver [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:27:54 compute-0 nova_compute[254092]: 2025-11-25 16:27:54.842 254096 DEBUG nova.virt.libvirt.driver [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:27:54 compute-0 nova_compute[254092]: 2025-11-25 16:27:54.843 254096 DEBUG nova.virt.libvirt.driver [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:27:54 compute-0 nova_compute[254092]: 2025-11-25 16:27:54.843 254096 DEBUG nova.virt.libvirt.driver [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:27:54 compute-0 nova_compute[254092]: 2025-11-25 16:27:54.848 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:27:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1187: 321 pgs: 321 active+clean; 135 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 411 KiB/s rd, 3.5 MiB/s wr, 91 op/s
Nov 25 16:27:54 compute-0 nova_compute[254092]: 2025-11-25 16:27:54.876 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:27:54 compute-0 nova_compute[254092]: 2025-11-25 16:27:54.877 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088074.7779467, 2174ef15-55fa-4734-8cc2-89064853919b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:27:54 compute-0 nova_compute[254092]: 2025-11-25 16:27:54.877 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] VM Started (Lifecycle Event)
Nov 25 16:27:54 compute-0 nova_compute[254092]: 2025-11-25 16:27:54.900 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:27:54 compute-0 nova_compute[254092]: 2025-11-25 16:27:54.902 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:27:54 compute-0 nova_compute[254092]: 2025-11-25 16:27:54.923 254096 INFO nova.compute.manager [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Took 3.28 seconds to spawn the instance on the hypervisor.
Nov 25 16:27:54 compute-0 nova_compute[254092]: 2025-11-25 16:27:54.924 254096 DEBUG nova.compute.manager [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:27:54 compute-0 nova_compute[254092]: 2025-11-25 16:27:54.926 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:27:55 compute-0 nova_compute[254092]: 2025-11-25 16:27:55.030 254096 INFO nova.compute.manager [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Took 4.37 seconds to build instance.
Nov 25 16:27:55 compute-0 nova_compute[254092]: 2025-11-25 16:27:55.056 254096 DEBUG oslo_concurrency.lockutils [None req-4242f707-7798-418e-a2b5-b6d1a30a0006 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "2174ef15-55fa-4734-8cc2-89064853919b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.518s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:27:55 compute-0 ceph-mon[74985]: pgmap v1187: 321 pgs: 321 active+clean; 135 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 411 KiB/s rd, 3.5 MiB/s wr, 91 op/s
Nov 25 16:27:55 compute-0 nova_compute[254092]: 2025-11-25 16:27:55.164 254096 DEBUG oslo_concurrency.processutils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.356s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 16:27:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/345017531' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:27:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 16:27:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/345017531' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:27:55 compute-0 nova_compute[254092]: 2025-11-25 16:27:55.224 254096 DEBUG nova.storage.rbd_utils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] resizing rbd image 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:27:55 compute-0 nova_compute[254092]: 2025-11-25 16:27:55.315 254096 DEBUG nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:27:55 compute-0 nova_compute[254092]: 2025-11-25 16:27:55.316 254096 DEBUG nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Ensure instance console log exists: /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:27:55 compute-0 nova_compute[254092]: 2025-11-25 16:27:55.316 254096 DEBUG oslo_concurrency.lockutils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:27:55 compute-0 nova_compute[254092]: 2025-11-25 16:27:55.317 254096 DEBUG oslo_concurrency.lockutils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:27:55 compute-0 nova_compute[254092]: 2025-11-25 16:27:55.317 254096 DEBUG oslo_concurrency.lockutils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:27:55 compute-0 nova_compute[254092]: 2025-11-25 16:27:55.319 254096 DEBUG nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:27:55 compute-0 nova_compute[254092]: 2025-11-25 16:27:55.324 254096 WARNING nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Nov 25 16:27:55 compute-0 nova_compute[254092]: 2025-11-25 16:27:55.332 254096 DEBUG nova.virt.libvirt.host [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:27:55 compute-0 nova_compute[254092]: 2025-11-25 16:27:55.333 254096 DEBUG nova.virt.libvirt.host [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:27:55 compute-0 nova_compute[254092]: 2025-11-25 16:27:55.336 254096 DEBUG nova.virt.libvirt.host [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:27:55 compute-0 nova_compute[254092]: 2025-11-25 16:27:55.337 254096 DEBUG nova.virt.libvirt.host [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:27:55 compute-0 nova_compute[254092]: 2025-11-25 16:27:55.337 254096 DEBUG nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:27:55 compute-0 nova_compute[254092]: 2025-11-25 16:27:55.338 254096 DEBUG nova.virt.hardware [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:27:55 compute-0 nova_compute[254092]: 2025-11-25 16:27:55.338 254096 DEBUG nova.virt.hardware [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:27:55 compute-0 nova_compute[254092]: 2025-11-25 16:27:55.338 254096 DEBUG nova.virt.hardware [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:27:55 compute-0 nova_compute[254092]: 2025-11-25 16:27:55.339 254096 DEBUG nova.virt.hardware [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:27:55 compute-0 nova_compute[254092]: 2025-11-25 16:27:55.339 254096 DEBUG nova.virt.hardware [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:27:55 compute-0 nova_compute[254092]: 2025-11-25 16:27:55.339 254096 DEBUG nova.virt.hardware [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:27:55 compute-0 nova_compute[254092]: 2025-11-25 16:27:55.339 254096 DEBUG nova.virt.hardware [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:27:55 compute-0 nova_compute[254092]: 2025-11-25 16:27:55.340 254096 DEBUG nova.virt.hardware [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:27:55 compute-0 nova_compute[254092]: 2025-11-25 16:27:55.340 254096 DEBUG nova.virt.hardware [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:27:55 compute-0 nova_compute[254092]: 2025-11-25 16:27:55.340 254096 DEBUG nova.virt.hardware [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:27:55 compute-0 nova_compute[254092]: 2025-11-25 16:27:55.340 254096 DEBUG nova.virt.hardware [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:27:55 compute-0 nova_compute[254092]: 2025-11-25 16:27:55.340 254096 DEBUG nova.objects.instance [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Lazy-loading 'vcpu_model' on Instance uuid 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:27:55 compute-0 nova_compute[254092]: 2025-11-25 16:27:55.356 254096 DEBUG oslo_concurrency.processutils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:55 compute-0 nova_compute[254092]: 2025-11-25 16:27:55.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:27:55 compute-0 nova_compute[254092]: 2025-11-25 16:27:55.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 16:27:55 compute-0 nova_compute[254092]: 2025-11-25 16:27:55.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 16:27:55 compute-0 nova_compute[254092]: 2025-11-25 16:27:55.515 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-5832b7f9-d051-4ddb-a8a0-920b2d3cfab2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:27:55 compute-0 nova_compute[254092]: 2025-11-25 16:27:55.516 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-5832b7f9-d051-4ddb-a8a0-920b2d3cfab2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:27:55 compute-0 nova_compute[254092]: 2025-11-25 16:27:55.516 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 16:27:55 compute-0 nova_compute[254092]: 2025-11-25 16:27:55.517 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:27:55 compute-0 nova_compute[254092]: 2025-11-25 16:27:55.798 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:27:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:27:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1177493047' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:27:55 compute-0 nova_compute[254092]: 2025-11-25 16:27:55.880 254096 DEBUG oslo_concurrency.processutils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:55 compute-0 nova_compute[254092]: 2025-11-25 16:27:55.904 254096 DEBUG nova.storage.rbd_utils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] rbd image 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:27:55 compute-0 nova_compute[254092]: 2025-11-25 16:27:55.911 254096 DEBUG oslo_concurrency.processutils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/345017531' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:27:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/345017531' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:27:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1177493047' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:27:56 compute-0 nova_compute[254092]: 2025-11-25 16:27:56.256 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:27:56 compute-0 nova_compute[254092]: 2025-11-25 16:27:56.278 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-5832b7f9-d051-4ddb-a8a0-920b2d3cfab2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:27:56 compute-0 nova_compute[254092]: 2025-11-25 16:27:56.278 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 16:27:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:27:56 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/229204161' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:27:56 compute-0 nova_compute[254092]: 2025-11-25 16:27:56.395 254096 DEBUG oslo_concurrency.processutils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:56 compute-0 nova_compute[254092]: 2025-11-25 16:27:56.398 254096 DEBUG nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:27:56 compute-0 nova_compute[254092]:   <uuid>5832b7f9-d051-4ddb-a8a0-920b2d3cfab2</uuid>
Nov 25 16:27:56 compute-0 nova_compute[254092]:   <name>instance-0000000c</name>
Nov 25 16:27:56 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:27:56 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:27:56 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:27:56 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:       <nova:name>tempest-ServersAdmin275Test-server-1909956679</nova:name>
Nov 25 16:27:56 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:27:55</nova:creationTime>
Nov 25 16:27:56 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:27:56 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:27:56 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:27:56 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:27:56 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:27:56 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:27:56 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:27:56 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:27:56 compute-0 nova_compute[254092]:         <nova:user uuid="f16e272341774153991e7ed856e34188">tempest-ServersAdmin275Test-1636693528-project-member</nova:user>
Nov 25 16:27:56 compute-0 nova_compute[254092]:         <nova:project uuid="a9a4c2749782401c89e06948356b0e0a">tempest-ServersAdmin275Test-1636693528</nova:project>
Nov 25 16:27:56 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:27:56 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:       <nova:ports/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:27:56 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:27:56 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:27:56 compute-0 nova_compute[254092]:     <system>
Nov 25 16:27:56 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:27:56 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:27:56 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:27:56 compute-0 nova_compute[254092]:       <entry name="serial">5832b7f9-d051-4ddb-a8a0-920b2d3cfab2</entry>
Nov 25 16:27:56 compute-0 nova_compute[254092]:       <entry name="uuid">5832b7f9-d051-4ddb-a8a0-920b2d3cfab2</entry>
Nov 25 16:27:56 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     </system>
Nov 25 16:27:56 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:27:56 compute-0 nova_compute[254092]:   <os>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:   </os>
Nov 25 16:27:56 compute-0 nova_compute[254092]:   <features>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:   </features>
Nov 25 16:27:56 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:27:56 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:27:56 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:27:56 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:27:56 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:27:56 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk">
Nov 25 16:27:56 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:       </source>
Nov 25 16:27:56 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:27:56 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:27:56 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:27:56 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk.config">
Nov 25 16:27:56 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:       </source>
Nov 25 16:27:56 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:27:56 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:27:56 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:27:56 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2/console.log" append="off"/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     <video>
Nov 25 16:27:56 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     </video>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:27:56 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:27:56 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:27:56 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:27:56 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:27:56 compute-0 nova_compute[254092]: </domain>
Nov 25 16:27:56 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:27:56 compute-0 nova_compute[254092]: 2025-11-25 16:27:56.462 254096 DEBUG nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:27:56 compute-0 nova_compute[254092]: 2025-11-25 16:27:56.462 254096 DEBUG nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:27:56 compute-0 nova_compute[254092]: 2025-11-25 16:27:56.463 254096 INFO nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Using config drive
Nov 25 16:27:56 compute-0 nova_compute[254092]: 2025-11-25 16:27:56.487 254096 DEBUG nova.storage.rbd_utils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] rbd image 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:27:56 compute-0 nova_compute[254092]: 2025-11-25 16:27:56.506 254096 DEBUG nova.objects.instance [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Lazy-loading 'ec2_ids' on Instance uuid 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:27:56 compute-0 nova_compute[254092]: 2025-11-25 16:27:56.543 254096 DEBUG nova.objects.instance [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Lazy-loading 'keypairs' on Instance uuid 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:27:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1188: 321 pgs: 321 active+clean; 148 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 6.7 MiB/s wr, 268 op/s
Nov 25 16:27:56 compute-0 nova_compute[254092]: 2025-11-25 16:27:56.921 254096 INFO nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Creating config drive at /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2/disk.config
Nov 25 16:27:56 compute-0 nova_compute[254092]: 2025-11-25 16:27:56.927 254096 DEBUG oslo_concurrency.processutils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzv21jnxz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:57 compute-0 nova_compute[254092]: 2025-11-25 16:27:57.055 254096 DEBUG oslo_concurrency.processutils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzv21jnxz" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:57 compute-0 nova_compute[254092]: 2025-11-25 16:27:57.077 254096 DEBUG nova.storage.rbd_utils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] rbd image 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:27:57 compute-0 nova_compute[254092]: 2025-11-25 16:27:57.080 254096 DEBUG oslo_concurrency.processutils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2/disk.config 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:27:57 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/229204161' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:27:57 compute-0 ceph-mon[74985]: pgmap v1188: 321 pgs: 321 active+clean; 148 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 6.7 MiB/s wr, 268 op/s
Nov 25 16:27:57 compute-0 nova_compute[254092]: 2025-11-25 16:27:57.152 254096 DEBUG nova.compute.manager [None req-7d66a2ec-9532-49ce-8459-58cc5c2fe32c 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:27:57 compute-0 nova_compute[254092]: 2025-11-25 16:27:57.199 254096 INFO nova.compute.manager [None req-7d66a2ec-9532-49ce-8459-58cc5c2fe32c 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] instance snapshotting
Nov 25 16:27:57 compute-0 nova_compute[254092]: 2025-11-25 16:27:57.212 254096 DEBUG oslo_concurrency.processutils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2/disk.config 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:27:57 compute-0 nova_compute[254092]: 2025-11-25 16:27:57.213 254096 INFO nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Deleting local config drive /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2/disk.config because it was imported into RBD.
Nov 25 16:27:57 compute-0 systemd-machined[216343]: New machine qemu-18-instance-0000000c.
Nov 25 16:27:57 compute-0 systemd[1]: Started Virtual Machine qemu-18-instance-0000000c.
Nov 25 16:27:57 compute-0 nova_compute[254092]: 2025-11-25 16:27:57.351 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:27:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:27:57 compute-0 nova_compute[254092]: 2025-11-25 16:27:57.705 254096 INFO nova.virt.libvirt.driver [None req-7d66a2ec-9532-49ce-8459-58cc5c2fe32c 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Beginning live snapshot process
Nov 25 16:27:57 compute-0 nova_compute[254092]: 2025-11-25 16:27:57.871 254096 DEBUG nova.compute.manager [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:27:57 compute-0 nova_compute[254092]: 2025-11-25 16:27:57.872 254096 DEBUG nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:27:57 compute-0 nova_compute[254092]: 2025-11-25 16:27:57.872 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 16:27:57 compute-0 nova_compute[254092]: 2025-11-25 16:27:57.873 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088077.8086817, 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:27:57 compute-0 nova_compute[254092]: 2025-11-25 16:27:57.873 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] VM Resumed (Lifecycle Event)
Nov 25 16:27:57 compute-0 nova_compute[254092]: 2025-11-25 16:27:57.881 254096 DEBUG nova.virt.libvirt.imagebackend [None req-7d66a2ec-9532-49ce-8459-58cc5c2fe32c 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] No parent info for 8b512c8e-2281-41de-a668-eb983e174ba0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 25 16:27:57 compute-0 nova_compute[254092]: 2025-11-25 16:27:57.885 254096 INFO nova.virt.libvirt.driver [-] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Instance spawned successfully.
Nov 25 16:27:57 compute-0 nova_compute[254092]: 2025-11-25 16:27:57.886 254096 DEBUG nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:27:57 compute-0 nova_compute[254092]: 2025-11-25 16:27:57.909 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:27:57 compute-0 nova_compute[254092]: 2025-11-25 16:27:57.920 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:27:57 compute-0 nova_compute[254092]: 2025-11-25 16:27:57.923 254096 DEBUG nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:27:57 compute-0 nova_compute[254092]: 2025-11-25 16:27:57.923 254096 DEBUG nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:27:57 compute-0 nova_compute[254092]: 2025-11-25 16:27:57.923 254096 DEBUG nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:27:57 compute-0 nova_compute[254092]: 2025-11-25 16:27:57.924 254096 DEBUG nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:27:57 compute-0 nova_compute[254092]: 2025-11-25 16:27:57.925 254096 DEBUG nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:27:57 compute-0 nova_compute[254092]: 2025-11-25 16:27:57.926 254096 DEBUG nova.virt.libvirt.driver [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:27:57 compute-0 nova_compute[254092]: 2025-11-25 16:27:57.947 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 16:27:57 compute-0 nova_compute[254092]: 2025-11-25 16:27:57.949 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088077.8087618, 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:27:57 compute-0 nova_compute[254092]: 2025-11-25 16:27:57.949 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] VM Started (Lifecycle Event)
Nov 25 16:27:57 compute-0 nova_compute[254092]: 2025-11-25 16:27:57.973 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:27:57 compute-0 nova_compute[254092]: 2025-11-25 16:27:57.978 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:27:58 compute-0 nova_compute[254092]: 2025-11-25 16:27:58.001 254096 DEBUG nova.compute.manager [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:27:58 compute-0 nova_compute[254092]: 2025-11-25 16:27:58.002 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 16:27:58 compute-0 nova_compute[254092]: 2025-11-25 16:27:58.089 254096 DEBUG oslo_concurrency.lockutils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:27:58 compute-0 nova_compute[254092]: 2025-11-25 16:27:58.091 254096 DEBUG oslo_concurrency.lockutils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:27:58 compute-0 nova_compute[254092]: 2025-11-25 16:27:58.092 254096 DEBUG nova.objects.instance [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 16:27:58 compute-0 nova_compute[254092]: 2025-11-25 16:27:58.136 254096 DEBUG nova.storage.rbd_utils [None req-7d66a2ec-9532-49ce-8459-58cc5c2fe32c 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] creating snapshot(8d2851002d5d4f4ab95baf095588850d) on rbd image(12043b7f-9853-45a8-b963-ae96713754b4_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 16:27:58 compute-0 nova_compute[254092]: 2025-11-25 16:27:58.194 254096 DEBUG oslo_concurrency.lockutils [None req-ff4d08e2-f4c2-4290-a9ae-6c0f607fc5c2 bffaf3b7bdbf4a1ab50752e5be198834 7bb561c8fb274134b8a08bbcadd0e1dc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.103s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:27:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Nov 25 16:27:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Nov 25 16:27:58 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Nov 25 16:27:58 compute-0 nova_compute[254092]: 2025-11-25 16:27:58.326 254096 DEBUG nova.storage.rbd_utils [None req-7d66a2ec-9532-49ce-8459-58cc5c2fe32c 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] cloning vms/12043b7f-9853-45a8-b963-ae96713754b4_disk@8d2851002d5d4f4ab95baf095588850d to images/83bd8229-570c-4485-b723-55b6d19bebf1 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 16:27:58 compute-0 nova_compute[254092]: 2025-11-25 16:27:58.449 254096 DEBUG nova.storage.rbd_utils [None req-7d66a2ec-9532-49ce-8459-58cc5c2fe32c 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] flattening images/83bd8229-570c-4485-b723-55b6d19bebf1 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 16:27:58 compute-0 nova_compute[254092]: 2025-11-25 16:27:58.505 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:27:58 compute-0 nova_compute[254092]: 2025-11-25 16:27:58.505 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:27:58 compute-0 nova_compute[254092]: 2025-11-25 16:27:58.505 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 16:27:58 compute-0 nova_compute[254092]: 2025-11-25 16:27:58.532 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 16:27:58 compute-0 nova_compute[254092]: 2025-11-25 16:27:58.533 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:27:58 compute-0 nova_compute[254092]: 2025-11-25 16:27:58.786 254096 DEBUG nova.storage.rbd_utils [None req-7d66a2ec-9532-49ce-8459-58cc5c2fe32c 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] removing snapshot(8d2851002d5d4f4ab95baf095588850d) on rbd image(12043b7f-9853-45a8-b963-ae96713754b4_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 16:27:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1190: 321 pgs: 321 active+clean; 148 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 8.0 MiB/s wr, 306 op/s
Nov 25 16:27:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Nov 25 16:27:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Nov 25 16:27:59 compute-0 ceph-mon[74985]: osdmap e140: 3 total, 3 up, 3 in
Nov 25 16:27:59 compute-0 ceph-mon[74985]: pgmap v1190: 321 pgs: 321 active+clean; 148 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 8.0 MiB/s wr, 306 op/s
Nov 25 16:27:59 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Nov 25 16:27:59 compute-0 nova_compute[254092]: 2025-11-25 16:27:59.299 254096 DEBUG nova.storage.rbd_utils [None req-7d66a2ec-9532-49ce-8459-58cc5c2fe32c 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] creating snapshot(snap) on rbd image(83bd8229-570c-4485-b723-55b6d19bebf1) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 16:27:59 compute-0 nova_compute[254092]: 2025-11-25 16:27:59.414 254096 DEBUG oslo_concurrency.lockutils [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Acquiring lock "5832b7f9-d051-4ddb-a8a0-920b2d3cfab2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:27:59 compute-0 nova_compute[254092]: 2025-11-25 16:27:59.414 254096 DEBUG oslo_concurrency.lockutils [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lock "5832b7f9-d051-4ddb-a8a0-920b2d3cfab2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:27:59 compute-0 nova_compute[254092]: 2025-11-25 16:27:59.415 254096 DEBUG oslo_concurrency.lockutils [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Acquiring lock "5832b7f9-d051-4ddb-a8a0-920b2d3cfab2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:27:59 compute-0 nova_compute[254092]: 2025-11-25 16:27:59.415 254096 DEBUG oslo_concurrency.lockutils [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lock "5832b7f9-d051-4ddb-a8a0-920b2d3cfab2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:27:59 compute-0 nova_compute[254092]: 2025-11-25 16:27:59.416 254096 DEBUG oslo_concurrency.lockutils [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lock "5832b7f9-d051-4ddb-a8a0-920b2d3cfab2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:27:59 compute-0 nova_compute[254092]: 2025-11-25 16:27:59.418 254096 INFO nova.compute.manager [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Terminating instance
Nov 25 16:27:59 compute-0 nova_compute[254092]: 2025-11-25 16:27:59.419 254096 DEBUG oslo_concurrency.lockutils [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Acquiring lock "refresh_cache-5832b7f9-d051-4ddb-a8a0-920b2d3cfab2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:27:59 compute-0 nova_compute[254092]: 2025-11-25 16:27:59.419 254096 DEBUG oslo_concurrency.lockutils [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Acquired lock "refresh_cache-5832b7f9-d051-4ddb-a8a0-920b2d3cfab2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:27:59 compute-0 nova_compute[254092]: 2025-11-25 16:27:59.420 254096 DEBUG nova.network.neutron [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:27:59 compute-0 nova_compute[254092]: 2025-11-25 16:27:59.636 254096 DEBUG nova.network.neutron [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:28:00 compute-0 nova_compute[254092]: 2025-11-25 16:28:00.241 254096 DEBUG nova.network.neutron [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:28:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Nov 25 16:28:00 compute-0 ceph-mon[74985]: osdmap e141: 3 total, 3 up, 3 in
Nov 25 16:28:00 compute-0 nova_compute[254092]: 2025-11-25 16:28:00.258 254096 DEBUG oslo_concurrency.lockutils [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Releasing lock "refresh_cache-5832b7f9-d051-4ddb-a8a0-920b2d3cfab2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:28:00 compute-0 nova_compute[254092]: 2025-11-25 16:28:00.258 254096 DEBUG nova.compute.manager [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:28:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Nov 25 16:28:00 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Nov 25 16:28:00 compute-0 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Nov 25 16:28:00 compute-0 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d0000000c.scope: Consumed 2.937s CPU time.
Nov 25 16:28:00 compute-0 systemd-machined[216343]: Machine qemu-18-instance-0000000c terminated.
Nov 25 16:28:00 compute-0 nova_compute[254092]: 2025-11-25 16:28:00.490 254096 INFO nova.virt.libvirt.driver [-] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Instance destroyed successfully.
Nov 25 16:28:00 compute-0 nova_compute[254092]: 2025-11-25 16:28:00.490 254096 DEBUG nova.objects.instance [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lazy-loading 'resources' on Instance uuid 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:28:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1193: 321 pgs: 321 active+clean; 209 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 10 MiB/s rd, 11 MiB/s wr, 642 op/s
Nov 25 16:28:00 compute-0 nova_compute[254092]: 2025-11-25 16:28:00.973 254096 INFO nova.virt.libvirt.driver [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Deleting instance files /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_del
Nov 25 16:28:00 compute-0 nova_compute[254092]: 2025-11-25 16:28:00.974 254096 INFO nova.virt.libvirt.driver [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Deletion of /var/lib/nova/instances/5832b7f9-d051-4ddb-a8a0-920b2d3cfab2_del complete
Nov 25 16:28:01 compute-0 nova_compute[254092]: 2025-11-25 16:28:01.040 254096 INFO nova.compute.manager [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Took 0.78 seconds to destroy the instance on the hypervisor.
Nov 25 16:28:01 compute-0 nova_compute[254092]: 2025-11-25 16:28:01.041 254096 DEBUG oslo.service.loopingcall [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:28:01 compute-0 nova_compute[254092]: 2025-11-25 16:28:01.041 254096 DEBUG nova.compute.manager [-] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:28:01 compute-0 nova_compute[254092]: 2025-11-25 16:28:01.042 254096 DEBUG nova.network.neutron [-] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:28:01 compute-0 nova_compute[254092]: 2025-11-25 16:28:01.237 254096 DEBUG nova.network.neutron [-] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:28:01 compute-0 nova_compute[254092]: 2025-11-25 16:28:01.256 254096 DEBUG nova.network.neutron [-] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:28:01 compute-0 ceph-mon[74985]: osdmap e142: 3 total, 3 up, 3 in
Nov 25 16:28:01 compute-0 ceph-mon[74985]: pgmap v1193: 321 pgs: 321 active+clean; 209 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 10 MiB/s rd, 11 MiB/s wr, 642 op/s
Nov 25 16:28:01 compute-0 nova_compute[254092]: 2025-11-25 16:28:01.284 254096 INFO nova.compute.manager [-] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Took 0.24 seconds to deallocate network for instance.
Nov 25 16:28:01 compute-0 nova_compute[254092]: 2025-11-25 16:28:01.343 254096 DEBUG oslo_concurrency.lockutils [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:28:01 compute-0 nova_compute[254092]: 2025-11-25 16:28:01.344 254096 DEBUG oslo_concurrency.lockutils [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:28:01 compute-0 nova_compute[254092]: 2025-11-25 16:28:01.431 254096 DEBUG oslo_concurrency.processutils [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:28:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:28:01 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/548129775' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:28:01 compute-0 nova_compute[254092]: 2025-11-25 16:28:01.865 254096 DEBUG oslo_concurrency.processutils [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:28:01 compute-0 nova_compute[254092]: 2025-11-25 16:28:01.870 254096 DEBUG nova.compute.provider_tree [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:28:01 compute-0 nova_compute[254092]: 2025-11-25 16:28:01.886 254096 DEBUG nova.scheduler.client.report [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:28:01 compute-0 nova_compute[254092]: 2025-11-25 16:28:01.910 254096 DEBUG oslo_concurrency.lockutils [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:28:01 compute-0 nova_compute[254092]: 2025-11-25 16:28:01.953 254096 INFO nova.scheduler.client.report [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Deleted allocations for instance 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2
Nov 25 16:28:02 compute-0 nova_compute[254092]: 2025-11-25 16:28:02.023 254096 DEBUG oslo_concurrency.lockutils [None req-2cdd050e-8842-4b84-a96e-ed446c21d6b6 f16e272341774153991e7ed856e34188 a9a4c2749782401c89e06948356b0e0a - - default default] Lock "5832b7f9-d051-4ddb-a8a0-920b2d3cfab2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:28:02 compute-0 nova_compute[254092]: 2025-11-25 16:28:02.071 254096 INFO nova.virt.libvirt.driver [None req-7d66a2ec-9532-49ce-8459-58cc5c2fe32c 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Snapshot image upload complete
Nov 25 16:28:02 compute-0 nova_compute[254092]: 2025-11-25 16:28:02.071 254096 INFO nova.compute.manager [None req-7d66a2ec-9532-49ce-8459-58cc5c2fe32c 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Took 4.87 seconds to snapshot the instance on the hypervisor.
Nov 25 16:28:02 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/548129775' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:28:02 compute-0 nova_compute[254092]: 2025-11-25 16:28:02.353 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:28:02 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:28:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1194: 321 pgs: 321 active+clean; 213 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 9.9 MiB/s rd, 5.2 MiB/s wr, 395 op/s
Nov 25 16:28:03 compute-0 ceph-mon[74985]: pgmap v1194: 321 pgs: 321 active+clean; 213 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 9.9 MiB/s rd, 5.2 MiB/s wr, 395 op/s
Nov 25 16:28:03 compute-0 nova_compute[254092]: 2025-11-25 16:28:03.534 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:28:04 compute-0 nova_compute[254092]: 2025-11-25 16:28:04.508 254096 DEBUG nova.compute.manager [None req-7d691279-cab6-41c8-ae2e-5f2e9a15a2a6 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:28:04 compute-0 nova_compute[254092]: 2025-11-25 16:28:04.570 254096 INFO nova.compute.manager [None req-7d691279-cab6-41c8-ae2e-5f2e9a15a2a6 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] instance snapshotting
Nov 25 16:28:04 compute-0 ovn_controller[153477]: 2025-11-25T16:28:04Z|00054|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 25 16:28:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1195: 321 pgs: 321 active+clean; 213 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 9.0 MiB/s rd, 4.7 MiB/s wr, 357 op/s
Nov 25 16:28:05 compute-0 ceph-mon[74985]: pgmap v1195: 321 pgs: 321 active+clean; 213 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 9.0 MiB/s rd, 4.7 MiB/s wr, 357 op/s
Nov 25 16:28:05 compute-0 nova_compute[254092]: 2025-11-25 16:28:05.266 254096 INFO nova.virt.libvirt.driver [None req-7d691279-cab6-41c8-ae2e-5f2e9a15a2a6 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Beginning live snapshot process
Nov 25 16:28:05 compute-0 nova_compute[254092]: 2025-11-25 16:28:05.403 254096 DEBUG nova.virt.libvirt.imagebackend [None req-7d691279-cab6-41c8-ae2e-5f2e9a15a2a6 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] No parent info for 8b512c8e-2281-41de-a668-eb983e174ba0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 25 16:28:05 compute-0 nova_compute[254092]: 2025-11-25 16:28:05.747 254096 DEBUG nova.storage.rbd_utils [None req-7d691279-cab6-41c8-ae2e-5f2e9a15a2a6 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] creating snapshot(12d7e5362aa64c338d0fcf8b1fbee135) on rbd image(2174ef15-55fa-4734-8cc2-89064853919b_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 16:28:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Nov 25 16:28:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Nov 25 16:28:06 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Nov 25 16:28:06 compute-0 nova_compute[254092]: 2025-11-25 16:28:06.136 254096 DEBUG nova.storage.rbd_utils [None req-7d691279-cab6-41c8-ae2e-5f2e9a15a2a6 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] cloning vms/2174ef15-55fa-4734-8cc2-89064853919b_disk@12d7e5362aa64c338d0fcf8b1fbee135 to images/55734109-7b58-4129-b190-ab05588f6d0a clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 16:28:06 compute-0 nova_compute[254092]: 2025-11-25 16:28:06.307 254096 DEBUG nova.storage.rbd_utils [None req-7d691279-cab6-41c8-ae2e-5f2e9a15a2a6 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] flattening images/55734109-7b58-4129-b190-ab05588f6d0a flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 16:28:06 compute-0 nova_compute[254092]: 2025-11-25 16:28:06.648 254096 DEBUG nova.storage.rbd_utils [None req-7d691279-cab6-41c8-ae2e-5f2e9a15a2a6 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] removing snapshot(12d7e5362aa64c338d0fcf8b1fbee135) on rbd image(2174ef15-55fa-4734-8cc2-89064853919b_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 16:28:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1197: 321 pgs: 321 active+clean; 213 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 8.3 MiB/s rd, 7.5 MiB/s wr, 465 op/s
Nov 25 16:28:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Nov 25 16:28:07 compute-0 ceph-mon[74985]: osdmap e143: 3 total, 3 up, 3 in
Nov 25 16:28:07 compute-0 ceph-mon[74985]: pgmap v1197: 321 pgs: 321 active+clean; 213 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 8.3 MiB/s rd, 7.5 MiB/s wr, 465 op/s
Nov 25 16:28:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Nov 25 16:28:07 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Nov 25 16:28:07 compute-0 nova_compute[254092]: 2025-11-25 16:28:07.100 254096 DEBUG nova.storage.rbd_utils [None req-7d691279-cab6-41c8-ae2e-5f2e9a15a2a6 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] creating snapshot(snap) on rbd image(55734109-7b58-4129-b190-ab05588f6d0a) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 16:28:07 compute-0 nova_compute[254092]: 2025-11-25 16:28:07.354 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:28:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:28:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Nov 25 16:28:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Nov 25 16:28:07 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Nov 25 16:28:08 compute-0 ceph-mon[74985]: osdmap e144: 3 total, 3 up, 3 in
Nov 25 16:28:08 compute-0 ceph-mon[74985]: osdmap e145: 3 total, 3 up, 3 in
Nov 25 16:28:08 compute-0 nova_compute[254092]: 2025-11-25 16:28:08.537 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:28:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1200: 321 pgs: 321 active+clean; 213 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 621 KiB/s rd, 4.3 MiB/s wr, 195 op/s
Nov 25 16:28:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:28:08.994 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:28:08 compute-0 nova_compute[254092]: 2025-11-25 16:28:08.995 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:28:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:28:08.995 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 16:28:09 compute-0 ceph-mon[74985]: pgmap v1200: 321 pgs: 321 active+clean; 213 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 621 KiB/s rd, 4.3 MiB/s wr, 195 op/s
Nov 25 16:28:09 compute-0 nova_compute[254092]: 2025-11-25 16:28:09.650 254096 INFO nova.virt.libvirt.driver [None req-7d691279-cab6-41c8-ae2e-5f2e9a15a2a6 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Snapshot image upload complete
Nov 25 16:28:09 compute-0 nova_compute[254092]: 2025-11-25 16:28:09.651 254096 INFO nova.compute.manager [None req-7d691279-cab6-41c8-ae2e-5f2e9a15a2a6 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Took 5.08 seconds to snapshot the instance on the hypervisor.
Nov 25 16:28:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:28:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:28:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:28:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:28:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:28:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:28:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1201: 321 pgs: 321 active+clean; 258 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 8.9 MiB/s wr, 414 op/s
Nov 25 16:28:11 compute-0 ceph-mon[74985]: pgmap v1201: 321 pgs: 321 active+clean; 258 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 8.9 MiB/s wr, 414 op/s
Nov 25 16:28:12 compute-0 nova_compute[254092]: 2025-11-25 16:28:12.356 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:28:12 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:28:12 compute-0 nova_compute[254092]: 2025-11-25 16:28:12.584 254096 DEBUG nova.compute.manager [None req-5d4da585-0d13-4b53-a248-26f199de4fa3 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:28:12 compute-0 nova_compute[254092]: 2025-11-25 16:28:12.627 254096 INFO nova.compute.manager [None req-5d4da585-0d13-4b53-a248-26f199de4fa3 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] instance snapshotting
Nov 25 16:28:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1202: 321 pgs: 321 active+clean; 292 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 6.9 MiB/s wr, 250 op/s
Nov 25 16:28:13 compute-0 nova_compute[254092]: 2025-11-25 16:28:13.063 254096 INFO nova.virt.libvirt.driver [None req-5d4da585-0d13-4b53-a248-26f199de4fa3 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Beginning live snapshot process
Nov 25 16:28:13 compute-0 ceph-mon[74985]: pgmap v1202: 321 pgs: 321 active+clean; 292 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 6.9 MiB/s wr, 250 op/s
Nov 25 16:28:13 compute-0 nova_compute[254092]: 2025-11-25 16:28:13.407 254096 DEBUG nova.virt.libvirt.imagebackend [None req-5d4da585-0d13-4b53-a248-26f199de4fa3 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] No parent info for 8b512c8e-2281-41de-a668-eb983e174ba0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 25 16:28:13 compute-0 nova_compute[254092]: 2025-11-25 16:28:13.538 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:28:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:28:13.600 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:28:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:28:13.601 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:28:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:28:13.601 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:28:13 compute-0 nova_compute[254092]: 2025-11-25 16:28:13.783 254096 DEBUG nova.storage.rbd_utils [None req-5d4da585-0d13-4b53-a248-26f199de4fa3 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] creating snapshot(0bd486026d0d4f82b447ccc4811ee656) on rbd image(12043b7f-9853-45a8-b963-ae96713754b4_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 16:28:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Nov 25 16:28:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Nov 25 16:28:14 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Nov 25 16:28:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1204: 321 pgs: 321 active+clean; 292 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 6.0 MiB/s wr, 219 op/s
Nov 25 16:28:14 compute-0 nova_compute[254092]: 2025-11-25 16:28:14.970 254096 DEBUG nova.storage.rbd_utils [None req-5d4da585-0d13-4b53-a248-26f199de4fa3 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] cloning vms/12043b7f-9853-45a8-b963-ae96713754b4_disk@0bd486026d0d4f82b447ccc4811ee656 to images/41202dc9-0782-4f14-be74-5217dd621459 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 16:28:15 compute-0 ceph-mon[74985]: osdmap e146: 3 total, 3 up, 3 in
Nov 25 16:28:15 compute-0 ceph-mon[74985]: pgmap v1204: 321 pgs: 321 active+clean; 292 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 6.0 MiB/s wr, 219 op/s
Nov 25 16:28:15 compute-0 nova_compute[254092]: 2025-11-25 16:28:15.479 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088080.478113, 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:28:15 compute-0 nova_compute[254092]: 2025-11-25 16:28:15.480 254096 INFO nova.compute.manager [-] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] VM Stopped (Lifecycle Event)
Nov 25 16:28:15 compute-0 nova_compute[254092]: 2025-11-25 16:28:15.502 254096 DEBUG nova.compute.manager [None req-eccaf6b2-7612-4009-8e39-ce5cbefb96f2 - - - - - -] [instance: 5832b7f9-d051-4ddb-a8a0-920b2d3cfab2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:28:15 compute-0 nova_compute[254092]: 2025-11-25 16:28:15.767 254096 DEBUG nova.storage.rbd_utils [None req-5d4da585-0d13-4b53-a248-26f199de4fa3 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] flattening images/41202dc9-0782-4f14-be74-5217dd621459 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 16:28:16 compute-0 nova_compute[254092]: 2025-11-25 16:28:16.143 254096 DEBUG nova.storage.rbd_utils [None req-5d4da585-0d13-4b53-a248-26f199de4fa3 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] removing snapshot(0bd486026d0d4f82b447ccc4811ee656) on rbd image(12043b7f-9853-45a8-b963-ae96713754b4_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 16:28:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Nov 25 16:28:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Nov 25 16:28:16 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Nov 25 16:28:16 compute-0 nova_compute[254092]: 2025-11-25 16:28:16.698 254096 DEBUG nova.storage.rbd_utils [None req-5d4da585-0d13-4b53-a248-26f199de4fa3 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] creating snapshot(snap) on rbd image(41202dc9-0782-4f14-be74-5217dd621459) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 16:28:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1206: 321 pgs: 321 active+clean; 300 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 5.6 MiB/s rd, 6.6 MiB/s wr, 238 op/s
Nov 25 16:28:17 compute-0 nova_compute[254092]: 2025-11-25 16:28:17.358 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:28:17 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:28:17 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Nov 25 16:28:17 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Nov 25 16:28:17 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Nov 25 16:28:18 compute-0 ceph-mon[74985]: osdmap e147: 3 total, 3 up, 3 in
Nov 25 16:28:18 compute-0 ceph-mon[74985]: pgmap v1206: 321 pgs: 321 active+clean; 300 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 5.6 MiB/s rd, 6.6 MiB/s wr, 238 op/s
Nov 25 16:28:18 compute-0 nova_compute[254092]: 2025-11-25 16:28:18.539 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:28:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1208: 321 pgs: 321 active+clean; 300 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 958 KiB/s wr, 33 op/s
Nov 25 16:28:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:28:18.997 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:28:19 compute-0 ceph-mon[74985]: osdmap e148: 3 total, 3 up, 3 in
Nov 25 16:28:19 compute-0 ceph-mon[74985]: pgmap v1208: 321 pgs: 321 active+clean; 300 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 958 KiB/s wr, 33 op/s
Nov 25 16:28:20 compute-0 nova_compute[254092]: 2025-11-25 16:28:20.490 254096 INFO nova.virt.libvirt.driver [None req-5d4da585-0d13-4b53-a248-26f199de4fa3 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Snapshot image upload complete
Nov 25 16:28:20 compute-0 nova_compute[254092]: 2025-11-25 16:28:20.491 254096 INFO nova.compute.manager [None req-5d4da585-0d13-4b53-a248-26f199de4fa3 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Took 7.86 seconds to snapshot the instance on the hypervisor.
Nov 25 16:28:20 compute-0 podman[280160]: 2025-11-25 16:28:20.680072068 +0000 UTC m=+0.084574371 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:28:20 compute-0 podman[280161]: 2025-11-25 16:28:20.693410001 +0000 UTC m=+0.094397549 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 25 16:28:20 compute-0 podman[280162]: 2025-11-25 16:28:20.716983711 +0000 UTC m=+0.122641356 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 16:28:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1209: 321 pgs: 321 active+clean; 330 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 7.0 MiB/s rd, 3.3 MiB/s wr, 109 op/s
Nov 25 16:28:20 compute-0 ceph-mon[74985]: pgmap v1209: 321 pgs: 321 active+clean; 330 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 7.0 MiB/s rd, 3.3 MiB/s wr, 109 op/s
Nov 25 16:28:22 compute-0 nova_compute[254092]: 2025-11-25 16:28:22.360 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:28:22 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:28:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1210: 321 pgs: 321 active+clean; 372 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 130 op/s
Nov 25 16:28:23 compute-0 ceph-mon[74985]: pgmap v1210: 321 pgs: 321 active+clean; 372 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 130 op/s
Nov 25 16:28:23 compute-0 nova_compute[254092]: 2025-11-25 16:28:23.585 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:28:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1211: 321 pgs: 321 active+clean; 372 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 5.0 MiB/s wr, 102 op/s
Nov 25 16:28:24 compute-0 ceph-mon[74985]: pgmap v1211: 321 pgs: 321 active+clean; 372 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 5.0 MiB/s wr, 102 op/s
Nov 25 16:28:25 compute-0 nova_compute[254092]: 2025-11-25 16:28:25.332 254096 DEBUG oslo_concurrency.lockutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Acquiring lock "d5372b02-1d93-4354-8f5c-c4228e8d3ec4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:28:25 compute-0 nova_compute[254092]: 2025-11-25 16:28:25.332 254096 DEBUG oslo_concurrency.lockutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Lock "d5372b02-1d93-4354-8f5c-c4228e8d3ec4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:28:25 compute-0 nova_compute[254092]: 2025-11-25 16:28:25.348 254096 DEBUG nova.compute.manager [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:28:25 compute-0 nova_compute[254092]: 2025-11-25 16:28:25.428 254096 DEBUG oslo_concurrency.lockutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:28:25 compute-0 nova_compute[254092]: 2025-11-25 16:28:25.429 254096 DEBUG oslo_concurrency.lockutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:28:25 compute-0 nova_compute[254092]: 2025-11-25 16:28:25.436 254096 DEBUG nova.virt.hardware [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:28:25 compute-0 nova_compute[254092]: 2025-11-25 16:28:25.437 254096 INFO nova.compute.claims [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:28:25 compute-0 nova_compute[254092]: 2025-11-25 16:28:25.602 254096 DEBUG oslo_concurrency.processutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:28:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:28:26 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3209411780' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:28:26 compute-0 nova_compute[254092]: 2025-11-25 16:28:26.066 254096 DEBUG oslo_concurrency.processutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:28:26 compute-0 nova_compute[254092]: 2025-11-25 16:28:26.076 254096 DEBUG nova.compute.provider_tree [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:28:26 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3209411780' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:28:26 compute-0 nova_compute[254092]: 2025-11-25 16:28:26.107 254096 DEBUG nova.scheduler.client.report [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:28:26 compute-0 nova_compute[254092]: 2025-11-25 16:28:26.126 254096 DEBUG oslo_concurrency.lockutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:28:26 compute-0 nova_compute[254092]: 2025-11-25 16:28:26.127 254096 DEBUG nova.compute.manager [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:28:26 compute-0 nova_compute[254092]: 2025-11-25 16:28:26.180 254096 DEBUG nova.compute.manager [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:28:26 compute-0 nova_compute[254092]: 2025-11-25 16:28:26.181 254096 DEBUG nova.network.neutron [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:28:26 compute-0 nova_compute[254092]: 2025-11-25 16:28:26.198 254096 INFO nova.virt.libvirt.driver [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:28:26 compute-0 nova_compute[254092]: 2025-11-25 16:28:26.220 254096 DEBUG nova.compute.manager [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:28:26 compute-0 nova_compute[254092]: 2025-11-25 16:28:26.337 254096 DEBUG nova.compute.manager [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:28:26 compute-0 nova_compute[254092]: 2025-11-25 16:28:26.340 254096 DEBUG nova.virt.libvirt.driver [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:28:26 compute-0 nova_compute[254092]: 2025-11-25 16:28:26.341 254096 INFO nova.virt.libvirt.driver [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Creating image(s)
Nov 25 16:28:26 compute-0 nova_compute[254092]: 2025-11-25 16:28:26.366 254096 DEBUG nova.storage.rbd_utils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] rbd image d5372b02-1d93-4354-8f5c-c4228e8d3ec4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:28:26 compute-0 nova_compute[254092]: 2025-11-25 16:28:26.393 254096 DEBUG nova.storage.rbd_utils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] rbd image d5372b02-1d93-4354-8f5c-c4228e8d3ec4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:28:26 compute-0 nova_compute[254092]: 2025-11-25 16:28:26.418 254096 DEBUG nova.storage.rbd_utils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] rbd image d5372b02-1d93-4354-8f5c-c4228e8d3ec4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:28:26 compute-0 nova_compute[254092]: 2025-11-25 16:28:26.422 254096 DEBUG oslo_concurrency.processutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:28:26 compute-0 nova_compute[254092]: 2025-11-25 16:28:26.498 254096 DEBUG oslo_concurrency.processutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:28:26 compute-0 nova_compute[254092]: 2025-11-25 16:28:26.499 254096 DEBUG oslo_concurrency.lockutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:28:26 compute-0 nova_compute[254092]: 2025-11-25 16:28:26.499 254096 DEBUG oslo_concurrency.lockutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:28:26 compute-0 nova_compute[254092]: 2025-11-25 16:28:26.500 254096 DEBUG oslo_concurrency.lockutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:28:26 compute-0 nova_compute[254092]: 2025-11-25 16:28:26.516 254096 DEBUG nova.storage.rbd_utils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] rbd image d5372b02-1d93-4354-8f5c-c4228e8d3ec4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:28:26 compute-0 nova_compute[254092]: 2025-11-25 16:28:26.520 254096 DEBUG oslo_concurrency.processutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 d5372b02-1d93-4354-8f5c-c4228e8d3ec4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:28:26 compute-0 nova_compute[254092]: 2025-11-25 16:28:26.811 254096 DEBUG oslo_concurrency.processutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 d5372b02-1d93-4354-8f5c-c4228e8d3ec4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.291s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:28:26 compute-0 nova_compute[254092]: 2025-11-25 16:28:26.862 254096 DEBUG nova.storage.rbd_utils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] resizing rbd image d5372b02-1d93-4354-8f5c-c4228e8d3ec4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:28:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1212: 321 pgs: 321 active+clean; 372 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.1 MiB/s wr, 84 op/s
Nov 25 16:28:26 compute-0 nova_compute[254092]: 2025-11-25 16:28:26.937 254096 DEBUG nova.objects.instance [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Lazy-loading 'migration_context' on Instance uuid d5372b02-1d93-4354-8f5c-c4228e8d3ec4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:28:26 compute-0 nova_compute[254092]: 2025-11-25 16:28:26.948 254096 DEBUG nova.virt.libvirt.driver [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:28:26 compute-0 nova_compute[254092]: 2025-11-25 16:28:26.948 254096 DEBUG nova.virt.libvirt.driver [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Ensure instance console log exists: /var/lib/nova/instances/d5372b02-1d93-4354-8f5c-c4228e8d3ec4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:28:26 compute-0 nova_compute[254092]: 2025-11-25 16:28:26.949 254096 DEBUG oslo_concurrency.lockutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:28:26 compute-0 nova_compute[254092]: 2025-11-25 16:28:26.949 254096 DEBUG oslo_concurrency.lockutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:28:26 compute-0 nova_compute[254092]: 2025-11-25 16:28:26.949 254096 DEBUG oslo_concurrency.lockutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:28:27 compute-0 nova_compute[254092]: 2025-11-25 16:28:27.031 254096 DEBUG nova.network.neutron [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 25 16:28:27 compute-0 nova_compute[254092]: 2025-11-25 16:28:27.032 254096 DEBUG nova.compute.manager [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:28:27 compute-0 nova_compute[254092]: 2025-11-25 16:28:27.035 254096 DEBUG nova.virt.libvirt.driver [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:28:27 compute-0 nova_compute[254092]: 2025-11-25 16:28:27.041 254096 WARNING nova.virt.libvirt.driver [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:28:27 compute-0 nova_compute[254092]: 2025-11-25 16:28:27.046 254096 DEBUG nova.virt.libvirt.host [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:28:27 compute-0 nova_compute[254092]: 2025-11-25 16:28:27.047 254096 DEBUG nova.virt.libvirt.host [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:28:27 compute-0 nova_compute[254092]: 2025-11-25 16:28:27.050 254096 DEBUG nova.virt.libvirt.host [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:28:27 compute-0 nova_compute[254092]: 2025-11-25 16:28:27.051 254096 DEBUG nova.virt.libvirt.host [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:28:27 compute-0 nova_compute[254092]: 2025-11-25 16:28:27.052 254096 DEBUG nova.virt.libvirt.driver [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:28:27 compute-0 nova_compute[254092]: 2025-11-25 16:28:27.052 254096 DEBUG nova.virt.hardware [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:28:27 compute-0 nova_compute[254092]: 2025-11-25 16:28:27.053 254096 DEBUG nova.virt.hardware [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:28:27 compute-0 nova_compute[254092]: 2025-11-25 16:28:27.054 254096 DEBUG nova.virt.hardware [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:28:27 compute-0 nova_compute[254092]: 2025-11-25 16:28:27.054 254096 DEBUG nova.virt.hardware [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:28:27 compute-0 nova_compute[254092]: 2025-11-25 16:28:27.055 254096 DEBUG nova.virt.hardware [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:28:27 compute-0 nova_compute[254092]: 2025-11-25 16:28:27.055 254096 DEBUG nova.virt.hardware [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:28:27 compute-0 nova_compute[254092]: 2025-11-25 16:28:27.056 254096 DEBUG nova.virt.hardware [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:28:27 compute-0 nova_compute[254092]: 2025-11-25 16:28:27.056 254096 DEBUG nova.virt.hardware [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:28:27 compute-0 nova_compute[254092]: 2025-11-25 16:28:27.056 254096 DEBUG nova.virt.hardware [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:28:27 compute-0 nova_compute[254092]: 2025-11-25 16:28:27.057 254096 DEBUG nova.virt.hardware [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:28:27 compute-0 nova_compute[254092]: 2025-11-25 16:28:27.057 254096 DEBUG nova.virt.hardware [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:28:27 compute-0 nova_compute[254092]: 2025-11-25 16:28:27.062 254096 DEBUG oslo_concurrency.processutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:28:27 compute-0 ceph-mon[74985]: pgmap v1212: 321 pgs: 321 active+clean; 372 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.1 MiB/s wr, 84 op/s
Nov 25 16:28:27 compute-0 nova_compute[254092]: 2025-11-25 16:28:27.362 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:28:27 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:28:27 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4150345003' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:28:27 compute-0 nova_compute[254092]: 2025-11-25 16:28:27.519 254096 DEBUG oslo_concurrency.processutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:28:27 compute-0 nova_compute[254092]: 2025-11-25 16:28:27.546 254096 DEBUG nova.storage.rbd_utils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] rbd image d5372b02-1d93-4354-8f5c-c4228e8d3ec4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:28:27 compute-0 nova_compute[254092]: 2025-11-25 16:28:27.549 254096 DEBUG oslo_concurrency.processutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:28:27 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:28:27 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Nov 25 16:28:27 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Nov 25 16:28:27 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Nov 25 16:28:27 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:28:27 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1039421316' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:28:27 compute-0 nova_compute[254092]: 2025-11-25 16:28:27.967 254096 DEBUG oslo_concurrency.processutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:28:27 compute-0 nova_compute[254092]: 2025-11-25 16:28:27.970 254096 DEBUG nova.objects.instance [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Lazy-loading 'pci_devices' on Instance uuid d5372b02-1d93-4354-8f5c-c4228e8d3ec4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:28:27 compute-0 nova_compute[254092]: 2025-11-25 16:28:27.995 254096 DEBUG nova.virt.libvirt.driver [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:28:27 compute-0 nova_compute[254092]:   <uuid>d5372b02-1d93-4354-8f5c-c4228e8d3ec4</uuid>
Nov 25 16:28:27 compute-0 nova_compute[254092]:   <name>instance-00000011</name>
Nov 25 16:28:27 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:28:27 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:28:27 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:28:27 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:       <nova:name>tempest-ServerDiagnosticsTest-server-1671210126</nova:name>
Nov 25 16:28:27 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:28:27</nova:creationTime>
Nov 25 16:28:27 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:28:27 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:28:27 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:28:27 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:28:27 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:28:27 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:28:27 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:28:27 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:28:27 compute-0 nova_compute[254092]:         <nova:user uuid="e3ab5ee2932743f4acd4c73f7a5aa7d3">tempest-ServerDiagnosticsTest-1741880264-project-member</nova:user>
Nov 25 16:28:27 compute-0 nova_compute[254092]:         <nova:project uuid="3c8a4dd83dcd4ef1a79608794e3620d2">tempest-ServerDiagnosticsTest-1741880264</nova:project>
Nov 25 16:28:27 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:28:27 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:       <nova:ports/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:28:27 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:28:27 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:28:27 compute-0 nova_compute[254092]:     <system>
Nov 25 16:28:27 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:28:27 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:28:27 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:28:27 compute-0 nova_compute[254092]:       <entry name="serial">d5372b02-1d93-4354-8f5c-c4228e8d3ec4</entry>
Nov 25 16:28:27 compute-0 nova_compute[254092]:       <entry name="uuid">d5372b02-1d93-4354-8f5c-c4228e8d3ec4</entry>
Nov 25 16:28:27 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     </system>
Nov 25 16:28:27 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:28:27 compute-0 nova_compute[254092]:   <os>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:   </os>
Nov 25 16:28:27 compute-0 nova_compute[254092]:   <features>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:   </features>
Nov 25 16:28:27 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:28:27 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:28:27 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:28:27 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:28:27 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:28:27 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/d5372b02-1d93-4354-8f5c-c4228e8d3ec4_disk">
Nov 25 16:28:27 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:       </source>
Nov 25 16:28:27 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:28:27 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:28:27 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:28:27 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/d5372b02-1d93-4354-8f5c-c4228e8d3ec4_disk.config">
Nov 25 16:28:27 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:       </source>
Nov 25 16:28:27 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:28:27 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:28:27 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:28:27 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/d5372b02-1d93-4354-8f5c-c4228e8d3ec4/console.log" append="off"/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     <video>
Nov 25 16:28:27 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     </video>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:28:27 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:28:27 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:28:27 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:28:27 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:28:27 compute-0 nova_compute[254092]: </domain>
Nov 25 16:28:27 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:28:28 compute-0 nova_compute[254092]: 2025-11-25 16:28:28.051 254096 DEBUG nova.virt.libvirt.driver [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:28:28 compute-0 nova_compute[254092]: 2025-11-25 16:28:28.052 254096 DEBUG nova.virt.libvirt.driver [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:28:28 compute-0 nova_compute[254092]: 2025-11-25 16:28:28.052 254096 INFO nova.virt.libvirt.driver [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Using config drive
Nov 25 16:28:28 compute-0 nova_compute[254092]: 2025-11-25 16:28:28.104 254096 DEBUG nova.storage.rbd_utils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] rbd image d5372b02-1d93-4354-8f5c-c4228e8d3ec4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:28:28 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4150345003' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:28:28 compute-0 ceph-mon[74985]: osdmap e149: 3 total, 3 up, 3 in
Nov 25 16:28:28 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1039421316' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:28:28 compute-0 nova_compute[254092]: 2025-11-25 16:28:28.686 254096 INFO nova.virt.libvirt.driver [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Creating config drive at /var/lib/nova/instances/d5372b02-1d93-4354-8f5c-c4228e8d3ec4/disk.config
Nov 25 16:28:28 compute-0 nova_compute[254092]: 2025-11-25 16:28:28.691 254096 DEBUG oslo_concurrency.processutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d5372b02-1d93-4354-8f5c-c4228e8d3ec4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpydrrzxyy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:28:28 compute-0 nova_compute[254092]: 2025-11-25 16:28:28.723 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:28:28 compute-0 nova_compute[254092]: 2025-11-25 16:28:28.838 254096 DEBUG oslo_concurrency.processutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d5372b02-1d93-4354-8f5c-c4228e8d3ec4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpydrrzxyy" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:28:28 compute-0 nova_compute[254092]: 2025-11-25 16:28:28.861 254096 DEBUG nova.storage.rbd_utils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] rbd image d5372b02-1d93-4354-8f5c-c4228e8d3ec4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:28:28 compute-0 nova_compute[254092]: 2025-11-25 16:28:28.864 254096 DEBUG oslo_concurrency.processutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d5372b02-1d93-4354-8f5c-c4228e8d3ec4/disk.config d5372b02-1d93-4354-8f5c-c4228e8d3ec4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:28:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1214: 321 pgs: 321 active+clean; 372 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.1 MiB/s wr, 84 op/s
Nov 25 16:28:29 compute-0 nova_compute[254092]: 2025-11-25 16:28:29.407 254096 DEBUG oslo_concurrency.processutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d5372b02-1d93-4354-8f5c-c4228e8d3ec4/disk.config d5372b02-1d93-4354-8f5c-c4228e8d3ec4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:28:29 compute-0 nova_compute[254092]: 2025-11-25 16:28:29.407 254096 INFO nova.virt.libvirt.driver [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Deleting local config drive /var/lib/nova/instances/d5372b02-1d93-4354-8f5c-c4228e8d3ec4/disk.config because it was imported into RBD.
Nov 25 16:28:29 compute-0 ceph-mon[74985]: pgmap v1214: 321 pgs: 321 active+clean; 372 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.1 MiB/s wr, 84 op/s
Nov 25 16:28:29 compute-0 systemd-machined[216343]: New machine qemu-19-instance-00000011.
Nov 25 16:28:29 compute-0 systemd[1]: Started Virtual Machine qemu-19-instance-00000011.
Nov 25 16:28:30 compute-0 nova_compute[254092]: 2025-11-25 16:28:30.045 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088110.0448537, d5372b02-1d93-4354-8f5c-c4228e8d3ec4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:28:30 compute-0 nova_compute[254092]: 2025-11-25 16:28:30.046 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] VM Resumed (Lifecycle Event)
Nov 25 16:28:30 compute-0 nova_compute[254092]: 2025-11-25 16:28:30.049 254096 DEBUG nova.compute.manager [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:28:30 compute-0 nova_compute[254092]: 2025-11-25 16:28:30.049 254096 DEBUG nova.virt.libvirt.driver [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:28:30 compute-0 nova_compute[254092]: 2025-11-25 16:28:30.053 254096 INFO nova.virt.libvirt.driver [-] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Instance spawned successfully.
Nov 25 16:28:30 compute-0 nova_compute[254092]: 2025-11-25 16:28:30.054 254096 DEBUG nova.virt.libvirt.driver [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:28:30 compute-0 nova_compute[254092]: 2025-11-25 16:28:30.081 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:28:30 compute-0 nova_compute[254092]: 2025-11-25 16:28:30.090 254096 DEBUG nova.virt.libvirt.driver [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:28:30 compute-0 nova_compute[254092]: 2025-11-25 16:28:30.090 254096 DEBUG nova.virt.libvirt.driver [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:28:30 compute-0 nova_compute[254092]: 2025-11-25 16:28:30.091 254096 DEBUG nova.virt.libvirt.driver [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:28:30 compute-0 nova_compute[254092]: 2025-11-25 16:28:30.091 254096 DEBUG nova.virt.libvirt.driver [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:28:30 compute-0 nova_compute[254092]: 2025-11-25 16:28:30.092 254096 DEBUG nova.virt.libvirt.driver [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:28:30 compute-0 nova_compute[254092]: 2025-11-25 16:28:30.092 254096 DEBUG nova.virt.libvirt.driver [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:28:30 compute-0 nova_compute[254092]: 2025-11-25 16:28:30.095 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:28:30 compute-0 nova_compute[254092]: 2025-11-25 16:28:30.132 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:28:30 compute-0 nova_compute[254092]: 2025-11-25 16:28:30.133 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088110.0457306, d5372b02-1d93-4354-8f5c-c4228e8d3ec4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:28:30 compute-0 nova_compute[254092]: 2025-11-25 16:28:30.133 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] VM Started (Lifecycle Event)
Nov 25 16:28:30 compute-0 nova_compute[254092]: 2025-11-25 16:28:30.149 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:28:30 compute-0 nova_compute[254092]: 2025-11-25 16:28:30.152 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:28:30 compute-0 nova_compute[254092]: 2025-11-25 16:28:30.186 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:28:30 compute-0 nova_compute[254092]: 2025-11-25 16:28:30.222 254096 INFO nova.compute.manager [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Took 3.88 seconds to spawn the instance on the hypervisor.
Nov 25 16:28:30 compute-0 nova_compute[254092]: 2025-11-25 16:28:30.222 254096 DEBUG nova.compute.manager [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:28:30 compute-0 nova_compute[254092]: 2025-11-25 16:28:30.285 254096 INFO nova.compute.manager [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Took 4.89 seconds to build instance.
Nov 25 16:28:30 compute-0 nova_compute[254092]: 2025-11-25 16:28:30.319 254096 DEBUG oslo_concurrency.lockutils [None req-e34b5033-6355-4c84-94e1-ea91b9e66d3c e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Lock "d5372b02-1d93-4354-8f5c-c4228e8d3ec4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.986s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:28:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1215: 321 pgs: 321 active+clean; 402 MiB data, 458 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 4.2 MiB/s wr, 39 op/s
Nov 25 16:28:30 compute-0 sudo[280588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:28:30 compute-0 sudo[280588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:28:30 compute-0 sudo[280588]: pam_unix(sudo:session): session closed for user root
Nov 25 16:28:30 compute-0 sudo[280613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:28:30 compute-0 sudo[280613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:28:30 compute-0 sudo[280613]: pam_unix(sudo:session): session closed for user root
Nov 25 16:28:31 compute-0 sudo[280638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:28:31 compute-0 ceph-mon[74985]: pgmap v1215: 321 pgs: 321 active+clean; 402 MiB data, 458 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 4.2 MiB/s wr, 39 op/s
Nov 25 16:28:31 compute-0 sudo[280638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:28:31 compute-0 sudo[280638]: pam_unix(sudo:session): session closed for user root
Nov 25 16:28:31 compute-0 sudo[280663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 16:28:31 compute-0 sudo[280663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:28:31 compute-0 sudo[280663]: pam_unix(sudo:session): session closed for user root
Nov 25 16:28:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:28:31 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:28:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 16:28:31 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:28:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 16:28:31 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:28:31 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 61ccdb25-ef7a-4f0b-9b60-e1f2fad8730c does not exist
Nov 25 16:28:31 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 22b699b9-80aa-43a8-9872-d83cdbe2fe10 does not exist
Nov 25 16:28:31 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 1ca48de6-a556-4f13-a72d-813baf0b05d1 does not exist
Nov 25 16:28:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 16:28:31 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:28:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 16:28:31 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:28:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:28:31 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:28:31 compute-0 sudo[280720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:28:31 compute-0 sudo[280720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:28:31 compute-0 sudo[280720]: pam_unix(sudo:session): session closed for user root
Nov 25 16:28:31 compute-0 nova_compute[254092]: 2025-11-25 16:28:31.635 254096 DEBUG nova.compute.manager [None req-f60a9db8-2af0-4352-b8fe-ea537001bf50 fc63e752c50f4250ae6c27d066bc5d5d 2d249d5c3f4a443d9512925398689e22 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:28:31 compute-0 nova_compute[254092]: 2025-11-25 16:28:31.638 254096 INFO nova.compute.manager [None req-f60a9db8-2af0-4352-b8fe-ea537001bf50 fc63e752c50f4250ae6c27d066bc5d5d 2d249d5c3f4a443d9512925398689e22 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Retrieving diagnostics
Nov 25 16:28:31 compute-0 sudo[280745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:28:31 compute-0 sudo[280745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:28:31 compute-0 sudo[280745]: pam_unix(sudo:session): session closed for user root
Nov 25 16:28:31 compute-0 sudo[280770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:28:31 compute-0 sudo[280770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:28:31 compute-0 sudo[280770]: pam_unix(sudo:session): session closed for user root
Nov 25 16:28:31 compute-0 sudo[280795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 16:28:31 compute-0 sudo[280795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:28:32 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:28:32 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:28:32 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:28:32 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:28:32 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:28:32 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:28:32 compute-0 podman[280860]: 2025-11-25 16:28:32.131267279 +0000 UTC m=+0.022775270 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:28:32 compute-0 nova_compute[254092]: 2025-11-25 16:28:32.236 254096 DEBUG oslo_concurrency.lockutils [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Acquiring lock "d5372b02-1d93-4354-8f5c-c4228e8d3ec4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:28:32 compute-0 nova_compute[254092]: 2025-11-25 16:28:32.237 254096 DEBUG oslo_concurrency.lockutils [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Lock "d5372b02-1d93-4354-8f5c-c4228e8d3ec4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:28:32 compute-0 nova_compute[254092]: 2025-11-25 16:28:32.237 254096 DEBUG oslo_concurrency.lockutils [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Acquiring lock "d5372b02-1d93-4354-8f5c-c4228e8d3ec4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:28:32 compute-0 nova_compute[254092]: 2025-11-25 16:28:32.238 254096 DEBUG oslo_concurrency.lockutils [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Lock "d5372b02-1d93-4354-8f5c-c4228e8d3ec4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:28:32 compute-0 nova_compute[254092]: 2025-11-25 16:28:32.238 254096 DEBUG oslo_concurrency.lockutils [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Lock "d5372b02-1d93-4354-8f5c-c4228e8d3ec4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:28:32 compute-0 nova_compute[254092]: 2025-11-25 16:28:32.239 254096 INFO nova.compute.manager [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Terminating instance
Nov 25 16:28:32 compute-0 nova_compute[254092]: 2025-11-25 16:28:32.240 254096 DEBUG oslo_concurrency.lockutils [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Acquiring lock "refresh_cache-d5372b02-1d93-4354-8f5c-c4228e8d3ec4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:28:32 compute-0 nova_compute[254092]: 2025-11-25 16:28:32.240 254096 DEBUG oslo_concurrency.lockutils [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Acquired lock "refresh_cache-d5372b02-1d93-4354-8f5c-c4228e8d3ec4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:28:32 compute-0 nova_compute[254092]: 2025-11-25 16:28:32.240 254096 DEBUG nova.network.neutron [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:28:32 compute-0 podman[280860]: 2025-11-25 16:28:32.311554662 +0000 UTC m=+0.203062633 container create bff974849a850dde5a0b723a3089c4decbdde2f562c507abad247867a721345f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mahavira, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:28:32 compute-0 nova_compute[254092]: 2025-11-25 16:28:32.376 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:28:32 compute-0 systemd[1]: Started libpod-conmon-bff974849a850dde5a0b723a3089c4decbdde2f562c507abad247867a721345f.scope.
Nov 25 16:28:32 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:28:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Nov 25 16:28:32 compute-0 podman[280860]: 2025-11-25 16:28:32.583166179 +0000 UTC m=+0.474674180 container init bff974849a850dde5a0b723a3089c4decbdde2f562c507abad247867a721345f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mahavira, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:28:32 compute-0 podman[280860]: 2025-11-25 16:28:32.594738123 +0000 UTC m=+0.486246094 container start bff974849a850dde5a0b723a3089c4decbdde2f562c507abad247867a721345f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mahavira, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 16:28:32 compute-0 amazing_mahavira[280877]: 167 167
Nov 25 16:28:32 compute-0 systemd[1]: libpod-bff974849a850dde5a0b723a3089c4decbdde2f562c507abad247867a721345f.scope: Deactivated successfully.
Nov 25 16:28:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Nov 25 16:28:32 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Nov 25 16:28:32 compute-0 podman[280860]: 2025-11-25 16:28:32.638239917 +0000 UTC m=+0.529747908 container attach bff974849a850dde5a0b723a3089c4decbdde2f562c507abad247867a721345f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 16:28:32 compute-0 podman[280860]: 2025-11-25 16:28:32.640157158 +0000 UTC m=+0.531665129 container died bff974849a850dde5a0b723a3089c4decbdde2f562c507abad247867a721345f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mahavira, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:28:32 compute-0 nova_compute[254092]: 2025-11-25 16:28:32.640 254096 DEBUG nova.network.neutron [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:28:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:28:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1217: 321 pgs: 321 active+clean; 418 MiB data, 463 MiB used, 60 GiB / 60 GiB avail; 382 KiB/s rd, 2.7 MiB/s wr, 68 op/s
Nov 25 16:28:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c1592cde8910527eeffe88100e252768d08b73e864999e837b827ed08dbbef4-merged.mount: Deactivated successfully.
Nov 25 16:28:33 compute-0 nova_compute[254092]: 2025-11-25 16:28:33.019 254096 DEBUG nova.network.neutron [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:28:33 compute-0 nova_compute[254092]: 2025-11-25 16:28:33.031 254096 DEBUG oslo_concurrency.lockutils [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Releasing lock "refresh_cache-d5372b02-1d93-4354-8f5c-c4228e8d3ec4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:28:33 compute-0 nova_compute[254092]: 2025-11-25 16:28:33.032 254096 DEBUG nova.compute.manager [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:28:33 compute-0 podman[280860]: 2025-11-25 16:28:33.432400904 +0000 UTC m=+1.323908875 container remove bff974849a850dde5a0b723a3089c4decbdde2f562c507abad247867a721345f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 16:28:33 compute-0 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000011.scope: Deactivated successfully.
Nov 25 16:28:33 compute-0 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000011.scope: Consumed 3.524s CPU time.
Nov 25 16:28:33 compute-0 systemd-machined[216343]: Machine qemu-19-instance-00000011 terminated.
Nov 25 16:28:33 compute-0 systemd[1]: libpod-conmon-bff974849a850dde5a0b723a3089c4decbdde2f562c507abad247867a721345f.scope: Deactivated successfully.
Nov 25 16:28:33 compute-0 nova_compute[254092]: 2025-11-25 16:28:33.453 254096 INFO nova.virt.libvirt.driver [-] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Instance destroyed successfully.
Nov 25 16:28:33 compute-0 nova_compute[254092]: 2025-11-25 16:28:33.453 254096 DEBUG nova.objects.instance [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Lazy-loading 'resources' on Instance uuid d5372b02-1d93-4354-8f5c-c4228e8d3ec4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:28:33 compute-0 podman[280925]: 2025-11-25 16:28:33.571710583 +0000 UTC m=+0.023754998 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:28:33 compute-0 nova_compute[254092]: 2025-11-25 16:28:33.743 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:28:33 compute-0 podman[280925]: 2025-11-25 16:28:33.766766337 +0000 UTC m=+0.218810732 container create f2802f38e3a5934e4e241b7d4515b9b77a95dcf6e3c6787d60e311c88cf4a978 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:28:33 compute-0 ceph-mon[74985]: osdmap e150: 3 total, 3 up, 3 in
Nov 25 16:28:33 compute-0 ceph-mon[74985]: pgmap v1217: 321 pgs: 321 active+clean; 418 MiB data, 463 MiB used, 60 GiB / 60 GiB avail; 382 KiB/s rd, 2.7 MiB/s wr, 68 op/s
Nov 25 16:28:33 compute-0 systemd[1]: Started libpod-conmon-f2802f38e3a5934e4e241b7d4515b9b77a95dcf6e3c6787d60e311c88cf4a978.scope.
Nov 25 16:28:34 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:28:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dfa17691af626a1dbca62d99c7347948d41daed9b04ecf1bdaba38ff44e1066/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:28:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dfa17691af626a1dbca62d99c7347948d41daed9b04ecf1bdaba38ff44e1066/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:28:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dfa17691af626a1dbca62d99c7347948d41daed9b04ecf1bdaba38ff44e1066/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:28:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dfa17691af626a1dbca62d99c7347948d41daed9b04ecf1bdaba38ff44e1066/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:28:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dfa17691af626a1dbca62d99c7347948d41daed9b04ecf1bdaba38ff44e1066/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 16:28:34 compute-0 podman[280925]: 2025-11-25 16:28:34.135835834 +0000 UTC m=+0.587880269 container init f2802f38e3a5934e4e241b7d4515b9b77a95dcf6e3c6787d60e311c88cf4a978 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 16:28:34 compute-0 podman[280925]: 2025-11-25 16:28:34.143342269 +0000 UTC m=+0.595386664 container start f2802f38e3a5934e4e241b7d4515b9b77a95dcf6e3c6787d60e311c88cf4a978 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_archimedes, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:28:34 compute-0 podman[280925]: 2025-11-25 16:28:34.149702691 +0000 UTC m=+0.601747106 container attach f2802f38e3a5934e4e241b7d4515b9b77a95dcf6e3c6787d60e311c88cf4a978 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:28:34 compute-0 nova_compute[254092]: 2025-11-25 16:28:34.462 254096 INFO nova.virt.libvirt.driver [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Deleting instance files /var/lib/nova/instances/d5372b02-1d93-4354-8f5c-c4228e8d3ec4_del
Nov 25 16:28:34 compute-0 nova_compute[254092]: 2025-11-25 16:28:34.465 254096 INFO nova.virt.libvirt.driver [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Deletion of /var/lib/nova/instances/d5372b02-1d93-4354-8f5c-c4228e8d3ec4_del complete
Nov 25 16:28:34 compute-0 nova_compute[254092]: 2025-11-25 16:28:34.526 254096 INFO nova.compute.manager [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Took 1.49 seconds to destroy the instance on the hypervisor.
Nov 25 16:28:34 compute-0 nova_compute[254092]: 2025-11-25 16:28:34.527 254096 DEBUG oslo.service.loopingcall [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:28:34 compute-0 nova_compute[254092]: 2025-11-25 16:28:34.527 254096 DEBUG nova.compute.manager [-] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:28:34 compute-0 nova_compute[254092]: 2025-11-25 16:28:34.528 254096 DEBUG nova.network.neutron [-] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:28:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1218: 321 pgs: 321 active+clean; 418 MiB data, 463 MiB used, 60 GiB / 60 GiB avail; 382 KiB/s rd, 2.7 MiB/s wr, 67 op/s
Nov 25 16:28:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Nov 25 16:28:35 compute-0 ceph-mon[74985]: pgmap v1218: 321 pgs: 321 active+clean; 418 MiB data, 463 MiB used, 60 GiB / 60 GiB avail; 382 KiB/s rd, 2.7 MiB/s wr, 67 op/s
Nov 25 16:28:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Nov 25 16:28:35 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Nov 25 16:28:35 compute-0 focused_archimedes[280942]: --> passed data devices: 0 physical, 3 LVM
Nov 25 16:28:35 compute-0 focused_archimedes[280942]: --> relative data size: 1.0
Nov 25 16:28:35 compute-0 focused_archimedes[280942]: --> All data devices are unavailable
Nov 25 16:28:35 compute-0 systemd[1]: libpod-f2802f38e3a5934e4e241b7d4515b9b77a95dcf6e3c6787d60e311c88cf4a978.scope: Deactivated successfully.
Nov 25 16:28:35 compute-0 podman[280925]: 2025-11-25 16:28:35.190132777 +0000 UTC m=+1.642177172 container died f2802f38e3a5934e4e241b7d4515b9b77a95dcf6e3c6787d60e311c88cf4a978 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:28:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-0dfa17691af626a1dbca62d99c7347948d41daed9b04ecf1bdaba38ff44e1066-merged.mount: Deactivated successfully.
Nov 25 16:28:35 compute-0 podman[280925]: 2025-11-25 16:28:35.257721775 +0000 UTC m=+1.709766160 container remove f2802f38e3a5934e4e241b7d4515b9b77a95dcf6e3c6787d60e311c88cf4a978 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:28:35 compute-0 systemd[1]: libpod-conmon-f2802f38e3a5934e4e241b7d4515b9b77a95dcf6e3c6787d60e311c88cf4a978.scope: Deactivated successfully.
Nov 25 16:28:35 compute-0 sudo[280795]: pam_unix(sudo:session): session closed for user root
Nov 25 16:28:35 compute-0 nova_compute[254092]: 2025-11-25 16:28:35.322 254096 DEBUG nova.network.neutron [-] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:28:35 compute-0 nova_compute[254092]: 2025-11-25 16:28:35.339 254096 DEBUG nova.network.neutron [-] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:28:35 compute-0 sudo[280986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:28:35 compute-0 nova_compute[254092]: 2025-11-25 16:28:35.356 254096 INFO nova.compute.manager [-] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Took 0.83 seconds to deallocate network for instance.
Nov 25 16:28:35 compute-0 sudo[280986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:28:35 compute-0 sudo[280986]: pam_unix(sudo:session): session closed for user root
Nov 25 16:28:35 compute-0 nova_compute[254092]: 2025-11-25 16:28:35.398 254096 DEBUG oslo_concurrency.lockutils [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:28:35 compute-0 nova_compute[254092]: 2025-11-25 16:28:35.399 254096 DEBUG oslo_concurrency.lockutils [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:28:35 compute-0 sudo[281011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:28:35 compute-0 sudo[281011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:28:35 compute-0 sudo[281011]: pam_unix(sudo:session): session closed for user root
Nov 25 16:28:35 compute-0 nova_compute[254092]: 2025-11-25 16:28:35.480 254096 DEBUG oslo_concurrency.processutils [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:28:35 compute-0 sudo[281036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:28:35 compute-0 sudo[281036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:28:35 compute-0 sudo[281036]: pam_unix(sudo:session): session closed for user root
Nov 25 16:28:35 compute-0 sudo[281062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 16:28:35 compute-0 sudo[281062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:28:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:28:35 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2959057331' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:28:35 compute-0 nova_compute[254092]: 2025-11-25 16:28:35.955 254096 DEBUG oslo_concurrency.processutils [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:28:35 compute-0 nova_compute[254092]: 2025-11-25 16:28:35.962 254096 DEBUG nova.compute.provider_tree [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:28:36 compute-0 podman[281146]: 2025-11-25 16:28:35.954425372 +0000 UTC m=+0.031316513 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:28:36 compute-0 podman[281146]: 2025-11-25 16:28:36.059967592 +0000 UTC m=+0.136858733 container create 4c3bb87abc91c60861ff9641fe9709bf1c1002c01b23b77a303c516a68b01602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_antonelli, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:28:36 compute-0 systemd[1]: Started libpod-conmon-4c3bb87abc91c60861ff9641fe9709bf1c1002c01b23b77a303c516a68b01602.scope.
Nov 25 16:28:36 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:28:36 compute-0 ceph-mon[74985]: osdmap e151: 3 total, 3 up, 3 in
Nov 25 16:28:36 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2959057331' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:28:36 compute-0 podman[281146]: 2025-11-25 16:28:36.232563517 +0000 UTC m=+0.309454668 container init 4c3bb87abc91c60861ff9641fe9709bf1c1002c01b23b77a303c516a68b01602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_antonelli, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 16:28:36 compute-0 podman[281146]: 2025-11-25 16:28:36.242109086 +0000 UTC m=+0.319000207 container start 4c3bb87abc91c60861ff9641fe9709bf1c1002c01b23b77a303c516a68b01602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 16:28:36 compute-0 systemd[1]: libpod-4c3bb87abc91c60861ff9641fe9709bf1c1002c01b23b77a303c516a68b01602.scope: Deactivated successfully.
Nov 25 16:28:36 compute-0 great_antonelli[281164]: 167 167
Nov 25 16:28:36 compute-0 conmon[281164]: conmon 4c3bb87abc91c60861ff <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4c3bb87abc91c60861ff9641fe9709bf1c1002c01b23b77a303c516a68b01602.scope/container/memory.events
Nov 25 16:28:36 compute-0 nova_compute[254092]: 2025-11-25 16:28:36.254 254096 DEBUG nova.scheduler.client.report [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:28:36 compute-0 podman[281146]: 2025-11-25 16:28:36.270467707 +0000 UTC m=+0.347358848 container attach 4c3bb87abc91c60861ff9641fe9709bf1c1002c01b23b77a303c516a68b01602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 16:28:36 compute-0 podman[281146]: 2025-11-25 16:28:36.270997771 +0000 UTC m=+0.347888892 container died 4c3bb87abc91c60861ff9641fe9709bf1c1002c01b23b77a303c516a68b01602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_antonelli, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Nov 25 16:28:36 compute-0 nova_compute[254092]: 2025-11-25 16:28:36.295 254096 DEBUG oslo_concurrency.lockutils [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.896s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:28:36 compute-0 nova_compute[254092]: 2025-11-25 16:28:36.369 254096 INFO nova.scheduler.client.report [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Deleted allocations for instance d5372b02-1d93-4354-8f5c-c4228e8d3ec4
Nov 25 16:28:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-08616db84c0cb2b069427b5dbe09086f84bf86a647d8e979628ff42fa94890f6-merged.mount: Deactivated successfully.
Nov 25 16:28:36 compute-0 nova_compute[254092]: 2025-11-25 16:28:36.430 254096 DEBUG oslo_concurrency.lockutils [None req-61fac452-1a5c-4dda-a0f5-e54e15c24d82 e3ab5ee2932743f4acd4c73f7a5aa7d3 3c8a4dd83dcd4ef1a79608794e3620d2 - - default default] Lock "d5372b02-1d93-4354-8f5c-c4228e8d3ec4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.193s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:28:36 compute-0 podman[281146]: 2025-11-25 16:28:36.461409389 +0000 UTC m=+0.538300510 container remove 4c3bb87abc91c60861ff9641fe9709bf1c1002c01b23b77a303c516a68b01602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 16:28:36 compute-0 systemd[1]: libpod-conmon-4c3bb87abc91c60861ff9641fe9709bf1c1002c01b23b77a303c516a68b01602.scope: Deactivated successfully.
Nov 25 16:28:36 compute-0 podman[281189]: 2025-11-25 16:28:36.645318441 +0000 UTC m=+0.048409607 container create 3e4aee5fa3b7fc0d82ea0eb1222cf9f5a4ada38db11b5da83a527be7abfae6fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_brattain, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 16:28:36 compute-0 systemd[1]: Started libpod-conmon-3e4aee5fa3b7fc0d82ea0eb1222cf9f5a4ada38db11b5da83a527be7abfae6fa.scope.
Nov 25 16:28:36 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:28:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/248413785d8bea299909fd44367dd50cac3ed2f9a039515ab486c92023238d63/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:28:36 compute-0 podman[281189]: 2025-11-25 16:28:36.620694362 +0000 UTC m=+0.023785548 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:28:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/248413785d8bea299909fd44367dd50cac3ed2f9a039515ab486c92023238d63/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:28:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/248413785d8bea299909fd44367dd50cac3ed2f9a039515ab486c92023238d63/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:28:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/248413785d8bea299909fd44367dd50cac3ed2f9a039515ab486c92023238d63/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:28:36 compute-0 podman[281189]: 2025-11-25 16:28:36.732103522 +0000 UTC m=+0.135194728 container init 3e4aee5fa3b7fc0d82ea0eb1222cf9f5a4ada38db11b5da83a527be7abfae6fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_brattain, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:28:36 compute-0 podman[281189]: 2025-11-25 16:28:36.738093064 +0000 UTC m=+0.141184280 container start 3e4aee5fa3b7fc0d82ea0eb1222cf9f5a4ada38db11b5da83a527be7abfae6fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 16:28:36 compute-0 podman[281189]: 2025-11-25 16:28:36.74195714 +0000 UTC m=+0.145048356 container attach 3e4aee5fa3b7fc0d82ea0eb1222cf9f5a4ada38db11b5da83a527be7abfae6fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_brattain, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 16:28:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1220: 321 pgs: 321 active+clean; 275 MiB data, 405 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.7 MiB/s wr, 239 op/s
Nov 25 16:28:37 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Nov 25 16:28:37 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Nov 25 16:28:37 compute-0 ceph-mon[74985]: pgmap v1220: 321 pgs: 321 active+clean; 275 MiB data, 405 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.7 MiB/s wr, 239 op/s
Nov 25 16:28:37 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Nov 25 16:28:37 compute-0 nova_compute[254092]: 2025-11-25 16:28:37.380 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:28:37 compute-0 boring_brattain[281205]: {
Nov 25 16:28:37 compute-0 boring_brattain[281205]:     "0": [
Nov 25 16:28:37 compute-0 boring_brattain[281205]:         {
Nov 25 16:28:37 compute-0 boring_brattain[281205]:             "devices": [
Nov 25 16:28:37 compute-0 boring_brattain[281205]:                 "/dev/loop3"
Nov 25 16:28:37 compute-0 boring_brattain[281205]:             ],
Nov 25 16:28:37 compute-0 boring_brattain[281205]:             "lv_name": "ceph_lv0",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:             "lv_size": "21470642176",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:             "name": "ceph_lv0",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:             "tags": {
Nov 25 16:28:37 compute-0 boring_brattain[281205]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:                 "ceph.cluster_name": "ceph",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:                 "ceph.crush_device_class": "",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:                 "ceph.encrypted": "0",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:                 "ceph.osd_id": "0",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:                 "ceph.type": "block",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:                 "ceph.vdo": "0"
Nov 25 16:28:37 compute-0 boring_brattain[281205]:             },
Nov 25 16:28:37 compute-0 boring_brattain[281205]:             "type": "block",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:             "vg_name": "ceph_vg0"
Nov 25 16:28:37 compute-0 boring_brattain[281205]:         }
Nov 25 16:28:37 compute-0 boring_brattain[281205]:     ],
Nov 25 16:28:37 compute-0 boring_brattain[281205]:     "1": [
Nov 25 16:28:37 compute-0 boring_brattain[281205]:         {
Nov 25 16:28:37 compute-0 boring_brattain[281205]:             "devices": [
Nov 25 16:28:37 compute-0 boring_brattain[281205]:                 "/dev/loop4"
Nov 25 16:28:37 compute-0 boring_brattain[281205]:             ],
Nov 25 16:28:37 compute-0 boring_brattain[281205]:             "lv_name": "ceph_lv1",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:             "lv_size": "21470642176",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:             "name": "ceph_lv1",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:             "tags": {
Nov 25 16:28:37 compute-0 boring_brattain[281205]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:                 "ceph.cluster_name": "ceph",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:                 "ceph.crush_device_class": "",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:                 "ceph.encrypted": "0",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:                 "ceph.osd_id": "1",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:                 "ceph.type": "block",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:                 "ceph.vdo": "0"
Nov 25 16:28:37 compute-0 boring_brattain[281205]:             },
Nov 25 16:28:37 compute-0 boring_brattain[281205]:             "type": "block",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:             "vg_name": "ceph_vg1"
Nov 25 16:28:37 compute-0 boring_brattain[281205]:         }
Nov 25 16:28:37 compute-0 boring_brattain[281205]:     ],
Nov 25 16:28:37 compute-0 boring_brattain[281205]:     "2": [
Nov 25 16:28:37 compute-0 boring_brattain[281205]:         {
Nov 25 16:28:37 compute-0 boring_brattain[281205]:             "devices": [
Nov 25 16:28:37 compute-0 boring_brattain[281205]:                 "/dev/loop5"
Nov 25 16:28:37 compute-0 boring_brattain[281205]:             ],
Nov 25 16:28:37 compute-0 boring_brattain[281205]:             "lv_name": "ceph_lv2",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:             "lv_size": "21470642176",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:             "name": "ceph_lv2",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:             "tags": {
Nov 25 16:28:37 compute-0 boring_brattain[281205]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:                 "ceph.cluster_name": "ceph",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:                 "ceph.crush_device_class": "",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:                 "ceph.encrypted": "0",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:                 "ceph.osd_id": "2",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:                 "ceph.type": "block",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:                 "ceph.vdo": "0"
Nov 25 16:28:37 compute-0 boring_brattain[281205]:             },
Nov 25 16:28:37 compute-0 boring_brattain[281205]:             "type": "block",
Nov 25 16:28:37 compute-0 boring_brattain[281205]:             "vg_name": "ceph_vg2"
Nov 25 16:28:37 compute-0 boring_brattain[281205]:         }
Nov 25 16:28:37 compute-0 boring_brattain[281205]:     ]
Nov 25 16:28:37 compute-0 boring_brattain[281205]: }
Nov 25 16:28:37 compute-0 systemd[1]: libpod-3e4aee5fa3b7fc0d82ea0eb1222cf9f5a4ada38db11b5da83a527be7abfae6fa.scope: Deactivated successfully.
Nov 25 16:28:37 compute-0 podman[281189]: 2025-11-25 16:28:37.622334282 +0000 UTC m=+1.025425488 container died 3e4aee5fa3b7fc0d82ea0eb1222cf9f5a4ada38db11b5da83a527be7abfae6fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_brattain, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:28:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-248413785d8bea299909fd44367dd50cac3ed2f9a039515ab486c92023238d63-merged.mount: Deactivated successfully.
Nov 25 16:28:37 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:28:37 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Nov 25 16:28:37 compute-0 podman[281189]: 2025-11-25 16:28:37.683042092 +0000 UTC m=+1.086133268 container remove 3e4aee5fa3b7fc0d82ea0eb1222cf9f5a4ada38db11b5da83a527be7abfae6fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_brattain, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 16:28:37 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Nov 25 16:28:37 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Nov 25 16:28:37 compute-0 systemd[1]: libpod-conmon-3e4aee5fa3b7fc0d82ea0eb1222cf9f5a4ada38db11b5da83a527be7abfae6fa.scope: Deactivated successfully.
Nov 25 16:28:37 compute-0 sudo[281062]: pam_unix(sudo:session): session closed for user root
Nov 25 16:28:37 compute-0 sudo[281226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:28:37 compute-0 sudo[281226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:28:37 compute-0 sudo[281226]: pam_unix(sudo:session): session closed for user root
Nov 25 16:28:37 compute-0 sudo[281251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:28:37 compute-0 sudo[281251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:28:37 compute-0 sudo[281251]: pam_unix(sudo:session): session closed for user root
Nov 25 16:28:37 compute-0 sudo[281276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:28:37 compute-0 sudo[281276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:28:37 compute-0 sudo[281276]: pam_unix(sudo:session): session closed for user root
Nov 25 16:28:37 compute-0 sudo[281301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 16:28:37 compute-0 sudo[281301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:28:38 compute-0 nova_compute[254092]: 2025-11-25 16:28:38.021 254096 DEBUG oslo_concurrency.lockutils [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Acquiring lock "2174ef15-55fa-4734-8cc2-89064853919b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:28:38 compute-0 nova_compute[254092]: 2025-11-25 16:28:38.022 254096 DEBUG oslo_concurrency.lockutils [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "2174ef15-55fa-4734-8cc2-89064853919b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:28:38 compute-0 nova_compute[254092]: 2025-11-25 16:28:38.022 254096 DEBUG oslo_concurrency.lockutils [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Acquiring lock "2174ef15-55fa-4734-8cc2-89064853919b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:28:38 compute-0 nova_compute[254092]: 2025-11-25 16:28:38.022 254096 DEBUG oslo_concurrency.lockutils [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "2174ef15-55fa-4734-8cc2-89064853919b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:28:38 compute-0 nova_compute[254092]: 2025-11-25 16:28:38.022 254096 DEBUG oslo_concurrency.lockutils [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "2174ef15-55fa-4734-8cc2-89064853919b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:28:38 compute-0 nova_compute[254092]: 2025-11-25 16:28:38.023 254096 INFO nova.compute.manager [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Terminating instance
Nov 25 16:28:38 compute-0 nova_compute[254092]: 2025-11-25 16:28:38.024 254096 DEBUG oslo_concurrency.lockutils [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Acquiring lock "refresh_cache-2174ef15-55fa-4734-8cc2-89064853919b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:28:38 compute-0 nova_compute[254092]: 2025-11-25 16:28:38.024 254096 DEBUG oslo_concurrency.lockutils [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Acquired lock "refresh_cache-2174ef15-55fa-4734-8cc2-89064853919b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:28:38 compute-0 nova_compute[254092]: 2025-11-25 16:28:38.024 254096 DEBUG nova.network.neutron [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:28:38 compute-0 ceph-mon[74985]: osdmap e152: 3 total, 3 up, 3 in
Nov 25 16:28:38 compute-0 ceph-mon[74985]: osdmap e153: 3 total, 3 up, 3 in
Nov 25 16:28:38 compute-0 podman[281367]: 2025-11-25 16:28:38.295780276 +0000 UTC m=+0.038124287 container create 5e2846b081a9364de92d7a6d185ee274077940c8701ef55db233cfd516886906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:28:38 compute-0 systemd[1]: Started libpod-conmon-5e2846b081a9364de92d7a6d185ee274077940c8701ef55db233cfd516886906.scope.
Nov 25 16:28:38 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:28:38 compute-0 podman[281367]: 2025-11-25 16:28:38.367432865 +0000 UTC m=+0.109776916 container init 5e2846b081a9364de92d7a6d185ee274077940c8701ef55db233cfd516886906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 16:28:38 compute-0 podman[281367]: 2025-11-25 16:28:38.373537761 +0000 UTC m=+0.115881782 container start 5e2846b081a9364de92d7a6d185ee274077940c8701ef55db233cfd516886906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_einstein, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:28:38 compute-0 podman[281367]: 2025-11-25 16:28:38.28043534 +0000 UTC m=+0.022779371 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:28:38 compute-0 podman[281367]: 2025-11-25 16:28:38.376976485 +0000 UTC m=+0.119320516 container attach 5e2846b081a9364de92d7a6d185ee274077940c8701ef55db233cfd516886906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_einstein, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:28:38 compute-0 happy_einstein[281383]: 167 167
Nov 25 16:28:38 compute-0 systemd[1]: libpod-5e2846b081a9364de92d7a6d185ee274077940c8701ef55db233cfd516886906.scope: Deactivated successfully.
Nov 25 16:28:38 compute-0 podman[281367]: 2025-11-25 16:28:38.37827068 +0000 UTC m=+0.120614691 container died 5e2846b081a9364de92d7a6d185ee274077940c8701ef55db233cfd516886906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_einstein, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:28:38 compute-0 nova_compute[254092]: 2025-11-25 16:28:38.396 254096 DEBUG nova.network.neutron [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:28:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-e074f30a0705dd4b08b96e5d96e9fa4536faba549c6fd066adbfc813e063569d-merged.mount: Deactivated successfully.
Nov 25 16:28:38 compute-0 podman[281367]: 2025-11-25 16:28:38.410902287 +0000 UTC m=+0.153246298 container remove 5e2846b081a9364de92d7a6d185ee274077940c8701ef55db233cfd516886906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 16:28:38 compute-0 systemd[1]: libpod-conmon-5e2846b081a9364de92d7a6d185ee274077940c8701ef55db233cfd516886906.scope: Deactivated successfully.
Nov 25 16:28:38 compute-0 podman[281406]: 2025-11-25 16:28:38.59598224 +0000 UTC m=+0.057233017 container create 60da2c9b37a1729abf81fc00beb1ce9559535773b71a6a79f6556a8be06b44a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_torvalds, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:28:38 compute-0 systemd[1]: Started libpod-conmon-60da2c9b37a1729abf81fc00beb1ce9559535773b71a6a79f6556a8be06b44a4.scope.
Nov 25 16:28:38 compute-0 podman[281406]: 2025-11-25 16:28:38.569267214 +0000 UTC m=+0.030517991 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:28:38 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:28:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f201671a0602d6082a5b7bebe8d97cec12f0ed395bcfb757349c01b147a4e5b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:28:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f201671a0602d6082a5b7bebe8d97cec12f0ed395bcfb757349c01b147a4e5b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:28:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f201671a0602d6082a5b7bebe8d97cec12f0ed395bcfb757349c01b147a4e5b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:28:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f201671a0602d6082a5b7bebe8d97cec12f0ed395bcfb757349c01b147a4e5b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:28:38 compute-0 podman[281406]: 2025-11-25 16:28:38.696381141 +0000 UTC m=+0.157631958 container init 60da2c9b37a1729abf81fc00beb1ce9559535773b71a6a79f6556a8be06b44a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_torvalds, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:28:38 compute-0 podman[281406]: 2025-11-25 16:28:38.702715083 +0000 UTC m=+0.163965850 container start 60da2c9b37a1729abf81fc00beb1ce9559535773b71a6a79f6556a8be06b44a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_torvalds, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 16:28:38 compute-0 podman[281406]: 2025-11-25 16:28:38.706227629 +0000 UTC m=+0.167478346 container attach 60da2c9b37a1729abf81fc00beb1ce9559535773b71a6a79f6556a8be06b44a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 16:28:38 compute-0 nova_compute[254092]: 2025-11-25 16:28:38.744 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:28:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1223: 321 pgs: 321 active+clean; 275 MiB data, 405 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 5.8 KiB/s wr, 228 op/s
Nov 25 16:28:39 compute-0 nova_compute[254092]: 2025-11-25 16:28:39.167 254096 DEBUG nova.network.neutron [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:28:39 compute-0 nova_compute[254092]: 2025-11-25 16:28:39.182 254096 DEBUG oslo_concurrency.lockutils [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Releasing lock "refresh_cache-2174ef15-55fa-4734-8cc2-89064853919b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:28:39 compute-0 nova_compute[254092]: 2025-11-25 16:28:39.183 254096 DEBUG nova.compute.manager [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:28:39 compute-0 ceph-mon[74985]: pgmap v1223: 321 pgs: 321 active+clean; 275 MiB data, 405 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 5.8 KiB/s wr, 228 op/s
Nov 25 16:28:39 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000010.scope: Deactivated successfully.
Nov 25 16:28:39 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000010.scope: Consumed 13.859s CPU time.
Nov 25 16:28:39 compute-0 systemd-machined[216343]: Machine qemu-17-instance-00000010 terminated.
Nov 25 16:28:39 compute-0 nova_compute[254092]: 2025-11-25 16:28:39.416 254096 INFO nova.virt.libvirt.driver [-] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Instance destroyed successfully.
Nov 25 16:28:39 compute-0 nova_compute[254092]: 2025-11-25 16:28:39.417 254096 DEBUG nova.objects.instance [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lazy-loading 'resources' on Instance uuid 2174ef15-55fa-4734-8cc2-89064853919b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:28:39 compute-0 jolly_torvalds[281423]: {
Nov 25 16:28:39 compute-0 jolly_torvalds[281423]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 16:28:39 compute-0 jolly_torvalds[281423]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:28:39 compute-0 jolly_torvalds[281423]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 16:28:39 compute-0 jolly_torvalds[281423]:         "osd_id": 1,
Nov 25 16:28:39 compute-0 jolly_torvalds[281423]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:28:39 compute-0 jolly_torvalds[281423]:         "type": "bluestore"
Nov 25 16:28:39 compute-0 jolly_torvalds[281423]:     },
Nov 25 16:28:39 compute-0 jolly_torvalds[281423]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 16:28:39 compute-0 jolly_torvalds[281423]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:28:39 compute-0 jolly_torvalds[281423]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 16:28:39 compute-0 jolly_torvalds[281423]:         "osd_id": 2,
Nov 25 16:28:39 compute-0 jolly_torvalds[281423]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:28:39 compute-0 jolly_torvalds[281423]:         "type": "bluestore"
Nov 25 16:28:39 compute-0 jolly_torvalds[281423]:     },
Nov 25 16:28:39 compute-0 jolly_torvalds[281423]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 16:28:39 compute-0 jolly_torvalds[281423]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:28:39 compute-0 jolly_torvalds[281423]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 16:28:39 compute-0 jolly_torvalds[281423]:         "osd_id": 0,
Nov 25 16:28:39 compute-0 jolly_torvalds[281423]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:28:39 compute-0 jolly_torvalds[281423]:         "type": "bluestore"
Nov 25 16:28:39 compute-0 jolly_torvalds[281423]:     }
Nov 25 16:28:39 compute-0 jolly_torvalds[281423]: }
Nov 25 16:28:39 compute-0 systemd[1]: libpod-60da2c9b37a1729abf81fc00beb1ce9559535773b71a6a79f6556a8be06b44a4.scope: Deactivated successfully.
Nov 25 16:28:39 compute-0 podman[281406]: 2025-11-25 16:28:39.688486062 +0000 UTC m=+1.149736799 container died 60da2c9b37a1729abf81fc00beb1ce9559535773b71a6a79f6556a8be06b44a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 16:28:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f201671a0602d6082a5b7bebe8d97cec12f0ed395bcfb757349c01b147a4e5b-merged.mount: Deactivated successfully.
Nov 25 16:28:39 compute-0 podman[281406]: 2025-11-25 16:28:39.801403263 +0000 UTC m=+1.262653990 container remove 60da2c9b37a1729abf81fc00beb1ce9559535773b71a6a79f6556a8be06b44a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 16:28:39 compute-0 systemd[1]: libpod-conmon-60da2c9b37a1729abf81fc00beb1ce9559535773b71a6a79f6556a8be06b44a4.scope: Deactivated successfully.
Nov 25 16:28:39 compute-0 sudo[281301]: pam_unix(sudo:session): session closed for user root
Nov 25 16:28:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:28:39 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:28:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:28:39 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:28:39 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 49aab584-00ed-4519-b3e7-440900f0dc29 does not exist
Nov 25 16:28:39 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 0bb34bcc-caa2-4855-82c2-6c23440f2c78 does not exist
Nov 25 16:28:39 compute-0 sudo[281492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:28:39 compute-0 sudo[281492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:28:39 compute-0 sudo[281492]: pam_unix(sudo:session): session closed for user root
Nov 25 16:28:40 compute-0 sudo[281518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 16:28:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:28:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:28:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:28:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:28:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:28:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:28:40 compute-0 sudo[281518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:28:40 compute-0 sudo[281518]: pam_unix(sudo:session): session closed for user root
Nov 25 16:28:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:28:40
Nov 25 16:28:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:28:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:28:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'volumes', 'backups', 'images', '.mgr']
Nov 25 16:28:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:28:40 compute-0 nova_compute[254092]: 2025-11-25 16:28:40.068 254096 INFO nova.virt.libvirt.driver [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Deleting instance files /var/lib/nova/instances/2174ef15-55fa-4734-8cc2-89064853919b_del
Nov 25 16:28:40 compute-0 nova_compute[254092]: 2025-11-25 16:28:40.069 254096 INFO nova.virt.libvirt.driver [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Deletion of /var/lib/nova/instances/2174ef15-55fa-4734-8cc2-89064853919b_del complete
Nov 25 16:28:40 compute-0 nova_compute[254092]: 2025-11-25 16:28:40.130 254096 INFO nova.compute.manager [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Took 0.95 seconds to destroy the instance on the hypervisor.
Nov 25 16:28:40 compute-0 nova_compute[254092]: 2025-11-25 16:28:40.131 254096 DEBUG oslo.service.loopingcall [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:28:40 compute-0 nova_compute[254092]: 2025-11-25 16:28:40.131 254096 DEBUG nova.compute.manager [-] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:28:40 compute-0 nova_compute[254092]: 2025-11-25 16:28:40.131 254096 DEBUG nova.network.neutron [-] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:28:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:28:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:28:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:28:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:28:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:28:40 compute-0 nova_compute[254092]: 2025-11-25 16:28:40.292 254096 DEBUG nova.network.neutron [-] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:28:40 compute-0 nova_compute[254092]: 2025-11-25 16:28:40.302 254096 DEBUG nova.network.neutron [-] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:28:40 compute-0 nova_compute[254092]: 2025-11-25 16:28:40.315 254096 INFO nova.compute.manager [-] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Took 0.18 seconds to deallocate network for instance.
Nov 25 16:28:40 compute-0 nova_compute[254092]: 2025-11-25 16:28:40.364 254096 DEBUG oslo_concurrency.lockutils [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:28:40 compute-0 nova_compute[254092]: 2025-11-25 16:28:40.364 254096 DEBUG oslo_concurrency.lockutils [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:28:40 compute-0 nova_compute[254092]: 2025-11-25 16:28:40.418 254096 DEBUG oslo_concurrency.processutils [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:28:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:28:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:28:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:28:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:28:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:28:40 compute-0 nova_compute[254092]: 2025-11-25 16:28:40.611 254096 DEBUG oslo_concurrency.lockutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "3375e096-321c-459b-8b6a-e085bb62872f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:28:40 compute-0 nova_compute[254092]: 2025-11-25 16:28:40.612 254096 DEBUG oslo_concurrency.lockutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:28:40 compute-0 nova_compute[254092]: 2025-11-25 16:28:40.631 254096 DEBUG nova.compute.manager [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:28:40 compute-0 nova_compute[254092]: 2025-11-25 16:28:40.710 254096 DEBUG oslo_concurrency.lockutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:28:40 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:28:40 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:28:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:28:40 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/610683225' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:28:40 compute-0 nova_compute[254092]: 2025-11-25 16:28:40.880 254096 DEBUG oslo_concurrency.processutils [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:28:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1224: 321 pgs: 321 active+clean; 173 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 47 KiB/s wr, 302 op/s
Nov 25 16:28:40 compute-0 nova_compute[254092]: 2025-11-25 16:28:40.888 254096 DEBUG nova.compute.provider_tree [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:28:40 compute-0 nova_compute[254092]: 2025-11-25 16:28:40.912 254096 DEBUG nova.scheduler.client.report [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:28:40 compute-0 nova_compute[254092]: 2025-11-25 16:28:40.931 254096 DEBUG oslo_concurrency.lockutils [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:28:40 compute-0 nova_compute[254092]: 2025-11-25 16:28:40.933 254096 DEBUG oslo_concurrency.lockutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.223s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:28:40 compute-0 nova_compute[254092]: 2025-11-25 16:28:40.939 254096 DEBUG nova.virt.hardware [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:28:40 compute-0 nova_compute[254092]: 2025-11-25 16:28:40.939 254096 INFO nova.compute.claims [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:28:40 compute-0 nova_compute[254092]: 2025-11-25 16:28:40.984 254096 INFO nova.scheduler.client.report [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Deleted allocations for instance 2174ef15-55fa-4734-8cc2-89064853919b
Nov 25 16:28:41 compute-0 nova_compute[254092]: 2025-11-25 16:28:41.053 254096 DEBUG oslo_concurrency.lockutils [None req-0b2cdcf6-d3fa-4477-acbb-c3a2275c54f4 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "2174ef15-55fa-4734-8cc2-89064853919b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.031s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:28:41 compute-0 nova_compute[254092]: 2025-11-25 16:28:41.062 254096 DEBUG oslo_concurrency.processutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:28:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:28:41 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3147019412' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:28:41 compute-0 nova_compute[254092]: 2025-11-25 16:28:41.527 254096 DEBUG oslo_concurrency.processutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:28:41 compute-0 nova_compute[254092]: 2025-11-25 16:28:41.534 254096 DEBUG nova.compute.provider_tree [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:28:41 compute-0 nova_compute[254092]: 2025-11-25 16:28:41.552 254096 DEBUG nova.scheduler.client.report [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:28:41 compute-0 nova_compute[254092]: 2025-11-25 16:28:41.620 254096 DEBUG oslo_concurrency.lockutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:28:41 compute-0 nova_compute[254092]: 2025-11-25 16:28:41.621 254096 DEBUG nova.compute.manager [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:28:41 compute-0 nova_compute[254092]: 2025-11-25 16:28:41.825 254096 DEBUG nova.compute.manager [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:28:41 compute-0 nova_compute[254092]: 2025-11-25 16:28:41.826 254096 DEBUG nova.network.neutron [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:28:41 compute-0 nova_compute[254092]: 2025-11-25 16:28:41.917 254096 DEBUG oslo_concurrency.lockutils [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Acquiring lock "12043b7f-9853-45a8-b963-ae96713754b4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:28:41 compute-0 nova_compute[254092]: 2025-11-25 16:28:41.918 254096 DEBUG oslo_concurrency.lockutils [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "12043b7f-9853-45a8-b963-ae96713754b4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:28:41 compute-0 nova_compute[254092]: 2025-11-25 16:28:41.918 254096 DEBUG oslo_concurrency.lockutils [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Acquiring lock "12043b7f-9853-45a8-b963-ae96713754b4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:28:41 compute-0 nova_compute[254092]: 2025-11-25 16:28:41.919 254096 DEBUG oslo_concurrency.lockutils [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "12043b7f-9853-45a8-b963-ae96713754b4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:28:41 compute-0 nova_compute[254092]: 2025-11-25 16:28:41.919 254096 DEBUG oslo_concurrency.lockutils [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "12043b7f-9853-45a8-b963-ae96713754b4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:28:41 compute-0 nova_compute[254092]: 2025-11-25 16:28:41.920 254096 INFO nova.compute.manager [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Terminating instance
Nov 25 16:28:41 compute-0 nova_compute[254092]: 2025-11-25 16:28:41.921 254096 DEBUG oslo_concurrency.lockutils [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Acquiring lock "refresh_cache-12043b7f-9853-45a8-b963-ae96713754b4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:28:41 compute-0 nova_compute[254092]: 2025-11-25 16:28:41.922 254096 DEBUG oslo_concurrency.lockutils [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Acquired lock "refresh_cache-12043b7f-9853-45a8-b963-ae96713754b4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:28:41 compute-0 nova_compute[254092]: 2025-11-25 16:28:41.922 254096 DEBUG nova.network.neutron [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:28:41 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/610683225' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:28:41 compute-0 ceph-mon[74985]: pgmap v1224: 321 pgs: 321 active+clean; 173 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 47 KiB/s wr, 302 op/s
Nov 25 16:28:41 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3147019412' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:28:42 compute-0 nova_compute[254092]: 2025-11-25 16:28:42.002 254096 INFO nova.virt.libvirt.driver [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:28:42 compute-0 nova_compute[254092]: 2025-11-25 16:28:42.053 254096 DEBUG nova.compute.manager [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:28:42 compute-0 nova_compute[254092]: 2025-11-25 16:28:42.083 254096 DEBUG nova.policy [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a8e217b742fe4a57a4ac4ffc776670fe', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a4c9877f37b34e7fba5f3cc9642d1a48', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:28:42 compute-0 nova_compute[254092]: 2025-11-25 16:28:42.352 254096 DEBUG nova.network.neutron [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:28:42 compute-0 nova_compute[254092]: 2025-11-25 16:28:42.382 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:28:42 compute-0 nova_compute[254092]: 2025-11-25 16:28:42.434 254096 DEBUG nova.compute.manager [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:28:42 compute-0 nova_compute[254092]: 2025-11-25 16:28:42.436 254096 DEBUG nova.virt.libvirt.driver [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:28:42 compute-0 nova_compute[254092]: 2025-11-25 16:28:42.437 254096 INFO nova.virt.libvirt.driver [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Creating image(s)
Nov 25 16:28:42 compute-0 nova_compute[254092]: 2025-11-25 16:28:42.475 254096 DEBUG nova.storage.rbd_utils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 3375e096-321c-459b-8b6a-e085bb62872f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:28:42 compute-0 nova_compute[254092]: 2025-11-25 16:28:42.503 254096 DEBUG nova.storage.rbd_utils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 3375e096-321c-459b-8b6a-e085bb62872f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:28:42 compute-0 nova_compute[254092]: 2025-11-25 16:28:42.525 254096 DEBUG nova.storage.rbd_utils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 3375e096-321c-459b-8b6a-e085bb62872f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:28:42 compute-0 nova_compute[254092]: 2025-11-25 16:28:42.528 254096 DEBUG oslo_concurrency.processutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:28:42 compute-0 nova_compute[254092]: 2025-11-25 16:28:42.594 254096 DEBUG oslo_concurrency.processutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:28:42 compute-0 nova_compute[254092]: 2025-11-25 16:28:42.596 254096 DEBUG oslo_concurrency.lockutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:28:42 compute-0 nova_compute[254092]: 2025-11-25 16:28:42.596 254096 DEBUG oslo_concurrency.lockutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:28:42 compute-0 nova_compute[254092]: 2025-11-25 16:28:42.597 254096 DEBUG oslo_concurrency.lockutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:28:42 compute-0 nova_compute[254092]: 2025-11-25 16:28:42.623 254096 DEBUG nova.storage.rbd_utils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 3375e096-321c-459b-8b6a-e085bb62872f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:28:42 compute-0 nova_compute[254092]: 2025-11-25 16:28:42.628 254096 DEBUG oslo_concurrency.processutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 3375e096-321c-459b-8b6a-e085bb62872f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:28:42 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:28:42 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Nov 25 16:28:42 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Nov 25 16:28:42 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Nov 25 16:28:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1226: 321 pgs: 321 active+clean; 121 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 43 KiB/s wr, 108 op/s
Nov 25 16:28:42 compute-0 nova_compute[254092]: 2025-11-25 16:28:42.962 254096 DEBUG nova.network.neutron [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:28:42 compute-0 nova_compute[254092]: 2025-11-25 16:28:42.987 254096 DEBUG oslo_concurrency.lockutils [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Releasing lock "refresh_cache-12043b7f-9853-45a8-b963-ae96713754b4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:28:42 compute-0 nova_compute[254092]: 2025-11-25 16:28:42.988 254096 DEBUG nova.compute.manager [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:28:43 compute-0 nova_compute[254092]: 2025-11-25 16:28:43.274 254096 DEBUG nova.network.neutron [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Successfully created port: d6146886-91a1-4d5f-9234-e1d0154b4230 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:28:43 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Nov 25 16:28:43 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Consumed 13.364s CPU time.
Nov 25 16:28:43 compute-0 systemd-machined[216343]: Machine qemu-16-instance-0000000f terminated.
Nov 25 16:28:43 compute-0 nova_compute[254092]: 2025-11-25 16:28:43.411 254096 INFO nova.virt.libvirt.driver [-] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Instance destroyed successfully.
Nov 25 16:28:43 compute-0 nova_compute[254092]: 2025-11-25 16:28:43.412 254096 DEBUG nova.objects.instance [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lazy-loading 'resources' on Instance uuid 12043b7f-9853-45a8-b963-ae96713754b4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:28:43 compute-0 nova_compute[254092]: 2025-11-25 16:28:43.747 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:28:43 compute-0 ceph-mon[74985]: osdmap e154: 3 total, 3 up, 3 in
Nov 25 16:28:43 compute-0 ceph-mon[74985]: pgmap v1226: 321 pgs: 321 active+clean; 121 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 43 KiB/s wr, 108 op/s
Nov 25 16:28:43 compute-0 nova_compute[254092]: 2025-11-25 16:28:43.829 254096 DEBUG oslo_concurrency.lockutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "090ac2d7-979e-4706-8a01-5e94ab72282d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:28:43 compute-0 nova_compute[254092]: 2025-11-25 16:28:43.829 254096 DEBUG oslo_concurrency.lockutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "090ac2d7-979e-4706-8a01-5e94ab72282d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:28:43 compute-0 nova_compute[254092]: 2025-11-25 16:28:43.896 254096 DEBUG nova.compute.manager [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:28:44 compute-0 nova_compute[254092]: 2025-11-25 16:28:44.117 254096 DEBUG oslo_concurrency.processutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 3375e096-321c-459b-8b6a-e085bb62872f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:28:44 compute-0 nova_compute[254092]: 2025-11-25 16:28:44.177 254096 DEBUG nova.storage.rbd_utils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] resizing rbd image 3375e096-321c-459b-8b6a-e085bb62872f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:28:44 compute-0 nova_compute[254092]: 2025-11-25 16:28:44.352 254096 DEBUG oslo_concurrency.lockutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:28:44 compute-0 nova_compute[254092]: 2025-11-25 16:28:44.352 254096 DEBUG oslo_concurrency.lockutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:28:44 compute-0 nova_compute[254092]: 2025-11-25 16:28:44.359 254096 DEBUG nova.virt.hardware [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:28:44 compute-0 nova_compute[254092]: 2025-11-25 16:28:44.359 254096 INFO nova.compute.claims [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:28:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1227: 321 pgs: 321 active+clean; 121 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 34 KiB/s wr, 85 op/s
Nov 25 16:28:44 compute-0 ceph-mon[74985]: pgmap v1227: 321 pgs: 321 active+clean; 121 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 34 KiB/s wr, 85 op/s
Nov 25 16:28:45 compute-0 nova_compute[254092]: 2025-11-25 16:28:45.021 254096 DEBUG nova.objects.instance [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'migration_context' on Instance uuid 3375e096-321c-459b-8b6a-e085bb62872f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:28:45 compute-0 nova_compute[254092]: 2025-11-25 16:28:45.041 254096 DEBUG nova.virt.libvirt.driver [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:28:45 compute-0 nova_compute[254092]: 2025-11-25 16:28:45.042 254096 DEBUG nova.virt.libvirt.driver [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Ensure instance console log exists: /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:28:45 compute-0 nova_compute[254092]: 2025-11-25 16:28:45.042 254096 DEBUG oslo_concurrency.lockutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:28:45 compute-0 nova_compute[254092]: 2025-11-25 16:28:45.043 254096 DEBUG oslo_concurrency.lockutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:28:45 compute-0 nova_compute[254092]: 2025-11-25 16:28:45.043 254096 DEBUG oslo_concurrency.lockutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:28:45 compute-0 nova_compute[254092]: 2025-11-25 16:28:45.174 254096 DEBUG oslo_concurrency.processutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:28:45 compute-0 nova_compute[254092]: 2025-11-25 16:28:45.309 254096 INFO nova.virt.libvirt.driver [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Deleting instance files /var/lib/nova/instances/12043b7f-9853-45a8-b963-ae96713754b4_del
Nov 25 16:28:45 compute-0 nova_compute[254092]: 2025-11-25 16:28:45.310 254096 INFO nova.virt.libvirt.driver [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Deletion of /var/lib/nova/instances/12043b7f-9853-45a8-b963-ae96713754b4_del complete
Nov 25 16:28:45 compute-0 nova_compute[254092]: 2025-11-25 16:28:45.378 254096 INFO nova.compute.manager [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Took 2.39 seconds to destroy the instance on the hypervisor.
Nov 25 16:28:45 compute-0 nova_compute[254092]: 2025-11-25 16:28:45.379 254096 DEBUG oslo.service.loopingcall [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:28:45 compute-0 nova_compute[254092]: 2025-11-25 16:28:45.379 254096 DEBUG nova.compute.manager [-] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:28:45 compute-0 nova_compute[254092]: 2025-11-25 16:28:45.379 254096 DEBUG nova.network.neutron [-] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:28:45 compute-0 nova_compute[254092]: 2025-11-25 16:28:45.382 254096 DEBUG nova.network.neutron [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Successfully updated port: d6146886-91a1-4d5f-9234-e1d0154b4230 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:28:45 compute-0 nova_compute[254092]: 2025-11-25 16:28:45.400 254096 DEBUG oslo_concurrency.lockutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "refresh_cache-3375e096-321c-459b-8b6a-e085bb62872f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:28:45 compute-0 nova_compute[254092]: 2025-11-25 16:28:45.401 254096 DEBUG oslo_concurrency.lockutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquired lock "refresh_cache-3375e096-321c-459b-8b6a-e085bb62872f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:28:45 compute-0 nova_compute[254092]: 2025-11-25 16:28:45.401 254096 DEBUG nova.network.neutron [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:28:45 compute-0 nova_compute[254092]: 2025-11-25 16:28:45.513 254096 DEBUG nova.compute.manager [req-e59a8078-a22d-4c5a-b493-f0299ebab1ed req-5ec36ee9-091c-455d-94fb-9c40a94193ee a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Received event network-changed-d6146886-91a1-4d5f-9234-e1d0154b4230 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:28:45 compute-0 nova_compute[254092]: 2025-11-25 16:28:45.513 254096 DEBUG nova.compute.manager [req-e59a8078-a22d-4c5a-b493-f0299ebab1ed req-5ec36ee9-091c-455d-94fb-9c40a94193ee a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Refreshing instance network info cache due to event network-changed-d6146886-91a1-4d5f-9234-e1d0154b4230. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:28:45 compute-0 nova_compute[254092]: 2025-11-25 16:28:45.514 254096 DEBUG oslo_concurrency.lockutils [req-e59a8078-a22d-4c5a-b493-f0299ebab1ed req-5ec36ee9-091c-455d-94fb-9c40a94193ee a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-3375e096-321c-459b-8b6a-e085bb62872f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:28:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:28:45 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2065165170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:28:45 compute-0 nova_compute[254092]: 2025-11-25 16:28:45.602 254096 DEBUG oslo_concurrency.processutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:28:45 compute-0 nova_compute[254092]: 2025-11-25 16:28:45.608 254096 DEBUG nova.compute.provider_tree [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:28:45 compute-0 nova_compute[254092]: 2025-11-25 16:28:45.645 254096 DEBUG nova.network.neutron [-] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:28:45 compute-0 nova_compute[254092]: 2025-11-25 16:28:45.661 254096 DEBUG nova.scheduler.client.report [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:28:45 compute-0 nova_compute[254092]: 2025-11-25 16:28:45.665 254096 DEBUG nova.network.neutron [-] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:28:45 compute-0 nova_compute[254092]: 2025-11-25 16:28:45.682 254096 DEBUG nova.network.neutron [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:28:45 compute-0 nova_compute[254092]: 2025-11-25 16:28:45.700 254096 INFO nova.compute.manager [-] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Took 0.32 seconds to deallocate network for instance.
Nov 25 16:28:45 compute-0 nova_compute[254092]: 2025-11-25 16:28:45.706 254096 DEBUG oslo_concurrency.lockutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.354s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:28:45 compute-0 nova_compute[254092]: 2025-11-25 16:28:45.707 254096 DEBUG nova.compute.manager [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:28:45 compute-0 nova_compute[254092]: 2025-11-25 16:28:45.774 254096 DEBUG nova.compute.manager [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:28:45 compute-0 nova_compute[254092]: 2025-11-25 16:28:45.775 254096 DEBUG nova.network.neutron [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:28:45 compute-0 nova_compute[254092]: 2025-11-25 16:28:45.780 254096 DEBUG oslo_concurrency.lockutils [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:28:45 compute-0 nova_compute[254092]: 2025-11-25 16:28:45.781 254096 DEBUG oslo_concurrency.lockutils [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:28:45 compute-0 nova_compute[254092]: 2025-11-25 16:28:45.802 254096 INFO nova.virt.libvirt.driver [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:28:45 compute-0 nova_compute[254092]: 2025-11-25 16:28:45.830 254096 DEBUG nova.compute.manager [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:28:45 compute-0 nova_compute[254092]: 2025-11-25 16:28:45.891 254096 DEBUG oslo_concurrency.processutils [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:28:45 compute-0 nova_compute[254092]: 2025-11-25 16:28:45.933 254096 DEBUG nova.compute.manager [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:28:45 compute-0 nova_compute[254092]: 2025-11-25 16:28:45.935 254096 DEBUG nova.virt.libvirt.driver [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:28:45 compute-0 nova_compute[254092]: 2025-11-25 16:28:45.936 254096 INFO nova.virt.libvirt.driver [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Creating image(s)
Nov 25 16:28:45 compute-0 nova_compute[254092]: 2025-11-25 16:28:45.964 254096 DEBUG nova.storage.rbd_utils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 090ac2d7-979e-4706-8a01-5e94ab72282d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:28:45 compute-0 nova_compute[254092]: 2025-11-25 16:28:45.988 254096 DEBUG nova.storage.rbd_utils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 090ac2d7-979e-4706-8a01-5e94ab72282d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:28:45 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2065165170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:28:46 compute-0 nova_compute[254092]: 2025-11-25 16:28:46.023 254096 DEBUG nova.storage.rbd_utils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 090ac2d7-979e-4706-8a01-5e94ab72282d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:28:46 compute-0 nova_compute[254092]: 2025-11-25 16:28:46.026 254096 DEBUG oslo_concurrency.processutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:28:46 compute-0 nova_compute[254092]: 2025-11-25 16:28:46.083 254096 DEBUG oslo_concurrency.processutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:28:46 compute-0 nova_compute[254092]: 2025-11-25 16:28:46.084 254096 DEBUG oslo_concurrency.lockutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:28:46 compute-0 nova_compute[254092]: 2025-11-25 16:28:46.084 254096 DEBUG oslo_concurrency.lockutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:28:46 compute-0 nova_compute[254092]: 2025-11-25 16:28:46.084 254096 DEBUG oslo_concurrency.lockutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:28:46 compute-0 nova_compute[254092]: 2025-11-25 16:28:46.107 254096 DEBUG nova.storage.rbd_utils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 090ac2d7-979e-4706-8a01-5e94ab72282d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:28:46 compute-0 nova_compute[254092]: 2025-11-25 16:28:46.112 254096 DEBUG oslo_concurrency.processutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 090ac2d7-979e-4706-8a01-5e94ab72282d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:28:46 compute-0 nova_compute[254092]: 2025-11-25 16:28:46.139 254096 DEBUG nova.policy [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a8e217b742fe4a57a4ac4ffc776670fe', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a4c9877f37b34e7fba5f3cc9642d1a48', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:28:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:28:46 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3801946160' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:28:46 compute-0 nova_compute[254092]: 2025-11-25 16:28:46.346 254096 DEBUG oslo_concurrency.processutils [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:28:46 compute-0 nova_compute[254092]: 2025-11-25 16:28:46.355 254096 DEBUG nova.compute.provider_tree [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:28:46 compute-0 nova_compute[254092]: 2025-11-25 16:28:46.370 254096 DEBUG nova.scheduler.client.report [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:28:46 compute-0 nova_compute[254092]: 2025-11-25 16:28:46.392 254096 DEBUG oslo_concurrency.lockutils [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:28:46 compute-0 nova_compute[254092]: 2025-11-25 16:28:46.398 254096 DEBUG oslo_concurrency.processutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 090ac2d7-979e-4706-8a01-5e94ab72282d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.286s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:28:46 compute-0 nova_compute[254092]: 2025-11-25 16:28:46.433 254096 INFO nova.scheduler.client.report [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Deleted allocations for instance 12043b7f-9853-45a8-b963-ae96713754b4
Nov 25 16:28:46 compute-0 nova_compute[254092]: 2025-11-25 16:28:46.480 254096 DEBUG nova.storage.rbd_utils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] resizing rbd image 090ac2d7-979e-4706-8a01-5e94ab72282d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:28:46 compute-0 nova_compute[254092]: 2025-11-25 16:28:46.523 254096 DEBUG oslo_concurrency.lockutils [None req-9bdda237-7138-4087-abae-f9e746795db9 435d184bc01d4b1b878995bce4319f96 b791fb0a74ad43e9b9270c33338d5556 - - default default] Lock "12043b7f-9853-45a8-b963-ae96713754b4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:28:46 compute-0 nova_compute[254092]: 2025-11-25 16:28:46.613 254096 DEBUG nova.objects.instance [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'migration_context' on Instance uuid 090ac2d7-979e-4706-8a01-5e94ab72282d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:28:46 compute-0 nova_compute[254092]: 2025-11-25 16:28:46.627 254096 DEBUG nova.virt.libvirt.driver [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:28:46 compute-0 nova_compute[254092]: 2025-11-25 16:28:46.628 254096 DEBUG nova.virt.libvirt.driver [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Ensure instance console log exists: /var/lib/nova/instances/090ac2d7-979e-4706-8a01-5e94ab72282d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:28:46 compute-0 nova_compute[254092]: 2025-11-25 16:28:46.628 254096 DEBUG oslo_concurrency.lockutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:28:46 compute-0 nova_compute[254092]: 2025-11-25 16:28:46.628 254096 DEBUG oslo_concurrency.lockutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:28:46 compute-0 nova_compute[254092]: 2025-11-25 16:28:46.629 254096 DEBUG oslo_concurrency.lockutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:28:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1228: 321 pgs: 321 active+clean; 134 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 95 KiB/s rd, 2.3 MiB/s wr, 142 op/s
Nov 25 16:28:47 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3801946160' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:28:47 compute-0 ceph-mon[74985]: pgmap v1228: 321 pgs: 321 active+clean; 134 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 95 KiB/s rd, 2.3 MiB/s wr, 142 op/s
Nov 25 16:28:47 compute-0 nova_compute[254092]: 2025-11-25 16:28:47.091 254096 DEBUG nova.network.neutron [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Updating instance_info_cache with network_info: [{"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:28:47 compute-0 nova_compute[254092]: 2025-11-25 16:28:47.116 254096 DEBUG oslo_concurrency.lockutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Releasing lock "refresh_cache-3375e096-321c-459b-8b6a-e085bb62872f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:28:47 compute-0 nova_compute[254092]: 2025-11-25 16:28:47.117 254096 DEBUG nova.compute.manager [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Instance network_info: |[{"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:28:47 compute-0 nova_compute[254092]: 2025-11-25 16:28:47.118 254096 DEBUG oslo_concurrency.lockutils [req-e59a8078-a22d-4c5a-b493-f0299ebab1ed req-5ec36ee9-091c-455d-94fb-9c40a94193ee a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-3375e096-321c-459b-8b6a-e085bb62872f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:28:47 compute-0 nova_compute[254092]: 2025-11-25 16:28:47.119 254096 DEBUG nova.network.neutron [req-e59a8078-a22d-4c5a-b493-f0299ebab1ed req-5ec36ee9-091c-455d-94fb-9c40a94193ee a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Refreshing network info cache for port d6146886-91a1-4d5f-9234-e1d0154b4230 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:28:47 compute-0 nova_compute[254092]: 2025-11-25 16:28:47.124 254096 DEBUG nova.virt.libvirt.driver [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Start _get_guest_xml network_info=[{"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:28:47 compute-0 nova_compute[254092]: 2025-11-25 16:28:47.132 254096 WARNING nova.virt.libvirt.driver [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:28:47 compute-0 nova_compute[254092]: 2025-11-25 16:28:47.139 254096 DEBUG nova.virt.libvirt.host [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:28:47 compute-0 nova_compute[254092]: 2025-11-25 16:28:47.140 254096 DEBUG nova.virt.libvirt.host [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:28:47 compute-0 nova_compute[254092]: 2025-11-25 16:28:47.145 254096 DEBUG nova.virt.libvirt.host [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:28:47 compute-0 nova_compute[254092]: 2025-11-25 16:28:47.145 254096 DEBUG nova.virt.libvirt.host [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:28:47 compute-0 nova_compute[254092]: 2025-11-25 16:28:47.147 254096 DEBUG nova.virt.libvirt.driver [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:28:47 compute-0 nova_compute[254092]: 2025-11-25 16:28:47.147 254096 DEBUG nova.virt.hardware [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:28:47 compute-0 nova_compute[254092]: 2025-11-25 16:28:47.148 254096 DEBUG nova.virt.hardware [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:28:47 compute-0 nova_compute[254092]: 2025-11-25 16:28:47.148 254096 DEBUG nova.virt.hardware [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:28:47 compute-0 nova_compute[254092]: 2025-11-25 16:28:47.148 254096 DEBUG nova.virt.hardware [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:28:47 compute-0 nova_compute[254092]: 2025-11-25 16:28:47.148 254096 DEBUG nova.virt.hardware [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:28:47 compute-0 nova_compute[254092]: 2025-11-25 16:28:47.149 254096 DEBUG nova.virt.hardware [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:28:47 compute-0 nova_compute[254092]: 2025-11-25 16:28:47.149 254096 DEBUG nova.virt.hardware [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:28:47 compute-0 nova_compute[254092]: 2025-11-25 16:28:47.150 254096 DEBUG nova.virt.hardware [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:28:47 compute-0 nova_compute[254092]: 2025-11-25 16:28:47.150 254096 DEBUG nova.virt.hardware [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:28:47 compute-0 nova_compute[254092]: 2025-11-25 16:28:47.151 254096 DEBUG nova.virt.hardware [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:28:47 compute-0 nova_compute[254092]: 2025-11-25 16:28:47.151 254096 DEBUG nova.virt.hardware [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:28:47 compute-0 nova_compute[254092]: 2025-11-25 16:28:47.153 254096 DEBUG oslo_concurrency.processutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:28:47 compute-0 nova_compute[254092]: 2025-11-25 16:28:47.385 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:28:47 compute-0 nova_compute[254092]: 2025-11-25 16:28:47.533 254096 DEBUG oslo_concurrency.lockutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Acquiring lock "9440e9b4-329e-44cf-a489-5a0634a8aa30" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:28:47 compute-0 nova_compute[254092]: 2025-11-25 16:28:47.534 254096 DEBUG oslo_concurrency.lockutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Lock "9440e9b4-329e-44cf-a489-5a0634a8aa30" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:28:47 compute-0 nova_compute[254092]: 2025-11-25 16:28:47.552 254096 DEBUG nova.network.neutron [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Successfully created port: 7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:28:47 compute-0 nova_compute[254092]: 2025-11-25 16:28:47.557 254096 DEBUG nova.compute.manager [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:28:47 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:28:47 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3588178945' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:28:47 compute-0 nova_compute[254092]: 2025-11-25 16:28:47.618 254096 DEBUG oslo_concurrency.processutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:28:47 compute-0 nova_compute[254092]: 2025-11-25 16:28:47.642 254096 DEBUG nova.storage.rbd_utils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 3375e096-321c-459b-8b6a-e085bb62872f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:28:47 compute-0 nova_compute[254092]: 2025-11-25 16:28:47.647 254096 DEBUG oslo_concurrency.processutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:28:47 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:28:47 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Nov 25 16:28:47 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Nov 25 16:28:47 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Nov 25 16:28:47 compute-0 nova_compute[254092]: 2025-11-25 16:28:47.810 254096 DEBUG oslo_concurrency.lockutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:28:47 compute-0 nova_compute[254092]: 2025-11-25 16:28:47.812 254096 DEBUG oslo_concurrency.lockutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:28:47 compute-0 nova_compute[254092]: 2025-11-25 16:28:47.825 254096 DEBUG nova.virt.hardware [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:28:47 compute-0 nova_compute[254092]: 2025-11-25 16:28:47.825 254096 INFO nova.compute.claims [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:28:48 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3588178945' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:28:48 compute-0 ceph-mon[74985]: osdmap e155: 3 total, 3 up, 3 in
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.144 254096 DEBUG oslo_concurrency.processutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:28:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:28:48 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3618127445' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.179 254096 DEBUG oslo_concurrency.processutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.180 254096 DEBUG nova.virt.libvirt.vif [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:28:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1705426121',display_name='tempest-ServersAdminTestJSON-server-1705426121',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1705426121',id=18,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a4c9877f37b34e7fba5f3cc9642d1a48',ramdisk_id='',reservation_id='r-k4yvqw87',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-2045227182',owner_user_name='tempest-ServersAdminTestJSON-2045227182-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:28:42Z,user_data=None,user_id='a8e217b742fe4a57a4ac4ffc776670fe',uuid=3375e096-321c-459b-8b6a-e085bb62872f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.181 254096 DEBUG nova.network.os_vif_util [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converting VIF {"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.181 254096 DEBUG nova.network.os_vif_util [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:a2:8e,bridge_name='br-int',has_traffic_filtering=True,id=d6146886-91a1-4d5f-9234-e1d0154b4230,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6146886-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.183 254096 DEBUG nova.objects.instance [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3375e096-321c-459b-8b6a-e085bb62872f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.196 254096 DEBUG nova.virt.libvirt.driver [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:28:48 compute-0 nova_compute[254092]:   <uuid>3375e096-321c-459b-8b6a-e085bb62872f</uuid>
Nov 25 16:28:48 compute-0 nova_compute[254092]:   <name>instance-00000012</name>
Nov 25 16:28:48 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:28:48 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:28:48 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:28:48 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:       <nova:name>tempest-ServersAdminTestJSON-server-1705426121</nova:name>
Nov 25 16:28:48 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:28:47</nova:creationTime>
Nov 25 16:28:48 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:28:48 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:28:48 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:28:48 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:28:48 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:28:48 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:28:48 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:28:48 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:28:48 compute-0 nova_compute[254092]:         <nova:user uuid="a8e217b742fe4a57a4ac4ffc776670fe">tempest-ServersAdminTestJSON-2045227182-project-member</nova:user>
Nov 25 16:28:48 compute-0 nova_compute[254092]:         <nova:project uuid="a4c9877f37b34e7fba5f3cc9642d1a48">tempest-ServersAdminTestJSON-2045227182</nova:project>
Nov 25 16:28:48 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:28:48 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:28:48 compute-0 nova_compute[254092]:         <nova:port uuid="d6146886-91a1-4d5f-9234-e1d0154b4230">
Nov 25 16:28:48 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:28:48 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:28:48 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:28:48 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:28:48 compute-0 nova_compute[254092]:     <system>
Nov 25 16:28:48 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:28:48 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:28:48 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:28:48 compute-0 nova_compute[254092]:       <entry name="serial">3375e096-321c-459b-8b6a-e085bb62872f</entry>
Nov 25 16:28:48 compute-0 nova_compute[254092]:       <entry name="uuid">3375e096-321c-459b-8b6a-e085bb62872f</entry>
Nov 25 16:28:48 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     </system>
Nov 25 16:28:48 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:28:48 compute-0 nova_compute[254092]:   <os>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:   </os>
Nov 25 16:28:48 compute-0 nova_compute[254092]:   <features>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:   </features>
Nov 25 16:28:48 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:28:48 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:28:48 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:28:48 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:28:48 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:28:48 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/3375e096-321c-459b-8b6a-e085bb62872f_disk">
Nov 25 16:28:48 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:       </source>
Nov 25 16:28:48 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:28:48 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:28:48 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:28:48 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/3375e096-321c-459b-8b6a-e085bb62872f_disk.config">
Nov 25 16:28:48 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:       </source>
Nov 25 16:28:48 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:28:48 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:28:48 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:28:48 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:dd:a2:8e"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:       <target dev="tapd6146886-91"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:28:48 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f/console.log" append="off"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     <video>
Nov 25 16:28:48 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     </video>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:28:48 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:28:48 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:28:48 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:28:48 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:28:48 compute-0 nova_compute[254092]: </domain>
Nov 25 16:28:48 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.197 254096 DEBUG nova.compute.manager [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Preparing to wait for external event network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.197 254096 DEBUG oslo_concurrency.lockutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "3375e096-321c-459b-8b6a-e085bb62872f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.198 254096 DEBUG oslo_concurrency.lockutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.198 254096 DEBUG oslo_concurrency.lockutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.198 254096 DEBUG nova.virt.libvirt.vif [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:28:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1705426121',display_name='tempest-ServersAdminTestJSON-server-1705426121',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1705426121',id=18,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a4c9877f37b34e7fba5f3cc9642d1a48',ramdisk_id='',reservation_id='r-k4yvqw87',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-2045227182',owner_user_name='tempest-ServersAdminTestJSON-2045227182-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:28:42Z,user_data=None,user_id='a8e217b742fe4a57a4ac4ffc776670fe',uuid=3375e096-321c-459b-8b6a-e085bb62872f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.199 254096 DEBUG nova.network.os_vif_util [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converting VIF {"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.199 254096 DEBUG nova.network.os_vif_util [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:a2:8e,bridge_name='br-int',has_traffic_filtering=True,id=d6146886-91a1-4d5f-9234-e1d0154b4230,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6146886-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.200 254096 DEBUG os_vif [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:a2:8e,bridge_name='br-int',has_traffic_filtering=True,id=d6146886-91a1-4d5f-9234-e1d0154b4230,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6146886-91') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.200 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.201 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.201 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.204 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.204 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd6146886-91, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.205 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd6146886-91, col_values=(('external_ids', {'iface-id': 'd6146886-91a1-4d5f-9234-e1d0154b4230', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dd:a2:8e', 'vm-uuid': '3375e096-321c-459b-8b6a-e085bb62872f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.206 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:28:48 compute-0 NetworkManager[48891]: <info>  [1764088128.2075] manager: (tapd6146886-91): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.209 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.214 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.215 254096 INFO os_vif [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:a2:8e,bridge_name='br-int',has_traffic_filtering=True,id=d6146886-91a1-4d5f-9234-e1d0154b4230,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6146886-91')
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.288 254096 DEBUG nova.virt.libvirt.driver [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.289 254096 DEBUG nova.virt.libvirt.driver [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.289 254096 DEBUG nova.virt.libvirt.driver [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] No VIF found with MAC fa:16:3e:dd:a2:8e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.290 254096 INFO nova.virt.libvirt.driver [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Using config drive
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.314 254096 DEBUG nova.storage.rbd_utils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 3375e096-321c-459b-8b6a-e085bb62872f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.459 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088113.4494846, d5372b02-1d93-4354-8f5c-c4228e8d3ec4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.459 254096 INFO nova.compute.manager [-] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] VM Stopped (Lifecycle Event)
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.480 254096 DEBUG nova.compute.manager [None req-8339a880-b614-4487-a680-90e3683d573c - - - - - -] [instance: d5372b02-1d93-4354-8f5c-c4228e8d3ec4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:28:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:28:48 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2251603989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.587 254096 DEBUG oslo_concurrency.processutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.593 254096 DEBUG nova.compute.provider_tree [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.605 254096 DEBUG nova.scheduler.client.report [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.609 254096 DEBUG nova.network.neutron [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Successfully updated port: 7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.626 254096 DEBUG oslo_concurrency.lockutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.627 254096 DEBUG nova.compute.manager [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.629 254096 DEBUG oslo_concurrency.lockutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "refresh_cache-090ac2d7-979e-4706-8a01-5e94ab72282d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.630 254096 DEBUG oslo_concurrency.lockutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquired lock "refresh_cache-090ac2d7-979e-4706-8a01-5e94ab72282d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.630 254096 DEBUG nova.network.neutron [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.685 254096 DEBUG nova.compute.manager [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.686 254096 DEBUG nova.network.neutron [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.707 254096 INFO nova.virt.libvirt.driver [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.728 254096 DEBUG nova.compute.manager [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.842 254096 DEBUG nova.compute.manager [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.845 254096 DEBUG nova.virt.libvirt.driver [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.845 254096 INFO nova.virt.libvirt.driver [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Creating image(s)
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.871 254096 DEBUG nova.storage.rbd_utils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] rbd image 9440e9b4-329e-44cf-a489-5a0634a8aa30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:28:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1230: 321 pgs: 321 active+clean; 134 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 76 KiB/s rd, 2.7 MiB/s wr, 108 op/s
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.898 254096 DEBUG nova.storage.rbd_utils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] rbd image 9440e9b4-329e-44cf-a489-5a0634a8aa30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.926 254096 DEBUG nova.storage.rbd_utils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] rbd image 9440e9b4-329e-44cf-a489-5a0634a8aa30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.930 254096 DEBUG oslo_concurrency.processutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.959 254096 DEBUG nova.network.neutron [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.965 254096 DEBUG nova.network.neutron [req-e59a8078-a22d-4c5a-b493-f0299ebab1ed req-5ec36ee9-091c-455d-94fb-9c40a94193ee a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Updated VIF entry in instance network info cache for port d6146886-91a1-4d5f-9234-e1d0154b4230. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.966 254096 DEBUG nova.network.neutron [req-e59a8078-a22d-4c5a-b493-f0299ebab1ed req-5ec36ee9-091c-455d-94fb-9c40a94193ee a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Updating instance_info_cache with network_info: [{"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.974 254096 INFO nova.virt.libvirt.driver [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Creating config drive at /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f/disk.config
Nov 25 16:28:48 compute-0 nova_compute[254092]: 2025-11-25 16:28:48.979 254096 DEBUG oslo_concurrency.processutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqu2rbk3_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:28:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Nov 25 16:28:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Nov 25 16:28:49 compute-0 nova_compute[254092]: 2025-11-25 16:28:49.019 254096 DEBUG nova.compute.manager [req-ce6ca76b-42d9-4fd3-983d-ffc97559b6b9 req-f9536156-ebe2-49f1-889e-59f389258507 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Received event network-changed-7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:28:49 compute-0 nova_compute[254092]: 2025-11-25 16:28:49.020 254096 DEBUG nova.compute.manager [req-ce6ca76b-42d9-4fd3-983d-ffc97559b6b9 req-f9536156-ebe2-49f1-889e-59f389258507 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Refreshing instance network info cache due to event network-changed-7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:28:49 compute-0 nova_compute[254092]: 2025-11-25 16:28:49.020 254096 DEBUG oslo_concurrency.lockutils [req-ce6ca76b-42d9-4fd3-983d-ffc97559b6b9 req-f9536156-ebe2-49f1-889e-59f389258507 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-090ac2d7-979e-4706-8a01-5e94ab72282d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:28:49 compute-0 nova_compute[254092]: 2025-11-25 16:28:49.021 254096 DEBUG oslo_concurrency.processutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:28:49 compute-0 nova_compute[254092]: 2025-11-25 16:28:49.023 254096 DEBUG oslo_concurrency.lockutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:28:49 compute-0 nova_compute[254092]: 2025-11-25 16:28:49.023 254096 DEBUG oslo_concurrency.lockutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:28:49 compute-0 nova_compute[254092]: 2025-11-25 16:28:49.024 254096 DEBUG oslo_concurrency.lockutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:28:49 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Nov 25 16:28:49 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3618127445' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:28:49 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2251603989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:28:49 compute-0 ceph-mon[74985]: pgmap v1230: 321 pgs: 321 active+clean; 134 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 76 KiB/s rd, 2.7 MiB/s wr, 108 op/s
Nov 25 16:28:49 compute-0 nova_compute[254092]: 2025-11-25 16:28:49.085 254096 DEBUG nova.storage.rbd_utils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] rbd image 9440e9b4-329e-44cf-a489-5a0634a8aa30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:28:49 compute-0 nova_compute[254092]: 2025-11-25 16:28:49.092 254096 DEBUG oslo_concurrency.processutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 9440e9b4-329e-44cf-a489-5a0634a8aa30_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:28:49 compute-0 nova_compute[254092]: 2025-11-25 16:28:49.125 254096 DEBUG oslo_concurrency.processutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqu2rbk3_" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:28:49 compute-0 nova_compute[254092]: 2025-11-25 16:28:49.127 254096 DEBUG oslo_concurrency.lockutils [req-e59a8078-a22d-4c5a-b493-f0299ebab1ed req-5ec36ee9-091c-455d-94fb-9c40a94193ee a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-3375e096-321c-459b-8b6a-e085bb62872f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:28:49 compute-0 nova_compute[254092]: 2025-11-25 16:28:49.154 254096 DEBUG nova.storage.rbd_utils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 3375e096-321c-459b-8b6a-e085bb62872f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:28:49 compute-0 nova_compute[254092]: 2025-11-25 16:28:49.158 254096 DEBUG oslo_concurrency.processutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f/disk.config 3375e096-321c-459b-8b6a-e085bb62872f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:28:49 compute-0 nova_compute[254092]: 2025-11-25 16:28:49.195 254096 DEBUG nova.network.neutron [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 25 16:28:49 compute-0 nova_compute[254092]: 2025-11-25 16:28:49.196 254096 DEBUG nova.compute.manager [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:28:49 compute-0 nova_compute[254092]: 2025-11-25 16:28:49.396 254096 DEBUG oslo_concurrency.processutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 9440e9b4-329e-44cf-a489-5a0634a8aa30_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.304s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:28:49 compute-0 nova_compute[254092]: 2025-11-25 16:28:49.435 254096 DEBUG oslo_concurrency.processutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f/disk.config 3375e096-321c-459b-8b6a-e085bb62872f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.277s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:28:49 compute-0 nova_compute[254092]: 2025-11-25 16:28:49.437 254096 INFO nova.virt.libvirt.driver [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Deleting local config drive /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f/disk.config because it was imported into RBD.
Nov 25 16:28:49 compute-0 nova_compute[254092]: 2025-11-25 16:28:49.483 254096 DEBUG nova.storage.rbd_utils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] resizing rbd image 9440e9b4-329e-44cf-a489-5a0634a8aa30_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:28:49 compute-0 kernel: tapd6146886-91: entered promiscuous mode
Nov 25 16:28:49 compute-0 NetworkManager[48891]: <info>  [1764088129.4888] manager: (tapd6146886-91): new Tun device (/org/freedesktop/NetworkManager/Devices/41)
Nov 25 16:28:49 compute-0 ovn_controller[153477]: 2025-11-25T16:28:49Z|00055|binding|INFO|Claiming lport d6146886-91a1-4d5f-9234-e1d0154b4230 for this chassis.
Nov 25 16:28:49 compute-0 ovn_controller[153477]: 2025-11-25T16:28:49Z|00056|binding|INFO|d6146886-91a1-4d5f-9234-e1d0154b4230: Claiming fa:16:3e:dd:a2:8e 10.100.0.12
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.512 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:a2:8e 10.100.0.12'], port_security=['fa:16:3e:dd:a2:8e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '3375e096-321c-459b-8b6a-e085bb62872f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a4c9877f37b34e7fba5f3cc9642d1a48', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2633f0b7-0e94-4653-a67a-e645ec4a69fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06f31098-96f7-4b9c-ab6f-85e953b1eed6, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=d6146886-91a1-4d5f-9234-e1d0154b4230) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.513 163338 INFO neutron.agent.ovn.metadata.agent [-] Port d6146886-91a1-4d5f-9234-e1d0154b4230 in datapath 3403825e-13ff-43e0-80c4-b59cf23ed30b bound to our chassis
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.515 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3403825e-13ff-43e0-80c4-b59cf23ed30b
Nov 25 16:28:49 compute-0 systemd-udevd[282287]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:28:49 compute-0 systemd-machined[216343]: New machine qemu-20-instance-00000012.
Nov 25 16:28:49 compute-0 NetworkManager[48891]: <info>  [1764088129.5321] device (tapd6146886-91): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.531 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3eb04495-1861-4502-abc8-1268ee1fa644]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:28:49 compute-0 NetworkManager[48891]: <info>  [1764088129.5342] device (tapd6146886-91): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.534 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3403825e-11 in ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:28:49 compute-0 nova_compute[254092]: 2025-11-25 16:28:49.535 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.539 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3403825e-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.539 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8fc8d25d-c0d5-49c3-b789-3541786e7454]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.541 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b734f031-1255-41c7-b4b6-78076a8daaf8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:28:49 compute-0 systemd[1]: Started Virtual Machine qemu-20-instance-00000012.
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.558 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[0da5b2c2-1af9-4e8e-8b2d-4f8c47d8ec3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:28:49 compute-0 ovn_controller[153477]: 2025-11-25T16:28:49Z|00057|binding|INFO|Setting lport d6146886-91a1-4d5f-9234-e1d0154b4230 ovn-installed in OVS
Nov 25 16:28:49 compute-0 ovn_controller[153477]: 2025-11-25T16:28:49Z|00058|binding|INFO|Setting lport d6146886-91a1-4d5f-9234-e1d0154b4230 up in Southbound
Nov 25 16:28:49 compute-0 nova_compute[254092]: 2025-11-25 16:28:49.588 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.588 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[72275494-5be3-4260-b3e0-0ba070847a44]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.626 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c1439f47-463f-4880-a584-268247f180b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:28:49 compute-0 NetworkManager[48891]: <info>  [1764088129.6348] manager: (tap3403825e-10): new Veth device (/org/freedesktop/NetworkManager/Devices/42)
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.635 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d039af9f-ab38-429c-8db6-80ac33bd637d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:28:49 compute-0 nova_compute[254092]: 2025-11-25 16:28:49.642 254096 DEBUG nova.objects.instance [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Lazy-loading 'migration_context' on Instance uuid 9440e9b4-329e-44cf-a489-5a0634a8aa30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:28:49 compute-0 nova_compute[254092]: 2025-11-25 16:28:49.661 254096 DEBUG nova.virt.libvirt.driver [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:28:49 compute-0 nova_compute[254092]: 2025-11-25 16:28:49.661 254096 DEBUG nova.virt.libvirt.driver [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Ensure instance console log exists: /var/lib/nova/instances/9440e9b4-329e-44cf-a489-5a0634a8aa30/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:28:49 compute-0 nova_compute[254092]: 2025-11-25 16:28:49.662 254096 DEBUG oslo_concurrency.lockutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:28:49 compute-0 nova_compute[254092]: 2025-11-25 16:28:49.662 254096 DEBUG oslo_concurrency.lockutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:28:49 compute-0 nova_compute[254092]: 2025-11-25 16:28:49.662 254096 DEBUG oslo_concurrency.lockutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:28:49 compute-0 nova_compute[254092]: 2025-11-25 16:28:49.663 254096 DEBUG nova.virt.libvirt.driver [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.669 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a6d52f58-b542-43a0-9943-534643773076]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.671 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e02d8bc0-b192-4800-9377-7a82c3def111]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:28:49 compute-0 nova_compute[254092]: 2025-11-25 16:28:49.672 254096 WARNING nova.virt.libvirt.driver [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:28:49 compute-0 nova_compute[254092]: 2025-11-25 16:28:49.678 254096 DEBUG nova.virt.libvirt.host [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:28:49 compute-0 nova_compute[254092]: 2025-11-25 16:28:49.679 254096 DEBUG nova.virt.libvirt.host [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:28:49 compute-0 nova_compute[254092]: 2025-11-25 16:28:49.682 254096 DEBUG nova.virt.libvirt.host [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:28:49 compute-0 nova_compute[254092]: 2025-11-25 16:28:49.682 254096 DEBUG nova.virt.libvirt.host [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:28:49 compute-0 nova_compute[254092]: 2025-11-25 16:28:49.683 254096 DEBUG nova.virt.libvirt.driver [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:28:49 compute-0 nova_compute[254092]: 2025-11-25 16:28:49.683 254096 DEBUG nova.virt.hardware [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:28:49 compute-0 nova_compute[254092]: 2025-11-25 16:28:49.683 254096 DEBUG nova.virt.hardware [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:28:49 compute-0 nova_compute[254092]: 2025-11-25 16:28:49.683 254096 DEBUG nova.virt.hardware [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:28:49 compute-0 nova_compute[254092]: 2025-11-25 16:28:49.683 254096 DEBUG nova.virt.hardware [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:28:49 compute-0 nova_compute[254092]: 2025-11-25 16:28:49.684 254096 DEBUG nova.virt.hardware [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:28:49 compute-0 nova_compute[254092]: 2025-11-25 16:28:49.684 254096 DEBUG nova.virt.hardware [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:28:49 compute-0 nova_compute[254092]: 2025-11-25 16:28:49.684 254096 DEBUG nova.virt.hardware [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:28:49 compute-0 nova_compute[254092]: 2025-11-25 16:28:49.684 254096 DEBUG nova.virt.hardware [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:28:49 compute-0 nova_compute[254092]: 2025-11-25 16:28:49.684 254096 DEBUG nova.virt.hardware [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:28:49 compute-0 nova_compute[254092]: 2025-11-25 16:28:49.684 254096 DEBUG nova.virt.hardware [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:28:49 compute-0 nova_compute[254092]: 2025-11-25 16:28:49.685 254096 DEBUG nova.virt.hardware [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:28:49 compute-0 nova_compute[254092]: 2025-11-25 16:28:49.687 254096 DEBUG oslo_concurrency.processutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:28:49 compute-0 NetworkManager[48891]: <info>  [1764088129.6958] device (tap3403825e-10): carrier: link connected
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.701 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[22c74b09-e1dc-4718-9367-106b2fcd143b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.718 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d5c7f5f5-0f36-4008-95fd-8b26b413e462]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3403825e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:44:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458728, 'reachable_time': 39709, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282343, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.732 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[73c7e927-d04b-489a-8e59-3b9013342651]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe58:44ea'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458728, 'tstamp': 458728}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282344, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.749 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3ae2168b-061e-4e7d-87a7-85fef4bd5e6f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3403825e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:44:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458728, 'reachable_time': 39709, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 282345, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.781 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ae5dce62-87b6-4168-86db-5587922bd4dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.839 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bc824152-268b-48c4-bf44-a25fa685fcb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.841 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3403825e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.841 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.842 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3403825e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:28:49 compute-0 kernel: tap3403825e-10: entered promiscuous mode
Nov 25 16:28:49 compute-0 NetworkManager[48891]: <info>  [1764088129.8449] manager: (tap3403825e-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Nov 25 16:28:49 compute-0 nova_compute[254092]: 2025-11-25 16:28:49.845 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.846 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3403825e-10, col_values=(('external_ids', {'iface-id': 'e9e1418a-32be-4b0e-81c1-ad489d6a782b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:28:49 compute-0 ovn_controller[153477]: 2025-11-25T16:28:49Z|00059|binding|INFO|Releasing lport e9e1418a-32be-4b0e-81c1-ad489d6a782b from this chassis (sb_readonly=0)
Nov 25 16:28:49 compute-0 nova_compute[254092]: 2025-11-25 16:28:49.865 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.866 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3403825e-13ff-43e0-80c4-b59cf23ed30b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3403825e-13ff-43e0-80c4-b59cf23ed30b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.867 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7040131c-8455-4880-a3bb-74cd656f2633]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.868 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-3403825e-13ff-43e0-80c4-b59cf23ed30b
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/3403825e-13ff-43e0-80c4-b59cf23ed30b.pid.haproxy
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 3403825e-13ff-43e0-80c4-b59cf23ed30b
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:28:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:28:49.868 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'env', 'PROCESS_TAG=haproxy-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3403825e-13ff-43e0-80c4-b59cf23ed30b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:28:50 compute-0 ceph-mon[74985]: osdmap e156: 3 total, 3 up, 3 in
Nov 25 16:28:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:28:50 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/77643635' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:28:50 compute-0 nova_compute[254092]: 2025-11-25 16:28:50.146 254096 DEBUG oslo_concurrency.processutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:28:50 compute-0 nova_compute[254092]: 2025-11-25 16:28:50.176 254096 DEBUG nova.storage.rbd_utils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] rbd image 9440e9b4-329e-44cf-a489-5a0634a8aa30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:28:50 compute-0 nova_compute[254092]: 2025-11-25 16:28:50.181 254096 DEBUG oslo_concurrency.processutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:28:50 compute-0 nova_compute[254092]: 2025-11-25 16:28:50.209 254096 DEBUG nova.network.neutron [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Updating instance_info_cache with network_info: [{"id": "7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c", "address": "fa:16:3e:d8:5a:58", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c4d5f4d-36", "ovs_interfaceid": "7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:28:50 compute-0 nova_compute[254092]: 2025-11-25 16:28:50.236 254096 DEBUG oslo_concurrency.lockutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Releasing lock "refresh_cache-090ac2d7-979e-4706-8a01-5e94ab72282d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:28:50 compute-0 nova_compute[254092]: 2025-11-25 16:28:50.236 254096 DEBUG nova.compute.manager [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Instance network_info: |[{"id": "7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c", "address": "fa:16:3e:d8:5a:58", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c4d5f4d-36", "ovs_interfaceid": "7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:28:50 compute-0 nova_compute[254092]: 2025-11-25 16:28:50.237 254096 DEBUG oslo_concurrency.lockutils [req-ce6ca76b-42d9-4fd3-983d-ffc97559b6b9 req-f9536156-ebe2-49f1-889e-59f389258507 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-090ac2d7-979e-4706-8a01-5e94ab72282d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:28:50 compute-0 nova_compute[254092]: 2025-11-25 16:28:50.238 254096 DEBUG nova.network.neutron [req-ce6ca76b-42d9-4fd3-983d-ffc97559b6b9 req-f9536156-ebe2-49f1-889e-59f389258507 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Refreshing network info cache for port 7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:28:50 compute-0 nova_compute[254092]: 2025-11-25 16:28:50.241 254096 DEBUG nova.virt.libvirt.driver [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Start _get_guest_xml network_info=[{"id": "7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c", "address": "fa:16:3e:d8:5a:58", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c4d5f4d-36", "ovs_interfaceid": "7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:28:50 compute-0 nova_compute[254092]: 2025-11-25 16:28:50.248 254096 WARNING nova.virt.libvirt.driver [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:28:50 compute-0 podman[282412]: 2025-11-25 16:28:50.251361218 +0000 UTC m=+0.067200358 container create a2e222c7ca6d5366ef9e3702eaf60190c97ef394179be18eea895e77b7e982eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 16:28:50 compute-0 nova_compute[254092]: 2025-11-25 16:28:50.252 254096 DEBUG nova.virt.libvirt.host [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:28:50 compute-0 nova_compute[254092]: 2025-11-25 16:28:50.254 254096 DEBUG nova.virt.libvirt.host [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:28:50 compute-0 nova_compute[254092]: 2025-11-25 16:28:50.266 254096 DEBUG nova.virt.libvirt.host [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:28:50 compute-0 nova_compute[254092]: 2025-11-25 16:28:50.267 254096 DEBUG nova.virt.libvirt.host [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:28:50 compute-0 nova_compute[254092]: 2025-11-25 16:28:50.268 254096 DEBUG nova.virt.libvirt.driver [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:28:50 compute-0 nova_compute[254092]: 2025-11-25 16:28:50.268 254096 DEBUG nova.virt.hardware [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:28:50 compute-0 nova_compute[254092]: 2025-11-25 16:28:50.269 254096 DEBUG nova.virt.hardware [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:28:50 compute-0 nova_compute[254092]: 2025-11-25 16:28:50.269 254096 DEBUG nova.virt.hardware [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:28:50 compute-0 nova_compute[254092]: 2025-11-25 16:28:50.270 254096 DEBUG nova.virt.hardware [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:28:50 compute-0 nova_compute[254092]: 2025-11-25 16:28:50.270 254096 DEBUG nova.virt.hardware [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:28:50 compute-0 nova_compute[254092]: 2025-11-25 16:28:50.270 254096 DEBUG nova.virt.hardware [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:28:50 compute-0 nova_compute[254092]: 2025-11-25 16:28:50.271 254096 DEBUG nova.virt.hardware [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:28:50 compute-0 nova_compute[254092]: 2025-11-25 16:28:50.271 254096 DEBUG nova.virt.hardware [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:28:50 compute-0 nova_compute[254092]: 2025-11-25 16:28:50.271 254096 DEBUG nova.virt.hardware [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:28:50 compute-0 nova_compute[254092]: 2025-11-25 16:28:50.271 254096 DEBUG nova.virt.hardware [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:28:50 compute-0 nova_compute[254092]: 2025-11-25 16:28:50.272 254096 DEBUG nova.virt.hardware [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:28:50 compute-0 nova_compute[254092]: 2025-11-25 16:28:50.275 254096 DEBUG oslo_concurrency.processutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:28:50 compute-0 podman[282412]: 2025-11-25 16:28:50.217932689 +0000 UTC m=+0.033771849 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:28:50 compute-0 systemd[1]: Started libpod-conmon-a2e222c7ca6d5366ef9e3702eaf60190c97ef394179be18eea895e77b7e982eb.scope.
Nov 25 16:28:50 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:28:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e493ef27e1af43f842f739b06d394ae37970231d4264741926f2efe0d296ab9a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:28:50 compute-0 podman[282412]: 2025-11-25 16:28:50.381987241 +0000 UTC m=+0.197826381 container init a2e222c7ca6d5366ef9e3702eaf60190c97ef394179be18eea895e77b7e982eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 25 16:28:50 compute-0 podman[282412]: 2025-11-25 16:28:50.395856188 +0000 UTC m=+0.211695328 container start a2e222c7ca6d5366ef9e3702eaf60190c97ef394179be18eea895e77b7e982eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 25 16:28:50 compute-0 neutron-haproxy-ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b[282450]: [NOTICE]   (282466) : New worker (282492) forked
Nov 25 16:28:50 compute-0 neutron-haproxy-ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b[282450]: [NOTICE]   (282466) : Loading success.
Nov 25 16:28:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:28:50 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/821084128' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:28:50 compute-0 nova_compute[254092]: 2025-11-25 16:28:50.633 254096 DEBUG oslo_concurrency.processutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:28:50 compute-0 nova_compute[254092]: 2025-11-25 16:28:50.636 254096 DEBUG nova.objects.instance [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Lazy-loading 'pci_devices' on Instance uuid 9440e9b4-329e-44cf-a489-5a0634a8aa30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:28:50 compute-0 nova_compute[254092]: 2025-11-25 16:28:50.662 254096 DEBUG nova.virt.libvirt.driver [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:28:50 compute-0 nova_compute[254092]:   <uuid>9440e9b4-329e-44cf-a489-5a0634a8aa30</uuid>
Nov 25 16:28:50 compute-0 nova_compute[254092]:   <name>instance-00000014</name>
Nov 25 16:28:50 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:28:50 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:28:50 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:28:50 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:       <nova:name>tempest-TenantUsagesTestJSON-server-39150975</nova:name>
Nov 25 16:28:50 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:28:49</nova:creationTime>
Nov 25 16:28:50 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:28:50 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:28:50 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:28:50 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:28:50 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:28:50 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:28:50 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:28:50 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:28:50 compute-0 nova_compute[254092]:         <nova:user uuid="06ffced0e9004a60b3fe2f455857f494">tempest-TenantUsagesTestJSON-707754235-project-member</nova:user>
Nov 25 16:28:50 compute-0 nova_compute[254092]:         <nova:project uuid="99f704e0b6c04f648538e8070f335bdc">tempest-TenantUsagesTestJSON-707754235</nova:project>
Nov 25 16:28:50 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:28:50 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:       <nova:ports/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:28:50 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:28:50 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:28:50 compute-0 nova_compute[254092]:     <system>
Nov 25 16:28:50 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:28:50 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:28:50 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:28:50 compute-0 nova_compute[254092]:       <entry name="serial">9440e9b4-329e-44cf-a489-5a0634a8aa30</entry>
Nov 25 16:28:50 compute-0 nova_compute[254092]:       <entry name="uuid">9440e9b4-329e-44cf-a489-5a0634a8aa30</entry>
Nov 25 16:28:50 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     </system>
Nov 25 16:28:50 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:28:50 compute-0 nova_compute[254092]:   <os>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:   </os>
Nov 25 16:28:50 compute-0 nova_compute[254092]:   <features>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:   </features>
Nov 25 16:28:50 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:28:50 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:28:50 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:28:50 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:28:50 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:28:50 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/9440e9b4-329e-44cf-a489-5a0634a8aa30_disk">
Nov 25 16:28:50 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:       </source>
Nov 25 16:28:50 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:28:50 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:28:50 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:28:50 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/9440e9b4-329e-44cf-a489-5a0634a8aa30_disk.config">
Nov 25 16:28:50 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:       </source>
Nov 25 16:28:50 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:28:50 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:28:50 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:28:50 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/9440e9b4-329e-44cf-a489-5a0634a8aa30/console.log" append="off"/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     <video>
Nov 25 16:28:50 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     </video>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:28:50 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:28:50 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:28:50 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:28:50 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:28:50 compute-0 nova_compute[254092]: </domain>
Nov 25 16:28:50 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:28:50 compute-0 nova_compute[254092]: 2025-11-25 16:28:50.666 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088130.6634674, 3375e096-321c-459b-8b6a-e085bb62872f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:28:50 compute-0 nova_compute[254092]: 2025-11-25 16:28:50.667 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] VM Started (Lifecycle Event)
Nov 25 16:28:50 compute-0 nova_compute[254092]: 2025-11-25 16:28:50.686 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:28:50 compute-0 nova_compute[254092]: 2025-11-25 16:28:50.691 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088130.6637552, 3375e096-321c-459b-8b6a-e085bb62872f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:28:50 compute-0 nova_compute[254092]: 2025-11-25 16:28:50.691 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] VM Paused (Lifecycle Event)
Nov 25 16:28:50 compute-0 nova_compute[254092]: 2025-11-25 16:28:50.709 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:28:50 compute-0 nova_compute[254092]: 2025-11-25 16:28:50.715 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:28:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:28:50 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2494545252' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:28:50 compute-0 nova_compute[254092]: 2025-11-25 16:28:50.732 254096 DEBUG nova.virt.libvirt.driver [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:28:50 compute-0 nova_compute[254092]: 2025-11-25 16:28:50.733 254096 DEBUG nova.virt.libvirt.driver [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:28:50 compute-0 nova_compute[254092]: 2025-11-25 16:28:50.733 254096 INFO nova.virt.libvirt.driver [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Using config drive
Nov 25 16:28:50 compute-0 nova_compute[254092]: 2025-11-25 16:28:50.753 254096 DEBUG nova.storage.rbd_utils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] rbd image 9440e9b4-329e-44cf-a489-5a0634a8aa30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:28:50 compute-0 nova_compute[254092]: 2025-11-25 16:28:50.758 254096 DEBUG oslo_concurrency.processutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:28:50 compute-0 nova_compute[254092]: 2025-11-25 16:28:50.780 254096 DEBUG nova.storage.rbd_utils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 090ac2d7-979e-4706-8a01-5e94ab72282d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:28:50 compute-0 nova_compute[254092]: 2025-11-25 16:28:50.784 254096 DEBUG oslo_concurrency.processutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:28:50 compute-0 nova_compute[254092]: 2025-11-25 16:28:50.811 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:28:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1232: 321 pgs: 321 active+clean; 159 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 6.6 MiB/s wr, 166 op/s
Nov 25 16:28:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Nov 25 16:28:51 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/77643635' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:28:51 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/821084128' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:28:51 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2494545252' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:28:51 compute-0 ceph-mon[74985]: pgmap v1232: 321 pgs: 321 active+clean; 159 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 6.6 MiB/s wr, 166 op/s
Nov 25 16:28:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:28:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:28:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:28:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:28:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0010402803892361789 of space, bias 1.0, pg target 0.3120841167708537 quantized to 32 (current 32)
Nov 25 16:28:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:28:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:28:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:28:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:28:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:28:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006663034365435958 of space, bias 1.0, pg target 0.19989103096307873 quantized to 32 (current 32)
Nov 25 16:28:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:28:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 16:28:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:28:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:28:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:28:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 16:28:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:28:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 16:28:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:28:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:28:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:28:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 16:28:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Nov 25 16:28:51 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Nov 25 16:28:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:28:51 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1258874472' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:28:51 compute-0 nova_compute[254092]: 2025-11-25 16:28:51.238 254096 DEBUG oslo_concurrency.processutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:28:51 compute-0 nova_compute[254092]: 2025-11-25 16:28:51.241 254096 DEBUG nova.virt.libvirt.vif [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:28:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-391611256',display_name='tempest-ServersAdminTestJSON-server-391611256',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-391611256',id=19,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a4c9877f37b34e7fba5f3cc9642d1a48',ramdisk_id='',reservation_id='r-6weq4dsu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-2045227182',owner_user_name='tempest-ServersAdminTestJSON-2045227182-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:28:45Z,user_data=None,user_id='a8e217b742fe4a57a4ac4ffc776670fe',uuid=090ac2d7-979e-4706-8a01-5e94ab72282d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c", "address": "fa:16:3e:d8:5a:58", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c4d5f4d-36", "ovs_interfaceid": "7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:28:51 compute-0 nova_compute[254092]: 2025-11-25 16:28:51.241 254096 DEBUG nova.network.os_vif_util [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converting VIF {"id": "7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c", "address": "fa:16:3e:d8:5a:58", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c4d5f4d-36", "ovs_interfaceid": "7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:28:51 compute-0 nova_compute[254092]: 2025-11-25 16:28:51.242 254096 DEBUG nova.network.os_vif_util [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:5a:58,bridge_name='br-int',has_traffic_filtering=True,id=7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c4d5f4d-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:28:51 compute-0 nova_compute[254092]: 2025-11-25 16:28:51.244 254096 DEBUG nova.objects.instance [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'pci_devices' on Instance uuid 090ac2d7-979e-4706-8a01-5e94ab72282d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:28:51 compute-0 nova_compute[254092]: 2025-11-25 16:28:51.270 254096 DEBUG nova.virt.libvirt.driver [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:28:51 compute-0 nova_compute[254092]:   <uuid>090ac2d7-979e-4706-8a01-5e94ab72282d</uuid>
Nov 25 16:28:51 compute-0 nova_compute[254092]:   <name>instance-00000013</name>
Nov 25 16:28:51 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:28:51 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:28:51 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:28:51 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:       <nova:name>tempest-ServersAdminTestJSON-server-391611256</nova:name>
Nov 25 16:28:51 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:28:50</nova:creationTime>
Nov 25 16:28:51 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:28:51 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:28:51 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:28:51 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:28:51 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:28:51 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:28:51 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:28:51 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:28:51 compute-0 nova_compute[254092]:         <nova:user uuid="a8e217b742fe4a57a4ac4ffc776670fe">tempest-ServersAdminTestJSON-2045227182-project-member</nova:user>
Nov 25 16:28:51 compute-0 nova_compute[254092]:         <nova:project uuid="a4c9877f37b34e7fba5f3cc9642d1a48">tempest-ServersAdminTestJSON-2045227182</nova:project>
Nov 25 16:28:51 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:28:51 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:28:51 compute-0 nova_compute[254092]:         <nova:port uuid="7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c">
Nov 25 16:28:51 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:28:51 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:28:51 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:28:51 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:28:51 compute-0 nova_compute[254092]:     <system>
Nov 25 16:28:51 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:28:51 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:28:51 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:28:51 compute-0 nova_compute[254092]:       <entry name="serial">090ac2d7-979e-4706-8a01-5e94ab72282d</entry>
Nov 25 16:28:51 compute-0 nova_compute[254092]:       <entry name="uuid">090ac2d7-979e-4706-8a01-5e94ab72282d</entry>
Nov 25 16:28:51 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     </system>
Nov 25 16:28:51 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:28:51 compute-0 nova_compute[254092]:   <os>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:   </os>
Nov 25 16:28:51 compute-0 nova_compute[254092]:   <features>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:   </features>
Nov 25 16:28:51 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:28:51 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:28:51 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:28:51 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:28:51 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:28:51 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/090ac2d7-979e-4706-8a01-5e94ab72282d_disk">
Nov 25 16:28:51 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:       </source>
Nov 25 16:28:51 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:28:51 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:28:51 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:28:51 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/090ac2d7-979e-4706-8a01-5e94ab72282d_disk.config">
Nov 25 16:28:51 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:       </source>
Nov 25 16:28:51 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:28:51 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:28:51 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:28:51 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:d8:5a:58"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:       <target dev="tap7c4d5f4d-36"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:28:51 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/090ac2d7-979e-4706-8a01-5e94ab72282d/console.log" append="off"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     <video>
Nov 25 16:28:51 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     </video>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:28:51 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:28:51 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:28:51 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:28:51 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:28:51 compute-0 nova_compute[254092]: </domain>
Nov 25 16:28:51 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:28:51 compute-0 nova_compute[254092]: 2025-11-25 16:28:51.272 254096 DEBUG nova.compute.manager [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Preparing to wait for external event network-vif-plugged-7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:28:51 compute-0 nova_compute[254092]: 2025-11-25 16:28:51.272 254096 DEBUG oslo_concurrency.lockutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "090ac2d7-979e-4706-8a01-5e94ab72282d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:28:51 compute-0 nova_compute[254092]: 2025-11-25 16:28:51.273 254096 DEBUG oslo_concurrency.lockutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "090ac2d7-979e-4706-8a01-5e94ab72282d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:28:51 compute-0 nova_compute[254092]: 2025-11-25 16:28:51.273 254096 DEBUG oslo_concurrency.lockutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "090ac2d7-979e-4706-8a01-5e94ab72282d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:28:51 compute-0 nova_compute[254092]: 2025-11-25 16:28:51.274 254096 DEBUG nova.virt.libvirt.vif [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:28:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-391611256',display_name='tempest-ServersAdminTestJSON-server-391611256',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-391611256',id=19,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a4c9877f37b34e7fba5f3cc9642d1a48',ramdisk_id='',reservation_id='r-6weq4dsu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-2045227182',owner_user_name='tempest-ServersAdminTestJSON-2045227182-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:28:45Z,user_data=None,user_id='a8e217b742fe4a57a4ac4ffc776670fe',uuid=090ac2d7-979e-4706-8a01-5e94ab72282d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c", "address": "fa:16:3e:d8:5a:58", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c4d5f4d-36", "ovs_interfaceid": "7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:28:51 compute-0 nova_compute[254092]: 2025-11-25 16:28:51.274 254096 DEBUG nova.network.os_vif_util [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converting VIF {"id": "7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c", "address": "fa:16:3e:d8:5a:58", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c4d5f4d-36", "ovs_interfaceid": "7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:28:51 compute-0 nova_compute[254092]: 2025-11-25 16:28:51.275 254096 DEBUG nova.network.os_vif_util [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:5a:58,bridge_name='br-int',has_traffic_filtering=True,id=7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c4d5f4d-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:28:51 compute-0 nova_compute[254092]: 2025-11-25 16:28:51.275 254096 DEBUG os_vif [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:5a:58,bridge_name='br-int',has_traffic_filtering=True,id=7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c4d5f4d-36') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:28:51 compute-0 nova_compute[254092]: 2025-11-25 16:28:51.276 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:28:51 compute-0 nova_compute[254092]: 2025-11-25 16:28:51.276 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:28:51 compute-0 nova_compute[254092]: 2025-11-25 16:28:51.277 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:28:51 compute-0 nova_compute[254092]: 2025-11-25 16:28:51.280 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:28:51 compute-0 nova_compute[254092]: 2025-11-25 16:28:51.281 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c4d5f4d-36, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:28:51 compute-0 nova_compute[254092]: 2025-11-25 16:28:51.281 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7c4d5f4d-36, col_values=(('external_ids', {'iface-id': '7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d8:5a:58', 'vm-uuid': '090ac2d7-979e-4706-8a01-5e94ab72282d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:28:51 compute-0 nova_compute[254092]: 2025-11-25 16:28:51.283 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:28:51 compute-0 NetworkManager[48891]: <info>  [1764088131.2841] manager: (tap7c4d5f4d-36): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Nov 25 16:28:51 compute-0 nova_compute[254092]: 2025-11-25 16:28:51.285 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:28:51 compute-0 nova_compute[254092]: 2025-11-25 16:28:51.291 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:28:51 compute-0 nova_compute[254092]: 2025-11-25 16:28:51.292 254096 INFO os_vif [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:5a:58,bridge_name='br-int',has_traffic_filtering=True,id=7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c4d5f4d-36')
Nov 25 16:28:51 compute-0 nova_compute[254092]: 2025-11-25 16:28:51.333 254096 DEBUG nova.virt.libvirt.driver [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:28:51 compute-0 nova_compute[254092]: 2025-11-25 16:28:51.334 254096 DEBUG nova.virt.libvirt.driver [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:28:51 compute-0 nova_compute[254092]: 2025-11-25 16:28:51.334 254096 DEBUG nova.virt.libvirt.driver [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] No VIF found with MAC fa:16:3e:d8:5a:58, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:28:51 compute-0 nova_compute[254092]: 2025-11-25 16:28:51.335 254096 INFO nova.virt.libvirt.driver [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Using config drive
Nov 25 16:28:51 compute-0 nova_compute[254092]: 2025-11-25 16:28:51.358 254096 DEBUG nova.storage.rbd_utils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 090ac2d7-979e-4706-8a01-5e94ab72282d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:28:51 compute-0 nova_compute[254092]: 2025-11-25 16:28:51.371 254096 INFO nova.virt.libvirt.driver [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Creating config drive at /var/lib/nova/instances/9440e9b4-329e-44cf-a489-5a0634a8aa30/disk.config
Nov 25 16:28:51 compute-0 nova_compute[254092]: 2025-11-25 16:28:51.376 254096 DEBUG oslo_concurrency.processutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9440e9b4-329e-44cf-a489-5a0634a8aa30/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqz72wcln execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:28:51 compute-0 nova_compute[254092]: 2025-11-25 16:28:51.518 254096 DEBUG oslo_concurrency.processutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9440e9b4-329e-44cf-a489-5a0634a8aa30/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqz72wcln" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:28:51 compute-0 nova_compute[254092]: 2025-11-25 16:28:51.549 254096 DEBUG nova.storage.rbd_utils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] rbd image 9440e9b4-329e-44cf-a489-5a0634a8aa30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:28:51 compute-0 nova_compute[254092]: 2025-11-25 16:28:51.554 254096 DEBUG oslo_concurrency.processutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9440e9b4-329e-44cf-a489-5a0634a8aa30/disk.config 9440e9b4-329e-44cf-a489-5a0634a8aa30_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:28:51 compute-0 nova_compute[254092]: 2025-11-25 16:28:51.587 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:28:51 compute-0 podman[282634]: 2025-11-25 16:28:51.65496924 +0000 UTC m=+0.058282666 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:28:51 compute-0 podman[282632]: 2025-11-25 16:28:51.658628339 +0000 UTC m=+0.068271187 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:28:51 compute-0 podman[282635]: 2025-11-25 16:28:51.692752927 +0000 UTC m=+0.096525005 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 16:28:51 compute-0 nova_compute[254092]: 2025-11-25 16:28:51.714 254096 DEBUG oslo_concurrency.processutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9440e9b4-329e-44cf-a489-5a0634a8aa30/disk.config 9440e9b4-329e-44cf-a489-5a0634a8aa30_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:28:51 compute-0 nova_compute[254092]: 2025-11-25 16:28:51.715 254096 INFO nova.virt.libvirt.driver [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Deleting local config drive /var/lib/nova/instances/9440e9b4-329e-44cf-a489-5a0634a8aa30/disk.config because it was imported into RBD.
Nov 25 16:28:51 compute-0 systemd-machined[216343]: New machine qemu-21-instance-00000014.
Nov 25 16:28:51 compute-0 systemd[1]: Started Virtual Machine qemu-21-instance-00000014.
Nov 25 16:28:52 compute-0 nova_compute[254092]: 2025-11-25 16:28:52.050 254096 INFO nova.virt.libvirt.driver [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Creating config drive at /var/lib/nova/instances/090ac2d7-979e-4706-8a01-5e94ab72282d/disk.config
Nov 25 16:28:52 compute-0 nova_compute[254092]: 2025-11-25 16:28:52.054 254096 DEBUG oslo_concurrency.processutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/090ac2d7-979e-4706-8a01-5e94ab72282d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoi78vhy9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:28:52 compute-0 ceph-mon[74985]: osdmap e157: 3 total, 3 up, 3 in
Nov 25 16:28:52 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1258874472' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:28:52 compute-0 nova_compute[254092]: 2025-11-25 16:28:52.094 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088132.0935538, 9440e9b4-329e-44cf-a489-5a0634a8aa30 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:28:52 compute-0 nova_compute[254092]: 2025-11-25 16:28:52.094 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] VM Resumed (Lifecycle Event)
Nov 25 16:28:52 compute-0 nova_compute[254092]: 2025-11-25 16:28:52.097 254096 DEBUG nova.compute.manager [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:28:52 compute-0 nova_compute[254092]: 2025-11-25 16:28:52.097 254096 DEBUG nova.virt.libvirt.driver [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:28:52 compute-0 nova_compute[254092]: 2025-11-25 16:28:52.101 254096 INFO nova.virt.libvirt.driver [-] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Instance spawned successfully.
Nov 25 16:28:52 compute-0 nova_compute[254092]: 2025-11-25 16:28:52.101 254096 DEBUG nova.virt.libvirt.driver [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:28:52 compute-0 nova_compute[254092]: 2025-11-25 16:28:52.117 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:28:52 compute-0 nova_compute[254092]: 2025-11-25 16:28:52.123 254096 DEBUG nova.virt.libvirt.driver [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:28:52 compute-0 nova_compute[254092]: 2025-11-25 16:28:52.123 254096 DEBUG nova.virt.libvirt.driver [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:28:52 compute-0 nova_compute[254092]: 2025-11-25 16:28:52.124 254096 DEBUG nova.virt.libvirt.driver [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:28:52 compute-0 nova_compute[254092]: 2025-11-25 16:28:52.124 254096 DEBUG nova.virt.libvirt.driver [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:28:52 compute-0 nova_compute[254092]: 2025-11-25 16:28:52.124 254096 DEBUG nova.virt.libvirt.driver [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:28:52 compute-0 nova_compute[254092]: 2025-11-25 16:28:52.125 254096 DEBUG nova.virt.libvirt.driver [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:28:52 compute-0 nova_compute[254092]: 2025-11-25 16:28:52.128 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:28:52 compute-0 nova_compute[254092]: 2025-11-25 16:28:52.153 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:28:52 compute-0 nova_compute[254092]: 2025-11-25 16:28:52.153 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088132.096536, 9440e9b4-329e-44cf-a489-5a0634a8aa30 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:28:52 compute-0 nova_compute[254092]: 2025-11-25 16:28:52.154 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] VM Started (Lifecycle Event)
Nov 25 16:28:52 compute-0 nova_compute[254092]: 2025-11-25 16:28:52.186 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:28:52 compute-0 nova_compute[254092]: 2025-11-25 16:28:52.188 254096 DEBUG oslo_concurrency.processutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/090ac2d7-979e-4706-8a01-5e94ab72282d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoi78vhy9" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:28:52 compute-0 nova_compute[254092]: 2025-11-25 16:28:52.215 254096 DEBUG nova.storage.rbd_utils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 090ac2d7-979e-4706-8a01-5e94ab72282d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:28:52 compute-0 nova_compute[254092]: 2025-11-25 16:28:52.218 254096 DEBUG oslo_concurrency.processutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/090ac2d7-979e-4706-8a01-5e94ab72282d/disk.config 090ac2d7-979e-4706-8a01-5e94ab72282d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:28:52 compute-0 nova_compute[254092]: 2025-11-25 16:28:52.253 254096 INFO nova.compute.manager [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Took 3.41 seconds to spawn the instance on the hypervisor.
Nov 25 16:28:52 compute-0 nova_compute[254092]: 2025-11-25 16:28:52.253 254096 DEBUG nova.compute.manager [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:28:52 compute-0 nova_compute[254092]: 2025-11-25 16:28:52.255 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:28:52 compute-0 nova_compute[254092]: 2025-11-25 16:28:52.285 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:28:52 compute-0 nova_compute[254092]: 2025-11-25 16:28:52.332 254096 INFO nova.compute.manager [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Took 4.72 seconds to build instance.
Nov 25 16:28:52 compute-0 nova_compute[254092]: 2025-11-25 16:28:52.359 254096 DEBUG oslo_concurrency.lockutils [None req-6c760804-d271-4110-a90e-9fb5d2d9d019 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Lock "9440e9b4-329e-44cf-a489-5a0634a8aa30" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.825s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:28:52 compute-0 nova_compute[254092]: 2025-11-25 16:28:52.361 254096 DEBUG oslo_concurrency.processutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/090ac2d7-979e-4706-8a01-5e94ab72282d/disk.config 090ac2d7-979e-4706-8a01-5e94ab72282d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:28:52 compute-0 nova_compute[254092]: 2025-11-25 16:28:52.361 254096 INFO nova.virt.libvirt.driver [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Deleting local config drive /var/lib/nova/instances/090ac2d7-979e-4706-8a01-5e94ab72282d/disk.config because it was imported into RBD.
Nov 25 16:28:52 compute-0 nova_compute[254092]: 2025-11-25 16:28:52.388 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:28:52 compute-0 kernel: tap7c4d5f4d-36: entered promiscuous mode
Nov 25 16:28:52 compute-0 NetworkManager[48891]: <info>  [1764088132.4100] manager: (tap7c4d5f4d-36): new Tun device (/org/freedesktop/NetworkManager/Devices/45)
Nov 25 16:28:52 compute-0 systemd-udevd[282331]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:28:52 compute-0 nova_compute[254092]: 2025-11-25 16:28:52.412 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:28:52 compute-0 ovn_controller[153477]: 2025-11-25T16:28:52Z|00060|binding|INFO|Claiming lport 7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c for this chassis.
Nov 25 16:28:52 compute-0 ovn_controller[153477]: 2025-11-25T16:28:52Z|00061|binding|INFO|7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c: Claiming fa:16:3e:d8:5a:58 10.100.0.11
Nov 25 16:28:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:28:52.422 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:5a:58 10.100.0.11'], port_security=['fa:16:3e:d8:5a:58 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '090ac2d7-979e-4706-8a01-5e94ab72282d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a4c9877f37b34e7fba5f3cc9642d1a48', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2633f0b7-0e94-4653-a67a-e645ec4a69fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06f31098-96f7-4b9c-ab6f-85e953b1eed6, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:28:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:28:52.423 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c in datapath 3403825e-13ff-43e0-80c4-b59cf23ed30b bound to our chassis
Nov 25 16:28:52 compute-0 NetworkManager[48891]: <info>  [1764088132.4253] device (tap7c4d5f4d-36): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:28:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:28:52.425 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3403825e-13ff-43e0-80c4-b59cf23ed30b
Nov 25 16:28:52 compute-0 NetworkManager[48891]: <info>  [1764088132.4276] device (tap7c4d5f4d-36): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:28:52 compute-0 nova_compute[254092]: 2025-11-25 16:28:52.436 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:28:52 compute-0 ovn_controller[153477]: 2025-11-25T16:28:52Z|00062|binding|INFO|Setting lport 7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c ovn-installed in OVS
Nov 25 16:28:52 compute-0 ovn_controller[153477]: 2025-11-25T16:28:52Z|00063|binding|INFO|Setting lport 7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c up in Southbound
Nov 25 16:28:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:28:52.441 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bedb7ad4-1cb5-4a8d-89b4-5088e9610ffe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:28:52 compute-0 nova_compute[254092]: 2025-11-25 16:28:52.441 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:28:52 compute-0 systemd-machined[216343]: New machine qemu-22-instance-00000013.
Nov 25 16:28:52 compute-0 systemd[1]: Started Virtual Machine qemu-22-instance-00000013.
Nov 25 16:28:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:28:52.473 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[4830f569-e897-47ad-abe8-172384ab0823]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:28:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:28:52.476 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[799224ce-6d31-49fa-985b-50a4e28acf43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:28:52 compute-0 nova_compute[254092]: 2025-11-25 16:28:52.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:28:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:28:52.503 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1ccbec04-46d1-4547-87df-8669c2dfc08e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:28:52 compute-0 nova_compute[254092]: 2025-11-25 16:28:52.519 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:28:52 compute-0 nova_compute[254092]: 2025-11-25 16:28:52.520 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:28:52 compute-0 nova_compute[254092]: 2025-11-25 16:28:52.520 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:28:52 compute-0 nova_compute[254092]: 2025-11-25 16:28:52.520 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 16:28:52 compute-0 nova_compute[254092]: 2025-11-25 16:28:52.520 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:28:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:28:52.532 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[af423c4d-a367-4d6c-976b-01ea5f4ec789]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3403825e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:44:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458728, 'reachable_time': 39709, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282832, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:28:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:28:52.543 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[31df6ad6-0216-4c59-9dd3-9ff22112e042]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3403825e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458739, 'tstamp': 458739}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282835, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3403825e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458742, 'tstamp': 458742}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282835, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:28:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:28:52.544 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3403825e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:28:52 compute-0 nova_compute[254092]: 2025-11-25 16:28:52.546 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:28:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:28:52.548 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3403825e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:28:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:28:52.548 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:28:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:28:52.548 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3403825e-10, col_values=(('external_ids', {'iface-id': 'e9e1418a-32be-4b0e-81c1-ad489d6a782b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:28:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:28:52.548 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:28:52 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:28:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1234: 321 pgs: 321 active+clean; 180 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 151 KiB/s rd, 7.1 MiB/s wr, 227 op/s
Nov 25 16:28:52 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:28:52 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/148694826' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:28:52 compute-0 nova_compute[254092]: 2025-11-25 16:28:52.980 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:28:53 compute-0 nova_compute[254092]: 2025-11-25 16:28:53.060 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:28:53 compute-0 nova_compute[254092]: 2025-11-25 16:28:53.060 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:28:53 compute-0 ceph-mon[74985]: pgmap v1234: 321 pgs: 321 active+clean; 180 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 151 KiB/s rd, 7.1 MiB/s wr, 227 op/s
Nov 25 16:28:53 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/148694826' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:28:53 compute-0 nova_compute[254092]: 2025-11-25 16:28:53.065 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:28:53 compute-0 nova_compute[254092]: 2025-11-25 16:28:53.065 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:28:53 compute-0 nova_compute[254092]: 2025-11-25 16:28:53.070 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088133.0703099, 090ac2d7-979e-4706-8a01-5e94ab72282d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:28:53 compute-0 nova_compute[254092]: 2025-11-25 16:28:53.071 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] VM Started (Lifecycle Event)
Nov 25 16:28:53 compute-0 nova_compute[254092]: 2025-11-25 16:28:53.073 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:28:53 compute-0 nova_compute[254092]: 2025-11-25 16:28:53.074 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:28:53 compute-0 nova_compute[254092]: 2025-11-25 16:28:53.100 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:28:53 compute-0 nova_compute[254092]: 2025-11-25 16:28:53.104 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088133.0717514, 090ac2d7-979e-4706-8a01-5e94ab72282d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:28:53 compute-0 nova_compute[254092]: 2025-11-25 16:28:53.104 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] VM Paused (Lifecycle Event)
Nov 25 16:28:53 compute-0 nova_compute[254092]: 2025-11-25 16:28:53.122 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:28:53 compute-0 nova_compute[254092]: 2025-11-25 16:28:53.125 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:28:53 compute-0 nova_compute[254092]: 2025-11-25 16:28:53.142 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:28:53 compute-0 nova_compute[254092]: 2025-11-25 16:28:53.277 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:28:53 compute-0 nova_compute[254092]: 2025-11-25 16:28:53.278 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4404MB free_disk=59.93641662597656GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 16:28:53 compute-0 nova_compute[254092]: 2025-11-25 16:28:53.279 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:28:53 compute-0 nova_compute[254092]: 2025-11-25 16:28:53.279 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:28:53 compute-0 nova_compute[254092]: 2025-11-25 16:28:53.365 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 3375e096-321c-459b-8b6a-e085bb62872f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:28:53 compute-0 nova_compute[254092]: 2025-11-25 16:28:53.365 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 090ac2d7-979e-4706-8a01-5e94ab72282d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:28:53 compute-0 nova_compute[254092]: 2025-11-25 16:28:53.365 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 9440e9b4-329e-44cf-a489-5a0634a8aa30 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:28:53 compute-0 nova_compute[254092]: 2025-11-25 16:28:53.366 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 16:28:53 compute-0 nova_compute[254092]: 2025-11-25 16:28:53.366 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 16:28:53 compute-0 nova_compute[254092]: 2025-11-25 16:28:53.477 254096 DEBUG nova.network.neutron [req-ce6ca76b-42d9-4fd3-983d-ffc97559b6b9 req-f9536156-ebe2-49f1-889e-59f389258507 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Updated VIF entry in instance network info cache for port 7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:28:53 compute-0 nova_compute[254092]: 2025-11-25 16:28:53.478 254096 DEBUG nova.network.neutron [req-ce6ca76b-42d9-4fd3-983d-ffc97559b6b9 req-f9536156-ebe2-49f1-889e-59f389258507 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Updating instance_info_cache with network_info: [{"id": "7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c", "address": "fa:16:3e:d8:5a:58", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c4d5f4d-36", "ovs_interfaceid": "7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:28:53 compute-0 nova_compute[254092]: 2025-11-25 16:28:53.481 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:28:53 compute-0 nova_compute[254092]: 2025-11-25 16:28:53.511 254096 DEBUG oslo_concurrency.lockutils [req-ce6ca76b-42d9-4fd3-983d-ffc97559b6b9 req-f9536156-ebe2-49f1-889e-59f389258507 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-090ac2d7-979e-4706-8a01-5e94ab72282d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:28:53 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:28:53 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3753621258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:28:53 compute-0 nova_compute[254092]: 2025-11-25 16:28:53.955 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:28:53 compute-0 nova_compute[254092]: 2025-11-25 16:28:53.960 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:28:53 compute-0 nova_compute[254092]: 2025-11-25 16:28:53.975 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:28:53 compute-0 nova_compute[254092]: 2025-11-25 16:28:53.996 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 16:28:53 compute-0 nova_compute[254092]: 2025-11-25 16:28:53.997 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:28:54 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3753621258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.415 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088119.4138906, 2174ef15-55fa-4734-8cc2-89064853919b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.416 254096 INFO nova.compute.manager [-] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] VM Stopped (Lifecycle Event)
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.419 254096 DEBUG nova.compute.manager [req-9d954a6a-db49-4125-83e6-0d58d7b7a38f req-d944d61c-786b-466d-b01f-191d4bf8f6ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Received event network-vif-plugged-7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.420 254096 DEBUG oslo_concurrency.lockutils [req-9d954a6a-db49-4125-83e6-0d58d7b7a38f req-d944d61c-786b-466d-b01f-191d4bf8f6ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "090ac2d7-979e-4706-8a01-5e94ab72282d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.420 254096 DEBUG oslo_concurrency.lockutils [req-9d954a6a-db49-4125-83e6-0d58d7b7a38f req-d944d61c-786b-466d-b01f-191d4bf8f6ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "090ac2d7-979e-4706-8a01-5e94ab72282d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.421 254096 DEBUG oslo_concurrency.lockutils [req-9d954a6a-db49-4125-83e6-0d58d7b7a38f req-d944d61c-786b-466d-b01f-191d4bf8f6ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "090ac2d7-979e-4706-8a01-5e94ab72282d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.421 254096 DEBUG nova.compute.manager [req-9d954a6a-db49-4125-83e6-0d58d7b7a38f req-d944d61c-786b-466d-b01f-191d4bf8f6ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Processing event network-vif-plugged-7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.422 254096 DEBUG nova.compute.manager [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.425 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088134.4256034, 090ac2d7-979e-4706-8a01-5e94ab72282d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.426 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] VM Resumed (Lifecycle Event)
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.428 254096 DEBUG nova.virt.libvirt.driver [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.431 254096 INFO nova.virt.libvirt.driver [-] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Instance spawned successfully.
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.431 254096 DEBUG nova.virt.libvirt.driver [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.450 254096 DEBUG nova.compute.manager [None req-d89895fa-1222-4643-88e2-6124354a414e - - - - - -] [instance: 2174ef15-55fa-4734-8cc2-89064853919b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.459 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.462 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.468 254096 DEBUG nova.virt.libvirt.driver [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.468 254096 DEBUG nova.virt.libvirt.driver [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.469 254096 DEBUG nova.virt.libvirt.driver [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.469 254096 DEBUG nova.virt.libvirt.driver [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.469 254096 DEBUG nova.virt.libvirt.driver [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.470 254096 DEBUG nova.virt.libvirt.driver [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.498 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.501 254096 DEBUG nova.compute.manager [req-c9d50de4-089c-40a2-a343-07b867eb1d73 req-18869db1-4b80-47ca-b2ac-32193f4b9297 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Received event network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.501 254096 DEBUG oslo_concurrency.lockutils [req-c9d50de4-089c-40a2-a343-07b867eb1d73 req-18869db1-4b80-47ca-b2ac-32193f4b9297 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "3375e096-321c-459b-8b6a-e085bb62872f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.502 254096 DEBUG oslo_concurrency.lockutils [req-c9d50de4-089c-40a2-a343-07b867eb1d73 req-18869db1-4b80-47ca-b2ac-32193f4b9297 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.502 254096 DEBUG oslo_concurrency.lockutils [req-c9d50de4-089c-40a2-a343-07b867eb1d73 req-18869db1-4b80-47ca-b2ac-32193f4b9297 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.502 254096 DEBUG nova.compute.manager [req-c9d50de4-089c-40a2-a343-07b867eb1d73 req-18869db1-4b80-47ca-b2ac-32193f4b9297 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Processing event network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.503 254096 DEBUG nova.compute.manager [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.506 254096 DEBUG nova.virt.libvirt.driver [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.507 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088134.506741, 3375e096-321c-459b-8b6a-e085bb62872f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.507 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] VM Resumed (Lifecycle Event)
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.515 254096 INFO nova.virt.libvirt.driver [-] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Instance spawned successfully.
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.515 254096 DEBUG nova.virt.libvirt.driver [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.527 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.531 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.535 254096 DEBUG nova.virt.libvirt.driver [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.535 254096 DEBUG nova.virt.libvirt.driver [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.536 254096 DEBUG nova.virt.libvirt.driver [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.536 254096 DEBUG nova.virt.libvirt.driver [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.537 254096 DEBUG nova.virt.libvirt.driver [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.537 254096 DEBUG nova.virt.libvirt.driver [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.571 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.641 254096 INFO nova.compute.manager [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Took 8.71 seconds to spawn the instance on the hypervisor.
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.641 254096 DEBUG nova.compute.manager [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.673 254096 INFO nova.compute.manager [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Took 12.24 seconds to spawn the instance on the hypervisor.
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.674 254096 DEBUG nova.compute.manager [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.737 254096 INFO nova.compute.manager [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Took 10.40 seconds to build instance.
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.817 254096 INFO nova.compute.manager [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Took 14.13 seconds to build instance.
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.841 254096 DEBUG oslo_concurrency.lockutils [None req-b4077756-3ded-4b9c-bf52-f6444a4cf7bc a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "090ac2d7-979e-4706-8a01-5e94ab72282d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.012s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:28:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1235: 321 pgs: 321 active+clean; 180 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 126 KiB/s rd, 6.0 MiB/s wr, 190 op/s
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.891 254096 DEBUG oslo_concurrency.lockutils [None req-15f23029-caf0-479a-a205-5197da85f660 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.280s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.996 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.997 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.997 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.997 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.997 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:28:54 compute-0 nova_compute[254092]: 2025-11-25 16:28:54.998 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 16:28:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 16:28:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/457479105' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:28:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 16:28:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/457479105' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:28:55 compute-0 ceph-mon[74985]: pgmap v1235: 321 pgs: 321 active+clean; 180 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 126 KiB/s rd, 6.0 MiB/s wr, 190 op/s
Nov 25 16:28:56 compute-0 nova_compute[254092]: 2025-11-25 16:28:56.340 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:28:56 compute-0 nova_compute[254092]: 2025-11-25 16:28:56.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:28:56 compute-0 nova_compute[254092]: 2025-11-25 16:28:56.498 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 16:28:56 compute-0 nova_compute[254092]: 2025-11-25 16:28:56.499 254096 DEBUG oslo_concurrency.lockutils [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Acquiring lock "9440e9b4-329e-44cf-a489-5a0634a8aa30" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:28:56 compute-0 nova_compute[254092]: 2025-11-25 16:28:56.500 254096 DEBUG oslo_concurrency.lockutils [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Lock "9440e9b4-329e-44cf-a489-5a0634a8aa30" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:28:56 compute-0 nova_compute[254092]: 2025-11-25 16:28:56.500 254096 DEBUG oslo_concurrency.lockutils [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Acquiring lock "9440e9b4-329e-44cf-a489-5a0634a8aa30-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:28:56 compute-0 nova_compute[254092]: 2025-11-25 16:28:56.501 254096 DEBUG oslo_concurrency.lockutils [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Lock "9440e9b4-329e-44cf-a489-5a0634a8aa30-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:28:56 compute-0 nova_compute[254092]: 2025-11-25 16:28:56.501 254096 DEBUG oslo_concurrency.lockutils [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Lock "9440e9b4-329e-44cf-a489-5a0634a8aa30-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:28:56 compute-0 nova_compute[254092]: 2025-11-25 16:28:56.503 254096 INFO nova.compute.manager [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Terminating instance
Nov 25 16:28:56 compute-0 nova_compute[254092]: 2025-11-25 16:28:56.504 254096 DEBUG oslo_concurrency.lockutils [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Acquiring lock "refresh_cache-9440e9b4-329e-44cf-a489-5a0634a8aa30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:28:56 compute-0 nova_compute[254092]: 2025-11-25 16:28:56.504 254096 DEBUG oslo_concurrency.lockutils [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Acquired lock "refresh_cache-9440e9b4-329e-44cf-a489-5a0634a8aa30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:28:56 compute-0 nova_compute[254092]: 2025-11-25 16:28:56.505 254096 DEBUG nova.network.neutron [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:28:56 compute-0 nova_compute[254092]: 2025-11-25 16:28:56.617 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 16:28:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/457479105' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:28:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/457479105' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:28:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1236: 321 pgs: 321 active+clean; 181 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 5.4 MiB/s wr, 387 op/s
Nov 25 16:28:57 compute-0 nova_compute[254092]: 2025-11-25 16:28:57.428 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:28:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:28:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Nov 25 16:28:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Nov 25 16:28:57 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Nov 25 16:28:58 compute-0 ceph-mon[74985]: pgmap v1236: 321 pgs: 321 active+clean; 181 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 5.4 MiB/s wr, 387 op/s
Nov 25 16:28:58 compute-0 nova_compute[254092]: 2025-11-25 16:28:58.409 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088123.4073539, 12043b7f-9853-45a8-b963-ae96713754b4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:28:58 compute-0 nova_compute[254092]: 2025-11-25 16:28:58.409 254096 INFO nova.compute.manager [-] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] VM Stopped (Lifecycle Event)
Nov 25 16:28:58 compute-0 nova_compute[254092]: 2025-11-25 16:28:58.425 254096 DEBUG nova.compute.manager [None req-8ded2092-1658-4b4b-b675-8ed3080575fe - - - - - -] [instance: 12043b7f-9853-45a8-b963-ae96713754b4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:28:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1238: 321 pgs: 321 active+clean; 181 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 1.4 MiB/s wr, 303 op/s
Nov 25 16:28:59 compute-0 ceph-mon[74985]: osdmap e158: 3 total, 3 up, 3 in
Nov 25 16:28:59 compute-0 ceph-mon[74985]: pgmap v1238: 321 pgs: 321 active+clean; 181 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 1.4 MiB/s wr, 303 op/s
Nov 25 16:28:59 compute-0 nova_compute[254092]: 2025-11-25 16:28:59.611 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:28:59 compute-0 nova_compute[254092]: 2025-11-25 16:28:59.612 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:29:00 compute-0 nova_compute[254092]: 2025-11-25 16:29:00.330 254096 DEBUG nova.network.neutron [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:29:00 compute-0 nova_compute[254092]: 2025-11-25 16:29:00.449 254096 DEBUG nova.compute.manager [req-9aac2fe9-b1ae-4911-a66a-6a5fd76e2eb1 req-5681ad15-2dfe-4b0f-8cce-db2963d2d76f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Received event network-vif-plugged-7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:29:00 compute-0 nova_compute[254092]: 2025-11-25 16:29:00.451 254096 DEBUG oslo_concurrency.lockutils [req-9aac2fe9-b1ae-4911-a66a-6a5fd76e2eb1 req-5681ad15-2dfe-4b0f-8cce-db2963d2d76f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "090ac2d7-979e-4706-8a01-5e94ab72282d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:29:00 compute-0 nova_compute[254092]: 2025-11-25 16:29:00.451 254096 DEBUG oslo_concurrency.lockutils [req-9aac2fe9-b1ae-4911-a66a-6a5fd76e2eb1 req-5681ad15-2dfe-4b0f-8cce-db2963d2d76f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "090ac2d7-979e-4706-8a01-5e94ab72282d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:29:00 compute-0 nova_compute[254092]: 2025-11-25 16:29:00.452 254096 DEBUG oslo_concurrency.lockutils [req-9aac2fe9-b1ae-4911-a66a-6a5fd76e2eb1 req-5681ad15-2dfe-4b0f-8cce-db2963d2d76f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "090ac2d7-979e-4706-8a01-5e94ab72282d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:29:00 compute-0 nova_compute[254092]: 2025-11-25 16:29:00.452 254096 DEBUG nova.compute.manager [req-9aac2fe9-b1ae-4911-a66a-6a5fd76e2eb1 req-5681ad15-2dfe-4b0f-8cce-db2963d2d76f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] No waiting events found dispatching network-vif-plugged-7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:29:00 compute-0 nova_compute[254092]: 2025-11-25 16:29:00.452 254096 WARNING nova.compute.manager [req-9aac2fe9-b1ae-4911-a66a-6a5fd76e2eb1 req-5681ad15-2dfe-4b0f-8cce-db2963d2d76f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Received unexpected event network-vif-plugged-7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c for instance with vm_state active and task_state None.
Nov 25 16:29:00 compute-0 nova_compute[254092]: 2025-11-25 16:29:00.560 254096 DEBUG nova.compute.manager [req-8ed88030-7087-41e8-bd42-f1e4ac4a27df req-bfee3de6-4758-4115-9bc2-e61397cf9327 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Received event network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:29:00 compute-0 nova_compute[254092]: 2025-11-25 16:29:00.561 254096 DEBUG oslo_concurrency.lockutils [req-8ed88030-7087-41e8-bd42-f1e4ac4a27df req-bfee3de6-4758-4115-9bc2-e61397cf9327 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "3375e096-321c-459b-8b6a-e085bb62872f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:29:00 compute-0 nova_compute[254092]: 2025-11-25 16:29:00.563 254096 DEBUG oslo_concurrency.lockutils [req-8ed88030-7087-41e8-bd42-f1e4ac4a27df req-bfee3de6-4758-4115-9bc2-e61397cf9327 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:29:00 compute-0 nova_compute[254092]: 2025-11-25 16:29:00.564 254096 DEBUG oslo_concurrency.lockutils [req-8ed88030-7087-41e8-bd42-f1e4ac4a27df req-bfee3de6-4758-4115-9bc2-e61397cf9327 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:29:00 compute-0 nova_compute[254092]: 2025-11-25 16:29:00.564 254096 DEBUG nova.compute.manager [req-8ed88030-7087-41e8-bd42-f1e4ac4a27df req-bfee3de6-4758-4115-9bc2-e61397cf9327 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] No waiting events found dispatching network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:29:00 compute-0 nova_compute[254092]: 2025-11-25 16:29:00.565 254096 WARNING nova.compute.manager [req-8ed88030-7087-41e8-bd42-f1e4ac4a27df req-bfee3de6-4758-4115-9bc2-e61397cf9327 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Received unexpected event network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 for instance with vm_state active and task_state None.
Nov 25 16:29:00 compute-0 nova_compute[254092]: 2025-11-25 16:29:00.720 254096 DEBUG nova.network.neutron [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:29:00 compute-0 nova_compute[254092]: 2025-11-25 16:29:00.736 254096 DEBUG oslo_concurrency.lockutils [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Releasing lock "refresh_cache-9440e9b4-329e-44cf-a489-5a0634a8aa30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:29:00 compute-0 nova_compute[254092]: 2025-11-25 16:29:00.737 254096 DEBUG nova.compute.manager [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:29:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1239: 321 pgs: 321 active+clean; 181 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 7.0 MiB/s rd, 32 KiB/s wr, 288 op/s
Nov 25 16:29:00 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000014.scope: Deactivated successfully.
Nov 25 16:29:00 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000014.scope: Consumed 9.024s CPU time.
Nov 25 16:29:00 compute-0 systemd-machined[216343]: Machine qemu-21-instance-00000014 terminated.
Nov 25 16:29:00 compute-0 nova_compute[254092]: 2025-11-25 16:29:00.954 254096 INFO nova.virt.libvirt.driver [-] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Instance destroyed successfully.
Nov 25 16:29:00 compute-0 nova_compute[254092]: 2025-11-25 16:29:00.955 254096 DEBUG nova.objects.instance [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Lazy-loading 'resources' on Instance uuid 9440e9b4-329e-44cf-a489-5a0634a8aa30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:29:00 compute-0 nova_compute[254092]: 2025-11-25 16:29:00.992 254096 DEBUG oslo_concurrency.lockutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Acquiring lock "7b9f60af-05f0-43c7-bce7-227cb54ec793" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:29:00 compute-0 nova_compute[254092]: 2025-11-25 16:29:00.994 254096 DEBUG oslo_concurrency.lockutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Lock "7b9f60af-05f0-43c7-bce7-227cb54ec793" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:29:01 compute-0 nova_compute[254092]: 2025-11-25 16:29:01.018 254096 DEBUG nova.compute.manager [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:29:01 compute-0 anacron[126663]: Job `cron.weekly' started
Nov 25 16:29:01 compute-0 anacron[126663]: Job `cron.weekly' terminated
Nov 25 16:29:01 compute-0 nova_compute[254092]: 2025-11-25 16:29:01.275 254096 DEBUG oslo_concurrency.lockutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:29:01 compute-0 nova_compute[254092]: 2025-11-25 16:29:01.277 254096 DEBUG oslo_concurrency.lockutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:29:01 compute-0 nova_compute[254092]: 2025-11-25 16:29:01.283 254096 DEBUG nova.virt.hardware [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:29:01 compute-0 nova_compute[254092]: 2025-11-25 16:29:01.284 254096 INFO nova.compute.claims [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:29:01 compute-0 ceph-mon[74985]: pgmap v1239: 321 pgs: 321 active+clean; 181 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 7.0 MiB/s rd, 32 KiB/s wr, 288 op/s
Nov 25 16:29:01 compute-0 nova_compute[254092]: 2025-11-25 16:29:01.342 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:01 compute-0 nova_compute[254092]: 2025-11-25 16:29:01.579 254096 DEBUG oslo_concurrency.processutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:29:02 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:29:02 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1694543479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:29:02 compute-0 nova_compute[254092]: 2025-11-25 16:29:02.052 254096 DEBUG oslo_concurrency.processutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:29:02 compute-0 nova_compute[254092]: 2025-11-25 16:29:02.062 254096 DEBUG nova.compute.provider_tree [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:29:02 compute-0 nova_compute[254092]: 2025-11-25 16:29:02.077 254096 DEBUG nova.scheduler.client.report [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:29:02 compute-0 nova_compute[254092]: 2025-11-25 16:29:02.121 254096 DEBUG oslo_concurrency.lockutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.843s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:29:02 compute-0 nova_compute[254092]: 2025-11-25 16:29:02.122 254096 DEBUG nova.compute.manager [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:29:02 compute-0 nova_compute[254092]: 2025-11-25 16:29:02.192 254096 DEBUG nova.compute.manager [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:29:02 compute-0 nova_compute[254092]: 2025-11-25 16:29:02.193 254096 DEBUG nova.network.neutron [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:29:02 compute-0 nova_compute[254092]: 2025-11-25 16:29:02.219 254096 INFO nova.virt.libvirt.driver [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:29:02 compute-0 nova_compute[254092]: 2025-11-25 16:29:02.246 254096 DEBUG nova.compute.manager [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:29:02 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1694543479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:29:02 compute-0 nova_compute[254092]: 2025-11-25 16:29:02.390 254096 DEBUG nova.compute.manager [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:29:02 compute-0 nova_compute[254092]: 2025-11-25 16:29:02.392 254096 DEBUG nova.virt.libvirt.driver [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:29:02 compute-0 nova_compute[254092]: 2025-11-25 16:29:02.395 254096 INFO nova.virt.libvirt.driver [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Creating image(s)
Nov 25 16:29:02 compute-0 nova_compute[254092]: 2025-11-25 16:29:02.465 254096 DEBUG nova.storage.rbd_utils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] rbd image 7b9f60af-05f0-43c7-bce7-227cb54ec793_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:29:02 compute-0 nova_compute[254092]: 2025-11-25 16:29:02.494 254096 DEBUG nova.storage.rbd_utils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] rbd image 7b9f60af-05f0-43c7-bce7-227cb54ec793_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:29:02 compute-0 nova_compute[254092]: 2025-11-25 16:29:02.514 254096 DEBUG nova.storage.rbd_utils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] rbd image 7b9f60af-05f0-43c7-bce7-227cb54ec793_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:29:02 compute-0 nova_compute[254092]: 2025-11-25 16:29:02.517 254096 DEBUG oslo_concurrency.processutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:29:02 compute-0 nova_compute[254092]: 2025-11-25 16:29:02.539 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:02 compute-0 nova_compute[254092]: 2025-11-25 16:29:02.543 254096 DEBUG nova.policy [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '09afd60d3afd4a57a14e7e93a66275f9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f99f039b80564f5684a91f3bc27c2249', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:29:02 compute-0 nova_compute[254092]: 2025-11-25 16:29:02.577 254096 DEBUG oslo_concurrency.processutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:29:02 compute-0 nova_compute[254092]: 2025-11-25 16:29:02.578 254096 DEBUG oslo_concurrency.lockutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:29:02 compute-0 nova_compute[254092]: 2025-11-25 16:29:02.579 254096 DEBUG oslo_concurrency.lockutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:29:02 compute-0 nova_compute[254092]: 2025-11-25 16:29:02.579 254096 DEBUG oslo_concurrency.lockutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:29:02 compute-0 nova_compute[254092]: 2025-11-25 16:29:02.608 254096 DEBUG nova.storage.rbd_utils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] rbd image 7b9f60af-05f0-43c7-bce7-227cb54ec793_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:29:02 compute-0 nova_compute[254092]: 2025-11-25 16:29:02.616 254096 DEBUG oslo_concurrency.processutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 7b9f60af-05f0-43c7-bce7-227cb54ec793_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:29:02 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:29:02 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Nov 25 16:29:02 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Nov 25 16:29:02 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Nov 25 16:29:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1241: 321 pgs: 321 active+clean; 181 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 8.6 MiB/s rd, 39 KiB/s wr, 357 op/s
Nov 25 16:29:04 compute-0 ceph-mon[74985]: osdmap e159: 3 total, 3 up, 3 in
Nov 25 16:29:04 compute-0 ceph-mon[74985]: pgmap v1241: 321 pgs: 321 active+clean; 181 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 8.6 MiB/s rd, 39 KiB/s wr, 357 op/s
Nov 25 16:29:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1242: 321 pgs: 321 active+clean; 181 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 140 op/s
Nov 25 16:29:05 compute-0 nova_compute[254092]: 2025-11-25 16:29:05.322 254096 DEBUG nova.network.neutron [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Successfully created port: ab0cfddf-69e0-4494-a106-e603168444a4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:29:05 compute-0 ceph-mon[74985]: pgmap v1242: 321 pgs: 321 active+clean; 181 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 140 op/s
Nov 25 16:29:05 compute-0 nova_compute[254092]: 2025-11-25 16:29:05.383 254096 DEBUG oslo_concurrency.processutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 7b9f60af-05f0-43c7-bce7-227cb54ec793_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.768s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:29:05 compute-0 nova_compute[254092]: 2025-11-25 16:29:05.455 254096 DEBUG nova.storage.rbd_utils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] resizing rbd image 7b9f60af-05f0-43c7-bce7-227cb54ec793_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:29:05 compute-0 nova_compute[254092]: 2025-11-25 16:29:05.509 254096 DEBUG oslo_concurrency.lockutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "f0cb83d8-c2a3-49d1-8c01-b9be9922abd1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:29:05 compute-0 nova_compute[254092]: 2025-11-25 16:29:05.509 254096 DEBUG oslo_concurrency.lockutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "f0cb83d8-c2a3-49d1-8c01-b9be9922abd1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:29:05 compute-0 nova_compute[254092]: 2025-11-25 16:29:05.525 254096 DEBUG nova.compute.manager [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:29:05 compute-0 nova_compute[254092]: 2025-11-25 16:29:05.667 254096 DEBUG oslo_concurrency.lockutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:29:05 compute-0 nova_compute[254092]: 2025-11-25 16:29:05.668 254096 DEBUG oslo_concurrency.lockutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:29:05 compute-0 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 25 16:29:05 compute-0 nova_compute[254092]: 2025-11-25 16:29:05.719 254096 DEBUG nova.virt.hardware [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:29:05 compute-0 nova_compute[254092]: 2025-11-25 16:29:05.722 254096 INFO nova.compute.claims [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:29:05 compute-0 nova_compute[254092]: 2025-11-25 16:29:05.905 254096 DEBUG oslo_concurrency.processutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:29:06 compute-0 nova_compute[254092]: 2025-11-25 16:29:06.016 254096 DEBUG nova.objects.instance [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Lazy-loading 'migration_context' on Instance uuid 7b9f60af-05f0-43c7-bce7-227cb54ec793 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:29:06 compute-0 nova_compute[254092]: 2025-11-25 16:29:06.032 254096 DEBUG nova.virt.libvirt.driver [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:29:06 compute-0 nova_compute[254092]: 2025-11-25 16:29:06.032 254096 DEBUG nova.virt.libvirt.driver [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Ensure instance console log exists: /var/lib/nova/instances/7b9f60af-05f0-43c7-bce7-227cb54ec793/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:29:06 compute-0 nova_compute[254092]: 2025-11-25 16:29:06.033 254096 DEBUG oslo_concurrency.lockutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:29:06 compute-0 nova_compute[254092]: 2025-11-25 16:29:06.033 254096 DEBUG oslo_concurrency.lockutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:29:06 compute-0 nova_compute[254092]: 2025-11-25 16:29:06.033 254096 DEBUG oslo_concurrency.lockutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:29:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:29:06 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1020884996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:29:06 compute-0 nova_compute[254092]: 2025-11-25 16:29:06.344 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:06 compute-0 nova_compute[254092]: 2025-11-25 16:29:06.349 254096 DEBUG oslo_concurrency.processutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:29:06 compute-0 nova_compute[254092]: 2025-11-25 16:29:06.361 254096 INFO nova.virt.libvirt.driver [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Deleting instance files /var/lib/nova/instances/9440e9b4-329e-44cf-a489-5a0634a8aa30_del
Nov 25 16:29:06 compute-0 nova_compute[254092]: 2025-11-25 16:29:06.364 254096 INFO nova.virt.libvirt.driver [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Deletion of /var/lib/nova/instances/9440e9b4-329e-44cf-a489-5a0634a8aa30_del complete
Nov 25 16:29:06 compute-0 nova_compute[254092]: 2025-11-25 16:29:06.371 254096 DEBUG nova.compute.provider_tree [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:29:06 compute-0 nova_compute[254092]: 2025-11-25 16:29:06.395 254096 DEBUG nova.scheduler.client.report [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:29:06 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1020884996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:29:06 compute-0 nova_compute[254092]: 2025-11-25 16:29:06.431 254096 DEBUG oslo_concurrency.lockutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.762s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:29:06 compute-0 nova_compute[254092]: 2025-11-25 16:29:06.432 254096 DEBUG nova.compute.manager [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:29:06 compute-0 nova_compute[254092]: 2025-11-25 16:29:06.437 254096 INFO nova.compute.manager [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Took 5.70 seconds to destroy the instance on the hypervisor.
Nov 25 16:29:06 compute-0 nova_compute[254092]: 2025-11-25 16:29:06.438 254096 DEBUG oslo.service.loopingcall [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:29:06 compute-0 nova_compute[254092]: 2025-11-25 16:29:06.438 254096 DEBUG nova.compute.manager [-] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:29:06 compute-0 nova_compute[254092]: 2025-11-25 16:29:06.438 254096 DEBUG nova.network.neutron [-] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:29:06 compute-0 nova_compute[254092]: 2025-11-25 16:29:06.495 254096 DEBUG nova.compute.manager [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:29:06 compute-0 nova_compute[254092]: 2025-11-25 16:29:06.496 254096 DEBUG nova.network.neutron [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:29:06 compute-0 nova_compute[254092]: 2025-11-25 16:29:06.529 254096 INFO nova.virt.libvirt.driver [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:29:06 compute-0 nova_compute[254092]: 2025-11-25 16:29:06.556 254096 DEBUG nova.compute.manager [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:29:06 compute-0 nova_compute[254092]: 2025-11-25 16:29:06.685 254096 DEBUG nova.network.neutron [-] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:29:06 compute-0 nova_compute[254092]: 2025-11-25 16:29:06.706 254096 DEBUG nova.network.neutron [-] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:29:06 compute-0 nova_compute[254092]: 2025-11-25 16:29:06.721 254096 DEBUG nova.compute.manager [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:29:06 compute-0 nova_compute[254092]: 2025-11-25 16:29:06.722 254096 DEBUG nova.virt.libvirt.driver [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:29:06 compute-0 nova_compute[254092]: 2025-11-25 16:29:06.722 254096 INFO nova.virt.libvirt.driver [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Creating image(s)
Nov 25 16:29:06 compute-0 nova_compute[254092]: 2025-11-25 16:29:06.740 254096 DEBUG nova.storage.rbd_utils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image f0cb83d8-c2a3-49d1-8c01-b9be9922abd1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:29:06 compute-0 nova_compute[254092]: 2025-11-25 16:29:06.759 254096 DEBUG nova.storage.rbd_utils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image f0cb83d8-c2a3-49d1-8c01-b9be9922abd1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:29:06 compute-0 nova_compute[254092]: 2025-11-25 16:29:06.785 254096 DEBUG nova.storage.rbd_utils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image f0cb83d8-c2a3-49d1-8c01-b9be9922abd1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:29:06 compute-0 nova_compute[254092]: 2025-11-25 16:29:06.788 254096 DEBUG oslo_concurrency.processutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:29:06 compute-0 nova_compute[254092]: 2025-11-25 16:29:06.823 254096 DEBUG nova.policy [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a8e217b742fe4a57a4ac4ffc776670fe', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a4c9877f37b34e7fba5f3cc9642d1a48', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:29:06 compute-0 nova_compute[254092]: 2025-11-25 16:29:06.829 254096 INFO nova.compute.manager [-] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Took 0.39 seconds to deallocate network for instance.
Nov 25 16:29:06 compute-0 nova_compute[254092]: 2025-11-25 16:29:06.867 254096 DEBUG oslo_concurrency.processutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:29:06 compute-0 nova_compute[254092]: 2025-11-25 16:29:06.868 254096 DEBUG oslo_concurrency.lockutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:29:06 compute-0 nova_compute[254092]: 2025-11-25 16:29:06.868 254096 DEBUG oslo_concurrency.lockutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:29:06 compute-0 nova_compute[254092]: 2025-11-25 16:29:06.869 254096 DEBUG oslo_concurrency.lockutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:29:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1243: 321 pgs: 321 active+clean; 172 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 2.8 MiB/s wr, 177 op/s
Nov 25 16:29:06 compute-0 nova_compute[254092]: 2025-11-25 16:29:06.892 254096 DEBUG nova.storage.rbd_utils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image f0cb83d8-c2a3-49d1-8c01-b9be9922abd1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:29:06 compute-0 nova_compute[254092]: 2025-11-25 16:29:06.898 254096 DEBUG oslo_concurrency.processutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 f0cb83d8-c2a3-49d1-8c01-b9be9922abd1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:29:06 compute-0 nova_compute[254092]: 2025-11-25 16:29:06.926 254096 DEBUG oslo_concurrency.lockutils [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:29:06 compute-0 nova_compute[254092]: 2025-11-25 16:29:06.927 254096 DEBUG oslo_concurrency.lockutils [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:29:07 compute-0 nova_compute[254092]: 2025-11-25 16:29:07.070 254096 DEBUG oslo_concurrency.processutils [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:29:07 compute-0 nova_compute[254092]: 2025-11-25 16:29:07.100 254096 DEBUG nova.network.neutron [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Successfully updated port: ab0cfddf-69e0-4494-a106-e603168444a4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:29:07 compute-0 nova_compute[254092]: 2025-11-25 16:29:07.186 254096 DEBUG oslo_concurrency.lockutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Acquiring lock "refresh_cache-7b9f60af-05f0-43c7-bce7-227cb54ec793" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:29:07 compute-0 nova_compute[254092]: 2025-11-25 16:29:07.186 254096 DEBUG oslo_concurrency.lockutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Acquired lock "refresh_cache-7b9f60af-05f0-43c7-bce7-227cb54ec793" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:29:07 compute-0 nova_compute[254092]: 2025-11-25 16:29:07.187 254096 DEBUG nova.network.neutron [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:29:07 compute-0 nova_compute[254092]: 2025-11-25 16:29:07.400 254096 DEBUG nova.network.neutron [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:29:07 compute-0 nova_compute[254092]: 2025-11-25 16:29:07.432 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:29:07 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2826807844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:29:07 compute-0 nova_compute[254092]: 2025-11-25 16:29:07.520 254096 DEBUG nova.compute.manager [req-d689d932-0def-4a52-80de-f10d93cef505 req-c1b325c3-a2ac-4c1e-bb96-3f1afe1f341a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Received event network-changed-ab0cfddf-69e0-4494-a106-e603168444a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:29:07 compute-0 nova_compute[254092]: 2025-11-25 16:29:07.520 254096 DEBUG nova.compute.manager [req-d689d932-0def-4a52-80de-f10d93cef505 req-c1b325c3-a2ac-4c1e-bb96-3f1afe1f341a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Refreshing instance network info cache due to event network-changed-ab0cfddf-69e0-4494-a106-e603168444a4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:29:07 compute-0 nova_compute[254092]: 2025-11-25 16:29:07.521 254096 DEBUG oslo_concurrency.lockutils [req-d689d932-0def-4a52-80de-f10d93cef505 req-c1b325c3-a2ac-4c1e-bb96-3f1afe1f341a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-7b9f60af-05f0-43c7-bce7-227cb54ec793" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:29:07 compute-0 nova_compute[254092]: 2025-11-25 16:29:07.531 254096 DEBUG oslo_concurrency.processutils [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:29:07 compute-0 nova_compute[254092]: 2025-11-25 16:29:07.537 254096 DEBUG nova.compute.provider_tree [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:29:07 compute-0 nova_compute[254092]: 2025-11-25 16:29:07.549 254096 DEBUG nova.scheduler.client.report [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:29:07 compute-0 ceph-mon[74985]: pgmap v1243: 321 pgs: 321 active+clean; 172 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 2.8 MiB/s wr, 177 op/s
Nov 25 16:29:07 compute-0 nova_compute[254092]: 2025-11-25 16:29:07.593 254096 DEBUG oslo_concurrency.lockutils [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:29:07 compute-0 nova_compute[254092]: 2025-11-25 16:29:07.732 254096 INFO nova.scheduler.client.report [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Deleted allocations for instance 9440e9b4-329e-44cf-a489-5a0634a8aa30
Nov 25 16:29:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:29:07 compute-0 nova_compute[254092]: 2025-11-25 16:29:07.892 254096 DEBUG oslo_concurrency.lockutils [None req-5c08d7e3-46f0-4f28-b580-d4db77ee20c9 06ffced0e9004a60b3fe2f455857f494 99f704e0b6c04f648538e8070f335bdc - - default default] Lock "9440e9b4-329e-44cf-a489-5a0634a8aa30" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 11.392s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:29:08 compute-0 ovn_controller[153477]: 2025-11-25T16:29:08Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:dd:a2:8e 10.100.0.12
Nov 25 16:29:08 compute-0 ovn_controller[153477]: 2025-11-25T16:29:08Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:dd:a2:8e 10.100.0.12
Nov 25 16:29:08 compute-0 ovn_controller[153477]: 2025-11-25T16:29:08Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d8:5a:58 10.100.0.11
Nov 25 16:29:08 compute-0 ovn_controller[153477]: 2025-11-25T16:29:08Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d8:5a:58 10.100.0.11
Nov 25 16:29:08 compute-0 nova_compute[254092]: 2025-11-25 16:29:08.406 254096 DEBUG nova.network.neutron [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Successfully created port: 0d1cf86d-6639-47eb-8de1-718476d1c006 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:29:08 compute-0 nova_compute[254092]: 2025-11-25 16:29:08.758 254096 DEBUG nova.network.neutron [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Updating instance_info_cache with network_info: [{"id": "ab0cfddf-69e0-4494-a106-e603168444a4", "address": "fa:16:3e:7d:e6:95", "network": {"id": "09313f5b-a3fb-41e8-87c2-c636c3ed13c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-1359044902-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f99f039b80564f5684a91f3bc27c2249", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab0cfddf-69", "ovs_interfaceid": "ab0cfddf-69e0-4494-a106-e603168444a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:29:08 compute-0 nova_compute[254092]: 2025-11-25 16:29:08.799 254096 DEBUG oslo_concurrency.lockutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Releasing lock "refresh_cache-7b9f60af-05f0-43c7-bce7-227cb54ec793" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:29:08 compute-0 nova_compute[254092]: 2025-11-25 16:29:08.800 254096 DEBUG nova.compute.manager [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Instance network_info: |[{"id": "ab0cfddf-69e0-4494-a106-e603168444a4", "address": "fa:16:3e:7d:e6:95", "network": {"id": "09313f5b-a3fb-41e8-87c2-c636c3ed13c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-1359044902-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f99f039b80564f5684a91f3bc27c2249", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab0cfddf-69", "ovs_interfaceid": "ab0cfddf-69e0-4494-a106-e603168444a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:29:08 compute-0 nova_compute[254092]: 2025-11-25 16:29:08.800 254096 DEBUG oslo_concurrency.lockutils [req-d689d932-0def-4a52-80de-f10d93cef505 req-c1b325c3-a2ac-4c1e-bb96-3f1afe1f341a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-7b9f60af-05f0-43c7-bce7-227cb54ec793" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:29:08 compute-0 nova_compute[254092]: 2025-11-25 16:29:08.800 254096 DEBUG nova.network.neutron [req-d689d932-0def-4a52-80de-f10d93cef505 req-c1b325c3-a2ac-4c1e-bb96-3f1afe1f341a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Refreshing network info cache for port ab0cfddf-69e0-4494-a106-e603168444a4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:29:08 compute-0 nova_compute[254092]: 2025-11-25 16:29:08.803 254096 DEBUG nova.virt.libvirt.driver [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Start _get_guest_xml network_info=[{"id": "ab0cfddf-69e0-4494-a106-e603168444a4", "address": "fa:16:3e:7d:e6:95", "network": {"id": "09313f5b-a3fb-41e8-87c2-c636c3ed13c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-1359044902-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f99f039b80564f5684a91f3bc27c2249", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab0cfddf-69", "ovs_interfaceid": "ab0cfddf-69e0-4494-a106-e603168444a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:29:08 compute-0 nova_compute[254092]: 2025-11-25 16:29:08.807 254096 WARNING nova.virt.libvirt.driver [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:29:08 compute-0 nova_compute[254092]: 2025-11-25 16:29:08.811 254096 DEBUG nova.virt.libvirt.host [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:29:08 compute-0 nova_compute[254092]: 2025-11-25 16:29:08.812 254096 DEBUG nova.virt.libvirt.host [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:29:08 compute-0 nova_compute[254092]: 2025-11-25 16:29:08.818 254096 DEBUG nova.virt.libvirt.host [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:29:08 compute-0 nova_compute[254092]: 2025-11-25 16:29:08.818 254096 DEBUG nova.virt.libvirt.host [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:29:08 compute-0 nova_compute[254092]: 2025-11-25 16:29:08.819 254096 DEBUG nova.virt.libvirt.driver [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:29:08 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2826807844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:29:08 compute-0 nova_compute[254092]: 2025-11-25 16:29:08.819 254096 DEBUG nova.virt.hardware [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:29:08 compute-0 nova_compute[254092]: 2025-11-25 16:29:08.819 254096 DEBUG nova.virt.hardware [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:29:08 compute-0 nova_compute[254092]: 2025-11-25 16:29:08.819 254096 DEBUG nova.virt.hardware [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:29:08 compute-0 nova_compute[254092]: 2025-11-25 16:29:08.820 254096 DEBUG nova.virt.hardware [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:29:08 compute-0 nova_compute[254092]: 2025-11-25 16:29:08.820 254096 DEBUG nova.virt.hardware [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:29:08 compute-0 nova_compute[254092]: 2025-11-25 16:29:08.820 254096 DEBUG nova.virt.hardware [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:29:08 compute-0 nova_compute[254092]: 2025-11-25 16:29:08.820 254096 DEBUG nova.virt.hardware [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:29:08 compute-0 nova_compute[254092]: 2025-11-25 16:29:08.821 254096 DEBUG nova.virt.hardware [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:29:08 compute-0 nova_compute[254092]: 2025-11-25 16:29:08.821 254096 DEBUG nova.virt.hardware [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:29:08 compute-0 nova_compute[254092]: 2025-11-25 16:29:08.821 254096 DEBUG nova.virt.hardware [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:29:08 compute-0 nova_compute[254092]: 2025-11-25 16:29:08.821 254096 DEBUG nova.virt.hardware [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:29:08 compute-0 nova_compute[254092]: 2025-11-25 16:29:08.824 254096 DEBUG oslo_concurrency.processutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:29:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1244: 321 pgs: 321 active+clean; 172 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 2.6 MiB/s wr, 159 op/s
Nov 25 16:29:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:29:09 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2318660158' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:29:09 compute-0 nova_compute[254092]: 2025-11-25 16:29:09.235 254096 DEBUG oslo_concurrency.processutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:29:09 compute-0 nova_compute[254092]: 2025-11-25 16:29:09.254 254096 DEBUG nova.storage.rbd_utils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] rbd image 7b9f60af-05f0-43c7-bce7-227cb54ec793_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:29:09 compute-0 nova_compute[254092]: 2025-11-25 16:29:09.257 254096 DEBUG oslo_concurrency.processutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:29:09 compute-0 nova_compute[254092]: 2025-11-25 16:29:09.468 254096 DEBUG oslo_concurrency.processutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 f0cb83d8-c2a3-49d1-8c01-b9be9922abd1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:29:09 compute-0 nova_compute[254092]: 2025-11-25 16:29:09.521 254096 DEBUG nova.storage.rbd_utils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] resizing rbd image f0cb83d8-c2a3-49d1-8c01-b9be9922abd1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:29:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:29:09 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1394803419' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:29:09 compute-0 nova_compute[254092]: 2025-11-25 16:29:09.792 254096 DEBUG oslo_concurrency.processutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:29:09 compute-0 nova_compute[254092]: 2025-11-25 16:29:09.796 254096 DEBUG nova.virt.libvirt.vif [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:28:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1649655527',display_name='tempest-ServersTestJSON-server-1649655527',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1649655527',id=21,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCW+Ktdde8brSi3dDjuPJuQcQZxoIsAM7EI886G65qnVc9QFdS/VNcpSvi1Y/e9z6GKqL8cPDahYUZN5KOZSYOR8WETlyE2X3Pf8M2fEr9LePpk/dgU5OnSDx/LsY9Zvng==',key_name='tempest-keypair-749925212',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f99f039b80564f5684a91f3bc27c2249',ramdisk_id='',reservation_id='r-00qo170f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-841201839',owner_user_name='tempest-ServersTestJSON-841201839-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:29:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='09afd60d3afd4a57a14e7e93a66275f9',uuid=7b9f60af-05f0-43c7-bce7-227cb54ec793,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ab0cfddf-69e0-4494-a106-e603168444a4", "address": "fa:16:3e:7d:e6:95", "network": {"id": "09313f5b-a3fb-41e8-87c2-c636c3ed13c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-1359044902-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f99f039b80564f5684a91f3bc27c2249", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab0cfddf-69", "ovs_interfaceid": "ab0cfddf-69e0-4494-a106-e603168444a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:29:09 compute-0 nova_compute[254092]: 2025-11-25 16:29:09.797 254096 DEBUG nova.network.os_vif_util [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Converting VIF {"id": "ab0cfddf-69e0-4494-a106-e603168444a4", "address": "fa:16:3e:7d:e6:95", "network": {"id": "09313f5b-a3fb-41e8-87c2-c636c3ed13c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-1359044902-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f99f039b80564f5684a91f3bc27c2249", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab0cfddf-69", "ovs_interfaceid": "ab0cfddf-69e0-4494-a106-e603168444a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:29:09 compute-0 nova_compute[254092]: 2025-11-25 16:29:09.797 254096 DEBUG nova.network.os_vif_util [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:e6:95,bridge_name='br-int',has_traffic_filtering=True,id=ab0cfddf-69e0-4494-a106-e603168444a4,network=Network(09313f5b-a3fb-41e8-87c2-c636c3ed13c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab0cfddf-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:29:09 compute-0 nova_compute[254092]: 2025-11-25 16:29:09.798 254096 DEBUG nova.objects.instance [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7b9f60af-05f0-43c7-bce7-227cb54ec793 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:29:09 compute-0 nova_compute[254092]: 2025-11-25 16:29:09.820 254096 DEBUG nova.virt.libvirt.driver [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:29:09 compute-0 nova_compute[254092]:   <uuid>7b9f60af-05f0-43c7-bce7-227cb54ec793</uuid>
Nov 25 16:29:09 compute-0 nova_compute[254092]:   <name>instance-00000015</name>
Nov 25 16:29:09 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:29:09 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:29:09 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:29:09 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:       <nova:name>tempest-ServersTestJSON-server-1649655527</nova:name>
Nov 25 16:29:09 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:29:08</nova:creationTime>
Nov 25 16:29:09 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:29:09 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:29:09 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:29:09 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:29:09 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:29:09 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:29:09 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:29:09 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:29:09 compute-0 nova_compute[254092]:         <nova:user uuid="09afd60d3afd4a57a14e7e93a66275f9">tempest-ServersTestJSON-841201839-project-member</nova:user>
Nov 25 16:29:09 compute-0 nova_compute[254092]:         <nova:project uuid="f99f039b80564f5684a91f3bc27c2249">tempest-ServersTestJSON-841201839</nova:project>
Nov 25 16:29:09 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:29:09 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:29:09 compute-0 nova_compute[254092]:         <nova:port uuid="ab0cfddf-69e0-4494-a106-e603168444a4">
Nov 25 16:29:09 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:29:09 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:29:09 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:29:09 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:29:09 compute-0 nova_compute[254092]:     <system>
Nov 25 16:29:09 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:29:09 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:29:09 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:29:09 compute-0 nova_compute[254092]:       <entry name="serial">7b9f60af-05f0-43c7-bce7-227cb54ec793</entry>
Nov 25 16:29:09 compute-0 nova_compute[254092]:       <entry name="uuid">7b9f60af-05f0-43c7-bce7-227cb54ec793</entry>
Nov 25 16:29:09 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     </system>
Nov 25 16:29:09 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:29:09 compute-0 nova_compute[254092]:   <os>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:   </os>
Nov 25 16:29:09 compute-0 nova_compute[254092]:   <features>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:   </features>
Nov 25 16:29:09 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:29:09 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:29:09 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:29:09 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:29:09 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:29:09 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/7b9f60af-05f0-43c7-bce7-227cb54ec793_disk">
Nov 25 16:29:09 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:       </source>
Nov 25 16:29:09 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:29:09 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:29:09 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:29:09 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/7b9f60af-05f0-43c7-bce7-227cb54ec793_disk.config">
Nov 25 16:29:09 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:       </source>
Nov 25 16:29:09 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:29:09 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:29:09 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:29:09 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:7d:e6:95"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:       <target dev="tapab0cfddf-69"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:29:09 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/7b9f60af-05f0-43c7-bce7-227cb54ec793/console.log" append="off"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     <video>
Nov 25 16:29:09 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     </video>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:29:09 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:29:09 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:29:09 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:29:09 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:29:09 compute-0 nova_compute[254092]: </domain>
Nov 25 16:29:09 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:29:09 compute-0 nova_compute[254092]: 2025-11-25 16:29:09.820 254096 DEBUG nova.compute.manager [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Preparing to wait for external event network-vif-plugged-ab0cfddf-69e0-4494-a106-e603168444a4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:29:09 compute-0 nova_compute[254092]: 2025-11-25 16:29:09.820 254096 DEBUG oslo_concurrency.lockutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Acquiring lock "7b9f60af-05f0-43c7-bce7-227cb54ec793-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:29:09 compute-0 nova_compute[254092]: 2025-11-25 16:29:09.821 254096 DEBUG oslo_concurrency.lockutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Lock "7b9f60af-05f0-43c7-bce7-227cb54ec793-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:29:09 compute-0 nova_compute[254092]: 2025-11-25 16:29:09.821 254096 DEBUG oslo_concurrency.lockutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Lock "7b9f60af-05f0-43c7-bce7-227cb54ec793-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:29:09 compute-0 nova_compute[254092]: 2025-11-25 16:29:09.822 254096 DEBUG nova.virt.libvirt.vif [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:28:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1649655527',display_name='tempest-ServersTestJSON-server-1649655527',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1649655527',id=21,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCW+Ktdde8brSi3dDjuPJuQcQZxoIsAM7EI886G65qnVc9QFdS/VNcpSvi1Y/e9z6GKqL8cPDahYUZN5KOZSYOR8WETlyE2X3Pf8M2fEr9LePpk/dgU5OnSDx/LsY9Zvng==',key_name='tempest-keypair-749925212',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f99f039b80564f5684a91f3bc27c2249',ramdisk_id='',reservation_id='r-00qo170f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-841201839',owner_user_name='tempest-ServersTestJSON-841201839-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:29:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='09afd60d3afd4a57a14e7e93a66275f9',uuid=7b9f60af-05f0-43c7-bce7-227cb54ec793,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ab0cfddf-69e0-4494-a106-e603168444a4", "address": "fa:16:3e:7d:e6:95", "network": {"id": "09313f5b-a3fb-41e8-87c2-c636c3ed13c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-1359044902-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f99f039b80564f5684a91f3bc27c2249", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab0cfddf-69", "ovs_interfaceid": "ab0cfddf-69e0-4494-a106-e603168444a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:29:09 compute-0 nova_compute[254092]: 2025-11-25 16:29:09.822 254096 DEBUG nova.network.os_vif_util [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Converting VIF {"id": "ab0cfddf-69e0-4494-a106-e603168444a4", "address": "fa:16:3e:7d:e6:95", "network": {"id": "09313f5b-a3fb-41e8-87c2-c636c3ed13c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-1359044902-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f99f039b80564f5684a91f3bc27c2249", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab0cfddf-69", "ovs_interfaceid": "ab0cfddf-69e0-4494-a106-e603168444a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:29:09 compute-0 nova_compute[254092]: 2025-11-25 16:29:09.822 254096 DEBUG nova.network.os_vif_util [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:e6:95,bridge_name='br-int',has_traffic_filtering=True,id=ab0cfddf-69e0-4494-a106-e603168444a4,network=Network(09313f5b-a3fb-41e8-87c2-c636c3ed13c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab0cfddf-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:29:09 compute-0 nova_compute[254092]: 2025-11-25 16:29:09.823 254096 DEBUG os_vif [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:e6:95,bridge_name='br-int',has_traffic_filtering=True,id=ab0cfddf-69e0-4494-a106-e603168444a4,network=Network(09313f5b-a3fb-41e8-87c2-c636c3ed13c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab0cfddf-69') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:29:09 compute-0 nova_compute[254092]: 2025-11-25 16:29:09.823 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:09 compute-0 nova_compute[254092]: 2025-11-25 16:29:09.824 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:29:09 compute-0 nova_compute[254092]: 2025-11-25 16:29:09.824 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:29:09 compute-0 nova_compute[254092]: 2025-11-25 16:29:09.826 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:09 compute-0 nova_compute[254092]: 2025-11-25 16:29:09.827 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapab0cfddf-69, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:29:09 compute-0 nova_compute[254092]: 2025-11-25 16:29:09.827 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapab0cfddf-69, col_values=(('external_ids', {'iface-id': 'ab0cfddf-69e0-4494-a106-e603168444a4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7d:e6:95', 'vm-uuid': '7b9f60af-05f0-43c7-bce7-227cb54ec793'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:29:09 compute-0 nova_compute[254092]: 2025-11-25 16:29:09.828 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:09 compute-0 NetworkManager[48891]: <info>  [1764088149.8290] manager: (tapab0cfddf-69): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Nov 25 16:29:09 compute-0 nova_compute[254092]: 2025-11-25 16:29:09.832 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:29:09 compute-0 nova_compute[254092]: 2025-11-25 16:29:09.834 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:09 compute-0 nova_compute[254092]: 2025-11-25 16:29:09.834 254096 INFO os_vif [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:e6:95,bridge_name='br-int',has_traffic_filtering=True,id=ab0cfddf-69e0-4494-a106-e603168444a4,network=Network(09313f5b-a3fb-41e8-87c2-c636c3ed13c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab0cfddf-69')
Nov 25 16:29:09 compute-0 ceph-mon[74985]: pgmap v1244: 321 pgs: 321 active+clean; 172 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 2.6 MiB/s wr, 159 op/s
Nov 25 16:29:09 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2318660158' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:29:09 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1394803419' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:29:09 compute-0 nova_compute[254092]: 2025-11-25 16:29:09.942 254096 DEBUG nova.virt.libvirt.driver [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:29:09 compute-0 nova_compute[254092]: 2025-11-25 16:29:09.943 254096 DEBUG nova.virt.libvirt.driver [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:29:09 compute-0 nova_compute[254092]: 2025-11-25 16:29:09.943 254096 DEBUG nova.virt.libvirt.driver [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] No VIF found with MAC fa:16:3e:7d:e6:95, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:29:09 compute-0 nova_compute[254092]: 2025-11-25 16:29:09.944 254096 INFO nova.virt.libvirt.driver [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Using config drive
Nov 25 16:29:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:29:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:29:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:29:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:29:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:29:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:29:10 compute-0 nova_compute[254092]: 2025-11-25 16:29:10.061 254096 DEBUG nova.storage.rbd_utils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] rbd image 7b9f60af-05f0-43c7-bce7-227cb54ec793_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:29:10 compute-0 nova_compute[254092]: 2025-11-25 16:29:10.236 254096 DEBUG nova.network.neutron [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Successfully updated port: 0d1cf86d-6639-47eb-8de1-718476d1c006 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:29:10 compute-0 nova_compute[254092]: 2025-11-25 16:29:10.259 254096 DEBUG oslo_concurrency.lockutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "refresh_cache-f0cb83d8-c2a3-49d1-8c01-b9be9922abd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:29:10 compute-0 nova_compute[254092]: 2025-11-25 16:29:10.259 254096 DEBUG oslo_concurrency.lockutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquired lock "refresh_cache-f0cb83d8-c2a3-49d1-8c01-b9be9922abd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:29:10 compute-0 nova_compute[254092]: 2025-11-25 16:29:10.259 254096 DEBUG nova.network.neutron [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:29:10 compute-0 nova_compute[254092]: 2025-11-25 16:29:10.311 254096 DEBUG nova.objects.instance [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'migration_context' on Instance uuid f0cb83d8-c2a3-49d1-8c01-b9be9922abd1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:29:10 compute-0 nova_compute[254092]: 2025-11-25 16:29:10.322 254096 DEBUG nova.virt.libvirt.driver [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:29:10 compute-0 nova_compute[254092]: 2025-11-25 16:29:10.322 254096 DEBUG nova.virt.libvirt.driver [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Ensure instance console log exists: /var/lib/nova/instances/f0cb83d8-c2a3-49d1-8c01-b9be9922abd1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:29:10 compute-0 nova_compute[254092]: 2025-11-25 16:29:10.323 254096 DEBUG oslo_concurrency.lockutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:29:10 compute-0 nova_compute[254092]: 2025-11-25 16:29:10.323 254096 DEBUG oslo_concurrency.lockutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:29:10 compute-0 nova_compute[254092]: 2025-11-25 16:29:10.323 254096 DEBUG oslo_concurrency.lockutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:29:10 compute-0 nova_compute[254092]: 2025-11-25 16:29:10.326 254096 DEBUG nova.compute.manager [req-d0927c84-1a9e-48e6-ad8c-80c268aaed49 req-c21548d5-8ccd-435d-a37b-f4b0fc6ae4fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Received event network-changed-0d1cf86d-6639-47eb-8de1-718476d1c006 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:29:10 compute-0 nova_compute[254092]: 2025-11-25 16:29:10.326 254096 DEBUG nova.compute.manager [req-d0927c84-1a9e-48e6-ad8c-80c268aaed49 req-c21548d5-8ccd-435d-a37b-f4b0fc6ae4fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Refreshing instance network info cache due to event network-changed-0d1cf86d-6639-47eb-8de1-718476d1c006. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:29:10 compute-0 nova_compute[254092]: 2025-11-25 16:29:10.326 254096 DEBUG oslo_concurrency.lockutils [req-d0927c84-1a9e-48e6-ad8c-80c268aaed49 req-c21548d5-8ccd-435d-a37b-f4b0fc6ae4fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-f0cb83d8-c2a3-49d1-8c01-b9be9922abd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:29:10 compute-0 nova_compute[254092]: 2025-11-25 16:29:10.481 254096 DEBUG nova.network.neutron [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:29:10 compute-0 nova_compute[254092]: 2025-11-25 16:29:10.563 254096 INFO nova.virt.libvirt.driver [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Creating config drive at /var/lib/nova/instances/7b9f60af-05f0-43c7-bce7-227cb54ec793/disk.config
Nov 25 16:29:10 compute-0 nova_compute[254092]: 2025-11-25 16:29:10.578 254096 DEBUG oslo_concurrency.processutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7b9f60af-05f0-43c7-bce7-227cb54ec793/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_ztfi_ea execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:29:10 compute-0 nova_compute[254092]: 2025-11-25 16:29:10.611 254096 DEBUG nova.network.neutron [req-d689d932-0def-4a52-80de-f10d93cef505 req-c1b325c3-a2ac-4c1e-bb96-3f1afe1f341a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Updated VIF entry in instance network info cache for port ab0cfddf-69e0-4494-a106-e603168444a4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:29:10 compute-0 nova_compute[254092]: 2025-11-25 16:29:10.612 254096 DEBUG nova.network.neutron [req-d689d932-0def-4a52-80de-f10d93cef505 req-c1b325c3-a2ac-4c1e-bb96-3f1afe1f341a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Updating instance_info_cache with network_info: [{"id": "ab0cfddf-69e0-4494-a106-e603168444a4", "address": "fa:16:3e:7d:e6:95", "network": {"id": "09313f5b-a3fb-41e8-87c2-c636c3ed13c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-1359044902-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f99f039b80564f5684a91f3bc27c2249", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab0cfddf-69", "ovs_interfaceid": "ab0cfddf-69e0-4494-a106-e603168444a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:29:10 compute-0 nova_compute[254092]: 2025-11-25 16:29:10.636 254096 DEBUG oslo_concurrency.lockutils [req-d689d932-0def-4a52-80de-f10d93cef505 req-c1b325c3-a2ac-4c1e-bb96-3f1afe1f341a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-7b9f60af-05f0-43c7-bce7-227cb54ec793" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:29:10 compute-0 nova_compute[254092]: 2025-11-25 16:29:10.716 254096 DEBUG oslo_concurrency.processutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7b9f60af-05f0-43c7-bce7-227cb54ec793/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_ztfi_ea" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:29:10 compute-0 nova_compute[254092]: 2025-11-25 16:29:10.740 254096 DEBUG nova.storage.rbd_utils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] rbd image 7b9f60af-05f0-43c7-bce7-227cb54ec793_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:29:10 compute-0 nova_compute[254092]: 2025-11-25 16:29:10.744 254096 DEBUG oslo_concurrency.processutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7b9f60af-05f0-43c7-bce7-227cb54ec793/disk.config 7b9f60af-05f0-43c7-bce7-227cb54ec793_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:29:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1245: 321 pgs: 321 active+clean; 248 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 667 KiB/s rd, 7.4 MiB/s wr, 180 op/s
Nov 25 16:29:11 compute-0 ceph-mon[74985]: pgmap v1245: 321 pgs: 321 active+clean; 248 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 667 KiB/s rd, 7.4 MiB/s wr, 180 op/s
Nov 25 16:29:11 compute-0 nova_compute[254092]: 2025-11-25 16:29:11.511 254096 DEBUG oslo_concurrency.processutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7b9f60af-05f0-43c7-bce7-227cb54ec793/disk.config 7b9f60af-05f0-43c7-bce7-227cb54ec793_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.767s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:29:11 compute-0 nova_compute[254092]: 2025-11-25 16:29:11.512 254096 INFO nova.virt.libvirt.driver [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Deleting local config drive /var/lib/nova/instances/7b9f60af-05f0-43c7-bce7-227cb54ec793/disk.config because it was imported into RBD.
Nov 25 16:29:11 compute-0 kernel: tapab0cfddf-69: entered promiscuous mode
Nov 25 16:29:11 compute-0 NetworkManager[48891]: <info>  [1764088151.5659] manager: (tapab0cfddf-69): new Tun device (/org/freedesktop/NetworkManager/Devices/47)
Nov 25 16:29:11 compute-0 nova_compute[254092]: 2025-11-25 16:29:11.569 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:11 compute-0 ovn_controller[153477]: 2025-11-25T16:29:11Z|00064|binding|INFO|Claiming lport ab0cfddf-69e0-4494-a106-e603168444a4 for this chassis.
Nov 25 16:29:11 compute-0 ovn_controller[153477]: 2025-11-25T16:29:11Z|00065|binding|INFO|ab0cfddf-69e0-4494-a106-e603168444a4: Claiming fa:16:3e:7d:e6:95 10.100.0.13
Nov 25 16:29:11 compute-0 nova_compute[254092]: 2025-11-25 16:29:11.571 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:11 compute-0 systemd-udevd[283480]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:29:11 compute-0 systemd-machined[216343]: New machine qemu-23-instance-00000015.
Nov 25 16:29:11 compute-0 NetworkManager[48891]: <info>  [1764088151.6122] device (tapab0cfddf-69): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:29:11 compute-0 NetworkManager[48891]: <info>  [1764088151.6135] device (tapab0cfddf-69): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.619 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:e6:95 10.100.0.13'], port_security=['fa:16:3e:7d:e6:95 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '7b9f60af-05f0-43c7-bce7-227cb54ec793', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-09313f5b-a3fb-41e8-87c2-c636c3ed13c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f99f039b80564f5684a91f3bc27c2249', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a14e6fc5-327e-44fa-8134-4f62c2b97373', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=16d598ad-25ce-4f41-98a7-2a9985da8936, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=ab0cfddf-69e0-4494-a106-e603168444a4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.621 163338 INFO neutron.agent.ovn.metadata.agent [-] Port ab0cfddf-69e0-4494-a106-e603168444a4 in datapath 09313f5b-a3fb-41e8-87c2-c636c3ed13c6 bound to our chassis
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.622 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 09313f5b-a3fb-41e8-87c2-c636c3ed13c6
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.635 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3c349ab9-6c06-4b6a-a3fd-eef8dfca9d99]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.636 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap09313f5b-a1 in ovnmeta-09313f5b-a3fb-41e8-87c2-c636c3ed13c6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.638 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap09313f5b-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.638 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1bae544b-2166-484d-8c7a-29f68b9d280c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.639 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[48bf4560-4ac6-4ac5-91a7-a27adfd8c40e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:11 compute-0 systemd[1]: Started Virtual Machine qemu-23-instance-00000015.
Nov 25 16:29:11 compute-0 nova_compute[254092]: 2025-11-25 16:29:11.648 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:11 compute-0 ovn_controller[153477]: 2025-11-25T16:29:11Z|00066|binding|INFO|Setting lport ab0cfddf-69e0-4494-a106-e603168444a4 ovn-installed in OVS
Nov 25 16:29:11 compute-0 ovn_controller[153477]: 2025-11-25T16:29:11Z|00067|binding|INFO|Setting lport ab0cfddf-69e0-4494-a106-e603168444a4 up in Southbound
Nov 25 16:29:11 compute-0 nova_compute[254092]: 2025-11-25 16:29:11.653 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.655 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[6783e455-2f6e-434c-9a2e-478530e61a5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.673 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e8e7960a-bc06-4533-8722-024d5a012860]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.710 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[afc3a34e-8a92-48e3-87d6-e363a52add1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:11 compute-0 systemd-udevd[283483]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:29:11 compute-0 NetworkManager[48891]: <info>  [1764088151.7178] manager: (tap09313f5b-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/48)
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.718 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ad7bf524-85bc-467a-9878-9dccf1598662]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.752 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[7aab9342-fd03-4f98-abce-fa0efc999ae4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.757 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[3f67fd5f-fdfd-45f7-ad0b-0779cecd05fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:11 compute-0 NetworkManager[48891]: <info>  [1764088151.7811] device (tap09313f5b-a0): carrier: link connected
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.786 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[535b145f-f1cf-4b9e-bf3d-7b7b10faa817]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.801 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fe4dd850-64de-467d-b68e-72b0357d7f06]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap09313f5b-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:45:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460937, 'reachable_time': 34619, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283514, 'error': None, 'target': 'ovnmeta-09313f5b-a3fb-41e8-87c2-c636c3ed13c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.815 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[692e761a-4e82-4e44-bf02-3fc4cc7041d0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe84:454c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 460937, 'tstamp': 460937}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283515, 'error': None, 'target': 'ovnmeta-09313f5b-a3fb-41e8-87c2-c636c3ed13c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.834 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a6bc4195-ad31-4af8-a06b-a1830aa1fe5a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap09313f5b-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:45:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460937, 'reachable_time': 34619, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 283516, 'error': None, 'target': 'ovnmeta-09313f5b-a3fb-41e8-87c2-c636c3ed13c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.865 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4c8650a1-8847-46b6-b325-30e48789e286]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.932 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f3bb13e6-7d55-4f1d-a593-5df9eccc8bc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.934 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09313f5b-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.934 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.935 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap09313f5b-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:29:11 compute-0 NetworkManager[48891]: <info>  [1764088151.9380] manager: (tap09313f5b-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Nov 25 16:29:11 compute-0 kernel: tap09313f5b-a0: entered promiscuous mode
Nov 25 16:29:11 compute-0 nova_compute[254092]: 2025-11-25 16:29:11.938 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.941 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap09313f5b-a0, col_values=(('external_ids', {'iface-id': '491a5ecd-1693-49b3-bd97-98ff227e2ff8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:29:11 compute-0 nova_compute[254092]: 2025-11-25 16:29:11.943 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:11 compute-0 ovn_controller[153477]: 2025-11-25T16:29:11Z|00068|binding|INFO|Releasing lport 491a5ecd-1693-49b3-bd97-98ff227e2ff8 from this chassis (sb_readonly=0)
Nov 25 16:29:11 compute-0 nova_compute[254092]: 2025-11-25 16:29:11.974 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.976 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/09313f5b-a3fb-41e8-87c2-c636c3ed13c6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/09313f5b-a3fb-41e8-87c2-c636c3ed13c6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.977 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[02daecc3-2d06-4104-9c41-c648fce37797]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.977 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-09313f5b-a3fb-41e8-87c2-c636c3ed13c6
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/09313f5b-a3fb-41e8-87c2-c636c3ed13c6.pid.haproxy
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 09313f5b-a3fb-41e8-87c2-c636c3ed13c6
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:29:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:11.978 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-09313f5b-a3fb-41e8-87c2-c636c3ed13c6', 'env', 'PROCESS_TAG=haproxy-09313f5b-a3fb-41e8-87c2-c636c3ed13c6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/09313f5b-a3fb-41e8-87c2-c636c3ed13c6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:29:12 compute-0 nova_compute[254092]: 2025-11-25 16:29:12.074 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088152.0715392, 7b9f60af-05f0-43c7-bce7-227cb54ec793 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:29:12 compute-0 nova_compute[254092]: 2025-11-25 16:29:12.075 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] VM Started (Lifecycle Event)
Nov 25 16:29:12 compute-0 nova_compute[254092]: 2025-11-25 16:29:12.101 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:29:12 compute-0 nova_compute[254092]: 2025-11-25 16:29:12.106 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088152.0724978, 7b9f60af-05f0-43c7-bce7-227cb54ec793 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:29:12 compute-0 nova_compute[254092]: 2025-11-25 16:29:12.106 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] VM Paused (Lifecycle Event)
Nov 25 16:29:12 compute-0 nova_compute[254092]: 2025-11-25 16:29:12.127 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:29:12 compute-0 nova_compute[254092]: 2025-11-25 16:29:12.132 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:29:12 compute-0 nova_compute[254092]: 2025-11-25 16:29:12.152 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:29:12 compute-0 nova_compute[254092]: 2025-11-25 16:29:12.434 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:12 compute-0 podman[283591]: 2025-11-25 16:29:12.358770182 +0000 UTC m=+0.025790843 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:29:12 compute-0 podman[283591]: 2025-11-25 16:29:12.612555043 +0000 UTC m=+0.279575694 container create 6baa1db3f236147dc0bbd99eed18035c811ce0835158bee0d374c9945781d4a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-09313f5b-a3fb-41e8-87c2-c636c3ed13c6, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 25 16:29:12 compute-0 systemd[1]: Started libpod-conmon-6baa1db3f236147dc0bbd99eed18035c811ce0835158bee0d374c9945781d4a0.scope.
Nov 25 16:29:12 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:29:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeccf6ecdea4aa475a47ca66b66b3f98184438b7398c8f4397fb73ed72c289ab/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:29:12 compute-0 nova_compute[254092]: 2025-11-25 16:29:12.759 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:12.759 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:29:12 compute-0 podman[283591]: 2025-11-25 16:29:12.811661948 +0000 UTC m=+0.478682629 container init 6baa1db3f236147dc0bbd99eed18035c811ce0835158bee0d374c9945781d4a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-09313f5b-a3fb-41e8-87c2-c636c3ed13c6, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 25 16:29:12 compute-0 podman[283591]: 2025-11-25 16:29:12.817084426 +0000 UTC m=+0.484105057 container start 6baa1db3f236147dc0bbd99eed18035c811ce0835158bee0d374c9945781d4a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-09313f5b-a3fb-41e8-87c2-c636c3ed13c6, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 16:29:12 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:29:12 compute-0 neutron-haproxy-ovnmeta-09313f5b-a3fb-41e8-87c2-c636c3ed13c6[283606]: [NOTICE]   (283610) : New worker (283612) forked
Nov 25 16:29:12 compute-0 neutron-haproxy-ovnmeta-09313f5b-a3fb-41e8-87c2-c636c3ed13c6[283606]: [NOTICE]   (283610) : Loading success.
Nov 25 16:29:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1246: 321 pgs: 321 active+clean; 282 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 806 KiB/s rd, 9.3 MiB/s wr, 223 op/s
Nov 25 16:29:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:12.943 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 16:29:12 compute-0 nova_compute[254092]: 2025-11-25 16:29:12.957 254096 DEBUG nova.compute.manager [req-7a62cc99-5b96-440b-b277-ed26df074c8e req-2a939739-c6b4-4bca-9d16-549cfb06be97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Received event network-vif-plugged-ab0cfddf-69e0-4494-a106-e603168444a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:29:12 compute-0 nova_compute[254092]: 2025-11-25 16:29:12.957 254096 DEBUG oslo_concurrency.lockutils [req-7a62cc99-5b96-440b-b277-ed26df074c8e req-2a939739-c6b4-4bca-9d16-549cfb06be97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7b9f60af-05f0-43c7-bce7-227cb54ec793-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:29:12 compute-0 nova_compute[254092]: 2025-11-25 16:29:12.958 254096 DEBUG oslo_concurrency.lockutils [req-7a62cc99-5b96-440b-b277-ed26df074c8e req-2a939739-c6b4-4bca-9d16-549cfb06be97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7b9f60af-05f0-43c7-bce7-227cb54ec793-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:29:12 compute-0 nova_compute[254092]: 2025-11-25 16:29:12.958 254096 DEBUG oslo_concurrency.lockutils [req-7a62cc99-5b96-440b-b277-ed26df074c8e req-2a939739-c6b4-4bca-9d16-549cfb06be97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7b9f60af-05f0-43c7-bce7-227cb54ec793-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:29:12 compute-0 nova_compute[254092]: 2025-11-25 16:29:12.958 254096 DEBUG nova.compute.manager [req-7a62cc99-5b96-440b-b277-ed26df074c8e req-2a939739-c6b4-4bca-9d16-549cfb06be97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Processing event network-vif-plugged-ab0cfddf-69e0-4494-a106-e603168444a4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:29:12 compute-0 nova_compute[254092]: 2025-11-25 16:29:12.959 254096 DEBUG nova.compute.manager [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:29:12 compute-0 nova_compute[254092]: 2025-11-25 16:29:12.962 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088152.9625418, 7b9f60af-05f0-43c7-bce7-227cb54ec793 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:29:12 compute-0 nova_compute[254092]: 2025-11-25 16:29:12.963 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] VM Resumed (Lifecycle Event)
Nov 25 16:29:12 compute-0 nova_compute[254092]: 2025-11-25 16:29:12.964 254096 DEBUG nova.virt.libvirt.driver [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:29:12 compute-0 nova_compute[254092]: 2025-11-25 16:29:12.968 254096 INFO nova.virt.libvirt.driver [-] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Instance spawned successfully.
Nov 25 16:29:12 compute-0 nova_compute[254092]: 2025-11-25 16:29:12.968 254096 DEBUG nova.virt.libvirt.driver [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:29:12 compute-0 nova_compute[254092]: 2025-11-25 16:29:12.994 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:29:13 compute-0 nova_compute[254092]: 2025-11-25 16:29:13.002 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:29:13 compute-0 nova_compute[254092]: 2025-11-25 16:29:13.007 254096 DEBUG nova.virt.libvirt.driver [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:29:13 compute-0 nova_compute[254092]: 2025-11-25 16:29:13.008 254096 DEBUG nova.virt.libvirt.driver [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:29:13 compute-0 nova_compute[254092]: 2025-11-25 16:29:13.008 254096 DEBUG nova.virt.libvirt.driver [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:29:13 compute-0 nova_compute[254092]: 2025-11-25 16:29:13.009 254096 DEBUG nova.virt.libvirt.driver [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:29:13 compute-0 nova_compute[254092]: 2025-11-25 16:29:13.009 254096 DEBUG nova.virt.libvirt.driver [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:29:13 compute-0 ceph-mon[74985]: pgmap v1246: 321 pgs: 321 active+clean; 282 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 806 KiB/s rd, 9.3 MiB/s wr, 223 op/s
Nov 25 16:29:13 compute-0 nova_compute[254092]: 2025-11-25 16:29:13.010 254096 DEBUG nova.virt.libvirt.driver [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:29:13 compute-0 nova_compute[254092]: 2025-11-25 16:29:13.039 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:29:13 compute-0 nova_compute[254092]: 2025-11-25 16:29:13.232 254096 INFO nova.compute.manager [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Took 10.84 seconds to spawn the instance on the hypervisor.
Nov 25 16:29:13 compute-0 nova_compute[254092]: 2025-11-25 16:29:13.232 254096 DEBUG nova.compute.manager [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:29:13 compute-0 nova_compute[254092]: 2025-11-25 16:29:13.319 254096 DEBUG nova.network.neutron [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Updating instance_info_cache with network_info: [{"id": "0d1cf86d-6639-47eb-8de1-718476d1c006", "address": "fa:16:3e:78:52:60", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1cf86d-66", "ovs_interfaceid": "0d1cf86d-6639-47eb-8de1-718476d1c006", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:29:13 compute-0 nova_compute[254092]: 2025-11-25 16:29:13.390 254096 DEBUG oslo_concurrency.lockutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Releasing lock "refresh_cache-f0cb83d8-c2a3-49d1-8c01-b9be9922abd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:29:13 compute-0 nova_compute[254092]: 2025-11-25 16:29:13.390 254096 DEBUG nova.compute.manager [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Instance network_info: |[{"id": "0d1cf86d-6639-47eb-8de1-718476d1c006", "address": "fa:16:3e:78:52:60", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1cf86d-66", "ovs_interfaceid": "0d1cf86d-6639-47eb-8de1-718476d1c006", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:29:13 compute-0 nova_compute[254092]: 2025-11-25 16:29:13.391 254096 INFO nova.compute.manager [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Took 12.15 seconds to build instance.
Nov 25 16:29:13 compute-0 nova_compute[254092]: 2025-11-25 16:29:13.392 254096 DEBUG oslo_concurrency.lockutils [req-d0927c84-1a9e-48e6-ad8c-80c268aaed49 req-c21548d5-8ccd-435d-a37b-f4b0fc6ae4fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-f0cb83d8-c2a3-49d1-8c01-b9be9922abd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:29:13 compute-0 nova_compute[254092]: 2025-11-25 16:29:13.392 254096 DEBUG nova.network.neutron [req-d0927c84-1a9e-48e6-ad8c-80c268aaed49 req-c21548d5-8ccd-435d-a37b-f4b0fc6ae4fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Refreshing network info cache for port 0d1cf86d-6639-47eb-8de1-718476d1c006 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:29:13 compute-0 nova_compute[254092]: 2025-11-25 16:29:13.394 254096 DEBUG nova.virt.libvirt.driver [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Start _get_guest_xml network_info=[{"id": "0d1cf86d-6639-47eb-8de1-718476d1c006", "address": "fa:16:3e:78:52:60", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1cf86d-66", "ovs_interfaceid": "0d1cf86d-6639-47eb-8de1-718476d1c006", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:29:13 compute-0 nova_compute[254092]: 2025-11-25 16:29:13.398 254096 WARNING nova.virt.libvirt.driver [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:29:13 compute-0 nova_compute[254092]: 2025-11-25 16:29:13.403 254096 DEBUG nova.virt.libvirt.host [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:29:13 compute-0 nova_compute[254092]: 2025-11-25 16:29:13.404 254096 DEBUG nova.virt.libvirt.host [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:29:13 compute-0 nova_compute[254092]: 2025-11-25 16:29:13.407 254096 DEBUG nova.virt.libvirt.host [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:29:13 compute-0 nova_compute[254092]: 2025-11-25 16:29:13.407 254096 DEBUG nova.virt.libvirt.host [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:29:13 compute-0 nova_compute[254092]: 2025-11-25 16:29:13.407 254096 DEBUG nova.virt.libvirt.driver [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:29:13 compute-0 nova_compute[254092]: 2025-11-25 16:29:13.407 254096 DEBUG nova.virt.hardware [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:29:13 compute-0 nova_compute[254092]: 2025-11-25 16:29:13.408 254096 DEBUG nova.virt.hardware [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:29:13 compute-0 nova_compute[254092]: 2025-11-25 16:29:13.408 254096 DEBUG nova.virt.hardware [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:29:13 compute-0 nova_compute[254092]: 2025-11-25 16:29:13.408 254096 DEBUG nova.virt.hardware [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:29:13 compute-0 nova_compute[254092]: 2025-11-25 16:29:13.408 254096 DEBUG nova.virt.hardware [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:29:13 compute-0 nova_compute[254092]: 2025-11-25 16:29:13.408 254096 DEBUG nova.virt.hardware [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:29:13 compute-0 nova_compute[254092]: 2025-11-25 16:29:13.409 254096 DEBUG nova.virt.hardware [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:29:13 compute-0 nova_compute[254092]: 2025-11-25 16:29:13.409 254096 DEBUG nova.virt.hardware [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:29:13 compute-0 nova_compute[254092]: 2025-11-25 16:29:13.409 254096 DEBUG nova.virt.hardware [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:29:13 compute-0 nova_compute[254092]: 2025-11-25 16:29:13.409 254096 DEBUG nova.virt.hardware [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:29:13 compute-0 nova_compute[254092]: 2025-11-25 16:29:13.409 254096 DEBUG nova.virt.hardware [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:29:13 compute-0 nova_compute[254092]: 2025-11-25 16:29:13.412 254096 DEBUG oslo_concurrency.processutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:29:13 compute-0 nova_compute[254092]: 2025-11-25 16:29:13.457 254096 DEBUG oslo_concurrency.lockutils [None req-125b3caa-91c7-4e48-9b7c-4b6430829c5a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Lock "7b9f60af-05f0-43c7-bce7-227cb54ec793" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.463s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:29:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:13.601 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:29:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:13.601 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:29:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:13.602 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:29:13 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:29:13 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/124175984' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:29:13 compute-0 nova_compute[254092]: 2025-11-25 16:29:13.926 254096 DEBUG oslo_concurrency.processutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:29:13 compute-0 nova_compute[254092]: 2025-11-25 16:29:13.946 254096 DEBUG nova.storage.rbd_utils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image f0cb83d8-c2a3-49d1-8c01-b9be9922abd1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:29:13 compute-0 nova_compute[254092]: 2025-11-25 16:29:13.950 254096 DEBUG oslo_concurrency.processutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:29:14 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/124175984' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:29:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:29:14 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/385547933' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:29:14 compute-0 nova_compute[254092]: 2025-11-25 16:29:14.418 254096 DEBUG oslo_concurrency.processutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:29:14 compute-0 nova_compute[254092]: 2025-11-25 16:29:14.422 254096 DEBUG nova.virt.libvirt.vif [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:29:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-2013800485',display_name='tempest-ServersAdminTestJSON-server-2013800485',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-2013800485',id=22,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a4c9877f37b34e7fba5f3cc9642d1a48',ramdisk_id='',reservation_id='r-p696wseq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-2045227182',owner_user_name='tempest-ServersAdminTestJSON-2045227182-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:29:06Z,user_data=None,user_id='a8e217b742fe4a57a4ac4ffc776670fe',uuid=f0cb83d8-c2a3-49d1-8c01-b9be9922abd1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0d1cf86d-6639-47eb-8de1-718476d1c006", "address": "fa:16:3e:78:52:60", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1cf86d-66", "ovs_interfaceid": "0d1cf86d-6639-47eb-8de1-718476d1c006", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:29:14 compute-0 nova_compute[254092]: 2025-11-25 16:29:14.423 254096 DEBUG nova.network.os_vif_util [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converting VIF {"id": "0d1cf86d-6639-47eb-8de1-718476d1c006", "address": "fa:16:3e:78:52:60", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1cf86d-66", "ovs_interfaceid": "0d1cf86d-6639-47eb-8de1-718476d1c006", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:29:14 compute-0 nova_compute[254092]: 2025-11-25 16:29:14.425 254096 DEBUG nova.network.os_vif_util [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:52:60,bridge_name='br-int',has_traffic_filtering=True,id=0d1cf86d-6639-47eb-8de1-718476d1c006,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d1cf86d-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:29:14 compute-0 nova_compute[254092]: 2025-11-25 16:29:14.429 254096 DEBUG nova.objects.instance [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'pci_devices' on Instance uuid f0cb83d8-c2a3-49d1-8c01-b9be9922abd1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:29:14 compute-0 nova_compute[254092]: 2025-11-25 16:29:14.445 254096 DEBUG nova.virt.libvirt.driver [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:29:14 compute-0 nova_compute[254092]:   <uuid>f0cb83d8-c2a3-49d1-8c01-b9be9922abd1</uuid>
Nov 25 16:29:14 compute-0 nova_compute[254092]:   <name>instance-00000016</name>
Nov 25 16:29:14 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:29:14 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:29:14 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:29:14 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:       <nova:name>tempest-ServersAdminTestJSON-server-2013800485</nova:name>
Nov 25 16:29:14 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:29:13</nova:creationTime>
Nov 25 16:29:14 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:29:14 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:29:14 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:29:14 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:29:14 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:29:14 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:29:14 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:29:14 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:29:14 compute-0 nova_compute[254092]:         <nova:user uuid="a8e217b742fe4a57a4ac4ffc776670fe">tempest-ServersAdminTestJSON-2045227182-project-member</nova:user>
Nov 25 16:29:14 compute-0 nova_compute[254092]:         <nova:project uuid="a4c9877f37b34e7fba5f3cc9642d1a48">tempest-ServersAdminTestJSON-2045227182</nova:project>
Nov 25 16:29:14 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:29:14 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:29:14 compute-0 nova_compute[254092]:         <nova:port uuid="0d1cf86d-6639-47eb-8de1-718476d1c006">
Nov 25 16:29:14 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:29:14 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:29:14 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:29:14 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:29:14 compute-0 nova_compute[254092]:     <system>
Nov 25 16:29:14 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:29:14 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:29:14 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:29:14 compute-0 nova_compute[254092]:       <entry name="serial">f0cb83d8-c2a3-49d1-8c01-b9be9922abd1</entry>
Nov 25 16:29:14 compute-0 nova_compute[254092]:       <entry name="uuid">f0cb83d8-c2a3-49d1-8c01-b9be9922abd1</entry>
Nov 25 16:29:14 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     </system>
Nov 25 16:29:14 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:29:14 compute-0 nova_compute[254092]:   <os>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:   </os>
Nov 25 16:29:14 compute-0 nova_compute[254092]:   <features>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:   </features>
Nov 25 16:29:14 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:29:14 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:29:14 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:29:14 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:29:14 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:29:14 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/f0cb83d8-c2a3-49d1-8c01-b9be9922abd1_disk">
Nov 25 16:29:14 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:       </source>
Nov 25 16:29:14 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:29:14 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:29:14 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:29:14 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/f0cb83d8-c2a3-49d1-8c01-b9be9922abd1_disk.config">
Nov 25 16:29:14 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:       </source>
Nov 25 16:29:14 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:29:14 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:29:14 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:29:14 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:78:52:60"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:       <target dev="tap0d1cf86d-66"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:29:14 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/f0cb83d8-c2a3-49d1-8c01-b9be9922abd1/console.log" append="off"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     <video>
Nov 25 16:29:14 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     </video>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:29:14 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:29:14 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:29:14 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:29:14 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:29:14 compute-0 nova_compute[254092]: </domain>
Nov 25 16:29:14 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:29:14 compute-0 nova_compute[254092]: 2025-11-25 16:29:14.446 254096 DEBUG nova.compute.manager [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Preparing to wait for external event network-vif-plugged-0d1cf86d-6639-47eb-8de1-718476d1c006 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:29:14 compute-0 nova_compute[254092]: 2025-11-25 16:29:14.446 254096 DEBUG oslo_concurrency.lockutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "f0cb83d8-c2a3-49d1-8c01-b9be9922abd1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:29:14 compute-0 nova_compute[254092]: 2025-11-25 16:29:14.447 254096 DEBUG oslo_concurrency.lockutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "f0cb83d8-c2a3-49d1-8c01-b9be9922abd1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:29:14 compute-0 nova_compute[254092]: 2025-11-25 16:29:14.448 254096 DEBUG oslo_concurrency.lockutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "f0cb83d8-c2a3-49d1-8c01-b9be9922abd1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:29:14 compute-0 nova_compute[254092]: 2025-11-25 16:29:14.449 254096 DEBUG nova.virt.libvirt.vif [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:29:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-2013800485',display_name='tempest-ServersAdminTestJSON-server-2013800485',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-2013800485',id=22,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a4c9877f37b34e7fba5f3cc9642d1a48',ramdisk_id='',reservation_id='r-p696wseq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-2045227182',owner_user_name='tempest-ServersAdminTestJSON-2045227182-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:29:06Z,user_data=None,user_id='a8e217b742fe4a57a4ac4ffc776670fe',uuid=f0cb83d8-c2a3-49d1-8c01-b9be9922abd1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0d1cf86d-6639-47eb-8de1-718476d1c006", "address": "fa:16:3e:78:52:60", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1cf86d-66", "ovs_interfaceid": "0d1cf86d-6639-47eb-8de1-718476d1c006", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:29:14 compute-0 nova_compute[254092]: 2025-11-25 16:29:14.450 254096 DEBUG nova.network.os_vif_util [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converting VIF {"id": "0d1cf86d-6639-47eb-8de1-718476d1c006", "address": "fa:16:3e:78:52:60", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1cf86d-66", "ovs_interfaceid": "0d1cf86d-6639-47eb-8de1-718476d1c006", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:29:14 compute-0 nova_compute[254092]: 2025-11-25 16:29:14.451 254096 DEBUG nova.network.os_vif_util [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:52:60,bridge_name='br-int',has_traffic_filtering=True,id=0d1cf86d-6639-47eb-8de1-718476d1c006,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d1cf86d-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:29:14 compute-0 nova_compute[254092]: 2025-11-25 16:29:14.452 254096 DEBUG os_vif [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:52:60,bridge_name='br-int',has_traffic_filtering=True,id=0d1cf86d-6639-47eb-8de1-718476d1c006,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d1cf86d-66') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:29:14 compute-0 nova_compute[254092]: 2025-11-25 16:29:14.453 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:14 compute-0 nova_compute[254092]: 2025-11-25 16:29:14.454 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:29:14 compute-0 nova_compute[254092]: 2025-11-25 16:29:14.455 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:29:14 compute-0 nova_compute[254092]: 2025-11-25 16:29:14.459 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:14 compute-0 nova_compute[254092]: 2025-11-25 16:29:14.460 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0d1cf86d-66, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:29:14 compute-0 nova_compute[254092]: 2025-11-25 16:29:14.461 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0d1cf86d-66, col_values=(('external_ids', {'iface-id': '0d1cf86d-6639-47eb-8de1-718476d1c006', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:78:52:60', 'vm-uuid': 'f0cb83d8-c2a3-49d1-8c01-b9be9922abd1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:29:14 compute-0 NetworkManager[48891]: <info>  [1764088154.4647] manager: (tap0d1cf86d-66): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Nov 25 16:29:14 compute-0 nova_compute[254092]: 2025-11-25 16:29:14.465 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:14 compute-0 nova_compute[254092]: 2025-11-25 16:29:14.468 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:29:14 compute-0 nova_compute[254092]: 2025-11-25 16:29:14.476 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:14 compute-0 nova_compute[254092]: 2025-11-25 16:29:14.477 254096 INFO os_vif [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:52:60,bridge_name='br-int',has_traffic_filtering=True,id=0d1cf86d-6639-47eb-8de1-718476d1c006,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d1cf86d-66')
Nov 25 16:29:14 compute-0 nova_compute[254092]: 2025-11-25 16:29:14.581 254096 DEBUG nova.virt.libvirt.driver [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:29:14 compute-0 nova_compute[254092]: 2025-11-25 16:29:14.582 254096 DEBUG nova.virt.libvirt.driver [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:29:14 compute-0 nova_compute[254092]: 2025-11-25 16:29:14.583 254096 DEBUG nova.virt.libvirt.driver [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] No VIF found with MAC fa:16:3e:78:52:60, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:29:14 compute-0 nova_compute[254092]: 2025-11-25 16:29:14.584 254096 INFO nova.virt.libvirt.driver [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Using config drive
Nov 25 16:29:14 compute-0 nova_compute[254092]: 2025-11-25 16:29:14.619 254096 DEBUG nova.storage.rbd_utils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image f0cb83d8-c2a3-49d1-8c01-b9be9922abd1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:29:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1247: 321 pgs: 321 active+clean; 282 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 673 KiB/s rd, 7.8 MiB/s wr, 186 op/s
Nov 25 16:29:15 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/385547933' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:29:15 compute-0 ceph-mon[74985]: pgmap v1247: 321 pgs: 321 active+clean; 282 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 673 KiB/s rd, 7.8 MiB/s wr, 186 op/s
Nov 25 16:29:15 compute-0 nova_compute[254092]: 2025-11-25 16:29:15.089 254096 DEBUG nova.compute.manager [req-904fd6f3-0940-4131-96c8-d09908a13203 req-aa6aebcd-4a12-4960-a294-775ba5d1e2f4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Received event network-vif-plugged-ab0cfddf-69e0-4494-a106-e603168444a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:29:15 compute-0 nova_compute[254092]: 2025-11-25 16:29:15.089 254096 DEBUG oslo_concurrency.lockutils [req-904fd6f3-0940-4131-96c8-d09908a13203 req-aa6aebcd-4a12-4960-a294-775ba5d1e2f4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7b9f60af-05f0-43c7-bce7-227cb54ec793-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:29:15 compute-0 nova_compute[254092]: 2025-11-25 16:29:15.089 254096 DEBUG oslo_concurrency.lockutils [req-904fd6f3-0940-4131-96c8-d09908a13203 req-aa6aebcd-4a12-4960-a294-775ba5d1e2f4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7b9f60af-05f0-43c7-bce7-227cb54ec793-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:29:15 compute-0 nova_compute[254092]: 2025-11-25 16:29:15.089 254096 DEBUG oslo_concurrency.lockutils [req-904fd6f3-0940-4131-96c8-d09908a13203 req-aa6aebcd-4a12-4960-a294-775ba5d1e2f4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7b9f60af-05f0-43c7-bce7-227cb54ec793-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:29:15 compute-0 nova_compute[254092]: 2025-11-25 16:29:15.090 254096 DEBUG nova.compute.manager [req-904fd6f3-0940-4131-96c8-d09908a13203 req-aa6aebcd-4a12-4960-a294-775ba5d1e2f4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] No waiting events found dispatching network-vif-plugged-ab0cfddf-69e0-4494-a106-e603168444a4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:29:15 compute-0 nova_compute[254092]: 2025-11-25 16:29:15.090 254096 WARNING nova.compute.manager [req-904fd6f3-0940-4131-96c8-d09908a13203 req-aa6aebcd-4a12-4960-a294-775ba5d1e2f4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Received unexpected event network-vif-plugged-ab0cfddf-69e0-4494-a106-e603168444a4 for instance with vm_state active and task_state None.
Nov 25 16:29:15 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Nov 25 16:29:15 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:29:15.239826) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 16:29:15 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Nov 25 16:29:15 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088155239860, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1351, "num_deletes": 260, "total_data_size": 1763308, "memory_usage": 1791824, "flush_reason": "Manual Compaction"}
Nov 25 16:29:15 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Nov 25 16:29:15 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088155350734, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 1740717, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24614, "largest_seqno": 25964, "table_properties": {"data_size": 1734268, "index_size": 3588, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14575, "raw_average_key_size": 20, "raw_value_size": 1721027, "raw_average_value_size": 2465, "num_data_blocks": 158, "num_entries": 698, "num_filter_entries": 698, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764088063, "oldest_key_time": 1764088063, "file_creation_time": 1764088155, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:29:15 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 110949 microseconds, and 4864 cpu microseconds.
Nov 25 16:29:15 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:29:15 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:29:15.350774) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 1740717 bytes OK
Nov 25 16:29:15 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:29:15.350791) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Nov 25 16:29:15 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:29:15.429333) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Nov 25 16:29:15 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:29:15.429371) EVENT_LOG_v1 {"time_micros": 1764088155429363, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 16:29:15 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:29:15.429391) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 16:29:15 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1757035, prev total WAL file size 1757035, number of live WAL files 2.
Nov 25 16:29:15 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:29:15 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:29:15.430033) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Nov 25 16:29:15 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 16:29:15 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(1699KB)], [56(7303KB)]
Nov 25 16:29:15 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088155430077, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 9219159, "oldest_snapshot_seqno": -1}
Nov 25 16:29:15 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 4894 keys, 7474213 bytes, temperature: kUnknown
Nov 25 16:29:15 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088155514919, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 7474213, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7441323, "index_size": 19532, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12293, "raw_key_size": 123171, "raw_average_key_size": 25, "raw_value_size": 7352752, "raw_average_value_size": 1502, "num_data_blocks": 803, "num_entries": 4894, "num_filter_entries": 4894, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764088155, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:29:15 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:29:15 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:29:15.515124) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 7474213 bytes
Nov 25 16:29:15 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:29:15.520539) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 108.6 rd, 88.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 7.1 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(9.6) write-amplify(4.3) OK, records in: 5423, records dropped: 529 output_compression: NoCompression
Nov 25 16:29:15 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:29:15.520561) EVENT_LOG_v1 {"time_micros": 1764088155520550, "job": 30, "event": "compaction_finished", "compaction_time_micros": 84902, "compaction_time_cpu_micros": 16673, "output_level": 6, "num_output_files": 1, "total_output_size": 7474213, "num_input_records": 5423, "num_output_records": 4894, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 16:29:15 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:29:15 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088155520992, "job": 30, "event": "table_file_deletion", "file_number": 58}
Nov 25 16:29:15 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:29:15 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088155522060, "job": 30, "event": "table_file_deletion", "file_number": 56}
Nov 25 16:29:15 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:29:15.429930) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:29:15 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:29:15.522114) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:29:15 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:29:15.522119) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:29:15 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:29:15.522120) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:29:15 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:29:15.522122) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:29:15 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:29:15.522124) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:29:15 compute-0 nova_compute[254092]: 2025-11-25 16:29:15.602 254096 INFO nova.virt.libvirt.driver [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Creating config drive at /var/lib/nova/instances/f0cb83d8-c2a3-49d1-8c01-b9be9922abd1/disk.config
Nov 25 16:29:15 compute-0 nova_compute[254092]: 2025-11-25 16:29:15.608 254096 DEBUG oslo_concurrency.processutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f0cb83d8-c2a3-49d1-8c01-b9be9922abd1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3lhzrr9x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:29:15 compute-0 nova_compute[254092]: 2025-11-25 16:29:15.753 254096 DEBUG oslo_concurrency.processutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f0cb83d8-c2a3-49d1-8c01-b9be9922abd1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3lhzrr9x" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:29:15 compute-0 nova_compute[254092]: 2025-11-25 16:29:15.791 254096 DEBUG nova.storage.rbd_utils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image f0cb83d8-c2a3-49d1-8c01-b9be9922abd1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:29:15 compute-0 nova_compute[254092]: 2025-11-25 16:29:15.799 254096 DEBUG oslo_concurrency.processutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f0cb83d8-c2a3-49d1-8c01-b9be9922abd1/disk.config f0cb83d8-c2a3-49d1-8c01-b9be9922abd1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:29:15 compute-0 nova_compute[254092]: 2025-11-25 16:29:15.954 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088140.952928, 9440e9b4-329e-44cf-a489-5a0634a8aa30 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:29:15 compute-0 nova_compute[254092]: 2025-11-25 16:29:15.955 254096 INFO nova.compute.manager [-] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] VM Stopped (Lifecycle Event)
Nov 25 16:29:15 compute-0 nova_compute[254092]: 2025-11-25 16:29:15.976 254096 DEBUG nova.compute.manager [None req-4a4aff5c-d5cf-4f0d-98d0-d9a7369e37c6 - - - - - -] [instance: 9440e9b4-329e-44cf-a489-5a0634a8aa30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:29:15 compute-0 nova_compute[254092]: 2025-11-25 16:29:15.991 254096 DEBUG oslo_concurrency.processutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f0cb83d8-c2a3-49d1-8c01-b9be9922abd1/disk.config f0cb83d8-c2a3-49d1-8c01-b9be9922abd1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.192s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:29:15 compute-0 nova_compute[254092]: 2025-11-25 16:29:15.992 254096 INFO nova.virt.libvirt.driver [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Deleting local config drive /var/lib/nova/instances/f0cb83d8-c2a3-49d1-8c01-b9be9922abd1/disk.config because it was imported into RBD.
Nov 25 16:29:16 compute-0 kernel: tap0d1cf86d-66: entered promiscuous mode
Nov 25 16:29:16 compute-0 NetworkManager[48891]: <info>  [1764088156.0502] manager: (tap0d1cf86d-66): new Tun device (/org/freedesktop/NetworkManager/Devices/51)
Nov 25 16:29:16 compute-0 nova_compute[254092]: 2025-11-25 16:29:16.058 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:16 compute-0 ovn_controller[153477]: 2025-11-25T16:29:16Z|00069|binding|INFO|Claiming lport 0d1cf86d-6639-47eb-8de1-718476d1c006 for this chassis.
Nov 25 16:29:16 compute-0 ovn_controller[153477]: 2025-11-25T16:29:16Z|00070|binding|INFO|0d1cf86d-6639-47eb-8de1-718476d1c006: Claiming fa:16:3e:78:52:60 10.100.0.10
Nov 25 16:29:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:16.063 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:52:60 10.100.0.10'], port_security=['fa:16:3e:78:52:60 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f0cb83d8-c2a3-49d1-8c01-b9be9922abd1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a4c9877f37b34e7fba5f3cc9642d1a48', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2633f0b7-0e94-4653-a67a-e645ec4a69fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06f31098-96f7-4b9c-ab6f-85e953b1eed6, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=0d1cf86d-6639-47eb-8de1-718476d1c006) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:29:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:16.064 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 0d1cf86d-6639-47eb-8de1-718476d1c006 in datapath 3403825e-13ff-43e0-80c4-b59cf23ed30b bound to our chassis
Nov 25 16:29:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:16.066 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3403825e-13ff-43e0-80c4-b59cf23ed30b
Nov 25 16:29:16 compute-0 nova_compute[254092]: 2025-11-25 16:29:16.082 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:16.082 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[955fbd64-f76a-41d0-a5ce-6c1e5bd37fcd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:16 compute-0 ovn_controller[153477]: 2025-11-25T16:29:16Z|00071|binding|INFO|Setting lport 0d1cf86d-6639-47eb-8de1-718476d1c006 ovn-installed in OVS
Nov 25 16:29:16 compute-0 ovn_controller[153477]: 2025-11-25T16:29:16Z|00072|binding|INFO|Setting lport 0d1cf86d-6639-47eb-8de1-718476d1c006 up in Southbound
Nov 25 16:29:16 compute-0 systemd-udevd[283757]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:29:16 compute-0 systemd-machined[216343]: New machine qemu-24-instance-00000016.
Nov 25 16:29:16 compute-0 NetworkManager[48891]: <info>  [1764088156.1058] device (tap0d1cf86d-66): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:29:16 compute-0 NetworkManager[48891]: <info>  [1764088156.1069] device (tap0d1cf86d-66): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:29:16 compute-0 systemd[1]: Started Virtual Machine qemu-24-instance-00000016.
Nov 25 16:29:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:16.115 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[af42980d-0716-46f0-ac1f-7051e19b1a1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:16.120 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[0c1c43d1-4fc5-4e7e-8f72-e127fcedf265]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:16.150 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f63e23a3-243c-4d0e-af7b-cb8ca4ba7c02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:16.177 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[01c95fb8-b3ee-4196-9f8e-121da0b79816]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3403825e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:44:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458728, 'reachable_time': 39709, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283770, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:16.199 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ca2a68e5-83a7-4bc8-aa54-9118fea50f84]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3403825e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458739, 'tstamp': 458739}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283772, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3403825e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458742, 'tstamp': 458742}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283772, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:16.201 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3403825e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:29:16 compute-0 nova_compute[254092]: 2025-11-25 16:29:16.203 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:16 compute-0 nova_compute[254092]: 2025-11-25 16:29:16.204 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:16.206 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3403825e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:29:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:16.207 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:29:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:16.207 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3403825e-10, col_values=(('external_ids', {'iface-id': 'e9e1418a-32be-4b0e-81c1-ad489d6a782b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:29:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:16.208 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:29:16 compute-0 nova_compute[254092]: 2025-11-25 16:29:16.490 254096 DEBUG nova.network.neutron [req-d0927c84-1a9e-48e6-ad8c-80c268aaed49 req-c21548d5-8ccd-435d-a37b-f4b0fc6ae4fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Updated VIF entry in instance network info cache for port 0d1cf86d-6639-47eb-8de1-718476d1c006. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:29:16 compute-0 nova_compute[254092]: 2025-11-25 16:29:16.490 254096 DEBUG nova.network.neutron [req-d0927c84-1a9e-48e6-ad8c-80c268aaed49 req-c21548d5-8ccd-435d-a37b-f4b0fc6ae4fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Updating instance_info_cache with network_info: [{"id": "0d1cf86d-6639-47eb-8de1-718476d1c006", "address": "fa:16:3e:78:52:60", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1cf86d-66", "ovs_interfaceid": "0d1cf86d-6639-47eb-8de1-718476d1c006", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:29:16 compute-0 nova_compute[254092]: 2025-11-25 16:29:16.519 254096 DEBUG oslo_concurrency.lockutils [req-d0927c84-1a9e-48e6-ad8c-80c268aaed49 req-c21548d5-8ccd-435d-a37b-f4b0fc6ae4fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-f0cb83d8-c2a3-49d1-8c01-b9be9922abd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:29:16 compute-0 nova_compute[254092]: 2025-11-25 16:29:16.658 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088156.6577435, f0cb83d8-c2a3-49d1-8c01-b9be9922abd1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:29:16 compute-0 nova_compute[254092]: 2025-11-25 16:29:16.658 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] VM Started (Lifecycle Event)
Nov 25 16:29:16 compute-0 nova_compute[254092]: 2025-11-25 16:29:16.675 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:29:16 compute-0 nova_compute[254092]: 2025-11-25 16:29:16.678 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088156.6578999, f0cb83d8-c2a3-49d1-8c01-b9be9922abd1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:29:16 compute-0 nova_compute[254092]: 2025-11-25 16:29:16.678 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] VM Paused (Lifecycle Event)
Nov 25 16:29:16 compute-0 nova_compute[254092]: 2025-11-25 16:29:16.699 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:29:16 compute-0 nova_compute[254092]: 2025-11-25 16:29:16.703 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:29:16 compute-0 nova_compute[254092]: 2025-11-25 16:29:16.723 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:29:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1248: 321 pgs: 321 active+clean; 293 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 7.8 MiB/s wr, 279 op/s
Nov 25 16:29:16 compute-0 ceph-mon[74985]: pgmap v1248: 321 pgs: 321 active+clean; 293 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 7.8 MiB/s wr, 279 op/s
Nov 25 16:29:17 compute-0 nova_compute[254092]: 2025-11-25 16:29:17.005 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:17 compute-0 NetworkManager[48891]: <info>  [1764088157.0058] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Nov 25 16:29:17 compute-0 NetworkManager[48891]: <info>  [1764088157.0067] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Nov 25 16:29:17 compute-0 ovn_controller[153477]: 2025-11-25T16:29:17Z|00073|binding|INFO|Releasing lport e9e1418a-32be-4b0e-81c1-ad489d6a782b from this chassis (sb_readonly=0)
Nov 25 16:29:17 compute-0 ovn_controller[153477]: 2025-11-25T16:29:17Z|00074|binding|INFO|Releasing lport 491a5ecd-1693-49b3-bd97-98ff227e2ff8 from this chassis (sb_readonly=0)
Nov 25 16:29:17 compute-0 ovn_controller[153477]: 2025-11-25T16:29:17Z|00075|binding|INFO|Releasing lport e9e1418a-32be-4b0e-81c1-ad489d6a782b from this chassis (sb_readonly=0)
Nov 25 16:29:17 compute-0 ovn_controller[153477]: 2025-11-25T16:29:17Z|00076|binding|INFO|Releasing lport 491a5ecd-1693-49b3-bd97-98ff227e2ff8 from this chassis (sb_readonly=0)
Nov 25 16:29:17 compute-0 nova_compute[254092]: 2025-11-25 16:29:17.167 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:17 compute-0 nova_compute[254092]: 2025-11-25 16:29:17.178 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:17 compute-0 nova_compute[254092]: 2025-11-25 16:29:17.233 254096 DEBUG nova.compute.manager [req-6b7f44c7-a0eb-4eb9-8575-6f69cc8b74f4 req-4d9000da-2baf-455a-9253-d11977d5b630 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Received event network-vif-plugged-0d1cf86d-6639-47eb-8de1-718476d1c006 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:29:17 compute-0 nova_compute[254092]: 2025-11-25 16:29:17.234 254096 DEBUG oslo_concurrency.lockutils [req-6b7f44c7-a0eb-4eb9-8575-6f69cc8b74f4 req-4d9000da-2baf-455a-9253-d11977d5b630 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f0cb83d8-c2a3-49d1-8c01-b9be9922abd1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:29:17 compute-0 nova_compute[254092]: 2025-11-25 16:29:17.234 254096 DEBUG oslo_concurrency.lockutils [req-6b7f44c7-a0eb-4eb9-8575-6f69cc8b74f4 req-4d9000da-2baf-455a-9253-d11977d5b630 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0cb83d8-c2a3-49d1-8c01-b9be9922abd1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:29:17 compute-0 nova_compute[254092]: 2025-11-25 16:29:17.234 254096 DEBUG oslo_concurrency.lockutils [req-6b7f44c7-a0eb-4eb9-8575-6f69cc8b74f4 req-4d9000da-2baf-455a-9253-d11977d5b630 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0cb83d8-c2a3-49d1-8c01-b9be9922abd1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:29:17 compute-0 nova_compute[254092]: 2025-11-25 16:29:17.234 254096 DEBUG nova.compute.manager [req-6b7f44c7-a0eb-4eb9-8575-6f69cc8b74f4 req-4d9000da-2baf-455a-9253-d11977d5b630 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Processing event network-vif-plugged-0d1cf86d-6639-47eb-8de1-718476d1c006 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:29:17 compute-0 nova_compute[254092]: 2025-11-25 16:29:17.235 254096 DEBUG nova.compute.manager [req-6b7f44c7-a0eb-4eb9-8575-6f69cc8b74f4 req-4d9000da-2baf-455a-9253-d11977d5b630 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Received event network-vif-plugged-0d1cf86d-6639-47eb-8de1-718476d1c006 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:29:17 compute-0 nova_compute[254092]: 2025-11-25 16:29:17.235 254096 DEBUG oslo_concurrency.lockutils [req-6b7f44c7-a0eb-4eb9-8575-6f69cc8b74f4 req-4d9000da-2baf-455a-9253-d11977d5b630 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f0cb83d8-c2a3-49d1-8c01-b9be9922abd1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:29:17 compute-0 nova_compute[254092]: 2025-11-25 16:29:17.235 254096 DEBUG oslo_concurrency.lockutils [req-6b7f44c7-a0eb-4eb9-8575-6f69cc8b74f4 req-4d9000da-2baf-455a-9253-d11977d5b630 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0cb83d8-c2a3-49d1-8c01-b9be9922abd1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:29:17 compute-0 nova_compute[254092]: 2025-11-25 16:29:17.235 254096 DEBUG oslo_concurrency.lockutils [req-6b7f44c7-a0eb-4eb9-8575-6f69cc8b74f4 req-4d9000da-2baf-455a-9253-d11977d5b630 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0cb83d8-c2a3-49d1-8c01-b9be9922abd1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:29:17 compute-0 nova_compute[254092]: 2025-11-25 16:29:17.235 254096 DEBUG nova.compute.manager [req-6b7f44c7-a0eb-4eb9-8575-6f69cc8b74f4 req-4d9000da-2baf-455a-9253-d11977d5b630 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] No waiting events found dispatching network-vif-plugged-0d1cf86d-6639-47eb-8de1-718476d1c006 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:29:17 compute-0 nova_compute[254092]: 2025-11-25 16:29:17.236 254096 WARNING nova.compute.manager [req-6b7f44c7-a0eb-4eb9-8575-6f69cc8b74f4 req-4d9000da-2baf-455a-9253-d11977d5b630 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Received unexpected event network-vif-plugged-0d1cf86d-6639-47eb-8de1-718476d1c006 for instance with vm_state building and task_state spawning.
Nov 25 16:29:17 compute-0 nova_compute[254092]: 2025-11-25 16:29:17.236 254096 DEBUG nova.compute.manager [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:29:17 compute-0 nova_compute[254092]: 2025-11-25 16:29:17.239 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088157.23915, f0cb83d8-c2a3-49d1-8c01-b9be9922abd1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:29:17 compute-0 nova_compute[254092]: 2025-11-25 16:29:17.239 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] VM Resumed (Lifecycle Event)
Nov 25 16:29:17 compute-0 nova_compute[254092]: 2025-11-25 16:29:17.241 254096 DEBUG nova.virt.libvirt.driver [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:29:17 compute-0 nova_compute[254092]: 2025-11-25 16:29:17.246 254096 INFO nova.virt.libvirt.driver [-] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Instance spawned successfully.
Nov 25 16:29:17 compute-0 nova_compute[254092]: 2025-11-25 16:29:17.247 254096 DEBUG nova.virt.libvirt.driver [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:29:17 compute-0 nova_compute[254092]: 2025-11-25 16:29:17.266 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:29:17 compute-0 nova_compute[254092]: 2025-11-25 16:29:17.272 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:29:17 compute-0 nova_compute[254092]: 2025-11-25 16:29:17.275 254096 DEBUG nova.virt.libvirt.driver [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:29:17 compute-0 nova_compute[254092]: 2025-11-25 16:29:17.276 254096 DEBUG nova.virt.libvirt.driver [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:29:17 compute-0 nova_compute[254092]: 2025-11-25 16:29:17.276 254096 DEBUG nova.virt.libvirt.driver [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:29:17 compute-0 nova_compute[254092]: 2025-11-25 16:29:17.277 254096 DEBUG nova.virt.libvirt.driver [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:29:17 compute-0 nova_compute[254092]: 2025-11-25 16:29:17.277 254096 DEBUG nova.virt.libvirt.driver [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:29:17 compute-0 nova_compute[254092]: 2025-11-25 16:29:17.278 254096 DEBUG nova.virt.libvirt.driver [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:29:17 compute-0 nova_compute[254092]: 2025-11-25 16:29:17.310 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:29:17 compute-0 nova_compute[254092]: 2025-11-25 16:29:17.393 254096 INFO nova.compute.manager [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Took 10.67 seconds to spawn the instance on the hypervisor.
Nov 25 16:29:17 compute-0 nova_compute[254092]: 2025-11-25 16:29:17.393 254096 DEBUG nova.compute.manager [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:29:17 compute-0 nova_compute[254092]: 2025-11-25 16:29:17.436 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:17 compute-0 nova_compute[254092]: 2025-11-25 16:29:17.473 254096 INFO nova.compute.manager [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Took 11.83 seconds to build instance.
Nov 25 16:29:17 compute-0 nova_compute[254092]: 2025-11-25 16:29:17.512 254096 DEBUG oslo_concurrency.lockutils [None req-ee114b33-9915-464f-8cd3-99afc1e1f47a a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "f0cb83d8-c2a3-49d1-8c01-b9be9922abd1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:29:17 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:29:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1249: 321 pgs: 321 active+clean; 293 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.7 MiB/s wr, 240 op/s
Nov 25 16:29:18 compute-0 ceph-mon[74985]: pgmap v1249: 321 pgs: 321 active+clean; 293 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.7 MiB/s wr, 240 op/s
Nov 25 16:29:19 compute-0 nova_compute[254092]: 2025-11-25 16:29:19.437 254096 DEBUG nova.compute.manager [req-9ab3ae5b-e6db-4c45-9713-8faadeff2ef3 req-04a85cd0-9b67-40e3-8515-892c6b5c629c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Received event network-changed-ab0cfddf-69e0-4494-a106-e603168444a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:29:19 compute-0 nova_compute[254092]: 2025-11-25 16:29:19.437 254096 DEBUG nova.compute.manager [req-9ab3ae5b-e6db-4c45-9713-8faadeff2ef3 req-04a85cd0-9b67-40e3-8515-892c6b5c629c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Refreshing instance network info cache due to event network-changed-ab0cfddf-69e0-4494-a106-e603168444a4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:29:19 compute-0 nova_compute[254092]: 2025-11-25 16:29:19.437 254096 DEBUG oslo_concurrency.lockutils [req-9ab3ae5b-e6db-4c45-9713-8faadeff2ef3 req-04a85cd0-9b67-40e3-8515-892c6b5c629c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-7b9f60af-05f0-43c7-bce7-227cb54ec793" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:29:19 compute-0 nova_compute[254092]: 2025-11-25 16:29:19.437 254096 DEBUG oslo_concurrency.lockutils [req-9ab3ae5b-e6db-4c45-9713-8faadeff2ef3 req-04a85cd0-9b67-40e3-8515-892c6b5c629c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-7b9f60af-05f0-43c7-bce7-227cb54ec793" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:29:19 compute-0 nova_compute[254092]: 2025-11-25 16:29:19.437 254096 DEBUG nova.network.neutron [req-9ab3ae5b-e6db-4c45-9713-8faadeff2ef3 req-04a85cd0-9b67-40e3-8515-892c6b5c629c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Refreshing network info cache for port ab0cfddf-69e0-4494-a106-e603168444a4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:29:19 compute-0 nova_compute[254092]: 2025-11-25 16:29:19.466 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:20 compute-0 nova_compute[254092]: 2025-11-25 16:29:20.868 254096 DEBUG nova.network.neutron [req-9ab3ae5b-e6db-4c45-9713-8faadeff2ef3 req-04a85cd0-9b67-40e3-8515-892c6b5c629c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Updated VIF entry in instance network info cache for port ab0cfddf-69e0-4494-a106-e603168444a4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:29:20 compute-0 nova_compute[254092]: 2025-11-25 16:29:20.869 254096 DEBUG nova.network.neutron [req-9ab3ae5b-e6db-4c45-9713-8faadeff2ef3 req-04a85cd0-9b67-40e3-8515-892c6b5c629c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Updating instance_info_cache with network_info: [{"id": "ab0cfddf-69e0-4494-a106-e603168444a4", "address": "fa:16:3e:7d:e6:95", "network": {"id": "09313f5b-a3fb-41e8-87c2-c636c3ed13c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-1359044902-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f99f039b80564f5684a91f3bc27c2249", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab0cfddf-69", "ovs_interfaceid": "ab0cfddf-69e0-4494-a106-e603168444a4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:29:20 compute-0 nova_compute[254092]: 2025-11-25 16:29:20.891 254096 DEBUG oslo_concurrency.lockutils [req-9ab3ae5b-e6db-4c45-9713-8faadeff2ef3 req-04a85cd0-9b67-40e3-8515-892c6b5c629c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-7b9f60af-05f0-43c7-bce7-227cb54ec793" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:29:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1250: 321 pgs: 321 active+clean; 293 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 5.7 MiB/s wr, 285 op/s
Nov 25 16:29:20 compute-0 ceph-mon[74985]: pgmap v1250: 321 pgs: 321 active+clean; 293 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 5.7 MiB/s wr, 285 op/s
Nov 25 16:29:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:21.945 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:29:22 compute-0 nova_compute[254092]: 2025-11-25 16:29:22.438 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:22 compute-0 podman[283816]: 2025-11-25 16:29:22.664468562 +0000 UTC m=+0.076293016 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 16:29:22 compute-0 podman[283815]: 2025-11-25 16:29:22.666621021 +0000 UTC m=+0.070590151 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0)
Nov 25 16:29:22 compute-0 podman[283817]: 2025-11-25 16:29:22.689438231 +0000 UTC m=+0.098040067 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 16:29:22 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:29:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1251: 321 pgs: 321 active+clean; 293 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.7 MiB/s wr, 205 op/s
Nov 25 16:29:22 compute-0 ceph-mon[74985]: pgmap v1251: 321 pgs: 321 active+clean; 293 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.7 MiB/s wr, 205 op/s
Nov 25 16:29:22 compute-0 nova_compute[254092]: 2025-11-25 16:29:22.992 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:23 compute-0 nova_compute[254092]: 2025-11-25 16:29:23.769 254096 DEBUG oslo_concurrency.lockutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "7777dd86-925e-4f98-bd68-e38ac540d97b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:29:23 compute-0 nova_compute[254092]: 2025-11-25 16:29:23.769 254096 DEBUG oslo_concurrency.lockutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "7777dd86-925e-4f98-bd68-e38ac540d97b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:29:23 compute-0 nova_compute[254092]: 2025-11-25 16:29:23.797 254096 DEBUG nova.compute.manager [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:29:23 compute-0 nova_compute[254092]: 2025-11-25 16:29:23.904 254096 DEBUG oslo_concurrency.lockutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:29:23 compute-0 nova_compute[254092]: 2025-11-25 16:29:23.905 254096 DEBUG oslo_concurrency.lockutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:29:23 compute-0 nova_compute[254092]: 2025-11-25 16:29:23.911 254096 DEBUG nova.virt.hardware [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:29:23 compute-0 nova_compute[254092]: 2025-11-25 16:29:23.912 254096 INFO nova.compute.claims [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:29:24 compute-0 nova_compute[254092]: 2025-11-25 16:29:24.080 254096 DEBUG oslo_concurrency.processutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:29:24 compute-0 nova_compute[254092]: 2025-11-25 16:29:24.470 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:29:24 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/541345696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:29:24 compute-0 nova_compute[254092]: 2025-11-25 16:29:24.510 254096 DEBUG oslo_concurrency.processutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:29:24 compute-0 nova_compute[254092]: 2025-11-25 16:29:24.517 254096 DEBUG nova.compute.provider_tree [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:29:24 compute-0 nova_compute[254092]: 2025-11-25 16:29:24.546 254096 DEBUG nova.scheduler.client.report [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:29:24 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/541345696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:29:24 compute-0 nova_compute[254092]: 2025-11-25 16:29:24.572 254096 DEBUG oslo_concurrency.lockutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:29:24 compute-0 nova_compute[254092]: 2025-11-25 16:29:24.573 254096 DEBUG nova.compute.manager [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:29:24 compute-0 nova_compute[254092]: 2025-11-25 16:29:24.660 254096 DEBUG nova.compute.manager [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:29:24 compute-0 nova_compute[254092]: 2025-11-25 16:29:24.661 254096 DEBUG nova.network.neutron [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:29:24 compute-0 nova_compute[254092]: 2025-11-25 16:29:24.682 254096 INFO nova.virt.libvirt.driver [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:29:24 compute-0 nova_compute[254092]: 2025-11-25 16:29:24.704 254096 DEBUG nova.compute.manager [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:29:24 compute-0 nova_compute[254092]: 2025-11-25 16:29:24.835 254096 DEBUG nova.compute.manager [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:29:24 compute-0 nova_compute[254092]: 2025-11-25 16:29:24.837 254096 DEBUG nova.virt.libvirt.driver [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:29:24 compute-0 nova_compute[254092]: 2025-11-25 16:29:24.837 254096 INFO nova.virt.libvirt.driver [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Creating image(s)
Nov 25 16:29:24 compute-0 nova_compute[254092]: 2025-11-25 16:29:24.862 254096 DEBUG nova.storage.rbd_utils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 7777dd86-925e-4f98-bd68-e38ac540d97b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:29:24 compute-0 nova_compute[254092]: 2025-11-25 16:29:24.888 254096 DEBUG nova.storage.rbd_utils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 7777dd86-925e-4f98-bd68-e38ac540d97b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:29:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1252: 321 pgs: 321 active+clean; 293 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 96 KiB/s wr, 167 op/s
Nov 25 16:29:24 compute-0 nova_compute[254092]: 2025-11-25 16:29:24.914 254096 DEBUG nova.storage.rbd_utils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 7777dd86-925e-4f98-bd68-e38ac540d97b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:29:24 compute-0 nova_compute[254092]: 2025-11-25 16:29:24.918 254096 DEBUG oslo_concurrency.processutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:29:24 compute-0 nova_compute[254092]: 2025-11-25 16:29:24.977 254096 DEBUG nova.policy [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a8e217b742fe4a57a4ac4ffc776670fe', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a4c9877f37b34e7fba5f3cc9642d1a48', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:29:24 compute-0 nova_compute[254092]: 2025-11-25 16:29:24.986 254096 DEBUG oslo_concurrency.processutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:29:24 compute-0 nova_compute[254092]: 2025-11-25 16:29:24.986 254096 DEBUG oslo_concurrency.lockutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:29:24 compute-0 nova_compute[254092]: 2025-11-25 16:29:24.987 254096 DEBUG oslo_concurrency.lockutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:29:24 compute-0 nova_compute[254092]: 2025-11-25 16:29:24.987 254096 DEBUG oslo_concurrency.lockutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:29:25 compute-0 nova_compute[254092]: 2025-11-25 16:29:25.010 254096 DEBUG nova.storage.rbd_utils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 7777dd86-925e-4f98-bd68-e38ac540d97b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:29:25 compute-0 nova_compute[254092]: 2025-11-25 16:29:25.014 254096 DEBUG oslo_concurrency.processutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 7777dd86-925e-4f98-bd68-e38ac540d97b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:29:25 compute-0 nova_compute[254092]: 2025-11-25 16:29:25.332 254096 DEBUG oslo_concurrency.processutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 7777dd86-925e-4f98-bd68-e38ac540d97b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.318s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:29:25 compute-0 nova_compute[254092]: 2025-11-25 16:29:25.411 254096 DEBUG nova.storage.rbd_utils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] resizing rbd image 7777dd86-925e-4f98-bd68-e38ac540d97b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:29:25 compute-0 nova_compute[254092]: 2025-11-25 16:29:25.529 254096 DEBUG nova.objects.instance [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'migration_context' on Instance uuid 7777dd86-925e-4f98-bd68-e38ac540d97b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:29:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Nov 25 16:29:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Nov 25 16:29:25 compute-0 ceph-mon[74985]: pgmap v1252: 321 pgs: 321 active+clean; 293 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 96 KiB/s wr, 167 op/s
Nov 25 16:29:25 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Nov 25 16:29:25 compute-0 nova_compute[254092]: 2025-11-25 16:29:25.938 254096 DEBUG nova.virt.libvirt.driver [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:29:25 compute-0 nova_compute[254092]: 2025-11-25 16:29:25.938 254096 DEBUG nova.virt.libvirt.driver [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Ensure instance console log exists: /var/lib/nova/instances/7777dd86-925e-4f98-bd68-e38ac540d97b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:29:25 compute-0 nova_compute[254092]: 2025-11-25 16:29:25.939 254096 DEBUG oslo_concurrency.lockutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:29:25 compute-0 nova_compute[254092]: 2025-11-25 16:29:25.939 254096 DEBUG oslo_concurrency.lockutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:29:25 compute-0 nova_compute[254092]: 2025-11-25 16:29:25.939 254096 DEBUG oslo_concurrency.lockutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:29:26 compute-0 ovn_controller[153477]: 2025-11-25T16:29:26Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7d:e6:95 10.100.0.13
Nov 25 16:29:26 compute-0 ovn_controller[153477]: 2025-11-25T16:29:26Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7d:e6:95 10.100.0.13
Nov 25 16:29:26 compute-0 ceph-mon[74985]: osdmap e160: 3 total, 3 up, 3 in
Nov 25 16:29:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1254: 321 pgs: 321 active+clean; 339 MiB data, 454 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.3 MiB/s wr, 146 op/s
Nov 25 16:29:27 compute-0 nova_compute[254092]: 2025-11-25 16:29:27.438 254096 DEBUG nova.network.neutron [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Successfully created port: 0f27a287-0c09-4767-a6cf-a7f4f8870ea1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:29:27 compute-0 nova_compute[254092]: 2025-11-25 16:29:27.441 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:27 compute-0 ceph-mon[74985]: pgmap v1254: 321 pgs: 321 active+clean; 339 MiB data, 454 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.3 MiB/s wr, 146 op/s
Nov 25 16:29:27 compute-0 nova_compute[254092]: 2025-11-25 16:29:27.655 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:27 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:29:28 compute-0 nova_compute[254092]: 2025-11-25 16:29:28.773 254096 DEBUG nova.network.neutron [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Successfully updated port: 0f27a287-0c09-4767-a6cf-a7f4f8870ea1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:29:28 compute-0 nova_compute[254092]: 2025-11-25 16:29:28.831 254096 DEBUG oslo_concurrency.lockutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "refresh_cache-7777dd86-925e-4f98-bd68-e38ac540d97b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:29:28 compute-0 nova_compute[254092]: 2025-11-25 16:29:28.831 254096 DEBUG oslo_concurrency.lockutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquired lock "refresh_cache-7777dd86-925e-4f98-bd68-e38ac540d97b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:29:28 compute-0 nova_compute[254092]: 2025-11-25 16:29:28.831 254096 DEBUG nova.network.neutron [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:29:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1255: 321 pgs: 321 active+clean; 339 MiB data, 454 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.3 MiB/s wr, 146 op/s
Nov 25 16:29:28 compute-0 ceph-mon[74985]: pgmap v1255: 321 pgs: 321 active+clean; 339 MiB data, 454 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.3 MiB/s wr, 146 op/s
Nov 25 16:29:28 compute-0 nova_compute[254092]: 2025-11-25 16:29:28.986 254096 DEBUG nova.compute.manager [req-7d9a37c1-8a6e-4d58-8807-8af8f8e18076 req-d627f8ec-3cbd-4b2a-bb65-e7f1748032cd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Received event network-changed-0f27a287-0c09-4767-a6cf-a7f4f8870ea1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:29:28 compute-0 nova_compute[254092]: 2025-11-25 16:29:28.987 254096 DEBUG nova.compute.manager [req-7d9a37c1-8a6e-4d58-8807-8af8f8e18076 req-d627f8ec-3cbd-4b2a-bb65-e7f1748032cd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Refreshing instance network info cache due to event network-changed-0f27a287-0c09-4767-a6cf-a7f4f8870ea1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:29:28 compute-0 nova_compute[254092]: 2025-11-25 16:29:28.987 254096 DEBUG oslo_concurrency.lockutils [req-7d9a37c1-8a6e-4d58-8807-8af8f8e18076 req-d627f8ec-3cbd-4b2a-bb65-e7f1748032cd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-7777dd86-925e-4f98-bd68-e38ac540d97b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:29:29 compute-0 nova_compute[254092]: 2025-11-25 16:29:29.044 254096 DEBUG nova.network.neutron [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:29:29 compute-0 nova_compute[254092]: 2025-11-25 16:29:29.473 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:30 compute-0 ovn_controller[153477]: 2025-11-25T16:29:30Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:78:52:60 10.100.0.10
Nov 25 16:29:30 compute-0 ovn_controller[153477]: 2025-11-25T16:29:30Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:78:52:60 10.100.0.10
Nov 25 16:29:30 compute-0 nova_compute[254092]: 2025-11-25 16:29:30.574 254096 DEBUG nova.network.neutron [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Updating instance_info_cache with network_info: [{"id": "0f27a287-0c09-4767-a6cf-a7f4f8870ea1", "address": "fa:16:3e:a8:bd:39", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f27a287-0c", "ovs_interfaceid": "0f27a287-0c09-4767-a6cf-a7f4f8870ea1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:29:30 compute-0 nova_compute[254092]: 2025-11-25 16:29:30.608 254096 DEBUG oslo_concurrency.lockutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Releasing lock "refresh_cache-7777dd86-925e-4f98-bd68-e38ac540d97b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:29:30 compute-0 nova_compute[254092]: 2025-11-25 16:29:30.609 254096 DEBUG nova.compute.manager [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Instance network_info: |[{"id": "0f27a287-0c09-4767-a6cf-a7f4f8870ea1", "address": "fa:16:3e:a8:bd:39", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f27a287-0c", "ovs_interfaceid": "0f27a287-0c09-4767-a6cf-a7f4f8870ea1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:29:30 compute-0 nova_compute[254092]: 2025-11-25 16:29:30.610 254096 DEBUG oslo_concurrency.lockutils [req-7d9a37c1-8a6e-4d58-8807-8af8f8e18076 req-d627f8ec-3cbd-4b2a-bb65-e7f1748032cd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-7777dd86-925e-4f98-bd68-e38ac540d97b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:29:30 compute-0 nova_compute[254092]: 2025-11-25 16:29:30.610 254096 DEBUG nova.network.neutron [req-7d9a37c1-8a6e-4d58-8807-8af8f8e18076 req-d627f8ec-3cbd-4b2a-bb65-e7f1748032cd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Refreshing network info cache for port 0f27a287-0c09-4767-a6cf-a7f4f8870ea1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:29:30 compute-0 nova_compute[254092]: 2025-11-25 16:29:30.613 254096 DEBUG nova.virt.libvirt.driver [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Start _get_guest_xml network_info=[{"id": "0f27a287-0c09-4767-a6cf-a7f4f8870ea1", "address": "fa:16:3e:a8:bd:39", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f27a287-0c", "ovs_interfaceid": "0f27a287-0c09-4767-a6cf-a7f4f8870ea1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:29:30 compute-0 nova_compute[254092]: 2025-11-25 16:29:30.618 254096 WARNING nova.virt.libvirt.driver [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:29:30 compute-0 nova_compute[254092]: 2025-11-25 16:29:30.626 254096 DEBUG nova.virt.libvirt.host [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:29:30 compute-0 nova_compute[254092]: 2025-11-25 16:29:30.627 254096 DEBUG nova.virt.libvirt.host [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:29:30 compute-0 nova_compute[254092]: 2025-11-25 16:29:30.638 254096 DEBUG nova.virt.libvirt.host [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:29:30 compute-0 nova_compute[254092]: 2025-11-25 16:29:30.639 254096 DEBUG nova.virt.libvirt.host [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:29:30 compute-0 nova_compute[254092]: 2025-11-25 16:29:30.639 254096 DEBUG nova.virt.libvirt.driver [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:29:30 compute-0 nova_compute[254092]: 2025-11-25 16:29:30.640 254096 DEBUG nova.virt.hardware [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:29:30 compute-0 nova_compute[254092]: 2025-11-25 16:29:30.640 254096 DEBUG nova.virt.hardware [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:29:30 compute-0 nova_compute[254092]: 2025-11-25 16:29:30.641 254096 DEBUG nova.virt.hardware [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:29:30 compute-0 nova_compute[254092]: 2025-11-25 16:29:30.641 254096 DEBUG nova.virt.hardware [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:29:30 compute-0 nova_compute[254092]: 2025-11-25 16:29:30.641 254096 DEBUG nova.virt.hardware [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:29:30 compute-0 nova_compute[254092]: 2025-11-25 16:29:30.641 254096 DEBUG nova.virt.hardware [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:29:30 compute-0 nova_compute[254092]: 2025-11-25 16:29:30.641 254096 DEBUG nova.virt.hardware [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:29:30 compute-0 nova_compute[254092]: 2025-11-25 16:29:30.642 254096 DEBUG nova.virt.hardware [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:29:30 compute-0 nova_compute[254092]: 2025-11-25 16:29:30.642 254096 DEBUG nova.virt.hardware [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:29:30 compute-0 nova_compute[254092]: 2025-11-25 16:29:30.642 254096 DEBUG nova.virt.hardware [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:29:30 compute-0 nova_compute[254092]: 2025-11-25 16:29:30.642 254096 DEBUG nova.virt.hardware [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:29:30 compute-0 nova_compute[254092]: 2025-11-25 16:29:30.646 254096 DEBUG oslo_concurrency.processutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:29:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1256: 321 pgs: 321 active+clean; 382 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 6.4 MiB/s wr, 193 op/s
Nov 25 16:29:30 compute-0 ceph-mon[74985]: pgmap v1256: 321 pgs: 321 active+clean; 382 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 6.4 MiB/s wr, 193 op/s
Nov 25 16:29:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:29:31 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2635119000' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:29:31 compute-0 nova_compute[254092]: 2025-11-25 16:29:31.130 254096 DEBUG oslo_concurrency.processutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:29:31 compute-0 nova_compute[254092]: 2025-11-25 16:29:31.159 254096 DEBUG nova.storage.rbd_utils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 7777dd86-925e-4f98-bd68-e38ac540d97b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:29:31 compute-0 nova_compute[254092]: 2025-11-25 16:29:31.164 254096 DEBUG oslo_concurrency.processutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:29:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:29:31 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/333427469' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:29:31 compute-0 nova_compute[254092]: 2025-11-25 16:29:31.610 254096 DEBUG oslo_concurrency.processutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:29:31 compute-0 nova_compute[254092]: 2025-11-25 16:29:31.611 254096 DEBUG nova.virt.libvirt.vif [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:29:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-890496249',display_name='tempest-ServersAdminTestJSON-server-890496249',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-890496249',id=23,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a4c9877f37b34e7fba5f3cc9642d1a48',ramdisk_id='',reservation_id='r-pzomq4ok',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-2045227182',owner_user_name='tempest-ServersAdminTestJSON-2045227182-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:29:24Z,user_data=None,user_id='a8e217b742fe4a57a4ac4ffc776670fe',uuid=7777dd86-925e-4f98-bd68-e38ac540d97b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0f27a287-0c09-4767-a6cf-a7f4f8870ea1", "address": "fa:16:3e:a8:bd:39", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f27a287-0c", "ovs_interfaceid": "0f27a287-0c09-4767-a6cf-a7f4f8870ea1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:29:31 compute-0 nova_compute[254092]: 2025-11-25 16:29:31.612 254096 DEBUG nova.network.os_vif_util [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converting VIF {"id": "0f27a287-0c09-4767-a6cf-a7f4f8870ea1", "address": "fa:16:3e:a8:bd:39", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f27a287-0c", "ovs_interfaceid": "0f27a287-0c09-4767-a6cf-a7f4f8870ea1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:29:31 compute-0 nova_compute[254092]: 2025-11-25 16:29:31.613 254096 DEBUG nova.network.os_vif_util [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:bd:39,bridge_name='br-int',has_traffic_filtering=True,id=0f27a287-0c09-4767-a6cf-a7f4f8870ea1,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f27a287-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:29:31 compute-0 nova_compute[254092]: 2025-11-25 16:29:31.614 254096 DEBUG nova.objects.instance [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7777dd86-925e-4f98-bd68-e38ac540d97b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:29:31 compute-0 nova_compute[254092]: 2025-11-25 16:29:31.628 254096 DEBUG nova.virt.libvirt.driver [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:29:31 compute-0 nova_compute[254092]:   <uuid>7777dd86-925e-4f98-bd68-e38ac540d97b</uuid>
Nov 25 16:29:31 compute-0 nova_compute[254092]:   <name>instance-00000017</name>
Nov 25 16:29:31 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:29:31 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:29:31 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:29:31 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:       <nova:name>tempest-ServersAdminTestJSON-server-890496249</nova:name>
Nov 25 16:29:31 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:29:30</nova:creationTime>
Nov 25 16:29:31 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:29:31 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:29:31 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:29:31 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:29:31 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:29:31 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:29:31 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:29:31 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:29:31 compute-0 nova_compute[254092]:         <nova:user uuid="a8e217b742fe4a57a4ac4ffc776670fe">tempest-ServersAdminTestJSON-2045227182-project-member</nova:user>
Nov 25 16:29:31 compute-0 nova_compute[254092]:         <nova:project uuid="a4c9877f37b34e7fba5f3cc9642d1a48">tempest-ServersAdminTestJSON-2045227182</nova:project>
Nov 25 16:29:31 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:29:31 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:29:31 compute-0 nova_compute[254092]:         <nova:port uuid="0f27a287-0c09-4767-a6cf-a7f4f8870ea1">
Nov 25 16:29:31 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:29:31 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:29:31 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:29:31 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:29:31 compute-0 nova_compute[254092]:     <system>
Nov 25 16:29:31 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:29:31 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:29:31 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:29:31 compute-0 nova_compute[254092]:       <entry name="serial">7777dd86-925e-4f98-bd68-e38ac540d97b</entry>
Nov 25 16:29:31 compute-0 nova_compute[254092]:       <entry name="uuid">7777dd86-925e-4f98-bd68-e38ac540d97b</entry>
Nov 25 16:29:31 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     </system>
Nov 25 16:29:31 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:29:31 compute-0 nova_compute[254092]:   <os>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:   </os>
Nov 25 16:29:31 compute-0 nova_compute[254092]:   <features>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:   </features>
Nov 25 16:29:31 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:29:31 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:29:31 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:29:31 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:29:31 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:29:31 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/7777dd86-925e-4f98-bd68-e38ac540d97b_disk">
Nov 25 16:29:31 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:       </source>
Nov 25 16:29:31 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:29:31 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:29:31 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:29:31 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/7777dd86-925e-4f98-bd68-e38ac540d97b_disk.config">
Nov 25 16:29:31 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:       </source>
Nov 25 16:29:31 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:29:31 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:29:31 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:29:31 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:a8:bd:39"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:       <target dev="tap0f27a287-0c"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:29:31 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/7777dd86-925e-4f98-bd68-e38ac540d97b/console.log" append="off"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     <video>
Nov 25 16:29:31 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     </video>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:29:31 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:29:31 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:29:31 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:29:31 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:29:31 compute-0 nova_compute[254092]: </domain>
Nov 25 16:29:31 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:29:31 compute-0 nova_compute[254092]: 2025-11-25 16:29:31.628 254096 DEBUG nova.compute.manager [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Preparing to wait for external event network-vif-plugged-0f27a287-0c09-4767-a6cf-a7f4f8870ea1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:29:31 compute-0 nova_compute[254092]: 2025-11-25 16:29:31.628 254096 DEBUG oslo_concurrency.lockutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "7777dd86-925e-4f98-bd68-e38ac540d97b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:29:31 compute-0 nova_compute[254092]: 2025-11-25 16:29:31.629 254096 DEBUG oslo_concurrency.lockutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "7777dd86-925e-4f98-bd68-e38ac540d97b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:29:31 compute-0 nova_compute[254092]: 2025-11-25 16:29:31.629 254096 DEBUG oslo_concurrency.lockutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "7777dd86-925e-4f98-bd68-e38ac540d97b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:29:31 compute-0 nova_compute[254092]: 2025-11-25 16:29:31.629 254096 DEBUG nova.virt.libvirt.vif [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:29:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-890496249',display_name='tempest-ServersAdminTestJSON-server-890496249',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-890496249',id=23,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a4c9877f37b34e7fba5f3cc9642d1a48',ramdisk_id='',reservation_id='r-pzomq4ok',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-2045227182',owner_user_name='tempest-ServersAdminTestJSON-2045227182-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:29:24Z,user_data=None,user_id='a8e217b742fe4a57a4ac4ffc776670fe',uuid=7777dd86-925e-4f98-bd68-e38ac540d97b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0f27a287-0c09-4767-a6cf-a7f4f8870ea1", "address": "fa:16:3e:a8:bd:39", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f27a287-0c", "ovs_interfaceid": "0f27a287-0c09-4767-a6cf-a7f4f8870ea1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:29:31 compute-0 nova_compute[254092]: 2025-11-25 16:29:31.630 254096 DEBUG nova.network.os_vif_util [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converting VIF {"id": "0f27a287-0c09-4767-a6cf-a7f4f8870ea1", "address": "fa:16:3e:a8:bd:39", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f27a287-0c", "ovs_interfaceid": "0f27a287-0c09-4767-a6cf-a7f4f8870ea1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:29:31 compute-0 nova_compute[254092]: 2025-11-25 16:29:31.630 254096 DEBUG nova.network.os_vif_util [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:bd:39,bridge_name='br-int',has_traffic_filtering=True,id=0f27a287-0c09-4767-a6cf-a7f4f8870ea1,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f27a287-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:29:31 compute-0 nova_compute[254092]: 2025-11-25 16:29:31.630 254096 DEBUG os_vif [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:bd:39,bridge_name='br-int',has_traffic_filtering=True,id=0f27a287-0c09-4767-a6cf-a7f4f8870ea1,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f27a287-0c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:29:31 compute-0 nova_compute[254092]: 2025-11-25 16:29:31.631 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:31 compute-0 nova_compute[254092]: 2025-11-25 16:29:31.631 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:29:31 compute-0 nova_compute[254092]: 2025-11-25 16:29:31.631 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:29:31 compute-0 nova_compute[254092]: 2025-11-25 16:29:31.635 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:31 compute-0 nova_compute[254092]: 2025-11-25 16:29:31.635 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0f27a287-0c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:29:31 compute-0 nova_compute[254092]: 2025-11-25 16:29:31.635 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0f27a287-0c, col_values=(('external_ids', {'iface-id': '0f27a287-0c09-4767-a6cf-a7f4f8870ea1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a8:bd:39', 'vm-uuid': '7777dd86-925e-4f98-bd68-e38ac540d97b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:29:31 compute-0 nova_compute[254092]: 2025-11-25 16:29:31.637 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:31 compute-0 NetworkManager[48891]: <info>  [1764088171.6384] manager: (tap0f27a287-0c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Nov 25 16:29:31 compute-0 nova_compute[254092]: 2025-11-25 16:29:31.639 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:29:31 compute-0 nova_compute[254092]: 2025-11-25 16:29:31.643 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:31 compute-0 nova_compute[254092]: 2025-11-25 16:29:31.644 254096 INFO os_vif [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:bd:39,bridge_name='br-int',has_traffic_filtering=True,id=0f27a287-0c09-4767-a6cf-a7f4f8870ea1,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f27a287-0c')
Nov 25 16:29:31 compute-0 nova_compute[254092]: 2025-11-25 16:29:31.744 254096 DEBUG nova.virt.libvirt.driver [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:29:31 compute-0 nova_compute[254092]: 2025-11-25 16:29:31.744 254096 DEBUG nova.virt.libvirt.driver [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:29:31 compute-0 nova_compute[254092]: 2025-11-25 16:29:31.745 254096 DEBUG nova.virt.libvirt.driver [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] No VIF found with MAC fa:16:3e:a8:bd:39, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:29:31 compute-0 nova_compute[254092]: 2025-11-25 16:29:31.746 254096 INFO nova.virt.libvirt.driver [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Using config drive
Nov 25 16:29:31 compute-0 nova_compute[254092]: 2025-11-25 16:29:31.762 254096 DEBUG nova.storage.rbd_utils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 7777dd86-925e-4f98-bd68-e38ac540d97b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:29:32 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2635119000' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:29:32 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/333427469' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:29:32 compute-0 nova_compute[254092]: 2025-11-25 16:29:32.293 254096 INFO nova.virt.libvirt.driver [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Creating config drive at /var/lib/nova/instances/7777dd86-925e-4f98-bd68-e38ac540d97b/disk.config
Nov 25 16:29:32 compute-0 nova_compute[254092]: 2025-11-25 16:29:32.299 254096 DEBUG oslo_concurrency.processutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7777dd86-925e-4f98-bd68-e38ac540d97b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe4lb3ica execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:29:32 compute-0 nova_compute[254092]: 2025-11-25 16:29:32.430 254096 DEBUG oslo_concurrency.processutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7777dd86-925e-4f98-bd68-e38ac540d97b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe4lb3ica" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:29:32 compute-0 nova_compute[254092]: 2025-11-25 16:29:32.454 254096 DEBUG nova.storage.rbd_utils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 7777dd86-925e-4f98-bd68-e38ac540d97b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:29:32 compute-0 nova_compute[254092]: 2025-11-25 16:29:32.457 254096 DEBUG oslo_concurrency.processutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7777dd86-925e-4f98-bd68-e38ac540d97b/disk.config 7777dd86-925e-4f98-bd68-e38ac540d97b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:29:32 compute-0 nova_compute[254092]: 2025-11-25 16:29:32.477 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:29:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1257: 321 pgs: 321 active+clean; 401 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 603 KiB/s rd, 7.1 MiB/s wr, 181 op/s
Nov 25 16:29:33 compute-0 ceph-mon[74985]: pgmap v1257: 321 pgs: 321 active+clean; 401 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 603 KiB/s rd, 7.1 MiB/s wr, 181 op/s
Nov 25 16:29:33 compute-0 nova_compute[254092]: 2025-11-25 16:29:33.514 254096 DEBUG nova.network.neutron [req-7d9a37c1-8a6e-4d58-8807-8af8f8e18076 req-d627f8ec-3cbd-4b2a-bb65-e7f1748032cd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Updated VIF entry in instance network info cache for port 0f27a287-0c09-4767-a6cf-a7f4f8870ea1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:29:33 compute-0 nova_compute[254092]: 2025-11-25 16:29:33.515 254096 DEBUG nova.network.neutron [req-7d9a37c1-8a6e-4d58-8807-8af8f8e18076 req-d627f8ec-3cbd-4b2a-bb65-e7f1748032cd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Updating instance_info_cache with network_info: [{"id": "0f27a287-0c09-4767-a6cf-a7f4f8870ea1", "address": "fa:16:3e:a8:bd:39", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f27a287-0c", "ovs_interfaceid": "0f27a287-0c09-4767-a6cf-a7f4f8870ea1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:29:33 compute-0 nova_compute[254092]: 2025-11-25 16:29:33.533 254096 DEBUG oslo_concurrency.lockutils [req-7d9a37c1-8a6e-4d58-8807-8af8f8e18076 req-d627f8ec-3cbd-4b2a-bb65-e7f1748032cd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-7777dd86-925e-4f98-bd68-e38ac540d97b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:29:34 compute-0 nova_compute[254092]: 2025-11-25 16:29:34.336 254096 DEBUG oslo_concurrency.processutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7777dd86-925e-4f98-bd68-e38ac540d97b/disk.config 7777dd86-925e-4f98-bd68-e38ac540d97b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.880s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:29:34 compute-0 nova_compute[254092]: 2025-11-25 16:29:34.337 254096 INFO nova.virt.libvirt.driver [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Deleting local config drive /var/lib/nova/instances/7777dd86-925e-4f98-bd68-e38ac540d97b/disk.config because it was imported into RBD.
Nov 25 16:29:34 compute-0 kernel: tap0f27a287-0c: entered promiscuous mode
Nov 25 16:29:34 compute-0 NetworkManager[48891]: <info>  [1764088174.3922] manager: (tap0f27a287-0c): new Tun device (/org/freedesktop/NetworkManager/Devices/55)
Nov 25 16:29:34 compute-0 ovn_controller[153477]: 2025-11-25T16:29:34Z|00077|binding|INFO|Claiming lport 0f27a287-0c09-4767-a6cf-a7f4f8870ea1 for this chassis.
Nov 25 16:29:34 compute-0 ovn_controller[153477]: 2025-11-25T16:29:34Z|00078|binding|INFO|0f27a287-0c09-4767-a6cf-a7f4f8870ea1: Claiming fa:16:3e:a8:bd:39 10.100.0.9
Nov 25 16:29:34 compute-0 nova_compute[254092]: 2025-11-25 16:29:34.394 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:34 compute-0 ovn_controller[153477]: 2025-11-25T16:29:34Z|00079|binding|INFO|Setting lport 0f27a287-0c09-4767-a6cf-a7f4f8870ea1 ovn-installed in OVS
Nov 25 16:29:34 compute-0 nova_compute[254092]: 2025-11-25 16:29:34.414 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:34 compute-0 nova_compute[254092]: 2025-11-25 16:29:34.417 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:34.422 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:bd:39 10.100.0.9'], port_security=['fa:16:3e:a8:bd:39 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '7777dd86-925e-4f98-bd68-e38ac540d97b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a4c9877f37b34e7fba5f3cc9642d1a48', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2633f0b7-0e94-4653-a67a-e645ec4a69fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06f31098-96f7-4b9c-ab6f-85e953b1eed6, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=0f27a287-0c09-4767-a6cf-a7f4f8870ea1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:29:34 compute-0 ovn_controller[153477]: 2025-11-25T16:29:34Z|00080|binding|INFO|Setting lport 0f27a287-0c09-4767-a6cf-a7f4f8870ea1 up in Southbound
Nov 25 16:29:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:34.424 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 0f27a287-0c09-4767-a6cf-a7f4f8870ea1 in datapath 3403825e-13ff-43e0-80c4-b59cf23ed30b bound to our chassis
Nov 25 16:29:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:34.425 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3403825e-13ff-43e0-80c4-b59cf23ed30b
Nov 25 16:29:34 compute-0 systemd-machined[216343]: New machine qemu-25-instance-00000017.
Nov 25 16:29:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:34.441 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[810ff814-9c24-4a39-9abe-29f3f556afcc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:34 compute-0 systemd[1]: Started Virtual Machine qemu-25-instance-00000017.
Nov 25 16:29:34 compute-0 systemd-udevd[284204]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:29:34 compute-0 NetworkManager[48891]: <info>  [1764088174.4644] device (tap0f27a287-0c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:29:34 compute-0 NetworkManager[48891]: <info>  [1764088174.4654] device (tap0f27a287-0c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:29:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:34.478 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[3f268f27-656b-4d7a-95fd-816f0415cfc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:34.481 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[3212d5a0-1472-4628-8f4e-f0f5e18b78ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:34.509 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[41ff44a7-4b16-4f54-bef1-b2b53d37f22e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:34.525 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3a401624-5dba-4774-ae71-5084881324fd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3403825e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:44:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458728, 'reachable_time': 39709, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284216, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:34.542 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[62b0e16b-b206-4b35-82ae-ec58a216644b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3403825e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458739, 'tstamp': 458739}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284217, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3403825e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458742, 'tstamp': 458742}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284217, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:34.543 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3403825e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:29:34 compute-0 nova_compute[254092]: 2025-11-25 16:29:34.544 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:34 compute-0 nova_compute[254092]: 2025-11-25 16:29:34.546 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:34.546 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3403825e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:29:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:34.547 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:29:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:34.547 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3403825e-10, col_values=(('external_ids', {'iface-id': 'e9e1418a-32be-4b0e-81c1-ad489d6a782b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:29:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:34.547 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:29:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1258: 321 pgs: 321 active+clean; 401 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 603 KiB/s rd, 7.1 MiB/s wr, 181 op/s
Nov 25 16:29:35 compute-0 ceph-mon[74985]: pgmap v1258: 321 pgs: 321 active+clean; 401 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 603 KiB/s rd, 7.1 MiB/s wr, 181 op/s
Nov 25 16:29:35 compute-0 nova_compute[254092]: 2025-11-25 16:29:35.333 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088175.3331664, 7777dd86-925e-4f98-bd68-e38ac540d97b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:29:35 compute-0 nova_compute[254092]: 2025-11-25 16:29:35.334 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] VM Started (Lifecycle Event)
Nov 25 16:29:35 compute-0 nova_compute[254092]: 2025-11-25 16:29:35.353 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:29:35 compute-0 nova_compute[254092]: 2025-11-25 16:29:35.358 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088175.3333547, 7777dd86-925e-4f98-bd68-e38ac540d97b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:29:35 compute-0 nova_compute[254092]: 2025-11-25 16:29:35.358 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] VM Paused (Lifecycle Event)
Nov 25 16:29:35 compute-0 nova_compute[254092]: 2025-11-25 16:29:35.380 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:29:35 compute-0 nova_compute[254092]: 2025-11-25 16:29:35.386 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:29:35 compute-0 nova_compute[254092]: 2025-11-25 16:29:35.410 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:29:36 compute-0 nova_compute[254092]: 2025-11-25 16:29:36.284 254096 DEBUG nova.compute.manager [req-740559e0-95e1-4306-acd9-930239afac15 req-90fa3f1d-22c4-476b-bca2-43a3c9ad8197 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Received event network-vif-plugged-0f27a287-0c09-4767-a6cf-a7f4f8870ea1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:29:36 compute-0 nova_compute[254092]: 2025-11-25 16:29:36.285 254096 DEBUG oslo_concurrency.lockutils [req-740559e0-95e1-4306-acd9-930239afac15 req-90fa3f1d-22c4-476b-bca2-43a3c9ad8197 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7777dd86-925e-4f98-bd68-e38ac540d97b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:29:36 compute-0 nova_compute[254092]: 2025-11-25 16:29:36.285 254096 DEBUG oslo_concurrency.lockutils [req-740559e0-95e1-4306-acd9-930239afac15 req-90fa3f1d-22c4-476b-bca2-43a3c9ad8197 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7777dd86-925e-4f98-bd68-e38ac540d97b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:29:36 compute-0 nova_compute[254092]: 2025-11-25 16:29:36.286 254096 DEBUG oslo_concurrency.lockutils [req-740559e0-95e1-4306-acd9-930239afac15 req-90fa3f1d-22c4-476b-bca2-43a3c9ad8197 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7777dd86-925e-4f98-bd68-e38ac540d97b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:29:36 compute-0 nova_compute[254092]: 2025-11-25 16:29:36.286 254096 DEBUG nova.compute.manager [req-740559e0-95e1-4306-acd9-930239afac15 req-90fa3f1d-22c4-476b-bca2-43a3c9ad8197 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Processing event network-vif-plugged-0f27a287-0c09-4767-a6cf-a7f4f8870ea1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:29:36 compute-0 nova_compute[254092]: 2025-11-25 16:29:36.286 254096 DEBUG nova.compute.manager [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:29:36 compute-0 nova_compute[254092]: 2025-11-25 16:29:36.290 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088176.2896876, 7777dd86-925e-4f98-bd68-e38ac540d97b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:29:36 compute-0 nova_compute[254092]: 2025-11-25 16:29:36.291 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] VM Resumed (Lifecycle Event)
Nov 25 16:29:36 compute-0 nova_compute[254092]: 2025-11-25 16:29:36.293 254096 DEBUG nova.virt.libvirt.driver [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:29:36 compute-0 nova_compute[254092]: 2025-11-25 16:29:36.297 254096 INFO nova.virt.libvirt.driver [-] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Instance spawned successfully.
Nov 25 16:29:36 compute-0 nova_compute[254092]: 2025-11-25 16:29:36.298 254096 DEBUG nova.virt.libvirt.driver [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:29:36 compute-0 nova_compute[254092]: 2025-11-25 16:29:36.313 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:29:36 compute-0 nova_compute[254092]: 2025-11-25 16:29:36.321 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:29:36 compute-0 nova_compute[254092]: 2025-11-25 16:29:36.325 254096 DEBUG nova.virt.libvirt.driver [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:29:36 compute-0 nova_compute[254092]: 2025-11-25 16:29:36.326 254096 DEBUG nova.virt.libvirt.driver [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:29:36 compute-0 nova_compute[254092]: 2025-11-25 16:29:36.327 254096 DEBUG nova.virt.libvirt.driver [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:29:36 compute-0 nova_compute[254092]: 2025-11-25 16:29:36.327 254096 DEBUG nova.virt.libvirt.driver [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:29:36 compute-0 nova_compute[254092]: 2025-11-25 16:29:36.328 254096 DEBUG nova.virt.libvirt.driver [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:29:36 compute-0 nova_compute[254092]: 2025-11-25 16:29:36.329 254096 DEBUG nova.virt.libvirt.driver [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:29:36 compute-0 nova_compute[254092]: 2025-11-25 16:29:36.354 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:29:36 compute-0 nova_compute[254092]: 2025-11-25 16:29:36.698 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:36 compute-0 nova_compute[254092]: 2025-11-25 16:29:36.774 254096 INFO nova.compute.manager [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Took 11.94 seconds to spawn the instance on the hypervisor.
Nov 25 16:29:36 compute-0 nova_compute[254092]: 2025-11-25 16:29:36.775 254096 DEBUG nova.compute.manager [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:29:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1259: 321 pgs: 321 active+clean; 405 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 650 KiB/s rd, 5.4 MiB/s wr, 164 op/s
Nov 25 16:29:36 compute-0 nova_compute[254092]: 2025-11-25 16:29:36.981 254096 INFO nova.compute.manager [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Took 13.11 seconds to build instance.
Nov 25 16:29:37 compute-0 nova_compute[254092]: 2025-11-25 16:29:37.310 254096 DEBUG oslo_concurrency.lockutils [None req-026b2700-5056-4267-85a0-631d329c172f a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "7777dd86-925e-4f98-bd68-e38ac540d97b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.541s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:29:37 compute-0 nova_compute[254092]: 2025-11-25 16:29:37.445 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:37 compute-0 ceph-mon[74985]: pgmap v1259: 321 pgs: 321 active+clean; 405 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 650 KiB/s rd, 5.4 MiB/s wr, 164 op/s
Nov 25 16:29:37 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:29:38 compute-0 nova_compute[254092]: 2025-11-25 16:29:38.698 254096 DEBUG oslo_concurrency.lockutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "df0db130-3ffa-4a60-8f7d-fb285a797631" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:29:38 compute-0 nova_compute[254092]: 2025-11-25 16:29:38.699 254096 DEBUG oslo_concurrency.lockutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "df0db130-3ffa-4a60-8f7d-fb285a797631" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:29:38 compute-0 nova_compute[254092]: 2025-11-25 16:29:38.766 254096 DEBUG nova.compute.manager [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:29:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1260: 321 pgs: 321 active+clean; 405 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 555 KiB/s rd, 3.4 MiB/s wr, 123 op/s
Nov 25 16:29:39 compute-0 nova_compute[254092]: 2025-11-25 16:29:39.006 254096 DEBUG oslo_concurrency.lockutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:29:39 compute-0 nova_compute[254092]: 2025-11-25 16:29:39.007 254096 DEBUG oslo_concurrency.lockutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:29:39 compute-0 nova_compute[254092]: 2025-11-25 16:29:39.018 254096 DEBUG nova.virt.hardware [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:29:39 compute-0 nova_compute[254092]: 2025-11-25 16:29:39.018 254096 INFO nova.compute.claims [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:29:39 compute-0 ceph-mon[74985]: pgmap v1260: 321 pgs: 321 active+clean; 405 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 555 KiB/s rd, 3.4 MiB/s wr, 123 op/s
Nov 25 16:29:39 compute-0 nova_compute[254092]: 2025-11-25 16:29:39.247 254096 DEBUG nova.scheduler.client.report [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Refreshing inventories for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 16:29:39 compute-0 nova_compute[254092]: 2025-11-25 16:29:39.255 254096 DEBUG nova.compute.manager [req-84275d34-acb9-4543-bebd-e693e7c6177d req-7f0fe022-48eb-47af-913a-f9a33764b9f2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Received event network-vif-plugged-0f27a287-0c09-4767-a6cf-a7f4f8870ea1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:29:39 compute-0 nova_compute[254092]: 2025-11-25 16:29:39.256 254096 DEBUG oslo_concurrency.lockutils [req-84275d34-acb9-4543-bebd-e693e7c6177d req-7f0fe022-48eb-47af-913a-f9a33764b9f2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7777dd86-925e-4f98-bd68-e38ac540d97b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:29:39 compute-0 nova_compute[254092]: 2025-11-25 16:29:39.256 254096 DEBUG oslo_concurrency.lockutils [req-84275d34-acb9-4543-bebd-e693e7c6177d req-7f0fe022-48eb-47af-913a-f9a33764b9f2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7777dd86-925e-4f98-bd68-e38ac540d97b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:29:39 compute-0 nova_compute[254092]: 2025-11-25 16:29:39.257 254096 DEBUG oslo_concurrency.lockutils [req-84275d34-acb9-4543-bebd-e693e7c6177d req-7f0fe022-48eb-47af-913a-f9a33764b9f2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7777dd86-925e-4f98-bd68-e38ac540d97b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:29:39 compute-0 nova_compute[254092]: 2025-11-25 16:29:39.257 254096 DEBUG nova.compute.manager [req-84275d34-acb9-4543-bebd-e693e7c6177d req-7f0fe022-48eb-47af-913a-f9a33764b9f2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] No waiting events found dispatching network-vif-plugged-0f27a287-0c09-4767-a6cf-a7f4f8870ea1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:29:39 compute-0 nova_compute[254092]: 2025-11-25 16:29:39.257 254096 WARNING nova.compute.manager [req-84275d34-acb9-4543-bebd-e693e7c6177d req-7f0fe022-48eb-47af-913a-f9a33764b9f2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Received unexpected event network-vif-plugged-0f27a287-0c09-4767-a6cf-a7f4f8870ea1 for instance with vm_state active and task_state None.
Nov 25 16:29:39 compute-0 nova_compute[254092]: 2025-11-25 16:29:39.268 254096 DEBUG nova.scheduler.client.report [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Updating ProviderTree inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 16:29:39 compute-0 nova_compute[254092]: 2025-11-25 16:29:39.269 254096 DEBUG nova.compute.provider_tree [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 16:29:39 compute-0 nova_compute[254092]: 2025-11-25 16:29:39.286 254096 DEBUG nova.scheduler.client.report [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Refreshing aggregate associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 16:29:39 compute-0 nova_compute[254092]: 2025-11-25 16:29:39.294 254096 DEBUG oslo_concurrency.lockutils [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Acquiring lock "7b9f60af-05f0-43c7-bce7-227cb54ec793" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:29:39 compute-0 nova_compute[254092]: 2025-11-25 16:29:39.294 254096 DEBUG oslo_concurrency.lockutils [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Lock "7b9f60af-05f0-43c7-bce7-227cb54ec793" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:29:39 compute-0 nova_compute[254092]: 2025-11-25 16:29:39.295 254096 DEBUG oslo_concurrency.lockutils [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Acquiring lock "7b9f60af-05f0-43c7-bce7-227cb54ec793-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:29:39 compute-0 nova_compute[254092]: 2025-11-25 16:29:39.295 254096 DEBUG oslo_concurrency.lockutils [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Lock "7b9f60af-05f0-43c7-bce7-227cb54ec793-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:29:39 compute-0 nova_compute[254092]: 2025-11-25 16:29:39.295 254096 DEBUG oslo_concurrency.lockutils [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Lock "7b9f60af-05f0-43c7-bce7-227cb54ec793-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:29:39 compute-0 nova_compute[254092]: 2025-11-25 16:29:39.296 254096 INFO nova.compute.manager [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Terminating instance
Nov 25 16:29:39 compute-0 nova_compute[254092]: 2025-11-25 16:29:39.297 254096 DEBUG nova.compute.manager [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:29:39 compute-0 nova_compute[254092]: 2025-11-25 16:29:39.319 254096 DEBUG nova.scheduler.client.report [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Refreshing trait associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 16:29:39 compute-0 nova_compute[254092]: 2025-11-25 16:29:39.515 254096 DEBUG oslo_concurrency.processutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:29:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:29:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:29:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:29:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:29:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:29:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:29:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:29:40
Nov 25 16:29:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:29:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:29:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'default.rgw.log', 'backups', '.rgw.root', 'images', '.mgr', 'volumes', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta']
Nov 25 16:29:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:29:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:29:40 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1209716092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:29:40 compute-0 nova_compute[254092]: 2025-11-25 16:29:40.086 254096 DEBUG oslo_concurrency.processutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:29:40 compute-0 nova_compute[254092]: 2025-11-25 16:29:40.097 254096 DEBUG nova.compute.provider_tree [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:29:40 compute-0 nova_compute[254092]: 2025-11-25 16:29:40.111 254096 DEBUG nova.scheduler.client.report [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:29:40 compute-0 sudo[284281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:29:40 compute-0 sudo[284281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:29:40 compute-0 sudo[284281]: pam_unix(sudo:session): session closed for user root
Nov 25 16:29:40 compute-0 nova_compute[254092]: 2025-11-25 16:29:40.157 254096 DEBUG oslo_concurrency.lockutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:29:40 compute-0 nova_compute[254092]: 2025-11-25 16:29:40.158 254096 DEBUG nova.compute.manager [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:29:40 compute-0 kernel: tapab0cfddf-69 (unregistering): left promiscuous mode
Nov 25 16:29:40 compute-0 NetworkManager[48891]: <info>  [1764088180.1872] device (tapab0cfddf-69): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:29:40 compute-0 sudo[284307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:29:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:29:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:29:40 compute-0 sudo[284307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:29:40 compute-0 nova_compute[254092]: 2025-11-25 16:29:40.263 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:40 compute-0 ovn_controller[153477]: 2025-11-25T16:29:40Z|00081|binding|INFO|Releasing lport ab0cfddf-69e0-4494-a106-e603168444a4 from this chassis (sb_readonly=0)
Nov 25 16:29:40 compute-0 ovn_controller[153477]: 2025-11-25T16:29:40Z|00082|binding|INFO|Setting lport ab0cfddf-69e0-4494-a106-e603168444a4 down in Southbound
Nov 25 16:29:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:29:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:29:40 compute-0 ovn_controller[153477]: 2025-11-25T16:29:40Z|00083|binding|INFO|Removing iface tapab0cfddf-69 ovn-installed in OVS
Nov 25 16:29:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:29:40 compute-0 sudo[284307]: pam_unix(sudo:session): session closed for user root
Nov 25 16:29:40 compute-0 nova_compute[254092]: 2025-11-25 16:29:40.268 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:40 compute-0 nova_compute[254092]: 2025-11-25 16:29:40.289 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:40 compute-0 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000015.scope: Deactivated successfully.
Nov 25 16:29:40 compute-0 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000015.scope: Consumed 13.670s CPU time.
Nov 25 16:29:40 compute-0 systemd-machined[216343]: Machine qemu-23-instance-00000015 terminated.
Nov 25 16:29:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:40.324 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:e6:95 10.100.0.13'], port_security=['fa:16:3e:7d:e6:95 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '7b9f60af-05f0-43c7-bce7-227cb54ec793', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-09313f5b-a3fb-41e8-87c2-c636c3ed13c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f99f039b80564f5684a91f3bc27c2249', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a14e6fc5-327e-44fa-8134-4f62c2b97373', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.224'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=16d598ad-25ce-4f41-98a7-2a9985da8936, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=ab0cfddf-69e0-4494-a106-e603168444a4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:29:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:40.326 163338 INFO neutron.agent.ovn.metadata.agent [-] Port ab0cfddf-69e0-4494-a106-e603168444a4 in datapath 09313f5b-a3fb-41e8-87c2-c636c3ed13c6 unbound from our chassis
Nov 25 16:29:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:40.327 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 09313f5b-a3fb-41e8-87c2-c636c3ed13c6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:29:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:40.328 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4f052e01-d820-4e80-9097-8c7307bfb045]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:40.328 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-09313f5b-a3fb-41e8-87c2-c636c3ed13c6 namespace which is not needed anymore
Nov 25 16:29:40 compute-0 sudo[284334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:29:40 compute-0 sudo[284334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:29:40 compute-0 sudo[284334]: pam_unix(sudo:session): session closed for user root
Nov 25 16:29:40 compute-0 nova_compute[254092]: 2025-11-25 16:29:40.334 254096 DEBUG nova.compute.manager [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:29:40 compute-0 nova_compute[254092]: 2025-11-25 16:29:40.335 254096 DEBUG nova.network.neutron [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:29:40 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1209716092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:29:40 compute-0 sudo[284365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 16:29:40 compute-0 sudo[284365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:29:40 compute-0 nova_compute[254092]: 2025-11-25 16:29:40.439 254096 INFO nova.virt.libvirt.driver [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:29:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:29:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:29:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:29:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:29:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:29:40 compute-0 nova_compute[254092]: 2025-11-25 16:29:40.516 254096 DEBUG nova.policy [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c2f73d928f424db49a62e7dff2b1ce14', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '36fd407edf16424e96041f57fccb9e2b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:29:40 compute-0 nova_compute[254092]: 2025-11-25 16:29:40.538 254096 DEBUG nova.compute.manager [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:29:40 compute-0 nova_compute[254092]: 2025-11-25 16:29:40.552 254096 INFO nova.virt.libvirt.driver [-] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Instance destroyed successfully.
Nov 25 16:29:40 compute-0 nova_compute[254092]: 2025-11-25 16:29:40.552 254096 DEBUG nova.objects.instance [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Lazy-loading 'resources' on Instance uuid 7b9f60af-05f0-43c7-bce7-227cb54ec793 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:29:40 compute-0 nova_compute[254092]: 2025-11-25 16:29:40.568 254096 DEBUG nova.virt.libvirt.vif [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:28:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1649655527',display_name='tempest-ServersTestJSON-server-1649655527',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1649655527',id=21,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCW+Ktdde8brSi3dDjuPJuQcQZxoIsAM7EI886G65qnVc9QFdS/VNcpSvi1Y/e9z6GKqL8cPDahYUZN5KOZSYOR8WETlyE2X3Pf8M2fEr9LePpk/dgU5OnSDx/LsY9Zvng==',key_name='tempest-keypair-749925212',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:29:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f99f039b80564f5684a91f3bc27c2249',ramdisk_id='',reservation_id='r-00qo170f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-841201839',owner_user_name='tempest-ServersTestJSON-841201839-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:29:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='09afd60d3afd4a57a14e7e93a66275f9',uuid=7b9f60af-05f0-43c7-bce7-227cb54ec793,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ab0cfddf-69e0-4494-a106-e603168444a4", "address": "fa:16:3e:7d:e6:95", "network": {"id": "09313f5b-a3fb-41e8-87c2-c636c3ed13c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-1359044902-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f99f039b80564f5684a91f3bc27c2249", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab0cfddf-69", "ovs_interfaceid": "ab0cfddf-69e0-4494-a106-e603168444a4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:29:40 compute-0 nova_compute[254092]: 2025-11-25 16:29:40.569 254096 DEBUG nova.network.os_vif_util [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Converting VIF {"id": "ab0cfddf-69e0-4494-a106-e603168444a4", "address": "fa:16:3e:7d:e6:95", "network": {"id": "09313f5b-a3fb-41e8-87c2-c636c3ed13c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-1359044902-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f99f039b80564f5684a91f3bc27c2249", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab0cfddf-69", "ovs_interfaceid": "ab0cfddf-69e0-4494-a106-e603168444a4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:29:40 compute-0 nova_compute[254092]: 2025-11-25 16:29:40.569 254096 DEBUG nova.network.os_vif_util [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7d:e6:95,bridge_name='br-int',has_traffic_filtering=True,id=ab0cfddf-69e0-4494-a106-e603168444a4,network=Network(09313f5b-a3fb-41e8-87c2-c636c3ed13c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab0cfddf-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:29:40 compute-0 nova_compute[254092]: 2025-11-25 16:29:40.570 254096 DEBUG os_vif [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7d:e6:95,bridge_name='br-int',has_traffic_filtering=True,id=ab0cfddf-69e0-4494-a106-e603168444a4,network=Network(09313f5b-a3fb-41e8-87c2-c636c3ed13c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab0cfddf-69') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:29:40 compute-0 nova_compute[254092]: 2025-11-25 16:29:40.571 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:40 compute-0 nova_compute[254092]: 2025-11-25 16:29:40.572 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapab0cfddf-69, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:29:40 compute-0 nova_compute[254092]: 2025-11-25 16:29:40.575 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:40 compute-0 nova_compute[254092]: 2025-11-25 16:29:40.577 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:40 compute-0 nova_compute[254092]: 2025-11-25 16:29:40.579 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:29:40 compute-0 nova_compute[254092]: 2025-11-25 16:29:40.579 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:40 compute-0 nova_compute[254092]: 2025-11-25 16:29:40.583 254096 INFO os_vif [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7d:e6:95,bridge_name='br-int',has_traffic_filtering=True,id=ab0cfddf-69e0-4494-a106-e603168444a4,network=Network(09313f5b-a3fb-41e8-87c2-c636c3ed13c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab0cfddf-69')
Nov 25 16:29:40 compute-0 neutron-haproxy-ovnmeta-09313f5b-a3fb-41e8-87c2-c636c3ed13c6[283606]: [NOTICE]   (283610) : haproxy version is 2.8.14-c23fe91
Nov 25 16:29:40 compute-0 neutron-haproxy-ovnmeta-09313f5b-a3fb-41e8-87c2-c636c3ed13c6[283606]: [NOTICE]   (283610) : path to executable is /usr/sbin/haproxy
Nov 25 16:29:40 compute-0 neutron-haproxy-ovnmeta-09313f5b-a3fb-41e8-87c2-c636c3ed13c6[283606]: [WARNING]  (283610) : Exiting Master process...
Nov 25 16:29:40 compute-0 neutron-haproxy-ovnmeta-09313f5b-a3fb-41e8-87c2-c636c3ed13c6[283606]: [ALERT]    (283610) : Current worker (283612) exited with code 143 (Terminated)
Nov 25 16:29:40 compute-0 neutron-haproxy-ovnmeta-09313f5b-a3fb-41e8-87c2-c636c3ed13c6[283606]: [WARNING]  (283610) : All workers exited. Exiting... (0)
Nov 25 16:29:40 compute-0 systemd[1]: libpod-6baa1db3f236147dc0bbd99eed18035c811ce0835158bee0d374c9945781d4a0.scope: Deactivated successfully.
Nov 25 16:29:40 compute-0 podman[284406]: 2025-11-25 16:29:40.79381193 +0000 UTC m=+0.351471060 container died 6baa1db3f236147dc0bbd99eed18035c811ce0835158bee0d374c9945781d4a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-09313f5b-a3fb-41e8-87c2-c636c3ed13c6, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS)
Nov 25 16:29:40 compute-0 nova_compute[254092]: 2025-11-25 16:29:40.897 254096 DEBUG nova.compute.manager [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:29:40 compute-0 nova_compute[254092]: 2025-11-25 16:29:40.899 254096 DEBUG nova.virt.libvirt.driver [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:29:40 compute-0 nova_compute[254092]: 2025-11-25 16:29:40.899 254096 INFO nova.virt.libvirt.driver [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Creating image(s)
Nov 25 16:29:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1261: 321 pgs: 321 active+clean; 405 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.4 MiB/s wr, 162 op/s
Nov 25 16:29:40 compute-0 nova_compute[254092]: 2025-11-25 16:29:40.920 254096 DEBUG nova.storage.rbd_utils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image df0db130-3ffa-4a60-8f7d-fb285a797631_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:29:40 compute-0 nova_compute[254092]: 2025-11-25 16:29:40.941 254096 DEBUG nova.storage.rbd_utils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image df0db130-3ffa-4a60-8f7d-fb285a797631_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:29:40 compute-0 nova_compute[254092]: 2025-11-25 16:29:40.972 254096 DEBUG nova.storage.rbd_utils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image df0db130-3ffa-4a60-8f7d-fb285a797631_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:29:40 compute-0 nova_compute[254092]: 2025-11-25 16:29:40.975 254096 DEBUG oslo_concurrency.processutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:29:41 compute-0 nova_compute[254092]: 2025-11-25 16:29:41.034 254096 DEBUG oslo_concurrency.processutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:29:41 compute-0 nova_compute[254092]: 2025-11-25 16:29:41.035 254096 DEBUG oslo_concurrency.lockutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:29:41 compute-0 nova_compute[254092]: 2025-11-25 16:29:41.036 254096 DEBUG oslo_concurrency.lockutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:29:41 compute-0 nova_compute[254092]: 2025-11-25 16:29:41.036 254096 DEBUG oslo_concurrency.lockutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:29:41 compute-0 nova_compute[254092]: 2025-11-25 16:29:41.061 254096 DEBUG nova.storage.rbd_utils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image df0db130-3ffa-4a60-8f7d-fb285a797631_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:29:41 compute-0 nova_compute[254092]: 2025-11-25 16:29:41.064 254096 DEBUG oslo_concurrency.processutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 df0db130-3ffa-4a60-8f7d-fb285a797631_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:29:41 compute-0 sudo[284365]: pam_unix(sudo:session): session closed for user root
Nov 25 16:29:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:29:41 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:29:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 16:29:41 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:29:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 16:29:41 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:29:41 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev df56d931-08e0-419a-b65d-a62d57567bc0 does not exist
Nov 25 16:29:41 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 5afbea57-d205-43fa-8ca9-d0c165bbff8d does not exist
Nov 25 16:29:41 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 7f77ce50-39b3-46a9-91c2-1439c6aa063e does not exist
Nov 25 16:29:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 16:29:41 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:29:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 16:29:41 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:29:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:29:41 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:29:41 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6baa1db3f236147dc0bbd99eed18035c811ce0835158bee0d374c9945781d4a0-userdata-shm.mount: Deactivated successfully.
Nov 25 16:29:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-eeccf6ecdea4aa475a47ca66b66b3f98184438b7398c8f4397fb73ed72c289ab-merged.mount: Deactivated successfully.
Nov 25 16:29:41 compute-0 sudo[284579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:29:41 compute-0 sudo[284579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:29:41 compute-0 sudo[284579]: pam_unix(sudo:session): session closed for user root
Nov 25 16:29:41 compute-0 ceph-mon[74985]: pgmap v1261: 321 pgs: 321 active+clean; 405 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.4 MiB/s wr, 162 op/s
Nov 25 16:29:41 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:29:41 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:29:41 compute-0 podman[284406]: 2025-11-25 16:29:41.647533478 +0000 UTC m=+1.205192608 container cleanup 6baa1db3f236147dc0bbd99eed18035c811ce0835158bee0d374c9945781d4a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-09313f5b-a3fb-41e8-87c2-c636c3ed13c6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 16:29:41 compute-0 sudo[284604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:29:41 compute-0 sudo[284604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:29:41 compute-0 sudo[284604]: pam_unix(sudo:session): session closed for user root
Nov 25 16:29:41 compute-0 systemd[1]: libpod-conmon-6baa1db3f236147dc0bbd99eed18035c811ce0835158bee0d374c9945781d4a0.scope: Deactivated successfully.
Nov 25 16:29:41 compute-0 sudo[284637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:29:41 compute-0 sudo[284637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:29:41 compute-0 sudo[284637]: pam_unix(sudo:session): session closed for user root
Nov 25 16:29:41 compute-0 nova_compute[254092]: 2025-11-25 16:29:41.768 254096 DEBUG oslo_concurrency.lockutils [None req-998c2f1d-d361-4651-9ba4-3a4698666d07 e037b47d42a0432bbaabbfe35ef5aa65 f4a8417f33c84768ba645fe8fc4f0876 - - default default] Acquiring lock "refresh_cache-7777dd86-925e-4f98-bd68-e38ac540d97b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:29:41 compute-0 nova_compute[254092]: 2025-11-25 16:29:41.768 254096 DEBUG oslo_concurrency.lockutils [None req-998c2f1d-d361-4651-9ba4-3a4698666d07 e037b47d42a0432bbaabbfe35ef5aa65 f4a8417f33c84768ba645fe8fc4f0876 - - default default] Acquired lock "refresh_cache-7777dd86-925e-4f98-bd68-e38ac540d97b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:29:41 compute-0 nova_compute[254092]: 2025-11-25 16:29:41.768 254096 DEBUG nova.network.neutron [None req-998c2f1d-d361-4651-9ba4-3a4698666d07 e037b47d42a0432bbaabbfe35ef5aa65 f4a8417f33c84768ba645fe8fc4f0876 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:29:41 compute-0 sudo[284671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 16:29:41 compute-0 sudo[284671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:29:41 compute-0 nova_compute[254092]: 2025-11-25 16:29:41.838 254096 DEBUG nova.compute.manager [req-66cacc5b-4800-4cf4-afab-94930229435b req-468c283e-8cc3-4986-8d75-c94303ec4b2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Received event network-vif-unplugged-ab0cfddf-69e0-4494-a106-e603168444a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:29:41 compute-0 nova_compute[254092]: 2025-11-25 16:29:41.839 254096 DEBUG oslo_concurrency.lockutils [req-66cacc5b-4800-4cf4-afab-94930229435b req-468c283e-8cc3-4986-8d75-c94303ec4b2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7b9f60af-05f0-43c7-bce7-227cb54ec793-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:29:41 compute-0 nova_compute[254092]: 2025-11-25 16:29:41.839 254096 DEBUG oslo_concurrency.lockutils [req-66cacc5b-4800-4cf4-afab-94930229435b req-468c283e-8cc3-4986-8d75-c94303ec4b2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7b9f60af-05f0-43c7-bce7-227cb54ec793-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:29:41 compute-0 nova_compute[254092]: 2025-11-25 16:29:41.840 254096 DEBUG oslo_concurrency.lockutils [req-66cacc5b-4800-4cf4-afab-94930229435b req-468c283e-8cc3-4986-8d75-c94303ec4b2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7b9f60af-05f0-43c7-bce7-227cb54ec793-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:29:41 compute-0 nova_compute[254092]: 2025-11-25 16:29:41.840 254096 DEBUG nova.compute.manager [req-66cacc5b-4800-4cf4-afab-94930229435b req-468c283e-8cc3-4986-8d75-c94303ec4b2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] No waiting events found dispatching network-vif-unplugged-ab0cfddf-69e0-4494-a106-e603168444a4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:29:41 compute-0 nova_compute[254092]: 2025-11-25 16:29:41.840 254096 DEBUG nova.compute.manager [req-66cacc5b-4800-4cf4-afab-94930229435b req-468c283e-8cc3-4986-8d75-c94303ec4b2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Received event network-vif-unplugged-ab0cfddf-69e0-4494-a106-e603168444a4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:29:41 compute-0 podman[284631]: 2025-11-25 16:29:41.88256184 +0000 UTC m=+0.194652446 container remove 6baa1db3f236147dc0bbd99eed18035c811ce0835158bee0d374c9945781d4a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-09313f5b-a3fb-41e8-87c2-c636c3ed13c6, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 16:29:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:41.888 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[596414f9-3c37-4c7d-84f0-8c7c7aebf0cf]: (4, ('Tue Nov 25 04:29:40 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-09313f5b-a3fb-41e8-87c2-c636c3ed13c6 (6baa1db3f236147dc0bbd99eed18035c811ce0835158bee0d374c9945781d4a0)\n6baa1db3f236147dc0bbd99eed18035c811ce0835158bee0d374c9945781d4a0\nTue Nov 25 04:29:41 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-09313f5b-a3fb-41e8-87c2-c636c3ed13c6 (6baa1db3f236147dc0bbd99eed18035c811ce0835158bee0d374c9945781d4a0)\n6baa1db3f236147dc0bbd99eed18035c811ce0835158bee0d374c9945781d4a0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:41.889 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e20b6b07-150c-4d5a-acc4-7a359ce6e48c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:41.890 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09313f5b-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:29:41 compute-0 kernel: tap09313f5b-a0: left promiscuous mode
Nov 25 16:29:41 compute-0 nova_compute[254092]: 2025-11-25 16:29:41.892 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:41 compute-0 nova_compute[254092]: 2025-11-25 16:29:41.914 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:41.919 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[959376ce-d578-4e25-800c-4cc6fd479db0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:41 compute-0 nova_compute[254092]: 2025-11-25 16:29:41.932 254096 DEBUG oslo_concurrency.processutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 df0db130-3ffa-4a60-8f7d-fb285a797631_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.868s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:29:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:41.935 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5ace72e8-6200-419c-b28f-d09c38e7d01e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:41.939 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cf1646ea-41c9-4d7b-9cf5-f8f5f742f5e0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:41.956 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d3b9d92e-f039-4893-a8cd-3a5a922fcaaa]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460929, 'reachable_time': 18911, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284708, 'error': None, 'target': 'ovnmeta-09313f5b-a3fb-41e8-87c2-c636c3ed13c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:41 compute-0 systemd[1]: run-netns-ovnmeta\x2d09313f5b\x2da3fb\x2d41e8\x2d87c2\x2dc636c3ed13c6.mount: Deactivated successfully.
Nov 25 16:29:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:41.962 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-09313f5b-a3fb-41e8-87c2-c636c3ed13c6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:29:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:41.962 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[16ef63a0-8510-49ea-b3da-4414f638da00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:42 compute-0 nova_compute[254092]: 2025-11-25 16:29:42.018 254096 DEBUG nova.storage.rbd_utils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] resizing rbd image df0db130-3ffa-4a60-8f7d-fb285a797631_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:29:42 compute-0 nova_compute[254092]: 2025-11-25 16:29:42.059 254096 DEBUG nova.network.neutron [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Successfully created port: 02104fc6-3780-400d-a6c2-577082384680 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:29:42 compute-0 nova_compute[254092]: 2025-11-25 16:29:42.159 254096 DEBUG nova.objects.instance [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'migration_context' on Instance uuid df0db130-3ffa-4a60-8f7d-fb285a797631 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:29:42 compute-0 podman[284793]: 2025-11-25 16:29:42.178839486 +0000 UTC m=+0.053226418 container create 6f6bf4237e9b37a3d7b2a6b47c2b8da132111be659155b89fce0a5423b956fca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_antonelli, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 16:29:42 compute-0 nova_compute[254092]: 2025-11-25 16:29:42.179 254096 DEBUG nova.virt.libvirt.driver [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:29:42 compute-0 nova_compute[254092]: 2025-11-25 16:29:42.179 254096 DEBUG nova.virt.libvirt.driver [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Ensure instance console log exists: /var/lib/nova/instances/df0db130-3ffa-4a60-8f7d-fb285a797631/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:29:42 compute-0 nova_compute[254092]: 2025-11-25 16:29:42.181 254096 DEBUG oslo_concurrency.lockutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:29:42 compute-0 nova_compute[254092]: 2025-11-25 16:29:42.182 254096 DEBUG oslo_concurrency.lockutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:29:42 compute-0 nova_compute[254092]: 2025-11-25 16:29:42.182 254096 DEBUG oslo_concurrency.lockutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:29:42 compute-0 systemd[1]: Started libpod-conmon-6f6bf4237e9b37a3d7b2a6b47c2b8da132111be659155b89fce0a5423b956fca.scope.
Nov 25 16:29:42 compute-0 podman[284793]: 2025-11-25 16:29:42.14773002 +0000 UTC m=+0.022116982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:29:42 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:29:42 compute-0 podman[284793]: 2025-11-25 16:29:42.277204731 +0000 UTC m=+0.151591683 container init 6f6bf4237e9b37a3d7b2a6b47c2b8da132111be659155b89fce0a5423b956fca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_antonelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:29:42 compute-0 podman[284793]: 2025-11-25 16:29:42.286255738 +0000 UTC m=+0.160642670 container start 6f6bf4237e9b37a3d7b2a6b47c2b8da132111be659155b89fce0a5423b956fca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_antonelli, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 16:29:42 compute-0 podman[284793]: 2025-11-25 16:29:42.29074159 +0000 UTC m=+0.165128542 container attach 6f6bf4237e9b37a3d7b2a6b47c2b8da132111be659155b89fce0a5423b956fca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_antonelli, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:29:42 compute-0 crazy_antonelli[284828]: 167 167
Nov 25 16:29:42 compute-0 systemd[1]: libpod-6f6bf4237e9b37a3d7b2a6b47c2b8da132111be659155b89fce0a5423b956fca.scope: Deactivated successfully.
Nov 25 16:29:42 compute-0 podman[284793]: 2025-11-25 16:29:42.294866812 +0000 UTC m=+0.169253744 container died 6f6bf4237e9b37a3d7b2a6b47c2b8da132111be659155b89fce0a5423b956fca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:29:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-23cf44d45d8860db81dcd4274e00625732c539e99ddd759a14f2b0f30493b4e3-merged.mount: Deactivated successfully.
Nov 25 16:29:42 compute-0 podman[284793]: 2025-11-25 16:29:42.348012708 +0000 UTC m=+0.222399630 container remove 6f6bf4237e9b37a3d7b2a6b47c2b8da132111be659155b89fce0a5423b956fca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_antonelli, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:29:42 compute-0 systemd[1]: libpod-conmon-6f6bf4237e9b37a3d7b2a6b47c2b8da132111be659155b89fce0a5423b956fca.scope: Deactivated successfully.
Nov 25 16:29:42 compute-0 nova_compute[254092]: 2025-11-25 16:29:42.386 254096 INFO nova.virt.libvirt.driver [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Deleting instance files /var/lib/nova/instances/7b9f60af-05f0-43c7-bce7-227cb54ec793_del
Nov 25 16:29:42 compute-0 nova_compute[254092]: 2025-11-25 16:29:42.388 254096 INFO nova.virt.libvirt.driver [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Deletion of /var/lib/nova/instances/7b9f60af-05f0-43c7-bce7-227cb54ec793_del complete
Nov 25 16:29:42 compute-0 nova_compute[254092]: 2025-11-25 16:29:42.449 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:42 compute-0 nova_compute[254092]: 2025-11-25 16:29:42.454 254096 INFO nova.compute.manager [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Took 3.16 seconds to destroy the instance on the hypervisor.
Nov 25 16:29:42 compute-0 nova_compute[254092]: 2025-11-25 16:29:42.455 254096 DEBUG oslo.service.loopingcall [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:29:42 compute-0 nova_compute[254092]: 2025-11-25 16:29:42.455 254096 DEBUG nova.compute.manager [-] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:29:42 compute-0 nova_compute[254092]: 2025-11-25 16:29:42.455 254096 DEBUG nova.network.neutron [-] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:29:42 compute-0 podman[284851]: 2025-11-25 16:29:42.540294496 +0000 UTC m=+0.036293128 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:29:42 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:29:42 compute-0 podman[284851]: 2025-11-25 16:29:42.83570109 +0000 UTC m=+0.331699672 container create 0b7096a977c3ee7c40e819c22dad607e17e20aa8510c6b87558b1b5c84b03229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 16:29:42 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:29:42 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:29:42 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:29:42 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:29:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1262: 321 pgs: 321 active+clean; 405 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 797 KiB/s wr, 108 op/s
Nov 25 16:29:42 compute-0 systemd[1]: Started libpod-conmon-0b7096a977c3ee7c40e819c22dad607e17e20aa8510c6b87558b1b5c84b03229.scope.
Nov 25 16:29:42 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:29:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4db818a4238768ad069149d390f192041fa9003b4fa7e7f10becd8d3ad6f93d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:29:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4db818a4238768ad069149d390f192041fa9003b4fa7e7f10becd8d3ad6f93d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:29:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4db818a4238768ad069149d390f192041fa9003b4fa7e7f10becd8d3ad6f93d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:29:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4db818a4238768ad069149d390f192041fa9003b4fa7e7f10becd8d3ad6f93d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:29:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4db818a4238768ad069149d390f192041fa9003b4fa7e7f10becd8d3ad6f93d9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 16:29:43 compute-0 podman[284851]: 2025-11-25 16:29:43.234040563 +0000 UTC m=+0.730039205 container init 0b7096a977c3ee7c40e819c22dad607e17e20aa8510c6b87558b1b5c84b03229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_mclean, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:29:43 compute-0 podman[284851]: 2025-11-25 16:29:43.24161331 +0000 UTC m=+0.737611902 container start 0b7096a977c3ee7c40e819c22dad607e17e20aa8510c6b87558b1b5c84b03229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 16:29:43 compute-0 podman[284851]: 2025-11-25 16:29:43.382970893 +0000 UTC m=+0.878969495 container attach 0b7096a977c3ee7c40e819c22dad607e17e20aa8510c6b87558b1b5c84b03229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:29:43 compute-0 nova_compute[254092]: 2025-11-25 16:29:43.482 254096 DEBUG nova.network.neutron [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Successfully updated port: 02104fc6-3780-400d-a6c2-577082384680 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:29:43 compute-0 nova_compute[254092]: 2025-11-25 16:29:43.520 254096 DEBUG oslo_concurrency.lockutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "refresh_cache-df0db130-3ffa-4a60-8f7d-fb285a797631" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:29:43 compute-0 nova_compute[254092]: 2025-11-25 16:29:43.520 254096 DEBUG oslo_concurrency.lockutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquired lock "refresh_cache-df0db130-3ffa-4a60-8f7d-fb285a797631" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:29:43 compute-0 nova_compute[254092]: 2025-11-25 16:29:43.521 254096 DEBUG nova.network.neutron [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:29:43 compute-0 nova_compute[254092]: 2025-11-25 16:29:43.624 254096 DEBUG nova.network.neutron [None req-998c2f1d-d361-4651-9ba4-3a4698666d07 e037b47d42a0432bbaabbfe35ef5aa65 f4a8417f33c84768ba645fe8fc4f0876 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Updating instance_info_cache with network_info: [{"id": "0f27a287-0c09-4767-a6cf-a7f4f8870ea1", "address": "fa:16:3e:a8:bd:39", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f27a287-0c", "ovs_interfaceid": "0f27a287-0c09-4767-a6cf-a7f4f8870ea1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:29:43 compute-0 nova_compute[254092]: 2025-11-25 16:29:43.722 254096 DEBUG oslo_concurrency.lockutils [None req-998c2f1d-d361-4651-9ba4-3a4698666d07 e037b47d42a0432bbaabbfe35ef5aa65 f4a8417f33c84768ba645fe8fc4f0876 - - default default] Releasing lock "refresh_cache-7777dd86-925e-4f98-bd68-e38ac540d97b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:29:43 compute-0 nova_compute[254092]: 2025-11-25 16:29:43.723 254096 DEBUG nova.compute.manager [None req-998c2f1d-d361-4651-9ba4-3a4698666d07 e037b47d42a0432bbaabbfe35ef5aa65 f4a8417f33c84768ba645fe8fc4f0876 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Nov 25 16:29:43 compute-0 nova_compute[254092]: 2025-11-25 16:29:43.723 254096 DEBUG nova.compute.manager [None req-998c2f1d-d361-4651-9ba4-3a4698666d07 e037b47d42a0432bbaabbfe35ef5aa65 f4a8417f33c84768ba645fe8fc4f0876 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] network_info to inject: |[{"id": "0f27a287-0c09-4767-a6cf-a7f4f8870ea1", "address": "fa:16:3e:a8:bd:39", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f27a287-0c", "ovs_interfaceid": "0f27a287-0c09-4767-a6cf-a7f4f8870ea1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Nov 25 16:29:43 compute-0 nova_compute[254092]: 2025-11-25 16:29:43.824 254096 DEBUG nova.network.neutron [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:29:43 compute-0 ceph-mon[74985]: pgmap v1262: 321 pgs: 321 active+clean; 405 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 797 KiB/s wr, 108 op/s
Nov 25 16:29:44 compute-0 nova_compute[254092]: 2025-11-25 16:29:44.247 254096 DEBUG nova.compute.manager [req-929140ad-e10f-4fe2-88db-3747bcf96d70 req-6e06e724-28c4-4459-bfb6-d89b544ecf5c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Received event network-changed-02104fc6-3780-400d-a6c2-577082384680 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:29:44 compute-0 nova_compute[254092]: 2025-11-25 16:29:44.248 254096 DEBUG nova.compute.manager [req-929140ad-e10f-4fe2-88db-3747bcf96d70 req-6e06e724-28c4-4459-bfb6-d89b544ecf5c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Refreshing instance network info cache due to event network-changed-02104fc6-3780-400d-a6c2-577082384680. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:29:44 compute-0 nova_compute[254092]: 2025-11-25 16:29:44.249 254096 DEBUG oslo_concurrency.lockutils [req-929140ad-e10f-4fe2-88db-3747bcf96d70 req-6e06e724-28c4-4459-bfb6-d89b544ecf5c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-df0db130-3ffa-4a60-8f7d-fb285a797631" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:29:44 compute-0 nova_compute[254092]: 2025-11-25 16:29:44.250 254096 DEBUG nova.network.neutron [-] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:29:44 compute-0 nova_compute[254092]: 2025-11-25 16:29:44.293 254096 INFO nova.compute.manager [-] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Took 1.84 seconds to deallocate network for instance.
Nov 25 16:29:44 compute-0 nova_compute[254092]: 2025-11-25 16:29:44.328 254096 DEBUG nova.compute.manager [req-801bd465-4dfb-44df-84c6-0b64792b2490 req-145318cb-ec27-4b42-958d-46bdc85cfb9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Received event network-vif-plugged-ab0cfddf-69e0-4494-a106-e603168444a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:29:44 compute-0 nova_compute[254092]: 2025-11-25 16:29:44.328 254096 DEBUG oslo_concurrency.lockutils [req-801bd465-4dfb-44df-84c6-0b64792b2490 req-145318cb-ec27-4b42-958d-46bdc85cfb9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7b9f60af-05f0-43c7-bce7-227cb54ec793-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:29:44 compute-0 nova_compute[254092]: 2025-11-25 16:29:44.329 254096 DEBUG oslo_concurrency.lockutils [req-801bd465-4dfb-44df-84c6-0b64792b2490 req-145318cb-ec27-4b42-958d-46bdc85cfb9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7b9f60af-05f0-43c7-bce7-227cb54ec793-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:29:44 compute-0 nova_compute[254092]: 2025-11-25 16:29:44.329 254096 DEBUG oslo_concurrency.lockutils [req-801bd465-4dfb-44df-84c6-0b64792b2490 req-145318cb-ec27-4b42-958d-46bdc85cfb9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7b9f60af-05f0-43c7-bce7-227cb54ec793-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:29:44 compute-0 nova_compute[254092]: 2025-11-25 16:29:44.329 254096 DEBUG nova.compute.manager [req-801bd465-4dfb-44df-84c6-0b64792b2490 req-145318cb-ec27-4b42-958d-46bdc85cfb9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] No waiting events found dispatching network-vif-plugged-ab0cfddf-69e0-4494-a106-e603168444a4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:29:44 compute-0 nova_compute[254092]: 2025-11-25 16:29:44.329 254096 WARNING nova.compute.manager [req-801bd465-4dfb-44df-84c6-0b64792b2490 req-145318cb-ec27-4b42-958d-46bdc85cfb9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Received unexpected event network-vif-plugged-ab0cfddf-69e0-4494-a106-e603168444a4 for instance with vm_state active and task_state deleting.
Nov 25 16:29:44 compute-0 nova_compute[254092]: 2025-11-25 16:29:44.380 254096 DEBUG oslo_concurrency.lockutils [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:29:44 compute-0 nova_compute[254092]: 2025-11-25 16:29:44.380 254096 DEBUG oslo_concurrency.lockutils [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:29:44 compute-0 nice_mclean[284869]: --> passed data devices: 0 physical, 3 LVM
Nov 25 16:29:44 compute-0 nice_mclean[284869]: --> relative data size: 1.0
Nov 25 16:29:44 compute-0 nice_mclean[284869]: --> All data devices are unavailable
Nov 25 16:29:44 compute-0 systemd[1]: libpod-0b7096a977c3ee7c40e819c22dad607e17e20aa8510c6b87558b1b5c84b03229.scope: Deactivated successfully.
Nov 25 16:29:44 compute-0 systemd[1]: libpod-0b7096a977c3ee7c40e819c22dad607e17e20aa8510c6b87558b1b5c84b03229.scope: Consumed 1.114s CPU time.
Nov 25 16:29:44 compute-0 podman[284898]: 2025-11-25 16:29:44.494569144 +0000 UTC m=+0.024496947 container died 0b7096a977c3ee7c40e819c22dad607e17e20aa8510c6b87558b1b5c84b03229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:29:44 compute-0 nova_compute[254092]: 2025-11-25 16:29:44.530 254096 DEBUG oslo_concurrency.processutils [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:29:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-4db818a4238768ad069149d390f192041fa9003b4fa7e7f10becd8d3ad6f93d9-merged.mount: Deactivated successfully.
Nov 25 16:29:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1263: 321 pgs: 321 active+clean; 405 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 121 KiB/s wr, 89 op/s
Nov 25 16:29:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:29:45 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2237505822' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:29:45 compute-0 nova_compute[254092]: 2025-11-25 16:29:45.022 254096 DEBUG oslo_concurrency.processutils [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:29:45 compute-0 ceph-mon[74985]: pgmap v1263: 321 pgs: 321 active+clean; 405 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 121 KiB/s wr, 89 op/s
Nov 25 16:29:45 compute-0 podman[284898]: 2025-11-25 16:29:45.027181039 +0000 UTC m=+0.557108822 container remove 0b7096a977c3ee7c40e819c22dad607e17e20aa8510c6b87558b1b5c84b03229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_mclean, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:29:45 compute-0 nova_compute[254092]: 2025-11-25 16:29:45.032 254096 DEBUG nova.compute.provider_tree [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:29:45 compute-0 systemd[1]: libpod-conmon-0b7096a977c3ee7c40e819c22dad607e17e20aa8510c6b87558b1b5c84b03229.scope: Deactivated successfully.
Nov 25 16:29:45 compute-0 nova_compute[254092]: 2025-11-25 16:29:45.069 254096 DEBUG nova.network.neutron [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Updating instance_info_cache with network_info: [{"id": "02104fc6-3780-400d-a6c2-577082384680", "address": "fa:16:3e:ff:1b:ad", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02104fc6-37", "ovs_interfaceid": "02104fc6-3780-400d-a6c2-577082384680", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:29:45 compute-0 nova_compute[254092]: 2025-11-25 16:29:45.072 254096 DEBUG nova.scheduler.client.report [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:29:45 compute-0 sudo[284671]: pam_unix(sudo:session): session closed for user root
Nov 25 16:29:45 compute-0 nova_compute[254092]: 2025-11-25 16:29:45.126 254096 DEBUG oslo_concurrency.lockutils [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:29:45 compute-0 sudo[284938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:29:45 compute-0 sudo[284938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:29:45 compute-0 sudo[284938]: pam_unix(sudo:session): session closed for user root
Nov 25 16:29:45 compute-0 nova_compute[254092]: 2025-11-25 16:29:45.188 254096 DEBUG oslo_concurrency.lockutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Releasing lock "refresh_cache-df0db130-3ffa-4a60-8f7d-fb285a797631" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:29:45 compute-0 nova_compute[254092]: 2025-11-25 16:29:45.189 254096 DEBUG nova.compute.manager [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Instance network_info: |[{"id": "02104fc6-3780-400d-a6c2-577082384680", "address": "fa:16:3e:ff:1b:ad", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02104fc6-37", "ovs_interfaceid": "02104fc6-3780-400d-a6c2-577082384680", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:29:45 compute-0 nova_compute[254092]: 2025-11-25 16:29:45.190 254096 DEBUG oslo_concurrency.lockutils [req-929140ad-e10f-4fe2-88db-3747bcf96d70 req-6e06e724-28c4-4459-bfb6-d89b544ecf5c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-df0db130-3ffa-4a60-8f7d-fb285a797631" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:29:45 compute-0 nova_compute[254092]: 2025-11-25 16:29:45.190 254096 DEBUG nova.network.neutron [req-929140ad-e10f-4fe2-88db-3747bcf96d70 req-6e06e724-28c4-4459-bfb6-d89b544ecf5c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Refreshing network info cache for port 02104fc6-3780-400d-a6c2-577082384680 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:29:45 compute-0 nova_compute[254092]: 2025-11-25 16:29:45.193 254096 DEBUG nova.virt.libvirt.driver [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Start _get_guest_xml network_info=[{"id": "02104fc6-3780-400d-a6c2-577082384680", "address": "fa:16:3e:ff:1b:ad", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02104fc6-37", "ovs_interfaceid": "02104fc6-3780-400d-a6c2-577082384680", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:29:45 compute-0 nova_compute[254092]: 2025-11-25 16:29:45.198 254096 WARNING nova.virt.libvirt.driver [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:29:45 compute-0 sudo[284963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:29:45 compute-0 sudo[284963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:29:45 compute-0 nova_compute[254092]: 2025-11-25 16:29:45.204 254096 DEBUG nova.virt.libvirt.host [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:29:45 compute-0 sudo[284963]: pam_unix(sudo:session): session closed for user root
Nov 25 16:29:45 compute-0 nova_compute[254092]: 2025-11-25 16:29:45.207 254096 DEBUG nova.virt.libvirt.host [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:29:45 compute-0 nova_compute[254092]: 2025-11-25 16:29:45.216 254096 DEBUG nova.virt.libvirt.host [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:29:45 compute-0 nova_compute[254092]: 2025-11-25 16:29:45.216 254096 DEBUG nova.virt.libvirt.host [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:29:45 compute-0 nova_compute[254092]: 2025-11-25 16:29:45.217 254096 DEBUG nova.virt.libvirt.driver [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:29:45 compute-0 nova_compute[254092]: 2025-11-25 16:29:45.217 254096 DEBUG nova.virt.hardware [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:29:45 compute-0 nova_compute[254092]: 2025-11-25 16:29:45.218 254096 DEBUG nova.virt.hardware [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:29:45 compute-0 nova_compute[254092]: 2025-11-25 16:29:45.218 254096 DEBUG nova.virt.hardware [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:29:45 compute-0 nova_compute[254092]: 2025-11-25 16:29:45.218 254096 DEBUG nova.virt.hardware [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:29:45 compute-0 nova_compute[254092]: 2025-11-25 16:29:45.218 254096 DEBUG nova.virt.hardware [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:29:45 compute-0 nova_compute[254092]: 2025-11-25 16:29:45.219 254096 DEBUG nova.virt.hardware [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:29:45 compute-0 nova_compute[254092]: 2025-11-25 16:29:45.219 254096 DEBUG nova.virt.hardware [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:29:45 compute-0 nova_compute[254092]: 2025-11-25 16:29:45.219 254096 DEBUG nova.virt.hardware [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:29:45 compute-0 nova_compute[254092]: 2025-11-25 16:29:45.220 254096 DEBUG nova.virt.hardware [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:29:45 compute-0 nova_compute[254092]: 2025-11-25 16:29:45.220 254096 DEBUG nova.virt.hardware [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:29:45 compute-0 nova_compute[254092]: 2025-11-25 16:29:45.220 254096 DEBUG nova.virt.hardware [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:29:45 compute-0 nova_compute[254092]: 2025-11-25 16:29:45.223 254096 DEBUG oslo_concurrency.processutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:29:45 compute-0 nova_compute[254092]: 2025-11-25 16:29:45.247 254096 INFO nova.scheduler.client.report [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Deleted allocations for instance 7b9f60af-05f0-43c7-bce7-227cb54ec793
Nov 25 16:29:45 compute-0 sudo[284988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:29:45 compute-0 sudo[284988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:29:45 compute-0 sudo[284988]: pam_unix(sudo:session): session closed for user root
Nov 25 16:29:45 compute-0 sudo[285014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 16:29:45 compute-0 sudo[285014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:29:45 compute-0 nova_compute[254092]: 2025-11-25 16:29:45.365 254096 DEBUG oslo_concurrency.lockutils [None req-f604863e-aa17-44b7-bd02-ee168612964a 09afd60d3afd4a57a14e7e93a66275f9 f99f039b80564f5684a91f3bc27c2249 - - default default] Lock "7b9f60af-05f0-43c7-bce7-227cb54ec793" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.071s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:29:45 compute-0 nova_compute[254092]: 2025-11-25 16:29:45.588 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:29:45 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/538862028' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:29:45 compute-0 nova_compute[254092]: 2025-11-25 16:29:45.663 254096 DEBUG oslo_concurrency.processutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:29:45 compute-0 nova_compute[254092]: 2025-11-25 16:29:45.683 254096 DEBUG nova.storage.rbd_utils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image df0db130-3ffa-4a60-8f7d-fb285a797631_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:29:45 compute-0 nova_compute[254092]: 2025-11-25 16:29:45.688 254096 DEBUG oslo_concurrency.processutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:29:45 compute-0 podman[285098]: 2025-11-25 16:29:45.695951297 +0000 UTC m=+0.054108534 container create 5ad997e0f8e6ce016f5a0b2c51be0e30d33546b9dfc7b55471f5aadce2b6a992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:29:45 compute-0 podman[285098]: 2025-11-25 16:29:45.662970949 +0000 UTC m=+0.021128206 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:29:45 compute-0 systemd[1]: Started libpod-conmon-5ad997e0f8e6ce016f5a0b2c51be0e30d33546b9dfc7b55471f5aadce2b6a992.scope.
Nov 25 16:29:45 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:29:46 compute-0 podman[285098]: 2025-11-25 16:29:46.047556218 +0000 UTC m=+0.405713465 container init 5ad997e0f8e6ce016f5a0b2c51be0e30d33546b9dfc7b55471f5aadce2b6a992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lalande, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:29:46 compute-0 podman[285098]: 2025-11-25 16:29:46.058147196 +0000 UTC m=+0.416304433 container start 5ad997e0f8e6ce016f5a0b2c51be0e30d33546b9dfc7b55471f5aadce2b6a992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lalande, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 16:29:46 compute-0 jolly_lalande[285135]: 167 167
Nov 25 16:29:46 compute-0 systemd[1]: libpod-5ad997e0f8e6ce016f5a0b2c51be0e30d33546b9dfc7b55471f5aadce2b6a992.scope: Deactivated successfully.
Nov 25 16:29:46 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2237505822' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:29:46 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/538862028' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:29:46 compute-0 podman[285098]: 2025-11-25 16:29:46.085085879 +0000 UTC m=+0.443243136 container attach 5ad997e0f8e6ce016f5a0b2c51be0e30d33546b9dfc7b55471f5aadce2b6a992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 16:29:46 compute-0 podman[285098]: 2025-11-25 16:29:46.087284628 +0000 UTC m=+0.445441865 container died 5ad997e0f8e6ce016f5a0b2c51be0e30d33546b9dfc7b55471f5aadce2b6a992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:29:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:29:46 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1493919523' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:29:46 compute-0 nova_compute[254092]: 2025-11-25 16:29:46.188 254096 DEBUG oslo_concurrency.processutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:29:46 compute-0 nova_compute[254092]: 2025-11-25 16:29:46.190 254096 DEBUG nova.virt.libvirt.vif [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:29:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-175151571',display_name='tempest-AttachInterfacesTestJSON-server-175151571',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-175151571',id=24,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNNpuTPFaMmG67SHL6Vnzpxg++tbzc+w8Hx9mEMHBp+Ppk67spJlgihZsy04EvrT9EPPCFaJqqycyDW1OLzG+Lc4OXw1D0urNXa3GNkPoKTEzwF+kapepgvV/e/LIpx3gw==',key_name='tempest-keypair-2002144633',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-wvdcod6f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:29:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=df0db130-3ffa-4a60-8f7d-fb285a797631,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "02104fc6-3780-400d-a6c2-577082384680", "address": "fa:16:3e:ff:1b:ad", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02104fc6-37", "ovs_interfaceid": "02104fc6-3780-400d-a6c2-577082384680", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:29:46 compute-0 nova_compute[254092]: 2025-11-25 16:29:46.191 254096 DEBUG nova.network.os_vif_util [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "02104fc6-3780-400d-a6c2-577082384680", "address": "fa:16:3e:ff:1b:ad", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02104fc6-37", "ovs_interfaceid": "02104fc6-3780-400d-a6c2-577082384680", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:29:46 compute-0 nova_compute[254092]: 2025-11-25 16:29:46.192 254096 DEBUG nova.network.os_vif_util [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ff:1b:ad,bridge_name='br-int',has_traffic_filtering=True,id=02104fc6-3780-400d-a6c2-577082384680,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02104fc6-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:29:46 compute-0 nova_compute[254092]: 2025-11-25 16:29:46.193 254096 DEBUG nova.objects.instance [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'pci_devices' on Instance uuid df0db130-3ffa-4a60-8f7d-fb285a797631 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:29:46 compute-0 nova_compute[254092]: 2025-11-25 16:29:46.240 254096 DEBUG nova.virt.libvirt.driver [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:29:46 compute-0 nova_compute[254092]:   <uuid>df0db130-3ffa-4a60-8f7d-fb285a797631</uuid>
Nov 25 16:29:46 compute-0 nova_compute[254092]:   <name>instance-00000018</name>
Nov 25 16:29:46 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:29:46 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:29:46 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:29:46 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:       <nova:name>tempest-AttachInterfacesTestJSON-server-175151571</nova:name>
Nov 25 16:29:46 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:29:45</nova:creationTime>
Nov 25 16:29:46 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:29:46 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:29:46 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:29:46 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:29:46 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:29:46 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:29:46 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:29:46 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:29:46 compute-0 nova_compute[254092]:         <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 16:29:46 compute-0 nova_compute[254092]:         <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 16:29:46 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:29:46 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:29:46 compute-0 nova_compute[254092]:         <nova:port uuid="02104fc6-3780-400d-a6c2-577082384680">
Nov 25 16:29:46 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:29:46 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:29:46 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:29:46 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:29:46 compute-0 nova_compute[254092]:     <system>
Nov 25 16:29:46 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:29:46 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:29:46 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:29:46 compute-0 nova_compute[254092]:       <entry name="serial">df0db130-3ffa-4a60-8f7d-fb285a797631</entry>
Nov 25 16:29:46 compute-0 nova_compute[254092]:       <entry name="uuid">df0db130-3ffa-4a60-8f7d-fb285a797631</entry>
Nov 25 16:29:46 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     </system>
Nov 25 16:29:46 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:29:46 compute-0 nova_compute[254092]:   <os>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:   </os>
Nov 25 16:29:46 compute-0 nova_compute[254092]:   <features>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:   </features>
Nov 25 16:29:46 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:29:46 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:29:46 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:29:46 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:29:46 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:29:46 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/df0db130-3ffa-4a60-8f7d-fb285a797631_disk">
Nov 25 16:29:46 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:       </source>
Nov 25 16:29:46 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:29:46 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:29:46 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:29:46 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/df0db130-3ffa-4a60-8f7d-fb285a797631_disk.config">
Nov 25 16:29:46 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:       </source>
Nov 25 16:29:46 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:29:46 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:29:46 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:29:46 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:ff:1b:ad"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:       <target dev="tap02104fc6-37"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:29:46 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/df0db130-3ffa-4a60-8f7d-fb285a797631/console.log" append="off"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     <video>
Nov 25 16:29:46 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     </video>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:29:46 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:29:46 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:29:46 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:29:46 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:29:46 compute-0 nova_compute[254092]: </domain>
Nov 25 16:29:46 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:29:46 compute-0 nova_compute[254092]: 2025-11-25 16:29:46.247 254096 DEBUG nova.compute.manager [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Preparing to wait for external event network-vif-plugged-02104fc6-3780-400d-a6c2-577082384680 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:29:46 compute-0 nova_compute[254092]: 2025-11-25 16:29:46.247 254096 DEBUG oslo_concurrency.lockutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:29:46 compute-0 nova_compute[254092]: 2025-11-25 16:29:46.247 254096 DEBUG oslo_concurrency.lockutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:29:46 compute-0 nova_compute[254092]: 2025-11-25 16:29:46.247 254096 DEBUG oslo_concurrency.lockutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:29:46 compute-0 nova_compute[254092]: 2025-11-25 16:29:46.248 254096 DEBUG nova.virt.libvirt.vif [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:29:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-175151571',display_name='tempest-AttachInterfacesTestJSON-server-175151571',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-175151571',id=24,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNNpuTPFaMmG67SHL6Vnzpxg++tbzc+w8Hx9mEMHBp+Ppk67spJlgihZsy04EvrT9EPPCFaJqqycyDW1OLzG+Lc4OXw1D0urNXa3GNkPoKTEzwF+kapepgvV/e/LIpx3gw==',key_name='tempest-keypair-2002144633',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-wvdcod6f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:29:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=df0db130-3ffa-4a60-8f7d-fb285a797631,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "02104fc6-3780-400d-a6c2-577082384680", "address": "fa:16:3e:ff:1b:ad", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02104fc6-37", "ovs_interfaceid": "02104fc6-3780-400d-a6c2-577082384680", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:29:46 compute-0 nova_compute[254092]: 2025-11-25 16:29:46.249 254096 DEBUG nova.network.os_vif_util [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "02104fc6-3780-400d-a6c2-577082384680", "address": "fa:16:3e:ff:1b:ad", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02104fc6-37", "ovs_interfaceid": "02104fc6-3780-400d-a6c2-577082384680", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:29:46 compute-0 nova_compute[254092]: 2025-11-25 16:29:46.250 254096 DEBUG nova.network.os_vif_util [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ff:1b:ad,bridge_name='br-int',has_traffic_filtering=True,id=02104fc6-3780-400d-a6c2-577082384680,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02104fc6-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:29:46 compute-0 nova_compute[254092]: 2025-11-25 16:29:46.250 254096 DEBUG os_vif [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:1b:ad,bridge_name='br-int',has_traffic_filtering=True,id=02104fc6-3780-400d-a6c2-577082384680,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02104fc6-37') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:29:46 compute-0 nova_compute[254092]: 2025-11-25 16:29:46.254 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:46 compute-0 nova_compute[254092]: 2025-11-25 16:29:46.255 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:29:46 compute-0 nova_compute[254092]: 2025-11-25 16:29:46.255 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:29:46 compute-0 nova_compute[254092]: 2025-11-25 16:29:46.259 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:46 compute-0 nova_compute[254092]: 2025-11-25 16:29:46.259 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap02104fc6-37, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:29:46 compute-0 nova_compute[254092]: 2025-11-25 16:29:46.260 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap02104fc6-37, col_values=(('external_ids', {'iface-id': '02104fc6-3780-400d-a6c2-577082384680', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ff:1b:ad', 'vm-uuid': 'df0db130-3ffa-4a60-8f7d-fb285a797631'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:29:46 compute-0 nova_compute[254092]: 2025-11-25 16:29:46.262 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:46 compute-0 nova_compute[254092]: 2025-11-25 16:29:46.264 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:29:46 compute-0 NetworkManager[48891]: <info>  [1764088186.2647] manager: (tap02104fc6-37): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Nov 25 16:29:46 compute-0 nova_compute[254092]: 2025-11-25 16:29:46.271 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:46 compute-0 nova_compute[254092]: 2025-11-25 16:29:46.272 254096 INFO os_vif [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:1b:ad,bridge_name='br-int',has_traffic_filtering=True,id=02104fc6-3780-400d-a6c2-577082384680,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02104fc6-37')
Nov 25 16:29:46 compute-0 nova_compute[254092]: 2025-11-25 16:29:46.381 254096 DEBUG nova.virt.libvirt.driver [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:29:46 compute-0 nova_compute[254092]: 2025-11-25 16:29:46.382 254096 DEBUG nova.virt.libvirt.driver [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:29:46 compute-0 nova_compute[254092]: 2025-11-25 16:29:46.382 254096 DEBUG nova.virt.libvirt.driver [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No VIF found with MAC fa:16:3e:ff:1b:ad, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:29:46 compute-0 nova_compute[254092]: 2025-11-25 16:29:46.383 254096 INFO nova.virt.libvirt.driver [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Using config drive
Nov 25 16:29:46 compute-0 nova_compute[254092]: 2025-11-25 16:29:46.405 254096 DEBUG nova.storage.rbd_utils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image df0db130-3ffa-4a60-8f7d-fb285a797631_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:29:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-023e75c84ae4ca7d6c292fc67163766d3be802501ec03bf569e41b6f9faa2664-merged.mount: Deactivated successfully.
Nov 25 16:29:46 compute-0 nova_compute[254092]: 2025-11-25 16:29:46.685 254096 DEBUG nova.network.neutron [req-929140ad-e10f-4fe2-88db-3747bcf96d70 req-6e06e724-28c4-4459-bfb6-d89b544ecf5c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Updated VIF entry in instance network info cache for port 02104fc6-3780-400d-a6c2-577082384680. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:29:46 compute-0 nova_compute[254092]: 2025-11-25 16:29:46.685 254096 DEBUG nova.network.neutron [req-929140ad-e10f-4fe2-88db-3747bcf96d70 req-6e06e724-28c4-4459-bfb6-d89b544ecf5c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Updating instance_info_cache with network_info: [{"id": "02104fc6-3780-400d-a6c2-577082384680", "address": "fa:16:3e:ff:1b:ad", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02104fc6-37", "ovs_interfaceid": "02104fc6-3780-400d-a6c2-577082384680", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:29:46 compute-0 nova_compute[254092]: 2025-11-25 16:29:46.699 254096 DEBUG oslo_concurrency.lockutils [req-929140ad-e10f-4fe2-88db-3747bcf96d70 req-6e06e724-28c4-4459-bfb6-d89b544ecf5c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-df0db130-3ffa-4a60-8f7d-fb285a797631" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:29:46 compute-0 podman[285098]: 2025-11-25 16:29:46.763083508 +0000 UTC m=+1.121240745 container remove 5ad997e0f8e6ce016f5a0b2c51be0e30d33546b9dfc7b55471f5aadce2b6a992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lalande, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:29:46 compute-0 nova_compute[254092]: 2025-11-25 16:29:46.813 254096 DEBUG nova.compute.manager [req-05470c1b-c0f0-40bd-880e-68f0b2656644 req-a1d1d69b-7d12-45f0-bdfd-d9646528fc9f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Received event network-vif-deleted-ab0cfddf-69e0-4494-a106-e603168444a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:29:46 compute-0 nova_compute[254092]: 2025-11-25 16:29:46.846 254096 INFO nova.virt.libvirt.driver [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Creating config drive at /var/lib/nova/instances/df0db130-3ffa-4a60-8f7d-fb285a797631/disk.config
Nov 25 16:29:46 compute-0 nova_compute[254092]: 2025-11-25 16:29:46.852 254096 DEBUG oslo_concurrency.processutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/df0db130-3ffa-4a60-8f7d-fb285a797631/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcybppv7r execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:29:46 compute-0 systemd[1]: libpod-conmon-5ad997e0f8e6ce016f5a0b2c51be0e30d33546b9dfc7b55471f5aadce2b6a992.scope: Deactivated successfully.
Nov 25 16:29:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1264: 321 pgs: 321 active+clean; 372 MiB data, 500 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.9 MiB/s wr, 144 op/s
Nov 25 16:29:46 compute-0 podman[285205]: 2025-11-25 16:29:46.973038448 +0000 UTC m=+0.050791302 container create 38e71f52efee93b7e98a49f86b818b8fce2864c9f591347ba6be1d4eec0169b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_leakey, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:29:46 compute-0 nova_compute[254092]: 2025-11-25 16:29:46.993 254096 DEBUG oslo_concurrency.processutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/df0db130-3ffa-4a60-8f7d-fb285a797631/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcybppv7r" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:29:47 compute-0 systemd[1]: Started libpod-conmon-38e71f52efee93b7e98a49f86b818b8fce2864c9f591347ba6be1d4eec0169b7.scope.
Nov 25 16:29:47 compute-0 nova_compute[254092]: 2025-11-25 16:29:47.031 254096 DEBUG nova.storage.rbd_utils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image df0db130-3ffa-4a60-8f7d-fb285a797631_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:29:47 compute-0 nova_compute[254092]: 2025-11-25 16:29:47.036 254096 DEBUG oslo_concurrency.processutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/df0db130-3ffa-4a60-8f7d-fb285a797631/disk.config df0db130-3ffa-4a60-8f7d-fb285a797631_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:29:47 compute-0 podman[285205]: 2025-11-25 16:29:46.949843046 +0000 UTC m=+0.027595930 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:29:47 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:29:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ff380b5b8b60e11e23f1adcf5e17358b92be7e15699e84bacf696721ffbcfa2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:29:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ff380b5b8b60e11e23f1adcf5e17358b92be7e15699e84bacf696721ffbcfa2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:29:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ff380b5b8b60e11e23f1adcf5e17358b92be7e15699e84bacf696721ffbcfa2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:29:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ff380b5b8b60e11e23f1adcf5e17358b92be7e15699e84bacf696721ffbcfa2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:29:47 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1493919523' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:29:47 compute-0 ceph-mon[74985]: pgmap v1264: 321 pgs: 321 active+clean; 372 MiB data, 500 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.9 MiB/s wr, 144 op/s
Nov 25 16:29:47 compute-0 podman[285205]: 2025-11-25 16:29:47.08894785 +0000 UTC m=+0.166700704 container init 38e71f52efee93b7e98a49f86b818b8fce2864c9f591347ba6be1d4eec0169b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_leakey, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:29:47 compute-0 podman[285205]: 2025-11-25 16:29:47.097021819 +0000 UTC m=+0.174774673 container start 38e71f52efee93b7e98a49f86b818b8fce2864c9f591347ba6be1d4eec0169b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 16:29:47 compute-0 podman[285205]: 2025-11-25 16:29:47.11395499 +0000 UTC m=+0.191707864 container attach 38e71f52efee93b7e98a49f86b818b8fce2864c9f591347ba6be1d4eec0169b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_leakey, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 16:29:47 compute-0 nova_compute[254092]: 2025-11-25 16:29:47.248 254096 DEBUG oslo_concurrency.processutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/df0db130-3ffa-4a60-8f7d-fb285a797631/disk.config df0db130-3ffa-4a60-8f7d-fb285a797631_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.212s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:29:47 compute-0 nova_compute[254092]: 2025-11-25 16:29:47.250 254096 INFO nova.virt.libvirt.driver [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Deleting local config drive /var/lib/nova/instances/df0db130-3ffa-4a60-8f7d-fb285a797631/disk.config because it was imported into RBD.
Nov 25 16:29:47 compute-0 kernel: tap02104fc6-37: entered promiscuous mode
Nov 25 16:29:47 compute-0 ovn_controller[153477]: 2025-11-25T16:29:47Z|00084|binding|INFO|Claiming lport 02104fc6-3780-400d-a6c2-577082384680 for this chassis.
Nov 25 16:29:47 compute-0 ovn_controller[153477]: 2025-11-25T16:29:47Z|00085|binding|INFO|02104fc6-3780-400d-a6c2-577082384680: Claiming fa:16:3e:ff:1b:ad 10.100.0.14
Nov 25 16:29:47 compute-0 nova_compute[254092]: 2025-11-25 16:29:47.307 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:47 compute-0 NetworkManager[48891]: <info>  [1764088187.3148] manager: (tap02104fc6-37): new Tun device (/org/freedesktop/NetworkManager/Devices/57)
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.317 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:1b:ad 10.100.0.14'], port_security=['fa:16:3e:ff:1b:ad 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'df0db130-3ffa-4a60-8f7d-fb285a797631', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36fd407edf16424e96041f57fccb9e2b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '55a7690b-4aae-4eb8-9614-a3e59161db74', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a0fe150-4b32-4abf-ad06-055b2d97facb, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=02104fc6-3780-400d-a6c2-577082384680) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.319 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 02104fc6-3780-400d-a6c2-577082384680 in datapath 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d bound to our chassis
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.321 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d
Nov 25 16:29:47 compute-0 ovn_controller[153477]: 2025-11-25T16:29:47Z|00086|binding|INFO|Setting lport 02104fc6-3780-400d-a6c2-577082384680 ovn-installed in OVS
Nov 25 16:29:47 compute-0 ovn_controller[153477]: 2025-11-25T16:29:47Z|00087|binding|INFO|Setting lport 02104fc6-3780-400d-a6c2-577082384680 up in Southbound
Nov 25 16:29:47 compute-0 nova_compute[254092]: 2025-11-25 16:29:47.331 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:47 compute-0 nova_compute[254092]: 2025-11-25 16:29:47.333 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.343 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6e4894ab-fc3c-4f8b-96dc-40b2a7640c23]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.344 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap52e7d5b9-01 in ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.346 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap52e7d5b9-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.346 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d5a9e466-dc3c-42b7-9e43-0b4e9a03fad4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.347 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[13e6bdc1-dc56-48e4-8e3c-1f9a991d0e8e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:47 compute-0 systemd-udevd[285277]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:29:47 compute-0 systemd-machined[216343]: New machine qemu-26-instance-00000018.
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.361 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[7da459b5-45f2-491d-a47a-ab041135dfbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:47 compute-0 NetworkManager[48891]: <info>  [1764088187.3694] device (tap02104fc6-37): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:29:47 compute-0 NetworkManager[48891]: <info>  [1764088187.3705] device (tap02104fc6-37): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:29:47 compute-0 systemd[1]: Started Virtual Machine qemu-26-instance-00000018.
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.376 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[15bd11f2-2af3-487d-ad65-dde434c5b70c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.414 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[cfd3ce42-4a31-41e1-90b7-5034ac9a3c4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:47 compute-0 NetworkManager[48891]: <info>  [1764088187.4212] manager: (tap52e7d5b9-00): new Veth device (/org/freedesktop/NetworkManager/Devices/58)
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.420 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[07f69700-b104-4a5b-a9d6-a109358698c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:47 compute-0 nova_compute[254092]: 2025-11-25 16:29:47.449 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.458 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[06a580b6-44f7-4920-b529-784214bbe5da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.461 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[2bd22f08-7a36-4701-b578-b8891f64d1bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:47 compute-0 NetworkManager[48891]: <info>  [1764088187.4895] device (tap52e7d5b9-00): carrier: link connected
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.496 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c6f7981a-389d-4914-b758-70ef5acfeafa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.520 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[74b48dd8-d9ff-4025-af59-bfffc4cee7dc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52e7d5b9-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:97:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464508, 'reachable_time': 42591, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285309, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.544 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0fd308b1-f017-4e7b-907c-d9b5dddfd141]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9f:97ee'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464508, 'tstamp': 464508}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285310, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.572 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a2c8b727-6264-4cdf-a74d-ee8de4cbc369]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52e7d5b9-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:97:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464508, 'reachable_time': 42591, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 285311, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.607 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0a949216-a5b2-422f-9559-62eb0c3650b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.678 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f6a4ebc1-ec17-4c4c-a5fa-82c0cbb91574]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.679 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52e7d5b9-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.679 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.679 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52e7d5b9-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:29:47 compute-0 NetworkManager[48891]: <info>  [1764088187.6821] manager: (tap52e7d5b9-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Nov 25 16:29:47 compute-0 kernel: tap52e7d5b9-00: entered promiscuous mode
Nov 25 16:29:47 compute-0 nova_compute[254092]: 2025-11-25 16:29:47.681 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.685 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap52e7d5b9-00, col_values=(('external_ids', {'iface-id': 'af5d2618-f326-4aaf-9395-f47548ac108b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:29:47 compute-0 nova_compute[254092]: 2025-11-25 16:29:47.686 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:47 compute-0 ovn_controller[153477]: 2025-11-25T16:29:47Z|00088|binding|INFO|Releasing lport af5d2618-f326-4aaf-9395-f47548ac108b from this chassis (sb_readonly=0)
Nov 25 16:29:47 compute-0 nova_compute[254092]: 2025-11-25 16:29:47.688 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.689 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/52e7d5b9-0570-4e5c-b3da-9dfcb924b83d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/52e7d5b9-0570-4e5c-b3da-9dfcb924b83d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.704 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[00d7e7c0-4ced-4079-87f1-55f312bce78b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:47 compute-0 nova_compute[254092]: 2025-11-25 16:29:47.705 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.706 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/52e7d5b9-0570-4e5c-b3da-9dfcb924b83d.pid.haproxy
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:29:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:47.706 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'env', 'PROCESS_TAG=haproxy-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/52e7d5b9-0570-4e5c-b3da-9dfcb924b83d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:29:47 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]: {
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:     "0": [
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:         {
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:             "devices": [
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:                 "/dev/loop3"
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:             ],
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:             "lv_name": "ceph_lv0",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:             "lv_size": "21470642176",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:             "name": "ceph_lv0",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:             "tags": {
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:                 "ceph.cluster_name": "ceph",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:                 "ceph.crush_device_class": "",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:                 "ceph.encrypted": "0",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:                 "ceph.osd_id": "0",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:                 "ceph.type": "block",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:                 "ceph.vdo": "0"
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:             },
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:             "type": "block",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:             "vg_name": "ceph_vg0"
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:         }
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:     ],
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:     "1": [
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:         {
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:             "devices": [
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:                 "/dev/loop4"
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:             ],
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:             "lv_name": "ceph_lv1",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:             "lv_size": "21470642176",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:             "name": "ceph_lv1",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:             "tags": {
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:                 "ceph.cluster_name": "ceph",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:                 "ceph.crush_device_class": "",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:                 "ceph.encrypted": "0",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:                 "ceph.osd_id": "1",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:                 "ceph.type": "block",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:                 "ceph.vdo": "0"
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:             },
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:             "type": "block",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:             "vg_name": "ceph_vg1"
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:         }
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:     ],
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:     "2": [
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:         {
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:             "devices": [
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:                 "/dev/loop5"
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:             ],
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:             "lv_name": "ceph_lv2",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:             "lv_size": "21470642176",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:             "name": "ceph_lv2",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:             "tags": {
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:                 "ceph.cluster_name": "ceph",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:                 "ceph.crush_device_class": "",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:                 "ceph.encrypted": "0",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:                 "ceph.osd_id": "2",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:                 "ceph.type": "block",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:                 "ceph.vdo": "0"
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:             },
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:             "type": "block",
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:             "vg_name": "ceph_vg2"
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:         }
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]:     ]
Nov 25 16:29:47 compute-0 flamboyant_leakey[285237]: }
Nov 25 16:29:47 compute-0 systemd[1]: libpod-38e71f52efee93b7e98a49f86b818b8fce2864c9f591347ba6be1d4eec0169b7.scope: Deactivated successfully.
Nov 25 16:29:47 compute-0 podman[285327]: 2025-11-25 16:29:47.990111168 +0000 UTC m=+0.023698046 container died 38e71f52efee93b7e98a49f86b818b8fce2864c9f591347ba6be1d4eec0169b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 16:29:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ff380b5b8b60e11e23f1adcf5e17358b92be7e15699e84bacf696721ffbcfa2-merged.mount: Deactivated successfully.
Nov 25 16:29:48 compute-0 nova_compute[254092]: 2025-11-25 16:29:48.030 254096 DEBUG nova.compute.manager [req-10828663-272c-44ae-8ca5-8af4bd4af721 req-f62894be-0f0a-473f-b66e-9c9e0a65ac79 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Received event network-vif-plugged-02104fc6-3780-400d-a6c2-577082384680 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:29:48 compute-0 nova_compute[254092]: 2025-11-25 16:29:48.030 254096 DEBUG oslo_concurrency.lockutils [req-10828663-272c-44ae-8ca5-8af4bd4af721 req-f62894be-0f0a-473f-b66e-9c9e0a65ac79 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:29:48 compute-0 nova_compute[254092]: 2025-11-25 16:29:48.030 254096 DEBUG oslo_concurrency.lockutils [req-10828663-272c-44ae-8ca5-8af4bd4af721 req-f62894be-0f0a-473f-b66e-9c9e0a65ac79 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:29:48 compute-0 nova_compute[254092]: 2025-11-25 16:29:48.031 254096 DEBUG oslo_concurrency.lockutils [req-10828663-272c-44ae-8ca5-8af4bd4af721 req-f62894be-0f0a-473f-b66e-9c9e0a65ac79 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:29:48 compute-0 nova_compute[254092]: 2025-11-25 16:29:48.031 254096 DEBUG nova.compute.manager [req-10828663-272c-44ae-8ca5-8af4bd4af721 req-f62894be-0f0a-473f-b66e-9c9e0a65ac79 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Processing event network-vif-plugged-02104fc6-3780-400d-a6c2-577082384680 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:29:48 compute-0 podman[285327]: 2025-11-25 16:29:48.063512094 +0000 UTC m=+0.097098932 container remove 38e71f52efee93b7e98a49f86b818b8fce2864c9f591347ba6be1d4eec0169b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:29:48 compute-0 systemd[1]: libpod-conmon-38e71f52efee93b7e98a49f86b818b8fce2864c9f591347ba6be1d4eec0169b7.scope: Deactivated successfully.
Nov 25 16:29:48 compute-0 sudo[285014]: pam_unix(sudo:session): session closed for user root
Nov 25 16:29:48 compute-0 podman[285358]: 2025-11-25 16:29:48.111868999 +0000 UTC m=+0.075926946 container create 9e5526b58a6d16fb232bbcd983847cd8d041205bbd4e455cffa4a67f4a7afd2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 25 16:29:48 compute-0 podman[285358]: 2025-11-25 16:29:48.060612935 +0000 UTC m=+0.024670912 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:29:48 compute-0 systemd[1]: Started libpod-conmon-9e5526b58a6d16fb232bbcd983847cd8d041205bbd4e455cffa4a67f4a7afd2d.scope.
Nov 25 16:29:48 compute-0 sudo[285369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:29:48 compute-0 sudo[285369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:29:48 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:29:48 compute-0 sudo[285369]: pam_unix(sudo:session): session closed for user root
Nov 25 16:29:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fe048199884e7d28c39687c78a39d5bf8b12cab03e5bd84735091209e7c9cff/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:29:48 compute-0 podman[285358]: 2025-11-25 16:29:48.200746926 +0000 UTC m=+0.164804923 container init 9e5526b58a6d16fb232bbcd983847cd8d041205bbd4e455cffa4a67f4a7afd2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 16:29:48 compute-0 podman[285358]: 2025-11-25 16:29:48.208050185 +0000 UTC m=+0.172108142 container start 9e5526b58a6d16fb232bbcd983847cd8d041205bbd4e455cffa4a67f4a7afd2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:29:48 compute-0 neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d[285397]: [NOTICE]   (285417) : New worker (285428) forked
Nov 25 16:29:48 compute-0 neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d[285397]: [NOTICE]   (285417) : Loading success.
Nov 25 16:29:48 compute-0 sudo[285402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:29:48 compute-0 sudo[285402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:29:48 compute-0 sudo[285402]: pam_unix(sudo:session): session closed for user root
Nov 25 16:29:48 compute-0 sudo[285439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:29:48 compute-0 sudo[285439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:29:48 compute-0 sudo[285439]: pam_unix(sudo:session): session closed for user root
Nov 25 16:29:48 compute-0 sudo[285464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 16:29:48 compute-0 sudo[285464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:29:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Nov 25 16:29:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Nov 25 16:29:48 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Nov 25 16:29:48 compute-0 nova_compute[254092]: 2025-11-25 16:29:48.518 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088188.51757, df0db130-3ffa-4a60-8f7d-fb285a797631 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:29:48 compute-0 nova_compute[254092]: 2025-11-25 16:29:48.520 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] VM Started (Lifecycle Event)
Nov 25 16:29:48 compute-0 nova_compute[254092]: 2025-11-25 16:29:48.523 254096 DEBUG nova.compute.manager [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:29:48 compute-0 nova_compute[254092]: 2025-11-25 16:29:48.527 254096 DEBUG nova.virt.libvirt.driver [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:29:48 compute-0 nova_compute[254092]: 2025-11-25 16:29:48.530 254096 INFO nova.virt.libvirt.driver [-] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Instance spawned successfully.
Nov 25 16:29:48 compute-0 nova_compute[254092]: 2025-11-25 16:29:48.530 254096 DEBUG nova.virt.libvirt.driver [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:29:48 compute-0 nova_compute[254092]: 2025-11-25 16:29:48.548 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:29:48 compute-0 nova_compute[254092]: 2025-11-25 16:29:48.555 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:29:48 compute-0 nova_compute[254092]: 2025-11-25 16:29:48.561 254096 DEBUG nova.virt.libvirt.driver [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:29:48 compute-0 nova_compute[254092]: 2025-11-25 16:29:48.562 254096 DEBUG nova.virt.libvirt.driver [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:29:48 compute-0 nova_compute[254092]: 2025-11-25 16:29:48.562 254096 DEBUG nova.virt.libvirt.driver [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:29:48 compute-0 nova_compute[254092]: 2025-11-25 16:29:48.562 254096 DEBUG nova.virt.libvirt.driver [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:29:48 compute-0 nova_compute[254092]: 2025-11-25 16:29:48.563 254096 DEBUG nova.virt.libvirt.driver [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:29:48 compute-0 nova_compute[254092]: 2025-11-25 16:29:48.563 254096 DEBUG nova.virt.libvirt.driver [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:29:48 compute-0 nova_compute[254092]: 2025-11-25 16:29:48.596 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:29:48 compute-0 nova_compute[254092]: 2025-11-25 16:29:48.596 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088188.519018, df0db130-3ffa-4a60-8f7d-fb285a797631 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:29:48 compute-0 nova_compute[254092]: 2025-11-25 16:29:48.597 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] VM Paused (Lifecycle Event)
Nov 25 16:29:48 compute-0 nova_compute[254092]: 2025-11-25 16:29:48.630 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:29:48 compute-0 nova_compute[254092]: 2025-11-25 16:29:48.639 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088188.5262735, df0db130-3ffa-4a60-8f7d-fb285a797631 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:29:48 compute-0 nova_compute[254092]: 2025-11-25 16:29:48.640 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] VM Resumed (Lifecycle Event)
Nov 25 16:29:48 compute-0 nova_compute[254092]: 2025-11-25 16:29:48.647 254096 INFO nova.compute.manager [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Took 7.75 seconds to spawn the instance on the hypervisor.
Nov 25 16:29:48 compute-0 nova_compute[254092]: 2025-11-25 16:29:48.648 254096 DEBUG nova.compute.manager [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:29:48 compute-0 nova_compute[254092]: 2025-11-25 16:29:48.665 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:29:48 compute-0 nova_compute[254092]: 2025-11-25 16:29:48.671 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:29:48 compute-0 nova_compute[254092]: 2025-11-25 16:29:48.707 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:29:48 compute-0 nova_compute[254092]: 2025-11-25 16:29:48.736 254096 INFO nova.compute.manager [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Took 9.77 seconds to build instance.
Nov 25 16:29:48 compute-0 nova_compute[254092]: 2025-11-25 16:29:48.755 254096 DEBUG oslo_concurrency.lockutils [None req-81400de1-d8f3-4358-8780-1323243c05cc c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "df0db130-3ffa-4a60-8f7d-fb285a797631" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:29:48 compute-0 podman[285569]: 2025-11-25 16:29:48.773167023 +0000 UTC m=+0.047707079 container create 0c7bb1f6c8a7abf1d60e7429e3ef4b824ccd51eca59cc2bf1eba858a856d117d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hamilton, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 16:29:48 compute-0 systemd[1]: Started libpod-conmon-0c7bb1f6c8a7abf1d60e7429e3ef4b824ccd51eca59cc2bf1eba858a856d117d.scope.
Nov 25 16:29:48 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:29:48 compute-0 podman[285569]: 2025-11-25 16:29:48.749707685 +0000 UTC m=+0.024247771 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:29:48 compute-0 podman[285569]: 2025-11-25 16:29:48.849125538 +0000 UTC m=+0.123665614 container init 0c7bb1f6c8a7abf1d60e7429e3ef4b824ccd51eca59cc2bf1eba858a856d117d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hamilton, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:29:48 compute-0 podman[285569]: 2025-11-25 16:29:48.857149467 +0000 UTC m=+0.131689523 container start 0c7bb1f6c8a7abf1d60e7429e3ef4b824ccd51eca59cc2bf1eba858a856d117d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:29:48 compute-0 podman[285569]: 2025-11-25 16:29:48.860957671 +0000 UTC m=+0.135497727 container attach 0c7bb1f6c8a7abf1d60e7429e3ef4b824ccd51eca59cc2bf1eba858a856d117d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:29:48 compute-0 condescending_hamilton[285586]: 167 167
Nov 25 16:29:48 compute-0 systemd[1]: libpod-0c7bb1f6c8a7abf1d60e7429e3ef4b824ccd51eca59cc2bf1eba858a856d117d.scope: Deactivated successfully.
Nov 25 16:29:48 compute-0 podman[285569]: 2025-11-25 16:29:48.869779431 +0000 UTC m=+0.144319477 container died 0c7bb1f6c8a7abf1d60e7429e3ef4b824ccd51eca59cc2bf1eba858a856d117d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 16:29:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-6fafadcdae995182c2a9d2d610769ba4d474fcfbe818019a6842b5a040835035-merged.mount: Deactivated successfully.
Nov 25 16:29:48 compute-0 podman[285569]: 2025-11-25 16:29:48.909187162 +0000 UTC m=+0.183727208 container remove 0c7bb1f6c8a7abf1d60e7429e3ef4b824ccd51eca59cc2bf1eba858a856d117d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:29:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1266: 321 pgs: 321 active+clean; 372 MiB data, 500 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 149 op/s
Nov 25 16:29:48 compute-0 systemd[1]: libpod-conmon-0c7bb1f6c8a7abf1d60e7429e3ef4b824ccd51eca59cc2bf1eba858a856d117d.scope: Deactivated successfully.
Nov 25 16:29:49 compute-0 podman[285610]: 2025-11-25 16:29:49.124482848 +0000 UTC m=+0.053468005 container create 7749bca3df2d3d522e3138afdc2eb803e1cafc472a4d03736ad03b669c9578c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_agnesi, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:29:49 compute-0 systemd[1]: Started libpod-conmon-7749bca3df2d3d522e3138afdc2eb803e1cafc472a4d03736ad03b669c9578c5.scope.
Nov 25 16:29:49 compute-0 podman[285610]: 2025-11-25 16:29:49.104545836 +0000 UTC m=+0.033531013 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:29:49 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:29:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e016ac9e270bab4a40a6d935b27684cee00c11fb2099eef5073313624667c4b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:29:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e016ac9e270bab4a40a6d935b27684cee00c11fb2099eef5073313624667c4b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:29:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e016ac9e270bab4a40a6d935b27684cee00c11fb2099eef5073313624667c4b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:29:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e016ac9e270bab4a40a6d935b27684cee00c11fb2099eef5073313624667c4b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:29:49 compute-0 podman[285610]: 2025-11-25 16:29:49.223073869 +0000 UTC m=+0.152059046 container init 7749bca3df2d3d522e3138afdc2eb803e1cafc472a4d03736ad03b669c9578c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_agnesi, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 16:29:49 compute-0 podman[285610]: 2025-11-25 16:29:49.230527632 +0000 UTC m=+0.159512789 container start 7749bca3df2d3d522e3138afdc2eb803e1cafc472a4d03736ad03b669c9578c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_agnesi, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 16:29:49 compute-0 podman[285610]: 2025-11-25 16:29:49.249100146 +0000 UTC m=+0.178085333 container attach 7749bca3df2d3d522e3138afdc2eb803e1cafc472a4d03736ad03b669c9578c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_agnesi, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 16:29:49 compute-0 ceph-mon[74985]: osdmap e161: 3 total, 3 up, 3 in
Nov 25 16:29:49 compute-0 ceph-mon[74985]: pgmap v1266: 321 pgs: 321 active+clean; 372 MiB data, 500 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 149 op/s
Nov 25 16:29:49 compute-0 nova_compute[254092]: 2025-11-25 16:29:49.994 254096 INFO nova.compute.manager [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Rebuilding instance
Nov 25 16:29:50 compute-0 nova_compute[254092]: 2025-11-25 16:29:50.116 254096 DEBUG nova.compute.manager [req-9d12a423-613e-437d-87ad-57d8e8608999 req-1246cfde-65ac-4bab-92df-75f660026163 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Received event network-vif-plugged-02104fc6-3780-400d-a6c2-577082384680 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:29:50 compute-0 nova_compute[254092]: 2025-11-25 16:29:50.117 254096 DEBUG oslo_concurrency.lockutils [req-9d12a423-613e-437d-87ad-57d8e8608999 req-1246cfde-65ac-4bab-92df-75f660026163 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:29:50 compute-0 nova_compute[254092]: 2025-11-25 16:29:50.117 254096 DEBUG oslo_concurrency.lockutils [req-9d12a423-613e-437d-87ad-57d8e8608999 req-1246cfde-65ac-4bab-92df-75f660026163 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:29:50 compute-0 nova_compute[254092]: 2025-11-25 16:29:50.117 254096 DEBUG oslo_concurrency.lockutils [req-9d12a423-613e-437d-87ad-57d8e8608999 req-1246cfde-65ac-4bab-92df-75f660026163 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:29:50 compute-0 nova_compute[254092]: 2025-11-25 16:29:50.118 254096 DEBUG nova.compute.manager [req-9d12a423-613e-437d-87ad-57d8e8608999 req-1246cfde-65ac-4bab-92df-75f660026163 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] No waiting events found dispatching network-vif-plugged-02104fc6-3780-400d-a6c2-577082384680 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:29:50 compute-0 nova_compute[254092]: 2025-11-25 16:29:50.118 254096 WARNING nova.compute.manager [req-9d12a423-613e-437d-87ad-57d8e8608999 req-1246cfde-65ac-4bab-92df-75f660026163 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Received unexpected event network-vif-plugged-02104fc6-3780-400d-a6c2-577082384680 for instance with vm_state active and task_state None.
Nov 25 16:29:50 compute-0 nova_compute[254092]: 2025-11-25 16:29:50.249 254096 DEBUG nova.objects.instance [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 3375e096-321c-459b-8b6a-e085bb62872f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:29:50 compute-0 nova_compute[254092]: 2025-11-25 16:29:50.268 254096 DEBUG nova.compute.manager [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:29:50 compute-0 nova_compute[254092]: 2025-11-25 16:29:50.331 254096 DEBUG nova.objects.instance [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'pci_requests' on Instance uuid 3375e096-321c-459b-8b6a-e085bb62872f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:29:50 compute-0 nova_compute[254092]: 2025-11-25 16:29:50.351 254096 DEBUG nova.objects.instance [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3375e096-321c-459b-8b6a-e085bb62872f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:29:50 compute-0 nova_compute[254092]: 2025-11-25 16:29:50.360 254096 DEBUG nova.objects.instance [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'resources' on Instance uuid 3375e096-321c-459b-8b6a-e085bb62872f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:29:50 compute-0 nova_compute[254092]: 2025-11-25 16:29:50.373 254096 DEBUG nova.objects.instance [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'migration_context' on Instance uuid 3375e096-321c-459b-8b6a-e085bb62872f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:29:50 compute-0 nova_compute[254092]: 2025-11-25 16:29:50.383 254096 DEBUG nova.objects.instance [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 16:29:50 compute-0 nova_compute[254092]: 2025-11-25 16:29:50.387 254096 DEBUG nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 16:29:50 compute-0 optimistic_agnesi[285626]: {
Nov 25 16:29:50 compute-0 optimistic_agnesi[285626]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 16:29:50 compute-0 optimistic_agnesi[285626]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:29:50 compute-0 optimistic_agnesi[285626]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 16:29:50 compute-0 optimistic_agnesi[285626]:         "osd_id": 1,
Nov 25 16:29:50 compute-0 optimistic_agnesi[285626]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:29:50 compute-0 optimistic_agnesi[285626]:         "type": "bluestore"
Nov 25 16:29:50 compute-0 optimistic_agnesi[285626]:     },
Nov 25 16:29:50 compute-0 optimistic_agnesi[285626]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 16:29:50 compute-0 optimistic_agnesi[285626]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:29:50 compute-0 optimistic_agnesi[285626]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 16:29:50 compute-0 optimistic_agnesi[285626]:         "osd_id": 2,
Nov 25 16:29:50 compute-0 optimistic_agnesi[285626]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:29:50 compute-0 optimistic_agnesi[285626]:         "type": "bluestore"
Nov 25 16:29:50 compute-0 optimistic_agnesi[285626]:     },
Nov 25 16:29:50 compute-0 optimistic_agnesi[285626]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 16:29:50 compute-0 optimistic_agnesi[285626]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:29:50 compute-0 optimistic_agnesi[285626]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 16:29:50 compute-0 optimistic_agnesi[285626]:         "osd_id": 0,
Nov 25 16:29:50 compute-0 optimistic_agnesi[285626]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:29:50 compute-0 optimistic_agnesi[285626]:         "type": "bluestore"
Nov 25 16:29:50 compute-0 optimistic_agnesi[285626]:     }
Nov 25 16:29:50 compute-0 optimistic_agnesi[285626]: }
Nov 25 16:29:50 compute-0 systemd[1]: libpod-7749bca3df2d3d522e3138afdc2eb803e1cafc472a4d03736ad03b669c9578c5.scope: Deactivated successfully.
Nov 25 16:29:50 compute-0 systemd[1]: libpod-7749bca3df2d3d522e3138afdc2eb803e1cafc472a4d03736ad03b669c9578c5.scope: Consumed 1.146s CPU time.
Nov 25 16:29:50 compute-0 podman[285610]: 2025-11-25 16:29:50.473228397 +0000 UTC m=+1.402213544 container died 7749bca3df2d3d522e3138afdc2eb803e1cafc472a4d03736ad03b669c9578c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 16:29:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-e016ac9e270bab4a40a6d935b27684cee00c11fb2099eef5073313624667c4b7-merged.mount: Deactivated successfully.
Nov 25 16:29:50 compute-0 podman[285610]: 2025-11-25 16:29:50.542783289 +0000 UTC m=+1.471768446 container remove 7749bca3df2d3d522e3138afdc2eb803e1cafc472a4d03736ad03b669c9578c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:29:50 compute-0 systemd[1]: libpod-conmon-7749bca3df2d3d522e3138afdc2eb803e1cafc472a4d03736ad03b669c9578c5.scope: Deactivated successfully.
Nov 25 16:29:50 compute-0 sshd-session[284934]: Connection closed by authenticating user root 171.244.51.45 port 48004 [preauth]
Nov 25 16:29:50 compute-0 sudo[285464]: pam_unix(sudo:session): session closed for user root
Nov 25 16:29:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:29:50 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:29:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:29:50 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:29:50 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 0d25c212-69c7-4736-93ea-7611483c2ab9 does not exist
Nov 25 16:29:50 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 7a10dc5e-ecd1-4b57-9baf-50be3615dda9 does not exist
Nov 25 16:29:50 compute-0 sudo[285669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:29:50 compute-0 sudo[285669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:29:50 compute-0 sudo[285669]: pam_unix(sudo:session): session closed for user root
Nov 25 16:29:50 compute-0 sudo[285694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 16:29:50 compute-0 sudo[285694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:29:50 compute-0 sudo[285694]: pam_unix(sudo:session): session closed for user root
Nov 25 16:29:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1267: 321 pgs: 321 active+clean; 372 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 2.2 MiB/s wr, 203 op/s
Nov 25 16:29:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:29:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:29:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:29:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:29:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029724204556554014 of space, bias 1.0, pg target 0.8917261366966204 quantized to 32 (current 32)
Nov 25 16:29:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:29:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:29:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:29:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:29:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:29:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006663034365435958 of space, bias 1.0, pg target 0.19989103096307873 quantized to 32 (current 32)
Nov 25 16:29:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:29:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 16:29:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:29:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:29:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:29:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 16:29:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:29:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 16:29:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:29:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:29:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:29:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 16:29:51 compute-0 nova_compute[254092]: 2025-11-25 16:29:51.263 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:51 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:29:51 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:29:51 compute-0 ceph-mon[74985]: pgmap v1267: 321 pgs: 321 active+clean; 372 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 2.2 MiB/s wr, 203 op/s
Nov 25 16:29:52 compute-0 ovn_controller[153477]: 2025-11-25T16:29:52Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a8:bd:39 10.100.0.9
Nov 25 16:29:52 compute-0 ovn_controller[153477]: 2025-11-25T16:29:52Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a8:bd:39 10.100.0.9
Nov 25 16:29:52 compute-0 nova_compute[254092]: 2025-11-25 16:29:52.450 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:52 compute-0 nova_compute[254092]: 2025-11-25 16:29:52.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:29:52 compute-0 nova_compute[254092]: 2025-11-25 16:29:52.517 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:29:52 compute-0 nova_compute[254092]: 2025-11-25 16:29:52.518 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:29:52 compute-0 nova_compute[254092]: 2025-11-25 16:29:52.518 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:29:52 compute-0 nova_compute[254092]: 2025-11-25 16:29:52.518 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 16:29:52 compute-0 nova_compute[254092]: 2025-11-25 16:29:52.519 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:29:52 compute-0 kernel: tapd6146886-91 (unregistering): left promiscuous mode
Nov 25 16:29:52 compute-0 NetworkManager[48891]: <info>  [1764088192.7339] device (tapd6146886-91): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:29:52 compute-0 ovn_controller[153477]: 2025-11-25T16:29:52Z|00089|binding|INFO|Releasing lport af5d2618-f326-4aaf-9395-f47548ac108b from this chassis (sb_readonly=0)
Nov 25 16:29:52 compute-0 ovn_controller[153477]: 2025-11-25T16:29:52Z|00090|binding|INFO|Releasing lport e9e1418a-32be-4b0e-81c1-ad489d6a782b from this chassis (sb_readonly=0)
Nov 25 16:29:52 compute-0 nova_compute[254092]: 2025-11-25 16:29:52.743 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:52 compute-0 ovn_controller[153477]: 2025-11-25T16:29:52Z|00091|binding|INFO|Releasing lport d6146886-91a1-4d5f-9234-e1d0154b4230 from this chassis (sb_readonly=0)
Nov 25 16:29:52 compute-0 ovn_controller[153477]: 2025-11-25T16:29:52Z|00092|binding|INFO|Setting lport d6146886-91a1-4d5f-9234-e1d0154b4230 down in Southbound
Nov 25 16:29:52 compute-0 ovn_controller[153477]: 2025-11-25T16:29:52Z|00093|binding|INFO|Removing iface tapd6146886-91 ovn-installed in OVS
Nov 25 16:29:52 compute-0 nova_compute[254092]: 2025-11-25 16:29:52.761 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:52.773 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:a2:8e 10.100.0.12'], port_security=['fa:16:3e:dd:a2:8e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '3375e096-321c-459b-8b6a-e085bb62872f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a4c9877f37b34e7fba5f3cc9642d1a48', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2633f0b7-0e94-4653-a67a-e645ec4a69fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06f31098-96f7-4b9c-ab6f-85e953b1eed6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=d6146886-91a1-4d5f-9234-e1d0154b4230) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:29:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:52.775 163338 INFO neutron.agent.ovn.metadata.agent [-] Port d6146886-91a1-4d5f-9234-e1d0154b4230 in datapath 3403825e-13ff-43e0-80c4-b59cf23ed30b unbound from our chassis
Nov 25 16:29:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:52.777 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3403825e-13ff-43e0-80c4-b59cf23ed30b
Nov 25 16:29:52 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000012.scope: Deactivated successfully.
Nov 25 16:29:52 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000012.scope: Consumed 15.101s CPU time.
Nov 25 16:29:52 compute-0 systemd-machined[216343]: Machine qemu-20-instance-00000012 terminated.
Nov 25 16:29:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:52.809 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[65edb48e-7c3c-48b1-b3e3-e99510d183b7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:52 compute-0 nova_compute[254092]: 2025-11-25 16:29:52.827 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:52 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:29:52 compute-0 nova_compute[254092]: 2025-11-25 16:29:52.843 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:52.857 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[acd890ea-0754-4278-a1a9-f703e554c5bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:52.861 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[39e79783-915a-4349-a45b-13ce4fd65adc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:52 compute-0 podman[285739]: 2025-11-25 16:29:52.867663195 +0000 UTC m=+0.103718761 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=multipathd)
Nov 25 16:29:52 compute-0 podman[285742]: 2025-11-25 16:29:52.894187896 +0000 UTC m=+0.129986095 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:29:52 compute-0 podman[285743]: 2025-11-25 16:29:52.894498565 +0000 UTC m=+0.131225089 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller)
Nov 25 16:29:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:52.898 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[3c8d0d07-240a-4a3f-a032-7b49e984e6bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:52 compute-0 nova_compute[254092]: 2025-11-25 16:29:52.909 254096 DEBUG nova.compute.manager [req-18cf85be-7ac1-4d39-8e43-8a2be73761e0 req-4d7fdda9-8d0f-44c4-9487-7f94e0757818 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Received event network-changed-02104fc6-3780-400d-a6c2-577082384680 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:29:52 compute-0 nova_compute[254092]: 2025-11-25 16:29:52.909 254096 DEBUG nova.compute.manager [req-18cf85be-7ac1-4d39-8e43-8a2be73761e0 req-4d7fdda9-8d0f-44c4-9487-7f94e0757818 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Refreshing instance network info cache due to event network-changed-02104fc6-3780-400d-a6c2-577082384680. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:29:52 compute-0 nova_compute[254092]: 2025-11-25 16:29:52.909 254096 DEBUG oslo_concurrency.lockutils [req-18cf85be-7ac1-4d39-8e43-8a2be73761e0 req-4d7fdda9-8d0f-44c4-9487-7f94e0757818 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-df0db130-3ffa-4a60-8f7d-fb285a797631" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:29:52 compute-0 nova_compute[254092]: 2025-11-25 16:29:52.910 254096 DEBUG oslo_concurrency.lockutils [req-18cf85be-7ac1-4d39-8e43-8a2be73761e0 req-4d7fdda9-8d0f-44c4-9487-7f94e0757818 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-df0db130-3ffa-4a60-8f7d-fb285a797631" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:29:52 compute-0 nova_compute[254092]: 2025-11-25 16:29:52.910 254096 DEBUG nova.network.neutron [req-18cf85be-7ac1-4d39-8e43-8a2be73761e0 req-4d7fdda9-8d0f-44c4-9487-7f94e0757818 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Refreshing network info cache for port 02104fc6-3780-400d-a6c2-577082384680 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:29:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1268: 321 pgs: 321 active+clean; 375 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.8 MiB/s wr, 206 op/s
Nov 25 16:29:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:52.921 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e4211641-6a85-4de1-b8c5-ee3e7a56fc17]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3403825e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:44:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458728, 'reachable_time': 39709, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285808, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:52.940 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f51ad295-85dc-43f6-9898-c4934718fb39]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3403825e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458739, 'tstamp': 458739}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285809, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3403825e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458742, 'tstamp': 458742}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285809, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:52.941 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3403825e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:29:52 compute-0 nova_compute[254092]: 2025-11-25 16:29:52.943 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:52.949 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3403825e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:29:52 compute-0 nova_compute[254092]: 2025-11-25 16:29:52.949 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:52.950 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:29:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:52.950 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3403825e-10, col_values=(('external_ids', {'iface-id': 'e9e1418a-32be-4b0e-81c1-ad489d6a782b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:29:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:52.951 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:29:52 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:29:52 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1065714878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:29:52 compute-0 ceph-mon[74985]: pgmap v1268: 321 pgs: 321 active+clean; 375 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.8 MiB/s wr, 206 op/s
Nov 25 16:29:52 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1065714878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:29:53 compute-0 nova_compute[254092]: 2025-11-25 16:29:53.009 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:29:53 compute-0 nova_compute[254092]: 2025-11-25 16:29:53.095 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:29:53 compute-0 nova_compute[254092]: 2025-11-25 16:29:53.095 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:29:53 compute-0 nova_compute[254092]: 2025-11-25 16:29:53.098 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:29:53 compute-0 nova_compute[254092]: 2025-11-25 16:29:53.099 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:29:53 compute-0 nova_compute[254092]: 2025-11-25 16:29:53.102 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:29:53 compute-0 nova_compute[254092]: 2025-11-25 16:29:53.102 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:29:53 compute-0 nova_compute[254092]: 2025-11-25 16:29:53.105 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:29:53 compute-0 nova_compute[254092]: 2025-11-25 16:29:53.105 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:29:53 compute-0 nova_compute[254092]: 2025-11-25 16:29:53.108 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:29:53 compute-0 nova_compute[254092]: 2025-11-25 16:29:53.108 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:29:53 compute-0 nova_compute[254092]: 2025-11-25 16:29:53.112 254096 DEBUG nova.compute.manager [req-be5c3d74-f836-4a94-aa5f-4087f27d8889 req-65c50793-5415-40de-b7e4-98357f3afde4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Received event network-vif-unplugged-d6146886-91a1-4d5f-9234-e1d0154b4230 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:29:53 compute-0 nova_compute[254092]: 2025-11-25 16:29:53.112 254096 DEBUG oslo_concurrency.lockutils [req-be5c3d74-f836-4a94-aa5f-4087f27d8889 req-65c50793-5415-40de-b7e4-98357f3afde4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "3375e096-321c-459b-8b6a-e085bb62872f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:29:53 compute-0 nova_compute[254092]: 2025-11-25 16:29:53.112 254096 DEBUG oslo_concurrency.lockutils [req-be5c3d74-f836-4a94-aa5f-4087f27d8889 req-65c50793-5415-40de-b7e4-98357f3afde4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:29:53 compute-0 nova_compute[254092]: 2025-11-25 16:29:53.113 254096 DEBUG oslo_concurrency.lockutils [req-be5c3d74-f836-4a94-aa5f-4087f27d8889 req-65c50793-5415-40de-b7e4-98357f3afde4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:29:53 compute-0 nova_compute[254092]: 2025-11-25 16:29:53.113 254096 DEBUG nova.compute.manager [req-be5c3d74-f836-4a94-aa5f-4087f27d8889 req-65c50793-5415-40de-b7e4-98357f3afde4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] No waiting events found dispatching network-vif-unplugged-d6146886-91a1-4d5f-9234-e1d0154b4230 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:29:53 compute-0 nova_compute[254092]: 2025-11-25 16:29:53.113 254096 WARNING nova.compute.manager [req-be5c3d74-f836-4a94-aa5f-4087f27d8889 req-65c50793-5415-40de-b7e4-98357f3afde4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Received unexpected event network-vif-unplugged-d6146886-91a1-4d5f-9234-e1d0154b4230 for instance with vm_state error and task_state rebuilding.
Nov 25 16:29:53 compute-0 nova_compute[254092]: 2025-11-25 16:29:53.307 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:29:53 compute-0 nova_compute[254092]: 2025-11-25 16:29:53.309 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3748MB free_disk=59.804080963134766GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 16:29:53 compute-0 nova_compute[254092]: 2025-11-25 16:29:53.309 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:29:53 compute-0 nova_compute[254092]: 2025-11-25 16:29:53.309 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:29:53 compute-0 nova_compute[254092]: 2025-11-25 16:29:53.415 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 3375e096-321c-459b-8b6a-e085bb62872f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:29:53 compute-0 nova_compute[254092]: 2025-11-25 16:29:53.416 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 090ac2d7-979e-4706-8a01-5e94ab72282d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:29:53 compute-0 nova_compute[254092]: 2025-11-25 16:29:53.416 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance f0cb83d8-c2a3-49d1-8c01-b9be9922abd1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:29:53 compute-0 nova_compute[254092]: 2025-11-25 16:29:53.416 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 7777dd86-925e-4f98-bd68-e38ac540d97b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:29:53 compute-0 nova_compute[254092]: 2025-11-25 16:29:53.416 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance df0db130-3ffa-4a60-8f7d-fb285a797631 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:29:53 compute-0 nova_compute[254092]: 2025-11-25 16:29:53.417 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 16:29:53 compute-0 nova_compute[254092]: 2025-11-25 16:29:53.417 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 16:29:53 compute-0 nova_compute[254092]: 2025-11-25 16:29:53.421 254096 INFO nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Instance shutdown successfully after 3 seconds.
Nov 25 16:29:53 compute-0 nova_compute[254092]: 2025-11-25 16:29:53.427 254096 INFO nova.virt.libvirt.driver [-] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Instance destroyed successfully.
Nov 25 16:29:53 compute-0 nova_compute[254092]: 2025-11-25 16:29:53.431 254096 INFO nova.virt.libvirt.driver [-] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Instance destroyed successfully.
Nov 25 16:29:53 compute-0 nova_compute[254092]: 2025-11-25 16:29:53.432 254096 DEBUG nova.virt.libvirt.vif [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:28:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1705426121',display_name='tempest-ServersAdminTestJSON-server-1705426121',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1705426121',id=18,image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:28:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a4c9877f37b34e7fba5f3cc9642d1a48',ramdisk_id='',reservation_id='r-k4yvqw87',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-2045227182',owner_user_name='tempest-ServersAdminTestJSON-2045227182-project-member'},tags=<?>,tas
k_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:29:49Z,user_data=None,user_id='a8e217b742fe4a57a4ac4ffc776670fe',uuid=3375e096-321c-459b-8b6a-e085bb62872f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:29:53 compute-0 nova_compute[254092]: 2025-11-25 16:29:53.433 254096 DEBUG nova.network.os_vif_util [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converting VIF {"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:29:53 compute-0 nova_compute[254092]: 2025-11-25 16:29:53.433 254096 DEBUG nova.network.os_vif_util [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:a2:8e,bridge_name='br-int',has_traffic_filtering=True,id=d6146886-91a1-4d5f-9234-e1d0154b4230,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6146886-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:29:53 compute-0 nova_compute[254092]: 2025-11-25 16:29:53.434 254096 DEBUG os_vif [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:a2:8e,bridge_name='br-int',has_traffic_filtering=True,id=d6146886-91a1-4d5f-9234-e1d0154b4230,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6146886-91') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:29:53 compute-0 nova_compute[254092]: 2025-11-25 16:29:53.435 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:53 compute-0 nova_compute[254092]: 2025-11-25 16:29:53.436 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd6146886-91, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:29:53 compute-0 nova_compute[254092]: 2025-11-25 16:29:53.440 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:53 compute-0 nova_compute[254092]: 2025-11-25 16:29:53.444 254096 INFO os_vif [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:a2:8e,bridge_name='br-int',has_traffic_filtering=True,id=d6146886-91a1-4d5f-9234-e1d0154b4230,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6146886-91')
Nov 25 16:29:53 compute-0 nova_compute[254092]: 2025-11-25 16:29:53.547 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:29:53 compute-0 nova_compute[254092]: 2025-11-25 16:29:53.910 254096 INFO nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Deleting instance files /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f_del
Nov 25 16:29:53 compute-0 nova_compute[254092]: 2025-11-25 16:29:53.910 254096 INFO nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Deletion of /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f_del complete
Nov 25 16:29:53 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:29:53 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4028882549' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:29:53 compute-0 nova_compute[254092]: 2025-11-25 16:29:53.982 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:29:53 compute-0 nova_compute[254092]: 2025-11-25 16:29:53.987 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:29:54 compute-0 nova_compute[254092]: 2025-11-25 16:29:54.013 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:29:54 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4028882549' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:29:54 compute-0 nova_compute[254092]: 2025-11-25 16:29:54.095 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 16:29:54 compute-0 nova_compute[254092]: 2025-11-25 16:29:54.096 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:29:54 compute-0 nova_compute[254092]: 2025-11-25 16:29:54.170 254096 DEBUG nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:29:54 compute-0 nova_compute[254092]: 2025-11-25 16:29:54.171 254096 INFO nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Creating image(s)
Nov 25 16:29:54 compute-0 nova_compute[254092]: 2025-11-25 16:29:54.195 254096 DEBUG nova.storage.rbd_utils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 3375e096-321c-459b-8b6a-e085bb62872f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:29:54 compute-0 nova_compute[254092]: 2025-11-25 16:29:54.220 254096 DEBUG nova.storage.rbd_utils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 3375e096-321c-459b-8b6a-e085bb62872f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:29:54 compute-0 nova_compute[254092]: 2025-11-25 16:29:54.250 254096 DEBUG nova.storage.rbd_utils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 3375e096-321c-459b-8b6a-e085bb62872f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:29:54 compute-0 nova_compute[254092]: 2025-11-25 16:29:54.256 254096 DEBUG oslo_concurrency.processutils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:29:54 compute-0 nova_compute[254092]: 2025-11-25 16:29:54.326 254096 DEBUG oslo_concurrency.processutils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:29:54 compute-0 nova_compute[254092]: 2025-11-25 16:29:54.327 254096 DEBUG oslo_concurrency.lockutils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:29:54 compute-0 nova_compute[254092]: 2025-11-25 16:29:54.327 254096 DEBUG oslo_concurrency.lockutils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:29:54 compute-0 nova_compute[254092]: 2025-11-25 16:29:54.327 254096 DEBUG oslo_concurrency.lockutils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:29:54 compute-0 nova_compute[254092]: 2025-11-25 16:29:54.349 254096 DEBUG nova.storage.rbd_utils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 3375e096-321c-459b-8b6a-e085bb62872f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:29:54 compute-0 nova_compute[254092]: 2025-11-25 16:29:54.354 254096 DEBUG oslo_concurrency.processutils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 3375e096-321c-459b-8b6a-e085bb62872f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:29:54 compute-0 nova_compute[254092]: 2025-11-25 16:29:54.622 254096 DEBUG oslo_concurrency.processutils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 3375e096-321c-459b-8b6a-e085bb62872f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.268s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:29:54 compute-0 nova_compute[254092]: 2025-11-25 16:29:54.689 254096 DEBUG nova.storage.rbd_utils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] resizing rbd image 3375e096-321c-459b-8b6a-e085bb62872f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:29:54 compute-0 nova_compute[254092]: 2025-11-25 16:29:54.768 254096 DEBUG nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:29:54 compute-0 nova_compute[254092]: 2025-11-25 16:29:54.768 254096 DEBUG nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Ensure instance console log exists: /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:29:54 compute-0 nova_compute[254092]: 2025-11-25 16:29:54.769 254096 DEBUG oslo_concurrency.lockutils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:29:54 compute-0 nova_compute[254092]: 2025-11-25 16:29:54.769 254096 DEBUG oslo_concurrency.lockutils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:29:54 compute-0 nova_compute[254092]: 2025-11-25 16:29:54.769 254096 DEBUG oslo_concurrency.lockutils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:29:54 compute-0 nova_compute[254092]: 2025-11-25 16:29:54.772 254096 DEBUG nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Start _get_guest_xml network_info=[{"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:36Z,direct_url=<?>,disk_format='qcow2',id=a4aa3708-bb73-4b5a-b3f3-42153358021e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:29:54 compute-0 nova_compute[254092]: 2025-11-25 16:29:54.779 254096 WARNING nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Nov 25 16:29:54 compute-0 nova_compute[254092]: 2025-11-25 16:29:54.783 254096 DEBUG nova.virt.libvirt.host [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:29:54 compute-0 nova_compute[254092]: 2025-11-25 16:29:54.783 254096 DEBUG nova.virt.libvirt.host [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:29:54 compute-0 nova_compute[254092]: 2025-11-25 16:29:54.787 254096 DEBUG nova.virt.libvirt.host [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:29:54 compute-0 nova_compute[254092]: 2025-11-25 16:29:54.788 254096 DEBUG nova.virt.libvirt.host [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:29:54 compute-0 nova_compute[254092]: 2025-11-25 16:29:54.789 254096 DEBUG nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:29:54 compute-0 nova_compute[254092]: 2025-11-25 16:29:54.790 254096 DEBUG nova.virt.hardware [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:36Z,direct_url=<?>,disk_format='qcow2',id=a4aa3708-bb73-4b5a-b3f3-42153358021e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:29:54 compute-0 nova_compute[254092]: 2025-11-25 16:29:54.791 254096 DEBUG nova.virt.hardware [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:29:54 compute-0 nova_compute[254092]: 2025-11-25 16:29:54.791 254096 DEBUG nova.virt.hardware [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:29:54 compute-0 nova_compute[254092]: 2025-11-25 16:29:54.791 254096 DEBUG nova.virt.hardware [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:29:54 compute-0 nova_compute[254092]: 2025-11-25 16:29:54.791 254096 DEBUG nova.virt.hardware [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:29:54 compute-0 nova_compute[254092]: 2025-11-25 16:29:54.791 254096 DEBUG nova.virt.hardware [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:29:54 compute-0 nova_compute[254092]: 2025-11-25 16:29:54.792 254096 DEBUG nova.virt.hardware [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:29:54 compute-0 nova_compute[254092]: 2025-11-25 16:29:54.792 254096 DEBUG nova.virt.hardware [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:29:54 compute-0 nova_compute[254092]: 2025-11-25 16:29:54.792 254096 DEBUG nova.virt.hardware [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:29:54 compute-0 nova_compute[254092]: 2025-11-25 16:29:54.792 254096 DEBUG nova.virt.hardware [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:29:54 compute-0 nova_compute[254092]: 2025-11-25 16:29:54.793 254096 DEBUG nova.virt.hardware [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:29:54 compute-0 nova_compute[254092]: 2025-11-25 16:29:54.793 254096 DEBUG nova.objects.instance [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 3375e096-321c-459b-8b6a-e085bb62872f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:29:54 compute-0 nova_compute[254092]: 2025-11-25 16:29:54.811 254096 DEBUG oslo_concurrency.processutils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:29:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1269: 321 pgs: 321 active+clean; 370 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 249 op/s
Nov 25 16:29:55 compute-0 ceph-mon[74985]: pgmap v1269: 321 pgs: 321 active+clean; 370 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 249 op/s
Nov 25 16:29:55 compute-0 nova_compute[254092]: 2025-11-25 16:29:55.096 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:29:55 compute-0 nova_compute[254092]: 2025-11-25 16:29:55.097 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:29:55 compute-0 nova_compute[254092]: 2025-11-25 16:29:55.097 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:29:55 compute-0 nova_compute[254092]: 2025-11-25 16:29:55.097 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 16:29:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 16:29:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3869273612' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:29:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 16:29:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3869273612' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:29:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:29:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3946544503' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:29:55 compute-0 nova_compute[254092]: 2025-11-25 16:29:55.292 254096 DEBUG nova.compute.manager [req-91ff253c-f6fb-47ed-bda2-60cdfa2385a9 req-154e1b78-a0e4-4078-b7e3-51ef397388bc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Received event network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:29:55 compute-0 nova_compute[254092]: 2025-11-25 16:29:55.292 254096 DEBUG oslo_concurrency.lockutils [req-91ff253c-f6fb-47ed-bda2-60cdfa2385a9 req-154e1b78-a0e4-4078-b7e3-51ef397388bc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "3375e096-321c-459b-8b6a-e085bb62872f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:29:55 compute-0 nova_compute[254092]: 2025-11-25 16:29:55.293 254096 DEBUG oslo_concurrency.lockutils [req-91ff253c-f6fb-47ed-bda2-60cdfa2385a9 req-154e1b78-a0e4-4078-b7e3-51ef397388bc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:29:55 compute-0 nova_compute[254092]: 2025-11-25 16:29:55.293 254096 DEBUG oslo_concurrency.lockutils [req-91ff253c-f6fb-47ed-bda2-60cdfa2385a9 req-154e1b78-a0e4-4078-b7e3-51ef397388bc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:29:55 compute-0 nova_compute[254092]: 2025-11-25 16:29:55.293 254096 DEBUG nova.compute.manager [req-91ff253c-f6fb-47ed-bda2-60cdfa2385a9 req-154e1b78-a0e4-4078-b7e3-51ef397388bc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] No waiting events found dispatching network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:29:55 compute-0 nova_compute[254092]: 2025-11-25 16:29:55.293 254096 WARNING nova.compute.manager [req-91ff253c-f6fb-47ed-bda2-60cdfa2385a9 req-154e1b78-a0e4-4078-b7e3-51ef397388bc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Received unexpected event network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 for instance with vm_state error and task_state rebuild_spawning.
Nov 25 16:29:55 compute-0 nova_compute[254092]: 2025-11-25 16:29:55.294 254096 DEBUG oslo_concurrency.processutils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:29:55 compute-0 nova_compute[254092]: 2025-11-25 16:29:55.315 254096 DEBUG nova.storage.rbd_utils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 3375e096-321c-459b-8b6a-e085bb62872f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:29:55 compute-0 nova_compute[254092]: 2025-11-25 16:29:55.320 254096 DEBUG oslo_concurrency.processutils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:29:55 compute-0 nova_compute[254092]: 2025-11-25 16:29:55.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:29:55 compute-0 nova_compute[254092]: 2025-11-25 16:29:55.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:29:55 compute-0 nova_compute[254092]: 2025-11-25 16:29:55.543 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088180.5394893, 7b9f60af-05f0-43c7-bce7-227cb54ec793 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:29:55 compute-0 nova_compute[254092]: 2025-11-25 16:29:55.544 254096 INFO nova.compute.manager [-] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] VM Stopped (Lifecycle Event)
Nov 25 16:29:55 compute-0 nova_compute[254092]: 2025-11-25 16:29:55.576 254096 DEBUG nova.compute.manager [None req-14dfef4a-30f9-4049-904d-48d0a625224c - - - - - -] [instance: 7b9f60af-05f0-43c7-bce7-227cb54ec793] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:29:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:29:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3292593944' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:29:55 compute-0 nova_compute[254092]: 2025-11-25 16:29:55.785 254096 DEBUG oslo_concurrency.processutils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:29:55 compute-0 nova_compute[254092]: 2025-11-25 16:29:55.786 254096 DEBUG nova.virt.libvirt.vif [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:28:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1705426121',display_name='tempest-ServersAdminTestJSON-server-1705426121',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1705426121',id=18,image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:28:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a4c9877f37b34e7fba5f3cc9642d1a48',ramdisk_id='',reservation_id='r-k4yvqw87',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-2045227182',owner_user_name='tempest-ServersAdminTestJSON-2045227182-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:29:54Z,user_data=None,user_id='a8e217b742fe4a57a4ac4ffc776670fe',uuid=3375e096-321c-459b-8b6a-e085bb62872f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:29:55 compute-0 nova_compute[254092]: 2025-11-25 16:29:55.786 254096 DEBUG nova.network.os_vif_util [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converting VIF {"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:29:55 compute-0 nova_compute[254092]: 2025-11-25 16:29:55.787 254096 DEBUG nova.network.os_vif_util [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:a2:8e,bridge_name='br-int',has_traffic_filtering=True,id=d6146886-91a1-4d5f-9234-e1d0154b4230,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6146886-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:29:55 compute-0 nova_compute[254092]: 2025-11-25 16:29:55.789 254096 DEBUG nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:29:55 compute-0 nova_compute[254092]:   <uuid>3375e096-321c-459b-8b6a-e085bb62872f</uuid>
Nov 25 16:29:55 compute-0 nova_compute[254092]:   <name>instance-00000012</name>
Nov 25 16:29:55 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:29:55 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:29:55 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:29:55 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:       <nova:name>tempest-ServersAdminTestJSON-server-1705426121</nova:name>
Nov 25 16:29:55 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:29:54</nova:creationTime>
Nov 25 16:29:55 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:29:55 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:29:55 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:29:55 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:29:55 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:29:55 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:29:55 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:29:55 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:29:55 compute-0 nova_compute[254092]:         <nova:user uuid="a8e217b742fe4a57a4ac4ffc776670fe">tempest-ServersAdminTestJSON-2045227182-project-member</nova:user>
Nov 25 16:29:55 compute-0 nova_compute[254092]:         <nova:project uuid="a4c9877f37b34e7fba5f3cc9642d1a48">tempest-ServersAdminTestJSON-2045227182</nova:project>
Nov 25 16:29:55 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:29:55 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="a4aa3708-bb73-4b5a-b3f3-42153358021e"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:29:55 compute-0 nova_compute[254092]:         <nova:port uuid="d6146886-91a1-4d5f-9234-e1d0154b4230">
Nov 25 16:29:55 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:29:55 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:29:55 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:29:55 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:29:55 compute-0 nova_compute[254092]:     <system>
Nov 25 16:29:55 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:29:55 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:29:55 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:29:55 compute-0 nova_compute[254092]:       <entry name="serial">3375e096-321c-459b-8b6a-e085bb62872f</entry>
Nov 25 16:29:55 compute-0 nova_compute[254092]:       <entry name="uuid">3375e096-321c-459b-8b6a-e085bb62872f</entry>
Nov 25 16:29:55 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     </system>
Nov 25 16:29:55 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:29:55 compute-0 nova_compute[254092]:   <os>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:   </os>
Nov 25 16:29:55 compute-0 nova_compute[254092]:   <features>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:   </features>
Nov 25 16:29:55 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:29:55 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:29:55 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:29:55 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:29:55 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:29:55 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/3375e096-321c-459b-8b6a-e085bb62872f_disk">
Nov 25 16:29:55 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:       </source>
Nov 25 16:29:55 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:29:55 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:29:55 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:29:55 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/3375e096-321c-459b-8b6a-e085bb62872f_disk.config">
Nov 25 16:29:55 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:       </source>
Nov 25 16:29:55 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:29:55 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:29:55 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:29:55 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:dd:a2:8e"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:       <target dev="tapd6146886-91"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:29:55 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f/console.log" append="off"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     <video>
Nov 25 16:29:55 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     </video>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:29:55 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:29:55 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:29:55 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:29:55 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:29:55 compute-0 nova_compute[254092]: </domain>
Nov 25 16:29:55 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:29:55 compute-0 nova_compute[254092]: 2025-11-25 16:29:55.789 254096 DEBUG nova.compute.manager [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Preparing to wait for external event network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:29:55 compute-0 nova_compute[254092]: 2025-11-25 16:29:55.790 254096 DEBUG oslo_concurrency.lockutils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "3375e096-321c-459b-8b6a-e085bb62872f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:29:55 compute-0 nova_compute[254092]: 2025-11-25 16:29:55.790 254096 DEBUG oslo_concurrency.lockutils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:29:55 compute-0 nova_compute[254092]: 2025-11-25 16:29:55.790 254096 DEBUG oslo_concurrency.lockutils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:29:55 compute-0 nova_compute[254092]: 2025-11-25 16:29:55.790 254096 DEBUG nova.virt.libvirt.vif [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:28:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1705426121',display_name='tempest-ServersAdminTestJSON-server-1705426121',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1705426121',id=18,image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:28:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a4c9877f37b34e7fba5f3cc9642d1a48',ramdisk_id='',reservation_id='r-k4yvqw87',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-2045227182',owner_user_name='tempest-ServersAdminTestJSON-2045227182-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:29:54Z,user_data=None,user_id='a8e217b742fe4a57a4ac4ffc776670fe',uuid=3375e096-321c-459b-8b6a-e085bb62872f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:29:55 compute-0 nova_compute[254092]: 2025-11-25 16:29:55.791 254096 DEBUG nova.network.os_vif_util [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converting VIF {"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:29:55 compute-0 nova_compute[254092]: 2025-11-25 16:29:55.791 254096 DEBUG nova.network.os_vif_util [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:a2:8e,bridge_name='br-int',has_traffic_filtering=True,id=d6146886-91a1-4d5f-9234-e1d0154b4230,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6146886-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:29:55 compute-0 nova_compute[254092]: 2025-11-25 16:29:55.791 254096 DEBUG os_vif [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:a2:8e,bridge_name='br-int',has_traffic_filtering=True,id=d6146886-91a1-4d5f-9234-e1d0154b4230,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6146886-91') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:29:55 compute-0 nova_compute[254092]: 2025-11-25 16:29:55.792 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:55 compute-0 nova_compute[254092]: 2025-11-25 16:29:55.792 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:29:55 compute-0 nova_compute[254092]: 2025-11-25 16:29:55.792 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:29:55 compute-0 nova_compute[254092]: 2025-11-25 16:29:55.795 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:55 compute-0 nova_compute[254092]: 2025-11-25 16:29:55.795 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd6146886-91, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:29:55 compute-0 nova_compute[254092]: 2025-11-25 16:29:55.795 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd6146886-91, col_values=(('external_ids', {'iface-id': 'd6146886-91a1-4d5f-9234-e1d0154b4230', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dd:a2:8e', 'vm-uuid': '3375e096-321c-459b-8b6a-e085bb62872f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:29:55 compute-0 NetworkManager[48891]: <info>  [1764088195.7975] manager: (tapd6146886-91): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/60)
Nov 25 16:29:55 compute-0 nova_compute[254092]: 2025-11-25 16:29:55.799 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:29:55 compute-0 nova_compute[254092]: 2025-11-25 16:29:55.805 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:55 compute-0 nova_compute[254092]: 2025-11-25 16:29:55.806 254096 INFO os_vif [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:a2:8e,bridge_name='br-int',has_traffic_filtering=True,id=d6146886-91a1-4d5f-9234-e1d0154b4230,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6146886-91')
Nov 25 16:29:55 compute-0 nova_compute[254092]: 2025-11-25 16:29:55.858 254096 DEBUG nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:29:55 compute-0 nova_compute[254092]: 2025-11-25 16:29:55.858 254096 DEBUG nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:29:55 compute-0 nova_compute[254092]: 2025-11-25 16:29:55.858 254096 DEBUG nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] No VIF found with MAC fa:16:3e:dd:a2:8e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:29:55 compute-0 nova_compute[254092]: 2025-11-25 16:29:55.859 254096 INFO nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Using config drive
Nov 25 16:29:55 compute-0 nova_compute[254092]: 2025-11-25 16:29:55.877 254096 DEBUG nova.storage.rbd_utils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 3375e096-321c-459b-8b6a-e085bb62872f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:29:55 compute-0 nova_compute[254092]: 2025-11-25 16:29:55.896 254096 DEBUG nova.objects.instance [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 3375e096-321c-459b-8b6a-e085bb62872f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:29:55 compute-0 nova_compute[254092]: 2025-11-25 16:29:55.927 254096 DEBUG nova.objects.instance [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'keypairs' on Instance uuid 3375e096-321c-459b-8b6a-e085bb62872f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:29:56 compute-0 nova_compute[254092]: 2025-11-25 16:29:56.040 254096 DEBUG nova.network.neutron [req-18cf85be-7ac1-4d39-8e43-8a2be73761e0 req-4d7fdda9-8d0f-44c4-9487-7f94e0757818 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Updated VIF entry in instance network info cache for port 02104fc6-3780-400d-a6c2-577082384680. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:29:56 compute-0 nova_compute[254092]: 2025-11-25 16:29:56.041 254096 DEBUG nova.network.neutron [req-18cf85be-7ac1-4d39-8e43-8a2be73761e0 req-4d7fdda9-8d0f-44c4-9487-7f94e0757818 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Updating instance_info_cache with network_info: [{"id": "02104fc6-3780-400d-a6c2-577082384680", "address": "fa:16:3e:ff:1b:ad", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02104fc6-37", "ovs_interfaceid": "02104fc6-3780-400d-a6c2-577082384680", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:29:56 compute-0 nova_compute[254092]: 2025-11-25 16:29:56.062 254096 DEBUG oslo_concurrency.lockutils [req-18cf85be-7ac1-4d39-8e43-8a2be73761e0 req-4d7fdda9-8d0f-44c4-9487-7f94e0757818 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-df0db130-3ffa-4a60-8f7d-fb285a797631" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:29:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/3869273612' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:29:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/3869273612' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:29:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3946544503' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:29:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3292593944' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:29:56 compute-0 nova_compute[254092]: 2025-11-25 16:29:56.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:29:56 compute-0 nova_compute[254092]: 2025-11-25 16:29:56.775 254096 INFO nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Creating config drive at /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f/disk.config
Nov 25 16:29:56 compute-0 nova_compute[254092]: 2025-11-25 16:29:56.780 254096 DEBUG oslo_concurrency.processutils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj6do501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:29:56 compute-0 nova_compute[254092]: 2025-11-25 16:29:56.910 254096 DEBUG oslo_concurrency.processutils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj6do501s" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:29:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1270: 321 pgs: 321 active+clean; 372 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.7 MiB/s wr, 254 op/s
Nov 25 16:29:56 compute-0 nova_compute[254092]: 2025-11-25 16:29:56.934 254096 DEBUG nova.storage.rbd_utils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 3375e096-321c-459b-8b6a-e085bb62872f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:29:56 compute-0 nova_compute[254092]: 2025-11-25 16:29:56.937 254096 DEBUG oslo_concurrency.processutils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f/disk.config 3375e096-321c-459b-8b6a-e085bb62872f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:29:57 compute-0 ceph-mon[74985]: pgmap v1270: 321 pgs: 321 active+clean; 372 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.7 MiB/s wr, 254 op/s
Nov 25 16:29:57 compute-0 nova_compute[254092]: 2025-11-25 16:29:57.097 254096 DEBUG oslo_concurrency.processutils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f/disk.config 3375e096-321c-459b-8b6a-e085bb62872f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:29:57 compute-0 nova_compute[254092]: 2025-11-25 16:29:57.098 254096 INFO nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Deleting local config drive /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f/disk.config because it was imported into RBD.
Nov 25 16:29:57 compute-0 kernel: tapd6146886-91: entered promiscuous mode
Nov 25 16:29:57 compute-0 NetworkManager[48891]: <info>  [1764088197.1499] manager: (tapd6146886-91): new Tun device (/org/freedesktop/NetworkManager/Devices/61)
Nov 25 16:29:57 compute-0 nova_compute[254092]: 2025-11-25 16:29:57.150 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:57 compute-0 ovn_controller[153477]: 2025-11-25T16:29:57Z|00094|binding|INFO|Claiming lport d6146886-91a1-4d5f-9234-e1d0154b4230 for this chassis.
Nov 25 16:29:57 compute-0 ovn_controller[153477]: 2025-11-25T16:29:57Z|00095|binding|INFO|d6146886-91a1-4d5f-9234-e1d0154b4230: Claiming fa:16:3e:dd:a2:8e 10.100.0.12
Nov 25 16:29:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:57.158 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:a2:8e 10.100.0.12'], port_security=['fa:16:3e:dd:a2:8e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '3375e096-321c-459b-8b6a-e085bb62872f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a4c9877f37b34e7fba5f3cc9642d1a48', 'neutron:revision_number': '5', 'neutron:security_group_ids': '2633f0b7-0e94-4653-a67a-e645ec4a69fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06f31098-96f7-4b9c-ab6f-85e953b1eed6, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=d6146886-91a1-4d5f-9234-e1d0154b4230) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:29:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:57.159 163338 INFO neutron.agent.ovn.metadata.agent [-] Port d6146886-91a1-4d5f-9234-e1d0154b4230 in datapath 3403825e-13ff-43e0-80c4-b59cf23ed30b bound to our chassis
Nov 25 16:29:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:57.161 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3403825e-13ff-43e0-80c4-b59cf23ed30b
Nov 25 16:29:57 compute-0 ovn_controller[153477]: 2025-11-25T16:29:57Z|00096|binding|INFO|Setting lport d6146886-91a1-4d5f-9234-e1d0154b4230 ovn-installed in OVS
Nov 25 16:29:57 compute-0 ovn_controller[153477]: 2025-11-25T16:29:57Z|00097|binding|INFO|Setting lport d6146886-91a1-4d5f-9234-e1d0154b4230 up in Southbound
Nov 25 16:29:57 compute-0 nova_compute[254092]: 2025-11-25 16:29:57.182 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:57.182 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3c85dbce-bbc2-4bcf-a6da-b79d501f04d6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:57 compute-0 systemd-udevd[286168]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:29:57 compute-0 nova_compute[254092]: 2025-11-25 16:29:57.186 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:57 compute-0 systemd-machined[216343]: New machine qemu-27-instance-00000012.
Nov 25 16:29:57 compute-0 NetworkManager[48891]: <info>  [1764088197.1987] device (tapd6146886-91): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:29:57 compute-0 NetworkManager[48891]: <info>  [1764088197.1994] device (tapd6146886-91): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:29:57 compute-0 systemd[1]: Started Virtual Machine qemu-27-instance-00000012.
Nov 25 16:29:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:57.215 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8780976d-c3eb-4096-b593-0a899ab59f0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:57.219 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d0a5a588-e425-4b62-ad2f-ca485b1e7434]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:57.251 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[da3c93b3-e28d-4145-890a-4a2443eea37a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:57.267 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9f4e3d10-9d3c-4273-8f10-140d18b144c2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3403825e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:44:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 13, 'tx_packets': 13, 'rx_bytes': 826, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 13, 'tx_packets': 13, 'rx_bytes': 826, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458728, 'reachable_time': 39709, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286179, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:57.283 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9a837a12-72c9-4db9-9983-766307c24b4f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3403825e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458739, 'tstamp': 458739}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286181, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3403825e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458742, 'tstamp': 458742}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286181, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:29:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:57.284 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3403825e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:29:57 compute-0 nova_compute[254092]: 2025-11-25 16:29:57.286 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:57.287 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3403825e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:29:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:57.287 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:29:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:57.287 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3403825e-10, col_values=(('external_ids', {'iface-id': 'e9e1418a-32be-4b0e-81c1-ad489d6a782b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:29:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:29:57.288 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:29:57 compute-0 nova_compute[254092]: 2025-11-25 16:29:57.453 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:29:57 compute-0 nova_compute[254092]: 2025-11-25 16:29:57.555 254096 DEBUG nova.compute.manager [req-c3259156-3f9b-43c5-9ad7-5d96fce03c4d req-7544efce-e038-4628-b6f7-f5cbccaf1370 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Received event network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:29:57 compute-0 nova_compute[254092]: 2025-11-25 16:29:57.555 254096 DEBUG oslo_concurrency.lockutils [req-c3259156-3f9b-43c5-9ad7-5d96fce03c4d req-7544efce-e038-4628-b6f7-f5cbccaf1370 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "3375e096-321c-459b-8b6a-e085bb62872f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:29:57 compute-0 nova_compute[254092]: 2025-11-25 16:29:57.555 254096 DEBUG oslo_concurrency.lockutils [req-c3259156-3f9b-43c5-9ad7-5d96fce03c4d req-7544efce-e038-4628-b6f7-f5cbccaf1370 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:29:57 compute-0 nova_compute[254092]: 2025-11-25 16:29:57.556 254096 DEBUG oslo_concurrency.lockutils [req-c3259156-3f9b-43c5-9ad7-5d96fce03c4d req-7544efce-e038-4628-b6f7-f5cbccaf1370 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:29:57 compute-0 nova_compute[254092]: 2025-11-25 16:29:57.556 254096 DEBUG nova.compute.manager [req-c3259156-3f9b-43c5-9ad7-5d96fce03c4d req-7544efce-e038-4628-b6f7-f5cbccaf1370 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Processing event network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:29:57 compute-0 nova_compute[254092]: 2025-11-25 16:29:57.772 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for 3375e096-321c-459b-8b6a-e085bb62872f due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 16:29:57 compute-0 nova_compute[254092]: 2025-11-25 16:29:57.773 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088197.7721932, 3375e096-321c-459b-8b6a-e085bb62872f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:29:57 compute-0 nova_compute[254092]: 2025-11-25 16:29:57.774 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] VM Started (Lifecycle Event)
Nov 25 16:29:57 compute-0 nova_compute[254092]: 2025-11-25 16:29:57.776 254096 DEBUG nova.compute.manager [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:29:57 compute-0 nova_compute[254092]: 2025-11-25 16:29:57.779 254096 DEBUG nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:29:57 compute-0 nova_compute[254092]: 2025-11-25 16:29:57.783 254096 INFO nova.virt.libvirt.driver [-] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Instance spawned successfully.
Nov 25 16:29:57 compute-0 nova_compute[254092]: 2025-11-25 16:29:57.783 254096 DEBUG nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:29:57 compute-0 nova_compute[254092]: 2025-11-25 16:29:57.812 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:29:57 compute-0 nova_compute[254092]: 2025-11-25 16:29:57.821 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:29:57 compute-0 nova_compute[254092]: 2025-11-25 16:29:57.829 254096 DEBUG nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:29:57 compute-0 nova_compute[254092]: 2025-11-25 16:29:57.829 254096 DEBUG nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:29:57 compute-0 nova_compute[254092]: 2025-11-25 16:29:57.830 254096 DEBUG nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:29:57 compute-0 nova_compute[254092]: 2025-11-25 16:29:57.831 254096 DEBUG nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:29:57 compute-0 nova_compute[254092]: 2025-11-25 16:29:57.832 254096 DEBUG nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:29:57 compute-0 nova_compute[254092]: 2025-11-25 16:29:57.833 254096 DEBUG nova.virt.libvirt.driver [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:29:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:29:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Nov 25 16:29:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Nov 25 16:29:57 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Nov 25 16:29:57 compute-0 nova_compute[254092]: 2025-11-25 16:29:57.859 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 16:29:57 compute-0 nova_compute[254092]: 2025-11-25 16:29:57.859 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088197.7731946, 3375e096-321c-459b-8b6a-e085bb62872f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:29:57 compute-0 nova_compute[254092]: 2025-11-25 16:29:57.860 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] VM Paused (Lifecycle Event)
Nov 25 16:29:57 compute-0 nova_compute[254092]: 2025-11-25 16:29:57.888 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:29:57 compute-0 nova_compute[254092]: 2025-11-25 16:29:57.893 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088197.7790942, 3375e096-321c-459b-8b6a-e085bb62872f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:29:57 compute-0 nova_compute[254092]: 2025-11-25 16:29:57.894 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] VM Resumed (Lifecycle Event)
Nov 25 16:29:57 compute-0 nova_compute[254092]: 2025-11-25 16:29:57.916 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:29:57 compute-0 nova_compute[254092]: 2025-11-25 16:29:57.919 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:29:57 compute-0 nova_compute[254092]: 2025-11-25 16:29:57.930 254096 DEBUG nova.compute.manager [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:29:57 compute-0 nova_compute[254092]: 2025-11-25 16:29:57.956 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 16:29:57 compute-0 nova_compute[254092]: 2025-11-25 16:29:57.989 254096 DEBUG oslo_concurrency.lockutils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:29:57 compute-0 nova_compute[254092]: 2025-11-25 16:29:57.989 254096 DEBUG oslo_concurrency.lockutils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:29:57 compute-0 nova_compute[254092]: 2025-11-25 16:29:57.989 254096 DEBUG nova.objects.instance [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 16:29:58 compute-0 nova_compute[254092]: 2025-11-25 16:29:58.052 254096 DEBUG oslo_concurrency.lockutils [None req-681f3d4a-b5eb-495f-b822-b4b8d31447fe a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:29:58 compute-0 nova_compute[254092]: 2025-11-25 16:29:58.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:29:58 compute-0 nova_compute[254092]: 2025-11-25 16:29:58.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 16:29:58 compute-0 nova_compute[254092]: 2025-11-25 16:29:58.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 16:29:58 compute-0 nova_compute[254092]: 2025-11-25 16:29:58.784 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-3375e096-321c-459b-8b6a-e085bb62872f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:29:58 compute-0 nova_compute[254092]: 2025-11-25 16:29:58.785 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-3375e096-321c-459b-8b6a-e085bb62872f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:29:58 compute-0 nova_compute[254092]: 2025-11-25 16:29:58.785 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 16:29:58 compute-0 nova_compute[254092]: 2025-11-25 16:29:58.786 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3375e096-321c-459b-8b6a-e085bb62872f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:29:58 compute-0 ceph-mon[74985]: osdmap e162: 3 total, 3 up, 3 in
Nov 25 16:29:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1272: 321 pgs: 321 active+clean; 372 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.7 MiB/s wr, 254 op/s
Nov 25 16:29:59 compute-0 ceph-mon[74985]: pgmap v1272: 321 pgs: 321 active+clean; 372 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.7 MiB/s wr, 254 op/s
Nov 25 16:29:59 compute-0 nova_compute[254092]: 2025-11-25 16:29:59.965 254096 DEBUG nova.compute.manager [req-1bb671e0-8f05-4045-b3a2-fdb24c991168 req-0b19d0bb-e7ac-486b-a982-9002dc51a5a5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Received event network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:29:59 compute-0 nova_compute[254092]: 2025-11-25 16:29:59.965 254096 DEBUG oslo_concurrency.lockutils [req-1bb671e0-8f05-4045-b3a2-fdb24c991168 req-0b19d0bb-e7ac-486b-a982-9002dc51a5a5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "3375e096-321c-459b-8b6a-e085bb62872f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:29:59 compute-0 nova_compute[254092]: 2025-11-25 16:29:59.966 254096 DEBUG oslo_concurrency.lockutils [req-1bb671e0-8f05-4045-b3a2-fdb24c991168 req-0b19d0bb-e7ac-486b-a982-9002dc51a5a5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:29:59 compute-0 nova_compute[254092]: 2025-11-25 16:29:59.966 254096 DEBUG oslo_concurrency.lockutils [req-1bb671e0-8f05-4045-b3a2-fdb24c991168 req-0b19d0bb-e7ac-486b-a982-9002dc51a5a5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:29:59 compute-0 nova_compute[254092]: 2025-11-25 16:29:59.966 254096 DEBUG nova.compute.manager [req-1bb671e0-8f05-4045-b3a2-fdb24c991168 req-0b19d0bb-e7ac-486b-a982-9002dc51a5a5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] No waiting events found dispatching network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:29:59 compute-0 nova_compute[254092]: 2025-11-25 16:29:59.966 254096 WARNING nova.compute.manager [req-1bb671e0-8f05-4045-b3a2-fdb24c991168 req-0b19d0bb-e7ac-486b-a982-9002dc51a5a5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Received unexpected event network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 for instance with vm_state active and task_state None.
Nov 25 16:30:00 compute-0 nova_compute[254092]: 2025-11-25 16:30:00.796 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1273: 321 pgs: 321 active+clean; 372 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.7 MiB/s wr, 214 op/s
Nov 25 16:30:00 compute-0 ceph-mon[74985]: pgmap v1273: 321 pgs: 321 active+clean; 372 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.7 MiB/s wr, 214 op/s
Nov 25 16:30:01 compute-0 nova_compute[254092]: 2025-11-25 16:30:01.027 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Updating instance_info_cache with network_info: [{"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:30:01 compute-0 nova_compute[254092]: 2025-11-25 16:30:01.049 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-3375e096-321c-459b-8b6a-e085bb62872f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:30:01 compute-0 nova_compute[254092]: 2025-11-25 16:30:01.049 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 16:30:01 compute-0 nova_compute[254092]: 2025-11-25 16:30:01.363 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:02 compute-0 ovn_controller[153477]: 2025-11-25T16:30:02Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ff:1b:ad 10.100.0.14
Nov 25 16:30:02 compute-0 ovn_controller[153477]: 2025-11-25T16:30:02Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ff:1b:ad 10.100.0.14
Nov 25 16:30:02 compute-0 nova_compute[254092]: 2025-11-25 16:30:02.456 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:02 compute-0 nova_compute[254092]: 2025-11-25 16:30:02.518 254096 INFO nova.compute.manager [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Rebuilding instance
Nov 25 16:30:02 compute-0 nova_compute[254092]: 2025-11-25 16:30:02.815 254096 DEBUG nova.objects.instance [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 3375e096-321c-459b-8b6a-e085bb62872f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:30:02 compute-0 nova_compute[254092]: 2025-11-25 16:30:02.835 254096 DEBUG nova.compute.manager [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:30:02 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:30:02 compute-0 nova_compute[254092]: 2025-11-25 16:30:02.889 254096 DEBUG nova.objects.instance [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'pci_requests' on Instance uuid 3375e096-321c-459b-8b6a-e085bb62872f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:30:02 compute-0 nova_compute[254092]: 2025-11-25 16:30:02.907 254096 DEBUG nova.objects.instance [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3375e096-321c-459b-8b6a-e085bb62872f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:30:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1274: 321 pgs: 321 active+clean; 372 MiB data, 503 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.1 MiB/s wr, 211 op/s
Nov 25 16:30:02 compute-0 nova_compute[254092]: 2025-11-25 16:30:02.916 254096 DEBUG nova.objects.instance [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'resources' on Instance uuid 3375e096-321c-459b-8b6a-e085bb62872f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:30:02 compute-0 nova_compute[254092]: 2025-11-25 16:30:02.922 254096 DEBUG nova.objects.instance [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'migration_context' on Instance uuid 3375e096-321c-459b-8b6a-e085bb62872f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:30:02 compute-0 nova_compute[254092]: 2025-11-25 16:30:02.930 254096 DEBUG nova.objects.instance [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 16:30:02 compute-0 nova_compute[254092]: 2025-11-25 16:30:02.933 254096 DEBUG nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 16:30:02 compute-0 ceph-mon[74985]: pgmap v1274: 321 pgs: 321 active+clean; 372 MiB data, 503 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.1 MiB/s wr, 211 op/s
Nov 25 16:30:03 compute-0 nova_compute[254092]: 2025-11-25 16:30:03.044 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:30:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1275: 321 pgs: 321 active+clean; 388 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.9 MiB/s wr, 194 op/s
Nov 25 16:30:04 compute-0 ceph-mon[74985]: pgmap v1275: 321 pgs: 321 active+clean; 388 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.9 MiB/s wr, 194 op/s
Nov 25 16:30:05 compute-0 nova_compute[254092]: 2025-11-25 16:30:05.576 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:05 compute-0 nova_compute[254092]: 2025-11-25 16:30:05.797 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1276: 321 pgs: 321 active+clean; 405 MiB data, 529 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 168 op/s
Nov 25 16:30:06 compute-0 ceph-mon[74985]: pgmap v1276: 321 pgs: 321 active+clean; 405 MiB data, 529 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 168 op/s
Nov 25 16:30:07 compute-0 nova_compute[254092]: 2025-11-25 16:30:07.459 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:07 compute-0 nova_compute[254092]: 2025-11-25 16:30:07.731 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:30:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1277: 321 pgs: 321 active+clean; 405 MiB data, 529 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.3 MiB/s wr, 152 op/s
Nov 25 16:30:08 compute-0 ceph-mon[74985]: pgmap v1277: 321 pgs: 321 active+clean; 405 MiB data, 529 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.3 MiB/s wr, 152 op/s
Nov 25 16:30:09 compute-0 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 25 16:30:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:30:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:30:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:30:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:30:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:30:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:30:10 compute-0 nova_compute[254092]: 2025-11-25 16:30:10.798 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1278: 321 pgs: 321 active+clean; 422 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.6 MiB/s wr, 172 op/s
Nov 25 16:30:10 compute-0 ceph-mon[74985]: pgmap v1278: 321 pgs: 321 active+clean; 422 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.6 MiB/s wr, 172 op/s
Nov 25 16:30:11 compute-0 ovn_controller[153477]: 2025-11-25T16:30:11Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:dd:a2:8e 10.100.0.12
Nov 25 16:30:11 compute-0 ovn_controller[153477]: 2025-11-25T16:30:11Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:dd:a2:8e 10.100.0.12
Nov 25 16:30:11 compute-0 nova_compute[254092]: 2025-11-25 16:30:11.207 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:11 compute-0 nova_compute[254092]: 2025-11-25 16:30:11.445 254096 DEBUG oslo_concurrency.lockutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Acquiring lock "33b19faf-57e1-463b-8b4a-b50479a0ef0f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:11 compute-0 nova_compute[254092]: 2025-11-25 16:30:11.445 254096 DEBUG oslo_concurrency.lockutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Lock "33b19faf-57e1-463b-8b4a-b50479a0ef0f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:11 compute-0 nova_compute[254092]: 2025-11-25 16:30:11.462 254096 DEBUG nova.compute.manager [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:30:11 compute-0 nova_compute[254092]: 2025-11-25 16:30:11.488 254096 DEBUG oslo_concurrency.lockutils [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "interface-df0db130-3ffa-4a60-8f7d-fb285a797631-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:11 compute-0 nova_compute[254092]: 2025-11-25 16:30:11.489 254096 DEBUG oslo_concurrency.lockutils [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "interface-df0db130-3ffa-4a60-8f7d-fb285a797631-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:11 compute-0 nova_compute[254092]: 2025-11-25 16:30:11.490 254096 DEBUG nova.objects.instance [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'flavor' on Instance uuid df0db130-3ffa-4a60-8f7d-fb285a797631 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:30:11 compute-0 nova_compute[254092]: 2025-11-25 16:30:11.566 254096 DEBUG oslo_concurrency.lockutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:11 compute-0 nova_compute[254092]: 2025-11-25 16:30:11.567 254096 DEBUG oslo_concurrency.lockutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:11 compute-0 nova_compute[254092]: 2025-11-25 16:30:11.575 254096 DEBUG nova.virt.hardware [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:30:11 compute-0 nova_compute[254092]: 2025-11-25 16:30:11.576 254096 INFO nova.compute.claims [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:30:11 compute-0 nova_compute[254092]: 2025-11-25 16:30:11.790 254096 DEBUG oslo_concurrency.processutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:30:12 compute-0 nova_compute[254092]: 2025-11-25 16:30:12.048 254096 DEBUG nova.objects.instance [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'pci_requests' on Instance uuid df0db130-3ffa-4a60-8f7d-fb285a797631 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:30:12 compute-0 nova_compute[254092]: 2025-11-25 16:30:12.061 254096 DEBUG nova.network.neutron [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:30:12 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:30:12 compute-0 nova_compute[254092]: 2025-11-25 16:30:12.228 254096 DEBUG nova.policy [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c2f73d928f424db49a62e7dff2b1ce14', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '36fd407edf16424e96041f57fccb9e2b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:30:12 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1972223262' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:30:12 compute-0 nova_compute[254092]: 2025-11-25 16:30:12.246 254096 DEBUG oslo_concurrency.processutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:30:12 compute-0 nova_compute[254092]: 2025-11-25 16:30:12.253 254096 DEBUG nova.compute.provider_tree [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:30:12 compute-0 nova_compute[254092]: 2025-11-25 16:30:12.272 254096 DEBUG nova.scheduler.client.report [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:30:12 compute-0 nova_compute[254092]: 2025-11-25 16:30:12.339 254096 DEBUG oslo_concurrency.lockutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:12 compute-0 nova_compute[254092]: 2025-11-25 16:30:12.339 254096 DEBUG nova.compute.manager [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:30:12 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1972223262' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:30:12 compute-0 nova_compute[254092]: 2025-11-25 16:30:12.438 254096 DEBUG nova.compute.manager [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:30:12 compute-0 nova_compute[254092]: 2025-11-25 16:30:12.438 254096 DEBUG nova.network.neutron [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:30:12 compute-0 nova_compute[254092]: 2025-11-25 16:30:12.461 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:12 compute-0 nova_compute[254092]: 2025-11-25 16:30:12.472 254096 INFO nova.virt.libvirt.driver [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:30:12 compute-0 nova_compute[254092]: 2025-11-25 16:30:12.494 254096 DEBUG nova.compute.manager [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:30:12 compute-0 nova_compute[254092]: 2025-11-25 16:30:12.653 254096 DEBUG nova.compute.manager [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:30:12 compute-0 nova_compute[254092]: 2025-11-25 16:30:12.654 254096 DEBUG nova.virt.libvirt.driver [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:30:12 compute-0 nova_compute[254092]: 2025-11-25 16:30:12.655 254096 INFO nova.virt.libvirt.driver [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Creating image(s)
Nov 25 16:30:12 compute-0 nova_compute[254092]: 2025-11-25 16:30:12.673 254096 DEBUG nova.storage.rbd_utils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] rbd image 33b19faf-57e1-463b-8b4a-b50479a0ef0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:30:12 compute-0 nova_compute[254092]: 2025-11-25 16:30:12.694 254096 DEBUG nova.storage.rbd_utils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] rbd image 33b19faf-57e1-463b-8b4a-b50479a0ef0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:30:12 compute-0 nova_compute[254092]: 2025-11-25 16:30:12.720 254096 DEBUG nova.storage.rbd_utils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] rbd image 33b19faf-57e1-463b-8b4a-b50479a0ef0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:30:12 compute-0 nova_compute[254092]: 2025-11-25 16:30:12.727 254096 DEBUG oslo_concurrency.processutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:30:12 compute-0 nova_compute[254092]: 2025-11-25 16:30:12.752 254096 DEBUG nova.policy [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fb8bd106d2264d719b9ebd9f83f19c5a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f2f26334db2f4e2cadc5664efd73eb67', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:30:12 compute-0 nova_compute[254092]: 2025-11-25 16:30:12.794 254096 DEBUG oslo_concurrency.processutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:30:12 compute-0 nova_compute[254092]: 2025-11-25 16:30:12.795 254096 DEBUG oslo_concurrency.lockutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:12 compute-0 nova_compute[254092]: 2025-11-25 16:30:12.796 254096 DEBUG oslo_concurrency.lockutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:12 compute-0 nova_compute[254092]: 2025-11-25 16:30:12.796 254096 DEBUG oslo_concurrency.lockutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:12 compute-0 nova_compute[254092]: 2025-11-25 16:30:12.818 254096 DEBUG nova.storage.rbd_utils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] rbd image 33b19faf-57e1-463b-8b4a-b50479a0ef0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:30:12 compute-0 nova_compute[254092]: 2025-11-25 16:30:12.821 254096 DEBUG oslo_concurrency.processutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 33b19faf-57e1-463b-8b4a-b50479a0ef0f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:30:12 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:30:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1279: 321 pgs: 321 active+clean; 433 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 4.2 MiB/s wr, 134 op/s
Nov 25 16:30:12 compute-0 nova_compute[254092]: 2025-11-25 16:30:12.975 254096 DEBUG nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 16:30:13 compute-0 nova_compute[254092]: 2025-11-25 16:30:13.243 254096 DEBUG nova.network.neutron [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Successfully created port: 09e835b8-70c9-4cb4-bbc2-63fab5f2592e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:30:13 compute-0 ceph-mon[74985]: pgmap v1279: 321 pgs: 321 active+clean; 433 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 4.2 MiB/s wr, 134 op/s
Nov 25 16:30:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:13.603 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:13.603 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:13.604 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1280: 321 pgs: 321 active+clean; 456 MiB data, 580 MiB used, 59 GiB / 60 GiB avail; 557 KiB/s rd, 5.0 MiB/s wr, 111 op/s
Nov 25 16:30:14 compute-0 nova_compute[254092]: 2025-11-25 16:30:14.971 254096 DEBUG oslo_concurrency.processutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 33b19faf-57e1-463b-8b4a-b50479a0ef0f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:30:15 compute-0 ceph-mon[74985]: pgmap v1280: 321 pgs: 321 active+clean; 456 MiB data, 580 MiB used, 59 GiB / 60 GiB avail; 557 KiB/s rd, 5.0 MiB/s wr, 111 op/s
Nov 25 16:30:15 compute-0 nova_compute[254092]: 2025-11-25 16:30:15.200 254096 DEBUG nova.storage.rbd_utils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] resizing rbd image 33b19faf-57e1-463b-8b4a-b50479a0ef0f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:30:15 compute-0 nova_compute[254092]: 2025-11-25 16:30:15.653 254096 DEBUG nova.network.neutron [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Successfully created port: fb46dd7a-52d4-44cb-b99e-81d7d653885c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:30:15 compute-0 nova_compute[254092]: 2025-11-25 16:30:15.801 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:15 compute-0 nova_compute[254092]: 2025-11-25 16:30:15.842 254096 DEBUG nova.objects.instance [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Lazy-loading 'migration_context' on Instance uuid 33b19faf-57e1-463b-8b4a-b50479a0ef0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:30:15 compute-0 nova_compute[254092]: 2025-11-25 16:30:15.862 254096 DEBUG nova.virt.libvirt.driver [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:30:15 compute-0 nova_compute[254092]: 2025-11-25 16:30:15.862 254096 DEBUG nova.virt.libvirt.driver [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Ensure instance console log exists: /var/lib/nova/instances/33b19faf-57e1-463b-8b4a-b50479a0ef0f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:30:15 compute-0 nova_compute[254092]: 2025-11-25 16:30:15.862 254096 DEBUG oslo_concurrency.lockutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:15 compute-0 nova_compute[254092]: 2025-11-25 16:30:15.863 254096 DEBUG oslo_concurrency.lockutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:15 compute-0 nova_compute[254092]: 2025-11-25 16:30:15.863 254096 DEBUG oslo_concurrency.lockutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:16 compute-0 nova_compute[254092]: 2025-11-25 16:30:16.764 254096 DEBUG oslo_concurrency.lockutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Acquiring lock "dc3c86a9-91cf-42fb-b11c-7de3305d8388" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:16 compute-0 nova_compute[254092]: 2025-11-25 16:30:16.764 254096 DEBUG oslo_concurrency.lockutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Lock "dc3c86a9-91cf-42fb-b11c-7de3305d8388" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:16 compute-0 nova_compute[254092]: 2025-11-25 16:30:16.867 254096 DEBUG nova.network.neutron [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Successfully updated port: 09e835b8-70c9-4cb4-bbc2-63fab5f2592e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:30:16 compute-0 nova_compute[254092]: 2025-11-25 16:30:16.870 254096 DEBUG nova.compute.manager [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:30:16 compute-0 nova_compute[254092]: 2025-11-25 16:30:16.904 254096 DEBUG oslo_concurrency.lockutils [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "refresh_cache-df0db130-3ffa-4a60-8f7d-fb285a797631" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:30:16 compute-0 nova_compute[254092]: 2025-11-25 16:30:16.905 254096 DEBUG oslo_concurrency.lockutils [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquired lock "refresh_cache-df0db130-3ffa-4a60-8f7d-fb285a797631" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:30:16 compute-0 nova_compute[254092]: 2025-11-25 16:30:16.905 254096 DEBUG nova.network.neutron [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:30:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1281: 321 pgs: 321 active+clean; 484 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 463 KiB/s rd, 5.0 MiB/s wr, 122 op/s
Nov 25 16:30:17 compute-0 ceph-mon[74985]: pgmap v1281: 321 pgs: 321 active+clean; 484 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 463 KiB/s rd, 5.0 MiB/s wr, 122 op/s
Nov 25 16:30:17 compute-0 nova_compute[254092]: 2025-11-25 16:30:17.023 254096 DEBUG oslo_concurrency.lockutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:17 compute-0 nova_compute[254092]: 2025-11-25 16:30:17.024 254096 DEBUG oslo_concurrency.lockutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:17 compute-0 nova_compute[254092]: 2025-11-25 16:30:17.030 254096 DEBUG nova.virt.hardware [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:30:17 compute-0 nova_compute[254092]: 2025-11-25 16:30:17.031 254096 INFO nova.compute.claims [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:30:17 compute-0 nova_compute[254092]: 2025-11-25 16:30:17.093 254096 WARNING nova.network.neutron [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d already exists in list: networks containing: ['52e7d5b9-0570-4e5c-b3da-9dfcb924b83d']. ignoring it
Nov 25 16:30:17 compute-0 nova_compute[254092]: 2025-11-25 16:30:17.296 254096 DEBUG oslo_concurrency.processutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:30:17 compute-0 kernel: tapd6146886-91 (unregistering): left promiscuous mode
Nov 25 16:30:17 compute-0 NetworkManager[48891]: <info>  [1764088217.4373] device (tapd6146886-91): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:30:17 compute-0 ovn_controller[153477]: 2025-11-25T16:30:17Z|00098|binding|INFO|Releasing lport d6146886-91a1-4d5f-9234-e1d0154b4230 from this chassis (sb_readonly=0)
Nov 25 16:30:17 compute-0 ovn_controller[153477]: 2025-11-25T16:30:17Z|00099|binding|INFO|Setting lport d6146886-91a1-4d5f-9234-e1d0154b4230 down in Southbound
Nov 25 16:30:17 compute-0 nova_compute[254092]: 2025-11-25 16:30:17.444 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:17 compute-0 ovn_controller[153477]: 2025-11-25T16:30:17Z|00100|binding|INFO|Removing iface tapd6146886-91 ovn-installed in OVS
Nov 25 16:30:17 compute-0 nova_compute[254092]: 2025-11-25 16:30:17.447 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:17 compute-0 nova_compute[254092]: 2025-11-25 16:30:17.461 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:17 compute-0 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000012.scope: Deactivated successfully.
Nov 25 16:30:17 compute-0 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000012.scope: Consumed 13.602s CPU time.
Nov 25 16:30:17 compute-0 systemd-machined[216343]: Machine qemu-27-instance-00000012 terminated.
Nov 25 16:30:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:17.527 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:a2:8e 10.100.0.12'], port_security=['fa:16:3e:dd:a2:8e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '3375e096-321c-459b-8b6a-e085bb62872f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a4c9877f37b34e7fba5f3cc9642d1a48', 'neutron:revision_number': '6', 'neutron:security_group_ids': '2633f0b7-0e94-4653-a67a-e645ec4a69fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06f31098-96f7-4b9c-ab6f-85e953b1eed6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=d6146886-91a1-4d5f-9234-e1d0154b4230) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:30:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:17.528 163338 INFO neutron.agent.ovn.metadata.agent [-] Port d6146886-91a1-4d5f-9234-e1d0154b4230 in datapath 3403825e-13ff-43e0-80c4-b59cf23ed30b unbound from our chassis
Nov 25 16:30:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:17.529 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3403825e-13ff-43e0-80c4-b59cf23ed30b
Nov 25 16:30:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:17.545 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fd0ea887-62a4-4a06-8cd7-c46065e58b20]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:17.573 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[770cc85e-8aa1-4f8c-a8b5-d911472d7f0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:17.576 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ba034cf9-5535-4ce1-96c2-197ad9baf092]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:17.602 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[bcf47656-542a-4882-bba7-06adce53572c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:17.622 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[88725e88-28a7-4904-9c0a-05c5a435ae9f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3403825e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:44:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 15, 'rx_bytes': 868, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 15, 'rx_bytes': 868, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458728, 'reachable_time': 39709, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286445, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:17.639 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[54095571-a149-4f05-865c-837b93c44418]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3403825e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458739, 'tstamp': 458739}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286446, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3403825e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458742, 'tstamp': 458742}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286446, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:17.641 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3403825e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:17 compute-0 nova_compute[254092]: 2025-11-25 16:30:17.643 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:17 compute-0 nova_compute[254092]: 2025-11-25 16:30:17.647 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:17.647 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3403825e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:17.647 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:30:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:17.648 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3403825e-10, col_values=(('external_ids', {'iface-id': 'e9e1418a-32be-4b0e-81c1-ad489d6a782b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:17.648 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:30:17 compute-0 nova_compute[254092]: 2025-11-25 16:30:17.666 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:17 compute-0 nova_compute[254092]: 2025-11-25 16:30:17.672 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:17 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:30:17 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3712337094' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:30:17 compute-0 nova_compute[254092]: 2025-11-25 16:30:17.781 254096 DEBUG oslo_concurrency.processutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:30:17 compute-0 nova_compute[254092]: 2025-11-25 16:30:17.787 254096 DEBUG nova.compute.provider_tree [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:30:17 compute-0 nova_compute[254092]: 2025-11-25 16:30:17.801 254096 DEBUG nova.scheduler.client.report [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:30:17 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:30:17 compute-0 nova_compute[254092]: 2025-11-25 16:30:17.865 254096 DEBUG oslo_concurrency.lockutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:17 compute-0 nova_compute[254092]: 2025-11-25 16:30:17.865 254096 DEBUG nova.compute.manager [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:30:17 compute-0 nova_compute[254092]: 2025-11-25 16:30:17.980 254096 DEBUG nova.compute.manager [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:30:17 compute-0 nova_compute[254092]: 2025-11-25 16:30:17.981 254096 DEBUG nova.network.neutron [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:30:18 compute-0 nova_compute[254092]: 2025-11-25 16:30:18.012 254096 INFO nova.virt.libvirt.driver [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:30:18 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3712337094' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:30:18 compute-0 nova_compute[254092]: 2025-11-25 16:30:18.046 254096 DEBUG nova.compute.manager [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:30:18 compute-0 nova_compute[254092]: 2025-11-25 16:30:18.077 254096 DEBUG nova.network.neutron [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Successfully updated port: fb46dd7a-52d4-44cb-b99e-81d7d653885c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:30:18 compute-0 nova_compute[254092]: 2025-11-25 16:30:18.170 254096 DEBUG nova.compute.manager [req-f8ce61dc-8969-4ec7-a5b6-9d0a83bf4138 req-20321c35-44fb-41da-9aa0-c48a8867e9c8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Received event network-changed-09e835b8-70c9-4cb4-bbc2-63fab5f2592e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:30:18 compute-0 nova_compute[254092]: 2025-11-25 16:30:18.170 254096 DEBUG nova.compute.manager [req-f8ce61dc-8969-4ec7-a5b6-9d0a83bf4138 req-20321c35-44fb-41da-9aa0-c48a8867e9c8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Refreshing instance network info cache due to event network-changed-09e835b8-70c9-4cb4-bbc2-63fab5f2592e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:30:18 compute-0 nova_compute[254092]: 2025-11-25 16:30:18.170 254096 DEBUG oslo_concurrency.lockutils [req-f8ce61dc-8969-4ec7-a5b6-9d0a83bf4138 req-20321c35-44fb-41da-9aa0-c48a8867e9c8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-df0db130-3ffa-4a60-8f7d-fb285a797631" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:30:18 compute-0 nova_compute[254092]: 2025-11-25 16:30:18.174 254096 DEBUG oslo_concurrency.lockutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Acquiring lock "refresh_cache-33b19faf-57e1-463b-8b4a-b50479a0ef0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:30:18 compute-0 nova_compute[254092]: 2025-11-25 16:30:18.174 254096 DEBUG oslo_concurrency.lockutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Acquired lock "refresh_cache-33b19faf-57e1-463b-8b4a-b50479a0ef0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:30:18 compute-0 nova_compute[254092]: 2025-11-25 16:30:18.174 254096 DEBUG nova.network.neutron [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:30:18 compute-0 nova_compute[254092]: 2025-11-25 16:30:18.205 254096 INFO nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Instance shutdown successfully after 15 seconds.
Nov 25 16:30:18 compute-0 nova_compute[254092]: 2025-11-25 16:30:18.210 254096 INFO nova.virt.libvirt.driver [-] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Instance destroyed successfully.
Nov 25 16:30:18 compute-0 nova_compute[254092]: 2025-11-25 16:30:18.218 254096 INFO nova.virt.libvirt.driver [-] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Instance destroyed successfully.
Nov 25 16:30:18 compute-0 nova_compute[254092]: 2025-11-25 16:30:18.219 254096 DEBUG nova.virt.libvirt.vif [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:28:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1705426121',display_name='tempest-ServersAdminTestJSON-server-1705426121',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1705426121',id=18,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:29:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a4c9877f37b34e7fba5f3cc9642d1a48',ramdisk_id='',reservation_id='r-k4yvqw87',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-2045227182',owner_user_name='tempest-ServersAdminTestJSON-2045227182-project-mem
ber'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:30:01Z,user_data=None,user_id='a8e217b742fe4a57a4ac4ffc776670fe',uuid=3375e096-321c-459b-8b6a-e085bb62872f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:30:18 compute-0 nova_compute[254092]: 2025-11-25 16:30:18.219 254096 DEBUG nova.network.os_vif_util [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converting VIF {"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:30:18 compute-0 nova_compute[254092]: 2025-11-25 16:30:18.220 254096 DEBUG nova.network.os_vif_util [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:dd:a2:8e,bridge_name='br-int',has_traffic_filtering=True,id=d6146886-91a1-4d5f-9234-e1d0154b4230,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6146886-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:30:18 compute-0 nova_compute[254092]: 2025-11-25 16:30:18.220 254096 DEBUG os_vif [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:dd:a2:8e,bridge_name='br-int',has_traffic_filtering=True,id=d6146886-91a1-4d5f-9234-e1d0154b4230,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6146886-91') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:30:18 compute-0 nova_compute[254092]: 2025-11-25 16:30:18.221 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:18 compute-0 nova_compute[254092]: 2025-11-25 16:30:18.221 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd6146886-91, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:18 compute-0 nova_compute[254092]: 2025-11-25 16:30:18.223 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:18 compute-0 nova_compute[254092]: 2025-11-25 16:30:18.224 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:18 compute-0 nova_compute[254092]: 2025-11-25 16:30:18.226 254096 INFO os_vif [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:dd:a2:8e,bridge_name='br-int',has_traffic_filtering=True,id=d6146886-91a1-4d5f-9234-e1d0154b4230,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6146886-91')
Nov 25 16:30:18 compute-0 nova_compute[254092]: 2025-11-25 16:30:18.246 254096 DEBUG nova.policy [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '01171e7ab3a4447497eacf11bf89be63', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4e1bcf74bb1148a3a0f388525c96c919', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:30:18 compute-0 nova_compute[254092]: 2025-11-25 16:30:18.287 254096 DEBUG nova.compute.manager [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:30:18 compute-0 nova_compute[254092]: 2025-11-25 16:30:18.288 254096 DEBUG nova.virt.libvirt.driver [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:30:18 compute-0 nova_compute[254092]: 2025-11-25 16:30:18.288 254096 INFO nova.virt.libvirt.driver [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Creating image(s)
Nov 25 16:30:18 compute-0 nova_compute[254092]: 2025-11-25 16:30:18.307 254096 DEBUG nova.storage.rbd_utils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] rbd image dc3c86a9-91cf-42fb-b11c-7de3305d8388_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:30:18 compute-0 nova_compute[254092]: 2025-11-25 16:30:18.329 254096 DEBUG nova.storage.rbd_utils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] rbd image dc3c86a9-91cf-42fb-b11c-7de3305d8388_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:30:18 compute-0 nova_compute[254092]: 2025-11-25 16:30:18.348 254096 DEBUG nova.storage.rbd_utils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] rbd image dc3c86a9-91cf-42fb-b11c-7de3305d8388_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:30:18 compute-0 nova_compute[254092]: 2025-11-25 16:30:18.352 254096 DEBUG oslo_concurrency.processutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:30:18 compute-0 nova_compute[254092]: 2025-11-25 16:30:18.414 254096 DEBUG oslo_concurrency.processutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:30:18 compute-0 nova_compute[254092]: 2025-11-25 16:30:18.415 254096 DEBUG oslo_concurrency.lockutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:18 compute-0 nova_compute[254092]: 2025-11-25 16:30:18.415 254096 DEBUG oslo_concurrency.lockutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:18 compute-0 nova_compute[254092]: 2025-11-25 16:30:18.415 254096 DEBUG oslo_concurrency.lockutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:18 compute-0 nova_compute[254092]: 2025-11-25 16:30:18.432 254096 DEBUG nova.storage.rbd_utils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] rbd image dc3c86a9-91cf-42fb-b11c-7de3305d8388_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:30:18 compute-0 nova_compute[254092]: 2025-11-25 16:30:18.436 254096 DEBUG oslo_concurrency.processutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 dc3c86a9-91cf-42fb-b11c-7de3305d8388_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:30:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1282: 321 pgs: 321 active+clean; 484 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 3.9 MiB/s wr, 83 op/s
Nov 25 16:30:18 compute-0 nova_compute[254092]: 2025-11-25 16:30:18.971 254096 DEBUG nova.network.neutron [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:30:18 compute-0 nova_compute[254092]: 2025-11-25 16:30:18.985 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:18.986 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:30:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:18.987 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 16:30:19 compute-0 ceph-mon[74985]: pgmap v1282: 321 pgs: 321 active+clean; 484 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 3.9 MiB/s wr, 83 op/s
Nov 25 16:30:19 compute-0 nova_compute[254092]: 2025-11-25 16:30:19.167 254096 DEBUG nova.network.neutron [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Successfully created port: 8daa55ae-6950-4c2f-8121-ce02930ab1d9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:30:19 compute-0 nova_compute[254092]: 2025-11-25 16:30:19.534 254096 DEBUG oslo_concurrency.processutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 dc3c86a9-91cf-42fb-b11c-7de3305d8388_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:30:19 compute-0 nova_compute[254092]: 2025-11-25 16:30:19.602 254096 DEBUG nova.storage.rbd_utils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] resizing rbd image dc3c86a9-91cf-42fb-b11c-7de3305d8388_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:30:20 compute-0 nova_compute[254092]: 2025-11-25 16:30:20.236 254096 DEBUG nova.network.neutron [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Updating instance_info_cache with network_info: [{"id": "fb46dd7a-52d4-44cb-b99e-81d7d653885c", "address": "fa:16:3e:d4:54:45", "network": {"id": "6ab64ae8-b8fa-4795-a243-9ebe45233e37", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1540086437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f2f26334db2f4e2cadc5664efd73eb67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb46dd7a-52", "ovs_interfaceid": "fb46dd7a-52d4-44cb-b99e-81d7d653885c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:30:20 compute-0 nova_compute[254092]: 2025-11-25 16:30:20.316 254096 DEBUG oslo_concurrency.lockutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Releasing lock "refresh_cache-33b19faf-57e1-463b-8b4a-b50479a0ef0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:30:20 compute-0 nova_compute[254092]: 2025-11-25 16:30:20.317 254096 DEBUG nova.compute.manager [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Instance network_info: |[{"id": "fb46dd7a-52d4-44cb-b99e-81d7d653885c", "address": "fa:16:3e:d4:54:45", "network": {"id": "6ab64ae8-b8fa-4795-a243-9ebe45233e37", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1540086437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f2f26334db2f4e2cadc5664efd73eb67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb46dd7a-52", "ovs_interfaceid": "fb46dd7a-52d4-44cb-b99e-81d7d653885c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:30:20 compute-0 nova_compute[254092]: 2025-11-25 16:30:20.319 254096 DEBUG nova.virt.libvirt.driver [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Start _get_guest_xml network_info=[{"id": "fb46dd7a-52d4-44cb-b99e-81d7d653885c", "address": "fa:16:3e:d4:54:45", "network": {"id": "6ab64ae8-b8fa-4795-a243-9ebe45233e37", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1540086437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f2f26334db2f4e2cadc5664efd73eb67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb46dd7a-52", "ovs_interfaceid": "fb46dd7a-52d4-44cb-b99e-81d7d653885c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:30:20 compute-0 nova_compute[254092]: 2025-11-25 16:30:20.323 254096 WARNING nova.virt.libvirt.driver [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:30:20 compute-0 nova_compute[254092]: 2025-11-25 16:30:20.328 254096 DEBUG nova.virt.libvirt.host [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:30:20 compute-0 nova_compute[254092]: 2025-11-25 16:30:20.329 254096 DEBUG nova.virt.libvirt.host [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:30:20 compute-0 nova_compute[254092]: 2025-11-25 16:30:20.333 254096 DEBUG nova.virt.libvirt.host [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:30:20 compute-0 nova_compute[254092]: 2025-11-25 16:30:20.334 254096 DEBUG nova.virt.libvirt.host [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:30:20 compute-0 nova_compute[254092]: 2025-11-25 16:30:20.334 254096 DEBUG nova.virt.libvirt.driver [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:30:20 compute-0 nova_compute[254092]: 2025-11-25 16:30:20.335 254096 DEBUG nova.virt.hardware [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:30:20 compute-0 nova_compute[254092]: 2025-11-25 16:30:20.335 254096 DEBUG nova.virt.hardware [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:30:20 compute-0 nova_compute[254092]: 2025-11-25 16:30:20.336 254096 DEBUG nova.virt.hardware [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:30:20 compute-0 nova_compute[254092]: 2025-11-25 16:30:20.336 254096 DEBUG nova.virt.hardware [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:30:20 compute-0 nova_compute[254092]: 2025-11-25 16:30:20.336 254096 DEBUG nova.virt.hardware [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:30:20 compute-0 nova_compute[254092]: 2025-11-25 16:30:20.336 254096 DEBUG nova.virt.hardware [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:30:20 compute-0 nova_compute[254092]: 2025-11-25 16:30:20.337 254096 DEBUG nova.virt.hardware [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:30:20 compute-0 nova_compute[254092]: 2025-11-25 16:30:20.337 254096 DEBUG nova.virt.hardware [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:30:20 compute-0 nova_compute[254092]: 2025-11-25 16:30:20.337 254096 DEBUG nova.virt.hardware [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:30:20 compute-0 nova_compute[254092]: 2025-11-25 16:30:20.337 254096 DEBUG nova.virt.hardware [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:30:20 compute-0 nova_compute[254092]: 2025-11-25 16:30:20.338 254096 DEBUG nova.virt.hardware [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:30:20 compute-0 nova_compute[254092]: 2025-11-25 16:30:20.341 254096 DEBUG oslo_concurrency.processutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:30:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:30:20 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1615290058' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:30:20 compute-0 nova_compute[254092]: 2025-11-25 16:30:20.792 254096 DEBUG oslo_concurrency.processutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:30:20 compute-0 nova_compute[254092]: 2025-11-25 16:30:20.812 254096 DEBUG nova.storage.rbd_utils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] rbd image 33b19faf-57e1-463b-8b4a-b50479a0ef0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:30:20 compute-0 nova_compute[254092]: 2025-11-25 16:30:20.815 254096 DEBUG oslo_concurrency.processutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:30:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1283: 321 pgs: 321 active+clean; 505 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 353 KiB/s rd, 5.3 MiB/s wr, 109 op/s
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.047 254096 DEBUG nova.network.neutron [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Updating instance_info_cache with network_info: [{"id": "02104fc6-3780-400d-a6c2-577082384680", "address": "fa:16:3e:ff:1b:ad", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02104fc6-37", "ovs_interfaceid": "02104fc6-3780-400d-a6c2-577082384680", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "address": "fa:16:3e:79:4a:54", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09e835b8-70", "ovs_interfaceid": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.068 254096 DEBUG nova.network.neutron [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Successfully updated port: 8daa55ae-6950-4c2f-8121-ce02930ab1d9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.136 254096 DEBUG nova.compute.manager [req-3fa44981-6406-496c-b78e-bc82d6bbc5a2 req-7eb98d36-ce99-4f38-95ba-bfc4c9f3e8f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Received event network-changed-fb46dd7a-52d4-44cb-b99e-81d7d653885c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.136 254096 DEBUG nova.compute.manager [req-3fa44981-6406-496c-b78e-bc82d6bbc5a2 req-7eb98d36-ce99-4f38-95ba-bfc4c9f3e8f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Refreshing instance network info cache due to event network-changed-fb46dd7a-52d4-44cb-b99e-81d7d653885c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.137 254096 DEBUG oslo_concurrency.lockutils [req-3fa44981-6406-496c-b78e-bc82d6bbc5a2 req-7eb98d36-ce99-4f38-95ba-bfc4c9f3e8f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-33b19faf-57e1-463b-8b4a-b50479a0ef0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.137 254096 DEBUG oslo_concurrency.lockutils [req-3fa44981-6406-496c-b78e-bc82d6bbc5a2 req-7eb98d36-ce99-4f38-95ba-bfc4c9f3e8f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-33b19faf-57e1-463b-8b4a-b50479a0ef0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.137 254096 DEBUG nova.network.neutron [req-3fa44981-6406-496c-b78e-bc82d6bbc5a2 req-7eb98d36-ce99-4f38-95ba-bfc4c9f3e8f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Refreshing network info cache for port fb46dd7a-52d4-44cb-b99e-81d7d653885c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:30:21 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1615290058' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.339 254096 DEBUG oslo_concurrency.lockutils [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Releasing lock "refresh_cache-df0db130-3ffa-4a60-8f7d-fb285a797631" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.340 254096 DEBUG oslo_concurrency.lockutils [req-f8ce61dc-8969-4ec7-a5b6-9d0a83bf4138 req-20321c35-44fb-41da-9aa0-c48a8867e9c8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-df0db130-3ffa-4a60-8f7d-fb285a797631" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.340 254096 DEBUG nova.network.neutron [req-f8ce61dc-8969-4ec7-a5b6-9d0a83bf4138 req-20321c35-44fb-41da-9aa0-c48a8867e9c8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Refreshing network info cache for port 09e835b8-70c9-4cb4-bbc2-63fab5f2592e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.343 254096 DEBUG nova.virt.libvirt.vif [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:29:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-175151571',display_name='tempest-AttachInterfacesTestJSON-server-175151571',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-175151571',id=24,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNNpuTPFaMmG67SHL6Vnzpxg++tbzc+w8Hx9mEMHBp+Ppk67spJlgihZsy04EvrT9EPPCFaJqqycyDW1OLzG+Lc4OXw1D0urNXa3GNkPoKTEzwF+kapepgvV/e/LIpx3gw==',key_name='tempest-keypair-2002144633',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:29:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-wvdcod6f',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:29:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=df0db130-3ffa-4a60-8f7d-fb285a797631,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "address": "fa:16:3e:79:4a:54", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09e835b8-70", "ovs_interfaceid": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.344 254096 DEBUG nova.network.os_vif_util [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "address": "fa:16:3e:79:4a:54", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09e835b8-70", "ovs_interfaceid": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.344 254096 DEBUG nova.network.os_vif_util [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:79:4a:54,bridge_name='br-int',has_traffic_filtering=True,id=09e835b8-70c9-4cb4-bbc2-63fab5f2592e,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09e835b8-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.345 254096 DEBUG os_vif [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:79:4a:54,bridge_name='br-int',has_traffic_filtering=True,id=09e835b8-70c9-4cb4-bbc2-63fab5f2592e,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09e835b8-70') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.345 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.345 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.346 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.348 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.349 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap09e835b8-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.349 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap09e835b8-70, col_values=(('external_ids', {'iface-id': '09e835b8-70c9-4cb4-bbc2-63fab5f2592e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:79:4a:54', 'vm-uuid': 'df0db130-3ffa-4a60-8f7d-fb285a797631'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:21 compute-0 NetworkManager[48891]: <info>  [1764088221.3522] manager: (tap09e835b8-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.351 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.353 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.354 254096 DEBUG oslo_concurrency.lockutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Acquiring lock "refresh_cache-dc3c86a9-91cf-42fb-b11c-7de3305d8388" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.354 254096 DEBUG oslo_concurrency.lockutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Acquired lock "refresh_cache-dc3c86a9-91cf-42fb-b11c-7de3305d8388" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.355 254096 DEBUG nova.network.neutron [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.359 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.361 254096 INFO os_vif [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:79:4a:54,bridge_name='br-int',has_traffic_filtering=True,id=09e835b8-70c9-4cb4-bbc2-63fab5f2592e,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09e835b8-70')
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.361 254096 DEBUG nova.virt.libvirt.vif [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:29:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-175151571',display_name='tempest-AttachInterfacesTestJSON-server-175151571',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-175151571',id=24,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNNpuTPFaMmG67SHL6Vnzpxg++tbzc+w8Hx9mEMHBp+Ppk67spJlgihZsy04EvrT9EPPCFaJqqycyDW1OLzG+Lc4OXw1D0urNXa3GNkPoKTEzwF+kapepgvV/e/LIpx3gw==',key_name='tempest-keypair-2002144633',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:29:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-wvdcod6f',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:29:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=df0db130-3ffa-4a60-8f7d-fb285a797631,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "address": "fa:16:3e:79:4a:54", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09e835b8-70", "ovs_interfaceid": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.362 254096 DEBUG nova.network.os_vif_util [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "address": "fa:16:3e:79:4a:54", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09e835b8-70", "ovs_interfaceid": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.362 254096 DEBUG nova.network.os_vif_util [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:79:4a:54,bridge_name='br-int',has_traffic_filtering=True,id=09e835b8-70c9-4cb4-bbc2-63fab5f2592e,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09e835b8-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.365 254096 DEBUG nova.virt.libvirt.guest [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] attach device xml: <interface type="ethernet">
Nov 25 16:30:21 compute-0 nova_compute[254092]:   <mac address="fa:16:3e:79:4a:54"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:   <model type="virtio"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:   <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:   <mtu size="1442"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:   <target dev="tap09e835b8-70"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]: </interface>
Nov 25 16:30:21 compute-0 nova_compute[254092]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 25 16:30:21 compute-0 kernel: tap09e835b8-70: entered promiscuous mode
Nov 25 16:30:21 compute-0 NetworkManager[48891]: <info>  [1764088221.3769] manager: (tap09e835b8-70): new Tun device (/org/freedesktop/NetworkManager/Devices/63)
Nov 25 16:30:21 compute-0 ovn_controller[153477]: 2025-11-25T16:30:21Z|00101|binding|INFO|Claiming lport 09e835b8-70c9-4cb4-bbc2-63fab5f2592e for this chassis.
Nov 25 16:30:21 compute-0 ovn_controller[153477]: 2025-11-25T16:30:21Z|00102|binding|INFO|09e835b8-70c9-4cb4-bbc2-63fab5f2592e: Claiming fa:16:3e:79:4a:54 10.100.0.13
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.378 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:21 compute-0 ovn_controller[153477]: 2025-11-25T16:30:21Z|00103|binding|INFO|Setting lport 09e835b8-70c9-4cb4-bbc2-63fab5f2592e ovn-installed in OVS
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.401 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.403 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:21 compute-0 systemd-udevd[286693]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:30:21 compute-0 NetworkManager[48891]: <info>  [1764088221.4177] device (tap09e835b8-70): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:30:21 compute-0 NetworkManager[48891]: <info>  [1764088221.4189] device (tap09e835b8-70): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:30:21 compute-0 ovn_controller[153477]: 2025-11-25T16:30:21Z|00104|binding|INFO|Setting lport 09e835b8-70c9-4cb4-bbc2-63fab5f2592e up in Southbound
Nov 25 16:30:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:21.460 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:79:4a:54 10.100.0.13'], port_security=['fa:16:3e:79:4a:54 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'df0db130-3ffa-4a60-8f7d-fb285a797631', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36fd407edf16424e96041f57fccb9e2b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4d6ed831-26b5-4c46-a07a-309ffadbe01b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a0fe150-4b32-4abf-ad06-055b2d97facb, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=09e835b8-70c9-4cb4-bbc2-63fab5f2592e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:30:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:21.461 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 09e835b8-70c9-4cb4-bbc2-63fab5f2592e in datapath 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d bound to our chassis
Nov 25 16:30:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:21.464 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d
Nov 25 16:30:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:21.479 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b6d45b0c-6e34-493d-a692-99c84c38950d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:30:21 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2791889168' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:30:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:21.506 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[65f23f6d-1ce2-495f-9d7b-4cad8d2ecfe1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.508 254096 DEBUG oslo_concurrency.processutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.693s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.509 254096 DEBUG nova.virt.libvirt.vif [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:30:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1375199124',display_name='tempest-ServersTestManualDisk-server-1375199124',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1375199124',id=25,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCOQq+JT40N5kSAAkZKtTY8+kwc4Tq2+j0vXcLZMu4KKRGWjKEsrOB7QpF/UTscMrUzfK+p97q+eBa8XrywfkAV6Mo0KdjURR0zReL+ABXznVVDaiCZTtZ5HErawYUq7Fw==',key_name='tempest-keypair-637822798',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f2f26334db2f4e2cadc5664efd73eb67',ramdisk_id='',reservation_id='r-d0txwl4i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-420094767',owner_user_name='tempest-ServersTestManualDisk-420094767-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:30:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='fb8bd106d2264d719b9ebd9f83f19c5a',uuid=33b19faf-57e1-463b-8b4a-b50479a0ef0f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fb46dd7a-52d4-44cb-b99e-81d7d653885c", "address": "fa:16:3e:d4:54:45", "network": {"id": "6ab64ae8-b8fa-4795-a243-9ebe45233e37", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1540086437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f2f26334db2f4e2cadc5664efd73eb67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb46dd7a-52", "ovs_interfaceid": "fb46dd7a-52d4-44cb-b99e-81d7d653885c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.509 254096 DEBUG nova.network.os_vif_util [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Converting VIF {"id": "fb46dd7a-52d4-44cb-b99e-81d7d653885c", "address": "fa:16:3e:d4:54:45", "network": {"id": "6ab64ae8-b8fa-4795-a243-9ebe45233e37", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1540086437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f2f26334db2f4e2cadc5664efd73eb67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb46dd7a-52", "ovs_interfaceid": "fb46dd7a-52d4-44cb-b99e-81d7d653885c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:30:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:21.509 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[b73c8a6f-bd86-4c04-b00b-4e83d3a56871]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.509 254096 DEBUG nova.network.os_vif_util [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d4:54:45,bridge_name='br-int',has_traffic_filtering=True,id=fb46dd7a-52d4-44cb-b99e-81d7d653885c,network=Network(6ab64ae8-b8fa-4795-a243-9ebe45233e37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb46dd7a-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.510 254096 DEBUG nova.objects.instance [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Lazy-loading 'pci_devices' on Instance uuid 33b19faf-57e1-463b-8b4a-b50479a0ef0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.526 254096 DEBUG nova.virt.libvirt.driver [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:30:21 compute-0 nova_compute[254092]:   <uuid>33b19faf-57e1-463b-8b4a-b50479a0ef0f</uuid>
Nov 25 16:30:21 compute-0 nova_compute[254092]:   <name>instance-00000019</name>
Nov 25 16:30:21 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:30:21 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:30:21 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:30:21 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:       <nova:name>tempest-ServersTestManualDisk-server-1375199124</nova:name>
Nov 25 16:30:21 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:30:20</nova:creationTime>
Nov 25 16:30:21 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:30:21 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:30:21 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:30:21 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:30:21 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:30:21 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:30:21 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:30:21 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:30:21 compute-0 nova_compute[254092]:         <nova:user uuid="fb8bd106d2264d719b9ebd9f83f19c5a">tempest-ServersTestManualDisk-420094767-project-member</nova:user>
Nov 25 16:30:21 compute-0 nova_compute[254092]:         <nova:project uuid="f2f26334db2f4e2cadc5664efd73eb67">tempest-ServersTestManualDisk-420094767</nova:project>
Nov 25 16:30:21 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:30:21 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:30:21 compute-0 nova_compute[254092]:         <nova:port uuid="fb46dd7a-52d4-44cb-b99e-81d7d653885c">
Nov 25 16:30:21 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:30:21 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:30:21 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:30:21 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <system>
Nov 25 16:30:21 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:30:21 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:30:21 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:30:21 compute-0 nova_compute[254092]:       <entry name="serial">33b19faf-57e1-463b-8b4a-b50479a0ef0f</entry>
Nov 25 16:30:21 compute-0 nova_compute[254092]:       <entry name="uuid">33b19faf-57e1-463b-8b4a-b50479a0ef0f</entry>
Nov 25 16:30:21 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     </system>
Nov 25 16:30:21 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:30:21 compute-0 nova_compute[254092]:   <os>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:   </os>
Nov 25 16:30:21 compute-0 nova_compute[254092]:   <features>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:   </features>
Nov 25 16:30:21 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:30:21 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:30:21 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:30:21 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/33b19faf-57e1-463b-8b4a-b50479a0ef0f_disk">
Nov 25 16:30:21 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:       </source>
Nov 25 16:30:21 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:30:21 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:30:21 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:30:21 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/33b19faf-57e1-463b-8b4a-b50479a0ef0f_disk.config">
Nov 25 16:30:21 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:       </source>
Nov 25 16:30:21 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:30:21 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:30:21 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:30:21 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:d4:54:45"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:       <target dev="tapfb46dd7a-52"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:30:21 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/33b19faf-57e1-463b-8b4a-b50479a0ef0f/console.log" append="off"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <video>
Nov 25 16:30:21 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     </video>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:30:21 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:30:21 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:30:21 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:30:21 compute-0 nova_compute[254092]: </domain>
Nov 25 16:30:21 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.527 254096 DEBUG nova.compute.manager [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Preparing to wait for external event network-vif-plugged-fb46dd7a-52d4-44cb-b99e-81d7d653885c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.527 254096 DEBUG oslo_concurrency.lockutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Acquiring lock "33b19faf-57e1-463b-8b4a-b50479a0ef0f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.527 254096 DEBUG oslo_concurrency.lockutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Lock "33b19faf-57e1-463b-8b4a-b50479a0ef0f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.527 254096 DEBUG oslo_concurrency.lockutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Lock "33b19faf-57e1-463b-8b4a-b50479a0ef0f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.528 254096 DEBUG nova.virt.libvirt.vif [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:30:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1375199124',display_name='tempest-ServersTestManualDisk-server-1375199124',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1375199124',id=25,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCOQq+JT40N5kSAAkZKtTY8+kwc4Tq2+j0vXcLZMu4KKRGWjKEsrOB7QpF/UTscMrUzfK+p97q+eBa8XrywfkAV6Mo0KdjURR0zReL+ABXznVVDaiCZTtZ5HErawYUq7Fw==',key_name='tempest-keypair-637822798',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f2f26334db2f4e2cadc5664efd73eb67',ramdisk_id='',reservation_id='r-d0txwl4i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-420094767',owner_user_name='tempest-ServersTestManualDisk-420094767-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:30:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='fb8bd106d2264d719b9ebd9f83f19c5a',uuid=33b19faf-57e1-463b-8b4a-b50479a0ef0f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fb46dd7a-52d4-44cb-b99e-81d7d653885c", "address": "fa:16:3e:d4:54:45", "network": {"id": "6ab64ae8-b8fa-4795-a243-9ebe45233e37", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1540086437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f2f26334db2f4e2cadc5664efd73eb67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb46dd7a-52", "ovs_interfaceid": "fb46dd7a-52d4-44cb-b99e-81d7d653885c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.528 254096 DEBUG nova.network.os_vif_util [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Converting VIF {"id": "fb46dd7a-52d4-44cb-b99e-81d7d653885c", "address": "fa:16:3e:d4:54:45", "network": {"id": "6ab64ae8-b8fa-4795-a243-9ebe45233e37", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1540086437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f2f26334db2f4e2cadc5664efd73eb67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb46dd7a-52", "ovs_interfaceid": "fb46dd7a-52d4-44cb-b99e-81d7d653885c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.529 254096 DEBUG nova.network.os_vif_util [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d4:54:45,bridge_name='br-int',has_traffic_filtering=True,id=fb46dd7a-52d4-44cb-b99e-81d7d653885c,network=Network(6ab64ae8-b8fa-4795-a243-9ebe45233e37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb46dd7a-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.529 254096 DEBUG os_vif [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:54:45,bridge_name='br-int',has_traffic_filtering=True,id=fb46dd7a-52d4-44cb-b99e-81d7d653885c,network=Network(6ab64ae8-b8fa-4795-a243-9ebe45233e37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb46dd7a-52') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.529 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.529 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.530 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.531 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.532 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfb46dd7a-52, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.532 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfb46dd7a-52, col_values=(('external_ids', {'iface-id': 'fb46dd7a-52d4-44cb-b99e-81d7d653885c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d4:54:45', 'vm-uuid': '33b19faf-57e1-463b-8b4a-b50479a0ef0f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.533 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:21 compute-0 NetworkManager[48891]: <info>  [1764088221.5341] manager: (tapfb46dd7a-52): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.535 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:30:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:21.536 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[85fb8c43-5917-415b-bebe-ce02d4dbca9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.537 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.538 254096 INFO os_vif [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:54:45,bridge_name='br-int',has_traffic_filtering=True,id=fb46dd7a-52d4-44cb-b99e-81d7d653885c,network=Network(6ab64ae8-b8fa-4795-a243-9ebe45233e37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb46dd7a-52')
Nov 25 16:30:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:21.553 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e65be2b1-82eb-457e-a64e-f08997b42b8a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52e7d5b9-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:97:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464508, 'reachable_time': 42591, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286703, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:21.569 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[070eaee5-344b-43f4-8f98-6094531cf4a1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464522, 'tstamp': 464522}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286705, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464526, 'tstamp': 464526}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286705, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:21.570 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52e7d5b9-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.571 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.574 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:21.574 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52e7d5b9-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:21.575 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:30:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:21.575 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap52e7d5b9-00, col_values=(('external_ids', {'iface-id': 'af5d2618-f326-4aaf-9395-f47548ac108b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:21.575 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.650 254096 DEBUG nova.virt.libvirt.driver [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.650 254096 DEBUG nova.virt.libvirt.driver [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.651 254096 DEBUG nova.virt.libvirt.driver [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No VIF found with MAC fa:16:3e:ff:1b:ad, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.651 254096 DEBUG nova.virt.libvirt.driver [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No VIF found with MAC fa:16:3e:79:4a:54, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.695 254096 DEBUG nova.virt.libvirt.guest [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:30:21 compute-0 nova_compute[254092]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:   <nova:name>tempest-AttachInterfacesTestJSON-server-175151571</nova:name>
Nov 25 16:30:21 compute-0 nova_compute[254092]:   <nova:creationTime>2025-11-25 16:30:21</nova:creationTime>
Nov 25 16:30:21 compute-0 nova_compute[254092]:   <nova:flavor name="m1.nano">
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <nova:memory>128</nova:memory>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <nova:disk>1</nova:disk>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <nova:swap>0</nova:swap>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <nova:vcpus>1</nova:vcpus>
Nov 25 16:30:21 compute-0 nova_compute[254092]:   </nova:flavor>
Nov 25 16:30:21 compute-0 nova_compute[254092]:   <nova:owner>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 16:30:21 compute-0 nova_compute[254092]:   </nova:owner>
Nov 25 16:30:21 compute-0 nova_compute[254092]:   <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:   <nova:ports>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <nova:port uuid="02104fc6-3780-400d-a6c2-577082384680">
Nov 25 16:30:21 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     <nova:port uuid="09e835b8-70c9-4cb4-bbc2-63fab5f2592e">
Nov 25 16:30:21 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 16:30:21 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:30:21 compute-0 nova_compute[254092]:   </nova:ports>
Nov 25 16:30:21 compute-0 nova_compute[254092]: </nova:instance>
Nov 25 16:30:21 compute-0 nova_compute[254092]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.727 254096 DEBUG oslo_concurrency.lockutils [None req-07452a6c-5fb1-407b-b3fa-55e14a412297 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "interface-df0db130-3ffa-4a60-8f7d-fb285a797631-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 10.238s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.731 254096 DEBUG nova.virt.libvirt.driver [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.731 254096 DEBUG nova.virt.libvirt.driver [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.731 254096 DEBUG nova.virt.libvirt.driver [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] No VIF found with MAC fa:16:3e:d4:54:45, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.732 254096 INFO nova.virt.libvirt.driver [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Using config drive
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.749 254096 DEBUG nova.storage.rbd_utils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] rbd image 33b19faf-57e1-463b-8b4a-b50479a0ef0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.800 254096 DEBUG nova.objects.instance [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Lazy-loading 'migration_context' on Instance uuid dc3c86a9-91cf-42fb-b11c-7de3305d8388 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.811 254096 DEBUG nova.virt.libvirt.driver [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.812 254096 DEBUG nova.virt.libvirt.driver [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Ensure instance console log exists: /var/lib/nova/instances/dc3c86a9-91cf-42fb-b11c-7de3305d8388/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.812 254096 DEBUG oslo_concurrency.lockutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.812 254096 DEBUG oslo_concurrency.lockutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:21 compute-0 nova_compute[254092]: 2025-11-25 16:30:21.812 254096 DEBUG oslo_concurrency.lockutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:22 compute-0 nova_compute[254092]: 2025-11-25 16:30:22.029 254096 DEBUG nova.network.neutron [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:30:22 compute-0 nova_compute[254092]: 2025-11-25 16:30:22.213 254096 DEBUG nova.compute.manager [req-9de4b88b-1dcd-4d0e-a4ac-9298b4747ce6 req-744a2db1-addb-4f96-99af-9b4e2b2646f4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Received event network-changed-8daa55ae-6950-4c2f-8121-ce02930ab1d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:30:22 compute-0 nova_compute[254092]: 2025-11-25 16:30:22.213 254096 DEBUG nova.compute.manager [req-9de4b88b-1dcd-4d0e-a4ac-9298b4747ce6 req-744a2db1-addb-4f96-99af-9b4e2b2646f4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Refreshing instance network info cache due to event network-changed-8daa55ae-6950-4c2f-8121-ce02930ab1d9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:30:22 compute-0 nova_compute[254092]: 2025-11-25 16:30:22.213 254096 DEBUG oslo_concurrency.lockutils [req-9de4b88b-1dcd-4d0e-a4ac-9298b4747ce6 req-744a2db1-addb-4f96-99af-9b4e2b2646f4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-dc3c86a9-91cf-42fb-b11c-7de3305d8388" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:30:22 compute-0 nova_compute[254092]: 2025-11-25 16:30:22.465 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:22 compute-0 ceph-mon[74985]: pgmap v1283: 321 pgs: 321 active+clean; 505 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 353 KiB/s rd, 5.3 MiB/s wr, 109 op/s
Nov 25 16:30:22 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2791889168' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:30:22 compute-0 nova_compute[254092]: 2025-11-25 16:30:22.614 254096 INFO nova.virt.libvirt.driver [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Creating config drive at /var/lib/nova/instances/33b19faf-57e1-463b-8b4a-b50479a0ef0f/disk.config
Nov 25 16:30:22 compute-0 nova_compute[254092]: 2025-11-25 16:30:22.621 254096 DEBUG oslo_concurrency.processutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/33b19faf-57e1-463b-8b4a-b50479a0ef0f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmpoyfu3z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:30:22 compute-0 nova_compute[254092]: 2025-11-25 16:30:22.759 254096 DEBUG oslo_concurrency.processutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/33b19faf-57e1-463b-8b4a-b50479a0ef0f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmpoyfu3z" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:30:22 compute-0 nova_compute[254092]: 2025-11-25 16:30:22.833 254096 DEBUG nova.storage.rbd_utils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] rbd image 33b19faf-57e1-463b-8b4a-b50479a0ef0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:30:22 compute-0 nova_compute[254092]: 2025-11-25 16:30:22.836 254096 DEBUG oslo_concurrency.processutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/33b19faf-57e1-463b-8b4a-b50479a0ef0f/disk.config 33b19faf-57e1-463b-8b4a-b50479a0ef0f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:30:22 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:30:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1284: 321 pgs: 321 active+clean; 488 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 167 KiB/s rd, 4.3 MiB/s wr, 100 op/s
Nov 25 16:30:23 compute-0 ovn_controller[153477]: 2025-11-25T16:30:23Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:79:4a:54 10.100.0.13
Nov 25 16:30:23 compute-0 ovn_controller[153477]: 2025-11-25T16:30:23Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:79:4a:54 10.100.0.13
Nov 25 16:30:23 compute-0 nova_compute[254092]: 2025-11-25 16:30:23.624 254096 DEBUG nova.network.neutron [req-3fa44981-6406-496c-b78e-bc82d6bbc5a2 req-7eb98d36-ce99-4f38-95ba-bfc4c9f3e8f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Updated VIF entry in instance network info cache for port fb46dd7a-52d4-44cb-b99e-81d7d653885c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:30:23 compute-0 nova_compute[254092]: 2025-11-25 16:30:23.625 254096 DEBUG nova.network.neutron [req-3fa44981-6406-496c-b78e-bc82d6bbc5a2 req-7eb98d36-ce99-4f38-95ba-bfc4c9f3e8f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Updating instance_info_cache with network_info: [{"id": "fb46dd7a-52d4-44cb-b99e-81d7d653885c", "address": "fa:16:3e:d4:54:45", "network": {"id": "6ab64ae8-b8fa-4795-a243-9ebe45233e37", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1540086437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f2f26334db2f4e2cadc5664efd73eb67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb46dd7a-52", "ovs_interfaceid": "fb46dd7a-52d4-44cb-b99e-81d7d653885c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:30:23 compute-0 nova_compute[254092]: 2025-11-25 16:30:23.639 254096 DEBUG oslo_concurrency.lockutils [req-3fa44981-6406-496c-b78e-bc82d6bbc5a2 req-7eb98d36-ce99-4f38-95ba-bfc4c9f3e8f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-33b19faf-57e1-463b-8b4a-b50479a0ef0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:30:23 compute-0 nova_compute[254092]: 2025-11-25 16:30:23.639 254096 DEBUG nova.compute.manager [req-3fa44981-6406-496c-b78e-bc82d6bbc5a2 req-7eb98d36-ce99-4f38-95ba-bfc4c9f3e8f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Received event network-vif-unplugged-d6146886-91a1-4d5f-9234-e1d0154b4230 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:30:23 compute-0 nova_compute[254092]: 2025-11-25 16:30:23.639 254096 DEBUG oslo_concurrency.lockutils [req-3fa44981-6406-496c-b78e-bc82d6bbc5a2 req-7eb98d36-ce99-4f38-95ba-bfc4c9f3e8f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "3375e096-321c-459b-8b6a-e085bb62872f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:23 compute-0 nova_compute[254092]: 2025-11-25 16:30:23.640 254096 DEBUG oslo_concurrency.lockutils [req-3fa44981-6406-496c-b78e-bc82d6bbc5a2 req-7eb98d36-ce99-4f38-95ba-bfc4c9f3e8f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:23 compute-0 nova_compute[254092]: 2025-11-25 16:30:23.640 254096 DEBUG oslo_concurrency.lockutils [req-3fa44981-6406-496c-b78e-bc82d6bbc5a2 req-7eb98d36-ce99-4f38-95ba-bfc4c9f3e8f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:23 compute-0 nova_compute[254092]: 2025-11-25 16:30:23.640 254096 DEBUG nova.compute.manager [req-3fa44981-6406-496c-b78e-bc82d6bbc5a2 req-7eb98d36-ce99-4f38-95ba-bfc4c9f3e8f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] No waiting events found dispatching network-vif-unplugged-d6146886-91a1-4d5f-9234-e1d0154b4230 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:30:23 compute-0 nova_compute[254092]: 2025-11-25 16:30:23.640 254096 WARNING nova.compute.manager [req-3fa44981-6406-496c-b78e-bc82d6bbc5a2 req-7eb98d36-ce99-4f38-95ba-bfc4c9f3e8f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Received unexpected event network-vif-unplugged-d6146886-91a1-4d5f-9234-e1d0154b4230 for instance with vm_state active and task_state rebuilding.
Nov 25 16:30:23 compute-0 nova_compute[254092]: 2025-11-25 16:30:23.640 254096 DEBUG nova.compute.manager [req-3fa44981-6406-496c-b78e-bc82d6bbc5a2 req-7eb98d36-ce99-4f38-95ba-bfc4c9f3e8f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Received event network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:30:23 compute-0 nova_compute[254092]: 2025-11-25 16:30:23.641 254096 DEBUG oslo_concurrency.lockutils [req-3fa44981-6406-496c-b78e-bc82d6bbc5a2 req-7eb98d36-ce99-4f38-95ba-bfc4c9f3e8f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "3375e096-321c-459b-8b6a-e085bb62872f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:23 compute-0 nova_compute[254092]: 2025-11-25 16:30:23.641 254096 DEBUG oslo_concurrency.lockutils [req-3fa44981-6406-496c-b78e-bc82d6bbc5a2 req-7eb98d36-ce99-4f38-95ba-bfc4c9f3e8f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:23 compute-0 nova_compute[254092]: 2025-11-25 16:30:23.641 254096 DEBUG oslo_concurrency.lockutils [req-3fa44981-6406-496c-b78e-bc82d6bbc5a2 req-7eb98d36-ce99-4f38-95ba-bfc4c9f3e8f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:23 compute-0 nova_compute[254092]: 2025-11-25 16:30:23.641 254096 DEBUG nova.compute.manager [req-3fa44981-6406-496c-b78e-bc82d6bbc5a2 req-7eb98d36-ce99-4f38-95ba-bfc4c9f3e8f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] No waiting events found dispatching network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:30:23 compute-0 nova_compute[254092]: 2025-11-25 16:30:23.641 254096 WARNING nova.compute.manager [req-3fa44981-6406-496c-b78e-bc82d6bbc5a2 req-7eb98d36-ce99-4f38-95ba-bfc4c9f3e8f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Received unexpected event network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 for instance with vm_state active and task_state rebuilding.
Nov 25 16:30:23 compute-0 podman[286785]: 2025-11-25 16:30:23.650420421 +0000 UTC m=+0.060334032 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 25 16:30:23 compute-0 podman[286784]: 2025-11-25 16:30:23.660429433 +0000 UTC m=+0.070486567 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251118)
Nov 25 16:30:23 compute-0 podman[286786]: 2025-11-25 16:30:23.689786861 +0000 UTC m=+0.098239732 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 16:30:23 compute-0 ceph-mon[74985]: pgmap v1284: 321 pgs: 321 active+clean; 488 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 167 KiB/s rd, 4.3 MiB/s wr, 100 op/s
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.032 254096 DEBUG nova.network.neutron [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Updating instance_info_cache with network_info: [{"id": "8daa55ae-6950-4c2f-8121-ce02930ab1d9", "address": "fa:16:3e:e0:3e:50", "network": {"id": "1d276419-01f1-4c77-9031-28a38923f36b", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1486844347-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e1bcf74bb1148a3a0f388525c96c919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8daa55ae-69", "ovs_interfaceid": "8daa55ae-6950-4c2f-8121-ce02930ab1d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.038 254096 DEBUG nova.network.neutron [req-f8ce61dc-8969-4ec7-a5b6-9d0a83bf4138 req-20321c35-44fb-41da-9aa0-c48a8867e9c8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Updated VIF entry in instance network info cache for port 09e835b8-70c9-4cb4-bbc2-63fab5f2592e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.039 254096 DEBUG nova.network.neutron [req-f8ce61dc-8969-4ec7-a5b6-9d0a83bf4138 req-20321c35-44fb-41da-9aa0-c48a8867e9c8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Updating instance_info_cache with network_info: [{"id": "02104fc6-3780-400d-a6c2-577082384680", "address": "fa:16:3e:ff:1b:ad", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02104fc6-37", "ovs_interfaceid": "02104fc6-3780-400d-a6c2-577082384680", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "address": "fa:16:3e:79:4a:54", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09e835b8-70", "ovs_interfaceid": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.050 254096 DEBUG oslo_concurrency.lockutils [req-f8ce61dc-8969-4ec7-a5b6-9d0a83bf4138 req-20321c35-44fb-41da-9aa0-c48a8867e9c8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-df0db130-3ffa-4a60-8f7d-fb285a797631" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.099 254096 DEBUG oslo_concurrency.lockutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Releasing lock "refresh_cache-dc3c86a9-91cf-42fb-b11c-7de3305d8388" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.099 254096 DEBUG nova.compute.manager [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Instance network_info: |[{"id": "8daa55ae-6950-4c2f-8121-ce02930ab1d9", "address": "fa:16:3e:e0:3e:50", "network": {"id": "1d276419-01f1-4c77-9031-28a38923f36b", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1486844347-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e1bcf74bb1148a3a0f388525c96c919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8daa55ae-69", "ovs_interfaceid": "8daa55ae-6950-4c2f-8121-ce02930ab1d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.100 254096 DEBUG oslo_concurrency.lockutils [req-9de4b88b-1dcd-4d0e-a4ac-9298b4747ce6 req-744a2db1-addb-4f96-99af-9b4e2b2646f4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-dc3c86a9-91cf-42fb-b11c-7de3305d8388" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.100 254096 DEBUG nova.network.neutron [req-9de4b88b-1dcd-4d0e-a4ac-9298b4747ce6 req-744a2db1-addb-4f96-99af-9b4e2b2646f4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Refreshing network info cache for port 8daa55ae-6950-4c2f-8121-ce02930ab1d9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.105 254096 DEBUG nova.virt.libvirt.driver [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Start _get_guest_xml network_info=[{"id": "8daa55ae-6950-4c2f-8121-ce02930ab1d9", "address": "fa:16:3e:e0:3e:50", "network": {"id": "1d276419-01f1-4c77-9031-28a38923f36b", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1486844347-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e1bcf74bb1148a3a0f388525c96c919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8daa55ae-69", "ovs_interfaceid": "8daa55ae-6950-4c2f-8121-ce02930ab1d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.111 254096 WARNING nova.virt.libvirt.driver [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.120 254096 DEBUG nova.virt.libvirt.host [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.121 254096 DEBUG nova.virt.libvirt.host [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.125 254096 DEBUG nova.virt.libvirt.host [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.126 254096 DEBUG nova.virt.libvirt.host [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.127 254096 DEBUG nova.virt.libvirt.driver [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.127 254096 DEBUG nova.virt.hardware [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.128 254096 DEBUG nova.virt.hardware [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.128 254096 DEBUG nova.virt.hardware [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.128 254096 DEBUG nova.virt.hardware [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.128 254096 DEBUG nova.virt.hardware [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.128 254096 DEBUG nova.virt.hardware [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.129 254096 DEBUG nova.virt.hardware [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.129 254096 DEBUG nova.virt.hardware [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.129 254096 DEBUG nova.virt.hardware [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.129 254096 DEBUG nova.virt.hardware [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.129 254096 DEBUG nova.virt.hardware [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.132 254096 DEBUG oslo_concurrency.processutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.300 254096 DEBUG nova.compute.manager [req-8298b8ac-4a14-48f3-95f5-4f3ec177f64a req-d3811fcc-4313-48a5-b4fa-78e98752e078 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Received event network-vif-plugged-09e835b8-70c9-4cb4-bbc2-63fab5f2592e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.300 254096 DEBUG oslo_concurrency.lockutils [req-8298b8ac-4a14-48f3-95f5-4f3ec177f64a req-d3811fcc-4313-48a5-b4fa-78e98752e078 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.300 254096 DEBUG oslo_concurrency.lockutils [req-8298b8ac-4a14-48f3-95f5-4f3ec177f64a req-d3811fcc-4313-48a5-b4fa-78e98752e078 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.301 254096 DEBUG oslo_concurrency.lockutils [req-8298b8ac-4a14-48f3-95f5-4f3ec177f64a req-d3811fcc-4313-48a5-b4fa-78e98752e078 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.301 254096 DEBUG nova.compute.manager [req-8298b8ac-4a14-48f3-95f5-4f3ec177f64a req-d3811fcc-4313-48a5-b4fa-78e98752e078 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] No waiting events found dispatching network-vif-plugged-09e835b8-70c9-4cb4-bbc2-63fab5f2592e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.301 254096 WARNING nova.compute.manager [req-8298b8ac-4a14-48f3-95f5-4f3ec177f64a req-d3811fcc-4313-48a5-b4fa-78e98752e078 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Received unexpected event network-vif-plugged-09e835b8-70c9-4cb4-bbc2-63fab5f2592e for instance with vm_state active and task_state None.
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.301 254096 DEBUG nova.compute.manager [req-8298b8ac-4a14-48f3-95f5-4f3ec177f64a req-d3811fcc-4313-48a5-b4fa-78e98752e078 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Received event network-vif-plugged-09e835b8-70c9-4cb4-bbc2-63fab5f2592e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.301 254096 DEBUG oslo_concurrency.lockutils [req-8298b8ac-4a14-48f3-95f5-4f3ec177f64a req-d3811fcc-4313-48a5-b4fa-78e98752e078 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.301 254096 DEBUG oslo_concurrency.lockutils [req-8298b8ac-4a14-48f3-95f5-4f3ec177f64a req-d3811fcc-4313-48a5-b4fa-78e98752e078 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.302 254096 DEBUG oslo_concurrency.lockutils [req-8298b8ac-4a14-48f3-95f5-4f3ec177f64a req-d3811fcc-4313-48a5-b4fa-78e98752e078 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.302 254096 DEBUG nova.compute.manager [req-8298b8ac-4a14-48f3-95f5-4f3ec177f64a req-d3811fcc-4313-48a5-b4fa-78e98752e078 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] No waiting events found dispatching network-vif-plugged-09e835b8-70c9-4cb4-bbc2-63fab5f2592e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.302 254096 WARNING nova.compute.manager [req-8298b8ac-4a14-48f3-95f5-4f3ec177f64a req-d3811fcc-4313-48a5-b4fa-78e98752e078 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Received unexpected event network-vif-plugged-09e835b8-70c9-4cb4-bbc2-63fab5f2592e for instance with vm_state active and task_state None.
Nov 25 16:30:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:30:24 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3583423350' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.601 254096 DEBUG oslo_concurrency.processutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.633 254096 DEBUG nova.storage.rbd_utils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] rbd image dc3c86a9-91cf-42fb-b11c-7de3305d8388_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.639 254096 DEBUG oslo_concurrency.processutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.746 254096 DEBUG oslo_concurrency.lockutils [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "interface-df0db130-3ffa-4a60-8f7d-fb285a797631-09e835b8-70c9-4cb4-bbc2-63fab5f2592e" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.748 254096 DEBUG oslo_concurrency.lockutils [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "interface-df0db130-3ffa-4a60-8f7d-fb285a797631-09e835b8-70c9-4cb4-bbc2-63fab5f2592e" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.769 254096 DEBUG nova.objects.instance [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'flavor' on Instance uuid df0db130-3ffa-4a60-8f7d-fb285a797631 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.791 254096 DEBUG nova.virt.libvirt.vif [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:29:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-175151571',display_name='tempest-AttachInterfacesTestJSON-server-175151571',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-175151571',id=24,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNNpuTPFaMmG67SHL6Vnzpxg++tbzc+w8Hx9mEMHBp+Ppk67spJlgihZsy04EvrT9EPPCFaJqqycyDW1OLzG+Lc4OXw1D0urNXa3GNkPoKTEzwF+kapepgvV/e/LIpx3gw==',key_name='tempest-keypair-2002144633',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:29:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-wvdcod6f',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:29:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=df0db130-3ffa-4a60-8f7d-fb285a797631,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "address": "fa:16:3e:79:4a:54", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09e835b8-70", "ovs_interfaceid": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.792 254096 DEBUG nova.network.os_vif_util [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "address": "fa:16:3e:79:4a:54", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09e835b8-70", "ovs_interfaceid": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.793 254096 DEBUG nova.network.os_vif_util [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:79:4a:54,bridge_name='br-int',has_traffic_filtering=True,id=09e835b8-70c9-4cb4-bbc2-63fab5f2592e,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09e835b8-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.798 254096 DEBUG nova.virt.libvirt.guest [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:79:4a:54"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap09e835b8-70"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.800 254096 DEBUG nova.virt.libvirt.guest [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:79:4a:54"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap09e835b8-70"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.802 254096 DEBUG nova.virt.libvirt.driver [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Attempting to detach device tap09e835b8-70 from instance df0db130-3ffa-4a60-8f7d-fb285a797631 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 25 16:30:24 compute-0 nova_compute[254092]: 2025-11-25 16:30:24.803 254096 DEBUG nova.virt.libvirt.guest [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] detach device xml: <interface type="ethernet">
Nov 25 16:30:24 compute-0 nova_compute[254092]:   <mac address="fa:16:3e:79:4a:54"/>
Nov 25 16:30:24 compute-0 nova_compute[254092]:   <model type="virtio"/>
Nov 25 16:30:24 compute-0 nova_compute[254092]:   <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:30:24 compute-0 nova_compute[254092]:   <mtu size="1442"/>
Nov 25 16:30:24 compute-0 nova_compute[254092]:   <target dev="tap09e835b8-70"/>
Nov 25 16:30:24 compute-0 nova_compute[254092]: </interface>
Nov 25 16:30:24 compute-0 nova_compute[254092]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 25 16:30:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1285: 321 pgs: 321 active+clean; 457 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 158 KiB/s rd, 3.6 MiB/s wr, 100 op/s
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.007 254096 DEBUG nova.virt.libvirt.guest [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:79:4a:54"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap09e835b8-70"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.010 254096 DEBUG nova.virt.libvirt.guest [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:79:4a:54"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap09e835b8-70"/></interface>not found in domain: <domain type='kvm' id='26'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <name>instance-00000018</name>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <uuid>df0db130-3ffa-4a60-8f7d-fb285a797631</uuid>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <nova:name>tempest-AttachInterfacesTestJSON-server-175151571</nova:name>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <nova:creationTime>2025-11-25 16:30:21</nova:creationTime>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <nova:flavor name="m1.nano">
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <nova:memory>128</nova:memory>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <nova:disk>1</nova:disk>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <nova:swap>0</nova:swap>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <nova:vcpus>1</nova:vcpus>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   </nova:flavor>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <nova:owner>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   </nova:owner>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <nova:ports>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <nova:port uuid="02104fc6-3780-400d-a6c2-577082384680">
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <nova:port uuid="09e835b8-70c9-4cb4-bbc2-63fab5f2592e">
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   </nova:ports>
Nov 25 16:30:25 compute-0 nova_compute[254092]: </nova:instance>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <memory unit='KiB'>131072</memory>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <vcpu placement='static'>1</vcpu>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <resource>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <partition>/machine</partition>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   </resource>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <sysinfo type='smbios'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <system>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <entry name='manufacturer'>RDO</entry>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <entry name='product'>OpenStack Compute</entry>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <entry name='serial'>df0db130-3ffa-4a60-8f7d-fb285a797631</entry>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <entry name='uuid'>df0db130-3ffa-4a60-8f7d-fb285a797631</entry>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <entry name='family'>Virtual Machine</entry>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </system>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <os>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <boot dev='hd'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <smbios mode='sysinfo'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   </os>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <features>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <vmcoreinfo state='on'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   </features>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <cpu mode='custom' match='exact' check='full'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <vendor>AMD</vendor>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='require' name='x2apic'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='require' name='tsc-deadline'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='require' name='hypervisor'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='require' name='tsc_adjust'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='require' name='spec-ctrl'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='require' name='stibp'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='require' name='ssbd'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='require' name='cmp_legacy'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='require' name='overflow-recov'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='require' name='succor'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='require' name='ibrs'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='require' name='amd-ssbd'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='require' name='virt-ssbd'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='disable' name='lbrv'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='disable' name='tsc-scale'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='disable' name='vmcb-clean'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='disable' name='flushbyasid'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='disable' name='pause-filter'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='disable' name='pfthreshold'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='disable' name='xsaves'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='disable' name='svm'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='require' name='topoext'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='disable' name='npt'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='disable' name='nrip-save'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <clock offset='utc'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <timer name='pit' tickpolicy='delay'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <timer name='hpet' present='no'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <on_poweroff>destroy</on_poweroff>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <on_reboot>restart</on_reboot>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <on_crash>destroy</on_crash>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <disk type='network' device='disk'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <auth username='openstack'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:         <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <source protocol='rbd' name='vms/df0db130-3ffa-4a60-8f7d-fb285a797631_disk' index='2'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:         <host name='192.168.122.100' port='6789'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       </source>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target dev='vda' bus='virtio'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='virtio-disk0'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <disk type='network' device='cdrom'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <auth username='openstack'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:         <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <source protocol='rbd' name='vms/df0db130-3ffa-4a60-8f7d-fb285a797631_disk.config' index='1'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:         <host name='192.168.122.100' port='6789'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       </source>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target dev='sda' bus='sata'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <readonly/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='sata0-0-0'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='0' model='pcie-root'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pcie.0'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='1' port='0x10'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.1'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='2' port='0x11'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.2'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='3' port='0x12'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.3'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='4' port='0x13'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.4'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='5' port='0x14'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.5'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='6' port='0x15'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.6'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='7' port='0x16'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.7'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='8' port='0x17'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.8'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='9' port='0x18'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.9'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='10' port='0x19'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.10'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='11' port='0x1a'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.11'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='12' port='0x1b'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.12'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='13' port='0x1c'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.13'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='14' port='0x1d'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.14'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='15' port='0x1e'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.15'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='16' port='0x1f'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.16'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='17' port='0x20'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.17'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='18' port='0x21'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.18'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='19' port='0x22'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.19'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='20' port='0x23'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.20'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='21' port='0x24'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.21'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='22' port='0x25'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.22'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='23' port='0x26'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.23'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='24' port='0x27'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.24'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='25' port='0x28'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.25'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-pci-bridge'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.26'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='usb'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='sata' index='0'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='ide'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <interface type='ethernet'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <mac address='fa:16:3e:ff:1b:ad'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target dev='tap02104fc6-37'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model type='virtio'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <mtu size='1442'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='net0'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <interface type='ethernet'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <mac address='fa:16:3e:79:4a:54'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target dev='tap09e835b8-70'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model type='virtio'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <mtu size='1442'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='net1'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <serial type='pty'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <source path='/dev/pts/1'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <log file='/var/lib/nova/instances/df0db130-3ffa-4a60-8f7d-fb285a797631/console.log' append='off'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target type='isa-serial' port='0'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:         <model name='isa-serial'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       </target>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='serial0'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <console type='pty' tty='/dev/pts/1'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <source path='/dev/pts/1'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <log file='/var/lib/nova/instances/df0db130-3ffa-4a60-8f7d-fb285a797631/console.log' append='off'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target type='serial' port='0'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='serial0'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </console>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <input type='tablet' bus='usb'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='input0'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='usb' bus='0' port='1'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </input>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <input type='mouse' bus='ps2'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='input1'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </input>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <input type='keyboard' bus='ps2'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='input2'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </input>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <listen type='address' address='::0'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </graphics>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <audio id='1' type='none'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <video>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model type='virtio' heads='1' primary='yes'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='video0'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </video>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <watchdog model='itco' action='reset'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='watchdog0'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </watchdog>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <memballoon model='virtio'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <stats period='10'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='balloon0'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <rng model='virtio'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <backend model='random'>/dev/urandom</backend>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='rng0'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <label>system_u:system_r:svirt_t:s0:c874,c979</label>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c874,c979</imagelabel>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   </seclabel>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <label>+107:+107</label>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <imagelabel>+107:+107</imagelabel>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   </seclabel>
Nov 25 16:30:25 compute-0 nova_compute[254092]: </domain>
Nov 25 16:30:25 compute-0 nova_compute[254092]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.025 254096 INFO nova.virt.libvirt.driver [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Successfully detached device tap09e835b8-70 from instance df0db130-3ffa-4a60-8f7d-fb285a797631 from the persistent domain config.
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.025 254096 DEBUG nova.virt.libvirt.driver [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] (1/8): Attempting to detach device tap09e835b8-70 with device alias net1 from instance df0db130-3ffa-4a60-8f7d-fb285a797631 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.026 254096 DEBUG nova.virt.libvirt.guest [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] detach device xml: <interface type="ethernet">
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <mac address="fa:16:3e:79:4a:54"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <model type="virtio"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <mtu size="1442"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <target dev="tap09e835b8-70"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]: </interface>
Nov 25 16:30:25 compute-0 nova_compute[254092]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 25 16:30:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:30:25 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/494457192' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.132 254096 DEBUG oslo_concurrency.processutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.133 254096 DEBUG nova.virt.libvirt.vif [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:30:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-1874518354',display_name='tempest-ImagesNegativeTestJSON-server-1874518354',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-1874518354',id=26,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4e1bcf74bb1148a3a0f388525c96c919',ramdisk_id='',reservation_id='r-cagts7p4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-1461337409',owner_user_name='tempest-ImagesNegativeTestJSON-1461337409-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:30:18Z,user_data=None,user_id='01171e7ab3a4447497eacf11bf89be63',uuid=dc3c86a9-91cf-42fb-b11c-7de3305d8388,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8daa55ae-6950-4c2f-8121-ce02930ab1d9", "address": "fa:16:3e:e0:3e:50", "network": {"id": "1d276419-01f1-4c77-9031-28a38923f36b", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1486844347-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e1bcf74bb1148a3a0f388525c96c919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8daa55ae-69", "ovs_interfaceid": "8daa55ae-6950-4c2f-8121-ce02930ab1d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.133 254096 DEBUG nova.network.os_vif_util [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Converting VIF {"id": "8daa55ae-6950-4c2f-8121-ce02930ab1d9", "address": "fa:16:3e:e0:3e:50", "network": {"id": "1d276419-01f1-4c77-9031-28a38923f36b", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1486844347-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e1bcf74bb1148a3a0f388525c96c919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8daa55ae-69", "ovs_interfaceid": "8daa55ae-6950-4c2f-8121-ce02930ab1d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.134 254096 DEBUG nova.network.os_vif_util [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:3e:50,bridge_name='br-int',has_traffic_filtering=True,id=8daa55ae-6950-4c2f-8121-ce02930ab1d9,network=Network(1d276419-01f1-4c77-9031-28a38923f36b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8daa55ae-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.135 254096 DEBUG nova.objects.instance [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Lazy-loading 'pci_devices' on Instance uuid dc3c86a9-91cf-42fb-b11c-7de3305d8388 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:30:25 compute-0 kernel: tap09e835b8-70 (unregistering): left promiscuous mode
Nov 25 16:30:25 compute-0 NetworkManager[48891]: <info>  [1764088225.1472] device (tap09e835b8-70): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.148 254096 DEBUG nova.virt.libvirt.driver [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <uuid>dc3c86a9-91cf-42fb-b11c-7de3305d8388</uuid>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <name>instance-0000001a</name>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <nova:name>tempest-ImagesNegativeTestJSON-server-1874518354</nova:name>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:30:24</nova:creationTime>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:30:25 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:30:25 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:30:25 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:30:25 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:30:25 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:30:25 compute-0 nova_compute[254092]:         <nova:user uuid="01171e7ab3a4447497eacf11bf89be63">tempest-ImagesNegativeTestJSON-1461337409-project-member</nova:user>
Nov 25 16:30:25 compute-0 nova_compute[254092]:         <nova:project uuid="4e1bcf74bb1148a3a0f388525c96c919">tempest-ImagesNegativeTestJSON-1461337409</nova:project>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:30:25 compute-0 nova_compute[254092]:         <nova:port uuid="8daa55ae-6950-4c2f-8121-ce02930ab1d9">
Nov 25 16:30:25 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <system>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <entry name="serial">dc3c86a9-91cf-42fb-b11c-7de3305d8388</entry>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <entry name="uuid">dc3c86a9-91cf-42fb-b11c-7de3305d8388</entry>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </system>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <os>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   </os>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <features>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   </features>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/dc3c86a9-91cf-42fb-b11c-7de3305d8388_disk">
Nov 25 16:30:25 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       </source>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:30:25 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/dc3c86a9-91cf-42fb-b11c-7de3305d8388_disk.config">
Nov 25 16:30:25 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       </source>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:30:25 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:e0:3e:50"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target dev="tap8daa55ae-69"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/dc3c86a9-91cf-42fb-b11c-7de3305d8388/console.log" append="off"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <video>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </video>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:30:25 compute-0 nova_compute[254092]: </domain>
Nov 25 16:30:25 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.148 254096 DEBUG nova.compute.manager [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Preparing to wait for external event network-vif-plugged-8daa55ae-6950-4c2f-8121-ce02930ab1d9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.149 254096 DEBUG oslo_concurrency.lockutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Acquiring lock "dc3c86a9-91cf-42fb-b11c-7de3305d8388-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.149 254096 DEBUG oslo_concurrency.lockutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Lock "dc3c86a9-91cf-42fb-b11c-7de3305d8388-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.149 254096 DEBUG oslo_concurrency.lockutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Lock "dc3c86a9-91cf-42fb-b11c-7de3305d8388-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.150 254096 DEBUG nova.virt.libvirt.vif [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:30:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-1874518354',display_name='tempest-ImagesNegativeTestJSON-server-1874518354',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-1874518354',id=26,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4e1bcf74bb1148a3a0f388525c96c919',ramdisk_id='',reservation_id='r-cagts7p4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-1461337409',owner_user_name='tempest-ImagesNegativeTestJSON-1461337409-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:30:18Z,user_data=None,user_id='01171e7ab3a4447497eacf11bf89be63',uuid=dc3c86a9-91cf-42fb-b11c-7de3305d8388,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8daa55ae-6950-4c2f-8121-ce02930ab1d9", "address": "fa:16:3e:e0:3e:50", "network": {"id": "1d276419-01f1-4c77-9031-28a38923f36b", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1486844347-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e1bcf74bb1148a3a0f388525c96c919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8daa55ae-69", "ovs_interfaceid": "8daa55ae-6950-4c2f-8121-ce02930ab1d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.150 254096 DEBUG nova.network.os_vif_util [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Converting VIF {"id": "8daa55ae-6950-4c2f-8121-ce02930ab1d9", "address": "fa:16:3e:e0:3e:50", "network": {"id": "1d276419-01f1-4c77-9031-28a38923f36b", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1486844347-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e1bcf74bb1148a3a0f388525c96c919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8daa55ae-69", "ovs_interfaceid": "8daa55ae-6950-4c2f-8121-ce02930ab1d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.151 254096 DEBUG nova.network.os_vif_util [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:3e:50,bridge_name='br-int',has_traffic_filtering=True,id=8daa55ae-6950-4c2f-8121-ce02930ab1d9,network=Network(1d276419-01f1-4c77-9031-28a38923f36b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8daa55ae-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.151 254096 DEBUG os_vif [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:3e:50,bridge_name='br-int',has_traffic_filtering=True,id=8daa55ae-6950-4c2f-8121-ce02930ab1d9,network=Network(1d276419-01f1-4c77-9031-28a38923f36b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8daa55ae-69') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.152 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.153 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.153 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.155 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.155 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8daa55ae-69, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.156 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8daa55ae-69, col_values=(('external_ids', {'iface-id': '8daa55ae-6950-4c2f-8121-ce02930ab1d9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e0:3e:50', 'vm-uuid': 'dc3c86a9-91cf-42fb-b11c-7de3305d8388'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:25 compute-0 ovn_controller[153477]: 2025-11-25T16:30:25Z|00105|binding|INFO|Releasing lport 09e835b8-70c9-4cb4-bbc2-63fab5f2592e from this chassis (sb_readonly=0)
Nov 25 16:30:25 compute-0 ovn_controller[153477]: 2025-11-25T16:30:25Z|00106|binding|INFO|Setting lport 09e835b8-70c9-4cb4-bbc2-63fab5f2592e down in Southbound
Nov 25 16:30:25 compute-0 ovn_controller[153477]: 2025-11-25T16:30:25Z|00107|binding|INFO|Removing iface tap09e835b8-70 ovn-installed in OVS
Nov 25 16:30:25 compute-0 NetworkManager[48891]: <info>  [1764088225.1590] manager: (tap8daa55ae-69): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.159 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.162 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.163 254096 DEBUG nova.virt.libvirt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Received event <DeviceRemovedEvent: 1764088225.1634488, df0db130-3ffa-4a60-8f7d-fb285a797631 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.168 254096 DEBUG nova.virt.libvirt.driver [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Start waiting for the detach event from libvirt for device tap09e835b8-70 with device alias net1 for instance df0db130-3ffa-4a60-8f7d-fb285a797631 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.169 254096 DEBUG nova.virt.libvirt.guest [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:79:4a:54"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap09e835b8-70"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.173 254096 DEBUG nova.virt.libvirt.guest [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:79:4a:54"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap09e835b8-70"/></interface>not found in domain: <domain type='kvm' id='26'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <name>instance-00000018</name>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <uuid>df0db130-3ffa-4a60-8f7d-fb285a797631</uuid>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <nova:name>tempest-AttachInterfacesTestJSON-server-175151571</nova:name>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <nova:creationTime>2025-11-25 16:30:21</nova:creationTime>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <nova:flavor name="m1.nano">
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <nova:memory>128</nova:memory>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <nova:disk>1</nova:disk>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <nova:swap>0</nova:swap>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <nova:vcpus>1</nova:vcpus>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   </nova:flavor>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <nova:owner>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   </nova:owner>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <nova:ports>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <nova:port uuid="02104fc6-3780-400d-a6c2-577082384680">
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <nova:port uuid="09e835b8-70c9-4cb4-bbc2-63fab5f2592e">
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   </nova:ports>
Nov 25 16:30:25 compute-0 nova_compute[254092]: </nova:instance>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <memory unit='KiB'>131072</memory>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <vcpu placement='static'>1</vcpu>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <resource>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <partition>/machine</partition>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   </resource>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <sysinfo type='smbios'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <system>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <entry name='manufacturer'>RDO</entry>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <entry name='product'>OpenStack Compute</entry>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <entry name='serial'>df0db130-3ffa-4a60-8f7d-fb285a797631</entry>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <entry name='uuid'>df0db130-3ffa-4a60-8f7d-fb285a797631</entry>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <entry name='family'>Virtual Machine</entry>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </system>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <os>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <boot dev='hd'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <smbios mode='sysinfo'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   </os>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <features>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <vmcoreinfo state='on'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   </features>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <cpu mode='custom' match='exact' check='full'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <vendor>AMD</vendor>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='require' name='x2apic'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='require' name='tsc-deadline'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='require' name='hypervisor'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='require' name='tsc_adjust'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='require' name='spec-ctrl'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='require' name='stibp'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='require' name='ssbd'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='require' name='cmp_legacy'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='require' name='overflow-recov'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='require' name='succor'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='require' name='ibrs'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='require' name='amd-ssbd'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='require' name='virt-ssbd'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='disable' name='lbrv'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='disable' name='tsc-scale'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='disable' name='vmcb-clean'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='disable' name='flushbyasid'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='disable' name='pause-filter'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='disable' name='pfthreshold'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='disable' name='xsaves'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='disable' name='svm'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='require' name='topoext'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='disable' name='npt'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <feature policy='disable' name='nrip-save'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <clock offset='utc'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <timer name='pit' tickpolicy='delay'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <timer name='hpet' present='no'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <on_poweroff>destroy</on_poweroff>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <on_reboot>restart</on_reboot>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <on_crash>destroy</on_crash>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <disk type='network' device='disk'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <auth username='openstack'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:         <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <source protocol='rbd' name='vms/df0db130-3ffa-4a60-8f7d-fb285a797631_disk' index='2'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:         <host name='192.168.122.100' port='6789'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       </source>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target dev='vda' bus='virtio'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='virtio-disk0'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <disk type='network' device='cdrom'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <auth username='openstack'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:         <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <source protocol='rbd' name='vms/df0db130-3ffa-4a60-8f7d-fb285a797631_disk.config' index='1'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:         <host name='192.168.122.100' port='6789'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       </source>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target dev='sda' bus='sata'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <readonly/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='sata0-0-0'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='0' model='pcie-root'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pcie.0'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='1' port='0x10'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.1'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='2' port='0x11'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.2'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='3' port='0x12'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.3'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='4' port='0x13'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.4'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='5' port='0x14'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.5'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='6' port='0x15'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.6'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='7' port='0x16'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.7'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='8' port='0x17'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.8'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='9' port='0x18'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.9'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='10' port='0x19'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.10'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='11' port='0x1a'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.11'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='12' port='0x1b'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.12'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='13' port='0x1c'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.13'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='14' port='0x1d'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.14'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='15' port='0x1e'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.15'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='16' port='0x1f'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.16'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='17' port='0x20'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.17'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='18' port='0x21'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.18'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='19' port='0x22'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.19'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='20' port='0x23'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.20'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='21' port='0x24'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.21'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='22' port='0x25'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.22'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='23' port='0x26'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.23'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='24' port='0x27'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.24'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target chassis='25' port='0x28'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.25'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model name='pcie-pci-bridge'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='pci.26'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='usb'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <controller type='sata' index='0'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='ide'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <interface type='ethernet'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <mac address='fa:16:3e:ff:1b:ad'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target dev='tap02104fc6-37'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model type='virtio'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <mtu size='1442'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='net0'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <serial type='pty'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <source path='/dev/pts/1'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <log file='/var/lib/nova/instances/df0db130-3ffa-4a60-8f7d-fb285a797631/console.log' append='off'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target type='isa-serial' port='0'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:         <model name='isa-serial'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       </target>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='serial0'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <console type='pty' tty='/dev/pts/1'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <source path='/dev/pts/1'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <log file='/var/lib/nova/instances/df0db130-3ffa-4a60-8f7d-fb285a797631/console.log' append='off'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <target type='serial' port='0'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='serial0'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </console>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <input type='tablet' bus='usb'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='input0'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='usb' bus='0' port='1'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </input>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <input type='mouse' bus='ps2'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='input1'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </input>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <input type='keyboard' bus='ps2'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='input2'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </input>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <listen type='address' address='::0'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </graphics>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <audio id='1' type='none'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <video>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <model type='virtio' heads='1' primary='yes'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='video0'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </video>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <watchdog model='itco' action='reset'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='watchdog0'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </watchdog>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <memballoon model='virtio'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <stats period='10'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='balloon0'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <rng model='virtio'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <backend model='random'>/dev/urandom</backend>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <alias name='rng0'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <label>system_u:system_r:svirt_t:s0:c874,c979</label>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c874,c979</imagelabel>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   </seclabel>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <label>+107:+107</label>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <imagelabel>+107:+107</imagelabel>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   </seclabel>
Nov 25 16:30:25 compute-0 nova_compute[254092]: </domain>
Nov 25 16:30:25 compute-0 nova_compute[254092]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.173 254096 INFO nova.virt.libvirt.driver [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Successfully detached device tap09e835b8-70 from instance df0db130-3ffa-4a60-8f7d-fb285a797631 from the live domain config.
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.174 254096 DEBUG nova.virt.libvirt.vif [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:29:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-175151571',display_name='tempest-AttachInterfacesTestJSON-server-175151571',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-175151571',id=24,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNNpuTPFaMmG67SHL6Vnzpxg++tbzc+w8Hx9mEMHBp+Ppk67spJlgihZsy04EvrT9EPPCFaJqqycyDW1OLzG+Lc4OXw1D0urNXa3GNkPoKTEzwF+kapepgvV/e/LIpx3gw==',key_name='tempest-keypair-2002144633',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:29:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-wvdcod6f',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:29:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=df0db130-3ffa-4a60-8f7d-fb285a797631,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "address": "fa:16:3e:79:4a:54", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09e835b8-70", "ovs_interfaceid": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.176 254096 DEBUG nova.network.os_vif_util [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "address": "fa:16:3e:79:4a:54", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09e835b8-70", "ovs_interfaceid": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.177 254096 DEBUG nova.network.os_vif_util [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:79:4a:54,bridge_name='br-int',has_traffic_filtering=True,id=09e835b8-70c9-4cb4-bbc2-63fab5f2592e,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09e835b8-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.177 254096 DEBUG os_vif [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:79:4a:54,bridge_name='br-int',has_traffic_filtering=True,id=09e835b8-70c9-4cb4-bbc2-63fab5f2592e,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09e835b8-70') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.183 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.185 254096 INFO os_vif [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:3e:50,bridge_name='br-int',has_traffic_filtering=True,id=8daa55ae-6950-4c2f-8121-ce02930ab1d9,network=Network(1d276419-01f1-4c77-9031-28a38923f36b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8daa55ae-69')
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.186 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.186 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09e835b8-70, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.188 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.190 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.193 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:79:4a:54 10.100.0.13'], port_security=['fa:16:3e:79:4a:54 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'df0db130-3ffa-4a60-8f7d-fb285a797631', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36fd407edf16424e96041f57fccb9e2b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4d6ed831-26b5-4c46-a07a-309ffadbe01b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a0fe150-4b32-4abf-ad06-055b2d97facb, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=09e835b8-70c9-4cb4-bbc2-63fab5f2592e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.194 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 09e835b8-70c9-4cb4-bbc2-63fab5f2592e in datapath 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d unbound from our chassis
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.196 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.195 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.198 254096 INFO os_vif [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:79:4a:54,bridge_name='br-int',has_traffic_filtering=True,id=09e835b8-70c9-4cb4-bbc2-63fab5f2592e,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09e835b8-70')
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.198 254096 DEBUG nova.virt.libvirt.guest [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <nova:name>tempest-AttachInterfacesTestJSON-server-175151571</nova:name>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <nova:creationTime>2025-11-25 16:30:25</nova:creationTime>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <nova:flavor name="m1.nano">
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <nova:memory>128</nova:memory>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <nova:disk>1</nova:disk>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <nova:swap>0</nova:swap>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <nova:vcpus>1</nova:vcpus>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   </nova:flavor>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <nova:owner>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   </nova:owner>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   <nova:ports>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     <nova:port uuid="02104fc6-3780-400d-a6c2-577082384680">
Nov 25 16:30:25 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 16:30:25 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:30:25 compute-0 nova_compute[254092]:   </nova:ports>
Nov 25 16:30:25 compute-0 nova_compute[254092]: </nova:instance>
Nov 25 16:30:25 compute-0 nova_compute[254092]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.212 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4361dcb6-9402-4e19-9f20-e0abca65ad01]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.243 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[677f96c0-40ed-4ae6-87eb-1b287e1e261e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.245 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6602c8fe-21d8-4d4c-9f4f-f7997ed4123b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.270 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1a2df52d-b49c-42c5-91af-727237d9aab6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.286 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[265aab35-5959-44a9-a009-e01b9dd0fb43]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52e7d5b9-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:97:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464508, 'reachable_time': 42591, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286931, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.299 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c096d181-ad92-487b-8afc-499c5f245bcb]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464522, 'tstamp': 464522}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286932, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464526, 'tstamp': 464526}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286932, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.301 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52e7d5b9-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.303 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.308 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.309 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52e7d5b9-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.309 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.309 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap52e7d5b9-00, col_values=(('external_ids', {'iface-id': 'af5d2618-f326-4aaf-9395-f47548ac108b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.310 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.329 254096 DEBUG oslo_concurrency.processutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/33b19faf-57e1-463b-8b4a-b50479a0ef0f/disk.config 33b19faf-57e1-463b-8b4a-b50479a0ef0f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.330 254096 INFO nova.virt.libvirt.driver [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Deleting local config drive /var/lib/nova/instances/33b19faf-57e1-463b-8b4a-b50479a0ef0f/disk.config because it was imported into RBD.
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.360 254096 DEBUG nova.virt.libvirt.driver [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.360 254096 DEBUG nova.virt.libvirt.driver [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.360 254096 DEBUG nova.virt.libvirt.driver [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] No VIF found with MAC fa:16:3e:e0:3e:50, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.361 254096 INFO nova.virt.libvirt.driver [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Using config drive
Nov 25 16:30:25 compute-0 kernel: tapfb46dd7a-52: entered promiscuous mode
Nov 25 16:30:25 compute-0 systemd-udevd[286917]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:30:25 compute-0 NetworkManager[48891]: <info>  [1764088225.3765] manager: (tapfb46dd7a-52): new Tun device (/org/freedesktop/NetworkManager/Devices/66)
Nov 25 16:30:25 compute-0 ovn_controller[153477]: 2025-11-25T16:30:25Z|00108|binding|INFO|Claiming lport fb46dd7a-52d4-44cb-b99e-81d7d653885c for this chassis.
Nov 25 16:30:25 compute-0 ovn_controller[153477]: 2025-11-25T16:30:25Z|00109|binding|INFO|fb46dd7a-52d4-44cb-b99e-81d7d653885c: Claiming fa:16:3e:d4:54:45 10.100.0.10
Nov 25 16:30:25 compute-0 NetworkManager[48891]: <info>  [1764088225.3878] device (tapfb46dd7a-52): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:30:25 compute-0 NetworkManager[48891]: <info>  [1764088225.3893] device (tapfb46dd7a-52): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:30:25 compute-0 ovn_controller[153477]: 2025-11-25T16:30:25Z|00110|binding|INFO|Setting lport fb46dd7a-52d4-44cb-b99e-81d7d653885c ovn-installed in OVS
Nov 25 16:30:25 compute-0 systemd-machined[216343]: New machine qemu-28-instance-00000019.
Nov 25 16:30:25 compute-0 systemd[1]: Started Virtual Machine qemu-28-instance-00000019.
Nov 25 16:30:25 compute-0 ovn_controller[153477]: 2025-11-25T16:30:25Z|00111|binding|INFO|Setting lport fb46dd7a-52d4-44cb-b99e-81d7d653885c up in Southbound
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.462 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d4:54:45 10.100.0.10'], port_security=['fa:16:3e:d4:54:45 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '33b19faf-57e1-463b-8b4a-b50479a0ef0f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6ab64ae8-b8fa-4795-a243-9ebe45233e37', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f2f26334db2f4e2cadc5664efd73eb67', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e0797621-f8b9-4c57-8ae8-f4d291e244fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d6129ca0-5388-4dd2-a829-e686213800fe, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=fb46dd7a-52d4-44cb-b99e-81d7d653885c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.463 163338 INFO neutron.agent.ovn.metadata.agent [-] Port fb46dd7a-52d4-44cb-b99e-81d7d653885c in datapath 6ab64ae8-b8fa-4795-a243-9ebe45233e37 bound to our chassis
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.465 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6ab64ae8-b8fa-4795-a243-9ebe45233e37
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.476 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d689522b-2f7d-47f6-9dd9-a6fda1349675]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.477 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6ab64ae8-b1 in ovnmeta-6ab64ae8-b8fa-4795-a243-9ebe45233e37 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.479 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6ab64ae8-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.479 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[50aff5a9-0538-4f75-aa4e-7edf7b2d4cdf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.480 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e8b25f70-2f42-4838-9572-7a77a7f664c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.492 254096 DEBUG nova.storage.rbd_utils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] rbd image dc3c86a9-91cf-42fb-b11c-7de3305d8388_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.497 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.498 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[5b8fa3a9-2be1-4209-a47f-dc2b3026892e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.521 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4c4820a9-4b7f-4abd-a158-ad78fc487bcd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.548 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e8bcbec8-ca44-4516-be8f-d824ea4de090]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.553 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6f38ad65-e093-4503-9b37-f8c9752376bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:25 compute-0 NetworkManager[48891]: <info>  [1764088225.5544] manager: (tap6ab64ae8-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/67)
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.583 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[56414b02-5ec0-40c6-b47d-dfae6a578cb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.585 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[5ebb0c54-239a-4bfa-826d-95a64e09059c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:25 compute-0 NetworkManager[48891]: <info>  [1764088225.6089] device (tap6ab64ae8-b0): carrier: link connected
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.614 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c695c0c7-178c-4c97-9751-6075f856cddf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.632 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[dad3d284-faa0-4ef5-871b-85af700c6902]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6ab64ae8-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c8:67:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 468319, 'reachable_time': 15631, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287002, 'error': None, 'target': 'ovnmeta-6ab64ae8-b8fa-4795-a243-9ebe45233e37', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.645 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[780c28c9-4563-4567-820a-037d63b4a63e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec8:6763'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 468319, 'tstamp': 468319}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287003, 'error': None, 'target': 'ovnmeta-6ab64ae8-b8fa-4795-a243-9ebe45233e37', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.661 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[38a0135a-ca13-46f4-9de7-bc06f923c5f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6ab64ae8-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c8:67:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 468319, 'reachable_time': 15631, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 287004, 'error': None, 'target': 'ovnmeta-6ab64ae8-b8fa-4795-a243-9ebe45233e37', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:25 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3583423350' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:30:25 compute-0 ceph-mon[74985]: pgmap v1285: 321 pgs: 321 active+clean; 457 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 158 KiB/s rd, 3.6 MiB/s wr, 100 op/s
Nov 25 16:30:25 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/494457192' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.691 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4b9c7c2c-5c54-4b97-bc7a-dd3162d9852e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.744 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d550e787-ac2e-43c5-bb1e-a93f947b83ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.745 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ab64ae8-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.745 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.745 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6ab64ae8-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.747 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:25 compute-0 kernel: tap6ab64ae8-b0: entered promiscuous mode
Nov 25 16:30:25 compute-0 NetworkManager[48891]: <info>  [1764088225.7478] manager: (tap6ab64ae8-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.751 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.753 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6ab64ae8-b0, col_values=(('external_ids', {'iface-id': 'd23fff41-4296-47a5-8279-de62f83ea17d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.754 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:25 compute-0 ovn_controller[153477]: 2025-11-25T16:30:25Z|00112|binding|INFO|Releasing lport d23fff41-4296-47a5-8279-de62f83ea17d from this chassis (sb_readonly=0)
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.770 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.774 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.775 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6ab64ae8-b8fa-4795-a243-9ebe45233e37.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6ab64ae8-b8fa-4795-a243-9ebe45233e37.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.776 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d0aa86b2-4595-4d89-b87f-7126a6f06ff3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.777 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-6ab64ae8-b8fa-4795-a243-9ebe45233e37
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/6ab64ae8-b8fa-4795-a243-9ebe45233e37.pid.haproxy
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 6ab64ae8-b8fa-4795-a243-9ebe45233e37
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.777 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6ab64ae8-b8fa-4795-a243-9ebe45233e37', 'env', 'PROCESS_TAG=haproxy-6ab64ae8-b8fa-4795-a243-9ebe45233e37', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6ab64ae8-b8fa-4795-a243-9ebe45233e37.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.856 254096 DEBUG nova.network.neutron [req-9de4b88b-1dcd-4d0e-a4ac-9298b4747ce6 req-744a2db1-addb-4f96-99af-9b4e2b2646f4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Updated VIF entry in instance network info cache for port 8daa55ae-6950-4c2f-8121-ce02930ab1d9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.857 254096 DEBUG nova.network.neutron [req-9de4b88b-1dcd-4d0e-a4ac-9298b4747ce6 req-744a2db1-addb-4f96-99af-9b4e2b2646f4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Updating instance_info_cache with network_info: [{"id": "8daa55ae-6950-4c2f-8121-ce02930ab1d9", "address": "fa:16:3e:e0:3e:50", "network": {"id": "1d276419-01f1-4c77-9031-28a38923f36b", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1486844347-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e1bcf74bb1148a3a0f388525c96c919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8daa55ae-69", "ovs_interfaceid": "8daa55ae-6950-4c2f-8121-ce02930ab1d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:30:25 compute-0 nova_compute[254092]: 2025-11-25 16:30:25.880 254096 DEBUG oslo_concurrency.lockutils [req-9de4b88b-1dcd-4d0e-a4ac-9298b4747ce6 req-744a2db1-addb-4f96-99af-9b4e2b2646f4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-dc3c86a9-91cf-42fb-b11c-7de3305d8388" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:30:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:25.990 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:26 compute-0 nova_compute[254092]: 2025-11-25 16:30:26.050 254096 INFO nova.virt.libvirt.driver [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Creating config drive at /var/lib/nova/instances/dc3c86a9-91cf-42fb-b11c-7de3305d8388/disk.config
Nov 25 16:30:26 compute-0 nova_compute[254092]: 2025-11-25 16:30:26.054 254096 DEBUG oslo_concurrency.processutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dc3c86a9-91cf-42fb-b11c-7de3305d8388/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8ypur5pd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:30:26 compute-0 nova_compute[254092]: 2025-11-25 16:30:26.205 254096 DEBUG oslo_concurrency.processutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dc3c86a9-91cf-42fb-b11c-7de3305d8388/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8ypur5pd" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:30:26 compute-0 podman[287042]: 2025-11-25 16:30:26.131504405 +0000 UTC m=+0.023307764 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:30:26 compute-0 nova_compute[254092]: 2025-11-25 16:30:26.232 254096 DEBUG nova.storage.rbd_utils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] rbd image dc3c86a9-91cf-42fb-b11c-7de3305d8388_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:30:26 compute-0 nova_compute[254092]: 2025-11-25 16:30:26.235 254096 DEBUG oslo_concurrency.processutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dc3c86a9-91cf-42fb-b11c-7de3305d8388/disk.config dc3c86a9-91cf-42fb-b11c-7de3305d8388_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:30:26 compute-0 nova_compute[254092]: 2025-11-25 16:30:26.460 254096 DEBUG nova.compute.manager [req-d587c94a-d0dd-449b-9839-af5dace576c4 req-7084ef67-6b17-4e34-9752-ced1141ba6f2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Received event network-vif-unplugged-09e835b8-70c9-4cb4-bbc2-63fab5f2592e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:30:26 compute-0 nova_compute[254092]: 2025-11-25 16:30:26.461 254096 DEBUG oslo_concurrency.lockutils [req-d587c94a-d0dd-449b-9839-af5dace576c4 req-7084ef67-6b17-4e34-9752-ced1141ba6f2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:26 compute-0 nova_compute[254092]: 2025-11-25 16:30:26.461 254096 DEBUG oslo_concurrency.lockutils [req-d587c94a-d0dd-449b-9839-af5dace576c4 req-7084ef67-6b17-4e34-9752-ced1141ba6f2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:26 compute-0 nova_compute[254092]: 2025-11-25 16:30:26.461 254096 DEBUG oslo_concurrency.lockutils [req-d587c94a-d0dd-449b-9839-af5dace576c4 req-7084ef67-6b17-4e34-9752-ced1141ba6f2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:26 compute-0 nova_compute[254092]: 2025-11-25 16:30:26.462 254096 DEBUG nova.compute.manager [req-d587c94a-d0dd-449b-9839-af5dace576c4 req-7084ef67-6b17-4e34-9752-ced1141ba6f2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] No waiting events found dispatching network-vif-unplugged-09e835b8-70c9-4cb4-bbc2-63fab5f2592e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:30:26 compute-0 nova_compute[254092]: 2025-11-25 16:30:26.462 254096 WARNING nova.compute.manager [req-d587c94a-d0dd-449b-9839-af5dace576c4 req-7084ef67-6b17-4e34-9752-ced1141ba6f2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Received unexpected event network-vif-unplugged-09e835b8-70c9-4cb4-bbc2-63fab5f2592e for instance with vm_state active and task_state None.
Nov 25 16:30:26 compute-0 nova_compute[254092]: 2025-11-25 16:30:26.462 254096 DEBUG nova.compute.manager [req-d587c94a-d0dd-449b-9839-af5dace576c4 req-7084ef67-6b17-4e34-9752-ced1141ba6f2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Received event network-vif-plugged-09e835b8-70c9-4cb4-bbc2-63fab5f2592e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:30:26 compute-0 nova_compute[254092]: 2025-11-25 16:30:26.462 254096 DEBUG oslo_concurrency.lockutils [req-d587c94a-d0dd-449b-9839-af5dace576c4 req-7084ef67-6b17-4e34-9752-ced1141ba6f2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:26 compute-0 nova_compute[254092]: 2025-11-25 16:30:26.462 254096 DEBUG oslo_concurrency.lockutils [req-d587c94a-d0dd-449b-9839-af5dace576c4 req-7084ef67-6b17-4e34-9752-ced1141ba6f2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:26 compute-0 nova_compute[254092]: 2025-11-25 16:30:26.462 254096 DEBUG oslo_concurrency.lockutils [req-d587c94a-d0dd-449b-9839-af5dace576c4 req-7084ef67-6b17-4e34-9752-ced1141ba6f2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:26 compute-0 nova_compute[254092]: 2025-11-25 16:30:26.463 254096 DEBUG nova.compute.manager [req-d587c94a-d0dd-449b-9839-af5dace576c4 req-7084ef67-6b17-4e34-9752-ced1141ba6f2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] No waiting events found dispatching network-vif-plugged-09e835b8-70c9-4cb4-bbc2-63fab5f2592e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:30:26 compute-0 nova_compute[254092]: 2025-11-25 16:30:26.463 254096 WARNING nova.compute.manager [req-d587c94a-d0dd-449b-9839-af5dace576c4 req-7084ef67-6b17-4e34-9752-ced1141ba6f2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Received unexpected event network-vif-plugged-09e835b8-70c9-4cb4-bbc2-63fab5f2592e for instance with vm_state active and task_state None.
Nov 25 16:30:26 compute-0 nova_compute[254092]: 2025-11-25 16:30:26.634 254096 DEBUG oslo_concurrency.lockutils [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "refresh_cache-df0db130-3ffa-4a60-8f7d-fb285a797631" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:30:26 compute-0 nova_compute[254092]: 2025-11-25 16:30:26.634 254096 DEBUG oslo_concurrency.lockutils [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquired lock "refresh_cache-df0db130-3ffa-4a60-8f7d-fb285a797631" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:30:26 compute-0 nova_compute[254092]: 2025-11-25 16:30:26.634 254096 DEBUG nova.network.neutron [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:30:26 compute-0 nova_compute[254092]: 2025-11-25 16:30:26.655 254096 DEBUG nova.compute.manager [req-cf7988eb-e259-4957-82cd-296004290e06 req-380d67b7-d5a5-4fd2-bd4b-526a4d3a5ecf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Received event network-vif-deleted-09e835b8-70c9-4cb4-bbc2-63fab5f2592e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:30:26 compute-0 nova_compute[254092]: 2025-11-25 16:30:26.656 254096 INFO nova.compute.manager [req-cf7988eb-e259-4957-82cd-296004290e06 req-380d67b7-d5a5-4fd2-bd4b-526a4d3a5ecf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Neutron deleted interface 09e835b8-70c9-4cb4-bbc2-63fab5f2592e; detaching it from the instance and deleting it from the info cache
Nov 25 16:30:26 compute-0 nova_compute[254092]: 2025-11-25 16:30:26.656 254096 DEBUG nova.network.neutron [req-cf7988eb-e259-4957-82cd-296004290e06 req-380d67b7-d5a5-4fd2-bd4b-526a4d3a5ecf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Updating instance_info_cache with network_info: [{"id": "02104fc6-3780-400d-a6c2-577082384680", "address": "fa:16:3e:ff:1b:ad", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02104fc6-37", "ovs_interfaceid": "02104fc6-3780-400d-a6c2-577082384680", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:30:26 compute-0 podman[287042]: 2025-11-25 16:30:26.731541503 +0000 UTC m=+0.623344852 container create 533aa05d3161819cd0a2a80d8c311ee3debf8cbb4cca3e3f3c4aec555e031771 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6ab64ae8-b8fa-4795-a243-9ebe45233e37, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:30:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1286: 321 pgs: 321 active+clean; 451 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 132 KiB/s rd, 2.9 MiB/s wr, 96 op/s
Nov 25 16:30:27 compute-0 nova_compute[254092]: 2025-11-25 16:30:27.105 254096 DEBUG nova.objects.instance [req-cf7988eb-e259-4957-82cd-296004290e06 req-380d67b7-d5a5-4fd2-bd4b-526a4d3a5ecf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lazy-loading 'system_metadata' on Instance uuid df0db130-3ffa-4a60-8f7d-fb285a797631 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:30:27 compute-0 nova_compute[254092]: 2025-11-25 16:30:27.165 254096 DEBUG nova.objects.instance [req-cf7988eb-e259-4957-82cd-296004290e06 req-380d67b7-d5a5-4fd2-bd4b-526a4d3a5ecf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lazy-loading 'flavor' on Instance uuid df0db130-3ffa-4a60-8f7d-fb285a797631 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:30:27 compute-0 nova_compute[254092]: 2025-11-25 16:30:27.191 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088227.1914814, 33b19faf-57e1-463b-8b4a-b50479a0ef0f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:30:27 compute-0 nova_compute[254092]: 2025-11-25 16:30:27.192 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] VM Started (Lifecycle Event)
Nov 25 16:30:27 compute-0 nova_compute[254092]: 2025-11-25 16:30:27.248 254096 DEBUG nova.virt.libvirt.vif [req-cf7988eb-e259-4957-82cd-296004290e06 req-380d67b7-d5a5-4fd2-bd4b-526a4d3a5ecf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:29:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-175151571',display_name='tempest-AttachInterfacesTestJSON-server-175151571',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-175151571',id=24,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNNpuTPFaMmG67SHL6Vnzpxg++tbzc+w8Hx9mEMHBp+Ppk67spJlgihZsy04EvrT9EPPCFaJqqycyDW1OLzG+Lc4OXw1D0urNXa3GNkPoKTEzwF+kapepgvV/e/LIpx3gw==',key_name='tempest-keypair-2002144633',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:29:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-wvdcod6f',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:29:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=df0db130-3ffa-4a60-8f7d-fb285a797631,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "address": "fa:16:3e:79:4a:54", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09e835b8-70", "ovs_interfaceid": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:30:27 compute-0 nova_compute[254092]: 2025-11-25 16:30:27.248 254096 DEBUG nova.network.os_vif_util [req-cf7988eb-e259-4957-82cd-296004290e06 req-380d67b7-d5a5-4fd2-bd4b-526a4d3a5ecf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Converting VIF {"id": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "address": "fa:16:3e:79:4a:54", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09e835b8-70", "ovs_interfaceid": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:30:27 compute-0 nova_compute[254092]: 2025-11-25 16:30:27.249 254096 DEBUG nova.network.os_vif_util [req-cf7988eb-e259-4957-82cd-296004290e06 req-380d67b7-d5a5-4fd2-bd4b-526a4d3a5ecf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:79:4a:54,bridge_name='br-int',has_traffic_filtering=True,id=09e835b8-70c9-4cb4-bbc2-63fab5f2592e,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09e835b8-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:30:27 compute-0 nova_compute[254092]: 2025-11-25 16:30:27.252 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:30:27 compute-0 nova_compute[254092]: 2025-11-25 16:30:27.253 254096 DEBUG nova.virt.libvirt.guest [req-cf7988eb-e259-4957-82cd-296004290e06 req-380d67b7-d5a5-4fd2-bd4b-526a4d3a5ecf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:79:4a:54"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap09e835b8-70"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 16:30:27 compute-0 nova_compute[254092]: 2025-11-25 16:30:27.256 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088227.1915882, 33b19faf-57e1-463b-8b4a-b50479a0ef0f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:30:27 compute-0 nova_compute[254092]: 2025-11-25 16:30:27.256 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] VM Paused (Lifecycle Event)
Nov 25 16:30:27 compute-0 nova_compute[254092]: 2025-11-25 16:30:27.257 254096 DEBUG nova.virt.libvirt.guest [req-cf7988eb-e259-4957-82cd-296004290e06 req-380d67b7-d5a5-4fd2-bd4b-526a4d3a5ecf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:79:4a:54"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap09e835b8-70"/></interface>not found in domain: <domain type='kvm' id='26'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <name>instance-00000018</name>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <uuid>df0db130-3ffa-4a60-8f7d-fb285a797631</uuid>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <nova:name>tempest-AttachInterfacesTestJSON-server-175151571</nova:name>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <nova:creationTime>2025-11-25 16:30:25</nova:creationTime>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <nova:flavor name="m1.nano">
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <nova:memory>128</nova:memory>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <nova:disk>1</nova:disk>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <nova:swap>0</nova:swap>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <nova:vcpus>1</nova:vcpus>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   </nova:flavor>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <nova:owner>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   </nova:owner>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <nova:ports>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <nova:port uuid="02104fc6-3780-400d-a6c2-577082384680">
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   </nova:ports>
Nov 25 16:30:27 compute-0 nova_compute[254092]: </nova:instance>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <memory unit='KiB'>131072</memory>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <vcpu placement='static'>1</vcpu>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <resource>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <partition>/machine</partition>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   </resource>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <sysinfo type='smbios'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <system>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <entry name='manufacturer'>RDO</entry>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <entry name='product'>OpenStack Compute</entry>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <entry name='serial'>df0db130-3ffa-4a60-8f7d-fb285a797631</entry>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <entry name='uuid'>df0db130-3ffa-4a60-8f7d-fb285a797631</entry>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <entry name='family'>Virtual Machine</entry>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </system>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <os>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <boot dev='hd'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <smbios mode='sysinfo'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   </os>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <features>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <vmcoreinfo state='on'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   </features>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <cpu mode='custom' match='exact' check='full'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <vendor>AMD</vendor>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='require' name='x2apic'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='require' name='tsc-deadline'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='require' name='hypervisor'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='require' name='tsc_adjust'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='require' name='spec-ctrl'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='require' name='stibp'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='require' name='ssbd'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='require' name='cmp_legacy'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='require' name='overflow-recov'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='require' name='succor'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='require' name='ibrs'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='require' name='amd-ssbd'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='require' name='virt-ssbd'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='disable' name='lbrv'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='disable' name='tsc-scale'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='disable' name='vmcb-clean'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='disable' name='flushbyasid'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='disable' name='pause-filter'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='disable' name='pfthreshold'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='disable' name='xsaves'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='disable' name='svm'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='require' name='topoext'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='disable' name='npt'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='disable' name='nrip-save'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <clock offset='utc'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <timer name='pit' tickpolicy='delay'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <timer name='hpet' present='no'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <on_poweroff>destroy</on_poweroff>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <on_reboot>restart</on_reboot>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <on_crash>destroy</on_crash>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <disk type='network' device='disk'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <auth username='openstack'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:         <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <source protocol='rbd' name='vms/df0db130-3ffa-4a60-8f7d-fb285a797631_disk' index='2'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:         <host name='192.168.122.100' port='6789'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       </source>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target dev='vda' bus='virtio'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='virtio-disk0'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <disk type='network' device='cdrom'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <auth username='openstack'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:         <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <source protocol='rbd' name='vms/df0db130-3ffa-4a60-8f7d-fb285a797631_disk.config' index='1'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:         <host name='192.168.122.100' port='6789'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       </source>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target dev='sda' bus='sata'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <readonly/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='sata0-0-0'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='0' model='pcie-root'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pcie.0'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='1' port='0x10'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.1'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='2' port='0x11'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.2'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='3' port='0x12'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.3'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='4' port='0x13'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.4'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='5' port='0x14'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.5'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='6' port='0x15'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.6'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='7' port='0x16'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.7'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='8' port='0x17'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.8'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='9' port='0x18'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.9'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='10' port='0x19'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.10'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='11' port='0x1a'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.11'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='12' port='0x1b'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.12'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='13' port='0x1c'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.13'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='14' port='0x1d'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.14'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='15' port='0x1e'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.15'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='16' port='0x1f'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.16'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='17' port='0x20'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.17'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='18' port='0x21'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.18'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='19' port='0x22'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.19'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='20' port='0x23'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.20'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='21' port='0x24'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.21'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='22' port='0x25'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.22'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='23' port='0x26'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.23'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='24' port='0x27'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.24'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='25' port='0x28'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.25'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-pci-bridge'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.26'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='usb'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='sata' index='0'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='ide'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <interface type='ethernet'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <mac address='fa:16:3e:ff:1b:ad'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target dev='tap02104fc6-37'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model type='virtio'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <mtu size='1442'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='net0'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <serial type='pty'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <source path='/dev/pts/1'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <log file='/var/lib/nova/instances/df0db130-3ffa-4a60-8f7d-fb285a797631/console.log' append='off'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target type='isa-serial' port='0'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:         <model name='isa-serial'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       </target>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='serial0'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <console type='pty' tty='/dev/pts/1'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <source path='/dev/pts/1'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <log file='/var/lib/nova/instances/df0db130-3ffa-4a60-8f7d-fb285a797631/console.log' append='off'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target type='serial' port='0'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='serial0'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </console>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <input type='tablet' bus='usb'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='input0'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='usb' bus='0' port='1'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </input>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <input type='mouse' bus='ps2'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='input1'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </input>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <input type='keyboard' bus='ps2'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='input2'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </input>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <listen type='address' address='::0'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </graphics>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <audio id='1' type='none'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <video>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model type='virtio' heads='1' primary='yes'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='video0'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </video>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <watchdog model='itco' action='reset'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='watchdog0'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </watchdog>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <memballoon model='virtio'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <stats period='10'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='balloon0'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <rng model='virtio'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <backend model='random'>/dev/urandom</backend>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='rng0'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <label>system_u:system_r:svirt_t:s0:c874,c979</label>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c874,c979</imagelabel>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   </seclabel>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <label>+107:+107</label>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <imagelabel>+107:+107</imagelabel>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   </seclabel>
Nov 25 16:30:27 compute-0 nova_compute[254092]: </domain>
Nov 25 16:30:27 compute-0 nova_compute[254092]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 16:30:27 compute-0 nova_compute[254092]: 2025-11-25 16:30:27.258 254096 DEBUG nova.virt.libvirt.guest [req-cf7988eb-e259-4957-82cd-296004290e06 req-380d67b7-d5a5-4fd2-bd4b-526a4d3a5ecf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:79:4a:54"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap09e835b8-70"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 16:30:27 compute-0 nova_compute[254092]: 2025-11-25 16:30:27.261 254096 DEBUG nova.virt.libvirt.guest [req-cf7988eb-e259-4957-82cd-296004290e06 req-380d67b7-d5a5-4fd2-bd4b-526a4d3a5ecf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:79:4a:54"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap09e835b8-70"/></interface>not found in domain: <domain type='kvm' id='26'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <name>instance-00000018</name>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <uuid>df0db130-3ffa-4a60-8f7d-fb285a797631</uuid>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <nova:name>tempest-AttachInterfacesTestJSON-server-175151571</nova:name>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <nova:creationTime>2025-11-25 16:30:25</nova:creationTime>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <nova:flavor name="m1.nano">
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <nova:memory>128</nova:memory>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <nova:disk>1</nova:disk>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <nova:swap>0</nova:swap>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <nova:vcpus>1</nova:vcpus>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   </nova:flavor>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <nova:owner>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   </nova:owner>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <nova:ports>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <nova:port uuid="02104fc6-3780-400d-a6c2-577082384680">
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   </nova:ports>
Nov 25 16:30:27 compute-0 nova_compute[254092]: </nova:instance>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <memory unit='KiB'>131072</memory>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <vcpu placement='static'>1</vcpu>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <resource>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <partition>/machine</partition>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   </resource>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <sysinfo type='smbios'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <system>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <entry name='manufacturer'>RDO</entry>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <entry name='product'>OpenStack Compute</entry>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <entry name='serial'>df0db130-3ffa-4a60-8f7d-fb285a797631</entry>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <entry name='uuid'>df0db130-3ffa-4a60-8f7d-fb285a797631</entry>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <entry name='family'>Virtual Machine</entry>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </system>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <os>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <boot dev='hd'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <smbios mode='sysinfo'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   </os>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <features>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <vmcoreinfo state='on'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   </features>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <cpu mode='custom' match='exact' check='full'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <vendor>AMD</vendor>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='require' name='x2apic'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='require' name='tsc-deadline'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='require' name='hypervisor'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='require' name='tsc_adjust'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='require' name='spec-ctrl'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='require' name='stibp'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='require' name='ssbd'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='require' name='cmp_legacy'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='require' name='overflow-recov'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='require' name='succor'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='require' name='ibrs'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='require' name='amd-ssbd'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='require' name='virt-ssbd'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='disable' name='lbrv'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='disable' name='tsc-scale'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='disable' name='vmcb-clean'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='disable' name='flushbyasid'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='disable' name='pause-filter'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='disable' name='pfthreshold'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='disable' name='xsaves'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='disable' name='svm'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='require' name='topoext'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='disable' name='npt'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <feature policy='disable' name='nrip-save'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <clock offset='utc'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <timer name='pit' tickpolicy='delay'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <timer name='hpet' present='no'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <on_poweroff>destroy</on_poweroff>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <on_reboot>restart</on_reboot>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <on_crash>destroy</on_crash>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <disk type='network' device='disk'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <auth username='openstack'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:         <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <source protocol='rbd' name='vms/df0db130-3ffa-4a60-8f7d-fb285a797631_disk' index='2'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:         <host name='192.168.122.100' port='6789'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       </source>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target dev='vda' bus='virtio'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='virtio-disk0'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <disk type='network' device='cdrom'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <auth username='openstack'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:         <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <source protocol='rbd' name='vms/df0db130-3ffa-4a60-8f7d-fb285a797631_disk.config' index='1'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:         <host name='192.168.122.100' port='6789'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       </source>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target dev='sda' bus='sata'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <readonly/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='sata0-0-0'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='0' model='pcie-root'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pcie.0'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='1' port='0x10'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.1'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='2' port='0x11'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.2'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='3' port='0x12'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.3'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='4' port='0x13'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.4'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='5' port='0x14'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.5'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='6' port='0x15'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.6'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='7' port='0x16'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.7'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='8' port='0x17'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.8'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='9' port='0x18'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.9'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='10' port='0x19'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.10'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='11' port='0x1a'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.11'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='12' port='0x1b'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.12'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='13' port='0x1c'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.13'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='14' port='0x1d'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.14'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='15' port='0x1e'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.15'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='16' port='0x1f'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.16'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='17' port='0x20'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.17'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='18' port='0x21'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.18'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='19' port='0x22'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.19'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='20' port='0x23'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.20'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='21' port='0x24'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.21'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='22' port='0x25'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.22'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='23' port='0x26'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.23'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='24' port='0x27'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.24'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target chassis='25' port='0x28'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.25'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model name='pcie-pci-bridge'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='pci.26'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='usb'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <controller type='sata' index='0'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='ide'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <interface type='ethernet'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <mac address='fa:16:3e:ff:1b:ad'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target dev='tap02104fc6-37'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model type='virtio'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <mtu size='1442'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='net0'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <serial type='pty'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <source path='/dev/pts/1'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <log file='/var/lib/nova/instances/df0db130-3ffa-4a60-8f7d-fb285a797631/console.log' append='off'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target type='isa-serial' port='0'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:         <model name='isa-serial'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       </target>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='serial0'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <console type='pty' tty='/dev/pts/1'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <source path='/dev/pts/1'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <log file='/var/lib/nova/instances/df0db130-3ffa-4a60-8f7d-fb285a797631/console.log' append='off'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <target type='serial' port='0'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='serial0'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </console>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <input type='tablet' bus='usb'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='input0'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='usb' bus='0' port='1'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </input>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <input type='mouse' bus='ps2'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='input1'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </input>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <input type='keyboard' bus='ps2'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='input2'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </input>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <listen type='address' address='::0'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </graphics>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <audio id='1' type='none'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <video>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <model type='virtio' heads='1' primary='yes'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='video0'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </video>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <watchdog model='itco' action='reset'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='watchdog0'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </watchdog>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <memballoon model='virtio'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <stats period='10'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='balloon0'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <rng model='virtio'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <backend model='random'>/dev/urandom</backend>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <alias name='rng0'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <label>system_u:system_r:svirt_t:s0:c874,c979</label>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c874,c979</imagelabel>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   </seclabel>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <label>+107:+107</label>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <imagelabel>+107:+107</imagelabel>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   </seclabel>
Nov 25 16:30:27 compute-0 nova_compute[254092]: </domain>
Nov 25 16:30:27 compute-0 nova_compute[254092]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 16:30:27 compute-0 nova_compute[254092]: 2025-11-25 16:30:27.262 254096 WARNING nova.virt.libvirt.driver [req-cf7988eb-e259-4957-82cd-296004290e06 req-380d67b7-d5a5-4fd2-bd4b-526a4d3a5ecf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Detaching interface fa:16:3e:79:4a:54 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap09e835b8-70' not found.
Nov 25 16:30:27 compute-0 nova_compute[254092]: 2025-11-25 16:30:27.263 254096 DEBUG nova.virt.libvirt.vif [req-cf7988eb-e259-4957-82cd-296004290e06 req-380d67b7-d5a5-4fd2-bd4b-526a4d3a5ecf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:29:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-175151571',display_name='tempest-AttachInterfacesTestJSON-server-175151571',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-175151571',id=24,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNNpuTPFaMmG67SHL6Vnzpxg++tbzc+w8Hx9mEMHBp+Ppk67spJlgihZsy04EvrT9EPPCFaJqqycyDW1OLzG+Lc4OXw1D0urNXa3GNkPoKTEzwF+kapepgvV/e/LIpx3gw==',key_name='tempest-keypair-2002144633',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:29:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-wvdcod6f',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:29:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=df0db130-3ffa-4a60-8f7d-fb285a797631,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "address": "fa:16:3e:79:4a:54", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09e835b8-70", "ovs_interfaceid": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:30:27 compute-0 nova_compute[254092]: 2025-11-25 16:30:27.263 254096 DEBUG nova.network.os_vif_util [req-cf7988eb-e259-4957-82cd-296004290e06 req-380d67b7-d5a5-4fd2-bd4b-526a4d3a5ecf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Converting VIF {"id": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "address": "fa:16:3e:79:4a:54", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09e835b8-70", "ovs_interfaceid": "09e835b8-70c9-4cb4-bbc2-63fab5f2592e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:30:27 compute-0 nova_compute[254092]: 2025-11-25 16:30:27.264 254096 DEBUG nova.network.os_vif_util [req-cf7988eb-e259-4957-82cd-296004290e06 req-380d67b7-d5a5-4fd2-bd4b-526a4d3a5ecf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:79:4a:54,bridge_name='br-int',has_traffic_filtering=True,id=09e835b8-70c9-4cb4-bbc2-63fab5f2592e,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09e835b8-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:30:27 compute-0 nova_compute[254092]: 2025-11-25 16:30:27.264 254096 DEBUG os_vif [req-cf7988eb-e259-4957-82cd-296004290e06 req-380d67b7-d5a5-4fd2-bd4b-526a4d3a5ecf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:79:4a:54,bridge_name='br-int',has_traffic_filtering=True,id=09e835b8-70c9-4cb4-bbc2-63fab5f2592e,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09e835b8-70') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:30:27 compute-0 nova_compute[254092]: 2025-11-25 16:30:27.269 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:27 compute-0 nova_compute[254092]: 2025-11-25 16:30:27.269 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09e835b8-70, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:27 compute-0 nova_compute[254092]: 2025-11-25 16:30:27.270 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:30:27 compute-0 nova_compute[254092]: 2025-11-25 16:30:27.272 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:30:27 compute-0 nova_compute[254092]: 2025-11-25 16:30:27.273 254096 INFO os_vif [req-cf7988eb-e259-4957-82cd-296004290e06 req-380d67b7-d5a5-4fd2-bd4b-526a4d3a5ecf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:79:4a:54,bridge_name='br-int',has_traffic_filtering=True,id=09e835b8-70c9-4cb4-bbc2-63fab5f2592e,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09e835b8-70')
Nov 25 16:30:27 compute-0 nova_compute[254092]: 2025-11-25 16:30:27.275 254096 DEBUG nova.virt.libvirt.guest [req-cf7988eb-e259-4957-82cd-296004290e06 req-380d67b7-d5a5-4fd2-bd4b-526a4d3a5ecf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <nova:name>tempest-AttachInterfacesTestJSON-server-175151571</nova:name>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <nova:creationTime>2025-11-25 16:30:27</nova:creationTime>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <nova:flavor name="m1.nano">
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <nova:memory>128</nova:memory>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <nova:disk>1</nova:disk>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <nova:swap>0</nova:swap>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <nova:vcpus>1</nova:vcpus>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   </nova:flavor>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <nova:owner>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   </nova:owner>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   <nova:ports>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     <nova:port uuid="02104fc6-3780-400d-a6c2-577082384680">
Nov 25 16:30:27 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 16:30:27 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:30:27 compute-0 nova_compute[254092]:   </nova:ports>
Nov 25 16:30:27 compute-0 nova_compute[254092]: </nova:instance>
Nov 25 16:30:27 compute-0 nova_compute[254092]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 25 16:30:27 compute-0 nova_compute[254092]: 2025-11-25 16:30:27.278 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:30:27 compute-0 nova_compute[254092]: 2025-11-25 16:30:27.301 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:30:27 compute-0 systemd[1]: Started libpod-conmon-533aa05d3161819cd0a2a80d8c311ee3debf8cbb4cca3e3f3c4aec555e031771.scope.
Nov 25 16:30:27 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:30:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/972254736acb7813ada732b95c02a171ed8ca59846d32b38baa12f81aa1e9009/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:30:27 compute-0 nova_compute[254092]: 2025-11-25 16:30:27.466 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:27 compute-0 podman[287042]: 2025-11-25 16:30:27.608481412 +0000 UTC m=+1.500284781 container init 533aa05d3161819cd0a2a80d8c311ee3debf8cbb4cca3e3f3c4aec555e031771 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6ab64ae8-b8fa-4795-a243-9ebe45233e37, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 25 16:30:27 compute-0 podman[287042]: 2025-11-25 16:30:27.6142942 +0000 UTC m=+1.506097549 container start 533aa05d3161819cd0a2a80d8c311ee3debf8cbb4cca3e3f3c4aec555e031771 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6ab64ae8-b8fa-4795-a243-9ebe45233e37, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 16:30:27 compute-0 neutron-haproxy-ovnmeta-6ab64ae8-b8fa-4795-a243-9ebe45233e37[287137]: [NOTICE]   (287144) : New worker (287146) forked
Nov 25 16:30:27 compute-0 neutron-haproxy-ovnmeta-6ab64ae8-b8fa-4795-a243-9ebe45233e37[287137]: [NOTICE]   (287144) : Loading success.
Nov 25 16:30:27 compute-0 ceph-mon[74985]: pgmap v1286: 321 pgs: 321 active+clean; 451 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 132 KiB/s rd, 2.9 MiB/s wr, 96 op/s
Nov 25 16:30:27 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:30:28 compute-0 nova_compute[254092]: 2025-11-25 16:30:28.220 254096 DEBUG oslo_concurrency.lockutils [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "df0db130-3ffa-4a60-8f7d-fb285a797631" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:28 compute-0 nova_compute[254092]: 2025-11-25 16:30:28.222 254096 DEBUG oslo_concurrency.lockutils [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "df0db130-3ffa-4a60-8f7d-fb285a797631" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:28 compute-0 nova_compute[254092]: 2025-11-25 16:30:28.222 254096 DEBUG oslo_concurrency.lockutils [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:28 compute-0 nova_compute[254092]: 2025-11-25 16:30:28.222 254096 DEBUG oslo_concurrency.lockutils [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:28 compute-0 nova_compute[254092]: 2025-11-25 16:30:28.222 254096 DEBUG oslo_concurrency.lockutils [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:28 compute-0 nova_compute[254092]: 2025-11-25 16:30:28.224 254096 INFO nova.compute.manager [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Terminating instance
Nov 25 16:30:28 compute-0 nova_compute[254092]: 2025-11-25 16:30:28.224 254096 DEBUG nova.compute.manager [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:30:28 compute-0 nova_compute[254092]: 2025-11-25 16:30:28.747 254096 INFO nova.network.neutron [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Port 09e835b8-70c9-4cb4-bbc2-63fab5f2592e from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Nov 25 16:30:28 compute-0 nova_compute[254092]: 2025-11-25 16:30:28.747 254096 DEBUG nova.network.neutron [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Updating instance_info_cache with network_info: [{"id": "02104fc6-3780-400d-a6c2-577082384680", "address": "fa:16:3e:ff:1b:ad", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02104fc6-37", "ovs_interfaceid": "02104fc6-3780-400d-a6c2-577082384680", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:30:28 compute-0 nova_compute[254092]: 2025-11-25 16:30:28.759 254096 DEBUG oslo_concurrency.lockutils [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Releasing lock "refresh_cache-df0db130-3ffa-4a60-8f7d-fb285a797631" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:30:28 compute-0 nova_compute[254092]: 2025-11-25 16:30:28.783 254096 DEBUG oslo_concurrency.lockutils [None req-df39a297-ed69-4204-bb02-63754320a692 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "interface-df0db130-3ffa-4a60-8f7d-fb285a797631-09e835b8-70c9-4cb4-bbc2-63fab5f2592e" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 4.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:28 compute-0 nova_compute[254092]: 2025-11-25 16:30:28.843 254096 DEBUG oslo_concurrency.processutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dc3c86a9-91cf-42fb-b11c-7de3305d8388/disk.config dc3c86a9-91cf-42fb-b11c-7de3305d8388_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:30:28 compute-0 nova_compute[254092]: 2025-11-25 16:30:28.844 254096 INFO nova.virt.libvirt.driver [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Deleting local config drive /var/lib/nova/instances/dc3c86a9-91cf-42fb-b11c-7de3305d8388/disk.config because it was imported into RBD.
Nov 25 16:30:28 compute-0 kernel: tap02104fc6-37 (unregistering): left promiscuous mode
Nov 25 16:30:28 compute-0 NetworkManager[48891]: <info>  [1764088228.8495] device (tap02104fc6-37): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:30:28 compute-0 nova_compute[254092]: 2025-11-25 16:30:28.886 254096 DEBUG nova.compute.manager [req-9a546a18-b15e-482e-b411-9ca8ebb2e50d req-82f2aa5b-400c-40be-950c-d3b7332669d3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Received event network-vif-plugged-fb46dd7a-52d4-44cb-b99e-81d7d653885c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:30:28 compute-0 nova_compute[254092]: 2025-11-25 16:30:28.887 254096 DEBUG oslo_concurrency.lockutils [req-9a546a18-b15e-482e-b411-9ca8ebb2e50d req-82f2aa5b-400c-40be-950c-d3b7332669d3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "33b19faf-57e1-463b-8b4a-b50479a0ef0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:28 compute-0 nova_compute[254092]: 2025-11-25 16:30:28.888 254096 DEBUG oslo_concurrency.lockutils [req-9a546a18-b15e-482e-b411-9ca8ebb2e50d req-82f2aa5b-400c-40be-950c-d3b7332669d3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "33b19faf-57e1-463b-8b4a-b50479a0ef0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:28 compute-0 nova_compute[254092]: 2025-11-25 16:30:28.888 254096 DEBUG oslo_concurrency.lockutils [req-9a546a18-b15e-482e-b411-9ca8ebb2e50d req-82f2aa5b-400c-40be-950c-d3b7332669d3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "33b19faf-57e1-463b-8b4a-b50479a0ef0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:28 compute-0 nova_compute[254092]: 2025-11-25 16:30:28.889 254096 DEBUG nova.compute.manager [req-9a546a18-b15e-482e-b411-9ca8ebb2e50d req-82f2aa5b-400c-40be-950c-d3b7332669d3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Processing event network-vif-plugged-fb46dd7a-52d4-44cb-b99e-81d7d653885c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:30:28 compute-0 nova_compute[254092]: 2025-11-25 16:30:28.889 254096 DEBUG nova.compute.manager [req-9a546a18-b15e-482e-b411-9ca8ebb2e50d req-82f2aa5b-400c-40be-950c-d3b7332669d3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Received event network-vif-plugged-fb46dd7a-52d4-44cb-b99e-81d7d653885c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:30:28 compute-0 nova_compute[254092]: 2025-11-25 16:30:28.890 254096 DEBUG oslo_concurrency.lockutils [req-9a546a18-b15e-482e-b411-9ca8ebb2e50d req-82f2aa5b-400c-40be-950c-d3b7332669d3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "33b19faf-57e1-463b-8b4a-b50479a0ef0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:28 compute-0 nova_compute[254092]: 2025-11-25 16:30:28.890 254096 DEBUG oslo_concurrency.lockutils [req-9a546a18-b15e-482e-b411-9ca8ebb2e50d req-82f2aa5b-400c-40be-950c-d3b7332669d3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "33b19faf-57e1-463b-8b4a-b50479a0ef0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:28 compute-0 nova_compute[254092]: 2025-11-25 16:30:28.891 254096 DEBUG oslo_concurrency.lockutils [req-9a546a18-b15e-482e-b411-9ca8ebb2e50d req-82f2aa5b-400c-40be-950c-d3b7332669d3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "33b19faf-57e1-463b-8b4a-b50479a0ef0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:28 compute-0 nova_compute[254092]: 2025-11-25 16:30:28.892 254096 DEBUG nova.compute.manager [req-9a546a18-b15e-482e-b411-9ca8ebb2e50d req-82f2aa5b-400c-40be-950c-d3b7332669d3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] No waiting events found dispatching network-vif-plugged-fb46dd7a-52d4-44cb-b99e-81d7d653885c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:30:28 compute-0 nova_compute[254092]: 2025-11-25 16:30:28.892 254096 WARNING nova.compute.manager [req-9a546a18-b15e-482e-b411-9ca8ebb2e50d req-82f2aa5b-400c-40be-950c-d3b7332669d3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Received unexpected event network-vif-plugged-fb46dd7a-52d4-44cb-b99e-81d7d653885c for instance with vm_state building and task_state spawning.
Nov 25 16:30:28 compute-0 nova_compute[254092]: 2025-11-25 16:30:28.893 254096 DEBUG nova.compute.manager [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:30:28 compute-0 nova_compute[254092]: 2025-11-25 16:30:28.898 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088228.8981485, 33b19faf-57e1-463b-8b4a-b50479a0ef0f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:30:28 compute-0 nova_compute[254092]: 2025-11-25 16:30:28.899 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] VM Resumed (Lifecycle Event)
Nov 25 16:30:28 compute-0 nova_compute[254092]: 2025-11-25 16:30:28.901 254096 DEBUG nova.virt.libvirt.driver [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:30:28 compute-0 nova_compute[254092]: 2025-11-25 16:30:28.910 254096 INFO nova.virt.libvirt.driver [-] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Instance spawned successfully.
Nov 25 16:30:28 compute-0 nova_compute[254092]: 2025-11-25 16:30:28.911 254096 DEBUG nova.virt.libvirt.driver [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:30:28 compute-0 ovn_controller[153477]: 2025-11-25T16:30:28Z|00113|binding|INFO|Releasing lport 02104fc6-3780-400d-a6c2-577082384680 from this chassis (sb_readonly=0)
Nov 25 16:30:28 compute-0 nova_compute[254092]: 2025-11-25 16:30:28.927 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:28 compute-0 ovn_controller[153477]: 2025-11-25T16:30:28Z|00114|binding|INFO|Setting lport 02104fc6-3780-400d-a6c2-577082384680 down in Southbound
Nov 25 16:30:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1287: 321 pgs: 321 active+clean; 451 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 65 op/s
Nov 25 16:30:28 compute-0 ovn_controller[153477]: 2025-11-25T16:30:28Z|00115|binding|INFO|Removing iface tap02104fc6-37 ovn-installed in OVS
Nov 25 16:30:28 compute-0 nova_compute[254092]: 2025-11-25 16:30:28.928 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:30:28 compute-0 nova_compute[254092]: 2025-11-25 16:30:28.929 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:28 compute-0 nova_compute[254092]: 2025-11-25 16:30:28.941 254096 DEBUG nova.virt.libvirt.driver [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:30:28 compute-0 nova_compute[254092]: 2025-11-25 16:30:28.942 254096 DEBUG nova.virt.libvirt.driver [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:30:28 compute-0 nova_compute[254092]: 2025-11-25 16:30:28.942 254096 DEBUG nova.virt.libvirt.driver [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:30:28 compute-0 nova_compute[254092]: 2025-11-25 16:30:28.942 254096 DEBUG nova.virt.libvirt.driver [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:30:28 compute-0 nova_compute[254092]: 2025-11-25 16:30:28.943 254096 DEBUG nova.virt.libvirt.driver [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:30:28 compute-0 nova_compute[254092]: 2025-11-25 16:30:28.943 254096 DEBUG nova.virt.libvirt.driver [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:30:28 compute-0 nova_compute[254092]: 2025-11-25 16:30:28.946 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:28 compute-0 nova_compute[254092]: 2025-11-25 16:30:28.947 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:30:28 compute-0 NetworkManager[48891]: <info>  [1764088228.9509] manager: (tap8daa55ae-69): new Tun device (/org/freedesktop/NetworkManager/Devices/69)
Nov 25 16:30:28 compute-0 systemd-udevd[287164]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:30:28 compute-0 kernel: tap8daa55ae-69: entered promiscuous mode
Nov 25 16:30:28 compute-0 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000018.scope: Deactivated successfully.
Nov 25 16:30:28 compute-0 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000018.scope: Consumed 15.640s CPU time.
Nov 25 16:30:28 compute-0 ovn_controller[153477]: 2025-11-25T16:30:28Z|00116|if_status|INFO|Not updating pb chassis for 8daa55ae-6950-4c2f-8121-ce02930ab1d9 now as sb is readonly
Nov 25 16:30:28 compute-0 systemd-machined[216343]: Machine qemu-26-instance-00000018 terminated.
Nov 25 16:30:28 compute-0 nova_compute[254092]: 2025-11-25 16:30:28.955 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:28 compute-0 NetworkManager[48891]: <info>  [1764088228.9629] device (tap8daa55ae-69): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:30:28 compute-0 NetworkManager[48891]: <info>  [1764088228.9643] device (tap8daa55ae-69): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:30:28 compute-0 nova_compute[254092]: 2025-11-25 16:30:28.982 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:30:28 compute-0 nova_compute[254092]: 2025-11-25 16:30:28.982 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:28 compute-0 systemd-machined[216343]: New machine qemu-29-instance-0000001a.
Nov 25 16:30:28 compute-0 nova_compute[254092]: 2025-11-25 16:30:28.987 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:28 compute-0 systemd[1]: Started Virtual Machine qemu-29-instance-0000001a.
Nov 25 16:30:29 compute-0 NetworkManager[48891]: <info>  [1764088229.0571] manager: (tap02104fc6-37): new Tun device (/org/freedesktop/NetworkManager/Devices/70)
Nov 25 16:30:29 compute-0 nova_compute[254092]: 2025-11-25 16:30:29.071 254096 INFO nova.virt.libvirt.driver [-] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Instance destroyed successfully.
Nov 25 16:30:29 compute-0 nova_compute[254092]: 2025-11-25 16:30:29.071 254096 DEBUG nova.objects.instance [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'resources' on Instance uuid df0db130-3ffa-4a60-8f7d-fb285a797631 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:30:29 compute-0 nova_compute[254092]: 2025-11-25 16:30:29.081 254096 DEBUG nova.virt.libvirt.vif [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:29:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-175151571',display_name='tempest-AttachInterfacesTestJSON-server-175151571',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-175151571',id=24,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNNpuTPFaMmG67SHL6Vnzpxg++tbzc+w8Hx9mEMHBp+Ppk67spJlgihZsy04EvrT9EPPCFaJqqycyDW1OLzG+Lc4OXw1D0urNXa3GNkPoKTEzwF+kapepgvV/e/LIpx3gw==',key_name='tempest-keypair-2002144633',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:29:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-wvdcod6f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:29:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=df0db130-3ffa-4a60-8f7d-fb285a797631,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "02104fc6-3780-400d-a6c2-577082384680", "address": "fa:16:3e:ff:1b:ad", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02104fc6-37", "ovs_interfaceid": "02104fc6-3780-400d-a6c2-577082384680", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:30:29 compute-0 nova_compute[254092]: 2025-11-25 16:30:29.082 254096 DEBUG nova.network.os_vif_util [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "02104fc6-3780-400d-a6c2-577082384680", "address": "fa:16:3e:ff:1b:ad", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02104fc6-37", "ovs_interfaceid": "02104fc6-3780-400d-a6c2-577082384680", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:30:29 compute-0 nova_compute[254092]: 2025-11-25 16:30:29.082 254096 DEBUG nova.network.os_vif_util [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ff:1b:ad,bridge_name='br-int',has_traffic_filtering=True,id=02104fc6-3780-400d-a6c2-577082384680,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02104fc6-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:30:29 compute-0 nova_compute[254092]: 2025-11-25 16:30:29.083 254096 DEBUG os_vif [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ff:1b:ad,bridge_name='br-int',has_traffic_filtering=True,id=02104fc6-3780-400d-a6c2-577082384680,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02104fc6-37') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:30:29 compute-0 nova_compute[254092]: 2025-11-25 16:30:29.084 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:29 compute-0 nova_compute[254092]: 2025-11-25 16:30:29.085 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02104fc6-37, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:29 compute-0 nova_compute[254092]: 2025-11-25 16:30:29.086 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:29 compute-0 nova_compute[254092]: 2025-11-25 16:30:29.087 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:29 compute-0 nova_compute[254092]: 2025-11-25 16:30:29.089 254096 INFO os_vif [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ff:1b:ad,bridge_name='br-int',has_traffic_filtering=True,id=02104fc6-3780-400d-a6c2-577082384680,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02104fc6-37')
Nov 25 16:30:29 compute-0 ovn_controller[153477]: 2025-11-25T16:30:29Z|00117|binding|INFO|Claiming lport 8daa55ae-6950-4c2f-8121-ce02930ab1d9 for this chassis.
Nov 25 16:30:29 compute-0 ovn_controller[153477]: 2025-11-25T16:30:29Z|00118|binding|INFO|8daa55ae-6950-4c2f-8121-ce02930ab1d9: Claiming fa:16:3e:e0:3e:50 10.100.0.13
Nov 25 16:30:29 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:29.205 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:1b:ad 10.100.0.14'], port_security=['fa:16:3e:ff:1b:ad 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'df0db130-3ffa-4a60-8f7d-fb285a797631', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36fd407edf16424e96041f57fccb9e2b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '55a7690b-4aae-4eb8-9614-a3e59161db74', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.193'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a0fe150-4b32-4abf-ad06-055b2d97facb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=02104fc6-3780-400d-a6c2-577082384680) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:30:29 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:29.207 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 02104fc6-3780-400d-a6c2-577082384680 in datapath 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d unbound from our chassis
Nov 25 16:30:29 compute-0 ovn_controller[153477]: 2025-11-25T16:30:29Z|00119|binding|INFO|Setting lport 8daa55ae-6950-4c2f-8121-ce02930ab1d9 ovn-installed in OVS
Nov 25 16:30:29 compute-0 nova_compute[254092]: 2025-11-25 16:30:29.208 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:29 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:29.209 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:30:29 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:29.210 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ef59f743-86ca-4f01-95fd-0aac94773f3d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:29 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:29.211 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d namespace which is not needed anymore
Nov 25 16:30:29 compute-0 ceph-mon[74985]: pgmap v1287: 321 pgs: 321 active+clean; 451 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 65 op/s
Nov 25 16:30:29 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:29.379 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:3e:50 10.100.0.13'], port_security=['fa:16:3e:e0:3e:50 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'dc3c86a9-91cf-42fb-b11c-7de3305d8388', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1d276419-01f1-4c77-9031-28a38923f36b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e1bcf74bb1148a3a0f388525c96c919', 'neutron:revision_number': '2', 'neutron:security_group_ids': '93dbcee8-770d-45ab-a643-3e941ad2b6fa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=71f4e8d2-793d-4f17-8ffd-c8df15f9380b, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=8daa55ae-6950-4c2f-8121-ce02930ab1d9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:30:29 compute-0 ovn_controller[153477]: 2025-11-25T16:30:29Z|00120|binding|INFO|Setting lport 8daa55ae-6950-4c2f-8121-ce02930ab1d9 up in Southbound
Nov 25 16:30:29 compute-0 neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d[285397]: [NOTICE]   (285417) : haproxy version is 2.8.14-c23fe91
Nov 25 16:30:29 compute-0 neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d[285397]: [NOTICE]   (285417) : path to executable is /usr/sbin/haproxy
Nov 25 16:30:29 compute-0 neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d[285397]: [WARNING]  (285417) : Exiting Master process...
Nov 25 16:30:29 compute-0 neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d[285397]: [WARNING]  (285417) : Exiting Master process...
Nov 25 16:30:29 compute-0 neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d[285397]: [ALERT]    (285417) : Current worker (285428) exited with code 143 (Terminated)
Nov 25 16:30:29 compute-0 neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d[285397]: [WARNING]  (285417) : All workers exited. Exiting... (0)
Nov 25 16:30:29 compute-0 nova_compute[254092]: 2025-11-25 16:30:29.500 254096 INFO nova.compute.manager [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Took 16.85 seconds to spawn the instance on the hypervisor.
Nov 25 16:30:29 compute-0 nova_compute[254092]: 2025-11-25 16:30:29.500 254096 DEBUG nova.compute.manager [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:30:29 compute-0 systemd[1]: libpod-9e5526b58a6d16fb232bbcd983847cd8d041205bbd4e455cffa4a67f4a7afd2d.scope: Deactivated successfully.
Nov 25 16:30:29 compute-0 podman[287231]: 2025-11-25 16:30:29.511586319 +0000 UTC m=+0.206710172 container died 9e5526b58a6d16fb232bbcd983847cd8d041205bbd4e455cffa4a67f4a7afd2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 16:30:29 compute-0 nova_compute[254092]: 2025-11-25 16:30:29.686 254096 INFO nova.compute.manager [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Took 18.15 seconds to build instance.
Nov 25 16:30:29 compute-0 nova_compute[254092]: 2025-11-25 16:30:29.722 254096 DEBUG oslo_concurrency.lockutils [None req-31bb3a51-cab6-4c5d-a5ee-112a2382cefc fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Lock "33b19faf-57e1-463b-8b4a-b50479a0ef0f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.277s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:29 compute-0 nova_compute[254092]: 2025-11-25 16:30:29.824 254096 INFO nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Deleting instance files /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f_del
Nov 25 16:30:29 compute-0 nova_compute[254092]: 2025-11-25 16:30:29.825 254096 INFO nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Deletion of /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f_del complete
Nov 25 16:30:30 compute-0 nova_compute[254092]: 2025-11-25 16:30:30.079 254096 DEBUG nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:30:30 compute-0 nova_compute[254092]: 2025-11-25 16:30:30.080 254096 INFO nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Creating image(s)
Nov 25 16:30:30 compute-0 nova_compute[254092]: 2025-11-25 16:30:30.099 254096 DEBUG nova.storage.rbd_utils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 3375e096-321c-459b-8b6a-e085bb62872f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:30:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9e5526b58a6d16fb232bbcd983847cd8d041205bbd4e455cffa4a67f4a7afd2d-userdata-shm.mount: Deactivated successfully.
Nov 25 16:30:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-3fe048199884e7d28c39687c78a39d5bf8b12cab03e5bd84735091209e7c9cff-merged.mount: Deactivated successfully.
Nov 25 16:30:30 compute-0 nova_compute[254092]: 2025-11-25 16:30:30.128 254096 DEBUG nova.storage.rbd_utils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 3375e096-321c-459b-8b6a-e085bb62872f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:30:30 compute-0 nova_compute[254092]: 2025-11-25 16:30:30.148 254096 DEBUG nova.storage.rbd_utils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 3375e096-321c-459b-8b6a-e085bb62872f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:30:30 compute-0 nova_compute[254092]: 2025-11-25 16:30:30.152 254096 DEBUG oslo_concurrency.processutils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:30:30 compute-0 nova_compute[254092]: 2025-11-25 16:30:30.183 254096 DEBUG nova.compute.manager [req-53ed34ad-de8d-4bc0-a636-f509d6120dbc req-4e5e5ad4-0377-4793-916e-9f289b187a5f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Received event network-vif-plugged-8daa55ae-6950-4c2f-8121-ce02930ab1d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:30:30 compute-0 nova_compute[254092]: 2025-11-25 16:30:30.184 254096 DEBUG oslo_concurrency.lockutils [req-53ed34ad-de8d-4bc0-a636-f509d6120dbc req-4e5e5ad4-0377-4793-916e-9f289b187a5f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "dc3c86a9-91cf-42fb-b11c-7de3305d8388-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:30 compute-0 nova_compute[254092]: 2025-11-25 16:30:30.184 254096 DEBUG oslo_concurrency.lockutils [req-53ed34ad-de8d-4bc0-a636-f509d6120dbc req-4e5e5ad4-0377-4793-916e-9f289b187a5f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "dc3c86a9-91cf-42fb-b11c-7de3305d8388-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:30 compute-0 nova_compute[254092]: 2025-11-25 16:30:30.184 254096 DEBUG oslo_concurrency.lockutils [req-53ed34ad-de8d-4bc0-a636-f509d6120dbc req-4e5e5ad4-0377-4793-916e-9f289b187a5f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "dc3c86a9-91cf-42fb-b11c-7de3305d8388-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:30 compute-0 nova_compute[254092]: 2025-11-25 16:30:30.185 254096 DEBUG nova.compute.manager [req-53ed34ad-de8d-4bc0-a636-f509d6120dbc req-4e5e5ad4-0377-4793-916e-9f289b187a5f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Processing event network-vif-plugged-8daa55ae-6950-4c2f-8121-ce02930ab1d9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:30:30 compute-0 nova_compute[254092]: 2025-11-25 16:30:30.223 254096 DEBUG oslo_concurrency.processutils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:30:30 compute-0 nova_compute[254092]: 2025-11-25 16:30:30.224 254096 DEBUG oslo_concurrency.lockutils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:30 compute-0 nova_compute[254092]: 2025-11-25 16:30:30.224 254096 DEBUG oslo_concurrency.lockutils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:30 compute-0 nova_compute[254092]: 2025-11-25 16:30:30.225 254096 DEBUG oslo_concurrency.lockutils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:30 compute-0 nova_compute[254092]: 2025-11-25 16:30:30.259 254096 DEBUG nova.storage.rbd_utils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 3375e096-321c-459b-8b6a-e085bb62872f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:30:30 compute-0 nova_compute[254092]: 2025-11-25 16:30:30.270 254096 DEBUG oslo_concurrency.processutils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 3375e096-321c-459b-8b6a-e085bb62872f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:30:30 compute-0 nova_compute[254092]: 2025-11-25 16:30:30.331 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088230.3313322, dc3c86a9-91cf-42fb-b11c-7de3305d8388 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:30:30 compute-0 nova_compute[254092]: 2025-11-25 16:30:30.332 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] VM Started (Lifecycle Event)
Nov 25 16:30:30 compute-0 nova_compute[254092]: 2025-11-25 16:30:30.335 254096 DEBUG nova.compute.manager [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:30:30 compute-0 nova_compute[254092]: 2025-11-25 16:30:30.340 254096 DEBUG nova.virt.libvirt.driver [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:30:30 compute-0 nova_compute[254092]: 2025-11-25 16:30:30.358 254096 INFO nova.virt.libvirt.driver [-] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Instance spawned successfully.
Nov 25 16:30:30 compute-0 nova_compute[254092]: 2025-11-25 16:30:30.360 254096 DEBUG nova.virt.libvirt.driver [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:30:30 compute-0 nova_compute[254092]: 2025-11-25 16:30:30.361 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:30:30 compute-0 nova_compute[254092]: 2025-11-25 16:30:30.364 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:30:30 compute-0 nova_compute[254092]: 2025-11-25 16:30:30.391 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:30:30 compute-0 nova_compute[254092]: 2025-11-25 16:30:30.392 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088230.335023, dc3c86a9-91cf-42fb-b11c-7de3305d8388 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:30:30 compute-0 nova_compute[254092]: 2025-11-25 16:30:30.392 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] VM Paused (Lifecycle Event)
Nov 25 16:30:30 compute-0 nova_compute[254092]: 2025-11-25 16:30:30.398 254096 DEBUG nova.virt.libvirt.driver [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:30:30 compute-0 nova_compute[254092]: 2025-11-25 16:30:30.399 254096 DEBUG nova.virt.libvirt.driver [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:30:30 compute-0 nova_compute[254092]: 2025-11-25 16:30:30.399 254096 DEBUG nova.virt.libvirt.driver [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:30:30 compute-0 nova_compute[254092]: 2025-11-25 16:30:30.399 254096 DEBUG nova.virt.libvirt.driver [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:30:30 compute-0 nova_compute[254092]: 2025-11-25 16:30:30.400 254096 DEBUG nova.virt.libvirt.driver [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:30:30 compute-0 nova_compute[254092]: 2025-11-25 16:30:30.400 254096 DEBUG nova.virt.libvirt.driver [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:30:30 compute-0 nova_compute[254092]: 2025-11-25 16:30:30.417 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:30:30 compute-0 nova_compute[254092]: 2025-11-25 16:30:30.420 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088230.3396242, dc3c86a9-91cf-42fb-b11c-7de3305d8388 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:30:30 compute-0 nova_compute[254092]: 2025-11-25 16:30:30.421 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] VM Resumed (Lifecycle Event)
Nov 25 16:30:30 compute-0 nova_compute[254092]: 2025-11-25 16:30:30.437 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:30:30 compute-0 nova_compute[254092]: 2025-11-25 16:30:30.440 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:30:30 compute-0 podman[287231]: 2025-11-25 16:30:30.451171982 +0000 UTC m=+1.146295835 container cleanup 9e5526b58a6d16fb232bbcd983847cd8d041205bbd4e455cffa4a67f4a7afd2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:30:30 compute-0 systemd[1]: libpod-conmon-9e5526b58a6d16fb232bbcd983847cd8d041205bbd4e455cffa4a67f4a7afd2d.scope: Deactivated successfully.
Nov 25 16:30:30 compute-0 nova_compute[254092]: 2025-11-25 16:30:30.484 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:30:30 compute-0 nova_compute[254092]: 2025-11-25 16:30:30.594 254096 INFO nova.compute.manager [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Took 12.31 seconds to spawn the instance on the hypervisor.
Nov 25 16:30:30 compute-0 nova_compute[254092]: 2025-11-25 16:30:30.595 254096 DEBUG nova.compute.manager [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:30:30 compute-0 nova_compute[254092]: 2025-11-25 16:30:30.791 254096 INFO nova.compute.manager [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Took 13.79 seconds to build instance.
Nov 25 16:30:30 compute-0 nova_compute[254092]: 2025-11-25 16:30:30.840 254096 DEBUG oslo_concurrency.lockutils [None req-f7c11371-8068-4a70-a782-363d44c79fb7 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Lock "dc3c86a9-91cf-42fb-b11c-7de3305d8388" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.076s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1288: 321 pgs: 321 active+clean; 451 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 1017 KiB/s rd, 1.8 MiB/s wr, 113 op/s
Nov 25 16:30:31 compute-0 podman[287396]: 2025-11-25 16:30:31.320889264 +0000 UTC m=+0.847187171 container remove 9e5526b58a6d16fb232bbcd983847cd8d041205bbd4e455cffa4a67f4a7afd2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.326 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6dde58c4-0bc1-4148-8b59-f89c2ea57ee2]: (4, ('Tue Nov 25 04:30:29 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d (9e5526b58a6d16fb232bbcd983847cd8d041205bbd4e455cffa4a67f4a7afd2d)\n9e5526b58a6d16fb232bbcd983847cd8d041205bbd4e455cffa4a67f4a7afd2d\nTue Nov 25 04:30:30 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d (9e5526b58a6d16fb232bbcd983847cd8d041205bbd4e455cffa4a67f4a7afd2d)\n9e5526b58a6d16fb232bbcd983847cd8d041205bbd4e455cffa4a67f4a7afd2d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.328 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f54a3ef7-43b4-49fe-93ef-e734ce73019b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.328 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52e7d5b9-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:31 compute-0 nova_compute[254092]: 2025-11-25 16:30:31.330 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:31 compute-0 kernel: tap52e7d5b9-00: left promiscuous mode
Nov 25 16:30:31 compute-0 nova_compute[254092]: 2025-11-25 16:30:31.332 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.334 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[99fb2cc9-0054-4870-90fd-5981750ddb9e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:31 compute-0 ceph-mon[74985]: pgmap v1288: 321 pgs: 321 active+clean; 451 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 1017 KiB/s rd, 1.8 MiB/s wr, 113 op/s
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.348 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5aa3c591-7215-46ec-bd89-88ace77c7640]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.351 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[622d48c9-41d4-4959-bfaa-a84ce1ccbaac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:31 compute-0 nova_compute[254092]: 2025-11-25 16:30:31.352 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.369 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[437a4e85-4867-4319-800f-29a8799972de]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464499, 'reachable_time': 32001, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287412, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:31 compute-0 systemd[1]: run-netns-ovnmeta\x2d52e7d5b9\x2d0570\x2d4e5c\x2db3da\x2d9dfcb924b83d.mount: Deactivated successfully.
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.375 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.375 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[02d46be6-6139-4204-b972-e1265a526cb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.376 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 8daa55ae-6950-4c2f-8121-ce02930ab1d9 in datapath 1d276419-01f1-4c77-9031-28a38923f36b unbound from our chassis
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.378 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1d276419-01f1-4c77-9031-28a38923f36b
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.389 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b5ab923d-08ef-425e-bc5d-9a3713bf4726]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.390 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1d276419-01 in ovnmeta-1d276419-01f1-4c77-9031-28a38923f36b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.392 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1d276419-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.393 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7adca014-382b-4b8e-838e-7d1c92e58d16]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.394 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a7379971-b511-4573-b718-90d0a773ad04]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.404 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[4446d455-0d7f-4fb5-ad2f-d2c4db9cbd6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.416 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[55630daa-5bdd-4a2a-bfb5-a766d3009859]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.443 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[23c6bdde-85a1-4ccd-bcef-280a1807d715]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.449 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[430e6186-6d8a-46d1-aae3-7068f845ac54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:31 compute-0 NetworkManager[48891]: <info>  [1764088231.4514] manager: (tap1d276419-00): new Veth device (/org/freedesktop/NetworkManager/Devices/71)
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.487 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f5356c79-c783-4d2c-b66e-7ec7844e7b1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.490 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ebf10e76-9c0f-468d-899d-585b15cc7f6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:31 compute-0 NetworkManager[48891]: <info>  [1764088231.5158] device (tap1d276419-00): carrier: link connected
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.521 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6e4b36b3-1728-418a-bae3-e48b0dc78906]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.538 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9e9c1231-7c0e-4df8-b2fb-fc1a4aacea0f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1d276419-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:0e:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 468910, 'reachable_time': 43373, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287440, 'error': None, 'target': 'ovnmeta-1d276419-01f1-4c77-9031-28a38923f36b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.556 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c530ef1d-3c92-4e69-9527-82140712af9d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed5:e94'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 468910, 'tstamp': 468910}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287441, 'error': None, 'target': 'ovnmeta-1d276419-01f1-4c77-9031-28a38923f36b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.577 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6f8ae60b-82b3-430c-8a0c-941d30da7ff2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1d276419-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:0e:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 468910, 'reachable_time': 43373, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 287442, 'error': None, 'target': 'ovnmeta-1d276419-01f1-4c77-9031-28a38923f36b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.610 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ba1791b1-e594-4e93-9884-d68b5082397c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.674 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a528fa4d-d03c-4037-9f90-5bf34a8c7e87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.676 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1d276419-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.676 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.676 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1d276419-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:31 compute-0 nova_compute[254092]: 2025-11-25 16:30:31.678 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:31 compute-0 NetworkManager[48891]: <info>  [1764088231.6790] manager: (tap1d276419-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Nov 25 16:30:31 compute-0 kernel: tap1d276419-00: entered promiscuous mode
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.680 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1d276419-00, col_values=(('external_ids', {'iface-id': 'a9472e97-6aa6-485c-8695-66d5c06679d5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:31 compute-0 nova_compute[254092]: 2025-11-25 16:30:31.681 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:31 compute-0 ovn_controller[153477]: 2025-11-25T16:30:31Z|00121|binding|INFO|Releasing lport a9472e97-6aa6-485c-8695-66d5c06679d5 from this chassis (sb_readonly=0)
Nov 25 16:30:31 compute-0 nova_compute[254092]: 2025-11-25 16:30:31.698 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.698 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1d276419-01f1-4c77-9031-28a38923f36b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1d276419-01f1-4c77-9031-28a38923f36b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.701 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8de3374f-5d8c-4a74-b5ea-27030381238a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.702 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-1d276419-01f1-4c77-9031-28a38923f36b
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/1d276419-01f1-4c77-9031-28a38923f36b.pid.haproxy
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 1d276419-01f1-4c77-9031-28a38923f36b
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:30:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:31.702 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1d276419-01f1-4c77-9031-28a38923f36b', 'env', 'PROCESS_TAG=haproxy-1d276419-01f1-4c77-9031-28a38923f36b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1d276419-01f1-4c77-9031-28a38923f36b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:30:32 compute-0 nova_compute[254092]: 2025-11-25 16:30:32.119 254096 DEBUG nova.compute.manager [req-1daac077-b1fe-4262-82f4-a9680a95e430 req-b179c079-2294-4768-99fb-063ede0f21df a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Received event network-vif-unplugged-02104fc6-3780-400d-a6c2-577082384680 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:30:32 compute-0 nova_compute[254092]: 2025-11-25 16:30:32.119 254096 DEBUG oslo_concurrency.lockutils [req-1daac077-b1fe-4262-82f4-a9680a95e430 req-b179c079-2294-4768-99fb-063ede0f21df a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:32 compute-0 nova_compute[254092]: 2025-11-25 16:30:32.120 254096 DEBUG oslo_concurrency.lockutils [req-1daac077-b1fe-4262-82f4-a9680a95e430 req-b179c079-2294-4768-99fb-063ede0f21df a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:32 compute-0 nova_compute[254092]: 2025-11-25 16:30:32.120 254096 DEBUG oslo_concurrency.lockutils [req-1daac077-b1fe-4262-82f4-a9680a95e430 req-b179c079-2294-4768-99fb-063ede0f21df a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:32 compute-0 nova_compute[254092]: 2025-11-25 16:30:32.120 254096 DEBUG nova.compute.manager [req-1daac077-b1fe-4262-82f4-a9680a95e430 req-b179c079-2294-4768-99fb-063ede0f21df a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] No waiting events found dispatching network-vif-unplugged-02104fc6-3780-400d-a6c2-577082384680 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:30:32 compute-0 nova_compute[254092]: 2025-11-25 16:30:32.120 254096 DEBUG nova.compute.manager [req-1daac077-b1fe-4262-82f4-a9680a95e430 req-b179c079-2294-4768-99fb-063ede0f21df a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Received event network-vif-unplugged-02104fc6-3780-400d-a6c2-577082384680 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:30:32 compute-0 podman[287474]: 2025-11-25 16:30:32.032161288 +0000 UTC m=+0.020949051 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:30:32 compute-0 podman[287474]: 2025-11-25 16:30:32.356152849 +0000 UTC m=+0.344940592 container create f4da4bcb1aa62f26067d74561a0d3e67471b86037ff00dfa9b7507201a28e3e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1d276419-01f1-4c77-9031-28a38923f36b, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:30:32 compute-0 systemd[1]: Started libpod-conmon-f4da4bcb1aa62f26067d74561a0d3e67471b86037ff00dfa9b7507201a28e3e0.scope.
Nov 25 16:30:32 compute-0 nova_compute[254092]: 2025-11-25 16:30:32.469 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:32 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:30:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4de00267f053678af55b710532281e3e8b0c27859e6b07f037a9b50c7bcde3c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:30:32 compute-0 nova_compute[254092]: 2025-11-25 16:30:32.676 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088217.6750712, 3375e096-321c-459b-8b6a-e085bb62872f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:30:32 compute-0 nova_compute[254092]: 2025-11-25 16:30:32.677 254096 INFO nova.compute.manager [-] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] VM Stopped (Lifecycle Event)
Nov 25 16:30:32 compute-0 nova_compute[254092]: 2025-11-25 16:30:32.678 254096 DEBUG nova.compute.manager [req-77a5b224-c712-4979-ace6-0d0e9075c911 req-a58db7b5-92c2-424e-b52d-110e8caf467b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Received event network-vif-plugged-8daa55ae-6950-4c2f-8121-ce02930ab1d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:30:32 compute-0 nova_compute[254092]: 2025-11-25 16:30:32.679 254096 DEBUG oslo_concurrency.lockutils [req-77a5b224-c712-4979-ace6-0d0e9075c911 req-a58db7b5-92c2-424e-b52d-110e8caf467b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "dc3c86a9-91cf-42fb-b11c-7de3305d8388-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:32 compute-0 nova_compute[254092]: 2025-11-25 16:30:32.679 254096 DEBUG oslo_concurrency.lockutils [req-77a5b224-c712-4979-ace6-0d0e9075c911 req-a58db7b5-92c2-424e-b52d-110e8caf467b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "dc3c86a9-91cf-42fb-b11c-7de3305d8388-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:32 compute-0 nova_compute[254092]: 2025-11-25 16:30:32.679 254096 DEBUG oslo_concurrency.lockutils [req-77a5b224-c712-4979-ace6-0d0e9075c911 req-a58db7b5-92c2-424e-b52d-110e8caf467b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "dc3c86a9-91cf-42fb-b11c-7de3305d8388-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:32 compute-0 nova_compute[254092]: 2025-11-25 16:30:32.679 254096 DEBUG nova.compute.manager [req-77a5b224-c712-4979-ace6-0d0e9075c911 req-a58db7b5-92c2-424e-b52d-110e8caf467b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] No waiting events found dispatching network-vif-plugged-8daa55ae-6950-4c2f-8121-ce02930ab1d9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:30:32 compute-0 nova_compute[254092]: 2025-11-25 16:30:32.680 254096 WARNING nova.compute.manager [req-77a5b224-c712-4979-ace6-0d0e9075c911 req-a58db7b5-92c2-424e-b52d-110e8caf467b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Received unexpected event network-vif-plugged-8daa55ae-6950-4c2f-8121-ce02930ab1d9 for instance with vm_state active and task_state None.
Nov 25 16:30:32 compute-0 nova_compute[254092]: 2025-11-25 16:30:32.696 254096 DEBUG nova.compute.manager [None req-24f29e88-a0eb-49f8-b44b-27436800971d - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:30:32 compute-0 podman[287474]: 2025-11-25 16:30:32.740146272 +0000 UTC m=+0.728934015 container init f4da4bcb1aa62f26067d74561a0d3e67471b86037ff00dfa9b7507201a28e3e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1d276419-01f1-4c77-9031-28a38923f36b, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 25 16:30:32 compute-0 podman[287474]: 2025-11-25 16:30:32.745718094 +0000 UTC m=+0.734505837 container start f4da4bcb1aa62f26067d74561a0d3e67471b86037ff00dfa9b7507201a28e3e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1d276419-01f1-4c77-9031-28a38923f36b, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:30:32 compute-0 neutron-haproxy-ovnmeta-1d276419-01f1-4c77-9031-28a38923f36b[287489]: [NOTICE]   (287493) : New worker (287495) forked
Nov 25 16:30:32 compute-0 neutron-haproxy-ovnmeta-1d276419-01f1-4c77-9031-28a38923f36b[287489]: [NOTICE]   (287493) : Loading success.
Nov 25 16:30:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:30:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1289: 321 pgs: 321 active+clean; 451 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 410 KiB/s wr, 114 op/s
Nov 25 16:30:33 compute-0 ceph-mon[74985]: pgmap v1289: 321 pgs: 321 active+clean; 451 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 410 KiB/s wr, 114 op/s
Nov 25 16:30:33 compute-0 nova_compute[254092]: 2025-11-25 16:30:33.141 254096 DEBUG oslo_concurrency.processutils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 3375e096-321c-459b-8b6a-e085bb62872f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.871s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:30:33 compute-0 nova_compute[254092]: 2025-11-25 16:30:33.270 254096 DEBUG nova.storage.rbd_utils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] resizing rbd image 3375e096-321c-459b-8b6a-e085bb62872f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.109 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.117 254096 DEBUG nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.118 254096 DEBUG nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Ensure instance console log exists: /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.118 254096 DEBUG oslo_concurrency.lockutils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.118 254096 DEBUG oslo_concurrency.lockutils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.119 254096 DEBUG oslo_concurrency.lockutils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.121 254096 DEBUG nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Start _get_guest_xml network_info=[{"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.125 254096 WARNING nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.129 254096 DEBUG nova.virt.libvirt.host [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.129 254096 DEBUG nova.virt.libvirt.host [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.131 254096 DEBUG nova.virt.libvirt.host [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.132 254096 DEBUG nova.virt.libvirt.host [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.132 254096 DEBUG nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.132 254096 DEBUG nova.virt.hardware [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.133 254096 DEBUG nova.virt.hardware [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.133 254096 DEBUG nova.virt.hardware [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.133 254096 DEBUG nova.virt.hardware [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.133 254096 DEBUG nova.virt.hardware [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.134 254096 DEBUG nova.virt.hardware [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.134 254096 DEBUG nova.virt.hardware [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.134 254096 DEBUG nova.virt.hardware [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.134 254096 DEBUG nova.virt.hardware [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.134 254096 DEBUG nova.virt.hardware [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.135 254096 DEBUG nova.virt.hardware [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.135 254096 DEBUG nova.objects.instance [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 3375e096-321c-459b-8b6a-e085bb62872f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.156 254096 DEBUG oslo_concurrency.processutils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.331 254096 DEBUG oslo_concurrency.lockutils [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Acquiring lock "dc3c86a9-91cf-42fb-b11c-7de3305d8388" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.332 254096 DEBUG oslo_concurrency.lockutils [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Lock "dc3c86a9-91cf-42fb-b11c-7de3305d8388" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.333 254096 DEBUG oslo_concurrency.lockutils [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Acquiring lock "dc3c86a9-91cf-42fb-b11c-7de3305d8388-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.333 254096 DEBUG oslo_concurrency.lockutils [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Lock "dc3c86a9-91cf-42fb-b11c-7de3305d8388-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.333 254096 DEBUG oslo_concurrency.lockutils [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Lock "dc3c86a9-91cf-42fb-b11c-7de3305d8388-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.335 254096 INFO nova.compute.manager [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Terminating instance
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.336 254096 DEBUG nova.compute.manager [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:30:34 compute-0 kernel: tap8daa55ae-69 (unregistering): left promiscuous mode
Nov 25 16:30:34 compute-0 NetworkManager[48891]: <info>  [1764088234.4023] device (tap8daa55ae-69): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:30:34 compute-0 ovn_controller[153477]: 2025-11-25T16:30:34Z|00122|binding|INFO|Releasing lport 8daa55ae-6950-4c2f-8121-ce02930ab1d9 from this chassis (sb_readonly=0)
Nov 25 16:30:34 compute-0 ovn_controller[153477]: 2025-11-25T16:30:34Z|00123|binding|INFO|Setting lport 8daa55ae-6950-4c2f-8121-ce02930ab1d9 down in Southbound
Nov 25 16:30:34 compute-0 ovn_controller[153477]: 2025-11-25T16:30:34Z|00124|binding|INFO|Removing iface tap8daa55ae-69 ovn-installed in OVS
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.456 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.474 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:34 compute-0 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Nov 25 16:30:34 compute-0 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000001a.scope: Consumed 4.649s CPU time.
Nov 25 16:30:34 compute-0 systemd-machined[216343]: Machine qemu-29-instance-0000001a terminated.
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.573 254096 INFO nova.virt.libvirt.driver [-] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Instance destroyed successfully.
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.574 254096 DEBUG nova.objects.instance [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Lazy-loading 'resources' on Instance uuid dc3c86a9-91cf-42fb-b11c-7de3305d8388 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:30:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:34.591 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:3e:50 10.100.0.13'], port_security=['fa:16:3e:e0:3e:50 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'dc3c86a9-91cf-42fb-b11c-7de3305d8388', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1d276419-01f1-4c77-9031-28a38923f36b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e1bcf74bb1148a3a0f388525c96c919', 'neutron:revision_number': '4', 'neutron:security_group_ids': '93dbcee8-770d-45ab-a643-3e941ad2b6fa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=71f4e8d2-793d-4f17-8ffd-c8df15f9380b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=8daa55ae-6950-4c2f-8121-ce02930ab1d9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:30:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:34.594 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 8daa55ae-6950-4c2f-8121-ce02930ab1d9 in datapath 1d276419-01f1-4c77-9031-28a38923f36b unbound from our chassis
Nov 25 16:30:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:34.596 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1d276419-01f1-4c77-9031-28a38923f36b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:30:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:34.597 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2f0227ac-a2df-4c60-ad49-285d01542b5b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:34.598 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1d276419-01f1-4c77-9031-28a38923f36b namespace which is not needed anymore
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.598 254096 DEBUG nova.virt.libvirt.vif [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:30:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-1874518354',display_name='tempest-ImagesNegativeTestJSON-server-1874518354',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-1874518354',id=26,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:30:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4e1bcf74bb1148a3a0f388525c96c919',ramdisk_id='',reservation_id='r-cagts7p4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesNegativeTestJSON-1461337409',owner_user_name='tempest-ImagesNegativeTestJSON-1461337409-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:30:30Z,user_data=None,user_id='01171e7ab3a4447497eacf11bf89be63',uuid=dc3c86a9-91cf-42fb-b11c-7de3305d8388,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8daa55ae-6950-4c2f-8121-ce02930ab1d9", "address": "fa:16:3e:e0:3e:50", "network": {"id": "1d276419-01f1-4c77-9031-28a38923f36b", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1486844347-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e1bcf74bb1148a3a0f388525c96c919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8daa55ae-69", "ovs_interfaceid": "8daa55ae-6950-4c2f-8121-ce02930ab1d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.598 254096 DEBUG nova.network.os_vif_util [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Converting VIF {"id": "8daa55ae-6950-4c2f-8121-ce02930ab1d9", "address": "fa:16:3e:e0:3e:50", "network": {"id": "1d276419-01f1-4c77-9031-28a38923f36b", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1486844347-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e1bcf74bb1148a3a0f388525c96c919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8daa55ae-69", "ovs_interfaceid": "8daa55ae-6950-4c2f-8121-ce02930ab1d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.599 254096 DEBUG nova.network.os_vif_util [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:3e:50,bridge_name='br-int',has_traffic_filtering=True,id=8daa55ae-6950-4c2f-8121-ce02930ab1d9,network=Network(1d276419-01f1-4c77-9031-28a38923f36b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8daa55ae-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.599 254096 DEBUG os_vif [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:3e:50,bridge_name='br-int',has_traffic_filtering=True,id=8daa55ae-6950-4c2f-8121-ce02930ab1d9,network=Network(1d276419-01f1-4c77-9031-28a38923f36b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8daa55ae-69') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.601 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.601 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8daa55ae-69, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.602 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.605 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.607 254096 INFO os_vif [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:3e:50,bridge_name='br-int',has_traffic_filtering=True,id=8daa55ae-6950-4c2f-8121-ce02930ab1d9,network=Network(1d276419-01f1-4c77-9031-28a38923f36b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8daa55ae-69')
Nov 25 16:30:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:30:34 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/939421658' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.669 254096 DEBUG nova.compute.manager [req-5f270ef6-3ef5-4675-aa29-61aec007fa3d req-716c3ab3-abf1-4a7f-a42d-15da3257cfb4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Received event network-vif-plugged-02104fc6-3780-400d-a6c2-577082384680 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.671 254096 DEBUG oslo_concurrency.lockutils [req-5f270ef6-3ef5-4675-aa29-61aec007fa3d req-716c3ab3-abf1-4a7f-a42d-15da3257cfb4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.671 254096 DEBUG oslo_concurrency.lockutils [req-5f270ef6-3ef5-4675-aa29-61aec007fa3d req-716c3ab3-abf1-4a7f-a42d-15da3257cfb4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.671 254096 DEBUG oslo_concurrency.lockutils [req-5f270ef6-3ef5-4675-aa29-61aec007fa3d req-716c3ab3-abf1-4a7f-a42d-15da3257cfb4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "df0db130-3ffa-4a60-8f7d-fb285a797631-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.671 254096 DEBUG nova.compute.manager [req-5f270ef6-3ef5-4675-aa29-61aec007fa3d req-716c3ab3-abf1-4a7f-a42d-15da3257cfb4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] No waiting events found dispatching network-vif-plugged-02104fc6-3780-400d-a6c2-577082384680 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.671 254096 WARNING nova.compute.manager [req-5f270ef6-3ef5-4675-aa29-61aec007fa3d req-716c3ab3-abf1-4a7f-a42d-15da3257cfb4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Received unexpected event network-vif-plugged-02104fc6-3780-400d-a6c2-577082384680 for instance with vm_state active and task_state deleting.
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.695 254096 DEBUG oslo_concurrency.processutils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.716 254096 DEBUG nova.storage.rbd_utils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 3375e096-321c-459b-8b6a-e085bb62872f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.720 254096 DEBUG oslo_concurrency.processutils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:30:34 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/939421658' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:30:34 compute-0 neutron-haproxy-ovnmeta-1d276419-01f1-4c77-9031-28a38923f36b[287489]: [NOTICE]   (287493) : haproxy version is 2.8.14-c23fe91
Nov 25 16:30:34 compute-0 neutron-haproxy-ovnmeta-1d276419-01f1-4c77-9031-28a38923f36b[287489]: [NOTICE]   (287493) : path to executable is /usr/sbin/haproxy
Nov 25 16:30:34 compute-0 neutron-haproxy-ovnmeta-1d276419-01f1-4c77-9031-28a38923f36b[287489]: [WARNING]  (287493) : Exiting Master process...
Nov 25 16:30:34 compute-0 neutron-haproxy-ovnmeta-1d276419-01f1-4c77-9031-28a38923f36b[287489]: [ALERT]    (287493) : Current worker (287495) exited with code 143 (Terminated)
Nov 25 16:30:34 compute-0 neutron-haproxy-ovnmeta-1d276419-01f1-4c77-9031-28a38923f36b[287489]: [WARNING]  (287493) : All workers exited. Exiting... (0)
Nov 25 16:30:34 compute-0 systemd[1]: libpod-f4da4bcb1aa62f26067d74561a0d3e67471b86037ff00dfa9b7507201a28e3e0.scope: Deactivated successfully.
Nov 25 16:30:34 compute-0 podman[287653]: 2025-11-25 16:30:34.772915415 +0000 UTC m=+0.065812042 container died f4da4bcb1aa62f26067d74561a0d3e67471b86037ff00dfa9b7507201a28e3e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1d276419-01f1-4c77-9031-28a38923f36b, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 16:30:34 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f4da4bcb1aa62f26067d74561a0d3e67471b86037ff00dfa9b7507201a28e3e0-userdata-shm.mount: Deactivated successfully.
Nov 25 16:30:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4de00267f053678af55b710532281e3e8b0c27859e6b07f037a9b50c7bcde3c-merged.mount: Deactivated successfully.
Nov 25 16:30:34 compute-0 podman[287653]: 2025-11-25 16:30:34.85696666 +0000 UTC m=+0.149863267 container cleanup f4da4bcb1aa62f26067d74561a0d3e67471b86037ff00dfa9b7507201a28e3e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1d276419-01f1-4c77-9031-28a38923f36b, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 16:30:34 compute-0 systemd[1]: libpod-conmon-f4da4bcb1aa62f26067d74561a0d3e67471b86037ff00dfa9b7507201a28e3e0.scope: Deactivated successfully.
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.874 254096 INFO nova.virt.libvirt.driver [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Deleting instance files /var/lib/nova/instances/df0db130-3ffa-4a60-8f7d-fb285a797631_del
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.875 254096 INFO nova.virt.libvirt.driver [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Deletion of /var/lib/nova/instances/df0db130-3ffa-4a60-8f7d-fb285a797631_del complete
Nov 25 16:30:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1290: 321 pgs: 321 active+clean; 424 MiB data, 564 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 1.3 MiB/s wr, 205 op/s
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.953 254096 INFO nova.compute.manager [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Took 6.73 seconds to destroy the instance on the hypervisor.
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.954 254096 DEBUG oslo.service.loopingcall [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.954 254096 DEBUG nova.compute.manager [-] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.955 254096 DEBUG nova.network.neutron [-] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:30:34 compute-0 podman[287709]: 2025-11-25 16:30:34.962749327 +0000 UTC m=+0.079772500 container remove f4da4bcb1aa62f26067d74561a0d3e67471b86037ff00dfa9b7507201a28e3e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1d276419-01f1-4c77-9031-28a38923f36b, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 25 16:30:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:34.970 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c057820b-9808-4580-86c0-aba7443a356b]: (4, ('Tue Nov 25 04:30:34 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1d276419-01f1-4c77-9031-28a38923f36b (f4da4bcb1aa62f26067d74561a0d3e67471b86037ff00dfa9b7507201a28e3e0)\nf4da4bcb1aa62f26067d74561a0d3e67471b86037ff00dfa9b7507201a28e3e0\nTue Nov 25 04:30:34 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1d276419-01f1-4c77-9031-28a38923f36b (f4da4bcb1aa62f26067d74561a0d3e67471b86037ff00dfa9b7507201a28e3e0)\nf4da4bcb1aa62f26067d74561a0d3e67471b86037ff00dfa9b7507201a28e3e0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:34.973 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a66e12dc-ad19-4c7c-bb94-dda519bd3a01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:34.976 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1d276419-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:34 compute-0 kernel: tap1d276419-00: left promiscuous mode
Nov 25 16:30:34 compute-0 nova_compute[254092]: 2025-11-25 16:30:34.978 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:35 compute-0 nova_compute[254092]: 2025-11-25 16:30:35.000 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:35.002 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1d4f3ae4-c3cd-49df-a314-11a7bed39317]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:35.020 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[17877fd8-d943-402f-9929-bdf1c1c0fc0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:35.021 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d685c491-7e6b-4559-9c34-57052948f9fe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:35.045 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[28e193df-2982-41f2-9f63-9ef6ed0e3b12]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 468902, 'reachable_time': 42220, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287733, 'error': None, 'target': 'ovnmeta-1d276419-01f1-4c77-9031-28a38923f36b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:35 compute-0 systemd[1]: run-netns-ovnmeta\x2d1d276419\x2d01f1\x2d4c77\x2d9031\x2d28a38923f36b.mount: Deactivated successfully.
Nov 25 16:30:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:35.051 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1d276419-01f1-4c77-9031-28a38923f36b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:30:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:35.052 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[f5a6f9a2-ee34-4939-b339-d9654de7206c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:30:35 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2833353936' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:30:35 compute-0 nova_compute[254092]: 2025-11-25 16:30:35.290 254096 DEBUG oslo_concurrency.processutils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:30:35 compute-0 nova_compute[254092]: 2025-11-25 16:30:35.293 254096 DEBUG nova.virt.libvirt.vif [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:28:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1705426121',display_name='tempest-ServersAdminTestJSON-server-1705426121',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1705426121',id=18,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:29:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a4c9877f37b34e7fba5f3cc9642d1a48',ramdisk_id='',reservation_id='r-k4yvqw87',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='2',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-2045227182',owner_user_name='tempest-ServersAdminTestJSON-2045227182-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:30:29Z,user_data=None,user_id='a8e217b742fe4a57a4ac4ffc776670fe',uuid=3375e096-321c-459b-8b6a-e085bb62872f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:30:35 compute-0 nova_compute[254092]: 2025-11-25 16:30:35.294 254096 DEBUG nova.network.os_vif_util [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converting VIF {"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:30:35 compute-0 nova_compute[254092]: 2025-11-25 16:30:35.296 254096 DEBUG nova.network.os_vif_util [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:dd:a2:8e,bridge_name='br-int',has_traffic_filtering=True,id=d6146886-91a1-4d5f-9234-e1d0154b4230,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6146886-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:30:35 compute-0 nova_compute[254092]: 2025-11-25 16:30:35.299 254096 DEBUG nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:30:35 compute-0 nova_compute[254092]:   <uuid>3375e096-321c-459b-8b6a-e085bb62872f</uuid>
Nov 25 16:30:35 compute-0 nova_compute[254092]:   <name>instance-00000012</name>
Nov 25 16:30:35 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:30:35 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:30:35 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:30:35 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:       <nova:name>tempest-ServersAdminTestJSON-server-1705426121</nova:name>
Nov 25 16:30:35 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:30:34</nova:creationTime>
Nov 25 16:30:35 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:30:35 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:30:35 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:30:35 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:30:35 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:30:35 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:30:35 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:30:35 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:30:35 compute-0 nova_compute[254092]:         <nova:user uuid="a8e217b742fe4a57a4ac4ffc776670fe">tempest-ServersAdminTestJSON-2045227182-project-member</nova:user>
Nov 25 16:30:35 compute-0 nova_compute[254092]:         <nova:project uuid="a4c9877f37b34e7fba5f3cc9642d1a48">tempest-ServersAdminTestJSON-2045227182</nova:project>
Nov 25 16:30:35 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:30:35 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:30:35 compute-0 nova_compute[254092]:         <nova:port uuid="d6146886-91a1-4d5f-9234-e1d0154b4230">
Nov 25 16:30:35 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:30:35 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:30:35 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:30:35 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:30:35 compute-0 nova_compute[254092]:     <system>
Nov 25 16:30:35 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:30:35 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:30:35 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:30:35 compute-0 nova_compute[254092]:       <entry name="serial">3375e096-321c-459b-8b6a-e085bb62872f</entry>
Nov 25 16:30:35 compute-0 nova_compute[254092]:       <entry name="uuid">3375e096-321c-459b-8b6a-e085bb62872f</entry>
Nov 25 16:30:35 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     </system>
Nov 25 16:30:35 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:30:35 compute-0 nova_compute[254092]:   <os>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:   </os>
Nov 25 16:30:35 compute-0 nova_compute[254092]:   <features>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:   </features>
Nov 25 16:30:35 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:30:35 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:30:35 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:30:35 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:30:35 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:30:35 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/3375e096-321c-459b-8b6a-e085bb62872f_disk">
Nov 25 16:30:35 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:       </source>
Nov 25 16:30:35 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:30:35 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:30:35 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:30:35 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/3375e096-321c-459b-8b6a-e085bb62872f_disk.config">
Nov 25 16:30:35 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:       </source>
Nov 25 16:30:35 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:30:35 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:30:35 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:30:35 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:dd:a2:8e"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:       <target dev="tapd6146886-91"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:30:35 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f/console.log" append="off"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     <video>
Nov 25 16:30:35 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     </video>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:30:35 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:30:35 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:30:35 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:30:35 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:30:35 compute-0 nova_compute[254092]: </domain>
Nov 25 16:30:35 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:30:35 compute-0 nova_compute[254092]: 2025-11-25 16:30:35.307 254096 DEBUG nova.virt.libvirt.vif [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:28:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1705426121',display_name='tempest-ServersAdminTestJSON-server-1705426121',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1705426121',id=18,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:29:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a4c9877f37b34e7fba5f3cc9642d1a48',ramdisk_id='',reservation_id='r-k4yvqw87',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='2',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-2045227182',owner_user_name='tempest-ServersAdminTestJSON-2045227182-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:30:29Z,user_data=None,user_id='a8e217b742fe4a57a4ac4ffc776670fe',uuid=3375e096-321c-459b-8b6a-e085bb62872f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:30:35 compute-0 nova_compute[254092]: 2025-11-25 16:30:35.308 254096 DEBUG nova.network.os_vif_util [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converting VIF {"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:30:35 compute-0 nova_compute[254092]: 2025-11-25 16:30:35.309 254096 DEBUG nova.network.os_vif_util [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:dd:a2:8e,bridge_name='br-int',has_traffic_filtering=True,id=d6146886-91a1-4d5f-9234-e1d0154b4230,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6146886-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:30:35 compute-0 nova_compute[254092]: 2025-11-25 16:30:35.310 254096 DEBUG os_vif [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:dd:a2:8e,bridge_name='br-int',has_traffic_filtering=True,id=d6146886-91a1-4d5f-9234-e1d0154b4230,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6146886-91') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:30:35 compute-0 nova_compute[254092]: 2025-11-25 16:30:35.311 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:35 compute-0 nova_compute[254092]: 2025-11-25 16:30:35.311 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:35 compute-0 nova_compute[254092]: 2025-11-25 16:30:35.312 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:30:35 compute-0 nova_compute[254092]: 2025-11-25 16:30:35.315 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:35 compute-0 nova_compute[254092]: 2025-11-25 16:30:35.316 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd6146886-91, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:35 compute-0 nova_compute[254092]: 2025-11-25 16:30:35.317 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd6146886-91, col_values=(('external_ids', {'iface-id': 'd6146886-91a1-4d5f-9234-e1d0154b4230', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dd:a2:8e', 'vm-uuid': '3375e096-321c-459b-8b6a-e085bb62872f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:35 compute-0 nova_compute[254092]: 2025-11-25 16:30:35.318 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:35 compute-0 NetworkManager[48891]: <info>  [1764088235.3197] manager: (tapd6146886-91): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Nov 25 16:30:35 compute-0 nova_compute[254092]: 2025-11-25 16:30:35.324 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:30:35 compute-0 nova_compute[254092]: 2025-11-25 16:30:35.326 254096 INFO os_vif [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:dd:a2:8e,bridge_name='br-int',has_traffic_filtering=True,id=d6146886-91a1-4d5f-9234-e1d0154b4230,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6146886-91')
Nov 25 16:30:35 compute-0 nova_compute[254092]: 2025-11-25 16:30:35.573 254096 DEBUG nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:30:35 compute-0 nova_compute[254092]: 2025-11-25 16:30:35.573 254096 DEBUG nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:30:35 compute-0 nova_compute[254092]: 2025-11-25 16:30:35.574 254096 DEBUG nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] No VIF found with MAC fa:16:3e:dd:a2:8e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:30:35 compute-0 nova_compute[254092]: 2025-11-25 16:30:35.574 254096 INFO nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Using config drive
Nov 25 16:30:35 compute-0 nova_compute[254092]: 2025-11-25 16:30:35.595 254096 DEBUG nova.storage.rbd_utils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 3375e096-321c-459b-8b6a-e085bb62872f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:30:35 compute-0 nova_compute[254092]: 2025-11-25 16:30:35.617 254096 DEBUG nova.objects.instance [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 3375e096-321c-459b-8b6a-e085bb62872f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:30:35 compute-0 nova_compute[254092]: 2025-11-25 16:30:35.655 254096 DEBUG nova.objects.instance [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'keypairs' on Instance uuid 3375e096-321c-459b-8b6a-e085bb62872f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:30:35 compute-0 nova_compute[254092]: 2025-11-25 16:30:35.786 254096 DEBUG nova.compute.manager [req-aab79724-0982-403b-bf60-6d3baeadf15e req-db3aeefb-e3a3-4186-a634-f35d0df7665c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Received event network-vif-unplugged-8daa55ae-6950-4c2f-8121-ce02930ab1d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:30:35 compute-0 nova_compute[254092]: 2025-11-25 16:30:35.786 254096 DEBUG oslo_concurrency.lockutils [req-aab79724-0982-403b-bf60-6d3baeadf15e req-db3aeefb-e3a3-4186-a634-f35d0df7665c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "dc3c86a9-91cf-42fb-b11c-7de3305d8388-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:35 compute-0 nova_compute[254092]: 2025-11-25 16:30:35.787 254096 DEBUG oslo_concurrency.lockutils [req-aab79724-0982-403b-bf60-6d3baeadf15e req-db3aeefb-e3a3-4186-a634-f35d0df7665c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "dc3c86a9-91cf-42fb-b11c-7de3305d8388-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:35 compute-0 nova_compute[254092]: 2025-11-25 16:30:35.787 254096 DEBUG oslo_concurrency.lockutils [req-aab79724-0982-403b-bf60-6d3baeadf15e req-db3aeefb-e3a3-4186-a634-f35d0df7665c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "dc3c86a9-91cf-42fb-b11c-7de3305d8388-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:35 compute-0 nova_compute[254092]: 2025-11-25 16:30:35.787 254096 DEBUG nova.compute.manager [req-aab79724-0982-403b-bf60-6d3baeadf15e req-db3aeefb-e3a3-4186-a634-f35d0df7665c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] No waiting events found dispatching network-vif-unplugged-8daa55ae-6950-4c2f-8121-ce02930ab1d9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:30:35 compute-0 nova_compute[254092]: 2025-11-25 16:30:35.788 254096 DEBUG nova.compute.manager [req-aab79724-0982-403b-bf60-6d3baeadf15e req-db3aeefb-e3a3-4186-a634-f35d0df7665c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Received event network-vif-unplugged-8daa55ae-6950-4c2f-8121-ce02930ab1d9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:30:35 compute-0 ceph-mon[74985]: pgmap v1290: 321 pgs: 321 active+clean; 424 MiB data, 564 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 1.3 MiB/s wr, 205 op/s
Nov 25 16:30:35 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2833353936' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:30:36 compute-0 nova_compute[254092]: 2025-11-25 16:30:36.071 254096 INFO nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Creating config drive at /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f/disk.config
Nov 25 16:30:36 compute-0 nova_compute[254092]: 2025-11-25 16:30:36.078 254096 DEBUG oslo_concurrency.processutils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp67rqo0p6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:30:36 compute-0 nova_compute[254092]: 2025-11-25 16:30:36.207 254096 DEBUG oslo_concurrency.processutils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp67rqo0p6" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:30:36 compute-0 nova_compute[254092]: 2025-11-25 16:30:36.231 254096 DEBUG nova.storage.rbd_utils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] rbd image 3375e096-321c-459b-8b6a-e085bb62872f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:30:36 compute-0 nova_compute[254092]: 2025-11-25 16:30:36.236 254096 DEBUG oslo_concurrency.processutils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f/disk.config 3375e096-321c-459b-8b6a-e085bb62872f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:30:36 compute-0 nova_compute[254092]: 2025-11-25 16:30:36.294 254096 INFO nova.virt.libvirt.driver [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Deleting instance files /var/lib/nova/instances/dc3c86a9-91cf-42fb-b11c-7de3305d8388_del
Nov 25 16:30:36 compute-0 nova_compute[254092]: 2025-11-25 16:30:36.296 254096 INFO nova.virt.libvirt.driver [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Deletion of /var/lib/nova/instances/dc3c86a9-91cf-42fb-b11c-7de3305d8388_del complete
Nov 25 16:30:36 compute-0 nova_compute[254092]: 2025-11-25 16:30:36.392 254096 INFO nova.compute.manager [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Took 2.06 seconds to destroy the instance on the hypervisor.
Nov 25 16:30:36 compute-0 nova_compute[254092]: 2025-11-25 16:30:36.393 254096 DEBUG oslo.service.loopingcall [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:30:36 compute-0 nova_compute[254092]: 2025-11-25 16:30:36.393 254096 DEBUG nova.compute.manager [-] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:30:36 compute-0 nova_compute[254092]: 2025-11-25 16:30:36.394 254096 DEBUG nova.network.neutron [-] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:30:36 compute-0 nova_compute[254092]: 2025-11-25 16:30:36.418 254096 DEBUG oslo_concurrency.processutils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f/disk.config 3375e096-321c-459b-8b6a-e085bb62872f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:30:36 compute-0 nova_compute[254092]: 2025-11-25 16:30:36.419 254096 INFO nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Deleting local config drive /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f/disk.config because it was imported into RBD.
Nov 25 16:30:36 compute-0 kernel: tapd6146886-91: entered promiscuous mode
Nov 25 16:30:36 compute-0 systemd-udevd[287601]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:30:36 compute-0 NetworkManager[48891]: <info>  [1764088236.4646] manager: (tapd6146886-91): new Tun device (/org/freedesktop/NetworkManager/Devices/74)
Nov 25 16:30:36 compute-0 ovn_controller[153477]: 2025-11-25T16:30:36Z|00125|binding|INFO|Claiming lport d6146886-91a1-4d5f-9234-e1d0154b4230 for this chassis.
Nov 25 16:30:36 compute-0 ovn_controller[153477]: 2025-11-25T16:30:36Z|00126|binding|INFO|d6146886-91a1-4d5f-9234-e1d0154b4230: Claiming fa:16:3e:dd:a2:8e 10.100.0.12
Nov 25 16:30:36 compute-0 nova_compute[254092]: 2025-11-25 16:30:36.468 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:36 compute-0 NetworkManager[48891]: <info>  [1764088236.4775] device (tapd6146886-91): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:30:36 compute-0 NetworkManager[48891]: <info>  [1764088236.4799] device (tapd6146886-91): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:30:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:36.481 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:a2:8e 10.100.0.12'], port_security=['fa:16:3e:dd:a2:8e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '3375e096-321c-459b-8b6a-e085bb62872f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a4c9877f37b34e7fba5f3cc9642d1a48', 'neutron:revision_number': '7', 'neutron:security_group_ids': '2633f0b7-0e94-4653-a67a-e645ec4a69fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06f31098-96f7-4b9c-ab6f-85e953b1eed6, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=d6146886-91a1-4d5f-9234-e1d0154b4230) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:30:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:36.486 163338 INFO neutron.agent.ovn.metadata.agent [-] Port d6146886-91a1-4d5f-9234-e1d0154b4230 in datapath 3403825e-13ff-43e0-80c4-b59cf23ed30b bound to our chassis
Nov 25 16:30:36 compute-0 ovn_controller[153477]: 2025-11-25T16:30:36Z|00127|binding|INFO|Setting lport d6146886-91a1-4d5f-9234-e1d0154b4230 ovn-installed in OVS
Nov 25 16:30:36 compute-0 ovn_controller[153477]: 2025-11-25T16:30:36Z|00128|binding|INFO|Setting lport d6146886-91a1-4d5f-9234-e1d0154b4230 up in Southbound
Nov 25 16:30:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:36.489 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3403825e-13ff-43e0-80c4-b59cf23ed30b
Nov 25 16:30:36 compute-0 nova_compute[254092]: 2025-11-25 16:30:36.491 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:36.505 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1ad82042-461f-4827-911e-780e30f0b598]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:36 compute-0 systemd-machined[216343]: New machine qemu-30-instance-00000012.
Nov 25 16:30:36 compute-0 systemd[1]: Started Virtual Machine qemu-30-instance-00000012.
Nov 25 16:30:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:36.539 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ad909832-154b-4b43-9800-052afb622509]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:36.543 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[561e4298-a891-4e04-8942-03f75577ebfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:36.574 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[07273b1f-cc8d-4112-bdd0-815ea730445c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:36.603 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5a685e7c-08ae-4b2c-bdc2-36347c433885]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3403825e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:44:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 17, 'rx_bytes': 868, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 17, 'rx_bytes': 868, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458728, 'reachable_time': 28517, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287821, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:36.623 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4c747e66-ae1d-4042-a4fc-5e682870abfc]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3403825e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458739, 'tstamp': 458739}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287823, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3403825e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458742, 'tstamp': 458742}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287823, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:36.626 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3403825e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:36.631 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3403825e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:36.632 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:30:36 compute-0 nova_compute[254092]: 2025-11-25 16:30:36.631 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:36.632 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3403825e-10, col_values=(('external_ids', {'iface-id': 'e9e1418a-32be-4b0e-81c1-ad489d6a782b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:36.633 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:30:36 compute-0 nova_compute[254092]: 2025-11-25 16:30:36.765 254096 DEBUG nova.network.neutron [-] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:30:36 compute-0 nova_compute[254092]: 2025-11-25 16:30:36.783 254096 INFO nova.compute.manager [-] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Took 1.83 seconds to deallocate network for instance.
Nov 25 16:30:36 compute-0 nova_compute[254092]: 2025-11-25 16:30:36.842 254096 DEBUG oslo_concurrency.lockutils [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:36 compute-0 nova_compute[254092]: 2025-11-25 16:30:36.843 254096 DEBUG oslo_concurrency.lockutils [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1291: 321 pgs: 321 active+clean; 407 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 230 op/s
Nov 25 16:30:37 compute-0 nova_compute[254092]: 2025-11-25 16:30:37.060 254096 DEBUG oslo_concurrency.processutils [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:30:37 compute-0 nova_compute[254092]: 2025-11-25 16:30:37.136 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088237.1355405, 3375e096-321c-459b-8b6a-e085bb62872f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:30:37 compute-0 nova_compute[254092]: 2025-11-25 16:30:37.137 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] VM Resumed (Lifecycle Event)
Nov 25 16:30:37 compute-0 nova_compute[254092]: 2025-11-25 16:30:37.141 254096 DEBUG nova.compute.manager [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:30:37 compute-0 nova_compute[254092]: 2025-11-25 16:30:37.141 254096 DEBUG nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:30:37 compute-0 nova_compute[254092]: 2025-11-25 16:30:37.146 254096 INFO nova.virt.libvirt.driver [-] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Instance spawned successfully.
Nov 25 16:30:37 compute-0 nova_compute[254092]: 2025-11-25 16:30:37.146 254096 DEBUG nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:30:37 compute-0 nova_compute[254092]: 2025-11-25 16:30:37.181 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:30:37 compute-0 nova_compute[254092]: 2025-11-25 16:30:37.190 254096 DEBUG nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:30:37 compute-0 nova_compute[254092]: 2025-11-25 16:30:37.191 254096 DEBUG nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:30:37 compute-0 nova_compute[254092]: 2025-11-25 16:30:37.192 254096 DEBUG nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:30:37 compute-0 nova_compute[254092]: 2025-11-25 16:30:37.192 254096 DEBUG nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:30:37 compute-0 nova_compute[254092]: 2025-11-25 16:30:37.193 254096 DEBUG nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:30:37 compute-0 nova_compute[254092]: 2025-11-25 16:30:37.193 254096 DEBUG nova.virt.libvirt.driver [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:30:37 compute-0 nova_compute[254092]: 2025-11-25 16:30:37.201 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:30:37 compute-0 ceph-mon[74985]: pgmap v1291: 321 pgs: 321 active+clean; 407 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 230 op/s
Nov 25 16:30:37 compute-0 nova_compute[254092]: 2025-11-25 16:30:37.229 254096 DEBUG nova.network.neutron [-] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:30:37 compute-0 nova_compute[254092]: 2025-11-25 16:30:37.255 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 16:30:37 compute-0 nova_compute[254092]: 2025-11-25 16:30:37.255 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088237.144586, 3375e096-321c-459b-8b6a-e085bb62872f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:30:37 compute-0 nova_compute[254092]: 2025-11-25 16:30:37.255 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] VM Started (Lifecycle Event)
Nov 25 16:30:37 compute-0 nova_compute[254092]: 2025-11-25 16:30:37.268 254096 INFO nova.compute.manager [-] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Took 0.88 seconds to deallocate network for instance.
Nov 25 16:30:37 compute-0 nova_compute[254092]: 2025-11-25 16:30:37.275 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:30:37 compute-0 nova_compute[254092]: 2025-11-25 16:30:37.280 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:30:37 compute-0 nova_compute[254092]: 2025-11-25 16:30:37.286 254096 DEBUG nova.compute.manager [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:30:37 compute-0 nova_compute[254092]: 2025-11-25 16:30:37.318 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 16:30:37 compute-0 nova_compute[254092]: 2025-11-25 16:30:37.357 254096 DEBUG oslo_concurrency.lockutils [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:37 compute-0 nova_compute[254092]: 2025-11-25 16:30:37.358 254096 DEBUG oslo_concurrency.lockutils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:37 compute-0 nova_compute[254092]: 2025-11-25 16:30:37.471 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:37 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:30:37 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2171751371' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:30:37 compute-0 nova_compute[254092]: 2025-11-25 16:30:37.561 254096 DEBUG oslo_concurrency.processutils [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:30:37 compute-0 nova_compute[254092]: 2025-11-25 16:30:37.567 254096 DEBUG nova.compute.provider_tree [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:30:37 compute-0 nova_compute[254092]: 2025-11-25 16:30:37.582 254096 DEBUG nova.scheduler.client.report [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:30:37 compute-0 nova_compute[254092]: 2025-11-25 16:30:37.610 254096 DEBUG oslo_concurrency.lockutils [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:37 compute-0 nova_compute[254092]: 2025-11-25 16:30:37.612 254096 DEBUG oslo_concurrency.lockutils [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.255s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:37 compute-0 nova_compute[254092]: 2025-11-25 16:30:37.648 254096 INFO nova.scheduler.client.report [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Deleted allocations for instance df0db130-3ffa-4a60-8f7d-fb285a797631
Nov 25 16:30:37 compute-0 nova_compute[254092]: 2025-11-25 16:30:37.725 254096 DEBUG oslo_concurrency.lockutils [None req-a4833878-cd4c-4a68-b283-f5a8268b808d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "df0db130-3ffa-4a60-8f7d-fb285a797631" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.503s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:37 compute-0 nova_compute[254092]: 2025-11-25 16:30:37.782 254096 DEBUG nova.compute.manager [req-f92792e7-9c1c-4878-bb24-0e6f9cdf07f2 req-91033713-4ffa-42ee-a09a-a7aee7e290df a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Received event network-vif-deleted-8daa55ae-6950-4c2f-8121-ce02930ab1d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:30:37 compute-0 nova_compute[254092]: 2025-11-25 16:30:37.804 254096 DEBUG oslo_concurrency.processutils [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:30:37 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:30:37 compute-0 nova_compute[254092]: 2025-11-25 16:30:37.926 254096 DEBUG nova.compute.manager [req-98cd4a9d-629a-477c-a9c4-151c25b03ed4 req-3636f11f-d29d-4c98-97cb-dfe941d9efe8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Received event network-changed-fb46dd7a-52d4-44cb-b99e-81d7d653885c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:30:37 compute-0 nova_compute[254092]: 2025-11-25 16:30:37.927 254096 DEBUG nova.compute.manager [req-98cd4a9d-629a-477c-a9c4-151c25b03ed4 req-3636f11f-d29d-4c98-97cb-dfe941d9efe8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Refreshing instance network info cache due to event network-changed-fb46dd7a-52d4-44cb-b99e-81d7d653885c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:30:37 compute-0 nova_compute[254092]: 2025-11-25 16:30:37.927 254096 DEBUG oslo_concurrency.lockutils [req-98cd4a9d-629a-477c-a9c4-151c25b03ed4 req-3636f11f-d29d-4c98-97cb-dfe941d9efe8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-33b19faf-57e1-463b-8b4a-b50479a0ef0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:30:37 compute-0 nova_compute[254092]: 2025-11-25 16:30:37.928 254096 DEBUG oslo_concurrency.lockutils [req-98cd4a9d-629a-477c-a9c4-151c25b03ed4 req-3636f11f-d29d-4c98-97cb-dfe941d9efe8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-33b19faf-57e1-463b-8b4a-b50479a0ef0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:30:37 compute-0 nova_compute[254092]: 2025-11-25 16:30:37.928 254096 DEBUG nova.network.neutron [req-98cd4a9d-629a-477c-a9c4-151c25b03ed4 req-3636f11f-d29d-4c98-97cb-dfe941d9efe8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Refreshing network info cache for port fb46dd7a-52d4-44cb-b99e-81d7d653885c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:30:38 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2171751371' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:30:38 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:30:38 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1041010035' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:30:38 compute-0 nova_compute[254092]: 2025-11-25 16:30:38.335 254096 DEBUG oslo_concurrency.processutils [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:30:38 compute-0 nova_compute[254092]: 2025-11-25 16:30:38.340 254096 DEBUG nova.compute.provider_tree [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:30:38 compute-0 nova_compute[254092]: 2025-11-25 16:30:38.356 254096 DEBUG nova.scheduler.client.report [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:30:38 compute-0 nova_compute[254092]: 2025-11-25 16:30:38.382 254096 DEBUG oslo_concurrency.lockutils [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:38 compute-0 nova_compute[254092]: 2025-11-25 16:30:38.385 254096 DEBUG oslo_concurrency.lockutils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 1.027s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:38 compute-0 nova_compute[254092]: 2025-11-25 16:30:38.385 254096 DEBUG nova.objects.instance [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 16:30:38 compute-0 nova_compute[254092]: 2025-11-25 16:30:38.428 254096 INFO nova.scheduler.client.report [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Deleted allocations for instance dc3c86a9-91cf-42fb-b11c-7de3305d8388
Nov 25 16:30:38 compute-0 nova_compute[254092]: 2025-11-25 16:30:38.453 254096 DEBUG oslo_concurrency.lockutils [None req-22c8194f-d57d-4562-833c-7447d2350a03 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.068s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:38 compute-0 nova_compute[254092]: 2025-11-25 16:30:38.503 254096 DEBUG oslo_concurrency.lockutils [None req-e292eb89-4382-48e1-8205-1ec23f5bc812 01171e7ab3a4447497eacf11bf89be63 4e1bcf74bb1148a3a0f388525c96c919 - - default default] Lock "dc3c86a9-91cf-42fb-b11c-7de3305d8388" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.171s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1292: 321 pgs: 321 active+clean; 407 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 225 op/s
Nov 25 16:30:39 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1041010035' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:30:39 compute-0 ceph-mon[74985]: pgmap v1292: 321 pgs: 321 active+clean; 407 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 225 op/s
Nov 25 16:30:39 compute-0 nova_compute[254092]: 2025-11-25 16:30:39.279 254096 DEBUG nova.network.neutron [req-98cd4a9d-629a-477c-a9c4-151c25b03ed4 req-3636f11f-d29d-4c98-97cb-dfe941d9efe8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Updated VIF entry in instance network info cache for port fb46dd7a-52d4-44cb-b99e-81d7d653885c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:30:39 compute-0 nova_compute[254092]: 2025-11-25 16:30:39.280 254096 DEBUG nova.network.neutron [req-98cd4a9d-629a-477c-a9c4-151c25b03ed4 req-3636f11f-d29d-4c98-97cb-dfe941d9efe8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Updating instance_info_cache with network_info: [{"id": "fb46dd7a-52d4-44cb-b99e-81d7d653885c", "address": "fa:16:3e:d4:54:45", "network": {"id": "6ab64ae8-b8fa-4795-a243-9ebe45233e37", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1540086437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f2f26334db2f4e2cadc5664efd73eb67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb46dd7a-52", "ovs_interfaceid": "fb46dd7a-52d4-44cb-b99e-81d7d653885c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:30:39 compute-0 nova_compute[254092]: 2025-11-25 16:30:39.299 254096 DEBUG oslo_concurrency.lockutils [req-98cd4a9d-629a-477c-a9c4-151c25b03ed4 req-3636f11f-d29d-4c98-97cb-dfe941d9efe8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-33b19faf-57e1-463b-8b4a-b50479a0ef0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:30:39 compute-0 nova_compute[254092]: 2025-11-25 16:30:39.299 254096 DEBUG nova.compute.manager [req-98cd4a9d-629a-477c-a9c4-151c25b03ed4 req-3636f11f-d29d-4c98-97cb-dfe941d9efe8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Received event network-vif-plugged-8daa55ae-6950-4c2f-8121-ce02930ab1d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:30:39 compute-0 nova_compute[254092]: 2025-11-25 16:30:39.299 254096 DEBUG oslo_concurrency.lockutils [req-98cd4a9d-629a-477c-a9c4-151c25b03ed4 req-3636f11f-d29d-4c98-97cb-dfe941d9efe8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "dc3c86a9-91cf-42fb-b11c-7de3305d8388-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:39 compute-0 nova_compute[254092]: 2025-11-25 16:30:39.300 254096 DEBUG oslo_concurrency.lockutils [req-98cd4a9d-629a-477c-a9c4-151c25b03ed4 req-3636f11f-d29d-4c98-97cb-dfe941d9efe8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "dc3c86a9-91cf-42fb-b11c-7de3305d8388-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:39 compute-0 nova_compute[254092]: 2025-11-25 16:30:39.300 254096 DEBUG oslo_concurrency.lockutils [req-98cd4a9d-629a-477c-a9c4-151c25b03ed4 req-3636f11f-d29d-4c98-97cb-dfe941d9efe8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "dc3c86a9-91cf-42fb-b11c-7de3305d8388-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:39 compute-0 nova_compute[254092]: 2025-11-25 16:30:39.300 254096 DEBUG nova.compute.manager [req-98cd4a9d-629a-477c-a9c4-151c25b03ed4 req-3636f11f-d29d-4c98-97cb-dfe941d9efe8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] No waiting events found dispatching network-vif-plugged-8daa55ae-6950-4c2f-8121-ce02930ab1d9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:30:39 compute-0 nova_compute[254092]: 2025-11-25 16:30:39.301 254096 WARNING nova.compute.manager [req-98cd4a9d-629a-477c-a9c4-151c25b03ed4 req-3636f11f-d29d-4c98-97cb-dfe941d9efe8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Received unexpected event network-vif-plugged-8daa55ae-6950-4c2f-8121-ce02930ab1d9 for instance with vm_state deleted and task_state None.
Nov 25 16:30:39 compute-0 nova_compute[254092]: 2025-11-25 16:30:39.301 254096 DEBUG nova.compute.manager [req-98cd4a9d-629a-477c-a9c4-151c25b03ed4 req-3636f11f-d29d-4c98-97cb-dfe941d9efe8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Received event network-vif-deleted-02104fc6-3780-400d-a6c2-577082384680 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:30:39 compute-0 nova_compute[254092]: 2025-11-25 16:30:39.301 254096 DEBUG nova.compute.manager [req-98cd4a9d-629a-477c-a9c4-151c25b03ed4 req-3636f11f-d29d-4c98-97cb-dfe941d9efe8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Received event network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:30:39 compute-0 nova_compute[254092]: 2025-11-25 16:30:39.301 254096 DEBUG oslo_concurrency.lockutils [req-98cd4a9d-629a-477c-a9c4-151c25b03ed4 req-3636f11f-d29d-4c98-97cb-dfe941d9efe8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "3375e096-321c-459b-8b6a-e085bb62872f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:39 compute-0 nova_compute[254092]: 2025-11-25 16:30:39.302 254096 DEBUG oslo_concurrency.lockutils [req-98cd4a9d-629a-477c-a9c4-151c25b03ed4 req-3636f11f-d29d-4c98-97cb-dfe941d9efe8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:39 compute-0 nova_compute[254092]: 2025-11-25 16:30:39.302 254096 DEBUG oslo_concurrency.lockutils [req-98cd4a9d-629a-477c-a9c4-151c25b03ed4 req-3636f11f-d29d-4c98-97cb-dfe941d9efe8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:39 compute-0 nova_compute[254092]: 2025-11-25 16:30:39.302 254096 DEBUG nova.compute.manager [req-98cd4a9d-629a-477c-a9c4-151c25b03ed4 req-3636f11f-d29d-4c98-97cb-dfe941d9efe8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] No waiting events found dispatching network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:30:39 compute-0 nova_compute[254092]: 2025-11-25 16:30:39.302 254096 WARNING nova.compute.manager [req-98cd4a9d-629a-477c-a9c4-151c25b03ed4 req-3636f11f-d29d-4c98-97cb-dfe941d9efe8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Received unexpected event network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 for instance with vm_state active and task_state None.
Nov 25 16:30:40 compute-0 nova_compute[254092]: 2025-11-25 16:30:40.001 254096 DEBUG nova.compute.manager [req-c4275995-fe29-4109-b11b-a8479afb76e9 req-37734b42-fbc2-4b2c-b7c8-d5fec3cf82fb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Received event network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:30:40 compute-0 nova_compute[254092]: 2025-11-25 16:30:40.001 254096 DEBUG oslo_concurrency.lockutils [req-c4275995-fe29-4109-b11b-a8479afb76e9 req-37734b42-fbc2-4b2c-b7c8-d5fec3cf82fb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "3375e096-321c-459b-8b6a-e085bb62872f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:40 compute-0 nova_compute[254092]: 2025-11-25 16:30:40.001 254096 DEBUG oslo_concurrency.lockutils [req-c4275995-fe29-4109-b11b-a8479afb76e9 req-37734b42-fbc2-4b2c-b7c8-d5fec3cf82fb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:40 compute-0 nova_compute[254092]: 2025-11-25 16:30:40.002 254096 DEBUG oslo_concurrency.lockutils [req-c4275995-fe29-4109-b11b-a8479afb76e9 req-37734b42-fbc2-4b2c-b7c8-d5fec3cf82fb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:40 compute-0 nova_compute[254092]: 2025-11-25 16:30:40.002 254096 DEBUG nova.compute.manager [req-c4275995-fe29-4109-b11b-a8479afb76e9 req-37734b42-fbc2-4b2c-b7c8-d5fec3cf82fb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] No waiting events found dispatching network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:30:40 compute-0 nova_compute[254092]: 2025-11-25 16:30:40.002 254096 WARNING nova.compute.manager [req-c4275995-fe29-4109-b11b-a8479afb76e9 req-37734b42-fbc2-4b2c-b7c8-d5fec3cf82fb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Received unexpected event network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 for instance with vm_state active and task_state None.
Nov 25 16:30:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:30:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:30:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:30:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:30:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:30:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:30:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:30:40
Nov 25 16:30:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:30:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:30:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', '.rgw.root', 'images', 'default.rgw.meta', 'backups', 'default.rgw.log', 'volumes', 'vms', 'cephfs.cephfs.meta', 'default.rgw.control']
Nov 25 16:30:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:30:40 compute-0 nova_compute[254092]: 2025-11-25 16:30:40.084 254096 DEBUG oslo_concurrency.lockutils [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "7777dd86-925e-4f98-bd68-e38ac540d97b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:40 compute-0 nova_compute[254092]: 2025-11-25 16:30:40.084 254096 DEBUG oslo_concurrency.lockutils [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "7777dd86-925e-4f98-bd68-e38ac540d97b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:40 compute-0 nova_compute[254092]: 2025-11-25 16:30:40.085 254096 DEBUG oslo_concurrency.lockutils [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "7777dd86-925e-4f98-bd68-e38ac540d97b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:40 compute-0 nova_compute[254092]: 2025-11-25 16:30:40.085 254096 DEBUG oslo_concurrency.lockutils [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "7777dd86-925e-4f98-bd68-e38ac540d97b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:40 compute-0 nova_compute[254092]: 2025-11-25 16:30:40.086 254096 DEBUG oslo_concurrency.lockutils [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "7777dd86-925e-4f98-bd68-e38ac540d97b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:40 compute-0 nova_compute[254092]: 2025-11-25 16:30:40.087 254096 INFO nova.compute.manager [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Terminating instance
Nov 25 16:30:40 compute-0 nova_compute[254092]: 2025-11-25 16:30:40.088 254096 DEBUG nova.compute.manager [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:30:40 compute-0 kernel: tap0f27a287-0c (unregistering): left promiscuous mode
Nov 25 16:30:40 compute-0 NetworkManager[48891]: <info>  [1764088240.1366] device (tap0f27a287-0c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:30:40 compute-0 ovn_controller[153477]: 2025-11-25T16:30:40Z|00129|binding|INFO|Releasing lport 0f27a287-0c09-4767-a6cf-a7f4f8870ea1 from this chassis (sb_readonly=0)
Nov 25 16:30:40 compute-0 ovn_controller[153477]: 2025-11-25T16:30:40Z|00130|binding|INFO|Setting lport 0f27a287-0c09-4767-a6cf-a7f4f8870ea1 down in Southbound
Nov 25 16:30:40 compute-0 nova_compute[254092]: 2025-11-25 16:30:40.144 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:40 compute-0 ovn_controller[153477]: 2025-11-25T16:30:40Z|00131|binding|INFO|Removing iface tap0f27a287-0c ovn-installed in OVS
Nov 25 16:30:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:40.152 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:bd:39 10.100.0.9'], port_security=['fa:16:3e:a8:bd:39 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '7777dd86-925e-4f98-bd68-e38ac540d97b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a4c9877f37b34e7fba5f3cc9642d1a48', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2633f0b7-0e94-4653-a67a-e645ec4a69fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06f31098-96f7-4b9c-ab6f-85e953b1eed6, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=0f27a287-0c09-4767-a6cf-a7f4f8870ea1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:30:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:40.155 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 0f27a287-0c09-4767-a6cf-a7f4f8870ea1 in datapath 3403825e-13ff-43e0-80c4-b59cf23ed30b unbound from our chassis
Nov 25 16:30:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:40.157 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3403825e-13ff-43e0-80c4-b59cf23ed30b
Nov 25 16:30:40 compute-0 nova_compute[254092]: 2025-11-25 16:30:40.165 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:40.177 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c5cc8691-7080-4704-b6b1-05e8074870c4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:40.207 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6316209f-aff6-4eca-8591-c23936611d58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:40.210 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d4e74b7d-f80e-40ff-80eb-7799d218ca9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:40 compute-0 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000017.scope: Deactivated successfully.
Nov 25 16:30:40 compute-0 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000017.scope: Consumed 17.802s CPU time.
Nov 25 16:30:40 compute-0 systemd-machined[216343]: Machine qemu-25-instance-00000017 terminated.
Nov 25 16:30:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:40.242 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[7891a214-50ba-40c9-932e-ba59fbec2514]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:40.259 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9d67b4f0-709d-4794-b65d-f168610237d3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3403825e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:44:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 19, 'rx_bytes': 868, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 19, 'rx_bytes': 868, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458728, 'reachable_time': 28517, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287921, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:30:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:30:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:30:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:30:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:30:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:40.279 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ceaf5dab-6117-4b2c-8d8d-0b1df0208682]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3403825e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458739, 'tstamp': 458739}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287922, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3403825e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458742, 'tstamp': 458742}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287922, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:40.281 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3403825e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:40 compute-0 nova_compute[254092]: 2025-11-25 16:30:40.282 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:40 compute-0 nova_compute[254092]: 2025-11-25 16:30:40.288 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:40.289 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3403825e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:40.289 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:30:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:40.290 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3403825e-10, col_values=(('external_ids', {'iface-id': 'e9e1418a-32be-4b0e-81c1-ad489d6a782b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:40.290 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:30:40 compute-0 nova_compute[254092]: 2025-11-25 16:30:40.310 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:40 compute-0 nova_compute[254092]: 2025-11-25 16:30:40.316 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:40 compute-0 nova_compute[254092]: 2025-11-25 16:30:40.318 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:40 compute-0 nova_compute[254092]: 2025-11-25 16:30:40.323 254096 INFO nova.virt.libvirt.driver [-] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Instance destroyed successfully.
Nov 25 16:30:40 compute-0 nova_compute[254092]: 2025-11-25 16:30:40.323 254096 DEBUG nova.objects.instance [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'resources' on Instance uuid 7777dd86-925e-4f98-bd68-e38ac540d97b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:30:40 compute-0 nova_compute[254092]: 2025-11-25 16:30:40.335 254096 DEBUG nova.virt.libvirt.vif [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:29:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-890496249',display_name='tempest-ServersAdminTestJSON-server-890496249',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-890496249',id=23,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:29:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a4c9877f37b34e7fba5f3cc9642d1a48',ramdisk_id='',reservation_id='r-pzomq4ok',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-2045227182',owner_user_name='tempest-ServersAdminTestJSON-2045227182-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:29:36Z,user_data=None,user_id='a8e217b742fe4a57a4ac4ffc776670fe',uuid=7777dd86-925e-4f98-bd68-e38ac540d97b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0f27a287-0c09-4767-a6cf-a7f4f8870ea1", "address": "fa:16:3e:a8:bd:39", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f27a287-0c", "ovs_interfaceid": "0f27a287-0c09-4767-a6cf-a7f4f8870ea1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:30:40 compute-0 nova_compute[254092]: 2025-11-25 16:30:40.337 254096 DEBUG nova.network.os_vif_util [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converting VIF {"id": "0f27a287-0c09-4767-a6cf-a7f4f8870ea1", "address": "fa:16:3e:a8:bd:39", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f27a287-0c", "ovs_interfaceid": "0f27a287-0c09-4767-a6cf-a7f4f8870ea1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:30:40 compute-0 nova_compute[254092]: 2025-11-25 16:30:40.338 254096 DEBUG nova.network.os_vif_util [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a8:bd:39,bridge_name='br-int',has_traffic_filtering=True,id=0f27a287-0c09-4767-a6cf-a7f4f8870ea1,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f27a287-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:30:40 compute-0 nova_compute[254092]: 2025-11-25 16:30:40.339 254096 DEBUG os_vif [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:bd:39,bridge_name='br-int',has_traffic_filtering=True,id=0f27a287-0c09-4767-a6cf-a7f4f8870ea1,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f27a287-0c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:30:40 compute-0 nova_compute[254092]: 2025-11-25 16:30:40.341 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:40 compute-0 nova_compute[254092]: 2025-11-25 16:30:40.341 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0f27a287-0c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:40 compute-0 nova_compute[254092]: 2025-11-25 16:30:40.343 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:40 compute-0 nova_compute[254092]: 2025-11-25 16:30:40.344 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:40 compute-0 nova_compute[254092]: 2025-11-25 16:30:40.347 254096 INFO os_vif [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:bd:39,bridge_name='br-int',has_traffic_filtering=True,id=0f27a287-0c09-4767-a6cf-a7f4f8870ea1,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f27a287-0c')
Nov 25 16:30:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:30:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:30:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:30:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:30:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:30:40 compute-0 nova_compute[254092]: 2025-11-25 16:30:40.903 254096 INFO nova.virt.libvirt.driver [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Deleting instance files /var/lib/nova/instances/7777dd86-925e-4f98-bd68-e38ac540d97b_del
Nov 25 16:30:40 compute-0 nova_compute[254092]: 2025-11-25 16:30:40.903 254096 INFO nova.virt.libvirt.driver [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Deletion of /var/lib/nova/instances/7777dd86-925e-4f98-bd68-e38ac540d97b_del complete
Nov 25 16:30:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1293: 321 pgs: 321 active+clean; 372 MiB data, 543 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 1.8 MiB/s wr, 281 op/s
Nov 25 16:30:40 compute-0 nova_compute[254092]: 2025-11-25 16:30:40.948 254096 INFO nova.compute.manager [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Took 0.86 seconds to destroy the instance on the hypervisor.
Nov 25 16:30:40 compute-0 nova_compute[254092]: 2025-11-25 16:30:40.948 254096 DEBUG oslo.service.loopingcall [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:30:40 compute-0 nova_compute[254092]: 2025-11-25 16:30:40.948 254096 DEBUG nova.compute.manager [-] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:30:40 compute-0 nova_compute[254092]: 2025-11-25 16:30:40.949 254096 DEBUG nova.network.neutron [-] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:30:40 compute-0 ceph-mon[74985]: pgmap v1293: 321 pgs: 321 active+clean; 372 MiB data, 543 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 1.8 MiB/s wr, 281 op/s
Nov 25 16:30:41 compute-0 nova_compute[254092]: 2025-11-25 16:30:41.454 254096 DEBUG nova.network.neutron [-] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:30:41 compute-0 nova_compute[254092]: 2025-11-25 16:30:41.467 254096 INFO nova.compute.manager [-] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Took 0.52 seconds to deallocate network for instance.
Nov 25 16:30:41 compute-0 nova_compute[254092]: 2025-11-25 16:30:41.508 254096 DEBUG nova.compute.manager [req-43da5c4c-24cf-4713-9b0a-a2504e5e3adf req-dc9d189e-3c1e-4566-8725-cfee57909a22 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Received event network-vif-deleted-0f27a287-0c09-4767-a6cf-a7f4f8870ea1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:30:41 compute-0 nova_compute[254092]: 2025-11-25 16:30:41.510 254096 DEBUG oslo_concurrency.lockutils [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:41 compute-0 nova_compute[254092]: 2025-11-25 16:30:41.511 254096 DEBUG oslo_concurrency.lockutils [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:41 compute-0 nova_compute[254092]: 2025-11-25 16:30:41.633 254096 DEBUG oslo_concurrency.processutils [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:30:41 compute-0 ovn_controller[153477]: 2025-11-25T16:30:41Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d4:54:45 10.100.0.10
Nov 25 16:30:41 compute-0 ovn_controller[153477]: 2025-11-25T16:30:41Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d4:54:45 10.100.0.10
Nov 25 16:30:42 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:30:42 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2788548521' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:30:42 compute-0 nova_compute[254092]: 2025-11-25 16:30:42.063 254096 DEBUG oslo_concurrency.processutils [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:30:42 compute-0 nova_compute[254092]: 2025-11-25 16:30:42.069 254096 DEBUG nova.compute.provider_tree [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:30:42 compute-0 nova_compute[254092]: 2025-11-25 16:30:42.082 254096 DEBUG nova.compute.manager [req-65af56e8-1127-42b8-a6d4-f8b19758ab39 req-2eb1de35-1f18-4f98-81c1-cc5fcbb280b6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Received event network-vif-unplugged-0f27a287-0c09-4767-a6cf-a7f4f8870ea1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:30:42 compute-0 nova_compute[254092]: 2025-11-25 16:30:42.082 254096 DEBUG oslo_concurrency.lockutils [req-65af56e8-1127-42b8-a6d4-f8b19758ab39 req-2eb1de35-1f18-4f98-81c1-cc5fcbb280b6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7777dd86-925e-4f98-bd68-e38ac540d97b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:42 compute-0 nova_compute[254092]: 2025-11-25 16:30:42.082 254096 DEBUG oslo_concurrency.lockutils [req-65af56e8-1127-42b8-a6d4-f8b19758ab39 req-2eb1de35-1f18-4f98-81c1-cc5fcbb280b6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7777dd86-925e-4f98-bd68-e38ac540d97b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:42 compute-0 nova_compute[254092]: 2025-11-25 16:30:42.082 254096 DEBUG oslo_concurrency.lockutils [req-65af56e8-1127-42b8-a6d4-f8b19758ab39 req-2eb1de35-1f18-4f98-81c1-cc5fcbb280b6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7777dd86-925e-4f98-bd68-e38ac540d97b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:42 compute-0 nova_compute[254092]: 2025-11-25 16:30:42.083 254096 DEBUG nova.compute.manager [req-65af56e8-1127-42b8-a6d4-f8b19758ab39 req-2eb1de35-1f18-4f98-81c1-cc5fcbb280b6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] No waiting events found dispatching network-vif-unplugged-0f27a287-0c09-4767-a6cf-a7f4f8870ea1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:30:42 compute-0 nova_compute[254092]: 2025-11-25 16:30:42.083 254096 WARNING nova.compute.manager [req-65af56e8-1127-42b8-a6d4-f8b19758ab39 req-2eb1de35-1f18-4f98-81c1-cc5fcbb280b6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Received unexpected event network-vif-unplugged-0f27a287-0c09-4767-a6cf-a7f4f8870ea1 for instance with vm_state deleted and task_state None.
Nov 25 16:30:42 compute-0 nova_compute[254092]: 2025-11-25 16:30:42.083 254096 DEBUG nova.compute.manager [req-65af56e8-1127-42b8-a6d4-f8b19758ab39 req-2eb1de35-1f18-4f98-81c1-cc5fcbb280b6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Received event network-vif-plugged-0f27a287-0c09-4767-a6cf-a7f4f8870ea1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:30:42 compute-0 nova_compute[254092]: 2025-11-25 16:30:42.083 254096 DEBUG oslo_concurrency.lockutils [req-65af56e8-1127-42b8-a6d4-f8b19758ab39 req-2eb1de35-1f18-4f98-81c1-cc5fcbb280b6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7777dd86-925e-4f98-bd68-e38ac540d97b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:42 compute-0 nova_compute[254092]: 2025-11-25 16:30:42.083 254096 DEBUG oslo_concurrency.lockutils [req-65af56e8-1127-42b8-a6d4-f8b19758ab39 req-2eb1de35-1f18-4f98-81c1-cc5fcbb280b6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7777dd86-925e-4f98-bd68-e38ac540d97b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:42 compute-0 nova_compute[254092]: 2025-11-25 16:30:42.083 254096 DEBUG oslo_concurrency.lockutils [req-65af56e8-1127-42b8-a6d4-f8b19758ab39 req-2eb1de35-1f18-4f98-81c1-cc5fcbb280b6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7777dd86-925e-4f98-bd68-e38ac540d97b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:42 compute-0 nova_compute[254092]: 2025-11-25 16:30:42.084 254096 DEBUG nova.compute.manager [req-65af56e8-1127-42b8-a6d4-f8b19758ab39 req-2eb1de35-1f18-4f98-81c1-cc5fcbb280b6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] No waiting events found dispatching network-vif-plugged-0f27a287-0c09-4767-a6cf-a7f4f8870ea1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:30:42 compute-0 nova_compute[254092]: 2025-11-25 16:30:42.084 254096 WARNING nova.compute.manager [req-65af56e8-1127-42b8-a6d4-f8b19758ab39 req-2eb1de35-1f18-4f98-81c1-cc5fcbb280b6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Received unexpected event network-vif-plugged-0f27a287-0c09-4767-a6cf-a7f4f8870ea1 for instance with vm_state deleted and task_state None.
Nov 25 16:30:42 compute-0 nova_compute[254092]: 2025-11-25 16:30:42.089 254096 DEBUG nova.scheduler.client.report [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:30:42 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2788548521' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:30:42 compute-0 nova_compute[254092]: 2025-11-25 16:30:42.109 254096 DEBUG oslo_concurrency.lockutils [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:42 compute-0 nova_compute[254092]: 2025-11-25 16:30:42.135 254096 INFO nova.scheduler.client.report [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Deleted allocations for instance 7777dd86-925e-4f98-bd68-e38ac540d97b
Nov 25 16:30:42 compute-0 nova_compute[254092]: 2025-11-25 16:30:42.208 254096 DEBUG oslo_concurrency.lockutils [None req-947ba014-af15-49a6-a913-04e43a45b29e a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "7777dd86-925e-4f98-bd68-e38ac540d97b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:42 compute-0 nova_compute[254092]: 2025-11-25 16:30:42.473 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:42 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:30:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1294: 321 pgs: 321 active+clean; 346 MiB data, 543 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 1.9 MiB/s wr, 264 op/s
Nov 25 16:30:43 compute-0 ceph-mon[74985]: pgmap v1294: 321 pgs: 321 active+clean; 346 MiB data, 543 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 1.9 MiB/s wr, 264 op/s
Nov 25 16:30:43 compute-0 nova_compute[254092]: 2025-11-25 16:30:43.256 254096 DEBUG oslo_concurrency.lockutils [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "f0cb83d8-c2a3-49d1-8c01-b9be9922abd1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:43 compute-0 nova_compute[254092]: 2025-11-25 16:30:43.256 254096 DEBUG oslo_concurrency.lockutils [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "f0cb83d8-c2a3-49d1-8c01-b9be9922abd1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:43 compute-0 nova_compute[254092]: 2025-11-25 16:30:43.257 254096 DEBUG oslo_concurrency.lockutils [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "f0cb83d8-c2a3-49d1-8c01-b9be9922abd1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:43 compute-0 nova_compute[254092]: 2025-11-25 16:30:43.258 254096 DEBUG oslo_concurrency.lockutils [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "f0cb83d8-c2a3-49d1-8c01-b9be9922abd1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:43 compute-0 nova_compute[254092]: 2025-11-25 16:30:43.258 254096 DEBUG oslo_concurrency.lockutils [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "f0cb83d8-c2a3-49d1-8c01-b9be9922abd1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:43 compute-0 nova_compute[254092]: 2025-11-25 16:30:43.259 254096 INFO nova.compute.manager [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Terminating instance
Nov 25 16:30:43 compute-0 nova_compute[254092]: 2025-11-25 16:30:43.260 254096 DEBUG nova.compute.manager [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:30:43 compute-0 kernel: tap0d1cf86d-66 (unregistering): left promiscuous mode
Nov 25 16:30:43 compute-0 NetworkManager[48891]: <info>  [1764088243.3239] device (tap0d1cf86d-66): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:30:43 compute-0 ovn_controller[153477]: 2025-11-25T16:30:43Z|00132|binding|INFO|Releasing lport 0d1cf86d-6639-47eb-8de1-718476d1c006 from this chassis (sb_readonly=0)
Nov 25 16:30:43 compute-0 ovn_controller[153477]: 2025-11-25T16:30:43Z|00133|binding|INFO|Setting lport 0d1cf86d-6639-47eb-8de1-718476d1c006 down in Southbound
Nov 25 16:30:43 compute-0 nova_compute[254092]: 2025-11-25 16:30:43.345 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:43 compute-0 ovn_controller[153477]: 2025-11-25T16:30:43Z|00134|binding|INFO|Removing iface tap0d1cf86d-66 ovn-installed in OVS
Nov 25 16:30:43 compute-0 nova_compute[254092]: 2025-11-25 16:30:43.347 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.352 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:52:60 10.100.0.10'], port_security=['fa:16:3e:78:52:60 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f0cb83d8-c2a3-49d1-8c01-b9be9922abd1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a4c9877f37b34e7fba5f3cc9642d1a48', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2633f0b7-0e94-4653-a67a-e645ec4a69fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06f31098-96f7-4b9c-ab6f-85e953b1eed6, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=0d1cf86d-6639-47eb-8de1-718476d1c006) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:30:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.353 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 0d1cf86d-6639-47eb-8de1-718476d1c006 in datapath 3403825e-13ff-43e0-80c4-b59cf23ed30b unbound from our chassis
Nov 25 16:30:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.354 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3403825e-13ff-43e0-80c4-b59cf23ed30b
Nov 25 16:30:43 compute-0 nova_compute[254092]: 2025-11-25 16:30:43.375 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.382 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[95d99d47-8093-4fea-9007-4a7f75fd01a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:43 compute-0 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000016.scope: Deactivated successfully.
Nov 25 16:30:43 compute-0 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000016.scope: Consumed 16.532s CPU time.
Nov 25 16:30:43 compute-0 systemd-machined[216343]: Machine qemu-24-instance-00000016 terminated.
Nov 25 16:30:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.407 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[264a7b16-f966-457b-a029-31b10a649f63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.410 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f51670f9-6abb-4656-9a28-f077d423bbf9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.435 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[2978b643-e178-470f-8a58-799ac2479833]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.449 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f2b0e3b2-4e89-418b-b20f-70f9ef1f210f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3403825e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:44:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 21, 'rx_bytes': 868, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 21, 'rx_bytes': 868, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458728, 'reachable_time': 28517, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287984, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.461 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e4c8cf46-f140-41cb-adb2-881cfd7da3ba]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3403825e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458739, 'tstamp': 458739}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287985, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3403825e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458742, 'tstamp': 458742}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287985, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.463 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3403825e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:43 compute-0 nova_compute[254092]: 2025-11-25 16:30:43.464 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:43 compute-0 nova_compute[254092]: 2025-11-25 16:30:43.468 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.468 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3403825e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.468 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:30:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.469 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3403825e-10, col_values=(('external_ids', {'iface-id': 'e9e1418a-32be-4b0e-81c1-ad489d6a782b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.469 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:30:43 compute-0 systemd-udevd[287975]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:30:43 compute-0 kernel: tap0d1cf86d-66: entered promiscuous mode
Nov 25 16:30:43 compute-0 NetworkManager[48891]: <info>  [1764088243.4928] manager: (tap0d1cf86d-66): new Tun device (/org/freedesktop/NetworkManager/Devices/75)
Nov 25 16:30:43 compute-0 ovn_controller[153477]: 2025-11-25T16:30:43Z|00135|binding|INFO|Claiming lport 0d1cf86d-6639-47eb-8de1-718476d1c006 for this chassis.
Nov 25 16:30:43 compute-0 ovn_controller[153477]: 2025-11-25T16:30:43Z|00136|binding|INFO|0d1cf86d-6639-47eb-8de1-718476d1c006: Claiming fa:16:3e:78:52:60 10.100.0.10
Nov 25 16:30:43 compute-0 kernel: tap0d1cf86d-66 (unregistering): left promiscuous mode
Nov 25 16:30:43 compute-0 nova_compute[254092]: 2025-11-25 16:30:43.500 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.500 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:52:60 10.100.0.10'], port_security=['fa:16:3e:78:52:60 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f0cb83d8-c2a3-49d1-8c01-b9be9922abd1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a4c9877f37b34e7fba5f3cc9642d1a48', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2633f0b7-0e94-4653-a67a-e645ec4a69fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06f31098-96f7-4b9c-ab6f-85e953b1eed6, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=0d1cf86d-6639-47eb-8de1-718476d1c006) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:30:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.502 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 0d1cf86d-6639-47eb-8de1-718476d1c006 in datapath 3403825e-13ff-43e0-80c4-b59cf23ed30b bound to our chassis
Nov 25 16:30:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.504 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3403825e-13ff-43e0-80c4-b59cf23ed30b
Nov 25 16:30:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.518 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d57a9d49-cf8b-4d09-8b66-57b5fcca1158]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:43 compute-0 ovn_controller[153477]: 2025-11-25T16:30:43Z|00137|binding|INFO|Setting lport 0d1cf86d-6639-47eb-8de1-718476d1c006 ovn-installed in OVS
Nov 25 16:30:43 compute-0 ovn_controller[153477]: 2025-11-25T16:30:43Z|00138|binding|INFO|Setting lport 0d1cf86d-6639-47eb-8de1-718476d1c006 up in Southbound
Nov 25 16:30:43 compute-0 ovn_controller[153477]: 2025-11-25T16:30:43Z|00139|binding|INFO|Releasing lport 0d1cf86d-6639-47eb-8de1-718476d1c006 from this chassis (sb_readonly=1)
Nov 25 16:30:43 compute-0 ovn_controller[153477]: 2025-11-25T16:30:43Z|00140|if_status|INFO|Not setting lport 0d1cf86d-6639-47eb-8de1-718476d1c006 down as sb is readonly
Nov 25 16:30:43 compute-0 ovn_controller[153477]: 2025-11-25T16:30:43Z|00141|binding|INFO|Removing iface tap0d1cf86d-66 ovn-installed in OVS
Nov 25 16:30:43 compute-0 ovn_controller[153477]: 2025-11-25T16:30:43Z|00142|binding|INFO|Releasing lport 0d1cf86d-6639-47eb-8de1-718476d1c006 from this chassis (sb_readonly=0)
Nov 25 16:30:43 compute-0 ovn_controller[153477]: 2025-11-25T16:30:43Z|00143|binding|INFO|Setting lport 0d1cf86d-6639-47eb-8de1-718476d1c006 down in Southbound
Nov 25 16:30:43 compute-0 nova_compute[254092]: 2025-11-25 16:30:43.524 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.531 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:52:60 10.100.0.10'], port_security=['fa:16:3e:78:52:60 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f0cb83d8-c2a3-49d1-8c01-b9be9922abd1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a4c9877f37b34e7fba5f3cc9642d1a48', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2633f0b7-0e94-4653-a67a-e645ec4a69fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06f31098-96f7-4b9c-ab6f-85e953b1eed6, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=0d1cf86d-6639-47eb-8de1-718476d1c006) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:30:43 compute-0 nova_compute[254092]: 2025-11-25 16:30:43.531 254096 INFO nova.virt.libvirt.driver [-] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Instance destroyed successfully.
Nov 25 16:30:43 compute-0 nova_compute[254092]: 2025-11-25 16:30:43.532 254096 DEBUG nova.objects.instance [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'resources' on Instance uuid f0cb83d8-c2a3-49d1-8c01-b9be9922abd1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:30:43 compute-0 nova_compute[254092]: 2025-11-25 16:30:43.541 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:43 compute-0 nova_compute[254092]: 2025-11-25 16:30:43.543 254096 DEBUG nova.virt.libvirt.vif [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:29:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-2013800485',display_name='tempest-ServersAdminTestJSON-server-2013800485',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-2013800485',id=22,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:29:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a4c9877f37b34e7fba5f3cc9642d1a48',ramdisk_id='',reservation_id='r-p696wseq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_di
sk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-2045227182',owner_user_name='tempest-ServersAdminTestJSON-2045227182-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:29:17Z,user_data=None,user_id='a8e217b742fe4a57a4ac4ffc776670fe',uuid=f0cb83d8-c2a3-49d1-8c01-b9be9922abd1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0d1cf86d-6639-47eb-8de1-718476d1c006", "address": "fa:16:3e:78:52:60", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1cf86d-66", "ovs_interfaceid": "0d1cf86d-6639-47eb-8de1-718476d1c006", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:30:43 compute-0 nova_compute[254092]: 2025-11-25 16:30:43.544 254096 DEBUG nova.network.os_vif_util [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converting VIF {"id": "0d1cf86d-6639-47eb-8de1-718476d1c006", "address": "fa:16:3e:78:52:60", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1cf86d-66", "ovs_interfaceid": "0d1cf86d-6639-47eb-8de1-718476d1c006", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:30:43 compute-0 nova_compute[254092]: 2025-11-25 16:30:43.544 254096 DEBUG nova.network.os_vif_util [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:52:60,bridge_name='br-int',has_traffic_filtering=True,id=0d1cf86d-6639-47eb-8de1-718476d1c006,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d1cf86d-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:30:43 compute-0 nova_compute[254092]: 2025-11-25 16:30:43.544 254096 DEBUG os_vif [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:52:60,bridge_name='br-int',has_traffic_filtering=True,id=0d1cf86d-6639-47eb-8de1-718476d1c006,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d1cf86d-66') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:30:43 compute-0 nova_compute[254092]: 2025-11-25 16:30:43.546 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:43 compute-0 nova_compute[254092]: 2025-11-25 16:30:43.546 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0d1cf86d-66, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:43 compute-0 nova_compute[254092]: 2025-11-25 16:30:43.547 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:43 compute-0 nova_compute[254092]: 2025-11-25 16:30:43.548 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.549 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e9ef4213-d1bf-4f6b-82db-20482ea5c03f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:43 compute-0 nova_compute[254092]: 2025-11-25 16:30:43.550 254096 INFO os_vif [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:52:60,bridge_name='br-int',has_traffic_filtering=True,id=0d1cf86d-6639-47eb-8de1-718476d1c006,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d1cf86d-66')
Nov 25 16:30:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.557 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[00dcf611-265f-4f6c-bdcd-1a5e748baaa1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.586 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[388fc9a7-4080-4efd-8474-133ef1021563]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.604 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8894b2cc-59ed-4724-bdd7-858d0459531c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3403825e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:44:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 23, 'rx_bytes': 868, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 23, 'rx_bytes': 868, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458728, 'reachable_time': 28517, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288014, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.621 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[773e6d23-b571-434e-ad25-cf07275f3f5f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3403825e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458739, 'tstamp': 458739}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288015, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3403825e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458742, 'tstamp': 458742}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288015, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.623 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3403825e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:43 compute-0 nova_compute[254092]: 2025-11-25 16:30:43.624 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:43 compute-0 nova_compute[254092]: 2025-11-25 16:30:43.625 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.625 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3403825e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.625 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:30:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.626 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3403825e-10, col_values=(('external_ids', {'iface-id': 'e9e1418a-32be-4b0e-81c1-ad489d6a782b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.626 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:30:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.627 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 0d1cf86d-6639-47eb-8de1-718476d1c006 in datapath 3403825e-13ff-43e0-80c4-b59cf23ed30b unbound from our chassis
Nov 25 16:30:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.629 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3403825e-13ff-43e0-80c4-b59cf23ed30b
Nov 25 16:30:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.644 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[51057887-a864-4962-ba87-fc855aa69e30]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.676 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[12c3f08a-3543-448d-80e4-f665b7a0d70e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.679 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[87055f16-de57-407e-b7b0-ed3e3f3bc3fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.710 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1f2f7742-e3bd-41ca-be7a-ff60d43ca328]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.728 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b0512fca-ec5c-410f-bc99-24c9e8710546]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3403825e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:44:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 25, 'rx_bytes': 868, 'tx_bytes': 1194, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 25, 'rx_bytes': 868, 'tx_bytes': 1194, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458728, 'reachable_time': 28517, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288021, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.745 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ccf8e57d-18cc-449f-945a-4a62bb220eec]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3403825e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458739, 'tstamp': 458739}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288023, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3403825e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458742, 'tstamp': 458742}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288023, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.746 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3403825e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:43 compute-0 nova_compute[254092]: 2025-11-25 16:30:43.747 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:43 compute-0 nova_compute[254092]: 2025-11-25 16:30:43.748 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.749 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3403825e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.749 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:30:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.749 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3403825e-10, col_values=(('external_ids', {'iface-id': 'e9e1418a-32be-4b0e-81c1-ad489d6a782b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:43.749 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:30:43 compute-0 nova_compute[254092]: 2025-11-25 16:30:43.894 254096 INFO nova.virt.libvirt.driver [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Deleting instance files /var/lib/nova/instances/f0cb83d8-c2a3-49d1-8c01-b9be9922abd1_del
Nov 25 16:30:43 compute-0 nova_compute[254092]: 2025-11-25 16:30:43.895 254096 INFO nova.virt.libvirt.driver [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Deletion of /var/lib/nova/instances/f0cb83d8-c2a3-49d1-8c01-b9be9922abd1_del complete
Nov 25 16:30:43 compute-0 nova_compute[254092]: 2025-11-25 16:30:43.951 254096 INFO nova.compute.manager [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Took 0.69 seconds to destroy the instance on the hypervisor.
Nov 25 16:30:43 compute-0 nova_compute[254092]: 2025-11-25 16:30:43.951 254096 DEBUG oslo.service.loopingcall [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:30:43 compute-0 nova_compute[254092]: 2025-11-25 16:30:43.952 254096 DEBUG nova.compute.manager [-] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:30:43 compute-0 nova_compute[254092]: 2025-11-25 16:30:43.952 254096 DEBUG nova.network.neutron [-] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:30:44 compute-0 nova_compute[254092]: 2025-11-25 16:30:44.067 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088229.0663986, df0db130-3ffa-4a60-8f7d-fb285a797631 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:30:44 compute-0 nova_compute[254092]: 2025-11-25 16:30:44.068 254096 INFO nova.compute.manager [-] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] VM Stopped (Lifecycle Event)
Nov 25 16:30:44 compute-0 nova_compute[254092]: 2025-11-25 16:30:44.086 254096 DEBUG nova.compute.manager [None req-5f6deaa3-bd11-45ff-a118-6d1ffc875df9 - - - - - -] [instance: df0db130-3ffa-4a60-8f7d-fb285a797631] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:30:44 compute-0 ovn_controller[153477]: 2025-11-25T16:30:44Z|00144|binding|INFO|Releasing lport e9e1418a-32be-4b0e-81c1-ad489d6a782b from this chassis (sb_readonly=0)
Nov 25 16:30:44 compute-0 ovn_controller[153477]: 2025-11-25T16:30:44Z|00145|binding|INFO|Releasing lport d23fff41-4296-47a5-8279-de62f83ea17d from this chassis (sb_readonly=0)
Nov 25 16:30:44 compute-0 nova_compute[254092]: 2025-11-25 16:30:44.247 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:44 compute-0 nova_compute[254092]: 2025-11-25 16:30:44.289 254096 DEBUG nova.compute.manager [req-98cff1f0-1265-4b38-a711-7dc01bcebc12 req-d12cd50e-9ad9-45d7-ac95-7d86e72a3c3b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Received event network-vif-unplugged-0d1cf86d-6639-47eb-8de1-718476d1c006 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:30:44 compute-0 nova_compute[254092]: 2025-11-25 16:30:44.290 254096 DEBUG oslo_concurrency.lockutils [req-98cff1f0-1265-4b38-a711-7dc01bcebc12 req-d12cd50e-9ad9-45d7-ac95-7d86e72a3c3b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f0cb83d8-c2a3-49d1-8c01-b9be9922abd1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:44 compute-0 nova_compute[254092]: 2025-11-25 16:30:44.290 254096 DEBUG oslo_concurrency.lockutils [req-98cff1f0-1265-4b38-a711-7dc01bcebc12 req-d12cd50e-9ad9-45d7-ac95-7d86e72a3c3b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0cb83d8-c2a3-49d1-8c01-b9be9922abd1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:44 compute-0 nova_compute[254092]: 2025-11-25 16:30:44.290 254096 DEBUG oslo_concurrency.lockutils [req-98cff1f0-1265-4b38-a711-7dc01bcebc12 req-d12cd50e-9ad9-45d7-ac95-7d86e72a3c3b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0cb83d8-c2a3-49d1-8c01-b9be9922abd1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:44 compute-0 nova_compute[254092]: 2025-11-25 16:30:44.290 254096 DEBUG nova.compute.manager [req-98cff1f0-1265-4b38-a711-7dc01bcebc12 req-d12cd50e-9ad9-45d7-ac95-7d86e72a3c3b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] No waiting events found dispatching network-vif-unplugged-0d1cf86d-6639-47eb-8de1-718476d1c006 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:30:44 compute-0 nova_compute[254092]: 2025-11-25 16:30:44.290 254096 DEBUG nova.compute.manager [req-98cff1f0-1265-4b38-a711-7dc01bcebc12 req-d12cd50e-9ad9-45d7-ac95-7d86e72a3c3b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Received event network-vif-unplugged-0d1cf86d-6639-47eb-8de1-718476d1c006 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:30:44 compute-0 nova_compute[254092]: 2025-11-25 16:30:44.853 254096 DEBUG nova.network.neutron [-] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:30:44 compute-0 nova_compute[254092]: 2025-11-25 16:30:44.872 254096 INFO nova.compute.manager [-] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Took 0.92 seconds to deallocate network for instance.
Nov 25 16:30:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1295: 321 pgs: 321 active+clean; 273 MiB data, 528 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 3.9 MiB/s wr, 344 op/s
Nov 25 16:30:44 compute-0 nova_compute[254092]: 2025-11-25 16:30:44.930 254096 DEBUG oslo_concurrency.lockutils [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:44 compute-0 nova_compute[254092]: 2025-11-25 16:30:44.932 254096 DEBUG oslo_concurrency.lockutils [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:44 compute-0 ceph-mon[74985]: pgmap v1295: 321 pgs: 321 active+clean; 273 MiB data, 528 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 3.9 MiB/s wr, 344 op/s
Nov 25 16:30:45 compute-0 nova_compute[254092]: 2025-11-25 16:30:45.043 254096 DEBUG oslo_concurrency.processutils [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:30:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:30:45 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2953352328' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:30:45 compute-0 nova_compute[254092]: 2025-11-25 16:30:45.477 254096 DEBUG oslo_concurrency.processutils [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:30:45 compute-0 nova_compute[254092]: 2025-11-25 16:30:45.483 254096 DEBUG nova.compute.provider_tree [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:30:45 compute-0 nova_compute[254092]: 2025-11-25 16:30:45.502 254096 DEBUG nova.scheduler.client.report [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:30:45 compute-0 nova_compute[254092]: 2025-11-25 16:30:45.587 254096 DEBUG oslo_concurrency.lockutils [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:45 compute-0 nova_compute[254092]: 2025-11-25 16:30:45.613 254096 INFO nova.scheduler.client.report [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Deleted allocations for instance f0cb83d8-c2a3-49d1-8c01-b9be9922abd1
Nov 25 16:30:45 compute-0 nova_compute[254092]: 2025-11-25 16:30:45.683 254096 DEBUG oslo_concurrency.lockutils [None req-16561adf-c6bf-4f72-9969-978d57ea8f18 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "f0cb83d8-c2a3-49d1-8c01-b9be9922abd1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.426s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:46 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2953352328' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:30:46 compute-0 nova_compute[254092]: 2025-11-25 16:30:46.304 254096 DEBUG oslo_concurrency.lockutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "07003872-27e7-4fd9-80cf-a34257d5aa97" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:46 compute-0 nova_compute[254092]: 2025-11-25 16:30:46.305 254096 DEBUG oslo_concurrency.lockutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:46 compute-0 nova_compute[254092]: 2025-11-25 16:30:46.325 254096 DEBUG nova.compute.manager [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:30:46 compute-0 nova_compute[254092]: 2025-11-25 16:30:46.394 254096 DEBUG oslo_concurrency.lockutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:46 compute-0 nova_compute[254092]: 2025-11-25 16:30:46.394 254096 DEBUG oslo_concurrency.lockutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:46 compute-0 nova_compute[254092]: 2025-11-25 16:30:46.401 254096 DEBUG nova.virt.hardware [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:30:46 compute-0 nova_compute[254092]: 2025-11-25 16:30:46.402 254096 INFO nova.compute.claims [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:30:46 compute-0 nova_compute[254092]: 2025-11-25 16:30:46.485 254096 DEBUG nova.compute.manager [req-6187d9a4-a700-4c97-bab2-f040a632b331 req-88dd69af-e2e5-4597-8b7a-9c0cff79e38d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Received event network-vif-plugged-0d1cf86d-6639-47eb-8de1-718476d1c006 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:30:46 compute-0 nova_compute[254092]: 2025-11-25 16:30:46.486 254096 DEBUG oslo_concurrency.lockutils [req-6187d9a4-a700-4c97-bab2-f040a632b331 req-88dd69af-e2e5-4597-8b7a-9c0cff79e38d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f0cb83d8-c2a3-49d1-8c01-b9be9922abd1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:46 compute-0 nova_compute[254092]: 2025-11-25 16:30:46.486 254096 DEBUG oslo_concurrency.lockutils [req-6187d9a4-a700-4c97-bab2-f040a632b331 req-88dd69af-e2e5-4597-8b7a-9c0cff79e38d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0cb83d8-c2a3-49d1-8c01-b9be9922abd1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:46 compute-0 nova_compute[254092]: 2025-11-25 16:30:46.487 254096 DEBUG oslo_concurrency.lockutils [req-6187d9a4-a700-4c97-bab2-f040a632b331 req-88dd69af-e2e5-4597-8b7a-9c0cff79e38d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0cb83d8-c2a3-49d1-8c01-b9be9922abd1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:46 compute-0 nova_compute[254092]: 2025-11-25 16:30:46.487 254096 DEBUG nova.compute.manager [req-6187d9a4-a700-4c97-bab2-f040a632b331 req-88dd69af-e2e5-4597-8b7a-9c0cff79e38d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] No waiting events found dispatching network-vif-plugged-0d1cf86d-6639-47eb-8de1-718476d1c006 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:30:46 compute-0 nova_compute[254092]: 2025-11-25 16:30:46.487 254096 WARNING nova.compute.manager [req-6187d9a4-a700-4c97-bab2-f040a632b331 req-88dd69af-e2e5-4597-8b7a-9c0cff79e38d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Received unexpected event network-vif-plugged-0d1cf86d-6639-47eb-8de1-718476d1c006 for instance with vm_state deleted and task_state None.
Nov 25 16:30:46 compute-0 nova_compute[254092]: 2025-11-25 16:30:46.487 254096 DEBUG nova.compute.manager [req-6187d9a4-a700-4c97-bab2-f040a632b331 req-88dd69af-e2e5-4597-8b7a-9c0cff79e38d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Received event network-vif-deleted-0d1cf86d-6639-47eb-8de1-718476d1c006 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:30:46 compute-0 nova_compute[254092]: 2025-11-25 16:30:46.564 254096 DEBUG oslo_concurrency.processutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:30:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1296: 321 pgs: 321 active+clean; 246 MiB data, 495 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 236 op/s
Nov 25 16:30:46 compute-0 nova_compute[254092]: 2025-11-25 16:30:46.951 254096 DEBUG oslo_concurrency.lockutils [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "090ac2d7-979e-4706-8a01-5e94ab72282d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:46 compute-0 nova_compute[254092]: 2025-11-25 16:30:46.951 254096 DEBUG oslo_concurrency.lockutils [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "090ac2d7-979e-4706-8a01-5e94ab72282d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:46 compute-0 nova_compute[254092]: 2025-11-25 16:30:46.952 254096 DEBUG oslo_concurrency.lockutils [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "090ac2d7-979e-4706-8a01-5e94ab72282d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:46 compute-0 nova_compute[254092]: 2025-11-25 16:30:46.952 254096 DEBUG oslo_concurrency.lockutils [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "090ac2d7-979e-4706-8a01-5e94ab72282d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:46 compute-0 nova_compute[254092]: 2025-11-25 16:30:46.952 254096 DEBUG oslo_concurrency.lockutils [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "090ac2d7-979e-4706-8a01-5e94ab72282d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:46 compute-0 nova_compute[254092]: 2025-11-25 16:30:46.954 254096 INFO nova.compute.manager [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Terminating instance
Nov 25 16:30:46 compute-0 nova_compute[254092]: 2025-11-25 16:30:46.955 254096 DEBUG nova.compute.manager [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:30:47 compute-0 kernel: tap7c4d5f4d-36 (unregistering): left promiscuous mode
Nov 25 16:30:47 compute-0 NetworkManager[48891]: <info>  [1764088247.0061] device (tap7c4d5f4d-36): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:30:47 compute-0 ovn_controller[153477]: 2025-11-25T16:30:47Z|00146|binding|INFO|Releasing lport 7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c from this chassis (sb_readonly=0)
Nov 25 16:30:47 compute-0 ovn_controller[153477]: 2025-11-25T16:30:47Z|00147|binding|INFO|Setting lport 7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c down in Southbound
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.010 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:47 compute-0 ovn_controller[153477]: 2025-11-25T16:30:47Z|00148|binding|INFO|Removing iface tap7c4d5f4d-36 ovn-installed in OVS
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.012 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:47 compute-0 ceph-mon[74985]: pgmap v1296: 321 pgs: 321 active+clean; 246 MiB data, 495 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 236 op/s
Nov 25 16:30:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:47.020 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:5a:58 10.100.0.11'], port_security=['fa:16:3e:d8:5a:58 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '090ac2d7-979e-4706-8a01-5e94ab72282d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a4c9877f37b34e7fba5f3cc9642d1a48', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2633f0b7-0e94-4653-a67a-e645ec4a69fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06f31098-96f7-4b9c-ab6f-85e953b1eed6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:30:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:47.023 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c in datapath 3403825e-13ff-43e0-80c4-b59cf23ed30b unbound from our chassis
Nov 25 16:30:47 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:30:47 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2268997110' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:30:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:47.025 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3403825e-13ff-43e0-80c4-b59cf23ed30b
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.031 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:47.044 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c2ba09ff-4085-4e49-b03d-efa1b14bd31f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.059 254096 DEBUG oslo_concurrency.processutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:30:47 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000013.scope: Deactivated successfully.
Nov 25 16:30:47 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000013.scope: Consumed 16.546s CPU time.
Nov 25 16:30:47 compute-0 systemd-machined[216343]: Machine qemu-22-instance-00000013 terminated.
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.068 254096 DEBUG nova.compute.provider_tree [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:30:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:47.079 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8fc56c69-5b24-4798-bb2b-a02cb95ce52d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:47.081 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ed80fdb6-5ffe-46b0-89e0-2027a5283c6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.086 254096 DEBUG nova.scheduler.client.report [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.105 254096 DEBUG oslo_concurrency.lockutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.106 254096 DEBUG nova.compute.manager [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:30:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:47.110 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[183b61b8-d2ac-4fa3-b2b8-aef467a36de4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:47.127 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1e7ad35c-353c-415f-8a86-4882a1e2aa2a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3403825e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:44:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 27, 'rx_bytes': 868, 'tx_bytes': 1278, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 27, 'rx_bytes': 868, 'tx_bytes': 1278, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458728, 'reachable_time': 28517, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288079, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:47.142 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8613e502-d8d9-4928-9617-98b972ef1000]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3403825e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458739, 'tstamp': 458739}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288080, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3403825e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458742, 'tstamp': 458742}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288080, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:47.144 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3403825e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.146 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:47.150 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3403825e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:47.151 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:30:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:47.152 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3403825e-10, col_values=(('external_ids', {'iface-id': 'e9e1418a-32be-4b0e-81c1-ad489d6a782b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:47.152 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.152 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.153 254096 DEBUG nova.compute.manager [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.153 254096 DEBUG nova.network.neutron [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.170 254096 INFO nova.virt.libvirt.driver [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.183 254096 DEBUG nova.compute.manager [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.195 254096 INFO nova.virt.libvirt.driver [-] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Instance destroyed successfully.
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.195 254096 DEBUG nova.objects.instance [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'resources' on Instance uuid 090ac2d7-979e-4706-8a01-5e94ab72282d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.220 254096 DEBUG nova.virt.libvirt.vif [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:28:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-391611256',display_name='tempest-ServersAdminTestJSON-server-391611256',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-391611256',id=19,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:28:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a4c9877f37b34e7fba5f3cc9642d1a48',ramdisk_id='',reservation_id='r-6weq4dsu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-2045227182',owner_user_name='tempest-ServersAdminTestJSON-2045227182-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:28:54Z,user_data=None,user_id='a8e217b742fe4a57a4ac4ffc776670fe',uuid=090ac2d7-979e-4706-8a01-5e94ab72282d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c", "address": "fa:16:3e:d8:5a:58", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c4d5f4d-36", "ovs_interfaceid": "7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.221 254096 DEBUG nova.network.os_vif_util [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converting VIF {"id": "7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c", "address": "fa:16:3e:d8:5a:58", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c4d5f4d-36", "ovs_interfaceid": "7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.221 254096 DEBUG nova.network.os_vif_util [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:5a:58,bridge_name='br-int',has_traffic_filtering=True,id=7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c4d5f4d-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.222 254096 DEBUG os_vif [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:5a:58,bridge_name='br-int',has_traffic_filtering=True,id=7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c4d5f4d-36') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.223 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.223 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c4d5f4d-36, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.225 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.227 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.230 254096 INFO os_vif [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:5a:58,bridge_name='br-int',has_traffic_filtering=True,id=7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c4d5f4d-36')
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.300 254096 DEBUG nova.compute.manager [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.302 254096 DEBUG nova.virt.libvirt.driver [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.302 254096 INFO nova.virt.libvirt.driver [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Creating image(s)
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.327 254096 DEBUG nova.storage.rbd_utils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image 07003872-27e7-4fd9-80cf-a34257d5aa97_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.352 254096 DEBUG nova.storage.rbd_utils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image 07003872-27e7-4fd9-80cf-a34257d5aa97_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.382 254096 DEBUG nova.storage.rbd_utils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image 07003872-27e7-4fd9-80cf-a34257d5aa97_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.386 254096 DEBUG oslo_concurrency.processutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.457 254096 DEBUG nova.policy [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c2f73d928f424db49a62e7dff2b1ce14', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '36fd407edf16424e96041f57fccb9e2b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.477 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.480 254096 DEBUG oslo_concurrency.processutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.481 254096 DEBUG oslo_concurrency.lockutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.481 254096 DEBUG oslo_concurrency.lockutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.482 254096 DEBUG oslo_concurrency.lockutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.503 254096 DEBUG nova.storage.rbd_utils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image 07003872-27e7-4fd9-80cf-a34257d5aa97_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.506 254096 DEBUG oslo_concurrency.processutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 07003872-27e7-4fd9-80cf-a34257d5aa97_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.538 254096 DEBUG nova.compute.manager [req-c4a43eb6-bb75-4113-9e69-71763671709a req-3c48369c-e18b-431b-9ace-17fcf4000735 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Received event network-vif-unplugged-7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.539 254096 DEBUG oslo_concurrency.lockutils [req-c4a43eb6-bb75-4113-9e69-71763671709a req-3c48369c-e18b-431b-9ace-17fcf4000735 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "090ac2d7-979e-4706-8a01-5e94ab72282d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.539 254096 DEBUG oslo_concurrency.lockutils [req-c4a43eb6-bb75-4113-9e69-71763671709a req-3c48369c-e18b-431b-9ace-17fcf4000735 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "090ac2d7-979e-4706-8a01-5e94ab72282d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.540 254096 DEBUG oslo_concurrency.lockutils [req-c4a43eb6-bb75-4113-9e69-71763671709a req-3c48369c-e18b-431b-9ace-17fcf4000735 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "090ac2d7-979e-4706-8a01-5e94ab72282d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.540 254096 DEBUG nova.compute.manager [req-c4a43eb6-bb75-4113-9e69-71763671709a req-3c48369c-e18b-431b-9ace-17fcf4000735 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] No waiting events found dispatching network-vif-unplugged-7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.540 254096 DEBUG nova.compute.manager [req-c4a43eb6-bb75-4113-9e69-71763671709a req-3c48369c-e18b-431b-9ace-17fcf4000735 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Received event network-vif-unplugged-7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.579 254096 INFO nova.virt.libvirt.driver [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Deleting instance files /var/lib/nova/instances/090ac2d7-979e-4706-8a01-5e94ab72282d_del
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.580 254096 INFO nova.virt.libvirt.driver [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Deletion of /var/lib/nova/instances/090ac2d7-979e-4706-8a01-5e94ab72282d_del complete
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.646 254096 INFO nova.compute.manager [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Took 0.69 seconds to destroy the instance on the hypervisor.
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.646 254096 DEBUG oslo.service.loopingcall [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.647 254096 DEBUG nova.compute.manager [-] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.647 254096 DEBUG nova.network.neutron [-] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.807 254096 DEBUG oslo_concurrency.processutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 07003872-27e7-4fd9-80cf-a34257d5aa97_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.301s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.879 254096 DEBUG nova.storage.rbd_utils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] resizing rbd image 07003872-27e7-4fd9-80cf-a34257d5aa97_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:30:47 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.973 254096 DEBUG nova.objects.instance [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'migration_context' on Instance uuid 07003872-27e7-4fd9-80cf-a34257d5aa97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.991 254096 DEBUG nova.virt.libvirt.driver [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.991 254096 DEBUG nova.virt.libvirt.driver [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Ensure instance console log exists: /var/lib/nova/instances/07003872-27e7-4fd9-80cf-a34257d5aa97/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.992 254096 DEBUG oslo_concurrency.lockutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.992 254096 DEBUG oslo_concurrency.lockutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:47 compute-0 nova_compute[254092]: 2025-11-25 16:30:47.993 254096 DEBUG oslo_concurrency.lockutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:48 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2268997110' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:30:48 compute-0 nova_compute[254092]: 2025-11-25 16:30:48.470 254096 DEBUG nova.network.neutron [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Successfully created port: 19d5425c-f0c6-4c68-b8a6-cb1c6357d249 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:30:48 compute-0 nova_compute[254092]: 2025-11-25 16:30:48.548 254096 DEBUG nova.network.neutron [-] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:30:48 compute-0 nova_compute[254092]: 2025-11-25 16:30:48.608 254096 DEBUG nova.compute.manager [req-68d0710d-c2f7-424d-b112-beba4e3bee52 req-87c99b9c-c972-481b-9989-e59abaea9cd9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Received event network-vif-deleted-7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:30:48 compute-0 nova_compute[254092]: 2025-11-25 16:30:48.608 254096 INFO nova.compute.manager [req-68d0710d-c2f7-424d-b112-beba4e3bee52 req-87c99b9c-c972-481b-9989-e59abaea9cd9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Neutron deleted interface 7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c; detaching it from the instance and deleting it from the info cache
Nov 25 16:30:48 compute-0 nova_compute[254092]: 2025-11-25 16:30:48.608 254096 DEBUG nova.network.neutron [req-68d0710d-c2f7-424d-b112-beba4e3bee52 req-87c99b9c-c972-481b-9989-e59abaea9cd9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:30:48 compute-0 nova_compute[254092]: 2025-11-25 16:30:48.643 254096 INFO nova.compute.manager [-] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Took 1.00 seconds to deallocate network for instance.
Nov 25 16:30:48 compute-0 nova_compute[254092]: 2025-11-25 16:30:48.650 254096 DEBUG nova.compute.manager [req-68d0710d-c2f7-424d-b112-beba4e3bee52 req-87c99b9c-c972-481b-9989-e59abaea9cd9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Detach interface failed, port_id=7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c, reason: Instance 090ac2d7-979e-4706-8a01-5e94ab72282d could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 25 16:30:48 compute-0 nova_compute[254092]: 2025-11-25 16:30:48.755 254096 DEBUG oslo_concurrency.lockutils [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:48 compute-0 nova_compute[254092]: 2025-11-25 16:30:48.755 254096 DEBUG oslo_concurrency.lockutils [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:48 compute-0 nova_compute[254092]: 2025-11-25 16:30:48.860 254096 DEBUG oslo_concurrency.processutils [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:30:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1297: 321 pgs: 321 active+clean; 246 MiB data, 495 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 200 op/s
Nov 25 16:30:49 compute-0 ceph-mon[74985]: pgmap v1297: 321 pgs: 321 active+clean; 246 MiB data, 495 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 200 op/s
Nov 25 16:30:49 compute-0 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 25 16:30:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:30:49 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2577763299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:30:49 compute-0 nova_compute[254092]: 2025-11-25 16:30:49.341 254096 DEBUG oslo_concurrency.processutils [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:30:49 compute-0 nova_compute[254092]: 2025-11-25 16:30:49.349 254096 DEBUG nova.compute.provider_tree [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:30:49 compute-0 nova_compute[254092]: 2025-11-25 16:30:49.362 254096 DEBUG nova.scheduler.client.report [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:30:49 compute-0 nova_compute[254092]: 2025-11-25 16:30:49.412 254096 DEBUG oslo_concurrency.lockutils [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:49 compute-0 nova_compute[254092]: 2025-11-25 16:30:49.546 254096 INFO nova.scheduler.client.report [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Deleted allocations for instance 090ac2d7-979e-4706-8a01-5e94ab72282d
Nov 25 16:30:49 compute-0 nova_compute[254092]: 2025-11-25 16:30:49.569 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088234.5651274, dc3c86a9-91cf-42fb-b11c-7de3305d8388 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:30:49 compute-0 nova_compute[254092]: 2025-11-25 16:30:49.569 254096 INFO nova.compute.manager [-] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] VM Stopped (Lifecycle Event)
Nov 25 16:30:49 compute-0 nova_compute[254092]: 2025-11-25 16:30:49.607 254096 DEBUG nova.compute.manager [None req-8758d0a6-8765-42e6-826e-86e6630d2a22 - - - - - -] [instance: dc3c86a9-91cf-42fb-b11c-7de3305d8388] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:30:49 compute-0 nova_compute[254092]: 2025-11-25 16:30:49.610 254096 DEBUG nova.compute.manager [req-734dc380-6f30-4f04-a23e-f697a72f1a2b req-b99daef7-3711-499c-ab3e-011910d7f951 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Received event network-vif-plugged-7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:30:49 compute-0 nova_compute[254092]: 2025-11-25 16:30:49.611 254096 DEBUG oslo_concurrency.lockutils [req-734dc380-6f30-4f04-a23e-f697a72f1a2b req-b99daef7-3711-499c-ab3e-011910d7f951 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "090ac2d7-979e-4706-8a01-5e94ab72282d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:49 compute-0 nova_compute[254092]: 2025-11-25 16:30:49.611 254096 DEBUG oslo_concurrency.lockutils [req-734dc380-6f30-4f04-a23e-f697a72f1a2b req-b99daef7-3711-499c-ab3e-011910d7f951 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "090ac2d7-979e-4706-8a01-5e94ab72282d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:49 compute-0 nova_compute[254092]: 2025-11-25 16:30:49.611 254096 DEBUG oslo_concurrency.lockutils [req-734dc380-6f30-4f04-a23e-f697a72f1a2b req-b99daef7-3711-499c-ab3e-011910d7f951 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "090ac2d7-979e-4706-8a01-5e94ab72282d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:49 compute-0 nova_compute[254092]: 2025-11-25 16:30:49.611 254096 DEBUG nova.compute.manager [req-734dc380-6f30-4f04-a23e-f697a72f1a2b req-b99daef7-3711-499c-ab3e-011910d7f951 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] No waiting events found dispatching network-vif-plugged-7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:30:49 compute-0 nova_compute[254092]: 2025-11-25 16:30:49.612 254096 WARNING nova.compute.manager [req-734dc380-6f30-4f04-a23e-f697a72f1a2b req-b99daef7-3711-499c-ab3e-011910d7f951 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Received unexpected event network-vif-plugged-7c4d5f4d-3676-46c0-a4e3-77e2f4eec38c for instance with vm_state deleted and task_state None.
Nov 25 16:30:49 compute-0 nova_compute[254092]: 2025-11-25 16:30:49.636 254096 DEBUG oslo_concurrency.lockutils [None req-b257cbf7-5c3d-4f9a-8f8a-22b92a1127be a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "090ac2d7-979e-4706-8a01-5e94ab72282d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:49 compute-0 nova_compute[254092]: 2025-11-25 16:30:49.803 254096 DEBUG nova.network.neutron [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Successfully updated port: 19d5425c-f0c6-4c68-b8a6-cb1c6357d249 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:30:49 compute-0 nova_compute[254092]: 2025-11-25 16:30:49.837 254096 DEBUG oslo_concurrency.lockutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:30:49 compute-0 nova_compute[254092]: 2025-11-25 16:30:49.837 254096 DEBUG oslo_concurrency.lockutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquired lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:30:49 compute-0 nova_compute[254092]: 2025-11-25 16:30:49.837 254096 DEBUG nova.network.neutron [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:30:50 compute-0 nova_compute[254092]: 2025-11-25 16:30:50.043 254096 DEBUG nova.network.neutron [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:30:50 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2577763299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:30:50 compute-0 nova_compute[254092]: 2025-11-25 16:30:50.233 254096 DEBUG oslo_concurrency.lockutils [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "3375e096-321c-459b-8b6a-e085bb62872f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:50 compute-0 nova_compute[254092]: 2025-11-25 16:30:50.233 254096 DEBUG oslo_concurrency.lockutils [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:50 compute-0 nova_compute[254092]: 2025-11-25 16:30:50.233 254096 DEBUG oslo_concurrency.lockutils [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "3375e096-321c-459b-8b6a-e085bb62872f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:50 compute-0 nova_compute[254092]: 2025-11-25 16:30:50.233 254096 DEBUG oslo_concurrency.lockutils [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:50 compute-0 nova_compute[254092]: 2025-11-25 16:30:50.233 254096 DEBUG oslo_concurrency.lockutils [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:50 compute-0 nova_compute[254092]: 2025-11-25 16:30:50.234 254096 INFO nova.compute.manager [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Terminating instance
Nov 25 16:30:50 compute-0 nova_compute[254092]: 2025-11-25 16:30:50.235 254096 DEBUG nova.compute.manager [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:30:50 compute-0 kernel: tapd6146886-91 (unregistering): left promiscuous mode
Nov 25 16:30:50 compute-0 NetworkManager[48891]: <info>  [1764088250.4223] device (tapd6146886-91): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:30:50 compute-0 ovn_controller[153477]: 2025-11-25T16:30:50Z|00149|binding|INFO|Releasing lport d6146886-91a1-4d5f-9234-e1d0154b4230 from this chassis (sb_readonly=0)
Nov 25 16:30:50 compute-0 ovn_controller[153477]: 2025-11-25T16:30:50Z|00150|binding|INFO|Setting lport d6146886-91a1-4d5f-9234-e1d0154b4230 down in Southbound
Nov 25 16:30:50 compute-0 nova_compute[254092]: 2025-11-25 16:30:50.429 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:50 compute-0 ovn_controller[153477]: 2025-11-25T16:30:50Z|00151|binding|INFO|Removing iface tapd6146886-91 ovn-installed in OVS
Nov 25 16:30:50 compute-0 nova_compute[254092]: 2025-11-25 16:30:50.434 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:50 compute-0 nova_compute[254092]: 2025-11-25 16:30:50.450 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:50.465 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:a2:8e 10.100.0.12'], port_security=['fa:16:3e:dd:a2:8e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '3375e096-321c-459b-8b6a-e085bb62872f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a4c9877f37b34e7fba5f3cc9642d1a48', 'neutron:revision_number': '8', 'neutron:security_group_ids': '2633f0b7-0e94-4653-a67a-e645ec4a69fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06f31098-96f7-4b9c-ab6f-85e953b1eed6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=d6146886-91a1-4d5f-9234-e1d0154b4230) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:30:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:50.466 163338 INFO neutron.agent.ovn.metadata.agent [-] Port d6146886-91a1-4d5f-9234-e1d0154b4230 in datapath 3403825e-13ff-43e0-80c4-b59cf23ed30b unbound from our chassis
Nov 25 16:30:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:50.468 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3403825e-13ff-43e0-80c4-b59cf23ed30b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:30:50 compute-0 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d00000012.scope: Deactivated successfully.
Nov 25 16:30:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:50.470 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[79d452be-ffc6-465f-8aed-a171a4c03270]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:50.471 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b namespace which is not needed anymore
Nov 25 16:30:50 compute-0 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d00000012.scope: Consumed 12.483s CPU time.
Nov 25 16:30:50 compute-0 systemd-machined[216343]: Machine qemu-30-instance-00000012 terminated.
Nov 25 16:30:50 compute-0 nova_compute[254092]: 2025-11-25 16:30:50.711 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:50 compute-0 nova_compute[254092]: 2025-11-25 16:30:50.719 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:50 compute-0 nova_compute[254092]: 2025-11-25 16:30:50.729 254096 INFO nova.virt.libvirt.driver [-] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Instance destroyed successfully.
Nov 25 16:30:50 compute-0 nova_compute[254092]: 2025-11-25 16:30:50.729 254096 DEBUG nova.objects.instance [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lazy-loading 'resources' on Instance uuid 3375e096-321c-459b-8b6a-e085bb62872f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:30:50 compute-0 nova_compute[254092]: 2025-11-25 16:30:50.783 254096 DEBUG nova.virt.libvirt.vif [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:28:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1705426121',display_name='tempest-ServersAdminTestJSON-server-1705426121',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1705426121',id=18,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:30:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a4c9877f37b34e7fba5f3cc9642d1a48',ramdisk_id='',reservation_id='r-k4yvqw87',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='2',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-2045227182',owner_user_name='tempest-ServersAdminTestJSON-2045227182-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:30:39Z,user_data=None,user_id='a8e217b742fe4a57a4ac4ffc776670fe',uuid=3375e096-321c-459b-8b6a-e085bb62872f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:30:50 compute-0 nova_compute[254092]: 2025-11-25 16:30:50.785 254096 DEBUG nova.network.os_vif_util [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converting VIF {"id": "d6146886-91a1-4d5f-9234-e1d0154b4230", "address": "fa:16:3e:dd:a2:8e", "network": {"id": "3403825e-13ff-43e0-80c4-b59cf23ed30b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-799367246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4c9877f37b34e7fba5f3cc9642d1a48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6146886-91", "ovs_interfaceid": "d6146886-91a1-4d5f-9234-e1d0154b4230", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:30:50 compute-0 nova_compute[254092]: 2025-11-25 16:30:50.786 254096 DEBUG nova.network.os_vif_util [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:dd:a2:8e,bridge_name='br-int',has_traffic_filtering=True,id=d6146886-91a1-4d5f-9234-e1d0154b4230,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6146886-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:30:50 compute-0 nova_compute[254092]: 2025-11-25 16:30:50.786 254096 DEBUG os_vif [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:dd:a2:8e,bridge_name='br-int',has_traffic_filtering=True,id=d6146886-91a1-4d5f-9234-e1d0154b4230,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6146886-91') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:30:50 compute-0 nova_compute[254092]: 2025-11-25 16:30:50.788 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:50 compute-0 nova_compute[254092]: 2025-11-25 16:30:50.789 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd6146886-91, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:50 compute-0 nova_compute[254092]: 2025-11-25 16:30:50.791 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:50 compute-0 nova_compute[254092]: 2025-11-25 16:30:50.793 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:50 compute-0 nova_compute[254092]: 2025-11-25 16:30:50.797 254096 INFO os_vif [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:dd:a2:8e,bridge_name='br-int',has_traffic_filtering=True,id=d6146886-91a1-4d5f-9234-e1d0154b4230,network=Network(3403825e-13ff-43e0-80c4-b59cf23ed30b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6146886-91')
Nov 25 16:30:50 compute-0 neutron-haproxy-ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b[282450]: [NOTICE]   (282466) : haproxy version is 2.8.14-c23fe91
Nov 25 16:30:50 compute-0 neutron-haproxy-ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b[282450]: [NOTICE]   (282466) : path to executable is /usr/sbin/haproxy
Nov 25 16:30:50 compute-0 neutron-haproxy-ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b[282450]: [WARNING]  (282466) : Exiting Master process...
Nov 25 16:30:50 compute-0 neutron-haproxy-ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b[282450]: [ALERT]    (282466) : Current worker (282492) exited with code 143 (Terminated)
Nov 25 16:30:50 compute-0 neutron-haproxy-ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b[282450]: [WARNING]  (282466) : All workers exited. Exiting... (0)
Nov 25 16:30:50 compute-0 systemd[1]: libpod-a2e222c7ca6d5366ef9e3702eaf60190c97ef394179be18eea895e77b7e982eb.scope: Deactivated successfully.
Nov 25 16:30:50 compute-0 podman[288322]: 2025-11-25 16:30:50.82133239 +0000 UTC m=+0.225656307 container died a2e222c7ca6d5366ef9e3702eaf60190c97ef394179be18eea895e77b7e982eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 16:30:50 compute-0 sudo[288341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:30:50 compute-0 sudo[288341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:30:50 compute-0 sudo[288341]: pam_unix(sudo:session): session closed for user root
Nov 25 16:30:50 compute-0 nova_compute[254092]: 2025-11-25 16:30:50.846 254096 DEBUG nova.network.neutron [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Updating instance_info_cache with network_info: [{"id": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "address": "fa:16:3e:83:61:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19d5425c-f0", "ovs_interfaceid": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:30:50 compute-0 sudo[288395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:30:50 compute-0 sudo[288395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:30:50 compute-0 sudo[288395]: pam_unix(sudo:session): session closed for user root
Nov 25 16:30:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1298: 321 pgs: 321 active+clean; 227 MiB data, 462 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.8 MiB/s wr, 267 op/s
Nov 25 16:30:50 compute-0 nova_compute[254092]: 2025-11-25 16:30:50.943 254096 DEBUG nova.compute.manager [req-809afd94-1e7b-41b5-8d7f-cda40804ed0c req-fc2fc36f-ad61-4aae-8719-4daa0406eae1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Received event network-vif-unplugged-d6146886-91a1-4d5f-9234-e1d0154b4230 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:30:50 compute-0 nova_compute[254092]: 2025-11-25 16:30:50.943 254096 DEBUG oslo_concurrency.lockutils [req-809afd94-1e7b-41b5-8d7f-cda40804ed0c req-fc2fc36f-ad61-4aae-8719-4daa0406eae1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "3375e096-321c-459b-8b6a-e085bb62872f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:50 compute-0 nova_compute[254092]: 2025-11-25 16:30:50.943 254096 DEBUG oslo_concurrency.lockutils [req-809afd94-1e7b-41b5-8d7f-cda40804ed0c req-fc2fc36f-ad61-4aae-8719-4daa0406eae1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:50 compute-0 nova_compute[254092]: 2025-11-25 16:30:50.943 254096 DEBUG oslo_concurrency.lockutils [req-809afd94-1e7b-41b5-8d7f-cda40804ed0c req-fc2fc36f-ad61-4aae-8719-4daa0406eae1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:50 compute-0 nova_compute[254092]: 2025-11-25 16:30:50.944 254096 DEBUG nova.compute.manager [req-809afd94-1e7b-41b5-8d7f-cda40804ed0c req-fc2fc36f-ad61-4aae-8719-4daa0406eae1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] No waiting events found dispatching network-vif-unplugged-d6146886-91a1-4d5f-9234-e1d0154b4230 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:30:50 compute-0 nova_compute[254092]: 2025-11-25 16:30:50.944 254096 DEBUG nova.compute.manager [req-809afd94-1e7b-41b5-8d7f-cda40804ed0c req-fc2fc36f-ad61-4aae-8719-4daa0406eae1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Received event network-vif-unplugged-d6146886-91a1-4d5f-9234-e1d0154b4230 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:30:50 compute-0 sudo[288421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:30:50 compute-0 sudo[288421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:30:50 compute-0 sudo[288421]: pam_unix(sudo:session): session closed for user root
Nov 25 16:30:51 compute-0 nova_compute[254092]: 2025-11-25 16:30:51.007 254096 DEBUG oslo_concurrency.lockutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Releasing lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:30:51 compute-0 nova_compute[254092]: 2025-11-25 16:30:51.008 254096 DEBUG nova.compute.manager [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Instance network_info: |[{"id": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "address": "fa:16:3e:83:61:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19d5425c-f0", "ovs_interfaceid": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:30:51 compute-0 nova_compute[254092]: 2025-11-25 16:30:51.010 254096 DEBUG nova.virt.libvirt.driver [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Start _get_guest_xml network_info=[{"id": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "address": "fa:16:3e:83:61:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19d5425c-f0", "ovs_interfaceid": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:30:51 compute-0 nova_compute[254092]: 2025-11-25 16:30:51.015 254096 WARNING nova.virt.libvirt.driver [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:30:51 compute-0 nova_compute[254092]: 2025-11-25 16:30:51.020 254096 DEBUG nova.virt.libvirt.host [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:30:51 compute-0 nova_compute[254092]: 2025-11-25 16:30:51.021 254096 DEBUG nova.virt.libvirt.host [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:30:51 compute-0 nova_compute[254092]: 2025-11-25 16:30:51.024 254096 DEBUG nova.virt.libvirt.host [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:30:51 compute-0 nova_compute[254092]: 2025-11-25 16:30:51.024 254096 DEBUG nova.virt.libvirt.host [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:30:51 compute-0 nova_compute[254092]: 2025-11-25 16:30:51.024 254096 DEBUG nova.virt.libvirt.driver [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:30:51 compute-0 nova_compute[254092]: 2025-11-25 16:30:51.025 254096 DEBUG nova.virt.hardware [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:30:51 compute-0 nova_compute[254092]: 2025-11-25 16:30:51.025 254096 DEBUG nova.virt.hardware [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:30:51 compute-0 nova_compute[254092]: 2025-11-25 16:30:51.025 254096 DEBUG nova.virt.hardware [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:30:51 compute-0 nova_compute[254092]: 2025-11-25 16:30:51.025 254096 DEBUG nova.virt.hardware [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:30:51 compute-0 nova_compute[254092]: 2025-11-25 16:30:51.026 254096 DEBUG nova.virt.hardware [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:30:51 compute-0 nova_compute[254092]: 2025-11-25 16:30:51.026 254096 DEBUG nova.virt.hardware [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:30:51 compute-0 nova_compute[254092]: 2025-11-25 16:30:51.026 254096 DEBUG nova.virt.hardware [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:30:51 compute-0 nova_compute[254092]: 2025-11-25 16:30:51.026 254096 DEBUG nova.virt.hardware [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:30:51 compute-0 nova_compute[254092]: 2025-11-25 16:30:51.026 254096 DEBUG nova.virt.hardware [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:30:51 compute-0 nova_compute[254092]: 2025-11-25 16:30:51.026 254096 DEBUG nova.virt.hardware [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:30:51 compute-0 nova_compute[254092]: 2025-11-25 16:30:51.027 254096 DEBUG nova.virt.hardware [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:30:51 compute-0 nova_compute[254092]: 2025-11-25 16:30:51.029 254096 DEBUG oslo_concurrency.processutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:30:51 compute-0 sudo[288446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 16:30:51 compute-0 sudo[288446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:30:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:30:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:30:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:30:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:30:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016913856466106662 of space, bias 1.0, pg target 0.5074156939831999 quantized to 32 (current 32)
Nov 25 16:30:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:30:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:30:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:30:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:30:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:30:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 25 16:30:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:30:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 16:30:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:30:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:30:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:30:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 16:30:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:30:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 16:30:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:30:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:30:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:30:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 16:30:51 compute-0 ceph-mon[74985]: pgmap v1298: 321 pgs: 321 active+clean; 227 MiB data, 462 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.8 MiB/s wr, 267 op/s
Nov 25 16:30:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a2e222c7ca6d5366ef9e3702eaf60190c97ef394179be18eea895e77b7e982eb-userdata-shm.mount: Deactivated successfully.
Nov 25 16:30:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-e493ef27e1af43f842f739b06d394ae37970231d4264741926f2efe0d296ab9a-merged.mount: Deactivated successfully.
Nov 25 16:30:51 compute-0 podman[288322]: 2025-11-25 16:30:51.240525351 +0000 UTC m=+0.644849268 container cleanup a2e222c7ca6d5366ef9e3702eaf60190c97ef394179be18eea895e77b7e982eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 16:30:51 compute-0 systemd[1]: libpod-conmon-a2e222c7ca6d5366ef9e3702eaf60190c97ef394179be18eea895e77b7e982eb.scope: Deactivated successfully.
Nov 25 16:30:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:30:51 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3107126535' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:30:51 compute-0 podman[288507]: 2025-11-25 16:30:51.532768468 +0000 UTC m=+0.260908256 container remove a2e222c7ca6d5366ef9e3702eaf60190c97ef394179be18eea895e77b7e982eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:30:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:51.543 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b3dbb396-ae6e-40ba-9fe2-4a18f7c1e697]: (4, ('Tue Nov 25 04:30:50 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b (a2e222c7ca6d5366ef9e3702eaf60190c97ef394179be18eea895e77b7e982eb)\na2e222c7ca6d5366ef9e3702eaf60190c97ef394179be18eea895e77b7e982eb\nTue Nov 25 04:30:51 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b (a2e222c7ca6d5366ef9e3702eaf60190c97ef394179be18eea895e77b7e982eb)\na2e222c7ca6d5366ef9e3702eaf60190c97ef394179be18eea895e77b7e982eb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:51 compute-0 nova_compute[254092]: 2025-11-25 16:30:51.546 254096 DEBUG oslo_concurrency.processutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:30:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:51.546 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c2e4c21e-4f53-4913-9162-46c689c2abe8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:51.548 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3403825e-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:51 compute-0 kernel: tap3403825e-10: left promiscuous mode
Nov 25 16:30:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:51.572 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7cfe7f5c-794b-4c92-9a79-302b02536c38]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:51 compute-0 nova_compute[254092]: 2025-11-25 16:30:51.586 254096 DEBUG nova.storage.rbd_utils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image 07003872-27e7-4fd9-80cf-a34257d5aa97_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:30:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:51.589 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e8ec5762-9138-4ab3-9097-da9aa277a275]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:51.591 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5861b4c6-e5b7-4786-b7ef-04cc60867ccf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:51 compute-0 nova_compute[254092]: 2025-11-25 16:30:51.591 254096 DEBUG oslo_concurrency.processutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:30:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:51.615 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[50190f01-cae4-4175-83d3-8c5004ccd385]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458721, 'reachable_time': 35828, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288561, 'error': None, 'target': 'ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:51 compute-0 nova_compute[254092]: 2025-11-25 16:30:51.619 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:51.635 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3403825e-13ff-43e0-80c4-b59cf23ed30b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:30:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:51.636 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[8fcf13c2-3338-4447-86ed-463bbb0032e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:51 compute-0 systemd[1]: run-netns-ovnmeta\x2d3403825e\x2d13ff\x2d43e0\x2d80c4\x2db59cf23ed30b.mount: Deactivated successfully.
Nov 25 16:30:51 compute-0 sudo[288446]: pam_unix(sudo:session): session closed for user root
Nov 25 16:30:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:30:51 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:30:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 16:30:51 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:30:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 16:30:51 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:30:51 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 54df5eb0-2a86-4b99-83d4-4437222dd3b6 does not exist
Nov 25 16:30:51 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 7b826c35-6482-4d41-b625-fae84abfa332 does not exist
Nov 25 16:30:51 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 5d824418-87d2-4b86-9304-6c43d6d56495 does not exist
Nov 25 16:30:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 16:30:51 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:30:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 16:30:51 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:30:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:30:51 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:30:51 compute-0 sudo[288582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:30:51 compute-0 sudo[288582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:30:51 compute-0 sudo[288582]: pam_unix(sudo:session): session closed for user root
Nov 25 16:30:51 compute-0 sudo[288607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:30:51 compute-0 sudo[288607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:30:51 compute-0 sudo[288607]: pam_unix(sudo:session): session closed for user root
Nov 25 16:30:51 compute-0 sudo[288632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:30:51 compute-0 sudo[288632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:30:51 compute-0 sudo[288632]: pam_unix(sudo:session): session closed for user root
Nov 25 16:30:52 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:30:52 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1195468863' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:30:52 compute-0 sudo[288657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 16:30:52 compute-0 sudo[288657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:30:52 compute-0 nova_compute[254092]: 2025-11-25 16:30:52.062 254096 DEBUG oslo_concurrency.processutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:30:52 compute-0 nova_compute[254092]: 2025-11-25 16:30:52.065 254096 DEBUG nova.virt.libvirt.vif [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:30:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1158248018',display_name='tempest-AttachInterfacesTestJSON-server-1158248018',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1158248018',id=27,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJmM+hyi3seBAE3K46sBI2YUlQc/OgmOr+XAosDXSMWbsQMM7MSa9VCyL6NIiHdrd/Mi+L5291JhLvNY+nEX94KPtdKY+CwKghfhbYtUXWreEE2o3A99BKI+adAxYStDFw==',key_name='tempest-keypair-134592934',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-83v8y104',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:30:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=07003872-27e7-4fd9-80cf-a34257d5aa97,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "address": "fa:16:3e:83:61:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19d5425c-f0", "ovs_interfaceid": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:30:52 compute-0 nova_compute[254092]: 2025-11-25 16:30:52.066 254096 DEBUG nova.network.os_vif_util [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "address": "fa:16:3e:83:61:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19d5425c-f0", "ovs_interfaceid": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:30:52 compute-0 nova_compute[254092]: 2025-11-25 16:30:52.067 254096 DEBUG nova.network.os_vif_util [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:83:61:d4,bridge_name='br-int',has_traffic_filtering=True,id=19d5425c-f0c6-4c68-b8a6-cb1c6357d249,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19d5425c-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:30:52 compute-0 nova_compute[254092]: 2025-11-25 16:30:52.070 254096 DEBUG nova.objects.instance [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'pci_devices' on Instance uuid 07003872-27e7-4fd9-80cf-a34257d5aa97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:30:52 compute-0 nova_compute[254092]: 2025-11-25 16:30:52.088 254096 DEBUG nova.virt.libvirt.driver [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:30:52 compute-0 nova_compute[254092]:   <uuid>07003872-27e7-4fd9-80cf-a34257d5aa97</uuid>
Nov 25 16:30:52 compute-0 nova_compute[254092]:   <name>instance-0000001b</name>
Nov 25 16:30:52 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:30:52 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:30:52 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:30:52 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:       <nova:name>tempest-AttachInterfacesTestJSON-server-1158248018</nova:name>
Nov 25 16:30:52 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:30:51</nova:creationTime>
Nov 25 16:30:52 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:30:52 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:30:52 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:30:52 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:30:52 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:30:52 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:30:52 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:30:52 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:30:52 compute-0 nova_compute[254092]:         <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 16:30:52 compute-0 nova_compute[254092]:         <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 16:30:52 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:30:52 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:30:52 compute-0 nova_compute[254092]:         <nova:port uuid="19d5425c-f0c6-4c68-b8a6-cb1c6357d249">
Nov 25 16:30:52 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:30:52 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:30:52 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:30:52 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:30:52 compute-0 nova_compute[254092]:     <system>
Nov 25 16:30:52 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:30:52 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:30:52 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:30:52 compute-0 nova_compute[254092]:       <entry name="serial">07003872-27e7-4fd9-80cf-a34257d5aa97</entry>
Nov 25 16:30:52 compute-0 nova_compute[254092]:       <entry name="uuid">07003872-27e7-4fd9-80cf-a34257d5aa97</entry>
Nov 25 16:30:52 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     </system>
Nov 25 16:30:52 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:30:52 compute-0 nova_compute[254092]:   <os>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:   </os>
Nov 25 16:30:52 compute-0 nova_compute[254092]:   <features>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:   </features>
Nov 25 16:30:52 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:30:52 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:30:52 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:30:52 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:30:52 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:30:52 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/07003872-27e7-4fd9-80cf-a34257d5aa97_disk">
Nov 25 16:30:52 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:       </source>
Nov 25 16:30:52 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:30:52 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:30:52 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:30:52 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/07003872-27e7-4fd9-80cf-a34257d5aa97_disk.config">
Nov 25 16:30:52 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:       </source>
Nov 25 16:30:52 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:30:52 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:30:52 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:30:52 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:83:61:d4"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:       <target dev="tap19d5425c-f0"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:30:52 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/07003872-27e7-4fd9-80cf-a34257d5aa97/console.log" append="off"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     <video>
Nov 25 16:30:52 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     </video>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:30:52 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:30:52 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:30:52 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:30:52 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:30:52 compute-0 nova_compute[254092]: </domain>
Nov 25 16:30:52 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:30:52 compute-0 nova_compute[254092]: 2025-11-25 16:30:52.093 254096 DEBUG nova.compute.manager [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Preparing to wait for external event network-vif-plugged-19d5425c-f0c6-4c68-b8a6-cb1c6357d249 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:30:52 compute-0 nova_compute[254092]: 2025-11-25 16:30:52.093 254096 DEBUG oslo_concurrency.lockutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:52 compute-0 nova_compute[254092]: 2025-11-25 16:30:52.094 254096 DEBUG oslo_concurrency.lockutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:52 compute-0 nova_compute[254092]: 2025-11-25 16:30:52.094 254096 DEBUG oslo_concurrency.lockutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:52 compute-0 nova_compute[254092]: 2025-11-25 16:30:52.097 254096 DEBUG nova.virt.libvirt.vif [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:30:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1158248018',display_name='tempest-AttachInterfacesTestJSON-server-1158248018',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1158248018',id=27,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJmM+hyi3seBAE3K46sBI2YUlQc/OgmOr+XAosDXSMWbsQMM7MSa9VCyL6NIiHdrd/Mi+L5291JhLvNY+nEX94KPtdKY+CwKghfhbYtUXWreEE2o3A99BKI+adAxYStDFw==',key_name='tempest-keypair-134592934',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-83v8y104',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:30:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=07003872-27e7-4fd9-80cf-a34257d5aa97,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "address": "fa:16:3e:83:61:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19d5425c-f0", "ovs_interfaceid": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:30:52 compute-0 nova_compute[254092]: 2025-11-25 16:30:52.097 254096 DEBUG nova.network.os_vif_util [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "address": "fa:16:3e:83:61:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19d5425c-f0", "ovs_interfaceid": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:30:52 compute-0 nova_compute[254092]: 2025-11-25 16:30:52.098 254096 DEBUG nova.network.os_vif_util [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:83:61:d4,bridge_name='br-int',has_traffic_filtering=True,id=19d5425c-f0c6-4c68-b8a6-cb1c6357d249,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19d5425c-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:30:52 compute-0 nova_compute[254092]: 2025-11-25 16:30:52.099 254096 DEBUG os_vif [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:83:61:d4,bridge_name='br-int',has_traffic_filtering=True,id=19d5425c-f0c6-4c68-b8a6-cb1c6357d249,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19d5425c-f0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:30:52 compute-0 nova_compute[254092]: 2025-11-25 16:30:52.101 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:52 compute-0 nova_compute[254092]: 2025-11-25 16:30:52.102 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:52 compute-0 nova_compute[254092]: 2025-11-25 16:30:52.103 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:30:52 compute-0 nova_compute[254092]: 2025-11-25 16:30:52.107 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:52 compute-0 nova_compute[254092]: 2025-11-25 16:30:52.107 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap19d5425c-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:52 compute-0 nova_compute[254092]: 2025-11-25 16:30:52.108 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap19d5425c-f0, col_values=(('external_ids', {'iface-id': '19d5425c-f0c6-4c68-b8a6-cb1c6357d249', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:83:61:d4', 'vm-uuid': '07003872-27e7-4fd9-80cf-a34257d5aa97'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:52 compute-0 nova_compute[254092]: 2025-11-25 16:30:52.109 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:52 compute-0 NetworkManager[48891]: <info>  [1764088252.1107] manager: (tap19d5425c-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/76)
Nov 25 16:30:52 compute-0 nova_compute[254092]: 2025-11-25 16:30:52.114 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:30:52 compute-0 nova_compute[254092]: 2025-11-25 16:30:52.118 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:52 compute-0 nova_compute[254092]: 2025-11-25 16:30:52.119 254096 INFO os_vif [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:83:61:d4,bridge_name='br-int',has_traffic_filtering=True,id=19d5425c-f0c6-4c68-b8a6-cb1c6357d249,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19d5425c-f0')
Nov 25 16:30:52 compute-0 nova_compute[254092]: 2025-11-25 16:30:52.128 254096 DEBUG nova.compute.manager [req-ab5d98ec-4706-461b-a7c4-6046f74d9c79 req-259117bb-5ba8-46d8-ae02-6295783e5686 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received event network-changed-19d5425c-f0c6-4c68-b8a6-cb1c6357d249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:30:52 compute-0 nova_compute[254092]: 2025-11-25 16:30:52.128 254096 DEBUG nova.compute.manager [req-ab5d98ec-4706-461b-a7c4-6046f74d9c79 req-259117bb-5ba8-46d8-ae02-6295783e5686 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Refreshing instance network info cache due to event network-changed-19d5425c-f0c6-4c68-b8a6-cb1c6357d249. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:30:52 compute-0 nova_compute[254092]: 2025-11-25 16:30:52.129 254096 DEBUG oslo_concurrency.lockutils [req-ab5d98ec-4706-461b-a7c4-6046f74d9c79 req-259117bb-5ba8-46d8-ae02-6295783e5686 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:30:52 compute-0 nova_compute[254092]: 2025-11-25 16:30:52.129 254096 DEBUG oslo_concurrency.lockutils [req-ab5d98ec-4706-461b-a7c4-6046f74d9c79 req-259117bb-5ba8-46d8-ae02-6295783e5686 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:30:52 compute-0 nova_compute[254092]: 2025-11-25 16:30:52.129 254096 DEBUG nova.network.neutron [req-ab5d98ec-4706-461b-a7c4-6046f74d9c79 req-259117bb-5ba8-46d8-ae02-6295783e5686 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Refreshing network info cache for port 19d5425c-f0c6-4c68-b8a6-cb1c6357d249 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:30:52 compute-0 nova_compute[254092]: 2025-11-25 16:30:52.216 254096 DEBUG nova.virt.libvirt.driver [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:30:52 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3107126535' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:30:52 compute-0 nova_compute[254092]: 2025-11-25 16:30:52.216 254096 DEBUG nova.virt.libvirt.driver [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:30:52 compute-0 nova_compute[254092]: 2025-11-25 16:30:52.216 254096 DEBUG nova.virt.libvirt.driver [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No VIF found with MAC fa:16:3e:83:61:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:30:52 compute-0 nova_compute[254092]: 2025-11-25 16:30:52.217 254096 INFO nova.virt.libvirt.driver [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Using config drive
Nov 25 16:30:52 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:30:52 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:30:52 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:30:52 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:30:52 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:30:52 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:30:52 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1195468863' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:30:52 compute-0 nova_compute[254092]: 2025-11-25 16:30:52.260 254096 DEBUG nova.storage.rbd_utils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image 07003872-27e7-4fd9-80cf-a34257d5aa97_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:30:52 compute-0 nova_compute[254092]: 2025-11-25 16:30:52.355 254096 INFO nova.virt.libvirt.driver [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Deleting instance files /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f_del
Nov 25 16:30:52 compute-0 nova_compute[254092]: 2025-11-25 16:30:52.357 254096 INFO nova.virt.libvirt.driver [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Deletion of /var/lib/nova/instances/3375e096-321c-459b-8b6a-e085bb62872f_del complete
Nov 25 16:30:52 compute-0 nova_compute[254092]: 2025-11-25 16:30:52.414 254096 INFO nova.compute.manager [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Took 2.18 seconds to destroy the instance on the hypervisor.
Nov 25 16:30:52 compute-0 nova_compute[254092]: 2025-11-25 16:30:52.415 254096 DEBUG oslo.service.loopingcall [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:30:52 compute-0 nova_compute[254092]: 2025-11-25 16:30:52.415 254096 DEBUG nova.compute.manager [-] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:30:52 compute-0 nova_compute[254092]: 2025-11-25 16:30:52.415 254096 DEBUG nova.network.neutron [-] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:30:52 compute-0 nova_compute[254092]: 2025-11-25 16:30:52.478 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:52 compute-0 podman[288747]: 2025-11-25 16:30:52.487532284 +0000 UTC m=+0.043171995 container create 0f03ffad48c547560b8eda14224063ee1f81d967903f6456b01e93450d9341be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_dijkstra, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:30:52 compute-0 systemd[1]: Started libpod-conmon-0f03ffad48c547560b8eda14224063ee1f81d967903f6456b01e93450d9341be.scope.
Nov 25 16:30:52 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:30:52 compute-0 podman[288747]: 2025-11-25 16:30:52.468565548 +0000 UTC m=+0.024205269 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:30:52 compute-0 podman[288747]: 2025-11-25 16:30:52.678211139 +0000 UTC m=+0.233850860 container init 0f03ffad48c547560b8eda14224063ee1f81d967903f6456b01e93450d9341be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:30:52 compute-0 podman[288747]: 2025-11-25 16:30:52.686629218 +0000 UTC m=+0.242268929 container start 0f03ffad48c547560b8eda14224063ee1f81d967903f6456b01e93450d9341be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 16:30:52 compute-0 dazzling_dijkstra[288764]: 167 167
Nov 25 16:30:52 compute-0 systemd[1]: libpod-0f03ffad48c547560b8eda14224063ee1f81d967903f6456b01e93450d9341be.scope: Deactivated successfully.
Nov 25 16:30:52 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:30:52 compute-0 podman[288747]: 2025-11-25 16:30:52.918855114 +0000 UTC m=+0.474494815 container attach 0f03ffad48c547560b8eda14224063ee1f81d967903f6456b01e93450d9341be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_dijkstra, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 16:30:52 compute-0 podman[288747]: 2025-11-25 16:30:52.920669414 +0000 UTC m=+0.476309125 container died 0f03ffad48c547560b8eda14224063ee1f81d967903f6456b01e93450d9341be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_dijkstra, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:30:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1299: 321 pgs: 321 active+clean; 235 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 6.0 MiB/s wr, 226 op/s
Nov 25 16:30:53 compute-0 nova_compute[254092]: 2025-11-25 16:30:53.014 254096 INFO nova.virt.libvirt.driver [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Creating config drive at /var/lib/nova/instances/07003872-27e7-4fd9-80cf-a34257d5aa97/disk.config
Nov 25 16:30:53 compute-0 nova_compute[254092]: 2025-11-25 16:30:53.025 254096 DEBUG oslo_concurrency.processutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/07003872-27e7-4fd9-80cf-a34257d5aa97/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpytiwmf_q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:30:53 compute-0 nova_compute[254092]: 2025-11-25 16:30:53.111 254096 DEBUG nova.compute.manager [req-5a4f6db7-cca6-4099-8cd4-c99adf32bc6e req-30578089-9a9b-4546-bc64-889286292e06 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Received event network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:30:53 compute-0 nova_compute[254092]: 2025-11-25 16:30:53.112 254096 DEBUG oslo_concurrency.lockutils [req-5a4f6db7-cca6-4099-8cd4-c99adf32bc6e req-30578089-9a9b-4546-bc64-889286292e06 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "3375e096-321c-459b-8b6a-e085bb62872f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:53 compute-0 nova_compute[254092]: 2025-11-25 16:30:53.113 254096 DEBUG oslo_concurrency.lockutils [req-5a4f6db7-cca6-4099-8cd4-c99adf32bc6e req-30578089-9a9b-4546-bc64-889286292e06 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:53 compute-0 nova_compute[254092]: 2025-11-25 16:30:53.113 254096 DEBUG oslo_concurrency.lockutils [req-5a4f6db7-cca6-4099-8cd4-c99adf32bc6e req-30578089-9a9b-4546-bc64-889286292e06 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:53 compute-0 nova_compute[254092]: 2025-11-25 16:30:53.113 254096 DEBUG nova.compute.manager [req-5a4f6db7-cca6-4099-8cd4-c99adf32bc6e req-30578089-9a9b-4546-bc64-889286292e06 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] No waiting events found dispatching network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:30:53 compute-0 nova_compute[254092]: 2025-11-25 16:30:53.114 254096 WARNING nova.compute.manager [req-5a4f6db7-cca6-4099-8cd4-c99adf32bc6e req-30578089-9a9b-4546-bc64-889286292e06 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Received unexpected event network-vif-plugged-d6146886-91a1-4d5f-9234-e1d0154b4230 for instance with vm_state active and task_state deleting.
Nov 25 16:30:53 compute-0 nova_compute[254092]: 2025-11-25 16:30:53.163 254096 DEBUG oslo_concurrency.processutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/07003872-27e7-4fd9-80cf-a34257d5aa97/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpytiwmf_q" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:30:53 compute-0 nova_compute[254092]: 2025-11-25 16:30:53.190 254096 DEBUG nova.storage.rbd_utils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image 07003872-27e7-4fd9-80cf-a34257d5aa97_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:30:53 compute-0 nova_compute[254092]: 2025-11-25 16:30:53.195 254096 DEBUG oslo_concurrency.processutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/07003872-27e7-4fd9-80cf-a34257d5aa97/disk.config 07003872-27e7-4fd9-80cf-a34257d5aa97_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:30:53 compute-0 ceph-mon[74985]: pgmap v1299: 321 pgs: 321 active+clean; 235 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 6.0 MiB/s wr, 226 op/s
Nov 25 16:30:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-59ed4f6cbeead337044a1280cd64762640aed6ed14eafc6a1a92a0b35f0fb845-merged.mount: Deactivated successfully.
Nov 25 16:30:53 compute-0 podman[288747]: 2025-11-25 16:30:53.968596773 +0000 UTC m=+1.524236474 container remove 0f03ffad48c547560b8eda14224063ee1f81d967903f6456b01e93450d9341be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:30:54 compute-0 systemd[1]: libpod-conmon-0f03ffad48c547560b8eda14224063ee1f81d967903f6456b01e93450d9341be.scope: Deactivated successfully.
Nov 25 16:30:54 compute-0 nova_compute[254092]: 2025-11-25 16:30:54.116 254096 DEBUG nova.network.neutron [-] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:30:54 compute-0 podman[288830]: 2025-11-25 16:30:54.122248191 +0000 UTC m=+0.023507170 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:30:54 compute-0 nova_compute[254092]: 2025-11-25 16:30:54.252 254096 INFO nova.compute.manager [-] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Took 1.84 seconds to deallocate network for instance.
Nov 25 16:30:54 compute-0 podman[288830]: 2025-11-25 16:30:54.2950303 +0000 UTC m=+0.196289249 container create 806f8c602b1f32e14f86585cc4587fad789751a78bd217f6892c2f5a05ebfcc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_feynman, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:30:54 compute-0 nova_compute[254092]: 2025-11-25 16:30:54.343 254096 DEBUG oslo_concurrency.lockutils [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:54 compute-0 nova_compute[254092]: 2025-11-25 16:30:54.343 254096 DEBUG oslo_concurrency.lockutils [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:54 compute-0 podman[288833]: 2025-11-25 16:30:54.366468753 +0000 UTC m=+0.261152793 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:30:54 compute-0 podman[288826]: 2025-11-25 16:30:54.370300547 +0000 UTC m=+0.267028023 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251118)
Nov 25 16:30:54 compute-0 systemd[1]: Started libpod-conmon-806f8c602b1f32e14f86585cc4587fad789751a78bd217f6892c2f5a05ebfcc5.scope.
Nov 25 16:30:54 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 16:30:54 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 16:30:54 compute-0 nova_compute[254092]: 2025-11-25 16:30:54.426 254096 DEBUG oslo_concurrency.processutils [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:30:54 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:30:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78e41de426d9c5149c6d4ff903738ac8dd0b8b86e3430de0f4cc1ebc52ec50ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:30:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78e41de426d9c5149c6d4ff903738ac8dd0b8b86e3430de0f4cc1ebc52ec50ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:30:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78e41de426d9c5149c6d4ff903738ac8dd0b8b86e3430de0f4cc1ebc52ec50ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:30:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78e41de426d9c5149c6d4ff903738ac8dd0b8b86e3430de0f4cc1ebc52ec50ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:30:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78e41de426d9c5149c6d4ff903738ac8dd0b8b86e3430de0f4cc1ebc52ec50ae/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 16:30:54 compute-0 nova_compute[254092]: 2025-11-25 16:30:54.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:30:54 compute-0 nova_compute[254092]: 2025-11-25 16:30:54.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:30:54 compute-0 nova_compute[254092]: 2025-11-25 16:30:54.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:30:54 compute-0 nova_compute[254092]: 2025-11-25 16:30:54.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 16:30:54 compute-0 nova_compute[254092]: 2025-11-25 16:30:54.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:30:54 compute-0 nova_compute[254092]: 2025-11-25 16:30:54.530 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:54 compute-0 podman[288830]: 2025-11-25 16:30:54.621326794 +0000 UTC m=+0.522585763 container init 806f8c602b1f32e14f86585cc4587fad789751a78bd217f6892c2f5a05ebfcc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:30:54 compute-0 podman[288830]: 2025-11-25 16:30:54.63221593 +0000 UTC m=+0.533474879 container start 806f8c602b1f32e14f86585cc4587fad789751a78bd217f6892c2f5a05ebfcc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_feynman, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 16:30:54 compute-0 podman[288830]: 2025-11-25 16:30:54.823254435 +0000 UTC m=+0.724513415 container attach 806f8c602b1f32e14f86585cc4587fad789751a78bd217f6892c2f5a05ebfcc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_feynman, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:30:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:30:54 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2822219638' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:30:54 compute-0 nova_compute[254092]: 2025-11-25 16:30:54.880 254096 DEBUG oslo_concurrency.processutils [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:30:54 compute-0 nova_compute[254092]: 2025-11-25 16:30:54.887 254096 DEBUG nova.compute.provider_tree [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:30:54 compute-0 podman[288834]: 2025-11-25 16:30:54.892335495 +0000 UTC m=+0.782968955 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 16:30:54 compute-0 nova_compute[254092]: 2025-11-25 16:30:54.900 254096 DEBUG nova.scheduler.client.report [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:30:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1300: 321 pgs: 321 active+clean; 191 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 622 KiB/s rd, 5.9 MiB/s wr, 227 op/s
Nov 25 16:30:54 compute-0 nova_compute[254092]: 2025-11-25 16:30:54.950 254096 DEBUG oslo_concurrency.lockutils [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:54 compute-0 nova_compute[254092]: 2025-11-25 16:30:54.953 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.423s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:54 compute-0 nova_compute[254092]: 2025-11-25 16:30:54.953 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:54 compute-0 nova_compute[254092]: 2025-11-25 16:30:54.953 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 16:30:54 compute-0 nova_compute[254092]: 2025-11-25 16:30:54.954 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:30:54 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2822219638' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:30:55 compute-0 nova_compute[254092]: 2025-11-25 16:30:55.075 254096 DEBUG nova.network.neutron [req-ab5d98ec-4706-461b-a7c4-6046f74d9c79 req-259117bb-5ba8-46d8-ae02-6295783e5686 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Updated VIF entry in instance network info cache for port 19d5425c-f0c6-4c68-b8a6-cb1c6357d249. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:30:55 compute-0 nova_compute[254092]: 2025-11-25 16:30:55.076 254096 DEBUG nova.network.neutron [req-ab5d98ec-4706-461b-a7c4-6046f74d9c79 req-259117bb-5ba8-46d8-ae02-6295783e5686 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Updating instance_info_cache with network_info: [{"id": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "address": "fa:16:3e:83:61:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19d5425c-f0", "ovs_interfaceid": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:30:55 compute-0 nova_compute[254092]: 2025-11-25 16:30:55.102 254096 DEBUG oslo_concurrency.lockutils [req-ab5d98ec-4706-461b-a7c4-6046f74d9c79 req-259117bb-5ba8-46d8-ae02-6295783e5686 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:30:55 compute-0 nova_compute[254092]: 2025-11-25 16:30:55.130 254096 DEBUG oslo_concurrency.processutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/07003872-27e7-4fd9-80cf-a34257d5aa97/disk.config 07003872-27e7-4fd9-80cf-a34257d5aa97_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.935s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:30:55 compute-0 nova_compute[254092]: 2025-11-25 16:30:55.130 254096 INFO nova.virt.libvirt.driver [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Deleting local config drive /var/lib/nova/instances/07003872-27e7-4fd9-80cf-a34257d5aa97/disk.config because it was imported into RBD.
Nov 25 16:30:55 compute-0 kernel: tap19d5425c-f0: entered promiscuous mode
Nov 25 16:30:55 compute-0 NetworkManager[48891]: <info>  [1764088255.1740] manager: (tap19d5425c-f0): new Tun device (/org/freedesktop/NetworkManager/Devices/77)
Nov 25 16:30:55 compute-0 ovn_controller[153477]: 2025-11-25T16:30:55Z|00152|binding|INFO|Claiming lport 19d5425c-f0c6-4c68-b8a6-cb1c6357d249 for this chassis.
Nov 25 16:30:55 compute-0 ovn_controller[153477]: 2025-11-25T16:30:55Z|00153|binding|INFO|19d5425c-f0c6-4c68-b8a6-cb1c6357d249: Claiming fa:16:3e:83:61:d4 10.100.0.8
Nov 25 16:30:55 compute-0 nova_compute[254092]: 2025-11-25 16:30:55.179 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:55 compute-0 systemd-udevd[288952]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:30:55 compute-0 ovn_controller[153477]: 2025-11-25T16:30:55Z|00154|binding|INFO|Setting lport 19d5425c-f0c6-4c68-b8a6-cb1c6357d249 ovn-installed in OVS
Nov 25 16:30:55 compute-0 nova_compute[254092]: 2025-11-25 16:30:55.208 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:55 compute-0 nova_compute[254092]: 2025-11-25 16:30:55.211 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:55 compute-0 NetworkManager[48891]: <info>  [1764088255.2196] device (tap19d5425c-f0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:30:55 compute-0 NetworkManager[48891]: <info>  [1764088255.2207] device (tap19d5425c-f0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:30:55 compute-0 systemd-machined[216343]: New machine qemu-31-instance-0000001b.
Nov 25 16:30:55 compute-0 systemd[1]: Started Virtual Machine qemu-31-instance-0000001b.
Nov 25 16:30:55 compute-0 nova_compute[254092]: 2025-11-25 16:30:55.323 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088240.3218522, 7777dd86-925e-4f98-bd68-e38ac540d97b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:30:55 compute-0 nova_compute[254092]: 2025-11-25 16:30:55.324 254096 INFO nova.compute.manager [-] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] VM Stopped (Lifecycle Event)
Nov 25 16:30:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 16:30:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1659250051' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:30:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 16:30:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1659250051' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:30:55 compute-0 nova_compute[254092]: 2025-11-25 16:30:55.335 254096 DEBUG nova.compute.manager [req-062dc3a0-17eb-4e2f-8182-db0c41f2ed29 req-9d5ad000-f81a-488a-a01f-930bc83481bf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Received event network-vif-deleted-d6146886-91a1-4d5f-9234-e1d0154b4230 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:30:55 compute-0 ovn_controller[153477]: 2025-11-25T16:30:55Z|00155|binding|INFO|Setting lport 19d5425c-f0c6-4c68-b8a6-cb1c6357d249 up in Southbound
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.340 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:61:d4 10.100.0.8'], port_security=['fa:16:3e:83:61:d4 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '07003872-27e7-4fd9-80cf-a34257d5aa97', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36fd407edf16424e96041f57fccb9e2b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dd11e91d-04bc-4ecb-8ad4-320a6572500c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a0fe150-4b32-4abf-ad06-055b2d97facb, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=19d5425c-f0c6-4c68-b8a6-cb1c6357d249) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.341 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 19d5425c-f0c6-4c68-b8a6-cb1c6357d249 in datapath 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d bound to our chassis
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.343 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d
Nov 25 16:30:55 compute-0 nova_compute[254092]: 2025-11-25 16:30:55.343 254096 INFO nova.scheduler.client.report [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Deleted allocations for instance 3375e096-321c-459b-8b6a-e085bb62872f
Nov 25 16:30:55 compute-0 nova_compute[254092]: 2025-11-25 16:30:55.355 254096 DEBUG nova.compute.manager [None req-02984fc1-bd08-4954-8e5b-04bcaabb8c9a - - - - - -] [instance: 7777dd86-925e-4f98-bd68-e38ac540d97b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.360 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7f3d288e-d6ae-4990-9635-bbdb3ae355c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.361 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap52e7d5b9-01 in ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.363 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap52e7d5b9-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.363 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3381f99e-2595-4122-b209-f20a7f52b2c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.364 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[647601fb-abde-4944-9d41-0b5cae055edc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.376 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[fa11f420-5375-496a-9396-8b6af422ce20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.390 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[982e8a73-b2d3-4a38-817f-f6e453173591]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.426 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[caf16519-e446-4b4e-864a-2452ef32f7d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.436 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8421fd58-0429-4d87-8e7e-0e3a0e2b5e2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:55 compute-0 NetworkManager[48891]: <info>  [1764088255.4372] manager: (tap52e7d5b9-00): new Veth device (/org/freedesktop/NetworkManager/Devices/78)
Nov 25 16:30:55 compute-0 nova_compute[254092]: 2025-11-25 16:30:55.461 254096 DEBUG oslo_concurrency.lockutils [None req-b961b3ce-e0a6-4680-ad86-735e4e7c4e02 a8e217b742fe4a57a4ac4ffc776670fe a4c9877f37b34e7fba5f3cc9642d1a48 - - default default] Lock "3375e096-321c-459b-8b6a-e085bb62872f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.228s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.474 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6fd1293d-5797-49e5-963d-7012c3ab5bea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.478 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f8180e70-c3e5-450a-bdca-72f4b01e0930]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:55 compute-0 NetworkManager[48891]: <info>  [1764088255.5025] device (tap52e7d5b9-00): carrier: link connected
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.507 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[936761a7-cdf7-4d34-8122-540abe20058c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.525 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5a0e2cda-8a8d-4c59-8a96-9babf150fded]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52e7d5b9-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:97:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471309, 'reachable_time': 23086, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289011, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.542 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4491d3a6-3321-484a-a09c-146eb98ef3f7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9f:97ee'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 471309, 'tstamp': 471309}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289012, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.559 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ad5c37d9-b160-407f-8a06-56c3e4c0dcf0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52e7d5b9-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:97:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471309, 'reachable_time': 23086, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 289014, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.588 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b6b711e6-0e05-4621-a5aa-391c6566ff9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:30:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3727524151' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:30:55 compute-0 nova_compute[254092]: 2025-11-25 16:30:55.638 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.684s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.649 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[01529207-1bd8-4dc5-a8c2-ef92fc6948bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.650 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52e7d5b9-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.651 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.651 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52e7d5b9-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:55 compute-0 nova_compute[254092]: 2025-11-25 16:30:55.653 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:55 compute-0 NetworkManager[48891]: <info>  [1764088255.6550] manager: (tap52e7d5b9-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/79)
Nov 25 16:30:55 compute-0 kernel: tap52e7d5b9-00: entered promiscuous mode
Nov 25 16:30:55 compute-0 nova_compute[254092]: 2025-11-25 16:30:55.657 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.659 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap52e7d5b9-00, col_values=(('external_ids', {'iface-id': 'af5d2618-f326-4aaf-9395-f47548ac108b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:55 compute-0 nova_compute[254092]: 2025-11-25 16:30:55.660 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:55 compute-0 ovn_controller[153477]: 2025-11-25T16:30:55Z|00156|binding|INFO|Releasing lport af5d2618-f326-4aaf-9395-f47548ac108b from this chassis (sb_readonly=0)
Nov 25 16:30:55 compute-0 nova_compute[254092]: 2025-11-25 16:30:55.661 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.661 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/52e7d5b9-0570-4e5c-b3da-9dfcb924b83d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/52e7d5b9-0570-4e5c-b3da-9dfcb924b83d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.670 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0cd1b310-59e7-4bbc-a625-3c2ed257f070]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.671 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/52e7d5b9-0570-4e5c-b3da-9dfcb924b83d.pid.haproxy
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:30:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:55.672 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'env', 'PROCESS_TAG=haproxy-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/52e7d5b9-0570-4e5c-b3da-9dfcb924b83d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:30:55 compute-0 nova_compute[254092]: 2025-11-25 16:30:55.677 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:55 compute-0 agitated_feynman[288896]: --> passed data devices: 0 physical, 3 LVM
Nov 25 16:30:55 compute-0 agitated_feynman[288896]: --> relative data size: 1.0
Nov 25 16:30:55 compute-0 agitated_feynman[288896]: --> All data devices are unavailable
Nov 25 16:30:55 compute-0 nova_compute[254092]: 2025-11-25 16:30:55.712 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:30:55 compute-0 nova_compute[254092]: 2025-11-25 16:30:55.713 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:30:55 compute-0 nova_compute[254092]: 2025-11-25 16:30:55.716 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:30:55 compute-0 nova_compute[254092]: 2025-11-25 16:30:55.717 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:30:55 compute-0 systemd[1]: libpod-806f8c602b1f32e14f86585cc4587fad789751a78bd217f6892c2f5a05ebfcc5.scope: Deactivated successfully.
Nov 25 16:30:55 compute-0 podman[288830]: 2025-11-25 16:30:55.727144808 +0000 UTC m=+1.628403757 container died 806f8c602b1f32e14f86585cc4587fad789751a78bd217f6892c2f5a05ebfcc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:30:55 compute-0 nova_compute[254092]: 2025-11-25 16:30:55.845 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088255.844613, 07003872-27e7-4fd9-80cf-a34257d5aa97 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:30:55 compute-0 nova_compute[254092]: 2025-11-25 16:30:55.845 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] VM Started (Lifecycle Event)
Nov 25 16:30:55 compute-0 nova_compute[254092]: 2025-11-25 16:30:55.862 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:30:55 compute-0 nova_compute[254092]: 2025-11-25 16:30:55.866 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088255.8447454, 07003872-27e7-4fd9-80cf-a34257d5aa97 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:30:55 compute-0 nova_compute[254092]: 2025-11-25 16:30:55.866 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] VM Paused (Lifecycle Event)
Nov 25 16:30:55 compute-0 nova_compute[254092]: 2025-11-25 16:30:55.884 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:30:55 compute-0 nova_compute[254092]: 2025-11-25 16:30:55.887 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:30:55 compute-0 nova_compute[254092]: 2025-11-25 16:30:55.910 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:30:55 compute-0 nova_compute[254092]: 2025-11-25 16:30:55.910 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4210MB free_disk=59.903011322021484GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 16:30:55 compute-0 nova_compute[254092]: 2025-11-25 16:30:55.911 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:55 compute-0 nova_compute[254092]: 2025-11-25 16:30:55.911 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:55 compute-0 nova_compute[254092]: 2025-11-25 16:30:55.917 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:30:56 compute-0 ceph-mon[74985]: pgmap v1300: 321 pgs: 321 active+clean; 191 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 622 KiB/s rd, 5.9 MiB/s wr, 227 op/s
Nov 25 16:30:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/1659250051' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:30:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/1659250051' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:30:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3727524151' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:30:56 compute-0 nova_compute[254092]: 2025-11-25 16:30:56.240 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 33b19faf-57e1-463b-8b4a-b50479a0ef0f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:30:56 compute-0 nova_compute[254092]: 2025-11-25 16:30:56.241 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 07003872-27e7-4fd9-80cf-a34257d5aa97 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:30:56 compute-0 nova_compute[254092]: 2025-11-25 16:30:56.242 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 16:30:56 compute-0 nova_compute[254092]: 2025-11-25 16:30:56.242 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 16:30:56 compute-0 nova_compute[254092]: 2025-11-25 16:30:56.301 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:30:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-78e41de426d9c5149c6d4ff903738ac8dd0b8b86e3430de0f4cc1ebc52ec50ae-merged.mount: Deactivated successfully.
Nov 25 16:30:56 compute-0 podman[288830]: 2025-11-25 16:30:56.651057743 +0000 UTC m=+2.552316692 container remove 806f8c602b1f32e14f86585cc4587fad789751a78bd217f6892c2f5a05ebfcc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_feynman, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 16:30:56 compute-0 sudo[288657]: pam_unix(sudo:session): session closed for user root
Nov 25 16:30:56 compute-0 systemd[1]: libpod-conmon-806f8c602b1f32e14f86585cc4587fad789751a78bd217f6892c2f5a05ebfcc5.scope: Deactivated successfully.
Nov 25 16:30:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:30:56 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2174536763' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:30:56 compute-0 nova_compute[254092]: 2025-11-25 16:30:56.742 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:30:56 compute-0 sudo[289123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:30:56 compute-0 nova_compute[254092]: 2025-11-25 16:30:56.749 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:30:56 compute-0 sudo[289123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:30:56 compute-0 sudo[289123]: pam_unix(sudo:session): session closed for user root
Nov 25 16:30:56 compute-0 nova_compute[254092]: 2025-11-25 16:30:56.763 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:30:56 compute-0 sudo[289167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:30:56 compute-0 sudo[289167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:30:56 compute-0 sudo[289167]: pam_unix(sudo:session): session closed for user root
Nov 25 16:30:56 compute-0 nova_compute[254092]: 2025-11-25 16:30:56.849 254096 DEBUG oslo_concurrency.lockutils [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Acquiring lock "33b19faf-57e1-463b-8b4a-b50479a0ef0f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:56 compute-0 nova_compute[254092]: 2025-11-25 16:30:56.849 254096 DEBUG oslo_concurrency.lockutils [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Lock "33b19faf-57e1-463b-8b4a-b50479a0ef0f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:56 compute-0 nova_compute[254092]: 2025-11-25 16:30:56.850 254096 DEBUG oslo_concurrency.lockutils [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Acquiring lock "33b19faf-57e1-463b-8b4a-b50479a0ef0f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:56 compute-0 nova_compute[254092]: 2025-11-25 16:30:56.850 254096 DEBUG oslo_concurrency.lockutils [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Lock "33b19faf-57e1-463b-8b4a-b50479a0ef0f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:56 compute-0 nova_compute[254092]: 2025-11-25 16:30:56.850 254096 DEBUG oslo_concurrency.lockutils [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Lock "33b19faf-57e1-463b-8b4a-b50479a0ef0f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:56 compute-0 nova_compute[254092]: 2025-11-25 16:30:56.851 254096 INFO nova.compute.manager [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Terminating instance
Nov 25 16:30:56 compute-0 nova_compute[254092]: 2025-11-25 16:30:56.852 254096 DEBUG nova.compute.manager [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:30:56 compute-0 podman[289152]: 2025-11-25 16:30:56.765851515 +0000 UTC m=+0.027451897 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:30:56 compute-0 sudo[289192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:30:56 compute-0 sudo[289192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:30:56 compute-0 sudo[289192]: pam_unix(sudo:session): session closed for user root
Nov 25 16:30:56 compute-0 sudo[289217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 16:30:56 compute-0 sudo[289217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:30:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1301: 321 pgs: 321 active+clean; 167 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 276 KiB/s rd, 3.9 MiB/s wr, 133 op/s
Nov 25 16:30:57 compute-0 nova_compute[254092]: 2025-11-25 16:30:57.025 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 16:30:57 compute-0 nova_compute[254092]: 2025-11-25 16:30:57.026 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.114s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:57 compute-0 podman[289152]: 2025-11-25 16:30:57.072818234 +0000 UTC m=+0.334418596 container create 2c7eb472a69f6becd3854736d7e837e3c3b905540b2770fa51a6e4d91e6342cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 16:30:57 compute-0 nova_compute[254092]: 2025-11-25 16:30:57.111 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:57 compute-0 systemd[1]: Started libpod-conmon-2c7eb472a69f6becd3854736d7e837e3c3b905540b2770fa51a6e4d91e6342cb.scope.
Nov 25 16:30:57 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:30:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a43bbff1afa7ddae500921472f918939c0d088677612449620b59fac58799310/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:30:57 compute-0 kernel: tapfb46dd7a-52 (unregistering): left promiscuous mode
Nov 25 16:30:57 compute-0 NetworkManager[48891]: <info>  [1764088257.4821] device (tapfb46dd7a-52): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:30:57 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2174536763' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:30:57 compute-0 ceph-mon[74985]: pgmap v1301: 321 pgs: 321 active+clean; 167 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 276 KiB/s rd, 3.9 MiB/s wr, 133 op/s
Nov 25 16:30:57 compute-0 nova_compute[254092]: 2025-11-25 16:30:57.531 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:57 compute-0 ovn_controller[153477]: 2025-11-25T16:30:57Z|00157|binding|INFO|Releasing lport fb46dd7a-52d4-44cb-b99e-81d7d653885c from this chassis (sb_readonly=0)
Nov 25 16:30:57 compute-0 ovn_controller[153477]: 2025-11-25T16:30:57Z|00158|binding|INFO|Setting lport fb46dd7a-52d4-44cb-b99e-81d7d653885c down in Southbound
Nov 25 16:30:57 compute-0 ovn_controller[153477]: 2025-11-25T16:30:57Z|00159|binding|INFO|Removing iface tapfb46dd7a-52 ovn-installed in OVS
Nov 25 16:30:57 compute-0 nova_compute[254092]: 2025-11-25 16:30:57.534 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:57 compute-0 nova_compute[254092]: 2025-11-25 16:30:57.547 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:57.558 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d4:54:45 10.100.0.10'], port_security=['fa:16:3e:d4:54:45 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '33b19faf-57e1-463b-8b4a-b50479a0ef0f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6ab64ae8-b8fa-4795-a243-9ebe45233e37', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f2f26334db2f4e2cadc5664efd73eb67', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e0797621-f8b9-4c57-8ae8-f4d291e244fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.218'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d6129ca0-5388-4dd2-a829-e686213800fe, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=fb46dd7a-52d4-44cb-b99e-81d7d653885c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:30:57 compute-0 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000019.scope: Deactivated successfully.
Nov 25 16:30:57 compute-0 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000019.scope: Consumed 14.335s CPU time.
Nov 25 16:30:57 compute-0 systemd-machined[216343]: Machine qemu-28-instance-00000019 terminated.
Nov 25 16:30:57 compute-0 podman[289152]: 2025-11-25 16:30:57.638044945 +0000 UTC m=+0.899645307 container init 2c7eb472a69f6becd3854736d7e837e3c3b905540b2770fa51a6e4d91e6342cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 16:30:57 compute-0 podman[289152]: 2025-11-25 16:30:57.644327586 +0000 UTC m=+0.905927948 container start 2c7eb472a69f6becd3854736d7e837e3c3b905540b2770fa51a6e4d91e6342cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:30:57 compute-0 neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d[289262]: [NOTICE]   (289282) : New worker (289285) forked
Nov 25 16:30:57 compute-0 neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d[289262]: [NOTICE]   (289282) : Loading success.
Nov 25 16:30:57 compute-0 nova_compute[254092]: 2025-11-25 16:30:57.690 254096 INFO nova.virt.libvirt.driver [-] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Instance destroyed successfully.
Nov 25 16:30:57 compute-0 nova_compute[254092]: 2025-11-25 16:30:57.691 254096 DEBUG nova.objects.instance [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Lazy-loading 'resources' on Instance uuid 33b19faf-57e1-463b-8b4a-b50479a0ef0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:30:57 compute-0 nova_compute[254092]: 2025-11-25 16:30:57.701 254096 DEBUG nova.virt.libvirt.vif [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:30:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1375199124',display_name='tempest-ServersTestManualDisk-server-1375199124',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1375199124',id=25,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCOQq+JT40N5kSAAkZKtTY8+kwc4Tq2+j0vXcLZMu4KKRGWjKEsrOB7QpF/UTscMrUzfK+p97q+eBa8XrywfkAV6Mo0KdjURR0zReL+ABXznVVDaiCZTtZ5HErawYUq7Fw==',key_name='tempest-keypair-637822798',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:30:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f2f26334db2f4e2cadc5664efd73eb67',ramdisk_id='',reservation_id='r-d0txwl4i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-420094767',owner_user_name='tempest-ServersTestManualDisk-420094767-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:30:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='fb8bd106d2264d719b9ebd9f83f19c5a',uuid=33b19faf-57e1-463b-8b4a-b50479a0ef0f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fb46dd7a-52d4-44cb-b99e-81d7d653885c", "address": "fa:16:3e:d4:54:45", "network": {"id": "6ab64ae8-b8fa-4795-a243-9ebe45233e37", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1540086437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f2f26334db2f4e2cadc5664efd73eb67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb46dd7a-52", "ovs_interfaceid": "fb46dd7a-52d4-44cb-b99e-81d7d653885c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:30:57 compute-0 nova_compute[254092]: 2025-11-25 16:30:57.701 254096 DEBUG nova.network.os_vif_util [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Converting VIF {"id": "fb46dd7a-52d4-44cb-b99e-81d7d653885c", "address": "fa:16:3e:d4:54:45", "network": {"id": "6ab64ae8-b8fa-4795-a243-9ebe45233e37", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1540086437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f2f26334db2f4e2cadc5664efd73eb67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb46dd7a-52", "ovs_interfaceid": "fb46dd7a-52d4-44cb-b99e-81d7d653885c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:30:57 compute-0 nova_compute[254092]: 2025-11-25 16:30:57.702 254096 DEBUG nova.network.os_vif_util [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d4:54:45,bridge_name='br-int',has_traffic_filtering=True,id=fb46dd7a-52d4-44cb-b99e-81d7d653885c,network=Network(6ab64ae8-b8fa-4795-a243-9ebe45233e37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb46dd7a-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:30:57 compute-0 nova_compute[254092]: 2025-11-25 16:30:57.702 254096 DEBUG os_vif [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d4:54:45,bridge_name='br-int',has_traffic_filtering=True,id=fb46dd7a-52d4-44cb-b99e-81d7d653885c,network=Network(6ab64ae8-b8fa-4795-a243-9ebe45233e37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb46dd7a-52') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:30:57 compute-0 nova_compute[254092]: 2025-11-25 16:30:57.704 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:57 compute-0 nova_compute[254092]: 2025-11-25 16:30:57.704 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfb46dd7a-52, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:30:57 compute-0 nova_compute[254092]: 2025-11-25 16:30:57.706 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:57 compute-0 nova_compute[254092]: 2025-11-25 16:30:57.707 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:57 compute-0 nova_compute[254092]: 2025-11-25 16:30:57.709 254096 INFO os_vif [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d4:54:45,bridge_name='br-int',has_traffic_filtering=True,id=fb46dd7a-52d4-44cb-b99e-81d7d653885c,network=Network(6ab64ae8-b8fa-4795-a243-9ebe45233e37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb46dd7a-52')
Nov 25 16:30:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:30:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:57.947 163338 INFO neutron.agent.ovn.metadata.agent [-] Port fb46dd7a-52d4-44cb-b99e-81d7d653885c in datapath 6ab64ae8-b8fa-4795-a243-9ebe45233e37 unbound from our chassis
Nov 25 16:30:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:57.950 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6ab64ae8-b8fa-4795-a243-9ebe45233e37, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:30:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:57.951 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[af1f6063-bf1a-4fe7-bafe-2b479a9374dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:30:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:30:57.951 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6ab64ae8-b8fa-4795-a243-9ebe45233e37 namespace which is not needed anymore
Nov 25 16:30:58 compute-0 nova_compute[254092]: 2025-11-25 16:30:58.026 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:30:58 compute-0 nova_compute[254092]: 2025-11-25 16:30:58.027 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:30:58 compute-0 podman[289339]: 2025-11-25 16:30:58.011197473 +0000 UTC m=+0.021015803 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:30:58 compute-0 podman[289339]: 2025-11-25 16:30:58.302492525 +0000 UTC m=+0.312310825 container create 4e3b8a1e37ed7bcd89e6132d9dcdc337d206d35c0f1a100e609dd5c278d34c66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 16:30:58 compute-0 systemd[1]: Started libpod-conmon-4e3b8a1e37ed7bcd89e6132d9dcdc337d206d35c0f1a100e609dd5c278d34c66.scope.
Nov 25 16:30:58 compute-0 nova_compute[254092]: 2025-11-25 16:30:58.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:30:58 compute-0 nova_compute[254092]: 2025-11-25 16:30:58.519 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088243.5145116, f0cb83d8-c2a3-49d1-8c01-b9be9922abd1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:30:58 compute-0 nova_compute[254092]: 2025-11-25 16:30:58.519 254096 INFO nova.compute.manager [-] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] VM Stopped (Lifecycle Event)
Nov 25 16:30:58 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:30:58 compute-0 neutron-haproxy-ovnmeta-6ab64ae8-b8fa-4795-a243-9ebe45233e37[287137]: [NOTICE]   (287144) : haproxy version is 2.8.14-c23fe91
Nov 25 16:30:58 compute-0 neutron-haproxy-ovnmeta-6ab64ae8-b8fa-4795-a243-9ebe45233e37[287137]: [NOTICE]   (287144) : path to executable is /usr/sbin/haproxy
Nov 25 16:30:58 compute-0 neutron-haproxy-ovnmeta-6ab64ae8-b8fa-4795-a243-9ebe45233e37[287137]: [WARNING]  (287144) : Exiting Master process...
Nov 25 16:30:58 compute-0 neutron-haproxy-ovnmeta-6ab64ae8-b8fa-4795-a243-9ebe45233e37[287137]: [WARNING]  (287144) : Exiting Master process...
Nov 25 16:30:58 compute-0 neutron-haproxy-ovnmeta-6ab64ae8-b8fa-4795-a243-9ebe45233e37[287137]: [ALERT]    (287144) : Current worker (287146) exited with code 143 (Terminated)
Nov 25 16:30:58 compute-0 neutron-haproxy-ovnmeta-6ab64ae8-b8fa-4795-a243-9ebe45233e37[287137]: [WARNING]  (287144) : All workers exited. Exiting... (0)
Nov 25 16:30:58 compute-0 systemd[1]: libpod-533aa05d3161819cd0a2a80d8c311ee3debf8cbb4cca3e3f3c4aec555e031771.scope: Deactivated successfully.
Nov 25 16:30:58 compute-0 nova_compute[254092]: 2025-11-25 16:30:58.544 254096 DEBUG nova.compute.manager [None req-ecf1541e-2746-4b4a-b9f3-40ebf45e122f - - - - - -] [instance: f0cb83d8-c2a3-49d1-8c01-b9be9922abd1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:30:58 compute-0 nova_compute[254092]: 2025-11-25 16:30:58.559 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:30:58 compute-0 nova_compute[254092]: 2025-11-25 16:30:58.729 254096 DEBUG nova.compute.manager [req-12e1a4f0-931c-4614-9e8a-88ae1a9f595b req-1dbdcfd2-01a3-4a52-a9e4-dce95ee5f2d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received event network-vif-plugged-19d5425c-f0c6-4c68-b8a6-cb1c6357d249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:30:58 compute-0 nova_compute[254092]: 2025-11-25 16:30:58.730 254096 DEBUG oslo_concurrency.lockutils [req-12e1a4f0-931c-4614-9e8a-88ae1a9f595b req-1dbdcfd2-01a3-4a52-a9e4-dce95ee5f2d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:58 compute-0 nova_compute[254092]: 2025-11-25 16:30:58.730 254096 DEBUG oslo_concurrency.lockutils [req-12e1a4f0-931c-4614-9e8a-88ae1a9f595b req-1dbdcfd2-01a3-4a52-a9e4-dce95ee5f2d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:58 compute-0 nova_compute[254092]: 2025-11-25 16:30:58.730 254096 DEBUG oslo_concurrency.lockutils [req-12e1a4f0-931c-4614-9e8a-88ae1a9f595b req-1dbdcfd2-01a3-4a52-a9e4-dce95ee5f2d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:58 compute-0 nova_compute[254092]: 2025-11-25 16:30:58.730 254096 DEBUG nova.compute.manager [req-12e1a4f0-931c-4614-9e8a-88ae1a9f595b req-1dbdcfd2-01a3-4a52-a9e4-dce95ee5f2d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Processing event network-vif-plugged-19d5425c-f0c6-4c68-b8a6-cb1c6357d249 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:30:58 compute-0 nova_compute[254092]: 2025-11-25 16:30:58.731 254096 DEBUG nova.compute.manager [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:30:58 compute-0 nova_compute[254092]: 2025-11-25 16:30:58.734 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088258.7343185, 07003872-27e7-4fd9-80cf-a34257d5aa97 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:30:58 compute-0 nova_compute[254092]: 2025-11-25 16:30:58.734 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] VM Resumed (Lifecycle Event)
Nov 25 16:30:58 compute-0 nova_compute[254092]: 2025-11-25 16:30:58.736 254096 DEBUG nova.virt.libvirt.driver [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:30:58 compute-0 nova_compute[254092]: 2025-11-25 16:30:58.739 254096 INFO nova.virt.libvirt.driver [-] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Instance spawned successfully.
Nov 25 16:30:58 compute-0 nova_compute[254092]: 2025-11-25 16:30:58.739 254096 DEBUG nova.virt.libvirt.driver [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:30:58 compute-0 nova_compute[254092]: 2025-11-25 16:30:58.756 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:30:58 compute-0 nova_compute[254092]: 2025-11-25 16:30:58.759 254096 DEBUG nova.virt.libvirt.driver [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:30:58 compute-0 nova_compute[254092]: 2025-11-25 16:30:58.759 254096 DEBUG nova.virt.libvirt.driver [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:30:58 compute-0 nova_compute[254092]: 2025-11-25 16:30:58.760 254096 DEBUG nova.virt.libvirt.driver [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:30:58 compute-0 nova_compute[254092]: 2025-11-25 16:30:58.760 254096 DEBUG nova.virt.libvirt.driver [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:30:58 compute-0 nova_compute[254092]: 2025-11-25 16:30:58.760 254096 DEBUG nova.virt.libvirt.driver [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:30:58 compute-0 nova_compute[254092]: 2025-11-25 16:30:58.761 254096 DEBUG nova.virt.libvirt.driver [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:30:58 compute-0 nova_compute[254092]: 2025-11-25 16:30:58.765 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:30:58 compute-0 nova_compute[254092]: 2025-11-25 16:30:58.787 254096 DEBUG nova.compute.manager [req-3e008113-fe8a-4db8-ab39-3e1d3f9ada49 req-697d8f43-114d-4b8a-8278-76d3afaab94a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Received event network-vif-unplugged-fb46dd7a-52d4-44cb-b99e-81d7d653885c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:30:58 compute-0 nova_compute[254092]: 2025-11-25 16:30:58.787 254096 DEBUG oslo_concurrency.lockutils [req-3e008113-fe8a-4db8-ab39-3e1d3f9ada49 req-697d8f43-114d-4b8a-8278-76d3afaab94a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "33b19faf-57e1-463b-8b4a-b50479a0ef0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:30:58 compute-0 nova_compute[254092]: 2025-11-25 16:30:58.787 254096 DEBUG oslo_concurrency.lockutils [req-3e008113-fe8a-4db8-ab39-3e1d3f9ada49 req-697d8f43-114d-4b8a-8278-76d3afaab94a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "33b19faf-57e1-463b-8b4a-b50479a0ef0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:30:58 compute-0 nova_compute[254092]: 2025-11-25 16:30:58.787 254096 DEBUG oslo_concurrency.lockutils [req-3e008113-fe8a-4db8-ab39-3e1d3f9ada49 req-697d8f43-114d-4b8a-8278-76d3afaab94a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "33b19faf-57e1-463b-8b4a-b50479a0ef0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:58 compute-0 nova_compute[254092]: 2025-11-25 16:30:58.787 254096 DEBUG nova.compute.manager [req-3e008113-fe8a-4db8-ab39-3e1d3f9ada49 req-697d8f43-114d-4b8a-8278-76d3afaab94a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] No waiting events found dispatching network-vif-unplugged-fb46dd7a-52d4-44cb-b99e-81d7d653885c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:30:58 compute-0 nova_compute[254092]: 2025-11-25 16:30:58.788 254096 DEBUG nova.compute.manager [req-3e008113-fe8a-4db8-ab39-3e1d3f9ada49 req-697d8f43-114d-4b8a-8278-76d3afaab94a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Received event network-vif-unplugged-fb46dd7a-52d4-44cb-b99e-81d7d653885c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:30:58 compute-0 nova_compute[254092]: 2025-11-25 16:30:58.790 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:30:58 compute-0 podman[289339]: 2025-11-25 16:30:58.87266621 +0000 UTC m=+0.882484520 container init 4e3b8a1e37ed7bcd89e6132d9dcdc337d206d35c0f1a100e609dd5c278d34c66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_saha, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:30:58 compute-0 podman[289366]: 2025-11-25 16:30:58.876214928 +0000 UTC m=+0.842703319 container died 533aa05d3161819cd0a2a80d8c311ee3debf8cbb4cca3e3f3c4aec555e031771 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6ab64ae8-b8fa-4795-a243-9ebe45233e37, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:30:58 compute-0 podman[289339]: 2025-11-25 16:30:58.882376725 +0000 UTC m=+0.892195035 container start 4e3b8a1e37ed7bcd89e6132d9dcdc337d206d35c0f1a100e609dd5c278d34c66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_saha, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 16:30:58 compute-0 adoring_saha[289381]: 167 167
Nov 25 16:30:58 compute-0 systemd[1]: libpod-4e3b8a1e37ed7bcd89e6132d9dcdc337d206d35c0f1a100e609dd5c278d34c66.scope: Deactivated successfully.
Nov 25 16:30:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1302: 321 pgs: 321 active+clean; 167 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 272 KiB/s rd, 3.8 MiB/s wr, 127 op/s
Nov 25 16:30:58 compute-0 nova_compute[254092]: 2025-11-25 16:30:58.939 254096 INFO nova.compute.manager [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Took 11.64 seconds to spawn the instance on the hypervisor.
Nov 25 16:30:58 compute-0 nova_compute[254092]: 2025-11-25 16:30:58.939 254096 DEBUG nova.compute.manager [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:30:59 compute-0 nova_compute[254092]: 2025-11-25 16:30:59.152 254096 INFO nova.compute.manager [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Took 12.78 seconds to build instance.
Nov 25 16:30:59 compute-0 nova_compute[254092]: 2025-11-25 16:30:59.411 254096 DEBUG oslo_concurrency.lockutils [None req-e8f371b7-5e45-4f40-a9a7-3a5b4ee04350 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.106s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:30:59 compute-0 ceph-mon[74985]: pgmap v1302: 321 pgs: 321 active+clean; 167 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 272 KiB/s rd, 3.8 MiB/s wr, 127 op/s
Nov 25 16:30:59 compute-0 podman[289339]: 2025-11-25 16:30:59.701436689 +0000 UTC m=+1.711255009 container attach 4e3b8a1e37ed7bcd89e6132d9dcdc337d206d35c0f1a100e609dd5c278d34c66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 16:30:59 compute-0 podman[289339]: 2025-11-25 16:30:59.702453058 +0000 UTC m=+1.712271368 container died 4e3b8a1e37ed7bcd89e6132d9dcdc337d206d35c0f1a100e609dd5c278d34c66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_saha, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:31:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f8a2175b9392d89f04a2ce17873977b82d9c3b5170918db62c07eca67b932fb-merged.mount: Deactivated successfully.
Nov 25 16:31:00 compute-0 nova_compute[254092]: 2025-11-25 16:31:00.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:31:00 compute-0 nova_compute[254092]: 2025-11-25 16:31:00.494 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:31:00 compute-0 nova_compute[254092]: 2025-11-25 16:31:00.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 16:31:00 compute-0 podman[289339]: 2025-11-25 16:31:00.55156913 +0000 UTC m=+2.561387440 container remove 4e3b8a1e37ed7bcd89e6132d9dcdc337d206d35c0f1a100e609dd5c278d34c66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_saha, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:31:00 compute-0 nova_compute[254092]: 2025-11-25 16:31:00.559 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 16:31:00 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-533aa05d3161819cd0a2a80d8c311ee3debf8cbb4cca3e3f3c4aec555e031771-userdata-shm.mount: Deactivated successfully.
Nov 25 16:31:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-972254736acb7813ada732b95c02a171ed8ca59846d32b38baa12f81aa1e9009-merged.mount: Deactivated successfully.
Nov 25 16:31:00 compute-0 nova_compute[254092]: 2025-11-25 16:31:00.798 254096 DEBUG nova.compute.manager [req-d0aa152a-070b-4237-a3a5-e1acb71f09ef req-125c262c-e89b-4249-b34d-a9186cb0fd43 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received event network-vif-plugged-19d5425c-f0c6-4c68-b8a6-cb1c6357d249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:31:00 compute-0 nova_compute[254092]: 2025-11-25 16:31:00.799 254096 DEBUG oslo_concurrency.lockutils [req-d0aa152a-070b-4237-a3a5-e1acb71f09ef req-125c262c-e89b-4249-b34d-a9186cb0fd43 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:31:00 compute-0 nova_compute[254092]: 2025-11-25 16:31:00.799 254096 DEBUG oslo_concurrency.lockutils [req-d0aa152a-070b-4237-a3a5-e1acb71f09ef req-125c262c-e89b-4249-b34d-a9186cb0fd43 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:31:00 compute-0 nova_compute[254092]: 2025-11-25 16:31:00.799 254096 DEBUG oslo_concurrency.lockutils [req-d0aa152a-070b-4237-a3a5-e1acb71f09ef req-125c262c-e89b-4249-b34d-a9186cb0fd43 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:31:00 compute-0 nova_compute[254092]: 2025-11-25 16:31:00.799 254096 DEBUG nova.compute.manager [req-d0aa152a-070b-4237-a3a5-e1acb71f09ef req-125c262c-e89b-4249-b34d-a9186cb0fd43 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] No waiting events found dispatching network-vif-plugged-19d5425c-f0c6-4c68-b8a6-cb1c6357d249 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:31:00 compute-0 nova_compute[254092]: 2025-11-25 16:31:00.799 254096 WARNING nova.compute.manager [req-d0aa152a-070b-4237-a3a5-e1acb71f09ef req-125c262c-e89b-4249-b34d-a9186cb0fd43 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received unexpected event network-vif-plugged-19d5425c-f0c6-4c68-b8a6-cb1c6357d249 for instance with vm_state active and task_state None.
Nov 25 16:31:00 compute-0 nova_compute[254092]: 2025-11-25 16:31:00.857 254096 DEBUG nova.compute.manager [req-9cb336f2-e875-4da5-a954-383a284c6ed9 req-ab182235-f873-4f32-a7a8-61daaaaf8f8f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Received event network-vif-plugged-fb46dd7a-52d4-44cb-b99e-81d7d653885c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:31:00 compute-0 nova_compute[254092]: 2025-11-25 16:31:00.857 254096 DEBUG oslo_concurrency.lockutils [req-9cb336f2-e875-4da5-a954-383a284c6ed9 req-ab182235-f873-4f32-a7a8-61daaaaf8f8f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "33b19faf-57e1-463b-8b4a-b50479a0ef0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:31:00 compute-0 nova_compute[254092]: 2025-11-25 16:31:00.857 254096 DEBUG oslo_concurrency.lockutils [req-9cb336f2-e875-4da5-a954-383a284c6ed9 req-ab182235-f873-4f32-a7a8-61daaaaf8f8f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "33b19faf-57e1-463b-8b4a-b50479a0ef0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:31:00 compute-0 nova_compute[254092]: 2025-11-25 16:31:00.858 254096 DEBUG oslo_concurrency.lockutils [req-9cb336f2-e875-4da5-a954-383a284c6ed9 req-ab182235-f873-4f32-a7a8-61daaaaf8f8f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "33b19faf-57e1-463b-8b4a-b50479a0ef0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:31:00 compute-0 nova_compute[254092]: 2025-11-25 16:31:00.858 254096 DEBUG nova.compute.manager [req-9cb336f2-e875-4da5-a954-383a284c6ed9 req-ab182235-f873-4f32-a7a8-61daaaaf8f8f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] No waiting events found dispatching network-vif-plugged-fb46dd7a-52d4-44cb-b99e-81d7d653885c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:31:00 compute-0 nova_compute[254092]: 2025-11-25 16:31:00.858 254096 WARNING nova.compute.manager [req-9cb336f2-e875-4da5-a954-383a284c6ed9 req-ab182235-f873-4f32-a7a8-61daaaaf8f8f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Received unexpected event network-vif-plugged-fb46dd7a-52d4-44cb-b99e-81d7d653885c for instance with vm_state active and task_state deleting.
Nov 25 16:31:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1303: 321 pgs: 321 active+clean; 167 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.8 MiB/s wr, 166 op/s
Nov 25 16:31:01 compute-0 podman[289421]: 2025-11-25 16:31:01.159844292 +0000 UTC m=+0.482218225 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:31:01 compute-0 podman[289366]: 2025-11-25 16:31:01.216928994 +0000 UTC m=+3.183417385 container cleanup 533aa05d3161819cd0a2a80d8c311ee3debf8cbb4cca3e3f3c4aec555e031771 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6ab64ae8-b8fa-4795-a243-9ebe45233e37, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 16:31:01 compute-0 systemd[1]: libpod-conmon-533aa05d3161819cd0a2a80d8c311ee3debf8cbb4cca3e3f3c4aec555e031771.scope: Deactivated successfully.
Nov 25 16:31:01 compute-0 podman[289421]: 2025-11-25 16:31:01.262914335 +0000 UTC m=+0.585288248 container create d43bf9abf4e8294b2baddc6e2501efeebbc59b7cfc4ac17aeec0e0a2a5946953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_dewdney, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 16:31:01 compute-0 ceph-mon[74985]: pgmap v1303: 321 pgs: 321 active+clean; 167 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.8 MiB/s wr, 166 op/s
Nov 25 16:31:01 compute-0 systemd[1]: libpod-conmon-4e3b8a1e37ed7bcd89e6132d9dcdc337d206d35c0f1a100e609dd5c278d34c66.scope: Deactivated successfully.
Nov 25 16:31:01 compute-0 systemd[1]: Started libpod-conmon-d43bf9abf4e8294b2baddc6e2501efeebbc59b7cfc4ac17aeec0e0a2a5946953.scope.
Nov 25 16:31:01 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:31:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ea10d2da1e68f2a40490276152aa7ebedc31ffe6ddb69b8a203fc4263a08a65/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:31:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ea10d2da1e68f2a40490276152aa7ebedc31ffe6ddb69b8a203fc4263a08a65/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:31:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ea10d2da1e68f2a40490276152aa7ebedc31ffe6ddb69b8a203fc4263a08a65/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:31:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ea10d2da1e68f2a40490276152aa7ebedc31ffe6ddb69b8a203fc4263a08a65/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:31:01 compute-0 nova_compute[254092]: 2025-11-25 16:31:01.566 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:31:01 compute-0 podman[289421]: 2025-11-25 16:31:01.893549896 +0000 UTC m=+1.215923819 container init d43bf9abf4e8294b2baddc6e2501efeebbc59b7cfc4ac17aeec0e0a2a5946953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 16:31:01 compute-0 podman[289421]: 2025-11-25 16:31:01.904342169 +0000 UTC m=+1.226716082 container start d43bf9abf4e8294b2baddc6e2501efeebbc59b7cfc4ac17aeec0e0a2a5946953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:31:02 compute-0 podman[289421]: 2025-11-25 16:31:02.182975656 +0000 UTC m=+1.505349599 container attach d43bf9abf4e8294b2baddc6e2501efeebbc59b7cfc4ac17aeec0e0a2a5946953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:31:02 compute-0 nova_compute[254092]: 2025-11-25 16:31:02.193 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088247.1926734, 090ac2d7-979e-4706-8a01-5e94ab72282d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:31:02 compute-0 nova_compute[254092]: 2025-11-25 16:31:02.193 254096 INFO nova.compute.manager [-] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] VM Stopped (Lifecycle Event)
Nov 25 16:31:02 compute-0 nova_compute[254092]: 2025-11-25 16:31:02.210 254096 DEBUG nova.compute.manager [None req-aea58e5c-377d-4dd4-83f1-01517dd0938a - - - - - -] [instance: 090ac2d7-979e-4706-8a01-5e94ab72282d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:31:02 compute-0 podman[289436]: 2025-11-25 16:31:02.374884026 +0000 UTC m=+1.133576210 container remove 533aa05d3161819cd0a2a80d8c311ee3debf8cbb4cca3e3f3c4aec555e031771 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6ab64ae8-b8fa-4795-a243-9ebe45233e37, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 16:31:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:02.382 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6aaebdb6-88a0-4b22-ac36-7fdc665c21c7]: (4, ('Tue Nov 25 04:30:58 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6ab64ae8-b8fa-4795-a243-9ebe45233e37 (533aa05d3161819cd0a2a80d8c311ee3debf8cbb4cca3e3f3c4aec555e031771)\n533aa05d3161819cd0a2a80d8c311ee3debf8cbb4cca3e3f3c4aec555e031771\nTue Nov 25 04:31:01 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6ab64ae8-b8fa-4795-a243-9ebe45233e37 (533aa05d3161819cd0a2a80d8c311ee3debf8cbb4cca3e3f3c4aec555e031771)\n533aa05d3161819cd0a2a80d8c311ee3debf8cbb4cca3e3f3c4aec555e031771\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:02.385 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[99acf10c-ed50-4340-a3e8-2d684b161608]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:02.386 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ab64ae8-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:31:02 compute-0 nova_compute[254092]: 2025-11-25 16:31:02.388 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:02 compute-0 kernel: tap6ab64ae8-b0: left promiscuous mode
Nov 25 16:31:02 compute-0 nova_compute[254092]: 2025-11-25 16:31:02.404 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:02.407 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ce2bbe16-4284-4197-bd3f-65cefe4d5771]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:02.422 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[88499f58-edf6-43ec-b135-0c6e43c7fa5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:02.423 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f3cbd8d7-083c-4bd7-a34c-e16e3b44c86a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:02.438 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[86231929-f7ca-4a6f-8f52-9a76ab5b5b1e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 468313, 'reachable_time': 42982, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289459, 'error': None, 'target': 'ovnmeta-6ab64ae8-b8fa-4795-a243-9ebe45233e37', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:02.442 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6ab64ae8-b8fa-4795-a243-9ebe45233e37 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:31:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:02.442 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[ad3e93f0-179f-4296-bb35-0ef0528aaa95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:02 compute-0 systemd[1]: run-netns-ovnmeta\x2d6ab64ae8\x2db8fa\x2d4795\x2da243\x2d9ebe45233e37.mount: Deactivated successfully.
Nov 25 16:31:02 compute-0 nova_compute[254092]: 2025-11-25 16:31:02.533 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]: {
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:     "0": [
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:         {
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:             "devices": [
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:                 "/dev/loop3"
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:             ],
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:             "lv_name": "ceph_lv0",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:             "lv_size": "21470642176",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:             "name": "ceph_lv0",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:             "tags": {
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:                 "ceph.cluster_name": "ceph",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:                 "ceph.crush_device_class": "",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:                 "ceph.encrypted": "0",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:                 "ceph.osd_id": "0",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:                 "ceph.type": "block",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:                 "ceph.vdo": "0"
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:             },
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:             "type": "block",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:             "vg_name": "ceph_vg0"
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:         }
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:     ],
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:     "1": [
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:         {
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:             "devices": [
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:                 "/dev/loop4"
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:             ],
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:             "lv_name": "ceph_lv1",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:             "lv_size": "21470642176",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:             "name": "ceph_lv1",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:             "tags": {
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:                 "ceph.cluster_name": "ceph",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:                 "ceph.crush_device_class": "",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:                 "ceph.encrypted": "0",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:                 "ceph.osd_id": "1",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:                 "ceph.type": "block",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:                 "ceph.vdo": "0"
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:             },
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:             "type": "block",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:             "vg_name": "ceph_vg1"
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:         }
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:     ],
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:     "2": [
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:         {
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:             "devices": [
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:                 "/dev/loop5"
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:             ],
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:             "lv_name": "ceph_lv2",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:             "lv_size": "21470642176",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:             "name": "ceph_lv2",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:             "tags": {
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:                 "ceph.cluster_name": "ceph",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:                 "ceph.crush_device_class": "",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:                 "ceph.encrypted": "0",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:                 "ceph.osd_id": "2",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:                 "ceph.type": "block",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:                 "ceph.vdo": "0"
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:             },
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:             "type": "block",
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:             "vg_name": "ceph_vg2"
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:         }
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]:     ]
Nov 25 16:31:02 compute-0 stoic_dewdney[289453]: }
Nov 25 16:31:02 compute-0 nova_compute[254092]: 2025-11-25 16:31:02.717 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:02 compute-0 systemd[1]: libpod-d43bf9abf4e8294b2baddc6e2501efeebbc59b7cfc4ac17aeec0e0a2a5946953.scope: Deactivated successfully.
Nov 25 16:31:02 compute-0 podman[289421]: 2025-11-25 16:31:02.748058195 +0000 UTC m=+2.070432098 container died d43bf9abf4e8294b2baddc6e2501efeebbc59b7cfc4ac17aeec0e0a2a5946953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 16:31:02 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:31:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1304: 321 pgs: 321 active+clean; 167 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 136 op/s
Nov 25 16:31:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ea10d2da1e68f2a40490276152aa7ebedc31ffe6ddb69b8a203fc4263a08a65-merged.mount: Deactivated successfully.
Nov 25 16:31:03 compute-0 ceph-mon[74985]: pgmap v1304: 321 pgs: 321 active+clean; 167 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 136 op/s
Nov 25 16:31:03 compute-0 podman[289421]: 2025-11-25 16:31:03.751111773 +0000 UTC m=+3.073485686 container remove d43bf9abf4e8294b2baddc6e2501efeebbc59b7cfc4ac17aeec0e0a2a5946953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:31:03 compute-0 systemd[1]: libpod-conmon-d43bf9abf4e8294b2baddc6e2501efeebbc59b7cfc4ac17aeec0e0a2a5946953.scope: Deactivated successfully.
Nov 25 16:31:03 compute-0 sudo[289217]: pam_unix(sudo:session): session closed for user root
Nov 25 16:31:03 compute-0 sudo[289481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:31:03 compute-0 sudo[289481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:31:03 compute-0 sudo[289481]: pam_unix(sudo:session): session closed for user root
Nov 25 16:31:03 compute-0 sudo[289506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:31:03 compute-0 sudo[289506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:31:03 compute-0 sudo[289506]: pam_unix(sudo:session): session closed for user root
Nov 25 16:31:03 compute-0 sudo[289531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:31:03 compute-0 sudo[289531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:31:03 compute-0 sudo[289531]: pam_unix(sudo:session): session closed for user root
Nov 25 16:31:04 compute-0 sudo[289556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 16:31:04 compute-0 sudo[289556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:31:04 compute-0 podman[289618]: 2025-11-25 16:31:04.421441143 +0000 UTC m=+0.111215466 container create 0f5c58e7987ef92b50796f1d68c2aec49db045315d1f3c4b0b1f7c3fbdd141bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:31:04 compute-0 podman[289618]: 2025-11-25 16:31:04.335691501 +0000 UTC m=+0.025465824 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:31:04 compute-0 systemd[1]: Started libpod-conmon-0f5c58e7987ef92b50796f1d68c2aec49db045315d1f3c4b0b1f7c3fbdd141bc.scope.
Nov 25 16:31:04 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:31:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1305: 321 pgs: 321 active+clean; 99 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 17 KiB/s wr, 133 op/s
Nov 25 16:31:05 compute-0 podman[289618]: 2025-11-25 16:31:05.070834683 +0000 UTC m=+0.760609016 container init 0f5c58e7987ef92b50796f1d68c2aec49db045315d1f3c4b0b1f7c3fbdd141bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:31:05 compute-0 podman[289618]: 2025-11-25 16:31:05.080379393 +0000 UTC m=+0.770153686 container start 0f5c58e7987ef92b50796f1d68c2aec49db045315d1f3c4b0b1f7c3fbdd141bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:31:05 compute-0 gifted_haibt[289634]: 167 167
Nov 25 16:31:05 compute-0 systemd[1]: libpod-0f5c58e7987ef92b50796f1d68c2aec49db045315d1f3c4b0b1f7c3fbdd141bc.scope: Deactivated successfully.
Nov 25 16:31:05 compute-0 podman[289618]: 2025-11-25 16:31:05.288404731 +0000 UTC m=+0.978179054 container attach 0f5c58e7987ef92b50796f1d68c2aec49db045315d1f3c4b0b1f7c3fbdd141bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:31:05 compute-0 podman[289618]: 2025-11-25 16:31:05.288963896 +0000 UTC m=+0.978738199 container died 0f5c58e7987ef92b50796f1d68c2aec49db045315d1f3c4b0b1f7c3fbdd141bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Nov 25 16:31:05 compute-0 ceph-mon[74985]: pgmap v1305: 321 pgs: 321 active+clean; 99 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 17 KiB/s wr, 133 op/s
Nov 25 16:31:05 compute-0 nova_compute[254092]: 2025-11-25 16:31:05.727 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088250.725776, 3375e096-321c-459b-8b6a-e085bb62872f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:31:05 compute-0 nova_compute[254092]: 2025-11-25 16:31:05.729 254096 INFO nova.compute.manager [-] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] VM Stopped (Lifecycle Event)
Nov 25 16:31:05 compute-0 nova_compute[254092]: 2025-11-25 16:31:05.760 254096 DEBUG nova.compute.manager [None req-9b6703f6-d26c-4656-8a5a-56f1d920a103 - - - - - -] [instance: 3375e096-321c-459b-8b6a-e085bb62872f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:31:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-33f214967b0b7e82d7cdbd753440e937abb572c4677afed5a81a0e60710121ec-merged.mount: Deactivated successfully.
Nov 25 16:31:06 compute-0 podman[289618]: 2025-11-25 16:31:06.535678571 +0000 UTC m=+2.225452884 container remove 0f5c58e7987ef92b50796f1d68c2aec49db045315d1f3c4b0b1f7c3fbdd141bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:31:06 compute-0 systemd[1]: libpod-conmon-0f5c58e7987ef92b50796f1d68c2aec49db045315d1f3c4b0b1f7c3fbdd141bc.scope: Deactivated successfully.
Nov 25 16:31:06 compute-0 podman[289660]: 2025-11-25 16:31:06.70737152 +0000 UTC m=+0.024324593 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:31:06 compute-0 podman[289660]: 2025-11-25 16:31:06.887003935 +0000 UTC m=+0.203956958 container create 36405b4451c16918675941ed82c615ef507b8302882c9a2f918c02dcd71f76a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_cerf, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 16:31:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1306: 321 pgs: 321 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 103 op/s
Nov 25 16:31:07 compute-0 systemd[1]: Started libpod-conmon-36405b4451c16918675941ed82c615ef507b8302882c9a2f918c02dcd71f76a4.scope.
Nov 25 16:31:07 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:31:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c155da59dd21363d4dcc58a3fb83ab0260a00536bffae45c7d7cb1492b33676c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:31:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c155da59dd21363d4dcc58a3fb83ab0260a00536bffae45c7d7cb1492b33676c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:31:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c155da59dd21363d4dcc58a3fb83ab0260a00536bffae45c7d7cb1492b33676c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:31:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c155da59dd21363d4dcc58a3fb83ab0260a00536bffae45c7d7cb1492b33676c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:31:07 compute-0 podman[289660]: 2025-11-25 16:31:07.249545605 +0000 UTC m=+0.566498628 container init 36405b4451c16918675941ed82c615ef507b8302882c9a2f918c02dcd71f76a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_cerf, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:31:07 compute-0 podman[289660]: 2025-11-25 16:31:07.259139386 +0000 UTC m=+0.576092409 container start 36405b4451c16918675941ed82c615ef507b8302882c9a2f918c02dcd71f76a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_cerf, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:31:07 compute-0 ceph-mon[74985]: pgmap v1306: 321 pgs: 321 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 103 op/s
Nov 25 16:31:07 compute-0 nova_compute[254092]: 2025-11-25 16:31:07.587 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:07 compute-0 nova_compute[254092]: 2025-11-25 16:31:07.719 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:08 compute-0 podman[289660]: 2025-11-25 16:31:08.626174603 +0000 UTC m=+1.943127626 container attach 36405b4451c16918675941ed82c615ef507b8302882c9a2f918c02dcd71f76a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:31:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:31:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1307: 321 pgs: 321 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 597 B/s wr, 90 op/s
Nov 25 16:31:09 compute-0 ceph-mon[74985]: pgmap v1307: 321 pgs: 321 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 597 B/s wr, 90 op/s
Nov 25 16:31:09 compute-0 cool_cerf[289676]: {
Nov 25 16:31:09 compute-0 cool_cerf[289676]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 16:31:09 compute-0 cool_cerf[289676]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:31:09 compute-0 cool_cerf[289676]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 16:31:09 compute-0 cool_cerf[289676]:         "osd_id": 1,
Nov 25 16:31:09 compute-0 cool_cerf[289676]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:31:09 compute-0 cool_cerf[289676]:         "type": "bluestore"
Nov 25 16:31:09 compute-0 cool_cerf[289676]:     },
Nov 25 16:31:09 compute-0 cool_cerf[289676]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 16:31:09 compute-0 cool_cerf[289676]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:31:09 compute-0 cool_cerf[289676]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 16:31:09 compute-0 cool_cerf[289676]:         "osd_id": 2,
Nov 25 16:31:09 compute-0 cool_cerf[289676]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:31:09 compute-0 cool_cerf[289676]:         "type": "bluestore"
Nov 25 16:31:09 compute-0 cool_cerf[289676]:     },
Nov 25 16:31:09 compute-0 cool_cerf[289676]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 16:31:09 compute-0 cool_cerf[289676]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:31:09 compute-0 cool_cerf[289676]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 16:31:09 compute-0 cool_cerf[289676]:         "osd_id": 0,
Nov 25 16:31:09 compute-0 cool_cerf[289676]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:31:09 compute-0 cool_cerf[289676]:         "type": "bluestore"
Nov 25 16:31:09 compute-0 cool_cerf[289676]:     }
Nov 25 16:31:09 compute-0 cool_cerf[289676]: }
Nov 25 16:31:09 compute-0 systemd[1]: libpod-36405b4451c16918675941ed82c615ef507b8302882c9a2f918c02dcd71f76a4.scope: Deactivated successfully.
Nov 25 16:31:09 compute-0 podman[289660]: 2025-11-25 16:31:09.398905309 +0000 UTC m=+2.715858342 container died 36405b4451c16918675941ed82c615ef507b8302882c9a2f918c02dcd71f76a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_cerf, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:31:09 compute-0 systemd[1]: libpod-36405b4451c16918675941ed82c615ef507b8302882c9a2f918c02dcd71f76a4.scope: Consumed 1.005s CPU time.
Nov 25 16:31:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-c155da59dd21363d4dcc58a3fb83ab0260a00536bffae45c7d7cb1492b33676c-merged.mount: Deactivated successfully.
Nov 25 16:31:10 compute-0 nova_compute[254092]: 2025-11-25 16:31:10.028 254096 INFO nova.virt.libvirt.driver [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Deleting instance files /var/lib/nova/instances/33b19faf-57e1-463b-8b4a-b50479a0ef0f_del
Nov 25 16:31:10 compute-0 nova_compute[254092]: 2025-11-25 16:31:10.029 254096 INFO nova.virt.libvirt.driver [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Deletion of /var/lib/nova/instances/33b19faf-57e1-463b-8b4a-b50479a0ef0f_del complete
Nov 25 16:31:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:31:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:31:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:31:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:31:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:31:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:31:10 compute-0 podman[289660]: 2025-11-25 16:31:10.449407947 +0000 UTC m=+3.766360970 container remove 36405b4451c16918675941ed82c615ef507b8302882c9a2f918c02dcd71f76a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 16:31:10 compute-0 systemd[1]: libpod-conmon-36405b4451c16918675941ed82c615ef507b8302882c9a2f918c02dcd71f76a4.scope: Deactivated successfully.
Nov 25 16:31:10 compute-0 sudo[289556]: pam_unix(sudo:session): session closed for user root
Nov 25 16:31:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:31:10 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:31:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:31:10 compute-0 nova_compute[254092]: 2025-11-25 16:31:10.625 254096 INFO nova.compute.manager [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Took 13.77 seconds to destroy the instance on the hypervisor.
Nov 25 16:31:10 compute-0 nova_compute[254092]: 2025-11-25 16:31:10.626 254096 DEBUG oslo.service.loopingcall [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:31:10 compute-0 nova_compute[254092]: 2025-11-25 16:31:10.627 254096 DEBUG nova.compute.manager [-] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:31:10 compute-0 nova_compute[254092]: 2025-11-25 16:31:10.627 254096 DEBUG nova.network.neutron [-] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:31:10 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:31:10 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 9c5c3c91-d17d-4353-b547-5a28208b06fe does not exist
Nov 25 16:31:10 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 92ef3fdd-e8d5-45d1-af6a-dee581d0088f does not exist
Nov 25 16:31:10 compute-0 sudo[289723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:31:10 compute-0 sudo[289723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:31:10 compute-0 sudo[289723]: pam_unix(sudo:session): session closed for user root
Nov 25 16:31:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1308: 321 pgs: 321 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 853 B/s wr, 94 op/s
Nov 25 16:31:10 compute-0 sudo[289748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 16:31:10 compute-0 sudo[289748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:31:10 compute-0 sudo[289748]: pam_unix(sudo:session): session closed for user root
Nov 25 16:31:11 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:31:11 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:31:11 compute-0 ceph-mon[74985]: pgmap v1308: 321 pgs: 321 active+clean; 88 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 853 B/s wr, 94 op/s
Nov 25 16:31:12 compute-0 nova_compute[254092]: 2025-11-25 16:31:12.589 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:12 compute-0 nova_compute[254092]: 2025-11-25 16:31:12.690 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088257.6891708, 33b19faf-57e1-463b-8b4a-b50479a0ef0f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:31:12 compute-0 nova_compute[254092]: 2025-11-25 16:31:12.690 254096 INFO nova.compute.manager [-] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] VM Stopped (Lifecycle Event)
Nov 25 16:31:12 compute-0 nova_compute[254092]: 2025-11-25 16:31:12.710 254096 DEBUG nova.compute.manager [None req-264876d1-fff8-494b-bb36-1119b214a046 - - - - - -] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:31:12 compute-0 nova_compute[254092]: 2025-11-25 16:31:12.721 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1309: 321 pgs: 321 active+clean; 91 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 297 KiB/s wr, 62 op/s
Nov 25 16:31:13 compute-0 ceph-mon[74985]: pgmap v1309: 321 pgs: 321 active+clean; 91 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 297 KiB/s wr, 62 op/s
Nov 25 16:31:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:13.603 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:31:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:13.604 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:31:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:13.604 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:31:13 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:31:14 compute-0 ovn_controller[153477]: 2025-11-25T16:31:14Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:83:61:d4 10.100.0.8
Nov 25 16:31:14 compute-0 ovn_controller[153477]: 2025-11-25T16:31:14Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:83:61:d4 10.100.0.8
Nov 25 16:31:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1310: 321 pgs: 321 active+clean; 101 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 138 KiB/s rd, 1.3 MiB/s wr, 51 op/s
Nov 25 16:31:15 compute-0 ceph-mon[74985]: pgmap v1310: 321 pgs: 321 active+clean; 101 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 138 KiB/s rd, 1.3 MiB/s wr, 51 op/s
Nov 25 16:31:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1311: 321 pgs: 321 active+clean; 113 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 214 KiB/s rd, 2.0 MiB/s wr, 54 op/s
Nov 25 16:31:17 compute-0 ceph-mon[74985]: pgmap v1311: 321 pgs: 321 active+clean; 113 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 214 KiB/s rd, 2.0 MiB/s wr, 54 op/s
Nov 25 16:31:17 compute-0 nova_compute[254092]: 2025-11-25 16:31:17.590 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:17 compute-0 nova_compute[254092]: 2025-11-25 16:31:17.722 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:18 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:31:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1312: 321 pgs: 321 active+clean; 113 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 213 KiB/s rd, 2.0 MiB/s wr, 52 op/s
Nov 25 16:31:19 compute-0 nova_compute[254092]: 2025-11-25 16:31:19.233 254096 DEBUG nova.compute.manager [req-e3806401-dafe-4a5e-ab84-a629be421b53 req-88947cb4-c261-4380-9c3b-503685de79a6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received event network-changed-19d5425c-f0c6-4c68-b8a6-cb1c6357d249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:31:19 compute-0 nova_compute[254092]: 2025-11-25 16:31:19.233 254096 DEBUG nova.compute.manager [req-e3806401-dafe-4a5e-ab84-a629be421b53 req-88947cb4-c261-4380-9c3b-503685de79a6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Refreshing instance network info cache due to event network-changed-19d5425c-f0c6-4c68-b8a6-cb1c6357d249. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:31:19 compute-0 nova_compute[254092]: 2025-11-25 16:31:19.234 254096 DEBUG oslo_concurrency.lockutils [req-e3806401-dafe-4a5e-ab84-a629be421b53 req-88947cb4-c261-4380-9c3b-503685de79a6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:31:19 compute-0 nova_compute[254092]: 2025-11-25 16:31:19.235 254096 DEBUG oslo_concurrency.lockutils [req-e3806401-dafe-4a5e-ab84-a629be421b53 req-88947cb4-c261-4380-9c3b-503685de79a6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:31:19 compute-0 nova_compute[254092]: 2025-11-25 16:31:19.235 254096 DEBUG nova.network.neutron [req-e3806401-dafe-4a5e-ab84-a629be421b53 req-88947cb4-c261-4380-9c3b-503685de79a6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Refreshing network info cache for port 19d5425c-f0c6-4c68-b8a6-cb1c6357d249 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:31:19 compute-0 ceph-mon[74985]: pgmap v1312: 321 pgs: 321 active+clean; 113 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 213 KiB/s rd, 2.0 MiB/s wr, 52 op/s
Nov 25 16:31:19 compute-0 nova_compute[254092]: 2025-11-25 16:31:19.301 254096 DEBUG nova.network.neutron [-] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:31:19 compute-0 nova_compute[254092]: 2025-11-25 16:31:19.484 254096 INFO nova.compute.manager [-] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Took 8.86 seconds to deallocate network for instance.
Nov 25 16:31:19 compute-0 nova_compute[254092]: 2025-11-25 16:31:19.630 254096 DEBUG oslo_concurrency.lockutils [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:31:19 compute-0 nova_compute[254092]: 2025-11-25 16:31:19.631 254096 DEBUG oslo_concurrency.lockutils [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:31:19 compute-0 nova_compute[254092]: 2025-11-25 16:31:19.812 254096 DEBUG oslo_concurrency.processutils [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:31:19 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:19.912 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:31:19 compute-0 nova_compute[254092]: 2025-11-25 16:31:19.913 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:19 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:19.914 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 16:31:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:31:20 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/43086363' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:31:20 compute-0 nova_compute[254092]: 2025-11-25 16:31:20.243 254096 DEBUG oslo_concurrency.processutils [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:31:20 compute-0 nova_compute[254092]: 2025-11-25 16:31:20.252 254096 DEBUG nova.compute.provider_tree [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:31:20 compute-0 nova_compute[254092]: 2025-11-25 16:31:20.267 254096 DEBUG nova.scheduler.client.report [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:31:20 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/43086363' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:31:20 compute-0 nova_compute[254092]: 2025-11-25 16:31:20.412 254096 DEBUG oslo_concurrency.lockutils [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:31:20 compute-0 nova_compute[254092]: 2025-11-25 16:31:20.667 254096 INFO nova.scheduler.client.report [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Deleted allocations for instance 33b19faf-57e1-463b-8b4a-b50479a0ef0f
Nov 25 16:31:20 compute-0 nova_compute[254092]: 2025-11-25 16:31:20.825 254096 DEBUG oslo_concurrency.lockutils [None req-672394fd-e75a-44f1-a7c5-31f88790e3f7 fb8bd106d2264d719b9ebd9f83f19c5a f2f26334db2f4e2cadc5664efd73eb67 - - default default] Lock "33b19faf-57e1-463b-8b4a-b50479a0ef0f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 23.975s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:31:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1313: 321 pgs: 321 active+clean; 120 MiB data, 408 MiB used, 60 GiB / 60 GiB avail; 228 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 25 16:31:21 compute-0 nova_compute[254092]: 2025-11-25 16:31:21.447 254096 DEBUG nova.compute.manager [req-79fe46d2-5686-43e5-8d46-50c085c96d76 req-720b6f8d-9aca-42ec-91e6-0432a0b61664 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 33b19faf-57e1-463b-8b4a-b50479a0ef0f] Received event network-vif-deleted-fb46dd7a-52d4-44cb-b99e-81d7d653885c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:31:21 compute-0 ceph-mon[74985]: pgmap v1313: 321 pgs: 321 active+clean; 120 MiB data, 408 MiB used, 60 GiB / 60 GiB avail; 228 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 25 16:31:21 compute-0 nova_compute[254092]: 2025-11-25 16:31:21.647 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:21.916 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:31:22 compute-0 nova_compute[254092]: 2025-11-25 16:31:22.068 254096 DEBUG nova.network.neutron [req-e3806401-dafe-4a5e-ab84-a629be421b53 req-88947cb4-c261-4380-9c3b-503685de79a6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Updated VIF entry in instance network info cache for port 19d5425c-f0c6-4c68-b8a6-cb1c6357d249. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:31:22 compute-0 nova_compute[254092]: 2025-11-25 16:31:22.069 254096 DEBUG nova.network.neutron [req-e3806401-dafe-4a5e-ab84-a629be421b53 req-88947cb4-c261-4380-9c3b-503685de79a6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Updating instance_info_cache with network_info: [{"id": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "address": "fa:16:3e:83:61:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19d5425c-f0", "ovs_interfaceid": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:31:22 compute-0 nova_compute[254092]: 2025-11-25 16:31:22.162 254096 DEBUG oslo_concurrency.lockutils [req-e3806401-dafe-4a5e-ab84-a629be421b53 req-88947cb4-c261-4380-9c3b-503685de79a6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:31:22 compute-0 ovn_controller[153477]: 2025-11-25T16:31:22Z|00160|binding|INFO|Releasing lport af5d2618-f326-4aaf-9395-f47548ac108b from this chassis (sb_readonly=0)
Nov 25 16:31:22 compute-0 nova_compute[254092]: 2025-11-25 16:31:22.573 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:22 compute-0 nova_compute[254092]: 2025-11-25 16:31:22.592 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:22 compute-0 nova_compute[254092]: 2025-11-25 16:31:22.723 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:22 compute-0 nova_compute[254092]: 2025-11-25 16:31:22.850 254096 DEBUG oslo_concurrency.lockutils [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "interface-07003872-27e7-4fd9-80cf-a34257d5aa97-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:31:22 compute-0 nova_compute[254092]: 2025-11-25 16:31:22.851 254096 DEBUG oslo_concurrency.lockutils [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "interface-07003872-27e7-4fd9-80cf-a34257d5aa97-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:31:22 compute-0 nova_compute[254092]: 2025-11-25 16:31:22.851 254096 DEBUG nova.objects.instance [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'flavor' on Instance uuid 07003872-27e7-4fd9-80cf-a34257d5aa97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:31:22 compute-0 nova_compute[254092]: 2025-11-25 16:31:22.883 254096 DEBUG nova.objects.instance [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'pci_requests' on Instance uuid 07003872-27e7-4fd9-80cf-a34257d5aa97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:31:22 compute-0 nova_compute[254092]: 2025-11-25 16:31:22.896 254096 DEBUG nova.network.neutron [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:31:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1314: 321 pgs: 321 active+clean; 121 MiB data, 408 MiB used, 60 GiB / 60 GiB avail; 263 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Nov 25 16:31:23 compute-0 ceph-mon[74985]: pgmap v1314: 321 pgs: 321 active+clean; 121 MiB data, 408 MiB used, 60 GiB / 60 GiB avail; 263 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Nov 25 16:31:23 compute-0 nova_compute[254092]: 2025-11-25 16:31:23.446 254096 DEBUG nova.policy [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c2f73d928f424db49a62e7dff2b1ce14', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '36fd407edf16424e96041f57fccb9e2b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:31:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:31:24 compute-0 podman[289797]: 2025-11-25 16:31:24.636510676 +0000 UTC m=+0.047774921 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent)
Nov 25 16:31:24 compute-0 podman[289796]: 2025-11-25 16:31:24.642683923 +0000 UTC m=+0.054121642 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd)
Nov 25 16:31:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1315: 321 pgs: 321 active+clean; 121 MiB data, 408 MiB used, 60 GiB / 60 GiB avail; 252 KiB/s rd, 1.9 MiB/s wr, 55 op/s
Nov 25 16:31:25 compute-0 ceph-mon[74985]: pgmap v1315: 321 pgs: 321 active+clean; 121 MiB data, 408 MiB used, 60 GiB / 60 GiB avail; 252 KiB/s rd, 1.9 MiB/s wr, 55 op/s
Nov 25 16:31:25 compute-0 nova_compute[254092]: 2025-11-25 16:31:25.631 254096 DEBUG nova.network.neutron [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Successfully created port: 63499eed-d192-4aec-8ab6-1c3384834ed4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:31:25 compute-0 podman[289835]: 2025-11-25 16:31:25.65996334 +0000 UTC m=+0.084728146 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:31:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1316: 321 pgs: 321 active+clean; 121 MiB data, 408 MiB used, 60 GiB / 60 GiB avail; 135 KiB/s rd, 819 KiB/s wr, 29 op/s
Nov 25 16:31:27 compute-0 nova_compute[254092]: 2025-11-25 16:31:27.594 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:27 compute-0 nova_compute[254092]: 2025-11-25 16:31:27.725 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:27 compute-0 nova_compute[254092]: 2025-11-25 16:31:27.882 254096 DEBUG oslo_concurrency.lockutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "e1c7a84b-16df-49a8-83a7-a97bd47e0d43" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:31:27 compute-0 nova_compute[254092]: 2025-11-25 16:31:27.883 254096 DEBUG oslo_concurrency.lockutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "e1c7a84b-16df-49a8-83a7-a97bd47e0d43" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:31:27 compute-0 nova_compute[254092]: 2025-11-25 16:31:27.932 254096 DEBUG nova.compute.manager [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:31:28 compute-0 nova_compute[254092]: 2025-11-25 16:31:28.246 254096 DEBUG oslo_concurrency.lockutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:31:28 compute-0 nova_compute[254092]: 2025-11-25 16:31:28.246 254096 DEBUG oslo_concurrency.lockutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:31:28 compute-0 nova_compute[254092]: 2025-11-25 16:31:28.253 254096 DEBUG nova.virt.hardware [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:31:28 compute-0 nova_compute[254092]: 2025-11-25 16:31:28.253 254096 INFO nova.compute.claims [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:31:28 compute-0 nova_compute[254092]: 2025-11-25 16:31:28.292 254096 DEBUG nova.network.neutron [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Successfully updated port: 63499eed-d192-4aec-8ab6-1c3384834ed4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:31:28 compute-0 nova_compute[254092]: 2025-11-25 16:31:28.397 254096 DEBUG oslo_concurrency.lockutils [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:31:28 compute-0 nova_compute[254092]: 2025-11-25 16:31:28.397 254096 DEBUG oslo_concurrency.lockutils [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquired lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:31:28 compute-0 nova_compute[254092]: 2025-11-25 16:31:28.398 254096 DEBUG nova.network.neutron [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:31:28 compute-0 ceph-mon[74985]: pgmap v1316: 321 pgs: 321 active+clean; 121 MiB data, 408 MiB used, 60 GiB / 60 GiB avail; 135 KiB/s rd, 819 KiB/s wr, 29 op/s
Nov 25 16:31:28 compute-0 nova_compute[254092]: 2025-11-25 16:31:28.435 254096 DEBUG nova.compute.manager [req-a98dca08-6fb9-4e16-aafc-2eab65eac1ba req-dd5a5e8a-56c7-45be-a102-00495f62e856 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received event network-changed-63499eed-d192-4aec-8ab6-1c3384834ed4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:31:28 compute-0 nova_compute[254092]: 2025-11-25 16:31:28.435 254096 DEBUG nova.compute.manager [req-a98dca08-6fb9-4e16-aafc-2eab65eac1ba req-dd5a5e8a-56c7-45be-a102-00495f62e856 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Refreshing instance network info cache due to event network-changed-63499eed-d192-4aec-8ab6-1c3384834ed4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:31:28 compute-0 nova_compute[254092]: 2025-11-25 16:31:28.436 254096 DEBUG oslo_concurrency.lockutils [req-a98dca08-6fb9-4e16-aafc-2eab65eac1ba req-dd5a5e8a-56c7-45be-a102-00495f62e856 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:31:28 compute-0 nova_compute[254092]: 2025-11-25 16:31:28.572 254096 DEBUG oslo_concurrency.processutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:31:28 compute-0 nova_compute[254092]: 2025-11-25 16:31:28.670 254096 WARNING nova.network.neutron [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d already exists in list: networks containing: ['52e7d5b9-0570-4e5c-b3da-9dfcb924b83d']. ignoring it
Nov 25 16:31:28 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:31:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:31:29 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3364636560' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:31:29 compute-0 nova_compute[254092]: 2025-11-25 16:31:29.051 254096 DEBUG oslo_concurrency.processutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:31:29 compute-0 nova_compute[254092]: 2025-11-25 16:31:29.057 254096 DEBUG nova.compute.provider_tree [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:31:29 compute-0 nova_compute[254092]: 2025-11-25 16:31:29.075 254096 DEBUG nova.scheduler.client.report [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:31:29 compute-0 nova_compute[254092]: 2025-11-25 16:31:29.100 254096 DEBUG oslo_concurrency.lockutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.854s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:31:29 compute-0 nova_compute[254092]: 2025-11-25 16:31:29.101 254096 DEBUG nova.compute.manager [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:31:29 compute-0 nova_compute[254092]: 2025-11-25 16:31:29.156 254096 DEBUG nova.compute.manager [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:31:29 compute-0 nova_compute[254092]: 2025-11-25 16:31:29.157 254096 DEBUG nova.network.neutron [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:31:29 compute-0 nova_compute[254092]: 2025-11-25 16:31:29.182 254096 INFO nova.virt.libvirt.driver [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:31:29 compute-0 nova_compute[254092]: 2025-11-25 16:31:29.203 254096 DEBUG nova.compute.manager [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:31:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1317: 321 pgs: 321 active+clean; 121 MiB data, 408 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 101 KiB/s wr, 13 op/s
Nov 25 16:31:29 compute-0 nova_compute[254092]: 2025-11-25 16:31:29.297 254096 DEBUG nova.compute.manager [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:31:29 compute-0 nova_compute[254092]: 2025-11-25 16:31:29.298 254096 DEBUG nova.virt.libvirt.driver [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:31:29 compute-0 nova_compute[254092]: 2025-11-25 16:31:29.299 254096 INFO nova.virt.libvirt.driver [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Creating image(s)
Nov 25 16:31:29 compute-0 nova_compute[254092]: 2025-11-25 16:31:29.318 254096 DEBUG nova.storage.rbd_utils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] rbd image e1c7a84b-16df-49a8-83a7-a97bd47e0d43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:31:29 compute-0 nova_compute[254092]: 2025-11-25 16:31:29.341 254096 DEBUG nova.storage.rbd_utils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] rbd image e1c7a84b-16df-49a8-83a7-a97bd47e0d43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:31:29 compute-0 nova_compute[254092]: 2025-11-25 16:31:29.362 254096 DEBUG nova.storage.rbd_utils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] rbd image e1c7a84b-16df-49a8-83a7-a97bd47e0d43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:31:29 compute-0 nova_compute[254092]: 2025-11-25 16:31:29.366 254096 DEBUG oslo_concurrency.processutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:31:29 compute-0 nova_compute[254092]: 2025-11-25 16:31:29.401 254096 DEBUG nova.policy [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '87058665de814ae0a51a12ff02b0d9aa', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ed571eebde434695bae813d7bb21f4c3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:31:29 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3364636560' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:31:29 compute-0 nova_compute[254092]: 2025-11-25 16:31:29.428 254096 DEBUG oslo_concurrency.processutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:31:29 compute-0 nova_compute[254092]: 2025-11-25 16:31:29.428 254096 DEBUG oslo_concurrency.lockutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:31:29 compute-0 nova_compute[254092]: 2025-11-25 16:31:29.429 254096 DEBUG oslo_concurrency.lockutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:31:29 compute-0 nova_compute[254092]: 2025-11-25 16:31:29.429 254096 DEBUG oslo_concurrency.lockutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:31:29 compute-0 nova_compute[254092]: 2025-11-25 16:31:29.497 254096 DEBUG nova.storage.rbd_utils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] rbd image e1c7a84b-16df-49a8-83a7-a97bd47e0d43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:31:29 compute-0 nova_compute[254092]: 2025-11-25 16:31:29.500 254096 DEBUG oslo_concurrency.processutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 e1c7a84b-16df-49a8-83a7-a97bd47e0d43_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:31:30 compute-0 ovn_controller[153477]: 2025-11-25T16:31:30Z|00161|binding|INFO|Releasing lport af5d2618-f326-4aaf-9395-f47548ac108b from this chassis (sb_readonly=0)
Nov 25 16:31:30 compute-0 nova_compute[254092]: 2025-11-25 16:31:30.179 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:30 compute-0 nova_compute[254092]: 2025-11-25 16:31:30.415 254096 DEBUG nova.network.neutron [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Successfully created port: bb9f2265-a40c-44da-bb0e-dc52c5d0873b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:31:30 compute-0 ceph-mon[74985]: pgmap v1317: 321 pgs: 321 active+clean; 121 MiB data, 408 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 101 KiB/s wr, 13 op/s
Nov 25 16:31:31 compute-0 nova_compute[254092]: 2025-11-25 16:31:31.101 254096 DEBUG oslo_concurrency.processutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 e1c7a84b-16df-49a8-83a7-a97bd47e0d43_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:31:31 compute-0 nova_compute[254092]: 2025-11-25 16:31:31.165 254096 DEBUG nova.storage.rbd_utils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] resizing rbd image e1c7a84b-16df-49a8-83a7-a97bd47e0d43_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:31:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1318: 321 pgs: 321 active+clean; 126 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 238 KiB/s wr, 17 op/s
Nov 25 16:31:31 compute-0 nova_compute[254092]: 2025-11-25 16:31:31.734 254096 DEBUG nova.network.neutron [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Successfully updated port: bb9f2265-a40c-44da-bb0e-dc52c5d0873b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:31:31 compute-0 nova_compute[254092]: 2025-11-25 16:31:31.784 254096 DEBUG oslo_concurrency.lockutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "refresh_cache-e1c7a84b-16df-49a8-83a7-a97bd47e0d43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:31:31 compute-0 nova_compute[254092]: 2025-11-25 16:31:31.784 254096 DEBUG oslo_concurrency.lockutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquired lock "refresh_cache-e1c7a84b-16df-49a8-83a7-a97bd47e0d43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:31:31 compute-0 nova_compute[254092]: 2025-11-25 16:31:31.784 254096 DEBUG nova.network.neutron [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:31:31 compute-0 nova_compute[254092]: 2025-11-25 16:31:31.791 254096 DEBUG nova.objects.instance [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lazy-loading 'migration_context' on Instance uuid e1c7a84b-16df-49a8-83a7-a97bd47e0d43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:31:31 compute-0 nova_compute[254092]: 2025-11-25 16:31:31.806 254096 DEBUG nova.virt.libvirt.driver [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:31:31 compute-0 nova_compute[254092]: 2025-11-25 16:31:31.807 254096 DEBUG nova.virt.libvirt.driver [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Ensure instance console log exists: /var/lib/nova/instances/e1c7a84b-16df-49a8-83a7-a97bd47e0d43/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:31:31 compute-0 nova_compute[254092]: 2025-11-25 16:31:31.807 254096 DEBUG oslo_concurrency.lockutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:31:31 compute-0 nova_compute[254092]: 2025-11-25 16:31:31.807 254096 DEBUG oslo_concurrency.lockutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:31:31 compute-0 nova_compute[254092]: 2025-11-25 16:31:31.808 254096 DEBUG oslo_concurrency.lockutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:31:31 compute-0 nova_compute[254092]: 2025-11-25 16:31:31.809 254096 DEBUG nova.network.neutron [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Updating instance_info_cache with network_info: [{"id": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "address": "fa:16:3e:83:61:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19d5425c-f0", "ovs_interfaceid": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "63499eed-d192-4aec-8ab6-1c3384834ed4", "address": "fa:16:3e:9c:56:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63499eed-d1", "ovs_interfaceid": "63499eed-d192-4aec-8ab6-1c3384834ed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:31:31 compute-0 nova_compute[254092]: 2025-11-25 16:31:31.846 254096 DEBUG oslo_concurrency.lockutils [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Releasing lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:31:31 compute-0 nova_compute[254092]: 2025-11-25 16:31:31.847 254096 DEBUG oslo_concurrency.lockutils [req-a98dca08-6fb9-4e16-aafc-2eab65eac1ba req-dd5a5e8a-56c7-45be-a102-00495f62e856 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:31:31 compute-0 nova_compute[254092]: 2025-11-25 16:31:31.847 254096 DEBUG nova.network.neutron [req-a98dca08-6fb9-4e16-aafc-2eab65eac1ba req-dd5a5e8a-56c7-45be-a102-00495f62e856 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Refreshing network info cache for port 63499eed-d192-4aec-8ab6-1c3384834ed4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:31:31 compute-0 nova_compute[254092]: 2025-11-25 16:31:31.850 254096 DEBUG nova.virt.libvirt.vif [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:30:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1158248018',display_name='tempest-AttachInterfacesTestJSON-server-1158248018',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1158248018',id=27,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJmM+hyi3seBAE3K46sBI2YUlQc/OgmOr+XAosDXSMWbsQMM7MSa9VCyL6NIiHdrd/Mi+L5291JhLvNY+nEX94KPtdKY+CwKghfhbYtUXWreEE2o3A99BKI+adAxYStDFw==',key_name='tempest-keypair-134592934',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:30:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-83v8y104',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:30:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=07003872-27e7-4fd9-80cf-a34257d5aa97,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "63499eed-d192-4aec-8ab6-1c3384834ed4", "address": "fa:16:3e:9c:56:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63499eed-d1", "ovs_interfaceid": "63499eed-d192-4aec-8ab6-1c3384834ed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:31:31 compute-0 nova_compute[254092]: 2025-11-25 16:31:31.850 254096 DEBUG nova.network.os_vif_util [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "63499eed-d192-4aec-8ab6-1c3384834ed4", "address": "fa:16:3e:9c:56:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63499eed-d1", "ovs_interfaceid": "63499eed-d192-4aec-8ab6-1c3384834ed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:31:31 compute-0 nova_compute[254092]: 2025-11-25 16:31:31.851 254096 DEBUG nova.network.os_vif_util [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:56:d4,bridge_name='br-int',has_traffic_filtering=True,id=63499eed-d192-4aec-8ab6-1c3384834ed4,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63499eed-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:31:31 compute-0 nova_compute[254092]: 2025-11-25 16:31:31.852 254096 DEBUG os_vif [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:56:d4,bridge_name='br-int',has_traffic_filtering=True,id=63499eed-d192-4aec-8ab6-1c3384834ed4,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63499eed-d1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:31:31 compute-0 nova_compute[254092]: 2025-11-25 16:31:31.853 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:31 compute-0 nova_compute[254092]: 2025-11-25 16:31:31.853 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:31:31 compute-0 nova_compute[254092]: 2025-11-25 16:31:31.854 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:31:31 compute-0 nova_compute[254092]: 2025-11-25 16:31:31.855 254096 DEBUG nova.compute.manager [req-b19ee9a2-3fbf-44eb-b8d3-0e5f70d03d38 req-0a2912e4-48d8-4b61-8983-a5d225b4ab18 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Received event network-changed-bb9f2265-a40c-44da-bb0e-dc52c5d0873b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:31:31 compute-0 nova_compute[254092]: 2025-11-25 16:31:31.856 254096 DEBUG nova.compute.manager [req-b19ee9a2-3fbf-44eb-b8d3-0e5f70d03d38 req-0a2912e4-48d8-4b61-8983-a5d225b4ab18 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Refreshing instance network info cache due to event network-changed-bb9f2265-a40c-44da-bb0e-dc52c5d0873b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:31:31 compute-0 nova_compute[254092]: 2025-11-25 16:31:31.856 254096 DEBUG oslo_concurrency.lockutils [req-b19ee9a2-3fbf-44eb-b8d3-0e5f70d03d38 req-0a2912e4-48d8-4b61-8983-a5d225b4ab18 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-e1c7a84b-16df-49a8-83a7-a97bd47e0d43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:31:31 compute-0 nova_compute[254092]: 2025-11-25 16:31:31.859 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:31 compute-0 nova_compute[254092]: 2025-11-25 16:31:31.859 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap63499eed-d1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:31:31 compute-0 nova_compute[254092]: 2025-11-25 16:31:31.860 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap63499eed-d1, col_values=(('external_ids', {'iface-id': '63499eed-d192-4aec-8ab6-1c3384834ed4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9c:56:d4', 'vm-uuid': '07003872-27e7-4fd9-80cf-a34257d5aa97'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:31:31 compute-0 NetworkManager[48891]: <info>  [1764088291.8631] manager: (tap63499eed-d1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/80)
Nov 25 16:31:31 compute-0 nova_compute[254092]: 2025-11-25 16:31:31.863 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:31 compute-0 nova_compute[254092]: 2025-11-25 16:31:31.866 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:31:31 compute-0 nova_compute[254092]: 2025-11-25 16:31:31.870 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:31 compute-0 nova_compute[254092]: 2025-11-25 16:31:31.871 254096 INFO os_vif [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:56:d4,bridge_name='br-int',has_traffic_filtering=True,id=63499eed-d192-4aec-8ab6-1c3384834ed4,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63499eed-d1')
Nov 25 16:31:31 compute-0 nova_compute[254092]: 2025-11-25 16:31:31.872 254096 DEBUG nova.virt.libvirt.vif [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:30:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1158248018',display_name='tempest-AttachInterfacesTestJSON-server-1158248018',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1158248018',id=27,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJmM+hyi3seBAE3K46sBI2YUlQc/OgmOr+XAosDXSMWbsQMM7MSa9VCyL6NIiHdrd/Mi+L5291JhLvNY+nEX94KPtdKY+CwKghfhbYtUXWreEE2o3A99BKI+adAxYStDFw==',key_name='tempest-keypair-134592934',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:30:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-83v8y104',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:30:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=07003872-27e7-4fd9-80cf-a34257d5aa97,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "63499eed-d192-4aec-8ab6-1c3384834ed4", "address": "fa:16:3e:9c:56:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63499eed-d1", "ovs_interfaceid": "63499eed-d192-4aec-8ab6-1c3384834ed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:31:31 compute-0 nova_compute[254092]: 2025-11-25 16:31:31.872 254096 DEBUG nova.network.os_vif_util [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "63499eed-d192-4aec-8ab6-1c3384834ed4", "address": "fa:16:3e:9c:56:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63499eed-d1", "ovs_interfaceid": "63499eed-d192-4aec-8ab6-1c3384834ed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:31:31 compute-0 nova_compute[254092]: 2025-11-25 16:31:31.873 254096 DEBUG nova.network.os_vif_util [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:56:d4,bridge_name='br-int',has_traffic_filtering=True,id=63499eed-d192-4aec-8ab6-1c3384834ed4,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63499eed-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:31:31 compute-0 nova_compute[254092]: 2025-11-25 16:31:31.877 254096 DEBUG nova.virt.libvirt.guest [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] attach device xml: <interface type="ethernet">
Nov 25 16:31:31 compute-0 nova_compute[254092]:   <mac address="fa:16:3e:9c:56:d4"/>
Nov 25 16:31:31 compute-0 nova_compute[254092]:   <model type="virtio"/>
Nov 25 16:31:31 compute-0 nova_compute[254092]:   <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:31:31 compute-0 nova_compute[254092]:   <mtu size="1442"/>
Nov 25 16:31:31 compute-0 nova_compute[254092]:   <target dev="tap63499eed-d1"/>
Nov 25 16:31:31 compute-0 nova_compute[254092]: </interface>
Nov 25 16:31:31 compute-0 nova_compute[254092]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 25 16:31:31 compute-0 kernel: tap63499eed-d1: entered promiscuous mode
Nov 25 16:31:31 compute-0 NetworkManager[48891]: <info>  [1764088291.8936] manager: (tap63499eed-d1): new Tun device (/org/freedesktop/NetworkManager/Devices/81)
Nov 25 16:31:31 compute-0 ovn_controller[153477]: 2025-11-25T16:31:31Z|00162|binding|INFO|Claiming lport 63499eed-d192-4aec-8ab6-1c3384834ed4 for this chassis.
Nov 25 16:31:31 compute-0 ovn_controller[153477]: 2025-11-25T16:31:31Z|00163|binding|INFO|63499eed-d192-4aec-8ab6-1c3384834ed4: Claiming fa:16:3e:9c:56:d4 10.100.0.12
Nov 25 16:31:31 compute-0 nova_compute[254092]: 2025-11-25 16:31:31.894 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:31 compute-0 ovn_controller[153477]: 2025-11-25T16:31:31Z|00164|binding|INFO|Setting lport 63499eed-d192-4aec-8ab6-1c3384834ed4 ovn-installed in OVS
Nov 25 16:31:31 compute-0 ovn_controller[153477]: 2025-11-25T16:31:31Z|00165|binding|INFO|Setting lport 63499eed-d192-4aec-8ab6-1c3384834ed4 up in Southbound
Nov 25 16:31:31 compute-0 nova_compute[254092]: 2025-11-25 16:31:31.913 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:31.913 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:56:d4 10.100.0.12'], port_security=['fa:16:3e:9c:56:d4 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '07003872-27e7-4fd9-80cf-a34257d5aa97', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36fd407edf16424e96041f57fccb9e2b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4d6ed831-26b5-4c46-a07a-309ffadbe01b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a0fe150-4b32-4abf-ad06-055b2d97facb, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=63499eed-d192-4aec-8ab6-1c3384834ed4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:31:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:31.915 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 63499eed-d192-4aec-8ab6-1c3384834ed4 in datapath 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d bound to our chassis
Nov 25 16:31:31 compute-0 nova_compute[254092]: 2025-11-25 16:31:31.916 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:31.916 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d
Nov 25 16:31:31 compute-0 systemd-udevd[290055]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:31:31 compute-0 NetworkManager[48891]: <info>  [1764088291.9448] device (tap63499eed-d1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:31:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:31.942 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ab7a93eb-79d6-4cb8-9a58-87ccf859fc26]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:31 compute-0 NetworkManager[48891]: <info>  [1764088291.9463] device (tap63499eed-d1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:31:31 compute-0 nova_compute[254092]: 2025-11-25 16:31:31.974 254096 DEBUG nova.virt.libvirt.driver [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:31:31 compute-0 nova_compute[254092]: 2025-11-25 16:31:31.975 254096 DEBUG nova.virt.libvirt.driver [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:31:31 compute-0 nova_compute[254092]: 2025-11-25 16:31:31.975 254096 DEBUG nova.virt.libvirt.driver [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No VIF found with MAC fa:16:3e:83:61:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:31:31 compute-0 nova_compute[254092]: 2025-11-25 16:31:31.975 254096 DEBUG nova.virt.libvirt.driver [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No VIF found with MAC fa:16:3e:9c:56:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:31:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:31.976 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a060741c-9860-43c5-8eb2-c45bad864f87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:31.979 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[955a786b-8715-466f-980a-287381203a94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:32.005 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d56f0450-4c37-4de9-930e-7f64abceba0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:32 compute-0 nova_compute[254092]: 2025-11-25 16:31:32.006 254096 DEBUG nova.network.neutron [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:31:32 compute-0 nova_compute[254092]: 2025-11-25 16:31:32.012 254096 DEBUG nova.virt.libvirt.guest [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:31:32 compute-0 nova_compute[254092]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:31:32 compute-0 nova_compute[254092]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1158248018</nova:name>
Nov 25 16:31:32 compute-0 nova_compute[254092]:   <nova:creationTime>2025-11-25 16:31:32</nova:creationTime>
Nov 25 16:31:32 compute-0 nova_compute[254092]:   <nova:flavor name="m1.nano">
Nov 25 16:31:32 compute-0 nova_compute[254092]:     <nova:memory>128</nova:memory>
Nov 25 16:31:32 compute-0 nova_compute[254092]:     <nova:disk>1</nova:disk>
Nov 25 16:31:32 compute-0 nova_compute[254092]:     <nova:swap>0</nova:swap>
Nov 25 16:31:32 compute-0 nova_compute[254092]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:31:32 compute-0 nova_compute[254092]:     <nova:vcpus>1</nova:vcpus>
Nov 25 16:31:32 compute-0 nova_compute[254092]:   </nova:flavor>
Nov 25 16:31:32 compute-0 nova_compute[254092]:   <nova:owner>
Nov 25 16:31:32 compute-0 nova_compute[254092]:     <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 16:31:32 compute-0 nova_compute[254092]:     <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 16:31:32 compute-0 nova_compute[254092]:   </nova:owner>
Nov 25 16:31:32 compute-0 nova_compute[254092]:   <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:31:32 compute-0 nova_compute[254092]:   <nova:ports>
Nov 25 16:31:32 compute-0 nova_compute[254092]:     <nova:port uuid="19d5425c-f0c6-4c68-b8a6-cb1c6357d249">
Nov 25 16:31:32 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 16:31:32 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:31:32 compute-0 nova_compute[254092]:     <nova:port uuid="63499eed-d192-4aec-8ab6-1c3384834ed4">
Nov 25 16:31:32 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 16:31:32 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:31:32 compute-0 nova_compute[254092]:   </nova:ports>
Nov 25 16:31:32 compute-0 nova_compute[254092]: </nova:instance>
Nov 25 16:31:32 compute-0 nova_compute[254092]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 25 16:31:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:32.023 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5d7732f8-e61c-4164-9c78-bf1537adfc1a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52e7d5b9-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:97:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471309, 'reachable_time': 23086, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290063, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:32 compute-0 nova_compute[254092]: 2025-11-25 16:31:32.043 254096 DEBUG oslo_concurrency.lockutils [None req-42fce279-c4d9-47d4-8c3a-e12ee53ca217 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "interface-07003872-27e7-4fd9-80cf-a34257d5aa97-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 9.192s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:31:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:32.043 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bf79fb63-a25d-4b48-ab75-e516d7396213]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 471320, 'tstamp': 471320}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290064, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 471323, 'tstamp': 471323}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290064, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:32.045 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52e7d5b9-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:31:32 compute-0 nova_compute[254092]: 2025-11-25 16:31:32.047 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:32 compute-0 nova_compute[254092]: 2025-11-25 16:31:32.048 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:32.049 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52e7d5b9-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:31:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:32.049 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:31:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:32.050 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap52e7d5b9-00, col_values=(('external_ids', {'iface-id': 'af5d2618-f326-4aaf-9395-f47548ac108b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:31:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:32.050 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:31:32 compute-0 ovn_controller[153477]: 2025-11-25T16:31:32Z|00166|binding|INFO|Releasing lport af5d2618-f326-4aaf-9395-f47548ac108b from this chassis (sb_readonly=0)
Nov 25 16:31:32 compute-0 nova_compute[254092]: 2025-11-25 16:31:32.604 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:32 compute-0 ceph-mon[74985]: pgmap v1318: 321 pgs: 321 active+clean; 126 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 238 KiB/s wr, 17 op/s
Nov 25 16:31:32 compute-0 nova_compute[254092]: 2025-11-25 16:31:32.968 254096 DEBUG nova.compute.manager [req-8fa4b53d-f034-4a57-b131-1b837f2b5b12 req-881b9b10-0d3e-4ca7-9def-603c8b70f342 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received event network-vif-plugged-63499eed-d192-4aec-8ab6-1c3384834ed4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:31:32 compute-0 nova_compute[254092]: 2025-11-25 16:31:32.969 254096 DEBUG oslo_concurrency.lockutils [req-8fa4b53d-f034-4a57-b131-1b837f2b5b12 req-881b9b10-0d3e-4ca7-9def-603c8b70f342 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:31:32 compute-0 nova_compute[254092]: 2025-11-25 16:31:32.969 254096 DEBUG oslo_concurrency.lockutils [req-8fa4b53d-f034-4a57-b131-1b837f2b5b12 req-881b9b10-0d3e-4ca7-9def-603c8b70f342 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:31:32 compute-0 nova_compute[254092]: 2025-11-25 16:31:32.969 254096 DEBUG oslo_concurrency.lockutils [req-8fa4b53d-f034-4a57-b131-1b837f2b5b12 req-881b9b10-0d3e-4ca7-9def-603c8b70f342 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:31:32 compute-0 nova_compute[254092]: 2025-11-25 16:31:32.970 254096 DEBUG nova.compute.manager [req-8fa4b53d-f034-4a57-b131-1b837f2b5b12 req-881b9b10-0d3e-4ca7-9def-603c8b70f342 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] No waiting events found dispatching network-vif-plugged-63499eed-d192-4aec-8ab6-1c3384834ed4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:31:32 compute-0 nova_compute[254092]: 2025-11-25 16:31:32.970 254096 WARNING nova.compute.manager [req-8fa4b53d-f034-4a57-b131-1b837f2b5b12 req-881b9b10-0d3e-4ca7-9def-603c8b70f342 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received unexpected event network-vif-plugged-63499eed-d192-4aec-8ab6-1c3384834ed4 for instance with vm_state active and task_state None.
Nov 25 16:31:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1319: 321 pgs: 321 active+clean; 126 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 155 KiB/s wr, 7 op/s
Nov 25 16:31:33 compute-0 nova_compute[254092]: 2025-11-25 16:31:33.316 254096 DEBUG nova.network.neutron [req-a98dca08-6fb9-4e16-aafc-2eab65eac1ba req-dd5a5e8a-56c7-45be-a102-00495f62e856 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Updated VIF entry in instance network info cache for port 63499eed-d192-4aec-8ab6-1c3384834ed4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:31:33 compute-0 nova_compute[254092]: 2025-11-25 16:31:33.317 254096 DEBUG nova.network.neutron [req-a98dca08-6fb9-4e16-aafc-2eab65eac1ba req-dd5a5e8a-56c7-45be-a102-00495f62e856 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Updating instance_info_cache with network_info: [{"id": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "address": "fa:16:3e:83:61:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19d5425c-f0", "ovs_interfaceid": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "63499eed-d192-4aec-8ab6-1c3384834ed4", "address": "fa:16:3e:9c:56:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63499eed-d1", "ovs_interfaceid": "63499eed-d192-4aec-8ab6-1c3384834ed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:31:33 compute-0 nova_compute[254092]: 2025-11-25 16:31:33.335 254096 DEBUG oslo_concurrency.lockutils [req-a98dca08-6fb9-4e16-aafc-2eab65eac1ba req-dd5a5e8a-56c7-45be-a102-00495f62e856 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:31:33 compute-0 nova_compute[254092]: 2025-11-25 16:31:33.685 254096 DEBUG oslo_concurrency.lockutils [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "interface-07003872-27e7-4fd9-80cf-a34257d5aa97-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:31:33 compute-0 nova_compute[254092]: 2025-11-25 16:31:33.686 254096 DEBUG oslo_concurrency.lockutils [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "interface-07003872-27e7-4fd9-80cf-a34257d5aa97-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:31:33 compute-0 nova_compute[254092]: 2025-11-25 16:31:33.686 254096 DEBUG nova.objects.instance [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'flavor' on Instance uuid 07003872-27e7-4fd9-80cf-a34257d5aa97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:31:33 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:31:34 compute-0 ovn_controller[153477]: 2025-11-25T16:31:34Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9c:56:d4 10.100.0.12
Nov 25 16:31:34 compute-0 ovn_controller[153477]: 2025-11-25T16:31:34Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9c:56:d4 10.100.0.12
Nov 25 16:31:34 compute-0 nova_compute[254092]: 2025-11-25 16:31:34.192 254096 DEBUG nova.objects.instance [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'pci_requests' on Instance uuid 07003872-27e7-4fd9-80cf-a34257d5aa97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:31:34 compute-0 nova_compute[254092]: 2025-11-25 16:31:34.203 254096 DEBUG nova.network.neutron [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Updating instance_info_cache with network_info: [{"id": "bb9f2265-a40c-44da-bb0e-dc52c5d0873b", "address": "fa:16:3e:79:09:1a", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb9f2265-a4", "ovs_interfaceid": "bb9f2265-a40c-44da-bb0e-dc52c5d0873b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:31:34 compute-0 nova_compute[254092]: 2025-11-25 16:31:34.205 254096 DEBUG nova.network.neutron [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:31:34 compute-0 nova_compute[254092]: 2025-11-25 16:31:34.226 254096 DEBUG oslo_concurrency.lockutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Releasing lock "refresh_cache-e1c7a84b-16df-49a8-83a7-a97bd47e0d43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:31:34 compute-0 nova_compute[254092]: 2025-11-25 16:31:34.227 254096 DEBUG nova.compute.manager [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Instance network_info: |[{"id": "bb9f2265-a40c-44da-bb0e-dc52c5d0873b", "address": "fa:16:3e:79:09:1a", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb9f2265-a4", "ovs_interfaceid": "bb9f2265-a40c-44da-bb0e-dc52c5d0873b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:31:34 compute-0 nova_compute[254092]: 2025-11-25 16:31:34.227 254096 DEBUG oslo_concurrency.lockutils [req-b19ee9a2-3fbf-44eb-b8d3-0e5f70d03d38 req-0a2912e4-48d8-4b61-8983-a5d225b4ab18 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-e1c7a84b-16df-49a8-83a7-a97bd47e0d43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:31:34 compute-0 nova_compute[254092]: 2025-11-25 16:31:34.227 254096 DEBUG nova.network.neutron [req-b19ee9a2-3fbf-44eb-b8d3-0e5f70d03d38 req-0a2912e4-48d8-4b61-8983-a5d225b4ab18 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Refreshing network info cache for port bb9f2265-a40c-44da-bb0e-dc52c5d0873b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:31:34 compute-0 nova_compute[254092]: 2025-11-25 16:31:34.230 254096 DEBUG nova.virt.libvirt.driver [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Start _get_guest_xml network_info=[{"id": "bb9f2265-a40c-44da-bb0e-dc52c5d0873b", "address": "fa:16:3e:79:09:1a", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb9f2265-a4", "ovs_interfaceid": "bb9f2265-a40c-44da-bb0e-dc52c5d0873b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:31:34 compute-0 nova_compute[254092]: 2025-11-25 16:31:34.234 254096 WARNING nova.virt.libvirt.driver [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:31:34 compute-0 nova_compute[254092]: 2025-11-25 16:31:34.239 254096 DEBUG nova.virt.libvirt.host [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:31:34 compute-0 nova_compute[254092]: 2025-11-25 16:31:34.239 254096 DEBUG nova.virt.libvirt.host [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:31:34 compute-0 nova_compute[254092]: 2025-11-25 16:31:34.248 254096 DEBUG nova.virt.libvirt.host [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:31:34 compute-0 nova_compute[254092]: 2025-11-25 16:31:34.249 254096 DEBUG nova.virt.libvirt.host [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:31:34 compute-0 nova_compute[254092]: 2025-11-25 16:31:34.250 254096 DEBUG nova.virt.libvirt.driver [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:31:34 compute-0 nova_compute[254092]: 2025-11-25 16:31:34.250 254096 DEBUG nova.virt.hardware [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:31:34 compute-0 nova_compute[254092]: 2025-11-25 16:31:34.251 254096 DEBUG nova.virt.hardware [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:31:34 compute-0 nova_compute[254092]: 2025-11-25 16:31:34.251 254096 DEBUG nova.virt.hardware [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:31:34 compute-0 nova_compute[254092]: 2025-11-25 16:31:34.251 254096 DEBUG nova.virt.hardware [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:31:34 compute-0 nova_compute[254092]: 2025-11-25 16:31:34.252 254096 DEBUG nova.virt.hardware [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:31:34 compute-0 nova_compute[254092]: 2025-11-25 16:31:34.252 254096 DEBUG nova.virt.hardware [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:31:34 compute-0 nova_compute[254092]: 2025-11-25 16:31:34.253 254096 DEBUG nova.virt.hardware [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:31:34 compute-0 nova_compute[254092]: 2025-11-25 16:31:34.253 254096 DEBUG nova.virt.hardware [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:31:34 compute-0 nova_compute[254092]: 2025-11-25 16:31:34.253 254096 DEBUG nova.virt.hardware [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:31:34 compute-0 nova_compute[254092]: 2025-11-25 16:31:34.254 254096 DEBUG nova.virt.hardware [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:31:34 compute-0 nova_compute[254092]: 2025-11-25 16:31:34.254 254096 DEBUG nova.virt.hardware [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:31:34 compute-0 nova_compute[254092]: 2025-11-25 16:31:34.257 254096 DEBUG oslo_concurrency.processutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:31:34 compute-0 nova_compute[254092]: 2025-11-25 16:31:34.536 254096 DEBUG nova.policy [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c2f73d928f424db49a62e7dff2b1ce14', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '36fd407edf16424e96041f57fccb9e2b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:31:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:31:34 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1767422786' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:31:34 compute-0 nova_compute[254092]: 2025-11-25 16:31:34.688 254096 DEBUG oslo_concurrency.processutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:31:34 compute-0 nova_compute[254092]: 2025-11-25 16:31:34.710 254096 DEBUG nova.storage.rbd_utils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] rbd image e1c7a84b-16df-49a8-83a7-a97bd47e0d43_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:31:34 compute-0 nova_compute[254092]: 2025-11-25 16:31:34.714 254096 DEBUG oslo_concurrency.processutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:31:34 compute-0 ceph-mon[74985]: pgmap v1319: 321 pgs: 321 active+clean; 126 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 155 KiB/s wr, 7 op/s
Nov 25 16:31:34 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1767422786' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:31:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:31:35 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3966684258' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:31:35 compute-0 nova_compute[254092]: 2025-11-25 16:31:35.155 254096 DEBUG oslo_concurrency.processutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:31:35 compute-0 nova_compute[254092]: 2025-11-25 16:31:35.157 254096 DEBUG nova.virt.libvirt.vif [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:31:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1117381742',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1117381742',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1117381742',id=28,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ed571eebde434695bae813d7bb21f4c3',ramdisk_id='',reservation_id='r-f10hzrkc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-964953831',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-964953831-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:31:29Z,user_data=None,user_id='87058665de814ae0a51a12ff02b0d9aa',uuid=e1c7a84b-16df-49a8-83a7-a97bd47e0d43,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bb9f2265-a40c-44da-bb0e-dc52c5d0873b", "address": "fa:16:3e:79:09:1a", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb9f2265-a4", "ovs_interfaceid": "bb9f2265-a40c-44da-bb0e-dc52c5d0873b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:31:35 compute-0 nova_compute[254092]: 2025-11-25 16:31:35.157 254096 DEBUG nova.network.os_vif_util [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Converting VIF {"id": "bb9f2265-a40c-44da-bb0e-dc52c5d0873b", "address": "fa:16:3e:79:09:1a", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb9f2265-a4", "ovs_interfaceid": "bb9f2265-a40c-44da-bb0e-dc52c5d0873b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:31:35 compute-0 nova_compute[254092]: 2025-11-25 16:31:35.158 254096 DEBUG nova.network.os_vif_util [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:79:09:1a,bridge_name='br-int',has_traffic_filtering=True,id=bb9f2265-a40c-44da-bb0e-dc52c5d0873b,network=Network(50e18e22-7850-458c-8d66-5932e0495377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb9f2265-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:31:35 compute-0 nova_compute[254092]: 2025-11-25 16:31:35.159 254096 DEBUG nova.objects.instance [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lazy-loading 'pci_devices' on Instance uuid e1c7a84b-16df-49a8-83a7-a97bd47e0d43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:31:35 compute-0 nova_compute[254092]: 2025-11-25 16:31:35.190 254096 DEBUG nova.virt.libvirt.driver [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:31:35 compute-0 nova_compute[254092]:   <uuid>e1c7a84b-16df-49a8-83a7-a97bd47e0d43</uuid>
Nov 25 16:31:35 compute-0 nova_compute[254092]:   <name>instance-0000001c</name>
Nov 25 16:31:35 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:31:35 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:31:35 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:31:35 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:       <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-1117381742</nova:name>
Nov 25 16:31:35 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:31:34</nova:creationTime>
Nov 25 16:31:35 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:31:35 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:31:35 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:31:35 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:31:35 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:31:35 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:31:35 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:31:35 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:31:35 compute-0 nova_compute[254092]:         <nova:user uuid="87058665de814ae0a51a12ff02b0d9aa">tempest-ImagesOneServerNegativeTestJSON-964953831-project-member</nova:user>
Nov 25 16:31:35 compute-0 nova_compute[254092]:         <nova:project uuid="ed571eebde434695bae813d7bb21f4c3">tempest-ImagesOneServerNegativeTestJSON-964953831</nova:project>
Nov 25 16:31:35 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:31:35 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:31:35 compute-0 nova_compute[254092]:         <nova:port uuid="bb9f2265-a40c-44da-bb0e-dc52c5d0873b">
Nov 25 16:31:35 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:31:35 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:31:35 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:31:35 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:31:35 compute-0 nova_compute[254092]:     <system>
Nov 25 16:31:35 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:31:35 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:31:35 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:31:35 compute-0 nova_compute[254092]:       <entry name="serial">e1c7a84b-16df-49a8-83a7-a97bd47e0d43</entry>
Nov 25 16:31:35 compute-0 nova_compute[254092]:       <entry name="uuid">e1c7a84b-16df-49a8-83a7-a97bd47e0d43</entry>
Nov 25 16:31:35 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     </system>
Nov 25 16:31:35 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:31:35 compute-0 nova_compute[254092]:   <os>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:   </os>
Nov 25 16:31:35 compute-0 nova_compute[254092]:   <features>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:   </features>
Nov 25 16:31:35 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:31:35 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:31:35 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:31:35 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:31:35 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:31:35 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/e1c7a84b-16df-49a8-83a7-a97bd47e0d43_disk">
Nov 25 16:31:35 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:       </source>
Nov 25 16:31:35 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:31:35 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:31:35 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:31:35 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/e1c7a84b-16df-49a8-83a7-a97bd47e0d43_disk.config">
Nov 25 16:31:35 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:       </source>
Nov 25 16:31:35 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:31:35 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:31:35 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:31:35 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:79:09:1a"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:       <target dev="tapbb9f2265-a4"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:31:35 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/e1c7a84b-16df-49a8-83a7-a97bd47e0d43/console.log" append="off"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     <video>
Nov 25 16:31:35 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     </video>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:31:35 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:31:35 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:31:35 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:31:35 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:31:35 compute-0 nova_compute[254092]: </domain>
Nov 25 16:31:35 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:31:35 compute-0 nova_compute[254092]: 2025-11-25 16:31:35.192 254096 DEBUG nova.compute.manager [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Preparing to wait for external event network-vif-plugged-bb9f2265-a40c-44da-bb0e-dc52c5d0873b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:31:35 compute-0 nova_compute[254092]: 2025-11-25 16:31:35.193 254096 DEBUG oslo_concurrency.lockutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "e1c7a84b-16df-49a8-83a7-a97bd47e0d43-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:31:35 compute-0 nova_compute[254092]: 2025-11-25 16:31:35.193 254096 DEBUG oslo_concurrency.lockutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "e1c7a84b-16df-49a8-83a7-a97bd47e0d43-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:31:35 compute-0 nova_compute[254092]: 2025-11-25 16:31:35.194 254096 DEBUG oslo_concurrency.lockutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "e1c7a84b-16df-49a8-83a7-a97bd47e0d43-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:31:35 compute-0 nova_compute[254092]: 2025-11-25 16:31:35.195 254096 DEBUG nova.virt.libvirt.vif [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:31:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1117381742',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1117381742',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1117381742',id=28,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ed571eebde434695bae813d7bb21f4c3',ramdisk_id='',reservation_id='r-f10hzrkc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-964953831',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-964953831-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:31:29Z,user_data=None,user_id='87058665de814ae0a51a12ff02b0d9aa',uuid=e1c7a84b-16df-49a8-83a7-a97bd47e0d43,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bb9f2265-a40c-44da-bb0e-dc52c5d0873b", "address": "fa:16:3e:79:09:1a", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb9f2265-a4", "ovs_interfaceid": "bb9f2265-a40c-44da-bb0e-dc52c5d0873b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:31:35 compute-0 nova_compute[254092]: 2025-11-25 16:31:35.195 254096 DEBUG nova.network.os_vif_util [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Converting VIF {"id": "bb9f2265-a40c-44da-bb0e-dc52c5d0873b", "address": "fa:16:3e:79:09:1a", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb9f2265-a4", "ovs_interfaceid": "bb9f2265-a40c-44da-bb0e-dc52c5d0873b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:31:35 compute-0 nova_compute[254092]: 2025-11-25 16:31:35.196 254096 DEBUG nova.network.os_vif_util [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:79:09:1a,bridge_name='br-int',has_traffic_filtering=True,id=bb9f2265-a40c-44da-bb0e-dc52c5d0873b,network=Network(50e18e22-7850-458c-8d66-5932e0495377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb9f2265-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:31:35 compute-0 nova_compute[254092]: 2025-11-25 16:31:35.197 254096 DEBUG os_vif [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:79:09:1a,bridge_name='br-int',has_traffic_filtering=True,id=bb9f2265-a40c-44da-bb0e-dc52c5d0873b,network=Network(50e18e22-7850-458c-8d66-5932e0495377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb9f2265-a4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:31:35 compute-0 nova_compute[254092]: 2025-11-25 16:31:35.198 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:35 compute-0 nova_compute[254092]: 2025-11-25 16:31:35.198 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:31:35 compute-0 nova_compute[254092]: 2025-11-25 16:31:35.199 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:31:35 compute-0 nova_compute[254092]: 2025-11-25 16:31:35.202 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:35 compute-0 nova_compute[254092]: 2025-11-25 16:31:35.203 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbb9f2265-a4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:31:35 compute-0 nova_compute[254092]: 2025-11-25 16:31:35.203 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbb9f2265-a4, col_values=(('external_ids', {'iface-id': 'bb9f2265-a40c-44da-bb0e-dc52c5d0873b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:79:09:1a', 'vm-uuid': 'e1c7a84b-16df-49a8-83a7-a97bd47e0d43'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:31:35 compute-0 nova_compute[254092]: 2025-11-25 16:31:35.205 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:35 compute-0 NetworkManager[48891]: <info>  [1764088295.2062] manager: (tapbb9f2265-a4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/82)
Nov 25 16:31:35 compute-0 nova_compute[254092]: 2025-11-25 16:31:35.207 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:31:35 compute-0 nova_compute[254092]: 2025-11-25 16:31:35.210 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:35 compute-0 nova_compute[254092]: 2025-11-25 16:31:35.211 254096 INFO os_vif [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:79:09:1a,bridge_name='br-int',has_traffic_filtering=True,id=bb9f2265-a40c-44da-bb0e-dc52c5d0873b,network=Network(50e18e22-7850-458c-8d66-5932e0495377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb9f2265-a4')
Nov 25 16:31:35 compute-0 nova_compute[254092]: 2025-11-25 16:31:35.235 254096 DEBUG nova.compute.manager [req-9654074f-c6a3-40e7-962f-f38f9ae89dcc req-1ce76e83-c6fd-4943-a6e0-e02347f2991d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received event network-vif-plugged-63499eed-d192-4aec-8ab6-1c3384834ed4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:31:35 compute-0 nova_compute[254092]: 2025-11-25 16:31:35.236 254096 DEBUG oslo_concurrency.lockutils [req-9654074f-c6a3-40e7-962f-f38f9ae89dcc req-1ce76e83-c6fd-4943-a6e0-e02347f2991d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:31:35 compute-0 nova_compute[254092]: 2025-11-25 16:31:35.236 254096 DEBUG oslo_concurrency.lockutils [req-9654074f-c6a3-40e7-962f-f38f9ae89dcc req-1ce76e83-c6fd-4943-a6e0-e02347f2991d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:31:35 compute-0 nova_compute[254092]: 2025-11-25 16:31:35.236 254096 DEBUG oslo_concurrency.lockutils [req-9654074f-c6a3-40e7-962f-f38f9ae89dcc req-1ce76e83-c6fd-4943-a6e0-e02347f2991d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:31:35 compute-0 nova_compute[254092]: 2025-11-25 16:31:35.236 254096 DEBUG nova.compute.manager [req-9654074f-c6a3-40e7-962f-f38f9ae89dcc req-1ce76e83-c6fd-4943-a6e0-e02347f2991d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] No waiting events found dispatching network-vif-plugged-63499eed-d192-4aec-8ab6-1c3384834ed4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:31:35 compute-0 nova_compute[254092]: 2025-11-25 16:31:35.237 254096 WARNING nova.compute.manager [req-9654074f-c6a3-40e7-962f-f38f9ae89dcc req-1ce76e83-c6fd-4943-a6e0-e02347f2991d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received unexpected event network-vif-plugged-63499eed-d192-4aec-8ab6-1c3384834ed4 for instance with vm_state active and task_state None.
Nov 25 16:31:35 compute-0 nova_compute[254092]: 2025-11-25 16:31:35.275 254096 DEBUG nova.virt.libvirt.driver [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:31:35 compute-0 nova_compute[254092]: 2025-11-25 16:31:35.276 254096 DEBUG nova.virt.libvirt.driver [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:31:35 compute-0 nova_compute[254092]: 2025-11-25 16:31:35.276 254096 DEBUG nova.virt.libvirt.driver [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] No VIF found with MAC fa:16:3e:79:09:1a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:31:35 compute-0 nova_compute[254092]: 2025-11-25 16:31:35.276 254096 INFO nova.virt.libvirt.driver [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Using config drive
Nov 25 16:31:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1320: 321 pgs: 321 active+clean; 167 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 1.7 MiB/s wr, 26 op/s
Nov 25 16:31:35 compute-0 nova_compute[254092]: 2025-11-25 16:31:35.295 254096 DEBUG nova.storage.rbd_utils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] rbd image e1c7a84b-16df-49a8-83a7-a97bd47e0d43_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:31:35 compute-0 nova_compute[254092]: 2025-11-25 16:31:35.682 254096 DEBUG nova.network.neutron [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Successfully created port: aa0c168b-de51-438f-a68b-f9c78a24ca7c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:31:35 compute-0 nova_compute[254092]: 2025-11-25 16:31:35.827 254096 INFO nova.virt.libvirt.driver [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Creating config drive at /var/lib/nova/instances/e1c7a84b-16df-49a8-83a7-a97bd47e0d43/disk.config
Nov 25 16:31:35 compute-0 nova_compute[254092]: 2025-11-25 16:31:35.832 254096 DEBUG oslo_concurrency.processutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e1c7a84b-16df-49a8-83a7-a97bd47e0d43/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcktj4yfp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:31:35 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3966684258' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:31:35 compute-0 nova_compute[254092]: 2025-11-25 16:31:35.977 254096 DEBUG oslo_concurrency.processutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e1c7a84b-16df-49a8-83a7-a97bd47e0d43/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcktj4yfp" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:31:36 compute-0 nova_compute[254092]: 2025-11-25 16:31:36.006 254096 DEBUG nova.storage.rbd_utils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] rbd image e1c7a84b-16df-49a8-83a7-a97bd47e0d43_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:31:36 compute-0 nova_compute[254092]: 2025-11-25 16:31:36.011 254096 DEBUG oslo_concurrency.processutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e1c7a84b-16df-49a8-83a7-a97bd47e0d43/disk.config e1c7a84b-16df-49a8-83a7-a97bd47e0d43_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:31:36 compute-0 nova_compute[254092]: 2025-11-25 16:31:36.064 254096 DEBUG nova.network.neutron [req-b19ee9a2-3fbf-44eb-b8d3-0e5f70d03d38 req-0a2912e4-48d8-4b61-8983-a5d225b4ab18 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Updated VIF entry in instance network info cache for port bb9f2265-a40c-44da-bb0e-dc52c5d0873b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:31:36 compute-0 nova_compute[254092]: 2025-11-25 16:31:36.066 254096 DEBUG nova.network.neutron [req-b19ee9a2-3fbf-44eb-b8d3-0e5f70d03d38 req-0a2912e4-48d8-4b61-8983-a5d225b4ab18 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Updating instance_info_cache with network_info: [{"id": "bb9f2265-a40c-44da-bb0e-dc52c5d0873b", "address": "fa:16:3e:79:09:1a", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb9f2265-a4", "ovs_interfaceid": "bb9f2265-a40c-44da-bb0e-dc52c5d0873b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:31:36 compute-0 nova_compute[254092]: 2025-11-25 16:31:36.100 254096 DEBUG oslo_concurrency.lockutils [req-b19ee9a2-3fbf-44eb-b8d3-0e5f70d03d38 req-0a2912e4-48d8-4b61-8983-a5d225b4ab18 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-e1c7a84b-16df-49a8-83a7-a97bd47e0d43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:31:36 compute-0 nova_compute[254092]: 2025-11-25 16:31:36.514 254096 DEBUG oslo_concurrency.processutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e1c7a84b-16df-49a8-83a7-a97bd47e0d43/disk.config e1c7a84b-16df-49a8-83a7-a97bd47e0d43_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:31:36 compute-0 nova_compute[254092]: 2025-11-25 16:31:36.515 254096 INFO nova.virt.libvirt.driver [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Deleting local config drive /var/lib/nova/instances/e1c7a84b-16df-49a8-83a7-a97bd47e0d43/disk.config because it was imported into RBD.
Nov 25 16:31:36 compute-0 kernel: tapbb9f2265-a4: entered promiscuous mode
Nov 25 16:31:36 compute-0 NetworkManager[48891]: <info>  [1764088296.5619] manager: (tapbb9f2265-a4): new Tun device (/org/freedesktop/NetworkManager/Devices/83)
Nov 25 16:31:36 compute-0 nova_compute[254092]: 2025-11-25 16:31:36.562 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:36 compute-0 ovn_controller[153477]: 2025-11-25T16:31:36Z|00167|binding|INFO|Claiming lport bb9f2265-a40c-44da-bb0e-dc52c5d0873b for this chassis.
Nov 25 16:31:36 compute-0 ovn_controller[153477]: 2025-11-25T16:31:36Z|00168|binding|INFO|bb9f2265-a40c-44da-bb0e-dc52c5d0873b: Claiming fa:16:3e:79:09:1a 10.100.0.3
Nov 25 16:31:36 compute-0 ovn_controller[153477]: 2025-11-25T16:31:36Z|00169|binding|INFO|Setting lport bb9f2265-a40c-44da-bb0e-dc52c5d0873b ovn-installed in OVS
Nov 25 16:31:36 compute-0 nova_compute[254092]: 2025-11-25 16:31:36.578 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:36 compute-0 nova_compute[254092]: 2025-11-25 16:31:36.580 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:36 compute-0 systemd-udevd[290200]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:31:36 compute-0 systemd-machined[216343]: New machine qemu-32-instance-0000001c.
Nov 25 16:31:36 compute-0 NetworkManager[48891]: <info>  [1764088296.6042] device (tapbb9f2265-a4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:31:36 compute-0 NetworkManager[48891]: <info>  [1764088296.6053] device (tapbb9f2265-a4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:31:36 compute-0 systemd[1]: Started Virtual Machine qemu-32-instance-0000001c.
Nov 25 16:31:36 compute-0 ovn_controller[153477]: 2025-11-25T16:31:36Z|00170|binding|INFO|Setting lport bb9f2265-a40c-44da-bb0e-dc52c5d0873b up in Southbound
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.686 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:79:09:1a 10.100.0.3'], port_security=['fa:16:3e:79:09:1a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'e1c7a84b-16df-49a8-83a7-a97bd47e0d43', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50e18e22-7850-458c-8d66-5932e0495377', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ed571eebde434695bae813d7bb21f4c3', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd93cbc19-cf50-4620-aef2-ea907e30850a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aff6ec99-f8ff-4a75-9004-f758cb9a51ad, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=bb9f2265-a40c-44da-bb0e-dc52c5d0873b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.688 163338 INFO neutron.agent.ovn.metadata.agent [-] Port bb9f2265-a40c-44da-bb0e-dc52c5d0873b in datapath 50e18e22-7850-458c-8d66-5932e0495377 bound to our chassis
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.690 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 50e18e22-7850-458c-8d66-5932e0495377
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.700 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ebef1ef1-3a0e-4dad-aa2e-7902e1f999e6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.701 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap50e18e22-71 in ovnmeta-50e18e22-7850-458c-8d66-5932e0495377 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.702 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap50e18e22-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.702 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a4106c52-6e2a-47f5-947c-fc7a366abf7c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.703 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ce3f528f-5762-44df-b9c2-31a14912720e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.714 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[c8c53ddf-3c6f-493c-a205-3880d5524436]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.727 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[02e96196-c99b-4007-ad9f-88a6e3c1dc8e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.758 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[7522ce99-390b-41da-81fb-41d8f3dcbc34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:36 compute-0 NetworkManager[48891]: <info>  [1764088296.7645] manager: (tap50e18e22-70): new Veth device (/org/freedesktop/NetworkManager/Devices/84)
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.765 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[06836e8d-eec9-4a41-8c56-6eced22d5697]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:36 compute-0 systemd-udevd[290203]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.796 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ed2eb0be-b9d8-4d36-9a03-780a36d420d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.799 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1b8bf835-df91-49f1-8926-d210b8f5de14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:36 compute-0 NetworkManager[48891]: <info>  [1764088296.8181] device (tap50e18e22-70): carrier: link connected
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.822 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[0a5702b7-9aef-4839-a170-148bc59add2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.837 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9f1e09fb-4aa3-46e2-83a9-4df7302ad3a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap50e18e22-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:14:7d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 51], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 475440, 'reachable_time': 38886, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290235, 'error': None, 'target': 'ovnmeta-50e18e22-7850-458c-8d66-5932e0495377', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.853 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[edbfa5a7-f623-49d0-ba4c-ddc9ce4812da]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feda:147d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 475440, 'tstamp': 475440}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290236, 'error': None, 'target': 'ovnmeta-50e18e22-7850-458c-8d66-5932e0495377', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.868 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[02806e5a-a90b-4f2d-b73a-17ac47a0faf3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap50e18e22-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:14:7d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 51], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 475440, 'reachable_time': 38886, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290237, 'error': None, 'target': 'ovnmeta-50e18e22-7850-458c-8d66-5932e0495377', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:36 compute-0 ovn_controller[153477]: 2025-11-25T16:31:36Z|00171|binding|INFO|Releasing lport af5d2618-f326-4aaf-9395-f47548ac108b from this chassis (sb_readonly=0)
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.900 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[de81b308-d495-4c46-9863-27f56be5cfd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:36 compute-0 ceph-mon[74985]: pgmap v1320: 321 pgs: 321 active+clean; 167 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 1.7 MiB/s wr, 26 op/s
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.959 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e23fd83e-35cb-4a93-a8aa-2e6b5a0bdece]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.961 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50e18e22-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.961 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.961 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap50e18e22-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:31:36 compute-0 nova_compute[254092]: 2025-11-25 16:31:36.963 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:36 compute-0 NetworkManager[48891]: <info>  [1764088296.9644] manager: (tap50e18e22-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/85)
Nov 25 16:31:36 compute-0 kernel: tap50e18e22-70: entered promiscuous mode
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.968 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap50e18e22-70, col_values=(('external_ids', {'iface-id': '5b591f76-4d04-4b30-9182-d359be87068c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:31:36 compute-0 nova_compute[254092]: 2025-11-25 16:31:36.968 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:36 compute-0 ovn_controller[153477]: 2025-11-25T16:31:36Z|00172|binding|INFO|Releasing lport 5b591f76-4d04-4b30-9182-d359be87068c from this chassis (sb_readonly=0)
Nov 25 16:31:36 compute-0 nova_compute[254092]: 2025-11-25 16:31:36.984 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.985 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/50e18e22-7850-458c-8d66-5932e0495377.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/50e18e22-7850-458c-8d66-5932e0495377.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.986 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[40fd173d-9995-41c0-8111-d3319fbe20de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.987 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-50e18e22-7850-458c-8d66-5932e0495377
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/50e18e22-7850-458c-8d66-5932e0495377.pid.haproxy
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 50e18e22-7850-458c-8d66-5932e0495377
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:31:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:36.988 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-50e18e22-7850-458c-8d66-5932e0495377', 'env', 'PROCESS_TAG=haproxy-50e18e22-7850-458c-8d66-5932e0495377', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/50e18e22-7850-458c-8d66-5932e0495377.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:31:36 compute-0 nova_compute[254092]: 2025-11-25 16:31:36.988 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:37 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 16:31:37 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 6009 writes, 27K keys, 6009 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 6009 writes, 6009 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1517 writes, 6836 keys, 1517 commit groups, 1.0 writes per commit group, ingest: 9.34 MB, 0.02 MB/s
                                           Interval WAL: 1517 writes, 1517 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     20.2      1.48              0.11        15    0.099       0      0       0.0       0.0
                                             L6      1/0    7.13 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.4     95.8     78.2      1.29              0.29        14    0.092     65K   7795       0.0       0.0
                                            Sum      1/0    7.13 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.4     44.7     47.3      2.77              0.40        29    0.096     65K   7795       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.9     57.4     57.0      0.67              0.12         8    0.084     21K   2586       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     95.8     78.2      1.29              0.29        14    0.092     65K   7795       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     20.8      1.43              0.11        14    0.102       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.1      0.05              0.00         1    0.047       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.029, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.13 GB write, 0.05 MB/s write, 0.12 GB read, 0.05 MB/s read, 2.8 seconds
                                           Interval compaction: 0.04 GB write, 0.06 MB/s write, 0.04 GB read, 0.06 MB/s read, 0.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563cc83831f0#2 capacity: 304.00 MB usage: 13.17 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000145 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(865,12.66 MB,4.16329%) FilterBlock(30,188.05 KB,0.0604077%) IndexBlock(30,338.98 KB,0.108895%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 25 16:31:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1321: 321 pgs: 321 active+clean; 167 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.7 MiB/s wr, 26 op/s
Nov 25 16:31:37 compute-0 nova_compute[254092]: 2025-11-25 16:31:37.369 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088297.368979, e1c7a84b-16df-49a8-83a7-a97bd47e0d43 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:31:37 compute-0 nova_compute[254092]: 2025-11-25 16:31:37.370 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] VM Started (Lifecycle Event)
Nov 25 16:31:37 compute-0 nova_compute[254092]: 2025-11-25 16:31:37.387 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:31:37 compute-0 nova_compute[254092]: 2025-11-25 16:31:37.391 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088297.3713624, e1c7a84b-16df-49a8-83a7-a97bd47e0d43 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:31:37 compute-0 nova_compute[254092]: 2025-11-25 16:31:37.391 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] VM Paused (Lifecycle Event)
Nov 25 16:31:37 compute-0 nova_compute[254092]: 2025-11-25 16:31:37.406 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:31:37 compute-0 nova_compute[254092]: 2025-11-25 16:31:37.409 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:31:37 compute-0 podman[290310]: 2025-11-25 16:31:37.328920905 +0000 UTC m=+0.019634205 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:31:37 compute-0 nova_compute[254092]: 2025-11-25 16:31:37.427 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:31:37 compute-0 podman[290310]: 2025-11-25 16:31:37.474627077 +0000 UTC m=+0.165340357 container create f1f2804b3cfce42ab69f968bcb0ec5b2bb57b5fd1b1f756e19e48a5d5d89ef66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 25 16:31:37 compute-0 nova_compute[254092]: 2025-11-25 16:31:37.600 254096 DEBUG nova.compute.manager [req-e01f430a-ae26-4ba0-9844-47482d007302 req-46410db1-c610-497f-8bc2-9b0a7618d37d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Received event network-vif-plugged-bb9f2265-a40c-44da-bb0e-dc52c5d0873b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:31:37 compute-0 nova_compute[254092]: 2025-11-25 16:31:37.600 254096 DEBUG oslo_concurrency.lockutils [req-e01f430a-ae26-4ba0-9844-47482d007302 req-46410db1-c610-497f-8bc2-9b0a7618d37d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e1c7a84b-16df-49a8-83a7-a97bd47e0d43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:31:37 compute-0 nova_compute[254092]: 2025-11-25 16:31:37.600 254096 DEBUG oslo_concurrency.lockutils [req-e01f430a-ae26-4ba0-9844-47482d007302 req-46410db1-c610-497f-8bc2-9b0a7618d37d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e1c7a84b-16df-49a8-83a7-a97bd47e0d43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:31:37 compute-0 nova_compute[254092]: 2025-11-25 16:31:37.601 254096 DEBUG oslo_concurrency.lockutils [req-e01f430a-ae26-4ba0-9844-47482d007302 req-46410db1-c610-497f-8bc2-9b0a7618d37d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e1c7a84b-16df-49a8-83a7-a97bd47e0d43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:31:37 compute-0 nova_compute[254092]: 2025-11-25 16:31:37.601 254096 DEBUG nova.compute.manager [req-e01f430a-ae26-4ba0-9844-47482d007302 req-46410db1-c610-497f-8bc2-9b0a7618d37d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Processing event network-vif-plugged-bb9f2265-a40c-44da-bb0e-dc52c5d0873b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:31:37 compute-0 nova_compute[254092]: 2025-11-25 16:31:37.602 254096 DEBUG nova.compute.manager [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:31:37 compute-0 nova_compute[254092]: 2025-11-25 16:31:37.605 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088297.6050208, e1c7a84b-16df-49a8-83a7-a97bd47e0d43 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:31:37 compute-0 nova_compute[254092]: 2025-11-25 16:31:37.606 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] VM Resumed (Lifecycle Event)
Nov 25 16:31:37 compute-0 nova_compute[254092]: 2025-11-25 16:31:37.608 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:37 compute-0 nova_compute[254092]: 2025-11-25 16:31:37.612 254096 DEBUG nova.virt.libvirt.driver [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:31:37 compute-0 nova_compute[254092]: 2025-11-25 16:31:37.616 254096 INFO nova.virt.libvirt.driver [-] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Instance spawned successfully.
Nov 25 16:31:37 compute-0 nova_compute[254092]: 2025-11-25 16:31:37.616 254096 DEBUG nova.virt.libvirt.driver [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:31:37 compute-0 nova_compute[254092]: 2025-11-25 16:31:37.632 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:31:37 compute-0 nova_compute[254092]: 2025-11-25 16:31:37.638 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:31:37 compute-0 nova_compute[254092]: 2025-11-25 16:31:37.643 254096 DEBUG nova.virt.libvirt.driver [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:31:37 compute-0 nova_compute[254092]: 2025-11-25 16:31:37.644 254096 DEBUG nova.virt.libvirt.driver [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:31:37 compute-0 nova_compute[254092]: 2025-11-25 16:31:37.644 254096 DEBUG nova.virt.libvirt.driver [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:31:37 compute-0 nova_compute[254092]: 2025-11-25 16:31:37.644 254096 DEBUG nova.virt.libvirt.driver [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:31:37 compute-0 nova_compute[254092]: 2025-11-25 16:31:37.645 254096 DEBUG nova.virt.libvirt.driver [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:31:37 compute-0 nova_compute[254092]: 2025-11-25 16:31:37.645 254096 DEBUG nova.virt.libvirt.driver [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:31:37 compute-0 nova_compute[254092]: 2025-11-25 16:31:37.683 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:31:37 compute-0 systemd[1]: Started libpod-conmon-f1f2804b3cfce42ab69f968bcb0ec5b2bb57b5fd1b1f756e19e48a5d5d89ef66.scope.
Nov 25 16:31:37 compute-0 nova_compute[254092]: 2025-11-25 16:31:37.697 254096 DEBUG nova.network.neutron [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Successfully updated port: aa0c168b-de51-438f-a68b-f9c78a24ca7c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:31:37 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:31:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65a528aaf225b549659531755e9e5982927109e3e368e2369118320e622c687d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:31:37 compute-0 nova_compute[254092]: 2025-11-25 16:31:37.740 254096 DEBUG oslo_concurrency.lockutils [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:31:37 compute-0 nova_compute[254092]: 2025-11-25 16:31:37.741 254096 DEBUG oslo_concurrency.lockutils [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquired lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:31:37 compute-0 nova_compute[254092]: 2025-11-25 16:31:37.742 254096 DEBUG nova.network.neutron [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:31:37 compute-0 podman[290310]: 2025-11-25 16:31:37.748626689 +0000 UTC m=+0.439339999 container init f1f2804b3cfce42ab69f968bcb0ec5b2bb57b5fd1b1f756e19e48a5d5d89ef66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 16:31:37 compute-0 podman[290310]: 2025-11-25 16:31:37.754445197 +0000 UTC m=+0.445158477 container start f1f2804b3cfce42ab69f968bcb0ec5b2bb57b5fd1b1f756e19e48a5d5d89ef66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:31:37 compute-0 nova_compute[254092]: 2025-11-25 16:31:37.765 254096 INFO nova.compute.manager [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Took 8.47 seconds to spawn the instance on the hypervisor.
Nov 25 16:31:37 compute-0 nova_compute[254092]: 2025-11-25 16:31:37.766 254096 DEBUG nova.compute.manager [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:31:37 compute-0 neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377[290326]: [NOTICE]   (290330) : New worker (290332) forked
Nov 25 16:31:37 compute-0 neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377[290326]: [NOTICE]   (290330) : Loading success.
Nov 25 16:31:37 compute-0 nova_compute[254092]: 2025-11-25 16:31:37.882 254096 DEBUG nova.compute.manager [req-46a860b9-139e-4280-bd03-53ea307ad545 req-36093ae3-d9cf-4df3-9d7f-3995d29745b8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received event network-changed-aa0c168b-de51-438f-a68b-f9c78a24ca7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:31:37 compute-0 nova_compute[254092]: 2025-11-25 16:31:37.882 254096 DEBUG nova.compute.manager [req-46a860b9-139e-4280-bd03-53ea307ad545 req-36093ae3-d9cf-4df3-9d7f-3995d29745b8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Refreshing instance network info cache due to event network-changed-aa0c168b-de51-438f-a68b-f9c78a24ca7c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:31:37 compute-0 nova_compute[254092]: 2025-11-25 16:31:37.883 254096 DEBUG oslo_concurrency.lockutils [req-46a860b9-139e-4280-bd03-53ea307ad545 req-36093ae3-d9cf-4df3-9d7f-3995d29745b8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:31:37 compute-0 nova_compute[254092]: 2025-11-25 16:31:37.897 254096 INFO nova.compute.manager [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Took 9.69 seconds to build instance.
Nov 25 16:31:37 compute-0 nova_compute[254092]: 2025-11-25 16:31:37.939 254096 DEBUG oslo_concurrency.lockutils [None req-c02446dc-d572-44d8-8e2f-6f2afad39c21 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "e1c7a84b-16df-49a8-83a7-a97bd47e0d43" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:31:38 compute-0 nova_compute[254092]: 2025-11-25 16:31:38.156 254096 WARNING nova.network.neutron [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d already exists in list: networks containing: ['52e7d5b9-0570-4e5c-b3da-9dfcb924b83d']. ignoring it
Nov 25 16:31:38 compute-0 nova_compute[254092]: 2025-11-25 16:31:38.157 254096 WARNING nova.network.neutron [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d already exists in list: networks containing: ['52e7d5b9-0570-4e5c-b3da-9dfcb924b83d']. ignoring it
Nov 25 16:31:38 compute-0 ceph-mon[74985]: pgmap v1321: 321 pgs: 321 active+clean; 167 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.7 MiB/s wr, 26 op/s
Nov 25 16:31:38 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:31:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1322: 321 pgs: 321 active+clean; 167 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 16:31:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:31:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:31:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:31:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:31:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:31:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:31:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:31:40
Nov 25 16:31:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:31:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:31:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['volumes', 'backups', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control', 'vms', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'images', 'cephfs.cephfs.data']
Nov 25 16:31:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:31:40 compute-0 nova_compute[254092]: 2025-11-25 16:31:40.207 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:31:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:31:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:31:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:31:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:31:40 compute-0 nova_compute[254092]: 2025-11-25 16:31:40.297 254096 DEBUG nova.compute.manager [req-719d236f-20e5-4dda-9c7a-b05defe16933 req-11a71c41-0f10-4e9a-94bb-3407baccd07a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Received event network-vif-plugged-bb9f2265-a40c-44da-bb0e-dc52c5d0873b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:31:40 compute-0 nova_compute[254092]: 2025-11-25 16:31:40.298 254096 DEBUG oslo_concurrency.lockutils [req-719d236f-20e5-4dda-9c7a-b05defe16933 req-11a71c41-0f10-4e9a-94bb-3407baccd07a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e1c7a84b-16df-49a8-83a7-a97bd47e0d43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:31:40 compute-0 nova_compute[254092]: 2025-11-25 16:31:40.298 254096 DEBUG oslo_concurrency.lockutils [req-719d236f-20e5-4dda-9c7a-b05defe16933 req-11a71c41-0f10-4e9a-94bb-3407baccd07a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e1c7a84b-16df-49a8-83a7-a97bd47e0d43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:31:40 compute-0 nova_compute[254092]: 2025-11-25 16:31:40.299 254096 DEBUG oslo_concurrency.lockutils [req-719d236f-20e5-4dda-9c7a-b05defe16933 req-11a71c41-0f10-4e9a-94bb-3407baccd07a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e1c7a84b-16df-49a8-83a7-a97bd47e0d43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:31:40 compute-0 nova_compute[254092]: 2025-11-25 16:31:40.299 254096 DEBUG nova.compute.manager [req-719d236f-20e5-4dda-9c7a-b05defe16933 req-11a71c41-0f10-4e9a-94bb-3407baccd07a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] No waiting events found dispatching network-vif-plugged-bb9f2265-a40c-44da-bb0e-dc52c5d0873b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:31:40 compute-0 nova_compute[254092]: 2025-11-25 16:31:40.299 254096 WARNING nova.compute.manager [req-719d236f-20e5-4dda-9c7a-b05defe16933 req-11a71c41-0f10-4e9a-94bb-3407baccd07a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Received unexpected event network-vif-plugged-bb9f2265-a40c-44da-bb0e-dc52c5d0873b for instance with vm_state active and task_state None.
Nov 25 16:31:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:31:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:31:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:31:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:31:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:31:40 compute-0 ceph-mon[74985]: pgmap v1322: 321 pgs: 321 active+clean; 167 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 16:31:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1323: 321 pgs: 321 active+clean; 167 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 16:31:41 compute-0 nova_compute[254092]: 2025-11-25 16:31:41.761 254096 DEBUG nova.network.neutron [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Updating instance_info_cache with network_info: [{"id": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "address": "fa:16:3e:83:61:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19d5425c-f0", "ovs_interfaceid": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "63499eed-d192-4aec-8ab6-1c3384834ed4", "address": "fa:16:3e:9c:56:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63499eed-d1", "ovs_interfaceid": "63499eed-d192-4aec-8ab6-1c3384834ed4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "address": "fa:16:3e:33:0b:3f", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa0c168b-de", "ovs_interfaceid": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:31:41 compute-0 nova_compute[254092]: 2025-11-25 16:31:41.798 254096 DEBUG oslo_concurrency.lockutils [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Releasing lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:31:41 compute-0 nova_compute[254092]: 2025-11-25 16:31:41.800 254096 DEBUG oslo_concurrency.lockutils [req-46a860b9-139e-4280-bd03-53ea307ad545 req-36093ae3-d9cf-4df3-9d7f-3995d29745b8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:31:41 compute-0 nova_compute[254092]: 2025-11-25 16:31:41.801 254096 DEBUG nova.network.neutron [req-46a860b9-139e-4280-bd03-53ea307ad545 req-36093ae3-d9cf-4df3-9d7f-3995d29745b8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Refreshing network info cache for port aa0c168b-de51-438f-a68b-f9c78a24ca7c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:31:41 compute-0 nova_compute[254092]: 2025-11-25 16:31:41.804 254096 DEBUG nova.virt.libvirt.vif [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:30:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1158248018',display_name='tempest-AttachInterfacesTestJSON-server-1158248018',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1158248018',id=27,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJmM+hyi3seBAE3K46sBI2YUlQc/OgmOr+XAosDXSMWbsQMM7MSa9VCyL6NIiHdrd/Mi+L5291JhLvNY+nEX94KPtdKY+CwKghfhbYtUXWreEE2o3A99BKI+adAxYStDFw==',key_name='tempest-keypair-134592934',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:30:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-83v8y104',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:30:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=07003872-27e7-4fd9-80cf-a34257d5aa97,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "address": "fa:16:3e:33:0b:3f", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa0c168b-de", "ovs_interfaceid": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:31:41 compute-0 nova_compute[254092]: 2025-11-25 16:31:41.805 254096 DEBUG nova.network.os_vif_util [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "address": "fa:16:3e:33:0b:3f", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa0c168b-de", "ovs_interfaceid": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:31:41 compute-0 nova_compute[254092]: 2025-11-25 16:31:41.806 254096 DEBUG nova.network.os_vif_util [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:0b:3f,bridge_name='br-int',has_traffic_filtering=True,id=aa0c168b-de51-438f-a68b-f9c78a24ca7c,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa0c168b-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:31:41 compute-0 nova_compute[254092]: 2025-11-25 16:31:41.806 254096 DEBUG os_vif [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:0b:3f,bridge_name='br-int',has_traffic_filtering=True,id=aa0c168b-de51-438f-a68b-f9c78a24ca7c,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa0c168b-de') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:31:41 compute-0 nova_compute[254092]: 2025-11-25 16:31:41.807 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:41 compute-0 nova_compute[254092]: 2025-11-25 16:31:41.807 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:31:41 compute-0 nova_compute[254092]: 2025-11-25 16:31:41.808 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:31:41 compute-0 nova_compute[254092]: 2025-11-25 16:31:41.810 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:41 compute-0 nova_compute[254092]: 2025-11-25 16:31:41.811 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa0c168b-de, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:31:41 compute-0 nova_compute[254092]: 2025-11-25 16:31:41.811 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapaa0c168b-de, col_values=(('external_ids', {'iface-id': 'aa0c168b-de51-438f-a68b-f9c78a24ca7c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:33:0b:3f', 'vm-uuid': '07003872-27e7-4fd9-80cf-a34257d5aa97'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:31:41 compute-0 nova_compute[254092]: 2025-11-25 16:31:41.813 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:41 compute-0 NetworkManager[48891]: <info>  [1764088301.8142] manager: (tapaa0c168b-de): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/86)
Nov 25 16:31:41 compute-0 nova_compute[254092]: 2025-11-25 16:31:41.819 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:31:41 compute-0 nova_compute[254092]: 2025-11-25 16:31:41.820 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:41 compute-0 nova_compute[254092]: 2025-11-25 16:31:41.821 254096 INFO os_vif [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:0b:3f,bridge_name='br-int',has_traffic_filtering=True,id=aa0c168b-de51-438f-a68b-f9c78a24ca7c,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa0c168b-de')
Nov 25 16:31:41 compute-0 nova_compute[254092]: 2025-11-25 16:31:41.821 254096 DEBUG nova.virt.libvirt.vif [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:30:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1158248018',display_name='tempest-AttachInterfacesTestJSON-server-1158248018',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1158248018',id=27,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJmM+hyi3seBAE3K46sBI2YUlQc/OgmOr+XAosDXSMWbsQMM7MSa9VCyL6NIiHdrd/Mi+L5291JhLvNY+nEX94KPtdKY+CwKghfhbYtUXWreEE2o3A99BKI+adAxYStDFw==',key_name='tempest-keypair-134592934',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:30:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-83v8y104',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:30:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=07003872-27e7-4fd9-80cf-a34257d5aa97,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "address": "fa:16:3e:33:0b:3f", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa0c168b-de", "ovs_interfaceid": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:31:41 compute-0 nova_compute[254092]: 2025-11-25 16:31:41.822 254096 DEBUG nova.network.os_vif_util [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "address": "fa:16:3e:33:0b:3f", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa0c168b-de", "ovs_interfaceid": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:31:41 compute-0 nova_compute[254092]: 2025-11-25 16:31:41.822 254096 DEBUG nova.network.os_vif_util [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:0b:3f,bridge_name='br-int',has_traffic_filtering=True,id=aa0c168b-de51-438f-a68b-f9c78a24ca7c,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa0c168b-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:31:41 compute-0 nova_compute[254092]: 2025-11-25 16:31:41.825 254096 DEBUG nova.virt.libvirt.guest [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] attach device xml: <interface type="ethernet">
Nov 25 16:31:41 compute-0 nova_compute[254092]:   <mac address="fa:16:3e:33:0b:3f"/>
Nov 25 16:31:41 compute-0 nova_compute[254092]:   <model type="virtio"/>
Nov 25 16:31:41 compute-0 nova_compute[254092]:   <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:31:41 compute-0 nova_compute[254092]:   <mtu size="1442"/>
Nov 25 16:31:41 compute-0 nova_compute[254092]:   <target dev="tapaa0c168b-de"/>
Nov 25 16:31:41 compute-0 nova_compute[254092]: </interface>
Nov 25 16:31:41 compute-0 nova_compute[254092]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 25 16:31:41 compute-0 kernel: tapaa0c168b-de: entered promiscuous mode
Nov 25 16:31:41 compute-0 NetworkManager[48891]: <info>  [1764088301.8372] manager: (tapaa0c168b-de): new Tun device (/org/freedesktop/NetworkManager/Devices/87)
Nov 25 16:31:41 compute-0 ovn_controller[153477]: 2025-11-25T16:31:41Z|00173|binding|INFO|Claiming lport aa0c168b-de51-438f-a68b-f9c78a24ca7c for this chassis.
Nov 25 16:31:41 compute-0 ovn_controller[153477]: 2025-11-25T16:31:41Z|00174|binding|INFO|aa0c168b-de51-438f-a68b-f9c78a24ca7c: Claiming fa:16:3e:33:0b:3f 10.100.0.4
Nov 25 16:31:41 compute-0 nova_compute[254092]: 2025-11-25 16:31:41.841 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:41.851 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:0b:3f 10.100.0.4'], port_security=['fa:16:3e:33:0b:3f 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '07003872-27e7-4fd9-80cf-a34257d5aa97', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36fd407edf16424e96041f57fccb9e2b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4d6ed831-26b5-4c46-a07a-309ffadbe01b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a0fe150-4b32-4abf-ad06-055b2d97facb, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=aa0c168b-de51-438f-a68b-f9c78a24ca7c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:31:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:41.853 163338 INFO neutron.agent.ovn.metadata.agent [-] Port aa0c168b-de51-438f-a68b-f9c78a24ca7c in datapath 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d bound to our chassis
Nov 25 16:31:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:41.854 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d
Nov 25 16:31:41 compute-0 ovn_controller[153477]: 2025-11-25T16:31:41Z|00175|binding|INFO|Setting lport aa0c168b-de51-438f-a68b-f9c78a24ca7c ovn-installed in OVS
Nov 25 16:31:41 compute-0 ovn_controller[153477]: 2025-11-25T16:31:41Z|00176|binding|INFO|Setting lport aa0c168b-de51-438f-a68b-f9c78a24ca7c up in Southbound
Nov 25 16:31:41 compute-0 nova_compute[254092]: 2025-11-25 16:31:41.861 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:41 compute-0 nova_compute[254092]: 2025-11-25 16:31:41.863 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:41.876 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[63ff6bd3-bee6-4a11-9ab7-b74b1f816af0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:41 compute-0 systemd-udevd[290348]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:31:41 compute-0 NetworkManager[48891]: <info>  [1764088301.8907] device (tapaa0c168b-de): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:31:41 compute-0 NetworkManager[48891]: <info>  [1764088301.8915] device (tapaa0c168b-de): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:31:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:41.906 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6161e431-f825-4baf-9641-15193abe4faa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:41.916 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[3e23d114-5bcb-41de-bd8d-e0c05d53f468]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:41 compute-0 nova_compute[254092]: 2025-11-25 16:31:41.918 254096 DEBUG nova.virt.libvirt.driver [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:31:41 compute-0 nova_compute[254092]: 2025-11-25 16:31:41.919 254096 DEBUG nova.virt.libvirt.driver [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:31:41 compute-0 nova_compute[254092]: 2025-11-25 16:31:41.919 254096 DEBUG nova.virt.libvirt.driver [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No VIF found with MAC fa:16:3e:83:61:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:31:41 compute-0 nova_compute[254092]: 2025-11-25 16:31:41.919 254096 DEBUG nova.virt.libvirt.driver [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No VIF found with MAC fa:16:3e:9c:56:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:31:41 compute-0 nova_compute[254092]: 2025-11-25 16:31:41.920 254096 DEBUG nova.virt.libvirt.driver [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No VIF found with MAC fa:16:3e:33:0b:3f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:31:41 compute-0 nova_compute[254092]: 2025-11-25 16:31:41.944 254096 DEBUG nova.virt.libvirt.guest [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:31:41 compute-0 nova_compute[254092]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:31:41 compute-0 nova_compute[254092]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1158248018</nova:name>
Nov 25 16:31:41 compute-0 nova_compute[254092]:   <nova:creationTime>2025-11-25 16:31:41</nova:creationTime>
Nov 25 16:31:41 compute-0 nova_compute[254092]:   <nova:flavor name="m1.nano">
Nov 25 16:31:41 compute-0 nova_compute[254092]:     <nova:memory>128</nova:memory>
Nov 25 16:31:41 compute-0 nova_compute[254092]:     <nova:disk>1</nova:disk>
Nov 25 16:31:41 compute-0 nova_compute[254092]:     <nova:swap>0</nova:swap>
Nov 25 16:31:41 compute-0 nova_compute[254092]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:31:41 compute-0 nova_compute[254092]:     <nova:vcpus>1</nova:vcpus>
Nov 25 16:31:41 compute-0 nova_compute[254092]:   </nova:flavor>
Nov 25 16:31:41 compute-0 nova_compute[254092]:   <nova:owner>
Nov 25 16:31:41 compute-0 nova_compute[254092]:     <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 16:31:41 compute-0 nova_compute[254092]:     <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 16:31:41 compute-0 nova_compute[254092]:   </nova:owner>
Nov 25 16:31:41 compute-0 nova_compute[254092]:   <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:31:41 compute-0 nova_compute[254092]:   <nova:ports>
Nov 25 16:31:41 compute-0 nova_compute[254092]:     <nova:port uuid="19d5425c-f0c6-4c68-b8a6-cb1c6357d249">
Nov 25 16:31:41 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 16:31:41 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:31:41 compute-0 nova_compute[254092]:     <nova:port uuid="63499eed-d192-4aec-8ab6-1c3384834ed4">
Nov 25 16:31:41 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 16:31:41 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:31:41 compute-0 nova_compute[254092]:     <nova:port uuid="aa0c168b-de51-438f-a68b-f9c78a24ca7c">
Nov 25 16:31:41 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 16:31:41 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:31:41 compute-0 nova_compute[254092]:   </nova:ports>
Nov 25 16:31:41 compute-0 nova_compute[254092]: </nova:instance>
Nov 25 16:31:41 compute-0 nova_compute[254092]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 25 16:31:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:41.944 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9db6c008-44ac-4fe4-8703-0cb1260e5a92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:41.964 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c5331882-bcac-4a77-aae6-4ddbfd4eda4b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52e7d5b9-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:97:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471309, 'reachable_time': 23086, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290355, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:41 compute-0 nova_compute[254092]: 2025-11-25 16:31:41.977 254096 DEBUG oslo_concurrency.lockutils [None req-339c310f-6f6b-4498-be98-b2734f512a7d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "interface-07003872-27e7-4fd9-80cf-a34257d5aa97-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 8.291s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:31:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:41.978 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[77ebec65-53bc-48fd-abb8-ffdc56bac787]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 471320, 'tstamp': 471320}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290356, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 471323, 'tstamp': 471323}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290356, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:41.980 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52e7d5b9-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:31:41 compute-0 nova_compute[254092]: 2025-11-25 16:31:41.983 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:41.984 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52e7d5b9-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:31:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:41.985 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:31:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:41.985 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap52e7d5b9-00, col_values=(('external_ids', {'iface-id': 'af5d2618-f326-4aaf-9395-f47548ac108b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:31:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:41.985 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:31:42 compute-0 nova_compute[254092]: 2025-11-25 16:31:42.551 254096 DEBUG nova.compute.manager [req-3935169b-c694-4804-9c6d-8c026f95b591 req-12e42528-9e32-4ae6-b19b-798eaeac43ef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received event network-vif-plugged-aa0c168b-de51-438f-a68b-f9c78a24ca7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:31:42 compute-0 nova_compute[254092]: 2025-11-25 16:31:42.551 254096 DEBUG oslo_concurrency.lockutils [req-3935169b-c694-4804-9c6d-8c026f95b591 req-12e42528-9e32-4ae6-b19b-798eaeac43ef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:31:42 compute-0 nova_compute[254092]: 2025-11-25 16:31:42.552 254096 DEBUG oslo_concurrency.lockutils [req-3935169b-c694-4804-9c6d-8c026f95b591 req-12e42528-9e32-4ae6-b19b-798eaeac43ef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:31:42 compute-0 nova_compute[254092]: 2025-11-25 16:31:42.552 254096 DEBUG oslo_concurrency.lockutils [req-3935169b-c694-4804-9c6d-8c026f95b591 req-12e42528-9e32-4ae6-b19b-798eaeac43ef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:31:42 compute-0 nova_compute[254092]: 2025-11-25 16:31:42.552 254096 DEBUG nova.compute.manager [req-3935169b-c694-4804-9c6d-8c026f95b591 req-12e42528-9e32-4ae6-b19b-798eaeac43ef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] No waiting events found dispatching network-vif-plugged-aa0c168b-de51-438f-a68b-f9c78a24ca7c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:31:42 compute-0 nova_compute[254092]: 2025-11-25 16:31:42.552 254096 WARNING nova.compute.manager [req-3935169b-c694-4804-9c6d-8c026f95b591 req-12e42528-9e32-4ae6-b19b-798eaeac43ef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received unexpected event network-vif-plugged-aa0c168b-de51-438f-a68b-f9c78a24ca7c for instance with vm_state active and task_state None.
Nov 25 16:31:42 compute-0 ceph-mon[74985]: pgmap v1323: 321 pgs: 321 active+clean; 167 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 16:31:42 compute-0 nova_compute[254092]: 2025-11-25 16:31:42.609 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:42 compute-0 ovn_controller[153477]: 2025-11-25T16:31:42Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:33:0b:3f 10.100.0.4
Nov 25 16:31:42 compute-0 ovn_controller[153477]: 2025-11-25T16:31:42Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:33:0b:3f 10.100.0.4
Nov 25 16:31:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1324: 321 pgs: 321 active+clean; 167 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 MiB/s wr, 97 op/s
Nov 25 16:31:43 compute-0 nova_compute[254092]: 2025-11-25 16:31:43.762 254096 DEBUG nova.network.neutron [req-46a860b9-139e-4280-bd03-53ea307ad545 req-36093ae3-d9cf-4df3-9d7f-3995d29745b8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Updated VIF entry in instance network info cache for port aa0c168b-de51-438f-a68b-f9c78a24ca7c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:31:43 compute-0 nova_compute[254092]: 2025-11-25 16:31:43.763 254096 DEBUG nova.network.neutron [req-46a860b9-139e-4280-bd03-53ea307ad545 req-36093ae3-d9cf-4df3-9d7f-3995d29745b8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Updating instance_info_cache with network_info: [{"id": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "address": "fa:16:3e:83:61:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19d5425c-f0", "ovs_interfaceid": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "63499eed-d192-4aec-8ab6-1c3384834ed4", "address": "fa:16:3e:9c:56:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63499eed-d1", "ovs_interfaceid": "63499eed-d192-4aec-8ab6-1c3384834ed4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "address": "fa:16:3e:33:0b:3f", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa0c168b-de", "ovs_interfaceid": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:31:43 compute-0 nova_compute[254092]: 2025-11-25 16:31:43.780 254096 DEBUG oslo_concurrency.lockutils [req-46a860b9-139e-4280-bd03-53ea307ad545 req-36093ae3-d9cf-4df3-9d7f-3995d29745b8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:31:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:31:44 compute-0 nova_compute[254092]: 2025-11-25 16:31:44.309 254096 DEBUG oslo_concurrency.lockutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Acquiring lock "36f65013-2906-4794-9e23-e92dc7814b6e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:31:44 compute-0 nova_compute[254092]: 2025-11-25 16:31:44.310 254096 DEBUG oslo_concurrency.lockutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lock "36f65013-2906-4794-9e23-e92dc7814b6e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:31:44 compute-0 nova_compute[254092]: 2025-11-25 16:31:44.328 254096 DEBUG nova.compute.manager [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:31:44 compute-0 ceph-mon[74985]: pgmap v1324: 321 pgs: 321 active+clean; 167 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 MiB/s wr, 97 op/s
Nov 25 16:31:44 compute-0 nova_compute[254092]: 2025-11-25 16:31:44.653 254096 DEBUG oslo_concurrency.lockutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:31:44 compute-0 nova_compute[254092]: 2025-11-25 16:31:44.653 254096 DEBUG oslo_concurrency.lockutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:31:44 compute-0 nova_compute[254092]: 2025-11-25 16:31:44.662 254096 DEBUG nova.virt.hardware [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:31:44 compute-0 nova_compute[254092]: 2025-11-25 16:31:44.663 254096 INFO nova.compute.claims [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:31:44 compute-0 nova_compute[254092]: 2025-11-25 16:31:44.773 254096 DEBUG nova.compute.manager [req-ab349fbb-2dd0-4ad8-90cc-04aaf05fcbcd req-5fa3340d-bf36-4d58-9ca4-a50fb1fc73ae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received event network-vif-plugged-aa0c168b-de51-438f-a68b-f9c78a24ca7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:31:44 compute-0 nova_compute[254092]: 2025-11-25 16:31:44.773 254096 DEBUG oslo_concurrency.lockutils [req-ab349fbb-2dd0-4ad8-90cc-04aaf05fcbcd req-5fa3340d-bf36-4d58-9ca4-a50fb1fc73ae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:31:44 compute-0 nova_compute[254092]: 2025-11-25 16:31:44.774 254096 DEBUG oslo_concurrency.lockutils [req-ab349fbb-2dd0-4ad8-90cc-04aaf05fcbcd req-5fa3340d-bf36-4d58-9ca4-a50fb1fc73ae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:31:44 compute-0 nova_compute[254092]: 2025-11-25 16:31:44.774 254096 DEBUG oslo_concurrency.lockutils [req-ab349fbb-2dd0-4ad8-90cc-04aaf05fcbcd req-5fa3340d-bf36-4d58-9ca4-a50fb1fc73ae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:31:44 compute-0 nova_compute[254092]: 2025-11-25 16:31:44.774 254096 DEBUG nova.compute.manager [req-ab349fbb-2dd0-4ad8-90cc-04aaf05fcbcd req-5fa3340d-bf36-4d58-9ca4-a50fb1fc73ae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] No waiting events found dispatching network-vif-plugged-aa0c168b-de51-438f-a68b-f9c78a24ca7c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:31:44 compute-0 nova_compute[254092]: 2025-11-25 16:31:44.775 254096 WARNING nova.compute.manager [req-ab349fbb-2dd0-4ad8-90cc-04aaf05fcbcd req-5fa3340d-bf36-4d58-9ca4-a50fb1fc73ae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received unexpected event network-vif-plugged-aa0c168b-de51-438f-a68b-f9c78a24ca7c for instance with vm_state active and task_state None.
Nov 25 16:31:44 compute-0 nova_compute[254092]: 2025-11-25 16:31:44.845 254096 DEBUG oslo_concurrency.processutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:31:45 compute-0 nova_compute[254092]: 2025-11-25 16:31:45.091 254096 DEBUG oslo_concurrency.lockutils [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "interface-07003872-27e7-4fd9-80cf-a34257d5aa97-fd70e1c0-089e-49c8-b856-6ffd16627e8b" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:31:45 compute-0 nova_compute[254092]: 2025-11-25 16:31:45.092 254096 DEBUG oslo_concurrency.lockutils [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "interface-07003872-27e7-4fd9-80cf-a34257d5aa97-fd70e1c0-089e-49c8-b856-6ffd16627e8b" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:31:45 compute-0 nova_compute[254092]: 2025-11-25 16:31:45.092 254096 DEBUG nova.objects.instance [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'flavor' on Instance uuid 07003872-27e7-4fd9-80cf-a34257d5aa97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:31:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1325: 321 pgs: 321 active+clean; 167 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 97 op/s
Nov 25 16:31:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:31:45 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3688448314' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:31:45 compute-0 nova_compute[254092]: 2025-11-25 16:31:45.366 254096 DEBUG oslo_concurrency.processutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:31:45 compute-0 nova_compute[254092]: 2025-11-25 16:31:45.372 254096 DEBUG nova.compute.provider_tree [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:31:45 compute-0 nova_compute[254092]: 2025-11-25 16:31:45.387 254096 DEBUG nova.scheduler.client.report [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:31:45 compute-0 nova_compute[254092]: 2025-11-25 16:31:45.556 254096 DEBUG oslo_concurrency.lockutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.902s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:31:45 compute-0 nova_compute[254092]: 2025-11-25 16:31:45.557 254096 DEBUG nova.compute.manager [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:31:45 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3688448314' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:31:45 compute-0 nova_compute[254092]: 2025-11-25 16:31:45.700 254096 DEBUG nova.compute.manager [None req-0c6ed523-0b0c-419e-af71-4d838f38c5cc 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:31:45 compute-0 nova_compute[254092]: 2025-11-25 16:31:45.728 254096 DEBUG nova.compute.manager [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Nov 25 16:31:45 compute-0 nova_compute[254092]: 2025-11-25 16:31:45.759 254096 INFO nova.compute.manager [None req-0c6ed523-0b0c-419e-af71-4d838f38c5cc 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] instance snapshotting
Nov 25 16:31:45 compute-0 nova_compute[254092]: 2025-11-25 16:31:45.806 254096 INFO nova.virt.libvirt.driver [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:31:45 compute-0 nova_compute[254092]: 2025-11-25 16:31:45.920 254096 DEBUG nova.compute.manager [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:31:46 compute-0 nova_compute[254092]: 2025-11-25 16:31:46.278 254096 DEBUG nova.compute.manager [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:31:46 compute-0 nova_compute[254092]: 2025-11-25 16:31:46.279 254096 DEBUG nova.virt.libvirt.driver [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:31:46 compute-0 nova_compute[254092]: 2025-11-25 16:31:46.279 254096 INFO nova.virt.libvirt.driver [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Creating image(s)
Nov 25 16:31:46 compute-0 nova_compute[254092]: 2025-11-25 16:31:46.299 254096 DEBUG nova.storage.rbd_utils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] rbd image 36f65013-2906-4794-9e23-e92dc7814b6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:31:46 compute-0 nova_compute[254092]: 2025-11-25 16:31:46.324 254096 DEBUG nova.storage.rbd_utils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] rbd image 36f65013-2906-4794-9e23-e92dc7814b6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:31:46 compute-0 nova_compute[254092]: 2025-11-25 16:31:46.344 254096 DEBUG nova.storage.rbd_utils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] rbd image 36f65013-2906-4794-9e23-e92dc7814b6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:31:46 compute-0 nova_compute[254092]: 2025-11-25 16:31:46.348 254096 DEBUG oslo_concurrency.processutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:31:46 compute-0 nova_compute[254092]: 2025-11-25 16:31:46.410 254096 DEBUG oslo_concurrency.processutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:31:46 compute-0 nova_compute[254092]: 2025-11-25 16:31:46.411 254096 DEBUG oslo_concurrency.lockutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:31:46 compute-0 nova_compute[254092]: 2025-11-25 16:31:46.412 254096 DEBUG oslo_concurrency.lockutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:31:46 compute-0 nova_compute[254092]: 2025-11-25 16:31:46.412 254096 DEBUG oslo_concurrency.lockutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:31:46 compute-0 nova_compute[254092]: 2025-11-25 16:31:46.435 254096 DEBUG nova.storage.rbd_utils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] rbd image 36f65013-2906-4794-9e23-e92dc7814b6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:31:46 compute-0 nova_compute[254092]: 2025-11-25 16:31:46.443 254096 DEBUG oslo_concurrency.processutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 36f65013-2906-4794-9e23-e92dc7814b6e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:31:46 compute-0 nova_compute[254092]: 2025-11-25 16:31:46.814 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:47 compute-0 ceph-mon[74985]: pgmap v1325: 321 pgs: 321 active+clean; 167 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 97 op/s
Nov 25 16:31:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1326: 321 pgs: 321 active+clean; 167 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 74 op/s
Nov 25 16:31:47 compute-0 nova_compute[254092]: 2025-11-25 16:31:47.611 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:48 compute-0 nova_compute[254092]: 2025-11-25 16:31:48.064 254096 DEBUG oslo_concurrency.processutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 36f65013-2906-4794-9e23-e92dc7814b6e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:31:48 compute-0 ceph-mon[74985]: pgmap v1326: 321 pgs: 321 active+clean; 167 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 74 op/s
Nov 25 16:31:48 compute-0 nova_compute[254092]: 2025-11-25 16:31:48.145 254096 DEBUG nova.storage.rbd_utils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] resizing rbd image 36f65013-2906-4794-9e23-e92dc7814b6e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:31:48 compute-0 nova_compute[254092]: 2025-11-25 16:31:48.245 254096 DEBUG nova.objects.instance [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lazy-loading 'migration_context' on Instance uuid 36f65013-2906-4794-9e23-e92dc7814b6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:31:48 compute-0 nova_compute[254092]: 2025-11-25 16:31:48.261 254096 DEBUG nova.virt.libvirt.driver [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:31:48 compute-0 nova_compute[254092]: 2025-11-25 16:31:48.261 254096 DEBUG nova.virt.libvirt.driver [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Ensure instance console log exists: /var/lib/nova/instances/36f65013-2906-4794-9e23-e92dc7814b6e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:31:48 compute-0 nova_compute[254092]: 2025-11-25 16:31:48.262 254096 DEBUG oslo_concurrency.lockutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:31:48 compute-0 nova_compute[254092]: 2025-11-25 16:31:48.262 254096 DEBUG oslo_concurrency.lockutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:31:48 compute-0 nova_compute[254092]: 2025-11-25 16:31:48.262 254096 DEBUG oslo_concurrency.lockutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:31:48 compute-0 nova_compute[254092]: 2025-11-25 16:31:48.264 254096 DEBUG nova.virt.libvirt.driver [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:31:48 compute-0 nova_compute[254092]: 2025-11-25 16:31:48.274 254096 WARNING nova.virt.libvirt.driver [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:31:48 compute-0 nova_compute[254092]: 2025-11-25 16:31:48.275 254096 INFO nova.virt.libvirt.driver [None req-0c6ed523-0b0c-419e-af71-4d838f38c5cc 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Beginning live snapshot process
Nov 25 16:31:48 compute-0 nova_compute[254092]: 2025-11-25 16:31:48.280 254096 DEBUG nova.virt.libvirt.host [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:31:48 compute-0 nova_compute[254092]: 2025-11-25 16:31:48.281 254096 DEBUG nova.virt.libvirt.host [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:31:48 compute-0 nova_compute[254092]: 2025-11-25 16:31:48.287 254096 DEBUG nova.virt.libvirt.host [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:31:48 compute-0 nova_compute[254092]: 2025-11-25 16:31:48.288 254096 DEBUG nova.virt.libvirt.host [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:31:48 compute-0 nova_compute[254092]: 2025-11-25 16:31:48.289 254096 DEBUG nova.virt.libvirt.driver [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:31:48 compute-0 nova_compute[254092]: 2025-11-25 16:31:48.289 254096 DEBUG nova.virt.hardware [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:31:48 compute-0 nova_compute[254092]: 2025-11-25 16:31:48.290 254096 DEBUG nova.virt.hardware [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:31:48 compute-0 nova_compute[254092]: 2025-11-25 16:31:48.290 254096 DEBUG nova.virt.hardware [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:31:48 compute-0 nova_compute[254092]: 2025-11-25 16:31:48.291 254096 DEBUG nova.virt.hardware [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:31:48 compute-0 nova_compute[254092]: 2025-11-25 16:31:48.291 254096 DEBUG nova.virt.hardware [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:31:48 compute-0 nova_compute[254092]: 2025-11-25 16:31:48.291 254096 DEBUG nova.virt.hardware [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:31:48 compute-0 nova_compute[254092]: 2025-11-25 16:31:48.292 254096 DEBUG nova.virt.hardware [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:31:48 compute-0 nova_compute[254092]: 2025-11-25 16:31:48.292 254096 DEBUG nova.virt.hardware [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:31:48 compute-0 nova_compute[254092]: 2025-11-25 16:31:48.293 254096 DEBUG nova.virt.hardware [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:31:48 compute-0 nova_compute[254092]: 2025-11-25 16:31:48.293 254096 DEBUG nova.virt.hardware [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:31:48 compute-0 nova_compute[254092]: 2025-11-25 16:31:48.293 254096 DEBUG nova.virt.hardware [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:31:48 compute-0 nova_compute[254092]: 2025-11-25 16:31:48.297 254096 DEBUG oslo_concurrency.processutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:31:48 compute-0 nova_compute[254092]: 2025-11-25 16:31:48.443 254096 DEBUG nova.objects.instance [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'pci_requests' on Instance uuid 07003872-27e7-4fd9-80cf-a34257d5aa97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:31:48 compute-0 nova_compute[254092]: 2025-11-25 16:31:48.456 254096 DEBUG nova.virt.libvirt.imagebackend [None req-0c6ed523-0b0c-419e-af71-4d838f38c5cc 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] No parent info for 8b512c8e-2281-41de-a668-eb983e174ba0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 25 16:31:48 compute-0 nova_compute[254092]: 2025-11-25 16:31:48.463 254096 DEBUG nova.network.neutron [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:31:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:31:48 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2300794903' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:31:48 compute-0 nova_compute[254092]: 2025-11-25 16:31:48.779 254096 DEBUG oslo_concurrency.processutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:31:48 compute-0 nova_compute[254092]: 2025-11-25 16:31:48.805 254096 DEBUG nova.storage.rbd_utils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] rbd image 36f65013-2906-4794-9e23-e92dc7814b6e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:31:48 compute-0 nova_compute[254092]: 2025-11-25 16:31:48.811 254096 DEBUG oslo_concurrency.processutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:31:48 compute-0 nova_compute[254092]: 2025-11-25 16:31:48.842 254096 DEBUG nova.storage.rbd_utils [None req-0c6ed523-0b0c-419e-af71-4d838f38c5cc 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] creating snapshot(a9eea299738b43a3870b0b13d430950a) on rbd image(e1c7a84b-16df-49a8-83a7-a97bd47e0d43_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 16:31:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:31:49 compute-0 nova_compute[254092]: 2025-11-25 16:31:49.067 254096 DEBUG nova.policy [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c2f73d928f424db49a62e7dff2b1ce14', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '36fd407edf16424e96041f57fccb9e2b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:31:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Nov 25 16:31:49 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2300794903' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:31:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Nov 25 16:31:49 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Nov 25 16:31:49 compute-0 nova_compute[254092]: 2025-11-25 16:31:49.209 254096 DEBUG nova.storage.rbd_utils [None req-0c6ed523-0b0c-419e-af71-4d838f38c5cc 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] cloning vms/e1c7a84b-16df-49a8-83a7-a97bd47e0d43_disk@a9eea299738b43a3870b0b13d430950a to images/bc794908-f5ef-4cca-8c6d-36584cf3f9c9 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 16:31:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:31:49 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2302803226' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:31:49 compute-0 nova_compute[254092]: 2025-11-25 16:31:49.274 254096 DEBUG oslo_concurrency.processutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:31:49 compute-0 nova_compute[254092]: 2025-11-25 16:31:49.276 254096 DEBUG nova.objects.instance [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lazy-loading 'pci_devices' on Instance uuid 36f65013-2906-4794-9e23-e92dc7814b6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:31:49 compute-0 nova_compute[254092]: 2025-11-25 16:31:49.295 254096 DEBUG nova.virt.libvirt.driver [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:31:49 compute-0 nova_compute[254092]:   <uuid>36f65013-2906-4794-9e23-e92dc7814b6e</uuid>
Nov 25 16:31:49 compute-0 nova_compute[254092]:   <name>instance-0000001d</name>
Nov 25 16:31:49 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:31:49 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:31:49 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:31:49 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:       <nova:name>tempest-ServerShowV254Test-server-179263035</nova:name>
Nov 25 16:31:49 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:31:48</nova:creationTime>
Nov 25 16:31:49 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:31:49 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:31:49 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:31:49 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:31:49 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:31:49 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:31:49 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:31:49 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:31:49 compute-0 nova_compute[254092]:         <nova:user uuid="23c828e6ebbd4d0488f6edbbe9616ca7">tempest-ServerShowV254Test-285881419-project-member</nova:user>
Nov 25 16:31:49 compute-0 nova_compute[254092]:         <nova:project uuid="1a87d91cb59d45c29155c8f5cb5ad745">tempest-ServerShowV254Test-285881419</nova:project>
Nov 25 16:31:49 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:31:49 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:       <nova:ports/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:31:49 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:31:49 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:31:49 compute-0 nova_compute[254092]:     <system>
Nov 25 16:31:49 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:31:49 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:31:49 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:31:49 compute-0 nova_compute[254092]:       <entry name="serial">36f65013-2906-4794-9e23-e92dc7814b6e</entry>
Nov 25 16:31:49 compute-0 nova_compute[254092]:       <entry name="uuid">36f65013-2906-4794-9e23-e92dc7814b6e</entry>
Nov 25 16:31:49 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     </system>
Nov 25 16:31:49 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:31:49 compute-0 nova_compute[254092]:   <os>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:   </os>
Nov 25 16:31:49 compute-0 nova_compute[254092]:   <features>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:   </features>
Nov 25 16:31:49 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:31:49 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:31:49 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:31:49 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:31:49 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:31:49 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/36f65013-2906-4794-9e23-e92dc7814b6e_disk">
Nov 25 16:31:49 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:       </source>
Nov 25 16:31:49 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:31:49 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:31:49 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:31:49 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/36f65013-2906-4794-9e23-e92dc7814b6e_disk.config">
Nov 25 16:31:49 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:       </source>
Nov 25 16:31:49 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:31:49 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:31:49 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:31:49 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/36f65013-2906-4794-9e23-e92dc7814b6e/console.log" append="off"/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     <video>
Nov 25 16:31:49 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     </video>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:31:49 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:31:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1328: 321 pgs: 321 active+clean; 167 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 20 KiB/s wr, 88 op/s
Nov 25 16:31:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:31:49 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:31:49 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:31:49 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:31:49 compute-0 nova_compute[254092]: </domain>
Nov 25 16:31:49 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:31:49 compute-0 nova_compute[254092]: 2025-11-25 16:31:49.365 254096 DEBUG nova.storage.rbd_utils [None req-0c6ed523-0b0c-419e-af71-4d838f38c5cc 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] flattening images/bc794908-f5ef-4cca-8c6d-36584cf3f9c9 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 16:31:49 compute-0 nova_compute[254092]: 2025-11-25 16:31:49.416 254096 DEBUG nova.virt.libvirt.driver [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:31:49 compute-0 nova_compute[254092]: 2025-11-25 16:31:49.418 254096 DEBUG nova.virt.libvirt.driver [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:31:49 compute-0 nova_compute[254092]: 2025-11-25 16:31:49.419 254096 INFO nova.virt.libvirt.driver [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Using config drive
Nov 25 16:31:49 compute-0 nova_compute[254092]: 2025-11-25 16:31:49.449 254096 DEBUG nova.storage.rbd_utils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] rbd image 36f65013-2906-4794-9e23-e92dc7814b6e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:31:49 compute-0 nova_compute[254092]: 2025-11-25 16:31:49.650 254096 INFO nova.virt.libvirt.driver [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Creating config drive at /var/lib/nova/instances/36f65013-2906-4794-9e23-e92dc7814b6e/disk.config
Nov 25 16:31:49 compute-0 nova_compute[254092]: 2025-11-25 16:31:49.655 254096 DEBUG oslo_concurrency.processutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/36f65013-2906-4794-9e23-e92dc7814b6e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk2rahvtk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:31:49 compute-0 nova_compute[254092]: 2025-11-25 16:31:49.766 254096 DEBUG nova.storage.rbd_utils [None req-0c6ed523-0b0c-419e-af71-4d838f38c5cc 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] removing snapshot(a9eea299738b43a3870b0b13d430950a) on rbd image(e1c7a84b-16df-49a8-83a7-a97bd47e0d43_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 16:31:49 compute-0 nova_compute[254092]: 2025-11-25 16:31:49.789 254096 DEBUG oslo_concurrency.processutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/36f65013-2906-4794-9e23-e92dc7814b6e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk2rahvtk" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:31:49 compute-0 nova_compute[254092]: 2025-11-25 16:31:49.810 254096 DEBUG nova.storage.rbd_utils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] rbd image 36f65013-2906-4794-9e23-e92dc7814b6e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:31:49 compute-0 nova_compute[254092]: 2025-11-25 16:31:49.819 254096 DEBUG oslo_concurrency.processutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/36f65013-2906-4794-9e23-e92dc7814b6e/disk.config 36f65013-2906-4794-9e23-e92dc7814b6e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:31:49 compute-0 nova_compute[254092]: 2025-11-25 16:31:49.983 254096 DEBUG oslo_concurrency.processutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/36f65013-2906-4794-9e23-e92dc7814b6e/disk.config 36f65013-2906-4794-9e23-e92dc7814b6e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:31:49 compute-0 nova_compute[254092]: 2025-11-25 16:31:49.984 254096 INFO nova.virt.libvirt.driver [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Deleting local config drive /var/lib/nova/instances/36f65013-2906-4794-9e23-e92dc7814b6e/disk.config because it was imported into RBD.
Nov 25 16:31:50 compute-0 systemd-machined[216343]: New machine qemu-33-instance-0000001d.
Nov 25 16:31:50 compute-0 systemd[1]: Started Virtual Machine qemu-33-instance-0000001d.
Nov 25 16:31:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Nov 25 16:31:50 compute-0 ceph-mon[74985]: osdmap e163: 3 total, 3 up, 3 in
Nov 25 16:31:50 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2302803226' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:31:50 compute-0 ceph-mon[74985]: pgmap v1328: 321 pgs: 321 active+clean; 167 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 20 KiB/s wr, 88 op/s
Nov 25 16:31:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Nov 25 16:31:50 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Nov 25 16:31:50 compute-0 nova_compute[254092]: 2025-11-25 16:31:50.188 254096 DEBUG nova.storage.rbd_utils [None req-0c6ed523-0b0c-419e-af71-4d838f38c5cc 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] creating snapshot(snap) on rbd image(bc794908-f5ef-4cca-8c6d-36584cf3f9c9) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 16:31:50 compute-0 nova_compute[254092]: 2025-11-25 16:31:50.387 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088310.3870687, 36f65013-2906-4794-9e23-e92dc7814b6e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:31:50 compute-0 nova_compute[254092]: 2025-11-25 16:31:50.388 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] VM Resumed (Lifecycle Event)
Nov 25 16:31:50 compute-0 nova_compute[254092]: 2025-11-25 16:31:50.390 254096 DEBUG nova.compute.manager [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:31:50 compute-0 nova_compute[254092]: 2025-11-25 16:31:50.390 254096 DEBUG nova.virt.libvirt.driver [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:31:50 compute-0 nova_compute[254092]: 2025-11-25 16:31:50.394 254096 INFO nova.virt.libvirt.driver [-] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Instance spawned successfully.
Nov 25 16:31:50 compute-0 nova_compute[254092]: 2025-11-25 16:31:50.394 254096 DEBUG nova.virt.libvirt.driver [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:31:50 compute-0 nova_compute[254092]: 2025-11-25 16:31:50.418 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:31:50 compute-0 nova_compute[254092]: 2025-11-25 16:31:50.422 254096 DEBUG nova.virt.libvirt.driver [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:31:50 compute-0 nova_compute[254092]: 2025-11-25 16:31:50.423 254096 DEBUG nova.virt.libvirt.driver [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:31:50 compute-0 nova_compute[254092]: 2025-11-25 16:31:50.423 254096 DEBUG nova.virt.libvirt.driver [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:31:50 compute-0 nova_compute[254092]: 2025-11-25 16:31:50.423 254096 DEBUG nova.virt.libvirt.driver [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:31:50 compute-0 nova_compute[254092]: 2025-11-25 16:31:50.424 254096 DEBUG nova.virt.libvirt.driver [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:31:50 compute-0 nova_compute[254092]: 2025-11-25 16:31:50.424 254096 DEBUG nova.virt.libvirt.driver [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:31:50 compute-0 nova_compute[254092]: 2025-11-25 16:31:50.428 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:31:50 compute-0 nova_compute[254092]: 2025-11-25 16:31:50.458 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:31:50 compute-0 nova_compute[254092]: 2025-11-25 16:31:50.458 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088310.3881764, 36f65013-2906-4794-9e23-e92dc7814b6e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:31:50 compute-0 nova_compute[254092]: 2025-11-25 16:31:50.459 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] VM Started (Lifecycle Event)
Nov 25 16:31:50 compute-0 nova_compute[254092]: 2025-11-25 16:31:50.493 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:31:50 compute-0 nova_compute[254092]: 2025-11-25 16:31:50.495 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:31:50 compute-0 nova_compute[254092]: 2025-11-25 16:31:50.505 254096 INFO nova.compute.manager [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Took 4.23 seconds to spawn the instance on the hypervisor.
Nov 25 16:31:50 compute-0 nova_compute[254092]: 2025-11-25 16:31:50.506 254096 DEBUG nova.compute.manager [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:31:50 compute-0 nova_compute[254092]: 2025-11-25 16:31:50.518 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:31:50 compute-0 nova_compute[254092]: 2025-11-25 16:31:50.564 254096 INFO nova.compute.manager [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Took 5.94 seconds to build instance.
Nov 25 16:31:50 compute-0 nova_compute[254092]: 2025-11-25 16:31:50.584 254096 DEBUG oslo_concurrency.lockutils [None req-237b12eb-72b8-432e-b92c-ce9b4adcdad7 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lock "36f65013-2906-4794-9e23-e92dc7814b6e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.273s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:31:50 compute-0 nova_compute[254092]: 2025-11-25 16:31:50.650 254096 DEBUG nova.network.neutron [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Successfully updated port: fd70e1c0-089e-49c8-b856-6ffd16627e8b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:31:50 compute-0 nova_compute[254092]: 2025-11-25 16:31:50.677 254096 DEBUG oslo_concurrency.lockutils [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:31:50 compute-0 nova_compute[254092]: 2025-11-25 16:31:50.677 254096 DEBUG oslo_concurrency.lockutils [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquired lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:31:50 compute-0 nova_compute[254092]: 2025-11-25 16:31:50.678 254096 DEBUG nova.network.neutron [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:31:50 compute-0 nova_compute[254092]: 2025-11-25 16:31:50.845 254096 DEBUG nova.compute.manager [req-4d3c3d21-86bc-4b0e-90bf-8789e6c6393e req-c9f36471-0bf8-4120-a3fa-76f0d6d9120d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received event network-changed-fd70e1c0-089e-49c8-b856-6ffd16627e8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:31:50 compute-0 nova_compute[254092]: 2025-11-25 16:31:50.845 254096 DEBUG nova.compute.manager [req-4d3c3d21-86bc-4b0e-90bf-8789e6c6393e req-c9f36471-0bf8-4120-a3fa-76f0d6d9120d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Refreshing instance network info cache due to event network-changed-fd70e1c0-089e-49c8-b856-6ffd16627e8b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:31:50 compute-0 nova_compute[254092]: 2025-11-25 16:31:50.845 254096 DEBUG oslo_concurrency.lockutils [req-4d3c3d21-86bc-4b0e-90bf-8789e6c6393e req-c9f36471-0bf8-4120-a3fa-76f0d6d9120d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:31:50 compute-0 nova_compute[254092]: 2025-11-25 16:31:50.904 254096 WARNING nova.network.neutron [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d already exists in list: networks containing: ['52e7d5b9-0570-4e5c-b3da-9dfcb924b83d']. ignoring it
Nov 25 16:31:50 compute-0 nova_compute[254092]: 2025-11-25 16:31:50.905 254096 WARNING nova.network.neutron [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d already exists in list: networks containing: ['52e7d5b9-0570-4e5c-b3da-9dfcb924b83d']. ignoring it
Nov 25 16:31:50 compute-0 nova_compute[254092]: 2025-11-25 16:31:50.905 254096 WARNING nova.network.neutron [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d already exists in list: networks containing: ['52e7d5b9-0570-4e5c-b3da-9dfcb924b83d']. ignoring it
Nov 25 16:31:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:31:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:31:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:31:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:31:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016669032240997591 of space, bias 1.0, pg target 0.5000709672299277 quantized to 32 (current 32)
Nov 25 16:31:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:31:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:31:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:31:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:31:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:31:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009338295443445985 of space, bias 1.0, pg target 0.28014886330337957 quantized to 32 (current 32)
Nov 25 16:31:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:31:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 16:31:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:31:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:31:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:31:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 16:31:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:31:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 16:31:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:31:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:31:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:31:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 16:31:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Nov 25 16:31:51 compute-0 ceph-mon[74985]: osdmap e164: 3 total, 3 up, 3 in
Nov 25 16:31:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Nov 25 16:31:51 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Nov 25 16:31:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1331: 321 pgs: 321 active+clean; 293 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 11 MiB/s wr, 284 op/s
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver [None req-0c6ed523-0b0c-419e-af71-4d838f38c5cc 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image bc794908-f5ef-4cca-8c6d-36584cf3f9c9 could not be found.
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver     image = self._client.call(
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID bc794908-f5ef-4cca-8c6d-36584cf3f9c9
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver 
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver 
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver     image = self._client.call(
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image bc794908-f5ef-4cca-8c6d-36584cf3f9c9 could not be found.
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.491 254096 ERROR nova.virt.libvirt.driver 
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.567 254096 DEBUG nova.storage.rbd_utils [None req-0c6ed523-0b0c-419e-af71-4d838f38c5cc 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] removing snapshot(snap) on rbd image(bc794908-f5ef-4cca-8c6d-36584cf3f9c9) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 16:31:51 compute-0 nova_compute[254092]: 2025-11-25 16:31:51.818 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:51 compute-0 ovn_controller[153477]: 2025-11-25T16:31:51Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:79:09:1a 10.100.0.3
Nov 25 16:31:51 compute-0 ovn_controller[153477]: 2025-11-25T16:31:51Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:79:09:1a 10.100.0.3
Nov 25 16:31:52 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Nov 25 16:31:52 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Nov 25 16:31:52 compute-0 ceph-mon[74985]: osdmap e165: 3 total, 3 up, 3 in
Nov 25 16:31:52 compute-0 ceph-mon[74985]: pgmap v1331: 321 pgs: 321 active+clean; 293 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 11 MiB/s wr, 284 op/s
Nov 25 16:31:52 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Nov 25 16:31:52 compute-0 nova_compute[254092]: 2025-11-25 16:31:52.417 254096 WARNING nova.compute.manager [None req-0c6ed523-0b0c-419e-af71-4d838f38c5cc 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Image not found during snapshot: nova.exception.ImageNotFound: Image bc794908-f5ef-4cca-8c6d-36584cf3f9c9 could not be found.
Nov 25 16:31:52 compute-0 nova_compute[254092]: 2025-11-25 16:31:52.614 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:53 compute-0 ceph-mon[74985]: osdmap e166: 3 total, 3 up, 3 in
Nov 25 16:31:53 compute-0 nova_compute[254092]: 2025-11-25 16:31:53.281 254096 INFO nova.compute.manager [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Rebuilding instance
Nov 25 16:31:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1333: 321 pgs: 321 active+clean; 293 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 6.1 MiB/s rd, 16 MiB/s wr, 411 op/s
Nov 25 16:31:53 compute-0 nova_compute[254092]: 2025-11-25 16:31:53.512 254096 DEBUG nova.objects.instance [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 36f65013-2906-4794-9e23-e92dc7814b6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:31:53 compute-0 nova_compute[254092]: 2025-11-25 16:31:53.526 254096 DEBUG nova.compute.manager [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:31:53 compute-0 nova_compute[254092]: 2025-11-25 16:31:53.571 254096 DEBUG nova.objects.instance [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lazy-loading 'pci_requests' on Instance uuid 36f65013-2906-4794-9e23-e92dc7814b6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:31:53 compute-0 nova_compute[254092]: 2025-11-25 16:31:53.582 254096 DEBUG nova.objects.instance [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lazy-loading 'pci_devices' on Instance uuid 36f65013-2906-4794-9e23-e92dc7814b6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:31:53 compute-0 nova_compute[254092]: 2025-11-25 16:31:53.595 254096 DEBUG nova.objects.instance [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lazy-loading 'resources' on Instance uuid 36f65013-2906-4794-9e23-e92dc7814b6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:31:53 compute-0 nova_compute[254092]: 2025-11-25 16:31:53.604 254096 DEBUG nova.objects.instance [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lazy-loading 'migration_context' on Instance uuid 36f65013-2906-4794-9e23-e92dc7814b6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:31:53 compute-0 nova_compute[254092]: 2025-11-25 16:31:53.614 254096 DEBUG nova.objects.instance [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 16:31:53 compute-0 nova_compute[254092]: 2025-11-25 16:31:53.617 254096 DEBUG nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 16:31:53 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:31:54 compute-0 nova_compute[254092]: 2025-11-25 16:31:54.115 254096 DEBUG oslo_concurrency.lockutils [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "e1c7a84b-16df-49a8-83a7-a97bd47e0d43" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:31:54 compute-0 nova_compute[254092]: 2025-11-25 16:31:54.116 254096 DEBUG oslo_concurrency.lockutils [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "e1c7a84b-16df-49a8-83a7-a97bd47e0d43" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:31:54 compute-0 nova_compute[254092]: 2025-11-25 16:31:54.116 254096 DEBUG oslo_concurrency.lockutils [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "e1c7a84b-16df-49a8-83a7-a97bd47e0d43-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:31:54 compute-0 nova_compute[254092]: 2025-11-25 16:31:54.117 254096 DEBUG oslo_concurrency.lockutils [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "e1c7a84b-16df-49a8-83a7-a97bd47e0d43-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:31:54 compute-0 nova_compute[254092]: 2025-11-25 16:31:54.117 254096 DEBUG oslo_concurrency.lockutils [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "e1c7a84b-16df-49a8-83a7-a97bd47e0d43-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:31:54 compute-0 nova_compute[254092]: 2025-11-25 16:31:54.118 254096 INFO nova.compute.manager [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Terminating instance
Nov 25 16:31:54 compute-0 nova_compute[254092]: 2025-11-25 16:31:54.119 254096 DEBUG nova.compute.manager [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:31:54 compute-0 kernel: tapbb9f2265-a4 (unregistering): left promiscuous mode
Nov 25 16:31:54 compute-0 NetworkManager[48891]: <info>  [1764088314.1616] device (tapbb9f2265-a4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:31:54 compute-0 ovn_controller[153477]: 2025-11-25T16:31:54Z|00177|binding|INFO|Releasing lport bb9f2265-a40c-44da-bb0e-dc52c5d0873b from this chassis (sb_readonly=0)
Nov 25 16:31:54 compute-0 ovn_controller[153477]: 2025-11-25T16:31:54Z|00178|binding|INFO|Setting lport bb9f2265-a40c-44da-bb0e-dc52c5d0873b down in Southbound
Nov 25 16:31:54 compute-0 ovn_controller[153477]: 2025-11-25T16:31:54Z|00179|binding|INFO|Removing iface tapbb9f2265-a4 ovn-installed in OVS
Nov 25 16:31:54 compute-0 nova_compute[254092]: 2025-11-25 16:31:54.171 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:54.180 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:79:09:1a 10.100.0.3'], port_security=['fa:16:3e:79:09:1a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'e1c7a84b-16df-49a8-83a7-a97bd47e0d43', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50e18e22-7850-458c-8d66-5932e0495377', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ed571eebde434695bae813d7bb21f4c3', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd93cbc19-cf50-4620-aef2-ea907e30850a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aff6ec99-f8ff-4a75-9004-f758cb9a51ad, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=bb9f2265-a40c-44da-bb0e-dc52c5d0873b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:31:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:54.181 163338 INFO neutron.agent.ovn.metadata.agent [-] Port bb9f2265-a40c-44da-bb0e-dc52c5d0873b in datapath 50e18e22-7850-458c-8d66-5932e0495377 unbound from our chassis
Nov 25 16:31:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:54.182 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 50e18e22-7850-458c-8d66-5932e0495377, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:31:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:54.184 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[177dc6d8-d297-42a6-9276-3bf0b1d20f1d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:54.186 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-50e18e22-7850-458c-8d66-5932e0495377 namespace which is not needed anymore
Nov 25 16:31:54 compute-0 ceph-mon[74985]: pgmap v1333: 321 pgs: 321 active+clean; 293 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 6.1 MiB/s rd, 16 MiB/s wr, 411 op/s
Nov 25 16:31:54 compute-0 nova_compute[254092]: 2025-11-25 16:31:54.199 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:54 compute-0 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000001c.scope: Deactivated successfully.
Nov 25 16:31:54 compute-0 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000001c.scope: Consumed 13.160s CPU time.
Nov 25 16:31:54 compute-0 systemd-machined[216343]: Machine qemu-32-instance-0000001c terminated.
Nov 25 16:31:54 compute-0 neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377[290326]: [NOTICE]   (290330) : haproxy version is 2.8.14-c23fe91
Nov 25 16:31:54 compute-0 neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377[290326]: [NOTICE]   (290330) : path to executable is /usr/sbin/haproxy
Nov 25 16:31:54 compute-0 neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377[290326]: [WARNING]  (290330) : Exiting Master process...
Nov 25 16:31:54 compute-0 neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377[290326]: [ALERT]    (290330) : Current worker (290332) exited with code 143 (Terminated)
Nov 25 16:31:54 compute-0 neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377[290326]: [WARNING]  (290330) : All workers exited. Exiting... (0)
Nov 25 16:31:54 compute-0 systemd[1]: libpod-f1f2804b3cfce42ab69f968bcb0ec5b2bb57b5fd1b1f756e19e48a5d5d89ef66.scope: Deactivated successfully.
Nov 25 16:31:54 compute-0 podman[290925]: 2025-11-25 16:31:54.338929343 +0000 UTC m=+0.053452385 container died f1f2804b3cfce42ab69f968bcb0ec5b2bb57b5fd1b1f756e19e48a5d5d89ef66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 16:31:54 compute-0 nova_compute[254092]: 2025-11-25 16:31:54.341 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:54 compute-0 nova_compute[254092]: 2025-11-25 16:31:54.346 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:54 compute-0 nova_compute[254092]: 2025-11-25 16:31:54.353 254096 INFO nova.virt.libvirt.driver [-] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Instance destroyed successfully.
Nov 25 16:31:54 compute-0 nova_compute[254092]: 2025-11-25 16:31:54.353 254096 DEBUG nova.objects.instance [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lazy-loading 'resources' on Instance uuid e1c7a84b-16df-49a8-83a7-a97bd47e0d43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:31:54 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f1f2804b3cfce42ab69f968bcb0ec5b2bb57b5fd1b1f756e19e48a5d5d89ef66-userdata-shm.mount: Deactivated successfully.
Nov 25 16:31:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-65a528aaf225b549659531755e9e5982927109e3e368e2369118320e622c687d-merged.mount: Deactivated successfully.
Nov 25 16:31:54 compute-0 nova_compute[254092]: 2025-11-25 16:31:54.374 254096 DEBUG nova.virt.libvirt.vif [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:31:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1117381742',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1117381742',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1117381742',id=28,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:31:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ed571eebde434695bae813d7bb21f4c3',ramdisk_id='',reservation_id='r-f10hzrkc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-964953831',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-964953831-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:31:52Z,user_data=None,user_id='87058665de814ae0a51a12ff02b0d9aa',uuid=e1c7a84b-16df-49a8-83a7-a97bd47e0d43,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bb9f2265-a40c-44da-bb0e-dc52c5d0873b", "address": "fa:16:3e:79:09:1a", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb9f2265-a4", "ovs_interfaceid": "bb9f2265-a40c-44da-bb0e-dc52c5d0873b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:31:54 compute-0 nova_compute[254092]: 2025-11-25 16:31:54.375 254096 DEBUG nova.network.os_vif_util [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Converting VIF {"id": "bb9f2265-a40c-44da-bb0e-dc52c5d0873b", "address": "fa:16:3e:79:09:1a", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbb9f2265-a4", "ovs_interfaceid": "bb9f2265-a40c-44da-bb0e-dc52c5d0873b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:31:54 compute-0 nova_compute[254092]: 2025-11-25 16:31:54.376 254096 DEBUG nova.network.os_vif_util [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:79:09:1a,bridge_name='br-int',has_traffic_filtering=True,id=bb9f2265-a40c-44da-bb0e-dc52c5d0873b,network=Network(50e18e22-7850-458c-8d66-5932e0495377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb9f2265-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:31:54 compute-0 nova_compute[254092]: 2025-11-25 16:31:54.376 254096 DEBUG os_vif [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:79:09:1a,bridge_name='br-int',has_traffic_filtering=True,id=bb9f2265-a40c-44da-bb0e-dc52c5d0873b,network=Network(50e18e22-7850-458c-8d66-5932e0495377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb9f2265-a4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:31:54 compute-0 nova_compute[254092]: 2025-11-25 16:31:54.378 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:54 compute-0 nova_compute[254092]: 2025-11-25 16:31:54.379 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbb9f2265-a4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:31:54 compute-0 nova_compute[254092]: 2025-11-25 16:31:54.380 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:54 compute-0 nova_compute[254092]: 2025-11-25 16:31:54.383 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:31:54 compute-0 nova_compute[254092]: 2025-11-25 16:31:54.386 254096 INFO os_vif [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:79:09:1a,bridge_name='br-int',has_traffic_filtering=True,id=bb9f2265-a40c-44da-bb0e-dc52c5d0873b,network=Network(50e18e22-7850-458c-8d66-5932e0495377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbb9f2265-a4')
Nov 25 16:31:54 compute-0 podman[290925]: 2025-11-25 16:31:54.386097576 +0000 UTC m=+0.100620618 container cleanup f1f2804b3cfce42ab69f968bcb0ec5b2bb57b5fd1b1f756e19e48a5d5d89ef66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 16:31:54 compute-0 systemd[1]: libpod-conmon-f1f2804b3cfce42ab69f968bcb0ec5b2bb57b5fd1b1f756e19e48a5d5d89ef66.scope: Deactivated successfully.
Nov 25 16:31:54 compute-0 podman[290974]: 2025-11-25 16:31:54.460295713 +0000 UTC m=+0.045991841 container remove f1f2804b3cfce42ab69f968bcb0ec5b2bb57b5fd1b1f756e19e48a5d5d89ef66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 16:31:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:54.467 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[99e228aa-baa3-413d-95e2-920d27213d21]: (4, ('Tue Nov 25 04:31:54 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377 (f1f2804b3cfce42ab69f968bcb0ec5b2bb57b5fd1b1f756e19e48a5d5d89ef66)\nf1f2804b3cfce42ab69f968bcb0ec5b2bb57b5fd1b1f756e19e48a5d5d89ef66\nTue Nov 25 04:31:54 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377 (f1f2804b3cfce42ab69f968bcb0ec5b2bb57b5fd1b1f756e19e48a5d5d89ef66)\nf1f2804b3cfce42ab69f968bcb0ec5b2bb57b5fd1b1f756e19e48a5d5d89ef66\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:54.469 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[dae8e346-3df2-459c-918f-ce73f645683e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:54.470 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50e18e22-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:31:54 compute-0 kernel: tap50e18e22-70: left promiscuous mode
Nov 25 16:31:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:54.476 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3d7ff43a-11b9-4b9b-9385-c651f2546497]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:54 compute-0 nova_compute[254092]: 2025-11-25 16:31:54.478 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:54 compute-0 nova_compute[254092]: 2025-11-25 16:31:54.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:31:54 compute-0 nova_compute[254092]: 2025-11-25 16:31:54.497 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:54 compute-0 nova_compute[254092]: 2025-11-25 16:31:54.498 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:31:54 compute-0 nova_compute[254092]: 2025-11-25 16:31:54.498 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 16:31:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:54.501 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ed202205-845e-41d8-ad05-7434d949301e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:54.502 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[91af90fd-fb36-4123-bac4-419262f79c2d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:54.522 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2276a1af-94a0-48ab-b7a0-8fc9392ad16e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 475434, 'reachable_time': 17749, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291001, 'error': None, 'target': 'ovnmeta-50e18e22-7850-458c-8d66-5932e0495377', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:54 compute-0 systemd[1]: run-netns-ovnmeta\x2d50e18e22\x2d7850\x2d458c\x2d8d66\x2d5932e0495377.mount: Deactivated successfully.
Nov 25 16:31:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:54.527 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-50e18e22-7850-458c-8d66-5932e0495377 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:31:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:54.527 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[6557e1d1-ee8a-4d68-a245-0159f4c0e607]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:54 compute-0 nova_compute[254092]: 2025-11-25 16:31:54.824 254096 INFO nova.virt.libvirt.driver [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Deleting instance files /var/lib/nova/instances/e1c7a84b-16df-49a8-83a7-a97bd47e0d43_del
Nov 25 16:31:54 compute-0 nova_compute[254092]: 2025-11-25 16:31:54.825 254096 INFO nova.virt.libvirt.driver [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Deletion of /var/lib/nova/instances/e1c7a84b-16df-49a8-83a7-a97bd47e0d43_del complete
Nov 25 16:31:54 compute-0 nova_compute[254092]: 2025-11-25 16:31:54.882 254096 INFO nova.compute.manager [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Took 0.76 seconds to destroy the instance on the hypervisor.
Nov 25 16:31:54 compute-0 nova_compute[254092]: 2025-11-25 16:31:54.883 254096 DEBUG oslo.service.loopingcall [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:31:54 compute-0 nova_compute[254092]: 2025-11-25 16:31:54.884 254096 DEBUG nova.compute.manager [-] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:31:54 compute-0 nova_compute[254092]: 2025-11-25 16:31:54.884 254096 DEBUG nova.network.neutron [-] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:31:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 16:31:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1729059012' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:31:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 16:31:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1729059012' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:31:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/1729059012' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:31:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/1729059012' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:31:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1334: 321 pgs: 321 active+clean; 194 MiB data, 458 MiB used, 60 GiB / 60 GiB avail; 7.0 MiB/s rd, 11 MiB/s wr, 499 op/s
Nov 25 16:31:55 compute-0 nova_compute[254092]: 2025-11-25 16:31:55.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:31:55 compute-0 nova_compute[254092]: 2025-11-25 16:31:55.582 254096 DEBUG nova.network.neutron [-] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:31:55 compute-0 nova_compute[254092]: 2025-11-25 16:31:55.603 254096 INFO nova.compute.manager [-] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Took 0.72 seconds to deallocate network for instance.
Nov 25 16:31:55 compute-0 podman[291004]: 2025-11-25 16:31:55.648783105 +0000 UTC m=+0.063994751 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 16:31:55 compute-0 nova_compute[254092]: 2025-11-25 16:31:55.652 254096 DEBUG oslo_concurrency.lockutils [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:31:55 compute-0 nova_compute[254092]: 2025-11-25 16:31:55.652 254096 DEBUG oslo_concurrency.lockutils [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:31:55 compute-0 podman[291003]: 2025-11-25 16:31:55.676087697 +0000 UTC m=+0.093609566 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:31:55 compute-0 nova_compute[254092]: 2025-11-25 16:31:55.726 254096 DEBUG nova.compute.manager [req-1dd83b3c-cba4-43cd-9b2d-fa129e2ba93d req-90916a40-057f-47ac-aa9f-711eed0bb092 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Received event network-vif-unplugged-bb9f2265-a40c-44da-bb0e-dc52c5d0873b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:31:55 compute-0 nova_compute[254092]: 2025-11-25 16:31:55.726 254096 DEBUG oslo_concurrency.lockutils [req-1dd83b3c-cba4-43cd-9b2d-fa129e2ba93d req-90916a40-057f-47ac-aa9f-711eed0bb092 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e1c7a84b-16df-49a8-83a7-a97bd47e0d43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:31:55 compute-0 nova_compute[254092]: 2025-11-25 16:31:55.727 254096 DEBUG oslo_concurrency.lockutils [req-1dd83b3c-cba4-43cd-9b2d-fa129e2ba93d req-90916a40-057f-47ac-aa9f-711eed0bb092 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e1c7a84b-16df-49a8-83a7-a97bd47e0d43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:31:55 compute-0 nova_compute[254092]: 2025-11-25 16:31:55.727 254096 DEBUG oslo_concurrency.lockutils [req-1dd83b3c-cba4-43cd-9b2d-fa129e2ba93d req-90916a40-057f-47ac-aa9f-711eed0bb092 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e1c7a84b-16df-49a8-83a7-a97bd47e0d43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:31:55 compute-0 nova_compute[254092]: 2025-11-25 16:31:55.727 254096 DEBUG nova.compute.manager [req-1dd83b3c-cba4-43cd-9b2d-fa129e2ba93d req-90916a40-057f-47ac-aa9f-711eed0bb092 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] No waiting events found dispatching network-vif-unplugged-bb9f2265-a40c-44da-bb0e-dc52c5d0873b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:31:55 compute-0 nova_compute[254092]: 2025-11-25 16:31:55.727 254096 WARNING nova.compute.manager [req-1dd83b3c-cba4-43cd-9b2d-fa129e2ba93d req-90916a40-057f-47ac-aa9f-711eed0bb092 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Received unexpected event network-vif-unplugged-bb9f2265-a40c-44da-bb0e-dc52c5d0873b for instance with vm_state deleted and task_state None.
Nov 25 16:31:55 compute-0 nova_compute[254092]: 2025-11-25 16:31:55.745 254096 DEBUG oslo_concurrency.processutils [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:31:55 compute-0 podman[291039]: 2025-11-25 16:31:55.775420699 +0000 UTC m=+0.073812179 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 16:31:55 compute-0 nova_compute[254092]: 2025-11-25 16:31:55.916 254096 DEBUG nova.network.neutron [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Updating instance_info_cache with network_info: [{"id": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "address": "fa:16:3e:83:61:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19d5425c-f0", "ovs_interfaceid": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "63499eed-d192-4aec-8ab6-1c3384834ed4", "address": "fa:16:3e:9c:56:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63499eed-d1", "ovs_interfaceid": "63499eed-d192-4aec-8ab6-1c3384834ed4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "address": "fa:16:3e:33:0b:3f", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa0c168b-de", "ovs_interfaceid": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fd70e1c0-089e-49c8-b856-6ffd16627e8b", "address": "fa:16:3e:4d:d0:f4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd70e1c0-08", "ovs_interfaceid": "fd70e1c0-089e-49c8-b856-6ffd16627e8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:31:55 compute-0 nova_compute[254092]: 2025-11-25 16:31:55.940 254096 DEBUG oslo_concurrency.lockutils [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Releasing lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:31:55 compute-0 nova_compute[254092]: 2025-11-25 16:31:55.941 254096 DEBUG oslo_concurrency.lockutils [req-4d3c3d21-86bc-4b0e-90bf-8789e6c6393e req-c9f36471-0bf8-4120-a3fa-76f0d6d9120d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:31:55 compute-0 nova_compute[254092]: 2025-11-25 16:31:55.941 254096 DEBUG nova.network.neutron [req-4d3c3d21-86bc-4b0e-90bf-8789e6c6393e req-c9f36471-0bf8-4120-a3fa-76f0d6d9120d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Refreshing network info cache for port fd70e1c0-089e-49c8-b856-6ffd16627e8b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:31:55 compute-0 nova_compute[254092]: 2025-11-25 16:31:55.943 254096 DEBUG nova.virt.libvirt.vif [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:30:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1158248018',display_name='tempest-AttachInterfacesTestJSON-server-1158248018',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1158248018',id=27,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJmM+hyi3seBAE3K46sBI2YUlQc/OgmOr+XAosDXSMWbsQMM7MSa9VCyL6NIiHdrd/Mi+L5291JhLvNY+nEX94KPtdKY+CwKghfhbYtUXWreEE2o3A99BKI+adAxYStDFw==',key_name='tempest-keypair-134592934',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:30:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-83v8y104',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:30:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=07003872-27e7-4fd9-80cf-a34257d5aa97,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fd70e1c0-089e-49c8-b856-6ffd16627e8b", "address": "fa:16:3e:4d:d0:f4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd70e1c0-08", "ovs_interfaceid": "fd70e1c0-089e-49c8-b856-6ffd16627e8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:31:55 compute-0 nova_compute[254092]: 2025-11-25 16:31:55.944 254096 DEBUG nova.network.os_vif_util [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "fd70e1c0-089e-49c8-b856-6ffd16627e8b", "address": "fa:16:3e:4d:d0:f4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd70e1c0-08", "ovs_interfaceid": "fd70e1c0-089e-49c8-b856-6ffd16627e8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:31:55 compute-0 nova_compute[254092]: 2025-11-25 16:31:55.944 254096 DEBUG nova.network.os_vif_util [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:d0:f4,bridge_name='br-int',has_traffic_filtering=True,id=fd70e1c0-089e-49c8-b856-6ffd16627e8b,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfd70e1c0-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:31:55 compute-0 nova_compute[254092]: 2025-11-25 16:31:55.945 254096 DEBUG os_vif [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:d0:f4,bridge_name='br-int',has_traffic_filtering=True,id=fd70e1c0-089e-49c8-b856-6ffd16627e8b,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfd70e1c0-08') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:31:55 compute-0 nova_compute[254092]: 2025-11-25 16:31:55.945 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:55 compute-0 nova_compute[254092]: 2025-11-25 16:31:55.946 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:31:55 compute-0 nova_compute[254092]: 2025-11-25 16:31:55.946 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:31:55 compute-0 nova_compute[254092]: 2025-11-25 16:31:55.948 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:55 compute-0 nova_compute[254092]: 2025-11-25 16:31:55.948 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfd70e1c0-08, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:31:55 compute-0 nova_compute[254092]: 2025-11-25 16:31:55.949 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfd70e1c0-08, col_values=(('external_ids', {'iface-id': 'fd70e1c0-089e-49c8-b856-6ffd16627e8b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4d:d0:f4', 'vm-uuid': '07003872-27e7-4fd9-80cf-a34257d5aa97'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:31:55 compute-0 nova_compute[254092]: 2025-11-25 16:31:55.950 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:55 compute-0 NetworkManager[48891]: <info>  [1764088315.9515] manager: (tapfd70e1c0-08): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/88)
Nov 25 16:31:55 compute-0 nova_compute[254092]: 2025-11-25 16:31:55.952 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:31:55 compute-0 nova_compute[254092]: 2025-11-25 16:31:55.961 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:55 compute-0 nova_compute[254092]: 2025-11-25 16:31:55.962 254096 INFO os_vif [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:d0:f4,bridge_name='br-int',has_traffic_filtering=True,id=fd70e1c0-089e-49c8-b856-6ffd16627e8b,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfd70e1c0-08')
Nov 25 16:31:55 compute-0 nova_compute[254092]: 2025-11-25 16:31:55.962 254096 DEBUG nova.virt.libvirt.vif [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:30:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1158248018',display_name='tempest-AttachInterfacesTestJSON-server-1158248018',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1158248018',id=27,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJmM+hyi3seBAE3K46sBI2YUlQc/OgmOr+XAosDXSMWbsQMM7MSa9VCyL6NIiHdrd/Mi+L5291JhLvNY+nEX94KPtdKY+CwKghfhbYtUXWreEE2o3A99BKI+adAxYStDFw==',key_name='tempest-keypair-134592934',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:30:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-83v8y104',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:30:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=07003872-27e7-4fd9-80cf-a34257d5aa97,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fd70e1c0-089e-49c8-b856-6ffd16627e8b", "address": "fa:16:3e:4d:d0:f4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd70e1c0-08", "ovs_interfaceid": "fd70e1c0-089e-49c8-b856-6ffd16627e8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:31:55 compute-0 nova_compute[254092]: 2025-11-25 16:31:55.963 254096 DEBUG nova.network.os_vif_util [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "fd70e1c0-089e-49c8-b856-6ffd16627e8b", "address": "fa:16:3e:4d:d0:f4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd70e1c0-08", "ovs_interfaceid": "fd70e1c0-089e-49c8-b856-6ffd16627e8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:31:55 compute-0 nova_compute[254092]: 2025-11-25 16:31:55.963 254096 DEBUG nova.network.os_vif_util [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:d0:f4,bridge_name='br-int',has_traffic_filtering=True,id=fd70e1c0-089e-49c8-b856-6ffd16627e8b,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfd70e1c0-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:31:55 compute-0 nova_compute[254092]: 2025-11-25 16:31:55.966 254096 DEBUG nova.virt.libvirt.guest [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] attach device xml: <interface type="ethernet">
Nov 25 16:31:55 compute-0 nova_compute[254092]:   <mac address="fa:16:3e:4d:d0:f4"/>
Nov 25 16:31:55 compute-0 nova_compute[254092]:   <model type="virtio"/>
Nov 25 16:31:55 compute-0 nova_compute[254092]:   <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:31:55 compute-0 nova_compute[254092]:   <mtu size="1442"/>
Nov 25 16:31:55 compute-0 nova_compute[254092]:   <target dev="tapfd70e1c0-08"/>
Nov 25 16:31:55 compute-0 nova_compute[254092]: </interface>
Nov 25 16:31:55 compute-0 nova_compute[254092]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 25 16:31:55 compute-0 kernel: tapfd70e1c0-08: entered promiscuous mode
Nov 25 16:31:55 compute-0 systemd-udevd[290903]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:31:55 compute-0 ovn_controller[153477]: 2025-11-25T16:31:55Z|00180|binding|INFO|Claiming lport fd70e1c0-089e-49c8-b856-6ffd16627e8b for this chassis.
Nov 25 16:31:55 compute-0 NetworkManager[48891]: <info>  [1764088315.9787] manager: (tapfd70e1c0-08): new Tun device (/org/freedesktop/NetworkManager/Devices/89)
Nov 25 16:31:55 compute-0 ovn_controller[153477]: 2025-11-25T16:31:55Z|00181|binding|INFO|fd70e1c0-089e-49c8-b856-6ffd16627e8b: Claiming fa:16:3e:4d:d0:f4 10.100.0.14
Nov 25 16:31:55 compute-0 nova_compute[254092]: 2025-11-25 16:31:55.979 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:55.985 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:d0:f4 10.100.0.14'], port_security=['fa:16:3e:4d:d0:f4 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-474064315', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '07003872-27e7-4fd9-80cf-a34257d5aa97', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-474064315', 'neutron:project_id': '36fd407edf16424e96041f57fccb9e2b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4d6ed831-26b5-4c46-a07a-309ffadbe01b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a0fe150-4b32-4abf-ad06-055b2d97facb, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=fd70e1c0-089e-49c8-b856-6ffd16627e8b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:31:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:55.986 163338 INFO neutron.agent.ovn.metadata.agent [-] Port fd70e1c0-089e-49c8-b856-6ffd16627e8b in datapath 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d bound to our chassis
Nov 25 16:31:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:55.987 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d
Nov 25 16:31:55 compute-0 NetworkManager[48891]: <info>  [1764088315.9895] device (tapfd70e1c0-08): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:31:55 compute-0 NetworkManager[48891]: <info>  [1764088315.9905] device (tapfd70e1c0-08): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:31:55 compute-0 ovn_controller[153477]: 2025-11-25T16:31:55Z|00182|binding|INFO|Setting lport fd70e1c0-089e-49c8-b856-6ffd16627e8b ovn-installed in OVS
Nov 25 16:31:55 compute-0 ovn_controller[153477]: 2025-11-25T16:31:55Z|00183|binding|INFO|Setting lport fd70e1c0-089e-49c8-b856-6ffd16627e8b up in Southbound
Nov 25 16:31:55 compute-0 nova_compute[254092]: 2025-11-25 16:31:55.996 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:56.004 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[29b7b034-929a-4688-b626-db3205329806]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:56.040 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ec955e79-5f3e-4b40-b233-74531a470908]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:56.043 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[23f1c6ff-2c14-4237-97cb-cb8f08f007b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:56 compute-0 nova_compute[254092]: 2025-11-25 16:31:56.067 254096 DEBUG nova.virt.libvirt.driver [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:31:56 compute-0 nova_compute[254092]: 2025-11-25 16:31:56.067 254096 DEBUG nova.virt.libvirt.driver [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:31:56 compute-0 nova_compute[254092]: 2025-11-25 16:31:56.067 254096 DEBUG nova.virt.libvirt.driver [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No VIF found with MAC fa:16:3e:83:61:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:31:56 compute-0 nova_compute[254092]: 2025-11-25 16:31:56.067 254096 DEBUG nova.virt.libvirt.driver [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No VIF found with MAC fa:16:3e:9c:56:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:31:56 compute-0 nova_compute[254092]: 2025-11-25 16:31:56.068 254096 DEBUG nova.virt.libvirt.driver [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No VIF found with MAC fa:16:3e:33:0b:3f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:31:56 compute-0 nova_compute[254092]: 2025-11-25 16:31:56.068 254096 DEBUG nova.virt.libvirt.driver [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No VIF found with MAC fa:16:3e:4d:d0:f4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:31:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:56.070 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[93d820c6-c830-4875-8104-b89ad13ce28e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:56.086 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8240135e-c599-4b28-a77c-796a56411789]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52e7d5b9-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:97:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 784, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 784, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471309, 'reachable_time': 23086, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291095, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:56 compute-0 nova_compute[254092]: 2025-11-25 16:31:56.097 254096 DEBUG nova.virt.libvirt.guest [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:31:56 compute-0 nova_compute[254092]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:31:56 compute-0 nova_compute[254092]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1158248018</nova:name>
Nov 25 16:31:56 compute-0 nova_compute[254092]:   <nova:creationTime>2025-11-25 16:31:56</nova:creationTime>
Nov 25 16:31:56 compute-0 nova_compute[254092]:   <nova:flavor name="m1.nano">
Nov 25 16:31:56 compute-0 nova_compute[254092]:     <nova:memory>128</nova:memory>
Nov 25 16:31:56 compute-0 nova_compute[254092]:     <nova:disk>1</nova:disk>
Nov 25 16:31:56 compute-0 nova_compute[254092]:     <nova:swap>0</nova:swap>
Nov 25 16:31:56 compute-0 nova_compute[254092]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:31:56 compute-0 nova_compute[254092]:     <nova:vcpus>1</nova:vcpus>
Nov 25 16:31:56 compute-0 nova_compute[254092]:   </nova:flavor>
Nov 25 16:31:56 compute-0 nova_compute[254092]:   <nova:owner>
Nov 25 16:31:56 compute-0 nova_compute[254092]:     <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 16:31:56 compute-0 nova_compute[254092]:     <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 16:31:56 compute-0 nova_compute[254092]:   </nova:owner>
Nov 25 16:31:56 compute-0 nova_compute[254092]:   <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:31:56 compute-0 nova_compute[254092]:   <nova:ports>
Nov 25 16:31:56 compute-0 nova_compute[254092]:     <nova:port uuid="19d5425c-f0c6-4c68-b8a6-cb1c6357d249">
Nov 25 16:31:56 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 16:31:56 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:31:56 compute-0 nova_compute[254092]:     <nova:port uuid="63499eed-d192-4aec-8ab6-1c3384834ed4">
Nov 25 16:31:56 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 16:31:56 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:31:56 compute-0 nova_compute[254092]:     <nova:port uuid="aa0c168b-de51-438f-a68b-f9c78a24ca7c">
Nov 25 16:31:56 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 16:31:56 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:31:56 compute-0 nova_compute[254092]:     <nova:port uuid="fd70e1c0-089e-49c8-b856-6ffd16627e8b">
Nov 25 16:31:56 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 16:31:56 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:31:56 compute-0 nova_compute[254092]:   </nova:ports>
Nov 25 16:31:56 compute-0 nova_compute[254092]: </nova:instance>
Nov 25 16:31:56 compute-0 nova_compute[254092]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 25 16:31:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:56.100 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0a27f5ab-e2bf-4eda-adff-f8fa6b107628]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 471320, 'tstamp': 471320}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291096, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 471323, 'tstamp': 471323}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291096, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:56.101 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52e7d5b9-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:31:56 compute-0 nova_compute[254092]: 2025-11-25 16:31:56.102 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:56 compute-0 nova_compute[254092]: 2025-11-25 16:31:56.104 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:56.104 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52e7d5b9-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:31:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:56.104 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:31:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:56.104 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap52e7d5b9-00, col_values=(('external_ids', {'iface-id': 'af5d2618-f326-4aaf-9395-f47548ac108b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:31:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:56.105 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:31:56 compute-0 nova_compute[254092]: 2025-11-25 16:31:56.124 254096 DEBUG oslo_concurrency.lockutils [None req-b92d16df-b228-487f-936e-f84712a25f36 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "interface-07003872-27e7-4fd9-80cf-a34257d5aa97-fd70e1c0-089e-49c8-b856-6ffd16627e8b" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 11.032s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:31:56 compute-0 ceph-mon[74985]: pgmap v1334: 321 pgs: 321 active+clean; 194 MiB data, 458 MiB used, 60 GiB / 60 GiB avail; 7.0 MiB/s rd, 11 MiB/s wr, 499 op/s
Nov 25 16:31:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:31:56 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/432131039' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:31:56 compute-0 nova_compute[254092]: 2025-11-25 16:31:56.297 254096 DEBUG oslo_concurrency.processutils [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:31:56 compute-0 nova_compute[254092]: 2025-11-25 16:31:56.303 254096 DEBUG nova.compute.provider_tree [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:31:56 compute-0 nova_compute[254092]: 2025-11-25 16:31:56.317 254096 DEBUG nova.scheduler.client.report [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:31:56 compute-0 nova_compute[254092]: 2025-11-25 16:31:56.338 254096 DEBUG oslo_concurrency.lockutils [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:31:56 compute-0 nova_compute[254092]: 2025-11-25 16:31:56.366 254096 INFO nova.scheduler.client.report [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Deleted allocations for instance e1c7a84b-16df-49a8-83a7-a97bd47e0d43
Nov 25 16:31:56 compute-0 nova_compute[254092]: 2025-11-25 16:31:56.436 254096 DEBUG oslo_concurrency.lockutils [None req-bed9b424-e587-4bf0-ab3c-7bce939a374a 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "e1c7a84b-16df-49a8-83a7-a97bd47e0d43" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.320s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:31:56 compute-0 nova_compute[254092]: 2025-11-25 16:31:56.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:31:56 compute-0 nova_compute[254092]: 2025-11-25 16:31:56.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:31:56 compute-0 nova_compute[254092]: 2025-11-25 16:31:56.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:31:56 compute-0 nova_compute[254092]: 2025-11-25 16:31:56.515 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:31:56 compute-0 nova_compute[254092]: 2025-11-25 16:31:56.516 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:31:56 compute-0 nova_compute[254092]: 2025-11-25 16:31:56.516 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:31:56 compute-0 nova_compute[254092]: 2025-11-25 16:31:56.516 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 16:31:56 compute-0 nova_compute[254092]: 2025-11-25 16:31:56.517 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:31:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:31:56 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2088680910' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:31:56 compute-0 nova_compute[254092]: 2025-11-25 16:31:56.982 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:31:57 compute-0 nova_compute[254092]: 2025-11-25 16:31:57.071 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:31:57 compute-0 nova_compute[254092]: 2025-11-25 16:31:57.072 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:31:57 compute-0 nova_compute[254092]: 2025-11-25 16:31:57.115 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:31:57 compute-0 nova_compute[254092]: 2025-11-25 16:31:57.115 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:31:57 compute-0 nova_compute[254092]: 2025-11-25 16:31:57.115 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:57 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/432131039' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:31:57 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2088680910' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:31:57 compute-0 nova_compute[254092]: 2025-11-25 16:31:57.289 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:31:57 compute-0 nova_compute[254092]: 2025-11-25 16:31:57.291 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4078MB free_disk=59.915714263916016GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 16:31:57 compute-0 nova_compute[254092]: 2025-11-25 16:31:57.291 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:31:57 compute-0 nova_compute[254092]: 2025-11-25 16:31:57.291 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:31:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1335: 321 pgs: 321 active+clean; 167 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.5 MiB/s wr, 295 op/s
Nov 25 16:31:57 compute-0 nova_compute[254092]: 2025-11-25 16:31:57.364 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 07003872-27e7-4fd9-80cf-a34257d5aa97 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:31:57 compute-0 nova_compute[254092]: 2025-11-25 16:31:57.365 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 36f65013-2906-4794-9e23-e92dc7814b6e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:31:57 compute-0 nova_compute[254092]: 2025-11-25 16:31:57.365 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 16:31:57 compute-0 nova_compute[254092]: 2025-11-25 16:31:57.365 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 16:31:57 compute-0 nova_compute[254092]: 2025-11-25 16:31:57.427 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:31:57 compute-0 nova_compute[254092]: 2025-11-25 16:31:57.616 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:57 compute-0 nova_compute[254092]: 2025-11-25 16:31:57.818 254096 DEBUG nova.compute.manager [req-a30044e3-a8f3-4bfa-8eaf-f35a4c6376a8 req-44b41759-b94a-44fc-805c-81024138d965 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Received event network-vif-plugged-bb9f2265-a40c-44da-bb0e-dc52c5d0873b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:31:57 compute-0 nova_compute[254092]: 2025-11-25 16:31:57.818 254096 DEBUG oslo_concurrency.lockutils [req-a30044e3-a8f3-4bfa-8eaf-f35a4c6376a8 req-44b41759-b94a-44fc-805c-81024138d965 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e1c7a84b-16df-49a8-83a7-a97bd47e0d43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:31:57 compute-0 nova_compute[254092]: 2025-11-25 16:31:57.818 254096 DEBUG oslo_concurrency.lockutils [req-a30044e3-a8f3-4bfa-8eaf-f35a4c6376a8 req-44b41759-b94a-44fc-805c-81024138d965 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e1c7a84b-16df-49a8-83a7-a97bd47e0d43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:31:57 compute-0 nova_compute[254092]: 2025-11-25 16:31:57.819 254096 DEBUG oslo_concurrency.lockutils [req-a30044e3-a8f3-4bfa-8eaf-f35a4c6376a8 req-44b41759-b94a-44fc-805c-81024138d965 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e1c7a84b-16df-49a8-83a7-a97bd47e0d43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:31:57 compute-0 nova_compute[254092]: 2025-11-25 16:31:57.819 254096 DEBUG nova.compute.manager [req-a30044e3-a8f3-4bfa-8eaf-f35a4c6376a8 req-44b41759-b94a-44fc-805c-81024138d965 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] No waiting events found dispatching network-vif-plugged-bb9f2265-a40c-44da-bb0e-dc52c5d0873b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:31:57 compute-0 nova_compute[254092]: 2025-11-25 16:31:57.819 254096 WARNING nova.compute.manager [req-a30044e3-a8f3-4bfa-8eaf-f35a4c6376a8 req-44b41759-b94a-44fc-805c-81024138d965 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Received unexpected event network-vif-plugged-bb9f2265-a40c-44da-bb0e-dc52c5d0873b for instance with vm_state deleted and task_state None.
Nov 25 16:31:57 compute-0 nova_compute[254092]: 2025-11-25 16:31:57.819 254096 DEBUG nova.compute.manager [req-a30044e3-a8f3-4bfa-8eaf-f35a4c6376a8 req-44b41759-b94a-44fc-805c-81024138d965 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Received event network-vif-deleted-bb9f2265-a40c-44da-bb0e-dc52c5d0873b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:31:57 compute-0 nova_compute[254092]: 2025-11-25 16:31:57.819 254096 DEBUG nova.compute.manager [req-a30044e3-a8f3-4bfa-8eaf-f35a4c6376a8 req-44b41759-b94a-44fc-805c-81024138d965 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received event network-vif-plugged-fd70e1c0-089e-49c8-b856-6ffd16627e8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:31:57 compute-0 nova_compute[254092]: 2025-11-25 16:31:57.819 254096 DEBUG oslo_concurrency.lockutils [req-a30044e3-a8f3-4bfa-8eaf-f35a4c6376a8 req-44b41759-b94a-44fc-805c-81024138d965 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:31:57 compute-0 nova_compute[254092]: 2025-11-25 16:31:57.819 254096 DEBUG oslo_concurrency.lockutils [req-a30044e3-a8f3-4bfa-8eaf-f35a4c6376a8 req-44b41759-b94a-44fc-805c-81024138d965 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:31:57 compute-0 nova_compute[254092]: 2025-11-25 16:31:57.820 254096 DEBUG oslo_concurrency.lockutils [req-a30044e3-a8f3-4bfa-8eaf-f35a4c6376a8 req-44b41759-b94a-44fc-805c-81024138d965 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:31:57 compute-0 nova_compute[254092]: 2025-11-25 16:31:57.820 254096 DEBUG nova.compute.manager [req-a30044e3-a8f3-4bfa-8eaf-f35a4c6376a8 req-44b41759-b94a-44fc-805c-81024138d965 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] No waiting events found dispatching network-vif-plugged-fd70e1c0-089e-49c8-b856-6ffd16627e8b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:31:57 compute-0 nova_compute[254092]: 2025-11-25 16:31:57.820 254096 WARNING nova.compute.manager [req-a30044e3-a8f3-4bfa-8eaf-f35a4c6376a8 req-44b41759-b94a-44fc-805c-81024138d965 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received unexpected event network-vif-plugged-fd70e1c0-089e-49c8-b856-6ffd16627e8b for instance with vm_state active and task_state None.
Nov 25 16:31:57 compute-0 nova_compute[254092]: 2025-11-25 16:31:57.820 254096 DEBUG nova.compute.manager [req-a30044e3-a8f3-4bfa-8eaf-f35a4c6376a8 req-44b41759-b94a-44fc-805c-81024138d965 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received event network-vif-plugged-fd70e1c0-089e-49c8-b856-6ffd16627e8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:31:57 compute-0 nova_compute[254092]: 2025-11-25 16:31:57.820 254096 DEBUG oslo_concurrency.lockutils [req-a30044e3-a8f3-4bfa-8eaf-f35a4c6376a8 req-44b41759-b94a-44fc-805c-81024138d965 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:31:57 compute-0 nova_compute[254092]: 2025-11-25 16:31:57.820 254096 DEBUG oslo_concurrency.lockutils [req-a30044e3-a8f3-4bfa-8eaf-f35a4c6376a8 req-44b41759-b94a-44fc-805c-81024138d965 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:31:57 compute-0 nova_compute[254092]: 2025-11-25 16:31:57.821 254096 DEBUG oslo_concurrency.lockutils [req-a30044e3-a8f3-4bfa-8eaf-f35a4c6376a8 req-44b41759-b94a-44fc-805c-81024138d965 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:31:57 compute-0 nova_compute[254092]: 2025-11-25 16:31:57.821 254096 DEBUG nova.compute.manager [req-a30044e3-a8f3-4bfa-8eaf-f35a4c6376a8 req-44b41759-b94a-44fc-805c-81024138d965 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] No waiting events found dispatching network-vif-plugged-fd70e1c0-089e-49c8-b856-6ffd16627e8b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:31:57 compute-0 nova_compute[254092]: 2025-11-25 16:31:57.821 254096 WARNING nova.compute.manager [req-a30044e3-a8f3-4bfa-8eaf-f35a4c6376a8 req-44b41759-b94a-44fc-805c-81024138d965 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received unexpected event network-vif-plugged-fd70e1c0-089e-49c8-b856-6ffd16627e8b for instance with vm_state active and task_state None.
Nov 25 16:31:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:31:57 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1732409056' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:31:57 compute-0 nova_compute[254092]: 2025-11-25 16:31:57.842 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:31:57 compute-0 nova_compute[254092]: 2025-11-25 16:31:57.847 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:31:57 compute-0 nova_compute[254092]: 2025-11-25 16:31:57.859 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:31:57 compute-0 nova_compute[254092]: 2025-11-25 16:31:57.902 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 16:31:57 compute-0 nova_compute[254092]: 2025-11-25 16:31:57.902 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:31:58 compute-0 ceph-mon[74985]: pgmap v1335: 321 pgs: 321 active+clean; 167 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.5 MiB/s wr, 295 op/s
Nov 25 16:31:58 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1732409056' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:31:58 compute-0 nova_compute[254092]: 2025-11-25 16:31:58.295 254096 DEBUG oslo_concurrency.lockutils [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "interface-07003872-27e7-4fd9-80cf-a34257d5aa97-63499eed-d192-4aec-8ab6-1c3384834ed4" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:31:58 compute-0 nova_compute[254092]: 2025-11-25 16:31:58.295 254096 DEBUG oslo_concurrency.lockutils [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "interface-07003872-27e7-4fd9-80cf-a34257d5aa97-63499eed-d192-4aec-8ab6-1c3384834ed4" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:31:58 compute-0 nova_compute[254092]: 2025-11-25 16:31:58.318 254096 DEBUG nova.objects.instance [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'flavor' on Instance uuid 07003872-27e7-4fd9-80cf-a34257d5aa97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:31:58 compute-0 nova_compute[254092]: 2025-11-25 16:31:58.346 254096 DEBUG nova.virt.libvirt.vif [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:30:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1158248018',display_name='tempest-AttachInterfacesTestJSON-server-1158248018',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1158248018',id=27,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJmM+hyi3seBAE3K46sBI2YUlQc/OgmOr+XAosDXSMWbsQMM7MSa9VCyL6NIiHdrd/Mi+L5291JhLvNY+nEX94KPtdKY+CwKghfhbYtUXWreEE2o3A99BKI+adAxYStDFw==',key_name='tempest-keypair-134592934',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:30:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-83v8y104',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:30:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=07003872-27e7-4fd9-80cf-a34257d5aa97,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "63499eed-d192-4aec-8ab6-1c3384834ed4", "address": "fa:16:3e:9c:56:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63499eed-d1", "ovs_interfaceid": "63499eed-d192-4aec-8ab6-1c3384834ed4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:31:58 compute-0 nova_compute[254092]: 2025-11-25 16:31:58.347 254096 DEBUG nova.network.os_vif_util [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "63499eed-d192-4aec-8ab6-1c3384834ed4", "address": "fa:16:3e:9c:56:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63499eed-d1", "ovs_interfaceid": "63499eed-d192-4aec-8ab6-1c3384834ed4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:31:58 compute-0 nova_compute[254092]: 2025-11-25 16:31:58.348 254096 DEBUG nova.network.os_vif_util [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9c:56:d4,bridge_name='br-int',has_traffic_filtering=True,id=63499eed-d192-4aec-8ab6-1c3384834ed4,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63499eed-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:31:58 compute-0 nova_compute[254092]: 2025-11-25 16:31:58.351 254096 DEBUG nova.virt.libvirt.guest [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:9c:56:d4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap63499eed-d1"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 16:31:58 compute-0 nova_compute[254092]: 2025-11-25 16:31:58.353 254096 DEBUG nova.virt.libvirt.guest [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:9c:56:d4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap63499eed-d1"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 16:31:58 compute-0 nova_compute[254092]: 2025-11-25 16:31:58.356 254096 DEBUG nova.virt.libvirt.driver [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Attempting to detach device tap63499eed-d1 from instance 07003872-27e7-4fd9-80cf-a34257d5aa97 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 25 16:31:58 compute-0 nova_compute[254092]: 2025-11-25 16:31:58.356 254096 DEBUG nova.virt.libvirt.guest [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] detach device xml: <interface type="ethernet">
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <mac address="fa:16:3e:9c:56:d4"/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <model type="virtio"/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <mtu size="1442"/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <target dev="tap63499eed-d1"/>
Nov 25 16:31:58 compute-0 nova_compute[254092]: </interface>
Nov 25 16:31:58 compute-0 nova_compute[254092]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 25 16:31:58 compute-0 nova_compute[254092]: 2025-11-25 16:31:58.370 254096 DEBUG nova.virt.libvirt.guest [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:9c:56:d4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap63499eed-d1"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 16:31:58 compute-0 nova_compute[254092]: 2025-11-25 16:31:58.374 254096 DEBUG nova.virt.libvirt.guest [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:9c:56:d4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap63499eed-d1"/></interface>not found in domain: <domain type='kvm' id='31'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <name>instance-0000001b</name>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <uuid>07003872-27e7-4fd9-80cf-a34257d5aa97</uuid>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1158248018</nova:name>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <nova:creationTime>2025-11-25 16:31:56</nova:creationTime>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <nova:flavor name="m1.nano">
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <nova:memory>128</nova:memory>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <nova:disk>1</nova:disk>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <nova:swap>0</nova:swap>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <nova:vcpus>1</nova:vcpus>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   </nova:flavor>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <nova:owner>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   </nova:owner>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <nova:ports>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <nova:port uuid="19d5425c-f0c6-4c68-b8a6-cb1c6357d249">
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <nova:port uuid="63499eed-d192-4aec-8ab6-1c3384834ed4">
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <nova:port uuid="aa0c168b-de51-438f-a68b-f9c78a24ca7c">
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <nova:port uuid="fd70e1c0-089e-49c8-b856-6ffd16627e8b">
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   </nova:ports>
Nov 25 16:31:58 compute-0 nova_compute[254092]: </nova:instance>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <memory unit='KiB'>131072</memory>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <vcpu placement='static'>1</vcpu>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <resource>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <partition>/machine</partition>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   </resource>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <sysinfo type='smbios'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <system>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <entry name='manufacturer'>RDO</entry>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <entry name='product'>OpenStack Compute</entry>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <entry name='serial'>07003872-27e7-4fd9-80cf-a34257d5aa97</entry>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <entry name='uuid'>07003872-27e7-4fd9-80cf-a34257d5aa97</entry>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <entry name='family'>Virtual Machine</entry>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </system>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <os>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <boot dev='hd'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <smbios mode='sysinfo'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   </os>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <features>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <vmcoreinfo state='on'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   </features>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <cpu mode='custom' match='exact' check='full'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <vendor>AMD</vendor>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='require' name='x2apic'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='require' name='tsc-deadline'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='require' name='hypervisor'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='require' name='tsc_adjust'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='require' name='spec-ctrl'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='require' name='stibp'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='require' name='ssbd'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='require' name='cmp_legacy'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='require' name='overflow-recov'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='require' name='succor'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='require' name='ibrs'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='require' name='amd-ssbd'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='require' name='virt-ssbd'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='disable' name='lbrv'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='disable' name='tsc-scale'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='disable' name='vmcb-clean'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='disable' name='flushbyasid'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='disable' name='pause-filter'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='disable' name='pfthreshold'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='disable' name='xsaves'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='disable' name='svm'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='require' name='topoext'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='disable' name='npt'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='disable' name='nrip-save'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <clock offset='utc'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <timer name='pit' tickpolicy='delay'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <timer name='hpet' present='no'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <on_poweroff>destroy</on_poweroff>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <on_reboot>restart</on_reboot>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <on_crash>destroy</on_crash>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <disk type='network' device='disk'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <auth username='openstack'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:         <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <source protocol='rbd' name='vms/07003872-27e7-4fd9-80cf-a34257d5aa97_disk' index='2'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:         <host name='192.168.122.100' port='6789'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       </source>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target dev='vda' bus='virtio'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='virtio-disk0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <disk type='network' device='cdrom'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <auth username='openstack'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:         <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <source protocol='rbd' name='vms/07003872-27e7-4fd9-80cf-a34257d5aa97_disk.config' index='1'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:         <host name='192.168.122.100' port='6789'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       </source>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target dev='sda' bus='sata'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <readonly/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='sata0-0-0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='0' model='pcie-root'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pcie.0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='1' port='0x10'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.1'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='2' port='0x11'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.2'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='3' port='0x12'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.3'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='4' port='0x13'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.4'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='5' port='0x14'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.5'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='6' port='0x15'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.6'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='7' port='0x16'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.7'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='8' port='0x17'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.8'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='9' port='0x18'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.9'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='10' port='0x19'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.10'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='11' port='0x1a'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.11'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='12' port='0x1b'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.12'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='13' port='0x1c'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.13'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='14' port='0x1d'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.14'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='15' port='0x1e'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.15'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='16' port='0x1f'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.16'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='17' port='0x20'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.17'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='18' port='0x21'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.18'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='19' port='0x22'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.19'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='20' port='0x23'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.20'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='21' port='0x24'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.21'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='22' port='0x25'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.22'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='23' port='0x26'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.23'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='24' port='0x27'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.24'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='25' port='0x28'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.25'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-pci-bridge'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.26'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='usb'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='sata' index='0'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='ide'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <interface type='ethernet'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <mac address='fa:16:3e:83:61:d4'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target dev='tap19d5425c-f0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model type='virtio'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <mtu size='1442'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='net0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <interface type='ethernet'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <mac address='fa:16:3e:9c:56:d4'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target dev='tap63499eed-d1'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model type='virtio'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <mtu size='1442'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='net1'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <interface type='ethernet'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <mac address='fa:16:3e:33:0b:3f'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target dev='tapaa0c168b-de'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model type='virtio'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <mtu size='1442'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='net2'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <interface type='ethernet'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <mac address='fa:16:3e:4d:d0:f4'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target dev='tapfd70e1c0-08'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model type='virtio'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <mtu size='1442'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='net3'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <serial type='pty'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <source path='/dev/pts/1'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <log file='/var/lib/nova/instances/07003872-27e7-4fd9-80cf-a34257d5aa97/console.log' append='off'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target type='isa-serial' port='0'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:         <model name='isa-serial'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       </target>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='serial0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <console type='pty' tty='/dev/pts/1'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <source path='/dev/pts/1'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <log file='/var/lib/nova/instances/07003872-27e7-4fd9-80cf-a34257d5aa97/console.log' append='off'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target type='serial' port='0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='serial0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </console>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <input type='tablet' bus='usb'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='input0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='usb' bus='0' port='1'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </input>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <input type='mouse' bus='ps2'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='input1'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </input>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <input type='keyboard' bus='ps2'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='input2'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </input>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <listen type='address' address='::0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </graphics>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <audio id='1' type='none'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <video>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model type='virtio' heads='1' primary='yes'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='video0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </video>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <watchdog model='itco' action='reset'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='watchdog0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </watchdog>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <memballoon model='virtio'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <stats period='10'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='balloon0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <rng model='virtio'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <backend model='random'>/dev/urandom</backend>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='rng0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <label>system_u:system_r:svirt_t:s0:c683,c757</label>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c683,c757</imagelabel>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   </seclabel>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <label>+107:+107</label>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <imagelabel>+107:+107</imagelabel>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   </seclabel>
Nov 25 16:31:58 compute-0 nova_compute[254092]: </domain>
Nov 25 16:31:58 compute-0 nova_compute[254092]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 16:31:58 compute-0 nova_compute[254092]: 2025-11-25 16:31:58.374 254096 INFO nova.virt.libvirt.driver [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Successfully detached device tap63499eed-d1 from instance 07003872-27e7-4fd9-80cf-a34257d5aa97 from the persistent domain config.
Nov 25 16:31:58 compute-0 nova_compute[254092]: 2025-11-25 16:31:58.374 254096 DEBUG nova.virt.libvirt.driver [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] (1/8): Attempting to detach device tap63499eed-d1 with device alias net1 from instance 07003872-27e7-4fd9-80cf-a34257d5aa97 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 25 16:31:58 compute-0 nova_compute[254092]: 2025-11-25 16:31:58.375 254096 DEBUG nova.virt.libvirt.guest [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] detach device xml: <interface type="ethernet">
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <mac address="fa:16:3e:9c:56:d4"/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <model type="virtio"/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <mtu size="1442"/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <target dev="tap63499eed-d1"/>
Nov 25 16:31:58 compute-0 nova_compute[254092]: </interface>
Nov 25 16:31:58 compute-0 nova_compute[254092]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 25 16:31:58 compute-0 kernel: tap63499eed-d1 (unregistering): left promiscuous mode
Nov 25 16:31:58 compute-0 NetworkManager[48891]: <info>  [1764088318.4873] device (tap63499eed-d1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:31:58 compute-0 nova_compute[254092]: 2025-11-25 16:31:58.492 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:58 compute-0 ovn_controller[153477]: 2025-11-25T16:31:58Z|00184|binding|INFO|Releasing lport 63499eed-d192-4aec-8ab6-1c3384834ed4 from this chassis (sb_readonly=0)
Nov 25 16:31:58 compute-0 ovn_controller[153477]: 2025-11-25T16:31:58Z|00185|binding|INFO|Setting lport 63499eed-d192-4aec-8ab6-1c3384834ed4 down in Southbound
Nov 25 16:31:58 compute-0 ovn_controller[153477]: 2025-11-25T16:31:58Z|00186|binding|INFO|Removing iface tap63499eed-d1 ovn-installed in OVS
Nov 25 16:31:58 compute-0 nova_compute[254092]: 2025-11-25 16:31:58.495 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:58 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:58.501 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:56:d4 10.100.0.12'], port_security=['fa:16:3e:9c:56:d4 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '07003872-27e7-4fd9-80cf-a34257d5aa97', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36fd407edf16424e96041f57fccb9e2b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4d6ed831-26b5-4c46-a07a-309ffadbe01b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a0fe150-4b32-4abf-ad06-055b2d97facb, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=63499eed-d192-4aec-8ab6-1c3384834ed4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:31:58 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:58.502 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 63499eed-d192-4aec-8ab6-1c3384834ed4 in datapath 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d unbound from our chassis
Nov 25 16:31:58 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:58.504 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d
Nov 25 16:31:58 compute-0 nova_compute[254092]: 2025-11-25 16:31:58.512 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:58 compute-0 nova_compute[254092]: 2025-11-25 16:31:58.513 254096 DEBUG nova.virt.libvirt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Received event <DeviceRemovedEvent: 1764088318.5118237, 07003872-27e7-4fd9-80cf-a34257d5aa97 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 25 16:31:58 compute-0 nova_compute[254092]: 2025-11-25 16:31:58.514 254096 DEBUG nova.virt.libvirt.driver [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Start waiting for the detach event from libvirt for device tap63499eed-d1 with device alias net1 for instance 07003872-27e7-4fd9-80cf-a34257d5aa97 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 25 16:31:58 compute-0 nova_compute[254092]: 2025-11-25 16:31:58.514 254096 DEBUG nova.virt.libvirt.guest [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:9c:56:d4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap63499eed-d1"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 16:31:58 compute-0 nova_compute[254092]: 2025-11-25 16:31:58.527 254096 DEBUG nova.virt.libvirt.guest [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:9c:56:d4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap63499eed-d1"/></interface>not found in domain: <domain type='kvm' id='31'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <name>instance-0000001b</name>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <uuid>07003872-27e7-4fd9-80cf-a34257d5aa97</uuid>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1158248018</nova:name>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <nova:creationTime>2025-11-25 16:31:56</nova:creationTime>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <nova:flavor name="m1.nano">
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <nova:memory>128</nova:memory>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <nova:disk>1</nova:disk>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <nova:swap>0</nova:swap>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <nova:vcpus>1</nova:vcpus>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   </nova:flavor>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <nova:owner>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   </nova:owner>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <nova:ports>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <nova:port uuid="19d5425c-f0c6-4c68-b8a6-cb1c6357d249">
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <nova:port uuid="63499eed-d192-4aec-8ab6-1c3384834ed4">
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <nova:port uuid="aa0c168b-de51-438f-a68b-f9c78a24ca7c">
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <nova:port uuid="fd70e1c0-089e-49c8-b856-6ffd16627e8b">
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   </nova:ports>
Nov 25 16:31:58 compute-0 nova_compute[254092]: </nova:instance>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <memory unit='KiB'>131072</memory>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <vcpu placement='static'>1</vcpu>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <resource>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <partition>/machine</partition>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   </resource>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <sysinfo type='smbios'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <system>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <entry name='manufacturer'>RDO</entry>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <entry name='product'>OpenStack Compute</entry>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <entry name='serial'>07003872-27e7-4fd9-80cf-a34257d5aa97</entry>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <entry name='uuid'>07003872-27e7-4fd9-80cf-a34257d5aa97</entry>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <entry name='family'>Virtual Machine</entry>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </system>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <os>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <boot dev='hd'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <smbios mode='sysinfo'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   </os>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <features>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <vmcoreinfo state='on'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   </features>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <cpu mode='custom' match='exact' check='full'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <vendor>AMD</vendor>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='require' name='x2apic'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='require' name='tsc-deadline'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='require' name='hypervisor'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='require' name='tsc_adjust'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='require' name='spec-ctrl'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='require' name='stibp'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='require' name='ssbd'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='require' name='cmp_legacy'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='require' name='overflow-recov'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='require' name='succor'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='require' name='ibrs'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='require' name='amd-ssbd'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='require' name='virt-ssbd'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='disable' name='lbrv'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='disable' name='tsc-scale'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='disable' name='vmcb-clean'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='disable' name='flushbyasid'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='disable' name='pause-filter'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='disable' name='pfthreshold'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='disable' name='xsaves'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='disable' name='svm'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='require' name='topoext'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='disable' name='npt'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <feature policy='disable' name='nrip-save'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <clock offset='utc'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <timer name='pit' tickpolicy='delay'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <timer name='hpet' present='no'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <on_poweroff>destroy</on_poweroff>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <on_reboot>restart</on_reboot>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <on_crash>destroy</on_crash>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <disk type='network' device='disk'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <auth username='openstack'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:         <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <source protocol='rbd' name='vms/07003872-27e7-4fd9-80cf-a34257d5aa97_disk' index='2'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:         <host name='192.168.122.100' port='6789'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       </source>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target dev='vda' bus='virtio'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='virtio-disk0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:31:58 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:58.526 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0b05dc6f-4889-4380-b791-797384b5612b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <disk type='network' device='cdrom'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <auth username='openstack'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:         <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <source protocol='rbd' name='vms/07003872-27e7-4fd9-80cf-a34257d5aa97_disk.config' index='1'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:         <host name='192.168.122.100' port='6789'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       </source>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target dev='sda' bus='sata'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <readonly/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='sata0-0-0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='0' model='pcie-root'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pcie.0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='1' port='0x10'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.1'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='2' port='0x11'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.2'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='3' port='0x12'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.3'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='4' port='0x13'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.4'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='5' port='0x14'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.5'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='6' port='0x15'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.6'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='7' port='0x16'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.7'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='8' port='0x17'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.8'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='9' port='0x18'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.9'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='10' port='0x19'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.10'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='11' port='0x1a'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.11'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='12' port='0x1b'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.12'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='13' port='0x1c'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.13'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='14' port='0x1d'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.14'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='15' port='0x1e'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.15'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='16' port='0x1f'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.16'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='17' port='0x20'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.17'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='18' port='0x21'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.18'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='19' port='0x22'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.19'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='20' port='0x23'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.20'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='21' port='0x24'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.21'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='22' port='0x25'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.22'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='23' port='0x26'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.23'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='24' port='0x27'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.24'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target chassis='25' port='0x28'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.25'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model name='pcie-pci-bridge'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='pci.26'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='usb'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <controller type='sata' index='0'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='ide'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <interface type='ethernet'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <mac address='fa:16:3e:83:61:d4'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target dev='tap19d5425c-f0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model type='virtio'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <mtu size='1442'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='net0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <interface type='ethernet'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <mac address='fa:16:3e:33:0b:3f'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target dev='tapaa0c168b-de'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model type='virtio'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <mtu size='1442'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='net2'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <interface type='ethernet'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <mac address='fa:16:3e:4d:d0:f4'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target dev='tapfd70e1c0-08'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model type='virtio'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <mtu size='1442'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='net3'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <serial type='pty'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <source path='/dev/pts/1'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <log file='/var/lib/nova/instances/07003872-27e7-4fd9-80cf-a34257d5aa97/console.log' append='off'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target type='isa-serial' port='0'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:         <model name='isa-serial'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       </target>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='serial0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <console type='pty' tty='/dev/pts/1'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <source path='/dev/pts/1'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <log file='/var/lib/nova/instances/07003872-27e7-4fd9-80cf-a34257d5aa97/console.log' append='off'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <target type='serial' port='0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='serial0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </console>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <input type='tablet' bus='usb'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='input0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='usb' bus='0' port='1'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </input>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <input type='mouse' bus='ps2'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='input1'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </input>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <input type='keyboard' bus='ps2'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='input2'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </input>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <listen type='address' address='::0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </graphics>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <audio id='1' type='none'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <video>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <model type='virtio' heads='1' primary='yes'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='video0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </video>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <watchdog model='itco' action='reset'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='watchdog0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </watchdog>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <memballoon model='virtio'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <stats period='10'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='balloon0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <rng model='virtio'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <backend model='random'>/dev/urandom</backend>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <alias name='rng0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <label>system_u:system_r:svirt_t:s0:c683,c757</label>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c683,c757</imagelabel>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   </seclabel>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <label>+107:+107</label>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <imagelabel>+107:+107</imagelabel>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   </seclabel>
Nov 25 16:31:58 compute-0 nova_compute[254092]: </domain>
Nov 25 16:31:58 compute-0 nova_compute[254092]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 16:31:58 compute-0 nova_compute[254092]: 2025-11-25 16:31:58.528 254096 INFO nova.virt.libvirt.driver [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Successfully detached device tap63499eed-d1 from instance 07003872-27e7-4fd9-80cf-a34257d5aa97 from the live domain config.
Nov 25 16:31:58 compute-0 nova_compute[254092]: 2025-11-25 16:31:58.528 254096 DEBUG nova.virt.libvirt.vif [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:30:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1158248018',display_name='tempest-AttachInterfacesTestJSON-server-1158248018',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1158248018',id=27,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJmM+hyi3seBAE3K46sBI2YUlQc/OgmOr+XAosDXSMWbsQMM7MSa9VCyL6NIiHdrd/Mi+L5291JhLvNY+nEX94KPtdKY+CwKghfhbYtUXWreEE2o3A99BKI+adAxYStDFw==',key_name='tempest-keypair-134592934',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:30:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-83v8y104',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:30:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=07003872-27e7-4fd9-80cf-a34257d5aa97,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "63499eed-d192-4aec-8ab6-1c3384834ed4", "address": "fa:16:3e:9c:56:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63499eed-d1", "ovs_interfaceid": "63499eed-d192-4aec-8ab6-1c3384834ed4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:31:58 compute-0 nova_compute[254092]: 2025-11-25 16:31:58.529 254096 DEBUG nova.network.os_vif_util [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "63499eed-d192-4aec-8ab6-1c3384834ed4", "address": "fa:16:3e:9c:56:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63499eed-d1", "ovs_interfaceid": "63499eed-d192-4aec-8ab6-1c3384834ed4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:31:58 compute-0 nova_compute[254092]: 2025-11-25 16:31:58.529 254096 DEBUG nova.network.os_vif_util [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9c:56:d4,bridge_name='br-int',has_traffic_filtering=True,id=63499eed-d192-4aec-8ab6-1c3384834ed4,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63499eed-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:31:58 compute-0 nova_compute[254092]: 2025-11-25 16:31:58.530 254096 DEBUG os_vif [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9c:56:d4,bridge_name='br-int',has_traffic_filtering=True,id=63499eed-d192-4aec-8ab6-1c3384834ed4,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63499eed-d1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:31:58 compute-0 nova_compute[254092]: 2025-11-25 16:31:58.531 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:58 compute-0 nova_compute[254092]: 2025-11-25 16:31:58.531 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap63499eed-d1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:31:58 compute-0 nova_compute[254092]: 2025-11-25 16:31:58.537 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:31:58 compute-0 nova_compute[254092]: 2025-11-25 16:31:58.539 254096 INFO os_vif [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9c:56:d4,bridge_name='br-int',has_traffic_filtering=True,id=63499eed-d192-4aec-8ab6-1c3384834ed4,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63499eed-d1')
Nov 25 16:31:58 compute-0 nova_compute[254092]: 2025-11-25 16:31:58.540 254096 DEBUG nova.virt.libvirt.guest [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1158248018</nova:name>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <nova:creationTime>2025-11-25 16:31:58</nova:creationTime>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <nova:flavor name="m1.nano">
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <nova:memory>128</nova:memory>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <nova:disk>1</nova:disk>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <nova:swap>0</nova:swap>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <nova:vcpus>1</nova:vcpus>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   </nova:flavor>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <nova:owner>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   </nova:owner>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   <nova:ports>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <nova:port uuid="19d5425c-f0c6-4c68-b8a6-cb1c6357d249">
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <nova:port uuid="aa0c168b-de51-438f-a68b-f9c78a24ca7c">
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     <nova:port uuid="fd70e1c0-089e-49c8-b856-6ffd16627e8b">
Nov 25 16:31:58 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 16:31:58 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:31:58 compute-0 nova_compute[254092]:   </nova:ports>
Nov 25 16:31:58 compute-0 nova_compute[254092]: </nova:instance>
Nov 25 16:31:58 compute-0 nova_compute[254092]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 25 16:31:58 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:58.555 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[059aa060-3f2d-45a3-a929-6f83891f03cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:58 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:58.558 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6a6c6d7d-0044-4de7-b0c8-244a217ab1aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:58 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:58.584 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1e3ce9b1-1722-4745-90b0-2f922fad0e99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:58 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:58.598 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b244d82b-866a-435c-890e-aad15e76f074]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52e7d5b9-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:97:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471309, 'reachable_time': 23086, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291155, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:58 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:58.613 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[45eed61f-2356-483a-a8c5-18e460a1722c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 471320, 'tstamp': 471320}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291156, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 471323, 'tstamp': 471323}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291156, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:31:58 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:58.614 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52e7d5b9-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:31:58 compute-0 nova_compute[254092]: 2025-11-25 16:31:58.616 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:58 compute-0 nova_compute[254092]: 2025-11-25 16:31:58.617 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:31:58 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:58.617 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52e7d5b9-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:31:58 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:58.617 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:31:58 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:58.618 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap52e7d5b9-00, col_values=(('external_ids', {'iface-id': 'af5d2618-f326-4aaf-9395-f47548ac108b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:31:58 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:31:58.618 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:31:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:31:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Nov 25 16:31:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Nov 25 16:31:58 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Nov 25 16:31:59 compute-0 ovn_controller[153477]: 2025-11-25T16:31:59Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4d:d0:f4 10.100.0.14
Nov 25 16:31:59 compute-0 ovn_controller[153477]: 2025-11-25T16:31:59Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4d:d0:f4 10.100.0.14
Nov 25 16:31:59 compute-0 nova_compute[254092]: 2025-11-25 16:31:59.088 254096 DEBUG nova.network.neutron [req-4d3c3d21-86bc-4b0e-90bf-8789e6c6393e req-c9f36471-0bf8-4120-a3fa-76f0d6d9120d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Updated VIF entry in instance network info cache for port fd70e1c0-089e-49c8-b856-6ffd16627e8b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:31:59 compute-0 nova_compute[254092]: 2025-11-25 16:31:59.088 254096 DEBUG nova.network.neutron [req-4d3c3d21-86bc-4b0e-90bf-8789e6c6393e req-c9f36471-0bf8-4120-a3fa-76f0d6d9120d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Updating instance_info_cache with network_info: [{"id": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "address": "fa:16:3e:83:61:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19d5425c-f0", "ovs_interfaceid": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "63499eed-d192-4aec-8ab6-1c3384834ed4", "address": "fa:16:3e:9c:56:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63499eed-d1", "ovs_interfaceid": "63499eed-d192-4aec-8ab6-1c3384834ed4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "address": "fa:16:3e:33:0b:3f", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa0c168b-de", "ovs_interfaceid": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fd70e1c0-089e-49c8-b856-6ffd16627e8b", "address": "fa:16:3e:4d:d0:f4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd70e1c0-08", "ovs_interfaceid": "fd70e1c0-089e-49c8-b856-6ffd16627e8b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:31:59 compute-0 nova_compute[254092]: 2025-11-25 16:31:59.108 254096 DEBUG oslo_concurrency.lockutils [req-4d3c3d21-86bc-4b0e-90bf-8789e6c6393e req-c9f36471-0bf8-4120-a3fa-76f0d6d9120d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:31:59 compute-0 nova_compute[254092]: 2025-11-25 16:31:59.286 254096 DEBUG nova.compute.manager [req-29a98fc1-4485-4e46-ab24-895a09773f78 req-5ed61e95-64ad-40b6-bf99-6150b1ebbdf9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received event network-vif-unplugged-63499eed-d192-4aec-8ab6-1c3384834ed4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:31:59 compute-0 nova_compute[254092]: 2025-11-25 16:31:59.286 254096 DEBUG oslo_concurrency.lockutils [req-29a98fc1-4485-4e46-ab24-895a09773f78 req-5ed61e95-64ad-40b6-bf99-6150b1ebbdf9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:31:59 compute-0 nova_compute[254092]: 2025-11-25 16:31:59.287 254096 DEBUG oslo_concurrency.lockutils [req-29a98fc1-4485-4e46-ab24-895a09773f78 req-5ed61e95-64ad-40b6-bf99-6150b1ebbdf9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:31:59 compute-0 nova_compute[254092]: 2025-11-25 16:31:59.287 254096 DEBUG oslo_concurrency.lockutils [req-29a98fc1-4485-4e46-ab24-895a09773f78 req-5ed61e95-64ad-40b6-bf99-6150b1ebbdf9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:31:59 compute-0 nova_compute[254092]: 2025-11-25 16:31:59.287 254096 DEBUG nova.compute.manager [req-29a98fc1-4485-4e46-ab24-895a09773f78 req-5ed61e95-64ad-40b6-bf99-6150b1ebbdf9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] No waiting events found dispatching network-vif-unplugged-63499eed-d192-4aec-8ab6-1c3384834ed4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:31:59 compute-0 nova_compute[254092]: 2025-11-25 16:31:59.287 254096 WARNING nova.compute.manager [req-29a98fc1-4485-4e46-ab24-895a09773f78 req-5ed61e95-64ad-40b6-bf99-6150b1ebbdf9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received unexpected event network-vif-unplugged-63499eed-d192-4aec-8ab6-1c3384834ed4 for instance with vm_state active and task_state None.
Nov 25 16:31:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1337: 321 pgs: 321 active+clean; 167 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 144 KiB/s wr, 214 op/s
Nov 25 16:31:59 compute-0 ceph-mon[74985]: osdmap e167: 3 total, 3 up, 3 in
Nov 25 16:32:00 compute-0 nova_compute[254092]: 2025-11-25 16:32:00.017 254096 DEBUG oslo_concurrency.lockutils [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:32:00 compute-0 nova_compute[254092]: 2025-11-25 16:32:00.017 254096 DEBUG oslo_concurrency.lockutils [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquired lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:32:00 compute-0 nova_compute[254092]: 2025-11-25 16:32:00.017 254096 DEBUG nova.network.neutron [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:32:00 compute-0 nova_compute[254092]: 2025-11-25 16:32:00.904 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:32:00 compute-0 nova_compute[254092]: 2025-11-25 16:32:00.905 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:32:00 compute-0 nova_compute[254092]: 2025-11-25 16:32:00.905 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 16:32:00 compute-0 nova_compute[254092]: 2025-11-25 16:32:00.905 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 16:32:00 compute-0 ceph-mon[74985]: pgmap v1337: 321 pgs: 321 active+clean; 167 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 144 KiB/s wr, 214 op/s
Nov 25 16:32:01 compute-0 nova_compute[254092]: 2025-11-25 16:32:01.192 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:32:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1338: 321 pgs: 321 active+clean; 167 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 126 KiB/s wr, 188 op/s
Nov 25 16:32:01 compute-0 nova_compute[254092]: 2025-11-25 16:32:01.388 254096 DEBUG nova.compute.manager [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received event network-vif-plugged-63499eed-d192-4aec-8ab6-1c3384834ed4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:32:01 compute-0 nova_compute[254092]: 2025-11-25 16:32:01.388 254096 DEBUG oslo_concurrency.lockutils [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:01 compute-0 nova_compute[254092]: 2025-11-25 16:32:01.389 254096 DEBUG oslo_concurrency.lockutils [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:01 compute-0 nova_compute[254092]: 2025-11-25 16:32:01.389 254096 DEBUG oslo_concurrency.lockutils [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:01 compute-0 nova_compute[254092]: 2025-11-25 16:32:01.390 254096 DEBUG nova.compute.manager [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] No waiting events found dispatching network-vif-plugged-63499eed-d192-4aec-8ab6-1c3384834ed4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:32:01 compute-0 nova_compute[254092]: 2025-11-25 16:32:01.390 254096 WARNING nova.compute.manager [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received unexpected event network-vif-plugged-63499eed-d192-4aec-8ab6-1c3384834ed4 for instance with vm_state active and task_state None.
Nov 25 16:32:01 compute-0 nova_compute[254092]: 2025-11-25 16:32:01.390 254096 DEBUG nova.compute.manager [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received event network-vif-deleted-63499eed-d192-4aec-8ab6-1c3384834ed4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:32:01 compute-0 nova_compute[254092]: 2025-11-25 16:32:01.391 254096 INFO nova.compute.manager [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Neutron deleted interface 63499eed-d192-4aec-8ab6-1c3384834ed4; detaching it from the instance and deleting it from the info cache
Nov 25 16:32:01 compute-0 nova_compute[254092]: 2025-11-25 16:32:01.391 254096 DEBUG nova.network.neutron [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Updating instance_info_cache with network_info: [{"id": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "address": "fa:16:3e:83:61:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19d5425c-f0", "ovs_interfaceid": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "address": "fa:16:3e:33:0b:3f", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa0c168b-de", "ovs_interfaceid": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fd70e1c0-089e-49c8-b856-6ffd16627e8b", "address": "fa:16:3e:4d:d0:f4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd70e1c0-08", "ovs_interfaceid": "fd70e1c0-089e-49c8-b856-6ffd16627e8b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:32:01 compute-0 nova_compute[254092]: 2025-11-25 16:32:01.414 254096 DEBUG nova.objects.instance [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lazy-loading 'system_metadata' on Instance uuid 07003872-27e7-4fd9-80cf-a34257d5aa97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:32:01 compute-0 nova_compute[254092]: 2025-11-25 16:32:01.439 254096 DEBUG nova.objects.instance [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lazy-loading 'flavor' on Instance uuid 07003872-27e7-4fd9-80cf-a34257d5aa97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:32:01 compute-0 nova_compute[254092]: 2025-11-25 16:32:01.459 254096 DEBUG nova.virt.libvirt.vif [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:30:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1158248018',display_name='tempest-AttachInterfacesTestJSON-server-1158248018',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1158248018',id=27,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJmM+hyi3seBAE3K46sBI2YUlQc/OgmOr+XAosDXSMWbsQMM7MSa9VCyL6NIiHdrd/Mi+L5291JhLvNY+nEX94KPtdKY+CwKghfhbYtUXWreEE2o3A99BKI+adAxYStDFw==',key_name='tempest-keypair-134592934',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:30:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-83v8y104',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:30:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=07003872-27e7-4fd9-80cf-a34257d5aa97,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "63499eed-d192-4aec-8ab6-1c3384834ed4", "address": "fa:16:3e:9c:56:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63499eed-d1", "ovs_interfaceid": "63499eed-d192-4aec-8ab6-1c3384834ed4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:32:01 compute-0 nova_compute[254092]: 2025-11-25 16:32:01.460 254096 DEBUG nova.network.os_vif_util [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Converting VIF {"id": "63499eed-d192-4aec-8ab6-1c3384834ed4", "address": "fa:16:3e:9c:56:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63499eed-d1", "ovs_interfaceid": "63499eed-d192-4aec-8ab6-1c3384834ed4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:32:01 compute-0 nova_compute[254092]: 2025-11-25 16:32:01.461 254096 DEBUG nova.network.os_vif_util [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9c:56:d4,bridge_name='br-int',has_traffic_filtering=True,id=63499eed-d192-4aec-8ab6-1c3384834ed4,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63499eed-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:32:01 compute-0 nova_compute[254092]: 2025-11-25 16:32:01.466 254096 DEBUG nova.virt.libvirt.guest [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:9c:56:d4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap63499eed-d1"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 16:32:01 compute-0 nova_compute[254092]: 2025-11-25 16:32:01.472 254096 DEBUG nova.virt.libvirt.guest [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:9c:56:d4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap63499eed-d1"/></interface>not found in domain: <domain type='kvm' id='31'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <name>instance-0000001b</name>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <uuid>07003872-27e7-4fd9-80cf-a34257d5aa97</uuid>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1158248018</nova:name>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <nova:creationTime>2025-11-25 16:31:58</nova:creationTime>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <nova:flavor name="m1.nano">
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <nova:memory>128</nova:memory>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <nova:disk>1</nova:disk>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <nova:swap>0</nova:swap>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <nova:vcpus>1</nova:vcpus>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   </nova:flavor>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <nova:owner>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   </nova:owner>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <nova:ports>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <nova:port uuid="19d5425c-f0c6-4c68-b8a6-cb1c6357d249">
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <nova:port uuid="aa0c168b-de51-438f-a68b-f9c78a24ca7c">
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <nova:port uuid="fd70e1c0-089e-49c8-b856-6ffd16627e8b">
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   </nova:ports>
Nov 25 16:32:01 compute-0 nova_compute[254092]: </nova:instance>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <memory unit='KiB'>131072</memory>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <vcpu placement='static'>1</vcpu>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <resource>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <partition>/machine</partition>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   </resource>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <sysinfo type='smbios'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <system>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <entry name='manufacturer'>RDO</entry>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <entry name='product'>OpenStack Compute</entry>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <entry name='serial'>07003872-27e7-4fd9-80cf-a34257d5aa97</entry>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <entry name='uuid'>07003872-27e7-4fd9-80cf-a34257d5aa97</entry>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <entry name='family'>Virtual Machine</entry>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </system>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <os>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <boot dev='hd'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <smbios mode='sysinfo'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   </os>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <features>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <vmcoreinfo state='on'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   </features>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <cpu mode='custom' match='exact' check='full'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <vendor>AMD</vendor>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='require' name='x2apic'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='require' name='tsc-deadline'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='require' name='hypervisor'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='require' name='tsc_adjust'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='require' name='spec-ctrl'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='require' name='stibp'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='require' name='ssbd'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='require' name='cmp_legacy'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='require' name='overflow-recov'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='require' name='succor'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='require' name='ibrs'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='require' name='amd-ssbd'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='require' name='virt-ssbd'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='disable' name='lbrv'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='disable' name='tsc-scale'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='disable' name='vmcb-clean'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='disable' name='flushbyasid'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='disable' name='pause-filter'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='disable' name='pfthreshold'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='disable' name='xsaves'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='disable' name='svm'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='require' name='topoext'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='disable' name='npt'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='disable' name='nrip-save'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <clock offset='utc'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <timer name='pit' tickpolicy='delay'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <timer name='hpet' present='no'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <on_poweroff>destroy</on_poweroff>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <on_reboot>restart</on_reboot>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <on_crash>destroy</on_crash>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <disk type='network' device='disk'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <auth username='openstack'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:         <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <source protocol='rbd' name='vms/07003872-27e7-4fd9-80cf-a34257d5aa97_disk' index='2'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:         <host name='192.168.122.100' port='6789'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       </source>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target dev='vda' bus='virtio'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='virtio-disk0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <disk type='network' device='cdrom'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <auth username='openstack'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:         <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <source protocol='rbd' name='vms/07003872-27e7-4fd9-80cf-a34257d5aa97_disk.config' index='1'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:         <host name='192.168.122.100' port='6789'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       </source>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target dev='sda' bus='sata'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <readonly/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='sata0-0-0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='0' model='pcie-root'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pcie.0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='1' port='0x10'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.1'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='2' port='0x11'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.2'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='3' port='0x12'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.3'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='4' port='0x13'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.4'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='5' port='0x14'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.5'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='6' port='0x15'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.6'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='7' port='0x16'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.7'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='8' port='0x17'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.8'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='9' port='0x18'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.9'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='10' port='0x19'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.10'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='11' port='0x1a'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.11'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='12' port='0x1b'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.12'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='13' port='0x1c'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.13'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='14' port='0x1d'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.14'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='15' port='0x1e'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.15'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='16' port='0x1f'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.16'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='17' port='0x20'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.17'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='18' port='0x21'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.18'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='19' port='0x22'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.19'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='20' port='0x23'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.20'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='21' port='0x24'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.21'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='22' port='0x25'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.22'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='23' port='0x26'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.23'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='24' port='0x27'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.24'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='25' port='0x28'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.25'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-pci-bridge'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.26'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='usb'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='sata' index='0'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='ide'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <interface type='ethernet'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <mac address='fa:16:3e:83:61:d4'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target dev='tap19d5425c-f0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model type='virtio'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <mtu size='1442'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='net0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <interface type='ethernet'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <mac address='fa:16:3e:33:0b:3f'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target dev='tapaa0c168b-de'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model type='virtio'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <mtu size='1442'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='net2'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <interface type='ethernet'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <mac address='fa:16:3e:4d:d0:f4'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target dev='tapfd70e1c0-08'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model type='virtio'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <mtu size='1442'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='net3'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <serial type='pty'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <source path='/dev/pts/1'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <log file='/var/lib/nova/instances/07003872-27e7-4fd9-80cf-a34257d5aa97/console.log' append='off'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target type='isa-serial' port='0'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:         <model name='isa-serial'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       </target>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='serial0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <console type='pty' tty='/dev/pts/1'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <source path='/dev/pts/1'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <log file='/var/lib/nova/instances/07003872-27e7-4fd9-80cf-a34257d5aa97/console.log' append='off'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target type='serial' port='0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='serial0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </console>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <input type='tablet' bus='usb'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='input0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='usb' bus='0' port='1'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </input>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <input type='mouse' bus='ps2'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='input1'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </input>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <input type='keyboard' bus='ps2'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='input2'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </input>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <listen type='address' address='::0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </graphics>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <audio id='1' type='none'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <video>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model type='virtio' heads='1' primary='yes'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='video0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </video>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <watchdog model='itco' action='reset'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='watchdog0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </watchdog>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <memballoon model='virtio'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <stats period='10'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='balloon0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <rng model='virtio'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <backend model='random'>/dev/urandom</backend>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='rng0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <label>system_u:system_r:svirt_t:s0:c683,c757</label>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c683,c757</imagelabel>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   </seclabel>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <label>+107:+107</label>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <imagelabel>+107:+107</imagelabel>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   </seclabel>
Nov 25 16:32:01 compute-0 nova_compute[254092]: </domain>
Nov 25 16:32:01 compute-0 nova_compute[254092]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 16:32:01 compute-0 nova_compute[254092]: 2025-11-25 16:32:01.473 254096 DEBUG nova.virt.libvirt.guest [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:9c:56:d4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap63499eed-d1"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 16:32:01 compute-0 nova_compute[254092]: 2025-11-25 16:32:01.478 254096 DEBUG nova.virt.libvirt.guest [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:9c:56:d4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap63499eed-d1"/></interface>not found in domain: <domain type='kvm' id='31'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <name>instance-0000001b</name>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <uuid>07003872-27e7-4fd9-80cf-a34257d5aa97</uuid>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1158248018</nova:name>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <nova:creationTime>2025-11-25 16:31:58</nova:creationTime>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <nova:flavor name="m1.nano">
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <nova:memory>128</nova:memory>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <nova:disk>1</nova:disk>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <nova:swap>0</nova:swap>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <nova:vcpus>1</nova:vcpus>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   </nova:flavor>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <nova:owner>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   </nova:owner>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <nova:ports>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <nova:port uuid="19d5425c-f0c6-4c68-b8a6-cb1c6357d249">
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <nova:port uuid="aa0c168b-de51-438f-a68b-f9c78a24ca7c">
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <nova:port uuid="fd70e1c0-089e-49c8-b856-6ffd16627e8b">
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   </nova:ports>
Nov 25 16:32:01 compute-0 nova_compute[254092]: </nova:instance>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <memory unit='KiB'>131072</memory>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <vcpu placement='static'>1</vcpu>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <resource>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <partition>/machine</partition>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   </resource>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <sysinfo type='smbios'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <system>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <entry name='manufacturer'>RDO</entry>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <entry name='product'>OpenStack Compute</entry>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <entry name='serial'>07003872-27e7-4fd9-80cf-a34257d5aa97</entry>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <entry name='uuid'>07003872-27e7-4fd9-80cf-a34257d5aa97</entry>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <entry name='family'>Virtual Machine</entry>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </system>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <os>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <boot dev='hd'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <smbios mode='sysinfo'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   </os>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <features>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <vmcoreinfo state='on'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   </features>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <cpu mode='custom' match='exact' check='full'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <vendor>AMD</vendor>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='require' name='x2apic'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='require' name='tsc-deadline'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='require' name='hypervisor'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='require' name='tsc_adjust'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='require' name='spec-ctrl'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='require' name='stibp'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='require' name='ssbd'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='require' name='cmp_legacy'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='require' name='overflow-recov'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='require' name='succor'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='require' name='ibrs'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='require' name='amd-ssbd'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='require' name='virt-ssbd'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='disable' name='lbrv'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='disable' name='tsc-scale'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='disable' name='vmcb-clean'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='disable' name='flushbyasid'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='disable' name='pause-filter'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='disable' name='pfthreshold'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='disable' name='xsaves'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='disable' name='svm'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='require' name='topoext'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='disable' name='npt'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <feature policy='disable' name='nrip-save'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <clock offset='utc'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <timer name='pit' tickpolicy='delay'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <timer name='hpet' present='no'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <on_poweroff>destroy</on_poweroff>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <on_reboot>restart</on_reboot>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <on_crash>destroy</on_crash>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <disk type='network' device='disk'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <auth username='openstack'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:         <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <source protocol='rbd' name='vms/07003872-27e7-4fd9-80cf-a34257d5aa97_disk' index='2'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:         <host name='192.168.122.100' port='6789'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       </source>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target dev='vda' bus='virtio'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='virtio-disk0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <disk type='network' device='cdrom'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <auth username='openstack'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:         <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <source protocol='rbd' name='vms/07003872-27e7-4fd9-80cf-a34257d5aa97_disk.config' index='1'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:         <host name='192.168.122.100' port='6789'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       </source>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target dev='sda' bus='sata'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <readonly/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='sata0-0-0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='0' model='pcie-root'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pcie.0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='1' port='0x10'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.1'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='2' port='0x11'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.2'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='3' port='0x12'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.3'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='4' port='0x13'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.4'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='5' port='0x14'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.5'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='6' port='0x15'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.6'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='7' port='0x16'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.7'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='8' port='0x17'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.8'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='9' port='0x18'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.9'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='10' port='0x19'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.10'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='11' port='0x1a'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.11'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='12' port='0x1b'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.12'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='13' port='0x1c'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.13'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='14' port='0x1d'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.14'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='15' port='0x1e'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.15'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='16' port='0x1f'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.16'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='17' port='0x20'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.17'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='18' port='0x21'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.18'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='19' port='0x22'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.19'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='20' port='0x23'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.20'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='21' port='0x24'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.21'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='22' port='0x25'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.22'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='23' port='0x26'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.23'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='24' port='0x27'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.24'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target chassis='25' port='0x28'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.25'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model name='pcie-pci-bridge'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='pci.26'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='usb'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <controller type='sata' index='0'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='ide'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <interface type='ethernet'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <mac address='fa:16:3e:83:61:d4'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target dev='tap19d5425c-f0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model type='virtio'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <mtu size='1442'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='net0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <interface type='ethernet'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <mac address='fa:16:3e:33:0b:3f'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target dev='tapaa0c168b-de'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model type='virtio'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <mtu size='1442'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='net2'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <interface type='ethernet'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <mac address='fa:16:3e:4d:d0:f4'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target dev='tapfd70e1c0-08'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model type='virtio'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <mtu size='1442'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='net3'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <serial type='pty'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <source path='/dev/pts/1'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <log file='/var/lib/nova/instances/07003872-27e7-4fd9-80cf-a34257d5aa97/console.log' append='off'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target type='isa-serial' port='0'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:         <model name='isa-serial'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       </target>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='serial0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <console type='pty' tty='/dev/pts/1'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <source path='/dev/pts/1'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <log file='/var/lib/nova/instances/07003872-27e7-4fd9-80cf-a34257d5aa97/console.log' append='off'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <target type='serial' port='0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='serial0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </console>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <input type='tablet' bus='usb'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='input0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='usb' bus='0' port='1'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </input>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <input type='mouse' bus='ps2'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='input1'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </input>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <input type='keyboard' bus='ps2'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='input2'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </input>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <listen type='address' address='::0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </graphics>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <audio id='1' type='none'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <video>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <model type='virtio' heads='1' primary='yes'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='video0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </video>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <watchdog model='itco' action='reset'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='watchdog0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </watchdog>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <memballoon model='virtio'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <stats period='10'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='balloon0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <rng model='virtio'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <backend model='random'>/dev/urandom</backend>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <alias name='rng0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <label>system_u:system_r:svirt_t:s0:c683,c757</label>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c683,c757</imagelabel>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   </seclabel>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <label>+107:+107</label>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <imagelabel>+107:+107</imagelabel>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   </seclabel>
Nov 25 16:32:01 compute-0 nova_compute[254092]: </domain>
Nov 25 16:32:01 compute-0 nova_compute[254092]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 16:32:01 compute-0 nova_compute[254092]: 2025-11-25 16:32:01.480 254096 WARNING nova.virt.libvirt.driver [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Detaching interface fa:16:3e:9c:56:d4 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap63499eed-d1' not found.
Nov 25 16:32:01 compute-0 nova_compute[254092]: 2025-11-25 16:32:01.481 254096 DEBUG nova.virt.libvirt.vif [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:30:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1158248018',display_name='tempest-AttachInterfacesTestJSON-server-1158248018',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1158248018',id=27,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJmM+hyi3seBAE3K46sBI2YUlQc/OgmOr+XAosDXSMWbsQMM7MSa9VCyL6NIiHdrd/Mi+L5291JhLvNY+nEX94KPtdKY+CwKghfhbYtUXWreEE2o3A99BKI+adAxYStDFw==',key_name='tempest-keypair-134592934',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:30:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-83v8y104',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:30:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=07003872-27e7-4fd9-80cf-a34257d5aa97,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "63499eed-d192-4aec-8ab6-1c3384834ed4", "address": "fa:16:3e:9c:56:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63499eed-d1", "ovs_interfaceid": "63499eed-d192-4aec-8ab6-1c3384834ed4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:32:01 compute-0 nova_compute[254092]: 2025-11-25 16:32:01.482 254096 DEBUG nova.network.os_vif_util [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Converting VIF {"id": "63499eed-d192-4aec-8ab6-1c3384834ed4", "address": "fa:16:3e:9c:56:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63499eed-d1", "ovs_interfaceid": "63499eed-d192-4aec-8ab6-1c3384834ed4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:32:01 compute-0 nova_compute[254092]: 2025-11-25 16:32:01.483 254096 DEBUG nova.network.os_vif_util [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9c:56:d4,bridge_name='br-int',has_traffic_filtering=True,id=63499eed-d192-4aec-8ab6-1c3384834ed4,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63499eed-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:32:01 compute-0 nova_compute[254092]: 2025-11-25 16:32:01.483 254096 DEBUG os_vif [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9c:56:d4,bridge_name='br-int',has_traffic_filtering=True,id=63499eed-d192-4aec-8ab6-1c3384834ed4,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63499eed-d1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:32:01 compute-0 nova_compute[254092]: 2025-11-25 16:32:01.486 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:01 compute-0 nova_compute[254092]: 2025-11-25 16:32:01.486 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap63499eed-d1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:01 compute-0 nova_compute[254092]: 2025-11-25 16:32:01.487 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:32:01 compute-0 nova_compute[254092]: 2025-11-25 16:32:01.489 254096 INFO os_vif [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9c:56:d4,bridge_name='br-int',has_traffic_filtering=True,id=63499eed-d192-4aec-8ab6-1c3384834ed4,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63499eed-d1')
Nov 25 16:32:01 compute-0 nova_compute[254092]: 2025-11-25 16:32:01.491 254096 DEBUG nova.virt.libvirt.guest [req-6791fdbd-3b7b-4b2e-b773-392c069ee438 req-e87e7b0c-f4f6-43a0-b74d-b2ba7b93af0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1158248018</nova:name>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <nova:creationTime>2025-11-25 16:32:01</nova:creationTime>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <nova:flavor name="m1.nano">
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <nova:memory>128</nova:memory>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <nova:disk>1</nova:disk>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <nova:swap>0</nova:swap>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <nova:vcpus>1</nova:vcpus>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   </nova:flavor>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <nova:owner>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   </nova:owner>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   <nova:ports>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <nova:port uuid="19d5425c-f0c6-4c68-b8a6-cb1c6357d249">
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <nova:port uuid="aa0c168b-de51-438f-a68b-f9c78a24ca7c">
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     <nova:port uuid="fd70e1c0-089e-49c8-b856-6ffd16627e8b">
Nov 25 16:32:01 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 16:32:01 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:32:01 compute-0 nova_compute[254092]:   </nova:ports>
Nov 25 16:32:01 compute-0 nova_compute[254092]: </nova:instance>
Nov 25 16:32:01 compute-0 nova_compute[254092]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 25 16:32:01 compute-0 nova_compute[254092]: 2025-11-25 16:32:01.591 254096 INFO nova.network.neutron [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Port 63499eed-d192-4aec-8ab6-1c3384834ed4 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Nov 25 16:32:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:01.766 163338 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 1f139a1f-92d8-4b65-b724-a2554b80ff31 with type ""
Nov 25 16:32:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:01.767 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:d0:f4 10.100.0.14'], port_security=['fa:16:3e:4d:d0:f4 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-474064315', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '07003872-27e7-4fd9-80cf-a34257d5aa97', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-474064315', 'neutron:project_id': '36fd407edf16424e96041f57fccb9e2b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4d6ed831-26b5-4c46-a07a-309ffadbe01b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a0fe150-4b32-4abf-ad06-055b2d97facb, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=fd70e1c0-089e-49c8-b856-6ffd16627e8b) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:32:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:01.768 163338 INFO neutron.agent.ovn.metadata.agent [-] Port fd70e1c0-089e-49c8-b856-6ffd16627e8b in datapath 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d unbound from our chassis
Nov 25 16:32:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:01.769 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d
Nov 25 16:32:01 compute-0 ovn_controller[153477]: 2025-11-25T16:32:01Z|00187|binding|INFO|Removing iface tapfd70e1c0-08 ovn-installed in OVS
Nov 25 16:32:01 compute-0 ovn_controller[153477]: 2025-11-25T16:32:01Z|00188|binding|INFO|Removing lport fd70e1c0-089e-49c8-b856-6ffd16627e8b ovn-installed in OVS
Nov 25 16:32:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:01.785 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[15b4e6a5-11a2-4843-a195-0257a741eda5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:01 compute-0 nova_compute[254092]: 2025-11-25 16:32:01.814 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:01.816 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[4fc8a3cb-6a99-4020-b372-a91461154b34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:01.820 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a7e5e958-d980-41c7-8b48-f0379040357e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:01.854 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[183aa633-0148-4a1f-b4d4-ee0d61a380d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:01.875 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[88ba20bc-3045-474d-933a-1c7f865aba80]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52e7d5b9-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:97:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471309, 'reachable_time': 23086, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291162, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:01.894 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0c214fdf-6c83-4f39-bfad-d2a5bb0d6995]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 471320, 'tstamp': 471320}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291163, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 471323, 'tstamp': 471323}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291163, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:01.896 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52e7d5b9-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:01 compute-0 nova_compute[254092]: 2025-11-25 16:32:01.898 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:01 compute-0 nova_compute[254092]: 2025-11-25 16:32:01.902 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:01.902 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52e7d5b9-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:01.903 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:32:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:01.903 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap52e7d5b9-00, col_values=(('external_ids', {'iface-id': 'af5d2618-f326-4aaf-9395-f47548ac108b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:01.904 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:32:02 compute-0 ceph-mon[74985]: pgmap v1338: 321 pgs: 321 active+clean; 167 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 126 KiB/s wr, 188 op/s
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.070 254096 DEBUG oslo_concurrency.lockutils [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "07003872-27e7-4fd9-80cf-a34257d5aa97" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.071 254096 DEBUG oslo_concurrency.lockutils [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.071 254096 DEBUG oslo_concurrency.lockutils [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.072 254096 DEBUG oslo_concurrency.lockutils [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.072 254096 DEBUG oslo_concurrency.lockutils [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.074 254096 INFO nova.compute.manager [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Terminating instance
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.075 254096 DEBUG nova.compute.manager [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:32:02 compute-0 kernel: tap19d5425c-f0 (unregistering): left promiscuous mode
Nov 25 16:32:02 compute-0 NetworkManager[48891]: <info>  [1764088322.1526] device (tap19d5425c-f0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:32:02 compute-0 ovn_controller[153477]: 2025-11-25T16:32:02Z|00189|binding|INFO|Releasing lport 19d5425c-f0c6-4c68-b8a6-cb1c6357d249 from this chassis (sb_readonly=0)
Nov 25 16:32:02 compute-0 ovn_controller[153477]: 2025-11-25T16:32:02Z|00190|binding|INFO|Setting lport 19d5425c-f0c6-4c68-b8a6-cb1c6357d249 down in Southbound
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.158 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:02 compute-0 ovn_controller[153477]: 2025-11-25T16:32:02Z|00191|binding|INFO|Removing iface tap19d5425c-f0 ovn-installed in OVS
Nov 25 16:32:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.167 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:61:d4 10.100.0.8'], port_security=['fa:16:3e:83:61:d4 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '07003872-27e7-4fd9-80cf-a34257d5aa97', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36fd407edf16424e96041f57fccb9e2b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dd11e91d-04bc-4ecb-8ad4-320a6572500c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.209'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a0fe150-4b32-4abf-ad06-055b2d97facb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=19d5425c-f0c6-4c68-b8a6-cb1c6357d249) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:32:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.169 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 19d5425c-f0c6-4c68-b8a6-cb1c6357d249 in datapath 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d unbound from our chassis
Nov 25 16:32:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.172 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.174 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:02 compute-0 kernel: tapaa0c168b-de (unregistering): left promiscuous mode
Nov 25 16:32:02 compute-0 NetworkManager[48891]: <info>  [1764088322.1851] device (tapaa0c168b-de): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:32:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.190 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b6ef2fa6-3bf3-4659-8818-4acfa9fa9569]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.191 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:02 compute-0 ovn_controller[153477]: 2025-11-25T16:32:02Z|00192|binding|INFO|Releasing lport aa0c168b-de51-438f-a68b-f9c78a24ca7c from this chassis (sb_readonly=0)
Nov 25 16:32:02 compute-0 ovn_controller[153477]: 2025-11-25T16:32:02Z|00193|binding|INFO|Setting lport aa0c168b-de51-438f-a68b-f9c78a24ca7c down in Southbound
Nov 25 16:32:02 compute-0 ovn_controller[153477]: 2025-11-25T16:32:02Z|00194|binding|INFO|Removing iface tapaa0c168b-de ovn-installed in OVS
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.200 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.204 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:0b:3f 10.100.0.4'], port_security=['fa:16:3e:33:0b:3f 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '07003872-27e7-4fd9-80cf-a34257d5aa97', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36fd407edf16424e96041f57fccb9e2b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4d6ed831-26b5-4c46-a07a-309ffadbe01b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a0fe150-4b32-4abf-ad06-055b2d97facb, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=aa0c168b-de51-438f-a68b-f9c78a24ca7c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:32:02 compute-0 kernel: tapfd70e1c0-08 (unregistering): left promiscuous mode
Nov 25 16:32:02 compute-0 NetworkManager[48891]: <info>  [1764088322.2139] device (tapfd70e1c0-08): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.226 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.231 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[fea464ee-4c07-480e-a86a-f1098b3c04be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.235 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[cf5cb4ae-eaec-438d-bdb2-bfd09826b2ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.264 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8db6bbee-c529-4edb-af81-6e79a2923a46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.285 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b847663c-f613-4a7e-a0cf-52f122f92e0e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52e7d5b9-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:97:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471309, 'reachable_time': 23086, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291188, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:02 compute-0 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Nov 25 16:32:02 compute-0 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d0000001b.scope: Consumed 15.556s CPU time.
Nov 25 16:32:02 compute-0 systemd-machined[216343]: Machine qemu-31-instance-0000001b terminated.
Nov 25 16:32:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.300 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5ba2ff9c-0683-4bc2-8ec4-6abe32236e01]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 471320, 'tstamp': 471320}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291189, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 471323, 'tstamp': 471323}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291189, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.302 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52e7d5b9-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.303 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.313 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.314 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52e7d5b9-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.314 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:32:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.314 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap52e7d5b9-00, col_values=(('external_ids', {'iface-id': 'af5d2618-f326-4aaf-9395-f47548ac108b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.315 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:32:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.316 163338 INFO neutron.agent.ovn.metadata.agent [-] Port aa0c168b-de51-438f-a68b-f9c78a24ca7c in datapath 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d unbound from our chassis
Nov 25 16:32:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.317 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:32:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.317 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4303c86a-b904-4e5c-ae75-f9a9d1c59843]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.318 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d namespace which is not needed anymore
Nov 25 16:32:02 compute-0 neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d[289262]: [NOTICE]   (289282) : haproxy version is 2.8.14-c23fe91
Nov 25 16:32:02 compute-0 neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d[289262]: [NOTICE]   (289282) : path to executable is /usr/sbin/haproxy
Nov 25 16:32:02 compute-0 neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d[289262]: [WARNING]  (289282) : Exiting Master process...
Nov 25 16:32:02 compute-0 systemd[1]: libpod-2c7eb472a69f6becd3854736d7e837e3c3b905540b2770fa51a6e4d91e6342cb.scope: Deactivated successfully.
Nov 25 16:32:02 compute-0 neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d[289262]: [ALERT]    (289282) : Current worker (289285) exited with code 143 (Terminated)
Nov 25 16:32:02 compute-0 neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d[289262]: [WARNING]  (289282) : All workers exited. Exiting... (0)
Nov 25 16:32:02 compute-0 podman[291210]: 2025-11-25 16:32:02.461035241 +0000 UTC m=+0.048560566 container died 2c7eb472a69f6becd3854736d7e837e3c3b905540b2770fa51a6e4d91e6342cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:32:02 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2c7eb472a69f6becd3854736d7e837e3c3b905540b2770fa51a6e4d91e6342cb-userdata-shm.mount: Deactivated successfully.
Nov 25 16:32:02 compute-0 NetworkManager[48891]: <info>  [1764088322.4936] manager: (tap19d5425c-f0): new Tun device (/org/freedesktop/NetworkManager/Devices/90)
Nov 25 16:32:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-a43bbff1afa7ddae500921472f918939c0d088677612449620b59fac58799310-merged.mount: Deactivated successfully.
Nov 25 16:32:02 compute-0 podman[291210]: 2025-11-25 16:32:02.503506041 +0000 UTC m=+0.091031366 container cleanup 2c7eb472a69f6becd3854736d7e837e3c3b905540b2770fa51a6e4d91e6342cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:32:02 compute-0 NetworkManager[48891]: <info>  [1764088322.5077] manager: (tapaa0c168b-de): new Tun device (/org/freedesktop/NetworkManager/Devices/91)
Nov 25 16:32:02 compute-0 systemd[1]: libpod-conmon-2c7eb472a69f6becd3854736d7e837e3c3b905540b2770fa51a6e4d91e6342cb.scope: Deactivated successfully.
Nov 25 16:32:02 compute-0 NetworkManager[48891]: <info>  [1764088322.5176] manager: (tapfd70e1c0-08): new Tun device (/org/freedesktop/NetworkManager/Devices/92)
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.523 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.539 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.542 254096 INFO nova.virt.libvirt.driver [-] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Instance destroyed successfully.
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.543 254096 DEBUG nova.objects.instance [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'resources' on Instance uuid 07003872-27e7-4fd9-80cf-a34257d5aa97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.561 254096 DEBUG nova.virt.libvirt.vif [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:30:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1158248018',display_name='tempest-AttachInterfacesTestJSON-server-1158248018',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1158248018',id=27,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJmM+hyi3seBAE3K46sBI2YUlQc/OgmOr+XAosDXSMWbsQMM7MSa9VCyL6NIiHdrd/Mi+L5291JhLvNY+nEX94KPtdKY+CwKghfhbYtUXWreEE2o3A99BKI+adAxYStDFw==',key_name='tempest-keypair-134592934',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:30:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-83v8y104',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:30:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=07003872-27e7-4fd9-80cf-a34257d5aa97,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "address": "fa:16:3e:83:61:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19d5425c-f0", "ovs_interfaceid": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.562 254096 DEBUG nova.network.os_vif_util [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "address": "fa:16:3e:83:61:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19d5425c-f0", "ovs_interfaceid": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.563 254096 DEBUG nova.network.os_vif_util [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:83:61:d4,bridge_name='br-int',has_traffic_filtering=True,id=19d5425c-f0c6-4c68-b8a6-cb1c6357d249,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19d5425c-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.564 254096 DEBUG os_vif [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:83:61:d4,bridge_name='br-int',has_traffic_filtering=True,id=19d5425c-f0c6-4c68-b8a6-cb1c6357d249,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19d5425c-f0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.566 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.567 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap19d5425c-f0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.570 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.572 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.575 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:02 compute-0 podman[291254]: 2025-11-25 16:32:02.575519412 +0000 UTC m=+0.048535136 container remove 2c7eb472a69f6becd3854736d7e837e3c3b905540b2770fa51a6e4d91e6342cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.578 254096 INFO os_vif [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:83:61:d4,bridge_name='br-int',has_traffic_filtering=True,id=19d5425c-f0c6-4c68-b8a6-cb1c6357d249,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19d5425c-f0')
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.579 254096 DEBUG nova.virt.libvirt.vif [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:30:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1158248018',display_name='tempest-AttachInterfacesTestJSON-server-1158248018',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1158248018',id=27,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJmM+hyi3seBAE3K46sBI2YUlQc/OgmOr+XAosDXSMWbsQMM7MSa9VCyL6NIiHdrd/Mi+L5291JhLvNY+nEX94KPtdKY+CwKghfhbYtUXWreEE2o3A99BKI+adAxYStDFw==',key_name='tempest-keypair-134592934',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:30:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-83v8y104',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:30:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=07003872-27e7-4fd9-80cf-a34257d5aa97,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "address": "fa:16:3e:33:0b:3f", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa0c168b-de", "ovs_interfaceid": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.580 254096 DEBUG nova.network.os_vif_util [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "address": "fa:16:3e:33:0b:3f", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa0c168b-de", "ovs_interfaceid": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.581 254096 DEBUG nova.network.os_vif_util [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:33:0b:3f,bridge_name='br-int',has_traffic_filtering=True,id=aa0c168b-de51-438f-a68b-f9c78a24ca7c,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa0c168b-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.581 254096 DEBUG os_vif [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:33:0b:3f,bridge_name='br-int',has_traffic_filtering=True,id=aa0c168b-de51-438f-a68b-f9c78a24ca7c,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa0c168b-de') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.583 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.582 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9b6708c2-d3a7-4bec-88bf-23b5ee6945b6]: (4, ('Tue Nov 25 04:32:02 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d (2c7eb472a69f6becd3854736d7e837e3c3b905540b2770fa51a6e4d91e6342cb)\n2c7eb472a69f6becd3854736d7e837e3c3b905540b2770fa51a6e4d91e6342cb\nTue Nov 25 04:32:02 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d (2c7eb472a69f6becd3854736d7e837e3c3b905540b2770fa51a6e4d91e6342cb)\n2c7eb472a69f6becd3854736d7e837e3c3b905540b2770fa51a6e4d91e6342cb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.583 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa0c168b-de, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.584 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5eea209d-ee7e-4d73-b70f-051c46c89f0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.585 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.586 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52e7d5b9-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.586 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.587 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.601 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:02 compute-0 kernel: tap52e7d5b9-00: left promiscuous mode
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.609 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.610 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[14cbb560-a96e-4a3d-beed-a33a1c930b1f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.611 254096 INFO os_vif [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:33:0b:3f,bridge_name='br-int',has_traffic_filtering=True,id=aa0c168b-de51-438f-a68b-f9c78a24ca7c,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa0c168b-de')
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.612 254096 DEBUG nova.virt.libvirt.vif [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:30:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1158248018',display_name='tempest-AttachInterfacesTestJSON-server-1158248018',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1158248018',id=27,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJmM+hyi3seBAE3K46sBI2YUlQc/OgmOr+XAosDXSMWbsQMM7MSa9VCyL6NIiHdrd/Mi+L5291JhLvNY+nEX94KPtdKY+CwKghfhbYtUXWreEE2o3A99BKI+adAxYStDFw==',key_name='tempest-keypair-134592934',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:30:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-83v8y104',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:30:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=07003872-27e7-4fd9-80cf-a34257d5aa97,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fd70e1c0-089e-49c8-b856-6ffd16627e8b", "address": "fa:16:3e:4d:d0:f4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd70e1c0-08", "ovs_interfaceid": "fd70e1c0-089e-49c8-b856-6ffd16627e8b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.613 254096 DEBUG nova.network.os_vif_util [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "fd70e1c0-089e-49c8-b856-6ffd16627e8b", "address": "fa:16:3e:4d:d0:f4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd70e1c0-08", "ovs_interfaceid": "fd70e1c0-089e-49c8-b856-6ffd16627e8b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.613 254096 DEBUG nova.network.os_vif_util [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4d:d0:f4,bridge_name='br-int',has_traffic_filtering=True,id=fd70e1c0-089e-49c8-b856-6ffd16627e8b,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfd70e1c0-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.614 254096 DEBUG os_vif [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4d:d0:f4,bridge_name='br-int',has_traffic_filtering=True,id=fd70e1c0-089e-49c8-b856-6ffd16627e8b,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfd70e1c0-08') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.616 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.616 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd70e1c0-08, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.618 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.620 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.620 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.624 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:02 compute-0 nova_compute[254092]: 2025-11-25 16:32:02.626 254096 INFO os_vif [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4d:d0:f4,bridge_name='br-int',has_traffic_filtering=True,id=fd70e1c0-089e-49c8-b856-6ffd16627e8b,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfd70e1c0-08')
Nov 25 16:32:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.627 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0c839b81-a4a5-4fb5-b74c-56cf1ea1bf7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.628 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f44804dc-b867-424c-bdf2-af5069941083]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.650 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b168fc10-784d-45fd-bc14-41fe23ad3c83]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471301, 'reachable_time': 29179, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291296, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:02 compute-0 systemd[1]: run-netns-ovnmeta\x2d52e7d5b9\x2d0570\x2d4e5c\x2db3da\x2d9dfcb924b83d.mount: Deactivated successfully.
Nov 25 16:32:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.653 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:32:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:02.654 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[c454ce53-3c34-46d9-8b7a-0543ca3e5c5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1339: 321 pgs: 321 active+clean; 167 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 115 KiB/s wr, 171 op/s
Nov 25 16:32:03 compute-0 nova_compute[254092]: 2025-11-25 16:32:03.516 254096 INFO nova.virt.libvirt.driver [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Deleting instance files /var/lib/nova/instances/07003872-27e7-4fd9-80cf-a34257d5aa97_del
Nov 25 16:32:03 compute-0 nova_compute[254092]: 2025-11-25 16:32:03.517 254096 INFO nova.virt.libvirt.driver [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Deletion of /var/lib/nova/instances/07003872-27e7-4fd9-80cf-a34257d5aa97_del complete
Nov 25 16:32:03 compute-0 nova_compute[254092]: 2025-11-25 16:32:03.576 254096 INFO nova.compute.manager [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Took 1.50 seconds to destroy the instance on the hypervisor.
Nov 25 16:32:03 compute-0 nova_compute[254092]: 2025-11-25 16:32:03.576 254096 DEBUG oslo.service.loopingcall [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:32:03 compute-0 nova_compute[254092]: 2025-11-25 16:32:03.576 254096 DEBUG nova.compute.manager [-] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:32:03 compute-0 nova_compute[254092]: 2025-11-25 16:32:03.577 254096 DEBUG nova.network.neutron [-] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:32:03 compute-0 nova_compute[254092]: 2025-11-25 16:32:03.600 254096 DEBUG nova.compute.manager [req-18745b84-34bd-4ebe-a80f-9916440a9ade req-12a047ea-0d31-4347-ae92-adc9add8eb54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received event network-vif-deleted-fd70e1c0-089e-49c8-b856-6ffd16627e8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:32:03 compute-0 nova_compute[254092]: 2025-11-25 16:32:03.600 254096 INFO nova.compute.manager [req-18745b84-34bd-4ebe-a80f-9916440a9ade req-12a047ea-0d31-4347-ae92-adc9add8eb54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Neutron deleted interface fd70e1c0-089e-49c8-b856-6ffd16627e8b; detaching it from the instance and deleting it from the info cache
Nov 25 16:32:03 compute-0 nova_compute[254092]: 2025-11-25 16:32:03.601 254096 DEBUG nova.network.neutron [req-18745b84-34bd-4ebe-a80f-9916440a9ade req-12a047ea-0d31-4347-ae92-adc9add8eb54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Updating instance_info_cache with network_info: [{"id": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "address": "fa:16:3e:83:61:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19d5425c-f0", "ovs_interfaceid": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "address": "fa:16:3e:33:0b:3f", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa0c168b-de", "ovs_interfaceid": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:32:03 compute-0 nova_compute[254092]: 2025-11-25 16:32:03.633 254096 DEBUG nova.compute.manager [req-18745b84-34bd-4ebe-a80f-9916440a9ade req-12a047ea-0d31-4347-ae92-adc9add8eb54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Detach interface failed, port_id=fd70e1c0-089e-49c8-b856-6ffd16627e8b, reason: Instance 07003872-27e7-4fd9-80cf-a34257d5aa97 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 25 16:32:03 compute-0 nova_compute[254092]: 2025-11-25 16:32:03.667 254096 DEBUG nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 16:32:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:32:04 compute-0 nova_compute[254092]: 2025-11-25 16:32:04.263 254096 DEBUG nova.network.neutron [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Updating instance_info_cache with network_info: [{"id": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "address": "fa:16:3e:83:61:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19d5425c-f0", "ovs_interfaceid": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "address": "fa:16:3e:33:0b:3f", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa0c168b-de", "ovs_interfaceid": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fd70e1c0-089e-49c8-b856-6ffd16627e8b", "address": "fa:16:3e:4d:d0:f4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd70e1c0-08", "ovs_interfaceid": "fd70e1c0-089e-49c8-b856-6ffd16627e8b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:32:04 compute-0 nova_compute[254092]: 2025-11-25 16:32:04.297 254096 DEBUG oslo_concurrency.lockutils [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Releasing lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:32:04 compute-0 nova_compute[254092]: 2025-11-25 16:32:04.299 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:32:04 compute-0 nova_compute[254092]: 2025-11-25 16:32:04.300 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 16:32:04 compute-0 nova_compute[254092]: 2025-11-25 16:32:04.300 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 07003872-27e7-4fd9-80cf-a34257d5aa97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:32:04 compute-0 nova_compute[254092]: 2025-11-25 16:32:04.320 254096 DEBUG oslo_concurrency.lockutils [None req-01aa532c-79e0-4854-803f-7884786f8d02 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "interface-07003872-27e7-4fd9-80cf-a34257d5aa97-63499eed-d192-4aec-8ab6-1c3384834ed4" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 6.025s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:04 compute-0 ceph-mon[74985]: pgmap v1339: 321 pgs: 321 active+clean; 167 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 115 KiB/s wr, 171 op/s
Nov 25 16:32:04 compute-0 nova_compute[254092]: 2025-11-25 16:32:04.440 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:04 compute-0 nova_compute[254092]: 2025-11-25 16:32:04.497 254096 DEBUG neutronclient.v2_0.client [-] Error message: {"NeutronError": {"type": "PortNotFound", "message": "Port fd70e1c0-089e-49c8-b856-6ffd16627e8b could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Nov 25 16:32:04 compute-0 nova_compute[254092]: 2025-11-25 16:32:04.498 254096 DEBUG nova.network.neutron [-] Unable to show port fd70e1c0-089e-49c8-b856-6ffd16627e8b as it no longer exists. _unbind_ports /usr/lib/python3.9/site-packages/nova/network/neutron.py:666
Nov 25 16:32:05 compute-0 nova_compute[254092]: 2025-11-25 16:32:05.029 254096 DEBUG oslo_concurrency.lockutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "3573b86d-afab-4a6f-970e-7db532c23eb3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:05 compute-0 nova_compute[254092]: 2025-11-25 16:32:05.030 254096 DEBUG oslo_concurrency.lockutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "3573b86d-afab-4a6f-970e-7db532c23eb3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:05 compute-0 nova_compute[254092]: 2025-11-25 16:32:05.049 254096 DEBUG nova.compute.manager [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:32:05 compute-0 nova_compute[254092]: 2025-11-25 16:32:05.129 254096 DEBUG oslo_concurrency.lockutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:05 compute-0 nova_compute[254092]: 2025-11-25 16:32:05.129 254096 DEBUG oslo_concurrency.lockutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:05 compute-0 nova_compute[254092]: 2025-11-25 16:32:05.136 254096 DEBUG nova.virt.hardware [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:32:05 compute-0 nova_compute[254092]: 2025-11-25 16:32:05.136 254096 INFO nova.compute.claims [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:32:05 compute-0 nova_compute[254092]: 2025-11-25 16:32:05.277 254096 DEBUG oslo_concurrency.processutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1340: 321 pgs: 321 active+clean; 143 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 1001 KiB/s rd, 1.3 MiB/s wr, 132 op/s
Nov 25 16:32:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:32:05 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3255337934' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:32:05 compute-0 nova_compute[254092]: 2025-11-25 16:32:05.750 254096 DEBUG oslo_concurrency.processutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:05 compute-0 nova_compute[254092]: 2025-11-25 16:32:05.756 254096 DEBUG nova.compute.provider_tree [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:32:05 compute-0 nova_compute[254092]: 2025-11-25 16:32:05.771 254096 DEBUG nova.scheduler.client.report [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:32:05 compute-0 nova_compute[254092]: 2025-11-25 16:32:05.788 254096 DEBUG oslo_concurrency.lockutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:05 compute-0 nova_compute[254092]: 2025-11-25 16:32:05.789 254096 DEBUG nova.compute.manager [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:32:05 compute-0 nova_compute[254092]: 2025-11-25 16:32:05.844 254096 DEBUG nova.compute.manager [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:32:05 compute-0 nova_compute[254092]: 2025-11-25 16:32:05.845 254096 DEBUG nova.network.neutron [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:32:05 compute-0 nova_compute[254092]: 2025-11-25 16:32:05.875 254096 INFO nova.virt.libvirt.driver [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:32:05 compute-0 nova_compute[254092]: 2025-11-25 16:32:05.898 254096 DEBUG nova.network.neutron [-] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:32:05 compute-0 nova_compute[254092]: 2025-11-25 16:32:05.900 254096 DEBUG nova.compute.manager [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:32:05 compute-0 nova_compute[254092]: 2025-11-25 16:32:05.926 254096 INFO nova.compute.manager [-] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Took 2.35 seconds to deallocate network for instance.
Nov 25 16:32:05 compute-0 nova_compute[254092]: 2025-11-25 16:32:05.961 254096 DEBUG nova.compute.manager [req-6f43143f-2d74-4123-a054-7f105b4e3090 req-f0b59cc4-e73a-452b-b650-b6d36d3723ea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received event network-vif-deleted-19d5425c-f0c6-4c68-b8a6-cb1c6357d249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:32:06 compute-0 nova_compute[254092]: 2025-11-25 16:32:05.999 254096 DEBUG nova.compute.manager [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:32:06 compute-0 nova_compute[254092]: 2025-11-25 16:32:06.001 254096 DEBUG nova.virt.libvirt.driver [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:32:06 compute-0 nova_compute[254092]: 2025-11-25 16:32:06.001 254096 INFO nova.virt.libvirt.driver [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Creating image(s)
Nov 25 16:32:06 compute-0 nova_compute[254092]: 2025-11-25 16:32:06.019 254096 DEBUG nova.storage.rbd_utils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] rbd image 3573b86d-afab-4a6f-970e-7db532c23eb3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:06 compute-0 nova_compute[254092]: 2025-11-25 16:32:06.043 254096 DEBUG nova.storage.rbd_utils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] rbd image 3573b86d-afab-4a6f-970e-7db532c23eb3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:06 compute-0 nova_compute[254092]: 2025-11-25 16:32:06.068 254096 DEBUG nova.storage.rbd_utils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] rbd image 3573b86d-afab-4a6f-970e-7db532c23eb3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:06 compute-0 nova_compute[254092]: 2025-11-25 16:32:06.072 254096 DEBUG oslo_concurrency.processutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:06 compute-0 nova_compute[254092]: 2025-11-25 16:32:06.102 254096 DEBUG oslo_concurrency.lockutils [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:06 compute-0 nova_compute[254092]: 2025-11-25 16:32:06.103 254096 DEBUG oslo_concurrency.lockutils [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:06 compute-0 nova_compute[254092]: 2025-11-25 16:32:06.139 254096 DEBUG oslo_concurrency.processutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:06 compute-0 nova_compute[254092]: 2025-11-25 16:32:06.140 254096 DEBUG oslo_concurrency.lockutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:06 compute-0 nova_compute[254092]: 2025-11-25 16:32:06.141 254096 DEBUG oslo_concurrency.lockutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:06 compute-0 nova_compute[254092]: 2025-11-25 16:32:06.141 254096 DEBUG oslo_concurrency.lockutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:06 compute-0 nova_compute[254092]: 2025-11-25 16:32:06.159 254096 DEBUG nova.storage.rbd_utils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] rbd image 3573b86d-afab-4a6f-970e-7db532c23eb3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:06 compute-0 nova_compute[254092]: 2025-11-25 16:32:06.162 254096 DEBUG oslo_concurrency.processutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 3573b86d-afab-4a6f-970e-7db532c23eb3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:06 compute-0 nova_compute[254092]: 2025-11-25 16:32:06.199 254096 DEBUG nova.policy [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '87058665de814ae0a51a12ff02b0d9aa', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ed571eebde434695bae813d7bb21f4c3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:32:06 compute-0 nova_compute[254092]: 2025-11-25 16:32:06.253 254096 DEBUG oslo_concurrency.processutils [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:06 compute-0 ceph-mon[74985]: pgmap v1340: 321 pgs: 321 active+clean; 143 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 1001 KiB/s rd, 1.3 MiB/s wr, 132 op/s
Nov 25 16:32:06 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3255337934' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:32:06 compute-0 nova_compute[254092]: 2025-11-25 16:32:06.489 254096 DEBUG oslo_concurrency.processutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 3573b86d-afab-4a6f-970e-7db532c23eb3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.327s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:06 compute-0 nova_compute[254092]: 2025-11-25 16:32:06.568 254096 DEBUG nova.storage.rbd_utils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] resizing rbd image 3573b86d-afab-4a6f-970e-7db532c23eb3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:32:06 compute-0 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Nov 25 16:32:06 compute-0 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000001d.scope: Consumed 13.709s CPU time.
Nov 25 16:32:06 compute-0 systemd-machined[216343]: Machine qemu-33-instance-0000001d terminated.
Nov 25 16:32:06 compute-0 nova_compute[254092]: 2025-11-25 16:32:06.645 254096 DEBUG nova.objects.instance [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lazy-loading 'migration_context' on Instance uuid 3573b86d-afab-4a6f-970e-7db532c23eb3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:32:06 compute-0 nova_compute[254092]: 2025-11-25 16:32:06.658 254096 DEBUG nova.virt.libvirt.driver [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:32:06 compute-0 nova_compute[254092]: 2025-11-25 16:32:06.659 254096 DEBUG nova.virt.libvirt.driver [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Ensure instance console log exists: /var/lib/nova/instances/3573b86d-afab-4a6f-970e-7db532c23eb3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:32:06 compute-0 nova_compute[254092]: 2025-11-25 16:32:06.659 254096 DEBUG oslo_concurrency.lockutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:06 compute-0 nova_compute[254092]: 2025-11-25 16:32:06.659 254096 DEBUG oslo_concurrency.lockutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:06 compute-0 nova_compute[254092]: 2025-11-25 16:32:06.659 254096 DEBUG oslo_concurrency.lockutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:06 compute-0 nova_compute[254092]: 2025-11-25 16:32:06.709 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Updating instance_info_cache with network_info: [{"id": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "address": "fa:16:3e:83:61:d4", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19d5425c-f0", "ovs_interfaceid": "19d5425c-f0c6-4c68-b8a6-cb1c6357d249", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "address": "fa:16:3e:33:0b:3f", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa0c168b-de", "ovs_interfaceid": "aa0c168b-de51-438f-a68b-f9c78a24ca7c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:32:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:32:06 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2366631061' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:32:06 compute-0 nova_compute[254092]: 2025-11-25 16:32:06.735 254096 DEBUG oslo_concurrency.processutils [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:06 compute-0 nova_compute[254092]: 2025-11-25 16:32:06.739 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-07003872-27e7-4fd9-80cf-a34257d5aa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:32:06 compute-0 nova_compute[254092]: 2025-11-25 16:32:06.740 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 16:32:06 compute-0 nova_compute[254092]: 2025-11-25 16:32:06.741 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:32:06 compute-0 nova_compute[254092]: 2025-11-25 16:32:06.749 254096 INFO nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Instance shutdown successfully after 13 seconds.
Nov 25 16:32:06 compute-0 nova_compute[254092]: 2025-11-25 16:32:06.750 254096 DEBUG nova.compute.provider_tree [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:32:06 compute-0 nova_compute[254092]: 2025-11-25 16:32:06.756 254096 INFO nova.virt.libvirt.driver [-] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Instance destroyed successfully.
Nov 25 16:32:06 compute-0 nova_compute[254092]: 2025-11-25 16:32:06.761 254096 INFO nova.virt.libvirt.driver [-] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Instance destroyed successfully.
Nov 25 16:32:06 compute-0 nova_compute[254092]: 2025-11-25 16:32:06.773 254096 DEBUG nova.scheduler.client.report [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:32:06 compute-0 nova_compute[254092]: 2025-11-25 16:32:06.797 254096 DEBUG oslo_concurrency.lockutils [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:06 compute-0 nova_compute[254092]: 2025-11-25 16:32:06.836 254096 INFO nova.scheduler.client.report [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Deleted allocations for instance 07003872-27e7-4fd9-80cf-a34257d5aa97
Nov 25 16:32:06 compute-0 nova_compute[254092]: 2025-11-25 16:32:06.914 254096 DEBUG oslo_concurrency.lockutils [None req-09ac666b-09a5-4509-ac68-649f92aa8a43 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "07003872-27e7-4fd9-80cf-a34257d5aa97" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.843s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.085 254096 DEBUG nova.network.neutron [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Successfully created port: 479811bd-7043-4423-9815-a17763247b3b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.233 254096 INFO nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Deleting instance files /var/lib/nova/instances/36f65013-2906-4794-9e23-e92dc7814b6e_del
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.234 254096 INFO nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Deletion of /var/lib/nova/instances/36f65013-2906-4794-9e23-e92dc7814b6e_del complete
Nov 25 16:32:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1341: 321 pgs: 321 active+clean; 121 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 351 KiB/s rd, 2.6 MiB/s wr, 107 op/s
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.389 254096 DEBUG nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.389 254096 INFO nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Creating image(s)
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.409 254096 DEBUG nova.storage.rbd_utils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] rbd image 36f65013-2906-4794-9e23-e92dc7814b6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.428 254096 DEBUG nova.storage.rbd_utils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] rbd image 36f65013-2906-4794-9e23-e92dc7814b6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.447 254096 DEBUG nova.storage.rbd_utils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] rbd image 36f65013-2906-4794-9e23-e92dc7814b6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.450 254096 DEBUG oslo_concurrency.processutils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:07 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2366631061' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.515 254096 DEBUG oslo_concurrency.processutils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.516 254096 DEBUG oslo_concurrency.lockutils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Acquiring lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.516 254096 DEBUG oslo_concurrency.lockutils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.517 254096 DEBUG oslo_concurrency.lockutils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.536 254096 DEBUG nova.storage.rbd_utils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] rbd image 36f65013-2906-4794-9e23-e92dc7814b6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.540 254096 DEBUG oslo_concurrency.processutils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 36f65013-2906-4794-9e23-e92dc7814b6e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.605 254096 DEBUG oslo_concurrency.lockutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Acquiring lock "9bd4d655-c683-4433-a739-168946211a75" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.606 254096 DEBUG oslo_concurrency.lockutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "9bd4d655-c683-4433-a739-168946211a75" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.618 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.625 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.636 254096 DEBUG nova.compute.manager [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.723 254096 DEBUG oslo_concurrency.lockutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.723 254096 DEBUG oslo_concurrency.lockutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.730 254096 DEBUG nova.virt.hardware [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.730 254096 INFO nova.compute.claims [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.827 254096 DEBUG oslo_concurrency.processutils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 36f65013-2906-4794-9e23-e92dc7814b6e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.288s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.881 254096 DEBUG nova.storage.rbd_utils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] resizing rbd image 36f65013-2906-4794-9e23-e92dc7814b6e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.958 254096 DEBUG nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.959 254096 DEBUG nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Ensure instance console log exists: /var/lib/nova/instances/36f65013-2906-4794-9e23-e92dc7814b6e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.959 254096 DEBUG oslo_concurrency.lockutils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.959 254096 DEBUG oslo_concurrency.lockutils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.960 254096 DEBUG oslo_concurrency.lockutils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.961 254096 DEBUG nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:36Z,direct_url=<?>,disk_format='qcow2',id=a4aa3708-bb73-4b5a-b3f3-42153358021e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.963 254096 WARNING nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.968 254096 DEBUG oslo_concurrency.processutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.988 254096 DEBUG nova.virt.libvirt.host [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.989 254096 DEBUG nova.virt.libvirt.host [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.994 254096 DEBUG nova.virt.libvirt.host [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.995 254096 DEBUG nova.virt.libvirt.host [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.995 254096 DEBUG nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.995 254096 DEBUG nova.virt.hardware [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:36Z,direct_url=<?>,disk_format='qcow2',id=a4aa3708-bb73-4b5a-b3f3-42153358021e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.996 254096 DEBUG nova.virt.hardware [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.996 254096 DEBUG nova.virt.hardware [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.997 254096 DEBUG nova.virt.hardware [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.997 254096 DEBUG nova.virt.hardware [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.997 254096 DEBUG nova.virt.hardware [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.997 254096 DEBUG nova.virt.hardware [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.998 254096 DEBUG nova.virt.hardware [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.998 254096 DEBUG nova.virt.hardware [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.998 254096 DEBUG nova.virt.hardware [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.998 254096 DEBUG nova.virt.hardware [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:32:07 compute-0 nova_compute[254092]: 2025-11-25 16:32:07.999 254096 DEBUG nova.objects.instance [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 36f65013-2906-4794-9e23-e92dc7814b6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:32:08 compute-0 nova_compute[254092]: 2025-11-25 16:32:08.019 254096 DEBUG oslo_concurrency.processutils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:08 compute-0 nova_compute[254092]: 2025-11-25 16:32:08.048 254096 DEBUG nova.compute.manager [req-4f123ca1-7b73-49ff-89ae-3f01f356590a req-fa36c204-0fbb-45b1-a33a-1d346bedd9e1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Received event network-vif-deleted-aa0c168b-de51-438f-a68b-f9c78a24ca7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:32:08 compute-0 nova_compute[254092]: 2025-11-25 16:32:08.049 254096 INFO nova.compute.manager [req-4f123ca1-7b73-49ff-89ae-3f01f356590a req-fa36c204-0fbb-45b1-a33a-1d346bedd9e1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Neutron deleted interface aa0c168b-de51-438f-a68b-f9c78a24ca7c; detaching it from the instance and deleting it from the info cache
Nov 25 16:32:08 compute-0 nova_compute[254092]: 2025-11-25 16:32:08.049 254096 DEBUG nova.network.neutron [req-4f123ca1-7b73-49ff-89ae-3f01f356590a req-fa36c204-0fbb-45b1-a33a-1d346bedd9e1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Nov 25 16:32:08 compute-0 nova_compute[254092]: 2025-11-25 16:32:08.051 254096 DEBUG nova.compute.manager [req-4f123ca1-7b73-49ff-89ae-3f01f356590a req-fa36c204-0fbb-45b1-a33a-1d346bedd9e1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Detach interface failed, port_id=aa0c168b-de51-438f-a68b-f9c78a24ca7c, reason: Instance 07003872-27e7-4fd9-80cf-a34257d5aa97 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 25 16:32:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:32:08 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1285855637' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:32:08 compute-0 nova_compute[254092]: 2025-11-25 16:32:08.391 254096 DEBUG oslo_concurrency.processutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:08 compute-0 nova_compute[254092]: 2025-11-25 16:32:08.398 254096 DEBUG nova.compute.provider_tree [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:32:08 compute-0 nova_compute[254092]: 2025-11-25 16:32:08.406 254096 DEBUG nova.network.neutron [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Successfully updated port: 479811bd-7043-4423-9815-a17763247b3b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:32:08 compute-0 nova_compute[254092]: 2025-11-25 16:32:08.418 254096 DEBUG nova.scheduler.client.report [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:32:08 compute-0 nova_compute[254092]: 2025-11-25 16:32:08.422 254096 DEBUG oslo_concurrency.lockutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "refresh_cache-3573b86d-afab-4a6f-970e-7db532c23eb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:32:08 compute-0 nova_compute[254092]: 2025-11-25 16:32:08.422 254096 DEBUG oslo_concurrency.lockutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquired lock "refresh_cache-3573b86d-afab-4a6f-970e-7db532c23eb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:32:08 compute-0 nova_compute[254092]: 2025-11-25 16:32:08.422 254096 DEBUG nova.network.neutron [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:32:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:32:08 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3113155774' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:32:08 compute-0 nova_compute[254092]: 2025-11-25 16:32:08.441 254096 DEBUG oslo_concurrency.lockutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:08 compute-0 nova_compute[254092]: 2025-11-25 16:32:08.442 254096 DEBUG nova.compute.manager [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:32:08 compute-0 nova_compute[254092]: 2025-11-25 16:32:08.445 254096 DEBUG oslo_concurrency.processutils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:08 compute-0 ceph-mon[74985]: pgmap v1341: 321 pgs: 321 active+clean; 121 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 351 KiB/s rd, 2.6 MiB/s wr, 107 op/s
Nov 25 16:32:08 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1285855637' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:32:08 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3113155774' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:32:08 compute-0 nova_compute[254092]: 2025-11-25 16:32:08.467 254096 DEBUG nova.storage.rbd_utils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] rbd image 36f65013-2906-4794-9e23-e92dc7814b6e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:08 compute-0 nova_compute[254092]: 2025-11-25 16:32:08.472 254096 DEBUG oslo_concurrency.processutils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:08 compute-0 nova_compute[254092]: 2025-11-25 16:32:08.502 254096 DEBUG nova.compute.manager [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:32:08 compute-0 nova_compute[254092]: 2025-11-25 16:32:08.503 254096 DEBUG nova.network.neutron [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:32:08 compute-0 nova_compute[254092]: 2025-11-25 16:32:08.519 254096 INFO nova.virt.libvirt.driver [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:32:08 compute-0 nova_compute[254092]: 2025-11-25 16:32:08.579 254096 DEBUG nova.compute.manager [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:32:08 compute-0 nova_compute[254092]: 2025-11-25 16:32:08.673 254096 DEBUG nova.compute.manager [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:32:08 compute-0 nova_compute[254092]: 2025-11-25 16:32:08.676 254096 DEBUG nova.virt.libvirt.driver [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:32:08 compute-0 nova_compute[254092]: 2025-11-25 16:32:08.677 254096 INFO nova.virt.libvirt.driver [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Creating image(s)
Nov 25 16:32:08 compute-0 nova_compute[254092]: 2025-11-25 16:32:08.697 254096 DEBUG nova.storage.rbd_utils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] rbd image 9bd4d655-c683-4433-a739-168946211a75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:08 compute-0 nova_compute[254092]: 2025-11-25 16:32:08.717 254096 DEBUG nova.storage.rbd_utils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] rbd image 9bd4d655-c683-4433-a739-168946211a75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:08 compute-0 nova_compute[254092]: 2025-11-25 16:32:08.736 254096 DEBUG nova.storage.rbd_utils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] rbd image 9bd4d655-c683-4433-a739-168946211a75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:08 compute-0 nova_compute[254092]: 2025-11-25 16:32:08.740 254096 DEBUG oslo_concurrency.processutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:08 compute-0 nova_compute[254092]: 2025-11-25 16:32:08.764 254096 DEBUG nova.network.neutron [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:32:08 compute-0 nova_compute[254092]: 2025-11-25 16:32:08.801 254096 DEBUG oslo_concurrency.processutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:08 compute-0 nova_compute[254092]: 2025-11-25 16:32:08.801 254096 DEBUG oslo_concurrency.lockutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:08 compute-0 nova_compute[254092]: 2025-11-25 16:32:08.802 254096 DEBUG oslo_concurrency.lockutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:08 compute-0 nova_compute[254092]: 2025-11-25 16:32:08.802 254096 DEBUG oslo_concurrency.lockutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:08 compute-0 nova_compute[254092]: 2025-11-25 16:32:08.819 254096 DEBUG nova.storage.rbd_utils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] rbd image 9bd4d655-c683-4433-a739-168946211a75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:08 compute-0 nova_compute[254092]: 2025-11-25 16:32:08.822 254096 DEBUG oslo_concurrency.processutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 9bd4d655-c683-4433-a739-168946211a75_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:08 compute-0 nova_compute[254092]: 2025-11-25 16:32:08.856 254096 DEBUG nova.policy [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a46b9493b027436fbd21d09ff5ac15e4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4ae7f32b97104afd930af5d5f5754532', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:32:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:32:08 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3172723358' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:32:08 compute-0 nova_compute[254092]: 2025-11-25 16:32:08.886 254096 DEBUG oslo_concurrency.processutils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:08 compute-0 nova_compute[254092]: 2025-11-25 16:32:08.888 254096 DEBUG nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:32:08 compute-0 nova_compute[254092]:   <uuid>36f65013-2906-4794-9e23-e92dc7814b6e</uuid>
Nov 25 16:32:08 compute-0 nova_compute[254092]:   <name>instance-0000001d</name>
Nov 25 16:32:08 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:32:08 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:32:08 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:32:08 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:       <nova:name>tempest-ServerShowV254Test-server-179263035</nova:name>
Nov 25 16:32:08 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:32:07</nova:creationTime>
Nov 25 16:32:08 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:32:08 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:32:08 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:32:08 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:32:08 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:32:08 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:32:08 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:32:08 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:32:08 compute-0 nova_compute[254092]:         <nova:user uuid="23c828e6ebbd4d0488f6edbbe9616ca7">tempest-ServerShowV254Test-285881419-project-member</nova:user>
Nov 25 16:32:08 compute-0 nova_compute[254092]:         <nova:project uuid="1a87d91cb59d45c29155c8f5cb5ad745">tempest-ServerShowV254Test-285881419</nova:project>
Nov 25 16:32:08 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:32:08 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="a4aa3708-bb73-4b5a-b3f3-42153358021e"/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:       <nova:ports/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:32:08 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:32:08 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:32:08 compute-0 nova_compute[254092]:     <system>
Nov 25 16:32:08 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:32:08 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:32:08 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:32:08 compute-0 nova_compute[254092]:       <entry name="serial">36f65013-2906-4794-9e23-e92dc7814b6e</entry>
Nov 25 16:32:08 compute-0 nova_compute[254092]:       <entry name="uuid">36f65013-2906-4794-9e23-e92dc7814b6e</entry>
Nov 25 16:32:08 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     </system>
Nov 25 16:32:08 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:32:08 compute-0 nova_compute[254092]:   <os>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:   </os>
Nov 25 16:32:08 compute-0 nova_compute[254092]:   <features>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:   </features>
Nov 25 16:32:08 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:32:08 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:32:08 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:32:08 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:32:08 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:32:08 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/36f65013-2906-4794-9e23-e92dc7814b6e_disk">
Nov 25 16:32:08 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:       </source>
Nov 25 16:32:08 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:32:08 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:32:08 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:32:08 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/36f65013-2906-4794-9e23-e92dc7814b6e_disk.config">
Nov 25 16:32:08 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:       </source>
Nov 25 16:32:08 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:32:08 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:32:08 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:32:08 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/36f65013-2906-4794-9e23-e92dc7814b6e/console.log" append="off"/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     <video>
Nov 25 16:32:08 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     </video>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:32:08 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:32:08 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:32:08 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:32:08 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:32:08 compute-0 nova_compute[254092]: </domain>
Nov 25 16:32:08 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:32:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:32:08 compute-0 nova_compute[254092]: 2025-11-25 16:32:08.953 254096 DEBUG nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:32:08 compute-0 nova_compute[254092]: 2025-11-25 16:32:08.954 254096 DEBUG nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:32:08 compute-0 nova_compute[254092]: 2025-11-25 16:32:08.956 254096 INFO nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Using config drive
Nov 25 16:32:08 compute-0 nova_compute[254092]: 2025-11-25 16:32:08.979 254096 DEBUG nova.storage.rbd_utils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] rbd image 36f65013-2906-4794-9e23-e92dc7814b6e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:09 compute-0 nova_compute[254092]: 2025-11-25 16:32:08.999 254096 DEBUG nova.objects.instance [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 36f65013-2906-4794-9e23-e92dc7814b6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:32:09 compute-0 nova_compute[254092]: 2025-11-25 16:32:09.120 254096 DEBUG oslo_concurrency.processutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 9bd4d655-c683-4433-a739-168946211a75_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.299s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:09 compute-0 nova_compute[254092]: 2025-11-25 16:32:09.166 254096 DEBUG nova.storage.rbd_utils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] resizing rbd image 9bd4d655-c683-4433-a739-168946211a75_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:32:09 compute-0 nova_compute[254092]: 2025-11-25 16:32:09.238 254096 DEBUG nova.objects.instance [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lazy-loading 'migration_context' on Instance uuid 9bd4d655-c683-4433-a739-168946211a75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:32:09 compute-0 nova_compute[254092]: 2025-11-25 16:32:09.251 254096 DEBUG nova.virt.libvirt.driver [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:32:09 compute-0 nova_compute[254092]: 2025-11-25 16:32:09.252 254096 DEBUG nova.virt.libvirt.driver [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Ensure instance console log exists: /var/lib/nova/instances/9bd4d655-c683-4433-a739-168946211a75/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:32:09 compute-0 nova_compute[254092]: 2025-11-25 16:32:09.252 254096 DEBUG oslo_concurrency.lockutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:09 compute-0 nova_compute[254092]: 2025-11-25 16:32:09.252 254096 DEBUG oslo_concurrency.lockutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:09 compute-0 nova_compute[254092]: 2025-11-25 16:32:09.252 254096 DEBUG oslo_concurrency.lockutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1342: 321 pgs: 321 active+clean; 121 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 339 KiB/s rd, 2.5 MiB/s wr, 104 op/s
Nov 25 16:32:09 compute-0 nova_compute[254092]: 2025-11-25 16:32:09.349 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088314.3483362, e1c7a84b-16df-49a8-83a7-a97bd47e0d43 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:32:09 compute-0 nova_compute[254092]: 2025-11-25 16:32:09.349 254096 INFO nova.compute.manager [-] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] VM Stopped (Lifecycle Event)
Nov 25 16:32:09 compute-0 nova_compute[254092]: 2025-11-25 16:32:09.367 254096 DEBUG nova.compute.manager [None req-78d0d6d3-d257-4005-b426-dc63cee069fe - - - - - -] [instance: e1c7a84b-16df-49a8-83a7-a97bd47e0d43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:32:09 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3172723358' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:32:09 compute-0 nova_compute[254092]: 2025-11-25 16:32:09.556 254096 INFO nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Creating config drive at /var/lib/nova/instances/36f65013-2906-4794-9e23-e92dc7814b6e/disk.config
Nov 25 16:32:09 compute-0 nova_compute[254092]: 2025-11-25 16:32:09.562 254096 DEBUG oslo_concurrency.processutils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/36f65013-2906-4794-9e23-e92dc7814b6e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6c9uok1g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:09 compute-0 nova_compute[254092]: 2025-11-25 16:32:09.694 254096 DEBUG oslo_concurrency.processutils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/36f65013-2906-4794-9e23-e92dc7814b6e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6c9uok1g" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:09 compute-0 nova_compute[254092]: 2025-11-25 16:32:09.720 254096 DEBUG nova.storage.rbd_utils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] rbd image 36f65013-2906-4794-9e23-e92dc7814b6e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:09 compute-0 nova_compute[254092]: 2025-11-25 16:32:09.724 254096 DEBUG oslo_concurrency.processutils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/36f65013-2906-4794-9e23-e92dc7814b6e/disk.config 36f65013-2906-4794-9e23-e92dc7814b6e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:09 compute-0 nova_compute[254092]: 2025-11-25 16:32:09.807 254096 DEBUG nova.network.neutron [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Updating instance_info_cache with network_info: [{"id": "479811bd-7043-4423-9815-a17763247b3b", "address": "fa:16:3e:34:0a:5f", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479811bd-70", "ovs_interfaceid": "479811bd-7043-4423-9815-a17763247b3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:32:09 compute-0 nova_compute[254092]: 2025-11-25 16:32:09.825 254096 DEBUG oslo_concurrency.lockutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Releasing lock "refresh_cache-3573b86d-afab-4a6f-970e-7db532c23eb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:32:09 compute-0 nova_compute[254092]: 2025-11-25 16:32:09.826 254096 DEBUG nova.compute.manager [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Instance network_info: |[{"id": "479811bd-7043-4423-9815-a17763247b3b", "address": "fa:16:3e:34:0a:5f", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479811bd-70", "ovs_interfaceid": "479811bd-7043-4423-9815-a17763247b3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:32:09 compute-0 nova_compute[254092]: 2025-11-25 16:32:09.828 254096 DEBUG nova.virt.libvirt.driver [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Start _get_guest_xml network_info=[{"id": "479811bd-7043-4423-9815-a17763247b3b", "address": "fa:16:3e:34:0a:5f", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479811bd-70", "ovs_interfaceid": "479811bd-7043-4423-9815-a17763247b3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:32:09 compute-0 nova_compute[254092]: 2025-11-25 16:32:09.833 254096 WARNING nova.virt.libvirt.driver [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:32:09 compute-0 nova_compute[254092]: 2025-11-25 16:32:09.837 254096 DEBUG nova.virt.libvirt.host [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:32:09 compute-0 nova_compute[254092]: 2025-11-25 16:32:09.838 254096 DEBUG nova.virt.libvirt.host [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:32:09 compute-0 nova_compute[254092]: 2025-11-25 16:32:09.841 254096 DEBUG nova.virt.libvirt.host [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:32:09 compute-0 nova_compute[254092]: 2025-11-25 16:32:09.841 254096 DEBUG nova.virt.libvirt.host [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:32:09 compute-0 nova_compute[254092]: 2025-11-25 16:32:09.842 254096 DEBUG nova.virt.libvirt.driver [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:32:09 compute-0 nova_compute[254092]: 2025-11-25 16:32:09.842 254096 DEBUG nova.virt.hardware [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:32:09 compute-0 nova_compute[254092]: 2025-11-25 16:32:09.843 254096 DEBUG nova.virt.hardware [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:32:09 compute-0 nova_compute[254092]: 2025-11-25 16:32:09.843 254096 DEBUG nova.virt.hardware [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:32:09 compute-0 nova_compute[254092]: 2025-11-25 16:32:09.843 254096 DEBUG nova.virt.hardware [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:32:09 compute-0 nova_compute[254092]: 2025-11-25 16:32:09.843 254096 DEBUG nova.virt.hardware [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:32:09 compute-0 nova_compute[254092]: 2025-11-25 16:32:09.844 254096 DEBUG nova.virt.hardware [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:32:09 compute-0 nova_compute[254092]: 2025-11-25 16:32:09.844 254096 DEBUG nova.virt.hardware [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:32:09 compute-0 nova_compute[254092]: 2025-11-25 16:32:09.844 254096 DEBUG nova.virt.hardware [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:32:09 compute-0 nova_compute[254092]: 2025-11-25 16:32:09.844 254096 DEBUG nova.virt.hardware [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:32:09 compute-0 nova_compute[254092]: 2025-11-25 16:32:09.844 254096 DEBUG nova.virt.hardware [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:32:09 compute-0 nova_compute[254092]: 2025-11-25 16:32:09.845 254096 DEBUG nova.virt.hardware [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:32:09 compute-0 nova_compute[254092]: 2025-11-25 16:32:09.847 254096 DEBUG oslo_concurrency.processutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:09 compute-0 nova_compute[254092]: 2025-11-25 16:32:09.876 254096 DEBUG oslo_concurrency.processutils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/36f65013-2906-4794-9e23-e92dc7814b6e/disk.config 36f65013-2906-4794-9e23-e92dc7814b6e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:09 compute-0 nova_compute[254092]: 2025-11-25 16:32:09.877 254096 INFO nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Deleting local config drive /var/lib/nova/instances/36f65013-2906-4794-9e23-e92dc7814b6e/disk.config because it was imported into RBD.
Nov 25 16:32:09 compute-0 systemd-machined[216343]: New machine qemu-34-instance-0000001d.
Nov 25 16:32:09 compute-0 systemd[1]: Started Virtual Machine qemu-34-instance-0000001d.
Nov 25 16:32:10 compute-0 nova_compute[254092]: 2025-11-25 16:32:10.030 254096 DEBUG nova.network.neutron [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Successfully created port: e1641afa-e435-45ca-a0fe-d2bb9b12981a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:32:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:32:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:32:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:32:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:32:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:32:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:32:10 compute-0 nova_compute[254092]: 2025-11-25 16:32:10.156 254096 DEBUG nova.compute.manager [req-1af0e693-71f1-488b-a6c9-6dbccd0ab883 req-e5c2a139-da8d-40e5-8ece-b0f5f53ce6d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Received event network-changed-479811bd-7043-4423-9815-a17763247b3b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:32:10 compute-0 nova_compute[254092]: 2025-11-25 16:32:10.156 254096 DEBUG nova.compute.manager [req-1af0e693-71f1-488b-a6c9-6dbccd0ab883 req-e5c2a139-da8d-40e5-8ece-b0f5f53ce6d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Refreshing instance network info cache due to event network-changed-479811bd-7043-4423-9815-a17763247b3b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:32:10 compute-0 nova_compute[254092]: 2025-11-25 16:32:10.157 254096 DEBUG oslo_concurrency.lockutils [req-1af0e693-71f1-488b-a6c9-6dbccd0ab883 req-e5c2a139-da8d-40e5-8ece-b0f5f53ce6d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-3573b86d-afab-4a6f-970e-7db532c23eb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:32:10 compute-0 nova_compute[254092]: 2025-11-25 16:32:10.157 254096 DEBUG oslo_concurrency.lockutils [req-1af0e693-71f1-488b-a6c9-6dbccd0ab883 req-e5c2a139-da8d-40e5-8ece-b0f5f53ce6d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-3573b86d-afab-4a6f-970e-7db532c23eb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:32:10 compute-0 nova_compute[254092]: 2025-11-25 16:32:10.157 254096 DEBUG nova.network.neutron [req-1af0e693-71f1-488b-a6c9-6dbccd0ab883 req-e5c2a139-da8d-40e5-8ece-b0f5f53ce6d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Refreshing network info cache for port 479811bd-7043-4423-9815-a17763247b3b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:32:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:32:10 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1596985713' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:32:10 compute-0 nova_compute[254092]: 2025-11-25 16:32:10.274 254096 DEBUG oslo_concurrency.processutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:10 compute-0 nova_compute[254092]: 2025-11-25 16:32:10.293 254096 DEBUG nova.storage.rbd_utils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] rbd image 3573b86d-afab-4a6f-970e-7db532c23eb3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:10 compute-0 nova_compute[254092]: 2025-11-25 16:32:10.297 254096 DEBUG oslo_concurrency.processutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:10 compute-0 ceph-mon[74985]: pgmap v1342: 321 pgs: 321 active+clean; 121 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 339 KiB/s rd, 2.5 MiB/s wr, 104 op/s
Nov 25 16:32:10 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1596985713' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:32:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:32:10 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1549115621' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:32:10 compute-0 nova_compute[254092]: 2025-11-25 16:32:10.710 254096 DEBUG oslo_concurrency.processutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:10 compute-0 nova_compute[254092]: 2025-11-25 16:32:10.712 254096 DEBUG nova.virt.libvirt.vif [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:32:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1171232620',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1171232620',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1171232620',id=30,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ed571eebde434695bae813d7bb21f4c3',ramdisk_id='',reservation_id='r-wlltia02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-964953831',owner_us
er_name='tempest-ImagesOneServerNegativeTestJSON-964953831-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:32:05Z,user_data=None,user_id='87058665de814ae0a51a12ff02b0d9aa',uuid=3573b86d-afab-4a6f-970e-7db532c23eb3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "479811bd-7043-4423-9815-a17763247b3b", "address": "fa:16:3e:34:0a:5f", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479811bd-70", "ovs_interfaceid": "479811bd-7043-4423-9815-a17763247b3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:32:10 compute-0 nova_compute[254092]: 2025-11-25 16:32:10.713 254096 DEBUG nova.network.os_vif_util [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Converting VIF {"id": "479811bd-7043-4423-9815-a17763247b3b", "address": "fa:16:3e:34:0a:5f", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479811bd-70", "ovs_interfaceid": "479811bd-7043-4423-9815-a17763247b3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:32:10 compute-0 nova_compute[254092]: 2025-11-25 16:32:10.714 254096 DEBUG nova.network.os_vif_util [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:0a:5f,bridge_name='br-int',has_traffic_filtering=True,id=479811bd-7043-4423-9815-a17763247b3b,network=Network(50e18e22-7850-458c-8d66-5932e0495377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap479811bd-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:32:10 compute-0 nova_compute[254092]: 2025-11-25 16:32:10.715 254096 DEBUG nova.objects.instance [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3573b86d-afab-4a6f-970e-7db532c23eb3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:32:10 compute-0 nova_compute[254092]: 2025-11-25 16:32:10.741 254096 DEBUG nova.virt.libvirt.driver [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:32:10 compute-0 nova_compute[254092]:   <uuid>3573b86d-afab-4a6f-970e-7db532c23eb3</uuid>
Nov 25 16:32:10 compute-0 nova_compute[254092]:   <name>instance-0000001e</name>
Nov 25 16:32:10 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:32:10 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:32:10 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:32:10 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:       <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-1171232620</nova:name>
Nov 25 16:32:10 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:32:09</nova:creationTime>
Nov 25 16:32:10 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:32:10 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:32:10 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:32:10 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:32:10 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:32:10 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:32:10 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:32:10 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:32:10 compute-0 nova_compute[254092]:         <nova:user uuid="87058665de814ae0a51a12ff02b0d9aa">tempest-ImagesOneServerNegativeTestJSON-964953831-project-member</nova:user>
Nov 25 16:32:10 compute-0 nova_compute[254092]:         <nova:project uuid="ed571eebde434695bae813d7bb21f4c3">tempest-ImagesOneServerNegativeTestJSON-964953831</nova:project>
Nov 25 16:32:10 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:32:10 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:32:10 compute-0 nova_compute[254092]:         <nova:port uuid="479811bd-7043-4423-9815-a17763247b3b">
Nov 25 16:32:10 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:32:10 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:32:10 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:32:10 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:32:10 compute-0 nova_compute[254092]:     <system>
Nov 25 16:32:10 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:32:10 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:32:10 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:32:10 compute-0 nova_compute[254092]:       <entry name="serial">3573b86d-afab-4a6f-970e-7db532c23eb3</entry>
Nov 25 16:32:10 compute-0 nova_compute[254092]:       <entry name="uuid">3573b86d-afab-4a6f-970e-7db532c23eb3</entry>
Nov 25 16:32:10 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     </system>
Nov 25 16:32:10 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:32:10 compute-0 nova_compute[254092]:   <os>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:   </os>
Nov 25 16:32:10 compute-0 nova_compute[254092]:   <features>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:   </features>
Nov 25 16:32:10 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:32:10 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:32:10 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:32:10 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:32:10 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:32:10 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/3573b86d-afab-4a6f-970e-7db532c23eb3_disk">
Nov 25 16:32:10 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:       </source>
Nov 25 16:32:10 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:32:10 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:32:10 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:32:10 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/3573b86d-afab-4a6f-970e-7db532c23eb3_disk.config">
Nov 25 16:32:10 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:       </source>
Nov 25 16:32:10 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:32:10 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:32:10 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:32:10 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:34:0a:5f"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:       <target dev="tap479811bd-70"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:32:10 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/3573b86d-afab-4a6f-970e-7db532c23eb3/console.log" append="off"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     <video>
Nov 25 16:32:10 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     </video>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:32:10 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:32:10 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:32:10 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:32:10 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:32:10 compute-0 nova_compute[254092]: </domain>
Nov 25 16:32:10 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:32:10 compute-0 nova_compute[254092]: 2025-11-25 16:32:10.746 254096 DEBUG nova.compute.manager [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Preparing to wait for external event network-vif-plugged-479811bd-7043-4423-9815-a17763247b3b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:32:10 compute-0 nova_compute[254092]: 2025-11-25 16:32:10.747 254096 DEBUG oslo_concurrency.lockutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "3573b86d-afab-4a6f-970e-7db532c23eb3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:10 compute-0 nova_compute[254092]: 2025-11-25 16:32:10.747 254096 DEBUG oslo_concurrency.lockutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "3573b86d-afab-4a6f-970e-7db532c23eb3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:10 compute-0 nova_compute[254092]: 2025-11-25 16:32:10.747 254096 DEBUG oslo_concurrency.lockutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "3573b86d-afab-4a6f-970e-7db532c23eb3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:10 compute-0 nova_compute[254092]: 2025-11-25 16:32:10.748 254096 DEBUG nova.virt.libvirt.vif [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:32:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1171232620',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1171232620',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1171232620',id=30,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ed571eebde434695bae813d7bb21f4c3',ramdisk_id='',reservation_id='r-wlltia02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-964953831
',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-964953831-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:32:05Z,user_data=None,user_id='87058665de814ae0a51a12ff02b0d9aa',uuid=3573b86d-afab-4a6f-970e-7db532c23eb3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "479811bd-7043-4423-9815-a17763247b3b", "address": "fa:16:3e:34:0a:5f", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479811bd-70", "ovs_interfaceid": "479811bd-7043-4423-9815-a17763247b3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:32:10 compute-0 nova_compute[254092]: 2025-11-25 16:32:10.748 254096 DEBUG nova.network.os_vif_util [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Converting VIF {"id": "479811bd-7043-4423-9815-a17763247b3b", "address": "fa:16:3e:34:0a:5f", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479811bd-70", "ovs_interfaceid": "479811bd-7043-4423-9815-a17763247b3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:32:10 compute-0 nova_compute[254092]: 2025-11-25 16:32:10.749 254096 DEBUG nova.network.os_vif_util [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:0a:5f,bridge_name='br-int',has_traffic_filtering=True,id=479811bd-7043-4423-9815-a17763247b3b,network=Network(50e18e22-7850-458c-8d66-5932e0495377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap479811bd-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:32:10 compute-0 nova_compute[254092]: 2025-11-25 16:32:10.750 254096 DEBUG os_vif [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:0a:5f,bridge_name='br-int',has_traffic_filtering=True,id=479811bd-7043-4423-9815-a17763247b3b,network=Network(50e18e22-7850-458c-8d66-5932e0495377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap479811bd-70') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:32:10 compute-0 nova_compute[254092]: 2025-11-25 16:32:10.750 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:10 compute-0 nova_compute[254092]: 2025-11-25 16:32:10.751 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:10 compute-0 nova_compute[254092]: 2025-11-25 16:32:10.751 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:32:10 compute-0 nova_compute[254092]: 2025-11-25 16:32:10.754 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:10 compute-0 nova_compute[254092]: 2025-11-25 16:32:10.754 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap479811bd-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:10 compute-0 nova_compute[254092]: 2025-11-25 16:32:10.755 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap479811bd-70, col_values=(('external_ids', {'iface-id': '479811bd-7043-4423-9815-a17763247b3b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:34:0a:5f', 'vm-uuid': '3573b86d-afab-4a6f-970e-7db532c23eb3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:10 compute-0 nova_compute[254092]: 2025-11-25 16:32:10.756 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:10 compute-0 NetworkManager[48891]: <info>  [1764088330.7575] manager: (tap479811bd-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/93)
Nov 25 16:32:10 compute-0 nova_compute[254092]: 2025-11-25 16:32:10.759 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:32:10 compute-0 nova_compute[254092]: 2025-11-25 16:32:10.762 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:10 compute-0 nova_compute[254092]: 2025-11-25 16:32:10.763 254096 INFO os_vif [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:0a:5f,bridge_name='br-int',has_traffic_filtering=True,id=479811bd-7043-4423-9815-a17763247b3b,network=Network(50e18e22-7850-458c-8d66-5932e0495377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap479811bd-70')
Nov 25 16:32:10 compute-0 nova_compute[254092]: 2025-11-25 16:32:10.811 254096 DEBUG nova.virt.libvirt.driver [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:32:10 compute-0 nova_compute[254092]: 2025-11-25 16:32:10.812 254096 DEBUG nova.virt.libvirt.driver [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:32:10 compute-0 nova_compute[254092]: 2025-11-25 16:32:10.812 254096 DEBUG nova.virt.libvirt.driver [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] No VIF found with MAC fa:16:3e:34:0a:5f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:32:10 compute-0 nova_compute[254092]: 2025-11-25 16:32:10.813 254096 INFO nova.virt.libvirt.driver [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Using config drive
Nov 25 16:32:10 compute-0 nova_compute[254092]: 2025-11-25 16:32:10.831 254096 DEBUG nova.storage.rbd_utils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] rbd image 3573b86d-afab-4a6f-970e-7db532c23eb3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:11 compute-0 sudo[292153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.015 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for 36f65013-2906-4794-9e23-e92dc7814b6e due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.017 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088331.015148, 36f65013-2906-4794-9e23-e92dc7814b6e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.017 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] VM Resumed (Lifecycle Event)
Nov 25 16:32:11 compute-0 sudo[292153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.019 254096 DEBUG nova.compute.manager [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.020 254096 DEBUG nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:32:11 compute-0 sudo[292153]: pam_unix(sudo:session): session closed for user root
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.024 254096 INFO nova.virt.libvirt.driver [-] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Instance spawned successfully.
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.024 254096 DEBUG nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.041 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.046 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.049 254096 DEBUG nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.050 254096 DEBUG nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.050 254096 DEBUG nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.050 254096 DEBUG nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.051 254096 DEBUG nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.052 254096 DEBUG nova.virt.libvirt.driver [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:11 compute-0 sudo[292180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:32:11 compute-0 sudo[292180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.075 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.076 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088331.015657, 36f65013-2906-4794-9e23-e92dc7814b6e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.076 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] VM Started (Lifecycle Event)
Nov 25 16:32:11 compute-0 sudo[292180]: pam_unix(sudo:session): session closed for user root
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.111 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.115 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.128 254096 DEBUG nova.compute.manager [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:32:11 compute-0 sudo[292205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:32:11 compute-0 sudo[292205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:32:11 compute-0 sudo[292205]: pam_unix(sudo:session): session closed for user root
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.148 254096 DEBUG nova.network.neutron [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Successfully updated port: e1641afa-e435-45ca-a0fe-d2bb9b12981a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.167 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.171 254096 DEBUG oslo_concurrency.lockutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Acquiring lock "refresh_cache-9bd4d655-c683-4433-a739-168946211a75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.171 254096 DEBUG oslo_concurrency.lockutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Acquired lock "refresh_cache-9bd4d655-c683-4433-a739-168946211a75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.171 254096 DEBUG nova.network.neutron [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:32:11 compute-0 sudo[292230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 16:32:11 compute-0 sudo[292230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.208 254096 DEBUG oslo_concurrency.lockutils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.209 254096 DEBUG oslo_concurrency.lockutils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.209 254096 DEBUG nova.objects.instance [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.266 254096 DEBUG oslo_concurrency.lockutils [None req-e2220b20-f7c1-44e2-beed-4e2c4034675e 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1343: 321 pgs: 321 active+clean; 180 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 368 KiB/s rd, 7.5 MiB/s wr, 206 op/s
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.331 254096 DEBUG nova.network.neutron [req-1af0e693-71f1-488b-a6c9-6dbccd0ab883 req-e5c2a139-da8d-40e5-8ece-b0f5f53ce6d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Updated VIF entry in instance network info cache for port 479811bd-7043-4423-9815-a17763247b3b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.332 254096 DEBUG nova.network.neutron [req-1af0e693-71f1-488b-a6c9-6dbccd0ab883 req-e5c2a139-da8d-40e5-8ece-b0f5f53ce6d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Updating instance_info_cache with network_info: [{"id": "479811bd-7043-4423-9815-a17763247b3b", "address": "fa:16:3e:34:0a:5f", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479811bd-70", "ovs_interfaceid": "479811bd-7043-4423-9815-a17763247b3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.344 254096 INFO nova.virt.libvirt.driver [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Creating config drive at /var/lib/nova/instances/3573b86d-afab-4a6f-970e-7db532c23eb3/disk.config
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.351 254096 DEBUG oslo_concurrency.processutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3573b86d-afab-4a6f-970e-7db532c23eb3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmhuvr8wt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.379 254096 DEBUG oslo_concurrency.lockutils [req-1af0e693-71f1-488b-a6c9-6dbccd0ab883 req-e5c2a139-da8d-40e5-8ece-b0f5f53ce6d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-3573b86d-afab-4a6f-970e-7db532c23eb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.411 254096 DEBUG nova.network.neutron [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:32:11 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1549115621' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.485 254096 DEBUG oslo_concurrency.processutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3573b86d-afab-4a6f-970e-7db532c23eb3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmhuvr8wt" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.513 254096 DEBUG nova.storage.rbd_utils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] rbd image 3573b86d-afab-4a6f-970e-7db532c23eb3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.523 254096 DEBUG oslo_concurrency.processutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3573b86d-afab-4a6f-970e-7db532c23eb3/disk.config 3573b86d-afab-4a6f-970e-7db532c23eb3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:11 compute-0 sudo[292230]: pam_unix(sudo:session): session closed for user root
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.677 254096 DEBUG oslo_concurrency.processutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3573b86d-afab-4a6f-970e-7db532c23eb3/disk.config 3573b86d-afab-4a6f-970e-7db532c23eb3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.679 254096 INFO nova.virt.libvirt.driver [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Deleting local config drive /var/lib/nova/instances/3573b86d-afab-4a6f-970e-7db532c23eb3/disk.config because it was imported into RBD.
Nov 25 16:32:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:32:11 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:32:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 16:32:11 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:32:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 16:32:11 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:32:11 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 3bb48ba9-bdf3-46d3-a080-1641f6eaa0df does not exist
Nov 25 16:32:11 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev e981d795-a7e6-45ef-99d7-280ee226ad9f does not exist
Nov 25 16:32:11 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 6daba3ae-5538-4ec1-a575-14b5b3c1b0c3 does not exist
Nov 25 16:32:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 16:32:11 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:32:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 16:32:11 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:32:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:32:11 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.726 254096 DEBUG oslo_concurrency.lockutils [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Acquiring lock "36f65013-2906-4794-9e23-e92dc7814b6e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.726 254096 DEBUG oslo_concurrency.lockutils [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lock "36f65013-2906-4794-9e23-e92dc7814b6e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.727 254096 DEBUG oslo_concurrency.lockutils [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Acquiring lock "36f65013-2906-4794-9e23-e92dc7814b6e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.727 254096 DEBUG oslo_concurrency.lockutils [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lock "36f65013-2906-4794-9e23-e92dc7814b6e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.727 254096 DEBUG oslo_concurrency.lockutils [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lock "36f65013-2906-4794-9e23-e92dc7814b6e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.729 254096 INFO nova.compute.manager [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Terminating instance
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.730 254096 DEBUG oslo_concurrency.lockutils [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Acquiring lock "refresh_cache-36f65013-2906-4794-9e23-e92dc7814b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.730 254096 DEBUG oslo_concurrency.lockutils [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Acquired lock "refresh_cache-36f65013-2906-4794-9e23-e92dc7814b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.731 254096 DEBUG nova.network.neutron [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:32:11 compute-0 kernel: tap479811bd-70: entered promiscuous mode
Nov 25 16:32:11 compute-0 NetworkManager[48891]: <info>  [1764088331.7400] manager: (tap479811bd-70): new Tun device (/org/freedesktop/NetworkManager/Devices/94)
Nov 25 16:32:11 compute-0 systemd-udevd[292154]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:32:11 compute-0 ovn_controller[153477]: 2025-11-25T16:32:11Z|00195|binding|INFO|Claiming lport 479811bd-7043-4423-9815-a17763247b3b for this chassis.
Nov 25 16:32:11 compute-0 ovn_controller[153477]: 2025-11-25T16:32:11Z|00196|binding|INFO|479811bd-7043-4423-9815-a17763247b3b: Claiming fa:16:3e:34:0a:5f 10.100.0.14
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.742 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:11 compute-0 NetworkManager[48891]: <info>  [1764088331.7525] device (tap479811bd-70): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:32:11 compute-0 NetworkManager[48891]: <info>  [1764088331.7549] device (tap479811bd-70): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:32:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:11.753 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:34:0a:5f 10.100.0.14'], port_security=['fa:16:3e:34:0a:5f 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '3573b86d-afab-4a6f-970e-7db532c23eb3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50e18e22-7850-458c-8d66-5932e0495377', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ed571eebde434695bae813d7bb21f4c3', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd93cbc19-cf50-4620-aef2-ea907e30850a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aff6ec99-f8ff-4a75-9004-f758cb9a51ad, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=479811bd-7043-4423-9815-a17763247b3b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:32:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:11.755 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 479811bd-7043-4423-9815-a17763247b3b in datapath 50e18e22-7850-458c-8d66-5932e0495377 bound to our chassis
Nov 25 16:32:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:11.756 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 50e18e22-7850-458c-8d66-5932e0495377
Nov 25 16:32:11 compute-0 ovn_controller[153477]: 2025-11-25T16:32:11Z|00197|binding|INFO|Setting lport 479811bd-7043-4423-9815-a17763247b3b ovn-installed in OVS
Nov 25 16:32:11 compute-0 ovn_controller[153477]: 2025-11-25T16:32:11Z|00198|binding|INFO|Setting lport 479811bd-7043-4423-9815-a17763247b3b up in Southbound
Nov 25 16:32:11 compute-0 nova_compute[254092]: 2025-11-25 16:32:11.764 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:11.768 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e45ec790-82d5-4846-b640-71aa8307b9cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:11.771 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap50e18e22-71 in ovnmeta-50e18e22-7850-458c-8d66-5932e0495377 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:32:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:11.773 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap50e18e22-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:32:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:11.773 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9d615b7f-23ae-43c1-8efb-fe0e0c82958a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:11.774 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[55910452-437d-40a1-a101-d121340bca8b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:11 compute-0 systemd-machined[216343]: New machine qemu-35-instance-0000001e.
Nov 25 16:32:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:11.789 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[af8f17e7-5d62-4585-aafe-adc2d356f49c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:11 compute-0 systemd[1]: Started Virtual Machine qemu-35-instance-0000001e.
Nov 25 16:32:11 compute-0 sudo[292331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:32:11 compute-0 sudo[292331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:32:11 compute-0 sudo[292331]: pam_unix(sudo:session): session closed for user root
Nov 25 16:32:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:11.814 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ce155722-4ff6-4f52-afd4-6b2a68e28bf1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:11.848 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ac1f6464-4f93-48d0-b180-8714dbc387bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:11 compute-0 NetworkManager[48891]: <info>  [1764088331.8559] manager: (tap50e18e22-70): new Veth device (/org/freedesktop/NetworkManager/Devices/95)
Nov 25 16:32:11 compute-0 systemd-udevd[292358]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:32:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:11.856 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d7d1e12d-90a2-4c73-802a-b82a4ff2831b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:11 compute-0 sudo[292369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:32:11 compute-0 sudo[292369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:32:11 compute-0 sudo[292369]: pam_unix(sudo:session): session closed for user root
Nov 25 16:32:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:11.897 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[4749f00a-6e46-47d5-af3b-89e9fbf198de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:11.900 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[42e4558e-4751-4f17-a59f-81cd05ab5b58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:11 compute-0 sudo[292418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:32:11 compute-0 NetworkManager[48891]: <info>  [1764088331.9260] device (tap50e18e22-70): carrier: link connected
Nov 25 16:32:11 compute-0 sudo[292418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:32:11 compute-0 sudo[292418]: pam_unix(sudo:session): session closed for user root
Nov 25 16:32:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:11.931 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[24ffca32-9222-4042-9b7c-1f2794783c6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:11.947 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e021faa0-4fbb-46f9-b1a1-dca023d4aca3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap50e18e22-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:14:7d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 59], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 478951, 'reachable_time': 30093, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292446, 'error': None, 'target': 'ovnmeta-50e18e22-7850-458c-8d66-5932e0495377', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:11.966 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d85fea67-f9e8-4a53-9a8f-47990142cc39]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feda:147d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 478951, 'tstamp': 478951}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292457, 'error': None, 'target': 'ovnmeta-50e18e22-7850-458c-8d66-5932e0495377', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:11 compute-0 sudo[292447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 16:32:11 compute-0 sudo[292447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:32:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:11.983 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b85fa956-4aa8-415f-99a9-577440a7a7ea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap50e18e22-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:14:7d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 59], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 478951, 'reachable_time': 30093, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 292471, 'error': None, 'target': 'ovnmeta-50e18e22-7850-458c-8d66-5932e0495377', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:12.020 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c9fd3d7c-88f7-4e70-962c-8b361abd343f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:12.081 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b6864ff6-3a08-4e33-85f5-0548bd3996ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:12.083 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50e18e22-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:12.083 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:32:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:12.083 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap50e18e22-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.094 254096 DEBUG nova.network.neutron [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:32:12 compute-0 kernel: tap50e18e22-70: entered promiscuous mode
Nov 25 16:32:12 compute-0 NetworkManager[48891]: <info>  [1764088332.1227] manager: (tap50e18e22-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/96)
Nov 25 16:32:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:12.125 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap50e18e22-70, col_values=(('external_ids', {'iface-id': '5b591f76-4d04-4b30-9182-d359be87068c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.127 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:12 compute-0 ovn_controller[153477]: 2025-11-25T16:32:12Z|00199|binding|INFO|Releasing lport 5b591f76-4d04-4b30-9182-d359be87068c from this chassis (sb_readonly=0)
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.130 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:12.131 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/50e18e22-7850-458c-8d66-5932e0495377.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/50e18e22-7850-458c-8d66-5932e0495377.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:32:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:12.131 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7d2f9c9f-069e-426a-ab25-e0db4fdf6d48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:12.132 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:32:12 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:32:12 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:32:12 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-50e18e22-7850-458c-8d66-5932e0495377
Nov 25 16:32:12 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:32:12 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:32:12 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:32:12 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/50e18e22-7850-458c-8d66-5932e0495377.pid.haproxy
Nov 25 16:32:12 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:32:12 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:32:12 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:32:12 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:32:12 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:32:12 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:32:12 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:32:12 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:32:12 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:32:12 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:32:12 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:32:12 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:32:12 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:32:12 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:32:12 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:32:12 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:32:12 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:32:12 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:32:12 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:32:12 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:32:12 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 50e18e22-7850-458c-8d66-5932e0495377
Nov 25 16:32:12 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:32:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:12.134 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-50e18e22-7850-458c-8d66-5932e0495377', 'env', 'PROCESS_TAG=haproxy-50e18e22-7850-458c-8d66-5932e0495377', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/50e18e22-7850-458c-8d66-5932e0495377.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.154 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.272 254096 DEBUG nova.compute.manager [req-5ecfa4b4-1c1e-4092-8d26-cb939b78ec2a req-18a3872f-fc37-4447-a456-b6ec7cdf1a2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Received event network-changed-e1641afa-e435-45ca-a0fe-d2bb9b12981a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.273 254096 DEBUG nova.compute.manager [req-5ecfa4b4-1c1e-4092-8d26-cb939b78ec2a req-18a3872f-fc37-4447-a456-b6ec7cdf1a2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Refreshing instance network info cache due to event network-changed-e1641afa-e435-45ca-a0fe-d2bb9b12981a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.273 254096 DEBUG oslo_concurrency.lockutils [req-5ecfa4b4-1c1e-4092-8d26-cb939b78ec2a req-18a3872f-fc37-4447-a456-b6ec7cdf1a2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-9bd4d655-c683-4433-a739-168946211a75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:32:12 compute-0 podman[292559]: 2025-11-25 16:32:12.337836398 +0000 UTC m=+0.055253438 container create e08236e352e3c160d8c69d05a47eb4d5c487cb2f583fc8e155d470727c31c10d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_satoshi, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.343 254096 DEBUG nova.compute.manager [req-0acb343f-6017-4aef-be4c-4f068092861f req-3ffcdc23-bcb9-4f39-b2f9-2a527daf15f6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Received event network-vif-plugged-479811bd-7043-4423-9815-a17763247b3b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.344 254096 DEBUG oslo_concurrency.lockutils [req-0acb343f-6017-4aef-be4c-4f068092861f req-3ffcdc23-bcb9-4f39-b2f9-2a527daf15f6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "3573b86d-afab-4a6f-970e-7db532c23eb3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.344 254096 DEBUG oslo_concurrency.lockutils [req-0acb343f-6017-4aef-be4c-4f068092861f req-3ffcdc23-bcb9-4f39-b2f9-2a527daf15f6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3573b86d-afab-4a6f-970e-7db532c23eb3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.345 254096 DEBUG oslo_concurrency.lockutils [req-0acb343f-6017-4aef-be4c-4f068092861f req-3ffcdc23-bcb9-4f39-b2f9-2a527daf15f6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3573b86d-afab-4a6f-970e-7db532c23eb3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.345 254096 DEBUG nova.compute.manager [req-0acb343f-6017-4aef-be4c-4f068092861f req-3ffcdc23-bcb9-4f39-b2f9-2a527daf15f6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Processing event network-vif-plugged-479811bd-7043-4423-9815-a17763247b3b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.357 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088332.3567722, 3573b86d-afab-4a6f-970e-7db532c23eb3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.358 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] VM Started (Lifecycle Event)
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.360 254096 DEBUG nova.compute.manager [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.366 254096 DEBUG nova.virt.libvirt.driver [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.374 254096 INFO nova.virt.libvirt.driver [-] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Instance spawned successfully.
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.375 254096 DEBUG nova.virt.libvirt.driver [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.381 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:32:12 compute-0 systemd[1]: Started libpod-conmon-e08236e352e3c160d8c69d05a47eb4d5c487cb2f583fc8e155d470727c31c10d.scope.
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.384 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:32:12 compute-0 podman[292559]: 2025-11-25 16:32:12.306791517 +0000 UTC m=+0.024208577 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.406 254096 DEBUG nova.network.neutron [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.408 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.409 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088332.3578255, 3573b86d-afab-4a6f-970e-7db532c23eb3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.409 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] VM Paused (Lifecycle Event)
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.412 254096 DEBUG nova.virt.libvirt.driver [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.412 254096 DEBUG nova.virt.libvirt.driver [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.413 254096 DEBUG nova.virt.libvirt.driver [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.413 254096 DEBUG nova.virt.libvirt.driver [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.414 254096 DEBUG nova.virt.libvirt.driver [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.414 254096 DEBUG nova.virt.libvirt.driver [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:12 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.421 254096 DEBUG oslo_concurrency.lockutils [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Releasing lock "refresh_cache-36f65013-2906-4794-9e23-e92dc7814b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.422 254096 DEBUG nova.compute.manager [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:32:12 compute-0 podman[292559]: 2025-11-25 16:32:12.432895503 +0000 UTC m=+0.150312563 container init e08236e352e3c160d8c69d05a47eb4d5c487cb2f583fc8e155d470727c31c10d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_satoshi, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 16:32:12 compute-0 podman[292559]: 2025-11-25 16:32:12.440114918 +0000 UTC m=+0.157531958 container start e08236e352e3c160d8c69d05a47eb4d5c487cb2f583fc8e155d470727c31c10d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_satoshi, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:32:12 compute-0 podman[292559]: 2025-11-25 16:32:12.443372397 +0000 UTC m=+0.160789457 container attach e08236e352e3c160d8c69d05a47eb4d5c487cb2f583fc8e155d470727c31c10d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_satoshi, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.444 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:32:12 compute-0 sharp_satoshi[292576]: 167 167
Nov 25 16:32:12 compute-0 systemd[1]: libpod-e08236e352e3c160d8c69d05a47eb4d5c487cb2f583fc8e155d470727c31c10d.scope: Deactivated successfully.
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.447 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088332.3633657, 3573b86d-afab-4a6f-970e-7db532c23eb3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:32:12 compute-0 conmon[292576]: conmon e08236e352e3c160d8c6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e08236e352e3c160d8c69d05a47eb4d5c487cb2f583fc8e155d470727c31c10d.scope/container/memory.events
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.448 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] VM Resumed (Lifecycle Event)
Nov 25 16:32:12 compute-0 podman[292559]: 2025-11-25 16:32:12.448953478 +0000 UTC m=+0.166370518 container died e08236e352e3c160d8c69d05a47eb4d5c487cb2f583fc8e155d470727c31c10d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.472 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.479 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:32:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-3bf6c28813c619b2b34335b3602d299632a8b7e2b6d6e94170426f143351e0b4-merged.mount: Deactivated successfully.
Nov 25 16:32:12 compute-0 ceph-mon[74985]: pgmap v1343: 321 pgs: 321 active+clean; 180 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 368 KiB/s rd, 7.5 MiB/s wr, 206 op/s
Nov 25 16:32:12 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:32:12 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:32:12 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:32:12 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:32:12 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:32:12 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:32:12 compute-0 podman[292559]: 2025-11-25 16:32:12.498380446 +0000 UTC m=+0.215797486 container remove e08236e352e3c160d8c69d05a47eb4d5c487cb2f583fc8e155d470727c31c10d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_satoshi, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 16:32:12 compute-0 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Nov 25 16:32:12 compute-0 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000001d.scope: Consumed 2.503s CPU time.
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.502 254096 INFO nova.compute.manager [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Took 6.50 seconds to spawn the instance on the hypervisor.
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.504 254096 DEBUG nova.compute.manager [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.505 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:32:12 compute-0 systemd-machined[216343]: Machine qemu-34-instance-0000001d terminated.
Nov 25 16:32:12 compute-0 systemd[1]: libpod-conmon-e08236e352e3c160d8c69d05a47eb4d5c487cb2f583fc8e155d470727c31c10d.scope: Deactivated successfully.
Nov 25 16:32:12 compute-0 podman[292615]: 2025-11-25 16:32:12.577741956 +0000 UTC m=+0.071196940 container create 46e2163d66ecabbad99cf7bce8579e04484cffb5465bcb6e2088856f5faeb8fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.592 254096 INFO nova.compute.manager [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Took 7.49 seconds to build instance.
Nov 25 16:32:12 compute-0 podman[292615]: 2025-11-25 16:32:12.529808508 +0000 UTC m=+0.023263472 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.628 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.630 254096 DEBUG oslo_concurrency.lockutils [None req-8b20df78-b9ec-473c-b5bf-97094bd25c1b 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "3573b86d-afab-4a6f-970e-7db532c23eb3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:12 compute-0 systemd[1]: Started libpod-conmon-46e2163d66ecabbad99cf7bce8579e04484cffb5465bcb6e2088856f5faeb8fe.scope.
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.651 254096 INFO nova.virt.libvirt.driver [-] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Instance destroyed successfully.
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.652 254096 DEBUG nova.objects.instance [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lazy-loading 'resources' on Instance uuid 36f65013-2906-4794-9e23-e92dc7814b6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:32:12 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:32:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0bd5c0024043409a719a708d58c044f8ad9aa267adbb154aaa37825ce7da9b6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:32:12 compute-0 podman[292615]: 2025-11-25 16:32:12.695208298 +0000 UTC m=+0.188663282 container init 46e2163d66ecabbad99cf7bce8579e04484cffb5465bcb6e2088856f5faeb8fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.698 254096 DEBUG nova.network.neutron [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Updating instance_info_cache with network_info: [{"id": "e1641afa-e435-45ca-a0fe-d2bb9b12981a", "address": "fa:16:3e:3d:bb:1b", "network": {"id": "7e8b56bc-492b-4082-b8de-60d496652da7", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2034568195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ae7f32b97104afd930af5d5f5754532", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1641afa-e4", "ovs_interfaceid": "e1641afa-e435-45ca-a0fe-d2bb9b12981a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:32:12 compute-0 podman[292615]: 2025-11-25 16:32:12.705327922 +0000 UTC m=+0.198782886 container start 46e2163d66ecabbad99cf7bce8579e04484cffb5465bcb6e2088856f5faeb8fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 16:32:12 compute-0 podman[292641]: 2025-11-25 16:32:12.72035653 +0000 UTC m=+0.060423139 container create 4e681414ffb795a5b4bb978f0203dcb69507e895154899347d1ae1714d3deadb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goodall, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.724 254096 DEBUG oslo_concurrency.lockutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Releasing lock "refresh_cache-9bd4d655-c683-4433-a739-168946211a75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.725 254096 DEBUG nova.compute.manager [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Instance network_info: |[{"id": "e1641afa-e435-45ca-a0fe-d2bb9b12981a", "address": "fa:16:3e:3d:bb:1b", "network": {"id": "7e8b56bc-492b-4082-b8de-60d496652da7", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2034568195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ae7f32b97104afd930af5d5f5754532", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1641afa-e4", "ovs_interfaceid": "e1641afa-e435-45ca-a0fe-d2bb9b12981a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.726 254096 DEBUG oslo_concurrency.lockutils [req-5ecfa4b4-1c1e-4092-8d26-cb939b78ec2a req-18a3872f-fc37-4447-a456-b6ec7cdf1a2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-9bd4d655-c683-4433-a739-168946211a75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.726 254096 DEBUG nova.network.neutron [req-5ecfa4b4-1c1e-4092-8d26-cb939b78ec2a req-18a3872f-fc37-4447-a456-b6ec7cdf1a2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Refreshing network info cache for port e1641afa-e435-45ca-a0fe-d2bb9b12981a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.729 254096 DEBUG nova.virt.libvirt.driver [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Start _get_guest_xml network_info=[{"id": "e1641afa-e435-45ca-a0fe-d2bb9b12981a", "address": "fa:16:3e:3d:bb:1b", "network": {"id": "7e8b56bc-492b-4082-b8de-60d496652da7", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2034568195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ae7f32b97104afd930af5d5f5754532", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1641afa-e4", "ovs_interfaceid": "e1641afa-e435-45ca-a0fe-d2bb9b12981a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:32:12 compute-0 neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377[292638]: [NOTICE]   (292673) : New worker (292675) forked
Nov 25 16:32:12 compute-0 neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377[292638]: [NOTICE]   (292673) : Loading success.
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.735 254096 WARNING nova.virt.libvirt.driver [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.748 254096 DEBUG nova.virt.libvirt.host [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.749 254096 DEBUG nova.virt.libvirt.host [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.752 254096 DEBUG nova.virt.libvirt.host [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.753 254096 DEBUG nova.virt.libvirt.host [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.753 254096 DEBUG nova.virt.libvirt.driver [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.754 254096 DEBUG nova.virt.hardware [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.754 254096 DEBUG nova.virt.hardware [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.754 254096 DEBUG nova.virt.hardware [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.755 254096 DEBUG nova.virt.hardware [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.755 254096 DEBUG nova.virt.hardware [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.755 254096 DEBUG nova.virt.hardware [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.755 254096 DEBUG nova.virt.hardware [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.756 254096 DEBUG nova.virt.hardware [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.756 254096 DEBUG nova.virt.hardware [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.756 254096 DEBUG nova.virt.hardware [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.756 254096 DEBUG nova.virt.hardware [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:32:12 compute-0 nova_compute[254092]: 2025-11-25 16:32:12.758 254096 DEBUG oslo_concurrency.processutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:12 compute-0 podman[292641]: 2025-11-25 16:32:12.686890333 +0000 UTC m=+0.026956992 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:32:12 compute-0 systemd[1]: Started libpod-conmon-4e681414ffb795a5b4bb978f0203dcb69507e895154899347d1ae1714d3deadb.scope.
Nov 25 16:32:12 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:32:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/086fc975d3e970a15621d78a7f3b1bc86c832b6c604420a2247d80a090d7d917/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:32:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/086fc975d3e970a15621d78a7f3b1bc86c832b6c604420a2247d80a090d7d917/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:32:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/086fc975d3e970a15621d78a7f3b1bc86c832b6c604420a2247d80a090d7d917/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:32:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/086fc975d3e970a15621d78a7f3b1bc86c832b6c604420a2247d80a090d7d917/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:32:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/086fc975d3e970a15621d78a7f3b1bc86c832b6c604420a2247d80a090d7d917/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 16:32:12 compute-0 podman[292641]: 2025-11-25 16:32:12.882151681 +0000 UTC m=+0.222218310 container init 4e681414ffb795a5b4bb978f0203dcb69507e895154899347d1ae1714d3deadb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goodall, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:32:12 compute-0 podman[292641]: 2025-11-25 16:32:12.89353813 +0000 UTC m=+0.233604769 container start 4e681414ffb795a5b4bb978f0203dcb69507e895154899347d1ae1714d3deadb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goodall, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:32:12 compute-0 podman[292641]: 2025-11-25 16:32:12.918762393 +0000 UTC m=+0.258829022 container attach 4e681414ffb795a5b4bb978f0203dcb69507e895154899347d1ae1714d3deadb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goodall, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 16:32:13 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:32:13 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/627263816' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:32:13 compute-0 nova_compute[254092]: 2025-11-25 16:32:13.289 254096 DEBUG oslo_concurrency.processutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1344: 321 pgs: 321 active+clean; 180 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 368 KiB/s rd, 7.5 MiB/s wr, 206 op/s
Nov 25 16:32:13 compute-0 nova_compute[254092]: 2025-11-25 16:32:13.314 254096 DEBUG nova.storage.rbd_utils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] rbd image 9bd4d655-c683-4433-a739-168946211a75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:13 compute-0 nova_compute[254092]: 2025-11-25 16:32:13.320 254096 DEBUG oslo_concurrency.processutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:13.606 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:13.607 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:13.607 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:13 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/627263816' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:32:13 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:32:13 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1234581576' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:32:13 compute-0 nova_compute[254092]: 2025-11-25 16:32:13.865 254096 DEBUG oslo_concurrency.processutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:13 compute-0 nova_compute[254092]: 2025-11-25 16:32:13.867 254096 DEBUG nova.virt.libvirt.vif [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:32:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-744086586',display_name='tempest-FloatingIPsAssociationTestJSON-server-744086586',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-744086586',id=31,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4ae7f32b97104afd930af5d5f5754532',ramdisk_id='',reservation_id='r-ev11hsqu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-993856073',owner_user_name='tempest-FloatingIPsAssociationTestJSON-993856073-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:32:08Z,user_data=None,user_id='a46b9493b027436fbd21d09ff5ac15e4',uuid=9bd4d655-c683-4433-a739-168946211a75,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e1641afa-e435-45ca-a0fe-d2bb9b12981a", "address": "fa:16:3e:3d:bb:1b", "network": {"id": "7e8b56bc-492b-4082-b8de-60d496652da7", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2034568195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ae7f32b97104afd930af5d5f5754532", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1641afa-e4", "ovs_interfaceid": "e1641afa-e435-45ca-a0fe-d2bb9b12981a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:32:13 compute-0 nova_compute[254092]: 2025-11-25 16:32:13.868 254096 DEBUG nova.network.os_vif_util [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Converting VIF {"id": "e1641afa-e435-45ca-a0fe-d2bb9b12981a", "address": "fa:16:3e:3d:bb:1b", "network": {"id": "7e8b56bc-492b-4082-b8de-60d496652da7", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2034568195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ae7f32b97104afd930af5d5f5754532", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1641afa-e4", "ovs_interfaceid": "e1641afa-e435-45ca-a0fe-d2bb9b12981a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:32:13 compute-0 nova_compute[254092]: 2025-11-25 16:32:13.869 254096 DEBUG nova.network.os_vif_util [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:bb:1b,bridge_name='br-int',has_traffic_filtering=True,id=e1641afa-e435-45ca-a0fe-d2bb9b12981a,network=Network(7e8b56bc-492b-4082-b8de-60d496652da7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1641afa-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:32:13 compute-0 nova_compute[254092]: 2025-11-25 16:32:13.870 254096 DEBUG nova.objects.instance [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9bd4d655-c683-4433-a739-168946211a75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:32:13 compute-0 nova_compute[254092]: 2025-11-25 16:32:13.888 254096 DEBUG nova.virt.libvirt.driver [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:32:13 compute-0 nova_compute[254092]:   <uuid>9bd4d655-c683-4433-a739-168946211a75</uuid>
Nov 25 16:32:13 compute-0 nova_compute[254092]:   <name>instance-0000001f</name>
Nov 25 16:32:13 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:32:13 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:32:13 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:32:13 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:       <nova:name>tempest-FloatingIPsAssociationTestJSON-server-744086586</nova:name>
Nov 25 16:32:13 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:32:12</nova:creationTime>
Nov 25 16:32:13 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:32:13 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:32:13 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:32:13 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:32:13 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:32:13 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:32:13 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:32:13 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:32:13 compute-0 nova_compute[254092]:         <nova:user uuid="a46b9493b027436fbd21d09ff5ac15e4">tempest-FloatingIPsAssociationTestJSON-993856073-project-member</nova:user>
Nov 25 16:32:13 compute-0 nova_compute[254092]:         <nova:project uuid="4ae7f32b97104afd930af5d5f5754532">tempest-FloatingIPsAssociationTestJSON-993856073</nova:project>
Nov 25 16:32:13 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:32:13 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:32:13 compute-0 nova_compute[254092]:         <nova:port uuid="e1641afa-e435-45ca-a0fe-d2bb9b12981a">
Nov 25 16:32:13 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:32:13 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:32:13 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:32:13 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:32:13 compute-0 nova_compute[254092]:     <system>
Nov 25 16:32:13 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:32:13 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:32:13 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:32:13 compute-0 nova_compute[254092]:       <entry name="serial">9bd4d655-c683-4433-a739-168946211a75</entry>
Nov 25 16:32:13 compute-0 nova_compute[254092]:       <entry name="uuid">9bd4d655-c683-4433-a739-168946211a75</entry>
Nov 25 16:32:13 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     </system>
Nov 25 16:32:13 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:32:13 compute-0 nova_compute[254092]:   <os>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:   </os>
Nov 25 16:32:13 compute-0 nova_compute[254092]:   <features>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:   </features>
Nov 25 16:32:13 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:32:13 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:32:13 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:32:13 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:32:13 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:32:13 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/9bd4d655-c683-4433-a739-168946211a75_disk">
Nov 25 16:32:13 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:       </source>
Nov 25 16:32:13 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:32:13 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:32:13 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:32:13 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/9bd4d655-c683-4433-a739-168946211a75_disk.config">
Nov 25 16:32:13 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:       </source>
Nov 25 16:32:13 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:32:13 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:32:13 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:32:13 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:3d:bb:1b"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:       <target dev="tape1641afa-e4"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:32:13 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/9bd4d655-c683-4433-a739-168946211a75/console.log" append="off"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     <video>
Nov 25 16:32:13 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     </video>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:32:13 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:32:13 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:32:13 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:32:13 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:32:13 compute-0 nova_compute[254092]: </domain>
Nov 25 16:32:13 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:32:13 compute-0 nova_compute[254092]: 2025-11-25 16:32:13.894 254096 DEBUG nova.compute.manager [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Preparing to wait for external event network-vif-plugged-e1641afa-e435-45ca-a0fe-d2bb9b12981a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:32:13 compute-0 nova_compute[254092]: 2025-11-25 16:32:13.894 254096 DEBUG oslo_concurrency.lockutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Acquiring lock "9bd4d655-c683-4433-a739-168946211a75-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:13 compute-0 nova_compute[254092]: 2025-11-25 16:32:13.894 254096 DEBUG oslo_concurrency.lockutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "9bd4d655-c683-4433-a739-168946211a75-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:13 compute-0 nova_compute[254092]: 2025-11-25 16:32:13.895 254096 DEBUG oslo_concurrency.lockutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "9bd4d655-c683-4433-a739-168946211a75-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:13 compute-0 nova_compute[254092]: 2025-11-25 16:32:13.895 254096 DEBUG nova.virt.libvirt.vif [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:32:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-744086586',display_name='tempest-FloatingIPsAssociationTestJSON-server-744086586',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-744086586',id=31,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4ae7f32b97104afd930af5d5f5754532',ramdisk_id='',reservation_id='r-ev11hsqu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-993856073',owner_user_name='tempest-FloatingIPsAssociationTestJSON-993856073-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:32:08Z,user_data=None,user_id='a46b9493b027436fbd21d09ff5ac15e4',uuid=9bd4d655-c683-4433-a739-168946211a75,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e1641afa-e435-45ca-a0fe-d2bb9b12981a", "address": "fa:16:3e:3d:bb:1b", "network": {"id": "7e8b56bc-492b-4082-b8de-60d496652da7", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2034568195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ae7f32b97104afd930af5d5f5754532", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1641afa-e4", "ovs_interfaceid": "e1641afa-e435-45ca-a0fe-d2bb9b12981a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:32:13 compute-0 nova_compute[254092]: 2025-11-25 16:32:13.896 254096 DEBUG nova.network.os_vif_util [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Converting VIF {"id": "e1641afa-e435-45ca-a0fe-d2bb9b12981a", "address": "fa:16:3e:3d:bb:1b", "network": {"id": "7e8b56bc-492b-4082-b8de-60d496652da7", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2034568195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ae7f32b97104afd930af5d5f5754532", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1641afa-e4", "ovs_interfaceid": "e1641afa-e435-45ca-a0fe-d2bb9b12981a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:32:13 compute-0 nova_compute[254092]: 2025-11-25 16:32:13.896 254096 DEBUG nova.network.os_vif_util [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:bb:1b,bridge_name='br-int',has_traffic_filtering=True,id=e1641afa-e435-45ca-a0fe-d2bb9b12981a,network=Network(7e8b56bc-492b-4082-b8de-60d496652da7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1641afa-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:32:13 compute-0 nova_compute[254092]: 2025-11-25 16:32:13.897 254096 DEBUG os_vif [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:bb:1b,bridge_name='br-int',has_traffic_filtering=True,id=e1641afa-e435-45ca-a0fe-d2bb9b12981a,network=Network(7e8b56bc-492b-4082-b8de-60d496652da7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1641afa-e4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:32:13 compute-0 nova_compute[254092]: 2025-11-25 16:32:13.897 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:13 compute-0 nova_compute[254092]: 2025-11-25 16:32:13.898 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:13 compute-0 nova_compute[254092]: 2025-11-25 16:32:13.898 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:32:13 compute-0 nova_compute[254092]: 2025-11-25 16:32:13.901 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:13 compute-0 nova_compute[254092]: 2025-11-25 16:32:13.901 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape1641afa-e4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:13 compute-0 nova_compute[254092]: 2025-11-25 16:32:13.901 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape1641afa-e4, col_values=(('external_ids', {'iface-id': 'e1641afa-e435-45ca-a0fe-d2bb9b12981a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3d:bb:1b', 'vm-uuid': '9bd4d655-c683-4433-a739-168946211a75'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:13 compute-0 nova_compute[254092]: 2025-11-25 16:32:13.903 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:13 compute-0 NetworkManager[48891]: <info>  [1764088333.9043] manager: (tape1641afa-e4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/97)
Nov 25 16:32:13 compute-0 nova_compute[254092]: 2025-11-25 16:32:13.906 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:32:13 compute-0 nova_compute[254092]: 2025-11-25 16:32:13.910 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:13 compute-0 nova_compute[254092]: 2025-11-25 16:32:13.911 254096 INFO os_vif [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:bb:1b,bridge_name='br-int',has_traffic_filtering=True,id=e1641afa-e435-45ca-a0fe-d2bb9b12981a,network=Network(7e8b56bc-492b-4082-b8de-60d496652da7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1641afa-e4')
Nov 25 16:32:13 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:32:13 compute-0 nova_compute[254092]: 2025-11-25 16:32:13.965 254096 DEBUG nova.virt.libvirt.driver [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:32:13 compute-0 nova_compute[254092]: 2025-11-25 16:32:13.966 254096 DEBUG nova.virt.libvirt.driver [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:32:13 compute-0 nova_compute[254092]: 2025-11-25 16:32:13.966 254096 DEBUG nova.virt.libvirt.driver [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] No VIF found with MAC fa:16:3e:3d:bb:1b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:32:13 compute-0 nova_compute[254092]: 2025-11-25 16:32:13.967 254096 INFO nova.virt.libvirt.driver [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Using config drive
Nov 25 16:32:13 compute-0 nova_compute[254092]: 2025-11-25 16:32:13.985 254096 DEBUG nova.storage.rbd_utils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] rbd image 9bd4d655-c683-4433-a739-168946211a75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:14 compute-0 modest_goodall[292689]: --> passed data devices: 0 physical, 3 LVM
Nov 25 16:32:14 compute-0 modest_goodall[292689]: --> relative data size: 1.0
Nov 25 16:32:14 compute-0 modest_goodall[292689]: --> All data devices are unavailable
Nov 25 16:32:14 compute-0 systemd[1]: libpod-4e681414ffb795a5b4bb978f0203dcb69507e895154899347d1ae1714d3deadb.scope: Deactivated successfully.
Nov 25 16:32:14 compute-0 systemd[1]: libpod-4e681414ffb795a5b4bb978f0203dcb69507e895154899347d1ae1714d3deadb.scope: Consumed 1.069s CPU time.
Nov 25 16:32:14 compute-0 podman[292641]: 2025-11-25 16:32:14.055406882 +0000 UTC m=+1.395473501 container died 4e681414ffb795a5b4bb978f0203dcb69507e895154899347d1ae1714d3deadb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goodall, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:32:14 compute-0 nova_compute[254092]: 2025-11-25 16:32:14.131 254096 DEBUG nova.network.neutron [req-5ecfa4b4-1c1e-4092-8d26-cb939b78ec2a req-18a3872f-fc37-4447-a456-b6ec7cdf1a2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Updated VIF entry in instance network info cache for port e1641afa-e435-45ca-a0fe-d2bb9b12981a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:32:14 compute-0 nova_compute[254092]: 2025-11-25 16:32:14.132 254096 DEBUG nova.network.neutron [req-5ecfa4b4-1c1e-4092-8d26-cb939b78ec2a req-18a3872f-fc37-4447-a456-b6ec7cdf1a2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Updating instance_info_cache with network_info: [{"id": "e1641afa-e435-45ca-a0fe-d2bb9b12981a", "address": "fa:16:3e:3d:bb:1b", "network": {"id": "7e8b56bc-492b-4082-b8de-60d496652da7", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2034568195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ae7f32b97104afd930af5d5f5754532", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1641afa-e4", "ovs_interfaceid": "e1641afa-e435-45ca-a0fe-d2bb9b12981a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:32:14 compute-0 nova_compute[254092]: 2025-11-25 16:32:14.147 254096 DEBUG oslo_concurrency.lockutils [req-5ecfa4b4-1c1e-4092-8d26-cb939b78ec2a req-18a3872f-fc37-4447-a456-b6ec7cdf1a2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-9bd4d655-c683-4433-a739-168946211a75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:32:14 compute-0 nova_compute[254092]: 2025-11-25 16:32:14.364 254096 INFO nova.virt.libvirt.driver [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Creating config drive at /var/lib/nova/instances/9bd4d655-c683-4433-a739-168946211a75/disk.config
Nov 25 16:32:14 compute-0 nova_compute[254092]: 2025-11-25 16:32:14.372 254096 DEBUG oslo_concurrency.processutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9bd4d655-c683-4433-a739-168946211a75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvh00k38w execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:14 compute-0 nova_compute[254092]: 2025-11-25 16:32:14.502 254096 DEBUG oslo_concurrency.processutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9bd4d655-c683-4433-a739-168946211a75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvh00k38w" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-086fc975d3e970a15621d78a7f3b1bc86c832b6c604420a2247d80a090d7d917-merged.mount: Deactivated successfully.
Nov 25 16:32:14 compute-0 nova_compute[254092]: 2025-11-25 16:32:14.528 254096 DEBUG nova.storage.rbd_utils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] rbd image 9bd4d655-c683-4433-a739-168946211a75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:14 compute-0 nova_compute[254092]: 2025-11-25 16:32:14.532 254096 DEBUG oslo_concurrency.processutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9bd4d655-c683-4433-a739-168946211a75/disk.config 9bd4d655-c683-4433-a739-168946211a75_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:14 compute-0 nova_compute[254092]: 2025-11-25 16:32:14.853 254096 DEBUG nova.compute.manager [req-3038e261-454e-4352-be4b-f0524c1fce7c req-9abab6ee-1476-4505-b4f0-7ada18d0da0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Received event network-vif-plugged-479811bd-7043-4423-9815-a17763247b3b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:32:14 compute-0 nova_compute[254092]: 2025-11-25 16:32:14.854 254096 DEBUG oslo_concurrency.lockutils [req-3038e261-454e-4352-be4b-f0524c1fce7c req-9abab6ee-1476-4505-b4f0-7ada18d0da0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "3573b86d-afab-4a6f-970e-7db532c23eb3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:14 compute-0 nova_compute[254092]: 2025-11-25 16:32:14.855 254096 DEBUG oslo_concurrency.lockutils [req-3038e261-454e-4352-be4b-f0524c1fce7c req-9abab6ee-1476-4505-b4f0-7ada18d0da0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3573b86d-afab-4a6f-970e-7db532c23eb3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:14 compute-0 nova_compute[254092]: 2025-11-25 16:32:14.855 254096 DEBUG oslo_concurrency.lockutils [req-3038e261-454e-4352-be4b-f0524c1fce7c req-9abab6ee-1476-4505-b4f0-7ada18d0da0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3573b86d-afab-4a6f-970e-7db532c23eb3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:14 compute-0 nova_compute[254092]: 2025-11-25 16:32:14.855 254096 DEBUG nova.compute.manager [req-3038e261-454e-4352-be4b-f0524c1fce7c req-9abab6ee-1476-4505-b4f0-7ada18d0da0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] No waiting events found dispatching network-vif-plugged-479811bd-7043-4423-9815-a17763247b3b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:32:14 compute-0 nova_compute[254092]: 2025-11-25 16:32:14.856 254096 WARNING nova.compute.manager [req-3038e261-454e-4352-be4b-f0524c1fce7c req-9abab6ee-1476-4505-b4f0-7ada18d0da0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Received unexpected event network-vif-plugged-479811bd-7043-4423-9815-a17763247b3b for instance with vm_state active and task_state None.
Nov 25 16:32:14 compute-0 ceph-mon[74985]: pgmap v1344: 321 pgs: 321 active+clean; 180 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 368 KiB/s rd, 7.5 MiB/s wr, 206 op/s
Nov 25 16:32:14 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1234581576' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:32:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1345: 321 pgs: 321 active+clean; 140 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 7.5 MiB/s wr, 321 op/s
Nov 25 16:32:15 compute-0 podman[292641]: 2025-11-25 16:32:15.332749212 +0000 UTC m=+2.672815831 container remove 4e681414ffb795a5b4bb978f0203dcb69507e895154899347d1ae1714d3deadb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 16:32:15 compute-0 sudo[292447]: pam_unix(sudo:session): session closed for user root
Nov 25 16:32:15 compute-0 nova_compute[254092]: 2025-11-25 16:32:15.382 254096 DEBUG oslo_concurrency.lockutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "800c66e3-ee9f-4766-92f2-ecda5671cde3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:15 compute-0 nova_compute[254092]: 2025-11-25 16:32:15.383 254096 DEBUG oslo_concurrency.lockutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "800c66e3-ee9f-4766-92f2-ecda5671cde3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:15 compute-0 systemd[1]: libpod-conmon-4e681414ffb795a5b4bb978f0203dcb69507e895154899347d1ae1714d3deadb.scope: Deactivated successfully.
Nov 25 16:32:15 compute-0 nova_compute[254092]: 2025-11-25 16:32:15.521 254096 DEBUG nova.compute.manager [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:32:15 compute-0 sudo[292850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:32:15 compute-0 sudo[292850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:32:15 compute-0 sudo[292850]: pam_unix(sudo:session): session closed for user root
Nov 25 16:32:15 compute-0 sudo[292878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:32:15 compute-0 sudo[292878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:32:15 compute-0 sudo[292878]: pam_unix(sudo:session): session closed for user root
Nov 25 16:32:15 compute-0 sudo[292903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:32:15 compute-0 sudo[292903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:32:15 compute-0 sudo[292903]: pam_unix(sudo:session): session closed for user root
Nov 25 16:32:15 compute-0 sudo[292928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 16:32:15 compute-0 sudo[292928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:32:15 compute-0 nova_compute[254092]: 2025-11-25 16:32:15.770 254096 DEBUG oslo_concurrency.lockutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:15 compute-0 nova_compute[254092]: 2025-11-25 16:32:15.771 254096 DEBUG oslo_concurrency.lockutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:15 compute-0 nova_compute[254092]: 2025-11-25 16:32:15.779 254096 DEBUG nova.virt.hardware [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:32:15 compute-0 nova_compute[254092]: 2025-11-25 16:32:15.779 254096 INFO nova.compute.claims [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:32:16 compute-0 podman[292993]: 2025-11-25 16:32:16.019934615 +0000 UTC m=+0.022818879 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:32:16 compute-0 nova_compute[254092]: 2025-11-25 16:32:16.136 254096 DEBUG oslo_concurrency.processutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:16 compute-0 nova_compute[254092]: 2025-11-25 16:32:16.171 254096 DEBUG nova.compute.manager [None req-dc585528-2c9f-4dc3-81ea-7304281f3d40 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:32:16 compute-0 nova_compute[254092]: 2025-11-25 16:32:16.223 254096 INFO nova.compute.manager [None req-dc585528-2c9f-4dc3-81ea-7304281f3d40 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] instance snapshotting
Nov 25 16:32:16 compute-0 podman[292993]: 2025-11-25 16:32:16.285215731 +0000 UTC m=+0.288099995 container create 7954a22faaa2dbcac68f7bda3a64941531e26fec1695ebfb98eae846de031dfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_matsumoto, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Nov 25 16:32:16 compute-0 ceph-mon[74985]: pgmap v1345: 321 pgs: 321 active+clean; 140 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 7.5 MiB/s wr, 321 op/s
Nov 25 16:32:16 compute-0 systemd[1]: Started libpod-conmon-7954a22faaa2dbcac68f7bda3a64941531e26fec1695ebfb98eae846de031dfa.scope.
Nov 25 16:32:16 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:32:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:32:16 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2123802513' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:32:16 compute-0 nova_compute[254092]: 2025-11-25 16:32:16.751 254096 WARNING nova.compute.manager [None req-dc585528-2c9f-4dc3-81ea-7304281f3d40 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Image not found during snapshot: nova.exception.ImageNotFound: Image 9b3e0672-bbee-46c0-933c-e8761fa34df1 could not be found.
Nov 25 16:32:16 compute-0 nova_compute[254092]: 2025-11-25 16:32:16.768 254096 DEBUG oslo_concurrency.processutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:16 compute-0 nova_compute[254092]: 2025-11-25 16:32:16.776 254096 DEBUG nova.compute.provider_tree [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:32:16 compute-0 nova_compute[254092]: 2025-11-25 16:32:16.795 254096 DEBUG nova.scheduler.client.report [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:32:16 compute-0 nova_compute[254092]: 2025-11-25 16:32:16.864 254096 DEBUG oslo_concurrency.lockutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.094s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:16 compute-0 nova_compute[254092]: 2025-11-25 16:32:16.865 254096 DEBUG nova.compute.manager [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:32:16 compute-0 podman[292993]: 2025-11-25 16:32:16.882289514 +0000 UTC m=+0.885173828 container init 7954a22faaa2dbcac68f7bda3a64941531e26fec1695ebfb98eae846de031dfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_matsumoto, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:32:16 compute-0 podman[292993]: 2025-11-25 16:32:16.894147555 +0000 UTC m=+0.897031809 container start 7954a22faaa2dbcac68f7bda3a64941531e26fec1695ebfb98eae846de031dfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:32:16 compute-0 compassionate_matsumoto[293030]: 167 167
Nov 25 16:32:16 compute-0 systemd[1]: libpod-7954a22faaa2dbcac68f7bda3a64941531e26fec1695ebfb98eae846de031dfa.scope: Deactivated successfully.
Nov 25 16:32:17 compute-0 podman[292993]: 2025-11-25 16:32:17.266362998 +0000 UTC m=+1.269247292 container attach 7954a22faaa2dbcac68f7bda3a64941531e26fec1695ebfb98eae846de031dfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:32:17 compute-0 podman[292993]: 2025-11-25 16:32:17.267067958 +0000 UTC m=+1.269952222 container died 7954a22faaa2dbcac68f7bda3a64941531e26fec1695ebfb98eae846de031dfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 16:32:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1346: 321 pgs: 321 active+clean; 134 MiB data, 409 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 6.4 MiB/s wr, 281 op/s
Nov 25 16:32:17 compute-0 nova_compute[254092]: 2025-11-25 16:32:17.535 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088322.5347798, 07003872-27e7-4fd9-80cf-a34257d5aa97 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:32:17 compute-0 nova_compute[254092]: 2025-11-25 16:32:17.536 254096 INFO nova.compute.manager [-] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] VM Stopped (Lifecycle Event)
Nov 25 16:32:17 compute-0 nova_compute[254092]: 2025-11-25 16:32:17.552 254096 DEBUG nova.compute.manager [None req-d169351b-30a7-493f-a590-536db81ab23e - - - - - -] [instance: 07003872-27e7-4fd9-80cf-a34257d5aa97] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:32:17 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2123802513' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:32:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4c76d64a2587e64b1f97564bb7ab2b1a03066c0a58d24e83ccd8071522880eb-merged.mount: Deactivated successfully.
Nov 25 16:32:17 compute-0 nova_compute[254092]: 2025-11-25 16:32:17.630 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:17 compute-0 nova_compute[254092]: 2025-11-25 16:32:17.667 254096 DEBUG oslo_concurrency.processutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9bd4d655-c683-4433-a739-168946211a75/disk.config 9bd4d655-c683-4433-a739-168946211a75_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:17 compute-0 nova_compute[254092]: 2025-11-25 16:32:17.667 254096 INFO nova.virt.libvirt.driver [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Deleting local config drive /var/lib/nova/instances/9bd4d655-c683-4433-a739-168946211a75/disk.config because it was imported into RBD.
Nov 25 16:32:17 compute-0 NetworkManager[48891]: <info>  [1764088337.7649] manager: (tape1641afa-e4): new Tun device (/org/freedesktop/NetworkManager/Devices/98)
Nov 25 16:32:17 compute-0 kernel: tape1641afa-e4: entered promiscuous mode
Nov 25 16:32:17 compute-0 ovn_controller[153477]: 2025-11-25T16:32:17Z|00200|binding|INFO|Claiming lport e1641afa-e435-45ca-a0fe-d2bb9b12981a for this chassis.
Nov 25 16:32:17 compute-0 ovn_controller[153477]: 2025-11-25T16:32:17Z|00201|binding|INFO|e1641afa-e435-45ca-a0fe-d2bb9b12981a: Claiming fa:16:3e:3d:bb:1b 10.100.0.4
Nov 25 16:32:17 compute-0 nova_compute[254092]: 2025-11-25 16:32:17.779 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:17 compute-0 nova_compute[254092]: 2025-11-25 16:32:17.781 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:17 compute-0 ovn_controller[153477]: 2025-11-25T16:32:17Z|00202|binding|INFO|Setting lport e1641afa-e435-45ca-a0fe-d2bb9b12981a ovn-installed in OVS
Nov 25 16:32:17 compute-0 nova_compute[254092]: 2025-11-25 16:32:17.798 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:17 compute-0 nova_compute[254092]: 2025-11-25 16:32:17.799 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:17 compute-0 systemd-udevd[293063]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:32:17 compute-0 systemd-machined[216343]: New machine qemu-36-instance-0000001f.
Nov 25 16:32:17 compute-0 systemd[1]: Started Virtual Machine qemu-36-instance-0000001f.
Nov 25 16:32:17 compute-0 NetworkManager[48891]: <info>  [1764088337.8489] device (tape1641afa-e4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:32:17 compute-0 NetworkManager[48891]: <info>  [1764088337.8501] device (tape1641afa-e4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:32:17 compute-0 ovn_controller[153477]: 2025-11-25T16:32:17Z|00203|binding|INFO|Setting lport e1641afa-e435-45ca-a0fe-d2bb9b12981a up in Southbound
Nov 25 16:32:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:17.971 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:bb:1b 10.100.0.4'], port_security=['fa:16:3e:3d:bb:1b 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '9bd4d655-c683-4433-a739-168946211a75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e8b56bc-492b-4082-b8de-60d496652da7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ae7f32b97104afd930af5d5f5754532', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1c729f64-6320-4599-a74a-ed70b78f82ae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=81a9c566-d2a0-4e54-9d3c-480355005098, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=e1641afa-e435-45ca-a0fe-d2bb9b12981a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:32:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:17.972 163338 INFO neutron.agent.ovn.metadata.agent [-] Port e1641afa-e435-45ca-a0fe-d2bb9b12981a in datapath 7e8b56bc-492b-4082-b8de-60d496652da7 bound to our chassis
Nov 25 16:32:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:17.974 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7e8b56bc-492b-4082-b8de-60d496652da7
Nov 25 16:32:17 compute-0 nova_compute[254092]: 2025-11-25 16:32:17.980 254096 DEBUG nova.compute.manager [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:32:17 compute-0 nova_compute[254092]: 2025-11-25 16:32:17.981 254096 DEBUG nova.network.neutron [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:32:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:17.993 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f8a5ca85-5026-411d-86a6-09cf1a15ac2e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:17.994 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7e8b56bc-41 in ovnmeta-7e8b56bc-492b-4082-b8de-60d496652da7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:32:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:17.996 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7e8b56bc-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:32:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:17.996 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[baa1b9b1-3b3a-450f-8fc1-abc77258922e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:17.997 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bfcc292e-5bb7-4ebf-957e-a0e5c3c3fe2b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:18.008 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[622832bf-d176-4328-b610-1bc4081df95b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:18.019 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0f954c72-0f8b-4f43-8548-f8c141b5c461]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:18 compute-0 nova_compute[254092]: 2025-11-25 16:32:18.034 254096 INFO nova.virt.libvirt.driver [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:32:18 compute-0 podman[292993]: 2025-11-25 16:32:18.036307734 +0000 UTC m=+2.039191998 container remove 7954a22faaa2dbcac68f7bda3a64941531e26fec1695ebfb98eae846de031dfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:32:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:18.082 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[4eb845b9-db5a-4007-9198-355b9f4b71fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:18 compute-0 nova_compute[254092]: 2025-11-25 16:32:18.086 254096 DEBUG nova.compute.manager [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:32:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:18.093 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ddb991be-91a9-4109-b93f-51a19f4ce7f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:18 compute-0 NetworkManager[48891]: <info>  [1764088338.0945] manager: (tap7e8b56bc-40): new Veth device (/org/freedesktop/NetworkManager/Devices/99)
Nov 25 16:32:18 compute-0 systemd[1]: libpod-conmon-7954a22faaa2dbcac68f7bda3a64941531e26fec1695ebfb98eae846de031dfa.scope: Deactivated successfully.
Nov 25 16:32:18 compute-0 nova_compute[254092]: 2025-11-25 16:32:18.136 254096 DEBUG nova.policy [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c2f73d928f424db49a62e7dff2b1ce14', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '36fd407edf16424e96041f57fccb9e2b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:32:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:18.134 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[3d6d10eb-bf6d-4bb1-b1c6-939f8450dab1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:18.157 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[5983060b-20b7-4707-93a6-3453788277f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:18 compute-0 NetworkManager[48891]: <info>  [1764088338.1859] device (tap7e8b56bc-40): carrier: link connected
Nov 25 16:32:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:18.191 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[dbaa2650-3d36-4ff9-b94f-b03fdc861248]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:18.209 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b4ff902f-2a0b-440d-aaab-e3b0d145b642]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7e8b56bc-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:16:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 61], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479577, 'reachable_time': 25547, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293124, 'error': None, 'target': 'ovnmeta-7e8b56bc-492b-4082-b8de-60d496652da7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:18.226 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5a3fe7d5-54ce-4556-b7fe-24495f7b9d8f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef2:16c0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479577, 'tstamp': 479577}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293135, 'error': None, 'target': 'ovnmeta-7e8b56bc-492b-4082-b8de-60d496652da7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:18.240 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e6e3c06e-e2ba-4256-b018-a8b689ab4dc5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7e8b56bc-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:16:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 61], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479577, 'reachable_time': 25547, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 293136, 'error': None, 'target': 'ovnmeta-7e8b56bc-492b-4082-b8de-60d496652da7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:18.285 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[60ce4900-77d8-4257-b8a2-22411f41d330]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:18 compute-0 podman[293122]: 2025-11-25 16:32:18.30824891 +0000 UTC m=+0.108961383 container create d00d98f71129d2f079f46ca9790e4e65b6e202c64eccb5ff30707b41d1d62e5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 16:32:18 compute-0 podman[293122]: 2025-11-25 16:32:18.225611632 +0000 UTC m=+0.026324145 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:32:18 compute-0 nova_compute[254092]: 2025-11-25 16:32:18.355 254096 DEBUG nova.compute.manager [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:32:18 compute-0 nova_compute[254092]: 2025-11-25 16:32:18.356 254096 DEBUG nova.virt.libvirt.driver [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:32:18 compute-0 nova_compute[254092]: 2025-11-25 16:32:18.357 254096 INFO nova.virt.libvirt.driver [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Creating image(s)
Nov 25 16:32:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:18.373 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d0f5a85b-2f3d-4958-9a21-e874581bdf81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:18.375 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7e8b56bc-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:18.375 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:32:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:18.375 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7e8b56bc-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:18 compute-0 kernel: tap7e8b56bc-40: entered promiscuous mode
Nov 25 16:32:18 compute-0 NetworkManager[48891]: <info>  [1764088338.3781] manager: (tap7e8b56bc-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/100)
Nov 25 16:32:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:18.382 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7e8b56bc-40, col_values=(('external_ids', {'iface-id': 'f3398af3-7278-4ca0-adcc-f3bb48f595e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:18 compute-0 ovn_controller[153477]: 2025-11-25T16:32:18Z|00204|binding|INFO|Releasing lport f3398af3-7278-4ca0-adcc-f3bb48f595e9 from this chassis (sb_readonly=0)
Nov 25 16:32:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:18.386 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7e8b56bc-492b-4082-b8de-60d496652da7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7e8b56bc-492b-4082-b8de-60d496652da7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:32:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:18.397 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[967b2812-c47c-4c33-9442-dcadd8ec5814]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:18.401 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:32:18 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:32:18 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:32:18 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-7e8b56bc-492b-4082-b8de-60d496652da7
Nov 25 16:32:18 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:32:18 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:32:18 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:32:18 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/7e8b56bc-492b-4082-b8de-60d496652da7.pid.haproxy
Nov 25 16:32:18 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:32:18 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:32:18 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:32:18 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:32:18 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:32:18 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:32:18 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:32:18 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:32:18 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:32:18 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:32:18 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:32:18 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:32:18 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:32:18 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:32:18 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:32:18 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:32:18 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:32:18 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:32:18 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:32:18 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:32:18 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 7e8b56bc-492b-4082-b8de-60d496652da7
Nov 25 16:32:18 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:32:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:18.402 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7e8b56bc-492b-4082-b8de-60d496652da7', 'env', 'PROCESS_TAG=haproxy-7e8b56bc-492b-4082-b8de-60d496652da7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7e8b56bc-492b-4082-b8de-60d496652da7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:32:18 compute-0 nova_compute[254092]: 2025-11-25 16:32:18.404 254096 DEBUG nova.storage.rbd_utils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image 800c66e3-ee9f-4766-92f2-ecda5671cde3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:18 compute-0 systemd[1]: Started libpod-conmon-d00d98f71129d2f079f46ca9790e4e65b6e202c64eccb5ff30707b41d1d62e5c.scope.
Nov 25 16:32:18 compute-0 nova_compute[254092]: 2025-11-25 16:32:18.449 254096 DEBUG nova.storage.rbd_utils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image 800c66e3-ee9f-4766-92f2-ecda5671cde3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:18 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:32:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53b8483f3df3dda0a7839d70be0b5e32d26d0a7aae6ec5bcef6f854d71530986/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:32:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53b8483f3df3dda0a7839d70be0b5e32d26d0a7aae6ec5bcef6f854d71530986/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:32:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53b8483f3df3dda0a7839d70be0b5e32d26d0a7aae6ec5bcef6f854d71530986/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:32:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53b8483f3df3dda0a7839d70be0b5e32d26d0a7aae6ec5bcef6f854d71530986/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:32:18 compute-0 nova_compute[254092]: 2025-11-25 16:32:18.484 254096 DEBUG nova.storage.rbd_utils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image 800c66e3-ee9f-4766-92f2-ecda5671cde3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:18 compute-0 nova_compute[254092]: 2025-11-25 16:32:18.502 254096 DEBUG oslo_concurrency.processutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:18 compute-0 nova_compute[254092]: 2025-11-25 16:32:18.529 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:18 compute-0 nova_compute[254092]: 2025-11-25 16:32:18.533 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088338.4962533, 9bd4d655-c683-4433-a739-168946211a75 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:32:18 compute-0 nova_compute[254092]: 2025-11-25 16:32:18.533 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9bd4d655-c683-4433-a739-168946211a75] VM Started (Lifecycle Event)
Nov 25 16:32:18 compute-0 nova_compute[254092]: 2025-11-25 16:32:18.560 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9bd4d655-c683-4433-a739-168946211a75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:32:18 compute-0 nova_compute[254092]: 2025-11-25 16:32:18.564 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088338.4963384, 9bd4d655-c683-4433-a739-168946211a75 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:32:18 compute-0 nova_compute[254092]: 2025-11-25 16:32:18.565 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9bd4d655-c683-4433-a739-168946211a75] VM Paused (Lifecycle Event)
Nov 25 16:32:18 compute-0 nova_compute[254092]: 2025-11-25 16:32:18.567 254096 DEBUG oslo_concurrency.processutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:18 compute-0 nova_compute[254092]: 2025-11-25 16:32:18.568 254096 DEBUG oslo_concurrency.lockutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:18 compute-0 nova_compute[254092]: 2025-11-25 16:32:18.568 254096 DEBUG oslo_concurrency.lockutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:18 compute-0 nova_compute[254092]: 2025-11-25 16:32:18.569 254096 DEBUG oslo_concurrency.lockutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:18 compute-0 nova_compute[254092]: 2025-11-25 16:32:18.600 254096 DEBUG nova.storage.rbd_utils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image 800c66e3-ee9f-4766-92f2-ecda5671cde3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:18 compute-0 nova_compute[254092]: 2025-11-25 16:32:18.649 254096 DEBUG oslo_concurrency.processutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 800c66e3-ee9f-4766-92f2-ecda5671cde3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:18 compute-0 nova_compute[254092]: 2025-11-25 16:32:18.696 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9bd4d655-c683-4433-a739-168946211a75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:32:18 compute-0 podman[293122]: 2025-11-25 16:32:18.700074893 +0000 UTC m=+0.500787406 container init d00d98f71129d2f079f46ca9790e4e65b6e202c64eccb5ff30707b41d1d62e5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 16:32:18 compute-0 nova_compute[254092]: 2025-11-25 16:32:18.709 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9bd4d655-c683-4433-a739-168946211a75] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:32:18 compute-0 podman[293122]: 2025-11-25 16:32:18.713460726 +0000 UTC m=+0.514173209 container start d00d98f71129d2f079f46ca9790e4e65b6e202c64eccb5ff30707b41d1d62e5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 16:32:18 compute-0 nova_compute[254092]: 2025-11-25 16:32:18.729 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9bd4d655-c683-4433-a739-168946211a75] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:32:18 compute-0 nova_compute[254092]: 2025-11-25 16:32:18.773 254096 DEBUG nova.compute.manager [req-12897065-1f88-4e6b-b64d-612d92d8f199 req-43260859-a2b9-431d-8669-4b27d53d6d61 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Received event network-vif-plugged-e1641afa-e435-45ca-a0fe-d2bb9b12981a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:32:18 compute-0 nova_compute[254092]: 2025-11-25 16:32:18.774 254096 DEBUG oslo_concurrency.lockutils [req-12897065-1f88-4e6b-b64d-612d92d8f199 req-43260859-a2b9-431d-8669-4b27d53d6d61 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "9bd4d655-c683-4433-a739-168946211a75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:18 compute-0 nova_compute[254092]: 2025-11-25 16:32:18.774 254096 DEBUG oslo_concurrency.lockutils [req-12897065-1f88-4e6b-b64d-612d92d8f199 req-43260859-a2b9-431d-8669-4b27d53d6d61 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9bd4d655-c683-4433-a739-168946211a75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:18 compute-0 nova_compute[254092]: 2025-11-25 16:32:18.774 254096 DEBUG oslo_concurrency.lockutils [req-12897065-1f88-4e6b-b64d-612d92d8f199 req-43260859-a2b9-431d-8669-4b27d53d6d61 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9bd4d655-c683-4433-a739-168946211a75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:18 compute-0 nova_compute[254092]: 2025-11-25 16:32:18.775 254096 DEBUG nova.compute.manager [req-12897065-1f88-4e6b-b64d-612d92d8f199 req-43260859-a2b9-431d-8669-4b27d53d6d61 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Processing event network-vif-plugged-e1641afa-e435-45ca-a0fe-d2bb9b12981a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:32:18 compute-0 nova_compute[254092]: 2025-11-25 16:32:18.775 254096 DEBUG nova.compute.manager [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:32:18 compute-0 nova_compute[254092]: 2025-11-25 16:32:18.790 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088338.7878494, 9bd4d655-c683-4433-a739-168946211a75 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:32:18 compute-0 nova_compute[254092]: 2025-11-25 16:32:18.791 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9bd4d655-c683-4433-a739-168946211a75] VM Resumed (Lifecycle Event)
Nov 25 16:32:18 compute-0 nova_compute[254092]: 2025-11-25 16:32:18.798 254096 DEBUG nova.virt.libvirt.driver [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:32:18 compute-0 nova_compute[254092]: 2025-11-25 16:32:18.812 254096 INFO nova.virt.libvirt.driver [-] [instance: 9bd4d655-c683-4433-a739-168946211a75] Instance spawned successfully.
Nov 25 16:32:18 compute-0 nova_compute[254092]: 2025-11-25 16:32:18.812 254096 DEBUG nova.virt.libvirt.driver [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:32:18 compute-0 nova_compute[254092]: 2025-11-25 16:32:18.817 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9bd4d655-c683-4433-a739-168946211a75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:32:18 compute-0 nova_compute[254092]: 2025-11-25 16:32:18.825 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9bd4d655-c683-4433-a739-168946211a75] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:32:18 compute-0 podman[293122]: 2025-11-25 16:32:18.827841084 +0000 UTC m=+0.628553567 container attach d00d98f71129d2f079f46ca9790e4e65b6e202c64eccb5ff30707b41d1d62e5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 16:32:18 compute-0 nova_compute[254092]: 2025-11-25 16:32:18.839 254096 DEBUG nova.virt.libvirt.driver [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:18 compute-0 nova_compute[254092]: 2025-11-25 16:32:18.839 254096 DEBUG nova.virt.libvirt.driver [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:18 compute-0 nova_compute[254092]: 2025-11-25 16:32:18.839 254096 DEBUG nova.virt.libvirt.driver [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:18 compute-0 nova_compute[254092]: 2025-11-25 16:32:18.840 254096 DEBUG nova.virt.libvirt.driver [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:18 compute-0 nova_compute[254092]: 2025-11-25 16:32:18.840 254096 DEBUG nova.virt.libvirt.driver [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:18 compute-0 nova_compute[254092]: 2025-11-25 16:32:18.840 254096 DEBUG nova.virt.libvirt.driver [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:18 compute-0 nova_compute[254092]: 2025-11-25 16:32:18.860 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9bd4d655-c683-4433-a739-168946211a75] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:32:18 compute-0 nova_compute[254092]: 2025-11-25 16:32:18.903 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:18 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:32:18 compute-0 podman[293274]: 2025-11-25 16:32:18.850920519 +0000 UTC m=+0.103523705 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:32:18 compute-0 ceph-mon[74985]: pgmap v1346: 321 pgs: 321 active+clean; 134 MiB data, 409 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 6.4 MiB/s wr, 281 op/s
Nov 25 16:32:19 compute-0 nova_compute[254092]: 2025-11-25 16:32:19.029 254096 INFO nova.compute.manager [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Took 10.35 seconds to spawn the instance on the hypervisor.
Nov 25 16:32:19 compute-0 nova_compute[254092]: 2025-11-25 16:32:19.030 254096 DEBUG nova.compute.manager [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:32:19 compute-0 podman[293274]: 2025-11-25 16:32:19.031735777 +0000 UTC m=+0.284338933 container create 034b224d621ea7ad507318b5e079d76415a841c3502a53d0f50766d9ad6688b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7e8b56bc-492b-4082-b8de-60d496652da7, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 16:32:19 compute-0 systemd[1]: Started libpod-conmon-034b224d621ea7ad507318b5e079d76415a841c3502a53d0f50766d9ad6688b8.scope.
Nov 25 16:32:19 compute-0 nova_compute[254092]: 2025-11-25 16:32:19.084 254096 INFO nova.virt.libvirt.driver [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Deleting instance files /var/lib/nova/instances/36f65013-2906-4794-9e23-e92dc7814b6e_del
Nov 25 16:32:19 compute-0 nova_compute[254092]: 2025-11-25 16:32:19.086 254096 INFO nova.virt.libvirt.driver [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Deletion of /var/lib/nova/instances/36f65013-2906-4794-9e23-e92dc7814b6e_del complete
Nov 25 16:32:19 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:32:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92d27d58d374f702763ab8e079fd623c67dbe3ed8e1cbfe8cc867e245d586f30/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:32:19 compute-0 nova_compute[254092]: 2025-11-25 16:32:19.140 254096 INFO nova.compute.manager [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Took 11.44 seconds to build instance.
Nov 25 16:32:19 compute-0 podman[293274]: 2025-11-25 16:32:19.169148 +0000 UTC m=+0.421751176 container init 034b224d621ea7ad507318b5e079d76415a841c3502a53d0f50766d9ad6688b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7e8b56bc-492b-4082-b8de-60d496652da7, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:32:19 compute-0 podman[293274]: 2025-11-25 16:32:19.17693502 +0000 UTC m=+0.429538176 container start 034b224d621ea7ad507318b5e079d76415a841c3502a53d0f50766d9ad6688b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7e8b56bc-492b-4082-b8de-60d496652da7, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:32:19 compute-0 nova_compute[254092]: 2025-11-25 16:32:19.183 254096 DEBUG oslo_concurrency.lockutils [None req-92387137-bcf6-49c8-9711-4c3e9948c3df a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "9bd4d655-c683-4433-a739-168946211a75" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:19 compute-0 nova_compute[254092]: 2025-11-25 16:32:19.188 254096 INFO nova.compute.manager [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Took 6.77 seconds to destroy the instance on the hypervisor.
Nov 25 16:32:19 compute-0 nova_compute[254092]: 2025-11-25 16:32:19.189 254096 DEBUG oslo.service.loopingcall [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:32:19 compute-0 nova_compute[254092]: 2025-11-25 16:32:19.189 254096 DEBUG nova.compute.manager [-] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:32:19 compute-0 nova_compute[254092]: 2025-11-25 16:32:19.189 254096 DEBUG nova.network.neutron [-] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:32:19 compute-0 neutron-haproxy-ovnmeta-7e8b56bc-492b-4082-b8de-60d496652da7[293306]: [NOTICE]   (293310) : New worker (293312) forked
Nov 25 16:32:19 compute-0 neutron-haproxy-ovnmeta-7e8b56bc-492b-4082-b8de-60d496652da7[293306]: [NOTICE]   (293310) : Loading success.
Nov 25 16:32:19 compute-0 nova_compute[254092]: 2025-11-25 16:32:19.229 254096 DEBUG nova.network.neutron [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Successfully created port: 82f63517-7636-46bf-b4e1-ba191ddad018 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:32:19 compute-0 nova_compute[254092]: 2025-11-25 16:32:19.283 254096 DEBUG oslo_concurrency.processutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 800c66e3-ee9f-4766-92f2-ecda5671cde3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1347: 321 pgs: 321 active+clean; 134 MiB data, 409 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 5.3 MiB/s wr, 265 op/s
Nov 25 16:32:19 compute-0 nova_compute[254092]: 2025-11-25 16:32:19.350 254096 DEBUG nova.network.neutron [-] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:32:19 compute-0 nova_compute[254092]: 2025-11-25 16:32:19.358 254096 DEBUG nova.storage.rbd_utils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] resizing rbd image 800c66e3-ee9f-4766-92f2-ecda5671cde3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:32:19 compute-0 nova_compute[254092]: 2025-11-25 16:32:19.391 254096 DEBUG nova.network.neutron [-] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:32:19 compute-0 nova_compute[254092]: 2025-11-25 16:32:19.465 254096 INFO nova.compute.manager [-] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Took 0.28 seconds to deallocate network for instance.
Nov 25 16:32:19 compute-0 nova_compute[254092]: 2025-11-25 16:32:19.486 254096 DEBUG nova.objects.instance [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'migration_context' on Instance uuid 800c66e3-ee9f-4766-92f2-ecda5671cde3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:32:19 compute-0 nova_compute[254092]: 2025-11-25 16:32:19.511 254096 DEBUG nova.virt.libvirt.driver [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:32:19 compute-0 nova_compute[254092]: 2025-11-25 16:32:19.511 254096 DEBUG nova.virt.libvirt.driver [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Ensure instance console log exists: /var/lib/nova/instances/800c66e3-ee9f-4766-92f2-ecda5671cde3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:32:19 compute-0 nova_compute[254092]: 2025-11-25 16:32:19.512 254096 DEBUG oslo_concurrency.lockutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:19 compute-0 nova_compute[254092]: 2025-11-25 16:32:19.512 254096 DEBUG oslo_concurrency.lockutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:19 compute-0 nova_compute[254092]: 2025-11-25 16:32:19.512 254096 DEBUG oslo_concurrency.lockutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:19 compute-0 nova_compute[254092]: 2025-11-25 16:32:19.514 254096 DEBUG oslo_concurrency.lockutils [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:19 compute-0 nova_compute[254092]: 2025-11-25 16:32:19.514 254096 DEBUG oslo_concurrency.lockutils [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:19 compute-0 practical_cartwright[293206]: {
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:     "0": [
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:         {
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:             "devices": [
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:                 "/dev/loop3"
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:             ],
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:             "lv_name": "ceph_lv0",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:             "lv_size": "21470642176",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:             "name": "ceph_lv0",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:             "tags": {
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:                 "ceph.cluster_name": "ceph",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:                 "ceph.crush_device_class": "",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:                 "ceph.encrypted": "0",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:                 "ceph.osd_id": "0",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:                 "ceph.type": "block",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:                 "ceph.vdo": "0"
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:             },
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:             "type": "block",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:             "vg_name": "ceph_vg0"
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:         }
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:     ],
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:     "1": [
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:         {
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:             "devices": [
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:                 "/dev/loop4"
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:             ],
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:             "lv_name": "ceph_lv1",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:             "lv_size": "21470642176",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:             "name": "ceph_lv1",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:             "tags": {
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:                 "ceph.cluster_name": "ceph",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:                 "ceph.crush_device_class": "",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:                 "ceph.encrypted": "0",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:                 "ceph.osd_id": "1",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:                 "ceph.type": "block",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:                 "ceph.vdo": "0"
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:             },
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:             "type": "block",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:             "vg_name": "ceph_vg1"
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:         }
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:     ],
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:     "2": [
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:         {
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:             "devices": [
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:                 "/dev/loop5"
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:             ],
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:             "lv_name": "ceph_lv2",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:             "lv_size": "21470642176",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:             "name": "ceph_lv2",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:             "tags": {
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:                 "ceph.cluster_name": "ceph",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:                 "ceph.crush_device_class": "",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:                 "ceph.encrypted": "0",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:                 "ceph.osd_id": "2",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:                 "ceph.type": "block",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:                 "ceph.vdo": "0"
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:             },
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:             "type": "block",
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:             "vg_name": "ceph_vg2"
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:         }
Nov 25 16:32:19 compute-0 practical_cartwright[293206]:     ]
Nov 25 16:32:19 compute-0 practical_cartwright[293206]: }
Nov 25 16:32:19 compute-0 nova_compute[254092]: 2025-11-25 16:32:19.565 254096 DEBUG oslo_concurrency.lockutils [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "3573b86d-afab-4a6f-970e-7db532c23eb3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:19 compute-0 nova_compute[254092]: 2025-11-25 16:32:19.565 254096 DEBUG oslo_concurrency.lockutils [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "3573b86d-afab-4a6f-970e-7db532c23eb3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:19 compute-0 nova_compute[254092]: 2025-11-25 16:32:19.566 254096 DEBUG oslo_concurrency.lockutils [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "3573b86d-afab-4a6f-970e-7db532c23eb3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:19 compute-0 nova_compute[254092]: 2025-11-25 16:32:19.566 254096 DEBUG oslo_concurrency.lockutils [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "3573b86d-afab-4a6f-970e-7db532c23eb3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:19 compute-0 nova_compute[254092]: 2025-11-25 16:32:19.566 254096 DEBUG oslo_concurrency.lockutils [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "3573b86d-afab-4a6f-970e-7db532c23eb3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:19 compute-0 nova_compute[254092]: 2025-11-25 16:32:19.567 254096 INFO nova.compute.manager [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Terminating instance
Nov 25 16:32:19 compute-0 nova_compute[254092]: 2025-11-25 16:32:19.568 254096 DEBUG nova.compute.manager [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:32:19 compute-0 systemd[1]: libpod-d00d98f71129d2f079f46ca9790e4e65b6e202c64eccb5ff30707b41d1d62e5c.scope: Deactivated successfully.
Nov 25 16:32:19 compute-0 conmon[293206]: conmon d00d98f71129d2f079f4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d00d98f71129d2f079f46ca9790e4e65b6e202c64eccb5ff30707b41d1d62e5c.scope/container/memory.events
Nov 25 16:32:19 compute-0 podman[293122]: 2025-11-25 16:32:19.597598404 +0000 UTC m=+1.398310907 container died d00d98f71129d2f079f46ca9790e4e65b6e202c64eccb5ff30707b41d1d62e5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cartwright, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 16:32:19 compute-0 kernel: tap479811bd-70 (unregistering): left promiscuous mode
Nov 25 16:32:19 compute-0 NetworkManager[48891]: <info>  [1764088339.6075] device (tap479811bd-70): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:32:19 compute-0 nova_compute[254092]: 2025-11-25 16:32:19.617 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:19 compute-0 ovn_controller[153477]: 2025-11-25T16:32:19Z|00205|binding|INFO|Releasing lport 479811bd-7043-4423-9815-a17763247b3b from this chassis (sb_readonly=0)
Nov 25 16:32:19 compute-0 ovn_controller[153477]: 2025-11-25T16:32:19Z|00206|binding|INFO|Setting lport 479811bd-7043-4423-9815-a17763247b3b down in Southbound
Nov 25 16:32:19 compute-0 ovn_controller[153477]: 2025-11-25T16:32:19Z|00207|binding|INFO|Removing iface tap479811bd-70 ovn-installed in OVS
Nov 25 16:32:19 compute-0 nova_compute[254092]: 2025-11-25 16:32:19.624 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-53b8483f3df3dda0a7839d70be0b5e32d26d0a7aae6ec5bcef6f854d71530986-merged.mount: Deactivated successfully.
Nov 25 16:32:19 compute-0 nova_compute[254092]: 2025-11-25 16:32:19.631 254096 DEBUG oslo_concurrency.processutils [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:19 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:19.633 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:34:0a:5f 10.100.0.14'], port_security=['fa:16:3e:34:0a:5f 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '3573b86d-afab-4a6f-970e-7db532c23eb3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50e18e22-7850-458c-8d66-5932e0495377', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ed571eebde434695bae813d7bb21f4c3', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd93cbc19-cf50-4620-aef2-ea907e30850a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aff6ec99-f8ff-4a75-9004-f758cb9a51ad, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=479811bd-7043-4423-9815-a17763247b3b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:32:19 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:19.635 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 479811bd-7043-4423-9815-a17763247b3b in datapath 50e18e22-7850-458c-8d66-5932e0495377 unbound from our chassis
Nov 25 16:32:19 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:19.636 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 50e18e22-7850-458c-8d66-5932e0495377, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:32:19 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:19.637 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3b6c6d61-430d-4edc-a3a2-a1f26735e5a9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:19 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:19.638 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-50e18e22-7850-458c-8d66-5932e0495377 namespace which is not needed anymore
Nov 25 16:32:19 compute-0 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000001e.scope: Deactivated successfully.
Nov 25 16:32:19 compute-0 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000001e.scope: Consumed 7.734s CPU time.
Nov 25 16:32:19 compute-0 systemd-machined[216343]: Machine qemu-35-instance-0000001e terminated.
Nov 25 16:32:19 compute-0 podman[293122]: 2025-11-25 16:32:19.659844681 +0000 UTC m=+1.460557164 container remove d00d98f71129d2f079f46ca9790e4e65b6e202c64eccb5ff30707b41d1d62e5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 16:32:19 compute-0 nova_compute[254092]: 2025-11-25 16:32:19.678 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:19 compute-0 systemd[1]: libpod-conmon-d00d98f71129d2f079f46ca9790e4e65b6e202c64eccb5ff30707b41d1d62e5c.scope: Deactivated successfully.
Nov 25 16:32:19 compute-0 sudo[292928]: pam_unix(sudo:session): session closed for user root
Nov 25 16:32:19 compute-0 sudo[293425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:32:19 compute-0 sudo[293425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:32:19 compute-0 sudo[293425]: pam_unix(sudo:session): session closed for user root
Nov 25 16:32:19 compute-0 nova_compute[254092]: 2025-11-25 16:32:19.798 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:19 compute-0 neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377[292638]: [NOTICE]   (292673) : haproxy version is 2.8.14-c23fe91
Nov 25 16:32:19 compute-0 neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377[292638]: [NOTICE]   (292673) : path to executable is /usr/sbin/haproxy
Nov 25 16:32:19 compute-0 neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377[292638]: [WARNING]  (292673) : Exiting Master process...
Nov 25 16:32:19 compute-0 neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377[292638]: [ALERT]    (292673) : Current worker (292675) exited with code 143 (Terminated)
Nov 25 16:32:19 compute-0 neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377[292638]: [WARNING]  (292673) : All workers exited. Exiting... (0)
Nov 25 16:32:19 compute-0 systemd[1]: libpod-46e2163d66ecabbad99cf7bce8579e04484cffb5465bcb6e2088856f5faeb8fe.scope: Deactivated successfully.
Nov 25 16:32:19 compute-0 nova_compute[254092]: 2025-11-25 16:32:19.809 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:19 compute-0 podman[293448]: 2025-11-25 16:32:19.818232871 +0000 UTC m=+0.064191900 container died 46e2163d66ecabbad99cf7bce8579e04484cffb5465bcb6e2088856f5faeb8fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:32:19 compute-0 nova_compute[254092]: 2025-11-25 16:32:19.818 254096 INFO nova.virt.libvirt.driver [-] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Instance destroyed successfully.
Nov 25 16:32:19 compute-0 nova_compute[254092]: 2025-11-25 16:32:19.819 254096 DEBUG nova.objects.instance [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lazy-loading 'resources' on Instance uuid 3573b86d-afab-4a6f-970e-7db532c23eb3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:32:19 compute-0 nova_compute[254092]: 2025-11-25 16:32:19.836 254096 DEBUG nova.virt.libvirt.vif [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:32:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1171232620',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1171232620',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1171232620',id=30,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:32:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ed571eebde434695bae813d7bb21f4c3',ramdisk_id='',reservation_id='r-wlltia02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-964953831',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-964953831-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:32:16Z,user_data=None,user_id='87058665de814ae0a51a12ff02b0d9aa',uuid=3573b86d-afab-4a6f-970e-7db532c23eb3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "479811bd-7043-4423-9815-a17763247b3b", "address": "fa:16:3e:34:0a:5f", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479811bd-70", "ovs_interfaceid": "479811bd-7043-4423-9815-a17763247b3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:32:19 compute-0 nova_compute[254092]: 2025-11-25 16:32:19.838 254096 DEBUG nova.network.os_vif_util [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Converting VIF {"id": "479811bd-7043-4423-9815-a17763247b3b", "address": "fa:16:3e:34:0a:5f", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479811bd-70", "ovs_interfaceid": "479811bd-7043-4423-9815-a17763247b3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:32:19 compute-0 nova_compute[254092]: 2025-11-25 16:32:19.839 254096 DEBUG nova.network.os_vif_util [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:0a:5f,bridge_name='br-int',has_traffic_filtering=True,id=479811bd-7043-4423-9815-a17763247b3b,network=Network(50e18e22-7850-458c-8d66-5932e0495377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap479811bd-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:32:19 compute-0 nova_compute[254092]: 2025-11-25 16:32:19.840 254096 DEBUG os_vif [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:0a:5f,bridge_name='br-int',has_traffic_filtering=True,id=479811bd-7043-4423-9815-a17763247b3b,network=Network(50e18e22-7850-458c-8d66-5932e0495377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap479811bd-70') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:32:19 compute-0 nova_compute[254092]: 2025-11-25 16:32:19.845 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:19 compute-0 nova_compute[254092]: 2025-11-25 16:32:19.846 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap479811bd-70, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:19 compute-0 nova_compute[254092]: 2025-11-25 16:32:19.848 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:19 compute-0 nova_compute[254092]: 2025-11-25 16:32:19.855 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:32:19 compute-0 nova_compute[254092]: 2025-11-25 16:32:19.859 254096 INFO os_vif [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:0a:5f,bridge_name='br-int',has_traffic_filtering=True,id=479811bd-7043-4423-9815-a17763247b3b,network=Network(50e18e22-7850-458c-8d66-5932e0495377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap479811bd-70')
Nov 25 16:32:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-46e2163d66ecabbad99cf7bce8579e04484cffb5465bcb6e2088856f5faeb8fe-userdata-shm.mount: Deactivated successfully.
Nov 25 16:32:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0bd5c0024043409a719a708d58c044f8ad9aa267adbb154aaa37825ce7da9b6-merged.mount: Deactivated successfully.
Nov 25 16:32:19 compute-0 sudo[293482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:32:19 compute-0 sudo[293482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:32:19 compute-0 podman[293448]: 2025-11-25 16:32:19.881426443 +0000 UTC m=+0.127385442 container cleanup 46e2163d66ecabbad99cf7bce8579e04484cffb5465bcb6e2088856f5faeb8fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:32:19 compute-0 sudo[293482]: pam_unix(sudo:session): session closed for user root
Nov 25 16:32:19 compute-0 systemd[1]: libpod-conmon-46e2163d66ecabbad99cf7bce8579e04484cffb5465bcb6e2088856f5faeb8fe.scope: Deactivated successfully.
Nov 25 16:32:19 compute-0 sudo[293548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:32:19 compute-0 sudo[293548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:32:19 compute-0 sudo[293548]: pam_unix(sudo:session): session closed for user root
Nov 25 16:32:19 compute-0 podman[293554]: 2025-11-25 16:32:19.968578274 +0000 UTC m=+0.052387221 container remove 46e2163d66ecabbad99cf7bce8579e04484cffb5465bcb6e2088856f5faeb8fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 16:32:19 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:19.977 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0a4f34c4-5664-403b-bdec-a7803a71b364]: (4, ('Tue Nov 25 04:32:19 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377 (46e2163d66ecabbad99cf7bce8579e04484cffb5465bcb6e2088856f5faeb8fe)\n46e2163d66ecabbad99cf7bce8579e04484cffb5465bcb6e2088856f5faeb8fe\nTue Nov 25 04:32:19 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377 (46e2163d66ecabbad99cf7bce8579e04484cffb5465bcb6e2088856f5faeb8fe)\n46e2163d66ecabbad99cf7bce8579e04484cffb5465bcb6e2088856f5faeb8fe\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:19 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:19.980 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bc022f19-b545-4870-9be8-b9d557761a0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:19 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:19.982 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50e18e22-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:19 compute-0 ceph-mon[74985]: pgmap v1347: 321 pgs: 321 active+clean; 134 MiB data, 409 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 5.3 MiB/s wr, 265 op/s
Nov 25 16:32:19 compute-0 kernel: tap50e18e22-70: left promiscuous mode
Nov 25 16:32:19 compute-0 nova_compute[254092]: 2025-11-25 16:32:19.985 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:20 compute-0 nova_compute[254092]: 2025-11-25 16:32:20.002 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:20.006 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5c7bcb90-dc69-4562-bd3d-c999dc794c9c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:20 compute-0 nova_compute[254092]: 2025-11-25 16:32:20.021 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:20.018 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2dda8c1b-7e83-4740-985c-0e8e156233cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:20.026 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:32:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:20.031 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7714e0f4-5f54-4c24-a723-1ecc32304fe8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:20 compute-0 sudo[293590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 16:32:20 compute-0 sudo[293590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:32:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:20.055 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ea5901d7-7fa9-4514-9356-dd1e7435d2cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 478943, 'reachable_time': 34185, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293615, 'error': None, 'target': 'ovnmeta-50e18e22-7850-458c-8d66-5932e0495377', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:20 compute-0 systemd[1]: run-netns-ovnmeta\x2d50e18e22\x2d7850\x2d458c\x2d8d66\x2d5932e0495377.mount: Deactivated successfully.
Nov 25 16:32:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:20.059 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-50e18e22-7850-458c-8d66-5932e0495377 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:32:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:20.059 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[17ee3968-0ebc-4afc-98b8-a134e5e341ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:20.066 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 16:32:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:32:20 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1514997394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:32:20 compute-0 nova_compute[254092]: 2025-11-25 16:32:20.208 254096 DEBUG oslo_concurrency.processutils [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:20 compute-0 nova_compute[254092]: 2025-11-25 16:32:20.216 254096 DEBUG nova.compute.provider_tree [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:32:20 compute-0 nova_compute[254092]: 2025-11-25 16:32:20.238 254096 DEBUG nova.scheduler.client.report [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:32:20 compute-0 nova_compute[254092]: 2025-11-25 16:32:20.261 254096 DEBUG oslo_concurrency.lockutils [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:20 compute-0 nova_compute[254092]: 2025-11-25 16:32:20.298 254096 INFO nova.scheduler.client.report [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Deleted allocations for instance 36f65013-2906-4794-9e23-e92dc7814b6e
Nov 25 16:32:20 compute-0 nova_compute[254092]: 2025-11-25 16:32:20.346 254096 INFO nova.virt.libvirt.driver [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Deleting instance files /var/lib/nova/instances/3573b86d-afab-4a6f-970e-7db532c23eb3_del
Nov 25 16:32:20 compute-0 nova_compute[254092]: 2025-11-25 16:32:20.347 254096 INFO nova.virt.libvirt.driver [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Deletion of /var/lib/nova/instances/3573b86d-afab-4a6f-970e-7db532c23eb3_del complete
Nov 25 16:32:20 compute-0 nova_compute[254092]: 2025-11-25 16:32:20.369 254096 DEBUG oslo_concurrency.lockutils [None req-4fa5150e-3af7-41a0-bd2e-0e3bef7ae6de 23c828e6ebbd4d0488f6edbbe9616ca7 1a87d91cb59d45c29155c8f5cb5ad745 - - default default] Lock "36f65013-2906-4794-9e23-e92dc7814b6e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:20 compute-0 nova_compute[254092]: 2025-11-25 16:32:20.401 254096 DEBUG nova.compute.manager [req-67823ed3-6fcd-4b6c-8056-71b9b6bbeb8a req-7d33adec-a9d3-4e6f-be97-9aaf32ccb480 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Received event network-vif-unplugged-479811bd-7043-4423-9815-a17763247b3b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:32:20 compute-0 nova_compute[254092]: 2025-11-25 16:32:20.402 254096 DEBUG oslo_concurrency.lockutils [req-67823ed3-6fcd-4b6c-8056-71b9b6bbeb8a req-7d33adec-a9d3-4e6f-be97-9aaf32ccb480 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "3573b86d-afab-4a6f-970e-7db532c23eb3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:20 compute-0 nova_compute[254092]: 2025-11-25 16:32:20.402 254096 DEBUG oslo_concurrency.lockutils [req-67823ed3-6fcd-4b6c-8056-71b9b6bbeb8a req-7d33adec-a9d3-4e6f-be97-9aaf32ccb480 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3573b86d-afab-4a6f-970e-7db532c23eb3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:20 compute-0 nova_compute[254092]: 2025-11-25 16:32:20.402 254096 DEBUG oslo_concurrency.lockutils [req-67823ed3-6fcd-4b6c-8056-71b9b6bbeb8a req-7d33adec-a9d3-4e6f-be97-9aaf32ccb480 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3573b86d-afab-4a6f-970e-7db532c23eb3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:20 compute-0 nova_compute[254092]: 2025-11-25 16:32:20.403 254096 DEBUG nova.compute.manager [req-67823ed3-6fcd-4b6c-8056-71b9b6bbeb8a req-7d33adec-a9d3-4e6f-be97-9aaf32ccb480 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] No waiting events found dispatching network-vif-unplugged-479811bd-7043-4423-9815-a17763247b3b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:32:20 compute-0 nova_compute[254092]: 2025-11-25 16:32:20.403 254096 DEBUG nova.compute.manager [req-67823ed3-6fcd-4b6c-8056-71b9b6bbeb8a req-7d33adec-a9d3-4e6f-be97-9aaf32ccb480 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Received event network-vif-unplugged-479811bd-7043-4423-9815-a17763247b3b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:32:20 compute-0 podman[293660]: 2025-11-25 16:32:20.405756726 +0000 UTC m=+0.044421225 container create 27be9a4bd02e54a6f139825fd1154a4e13a0a8034db4f9b0aeaa66bb11f68b4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_feynman, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 16:32:20 compute-0 nova_compute[254092]: 2025-11-25 16:32:20.418 254096 INFO nova.compute.manager [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Took 0.85 seconds to destroy the instance on the hypervisor.
Nov 25 16:32:20 compute-0 nova_compute[254092]: 2025-11-25 16:32:20.418 254096 DEBUG oslo.service.loopingcall [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:32:20 compute-0 nova_compute[254092]: 2025-11-25 16:32:20.419 254096 DEBUG nova.compute.manager [-] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:32:20 compute-0 nova_compute[254092]: 2025-11-25 16:32:20.419 254096 DEBUG nova.network.neutron [-] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:32:20 compute-0 systemd[1]: Started libpod-conmon-27be9a4bd02e54a6f139825fd1154a4e13a0a8034db4f9b0aeaa66bb11f68b4e.scope.
Nov 25 16:32:20 compute-0 podman[293660]: 2025-11-25 16:32:20.385515468 +0000 UTC m=+0.024179987 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:32:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:32:20 compute-0 podman[293660]: 2025-11-25 16:32:20.511471189 +0000 UTC m=+0.150135688 container init 27be9a4bd02e54a6f139825fd1154a4e13a0a8034db4f9b0aeaa66bb11f68b4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 16:32:20 compute-0 podman[293660]: 2025-11-25 16:32:20.520406531 +0000 UTC m=+0.159071030 container start 27be9a4bd02e54a6f139825fd1154a4e13a0a8034db4f9b0aeaa66bb11f68b4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 16:32:20 compute-0 podman[293660]: 2025-11-25 16:32:20.524181993 +0000 UTC m=+0.162846552 container attach 27be9a4bd02e54a6f139825fd1154a4e13a0a8034db4f9b0aeaa66bb11f68b4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 16:32:20 compute-0 great_feynman[293677]: 167 167
Nov 25 16:32:20 compute-0 systemd[1]: libpod-27be9a4bd02e54a6f139825fd1154a4e13a0a8034db4f9b0aeaa66bb11f68b4e.scope: Deactivated successfully.
Nov 25 16:32:20 compute-0 podman[293660]: 2025-11-25 16:32:20.529156618 +0000 UTC m=+0.167821157 container died 27be9a4bd02e54a6f139825fd1154a4e13a0a8034db4f9b0aeaa66bb11f68b4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_feynman, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:32:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-640fae6eb89f25ed8c78e49639d6639a442b8fa9907d27487fdfd638d4ffbb7f-merged.mount: Deactivated successfully.
Nov 25 16:32:20 compute-0 podman[293660]: 2025-11-25 16:32:20.579528103 +0000 UTC m=+0.218192622 container remove 27be9a4bd02e54a6f139825fd1154a4e13a0a8034db4f9b0aeaa66bb11f68b4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 16:32:20 compute-0 systemd[1]: libpod-conmon-27be9a4bd02e54a6f139825fd1154a4e13a0a8034db4f9b0aeaa66bb11f68b4e.scope: Deactivated successfully.
Nov 25 16:32:20 compute-0 podman[293700]: 2025-11-25 16:32:20.778740859 +0000 UTC m=+0.049027349 container create 71203b3aa39aa7b0df73d0ee6964305576cd9d4ef248e9bd626a4703c25c9898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 16:32:20 compute-0 systemd[1]: Started libpod-conmon-71203b3aa39aa7b0df73d0ee6964305576cd9d4ef248e9bd626a4703c25c9898.scope.
Nov 25 16:32:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:32:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4805e15c49e81d621926374d92c2800d086e317efc477b10feca4e4715f3be0b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:32:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4805e15c49e81d621926374d92c2800d086e317efc477b10feca4e4715f3be0b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:32:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4805e15c49e81d621926374d92c2800d086e317efc477b10feca4e4715f3be0b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:32:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4805e15c49e81d621926374d92c2800d086e317efc477b10feca4e4715f3be0b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:32:20 compute-0 podman[293700]: 2025-11-25 16:32:20.758784108 +0000 UTC m=+0.029070628 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:32:20 compute-0 nova_compute[254092]: 2025-11-25 16:32:20.853 254096 DEBUG nova.compute.manager [req-0fbba039-cedd-4b92-80a6-a59fc6261900 req-8365e9aa-34c9-42e5-9965-de6895099745 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Received event network-vif-plugged-e1641afa-e435-45ca-a0fe-d2bb9b12981a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:32:20 compute-0 nova_compute[254092]: 2025-11-25 16:32:20.855 254096 DEBUG oslo_concurrency.lockutils [req-0fbba039-cedd-4b92-80a6-a59fc6261900 req-8365e9aa-34c9-42e5-9965-de6895099745 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "9bd4d655-c683-4433-a739-168946211a75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:20 compute-0 nova_compute[254092]: 2025-11-25 16:32:20.855 254096 DEBUG oslo_concurrency.lockutils [req-0fbba039-cedd-4b92-80a6-a59fc6261900 req-8365e9aa-34c9-42e5-9965-de6895099745 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9bd4d655-c683-4433-a739-168946211a75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:20 compute-0 nova_compute[254092]: 2025-11-25 16:32:20.855 254096 DEBUG oslo_concurrency.lockutils [req-0fbba039-cedd-4b92-80a6-a59fc6261900 req-8365e9aa-34c9-42e5-9965-de6895099745 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9bd4d655-c683-4433-a739-168946211a75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:20 compute-0 nova_compute[254092]: 2025-11-25 16:32:20.855 254096 DEBUG nova.compute.manager [req-0fbba039-cedd-4b92-80a6-a59fc6261900 req-8365e9aa-34c9-42e5-9965-de6895099745 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] No waiting events found dispatching network-vif-plugged-e1641afa-e435-45ca-a0fe-d2bb9b12981a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:32:20 compute-0 nova_compute[254092]: 2025-11-25 16:32:20.856 254096 WARNING nova.compute.manager [req-0fbba039-cedd-4b92-80a6-a59fc6261900 req-8365e9aa-34c9-42e5-9965-de6895099745 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Received unexpected event network-vif-plugged-e1641afa-e435-45ca-a0fe-d2bb9b12981a for instance with vm_state active and task_state None.
Nov 25 16:32:20 compute-0 podman[293700]: 2025-11-25 16:32:20.875937082 +0000 UTC m=+0.146223592 container init 71203b3aa39aa7b0df73d0ee6964305576cd9d4ef248e9bd626a4703c25c9898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mahavira, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 16:32:20 compute-0 podman[293700]: 2025-11-25 16:32:20.885774618 +0000 UTC m=+0.156061108 container start 71203b3aa39aa7b0df73d0ee6964305576cd9d4ef248e9bd626a4703c25c9898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 16:32:20 compute-0 podman[293700]: 2025-11-25 16:32:20.889833318 +0000 UTC m=+0.160119828 container attach 71203b3aa39aa7b0df73d0ee6964305576cd9d4ef248e9bd626a4703c25c9898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mahavira, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Nov 25 16:32:21 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1514997394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:32:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1348: 321 pgs: 321 active+clean; 171 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 7.1 MiB/s wr, 350 op/s
Nov 25 16:32:21 compute-0 nova_compute[254092]: 2025-11-25 16:32:21.382 254096 DEBUG nova.network.neutron [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Successfully updated port: 82f63517-7636-46bf-b4e1-ba191ddad018 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:32:21 compute-0 nova_compute[254092]: 2025-11-25 16:32:21.399 254096 DEBUG oslo_concurrency.lockutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "refresh_cache-800c66e3-ee9f-4766-92f2-ecda5671cde3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:32:21 compute-0 nova_compute[254092]: 2025-11-25 16:32:21.400 254096 DEBUG oslo_concurrency.lockutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquired lock "refresh_cache-800c66e3-ee9f-4766-92f2-ecda5671cde3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:32:21 compute-0 nova_compute[254092]: 2025-11-25 16:32:21.400 254096 DEBUG nova.network.neutron [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:32:21 compute-0 nova_compute[254092]: 2025-11-25 16:32:21.665 254096 DEBUG nova.network.neutron [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:32:21 compute-0 nova_compute[254092]: 2025-11-25 16:32:21.748 254096 DEBUG nova.network.neutron [-] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:32:21 compute-0 nova_compute[254092]: 2025-11-25 16:32:21.774 254096 INFO nova.compute.manager [-] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Took 1.36 seconds to deallocate network for instance.
Nov 25 16:32:21 compute-0 nova_compute[254092]: 2025-11-25 16:32:21.831 254096 DEBUG oslo_concurrency.lockutils [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:21 compute-0 nova_compute[254092]: 2025-11-25 16:32:21.832 254096 DEBUG oslo_concurrency.lockutils [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:21 compute-0 nova_compute[254092]: 2025-11-25 16:32:21.916 254096 DEBUG oslo_concurrency.processutils [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:21 compute-0 modest_mahavira[293716]: {
Nov 25 16:32:21 compute-0 modest_mahavira[293716]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 16:32:21 compute-0 modest_mahavira[293716]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:32:21 compute-0 modest_mahavira[293716]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 16:32:21 compute-0 modest_mahavira[293716]:         "osd_id": 1,
Nov 25 16:32:21 compute-0 modest_mahavira[293716]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:32:21 compute-0 modest_mahavira[293716]:         "type": "bluestore"
Nov 25 16:32:21 compute-0 modest_mahavira[293716]:     },
Nov 25 16:32:21 compute-0 modest_mahavira[293716]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 16:32:21 compute-0 modest_mahavira[293716]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:32:21 compute-0 modest_mahavira[293716]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 16:32:21 compute-0 modest_mahavira[293716]:         "osd_id": 2,
Nov 25 16:32:21 compute-0 modest_mahavira[293716]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:32:21 compute-0 modest_mahavira[293716]:         "type": "bluestore"
Nov 25 16:32:21 compute-0 modest_mahavira[293716]:     },
Nov 25 16:32:21 compute-0 modest_mahavira[293716]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 16:32:21 compute-0 modest_mahavira[293716]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:32:21 compute-0 modest_mahavira[293716]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 16:32:21 compute-0 modest_mahavira[293716]:         "osd_id": 0,
Nov 25 16:32:21 compute-0 modest_mahavira[293716]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:32:21 compute-0 modest_mahavira[293716]:         "type": "bluestore"
Nov 25 16:32:21 compute-0 modest_mahavira[293716]:     }
Nov 25 16:32:21 compute-0 modest_mahavira[293716]: }
Nov 25 16:32:21 compute-0 podman[293700]: 2025-11-25 16:32:21.977673484 +0000 UTC m=+1.247959974 container died 71203b3aa39aa7b0df73d0ee6964305576cd9d4ef248e9bd626a4703c25c9898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 16:32:21 compute-0 systemd[1]: libpod-71203b3aa39aa7b0df73d0ee6964305576cd9d4ef248e9bd626a4703c25c9898.scope: Deactivated successfully.
Nov 25 16:32:21 compute-0 systemd[1]: libpod-71203b3aa39aa7b0df73d0ee6964305576cd9d4ef248e9bd626a4703c25c9898.scope: Consumed 1.066s CPU time.
Nov 25 16:32:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-4805e15c49e81d621926374d92c2800d086e317efc477b10feca4e4715f3be0b-merged.mount: Deactivated successfully.
Nov 25 16:32:22 compute-0 ceph-mon[74985]: pgmap v1348: 321 pgs: 321 active+clean; 171 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 7.1 MiB/s wr, 350 op/s
Nov 25 16:32:22 compute-0 podman[293700]: 2025-11-25 16:32:22.043878898 +0000 UTC m=+1.314165378 container remove 71203b3aa39aa7b0df73d0ee6964305576cd9d4ef248e9bd626a4703c25c9898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:32:22 compute-0 systemd[1]: libpod-conmon-71203b3aa39aa7b0df73d0ee6964305576cd9d4ef248e9bd626a4703c25c9898.scope: Deactivated successfully.
Nov 25 16:32:22 compute-0 sudo[293590]: pam_unix(sudo:session): session closed for user root
Nov 25 16:32:22 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:32:22 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:32:22 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:32:22 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:32:22 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 607c2767-74d2-4a41-8912-1a54487e6eeb does not exist
Nov 25 16:32:22 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 28a0e214-6bc0-4656-8e15-9805c998868b does not exist
Nov 25 16:32:22 compute-0 sudo[293780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:32:22 compute-0 sudo[293780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:32:22 compute-0 sudo[293780]: pam_unix(sudo:session): session closed for user root
Nov 25 16:32:22 compute-0 sudo[293805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 16:32:22 compute-0 sudo[293805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:32:22 compute-0 sudo[293805]: pam_unix(sudo:session): session closed for user root
Nov 25 16:32:22 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:32:22 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2196492239' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:32:22 compute-0 nova_compute[254092]: 2025-11-25 16:32:22.426 254096 DEBUG oslo_concurrency.processutils [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:22 compute-0 nova_compute[254092]: 2025-11-25 16:32:22.435 254096 DEBUG nova.compute.provider_tree [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:32:22 compute-0 nova_compute[254092]: 2025-11-25 16:32:22.450 254096 DEBUG nova.scheduler.client.report [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:32:22 compute-0 nova_compute[254092]: 2025-11-25 16:32:22.468 254096 DEBUG oslo_concurrency.lockutils [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:22 compute-0 nova_compute[254092]: 2025-11-25 16:32:22.492 254096 INFO nova.scheduler.client.report [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Deleted allocations for instance 3573b86d-afab-4a6f-970e-7db532c23eb3
Nov 25 16:32:22 compute-0 nova_compute[254092]: 2025-11-25 16:32:22.548 254096 DEBUG oslo_concurrency.lockutils [None req-6d05fbe2-5d04-4d67-b30c-e1aaedd7c2f1 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "3573b86d-afab-4a6f-970e-7db532c23eb3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.983s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:22 compute-0 nova_compute[254092]: 2025-11-25 16:32:22.632 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:22 compute-0 nova_compute[254092]: 2025-11-25 16:32:22.708 254096 DEBUG nova.network.neutron [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Updating instance_info_cache with network_info: [{"id": "82f63517-7636-46bf-b4e1-ba191ddad018", "address": "fa:16:3e:f0:7b:ca", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82f63517-76", "ovs_interfaceid": "82f63517-7636-46bf-b4e1-ba191ddad018", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:32:22 compute-0 nova_compute[254092]: 2025-11-25 16:32:22.731 254096 DEBUG oslo_concurrency.lockutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Releasing lock "refresh_cache-800c66e3-ee9f-4766-92f2-ecda5671cde3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:32:22 compute-0 nova_compute[254092]: 2025-11-25 16:32:22.732 254096 DEBUG nova.compute.manager [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Instance network_info: |[{"id": "82f63517-7636-46bf-b4e1-ba191ddad018", "address": "fa:16:3e:f0:7b:ca", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82f63517-76", "ovs_interfaceid": "82f63517-7636-46bf-b4e1-ba191ddad018", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:32:22 compute-0 nova_compute[254092]: 2025-11-25 16:32:22.737 254096 DEBUG nova.virt.libvirt.driver [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Start _get_guest_xml network_info=[{"id": "82f63517-7636-46bf-b4e1-ba191ddad018", "address": "fa:16:3e:f0:7b:ca", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82f63517-76", "ovs_interfaceid": "82f63517-7636-46bf-b4e1-ba191ddad018", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:32:22 compute-0 nova_compute[254092]: 2025-11-25 16:32:22.744 254096 WARNING nova.virt.libvirt.driver [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:32:22 compute-0 nova_compute[254092]: 2025-11-25 16:32:22.750 254096 DEBUG nova.virt.libvirt.host [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:32:22 compute-0 nova_compute[254092]: 2025-11-25 16:32:22.751 254096 DEBUG nova.virt.libvirt.host [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:32:22 compute-0 nova_compute[254092]: 2025-11-25 16:32:22.755 254096 DEBUG nova.virt.libvirt.host [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:32:22 compute-0 nova_compute[254092]: 2025-11-25 16:32:22.755 254096 DEBUG nova.virt.libvirt.host [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:32:22 compute-0 nova_compute[254092]: 2025-11-25 16:32:22.756 254096 DEBUG nova.virt.libvirt.driver [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:32:22 compute-0 nova_compute[254092]: 2025-11-25 16:32:22.756 254096 DEBUG nova.virt.hardware [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:32:22 compute-0 nova_compute[254092]: 2025-11-25 16:32:22.757 254096 DEBUG nova.virt.hardware [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:32:22 compute-0 nova_compute[254092]: 2025-11-25 16:32:22.757 254096 DEBUG nova.virt.hardware [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:32:22 compute-0 nova_compute[254092]: 2025-11-25 16:32:22.757 254096 DEBUG nova.virt.hardware [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:32:22 compute-0 nova_compute[254092]: 2025-11-25 16:32:22.757 254096 DEBUG nova.virt.hardware [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:32:22 compute-0 nova_compute[254092]: 2025-11-25 16:32:22.757 254096 DEBUG nova.virt.hardware [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:32:22 compute-0 nova_compute[254092]: 2025-11-25 16:32:22.758 254096 DEBUG nova.virt.hardware [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:32:22 compute-0 nova_compute[254092]: 2025-11-25 16:32:22.758 254096 DEBUG nova.virt.hardware [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:32:22 compute-0 nova_compute[254092]: 2025-11-25 16:32:22.758 254096 DEBUG nova.virt.hardware [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:32:22 compute-0 nova_compute[254092]: 2025-11-25 16:32:22.758 254096 DEBUG nova.virt.hardware [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:32:22 compute-0 nova_compute[254092]: 2025-11-25 16:32:22.759 254096 DEBUG nova.virt.hardware [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:32:22 compute-0 nova_compute[254092]: 2025-11-25 16:32:22.762 254096 DEBUG oslo_concurrency.processutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:23 compute-0 nova_compute[254092]: 2025-11-25 16:32:23.013 254096 DEBUG nova.compute.manager [req-ace3a170-9421-462a-875f-c92abc00e2cc req-6d516acd-a490-4633-a1a1-b312295f4e3b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Received event network-vif-plugged-479811bd-7043-4423-9815-a17763247b3b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:32:23 compute-0 nova_compute[254092]: 2025-11-25 16:32:23.014 254096 DEBUG oslo_concurrency.lockutils [req-ace3a170-9421-462a-875f-c92abc00e2cc req-6d516acd-a490-4633-a1a1-b312295f4e3b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "3573b86d-afab-4a6f-970e-7db532c23eb3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:23 compute-0 nova_compute[254092]: 2025-11-25 16:32:23.014 254096 DEBUG oslo_concurrency.lockutils [req-ace3a170-9421-462a-875f-c92abc00e2cc req-6d516acd-a490-4633-a1a1-b312295f4e3b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3573b86d-afab-4a6f-970e-7db532c23eb3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:23 compute-0 nova_compute[254092]: 2025-11-25 16:32:23.014 254096 DEBUG oslo_concurrency.lockutils [req-ace3a170-9421-462a-875f-c92abc00e2cc req-6d516acd-a490-4633-a1a1-b312295f4e3b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3573b86d-afab-4a6f-970e-7db532c23eb3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:23 compute-0 nova_compute[254092]: 2025-11-25 16:32:23.015 254096 DEBUG nova.compute.manager [req-ace3a170-9421-462a-875f-c92abc00e2cc req-6d516acd-a490-4633-a1a1-b312295f4e3b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] No waiting events found dispatching network-vif-plugged-479811bd-7043-4423-9815-a17763247b3b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:32:23 compute-0 nova_compute[254092]: 2025-11-25 16:32:23.015 254096 WARNING nova.compute.manager [req-ace3a170-9421-462a-875f-c92abc00e2cc req-6d516acd-a490-4633-a1a1-b312295f4e3b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Received unexpected event network-vif-plugged-479811bd-7043-4423-9815-a17763247b3b for instance with vm_state deleted and task_state None.
Nov 25 16:32:23 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:32:23 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:32:23 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2196492239' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:32:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:32:23 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/40773822' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:32:23 compute-0 nova_compute[254092]: 2025-11-25 16:32:23.227 254096 DEBUG nova.compute.manager [req-8aa52a0b-ed19-454a-8136-4449aa204f83 req-5a0caa08-a208-4687-8bbb-73082340ac7a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Received event network-changed-82f63517-7636-46bf-b4e1-ba191ddad018 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:32:23 compute-0 nova_compute[254092]: 2025-11-25 16:32:23.228 254096 DEBUG nova.compute.manager [req-8aa52a0b-ed19-454a-8136-4449aa204f83 req-5a0caa08-a208-4687-8bbb-73082340ac7a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Refreshing instance network info cache due to event network-changed-82f63517-7636-46bf-b4e1-ba191ddad018. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:32:23 compute-0 nova_compute[254092]: 2025-11-25 16:32:23.228 254096 DEBUG oslo_concurrency.lockutils [req-8aa52a0b-ed19-454a-8136-4449aa204f83 req-5a0caa08-a208-4687-8bbb-73082340ac7a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-800c66e3-ee9f-4766-92f2-ecda5671cde3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:32:23 compute-0 nova_compute[254092]: 2025-11-25 16:32:23.229 254096 DEBUG oslo_concurrency.lockutils [req-8aa52a0b-ed19-454a-8136-4449aa204f83 req-5a0caa08-a208-4687-8bbb-73082340ac7a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-800c66e3-ee9f-4766-92f2-ecda5671cde3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:32:23 compute-0 nova_compute[254092]: 2025-11-25 16:32:23.229 254096 DEBUG nova.network.neutron [req-8aa52a0b-ed19-454a-8136-4449aa204f83 req-5a0caa08-a208-4687-8bbb-73082340ac7a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Refreshing network info cache for port 82f63517-7636-46bf-b4e1-ba191ddad018 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:32:23 compute-0 nova_compute[254092]: 2025-11-25 16:32:23.245 254096 DEBUG oslo_concurrency.processutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:23 compute-0 nova_compute[254092]: 2025-11-25 16:32:23.269 254096 DEBUG nova.storage.rbd_utils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image 800c66e3-ee9f-4766-92f2-ecda5671cde3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:23 compute-0 nova_compute[254092]: 2025-11-25 16:32:23.276 254096 DEBUG oslo_concurrency.processutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1349: 321 pgs: 321 active+clean; 171 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 1.8 MiB/s wr, 233 op/s
Nov 25 16:32:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:32:23 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/733694160' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:32:23 compute-0 nova_compute[254092]: 2025-11-25 16:32:23.720 254096 DEBUG oslo_concurrency.processutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:23 compute-0 nova_compute[254092]: 2025-11-25 16:32:23.723 254096 DEBUG nova.virt.libvirt.vif [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:32:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1061549484',display_name='tempest-tempest.common.compute-instance-1061549484',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1061549484',id=32,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDGZe6LoY198VQqVmszTzRQn1eLrChbRZgirb9UFyHnKiwbK+onUZ9OAKdmbbmeuifOBSXxAmDh6Tzi6OA/iv9WrQsgxGvk1nGcnbFbOsVIGStOyhcAbSBeBMtXZYtHi4Q==',key_name='tempest-keypair-386194762',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-kwb5mjaz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:32:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=800c66e3-ee9f-4766-92f2-ecda5671cde3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "82f63517-7636-46bf-b4e1-ba191ddad018", "address": "fa:16:3e:f0:7b:ca", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82f63517-76", "ovs_interfaceid": "82f63517-7636-46bf-b4e1-ba191ddad018", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:32:23 compute-0 nova_compute[254092]: 2025-11-25 16:32:23.724 254096 DEBUG nova.network.os_vif_util [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "82f63517-7636-46bf-b4e1-ba191ddad018", "address": "fa:16:3e:f0:7b:ca", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82f63517-76", "ovs_interfaceid": "82f63517-7636-46bf-b4e1-ba191ddad018", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:32:23 compute-0 nova_compute[254092]: 2025-11-25 16:32:23.726 254096 DEBUG nova.network.os_vif_util [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f0:7b:ca,bridge_name='br-int',has_traffic_filtering=True,id=82f63517-7636-46bf-b4e1-ba191ddad018,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82f63517-76') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:32:23 compute-0 nova_compute[254092]: 2025-11-25 16:32:23.729 254096 DEBUG nova.objects.instance [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'pci_devices' on Instance uuid 800c66e3-ee9f-4766-92f2-ecda5671cde3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:32:23 compute-0 nova_compute[254092]: 2025-11-25 16:32:23.746 254096 DEBUG nova.virt.libvirt.driver [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:32:23 compute-0 nova_compute[254092]:   <uuid>800c66e3-ee9f-4766-92f2-ecda5671cde3</uuid>
Nov 25 16:32:23 compute-0 nova_compute[254092]:   <name>instance-00000020</name>
Nov 25 16:32:23 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:32:23 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:32:23 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:32:23 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:       <nova:name>tempest-tempest.common.compute-instance-1061549484</nova:name>
Nov 25 16:32:23 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:32:22</nova:creationTime>
Nov 25 16:32:23 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:32:23 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:32:23 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:32:23 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:32:23 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:32:23 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:32:23 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:32:23 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:32:23 compute-0 nova_compute[254092]:         <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 16:32:23 compute-0 nova_compute[254092]:         <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 16:32:23 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:32:23 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:32:23 compute-0 nova_compute[254092]:         <nova:port uuid="82f63517-7636-46bf-b4e1-ba191ddad018">
Nov 25 16:32:23 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:32:23 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:32:23 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:32:23 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:32:23 compute-0 nova_compute[254092]:     <system>
Nov 25 16:32:23 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:32:23 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:32:23 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:32:23 compute-0 nova_compute[254092]:       <entry name="serial">800c66e3-ee9f-4766-92f2-ecda5671cde3</entry>
Nov 25 16:32:23 compute-0 nova_compute[254092]:       <entry name="uuid">800c66e3-ee9f-4766-92f2-ecda5671cde3</entry>
Nov 25 16:32:23 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     </system>
Nov 25 16:32:23 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:32:23 compute-0 nova_compute[254092]:   <os>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:   </os>
Nov 25 16:32:23 compute-0 nova_compute[254092]:   <features>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:   </features>
Nov 25 16:32:23 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:32:23 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:32:23 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:32:23 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:32:23 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:32:23 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/800c66e3-ee9f-4766-92f2-ecda5671cde3_disk">
Nov 25 16:32:23 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:       </source>
Nov 25 16:32:23 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:32:23 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:32:23 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:32:23 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/800c66e3-ee9f-4766-92f2-ecda5671cde3_disk.config">
Nov 25 16:32:23 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:       </source>
Nov 25 16:32:23 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:32:23 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:32:23 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:32:23 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:f0:7b:ca"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:       <target dev="tap82f63517-76"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:32:23 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/800c66e3-ee9f-4766-92f2-ecda5671cde3/console.log" append="off"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     <video>
Nov 25 16:32:23 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     </video>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:32:23 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:32:23 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:32:23 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:32:23 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:32:23 compute-0 nova_compute[254092]: </domain>
Nov 25 16:32:23 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:32:23 compute-0 nova_compute[254092]: 2025-11-25 16:32:23.759 254096 DEBUG nova.compute.manager [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Preparing to wait for external event network-vif-plugged-82f63517-7636-46bf-b4e1-ba191ddad018 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:32:23 compute-0 nova_compute[254092]: 2025-11-25 16:32:23.759 254096 DEBUG oslo_concurrency.lockutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "800c66e3-ee9f-4766-92f2-ecda5671cde3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:23 compute-0 nova_compute[254092]: 2025-11-25 16:32:23.760 254096 DEBUG oslo_concurrency.lockutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "800c66e3-ee9f-4766-92f2-ecda5671cde3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:23 compute-0 nova_compute[254092]: 2025-11-25 16:32:23.760 254096 DEBUG oslo_concurrency.lockutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "800c66e3-ee9f-4766-92f2-ecda5671cde3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:23 compute-0 nova_compute[254092]: 2025-11-25 16:32:23.762 254096 DEBUG nova.virt.libvirt.vif [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:32:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1061549484',display_name='tempest-tempest.common.compute-instance-1061549484',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1061549484',id=32,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDGZe6LoY198VQqVmszTzRQn1eLrChbRZgirb9UFyHnKiwbK+onUZ9OAKdmbbmeuifOBSXxAmDh6Tzi6OA/iv9WrQsgxGvk1nGcnbFbOsVIGStOyhcAbSBeBMtXZYtHi4Q==',key_name='tempest-keypair-386194762',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-kwb5mjaz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:32:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=800c66e3-ee9f-4766-92f2-ecda5671cde3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "82f63517-7636-46bf-b4e1-ba191ddad018", "address": "fa:16:3e:f0:7b:ca", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82f63517-76", "ovs_interfaceid": "82f63517-7636-46bf-b4e1-ba191ddad018", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:32:23 compute-0 nova_compute[254092]: 2025-11-25 16:32:23.762 254096 DEBUG nova.network.os_vif_util [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "82f63517-7636-46bf-b4e1-ba191ddad018", "address": "fa:16:3e:f0:7b:ca", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82f63517-76", "ovs_interfaceid": "82f63517-7636-46bf-b4e1-ba191ddad018", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:32:23 compute-0 nova_compute[254092]: 2025-11-25 16:32:23.763 254096 DEBUG nova.network.os_vif_util [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f0:7b:ca,bridge_name='br-int',has_traffic_filtering=True,id=82f63517-7636-46bf-b4e1-ba191ddad018,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82f63517-76') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:32:23 compute-0 nova_compute[254092]: 2025-11-25 16:32:23.764 254096 DEBUG os_vif [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:7b:ca,bridge_name='br-int',has_traffic_filtering=True,id=82f63517-7636-46bf-b4e1-ba191ddad018,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82f63517-76') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:32:23 compute-0 nova_compute[254092]: 2025-11-25 16:32:23.766 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:23 compute-0 nova_compute[254092]: 2025-11-25 16:32:23.767 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:23 compute-0 nova_compute[254092]: 2025-11-25 16:32:23.768 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:32:23 compute-0 nova_compute[254092]: 2025-11-25 16:32:23.773 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:23 compute-0 nova_compute[254092]: 2025-11-25 16:32:23.773 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap82f63517-76, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:23 compute-0 nova_compute[254092]: 2025-11-25 16:32:23.774 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap82f63517-76, col_values=(('external_ids', {'iface-id': '82f63517-7636-46bf-b4e1-ba191ddad018', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f0:7b:ca', 'vm-uuid': '800c66e3-ee9f-4766-92f2-ecda5671cde3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:23 compute-0 NetworkManager[48891]: <info>  [1764088343.7780] manager: (tap82f63517-76): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/101)
Nov 25 16:32:23 compute-0 nova_compute[254092]: 2025-11-25 16:32:23.781 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:32:23 compute-0 nova_compute[254092]: 2025-11-25 16:32:23.785 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:23 compute-0 nova_compute[254092]: 2025-11-25 16:32:23.786 254096 INFO os_vif [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:7b:ca,bridge_name='br-int',has_traffic_filtering=True,id=82f63517-7636-46bf-b4e1-ba191ddad018,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82f63517-76')
Nov 25 16:32:23 compute-0 nova_compute[254092]: 2025-11-25 16:32:23.847 254096 DEBUG nova.virt.libvirt.driver [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:32:23 compute-0 nova_compute[254092]: 2025-11-25 16:32:23.848 254096 DEBUG nova.virt.libvirt.driver [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:32:23 compute-0 nova_compute[254092]: 2025-11-25 16:32:23.848 254096 DEBUG nova.virt.libvirt.driver [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No VIF found with MAC fa:16:3e:f0:7b:ca, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:32:23 compute-0 nova_compute[254092]: 2025-11-25 16:32:23.849 254096 INFO nova.virt.libvirt.driver [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Using config drive
Nov 25 16:32:23 compute-0 nova_compute[254092]: 2025-11-25 16:32:23.872 254096 DEBUG nova.storage.rbd_utils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image 800c66e3-ee9f-4766-92f2-ecda5671cde3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:32:24 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/40773822' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:32:24 compute-0 ceph-mon[74985]: pgmap v1349: 321 pgs: 321 active+clean; 171 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 1.8 MiB/s wr, 233 op/s
Nov 25 16:32:24 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/733694160' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:32:24 compute-0 nova_compute[254092]: 2025-11-25 16:32:24.317 254096 INFO nova.virt.libvirt.driver [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Creating config drive at /var/lib/nova/instances/800c66e3-ee9f-4766-92f2-ecda5671cde3/disk.config
Nov 25 16:32:24 compute-0 nova_compute[254092]: 2025-11-25 16:32:24.330 254096 DEBUG oslo_concurrency.processutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/800c66e3-ee9f-4766-92f2-ecda5671cde3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqezyp34s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:24 compute-0 nova_compute[254092]: 2025-11-25 16:32:24.476 254096 DEBUG oslo_concurrency.processutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/800c66e3-ee9f-4766-92f2-ecda5671cde3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqezyp34s" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:24 compute-0 nova_compute[254092]: 2025-11-25 16:32:24.511 254096 DEBUG nova.storage.rbd_utils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image 800c66e3-ee9f-4766-92f2-ecda5671cde3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:24 compute-0 nova_compute[254092]: 2025-11-25 16:32:24.517 254096 DEBUG oslo_concurrency.processutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/800c66e3-ee9f-4766-92f2-ecda5671cde3/disk.config 800c66e3-ee9f-4766-92f2-ecda5671cde3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:25 compute-0 nova_compute[254092]: 2025-11-25 16:32:25.227 254096 DEBUG oslo_concurrency.processutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/800c66e3-ee9f-4766-92f2-ecda5671cde3/disk.config 800c66e3-ee9f-4766-92f2-ecda5671cde3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.710s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:25 compute-0 nova_compute[254092]: 2025-11-25 16:32:25.227 254096 INFO nova.virt.libvirt.driver [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Deleting local config drive /var/lib/nova/instances/800c66e3-ee9f-4766-92f2-ecda5671cde3/disk.config because it was imported into RBD.
Nov 25 16:32:25 compute-0 kernel: tap82f63517-76: entered promiscuous mode
Nov 25 16:32:25 compute-0 ovn_controller[153477]: 2025-11-25T16:32:25Z|00208|binding|INFO|Claiming lport 82f63517-7636-46bf-b4e1-ba191ddad018 for this chassis.
Nov 25 16:32:25 compute-0 NetworkManager[48891]: <info>  [1764088345.3055] manager: (tap82f63517-76): new Tun device (/org/freedesktop/NetworkManager/Devices/102)
Nov 25 16:32:25 compute-0 ovn_controller[153477]: 2025-11-25T16:32:25Z|00209|binding|INFO|82f63517-7636-46bf-b4e1-ba191ddad018: Claiming fa:16:3e:f0:7b:ca 10.100.0.6
Nov 25 16:32:25 compute-0 nova_compute[254092]: 2025-11-25 16:32:25.305 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1350: 321 pgs: 321 active+clean; 134 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 5.4 MiB/s rd, 1.8 MiB/s wr, 285 op/s
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.312 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:7b:ca 10.100.0.6'], port_security=['fa:16:3e:f0:7b:ca 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '800c66e3-ee9f-4766-92f2-ecda5671cde3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36fd407edf16424e96041f57fccb9e2b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '59e04c7b-1fad-4d48-ad96-e3b3cf180d61', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a0fe150-4b32-4abf-ad06-055b2d97facb, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=82f63517-7636-46bf-b4e1-ba191ddad018) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.313 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 82f63517-7636-46bf-b4e1-ba191ddad018 in datapath 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d bound to our chassis
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.314 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d
Nov 25 16:32:25 compute-0 nova_compute[254092]: 2025-11-25 16:32:25.328 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:25 compute-0 ovn_controller[153477]: 2025-11-25T16:32:25Z|00210|binding|INFO|Setting lport 82f63517-7636-46bf-b4e1-ba191ddad018 ovn-installed in OVS
Nov 25 16:32:25 compute-0 ovn_controller[153477]: 2025-11-25T16:32:25Z|00211|binding|INFO|Setting lport 82f63517-7636-46bf-b4e1-ba191ddad018 up in Southbound
Nov 25 16:32:25 compute-0 nova_compute[254092]: 2025-11-25 16:32:25.333 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.333 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[11ce2ded-3ac1-48bb-a9d4-d6f92e1f8abd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.334 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap52e7d5b9-01 in ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.336 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap52e7d5b9-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.336 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5266d0fd-a93b-4bb6-b099-3766c8edbd1b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.339 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[68f6cd5f-2654-4697-811b-77b675c90663]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.349 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[d28eb5d8-0fe2-438d-a933-d586d4b0b7ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:25 compute-0 systemd-udevd[293970]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:32:25 compute-0 systemd-machined[216343]: New machine qemu-37-instance-00000020.
Nov 25 16:32:25 compute-0 NetworkManager[48891]: <info>  [1764088345.3693] device (tap82f63517-76): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.369 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ecdc63b4-6dee-4a47-bde0-bec4dbb1b710]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:25 compute-0 NetworkManager[48891]: <info>  [1764088345.3722] device (tap82f63517-76): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:32:25 compute-0 systemd[1]: Started Virtual Machine qemu-37-instance-00000020.
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.400 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[dffdef69-8fa0-4638-9f63-9ff022135a34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.405 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d9315e53-8aa0-4b51-9644-c77e06144948]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:25 compute-0 systemd-udevd[293973]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:32:25 compute-0 NetworkManager[48891]: <info>  [1764088345.4077] manager: (tap52e7d5b9-00): new Veth device (/org/freedesktop/NetworkManager/Devices/103)
Nov 25 16:32:25 compute-0 nova_compute[254092]: 2025-11-25 16:32:25.444 254096 DEBUG nova.network.neutron [req-8aa52a0b-ed19-454a-8136-4449aa204f83 req-5a0caa08-a208-4687-8bbb-73082340ac7a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Updated VIF entry in instance network info cache for port 82f63517-7636-46bf-b4e1-ba191ddad018. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:32:25 compute-0 nova_compute[254092]: 2025-11-25 16:32:25.445 254096 DEBUG nova.network.neutron [req-8aa52a0b-ed19-454a-8136-4449aa204f83 req-5a0caa08-a208-4687-8bbb-73082340ac7a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Updating instance_info_cache with network_info: [{"id": "82f63517-7636-46bf-b4e1-ba191ddad018", "address": "fa:16:3e:f0:7b:ca", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82f63517-76", "ovs_interfaceid": "82f63517-7636-46bf-b4e1-ba191ddad018", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.449 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ee3c7966-c654-4b6a-9d50-a841aab84352]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.452 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ea7e14f8-927f-46a2-858e-591082806ec9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:25 compute-0 nova_compute[254092]: 2025-11-25 16:32:25.457 254096 DEBUG oslo_concurrency.lockutils [req-8aa52a0b-ed19-454a-8136-4449aa204f83 req-5a0caa08-a208-4687-8bbb-73082340ac7a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-800c66e3-ee9f-4766-92f2-ecda5671cde3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:32:25 compute-0 nova_compute[254092]: 2025-11-25 16:32:25.457 254096 DEBUG nova.compute.manager [req-8aa52a0b-ed19-454a-8136-4449aa204f83 req-5a0caa08-a208-4687-8bbb-73082340ac7a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Received event network-vif-deleted-479811bd-7043-4423-9815-a17763247b3b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:32:25 compute-0 NetworkManager[48891]: <info>  [1764088345.4768] device (tap52e7d5b9-00): carrier: link connected
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.487 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d6a46688-f9d2-46bb-b7a9-e91d85f7151e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.504 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[885046db-a3da-4df2-a996-62ed292f7484]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52e7d5b9-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:97:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 480306, 'reachable_time': 15503, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294001, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.519 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8dcdf522-4437-48f0-ac03-65faadbcf231]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9f:97ee'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 480306, 'tstamp': 480306}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294002, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.535 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[98613650-1819-429b-b870-b87351242aa7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52e7d5b9-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:97:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 480306, 'reachable_time': 15503, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 294003, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.563 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0330332a-1abe-4d50-9538-d590325277c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.612 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[88b7a91c-a71f-4b3a-825b-4fe9ab3d8c2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.614 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52e7d5b9-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.614 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.615 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52e7d5b9-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:25 compute-0 kernel: tap52e7d5b9-00: entered promiscuous mode
Nov 25 16:32:25 compute-0 NetworkManager[48891]: <info>  [1764088345.6183] manager: (tap52e7d5b9-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/104)
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.620 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap52e7d5b9-00, col_values=(('external_ids', {'iface-id': 'af5d2618-f326-4aaf-9395-f47548ac108b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:25 compute-0 ovn_controller[153477]: 2025-11-25T16:32:25Z|00212|binding|INFO|Releasing lport af5d2618-f326-4aaf-9395-f47548ac108b from this chassis (sb_readonly=0)
Nov 25 16:32:25 compute-0 nova_compute[254092]: 2025-11-25 16:32:25.623 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:25 compute-0 nova_compute[254092]: 2025-11-25 16:32:25.639 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:25 compute-0 nova_compute[254092]: 2025-11-25 16:32:25.640 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.640 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/52e7d5b9-0570-4e5c-b3da-9dfcb924b83d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/52e7d5b9-0570-4e5c-b3da-9dfcb924b83d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.641 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ae974164-1085-4136-ada2-b7f27a34b633]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.643 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/52e7d5b9-0570-4e5c-b3da-9dfcb924b83d.pid.haproxy
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:32:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:25.644 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'env', 'PROCESS_TAG=haproxy-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/52e7d5b9-0570-4e5c-b3da-9dfcb924b83d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:32:26 compute-0 podman[294035]: 2025-11-25 16:32:26.033349422 +0000 UTC m=+0.068205249 container create 892c7a817c83b40b1c06df80538835a6ef877956d264f91de5ea00faa3e9f4e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 25 16:32:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:26.072 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:26 compute-0 systemd[1]: Started libpod-conmon-892c7a817c83b40b1c06df80538835a6ef877956d264f91de5ea00faa3e9f4e9.scope.
Nov 25 16:32:26 compute-0 podman[294035]: 2025-11-25 16:32:26.004631104 +0000 UTC m=+0.039487041 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:32:26 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:32:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33e328b169ee953840c8739043c0f2cb5f976a6a6651659cd6ce366c78ba5f4b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:32:26 compute-0 podman[294048]: 2025-11-25 16:32:26.144840862 +0000 UTC m=+0.077606143 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Nov 25 16:32:26 compute-0 podman[294035]: 2025-11-25 16:32:26.145309785 +0000 UTC m=+0.180165642 container init 892c7a817c83b40b1c06df80538835a6ef877956d264f91de5ea00faa3e9f4e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 25 16:32:26 compute-0 podman[294049]: 2025-11-25 16:32:26.145342366 +0000 UTC m=+0.071748975 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Nov 25 16:32:26 compute-0 podman[294035]: 2025-11-25 16:32:26.151468901 +0000 UTC m=+0.186324738 container start 892c7a817c83b40b1c06df80538835a6ef877956d264f91de5ea00faa3e9f4e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 16:32:26 compute-0 neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d[294092]: [NOTICE]   (294132) : New worker (294150) forked
Nov 25 16:32:26 compute-0 neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d[294092]: [NOTICE]   (294132) : Loading success.
Nov 25 16:32:26 compute-0 podman[294050]: 2025-11-25 16:32:26.204535029 +0000 UTC m=+0.128139122 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:32:26 compute-0 nova_compute[254092]: 2025-11-25 16:32:26.283 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088346.2824101, 800c66e3-ee9f-4766-92f2-ecda5671cde3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:32:26 compute-0 nova_compute[254092]: 2025-11-25 16:32:26.283 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] VM Started (Lifecycle Event)
Nov 25 16:32:26 compute-0 nova_compute[254092]: 2025-11-25 16:32:26.305 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:32:26 compute-0 nova_compute[254092]: 2025-11-25 16:32:26.308 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088346.282534, 800c66e3-ee9f-4766-92f2-ecda5671cde3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:32:26 compute-0 nova_compute[254092]: 2025-11-25 16:32:26.308 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] VM Paused (Lifecycle Event)
Nov 25 16:32:26 compute-0 nova_compute[254092]: 2025-11-25 16:32:26.326 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:32:26 compute-0 nova_compute[254092]: 2025-11-25 16:32:26.329 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:32:26 compute-0 nova_compute[254092]: 2025-11-25 16:32:26.347 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:32:26 compute-0 ceph-mon[74985]: pgmap v1350: 321 pgs: 321 active+clean; 134 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 5.4 MiB/s rd, 1.8 MiB/s wr, 285 op/s
Nov 25 16:32:26 compute-0 nova_compute[254092]: 2025-11-25 16:32:26.445 254096 DEBUG oslo_concurrency.lockutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Acquiring lock "a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:26 compute-0 nova_compute[254092]: 2025-11-25 16:32:26.445 254096 DEBUG oslo_concurrency.lockutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:26 compute-0 nova_compute[254092]: 2025-11-25 16:32:26.464 254096 DEBUG nova.compute.manager [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:32:26 compute-0 nova_compute[254092]: 2025-11-25 16:32:26.542 254096 DEBUG oslo_concurrency.lockutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:26 compute-0 nova_compute[254092]: 2025-11-25 16:32:26.543 254096 DEBUG oslo_concurrency.lockutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:26 compute-0 nova_compute[254092]: 2025-11-25 16:32:26.550 254096 DEBUG nova.virt.hardware [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:32:26 compute-0 nova_compute[254092]: 2025-11-25 16:32:26.550 254096 INFO nova.compute.claims [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:32:26 compute-0 nova_compute[254092]: 2025-11-25 16:32:26.555 254096 DEBUG nova.compute.manager [req-5c0fab0a-8e12-48a5-9d74-d91b96ab8580 req-2498c039-87d1-4d5f-aaa2-0f7697b3b477 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Received event network-vif-plugged-82f63517-7636-46bf-b4e1-ba191ddad018 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:32:26 compute-0 nova_compute[254092]: 2025-11-25 16:32:26.555 254096 DEBUG oslo_concurrency.lockutils [req-5c0fab0a-8e12-48a5-9d74-d91b96ab8580 req-2498c039-87d1-4d5f-aaa2-0f7697b3b477 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "800c66e3-ee9f-4766-92f2-ecda5671cde3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:26 compute-0 nova_compute[254092]: 2025-11-25 16:32:26.555 254096 DEBUG oslo_concurrency.lockutils [req-5c0fab0a-8e12-48a5-9d74-d91b96ab8580 req-2498c039-87d1-4d5f-aaa2-0f7697b3b477 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "800c66e3-ee9f-4766-92f2-ecda5671cde3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:26 compute-0 nova_compute[254092]: 2025-11-25 16:32:26.555 254096 DEBUG oslo_concurrency.lockutils [req-5c0fab0a-8e12-48a5-9d74-d91b96ab8580 req-2498c039-87d1-4d5f-aaa2-0f7697b3b477 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "800c66e3-ee9f-4766-92f2-ecda5671cde3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:26 compute-0 nova_compute[254092]: 2025-11-25 16:32:26.556 254096 DEBUG nova.compute.manager [req-5c0fab0a-8e12-48a5-9d74-d91b96ab8580 req-2498c039-87d1-4d5f-aaa2-0f7697b3b477 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Processing event network-vif-plugged-82f63517-7636-46bf-b4e1-ba191ddad018 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:32:26 compute-0 nova_compute[254092]: 2025-11-25 16:32:26.556 254096 DEBUG nova.compute.manager [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:32:26 compute-0 nova_compute[254092]: 2025-11-25 16:32:26.559 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088346.5594587, 800c66e3-ee9f-4766-92f2-ecda5671cde3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:32:26 compute-0 nova_compute[254092]: 2025-11-25 16:32:26.560 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] VM Resumed (Lifecycle Event)
Nov 25 16:32:26 compute-0 nova_compute[254092]: 2025-11-25 16:32:26.561 254096 DEBUG nova.virt.libvirt.driver [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:32:26 compute-0 nova_compute[254092]: 2025-11-25 16:32:26.565 254096 INFO nova.virt.libvirt.driver [-] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Instance spawned successfully.
Nov 25 16:32:26 compute-0 nova_compute[254092]: 2025-11-25 16:32:26.565 254096 DEBUG nova.virt.libvirt.driver [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:32:26 compute-0 nova_compute[254092]: 2025-11-25 16:32:26.582 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:32:26 compute-0 nova_compute[254092]: 2025-11-25 16:32:26.584 254096 DEBUG nova.virt.libvirt.driver [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:26 compute-0 nova_compute[254092]: 2025-11-25 16:32:26.584 254096 DEBUG nova.virt.libvirt.driver [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:26 compute-0 nova_compute[254092]: 2025-11-25 16:32:26.584 254096 DEBUG nova.virt.libvirt.driver [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:26 compute-0 nova_compute[254092]: 2025-11-25 16:32:26.585 254096 DEBUG nova.virt.libvirt.driver [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:26 compute-0 nova_compute[254092]: 2025-11-25 16:32:26.585 254096 DEBUG nova.virt.libvirt.driver [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:26 compute-0 nova_compute[254092]: 2025-11-25 16:32:26.586 254096 DEBUG nova.virt.libvirt.driver [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:26 compute-0 nova_compute[254092]: 2025-11-25 16:32:26.591 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:32:26 compute-0 nova_compute[254092]: 2025-11-25 16:32:26.616 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:32:26 compute-0 nova_compute[254092]: 2025-11-25 16:32:26.650 254096 INFO nova.compute.manager [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Took 8.29 seconds to spawn the instance on the hypervisor.
Nov 25 16:32:26 compute-0 nova_compute[254092]: 2025-11-25 16:32:26.650 254096 DEBUG nova.compute.manager [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:32:26 compute-0 nova_compute[254092]: 2025-11-25 16:32:26.683 254096 DEBUG oslo_concurrency.processutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:26 compute-0 nova_compute[254092]: 2025-11-25 16:32:26.762 254096 INFO nova.compute.manager [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Took 11.02 seconds to build instance.
Nov 25 16:32:26 compute-0 nova_compute[254092]: 2025-11-25 16:32:26.791 254096 DEBUG oslo_concurrency.lockutils [None req-1979797e-3e64-489b-849a-960f26acfd6a c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "800c66e3-ee9f-4766-92f2-ecda5671cde3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.407s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:26 compute-0 nova_compute[254092]: 2025-11-25 16:32:26.994 254096 DEBUG oslo_concurrency.lockutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "2724cb7d-6b8e-4861-ae3d-72e34da31fe5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:26 compute-0 nova_compute[254092]: 2025-11-25 16:32:26.994 254096 DEBUG oslo_concurrency.lockutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "2724cb7d-6b8e-4861-ae3d-72e34da31fe5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:27 compute-0 nova_compute[254092]: 2025-11-25 16:32:27.018 254096 DEBUG nova.compute.manager [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:32:27 compute-0 nova_compute[254092]: 2025-11-25 16:32:27.099 254096 DEBUG oslo_concurrency.lockutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:27 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:32:27 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/209657782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:32:27 compute-0 nova_compute[254092]: 2025-11-25 16:32:27.213 254096 DEBUG oslo_concurrency.processutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:27 compute-0 nova_compute[254092]: 2025-11-25 16:32:27.220 254096 DEBUG nova.compute.provider_tree [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:32:27 compute-0 nova_compute[254092]: 2025-11-25 16:32:27.239 254096 DEBUG nova.scheduler.client.report [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:32:27 compute-0 nova_compute[254092]: 2025-11-25 16:32:27.261 254096 DEBUG oslo_concurrency.lockutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:27 compute-0 nova_compute[254092]: 2025-11-25 16:32:27.262 254096 DEBUG nova.compute.manager [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:32:27 compute-0 nova_compute[254092]: 2025-11-25 16:32:27.264 254096 DEBUG oslo_concurrency.lockutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.165s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:27 compute-0 nova_compute[254092]: 2025-11-25 16:32:27.270 254096 DEBUG nova.virt.hardware [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:32:27 compute-0 nova_compute[254092]: 2025-11-25 16:32:27.270 254096 INFO nova.compute.claims [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:32:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1351: 321 pgs: 321 active+clean; 134 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 170 op/s
Nov 25 16:32:27 compute-0 nova_compute[254092]: 2025-11-25 16:32:27.310 254096 DEBUG oslo_concurrency.lockutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Acquiring lock "616ec95d-6c7d-420e-991d-3cbc11339768" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:27 compute-0 nova_compute[254092]: 2025-11-25 16:32:27.310 254096 DEBUG oslo_concurrency.lockutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lock "616ec95d-6c7d-420e-991d-3cbc11339768" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:27 compute-0 nova_compute[254092]: 2025-11-25 16:32:27.349 254096 DEBUG nova.compute.manager [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:32:27 compute-0 nova_compute[254092]: 2025-11-25 16:32:27.350 254096 DEBUG nova.network.neutron [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:32:27 compute-0 nova_compute[254092]: 2025-11-25 16:32:27.370 254096 DEBUG nova.compute.manager [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:32:27 compute-0 nova_compute[254092]: 2025-11-25 16:32:27.374 254096 INFO nova.virt.libvirt.driver [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:32:27 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/209657782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:32:27 compute-0 nova_compute[254092]: 2025-11-25 16:32:27.399 254096 DEBUG nova.compute.manager [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:32:27 compute-0 nova_compute[254092]: 2025-11-25 16:32:27.470 254096 DEBUG oslo_concurrency.lockutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:27 compute-0 nova_compute[254092]: 2025-11-25 16:32:27.488 254096 DEBUG nova.compute.manager [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:32:27 compute-0 nova_compute[254092]: 2025-11-25 16:32:27.489 254096 DEBUG nova.virt.libvirt.driver [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:32:27 compute-0 nova_compute[254092]: 2025-11-25 16:32:27.490 254096 INFO nova.virt.libvirt.driver [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Creating image(s)
Nov 25 16:32:27 compute-0 nova_compute[254092]: 2025-11-25 16:32:27.507 254096 DEBUG nova.storage.rbd_utils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] rbd image a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:27 compute-0 nova_compute[254092]: 2025-11-25 16:32:27.527 254096 DEBUG nova.storage.rbd_utils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] rbd image a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:27 compute-0 nova_compute[254092]: 2025-11-25 16:32:27.547 254096 DEBUG nova.storage.rbd_utils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] rbd image a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:27 compute-0 nova_compute[254092]: 2025-11-25 16:32:27.550 254096 DEBUG oslo_concurrency.processutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:27 compute-0 nova_compute[254092]: 2025-11-25 16:32:27.579 254096 DEBUG nova.policy [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a46b9493b027436fbd21d09ff5ac15e4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4ae7f32b97104afd930af5d5f5754532', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:32:27 compute-0 nova_compute[254092]: 2025-11-25 16:32:27.596 254096 DEBUG oslo_concurrency.processutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:27 compute-0 nova_compute[254092]: 2025-11-25 16:32:27.629 254096 DEBUG oslo_concurrency.processutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:27 compute-0 nova_compute[254092]: 2025-11-25 16:32:27.630 254096 DEBUG oslo_concurrency.lockutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:27 compute-0 nova_compute[254092]: 2025-11-25 16:32:27.631 254096 DEBUG oslo_concurrency.lockutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:27 compute-0 nova_compute[254092]: 2025-11-25 16:32:27.632 254096 DEBUG oslo_concurrency.lockutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:27 compute-0 nova_compute[254092]: 2025-11-25 16:32:27.657 254096 DEBUG nova.storage.rbd_utils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] rbd image a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:27 compute-0 nova_compute[254092]: 2025-11-25 16:32:27.664 254096 DEBUG oslo_concurrency.processutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:27 compute-0 nova_compute[254092]: 2025-11-25 16:32:27.688 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:27 compute-0 nova_compute[254092]: 2025-11-25 16:32:27.692 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088332.6485481, 36f65013-2906-4794-9e23-e92dc7814b6e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:32:27 compute-0 nova_compute[254092]: 2025-11-25 16:32:27.693 254096 INFO nova.compute.manager [-] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] VM Stopped (Lifecycle Event)
Nov 25 16:32:27 compute-0 nova_compute[254092]: 2025-11-25 16:32:27.715 254096 DEBUG nova.compute.manager [None req-360242e8-125a-447b-90d9-e7049a4b7e7c - - - - - -] [instance: 36f65013-2906-4794-9e23-e92dc7814b6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:32:27 compute-0 nova_compute[254092]: 2025-11-25 16:32:27.944 254096 DEBUG oslo_concurrency.processutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.280s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:28 compute-0 nova_compute[254092]: 2025-11-25 16:32:28.000 254096 DEBUG nova.storage.rbd_utils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] resizing rbd image a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:32:28 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:32:28 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2083808979' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:32:28 compute-0 nova_compute[254092]: 2025-11-25 16:32:28.059 254096 DEBUG oslo_concurrency.processutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:28 compute-0 nova_compute[254092]: 2025-11-25 16:32:28.065 254096 DEBUG nova.compute.provider_tree [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:32:28 compute-0 nova_compute[254092]: 2025-11-25 16:32:28.102 254096 DEBUG nova.scheduler.client.report [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:32:28 compute-0 nova_compute[254092]: 2025-11-25 16:32:28.115 254096 DEBUG nova.objects.instance [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lazy-loading 'migration_context' on Instance uuid a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:32:28 compute-0 nova_compute[254092]: 2025-11-25 16:32:28.139 254096 DEBUG oslo_concurrency.lockutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.875s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:28 compute-0 nova_compute[254092]: 2025-11-25 16:32:28.140 254096 DEBUG nova.compute.manager [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:32:28 compute-0 nova_compute[254092]: 2025-11-25 16:32:28.144 254096 DEBUG oslo_concurrency.lockutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:28 compute-0 nova_compute[254092]: 2025-11-25 16:32:28.145 254096 DEBUG nova.virt.libvirt.driver [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:32:28 compute-0 nova_compute[254092]: 2025-11-25 16:32:28.146 254096 DEBUG nova.virt.libvirt.driver [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Ensure instance console log exists: /var/lib/nova/instances/a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:32:28 compute-0 nova_compute[254092]: 2025-11-25 16:32:28.146 254096 DEBUG oslo_concurrency.lockutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:28 compute-0 nova_compute[254092]: 2025-11-25 16:32:28.147 254096 DEBUG oslo_concurrency.lockutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:28 compute-0 nova_compute[254092]: 2025-11-25 16:32:28.147 254096 DEBUG oslo_concurrency.lockutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:28 compute-0 nova_compute[254092]: 2025-11-25 16:32:28.155 254096 DEBUG nova.virt.hardware [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:32:28 compute-0 nova_compute[254092]: 2025-11-25 16:32:28.155 254096 INFO nova.compute.claims [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:32:28 compute-0 nova_compute[254092]: 2025-11-25 16:32:28.210 254096 DEBUG nova.compute.manager [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:32:28 compute-0 nova_compute[254092]: 2025-11-25 16:32:28.211 254096 DEBUG nova.network.neutron [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:32:28 compute-0 nova_compute[254092]: 2025-11-25 16:32:28.232 254096 INFO nova.virt.libvirt.driver [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:32:28 compute-0 nova_compute[254092]: 2025-11-25 16:32:28.248 254096 DEBUG nova.compute.manager [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:32:28 compute-0 nova_compute[254092]: 2025-11-25 16:32:28.334 254096 DEBUG nova.compute.manager [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:32:28 compute-0 nova_compute[254092]: 2025-11-25 16:32:28.335 254096 DEBUG nova.virt.libvirt.driver [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:32:28 compute-0 nova_compute[254092]: 2025-11-25 16:32:28.336 254096 INFO nova.virt.libvirt.driver [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Creating image(s)
Nov 25 16:32:28 compute-0 nova_compute[254092]: 2025-11-25 16:32:28.359 254096 DEBUG nova.storage.rbd_utils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] rbd image 2724cb7d-6b8e-4861-ae3d-72e34da31fe5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:28 compute-0 nova_compute[254092]: 2025-11-25 16:32:28.383 254096 DEBUG nova.storage.rbd_utils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] rbd image 2724cb7d-6b8e-4861-ae3d-72e34da31fe5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:28 compute-0 ceph-mon[74985]: pgmap v1351: 321 pgs: 321 active+clean; 134 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 170 op/s
Nov 25 16:32:28 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2083808979' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:32:28 compute-0 nova_compute[254092]: 2025-11-25 16:32:28.410 254096 DEBUG nova.storage.rbd_utils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] rbd image 2724cb7d-6b8e-4861-ae3d-72e34da31fe5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:28 compute-0 nova_compute[254092]: 2025-11-25 16:32:28.414 254096 DEBUG oslo_concurrency.processutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:28 compute-0 nova_compute[254092]: 2025-11-25 16:32:28.460 254096 DEBUG oslo_concurrency.processutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:28 compute-0 nova_compute[254092]: 2025-11-25 16:32:28.488 254096 DEBUG oslo_concurrency.processutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:28 compute-0 nova_compute[254092]: 2025-11-25 16:32:28.489 254096 DEBUG oslo_concurrency.lockutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:28 compute-0 nova_compute[254092]: 2025-11-25 16:32:28.490 254096 DEBUG oslo_concurrency.lockutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:28 compute-0 nova_compute[254092]: 2025-11-25 16:32:28.490 254096 DEBUG oslo_concurrency.lockutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:28 compute-0 nova_compute[254092]: 2025-11-25 16:32:28.511 254096 DEBUG nova.storage.rbd_utils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] rbd image 2724cb7d-6b8e-4861-ae3d-72e34da31fe5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:28 compute-0 nova_compute[254092]: 2025-11-25 16:32:28.515 254096 DEBUG oslo_concurrency.processutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 2724cb7d-6b8e-4861-ae3d-72e34da31fe5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:28 compute-0 nova_compute[254092]: 2025-11-25 16:32:28.663 254096 DEBUG nova.policy [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '87058665de814ae0a51a12ff02b0d9aa', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ed571eebde434695bae813d7bb21f4c3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:32:28 compute-0 nova_compute[254092]: 2025-11-25 16:32:28.718 254096 DEBUG nova.compute.manager [req-5a4dfe3a-5aab-45e7-a85d-7b8dcbc9def6 req-67da724e-e1d7-439e-a2f3-1e499ed57efd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Received event network-vif-plugged-82f63517-7636-46bf-b4e1-ba191ddad018 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:32:28 compute-0 nova_compute[254092]: 2025-11-25 16:32:28.718 254096 DEBUG oslo_concurrency.lockutils [req-5a4dfe3a-5aab-45e7-a85d-7b8dcbc9def6 req-67da724e-e1d7-439e-a2f3-1e499ed57efd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "800c66e3-ee9f-4766-92f2-ecda5671cde3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:28 compute-0 nova_compute[254092]: 2025-11-25 16:32:28.718 254096 DEBUG oslo_concurrency.lockutils [req-5a4dfe3a-5aab-45e7-a85d-7b8dcbc9def6 req-67da724e-e1d7-439e-a2f3-1e499ed57efd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "800c66e3-ee9f-4766-92f2-ecda5671cde3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:28 compute-0 nova_compute[254092]: 2025-11-25 16:32:28.718 254096 DEBUG oslo_concurrency.lockutils [req-5a4dfe3a-5aab-45e7-a85d-7b8dcbc9def6 req-67da724e-e1d7-439e-a2f3-1e499ed57efd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "800c66e3-ee9f-4766-92f2-ecda5671cde3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:28 compute-0 nova_compute[254092]: 2025-11-25 16:32:28.719 254096 DEBUG nova.compute.manager [req-5a4dfe3a-5aab-45e7-a85d-7b8dcbc9def6 req-67da724e-e1d7-439e-a2f3-1e499ed57efd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] No waiting events found dispatching network-vif-plugged-82f63517-7636-46bf-b4e1-ba191ddad018 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:32:28 compute-0 nova_compute[254092]: 2025-11-25 16:32:28.719 254096 WARNING nova.compute.manager [req-5a4dfe3a-5aab-45e7-a85d-7b8dcbc9def6 req-67da724e-e1d7-439e-a2f3-1e499ed57efd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Received unexpected event network-vif-plugged-82f63517-7636-46bf-b4e1-ba191ddad018 for instance with vm_state active and task_state None.
Nov 25 16:32:28 compute-0 nova_compute[254092]: 2025-11-25 16:32:28.776 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:28 compute-0 nova_compute[254092]: 2025-11-25 16:32:28.836 254096 DEBUG oslo_concurrency.processutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 2724cb7d-6b8e-4861-ae3d-72e34da31fe5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.321s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:28 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:32:28 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2688985666' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:32:28 compute-0 nova_compute[254092]: 2025-11-25 16:32:28.894 254096 DEBUG nova.storage.rbd_utils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] resizing rbd image 2724cb7d-6b8e-4861-ae3d-72e34da31fe5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:32:28 compute-0 nova_compute[254092]: 2025-11-25 16:32:28.924 254096 DEBUG oslo_concurrency.processutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:28 compute-0 nova_compute[254092]: 2025-11-25 16:32:28.932 254096 DEBUG nova.compute.provider_tree [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:32:28 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:32:28 compute-0 nova_compute[254092]: 2025-11-25 16:32:28.946 254096 DEBUG nova.scheduler.client.report [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:32:28 compute-0 nova_compute[254092]: 2025-11-25 16:32:28.965 254096 DEBUG oslo_concurrency.lockutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.821s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:28 compute-0 nova_compute[254092]: 2025-11-25 16:32:28.966 254096 DEBUG nova.compute.manager [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.002 254096 DEBUG nova.objects.instance [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lazy-loading 'migration_context' on Instance uuid 2724cb7d-6b8e-4861-ae3d-72e34da31fe5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.008 254096 DEBUG nova.compute.manager [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.017 254096 DEBUG nova.virt.libvirt.driver [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.018 254096 DEBUG nova.virt.libvirt.driver [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Ensure instance console log exists: /var/lib/nova/instances/2724cb7d-6b8e-4861-ae3d-72e34da31fe5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.018 254096 DEBUG oslo_concurrency.lockutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.018 254096 DEBUG oslo_concurrency.lockutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.019 254096 DEBUG oslo_concurrency.lockutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.022 254096 INFO nova.virt.libvirt.driver [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.036 254096 DEBUG nova.compute.manager [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.121 254096 DEBUG nova.compute.manager [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.122 254096 DEBUG nova.virt.libvirt.driver [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.122 254096 INFO nova.virt.libvirt.driver [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Creating image(s)
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.146 254096 DEBUG nova.storage.rbd_utils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] rbd image 616ec95d-6c7d-420e-991d-3cbc11339768_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.171 254096 DEBUG nova.storage.rbd_utils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] rbd image 616ec95d-6c7d-420e-991d-3cbc11339768_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.198 254096 DEBUG nova.storage.rbd_utils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] rbd image 616ec95d-6c7d-420e-991d-3cbc11339768_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.201 254096 DEBUG oslo_concurrency.processutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.295 254096 DEBUG oslo_concurrency.processutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.296 254096 DEBUG oslo_concurrency.lockutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.297 254096 DEBUG oslo_concurrency.lockutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.297 254096 DEBUG oslo_concurrency.lockutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1352: 321 pgs: 321 active+clean; 134 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 136 op/s
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.320 254096 DEBUG nova.storage.rbd_utils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] rbd image 616ec95d-6c7d-420e-991d-3cbc11339768_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.324 254096 DEBUG oslo_concurrency.processutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 616ec95d-6c7d-420e-991d-3cbc11339768_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:29 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2688985666' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.419 254096 DEBUG nova.network.neutron [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Successfully created port: 5cf8fe87-4cac-403f-8611-0ddb37516abd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.598 254096 DEBUG oslo_concurrency.processutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 616ec95d-6c7d-420e-991d-3cbc11339768_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.275s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.673 254096 DEBUG nova.storage.rbd_utils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] resizing rbd image 616ec95d-6c7d-420e-991d-3cbc11339768_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.785 254096 DEBUG nova.objects.instance [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lazy-loading 'migration_context' on Instance uuid 616ec95d-6c7d-420e-991d-3cbc11339768 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.806 254096 DEBUG nova.virt.libvirt.driver [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.807 254096 DEBUG nova.virt.libvirt.driver [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Ensure instance console log exists: /var/lib/nova/instances/616ec95d-6c7d-420e-991d-3cbc11339768/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.808 254096 DEBUG oslo_concurrency.lockutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.808 254096 DEBUG oslo_concurrency.lockutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.808 254096 DEBUG oslo_concurrency.lockutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.810 254096 DEBUG nova.virt.libvirt.driver [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.814 254096 WARNING nova.virt.libvirt.driver [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.821 254096 DEBUG nova.virt.libvirt.host [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.821 254096 DEBUG nova.virt.libvirt.host [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.824 254096 DEBUG nova.virt.libvirt.host [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.825 254096 DEBUG nova.virt.libvirt.host [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.825 254096 DEBUG nova.virt.libvirt.driver [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.826 254096 DEBUG nova.virt.hardware [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.826 254096 DEBUG nova.virt.hardware [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.826 254096 DEBUG nova.virt.hardware [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.827 254096 DEBUG nova.virt.hardware [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.827 254096 DEBUG nova.virt.hardware [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.827 254096 DEBUG nova.virt.hardware [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.827 254096 DEBUG nova.virt.hardware [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.828 254096 DEBUG nova.virt.hardware [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.828 254096 DEBUG nova.virt.hardware [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.828 254096 DEBUG nova.virt.hardware [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.828 254096 DEBUG nova.virt.hardware [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:32:29 compute-0 nova_compute[254092]: 2025-11-25 16:32:29.831 254096 DEBUG oslo_concurrency.processutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:30 compute-0 nova_compute[254092]: 2025-11-25 16:32:30.039 254096 DEBUG nova.compute.manager [req-13fc4597-0394-485a-839b-3a07af3e3704 req-b6f6523b-a37b-4da6-98f0-7114e57606c4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Received event network-changed-82f63517-7636-46bf-b4e1-ba191ddad018 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:32:30 compute-0 nova_compute[254092]: 2025-11-25 16:32:30.039 254096 DEBUG nova.compute.manager [req-13fc4597-0394-485a-839b-3a07af3e3704 req-b6f6523b-a37b-4da6-98f0-7114e57606c4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Refreshing instance network info cache due to event network-changed-82f63517-7636-46bf-b4e1-ba191ddad018. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:32:30 compute-0 nova_compute[254092]: 2025-11-25 16:32:30.040 254096 DEBUG oslo_concurrency.lockutils [req-13fc4597-0394-485a-839b-3a07af3e3704 req-b6f6523b-a37b-4da6-98f0-7114e57606c4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-800c66e3-ee9f-4766-92f2-ecda5671cde3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:32:30 compute-0 nova_compute[254092]: 2025-11-25 16:32:30.040 254096 DEBUG oslo_concurrency.lockutils [req-13fc4597-0394-485a-839b-3a07af3e3704 req-b6f6523b-a37b-4da6-98f0-7114e57606c4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-800c66e3-ee9f-4766-92f2-ecda5671cde3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:32:30 compute-0 nova_compute[254092]: 2025-11-25 16:32:30.040 254096 DEBUG nova.network.neutron [req-13fc4597-0394-485a-839b-3a07af3e3704 req-b6f6523b-a37b-4da6-98f0-7114e57606c4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Refreshing network info cache for port 82f63517-7636-46bf-b4e1-ba191ddad018 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:32:30 compute-0 nova_compute[254092]: 2025-11-25 16:32:30.187 254096 DEBUG nova.network.neutron [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Successfully created port: 2a798aec-112b-42d0-9128-639b456b201e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:32:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:32:30 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/58006728' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:32:30 compute-0 nova_compute[254092]: 2025-11-25 16:32:30.287 254096 DEBUG oslo_concurrency.processutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:30 compute-0 nova_compute[254092]: 2025-11-25 16:32:30.311 254096 DEBUG nova.storage.rbd_utils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] rbd image 616ec95d-6c7d-420e-991d-3cbc11339768_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:30 compute-0 nova_compute[254092]: 2025-11-25 16:32:30.317 254096 DEBUG oslo_concurrency.processutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:30 compute-0 ceph-mon[74985]: pgmap v1352: 321 pgs: 321 active+clean; 134 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 136 op/s
Nov 25 16:32:30 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/58006728' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:32:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:32:30 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/480326742' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:32:30 compute-0 nova_compute[254092]: 2025-11-25 16:32:30.735 254096 DEBUG oslo_concurrency.processutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:30 compute-0 nova_compute[254092]: 2025-11-25 16:32:30.738 254096 DEBUG nova.objects.instance [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 616ec95d-6c7d-420e-991d-3cbc11339768 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:32:30 compute-0 nova_compute[254092]: 2025-11-25 16:32:30.761 254096 DEBUG nova.virt.libvirt.driver [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:32:30 compute-0 nova_compute[254092]:   <uuid>616ec95d-6c7d-420e-991d-3cbc11339768</uuid>
Nov 25 16:32:30 compute-0 nova_compute[254092]:   <name>instance-00000023</name>
Nov 25 16:32:30 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:32:30 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:32:30 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:32:30 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:       <nova:name>tempest-ServerShowV257Test-server-2017219999</nova:name>
Nov 25 16:32:30 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:32:29</nova:creationTime>
Nov 25 16:32:30 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:32:30 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:32:30 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:32:30 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:32:30 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:32:30 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:32:30 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:32:30 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:32:30 compute-0 nova_compute[254092]:         <nova:user uuid="204d6790ef4644f6a11d8afd611b7f8d">tempest-ServerShowV257Test-1749590920-project-member</nova:user>
Nov 25 16:32:30 compute-0 nova_compute[254092]:         <nova:project uuid="f7ec8b6f4599458ebb55ba5d9a7463c3">tempest-ServerShowV257Test-1749590920</nova:project>
Nov 25 16:32:30 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:32:30 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:       <nova:ports/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:32:30 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:32:30 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:32:30 compute-0 nova_compute[254092]:     <system>
Nov 25 16:32:30 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:32:30 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:32:30 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:32:30 compute-0 nova_compute[254092]:       <entry name="serial">616ec95d-6c7d-420e-991d-3cbc11339768</entry>
Nov 25 16:32:30 compute-0 nova_compute[254092]:       <entry name="uuid">616ec95d-6c7d-420e-991d-3cbc11339768</entry>
Nov 25 16:32:30 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     </system>
Nov 25 16:32:30 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:32:30 compute-0 nova_compute[254092]:   <os>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:   </os>
Nov 25 16:32:30 compute-0 nova_compute[254092]:   <features>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:   </features>
Nov 25 16:32:30 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:32:30 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:32:30 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:32:30 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:32:30 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:32:30 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/616ec95d-6c7d-420e-991d-3cbc11339768_disk">
Nov 25 16:32:30 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:       </source>
Nov 25 16:32:30 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:32:30 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:32:30 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:32:30 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/616ec95d-6c7d-420e-991d-3cbc11339768_disk.config">
Nov 25 16:32:30 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:       </source>
Nov 25 16:32:30 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:32:30 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:32:30 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:32:30 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/616ec95d-6c7d-420e-991d-3cbc11339768/console.log" append="off"/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     <video>
Nov 25 16:32:30 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     </video>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:32:30 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:32:30 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:32:30 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:32:30 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:32:30 compute-0 nova_compute[254092]: </domain>
Nov 25 16:32:30 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:32:30 compute-0 nova_compute[254092]: 2025-11-25 16:32:30.832 254096 DEBUG nova.virt.libvirt.driver [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:32:30 compute-0 nova_compute[254092]: 2025-11-25 16:32:30.833 254096 DEBUG nova.virt.libvirt.driver [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:32:30 compute-0 nova_compute[254092]: 2025-11-25 16:32:30.833 254096 INFO nova.virt.libvirt.driver [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Using config drive
Nov 25 16:32:30 compute-0 nova_compute[254092]: 2025-11-25 16:32:30.860 254096 DEBUG nova.storage.rbd_utils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] rbd image 616ec95d-6c7d-420e-991d-3cbc11339768_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:31 compute-0 nova_compute[254092]: 2025-11-25 16:32:31.048 254096 INFO nova.virt.libvirt.driver [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Creating config drive at /var/lib/nova/instances/616ec95d-6c7d-420e-991d-3cbc11339768/disk.config
Nov 25 16:32:31 compute-0 nova_compute[254092]: 2025-11-25 16:32:31.053 254096 DEBUG oslo_concurrency.processutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/616ec95d-6c7d-420e-991d-3cbc11339768/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpesy6ubfe execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:31 compute-0 nova_compute[254092]: 2025-11-25 16:32:31.186 254096 DEBUG oslo_concurrency.processutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/616ec95d-6c7d-420e-991d-3cbc11339768/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpesy6ubfe" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:31 compute-0 nova_compute[254092]: 2025-11-25 16:32:31.209 254096 DEBUG nova.storage.rbd_utils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] rbd image 616ec95d-6c7d-420e-991d-3cbc11339768_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:31 compute-0 nova_compute[254092]: 2025-11-25 16:32:31.213 254096 DEBUG oslo_concurrency.processutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/616ec95d-6c7d-420e-991d-3cbc11339768/disk.config 616ec95d-6c7d-420e-991d-3cbc11339768_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:31 compute-0 nova_compute[254092]: 2025-11-25 16:32:31.256 254096 DEBUG nova.network.neutron [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Successfully updated port: 5cf8fe87-4cac-403f-8611-0ddb37516abd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:32:31 compute-0 nova_compute[254092]: 2025-11-25 16:32:31.274 254096 DEBUG oslo_concurrency.lockutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Acquiring lock "refresh_cache-a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:32:31 compute-0 nova_compute[254092]: 2025-11-25 16:32:31.275 254096 DEBUG oslo_concurrency.lockutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Acquired lock "refresh_cache-a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:32:31 compute-0 nova_compute[254092]: 2025-11-25 16:32:31.275 254096 DEBUG nova.network.neutron [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:32:31 compute-0 nova_compute[254092]: 2025-11-25 16:32:31.310 254096 DEBUG nova.network.neutron [req-13fc4597-0394-485a-839b-3a07af3e3704 req-b6f6523b-a37b-4da6-98f0-7114e57606c4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Updated VIF entry in instance network info cache for port 82f63517-7636-46bf-b4e1-ba191ddad018. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:32:31 compute-0 nova_compute[254092]: 2025-11-25 16:32:31.311 254096 DEBUG nova.network.neutron [req-13fc4597-0394-485a-839b-3a07af3e3704 req-b6f6523b-a37b-4da6-98f0-7114e57606c4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Updating instance_info_cache with network_info: [{"id": "82f63517-7636-46bf-b4e1-ba191ddad018", "address": "fa:16:3e:f0:7b:ca", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82f63517-76", "ovs_interfaceid": "82f63517-7636-46bf-b4e1-ba191ddad018", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:32:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1353: 321 pgs: 321 active+clean; 276 MiB data, 455 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 7.5 MiB/s wr, 294 op/s
Nov 25 16:32:31 compute-0 nova_compute[254092]: 2025-11-25 16:32:31.334 254096 DEBUG oslo_concurrency.lockutils [req-13fc4597-0394-485a-839b-3a07af3e3704 req-b6f6523b-a37b-4da6-98f0-7114e57606c4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-800c66e3-ee9f-4766-92f2-ecda5671cde3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:32:31 compute-0 nova_compute[254092]: 2025-11-25 16:32:31.347 254096 DEBUG oslo_concurrency.processutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/616ec95d-6c7d-420e-991d-3cbc11339768/disk.config 616ec95d-6c7d-420e-991d-3cbc11339768_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:31 compute-0 nova_compute[254092]: 2025-11-25 16:32:31.347 254096 INFO nova.virt.libvirt.driver [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Deleting local config drive /var/lib/nova/instances/616ec95d-6c7d-420e-991d-3cbc11339768/disk.config because it was imported into RBD.
Nov 25 16:32:31 compute-0 ovn_controller[153477]: 2025-11-25T16:32:31Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3d:bb:1b 10.100.0.4
Nov 25 16:32:31 compute-0 ovn_controller[153477]: 2025-11-25T16:32:31Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3d:bb:1b 10.100.0.4
Nov 25 16:32:31 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/480326742' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:32:31 compute-0 systemd-machined[216343]: New machine qemu-38-instance-00000023.
Nov 25 16:32:31 compute-0 nova_compute[254092]: 2025-11-25 16:32:31.412 254096 DEBUG nova.network.neutron [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:32:31 compute-0 systemd[1]: Started Virtual Machine qemu-38-instance-00000023.
Nov 25 16:32:31 compute-0 nova_compute[254092]: 2025-11-25 16:32:31.926 254096 DEBUG nova.network.neutron [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Successfully updated port: 2a798aec-112b-42d0-9128-639b456b201e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:32:31 compute-0 nova_compute[254092]: 2025-11-25 16:32:31.947 254096 DEBUG oslo_concurrency.lockutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "refresh_cache-2724cb7d-6b8e-4861-ae3d-72e34da31fe5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:32:31 compute-0 nova_compute[254092]: 2025-11-25 16:32:31.947 254096 DEBUG oslo_concurrency.lockutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquired lock "refresh_cache-2724cb7d-6b8e-4861-ae3d-72e34da31fe5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:32:31 compute-0 nova_compute[254092]: 2025-11-25 16:32:31.948 254096 DEBUG nova.network.neutron [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:32:31 compute-0 nova_compute[254092]: 2025-11-25 16:32:31.971 254096 DEBUG nova.compute.manager [req-539cb2d2-4f2a-4989-acde-5292f0b5f5b1 req-4280d832-8ad6-4d9d-b1f9-4c430b67b15e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Received event network-changed-5cf8fe87-4cac-403f-8611-0ddb37516abd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:32:31 compute-0 nova_compute[254092]: 2025-11-25 16:32:31.972 254096 DEBUG nova.compute.manager [req-539cb2d2-4f2a-4989-acde-5292f0b5f5b1 req-4280d832-8ad6-4d9d-b1f9-4c430b67b15e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Refreshing instance network info cache due to event network-changed-5cf8fe87-4cac-403f-8611-0ddb37516abd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:32:31 compute-0 nova_compute[254092]: 2025-11-25 16:32:31.972 254096 DEBUG oslo_concurrency.lockutils [req-539cb2d2-4f2a-4989-acde-5292f0b5f5b1 req-4280d832-8ad6-4d9d-b1f9-4c430b67b15e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.150 254096 DEBUG nova.network.neutron [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.218 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088352.2181742, 616ec95d-6c7d-420e-991d-3cbc11339768 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.219 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] VM Resumed (Lifecycle Event)
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.221 254096 DEBUG nova.compute.manager [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.221 254096 DEBUG nova.virt.libvirt.driver [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.233 254096 INFO nova.virt.libvirt.driver [-] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Instance spawned successfully.
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.233 254096 DEBUG nova.virt.libvirt.driver [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.248 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.253 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.258 254096 DEBUG nova.virt.libvirt.driver [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.259 254096 DEBUG nova.virt.libvirt.driver [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.259 254096 DEBUG nova.virt.libvirt.driver [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.260 254096 DEBUG nova.virt.libvirt.driver [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.260 254096 DEBUG nova.virt.libvirt.driver [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.261 254096 DEBUG nova.virt.libvirt.driver [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.284 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.284 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088352.2182398, 616ec95d-6c7d-420e-991d-3cbc11339768 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.285 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] VM Started (Lifecycle Event)
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.309 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.312 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.331 254096 INFO nova.compute.manager [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Took 3.21 seconds to spawn the instance on the hypervisor.
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.332 254096 DEBUG nova.compute.manager [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.342 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.405 254096 INFO nova.compute.manager [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Took 4.97 seconds to build instance.
Nov 25 16:32:32 compute-0 ceph-mon[74985]: pgmap v1353: 321 pgs: 321 active+clean; 276 MiB data, 455 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 7.5 MiB/s wr, 294 op/s
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.424 254096 DEBUG oslo_concurrency.lockutils [None req-5368cd25-077a-48fc-a53f-14b2113e29b9 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lock "616ec95d-6c7d-420e-991d-3cbc11339768" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.113s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.560 254096 DEBUG oslo_concurrency.lockutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "1d318e56-4a8c-4806-aa87-e837708f2a1f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.561 254096 DEBUG oslo_concurrency.lockutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "1d318e56-4a8c-4806-aa87-e837708f2a1f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.578 254096 DEBUG nova.compute.manager [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.632 254096 DEBUG oslo_concurrency.lockutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.632 254096 DEBUG oslo_concurrency.lockutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.638 254096 DEBUG nova.virt.hardware [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.638 254096 INFO nova.compute.claims [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.641 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.728 254096 DEBUG nova.network.neutron [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Updating instance_info_cache with network_info: [{"id": "5cf8fe87-4cac-403f-8611-0ddb37516abd", "address": "fa:16:3e:8c:8a:c5", "network": {"id": "7e8b56bc-492b-4082-b8de-60d496652da7", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2034568195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ae7f32b97104afd930af5d5f5754532", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cf8fe87-4c", "ovs_interfaceid": "5cf8fe87-4cac-403f-8611-0ddb37516abd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.757 254096 DEBUG oslo_concurrency.lockutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Releasing lock "refresh_cache-a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.757 254096 DEBUG nova.compute.manager [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Instance network_info: |[{"id": "5cf8fe87-4cac-403f-8611-0ddb37516abd", "address": "fa:16:3e:8c:8a:c5", "network": {"id": "7e8b56bc-492b-4082-b8de-60d496652da7", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2034568195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ae7f32b97104afd930af5d5f5754532", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cf8fe87-4c", "ovs_interfaceid": "5cf8fe87-4cac-403f-8611-0ddb37516abd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.758 254096 DEBUG oslo_concurrency.lockutils [req-539cb2d2-4f2a-4989-acde-5292f0b5f5b1 req-4280d832-8ad6-4d9d-b1f9-4c430b67b15e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.758 254096 DEBUG nova.network.neutron [req-539cb2d2-4f2a-4989-acde-5292f0b5f5b1 req-4280d832-8ad6-4d9d-b1f9-4c430b67b15e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Refreshing network info cache for port 5cf8fe87-4cac-403f-8611-0ddb37516abd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.761 254096 DEBUG nova.virt.libvirt.driver [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Start _get_guest_xml network_info=[{"id": "5cf8fe87-4cac-403f-8611-0ddb37516abd", "address": "fa:16:3e:8c:8a:c5", "network": {"id": "7e8b56bc-492b-4082-b8de-60d496652da7", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2034568195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ae7f32b97104afd930af5d5f5754532", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cf8fe87-4c", "ovs_interfaceid": "5cf8fe87-4cac-403f-8611-0ddb37516abd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.767 254096 WARNING nova.virt.libvirt.driver [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.772 254096 DEBUG nova.virt.libvirt.host [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.774 254096 DEBUG nova.virt.libvirt.host [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.786 254096 DEBUG nova.virt.libvirt.host [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.787 254096 DEBUG nova.virt.libvirt.host [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.787 254096 DEBUG nova.virt.libvirt.driver [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.788 254096 DEBUG nova.virt.hardware [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.789 254096 DEBUG nova.virt.hardware [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.789 254096 DEBUG nova.virt.hardware [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.789 254096 DEBUG nova.virt.hardware [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.790 254096 DEBUG nova.virt.hardware [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.790 254096 DEBUG nova.virt.hardware [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.791 254096 DEBUG nova.virt.hardware [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.791 254096 DEBUG nova.virt.hardware [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.792 254096 DEBUG nova.virt.hardware [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.792 254096 DEBUG nova.virt.hardware [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.792 254096 DEBUG nova.virt.hardware [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.796 254096 DEBUG oslo_concurrency.processutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:32 compute-0 nova_compute[254092]: 2025-11-25 16:32:32.862 254096 DEBUG oslo_concurrency.processutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:33 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:32:33 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/285403488' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.239 254096 DEBUG oslo_concurrency.processutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.266 254096 DEBUG nova.storage.rbd_utils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] rbd image a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.274 254096 DEBUG oslo_concurrency.processutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1354: 321 pgs: 321 active+clean; 276 MiB data, 455 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 5.7 MiB/s wr, 210 op/s
Nov 25 16:32:33 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:32:33 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/814882808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.342 254096 DEBUG oslo_concurrency.processutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.349 254096 DEBUG nova.compute.provider_tree [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.365 254096 DEBUG nova.scheduler.client.report [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.386 254096 DEBUG oslo_concurrency.lockutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.387 254096 DEBUG nova.compute.manager [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.421 254096 DEBUG nova.network.neutron [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Updating instance_info_cache with network_info: [{"id": "2a798aec-112b-42d0-9128-639b456b201e", "address": "fa:16:3e:af:3e:5a", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a798aec-11", "ovs_interfaceid": "2a798aec-112b-42d0-9128-639b456b201e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:32:33 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/285403488' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:32:33 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/814882808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:32:33 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Nov 25 16:32:33 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:32:33.430482) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 16:32:33 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Nov 25 16:32:33 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088353430519, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2120, "num_deletes": 254, "total_data_size": 3156663, "memory_usage": 3222672, "flush_reason": "Manual Compaction"}
Nov 25 16:32:33 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.445 254096 DEBUG nova.compute.manager [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.446 254096 DEBUG nova.network.neutron [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:32:33 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088353452306, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3099041, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25965, "largest_seqno": 28084, "table_properties": {"data_size": 3089769, "index_size": 5702, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20240, "raw_average_key_size": 20, "raw_value_size": 3070673, "raw_average_value_size": 3107, "num_data_blocks": 251, "num_entries": 988, "num_filter_entries": 988, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764088156, "oldest_key_time": 1764088156, "file_creation_time": 1764088353, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:32:33 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 21872 microseconds, and 8525 cpu microseconds.
Nov 25 16:32:33 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.452 254096 DEBUG oslo_concurrency.lockutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Releasing lock "refresh_cache-2724cb7d-6b8e-4861-ae3d-72e34da31fe5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.453 254096 DEBUG nova.compute.manager [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Instance network_info: |[{"id": "2a798aec-112b-42d0-9128-639b456b201e", "address": "fa:16:3e:af:3e:5a", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a798aec-11", "ovs_interfaceid": "2a798aec-112b-42d0-9128-639b456b201e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:32:33 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:32:33.452351) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3099041 bytes OK
Nov 25 16:32:33 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:32:33.452374) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Nov 25 16:32:33 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:32:33.454077) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Nov 25 16:32:33 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:32:33.454112) EVENT_LOG_v1 {"time_micros": 1764088353454104, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 16:32:33 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:32:33.454136) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 16:32:33 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3147689, prev total WAL file size 3147689, number of live WAL files 2.
Nov 25 16:32:33 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:32:33 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:32:33.455378) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Nov 25 16:32:33 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 16:32:33 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3026KB)], [59(7299KB)]
Nov 25 16:32:33 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088353455423, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 10573254, "oldest_snapshot_seqno": -1}
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.458 254096 DEBUG nova.virt.libvirt.driver [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Start _get_guest_xml network_info=[{"id": "2a798aec-112b-42d0-9128-639b456b201e", "address": "fa:16:3e:af:3e:5a", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a798aec-11", "ovs_interfaceid": "2a798aec-112b-42d0-9128-639b456b201e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.466 254096 INFO nova.virt.libvirt.driver [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.482 254096 WARNING nova.virt.libvirt.driver [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.491 254096 DEBUG nova.virt.libvirt.host [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.493 254096 DEBUG nova.virt.libvirt.host [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.497 254096 DEBUG nova.virt.libvirt.host [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.498 254096 DEBUG nova.virt.libvirt.host [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.498 254096 DEBUG nova.virt.libvirt.driver [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.499 254096 DEBUG nova.virt.hardware [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.500 254096 DEBUG nova.virt.hardware [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.500 254096 DEBUG nova.virt.hardware [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.500 254096 DEBUG nova.virt.hardware [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.501 254096 DEBUG nova.virt.hardware [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.501 254096 DEBUG nova.virt.hardware [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.502 254096 DEBUG nova.virt.hardware [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.502 254096 DEBUG nova.virt.hardware [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.502 254096 DEBUG nova.virt.hardware [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.503 254096 DEBUG nova.virt.hardware [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.503 254096 DEBUG nova.virt.hardware [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:32:33 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5359 keys, 8850373 bytes, temperature: kUnknown
Nov 25 16:32:33 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088353505949, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 8850373, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8813246, "index_size": 22617, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13445, "raw_key_size": 133641, "raw_average_key_size": 24, "raw_value_size": 8715465, "raw_average_value_size": 1626, "num_data_blocks": 929, "num_entries": 5359, "num_filter_entries": 5359, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764088353, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:32:33 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:32:33 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:32:33.506130) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 8850373 bytes
Nov 25 16:32:33 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:32:33.507059) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 209.1 rd, 175.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 7.1 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(6.3) write-amplify(2.9) OK, records in: 5882, records dropped: 523 output_compression: NoCompression
Nov 25 16:32:33 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:32:33.507073) EVENT_LOG_v1 {"time_micros": 1764088353507066, "job": 32, "event": "compaction_finished", "compaction_time_micros": 50576, "compaction_time_cpu_micros": 22840, "output_level": 6, "num_output_files": 1, "total_output_size": 8850373, "num_input_records": 5882, "num_output_records": 5359, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 16:32:33 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:32:33 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088353507565, "job": 32, "event": "table_file_deletion", "file_number": 61}
Nov 25 16:32:33 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:32:33 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088353508753, "job": 32, "event": "table_file_deletion", "file_number": 59}
Nov 25 16:32:33 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:32:33.455293) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:32:33 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:32:33.508790) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:32:33 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:32:33.508794) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:32:33 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:32:33.508795) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:32:33 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:32:33.508796) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:32:33 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:32:33.508798) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.508 254096 DEBUG oslo_concurrency.processutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.541 254096 DEBUG nova.compute.manager [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.661 254096 DEBUG nova.compute.manager [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.663 254096 DEBUG nova.virt.libvirt.driver [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.664 254096 INFO nova.virt.libvirt.driver [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Creating image(s)
Nov 25 16:32:33 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:32:33 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1134914171' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.698 254096 DEBUG nova.storage.rbd_utils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image 1d318e56-4a8c-4806-aa87-e837708f2a1f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.720 254096 DEBUG nova.storage.rbd_utils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image 1d318e56-4a8c-4806-aa87-e837708f2a1f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.742 254096 DEBUG nova.storage.rbd_utils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image 1d318e56-4a8c-4806-aa87-e837708f2a1f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.751 254096 DEBUG oslo_concurrency.processutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.787 254096 DEBUG nova.policy [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c2f73d928f424db49a62e7dff2b1ce14', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '36fd407edf16424e96041f57fccb9e2b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.790 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.795 254096 DEBUG oslo_concurrency.processutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.797 254096 DEBUG nova.virt.libvirt.vif [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:32:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1769139174',display_name='tempest-FloatingIPsAssociationTestJSON-server-1769139174',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1769139174',id=33,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4ae7f32b97104afd930af5d5f5754532',ramdisk_id='',reservation_id='r-f0yptx40',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-993856073',owner_user_n
ame='tempest-FloatingIPsAssociationTestJSON-993856073-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:32:27Z,user_data=None,user_id='a46b9493b027436fbd21d09ff5ac15e4',uuid=a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5cf8fe87-4cac-403f-8611-0ddb37516abd", "address": "fa:16:3e:8c:8a:c5", "network": {"id": "7e8b56bc-492b-4082-b8de-60d496652da7", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2034568195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ae7f32b97104afd930af5d5f5754532", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cf8fe87-4c", "ovs_interfaceid": "5cf8fe87-4cac-403f-8611-0ddb37516abd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.797 254096 DEBUG nova.network.os_vif_util [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Converting VIF {"id": "5cf8fe87-4cac-403f-8611-0ddb37516abd", "address": "fa:16:3e:8c:8a:c5", "network": {"id": "7e8b56bc-492b-4082-b8de-60d496652da7", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2034568195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ae7f32b97104afd930af5d5f5754532", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cf8fe87-4c", "ovs_interfaceid": "5cf8fe87-4cac-403f-8611-0ddb37516abd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.798 254096 DEBUG nova.network.os_vif_util [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:8a:c5,bridge_name='br-int',has_traffic_filtering=True,id=5cf8fe87-4cac-403f-8611-0ddb37516abd,network=Network(7e8b56bc-492b-4082-b8de-60d496652da7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cf8fe87-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.799 254096 DEBUG nova.objects.instance [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lazy-loading 'pci_devices' on Instance uuid a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.816 254096 DEBUG nova.virt.libvirt.driver [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:32:33 compute-0 nova_compute[254092]:   <uuid>a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc</uuid>
Nov 25 16:32:33 compute-0 nova_compute[254092]:   <name>instance-00000021</name>
Nov 25 16:32:33 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:32:33 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:32:33 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:32:33 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:       <nova:name>tempest-FloatingIPsAssociationTestJSON-server-1769139174</nova:name>
Nov 25 16:32:33 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:32:32</nova:creationTime>
Nov 25 16:32:33 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:32:33 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:32:33 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:32:33 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:32:33 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:32:33 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:32:33 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:32:33 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:32:33 compute-0 nova_compute[254092]:         <nova:user uuid="a46b9493b027436fbd21d09ff5ac15e4">tempest-FloatingIPsAssociationTestJSON-993856073-project-member</nova:user>
Nov 25 16:32:33 compute-0 nova_compute[254092]:         <nova:project uuid="4ae7f32b97104afd930af5d5f5754532">tempest-FloatingIPsAssociationTestJSON-993856073</nova:project>
Nov 25 16:32:33 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:32:33 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:32:33 compute-0 nova_compute[254092]:         <nova:port uuid="5cf8fe87-4cac-403f-8611-0ddb37516abd">
Nov 25 16:32:33 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:32:33 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:32:33 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:32:33 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:32:33 compute-0 nova_compute[254092]:     <system>
Nov 25 16:32:33 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:32:33 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:32:33 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:32:33 compute-0 nova_compute[254092]:       <entry name="serial">a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc</entry>
Nov 25 16:32:33 compute-0 nova_compute[254092]:       <entry name="uuid">a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc</entry>
Nov 25 16:32:33 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     </system>
Nov 25 16:32:33 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:32:33 compute-0 nova_compute[254092]:   <os>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:   </os>
Nov 25 16:32:33 compute-0 nova_compute[254092]:   <features>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:   </features>
Nov 25 16:32:33 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:32:33 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:32:33 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:32:33 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:32:33 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:32:33 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc_disk">
Nov 25 16:32:33 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:       </source>
Nov 25 16:32:33 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:32:33 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:32:33 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:32:33 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc_disk.config">
Nov 25 16:32:33 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:       </source>
Nov 25 16:32:33 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:32:33 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:32:33 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:32:33 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:8c:8a:c5"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:       <target dev="tap5cf8fe87-4c"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:32:33 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc/console.log" append="off"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     <video>
Nov 25 16:32:33 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     </video>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:32:33 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:32:33 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:32:33 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:32:33 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:32:33 compute-0 nova_compute[254092]: </domain>
Nov 25 16:32:33 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.817 254096 DEBUG nova.compute.manager [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Preparing to wait for external event network-vif-plugged-5cf8fe87-4cac-403f-8611-0ddb37516abd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.817 254096 DEBUG oslo_concurrency.lockutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Acquiring lock "a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.818 254096 DEBUG oslo_concurrency.lockutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.818 254096 DEBUG oslo_concurrency.lockutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.818 254096 DEBUG nova.virt.libvirt.vif [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:32:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1769139174',display_name='tempest-FloatingIPsAssociationTestJSON-server-1769139174',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1769139174',id=33,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4ae7f32b97104afd930af5d5f5754532',ramdisk_id='',reservation_id='r-f0yptx40',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-993856073',owner_user_name='tempest-FloatingIPsAssociationTestJSON-993856073-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:32:27Z,user_data=None,user_id='a46b9493b027436fbd21d09ff5ac15e4',uuid=a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5cf8fe87-4cac-403f-8611-0ddb37516abd", "address": "fa:16:3e:8c:8a:c5", "network": {"id": "7e8b56bc-492b-4082-b8de-60d496652da7", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2034568195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ae7f32b97104afd930af5d5f5754532", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cf8fe87-4c", "ovs_interfaceid": "5cf8fe87-4cac-403f-8611-0ddb37516abd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.819 254096 DEBUG nova.network.os_vif_util [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Converting VIF {"id": "5cf8fe87-4cac-403f-8611-0ddb37516abd", "address": "fa:16:3e:8c:8a:c5", "network": {"id": "7e8b56bc-492b-4082-b8de-60d496652da7", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2034568195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ae7f32b97104afd930af5d5f5754532", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cf8fe87-4c", "ovs_interfaceid": "5cf8fe87-4cac-403f-8611-0ddb37516abd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.819 254096 DEBUG nova.network.os_vif_util [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:8a:c5,bridge_name='br-int',has_traffic_filtering=True,id=5cf8fe87-4cac-403f-8611-0ddb37516abd,network=Network(7e8b56bc-492b-4082-b8de-60d496652da7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cf8fe87-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.820 254096 DEBUG os_vif [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:8a:c5,bridge_name='br-int',has_traffic_filtering=True,id=5cf8fe87-4cac-403f-8611-0ddb37516abd,network=Network(7e8b56bc-492b-4082-b8de-60d496652da7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cf8fe87-4c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.825 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.825 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.825 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.831 254096 DEBUG oslo_concurrency.processutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.831 254096 DEBUG oslo_concurrency.lockutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.831 254096 DEBUG oslo_concurrency.lockutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.832 254096 DEBUG oslo_concurrency.lockutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.856 254096 DEBUG nova.storage.rbd_utils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image 1d318e56-4a8c-4806-aa87-e837708f2a1f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.860 254096 DEBUG oslo_concurrency.processutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 1d318e56-4a8c-4806-aa87-e837708f2a1f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.895 254096 INFO nova.compute.manager [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Rebuilding instance
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.901 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.901 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5cf8fe87-4c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.903 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5cf8fe87-4c, col_values=(('external_ids', {'iface-id': '5cf8fe87-4cac-403f-8611-0ddb37516abd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8c:8a:c5', 'vm-uuid': 'a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.904 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:33 compute-0 NetworkManager[48891]: <info>  [1764088353.9052] manager: (tap5cf8fe87-4c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/105)
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.907 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.913 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.916 254096 INFO os_vif [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:8a:c5,bridge_name='br-int',has_traffic_filtering=True,id=5cf8fe87-4cac-403f-8611-0ddb37516abd,network=Network(7e8b56bc-492b-4082-b8de-60d496652da7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cf8fe87-4c')
Nov 25 16:32:33 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:32:33 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:32:33 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1428703887' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.970 254096 DEBUG nova.virt.libvirt.driver [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.970 254096 DEBUG nova.virt.libvirt.driver [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.971 254096 DEBUG nova.virt.libvirt.driver [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] No VIF found with MAC fa:16:3e:8c:8a:c5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:32:33 compute-0 nova_compute[254092]: 2025-11-25 16:32:33.971 254096 INFO nova.virt.libvirt.driver [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Using config drive
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.000 254096 DEBUG nova.storage.rbd_utils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] rbd image a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.029 254096 DEBUG oslo_concurrency.processutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.059 254096 DEBUG nova.storage.rbd_utils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] rbd image 2724cb7d-6b8e-4861-ae3d-72e34da31fe5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.076 254096 DEBUG oslo_concurrency.processutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.136 254096 DEBUG nova.compute.manager [req-41093781-1723-4d5a-8f27-d19aa49beddf req-9cbd3c30-28c0-44eb-9103-f83853b7c2de a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Received event network-changed-2a798aec-112b-42d0-9128-639b456b201e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.136 254096 DEBUG nova.compute.manager [req-41093781-1723-4d5a-8f27-d19aa49beddf req-9cbd3c30-28c0-44eb-9103-f83853b7c2de a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Refreshing instance network info cache due to event network-changed-2a798aec-112b-42d0-9128-639b456b201e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.136 254096 DEBUG oslo_concurrency.lockutils [req-41093781-1723-4d5a-8f27-d19aa49beddf req-9cbd3c30-28c0-44eb-9103-f83853b7c2de a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-2724cb7d-6b8e-4861-ae3d-72e34da31fe5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.137 254096 DEBUG oslo_concurrency.lockutils [req-41093781-1723-4d5a-8f27-d19aa49beddf req-9cbd3c30-28c0-44eb-9103-f83853b7c2de a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-2724cb7d-6b8e-4861-ae3d-72e34da31fe5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.137 254096 DEBUG nova.network.neutron [req-41093781-1723-4d5a-8f27-d19aa49beddf req-9cbd3c30-28c0-44eb-9103-f83853b7c2de a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Refreshing network info cache for port 2a798aec-112b-42d0-9128-639b456b201e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.146 254096 DEBUG oslo_concurrency.processutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 1d318e56-4a8c-4806-aa87-e837708f2a1f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.286s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.175 254096 DEBUG nova.objects.instance [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 616ec95d-6c7d-420e-991d-3cbc11339768 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.210 254096 DEBUG nova.compute.manager [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.215 254096 DEBUG nova.storage.rbd_utils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] resizing rbd image 1d318e56-4a8c-4806-aa87-e837708f2a1f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.294 254096 DEBUG nova.objects.instance [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lazy-loading 'pci_requests' on Instance uuid 616ec95d-6c7d-420e-991d-3cbc11339768 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.301 254096 DEBUG nova.objects.instance [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'migration_context' on Instance uuid 1d318e56-4a8c-4806-aa87-e837708f2a1f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.304 254096 DEBUG nova.objects.instance [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 616ec95d-6c7d-420e-991d-3cbc11339768 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.312 254096 DEBUG nova.virt.libvirt.driver [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.312 254096 DEBUG nova.virt.libvirt.driver [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Ensure instance console log exists: /var/lib/nova/instances/1d318e56-4a8c-4806-aa87-e837708f2a1f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.313 254096 DEBUG oslo_concurrency.lockutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.313 254096 DEBUG oslo_concurrency.lockutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.313 254096 DEBUG oslo_concurrency.lockutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.314 254096 DEBUG nova.objects.instance [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lazy-loading 'resources' on Instance uuid 616ec95d-6c7d-420e-991d-3cbc11339768 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.324 254096 DEBUG nova.objects.instance [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lazy-loading 'migration_context' on Instance uuid 616ec95d-6c7d-420e-991d-3cbc11339768 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.332 254096 DEBUG nova.objects.instance [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.335 254096 DEBUG nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 16:32:34 compute-0 ceph-mon[74985]: pgmap v1354: 321 pgs: 321 active+clean; 276 MiB data, 455 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 5.7 MiB/s wr, 210 op/s
Nov 25 16:32:34 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1134914171' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:32:34 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1428703887' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.473 254096 INFO nova.virt.libvirt.driver [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Creating config drive at /var/lib/nova/instances/a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc/disk.config
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.478 254096 DEBUG oslo_concurrency.processutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5fwxjvwf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:32:34 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2160125470' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.548 254096 DEBUG oslo_concurrency.processutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.550 254096 DEBUG nova.virt.libvirt.vif [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:32:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1220833469',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1220833469',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1220833469',id=34,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ed571eebde434695bae813d7bb21f4c3',ramdisk_id='',reservation_id='r-g9yti154',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-964953831',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-964953831-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:32:28Z,user_data=None,user_id='87058665de814ae0a51a12ff02b0d9aa',uuid=2724cb7d-6b8e-4861-ae3d-72e34da31fe5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2a798aec-112b-42d0-9128-639b456b201e", "address": "fa:16:3e:af:3e:5a", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a798aec-11", "ovs_interfaceid": "2a798aec-112b-42d0-9128-639b456b201e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.550 254096 DEBUG nova.network.os_vif_util [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Converting VIF {"id": "2a798aec-112b-42d0-9128-639b456b201e", "address": "fa:16:3e:af:3e:5a", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a798aec-11", "ovs_interfaceid": "2a798aec-112b-42d0-9128-639b456b201e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.551 254096 DEBUG nova.network.os_vif_util [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:3e:5a,bridge_name='br-int',has_traffic_filtering=True,id=2a798aec-112b-42d0-9128-639b456b201e,network=Network(50e18e22-7850-458c-8d66-5932e0495377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a798aec-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.552 254096 DEBUG nova.objects.instance [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2724cb7d-6b8e-4861-ae3d-72e34da31fe5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.567 254096 DEBUG nova.virt.libvirt.driver [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:32:34 compute-0 nova_compute[254092]:   <uuid>2724cb7d-6b8e-4861-ae3d-72e34da31fe5</uuid>
Nov 25 16:32:34 compute-0 nova_compute[254092]:   <name>instance-00000022</name>
Nov 25 16:32:34 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:32:34 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:32:34 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:32:34 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:       <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-1220833469</nova:name>
Nov 25 16:32:34 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:32:33</nova:creationTime>
Nov 25 16:32:34 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:32:34 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:32:34 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:32:34 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:32:34 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:32:34 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:32:34 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:32:34 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:32:34 compute-0 nova_compute[254092]:         <nova:user uuid="87058665de814ae0a51a12ff02b0d9aa">tempest-ImagesOneServerNegativeTestJSON-964953831-project-member</nova:user>
Nov 25 16:32:34 compute-0 nova_compute[254092]:         <nova:project uuid="ed571eebde434695bae813d7bb21f4c3">tempest-ImagesOneServerNegativeTestJSON-964953831</nova:project>
Nov 25 16:32:34 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:32:34 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:32:34 compute-0 nova_compute[254092]:         <nova:port uuid="2a798aec-112b-42d0-9128-639b456b201e">
Nov 25 16:32:34 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:32:34 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:32:34 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:32:34 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:32:34 compute-0 nova_compute[254092]:     <system>
Nov 25 16:32:34 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:32:34 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:32:34 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:32:34 compute-0 nova_compute[254092]:       <entry name="serial">2724cb7d-6b8e-4861-ae3d-72e34da31fe5</entry>
Nov 25 16:32:34 compute-0 nova_compute[254092]:       <entry name="uuid">2724cb7d-6b8e-4861-ae3d-72e34da31fe5</entry>
Nov 25 16:32:34 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     </system>
Nov 25 16:32:34 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:32:34 compute-0 nova_compute[254092]:   <os>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:   </os>
Nov 25 16:32:34 compute-0 nova_compute[254092]:   <features>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:   </features>
Nov 25 16:32:34 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:32:34 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:32:34 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:32:34 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:32:34 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:32:34 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/2724cb7d-6b8e-4861-ae3d-72e34da31fe5_disk">
Nov 25 16:32:34 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:       </source>
Nov 25 16:32:34 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:32:34 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:32:34 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:32:34 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/2724cb7d-6b8e-4861-ae3d-72e34da31fe5_disk.config">
Nov 25 16:32:34 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:       </source>
Nov 25 16:32:34 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:32:34 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:32:34 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:32:34 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:af:3e:5a"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:       <target dev="tap2a798aec-11"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:32:34 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/2724cb7d-6b8e-4861-ae3d-72e34da31fe5/console.log" append="off"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     <video>
Nov 25 16:32:34 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     </video>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:32:34 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:32:34 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:32:34 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:32:34 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:32:34 compute-0 nova_compute[254092]: </domain>
Nov 25 16:32:34 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.568 254096 DEBUG nova.compute.manager [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Preparing to wait for external event network-vif-plugged-2a798aec-112b-42d0-9128-639b456b201e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.568 254096 DEBUG oslo_concurrency.lockutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "2724cb7d-6b8e-4861-ae3d-72e34da31fe5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.568 254096 DEBUG oslo_concurrency.lockutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "2724cb7d-6b8e-4861-ae3d-72e34da31fe5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.569 254096 DEBUG oslo_concurrency.lockutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "2724cb7d-6b8e-4861-ae3d-72e34da31fe5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.569 254096 DEBUG nova.virt.libvirt.vif [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:32:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1220833469',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1220833469',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1220833469',id=34,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ed571eebde434695bae813d7bb21f4c3',ramdisk_id='',reservation_id='r-g9yti154',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-964953831',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-964953831-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:32:28Z,user_data=None,user_id='87058665de814ae0a51a12ff02b0d9aa',uuid=2724cb7d-6b8e-4861-ae3d-72e34da31fe5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2a798aec-112b-42d0-9128-639b456b201e", "address": "fa:16:3e:af:3e:5a", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a798aec-11", "ovs_interfaceid": "2a798aec-112b-42d0-9128-639b456b201e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.569 254096 DEBUG nova.network.os_vif_util [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Converting VIF {"id": "2a798aec-112b-42d0-9128-639b456b201e", "address": "fa:16:3e:af:3e:5a", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a798aec-11", "ovs_interfaceid": "2a798aec-112b-42d0-9128-639b456b201e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.570 254096 DEBUG nova.network.os_vif_util [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:3e:5a,bridge_name='br-int',has_traffic_filtering=True,id=2a798aec-112b-42d0-9128-639b456b201e,network=Network(50e18e22-7850-458c-8d66-5932e0495377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a798aec-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.570 254096 DEBUG os_vif [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:3e:5a,bridge_name='br-int',has_traffic_filtering=True,id=2a798aec-112b-42d0-9128-639b456b201e,network=Network(50e18e22-7850-458c-8d66-5932e0495377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a798aec-11') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.571 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.571 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.572 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.574 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.574 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2a798aec-11, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.575 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2a798aec-11, col_values=(('external_ids', {'iface-id': '2a798aec-112b-42d0-9128-639b456b201e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:af:3e:5a', 'vm-uuid': '2724cb7d-6b8e-4861-ae3d-72e34da31fe5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:34 compute-0 NetworkManager[48891]: <info>  [1764088354.5775] manager: (tap2a798aec-11): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/106)
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.578 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.584 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.585 254096 INFO os_vif [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:3e:5a,bridge_name='br-int',has_traffic_filtering=True,id=2a798aec-112b-42d0-9128-639b456b201e,network=Network(50e18e22-7850-458c-8d66-5932e0495377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a798aec-11')
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.608 254096 DEBUG oslo_concurrency.processutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5fwxjvwf" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.631 254096 DEBUG nova.storage.rbd_utils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] rbd image a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.634 254096 DEBUG oslo_concurrency.processutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc/disk.config a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.665 254096 DEBUG nova.network.neutron [req-539cb2d2-4f2a-4989-acde-5292f0b5f5b1 req-4280d832-8ad6-4d9d-b1f9-4c430b67b15e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Updated VIF entry in instance network info cache for port 5cf8fe87-4cac-403f-8611-0ddb37516abd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.666 254096 DEBUG nova.network.neutron [req-539cb2d2-4f2a-4989-acde-5292f0b5f5b1 req-4280d832-8ad6-4d9d-b1f9-4c430b67b15e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Updating instance_info_cache with network_info: [{"id": "5cf8fe87-4cac-403f-8611-0ddb37516abd", "address": "fa:16:3e:8c:8a:c5", "network": {"id": "7e8b56bc-492b-4082-b8de-60d496652da7", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2034568195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ae7f32b97104afd930af5d5f5754532", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cf8fe87-4c", "ovs_interfaceid": "5cf8fe87-4cac-403f-8611-0ddb37516abd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.693 254096 DEBUG oslo_concurrency.lockutils [req-539cb2d2-4f2a-4989-acde-5292f0b5f5b1 req-4280d832-8ad6-4d9d-b1f9-4c430b67b15e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.696 254096 DEBUG nova.virt.libvirt.driver [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.696 254096 DEBUG nova.virt.libvirt.driver [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.696 254096 DEBUG nova.virt.libvirt.driver [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] No VIF found with MAC fa:16:3e:af:3e:5a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.697 254096 INFO nova.virt.libvirt.driver [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Using config drive
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.718 254096 DEBUG nova.storage.rbd_utils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] rbd image 2724cb7d-6b8e-4861-ae3d-72e34da31fe5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.780 254096 DEBUG oslo_concurrency.processutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc/disk.config a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.781 254096 INFO nova.virt.libvirt.driver [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Deleting local config drive /var/lib/nova/instances/a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc/disk.config because it was imported into RBD.
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.810 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088339.8088124, 3573b86d-afab-4a6f-970e-7db532c23eb3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.810 254096 INFO nova.compute.manager [-] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] VM Stopped (Lifecycle Event)
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.813 254096 DEBUG nova.network.neutron [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Successfully created port: f95d61ca-d58c-4f07-879f-5e5412976e42 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.837 254096 DEBUG nova.compute.manager [None req-bff26bf7-c739-4fe7-be9d-d3eaabc09c0c - - - - - -] [instance: 3573b86d-afab-4a6f-970e-7db532c23eb3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:32:34 compute-0 kernel: tap5cf8fe87-4c: entered promiscuous mode
Nov 25 16:32:34 compute-0 NetworkManager[48891]: <info>  [1764088354.8393] manager: (tap5cf8fe87-4c): new Tun device (/org/freedesktop/NetworkManager/Devices/107)
Nov 25 16:32:34 compute-0 systemd-udevd[294905]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.851 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:34 compute-0 ovn_controller[153477]: 2025-11-25T16:32:34Z|00213|binding|INFO|Claiming lport 5cf8fe87-4cac-403f-8611-0ddb37516abd for this chassis.
Nov 25 16:32:34 compute-0 ovn_controller[153477]: 2025-11-25T16:32:34Z|00214|binding|INFO|5cf8fe87-4cac-403f-8611-0ddb37516abd: Claiming fa:16:3e:8c:8a:c5 10.100.0.9
Nov 25 16:32:34 compute-0 NetworkManager[48891]: <info>  [1764088354.8605] device (tap5cf8fe87-4c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:32:34 compute-0 NetworkManager[48891]: <info>  [1764088354.8617] device (tap5cf8fe87-4c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:32:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:34.863 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:8a:c5 10.100.0.9'], port_security=['fa:16:3e:8c:8a:c5 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e8b56bc-492b-4082-b8de-60d496652da7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ae7f32b97104afd930af5d5f5754532', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1c729f64-6320-4599-a74a-ed70b78f82ae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=81a9c566-d2a0-4e54-9d3c-480355005098, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=5cf8fe87-4cac-403f-8611-0ddb37516abd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:32:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:34.865 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 5cf8fe87-4cac-403f-8611-0ddb37516abd in datapath 7e8b56bc-492b-4082-b8de-60d496652da7 bound to our chassis
Nov 25 16:32:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:34.868 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7e8b56bc-492b-4082-b8de-60d496652da7
Nov 25 16:32:34 compute-0 ovn_controller[153477]: 2025-11-25T16:32:34Z|00215|binding|INFO|Setting lport 5cf8fe87-4cac-403f-8611-0ddb37516abd ovn-installed in OVS
Nov 25 16:32:34 compute-0 ovn_controller[153477]: 2025-11-25T16:32:34Z|00216|binding|INFO|Setting lport 5cf8fe87-4cac-403f-8611-0ddb37516abd up in Southbound
Nov 25 16:32:34 compute-0 nova_compute[254092]: 2025-11-25 16:32:34.881 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:34 compute-0 systemd-machined[216343]: New machine qemu-39-instance-00000021.
Nov 25 16:32:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:34.890 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[79d8bd0e-a7fe-46ee-9839-43934f7606d8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:34 compute-0 systemd[1]: Started Virtual Machine qemu-39-instance-00000021.
Nov 25 16:32:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:34.928 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[955ee817-e987-45ff-a854-298d430df4fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:34.931 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9644c2ff-2cd2-4185-a08b-9c254acebd55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:34.964 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[b0b740a3-dee6-454a-892a-6c6d25df61e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:34.985 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d41919f3-5272-4cce-9066-581e6242eec0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7e8b56bc-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:16:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 61], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479577, 'reachable_time': 25547, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295327, 'error': None, 'target': 'ovnmeta-7e8b56bc-492b-4082-b8de-60d496652da7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:35.006 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4f28171a-c223-400c-aefb-756c2dce189d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7e8b56bc-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479591, 'tstamp': 479591}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295329, 'error': None, 'target': 'ovnmeta-7e8b56bc-492b-4082-b8de-60d496652da7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7e8b56bc-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479595, 'tstamp': 479595}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295329, 'error': None, 'target': 'ovnmeta-7e8b56bc-492b-4082-b8de-60d496652da7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:35.007 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7e8b56bc-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:35 compute-0 nova_compute[254092]: 2025-11-25 16:32:35.009 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:35 compute-0 nova_compute[254092]: 2025-11-25 16:32:35.013 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:35.013 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7e8b56bc-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:35.014 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:32:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:35.015 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7e8b56bc-40, col_values=(('external_ids', {'iface-id': 'f3398af3-7278-4ca0-adcc-f3bb48f595e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:35.016 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:32:35 compute-0 nova_compute[254092]: 2025-11-25 16:32:35.216 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088355.2155972, a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:32:35 compute-0 nova_compute[254092]: 2025-11-25 16:32:35.216 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] VM Started (Lifecycle Event)
Nov 25 16:32:35 compute-0 nova_compute[254092]: 2025-11-25 16:32:35.236 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:32:35 compute-0 nova_compute[254092]: 2025-11-25 16:32:35.241 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088355.2157326, a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:32:35 compute-0 nova_compute[254092]: 2025-11-25 16:32:35.241 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] VM Paused (Lifecycle Event)
Nov 25 16:32:35 compute-0 nova_compute[254092]: 2025-11-25 16:32:35.256 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:32:35 compute-0 nova_compute[254092]: 2025-11-25 16:32:35.258 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:32:35 compute-0 nova_compute[254092]: 2025-11-25 16:32:35.275 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:32:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1355: 321 pgs: 321 active+clean; 334 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 8.9 MiB/s wr, 328 op/s
Nov 25 16:32:35 compute-0 nova_compute[254092]: 2025-11-25 16:32:35.346 254096 DEBUG nova.compute.manager [req-24474c01-adb6-4df0-928d-c830c4962e9a req-91e99f2e-5522-49a7-9934-1214a55e2710 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Received event network-vif-plugged-5cf8fe87-4cac-403f-8611-0ddb37516abd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:32:35 compute-0 nova_compute[254092]: 2025-11-25 16:32:35.347 254096 DEBUG oslo_concurrency.lockutils [req-24474c01-adb6-4df0-928d-c830c4962e9a req-91e99f2e-5522-49a7-9934-1214a55e2710 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:35 compute-0 nova_compute[254092]: 2025-11-25 16:32:35.347 254096 DEBUG oslo_concurrency.lockutils [req-24474c01-adb6-4df0-928d-c830c4962e9a req-91e99f2e-5522-49a7-9934-1214a55e2710 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:35 compute-0 nova_compute[254092]: 2025-11-25 16:32:35.348 254096 DEBUG oslo_concurrency.lockutils [req-24474c01-adb6-4df0-928d-c830c4962e9a req-91e99f2e-5522-49a7-9934-1214a55e2710 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:35 compute-0 nova_compute[254092]: 2025-11-25 16:32:35.348 254096 DEBUG nova.compute.manager [req-24474c01-adb6-4df0-928d-c830c4962e9a req-91e99f2e-5522-49a7-9934-1214a55e2710 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Processing event network-vif-plugged-5cf8fe87-4cac-403f-8611-0ddb37516abd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:32:35 compute-0 nova_compute[254092]: 2025-11-25 16:32:35.348 254096 DEBUG nova.compute.manager [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:32:35 compute-0 nova_compute[254092]: 2025-11-25 16:32:35.351 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088355.3510094, a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:32:35 compute-0 nova_compute[254092]: 2025-11-25 16:32:35.351 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] VM Resumed (Lifecycle Event)
Nov 25 16:32:35 compute-0 nova_compute[254092]: 2025-11-25 16:32:35.353 254096 DEBUG nova.virt.libvirt.driver [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:32:35 compute-0 nova_compute[254092]: 2025-11-25 16:32:35.355 254096 INFO nova.virt.libvirt.driver [-] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Instance spawned successfully.
Nov 25 16:32:35 compute-0 nova_compute[254092]: 2025-11-25 16:32:35.355 254096 DEBUG nova.virt.libvirt.driver [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:32:35 compute-0 nova_compute[254092]: 2025-11-25 16:32:35.365 254096 INFO nova.virt.libvirt.driver [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Creating config drive at /var/lib/nova/instances/2724cb7d-6b8e-4861-ae3d-72e34da31fe5/disk.config
Nov 25 16:32:35 compute-0 nova_compute[254092]: 2025-11-25 16:32:35.369 254096 DEBUG oslo_concurrency.processutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2724cb7d-6b8e-4861-ae3d-72e34da31fe5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuweqdtsg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:35 compute-0 nova_compute[254092]: 2025-11-25 16:32:35.401 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:32:35 compute-0 nova_compute[254092]: 2025-11-25 16:32:35.406 254096 DEBUG nova.virt.libvirt.driver [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:35 compute-0 nova_compute[254092]: 2025-11-25 16:32:35.406 254096 DEBUG nova.virt.libvirt.driver [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:35 compute-0 nova_compute[254092]: 2025-11-25 16:32:35.407 254096 DEBUG nova.virt.libvirt.driver [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:35 compute-0 nova_compute[254092]: 2025-11-25 16:32:35.407 254096 DEBUG nova.virt.libvirt.driver [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:35 compute-0 nova_compute[254092]: 2025-11-25 16:32:35.408 254096 DEBUG nova.virt.libvirt.driver [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:35 compute-0 nova_compute[254092]: 2025-11-25 16:32:35.408 254096 DEBUG nova.virt.libvirt.driver [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:35 compute-0 nova_compute[254092]: 2025-11-25 16:32:35.413 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:32:35 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2160125470' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:32:35 compute-0 nova_compute[254092]: 2025-11-25 16:32:35.463 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:32:35 compute-0 nova_compute[254092]: 2025-11-25 16:32:35.488 254096 INFO nova.compute.manager [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Took 8.00 seconds to spawn the instance on the hypervisor.
Nov 25 16:32:35 compute-0 nova_compute[254092]: 2025-11-25 16:32:35.489 254096 DEBUG nova.compute.manager [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:32:35 compute-0 nova_compute[254092]: 2025-11-25 16:32:35.502 254096 DEBUG oslo_concurrency.processutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2724cb7d-6b8e-4861-ae3d-72e34da31fe5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuweqdtsg" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:35 compute-0 nova_compute[254092]: 2025-11-25 16:32:35.528 254096 DEBUG nova.storage.rbd_utils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] rbd image 2724cb7d-6b8e-4861-ae3d-72e34da31fe5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:35 compute-0 nova_compute[254092]: 2025-11-25 16:32:35.531 254096 DEBUG oslo_concurrency.processutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2724cb7d-6b8e-4861-ae3d-72e34da31fe5/disk.config 2724cb7d-6b8e-4861-ae3d-72e34da31fe5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:35 compute-0 nova_compute[254092]: 2025-11-25 16:32:35.567 254096 DEBUG nova.network.neutron [req-41093781-1723-4d5a-8f27-d19aa49beddf req-9cbd3c30-28c0-44eb-9103-f83853b7c2de a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Updated VIF entry in instance network info cache for port 2a798aec-112b-42d0-9128-639b456b201e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:32:35 compute-0 nova_compute[254092]: 2025-11-25 16:32:35.568 254096 DEBUG nova.network.neutron [req-41093781-1723-4d5a-8f27-d19aa49beddf req-9cbd3c30-28c0-44eb-9103-f83853b7c2de a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Updating instance_info_cache with network_info: [{"id": "2a798aec-112b-42d0-9128-639b456b201e", "address": "fa:16:3e:af:3e:5a", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a798aec-11", "ovs_interfaceid": "2a798aec-112b-42d0-9128-639b456b201e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:32:35 compute-0 nova_compute[254092]: 2025-11-25 16:32:35.573 254096 INFO nova.compute.manager [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Took 9.06 seconds to build instance.
Nov 25 16:32:35 compute-0 nova_compute[254092]: 2025-11-25 16:32:35.585 254096 DEBUG oslo_concurrency.lockutils [req-41093781-1723-4d5a-8f27-d19aa49beddf req-9cbd3c30-28c0-44eb-9103-f83853b7c2de a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-2724cb7d-6b8e-4861-ae3d-72e34da31fe5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:32:35 compute-0 nova_compute[254092]: 2025-11-25 16:32:35.586 254096 DEBUG oslo_concurrency.lockutils [None req-760aefcf-b872-4d00-bfc0-2c563b27fdfa a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.141s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:35 compute-0 nova_compute[254092]: 2025-11-25 16:32:35.697 254096 DEBUG oslo_concurrency.processutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2724cb7d-6b8e-4861-ae3d-72e34da31fe5/disk.config 2724cb7d-6b8e-4861-ae3d-72e34da31fe5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:35 compute-0 nova_compute[254092]: 2025-11-25 16:32:35.697 254096 INFO nova.virt.libvirt.driver [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Deleting local config drive /var/lib/nova/instances/2724cb7d-6b8e-4861-ae3d-72e34da31fe5/disk.config because it was imported into RBD.
Nov 25 16:32:35 compute-0 kernel: tap2a798aec-11: entered promiscuous mode
Nov 25 16:32:35 compute-0 NetworkManager[48891]: <info>  [1764088355.7497] manager: (tap2a798aec-11): new Tun device (/org/freedesktop/NetworkManager/Devices/108)
Nov 25 16:32:35 compute-0 systemd-udevd[295374]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:32:35 compute-0 nova_compute[254092]: 2025-11-25 16:32:35.752 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:35 compute-0 ovn_controller[153477]: 2025-11-25T16:32:35Z|00217|binding|INFO|Claiming lport 2a798aec-112b-42d0-9128-639b456b201e for this chassis.
Nov 25 16:32:35 compute-0 ovn_controller[153477]: 2025-11-25T16:32:35Z|00218|binding|INFO|2a798aec-112b-42d0-9128-639b456b201e: Claiming fa:16:3e:af:3e:5a 10.100.0.14
Nov 25 16:32:35 compute-0 NetworkManager[48891]: <info>  [1764088355.7675] device (tap2a798aec-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:32:35 compute-0 NetworkManager[48891]: <info>  [1764088355.7702] device (tap2a798aec-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:32:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:35.770 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:3e:5a 10.100.0.14'], port_security=['fa:16:3e:af:3e:5a 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '2724cb7d-6b8e-4861-ae3d-72e34da31fe5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50e18e22-7850-458c-8d66-5932e0495377', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ed571eebde434695bae813d7bb21f4c3', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd93cbc19-cf50-4620-aef2-ea907e30850a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aff6ec99-f8ff-4a75-9004-f758cb9a51ad, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=2a798aec-112b-42d0-9128-639b456b201e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:32:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:35.772 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 2a798aec-112b-42d0-9128-639b456b201e in datapath 50e18e22-7850-458c-8d66-5932e0495377 bound to our chassis
Nov 25 16:32:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:35.773 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 50e18e22-7850-458c-8d66-5932e0495377
Nov 25 16:32:35 compute-0 ovn_controller[153477]: 2025-11-25T16:32:35Z|00219|binding|INFO|Setting lport 2a798aec-112b-42d0-9128-639b456b201e ovn-installed in OVS
Nov 25 16:32:35 compute-0 ovn_controller[153477]: 2025-11-25T16:32:35Z|00220|binding|INFO|Setting lport 2a798aec-112b-42d0-9128-639b456b201e up in Southbound
Nov 25 16:32:35 compute-0 nova_compute[254092]: 2025-11-25 16:32:35.783 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:35 compute-0 nova_compute[254092]: 2025-11-25 16:32:35.787 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:35.787 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ad676319-6fe9-403e-ac59-b193fa83b2fb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:35.787 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap50e18e22-71 in ovnmeta-50e18e22-7850-458c-8d66-5932e0495377 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:32:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:35.789 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap50e18e22-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:32:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:35.789 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[52a9595e-1644-4b87-b360-3e2d5c260751]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:35.790 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[367dcdcf-72fd-4c91-b3fc-a007a9f95cfe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:35 compute-0 systemd-machined[216343]: New machine qemu-40-instance-00000022.
Nov 25 16:32:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:35.812 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[5c6e2afe-0939-41ff-acbf-04dba9a432c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:35 compute-0 systemd[1]: Started Virtual Machine qemu-40-instance-00000022.
Nov 25 16:32:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:35.838 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d5a76a3a-c2a4-42fe-aad5-68916c4d767c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:35.871 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[96fcc073-fa7c-4bae-aff3-eedb823f4be1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:35 compute-0 NetworkManager[48891]: <info>  [1764088355.8807] manager: (tap50e18e22-70): new Veth device (/org/freedesktop/NetworkManager/Devices/109)
Nov 25 16:32:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:35.883 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a9e91d90-e251-4f79-9142-cc503bdb0ca0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:35 compute-0 systemd-udevd[295427]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:32:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:35.941 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[fdd75704-5aa4-4ce2-acd8-8c4d6a0e347a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:35.944 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[80886324-85bd-4c96-933b-4d0382f60326]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:35 compute-0 NetworkManager[48891]: <info>  [1764088355.9682] device (tap50e18e22-70): carrier: link connected
Nov 25 16:32:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:35.975 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[eabd88f2-abe3-484f-9b36-ba054d488040]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:36.004 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cb040ed5-c824-40c7-b9dc-734dbf569cf7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap50e18e22-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:14:7d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 481355, 'reachable_time': 28567, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295463, 'error': None, 'target': 'ovnmeta-50e18e22-7850-458c-8d66-5932e0495377', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:36.029 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3dbe71e0-9026-405e-92fa-3f33f3c2e2f7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feda:147d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 481355, 'tstamp': 481355}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295464, 'error': None, 'target': 'ovnmeta-50e18e22-7850-458c-8d66-5932e0495377', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:36 compute-0 nova_compute[254092]: 2025-11-25 16:32:36.051 254096 DEBUG nova.network.neutron [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Successfully updated port: f95d61ca-d58c-4f07-879f-5e5412976e42 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:32:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:36.052 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[be70a05b-b77d-416b-b288-999f9801987e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap50e18e22-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:14:7d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 481355, 'reachable_time': 28567, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 295465, 'error': None, 'target': 'ovnmeta-50e18e22-7850-458c-8d66-5932e0495377', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:36 compute-0 nova_compute[254092]: 2025-11-25 16:32:36.072 254096 DEBUG oslo_concurrency.lockutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "refresh_cache-1d318e56-4a8c-4806-aa87-e837708f2a1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:32:36 compute-0 nova_compute[254092]: 2025-11-25 16:32:36.073 254096 DEBUG oslo_concurrency.lockutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquired lock "refresh_cache-1d318e56-4a8c-4806-aa87-e837708f2a1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:32:36 compute-0 nova_compute[254092]: 2025-11-25 16:32:36.073 254096 DEBUG nova.network.neutron [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:32:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:36.105 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f5a9ebc7-1814-4713-9ff4-1bbb664ce65e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:36 compute-0 nova_compute[254092]: 2025-11-25 16:32:36.139 254096 DEBUG nova.compute.manager [req-f0c618bc-8997-4e05-a844-5432075c7bcc req-3d80d345-6f09-4016-b04b-185b984b3f76 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Received event network-changed-f95d61ca-d58c-4f07-879f-5e5412976e42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:32:36 compute-0 nova_compute[254092]: 2025-11-25 16:32:36.140 254096 DEBUG nova.compute.manager [req-f0c618bc-8997-4e05-a844-5432075c7bcc req-3d80d345-6f09-4016-b04b-185b984b3f76 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Refreshing instance network info cache due to event network-changed-f95d61ca-d58c-4f07-879f-5e5412976e42. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:32:36 compute-0 nova_compute[254092]: 2025-11-25 16:32:36.140 254096 DEBUG oslo_concurrency.lockutils [req-f0c618bc-8997-4e05-a844-5432075c7bcc req-3d80d345-6f09-4016-b04b-185b984b3f76 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-1d318e56-4a8c-4806-aa87-e837708f2a1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:32:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:36.208 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[966910e7-ba4f-43b4-8993-29d6bece2c67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:36.211 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50e18e22-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:36.212 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:32:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:36.212 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap50e18e22-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:36 compute-0 nova_compute[254092]: 2025-11-25 16:32:36.214 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:36 compute-0 kernel: tap50e18e22-70: entered promiscuous mode
Nov 25 16:32:36 compute-0 NetworkManager[48891]: <info>  [1764088356.2149] manager: (tap50e18e22-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/110)
Nov 25 16:32:36 compute-0 nova_compute[254092]: 2025-11-25 16:32:36.216 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:36.218 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap50e18e22-70, col_values=(('external_ids', {'iface-id': '5b591f76-4d04-4b30-9182-d359be87068c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:36 compute-0 nova_compute[254092]: 2025-11-25 16:32:36.228 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:36 compute-0 nova_compute[254092]: 2025-11-25 16:32:36.229 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:36 compute-0 ovn_controller[153477]: 2025-11-25T16:32:36Z|00221|binding|INFO|Releasing lport 5b591f76-4d04-4b30-9182-d359be87068c from this chassis (sb_readonly=0)
Nov 25 16:32:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:36.234 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/50e18e22-7850-458c-8d66-5932e0495377.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/50e18e22-7850-458c-8d66-5932e0495377.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:32:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:36.244 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4121da8b-7a3f-4f9f-ae38-85bf27484ed1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:36.245 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:32:36 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:32:36 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:32:36 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-50e18e22-7850-458c-8d66-5932e0495377
Nov 25 16:32:36 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:32:36 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:32:36 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:32:36 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/50e18e22-7850-458c-8d66-5932e0495377.pid.haproxy
Nov 25 16:32:36 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:32:36 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:32:36 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:32:36 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:32:36 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:32:36 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:32:36 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:32:36 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:32:36 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:32:36 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:32:36 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:32:36 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:32:36 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:32:36 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:32:36 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:32:36 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:32:36 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:32:36 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:32:36 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:32:36 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:32:36 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 50e18e22-7850-458c-8d66-5932e0495377
Nov 25 16:32:36 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:32:36 compute-0 nova_compute[254092]: 2025-11-25 16:32:36.247 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:36.249 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-50e18e22-7850-458c-8d66-5932e0495377', 'env', 'PROCESS_TAG=haproxy-50e18e22-7850-458c-8d66-5932e0495377', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/50e18e22-7850-458c-8d66-5932e0495377.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:32:36 compute-0 nova_compute[254092]: 2025-11-25 16:32:36.253 254096 DEBUG nova.network.neutron [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:32:36 compute-0 ceph-mon[74985]: pgmap v1355: 321 pgs: 321 active+clean; 334 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 8.9 MiB/s wr, 328 op/s
Nov 25 16:32:36 compute-0 podman[295495]: 2025-11-25 16:32:36.724750253 +0000 UTC m=+0.056440160 container create a6ca155eb1f434f8462f2b8c75c4eb2b2f49726aa5907ef53fcef74074f15e68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:32:36 compute-0 systemd[1]: Started libpod-conmon-a6ca155eb1f434f8462f2b8c75c4eb2b2f49726aa5907ef53fcef74074f15e68.scope.
Nov 25 16:32:36 compute-0 podman[295495]: 2025-11-25 16:32:36.692888 +0000 UTC m=+0.024577927 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:32:36 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:32:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b70c021b67aa7e63f187afb6ab9f2ef19c3ce9218078adb4a42a3a569b52662a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:32:36 compute-0 podman[295495]: 2025-11-25 16:32:36.826586592 +0000 UTC m=+0.158276499 container init a6ca155eb1f434f8462f2b8c75c4eb2b2f49726aa5907ef53fcef74074f15e68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 16:32:36 compute-0 podman[295495]: 2025-11-25 16:32:36.848084063 +0000 UTC m=+0.179773990 container start a6ca155eb1f434f8462f2b8c75c4eb2b2f49726aa5907ef53fcef74074f15e68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 16:32:36 compute-0 neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377[295509]: [NOTICE]   (295513) : New worker (295515) forked
Nov 25 16:32:36 compute-0 neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377[295509]: [NOTICE]   (295513) : Loading success.
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.091 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088357.0906174, 2724cb7d-6b8e-4861-ae3d-72e34da31fe5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.091 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] VM Started (Lifecycle Event)
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.117 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.120 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088357.0911803, 2724cb7d-6b8e-4861-ae3d-72e34da31fe5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.120 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] VM Paused (Lifecycle Event)
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.144 254096 DEBUG nova.network.neutron [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Updating instance_info_cache with network_info: [{"id": "f95d61ca-d58c-4f07-879f-5e5412976e42", "address": "fa:16:3e:d7:b8:12", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf95d61ca-d5", "ovs_interfaceid": "f95d61ca-d58c-4f07-879f-5e5412976e42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.146 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.149 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.168 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.169 254096 DEBUG oslo_concurrency.lockutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Releasing lock "refresh_cache-1d318e56-4a8c-4806-aa87-e837708f2a1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.169 254096 DEBUG nova.compute.manager [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Instance network_info: |[{"id": "f95d61ca-d58c-4f07-879f-5e5412976e42", "address": "fa:16:3e:d7:b8:12", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf95d61ca-d5", "ovs_interfaceid": "f95d61ca-d58c-4f07-879f-5e5412976e42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.170 254096 DEBUG oslo_concurrency.lockutils [req-f0c618bc-8997-4e05-a844-5432075c7bcc req-3d80d345-6f09-4016-b04b-185b984b3f76 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-1d318e56-4a8c-4806-aa87-e837708f2a1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.170 254096 DEBUG nova.network.neutron [req-f0c618bc-8997-4e05-a844-5432075c7bcc req-3d80d345-6f09-4016-b04b-185b984b3f76 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Refreshing network info cache for port f95d61ca-d58c-4f07-879f-5e5412976e42 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.172 254096 DEBUG nova.virt.libvirt.driver [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Start _get_guest_xml network_info=[{"id": "f95d61ca-d58c-4f07-879f-5e5412976e42", "address": "fa:16:3e:d7:b8:12", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf95d61ca-d5", "ovs_interfaceid": "f95d61ca-d58c-4f07-879f-5e5412976e42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.176 254096 WARNING nova.virt.libvirt.driver [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.181 254096 DEBUG nova.virt.libvirt.host [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.181 254096 DEBUG nova.virt.libvirt.host [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.187 254096 DEBUG nova.virt.libvirt.host [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.187 254096 DEBUG nova.virt.libvirt.host [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.187 254096 DEBUG nova.virt.libvirt.driver [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.188 254096 DEBUG nova.virt.hardware [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.188 254096 DEBUG nova.virt.hardware [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.188 254096 DEBUG nova.virt.hardware [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.188 254096 DEBUG nova.virt.hardware [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.188 254096 DEBUG nova.virt.hardware [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.189 254096 DEBUG nova.virt.hardware [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.189 254096 DEBUG nova.virt.hardware [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.189 254096 DEBUG nova.virt.hardware [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.189 254096 DEBUG nova.virt.hardware [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.189 254096 DEBUG nova.virt.hardware [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.189 254096 DEBUG nova.virt.hardware [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.192 254096 DEBUG oslo_concurrency.processutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1356: 321 pgs: 321 active+clean; 353 MiB data, 515 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 9.3 MiB/s wr, 327 op/s
Nov 25 16:32:37 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:32:37 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3755106099' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.634 254096 DEBUG nova.compute.manager [req-6d60b5c3-8878-4918-a52a-269d8c291e01 req-876bfae3-6fbf-477e-8c7f-05c2a90ddfd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Received event network-vif-plugged-5cf8fe87-4cac-403f-8611-0ddb37516abd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.634 254096 DEBUG oslo_concurrency.lockutils [req-6d60b5c3-8878-4918-a52a-269d8c291e01 req-876bfae3-6fbf-477e-8c7f-05c2a90ddfd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.635 254096 DEBUG oslo_concurrency.lockutils [req-6d60b5c3-8878-4918-a52a-269d8c291e01 req-876bfae3-6fbf-477e-8c7f-05c2a90ddfd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.635 254096 DEBUG oslo_concurrency.lockutils [req-6d60b5c3-8878-4918-a52a-269d8c291e01 req-876bfae3-6fbf-477e-8c7f-05c2a90ddfd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.635 254096 DEBUG nova.compute.manager [req-6d60b5c3-8878-4918-a52a-269d8c291e01 req-876bfae3-6fbf-477e-8c7f-05c2a90ddfd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] No waiting events found dispatching network-vif-plugged-5cf8fe87-4cac-403f-8611-0ddb37516abd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.635 254096 WARNING nova.compute.manager [req-6d60b5c3-8878-4918-a52a-269d8c291e01 req-876bfae3-6fbf-477e-8c7f-05c2a90ddfd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Received unexpected event network-vif-plugged-5cf8fe87-4cac-403f-8611-0ddb37516abd for instance with vm_state active and task_state None.
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.635 254096 DEBUG nova.compute.manager [req-6d60b5c3-8878-4918-a52a-269d8c291e01 req-876bfae3-6fbf-477e-8c7f-05c2a90ddfd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Received event network-vif-plugged-2a798aec-112b-42d0-9128-639b456b201e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.635 254096 DEBUG oslo_concurrency.lockutils [req-6d60b5c3-8878-4918-a52a-269d8c291e01 req-876bfae3-6fbf-477e-8c7f-05c2a90ddfd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2724cb7d-6b8e-4861-ae3d-72e34da31fe5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.636 254096 DEBUG oslo_concurrency.lockutils [req-6d60b5c3-8878-4918-a52a-269d8c291e01 req-876bfae3-6fbf-477e-8c7f-05c2a90ddfd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2724cb7d-6b8e-4861-ae3d-72e34da31fe5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.636 254096 DEBUG oslo_concurrency.lockutils [req-6d60b5c3-8878-4918-a52a-269d8c291e01 req-876bfae3-6fbf-477e-8c7f-05c2a90ddfd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2724cb7d-6b8e-4861-ae3d-72e34da31fe5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.636 254096 DEBUG nova.compute.manager [req-6d60b5c3-8878-4918-a52a-269d8c291e01 req-876bfae3-6fbf-477e-8c7f-05c2a90ddfd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Processing event network-vif-plugged-2a798aec-112b-42d0-9128-639b456b201e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.636 254096 DEBUG nova.compute.manager [req-6d60b5c3-8878-4918-a52a-269d8c291e01 req-876bfae3-6fbf-477e-8c7f-05c2a90ddfd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Received event network-vif-plugged-2a798aec-112b-42d0-9128-639b456b201e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.636 254096 DEBUG oslo_concurrency.lockutils [req-6d60b5c3-8878-4918-a52a-269d8c291e01 req-876bfae3-6fbf-477e-8c7f-05c2a90ddfd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2724cb7d-6b8e-4861-ae3d-72e34da31fe5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.636 254096 DEBUG oslo_concurrency.lockutils [req-6d60b5c3-8878-4918-a52a-269d8c291e01 req-876bfae3-6fbf-477e-8c7f-05c2a90ddfd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2724cb7d-6b8e-4861-ae3d-72e34da31fe5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.637 254096 DEBUG oslo_concurrency.lockutils [req-6d60b5c3-8878-4918-a52a-269d8c291e01 req-876bfae3-6fbf-477e-8c7f-05c2a90ddfd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2724cb7d-6b8e-4861-ae3d-72e34da31fe5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.637 254096 DEBUG nova.compute.manager [req-6d60b5c3-8878-4918-a52a-269d8c291e01 req-876bfae3-6fbf-477e-8c7f-05c2a90ddfd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] No waiting events found dispatching network-vif-plugged-2a798aec-112b-42d0-9128-639b456b201e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.637 254096 WARNING nova.compute.manager [req-6d60b5c3-8878-4918-a52a-269d8c291e01 req-876bfae3-6fbf-477e-8c7f-05c2a90ddfd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Received unexpected event network-vif-plugged-2a798aec-112b-42d0-9128-639b456b201e for instance with vm_state building and task_state spawning.
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.638 254096 DEBUG nova.compute.manager [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.638 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.641 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088357.6410172, 2724cb7d-6b8e-4861-ae3d-72e34da31fe5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.641 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] VM Resumed (Lifecycle Event)
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.643 254096 DEBUG oslo_concurrency.processutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.662 254096 DEBUG nova.storage.rbd_utils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image 1d318e56-4a8c-4806-aa87-e837708f2a1f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.665 254096 DEBUG oslo_concurrency.processutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.694 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.695 254096 DEBUG nova.virt.libvirt.driver [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.702 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.704 254096 INFO nova.virt.libvirt.driver [-] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Instance spawned successfully.
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.704 254096 DEBUG nova.virt.libvirt.driver [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.725 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.730 254096 DEBUG nova.virt.libvirt.driver [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.731 254096 DEBUG nova.virt.libvirt.driver [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.731 254096 DEBUG nova.virt.libvirt.driver [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.731 254096 DEBUG nova.virt.libvirt.driver [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.732 254096 DEBUG nova.virt.libvirt.driver [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.732 254096 DEBUG nova.virt.libvirt.driver [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.784 254096 INFO nova.compute.manager [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Took 9.45 seconds to spawn the instance on the hypervisor.
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.785 254096 DEBUG nova.compute.manager [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.845 254096 INFO nova.compute.manager [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Took 10.76 seconds to build instance.
Nov 25 16:32:37 compute-0 nova_compute[254092]: 2025-11-25 16:32:37.862 254096 DEBUG oslo_concurrency.lockutils [None req-40fd9781-6927-4fdc-8e81-40ea0b268e1d 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "2724cb7d-6b8e-4861-ae3d-72e34da31fe5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.868s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:38 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:32:38 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4271733712' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:32:38 compute-0 nova_compute[254092]: 2025-11-25 16:32:38.168 254096 DEBUG oslo_concurrency.processutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:38 compute-0 nova_compute[254092]: 2025-11-25 16:32:38.169 254096 DEBUG nova.virt.libvirt.vif [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:32:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-822239868',display_name='tempest-tempest.common.compute-instance-822239868',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-822239868',id=36,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDGZe6LoY198VQqVmszTzRQn1eLrChbRZgirb9UFyHnKiwbK+onUZ9OAKdmbbmeuifOBSXxAmDh6Tzi6OA/iv9WrQsgxGvk1nGcnbFbOsVIGStOyhcAbSBeBMtXZYtHi4Q==',key_name='tempest-keypair-386194762',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-bctv2vy4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:32:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=1d318e56-4a8c-4806-aa87-e837708f2a1f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f95d61ca-d58c-4f07-879f-5e5412976e42", "address": "fa:16:3e:d7:b8:12", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf95d61ca-d5", "ovs_interfaceid": "f95d61ca-d58c-4f07-879f-5e5412976e42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:32:38 compute-0 nova_compute[254092]: 2025-11-25 16:32:38.169 254096 DEBUG nova.network.os_vif_util [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "f95d61ca-d58c-4f07-879f-5e5412976e42", "address": "fa:16:3e:d7:b8:12", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf95d61ca-d5", "ovs_interfaceid": "f95d61ca-d58c-4f07-879f-5e5412976e42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:32:38 compute-0 nova_compute[254092]: 2025-11-25 16:32:38.170 254096 DEBUG nova.network.os_vif_util [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:b8:12,bridge_name='br-int',has_traffic_filtering=True,id=f95d61ca-d58c-4f07-879f-5e5412976e42,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf95d61ca-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:32:38 compute-0 nova_compute[254092]: 2025-11-25 16:32:38.171 254096 DEBUG nova.objects.instance [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'pci_devices' on Instance uuid 1d318e56-4a8c-4806-aa87-e837708f2a1f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:32:38 compute-0 nova_compute[254092]: 2025-11-25 16:32:38.184 254096 DEBUG nova.virt.libvirt.driver [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:32:38 compute-0 nova_compute[254092]:   <uuid>1d318e56-4a8c-4806-aa87-e837708f2a1f</uuid>
Nov 25 16:32:38 compute-0 nova_compute[254092]:   <name>instance-00000024</name>
Nov 25 16:32:38 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:32:38 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:32:38 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:32:38 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:       <nova:name>tempest-tempest.common.compute-instance-822239868</nova:name>
Nov 25 16:32:38 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:32:37</nova:creationTime>
Nov 25 16:32:38 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:32:38 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:32:38 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:32:38 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:32:38 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:32:38 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:32:38 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:32:38 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:32:38 compute-0 nova_compute[254092]:         <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 16:32:38 compute-0 nova_compute[254092]:         <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 16:32:38 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:32:38 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:32:38 compute-0 nova_compute[254092]:         <nova:port uuid="f95d61ca-d58c-4f07-879f-5e5412976e42">
Nov 25 16:32:38 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:32:38 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:32:38 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:32:38 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:32:38 compute-0 nova_compute[254092]:     <system>
Nov 25 16:32:38 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:32:38 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:32:38 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:32:38 compute-0 nova_compute[254092]:       <entry name="serial">1d318e56-4a8c-4806-aa87-e837708f2a1f</entry>
Nov 25 16:32:38 compute-0 nova_compute[254092]:       <entry name="uuid">1d318e56-4a8c-4806-aa87-e837708f2a1f</entry>
Nov 25 16:32:38 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     </system>
Nov 25 16:32:38 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:32:38 compute-0 nova_compute[254092]:   <os>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:   </os>
Nov 25 16:32:38 compute-0 nova_compute[254092]:   <features>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:   </features>
Nov 25 16:32:38 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:32:38 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:32:38 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:32:38 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:32:38 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:32:38 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/1d318e56-4a8c-4806-aa87-e837708f2a1f_disk">
Nov 25 16:32:38 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:       </source>
Nov 25 16:32:38 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:32:38 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:32:38 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:32:38 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/1d318e56-4a8c-4806-aa87-e837708f2a1f_disk.config">
Nov 25 16:32:38 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:       </source>
Nov 25 16:32:38 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:32:38 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:32:38 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:32:38 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:d7:b8:12"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:       <target dev="tapf95d61ca-d5"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:32:38 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/1d318e56-4a8c-4806-aa87-e837708f2a1f/console.log" append="off"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     <video>
Nov 25 16:32:38 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     </video>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:32:38 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:32:38 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:32:38 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:32:38 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:32:38 compute-0 nova_compute[254092]: </domain>
Nov 25 16:32:38 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:32:38 compute-0 nova_compute[254092]: 2025-11-25 16:32:38.185 254096 DEBUG nova.compute.manager [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Preparing to wait for external event network-vif-plugged-f95d61ca-d58c-4f07-879f-5e5412976e42 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:32:38 compute-0 nova_compute[254092]: 2025-11-25 16:32:38.185 254096 DEBUG oslo_concurrency.lockutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:38 compute-0 nova_compute[254092]: 2025-11-25 16:32:38.185 254096 DEBUG oslo_concurrency.lockutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:38 compute-0 nova_compute[254092]: 2025-11-25 16:32:38.185 254096 DEBUG oslo_concurrency.lockutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:38 compute-0 nova_compute[254092]: 2025-11-25 16:32:38.186 254096 DEBUG nova.virt.libvirt.vif [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:32:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-822239868',display_name='tempest-tempest.common.compute-instance-822239868',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-822239868',id=36,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDGZe6LoY198VQqVmszTzRQn1eLrChbRZgirb9UFyHnKiwbK+onUZ9OAKdmbbmeuifOBSXxAmDh6Tzi6OA/iv9WrQsgxGvk1nGcnbFbOsVIGStOyhcAbSBeBMtXZYtHi4Q==',key_name='tempest-keypair-386194762',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-bctv2vy4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:32:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=1d318e56-4a8c-4806-aa87-e837708f2a1f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f95d61ca-d58c-4f07-879f-5e5412976e42", "address": "fa:16:3e:d7:b8:12", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf95d61ca-d5", "ovs_interfaceid": "f95d61ca-d58c-4f07-879f-5e5412976e42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:32:38 compute-0 nova_compute[254092]: 2025-11-25 16:32:38.186 254096 DEBUG nova.network.os_vif_util [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "f95d61ca-d58c-4f07-879f-5e5412976e42", "address": "fa:16:3e:d7:b8:12", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf95d61ca-d5", "ovs_interfaceid": "f95d61ca-d58c-4f07-879f-5e5412976e42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:32:38 compute-0 nova_compute[254092]: 2025-11-25 16:32:38.187 254096 DEBUG nova.network.os_vif_util [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:b8:12,bridge_name='br-int',has_traffic_filtering=True,id=f95d61ca-d58c-4f07-879f-5e5412976e42,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf95d61ca-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:32:38 compute-0 nova_compute[254092]: 2025-11-25 16:32:38.187 254096 DEBUG os_vif [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:b8:12,bridge_name='br-int',has_traffic_filtering=True,id=f95d61ca-d58c-4f07-879f-5e5412976e42,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf95d61ca-d5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:32:38 compute-0 nova_compute[254092]: 2025-11-25 16:32:38.188 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:38 compute-0 nova_compute[254092]: 2025-11-25 16:32:38.188 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:38 compute-0 nova_compute[254092]: 2025-11-25 16:32:38.188 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:32:38 compute-0 nova_compute[254092]: 2025-11-25 16:32:38.191 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:38 compute-0 nova_compute[254092]: 2025-11-25 16:32:38.191 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf95d61ca-d5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:38 compute-0 nova_compute[254092]: 2025-11-25 16:32:38.191 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf95d61ca-d5, col_values=(('external_ids', {'iface-id': 'f95d61ca-d58c-4f07-879f-5e5412976e42', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d7:b8:12', 'vm-uuid': '1d318e56-4a8c-4806-aa87-e837708f2a1f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:38 compute-0 nova_compute[254092]: 2025-11-25 16:32:38.192 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:38 compute-0 NetworkManager[48891]: <info>  [1764088358.1938] manager: (tapf95d61ca-d5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/111)
Nov 25 16:32:38 compute-0 nova_compute[254092]: 2025-11-25 16:32:38.195 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:32:38 compute-0 nova_compute[254092]: 2025-11-25 16:32:38.199 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:38 compute-0 nova_compute[254092]: 2025-11-25 16:32:38.200 254096 INFO os_vif [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:b8:12,bridge_name='br-int',has_traffic_filtering=True,id=f95d61ca-d58c-4f07-879f-5e5412976e42,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf95d61ca-d5')
Nov 25 16:32:38 compute-0 nova_compute[254092]: 2025-11-25 16:32:38.254 254096 DEBUG nova.virt.libvirt.driver [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:32:38 compute-0 nova_compute[254092]: 2025-11-25 16:32:38.255 254096 DEBUG nova.virt.libvirt.driver [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:32:38 compute-0 nova_compute[254092]: 2025-11-25 16:32:38.255 254096 DEBUG nova.virt.libvirt.driver [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No VIF found with MAC fa:16:3e:d7:b8:12, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:32:38 compute-0 nova_compute[254092]: 2025-11-25 16:32:38.255 254096 INFO nova.virt.libvirt.driver [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Using config drive
Nov 25 16:32:38 compute-0 nova_compute[254092]: 2025-11-25 16:32:38.277 254096 DEBUG nova.storage.rbd_utils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image 1d318e56-4a8c-4806-aa87-e837708f2a1f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:38 compute-0 ceph-mon[74985]: pgmap v1356: 321 pgs: 321 active+clean; 353 MiB data, 515 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 9.3 MiB/s wr, 327 op/s
Nov 25 16:32:38 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3755106099' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:32:38 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4271733712' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:32:38 compute-0 nova_compute[254092]: 2025-11-25 16:32:38.538 254096 INFO nova.virt.libvirt.driver [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Creating config drive at /var/lib/nova/instances/1d318e56-4a8c-4806-aa87-e837708f2a1f/disk.config
Nov 25 16:32:38 compute-0 nova_compute[254092]: 2025-11-25 16:32:38.544 254096 DEBUG oslo_concurrency.processutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1d318e56-4a8c-4806-aa87-e837708f2a1f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqv7kf432 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:38 compute-0 nova_compute[254092]: 2025-11-25 16:32:38.693 254096 DEBUG oslo_concurrency.processutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1d318e56-4a8c-4806-aa87-e837708f2a1f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqv7kf432" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:38 compute-0 nova_compute[254092]: 2025-11-25 16:32:38.727 254096 DEBUG nova.storage.rbd_utils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] rbd image 1d318e56-4a8c-4806-aa87-e837708f2a1f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:38 compute-0 nova_compute[254092]: 2025-11-25 16:32:38.731 254096 DEBUG oslo_concurrency.processutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1d318e56-4a8c-4806-aa87-e837708f2a1f/disk.config 1d318e56-4a8c-4806-aa87-e837708f2a1f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:38 compute-0 nova_compute[254092]: 2025-11-25 16:32:38.838 254096 DEBUG nova.network.neutron [req-f0c618bc-8997-4e05-a844-5432075c7bcc req-3d80d345-6f09-4016-b04b-185b984b3f76 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Updated VIF entry in instance network info cache for port f95d61ca-d58c-4f07-879f-5e5412976e42. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:32:38 compute-0 nova_compute[254092]: 2025-11-25 16:32:38.839 254096 DEBUG nova.network.neutron [req-f0c618bc-8997-4e05-a844-5432075c7bcc req-3d80d345-6f09-4016-b04b-185b984b3f76 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Updating instance_info_cache with network_info: [{"id": "f95d61ca-d58c-4f07-879f-5e5412976e42", "address": "fa:16:3e:d7:b8:12", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf95d61ca-d5", "ovs_interfaceid": "f95d61ca-d58c-4f07-879f-5e5412976e42", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:32:38 compute-0 nova_compute[254092]: 2025-11-25 16:32:38.855 254096 DEBUG oslo_concurrency.lockutils [req-f0c618bc-8997-4e05-a844-5432075c7bcc req-3d80d345-6f09-4016-b04b-185b984b3f76 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-1d318e56-4a8c-4806-aa87-e837708f2a1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:32:38 compute-0 nova_compute[254092]: 2025-11-25 16:32:38.922 254096 DEBUG oslo_concurrency.processutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1d318e56-4a8c-4806-aa87-e837708f2a1f/disk.config 1d318e56-4a8c-4806-aa87-e837708f2a1f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.191s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:38 compute-0 nova_compute[254092]: 2025-11-25 16:32:38.922 254096 INFO nova.virt.libvirt.driver [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Deleting local config drive /var/lib/nova/instances/1d318e56-4a8c-4806-aa87-e837708f2a1f/disk.config because it was imported into RBD.
Nov 25 16:32:38 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:32:38 compute-0 kernel: tapf95d61ca-d5: entered promiscuous mode
Nov 25 16:32:38 compute-0 NetworkManager[48891]: <info>  [1764088358.9732] manager: (tapf95d61ca-d5): new Tun device (/org/freedesktop/NetworkManager/Devices/112)
Nov 25 16:32:38 compute-0 nova_compute[254092]: 2025-11-25 16:32:38.976 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:38 compute-0 ovn_controller[153477]: 2025-11-25T16:32:38Z|00222|binding|INFO|Claiming lport f95d61ca-d58c-4f07-879f-5e5412976e42 for this chassis.
Nov 25 16:32:38 compute-0 ovn_controller[153477]: 2025-11-25T16:32:38Z|00223|binding|INFO|f95d61ca-d58c-4f07-879f-5e5412976e42: Claiming fa:16:3e:d7:b8:12 10.100.0.12
Nov 25 16:32:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:38.998 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:b8:12 10.100.0.12'], port_security=['fa:16:3e:d7:b8:12 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '1d318e56-4a8c-4806-aa87-e837708f2a1f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36fd407edf16424e96041f57fccb9e2b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '59e04c7b-1fad-4d48-ad96-e3b3cf180d61', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a0fe150-4b32-4abf-ad06-055b2d97facb, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=f95d61ca-d58c-4f07-879f-5e5412976e42) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:32:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:39.000 163338 INFO neutron.agent.ovn.metadata.agent [-] Port f95d61ca-d58c-4f07-879f-5e5412976e42 in datapath 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d bound to our chassis
Nov 25 16:32:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:39.002 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d
Nov 25 16:32:39 compute-0 ovn_controller[153477]: 2025-11-25T16:32:39Z|00224|binding|INFO|Setting lport f95d61ca-d58c-4f07-879f-5e5412976e42 ovn-installed in OVS
Nov 25 16:32:39 compute-0 ovn_controller[153477]: 2025-11-25T16:32:39Z|00225|binding|INFO|Setting lport f95d61ca-d58c-4f07-879f-5e5412976e42 up in Southbound
Nov 25 16:32:39 compute-0 nova_compute[254092]: 2025-11-25 16:32:39.005 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:39 compute-0 systemd-udevd[295701]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:32:39 compute-0 systemd-machined[216343]: New machine qemu-41-instance-00000024.
Nov 25 16:32:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:39.027 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f8cc34f2-3490-40bf-932e-83869bc4ae1d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:39 compute-0 systemd[1]: Started Virtual Machine qemu-41-instance-00000024.
Nov 25 16:32:39 compute-0 NetworkManager[48891]: <info>  [1764088359.0357] device (tapf95d61ca-d5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:32:39 compute-0 NetworkManager[48891]: <info>  [1764088359.0371] device (tapf95d61ca-d5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:32:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:39.070 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[71b40e96-aa5c-45b9-9cb6-1848e46e422b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:39.074 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[500ddd29-c2b1-43e1-8458-744a52245df3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:39.110 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[fb221b10-415c-486e-b8f2-c4f10c6c2794]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:39.140 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[193e9e66-f3de-46b5-90ab-3448b0a1a3b9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52e7d5b9-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:97:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 480306, 'reachable_time': 15503, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295715, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:39.161 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cdc4fbbc-596e-4f94-9df7-0b8cb32cf383]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 480317, 'tstamp': 480317}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295716, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 480320, 'tstamp': 480320}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295716, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:39.163 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52e7d5b9-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:39 compute-0 nova_compute[254092]: 2025-11-25 16:32:39.164 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:39 compute-0 nova_compute[254092]: 2025-11-25 16:32:39.165 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:39.165 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52e7d5b9-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:39.166 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:32:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:39.166 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap52e7d5b9-00, col_values=(('external_ids', {'iface-id': 'af5d2618-f326-4aaf-9395-f47548ac108b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:39.166 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:32:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1357: 321 pgs: 321 active+clean; 353 MiB data, 515 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 9.3 MiB/s wr, 327 op/s
Nov 25 16:32:39 compute-0 ovn_controller[153477]: 2025-11-25T16:32:39Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f0:7b:ca 10.100.0.6
Nov 25 16:32:39 compute-0 ovn_controller[153477]: 2025-11-25T16:32:39Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f0:7b:ca 10.100.0.6
Nov 25 16:32:39 compute-0 nova_compute[254092]: 2025-11-25 16:32:39.582 254096 DEBUG nova.compute.manager [req-6b0f40ae-3bf0-44be-a7dc-a21210389da8 req-d6367c9e-ba92-4e6c-88d1-5e86b6ae9373 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Received event network-changed-e1641afa-e435-45ca-a0fe-d2bb9b12981a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:32:39 compute-0 nova_compute[254092]: 2025-11-25 16:32:39.583 254096 DEBUG nova.compute.manager [req-6b0f40ae-3bf0-44be-a7dc-a21210389da8 req-d6367c9e-ba92-4e6c-88d1-5e86b6ae9373 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Refreshing instance network info cache due to event network-changed-e1641afa-e435-45ca-a0fe-d2bb9b12981a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:32:39 compute-0 nova_compute[254092]: 2025-11-25 16:32:39.583 254096 DEBUG oslo_concurrency.lockutils [req-6b0f40ae-3bf0-44be-a7dc-a21210389da8 req-d6367c9e-ba92-4e6c-88d1-5e86b6ae9373 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-9bd4d655-c683-4433-a739-168946211a75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:32:39 compute-0 nova_compute[254092]: 2025-11-25 16:32:39.583 254096 DEBUG oslo_concurrency.lockutils [req-6b0f40ae-3bf0-44be-a7dc-a21210389da8 req-d6367c9e-ba92-4e6c-88d1-5e86b6ae9373 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-9bd4d655-c683-4433-a739-168946211a75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:32:39 compute-0 nova_compute[254092]: 2025-11-25 16:32:39.583 254096 DEBUG nova.network.neutron [req-6b0f40ae-3bf0-44be-a7dc-a21210389da8 req-d6367c9e-ba92-4e6c-88d1-5e86b6ae9373 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Refreshing network info cache for port e1641afa-e435-45ca-a0fe-d2bb9b12981a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:32:39 compute-0 nova_compute[254092]: 2025-11-25 16:32:39.593 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088359.5920138, 1d318e56-4a8c-4806-aa87-e837708f2a1f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:32:39 compute-0 nova_compute[254092]: 2025-11-25 16:32:39.593 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] VM Started (Lifecycle Event)
Nov 25 16:32:39 compute-0 nova_compute[254092]: 2025-11-25 16:32:39.628 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:32:39 compute-0 nova_compute[254092]: 2025-11-25 16:32:39.633 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088359.5921452, 1d318e56-4a8c-4806-aa87-e837708f2a1f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:32:39 compute-0 nova_compute[254092]: 2025-11-25 16:32:39.634 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] VM Paused (Lifecycle Event)
Nov 25 16:32:39 compute-0 nova_compute[254092]: 2025-11-25 16:32:39.654 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:32:39 compute-0 nova_compute[254092]: 2025-11-25 16:32:39.659 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:32:39 compute-0 nova_compute[254092]: 2025-11-25 16:32:39.683 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:32:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:32:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:32:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:32:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:32:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:32:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:32:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:32:40
Nov 25 16:32:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:32:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:32:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', 'volumes', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'images']
Nov 25 16:32:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:32:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:32:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:32:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:32:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:32:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:32:40 compute-0 nova_compute[254092]: 2025-11-25 16:32:40.315 254096 DEBUG oslo_concurrency.lockutils [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "2724cb7d-6b8e-4861-ae3d-72e34da31fe5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:40 compute-0 nova_compute[254092]: 2025-11-25 16:32:40.316 254096 DEBUG oslo_concurrency.lockutils [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "2724cb7d-6b8e-4861-ae3d-72e34da31fe5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:40 compute-0 nova_compute[254092]: 2025-11-25 16:32:40.316 254096 DEBUG oslo_concurrency.lockutils [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "2724cb7d-6b8e-4861-ae3d-72e34da31fe5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:40 compute-0 nova_compute[254092]: 2025-11-25 16:32:40.317 254096 DEBUG oslo_concurrency.lockutils [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "2724cb7d-6b8e-4861-ae3d-72e34da31fe5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:40 compute-0 nova_compute[254092]: 2025-11-25 16:32:40.317 254096 DEBUG oslo_concurrency.lockutils [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "2724cb7d-6b8e-4861-ae3d-72e34da31fe5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:40 compute-0 nova_compute[254092]: 2025-11-25 16:32:40.318 254096 INFO nova.compute.manager [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Terminating instance
Nov 25 16:32:40 compute-0 nova_compute[254092]: 2025-11-25 16:32:40.320 254096 DEBUG nova.compute.manager [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:32:40 compute-0 kernel: tap2a798aec-11 (unregistering): left promiscuous mode
Nov 25 16:32:40 compute-0 NetworkManager[48891]: <info>  [1764088360.3625] device (tap2a798aec-11): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:32:40 compute-0 nova_compute[254092]: 2025-11-25 16:32:40.370 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:40 compute-0 nova_compute[254092]: 2025-11-25 16:32:40.372 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:40 compute-0 ovn_controller[153477]: 2025-11-25T16:32:40Z|00226|binding|INFO|Releasing lport 2a798aec-112b-42d0-9128-639b456b201e from this chassis (sb_readonly=0)
Nov 25 16:32:40 compute-0 ovn_controller[153477]: 2025-11-25T16:32:40Z|00227|binding|INFO|Setting lport 2a798aec-112b-42d0-9128-639b456b201e down in Southbound
Nov 25 16:32:40 compute-0 ovn_controller[153477]: 2025-11-25T16:32:40Z|00228|binding|INFO|Removing iface tap2a798aec-11 ovn-installed in OVS
Nov 25 16:32:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:40.378 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:3e:5a 10.100.0.14'], port_security=['fa:16:3e:af:3e:5a 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '2724cb7d-6b8e-4861-ae3d-72e34da31fe5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50e18e22-7850-458c-8d66-5932e0495377', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ed571eebde434695bae813d7bb21f4c3', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd93cbc19-cf50-4620-aef2-ea907e30850a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aff6ec99-f8ff-4a75-9004-f758cb9a51ad, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=2a798aec-112b-42d0-9128-639b456b201e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:32:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:40.379 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 2a798aec-112b-42d0-9128-639b456b201e in datapath 50e18e22-7850-458c-8d66-5932e0495377 unbound from our chassis
Nov 25 16:32:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:40.380 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 50e18e22-7850-458c-8d66-5932e0495377, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:32:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:40.381 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9378c406-79f4-4040-80c1-034d2be85265]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:40.382 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-50e18e22-7850-458c-8d66-5932e0495377 namespace which is not needed anymore
Nov 25 16:32:40 compute-0 nova_compute[254092]: 2025-11-25 16:32:40.397 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:40 compute-0 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000022.scope: Deactivated successfully.
Nov 25 16:32:40 compute-0 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000022.scope: Consumed 3.833s CPU time.
Nov 25 16:32:40 compute-0 systemd-machined[216343]: Machine qemu-40-instance-00000022 terminated.
Nov 25 16:32:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:32:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:32:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:32:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:32:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:32:40 compute-0 ceph-mon[74985]: pgmap v1357: 321 pgs: 321 active+clean; 353 MiB data, 515 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 9.3 MiB/s wr, 327 op/s
Nov 25 16:32:40 compute-0 neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377[295509]: [NOTICE]   (295513) : haproxy version is 2.8.14-c23fe91
Nov 25 16:32:40 compute-0 neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377[295509]: [NOTICE]   (295513) : path to executable is /usr/sbin/haproxy
Nov 25 16:32:40 compute-0 neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377[295509]: [WARNING]  (295513) : Exiting Master process...
Nov 25 16:32:40 compute-0 neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377[295509]: [WARNING]  (295513) : Exiting Master process...
Nov 25 16:32:40 compute-0 neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377[295509]: [ALERT]    (295513) : Current worker (295515) exited with code 143 (Terminated)
Nov 25 16:32:40 compute-0 neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377[295509]: [WARNING]  (295513) : All workers exited. Exiting... (0)
Nov 25 16:32:40 compute-0 systemd[1]: libpod-a6ca155eb1f434f8462f2b8c75c4eb2b2f49726aa5907ef53fcef74074f15e68.scope: Deactivated successfully.
Nov 25 16:32:40 compute-0 conmon[295509]: conmon a6ca155eb1f434f8462f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a6ca155eb1f434f8462f2b8c75c4eb2b2f49726aa5907ef53fcef74074f15e68.scope/container/memory.events
Nov 25 16:32:40 compute-0 podman[295779]: 2025-11-25 16:32:40.532817823 +0000 UTC m=+0.046093130 container died a6ca155eb1f434f8462f2b8c75c4eb2b2f49726aa5907ef53fcef74074f15e68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:32:40 compute-0 NetworkManager[48891]: <info>  [1764088360.5372] manager: (tap2a798aec-11): new Tun device (/org/freedesktop/NetworkManager/Devices/113)
Nov 25 16:32:40 compute-0 nova_compute[254092]: 2025-11-25 16:32:40.539 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:40 compute-0 nova_compute[254092]: 2025-11-25 16:32:40.546 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:40 compute-0 nova_compute[254092]: 2025-11-25 16:32:40.550 254096 INFO nova.virt.libvirt.driver [-] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Instance destroyed successfully.
Nov 25 16:32:40 compute-0 nova_compute[254092]: 2025-11-25 16:32:40.550 254096 DEBUG nova.objects.instance [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lazy-loading 'resources' on Instance uuid 2724cb7d-6b8e-4861-ae3d-72e34da31fe5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:32:40 compute-0 nova_compute[254092]: 2025-11-25 16:32:40.566 254096 DEBUG nova.virt.libvirt.vif [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:32:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1220833469',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1220833469',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1220833469',id=34,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:32:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ed571eebde434695bae813d7bb21f4c3',ramdisk_id='',reservation_id='r-g9yti154',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-964953831',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-964953831-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:32:37Z,user_data=None,user_id='87058665de814ae0a51a12ff02b0d9aa',uuid=2724cb7d-6b8e-4861-ae3d-72e34da31fe5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2a798aec-112b-42d0-9128-639b456b201e", "address": "fa:16:3e:af:3e:5a", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a798aec-11", "ovs_interfaceid": "2a798aec-112b-42d0-9128-639b456b201e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:32:40 compute-0 nova_compute[254092]: 2025-11-25 16:32:40.566 254096 DEBUG nova.network.os_vif_util [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Converting VIF {"id": "2a798aec-112b-42d0-9128-639b456b201e", "address": "fa:16:3e:af:3e:5a", "network": {"id": "50e18e22-7850-458c-8d66-5932e0495377", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-543248452-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed571eebde434695bae813d7bb21f4c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a798aec-11", "ovs_interfaceid": "2a798aec-112b-42d0-9128-639b456b201e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:32:40 compute-0 nova_compute[254092]: 2025-11-25 16:32:40.567 254096 DEBUG nova.network.os_vif_util [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:3e:5a,bridge_name='br-int',has_traffic_filtering=True,id=2a798aec-112b-42d0-9128-639b456b201e,network=Network(50e18e22-7850-458c-8d66-5932e0495377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a798aec-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:32:40 compute-0 nova_compute[254092]: 2025-11-25 16:32:40.567 254096 DEBUG os_vif [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:3e:5a,bridge_name='br-int',has_traffic_filtering=True,id=2a798aec-112b-42d0-9128-639b456b201e,network=Network(50e18e22-7850-458c-8d66-5932e0495377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a798aec-11') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:32:40 compute-0 nova_compute[254092]: 2025-11-25 16:32:40.570 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:40 compute-0 nova_compute[254092]: 2025-11-25 16:32:40.570 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2a798aec-11, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:40 compute-0 nova_compute[254092]: 2025-11-25 16:32:40.573 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:40 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a6ca155eb1f434f8462f2b8c75c4eb2b2f49726aa5907ef53fcef74074f15e68-userdata-shm.mount: Deactivated successfully.
Nov 25 16:32:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-b70c021b67aa7e63f187afb6ab9f2ef19c3ce9218078adb4a42a3a569b52662a-merged.mount: Deactivated successfully.
Nov 25 16:32:40 compute-0 nova_compute[254092]: 2025-11-25 16:32:40.576 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:32:40 compute-0 nova_compute[254092]: 2025-11-25 16:32:40.579 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:40 compute-0 nova_compute[254092]: 2025-11-25 16:32:40.581 254096 INFO os_vif [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:3e:5a,bridge_name='br-int',has_traffic_filtering=True,id=2a798aec-112b-42d0-9128-639b456b201e,network=Network(50e18e22-7850-458c-8d66-5932e0495377),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a798aec-11')
Nov 25 16:32:40 compute-0 podman[295779]: 2025-11-25 16:32:40.597756152 +0000 UTC m=+0.111031449 container cleanup a6ca155eb1f434f8462f2b8c75c4eb2b2f49726aa5907ef53fcef74074f15e68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:32:40 compute-0 systemd[1]: libpod-conmon-a6ca155eb1f434f8462f2b8c75c4eb2b2f49726aa5907ef53fcef74074f15e68.scope: Deactivated successfully.
Nov 25 16:32:40 compute-0 podman[295831]: 2025-11-25 16:32:40.682004054 +0000 UTC m=+0.056750788 container remove a6ca155eb1f434f8462f2b8c75c4eb2b2f49726aa5907ef53fcef74074f15e68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:32:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:40.688 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ef0cf3cc-cfc0-4df2-a135-225948deb479]: (4, ('Tue Nov 25 04:32:40 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377 (a6ca155eb1f434f8462f2b8c75c4eb2b2f49726aa5907ef53fcef74074f15e68)\na6ca155eb1f434f8462f2b8c75c4eb2b2f49726aa5907ef53fcef74074f15e68\nTue Nov 25 04:32:40 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-50e18e22-7850-458c-8d66-5932e0495377 (a6ca155eb1f434f8462f2b8c75c4eb2b2f49726aa5907ef53fcef74074f15e68)\na6ca155eb1f434f8462f2b8c75c4eb2b2f49726aa5907ef53fcef74074f15e68\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:40.690 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4a89221f-c9e6-4625-9ad7-923c4f99d8f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:40.691 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50e18e22-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:40 compute-0 nova_compute[254092]: 2025-11-25 16:32:40.692 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:40 compute-0 kernel: tap50e18e22-70: left promiscuous mode
Nov 25 16:32:40 compute-0 nova_compute[254092]: 2025-11-25 16:32:40.709 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:40.711 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c0804e9a-a673-4a35-bf0f-33e5feccc357]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:40.727 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[584e1d2b-9d57-49af-b8f5-b0eb56aaa7fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:40.728 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[299d21c2-0e2c-49de-bf9c-0c3ac9053a08]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:40.747 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b8508c59-5618-4c0c-a826-e7439e2d8327]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 481345, 'reachable_time': 38178, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295846, 'error': None, 'target': 'ovnmeta-50e18e22-7850-458c-8d66-5932e0495377', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:40 compute-0 systemd[1]: run-netns-ovnmeta\x2d50e18e22\x2d7850\x2d458c\x2d8d66\x2d5932e0495377.mount: Deactivated successfully.
Nov 25 16:32:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:40.751 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-50e18e22-7850-458c-8d66-5932e0495377 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:32:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:40.751 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[700382a7-8a85-4d3f-8e9c-e6ce43a9ae97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:40 compute-0 nova_compute[254092]: 2025-11-25 16:32:40.916 254096 DEBUG nova.compute.manager [req-97d25750-56f1-4b62-aafb-077211c59faf req-05878168-b390-4e21-b693-073bf98ffedb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Received event network-vif-unplugged-2a798aec-112b-42d0-9128-639b456b201e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:32:40 compute-0 nova_compute[254092]: 2025-11-25 16:32:40.917 254096 DEBUG oslo_concurrency.lockutils [req-97d25750-56f1-4b62-aafb-077211c59faf req-05878168-b390-4e21-b693-073bf98ffedb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2724cb7d-6b8e-4861-ae3d-72e34da31fe5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:40 compute-0 nova_compute[254092]: 2025-11-25 16:32:40.917 254096 DEBUG oslo_concurrency.lockutils [req-97d25750-56f1-4b62-aafb-077211c59faf req-05878168-b390-4e21-b693-073bf98ffedb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2724cb7d-6b8e-4861-ae3d-72e34da31fe5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:40 compute-0 nova_compute[254092]: 2025-11-25 16:32:40.917 254096 DEBUG oslo_concurrency.lockutils [req-97d25750-56f1-4b62-aafb-077211c59faf req-05878168-b390-4e21-b693-073bf98ffedb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2724cb7d-6b8e-4861-ae3d-72e34da31fe5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:40 compute-0 nova_compute[254092]: 2025-11-25 16:32:40.917 254096 DEBUG nova.compute.manager [req-97d25750-56f1-4b62-aafb-077211c59faf req-05878168-b390-4e21-b693-073bf98ffedb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] No waiting events found dispatching network-vif-unplugged-2a798aec-112b-42d0-9128-639b456b201e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:32:40 compute-0 nova_compute[254092]: 2025-11-25 16:32:40.917 254096 DEBUG nova.compute.manager [req-97d25750-56f1-4b62-aafb-077211c59faf req-05878168-b390-4e21-b693-073bf98ffedb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Received event network-vif-unplugged-2a798aec-112b-42d0-9128-639b456b201e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:32:40 compute-0 nova_compute[254092]: 2025-11-25 16:32:40.982 254096 DEBUG nova.network.neutron [req-6b0f40ae-3bf0-44be-a7dc-a21210389da8 req-d6367c9e-ba92-4e6c-88d1-5e86b6ae9373 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Updated VIF entry in instance network info cache for port e1641afa-e435-45ca-a0fe-d2bb9b12981a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:32:40 compute-0 nova_compute[254092]: 2025-11-25 16:32:40.982 254096 DEBUG nova.network.neutron [req-6b0f40ae-3bf0-44be-a7dc-a21210389da8 req-d6367c9e-ba92-4e6c-88d1-5e86b6ae9373 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Updating instance_info_cache with network_info: [{"id": "e1641afa-e435-45ca-a0fe-d2bb9b12981a", "address": "fa:16:3e:3d:bb:1b", "network": {"id": "7e8b56bc-492b-4082-b8de-60d496652da7", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2034568195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ae7f32b97104afd930af5d5f5754532", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1641afa-e4", "ovs_interfaceid": "e1641afa-e435-45ca-a0fe-d2bb9b12981a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:32:40 compute-0 nova_compute[254092]: 2025-11-25 16:32:40.988 254096 INFO nova.virt.libvirt.driver [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Deleting instance files /var/lib/nova/instances/2724cb7d-6b8e-4861-ae3d-72e34da31fe5_del
Nov 25 16:32:40 compute-0 nova_compute[254092]: 2025-11-25 16:32:40.988 254096 INFO nova.virt.libvirt.driver [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Deletion of /var/lib/nova/instances/2724cb7d-6b8e-4861-ae3d-72e34da31fe5_del complete
Nov 25 16:32:41 compute-0 nova_compute[254092]: 2025-11-25 16:32:41.005 254096 DEBUG oslo_concurrency.lockutils [req-6b0f40ae-3bf0-44be-a7dc-a21210389da8 req-d6367c9e-ba92-4e6c-88d1-5e86b6ae9373 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-9bd4d655-c683-4433-a739-168946211a75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:32:41 compute-0 nova_compute[254092]: 2025-11-25 16:32:41.038 254096 INFO nova.compute.manager [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Took 0.72 seconds to destroy the instance on the hypervisor.
Nov 25 16:32:41 compute-0 nova_compute[254092]: 2025-11-25 16:32:41.038 254096 DEBUG oslo.service.loopingcall [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:32:41 compute-0 nova_compute[254092]: 2025-11-25 16:32:41.038 254096 DEBUG nova.compute.manager [-] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:32:41 compute-0 nova_compute[254092]: 2025-11-25 16:32:41.038 254096 DEBUG nova.network.neutron [-] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:32:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1358: 321 pgs: 321 active+clean; 381 MiB data, 528 MiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 11 MiB/s wr, 537 op/s
Nov 25 16:32:42 compute-0 nova_compute[254092]: 2025-11-25 16:32:42.272 254096 DEBUG nova.network.neutron [-] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:32:42 compute-0 nova_compute[254092]: 2025-11-25 16:32:42.342 254096 INFO nova.compute.manager [-] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Took 1.30 seconds to deallocate network for instance.
Nov 25 16:32:42 compute-0 nova_compute[254092]: 2025-11-25 16:32:42.444 254096 DEBUG oslo_concurrency.lockutils [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:42 compute-0 nova_compute[254092]: 2025-11-25 16:32:42.445 254096 DEBUG oslo_concurrency.lockutils [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:42 compute-0 nova_compute[254092]: 2025-11-25 16:32:42.455 254096 DEBUG nova.compute.manager [req-7be9e220-1aab-4354-805b-91a6a2779011 req-333c5a59-7c10-4849-a3b2-a44c8995d2af a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Received event network-vif-plugged-f95d61ca-d58c-4f07-879f-5e5412976e42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:32:42 compute-0 nova_compute[254092]: 2025-11-25 16:32:42.456 254096 DEBUG oslo_concurrency.lockutils [req-7be9e220-1aab-4354-805b-91a6a2779011 req-333c5a59-7c10-4849-a3b2-a44c8995d2af a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:42 compute-0 nova_compute[254092]: 2025-11-25 16:32:42.456 254096 DEBUG oslo_concurrency.lockutils [req-7be9e220-1aab-4354-805b-91a6a2779011 req-333c5a59-7c10-4849-a3b2-a44c8995d2af a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:42 compute-0 nova_compute[254092]: 2025-11-25 16:32:42.456 254096 DEBUG oslo_concurrency.lockutils [req-7be9e220-1aab-4354-805b-91a6a2779011 req-333c5a59-7c10-4849-a3b2-a44c8995d2af a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:42 compute-0 nova_compute[254092]: 2025-11-25 16:32:42.456 254096 DEBUG nova.compute.manager [req-7be9e220-1aab-4354-805b-91a6a2779011 req-333c5a59-7c10-4849-a3b2-a44c8995d2af a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Processing event network-vif-plugged-f95d61ca-d58c-4f07-879f-5e5412976e42 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:32:42 compute-0 nova_compute[254092]: 2025-11-25 16:32:42.457 254096 DEBUG nova.compute.manager [req-7be9e220-1aab-4354-805b-91a6a2779011 req-333c5a59-7c10-4849-a3b2-a44c8995d2af a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Received event network-vif-plugged-f95d61ca-d58c-4f07-879f-5e5412976e42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:32:42 compute-0 nova_compute[254092]: 2025-11-25 16:32:42.457 254096 DEBUG oslo_concurrency.lockutils [req-7be9e220-1aab-4354-805b-91a6a2779011 req-333c5a59-7c10-4849-a3b2-a44c8995d2af a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:42 compute-0 nova_compute[254092]: 2025-11-25 16:32:42.457 254096 DEBUG oslo_concurrency.lockutils [req-7be9e220-1aab-4354-805b-91a6a2779011 req-333c5a59-7c10-4849-a3b2-a44c8995d2af a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:42 compute-0 nova_compute[254092]: 2025-11-25 16:32:42.457 254096 DEBUG oslo_concurrency.lockutils [req-7be9e220-1aab-4354-805b-91a6a2779011 req-333c5a59-7c10-4849-a3b2-a44c8995d2af a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:42 compute-0 nova_compute[254092]: 2025-11-25 16:32:42.457 254096 DEBUG nova.compute.manager [req-7be9e220-1aab-4354-805b-91a6a2779011 req-333c5a59-7c10-4849-a3b2-a44c8995d2af a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] No waiting events found dispatching network-vif-plugged-f95d61ca-d58c-4f07-879f-5e5412976e42 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:32:42 compute-0 nova_compute[254092]: 2025-11-25 16:32:42.458 254096 WARNING nova.compute.manager [req-7be9e220-1aab-4354-805b-91a6a2779011 req-333c5a59-7c10-4849-a3b2-a44c8995d2af a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Received unexpected event network-vif-plugged-f95d61ca-d58c-4f07-879f-5e5412976e42 for instance with vm_state building and task_state spawning.
Nov 25 16:32:42 compute-0 nova_compute[254092]: 2025-11-25 16:32:42.459 254096 DEBUG nova.compute.manager [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:32:42 compute-0 nova_compute[254092]: 2025-11-25 16:32:42.465 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088362.4633458, 1d318e56-4a8c-4806-aa87-e837708f2a1f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:32:42 compute-0 nova_compute[254092]: 2025-11-25 16:32:42.465 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] VM Resumed (Lifecycle Event)
Nov 25 16:32:42 compute-0 nova_compute[254092]: 2025-11-25 16:32:42.466 254096 DEBUG nova.virt.libvirt.driver [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:32:42 compute-0 nova_compute[254092]: 2025-11-25 16:32:42.470 254096 INFO nova.virt.libvirt.driver [-] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Instance spawned successfully.
Nov 25 16:32:42 compute-0 nova_compute[254092]: 2025-11-25 16:32:42.470 254096 DEBUG nova.virt.libvirt.driver [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:32:42 compute-0 nova_compute[254092]: 2025-11-25 16:32:42.495 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:32:42 compute-0 nova_compute[254092]: 2025-11-25 16:32:42.501 254096 DEBUG nova.virt.libvirt.driver [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:42 compute-0 nova_compute[254092]: 2025-11-25 16:32:42.501 254096 DEBUG nova.virt.libvirt.driver [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:42 compute-0 nova_compute[254092]: 2025-11-25 16:32:42.501 254096 DEBUG nova.virt.libvirt.driver [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:42 compute-0 nova_compute[254092]: 2025-11-25 16:32:42.501 254096 DEBUG nova.virt.libvirt.driver [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:42 compute-0 nova_compute[254092]: 2025-11-25 16:32:42.502 254096 DEBUG nova.virt.libvirt.driver [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:42 compute-0 nova_compute[254092]: 2025-11-25 16:32:42.502 254096 DEBUG nova.virt.libvirt.driver [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:42 compute-0 nova_compute[254092]: 2025-11-25 16:32:42.508 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:32:42 compute-0 ceph-mon[74985]: pgmap v1358: 321 pgs: 321 active+clean; 381 MiB data, 528 MiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 11 MiB/s wr, 537 op/s
Nov 25 16:32:42 compute-0 nova_compute[254092]: 2025-11-25 16:32:42.539 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:32:42 compute-0 nova_compute[254092]: 2025-11-25 16:32:42.564 254096 INFO nova.compute.manager [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Took 8.90 seconds to spawn the instance on the hypervisor.
Nov 25 16:32:42 compute-0 nova_compute[254092]: 2025-11-25 16:32:42.564 254096 DEBUG nova.compute.manager [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:32:42 compute-0 nova_compute[254092]: 2025-11-25 16:32:42.591 254096 DEBUG oslo_concurrency.processutils [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:42 compute-0 nova_compute[254092]: 2025-11-25 16:32:42.626 254096 INFO nova.compute.manager [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Took 10.01 seconds to build instance.
Nov 25 16:32:42 compute-0 nova_compute[254092]: 2025-11-25 16:32:42.640 254096 DEBUG oslo_concurrency.lockutils [None req-79a49a0d-5fd4-408f-a5f5-a7ae31ee6b84 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "1d318e56-4a8c-4806-aa87-e837708f2a1f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.080s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:42 compute-0 nova_compute[254092]: 2025-11-25 16:32:42.642 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:43 compute-0 nova_compute[254092]: 2025-11-25 16:32:43.002 254096 DEBUG nova.compute.manager [req-5681f14e-87d9-4689-83f2-d78a079d0f07 req-e7207a18-3791-4e01-b39a-8351c0f4f1a0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Received event network-vif-plugged-2a798aec-112b-42d0-9128-639b456b201e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:32:43 compute-0 nova_compute[254092]: 2025-11-25 16:32:43.003 254096 DEBUG oslo_concurrency.lockutils [req-5681f14e-87d9-4689-83f2-d78a079d0f07 req-e7207a18-3791-4e01-b39a-8351c0f4f1a0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2724cb7d-6b8e-4861-ae3d-72e34da31fe5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:43 compute-0 nova_compute[254092]: 2025-11-25 16:32:43.003 254096 DEBUG oslo_concurrency.lockutils [req-5681f14e-87d9-4689-83f2-d78a079d0f07 req-e7207a18-3791-4e01-b39a-8351c0f4f1a0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2724cb7d-6b8e-4861-ae3d-72e34da31fe5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:43 compute-0 nova_compute[254092]: 2025-11-25 16:32:43.003 254096 DEBUG oslo_concurrency.lockutils [req-5681f14e-87d9-4689-83f2-d78a079d0f07 req-e7207a18-3791-4e01-b39a-8351c0f4f1a0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2724cb7d-6b8e-4861-ae3d-72e34da31fe5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:43 compute-0 nova_compute[254092]: 2025-11-25 16:32:43.003 254096 DEBUG nova.compute.manager [req-5681f14e-87d9-4689-83f2-d78a079d0f07 req-e7207a18-3791-4e01-b39a-8351c0f4f1a0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] No waiting events found dispatching network-vif-plugged-2a798aec-112b-42d0-9128-639b456b201e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:32:43 compute-0 nova_compute[254092]: 2025-11-25 16:32:43.004 254096 WARNING nova.compute.manager [req-5681f14e-87d9-4689-83f2-d78a079d0f07 req-e7207a18-3791-4e01-b39a-8351c0f4f1a0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Received unexpected event network-vif-plugged-2a798aec-112b-42d0-9128-639b456b201e for instance with vm_state deleted and task_state None.
Nov 25 16:32:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:32:43 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/925592700' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:32:43 compute-0 nova_compute[254092]: 2025-11-25 16:32:43.045 254096 DEBUG oslo_concurrency.processutils [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:43 compute-0 nova_compute[254092]: 2025-11-25 16:32:43.053 254096 DEBUG nova.compute.provider_tree [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:32:43 compute-0 nova_compute[254092]: 2025-11-25 16:32:43.065 254096 DEBUG nova.scheduler.client.report [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:32:43 compute-0 nova_compute[254092]: 2025-11-25 16:32:43.085 254096 DEBUG oslo_concurrency.lockutils [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:43 compute-0 nova_compute[254092]: 2025-11-25 16:32:43.107 254096 INFO nova.scheduler.client.report [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Deleted allocations for instance 2724cb7d-6b8e-4861-ae3d-72e34da31fe5
Nov 25 16:32:43 compute-0 nova_compute[254092]: 2025-11-25 16:32:43.154 254096 DEBUG oslo_concurrency.lockutils [None req-b3a12a9b-e80a-475c-a982-37bbdc3414df 87058665de814ae0a51a12ff02b0d9aa ed571eebde434695bae813d7bb21f4c3 - - default default] Lock "2724cb7d-6b8e-4861-ae3d-72e34da31fe5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1359: 321 pgs: 321 active+clean; 381 MiB data, 528 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 5.7 MiB/s wr, 378 op/s
Nov 25 16:32:43 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/925592700' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:32:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:32:44 compute-0 nova_compute[254092]: 2025-11-25 16:32:44.382 254096 DEBUG nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 16:32:44 compute-0 ceph-mon[74985]: pgmap v1359: 321 pgs: 321 active+clean; 381 MiB data, 528 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 5.7 MiB/s wr, 378 op/s
Nov 25 16:32:45 compute-0 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Nov 25 16:32:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1360: 321 pgs: 321 active+clean; 357 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 7.1 MiB/s wr, 497 op/s
Nov 25 16:32:45 compute-0 nova_compute[254092]: 2025-11-25 16:32:45.575 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:46 compute-0 nova_compute[254092]: 2025-11-25 16:32:46.340 254096 DEBUG nova.compute.manager [req-ec45d5dd-795a-48e6-a537-f62ad154d451 req-3cf377cf-1f11-4479-81d3-f711e1216cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Received event network-vif-deleted-2a798aec-112b-42d0-9128-639b456b201e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:32:46 compute-0 nova_compute[254092]: 2025-11-25 16:32:46.341 254096 DEBUG nova.compute.manager [req-ec45d5dd-795a-48e6-a537-f62ad154d451 req-3cf377cf-1f11-4479-81d3-f711e1216cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Received event network-changed-e1641afa-e435-45ca-a0fe-d2bb9b12981a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:32:46 compute-0 nova_compute[254092]: 2025-11-25 16:32:46.342 254096 DEBUG nova.compute.manager [req-ec45d5dd-795a-48e6-a537-f62ad154d451 req-3cf377cf-1f11-4479-81d3-f711e1216cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Refreshing instance network info cache due to event network-changed-e1641afa-e435-45ca-a0fe-d2bb9b12981a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:32:46 compute-0 nova_compute[254092]: 2025-11-25 16:32:46.342 254096 DEBUG oslo_concurrency.lockutils [req-ec45d5dd-795a-48e6-a537-f62ad154d451 req-3cf377cf-1f11-4479-81d3-f711e1216cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-9bd4d655-c683-4433-a739-168946211a75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:32:46 compute-0 nova_compute[254092]: 2025-11-25 16:32:46.343 254096 DEBUG oslo_concurrency.lockutils [req-ec45d5dd-795a-48e6-a537-f62ad154d451 req-3cf377cf-1f11-4479-81d3-f711e1216cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-9bd4d655-c683-4433-a739-168946211a75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:32:46 compute-0 nova_compute[254092]: 2025-11-25 16:32:46.343 254096 DEBUG nova.network.neutron [req-ec45d5dd-795a-48e6-a537-f62ad154d451 req-3cf377cf-1f11-4479-81d3-f711e1216cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Refreshing network info cache for port e1641afa-e435-45ca-a0fe-d2bb9b12981a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:32:46 compute-0 ceph-mon[74985]: pgmap v1360: 321 pgs: 321 active+clean; 357 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 7.1 MiB/s wr, 497 op/s
Nov 25 16:32:47 compute-0 nova_compute[254092]: 2025-11-25 16:32:47.265 254096 DEBUG nova.compute.manager [req-ec829654-1465-44e4-9dbd-a2e082cbddd0 req-5380acdc-8d77-426c-8b36-8baba5eac766 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Received event network-changed-82f63517-7636-46bf-b4e1-ba191ddad018 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:32:47 compute-0 nova_compute[254092]: 2025-11-25 16:32:47.265 254096 DEBUG nova.compute.manager [req-ec829654-1465-44e4-9dbd-a2e082cbddd0 req-5380acdc-8d77-426c-8b36-8baba5eac766 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Refreshing instance network info cache due to event network-changed-82f63517-7636-46bf-b4e1-ba191ddad018. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:32:47 compute-0 nova_compute[254092]: 2025-11-25 16:32:47.266 254096 DEBUG oslo_concurrency.lockutils [req-ec829654-1465-44e4-9dbd-a2e082cbddd0 req-5380acdc-8d77-426c-8b36-8baba5eac766 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-800c66e3-ee9f-4766-92f2-ecda5671cde3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:32:47 compute-0 nova_compute[254092]: 2025-11-25 16:32:47.266 254096 DEBUG oslo_concurrency.lockutils [req-ec829654-1465-44e4-9dbd-a2e082cbddd0 req-5380acdc-8d77-426c-8b36-8baba5eac766 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-800c66e3-ee9f-4766-92f2-ecda5671cde3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:32:47 compute-0 nova_compute[254092]: 2025-11-25 16:32:47.266 254096 DEBUG nova.network.neutron [req-ec829654-1465-44e4-9dbd-a2e082cbddd0 req-5380acdc-8d77-426c-8b36-8baba5eac766 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Refreshing network info cache for port 82f63517-7636-46bf-b4e1-ba191ddad018 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:32:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1361: 321 pgs: 321 active+clean; 366 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 4.6 MiB/s wr, 407 op/s
Nov 25 16:32:47 compute-0 nova_compute[254092]: 2025-11-25 16:32:47.644 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:47 compute-0 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000023.scope: Deactivated successfully.
Nov 25 16:32:47 compute-0 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000023.scope: Consumed 13.213s CPU time.
Nov 25 16:32:47 compute-0 systemd-machined[216343]: Machine qemu-38-instance-00000023 terminated.
Nov 25 16:32:48 compute-0 nova_compute[254092]: 2025-11-25 16:32:48.281 254096 DEBUG nova.network.neutron [req-ec45d5dd-795a-48e6-a537-f62ad154d451 req-3cf377cf-1f11-4479-81d3-f711e1216cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Updated VIF entry in instance network info cache for port e1641afa-e435-45ca-a0fe-d2bb9b12981a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:32:48 compute-0 nova_compute[254092]: 2025-11-25 16:32:48.281 254096 DEBUG nova.network.neutron [req-ec45d5dd-795a-48e6-a537-f62ad154d451 req-3cf377cf-1f11-4479-81d3-f711e1216cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Updating instance_info_cache with network_info: [{"id": "e1641afa-e435-45ca-a0fe-d2bb9b12981a", "address": "fa:16:3e:3d:bb:1b", "network": {"id": "7e8b56bc-492b-4082-b8de-60d496652da7", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2034568195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ae7f32b97104afd930af5d5f5754532", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1641afa-e4", "ovs_interfaceid": "e1641afa-e435-45ca-a0fe-d2bb9b12981a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:32:48 compute-0 nova_compute[254092]: 2025-11-25 16:32:48.297 254096 DEBUG oslo_concurrency.lockutils [req-ec45d5dd-795a-48e6-a537-f62ad154d451 req-3cf377cf-1f11-4479-81d3-f711e1216cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-9bd4d655-c683-4433-a739-168946211a75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:32:48 compute-0 nova_compute[254092]: 2025-11-25 16:32:48.298 254096 DEBUG nova.compute.manager [req-ec45d5dd-795a-48e6-a537-f62ad154d451 req-3cf377cf-1f11-4479-81d3-f711e1216cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Received event network-changed-5cf8fe87-4cac-403f-8611-0ddb37516abd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:32:48 compute-0 nova_compute[254092]: 2025-11-25 16:32:48.298 254096 DEBUG nova.compute.manager [req-ec45d5dd-795a-48e6-a537-f62ad154d451 req-3cf377cf-1f11-4479-81d3-f711e1216cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Refreshing instance network info cache due to event network-changed-5cf8fe87-4cac-403f-8611-0ddb37516abd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:32:48 compute-0 nova_compute[254092]: 2025-11-25 16:32:48.298 254096 DEBUG oslo_concurrency.lockutils [req-ec45d5dd-795a-48e6-a537-f62ad154d451 req-3cf377cf-1f11-4479-81d3-f711e1216cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:32:48 compute-0 nova_compute[254092]: 2025-11-25 16:32:48.298 254096 DEBUG oslo_concurrency.lockutils [req-ec45d5dd-795a-48e6-a537-f62ad154d451 req-3cf377cf-1f11-4479-81d3-f711e1216cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:32:48 compute-0 nova_compute[254092]: 2025-11-25 16:32:48.298 254096 DEBUG nova.network.neutron [req-ec45d5dd-795a-48e6-a537-f62ad154d451 req-3cf377cf-1f11-4479-81d3-f711e1216cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Refreshing network info cache for port 5cf8fe87-4cac-403f-8611-0ddb37516abd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:32:48 compute-0 nova_compute[254092]: 2025-11-25 16:32:48.404 254096 INFO nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Instance shutdown successfully after 14 seconds.
Nov 25 16:32:48 compute-0 nova_compute[254092]: 2025-11-25 16:32:48.411 254096 INFO nova.virt.libvirt.driver [-] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Instance destroyed successfully.
Nov 25 16:32:48 compute-0 nova_compute[254092]: 2025-11-25 16:32:48.418 254096 INFO nova.virt.libvirt.driver [-] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Instance destroyed successfully.
Nov 25 16:32:48 compute-0 ceph-mon[74985]: pgmap v1361: 321 pgs: 321 active+clean; 366 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 4.6 MiB/s wr, 407 op/s
Nov 25 16:32:48 compute-0 nova_compute[254092]: 2025-11-25 16:32:48.882 254096 INFO nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Deleting instance files /var/lib/nova/instances/616ec95d-6c7d-420e-991d-3cbc11339768_del
Nov 25 16:32:48 compute-0 nova_compute[254092]: 2025-11-25 16:32:48.883 254096 INFO nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Deletion of /var/lib/nova/instances/616ec95d-6c7d-420e-991d-3cbc11339768_del complete
Nov 25 16:32:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:32:49 compute-0 nova_compute[254092]: 2025-11-25 16:32:49.040 254096 DEBUG nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:32:49 compute-0 nova_compute[254092]: 2025-11-25 16:32:49.042 254096 INFO nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Creating image(s)
Nov 25 16:32:49 compute-0 nova_compute[254092]: 2025-11-25 16:32:49.063 254096 DEBUG nova.storage.rbd_utils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] rbd image 616ec95d-6c7d-420e-991d-3cbc11339768_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:49 compute-0 nova_compute[254092]: 2025-11-25 16:32:49.090 254096 DEBUG nova.storage.rbd_utils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] rbd image 616ec95d-6c7d-420e-991d-3cbc11339768_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:49 compute-0 nova_compute[254092]: 2025-11-25 16:32:49.114 254096 DEBUG nova.storage.rbd_utils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] rbd image 616ec95d-6c7d-420e-991d-3cbc11339768_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:49 compute-0 nova_compute[254092]: 2025-11-25 16:32:49.118 254096 DEBUG oslo_concurrency.processutils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:49 compute-0 nova_compute[254092]: 2025-11-25 16:32:49.193 254096 DEBUG oslo_concurrency.processutils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:49 compute-0 nova_compute[254092]: 2025-11-25 16:32:49.195 254096 DEBUG oslo_concurrency.lockutils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Acquiring lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:49 compute-0 nova_compute[254092]: 2025-11-25 16:32:49.196 254096 DEBUG oslo_concurrency.lockutils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:49 compute-0 nova_compute[254092]: 2025-11-25 16:32:49.197 254096 DEBUG oslo_concurrency.lockutils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:49 compute-0 nova_compute[254092]: 2025-11-25 16:32:49.218 254096 DEBUG nova.storage.rbd_utils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] rbd image 616ec95d-6c7d-420e-991d-3cbc11339768_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:49 compute-0 nova_compute[254092]: 2025-11-25 16:32:49.220 254096 DEBUG oslo_concurrency.processutils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 616ec95d-6c7d-420e-991d-3cbc11339768_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1362: 321 pgs: 321 active+clean; 366 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 4.3 MiB/s wr, 357 op/s
Nov 25 16:32:49 compute-0 nova_compute[254092]: 2025-11-25 16:32:49.336 254096 DEBUG nova.network.neutron [req-ec829654-1465-44e4-9dbd-a2e082cbddd0 req-5380acdc-8d77-426c-8b36-8baba5eac766 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Updated VIF entry in instance network info cache for port 82f63517-7636-46bf-b4e1-ba191ddad018. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:32:49 compute-0 nova_compute[254092]: 2025-11-25 16:32:49.337 254096 DEBUG nova.network.neutron [req-ec829654-1465-44e4-9dbd-a2e082cbddd0 req-5380acdc-8d77-426c-8b36-8baba5eac766 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Updating instance_info_cache with network_info: [{"id": "82f63517-7636-46bf-b4e1-ba191ddad018", "address": "fa:16:3e:f0:7b:ca", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82f63517-76", "ovs_interfaceid": "82f63517-7636-46bf-b4e1-ba191ddad018", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:32:49 compute-0 nova_compute[254092]: 2025-11-25 16:32:49.357 254096 DEBUG nova.compute.manager [req-b12f1624-8947-41c0-b9bc-fa2637ea9d9d req-9615ee3e-294e-4c4e-bb29-2dd56e003157 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Received event network-changed-f95d61ca-d58c-4f07-879f-5e5412976e42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:32:49 compute-0 nova_compute[254092]: 2025-11-25 16:32:49.358 254096 DEBUG nova.compute.manager [req-b12f1624-8947-41c0-b9bc-fa2637ea9d9d req-9615ee3e-294e-4c4e-bb29-2dd56e003157 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Refreshing instance network info cache due to event network-changed-f95d61ca-d58c-4f07-879f-5e5412976e42. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:32:49 compute-0 nova_compute[254092]: 2025-11-25 16:32:49.358 254096 DEBUG oslo_concurrency.lockutils [req-b12f1624-8947-41c0-b9bc-fa2637ea9d9d req-9615ee3e-294e-4c4e-bb29-2dd56e003157 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-1d318e56-4a8c-4806-aa87-e837708f2a1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:32:49 compute-0 nova_compute[254092]: 2025-11-25 16:32:49.358 254096 DEBUG oslo_concurrency.lockutils [req-b12f1624-8947-41c0-b9bc-fa2637ea9d9d req-9615ee3e-294e-4c4e-bb29-2dd56e003157 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-1d318e56-4a8c-4806-aa87-e837708f2a1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:32:49 compute-0 nova_compute[254092]: 2025-11-25 16:32:49.359 254096 DEBUG nova.network.neutron [req-b12f1624-8947-41c0-b9bc-fa2637ea9d9d req-9615ee3e-294e-4c4e-bb29-2dd56e003157 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Refreshing network info cache for port f95d61ca-d58c-4f07-879f-5e5412976e42 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:32:49 compute-0 nova_compute[254092]: 2025-11-25 16:32:49.366 254096 DEBUG oslo_concurrency.lockutils [req-ec829654-1465-44e4-9dbd-a2e082cbddd0 req-5380acdc-8d77-426c-8b36-8baba5eac766 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-800c66e3-ee9f-4766-92f2-ecda5671cde3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:32:49 compute-0 nova_compute[254092]: 2025-11-25 16:32:49.483 254096 DEBUG oslo_concurrency.processutils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 616ec95d-6c7d-420e-991d-3cbc11339768_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.263s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:49 compute-0 nova_compute[254092]: 2025-11-25 16:32:49.531 254096 DEBUG nova.storage.rbd_utils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] resizing rbd image 616ec95d-6c7d-420e-991d-3cbc11339768_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:32:49 compute-0 ovn_controller[153477]: 2025-11-25T16:32:49Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8c:8a:c5 10.100.0.9
Nov 25 16:32:49 compute-0 ovn_controller[153477]: 2025-11-25T16:32:49Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8c:8a:c5 10.100.0.9
Nov 25 16:32:49 compute-0 nova_compute[254092]: 2025-11-25 16:32:49.605 254096 DEBUG nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:32:49 compute-0 nova_compute[254092]: 2025-11-25 16:32:49.605 254096 DEBUG nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Ensure instance console log exists: /var/lib/nova/instances/616ec95d-6c7d-420e-991d-3cbc11339768/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:32:49 compute-0 nova_compute[254092]: 2025-11-25 16:32:49.606 254096 DEBUG oslo_concurrency.lockutils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:49 compute-0 nova_compute[254092]: 2025-11-25 16:32:49.606 254096 DEBUG oslo_concurrency.lockutils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:49 compute-0 nova_compute[254092]: 2025-11-25 16:32:49.606 254096 DEBUG oslo_concurrency.lockutils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:49 compute-0 nova_compute[254092]: 2025-11-25 16:32:49.607 254096 DEBUG nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:36Z,direct_url=<?>,disk_format='qcow2',id=a4aa3708-bb73-4b5a-b3f3-42153358021e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:32:49 compute-0 nova_compute[254092]: 2025-11-25 16:32:49.613 254096 WARNING nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Nov 25 16:32:49 compute-0 nova_compute[254092]: 2025-11-25 16:32:49.621 254096 DEBUG nova.virt.libvirt.host [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:32:49 compute-0 nova_compute[254092]: 2025-11-25 16:32:49.621 254096 DEBUG nova.virt.libvirt.host [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:32:49 compute-0 nova_compute[254092]: 2025-11-25 16:32:49.626 254096 DEBUG nova.virt.libvirt.host [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:32:49 compute-0 nova_compute[254092]: 2025-11-25 16:32:49.626 254096 DEBUG nova.virt.libvirt.host [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:32:49 compute-0 nova_compute[254092]: 2025-11-25 16:32:49.626 254096 DEBUG nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:32:49 compute-0 nova_compute[254092]: 2025-11-25 16:32:49.627 254096 DEBUG nova.virt.hardware [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:36Z,direct_url=<?>,disk_format='qcow2',id=a4aa3708-bb73-4b5a-b3f3-42153358021e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:32:49 compute-0 nova_compute[254092]: 2025-11-25 16:32:49.627 254096 DEBUG nova.virt.hardware [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:32:49 compute-0 nova_compute[254092]: 2025-11-25 16:32:49.627 254096 DEBUG nova.virt.hardware [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:32:49 compute-0 nova_compute[254092]: 2025-11-25 16:32:49.627 254096 DEBUG nova.virt.hardware [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:32:49 compute-0 nova_compute[254092]: 2025-11-25 16:32:49.627 254096 DEBUG nova.virt.hardware [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:32:49 compute-0 nova_compute[254092]: 2025-11-25 16:32:49.627 254096 DEBUG nova.virt.hardware [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:32:49 compute-0 nova_compute[254092]: 2025-11-25 16:32:49.628 254096 DEBUG nova.virt.hardware [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:32:49 compute-0 nova_compute[254092]: 2025-11-25 16:32:49.628 254096 DEBUG nova.virt.hardware [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:32:49 compute-0 nova_compute[254092]: 2025-11-25 16:32:49.628 254096 DEBUG nova.virt.hardware [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:32:49 compute-0 nova_compute[254092]: 2025-11-25 16:32:49.628 254096 DEBUG nova.virt.hardware [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:32:49 compute-0 nova_compute[254092]: 2025-11-25 16:32:49.628 254096 DEBUG nova.virt.hardware [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:32:49 compute-0 nova_compute[254092]: 2025-11-25 16:32:49.628 254096 DEBUG nova.objects.instance [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 616ec95d-6c7d-420e-991d-3cbc11339768 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:32:49 compute-0 nova_compute[254092]: 2025-11-25 16:32:49.644 254096 DEBUG oslo_concurrency.processutils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:32:50 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2485257155' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:32:50 compute-0 nova_compute[254092]: 2025-11-25 16:32:50.122 254096 DEBUG oslo_concurrency.processutils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:50 compute-0 nova_compute[254092]: 2025-11-25 16:32:50.142 254096 DEBUG nova.storage.rbd_utils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] rbd image 616ec95d-6c7d-420e-991d-3cbc11339768_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:50 compute-0 nova_compute[254092]: 2025-11-25 16:32:50.145 254096 DEBUG oslo_concurrency.processutils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:50 compute-0 nova_compute[254092]: 2025-11-25 16:32:50.170 254096 DEBUG nova.network.neutron [req-ec45d5dd-795a-48e6-a537-f62ad154d451 req-3cf377cf-1f11-4479-81d3-f711e1216cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Updated VIF entry in instance network info cache for port 5cf8fe87-4cac-403f-8611-0ddb37516abd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:32:50 compute-0 nova_compute[254092]: 2025-11-25 16:32:50.171 254096 DEBUG nova.network.neutron [req-ec45d5dd-795a-48e6-a537-f62ad154d451 req-3cf377cf-1f11-4479-81d3-f711e1216cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Updating instance_info_cache with network_info: [{"id": "5cf8fe87-4cac-403f-8611-0ddb37516abd", "address": "fa:16:3e:8c:8a:c5", "network": {"id": "7e8b56bc-492b-4082-b8de-60d496652da7", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2034568195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ae7f32b97104afd930af5d5f5754532", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cf8fe87-4c", "ovs_interfaceid": "5cf8fe87-4cac-403f-8611-0ddb37516abd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:32:50 compute-0 nova_compute[254092]: 2025-11-25 16:32:50.186 254096 DEBUG oslo_concurrency.lockutils [req-ec45d5dd-795a-48e6-a537-f62ad154d451 req-3cf377cf-1f11-4479-81d3-f711e1216cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:32:50 compute-0 nova_compute[254092]: 2025-11-25 16:32:50.470 254096 DEBUG nova.compute.manager [req-9663e43d-cd12-4aa7-bcea-0bde5a154909 req-49362623-5657-48ae-991e-9dc71314ffd5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Received event network-changed-f95d61ca-d58c-4f07-879f-5e5412976e42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:32:50 compute-0 nova_compute[254092]: 2025-11-25 16:32:50.470 254096 DEBUG nova.compute.manager [req-9663e43d-cd12-4aa7-bcea-0bde5a154909 req-49362623-5657-48ae-991e-9dc71314ffd5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Refreshing instance network info cache due to event network-changed-f95d61ca-d58c-4f07-879f-5e5412976e42. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:32:50 compute-0 nova_compute[254092]: 2025-11-25 16:32:50.471 254096 DEBUG oslo_concurrency.lockutils [req-9663e43d-cd12-4aa7-bcea-0bde5a154909 req-49362623-5657-48ae-991e-9dc71314ffd5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-1d318e56-4a8c-4806-aa87-e837708f2a1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:32:50 compute-0 ceph-mon[74985]: pgmap v1362: 321 pgs: 321 active+clean; 366 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 4.3 MiB/s wr, 357 op/s
Nov 25 16:32:50 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2485257155' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:32:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:32:50 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3743957538' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:32:50 compute-0 nova_compute[254092]: 2025-11-25 16:32:50.577 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:50 compute-0 nova_compute[254092]: 2025-11-25 16:32:50.583 254096 DEBUG oslo_concurrency.processutils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:50 compute-0 nova_compute[254092]: 2025-11-25 16:32:50.586 254096 DEBUG nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:32:50 compute-0 nova_compute[254092]:   <uuid>616ec95d-6c7d-420e-991d-3cbc11339768</uuid>
Nov 25 16:32:50 compute-0 nova_compute[254092]:   <name>instance-00000023</name>
Nov 25 16:32:50 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:32:50 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:32:50 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:32:50 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:       <nova:name>tempest-ServerShowV257Test-server-2017219999</nova:name>
Nov 25 16:32:50 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:32:49</nova:creationTime>
Nov 25 16:32:50 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:32:50 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:32:50 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:32:50 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:32:50 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:32:50 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:32:50 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:32:50 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:32:50 compute-0 nova_compute[254092]:         <nova:user uuid="204d6790ef4644f6a11d8afd611b7f8d">tempest-ServerShowV257Test-1749590920-project-member</nova:user>
Nov 25 16:32:50 compute-0 nova_compute[254092]:         <nova:project uuid="f7ec8b6f4599458ebb55ba5d9a7463c3">tempest-ServerShowV257Test-1749590920</nova:project>
Nov 25 16:32:50 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:32:50 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="a4aa3708-bb73-4b5a-b3f3-42153358021e"/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:       <nova:ports/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:32:50 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:32:50 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:32:50 compute-0 nova_compute[254092]:     <system>
Nov 25 16:32:50 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:32:50 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:32:50 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:32:50 compute-0 nova_compute[254092]:       <entry name="serial">616ec95d-6c7d-420e-991d-3cbc11339768</entry>
Nov 25 16:32:50 compute-0 nova_compute[254092]:       <entry name="uuid">616ec95d-6c7d-420e-991d-3cbc11339768</entry>
Nov 25 16:32:50 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     </system>
Nov 25 16:32:50 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:32:50 compute-0 nova_compute[254092]:   <os>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:   </os>
Nov 25 16:32:50 compute-0 nova_compute[254092]:   <features>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:   </features>
Nov 25 16:32:50 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:32:50 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:32:50 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:32:50 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:32:50 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:32:50 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/616ec95d-6c7d-420e-991d-3cbc11339768_disk">
Nov 25 16:32:50 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:       </source>
Nov 25 16:32:50 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:32:50 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:32:50 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:32:50 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/616ec95d-6c7d-420e-991d-3cbc11339768_disk.config">
Nov 25 16:32:50 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:       </source>
Nov 25 16:32:50 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:32:50 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:32:50 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:32:50 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/616ec95d-6c7d-420e-991d-3cbc11339768/console.log" append="off"/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     <video>
Nov 25 16:32:50 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     </video>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:32:50 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:32:50 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:32:50 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:32:50 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:32:50 compute-0 nova_compute[254092]: </domain>
Nov 25 16:32:50 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:32:50 compute-0 nova_compute[254092]: 2025-11-25 16:32:50.640 254096 DEBUG nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:32:50 compute-0 nova_compute[254092]: 2025-11-25 16:32:50.641 254096 DEBUG nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:32:50 compute-0 nova_compute[254092]: 2025-11-25 16:32:50.642 254096 INFO nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Using config drive
Nov 25 16:32:50 compute-0 nova_compute[254092]: 2025-11-25 16:32:50.664 254096 DEBUG nova.storage.rbd_utils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] rbd image 616ec95d-6c7d-420e-991d-3cbc11339768_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:50 compute-0 nova_compute[254092]: 2025-11-25 16:32:50.686 254096 DEBUG nova.objects.instance [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 616ec95d-6c7d-420e-991d-3cbc11339768 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:32:50 compute-0 nova_compute[254092]: 2025-11-25 16:32:50.721 254096 DEBUG nova.objects.instance [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lazy-loading 'keypairs' on Instance uuid 616ec95d-6c7d-420e-991d-3cbc11339768 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:32:50 compute-0 nova_compute[254092]: 2025-11-25 16:32:50.865 254096 DEBUG nova.network.neutron [req-b12f1624-8947-41c0-b9bc-fa2637ea9d9d req-9615ee3e-294e-4c4e-bb29-2dd56e003157 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Updated VIF entry in instance network info cache for port f95d61ca-d58c-4f07-879f-5e5412976e42. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:32:50 compute-0 nova_compute[254092]: 2025-11-25 16:32:50.865 254096 DEBUG nova.network.neutron [req-b12f1624-8947-41c0-b9bc-fa2637ea9d9d req-9615ee3e-294e-4c4e-bb29-2dd56e003157 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Updating instance_info_cache with network_info: [{"id": "f95d61ca-d58c-4f07-879f-5e5412976e42", "address": "fa:16:3e:d7:b8:12", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf95d61ca-d5", "ovs_interfaceid": "f95d61ca-d58c-4f07-879f-5e5412976e42", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:32:50 compute-0 nova_compute[254092]: 2025-11-25 16:32:50.886 254096 DEBUG oslo_concurrency.lockutils [req-b12f1624-8947-41c0-b9bc-fa2637ea9d9d req-9615ee3e-294e-4c4e-bb29-2dd56e003157 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-1d318e56-4a8c-4806-aa87-e837708f2a1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:32:50 compute-0 nova_compute[254092]: 2025-11-25 16:32:50.887 254096 DEBUG oslo_concurrency.lockutils [req-9663e43d-cd12-4aa7-bcea-0bde5a154909 req-49362623-5657-48ae-991e-9dc71314ffd5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-1d318e56-4a8c-4806-aa87-e837708f2a1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:32:50 compute-0 nova_compute[254092]: 2025-11-25 16:32:50.887 254096 DEBUG nova.network.neutron [req-9663e43d-cd12-4aa7-bcea-0bde5a154909 req-49362623-5657-48ae-991e-9dc71314ffd5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Refreshing network info cache for port f95d61ca-d58c-4f07-879f-5e5412976e42 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:32:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:32:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:32:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:32:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:32:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029655526592107834 of space, bias 1.0, pg target 0.8896657977632351 quantized to 32 (current 32)
Nov 25 16:32:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:32:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:32:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:32:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:32:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:32:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 25 16:32:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:32:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 16:32:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:32:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:32:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:32:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 16:32:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:32:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 16:32:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:32:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:32:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:32:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 16:32:51 compute-0 nova_compute[254092]: 2025-11-25 16:32:51.128 254096 INFO nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Creating config drive at /var/lib/nova/instances/616ec95d-6c7d-420e-991d-3cbc11339768/disk.config
Nov 25 16:32:51 compute-0 nova_compute[254092]: 2025-11-25 16:32:51.138 254096 DEBUG oslo_concurrency.processutils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/616ec95d-6c7d-420e-991d-3cbc11339768/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphpm99syc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:51 compute-0 nova_compute[254092]: 2025-11-25 16:32:51.272 254096 DEBUG oslo_concurrency.processutils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/616ec95d-6c7d-420e-991d-3cbc11339768/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphpm99syc" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:51 compute-0 nova_compute[254092]: 2025-11-25 16:32:51.295 254096 DEBUG nova.storage.rbd_utils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] rbd image 616ec95d-6c7d-420e-991d-3cbc11339768_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:32:51 compute-0 nova_compute[254092]: 2025-11-25 16:32:51.299 254096 DEBUG oslo_concurrency.processutils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/616ec95d-6c7d-420e-991d-3cbc11339768/disk.config 616ec95d-6c7d-420e-991d-3cbc11339768_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1363: 321 pgs: 321 active+clean; 372 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 8.2 MiB/s wr, 484 op/s
Nov 25 16:32:51 compute-0 nova_compute[254092]: 2025-11-25 16:32:51.338 254096 DEBUG oslo_concurrency.lockutils [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "interface-800c66e3-ee9f-4766-92f2-ecda5671cde3-0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:51 compute-0 nova_compute[254092]: 2025-11-25 16:32:51.339 254096 DEBUG oslo_concurrency.lockutils [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "interface-800c66e3-ee9f-4766-92f2-ecda5671cde3-0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:51 compute-0 nova_compute[254092]: 2025-11-25 16:32:51.340 254096 DEBUG nova.objects.instance [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'flavor' on Instance uuid 800c66e3-ee9f-4766-92f2-ecda5671cde3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:32:51 compute-0 nova_compute[254092]: 2025-11-25 16:32:51.434 254096 DEBUG oslo_concurrency.processutils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/616ec95d-6c7d-420e-991d-3cbc11339768/disk.config 616ec95d-6c7d-420e-991d-3cbc11339768_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:51 compute-0 nova_compute[254092]: 2025-11-25 16:32:51.435 254096 INFO nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Deleting local config drive /var/lib/nova/instances/616ec95d-6c7d-420e-991d-3cbc11339768/disk.config because it was imported into RBD.
Nov 25 16:32:51 compute-0 systemd-machined[216343]: New machine qemu-42-instance-00000023.
Nov 25 16:32:51 compute-0 systemd[1]: Started Virtual Machine qemu-42-instance-00000023.
Nov 25 16:32:51 compute-0 nova_compute[254092]: 2025-11-25 16:32:51.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:32:51 compute-0 nova_compute[254092]: 2025-11-25 16:32:51.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 16:32:51 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3743957538' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:32:51 compute-0 nova_compute[254092]: 2025-11-25 16:32:51.818 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for 616ec95d-6c7d-420e-991d-3cbc11339768 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 16:32:51 compute-0 nova_compute[254092]: 2025-11-25 16:32:51.819 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088371.8182952, 616ec95d-6c7d-420e-991d-3cbc11339768 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:32:51 compute-0 nova_compute[254092]: 2025-11-25 16:32:51.820 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] VM Resumed (Lifecycle Event)
Nov 25 16:32:51 compute-0 nova_compute[254092]: 2025-11-25 16:32:51.827 254096 DEBUG nova.compute.manager [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:32:51 compute-0 nova_compute[254092]: 2025-11-25 16:32:51.827 254096 DEBUG nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:32:51 compute-0 nova_compute[254092]: 2025-11-25 16:32:51.830 254096 INFO nova.virt.libvirt.driver [-] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Instance spawned successfully.
Nov 25 16:32:51 compute-0 nova_compute[254092]: 2025-11-25 16:32:51.831 254096 DEBUG nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:32:51 compute-0 nova_compute[254092]: 2025-11-25 16:32:51.838 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:32:51 compute-0 nova_compute[254092]: 2025-11-25 16:32:51.842 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:32:51 compute-0 nova_compute[254092]: 2025-11-25 16:32:51.846 254096 DEBUG nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:51 compute-0 nova_compute[254092]: 2025-11-25 16:32:51.846 254096 DEBUG nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:51 compute-0 nova_compute[254092]: 2025-11-25 16:32:51.847 254096 DEBUG nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:51 compute-0 nova_compute[254092]: 2025-11-25 16:32:51.847 254096 DEBUG nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:51 compute-0 nova_compute[254092]: 2025-11-25 16:32:51.847 254096 DEBUG nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:51 compute-0 nova_compute[254092]: 2025-11-25 16:32:51.848 254096 DEBUG nova.virt.libvirt.driver [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:32:51 compute-0 nova_compute[254092]: 2025-11-25 16:32:51.873 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 16:32:51 compute-0 nova_compute[254092]: 2025-11-25 16:32:51.874 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088371.827008, 616ec95d-6c7d-420e-991d-3cbc11339768 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:32:51 compute-0 nova_compute[254092]: 2025-11-25 16:32:51.874 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] VM Started (Lifecycle Event)
Nov 25 16:32:51 compute-0 nova_compute[254092]: 2025-11-25 16:32:51.894 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:32:51 compute-0 nova_compute[254092]: 2025-11-25 16:32:51.896 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:32:51 compute-0 nova_compute[254092]: 2025-11-25 16:32:51.905 254096 DEBUG nova.compute.manager [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:32:51 compute-0 nova_compute[254092]: 2025-11-25 16:32:51.931 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 16:32:51 compute-0 nova_compute[254092]: 2025-11-25 16:32:51.962 254096 DEBUG oslo_concurrency.lockutils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:51 compute-0 nova_compute[254092]: 2025-11-25 16:32:51.963 254096 DEBUG oslo_concurrency.lockutils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:51 compute-0 nova_compute[254092]: 2025-11-25 16:32:51.963 254096 DEBUG nova.objects.instance [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 16:32:52 compute-0 nova_compute[254092]: 2025-11-25 16:32:52.024 254096 DEBUG oslo_concurrency.lockutils [None req-9556049b-afd0-48bd-a97f-8f25bba921c3 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.061s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:52 compute-0 ceph-mon[74985]: pgmap v1363: 321 pgs: 321 active+clean; 372 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 8.2 MiB/s wr, 484 op/s
Nov 25 16:32:52 compute-0 nova_compute[254092]: 2025-11-25 16:32:52.646 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:53 compute-0 nova_compute[254092]: 2025-11-25 16:32:53.216 254096 DEBUG nova.objects.instance [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'pci_requests' on Instance uuid 800c66e3-ee9f-4766-92f2-ecda5671cde3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:32:53 compute-0 nova_compute[254092]: 2025-11-25 16:32:53.230 254096 DEBUG nova.network.neutron [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:32:53 compute-0 nova_compute[254092]: 2025-11-25 16:32:53.317 254096 DEBUG nova.network.neutron [req-9663e43d-cd12-4aa7-bcea-0bde5a154909 req-49362623-5657-48ae-991e-9dc71314ffd5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Updated VIF entry in instance network info cache for port f95d61ca-d58c-4f07-879f-5e5412976e42. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:32:53 compute-0 nova_compute[254092]: 2025-11-25 16:32:53.318 254096 DEBUG nova.network.neutron [req-9663e43d-cd12-4aa7-bcea-0bde5a154909 req-49362623-5657-48ae-991e-9dc71314ffd5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Updating instance_info_cache with network_info: [{"id": "f95d61ca-d58c-4f07-879f-5e5412976e42", "address": "fa:16:3e:d7:b8:12", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf95d61ca-d5", "ovs_interfaceid": "f95d61ca-d58c-4f07-879f-5e5412976e42", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:32:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1364: 321 pgs: 321 active+clean; 372 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.1 MiB/s wr, 274 op/s
Nov 25 16:32:53 compute-0 nova_compute[254092]: 2025-11-25 16:32:53.333 254096 DEBUG oslo_concurrency.lockutils [req-9663e43d-cd12-4aa7-bcea-0bde5a154909 req-49362623-5657-48ae-991e-9dc71314ffd5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-1d318e56-4a8c-4806-aa87-e837708f2a1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:32:53 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:32:53 compute-0 nova_compute[254092]: 2025-11-25 16:32:53.947 254096 DEBUG nova.policy [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c2f73d928f424db49a62e7dff2b1ce14', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '36fd407edf16424e96041f57fccb9e2b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:32:54 compute-0 nova_compute[254092]: 2025-11-25 16:32:54.145 254096 DEBUG nova.compute.manager [req-6cfd380f-566b-433a-9472-ad97d3df85a0 req-7b520e1e-d22e-44bc-bbf3-d5f0c44cda0a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Received event network-changed-5cf8fe87-4cac-403f-8611-0ddb37516abd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:32:54 compute-0 nova_compute[254092]: 2025-11-25 16:32:54.145 254096 DEBUG nova.compute.manager [req-6cfd380f-566b-433a-9472-ad97d3df85a0 req-7b520e1e-d22e-44bc-bbf3-d5f0c44cda0a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Refreshing instance network info cache due to event network-changed-5cf8fe87-4cac-403f-8611-0ddb37516abd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:32:54 compute-0 nova_compute[254092]: 2025-11-25 16:32:54.146 254096 DEBUG oslo_concurrency.lockutils [req-6cfd380f-566b-433a-9472-ad97d3df85a0 req-7b520e1e-d22e-44bc-bbf3-d5f0c44cda0a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:32:54 compute-0 nova_compute[254092]: 2025-11-25 16:32:54.146 254096 DEBUG oslo_concurrency.lockutils [req-6cfd380f-566b-433a-9472-ad97d3df85a0 req-7b520e1e-d22e-44bc-bbf3-d5f0c44cda0a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:32:54 compute-0 nova_compute[254092]: 2025-11-25 16:32:54.146 254096 DEBUG nova.network.neutron [req-6cfd380f-566b-433a-9472-ad97d3df85a0 req-7b520e1e-d22e-44bc-bbf3-d5f0c44cda0a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Refreshing network info cache for port 5cf8fe87-4cac-403f-8611-0ddb37516abd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:32:54 compute-0 nova_compute[254092]: 2025-11-25 16:32:54.231 254096 DEBUG nova.compute.manager [req-d5c09314-c50f-4e42-8a06-aa9232f1d6ea req-7bc6964d-c3e1-44fa-81aa-cbfe436b858c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Received event network-changed-82f63517-7636-46bf-b4e1-ba191ddad018 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:32:54 compute-0 nova_compute[254092]: 2025-11-25 16:32:54.232 254096 DEBUG nova.compute.manager [req-d5c09314-c50f-4e42-8a06-aa9232f1d6ea req-7bc6964d-c3e1-44fa-81aa-cbfe436b858c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Refreshing instance network info cache due to event network-changed-82f63517-7636-46bf-b4e1-ba191ddad018. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:32:54 compute-0 nova_compute[254092]: 2025-11-25 16:32:54.232 254096 DEBUG oslo_concurrency.lockutils [req-d5c09314-c50f-4e42-8a06-aa9232f1d6ea req-7bc6964d-c3e1-44fa-81aa-cbfe436b858c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-800c66e3-ee9f-4766-92f2-ecda5671cde3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:32:54 compute-0 nova_compute[254092]: 2025-11-25 16:32:54.232 254096 DEBUG oslo_concurrency.lockutils [req-d5c09314-c50f-4e42-8a06-aa9232f1d6ea req-7bc6964d-c3e1-44fa-81aa-cbfe436b858c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-800c66e3-ee9f-4766-92f2-ecda5671cde3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:32:54 compute-0 nova_compute[254092]: 2025-11-25 16:32:54.232 254096 DEBUG nova.network.neutron [req-d5c09314-c50f-4e42-8a06-aa9232f1d6ea req-7bc6964d-c3e1-44fa-81aa-cbfe436b858c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Refreshing network info cache for port 82f63517-7636-46bf-b4e1-ba191ddad018 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:32:54 compute-0 nova_compute[254092]: 2025-11-25 16:32:54.400 254096 DEBUG oslo_concurrency.lockutils [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Acquiring lock "a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:54 compute-0 nova_compute[254092]: 2025-11-25 16:32:54.402 254096 DEBUG oslo_concurrency.lockutils [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:54 compute-0 nova_compute[254092]: 2025-11-25 16:32:54.402 254096 DEBUG oslo_concurrency.lockutils [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Acquiring lock "a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:54 compute-0 nova_compute[254092]: 2025-11-25 16:32:54.402 254096 DEBUG oslo_concurrency.lockutils [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:54 compute-0 nova_compute[254092]: 2025-11-25 16:32:54.403 254096 DEBUG oslo_concurrency.lockutils [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:54 compute-0 nova_compute[254092]: 2025-11-25 16:32:54.404 254096 INFO nova.compute.manager [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Terminating instance
Nov 25 16:32:54 compute-0 nova_compute[254092]: 2025-11-25 16:32:54.405 254096 DEBUG nova.compute.manager [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:32:54 compute-0 kernel: tap5cf8fe87-4c (unregistering): left promiscuous mode
Nov 25 16:32:54 compute-0 NetworkManager[48891]: <info>  [1764088374.4743] device (tap5cf8fe87-4c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:32:54 compute-0 nova_compute[254092]: 2025-11-25 16:32:54.483 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:54 compute-0 ovn_controller[153477]: 2025-11-25T16:32:54Z|00229|binding|INFO|Releasing lport 5cf8fe87-4cac-403f-8611-0ddb37516abd from this chassis (sb_readonly=0)
Nov 25 16:32:54 compute-0 ovn_controller[153477]: 2025-11-25T16:32:54Z|00230|binding|INFO|Setting lport 5cf8fe87-4cac-403f-8611-0ddb37516abd down in Southbound
Nov 25 16:32:54 compute-0 ovn_controller[153477]: 2025-11-25T16:32:54Z|00231|binding|INFO|Removing iface tap5cf8fe87-4c ovn-installed in OVS
Nov 25 16:32:54 compute-0 nova_compute[254092]: 2025-11-25 16:32:54.485 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:54.496 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:8a:c5 10.100.0.9'], port_security=['fa:16:3e:8c:8a:c5 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e8b56bc-492b-4082-b8de-60d496652da7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ae7f32b97104afd930af5d5f5754532', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1c729f64-6320-4599-a74a-ed70b78f82ae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=81a9c566-d2a0-4e54-9d3c-480355005098, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=5cf8fe87-4cac-403f-8611-0ddb37516abd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:32:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:54.497 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 5cf8fe87-4cac-403f-8611-0ddb37516abd in datapath 7e8b56bc-492b-4082-b8de-60d496652da7 unbound from our chassis
Nov 25 16:32:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:54.498 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7e8b56bc-492b-4082-b8de-60d496652da7
Nov 25 16:32:54 compute-0 nova_compute[254092]: 2025-11-25 16:32:54.502 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:54 compute-0 nova_compute[254092]: 2025-11-25 16:32:54.508 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:32:54 compute-0 nova_compute[254092]: 2025-11-25 16:32:54.508 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 16:32:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:54.515 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a125578c-4d5b-47f7-b6c9-7565791c89f2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:54 compute-0 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000021.scope: Deactivated successfully.
Nov 25 16:32:54 compute-0 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000021.scope: Consumed 13.207s CPU time.
Nov 25 16:32:54 compute-0 systemd-machined[216343]: Machine qemu-39-instance-00000021 terminated.
Nov 25 16:32:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:54.541 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6c52299a-3359-4816-b402-5e0a3b30bb5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:54.545 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[22217619-1070-47a6-a2d9-f37db149e3e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:54.572 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[86d22f75-59f2-498e-802c-e33f5601ced9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:54 compute-0 nova_compute[254092]: 2025-11-25 16:32:54.574 254096 DEBUG oslo_concurrency.lockutils [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Acquiring lock "616ec95d-6c7d-420e-991d-3cbc11339768" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:54 compute-0 nova_compute[254092]: 2025-11-25 16:32:54.575 254096 DEBUG oslo_concurrency.lockutils [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lock "616ec95d-6c7d-420e-991d-3cbc11339768" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:54 compute-0 nova_compute[254092]: 2025-11-25 16:32:54.576 254096 DEBUG oslo_concurrency.lockutils [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Acquiring lock "616ec95d-6c7d-420e-991d-3cbc11339768-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:54 compute-0 nova_compute[254092]: 2025-11-25 16:32:54.576 254096 DEBUG oslo_concurrency.lockutils [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lock "616ec95d-6c7d-420e-991d-3cbc11339768-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:54 compute-0 nova_compute[254092]: 2025-11-25 16:32:54.576 254096 DEBUG oslo_concurrency.lockutils [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lock "616ec95d-6c7d-420e-991d-3cbc11339768-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:54 compute-0 ceph-mon[74985]: pgmap v1364: 321 pgs: 321 active+clean; 372 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.1 MiB/s wr, 274 op/s
Nov 25 16:32:54 compute-0 nova_compute[254092]: 2025-11-25 16:32:54.578 254096 INFO nova.compute.manager [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Terminating instance
Nov 25 16:32:54 compute-0 nova_compute[254092]: 2025-11-25 16:32:54.579 254096 DEBUG oslo_concurrency.lockutils [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Acquiring lock "refresh_cache-616ec95d-6c7d-420e-991d-3cbc11339768" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:32:54 compute-0 nova_compute[254092]: 2025-11-25 16:32:54.579 254096 DEBUG oslo_concurrency.lockutils [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Acquired lock "refresh_cache-616ec95d-6c7d-420e-991d-3cbc11339768" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:32:54 compute-0 nova_compute[254092]: 2025-11-25 16:32:54.579 254096 DEBUG nova.network.neutron [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:32:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:54.590 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3bdcef5b-ae16-4f0b-beea-bf35d41786fa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7e8b56bc-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:16:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 61], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479577, 'reachable_time': 25547, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296246, 'error': None, 'target': 'ovnmeta-7e8b56bc-492b-4082-b8de-60d496652da7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:54.606 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9dbc2765-dfb3-4d7b-8c28-85e7a93e835c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7e8b56bc-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479591, 'tstamp': 479591}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296247, 'error': None, 'target': 'ovnmeta-7e8b56bc-492b-4082-b8de-60d496652da7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7e8b56bc-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479595, 'tstamp': 479595}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296247, 'error': None, 'target': 'ovnmeta-7e8b56bc-492b-4082-b8de-60d496652da7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:32:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:54.609 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7e8b56bc-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:54 compute-0 nova_compute[254092]: 2025-11-25 16:32:54.613 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:54 compute-0 nova_compute[254092]: 2025-11-25 16:32:54.618 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:54.619 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7e8b56bc-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:54.620 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:32:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:54.621 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7e8b56bc-40, col_values=(('external_ids', {'iface-id': 'f3398af3-7278-4ca0-adcc-f3bb48f595e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:32:54.621 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:32:54 compute-0 nova_compute[254092]: 2025-11-25 16:32:54.647 254096 INFO nova.virt.libvirt.driver [-] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Instance destroyed successfully.
Nov 25 16:32:54 compute-0 nova_compute[254092]: 2025-11-25 16:32:54.648 254096 DEBUG nova.objects.instance [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lazy-loading 'resources' on Instance uuid a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:32:54 compute-0 nova_compute[254092]: 2025-11-25 16:32:54.661 254096 DEBUG nova.virt.libvirt.vif [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:32:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1769139174',display_name='tempest-FloatingIPsAssociationTestJSON-server-1769139174',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1769139174',id=33,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:32:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4ae7f32b97104afd930af5d5f5754532',ramdisk_id='',reservation_id='r-f0yptx40',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_v
if_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-993856073',owner_user_name='tempest-FloatingIPsAssociationTestJSON-993856073-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:32:35Z,user_data=None,user_id='a46b9493b027436fbd21d09ff5ac15e4',uuid=a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5cf8fe87-4cac-403f-8611-0ddb37516abd", "address": "fa:16:3e:8c:8a:c5", "network": {"id": "7e8b56bc-492b-4082-b8de-60d496652da7", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2034568195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ae7f32b97104afd930af5d5f5754532", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cf8fe87-4c", "ovs_interfaceid": "5cf8fe87-4cac-403f-8611-0ddb37516abd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:32:54 compute-0 nova_compute[254092]: 2025-11-25 16:32:54.662 254096 DEBUG nova.network.os_vif_util [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Converting VIF {"id": "5cf8fe87-4cac-403f-8611-0ddb37516abd", "address": "fa:16:3e:8c:8a:c5", "network": {"id": "7e8b56bc-492b-4082-b8de-60d496652da7", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2034568195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ae7f32b97104afd930af5d5f5754532", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cf8fe87-4c", "ovs_interfaceid": "5cf8fe87-4cac-403f-8611-0ddb37516abd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:32:54 compute-0 nova_compute[254092]: 2025-11-25 16:32:54.662 254096 DEBUG nova.network.os_vif_util [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8c:8a:c5,bridge_name='br-int',has_traffic_filtering=True,id=5cf8fe87-4cac-403f-8611-0ddb37516abd,network=Network(7e8b56bc-492b-4082-b8de-60d496652da7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cf8fe87-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:32:54 compute-0 nova_compute[254092]: 2025-11-25 16:32:54.663 254096 DEBUG os_vif [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8c:8a:c5,bridge_name='br-int',has_traffic_filtering=True,id=5cf8fe87-4cac-403f-8611-0ddb37516abd,network=Network(7e8b56bc-492b-4082-b8de-60d496652da7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cf8fe87-4c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:32:54 compute-0 nova_compute[254092]: 2025-11-25 16:32:54.666 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:54 compute-0 nova_compute[254092]: 2025-11-25 16:32:54.666 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5cf8fe87-4c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:32:54 compute-0 nova_compute[254092]: 2025-11-25 16:32:54.677 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:54 compute-0 nova_compute[254092]: 2025-11-25 16:32:54.681 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:32:54 compute-0 nova_compute[254092]: 2025-11-25 16:32:54.684 254096 INFO os_vif [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8c:8a:c5,bridge_name='br-int',has_traffic_filtering=True,id=5cf8fe87-4cac-403f-8611-0ddb37516abd,network=Network(7e8b56bc-492b-4082-b8de-60d496652da7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cf8fe87-4c')
Nov 25 16:32:54 compute-0 nova_compute[254092]: 2025-11-25 16:32:54.765 254096 DEBUG nova.network.neutron [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:32:55 compute-0 nova_compute[254092]: 2025-11-25 16:32:55.021 254096 INFO nova.virt.libvirt.driver [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Deleting instance files /var/lib/nova/instances/a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc_del
Nov 25 16:32:55 compute-0 nova_compute[254092]: 2025-11-25 16:32:55.022 254096 INFO nova.virt.libvirt.driver [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Deletion of /var/lib/nova/instances/a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc_del complete
Nov 25 16:32:55 compute-0 nova_compute[254092]: 2025-11-25 16:32:55.075 254096 INFO nova.compute.manager [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Took 0.67 seconds to destroy the instance on the hypervisor.
Nov 25 16:32:55 compute-0 nova_compute[254092]: 2025-11-25 16:32:55.076 254096 DEBUG oslo.service.loopingcall [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:32:55 compute-0 nova_compute[254092]: 2025-11-25 16:32:55.077 254096 DEBUG nova.compute.manager [-] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:32:55 compute-0 nova_compute[254092]: 2025-11-25 16:32:55.077 254096 DEBUG nova.network.neutron [-] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:32:55 compute-0 ovn_controller[153477]: 2025-11-25T16:32:55Z|00232|binding|INFO|Releasing lport f3398af3-7278-4ca0-adcc-f3bb48f595e9 from this chassis (sb_readonly=0)
Nov 25 16:32:55 compute-0 ovn_controller[153477]: 2025-11-25T16:32:55Z|00233|binding|INFO|Releasing lport af5d2618-f326-4aaf-9395-f47548ac108b from this chassis (sb_readonly=0)
Nov 25 16:32:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 16:32:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/63055523' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:32:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 16:32:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/63055523' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:32:55 compute-0 nova_compute[254092]: 2025-11-25 16:32:55.264 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1365: 321 pgs: 321 active+clean; 343 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 7.4 MiB/s wr, 364 op/s
Nov 25 16:32:55 compute-0 nova_compute[254092]: 2025-11-25 16:32:55.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:32:55 compute-0 nova_compute[254092]: 2025-11-25 16:32:55.549 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088360.5478022, 2724cb7d-6b8e-4861-ae3d-72e34da31fe5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:32:55 compute-0 nova_compute[254092]: 2025-11-25 16:32:55.549 254096 INFO nova.compute.manager [-] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] VM Stopped (Lifecycle Event)
Nov 25 16:32:55 compute-0 nova_compute[254092]: 2025-11-25 16:32:55.566 254096 DEBUG nova.network.neutron [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:32:55 compute-0 nova_compute[254092]: 2025-11-25 16:32:55.570 254096 DEBUG nova.compute.manager [None req-09e0b0e4-f77d-40a1-bc81-8e5db12a1159 - - - - - -] [instance: 2724cb7d-6b8e-4861-ae3d-72e34da31fe5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:32:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/63055523' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:32:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/63055523' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:32:55 compute-0 nova_compute[254092]: 2025-11-25 16:32:55.578 254096 DEBUG oslo_concurrency.lockutils [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Releasing lock "refresh_cache-616ec95d-6c7d-420e-991d-3cbc11339768" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:32:55 compute-0 nova_compute[254092]: 2025-11-25 16:32:55.579 254096 DEBUG nova.compute.manager [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:32:55 compute-0 nova_compute[254092]: 2025-11-25 16:32:55.587 254096 DEBUG nova.network.neutron [req-6cfd380f-566b-433a-9472-ad97d3df85a0 req-7b520e1e-d22e-44bc-bbf3-d5f0c44cda0a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Updated VIF entry in instance network info cache for port 5cf8fe87-4cac-403f-8611-0ddb37516abd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:32:55 compute-0 nova_compute[254092]: 2025-11-25 16:32:55.587 254096 DEBUG nova.network.neutron [req-6cfd380f-566b-433a-9472-ad97d3df85a0 req-7b520e1e-d22e-44bc-bbf3-d5f0c44cda0a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Updating instance_info_cache with network_info: [{"id": "5cf8fe87-4cac-403f-8611-0ddb37516abd", "address": "fa:16:3e:8c:8a:c5", "network": {"id": "7e8b56bc-492b-4082-b8de-60d496652da7", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2034568195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ae7f32b97104afd930af5d5f5754532", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cf8fe87-4c", "ovs_interfaceid": "5cf8fe87-4cac-403f-8611-0ddb37516abd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:32:55 compute-0 nova_compute[254092]: 2025-11-25 16:32:55.603 254096 DEBUG oslo_concurrency.lockutils [req-6cfd380f-566b-433a-9472-ad97d3df85a0 req-7b520e1e-d22e-44bc-bbf3-d5f0c44cda0a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:32:55 compute-0 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000023.scope: Deactivated successfully.
Nov 25 16:32:55 compute-0 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000023.scope: Consumed 4.110s CPU time.
Nov 25 16:32:55 compute-0 systemd-machined[216343]: Machine qemu-42-instance-00000023 terminated.
Nov 25 16:32:55 compute-0 nova_compute[254092]: 2025-11-25 16:32:55.777 254096 DEBUG nova.network.neutron [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Successfully updated port: 0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:32:55 compute-0 nova_compute[254092]: 2025-11-25 16:32:55.799 254096 INFO nova.virt.libvirt.driver [-] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Instance destroyed successfully.
Nov 25 16:32:55 compute-0 nova_compute[254092]: 2025-11-25 16:32:55.799 254096 DEBUG nova.objects.instance [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lazy-loading 'resources' on Instance uuid 616ec95d-6c7d-420e-991d-3cbc11339768 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:32:55 compute-0 nova_compute[254092]: 2025-11-25 16:32:55.802 254096 DEBUG oslo_concurrency.lockutils [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "refresh_cache-800c66e3-ee9f-4766-92f2-ecda5671cde3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:32:55 compute-0 ovn_controller[153477]: 2025-11-25T16:32:55Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d7:b8:12 10.100.0.12
Nov 25 16:32:55 compute-0 ovn_controller[153477]: 2025-11-25T16:32:55Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d7:b8:12 10.100.0.12
Nov 25 16:32:56 compute-0 nova_compute[254092]: 2025-11-25 16:32:56.207 254096 INFO nova.virt.libvirt.driver [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Deleting instance files /var/lib/nova/instances/616ec95d-6c7d-420e-991d-3cbc11339768_del
Nov 25 16:32:56 compute-0 nova_compute[254092]: 2025-11-25 16:32:56.208 254096 INFO nova.virt.libvirt.driver [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Deletion of /var/lib/nova/instances/616ec95d-6c7d-420e-991d-3cbc11339768_del complete
Nov 25 16:32:56 compute-0 nova_compute[254092]: 2025-11-25 16:32:56.292 254096 INFO nova.compute.manager [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Took 0.71 seconds to destroy the instance on the hypervisor.
Nov 25 16:32:56 compute-0 nova_compute[254092]: 2025-11-25 16:32:56.292 254096 DEBUG oslo.service.loopingcall [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:32:56 compute-0 nova_compute[254092]: 2025-11-25 16:32:56.293 254096 DEBUG nova.compute.manager [-] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:32:56 compute-0 nova_compute[254092]: 2025-11-25 16:32:56.293 254096 DEBUG nova.network.neutron [-] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:32:56 compute-0 nova_compute[254092]: 2025-11-25 16:32:56.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:32:56 compute-0 nova_compute[254092]: 2025-11-25 16:32:56.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:32:56 compute-0 nova_compute[254092]: 2025-11-25 16:32:56.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:32:56 compute-0 nova_compute[254092]: 2025-11-25 16:32:56.525 254096 DEBUG nova.network.neutron [req-d5c09314-c50f-4e42-8a06-aa9232f1d6ea req-7bc6964d-c3e1-44fa-81aa-cbfe436b858c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Updated VIF entry in instance network info cache for port 82f63517-7636-46bf-b4e1-ba191ddad018. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:32:56 compute-0 nova_compute[254092]: 2025-11-25 16:32:56.526 254096 DEBUG nova.network.neutron [req-d5c09314-c50f-4e42-8a06-aa9232f1d6ea req-7bc6964d-c3e1-44fa-81aa-cbfe436b858c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Updating instance_info_cache with network_info: [{"id": "82f63517-7636-46bf-b4e1-ba191ddad018", "address": "fa:16:3e:f0:7b:ca", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82f63517-76", "ovs_interfaceid": "82f63517-7636-46bf-b4e1-ba191ddad018", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:32:56 compute-0 ceph-mon[74985]: pgmap v1365: 321 pgs: 321 active+clean; 343 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 7.4 MiB/s wr, 364 op/s
Nov 25 16:32:56 compute-0 podman[296302]: 2025-11-25 16:32:56.646391426 +0000 UTC m=+0.060162641 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 25 16:32:56 compute-0 podman[296301]: 2025-11-25 16:32:56.651412502 +0000 UTC m=+0.065180857 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:32:56 compute-0 podman[296303]: 2025-11-25 16:32:56.668577177 +0000 UTC m=+0.080843180 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Nov 25 16:32:56 compute-0 nova_compute[254092]: 2025-11-25 16:32:56.695 254096 DEBUG oslo_concurrency.lockutils [req-d5c09314-c50f-4e42-8a06-aa9232f1d6ea req-7bc6964d-c3e1-44fa-81aa-cbfe436b858c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-800c66e3-ee9f-4766-92f2-ecda5671cde3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:32:56 compute-0 nova_compute[254092]: 2025-11-25 16:32:56.696 254096 DEBUG oslo_concurrency.lockutils [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquired lock "refresh_cache-800c66e3-ee9f-4766-92f2-ecda5671cde3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:32:56 compute-0 nova_compute[254092]: 2025-11-25 16:32:56.696 254096 DEBUG nova.network.neutron [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:32:56 compute-0 nova_compute[254092]: 2025-11-25 16:32:56.862 254096 DEBUG nova.network.neutron [-] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:32:56 compute-0 nova_compute[254092]: 2025-11-25 16:32:56.877 254096 DEBUG nova.network.neutron [-] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:32:56 compute-0 nova_compute[254092]: 2025-11-25 16:32:56.898 254096 DEBUG nova.network.neutron [-] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:32:56 compute-0 nova_compute[254092]: 2025-11-25 16:32:56.899 254096 INFO nova.compute.manager [-] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Took 0.61 seconds to deallocate network for instance.
Nov 25 16:32:56 compute-0 nova_compute[254092]: 2025-11-25 16:32:56.936 254096 INFO nova.compute.manager [-] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Took 1.86 seconds to deallocate network for instance.
Nov 25 16:32:56 compute-0 nova_compute[254092]: 2025-11-25 16:32:56.946 254096 DEBUG oslo_concurrency.lockutils [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:56 compute-0 nova_compute[254092]: 2025-11-25 16:32:56.946 254096 DEBUG oslo_concurrency.lockutils [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:56 compute-0 nova_compute[254092]: 2025-11-25 16:32:56.972 254096 WARNING nova.network.neutron [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d already exists in list: networks containing: ['52e7d5b9-0570-4e5c-b3da-9dfcb924b83d']. ignoring it
Nov 25 16:32:56 compute-0 nova_compute[254092]: 2025-11-25 16:32:56.983 254096 DEBUG oslo_concurrency.lockutils [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:57 compute-0 nova_compute[254092]: 2025-11-25 16:32:57.071 254096 DEBUG oslo_concurrency.processutils [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1366: 321 pgs: 321 active+clean; 300 MiB data, 547 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 6.7 MiB/s wr, 314 op/s
Nov 25 16:32:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:32:57 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/28605362' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:32:57 compute-0 nova_compute[254092]: 2025-11-25 16:32:57.516 254096 DEBUG oslo_concurrency.processutils [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:57 compute-0 nova_compute[254092]: 2025-11-25 16:32:57.523 254096 DEBUG nova.compute.provider_tree [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:32:57 compute-0 nova_compute[254092]: 2025-11-25 16:32:57.544 254096 DEBUG nova.scheduler.client.report [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:32:57 compute-0 nova_compute[254092]: 2025-11-25 16:32:57.570 254096 DEBUG oslo_concurrency.lockutils [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:57 compute-0 nova_compute[254092]: 2025-11-25 16:32:57.573 254096 DEBUG oslo_concurrency.lockutils [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:57 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/28605362' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:32:57 compute-0 nova_compute[254092]: 2025-11-25 16:32:57.607 254096 INFO nova.scheduler.client.report [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Deleted allocations for instance 616ec95d-6c7d-420e-991d-3cbc11339768
Nov 25 16:32:57 compute-0 nova_compute[254092]: 2025-11-25 16:32:57.648 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:57 compute-0 nova_compute[254092]: 2025-11-25 16:32:57.687 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:32:57 compute-0 nova_compute[254092]: 2025-11-25 16:32:57.687 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:32:57 compute-0 nova_compute[254092]: 2025-11-25 16:32:57.705 254096 DEBUG oslo_concurrency.processutils [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:57 compute-0 nova_compute[254092]: 2025-11-25 16:32:57.735 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:57 compute-0 nova_compute[254092]: 2025-11-25 16:32:57.738 254096 DEBUG oslo_concurrency.lockutils [None req-3378e40b-3662-48f1-bbf2-9621697246bd 204d6790ef4644f6a11d8afd611b7f8d f7ec8b6f4599458ebb55ba5d9a7463c3 - - default default] Lock "616ec95d-6c7d-420e-991d-3cbc11339768" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.163s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:32:58 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1573068817' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:32:58 compute-0 nova_compute[254092]: 2025-11-25 16:32:58.170 254096 DEBUG oslo_concurrency.processutils [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:58 compute-0 nova_compute[254092]: 2025-11-25 16:32:58.176 254096 DEBUG nova.compute.provider_tree [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:32:58 compute-0 nova_compute[254092]: 2025-11-25 16:32:58.188 254096 DEBUG nova.scheduler.client.report [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:32:58 compute-0 nova_compute[254092]: 2025-11-25 16:32:58.208 254096 DEBUG oslo_concurrency.lockutils [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:58 compute-0 nova_compute[254092]: 2025-11-25 16:32:58.210 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.475s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:58 compute-0 nova_compute[254092]: 2025-11-25 16:32:58.211 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:58 compute-0 nova_compute[254092]: 2025-11-25 16:32:58.211 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 16:32:58 compute-0 nova_compute[254092]: 2025-11-25 16:32:58.211 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:58 compute-0 nova_compute[254092]: 2025-11-25 16:32:58.264 254096 INFO nova.scheduler.client.report [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Deleted allocations for instance a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc
Nov 25 16:32:58 compute-0 nova_compute[254092]: 2025-11-25 16:32:58.322 254096 DEBUG oslo_concurrency.lockutils [None req-c1bd6c83-d670-48d2-b9d1-cb0ae173bb57 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.921s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:32:58 compute-0 ceph-mon[74985]: pgmap v1366: 321 pgs: 321 active+clean; 300 MiB data, 547 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 6.7 MiB/s wr, 314 op/s
Nov 25 16:32:58 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1573068817' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:32:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:32:58 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3164591214' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:32:58 compute-0 nova_compute[254092]: 2025-11-25 16:32:58.636 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:58 compute-0 nova_compute[254092]: 2025-11-25 16:32:58.703 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000001f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:32:58 compute-0 nova_compute[254092]: 2025-11-25 16:32:58.703 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000001f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:32:58 compute-0 nova_compute[254092]: 2025-11-25 16:32:58.708 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000024 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:32:58 compute-0 nova_compute[254092]: 2025-11-25 16:32:58.708 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000024 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:32:58 compute-0 nova_compute[254092]: 2025-11-25 16:32:58.712 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000020 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:32:58 compute-0 nova_compute[254092]: 2025-11-25 16:32:58.712 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000020 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:32:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:32:58 compute-0 nova_compute[254092]: 2025-11-25 16:32:58.990 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:32:58 compute-0 nova_compute[254092]: 2025-11-25 16:32:58.992 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3702MB free_disk=59.84079360961914GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 16:32:58 compute-0 nova_compute[254092]: 2025-11-25 16:32:58.992 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:32:58 compute-0 nova_compute[254092]: 2025-11-25 16:32:58.993 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:32:59 compute-0 nova_compute[254092]: 2025-11-25 16:32:59.061 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 9bd4d655-c683-4433-a739-168946211a75 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:32:59 compute-0 nova_compute[254092]: 2025-11-25 16:32:59.062 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 800c66e3-ee9f-4766-92f2-ecda5671cde3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:32:59 compute-0 nova_compute[254092]: 2025-11-25 16:32:59.062 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 1d318e56-4a8c-4806-aa87-e837708f2a1f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:32:59 compute-0 nova_compute[254092]: 2025-11-25 16:32:59.062 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 16:32:59 compute-0 nova_compute[254092]: 2025-11-25 16:32:59.062 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 16:32:59 compute-0 nova_compute[254092]: 2025-11-25 16:32:59.157 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:32:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1367: 321 pgs: 321 active+clean; 300 MiB data, 547 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.0 MiB/s wr, 286 op/s
Nov 25 16:32:59 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3164591214' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:32:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:32:59 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1964841470' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:32:59 compute-0 nova_compute[254092]: 2025-11-25 16:32:59.646 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:32:59 compute-0 nova_compute[254092]: 2025-11-25 16:32:59.654 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:32:59 compute-0 nova_compute[254092]: 2025-11-25 16:32:59.670 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:32:59 compute-0 nova_compute[254092]: 2025-11-25 16:32:59.676 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:32:59 compute-0 nova_compute[254092]: 2025-11-25 16:32:59.695 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 16:32:59 compute-0 nova_compute[254092]: 2025-11-25 16:32:59.696 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:33:00 compute-0 nova_compute[254092]: 2025-11-25 16:33:00.322 254096 DEBUG nova.compute.manager [req-36b1cea3-30f7-4f8c-b117-c2309b35ce87 req-bbcfe8de-dbf4-4393-abb4-dfcc867bc43b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Received event network-vif-unplugged-5cf8fe87-4cac-403f-8611-0ddb37516abd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:33:00 compute-0 nova_compute[254092]: 2025-11-25 16:33:00.323 254096 DEBUG oslo_concurrency.lockutils [req-36b1cea3-30f7-4f8c-b117-c2309b35ce87 req-bbcfe8de-dbf4-4393-abb4-dfcc867bc43b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:33:00 compute-0 nova_compute[254092]: 2025-11-25 16:33:00.323 254096 DEBUG oslo_concurrency.lockutils [req-36b1cea3-30f7-4f8c-b117-c2309b35ce87 req-bbcfe8de-dbf4-4393-abb4-dfcc867bc43b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:33:00 compute-0 nova_compute[254092]: 2025-11-25 16:33:00.323 254096 DEBUG oslo_concurrency.lockutils [req-36b1cea3-30f7-4f8c-b117-c2309b35ce87 req-bbcfe8de-dbf4-4393-abb4-dfcc867bc43b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:33:00 compute-0 nova_compute[254092]: 2025-11-25 16:33:00.324 254096 DEBUG nova.compute.manager [req-36b1cea3-30f7-4f8c-b117-c2309b35ce87 req-bbcfe8de-dbf4-4393-abb4-dfcc867bc43b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] No waiting events found dispatching network-vif-unplugged-5cf8fe87-4cac-403f-8611-0ddb37516abd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:33:00 compute-0 nova_compute[254092]: 2025-11-25 16:33:00.324 254096 WARNING nova.compute.manager [req-36b1cea3-30f7-4f8c-b117-c2309b35ce87 req-bbcfe8de-dbf4-4393-abb4-dfcc867bc43b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Received unexpected event network-vif-unplugged-5cf8fe87-4cac-403f-8611-0ddb37516abd for instance with vm_state deleted and task_state None.
Nov 25 16:33:00 compute-0 nova_compute[254092]: 2025-11-25 16:33:00.324 254096 DEBUG nova.compute.manager [req-36b1cea3-30f7-4f8c-b117-c2309b35ce87 req-bbcfe8de-dbf4-4393-abb4-dfcc867bc43b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Received event network-vif-plugged-5cf8fe87-4cac-403f-8611-0ddb37516abd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:33:00 compute-0 nova_compute[254092]: 2025-11-25 16:33:00.324 254096 DEBUG oslo_concurrency.lockutils [req-36b1cea3-30f7-4f8c-b117-c2309b35ce87 req-bbcfe8de-dbf4-4393-abb4-dfcc867bc43b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:33:00 compute-0 nova_compute[254092]: 2025-11-25 16:33:00.324 254096 DEBUG oslo_concurrency.lockutils [req-36b1cea3-30f7-4f8c-b117-c2309b35ce87 req-bbcfe8de-dbf4-4393-abb4-dfcc867bc43b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:33:00 compute-0 nova_compute[254092]: 2025-11-25 16:33:00.325 254096 DEBUG oslo_concurrency.lockutils [req-36b1cea3-30f7-4f8c-b117-c2309b35ce87 req-bbcfe8de-dbf4-4393-abb4-dfcc867bc43b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:33:00 compute-0 nova_compute[254092]: 2025-11-25 16:33:00.325 254096 DEBUG nova.compute.manager [req-36b1cea3-30f7-4f8c-b117-c2309b35ce87 req-bbcfe8de-dbf4-4393-abb4-dfcc867bc43b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] No waiting events found dispatching network-vif-plugged-5cf8fe87-4cac-403f-8611-0ddb37516abd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:33:00 compute-0 nova_compute[254092]: 2025-11-25 16:33:00.325 254096 WARNING nova.compute.manager [req-36b1cea3-30f7-4f8c-b117-c2309b35ce87 req-bbcfe8de-dbf4-4393-abb4-dfcc867bc43b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Received unexpected event network-vif-plugged-5cf8fe87-4cac-403f-8611-0ddb37516abd for instance with vm_state deleted and task_state None.
Nov 25 16:33:00 compute-0 nova_compute[254092]: 2025-11-25 16:33:00.383 254096 DEBUG nova.network.neutron [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Updating instance_info_cache with network_info: [{"id": "82f63517-7636-46bf-b4e1-ba191ddad018", "address": "fa:16:3e:f0:7b:ca", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82f63517-76", "ovs_interfaceid": "82f63517-7636-46bf-b4e1-ba191ddad018", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "address": "fa:16:3e:bc:a3:51", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ce87d5c-fb", "ovs_interfaceid": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:33:00 compute-0 nova_compute[254092]: 2025-11-25 16:33:00.397 254096 DEBUG nova.compute.manager [req-93b88334-9f54-4f86-ab73-6b1bc4d43b1d req-ca0bd6c4-3202-44f4-a7a6-65d5db0e403a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Received event network-changed-0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:33:00 compute-0 nova_compute[254092]: 2025-11-25 16:33:00.398 254096 DEBUG nova.compute.manager [req-93b88334-9f54-4f86-ab73-6b1bc4d43b1d req-ca0bd6c4-3202-44f4-a7a6-65d5db0e403a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Refreshing instance network info cache due to event network-changed-0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:33:00 compute-0 nova_compute[254092]: 2025-11-25 16:33:00.398 254096 DEBUG oslo_concurrency.lockutils [req-93b88334-9f54-4f86-ab73-6b1bc4d43b1d req-ca0bd6c4-3202-44f4-a7a6-65d5db0e403a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-800c66e3-ee9f-4766-92f2-ecda5671cde3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:33:00 compute-0 nova_compute[254092]: 2025-11-25 16:33:00.402 254096 DEBUG oslo_concurrency.lockutils [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Releasing lock "refresh_cache-800c66e3-ee9f-4766-92f2-ecda5671cde3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:33:00 compute-0 nova_compute[254092]: 2025-11-25 16:33:00.403 254096 DEBUG oslo_concurrency.lockutils [req-93b88334-9f54-4f86-ab73-6b1bc4d43b1d req-ca0bd6c4-3202-44f4-a7a6-65d5db0e403a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-800c66e3-ee9f-4766-92f2-ecda5671cde3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:33:00 compute-0 nova_compute[254092]: 2025-11-25 16:33:00.403 254096 DEBUG nova.network.neutron [req-93b88334-9f54-4f86-ab73-6b1bc4d43b1d req-ca0bd6c4-3202-44f4-a7a6-65d5db0e403a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Refreshing network info cache for port 0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:33:00 compute-0 nova_compute[254092]: 2025-11-25 16:33:00.407 254096 DEBUG nova.virt.libvirt.vif [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:32:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1061549484',display_name='tempest-tempest.common.compute-instance-1061549484',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1061549484',id=32,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDGZe6LoY198VQqVmszTzRQn1eLrChbRZgirb9UFyHnKiwbK+onUZ9OAKdmbbmeuifOBSXxAmDh6Tzi6OA/iv9WrQsgxGvk1nGcnbFbOsVIGStOyhcAbSBeBMtXZYtHi4Q==',key_name='tempest-keypair-386194762',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:32:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-kwb5mjaz',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:32:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=800c66e3-ee9f-4766-92f2-ecda5671cde3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "address": "fa:16:3e:bc:a3:51", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ce87d5c-fb", "ovs_interfaceid": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:33:00 compute-0 nova_compute[254092]: 2025-11-25 16:33:00.407 254096 DEBUG nova.network.os_vif_util [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "address": "fa:16:3e:bc:a3:51", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ce87d5c-fb", "ovs_interfaceid": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:33:00 compute-0 nova_compute[254092]: 2025-11-25 16:33:00.408 254096 DEBUG nova.network.os_vif_util [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:a3:51,bridge_name='br-int',has_traffic_filtering=True,id=0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0ce87d5c-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:33:00 compute-0 nova_compute[254092]: 2025-11-25 16:33:00.408 254096 DEBUG os_vif [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:a3:51,bridge_name='br-int',has_traffic_filtering=True,id=0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0ce87d5c-fb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:33:00 compute-0 nova_compute[254092]: 2025-11-25 16:33:00.409 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:00 compute-0 nova_compute[254092]: 2025-11-25 16:33:00.409 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:33:00 compute-0 nova_compute[254092]: 2025-11-25 16:33:00.410 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:33:00 compute-0 nova_compute[254092]: 2025-11-25 16:33:00.412 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:00 compute-0 nova_compute[254092]: 2025-11-25 16:33:00.412 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0ce87d5c-fb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:33:00 compute-0 nova_compute[254092]: 2025-11-25 16:33:00.413 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0ce87d5c-fb, col_values=(('external_ids', {'iface-id': '0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bc:a3:51', 'vm-uuid': '800c66e3-ee9f-4766-92f2-ecda5671cde3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:33:00 compute-0 nova_compute[254092]: 2025-11-25 16:33:00.414 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:00 compute-0 NetworkManager[48891]: <info>  [1764088380.4157] manager: (tap0ce87d5c-fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/114)
Nov 25 16:33:00 compute-0 nova_compute[254092]: 2025-11-25 16:33:00.417 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:33:00 compute-0 nova_compute[254092]: 2025-11-25 16:33:00.423 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:00 compute-0 nova_compute[254092]: 2025-11-25 16:33:00.424 254096 INFO os_vif [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:a3:51,bridge_name='br-int',has_traffic_filtering=True,id=0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0ce87d5c-fb')
Nov 25 16:33:00 compute-0 nova_compute[254092]: 2025-11-25 16:33:00.424 254096 DEBUG nova.virt.libvirt.vif [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:32:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1061549484',display_name='tempest-tempest.common.compute-instance-1061549484',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1061549484',id=32,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDGZe6LoY198VQqVmszTzRQn1eLrChbRZgirb9UFyHnKiwbK+onUZ9OAKdmbbmeuifOBSXxAmDh6Tzi6OA/iv9WrQsgxGvk1nGcnbFbOsVIGStOyhcAbSBeBMtXZYtHi4Q==',key_name='tempest-keypair-386194762',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:32:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-kwb5mjaz',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:32:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=800c66e3-ee9f-4766-92f2-ecda5671cde3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "address": "fa:16:3e:bc:a3:51", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ce87d5c-fb", "ovs_interfaceid": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:33:00 compute-0 nova_compute[254092]: 2025-11-25 16:33:00.425 254096 DEBUG nova.network.os_vif_util [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "address": "fa:16:3e:bc:a3:51", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ce87d5c-fb", "ovs_interfaceid": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:33:00 compute-0 nova_compute[254092]: 2025-11-25 16:33:00.425 254096 DEBUG nova.network.os_vif_util [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:a3:51,bridge_name='br-int',has_traffic_filtering=True,id=0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0ce87d5c-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:33:00 compute-0 nova_compute[254092]: 2025-11-25 16:33:00.429 254096 DEBUG nova.virt.libvirt.guest [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] attach device xml: <interface type="ethernet">
Nov 25 16:33:00 compute-0 nova_compute[254092]:   <mac address="fa:16:3e:bc:a3:51"/>
Nov 25 16:33:00 compute-0 nova_compute[254092]:   <model type="virtio"/>
Nov 25 16:33:00 compute-0 nova_compute[254092]:   <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:33:00 compute-0 nova_compute[254092]:   <mtu size="1442"/>
Nov 25 16:33:00 compute-0 nova_compute[254092]:   <target dev="tap0ce87d5c-fb"/>
Nov 25 16:33:00 compute-0 nova_compute[254092]: </interface>
Nov 25 16:33:00 compute-0 nova_compute[254092]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 25 16:33:00 compute-0 kernel: tap0ce87d5c-fb: entered promiscuous mode
Nov 25 16:33:00 compute-0 NetworkManager[48891]: <info>  [1764088380.4442] manager: (tap0ce87d5c-fb): new Tun device (/org/freedesktop/NetworkManager/Devices/115)
Nov 25 16:33:00 compute-0 ovn_controller[153477]: 2025-11-25T16:33:00Z|00234|binding|INFO|Claiming lport 0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 for this chassis.
Nov 25 16:33:00 compute-0 ovn_controller[153477]: 2025-11-25T16:33:00Z|00235|binding|INFO|0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9: Claiming fa:16:3e:bc:a3:51 10.100.0.11
Nov 25 16:33:00 compute-0 nova_compute[254092]: 2025-11-25 16:33:00.447 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:00.458 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:a3:51 10.100.0.11'], port_security=['fa:16:3e:bc:a3:51 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-677304244', 'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '800c66e3-ee9f-4766-92f2-ecda5671cde3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-677304244', 'neutron:project_id': '36fd407edf16424e96041f57fccb9e2b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4d6ed831-26b5-4c46-a07a-309ffadbe01b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a0fe150-4b32-4abf-ad06-055b2d97facb, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:33:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:00.460 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 in datapath 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d bound to our chassis
Nov 25 16:33:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:00.462 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d
Nov 25 16:33:00 compute-0 ovn_controller[153477]: 2025-11-25T16:33:00Z|00236|binding|INFO|Setting lport 0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 ovn-installed in OVS
Nov 25 16:33:00 compute-0 ovn_controller[153477]: 2025-11-25T16:33:00Z|00237|binding|INFO|Setting lport 0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 up in Southbound
Nov 25 16:33:00 compute-0 nova_compute[254092]: 2025-11-25 16:33:00.467 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:00.477 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c46bab26-7f35-40b8-9ed6-eb3b009dc132]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:00 compute-0 systemd-udevd[296461]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:33:00 compute-0 NetworkManager[48891]: <info>  [1764088380.5093] device (tap0ce87d5c-fb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:33:00 compute-0 NetworkManager[48891]: <info>  [1764088380.5108] device (tap0ce87d5c-fb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:33:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:00.517 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[474bed40-347e-4a41-9d67-3881125a7f69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:00.522 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8c4e8ff5-3b66-469f-9e4e-b82aa7a49c2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:00 compute-0 nova_compute[254092]: 2025-11-25 16:33:00.534 254096 DEBUG nova.virt.libvirt.driver [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:33:00 compute-0 nova_compute[254092]: 2025-11-25 16:33:00.535 254096 DEBUG nova.virt.libvirt.driver [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:33:00 compute-0 nova_compute[254092]: 2025-11-25 16:33:00.535 254096 DEBUG nova.virt.libvirt.driver [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No VIF found with MAC fa:16:3e:f0:7b:ca, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:33:00 compute-0 nova_compute[254092]: 2025-11-25 16:33:00.535 254096 DEBUG nova.virt.libvirt.driver [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No VIF found with MAC fa:16:3e:bc:a3:51, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:33:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:00.552 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6b4441af-4e6a-4e17-863a-dfbf996cccdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:00 compute-0 nova_compute[254092]: 2025-11-25 16:33:00.569 254096 DEBUG nova.virt.libvirt.guest [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:33:00 compute-0 nova_compute[254092]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:33:00 compute-0 nova_compute[254092]:   <nova:name>tempest-tempest.common.compute-instance-1061549484</nova:name>
Nov 25 16:33:00 compute-0 nova_compute[254092]:   <nova:creationTime>2025-11-25 16:33:00</nova:creationTime>
Nov 25 16:33:00 compute-0 nova_compute[254092]:   <nova:flavor name="m1.nano">
Nov 25 16:33:00 compute-0 nova_compute[254092]:     <nova:memory>128</nova:memory>
Nov 25 16:33:00 compute-0 nova_compute[254092]:     <nova:disk>1</nova:disk>
Nov 25 16:33:00 compute-0 nova_compute[254092]:     <nova:swap>0</nova:swap>
Nov 25 16:33:00 compute-0 nova_compute[254092]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:33:00 compute-0 nova_compute[254092]:     <nova:vcpus>1</nova:vcpus>
Nov 25 16:33:00 compute-0 nova_compute[254092]:   </nova:flavor>
Nov 25 16:33:00 compute-0 nova_compute[254092]:   <nova:owner>
Nov 25 16:33:00 compute-0 nova_compute[254092]:     <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 16:33:00 compute-0 nova_compute[254092]:     <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 16:33:00 compute-0 nova_compute[254092]:   </nova:owner>
Nov 25 16:33:00 compute-0 nova_compute[254092]:   <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:33:00 compute-0 nova_compute[254092]:   <nova:ports>
Nov 25 16:33:00 compute-0 nova_compute[254092]:     <nova:port uuid="82f63517-7636-46bf-b4e1-ba191ddad018">
Nov 25 16:33:00 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 16:33:00 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:33:00 compute-0 nova_compute[254092]:     <nova:port uuid="0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9">
Nov 25 16:33:00 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 16:33:00 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:33:00 compute-0 nova_compute[254092]:   </nova:ports>
Nov 25 16:33:00 compute-0 nova_compute[254092]: </nova:instance>
Nov 25 16:33:00 compute-0 nova_compute[254092]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 25 16:33:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:00.571 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b134e59d-b700-4398-8857-d11d9f01c436]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52e7d5b9-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:97:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 480306, 'reachable_time': 15503, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296468, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:00.585 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[72f56a1b-7f21-42f0-b32c-f2f2f635e574]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 480317, 'tstamp': 480317}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296469, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 480320, 'tstamp': 480320}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296469, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:00.587 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52e7d5b9-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:33:00 compute-0 nova_compute[254092]: 2025-11-25 16:33:00.588 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:00.589 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52e7d5b9-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:33:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:00.590 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:33:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:00.590 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap52e7d5b9-00, col_values=(('external_ids', {'iface-id': 'af5d2618-f326-4aaf-9395-f47548ac108b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:33:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:00.591 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:33:00 compute-0 nova_compute[254092]: 2025-11-25 16:33:00.594 254096 DEBUG oslo_concurrency.lockutils [None req-6c18526d-ed03-4095-a6b0-d5807450ebd7 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "interface-800c66e3-ee9f-4766-92f2-ecda5671cde3-0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 9.255s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:33:00 compute-0 ceph-mon[74985]: pgmap v1367: 321 pgs: 321 active+clean; 300 MiB data, 547 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.0 MiB/s wr, 286 op/s
Nov 25 16:33:00 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1964841470' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:33:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1368: 321 pgs: 321 active+clean; 279 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.1 MiB/s wr, 317 op/s
Nov 25 16:33:01 compute-0 nova_compute[254092]: 2025-11-25 16:33:01.825 254096 DEBUG oslo_concurrency.lockutils [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "interface-800c66e3-ee9f-4766-92f2-ecda5671cde3-0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:33:01 compute-0 nova_compute[254092]: 2025-11-25 16:33:01.826 254096 DEBUG oslo_concurrency.lockutils [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "interface-800c66e3-ee9f-4766-92f2-ecda5671cde3-0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:33:01 compute-0 nova_compute[254092]: 2025-11-25 16:33:01.841 254096 DEBUG nova.objects.instance [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'flavor' on Instance uuid 800c66e3-ee9f-4766-92f2-ecda5671cde3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:33:01 compute-0 nova_compute[254092]: 2025-11-25 16:33:01.883 254096 DEBUG nova.virt.libvirt.vif [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:32:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1061549484',display_name='tempest-tempest.common.compute-instance-1061549484',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1061549484',id=32,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDGZe6LoY198VQqVmszTzRQn1eLrChbRZgirb9UFyHnKiwbK+onUZ9OAKdmbbmeuifOBSXxAmDh6Tzi6OA/iv9WrQsgxGvk1nGcnbFbOsVIGStOyhcAbSBeBMtXZYtHi4Q==',key_name='tempest-keypair-386194762',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:32:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-kwb5mjaz',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:32:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=800c66e3-ee9f-4766-92f2-ecda5671cde3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "address": "fa:16:3e:bc:a3:51", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ce87d5c-fb", "ovs_interfaceid": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:33:01 compute-0 nova_compute[254092]: 2025-11-25 16:33:01.883 254096 DEBUG nova.network.os_vif_util [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "address": "fa:16:3e:bc:a3:51", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ce87d5c-fb", "ovs_interfaceid": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:33:01 compute-0 nova_compute[254092]: 2025-11-25 16:33:01.884 254096 DEBUG nova.network.os_vif_util [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:a3:51,bridge_name='br-int',has_traffic_filtering=True,id=0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0ce87d5c-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:33:01 compute-0 nova_compute[254092]: 2025-11-25 16:33:01.888 254096 DEBUG nova.virt.libvirt.guest [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:bc:a3:51"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap0ce87d5c-fb"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 16:33:01 compute-0 nova_compute[254092]: 2025-11-25 16:33:01.890 254096 DEBUG nova.virt.libvirt.guest [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:bc:a3:51"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap0ce87d5c-fb"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 16:33:01 compute-0 nova_compute[254092]: 2025-11-25 16:33:01.894 254096 DEBUG nova.virt.libvirt.driver [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Attempting to detach device tap0ce87d5c-fb from instance 800c66e3-ee9f-4766-92f2-ecda5671cde3 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 25 16:33:01 compute-0 nova_compute[254092]: 2025-11-25 16:33:01.894 254096 DEBUG nova.virt.libvirt.guest [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] detach device xml: <interface type="ethernet">
Nov 25 16:33:01 compute-0 nova_compute[254092]:   <mac address="fa:16:3e:bc:a3:51"/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:   <model type="virtio"/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:   <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:   <mtu size="1442"/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:   <target dev="tap0ce87d5c-fb"/>
Nov 25 16:33:01 compute-0 nova_compute[254092]: </interface>
Nov 25 16:33:01 compute-0 nova_compute[254092]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 25 16:33:01 compute-0 nova_compute[254092]: 2025-11-25 16:33:01.899 254096 DEBUG nova.virt.libvirt.guest [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:bc:a3:51"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap0ce87d5c-fb"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 16:33:01 compute-0 nova_compute[254092]: 2025-11-25 16:33:01.906 254096 DEBUG nova.virt.libvirt.guest [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:bc:a3:51"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap0ce87d5c-fb"/></interface>not found in domain: <domain type='kvm' id='37'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:   <name>instance-00000020</name>
Nov 25 16:33:01 compute-0 nova_compute[254092]:   <uuid>800c66e3-ee9f-4766-92f2-ecda5671cde3</uuid>
Nov 25 16:33:01 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:33:01 compute-0 nova_compute[254092]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:   <nova:name>tempest-tempest.common.compute-instance-1061549484</nova:name>
Nov 25 16:33:01 compute-0 nova_compute[254092]:   <nova:creationTime>2025-11-25 16:33:00</nova:creationTime>
Nov 25 16:33:01 compute-0 nova_compute[254092]:   <nova:flavor name="m1.nano">
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <nova:memory>128</nova:memory>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <nova:disk>1</nova:disk>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <nova:swap>0</nova:swap>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <nova:vcpus>1</nova:vcpus>
Nov 25 16:33:01 compute-0 nova_compute[254092]:   </nova:flavor>
Nov 25 16:33:01 compute-0 nova_compute[254092]:   <nova:owner>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 16:33:01 compute-0 nova_compute[254092]:   </nova:owner>
Nov 25 16:33:01 compute-0 nova_compute[254092]:   <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:   <nova:ports>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <nova:port uuid="82f63517-7636-46bf-b4e1-ba191ddad018">
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <nova:port uuid="0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9">
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:33:01 compute-0 nova_compute[254092]:   </nova:ports>
Nov 25 16:33:01 compute-0 nova_compute[254092]: </nova:instance>
Nov 25 16:33:01 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:33:01 compute-0 nova_compute[254092]:   <memory unit='KiB'>131072</memory>
Nov 25 16:33:01 compute-0 nova_compute[254092]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 16:33:01 compute-0 nova_compute[254092]:   <vcpu placement='static'>1</vcpu>
Nov 25 16:33:01 compute-0 nova_compute[254092]:   <resource>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <partition>/machine</partition>
Nov 25 16:33:01 compute-0 nova_compute[254092]:   </resource>
Nov 25 16:33:01 compute-0 nova_compute[254092]:   <sysinfo type='smbios'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <system>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <entry name='manufacturer'>RDO</entry>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <entry name='product'>OpenStack Compute</entry>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <entry name='serial'>800c66e3-ee9f-4766-92f2-ecda5671cde3</entry>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <entry name='uuid'>800c66e3-ee9f-4766-92f2-ecda5671cde3</entry>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <entry name='family'>Virtual Machine</entry>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     </system>
Nov 25 16:33:01 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:33:01 compute-0 nova_compute[254092]:   <os>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <boot dev='hd'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <smbios mode='sysinfo'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:   </os>
Nov 25 16:33:01 compute-0 nova_compute[254092]:   <features>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <vmcoreinfo state='on'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:   </features>
Nov 25 16:33:01 compute-0 nova_compute[254092]:   <cpu mode='custom' match='exact' check='full'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <vendor>AMD</vendor>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <feature policy='require' name='x2apic'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <feature policy='require' name='tsc-deadline'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <feature policy='require' name='hypervisor'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <feature policy='require' name='tsc_adjust'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <feature policy='require' name='spec-ctrl'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <feature policy='require' name='stibp'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <feature policy='require' name='ssbd'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <feature policy='require' name='cmp_legacy'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <feature policy='require' name='overflow-recov'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <feature policy='require' name='succor'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <feature policy='require' name='ibrs'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <feature policy='require' name='amd-ssbd'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <feature policy='require' name='virt-ssbd'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <feature policy='disable' name='lbrv'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <feature policy='disable' name='tsc-scale'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <feature policy='disable' name='vmcb-clean'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <feature policy='disable' name='flushbyasid'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <feature policy='disable' name='pause-filter'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <feature policy='disable' name='pfthreshold'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <feature policy='disable' name='xsaves'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <feature policy='disable' name='svm'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <feature policy='require' name='topoext'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <feature policy='disable' name='npt'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <feature policy='disable' name='nrip-save'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:33:01 compute-0 nova_compute[254092]:   <clock offset='utc'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <timer name='pit' tickpolicy='delay'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <timer name='hpet' present='no'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:33:01 compute-0 nova_compute[254092]:   <on_poweroff>destroy</on_poweroff>
Nov 25 16:33:01 compute-0 nova_compute[254092]:   <on_reboot>restart</on_reboot>
Nov 25 16:33:01 compute-0 nova_compute[254092]:   <on_crash>destroy</on_crash>
Nov 25 16:33:01 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <disk type='network' device='disk'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <auth username='openstack'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:         <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <source protocol='rbd' name='vms/800c66e3-ee9f-4766-92f2-ecda5671cde3_disk' index='2'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:         <host name='192.168.122.100' port='6789'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       </source>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <target dev='vda' bus='virtio'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <alias name='virtio-disk0'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <disk type='network' device='cdrom'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <auth username='openstack'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:         <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <source protocol='rbd' name='vms/800c66e3-ee9f-4766-92f2-ecda5671cde3_disk.config' index='1'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:         <host name='192.168.122.100' port='6789'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       </source>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <target dev='sda' bus='sata'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <readonly/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <alias name='sata0-0-0'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <controller type='pci' index='0' model='pcie-root'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <alias name='pcie.0'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <target chassis='1' port='0x10'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <alias name='pci.1'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <target chassis='2' port='0x11'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <alias name='pci.2'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <target chassis='3' port='0x12'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <alias name='pci.3'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <target chassis='4' port='0x13'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <alias name='pci.4'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <target chassis='5' port='0x14'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <alias name='pci.5'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <target chassis='6' port='0x15'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <alias name='pci.6'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <target chassis='7' port='0x16'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <alias name='pci.7'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <target chassis='8' port='0x17'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <alias name='pci.8'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <target chassis='9' port='0x18'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <alias name='pci.9'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <target chassis='10' port='0x19'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <alias name='pci.10'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <target chassis='11' port='0x1a'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <alias name='pci.11'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <target chassis='12' port='0x1b'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <alias name='pci.12'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <target chassis='13' port='0x1c'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <alias name='pci.13'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <target chassis='14' port='0x1d'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <alias name='pci.14'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <target chassis='15' port='0x1e'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <alias name='pci.15'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <target chassis='16' port='0x1f'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <alias name='pci.16'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <target chassis='17' port='0x20'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <alias name='pci.17'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <target chassis='18' port='0x21'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <alias name='pci.18'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <target chassis='19' port='0x22'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <alias name='pci.19'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <target chassis='20' port='0x23'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <alias name='pci.20'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <target chassis='21' port='0x24'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <alias name='pci.21'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <target chassis='22' port='0x25'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <alias name='pci.22'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <target chassis='23' port='0x26'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <alias name='pci.23'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <target chassis='24' port='0x27'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <alias name='pci.24'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <target chassis='25' port='0x28'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <alias name='pci.25'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <model name='pcie-pci-bridge'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <alias name='pci.26'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <alias name='usb'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <controller type='sata' index='0'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <alias name='ide'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <interface type='ethernet'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <mac address='fa:16:3e:f0:7b:ca'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <target dev='tap82f63517-76'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <model type='virtio'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <mtu size='1442'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <alias name='net0'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <interface type='ethernet'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <mac address='fa:16:3e:bc:a3:51'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <target dev='tap0ce87d5c-fb'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <model type='virtio'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <mtu size='1442'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <alias name='net1'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <serial type='pty'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <source path='/dev/pts/1'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <log file='/var/lib/nova/instances/800c66e3-ee9f-4766-92f2-ecda5671cde3/console.log' append='off'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <target type='isa-serial' port='0'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:         <model name='isa-serial'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       </target>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <alias name='serial0'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <console type='pty' tty='/dev/pts/1'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <source path='/dev/pts/1'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <log file='/var/lib/nova/instances/800c66e3-ee9f-4766-92f2-ecda5671cde3/console.log' append='off'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <target type='serial' port='0'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <alias name='serial0'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     </console>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <input type='tablet' bus='usb'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <alias name='input0'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <address type='usb' bus='0' port='1'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     </input>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <input type='mouse' bus='ps2'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <alias name='input1'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     </input>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <input type='keyboard' bus='ps2'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <alias name='input2'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     </input>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <listen type='address' address='::0'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     </graphics>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <audio id='1' type='none'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <video>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <model type='virtio' heads='1' primary='yes'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <alias name='video0'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     </video>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <watchdog model='itco' action='reset'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <alias name='watchdog0'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     </watchdog>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <memballoon model='virtio'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <stats period='10'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <alias name='balloon0'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <rng model='virtio'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <backend model='random'>/dev/urandom</backend>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <alias name='rng0'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:33:01 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:33:01 compute-0 nova_compute[254092]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <label>system_u:system_r:svirt_t:s0:c565,c717</label>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c565,c717</imagelabel>
Nov 25 16:33:01 compute-0 nova_compute[254092]:   </seclabel>
Nov 25 16:33:01 compute-0 nova_compute[254092]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <label>+107:+107</label>
Nov 25 16:33:01 compute-0 nova_compute[254092]:     <imagelabel>+107:+107</imagelabel>
Nov 25 16:33:01 compute-0 nova_compute[254092]:   </seclabel>
Nov 25 16:33:01 compute-0 nova_compute[254092]: </domain>
Nov 25 16:33:01 compute-0 nova_compute[254092]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 16:33:01 compute-0 nova_compute[254092]: 2025-11-25 16:33:01.907 254096 INFO nova.virt.libvirt.driver [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Successfully detached device tap0ce87d5c-fb from instance 800c66e3-ee9f-4766-92f2-ecda5671cde3 from the persistent domain config.
Nov 25 16:33:01 compute-0 nova_compute[254092]: 2025-11-25 16:33:01.907 254096 DEBUG nova.virt.libvirt.driver [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] (1/8): Attempting to detach device tap0ce87d5c-fb with device alias net1 from instance 800c66e3-ee9f-4766-92f2-ecda5671cde3 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 25 16:33:01 compute-0 nova_compute[254092]: 2025-11-25 16:33:01.908 254096 DEBUG nova.virt.libvirt.guest [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] detach device xml: <interface type="ethernet">
Nov 25 16:33:01 compute-0 nova_compute[254092]:   <mac address="fa:16:3e:bc:a3:51"/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:   <model type="virtio"/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:   <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:   <mtu size="1442"/>
Nov 25 16:33:01 compute-0 nova_compute[254092]:   <target dev="tap0ce87d5c-fb"/>
Nov 25 16:33:01 compute-0 nova_compute[254092]: </interface>
Nov 25 16:33:01 compute-0 nova_compute[254092]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 25 16:33:01 compute-0 kernel: tap0ce87d5c-fb (unregistering): left promiscuous mode
Nov 25 16:33:01 compute-0 NetworkManager[48891]: <info>  [1764088381.9682] device (tap0ce87d5c-fb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:33:01 compute-0 ovn_controller[153477]: 2025-11-25T16:33:01Z|00238|binding|INFO|Releasing lport 0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 from this chassis (sb_readonly=0)
Nov 25 16:33:01 compute-0 ovn_controller[153477]: 2025-11-25T16:33:01Z|00239|binding|INFO|Setting lport 0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 down in Southbound
Nov 25 16:33:01 compute-0 ovn_controller[153477]: 2025-11-25T16:33:01Z|00240|binding|INFO|Removing iface tap0ce87d5c-fb ovn-installed in OVS
Nov 25 16:33:01 compute-0 nova_compute[254092]: 2025-11-25 16:33:01.973 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:01.982 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:a3:51 10.100.0.11'], port_security=['fa:16:3e:bc:a3:51 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-677304244', 'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '800c66e3-ee9f-4766-92f2-ecda5671cde3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-677304244', 'neutron:project_id': '36fd407edf16424e96041f57fccb9e2b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4d6ed831-26b5-4c46-a07a-309ffadbe01b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a0fe150-4b32-4abf-ad06-055b2d97facb, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:33:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:01.986 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 in datapath 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d unbound from our chassis
Nov 25 16:33:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:01.988 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d
Nov 25 16:33:01 compute-0 nova_compute[254092]: 2025-11-25 16:33:01.996 254096 DEBUG nova.virt.libvirt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Received event <DeviceRemovedEvent: 1764088381.9961386, 800c66e3-ee9f-4766-92f2-ecda5671cde3 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 25 16:33:02 compute-0 nova_compute[254092]: 2025-11-25 16:33:01.999 254096 DEBUG nova.virt.libvirt.driver [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Start waiting for the detach event from libvirt for device tap0ce87d5c-fb with device alias net1 for instance 800c66e3-ee9f-4766-92f2-ecda5671cde3 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 25 16:33:02 compute-0 nova_compute[254092]: 2025-11-25 16:33:02.000 254096 DEBUG nova.virt.libvirt.guest [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:bc:a3:51"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap0ce87d5c-fb"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 16:33:02 compute-0 nova_compute[254092]: 2025-11-25 16:33:02.006 254096 DEBUG nova.virt.libvirt.guest [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:bc:a3:51"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap0ce87d5c-fb"/></interface>not found in domain: <domain type='kvm' id='37'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:   <name>instance-00000020</name>
Nov 25 16:33:02 compute-0 nova_compute[254092]:   <uuid>800c66e3-ee9f-4766-92f2-ecda5671cde3</uuid>
Nov 25 16:33:02 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:33:02 compute-0 nova_compute[254092]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:   <nova:name>tempest-tempest.common.compute-instance-1061549484</nova:name>
Nov 25 16:33:02 compute-0 nova_compute[254092]:   <nova:creationTime>2025-11-25 16:33:00</nova:creationTime>
Nov 25 16:33:02 compute-0 nova_compute[254092]:   <nova:flavor name="m1.nano">
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <nova:memory>128</nova:memory>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <nova:disk>1</nova:disk>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <nova:swap>0</nova:swap>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <nova:vcpus>1</nova:vcpus>
Nov 25 16:33:02 compute-0 nova_compute[254092]:   </nova:flavor>
Nov 25 16:33:02 compute-0 nova_compute[254092]:   <nova:owner>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 16:33:02 compute-0 nova_compute[254092]:   </nova:owner>
Nov 25 16:33:02 compute-0 nova_compute[254092]:   <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:   <nova:ports>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <nova:port uuid="82f63517-7636-46bf-b4e1-ba191ddad018">
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <nova:port uuid="0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9">
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:33:02 compute-0 nova_compute[254092]:   </nova:ports>
Nov 25 16:33:02 compute-0 nova_compute[254092]: </nova:instance>
Nov 25 16:33:02 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:33:02 compute-0 nova_compute[254092]:   <memory unit='KiB'>131072</memory>
Nov 25 16:33:02 compute-0 nova_compute[254092]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 16:33:02 compute-0 nova_compute[254092]:   <vcpu placement='static'>1</vcpu>
Nov 25 16:33:02 compute-0 nova_compute[254092]:   <resource>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <partition>/machine</partition>
Nov 25 16:33:02 compute-0 nova_compute[254092]:   </resource>
Nov 25 16:33:02 compute-0 nova_compute[254092]:   <sysinfo type='smbios'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <system>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <entry name='manufacturer'>RDO</entry>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <entry name='product'>OpenStack Compute</entry>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <entry name='serial'>800c66e3-ee9f-4766-92f2-ecda5671cde3</entry>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <entry name='uuid'>800c66e3-ee9f-4766-92f2-ecda5671cde3</entry>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <entry name='family'>Virtual Machine</entry>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     </system>
Nov 25 16:33:02 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:33:02 compute-0 nova_compute[254092]:   <os>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <boot dev='hd'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <smbios mode='sysinfo'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:   </os>
Nov 25 16:33:02 compute-0 nova_compute[254092]:   <features>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <vmcoreinfo state='on'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:   </features>
Nov 25 16:33:02 compute-0 nova_compute[254092]:   <cpu mode='custom' match='exact' check='full'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <vendor>AMD</vendor>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <feature policy='require' name='x2apic'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <feature policy='require' name='tsc-deadline'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <feature policy='require' name='hypervisor'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <feature policy='require' name='tsc_adjust'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <feature policy='require' name='spec-ctrl'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <feature policy='require' name='stibp'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <feature policy='require' name='ssbd'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <feature policy='require' name='cmp_legacy'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <feature policy='require' name='overflow-recov'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <feature policy='require' name='succor'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <feature policy='require' name='ibrs'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <feature policy='require' name='amd-ssbd'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <feature policy='require' name='virt-ssbd'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <feature policy='disable' name='lbrv'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <feature policy='disable' name='tsc-scale'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <feature policy='disable' name='vmcb-clean'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <feature policy='disable' name='flushbyasid'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <feature policy='disable' name='pause-filter'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <feature policy='disable' name='pfthreshold'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <feature policy='disable' name='xsaves'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <feature policy='disable' name='svm'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <feature policy='require' name='topoext'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <feature policy='disable' name='npt'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <feature policy='disable' name='nrip-save'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:33:02 compute-0 nova_compute[254092]:   <clock offset='utc'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <timer name='pit' tickpolicy='delay'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <timer name='hpet' present='no'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:33:02 compute-0 nova_compute[254092]:   <on_poweroff>destroy</on_poweroff>
Nov 25 16:33:02 compute-0 nova_compute[254092]:   <on_reboot>restart</on_reboot>
Nov 25 16:33:02 compute-0 nova_compute[254092]:   <on_crash>destroy</on_crash>
Nov 25 16:33:02 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <disk type='network' device='disk'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <auth username='openstack'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:         <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <source protocol='rbd' name='vms/800c66e3-ee9f-4766-92f2-ecda5671cde3_disk' index='2'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:         <host name='192.168.122.100' port='6789'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       </source>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <target dev='vda' bus='virtio'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <alias name='virtio-disk0'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <disk type='network' device='cdrom'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <auth username='openstack'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:         <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <source protocol='rbd' name='vms/800c66e3-ee9f-4766-92f2-ecda5671cde3_disk.config' index='1'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:         <host name='192.168.122.100' port='6789'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       </source>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <target dev='sda' bus='sata'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <readonly/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <alias name='sata0-0-0'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <controller type='pci' index='0' model='pcie-root'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <alias name='pcie.0'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <target chassis='1' port='0x10'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <alias name='pci.1'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <target chassis='2' port='0x11'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <alias name='pci.2'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <target chassis='3' port='0x12'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <alias name='pci.3'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <target chassis='4' port='0x13'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <alias name='pci.4'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <target chassis='5' port='0x14'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <alias name='pci.5'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <target chassis='6' port='0x15'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <alias name='pci.6'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <target chassis='7' port='0x16'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <alias name='pci.7'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <target chassis='8' port='0x17'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <alias name='pci.8'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <target chassis='9' port='0x18'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <alias name='pci.9'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <target chassis='10' port='0x19'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <alias name='pci.10'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <target chassis='11' port='0x1a'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <alias name='pci.11'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <target chassis='12' port='0x1b'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <alias name='pci.12'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <target chassis='13' port='0x1c'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <alias name='pci.13'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <target chassis='14' port='0x1d'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <alias name='pci.14'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <target chassis='15' port='0x1e'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <alias name='pci.15'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <target chassis='16' port='0x1f'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <alias name='pci.16'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <target chassis='17' port='0x20'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <alias name='pci.17'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <target chassis='18' port='0x21'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <alias name='pci.18'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <target chassis='19' port='0x22'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <alias name='pci.19'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <target chassis='20' port='0x23'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <alias name='pci.20'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <target chassis='21' port='0x24'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <alias name='pci.21'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <target chassis='22' port='0x25'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <alias name='pci.22'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <target chassis='23' port='0x26'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <alias name='pci.23'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <target chassis='24' port='0x27'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <alias name='pci.24'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <target chassis='25' port='0x28'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <alias name='pci.25'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <model name='pcie-pci-bridge'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <alias name='pci.26'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <alias name='usb'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <controller type='sata' index='0'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <alias name='ide'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <interface type='ethernet'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <mac address='fa:16:3e:f0:7b:ca'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <target dev='tap82f63517-76'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <model type='virtio'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <mtu size='1442'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <alias name='net0'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <serial type='pty'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <source path='/dev/pts/1'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <log file='/var/lib/nova/instances/800c66e3-ee9f-4766-92f2-ecda5671cde3/console.log' append='off'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <target type='isa-serial' port='0'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:         <model name='isa-serial'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       </target>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <alias name='serial0'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <console type='pty' tty='/dev/pts/1'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <source path='/dev/pts/1'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <log file='/var/lib/nova/instances/800c66e3-ee9f-4766-92f2-ecda5671cde3/console.log' append='off'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <target type='serial' port='0'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <alias name='serial0'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     </console>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <input type='tablet' bus='usb'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <alias name='input0'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <address type='usb' bus='0' port='1'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     </input>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <input type='mouse' bus='ps2'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <alias name='input1'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     </input>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <input type='keyboard' bus='ps2'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <alias name='input2'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     </input>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <listen type='address' address='::0'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     </graphics>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <audio id='1' type='none'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <video>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <model type='virtio' heads='1' primary='yes'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <alias name='video0'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     </video>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <watchdog model='itco' action='reset'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <alias name='watchdog0'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     </watchdog>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <memballoon model='virtio'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <stats period='10'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <alias name='balloon0'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <rng model='virtio'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <backend model='random'>/dev/urandom</backend>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <alias name='rng0'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:33:02 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:33:02 compute-0 nova_compute[254092]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <label>system_u:system_r:svirt_t:s0:c565,c717</label>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c565,c717</imagelabel>
Nov 25 16:33:02 compute-0 nova_compute[254092]:   </seclabel>
Nov 25 16:33:02 compute-0 nova_compute[254092]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <label>+107:+107</label>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <imagelabel>+107:+107</imagelabel>
Nov 25 16:33:02 compute-0 nova_compute[254092]:   </seclabel>
Nov 25 16:33:02 compute-0 nova_compute[254092]: </domain>
Nov 25 16:33:02 compute-0 nova_compute[254092]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 16:33:02 compute-0 nova_compute[254092]: 2025-11-25 16:33:02.006 254096 INFO nova.virt.libvirt.driver [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Successfully detached device tap0ce87d5c-fb from instance 800c66e3-ee9f-4766-92f2-ecda5671cde3 from the live domain config.
Nov 25 16:33:02 compute-0 nova_compute[254092]: 2025-11-25 16:33:02.008 254096 DEBUG nova.virt.libvirt.vif [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:32:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1061549484',display_name='tempest-tempest.common.compute-instance-1061549484',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1061549484',id=32,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDGZe6LoY198VQqVmszTzRQn1eLrChbRZgirb9UFyHnKiwbK+onUZ9OAKdmbbmeuifOBSXxAmDh6Tzi6OA/iv9WrQsgxGvk1nGcnbFbOsVIGStOyhcAbSBeBMtXZYtHi4Q==',key_name='tempest-keypair-386194762',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:32:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-kwb5mjaz',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:32:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=800c66e3-ee9f-4766-92f2-ecda5671cde3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "address": "fa:16:3e:bc:a3:51", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ce87d5c-fb", "ovs_interfaceid": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:33:02 compute-0 nova_compute[254092]: 2025-11-25 16:33:02.009 254096 DEBUG nova.network.os_vif_util [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "address": "fa:16:3e:bc:a3:51", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ce87d5c-fb", "ovs_interfaceid": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:33:02 compute-0 nova_compute[254092]: 2025-11-25 16:33:02.010 254096 DEBUG nova.network.os_vif_util [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:a3:51,bridge_name='br-int',has_traffic_filtering=True,id=0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0ce87d5c-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:33:02 compute-0 nova_compute[254092]: 2025-11-25 16:33:02.011 254096 DEBUG os_vif [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:a3:51,bridge_name='br-int',has_traffic_filtering=True,id=0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0ce87d5c-fb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:33:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:02.013 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3f032aa5-eed7-4d3e-9667-454ef1fc8843]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:02 compute-0 nova_compute[254092]: 2025-11-25 16:33:02.014 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:02 compute-0 nova_compute[254092]: 2025-11-25 16:33:02.015 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ce87d5c-fb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:33:02 compute-0 nova_compute[254092]: 2025-11-25 16:33:02.017 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:02 compute-0 nova_compute[254092]: 2025-11-25 16:33:02.020 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:02 compute-0 nova_compute[254092]: 2025-11-25 16:33:02.023 254096 INFO os_vif [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:a3:51,bridge_name='br-int',has_traffic_filtering=True,id=0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0ce87d5c-fb')
Nov 25 16:33:02 compute-0 nova_compute[254092]: 2025-11-25 16:33:02.024 254096 DEBUG nova.virt.libvirt.guest [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:33:02 compute-0 nova_compute[254092]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:   <nova:name>tempest-tempest.common.compute-instance-1061549484</nova:name>
Nov 25 16:33:02 compute-0 nova_compute[254092]:   <nova:creationTime>2025-11-25 16:33:02</nova:creationTime>
Nov 25 16:33:02 compute-0 nova_compute[254092]:   <nova:flavor name="m1.nano">
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <nova:memory>128</nova:memory>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <nova:disk>1</nova:disk>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <nova:swap>0</nova:swap>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <nova:vcpus>1</nova:vcpus>
Nov 25 16:33:02 compute-0 nova_compute[254092]:   </nova:flavor>
Nov 25 16:33:02 compute-0 nova_compute[254092]:   <nova:owner>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 16:33:02 compute-0 nova_compute[254092]:   </nova:owner>
Nov 25 16:33:02 compute-0 nova_compute[254092]:   <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:   <nova:ports>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     <nova:port uuid="82f63517-7636-46bf-b4e1-ba191ddad018">
Nov 25 16:33:02 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 16:33:02 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:33:02 compute-0 nova_compute[254092]:   </nova:ports>
Nov 25 16:33:02 compute-0 nova_compute[254092]: </nova:instance>
Nov 25 16:33:02 compute-0 nova_compute[254092]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 25 16:33:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:02.062 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[34937924-cf06-41f9-9a4c-6be0152b1369]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:02.067 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[2592d6e8-0770-4870-b4fc-ae895532eb03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:02.104 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[40098438-a152-4448-a18a-270d71b740d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:02.124 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ade639c2-7d31-4df9-b5ef-da5e2e830dac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52e7d5b9-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:97:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 9, 'rx_bytes': 658, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 9, 'rx_bytes': 658, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 480306, 'reachable_time': 15503, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296480, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:02.150 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[99b538f3-4110-4a7b-9d42-fa42a0a45136]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 480317, 'tstamp': 480317}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296481, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 480320, 'tstamp': 480320}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296481, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:02.153 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52e7d5b9-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:33:02 compute-0 nova_compute[254092]: 2025-11-25 16:33:02.155 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:02 compute-0 nova_compute[254092]: 2025-11-25 16:33:02.156 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:02.156 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52e7d5b9-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:33:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:02.157 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:33:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:02.158 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap52e7d5b9-00, col_values=(('external_ids', {'iface-id': 'af5d2618-f326-4aaf-9395-f47548ac108b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:33:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:02.158 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:33:02 compute-0 rsyslogd[1006]: imjournal from <np0005535469:nova_compute>: begin to drop messages due to rate-limiting
Nov 25 16:33:02 compute-0 nova_compute[254092]: 2025-11-25 16:33:02.197 254096 DEBUG nova.network.neutron [req-93b88334-9f54-4f86-ab73-6b1bc4d43b1d req-ca0bd6c4-3202-44f4-a7a6-65d5db0e403a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Updated VIF entry in instance network info cache for port 0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:33:02 compute-0 nova_compute[254092]: 2025-11-25 16:33:02.197 254096 DEBUG nova.network.neutron [req-93b88334-9f54-4f86-ab73-6b1bc4d43b1d req-ca0bd6c4-3202-44f4-a7a6-65d5db0e403a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Updating instance_info_cache with network_info: [{"id": "82f63517-7636-46bf-b4e1-ba191ddad018", "address": "fa:16:3e:f0:7b:ca", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82f63517-76", "ovs_interfaceid": "82f63517-7636-46bf-b4e1-ba191ddad018", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "address": "fa:16:3e:bc:a3:51", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ce87d5c-fb", "ovs_interfaceid": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:33:02 compute-0 nova_compute[254092]: 2025-11-25 16:33:02.218 254096 DEBUG oslo_concurrency.lockutils [req-93b88334-9f54-4f86-ab73-6b1bc4d43b1d req-ca0bd6c4-3202-44f4-a7a6-65d5db0e403a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-800c66e3-ee9f-4766-92f2-ecda5671cde3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:33:02 compute-0 nova_compute[254092]: 2025-11-25 16:33:02.499 254096 DEBUG nova.compute.manager [req-b5c3042c-5f44-46f9-824a-9dc36f0c0cba req-9ea74525-e084-42a9-b07b-7401fd4c6150 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Received event network-vif-deleted-5cf8fe87-4cac-403f-8611-0ddb37516abd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:33:02 compute-0 nova_compute[254092]: 2025-11-25 16:33:02.499 254096 DEBUG nova.compute.manager [req-b5c3042c-5f44-46f9-824a-9dc36f0c0cba req-9ea74525-e084-42a9-b07b-7401fd4c6150 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Received event network-changed-e1641afa-e435-45ca-a0fe-d2bb9b12981a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:33:02 compute-0 nova_compute[254092]: 2025-11-25 16:33:02.500 254096 DEBUG nova.compute.manager [req-b5c3042c-5f44-46f9-824a-9dc36f0c0cba req-9ea74525-e084-42a9-b07b-7401fd4c6150 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Refreshing instance network info cache due to event network-changed-e1641afa-e435-45ca-a0fe-d2bb9b12981a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:33:02 compute-0 nova_compute[254092]: 2025-11-25 16:33:02.500 254096 DEBUG oslo_concurrency.lockutils [req-b5c3042c-5f44-46f9-824a-9dc36f0c0cba req-9ea74525-e084-42a9-b07b-7401fd4c6150 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-9bd4d655-c683-4433-a739-168946211a75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:33:02 compute-0 nova_compute[254092]: 2025-11-25 16:33:02.500 254096 DEBUG oslo_concurrency.lockutils [req-b5c3042c-5f44-46f9-824a-9dc36f0c0cba req-9ea74525-e084-42a9-b07b-7401fd4c6150 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-9bd4d655-c683-4433-a739-168946211a75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:33:02 compute-0 nova_compute[254092]: 2025-11-25 16:33:02.502 254096 DEBUG nova.network.neutron [req-b5c3042c-5f44-46f9-824a-9dc36f0c0cba req-9ea74525-e084-42a9-b07b-7401fd4c6150 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Refreshing network info cache for port e1641afa-e435-45ca-a0fe-d2bb9b12981a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:33:02 compute-0 nova_compute[254092]: 2025-11-25 16:33:02.504 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:33:02 compute-0 nova_compute[254092]: 2025-11-25 16:33:02.506 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:33:02 compute-0 nova_compute[254092]: 2025-11-25 16:33:02.526 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:33:02 compute-0 nova_compute[254092]: 2025-11-25 16:33:02.527 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 16:33:02 compute-0 nova_compute[254092]: 2025-11-25 16:33:02.543 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 16:33:02 compute-0 nova_compute[254092]: 2025-11-25 16:33:02.544 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:33:02 compute-0 ceph-mon[74985]: pgmap v1368: 321 pgs: 321 active+clean; 279 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.1 MiB/s wr, 317 op/s
Nov 25 16:33:02 compute-0 nova_compute[254092]: 2025-11-25 16:33:02.650 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1369: 321 pgs: 321 active+clean; 279 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 190 op/s
Nov 25 16:33:03 compute-0 nova_compute[254092]: 2025-11-25 16:33:03.813 254096 DEBUG oslo_concurrency.lockutils [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "refresh_cache-800c66e3-ee9f-4766-92f2-ecda5671cde3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:33:03 compute-0 nova_compute[254092]: 2025-11-25 16:33:03.814 254096 DEBUG oslo_concurrency.lockutils [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquired lock "refresh_cache-800c66e3-ee9f-4766-92f2-ecda5671cde3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:33:03 compute-0 nova_compute[254092]: 2025-11-25 16:33:03.814 254096 DEBUG nova.network.neutron [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:33:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:33:04 compute-0 nova_compute[254092]: 2025-11-25 16:33:04.008 254096 DEBUG nova.compute.manager [req-dc3665f8-159e-462c-8532-4650c8eaeb7a req-020ea0a9-140a-4e03-8478-ba02547c2a2f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Received event network-changed-e1641afa-e435-45ca-a0fe-d2bb9b12981a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:33:04 compute-0 nova_compute[254092]: 2025-11-25 16:33:04.009 254096 DEBUG nova.compute.manager [req-dc3665f8-159e-462c-8532-4650c8eaeb7a req-020ea0a9-140a-4e03-8478-ba02547c2a2f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Refreshing instance network info cache due to event network-changed-e1641afa-e435-45ca-a0fe-d2bb9b12981a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:33:04 compute-0 nova_compute[254092]: 2025-11-25 16:33:04.009 254096 DEBUG oslo_concurrency.lockutils [req-dc3665f8-159e-462c-8532-4650c8eaeb7a req-020ea0a9-140a-4e03-8478-ba02547c2a2f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-9bd4d655-c683-4433-a739-168946211a75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:33:04 compute-0 ceph-mon[74985]: pgmap v1369: 321 pgs: 321 active+clean; 279 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 190 op/s
Nov 25 16:33:04 compute-0 nova_compute[254092]: 2025-11-25 16:33:04.788 254096 DEBUG nova.network.neutron [req-b5c3042c-5f44-46f9-824a-9dc36f0c0cba req-9ea74525-e084-42a9-b07b-7401fd4c6150 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Updated VIF entry in instance network info cache for port e1641afa-e435-45ca-a0fe-d2bb9b12981a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:33:04 compute-0 nova_compute[254092]: 2025-11-25 16:33:04.789 254096 DEBUG nova.network.neutron [req-b5c3042c-5f44-46f9-824a-9dc36f0c0cba req-9ea74525-e084-42a9-b07b-7401fd4c6150 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Updating instance_info_cache with network_info: [{"id": "e1641afa-e435-45ca-a0fe-d2bb9b12981a", "address": "fa:16:3e:3d:bb:1b", "network": {"id": "7e8b56bc-492b-4082-b8de-60d496652da7", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2034568195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ae7f32b97104afd930af5d5f5754532", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1641afa-e4", "ovs_interfaceid": "e1641afa-e435-45ca-a0fe-d2bb9b12981a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:33:04 compute-0 nova_compute[254092]: 2025-11-25 16:33:04.807 254096 DEBUG oslo_concurrency.lockutils [req-b5c3042c-5f44-46f9-824a-9dc36f0c0cba req-9ea74525-e084-42a9-b07b-7401fd4c6150 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-9bd4d655-c683-4433-a739-168946211a75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:33:04 compute-0 nova_compute[254092]: 2025-11-25 16:33:04.808 254096 DEBUG nova.compute.manager [req-b5c3042c-5f44-46f9-824a-9dc36f0c0cba req-9ea74525-e084-42a9-b07b-7401fd4c6150 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Received event network-vif-plugged-0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:33:04 compute-0 nova_compute[254092]: 2025-11-25 16:33:04.808 254096 DEBUG oslo_concurrency.lockutils [req-b5c3042c-5f44-46f9-824a-9dc36f0c0cba req-9ea74525-e084-42a9-b07b-7401fd4c6150 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "800c66e3-ee9f-4766-92f2-ecda5671cde3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:33:04 compute-0 nova_compute[254092]: 2025-11-25 16:33:04.809 254096 DEBUG oslo_concurrency.lockutils [req-b5c3042c-5f44-46f9-824a-9dc36f0c0cba req-9ea74525-e084-42a9-b07b-7401fd4c6150 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "800c66e3-ee9f-4766-92f2-ecda5671cde3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:33:04 compute-0 nova_compute[254092]: 2025-11-25 16:33:04.809 254096 DEBUG oslo_concurrency.lockutils [req-b5c3042c-5f44-46f9-824a-9dc36f0c0cba req-9ea74525-e084-42a9-b07b-7401fd4c6150 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "800c66e3-ee9f-4766-92f2-ecda5671cde3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:33:04 compute-0 nova_compute[254092]: 2025-11-25 16:33:04.809 254096 DEBUG nova.compute.manager [req-b5c3042c-5f44-46f9-824a-9dc36f0c0cba req-9ea74525-e084-42a9-b07b-7401fd4c6150 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] No waiting events found dispatching network-vif-plugged-0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:33:04 compute-0 nova_compute[254092]: 2025-11-25 16:33:04.810 254096 WARNING nova.compute.manager [req-b5c3042c-5f44-46f9-824a-9dc36f0c0cba req-9ea74525-e084-42a9-b07b-7401fd4c6150 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Received unexpected event network-vif-plugged-0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 for instance with vm_state active and task_state None.
Nov 25 16:33:04 compute-0 nova_compute[254092]: 2025-11-25 16:33:04.810 254096 DEBUG nova.compute.manager [req-b5c3042c-5f44-46f9-824a-9dc36f0c0cba req-9ea74525-e084-42a9-b07b-7401fd4c6150 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Received event network-vif-plugged-0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:33:04 compute-0 nova_compute[254092]: 2025-11-25 16:33:04.811 254096 DEBUG oslo_concurrency.lockutils [req-b5c3042c-5f44-46f9-824a-9dc36f0c0cba req-9ea74525-e084-42a9-b07b-7401fd4c6150 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "800c66e3-ee9f-4766-92f2-ecda5671cde3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:33:04 compute-0 nova_compute[254092]: 2025-11-25 16:33:04.811 254096 DEBUG oslo_concurrency.lockutils [req-b5c3042c-5f44-46f9-824a-9dc36f0c0cba req-9ea74525-e084-42a9-b07b-7401fd4c6150 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "800c66e3-ee9f-4766-92f2-ecda5671cde3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:33:04 compute-0 nova_compute[254092]: 2025-11-25 16:33:04.811 254096 DEBUG oslo_concurrency.lockutils [req-b5c3042c-5f44-46f9-824a-9dc36f0c0cba req-9ea74525-e084-42a9-b07b-7401fd4c6150 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "800c66e3-ee9f-4766-92f2-ecda5671cde3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:33:04 compute-0 nova_compute[254092]: 2025-11-25 16:33:04.812 254096 DEBUG nova.compute.manager [req-b5c3042c-5f44-46f9-824a-9dc36f0c0cba req-9ea74525-e084-42a9-b07b-7401fd4c6150 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] No waiting events found dispatching network-vif-plugged-0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:33:04 compute-0 nova_compute[254092]: 2025-11-25 16:33:04.812 254096 WARNING nova.compute.manager [req-b5c3042c-5f44-46f9-824a-9dc36f0c0cba req-9ea74525-e084-42a9-b07b-7401fd4c6150 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Received unexpected event network-vif-plugged-0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 for instance with vm_state active and task_state None.
Nov 25 16:33:04 compute-0 nova_compute[254092]: 2025-11-25 16:33:04.813 254096 DEBUG nova.compute.manager [req-b5c3042c-5f44-46f9-824a-9dc36f0c0cba req-9ea74525-e084-42a9-b07b-7401fd4c6150 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Received event network-vif-unplugged-0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:33:04 compute-0 nova_compute[254092]: 2025-11-25 16:33:04.813 254096 DEBUG oslo_concurrency.lockutils [req-b5c3042c-5f44-46f9-824a-9dc36f0c0cba req-9ea74525-e084-42a9-b07b-7401fd4c6150 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "800c66e3-ee9f-4766-92f2-ecda5671cde3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:33:04 compute-0 nova_compute[254092]: 2025-11-25 16:33:04.814 254096 DEBUG oslo_concurrency.lockutils [req-b5c3042c-5f44-46f9-824a-9dc36f0c0cba req-9ea74525-e084-42a9-b07b-7401fd4c6150 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "800c66e3-ee9f-4766-92f2-ecda5671cde3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:33:04 compute-0 nova_compute[254092]: 2025-11-25 16:33:04.814 254096 DEBUG oslo_concurrency.lockutils [req-b5c3042c-5f44-46f9-824a-9dc36f0c0cba req-9ea74525-e084-42a9-b07b-7401fd4c6150 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "800c66e3-ee9f-4766-92f2-ecda5671cde3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:33:04 compute-0 nova_compute[254092]: 2025-11-25 16:33:04.814 254096 DEBUG nova.compute.manager [req-b5c3042c-5f44-46f9-824a-9dc36f0c0cba req-9ea74525-e084-42a9-b07b-7401fd4c6150 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] No waiting events found dispatching network-vif-unplugged-0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:33:04 compute-0 nova_compute[254092]: 2025-11-25 16:33:04.815 254096 WARNING nova.compute.manager [req-b5c3042c-5f44-46f9-824a-9dc36f0c0cba req-9ea74525-e084-42a9-b07b-7401fd4c6150 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Received unexpected event network-vif-unplugged-0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 for instance with vm_state active and task_state None.
Nov 25 16:33:04 compute-0 nova_compute[254092]: 2025-11-25 16:33:04.815 254096 DEBUG nova.compute.manager [req-b5c3042c-5f44-46f9-824a-9dc36f0c0cba req-9ea74525-e084-42a9-b07b-7401fd4c6150 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Received event network-vif-plugged-0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:33:04 compute-0 nova_compute[254092]: 2025-11-25 16:33:04.815 254096 DEBUG oslo_concurrency.lockutils [req-b5c3042c-5f44-46f9-824a-9dc36f0c0cba req-9ea74525-e084-42a9-b07b-7401fd4c6150 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "800c66e3-ee9f-4766-92f2-ecda5671cde3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:33:04 compute-0 nova_compute[254092]: 2025-11-25 16:33:04.816 254096 DEBUG oslo_concurrency.lockutils [req-b5c3042c-5f44-46f9-824a-9dc36f0c0cba req-9ea74525-e084-42a9-b07b-7401fd4c6150 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "800c66e3-ee9f-4766-92f2-ecda5671cde3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:33:04 compute-0 nova_compute[254092]: 2025-11-25 16:33:04.816 254096 DEBUG oslo_concurrency.lockutils [req-b5c3042c-5f44-46f9-824a-9dc36f0c0cba req-9ea74525-e084-42a9-b07b-7401fd4c6150 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "800c66e3-ee9f-4766-92f2-ecda5671cde3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:33:04 compute-0 nova_compute[254092]: 2025-11-25 16:33:04.816 254096 DEBUG nova.compute.manager [req-b5c3042c-5f44-46f9-824a-9dc36f0c0cba req-9ea74525-e084-42a9-b07b-7401fd4c6150 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] No waiting events found dispatching network-vif-plugged-0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:33:04 compute-0 nova_compute[254092]: 2025-11-25 16:33:04.817 254096 WARNING nova.compute.manager [req-b5c3042c-5f44-46f9-824a-9dc36f0c0cba req-9ea74525-e084-42a9-b07b-7401fd4c6150 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Received unexpected event network-vif-plugged-0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 for instance with vm_state active and task_state None.
Nov 25 16:33:04 compute-0 nova_compute[254092]: 2025-11-25 16:33:04.818 254096 DEBUG oslo_concurrency.lockutils [req-dc3665f8-159e-462c-8532-4650c8eaeb7a req-020ea0a9-140a-4e03-8478-ba02547c2a2f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-9bd4d655-c683-4433-a739-168946211a75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:33:04 compute-0 nova_compute[254092]: 2025-11-25 16:33:04.818 254096 DEBUG nova.network.neutron [req-dc3665f8-159e-462c-8532-4650c8eaeb7a req-020ea0a9-140a-4e03-8478-ba02547c2a2f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Refreshing network info cache for port e1641afa-e435-45ca-a0fe-d2bb9b12981a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:33:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1370: 321 pgs: 321 active+clean; 279 MiB data, 521 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 190 op/s
Nov 25 16:33:06 compute-0 nova_compute[254092]: 2025-11-25 16:33:06.177 254096 INFO nova.network.neutron [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Port 0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Nov 25 16:33:06 compute-0 nova_compute[254092]: 2025-11-25 16:33:06.178 254096 DEBUG nova.network.neutron [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Updating instance_info_cache with network_info: [{"id": "82f63517-7636-46bf-b4e1-ba191ddad018", "address": "fa:16:3e:f0:7b:ca", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82f63517-76", "ovs_interfaceid": "82f63517-7636-46bf-b4e1-ba191ddad018", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:33:06 compute-0 nova_compute[254092]: 2025-11-25 16:33:06.223 254096 DEBUG oslo_concurrency.lockutils [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Releasing lock "refresh_cache-800c66e3-ee9f-4766-92f2-ecda5671cde3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:33:06 compute-0 nova_compute[254092]: 2025-11-25 16:33:06.265 254096 DEBUG oslo_concurrency.lockutils [None req-5223478d-f8b5-41d4-af97-64b9f32de2a5 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "interface-800c66e3-ee9f-4766-92f2-ecda5671cde3-0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 4.439s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:33:06 compute-0 ceph-mon[74985]: pgmap v1370: 321 pgs: 321 active+clean; 279 MiB data, 521 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 190 op/s
Nov 25 16:33:07 compute-0 nova_compute[254092]: 2025-11-25 16:33:07.018 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1371: 321 pgs: 321 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 995 KiB/s rd, 898 KiB/s wr, 100 op/s
Nov 25 16:33:07 compute-0 nova_compute[254092]: 2025-11-25 16:33:07.652 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:07 compute-0 nova_compute[254092]: 2025-11-25 16:33:07.862 254096 DEBUG nova.network.neutron [req-dc3665f8-159e-462c-8532-4650c8eaeb7a req-020ea0a9-140a-4e03-8478-ba02547c2a2f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Updated VIF entry in instance network info cache for port e1641afa-e435-45ca-a0fe-d2bb9b12981a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:33:07 compute-0 nova_compute[254092]: 2025-11-25 16:33:07.862 254096 DEBUG nova.network.neutron [req-dc3665f8-159e-462c-8532-4650c8eaeb7a req-020ea0a9-140a-4e03-8478-ba02547c2a2f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Updating instance_info_cache with network_info: [{"id": "e1641afa-e435-45ca-a0fe-d2bb9b12981a", "address": "fa:16:3e:3d:bb:1b", "network": {"id": "7e8b56bc-492b-4082-b8de-60d496652da7", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2034568195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ae7f32b97104afd930af5d5f5754532", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1641afa-e4", "ovs_interfaceid": "e1641afa-e435-45ca-a0fe-d2bb9b12981a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:33:07 compute-0 nova_compute[254092]: 2025-11-25 16:33:07.882 254096 DEBUG oslo_concurrency.lockutils [req-dc3665f8-159e-462c-8532-4650c8eaeb7a req-020ea0a9-140a-4e03-8478-ba02547c2a2f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-9bd4d655-c683-4433-a739-168946211a75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:33:08 compute-0 nova_compute[254092]: 2025-11-25 16:33:08.196 254096 DEBUG nova.compute.manager [req-7958fb8c-9139-463b-8ee3-edbd9d0fb831 req-1eeb5485-2121-497f-9655-f1d95616a197 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Received event network-changed-82f63517-7636-46bf-b4e1-ba191ddad018 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:33:08 compute-0 nova_compute[254092]: 2025-11-25 16:33:08.196 254096 DEBUG nova.compute.manager [req-7958fb8c-9139-463b-8ee3-edbd9d0fb831 req-1eeb5485-2121-497f-9655-f1d95616a197 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Refreshing instance network info cache due to event network-changed-82f63517-7636-46bf-b4e1-ba191ddad018. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:33:08 compute-0 nova_compute[254092]: 2025-11-25 16:33:08.196 254096 DEBUG oslo_concurrency.lockutils [req-7958fb8c-9139-463b-8ee3-edbd9d0fb831 req-1eeb5485-2121-497f-9655-f1d95616a197 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-800c66e3-ee9f-4766-92f2-ecda5671cde3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:33:08 compute-0 nova_compute[254092]: 2025-11-25 16:33:08.196 254096 DEBUG oslo_concurrency.lockutils [req-7958fb8c-9139-463b-8ee3-edbd9d0fb831 req-1eeb5485-2121-497f-9655-f1d95616a197 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-800c66e3-ee9f-4766-92f2-ecda5671cde3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:33:08 compute-0 nova_compute[254092]: 2025-11-25 16:33:08.196 254096 DEBUG nova.network.neutron [req-7958fb8c-9139-463b-8ee3-edbd9d0fb831 req-1eeb5485-2121-497f-9655-f1d95616a197 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Refreshing network info cache for port 82f63517-7636-46bf-b4e1-ba191ddad018 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:33:08 compute-0 nova_compute[254092]: 2025-11-25 16:33:08.382 254096 DEBUG oslo_concurrency.lockutils [None req-599fd70f-77cc-4967-95db-1c9fc7aba3c0 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Acquiring lock "9bd4d655-c683-4433-a739-168946211a75" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:33:08 compute-0 nova_compute[254092]: 2025-11-25 16:33:08.382 254096 DEBUG oslo_concurrency.lockutils [None req-599fd70f-77cc-4967-95db-1c9fc7aba3c0 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "9bd4d655-c683-4433-a739-168946211a75" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:33:08 compute-0 nova_compute[254092]: 2025-11-25 16:33:08.382 254096 DEBUG oslo_concurrency.lockutils [None req-599fd70f-77cc-4967-95db-1c9fc7aba3c0 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Acquiring lock "9bd4d655-c683-4433-a739-168946211a75-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:33:08 compute-0 nova_compute[254092]: 2025-11-25 16:33:08.383 254096 DEBUG oslo_concurrency.lockutils [None req-599fd70f-77cc-4967-95db-1c9fc7aba3c0 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "9bd4d655-c683-4433-a739-168946211a75-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:33:08 compute-0 nova_compute[254092]: 2025-11-25 16:33:08.383 254096 DEBUG oslo_concurrency.lockutils [None req-599fd70f-77cc-4967-95db-1c9fc7aba3c0 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "9bd4d655-c683-4433-a739-168946211a75-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:33:08 compute-0 nova_compute[254092]: 2025-11-25 16:33:08.384 254096 INFO nova.compute.manager [None req-599fd70f-77cc-4967-95db-1c9fc7aba3c0 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Terminating instance
Nov 25 16:33:08 compute-0 nova_compute[254092]: 2025-11-25 16:33:08.386 254096 DEBUG nova.compute.manager [None req-599fd70f-77cc-4967-95db-1c9fc7aba3c0 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:33:08 compute-0 kernel: tape1641afa-e4 (unregistering): left promiscuous mode
Nov 25 16:33:08 compute-0 NetworkManager[48891]: <info>  [1764088388.4391] device (tape1641afa-e4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:33:08 compute-0 ovn_controller[153477]: 2025-11-25T16:33:08Z|00241|binding|INFO|Releasing lport e1641afa-e435-45ca-a0fe-d2bb9b12981a from this chassis (sb_readonly=0)
Nov 25 16:33:08 compute-0 ovn_controller[153477]: 2025-11-25T16:33:08Z|00242|binding|INFO|Setting lport e1641afa-e435-45ca-a0fe-d2bb9b12981a down in Southbound
Nov 25 16:33:08 compute-0 ovn_controller[153477]: 2025-11-25T16:33:08Z|00243|binding|INFO|Removing iface tape1641afa-e4 ovn-installed in OVS
Nov 25 16:33:08 compute-0 nova_compute[254092]: 2025-11-25 16:33:08.446 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:08 compute-0 nova_compute[254092]: 2025-11-25 16:33:08.447 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:08.452 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:bb:1b 10.100.0.4'], port_security=['fa:16:3e:3d:bb:1b 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '9bd4d655-c683-4433-a739-168946211a75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e8b56bc-492b-4082-b8de-60d496652da7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ae7f32b97104afd930af5d5f5754532', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1c729f64-6320-4599-a74a-ed70b78f82ae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=81a9c566-d2a0-4e54-9d3c-480355005098, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=e1641afa-e435-45ca-a0fe-d2bb9b12981a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:33:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:08.454 163338 INFO neutron.agent.ovn.metadata.agent [-] Port e1641afa-e435-45ca-a0fe-d2bb9b12981a in datapath 7e8b56bc-492b-4082-b8de-60d496652da7 unbound from our chassis
Nov 25 16:33:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:08.455 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7e8b56bc-492b-4082-b8de-60d496652da7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:33:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:08.456 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cd17f6f3-2172-4bb1-9b48-336e3f12c017]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:08.457 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7e8b56bc-492b-4082-b8de-60d496652da7 namespace which is not needed anymore
Nov 25 16:33:08 compute-0 nova_compute[254092]: 2025-11-25 16:33:08.462 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:08 compute-0 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d0000001f.scope: Deactivated successfully.
Nov 25 16:33:08 compute-0 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d0000001f.scope: Consumed 14.761s CPU time.
Nov 25 16:33:08 compute-0 systemd-machined[216343]: Machine qemu-36-instance-0000001f terminated.
Nov 25 16:33:08 compute-0 neutron-haproxy-ovnmeta-7e8b56bc-492b-4082-b8de-60d496652da7[293306]: [NOTICE]   (293310) : haproxy version is 2.8.14-c23fe91
Nov 25 16:33:08 compute-0 neutron-haproxy-ovnmeta-7e8b56bc-492b-4082-b8de-60d496652da7[293306]: [NOTICE]   (293310) : path to executable is /usr/sbin/haproxy
Nov 25 16:33:08 compute-0 neutron-haproxy-ovnmeta-7e8b56bc-492b-4082-b8de-60d496652da7[293306]: [WARNING]  (293310) : Exiting Master process...
Nov 25 16:33:08 compute-0 neutron-haproxy-ovnmeta-7e8b56bc-492b-4082-b8de-60d496652da7[293306]: [WARNING]  (293310) : Exiting Master process...
Nov 25 16:33:08 compute-0 neutron-haproxy-ovnmeta-7e8b56bc-492b-4082-b8de-60d496652da7[293306]: [ALERT]    (293310) : Current worker (293312) exited with code 143 (Terminated)
Nov 25 16:33:08 compute-0 neutron-haproxy-ovnmeta-7e8b56bc-492b-4082-b8de-60d496652da7[293306]: [WARNING]  (293310) : All workers exited. Exiting... (0)
Nov 25 16:33:08 compute-0 systemd[1]: libpod-034b224d621ea7ad507318b5e079d76415a841c3502a53d0f50766d9ad6688b8.scope: Deactivated successfully.
Nov 25 16:33:08 compute-0 podman[296506]: 2025-11-25 16:33:08.595598021 +0000 UTC m=+0.047108307 container died 034b224d621ea7ad507318b5e079d76415a841c3502a53d0f50766d9ad6688b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7e8b56bc-492b-4082-b8de-60d496652da7, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 16:33:08 compute-0 nova_compute[254092]: 2025-11-25 16:33:08.619 254096 INFO nova.virt.libvirt.driver [-] [instance: 9bd4d655-c683-4433-a739-168946211a75] Instance destroyed successfully.
Nov 25 16:33:08 compute-0 nova_compute[254092]: 2025-11-25 16:33:08.620 254096 DEBUG nova.objects.instance [None req-599fd70f-77cc-4967-95db-1c9fc7aba3c0 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lazy-loading 'resources' on Instance uuid 9bd4d655-c683-4433-a739-168946211a75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:33:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-92d27d58d374f702763ab8e079fd623c67dbe3ed8e1cbfe8cc867e245d586f30-merged.mount: Deactivated successfully.
Nov 25 16:33:08 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-034b224d621ea7ad507318b5e079d76415a841c3502a53d0f50766d9ad6688b8-userdata-shm.mount: Deactivated successfully.
Nov 25 16:33:08 compute-0 nova_compute[254092]: 2025-11-25 16:33:08.634 254096 DEBUG nova.virt.libvirt.vif [None req-599fd70f-77cc-4967-95db-1c9fc7aba3c0 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:32:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-744086586',display_name='tempest-FloatingIPsAssociationTestJSON-server-744086586',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-744086586',id=31,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:32:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4ae7f32b97104afd930af5d5f5754532',ramdisk_id='',reservation_id='r-ev11hsqu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-993856073',owner_user_name='tempest-FloatingIPsAssociationTestJSON-993856073-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:32:19Z,user_data=None,user_id='a46b9493b027436fbd21d09ff5ac15e4',uuid=9bd4d655-c683-4433-a739-168946211a75,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e1641afa-e435-45ca-a0fe-d2bb9b12981a", "address": "fa:16:3e:3d:bb:1b", "network": {"id": "7e8b56bc-492b-4082-b8de-60d496652da7", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2034568195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ae7f32b97104afd930af5d5f5754532", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1641afa-e4", "ovs_interfaceid": "e1641afa-e435-45ca-a0fe-d2bb9b12981a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:33:08 compute-0 nova_compute[254092]: 2025-11-25 16:33:08.635 254096 DEBUG nova.network.os_vif_util [None req-599fd70f-77cc-4967-95db-1c9fc7aba3c0 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Converting VIF {"id": "e1641afa-e435-45ca-a0fe-d2bb9b12981a", "address": "fa:16:3e:3d:bb:1b", "network": {"id": "7e8b56bc-492b-4082-b8de-60d496652da7", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-2034568195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ae7f32b97104afd930af5d5f5754532", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1641afa-e4", "ovs_interfaceid": "e1641afa-e435-45ca-a0fe-d2bb9b12981a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:33:08 compute-0 nova_compute[254092]: 2025-11-25 16:33:08.636 254096 DEBUG nova.network.os_vif_util [None req-599fd70f-77cc-4967-95db-1c9fc7aba3c0 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3d:bb:1b,bridge_name='br-int',has_traffic_filtering=True,id=e1641afa-e435-45ca-a0fe-d2bb9b12981a,network=Network(7e8b56bc-492b-4082-b8de-60d496652da7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1641afa-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:33:08 compute-0 nova_compute[254092]: 2025-11-25 16:33:08.636 254096 DEBUG os_vif [None req-599fd70f-77cc-4967-95db-1c9fc7aba3c0 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3d:bb:1b,bridge_name='br-int',has_traffic_filtering=True,id=e1641afa-e435-45ca-a0fe-d2bb9b12981a,network=Network(7e8b56bc-492b-4082-b8de-60d496652da7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1641afa-e4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:33:08 compute-0 podman[296506]: 2025-11-25 16:33:08.637330372 +0000 UTC m=+0.088840668 container cleanup 034b224d621ea7ad507318b5e079d76415a841c3502a53d0f50766d9ad6688b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7e8b56bc-492b-4082-b8de-60d496652da7, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 16:33:08 compute-0 nova_compute[254092]: 2025-11-25 16:33:08.639 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:08 compute-0 nova_compute[254092]: 2025-11-25 16:33:08.639 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape1641afa-e4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:33:08 compute-0 nova_compute[254092]: 2025-11-25 16:33:08.643 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:08 compute-0 nova_compute[254092]: 2025-11-25 16:33:08.646 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:33:08 compute-0 nova_compute[254092]: 2025-11-25 16:33:08.648 254096 INFO os_vif [None req-599fd70f-77cc-4967-95db-1c9fc7aba3c0 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3d:bb:1b,bridge_name='br-int',has_traffic_filtering=True,id=e1641afa-e435-45ca-a0fe-d2bb9b12981a,network=Network(7e8b56bc-492b-4082-b8de-60d496652da7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1641afa-e4')
Nov 25 16:33:08 compute-0 systemd[1]: libpod-conmon-034b224d621ea7ad507318b5e079d76415a841c3502a53d0f50766d9ad6688b8.scope: Deactivated successfully.
Nov 25 16:33:08 compute-0 podman[296547]: 2025-11-25 16:33:08.699961608 +0000 UTC m=+0.041521905 container remove 034b224d621ea7ad507318b5e079d76415a841c3502a53d0f50766d9ad6688b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7e8b56bc-492b-4082-b8de-60d496652da7, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 25 16:33:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:08.708 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d40994bf-72c4-499c-a3be-f60e86ab0a1d]: (4, ('Tue Nov 25 04:33:08 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7e8b56bc-492b-4082-b8de-60d496652da7 (034b224d621ea7ad507318b5e079d76415a841c3502a53d0f50766d9ad6688b8)\n034b224d621ea7ad507318b5e079d76415a841c3502a53d0f50766d9ad6688b8\nTue Nov 25 04:33:08 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7e8b56bc-492b-4082-b8de-60d496652da7 (034b224d621ea7ad507318b5e079d76415a841c3502a53d0f50766d9ad6688b8)\n034b224d621ea7ad507318b5e079d76415a841c3502a53d0f50766d9ad6688b8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:08.709 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d9cf34ab-5d73-46cd-b4ed-3ef774ceec41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:08.710 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7e8b56bc-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:33:08 compute-0 nova_compute[254092]: 2025-11-25 16:33:08.712 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:08 compute-0 kernel: tap7e8b56bc-40: left promiscuous mode
Nov 25 16:33:08 compute-0 nova_compute[254092]: 2025-11-25 16:33:08.728 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:08.731 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9c097dd3-6475-44b8-ae61-3c38312180dd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:08 compute-0 ceph-mon[74985]: pgmap v1371: 321 pgs: 321 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 995 KiB/s rd, 898 KiB/s wr, 100 op/s
Nov 25 16:33:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:08.748 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[dec52549-7742-4346-90ce-2059fc9c7ff5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:08.749 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5512f4ab-5dac-4d9c-84a8-005a6fcde9a9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:08.766 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[406015fd-bad6-4366-844c-c6d99fe312d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479566, 'reachable_time': 17732, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296580, 'error': None, 'target': 'ovnmeta-7e8b56bc-492b-4082-b8de-60d496652da7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:08.768 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7e8b56bc-492b-4082-b8de-60d496652da7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:33:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:08.769 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[15ccb949-bef7-4cb8-abc5-b6e141e8cc94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:08 compute-0 systemd[1]: run-netns-ovnmeta\x2d7e8b56bc\x2d492b\x2d4082\x2db8de\x2d60d496652da7.mount: Deactivated successfully.
Nov 25 16:33:08 compute-0 nova_compute[254092]: 2025-11-25 16:33:08.942 254096 DEBUG oslo_concurrency.lockutils [None req-9a2e8e34-0295-4050-8ff5-22d68ed91961 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "interface-1d318e56-4a8c-4806-aa87-e837708f2a1f-0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:33:08 compute-0 nova_compute[254092]: 2025-11-25 16:33:08.942 254096 DEBUG oslo_concurrency.lockutils [None req-9a2e8e34-0295-4050-8ff5-22d68ed91961 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "interface-1d318e56-4a8c-4806-aa87-e837708f2a1f-0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:33:08 compute-0 nova_compute[254092]: 2025-11-25 16:33:08.942 254096 DEBUG nova.objects.instance [None req-9a2e8e34-0295-4050-8ff5-22d68ed91961 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'flavor' on Instance uuid 1d318e56-4a8c-4806-aa87-e837708f2a1f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:33:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:33:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1372: 321 pgs: 321 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 94 KiB/s rd, 106 KiB/s wr, 32 op/s
Nov 25 16:33:09 compute-0 nova_compute[254092]: 2025-11-25 16:33:09.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:33:09 compute-0 nova_compute[254092]: 2025-11-25 16:33:09.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 16:33:09 compute-0 nova_compute[254092]: 2025-11-25 16:33:09.540 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 16:33:09 compute-0 nova_compute[254092]: 2025-11-25 16:33:09.642 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088374.6398418, a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:33:09 compute-0 nova_compute[254092]: 2025-11-25 16:33:09.642 254096 INFO nova.compute.manager [-] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] VM Stopped (Lifecycle Event)
Nov 25 16:33:09 compute-0 nova_compute[254092]: 2025-11-25 16:33:09.668 254096 DEBUG nova.compute.manager [None req-d4d71e61-4483-4bea-872c-869b6c46c6d9 - - - - - -] [instance: a5ccfa6c-821a-4f6b-9ff6-1eb82742cecc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:33:09 compute-0 nova_compute[254092]: 2025-11-25 16:33:09.855 254096 INFO nova.virt.libvirt.driver [None req-599fd70f-77cc-4967-95db-1c9fc7aba3c0 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Deleting instance files /var/lib/nova/instances/9bd4d655-c683-4433-a739-168946211a75_del
Nov 25 16:33:09 compute-0 nova_compute[254092]: 2025-11-25 16:33:09.855 254096 INFO nova.virt.libvirt.driver [None req-599fd70f-77cc-4967-95db-1c9fc7aba3c0 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Deletion of /var/lib/nova/instances/9bd4d655-c683-4433-a739-168946211a75_del complete
Nov 25 16:33:09 compute-0 nova_compute[254092]: 2025-11-25 16:33:09.954 254096 INFO nova.compute.manager [None req-599fd70f-77cc-4967-95db-1c9fc7aba3c0 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Took 1.57 seconds to destroy the instance on the hypervisor.
Nov 25 16:33:09 compute-0 nova_compute[254092]: 2025-11-25 16:33:09.954 254096 DEBUG oslo.service.loopingcall [None req-599fd70f-77cc-4967-95db-1c9fc7aba3c0 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:33:09 compute-0 nova_compute[254092]: 2025-11-25 16:33:09.955 254096 DEBUG nova.compute.manager [-] [instance: 9bd4d655-c683-4433-a739-168946211a75] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:33:09 compute-0 nova_compute[254092]: 2025-11-25 16:33:09.955 254096 DEBUG nova.network.neutron [-] [instance: 9bd4d655-c683-4433-a739-168946211a75] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:33:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:33:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:33:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:33:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:33:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:33:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:33:10 compute-0 nova_compute[254092]: 2025-11-25 16:33:10.176 254096 DEBUG nova.objects.instance [None req-9a2e8e34-0295-4050-8ff5-22d68ed91961 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'pci_requests' on Instance uuid 1d318e56-4a8c-4806-aa87-e837708f2a1f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:33:10 compute-0 nova_compute[254092]: 2025-11-25 16:33:10.218 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:10 compute-0 nova_compute[254092]: 2025-11-25 16:33:10.315 254096 DEBUG nova.network.neutron [None req-9a2e8e34-0295-4050-8ff5-22d68ed91961 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:33:10 compute-0 nova_compute[254092]: 2025-11-25 16:33:10.449 254096 DEBUG nova.compute.manager [req-9072a0fb-fe9a-4883-bab3-92bc9b418bef req-bd255bf7-84a6-431b-8ee5-1a7786f27c03 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Received event network-changed-f95d61ca-d58c-4f07-879f-5e5412976e42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:33:10 compute-0 nova_compute[254092]: 2025-11-25 16:33:10.449 254096 DEBUG nova.compute.manager [req-9072a0fb-fe9a-4883-bab3-92bc9b418bef req-bd255bf7-84a6-431b-8ee5-1a7786f27c03 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Refreshing instance network info cache due to event network-changed-f95d61ca-d58c-4f07-879f-5e5412976e42. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:33:10 compute-0 nova_compute[254092]: 2025-11-25 16:33:10.450 254096 DEBUG oslo_concurrency.lockutils [req-9072a0fb-fe9a-4883-bab3-92bc9b418bef req-bd255bf7-84a6-431b-8ee5-1a7786f27c03 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-1d318e56-4a8c-4806-aa87-e837708f2a1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:33:10 compute-0 nova_compute[254092]: 2025-11-25 16:33:10.450 254096 DEBUG oslo_concurrency.lockutils [req-9072a0fb-fe9a-4883-bab3-92bc9b418bef req-bd255bf7-84a6-431b-8ee5-1a7786f27c03 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-1d318e56-4a8c-4806-aa87-e837708f2a1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:33:10 compute-0 nova_compute[254092]: 2025-11-25 16:33:10.450 254096 DEBUG nova.network.neutron [req-9072a0fb-fe9a-4883-bab3-92bc9b418bef req-bd255bf7-84a6-431b-8ee5-1a7786f27c03 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Refreshing network info cache for port f95d61ca-d58c-4f07-879f-5e5412976e42 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:33:10 compute-0 ceph-mon[74985]: pgmap v1372: 321 pgs: 321 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 94 KiB/s rd, 106 KiB/s wr, 32 op/s
Nov 25 16:33:10 compute-0 nova_compute[254092]: 2025-11-25 16:33:10.798 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088375.7973676, 616ec95d-6c7d-420e-991d-3cbc11339768 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:33:10 compute-0 nova_compute[254092]: 2025-11-25 16:33:10.798 254096 INFO nova.compute.manager [-] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] VM Stopped (Lifecycle Event)
Nov 25 16:33:10 compute-0 nova_compute[254092]: 2025-11-25 16:33:10.821 254096 DEBUG nova.compute.manager [None req-26de4fc6-7796-4d3c-81a0-991ad174feb6 - - - - - -] [instance: 616ec95d-6c7d-420e-991d-3cbc11339768] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:33:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1373: 321 pgs: 321 active+clean; 200 MiB data, 475 MiB used, 60 GiB / 60 GiB avail; 112 KiB/s rd, 108 KiB/s wr, 60 op/s
Nov 25 16:33:11 compute-0 nova_compute[254092]: 2025-11-25 16:33:11.350 254096 DEBUG nova.network.neutron [req-7958fb8c-9139-463b-8ee3-edbd9d0fb831 req-1eeb5485-2121-497f-9655-f1d95616a197 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Updated VIF entry in instance network info cache for port 82f63517-7636-46bf-b4e1-ba191ddad018. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:33:11 compute-0 nova_compute[254092]: 2025-11-25 16:33:11.351 254096 DEBUG nova.network.neutron [req-7958fb8c-9139-463b-8ee3-edbd9d0fb831 req-1eeb5485-2121-497f-9655-f1d95616a197 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Updating instance_info_cache with network_info: [{"id": "82f63517-7636-46bf-b4e1-ba191ddad018", "address": "fa:16:3e:f0:7b:ca", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82f63517-76", "ovs_interfaceid": "82f63517-7636-46bf-b4e1-ba191ddad018", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:33:11 compute-0 nova_compute[254092]: 2025-11-25 16:33:11.414 254096 DEBUG oslo_concurrency.lockutils [req-7958fb8c-9139-463b-8ee3-edbd9d0fb831 req-1eeb5485-2121-497f-9655-f1d95616a197 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-800c66e3-ee9f-4766-92f2-ecda5671cde3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:33:11 compute-0 nova_compute[254092]: 2025-11-25 16:33:11.426 254096 DEBUG nova.policy [None req-9a2e8e34-0295-4050-8ff5-22d68ed91961 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c2f73d928f424db49a62e7dff2b1ce14', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '36fd407edf16424e96041f57fccb9e2b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:33:12 compute-0 nova_compute[254092]: 2025-11-25 16:33:12.655 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:12 compute-0 nova_compute[254092]: 2025-11-25 16:33:12.690 254096 DEBUG nova.network.neutron [req-9072a0fb-fe9a-4883-bab3-92bc9b418bef req-bd255bf7-84a6-431b-8ee5-1a7786f27c03 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Updated VIF entry in instance network info cache for port f95d61ca-d58c-4f07-879f-5e5412976e42. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:33:12 compute-0 nova_compute[254092]: 2025-11-25 16:33:12.690 254096 DEBUG nova.network.neutron [req-9072a0fb-fe9a-4883-bab3-92bc9b418bef req-bd255bf7-84a6-431b-8ee5-1a7786f27c03 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Updating instance_info_cache with network_info: [{"id": "f95d61ca-d58c-4f07-879f-5e5412976e42", "address": "fa:16:3e:d7:b8:12", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf95d61ca-d5", "ovs_interfaceid": "f95d61ca-d58c-4f07-879f-5e5412976e42", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:33:12 compute-0 nova_compute[254092]: 2025-11-25 16:33:12.733 254096 DEBUG oslo_concurrency.lockutils [req-9072a0fb-fe9a-4883-bab3-92bc9b418bef req-bd255bf7-84a6-431b-8ee5-1a7786f27c03 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-1d318e56-4a8c-4806-aa87-e837708f2a1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:33:12 compute-0 nova_compute[254092]: 2025-11-25 16:33:12.734 254096 DEBUG nova.compute.manager [req-9072a0fb-fe9a-4883-bab3-92bc9b418bef req-bd255bf7-84a6-431b-8ee5-1a7786f27c03 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Received event network-vif-unplugged-e1641afa-e435-45ca-a0fe-d2bb9b12981a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:33:12 compute-0 nova_compute[254092]: 2025-11-25 16:33:12.734 254096 DEBUG oslo_concurrency.lockutils [req-9072a0fb-fe9a-4883-bab3-92bc9b418bef req-bd255bf7-84a6-431b-8ee5-1a7786f27c03 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "9bd4d655-c683-4433-a739-168946211a75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:33:12 compute-0 nova_compute[254092]: 2025-11-25 16:33:12.734 254096 DEBUG oslo_concurrency.lockutils [req-9072a0fb-fe9a-4883-bab3-92bc9b418bef req-bd255bf7-84a6-431b-8ee5-1a7786f27c03 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9bd4d655-c683-4433-a739-168946211a75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:33:12 compute-0 nova_compute[254092]: 2025-11-25 16:33:12.734 254096 DEBUG oslo_concurrency.lockutils [req-9072a0fb-fe9a-4883-bab3-92bc9b418bef req-bd255bf7-84a6-431b-8ee5-1a7786f27c03 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9bd4d655-c683-4433-a739-168946211a75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:33:12 compute-0 nova_compute[254092]: 2025-11-25 16:33:12.735 254096 DEBUG nova.compute.manager [req-9072a0fb-fe9a-4883-bab3-92bc9b418bef req-bd255bf7-84a6-431b-8ee5-1a7786f27c03 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] No waiting events found dispatching network-vif-unplugged-e1641afa-e435-45ca-a0fe-d2bb9b12981a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:33:12 compute-0 nova_compute[254092]: 2025-11-25 16:33:12.735 254096 DEBUG nova.compute.manager [req-9072a0fb-fe9a-4883-bab3-92bc9b418bef req-bd255bf7-84a6-431b-8ee5-1a7786f27c03 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Received event network-vif-unplugged-e1641afa-e435-45ca-a0fe-d2bb9b12981a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:33:12 compute-0 nova_compute[254092]: 2025-11-25 16:33:12.735 254096 DEBUG nova.compute.manager [req-9072a0fb-fe9a-4883-bab3-92bc9b418bef req-bd255bf7-84a6-431b-8ee5-1a7786f27c03 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Received event network-vif-plugged-e1641afa-e435-45ca-a0fe-d2bb9b12981a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:33:12 compute-0 nova_compute[254092]: 2025-11-25 16:33:12.735 254096 DEBUG oslo_concurrency.lockutils [req-9072a0fb-fe9a-4883-bab3-92bc9b418bef req-bd255bf7-84a6-431b-8ee5-1a7786f27c03 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "9bd4d655-c683-4433-a739-168946211a75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:33:12 compute-0 nova_compute[254092]: 2025-11-25 16:33:12.735 254096 DEBUG oslo_concurrency.lockutils [req-9072a0fb-fe9a-4883-bab3-92bc9b418bef req-bd255bf7-84a6-431b-8ee5-1a7786f27c03 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9bd4d655-c683-4433-a739-168946211a75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:33:12 compute-0 nova_compute[254092]: 2025-11-25 16:33:12.735 254096 DEBUG oslo_concurrency.lockutils [req-9072a0fb-fe9a-4883-bab3-92bc9b418bef req-bd255bf7-84a6-431b-8ee5-1a7786f27c03 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9bd4d655-c683-4433-a739-168946211a75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:33:12 compute-0 nova_compute[254092]: 2025-11-25 16:33:12.735 254096 DEBUG nova.compute.manager [req-9072a0fb-fe9a-4883-bab3-92bc9b418bef req-bd255bf7-84a6-431b-8ee5-1a7786f27c03 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] No waiting events found dispatching network-vif-plugged-e1641afa-e435-45ca-a0fe-d2bb9b12981a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:33:12 compute-0 nova_compute[254092]: 2025-11-25 16:33:12.736 254096 WARNING nova.compute.manager [req-9072a0fb-fe9a-4883-bab3-92bc9b418bef req-bd255bf7-84a6-431b-8ee5-1a7786f27c03 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Received unexpected event network-vif-plugged-e1641afa-e435-45ca-a0fe-d2bb9b12981a for instance with vm_state active and task_state deleting.
Nov 25 16:33:12 compute-0 nova_compute[254092]: 2025-11-25 16:33:12.815 254096 DEBUG nova.network.neutron [-] [instance: 9bd4d655-c683-4433-a739-168946211a75] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:33:12 compute-0 ceph-mon[74985]: pgmap v1373: 321 pgs: 321 active+clean; 200 MiB data, 475 MiB used, 60 GiB / 60 GiB avail; 112 KiB/s rd, 108 KiB/s wr, 60 op/s
Nov 25 16:33:12 compute-0 nova_compute[254092]: 2025-11-25 16:33:12.879 254096 INFO nova.compute.manager [-] [instance: 9bd4d655-c683-4433-a739-168946211a75] Took 2.92 seconds to deallocate network for instance.
Nov 25 16:33:12 compute-0 nova_compute[254092]: 2025-11-25 16:33:12.933 254096 DEBUG nova.network.neutron [None req-9a2e8e34-0295-4050-8ff5-22d68ed91961 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Successfully updated port: 0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:33:12 compute-0 nova_compute[254092]: 2025-11-25 16:33:12.952 254096 DEBUG nova.compute.manager [req-afb4af6e-1dc6-461d-9d09-32acffef94f2 req-7301fafd-cf66-481c-a3d1-227ff99119c7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9bd4d655-c683-4433-a739-168946211a75] Received event network-vif-deleted-e1641afa-e435-45ca-a0fe-d2bb9b12981a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:33:13 compute-0 nova_compute[254092]: 2025-11-25 16:33:13.061 254096 DEBUG nova.compute.manager [req-6da95543-8270-4d5b-82cc-dcae8e60c00a req-4749a670-1392-4ab7-85bc-db5810d8da5c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Received event network-changed-0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:33:13 compute-0 nova_compute[254092]: 2025-11-25 16:33:13.061 254096 DEBUG nova.compute.manager [req-6da95543-8270-4d5b-82cc-dcae8e60c00a req-4749a670-1392-4ab7-85bc-db5810d8da5c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Refreshing instance network info cache due to event network-changed-0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:33:13 compute-0 nova_compute[254092]: 2025-11-25 16:33:13.062 254096 DEBUG oslo_concurrency.lockutils [req-6da95543-8270-4d5b-82cc-dcae8e60c00a req-4749a670-1392-4ab7-85bc-db5810d8da5c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-1d318e56-4a8c-4806-aa87-e837708f2a1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:33:13 compute-0 nova_compute[254092]: 2025-11-25 16:33:13.062 254096 DEBUG oslo_concurrency.lockutils [req-6da95543-8270-4d5b-82cc-dcae8e60c00a req-4749a670-1392-4ab7-85bc-db5810d8da5c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-1d318e56-4a8c-4806-aa87-e837708f2a1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:33:13 compute-0 nova_compute[254092]: 2025-11-25 16:33:13.062 254096 DEBUG nova.network.neutron [req-6da95543-8270-4d5b-82cc-dcae8e60c00a req-4749a670-1392-4ab7-85bc-db5810d8da5c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Refreshing network info cache for port 0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:33:13 compute-0 nova_compute[254092]: 2025-11-25 16:33:13.188 254096 DEBUG oslo_concurrency.lockutils [None req-9a2e8e34-0295-4050-8ff5-22d68ed91961 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "refresh_cache-1d318e56-4a8c-4806-aa87-e837708f2a1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:33:13 compute-0 nova_compute[254092]: 2025-11-25 16:33:13.303 254096 DEBUG oslo_concurrency.lockutils [None req-599fd70f-77cc-4967-95db-1c9fc7aba3c0 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:33:13 compute-0 nova_compute[254092]: 2025-11-25 16:33:13.303 254096 DEBUG oslo_concurrency.lockutils [None req-599fd70f-77cc-4967-95db-1c9fc7aba3c0 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:33:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1374: 321 pgs: 321 active+clean; 200 MiB data, 475 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 18 KiB/s wr, 29 op/s
Nov 25 16:33:13 compute-0 nova_compute[254092]: 2025-11-25 16:33:13.467 254096 DEBUG oslo_concurrency.processutils [None req-599fd70f-77cc-4967-95db-1c9fc7aba3c0 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:33:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:13.607 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:33:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:13.608 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:33:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:13.608 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:33:13 compute-0 nova_compute[254092]: 2025-11-25 16:33:13.642 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:13 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:33:13 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:33:13 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/690719901' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:33:13 compute-0 nova_compute[254092]: 2025-11-25 16:33:13.972 254096 DEBUG oslo_concurrency.processutils [None req-599fd70f-77cc-4967-95db-1c9fc7aba3c0 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:33:13 compute-0 nova_compute[254092]: 2025-11-25 16:33:13.981 254096 DEBUG nova.compute.provider_tree [None req-599fd70f-77cc-4967-95db-1c9fc7aba3c0 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:33:14 compute-0 nova_compute[254092]: 2025-11-25 16:33:14.000 254096 DEBUG nova.scheduler.client.report [None req-599fd70f-77cc-4967-95db-1c9fc7aba3c0 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:33:14 compute-0 nova_compute[254092]: 2025-11-25 16:33:14.041 254096 DEBUG oslo_concurrency.lockutils [None req-599fd70f-77cc-4967-95db-1c9fc7aba3c0 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.737s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:33:14 compute-0 ceph-mon[74985]: pgmap v1374: 321 pgs: 321 active+clean; 200 MiB data, 475 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 18 KiB/s wr, 29 op/s
Nov 25 16:33:14 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/690719901' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:33:14 compute-0 nova_compute[254092]: 2025-11-25 16:33:14.161 254096 INFO nova.scheduler.client.report [None req-599fd70f-77cc-4967-95db-1c9fc7aba3c0 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Deleted allocations for instance 9bd4d655-c683-4433-a739-168946211a75
Nov 25 16:33:14 compute-0 nova_compute[254092]: 2025-11-25 16:33:14.497 254096 DEBUG oslo_concurrency.lockutils [None req-599fd70f-77cc-4967-95db-1c9fc7aba3c0 a46b9493b027436fbd21d09ff5ac15e4 4ae7f32b97104afd930af5d5f5754532 - - default default] Lock "9bd4d655-c683-4433-a739-168946211a75" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.114s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:33:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1375: 321 pgs: 321 active+clean; 200 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 21 KiB/s wr, 29 op/s
Nov 25 16:33:16 compute-0 nova_compute[254092]: 2025-11-25 16:33:16.043 254096 DEBUG nova.network.neutron [req-6da95543-8270-4d5b-82cc-dcae8e60c00a req-4749a670-1392-4ab7-85bc-db5810d8da5c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Added VIF to instance network info cache for port 0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3489
Nov 25 16:33:16 compute-0 nova_compute[254092]: 2025-11-25 16:33:16.043 254096 DEBUG nova.network.neutron [req-6da95543-8270-4d5b-82cc-dcae8e60c00a req-4749a670-1392-4ab7-85bc-db5810d8da5c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Updating instance_info_cache with network_info: [{"id": "f95d61ca-d58c-4f07-879f-5e5412976e42", "address": "fa:16:3e:d7:b8:12", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf95d61ca-d5", "ovs_interfaceid": "f95d61ca-d58c-4f07-879f-5e5412976e42", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "address": "fa:16:3e:bc:a3:51", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ce87d5c-fb", "ovs_interfaceid": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:33:16 compute-0 nova_compute[254092]: 2025-11-25 16:33:16.342 254096 DEBUG oslo_concurrency.lockutils [req-6da95543-8270-4d5b-82cc-dcae8e60c00a req-4749a670-1392-4ab7-85bc-db5810d8da5c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-1d318e56-4a8c-4806-aa87-e837708f2a1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:33:16 compute-0 nova_compute[254092]: 2025-11-25 16:33:16.342 254096 DEBUG oslo_concurrency.lockutils [None req-9a2e8e34-0295-4050-8ff5-22d68ed91961 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquired lock "refresh_cache-1d318e56-4a8c-4806-aa87-e837708f2a1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:33:16 compute-0 nova_compute[254092]: 2025-11-25 16:33:16.342 254096 DEBUG nova.network.neutron [None req-9a2e8e34-0295-4050-8ff5-22d68ed91961 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:33:17 compute-0 ceph-mon[74985]: pgmap v1375: 321 pgs: 321 active+clean; 200 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 21 KiB/s wr, 29 op/s
Nov 25 16:33:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1376: 321 pgs: 321 active+clean; 200 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 7.5 KiB/s wr, 28 op/s
Nov 25 16:33:17 compute-0 nova_compute[254092]: 2025-11-25 16:33:17.657 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:18 compute-0 ceph-mon[74985]: pgmap v1376: 321 pgs: 321 active+clean; 200 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 7.5 KiB/s wr, 28 op/s
Nov 25 16:33:18 compute-0 nova_compute[254092]: 2025-11-25 16:33:18.679 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:18 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:33:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1377: 321 pgs: 321 active+clean; 200 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Nov 25 16:33:20 compute-0 ceph-mon[74985]: pgmap v1377: 321 pgs: 321 active+clean; 200 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Nov 25 16:33:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1378: 321 pgs: 321 active+clean; 200 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Nov 25 16:33:22 compute-0 nova_compute[254092]: 2025-11-25 16:33:22.347 254096 WARNING nova.network.neutron [None req-9a2e8e34-0295-4050-8ff5-22d68ed91961 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d already exists in list: networks containing: ['52e7d5b9-0570-4e5c-b3da-9dfcb924b83d']. ignoring it
Nov 25 16:33:22 compute-0 nova_compute[254092]: 2025-11-25 16:33:22.349 254096 WARNING nova.network.neutron [None req-9a2e8e34-0295-4050-8ff5-22d68ed91961 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d already exists in list: networks containing: ['52e7d5b9-0570-4e5c-b3da-9dfcb924b83d']. ignoring it
Nov 25 16:33:22 compute-0 nova_compute[254092]: 2025-11-25 16:33:22.350 254096 WARNING nova.network.neutron [None req-9a2e8e34-0295-4050-8ff5-22d68ed91961 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] 0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 already exists in list: port_ids containing: ['0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9']. ignoring it
Nov 25 16:33:22 compute-0 sudo[296605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:33:22 compute-0 sudo[296605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:33:22 compute-0 sudo[296605]: pam_unix(sudo:session): session closed for user root
Nov 25 16:33:22 compute-0 sudo[296630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:33:22 compute-0 sudo[296630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:33:22 compute-0 sudo[296630]: pam_unix(sudo:session): session closed for user root
Nov 25 16:33:22 compute-0 sudo[296655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:33:22 compute-0 sudo[296655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:33:22 compute-0 sudo[296655]: pam_unix(sudo:session): session closed for user root
Nov 25 16:33:22 compute-0 sudo[296680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 16:33:22 compute-0 sudo[296680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:33:22 compute-0 ceph-mon[74985]: pgmap v1378: 321 pgs: 321 active+clean; 200 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Nov 25 16:33:22 compute-0 nova_compute[254092]: 2025-11-25 16:33:22.659 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:23 compute-0 sudo[296680]: pam_unix(sudo:session): session closed for user root
Nov 25 16:33:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:33:23 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:33:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 16:33:23 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:33:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 16:33:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1379: 321 pgs: 321 active+clean; 200 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s wr, 0 op/s
Nov 25 16:33:23 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:33:23 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev bba698dd-49aa-4e95-86c2-8e3ed68793d2 does not exist
Nov 25 16:33:23 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev cc5a1cfe-40ff-49b2-a759-5d73ccf68dc0 does not exist
Nov 25 16:33:23 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 975842e6-a3b9-402a-80bd-a769580270d0 does not exist
Nov 25 16:33:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 16:33:23 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:33:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 16:33:23 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:33:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:33:23 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:33:23 compute-0 sudo[296735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:33:23 compute-0 sudo[296735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:33:23 compute-0 sudo[296735]: pam_unix(sudo:session): session closed for user root
Nov 25 16:33:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:23.594 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:33:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:23.596 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 16:33:23 compute-0 nova_compute[254092]: 2025-11-25 16:33:23.619 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088388.617388, 9bd4d655-c683-4433-a739-168946211a75 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:33:23 compute-0 nova_compute[254092]: 2025-11-25 16:33:23.619 254096 INFO nova.compute.manager [-] [instance: 9bd4d655-c683-4433-a739-168946211a75] VM Stopped (Lifecycle Event)
Nov 25 16:33:23 compute-0 nova_compute[254092]: 2025-11-25 16:33:23.630 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:23 compute-0 sudo[296760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:33:23 compute-0 sudo[296760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:33:23 compute-0 sudo[296760]: pam_unix(sudo:session): session closed for user root
Nov 25 16:33:23 compute-0 nova_compute[254092]: 2025-11-25 16:33:23.641 254096 DEBUG nova.compute.manager [None req-f441664a-74bd-43dd-a718-2c321a623cc9 - - - - - -] [instance: 9bd4d655-c683-4433-a739-168946211a75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:33:23 compute-0 nova_compute[254092]: 2025-11-25 16:33:23.681 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:23 compute-0 sudo[296785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:33:23 compute-0 sudo[296785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:33:23 compute-0 sudo[296785]: pam_unix(sudo:session): session closed for user root
Nov 25 16:33:23 compute-0 sudo[296810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 16:33:23 compute-0 sudo[296810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:33:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:33:24 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:33:24 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:33:24 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:33:24 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:33:24 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:33:24 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:33:24 compute-0 podman[296875]: 2025-11-25 16:33:24.214163608 +0000 UTC m=+0.029276443 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:33:24 compute-0 podman[296875]: 2025-11-25 16:33:24.974976796 +0000 UTC m=+0.790089571 container create 2ffb0a4bdd4203abd3f9ba3bf7b4b2b14e06a0b67ae58a72e01e8a82eee8667c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wilson, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 16:33:25 compute-0 systemd[1]: Started libpod-conmon-2ffb0a4bdd4203abd3f9ba3bf7b4b2b14e06a0b67ae58a72e01e8a82eee8667c.scope.
Nov 25 16:33:25 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:33:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1380: 321 pgs: 321 active+clean; 200 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s wr, 0 op/s
Nov 25 16:33:25 compute-0 ceph-mon[74985]: pgmap v1379: 321 pgs: 321 active+clean; 200 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s wr, 0 op/s
Nov 25 16:33:25 compute-0 podman[296875]: 2025-11-25 16:33:25.708593468 +0000 UTC m=+1.523706263 container init 2ffb0a4bdd4203abd3f9ba3bf7b4b2b14e06a0b67ae58a72e01e8a82eee8667c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wilson, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:33:25 compute-0 podman[296875]: 2025-11-25 16:33:25.716111682 +0000 UTC m=+1.531224457 container start 2ffb0a4bdd4203abd3f9ba3bf7b4b2b14e06a0b67ae58a72e01e8a82eee8667c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wilson, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 16:33:25 compute-0 zen_wilson[296891]: 167 167
Nov 25 16:33:25 compute-0 systemd[1]: libpod-2ffb0a4bdd4203abd3f9ba3bf7b4b2b14e06a0b67ae58a72e01e8a82eee8667c.scope: Deactivated successfully.
Nov 25 16:33:25 compute-0 podman[296875]: 2025-11-25 16:33:25.799226663 +0000 UTC m=+1.614339458 container attach 2ffb0a4bdd4203abd3f9ba3bf7b4b2b14e06a0b67ae58a72e01e8a82eee8667c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wilson, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:33:25 compute-0 podman[296875]: 2025-11-25 16:33:25.800486527 +0000 UTC m=+1.615599292 container died 2ffb0a4bdd4203abd3f9ba3bf7b4b2b14e06a0b67ae58a72e01e8a82eee8667c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wilson, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:33:26 compute-0 nova_compute[254092]: 2025-11-25 16:33:26.299 254096 DEBUG nova.network.neutron [None req-9a2e8e34-0295-4050-8ff5-22d68ed91961 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Updating instance_info_cache with network_info: [{"id": "f95d61ca-d58c-4f07-879f-5e5412976e42", "address": "fa:16:3e:d7:b8:12", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf95d61ca-d5", "ovs_interfaceid": "f95d61ca-d58c-4f07-879f-5e5412976e42", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "address": "fa:16:3e:bc:a3:51", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ce87d5c-fb", "ovs_interfaceid": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:33:26 compute-0 nova_compute[254092]: 2025-11-25 16:33:26.638 254096 DEBUG oslo_concurrency.lockutils [None req-9a2e8e34-0295-4050-8ff5-22d68ed91961 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Releasing lock "refresh_cache-1d318e56-4a8c-4806-aa87-e837708f2a1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:33:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1ab8bc30b0628732edc3b964bf961ea2d1e51bbdca6f00076d80abb55c12b68-merged.mount: Deactivated successfully.
Nov 25 16:33:26 compute-0 nova_compute[254092]: 2025-11-25 16:33:26.643 254096 DEBUG nova.virt.libvirt.vif [None req-9a2e8e34-0295-4050-8ff5-22d68ed91961 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:32:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-822239868',display_name='tempest-tempest.common.compute-instance-822239868',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-822239868',id=36,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDGZe6LoY198VQqVmszTzRQn1eLrChbRZgirb9UFyHnKiwbK+onUZ9OAKdmbbmeuifOBSXxAmDh6Tzi6OA/iv9WrQsgxGvk1nGcnbFbOsVIGStOyhcAbSBeBMtXZYtHi4Q==',key_name='tempest-keypair-386194762',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:32:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-bctv2vy4',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:32:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=1d318e56-4a8c-4806-aa87-e837708f2a1f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "address": "fa:16:3e:bc:a3:51", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ce87d5c-fb", "ovs_interfaceid": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:33:26 compute-0 nova_compute[254092]: 2025-11-25 16:33:26.644 254096 DEBUG nova.network.os_vif_util [None req-9a2e8e34-0295-4050-8ff5-22d68ed91961 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "address": "fa:16:3e:bc:a3:51", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ce87d5c-fb", "ovs_interfaceid": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:33:26 compute-0 nova_compute[254092]: 2025-11-25 16:33:26.645 254096 DEBUG nova.network.os_vif_util [None req-9a2e8e34-0295-4050-8ff5-22d68ed91961 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:a3:51,bridge_name='br-int',has_traffic_filtering=True,id=0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0ce87d5c-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:33:26 compute-0 nova_compute[254092]: 2025-11-25 16:33:26.645 254096 DEBUG os_vif [None req-9a2e8e34-0295-4050-8ff5-22d68ed91961 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:a3:51,bridge_name='br-int',has_traffic_filtering=True,id=0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0ce87d5c-fb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:33:26 compute-0 nova_compute[254092]: 2025-11-25 16:33:26.646 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:26 compute-0 nova_compute[254092]: 2025-11-25 16:33:26.646 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:33:26 compute-0 nova_compute[254092]: 2025-11-25 16:33:26.647 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:33:26 compute-0 nova_compute[254092]: 2025-11-25 16:33:26.650 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:26 compute-0 nova_compute[254092]: 2025-11-25 16:33:26.651 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0ce87d5c-fb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:33:26 compute-0 nova_compute[254092]: 2025-11-25 16:33:26.651 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0ce87d5c-fb, col_values=(('external_ids', {'iface-id': '0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bc:a3:51', 'vm-uuid': '1d318e56-4a8c-4806-aa87-e837708f2a1f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:33:26 compute-0 nova_compute[254092]: 2025-11-25 16:33:26.653 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:26 compute-0 NetworkManager[48891]: <info>  [1764088406.6544] manager: (tap0ce87d5c-fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/116)
Nov 25 16:33:26 compute-0 nova_compute[254092]: 2025-11-25 16:33:26.656 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:33:26 compute-0 nova_compute[254092]: 2025-11-25 16:33:26.660 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:26 compute-0 nova_compute[254092]: 2025-11-25 16:33:26.661 254096 INFO os_vif [None req-9a2e8e34-0295-4050-8ff5-22d68ed91961 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:a3:51,bridge_name='br-int',has_traffic_filtering=True,id=0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0ce87d5c-fb')
Nov 25 16:33:26 compute-0 nova_compute[254092]: 2025-11-25 16:33:26.662 254096 DEBUG nova.virt.libvirt.vif [None req-9a2e8e34-0295-4050-8ff5-22d68ed91961 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:32:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-822239868',display_name='tempest-tempest.common.compute-instance-822239868',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-822239868',id=36,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDGZe6LoY198VQqVmszTzRQn1eLrChbRZgirb9UFyHnKiwbK+onUZ9OAKdmbbmeuifOBSXxAmDh6Tzi6OA/iv9WrQsgxGvk1nGcnbFbOsVIGStOyhcAbSBeBMtXZYtHi4Q==',key_name='tempest-keypair-386194762',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:32:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-bctv2vy4',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:32:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=1d318e56-4a8c-4806-aa87-e837708f2a1f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "address": "fa:16:3e:bc:a3:51", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ce87d5c-fb", "ovs_interfaceid": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:33:26 compute-0 nova_compute[254092]: 2025-11-25 16:33:26.663 254096 DEBUG nova.network.os_vif_util [None req-9a2e8e34-0295-4050-8ff5-22d68ed91961 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "address": "fa:16:3e:bc:a3:51", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ce87d5c-fb", "ovs_interfaceid": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:33:26 compute-0 nova_compute[254092]: 2025-11-25 16:33:26.663 254096 DEBUG nova.network.os_vif_util [None req-9a2e8e34-0295-4050-8ff5-22d68ed91961 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:a3:51,bridge_name='br-int',has_traffic_filtering=True,id=0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0ce87d5c-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:33:26 compute-0 nova_compute[254092]: 2025-11-25 16:33:26.666 254096 DEBUG nova.virt.libvirt.guest [None req-9a2e8e34-0295-4050-8ff5-22d68ed91961 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] attach device xml: <interface type="ethernet">
Nov 25 16:33:26 compute-0 nova_compute[254092]:   <mac address="fa:16:3e:bc:a3:51"/>
Nov 25 16:33:26 compute-0 nova_compute[254092]:   <model type="virtio"/>
Nov 25 16:33:26 compute-0 nova_compute[254092]:   <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:33:26 compute-0 nova_compute[254092]:   <mtu size="1442"/>
Nov 25 16:33:26 compute-0 nova_compute[254092]:   <target dev="tap0ce87d5c-fb"/>
Nov 25 16:33:26 compute-0 nova_compute[254092]: </interface>
Nov 25 16:33:26 compute-0 nova_compute[254092]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 25 16:33:26 compute-0 kernel: tap0ce87d5c-fb: entered promiscuous mode
Nov 25 16:33:26 compute-0 NetworkManager[48891]: <info>  [1764088406.6792] manager: (tap0ce87d5c-fb): new Tun device (/org/freedesktop/NetworkManager/Devices/117)
Nov 25 16:33:26 compute-0 ovn_controller[153477]: 2025-11-25T16:33:26Z|00244|binding|INFO|Claiming lport 0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 for this chassis.
Nov 25 16:33:26 compute-0 ovn_controller[153477]: 2025-11-25T16:33:26Z|00245|binding|INFO|0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9: Claiming fa:16:3e:bc:a3:51 10.100.0.11
Nov 25 16:33:26 compute-0 nova_compute[254092]: 2025-11-25 16:33:26.681 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:26 compute-0 ovn_controller[153477]: 2025-11-25T16:33:26Z|00246|binding|INFO|Setting lport 0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 ovn-installed in OVS
Nov 25 16:33:26 compute-0 nova_compute[254092]: 2025-11-25 16:33:26.706 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:26 compute-0 nova_compute[254092]: 2025-11-25 16:33:26.709 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:26 compute-0 systemd-udevd[296943]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:33:26 compute-0 NetworkManager[48891]: <info>  [1764088406.7776] device (tap0ce87d5c-fb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:33:26 compute-0 NetworkManager[48891]: <info>  [1764088406.7784] device (tap0ce87d5c-fb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:33:26 compute-0 ovn_controller[153477]: 2025-11-25T16:33:26Z|00247|binding|INFO|Releasing lport af5d2618-f326-4aaf-9395-f47548ac108b from this chassis (sb_readonly=0)
Nov 25 16:33:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:26.861 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:a3:51 10.100.0.11'], port_security=['fa:16:3e:bc:a3:51 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-677304244', 'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '1d318e56-4a8c-4806-aa87-e837708f2a1f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-677304244', 'neutron:project_id': '36fd407edf16424e96041f57fccb9e2b', 'neutron:revision_number': '7', 'neutron:security_group_ids': '4d6ed831-26b5-4c46-a07a-309ffadbe01b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a0fe150-4b32-4abf-ad06-055b2d97facb, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:33:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:26.863 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 in datapath 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d bound to our chassis
Nov 25 16:33:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:26.864 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d
Nov 25 16:33:26 compute-0 ovn_controller[153477]: 2025-11-25T16:33:26Z|00248|binding|INFO|Setting lport 0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 up in Southbound
Nov 25 16:33:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:26.881 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[58dcd267-c87f-43e7-a64f-2c832235f067]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:26 compute-0 nova_compute[254092]: 2025-11-25 16:33:26.900 254096 DEBUG nova.virt.libvirt.driver [None req-9a2e8e34-0295-4050-8ff5-22d68ed91961 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:33:26 compute-0 nova_compute[254092]: 2025-11-25 16:33:26.900 254096 DEBUG nova.virt.libvirt.driver [None req-9a2e8e34-0295-4050-8ff5-22d68ed91961 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:33:26 compute-0 nova_compute[254092]: 2025-11-25 16:33:26.901 254096 DEBUG nova.virt.libvirt.driver [None req-9a2e8e34-0295-4050-8ff5-22d68ed91961 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No VIF found with MAC fa:16:3e:d7:b8:12, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:33:26 compute-0 nova_compute[254092]: 2025-11-25 16:33:26.901 254096 DEBUG nova.virt.libvirt.driver [None req-9a2e8e34-0295-4050-8ff5-22d68ed91961 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] No VIF found with MAC fa:16:3e:bc:a3:51, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:33:26 compute-0 nova_compute[254092]: 2025-11-25 16:33:26.903 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:26.912 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9576b7d7-f7e5-447e-9cae-cb2fcc145ed9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:26.915 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[4d15d23e-422b-4932-b5e7-6efa6483ee2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:26 compute-0 ceph-mon[74985]: pgmap v1380: 321 pgs: 321 active+clean; 200 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s wr, 0 op/s
Nov 25 16:33:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:26.943 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[50490825-8534-4958-844f-3d9e8c2a29c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:26 compute-0 nova_compute[254092]: 2025-11-25 16:33:26.946 254096 DEBUG nova.virt.libvirt.guest [None req-9a2e8e34-0295-4050-8ff5-22d68ed91961 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:33:26 compute-0 nova_compute[254092]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:33:26 compute-0 nova_compute[254092]:   <nova:name>tempest-tempest.common.compute-instance-822239868</nova:name>
Nov 25 16:33:26 compute-0 nova_compute[254092]:   <nova:creationTime>2025-11-25 16:33:26</nova:creationTime>
Nov 25 16:33:26 compute-0 nova_compute[254092]:   <nova:flavor name="m1.nano">
Nov 25 16:33:26 compute-0 nova_compute[254092]:     <nova:memory>128</nova:memory>
Nov 25 16:33:26 compute-0 nova_compute[254092]:     <nova:disk>1</nova:disk>
Nov 25 16:33:26 compute-0 nova_compute[254092]:     <nova:swap>0</nova:swap>
Nov 25 16:33:26 compute-0 nova_compute[254092]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:33:26 compute-0 nova_compute[254092]:     <nova:vcpus>1</nova:vcpus>
Nov 25 16:33:26 compute-0 nova_compute[254092]:   </nova:flavor>
Nov 25 16:33:26 compute-0 nova_compute[254092]:   <nova:owner>
Nov 25 16:33:26 compute-0 nova_compute[254092]:     <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 16:33:26 compute-0 nova_compute[254092]:     <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 16:33:26 compute-0 nova_compute[254092]:   </nova:owner>
Nov 25 16:33:26 compute-0 nova_compute[254092]:   <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:33:26 compute-0 nova_compute[254092]:   <nova:ports>
Nov 25 16:33:26 compute-0 nova_compute[254092]:     <nova:port uuid="f95d61ca-d58c-4f07-879f-5e5412976e42">
Nov 25 16:33:26 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 16:33:26 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:33:26 compute-0 nova_compute[254092]:     <nova:port uuid="0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9">
Nov 25 16:33:26 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 16:33:26 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:33:26 compute-0 nova_compute[254092]:   </nova:ports>
Nov 25 16:33:26 compute-0 nova_compute[254092]: </nova:instance>
Nov 25 16:33:26 compute-0 nova_compute[254092]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 25 16:33:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:26.960 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[44e306a0-eebe-4163-9d8c-f9dc95a6bfbb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52e7d5b9-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:97:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 480306, 'reachable_time': 15503, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296951, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:26 compute-0 nova_compute[254092]: 2025-11-25 16:33:26.971 254096 DEBUG oslo_concurrency.lockutils [None req-9a2e8e34-0295-4050-8ff5-22d68ed91961 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "interface-1d318e56-4a8c-4806-aa87-e837708f2a1f-0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 18.029s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:33:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:26.975 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b53cea92-f3cc-4ef7-b32a-a3d920665ee3]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 480317, 'tstamp': 480317}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296952, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 480320, 'tstamp': 480320}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296952, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:26.976 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52e7d5b9-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:33:26 compute-0 nova_compute[254092]: 2025-11-25 16:33:26.977 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:26.979 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52e7d5b9-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:33:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:26.979 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:33:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:26.979 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap52e7d5b9-00, col_values=(('external_ids', {'iface-id': 'af5d2618-f326-4aaf-9395-f47548ac108b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:33:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:26.980 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:33:27 compute-0 podman[296875]: 2025-11-25 16:33:27.013560826 +0000 UTC m=+2.828673601 container remove 2ffb0a4bdd4203abd3f9ba3bf7b4b2b14e06a0b67ae58a72e01e8a82eee8667c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wilson, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:33:27 compute-0 podman[296915]: 2025-11-25 16:33:27.05358374 +0000 UTC m=+0.340905096 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:33:27 compute-0 podman[296913]: 2025-11-25 16:33:27.062541432 +0000 UTC m=+0.349205789 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 16:33:27 compute-0 systemd[1]: libpod-conmon-2ffb0a4bdd4203abd3f9ba3bf7b4b2b14e06a0b67ae58a72e01e8a82eee8667c.scope: Deactivated successfully.
Nov 25 16:33:27 compute-0 podman[296916]: 2025-11-25 16:33:27.122072754 +0000 UTC m=+0.407977601 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 16:33:27 compute-0 podman[296994]: 2025-11-25 16:33:27.225538177 +0000 UTC m=+0.064132408 container create c2075eade3dc39151d62903e425a6fef9b9650b7bdbdd07c0ddb84a837fc72ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:33:27 compute-0 podman[296994]: 2025-11-25 16:33:27.186381146 +0000 UTC m=+0.024975437 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:33:27 compute-0 systemd[1]: Started libpod-conmon-c2075eade3dc39151d62903e425a6fef9b9650b7bdbdd07c0ddb84a837fc72ac.scope.
Nov 25 16:33:27 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5f1aeb24821b2693aa6f88bc0a6267fe39b4454fce7a30d130682b82faf72c2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5f1aeb24821b2693aa6f88bc0a6267fe39b4454fce7a30d130682b82faf72c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5f1aeb24821b2693aa6f88bc0a6267fe39b4454fce7a30d130682b82faf72c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5f1aeb24821b2693aa6f88bc0a6267fe39b4454fce7a30d130682b82faf72c2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5f1aeb24821b2693aa6f88bc0a6267fe39b4454fce7a30d130682b82faf72c2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 16:33:27 compute-0 nova_compute[254092]: 2025-11-25 16:33:27.455 254096 DEBUG nova.compute.manager [req-8133021a-55ca-43d8-8190-9c8958ffe314 req-770407e7-dd0e-4517-b7f4-75a03d89763c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Received event network-vif-plugged-0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:33:27 compute-0 nova_compute[254092]: 2025-11-25 16:33:27.456 254096 DEBUG oslo_concurrency.lockutils [req-8133021a-55ca-43d8-8190-9c8958ffe314 req-770407e7-dd0e-4517-b7f4-75a03d89763c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:33:27 compute-0 nova_compute[254092]: 2025-11-25 16:33:27.456 254096 DEBUG oslo_concurrency.lockutils [req-8133021a-55ca-43d8-8190-9c8958ffe314 req-770407e7-dd0e-4517-b7f4-75a03d89763c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:33:27 compute-0 nova_compute[254092]: 2025-11-25 16:33:27.456 254096 DEBUG oslo_concurrency.lockutils [req-8133021a-55ca-43d8-8190-9c8958ffe314 req-770407e7-dd0e-4517-b7f4-75a03d89763c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:33:27 compute-0 nova_compute[254092]: 2025-11-25 16:33:27.456 254096 DEBUG nova.compute.manager [req-8133021a-55ca-43d8-8190-9c8958ffe314 req-770407e7-dd0e-4517-b7f4-75a03d89763c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] No waiting events found dispatching network-vif-plugged-0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:33:27 compute-0 nova_compute[254092]: 2025-11-25 16:33:27.457 254096 WARNING nova.compute.manager [req-8133021a-55ca-43d8-8190-9c8958ffe314 req-770407e7-dd0e-4517-b7f4-75a03d89763c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Received unexpected event network-vif-plugged-0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 for instance with vm_state active and task_state None.
Nov 25 16:33:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1381: 321 pgs: 321 active+clean; 200 MiB data, 470 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:33:27 compute-0 podman[296994]: 2025-11-25 16:33:27.560375988 +0000 UTC m=+0.398970219 container init c2075eade3dc39151d62903e425a6fef9b9650b7bdbdd07c0ddb84a837fc72ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ptolemy, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 16:33:27 compute-0 podman[296994]: 2025-11-25 16:33:27.569236917 +0000 UTC m=+0.407831148 container start c2075eade3dc39151d62903e425a6fef9b9650b7bdbdd07c0ddb84a837fc72ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 16:33:27 compute-0 podman[296994]: 2025-11-25 16:33:27.596216669 +0000 UTC m=+0.434810920 container attach c2075eade3dc39151d62903e425a6fef9b9650b7bdbdd07c0ddb84a837fc72ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ptolemy, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 16:33:27 compute-0 nova_compute[254092]: 2025-11-25 16:33:27.662 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:28 compute-0 ceph-mon[74985]: pgmap v1381: 321 pgs: 321 active+clean; 200 MiB data, 470 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:33:28 compute-0 ovn_controller[153477]: 2025-11-25T16:33:28Z|00046|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bc:a3:51 10.100.0.11
Nov 25 16:33:28 compute-0 ovn_controller[153477]: 2025-11-25T16:33:28Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bc:a3:51 10.100.0.11
Nov 25 16:33:28 compute-0 nova_compute[254092]: 2025-11-25 16:33:28.339 254096 DEBUG oslo_concurrency.lockutils [None req-d59b0b23-1469-411f-b330-b75f0881a1e6 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "interface-1d318e56-4a8c-4806-aa87-e837708f2a1f-0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:33:28 compute-0 nova_compute[254092]: 2025-11-25 16:33:28.340 254096 DEBUG oslo_concurrency.lockutils [None req-d59b0b23-1469-411f-b330-b75f0881a1e6 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "interface-1d318e56-4a8c-4806-aa87-e837708f2a1f-0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:33:28 compute-0 nova_compute[254092]: 2025-11-25 16:33:28.364 254096 DEBUG nova.objects.instance [None req-d59b0b23-1469-411f-b330-b75f0881a1e6 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'flavor' on Instance uuid 1d318e56-4a8c-4806-aa87-e837708f2a1f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:33:28 compute-0 nova_compute[254092]: 2025-11-25 16:33:28.394 254096 DEBUG nova.virt.libvirt.vif [None req-d59b0b23-1469-411f-b330-b75f0881a1e6 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:32:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-822239868',display_name='tempest-tempest.common.compute-instance-822239868',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-822239868',id=36,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDGZe6LoY198VQqVmszTzRQn1eLrChbRZgirb9UFyHnKiwbK+onUZ9OAKdmbbmeuifOBSXxAmDh6Tzi6OA/iv9WrQsgxGvk1nGcnbFbOsVIGStOyhcAbSBeBMtXZYtHi4Q==',key_name='tempest-keypair-386194762',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:32:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-bctv2vy4',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:32:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=1d318e56-4a8c-4806-aa87-e837708f2a1f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "address": "fa:16:3e:bc:a3:51", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ce87d5c-fb", "ovs_interfaceid": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:33:28 compute-0 nova_compute[254092]: 2025-11-25 16:33:28.394 254096 DEBUG nova.network.os_vif_util [None req-d59b0b23-1469-411f-b330-b75f0881a1e6 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "address": "fa:16:3e:bc:a3:51", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ce87d5c-fb", "ovs_interfaceid": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:33:28 compute-0 nova_compute[254092]: 2025-11-25 16:33:28.396 254096 DEBUG nova.network.os_vif_util [None req-d59b0b23-1469-411f-b330-b75f0881a1e6 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:a3:51,bridge_name='br-int',has_traffic_filtering=True,id=0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0ce87d5c-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:33:28 compute-0 nova_compute[254092]: 2025-11-25 16:33:28.399 254096 DEBUG nova.virt.libvirt.guest [None req-d59b0b23-1469-411f-b330-b75f0881a1e6 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:bc:a3:51"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap0ce87d5c-fb"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 16:33:28 compute-0 nova_compute[254092]: 2025-11-25 16:33:28.401 254096 DEBUG nova.virt.libvirt.guest [None req-d59b0b23-1469-411f-b330-b75f0881a1e6 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:bc:a3:51"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap0ce87d5c-fb"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 16:33:28 compute-0 nova_compute[254092]: 2025-11-25 16:33:28.403 254096 DEBUG nova.virt.libvirt.driver [None req-d59b0b23-1469-411f-b330-b75f0881a1e6 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Attempting to detach device tap0ce87d5c-fb from instance 1d318e56-4a8c-4806-aa87-e837708f2a1f from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 25 16:33:28 compute-0 nova_compute[254092]: 2025-11-25 16:33:28.403 254096 DEBUG nova.virt.libvirt.guest [None req-d59b0b23-1469-411f-b330-b75f0881a1e6 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] detach device xml: <interface type="ethernet">
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <mac address="fa:16:3e:bc:a3:51"/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <model type="virtio"/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <mtu size="1442"/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <target dev="tap0ce87d5c-fb"/>
Nov 25 16:33:28 compute-0 nova_compute[254092]: </interface>
Nov 25 16:33:28 compute-0 nova_compute[254092]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 25 16:33:28 compute-0 nova_compute[254092]: 2025-11-25 16:33:28.432 254096 DEBUG nova.virt.libvirt.guest [None req-d59b0b23-1469-411f-b330-b75f0881a1e6 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:bc:a3:51"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap0ce87d5c-fb"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 16:33:28 compute-0 nova_compute[254092]: 2025-11-25 16:33:28.435 254096 DEBUG nova.virt.libvirt.guest [None req-d59b0b23-1469-411f-b330-b75f0881a1e6 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:bc:a3:51"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap0ce87d5c-fb"/></interface>not found in domain: <domain type='kvm' id='41'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <name>instance-00000024</name>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <uuid>1d318e56-4a8c-4806-aa87-e837708f2a1f</uuid>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <nova:name>tempest-tempest.common.compute-instance-822239868</nova:name>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <nova:creationTime>2025-11-25 16:33:26</nova:creationTime>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <nova:flavor name="m1.nano">
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <nova:memory>128</nova:memory>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <nova:disk>1</nova:disk>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <nova:swap>0</nova:swap>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <nova:vcpus>1</nova:vcpus>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   </nova:flavor>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <nova:owner>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   </nova:owner>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <nova:ports>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <nova:port uuid="f95d61ca-d58c-4f07-879f-5e5412976e42">
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <nova:port uuid="0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9">
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   </nova:ports>
Nov 25 16:33:28 compute-0 nova_compute[254092]: </nova:instance>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <memory unit='KiB'>131072</memory>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <vcpu placement='static'>1</vcpu>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <resource>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <partition>/machine</partition>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   </resource>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <sysinfo type='smbios'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <system>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <entry name='manufacturer'>RDO</entry>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <entry name='product'>OpenStack Compute</entry>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <entry name='serial'>1d318e56-4a8c-4806-aa87-e837708f2a1f</entry>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <entry name='uuid'>1d318e56-4a8c-4806-aa87-e837708f2a1f</entry>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <entry name='family'>Virtual Machine</entry>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </system>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <os>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <boot dev='hd'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <smbios mode='sysinfo'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   </os>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <features>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <vmcoreinfo state='on'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   </features>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <cpu mode='custom' match='exact' check='full'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <vendor>AMD</vendor>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='require' name='x2apic'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='require' name='tsc-deadline'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='require' name='hypervisor'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='require' name='tsc_adjust'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='require' name='spec-ctrl'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='require' name='stibp'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='require' name='ssbd'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='require' name='cmp_legacy'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='require' name='overflow-recov'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='require' name='succor'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='require' name='ibrs'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='require' name='amd-ssbd'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='require' name='virt-ssbd'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='disable' name='lbrv'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='disable' name='tsc-scale'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='disable' name='vmcb-clean'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='disable' name='flushbyasid'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='disable' name='pause-filter'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='disable' name='pfthreshold'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='disable' name='xsaves'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='disable' name='svm'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='require' name='topoext'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='disable' name='npt'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='disable' name='nrip-save'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <clock offset='utc'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <timer name='pit' tickpolicy='delay'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <timer name='hpet' present='no'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <on_poweroff>destroy</on_poweroff>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <on_reboot>restart</on_reboot>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <on_crash>destroy</on_crash>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <disk type='network' device='disk'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <auth username='openstack'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:         <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <source protocol='rbd' name='vms/1d318e56-4a8c-4806-aa87-e837708f2a1f_disk' index='2'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:         <host name='192.168.122.100' port='6789'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       </source>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target dev='vda' bus='virtio'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='virtio-disk0'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <disk type='network' device='cdrom'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <auth username='openstack'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:         <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <source protocol='rbd' name='vms/1d318e56-4a8c-4806-aa87-e837708f2a1f_disk.config' index='1'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:         <host name='192.168.122.100' port='6789'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       </source>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target dev='sda' bus='sata'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <readonly/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='sata0-0-0'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='0' model='pcie-root'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pcie.0'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='1' port='0x10'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.1'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='2' port='0x11'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.2'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='3' port='0x12'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.3'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='4' port='0x13'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.4'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='5' port='0x14'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.5'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='6' port='0x15'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.6'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='7' port='0x16'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.7'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='8' port='0x17'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.8'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='9' port='0x18'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.9'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='10' port='0x19'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.10'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='11' port='0x1a'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.11'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='12' port='0x1b'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.12'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='13' port='0x1c'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.13'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='14' port='0x1d'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.14'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='15' port='0x1e'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.15'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='16' port='0x1f'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.16'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='17' port='0x20'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.17'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='18' port='0x21'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.18'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='19' port='0x22'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.19'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='20' port='0x23'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.20'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='21' port='0x24'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.21'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='22' port='0x25'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.22'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='23' port='0x26'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.23'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='24' port='0x27'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.24'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='25' port='0x28'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.25'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-pci-bridge'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.26'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='usb'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='sata' index='0'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='ide'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <interface type='ethernet'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <mac address='fa:16:3e:d7:b8:12'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target dev='tapf95d61ca-d5'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model type='virtio'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <mtu size='1442'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='net0'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <interface type='ethernet'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <mac address='fa:16:3e:bc:a3:51'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target dev='tap0ce87d5c-fb'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model type='virtio'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <mtu size='1442'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='net1'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <serial type='pty'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <source path='/dev/pts/5'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <log file='/var/lib/nova/instances/1d318e56-4a8c-4806-aa87-e837708f2a1f/console.log' append='off'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target type='isa-serial' port='0'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:         <model name='isa-serial'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       </target>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='serial0'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <console type='pty' tty='/dev/pts/5'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <source path='/dev/pts/5'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <log file='/var/lib/nova/instances/1d318e56-4a8c-4806-aa87-e837708f2a1f/console.log' append='off'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target type='serial' port='0'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='serial0'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </console>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <input type='tablet' bus='usb'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='input0'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='usb' bus='0' port='1'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </input>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <input type='mouse' bus='ps2'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='input1'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </input>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <input type='keyboard' bus='ps2'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='input2'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </input>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <graphics type='vnc' port='5905' autoport='yes' listen='::0'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <listen type='address' address='::0'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </graphics>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <audio id='1' type='none'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <video>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model type='virtio' heads='1' primary='yes'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='video0'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </video>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <watchdog model='itco' action='reset'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='watchdog0'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </watchdog>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <memballoon model='virtio'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <stats period='10'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='balloon0'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <rng model='virtio'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <backend model='random'>/dev/urandom</backend>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='rng0'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <label>system_u:system_r:svirt_t:s0:c913,c919</label>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c913,c919</imagelabel>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   </seclabel>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <label>+107:+107</label>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <imagelabel>+107:+107</imagelabel>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   </seclabel>
Nov 25 16:33:28 compute-0 nova_compute[254092]: </domain>
Nov 25 16:33:28 compute-0 nova_compute[254092]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 16:33:28 compute-0 nova_compute[254092]: 2025-11-25 16:33:28.435 254096 INFO nova.virt.libvirt.driver [None req-d59b0b23-1469-411f-b330-b75f0881a1e6 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Successfully detached device tap0ce87d5c-fb from instance 1d318e56-4a8c-4806-aa87-e837708f2a1f from the persistent domain config.
Nov 25 16:33:28 compute-0 nova_compute[254092]: 2025-11-25 16:33:28.436 254096 DEBUG nova.virt.libvirt.driver [None req-d59b0b23-1469-411f-b330-b75f0881a1e6 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] (1/8): Attempting to detach device tap0ce87d5c-fb with device alias net1 from instance 1d318e56-4a8c-4806-aa87-e837708f2a1f from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 25 16:33:28 compute-0 nova_compute[254092]: 2025-11-25 16:33:28.437 254096 DEBUG nova.virt.libvirt.guest [None req-d59b0b23-1469-411f-b330-b75f0881a1e6 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] detach device xml: <interface type="ethernet">
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <mac address="fa:16:3e:bc:a3:51"/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <model type="virtio"/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <mtu size="1442"/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <target dev="tap0ce87d5c-fb"/>
Nov 25 16:33:28 compute-0 nova_compute[254092]: </interface>
Nov 25 16:33:28 compute-0 nova_compute[254092]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 25 16:33:28 compute-0 kernel: tap0ce87d5c-fb (unregistering): left promiscuous mode
Nov 25 16:33:28 compute-0 NetworkManager[48891]: <info>  [1764088408.5421] device (tap0ce87d5c-fb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:33:28 compute-0 ovn_controller[153477]: 2025-11-25T16:33:28Z|00249|binding|INFO|Releasing lport 0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 from this chassis (sb_readonly=0)
Nov 25 16:33:28 compute-0 ovn_controller[153477]: 2025-11-25T16:33:28Z|00250|binding|INFO|Setting lport 0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 down in Southbound
Nov 25 16:33:28 compute-0 ovn_controller[153477]: 2025-11-25T16:33:28Z|00251|binding|INFO|Removing iface tap0ce87d5c-fb ovn-installed in OVS
Nov 25 16:33:28 compute-0 nova_compute[254092]: 2025-11-25 16:33:28.551 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:28 compute-0 nova_compute[254092]: 2025-11-25 16:33:28.553 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:28 compute-0 nova_compute[254092]: 2025-11-25 16:33:28.557 254096 DEBUG nova.virt.libvirt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Received event <DeviceRemovedEvent: 1764088408.5569012, 1d318e56-4a8c-4806-aa87-e837708f2a1f => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 25 16:33:28 compute-0 nova_compute[254092]: 2025-11-25 16:33:28.558 254096 DEBUG nova.virt.libvirt.driver [None req-d59b0b23-1469-411f-b330-b75f0881a1e6 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Start waiting for the detach event from libvirt for device tap0ce87d5c-fb with device alias net1 for instance 1d318e56-4a8c-4806-aa87-e837708f2a1f _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 25 16:33:28 compute-0 nova_compute[254092]: 2025-11-25 16:33:28.559 254096 DEBUG nova.virt.libvirt.guest [None req-d59b0b23-1469-411f-b330-b75f0881a1e6 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:bc:a3:51"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap0ce87d5c-fb"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 16:33:28 compute-0 nova_compute[254092]: 2025-11-25 16:33:28.562 254096 DEBUG nova.virt.libvirt.guest [None req-d59b0b23-1469-411f-b330-b75f0881a1e6 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:bc:a3:51"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap0ce87d5c-fb"/></interface>not found in domain: <domain type='kvm' id='41'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <name>instance-00000024</name>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <uuid>1d318e56-4a8c-4806-aa87-e837708f2a1f</uuid>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <nova:name>tempest-tempest.common.compute-instance-822239868</nova:name>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <nova:creationTime>2025-11-25 16:33:26</nova:creationTime>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <nova:flavor name="m1.nano">
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <nova:memory>128</nova:memory>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <nova:disk>1</nova:disk>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <nova:swap>0</nova:swap>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <nova:vcpus>1</nova:vcpus>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   </nova:flavor>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <nova:owner>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   </nova:owner>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <nova:ports>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <nova:port uuid="f95d61ca-d58c-4f07-879f-5e5412976e42">
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <nova:port uuid="0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9">
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   </nova:ports>
Nov 25 16:33:28 compute-0 nova_compute[254092]: </nova:instance>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <memory unit='KiB'>131072</memory>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <vcpu placement='static'>1</vcpu>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <resource>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <partition>/machine</partition>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   </resource>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <sysinfo type='smbios'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <system>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <entry name='manufacturer'>RDO</entry>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <entry name='product'>OpenStack Compute</entry>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <entry name='serial'>1d318e56-4a8c-4806-aa87-e837708f2a1f</entry>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <entry name='uuid'>1d318e56-4a8c-4806-aa87-e837708f2a1f</entry>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <entry name='family'>Virtual Machine</entry>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </system>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <os>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <boot dev='hd'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <smbios mode='sysinfo'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   </os>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <features>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <vmcoreinfo state='on'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   </features>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <cpu mode='custom' match='exact' check='full'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <vendor>AMD</vendor>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='require' name='x2apic'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='require' name='tsc-deadline'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='require' name='hypervisor'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='require' name='tsc_adjust'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='require' name='spec-ctrl'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='require' name='stibp'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='require' name='ssbd'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='require' name='cmp_legacy'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='require' name='overflow-recov'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='require' name='succor'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='require' name='ibrs'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='require' name='amd-ssbd'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='require' name='virt-ssbd'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='disable' name='lbrv'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='disable' name='tsc-scale'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='disable' name='vmcb-clean'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='disable' name='flushbyasid'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='disable' name='pause-filter'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='disable' name='pfthreshold'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='disable' name='xsaves'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='disable' name='svm'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='require' name='topoext'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='disable' name='npt'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <feature policy='disable' name='nrip-save'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <clock offset='utc'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <timer name='pit' tickpolicy='delay'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <timer name='hpet' present='no'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <on_poweroff>destroy</on_poweroff>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <on_reboot>restart</on_reboot>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <on_crash>destroy</on_crash>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <disk type='network' device='disk'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <auth username='openstack'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:         <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <source protocol='rbd' name='vms/1d318e56-4a8c-4806-aa87-e837708f2a1f_disk' index='2'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:         <host name='192.168.122.100' port='6789'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       </source>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target dev='vda' bus='virtio'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='virtio-disk0'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <disk type='network' device='cdrom'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <auth username='openstack'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:         <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <source protocol='rbd' name='vms/1d318e56-4a8c-4806-aa87-e837708f2a1f_disk.config' index='1'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:         <host name='192.168.122.100' port='6789'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       </source>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target dev='sda' bus='sata'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <readonly/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='sata0-0-0'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='0' model='pcie-root'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pcie.0'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='1' port='0x10'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.1'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='2' port='0x11'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.2'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='3' port='0x12'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.3'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='4' port='0x13'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.4'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='5' port='0x14'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.5'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='6' port='0x15'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.6'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='7' port='0x16'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.7'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='8' port='0x17'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.8'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='9' port='0x18'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.9'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='10' port='0x19'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.10'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='11' port='0x1a'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.11'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='12' port='0x1b'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.12'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='13' port='0x1c'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.13'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='14' port='0x1d'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.14'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='15' port='0x1e'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.15'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='16' port='0x1f'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.16'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='17' port='0x20'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.17'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='18' port='0x21'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.18'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='19' port='0x22'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.19'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='20' port='0x23'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.20'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='21' port='0x24'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.21'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='22' port='0x25'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.22'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='23' port='0x26'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.23'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='24' port='0x27'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.24'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target chassis='25' port='0x28'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.25'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model name='pcie-pci-bridge'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='pci.26'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='usb'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <controller type='sata' index='0'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='ide'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </controller>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <interface type='ethernet'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <mac address='fa:16:3e:d7:b8:12'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target dev='tapf95d61ca-d5'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model type='virtio'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <mtu size='1442'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='net0'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <serial type='pty'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <source path='/dev/pts/5'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <log file='/var/lib/nova/instances/1d318e56-4a8c-4806-aa87-e837708f2a1f/console.log' append='off'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target type='isa-serial' port='0'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:         <model name='isa-serial'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       </target>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='serial0'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <console type='pty' tty='/dev/pts/5'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <source path='/dev/pts/5'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <log file='/var/lib/nova/instances/1d318e56-4a8c-4806-aa87-e837708f2a1f/console.log' append='off'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <target type='serial' port='0'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='serial0'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </console>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <input type='tablet' bus='usb'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='input0'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='usb' bus='0' port='1'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </input>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <input type='mouse' bus='ps2'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='input1'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </input>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <input type='keyboard' bus='ps2'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='input2'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </input>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <graphics type='vnc' port='5905' autoport='yes' listen='::0'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <listen type='address' address='::0'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </graphics>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <audio id='1' type='none'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <video>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <model type='virtio' heads='1' primary='yes'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='video0'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </video>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <watchdog model='itco' action='reset'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='watchdog0'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </watchdog>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <memballoon model='virtio'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <stats period='10'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='balloon0'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <rng model='virtio'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <backend model='random'>/dev/urandom</backend>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <alias name='rng0'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <label>system_u:system_r:svirt_t:s0:c913,c919</label>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c913,c919</imagelabel>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   </seclabel>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <label>+107:+107</label>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <imagelabel>+107:+107</imagelabel>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   </seclabel>
Nov 25 16:33:28 compute-0 nova_compute[254092]: </domain>
Nov 25 16:33:28 compute-0 nova_compute[254092]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 16:33:28 compute-0 nova_compute[254092]: 2025-11-25 16:33:28.562 254096 INFO nova.virt.libvirt.driver [None req-d59b0b23-1469-411f-b330-b75f0881a1e6 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Successfully detached device tap0ce87d5c-fb from instance 1d318e56-4a8c-4806-aa87-e837708f2a1f from the live domain config.
Nov 25 16:33:28 compute-0 nova_compute[254092]: 2025-11-25 16:33:28.563 254096 DEBUG nova.virt.libvirt.vif [None req-d59b0b23-1469-411f-b330-b75f0881a1e6 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:32:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-822239868',display_name='tempest-tempest.common.compute-instance-822239868',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-822239868',id=36,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDGZe6LoY198VQqVmszTzRQn1eLrChbRZgirb9UFyHnKiwbK+onUZ9OAKdmbbmeuifOBSXxAmDh6Tzi6OA/iv9WrQsgxGvk1nGcnbFbOsVIGStOyhcAbSBeBMtXZYtHi4Q==',key_name='tempest-keypair-386194762',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:32:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-bctv2vy4',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:32:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=1d318e56-4a8c-4806-aa87-e837708f2a1f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "address": "fa:16:3e:bc:a3:51", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ce87d5c-fb", "ovs_interfaceid": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:33:28 compute-0 nova_compute[254092]: 2025-11-25 16:33:28.563 254096 DEBUG nova.network.os_vif_util [None req-d59b0b23-1469-411f-b330-b75f0881a1e6 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "address": "fa:16:3e:bc:a3:51", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ce87d5c-fb", "ovs_interfaceid": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:33:28 compute-0 nova_compute[254092]: 2025-11-25 16:33:28.563 254096 DEBUG nova.network.os_vif_util [None req-d59b0b23-1469-411f-b330-b75f0881a1e6 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:a3:51,bridge_name='br-int',has_traffic_filtering=True,id=0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0ce87d5c-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:33:28 compute-0 nova_compute[254092]: 2025-11-25 16:33:28.564 254096 DEBUG os_vif [None req-d59b0b23-1469-411f-b330-b75f0881a1e6 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:a3:51,bridge_name='br-int',has_traffic_filtering=True,id=0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0ce87d5c-fb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:33:28 compute-0 nova_compute[254092]: 2025-11-25 16:33:28.566 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:28 compute-0 nova_compute[254092]: 2025-11-25 16:33:28.567 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ce87d5c-fb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:33:28 compute-0 nova_compute[254092]: 2025-11-25 16:33:28.569 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:28 compute-0 nova_compute[254092]: 2025-11-25 16:33:28.572 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:33:28 compute-0 nova_compute[254092]: 2025-11-25 16:33:28.574 254096 INFO os_vif [None req-d59b0b23-1469-411f-b330-b75f0881a1e6 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:a3:51,bridge_name='br-int',has_traffic_filtering=True,id=0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0ce87d5c-fb')
Nov 25 16:33:28 compute-0 nova_compute[254092]: 2025-11-25 16:33:28.575 254096 DEBUG nova.virt.libvirt.guest [None req-d59b0b23-1469-411f-b330-b75f0881a1e6 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <nova:name>tempest-tempest.common.compute-instance-822239868</nova:name>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <nova:creationTime>2025-11-25 16:33:28</nova:creationTime>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <nova:flavor name="m1.nano">
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <nova:memory>128</nova:memory>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <nova:disk>1</nova:disk>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <nova:swap>0</nova:swap>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <nova:vcpus>1</nova:vcpus>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   </nova:flavor>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <nova:owner>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <nova:user uuid="c2f73d928f424db49a62e7dff2b1ce14">tempest-AttachInterfacesTestJSON-1854144244-project-member</nova:user>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <nova:project uuid="36fd407edf16424e96041f57fccb9e2b">tempest-AttachInterfacesTestJSON-1854144244</nova:project>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   </nova:owner>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   <nova:ports>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     <nova:port uuid="f95d61ca-d58c-4f07-879f-5e5412976e42">
Nov 25 16:33:28 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 16:33:28 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:33:28 compute-0 nova_compute[254092]:   </nova:ports>
Nov 25 16:33:28 compute-0 nova_compute[254092]: </nova:instance>
Nov 25 16:33:28 compute-0 nova_compute[254092]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 25 16:33:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:28.581 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:a3:51 10.100.0.11'], port_security=['fa:16:3e:bc:a3:51 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-677304244', 'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '1d318e56-4a8c-4806-aa87-e837708f2a1f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-677304244', 'neutron:project_id': '36fd407edf16424e96041f57fccb9e2b', 'neutron:revision_number': '9', 'neutron:security_group_ids': '4d6ed831-26b5-4c46-a07a-309ffadbe01b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a0fe150-4b32-4abf-ad06-055b2d97facb, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:33:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:28.583 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 in datapath 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d unbound from our chassis
Nov 25 16:33:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:28.584 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d
Nov 25 16:33:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:28.604 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[41f1518f-56bd-4574-9a36-8851a58d801c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:28 compute-0 practical_ptolemy[297011]: --> passed data devices: 0 physical, 3 LVM
Nov 25 16:33:28 compute-0 practical_ptolemy[297011]: --> relative data size: 1.0
Nov 25 16:33:28 compute-0 practical_ptolemy[297011]: --> All data devices are unavailable
Nov 25 16:33:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:28.635 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[863c91ea-05ef-4143-b441-64855fe4beda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:28.638 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[fb6b5117-a76f-4e7a-9c28-bc7010da2991]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:28 compute-0 systemd[1]: libpod-c2075eade3dc39151d62903e425a6fef9b9650b7bdbdd07c0ddb84a837fc72ac.scope: Deactivated successfully.
Nov 25 16:33:28 compute-0 systemd[1]: libpod-c2075eade3dc39151d62903e425a6fef9b9650b7bdbdd07c0ddb84a837fc72ac.scope: Consumed 1.011s CPU time.
Nov 25 16:33:28 compute-0 podman[296994]: 2025-11-25 16:33:28.658842922 +0000 UTC m=+1.497437143 container died c2075eade3dc39151d62903e425a6fef9b9650b7bdbdd07c0ddb84a837fc72ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ptolemy, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 16:33:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:28.674 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[97b3a22a-a50d-48ac-aadf-693571db1d44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:28 compute-0 nova_compute[254092]: 2025-11-25 16:33:28.688 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:28.701 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4c930b7f-8c40-4806-acce-c097257947de]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52e7d5b9-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:97:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 480306, 'reachable_time': 15503, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297060, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:28.718 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e856b1a6-b706-4bc3-a209-744d86642a8c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 480317, 'tstamp': 480317}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297061, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 480320, 'tstamp': 480320}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297061, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:28.720 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52e7d5b9-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:33:28 compute-0 nova_compute[254092]: 2025-11-25 16:33:28.722 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:28 compute-0 nova_compute[254092]: 2025-11-25 16:33:28.723 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:28.723 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52e7d5b9-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:33:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:28.724 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:33:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:28.724 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap52e7d5b9-00, col_values=(('external_ids', {'iface-id': 'af5d2618-f326-4aaf-9395-f47548ac108b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:33:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:28.725 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:33:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5f1aeb24821b2693aa6f88bc0a6267fe39b4454fce7a30d130682b82faf72c2-merged.mount: Deactivated successfully.
Nov 25 16:33:28 compute-0 podman[296994]: 2025-11-25 16:33:28.870911167 +0000 UTC m=+1.709505428 container remove c2075eade3dc39151d62903e425a6fef9b9650b7bdbdd07c0ddb84a837fc72ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ptolemy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 16:33:28 compute-0 systemd[1]: libpod-conmon-c2075eade3dc39151d62903e425a6fef9b9650b7bdbdd07c0ddb84a837fc72ac.scope: Deactivated successfully.
Nov 25 16:33:28 compute-0 sudo[296810]: pam_unix(sudo:session): session closed for user root
Nov 25 16:33:28 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:33:29 compute-0 sudo[297063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:33:29 compute-0 sudo[297063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:33:29 compute-0 sudo[297063]: pam_unix(sudo:session): session closed for user root
Nov 25 16:33:29 compute-0 sudo[297088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:33:29 compute-0 sudo[297088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:33:29 compute-0 sudo[297088]: pam_unix(sudo:session): session closed for user root
Nov 25 16:33:29 compute-0 sudo[297113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:33:29 compute-0 sudo[297113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:33:29 compute-0 sudo[297113]: pam_unix(sudo:session): session closed for user root
Nov 25 16:33:29 compute-0 sudo[297138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 16:33:29 compute-0 sudo[297138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:33:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1382: 321 pgs: 321 active+clean; 200 MiB data, 470 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:33:29 compute-0 podman[297202]: 2025-11-25 16:33:29.499961715 +0000 UTC m=+0.039469330 container create 7ceb767d03349db6e6bc10a2f26da7505c95628050ce41e3a5a601db3b30e47c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_dubinsky, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:33:29 compute-0 systemd[1]: Started libpod-conmon-7ceb767d03349db6e6bc10a2f26da7505c95628050ce41e3a5a601db3b30e47c.scope.
Nov 25 16:33:29 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:33:29 compute-0 podman[297202]: 2025-11-25 16:33:29.578793721 +0000 UTC m=+0.118301356 container init 7ceb767d03349db6e6bc10a2f26da7505c95628050ce41e3a5a601db3b30e47c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Nov 25 16:33:29 compute-0 podman[297202]: 2025-11-25 16:33:29.483023346 +0000 UTC m=+0.022530981 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:33:29 compute-0 podman[297202]: 2025-11-25 16:33:29.584934338 +0000 UTC m=+0.124441953 container start 7ceb767d03349db6e6bc10a2f26da7505c95628050ce41e3a5a601db3b30e47c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_dubinsky, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:33:29 compute-0 podman[297202]: 2025-11-25 16:33:29.588486694 +0000 UTC m=+0.127994309 container attach 7ceb767d03349db6e6bc10a2f26da7505c95628050ce41e3a5a601db3b30e47c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_dubinsky, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 16:33:29 compute-0 charming_dubinsky[297218]: 167 167
Nov 25 16:33:29 compute-0 systemd[1]: libpod-7ceb767d03349db6e6bc10a2f26da7505c95628050ce41e3a5a601db3b30e47c.scope: Deactivated successfully.
Nov 25 16:33:29 compute-0 podman[297202]: 2025-11-25 16:33:29.590393976 +0000 UTC m=+0.129901591 container died 7ceb767d03349db6e6bc10a2f26da7505c95628050ce41e3a5a601db3b30e47c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_dubinsky, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Nov 25 16:33:29 compute-0 nova_compute[254092]: 2025-11-25 16:33:29.595 254096 DEBUG nova.compute.manager [req-e31d205a-a2a0-4c0a-a947-15be85a0da05 req-d264a134-6ad8-4222-9f41-d2612a040e86 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Received event network-vif-plugged-0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:33:29 compute-0 nova_compute[254092]: 2025-11-25 16:33:29.596 254096 DEBUG oslo_concurrency.lockutils [req-e31d205a-a2a0-4c0a-a947-15be85a0da05 req-d264a134-6ad8-4222-9f41-d2612a040e86 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:33:29 compute-0 nova_compute[254092]: 2025-11-25 16:33:29.596 254096 DEBUG oslo_concurrency.lockutils [req-e31d205a-a2a0-4c0a-a947-15be85a0da05 req-d264a134-6ad8-4222-9f41-d2612a040e86 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:33:29 compute-0 nova_compute[254092]: 2025-11-25 16:33:29.597 254096 DEBUG oslo_concurrency.lockutils [req-e31d205a-a2a0-4c0a-a947-15be85a0da05 req-d264a134-6ad8-4222-9f41-d2612a040e86 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:33:29 compute-0 nova_compute[254092]: 2025-11-25 16:33:29.597 254096 DEBUG nova.compute.manager [req-e31d205a-a2a0-4c0a-a947-15be85a0da05 req-d264a134-6ad8-4222-9f41-d2612a040e86 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] No waiting events found dispatching network-vif-plugged-0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:33:29 compute-0 nova_compute[254092]: 2025-11-25 16:33:29.597 254096 WARNING nova.compute.manager [req-e31d205a-a2a0-4c0a-a947-15be85a0da05 req-d264a134-6ad8-4222-9f41-d2612a040e86 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Received unexpected event network-vif-plugged-0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 for instance with vm_state active and task_state None.
Nov 25 16:33:29 compute-0 nova_compute[254092]: 2025-11-25 16:33:29.597 254096 DEBUG nova.compute.manager [req-e31d205a-a2a0-4c0a-a947-15be85a0da05 req-d264a134-6ad8-4222-9f41-d2612a040e86 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Received event network-vif-unplugged-0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:33:29 compute-0 nova_compute[254092]: 2025-11-25 16:33:29.597 254096 DEBUG oslo_concurrency.lockutils [req-e31d205a-a2a0-4c0a-a947-15be85a0da05 req-d264a134-6ad8-4222-9f41-d2612a040e86 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:33:29 compute-0 nova_compute[254092]: 2025-11-25 16:33:29.598 254096 DEBUG oslo_concurrency.lockutils [req-e31d205a-a2a0-4c0a-a947-15be85a0da05 req-d264a134-6ad8-4222-9f41-d2612a040e86 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:33:29 compute-0 nova_compute[254092]: 2025-11-25 16:33:29.598 254096 DEBUG oslo_concurrency.lockutils [req-e31d205a-a2a0-4c0a-a947-15be85a0da05 req-d264a134-6ad8-4222-9f41-d2612a040e86 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:33:29 compute-0 nova_compute[254092]: 2025-11-25 16:33:29.598 254096 DEBUG nova.compute.manager [req-e31d205a-a2a0-4c0a-a947-15be85a0da05 req-d264a134-6ad8-4222-9f41-d2612a040e86 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] No waiting events found dispatching network-vif-unplugged-0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:33:29 compute-0 nova_compute[254092]: 2025-11-25 16:33:29.598 254096 WARNING nova.compute.manager [req-e31d205a-a2a0-4c0a-a947-15be85a0da05 req-d264a134-6ad8-4222-9f41-d2612a040e86 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Received unexpected event network-vif-unplugged-0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 for instance with vm_state active and task_state None.
Nov 25 16:33:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-b95f3370d42314b5da7b5494165b22a20c3a57449c3adf5f880a30651eaa1231-merged.mount: Deactivated successfully.
Nov 25 16:33:29 compute-0 podman[297202]: 2025-11-25 16:33:29.63005042 +0000 UTC m=+0.169558035 container remove 7ceb767d03349db6e6bc10a2f26da7505c95628050ce41e3a5a601db3b30e47c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:33:29 compute-0 systemd[1]: libpod-conmon-7ceb767d03349db6e6bc10a2f26da7505c95628050ce41e3a5a601db3b30e47c.scope: Deactivated successfully.
Nov 25 16:33:29 compute-0 podman[297241]: 2025-11-25 16:33:29.787784492 +0000 UTC m=+0.035756490 container create ed125462cf08199f10749939065c1428b8406f9a635bc9a729d3c3a8f60d49a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_goldwasser, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 16:33:29 compute-0 systemd[1]: Started libpod-conmon-ed125462cf08199f10749939065c1428b8406f9a635bc9a729d3c3a8f60d49a9.scope.
Nov 25 16:33:29 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:33:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8e381a209b3a22a5a289ae2b5175e72f76b33e6a9cf83b869a14a9cee99a3b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:33:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8e381a209b3a22a5a289ae2b5175e72f76b33e6a9cf83b869a14a9cee99a3b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:33:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8e381a209b3a22a5a289ae2b5175e72f76b33e6a9cf83b869a14a9cee99a3b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:33:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8e381a209b3a22a5a289ae2b5175e72f76b33e6a9cf83b869a14a9cee99a3b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:33:29 compute-0 podman[297241]: 2025-11-25 16:33:29.858578249 +0000 UTC m=+0.106550267 container init ed125462cf08199f10749939065c1428b8406f9a635bc9a729d3c3a8f60d49a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_goldwasser, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:33:29 compute-0 podman[297241]: 2025-11-25 16:33:29.866043922 +0000 UTC m=+0.114015920 container start ed125462cf08199f10749939065c1428b8406f9a635bc9a729d3c3a8f60d49a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_goldwasser, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:33:29 compute-0 podman[297241]: 2025-11-25 16:33:29.772726754 +0000 UTC m=+0.020698772 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:33:29 compute-0 podman[297241]: 2025-11-25 16:33:29.868765825 +0000 UTC m=+0.116737823 container attach ed125462cf08199f10749939065c1428b8406f9a635bc9a729d3c3a8f60d49a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_goldwasser, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:33:30 compute-0 ceph-mon[74985]: pgmap v1382: 321 pgs: 321 active+clean; 200 MiB data, 470 MiB used, 60 GiB / 60 GiB avail
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]: {
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:     "0": [
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:         {
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:             "devices": [
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:                 "/dev/loop3"
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:             ],
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:             "lv_name": "ceph_lv0",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:             "lv_size": "21470642176",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:             "name": "ceph_lv0",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:             "tags": {
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:                 "ceph.cluster_name": "ceph",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:                 "ceph.crush_device_class": "",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:                 "ceph.encrypted": "0",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:                 "ceph.osd_id": "0",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:                 "ceph.type": "block",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:                 "ceph.vdo": "0"
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:             },
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:             "type": "block",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:             "vg_name": "ceph_vg0"
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:         }
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:     ],
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:     "1": [
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:         {
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:             "devices": [
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:                 "/dev/loop4"
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:             ],
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:             "lv_name": "ceph_lv1",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:             "lv_size": "21470642176",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:             "name": "ceph_lv1",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:             "tags": {
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:                 "ceph.cluster_name": "ceph",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:                 "ceph.crush_device_class": "",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:                 "ceph.encrypted": "0",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:                 "ceph.osd_id": "1",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:                 "ceph.type": "block",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:                 "ceph.vdo": "0"
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:             },
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:             "type": "block",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:             "vg_name": "ceph_vg1"
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:         }
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:     ],
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:     "2": [
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:         {
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:             "devices": [
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:                 "/dev/loop5"
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:             ],
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:             "lv_name": "ceph_lv2",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:             "lv_size": "21470642176",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:             "name": "ceph_lv2",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:             "tags": {
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:                 "ceph.cluster_name": "ceph",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:                 "ceph.crush_device_class": "",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:                 "ceph.encrypted": "0",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:                 "ceph.osd_id": "2",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:                 "ceph.type": "block",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:                 "ceph.vdo": "0"
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:             },
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:             "type": "block",
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:             "vg_name": "ceph_vg2"
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:         }
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]:     ]
Nov 25 16:33:30 compute-0 stoic_goldwasser[297258]: }
Nov 25 16:33:30 compute-0 systemd[1]: libpod-ed125462cf08199f10749939065c1428b8406f9a635bc9a729d3c3a8f60d49a9.scope: Deactivated successfully.
Nov 25 16:33:30 compute-0 podman[297241]: 2025-11-25 16:33:30.679392753 +0000 UTC m=+0.927364771 container died ed125462cf08199f10749939065c1428b8406f9a635bc9a729d3c3a8f60d49a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_goldwasser, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 16:33:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8e381a209b3a22a5a289ae2b5175e72f76b33e6a9cf83b869a14a9cee99a3b8-merged.mount: Deactivated successfully.
Nov 25 16:33:30 compute-0 podman[297241]: 2025-11-25 16:33:30.747188349 +0000 UTC m=+0.995160347 container remove ed125462cf08199f10749939065c1428b8406f9a635bc9a729d3c3a8f60d49a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_goldwasser, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:33:30 compute-0 systemd[1]: libpod-conmon-ed125462cf08199f10749939065c1428b8406f9a635bc9a729d3c3a8f60d49a9.scope: Deactivated successfully.
Nov 25 16:33:30 compute-0 sudo[297138]: pam_unix(sudo:session): session closed for user root
Nov 25 16:33:30 compute-0 sudo[297281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:33:30 compute-0 sudo[297281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:33:30 compute-0 sudo[297281]: pam_unix(sudo:session): session closed for user root
Nov 25 16:33:30 compute-0 nova_compute[254092]: 2025-11-25 16:33:30.866 254096 DEBUG oslo_concurrency.lockutils [None req-d59b0b23-1469-411f-b330-b75f0881a1e6 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "refresh_cache-1d318e56-4a8c-4806-aa87-e837708f2a1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:33:30 compute-0 nova_compute[254092]: 2025-11-25 16:33:30.867 254096 DEBUG oslo_concurrency.lockutils [None req-d59b0b23-1469-411f-b330-b75f0881a1e6 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquired lock "refresh_cache-1d318e56-4a8c-4806-aa87-e837708f2a1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:33:30 compute-0 nova_compute[254092]: 2025-11-25 16:33:30.867 254096 DEBUG nova.network.neutron [None req-d59b0b23-1469-411f-b330-b75f0881a1e6 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:33:30 compute-0 sudo[297306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:33:30 compute-0 sudo[297306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:33:30 compute-0 sudo[297306]: pam_unix(sudo:session): session closed for user root
Nov 25 16:33:30 compute-0 sudo[297331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:33:30 compute-0 sudo[297331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:33:30 compute-0 sudo[297331]: pam_unix(sudo:session): session closed for user root
Nov 25 16:33:31 compute-0 sudo[297356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 16:33:31 compute-0 sudo[297356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:33:31 compute-0 ovn_controller[153477]: 2025-11-25T16:33:31Z|00252|binding|INFO|Releasing lport af5d2618-f326-4aaf-9395-f47548ac108b from this chassis (sb_readonly=0)
Nov 25 16:33:31 compute-0 nova_compute[254092]: 2025-11-25 16:33:31.144 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1383: 321 pgs: 321 active+clean; 200 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 11 KiB/s wr, 1 op/s
Nov 25 16:33:31 compute-0 podman[297421]: 2025-11-25 16:33:31.344074548 +0000 UTC m=+0.038869464 container create cdc816a9a032d6e75d737ac9d95fe76d23e36369fa448a0581370b0ec68651c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 16:33:31 compute-0 systemd[1]: Started libpod-conmon-cdc816a9a032d6e75d737ac9d95fe76d23e36369fa448a0581370b0ec68651c9.scope.
Nov 25 16:33:31 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:33:31 compute-0 podman[297421]: 2025-11-25 16:33:31.406445637 +0000 UTC m=+0.101240553 container init cdc816a9a032d6e75d737ac9d95fe76d23e36369fa448a0581370b0ec68651c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_moser, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:33:31 compute-0 podman[297421]: 2025-11-25 16:33:31.412720007 +0000 UTC m=+0.107514923 container start cdc816a9a032d6e75d737ac9d95fe76d23e36369fa448a0581370b0ec68651c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_moser, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:33:31 compute-0 podman[297421]: 2025-11-25 16:33:31.415369329 +0000 UTC m=+0.110164275 container attach cdc816a9a032d6e75d737ac9d95fe76d23e36369fa448a0581370b0ec68651c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_moser, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 16:33:31 compute-0 mystifying_moser[297437]: 167 167
Nov 25 16:33:31 compute-0 systemd[1]: libpod-cdc816a9a032d6e75d737ac9d95fe76d23e36369fa448a0581370b0ec68651c9.scope: Deactivated successfully.
Nov 25 16:33:31 compute-0 podman[297421]: 2025-11-25 16:33:31.417940798 +0000 UTC m=+0.112735744 container died cdc816a9a032d6e75d737ac9d95fe76d23e36369fa448a0581370b0ec68651c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_moser, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:33:31 compute-0 podman[297421]: 2025-11-25 16:33:31.326713097 +0000 UTC m=+0.021508043 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:33:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-097fb7775c0cc41b35ea2cdf73e44b665c0fcf326161ef5d84ed2f40ec75a88b-merged.mount: Deactivated successfully.
Nov 25 16:33:31 compute-0 podman[297421]: 2025-11-25 16:33:31.453037749 +0000 UTC m=+0.147832665 container remove cdc816a9a032d6e75d737ac9d95fe76d23e36369fa448a0581370b0ec68651c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:33:31 compute-0 systemd[1]: libpod-conmon-cdc816a9a032d6e75d737ac9d95fe76d23e36369fa448a0581370b0ec68651c9.scope: Deactivated successfully.
Nov 25 16:33:31 compute-0 nova_compute[254092]: 2025-11-25 16:33:31.555 254096 DEBUG oslo_concurrency.lockutils [None req-6cfc426a-3ced-4e83-8cd0-ae241f76323d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "1d318e56-4a8c-4806-aa87-e837708f2a1f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:33:31 compute-0 nova_compute[254092]: 2025-11-25 16:33:31.555 254096 DEBUG oslo_concurrency.lockutils [None req-6cfc426a-3ced-4e83-8cd0-ae241f76323d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "1d318e56-4a8c-4806-aa87-e837708f2a1f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:33:31 compute-0 nova_compute[254092]: 2025-11-25 16:33:31.556 254096 DEBUG oslo_concurrency.lockutils [None req-6cfc426a-3ced-4e83-8cd0-ae241f76323d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:33:31 compute-0 nova_compute[254092]: 2025-11-25 16:33:31.556 254096 DEBUG oslo_concurrency.lockutils [None req-6cfc426a-3ced-4e83-8cd0-ae241f76323d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:33:31 compute-0 nova_compute[254092]: 2025-11-25 16:33:31.556 254096 DEBUG oslo_concurrency.lockutils [None req-6cfc426a-3ced-4e83-8cd0-ae241f76323d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:33:31 compute-0 nova_compute[254092]: 2025-11-25 16:33:31.557 254096 INFO nova.compute.manager [None req-6cfc426a-3ced-4e83-8cd0-ae241f76323d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Terminating instance
Nov 25 16:33:31 compute-0 nova_compute[254092]: 2025-11-25 16:33:31.558 254096 DEBUG nova.compute.manager [None req-6cfc426a-3ced-4e83-8cd0-ae241f76323d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:33:31 compute-0 kernel: tapf95d61ca-d5 (unregistering): left promiscuous mode
Nov 25 16:33:31 compute-0 NetworkManager[48891]: <info>  [1764088411.5989] device (tapf95d61ca-d5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:33:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:31.601 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:33:31 compute-0 nova_compute[254092]: 2025-11-25 16:33:31.606 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:31 compute-0 ovn_controller[153477]: 2025-11-25T16:33:31Z|00253|binding|INFO|Releasing lport f95d61ca-d58c-4f07-879f-5e5412976e42 from this chassis (sb_readonly=0)
Nov 25 16:33:31 compute-0 ovn_controller[153477]: 2025-11-25T16:33:31Z|00254|binding|INFO|Setting lport f95d61ca-d58c-4f07-879f-5e5412976e42 down in Southbound
Nov 25 16:33:31 compute-0 ovn_controller[153477]: 2025-11-25T16:33:31Z|00255|binding|INFO|Removing iface tapf95d61ca-d5 ovn-installed in OVS
Nov 25 16:33:31 compute-0 nova_compute[254092]: 2025-11-25 16:33:31.608 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:31.614 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:b8:12 10.100.0.12'], port_security=['fa:16:3e:d7:b8:12 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '1d318e56-4a8c-4806-aa87-e837708f2a1f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36fd407edf16424e96041f57fccb9e2b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '59e04c7b-1fad-4d48-ad96-e3b3cf180d61', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.230'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a0fe150-4b32-4abf-ad06-055b2d97facb, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=f95d61ca-d58c-4f07-879f-5e5412976e42) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:33:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:31.615 163338 INFO neutron.agent.ovn.metadata.agent [-] Port f95d61ca-d58c-4f07-879f-5e5412976e42 in datapath 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d unbound from our chassis
Nov 25 16:33:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:31.615 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d
Nov 25 16:33:31 compute-0 podman[297460]: 2025-11-25 16:33:31.624217646 +0000 UTC m=+0.040977891 container create e390fc49dfe928448c79370ccc30c2e678778ef3f34d660c137ee059a24eccdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 16:33:31 compute-0 nova_compute[254092]: 2025-11-25 16:33:31.626 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:31.632 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[048e9733-4e78-4bbd-aa1d-81a2712e858f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:31 compute-0 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000024.scope: Deactivated successfully.
Nov 25 16:33:31 compute-0 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000024.scope: Consumed 14.672s CPU time.
Nov 25 16:33:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:31.663 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[cf9ae9f7-2151-431a-b4d8-974a80c41c84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:31.666 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[51371f3b-ebf0-4cdd-a19a-058985122fb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:31 compute-0 systemd[1]: Started libpod-conmon-e390fc49dfe928448c79370ccc30c2e678778ef3f34d660c137ee059a24eccdc.scope.
Nov 25 16:33:31 compute-0 systemd-machined[216343]: Machine qemu-41-instance-00000024 terminated.
Nov 25 16:33:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:31.694 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[14e89425-5496-42fb-9f55-e4dd5bdd3191]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:31 compute-0 podman[297460]: 2025-11-25 16:33:31.60554289 +0000 UTC m=+0.022303155 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:33:31 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:33:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:31.710 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[35b58894-e68a-4aec-9559-e29f20f46c6b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52e7d5b9-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:97:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 480306, 'reachable_time': 15503, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297490, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/739d804d4c83d44c3d0632079177a9b2db3461fa4f8ab29d933e72aba4bbb22f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:33:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/739d804d4c83d44c3d0632079177a9b2db3461fa4f8ab29d933e72aba4bbb22f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:33:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/739d804d4c83d44c3d0632079177a9b2db3461fa4f8ab29d933e72aba4bbb22f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:33:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/739d804d4c83d44c3d0632079177a9b2db3461fa4f8ab29d933e72aba4bbb22f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:33:31 compute-0 podman[297460]: 2025-11-25 16:33:31.722457828 +0000 UTC m=+0.139218093 container init e390fc49dfe928448c79370ccc30c2e678778ef3f34d660c137ee059a24eccdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_williamson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 16:33:31 compute-0 podman[297460]: 2025-11-25 16:33:31.729652722 +0000 UTC m=+0.146412967 container start e390fc49dfe928448c79370ccc30c2e678778ef3f34d660c137ee059a24eccdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 16:33:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:31.730 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[967e2945-4618-44a2-9921-865ce8baaf95]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 480317, 'tstamp': 480317}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297491, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 480320, 'tstamp': 480320}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297491, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:31.731 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52e7d5b9-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:33:31 compute-0 nova_compute[254092]: 2025-11-25 16:33:31.733 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:31 compute-0 podman[297460]: 2025-11-25 16:33:31.733106365 +0000 UTC m=+0.149866610 container attach e390fc49dfe928448c79370ccc30c2e678778ef3f34d660c137ee059a24eccdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:33:31 compute-0 nova_compute[254092]: 2025-11-25 16:33:31.737 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:31.738 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52e7d5b9-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:33:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:31.738 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:33:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:31.738 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap52e7d5b9-00, col_values=(('external_ids', {'iface-id': 'af5d2618-f326-4aaf-9395-f47548ac108b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:33:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:31.739 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:33:31 compute-0 kernel: tapf95d61ca-d5: entered promiscuous mode
Nov 25 16:33:31 compute-0 NetworkManager[48891]: <info>  [1764088411.7782] manager: (tapf95d61ca-d5): new Tun device (/org/freedesktop/NetworkManager/Devices/118)
Nov 25 16:33:31 compute-0 ovn_controller[153477]: 2025-11-25T16:33:31Z|00256|binding|INFO|Claiming lport f95d61ca-d58c-4f07-879f-5e5412976e42 for this chassis.
Nov 25 16:33:31 compute-0 ovn_controller[153477]: 2025-11-25T16:33:31Z|00257|binding|INFO|f95d61ca-d58c-4f07-879f-5e5412976e42: Claiming fa:16:3e:d7:b8:12 10.100.0.12
Nov 25 16:33:31 compute-0 nova_compute[254092]: 2025-11-25 16:33:31.778 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:31 compute-0 systemd-udevd[297476]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:33:31 compute-0 kernel: tapf95d61ca-d5 (unregistering): left promiscuous mode
Nov 25 16:33:31 compute-0 ovn_controller[153477]: 2025-11-25T16:33:31Z|00258|binding|INFO|Setting lport f95d61ca-d58c-4f07-879f-5e5412976e42 ovn-installed in OVS
Nov 25 16:33:31 compute-0 ovn_controller[153477]: 2025-11-25T16:33:31Z|00259|binding|INFO|Setting lport f95d61ca-d58c-4f07-879f-5e5412976e42 up in Southbound
Nov 25 16:33:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:31.797 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:b8:12 10.100.0.12'], port_security=['fa:16:3e:d7:b8:12 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '1d318e56-4a8c-4806-aa87-e837708f2a1f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36fd407edf16424e96041f57fccb9e2b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '59e04c7b-1fad-4d48-ad96-e3b3cf180d61', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.230'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a0fe150-4b32-4abf-ad06-055b2d97facb, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=f95d61ca-d58c-4f07-879f-5e5412976e42) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:33:31 compute-0 nova_compute[254092]: 2025-11-25 16:33:31.798 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:31.799 163338 INFO neutron.agent.ovn.metadata.agent [-] Port f95d61ca-d58c-4f07-879f-5e5412976e42 in datapath 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d bound to our chassis
Nov 25 16:33:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:31.801 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d
Nov 25 16:33:31 compute-0 ovn_controller[153477]: 2025-11-25T16:33:31Z|00260|binding|INFO|Releasing lport f95d61ca-d58c-4f07-879f-5e5412976e42 from this chassis (sb_readonly=1)
Nov 25 16:33:31 compute-0 nova_compute[254092]: 2025-11-25 16:33:31.801 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:31 compute-0 ovn_controller[153477]: 2025-11-25T16:33:31Z|00261|binding|INFO|Removing iface tapf95d61ca-d5 ovn-installed in OVS
Nov 25 16:33:31 compute-0 ovn_controller[153477]: 2025-11-25T16:33:31Z|00262|if_status|INFO|Dropped 2 log messages in last 168 seconds (most recently, 168 seconds ago) due to excessive rate
Nov 25 16:33:31 compute-0 ovn_controller[153477]: 2025-11-25T16:33:31Z|00263|if_status|INFO|Not setting lport f95d61ca-d58c-4f07-879f-5e5412976e42 down as sb is readonly
Nov 25 16:33:31 compute-0 ovn_controller[153477]: 2025-11-25T16:33:31Z|00264|binding|INFO|Releasing lport f95d61ca-d58c-4f07-879f-5e5412976e42 from this chassis (sb_readonly=0)
Nov 25 16:33:31 compute-0 ovn_controller[153477]: 2025-11-25T16:33:31Z|00265|binding|INFO|Setting lport f95d61ca-d58c-4f07-879f-5e5412976e42 down in Southbound
Nov 25 16:33:31 compute-0 nova_compute[254092]: 2025-11-25 16:33:31.805 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:31.808 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:b8:12 10.100.0.12'], port_security=['fa:16:3e:d7:b8:12 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '1d318e56-4a8c-4806-aa87-e837708f2a1f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36fd407edf16424e96041f57fccb9e2b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '59e04c7b-1fad-4d48-ad96-e3b3cf180d61', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.230'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a0fe150-4b32-4abf-ad06-055b2d97facb, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=f95d61ca-d58c-4f07-879f-5e5412976e42) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:33:31 compute-0 nova_compute[254092]: 2025-11-25 16:33:31.810 254096 INFO nova.virt.libvirt.driver [-] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Instance destroyed successfully.
Nov 25 16:33:31 compute-0 nova_compute[254092]: 2025-11-25 16:33:31.810 254096 DEBUG nova.objects.instance [None req-6cfc426a-3ced-4e83-8cd0-ae241f76323d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'resources' on Instance uuid 1d318e56-4a8c-4806-aa87-e837708f2a1f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:33:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:31.817 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8fc58ef7-b2e0-4b85-bab6-e9cd976da61f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:31 compute-0 nova_compute[254092]: 2025-11-25 16:33:31.819 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:31 compute-0 nova_compute[254092]: 2025-11-25 16:33:31.842 254096 DEBUG nova.virt.libvirt.vif [None req-6cfc426a-3ced-4e83-8cd0-ae241f76323d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:32:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-822239868',display_name='tempest-tempest.common.compute-instance-822239868',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-822239868',id=36,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDGZe6LoY198VQqVmszTzRQn1eLrChbRZgirb9UFyHnKiwbK+onUZ9OAKdmbbmeuifOBSXxAmDh6Tzi6OA/iv9WrQsgxGvk1nGcnbFbOsVIGStOyhcAbSBeBMtXZYtHi4Q==',key_name='tempest-keypair-386194762',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:32:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-bctv2vy4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:32:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=1d318e56-4a8c-4806-aa87-e837708f2a1f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f95d61ca-d58c-4f07-879f-5e5412976e42", "address": "fa:16:3e:d7:b8:12", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf95d61ca-d5", "ovs_interfaceid": "f95d61ca-d58c-4f07-879f-5e5412976e42", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:33:31 compute-0 nova_compute[254092]: 2025-11-25 16:33:31.842 254096 DEBUG nova.network.os_vif_util [None req-6cfc426a-3ced-4e83-8cd0-ae241f76323d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "f95d61ca-d58c-4f07-879f-5e5412976e42", "address": "fa:16:3e:d7:b8:12", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf95d61ca-d5", "ovs_interfaceid": "f95d61ca-d58c-4f07-879f-5e5412976e42", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:33:31 compute-0 nova_compute[254092]: 2025-11-25 16:33:31.843 254096 DEBUG nova.network.os_vif_util [None req-6cfc426a-3ced-4e83-8cd0-ae241f76323d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d7:b8:12,bridge_name='br-int',has_traffic_filtering=True,id=f95d61ca-d58c-4f07-879f-5e5412976e42,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf95d61ca-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:33:31 compute-0 nova_compute[254092]: 2025-11-25 16:33:31.843 254096 DEBUG os_vif [None req-6cfc426a-3ced-4e83-8cd0-ae241f76323d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d7:b8:12,bridge_name='br-int',has_traffic_filtering=True,id=f95d61ca-d58c-4f07-879f-5e5412976e42,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf95d61ca-d5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:33:31 compute-0 nova_compute[254092]: 2025-11-25 16:33:31.845 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:31 compute-0 nova_compute[254092]: 2025-11-25 16:33:31.845 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf95d61ca-d5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:33:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:31.845 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c3f1ef64-cacb-4c4a-b7b3-7584898734a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:31 compute-0 nova_compute[254092]: 2025-11-25 16:33:31.846 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:31 compute-0 nova_compute[254092]: 2025-11-25 16:33:31.847 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:31 compute-0 nova_compute[254092]: 2025-11-25 16:33:31.849 254096 INFO os_vif [None req-6cfc426a-3ced-4e83-8cd0-ae241f76323d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d7:b8:12,bridge_name='br-int',has_traffic_filtering=True,id=f95d61ca-d58c-4f07-879f-5e5412976e42,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf95d61ca-d5')
Nov 25 16:33:31 compute-0 nova_compute[254092]: 2025-11-25 16:33:31.850 254096 DEBUG nova.virt.libvirt.vif [None req-6cfc426a-3ced-4e83-8cd0-ae241f76323d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:32:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-822239868',display_name='tempest-tempest.common.compute-instance-822239868',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-822239868',id=36,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDGZe6LoY198VQqVmszTzRQn1eLrChbRZgirb9UFyHnKiwbK+onUZ9OAKdmbbmeuifOBSXxAmDh6Tzi6OA/iv9WrQsgxGvk1nGcnbFbOsVIGStOyhcAbSBeBMtXZYtHi4Q==',key_name='tempest-keypair-386194762',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:32:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-bctv2vy4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:32:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=1d318e56-4a8c-4806-aa87-e837708f2a1f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "address": "fa:16:3e:bc:a3:51", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ce87d5c-fb", "ovs_interfaceid": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:33:31 compute-0 nova_compute[254092]: 2025-11-25 16:33:31.850 254096 DEBUG nova.network.os_vif_util [None req-6cfc426a-3ced-4e83-8cd0-ae241f76323d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "address": "fa:16:3e:bc:a3:51", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ce87d5c-fb", "ovs_interfaceid": "0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:33:31 compute-0 nova_compute[254092]: 2025-11-25 16:33:31.850 254096 DEBUG nova.network.os_vif_util [None req-6cfc426a-3ced-4e83-8cd0-ae241f76323d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:a3:51,bridge_name='br-int',has_traffic_filtering=True,id=0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0ce87d5c-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:33:31 compute-0 nova_compute[254092]: 2025-11-25 16:33:31.851 254096 DEBUG os_vif [None req-6cfc426a-3ced-4e83-8cd0-ae241f76323d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:a3:51,bridge_name='br-int',has_traffic_filtering=True,id=0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0ce87d5c-fb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:33:31 compute-0 nova_compute[254092]: 2025-11-25 16:33:31.852 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:31 compute-0 nova_compute[254092]: 2025-11-25 16:33:31.852 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ce87d5c-fb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:33:31 compute-0 nova_compute[254092]: 2025-11-25 16:33:31.852 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:33:31 compute-0 nova_compute[254092]: 2025-11-25 16:33:31.854 254096 INFO os_vif [None req-6cfc426a-3ced-4e83-8cd0-ae241f76323d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:a3:51,bridge_name='br-int',has_traffic_filtering=True,id=0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0ce87d5c-fb')
Nov 25 16:33:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:31.849 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ccd5b676-79ff-49ba-909d-fc9d13e6719e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:31.889 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ed6d2729-346e-4792-a25a-6c5ee0ec8cba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:31.906 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[42f0e19f-34ba-40b7-aa72-85f96434afc0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52e7d5b9-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:97:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 17, 'rx_bytes': 700, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 17, 'rx_bytes': 700, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 480306, 'reachable_time': 15503, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297522, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:31.925 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[54933861-6f82-4e80-b29e-1404f9b607de]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 480317, 'tstamp': 480317}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297523, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 480320, 'tstamp': 480320}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297523, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:31.926 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52e7d5b9-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:33:31 compute-0 nova_compute[254092]: 2025-11-25 16:33:31.927 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:31.929 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52e7d5b9-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:33:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:31.929 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:33:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:31.929 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap52e7d5b9-00, col_values=(('external_ids', {'iface-id': 'af5d2618-f326-4aaf-9395-f47548ac108b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:33:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:31.929 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:33:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:31.930 163338 INFO neutron.agent.ovn.metadata.agent [-] Port f95d61ca-d58c-4f07-879f-5e5412976e42 in datapath 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d unbound from our chassis
Nov 25 16:33:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:31.931 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d
Nov 25 16:33:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:31.947 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[113a9355-7936-4d80-9c75-3379c69dbf13]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:31.978 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e9bdb383-300c-495b-82c0-75074fb2b0aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:31.981 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a3f12bd5-256c-4ddf-932a-1d17e154295e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:32.013 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1371efb9-348f-435d-a156-f597c53b3867]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:32.034 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[54e7638f-180b-4797-88a2-43fe8852bdc7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52e7d5b9-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:97:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 19, 'rx_bytes': 700, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 19, 'rx_bytes': 700, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 480306, 'reachable_time': 15503, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297529, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:32.050 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[30b0880b-af4a-4e3a-b50b-64ce9ddac251]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 480317, 'tstamp': 480317}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297531, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap52e7d5b9-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 480320, 'tstamp': 480320}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297531, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:32.051 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52e7d5b9-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:33:32 compute-0 nova_compute[254092]: 2025-11-25 16:33:32.053 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:32.054 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52e7d5b9-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:33:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:32.054 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:33:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:32.054 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap52e7d5b9-00, col_values=(('external_ids', {'iface-id': 'af5d2618-f326-4aaf-9395-f47548ac108b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:33:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:32.055 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:33:32 compute-0 nova_compute[254092]: 2025-11-25 16:33:32.163 254096 DEBUG nova.compute.manager [req-019da624-2fcf-4160-82aa-2d821bd175ce req-564bd4a8-57d2-4a75-8fba-12619f2ddbbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Received event network-vif-plugged-0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:33:32 compute-0 nova_compute[254092]: 2025-11-25 16:33:32.164 254096 DEBUG oslo_concurrency.lockutils [req-019da624-2fcf-4160-82aa-2d821bd175ce req-564bd4a8-57d2-4a75-8fba-12619f2ddbbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:33:32 compute-0 nova_compute[254092]: 2025-11-25 16:33:32.164 254096 DEBUG oslo_concurrency.lockutils [req-019da624-2fcf-4160-82aa-2d821bd175ce req-564bd4a8-57d2-4a75-8fba-12619f2ddbbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:33:32 compute-0 nova_compute[254092]: 2025-11-25 16:33:32.164 254096 DEBUG oslo_concurrency.lockutils [req-019da624-2fcf-4160-82aa-2d821bd175ce req-564bd4a8-57d2-4a75-8fba-12619f2ddbbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:33:32 compute-0 nova_compute[254092]: 2025-11-25 16:33:32.164 254096 DEBUG nova.compute.manager [req-019da624-2fcf-4160-82aa-2d821bd175ce req-564bd4a8-57d2-4a75-8fba-12619f2ddbbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] No waiting events found dispatching network-vif-plugged-0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:33:32 compute-0 nova_compute[254092]: 2025-11-25 16:33:32.165 254096 WARNING nova.compute.manager [req-019da624-2fcf-4160-82aa-2d821bd175ce req-564bd4a8-57d2-4a75-8fba-12619f2ddbbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Received unexpected event network-vif-plugged-0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 for instance with vm_state active and task_state deleting.
Nov 25 16:33:32 compute-0 nova_compute[254092]: 2025-11-25 16:33:32.221 254096 INFO nova.virt.libvirt.driver [None req-6cfc426a-3ced-4e83-8cd0-ae241f76323d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Deleting instance files /var/lib/nova/instances/1d318e56-4a8c-4806-aa87-e837708f2a1f_del
Nov 25 16:33:32 compute-0 nova_compute[254092]: 2025-11-25 16:33:32.222 254096 INFO nova.virt.libvirt.driver [None req-6cfc426a-3ced-4e83-8cd0-ae241f76323d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Deletion of /var/lib/nova/instances/1d318e56-4a8c-4806-aa87-e837708f2a1f_del complete
Nov 25 16:33:32 compute-0 nova_compute[254092]: 2025-11-25 16:33:32.292 254096 INFO nova.compute.manager [None req-6cfc426a-3ced-4e83-8cd0-ae241f76323d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Took 0.73 seconds to destroy the instance on the hypervisor.
Nov 25 16:33:32 compute-0 nova_compute[254092]: 2025-11-25 16:33:32.293 254096 DEBUG oslo.service.loopingcall [None req-6cfc426a-3ced-4e83-8cd0-ae241f76323d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:33:32 compute-0 nova_compute[254092]: 2025-11-25 16:33:32.294 254096 DEBUG nova.compute.manager [-] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:33:32 compute-0 nova_compute[254092]: 2025-11-25 16:33:32.294 254096 DEBUG nova.network.neutron [-] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:33:32 compute-0 nova_compute[254092]: 2025-11-25 16:33:32.303 254096 INFO nova.network.neutron [None req-d59b0b23-1469-411f-b330-b75f0881a1e6 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Port 0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Nov 25 16:33:32 compute-0 nova_compute[254092]: 2025-11-25 16:33:32.304 254096 DEBUG nova.network.neutron [None req-d59b0b23-1469-411f-b330-b75f0881a1e6 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Updating instance_info_cache with network_info: [{"id": "f95d61ca-d58c-4f07-879f-5e5412976e42", "address": "fa:16:3e:d7:b8:12", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf95d61ca-d5", "ovs_interfaceid": "f95d61ca-d58c-4f07-879f-5e5412976e42", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:33:32 compute-0 nova_compute[254092]: 2025-11-25 16:33:32.324 254096 DEBUG oslo_concurrency.lockutils [None req-d59b0b23-1469-411f-b330-b75f0881a1e6 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Releasing lock "refresh_cache-1d318e56-4a8c-4806-aa87-e837708f2a1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:33:32 compute-0 nova_compute[254092]: 2025-11-25 16:33:32.347 254096 DEBUG oslo_concurrency.lockutils [None req-d59b0b23-1469-411f-b330-b75f0881a1e6 c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "interface-1d318e56-4a8c-4806-aa87-e837708f2a1f-0ce87d5c-fb61-4b81-b779-2d4d7b77b3b9" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 4.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:33:32 compute-0 ceph-mon[74985]: pgmap v1383: 321 pgs: 321 active+clean; 200 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 11 KiB/s wr, 1 op/s
Nov 25 16:33:32 compute-0 nova_compute[254092]: 2025-11-25 16:33:32.663 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:32 compute-0 hopeful_williamson[297485]: {
Nov 25 16:33:32 compute-0 hopeful_williamson[297485]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 16:33:32 compute-0 hopeful_williamson[297485]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:33:32 compute-0 hopeful_williamson[297485]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 16:33:32 compute-0 hopeful_williamson[297485]:         "osd_id": 1,
Nov 25 16:33:32 compute-0 hopeful_williamson[297485]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:33:32 compute-0 hopeful_williamson[297485]:         "type": "bluestore"
Nov 25 16:33:32 compute-0 hopeful_williamson[297485]:     },
Nov 25 16:33:32 compute-0 hopeful_williamson[297485]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 16:33:32 compute-0 hopeful_williamson[297485]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:33:32 compute-0 hopeful_williamson[297485]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 16:33:32 compute-0 hopeful_williamson[297485]:         "osd_id": 2,
Nov 25 16:33:32 compute-0 hopeful_williamson[297485]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:33:32 compute-0 hopeful_williamson[297485]:         "type": "bluestore"
Nov 25 16:33:32 compute-0 hopeful_williamson[297485]:     },
Nov 25 16:33:32 compute-0 hopeful_williamson[297485]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 16:33:32 compute-0 hopeful_williamson[297485]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:33:32 compute-0 hopeful_williamson[297485]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 16:33:32 compute-0 hopeful_williamson[297485]:         "osd_id": 0,
Nov 25 16:33:32 compute-0 hopeful_williamson[297485]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:33:32 compute-0 hopeful_williamson[297485]:         "type": "bluestore"
Nov 25 16:33:32 compute-0 hopeful_williamson[297485]:     }
Nov 25 16:33:32 compute-0 hopeful_williamson[297485]: }
Nov 25 16:33:32 compute-0 systemd[1]: libpod-e390fc49dfe928448c79370ccc30c2e678778ef3f34d660c137ee059a24eccdc.scope: Deactivated successfully.
Nov 25 16:33:32 compute-0 podman[297460]: 2025-11-25 16:33:32.729006822 +0000 UTC m=+1.145767067 container died e390fc49dfe928448c79370ccc30c2e678778ef3f34d660c137ee059a24eccdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_williamson, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 16:33:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-739d804d4c83d44c3d0632079177a9b2db3461fa4f8ab29d933e72aba4bbb22f-merged.mount: Deactivated successfully.
Nov 25 16:33:32 compute-0 podman[297460]: 2025-11-25 16:33:32.785517322 +0000 UTC m=+1.202277567 container remove e390fc49dfe928448c79370ccc30c2e678778ef3f34d660c137ee059a24eccdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_williamson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 16:33:32 compute-0 systemd[1]: libpod-conmon-e390fc49dfe928448c79370ccc30c2e678778ef3f34d660c137ee059a24eccdc.scope: Deactivated successfully.
Nov 25 16:33:32 compute-0 sudo[297356]: pam_unix(sudo:session): session closed for user root
Nov 25 16:33:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:33:32 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:33:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:33:32 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:33:32 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 02040885-b013-4057-ba06-18761764a0e8 does not exist
Nov 25 16:33:32 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev d52b4a48-2848-4f81-b335-8133b0d4b031 does not exist
Nov 25 16:33:32 compute-0 sudo[297572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:33:32 compute-0 sudo[297572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:33:32 compute-0 sudo[297572]: pam_unix(sudo:session): session closed for user root
Nov 25 16:33:32 compute-0 sudo[297597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 16:33:32 compute-0 sudo[297597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:33:32 compute-0 sudo[297597]: pam_unix(sudo:session): session closed for user root
Nov 25 16:33:33 compute-0 nova_compute[254092]: 2025-11-25 16:33:33.139 254096 DEBUG oslo_concurrency.lockutils [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Acquiring lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:33:33 compute-0 nova_compute[254092]: 2025-11-25 16:33:33.139 254096 DEBUG oslo_concurrency.lockutils [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:33:33 compute-0 nova_compute[254092]: 2025-11-25 16:33:33.154 254096 DEBUG nova.compute.manager [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:33:33 compute-0 nova_compute[254092]: 2025-11-25 16:33:33.227 254096 DEBUG oslo_concurrency.lockutils [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:33:33 compute-0 nova_compute[254092]: 2025-11-25 16:33:33.228 254096 DEBUG oslo_concurrency.lockutils [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:33:33 compute-0 nova_compute[254092]: 2025-11-25 16:33:33.236 254096 DEBUG nova.virt.hardware [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:33:33 compute-0 nova_compute[254092]: 2025-11-25 16:33:33.236 254096 INFO nova.compute.claims [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:33:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1384: 321 pgs: 321 active+clean; 200 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 11 KiB/s wr, 1 op/s
Nov 25 16:33:33 compute-0 nova_compute[254092]: 2025-11-25 16:33:33.397 254096 DEBUG oslo_concurrency.processutils [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:33:33 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:33:33 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:33:33 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:33:33 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1913938947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:33:33 compute-0 nova_compute[254092]: 2025-11-25 16:33:33.871 254096 DEBUG oslo_concurrency.processutils [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:33:33 compute-0 nova_compute[254092]: 2025-11-25 16:33:33.877 254096 DEBUG nova.compute.provider_tree [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:33:33 compute-0 nova_compute[254092]: 2025-11-25 16:33:33.894 254096 DEBUG nova.scheduler.client.report [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:33:33 compute-0 nova_compute[254092]: 2025-11-25 16:33:33.915 254096 DEBUG oslo_concurrency.lockutils [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:33:33 compute-0 nova_compute[254092]: 2025-11-25 16:33:33.915 254096 DEBUG nova.compute.manager [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:33:33 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:33:33 compute-0 nova_compute[254092]: 2025-11-25 16:33:33.966 254096 DEBUG nova.compute.manager [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:33:33 compute-0 nova_compute[254092]: 2025-11-25 16:33:33.966 254096 DEBUG nova.network.neutron [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:33:33 compute-0 nova_compute[254092]: 2025-11-25 16:33:33.982 254096 INFO nova.virt.libvirt.driver [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:33:33 compute-0 nova_compute[254092]: 2025-11-25 16:33:33.996 254096 DEBUG nova.compute.manager [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:33:34 compute-0 nova_compute[254092]: 2025-11-25 16:33:34.088 254096 DEBUG nova.compute.manager [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:33:34 compute-0 nova_compute[254092]: 2025-11-25 16:33:34.090 254096 DEBUG nova.virt.libvirt.driver [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:33:34 compute-0 nova_compute[254092]: 2025-11-25 16:33:34.090 254096 INFO nova.virt.libvirt.driver [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Creating image(s)
Nov 25 16:33:34 compute-0 nova_compute[254092]: 2025-11-25 16:33:34.119 254096 DEBUG nova.storage.rbd_utils [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] rbd image a3d5d205-98f0-4820-a96c-7f3e59d0cdd9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:33:34 compute-0 nova_compute[254092]: 2025-11-25 16:33:34.143 254096 DEBUG nova.storage.rbd_utils [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] rbd image a3d5d205-98f0-4820-a96c-7f3e59d0cdd9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:33:34 compute-0 nova_compute[254092]: 2025-11-25 16:33:34.163 254096 DEBUG nova.storage.rbd_utils [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] rbd image a3d5d205-98f0-4820-a96c-7f3e59d0cdd9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:33:34 compute-0 nova_compute[254092]: 2025-11-25 16:33:34.167 254096 DEBUG oslo_concurrency.processutils [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:33:34 compute-0 nova_compute[254092]: 2025-11-25 16:33:34.234 254096 DEBUG oslo_concurrency.processutils [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:33:34 compute-0 nova_compute[254092]: 2025-11-25 16:33:34.235 254096 DEBUG oslo_concurrency.lockutils [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:33:34 compute-0 nova_compute[254092]: 2025-11-25 16:33:34.235 254096 DEBUG oslo_concurrency.lockutils [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:33:34 compute-0 nova_compute[254092]: 2025-11-25 16:33:34.236 254096 DEBUG oslo_concurrency.lockutils [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:33:34 compute-0 nova_compute[254092]: 2025-11-25 16:33:34.256 254096 DEBUG nova.storage.rbd_utils [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] rbd image a3d5d205-98f0-4820-a96c-7f3e59d0cdd9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:33:34 compute-0 nova_compute[254092]: 2025-11-25 16:33:34.259 254096 DEBUG oslo_concurrency.processutils [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 a3d5d205-98f0-4820-a96c-7f3e59d0cdd9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:33:34 compute-0 nova_compute[254092]: 2025-11-25 16:33:34.454 254096 DEBUG nova.policy [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b228702c02db4cb69105bb4c939c15d7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2d7c4dbc1eb44f39aa7ccb9b6363e554', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:33:34 compute-0 nova_compute[254092]: 2025-11-25 16:33:34.525 254096 DEBUG nova.compute.manager [req-5e6fae37-1e29-424b-a3ab-c5122c82e3db req-f9a38560-1b6d-41bd-9a19-1012c3fb4c03 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Received event network-vif-unplugged-f95d61ca-d58c-4f07-879f-5e5412976e42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:33:34 compute-0 nova_compute[254092]: 2025-11-25 16:33:34.525 254096 DEBUG oslo_concurrency.lockutils [req-5e6fae37-1e29-424b-a3ab-c5122c82e3db req-f9a38560-1b6d-41bd-9a19-1012c3fb4c03 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:33:34 compute-0 nova_compute[254092]: 2025-11-25 16:33:34.525 254096 DEBUG oslo_concurrency.lockutils [req-5e6fae37-1e29-424b-a3ab-c5122c82e3db req-f9a38560-1b6d-41bd-9a19-1012c3fb4c03 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:33:34 compute-0 nova_compute[254092]: 2025-11-25 16:33:34.526 254096 DEBUG oslo_concurrency.lockutils [req-5e6fae37-1e29-424b-a3ab-c5122c82e3db req-f9a38560-1b6d-41bd-9a19-1012c3fb4c03 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:33:34 compute-0 nova_compute[254092]: 2025-11-25 16:33:34.526 254096 DEBUG nova.compute.manager [req-5e6fae37-1e29-424b-a3ab-c5122c82e3db req-f9a38560-1b6d-41bd-9a19-1012c3fb4c03 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] No waiting events found dispatching network-vif-unplugged-f95d61ca-d58c-4f07-879f-5e5412976e42 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:33:34 compute-0 nova_compute[254092]: 2025-11-25 16:33:34.526 254096 DEBUG nova.compute.manager [req-5e6fae37-1e29-424b-a3ab-c5122c82e3db req-f9a38560-1b6d-41bd-9a19-1012c3fb4c03 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Received event network-vif-unplugged-f95d61ca-d58c-4f07-879f-5e5412976e42 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:33:34 compute-0 nova_compute[254092]: 2025-11-25 16:33:34.526 254096 DEBUG nova.compute.manager [req-5e6fae37-1e29-424b-a3ab-c5122c82e3db req-f9a38560-1b6d-41bd-9a19-1012c3fb4c03 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Received event network-vif-plugged-f95d61ca-d58c-4f07-879f-5e5412976e42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:33:34 compute-0 nova_compute[254092]: 2025-11-25 16:33:34.527 254096 DEBUG oslo_concurrency.lockutils [req-5e6fae37-1e29-424b-a3ab-c5122c82e3db req-f9a38560-1b6d-41bd-9a19-1012c3fb4c03 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:33:34 compute-0 nova_compute[254092]: 2025-11-25 16:33:34.527 254096 DEBUG oslo_concurrency.lockutils [req-5e6fae37-1e29-424b-a3ab-c5122c82e3db req-f9a38560-1b6d-41bd-9a19-1012c3fb4c03 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:33:34 compute-0 nova_compute[254092]: 2025-11-25 16:33:34.527 254096 DEBUG oslo_concurrency.lockutils [req-5e6fae37-1e29-424b-a3ab-c5122c82e3db req-f9a38560-1b6d-41bd-9a19-1012c3fb4c03 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:33:34 compute-0 nova_compute[254092]: 2025-11-25 16:33:34.527 254096 DEBUG nova.compute.manager [req-5e6fae37-1e29-424b-a3ab-c5122c82e3db req-f9a38560-1b6d-41bd-9a19-1012c3fb4c03 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] No waiting events found dispatching network-vif-plugged-f95d61ca-d58c-4f07-879f-5e5412976e42 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:33:34 compute-0 nova_compute[254092]: 2025-11-25 16:33:34.527 254096 WARNING nova.compute.manager [req-5e6fae37-1e29-424b-a3ab-c5122c82e3db req-f9a38560-1b6d-41bd-9a19-1012c3fb4c03 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Received unexpected event network-vif-plugged-f95d61ca-d58c-4f07-879f-5e5412976e42 for instance with vm_state active and task_state deleting.
Nov 25 16:33:34 compute-0 nova_compute[254092]: 2025-11-25 16:33:34.527 254096 DEBUG nova.compute.manager [req-5e6fae37-1e29-424b-a3ab-c5122c82e3db req-f9a38560-1b6d-41bd-9a19-1012c3fb4c03 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Received event network-vif-plugged-f95d61ca-d58c-4f07-879f-5e5412976e42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:33:34 compute-0 nova_compute[254092]: 2025-11-25 16:33:34.528 254096 DEBUG oslo_concurrency.lockutils [req-5e6fae37-1e29-424b-a3ab-c5122c82e3db req-f9a38560-1b6d-41bd-9a19-1012c3fb4c03 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:33:34 compute-0 nova_compute[254092]: 2025-11-25 16:33:34.528 254096 DEBUG oslo_concurrency.lockutils [req-5e6fae37-1e29-424b-a3ab-c5122c82e3db req-f9a38560-1b6d-41bd-9a19-1012c3fb4c03 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:33:34 compute-0 nova_compute[254092]: 2025-11-25 16:33:34.528 254096 DEBUG oslo_concurrency.lockutils [req-5e6fae37-1e29-424b-a3ab-c5122c82e3db req-f9a38560-1b6d-41bd-9a19-1012c3fb4c03 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:33:34 compute-0 nova_compute[254092]: 2025-11-25 16:33:34.528 254096 DEBUG nova.compute.manager [req-5e6fae37-1e29-424b-a3ab-c5122c82e3db req-f9a38560-1b6d-41bd-9a19-1012c3fb4c03 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] No waiting events found dispatching network-vif-plugged-f95d61ca-d58c-4f07-879f-5e5412976e42 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:33:34 compute-0 nova_compute[254092]: 2025-11-25 16:33:34.528 254096 WARNING nova.compute.manager [req-5e6fae37-1e29-424b-a3ab-c5122c82e3db req-f9a38560-1b6d-41bd-9a19-1012c3fb4c03 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Received unexpected event network-vif-plugged-f95d61ca-d58c-4f07-879f-5e5412976e42 for instance with vm_state active and task_state deleting.
Nov 25 16:33:34 compute-0 nova_compute[254092]: 2025-11-25 16:33:34.546 254096 DEBUG oslo_concurrency.processutils [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 a3d5d205-98f0-4820-a96c-7f3e59d0cdd9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.286s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:33:34 compute-0 nova_compute[254092]: 2025-11-25 16:33:34.596 254096 DEBUG nova.storage.rbd_utils [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] resizing rbd image a3d5d205-98f0-4820-a96c-7f3e59d0cdd9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:33:34 compute-0 nova_compute[254092]: 2025-11-25 16:33:34.683 254096 DEBUG nova.objects.instance [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lazy-loading 'migration_context' on Instance uuid a3d5d205-98f0-4820-a96c-7f3e59d0cdd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:33:34 compute-0 nova_compute[254092]: 2025-11-25 16:33:34.685 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:34 compute-0 nova_compute[254092]: 2025-11-25 16:33:34.702 254096 DEBUG nova.virt.libvirt.driver [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:33:34 compute-0 nova_compute[254092]: 2025-11-25 16:33:34.703 254096 DEBUG nova.virt.libvirt.driver [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Ensure instance console log exists: /var/lib/nova/instances/a3d5d205-98f0-4820-a96c-7f3e59d0cdd9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:33:34 compute-0 nova_compute[254092]: 2025-11-25 16:33:34.703 254096 DEBUG oslo_concurrency.lockutils [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:33:34 compute-0 nova_compute[254092]: 2025-11-25 16:33:34.704 254096 DEBUG oslo_concurrency.lockutils [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:33:34 compute-0 nova_compute[254092]: 2025-11-25 16:33:34.704 254096 DEBUG oslo_concurrency.lockutils [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:33:34 compute-0 ceph-mon[74985]: pgmap v1384: 321 pgs: 321 active+clean; 200 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 11 KiB/s wr, 1 op/s
Nov 25 16:33:34 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1913938947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:33:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1385: 321 pgs: 321 active+clean; 181 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s rd, 1.7 MiB/s wr, 21 op/s
Nov 25 16:33:35 compute-0 nova_compute[254092]: 2025-11-25 16:33:35.404 254096 DEBUG nova.network.neutron [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Successfully created port: 660536bc-d4bf-4a4b-9515-06043951c25e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:33:36 compute-0 nova_compute[254092]: 2025-11-25 16:33:36.535 254096 DEBUG nova.network.neutron [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Successfully updated port: 660536bc-d4bf-4a4b-9515-06043951c25e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:33:36 compute-0 nova_compute[254092]: 2025-11-25 16:33:36.556 254096 DEBUG oslo_concurrency.lockutils [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Acquiring lock "refresh_cache-a3d5d205-98f0-4820-a96c-7f3e59d0cdd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:33:36 compute-0 nova_compute[254092]: 2025-11-25 16:33:36.556 254096 DEBUG oslo_concurrency.lockutils [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Acquired lock "refresh_cache-a3d5d205-98f0-4820-a96c-7f3e59d0cdd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:33:36 compute-0 nova_compute[254092]: 2025-11-25 16:33:36.556 254096 DEBUG nova.network.neutron [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:33:36 compute-0 nova_compute[254092]: 2025-11-25 16:33:36.846 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:36 compute-0 ceph-mon[74985]: pgmap v1385: 321 pgs: 321 active+clean; 181 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s rd, 1.7 MiB/s wr, 21 op/s
Nov 25 16:33:37 compute-0 nova_compute[254092]: 2025-11-25 16:33:37.008 254096 DEBUG nova.network.neutron [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:33:37 compute-0 sshd-session[297810]: Connection closed by authenticating user root 171.244.51.45 port 53554 [preauth]
Nov 25 16:33:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1386: 321 pgs: 321 active+clean; 167 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Nov 25 16:33:37 compute-0 nova_compute[254092]: 2025-11-25 16:33:37.386 254096 DEBUG nova.compute.manager [req-1ac19023-0d02-4e97-80d8-7eecacd292ce req-c091c037-7e82-431e-bd81-9c73c4dc9217 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Received event network-changed-660536bc-d4bf-4a4b-9515-06043951c25e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:33:37 compute-0 nova_compute[254092]: 2025-11-25 16:33:37.386 254096 DEBUG nova.compute.manager [req-1ac19023-0d02-4e97-80d8-7eecacd292ce req-c091c037-7e82-431e-bd81-9c73c4dc9217 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Refreshing instance network info cache due to event network-changed-660536bc-d4bf-4a4b-9515-06043951c25e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:33:37 compute-0 nova_compute[254092]: 2025-11-25 16:33:37.386 254096 DEBUG oslo_concurrency.lockutils [req-1ac19023-0d02-4e97-80d8-7eecacd292ce req-c091c037-7e82-431e-bd81-9c73c4dc9217 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-a3d5d205-98f0-4820-a96c-7f3e59d0cdd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:33:37 compute-0 nova_compute[254092]: 2025-11-25 16:33:37.469 254096 DEBUG nova.compute.manager [req-12d66e9a-33b9-4423-8417-46db0db60995 req-e49c7356-9807-4648-8d55-b1595179b53d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Received event network-vif-plugged-f95d61ca-d58c-4f07-879f-5e5412976e42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:33:37 compute-0 nova_compute[254092]: 2025-11-25 16:33:37.469 254096 DEBUG oslo_concurrency.lockutils [req-12d66e9a-33b9-4423-8417-46db0db60995 req-e49c7356-9807-4648-8d55-b1595179b53d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:33:37 compute-0 nova_compute[254092]: 2025-11-25 16:33:37.470 254096 DEBUG oslo_concurrency.lockutils [req-12d66e9a-33b9-4423-8417-46db0db60995 req-e49c7356-9807-4648-8d55-b1595179b53d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:33:37 compute-0 nova_compute[254092]: 2025-11-25 16:33:37.470 254096 DEBUG oslo_concurrency.lockutils [req-12d66e9a-33b9-4423-8417-46db0db60995 req-e49c7356-9807-4648-8d55-b1595179b53d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:33:37 compute-0 nova_compute[254092]: 2025-11-25 16:33:37.470 254096 DEBUG nova.compute.manager [req-12d66e9a-33b9-4423-8417-46db0db60995 req-e49c7356-9807-4648-8d55-b1595179b53d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] No waiting events found dispatching network-vif-plugged-f95d61ca-d58c-4f07-879f-5e5412976e42 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:33:37 compute-0 nova_compute[254092]: 2025-11-25 16:33:37.471 254096 WARNING nova.compute.manager [req-12d66e9a-33b9-4423-8417-46db0db60995 req-e49c7356-9807-4648-8d55-b1595179b53d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Received unexpected event network-vif-plugged-f95d61ca-d58c-4f07-879f-5e5412976e42 for instance with vm_state active and task_state deleting.
Nov 25 16:33:37 compute-0 nova_compute[254092]: 2025-11-25 16:33:37.471 254096 DEBUG nova.compute.manager [req-12d66e9a-33b9-4423-8417-46db0db60995 req-e49c7356-9807-4648-8d55-b1595179b53d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Received event network-vif-unplugged-f95d61ca-d58c-4f07-879f-5e5412976e42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:33:37 compute-0 nova_compute[254092]: 2025-11-25 16:33:37.471 254096 DEBUG oslo_concurrency.lockutils [req-12d66e9a-33b9-4423-8417-46db0db60995 req-e49c7356-9807-4648-8d55-b1595179b53d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:33:37 compute-0 nova_compute[254092]: 2025-11-25 16:33:37.471 254096 DEBUG oslo_concurrency.lockutils [req-12d66e9a-33b9-4423-8417-46db0db60995 req-e49c7356-9807-4648-8d55-b1595179b53d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:33:37 compute-0 nova_compute[254092]: 2025-11-25 16:33:37.472 254096 DEBUG oslo_concurrency.lockutils [req-12d66e9a-33b9-4423-8417-46db0db60995 req-e49c7356-9807-4648-8d55-b1595179b53d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:33:37 compute-0 nova_compute[254092]: 2025-11-25 16:33:37.472 254096 DEBUG nova.compute.manager [req-12d66e9a-33b9-4423-8417-46db0db60995 req-e49c7356-9807-4648-8d55-b1595179b53d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] No waiting events found dispatching network-vif-unplugged-f95d61ca-d58c-4f07-879f-5e5412976e42 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:33:37 compute-0 nova_compute[254092]: 2025-11-25 16:33:37.472 254096 DEBUG nova.compute.manager [req-12d66e9a-33b9-4423-8417-46db0db60995 req-e49c7356-9807-4648-8d55-b1595179b53d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Received event network-vif-unplugged-f95d61ca-d58c-4f07-879f-5e5412976e42 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:33:37 compute-0 nova_compute[254092]: 2025-11-25 16:33:37.472 254096 DEBUG nova.compute.manager [req-12d66e9a-33b9-4423-8417-46db0db60995 req-e49c7356-9807-4648-8d55-b1595179b53d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Received event network-vif-plugged-f95d61ca-d58c-4f07-879f-5e5412976e42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:33:37 compute-0 nova_compute[254092]: 2025-11-25 16:33:37.473 254096 DEBUG oslo_concurrency.lockutils [req-12d66e9a-33b9-4423-8417-46db0db60995 req-e49c7356-9807-4648-8d55-b1595179b53d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:33:37 compute-0 nova_compute[254092]: 2025-11-25 16:33:37.473 254096 DEBUG oslo_concurrency.lockutils [req-12d66e9a-33b9-4423-8417-46db0db60995 req-e49c7356-9807-4648-8d55-b1595179b53d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:33:37 compute-0 nova_compute[254092]: 2025-11-25 16:33:37.473 254096 DEBUG oslo_concurrency.lockutils [req-12d66e9a-33b9-4423-8417-46db0db60995 req-e49c7356-9807-4648-8d55-b1595179b53d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1d318e56-4a8c-4806-aa87-e837708f2a1f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:33:37 compute-0 nova_compute[254092]: 2025-11-25 16:33:37.473 254096 DEBUG nova.compute.manager [req-12d66e9a-33b9-4423-8417-46db0db60995 req-e49c7356-9807-4648-8d55-b1595179b53d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] No waiting events found dispatching network-vif-plugged-f95d61ca-d58c-4f07-879f-5e5412976e42 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:33:37 compute-0 nova_compute[254092]: 2025-11-25 16:33:37.474 254096 WARNING nova.compute.manager [req-12d66e9a-33b9-4423-8417-46db0db60995 req-e49c7356-9807-4648-8d55-b1595179b53d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Received unexpected event network-vif-plugged-f95d61ca-d58c-4f07-879f-5e5412976e42 for instance with vm_state active and task_state deleting.
Nov 25 16:33:37 compute-0 nova_compute[254092]: 2025-11-25 16:33:37.540 254096 DEBUG nova.network.neutron [-] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:33:37 compute-0 nova_compute[254092]: 2025-11-25 16:33:37.559 254096 INFO nova.compute.manager [-] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Took 5.26 seconds to deallocate network for instance.
Nov 25 16:33:37 compute-0 nova_compute[254092]: 2025-11-25 16:33:37.624 254096 DEBUG oslo_concurrency.lockutils [None req-6cfc426a-3ced-4e83-8cd0-ae241f76323d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:33:37 compute-0 nova_compute[254092]: 2025-11-25 16:33:37.625 254096 DEBUG oslo_concurrency.lockutils [None req-6cfc426a-3ced-4e83-8cd0-ae241f76323d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:33:37 compute-0 nova_compute[254092]: 2025-11-25 16:33:37.665 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:37 compute-0 nova_compute[254092]: 2025-11-25 16:33:37.730 254096 DEBUG oslo_concurrency.processutils [None req-6cfc426a-3ced-4e83-8cd0-ae241f76323d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:33:38 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:33:38 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4245376748' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:33:38 compute-0 nova_compute[254092]: 2025-11-25 16:33:38.172 254096 DEBUG oslo_concurrency.processutils [None req-6cfc426a-3ced-4e83-8cd0-ae241f76323d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:33:38 compute-0 nova_compute[254092]: 2025-11-25 16:33:38.178 254096 DEBUG nova.compute.provider_tree [None req-6cfc426a-3ced-4e83-8cd0-ae241f76323d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:33:38 compute-0 nova_compute[254092]: 2025-11-25 16:33:38.196 254096 DEBUG nova.scheduler.client.report [None req-6cfc426a-3ced-4e83-8cd0-ae241f76323d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:33:38 compute-0 nova_compute[254092]: 2025-11-25 16:33:38.219 254096 DEBUG oslo_concurrency.lockutils [None req-6cfc426a-3ced-4e83-8cd0-ae241f76323d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:33:38 compute-0 nova_compute[254092]: 2025-11-25 16:33:38.247 254096 INFO nova.scheduler.client.report [None req-6cfc426a-3ced-4e83-8cd0-ae241f76323d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Deleted allocations for instance 1d318e56-4a8c-4806-aa87-e837708f2a1f
Nov 25 16:33:38 compute-0 nova_compute[254092]: 2025-11-25 16:33:38.282 254096 DEBUG nova.network.neutron [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Updating instance_info_cache with network_info: [{"id": "660536bc-d4bf-4a4b-9515-06043951c25e", "address": "fa:16:3e:10:46:64", "network": {"id": "3960d4c5-60d7-49e3-b26d-f1317dd96f9f", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1770094643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d7c4dbc1eb44f39aa7ccb9b6363e554", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660536bc-d4", "ovs_interfaceid": "660536bc-d4bf-4a4b-9515-06043951c25e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:33:38 compute-0 nova_compute[254092]: 2025-11-25 16:33:38.320 254096 DEBUG oslo_concurrency.lockutils [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Releasing lock "refresh_cache-a3d5d205-98f0-4820-a96c-7f3e59d0cdd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:33:38 compute-0 nova_compute[254092]: 2025-11-25 16:33:38.321 254096 DEBUG nova.compute.manager [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Instance network_info: |[{"id": "660536bc-d4bf-4a4b-9515-06043951c25e", "address": "fa:16:3e:10:46:64", "network": {"id": "3960d4c5-60d7-49e3-b26d-f1317dd96f9f", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1770094643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d7c4dbc1eb44f39aa7ccb9b6363e554", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660536bc-d4", "ovs_interfaceid": "660536bc-d4bf-4a4b-9515-06043951c25e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:33:38 compute-0 nova_compute[254092]: 2025-11-25 16:33:38.322 254096 DEBUG oslo_concurrency.lockutils [req-1ac19023-0d02-4e97-80d8-7eecacd292ce req-c091c037-7e82-431e-bd81-9c73c4dc9217 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-a3d5d205-98f0-4820-a96c-7f3e59d0cdd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:33:38 compute-0 nova_compute[254092]: 2025-11-25 16:33:38.322 254096 DEBUG nova.network.neutron [req-1ac19023-0d02-4e97-80d8-7eecacd292ce req-c091c037-7e82-431e-bd81-9c73c4dc9217 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Refreshing network info cache for port 660536bc-d4bf-4a4b-9515-06043951c25e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:33:38 compute-0 nova_compute[254092]: 2025-11-25 16:33:38.325 254096 DEBUG nova.virt.libvirt.driver [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Start _get_guest_xml network_info=[{"id": "660536bc-d4bf-4a4b-9515-06043951c25e", "address": "fa:16:3e:10:46:64", "network": {"id": "3960d4c5-60d7-49e3-b26d-f1317dd96f9f", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1770094643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d7c4dbc1eb44f39aa7ccb9b6363e554", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660536bc-d4", "ovs_interfaceid": "660536bc-d4bf-4a4b-9515-06043951c25e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:33:38 compute-0 nova_compute[254092]: 2025-11-25 16:33:38.330 254096 WARNING nova.virt.libvirt.driver [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:33:38 compute-0 nova_compute[254092]: 2025-11-25 16:33:38.334 254096 DEBUG nova.virt.libvirt.host [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:33:38 compute-0 nova_compute[254092]: 2025-11-25 16:33:38.334 254096 DEBUG nova.virt.libvirt.host [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:33:38 compute-0 nova_compute[254092]: 2025-11-25 16:33:38.339 254096 DEBUG oslo_concurrency.lockutils [None req-6cfc426a-3ced-4e83-8cd0-ae241f76323d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "1d318e56-4a8c-4806-aa87-e837708f2a1f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:33:38 compute-0 nova_compute[254092]: 2025-11-25 16:33:38.343 254096 DEBUG nova.virt.libvirt.host [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:33:38 compute-0 nova_compute[254092]: 2025-11-25 16:33:38.343 254096 DEBUG nova.virt.libvirt.host [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:33:38 compute-0 nova_compute[254092]: 2025-11-25 16:33:38.344 254096 DEBUG nova.virt.libvirt.driver [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:33:38 compute-0 nova_compute[254092]: 2025-11-25 16:33:38.344 254096 DEBUG nova.virt.hardware [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:33:38 compute-0 nova_compute[254092]: 2025-11-25 16:33:38.344 254096 DEBUG nova.virt.hardware [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:33:38 compute-0 nova_compute[254092]: 2025-11-25 16:33:38.344 254096 DEBUG nova.virt.hardware [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:33:38 compute-0 nova_compute[254092]: 2025-11-25 16:33:38.344 254096 DEBUG nova.virt.hardware [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:33:38 compute-0 nova_compute[254092]: 2025-11-25 16:33:38.345 254096 DEBUG nova.virt.hardware [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:33:38 compute-0 nova_compute[254092]: 2025-11-25 16:33:38.345 254096 DEBUG nova.virt.hardware [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:33:38 compute-0 nova_compute[254092]: 2025-11-25 16:33:38.345 254096 DEBUG nova.virt.hardware [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:33:38 compute-0 nova_compute[254092]: 2025-11-25 16:33:38.345 254096 DEBUG nova.virt.hardware [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:33:38 compute-0 nova_compute[254092]: 2025-11-25 16:33:38.345 254096 DEBUG nova.virt.hardware [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:33:38 compute-0 nova_compute[254092]: 2025-11-25 16:33:38.345 254096 DEBUG nova.virt.hardware [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:33:38 compute-0 nova_compute[254092]: 2025-11-25 16:33:38.345 254096 DEBUG nova.virt.hardware [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:33:38 compute-0 nova_compute[254092]: 2025-11-25 16:33:38.348 254096 DEBUG oslo_concurrency.processutils [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:33:38 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:33:38 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2388058697' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:33:38 compute-0 nova_compute[254092]: 2025-11-25 16:33:38.795 254096 DEBUG oslo_concurrency.processutils [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:33:38 compute-0 nova_compute[254092]: 2025-11-25 16:33:38.817 254096 DEBUG nova.storage.rbd_utils [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] rbd image a3d5d205-98f0-4820-a96c-7f3e59d0cdd9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:33:38 compute-0 nova_compute[254092]: 2025-11-25 16:33:38.820 254096 DEBUG oslo_concurrency.processutils [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:33:38 compute-0 nova_compute[254092]: 2025-11-25 16:33:38.845 254096 DEBUG oslo_concurrency.lockutils [None req-ec5ac2db-0fec-4641-944c-53dea2d3849d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "800c66e3-ee9f-4766-92f2-ecda5671cde3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:33:38 compute-0 nova_compute[254092]: 2025-11-25 16:33:38.845 254096 DEBUG oslo_concurrency.lockutils [None req-ec5ac2db-0fec-4641-944c-53dea2d3849d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "800c66e3-ee9f-4766-92f2-ecda5671cde3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:33:38 compute-0 nova_compute[254092]: 2025-11-25 16:33:38.846 254096 DEBUG oslo_concurrency.lockutils [None req-ec5ac2db-0fec-4641-944c-53dea2d3849d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "800c66e3-ee9f-4766-92f2-ecda5671cde3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:33:38 compute-0 nova_compute[254092]: 2025-11-25 16:33:38.846 254096 DEBUG oslo_concurrency.lockutils [None req-ec5ac2db-0fec-4641-944c-53dea2d3849d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "800c66e3-ee9f-4766-92f2-ecda5671cde3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:33:38 compute-0 nova_compute[254092]: 2025-11-25 16:33:38.846 254096 DEBUG oslo_concurrency.lockutils [None req-ec5ac2db-0fec-4641-944c-53dea2d3849d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "800c66e3-ee9f-4766-92f2-ecda5671cde3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:33:38 compute-0 nova_compute[254092]: 2025-11-25 16:33:38.847 254096 INFO nova.compute.manager [None req-ec5ac2db-0fec-4641-944c-53dea2d3849d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Terminating instance
Nov 25 16:33:38 compute-0 nova_compute[254092]: 2025-11-25 16:33:38.848 254096 DEBUG nova.compute.manager [None req-ec5ac2db-0fec-4641-944c-53dea2d3849d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:33:38 compute-0 ceph-mon[74985]: pgmap v1386: 321 pgs: 321 active+clean; 167 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Nov 25 16:33:38 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4245376748' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:33:38 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2388058697' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:33:38 compute-0 kernel: tap82f63517-76 (unregistering): left promiscuous mode
Nov 25 16:33:38 compute-0 NetworkManager[48891]: <info>  [1764088418.9093] device (tap82f63517-76): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:33:38 compute-0 nova_compute[254092]: 2025-11-25 16:33:38.940 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:38 compute-0 ovn_controller[153477]: 2025-11-25T16:33:38Z|00266|binding|INFO|Releasing lport 82f63517-7636-46bf-b4e1-ba191ddad018 from this chassis (sb_readonly=0)
Nov 25 16:33:38 compute-0 ovn_controller[153477]: 2025-11-25T16:33:38Z|00267|binding|INFO|Setting lport 82f63517-7636-46bf-b4e1-ba191ddad018 down in Southbound
Nov 25 16:33:38 compute-0 ovn_controller[153477]: 2025-11-25T16:33:38Z|00268|binding|INFO|Removing iface tap82f63517-76 ovn-installed in OVS
Nov 25 16:33:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:38.950 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:7b:ca 10.100.0.6'], port_security=['fa:16:3e:f0:7b:ca 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '800c66e3-ee9f-4766-92f2-ecda5671cde3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36fd407edf16424e96041f57fccb9e2b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '59e04c7b-1fad-4d48-ad96-e3b3cf180d61', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a0fe150-4b32-4abf-ad06-055b2d97facb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=82f63517-7636-46bf-b4e1-ba191ddad018) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:33:38 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:33:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:38.951 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 82f63517-7636-46bf-b4e1-ba191ddad018 in datapath 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d unbound from our chassis
Nov 25 16:33:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:38.952 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 52e7d5b9-0570-4e5c-b3da-9dfcb924b83d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:33:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:38.953 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3431f318-df83-43d7-a52e-8085a44d9530]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:38.954 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d namespace which is not needed anymore
Nov 25 16:33:38 compute-0 nova_compute[254092]: 2025-11-25 16:33:38.959 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:38 compute-0 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000020.scope: Deactivated successfully.
Nov 25 16:33:38 compute-0 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000020.scope: Consumed 16.154s CPU time.
Nov 25 16:33:38 compute-0 systemd-machined[216343]: Machine qemu-37-instance-00000020 terminated.
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.086 254096 INFO nova.virt.libvirt.driver [-] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Instance destroyed successfully.
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.088 254096 DEBUG nova.objects.instance [None req-ec5ac2db-0fec-4641-944c-53dea2d3849d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lazy-loading 'resources' on Instance uuid 800c66e3-ee9f-4766-92f2-ecda5671cde3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.100 254096 DEBUG nova.virt.libvirt.vif [None req-ec5ac2db-0fec-4641-944c-53dea2d3849d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:32:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1061549484',display_name='tempest-tempest.common.compute-instance-1061549484',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1061549484',id=32,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDGZe6LoY198VQqVmszTzRQn1eLrChbRZgirb9UFyHnKiwbK+onUZ9OAKdmbbmeuifOBSXxAmDh6Tzi6OA/iv9WrQsgxGvk1nGcnbFbOsVIGStOyhcAbSBeBMtXZYtHi4Q==',key_name='tempest-keypair-386194762',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:32:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='36fd407edf16424e96041f57fccb9e2b',ramdisk_id='',reservation_id='r-kwb5mjaz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1854144244',owner_user_name='tempest-AttachInterfacesTestJSON-1854144244-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:32:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c2f73d928f424db49a62e7dff2b1ce14',uuid=800c66e3-ee9f-4766-92f2-ecda5671cde3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "82f63517-7636-46bf-b4e1-ba191ddad018", "address": "fa:16:3e:f0:7b:ca", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82f63517-76", "ovs_interfaceid": "82f63517-7636-46bf-b4e1-ba191ddad018", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.101 254096 DEBUG nova.network.os_vif_util [None req-ec5ac2db-0fec-4641-944c-53dea2d3849d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converting VIF {"id": "82f63517-7636-46bf-b4e1-ba191ddad018", "address": "fa:16:3e:f0:7b:ca", "network": {"id": "52e7d5b9-0570-4e5c-b3da-9dfcb924b83d", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-58491035-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36fd407edf16424e96041f57fccb9e2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap82f63517-76", "ovs_interfaceid": "82f63517-7636-46bf-b4e1-ba191ddad018", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.102 254096 DEBUG nova.network.os_vif_util [None req-ec5ac2db-0fec-4641-944c-53dea2d3849d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f0:7b:ca,bridge_name='br-int',has_traffic_filtering=True,id=82f63517-7636-46bf-b4e1-ba191ddad018,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82f63517-76') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.102 254096 DEBUG os_vif [None req-ec5ac2db-0fec-4641-944c-53dea2d3849d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f0:7b:ca,bridge_name='br-int',has_traffic_filtering=True,id=82f63517-7636-46bf-b4e1-ba191ddad018,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82f63517-76') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.105 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.106 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap82f63517-76, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:33:39 compute-0 neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d[294092]: [NOTICE]   (294132) : haproxy version is 2.8.14-c23fe91
Nov 25 16:33:39 compute-0 neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d[294092]: [NOTICE]   (294132) : path to executable is /usr/sbin/haproxy
Nov 25 16:33:39 compute-0 neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d[294092]: [WARNING]  (294132) : Exiting Master process...
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.109 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:39 compute-0 neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d[294092]: [ALERT]    (294132) : Current worker (294150) exited with code 143 (Terminated)
Nov 25 16:33:39 compute-0 neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d[294092]: [WARNING]  (294132) : All workers exited. Exiting... (0)
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.111 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:33:39 compute-0 systemd[1]: libpod-892c7a817c83b40b1c06df80538835a6ef877956d264f91de5ea00faa3e9f4e9.scope: Deactivated successfully.
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.114 254096 INFO os_vif [None req-ec5ac2db-0fec-4641-944c-53dea2d3849d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f0:7b:ca,bridge_name='br-int',has_traffic_filtering=True,id=82f63517-7636-46bf-b4e1-ba191ddad018,network=Network(52e7d5b9-0570-4e5c-b3da-9dfcb924b83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap82f63517-76')
Nov 25 16:33:39 compute-0 podman[297917]: 2025-11-25 16:33:39.121186591 +0000 UTC m=+0.054395405 container died 892c7a817c83b40b1c06df80538835a6ef877956d264f91de5ea00faa3e9f4e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 16:33:39 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-892c7a817c83b40b1c06df80538835a6ef877956d264f91de5ea00faa3e9f4e9-userdata-shm.mount: Deactivated successfully.
Nov 25 16:33:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-33e328b169ee953840c8739043c0f2cb5f976a6a6651659cd6ce366c78ba5f4b-merged.mount: Deactivated successfully.
Nov 25 16:33:39 compute-0 podman[297917]: 2025-11-25 16:33:39.164419872 +0000 UTC m=+0.097628676 container cleanup 892c7a817c83b40b1c06df80538835a6ef877956d264f91de5ea00faa3e9f4e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 16:33:39 compute-0 systemd[1]: libpod-conmon-892c7a817c83b40b1c06df80538835a6ef877956d264f91de5ea00faa3e9f4e9.scope: Deactivated successfully.
Nov 25 16:33:39 compute-0 podman[297973]: 2025-11-25 16:33:39.234627394 +0000 UTC m=+0.048213157 container remove 892c7a817c83b40b1c06df80538835a6ef877956d264f91de5ea00faa3e9f4e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:33:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:39.242 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5a8ed4b5-64bc-4f41-9b15-3dee3603ba14]: (4, ('Tue Nov 25 04:33:39 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d (892c7a817c83b40b1c06df80538835a6ef877956d264f91de5ea00faa3e9f4e9)\n892c7a817c83b40b1c06df80538835a6ef877956d264f91de5ea00faa3e9f4e9\nTue Nov 25 04:33:39 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d (892c7a817c83b40b1c06df80538835a6ef877956d264f91de5ea00faa3e9f4e9)\n892c7a817c83b40b1c06df80538835a6ef877956d264f91de5ea00faa3e9f4e9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:39.245 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[244cbb50-8071-4eb3-ac36-dba9a779b72c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:39.246 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52e7d5b9-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.248 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:39 compute-0 kernel: tap52e7d5b9-00: left promiscuous mode
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.263 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.265 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:39.267 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b9996981-6a20-44c0-8091-82afc7a5b8f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:39.278 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c84836ee-fa0a-4446-8aaf-ebeafdcd893a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:39.280 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[77fc2a5d-53c0-4e43-bb0e-f133c56909e0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:33:39 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/745096568' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:33:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:39.299 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[448b7722-907a-410c-b535-95366dbf931e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 480298, 'reachable_time': 28508, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297989, 'error': None, 'target': 'ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:39.302 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-52e7d5b9-0570-4e5c-b3da-9dfcb924b83d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:33:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:39.302 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[ada0bf02-0462-41ef-b9a6-3c329164aa2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:39 compute-0 systemd[1]: run-netns-ovnmeta\x2d52e7d5b9\x2d0570\x2d4e5c\x2db3da\x2d9dfcb924b83d.mount: Deactivated successfully.
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.315 254096 DEBUG oslo_concurrency.processutils [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.316 254096 DEBUG nova.virt.libvirt.vif [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:33:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-2038779180',display_name='tempest-ServersNegativeTestJSON-server-2038779180',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-2038779180',id=37,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d7c4dbc1eb44f39aa7ccb9b6363e554',ramdisk_id='',reservation_id='r-j2n2s8mj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-549107942',owner_user_name='tempest-ServersNegativeTestJSON-549107942-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:33:34Z,user_data=None,user_id='b228702c02db4cb69105bb4c939c15d7',uuid=a3d5d205-98f0-4820-a96c-7f3e59d0cdd9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "660536bc-d4bf-4a4b-9515-06043951c25e", "address": "fa:16:3e:10:46:64", "network": {"id": "3960d4c5-60d7-49e3-b26d-f1317dd96f9f", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1770094643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d7c4dbc1eb44f39aa7ccb9b6363e554", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660536bc-d4", "ovs_interfaceid": "660536bc-d4bf-4a4b-9515-06043951c25e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.317 254096 DEBUG nova.network.os_vif_util [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Converting VIF {"id": "660536bc-d4bf-4a4b-9515-06043951c25e", "address": "fa:16:3e:10:46:64", "network": {"id": "3960d4c5-60d7-49e3-b26d-f1317dd96f9f", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1770094643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d7c4dbc1eb44f39aa7ccb9b6363e554", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660536bc-d4", "ovs_interfaceid": "660536bc-d4bf-4a4b-9515-06043951c25e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.317 254096 DEBUG nova.network.os_vif_util [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:46:64,bridge_name='br-int',has_traffic_filtering=True,id=660536bc-d4bf-4a4b-9515-06043951c25e,network=Network(3960d4c5-60d7-49e3-b26d-f1317dd96f9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap660536bc-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.318 254096 DEBUG nova.objects.instance [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lazy-loading 'pci_devices' on Instance uuid a3d5d205-98f0-4820-a96c-7f3e59d0cdd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:33:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1387: 321 pgs: 321 active+clean; 167 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.338 254096 DEBUG nova.virt.libvirt.driver [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:33:39 compute-0 nova_compute[254092]:   <uuid>a3d5d205-98f0-4820-a96c-7f3e59d0cdd9</uuid>
Nov 25 16:33:39 compute-0 nova_compute[254092]:   <name>instance-00000025</name>
Nov 25 16:33:39 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:33:39 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:33:39 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:33:39 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:       <nova:name>tempest-ServersNegativeTestJSON-server-2038779180</nova:name>
Nov 25 16:33:39 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:33:38</nova:creationTime>
Nov 25 16:33:39 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:33:39 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:33:39 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:33:39 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:33:39 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:33:39 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:33:39 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:33:39 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:33:39 compute-0 nova_compute[254092]:         <nova:user uuid="b228702c02db4cb69105bb4c939c15d7">tempest-ServersNegativeTestJSON-549107942-project-member</nova:user>
Nov 25 16:33:39 compute-0 nova_compute[254092]:         <nova:project uuid="2d7c4dbc1eb44f39aa7ccb9b6363e554">tempest-ServersNegativeTestJSON-549107942</nova:project>
Nov 25 16:33:39 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:33:39 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:33:39 compute-0 nova_compute[254092]:         <nova:port uuid="660536bc-d4bf-4a4b-9515-06043951c25e">
Nov 25 16:33:39 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:33:39 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:33:39 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:33:39 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:33:39 compute-0 nova_compute[254092]:     <system>
Nov 25 16:33:39 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:33:39 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:33:39 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:33:39 compute-0 nova_compute[254092]:       <entry name="serial">a3d5d205-98f0-4820-a96c-7f3e59d0cdd9</entry>
Nov 25 16:33:39 compute-0 nova_compute[254092]:       <entry name="uuid">a3d5d205-98f0-4820-a96c-7f3e59d0cdd9</entry>
Nov 25 16:33:39 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     </system>
Nov 25 16:33:39 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:33:39 compute-0 nova_compute[254092]:   <os>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:   </os>
Nov 25 16:33:39 compute-0 nova_compute[254092]:   <features>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:   </features>
Nov 25 16:33:39 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:33:39 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:33:39 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:33:39 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:33:39 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:33:39 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/a3d5d205-98f0-4820-a96c-7f3e59d0cdd9_disk">
Nov 25 16:33:39 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:       </source>
Nov 25 16:33:39 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:33:39 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:33:39 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:33:39 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/a3d5d205-98f0-4820-a96c-7f3e59d0cdd9_disk.config">
Nov 25 16:33:39 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:       </source>
Nov 25 16:33:39 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:33:39 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:33:39 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:33:39 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:10:46:64"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:       <target dev="tap660536bc-d4"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:33:39 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/a3d5d205-98f0-4820-a96c-7f3e59d0cdd9/console.log" append="off"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     <video>
Nov 25 16:33:39 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     </video>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:33:39 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:33:39 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:33:39 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:33:39 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:33:39 compute-0 nova_compute[254092]: </domain>
Nov 25 16:33:39 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.339 254096 DEBUG nova.compute.manager [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Preparing to wait for external event network-vif-plugged-660536bc-d4bf-4a4b-9515-06043951c25e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.339 254096 DEBUG oslo_concurrency.lockutils [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Acquiring lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.339 254096 DEBUG oslo_concurrency.lockutils [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.339 254096 DEBUG oslo_concurrency.lockutils [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.340 254096 DEBUG nova.virt.libvirt.vif [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:33:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-2038779180',display_name='tempest-ServersNegativeTestJSON-server-2038779180',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-2038779180',id=37,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d7c4dbc1eb44f39aa7ccb9b6363e554',ramdisk_id='',reservation_id='r-j2n2s8mj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-549107942',owner_user_name='tempest-ServersNegativeTestJSON-549107942-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:33:34Z,user_data=None,user_id='b228702c02db4cb69105bb4c939c15d7',uuid=a3d5d205-98f0-4820-a96c-7f3e59d0cdd9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "660536bc-d4bf-4a4b-9515-06043951c25e", "address": "fa:16:3e:10:46:64", "network": {"id": "3960d4c5-60d7-49e3-b26d-f1317dd96f9f", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1770094643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d7c4dbc1eb44f39aa7ccb9b6363e554", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660536bc-d4", "ovs_interfaceid": "660536bc-d4bf-4a4b-9515-06043951c25e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.340 254096 DEBUG nova.network.os_vif_util [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Converting VIF {"id": "660536bc-d4bf-4a4b-9515-06043951c25e", "address": "fa:16:3e:10:46:64", "network": {"id": "3960d4c5-60d7-49e3-b26d-f1317dd96f9f", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1770094643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d7c4dbc1eb44f39aa7ccb9b6363e554", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660536bc-d4", "ovs_interfaceid": "660536bc-d4bf-4a4b-9515-06043951c25e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.342 254096 DEBUG nova.network.os_vif_util [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:46:64,bridge_name='br-int',has_traffic_filtering=True,id=660536bc-d4bf-4a4b-9515-06043951c25e,network=Network(3960d4c5-60d7-49e3-b26d-f1317dd96f9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap660536bc-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.342 254096 DEBUG os_vif [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:46:64,bridge_name='br-int',has_traffic_filtering=True,id=660536bc-d4bf-4a4b-9515-06043951c25e,network=Network(3960d4c5-60d7-49e3-b26d-f1317dd96f9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap660536bc-d4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.343 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.343 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.343 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.345 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.346 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap660536bc-d4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.346 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap660536bc-d4, col_values=(('external_ids', {'iface-id': '660536bc-d4bf-4a4b-9515-06043951c25e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:10:46:64', 'vm-uuid': 'a3d5d205-98f0-4820-a96c-7f3e59d0cdd9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.348 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:39 compute-0 NetworkManager[48891]: <info>  [1764088419.3494] manager: (tap660536bc-d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/119)
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.350 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.354 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.355 254096 INFO os_vif [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:46:64,bridge_name='br-int',has_traffic_filtering=True,id=660536bc-d4bf-4a4b-9515-06043951c25e,network=Network(3960d4c5-60d7-49e3-b26d-f1317dd96f9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap660536bc-d4')
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.464 254096 INFO nova.virt.libvirt.driver [None req-ec5ac2db-0fec-4641-944c-53dea2d3849d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Deleting instance files /var/lib/nova/instances/800c66e3-ee9f-4766-92f2-ecda5671cde3_del
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.465 254096 INFO nova.virt.libvirt.driver [None req-ec5ac2db-0fec-4641-944c-53dea2d3849d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Deletion of /var/lib/nova/instances/800c66e3-ee9f-4766-92f2-ecda5671cde3_del complete
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.475 254096 DEBUG nova.virt.libvirt.driver [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.475 254096 DEBUG nova.virt.libvirt.driver [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.476 254096 DEBUG nova.virt.libvirt.driver [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] No VIF found with MAC fa:16:3e:10:46:64, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.476 254096 INFO nova.virt.libvirt.driver [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Using config drive
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.492 254096 DEBUG nova.storage.rbd_utils [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] rbd image a3d5d205-98f0-4820-a96c-7f3e59d0cdd9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.549 254096 INFO nova.compute.manager [None req-ec5ac2db-0fec-4641-944c-53dea2d3849d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Took 0.70 seconds to destroy the instance on the hypervisor.
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.549 254096 DEBUG oslo.service.loopingcall [None req-ec5ac2db-0fec-4641-944c-53dea2d3849d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.550 254096 DEBUG nova.compute.manager [-] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.550 254096 DEBUG nova.network.neutron [-] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.592 254096 DEBUG nova.compute.manager [req-8a83d813-edc1-4b8c-b610-7fb0fe22ed23 req-8a9b1478-8ff4-43fb-a068-94878cdca4db a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Received event network-vif-deleted-f95d61ca-d58c-4f07-879f-5e5412976e42 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.910 254096 DEBUG nova.network.neutron [req-1ac19023-0d02-4e97-80d8-7eecacd292ce req-c091c037-7e82-431e-bd81-9c73c4dc9217 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Updated VIF entry in instance network info cache for port 660536bc-d4bf-4a4b-9515-06043951c25e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.911 254096 DEBUG nova.network.neutron [req-1ac19023-0d02-4e97-80d8-7eecacd292ce req-c091c037-7e82-431e-bd81-9c73c4dc9217 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Updating instance_info_cache with network_info: [{"id": "660536bc-d4bf-4a4b-9515-06043951c25e", "address": "fa:16:3e:10:46:64", "network": {"id": "3960d4c5-60d7-49e3-b26d-f1317dd96f9f", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1770094643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d7c4dbc1eb44f39aa7ccb9b6363e554", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660536bc-d4", "ovs_interfaceid": "660536bc-d4bf-4a4b-9515-06043951c25e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:33:39 compute-0 nova_compute[254092]: 2025-11-25 16:33:39.927 254096 DEBUG oslo_concurrency.lockutils [req-1ac19023-0d02-4e97-80d8-7eecacd292ce req-c091c037-7e82-431e-bd81-9c73c4dc9217 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-a3d5d205-98f0-4820-a96c-7f3e59d0cdd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:33:39 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/745096568' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:33:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:33:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:33:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:33:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:33:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:33:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:33:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:33:40
Nov 25 16:33:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:33:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:33:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', 'images', 'default.rgw.log', 'default.rgw.control', 'vms', 'volumes', '.rgw.root', 'backups']
Nov 25 16:33:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:33:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:33:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:33:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:33:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:33:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:33:40 compute-0 nova_compute[254092]: 2025-11-25 16:33:40.291 254096 INFO nova.virt.libvirt.driver [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Creating config drive at /var/lib/nova/instances/a3d5d205-98f0-4820-a96c-7f3e59d0cdd9/disk.config
Nov 25 16:33:40 compute-0 nova_compute[254092]: 2025-11-25 16:33:40.296 254096 DEBUG oslo_concurrency.processutils [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a3d5d205-98f0-4820-a96c-7f3e59d0cdd9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdolixod7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:33:40 compute-0 nova_compute[254092]: 2025-11-25 16:33:40.432 254096 DEBUG oslo_concurrency.processutils [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a3d5d205-98f0-4820-a96c-7f3e59d0cdd9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdolixod7" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:33:40 compute-0 nova_compute[254092]: 2025-11-25 16:33:40.453 254096 DEBUG nova.storage.rbd_utils [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] rbd image a3d5d205-98f0-4820-a96c-7f3e59d0cdd9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:33:40 compute-0 nova_compute[254092]: 2025-11-25 16:33:40.457 254096 DEBUG oslo_concurrency.processutils [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a3d5d205-98f0-4820-a96c-7f3e59d0cdd9/disk.config a3d5d205-98f0-4820-a96c-7f3e59d0cdd9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:33:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:33:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:33:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:33:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:33:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:33:40 compute-0 nova_compute[254092]: 2025-11-25 16:33:40.602 254096 DEBUG oslo_concurrency.processutils [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a3d5d205-98f0-4820-a96c-7f3e59d0cdd9/disk.config a3d5d205-98f0-4820-a96c-7f3e59d0cdd9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:33:40 compute-0 nova_compute[254092]: 2025-11-25 16:33:40.603 254096 INFO nova.virt.libvirt.driver [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Deleting local config drive /var/lib/nova/instances/a3d5d205-98f0-4820-a96c-7f3e59d0cdd9/disk.config because it was imported into RBD.
Nov 25 16:33:40 compute-0 kernel: tap660536bc-d4: entered promiscuous mode
Nov 25 16:33:40 compute-0 systemd-udevd[297880]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:33:40 compute-0 NetworkManager[48891]: <info>  [1764088420.6508] manager: (tap660536bc-d4): new Tun device (/org/freedesktop/NetworkManager/Devices/120)
Nov 25 16:33:40 compute-0 ovn_controller[153477]: 2025-11-25T16:33:40Z|00269|binding|INFO|Claiming lport 660536bc-d4bf-4a4b-9515-06043951c25e for this chassis.
Nov 25 16:33:40 compute-0 ovn_controller[153477]: 2025-11-25T16:33:40Z|00270|binding|INFO|660536bc-d4bf-4a4b-9515-06043951c25e: Claiming fa:16:3e:10:46:64 10.100.0.8
Nov 25 16:33:40 compute-0 nova_compute[254092]: 2025-11-25 16:33:40.652 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:40 compute-0 NetworkManager[48891]: <info>  [1764088420.6621] device (tap660536bc-d4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:33:40 compute-0 NetworkManager[48891]: <info>  [1764088420.6634] device (tap660536bc-d4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:33:40 compute-0 ovn_controller[153477]: 2025-11-25T16:33:40Z|00271|binding|INFO|Setting lport 660536bc-d4bf-4a4b-9515-06043951c25e ovn-installed in OVS
Nov 25 16:33:40 compute-0 ovn_controller[153477]: 2025-11-25T16:33:40Z|00272|binding|INFO|Setting lport 660536bc-d4bf-4a4b-9515-06043951c25e up in Southbound
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:40.669 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:46:64 10.100.0.8'], port_security=['fa:16:3e:10:46:64 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'a3d5d205-98f0-4820-a96c-7f3e59d0cdd9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3960d4c5-60d7-49e3-b26d-f1317dd96f9f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d7c4dbc1eb44f39aa7ccb9b6363e554', 'neutron:revision_number': '2', 'neutron:security_group_ids': '11c41536-eaec-4c18-861e-ff1c71d0a6fa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9a9fa8a-e323-43f7-a155-b2b5010363da, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=660536bc-d4bf-4a4b-9515-06043951c25e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:33:40 compute-0 nova_compute[254092]: 2025-11-25 16:33:40.669 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:40 compute-0 nova_compute[254092]: 2025-11-25 16:33:40.671 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:40.671 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 660536bc-d4bf-4a4b-9515-06043951c25e in datapath 3960d4c5-60d7-49e3-b26d-f1317dd96f9f bound to our chassis
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:40.673 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3960d4c5-60d7-49e3-b26d-f1317dd96f9f
Nov 25 16:33:40 compute-0 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 16:33:40 compute-0 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.2 total, 600.0 interval
                                           Cumulative writes: 14K writes, 61K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 14K writes, 4385 syncs, 3.39 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8943 writes, 36K keys, 8943 commit groups, 1.0 writes per commit group, ingest: 35.79 MB, 0.06 MB/s
                                           Interval WAL: 8943 writes, 3395 syncs, 2.63 writes per sync, written: 0.03 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 16:33:40 compute-0 systemd-machined[216343]: New machine qemu-43-instance-00000025.
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:40.688 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8e519bae-0297-4fe4-afea-fc1b92ce6800]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:40.689 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3960d4c5-61 in ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:40.691 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3960d4c5-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:40.691 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[059abd40-1b38-4d01-9070-e014d6b52c62]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:40.692 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[97edcee7-39bc-4c27-9299-9440b5e60512]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:40 compute-0 systemd[1]: Started Virtual Machine qemu-43-instance-00000025.
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:40.702 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[4d990e66-3709-4313-a807-306dae51a08a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:40.716 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[eee25d40-fdc5-4edc-a94b-f6c5b4336b0f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:40.741 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1f045ced-8b42-476a-ab58-3c01d7126142]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:40 compute-0 NetworkManager[48891]: <info>  [1764088420.7473] manager: (tap3960d4c5-60): new Veth device (/org/freedesktop/NetworkManager/Devices/121)
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:40.747 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[71da8324-ed4e-42fb-9db0-c3e6665403ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:40.771 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8beab935-62cf-43ba-a598-df116236d1a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:40.774 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[948dfdcc-330b-4998-a98a-81800efb4d30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:40 compute-0 NetworkManager[48891]: <info>  [1764088420.7946] device (tap3960d4c5-60): carrier: link connected
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:40.800 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[cd067047-6417-4938-a7e2-05f62d98e84b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:40.816 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[737c75df-e41e-432c-990f-54318d752406]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3960d4c5-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:42:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 77], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 487838, 'reachable_time': 18390, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298097, 'error': None, 'target': 'ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:40.833 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c02a7259-f3bf-4665-b04a-93f199b5b8ce]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8f:428f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 487838, 'tstamp': 487838}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298098, 'error': None, 'target': 'ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:40.849 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3ce8f8aa-59dc-45d8-9bdf-4bed9dbc951d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3960d4c5-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:42:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 77], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 487838, 'reachable_time': 18390, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 298099, 'error': None, 'target': 'ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:40.876 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[92b069db-4281-413b-ada9-4e12e80bac67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:40.934 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5c83e7a5-3216-4bcf-b954-c6b7be2dba6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:40.936 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3960d4c5-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:40.936 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:40.936 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3960d4c5-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:33:40 compute-0 nova_compute[254092]: 2025-11-25 16:33:40.938 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:40 compute-0 NetworkManager[48891]: <info>  [1764088420.9391] manager: (tap3960d4c5-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/122)
Nov 25 16:33:40 compute-0 kernel: tap3960d4c5-60: entered promiscuous mode
Nov 25 16:33:40 compute-0 nova_compute[254092]: 2025-11-25 16:33:40.940 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:40.941 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3960d4c5-60, col_values=(('external_ids', {'iface-id': '9dd2e935-32e0-43d1-8a28-23e6ab045e91'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:33:40 compute-0 nova_compute[254092]: 2025-11-25 16:33:40.941 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:40 compute-0 ovn_controller[153477]: 2025-11-25T16:33:40Z|00273|binding|INFO|Releasing lport 9dd2e935-32e0-43d1-8a28-23e6ab045e91 from this chassis (sb_readonly=0)
Nov 25 16:33:40 compute-0 nova_compute[254092]: 2025-11-25 16:33:40.957 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:40.958 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3960d4c5-60d7-49e3-b26d-f1317dd96f9f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3960d4c5-60d7-49e3-b26d-f1317dd96f9f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:40.959 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b4a8e1c6-b5a8-4fb9-ae19-a3752439173c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:40.960 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-3960d4c5-60d7-49e3-b26d-f1317dd96f9f
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/3960d4c5-60d7-49e3-b26d-f1317dd96f9f.pid.haproxy
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 3960d4c5-60d7-49e3-b26d-f1317dd96f9f
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:33:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:33:40.961 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f', 'env', 'PROCESS_TAG=haproxy-3960d4c5-60d7-49e3-b26d-f1317dd96f9f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3960d4c5-60d7-49e3-b26d-f1317dd96f9f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:33:40 compute-0 ceph-mon[74985]: pgmap v1387: 321 pgs: 321 active+clean; 167 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Nov 25 16:33:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1388: 321 pgs: 321 active+clean; 88 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 1.8 MiB/s wr, 84 op/s
Nov 25 16:33:41 compute-0 podman[298131]: 2025-11-25 16:33:41.279804862 +0000 UTC m=+0.024314578 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:33:41 compute-0 podman[298131]: 2025-11-25 16:33:41.409233428 +0000 UTC m=+0.153743114 container create f13276be76992c60475338c687b99c5f317b2e436cba236d236d083e893cbbd0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 16:33:41 compute-0 nova_compute[254092]: 2025-11-25 16:33:41.427 254096 DEBUG nova.network.neutron [-] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:33:41 compute-0 nova_compute[254092]: 2025-11-25 16:33:41.443 254096 INFO nova.compute.manager [-] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Took 1.89 seconds to deallocate network for instance.
Nov 25 16:33:41 compute-0 systemd[1]: Started libpod-conmon-f13276be76992c60475338c687b99c5f317b2e436cba236d236d083e893cbbd0.scope.
Nov 25 16:33:41 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:33:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b5991d4fb967b0680449d6a41a8175c21fb80442f2c06a6e211d9a0ef3cb1a6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:33:41 compute-0 nova_compute[254092]: 2025-11-25 16:33:41.487 254096 DEBUG oslo_concurrency.lockutils [None req-ec5ac2db-0fec-4641-944c-53dea2d3849d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:33:41 compute-0 nova_compute[254092]: 2025-11-25 16:33:41.487 254096 DEBUG oslo_concurrency.lockutils [None req-ec5ac2db-0fec-4641-944c-53dea2d3849d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:33:41 compute-0 podman[298131]: 2025-11-25 16:33:41.493875132 +0000 UTC m=+0.238384838 container init f13276be76992c60475338c687b99c5f317b2e436cba236d236d083e893cbbd0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 16:33:41 compute-0 podman[298131]: 2025-11-25 16:33:41.499258048 +0000 UTC m=+0.243767734 container start f13276be76992c60475338c687b99c5f317b2e436cba236d236d083e893cbbd0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 16:33:41 compute-0 neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f[298147]: [NOTICE]   (298151) : New worker (298153) forked
Nov 25 16:33:41 compute-0 neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f[298147]: [NOTICE]   (298151) : Loading success.
Nov 25 16:33:41 compute-0 nova_compute[254092]: 2025-11-25 16:33:41.586 254096 DEBUG oslo_concurrency.processutils [None req-ec5ac2db-0fec-4641-944c-53dea2d3849d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:33:41 compute-0 nova_compute[254092]: 2025-11-25 16:33:41.682 254096 DEBUG nova.compute.manager [req-969cf648-19b2-42f6-8a72-aa15b8b6e7b7 req-b3a56480-e1a6-423f-966b-b34fff7c7225 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Received event network-vif-deleted-82f63517-7636-46bf-b4e1-ba191ddad018 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:33:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:33:41 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1977362766' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:33:42 compute-0 nova_compute[254092]: 2025-11-25 16:33:42.009 254096 DEBUG oslo_concurrency.processutils [None req-ec5ac2db-0fec-4641-944c-53dea2d3849d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:33:42 compute-0 nova_compute[254092]: 2025-11-25 16:33:42.014 254096 DEBUG nova.compute.provider_tree [None req-ec5ac2db-0fec-4641-944c-53dea2d3849d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:33:42 compute-0 nova_compute[254092]: 2025-11-25 16:33:42.031 254096 DEBUG nova.scheduler.client.report [None req-ec5ac2db-0fec-4641-944c-53dea2d3849d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:33:42 compute-0 ceph-mon[74985]: pgmap v1388: 321 pgs: 321 active+clean; 88 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 1.8 MiB/s wr, 84 op/s
Nov 25 16:33:42 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1977362766' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:33:42 compute-0 nova_compute[254092]: 2025-11-25 16:33:42.035 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:42 compute-0 nova_compute[254092]: 2025-11-25 16:33:42.061 254096 DEBUG oslo_concurrency.lockutils [None req-ec5ac2db-0fec-4641-944c-53dea2d3849d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.574s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:33:42 compute-0 nova_compute[254092]: 2025-11-25 16:33:42.095 254096 INFO nova.scheduler.client.report [None req-ec5ac2db-0fec-4641-944c-53dea2d3849d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Deleted allocations for instance 800c66e3-ee9f-4766-92f2-ecda5671cde3
Nov 25 16:33:42 compute-0 nova_compute[254092]: 2025-11-25 16:33:42.161 254096 DEBUG oslo_concurrency.lockutils [None req-ec5ac2db-0fec-4641-944c-53dea2d3849d c2f73d928f424db49a62e7dff2b1ce14 36fd407edf16424e96041f57fccb9e2b - - default default] Lock "800c66e3-ee9f-4766-92f2-ecda5671cde3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.316s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:33:42 compute-0 nova_compute[254092]: 2025-11-25 16:33:42.667 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:43 compute-0 nova_compute[254092]: 2025-11-25 16:33:43.141 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088423.1406972, a3d5d205-98f0-4820-a96c-7f3e59d0cdd9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:33:43 compute-0 nova_compute[254092]: 2025-11-25 16:33:43.142 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] VM Started (Lifecycle Event)
Nov 25 16:33:43 compute-0 nova_compute[254092]: 2025-11-25 16:33:43.160 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:33:43 compute-0 nova_compute[254092]: 2025-11-25 16:33:43.165 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088423.140925, a3d5d205-98f0-4820-a96c-7f3e59d0cdd9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:33:43 compute-0 nova_compute[254092]: 2025-11-25 16:33:43.165 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] VM Paused (Lifecycle Event)
Nov 25 16:33:43 compute-0 nova_compute[254092]: 2025-11-25 16:33:43.192 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:33:43 compute-0 nova_compute[254092]: 2025-11-25 16:33:43.197 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:33:43 compute-0 nova_compute[254092]: 2025-11-25 16:33:43.216 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:33:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1389: 321 pgs: 321 active+clean; 88 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 1.8 MiB/s wr, 82 op/s
Nov 25 16:33:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:33:44 compute-0 nova_compute[254092]: 2025-11-25 16:33:44.349 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:44 compute-0 ceph-mon[74985]: pgmap v1389: 321 pgs: 321 active+clean; 88 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 1.8 MiB/s wr, 82 op/s
Nov 25 16:33:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1390: 321 pgs: 321 active+clean; 88 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 1.8 MiB/s wr, 90 op/s
Nov 25 16:33:45 compute-0 nova_compute[254092]: 2025-11-25 16:33:45.345 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:45 compute-0 nova_compute[254092]: 2025-11-25 16:33:45.947 254096 DEBUG nova.compute.manager [req-acaddc56-5a75-4fdd-b9c5-8f937973eed4 req-c5988c96-4045-4aa6-a0ec-a3d2a4816b5c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Received event network-vif-plugged-660536bc-d4bf-4a4b-9515-06043951c25e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:33:45 compute-0 nova_compute[254092]: 2025-11-25 16:33:45.948 254096 DEBUG oslo_concurrency.lockutils [req-acaddc56-5a75-4fdd-b9c5-8f937973eed4 req-c5988c96-4045-4aa6-a0ec-a3d2a4816b5c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:33:45 compute-0 nova_compute[254092]: 2025-11-25 16:33:45.948 254096 DEBUG oslo_concurrency.lockutils [req-acaddc56-5a75-4fdd-b9c5-8f937973eed4 req-c5988c96-4045-4aa6-a0ec-a3d2a4816b5c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:33:45 compute-0 nova_compute[254092]: 2025-11-25 16:33:45.948 254096 DEBUG oslo_concurrency.lockutils [req-acaddc56-5a75-4fdd-b9c5-8f937973eed4 req-c5988c96-4045-4aa6-a0ec-a3d2a4816b5c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:33:45 compute-0 nova_compute[254092]: 2025-11-25 16:33:45.948 254096 DEBUG nova.compute.manager [req-acaddc56-5a75-4fdd-b9c5-8f937973eed4 req-c5988c96-4045-4aa6-a0ec-a3d2a4816b5c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Processing event network-vif-plugged-660536bc-d4bf-4a4b-9515-06043951c25e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:33:45 compute-0 nova_compute[254092]: 2025-11-25 16:33:45.949 254096 DEBUG nova.compute.manager [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:33:45 compute-0 nova_compute[254092]: 2025-11-25 16:33:45.953 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088425.953206, a3d5d205-98f0-4820-a96c-7f3e59d0cdd9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:33:45 compute-0 nova_compute[254092]: 2025-11-25 16:33:45.953 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] VM Resumed (Lifecycle Event)
Nov 25 16:33:45 compute-0 nova_compute[254092]: 2025-11-25 16:33:45.955 254096 DEBUG nova.virt.libvirt.driver [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:33:45 compute-0 nova_compute[254092]: 2025-11-25 16:33:45.959 254096 INFO nova.virt.libvirt.driver [-] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Instance spawned successfully.
Nov 25 16:33:45 compute-0 nova_compute[254092]: 2025-11-25 16:33:45.959 254096 DEBUG nova.virt.libvirt.driver [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:33:45 compute-0 nova_compute[254092]: 2025-11-25 16:33:45.985 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:33:45 compute-0 nova_compute[254092]: 2025-11-25 16:33:45.994 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:33:46 compute-0 nova_compute[254092]: 2025-11-25 16:33:45.999 254096 DEBUG nova.virt.libvirt.driver [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:33:46 compute-0 nova_compute[254092]: 2025-11-25 16:33:46.000 254096 DEBUG nova.virt.libvirt.driver [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:33:46 compute-0 nova_compute[254092]: 2025-11-25 16:33:46.001 254096 DEBUG nova.virt.libvirt.driver [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:33:46 compute-0 nova_compute[254092]: 2025-11-25 16:33:46.002 254096 DEBUG nova.virt.libvirt.driver [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:33:46 compute-0 nova_compute[254092]: 2025-11-25 16:33:46.003 254096 DEBUG nova.virt.libvirt.driver [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:33:46 compute-0 nova_compute[254092]: 2025-11-25 16:33:46.004 254096 DEBUG nova.virt.libvirt.driver [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:33:46 compute-0 nova_compute[254092]: 2025-11-25 16:33:46.037 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:33:46 compute-0 nova_compute[254092]: 2025-11-25 16:33:46.092 254096 INFO nova.compute.manager [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Took 12.00 seconds to spawn the instance on the hypervisor.
Nov 25 16:33:46 compute-0 nova_compute[254092]: 2025-11-25 16:33:46.093 254096 DEBUG nova.compute.manager [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:33:46 compute-0 nova_compute[254092]: 2025-11-25 16:33:46.154 254096 INFO nova.compute.manager [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Took 12.95 seconds to build instance.
Nov 25 16:33:46 compute-0 nova_compute[254092]: 2025-11-25 16:33:46.168 254096 DEBUG oslo_concurrency.lockutils [None req-e337f32e-2965-412d-8109-3ecba7262cfa b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.029s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:33:46 compute-0 ceph-mon[74985]: pgmap v1390: 321 pgs: 321 active+clean; 88 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 1.8 MiB/s wr, 90 op/s
Nov 25 16:33:46 compute-0 nova_compute[254092]: 2025-11-25 16:33:46.808 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088411.807806, 1d318e56-4a8c-4806-aa87-e837708f2a1f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:33:46 compute-0 nova_compute[254092]: 2025-11-25 16:33:46.809 254096 INFO nova.compute.manager [-] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] VM Stopped (Lifecycle Event)
Nov 25 16:33:46 compute-0 nova_compute[254092]: 2025-11-25 16:33:46.831 254096 DEBUG nova.compute.manager [None req-d0f70237-1c1c-4f40-8905-13399e7c91d0 - - - - - -] [instance: 1d318e56-4a8c-4806-aa87-e837708f2a1f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:33:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1391: 321 pgs: 321 active+clean; 88 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 82 KiB/s wr, 72 op/s
Nov 25 16:33:47 compute-0 nova_compute[254092]: 2025-11-25 16:33:47.668 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:48 compute-0 ceph-mon[74985]: pgmap v1391: 321 pgs: 321 active+clean; 88 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 82 KiB/s wr, 72 op/s
Nov 25 16:33:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:33:49 compute-0 nova_compute[254092]: 2025-11-25 16:33:49.306 254096 DEBUG nova.compute.manager [req-00fbfd9d-0856-4eae-97c9-1ab5a67e1261 req-aa9ed94f-76ab-407f-9cef-cfedee8ffdc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Received event network-vif-plugged-660536bc-d4bf-4a4b-9515-06043951c25e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:33:49 compute-0 nova_compute[254092]: 2025-11-25 16:33:49.307 254096 DEBUG oslo_concurrency.lockutils [req-00fbfd9d-0856-4eae-97c9-1ab5a67e1261 req-aa9ed94f-76ab-407f-9cef-cfedee8ffdc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:33:49 compute-0 nova_compute[254092]: 2025-11-25 16:33:49.307 254096 DEBUG oslo_concurrency.lockutils [req-00fbfd9d-0856-4eae-97c9-1ab5a67e1261 req-aa9ed94f-76ab-407f-9cef-cfedee8ffdc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:33:49 compute-0 nova_compute[254092]: 2025-11-25 16:33:49.307 254096 DEBUG oslo_concurrency.lockutils [req-00fbfd9d-0856-4eae-97c9-1ab5a67e1261 req-aa9ed94f-76ab-407f-9cef-cfedee8ffdc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:33:49 compute-0 nova_compute[254092]: 2025-11-25 16:33:49.308 254096 DEBUG nova.compute.manager [req-00fbfd9d-0856-4eae-97c9-1ab5a67e1261 req-aa9ed94f-76ab-407f-9cef-cfedee8ffdc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] No waiting events found dispatching network-vif-plugged-660536bc-d4bf-4a4b-9515-06043951c25e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:33:49 compute-0 nova_compute[254092]: 2025-11-25 16:33:49.308 254096 WARNING nova.compute.manager [req-00fbfd9d-0856-4eae-97c9-1ab5a67e1261 req-aa9ed94f-76ab-407f-9cef-cfedee8ffdc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Received unexpected event network-vif-plugged-660536bc-d4bf-4a4b-9515-06043951c25e for instance with vm_state active and task_state None.
Nov 25 16:33:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1392: 321 pgs: 321 active+clean; 88 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 14 KiB/s wr, 37 op/s
Nov 25 16:33:49 compute-0 nova_compute[254092]: 2025-11-25 16:33:49.352 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:50 compute-0 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 16:33:50 compute-0 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2401.2 total, 600.0 interval
                                           Cumulative writes: 16K writes, 66K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.03 MB/s
                                           Cumulative WAL: 16K writes, 5042 syncs, 3.32 writes per sync, written: 0.06 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 9665 writes, 37K keys, 9665 commit groups, 1.0 writes per commit group, ingest: 41.58 MB, 0.07 MB/s
                                           Interval WAL: 9665 writes, 3713 syncs, 2.60 writes per sync, written: 0.04 GB, 0.07 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 16:33:50 compute-0 ceph-mon[74985]: pgmap v1392: 321 pgs: 321 active+clean; 88 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 14 KiB/s wr, 37 op/s
Nov 25 16:33:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:33:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:33:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:33:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:33:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Nov 25 16:33:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:33:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:33:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:33:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:33:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:33:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 25 16:33:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:33:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 16:33:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:33:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:33:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:33:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 16:33:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:33:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 16:33:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:33:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:33:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:33:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 16:33:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1393: 321 pgs: 321 active+clean; 88 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Nov 25 16:33:52 compute-0 nova_compute[254092]: 2025-11-25 16:33:52.661 254096 DEBUG oslo_concurrency.lockutils [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Acquiring lock "f77f4bf4-5651-4f45-af4b-9ef0d68df364" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:33:52 compute-0 nova_compute[254092]: 2025-11-25 16:33:52.662 254096 DEBUG oslo_concurrency.lockutils [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "f77f4bf4-5651-4f45-af4b-9ef0d68df364" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:33:52 compute-0 nova_compute[254092]: 2025-11-25 16:33:52.671 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:52 compute-0 nova_compute[254092]: 2025-11-25 16:33:52.676 254096 DEBUG nova.compute.manager [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:33:52 compute-0 ceph-mon[74985]: pgmap v1393: 321 pgs: 321 active+clean; 88 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Nov 25 16:33:52 compute-0 nova_compute[254092]: 2025-11-25 16:33:52.874 254096 DEBUG oslo_concurrency.lockutils [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:33:52 compute-0 nova_compute[254092]: 2025-11-25 16:33:52.874 254096 DEBUG oslo_concurrency.lockutils [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:33:52 compute-0 nova_compute[254092]: 2025-11-25 16:33:52.883 254096 DEBUG nova.virt.hardware [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:33:52 compute-0 nova_compute[254092]: 2025-11-25 16:33:52.883 254096 INFO nova.compute.claims [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:33:53 compute-0 nova_compute[254092]: 2025-11-25 16:33:53.186 254096 DEBUG oslo_concurrency.processutils [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:33:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1394: 321 pgs: 321 active+clean; 88 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 511 B/s wr, 73 op/s
Nov 25 16:33:53 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:33:53 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/794651088' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:33:53 compute-0 nova_compute[254092]: 2025-11-25 16:33:53.629 254096 DEBUG oslo_concurrency.processutils [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:33:53 compute-0 nova_compute[254092]: 2025-11-25 16:33:53.635 254096 DEBUG nova.compute.provider_tree [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:33:53 compute-0 nova_compute[254092]: 2025-11-25 16:33:53.656 254096 DEBUG nova.scheduler.client.report [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:33:53 compute-0 nova_compute[254092]: 2025-11-25 16:33:53.684 254096 DEBUG oslo_concurrency.lockutils [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:33:53 compute-0 nova_compute[254092]: 2025-11-25 16:33:53.685 254096 DEBUG nova.compute.manager [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:33:53 compute-0 nova_compute[254092]: 2025-11-25 16:33:53.735 254096 DEBUG nova.compute.manager [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:33:53 compute-0 nova_compute[254092]: 2025-11-25 16:33:53.735 254096 DEBUG nova.network.neutron [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:33:53 compute-0 nova_compute[254092]: 2025-11-25 16:33:53.754 254096 INFO nova.virt.libvirt.driver [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:33:53 compute-0 nova_compute[254092]: 2025-11-25 16:33:53.768 254096 DEBUG nova.compute.manager [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:33:53 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/794651088' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:33:53 compute-0 nova_compute[254092]: 2025-11-25 16:33:53.870 254096 DEBUG nova.compute.manager [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:33:53 compute-0 nova_compute[254092]: 2025-11-25 16:33:53.871 254096 DEBUG nova.virt.libvirt.driver [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:33:53 compute-0 nova_compute[254092]: 2025-11-25 16:33:53.872 254096 INFO nova.virt.libvirt.driver [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Creating image(s)
Nov 25 16:33:53 compute-0 nova_compute[254092]: 2025-11-25 16:33:53.894 254096 DEBUG nova.storage.rbd_utils [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] rbd image f77f4bf4-5651-4f45-af4b-9ef0d68df364_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:33:53 compute-0 nova_compute[254092]: 2025-11-25 16:33:53.917 254096 DEBUG nova.storage.rbd_utils [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] rbd image f77f4bf4-5651-4f45-af4b-9ef0d68df364_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:33:53 compute-0 nova_compute[254092]: 2025-11-25 16:33:53.937 254096 DEBUG nova.storage.rbd_utils [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] rbd image f77f4bf4-5651-4f45-af4b-9ef0d68df364_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:33:53 compute-0 nova_compute[254092]: 2025-11-25 16:33:53.941 254096 DEBUG oslo_concurrency.processutils [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:33:53 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:33:53 compute-0 nova_compute[254092]: 2025-11-25 16:33:53.984 254096 DEBUG nova.policy [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b228702c02db4cb69105bb4c939c15d7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2d7c4dbc1eb44f39aa7ccb9b6363e554', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:33:54 compute-0 nova_compute[254092]: 2025-11-25 16:33:54.049 254096 DEBUG oslo_concurrency.processutils [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:33:54 compute-0 nova_compute[254092]: 2025-11-25 16:33:54.050 254096 DEBUG oslo_concurrency.lockutils [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:33:54 compute-0 nova_compute[254092]: 2025-11-25 16:33:54.050 254096 DEBUG oslo_concurrency.lockutils [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:33:54 compute-0 nova_compute[254092]: 2025-11-25 16:33:54.051 254096 DEBUG oslo_concurrency.lockutils [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:33:54 compute-0 nova_compute[254092]: 2025-11-25 16:33:54.071 254096 DEBUG nova.storage.rbd_utils [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] rbd image f77f4bf4-5651-4f45-af4b-9ef0d68df364_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:33:54 compute-0 nova_compute[254092]: 2025-11-25 16:33:54.075 254096 DEBUG oslo_concurrency.processutils [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 f77f4bf4-5651-4f45-af4b-9ef0d68df364_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:33:54 compute-0 nova_compute[254092]: 2025-11-25 16:33:54.101 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088419.085196, 800c66e3-ee9f-4766-92f2-ecda5671cde3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:33:54 compute-0 nova_compute[254092]: 2025-11-25 16:33:54.102 254096 INFO nova.compute.manager [-] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] VM Stopped (Lifecycle Event)
Nov 25 16:33:54 compute-0 nova_compute[254092]: 2025-11-25 16:33:54.124 254096 DEBUG nova.compute.manager [None req-a947b1aa-fc7e-46a8-b262-a3b8a5aa325e - - - - - -] [instance: 800c66e3-ee9f-4766-92f2-ecda5671cde3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:33:54 compute-0 nova_compute[254092]: 2025-11-25 16:33:54.354 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:54 compute-0 nova_compute[254092]: 2025-11-25 16:33:54.371 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:54 compute-0 nova_compute[254092]: 2025-11-25 16:33:54.761 254096 DEBUG oslo_concurrency.processutils [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 f77f4bf4-5651-4f45-af4b-9ef0d68df364_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.686s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:33:54 compute-0 ceph-mon[74985]: pgmap v1394: 321 pgs: 321 active+clean; 88 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 511 B/s wr, 73 op/s
Nov 25 16:33:54 compute-0 nova_compute[254092]: 2025-11-25 16:33:54.799 254096 DEBUG nova.network.neutron [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Successfully created port: 62a6fc34-457f-4470-8bef-e1b9110d2f12 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:33:54 compute-0 nova_compute[254092]: 2025-11-25 16:33:54.838 254096 DEBUG nova.storage.rbd_utils [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] resizing rbd image f77f4bf4-5651-4f45-af4b-9ef0d68df364_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:33:55 compute-0 nova_compute[254092]: 2025-11-25 16:33:55.016 254096 DEBUG nova.objects.instance [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lazy-loading 'migration_context' on Instance uuid f77f4bf4-5651-4f45-af4b-9ef0d68df364 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:33:55 compute-0 nova_compute[254092]: 2025-11-25 16:33:55.031 254096 DEBUG nova.virt.libvirt.driver [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:33:55 compute-0 nova_compute[254092]: 2025-11-25 16:33:55.032 254096 DEBUG nova.virt.libvirt.driver [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Ensure instance console log exists: /var/lib/nova/instances/f77f4bf4-5651-4f45-af4b-9ef0d68df364/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:33:55 compute-0 nova_compute[254092]: 2025-11-25 16:33:55.032 254096 DEBUG oslo_concurrency.lockutils [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:33:55 compute-0 nova_compute[254092]: 2025-11-25 16:33:55.033 254096 DEBUG oslo_concurrency.lockutils [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:33:55 compute-0 nova_compute[254092]: 2025-11-25 16:33:55.033 254096 DEBUG oslo_concurrency.lockutils [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:33:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 16:33:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/650881314' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:33:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 16:33:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/650881314' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:33:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1395: 321 pgs: 321 active+clean; 126 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 97 op/s
Nov 25 16:33:55 compute-0 ovn_controller[153477]: 2025-11-25T16:33:55Z|00274|binding|INFO|Releasing lport 9dd2e935-32e0-43d1-8a28-23e6ab045e91 from this chassis (sb_readonly=0)
Nov 25 16:33:55 compute-0 nova_compute[254092]: 2025-11-25 16:33:55.445 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:55 compute-0 nova_compute[254092]: 2025-11-25 16:33:55.525 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:33:55 compute-0 nova_compute[254092]: 2025-11-25 16:33:55.548 254096 WARNING nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] While synchronizing instance power states, found 2 instances in the database and 1 instances on the hypervisor.
Nov 25 16:33:55 compute-0 nova_compute[254092]: 2025-11-25 16:33:55.549 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Triggering sync for uuid a3d5d205-98f0-4820-a96c-7f3e59d0cdd9 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 25 16:33:55 compute-0 nova_compute[254092]: 2025-11-25 16:33:55.549 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Triggering sync for uuid f77f4bf4-5651-4f45-af4b-9ef0d68df364 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 25 16:33:55 compute-0 nova_compute[254092]: 2025-11-25 16:33:55.550 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:33:55 compute-0 nova_compute[254092]: 2025-11-25 16:33:55.551 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:33:55 compute-0 nova_compute[254092]: 2025-11-25 16:33:55.551 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "f77f4bf4-5651-4f45-af4b-9ef0d68df364" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:33:55 compute-0 nova_compute[254092]: 2025-11-25 16:33:55.582 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.031s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:33:55 compute-0 ovn_controller[153477]: 2025-11-25T16:33:55Z|00275|binding|INFO|Releasing lport 9dd2e935-32e0-43d1-8a28-23e6ab045e91 from this chassis (sb_readonly=0)
Nov 25 16:33:55 compute-0 nova_compute[254092]: 2025-11-25 16:33:55.605 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/650881314' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:33:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/650881314' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:33:56 compute-0 nova_compute[254092]: 2025-11-25 16:33:56.482 254096 DEBUG nova.network.neutron [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Successfully updated port: 62a6fc34-457f-4470-8bef-e1b9110d2f12 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:33:56 compute-0 nova_compute[254092]: 2025-11-25 16:33:56.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:33:56 compute-0 nova_compute[254092]: 2025-11-25 16:33:56.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 16:33:56 compute-0 nova_compute[254092]: 2025-11-25 16:33:56.499 254096 DEBUG oslo_concurrency.lockutils [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Acquiring lock "refresh_cache-f77f4bf4-5651-4f45-af4b-9ef0d68df364" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:33:56 compute-0 nova_compute[254092]: 2025-11-25 16:33:56.499 254096 DEBUG oslo_concurrency.lockutils [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Acquired lock "refresh_cache-f77f4bf4-5651-4f45-af4b-9ef0d68df364" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:33:56 compute-0 nova_compute[254092]: 2025-11-25 16:33:56.500 254096 DEBUG nova.network.neutron [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:33:56 compute-0 nova_compute[254092]: 2025-11-25 16:33:56.609 254096 DEBUG nova.compute.manager [req-55859714-384b-41fc-9b31-69884a183b09 req-c13ec3b8-1745-4f7e-82de-5e5c203df9f0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Received event network-changed-62a6fc34-457f-4470-8bef-e1b9110d2f12 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:33:56 compute-0 nova_compute[254092]: 2025-11-25 16:33:56.610 254096 DEBUG nova.compute.manager [req-55859714-384b-41fc-9b31-69884a183b09 req-c13ec3b8-1745-4f7e-82de-5e5c203df9f0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Refreshing instance network info cache due to event network-changed-62a6fc34-457f-4470-8bef-e1b9110d2f12. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:33:56 compute-0 nova_compute[254092]: 2025-11-25 16:33:56.610 254096 DEBUG oslo_concurrency.lockutils [req-55859714-384b-41fc-9b31-69884a183b09 req-c13ec3b8-1745-4f7e-82de-5e5c203df9f0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-f77f4bf4-5651-4f45-af4b-9ef0d68df364" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:33:56 compute-0 nova_compute[254092]: 2025-11-25 16:33:56.719 254096 DEBUG nova.network.neutron [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:33:56 compute-0 ceph-mon[74985]: pgmap v1395: 321 pgs: 321 active+clean; 126 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 97 op/s
Nov 25 16:33:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1396: 321 pgs: 321 active+clean; 134 MiB data, 405 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 90 op/s
Nov 25 16:33:57 compute-0 nova_compute[254092]: 2025-11-25 16:33:57.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:33:57 compute-0 podman[298418]: 2025-11-25 16:33:57.639698069 +0000 UTC m=+0.061596760 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:33:57 compute-0 podman[298417]: 2025-11-25 16:33:57.649998408 +0000 UTC m=+0.072059893 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3)
Nov 25 16:33:57 compute-0 nova_compute[254092]: 2025-11-25 16:33:57.698 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:57 compute-0 podman[298419]: 2025-11-25 16:33:57.716537541 +0000 UTC m=+0.134878336 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 25 16:33:58 compute-0 ovn_controller[153477]: 2025-11-25T16:33:58Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:10:46:64 10.100.0.8
Nov 25 16:33:58 compute-0 ovn_controller[153477]: 2025-11-25T16:33:58Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:10:46:64 10.100.0.8
Nov 25 16:33:58 compute-0 nova_compute[254092]: 2025-11-25 16:33:58.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:33:58 compute-0 nova_compute[254092]: 2025-11-25 16:33:58.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:33:58 compute-0 nova_compute[254092]: 2025-11-25 16:33:58.815 254096 DEBUG nova.network.neutron [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Updating instance_info_cache with network_info: [{"id": "62a6fc34-457f-4470-8bef-e1b9110d2f12", "address": "fa:16:3e:7b:36:43", "network": {"id": "3960d4c5-60d7-49e3-b26d-f1317dd96f9f", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1770094643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d7c4dbc1eb44f39aa7ccb9b6363e554", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62a6fc34-45", "ovs_interfaceid": "62a6fc34-457f-4470-8bef-e1b9110d2f12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:33:58 compute-0 ceph-mon[74985]: pgmap v1396: 321 pgs: 321 active+clean; 134 MiB data, 405 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 90 op/s
Nov 25 16:33:58 compute-0 nova_compute[254092]: 2025-11-25 16:33:58.835 254096 DEBUG oslo_concurrency.lockutils [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Releasing lock "refresh_cache-f77f4bf4-5651-4f45-af4b-9ef0d68df364" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:33:58 compute-0 nova_compute[254092]: 2025-11-25 16:33:58.835 254096 DEBUG nova.compute.manager [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Instance network_info: |[{"id": "62a6fc34-457f-4470-8bef-e1b9110d2f12", "address": "fa:16:3e:7b:36:43", "network": {"id": "3960d4c5-60d7-49e3-b26d-f1317dd96f9f", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1770094643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d7c4dbc1eb44f39aa7ccb9b6363e554", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62a6fc34-45", "ovs_interfaceid": "62a6fc34-457f-4470-8bef-e1b9110d2f12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:33:58 compute-0 nova_compute[254092]: 2025-11-25 16:33:58.835 254096 DEBUG oslo_concurrency.lockutils [req-55859714-384b-41fc-9b31-69884a183b09 req-c13ec3b8-1745-4f7e-82de-5e5c203df9f0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-f77f4bf4-5651-4f45-af4b-9ef0d68df364" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:33:58 compute-0 nova_compute[254092]: 2025-11-25 16:33:58.836 254096 DEBUG nova.network.neutron [req-55859714-384b-41fc-9b31-69884a183b09 req-c13ec3b8-1745-4f7e-82de-5e5c203df9f0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Refreshing network info cache for port 62a6fc34-457f-4470-8bef-e1b9110d2f12 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:33:58 compute-0 nova_compute[254092]: 2025-11-25 16:33:58.838 254096 DEBUG nova.virt.libvirt.driver [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Start _get_guest_xml network_info=[{"id": "62a6fc34-457f-4470-8bef-e1b9110d2f12", "address": "fa:16:3e:7b:36:43", "network": {"id": "3960d4c5-60d7-49e3-b26d-f1317dd96f9f", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1770094643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d7c4dbc1eb44f39aa7ccb9b6363e554", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62a6fc34-45", "ovs_interfaceid": "62a6fc34-457f-4470-8bef-e1b9110d2f12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:33:58 compute-0 nova_compute[254092]: 2025-11-25 16:33:58.842 254096 WARNING nova.virt.libvirt.driver [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:33:58 compute-0 nova_compute[254092]: 2025-11-25 16:33:58.846 254096 DEBUG nova.virt.libvirt.host [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:33:58 compute-0 nova_compute[254092]: 2025-11-25 16:33:58.847 254096 DEBUG nova.virt.libvirt.host [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:33:58 compute-0 nova_compute[254092]: 2025-11-25 16:33:58.854 254096 DEBUG nova.virt.libvirt.host [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:33:58 compute-0 nova_compute[254092]: 2025-11-25 16:33:58.855 254096 DEBUG nova.virt.libvirt.host [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:33:58 compute-0 nova_compute[254092]: 2025-11-25 16:33:58.855 254096 DEBUG nova.virt.libvirt.driver [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:33:58 compute-0 nova_compute[254092]: 2025-11-25 16:33:58.856 254096 DEBUG nova.virt.hardware [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:33:58 compute-0 nova_compute[254092]: 2025-11-25 16:33:58.856 254096 DEBUG nova.virt.hardware [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:33:58 compute-0 nova_compute[254092]: 2025-11-25 16:33:58.856 254096 DEBUG nova.virt.hardware [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:33:58 compute-0 nova_compute[254092]: 2025-11-25 16:33:58.857 254096 DEBUG nova.virt.hardware [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:33:58 compute-0 nova_compute[254092]: 2025-11-25 16:33:58.857 254096 DEBUG nova.virt.hardware [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:33:58 compute-0 nova_compute[254092]: 2025-11-25 16:33:58.857 254096 DEBUG nova.virt.hardware [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:33:58 compute-0 nova_compute[254092]: 2025-11-25 16:33:58.857 254096 DEBUG nova.virt.hardware [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:33:58 compute-0 nova_compute[254092]: 2025-11-25 16:33:58.858 254096 DEBUG nova.virt.hardware [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:33:58 compute-0 nova_compute[254092]: 2025-11-25 16:33:58.858 254096 DEBUG nova.virt.hardware [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:33:58 compute-0 nova_compute[254092]: 2025-11-25 16:33:58.858 254096 DEBUG nova.virt.hardware [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:33:58 compute-0 nova_compute[254092]: 2025-11-25 16:33:58.858 254096 DEBUG nova.virt.hardware [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:33:58 compute-0 nova_compute[254092]: 2025-11-25 16:33:58.861 254096 DEBUG oslo_concurrency.processutils [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:33:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:33:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:33:59 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/382056297' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:33:59 compute-0 nova_compute[254092]: 2025-11-25 16:33:59.301 254096 DEBUG oslo_concurrency.processutils [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:33:59 compute-0 nova_compute[254092]: 2025-11-25 16:33:59.325 254096 DEBUG nova.storage.rbd_utils [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] rbd image f77f4bf4-5651-4f45-af4b-9ef0d68df364_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:33:59 compute-0 nova_compute[254092]: 2025-11-25 16:33:59.330 254096 DEBUG oslo_concurrency.processutils [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:33:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1397: 321 pgs: 321 active+clean; 134 MiB data, 405 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 88 op/s
Nov 25 16:33:59 compute-0 nova_compute[254092]: 2025-11-25 16:33:59.365 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:59 compute-0 nova_compute[254092]: 2025-11-25 16:33:59.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:33:59 compute-0 nova_compute[254092]: 2025-11-25 16:33:59.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:33:59 compute-0 nova_compute[254092]: 2025-11-25 16:33:59.518 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:33:59 compute-0 nova_compute[254092]: 2025-11-25 16:33:59.518 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:33:59 compute-0 nova_compute[254092]: 2025-11-25 16:33:59.518 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:33:59 compute-0 nova_compute[254092]: 2025-11-25 16:33:59.519 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 16:33:59 compute-0 nova_compute[254092]: 2025-11-25 16:33:59.519 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:33:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:33:59 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2716962699' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:33:59 compute-0 nova_compute[254092]: 2025-11-25 16:33:59.773 254096 DEBUG oslo_concurrency.processutils [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:33:59 compute-0 nova_compute[254092]: 2025-11-25 16:33:59.775 254096 DEBUG nova.virt.libvirt.vif [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:33:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-130086796',display_name='tempest-ServersNegativeTestJSON-server-130086796',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-130086796',id=38,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d7c4dbc1eb44f39aa7ccb9b6363e554',ramdisk_id='',reservation_id='r-45035np1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-549107942',owner_user_name='tempest-ServersNegativeTes
tJSON-549107942-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:33:53Z,user_data=None,user_id='b228702c02db4cb69105bb4c939c15d7',uuid=f77f4bf4-5651-4f45-af4b-9ef0d68df364,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "62a6fc34-457f-4470-8bef-e1b9110d2f12", "address": "fa:16:3e:7b:36:43", "network": {"id": "3960d4c5-60d7-49e3-b26d-f1317dd96f9f", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1770094643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d7c4dbc1eb44f39aa7ccb9b6363e554", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62a6fc34-45", "ovs_interfaceid": "62a6fc34-457f-4470-8bef-e1b9110d2f12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:33:59 compute-0 nova_compute[254092]: 2025-11-25 16:33:59.775 254096 DEBUG nova.network.os_vif_util [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Converting VIF {"id": "62a6fc34-457f-4470-8bef-e1b9110d2f12", "address": "fa:16:3e:7b:36:43", "network": {"id": "3960d4c5-60d7-49e3-b26d-f1317dd96f9f", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1770094643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d7c4dbc1eb44f39aa7ccb9b6363e554", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62a6fc34-45", "ovs_interfaceid": "62a6fc34-457f-4470-8bef-e1b9110d2f12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:33:59 compute-0 nova_compute[254092]: 2025-11-25 16:33:59.776 254096 DEBUG nova.network.os_vif_util [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:36:43,bridge_name='br-int',has_traffic_filtering=True,id=62a6fc34-457f-4470-8bef-e1b9110d2f12,network=Network(3960d4c5-60d7-49e3-b26d-f1317dd96f9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62a6fc34-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:33:59 compute-0 nova_compute[254092]: 2025-11-25 16:33:59.778 254096 DEBUG nova.objects.instance [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lazy-loading 'pci_devices' on Instance uuid f77f4bf4-5651-4f45-af4b-9ef0d68df364 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:33:59 compute-0 nova_compute[254092]: 2025-11-25 16:33:59.793 254096 DEBUG nova.virt.libvirt.driver [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:33:59 compute-0 nova_compute[254092]:   <uuid>f77f4bf4-5651-4f45-af4b-9ef0d68df364</uuid>
Nov 25 16:33:59 compute-0 nova_compute[254092]:   <name>instance-00000026</name>
Nov 25 16:33:59 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:33:59 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:33:59 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:33:59 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:       <nova:name>tempest-ServersNegativeTestJSON-server-130086796</nova:name>
Nov 25 16:33:59 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:33:58</nova:creationTime>
Nov 25 16:33:59 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:33:59 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:33:59 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:33:59 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:33:59 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:33:59 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:33:59 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:33:59 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:33:59 compute-0 nova_compute[254092]:         <nova:user uuid="b228702c02db4cb69105bb4c939c15d7">tempest-ServersNegativeTestJSON-549107942-project-member</nova:user>
Nov 25 16:33:59 compute-0 nova_compute[254092]:         <nova:project uuid="2d7c4dbc1eb44f39aa7ccb9b6363e554">tempest-ServersNegativeTestJSON-549107942</nova:project>
Nov 25 16:33:59 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:33:59 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:33:59 compute-0 nova_compute[254092]:         <nova:port uuid="62a6fc34-457f-4470-8bef-e1b9110d2f12">
Nov 25 16:33:59 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:33:59 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:33:59 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:33:59 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:33:59 compute-0 nova_compute[254092]:     <system>
Nov 25 16:33:59 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:33:59 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:33:59 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:33:59 compute-0 nova_compute[254092]:       <entry name="serial">f77f4bf4-5651-4f45-af4b-9ef0d68df364</entry>
Nov 25 16:33:59 compute-0 nova_compute[254092]:       <entry name="uuid">f77f4bf4-5651-4f45-af4b-9ef0d68df364</entry>
Nov 25 16:33:59 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     </system>
Nov 25 16:33:59 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:33:59 compute-0 nova_compute[254092]:   <os>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:   </os>
Nov 25 16:33:59 compute-0 nova_compute[254092]:   <features>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:   </features>
Nov 25 16:33:59 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:33:59 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:33:59 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:33:59 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:33:59 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:33:59 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/f77f4bf4-5651-4f45-af4b-9ef0d68df364_disk">
Nov 25 16:33:59 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:       </source>
Nov 25 16:33:59 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:33:59 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:33:59 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:33:59 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/f77f4bf4-5651-4f45-af4b-9ef0d68df364_disk.config">
Nov 25 16:33:59 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:       </source>
Nov 25 16:33:59 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:33:59 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:33:59 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:33:59 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:7b:36:43"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:       <target dev="tap62a6fc34-45"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:33:59 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/f77f4bf4-5651-4f45-af4b-9ef0d68df364/console.log" append="off"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     <video>
Nov 25 16:33:59 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     </video>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:33:59 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:33:59 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:33:59 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:33:59 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:33:59 compute-0 nova_compute[254092]: </domain>
Nov 25 16:33:59 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:33:59 compute-0 nova_compute[254092]: 2025-11-25 16:33:59.810 254096 DEBUG nova.compute.manager [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Preparing to wait for external event network-vif-plugged-62a6fc34-457f-4470-8bef-e1b9110d2f12 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:33:59 compute-0 nova_compute[254092]: 2025-11-25 16:33:59.810 254096 DEBUG oslo_concurrency.lockutils [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Acquiring lock "f77f4bf4-5651-4f45-af4b-9ef0d68df364-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:33:59 compute-0 nova_compute[254092]: 2025-11-25 16:33:59.810 254096 DEBUG oslo_concurrency.lockutils [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "f77f4bf4-5651-4f45-af4b-9ef0d68df364-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:33:59 compute-0 nova_compute[254092]: 2025-11-25 16:33:59.811 254096 DEBUG oslo_concurrency.lockutils [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "f77f4bf4-5651-4f45-af4b-9ef0d68df364-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:33:59 compute-0 nova_compute[254092]: 2025-11-25 16:33:59.811 254096 DEBUG nova.virt.libvirt.vif [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:33:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-130086796',display_name='tempest-ServersNegativeTestJSON-server-130086796',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-130086796',id=38,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2d7c4dbc1eb44f39aa7ccb9b6363e554',ramdisk_id='',reservation_id='r-45035np1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-549107942',owner_user_name='tempest-ServersN
egativeTestJSON-549107942-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:33:53Z,user_data=None,user_id='b228702c02db4cb69105bb4c939c15d7',uuid=f77f4bf4-5651-4f45-af4b-9ef0d68df364,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "62a6fc34-457f-4470-8bef-e1b9110d2f12", "address": "fa:16:3e:7b:36:43", "network": {"id": "3960d4c5-60d7-49e3-b26d-f1317dd96f9f", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1770094643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d7c4dbc1eb44f39aa7ccb9b6363e554", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62a6fc34-45", "ovs_interfaceid": "62a6fc34-457f-4470-8bef-e1b9110d2f12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:33:59 compute-0 nova_compute[254092]: 2025-11-25 16:33:59.812 254096 DEBUG nova.network.os_vif_util [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Converting VIF {"id": "62a6fc34-457f-4470-8bef-e1b9110d2f12", "address": "fa:16:3e:7b:36:43", "network": {"id": "3960d4c5-60d7-49e3-b26d-f1317dd96f9f", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1770094643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d7c4dbc1eb44f39aa7ccb9b6363e554", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62a6fc34-45", "ovs_interfaceid": "62a6fc34-457f-4470-8bef-e1b9110d2f12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:33:59 compute-0 nova_compute[254092]: 2025-11-25 16:33:59.812 254096 DEBUG nova.network.os_vif_util [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:36:43,bridge_name='br-int',has_traffic_filtering=True,id=62a6fc34-457f-4470-8bef-e1b9110d2f12,network=Network(3960d4c5-60d7-49e3-b26d-f1317dd96f9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62a6fc34-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:33:59 compute-0 nova_compute[254092]: 2025-11-25 16:33:59.813 254096 DEBUG os_vif [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:36:43,bridge_name='br-int',has_traffic_filtering=True,id=62a6fc34-457f-4470-8bef-e1b9110d2f12,network=Network(3960d4c5-60d7-49e3-b26d-f1317dd96f9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62a6fc34-45') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:33:59 compute-0 nova_compute[254092]: 2025-11-25 16:33:59.815 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:59 compute-0 nova_compute[254092]: 2025-11-25 16:33:59.815 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:33:59 compute-0 nova_compute[254092]: 2025-11-25 16:33:59.816 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:33:59 compute-0 nova_compute[254092]: 2025-11-25 16:33:59.819 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:59 compute-0 nova_compute[254092]: 2025-11-25 16:33:59.819 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62a6fc34-45, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:33:59 compute-0 nova_compute[254092]: 2025-11-25 16:33:59.820 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap62a6fc34-45, col_values=(('external_ids', {'iface-id': '62a6fc34-457f-4470-8bef-e1b9110d2f12', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7b:36:43', 'vm-uuid': 'f77f4bf4-5651-4f45-af4b-9ef0d68df364'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:33:59 compute-0 nova_compute[254092]: 2025-11-25 16:33:59.822 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:59 compute-0 NetworkManager[48891]: <info>  [1764088439.8230] manager: (tap62a6fc34-45): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/123)
Nov 25 16:33:59 compute-0 nova_compute[254092]: 2025-11-25 16:33:59.824 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:33:59 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/382056297' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:33:59 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2716962699' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:33:59 compute-0 nova_compute[254092]: 2025-11-25 16:33:59.831 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:33:59 compute-0 nova_compute[254092]: 2025-11-25 16:33:59.832 254096 INFO os_vif [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:36:43,bridge_name='br-int',has_traffic_filtering=True,id=62a6fc34-457f-4470-8bef-e1b9110d2f12,network=Network(3960d4c5-60d7-49e3-b26d-f1317dd96f9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62a6fc34-45')
Nov 25 16:33:59 compute-0 nova_compute[254092]: 2025-11-25 16:33:59.882 254096 DEBUG nova.virt.libvirt.driver [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:33:59 compute-0 nova_compute[254092]: 2025-11-25 16:33:59.884 254096 DEBUG nova.virt.libvirt.driver [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:33:59 compute-0 nova_compute[254092]: 2025-11-25 16:33:59.884 254096 DEBUG nova.virt.libvirt.driver [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] No VIF found with MAC fa:16:3e:7b:36:43, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:33:59 compute-0 nova_compute[254092]: 2025-11-25 16:33:59.885 254096 INFO nova.virt.libvirt.driver [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Using config drive
Nov 25 16:33:59 compute-0 nova_compute[254092]: 2025-11-25 16:33:59.909 254096 DEBUG nova.storage.rbd_utils [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] rbd image f77f4bf4-5651-4f45-af4b-9ef0d68df364_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:33:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:33:59 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1140953414' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:33:59 compute-0 nova_compute[254092]: 2025-11-25 16:33:59.982 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:34:00 compute-0 nova_compute[254092]: 2025-11-25 16:34:00.036 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000026 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:34:00 compute-0 nova_compute[254092]: 2025-11-25 16:34:00.037 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000026 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:34:00 compute-0 nova_compute[254092]: 2025-11-25 16:34:00.040 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000025 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:34:00 compute-0 nova_compute[254092]: 2025-11-25 16:34:00.041 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000025 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:34:00 compute-0 nova_compute[254092]: 2025-11-25 16:34:00.185 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:34:00 compute-0 nova_compute[254092]: 2025-11-25 16:34:00.186 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4154MB free_disk=59.94662857055664GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 16:34:00 compute-0 nova_compute[254092]: 2025-11-25 16:34:00.186 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:34:00 compute-0 nova_compute[254092]: 2025-11-25 16:34:00.187 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:00 compute-0 nova_compute[254092]: 2025-11-25 16:34:00.279 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance a3d5d205-98f0-4820-a96c-7f3e59d0cdd9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:34:00 compute-0 nova_compute[254092]: 2025-11-25 16:34:00.280 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance f77f4bf4-5651-4f45-af4b-9ef0d68df364 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:34:00 compute-0 nova_compute[254092]: 2025-11-25 16:34:00.280 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 16:34:00 compute-0 nova_compute[254092]: 2025-11-25 16:34:00.280 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 16:34:00 compute-0 nova_compute[254092]: 2025-11-25 16:34:00.336 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:34:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:34:00 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4011562753' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:34:00 compute-0 nova_compute[254092]: 2025-11-25 16:34:00.751 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:34:00 compute-0 nova_compute[254092]: 2025-11-25 16:34:00.758 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:34:00 compute-0 nova_compute[254092]: 2025-11-25 16:34:00.785 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:34:00 compute-0 nova_compute[254092]: 2025-11-25 16:34:00.799 254096 INFO nova.virt.libvirt.driver [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Creating config drive at /var/lib/nova/instances/f77f4bf4-5651-4f45-af4b-9ef0d68df364/disk.config
Nov 25 16:34:00 compute-0 nova_compute[254092]: 2025-11-25 16:34:00.807 254096 DEBUG oslo_concurrency.processutils [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f77f4bf4-5651-4f45-af4b-9ef0d68df364/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5sx3wv2z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:34:00 compute-0 nova_compute[254092]: 2025-11-25 16:34:00.837 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 16:34:00 compute-0 nova_compute[254092]: 2025-11-25 16:34:00.838 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:34:00 compute-0 ceph-mon[74985]: pgmap v1397: 321 pgs: 321 active+clean; 134 MiB data, 405 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 88 op/s
Nov 25 16:34:00 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1140953414' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:34:00 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4011562753' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:34:00 compute-0 nova_compute[254092]: 2025-11-25 16:34:00.940 254096 DEBUG oslo_concurrency.processutils [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f77f4bf4-5651-4f45-af4b-9ef0d68df364/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5sx3wv2z" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:34:00 compute-0 nova_compute[254092]: 2025-11-25 16:34:00.964 254096 DEBUG nova.storage.rbd_utils [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] rbd image f77f4bf4-5651-4f45-af4b-9ef0d68df364_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:34:00 compute-0 nova_compute[254092]: 2025-11-25 16:34:00.968 254096 DEBUG oslo_concurrency.processutils [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f77f4bf4-5651-4f45-af4b-9ef0d68df364/disk.config f77f4bf4-5651-4f45-af4b-9ef0d68df364_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:34:01 compute-0 nova_compute[254092]: 2025-11-25 16:34:01.154 254096 DEBUG oslo_concurrency.processutils [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f77f4bf4-5651-4f45-af4b-9ef0d68df364/disk.config f77f4bf4-5651-4f45-af4b-9ef0d68df364_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:34:01 compute-0 nova_compute[254092]: 2025-11-25 16:34:01.156 254096 INFO nova.virt.libvirt.driver [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Deleting local config drive /var/lib/nova/instances/f77f4bf4-5651-4f45-af4b-9ef0d68df364/disk.config because it was imported into RBD.
Nov 25 16:34:01 compute-0 kernel: tap62a6fc34-45: entered promiscuous mode
Nov 25 16:34:01 compute-0 NetworkManager[48891]: <info>  [1764088441.2115] manager: (tap62a6fc34-45): new Tun device (/org/freedesktop/NetworkManager/Devices/124)
Nov 25 16:34:01 compute-0 nova_compute[254092]: 2025-11-25 16:34:01.258 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:01 compute-0 ovn_controller[153477]: 2025-11-25T16:34:01Z|00276|binding|INFO|Claiming lport 62a6fc34-457f-4470-8bef-e1b9110d2f12 for this chassis.
Nov 25 16:34:01 compute-0 ovn_controller[153477]: 2025-11-25T16:34:01Z|00277|binding|INFO|62a6fc34-457f-4470-8bef-e1b9110d2f12: Claiming fa:16:3e:7b:36:43 10.100.0.3
Nov 25 16:34:01 compute-0 systemd-udevd[298653]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:34:01 compute-0 ovn_controller[153477]: 2025-11-25T16:34:01Z|00278|binding|INFO|Setting lport 62a6fc34-457f-4470-8bef-e1b9110d2f12 ovn-installed in OVS
Nov 25 16:34:01 compute-0 nova_compute[254092]: 2025-11-25 16:34:01.286 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:01 compute-0 NetworkManager[48891]: <info>  [1764088441.2969] device (tap62a6fc34-45): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:34:01 compute-0 NetworkManager[48891]: <info>  [1764088441.2977] device (tap62a6fc34-45): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:34:01 compute-0 ovn_controller[153477]: 2025-11-25T16:34:01Z|00279|binding|INFO|Setting lport 62a6fc34-457f-4470-8bef-e1b9110d2f12 up in Southbound
Nov 25 16:34:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:01.305 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:36:43 10.100.0.3'], port_security=['fa:16:3e:7b:36:43 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f77f4bf4-5651-4f45-af4b-9ef0d68df364', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3960d4c5-60d7-49e3-b26d-f1317dd96f9f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d7c4dbc1eb44f39aa7ccb9b6363e554', 'neutron:revision_number': '2', 'neutron:security_group_ids': '11c41536-eaec-4c18-861e-ff1c71d0a6fa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9a9fa8a-e323-43f7-a155-b2b5010363da, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=62a6fc34-457f-4470-8bef-e1b9110d2f12) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:34:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:01.306 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 62a6fc34-457f-4470-8bef-e1b9110d2f12 in datapath 3960d4c5-60d7-49e3-b26d-f1317dd96f9f bound to our chassis
Nov 25 16:34:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:01.307 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3960d4c5-60d7-49e3-b26d-f1317dd96f9f
Nov 25 16:34:01 compute-0 systemd-machined[216343]: New machine qemu-44-instance-00000026.
Nov 25 16:34:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:01.328 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ac179b72-c7ac-4dcb-b694-08cb8834961d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:01 compute-0 systemd[1]: Started Virtual Machine qemu-44-instance-00000026.
Nov 25 16:34:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1398: 321 pgs: 321 active+clean; 167 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 154 op/s
Nov 25 16:34:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:01.360 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f95aabe8-04e5-42ff-b140-170ff1bf9975]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:01.364 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[63206d81-dc7f-4b19-a9dc-04760aa2a7ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:01.399 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c1950e75-0eb1-414f-b625-e19af7f5402e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:01.420 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c5aa736a-87b7-4b81-90ca-245466ab4b4d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3960d4c5-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:42:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 77], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 487838, 'reachable_time': 18390, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298670, 'error': None, 'target': 'ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:01.436 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8756dfd1-eb62-43cf-89c3-852b7cf6745e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3960d4c5-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 487849, 'tstamp': 487849}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298671, 'error': None, 'target': 'ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3960d4c5-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 487852, 'tstamp': 487852}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298671, 'error': None, 'target': 'ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:01.438 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3960d4c5-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:01 compute-0 nova_compute[254092]: 2025-11-25 16:34:01.439 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:01 compute-0 nova_compute[254092]: 2025-11-25 16:34:01.440 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:01.441 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3960d4c5-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:01.441 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:34:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:01.441 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3960d4c5-60, col_values=(('external_ids', {'iface-id': '9dd2e935-32e0-43d1-8a28-23e6ab045e91'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:01.442 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:34:01 compute-0 nova_compute[254092]: 2025-11-25 16:34:01.699 254096 DEBUG nova.compute.manager [req-eacb9ead-0f8b-40ea-88ba-20c26b66a1b0 req-debf6154-9a8e-4e48-be11-8b8948202b53 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Received event network-vif-plugged-62a6fc34-457f-4470-8bef-e1b9110d2f12 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:34:01 compute-0 nova_compute[254092]: 2025-11-25 16:34:01.700 254096 DEBUG oslo_concurrency.lockutils [req-eacb9ead-0f8b-40ea-88ba-20c26b66a1b0 req-debf6154-9a8e-4e48-be11-8b8948202b53 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f77f4bf4-5651-4f45-af4b-9ef0d68df364-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:34:01 compute-0 nova_compute[254092]: 2025-11-25 16:34:01.701 254096 DEBUG oslo_concurrency.lockutils [req-eacb9ead-0f8b-40ea-88ba-20c26b66a1b0 req-debf6154-9a8e-4e48-be11-8b8948202b53 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f77f4bf4-5651-4f45-af4b-9ef0d68df364-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:01 compute-0 nova_compute[254092]: 2025-11-25 16:34:01.701 254096 DEBUG oslo_concurrency.lockutils [req-eacb9ead-0f8b-40ea-88ba-20c26b66a1b0 req-debf6154-9a8e-4e48-be11-8b8948202b53 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f77f4bf4-5651-4f45-af4b-9ef0d68df364-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:34:01 compute-0 nova_compute[254092]: 2025-11-25 16:34:01.701 254096 DEBUG nova.compute.manager [req-eacb9ead-0f8b-40ea-88ba-20c26b66a1b0 req-debf6154-9a8e-4e48-be11-8b8948202b53 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Processing event network-vif-plugged-62a6fc34-457f-4470-8bef-e1b9110d2f12 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:34:01 compute-0 nova_compute[254092]: 2025-11-25 16:34:01.740 254096 DEBUG nova.network.neutron [req-55859714-384b-41fc-9b31-69884a183b09 req-c13ec3b8-1745-4f7e-82de-5e5c203df9f0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Updated VIF entry in instance network info cache for port 62a6fc34-457f-4470-8bef-e1b9110d2f12. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:34:01 compute-0 nova_compute[254092]: 2025-11-25 16:34:01.742 254096 DEBUG nova.network.neutron [req-55859714-384b-41fc-9b31-69884a183b09 req-c13ec3b8-1745-4f7e-82de-5e5c203df9f0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Updating instance_info_cache with network_info: [{"id": "62a6fc34-457f-4470-8bef-e1b9110d2f12", "address": "fa:16:3e:7b:36:43", "network": {"id": "3960d4c5-60d7-49e3-b26d-f1317dd96f9f", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1770094643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d7c4dbc1eb44f39aa7ccb9b6363e554", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62a6fc34-45", "ovs_interfaceid": "62a6fc34-457f-4470-8bef-e1b9110d2f12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:34:01 compute-0 nova_compute[254092]: 2025-11-25 16:34:01.772 254096 DEBUG oslo_concurrency.lockutils [req-55859714-384b-41fc-9b31-69884a183b09 req-c13ec3b8-1745-4f7e-82de-5e5c203df9f0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-f77f4bf4-5651-4f45-af4b-9ef0d68df364" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:34:01 compute-0 nova_compute[254092]: 2025-11-25 16:34:01.940 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088441.9398243, f77f4bf4-5651-4f45-af4b-9ef0d68df364 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:34:01 compute-0 nova_compute[254092]: 2025-11-25 16:34:01.941 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] VM Started (Lifecycle Event)
Nov 25 16:34:01 compute-0 nova_compute[254092]: 2025-11-25 16:34:01.945 254096 DEBUG nova.compute.manager [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:34:01 compute-0 nova_compute[254092]: 2025-11-25 16:34:01.950 254096 DEBUG nova.virt.libvirt.driver [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:34:01 compute-0 nova_compute[254092]: 2025-11-25 16:34:01.957 254096 INFO nova.virt.libvirt.driver [-] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Instance spawned successfully.
Nov 25 16:34:01 compute-0 nova_compute[254092]: 2025-11-25 16:34:01.959 254096 DEBUG nova.virt.libvirt.driver [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:34:01 compute-0 nova_compute[254092]: 2025-11-25 16:34:01.964 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:34:01 compute-0 nova_compute[254092]: 2025-11-25 16:34:01.970 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:34:01 compute-0 nova_compute[254092]: 2025-11-25 16:34:01.983 254096 DEBUG nova.virt.libvirt.driver [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:34:01 compute-0 nova_compute[254092]: 2025-11-25 16:34:01.984 254096 DEBUG nova.virt.libvirt.driver [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:34:01 compute-0 nova_compute[254092]: 2025-11-25 16:34:01.984 254096 DEBUG nova.virt.libvirt.driver [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:34:01 compute-0 nova_compute[254092]: 2025-11-25 16:34:01.985 254096 DEBUG nova.virt.libvirt.driver [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:34:01 compute-0 nova_compute[254092]: 2025-11-25 16:34:01.985 254096 DEBUG nova.virt.libvirt.driver [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:34:01 compute-0 nova_compute[254092]: 2025-11-25 16:34:01.986 254096 DEBUG nova.virt.libvirt.driver [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:34:01 compute-0 nova_compute[254092]: 2025-11-25 16:34:01.991 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:34:01 compute-0 nova_compute[254092]: 2025-11-25 16:34:01.991 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088441.9399495, f77f4bf4-5651-4f45-af4b-9ef0d68df364 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:34:01 compute-0 nova_compute[254092]: 2025-11-25 16:34:01.992 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] VM Paused (Lifecycle Event)
Nov 25 16:34:02 compute-0 nova_compute[254092]: 2025-11-25 16:34:02.019 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:34:02 compute-0 nova_compute[254092]: 2025-11-25 16:34:02.024 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088441.949311, f77f4bf4-5651-4f45-af4b-9ef0d68df364 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:34:02 compute-0 nova_compute[254092]: 2025-11-25 16:34:02.025 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] VM Resumed (Lifecycle Event)
Nov 25 16:34:02 compute-0 nova_compute[254092]: 2025-11-25 16:34:02.042 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:34:02 compute-0 nova_compute[254092]: 2025-11-25 16:34:02.047 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:34:02 compute-0 nova_compute[254092]: 2025-11-25 16:34:02.062 254096 INFO nova.compute.manager [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Took 8.19 seconds to spawn the instance on the hypervisor.
Nov 25 16:34:02 compute-0 nova_compute[254092]: 2025-11-25 16:34:02.062 254096 DEBUG nova.compute.manager [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:34:02 compute-0 nova_compute[254092]: 2025-11-25 16:34:02.068 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:34:02 compute-0 nova_compute[254092]: 2025-11-25 16:34:02.181 254096 INFO nova.compute.manager [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Took 9.46 seconds to build instance.
Nov 25 16:34:02 compute-0 nova_compute[254092]: 2025-11-25 16:34:02.212 254096 DEBUG oslo_concurrency.lockutils [None req-bce1854d-57b6-403a-9e1b-7a6a9b111e08 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "f77f4bf4-5651-4f45-af4b-9ef0d68df364" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.550s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:34:02 compute-0 nova_compute[254092]: 2025-11-25 16:34:02.213 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "f77f4bf4-5651-4f45-af4b-9ef0d68df364" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 6.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:02 compute-0 nova_compute[254092]: 2025-11-25 16:34:02.213 254096 INFO nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:34:02 compute-0 nova_compute[254092]: 2025-11-25 16:34:02.213 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "f77f4bf4-5651-4f45-af4b-9ef0d68df364" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:34:02 compute-0 nova_compute[254092]: 2025-11-25 16:34:02.700 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:02 compute-0 nova_compute[254092]: 2025-11-25 16:34:02.839 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:34:02 compute-0 nova_compute[254092]: 2025-11-25 16:34:02.840 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:34:02 compute-0 ceph-mon[74985]: pgmap v1398: 321 pgs: 321 active+clean; 167 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 154 op/s
Nov 25 16:34:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1399: 321 pgs: 321 active+clean; 167 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Nov 25 16:34:03 compute-0 nova_compute[254092]: 2025-11-25 16:34:03.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:34:03 compute-0 nova_compute[254092]: 2025-11-25 16:34:03.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 16:34:03 compute-0 nova_compute[254092]: 2025-11-25 16:34:03.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 16:34:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:34:04 compute-0 nova_compute[254092]: 2025-11-25 16:34:04.515 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-a3d5d205-98f0-4820-a96c-7f3e59d0cdd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:34:04 compute-0 nova_compute[254092]: 2025-11-25 16:34:04.516 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-a3d5d205-98f0-4820-a96c-7f3e59d0cdd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:34:04 compute-0 nova_compute[254092]: 2025-11-25 16:34:04.516 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 16:34:04 compute-0 nova_compute[254092]: 2025-11-25 16:34:04.516 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a3d5d205-98f0-4820-a96c-7f3e59d0cdd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:34:04 compute-0 nova_compute[254092]: 2025-11-25 16:34:04.823 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:04 compute-0 ceph-mon[74985]: pgmap v1399: 321 pgs: 321 active+clean; 167 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Nov 25 16:34:05 compute-0 nova_compute[254092]: 2025-11-25 16:34:05.147 254096 DEBUG nova.compute.manager [req-7021d1a9-1a26-4abc-a9d4-d7cf40176114 req-0864ed22-25cc-462d-a310-fa307a288467 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Received event network-vif-plugged-62a6fc34-457f-4470-8bef-e1b9110d2f12 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:34:05 compute-0 nova_compute[254092]: 2025-11-25 16:34:05.148 254096 DEBUG oslo_concurrency.lockutils [req-7021d1a9-1a26-4abc-a9d4-d7cf40176114 req-0864ed22-25cc-462d-a310-fa307a288467 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f77f4bf4-5651-4f45-af4b-9ef0d68df364-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:34:05 compute-0 nova_compute[254092]: 2025-11-25 16:34:05.148 254096 DEBUG oslo_concurrency.lockutils [req-7021d1a9-1a26-4abc-a9d4-d7cf40176114 req-0864ed22-25cc-462d-a310-fa307a288467 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f77f4bf4-5651-4f45-af4b-9ef0d68df364-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:05 compute-0 nova_compute[254092]: 2025-11-25 16:34:05.148 254096 DEBUG oslo_concurrency.lockutils [req-7021d1a9-1a26-4abc-a9d4-d7cf40176114 req-0864ed22-25cc-462d-a310-fa307a288467 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f77f4bf4-5651-4f45-af4b-9ef0d68df364-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:34:05 compute-0 nova_compute[254092]: 2025-11-25 16:34:05.148 254096 DEBUG nova.compute.manager [req-7021d1a9-1a26-4abc-a9d4-d7cf40176114 req-0864ed22-25cc-462d-a310-fa307a288467 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] No waiting events found dispatching network-vif-plugged-62a6fc34-457f-4470-8bef-e1b9110d2f12 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:34:05 compute-0 nova_compute[254092]: 2025-11-25 16:34:05.148 254096 WARNING nova.compute.manager [req-7021d1a9-1a26-4abc-a9d4-d7cf40176114 req-0864ed22-25cc-462d-a310-fa307a288467 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Received unexpected event network-vif-plugged-62a6fc34-457f-4470-8bef-e1b9110d2f12 for instance with vm_state active and task_state None.
Nov 25 16:34:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1400: 321 pgs: 321 active+clean; 167 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.9 MiB/s wr, 144 op/s
Nov 25 16:34:05 compute-0 nova_compute[254092]: 2025-11-25 16:34:05.414 254096 DEBUG oslo_concurrency.lockutils [None req-d44e76ad-f628-4bd0-8af4-c2efeaa99072 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Acquiring lock "f77f4bf4-5651-4f45-af4b-9ef0d68df364" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:34:05 compute-0 nova_compute[254092]: 2025-11-25 16:34:05.414 254096 DEBUG oslo_concurrency.lockutils [None req-d44e76ad-f628-4bd0-8af4-c2efeaa99072 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "f77f4bf4-5651-4f45-af4b-9ef0d68df364" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:05 compute-0 nova_compute[254092]: 2025-11-25 16:34:05.415 254096 DEBUG oslo_concurrency.lockutils [None req-d44e76ad-f628-4bd0-8af4-c2efeaa99072 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Acquiring lock "f77f4bf4-5651-4f45-af4b-9ef0d68df364-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:34:05 compute-0 nova_compute[254092]: 2025-11-25 16:34:05.415 254096 DEBUG oslo_concurrency.lockutils [None req-d44e76ad-f628-4bd0-8af4-c2efeaa99072 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "f77f4bf4-5651-4f45-af4b-9ef0d68df364-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:05 compute-0 nova_compute[254092]: 2025-11-25 16:34:05.416 254096 DEBUG oslo_concurrency.lockutils [None req-d44e76ad-f628-4bd0-8af4-c2efeaa99072 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "f77f4bf4-5651-4f45-af4b-9ef0d68df364-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:34:05 compute-0 nova_compute[254092]: 2025-11-25 16:34:05.417 254096 INFO nova.compute.manager [None req-d44e76ad-f628-4bd0-8af4-c2efeaa99072 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Terminating instance
Nov 25 16:34:05 compute-0 nova_compute[254092]: 2025-11-25 16:34:05.419 254096 DEBUG nova.compute.manager [None req-d44e76ad-f628-4bd0-8af4-c2efeaa99072 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:34:05 compute-0 kernel: tap62a6fc34-45 (unregistering): left promiscuous mode
Nov 25 16:34:05 compute-0 NetworkManager[48891]: <info>  [1764088445.4618] device (tap62a6fc34-45): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:34:05 compute-0 nova_compute[254092]: 2025-11-25 16:34:05.467 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:05 compute-0 ovn_controller[153477]: 2025-11-25T16:34:05Z|00280|binding|INFO|Releasing lport 62a6fc34-457f-4470-8bef-e1b9110d2f12 from this chassis (sb_readonly=0)
Nov 25 16:34:05 compute-0 ovn_controller[153477]: 2025-11-25T16:34:05Z|00281|binding|INFO|Setting lport 62a6fc34-457f-4470-8bef-e1b9110d2f12 down in Southbound
Nov 25 16:34:05 compute-0 ovn_controller[153477]: 2025-11-25T16:34:05Z|00282|binding|INFO|Removing iface tap62a6fc34-45 ovn-installed in OVS
Nov 25 16:34:05 compute-0 nova_compute[254092]: 2025-11-25 16:34:05.471 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:05.482 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:36:43 10.100.0.3'], port_security=['fa:16:3e:7b:36:43 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f77f4bf4-5651-4f45-af4b-9ef0d68df364', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3960d4c5-60d7-49e3-b26d-f1317dd96f9f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d7c4dbc1eb44f39aa7ccb9b6363e554', 'neutron:revision_number': '4', 'neutron:security_group_ids': '11c41536-eaec-4c18-861e-ff1c71d0a6fa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9a9fa8a-e323-43f7-a155-b2b5010363da, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=62a6fc34-457f-4470-8bef-e1b9110d2f12) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:34:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:05.483 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 62a6fc34-457f-4470-8bef-e1b9110d2f12 in datapath 3960d4c5-60d7-49e3-b26d-f1317dd96f9f unbound from our chassis
Nov 25 16:34:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:05.485 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3960d4c5-60d7-49e3-b26d-f1317dd96f9f
Nov 25 16:34:05 compute-0 nova_compute[254092]: 2025-11-25 16:34:05.484 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:05.499 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e64c408b-b4b4-484b-bc8c-a1d1f1b371a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:05.528 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d8f0d17c-65b4-4ea4-9857-131d087c0bfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:05 compute-0 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000026.scope: Deactivated successfully.
Nov 25 16:34:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:05.531 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[99d34547-8fe0-4e71-bd06-74aa562bafc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:05 compute-0 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000026.scope: Consumed 4.096s CPU time.
Nov 25 16:34:05 compute-0 systemd-machined[216343]: Machine qemu-44-instance-00000026 terminated.
Nov 25 16:34:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:05.562 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c9e0dfdb-79ff-4080-aee6-ad26094e950a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:05.580 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6303f45c-99d9-41f3-b191-b3822970fcef]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3960d4c5-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:42:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 77], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 487838, 'reachable_time': 18390, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298725, 'error': None, 'target': 'ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:05.597 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[744848d0-c08e-4db0-9e67-33940b8a303c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3960d4c5-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 487849, 'tstamp': 487849}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298726, 'error': None, 'target': 'ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3960d4c5-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 487852, 'tstamp': 487852}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298726, 'error': None, 'target': 'ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:05.599 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3960d4c5-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:05 compute-0 nova_compute[254092]: 2025-11-25 16:34:05.600 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:05 compute-0 nova_compute[254092]: 2025-11-25 16:34:05.605 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:05.606 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3960d4c5-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:05.606 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:34:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:05.607 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3960d4c5-60, col_values=(('external_ids', {'iface-id': '9dd2e935-32e0-43d1-8a28-23e6ab045e91'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:05.607 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:34:05 compute-0 nova_compute[254092]: 2025-11-25 16:34:05.649 254096 INFO nova.virt.libvirt.driver [-] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Instance destroyed successfully.
Nov 25 16:34:05 compute-0 nova_compute[254092]: 2025-11-25 16:34:05.650 254096 DEBUG nova.objects.instance [None req-d44e76ad-f628-4bd0-8af4-c2efeaa99072 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lazy-loading 'resources' on Instance uuid f77f4bf4-5651-4f45-af4b-9ef0d68df364 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:34:05 compute-0 nova_compute[254092]: 2025-11-25 16:34:05.671 254096 DEBUG nova.virt.libvirt.vif [None req-d44e76ad-f628-4bd0-8af4-c2efeaa99072 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:33:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-130086796',display_name='tempest-ServersNegativeTestJSON-server-130086796',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-130086796',id=38,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:34:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2d7c4dbc1eb44f39aa7ccb9b6363e554',ramdisk_id='',reservation_id='r-45035np1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-549107942',owner_user_name='tempest-ServersNegativeTestJSON-549107942-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:34:02Z,user_data=None,user_id='b228702c02db4cb69105bb4c939c15d7',uuid=f77f4bf4-5651-4f45-af4b-9ef0d68df364,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "62a6fc34-457f-4470-8bef-e1b9110d2f12", "address": "fa:16:3e:7b:36:43", "network": {"id": "3960d4c5-60d7-49e3-b26d-f1317dd96f9f", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1770094643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d7c4dbc1eb44f39aa7ccb9b6363e554", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62a6fc34-45", "ovs_interfaceid": "62a6fc34-457f-4470-8bef-e1b9110d2f12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:34:05 compute-0 nova_compute[254092]: 2025-11-25 16:34:05.671 254096 DEBUG nova.network.os_vif_util [None req-d44e76ad-f628-4bd0-8af4-c2efeaa99072 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Converting VIF {"id": "62a6fc34-457f-4470-8bef-e1b9110d2f12", "address": "fa:16:3e:7b:36:43", "network": {"id": "3960d4c5-60d7-49e3-b26d-f1317dd96f9f", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1770094643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d7c4dbc1eb44f39aa7ccb9b6363e554", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62a6fc34-45", "ovs_interfaceid": "62a6fc34-457f-4470-8bef-e1b9110d2f12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:34:05 compute-0 nova_compute[254092]: 2025-11-25 16:34:05.672 254096 DEBUG nova.network.os_vif_util [None req-d44e76ad-f628-4bd0-8af4-c2efeaa99072 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:36:43,bridge_name='br-int',has_traffic_filtering=True,id=62a6fc34-457f-4470-8bef-e1b9110d2f12,network=Network(3960d4c5-60d7-49e3-b26d-f1317dd96f9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62a6fc34-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:34:05 compute-0 nova_compute[254092]: 2025-11-25 16:34:05.673 254096 DEBUG os_vif [None req-d44e76ad-f628-4bd0-8af4-c2efeaa99072 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:36:43,bridge_name='br-int',has_traffic_filtering=True,id=62a6fc34-457f-4470-8bef-e1b9110d2f12,network=Network(3960d4c5-60d7-49e3-b26d-f1317dd96f9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62a6fc34-45') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:34:05 compute-0 nova_compute[254092]: 2025-11-25 16:34:05.675 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:05 compute-0 nova_compute[254092]: 2025-11-25 16:34:05.676 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62a6fc34-45, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:05 compute-0 nova_compute[254092]: 2025-11-25 16:34:05.678 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:05 compute-0 nova_compute[254092]: 2025-11-25 16:34:05.680 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:34:05 compute-0 nova_compute[254092]: 2025-11-25 16:34:05.683 254096 INFO os_vif [None req-d44e76ad-f628-4bd0-8af4-c2efeaa99072 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:36:43,bridge_name='br-int',has_traffic_filtering=True,id=62a6fc34-457f-4470-8bef-e1b9110d2f12,network=Network(3960d4c5-60d7-49e3-b26d-f1317dd96f9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62a6fc34-45')
Nov 25 16:34:06 compute-0 nova_compute[254092]: 2025-11-25 16:34:06.064 254096 INFO nova.virt.libvirt.driver [None req-d44e76ad-f628-4bd0-8af4-c2efeaa99072 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Deleting instance files /var/lib/nova/instances/f77f4bf4-5651-4f45-af4b-9ef0d68df364_del
Nov 25 16:34:06 compute-0 nova_compute[254092]: 2025-11-25 16:34:06.065 254096 INFO nova.virt.libvirt.driver [None req-d44e76ad-f628-4bd0-8af4-c2efeaa99072 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Deletion of /var/lib/nova/instances/f77f4bf4-5651-4f45-af4b-9ef0d68df364_del complete
Nov 25 16:34:06 compute-0 nova_compute[254092]: 2025-11-25 16:34:06.132 254096 INFO nova.compute.manager [None req-d44e76ad-f628-4bd0-8af4-c2efeaa99072 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Took 0.71 seconds to destroy the instance on the hypervisor.
Nov 25 16:34:06 compute-0 nova_compute[254092]: 2025-11-25 16:34:06.133 254096 DEBUG oslo.service.loopingcall [None req-d44e76ad-f628-4bd0-8af4-c2efeaa99072 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:34:06 compute-0 nova_compute[254092]: 2025-11-25 16:34:06.133 254096 DEBUG nova.compute.manager [-] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:34:06 compute-0 nova_compute[254092]: 2025-11-25 16:34:06.133 254096 DEBUG nova.network.neutron [-] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:34:06 compute-0 ceph-mon[74985]: pgmap v1400: 321 pgs: 321 active+clean; 167 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.9 MiB/s wr, 144 op/s
Nov 25 16:34:06 compute-0 nova_compute[254092]: 2025-11-25 16:34:06.889 254096 DEBUG nova.network.neutron [-] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:34:06 compute-0 nova_compute[254092]: 2025-11-25 16:34:06.908 254096 INFO nova.compute.manager [-] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Took 0.77 seconds to deallocate network for instance.
Nov 25 16:34:06 compute-0 nova_compute[254092]: 2025-11-25 16:34:06.960 254096 DEBUG oslo_concurrency.lockutils [None req-d44e76ad-f628-4bd0-8af4-c2efeaa99072 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:34:06 compute-0 nova_compute[254092]: 2025-11-25 16:34:06.960 254096 DEBUG oslo_concurrency.lockutils [None req-d44e76ad-f628-4bd0-8af4-c2efeaa99072 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:06 compute-0 nova_compute[254092]: 2025-11-25 16:34:06.965 254096 DEBUG nova.compute.manager [req-ad40c8dd-5af3-4827-a870-44830643125e req-51cc62bd-3d4d-4828-b518-275afd9e99fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Received event network-vif-deleted-62a6fc34-457f-4470-8bef-e1b9110d2f12 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:34:07 compute-0 nova_compute[254092]: 2025-11-25 16:34:07.035 254096 DEBUG oslo_concurrency.processutils [None req-d44e76ad-f628-4bd0-8af4-c2efeaa99072 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:34:07 compute-0 nova_compute[254092]: 2025-11-25 16:34:07.198 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Updating instance_info_cache with network_info: [{"id": "660536bc-d4bf-4a4b-9515-06043951c25e", "address": "fa:16:3e:10:46:64", "network": {"id": "3960d4c5-60d7-49e3-b26d-f1317dd96f9f", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1770094643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d7c4dbc1eb44f39aa7ccb9b6363e554", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660536bc-d4", "ovs_interfaceid": "660536bc-d4bf-4a4b-9515-06043951c25e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:34:07 compute-0 nova_compute[254092]: 2025-11-25 16:34:07.229 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-a3d5d205-98f0-4820-a96c-7f3e59d0cdd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:34:07 compute-0 nova_compute[254092]: 2025-11-25 16:34:07.230 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 16:34:07 compute-0 nova_compute[254092]: 2025-11-25 16:34:07.278 254096 DEBUG nova.compute.manager [req-e9bb7931-d4fe-40b3-8b84-b69d21fb28c2 req-45fee1eb-b09e-46b5-ae0e-3a0d5f982738 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Received event network-vif-unplugged-62a6fc34-457f-4470-8bef-e1b9110d2f12 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:34:07 compute-0 nova_compute[254092]: 2025-11-25 16:34:07.279 254096 DEBUG oslo_concurrency.lockutils [req-e9bb7931-d4fe-40b3-8b84-b69d21fb28c2 req-45fee1eb-b09e-46b5-ae0e-3a0d5f982738 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f77f4bf4-5651-4f45-af4b-9ef0d68df364-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:34:07 compute-0 nova_compute[254092]: 2025-11-25 16:34:07.279 254096 DEBUG oslo_concurrency.lockutils [req-e9bb7931-d4fe-40b3-8b84-b69d21fb28c2 req-45fee1eb-b09e-46b5-ae0e-3a0d5f982738 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f77f4bf4-5651-4f45-af4b-9ef0d68df364-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:07 compute-0 nova_compute[254092]: 2025-11-25 16:34:07.279 254096 DEBUG oslo_concurrency.lockutils [req-e9bb7931-d4fe-40b3-8b84-b69d21fb28c2 req-45fee1eb-b09e-46b5-ae0e-3a0d5f982738 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f77f4bf4-5651-4f45-af4b-9ef0d68df364-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:34:07 compute-0 nova_compute[254092]: 2025-11-25 16:34:07.279 254096 DEBUG nova.compute.manager [req-e9bb7931-d4fe-40b3-8b84-b69d21fb28c2 req-45fee1eb-b09e-46b5-ae0e-3a0d5f982738 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] No waiting events found dispatching network-vif-unplugged-62a6fc34-457f-4470-8bef-e1b9110d2f12 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:34:07 compute-0 nova_compute[254092]: 2025-11-25 16:34:07.280 254096 WARNING nova.compute.manager [req-e9bb7931-d4fe-40b3-8b84-b69d21fb28c2 req-45fee1eb-b09e-46b5-ae0e-3a0d5f982738 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Received unexpected event network-vif-unplugged-62a6fc34-457f-4470-8bef-e1b9110d2f12 for instance with vm_state deleted and task_state None.
Nov 25 16:34:07 compute-0 nova_compute[254092]: 2025-11-25 16:34:07.280 254096 DEBUG nova.compute.manager [req-e9bb7931-d4fe-40b3-8b84-b69d21fb28c2 req-45fee1eb-b09e-46b5-ae0e-3a0d5f982738 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Received event network-vif-plugged-62a6fc34-457f-4470-8bef-e1b9110d2f12 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:34:07 compute-0 nova_compute[254092]: 2025-11-25 16:34:07.280 254096 DEBUG oslo_concurrency.lockutils [req-e9bb7931-d4fe-40b3-8b84-b69d21fb28c2 req-45fee1eb-b09e-46b5-ae0e-3a0d5f982738 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f77f4bf4-5651-4f45-af4b-9ef0d68df364-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:34:07 compute-0 nova_compute[254092]: 2025-11-25 16:34:07.281 254096 DEBUG oslo_concurrency.lockutils [req-e9bb7931-d4fe-40b3-8b84-b69d21fb28c2 req-45fee1eb-b09e-46b5-ae0e-3a0d5f982738 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f77f4bf4-5651-4f45-af4b-9ef0d68df364-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:07 compute-0 nova_compute[254092]: 2025-11-25 16:34:07.281 254096 DEBUG oslo_concurrency.lockutils [req-e9bb7931-d4fe-40b3-8b84-b69d21fb28c2 req-45fee1eb-b09e-46b5-ae0e-3a0d5f982738 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f77f4bf4-5651-4f45-af4b-9ef0d68df364-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:34:07 compute-0 nova_compute[254092]: 2025-11-25 16:34:07.281 254096 DEBUG nova.compute.manager [req-e9bb7931-d4fe-40b3-8b84-b69d21fb28c2 req-45fee1eb-b09e-46b5-ae0e-3a0d5f982738 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] No waiting events found dispatching network-vif-plugged-62a6fc34-457f-4470-8bef-e1b9110d2f12 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:34:07 compute-0 nova_compute[254092]: 2025-11-25 16:34:07.281 254096 WARNING nova.compute.manager [req-e9bb7931-d4fe-40b3-8b84-b69d21fb28c2 req-45fee1eb-b09e-46b5-ae0e-3a0d5f982738 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Received unexpected event network-vif-plugged-62a6fc34-457f-4470-8bef-e1b9110d2f12 for instance with vm_state deleted and task_state None.
Nov 25 16:34:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1401: 321 pgs: 321 active+clean; 159 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 141 op/s
Nov 25 16:34:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:34:07 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/104151778' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:34:07 compute-0 nova_compute[254092]: 2025-11-25 16:34:07.467 254096 DEBUG oslo_concurrency.processutils [None req-d44e76ad-f628-4bd0-8af4-c2efeaa99072 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:34:07 compute-0 nova_compute[254092]: 2025-11-25 16:34:07.474 254096 DEBUG nova.compute.provider_tree [None req-d44e76ad-f628-4bd0-8af4-c2efeaa99072 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:34:07 compute-0 nova_compute[254092]: 2025-11-25 16:34:07.488 254096 DEBUG nova.scheduler.client.report [None req-d44e76ad-f628-4bd0-8af4-c2efeaa99072 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:34:07 compute-0 nova_compute[254092]: 2025-11-25 16:34:07.509 254096 DEBUG oslo_concurrency.lockutils [None req-d44e76ad-f628-4bd0-8af4-c2efeaa99072 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.549s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:34:07 compute-0 nova_compute[254092]: 2025-11-25 16:34:07.532 254096 INFO nova.scheduler.client.report [None req-d44e76ad-f628-4bd0-8af4-c2efeaa99072 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Deleted allocations for instance f77f4bf4-5651-4f45-af4b-9ef0d68df364
Nov 25 16:34:07 compute-0 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 16:34:07 compute-0 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2401.5 total, 600.0 interval
                                           Cumulative writes: 14K writes, 58K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 14K writes, 4209 syncs, 3.39 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8020 writes, 33K keys, 8020 commit groups, 1.0 writes per commit group, ingest: 35.89 MB, 0.06 MB/s
                                           Interval WAL: 8020 writes, 3109 syncs, 2.58 writes per sync, written: 0.04 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 16:34:07 compute-0 nova_compute[254092]: 2025-11-25 16:34:07.607 254096 DEBUG oslo_concurrency.lockutils [None req-d44e76ad-f628-4bd0-8af4-c2efeaa99072 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "f77f4bf4-5651-4f45-af4b-9ef0d68df364" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.193s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:34:07 compute-0 nova_compute[254092]: 2025-11-25 16:34:07.703 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:07 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/104151778' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:34:08 compute-0 ceph-mon[74985]: pgmap v1401: 321 pgs: 321 active+clean; 159 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 141 op/s
Nov 25 16:34:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:34:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1402: 321 pgs: 321 active+clean; 159 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 140 op/s
Nov 25 16:34:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:34:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:34:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:34:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:34:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:34:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:34:10 compute-0 nova_compute[254092]: 2025-11-25 16:34:10.677 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:10 compute-0 ceph-mon[74985]: pgmap v1402: 321 pgs: 321 active+clean; 159 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 140 op/s
Nov 25 16:34:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1403: 321 pgs: 321 active+clean; 121 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 166 op/s
Nov 25 16:34:12 compute-0 nova_compute[254092]: 2025-11-25 16:34:12.705 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:12 compute-0 ceph-mon[74985]: pgmap v1403: 321 pgs: 321 active+clean; 121 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 166 op/s
Nov 25 16:34:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1404: 321 pgs: 321 active+clean; 121 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 100 op/s
Nov 25 16:34:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:13.609 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:34:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:13.610 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:13.611 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:34:13 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:34:14 compute-0 ceph-mon[74985]: pgmap v1404: 321 pgs: 321 active+clean; 121 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 100 op/s
Nov 25 16:34:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1405: 321 pgs: 321 active+clean; 121 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 100 op/s
Nov 25 16:34:15 compute-0 nova_compute[254092]: 2025-11-25 16:34:15.680 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:16 compute-0 ceph-mon[74985]: pgmap v1405: 321 pgs: 321 active+clean; 121 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 100 op/s
Nov 25 16:34:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1406: 321 pgs: 321 active+clean; 121 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 635 KiB/s rd, 13 KiB/s wr, 46 op/s
Nov 25 16:34:17 compute-0 nova_compute[254092]: 2025-11-25 16:34:17.707 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:18 compute-0 ceph-mon[74985]: pgmap v1406: 321 pgs: 321 active+clean; 121 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 635 KiB/s rd, 13 KiB/s wr, 46 op/s
Nov 25 16:34:18 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:34:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1407: 321 pgs: 321 active+clean; 121 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Nov 25 16:34:20 compute-0 ceph-mon[74985]: pgmap v1407: 321 pgs: 321 active+clean; 121 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Nov 25 16:34:20 compute-0 nova_compute[254092]: 2025-11-25 16:34:20.649 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088445.6471446, f77f4bf4-5651-4f45-af4b-9ef0d68df364 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:34:20 compute-0 nova_compute[254092]: 2025-11-25 16:34:20.649 254096 INFO nova.compute.manager [-] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] VM Stopped (Lifecycle Event)
Nov 25 16:34:20 compute-0 nova_compute[254092]: 2025-11-25 16:34:20.682 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:20 compute-0 nova_compute[254092]: 2025-11-25 16:34:20.686 254096 DEBUG nova.compute.manager [None req-7ddf576c-ab82-4629-84d9-c54ca0ac2733 - - - - - -] [instance: f77f4bf4-5651-4f45-af4b-9ef0d68df364] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:34:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1408: 321 pgs: 321 active+clean; 121 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 2.2 KiB/s wr, 25 op/s
Nov 25 16:34:22 compute-0 ceph-mon[74985]: pgmap v1408: 321 pgs: 321 active+clean; 121 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 2.2 KiB/s wr, 25 op/s
Nov 25 16:34:22 compute-0 nova_compute[254092]: 2025-11-25 16:34:22.708 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1409: 321 pgs: 321 active+clean; 121 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Nov 25 16:34:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:34:24 compute-0 ceph-mon[74985]: pgmap v1409: 321 pgs: 321 active+clean; 121 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Nov 25 16:34:24 compute-0 nova_compute[254092]: 2025-11-25 16:34:24.531 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:24.531 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:34:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:24.534 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 16:34:25 compute-0 ceph-mgr[75280]: [devicehealth INFO root] Check health
Nov 25 16:34:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1410: 321 pgs: 321 active+clean; 121 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Nov 25 16:34:25 compute-0 nova_compute[254092]: 2025-11-25 16:34:25.685 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:26 compute-0 ceph-mon[74985]: pgmap v1410: 321 pgs: 321 active+clean; 121 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Nov 25 16:34:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:26.536 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:26 compute-0 nova_compute[254092]: 2025-11-25 16:34:26.999 254096 DEBUG oslo_concurrency.lockutils [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "90437bdf-689c-4185-93de-c28fe2c2ab07" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:34:27 compute-0 nova_compute[254092]: 2025-11-25 16:34:26.999 254096 DEBUG oslo_concurrency.lockutils [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "90437bdf-689c-4185-93de-c28fe2c2ab07" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:27 compute-0 nova_compute[254092]: 2025-11-25 16:34:27.015 254096 DEBUG nova.compute.manager [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:34:27 compute-0 nova_compute[254092]: 2025-11-25 16:34:27.097 254096 DEBUG oslo_concurrency.lockutils [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:34:27 compute-0 nova_compute[254092]: 2025-11-25 16:34:27.098 254096 DEBUG oslo_concurrency.lockutils [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:27 compute-0 nova_compute[254092]: 2025-11-25 16:34:27.107 254096 DEBUG nova.virt.hardware [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:34:27 compute-0 nova_compute[254092]: 2025-11-25 16:34:27.107 254096 INFO nova.compute.claims [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:34:27 compute-0 nova_compute[254092]: 2025-11-25 16:34:27.250 254096 DEBUG oslo_concurrency.processutils [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:34:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1411: 321 pgs: 321 active+clean; 121 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Nov 25 16:34:27 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:34:27 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2343476998' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:34:27 compute-0 nova_compute[254092]: 2025-11-25 16:34:27.710 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:27 compute-0 nova_compute[254092]: 2025-11-25 16:34:27.722 254096 DEBUG oslo_concurrency.processutils [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:34:27 compute-0 nova_compute[254092]: 2025-11-25 16:34:27.727 254096 DEBUG nova.compute.provider_tree [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:34:27 compute-0 nova_compute[254092]: 2025-11-25 16:34:27.741 254096 DEBUG nova.scheduler.client.report [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:34:27 compute-0 nova_compute[254092]: 2025-11-25 16:34:27.762 254096 DEBUG oslo_concurrency.lockutils [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:34:27 compute-0 nova_compute[254092]: 2025-11-25 16:34:27.763 254096 DEBUG nova.compute.manager [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:34:27 compute-0 nova_compute[254092]: 2025-11-25 16:34:27.803 254096 DEBUG nova.compute.manager [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:34:27 compute-0 nova_compute[254092]: 2025-11-25 16:34:27.804 254096 DEBUG nova.network.neutron [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:34:27 compute-0 nova_compute[254092]: 2025-11-25 16:34:27.820 254096 INFO nova.virt.libvirt.driver [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:34:27 compute-0 nova_compute[254092]: 2025-11-25 16:34:27.835 254096 DEBUG nova.compute.manager [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:34:27 compute-0 nova_compute[254092]: 2025-11-25 16:34:27.916 254096 DEBUG nova.compute.manager [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:34:27 compute-0 nova_compute[254092]: 2025-11-25 16:34:27.918 254096 DEBUG nova.virt.libvirt.driver [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:34:27 compute-0 nova_compute[254092]: 2025-11-25 16:34:27.918 254096 INFO nova.virt.libvirt.driver [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Creating image(s)
Nov 25 16:34:27 compute-0 nova_compute[254092]: 2025-11-25 16:34:27.938 254096 DEBUG nova.storage.rbd_utils [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image 90437bdf-689c-4185-93de-c28fe2c2ab07_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:34:27 compute-0 nova_compute[254092]: 2025-11-25 16:34:27.957 254096 DEBUG nova.storage.rbd_utils [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image 90437bdf-689c-4185-93de-c28fe2c2ab07_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:34:27 compute-0 nova_compute[254092]: 2025-11-25 16:34:27.979 254096 DEBUG nova.storage.rbd_utils [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image 90437bdf-689c-4185-93de-c28fe2c2ab07_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:34:27 compute-0 nova_compute[254092]: 2025-11-25 16:34:27.983 254096 DEBUG oslo_concurrency.processutils [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:34:28 compute-0 nova_compute[254092]: 2025-11-25 16:34:28.042 254096 DEBUG oslo_concurrency.processutils [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:34:28 compute-0 nova_compute[254092]: 2025-11-25 16:34:28.043 254096 DEBUG oslo_concurrency.lockutils [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:34:28 compute-0 nova_compute[254092]: 2025-11-25 16:34:28.044 254096 DEBUG oslo_concurrency.lockutils [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:28 compute-0 nova_compute[254092]: 2025-11-25 16:34:28.044 254096 DEBUG oslo_concurrency.lockutils [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:34:28 compute-0 nova_compute[254092]: 2025-11-25 16:34:28.065 254096 DEBUG nova.storage.rbd_utils [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image 90437bdf-689c-4185-93de-c28fe2c2ab07_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:34:28 compute-0 nova_compute[254092]: 2025-11-25 16:34:28.068 254096 DEBUG oslo_concurrency.processutils [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 90437bdf-689c-4185-93de-c28fe2c2ab07_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:34:28 compute-0 nova_compute[254092]: 2025-11-25 16:34:28.325 254096 DEBUG nova.policy [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '75e65df891a54a2caabb073a427430b9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8bd01e0913564ac783fae350d6861e24', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:34:28 compute-0 nova_compute[254092]: 2025-11-25 16:34:28.357 254096 DEBUG oslo_concurrency.processutils [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 90437bdf-689c-4185-93de-c28fe2c2ab07_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.289s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:34:28 compute-0 nova_compute[254092]: 2025-11-25 16:34:28.408 254096 DEBUG nova.storage.rbd_utils [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] resizing rbd image 90437bdf-689c-4185-93de-c28fe2c2ab07_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:34:28 compute-0 nova_compute[254092]: 2025-11-25 16:34:28.486 254096 DEBUG nova.objects.instance [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lazy-loading 'migration_context' on Instance uuid 90437bdf-689c-4185-93de-c28fe2c2ab07 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:34:28 compute-0 nova_compute[254092]: 2025-11-25 16:34:28.500 254096 DEBUG nova.virt.libvirt.driver [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:34:28 compute-0 nova_compute[254092]: 2025-11-25 16:34:28.501 254096 DEBUG nova.virt.libvirt.driver [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Ensure instance console log exists: /var/lib/nova/instances/90437bdf-689c-4185-93de-c28fe2c2ab07/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:34:28 compute-0 nova_compute[254092]: 2025-11-25 16:34:28.501 254096 DEBUG oslo_concurrency.lockutils [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:34:28 compute-0 nova_compute[254092]: 2025-11-25 16:34:28.501 254096 DEBUG oslo_concurrency.lockutils [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:28 compute-0 nova_compute[254092]: 2025-11-25 16:34:28.501 254096 DEBUG oslo_concurrency.lockutils [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:34:28 compute-0 ceph-mon[74985]: pgmap v1411: 321 pgs: 321 active+clean; 121 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Nov 25 16:34:28 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2343476998' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:34:28 compute-0 podman[298969]: 2025-11-25 16:34:28.649320408 +0000 UTC m=+0.059548184 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 16:34:28 compute-0 podman[298970]: 2025-11-25 16:34:28.649549944 +0000 UTC m=+0.058217538 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 25 16:34:28 compute-0 podman[298971]: 2025-11-25 16:34:28.722129431 +0000 UTC m=+0.127616469 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Nov 25 16:34:28 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:34:28 compute-0 nova_compute[254092]: 2025-11-25 16:34:28.975 254096 DEBUG nova.network.neutron [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Successfully created port: f64d52c9-5dbe-4b99-af6f-4f3a4294d461 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:34:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1412: 321 pgs: 321 active+clean; 121 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Nov 25 16:34:29 compute-0 nova_compute[254092]: 2025-11-25 16:34:29.779 254096 DEBUG nova.network.neutron [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Successfully updated port: f64d52c9-5dbe-4b99-af6f-4f3a4294d461 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:34:29 compute-0 nova_compute[254092]: 2025-11-25 16:34:29.798 254096 DEBUG oslo_concurrency.lockutils [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "refresh_cache-90437bdf-689c-4185-93de-c28fe2c2ab07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:34:29 compute-0 nova_compute[254092]: 2025-11-25 16:34:29.798 254096 DEBUG oslo_concurrency.lockutils [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquired lock "refresh_cache-90437bdf-689c-4185-93de-c28fe2c2ab07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:34:29 compute-0 nova_compute[254092]: 2025-11-25 16:34:29.798 254096 DEBUG nova.network.neutron [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:34:30 compute-0 nova_compute[254092]: 2025-11-25 16:34:30.310 254096 DEBUG nova.network.neutron [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:34:30 compute-0 nova_compute[254092]: 2025-11-25 16:34:30.473 254096 DEBUG nova.compute.manager [req-4b74012a-a081-46cc-a697-066ae227983d req-0835ae7d-e2a2-423f-9c61-d85a86b32e84 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Received event network-changed-f64d52c9-5dbe-4b99-af6f-4f3a4294d461 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:34:30 compute-0 nova_compute[254092]: 2025-11-25 16:34:30.474 254096 DEBUG nova.compute.manager [req-4b74012a-a081-46cc-a697-066ae227983d req-0835ae7d-e2a2-423f-9c61-d85a86b32e84 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Refreshing instance network info cache due to event network-changed-f64d52c9-5dbe-4b99-af6f-4f3a4294d461. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:34:30 compute-0 nova_compute[254092]: 2025-11-25 16:34:30.474 254096 DEBUG oslo_concurrency.lockutils [req-4b74012a-a081-46cc-a697-066ae227983d req-0835ae7d-e2a2-423f-9c61-d85a86b32e84 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-90437bdf-689c-4185-93de-c28fe2c2ab07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:34:30 compute-0 ceph-mon[74985]: pgmap v1412: 321 pgs: 321 active+clean; 121 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Nov 25 16:34:30 compute-0 nova_compute[254092]: 2025-11-25 16:34:30.686 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1413: 321 pgs: 321 active+clean; 167 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 16:34:32 compute-0 nova_compute[254092]: 2025-11-25 16:34:32.326 254096 DEBUG nova.network.neutron [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Updating instance_info_cache with network_info: [{"id": "f64d52c9-5dbe-4b99-af6f-4f3a4294d461", "address": "fa:16:3e:d6:1d:6a", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf64d52c9-5d", "ovs_interfaceid": "f64d52c9-5dbe-4b99-af6f-4f3a4294d461", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:34:32 compute-0 nova_compute[254092]: 2025-11-25 16:34:32.345 254096 DEBUG oslo_concurrency.lockutils [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Releasing lock "refresh_cache-90437bdf-689c-4185-93de-c28fe2c2ab07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:34:32 compute-0 nova_compute[254092]: 2025-11-25 16:34:32.346 254096 DEBUG nova.compute.manager [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Instance network_info: |[{"id": "f64d52c9-5dbe-4b99-af6f-4f3a4294d461", "address": "fa:16:3e:d6:1d:6a", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf64d52c9-5d", "ovs_interfaceid": "f64d52c9-5dbe-4b99-af6f-4f3a4294d461", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:34:32 compute-0 nova_compute[254092]: 2025-11-25 16:34:32.347 254096 DEBUG oslo_concurrency.lockutils [req-4b74012a-a081-46cc-a697-066ae227983d req-0835ae7d-e2a2-423f-9c61-d85a86b32e84 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-90437bdf-689c-4185-93de-c28fe2c2ab07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:34:32 compute-0 nova_compute[254092]: 2025-11-25 16:34:32.348 254096 DEBUG nova.network.neutron [req-4b74012a-a081-46cc-a697-066ae227983d req-0835ae7d-e2a2-423f-9c61-d85a86b32e84 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Refreshing network info cache for port f64d52c9-5dbe-4b99-af6f-4f3a4294d461 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:34:32 compute-0 nova_compute[254092]: 2025-11-25 16:34:32.352 254096 DEBUG nova.virt.libvirt.driver [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Start _get_guest_xml network_info=[{"id": "f64d52c9-5dbe-4b99-af6f-4f3a4294d461", "address": "fa:16:3e:d6:1d:6a", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf64d52c9-5d", "ovs_interfaceid": "f64d52c9-5dbe-4b99-af6f-4f3a4294d461", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:34:32 compute-0 nova_compute[254092]: 2025-11-25 16:34:32.360 254096 WARNING nova.virt.libvirt.driver [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:34:32 compute-0 nova_compute[254092]: 2025-11-25 16:34:32.369 254096 DEBUG nova.virt.libvirt.host [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:34:32 compute-0 nova_compute[254092]: 2025-11-25 16:34:32.370 254096 DEBUG nova.virt.libvirt.host [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:34:32 compute-0 nova_compute[254092]: 2025-11-25 16:34:32.387 254096 DEBUG nova.virt.libvirt.host [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:34:32 compute-0 nova_compute[254092]: 2025-11-25 16:34:32.388 254096 DEBUG nova.virt.libvirt.host [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:34:32 compute-0 nova_compute[254092]: 2025-11-25 16:34:32.389 254096 DEBUG nova.virt.libvirt.driver [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:34:32 compute-0 nova_compute[254092]: 2025-11-25 16:34:32.389 254096 DEBUG nova.virt.hardware [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:34:32 compute-0 nova_compute[254092]: 2025-11-25 16:34:32.390 254096 DEBUG nova.virt.hardware [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:34:32 compute-0 nova_compute[254092]: 2025-11-25 16:34:32.391 254096 DEBUG nova.virt.hardware [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:34:32 compute-0 nova_compute[254092]: 2025-11-25 16:34:32.391 254096 DEBUG nova.virt.hardware [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:34:32 compute-0 nova_compute[254092]: 2025-11-25 16:34:32.391 254096 DEBUG nova.virt.hardware [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:34:32 compute-0 nova_compute[254092]: 2025-11-25 16:34:32.392 254096 DEBUG nova.virt.hardware [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:34:32 compute-0 nova_compute[254092]: 2025-11-25 16:34:32.392 254096 DEBUG nova.virt.hardware [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:34:32 compute-0 nova_compute[254092]: 2025-11-25 16:34:32.393 254096 DEBUG nova.virt.hardware [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:34:32 compute-0 nova_compute[254092]: 2025-11-25 16:34:32.394 254096 DEBUG nova.virt.hardware [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:34:32 compute-0 nova_compute[254092]: 2025-11-25 16:34:32.394 254096 DEBUG nova.virt.hardware [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:34:32 compute-0 nova_compute[254092]: 2025-11-25 16:34:32.395 254096 DEBUG nova.virt.hardware [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:34:32 compute-0 nova_compute[254092]: 2025-11-25 16:34:32.400 254096 DEBUG oslo_concurrency.processutils [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:34:32 compute-0 ceph-mon[74985]: pgmap v1413: 321 pgs: 321 active+clean; 167 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 16:34:32 compute-0 nova_compute[254092]: 2025-11-25 16:34:32.712 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:34:32 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4145857243' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:34:32 compute-0 nova_compute[254092]: 2025-11-25 16:34:32.880 254096 DEBUG oslo_concurrency.processutils [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:34:32 compute-0 nova_compute[254092]: 2025-11-25 16:34:32.901 254096 DEBUG nova.storage.rbd_utils [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image 90437bdf-689c-4185-93de-c28fe2c2ab07_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:34:32 compute-0 nova_compute[254092]: 2025-11-25 16:34:32.904 254096 DEBUG oslo_concurrency.processutils [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:34:33 compute-0 sudo[299073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:34:33 compute-0 sudo[299073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:34:33 compute-0 sudo[299073]: pam_unix(sudo:session): session closed for user root
Nov 25 16:34:33 compute-0 sudo[299117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:34:33 compute-0 sudo[299117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:34:33 compute-0 sudo[299117]: pam_unix(sudo:session): session closed for user root
Nov 25 16:34:33 compute-0 sudo[299142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:34:33 compute-0 sudo[299142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:34:33 compute-0 sudo[299142]: pam_unix(sudo:session): session closed for user root
Nov 25 16:34:33 compute-0 sudo[299167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 25 16:34:33 compute-0 sudo[299167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:34:33 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:34:33 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1230560195' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:34:33 compute-0 nova_compute[254092]: 2025-11-25 16:34:33.352 254096 DEBUG oslo_concurrency.processutils [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:34:33 compute-0 nova_compute[254092]: 2025-11-25 16:34:33.355 254096 DEBUG nova.virt.libvirt.vif [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:34:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-61720203',display_name='tempest-ImagesTestJSON-server-61720203',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-61720203',id=39,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8bd01e0913564ac783fae350d6861e24',ramdisk_id='',reservation_id='r-lxaolagh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-217824554',owner_user_name='tempest-ImagesTestJSON-217824554-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:34:27Z,user_data=None,user_id='75e65df891a54a2caabb073a427430b9',uuid=90437bdf-689c-4185-93de-c28fe2c2ab07,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f64d52c9-5dbe-4b99-af6f-4f3a4294d461", "address": "fa:16:3e:d6:1d:6a", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf64d52c9-5d", "ovs_interfaceid": "f64d52c9-5dbe-4b99-af6f-4f3a4294d461", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:34:33 compute-0 nova_compute[254092]: 2025-11-25 16:34:33.355 254096 DEBUG nova.network.os_vif_util [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converting VIF {"id": "f64d52c9-5dbe-4b99-af6f-4f3a4294d461", "address": "fa:16:3e:d6:1d:6a", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf64d52c9-5d", "ovs_interfaceid": "f64d52c9-5dbe-4b99-af6f-4f3a4294d461", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:34:33 compute-0 nova_compute[254092]: 2025-11-25 16:34:33.356 254096 DEBUG nova.network.os_vif_util [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:1d:6a,bridge_name='br-int',has_traffic_filtering=True,id=f64d52c9-5dbe-4b99-af6f-4f3a4294d461,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf64d52c9-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:34:33 compute-0 nova_compute[254092]: 2025-11-25 16:34:33.358 254096 DEBUG nova.objects.instance [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lazy-loading 'pci_devices' on Instance uuid 90437bdf-689c-4185-93de-c28fe2c2ab07 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:34:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1414: 321 pgs: 321 active+clean; 167 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 16:34:33 compute-0 nova_compute[254092]: 2025-11-25 16:34:33.370 254096 DEBUG nova.virt.libvirt.driver [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:34:33 compute-0 nova_compute[254092]:   <uuid>90437bdf-689c-4185-93de-c28fe2c2ab07</uuid>
Nov 25 16:34:33 compute-0 nova_compute[254092]:   <name>instance-00000027</name>
Nov 25 16:34:33 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:34:33 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:34:33 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:34:33 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:       <nova:name>tempest-ImagesTestJSON-server-61720203</nova:name>
Nov 25 16:34:33 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:34:32</nova:creationTime>
Nov 25 16:34:33 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:34:33 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:34:33 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:34:33 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:34:33 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:34:33 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:34:33 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:34:33 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:34:33 compute-0 nova_compute[254092]:         <nova:user uuid="75e65df891a54a2caabb073a427430b9">tempest-ImagesTestJSON-217824554-project-member</nova:user>
Nov 25 16:34:33 compute-0 nova_compute[254092]:         <nova:project uuid="8bd01e0913564ac783fae350d6861e24">tempest-ImagesTestJSON-217824554</nova:project>
Nov 25 16:34:33 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:34:33 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:34:33 compute-0 nova_compute[254092]:         <nova:port uuid="f64d52c9-5dbe-4b99-af6f-4f3a4294d461">
Nov 25 16:34:33 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:34:33 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:34:33 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:34:33 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:34:33 compute-0 nova_compute[254092]:     <system>
Nov 25 16:34:33 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:34:33 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:34:33 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:34:33 compute-0 nova_compute[254092]:       <entry name="serial">90437bdf-689c-4185-93de-c28fe2c2ab07</entry>
Nov 25 16:34:33 compute-0 nova_compute[254092]:       <entry name="uuid">90437bdf-689c-4185-93de-c28fe2c2ab07</entry>
Nov 25 16:34:33 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     </system>
Nov 25 16:34:33 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:34:33 compute-0 nova_compute[254092]:   <os>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:   </os>
Nov 25 16:34:33 compute-0 nova_compute[254092]:   <features>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:   </features>
Nov 25 16:34:33 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:34:33 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:34:33 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:34:33 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:34:33 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:34:33 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/90437bdf-689c-4185-93de-c28fe2c2ab07_disk">
Nov 25 16:34:33 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:       </source>
Nov 25 16:34:33 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:34:33 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:34:33 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:34:33 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/90437bdf-689c-4185-93de-c28fe2c2ab07_disk.config">
Nov 25 16:34:33 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:       </source>
Nov 25 16:34:33 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:34:33 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:34:33 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:34:33 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:d6:1d:6a"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:       <target dev="tapf64d52c9-5d"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:34:33 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/90437bdf-689c-4185-93de-c28fe2c2ab07/console.log" append="off"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     <video>
Nov 25 16:34:33 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     </video>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:34:33 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:34:33 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:34:33 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:34:33 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:34:33 compute-0 nova_compute[254092]: </domain>
Nov 25 16:34:33 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:34:33 compute-0 nova_compute[254092]: 2025-11-25 16:34:33.371 254096 DEBUG nova.compute.manager [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Preparing to wait for external event network-vif-plugged-f64d52c9-5dbe-4b99-af6f-4f3a4294d461 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:34:33 compute-0 nova_compute[254092]: 2025-11-25 16:34:33.371 254096 DEBUG oslo_concurrency.lockutils [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "90437bdf-689c-4185-93de-c28fe2c2ab07-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:34:33 compute-0 nova_compute[254092]: 2025-11-25 16:34:33.371 254096 DEBUG oslo_concurrency.lockutils [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "90437bdf-689c-4185-93de-c28fe2c2ab07-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:33 compute-0 nova_compute[254092]: 2025-11-25 16:34:33.372 254096 DEBUG oslo_concurrency.lockutils [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "90437bdf-689c-4185-93de-c28fe2c2ab07-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:34:33 compute-0 nova_compute[254092]: 2025-11-25 16:34:33.372 254096 DEBUG nova.virt.libvirt.vif [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:34:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-61720203',display_name='tempest-ImagesTestJSON-server-61720203',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-61720203',id=39,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8bd01e0913564ac783fae350d6861e24',ramdisk_id='',reservation_id='r-lxaolagh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-217824554',owner_user_name='tempest-ImagesTestJSON-217824554-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:34:27Z,user_data=None,user_id='75e65df891a54a2caabb073a427430b9',uuid=90437bdf-689c-4185-93de-c28fe2c2ab07,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f64d52c9-5dbe-4b99-af6f-4f3a4294d461", "address": "fa:16:3e:d6:1d:6a", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf64d52c9-5d", "ovs_interfaceid": "f64d52c9-5dbe-4b99-af6f-4f3a4294d461", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:34:33 compute-0 nova_compute[254092]: 2025-11-25 16:34:33.373 254096 DEBUG nova.network.os_vif_util [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converting VIF {"id": "f64d52c9-5dbe-4b99-af6f-4f3a4294d461", "address": "fa:16:3e:d6:1d:6a", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf64d52c9-5d", "ovs_interfaceid": "f64d52c9-5dbe-4b99-af6f-4f3a4294d461", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:34:33 compute-0 nova_compute[254092]: 2025-11-25 16:34:33.373 254096 DEBUG nova.network.os_vif_util [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:1d:6a,bridge_name='br-int',has_traffic_filtering=True,id=f64d52c9-5dbe-4b99-af6f-4f3a4294d461,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf64d52c9-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:34:33 compute-0 nova_compute[254092]: 2025-11-25 16:34:33.374 254096 DEBUG os_vif [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:1d:6a,bridge_name='br-int',has_traffic_filtering=True,id=f64d52c9-5dbe-4b99-af6f-4f3a4294d461,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf64d52c9-5d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:34:33 compute-0 nova_compute[254092]: 2025-11-25 16:34:33.374 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:33 compute-0 nova_compute[254092]: 2025-11-25 16:34:33.375 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:33 compute-0 nova_compute[254092]: 2025-11-25 16:34:33.375 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:34:33 compute-0 nova_compute[254092]: 2025-11-25 16:34:33.377 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:33 compute-0 nova_compute[254092]: 2025-11-25 16:34:33.378 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf64d52c9-5d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:33 compute-0 nova_compute[254092]: 2025-11-25 16:34:33.378 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf64d52c9-5d, col_values=(('external_ids', {'iface-id': 'f64d52c9-5dbe-4b99-af6f-4f3a4294d461', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d6:1d:6a', 'vm-uuid': '90437bdf-689c-4185-93de-c28fe2c2ab07'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:33 compute-0 NetworkManager[48891]: <info>  [1764088473.3810] manager: (tapf64d52c9-5d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/125)
Nov 25 16:34:33 compute-0 nova_compute[254092]: 2025-11-25 16:34:33.379 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:33 compute-0 nova_compute[254092]: 2025-11-25 16:34:33.384 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:34:33 compute-0 nova_compute[254092]: 2025-11-25 16:34:33.387 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:33 compute-0 nova_compute[254092]: 2025-11-25 16:34:33.388 254096 INFO os_vif [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:1d:6a,bridge_name='br-int',has_traffic_filtering=True,id=f64d52c9-5dbe-4b99-af6f-4f3a4294d461,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf64d52c9-5d')
Nov 25 16:34:33 compute-0 sudo[299167]: pam_unix(sudo:session): session closed for user root
Nov 25 16:34:33 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:34:33 compute-0 nova_compute[254092]: 2025-11-25 16:34:33.500 254096 DEBUG nova.virt.libvirt.driver [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:34:33 compute-0 nova_compute[254092]: 2025-11-25 16:34:33.500 254096 DEBUG nova.virt.libvirt.driver [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:34:33 compute-0 nova_compute[254092]: 2025-11-25 16:34:33.500 254096 DEBUG nova.virt.libvirt.driver [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] No VIF found with MAC fa:16:3e:d6:1d:6a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:34:33 compute-0 nova_compute[254092]: 2025-11-25 16:34:33.501 254096 INFO nova.virt.libvirt.driver [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Using config drive
Nov 25 16:34:33 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:34:33 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:34:33 compute-0 nova_compute[254092]: 2025-11-25 16:34:33.529 254096 DEBUG nova.storage.rbd_utils [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image 90437bdf-689c-4185-93de-c28fe2c2ab07_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:34:33 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:34:33 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4145857243' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:34:33 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1230560195' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:34:33 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:34:33 compute-0 sudo[299235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:34:33 compute-0 sudo[299235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:34:33 compute-0 sudo[299235]: pam_unix(sudo:session): session closed for user root
Nov 25 16:34:33 compute-0 sudo[299260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:34:33 compute-0 sudo[299260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:34:33 compute-0 sudo[299260]: pam_unix(sudo:session): session closed for user root
Nov 25 16:34:33 compute-0 sudo[299285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:34:33 compute-0 sudo[299285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:34:33 compute-0 sudo[299285]: pam_unix(sudo:session): session closed for user root
Nov 25 16:34:33 compute-0 sudo[299310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 16:34:33 compute-0 sudo[299310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:34:33 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:34:34 compute-0 sudo[299310]: pam_unix(sudo:session): session closed for user root
Nov 25 16:34:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:34:34 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:34:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 16:34:34 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:34:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 16:34:34 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:34:34 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 68d1ac79-8d88-4d56-87ea-529fa4b0fec6 does not exist
Nov 25 16:34:34 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 25268a5e-202f-4925-9743-30d44abbb3f0 does not exist
Nov 25 16:34:34 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 86935a55-d26c-456b-a2fe-d59ff86e9f27 does not exist
Nov 25 16:34:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 16:34:34 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:34:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 16:34:34 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:34:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:34:34 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:34:34 compute-0 nova_compute[254092]: 2025-11-25 16:34:34.432 254096 INFO nova.virt.libvirt.driver [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Creating config drive at /var/lib/nova/instances/90437bdf-689c-4185-93de-c28fe2c2ab07/disk.config
Nov 25 16:34:34 compute-0 nova_compute[254092]: 2025-11-25 16:34:34.436 254096 DEBUG oslo_concurrency.processutils [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/90437bdf-689c-4185-93de-c28fe2c2ab07/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfykp_3p3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:34:34 compute-0 sudo[299366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:34:34 compute-0 sudo[299366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:34:34 compute-0 sudo[299366]: pam_unix(sudo:session): session closed for user root
Nov 25 16:34:34 compute-0 sudo[299394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:34:34 compute-0 sudo[299394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:34:34 compute-0 sudo[299394]: pam_unix(sudo:session): session closed for user root
Nov 25 16:34:34 compute-0 sudo[299419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:34:34 compute-0 nova_compute[254092]: 2025-11-25 16:34:34.567 254096 DEBUG oslo_concurrency.processutils [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/90437bdf-689c-4185-93de-c28fe2c2ab07/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfykp_3p3" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:34:34 compute-0 sudo[299419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:34:34 compute-0 sudo[299419]: pam_unix(sudo:session): session closed for user root
Nov 25 16:34:34 compute-0 ceph-mon[74985]: pgmap v1414: 321 pgs: 321 active+clean; 167 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 16:34:34 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:34:34 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:34:34 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:34:34 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:34:34 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:34:34 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:34:34 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:34:34 compute-0 nova_compute[254092]: 2025-11-25 16:34:34.594 254096 DEBUG nova.storage.rbd_utils [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image 90437bdf-689c-4185-93de-c28fe2c2ab07_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:34:34 compute-0 nova_compute[254092]: 2025-11-25 16:34:34.597 254096 DEBUG oslo_concurrency.processutils [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/90437bdf-689c-4185-93de-c28fe2c2ab07/disk.config 90437bdf-689c-4185-93de-c28fe2c2ab07_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:34:34 compute-0 sudo[299451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 16:34:34 compute-0 sudo[299451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:34:34 compute-0 nova_compute[254092]: 2025-11-25 16:34:34.726 254096 DEBUG oslo_concurrency.processutils [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/90437bdf-689c-4185-93de-c28fe2c2ab07/disk.config 90437bdf-689c-4185-93de-c28fe2c2ab07_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:34:34 compute-0 nova_compute[254092]: 2025-11-25 16:34:34.727 254096 INFO nova.virt.libvirt.driver [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Deleting local config drive /var/lib/nova/instances/90437bdf-689c-4185-93de-c28fe2c2ab07/disk.config because it was imported into RBD.
Nov 25 16:34:34 compute-0 kernel: tapf64d52c9-5d: entered promiscuous mode
Nov 25 16:34:34 compute-0 NetworkManager[48891]: <info>  [1764088474.7798] manager: (tapf64d52c9-5d): new Tun device (/org/freedesktop/NetworkManager/Devices/126)
Nov 25 16:34:34 compute-0 ovn_controller[153477]: 2025-11-25T16:34:34Z|00283|binding|INFO|Claiming lport f64d52c9-5dbe-4b99-af6f-4f3a4294d461 for this chassis.
Nov 25 16:34:34 compute-0 ovn_controller[153477]: 2025-11-25T16:34:34Z|00284|binding|INFO|f64d52c9-5dbe-4b99-af6f-4f3a4294d461: Claiming fa:16:3e:d6:1d:6a 10.100.0.10
Nov 25 16:34:34 compute-0 nova_compute[254092]: 2025-11-25 16:34:34.780 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:34 compute-0 nova_compute[254092]: 2025-11-25 16:34:34.783 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:34.791 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:1d:6a 10.100.0.10'], port_security=['fa:16:3e:d6:1d:6a 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '90437bdf-689c-4185-93de-c28fe2c2ab07', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0816ae24-275c-455e-a549-929f4eb756e7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8bd01e0913564ac783fae350d6861e24', 'neutron:revision_number': '2', 'neutron:security_group_ids': '572f99e3-d1c9-4515-9799-c37da094cf6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=24fed12c-3cf8-4d51-9b64-c89f82ee1964, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=f64d52c9-5dbe-4b99-af6f-4f3a4294d461) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:34:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:34.793 163338 INFO neutron.agent.ovn.metadata.agent [-] Port f64d52c9-5dbe-4b99-af6f-4f3a4294d461 in datapath 0816ae24-275c-455e-a549-929f4eb756e7 bound to our chassis
Nov 25 16:34:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:34.796 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0816ae24-275c-455e-a549-929f4eb756e7
Nov 25 16:34:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:34.808 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e39d8a71-6ae9-4ccf-ac8c-dcfa850e6ae8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:34.809 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0816ae24-21 in ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:34:34 compute-0 systemd-udevd[299541]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:34:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:34.812 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0816ae24-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:34:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:34.812 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7511f73f-6216-4e5b-9805-0abc1b8ecd1e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:34 compute-0 systemd-machined[216343]: New machine qemu-45-instance-00000027.
Nov 25 16:34:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:34.814 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[64440a66-cd91-428e-b470-fa295e87087c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:34 compute-0 NetworkManager[48891]: <info>  [1764088474.8245] device (tapf64d52c9-5d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:34:34 compute-0 NetworkManager[48891]: <info>  [1764088474.8289] device (tapf64d52c9-5d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:34:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:34.827 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[23bfa8f2-489d-4162-949a-ffb7fddf338c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:34 compute-0 systemd[1]: Started Virtual Machine qemu-45-instance-00000027.
Nov 25 16:34:34 compute-0 nova_compute[254092]: 2025-11-25 16:34:34.854 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:34.855 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[26c776dc-ac43-47e0-85b0-e3b9b13b72f6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:34 compute-0 ovn_controller[153477]: 2025-11-25T16:34:34Z|00285|binding|INFO|Setting lport f64d52c9-5dbe-4b99-af6f-4f3a4294d461 ovn-installed in OVS
Nov 25 16:34:34 compute-0 ovn_controller[153477]: 2025-11-25T16:34:34Z|00286|binding|INFO|Setting lport f64d52c9-5dbe-4b99-af6f-4f3a4294d461 up in Southbound
Nov 25 16:34:34 compute-0 nova_compute[254092]: 2025-11-25 16:34:34.863 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:34.883 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[74d14883-b3d1-47bf-846a-2f1bfc9e0314]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:34 compute-0 systemd-udevd[299547]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:34:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:34.888 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3a7fa98e-11d8-44e5-835d-632c183e62a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:34 compute-0 NetworkManager[48891]: <info>  [1764088474.8901] manager: (tap0816ae24-20): new Veth device (/org/freedesktop/NetworkManager/Devices/127)
Nov 25 16:34:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:34.914 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[85c4b487-3363-4c67-a260-1f559c54b02a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:34.917 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[05e8362a-3d25-4621-914b-d3ddd693cf72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:34 compute-0 NetworkManager[48891]: <info>  [1764088474.9447] device (tap0816ae24-20): carrier: link connected
Nov 25 16:34:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:34.951 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[b33cd113-de51-430a-8232-0b5781ec16e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:34 compute-0 podman[299575]: 2025-11-25 16:34:34.956534734 +0000 UTC m=+0.039676756 container create 5b8113de8e6f389a3648762953dc9ff29294d0f33ffdbd2e0029ae1047e02a01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_perlman, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:34:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:34.965 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[36aa9d36-8513-42bf-b08d-26ebd87e3b32]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0816ae24-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:52:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 81], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493253, 'reachable_time': 38612, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299605, 'error': None, 'target': 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:34.984 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d9aee559-6afe-46d6-b858-d11b04272c16]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feca:524c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 493253, 'tstamp': 493253}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299606, 'error': None, 'target': 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:34 compute-0 systemd[1]: Started libpod-conmon-5b8113de8e6f389a3648762953dc9ff29294d0f33ffdbd2e0029ae1047e02a01.scope.
Nov 25 16:34:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:34.999 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2642292c-af61-4be5-82f4-7ffd6aa883bd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0816ae24-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:52:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 81], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493253, 'reachable_time': 38612, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 299609, 'error': None, 'target': 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:35 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:34:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:35.026 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[346b9370-3368-4bce-b103-4be04bbf353c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:35 compute-0 podman[299575]: 2025-11-25 16:34:35.031121735 +0000 UTC m=+0.114263777 container init 5b8113de8e6f389a3648762953dc9ff29294d0f33ffdbd2e0029ae1047e02a01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:34:35 compute-0 podman[299575]: 2025-11-25 16:34:34.937621992 +0000 UTC m=+0.020764034 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:34:35 compute-0 podman[299575]: 2025-11-25 16:34:35.037871517 +0000 UTC m=+0.121013539 container start 5b8113de8e6f389a3648762953dc9ff29294d0f33ffdbd2e0029ae1047e02a01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_perlman, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 16:34:35 compute-0 podman[299575]: 2025-11-25 16:34:35.04056238 +0000 UTC m=+0.123704432 container attach 5b8113de8e6f389a3648762953dc9ff29294d0f33ffdbd2e0029ae1047e02a01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:34:35 compute-0 goofy_perlman[299610]: 167 167
Nov 25 16:34:35 compute-0 systemd[1]: libpod-5b8113de8e6f389a3648762953dc9ff29294d0f33ffdbd2e0029ae1047e02a01.scope: Deactivated successfully.
Nov 25 16:34:35 compute-0 podman[299575]: 2025-11-25 16:34:35.045579166 +0000 UTC m=+0.128721188 container died 5b8113de8e6f389a3648762953dc9ff29294d0f33ffdbd2e0029ae1047e02a01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_perlman, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 16:34:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-61e2eeb72c0d39d2bafeec91a3bf979163fc356c8395d5c12c69d706e0283f8b-merged.mount: Deactivated successfully.
Nov 25 16:34:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:35.080 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2c533c2c-a615-407b-9aa8-bd3aa9ee85b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:35 compute-0 podman[299575]: 2025-11-25 16:34:35.08226745 +0000 UTC m=+0.165409482 container remove 5b8113de8e6f389a3648762953dc9ff29294d0f33ffdbd2e0029ae1047e02a01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_perlman, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 16:34:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:35.086 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0816ae24-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:35.087 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:34:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:35.087 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0816ae24-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:35 compute-0 kernel: tap0816ae24-20: entered promiscuous mode
Nov 25 16:34:35 compute-0 nova_compute[254092]: 2025-11-25 16:34:35.089 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:35 compute-0 NetworkManager[48891]: <info>  [1764088475.0916] manager: (tap0816ae24-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/128)
Nov 25 16:34:35 compute-0 systemd[1]: libpod-conmon-5b8113de8e6f389a3648762953dc9ff29294d0f33ffdbd2e0029ae1047e02a01.scope: Deactivated successfully.
Nov 25 16:34:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:35.099 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0816ae24-20, col_values=(('external_ids', {'iface-id': '6381e38a-44c0-40e8-bec6-3ddd886bfc49'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:35 compute-0 nova_compute[254092]: 2025-11-25 16:34:35.100 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:35 compute-0 ovn_controller[153477]: 2025-11-25T16:34:35Z|00287|binding|INFO|Releasing lport 6381e38a-44c0-40e8-bec6-3ddd886bfc49 from this chassis (sb_readonly=0)
Nov 25 16:34:35 compute-0 nova_compute[254092]: 2025-11-25 16:34:35.114 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:35.115 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0816ae24-275c-455e-a549-929f4eb756e7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0816ae24-275c-455e-a549-929f4eb756e7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:34:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:35.116 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[72724d7c-929d-4dd5-8c02-d42ad840e25a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:35.117 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:34:35 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:34:35 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:34:35 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-0816ae24-275c-455e-a549-929f4eb756e7
Nov 25 16:34:35 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:34:35 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:34:35 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:34:35 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/0816ae24-275c-455e-a549-929f4eb756e7.pid.haproxy
Nov 25 16:34:35 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:34:35 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:34:35 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:34:35 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:34:35 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:34:35 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:34:35 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:34:35 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:34:35 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:34:35 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:34:35 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:34:35 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:34:35 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:34:35 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:34:35 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:34:35 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:34:35 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:34:35 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:34:35 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:34:35 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:34:35 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 0816ae24-275c-455e-a549-929f4eb756e7
Nov 25 16:34:35 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:34:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:35.118 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'env', 'PROCESS_TAG=haproxy-0816ae24-275c-455e-a549-929f4eb756e7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0816ae24-275c-455e-a549-929f4eb756e7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:34:35 compute-0 podman[299642]: 2025-11-25 16:34:35.257939729 +0000 UTC m=+0.039966915 container create 3c53fa321cf8bbfc66fc8fda2a15de2f6efdd7eaf7b1ec15d9604b17c3a603af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:34:35 compute-0 systemd[1]: Started libpod-conmon-3c53fa321cf8bbfc66fc8fda2a15de2f6efdd7eaf7b1ec15d9604b17c3a603af.scope.
Nov 25 16:34:35 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:34:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/881b570e407c7262c8835d04d3c22e11e1eebb762c5dea031385389d63ecda5e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:34:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/881b570e407c7262c8835d04d3c22e11e1eebb762c5dea031385389d63ecda5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:34:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/881b570e407c7262c8835d04d3c22e11e1eebb762c5dea031385389d63ecda5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:34:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/881b570e407c7262c8835d04d3c22e11e1eebb762c5dea031385389d63ecda5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:34:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/881b570e407c7262c8835d04d3c22e11e1eebb762c5dea031385389d63ecda5e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 16:34:35 compute-0 podman[299642]: 2025-11-25 16:34:35.240865066 +0000 UTC m=+0.022892272 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:34:35 compute-0 podman[299642]: 2025-11-25 16:34:35.344131313 +0000 UTC m=+0.126158519 container init 3c53fa321cf8bbfc66fc8fda2a15de2f6efdd7eaf7b1ec15d9604b17c3a603af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bartik, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:34:35 compute-0 podman[299642]: 2025-11-25 16:34:35.351003309 +0000 UTC m=+0.133030495 container start 3c53fa321cf8bbfc66fc8fda2a15de2f6efdd7eaf7b1ec15d9604b17c3a603af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bartik, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 16:34:35 compute-0 podman[299642]: 2025-11-25 16:34:35.3539863 +0000 UTC m=+0.136013486 container attach 3c53fa321cf8bbfc66fc8fda2a15de2f6efdd7eaf7b1ec15d9604b17c3a603af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bartik, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:34:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1415: 321 pgs: 321 active+clean; 167 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Nov 25 16:34:35 compute-0 podman[299691]: 2025-11-25 16:34:35.490265112 +0000 UTC m=+0.043390857 container create 00a5e11d1c1d39ae7a4fdfb05b731e9d41a63f234b35a55d67c25a79cc3156c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 16:34:35 compute-0 systemd[1]: Started libpod-conmon-00a5e11d1c1d39ae7a4fdfb05b731e9d41a63f234b35a55d67c25a79cc3156c3.scope.
Nov 25 16:34:35 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:34:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/345afeaaf6f0479944a877acbbdba76eb2134be0c4e4c8c1f27a96d67f87d7ac/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:34:35 compute-0 podman[299691]: 2025-11-25 16:34:35.561279995 +0000 UTC m=+0.114405770 container init 00a5e11d1c1d39ae7a4fdfb05b731e9d41a63f234b35a55d67c25a79cc3156c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:34:35 compute-0 podman[299691]: 2025-11-25 16:34:35.466494247 +0000 UTC m=+0.019620002 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:34:35 compute-0 podman[299691]: 2025-11-25 16:34:35.568504301 +0000 UTC m=+0.121630046 container start 00a5e11d1c1d39ae7a4fdfb05b731e9d41a63f234b35a55d67c25a79cc3156c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 25 16:34:35 compute-0 nova_compute[254092]: 2025-11-25 16:34:35.576 254096 DEBUG nova.network.neutron [req-4b74012a-a081-46cc-a697-066ae227983d req-0835ae7d-e2a2-423f-9c61-d85a86b32e84 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Updated VIF entry in instance network info cache for port f64d52c9-5dbe-4b99-af6f-4f3a4294d461. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:34:35 compute-0 nova_compute[254092]: 2025-11-25 16:34:35.576 254096 DEBUG nova.network.neutron [req-4b74012a-a081-46cc-a697-066ae227983d req-0835ae7d-e2a2-423f-9c61-d85a86b32e84 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Updating instance_info_cache with network_info: [{"id": "f64d52c9-5dbe-4b99-af6f-4f3a4294d461", "address": "fa:16:3e:d6:1d:6a", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf64d52c9-5d", "ovs_interfaceid": "f64d52c9-5dbe-4b99-af6f-4f3a4294d461", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:34:35 compute-0 neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7[299740]: [NOTICE]   (299745) : New worker (299747) forked
Nov 25 16:34:35 compute-0 neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7[299740]: [NOTICE]   (299745) : Loading success.
Nov 25 16:34:35 compute-0 nova_compute[254092]: 2025-11-25 16:34:35.597 254096 DEBUG oslo_concurrency.lockutils [req-4b74012a-a081-46cc-a697-066ae227983d req-0835ae7d-e2a2-423f-9c61-d85a86b32e84 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-90437bdf-689c-4185-93de-c28fe2c2ab07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:34:35 compute-0 nova_compute[254092]: 2025-11-25 16:34:35.599 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088475.5989382, 90437bdf-689c-4185-93de-c28fe2c2ab07 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:34:35 compute-0 nova_compute[254092]: 2025-11-25 16:34:35.599 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] VM Started (Lifecycle Event)
Nov 25 16:34:35 compute-0 nova_compute[254092]: 2025-11-25 16:34:35.625 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:34:35 compute-0 nova_compute[254092]: 2025-11-25 16:34:35.630 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088475.599377, 90437bdf-689c-4185-93de-c28fe2c2ab07 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:34:35 compute-0 nova_compute[254092]: 2025-11-25 16:34:35.630 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] VM Paused (Lifecycle Event)
Nov 25 16:34:35 compute-0 nova_compute[254092]: 2025-11-25 16:34:35.652 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:34:35 compute-0 nova_compute[254092]: 2025-11-25 16:34:35.655 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:34:35 compute-0 nova_compute[254092]: 2025-11-25 16:34:35.679 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:34:36 compute-0 practical_bartik[299658]: --> passed data devices: 0 physical, 3 LVM
Nov 25 16:34:36 compute-0 practical_bartik[299658]: --> relative data size: 1.0
Nov 25 16:34:36 compute-0 practical_bartik[299658]: --> All data devices are unavailable
Nov 25 16:34:36 compute-0 systemd[1]: libpod-3c53fa321cf8bbfc66fc8fda2a15de2f6efdd7eaf7b1ec15d9604b17c3a603af.scope: Deactivated successfully.
Nov 25 16:34:36 compute-0 systemd[1]: libpod-3c53fa321cf8bbfc66fc8fda2a15de2f6efdd7eaf7b1ec15d9604b17c3a603af.scope: Consumed 1.000s CPU time.
Nov 25 16:34:36 compute-0 podman[299642]: 2025-11-25 16:34:36.415945236 +0000 UTC m=+1.197972452 container died 3c53fa321cf8bbfc66fc8fda2a15de2f6efdd7eaf7b1ec15d9604b17c3a603af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:34:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-881b570e407c7262c8835d04d3c22e11e1eebb762c5dea031385389d63ecda5e-merged.mount: Deactivated successfully.
Nov 25 16:34:36 compute-0 podman[299642]: 2025-11-25 16:34:36.488861791 +0000 UTC m=+1.270888977 container remove 3c53fa321cf8bbfc66fc8fda2a15de2f6efdd7eaf7b1ec15d9604b17c3a603af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bartik, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:34:36 compute-0 systemd[1]: libpod-conmon-3c53fa321cf8bbfc66fc8fda2a15de2f6efdd7eaf7b1ec15d9604b17c3a603af.scope: Deactivated successfully.
Nov 25 16:34:36 compute-0 sudo[299451]: pam_unix(sudo:session): session closed for user root
Nov 25 16:34:36 compute-0 ceph-mon[74985]: pgmap v1415: 321 pgs: 321 active+clean; 167 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Nov 25 16:34:36 compute-0 sudo[299794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:34:36 compute-0 sudo[299794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:34:36 compute-0 sudo[299794]: pam_unix(sudo:session): session closed for user root
Nov 25 16:34:36 compute-0 sudo[299819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:34:36 compute-0 sudo[299819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:34:36 compute-0 sudo[299819]: pam_unix(sudo:session): session closed for user root
Nov 25 16:34:36 compute-0 sudo[299844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:34:36 compute-0 sudo[299844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:34:36 compute-0 sudo[299844]: pam_unix(sudo:session): session closed for user root
Nov 25 16:34:36 compute-0 sudo[299869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 16:34:36 compute-0 sudo[299869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:34:37 compute-0 nova_compute[254092]: 2025-11-25 16:34:37.152 254096 DEBUG oslo_concurrency.lockutils [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Acquiring lock "2c116477-6534-4f01-a0bb-ebdd9e027e05" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:34:37 compute-0 nova_compute[254092]: 2025-11-25 16:34:37.153 254096 DEBUG oslo_concurrency.lockutils [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Lock "2c116477-6534-4f01-a0bb-ebdd9e027e05" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:37 compute-0 nova_compute[254092]: 2025-11-25 16:34:37.181 254096 DEBUG nova.compute.manager [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:34:37 compute-0 podman[299936]: 2025-11-25 16:34:37.188563574 +0000 UTC m=+0.047834937 container create f2d28adc79031b5783eda901b28c677c73cdb4194590684229d8abae325c3f79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_cartwright, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 16:34:37 compute-0 nova_compute[254092]: 2025-11-25 16:34:37.211 254096 DEBUG nova.compute.manager [req-c24a011f-6734-4582-86cc-a8a5242ce239 req-3d481904-61b7-4f4d-a4b7-7f44a5bf8f95 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Received event network-vif-plugged-f64d52c9-5dbe-4b99-af6f-4f3a4294d461 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:34:37 compute-0 nova_compute[254092]: 2025-11-25 16:34:37.212 254096 DEBUG oslo_concurrency.lockutils [req-c24a011f-6734-4582-86cc-a8a5242ce239 req-3d481904-61b7-4f4d-a4b7-7f44a5bf8f95 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "90437bdf-689c-4185-93de-c28fe2c2ab07-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:34:37 compute-0 nova_compute[254092]: 2025-11-25 16:34:37.212 254096 DEBUG oslo_concurrency.lockutils [req-c24a011f-6734-4582-86cc-a8a5242ce239 req-3d481904-61b7-4f4d-a4b7-7f44a5bf8f95 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "90437bdf-689c-4185-93de-c28fe2c2ab07-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:37 compute-0 nova_compute[254092]: 2025-11-25 16:34:37.213 254096 DEBUG oslo_concurrency.lockutils [req-c24a011f-6734-4582-86cc-a8a5242ce239 req-3d481904-61b7-4f4d-a4b7-7f44a5bf8f95 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "90437bdf-689c-4185-93de-c28fe2c2ab07-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:34:37 compute-0 nova_compute[254092]: 2025-11-25 16:34:37.213 254096 DEBUG nova.compute.manager [req-c24a011f-6734-4582-86cc-a8a5242ce239 req-3d481904-61b7-4f4d-a4b7-7f44a5bf8f95 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Processing event network-vif-plugged-f64d52c9-5dbe-4b99-af6f-4f3a4294d461 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:34:37 compute-0 nova_compute[254092]: 2025-11-25 16:34:37.214 254096 DEBUG nova.compute.manager [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:34:37 compute-0 nova_compute[254092]: 2025-11-25 16:34:37.217 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088477.2175143, 90437bdf-689c-4185-93de-c28fe2c2ab07 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:34:37 compute-0 nova_compute[254092]: 2025-11-25 16:34:37.218 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] VM Resumed (Lifecycle Event)
Nov 25 16:34:37 compute-0 nova_compute[254092]: 2025-11-25 16:34:37.219 254096 DEBUG nova.virt.libvirt.driver [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:34:37 compute-0 systemd[1]: Started libpod-conmon-f2d28adc79031b5783eda901b28c677c73cdb4194590684229d8abae325c3f79.scope.
Nov 25 16:34:37 compute-0 nova_compute[254092]: 2025-11-25 16:34:37.233 254096 INFO nova.virt.libvirt.driver [-] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Instance spawned successfully.
Nov 25 16:34:37 compute-0 nova_compute[254092]: 2025-11-25 16:34:37.234 254096 DEBUG nova.virt.libvirt.driver [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:34:37 compute-0 nova_compute[254092]: 2025-11-25 16:34:37.238 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:34:37 compute-0 nova_compute[254092]: 2025-11-25 16:34:37.242 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:34:37 compute-0 nova_compute[254092]: 2025-11-25 16:34:37.253 254096 DEBUG nova.virt.libvirt.driver [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:34:37 compute-0 nova_compute[254092]: 2025-11-25 16:34:37.253 254096 DEBUG nova.virt.libvirt.driver [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:34:37 compute-0 nova_compute[254092]: 2025-11-25 16:34:37.254 254096 DEBUG nova.virt.libvirt.driver [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:34:37 compute-0 nova_compute[254092]: 2025-11-25 16:34:37.254 254096 DEBUG nova.virt.libvirt.driver [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:34:37 compute-0 nova_compute[254092]: 2025-11-25 16:34:37.254 254096 DEBUG nova.virt.libvirt.driver [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:34:37 compute-0 nova_compute[254092]: 2025-11-25 16:34:37.255 254096 DEBUG nova.virt.libvirt.driver [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:34:37 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:34:37 compute-0 podman[299936]: 2025-11-25 16:34:37.16884503 +0000 UTC m=+0.028116413 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:34:37 compute-0 nova_compute[254092]: 2025-11-25 16:34:37.262 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:34:37 compute-0 nova_compute[254092]: 2025-11-25 16:34:37.265 254096 DEBUG oslo_concurrency.lockutils [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:34:37 compute-0 nova_compute[254092]: 2025-11-25 16:34:37.265 254096 DEBUG oslo_concurrency.lockutils [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:37 compute-0 podman[299936]: 2025-11-25 16:34:37.274947204 +0000 UTC m=+0.134218647 container init f2d28adc79031b5783eda901b28c677c73cdb4194590684229d8abae325c3f79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_cartwright, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:34:37 compute-0 nova_compute[254092]: 2025-11-25 16:34:37.276 254096 DEBUG nova.virt.hardware [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:34:37 compute-0 nova_compute[254092]: 2025-11-25 16:34:37.276 254096 INFO nova.compute.claims [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:34:37 compute-0 podman[299936]: 2025-11-25 16:34:37.283237659 +0000 UTC m=+0.142509022 container start f2d28adc79031b5783eda901b28c677c73cdb4194590684229d8abae325c3f79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 16:34:37 compute-0 podman[299936]: 2025-11-25 16:34:37.287807822 +0000 UTC m=+0.147079195 container attach f2d28adc79031b5783eda901b28c677c73cdb4194590684229d8abae325c3f79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_cartwright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 16:34:37 compute-0 pensive_cartwright[299952]: 167 167
Nov 25 16:34:37 compute-0 systemd[1]: libpod-f2d28adc79031b5783eda901b28c677c73cdb4194590684229d8abae325c3f79.scope: Deactivated successfully.
Nov 25 16:34:37 compute-0 podman[299936]: 2025-11-25 16:34:37.289621621 +0000 UTC m=+0.148892994 container died f2d28adc79031b5783eda901b28c677c73cdb4194590684229d8abae325c3f79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_cartwright, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:34:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-1088d5f3c58e2139b8e2fbae2c1cc1386323b5bddc4fd539fd7f2f4e64e2eddc-merged.mount: Deactivated successfully.
Nov 25 16:34:37 compute-0 nova_compute[254092]: 2025-11-25 16:34:37.324 254096 INFO nova.compute.manager [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Took 9.41 seconds to spawn the instance on the hypervisor.
Nov 25 16:34:37 compute-0 nova_compute[254092]: 2025-11-25 16:34:37.325 254096 DEBUG nova.compute.manager [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:34:37 compute-0 podman[299936]: 2025-11-25 16:34:37.330941761 +0000 UTC m=+0.190213114 container remove f2d28adc79031b5783eda901b28c677c73cdb4194590684229d8abae325c3f79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_cartwright, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 16:34:37 compute-0 systemd[1]: libpod-conmon-f2d28adc79031b5783eda901b28c677c73cdb4194590684229d8abae325c3f79.scope: Deactivated successfully.
Nov 25 16:34:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1416: 321 pgs: 321 active+clean; 167 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 25 16:34:37 compute-0 nova_compute[254092]: 2025-11-25 16:34:37.391 254096 INFO nova.compute.manager [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Took 10.33 seconds to build instance.
Nov 25 16:34:37 compute-0 nova_compute[254092]: 2025-11-25 16:34:37.405 254096 DEBUG oslo_concurrency.lockutils [None req-56f3aa9d-7a91-4300-8ed0-a8133ba54bfb 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "90437bdf-689c-4185-93de-c28fe2c2ab07" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.406s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:34:37 compute-0 nova_compute[254092]: 2025-11-25 16:34:37.454 254096 DEBUG oslo_concurrency.processutils [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:34:37 compute-0 podman[299977]: 2025-11-25 16:34:37.546614183 +0000 UTC m=+0.045534135 container create 4d2569cf7804ab2d64372c93e74752ca6f02535c2000b25bc577e8d85067b2fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:34:37 compute-0 systemd[1]: Started libpod-conmon-4d2569cf7804ab2d64372c93e74752ca6f02535c2000b25bc577e8d85067b2fa.scope.
Nov 25 16:34:37 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:34:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e448f39cafd9c9cb3adf5df88888577f13b948a9f243e2c2679e1ead9d79c98/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:34:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e448f39cafd9c9cb3adf5df88888577f13b948a9f243e2c2679e1ead9d79c98/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:34:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e448f39cafd9c9cb3adf5df88888577f13b948a9f243e2c2679e1ead9d79c98/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:34:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e448f39cafd9c9cb3adf5df88888577f13b948a9f243e2c2679e1ead9d79c98/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:34:37 compute-0 podman[299977]: 2025-11-25 16:34:37.528491891 +0000 UTC m=+0.027411863 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:34:37 compute-0 podman[299977]: 2025-11-25 16:34:37.630292619 +0000 UTC m=+0.129212591 container init 4d2569cf7804ab2d64372c93e74752ca6f02535c2000b25bc577e8d85067b2fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_satoshi, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:34:37 compute-0 podman[299977]: 2025-11-25 16:34:37.639762756 +0000 UTC m=+0.138682708 container start 4d2569cf7804ab2d64372c93e74752ca6f02535c2000b25bc577e8d85067b2fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 16:34:37 compute-0 podman[299977]: 2025-11-25 16:34:37.643594999 +0000 UTC m=+0.142514951 container attach 4d2569cf7804ab2d64372c93e74752ca6f02535c2000b25bc577e8d85067b2fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 16:34:37 compute-0 nova_compute[254092]: 2025-11-25 16:34:37.714 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:37 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:34:37 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3360203213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:34:37 compute-0 nova_compute[254092]: 2025-11-25 16:34:37.912 254096 DEBUG oslo_concurrency.processutils [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:34:37 compute-0 nova_compute[254092]: 2025-11-25 16:34:37.918 254096 DEBUG nova.compute.provider_tree [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:34:37 compute-0 nova_compute[254092]: 2025-11-25 16:34:37.932 254096 DEBUG nova.scheduler.client.report [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:34:38 compute-0 nova_compute[254092]: 2025-11-25 16:34:38.096 254096 DEBUG oslo_concurrency.lockutils [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.831s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:34:38 compute-0 nova_compute[254092]: 2025-11-25 16:34:38.097 254096 DEBUG nova.compute.manager [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:34:38 compute-0 nova_compute[254092]: 2025-11-25 16:34:38.140 254096 DEBUG nova.compute.manager [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:34:38 compute-0 nova_compute[254092]: 2025-11-25 16:34:38.140 254096 DEBUG nova.network.neutron [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:34:38 compute-0 nova_compute[254092]: 2025-11-25 16:34:38.157 254096 INFO nova.virt.libvirt.driver [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:34:38 compute-0 nova_compute[254092]: 2025-11-25 16:34:38.172 254096 DEBUG nova.compute.manager [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:34:38 compute-0 nova_compute[254092]: 2025-11-25 16:34:38.380 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:38 compute-0 nova_compute[254092]: 2025-11-25 16:34:38.384 254096 DEBUG nova.policy [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '70f122fae9644012973ae5b56c1d459b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3355a3ac2d6d4d5ea7f590f1e2ae3492', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:34:38 compute-0 nova_compute[254092]: 2025-11-25 16:34:38.437 254096 INFO nova.compute.manager [None req-b6050826-b62c-4fe6-a5ce-d5e7b44157b1 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Pausing
Nov 25 16:34:38 compute-0 nova_compute[254092]: 2025-11-25 16:34:38.437 254096 DEBUG nova.objects.instance [None req-b6050826-b62c-4fe6-a5ce-d5e7b44157b1 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lazy-loading 'flavor' on Instance uuid 90437bdf-689c-4185-93de-c28fe2c2ab07 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]: {
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:     "0": [
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:         {
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:             "devices": [
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:                 "/dev/loop3"
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:             ],
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:             "lv_name": "ceph_lv0",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:             "lv_size": "21470642176",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:             "name": "ceph_lv0",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:             "tags": {
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:                 "ceph.cluster_name": "ceph",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:                 "ceph.crush_device_class": "",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:                 "ceph.encrypted": "0",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:                 "ceph.osd_id": "0",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:                 "ceph.type": "block",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:                 "ceph.vdo": "0"
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:             },
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:             "type": "block",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:             "vg_name": "ceph_vg0"
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:         }
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:     ],
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:     "1": [
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:         {
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:             "devices": [
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:                 "/dev/loop4"
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:             ],
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:             "lv_name": "ceph_lv1",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:             "lv_size": "21470642176",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:             "name": "ceph_lv1",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:             "tags": {
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:                 "ceph.cluster_name": "ceph",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:                 "ceph.crush_device_class": "",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:                 "ceph.encrypted": "0",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:                 "ceph.osd_id": "1",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:                 "ceph.type": "block",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:                 "ceph.vdo": "0"
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:             },
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:             "type": "block",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:             "vg_name": "ceph_vg1"
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:         }
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:     ],
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:     "2": [
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:         {
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:             "devices": [
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:                 "/dev/loop5"
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:             ],
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:             "lv_name": "ceph_lv2",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:             "lv_size": "21470642176",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:             "name": "ceph_lv2",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:             "tags": {
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:                 "ceph.cluster_name": "ceph",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:                 "ceph.crush_device_class": "",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:                 "ceph.encrypted": "0",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:                 "ceph.osd_id": "2",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:                 "ceph.type": "block",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:                 "ceph.vdo": "0"
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:             },
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:             "type": "block",
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:             "vg_name": "ceph_vg2"
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:         }
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]:     ]
Nov 25 16:34:38 compute-0 vibrant_satoshi[300013]: }
Nov 25 16:34:38 compute-0 nova_compute[254092]: 2025-11-25 16:34:38.462 254096 DEBUG nova.compute.manager [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:34:38 compute-0 nova_compute[254092]: 2025-11-25 16:34:38.463 254096 DEBUG nova.virt.libvirt.driver [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:34:38 compute-0 nova_compute[254092]: 2025-11-25 16:34:38.463 254096 INFO nova.virt.libvirt.driver [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Creating image(s)
Nov 25 16:34:38 compute-0 systemd[1]: libpod-4d2569cf7804ab2d64372c93e74752ca6f02535c2000b25bc577e8d85067b2fa.scope: Deactivated successfully.
Nov 25 16:34:38 compute-0 podman[299977]: 2025-11-25 16:34:38.467283131 +0000 UTC m=+0.966203083 container died 4d2569cf7804ab2d64372c93e74752ca6f02535c2000b25bc577e8d85067b2fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_satoshi, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:34:38 compute-0 nova_compute[254092]: 2025-11-25 16:34:38.495 254096 DEBUG nova.storage.rbd_utils [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] rbd image 2c116477-6534-4f01-a0bb-ebdd9e027e05_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:34:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e448f39cafd9c9cb3adf5df88888577f13b948a9f243e2c2679e1ead9d79c98-merged.mount: Deactivated successfully.
Nov 25 16:34:38 compute-0 nova_compute[254092]: 2025-11-25 16:34:38.522 254096 DEBUG nova.storage.rbd_utils [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] rbd image 2c116477-6534-4f01-a0bb-ebdd9e027e05_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:34:38 compute-0 podman[299977]: 2025-11-25 16:34:38.529830125 +0000 UTC m=+1.028750077 container remove 4d2569cf7804ab2d64372c93e74752ca6f02535c2000b25bc577e8d85067b2fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:34:38 compute-0 systemd[1]: libpod-conmon-4d2569cf7804ab2d64372c93e74752ca6f02535c2000b25bc577e8d85067b2fa.scope: Deactivated successfully.
Nov 25 16:34:38 compute-0 nova_compute[254092]: 2025-11-25 16:34:38.551 254096 DEBUG nova.storage.rbd_utils [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] rbd image 2c116477-6534-4f01-a0bb-ebdd9e027e05_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:34:38 compute-0 nova_compute[254092]: 2025-11-25 16:34:38.555 254096 DEBUG oslo_concurrency.processutils [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:34:38 compute-0 sudo[299869]: pam_unix(sudo:session): session closed for user root
Nov 25 16:34:38 compute-0 nova_compute[254092]: 2025-11-25 16:34:38.587 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088478.4765646, 90437bdf-689c-4185-93de-c28fe2c2ab07 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:34:38 compute-0 nova_compute[254092]: 2025-11-25 16:34:38.588 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] VM Paused (Lifecycle Event)
Nov 25 16:34:38 compute-0 nova_compute[254092]: 2025-11-25 16:34:38.592 254096 DEBUG nova.compute.manager [None req-b6050826-b62c-4fe6-a5ce-d5e7b44157b1 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:34:38 compute-0 ceph-mon[74985]: pgmap v1416: 321 pgs: 321 active+clean; 167 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 25 16:34:38 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3360203213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:34:38 compute-0 nova_compute[254092]: 2025-11-25 16:34:38.613 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:34:38 compute-0 nova_compute[254092]: 2025-11-25 16:34:38.617 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:34:38 compute-0 nova_compute[254092]: 2025-11-25 16:34:38.631 254096 DEBUG oslo_concurrency.processutils [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:34:38 compute-0 nova_compute[254092]: 2025-11-25 16:34:38.632 254096 DEBUG oslo_concurrency.lockutils [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:34:38 compute-0 nova_compute[254092]: 2025-11-25 16:34:38.632 254096 DEBUG oslo_concurrency.lockutils [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:38 compute-0 nova_compute[254092]: 2025-11-25 16:34:38.632 254096 DEBUG oslo_concurrency.lockutils [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:34:38 compute-0 sudo[300090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:34:38 compute-0 sudo[300090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:34:38 compute-0 sudo[300090]: pam_unix(sudo:session): session closed for user root
Nov 25 16:34:38 compute-0 nova_compute[254092]: 2025-11-25 16:34:38.658 254096 DEBUG nova.storage.rbd_utils [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] rbd image 2c116477-6534-4f01-a0bb-ebdd9e027e05_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:34:38 compute-0 nova_compute[254092]: 2025-11-25 16:34:38.662 254096 DEBUG oslo_concurrency.processutils [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 2c116477-6534-4f01-a0bb-ebdd9e027e05_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:34:38 compute-0 nova_compute[254092]: 2025-11-25 16:34:38.686 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] During sync_power_state the instance has a pending task (pausing). Skip.
Nov 25 16:34:38 compute-0 sudo[300133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:34:38 compute-0 sudo[300133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:34:38 compute-0 sudo[300133]: pam_unix(sudo:session): session closed for user root
Nov 25 16:34:38 compute-0 sudo[300168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:34:38 compute-0 sudo[300168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:34:38 compute-0 sudo[300168]: pam_unix(sudo:session): session closed for user root
Nov 25 16:34:38 compute-0 sudo[300204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 16:34:38 compute-0 sudo[300204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:34:38 compute-0 nova_compute[254092]: 2025-11-25 16:34:38.850 254096 DEBUG oslo_concurrency.lockutils [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Acquiring lock "270ad7f6-74d4-4c29-9856-77768f170789" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:34:38 compute-0 nova_compute[254092]: 2025-11-25 16:34:38.850 254096 DEBUG oslo_concurrency.lockutils [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Lock "270ad7f6-74d4-4c29-9856-77768f170789" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:38 compute-0 nova_compute[254092]: 2025-11-25 16:34:38.866 254096 DEBUG nova.compute.manager [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:34:38 compute-0 nova_compute[254092]: 2025-11-25 16:34:38.950 254096 DEBUG oslo_concurrency.lockutils [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:34:38 compute-0 nova_compute[254092]: 2025-11-25 16:34:38.950 254096 DEBUG oslo_concurrency.lockutils [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:38 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:34:38 compute-0 nova_compute[254092]: 2025-11-25 16:34:38.963 254096 DEBUG nova.virt.hardware [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:34:38 compute-0 nova_compute[254092]: 2025-11-25 16:34:38.964 254096 INFO nova.compute.claims [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:34:38 compute-0 nova_compute[254092]: 2025-11-25 16:34:38.992 254096 DEBUG oslo_concurrency.processutils [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 2c116477-6534-4f01-a0bb-ebdd9e027e05_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.330s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:34:39 compute-0 nova_compute[254092]: 2025-11-25 16:34:39.061 254096 DEBUG nova.storage.rbd_utils [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] resizing rbd image 2c116477-6534-4f01-a0bb-ebdd9e027e05_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:34:39 compute-0 nova_compute[254092]: 2025-11-25 16:34:39.184 254096 DEBUG nova.network.neutron [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Successfully created port: 8be28993-accf-4bd5-8f8d-f1e94d84aca3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:34:39 compute-0 nova_compute[254092]: 2025-11-25 16:34:39.191 254096 DEBUG nova.objects.instance [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Lazy-loading 'migration_context' on Instance uuid 2c116477-6534-4f01-a0bb-ebdd9e027e05 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:34:39 compute-0 nova_compute[254092]: 2025-11-25 16:34:39.204 254096 DEBUG nova.virt.libvirt.driver [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:34:39 compute-0 nova_compute[254092]: 2025-11-25 16:34:39.204 254096 DEBUG nova.virt.libvirt.driver [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Ensure instance console log exists: /var/lib/nova/instances/2c116477-6534-4f01-a0bb-ebdd9e027e05/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:34:39 compute-0 nova_compute[254092]: 2025-11-25 16:34:39.204 254096 DEBUG oslo_concurrency.lockutils [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:34:39 compute-0 nova_compute[254092]: 2025-11-25 16:34:39.205 254096 DEBUG oslo_concurrency.lockutils [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:39 compute-0 nova_compute[254092]: 2025-11-25 16:34:39.205 254096 DEBUG oslo_concurrency.lockutils [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:34:39 compute-0 nova_compute[254092]: 2025-11-25 16:34:39.206 254096 DEBUG oslo_concurrency.processutils [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:34:39 compute-0 podman[300324]: 2025-11-25 16:34:39.210482853 +0000 UTC m=+0.050003356 container create 49e8db46b3caa423cd1ba69422bc97a9959ebd5b02de82e0446b8e0f610bf6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gauss, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 16:34:39 compute-0 systemd[1]: Started libpod-conmon-49e8db46b3caa423cd1ba69422bc97a9959ebd5b02de82e0446b8e0f610bf6f5.scope.
Nov 25 16:34:39 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:34:39 compute-0 podman[300324]: 2025-11-25 16:34:39.189691659 +0000 UTC m=+0.029212192 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:34:39 compute-0 podman[300324]: 2025-11-25 16:34:39.301379814 +0000 UTC m=+0.140900347 container init 49e8db46b3caa423cd1ba69422bc97a9959ebd5b02de82e0446b8e0f610bf6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gauss, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 16:34:39 compute-0 podman[300324]: 2025-11-25 16:34:39.310326526 +0000 UTC m=+0.149847019 container start 49e8db46b3caa423cd1ba69422bc97a9959ebd5b02de82e0446b8e0f610bf6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 16:34:39 compute-0 podman[300324]: 2025-11-25 16:34:39.314568772 +0000 UTC m=+0.154089275 container attach 49e8db46b3caa423cd1ba69422bc97a9959ebd5b02de82e0446b8e0f610bf6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gauss, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:34:39 compute-0 frosty_gauss[300358]: 167 167
Nov 25 16:34:39 compute-0 systemd[1]: libpod-49e8db46b3caa423cd1ba69422bc97a9959ebd5b02de82e0446b8e0f610bf6f5.scope: Deactivated successfully.
Nov 25 16:34:39 compute-0 podman[300324]: 2025-11-25 16:34:39.317685916 +0000 UTC m=+0.157206409 container died 49e8db46b3caa423cd1ba69422bc97a9959ebd5b02de82e0446b8e0f610bf6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gauss, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:34:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-7100bde3c296ebe4ddf64d5d25edd76b59f2f30c3bcb317a1cf6eafd7ee6baf5-merged.mount: Deactivated successfully.
Nov 25 16:34:39 compute-0 nova_compute[254092]: 2025-11-25 16:34:39.340 254096 DEBUG nova.compute.manager [req-39bfd913-85ec-4e35-b2f4-ef8a0dde67c1 req-c1a6ecc4-ccfc-48d0-baa0-fc9dfc3bf74a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Received event network-vif-plugged-f64d52c9-5dbe-4b99-af6f-4f3a4294d461 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:34:39 compute-0 nova_compute[254092]: 2025-11-25 16:34:39.341 254096 DEBUG oslo_concurrency.lockutils [req-39bfd913-85ec-4e35-b2f4-ef8a0dde67c1 req-c1a6ecc4-ccfc-48d0-baa0-fc9dfc3bf74a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "90437bdf-689c-4185-93de-c28fe2c2ab07-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:34:39 compute-0 nova_compute[254092]: 2025-11-25 16:34:39.342 254096 DEBUG oslo_concurrency.lockutils [req-39bfd913-85ec-4e35-b2f4-ef8a0dde67c1 req-c1a6ecc4-ccfc-48d0-baa0-fc9dfc3bf74a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "90437bdf-689c-4185-93de-c28fe2c2ab07-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:39 compute-0 nova_compute[254092]: 2025-11-25 16:34:39.342 254096 DEBUG oslo_concurrency.lockutils [req-39bfd913-85ec-4e35-b2f4-ef8a0dde67c1 req-c1a6ecc4-ccfc-48d0-baa0-fc9dfc3bf74a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "90437bdf-689c-4185-93de-c28fe2c2ab07-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:34:39 compute-0 nova_compute[254092]: 2025-11-25 16:34:39.342 254096 DEBUG nova.compute.manager [req-39bfd913-85ec-4e35-b2f4-ef8a0dde67c1 req-c1a6ecc4-ccfc-48d0-baa0-fc9dfc3bf74a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] No waiting events found dispatching network-vif-plugged-f64d52c9-5dbe-4b99-af6f-4f3a4294d461 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:34:39 compute-0 nova_compute[254092]: 2025-11-25 16:34:39.342 254096 WARNING nova.compute.manager [req-39bfd913-85ec-4e35-b2f4-ef8a0dde67c1 req-c1a6ecc4-ccfc-48d0-baa0-fc9dfc3bf74a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Received unexpected event network-vif-plugged-f64d52c9-5dbe-4b99-af6f-4f3a4294d461 for instance with vm_state paused and task_state None.
Nov 25 16:34:39 compute-0 podman[300324]: 2025-11-25 16:34:39.355735136 +0000 UTC m=+0.195255639 container remove 49e8db46b3caa423cd1ba69422bc97a9959ebd5b02de82e0446b8e0f610bf6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gauss, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 16:34:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1417: 321 pgs: 321 active+clean; 167 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 25 16:34:39 compute-0 systemd[1]: libpod-conmon-49e8db46b3caa423cd1ba69422bc97a9959ebd5b02de82e0446b8e0f610bf6f5.scope: Deactivated successfully.
Nov 25 16:34:39 compute-0 podman[300400]: 2025-11-25 16:34:39.527102109 +0000 UTC m=+0.040138239 container create 5ac01a26dea4fe53359b608cfc0564baecee023e3123e41e95210723920d7115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 16:34:39 compute-0 systemd[1]: Started libpod-conmon-5ac01a26dea4fe53359b608cfc0564baecee023e3123e41e95210723920d7115.scope.
Nov 25 16:34:39 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:34:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58aeb64eac3489c61a9d43cea38987f64ff37c8530d7bddea742fae9b7744acc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:34:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58aeb64eac3489c61a9d43cea38987f64ff37c8530d7bddea742fae9b7744acc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:34:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58aeb64eac3489c61a9d43cea38987f64ff37c8530d7bddea742fae9b7744acc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:34:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58aeb64eac3489c61a9d43cea38987f64ff37c8530d7bddea742fae9b7744acc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:34:39 compute-0 podman[300400]: 2025-11-25 16:34:39.510042227 +0000 UTC m=+0.023078377 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:34:39 compute-0 podman[300400]: 2025-11-25 16:34:39.610102587 +0000 UTC m=+0.123138727 container init 5ac01a26dea4fe53359b608cfc0564baecee023e3123e41e95210723920d7115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_margulis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 16:34:39 compute-0 podman[300400]: 2025-11-25 16:34:39.617302812 +0000 UTC m=+0.130338962 container start 5ac01a26dea4fe53359b608cfc0564baecee023e3123e41e95210723920d7115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 16:34:39 compute-0 podman[300400]: 2025-11-25 16:34:39.621941667 +0000 UTC m=+0.134977827 container attach 5ac01a26dea4fe53359b608cfc0564baecee023e3123e41e95210723920d7115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_margulis, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 16:34:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:34:39 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4265093960' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:34:39 compute-0 nova_compute[254092]: 2025-11-25 16:34:39.671 254096 DEBUG oslo_concurrency.processutils [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:34:39 compute-0 nova_compute[254092]: 2025-11-25 16:34:39.678 254096 DEBUG nova.compute.provider_tree [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:34:39 compute-0 nova_compute[254092]: 2025-11-25 16:34:39.693 254096 DEBUG nova.scheduler.client.report [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:34:39 compute-0 nova_compute[254092]: 2025-11-25 16:34:39.720 254096 DEBUG oslo_concurrency.lockutils [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:34:39 compute-0 nova_compute[254092]: 2025-11-25 16:34:39.721 254096 DEBUG nova.compute.manager [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:34:39 compute-0 nova_compute[254092]: 2025-11-25 16:34:39.775 254096 DEBUG nova.compute.manager [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:34:39 compute-0 nova_compute[254092]: 2025-11-25 16:34:39.775 254096 DEBUG nova.network.neutron [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:34:39 compute-0 nova_compute[254092]: 2025-11-25 16:34:39.799 254096 INFO nova.virt.libvirt.driver [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:34:39 compute-0 nova_compute[254092]: 2025-11-25 16:34:39.824 254096 DEBUG nova.compute.manager [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:34:39 compute-0 nova_compute[254092]: 2025-11-25 16:34:39.899 254096 DEBUG nova.compute.manager [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:34:39 compute-0 nova_compute[254092]: 2025-11-25 16:34:39.900 254096 DEBUG nova.virt.libvirt.driver [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:34:39 compute-0 nova_compute[254092]: 2025-11-25 16:34:39.900 254096 INFO nova.virt.libvirt.driver [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Creating image(s)
Nov 25 16:34:39 compute-0 nova_compute[254092]: 2025-11-25 16:34:39.920 254096 DEBUG nova.storage.rbd_utils [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] rbd image 270ad7f6-74d4-4c29-9856-77768f170789_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:34:39 compute-0 nova_compute[254092]: 2025-11-25 16:34:39.942 254096 DEBUG nova.storage.rbd_utils [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] rbd image 270ad7f6-74d4-4c29-9856-77768f170789_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:34:39 compute-0 nova_compute[254092]: 2025-11-25 16:34:39.962 254096 DEBUG nova.storage.rbd_utils [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] rbd image 270ad7f6-74d4-4c29-9856-77768f170789_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:34:39 compute-0 nova_compute[254092]: 2025-11-25 16:34:39.965 254096 DEBUG oslo_concurrency.processutils [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:34:39 compute-0 nova_compute[254092]: 2025-11-25 16:34:39.991 254096 DEBUG nova.policy [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '650f3d90afcd4e85b7042981dc353a2d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fe7901baa563491c8609089aa4334bf1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:34:40 compute-0 nova_compute[254092]: 2025-11-25 16:34:40.026 254096 DEBUG oslo_concurrency.processutils [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:34:40 compute-0 nova_compute[254092]: 2025-11-25 16:34:40.027 254096 DEBUG oslo_concurrency.lockutils [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:34:40 compute-0 nova_compute[254092]: 2025-11-25 16:34:40.028 254096 DEBUG oslo_concurrency.lockutils [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:40 compute-0 nova_compute[254092]: 2025-11-25 16:34:40.028 254096 DEBUG oslo_concurrency.lockutils [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:34:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:34:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:34:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:34:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:34:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:34:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:34:40 compute-0 nova_compute[254092]: 2025-11-25 16:34:40.056 254096 DEBUG nova.storage.rbd_utils [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] rbd image 270ad7f6-74d4-4c29-9856-77768f170789_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:34:40 compute-0 nova_compute[254092]: 2025-11-25 16:34:40.059 254096 DEBUG oslo_concurrency.processutils [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 270ad7f6-74d4-4c29-9856-77768f170789_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:34:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:34:40
Nov 25 16:34:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:34:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:34:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'images', 'cephfs.cephfs.data', '.rgw.root', 'backups', 'vms', '.mgr', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta']
Nov 25 16:34:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:34:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:34:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:34:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:34:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:34:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:34:40 compute-0 nova_compute[254092]: 2025-11-25 16:34:40.373 254096 DEBUG oslo_concurrency.processutils [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 270ad7f6-74d4-4c29-9856-77768f170789_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.314s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:34:40 compute-0 nova_compute[254092]: 2025-11-25 16:34:40.423 254096 DEBUG nova.storage.rbd_utils [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] resizing rbd image 270ad7f6-74d4-4c29-9856-77768f170789_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:34:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:34:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:34:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:34:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:34:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:34:40 compute-0 nova_compute[254092]: 2025-11-25 16:34:40.497 254096 DEBUG nova.objects.instance [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Lazy-loading 'migration_context' on Instance uuid 270ad7f6-74d4-4c29-9856-77768f170789 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:34:40 compute-0 nova_compute[254092]: 2025-11-25 16:34:40.520 254096 DEBUG nova.virt.libvirt.driver [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:34:40 compute-0 nova_compute[254092]: 2025-11-25 16:34:40.521 254096 DEBUG nova.virt.libvirt.driver [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Ensure instance console log exists: /var/lib/nova/instances/270ad7f6-74d4-4c29-9856-77768f170789/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:34:40 compute-0 nova_compute[254092]: 2025-11-25 16:34:40.521 254096 DEBUG oslo_concurrency.lockutils [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:34:40 compute-0 nova_compute[254092]: 2025-11-25 16:34:40.521 254096 DEBUG oslo_concurrency.lockutils [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:40 compute-0 nova_compute[254092]: 2025-11-25 16:34:40.522 254096 DEBUG oslo_concurrency.lockutils [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:34:40 compute-0 infallible_margulis[300417]: {
Nov 25 16:34:40 compute-0 infallible_margulis[300417]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 16:34:40 compute-0 infallible_margulis[300417]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:34:40 compute-0 infallible_margulis[300417]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 16:34:40 compute-0 infallible_margulis[300417]:         "osd_id": 1,
Nov 25 16:34:40 compute-0 infallible_margulis[300417]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:34:40 compute-0 infallible_margulis[300417]:         "type": "bluestore"
Nov 25 16:34:40 compute-0 infallible_margulis[300417]:     },
Nov 25 16:34:40 compute-0 infallible_margulis[300417]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 16:34:40 compute-0 infallible_margulis[300417]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:34:40 compute-0 infallible_margulis[300417]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 16:34:40 compute-0 infallible_margulis[300417]:         "osd_id": 2,
Nov 25 16:34:40 compute-0 infallible_margulis[300417]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:34:40 compute-0 infallible_margulis[300417]:         "type": "bluestore"
Nov 25 16:34:40 compute-0 infallible_margulis[300417]:     },
Nov 25 16:34:40 compute-0 infallible_margulis[300417]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 16:34:40 compute-0 infallible_margulis[300417]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:34:40 compute-0 infallible_margulis[300417]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 16:34:40 compute-0 infallible_margulis[300417]:         "osd_id": 0,
Nov 25 16:34:40 compute-0 infallible_margulis[300417]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:34:40 compute-0 infallible_margulis[300417]:         "type": "bluestore"
Nov 25 16:34:40 compute-0 infallible_margulis[300417]:     }
Nov 25 16:34:40 compute-0 infallible_margulis[300417]: }
Nov 25 16:34:40 compute-0 ceph-mon[74985]: pgmap v1417: 321 pgs: 321 active+clean; 167 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 25 16:34:40 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4265093960' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:34:40 compute-0 systemd[1]: libpod-5ac01a26dea4fe53359b608cfc0564baecee023e3123e41e95210723920d7115.scope: Deactivated successfully.
Nov 25 16:34:40 compute-0 podman[300619]: 2025-11-25 16:34:40.652182734 +0000 UTC m=+0.022352397 container died 5ac01a26dea4fe53359b608cfc0564baecee023e3123e41e95210723920d7115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 16:34:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-58aeb64eac3489c61a9d43cea38987f64ff37c8530d7bddea742fae9b7744acc-merged.mount: Deactivated successfully.
Nov 25 16:34:40 compute-0 podman[300619]: 2025-11-25 16:34:40.705781925 +0000 UTC m=+0.075951568 container remove 5ac01a26dea4fe53359b608cfc0564baecee023e3123e41e95210723920d7115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:34:40 compute-0 systemd[1]: libpod-conmon-5ac01a26dea4fe53359b608cfc0564baecee023e3123e41e95210723920d7115.scope: Deactivated successfully.
Nov 25 16:34:40 compute-0 sudo[300204]: pam_unix(sudo:session): session closed for user root
Nov 25 16:34:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:34:40 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:34:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:34:40 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:34:40 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev ed6b03c4-2493-414c-a821-652ef103ab3e does not exist
Nov 25 16:34:40 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 49d6ac77-dd53-45cc-9964-410078d7ef15 does not exist
Nov 25 16:34:40 compute-0 sudo[300632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:34:40 compute-0 sudo[300632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:34:40 compute-0 sudo[300632]: pam_unix(sudo:session): session closed for user root
Nov 25 16:34:40 compute-0 sudo[300657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 16:34:40 compute-0 sudo[300657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:34:40 compute-0 sudo[300657]: pam_unix(sudo:session): session closed for user root
Nov 25 16:34:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1418: 321 pgs: 321 active+clean; 224 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 4.2 MiB/s wr, 117 op/s
Nov 25 16:34:41 compute-0 nova_compute[254092]: 2025-11-25 16:34:41.611 254096 DEBUG nova.network.neutron [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Successfully created port: ec7e033b-7a98-44cb-9aab-c96b985fd4a7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:34:41 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:34:41 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:34:41 compute-0 nova_compute[254092]: 2025-11-25 16:34:41.872 254096 DEBUG nova.network.neutron [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Successfully created port: 84ab0426-0174-4297-bb2b-e5964e453530 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:34:42 compute-0 nova_compute[254092]: 2025-11-25 16:34:42.716 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:42 compute-0 nova_compute[254092]: 2025-11-25 16:34:42.728 254096 DEBUG nova.network.neutron [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Successfully created port: 71742b12-1a6c-4d30-a9c9-522fc9eb4a4a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:34:42 compute-0 nova_compute[254092]: 2025-11-25 16:34:42.787 254096 DEBUG nova.compute.manager [None req-354cd9c4-503c-4480-a6bd-e1a9955e5e6a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:34:42 compute-0 nova_compute[254092]: 2025-11-25 16:34:42.837 254096 INFO nova.compute.manager [None req-354cd9c4-503c-4480-a6bd-e1a9955e5e6a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] instance snapshotting
Nov 25 16:34:42 compute-0 nova_compute[254092]: 2025-11-25 16:34:42.838 254096 WARNING nova.compute.manager [None req-354cd9c4-503c-4480-a6bd-e1a9955e5e6a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] trying to snapshot a non-running instance: (state: 3 expected: 1)
Nov 25 16:34:42 compute-0 ceph-mon[74985]: pgmap v1418: 321 pgs: 321 active+clean; 224 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 4.2 MiB/s wr, 117 op/s
Nov 25 16:34:43 compute-0 nova_compute[254092]: 2025-11-25 16:34:43.126 254096 INFO nova.virt.libvirt.driver [None req-354cd9c4-503c-4480-a6bd-e1a9955e5e6a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Beginning live snapshot process
Nov 25 16:34:43 compute-0 nova_compute[254092]: 2025-11-25 16:34:43.357 254096 DEBUG nova.virt.libvirt.imagebackend [None req-354cd9c4-503c-4480-a6bd-e1a9955e5e6a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] No parent info for 8b512c8e-2281-41de-a668-eb983e174ba0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 25 16:34:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1419: 321 pgs: 321 active+clean; 224 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.5 MiB/s wr, 89 op/s
Nov 25 16:34:43 compute-0 nova_compute[254092]: 2025-11-25 16:34:43.383 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:43 compute-0 nova_compute[254092]: 2025-11-25 16:34:43.602 254096 DEBUG nova.storage.rbd_utils [None req-354cd9c4-503c-4480-a6bd-e1a9955e5e6a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] creating snapshot(427678db381d441f99b5c37187f2626a) on rbd image(90437bdf-689c-4185-93de-c28fe2c2ab07_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 16:34:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Nov 25 16:34:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Nov 25 16:34:43 compute-0 nova_compute[254092]: 2025-11-25 16:34:43.930 254096 DEBUG nova.network.neutron [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Successfully updated port: 84ab0426-0174-4297-bb2b-e5964e453530 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:34:43 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Nov 25 16:34:43 compute-0 nova_compute[254092]: 2025-11-25 16:34:43.953 254096 DEBUG oslo_concurrency.lockutils [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Acquiring lock "refresh_cache-270ad7f6-74d4-4c29-9856-77768f170789" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:34:43 compute-0 nova_compute[254092]: 2025-11-25 16:34:43.953 254096 DEBUG oslo_concurrency.lockutils [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Acquired lock "refresh_cache-270ad7f6-74d4-4c29-9856-77768f170789" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:34:43 compute-0 nova_compute[254092]: 2025-11-25 16:34:43.954 254096 DEBUG nova.network.neutron [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:34:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:34:44 compute-0 nova_compute[254092]: 2025-11-25 16:34:44.148 254096 DEBUG nova.network.neutron [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:34:44 compute-0 nova_compute[254092]: 2025-11-25 16:34:44.277 254096 DEBUG nova.storage.rbd_utils [None req-354cd9c4-503c-4480-a6bd-e1a9955e5e6a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] cloning vms/90437bdf-689c-4185-93de-c28fe2c2ab07_disk@427678db381d441f99b5c37187f2626a to images/2ddb0df0-6f70-4b91-8c0c-ed752d817d5c clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 16:34:44 compute-0 nova_compute[254092]: 2025-11-25 16:34:44.313 254096 DEBUG nova.network.neutron [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Successfully updated port: 8be28993-accf-4bd5-8f8d-f1e94d84aca3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:34:44 compute-0 nova_compute[254092]: 2025-11-25 16:34:44.318 254096 DEBUG nova.compute.manager [req-920b35a5-eb8d-43b4-8c38-caf0edaadaca req-d2601507-1a1e-4835-9cea-a15041136071 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Received event network-changed-84ab0426-0174-4297-bb2b-e5964e453530 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:34:44 compute-0 nova_compute[254092]: 2025-11-25 16:34:44.318 254096 DEBUG nova.compute.manager [req-920b35a5-eb8d-43b4-8c38-caf0edaadaca req-d2601507-1a1e-4835-9cea-a15041136071 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Refreshing instance network info cache due to event network-changed-84ab0426-0174-4297-bb2b-e5964e453530. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:34:44 compute-0 nova_compute[254092]: 2025-11-25 16:34:44.319 254096 DEBUG oslo_concurrency.lockutils [req-920b35a5-eb8d-43b4-8c38-caf0edaadaca req-d2601507-1a1e-4835-9cea-a15041136071 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-270ad7f6-74d4-4c29-9856-77768f170789" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:34:44 compute-0 nova_compute[254092]: 2025-11-25 16:34:44.487 254096 DEBUG nova.storage.rbd_utils [None req-354cd9c4-503c-4480-a6bd-e1a9955e5e6a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] flattening images/2ddb0df0-6f70-4b91-8c0c-ed752d817d5c flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 16:34:44 compute-0 ceph-mon[74985]: pgmap v1419: 321 pgs: 321 active+clean; 224 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.5 MiB/s wr, 89 op/s
Nov 25 16:34:44 compute-0 ceph-mon[74985]: osdmap e168: 3 total, 3 up, 3 in
Nov 25 16:34:45 compute-0 nova_compute[254092]: 2025-11-25 16:34:45.024 254096 DEBUG nova.storage.rbd_utils [None req-354cd9c4-503c-4480-a6bd-e1a9955e5e6a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] removing snapshot(427678db381d441f99b5c37187f2626a) on rbd image(90437bdf-689c-4185-93de-c28fe2c2ab07_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 16:34:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1421: 321 pgs: 321 active+clean; 296 MiB data, 487 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.8 MiB/s wr, 150 op/s
Nov 25 16:34:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Nov 25 16:34:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Nov 25 16:34:45 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Nov 25 16:34:45 compute-0 nova_compute[254092]: 2025-11-25 16:34:45.984 254096 DEBUG nova.storage.rbd_utils [None req-354cd9c4-503c-4480-a6bd-e1a9955e5e6a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] creating snapshot(snap) on rbd image(2ddb0df0-6f70-4b91-8c0c-ed752d817d5c) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 16:34:46 compute-0 nova_compute[254092]: 2025-11-25 16:34:46.081 254096 DEBUG nova.network.neutron [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Updating instance_info_cache with network_info: [{"id": "84ab0426-0174-4297-bb2b-e5964e453530", "address": "fa:16:3e:06:28:4e", "network": {"id": "82742f46-fb6e-443e-a99d-84c5367a4ccd", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-522194148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe7901baa563491c8609089aa4334bf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84ab0426-01", "ovs_interfaceid": "84ab0426-0174-4297-bb2b-e5964e453530", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:34:46 compute-0 nova_compute[254092]: 2025-11-25 16:34:46.105 254096 DEBUG oslo_concurrency.lockutils [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Releasing lock "refresh_cache-270ad7f6-74d4-4c29-9856-77768f170789" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:34:46 compute-0 nova_compute[254092]: 2025-11-25 16:34:46.106 254096 DEBUG nova.compute.manager [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Instance network_info: |[{"id": "84ab0426-0174-4297-bb2b-e5964e453530", "address": "fa:16:3e:06:28:4e", "network": {"id": "82742f46-fb6e-443e-a99d-84c5367a4ccd", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-522194148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe7901baa563491c8609089aa4334bf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84ab0426-01", "ovs_interfaceid": "84ab0426-0174-4297-bb2b-e5964e453530", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:34:46 compute-0 nova_compute[254092]: 2025-11-25 16:34:46.107 254096 DEBUG oslo_concurrency.lockutils [req-920b35a5-eb8d-43b4-8c38-caf0edaadaca req-d2601507-1a1e-4835-9cea-a15041136071 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-270ad7f6-74d4-4c29-9856-77768f170789" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:34:46 compute-0 nova_compute[254092]: 2025-11-25 16:34:46.108 254096 DEBUG nova.network.neutron [req-920b35a5-eb8d-43b4-8c38-caf0edaadaca req-d2601507-1a1e-4835-9cea-a15041136071 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Refreshing network info cache for port 84ab0426-0174-4297-bb2b-e5964e453530 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:34:46 compute-0 nova_compute[254092]: 2025-11-25 16:34:46.114 254096 DEBUG nova.virt.libvirt.driver [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Start _get_guest_xml network_info=[{"id": "84ab0426-0174-4297-bb2b-e5964e453530", "address": "fa:16:3e:06:28:4e", "network": {"id": "82742f46-fb6e-443e-a99d-84c5367a4ccd", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-522194148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe7901baa563491c8609089aa4334bf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84ab0426-01", "ovs_interfaceid": "84ab0426-0174-4297-bb2b-e5964e453530", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:34:46 compute-0 nova_compute[254092]: 2025-11-25 16:34:46.123 254096 WARNING nova.virt.libvirt.driver [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:34:46 compute-0 nova_compute[254092]: 2025-11-25 16:34:46.135 254096 DEBUG nova.virt.libvirt.host [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:34:46 compute-0 nova_compute[254092]: 2025-11-25 16:34:46.136 254096 DEBUG nova.virt.libvirt.host [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:34:46 compute-0 nova_compute[254092]: 2025-11-25 16:34:46.142 254096 DEBUG nova.virt.libvirt.host [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:34:46 compute-0 nova_compute[254092]: 2025-11-25 16:34:46.143 254096 DEBUG nova.virt.libvirt.host [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:34:46 compute-0 nova_compute[254092]: 2025-11-25 16:34:46.143 254096 DEBUG nova.virt.libvirt.driver [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:34:46 compute-0 nova_compute[254092]: 2025-11-25 16:34:46.144 254096 DEBUG nova.virt.hardware [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:34:46 compute-0 nova_compute[254092]: 2025-11-25 16:34:46.144 254096 DEBUG nova.virt.hardware [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:34:46 compute-0 nova_compute[254092]: 2025-11-25 16:34:46.144 254096 DEBUG nova.virt.hardware [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:34:46 compute-0 nova_compute[254092]: 2025-11-25 16:34:46.144 254096 DEBUG nova.virt.hardware [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:34:46 compute-0 nova_compute[254092]: 2025-11-25 16:34:46.144 254096 DEBUG nova.virt.hardware [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:34:46 compute-0 nova_compute[254092]: 2025-11-25 16:34:46.145 254096 DEBUG nova.virt.hardware [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:34:46 compute-0 nova_compute[254092]: 2025-11-25 16:34:46.145 254096 DEBUG nova.virt.hardware [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:34:46 compute-0 nova_compute[254092]: 2025-11-25 16:34:46.145 254096 DEBUG nova.virt.hardware [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:34:46 compute-0 nova_compute[254092]: 2025-11-25 16:34:46.145 254096 DEBUG nova.virt.hardware [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:34:46 compute-0 nova_compute[254092]: 2025-11-25 16:34:46.145 254096 DEBUG nova.virt.hardware [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:34:46 compute-0 nova_compute[254092]: 2025-11-25 16:34:46.145 254096 DEBUG nova.virt.hardware [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:34:46 compute-0 nova_compute[254092]: 2025-11-25 16:34:46.148 254096 DEBUG oslo_concurrency.processutils [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:34:46 compute-0 nova_compute[254092]: 2025-11-25 16:34:46.441 254096 DEBUG nova.network.neutron [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Successfully updated port: ec7e033b-7a98-44cb-9aab-c96b985fd4a7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:34:46 compute-0 nova_compute[254092]: 2025-11-25 16:34:46.557 254096 DEBUG nova.compute.manager [req-d422a005-2ab2-4e2d-9251-c662854d9d58 req-7d29ec52-26ea-43c7-ad49-5b80fdd49cb7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Received event network-changed-8be28993-accf-4bd5-8f8d-f1e94d84aca3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:34:46 compute-0 nova_compute[254092]: 2025-11-25 16:34:46.557 254096 DEBUG nova.compute.manager [req-d422a005-2ab2-4e2d-9251-c662854d9d58 req-7d29ec52-26ea-43c7-ad49-5b80fdd49cb7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Refreshing instance network info cache due to event network-changed-8be28993-accf-4bd5-8f8d-f1e94d84aca3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:34:46 compute-0 nova_compute[254092]: 2025-11-25 16:34:46.557 254096 DEBUG oslo_concurrency.lockutils [req-d422a005-2ab2-4e2d-9251-c662854d9d58 req-7d29ec52-26ea-43c7-ad49-5b80fdd49cb7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-2c116477-6534-4f01-a0bb-ebdd9e027e05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:34:46 compute-0 nova_compute[254092]: 2025-11-25 16:34:46.557 254096 DEBUG oslo_concurrency.lockutils [req-d422a005-2ab2-4e2d-9251-c662854d9d58 req-7d29ec52-26ea-43c7-ad49-5b80fdd49cb7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-2c116477-6534-4f01-a0bb-ebdd9e027e05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:34:46 compute-0 nova_compute[254092]: 2025-11-25 16:34:46.558 254096 DEBUG nova.network.neutron [req-d422a005-2ab2-4e2d-9251-c662854d9d58 req-7d29ec52-26ea-43c7-ad49-5b80fdd49cb7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Refreshing network info cache for port 8be28993-accf-4bd5-8f8d-f1e94d84aca3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:34:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:34:46 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3629739741' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:34:46 compute-0 nova_compute[254092]: 2025-11-25 16:34:46.625 254096 DEBUG oslo_concurrency.processutils [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:34:46 compute-0 nova_compute[254092]: 2025-11-25 16:34:46.650 254096 DEBUG nova.storage.rbd_utils [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] rbd image 270ad7f6-74d4-4c29-9856-77768f170789_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:34:46 compute-0 nova_compute[254092]: 2025-11-25 16:34:46.655 254096 DEBUG oslo_concurrency.processutils [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:34:46 compute-0 nova_compute[254092]: 2025-11-25 16:34:46.734 254096 DEBUG nova.network.neutron [req-d422a005-2ab2-4e2d-9251-c662854d9d58 req-7d29ec52-26ea-43c7-ad49-5b80fdd49cb7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:34:46 compute-0 ceph-mon[74985]: pgmap v1421: 321 pgs: 321 active+clean; 296 MiB data, 487 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.8 MiB/s wr, 150 op/s
Nov 25 16:34:46 compute-0 ceph-mon[74985]: osdmap e169: 3 total, 3 up, 3 in
Nov 25 16:34:46 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3629739741' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:34:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Nov 25 16:34:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Nov 25 16:34:46 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Nov 25 16:34:47 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:34:47 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/225590623' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:34:47 compute-0 nova_compute[254092]: 2025-11-25 16:34:47.097 254096 DEBUG oslo_concurrency.processutils [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:34:47 compute-0 nova_compute[254092]: 2025-11-25 16:34:47.098 254096 DEBUG nova.virt.libvirt.vif [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:34:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-402821829',display_name='tempest-SecurityGroupsTestJSON-server-402821829',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-402821829',id=41,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fe7901baa563491c8609089aa4334bf1',ramdisk_id='',reservation_id='r-qwbkjkuw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-716261307',owner_user_name='tempest-SecurityGroupsTestJSON
-716261307-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:34:39Z,user_data=None,user_id='650f3d90afcd4e85b7042981dc353a2d',uuid=270ad7f6-74d4-4c29-9856-77768f170789,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "84ab0426-0174-4297-bb2b-e5964e453530", "address": "fa:16:3e:06:28:4e", "network": {"id": "82742f46-fb6e-443e-a99d-84c5367a4ccd", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-522194148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe7901baa563491c8609089aa4334bf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84ab0426-01", "ovs_interfaceid": "84ab0426-0174-4297-bb2b-e5964e453530", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:34:47 compute-0 nova_compute[254092]: 2025-11-25 16:34:47.098 254096 DEBUG nova.network.os_vif_util [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Converting VIF {"id": "84ab0426-0174-4297-bb2b-e5964e453530", "address": "fa:16:3e:06:28:4e", "network": {"id": "82742f46-fb6e-443e-a99d-84c5367a4ccd", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-522194148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe7901baa563491c8609089aa4334bf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84ab0426-01", "ovs_interfaceid": "84ab0426-0174-4297-bb2b-e5964e453530", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:34:47 compute-0 nova_compute[254092]: 2025-11-25 16:34:47.099 254096 DEBUG nova.network.os_vif_util [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:06:28:4e,bridge_name='br-int',has_traffic_filtering=True,id=84ab0426-0174-4297-bb2b-e5964e453530,network=Network(82742f46-fb6e-443e-a99d-84c5367a4ccd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84ab0426-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:34:47 compute-0 nova_compute[254092]: 2025-11-25 16:34:47.101 254096 DEBUG nova.objects.instance [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 270ad7f6-74d4-4c29-9856-77768f170789 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:34:47 compute-0 nova_compute[254092]: 2025-11-25 16:34:47.122 254096 DEBUG nova.network.neutron [req-d422a005-2ab2-4e2d-9251-c662854d9d58 req-7d29ec52-26ea-43c7-ad49-5b80fdd49cb7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:34:47 compute-0 nova_compute[254092]: 2025-11-25 16:34:47.126 254096 DEBUG nova.virt.libvirt.driver [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:34:47 compute-0 nova_compute[254092]:   <uuid>270ad7f6-74d4-4c29-9856-77768f170789</uuid>
Nov 25 16:34:47 compute-0 nova_compute[254092]:   <name>instance-00000029</name>
Nov 25 16:34:47 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:34:47 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:34:47 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:34:47 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:       <nova:name>tempest-SecurityGroupsTestJSON-server-402821829</nova:name>
Nov 25 16:34:47 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:34:46</nova:creationTime>
Nov 25 16:34:47 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:34:47 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:34:47 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:34:47 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:34:47 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:34:47 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:34:47 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:34:47 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:34:47 compute-0 nova_compute[254092]:         <nova:user uuid="650f3d90afcd4e85b7042981dc353a2d">tempest-SecurityGroupsTestJSON-716261307-project-member</nova:user>
Nov 25 16:34:47 compute-0 nova_compute[254092]:         <nova:project uuid="fe7901baa563491c8609089aa4334bf1">tempest-SecurityGroupsTestJSON-716261307</nova:project>
Nov 25 16:34:47 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:34:47 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:34:47 compute-0 nova_compute[254092]:         <nova:port uuid="84ab0426-0174-4297-bb2b-e5964e453530">
Nov 25 16:34:47 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:34:47 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:34:47 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:34:47 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:34:47 compute-0 nova_compute[254092]:     <system>
Nov 25 16:34:47 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:34:47 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:34:47 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:34:47 compute-0 nova_compute[254092]:       <entry name="serial">270ad7f6-74d4-4c29-9856-77768f170789</entry>
Nov 25 16:34:47 compute-0 nova_compute[254092]:       <entry name="uuid">270ad7f6-74d4-4c29-9856-77768f170789</entry>
Nov 25 16:34:47 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     </system>
Nov 25 16:34:47 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:34:47 compute-0 nova_compute[254092]:   <os>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:   </os>
Nov 25 16:34:47 compute-0 nova_compute[254092]:   <features>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:   </features>
Nov 25 16:34:47 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:34:47 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:34:47 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:34:47 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:34:47 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:34:47 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/270ad7f6-74d4-4c29-9856-77768f170789_disk">
Nov 25 16:34:47 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:       </source>
Nov 25 16:34:47 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:34:47 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:34:47 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:34:47 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/270ad7f6-74d4-4c29-9856-77768f170789_disk.config">
Nov 25 16:34:47 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:       </source>
Nov 25 16:34:47 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:34:47 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:34:47 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:34:47 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:06:28:4e"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:       <target dev="tap84ab0426-01"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:34:47 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/270ad7f6-74d4-4c29-9856-77768f170789/console.log" append="off"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     <video>
Nov 25 16:34:47 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     </video>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:34:47 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:34:47 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:34:47 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:34:47 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:34:47 compute-0 nova_compute[254092]: </domain>
Nov 25 16:34:47 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:34:47 compute-0 nova_compute[254092]: 2025-11-25 16:34:47.127 254096 DEBUG nova.compute.manager [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Preparing to wait for external event network-vif-plugged-84ab0426-0174-4297-bb2b-e5964e453530 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:34:47 compute-0 nova_compute[254092]: 2025-11-25 16:34:47.128 254096 DEBUG oslo_concurrency.lockutils [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Acquiring lock "270ad7f6-74d4-4c29-9856-77768f170789-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:34:47 compute-0 nova_compute[254092]: 2025-11-25 16:34:47.128 254096 DEBUG oslo_concurrency.lockutils [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Lock "270ad7f6-74d4-4c29-9856-77768f170789-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:47 compute-0 nova_compute[254092]: 2025-11-25 16:34:47.128 254096 DEBUG oslo_concurrency.lockutils [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Lock "270ad7f6-74d4-4c29-9856-77768f170789-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:34:47 compute-0 nova_compute[254092]: 2025-11-25 16:34:47.129 254096 DEBUG nova.virt.libvirt.vif [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:34:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-402821829',display_name='tempest-SecurityGroupsTestJSON-server-402821829',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-402821829',id=41,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fe7901baa563491c8609089aa4334bf1',ramdisk_id='',reservation_id='r-qwbkjkuw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-716261307',owner_user_name='tempest-SecurityGrou
psTestJSON-716261307-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:34:39Z,user_data=None,user_id='650f3d90afcd4e85b7042981dc353a2d',uuid=270ad7f6-74d4-4c29-9856-77768f170789,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "84ab0426-0174-4297-bb2b-e5964e453530", "address": "fa:16:3e:06:28:4e", "network": {"id": "82742f46-fb6e-443e-a99d-84c5367a4ccd", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-522194148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe7901baa563491c8609089aa4334bf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84ab0426-01", "ovs_interfaceid": "84ab0426-0174-4297-bb2b-e5964e453530", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:34:47 compute-0 nova_compute[254092]: 2025-11-25 16:34:47.129 254096 DEBUG nova.network.os_vif_util [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Converting VIF {"id": "84ab0426-0174-4297-bb2b-e5964e453530", "address": "fa:16:3e:06:28:4e", "network": {"id": "82742f46-fb6e-443e-a99d-84c5367a4ccd", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-522194148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe7901baa563491c8609089aa4334bf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84ab0426-01", "ovs_interfaceid": "84ab0426-0174-4297-bb2b-e5964e453530", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:34:47 compute-0 nova_compute[254092]: 2025-11-25 16:34:47.129 254096 DEBUG nova.network.os_vif_util [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:06:28:4e,bridge_name='br-int',has_traffic_filtering=True,id=84ab0426-0174-4297-bb2b-e5964e453530,network=Network(82742f46-fb6e-443e-a99d-84c5367a4ccd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84ab0426-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:34:47 compute-0 nova_compute[254092]: 2025-11-25 16:34:47.130 254096 DEBUG os_vif [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:28:4e,bridge_name='br-int',has_traffic_filtering=True,id=84ab0426-0174-4297-bb2b-e5964e453530,network=Network(82742f46-fb6e-443e-a99d-84c5367a4ccd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84ab0426-01') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:34:47 compute-0 nova_compute[254092]: 2025-11-25 16:34:47.130 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:47 compute-0 nova_compute[254092]: 2025-11-25 16:34:47.130 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:47 compute-0 nova_compute[254092]: 2025-11-25 16:34:47.131 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:34:47 compute-0 nova_compute[254092]: 2025-11-25 16:34:47.133 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:47 compute-0 nova_compute[254092]: 2025-11-25 16:34:47.134 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84ab0426-01, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:47 compute-0 nova_compute[254092]: 2025-11-25 16:34:47.134 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap84ab0426-01, col_values=(('external_ids', {'iface-id': '84ab0426-0174-4297-bb2b-e5964e453530', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:06:28:4e', 'vm-uuid': '270ad7f6-74d4-4c29-9856-77768f170789'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:47 compute-0 nova_compute[254092]: 2025-11-25 16:34:47.135 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:47 compute-0 NetworkManager[48891]: <info>  [1764088487.1373] manager: (tap84ab0426-01): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/129)
Nov 25 16:34:47 compute-0 nova_compute[254092]: 2025-11-25 16:34:47.138 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:34:47 compute-0 nova_compute[254092]: 2025-11-25 16:34:47.142 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:47 compute-0 nova_compute[254092]: 2025-11-25 16:34:47.143 254096 INFO os_vif [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:28:4e,bridge_name='br-int',has_traffic_filtering=True,id=84ab0426-0174-4297-bb2b-e5964e453530,network=Network(82742f46-fb6e-443e-a99d-84c5367a4ccd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84ab0426-01')
Nov 25 16:34:47 compute-0 nova_compute[254092]: 2025-11-25 16:34:47.144 254096 DEBUG oslo_concurrency.lockutils [req-d422a005-2ab2-4e2d-9251-c662854d9d58 req-7d29ec52-26ea-43c7-ad49-5b80fdd49cb7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-2c116477-6534-4f01-a0bb-ebdd9e027e05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:34:47 compute-0 nova_compute[254092]: 2025-11-25 16:34:47.201 254096 DEBUG nova.virt.libvirt.driver [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:34:47 compute-0 nova_compute[254092]: 2025-11-25 16:34:47.202 254096 DEBUG nova.virt.libvirt.driver [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:34:47 compute-0 nova_compute[254092]: 2025-11-25 16:34:47.203 254096 DEBUG nova.virt.libvirt.driver [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] No VIF found with MAC fa:16:3e:06:28:4e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:34:47 compute-0 nova_compute[254092]: 2025-11-25 16:34:47.203 254096 INFO nova.virt.libvirt.driver [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Using config drive
Nov 25 16:34:47 compute-0 nova_compute[254092]: 2025-11-25 16:34:47.224 254096 DEBUG nova.storage.rbd_utils [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] rbd image 270ad7f6-74d4-4c29-9856-77768f170789_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:34:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1424: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 306 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 5.7 MiB/s wr, 148 op/s
Nov 25 16:34:47 compute-0 nova_compute[254092]: 2025-11-25 16:34:47.393 254096 DEBUG nova.network.neutron [req-920b35a5-eb8d-43b4-8c38-caf0edaadaca req-d2601507-1a1e-4835-9cea-a15041136071 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Updated VIF entry in instance network info cache for port 84ab0426-0174-4297-bb2b-e5964e453530. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:34:47 compute-0 nova_compute[254092]: 2025-11-25 16:34:47.393 254096 DEBUG nova.network.neutron [req-920b35a5-eb8d-43b4-8c38-caf0edaadaca req-d2601507-1a1e-4835-9cea-a15041136071 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Updating instance_info_cache with network_info: [{"id": "84ab0426-0174-4297-bb2b-e5964e453530", "address": "fa:16:3e:06:28:4e", "network": {"id": "82742f46-fb6e-443e-a99d-84c5367a4ccd", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-522194148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe7901baa563491c8609089aa4334bf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84ab0426-01", "ovs_interfaceid": "84ab0426-0174-4297-bb2b-e5964e453530", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:34:47 compute-0 nova_compute[254092]: 2025-11-25 16:34:47.412 254096 DEBUG oslo_concurrency.lockutils [req-920b35a5-eb8d-43b4-8c38-caf0edaadaca req-d2601507-1a1e-4835-9cea-a15041136071 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-270ad7f6-74d4-4c29-9856-77768f170789" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:34:47 compute-0 nova_compute[254092]: 2025-11-25 16:34:47.418 254096 DEBUG nova.network.neutron [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Successfully updated port: 71742b12-1a6c-4d30-a9c9-522fc9eb4a4a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:34:47 compute-0 nova_compute[254092]: 2025-11-25 16:34:47.430 254096 DEBUG oslo_concurrency.lockutils [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Acquiring lock "refresh_cache-2c116477-6534-4f01-a0bb-ebdd9e027e05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:34:47 compute-0 nova_compute[254092]: 2025-11-25 16:34:47.430 254096 DEBUG oslo_concurrency.lockutils [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Acquired lock "refresh_cache-2c116477-6534-4f01-a0bb-ebdd9e027e05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:34:47 compute-0 nova_compute[254092]: 2025-11-25 16:34:47.430 254096 DEBUG nova.network.neutron [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:34:47 compute-0 nova_compute[254092]: 2025-11-25 16:34:47.648 254096 DEBUG nova.network.neutron [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:34:47 compute-0 nova_compute[254092]: 2025-11-25 16:34:47.703 254096 INFO nova.virt.libvirt.driver [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Creating config drive at /var/lib/nova/instances/270ad7f6-74d4-4c29-9856-77768f170789/disk.config
Nov 25 16:34:47 compute-0 nova_compute[254092]: 2025-11-25 16:34:47.708 254096 DEBUG oslo_concurrency.processutils [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/270ad7f6-74d4-4c29-9856-77768f170789/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe27xkr9_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:34:47 compute-0 nova_compute[254092]: 2025-11-25 16:34:47.739 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:47 compute-0 nova_compute[254092]: 2025-11-25 16:34:47.846 254096 DEBUG oslo_concurrency.processutils [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/270ad7f6-74d4-4c29-9856-77768f170789/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe27xkr9_" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:34:47 compute-0 nova_compute[254092]: 2025-11-25 16:34:47.869 254096 DEBUG nova.storage.rbd_utils [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] rbd image 270ad7f6-74d4-4c29-9856-77768f170789_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:34:47 compute-0 nova_compute[254092]: 2025-11-25 16:34:47.873 254096 DEBUG oslo_concurrency.processutils [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/270ad7f6-74d4-4c29-9856-77768f170789/disk.config 270ad7f6-74d4-4c29-9856-77768f170789_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:34:47 compute-0 ceph-mon[74985]: osdmap e170: 3 total, 3 up, 3 in
Nov 25 16:34:47 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/225590623' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:34:48 compute-0 nova_compute[254092]: 2025-11-25 16:34:48.034 254096 DEBUG oslo_concurrency.processutils [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/270ad7f6-74d4-4c29-9856-77768f170789/disk.config 270ad7f6-74d4-4c29-9856-77768f170789_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:34:48 compute-0 nova_compute[254092]: 2025-11-25 16:34:48.036 254096 INFO nova.virt.libvirt.driver [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Deleting local config drive /var/lib/nova/instances/270ad7f6-74d4-4c29-9856-77768f170789/disk.config because it was imported into RBD.
Nov 25 16:34:48 compute-0 kernel: tap84ab0426-01: entered promiscuous mode
Nov 25 16:34:48 compute-0 NetworkManager[48891]: <info>  [1764088488.0998] manager: (tap84ab0426-01): new Tun device (/org/freedesktop/NetworkManager/Devices/130)
Nov 25 16:34:48 compute-0 nova_compute[254092]: 2025-11-25 16:34:48.099 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:48 compute-0 ovn_controller[153477]: 2025-11-25T16:34:48Z|00288|binding|INFO|Claiming lport 84ab0426-0174-4297-bb2b-e5964e453530 for this chassis.
Nov 25 16:34:48 compute-0 ovn_controller[153477]: 2025-11-25T16:34:48Z|00289|binding|INFO|84ab0426-0174-4297-bb2b-e5964e453530: Claiming fa:16:3e:06:28:4e 10.100.0.14
Nov 25 16:34:48 compute-0 nova_compute[254092]: 2025-11-25 16:34:48.103 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:48.113 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:06:28:4e 10.100.0.14'], port_security=['fa:16:3e:06:28:4e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '270ad7f6-74d4-4c29-9856-77768f170789', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-82742f46-fb6e-443e-a99d-84c5367a4ccd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fe7901baa563491c8609089aa4334bf1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '10995e2d-e9a2-4098-859c-5dcd4d5f741f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d6fc76d1-a8a4-44cf-ab8f-0304e50e033c, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=84ab0426-0174-4297-bb2b-e5964e453530) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:48.115 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 84ab0426-0174-4297-bb2b-e5964e453530 in datapath 82742f46-fb6e-443e-a99d-84c5367a4ccd bound to our chassis
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:48.117 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 82742f46-fb6e-443e-a99d-84c5367a4ccd
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:48.133 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[534e45b8-a2a9-4ba9-8e79-6dda3c75e8f5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:48.134 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap82742f46-f1 in ovnmeta-82742f46-fb6e-443e-a99d-84c5367a4ccd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:48.136 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap82742f46-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:48.136 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[47c44454-e85a-41fe-9e2c-10c2181be2a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:48 compute-0 systemd-udevd[300958]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:48.138 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[aee26b27-ace4-4914-90fa-fa6422e3c549]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:48 compute-0 systemd-machined[216343]: New machine qemu-46-instance-00000029.
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:48.150 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[a95bb890-7b1b-44d5-8142-3ab58f3920ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:48 compute-0 NetworkManager[48891]: <info>  [1764088488.1537] device (tap84ab0426-01): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:34:48 compute-0 NetworkManager[48891]: <info>  [1764088488.1545] device (tap84ab0426-01): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:34:48 compute-0 systemd[1]: Started Virtual Machine qemu-46-instance-00000029.
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:48.168 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[27e07a36-9dd8-4d92-bdc1-51f2f0637db8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:48 compute-0 nova_compute[254092]: 2025-11-25 16:34:48.170 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:48 compute-0 ovn_controller[153477]: 2025-11-25T16:34:48Z|00290|binding|INFO|Setting lport 84ab0426-0174-4297-bb2b-e5964e453530 ovn-installed in OVS
Nov 25 16:34:48 compute-0 ovn_controller[153477]: 2025-11-25T16:34:48Z|00291|binding|INFO|Setting lport 84ab0426-0174-4297-bb2b-e5964e453530 up in Southbound
Nov 25 16:34:48 compute-0 nova_compute[254092]: 2025-11-25 16:34:48.175 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:48.201 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c1ff1189-c36d-4516-8627-f0fbfd6931a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:48.208 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[81c4003b-b5fc-4652-915c-f10b7fc0ce39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:48 compute-0 NetworkManager[48891]: <info>  [1764088488.2095] manager: (tap82742f46-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/131)
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:48.262 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[219f26c1-95ed-4645-a253-1bfc68b9f81c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:48.268 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[7f11e929-381d-48bc-a725-6773e8f9410a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:48 compute-0 NetworkManager[48891]: <info>  [1764088488.2948] device (tap82742f46-f0): carrier: link connected
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:48.302 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[490888f9-6645-4fdd-8fe6-81b134b0ec61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:48.318 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[73e98fd4-c509-4310-8f36-f9c2775b0d1b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap82742f46-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:00:df:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 83], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 494588, 'reachable_time': 19595, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300991, 'error': None, 'target': 'ovnmeta-82742f46-fb6e-443e-a99d-84c5367a4ccd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:48.335 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[36a17aeb-29a2-49a1-919a-f3b53b8f4850]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe00:dfdd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 494588, 'tstamp': 494588}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300992, 'error': None, 'target': 'ovnmeta-82742f46-fb6e-443e-a99d-84c5367a4ccd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:48.350 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[946229ad-34ac-479c-870a-651cb80adac4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap82742f46-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:00:df:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 83], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 494588, 'reachable_time': 19595, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 300993, 'error': None, 'target': 'ovnmeta-82742f46-fb6e-443e-a99d-84c5367a4ccd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:48.378 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b8dbdac9-ef64-499c-98c9-68dc43b93188]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:48 compute-0 nova_compute[254092]: 2025-11-25 16:34:48.412 254096 INFO nova.virt.libvirt.driver [None req-354cd9c4-503c-4480-a6bd-e1a9955e5e6a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Snapshot image upload complete
Nov 25 16:34:48 compute-0 nova_compute[254092]: 2025-11-25 16:34:48.413 254096 INFO nova.compute.manager [None req-354cd9c4-503c-4480-a6bd-e1a9955e5e6a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Took 5.57 seconds to snapshot the instance on the hypervisor.
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:48.431 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2da51b4d-301a-4c0a-8ddc-0aa9f2328a33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:48.432 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap82742f46-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:48.432 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:48.432 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap82742f46-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:48 compute-0 NetworkManager[48891]: <info>  [1764088488.4350] manager: (tap82742f46-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/132)
Nov 25 16:34:48 compute-0 nova_compute[254092]: 2025-11-25 16:34:48.434 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:48 compute-0 kernel: tap82742f46-f0: entered promiscuous mode
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:48.437 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap82742f46-f0, col_values=(('external_ids', {'iface-id': '639a1689-3ed6-4bc6-98a0-e7a7773b6e05'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:48 compute-0 nova_compute[254092]: 2025-11-25 16:34:48.439 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:48 compute-0 ovn_controller[153477]: 2025-11-25T16:34:48Z|00292|binding|INFO|Releasing lport 639a1689-3ed6-4bc6-98a0-e7a7773b6e05 from this chassis (sb_readonly=0)
Nov 25 16:34:48 compute-0 nova_compute[254092]: 2025-11-25 16:34:48.454 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:48.456 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/82742f46-fb6e-443e-a99d-84c5367a4ccd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/82742f46-fb6e-443e-a99d-84c5367a4ccd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:48.456 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b290aef8-809f-44dd-9e19-c06b03337e3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:48.457 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-82742f46-fb6e-443e-a99d-84c5367a4ccd
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/82742f46-fb6e-443e-a99d-84c5367a4ccd.pid.haproxy
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 82742f46-fb6e-443e-a99d-84c5367a4ccd
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:34:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:48.458 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-82742f46-fb6e-443e-a99d-84c5367a4ccd', 'env', 'PROCESS_TAG=haproxy-82742f46-fb6e-443e-a99d-84c5367a4ccd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/82742f46-fb6e-443e-a99d-84c5367a4ccd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:34:48 compute-0 nova_compute[254092]: 2025-11-25 16:34:48.560 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088488.5601306, 270ad7f6-74d4-4c29-9856-77768f170789 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:34:48 compute-0 nova_compute[254092]: 2025-11-25 16:34:48.561 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] VM Started (Lifecycle Event)
Nov 25 16:34:48 compute-0 nova_compute[254092]: 2025-11-25 16:34:48.580 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:34:48 compute-0 nova_compute[254092]: 2025-11-25 16:34:48.585 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088488.561169, 270ad7f6-74d4-4c29-9856-77768f170789 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:34:48 compute-0 nova_compute[254092]: 2025-11-25 16:34:48.585 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] VM Paused (Lifecycle Event)
Nov 25 16:34:48 compute-0 nova_compute[254092]: 2025-11-25 16:34:48.603 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:34:48 compute-0 nova_compute[254092]: 2025-11-25 16:34:48.606 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:34:48 compute-0 nova_compute[254092]: 2025-11-25 16:34:48.626 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:34:48 compute-0 nova_compute[254092]: 2025-11-25 16:34:48.638 254096 DEBUG nova.compute.manager [req-99e27079-ad94-4521-ae49-9a8533cfe14c req-df4c3af3-9be2-46b4-995f-4a06051bc7b5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Received event network-changed-ec7e033b-7a98-44cb-9aab-c96b985fd4a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:34:48 compute-0 nova_compute[254092]: 2025-11-25 16:34:48.638 254096 DEBUG nova.compute.manager [req-99e27079-ad94-4521-ae49-9a8533cfe14c req-df4c3af3-9be2-46b4-995f-4a06051bc7b5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Refreshing instance network info cache due to event network-changed-ec7e033b-7a98-44cb-9aab-c96b985fd4a7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:34:48 compute-0 nova_compute[254092]: 2025-11-25 16:34:48.638 254096 DEBUG oslo_concurrency.lockutils [req-99e27079-ad94-4521-ae49-9a8533cfe14c req-df4c3af3-9be2-46b4-995f-4a06051bc7b5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-2c116477-6534-4f01-a0bb-ebdd9e027e05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:34:48 compute-0 podman[301067]: 2025-11-25 16:34:48.854367752 +0000 UTC m=+0.052836063 container create c4b831c7a775b898d953c2b93b587e2b837d01dc5240f6be9c79df11fce4cc26 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-82742f46-fb6e-443e-a99d-84c5367a4ccd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 25 16:34:48 compute-0 systemd[1]: Started libpod-conmon-c4b831c7a775b898d953c2b93b587e2b837d01dc5240f6be9c79df11fce4cc26.scope.
Nov 25 16:34:48 compute-0 podman[301067]: 2025-11-25 16:34:48.828022098 +0000 UTC m=+0.026490439 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:34:48 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:34:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a0ed159f9bbbb46b9f628e39e9f53d3c1be8d1c9987113299120773552e4043/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:34:48 compute-0 podman[301067]: 2025-11-25 16:34:48.94771269 +0000 UTC m=+0.146181021 container init c4b831c7a775b898d953c2b93b587e2b837d01dc5240f6be9c79df11fce4cc26 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-82742f46-fb6e-443e-a99d-84c5367a4ccd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 25 16:34:48 compute-0 podman[301067]: 2025-11-25 16:34:48.953061675 +0000 UTC m=+0.151529976 container start c4b831c7a775b898d953c2b93b587e2b837d01dc5240f6be9c79df11fce4cc26 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-82742f46-fb6e-443e-a99d-84c5367a4ccd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:34:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:34:48 compute-0 neutron-haproxy-ovnmeta-82742f46-fb6e-443e-a99d-84c5367a4ccd[301083]: [NOTICE]   (301087) : New worker (301089) forked
Nov 25 16:34:48 compute-0 neutron-haproxy-ovnmeta-82742f46-fb6e-443e-a99d-84c5367a4ccd[301083]: [NOTICE]   (301087) : Loading success.
Nov 25 16:34:48 compute-0 ceph-mon[74985]: pgmap v1424: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 306 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 5.7 MiB/s wr, 148 op/s
Nov 25 16:34:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1425: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 306 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 5.7 MiB/s wr, 148 op/s
Nov 25 16:34:50 compute-0 nova_compute[254092]: 2025-11-25 16:34:50.609 254096 DEBUG nova.network.neutron [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Updating instance_info_cache with network_info: [{"id": "8be28993-accf-4bd5-8f8d-f1e94d84aca3", "address": "fa:16:3e:02:ae:b5", "network": {"id": "3650bd5e-702b-4bb4-ae2f-2588a2cf70df", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-211018014", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.164", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8be28993-ac", "ovs_interfaceid": "8be28993-accf-4bd5-8f8d-f1e94d84aca3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ec7e033b-7a98-44cb-9aab-c96b985fd4a7", "address": "fa:16:3e:de:4f:1d", "network": {"id": "cb7431f6-9c64-49bf-bc1e-69a488346002", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1673164746", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec7e033b-7a", "ovs_interfaceid": "ec7e033b-7a98-44cb-9aab-c96b985fd4a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "71742b12-1a6c-4d30-a9c9-522fc9eb4a4a", "address": "fa:16:3e:58:9d:4a", "network": {"id": "3650bd5e-702b-4bb4-ae2f-2588a2cf70df", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-211018014", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.104", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71742b12-1a", "ovs_interfaceid": "71742b12-1a6c-4d30-a9c9-522fc9eb4a4a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:34:50 compute-0 nova_compute[254092]: 2025-11-25 16:34:50.644 254096 DEBUG oslo_concurrency.lockutils [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Releasing lock "refresh_cache-2c116477-6534-4f01-a0bb-ebdd9e027e05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:34:50 compute-0 nova_compute[254092]: 2025-11-25 16:34:50.645 254096 DEBUG nova.compute.manager [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Instance network_info: |[{"id": "8be28993-accf-4bd5-8f8d-f1e94d84aca3", "address": "fa:16:3e:02:ae:b5", "network": {"id": "3650bd5e-702b-4bb4-ae2f-2588a2cf70df", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-211018014", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.164", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8be28993-ac", "ovs_interfaceid": "8be28993-accf-4bd5-8f8d-f1e94d84aca3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ec7e033b-7a98-44cb-9aab-c96b985fd4a7", "address": "fa:16:3e:de:4f:1d", "network": {"id": "cb7431f6-9c64-49bf-bc1e-69a488346002", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1673164746", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec7e033b-7a", "ovs_interfaceid": "ec7e033b-7a98-44cb-9aab-c96b985fd4a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "71742b12-1a6c-4d30-a9c9-522fc9eb4a4a", "address": "fa:16:3e:58:9d:4a", "network": {"id": "3650bd5e-702b-4bb4-ae2f-2588a2cf70df", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-211018014", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.104", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71742b12-1a", "ovs_interfaceid": "71742b12-1a6c-4d30-a9c9-522fc9eb4a4a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:34:50 compute-0 nova_compute[254092]: 2025-11-25 16:34:50.646 254096 DEBUG oslo_concurrency.lockutils [req-99e27079-ad94-4521-ae49-9a8533cfe14c req-df4c3af3-9be2-46b4-995f-4a06051bc7b5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-2c116477-6534-4f01-a0bb-ebdd9e027e05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:34:50 compute-0 nova_compute[254092]: 2025-11-25 16:34:50.647 254096 DEBUG nova.network.neutron [req-99e27079-ad94-4521-ae49-9a8533cfe14c req-df4c3af3-9be2-46b4-995f-4a06051bc7b5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Refreshing network info cache for port ec7e033b-7a98-44cb-9aab-c96b985fd4a7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:34:50 compute-0 nova_compute[254092]: 2025-11-25 16:34:50.655 254096 DEBUG nova.virt.libvirt.driver [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Start _get_guest_xml network_info=[{"id": "8be28993-accf-4bd5-8f8d-f1e94d84aca3", "address": "fa:16:3e:02:ae:b5", "network": {"id": "3650bd5e-702b-4bb4-ae2f-2588a2cf70df", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-211018014", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.164", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8be28993-ac", "ovs_interfaceid": "8be28993-accf-4bd5-8f8d-f1e94d84aca3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ec7e033b-7a98-44cb-9aab-c96b985fd4a7", "address": "fa:16:3e:de:4f:1d", "network": {"id": "cb7431f6-9c64-49bf-bc1e-69a488346002", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1673164746", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec7e033b-7a", "ovs_interfaceid": "ec7e033b-7a98-44cb-9aab-c96b985fd4a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "71742b12-1a6c-4d30-a9c9-522fc9eb4a4a", "address": "fa:16:3e:58:9d:4a", "network": {"id": "3650bd5e-702b-4bb4-ae2f-2588a2cf70df", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-211018014", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.104", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71742b12-1a", "ovs_interfaceid": "71742b12-1a6c-4d30-a9c9-522fc9eb4a4a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:34:50 compute-0 nova_compute[254092]: 2025-11-25 16:34:50.664 254096 WARNING nova.virt.libvirt.driver [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:34:50 compute-0 nova_compute[254092]: 2025-11-25 16:34:50.670 254096 DEBUG nova.virt.libvirt.host [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:34:50 compute-0 nova_compute[254092]: 2025-11-25 16:34:50.671 254096 DEBUG nova.virt.libvirt.host [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:34:50 compute-0 nova_compute[254092]: 2025-11-25 16:34:50.681 254096 DEBUG nova.virt.libvirt.host [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:34:50 compute-0 nova_compute[254092]: 2025-11-25 16:34:50.682 254096 DEBUG nova.virt.libvirt.host [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:34:50 compute-0 nova_compute[254092]: 2025-11-25 16:34:50.682 254096 DEBUG nova.virt.libvirt.driver [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:34:50 compute-0 nova_compute[254092]: 2025-11-25 16:34:50.682 254096 DEBUG nova.virt.hardware [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:34:50 compute-0 nova_compute[254092]: 2025-11-25 16:34:50.683 254096 DEBUG nova.virt.hardware [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:34:50 compute-0 nova_compute[254092]: 2025-11-25 16:34:50.683 254096 DEBUG nova.virt.hardware [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:34:50 compute-0 nova_compute[254092]: 2025-11-25 16:34:50.683 254096 DEBUG nova.virt.hardware [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:34:50 compute-0 nova_compute[254092]: 2025-11-25 16:34:50.684 254096 DEBUG nova.virt.hardware [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:34:50 compute-0 nova_compute[254092]: 2025-11-25 16:34:50.684 254096 DEBUG nova.virt.hardware [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:34:50 compute-0 nova_compute[254092]: 2025-11-25 16:34:50.684 254096 DEBUG nova.virt.hardware [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:34:50 compute-0 nova_compute[254092]: 2025-11-25 16:34:50.684 254096 DEBUG nova.virt.hardware [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:34:50 compute-0 nova_compute[254092]: 2025-11-25 16:34:50.684 254096 DEBUG nova.virt.hardware [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:34:50 compute-0 nova_compute[254092]: 2025-11-25 16:34:50.685 254096 DEBUG nova.virt.hardware [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:34:50 compute-0 nova_compute[254092]: 2025-11-25 16:34:50.685 254096 DEBUG nova.virt.hardware [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:34:50 compute-0 nova_compute[254092]: 2025-11-25 16:34:50.688 254096 DEBUG oslo_concurrency.processutils [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:34:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Nov 25 16:34:51 compute-0 ceph-mon[74985]: pgmap v1425: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 306 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 5.7 MiB/s wr, 148 op/s
Nov 25 16:34:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Nov 25 16:34:51 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Nov 25 16:34:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:34:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:34:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:34:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:34:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0018011432083089144 of space, bias 1.0, pg target 0.5403429624926743 quantized to 32 (current 32)
Nov 25 16:34:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:34:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:34:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:34:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:34:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:34:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010121097056716806 of space, bias 1.0, pg target 0.3036329117015042 quantized to 32 (current 32)
Nov 25 16:34:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:34:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 16:34:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:34:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:34:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:34:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 16:34:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:34:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 16:34:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:34:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:34:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:34:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 16:34:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:34:51 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1460146158' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.220 254096 DEBUG oslo_concurrency.processutils [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.248 254096 DEBUG nova.storage.rbd_utils [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] rbd image 2c116477-6534-4f01-a0bb-ebdd9e027e05_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.255 254096 DEBUG oslo_concurrency.processutils [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.336 254096 DEBUG nova.compute.manager [req-712c6a98-b8ea-409d-a327-c373890497c7 req-4d9737a9-8468-4ab7-86de-e66bdbbc8f46 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Received event network-vif-plugged-84ab0426-0174-4297-bb2b-e5964e453530 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.337 254096 DEBUG oslo_concurrency.lockutils [req-712c6a98-b8ea-409d-a327-c373890497c7 req-4d9737a9-8468-4ab7-86de-e66bdbbc8f46 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "270ad7f6-74d4-4c29-9856-77768f170789-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.339 254096 DEBUG oslo_concurrency.lockutils [req-712c6a98-b8ea-409d-a327-c373890497c7 req-4d9737a9-8468-4ab7-86de-e66bdbbc8f46 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "270ad7f6-74d4-4c29-9856-77768f170789-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.339 254096 DEBUG oslo_concurrency.lockutils [req-712c6a98-b8ea-409d-a327-c373890497c7 req-4d9737a9-8468-4ab7-86de-e66bdbbc8f46 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "270ad7f6-74d4-4c29-9856-77768f170789-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.339 254096 DEBUG nova.compute.manager [req-712c6a98-b8ea-409d-a327-c373890497c7 req-4d9737a9-8468-4ab7-86de-e66bdbbc8f46 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Processing event network-vif-plugged-84ab0426-0174-4297-bb2b-e5964e453530 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.340 254096 DEBUG nova.compute.manager [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.354 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088491.3535597, 270ad7f6-74d4-4c29-9856-77768f170789 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.356 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] VM Resumed (Lifecycle Event)
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.361 254096 DEBUG nova.virt.libvirt.driver [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:34:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1427: 321 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 313 active+clean; 306 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 229 KiB/s rd, 951 KiB/s wr, 123 op/s
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.366 254096 INFO nova.virt.libvirt.driver [-] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Instance spawned successfully.
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.366 254096 DEBUG nova.virt.libvirt.driver [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.384 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.391 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.399 254096 DEBUG nova.virt.libvirt.driver [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.399 254096 DEBUG nova.virt.libvirt.driver [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.400 254096 DEBUG nova.virt.libvirt.driver [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.401 254096 DEBUG nova.virt.libvirt.driver [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.401 254096 DEBUG nova.virt.libvirt.driver [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.402 254096 DEBUG nova.virt.libvirt.driver [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.427 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.456 254096 INFO nova.compute.manager [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Took 11.56 seconds to spawn the instance on the hypervisor.
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.457 254096 DEBUG nova.compute.manager [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.523 254096 INFO nova.compute.manager [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Took 12.59 seconds to build instance.
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.544 254096 DEBUG oslo_concurrency.lockutils [None req-fd8e1ca3-19f6-41be-9a9e-61b536c8c0e7 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Lock "270ad7f6-74d4-4c29-9856-77768f170789" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.725 254096 DEBUG oslo_concurrency.lockutils [None req-b5a1e0bd-85c0-48fe-a376-33403e8f625d 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "90437bdf-689c-4185-93de-c28fe2c2ab07" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.726 254096 DEBUG oslo_concurrency.lockutils [None req-b5a1e0bd-85c0-48fe-a376-33403e8f625d 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "90437bdf-689c-4185-93de-c28fe2c2ab07" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.726 254096 DEBUG oslo_concurrency.lockutils [None req-b5a1e0bd-85c0-48fe-a376-33403e8f625d 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "90437bdf-689c-4185-93de-c28fe2c2ab07-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.727 254096 DEBUG oslo_concurrency.lockutils [None req-b5a1e0bd-85c0-48fe-a376-33403e8f625d 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "90437bdf-689c-4185-93de-c28fe2c2ab07-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.727 254096 DEBUG oslo_concurrency.lockutils [None req-b5a1e0bd-85c0-48fe-a376-33403e8f625d 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "90437bdf-689c-4185-93de-c28fe2c2ab07-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.728 254096 INFO nova.compute.manager [None req-b5a1e0bd-85c0-48fe-a376-33403e8f625d 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Terminating instance
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.729 254096 DEBUG nova.compute.manager [None req-b5a1e0bd-85c0-48fe-a376-33403e8f625d 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:34:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:34:51 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2297308341' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.762 254096 DEBUG oslo_concurrency.processutils [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.765 254096 DEBUG nova.virt.libvirt.vif [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:34:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1447884785',display_name='tempest-ServersTestMultiNic-server-1447884785',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1447884785',id=40,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3355a3ac2d6d4d5ea7f590f1e2ae3492',ramdisk_id='',reservation_id='r-255xlh5a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-809789765',owner_user_name='tempest-ServersTestMultiNic-809789765-p
roject-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:34:38Z,user_data=None,user_id='70f122fae9644012973ae5b56c1d459b',uuid=2c116477-6534-4f01-a0bb-ebdd9e027e05,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8be28993-accf-4bd5-8f8d-f1e94d84aca3", "address": "fa:16:3e:02:ae:b5", "network": {"id": "3650bd5e-702b-4bb4-ae2f-2588a2cf70df", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-211018014", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.164", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8be28993-ac", "ovs_interfaceid": "8be28993-accf-4bd5-8f8d-f1e94d84aca3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.765 254096 DEBUG nova.network.os_vif_util [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Converting VIF {"id": "8be28993-accf-4bd5-8f8d-f1e94d84aca3", "address": "fa:16:3e:02:ae:b5", "network": {"id": "3650bd5e-702b-4bb4-ae2f-2588a2cf70df", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-211018014", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.164", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8be28993-ac", "ovs_interfaceid": "8be28993-accf-4bd5-8f8d-f1e94d84aca3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.766 254096 DEBUG nova.network.os_vif_util [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:ae:b5,bridge_name='br-int',has_traffic_filtering=True,id=8be28993-accf-4bd5-8f8d-f1e94d84aca3,network=Network(3650bd5e-702b-4bb4-ae2f-2588a2cf70df),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8be28993-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.767 254096 DEBUG nova.virt.libvirt.vif [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:34:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1447884785',display_name='tempest-ServersTestMultiNic-server-1447884785',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1447884785',id=40,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3355a3ac2d6d4d5ea7f590f1e2ae3492',ramdisk_id='',reservation_id='r-255xlh5a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-809789765',owner_user_name='tempest-ServersTestMultiNic-809789765-p
roject-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:34:38Z,user_data=None,user_id='70f122fae9644012973ae5b56c1d459b',uuid=2c116477-6534-4f01-a0bb-ebdd9e027e05,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ec7e033b-7a98-44cb-9aab-c96b985fd4a7", "address": "fa:16:3e:de:4f:1d", "network": {"id": "cb7431f6-9c64-49bf-bc1e-69a488346002", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1673164746", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec7e033b-7a", "ovs_interfaceid": "ec7e033b-7a98-44cb-9aab-c96b985fd4a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.767 254096 DEBUG nova.network.os_vif_util [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Converting VIF {"id": "ec7e033b-7a98-44cb-9aab-c96b985fd4a7", "address": "fa:16:3e:de:4f:1d", "network": {"id": "cb7431f6-9c64-49bf-bc1e-69a488346002", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1673164746", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec7e033b-7a", "ovs_interfaceid": "ec7e033b-7a98-44cb-9aab-c96b985fd4a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.767 254096 DEBUG nova.network.os_vif_util [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:4f:1d,bridge_name='br-int',has_traffic_filtering=True,id=ec7e033b-7a98-44cb-9aab-c96b985fd4a7,network=Network(cb7431f6-9c64-49bf-bc1e-69a488346002),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec7e033b-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:34:51 compute-0 kernel: tapf64d52c9-5d (unregistering): left promiscuous mode
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.768 254096 DEBUG nova.virt.libvirt.vif [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:34:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1447884785',display_name='tempest-ServersTestMultiNic-server-1447884785',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1447884785',id=40,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3355a3ac2d6d4d5ea7f590f1e2ae3492',ramdisk_id='',reservation_id='r-255xlh5a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-809789765',owner_user_name='tempest-ServersTestMultiNic-809789765-p
roject-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:34:38Z,user_data=None,user_id='70f122fae9644012973ae5b56c1d459b',uuid=2c116477-6534-4f01-a0bb-ebdd9e027e05,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "71742b12-1a6c-4d30-a9c9-522fc9eb4a4a", "address": "fa:16:3e:58:9d:4a", "network": {"id": "3650bd5e-702b-4bb4-ae2f-2588a2cf70df", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-211018014", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.104", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71742b12-1a", "ovs_interfaceid": "71742b12-1a6c-4d30-a9c9-522fc9eb4a4a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.772 254096 DEBUG nova.network.os_vif_util [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Converting VIF {"id": "71742b12-1a6c-4d30-a9c9-522fc9eb4a4a", "address": "fa:16:3e:58:9d:4a", "network": {"id": "3650bd5e-702b-4bb4-ae2f-2588a2cf70df", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-211018014", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.104", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71742b12-1a", "ovs_interfaceid": "71742b12-1a6c-4d30-a9c9-522fc9eb4a4a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:34:51 compute-0 NetworkManager[48891]: <info>  [1764088491.7745] device (tapf64d52c9-5d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.779 254096 DEBUG nova.network.os_vif_util [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:9d:4a,bridge_name='br-int',has_traffic_filtering=True,id=71742b12-1a6c-4d30-a9c9-522fc9eb4a4a,network=Network(3650bd5e-702b-4bb4-ae2f-2588a2cf70df),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71742b12-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.782 254096 DEBUG nova.objects.instance [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2c116477-6534-4f01-a0bb-ebdd9e027e05 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:34:51 compute-0 ovn_controller[153477]: 2025-11-25T16:34:51Z|00293|binding|INFO|Releasing lport f64d52c9-5dbe-4b99-af6f-4f3a4294d461 from this chassis (sb_readonly=0)
Nov 25 16:34:51 compute-0 ovn_controller[153477]: 2025-11-25T16:34:51Z|00294|binding|INFO|Setting lport f64d52c9-5dbe-4b99-af6f-4f3a4294d461 down in Southbound
Nov 25 16:34:51 compute-0 ovn_controller[153477]: 2025-11-25T16:34:51Z|00295|binding|INFO|Removing iface tapf64d52c9-5d ovn-installed in OVS
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.787 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:51.792 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:1d:6a 10.100.0.10'], port_security=['fa:16:3e:d6:1d:6a 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '90437bdf-689c-4185-93de-c28fe2c2ab07', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0816ae24-275c-455e-a549-929f4eb756e7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8bd01e0913564ac783fae350d6861e24', 'neutron:revision_number': '4', 'neutron:security_group_ids': '572f99e3-d1c9-4515-9799-c37da094cf6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=24fed12c-3cf8-4d51-9b64-c89f82ee1964, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=f64d52c9-5dbe-4b99-af6f-4f3a4294d461) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:34:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:51.794 163338 INFO neutron.agent.ovn.metadata.agent [-] Port f64d52c9-5dbe-4b99-af6f-4f3a4294d461 in datapath 0816ae24-275c-455e-a549-929f4eb756e7 unbound from our chassis
Nov 25 16:34:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:51.795 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0816ae24-275c-455e-a549-929f4eb756e7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:34:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:51.803 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b6e8faae-d14a-45bf-b554-904fbaad6c86]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:51.804 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7 namespace which is not needed anymore
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.810 254096 DEBUG nova.virt.libvirt.driver [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:34:51 compute-0 nova_compute[254092]:   <uuid>2c116477-6534-4f01-a0bb-ebdd9e027e05</uuid>
Nov 25 16:34:51 compute-0 nova_compute[254092]:   <name>instance-00000028</name>
Nov 25 16:34:51 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:34:51 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:34:51 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:34:51 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:       <nova:name>tempest-ServersTestMultiNic-server-1447884785</nova:name>
Nov 25 16:34:51 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:34:50</nova:creationTime>
Nov 25 16:34:51 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:34:51 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:34:51 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:34:51 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:34:51 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:34:51 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:34:51 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:34:51 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:34:51 compute-0 nova_compute[254092]:         <nova:user uuid="70f122fae9644012973ae5b56c1d459b">tempest-ServersTestMultiNic-809789765-project-member</nova:user>
Nov 25 16:34:51 compute-0 nova_compute[254092]:         <nova:project uuid="3355a3ac2d6d4d5ea7f590f1e2ae3492">tempest-ServersTestMultiNic-809789765</nova:project>
Nov 25 16:34:51 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:34:51 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:34:51 compute-0 nova_compute[254092]:         <nova:port uuid="8be28993-accf-4bd5-8f8d-f1e94d84aca3">
Nov 25 16:34:51 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.164" ipVersion="4"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:34:51 compute-0 nova_compute[254092]:         <nova:port uuid="ec7e033b-7a98-44cb-9aab-c96b985fd4a7">
Nov 25 16:34:51 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.1.123" ipVersion="4"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:34:51 compute-0 nova_compute[254092]:         <nova:port uuid="71742b12-1a6c-4d30-a9c9-522fc9eb4a4a">
Nov 25 16:34:51 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.104" ipVersion="4"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:34:51 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:34:51 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:34:51 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:34:51 compute-0 nova_compute[254092]:     <system>
Nov 25 16:34:51 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:34:51 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:34:51 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:34:51 compute-0 nova_compute[254092]:       <entry name="serial">2c116477-6534-4f01-a0bb-ebdd9e027e05</entry>
Nov 25 16:34:51 compute-0 nova_compute[254092]:       <entry name="uuid">2c116477-6534-4f01-a0bb-ebdd9e027e05</entry>
Nov 25 16:34:51 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     </system>
Nov 25 16:34:51 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:34:51 compute-0 nova_compute[254092]:   <os>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:   </os>
Nov 25 16:34:51 compute-0 nova_compute[254092]:   <features>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:   </features>
Nov 25 16:34:51 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:34:51 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:34:51 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:34:51 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:34:51 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:34:51 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/2c116477-6534-4f01-a0bb-ebdd9e027e05_disk">
Nov 25 16:34:51 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:       </source>
Nov 25 16:34:51 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:34:51 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:34:51 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:34:51 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/2c116477-6534-4f01-a0bb-ebdd9e027e05_disk.config">
Nov 25 16:34:51 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:       </source>
Nov 25 16:34:51 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:34:51 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:34:51 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:34:51 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:02:ae:b5"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:       <target dev="tap8be28993-ac"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:34:51 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:de:4f:1d"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:       <target dev="tapec7e033b-7a"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:34:51 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:58:9d:4a"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:       <target dev="tap71742b12-1a"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:34:51 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/2c116477-6534-4f01-a0bb-ebdd9e027e05/console.log" append="off"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     <video>
Nov 25 16:34:51 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     </video>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:34:51 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:34:51 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:34:51 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:34:51 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:34:51 compute-0 nova_compute[254092]: </domain>
Nov 25 16:34:51 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.811 254096 DEBUG nova.compute.manager [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Preparing to wait for external event network-vif-plugged-8be28993-accf-4bd5-8f8d-f1e94d84aca3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.811 254096 DEBUG oslo_concurrency.lockutils [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Acquiring lock "2c116477-6534-4f01-a0bb-ebdd9e027e05-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.811 254096 DEBUG oslo_concurrency.lockutils [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Lock "2c116477-6534-4f01-a0bb-ebdd9e027e05-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.811 254096 DEBUG oslo_concurrency.lockutils [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Lock "2c116477-6534-4f01-a0bb-ebdd9e027e05-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.812 254096 DEBUG nova.compute.manager [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Preparing to wait for external event network-vif-plugged-ec7e033b-7a98-44cb-9aab-c96b985fd4a7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.812 254096 DEBUG oslo_concurrency.lockutils [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Acquiring lock "2c116477-6534-4f01-a0bb-ebdd9e027e05-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.812 254096 DEBUG oslo_concurrency.lockutils [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Lock "2c116477-6534-4f01-a0bb-ebdd9e027e05-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.812 254096 DEBUG oslo_concurrency.lockutils [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Lock "2c116477-6534-4f01-a0bb-ebdd9e027e05-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.813 254096 DEBUG nova.compute.manager [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Preparing to wait for external event network-vif-plugged-71742b12-1a6c-4d30-a9c9-522fc9eb4a4a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.813 254096 DEBUG oslo_concurrency.lockutils [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Acquiring lock "2c116477-6534-4f01-a0bb-ebdd9e027e05-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.813 254096 DEBUG oslo_concurrency.lockutils [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Lock "2c116477-6534-4f01-a0bb-ebdd9e027e05-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.813 254096 DEBUG oslo_concurrency.lockutils [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Lock "2c116477-6534-4f01-a0bb-ebdd9e027e05-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.814 254096 DEBUG nova.virt.libvirt.vif [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:34:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1447884785',display_name='tempest-ServersTestMultiNic-server-1447884785',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1447884785',id=40,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3355a3ac2d6d4d5ea7f590f1e2ae3492',ramdisk_id='',reservation_id='r-255xlh5a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-809789765',owner_user_name='tempest-ServersTestMultiNic-809789765-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:34:38Z,user_data=None,user_id='70f122fae9644012973ae5b56c1d459b',uuid=2c116477-6534-4f01-a0bb-ebdd9e027e05,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8be28993-accf-4bd5-8f8d-f1e94d84aca3", "address": "fa:16:3e:02:ae:b5", "network": {"id": "3650bd5e-702b-4bb4-ae2f-2588a2cf70df", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-211018014", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.164", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8be28993-ac", "ovs_interfaceid": "8be28993-accf-4bd5-8f8d-f1e94d84aca3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.815 254096 DEBUG nova.network.os_vif_util [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Converting VIF {"id": "8be28993-accf-4bd5-8f8d-f1e94d84aca3", "address": "fa:16:3e:02:ae:b5", "network": {"id": "3650bd5e-702b-4bb4-ae2f-2588a2cf70df", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-211018014", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.164", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8be28993-ac", "ovs_interfaceid": "8be28993-accf-4bd5-8f8d-f1e94d84aca3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.816 254096 DEBUG nova.network.os_vif_util [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:ae:b5,bridge_name='br-int',has_traffic_filtering=True,id=8be28993-accf-4bd5-8f8d-f1e94d84aca3,network=Network(3650bd5e-702b-4bb4-ae2f-2588a2cf70df),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8be28993-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.817 254096 DEBUG os_vif [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:ae:b5,bridge_name='br-int',has_traffic_filtering=True,id=8be28993-accf-4bd5-8f8d-f1e94d84aca3,network=Network(3650bd5e-702b-4bb4-ae2f-2588a2cf70df),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8be28993-ac') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.818 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.819 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.819 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.819 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.823 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.824 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8be28993-ac, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.825 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8be28993-ac, col_values=(('external_ids', {'iface-id': '8be28993-accf-4bd5-8f8d-f1e94d84aca3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:02:ae:b5', 'vm-uuid': '2c116477-6534-4f01-a0bb-ebdd9e027e05'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:51 compute-0 NetworkManager[48891]: <info>  [1764088491.8282] manager: (tap8be28993-ac): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/133)
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.830 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:34:51 compute-0 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000027.scope: Deactivated successfully.
Nov 25 16:34:51 compute-0 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000027.scope: Consumed 2.073s CPU time.
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.842 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.844 254096 INFO os_vif [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:ae:b5,bridge_name='br-int',has_traffic_filtering=True,id=8be28993-accf-4bd5-8f8d-f1e94d84aca3,network=Network(3650bd5e-702b-4bb4-ae2f-2588a2cf70df),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8be28993-ac')
Nov 25 16:34:51 compute-0 systemd-machined[216343]: Machine qemu-45-instance-00000027 terminated.
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.845 254096 DEBUG nova.virt.libvirt.vif [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:34:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1447884785',display_name='tempest-ServersTestMultiNic-server-1447884785',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1447884785',id=40,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3355a3ac2d6d4d5ea7f590f1e2ae3492',ramdisk_id='',reservation_id='r-255xlh5a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-809789765',owner_user_name='tempest-ServersTestMultiNic-809789765-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:34:38Z,user_data=None,user_id='70f122fae9644012973ae5b56c1d459b',uuid=2c116477-6534-4f01-a0bb-ebdd9e027e05,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ec7e033b-7a98-44cb-9aab-c96b985fd4a7", "address": "fa:16:3e:de:4f:1d", "network": {"id": "cb7431f6-9c64-49bf-bc1e-69a488346002", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1673164746", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec7e033b-7a", "ovs_interfaceid": "ec7e033b-7a98-44cb-9aab-c96b985fd4a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.846 254096 DEBUG nova.network.os_vif_util [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Converting VIF {"id": "ec7e033b-7a98-44cb-9aab-c96b985fd4a7", "address": "fa:16:3e:de:4f:1d", "network": {"id": "cb7431f6-9c64-49bf-bc1e-69a488346002", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1673164746", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec7e033b-7a", "ovs_interfaceid": "ec7e033b-7a98-44cb-9aab-c96b985fd4a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.847 254096 DEBUG nova.network.os_vif_util [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:4f:1d,bridge_name='br-int',has_traffic_filtering=True,id=ec7e033b-7a98-44cb-9aab-c96b985fd4a7,network=Network(cb7431f6-9c64-49bf-bc1e-69a488346002),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec7e033b-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.847 254096 DEBUG os_vif [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:4f:1d,bridge_name='br-int',has_traffic_filtering=True,id=ec7e033b-7a98-44cb-9aab-c96b985fd4a7,network=Network(cb7431f6-9c64-49bf-bc1e-69a488346002),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec7e033b-7a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.848 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.848 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.848 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.850 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.851 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec7e033b-7a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.851 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapec7e033b-7a, col_values=(('external_ids', {'iface-id': 'ec7e033b-7a98-44cb-9aab-c96b985fd4a7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:de:4f:1d', 'vm-uuid': '2c116477-6534-4f01-a0bb-ebdd9e027e05'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:51 compute-0 NetworkManager[48891]: <info>  [1764088491.8534] manager: (tapec7e033b-7a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/134)
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.856 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.864 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.864 254096 INFO os_vif [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:4f:1d,bridge_name='br-int',has_traffic_filtering=True,id=ec7e033b-7a98-44cb-9aab-c96b985fd4a7,network=Network(cb7431f6-9c64-49bf-bc1e-69a488346002),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec7e033b-7a')
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.865 254096 DEBUG nova.virt.libvirt.vif [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:34:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1447884785',display_name='tempest-ServersTestMultiNic-server-1447884785',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1447884785',id=40,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3355a3ac2d6d4d5ea7f590f1e2ae3492',ramdisk_id='',reservation_id='r-255xlh5a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-809789765',owner_user_name='tempest-ServersTestMultiNic-809789765-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:34:38Z,user_data=None,user_id='70f122fae9644012973ae5b56c1d459b',uuid=2c116477-6534-4f01-a0bb-ebdd9e027e05,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "71742b12-1a6c-4d30-a9c9-522fc9eb4a4a", "address": "fa:16:3e:58:9d:4a", "network": {"id": "3650bd5e-702b-4bb4-ae2f-2588a2cf70df", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-211018014", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.104", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71742b12-1a", "ovs_interfaceid": "71742b12-1a6c-4d30-a9c9-522fc9eb4a4a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.865 254096 DEBUG nova.network.os_vif_util [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Converting VIF {"id": "71742b12-1a6c-4d30-a9c9-522fc9eb4a4a", "address": "fa:16:3e:58:9d:4a", "network": {"id": "3650bd5e-702b-4bb4-ae2f-2588a2cf70df", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-211018014", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.104", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71742b12-1a", "ovs_interfaceid": "71742b12-1a6c-4d30-a9c9-522fc9eb4a4a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.866 254096 DEBUG nova.network.os_vif_util [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:9d:4a,bridge_name='br-int',has_traffic_filtering=True,id=71742b12-1a6c-4d30-a9c9-522fc9eb4a4a,network=Network(3650bd5e-702b-4bb4-ae2f-2588a2cf70df),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71742b12-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.866 254096 DEBUG os_vif [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:9d:4a,bridge_name='br-int',has_traffic_filtering=True,id=71742b12-1a6c-4d30-a9c9-522fc9eb4a4a,network=Network(3650bd5e-702b-4bb4-ae2f-2588a2cf70df),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71742b12-1a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.866 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.867 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.867 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.868 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.869 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap71742b12-1a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.869 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap71742b12-1a, col_values=(('external_ids', {'iface-id': '71742b12-1a6c-4d30-a9c9-522fc9eb4a4a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:58:9d:4a', 'vm-uuid': '2c116477-6534-4f01-a0bb-ebdd9e027e05'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:51 compute-0 NetworkManager[48891]: <info>  [1764088491.8711] manager: (tap71742b12-1a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/135)
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.872 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.885 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.886 254096 INFO os_vif [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:9d:4a,bridge_name='br-int',has_traffic_filtering=True,id=71742b12-1a6c-4d30-a9c9-522fc9eb4a4a,network=Network(3650bd5e-702b-4bb4-ae2f-2588a2cf70df),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71742b12-1a')
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.948 254096 DEBUG nova.virt.libvirt.driver [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.949 254096 DEBUG nova.virt.libvirt.driver [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.949 254096 DEBUG nova.virt.libvirt.driver [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] No VIF found with MAC fa:16:3e:02:ae:b5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.949 254096 DEBUG nova.virt.libvirt.driver [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] No VIF found with MAC fa:16:3e:de:4f:1d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.949 254096 DEBUG nova.virt.libvirt.driver [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] No VIF found with MAC fa:16:3e:58:9d:4a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.950 254096 INFO nova.virt.libvirt.driver [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Using config drive
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.979 254096 DEBUG nova.storage.rbd_utils [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] rbd image 2c116477-6534-4f01-a0bb-ebdd9e027e05_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:34:51 compute-0 neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7[299740]: [NOTICE]   (299745) : haproxy version is 2.8.14-c23fe91
Nov 25 16:34:51 compute-0 neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7[299740]: [NOTICE]   (299745) : path to executable is /usr/sbin/haproxy
Nov 25 16:34:51 compute-0 neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7[299740]: [WARNING]  (299745) : Exiting Master process...
Nov 25 16:34:51 compute-0 neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7[299740]: [ALERT]    (299745) : Current worker (299747) exited with code 143 (Terminated)
Nov 25 16:34:51 compute-0 neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7[299740]: [WARNING]  (299745) : All workers exited. Exiting... (0)
Nov 25 16:34:51 compute-0 systemd[1]: libpod-00a5e11d1c1d39ae7a4fdfb05b731e9d41a63f234b35a55d67c25a79cc3156c3.scope: Deactivated successfully.
Nov 25 16:34:51 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.999 254096 INFO nova.virt.libvirt.driver [-] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Instance destroyed successfully.
Nov 25 16:34:52 compute-0 nova_compute[254092]: 2025-11-25 16:34:51.999 254096 DEBUG nova.objects.instance [None req-b5a1e0bd-85c0-48fe-a376-33403e8f625d 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lazy-loading 'resources' on Instance uuid 90437bdf-689c-4185-93de-c28fe2c2ab07 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:34:52 compute-0 podman[301190]: 2025-11-25 16:34:52.003233246 +0000 UTC m=+0.058231068 container died 00a5e11d1c1d39ae7a4fdfb05b731e9d41a63f234b35a55d67c25a79cc3156c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 16:34:52 compute-0 nova_compute[254092]: 2025-11-25 16:34:52.015 254096 DEBUG nova.virt.libvirt.vif [None req-b5a1e0bd-85c0-48fe-a376-33403e8f625d 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:34:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-61720203',display_name='tempest-ImagesTestJSON-server-61720203',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-61720203',id=39,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:34:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='8bd01e0913564ac783fae350d6861e24',ramdisk_id='',reservation_id='r-lxaolagh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-217824554',owner_user_name='tempest-ImagesTestJSON-217824554-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:34:48Z,user_data=None,user_id='75e65df891a54a2caabb073a427430b9',uuid=90437bdf-689c-4185-93de-c28fe2c2ab07,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "f64d52c9-5dbe-4b99-af6f-4f3a4294d461", "address": "fa:16:3e:d6:1d:6a", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf64d52c9-5d", "ovs_interfaceid": "f64d52c9-5dbe-4b99-af6f-4f3a4294d461", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:34:52 compute-0 nova_compute[254092]: 2025-11-25 16:34:52.015 254096 DEBUG nova.network.os_vif_util [None req-b5a1e0bd-85c0-48fe-a376-33403e8f625d 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converting VIF {"id": "f64d52c9-5dbe-4b99-af6f-4f3a4294d461", "address": "fa:16:3e:d6:1d:6a", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf64d52c9-5d", "ovs_interfaceid": "f64d52c9-5dbe-4b99-af6f-4f3a4294d461", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:34:52 compute-0 nova_compute[254092]: 2025-11-25 16:34:52.016 254096 DEBUG nova.network.os_vif_util [None req-b5a1e0bd-85c0-48fe-a376-33403e8f625d 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:1d:6a,bridge_name='br-int',has_traffic_filtering=True,id=f64d52c9-5dbe-4b99-af6f-4f3a4294d461,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf64d52c9-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:34:52 compute-0 nova_compute[254092]: 2025-11-25 16:34:52.016 254096 DEBUG os_vif [None req-b5a1e0bd-85c0-48fe-a376-33403e8f625d 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:1d:6a,bridge_name='br-int',has_traffic_filtering=True,id=f64d52c9-5dbe-4b99-af6f-4f3a4294d461,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf64d52c9-5d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:34:52 compute-0 nova_compute[254092]: 2025-11-25 16:34:52.018 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:52 compute-0 nova_compute[254092]: 2025-11-25 16:34:52.019 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf64d52c9-5d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:52 compute-0 nova_compute[254092]: 2025-11-25 16:34:52.022 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:34:52 compute-0 ceph-mon[74985]: osdmap e171: 3 total, 3 up, 3 in
Nov 25 16:34:52 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1460146158' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:34:52 compute-0 ceph-mon[74985]: pgmap v1427: 321 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 313 active+clean; 306 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 229 KiB/s rd, 951 KiB/s wr, 123 op/s
Nov 25 16:34:52 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2297308341' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:34:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-00a5e11d1c1d39ae7a4fdfb05b731e9d41a63f234b35a55d67c25a79cc3156c3-userdata-shm.mount: Deactivated successfully.
Nov 25 16:34:52 compute-0 nova_compute[254092]: 2025-11-25 16:34:52.044 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-345afeaaf6f0479944a877acbbdba76eb2134be0c4e4c8c1f27a96d67f87d7ac-merged.mount: Deactivated successfully.
Nov 25 16:34:52 compute-0 nova_compute[254092]: 2025-11-25 16:34:52.046 254096 INFO os_vif [None req-b5a1e0bd-85c0-48fe-a376-33403e8f625d 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:1d:6a,bridge_name='br-int',has_traffic_filtering=True,id=f64d52c9-5dbe-4b99-af6f-4f3a4294d461,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf64d52c9-5d')
Nov 25 16:34:52 compute-0 podman[301190]: 2025-11-25 16:34:52.061835684 +0000 UTC m=+0.116833496 container cleanup 00a5e11d1c1d39ae7a4fdfb05b731e9d41a63f234b35a55d67c25a79cc3156c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:34:52 compute-0 systemd[1]: libpod-conmon-00a5e11d1c1d39ae7a4fdfb05b731e9d41a63f234b35a55d67c25a79cc3156c3.scope: Deactivated successfully.
Nov 25 16:34:52 compute-0 podman[301270]: 2025-11-25 16:34:52.127264946 +0000 UTC m=+0.043879149 container remove 00a5e11d1c1d39ae7a4fdfb05b731e9d41a63f234b35a55d67c25a79cc3156c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 16:34:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:52.134 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a2e3942c-8dd4-417f-85e2-7b46e70c5f87]: (4, ('Tue Nov 25 04:34:51 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7 (00a5e11d1c1d39ae7a4fdfb05b731e9d41a63f234b35a55d67c25a79cc3156c3)\n00a5e11d1c1d39ae7a4fdfb05b731e9d41a63f234b35a55d67c25a79cc3156c3\nTue Nov 25 04:34:52 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7 (00a5e11d1c1d39ae7a4fdfb05b731e9d41a63f234b35a55d67c25a79cc3156c3)\n00a5e11d1c1d39ae7a4fdfb05b731e9d41a63f234b35a55d67c25a79cc3156c3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:52.137 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[03d1eb53-c8fd-4b35-a5a6-cda935219cb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:52.138 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0816ae24-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:52 compute-0 kernel: tap0816ae24-20: left promiscuous mode
Nov 25 16:34:52 compute-0 nova_compute[254092]: 2025-11-25 16:34:52.141 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:52 compute-0 nova_compute[254092]: 2025-11-25 16:34:52.167 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:52.169 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[11f8a437-2df2-459b-b98f-955be53e0b9c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:52.186 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[821e9d6d-98f0-4bc0-b7de-ec5c03e01099]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:52.187 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[df47dd8b-c87e-4e0e-9372-838b673cd1d7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:52.204 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[150fd003-5759-46f2-a8a9-90ed0c6dc5a9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493246, 'reachable_time': 22070, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301297, 'error': None, 'target': 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:52 compute-0 systemd[1]: run-netns-ovnmeta\x2d0816ae24\x2d275c\x2d455e\x2da549\x2d929f4eb756e7.mount: Deactivated successfully.
Nov 25 16:34:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:52.210 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:34:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:52.211 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[22521262-25c9-495e-b321-27cffd9a1be5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:52 compute-0 nova_compute[254092]: 2025-11-25 16:34:52.336 254096 DEBUG nova.network.neutron [req-99e27079-ad94-4521-ae49-9a8533cfe14c req-df4c3af3-9be2-46b4-995f-4a06051bc7b5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Updated VIF entry in instance network info cache for port ec7e033b-7a98-44cb-9aab-c96b985fd4a7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:34:52 compute-0 nova_compute[254092]: 2025-11-25 16:34:52.337 254096 DEBUG nova.network.neutron [req-99e27079-ad94-4521-ae49-9a8533cfe14c req-df4c3af3-9be2-46b4-995f-4a06051bc7b5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Updating instance_info_cache with network_info: [{"id": "8be28993-accf-4bd5-8f8d-f1e94d84aca3", "address": "fa:16:3e:02:ae:b5", "network": {"id": "3650bd5e-702b-4bb4-ae2f-2588a2cf70df", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-211018014", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.164", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8be28993-ac", "ovs_interfaceid": "8be28993-accf-4bd5-8f8d-f1e94d84aca3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ec7e033b-7a98-44cb-9aab-c96b985fd4a7", "address": "fa:16:3e:de:4f:1d", "network": {"id": "cb7431f6-9c64-49bf-bc1e-69a488346002", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1673164746", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec7e033b-7a", "ovs_interfaceid": "ec7e033b-7a98-44cb-9aab-c96b985fd4a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "71742b12-1a6c-4d30-a9c9-522fc9eb4a4a", "address": "fa:16:3e:58:9d:4a", "network": {"id": "3650bd5e-702b-4bb4-ae2f-2588a2cf70df", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-211018014", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.104", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71742b12-1a", "ovs_interfaceid": "71742b12-1a6c-4d30-a9c9-522fc9eb4a4a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:34:52 compute-0 nova_compute[254092]: 2025-11-25 16:34:52.353 254096 DEBUG oslo_concurrency.lockutils [req-99e27079-ad94-4521-ae49-9a8533cfe14c req-df4c3af3-9be2-46b4-995f-4a06051bc7b5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-2c116477-6534-4f01-a0bb-ebdd9e027e05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:34:52 compute-0 nova_compute[254092]: 2025-11-25 16:34:52.354 254096 DEBUG nova.compute.manager [req-99e27079-ad94-4521-ae49-9a8533cfe14c req-df4c3af3-9be2-46b4-995f-4a06051bc7b5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Received event network-changed-71742b12-1a6c-4d30-a9c9-522fc9eb4a4a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:34:52 compute-0 nova_compute[254092]: 2025-11-25 16:34:52.354 254096 DEBUG nova.compute.manager [req-99e27079-ad94-4521-ae49-9a8533cfe14c req-df4c3af3-9be2-46b4-995f-4a06051bc7b5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Refreshing instance network info cache due to event network-changed-71742b12-1a6c-4d30-a9c9-522fc9eb4a4a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:34:52 compute-0 nova_compute[254092]: 2025-11-25 16:34:52.354 254096 DEBUG oslo_concurrency.lockutils [req-99e27079-ad94-4521-ae49-9a8533cfe14c req-df4c3af3-9be2-46b4-995f-4a06051bc7b5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-2c116477-6534-4f01-a0bb-ebdd9e027e05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:34:52 compute-0 nova_compute[254092]: 2025-11-25 16:34:52.354 254096 DEBUG oslo_concurrency.lockutils [req-99e27079-ad94-4521-ae49-9a8533cfe14c req-df4c3af3-9be2-46b4-995f-4a06051bc7b5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-2c116477-6534-4f01-a0bb-ebdd9e027e05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:34:52 compute-0 nova_compute[254092]: 2025-11-25 16:34:52.355 254096 DEBUG nova.network.neutron [req-99e27079-ad94-4521-ae49-9a8533cfe14c req-df4c3af3-9be2-46b4-995f-4a06051bc7b5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Refreshing network info cache for port 71742b12-1a6c-4d30-a9c9-522fc9eb4a4a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:34:52 compute-0 nova_compute[254092]: 2025-11-25 16:34:52.399 254096 INFO nova.virt.libvirt.driver [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Creating config drive at /var/lib/nova/instances/2c116477-6534-4f01-a0bb-ebdd9e027e05/disk.config
Nov 25 16:34:52 compute-0 nova_compute[254092]: 2025-11-25 16:34:52.406 254096 DEBUG oslo_concurrency.processutils [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2c116477-6534-4f01-a0bb-ebdd9e027e05/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpejey7n61 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:34:52 compute-0 nova_compute[254092]: 2025-11-25 16:34:52.443 254096 INFO nova.virt.libvirt.driver [None req-b5a1e0bd-85c0-48fe-a376-33403e8f625d 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Deleting instance files /var/lib/nova/instances/90437bdf-689c-4185-93de-c28fe2c2ab07_del
Nov 25 16:34:52 compute-0 nova_compute[254092]: 2025-11-25 16:34:52.444 254096 INFO nova.virt.libvirt.driver [None req-b5a1e0bd-85c0-48fe-a376-33403e8f625d 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Deletion of /var/lib/nova/instances/90437bdf-689c-4185-93de-c28fe2c2ab07_del complete
Nov 25 16:34:52 compute-0 nova_compute[254092]: 2025-11-25 16:34:52.501 254096 INFO nova.compute.manager [None req-b5a1e0bd-85c0-48fe-a376-33403e8f625d 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Took 0.77 seconds to destroy the instance on the hypervisor.
Nov 25 16:34:52 compute-0 nova_compute[254092]: 2025-11-25 16:34:52.501 254096 DEBUG oslo.service.loopingcall [None req-b5a1e0bd-85c0-48fe-a376-33403e8f625d 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:34:52 compute-0 nova_compute[254092]: 2025-11-25 16:34:52.502 254096 DEBUG nova.compute.manager [-] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:34:52 compute-0 nova_compute[254092]: 2025-11-25 16:34:52.502 254096 DEBUG nova.network.neutron [-] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:34:52 compute-0 nova_compute[254092]: 2025-11-25 16:34:52.548 254096 DEBUG oslo_concurrency.processutils [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2c116477-6534-4f01-a0bb-ebdd9e027e05/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpejey7n61" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:34:52 compute-0 nova_compute[254092]: 2025-11-25 16:34:52.569 254096 DEBUG nova.storage.rbd_utils [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] rbd image 2c116477-6534-4f01-a0bb-ebdd9e027e05_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:34:52 compute-0 nova_compute[254092]: 2025-11-25 16:34:52.572 254096 DEBUG oslo_concurrency.processutils [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2c116477-6534-4f01-a0bb-ebdd9e027e05/disk.config 2c116477-6534-4f01-a0bb-ebdd9e027e05_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:34:52 compute-0 nova_compute[254092]: 2025-11-25 16:34:52.720 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:52 compute-0 nova_compute[254092]: 2025-11-25 16:34:52.746 254096 DEBUG oslo_concurrency.processutils [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2c116477-6534-4f01-a0bb-ebdd9e027e05/disk.config 2c116477-6534-4f01-a0bb-ebdd9e027e05_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:34:52 compute-0 nova_compute[254092]: 2025-11-25 16:34:52.747 254096 INFO nova.virt.libvirt.driver [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Deleting local config drive /var/lib/nova/instances/2c116477-6534-4f01-a0bb-ebdd9e027e05/disk.config because it was imported into RBD.
Nov 25 16:34:52 compute-0 systemd-udevd[301163]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:34:52 compute-0 NetworkManager[48891]: <info>  [1764088492.8157] manager: (tap8be28993-ac): new Tun device (/org/freedesktop/NetworkManager/Devices/136)
Nov 25 16:34:52 compute-0 kernel: tap8be28993-ac: entered promiscuous mode
Nov 25 16:34:52 compute-0 nova_compute[254092]: 2025-11-25 16:34:52.825 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:52 compute-0 ovn_controller[153477]: 2025-11-25T16:34:52Z|00296|binding|INFO|Claiming lport 8be28993-accf-4bd5-8f8d-f1e94d84aca3 for this chassis.
Nov 25 16:34:52 compute-0 ovn_controller[153477]: 2025-11-25T16:34:52Z|00297|binding|INFO|8be28993-accf-4bd5-8f8d-f1e94d84aca3: Claiming fa:16:3e:02:ae:b5 10.100.0.164
Nov 25 16:34:52 compute-0 NetworkManager[48891]: <info>  [1764088492.8356] device (tap8be28993-ac): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:34:52 compute-0 NetworkManager[48891]: <info>  [1764088492.8378] manager: (tapec7e033b-7a): new Tun device (/org/freedesktop/NetworkManager/Devices/137)
Nov 25 16:34:52 compute-0 NetworkManager[48891]: <info>  [1764088492.8384] device (tap8be28993-ac): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:34:52 compute-0 NetworkManager[48891]: <info>  [1764088492.8582] manager: (tap71742b12-1a): new Tun device (/org/freedesktop/NetworkManager/Devices/138)
Nov 25 16:34:52 compute-0 systemd-udevd[301162]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:34:52 compute-0 NetworkManager[48891]: <info>  [1764088492.8823] device (tapec7e033b-7a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:34:52 compute-0 kernel: tapec7e033b-7a: entered promiscuous mode
Nov 25 16:34:52 compute-0 kernel: tap71742b12-1a: entered promiscuous mode
Nov 25 16:34:52 compute-0 NetworkManager[48891]: <info>  [1764088492.8835] device (tap71742b12-1a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:34:52 compute-0 NetworkManager[48891]: <info>  [1764088492.8842] device (tapec7e033b-7a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:34:52 compute-0 NetworkManager[48891]: <info>  [1764088492.8847] device (tap71742b12-1a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:34:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:52.885 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:ae:b5 10.100.0.164'], port_security=['fa:16:3e:02:ae:b5 10.100.0.164'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.164/24', 'neutron:device_id': '2c116477-6534-4f01-a0bb-ebdd9e027e05', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3650bd5e-702b-4bb4-ae2f-2588a2cf70df', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3355a3ac2d6d4d5ea7f590f1e2ae3492', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fc762173-4fc8-48f6-b331-e2cedac5f655', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6b490834-1ae6-4bac-88ff-7ec36dfeced3, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=8be28993-accf-4bd5-8f8d-f1e94d84aca3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:34:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:52.886 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 8be28993-accf-4bd5-8f8d-f1e94d84aca3 in datapath 3650bd5e-702b-4bb4-ae2f-2588a2cf70df bound to our chassis
Nov 25 16:34:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:52.888 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3650bd5e-702b-4bb4-ae2f-2588a2cf70df
Nov 25 16:34:52 compute-0 nova_compute[254092]: 2025-11-25 16:34:52.890 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:52 compute-0 ovn_controller[153477]: 2025-11-25T16:34:52Z|00298|binding|INFO|Claiming lport ec7e033b-7a98-44cb-9aab-c96b985fd4a7 for this chassis.
Nov 25 16:34:52 compute-0 ovn_controller[153477]: 2025-11-25T16:34:52Z|00299|binding|INFO|ec7e033b-7a98-44cb-9aab-c96b985fd4a7: Claiming fa:16:3e:de:4f:1d 10.100.1.123
Nov 25 16:34:52 compute-0 ovn_controller[153477]: 2025-11-25T16:34:52Z|00300|binding|INFO|Claiming lport 71742b12-1a6c-4d30-a9c9-522fc9eb4a4a for this chassis.
Nov 25 16:34:52 compute-0 ovn_controller[153477]: 2025-11-25T16:34:52Z|00301|binding|INFO|71742b12-1a6c-4d30-a9c9-522fc9eb4a4a: Claiming fa:16:3e:58:9d:4a 10.100.0.104
Nov 25 16:34:52 compute-0 ovn_controller[153477]: 2025-11-25T16:34:52Z|00302|binding|INFO|Setting lport 8be28993-accf-4bd5-8f8d-f1e94d84aca3 ovn-installed in OVS
Nov 25 16:34:52 compute-0 nova_compute[254092]: 2025-11-25 16:34:52.899 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:52 compute-0 systemd-machined[216343]: New machine qemu-47-instance-00000028.
Nov 25 16:34:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:52.904 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6ef8b26e-482c-4244-a901-709f8c1ca69c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:52.906 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3650bd5e-71 in ovnmeta-3650bd5e-702b-4bb4-ae2f-2588a2cf70df namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:34:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:52.913 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3650bd5e-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:34:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:52.913 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[11a29a91-eeed-41b5-a722-7603240660db]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:52.914 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b3a0340f-845e-469b-9a93-4fc40e0ea206]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:52.924 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[a350c15b-9582-4c45-b0f5-7a9bebdf2238]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:52.926 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:4f:1d 10.100.1.123'], port_security=['fa:16:3e:de:4f:1d 10.100.1.123'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.123/24', 'neutron:device_id': '2c116477-6534-4f01-a0bb-ebdd9e027e05', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cb7431f6-9c64-49bf-bc1e-69a488346002', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3355a3ac2d6d4d5ea7f590f1e2ae3492', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fc762173-4fc8-48f6-b331-e2cedac5f655', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5202c3a2-9dfb-4f2d-bf86-4ef3d95e542c, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=ec7e033b-7a98-44cb-9aab-c96b985fd4a7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:34:52 compute-0 ovn_controller[153477]: 2025-11-25T16:34:52Z|00303|binding|INFO|Setting lport 8be28993-accf-4bd5-8f8d-f1e94d84aca3 up in Southbound
Nov 25 16:34:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:52.928 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:9d:4a 10.100.0.104'], port_security=['fa:16:3e:58:9d:4a 10.100.0.104'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.104/24', 'neutron:device_id': '2c116477-6534-4f01-a0bb-ebdd9e027e05', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3650bd5e-702b-4bb4-ae2f-2588a2cf70df', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3355a3ac2d6d4d5ea7f590f1e2ae3492', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fc762173-4fc8-48f6-b331-e2cedac5f655', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6b490834-1ae6-4bac-88ff-7ec36dfeced3, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=71742b12-1a6c-4d30-a9c9-522fc9eb4a4a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:34:52 compute-0 systemd[1]: Started Virtual Machine qemu-47-instance-00000028.
Nov 25 16:34:52 compute-0 ovn_controller[153477]: 2025-11-25T16:34:52Z|00304|binding|INFO|Setting lport ec7e033b-7a98-44cb-9aab-c96b985fd4a7 ovn-installed in OVS
Nov 25 16:34:52 compute-0 ovn_controller[153477]: 2025-11-25T16:34:52Z|00305|binding|INFO|Setting lport 71742b12-1a6c-4d30-a9c9-522fc9eb4a4a ovn-installed in OVS
Nov 25 16:34:52 compute-0 nova_compute[254092]: 2025-11-25 16:34:52.949 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:52.958 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bd7281f5-3d68-4a83-bee0-db05b683c8cb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:52 compute-0 ovn_controller[153477]: 2025-11-25T16:34:52Z|00306|binding|INFO|Setting lport ec7e033b-7a98-44cb-9aab-c96b985fd4a7 up in Southbound
Nov 25 16:34:52 compute-0 ovn_controller[153477]: 2025-11-25T16:34:52Z|00307|binding|INFO|Setting lport 71742b12-1a6c-4d30-a9c9-522fc9eb4a4a up in Southbound
Nov 25 16:34:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:53.018 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[2544296a-133d-4bd3-b128-76a4900f244a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:53 compute-0 NetworkManager[48891]: <info>  [1764088493.0271] manager: (tap3650bd5e-70): new Veth device (/org/freedesktop/NetworkManager/Devices/139)
Nov 25 16:34:53 compute-0 systemd-udevd[301361]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:34:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:53.027 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b72742a0-c8fb-472b-b7a2-a810507b8a72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:53.071 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[b30afb3a-b488-4bd4-b67c-8bc372dd09e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:53.074 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[61d1f20f-830a-4acc-9207-9f32ad4dbc9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:53 compute-0 NetworkManager[48891]: <info>  [1764088493.1018] device (tap3650bd5e-70): carrier: link connected
Nov 25 16:34:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:53.114 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6e8ef611-8902-4118-a893-4244ff6de137]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:53.135 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ca473250-b13b-400d-99c6-38bd50899bbf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3650bd5e-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:db:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495069, 'reachable_time': 20096, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301394, 'error': None, 'target': 'ovnmeta-3650bd5e-702b-4bb4-ae2f-2588a2cf70df', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:53.162 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a352a52b-7f50-4445-822f-aaf829c5f0eb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feed:db4c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 495069, 'tstamp': 495069}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301395, 'error': None, 'target': 'ovnmeta-3650bd5e-702b-4bb4-ae2f-2588a2cf70df', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:53 compute-0 nova_compute[254092]: 2025-11-25 16:34:53.191 254096 DEBUG nova.network.neutron [-] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:34:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:53.193 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[be6f628b-2cfe-4767-9747-cfdfc5f37014]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3650bd5e-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:db:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495069, 'reachable_time': 20096, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 301396, 'error': None, 'target': 'ovnmeta-3650bd5e-702b-4bb4-ae2f-2588a2cf70df', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:53 compute-0 nova_compute[254092]: 2025-11-25 16:34:53.220 254096 INFO nova.compute.manager [-] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Took 0.72 seconds to deallocate network for instance.
Nov 25 16:34:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:53.234 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7799aa4f-03ee-4e01-902d-1fa7cfdf3b59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:53 compute-0 nova_compute[254092]: 2025-11-25 16:34:53.295 254096 DEBUG oslo_concurrency.lockutils [None req-b5a1e0bd-85c0-48fe-a376-33403e8f625d 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:34:53 compute-0 nova_compute[254092]: 2025-11-25 16:34:53.296 254096 DEBUG oslo_concurrency.lockutils [None req-b5a1e0bd-85c0-48fe-a376-33403e8f625d 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:53.313 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bfeb6682-cde7-4a0b-824e-a95d71a6796e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:53.315 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3650bd5e-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:53.315 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:34:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:53.316 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3650bd5e-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:53 compute-0 NetworkManager[48891]: <info>  [1764088493.3188] manager: (tap3650bd5e-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/140)
Nov 25 16:34:53 compute-0 kernel: tap3650bd5e-70: entered promiscuous mode
Nov 25 16:34:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:53.321 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3650bd5e-70, col_values=(('external_ids', {'iface-id': '41f74651-7af4-42a4-9a35-56f962dcaceb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:53 compute-0 nova_compute[254092]: 2025-11-25 16:34:53.319 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:53 compute-0 ovn_controller[153477]: 2025-11-25T16:34:53Z|00308|binding|INFO|Releasing lport 41f74651-7af4-42a4-9a35-56f962dcaceb from this chassis (sb_readonly=0)
Nov 25 16:34:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:53.340 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3650bd5e-702b-4bb4-ae2f-2588a2cf70df.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3650bd5e-702b-4bb4-ae2f-2588a2cf70df.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:34:53 compute-0 nova_compute[254092]: 2025-11-25 16:34:53.341 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:53.342 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c72ab72e-9995-4723-8fd1-f92cf2264bfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:53.342 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:34:53 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:34:53 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:34:53 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-3650bd5e-702b-4bb4-ae2f-2588a2cf70df
Nov 25 16:34:53 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:34:53 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:34:53 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:34:53 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/3650bd5e-702b-4bb4-ae2f-2588a2cf70df.pid.haproxy
Nov 25 16:34:53 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:34:53 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:34:53 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:34:53 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:34:53 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:34:53 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:34:53 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:34:53 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:34:53 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:34:53 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:34:53 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:34:53 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:34:53 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:34:53 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:34:53 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:34:53 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:34:53 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:34:53 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:34:53 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:34:53 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:34:53 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 3650bd5e-702b-4bb4-ae2f-2588a2cf70df
Nov 25 16:34:53 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:34:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:53.343 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3650bd5e-702b-4bb4-ae2f-2588a2cf70df', 'env', 'PROCESS_TAG=haproxy-3650bd5e-702b-4bb4-ae2f-2588a2cf70df', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3650bd5e-702b-4bb4-ae2f-2588a2cf70df.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:34:53 compute-0 nova_compute[254092]: 2025-11-25 16:34:53.353 254096 DEBUG nova.scheduler.client.report [None req-b5a1e0bd-85c0-48fe-a376-33403e8f625d 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Refreshing inventories for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 16:34:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1428: 321 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 313 active+clean; 306 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 185 KiB/s rd, 770 KiB/s wr, 100 op/s
Nov 25 16:34:53 compute-0 nova_compute[254092]: 2025-11-25 16:34:53.370 254096 DEBUG nova.scheduler.client.report [None req-b5a1e0bd-85c0-48fe-a376-33403e8f625d 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Updating ProviderTree inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 16:34:53 compute-0 nova_compute[254092]: 2025-11-25 16:34:53.371 254096 DEBUG nova.compute.provider_tree [None req-b5a1e0bd-85c0-48fe-a376-33403e8f625d 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 16:34:53 compute-0 nova_compute[254092]: 2025-11-25 16:34:53.417 254096 DEBUG nova.scheduler.client.report [None req-b5a1e0bd-85c0-48fe-a376-33403e8f625d 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Refreshing aggregate associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 16:34:53 compute-0 nova_compute[254092]: 2025-11-25 16:34:53.443 254096 DEBUG nova.scheduler.client.report [None req-b5a1e0bd-85c0-48fe-a376-33403e8f625d 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Refreshing trait associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 16:34:53 compute-0 nova_compute[254092]: 2025-11-25 16:34:53.550 254096 DEBUG oslo_concurrency.processutils [None req-b5a1e0bd-85c0-48fe-a376-33403e8f625d 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:34:53 compute-0 nova_compute[254092]: 2025-11-25 16:34:53.595 254096 DEBUG nova.compute.manager [req-bb205f8f-a8bf-4c38-a739-3228fb953238 req-48b6a72c-261a-44df-ad74-f7e405761d5d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Received event network-vif-deleted-f64d52c9-5dbe-4b99-af6f-4f3a4294d461 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:34:53 compute-0 nova_compute[254092]: 2025-11-25 16:34:53.678 254096 DEBUG nova.compute.manager [req-ecd54f26-f615-45c1-9e47-d8a4ad396089 req-d54b6d3a-05f6-4b84-9b69-3d3f22515a21 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Received event network-vif-unplugged-f64d52c9-5dbe-4b99-af6f-4f3a4294d461 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:34:53 compute-0 nova_compute[254092]: 2025-11-25 16:34:53.678 254096 DEBUG oslo_concurrency.lockutils [req-ecd54f26-f615-45c1-9e47-d8a4ad396089 req-d54b6d3a-05f6-4b84-9b69-3d3f22515a21 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "90437bdf-689c-4185-93de-c28fe2c2ab07-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:34:53 compute-0 nova_compute[254092]: 2025-11-25 16:34:53.678 254096 DEBUG oslo_concurrency.lockutils [req-ecd54f26-f615-45c1-9e47-d8a4ad396089 req-d54b6d3a-05f6-4b84-9b69-3d3f22515a21 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "90437bdf-689c-4185-93de-c28fe2c2ab07-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:53 compute-0 nova_compute[254092]: 2025-11-25 16:34:53.679 254096 DEBUG oslo_concurrency.lockutils [req-ecd54f26-f615-45c1-9e47-d8a4ad396089 req-d54b6d3a-05f6-4b84-9b69-3d3f22515a21 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "90437bdf-689c-4185-93de-c28fe2c2ab07-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:34:53 compute-0 nova_compute[254092]: 2025-11-25 16:34:53.679 254096 DEBUG nova.compute.manager [req-ecd54f26-f615-45c1-9e47-d8a4ad396089 req-d54b6d3a-05f6-4b84-9b69-3d3f22515a21 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] No waiting events found dispatching network-vif-unplugged-f64d52c9-5dbe-4b99-af6f-4f3a4294d461 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:34:53 compute-0 nova_compute[254092]: 2025-11-25 16:34:53.680 254096 WARNING nova.compute.manager [req-ecd54f26-f615-45c1-9e47-d8a4ad396089 req-d54b6d3a-05f6-4b84-9b69-3d3f22515a21 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Received unexpected event network-vif-unplugged-f64d52c9-5dbe-4b99-af6f-4f3a4294d461 for instance with vm_state deleted and task_state None.
Nov 25 16:34:53 compute-0 nova_compute[254092]: 2025-11-25 16:34:53.680 254096 DEBUG nova.compute.manager [req-ecd54f26-f615-45c1-9e47-d8a4ad396089 req-d54b6d3a-05f6-4b84-9b69-3d3f22515a21 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Received event network-vif-plugged-f64d52c9-5dbe-4b99-af6f-4f3a4294d461 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:34:53 compute-0 nova_compute[254092]: 2025-11-25 16:34:53.680 254096 DEBUG oslo_concurrency.lockutils [req-ecd54f26-f615-45c1-9e47-d8a4ad396089 req-d54b6d3a-05f6-4b84-9b69-3d3f22515a21 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "90437bdf-689c-4185-93de-c28fe2c2ab07-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:34:53 compute-0 nova_compute[254092]: 2025-11-25 16:34:53.681 254096 DEBUG oslo_concurrency.lockutils [req-ecd54f26-f615-45c1-9e47-d8a4ad396089 req-d54b6d3a-05f6-4b84-9b69-3d3f22515a21 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "90437bdf-689c-4185-93de-c28fe2c2ab07-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:53 compute-0 nova_compute[254092]: 2025-11-25 16:34:53.681 254096 DEBUG oslo_concurrency.lockutils [req-ecd54f26-f615-45c1-9e47-d8a4ad396089 req-d54b6d3a-05f6-4b84-9b69-3d3f22515a21 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "90437bdf-689c-4185-93de-c28fe2c2ab07-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:34:53 compute-0 nova_compute[254092]: 2025-11-25 16:34:53.681 254096 DEBUG nova.compute.manager [req-ecd54f26-f615-45c1-9e47-d8a4ad396089 req-d54b6d3a-05f6-4b84-9b69-3d3f22515a21 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] No waiting events found dispatching network-vif-plugged-f64d52c9-5dbe-4b99-af6f-4f3a4294d461 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:34:53 compute-0 nova_compute[254092]: 2025-11-25 16:34:53.681 254096 WARNING nova.compute.manager [req-ecd54f26-f615-45c1-9e47-d8a4ad396089 req-d54b6d3a-05f6-4b84-9b69-3d3f22515a21 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Received unexpected event network-vif-plugged-f64d52c9-5dbe-4b99-af6f-4f3a4294d461 for instance with vm_state deleted and task_state None.
Nov 25 16:34:53 compute-0 nova_compute[254092]: 2025-11-25 16:34:53.681 254096 DEBUG nova.compute.manager [req-ecd54f26-f615-45c1-9e47-d8a4ad396089 req-d54b6d3a-05f6-4b84-9b69-3d3f22515a21 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Received event network-vif-plugged-71742b12-1a6c-4d30-a9c9-522fc9eb4a4a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:34:53 compute-0 nova_compute[254092]: 2025-11-25 16:34:53.681 254096 DEBUG oslo_concurrency.lockutils [req-ecd54f26-f615-45c1-9e47-d8a4ad396089 req-d54b6d3a-05f6-4b84-9b69-3d3f22515a21 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2c116477-6534-4f01-a0bb-ebdd9e027e05-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:34:53 compute-0 nova_compute[254092]: 2025-11-25 16:34:53.681 254096 DEBUG oslo_concurrency.lockutils [req-ecd54f26-f615-45c1-9e47-d8a4ad396089 req-d54b6d3a-05f6-4b84-9b69-3d3f22515a21 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2c116477-6534-4f01-a0bb-ebdd9e027e05-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:53 compute-0 nova_compute[254092]: 2025-11-25 16:34:53.682 254096 DEBUG oslo_concurrency.lockutils [req-ecd54f26-f615-45c1-9e47-d8a4ad396089 req-d54b6d3a-05f6-4b84-9b69-3d3f22515a21 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2c116477-6534-4f01-a0bb-ebdd9e027e05-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:34:53 compute-0 nova_compute[254092]: 2025-11-25 16:34:53.682 254096 DEBUG nova.compute.manager [req-ecd54f26-f615-45c1-9e47-d8a4ad396089 req-d54b6d3a-05f6-4b84-9b69-3d3f22515a21 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Processing event network-vif-plugged-71742b12-1a6c-4d30-a9c9-522fc9eb4a4a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:34:53 compute-0 podman[301462]: 2025-11-25 16:34:53.817773607 +0000 UTC m=+0.049293246 container create 0ceee0d6b061d7e756b547ed5cff30773194d71376aba3a00996fb25f4cf1339 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3650bd5e-702b-4bb4-ae2f-2588a2cf70df, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:34:53 compute-0 systemd[1]: Started libpod-conmon-0ceee0d6b061d7e756b547ed5cff30773194d71376aba3a00996fb25f4cf1339.scope.
Nov 25 16:34:53 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:34:53 compute-0 podman[301462]: 2025-11-25 16:34:53.796916802 +0000 UTC m=+0.028436461 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:34:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fb6fd336d3e9ca922b4c48733d46353a72312807a66307fe15cec204343db48/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:34:53 compute-0 podman[301462]: 2025-11-25 16:34:53.909841201 +0000 UTC m=+0.141360850 container init 0ceee0d6b061d7e756b547ed5cff30773194d71376aba3a00996fb25f4cf1339 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3650bd5e-702b-4bb4-ae2f-2588a2cf70df, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 16:34:53 compute-0 podman[301462]: 2025-11-25 16:34:53.921535288 +0000 UTC m=+0.153054927 container start 0ceee0d6b061d7e756b547ed5cff30773194d71376aba3a00996fb25f4cf1339 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3650bd5e-702b-4bb4-ae2f-2588a2cf70df, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:34:53 compute-0 nova_compute[254092]: 2025-11-25 16:34:53.939 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088493.9386299, 2c116477-6534-4f01-a0bb-ebdd9e027e05 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:34:53 compute-0 nova_compute[254092]: 2025-11-25 16:34:53.940 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] VM Started (Lifecycle Event)
Nov 25 16:34:53 compute-0 neutron-haproxy-ovnmeta-3650bd5e-702b-4bb4-ae2f-2588a2cf70df[301503]: [NOTICE]   (301509) : New worker (301511) forked
Nov 25 16:34:53 compute-0 neutron-haproxy-ovnmeta-3650bd5e-702b-4bb4-ae2f-2588a2cf70df[301503]: [NOTICE]   (301509) : Loading success.
Nov 25 16:34:53 compute-0 nova_compute[254092]: 2025-11-25 16:34:53.959 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:34:53 compute-0 nova_compute[254092]: 2025-11-25 16:34:53.964 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088493.940029, 2c116477-6534-4f01-a0bb-ebdd9e027e05 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:34:53 compute-0 nova_compute[254092]: 2025-11-25 16:34:53.965 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] VM Paused (Lifecycle Event)
Nov 25 16:34:53 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:34:53 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e171 do_prune osdmap full prune enabled
Nov 25 16:34:53 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e172 e172: 3 total, 3 up, 3 in
Nov 25 16:34:53 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e172: 3 total, 3 up, 3 in
Nov 25 16:34:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:53.984 163338 INFO neutron.agent.ovn.metadata.agent [-] Port ec7e033b-7a98-44cb-9aab-c96b985fd4a7 in datapath cb7431f6-9c64-49bf-bc1e-69a488346002 unbound from our chassis
Nov 25 16:34:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:53.989 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cb7431f6-9c64-49bf-bc1e-69a488346002
Nov 25 16:34:53 compute-0 nova_compute[254092]: 2025-11-25 16:34:53.990 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:34:53 compute-0 nova_compute[254092]: 2025-11-25 16:34:53.996 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:54.007 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5daece20-65ec-46c9-89ea-4f58653004c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:54.009 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcb7431f6-91 in ovnmeta-cb7431f6-9c64-49bf-bc1e-69a488346002 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:54.013 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcb7431f6-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:54.013 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b804029b-e59f-49b8-bb0b-4892ae68a290]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:34:54 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3606559027' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:34:54 compute-0 nova_compute[254092]: 2025-11-25 16:34:54.015 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:54.015 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6d2b5c58-94c9-4bdf-aacd-4ff649abc823]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:54.031 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[a1a2fa98-8864-412c-bb73-0bb4e34d8e81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:54 compute-0 nova_compute[254092]: 2025-11-25 16:34:54.053 254096 DEBUG oslo_concurrency.processutils [None req-b5a1e0bd-85c0-48fe-a376-33403e8f625d 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:54.054 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[df6abeb9-d08f-413e-bc5c-a825f41664d8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:54 compute-0 nova_compute[254092]: 2025-11-25 16:34:54.059 254096 DEBUG nova.compute.provider_tree [None req-b5a1e0bd-85c0-48fe-a376-33403e8f625d 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:34:54 compute-0 nova_compute[254092]: 2025-11-25 16:34:54.073 254096 DEBUG nova.scheduler.client.report [None req-b5a1e0bd-85c0-48fe-a376-33403e8f625d 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:54.083 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f0a22864-82e3-42b6-9a5f-0e718ea2b8de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:54 compute-0 systemd-udevd[301389]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:54.092 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b890264f-fdda-4730-87e2-5893de249669]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:54 compute-0 NetworkManager[48891]: <info>  [1764088494.0938] manager: (tapcb7431f6-90): new Veth device (/org/freedesktop/NetworkManager/Devices/141)
Nov 25 16:34:54 compute-0 nova_compute[254092]: 2025-11-25 16:34:54.107 254096 DEBUG oslo_concurrency.lockutils [None req-b5a1e0bd-85c0-48fe-a376-33403e8f625d 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.811s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:34:54 compute-0 nova_compute[254092]: 2025-11-25 16:34:54.136 254096 INFO nova.scheduler.client.report [None req-b5a1e0bd-85c0-48fe-a376-33403e8f625d 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Deleted allocations for instance 90437bdf-689c-4185-93de-c28fe2c2ab07
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:54.138 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a9099064-765b-44b3-9982-f388f731ba6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:54.142 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[416a6f77-6f18-4f6e-87c3-4440147d9356]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:54 compute-0 NetworkManager[48891]: <info>  [1764088494.1774] device (tapcb7431f6-90): carrier: link connected
Nov 25 16:34:54 compute-0 nova_compute[254092]: 2025-11-25 16:34:54.191 254096 DEBUG oslo_concurrency.lockutils [None req-b5a1e0bd-85c0-48fe-a376-33403e8f625d 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "90437bdf-689c-4185-93de-c28fe2c2ab07" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.465s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:54.193 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ee732ad9-371f-48fe-bacd-47c31612dc1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:54 compute-0 nova_compute[254092]: 2025-11-25 16:34:54.210 254096 DEBUG nova.network.neutron [req-99e27079-ad94-4521-ae49-9a8533cfe14c req-df4c3af3-9be2-46b4-995f-4a06051bc7b5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Updated VIF entry in instance network info cache for port 71742b12-1a6c-4d30-a9c9-522fc9eb4a4a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:34:54 compute-0 nova_compute[254092]: 2025-11-25 16:34:54.211 254096 DEBUG nova.network.neutron [req-99e27079-ad94-4521-ae49-9a8533cfe14c req-df4c3af3-9be2-46b4-995f-4a06051bc7b5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Updating instance_info_cache with network_info: [{"id": "8be28993-accf-4bd5-8f8d-f1e94d84aca3", "address": "fa:16:3e:02:ae:b5", "network": {"id": "3650bd5e-702b-4bb4-ae2f-2588a2cf70df", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-211018014", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.164", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8be28993-ac", "ovs_interfaceid": "8be28993-accf-4bd5-8f8d-f1e94d84aca3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ec7e033b-7a98-44cb-9aab-c96b985fd4a7", "address": "fa:16:3e:de:4f:1d", "network": {"id": "cb7431f6-9c64-49bf-bc1e-69a488346002", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1673164746", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec7e033b-7a", "ovs_interfaceid": "ec7e033b-7a98-44cb-9aab-c96b985fd4a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "71742b12-1a6c-4d30-a9c9-522fc9eb4a4a", "address": "fa:16:3e:58:9d:4a", "network": {"id": "3650bd5e-702b-4bb4-ae2f-2588a2cf70df", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-211018014", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.104", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71742b12-1a", "ovs_interfaceid": "71742b12-1a6c-4d30-a9c9-522fc9eb4a4a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:54.215 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4f5ed596-9c5d-4edf-a2a1-94944c23ca9a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcb7431f6-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:83:f1:35'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 89], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495176, 'reachable_time': 27860, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301532, 'error': None, 'target': 'ovnmeta-cb7431f6-9c64-49bf-bc1e-69a488346002', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:54 compute-0 nova_compute[254092]: 2025-11-25 16:34:54.225 254096 DEBUG oslo_concurrency.lockutils [req-99e27079-ad94-4521-ae49-9a8533cfe14c req-df4c3af3-9be2-46b4-995f-4a06051bc7b5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-2c116477-6534-4f01-a0bb-ebdd9e027e05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:34:54 compute-0 nova_compute[254092]: 2025-11-25 16:34:54.225 254096 DEBUG nova.compute.manager [req-99e27079-ad94-4521-ae49-9a8533cfe14c req-df4c3af3-9be2-46b4-995f-4a06051bc7b5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Received event network-vif-plugged-84ab0426-0174-4297-bb2b-e5964e453530 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:34:54 compute-0 nova_compute[254092]: 2025-11-25 16:34:54.225 254096 DEBUG oslo_concurrency.lockutils [req-99e27079-ad94-4521-ae49-9a8533cfe14c req-df4c3af3-9be2-46b4-995f-4a06051bc7b5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "270ad7f6-74d4-4c29-9856-77768f170789-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:34:54 compute-0 nova_compute[254092]: 2025-11-25 16:34:54.226 254096 DEBUG oslo_concurrency.lockutils [req-99e27079-ad94-4521-ae49-9a8533cfe14c req-df4c3af3-9be2-46b4-995f-4a06051bc7b5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "270ad7f6-74d4-4c29-9856-77768f170789-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:54 compute-0 nova_compute[254092]: 2025-11-25 16:34:54.226 254096 DEBUG oslo_concurrency.lockutils [req-99e27079-ad94-4521-ae49-9a8533cfe14c req-df4c3af3-9be2-46b4-995f-4a06051bc7b5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "270ad7f6-74d4-4c29-9856-77768f170789-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:34:54 compute-0 nova_compute[254092]: 2025-11-25 16:34:54.226 254096 DEBUG nova.compute.manager [req-99e27079-ad94-4521-ae49-9a8533cfe14c req-df4c3af3-9be2-46b4-995f-4a06051bc7b5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] No waiting events found dispatching network-vif-plugged-84ab0426-0174-4297-bb2b-e5964e453530 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:34:54 compute-0 nova_compute[254092]: 2025-11-25 16:34:54.226 254096 WARNING nova.compute.manager [req-99e27079-ad94-4521-ae49-9a8533cfe14c req-df4c3af3-9be2-46b4-995f-4a06051bc7b5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Received unexpected event network-vif-plugged-84ab0426-0174-4297-bb2b-e5964e453530 for instance with vm_state building and task_state spawning.
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:54.240 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f31ff457-d55f-4dee-9195-020a4ed4ff60]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe83:f135'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 495176, 'tstamp': 495176}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301533, 'error': None, 'target': 'ovnmeta-cb7431f6-9c64-49bf-bc1e-69a488346002', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:54.265 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a2e7940f-4bb2-4f80-8f05-ea77b59931f2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcb7431f6-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:83:f1:35'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 89], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495176, 'reachable_time': 27860, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 301534, 'error': None, 'target': 'ovnmeta-cb7431f6-9c64-49bf-bc1e-69a488346002', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:54.316 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3752b6ea-1bfc-4bbb-b571-eb2a1d2a6035]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:54.382 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[edced646-eadc-484c-b248-97a536cf4397]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:54.385 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcb7431f6-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:54.385 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:54.386 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcb7431f6-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:54 compute-0 kernel: tapcb7431f6-90: entered promiscuous mode
Nov 25 16:34:54 compute-0 NetworkManager[48891]: <info>  [1764088494.3895] manager: (tapcb7431f6-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/142)
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:54.392 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcb7431f6-90, col_values=(('external_ids', {'iface-id': '2796fe7b-0a92-4077-bfa7-c43f75a95f59'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:54 compute-0 ovn_controller[153477]: 2025-11-25T16:34:54Z|00309|binding|INFO|Releasing lport 2796fe7b-0a92-4077-bfa7-c43f75a95f59 from this chassis (sb_readonly=0)
Nov 25 16:34:54 compute-0 nova_compute[254092]: 2025-11-25 16:34:54.396 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:54 compute-0 nova_compute[254092]: 2025-11-25 16:34:54.412 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:54.413 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cb7431f6-9c64-49bf-bc1e-69a488346002.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cb7431f6-9c64-49bf-bc1e-69a488346002.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:54.415 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[14a97815-622f-4504-862f-cba60a8c1d7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:54.417 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-cb7431f6-9c64-49bf-bc1e-69a488346002
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/cb7431f6-9c64-49bf-bc1e-69a488346002.pid.haproxy
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID cb7431f6-9c64-49bf-bc1e-69a488346002
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:54.418 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cb7431f6-9c64-49bf-bc1e-69a488346002', 'env', 'PROCESS_TAG=haproxy-cb7431f6-9c64-49bf-bc1e-69a488346002', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cb7431f6-9c64-49bf-bc1e-69a488346002.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:34:54 compute-0 nova_compute[254092]: 2025-11-25 16:34:54.497 254096 DEBUG oslo_concurrency.lockutils [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "763ace49-fa88-443d-9733-e919b6f86fab" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:34:54 compute-0 nova_compute[254092]: 2025-11-25 16:34:54.497 254096 DEBUG oslo_concurrency.lockutils [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "763ace49-fa88-443d-9733-e919b6f86fab" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:54 compute-0 nova_compute[254092]: 2025-11-25 16:34:54.514 254096 DEBUG nova.compute.manager [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:34:54 compute-0 nova_compute[254092]: 2025-11-25 16:34:54.652 254096 DEBUG oslo_concurrency.lockutils [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:34:54 compute-0 nova_compute[254092]: 2025-11-25 16:34:54.652 254096 DEBUG oslo_concurrency.lockutils [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:54 compute-0 nova_compute[254092]: 2025-11-25 16:34:54.661 254096 DEBUG nova.virt.hardware [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:34:54 compute-0 nova_compute[254092]: 2025-11-25 16:34:54.662 254096 INFO nova.compute.claims [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:34:54 compute-0 podman[301565]: 2025-11-25 16:34:54.797724342 +0000 UTC m=+0.046479641 container create f211892ead3e68a55939a3afc60bc5e07ce450a6fa174d2511b5812a74a4cdd0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-cb7431f6-9c64-49bf-bc1e-69a488346002, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 16:34:54 compute-0 nova_compute[254092]: 2025-11-25 16:34:54.807 254096 DEBUG oslo_concurrency.processutils [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:34:54 compute-0 systemd[1]: Started libpod-conmon-f211892ead3e68a55939a3afc60bc5e07ce450a6fa174d2511b5812a74a4cdd0.scope.
Nov 25 16:34:54 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:34:54 compute-0 podman[301565]: 2025-11-25 16:34:54.775414177 +0000 UTC m=+0.024169496 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:34:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/963faf632ca6fcea1ee8689b6fe13a9cc3ce4c88f16a90c5f45fa605181d6050/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:34:54 compute-0 podman[301565]: 2025-11-25 16:34:54.888165391 +0000 UTC m=+0.136920700 container init f211892ead3e68a55939a3afc60bc5e07ce450a6fa174d2511b5812a74a4cdd0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-cb7431f6-9c64-49bf-bc1e-69a488346002, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 16:34:54 compute-0 podman[301565]: 2025-11-25 16:34:54.896444155 +0000 UTC m=+0.145199454 container start f211892ead3e68a55939a3afc60bc5e07ce450a6fa174d2511b5812a74a4cdd0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-cb7431f6-9c64-49bf-bc1e-69a488346002, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:34:54 compute-0 neutron-haproxy-ovnmeta-cb7431f6-9c64-49bf-bc1e-69a488346002[301583]: [NOTICE]   (301587) : New worker (301589) forked
Nov 25 16:34:54 compute-0 neutron-haproxy-ovnmeta-cb7431f6-9c64-49bf-bc1e-69a488346002[301583]: [NOTICE]   (301587) : Loading success.
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:54.968 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 71742b12-1a6c-4d30-a9c9-522fc9eb4a4a in datapath 3650bd5e-702b-4bb4-ae2f-2588a2cf70df unbound from our chassis
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:54.970 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3650bd5e-702b-4bb4-ae2f-2588a2cf70df
Nov 25 16:34:54 compute-0 ceph-mon[74985]: pgmap v1428: 321 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 313 active+clean; 306 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 185 KiB/s rd, 770 KiB/s wr, 100 op/s
Nov 25 16:34:54 compute-0 ceph-mon[74985]: osdmap e172: 3 total, 3 up, 3 in
Nov 25 16:34:54 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3606559027' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:34:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:54.989 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ede472fd-a65e-433e-a3d7-8e33703d7d56]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:55.023 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[fb7d1913-19dc-4c18-bd71-0e710aac027e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:55.026 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[0bd7b776-73d6-4f70-8ccf-d531ec5f3781]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:55.053 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[896196f4-b217-4536-be19-9c58968e1de9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:55.071 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[396537a4-db43-43c3-890f-c4f49f894c50]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3650bd5e-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:db:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 6, 'rx_bytes': 266, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 6, 'rx_bytes': 266, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495069, 'reachable_time': 20096, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301622, 'error': None, 'target': 'ovnmeta-3650bd5e-702b-4bb4-ae2f-2588a2cf70df', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:55.086 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3d623311-d9cd-4a74-a8ca-130b82ca23c5]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.255'], ['IFA_LABEL', 'tap3650bd5e-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 495085, 'tstamp': 495085}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301623, 'error': None, 'target': 'ovnmeta-3650bd5e-702b-4bb4-ae2f-2588a2cf70df', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3650bd5e-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 495089, 'tstamp': 495089}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301623, 'error': None, 'target': 'ovnmeta-3650bd5e-702b-4bb4-ae2f-2588a2cf70df', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:55.088 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3650bd5e-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:55 compute-0 nova_compute[254092]: 2025-11-25 16:34:55.089 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:55.091 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3650bd5e-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:55.091 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:34:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:55.091 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3650bd5e-70, col_values=(('external_ids', {'iface-id': '41f74651-7af4-42a4-9a35-56f962dcaceb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:55.091 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:34:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 16:34:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/962592265' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:34:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 16:34:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/962592265' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:34:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:34:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3317189330' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:34:55 compute-0 nova_compute[254092]: 2025-11-25 16:34:55.258 254096 DEBUG oslo_concurrency.processutils [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:34:55 compute-0 nova_compute[254092]: 2025-11-25 16:34:55.264 254096 DEBUG nova.compute.provider_tree [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:34:55 compute-0 nova_compute[254092]: 2025-11-25 16:34:55.278 254096 DEBUG nova.scheduler.client.report [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:34:55 compute-0 nova_compute[254092]: 2025-11-25 16:34:55.300 254096 DEBUG oslo_concurrency.lockutils [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:34:55 compute-0 nova_compute[254092]: 2025-11-25 16:34:55.301 254096 DEBUG nova.compute.manager [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:34:55 compute-0 nova_compute[254092]: 2025-11-25 16:34:55.355 254096 DEBUG nova.compute.manager [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:34:55 compute-0 nova_compute[254092]: 2025-11-25 16:34:55.356 254096 DEBUG nova.network.neutron [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:34:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1430: 321 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 313 active+clean; 232 MiB data, 473 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 41 KiB/s wr, 169 op/s
Nov 25 16:34:55 compute-0 nova_compute[254092]: 2025-11-25 16:34:55.377 254096 INFO nova.virt.libvirt.driver [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:34:55 compute-0 nova_compute[254092]: 2025-11-25 16:34:55.397 254096 DEBUG nova.compute.manager [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:34:55 compute-0 nova_compute[254092]: 2025-11-25 16:34:55.510 254096 DEBUG nova.compute.manager [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:34:55 compute-0 nova_compute[254092]: 2025-11-25 16:34:55.512 254096 DEBUG nova.virt.libvirt.driver [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:34:55 compute-0 nova_compute[254092]: 2025-11-25 16:34:55.512 254096 INFO nova.virt.libvirt.driver [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Creating image(s)
Nov 25 16:34:55 compute-0 nova_compute[254092]: 2025-11-25 16:34:55.542 254096 DEBUG nova.storage.rbd_utils [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image 763ace49-fa88-443d-9733-e919b6f86fab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:34:55 compute-0 nova_compute[254092]: 2025-11-25 16:34:55.572 254096 DEBUG nova.storage.rbd_utils [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image 763ace49-fa88-443d-9733-e919b6f86fab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:34:55 compute-0 nova_compute[254092]: 2025-11-25 16:34:55.597 254096 DEBUG nova.storage.rbd_utils [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image 763ace49-fa88-443d-9733-e919b6f86fab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:34:55 compute-0 nova_compute[254092]: 2025-11-25 16:34:55.601 254096 DEBUG oslo_concurrency.processutils [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:34:55 compute-0 nova_compute[254092]: 2025-11-25 16:34:55.631 254096 DEBUG nova.policy [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '75e65df891a54a2caabb073a427430b9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8bd01e0913564ac783fae350d6861e24', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:34:55 compute-0 nova_compute[254092]: 2025-11-25 16:34:55.672 254096 DEBUG oslo_concurrency.processutils [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:34:55 compute-0 nova_compute[254092]: 2025-11-25 16:34:55.673 254096 DEBUG oslo_concurrency.lockutils [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:34:55 compute-0 nova_compute[254092]: 2025-11-25 16:34:55.674 254096 DEBUG oslo_concurrency.lockutils [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:55 compute-0 nova_compute[254092]: 2025-11-25 16:34:55.674 254096 DEBUG oslo_concurrency.lockutils [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:34:55 compute-0 nova_compute[254092]: 2025-11-25 16:34:55.694 254096 DEBUG nova.storage.rbd_utils [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image 763ace49-fa88-443d-9733-e919b6f86fab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:34:55 compute-0 nova_compute[254092]: 2025-11-25 16:34:55.698 254096 DEBUG oslo_concurrency.processutils [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 763ace49-fa88-443d-9733-e919b6f86fab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:34:55 compute-0 nova_compute[254092]: 2025-11-25 16:34:55.759 254096 INFO nova.compute.manager [None req-4e502589-ff31-4499-b8af-20edab11302d b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Pausing
Nov 25 16:34:55 compute-0 nova_compute[254092]: 2025-11-25 16:34:55.760 254096 DEBUG nova.objects.instance [None req-4e502589-ff31-4499-b8af-20edab11302d b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lazy-loading 'flavor' on Instance uuid a3d5d205-98f0-4820-a96c-7f3e59d0cdd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:34:55 compute-0 nova_compute[254092]: 2025-11-25 16:34:55.788 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088495.7880318, a3d5d205-98f0-4820-a96c-7f3e59d0cdd9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:34:55 compute-0 nova_compute[254092]: 2025-11-25 16:34:55.789 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] VM Paused (Lifecycle Event)
Nov 25 16:34:55 compute-0 nova_compute[254092]: 2025-11-25 16:34:55.790 254096 DEBUG nova.compute.manager [None req-4e502589-ff31-4499-b8af-20edab11302d b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:34:55 compute-0 nova_compute[254092]: 2025-11-25 16:34:55.819 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:34:55 compute-0 nova_compute[254092]: 2025-11-25 16:34:55.824 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:34:55 compute-0 nova_compute[254092]: 2025-11-25 16:34:55.856 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] During sync_power_state the instance has a pending task (pausing). Skip.
Nov 25 16:34:55 compute-0 nova_compute[254092]: 2025-11-25 16:34:55.943 254096 DEBUG oslo_concurrency.processutils [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 763ace49-fa88-443d-9733-e919b6f86fab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.245s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:34:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/962592265' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:34:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/962592265' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:34:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3317189330' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:34:55 compute-0 nova_compute[254092]: 2025-11-25 16:34:55.997 254096 DEBUG nova.storage.rbd_utils [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] resizing rbd image 763ace49-fa88-443d-9733-e919b6f86fab_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.068 254096 DEBUG nova.objects.instance [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lazy-loading 'migration_context' on Instance uuid 763ace49-fa88-443d-9733-e919b6f86fab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.084 254096 DEBUG nova.virt.libvirt.driver [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.084 254096 DEBUG nova.virt.libvirt.driver [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Ensure instance console log exists: /var/lib/nova/instances/763ace49-fa88-443d-9733-e919b6f86fab/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.084 254096 DEBUG oslo_concurrency.lockutils [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.085 254096 DEBUG oslo_concurrency.lockutils [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.085 254096 DEBUG oslo_concurrency.lockutils [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.382 254096 DEBUG nova.network.neutron [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Successfully created port: 1baf23f4-ec91-408d-b406-37757eba550e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.699 254096 DEBUG nova.compute.manager [req-e400e164-c61a-4501-863f-9fd9621b9c75 req-43c72b82-c68e-4d45-95cf-61cf83381ad8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Received event network-vif-plugged-8be28993-accf-4bd5-8f8d-f1e94d84aca3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.700 254096 DEBUG oslo_concurrency.lockutils [req-e400e164-c61a-4501-863f-9fd9621b9c75 req-43c72b82-c68e-4d45-95cf-61cf83381ad8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2c116477-6534-4f01-a0bb-ebdd9e027e05-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.700 254096 DEBUG oslo_concurrency.lockutils [req-e400e164-c61a-4501-863f-9fd9621b9c75 req-43c72b82-c68e-4d45-95cf-61cf83381ad8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2c116477-6534-4f01-a0bb-ebdd9e027e05-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.701 254096 DEBUG oslo_concurrency.lockutils [req-e400e164-c61a-4501-863f-9fd9621b9c75 req-43c72b82-c68e-4d45-95cf-61cf83381ad8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2c116477-6534-4f01-a0bb-ebdd9e027e05-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.701 254096 DEBUG nova.compute.manager [req-e400e164-c61a-4501-863f-9fd9621b9c75 req-43c72b82-c68e-4d45-95cf-61cf83381ad8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Processing event network-vif-plugged-8be28993-accf-4bd5-8f8d-f1e94d84aca3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.701 254096 DEBUG nova.compute.manager [req-e400e164-c61a-4501-863f-9fd9621b9c75 req-43c72b82-c68e-4d45-95cf-61cf83381ad8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Received event network-vif-plugged-8be28993-accf-4bd5-8f8d-f1e94d84aca3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.701 254096 DEBUG oslo_concurrency.lockutils [req-e400e164-c61a-4501-863f-9fd9621b9c75 req-43c72b82-c68e-4d45-95cf-61cf83381ad8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2c116477-6534-4f01-a0bb-ebdd9e027e05-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.702 254096 DEBUG oslo_concurrency.lockutils [req-e400e164-c61a-4501-863f-9fd9621b9c75 req-43c72b82-c68e-4d45-95cf-61cf83381ad8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2c116477-6534-4f01-a0bb-ebdd9e027e05-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.702 254096 DEBUG oslo_concurrency.lockutils [req-e400e164-c61a-4501-863f-9fd9621b9c75 req-43c72b82-c68e-4d45-95cf-61cf83381ad8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2c116477-6534-4f01-a0bb-ebdd9e027e05-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.702 254096 DEBUG nova.compute.manager [req-e400e164-c61a-4501-863f-9fd9621b9c75 req-43c72b82-c68e-4d45-95cf-61cf83381ad8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] No event matching network-vif-plugged-8be28993-accf-4bd5-8f8d-f1e94d84aca3 in dict_keys([('network-vif-plugged', 'ec7e033b-7a98-44cb-9aab-c96b985fd4a7')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.703 254096 WARNING nova.compute.manager [req-e400e164-c61a-4501-863f-9fd9621b9c75 req-43c72b82-c68e-4d45-95cf-61cf83381ad8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Received unexpected event network-vif-plugged-8be28993-accf-4bd5-8f8d-f1e94d84aca3 for instance with vm_state building and task_state spawning.
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.703 254096 DEBUG nova.compute.manager [req-e400e164-c61a-4501-863f-9fd9621b9c75 req-43c72b82-c68e-4d45-95cf-61cf83381ad8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Received event network-vif-plugged-ec7e033b-7a98-44cb-9aab-c96b985fd4a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.703 254096 DEBUG oslo_concurrency.lockutils [req-e400e164-c61a-4501-863f-9fd9621b9c75 req-43c72b82-c68e-4d45-95cf-61cf83381ad8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2c116477-6534-4f01-a0bb-ebdd9e027e05-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.703 254096 DEBUG oslo_concurrency.lockutils [req-e400e164-c61a-4501-863f-9fd9621b9c75 req-43c72b82-c68e-4d45-95cf-61cf83381ad8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2c116477-6534-4f01-a0bb-ebdd9e027e05-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.704 254096 DEBUG oslo_concurrency.lockutils [req-e400e164-c61a-4501-863f-9fd9621b9c75 req-43c72b82-c68e-4d45-95cf-61cf83381ad8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2c116477-6534-4f01-a0bb-ebdd9e027e05-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.704 254096 DEBUG nova.compute.manager [req-e400e164-c61a-4501-863f-9fd9621b9c75 req-43c72b82-c68e-4d45-95cf-61cf83381ad8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Processing event network-vif-plugged-ec7e033b-7a98-44cb-9aab-c96b985fd4a7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.704 254096 DEBUG nova.compute.manager [req-e400e164-c61a-4501-863f-9fd9621b9c75 req-43c72b82-c68e-4d45-95cf-61cf83381ad8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Received event network-vif-plugged-ec7e033b-7a98-44cb-9aab-c96b985fd4a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.704 254096 DEBUG oslo_concurrency.lockutils [req-e400e164-c61a-4501-863f-9fd9621b9c75 req-43c72b82-c68e-4d45-95cf-61cf83381ad8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2c116477-6534-4f01-a0bb-ebdd9e027e05-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.705 254096 DEBUG oslo_concurrency.lockutils [req-e400e164-c61a-4501-863f-9fd9621b9c75 req-43c72b82-c68e-4d45-95cf-61cf83381ad8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2c116477-6534-4f01-a0bb-ebdd9e027e05-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.705 254096 DEBUG oslo_concurrency.lockutils [req-e400e164-c61a-4501-863f-9fd9621b9c75 req-43c72b82-c68e-4d45-95cf-61cf83381ad8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2c116477-6534-4f01-a0bb-ebdd9e027e05-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.705 254096 DEBUG nova.compute.manager [req-e400e164-c61a-4501-863f-9fd9621b9c75 req-43c72b82-c68e-4d45-95cf-61cf83381ad8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] No waiting events found dispatching network-vif-plugged-ec7e033b-7a98-44cb-9aab-c96b985fd4a7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.705 254096 WARNING nova.compute.manager [req-e400e164-c61a-4501-863f-9fd9621b9c75 req-43c72b82-c68e-4d45-95cf-61cf83381ad8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Received unexpected event network-vif-plugged-ec7e033b-7a98-44cb-9aab-c96b985fd4a7 for instance with vm_state building and task_state spawning.
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.706 254096 DEBUG nova.compute.manager [req-e400e164-c61a-4501-863f-9fd9621b9c75 req-43c72b82-c68e-4d45-95cf-61cf83381ad8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Received event network-changed-84ab0426-0174-4297-bb2b-e5964e453530 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.706 254096 DEBUG nova.compute.manager [req-e400e164-c61a-4501-863f-9fd9621b9c75 req-43c72b82-c68e-4d45-95cf-61cf83381ad8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Refreshing instance network info cache due to event network-changed-84ab0426-0174-4297-bb2b-e5964e453530. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.706 254096 DEBUG oslo_concurrency.lockutils [req-e400e164-c61a-4501-863f-9fd9621b9c75 req-43c72b82-c68e-4d45-95cf-61cf83381ad8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-270ad7f6-74d4-4c29-9856-77768f170789" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.707 254096 DEBUG oslo_concurrency.lockutils [req-e400e164-c61a-4501-863f-9fd9621b9c75 req-43c72b82-c68e-4d45-95cf-61cf83381ad8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-270ad7f6-74d4-4c29-9856-77768f170789" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.707 254096 DEBUG nova.network.neutron [req-e400e164-c61a-4501-863f-9fd9621b9c75 req-43c72b82-c68e-4d45-95cf-61cf83381ad8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Refreshing network info cache for port 84ab0426-0174-4297-bb2b-e5964e453530 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.708 254096 DEBUG nova.compute.manager [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Instance event wait completed in 2 seconds for network-vif-plugged,network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.712 254096 DEBUG nova.virt.libvirt.driver [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.712 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088496.7125883, 2c116477-6534-4f01-a0bb-ebdd9e027e05 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.713 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] VM Resumed (Lifecycle Event)
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.716 254096 INFO nova.virt.libvirt.driver [-] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Instance spawned successfully.
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.716 254096 DEBUG nova.virt.libvirt.driver [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.735 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.741 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.746 254096 DEBUG nova.virt.libvirt.driver [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.746 254096 DEBUG nova.virt.libvirt.driver [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.747 254096 DEBUG nova.virt.libvirt.driver [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.747 254096 DEBUG nova.virt.libvirt.driver [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.748 254096 DEBUG nova.virt.libvirt.driver [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.748 254096 DEBUG nova.virt.libvirt.driver [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.771 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.783 254096 DEBUG nova.compute.manager [req-682d3411-09b5-4367-9fdb-c47390a931d2 req-91f60eb9-62b4-4e35-bc47-a2b6af45e09b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Received event network-vif-plugged-71742b12-1a6c-4d30-a9c9-522fc9eb4a4a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.784 254096 DEBUG oslo_concurrency.lockutils [req-682d3411-09b5-4367-9fdb-c47390a931d2 req-91f60eb9-62b4-4e35-bc47-a2b6af45e09b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2c116477-6534-4f01-a0bb-ebdd9e027e05-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.784 254096 DEBUG oslo_concurrency.lockutils [req-682d3411-09b5-4367-9fdb-c47390a931d2 req-91f60eb9-62b4-4e35-bc47-a2b6af45e09b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2c116477-6534-4f01-a0bb-ebdd9e027e05-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.784 254096 DEBUG oslo_concurrency.lockutils [req-682d3411-09b5-4367-9fdb-c47390a931d2 req-91f60eb9-62b4-4e35-bc47-a2b6af45e09b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2c116477-6534-4f01-a0bb-ebdd9e027e05-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.785 254096 DEBUG nova.compute.manager [req-682d3411-09b5-4367-9fdb-c47390a931d2 req-91f60eb9-62b4-4e35-bc47-a2b6af45e09b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] No waiting events found dispatching network-vif-plugged-71742b12-1a6c-4d30-a9c9-522fc9eb4a4a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.785 254096 WARNING nova.compute.manager [req-682d3411-09b5-4367-9fdb-c47390a931d2 req-91f60eb9-62b4-4e35-bc47-a2b6af45e09b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Received unexpected event network-vif-plugged-71742b12-1a6c-4d30-a9c9-522fc9eb4a4a for instance with vm_state building and task_state spawning.
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.809 254096 INFO nova.compute.manager [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Took 18.35 seconds to spawn the instance on the hypervisor.
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.810 254096 DEBUG nova.compute.manager [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.875 254096 INFO nova.compute.manager [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Took 19.65 seconds to build instance.
Nov 25 16:34:56 compute-0 nova_compute[254092]: 2025-11-25 16:34:56.892 254096 DEBUG oslo_concurrency.lockutils [None req-fd3e5654-3d51-4b8b-984a-2dcb565a1e03 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Lock "2c116477-6534-4f01-a0bb-ebdd9e027e05" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:34:56 compute-0 ceph-mon[74985]: pgmap v1430: 321 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 313 active+clean; 232 MiB data, 473 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 41 KiB/s wr, 169 op/s
Nov 25 16:34:57 compute-0 nova_compute[254092]: 2025-11-25 16:34:57.021 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:57 compute-0 nova_compute[254092]: 2025-11-25 16:34:57.316 254096 DEBUG nova.network.neutron [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Successfully updated port: 1baf23f4-ec91-408d-b406-37757eba550e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:34:57 compute-0 nova_compute[254092]: 2025-11-25 16:34:57.336 254096 DEBUG oslo_concurrency.lockutils [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "refresh_cache-763ace49-fa88-443d-9733-e919b6f86fab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:34:57 compute-0 nova_compute[254092]: 2025-11-25 16:34:57.337 254096 DEBUG oslo_concurrency.lockutils [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquired lock "refresh_cache-763ace49-fa88-443d-9733-e919b6f86fab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:34:57 compute-0 nova_compute[254092]: 2025-11-25 16:34:57.337 254096 DEBUG nova.network.neutron [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:34:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1431: 321 pgs: 321 active+clean; 234 MiB data, 467 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.7 MiB/s wr, 227 op/s
Nov 25 16:34:57 compute-0 nova_compute[254092]: 2025-11-25 16:34:57.485 254096 DEBUG nova.network.neutron [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:34:57 compute-0 nova_compute[254092]: 2025-11-25 16:34:57.723 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:58 compute-0 ceph-mon[74985]: pgmap v1431: 321 pgs: 321 active+clean; 234 MiB data, 467 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.7 MiB/s wr, 227 op/s
Nov 25 16:34:58 compute-0 nova_compute[254092]: 2025-11-25 16:34:58.431 254096 INFO nova.compute.manager [None req-e097203a-5ab1-4bd6-9994-361d58f0c791 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Unpausing
Nov 25 16:34:58 compute-0 nova_compute[254092]: 2025-11-25 16:34:58.432 254096 DEBUG nova.objects.instance [None req-e097203a-5ab1-4bd6-9994-361d58f0c791 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lazy-loading 'flavor' on Instance uuid a3d5d205-98f0-4820-a96c-7f3e59d0cdd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:34:58 compute-0 nova_compute[254092]: 2025-11-25 16:34:58.445 254096 DEBUG nova.network.neutron [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Updating instance_info_cache with network_info: [{"id": "1baf23f4-ec91-408d-b406-37757eba550e", "address": "fa:16:3e:11:f5:37", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1baf23f4-ec", "ovs_interfaceid": "1baf23f4-ec91-408d-b406-37757eba550e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:34:58 compute-0 nova_compute[254092]: 2025-11-25 16:34:58.456 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088498.4557207, a3d5d205-98f0-4820-a96c-7f3e59d0cdd9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:34:58 compute-0 nova_compute[254092]: 2025-11-25 16:34:58.456 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] VM Resumed (Lifecycle Event)
Nov 25 16:34:58 compute-0 virtqemud[253880]: argument unsupported: QEMU guest agent is not configured
Nov 25 16:34:58 compute-0 nova_compute[254092]: 2025-11-25 16:34:58.461 254096 DEBUG nova.virt.libvirt.guest [None req-e097203a-5ab1-4bd6-9994-361d58f0c791 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Nov 25 16:34:58 compute-0 nova_compute[254092]: 2025-11-25 16:34:58.461 254096 DEBUG nova.compute.manager [None req-e097203a-5ab1-4bd6-9994-361d58f0c791 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:34:58 compute-0 nova_compute[254092]: 2025-11-25 16:34:58.462 254096 DEBUG oslo_concurrency.lockutils [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Releasing lock "refresh_cache-763ace49-fa88-443d-9733-e919b6f86fab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:34:58 compute-0 nova_compute[254092]: 2025-11-25 16:34:58.463 254096 DEBUG nova.compute.manager [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Instance network_info: |[{"id": "1baf23f4-ec91-408d-b406-37757eba550e", "address": "fa:16:3e:11:f5:37", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1baf23f4-ec", "ovs_interfaceid": "1baf23f4-ec91-408d-b406-37757eba550e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:34:58 compute-0 nova_compute[254092]: 2025-11-25 16:34:58.465 254096 DEBUG nova.virt.libvirt.driver [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Start _get_guest_xml network_info=[{"id": "1baf23f4-ec91-408d-b406-37757eba550e", "address": "fa:16:3e:11:f5:37", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1baf23f4-ec", "ovs_interfaceid": "1baf23f4-ec91-408d-b406-37757eba550e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:34:58 compute-0 nova_compute[254092]: 2025-11-25 16:34:58.470 254096 WARNING nova.virt.libvirt.driver [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:34:58 compute-0 nova_compute[254092]: 2025-11-25 16:34:58.477 254096 DEBUG nova.virt.libvirt.host [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:34:58 compute-0 nova_compute[254092]: 2025-11-25 16:34:58.478 254096 DEBUG nova.virt.libvirt.host [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:34:58 compute-0 nova_compute[254092]: 2025-11-25 16:34:58.481 254096 DEBUG nova.virt.libvirt.host [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:34:58 compute-0 nova_compute[254092]: 2025-11-25 16:34:58.482 254096 DEBUG nova.virt.libvirt.host [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:34:58 compute-0 nova_compute[254092]: 2025-11-25 16:34:58.482 254096 DEBUG nova.virt.libvirt.driver [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:34:58 compute-0 nova_compute[254092]: 2025-11-25 16:34:58.483 254096 DEBUG nova.virt.hardware [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:34:58 compute-0 nova_compute[254092]: 2025-11-25 16:34:58.483 254096 DEBUG nova.virt.hardware [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:34:58 compute-0 nova_compute[254092]: 2025-11-25 16:34:58.484 254096 DEBUG nova.virt.hardware [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:34:58 compute-0 nova_compute[254092]: 2025-11-25 16:34:58.484 254096 DEBUG nova.virt.hardware [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:34:58 compute-0 nova_compute[254092]: 2025-11-25 16:34:58.484 254096 DEBUG nova.virt.hardware [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:34:58 compute-0 nova_compute[254092]: 2025-11-25 16:34:58.484 254096 DEBUG nova.virt.hardware [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:34:58 compute-0 nova_compute[254092]: 2025-11-25 16:34:58.485 254096 DEBUG nova.virt.hardware [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:34:58 compute-0 nova_compute[254092]: 2025-11-25 16:34:58.485 254096 DEBUG nova.virt.hardware [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:34:58 compute-0 nova_compute[254092]: 2025-11-25 16:34:58.485 254096 DEBUG nova.virt.hardware [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:34:58 compute-0 nova_compute[254092]: 2025-11-25 16:34:58.486 254096 DEBUG nova.virt.hardware [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:34:58 compute-0 nova_compute[254092]: 2025-11-25 16:34:58.486 254096 DEBUG nova.virt.hardware [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:34:58 compute-0 nova_compute[254092]: 2025-11-25 16:34:58.489 254096 DEBUG oslo_concurrency.processutils [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:34:58 compute-0 nova_compute[254092]: 2025-11-25 16:34:58.515 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:34:58 compute-0 nova_compute[254092]: 2025-11-25 16:34:58.517 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:34:58 compute-0 nova_compute[254092]: 2025-11-25 16:34:58.522 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:34:58 compute-0 nova_compute[254092]: 2025-11-25 16:34:58.522 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 16:34:58 compute-0 nova_compute[254092]: 2025-11-25 16:34:58.524 254096 DEBUG nova.network.neutron [req-e400e164-c61a-4501-863f-9fd9621b9c75 req-43c72b82-c68e-4d45-95cf-61cf83381ad8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Updated VIF entry in instance network info cache for port 84ab0426-0174-4297-bb2b-e5964e453530. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:34:58 compute-0 nova_compute[254092]: 2025-11-25 16:34:58.525 254096 DEBUG nova.network.neutron [req-e400e164-c61a-4501-863f-9fd9621b9c75 req-43c72b82-c68e-4d45-95cf-61cf83381ad8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Updating instance_info_cache with network_info: [{"id": "84ab0426-0174-4297-bb2b-e5964e453530", "address": "fa:16:3e:06:28:4e", "network": {"id": "82742f46-fb6e-443e-a99d-84c5367a4ccd", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-522194148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe7901baa563491c8609089aa4334bf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84ab0426-01", "ovs_interfaceid": "84ab0426-0174-4297-bb2b-e5964e453530", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:34:58 compute-0 nova_compute[254092]: 2025-11-25 16:34:58.530 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:34:58 compute-0 nova_compute[254092]: 2025-11-25 16:34:58.552 254096 DEBUG oslo_concurrency.lockutils [req-e400e164-c61a-4501-863f-9fd9621b9c75 req-43c72b82-c68e-4d45-95cf-61cf83381ad8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-270ad7f6-74d4-4c29-9856-77768f170789" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:34:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:34:58 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3241114713' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:34:58 compute-0 nova_compute[254092]: 2025-11-25 16:34:58.910 254096 DEBUG oslo_concurrency.processutils [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:34:58 compute-0 nova_compute[254092]: 2025-11-25 16:34:58.931 254096 DEBUG nova.storage.rbd_utils [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image 763ace49-fa88-443d-9733-e919b6f86fab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:34:58 compute-0 nova_compute[254092]: 2025-11-25 16:34:58.935 254096 DEBUG oslo_concurrency.processutils [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:34:58 compute-0 nova_compute[254092]: 2025-11-25 16:34:58.963 254096 DEBUG nova.compute.manager [req-16c6362c-b63a-4a24-aa6d-068e3f2ef7df req-10dca23f-1dd1-4253-88a4-f0ad121e9f02 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Received event network-changed-84ab0426-0174-4297-bb2b-e5964e453530 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:34:58 compute-0 nova_compute[254092]: 2025-11-25 16:34:58.964 254096 DEBUG nova.compute.manager [req-16c6362c-b63a-4a24-aa6d-068e3f2ef7df req-10dca23f-1dd1-4253-88a4-f0ad121e9f02 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Refreshing instance network info cache due to event network-changed-84ab0426-0174-4297-bb2b-e5964e453530. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:34:58 compute-0 nova_compute[254092]: 2025-11-25 16:34:58.965 254096 DEBUG oslo_concurrency.lockutils [req-16c6362c-b63a-4a24-aa6d-068e3f2ef7df req-10dca23f-1dd1-4253-88a4-f0ad121e9f02 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-270ad7f6-74d4-4c29-9856-77768f170789" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:34:58 compute-0 nova_compute[254092]: 2025-11-25 16:34:58.965 254096 DEBUG oslo_concurrency.lockutils [req-16c6362c-b63a-4a24-aa6d-068e3f2ef7df req-10dca23f-1dd1-4253-88a4-f0ad121e9f02 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-270ad7f6-74d4-4c29-9856-77768f170789" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:34:58 compute-0 nova_compute[254092]: 2025-11-25 16:34:58.965 254096 DEBUG nova.network.neutron [req-16c6362c-b63a-4a24-aa6d-068e3f2ef7df req-10dca23f-1dd1-4253-88a4-f0ad121e9f02 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Refreshing network info cache for port 84ab0426-0174-4297-bb2b-e5964e453530 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:34:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:34:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e172 do_prune osdmap full prune enabled
Nov 25 16:34:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e173 e173: 3 total, 3 up, 3 in
Nov 25 16:34:58 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e173: 3 total, 3 up, 3 in
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.010 254096 DEBUG nova.compute.manager [req-4281446f-830c-4437-8685-3a4395138ee7 req-8b4ec9f9-fbe7-49b0-940e-6a321f17f8fe a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Received event network-changed-1baf23f4-ec91-408d-b406-37757eba550e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.010 254096 DEBUG nova.compute.manager [req-4281446f-830c-4437-8685-3a4395138ee7 req-8b4ec9f9-fbe7-49b0-940e-6a321f17f8fe a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Refreshing instance network info cache due to event network-changed-1baf23f4-ec91-408d-b406-37757eba550e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.010 254096 DEBUG oslo_concurrency.lockutils [req-4281446f-830c-4437-8685-3a4395138ee7 req-8b4ec9f9-fbe7-49b0-940e-6a321f17f8fe a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-763ace49-fa88-443d-9733-e919b6f86fab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.011 254096 DEBUG oslo_concurrency.lockutils [req-4281446f-830c-4437-8685-3a4395138ee7 req-8b4ec9f9-fbe7-49b0-940e-6a321f17f8fe a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-763ace49-fa88-443d-9733-e919b6f86fab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.011 254096 DEBUG nova.network.neutron [req-4281446f-830c-4437-8685-3a4395138ee7 req-8b4ec9f9-fbe7-49b0-940e-6a321f17f8fe a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Refreshing network info cache for port 1baf23f4-ec91-408d-b406-37757eba550e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:34:59 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3241114713' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:34:59 compute-0 ceph-mon[74985]: osdmap e173: 3 total, 3 up, 3 in
Nov 25 16:34:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:34:59 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2174895987' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:34:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1433: 321 pgs: 321 active+clean; 234 MiB data, 467 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.7 MiB/s wr, 184 op/s
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.370 254096 DEBUG oslo_concurrency.processutils [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.372 254096 DEBUG nova.virt.libvirt.vif [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:34:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1725494683',display_name='tempest-ImagesTestJSON-server-1725494683',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1725494683',id=42,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8bd01e0913564ac783fae350d6861e24',ramdisk_id='',reservation_id='r-24wfgra9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-217824554',owner_user_name='tempest-ImagesTestJSON-217824554-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:34:55Z,user_data=None,user_id='75e65df891a54a2caabb073a427430b9',uuid=763ace49-fa88-443d-9733-e919b6f86fab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1baf23f4-ec91-408d-b406-37757eba550e", "address": "fa:16:3e:11:f5:37", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1baf23f4-ec", "ovs_interfaceid": "1baf23f4-ec91-408d-b406-37757eba550e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.372 254096 DEBUG nova.network.os_vif_util [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converting VIF {"id": "1baf23f4-ec91-408d-b406-37757eba550e", "address": "fa:16:3e:11:f5:37", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1baf23f4-ec", "ovs_interfaceid": "1baf23f4-ec91-408d-b406-37757eba550e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.373 254096 DEBUG nova.network.os_vif_util [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:f5:37,bridge_name='br-int',has_traffic_filtering=True,id=1baf23f4-ec91-408d-b406-37757eba550e,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1baf23f4-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.375 254096 DEBUG nova.objects.instance [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lazy-loading 'pci_devices' on Instance uuid 763ace49-fa88-443d-9733-e919b6f86fab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.386 254096 DEBUG nova.virt.libvirt.driver [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:34:59 compute-0 nova_compute[254092]:   <uuid>763ace49-fa88-443d-9733-e919b6f86fab</uuid>
Nov 25 16:34:59 compute-0 nova_compute[254092]:   <name>instance-0000002a</name>
Nov 25 16:34:59 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:34:59 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:34:59 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:34:59 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:       <nova:name>tempest-ImagesTestJSON-server-1725494683</nova:name>
Nov 25 16:34:59 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:34:58</nova:creationTime>
Nov 25 16:34:59 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:34:59 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:34:59 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:34:59 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:34:59 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:34:59 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:34:59 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:34:59 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:34:59 compute-0 nova_compute[254092]:         <nova:user uuid="75e65df891a54a2caabb073a427430b9">tempest-ImagesTestJSON-217824554-project-member</nova:user>
Nov 25 16:34:59 compute-0 nova_compute[254092]:         <nova:project uuid="8bd01e0913564ac783fae350d6861e24">tempest-ImagesTestJSON-217824554</nova:project>
Nov 25 16:34:59 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:34:59 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:34:59 compute-0 nova_compute[254092]:         <nova:port uuid="1baf23f4-ec91-408d-b406-37757eba550e">
Nov 25 16:34:59 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:34:59 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:34:59 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:34:59 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:34:59 compute-0 nova_compute[254092]:     <system>
Nov 25 16:34:59 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:34:59 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:34:59 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:34:59 compute-0 nova_compute[254092]:       <entry name="serial">763ace49-fa88-443d-9733-e919b6f86fab</entry>
Nov 25 16:34:59 compute-0 nova_compute[254092]:       <entry name="uuid">763ace49-fa88-443d-9733-e919b6f86fab</entry>
Nov 25 16:34:59 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     </system>
Nov 25 16:34:59 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:34:59 compute-0 nova_compute[254092]:   <os>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:   </os>
Nov 25 16:34:59 compute-0 nova_compute[254092]:   <features>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:   </features>
Nov 25 16:34:59 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:34:59 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:34:59 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:34:59 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:34:59 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:34:59 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/763ace49-fa88-443d-9733-e919b6f86fab_disk">
Nov 25 16:34:59 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:       </source>
Nov 25 16:34:59 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:34:59 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:34:59 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:34:59 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/763ace49-fa88-443d-9733-e919b6f86fab_disk.config">
Nov 25 16:34:59 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:       </source>
Nov 25 16:34:59 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:34:59 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:34:59 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:34:59 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:11:f5:37"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:       <target dev="tap1baf23f4-ec"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:34:59 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/763ace49-fa88-443d-9733-e919b6f86fab/console.log" append="off"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     <video>
Nov 25 16:34:59 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     </video>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:34:59 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:34:59 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:34:59 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:34:59 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:34:59 compute-0 nova_compute[254092]: </domain>
Nov 25 16:34:59 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.392 254096 DEBUG nova.compute.manager [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Preparing to wait for external event network-vif-plugged-1baf23f4-ec91-408d-b406-37757eba550e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.392 254096 DEBUG oslo_concurrency.lockutils [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "763ace49-fa88-443d-9733-e919b6f86fab-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.392 254096 DEBUG oslo_concurrency.lockutils [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "763ace49-fa88-443d-9733-e919b6f86fab-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.393 254096 DEBUG oslo_concurrency.lockutils [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "763ace49-fa88-443d-9733-e919b6f86fab-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.394 254096 DEBUG nova.virt.libvirt.vif [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:34:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1725494683',display_name='tempest-ImagesTestJSON-server-1725494683',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1725494683',id=42,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8bd01e0913564ac783fae350d6861e24',ramdisk_id='',reservation_id='r-24wfgra9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-217824554',owner_user_name='tempest-ImagesTestJSON-217824554-project-member'}
,tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:34:55Z,user_data=None,user_id='75e65df891a54a2caabb073a427430b9',uuid=763ace49-fa88-443d-9733-e919b6f86fab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1baf23f4-ec91-408d-b406-37757eba550e", "address": "fa:16:3e:11:f5:37", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1baf23f4-ec", "ovs_interfaceid": "1baf23f4-ec91-408d-b406-37757eba550e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.394 254096 DEBUG nova.network.os_vif_util [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converting VIF {"id": "1baf23f4-ec91-408d-b406-37757eba550e", "address": "fa:16:3e:11:f5:37", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1baf23f4-ec", "ovs_interfaceid": "1baf23f4-ec91-408d-b406-37757eba550e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.395 254096 DEBUG nova.network.os_vif_util [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:f5:37,bridge_name='br-int',has_traffic_filtering=True,id=1baf23f4-ec91-408d-b406-37757eba550e,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1baf23f4-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.395 254096 DEBUG os_vif [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:f5:37,bridge_name='br-int',has_traffic_filtering=True,id=1baf23f4-ec91-408d-b406-37757eba550e,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1baf23f4-ec') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.396 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.396 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.396 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.402 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.402 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1baf23f4-ec, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.403 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1baf23f4-ec, col_values=(('external_ids', {'iface-id': '1baf23f4-ec91-408d-b406-37757eba550e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:11:f5:37', 'vm-uuid': '763ace49-fa88-443d-9733-e919b6f86fab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.404 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:59 compute-0 NetworkManager[48891]: <info>  [1764088499.4052] manager: (tap1baf23f4-ec): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/143)
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.406 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.411 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.412 254096 INFO os_vif [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:f5:37,bridge_name='br-int',has_traffic_filtering=True,id=1baf23f4-ec91-408d-b406-37757eba550e,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1baf23f4-ec')
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.428 254096 DEBUG oslo_concurrency.lockutils [None req-4366d4e3-4217-4ac9-b1a7-a6ce0e75a6b0 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Acquiring lock "2c116477-6534-4f01-a0bb-ebdd9e027e05" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.429 254096 DEBUG oslo_concurrency.lockutils [None req-4366d4e3-4217-4ac9-b1a7-a6ce0e75a6b0 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Lock "2c116477-6534-4f01-a0bb-ebdd9e027e05" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.430 254096 DEBUG oslo_concurrency.lockutils [None req-4366d4e3-4217-4ac9-b1a7-a6ce0e75a6b0 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Acquiring lock "2c116477-6534-4f01-a0bb-ebdd9e027e05-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.430 254096 DEBUG oslo_concurrency.lockutils [None req-4366d4e3-4217-4ac9-b1a7-a6ce0e75a6b0 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Lock "2c116477-6534-4f01-a0bb-ebdd9e027e05-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.430 254096 DEBUG oslo_concurrency.lockutils [None req-4366d4e3-4217-4ac9-b1a7-a6ce0e75a6b0 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Lock "2c116477-6534-4f01-a0bb-ebdd9e027e05-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.432 254096 INFO nova.compute.manager [None req-4366d4e3-4217-4ac9-b1a7-a6ce0e75a6b0 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Terminating instance
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.433 254096 DEBUG nova.compute.manager [None req-4366d4e3-4217-4ac9-b1a7-a6ce0e75a6b0 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.484 254096 DEBUG nova.virt.libvirt.driver [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.484 254096 DEBUG nova.virt.libvirt.driver [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.485 254096 DEBUG nova.virt.libvirt.driver [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] No VIF found with MAC fa:16:3e:11:f5:37, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.485 254096 INFO nova.virt.libvirt.driver [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Using config drive
Nov 25 16:34:59 compute-0 podman[301856]: 2025-11-25 16:34:59.500526137 +0000 UTC m=+0.058564927 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 16:34:59 compute-0 podman[301857]: 2025-11-25 16:34:59.506433128 +0000 UTC m=+0.060465139 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent)
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.516 254096 DEBUG nova.storage.rbd_utils [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image 763ace49-fa88-443d-9733-e919b6f86fab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:34:59 compute-0 kernel: tap8be28993-ac (unregistering): left promiscuous mode
Nov 25 16:34:59 compute-0 NetworkManager[48891]: <info>  [1764088499.5245] device (tap8be28993-ac): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.526 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.527 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.527 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.527 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:34:59 compute-0 ovn_controller[153477]: 2025-11-25T16:34:59Z|00310|binding|INFO|Releasing lport 8be28993-accf-4bd5-8f8d-f1e94d84aca3 from this chassis (sb_readonly=0)
Nov 25 16:34:59 compute-0 ovn_controller[153477]: 2025-11-25T16:34:59Z|00311|binding|INFO|Setting lport 8be28993-accf-4bd5-8f8d-f1e94d84aca3 down in Southbound
Nov 25 16:34:59 compute-0 ovn_controller[153477]: 2025-11-25T16:34:59Z|00312|binding|INFO|Removing iface tap8be28993-ac ovn-installed in OVS
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.530 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:59.535 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:ae:b5 10.100.0.164'], port_security=['fa:16:3e:02:ae:b5 10.100.0.164'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.164/24', 'neutron:device_id': '2c116477-6534-4f01-a0bb-ebdd9e027e05', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3650bd5e-702b-4bb4-ae2f-2588a2cf70df', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3355a3ac2d6d4d5ea7f590f1e2ae3492', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fc762173-4fc8-48f6-b331-e2cedac5f655', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6b490834-1ae6-4bac-88ff-7ec36dfeced3, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=8be28993-accf-4bd5-8f8d-f1e94d84aca3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:34:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:59.537 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 8be28993-accf-4bd5-8f8d-f1e94d84aca3 in datapath 3650bd5e-702b-4bb4-ae2f-2588a2cf70df unbound from our chassis
Nov 25 16:34:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:59.539 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3650bd5e-702b-4bb4-ae2f-2588a2cf70df
Nov 25 16:34:59 compute-0 kernel: tapec7e033b-7a (unregistering): left promiscuous mode
Nov 25 16:34:59 compute-0 NetworkManager[48891]: <info>  [1764088499.5471] device (tapec7e033b-7a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.554 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.556 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.557 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.557 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.557 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.557 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:34:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:59.557 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[16e67188-65eb-4f4f-b0a6-3e86deab1946]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:59 compute-0 kernel: tap71742b12-1a (unregistering): left promiscuous mode
Nov 25 16:34:59 compute-0 NetworkManager[48891]: <info>  [1764088499.5651] device (tap71742b12-1a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:34:59 compute-0 ovn_controller[153477]: 2025-11-25T16:34:59Z|00313|binding|INFO|Releasing lport ec7e033b-7a98-44cb-9aab-c96b985fd4a7 from this chassis (sb_readonly=0)
Nov 25 16:34:59 compute-0 ovn_controller[153477]: 2025-11-25T16:34:59Z|00314|binding|INFO|Setting lport ec7e033b-7a98-44cb-9aab-c96b985fd4a7 down in Southbound
Nov 25 16:34:59 compute-0 ovn_controller[153477]: 2025-11-25T16:34:59Z|00315|binding|INFO|Removing iface tapec7e033b-7a ovn-installed in OVS
Nov 25 16:34:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:59.576 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:4f:1d 10.100.1.123'], port_security=['fa:16:3e:de:4f:1d 10.100.1.123'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.123/24', 'neutron:device_id': '2c116477-6534-4f01-a0bb-ebdd9e027e05', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cb7431f6-9c64-49bf-bc1e-69a488346002', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3355a3ac2d6d4d5ea7f590f1e2ae3492', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fc762173-4fc8-48f6-b331-e2cedac5f655', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5202c3a2-9dfb-4f2d-bf86-4ef3d95e542c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=ec7e033b-7a98-44cb-9aab-c96b985fd4a7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:34:59 compute-0 podman[301858]: 2025-11-25 16:34:59.587860663 +0000 UTC m=+0.140513348 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.588 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:59.599 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[b65d56d0-1d6c-4041-bd4b-bc4cbc5b8730]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:59 compute-0 ovn_controller[153477]: 2025-11-25T16:34:59Z|00316|binding|INFO|Releasing lport 71742b12-1a6c-4d30-a9c9-522fc9eb4a4a from this chassis (sb_readonly=0)
Nov 25 16:34:59 compute-0 ovn_controller[153477]: 2025-11-25T16:34:59Z|00317|binding|INFO|Setting lport 71742b12-1a6c-4d30-a9c9-522fc9eb4a4a down in Southbound
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.602 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:59.602 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c548e035-8260-4b68-aee8-5f403dc1b22a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:59 compute-0 ovn_controller[153477]: 2025-11-25T16:34:59Z|00318|binding|INFO|Removing iface tap71742b12-1a ovn-installed in OVS
Nov 25 16:34:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:59.607 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:9d:4a 10.100.0.104'], port_security=['fa:16:3e:58:9d:4a 10.100.0.104'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.104/24', 'neutron:device_id': '2c116477-6534-4f01-a0bb-ebdd9e027e05', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3650bd5e-702b-4bb4-ae2f-2588a2cf70df', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3355a3ac2d6d4d5ea7f590f1e2ae3492', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fc762173-4fc8-48f6-b331-e2cedac5f655', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6b490834-1ae6-4bac-88ff-7ec36dfeced3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=71742b12-1a6c-4d30-a9c9-522fc9eb4a4a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:34:59 compute-0 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d00000028.scope: Deactivated successfully.
Nov 25 16:34:59 compute-0 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d00000028.scope: Consumed 3.794s CPU time.
Nov 25 16:34:59 compute-0 systemd-machined[216343]: Machine qemu-47-instance-00000028 terminated.
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.615 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:59.629 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d3032138-2f9b-4ef7-b140-8143d956ff5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:59.650 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0c97d27a-ebc1-4f9b-9416-7f75c436cf78]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3650bd5e-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:db:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 8, 'rx_bytes': 532, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 8, 'rx_bytes': 532, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495069, 'reachable_time': 20096, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301965, 'error': None, 'target': 'ovnmeta-3650bd5e-702b-4bb4-ae2f-2588a2cf70df', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:59.665 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b72f23ec-68dd-4e89-b39c-cb48787f451d]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.255'], ['IFA_LABEL', 'tap3650bd5e-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 495085, 'tstamp': 495085}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301967, 'error': None, 'target': 'ovnmeta-3650bd5e-702b-4bb4-ae2f-2588a2cf70df', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3650bd5e-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 495089, 'tstamp': 495089}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301967, 'error': None, 'target': 'ovnmeta-3650bd5e-702b-4bb4-ae2f-2588a2cf70df', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:59.667 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3650bd5e-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.669 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:59 compute-0 NetworkManager[48891]: <info>  [1764088499.6816] manager: (tapec7e033b-7a): new Tun device (/org/freedesktop/NetworkManager/Devices/144)
Nov 25 16:34:59 compute-0 NetworkManager[48891]: <info>  [1764088499.6905] manager: (tap71742b12-1a): new Tun device (/org/freedesktop/NetworkManager/Devices/145)
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.696 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:59.697 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3650bd5e-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:59.698 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:34:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:59.698 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3650bd5e-70, col_values=(('external_ids', {'iface-id': '41f74651-7af4-42a4-9a35-56f962dcaceb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:59.698 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:34:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:59.699 163338 INFO neutron.agent.ovn.metadata.agent [-] Port ec7e033b-7a98-44cb-9aab-c96b985fd4a7 in datapath cb7431f6-9c64-49bf-bc1e-69a488346002 unbound from our chassis
Nov 25 16:34:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:59.700 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cb7431f6-9c64-49bf-bc1e-69a488346002, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:34:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:59.702 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e22aee28-91e2-423a-8b64-d383df5bf2f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:59.703 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cb7431f6-9c64-49bf-bc1e-69a488346002 namespace which is not needed anymore
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.713 254096 INFO nova.virt.libvirt.driver [-] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Instance destroyed successfully.
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.713 254096 DEBUG nova.objects.instance [None req-4366d4e3-4217-4ac9-b1a7-a6ce0e75a6b0 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Lazy-loading 'resources' on Instance uuid 2c116477-6534-4f01-a0bb-ebdd9e027e05 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.730 254096 DEBUG nova.virt.libvirt.vif [None req-4366d4e3-4217-4ac9-b1a7-a6ce0e75a6b0 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:34:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1447884785',display_name='tempest-ServersTestMultiNic-server-1447884785',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1447884785',id=40,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:34:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3355a3ac2d6d4d5ea7f590f1e2ae3492',ramdisk_id='',reservation_id='r-255xlh5a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-809789765',owner_user_name='tempest-ServersTestMultiNic-809789765-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:34:56Z,user_data=None,user_id='70f122fae9644012973ae5b56c1d459b',uuid=2c116477-6534-4f01-a0bb-ebdd9e027e05,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8be28993-accf-4bd5-8f8d-f1e94d84aca3", "address": "fa:16:3e:02:ae:b5", "network": {"id": "3650bd5e-702b-4bb4-ae2f-2588a2cf70df", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-211018014", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.164", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8be28993-ac", "ovs_interfaceid": "8be28993-accf-4bd5-8f8d-f1e94d84aca3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.730 254096 DEBUG nova.network.os_vif_util [None req-4366d4e3-4217-4ac9-b1a7-a6ce0e75a6b0 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Converting VIF {"id": "8be28993-accf-4bd5-8f8d-f1e94d84aca3", "address": "fa:16:3e:02:ae:b5", "network": {"id": "3650bd5e-702b-4bb4-ae2f-2588a2cf70df", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-211018014", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.164", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8be28993-ac", "ovs_interfaceid": "8be28993-accf-4bd5-8f8d-f1e94d84aca3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.731 254096 DEBUG nova.network.os_vif_util [None req-4366d4e3-4217-4ac9-b1a7-a6ce0e75a6b0 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:ae:b5,bridge_name='br-int',has_traffic_filtering=True,id=8be28993-accf-4bd5-8f8d-f1e94d84aca3,network=Network(3650bd5e-702b-4bb4-ae2f-2588a2cf70df),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8be28993-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.731 254096 DEBUG os_vif [None req-4366d4e3-4217-4ac9-b1a7-a6ce0e75a6b0 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:ae:b5,bridge_name='br-int',has_traffic_filtering=True,id=8be28993-accf-4bd5-8f8d-f1e94d84aca3,network=Network(3650bd5e-702b-4bb4-ae2f-2588a2cf70df),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8be28993-ac') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.733 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.734 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8be28993-ac, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.735 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.740 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.744 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.746 254096 INFO os_vif [None req-4366d4e3-4217-4ac9-b1a7-a6ce0e75a6b0 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:ae:b5,bridge_name='br-int',has_traffic_filtering=True,id=8be28993-accf-4bd5-8f8d-f1e94d84aca3,network=Network(3650bd5e-702b-4bb4-ae2f-2588a2cf70df),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8be28993-ac')
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.747 254096 DEBUG nova.virt.libvirt.vif [None req-4366d4e3-4217-4ac9-b1a7-a6ce0e75a6b0 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:34:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1447884785',display_name='tempest-ServersTestMultiNic-server-1447884785',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1447884785',id=40,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:34:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3355a3ac2d6d4d5ea7f590f1e2ae3492',ramdisk_id='',reservation_id='r-255xlh5a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-809789765',owner_user_name='tempest-ServersTestMultiNic-809789765-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:34:56Z,user_data=None,user_id='70f122fae9644012973ae5b56c1d459b',uuid=2c116477-6534-4f01-a0bb-ebdd9e027e05,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ec7e033b-7a98-44cb-9aab-c96b985fd4a7", "address": "fa:16:3e:de:4f:1d", "network": {"id": "cb7431f6-9c64-49bf-bc1e-69a488346002", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1673164746", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec7e033b-7a", "ovs_interfaceid": "ec7e033b-7a98-44cb-9aab-c96b985fd4a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.747 254096 DEBUG nova.network.os_vif_util [None req-4366d4e3-4217-4ac9-b1a7-a6ce0e75a6b0 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Converting VIF {"id": "ec7e033b-7a98-44cb-9aab-c96b985fd4a7", "address": "fa:16:3e:de:4f:1d", "network": {"id": "cb7431f6-9c64-49bf-bc1e-69a488346002", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1673164746", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec7e033b-7a", "ovs_interfaceid": "ec7e033b-7a98-44cb-9aab-c96b985fd4a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.748 254096 DEBUG nova.network.os_vif_util [None req-4366d4e3-4217-4ac9-b1a7-a6ce0e75a6b0 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:4f:1d,bridge_name='br-int',has_traffic_filtering=True,id=ec7e033b-7a98-44cb-9aab-c96b985fd4a7,network=Network(cb7431f6-9c64-49bf-bc1e-69a488346002),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec7e033b-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.748 254096 DEBUG os_vif [None req-4366d4e3-4217-4ac9-b1a7-a6ce0e75a6b0 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:4f:1d,bridge_name='br-int',has_traffic_filtering=True,id=ec7e033b-7a98-44cb-9aab-c96b985fd4a7,network=Network(cb7431f6-9c64-49bf-bc1e-69a488346002),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec7e033b-7a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.749 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.749 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec7e033b-7a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.750 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.755 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.761 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.764 254096 INFO os_vif [None req-4366d4e3-4217-4ac9-b1a7-a6ce0e75a6b0 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:4f:1d,bridge_name='br-int',has_traffic_filtering=True,id=ec7e033b-7a98-44cb-9aab-c96b985fd4a7,network=Network(cb7431f6-9c64-49bf-bc1e-69a488346002),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec7e033b-7a')
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.765 254096 DEBUG nova.virt.libvirt.vif [None req-4366d4e3-4217-4ac9-b1a7-a6ce0e75a6b0 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:34:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1447884785',display_name='tempest-ServersTestMultiNic-server-1447884785',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1447884785',id=40,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:34:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3355a3ac2d6d4d5ea7f590f1e2ae3492',ramdisk_id='',reservation_id='r-255xlh5a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-809789765',owner_user_name='tempest-ServersTestMultiNic-809789765-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:34:56Z,user_data=None,user_id='70f122fae9644012973ae5b56c1d459b',uuid=2c116477-6534-4f01-a0bb-ebdd9e027e05,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "71742b12-1a6c-4d30-a9c9-522fc9eb4a4a", "address": "fa:16:3e:58:9d:4a", "network": {"id": "3650bd5e-702b-4bb4-ae2f-2588a2cf70df", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-211018014", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.104", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71742b12-1a", "ovs_interfaceid": "71742b12-1a6c-4d30-a9c9-522fc9eb4a4a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.765 254096 DEBUG nova.network.os_vif_util [None req-4366d4e3-4217-4ac9-b1a7-a6ce0e75a6b0 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Converting VIF {"id": "71742b12-1a6c-4d30-a9c9-522fc9eb4a4a", "address": "fa:16:3e:58:9d:4a", "network": {"id": "3650bd5e-702b-4bb4-ae2f-2588a2cf70df", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-211018014", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.104", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71742b12-1a", "ovs_interfaceid": "71742b12-1a6c-4d30-a9c9-522fc9eb4a4a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.766 254096 DEBUG nova.network.os_vif_util [None req-4366d4e3-4217-4ac9-b1a7-a6ce0e75a6b0 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:9d:4a,bridge_name='br-int',has_traffic_filtering=True,id=71742b12-1a6c-4d30-a9c9-522fc9eb4a4a,network=Network(3650bd5e-702b-4bb4-ae2f-2588a2cf70df),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71742b12-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.766 254096 DEBUG os_vif [None req-4366d4e3-4217-4ac9-b1a7-a6ce0e75a6b0 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:9d:4a,bridge_name='br-int',has_traffic_filtering=True,id=71742b12-1a6c-4d30-a9c9-522fc9eb4a4a,network=Network(3650bd5e-702b-4bb4-ae2f-2588a2cf70df),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71742b12-1a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.767 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.767 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap71742b12-1a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.769 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.772 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.774 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.776 254096 INFO os_vif [None req-4366d4e3-4217-4ac9-b1a7-a6ce0e75a6b0 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:9d:4a,bridge_name='br-int',has_traffic_filtering=True,id=71742b12-1a6c-4d30-a9c9-522fc9eb4a4a,network=Network(3650bd5e-702b-4bb4-ae2f-2588a2cf70df),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71742b12-1a')
Nov 25 16:34:59 compute-0 neutron-haproxy-ovnmeta-cb7431f6-9c64-49bf-bc1e-69a488346002[301583]: [NOTICE]   (301587) : haproxy version is 2.8.14-c23fe91
Nov 25 16:34:59 compute-0 neutron-haproxy-ovnmeta-cb7431f6-9c64-49bf-bc1e-69a488346002[301583]: [NOTICE]   (301587) : path to executable is /usr/sbin/haproxy
Nov 25 16:34:59 compute-0 neutron-haproxy-ovnmeta-cb7431f6-9c64-49bf-bc1e-69a488346002[301583]: [WARNING]  (301587) : Exiting Master process...
Nov 25 16:34:59 compute-0 neutron-haproxy-ovnmeta-cb7431f6-9c64-49bf-bc1e-69a488346002[301583]: [ALERT]    (301587) : Current worker (301589) exited with code 143 (Terminated)
Nov 25 16:34:59 compute-0 neutron-haproxy-ovnmeta-cb7431f6-9c64-49bf-bc1e-69a488346002[301583]: [WARNING]  (301587) : All workers exited. Exiting... (0)
Nov 25 16:34:59 compute-0 systemd[1]: libpod-f211892ead3e68a55939a3afc60bc5e07ce450a6fa174d2511b5812a74a4cdd0.scope: Deactivated successfully.
Nov 25 16:34:59 compute-0 podman[302049]: 2025-11-25 16:34:59.844256368 +0000 UTC m=+0.047949620 container died f211892ead3e68a55939a3afc60bc5e07ce450a6fa174d2511b5812a74a4cdd0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-cb7431f6-9c64-49bf-bc1e-69a488346002, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 16:34:59 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f211892ead3e68a55939a3afc60bc5e07ce450a6fa174d2511b5812a74a4cdd0-userdata-shm.mount: Deactivated successfully.
Nov 25 16:34:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-963faf632ca6fcea1ee8689b6fe13a9cc3ce4c88f16a90c5f45fa605181d6050-merged.mount: Deactivated successfully.
Nov 25 16:34:59 compute-0 podman[302049]: 2025-11-25 16:34:59.886723528 +0000 UTC m=+0.090416780 container cleanup f211892ead3e68a55939a3afc60bc5e07ce450a6fa174d2511b5812a74a4cdd0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-cb7431f6-9c64-49bf-bc1e-69a488346002, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 16:34:59 compute-0 systemd[1]: libpod-conmon-f211892ead3e68a55939a3afc60bc5e07ce450a6fa174d2511b5812a74a4cdd0.scope: Deactivated successfully.
Nov 25 16:34:59 compute-0 podman[302089]: 2025-11-25 16:34:59.954622518 +0000 UTC m=+0.047966771 container remove f211892ead3e68a55939a3afc60bc5e07ce450a6fa174d2511b5812a74a4cdd0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-cb7431f6-9c64-49bf-bc1e-69a488346002, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:34:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:59.960 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7932459e-358e-46ce-885a-cd865b4fa981]: (4, ('Tue Nov 25 04:34:59 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-cb7431f6-9c64-49bf-bc1e-69a488346002 (f211892ead3e68a55939a3afc60bc5e07ce450a6fa174d2511b5812a74a4cdd0)\nf211892ead3e68a55939a3afc60bc5e07ce450a6fa174d2511b5812a74a4cdd0\nTue Nov 25 04:34:59 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cb7431f6-9c64-49bf-bc1e-69a488346002 (f211892ead3e68a55939a3afc60bc5e07ce450a6fa174d2511b5812a74a4cdd0)\nf211892ead3e68a55939a3afc60bc5e07ce450a6fa174d2511b5812a74a4cdd0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:59.962 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4676c24f-c336-4df1-b313-69a8d35665dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:34:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:59.963 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcb7431f6-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.964 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:59 compute-0 kernel: tapcb7431f6-90: left promiscuous mode
Nov 25 16:34:59 compute-0 nova_compute[254092]: 2025-11-25 16:34:59.985 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:34:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:34:59.989 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f878bbdc-21fc-4e8d-b1c6-9546a2ee7c00]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:00.003 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ca9fde5b-35e2-46b5-8e84-e6258da9823f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:00.005 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2b97865e-2e7f-4bf9-aaa6-e699ed10f5c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:00.021 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e06f16fe-ac0c-409d-ab2f-40688ca11ca8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495166, 'reachable_time': 34309, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302108, 'error': None, 'target': 'ovnmeta-cb7431f6-9c64-49bf-bc1e-69a488346002', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:00 compute-0 systemd[1]: run-netns-ovnmeta\x2dcb7431f6\x2d9c64\x2d49bf\x2dbc1e\x2d69a488346002.mount: Deactivated successfully.
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:00.025 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cb7431f6-9c64-49bf-bc1e-69a488346002 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:00.025 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[1a4b8716-477d-477c-ad52-ee9e3ff40700]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:00.026 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 71742b12-1a6c-4d30-a9c9-522fc9eb4a4a in datapath 3650bd5e-702b-4bb4-ae2f-2588a2cf70df unbound from our chassis
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:00.027 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3650bd5e-702b-4bb4-ae2f-2588a2cf70df, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:00.028 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[82a8d91d-d7ab-4371-8c8b-13bc5bb6db8e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:00.028 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3650bd5e-702b-4bb4-ae2f-2588a2cf70df namespace which is not needed anymore
Nov 25 16:35:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:35:00 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2779608707' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:35:00 compute-0 nova_compute[254092]: 2025-11-25 16:35:00.109 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:35:00 compute-0 nova_compute[254092]: 2025-11-25 16:35:00.156 254096 INFO nova.virt.libvirt.driver [None req-4366d4e3-4217-4ac9-b1a7-a6ce0e75a6b0 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Deleting instance files /var/lib/nova/instances/2c116477-6534-4f01-a0bb-ebdd9e027e05_del
Nov 25 16:35:00 compute-0 nova_compute[254092]: 2025-11-25 16:35:00.157 254096 INFO nova.virt.libvirt.driver [None req-4366d4e3-4217-4ac9-b1a7-a6ce0e75a6b0 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Deletion of /var/lib/nova/instances/2c116477-6534-4f01-a0bb-ebdd9e027e05_del complete
Nov 25 16:35:00 compute-0 neutron-haproxy-ovnmeta-3650bd5e-702b-4bb4-ae2f-2588a2cf70df[301503]: [NOTICE]   (301509) : haproxy version is 2.8.14-c23fe91
Nov 25 16:35:00 compute-0 neutron-haproxy-ovnmeta-3650bd5e-702b-4bb4-ae2f-2588a2cf70df[301503]: [NOTICE]   (301509) : path to executable is /usr/sbin/haproxy
Nov 25 16:35:00 compute-0 neutron-haproxy-ovnmeta-3650bd5e-702b-4bb4-ae2f-2588a2cf70df[301503]: [WARNING]  (301509) : Exiting Master process...
Nov 25 16:35:00 compute-0 neutron-haproxy-ovnmeta-3650bd5e-702b-4bb4-ae2f-2588a2cf70df[301503]: [ALERT]    (301509) : Current worker (301511) exited with code 143 (Terminated)
Nov 25 16:35:00 compute-0 neutron-haproxy-ovnmeta-3650bd5e-702b-4bb4-ae2f-2588a2cf70df[301503]: [WARNING]  (301509) : All workers exited. Exiting... (0)
Nov 25 16:35:00 compute-0 systemd[1]: libpod-0ceee0d6b061d7e756b547ed5cff30773194d71376aba3a00996fb25f4cf1339.scope: Deactivated successfully.
Nov 25 16:35:00 compute-0 podman[302128]: 2025-11-25 16:35:00.173923818 +0000 UTC m=+0.058782313 container died 0ceee0d6b061d7e756b547ed5cff30773194d71376aba3a00996fb25f4cf1339 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3650bd5e-702b-4bb4-ae2f-2588a2cf70df, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 16:35:00 compute-0 nova_compute[254092]: 2025-11-25 16:35:00.190 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000029 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:35:00 compute-0 nova_compute[254092]: 2025-11-25 16:35:00.191 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000029 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:35:00 compute-0 nova_compute[254092]: 2025-11-25 16:35:00.195 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000002a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:35:00 compute-0 nova_compute[254092]: 2025-11-25 16:35:00.195 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000002a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:35:00 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0ceee0d6b061d7e756b547ed5cff30773194d71376aba3a00996fb25f4cf1339-userdata-shm.mount: Deactivated successfully.
Nov 25 16:35:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-7fb6fd336d3e9ca922b4c48733d46353a72312807a66307fe15cec204343db48-merged.mount: Deactivated successfully.
Nov 25 16:35:00 compute-0 podman[302128]: 2025-11-25 16:35:00.20387525 +0000 UTC m=+0.088733735 container cleanup 0ceee0d6b061d7e756b547ed5cff30773194d71376aba3a00996fb25f4cf1339 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3650bd5e-702b-4bb4-ae2f-2588a2cf70df, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 16:35:00 compute-0 nova_compute[254092]: 2025-11-25 16:35:00.207 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000025 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:35:00 compute-0 nova_compute[254092]: 2025-11-25 16:35:00.207 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000025 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:35:00 compute-0 nova_compute[254092]: 2025-11-25 16:35:00.211 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000028 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:35:00 compute-0 nova_compute[254092]: 2025-11-25 16:35:00.212 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000028 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:35:00 compute-0 nova_compute[254092]: 2025-11-25 16:35:00.212 254096 INFO nova.compute.manager [None req-4366d4e3-4217-4ac9-b1a7-a6ce0e75a6b0 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Took 0.78 seconds to destroy the instance on the hypervisor.
Nov 25 16:35:00 compute-0 nova_compute[254092]: 2025-11-25 16:35:00.213 254096 DEBUG oslo.service.loopingcall [None req-4366d4e3-4217-4ac9-b1a7-a6ce0e75a6b0 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:35:00 compute-0 nova_compute[254092]: 2025-11-25 16:35:00.213 254096 DEBUG nova.compute.manager [-] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:35:00 compute-0 nova_compute[254092]: 2025-11-25 16:35:00.213 254096 DEBUG nova.network.neutron [-] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:35:00 compute-0 systemd[1]: libpod-conmon-0ceee0d6b061d7e756b547ed5cff30773194d71376aba3a00996fb25f4cf1339.scope: Deactivated successfully.
Nov 25 16:35:00 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2174895987' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:35:00 compute-0 ceph-mon[74985]: pgmap v1433: 321 pgs: 321 active+clean; 234 MiB data, 467 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.7 MiB/s wr, 184 op/s
Nov 25 16:35:00 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2779608707' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:35:00 compute-0 podman[302159]: 2025-11-25 16:35:00.284724809 +0000 UTC m=+0.050633423 container remove 0ceee0d6b061d7e756b547ed5cff30773194d71376aba3a00996fb25f4cf1339 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3650bd5e-702b-4bb4-ae2f-2588a2cf70df, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:00.291 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[819c6863-beb8-4de5-ae09-e764411dc2fb]: (4, ('Tue Nov 25 04:35:00 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3650bd5e-702b-4bb4-ae2f-2588a2cf70df (0ceee0d6b061d7e756b547ed5cff30773194d71376aba3a00996fb25f4cf1339)\n0ceee0d6b061d7e756b547ed5cff30773194d71376aba3a00996fb25f4cf1339\nTue Nov 25 04:35:00 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3650bd5e-702b-4bb4-ae2f-2588a2cf70df (0ceee0d6b061d7e756b547ed5cff30773194d71376aba3a00996fb25f4cf1339)\n0ceee0d6b061d7e756b547ed5cff30773194d71376aba3a00996fb25f4cf1339\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:00.293 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8341a156-e43f-4dd9-bcd5-9962cbc29229]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:00.294 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3650bd5e-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:00 compute-0 nova_compute[254092]: 2025-11-25 16:35:00.296 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:00 compute-0 kernel: tap3650bd5e-70: left promiscuous mode
Nov 25 16:35:00 compute-0 nova_compute[254092]: 2025-11-25 16:35:00.312 254096 DEBUG nova.network.neutron [req-4281446f-830c-4437-8685-3a4395138ee7 req-8b4ec9f9-fbe7-49b0-940e-6a321f17f8fe a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Updated VIF entry in instance network info cache for port 1baf23f4-ec91-408d-b406-37757eba550e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:35:00 compute-0 nova_compute[254092]: 2025-11-25 16:35:00.312 254096 DEBUG nova.network.neutron [req-4281446f-830c-4437-8685-3a4395138ee7 req-8b4ec9f9-fbe7-49b0-940e-6a321f17f8fe a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Updating instance_info_cache with network_info: [{"id": "1baf23f4-ec91-408d-b406-37757eba550e", "address": "fa:16:3e:11:f5:37", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1baf23f4-ec", "ovs_interfaceid": "1baf23f4-ec91-408d-b406-37757eba550e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:35:00 compute-0 nova_compute[254092]: 2025-11-25 16:35:00.316 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:00.318 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[296a1d49-3c69-4033-adff-f98f947e4119]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:00.331 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a1ca3742-4598-4694-a200-b63e4754da30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:00.333 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0159e63b-babe-4faf-a93f-38d3387ae912]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:00 compute-0 nova_compute[254092]: 2025-11-25 16:35:00.334 254096 DEBUG oslo_concurrency.lockutils [req-4281446f-830c-4437-8685-3a4395138ee7 req-8b4ec9f9-fbe7-49b0-940e-6a321f17f8fe a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-763ace49-fa88-443d-9733-e919b6f86fab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:35:00 compute-0 nova_compute[254092]: 2025-11-25 16:35:00.351 254096 INFO nova.virt.libvirt.driver [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Creating config drive at /var/lib/nova/instances/763ace49-fa88-443d-9733-e919b6f86fab/disk.config
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:00.352 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[20ab84ce-9634-4546-86e8-e0020edd8f08]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495060, 'reachable_time': 24494, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302175, 'error': None, 'target': 'ovnmeta-3650bd5e-702b-4bb4-ae2f-2588a2cf70df', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:00.354 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3650bd5e-702b-4bb4-ae2f-2588a2cf70df deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:00.354 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[32e6ce31-0510-4651-a0d3-4bbf3e5dfc76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:00 compute-0 nova_compute[254092]: 2025-11-25 16:35:00.357 254096 DEBUG oslo_concurrency.processutils [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/763ace49-fa88-443d-9733-e919b6f86fab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph88hqg88 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:35:00 compute-0 nova_compute[254092]: 2025-11-25 16:35:00.460 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:35:00 compute-0 nova_compute[254092]: 2025-11-25 16:35:00.461 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3788MB free_disk=59.88803482055664GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 16:35:00 compute-0 nova_compute[254092]: 2025-11-25 16:35:00.461 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:00 compute-0 nova_compute[254092]: 2025-11-25 16:35:00.461 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:00 compute-0 nova_compute[254092]: 2025-11-25 16:35:00.488 254096 DEBUG oslo_concurrency.processutils [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/763ace49-fa88-443d-9733-e919b6f86fab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph88hqg88" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:35:00 compute-0 nova_compute[254092]: 2025-11-25 16:35:00.507 254096 DEBUG nova.storage.rbd_utils [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image 763ace49-fa88-443d-9733-e919b6f86fab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:35:00 compute-0 nova_compute[254092]: 2025-11-25 16:35:00.510 254096 DEBUG oslo_concurrency.processutils [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/763ace49-fa88-443d-9733-e919b6f86fab/disk.config 763ace49-fa88-443d-9733-e919b6f86fab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:35:00 compute-0 nova_compute[254092]: 2025-11-25 16:35:00.582 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance a3d5d205-98f0-4820-a96c-7f3e59d0cdd9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:35:00 compute-0 nova_compute[254092]: 2025-11-25 16:35:00.582 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 2c116477-6534-4f01-a0bb-ebdd9e027e05 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:35:00 compute-0 nova_compute[254092]: 2025-11-25 16:35:00.582 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 270ad7f6-74d4-4c29-9856-77768f170789 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:35:00 compute-0 nova_compute[254092]: 2025-11-25 16:35:00.582 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 763ace49-fa88-443d-9733-e919b6f86fab actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:35:00 compute-0 nova_compute[254092]: 2025-11-25 16:35:00.583 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 16:35:00 compute-0 nova_compute[254092]: 2025-11-25 16:35:00.583 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 16:35:00 compute-0 nova_compute[254092]: 2025-11-25 16:35:00.630 254096 DEBUG oslo_concurrency.processutils [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/763ace49-fa88-443d-9733-e919b6f86fab/disk.config 763ace49-fa88-443d-9733-e919b6f86fab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:35:00 compute-0 nova_compute[254092]: 2025-11-25 16:35:00.631 254096 INFO nova.virt.libvirt.driver [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Deleting local config drive /var/lib/nova/instances/763ace49-fa88-443d-9733-e919b6f86fab/disk.config because it was imported into RBD.
Nov 25 16:35:00 compute-0 kernel: tap1baf23f4-ec: entered promiscuous mode
Nov 25 16:35:00 compute-0 ovn_controller[153477]: 2025-11-25T16:35:00Z|00319|binding|INFO|Claiming lport 1baf23f4-ec91-408d-b406-37757eba550e for this chassis.
Nov 25 16:35:00 compute-0 ovn_controller[153477]: 2025-11-25T16:35:00Z|00320|binding|INFO|1baf23f4-ec91-408d-b406-37757eba550e: Claiming fa:16:3e:11:f5:37 10.100.0.7
Nov 25 16:35:00 compute-0 nova_compute[254092]: 2025-11-25 16:35:00.673 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:00 compute-0 NetworkManager[48891]: <info>  [1764088500.6743] manager: (tap1baf23f4-ec): new Tun device (/org/freedesktop/NetworkManager/Devices/146)
Nov 25 16:35:00 compute-0 systemd-udevd[301998]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:00.678 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:f5:37 10.100.0.7'], port_security=['fa:16:3e:11:f5:37 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '763ace49-fa88-443d-9733-e919b6f86fab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0816ae24-275c-455e-a549-929f4eb756e7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8bd01e0913564ac783fae350d6861e24', 'neutron:revision_number': '2', 'neutron:security_group_ids': '572f99e3-d1c9-4515-9799-c37da094cf6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=24fed12c-3cf8-4d51-9b64-c89f82ee1964, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1baf23f4-ec91-408d-b406-37757eba550e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:00.679 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1baf23f4-ec91-408d-b406-37757eba550e in datapath 0816ae24-275c-455e-a549-929f4eb756e7 bound to our chassis
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:00.681 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0816ae24-275c-455e-a549-929f4eb756e7
Nov 25 16:35:00 compute-0 ovn_controller[153477]: 2025-11-25T16:35:00Z|00321|binding|INFO|Setting lport 1baf23f4-ec91-408d-b406-37757eba550e ovn-installed in OVS
Nov 25 16:35:00 compute-0 ovn_controller[153477]: 2025-11-25T16:35:00Z|00322|binding|INFO|Setting lport 1baf23f4-ec91-408d-b406-37757eba550e up in Southbound
Nov 25 16:35:00 compute-0 NetworkManager[48891]: <info>  [1764088500.6926] device (tap1baf23f4-ec): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:35:00 compute-0 NetworkManager[48891]: <info>  [1764088500.6937] device (tap1baf23f4-ec): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:00.692 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c5cc3eac-71a2-4146-af06-dc1b8c5be9f5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:00.693 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0816ae24-21 in ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:35:00 compute-0 nova_compute[254092]: 2025-11-25 16:35:00.694 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:00.695 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0816ae24-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:00.695 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ae3bbf90-293a-4d77-a7a7-e1a676c1fbd2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:00.697 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2fcdcc79-f6fc-44f4-a0f5-d8b57668b1c7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:00 compute-0 nova_compute[254092]: 2025-11-25 16:35:00.699 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:00.707 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[8190e5b1-1a0a-4311-b144-b8270cf2d8d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:00 compute-0 systemd-machined[216343]: New machine qemu-48-instance-0000002a.
Nov 25 16:35:00 compute-0 systemd[1]: Started Virtual Machine qemu-48-instance-0000002a.
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:00.731 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[39514fa2-a348-4b1a-bdae-638ba6b8e6f8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:00.760 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d3706630-6597-4159-ba03-3d5eb78f633a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:00.766 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0ad9abff-52a5-4228-8796-c80b1351d872]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:00 compute-0 NetworkManager[48891]: <info>  [1764088500.7699] manager: (tap0816ae24-20): new Veth device (/org/freedesktop/NetworkManager/Devices/147)
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:00.797 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[5b3f6d33-c8f0-4ca6-bd51-fbdde76f6e57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:00.800 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[4cf51df0-6bdb-4413-99f0-c50e68301288]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:00 compute-0 NetworkManager[48891]: <info>  [1764088500.8199] device (tap0816ae24-20): carrier: link connected
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:00.824 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f3cb1fff-fab9-4d83-85ca-c06df345dae3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:00.839 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[da0a7fd0-97c5-4e13-a42f-3e954413032c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0816ae24-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:52:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 94], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495841, 'reachable_time': 41461, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302277, 'error': None, 'target': 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:00.854 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ff25770b-82e2-4de7-9039-9dbc4b695fd0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feca:524c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 495841, 'tstamp': 495841}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302278, 'error': None, 'target': 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:00 compute-0 nova_compute[254092]: 2025-11-25 16:35:00.867 254096 DEBUG nova.network.neutron [req-16c6362c-b63a-4a24-aa6d-068e3f2ef7df req-10dca23f-1dd1-4253-88a4-f0ad121e9f02 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Updated VIF entry in instance network info cache for port 84ab0426-0174-4297-bb2b-e5964e453530. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:35:00 compute-0 nova_compute[254092]: 2025-11-25 16:35:00.868 254096 DEBUG nova.network.neutron [req-16c6362c-b63a-4a24-aa6d-068e3f2ef7df req-10dca23f-1dd1-4253-88a4-f0ad121e9f02 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Updating instance_info_cache with network_info: [{"id": "84ab0426-0174-4297-bb2b-e5964e453530", "address": "fa:16:3e:06:28:4e", "network": {"id": "82742f46-fb6e-443e-a99d-84c5367a4ccd", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-522194148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe7901baa563491c8609089aa4334bf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84ab0426-01", "ovs_interfaceid": "84ab0426-0174-4297-bb2b-e5964e453530", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:35:00 compute-0 systemd[1]: run-netns-ovnmeta\x2d3650bd5e\x2d702b\x2d4bb4\x2dae2f\x2d2588a2cf70df.mount: Deactivated successfully.
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:00.876 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e7aef090-4bf4-4dc4-8b19-c799d907c8a7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0816ae24-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:52:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 94], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495841, 'reachable_time': 41461, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 302279, 'error': None, 'target': 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:00 compute-0 nova_compute[254092]: 2025-11-25 16:35:00.888 254096 DEBUG oslo_concurrency.lockutils [req-16c6362c-b63a-4a24-aa6d-068e3f2ef7df req-10dca23f-1dd1-4253-88a4-f0ad121e9f02 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-270ad7f6-74d4-4c29-9856-77768f170789" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:00.905 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0f25b422-c047-4c6f-9185-d5d8cdaa2ea3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:00.958 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[253e01e2-7c79-48e9-a014-82bc17f722d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:00.959 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0816ae24-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:00.959 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:00.959 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0816ae24-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:00 compute-0 nova_compute[254092]: 2025-11-25 16:35:00.961 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:00 compute-0 kernel: tap0816ae24-20: entered promiscuous mode
Nov 25 16:35:00 compute-0 NetworkManager[48891]: <info>  [1764088500.9629] manager: (tap0816ae24-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/148)
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:00.963 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0816ae24-20, col_values=(('external_ids', {'iface-id': '6381e38a-44c0-40e8-bec6-3ddd886bfc49'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:00 compute-0 ovn_controller[153477]: 2025-11-25T16:35:00Z|00323|binding|INFO|Releasing lport 6381e38a-44c0-40e8-bec6-3ddd886bfc49 from this chassis (sb_readonly=0)
Nov 25 16:35:00 compute-0 nova_compute[254092]: 2025-11-25 16:35:00.964 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:00.965 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0816ae24-275c-455e-a549-929f4eb756e7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0816ae24-275c-455e-a549-929f4eb756e7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:00.965 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2c6b52e7-ef29-4524-9b80-fa7cc0dbaafa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:00.966 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-0816ae24-275c-455e-a549-929f4eb756e7
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/0816ae24-275c-455e-a549-929f4eb756e7.pid.haproxy
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 0816ae24-275c-455e-a549-929f4eb756e7
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:35:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:00.966 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'env', 'PROCESS_TAG=haproxy-0816ae24-275c-455e-a549-929f4eb756e7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0816ae24-275c-455e-a549-929f4eb756e7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:35:00 compute-0 nova_compute[254092]: 2025-11-25 16:35:00.979 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:01 compute-0 nova_compute[254092]: 2025-11-25 16:35:01.066 254096 DEBUG nova.compute.manager [req-ce6a770f-5fbe-4ad8-9b56-c640cd7b7b73 req-be4b8028-1ce5-4891-988e-f3fdd71ba72e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Received event network-vif-unplugged-8be28993-accf-4bd5-8f8d-f1e94d84aca3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:01 compute-0 nova_compute[254092]: 2025-11-25 16:35:01.067 254096 DEBUG oslo_concurrency.lockutils [req-ce6a770f-5fbe-4ad8-9b56-c640cd7b7b73 req-be4b8028-1ce5-4891-988e-f3fdd71ba72e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2c116477-6534-4f01-a0bb-ebdd9e027e05-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:01 compute-0 nova_compute[254092]: 2025-11-25 16:35:01.067 254096 DEBUG oslo_concurrency.lockutils [req-ce6a770f-5fbe-4ad8-9b56-c640cd7b7b73 req-be4b8028-1ce5-4891-988e-f3fdd71ba72e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2c116477-6534-4f01-a0bb-ebdd9e027e05-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:01 compute-0 nova_compute[254092]: 2025-11-25 16:35:01.068 254096 DEBUG oslo_concurrency.lockutils [req-ce6a770f-5fbe-4ad8-9b56-c640cd7b7b73 req-be4b8028-1ce5-4891-988e-f3fdd71ba72e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2c116477-6534-4f01-a0bb-ebdd9e027e05-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:01 compute-0 nova_compute[254092]: 2025-11-25 16:35:01.068 254096 DEBUG nova.compute.manager [req-ce6a770f-5fbe-4ad8-9b56-c640cd7b7b73 req-be4b8028-1ce5-4891-988e-f3fdd71ba72e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] No waiting events found dispatching network-vif-unplugged-8be28993-accf-4bd5-8f8d-f1e94d84aca3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:35:01 compute-0 nova_compute[254092]: 2025-11-25 16:35:01.068 254096 DEBUG nova.compute.manager [req-ce6a770f-5fbe-4ad8-9b56-c640cd7b7b73 req-be4b8028-1ce5-4891-988e-f3fdd71ba72e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Received event network-vif-unplugged-8be28993-accf-4bd5-8f8d-f1e94d84aca3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:35:01 compute-0 nova_compute[254092]: 2025-11-25 16:35:01.068 254096 DEBUG nova.compute.manager [req-ce6a770f-5fbe-4ad8-9b56-c640cd7b7b73 req-be4b8028-1ce5-4891-988e-f3fdd71ba72e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Received event network-vif-plugged-8be28993-accf-4bd5-8f8d-f1e94d84aca3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:01 compute-0 nova_compute[254092]: 2025-11-25 16:35:01.069 254096 DEBUG oslo_concurrency.lockutils [req-ce6a770f-5fbe-4ad8-9b56-c640cd7b7b73 req-be4b8028-1ce5-4891-988e-f3fdd71ba72e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2c116477-6534-4f01-a0bb-ebdd9e027e05-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:01 compute-0 nova_compute[254092]: 2025-11-25 16:35:01.069 254096 DEBUG oslo_concurrency.lockutils [req-ce6a770f-5fbe-4ad8-9b56-c640cd7b7b73 req-be4b8028-1ce5-4891-988e-f3fdd71ba72e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2c116477-6534-4f01-a0bb-ebdd9e027e05-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:01 compute-0 nova_compute[254092]: 2025-11-25 16:35:01.069 254096 DEBUG oslo_concurrency.lockutils [req-ce6a770f-5fbe-4ad8-9b56-c640cd7b7b73 req-be4b8028-1ce5-4891-988e-f3fdd71ba72e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2c116477-6534-4f01-a0bb-ebdd9e027e05-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:01 compute-0 nova_compute[254092]: 2025-11-25 16:35:01.069 254096 DEBUG nova.compute.manager [req-ce6a770f-5fbe-4ad8-9b56-c640cd7b7b73 req-be4b8028-1ce5-4891-988e-f3fdd71ba72e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] No waiting events found dispatching network-vif-plugged-8be28993-accf-4bd5-8f8d-f1e94d84aca3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:35:01 compute-0 nova_compute[254092]: 2025-11-25 16:35:01.070 254096 WARNING nova.compute.manager [req-ce6a770f-5fbe-4ad8-9b56-c640cd7b7b73 req-be4b8028-1ce5-4891-988e-f3fdd71ba72e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Received unexpected event network-vif-plugged-8be28993-accf-4bd5-8f8d-f1e94d84aca3 for instance with vm_state active and task_state deleting.
Nov 25 16:35:01 compute-0 nova_compute[254092]: 2025-11-25 16:35:01.070 254096 DEBUG nova.compute.manager [req-ce6a770f-5fbe-4ad8-9b56-c640cd7b7b73 req-be4b8028-1ce5-4891-988e-f3fdd71ba72e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Received event network-vif-unplugged-ec7e033b-7a98-44cb-9aab-c96b985fd4a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:01 compute-0 nova_compute[254092]: 2025-11-25 16:35:01.070 254096 DEBUG oslo_concurrency.lockutils [req-ce6a770f-5fbe-4ad8-9b56-c640cd7b7b73 req-be4b8028-1ce5-4891-988e-f3fdd71ba72e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2c116477-6534-4f01-a0bb-ebdd9e027e05-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:01 compute-0 nova_compute[254092]: 2025-11-25 16:35:01.070 254096 DEBUG oslo_concurrency.lockutils [req-ce6a770f-5fbe-4ad8-9b56-c640cd7b7b73 req-be4b8028-1ce5-4891-988e-f3fdd71ba72e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2c116477-6534-4f01-a0bb-ebdd9e027e05-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:01 compute-0 nova_compute[254092]: 2025-11-25 16:35:01.071 254096 DEBUG oslo_concurrency.lockutils [req-ce6a770f-5fbe-4ad8-9b56-c640cd7b7b73 req-be4b8028-1ce5-4891-988e-f3fdd71ba72e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2c116477-6534-4f01-a0bb-ebdd9e027e05-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:01 compute-0 nova_compute[254092]: 2025-11-25 16:35:01.071 254096 DEBUG nova.compute.manager [req-ce6a770f-5fbe-4ad8-9b56-c640cd7b7b73 req-be4b8028-1ce5-4891-988e-f3fdd71ba72e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] No waiting events found dispatching network-vif-unplugged-ec7e033b-7a98-44cb-9aab-c96b985fd4a7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:35:01 compute-0 nova_compute[254092]: 2025-11-25 16:35:01.071 254096 DEBUG nova.compute.manager [req-ce6a770f-5fbe-4ad8-9b56-c640cd7b7b73 req-be4b8028-1ce5-4891-988e-f3fdd71ba72e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Received event network-vif-unplugged-ec7e033b-7a98-44cb-9aab-c96b985fd4a7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:35:01 compute-0 nova_compute[254092]: 2025-11-25 16:35:01.146 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088501.1464634, 763ace49-fa88-443d-9733-e919b6f86fab => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:35:01 compute-0 nova_compute[254092]: 2025-11-25 16:35:01.147 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] VM Started (Lifecycle Event)
Nov 25 16:35:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:35:01 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/406426453' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:35:01 compute-0 nova_compute[254092]: 2025-11-25 16:35:01.165 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:35:01 compute-0 nova_compute[254092]: 2025-11-25 16:35:01.170 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088501.1465838, 763ace49-fa88-443d-9733-e919b6f86fab => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:35:01 compute-0 nova_compute[254092]: 2025-11-25 16:35:01.170 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] VM Paused (Lifecycle Event)
Nov 25 16:35:01 compute-0 nova_compute[254092]: 2025-11-25 16:35:01.183 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:35:01 compute-0 nova_compute[254092]: 2025-11-25 16:35:01.188 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:35:01 compute-0 nova_compute[254092]: 2025-11-25 16:35:01.192 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:35:01 compute-0 nova_compute[254092]: 2025-11-25 16:35:01.194 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:35:01 compute-0 nova_compute[254092]: 2025-11-25 16:35:01.206 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:35:01 compute-0 nova_compute[254092]: 2025-11-25 16:35:01.212 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:35:01 compute-0 nova_compute[254092]: 2025-11-25 16:35:01.231 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 16:35:01 compute-0 nova_compute[254092]: 2025-11-25 16:35:01.231 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:01 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/406426453' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:35:01 compute-0 podman[302355]: 2025-11-25 16:35:01.351490265 +0000 UTC m=+0.064739045 container create d635a9f0c343d200a7b799fe3ee0d6929298f3e4aefb4cd9951ee7f919426553 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 25 16:35:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1434: 321 pgs: 321 active+clean; 214 MiB data, 475 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.7 MiB/s wr, 346 op/s
Nov 25 16:35:01 compute-0 systemd[1]: Started libpod-conmon-d635a9f0c343d200a7b799fe3ee0d6929298f3e4aefb4cd9951ee7f919426553.scope.
Nov 25 16:35:01 compute-0 nova_compute[254092]: 2025-11-25 16:35:01.396 254096 DEBUG nova.compute.manager [req-121ccb02-f89b-4780-932f-29191fb9e52e req-4da305bd-1751-4c9b-a9a5-ab334a529de6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Received event network-vif-unplugged-71742b12-1a6c-4d30-a9c9-522fc9eb4a4a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:01 compute-0 nova_compute[254092]: 2025-11-25 16:35:01.398 254096 DEBUG oslo_concurrency.lockutils [req-121ccb02-f89b-4780-932f-29191fb9e52e req-4da305bd-1751-4c9b-a9a5-ab334a529de6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2c116477-6534-4f01-a0bb-ebdd9e027e05-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:01 compute-0 nova_compute[254092]: 2025-11-25 16:35:01.398 254096 DEBUG oslo_concurrency.lockutils [req-121ccb02-f89b-4780-932f-29191fb9e52e req-4da305bd-1751-4c9b-a9a5-ab334a529de6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2c116477-6534-4f01-a0bb-ebdd9e027e05-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:01 compute-0 nova_compute[254092]: 2025-11-25 16:35:01.399 254096 DEBUG oslo_concurrency.lockutils [req-121ccb02-f89b-4780-932f-29191fb9e52e req-4da305bd-1751-4c9b-a9a5-ab334a529de6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2c116477-6534-4f01-a0bb-ebdd9e027e05-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:01 compute-0 nova_compute[254092]: 2025-11-25 16:35:01.399 254096 DEBUG nova.compute.manager [req-121ccb02-f89b-4780-932f-29191fb9e52e req-4da305bd-1751-4c9b-a9a5-ab334a529de6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] No waiting events found dispatching network-vif-unplugged-71742b12-1a6c-4d30-a9c9-522fc9eb4a4a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:35:01 compute-0 nova_compute[254092]: 2025-11-25 16:35:01.400 254096 DEBUG nova.compute.manager [req-121ccb02-f89b-4780-932f-29191fb9e52e req-4da305bd-1751-4c9b-a9a5-ab334a529de6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Received event network-vif-unplugged-71742b12-1a6c-4d30-a9c9-522fc9eb4a4a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:35:01 compute-0 nova_compute[254092]: 2025-11-25 16:35:01.400 254096 DEBUG nova.compute.manager [req-121ccb02-f89b-4780-932f-29191fb9e52e req-4da305bd-1751-4c9b-a9a5-ab334a529de6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Received event network-vif-plugged-71742b12-1a6c-4d30-a9c9-522fc9eb4a4a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:01 compute-0 nova_compute[254092]: 2025-11-25 16:35:01.400 254096 DEBUG oslo_concurrency.lockutils [req-121ccb02-f89b-4780-932f-29191fb9e52e req-4da305bd-1751-4c9b-a9a5-ab334a529de6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2c116477-6534-4f01-a0bb-ebdd9e027e05-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:01 compute-0 nova_compute[254092]: 2025-11-25 16:35:01.401 254096 DEBUG oslo_concurrency.lockutils [req-121ccb02-f89b-4780-932f-29191fb9e52e req-4da305bd-1751-4c9b-a9a5-ab334a529de6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2c116477-6534-4f01-a0bb-ebdd9e027e05-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:01 compute-0 nova_compute[254092]: 2025-11-25 16:35:01.401 254096 DEBUG oslo_concurrency.lockutils [req-121ccb02-f89b-4780-932f-29191fb9e52e req-4da305bd-1751-4c9b-a9a5-ab334a529de6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2c116477-6534-4f01-a0bb-ebdd9e027e05-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:01 compute-0 nova_compute[254092]: 2025-11-25 16:35:01.402 254096 DEBUG nova.compute.manager [req-121ccb02-f89b-4780-932f-29191fb9e52e req-4da305bd-1751-4c9b-a9a5-ab334a529de6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] No waiting events found dispatching network-vif-plugged-71742b12-1a6c-4d30-a9c9-522fc9eb4a4a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:35:01 compute-0 nova_compute[254092]: 2025-11-25 16:35:01.402 254096 WARNING nova.compute.manager [req-121ccb02-f89b-4780-932f-29191fb9e52e req-4da305bd-1751-4c9b-a9a5-ab334a529de6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Received unexpected event network-vif-plugged-71742b12-1a6c-4d30-a9c9-522fc9eb4a4a for instance with vm_state active and task_state deleting.
Nov 25 16:35:01 compute-0 nova_compute[254092]: 2025-11-25 16:35:01.402 254096 DEBUG nova.compute.manager [req-121ccb02-f89b-4780-932f-29191fb9e52e req-4da305bd-1751-4c9b-a9a5-ab334a529de6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Received event network-vif-deleted-71742b12-1a6c-4d30-a9c9-522fc9eb4a4a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:01 compute-0 nova_compute[254092]: 2025-11-25 16:35:01.403 254096 INFO nova.compute.manager [req-121ccb02-f89b-4780-932f-29191fb9e52e req-4da305bd-1751-4c9b-a9a5-ab334a529de6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Neutron deleted interface 71742b12-1a6c-4d30-a9c9-522fc9eb4a4a; detaching it from the instance and deleting it from the info cache
Nov 25 16:35:01 compute-0 nova_compute[254092]: 2025-11-25 16:35:01.403 254096 DEBUG nova.network.neutron [req-121ccb02-f89b-4780-932f-29191fb9e52e req-4da305bd-1751-4c9b-a9a5-ab334a529de6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Updating instance_info_cache with network_info: [{"id": "8be28993-accf-4bd5-8f8d-f1e94d84aca3", "address": "fa:16:3e:02:ae:b5", "network": {"id": "3650bd5e-702b-4bb4-ae2f-2588a2cf70df", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-211018014", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.164", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8be28993-ac", "ovs_interfaceid": "8be28993-accf-4bd5-8f8d-f1e94d84aca3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ec7e033b-7a98-44cb-9aab-c96b985fd4a7", "address": "fa:16:3e:de:4f:1d", "network": {"id": "cb7431f6-9c64-49bf-bc1e-69a488346002", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1673164746", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec7e033b-7a", "ovs_interfaceid": "ec7e033b-7a98-44cb-9aab-c96b985fd4a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:35:01 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:35:01 compute-0 podman[302355]: 2025-11-25 16:35:01.321943405 +0000 UTC m=+0.035192195 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:35:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae550447c10edefc18483709f98d3041cf94f3caea761c726a0f618ee3794a1c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:35:01 compute-0 nova_compute[254092]: 2025-11-25 16:35:01.426 254096 DEBUG nova.compute.manager [req-121ccb02-f89b-4780-932f-29191fb9e52e req-4da305bd-1751-4c9b-a9a5-ab334a529de6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Detach interface failed, port_id=71742b12-1a6c-4d30-a9c9-522fc9eb4a4a, reason: Instance 2c116477-6534-4f01-a0bb-ebdd9e027e05 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 25 16:35:01 compute-0 podman[302355]: 2025-11-25 16:35:01.430566346 +0000 UTC m=+0.143815166 container init d635a9f0c343d200a7b799fe3ee0d6929298f3e4aefb4cd9951ee7f919426553 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 16:35:01 compute-0 podman[302355]: 2025-11-25 16:35:01.43550503 +0000 UTC m=+0.148753810 container start d635a9f0c343d200a7b799fe3ee0d6929298f3e4aefb4cd9951ee7f919426553 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 16:35:01 compute-0 neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7[302370]: [NOTICE]   (302374) : New worker (302376) forked
Nov 25 16:35:01 compute-0 neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7[302370]: [NOTICE]   (302374) : Loading success.
Nov 25 16:35:02 compute-0 ceph-mon[74985]: pgmap v1434: 321 pgs: 321 active+clean; 214 MiB data, 475 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.7 MiB/s wr, 346 op/s
Nov 25 16:35:02 compute-0 nova_compute[254092]: 2025-11-25 16:35:02.440 254096 DEBUG nova.network.neutron [-] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:35:02 compute-0 nova_compute[254092]: 2025-11-25 16:35:02.454 254096 INFO nova.compute.manager [-] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Took 2.24 seconds to deallocate network for instance.
Nov 25 16:35:02 compute-0 nova_compute[254092]: 2025-11-25 16:35:02.490 254096 DEBUG oslo_concurrency.lockutils [None req-4366d4e3-4217-4ac9-b1a7-a6ce0e75a6b0 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:02 compute-0 nova_compute[254092]: 2025-11-25 16:35:02.491 254096 DEBUG oslo_concurrency.lockutils [None req-4366d4e3-4217-4ac9-b1a7-a6ce0e75a6b0 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:02 compute-0 nova_compute[254092]: 2025-11-25 16:35:02.586 254096 DEBUG oslo_concurrency.processutils [None req-4366d4e3-4217-4ac9-b1a7-a6ce0e75a6b0 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:35:02 compute-0 nova_compute[254092]: 2025-11-25 16:35:02.761 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:35:03 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1011526092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:35:03 compute-0 nova_compute[254092]: 2025-11-25 16:35:03.072 254096 DEBUG oslo_concurrency.processutils [None req-4366d4e3-4217-4ac9-b1a7-a6ce0e75a6b0 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:35:03 compute-0 nova_compute[254092]: 2025-11-25 16:35:03.078 254096 DEBUG nova.compute.provider_tree [None req-4366d4e3-4217-4ac9-b1a7-a6ce0e75a6b0 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:35:03 compute-0 nova_compute[254092]: 2025-11-25 16:35:03.093 254096 DEBUG nova.scheduler.client.report [None req-4366d4e3-4217-4ac9-b1a7-a6ce0e75a6b0 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:35:03 compute-0 nova_compute[254092]: 2025-11-25 16:35:03.115 254096 DEBUG oslo_concurrency.lockutils [None req-4366d4e3-4217-4ac9-b1a7-a6ce0e75a6b0 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:03 compute-0 nova_compute[254092]: 2025-11-25 16:35:03.148 254096 INFO nova.scheduler.client.report [None req-4366d4e3-4217-4ac9-b1a7-a6ce0e75a6b0 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Deleted allocations for instance 2c116477-6534-4f01-a0bb-ebdd9e027e05
Nov 25 16:35:03 compute-0 nova_compute[254092]: 2025-11-25 16:35:03.218 254096 DEBUG oslo_concurrency.lockutils [None req-4366d4e3-4217-4ac9-b1a7-a6ce0e75a6b0 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Lock "2c116477-6534-4f01-a0bb-ebdd9e027e05" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.789s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:03 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1011526092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:35:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1435: 321 pgs: 321 active+clean; 214 MiB data, 475 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.3 MiB/s wr, 295 op/s
Nov 25 16:35:03 compute-0 nova_compute[254092]: 2025-11-25 16:35:03.510 254096 DEBUG nova.compute.manager [req-47e0f65a-ee16-472a-9bbe-0a6195c0cd2b req-f2d66f88-97dd-4ab1-8ab9-35fe75cd5fff a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Received event network-vif-plugged-ec7e033b-7a98-44cb-9aab-c96b985fd4a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:03 compute-0 nova_compute[254092]: 2025-11-25 16:35:03.510 254096 DEBUG oslo_concurrency.lockutils [req-47e0f65a-ee16-472a-9bbe-0a6195c0cd2b req-f2d66f88-97dd-4ab1-8ab9-35fe75cd5fff a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2c116477-6534-4f01-a0bb-ebdd9e027e05-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:03 compute-0 nova_compute[254092]: 2025-11-25 16:35:03.511 254096 DEBUG oslo_concurrency.lockutils [req-47e0f65a-ee16-472a-9bbe-0a6195c0cd2b req-f2d66f88-97dd-4ab1-8ab9-35fe75cd5fff a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2c116477-6534-4f01-a0bb-ebdd9e027e05-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:03 compute-0 nova_compute[254092]: 2025-11-25 16:35:03.511 254096 DEBUG oslo_concurrency.lockutils [req-47e0f65a-ee16-472a-9bbe-0a6195c0cd2b req-f2d66f88-97dd-4ab1-8ab9-35fe75cd5fff a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2c116477-6534-4f01-a0bb-ebdd9e027e05-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:03 compute-0 nova_compute[254092]: 2025-11-25 16:35:03.511 254096 DEBUG nova.compute.manager [req-47e0f65a-ee16-472a-9bbe-0a6195c0cd2b req-f2d66f88-97dd-4ab1-8ab9-35fe75cd5fff a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] No waiting events found dispatching network-vif-plugged-ec7e033b-7a98-44cb-9aab-c96b985fd4a7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:35:03 compute-0 nova_compute[254092]: 2025-11-25 16:35:03.511 254096 WARNING nova.compute.manager [req-47e0f65a-ee16-472a-9bbe-0a6195c0cd2b req-f2d66f88-97dd-4ab1-8ab9-35fe75cd5fff a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Received unexpected event network-vif-plugged-ec7e033b-7a98-44cb-9aab-c96b985fd4a7 for instance with vm_state deleted and task_state None.
Nov 25 16:35:03 compute-0 nova_compute[254092]: 2025-11-25 16:35:03.511 254096 DEBUG nova.compute.manager [req-47e0f65a-ee16-472a-9bbe-0a6195c0cd2b req-f2d66f88-97dd-4ab1-8ab9-35fe75cd5fff a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Received event network-vif-plugged-1baf23f4-ec91-408d-b406-37757eba550e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:03 compute-0 nova_compute[254092]: 2025-11-25 16:35:03.512 254096 DEBUG oslo_concurrency.lockutils [req-47e0f65a-ee16-472a-9bbe-0a6195c0cd2b req-f2d66f88-97dd-4ab1-8ab9-35fe75cd5fff a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "763ace49-fa88-443d-9733-e919b6f86fab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:03 compute-0 nova_compute[254092]: 2025-11-25 16:35:03.512 254096 DEBUG oslo_concurrency.lockutils [req-47e0f65a-ee16-472a-9bbe-0a6195c0cd2b req-f2d66f88-97dd-4ab1-8ab9-35fe75cd5fff a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "763ace49-fa88-443d-9733-e919b6f86fab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:03 compute-0 nova_compute[254092]: 2025-11-25 16:35:03.512 254096 DEBUG oslo_concurrency.lockutils [req-47e0f65a-ee16-472a-9bbe-0a6195c0cd2b req-f2d66f88-97dd-4ab1-8ab9-35fe75cd5fff a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "763ace49-fa88-443d-9733-e919b6f86fab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:03 compute-0 nova_compute[254092]: 2025-11-25 16:35:03.512 254096 DEBUG nova.compute.manager [req-47e0f65a-ee16-472a-9bbe-0a6195c0cd2b req-f2d66f88-97dd-4ab1-8ab9-35fe75cd5fff a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Processing event network-vif-plugged-1baf23f4-ec91-408d-b406-37757eba550e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:35:03 compute-0 nova_compute[254092]: 2025-11-25 16:35:03.512 254096 DEBUG nova.compute.manager [req-47e0f65a-ee16-472a-9bbe-0a6195c0cd2b req-f2d66f88-97dd-4ab1-8ab9-35fe75cd5fff a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Received event network-vif-deleted-8be28993-accf-4bd5-8f8d-f1e94d84aca3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:03 compute-0 nova_compute[254092]: 2025-11-25 16:35:03.512 254096 DEBUG nova.compute.manager [req-47e0f65a-ee16-472a-9bbe-0a6195c0cd2b req-f2d66f88-97dd-4ab1-8ab9-35fe75cd5fff a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Received event network-vif-plugged-1baf23f4-ec91-408d-b406-37757eba550e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:03 compute-0 nova_compute[254092]: 2025-11-25 16:35:03.513 254096 DEBUG oslo_concurrency.lockutils [req-47e0f65a-ee16-472a-9bbe-0a6195c0cd2b req-f2d66f88-97dd-4ab1-8ab9-35fe75cd5fff a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "763ace49-fa88-443d-9733-e919b6f86fab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:03 compute-0 nova_compute[254092]: 2025-11-25 16:35:03.513 254096 DEBUG oslo_concurrency.lockutils [req-47e0f65a-ee16-472a-9bbe-0a6195c0cd2b req-f2d66f88-97dd-4ab1-8ab9-35fe75cd5fff a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "763ace49-fa88-443d-9733-e919b6f86fab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:03 compute-0 nova_compute[254092]: 2025-11-25 16:35:03.513 254096 DEBUG oslo_concurrency.lockutils [req-47e0f65a-ee16-472a-9bbe-0a6195c0cd2b req-f2d66f88-97dd-4ab1-8ab9-35fe75cd5fff a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "763ace49-fa88-443d-9733-e919b6f86fab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:03 compute-0 nova_compute[254092]: 2025-11-25 16:35:03.513 254096 DEBUG nova.compute.manager [req-47e0f65a-ee16-472a-9bbe-0a6195c0cd2b req-f2d66f88-97dd-4ab1-8ab9-35fe75cd5fff a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] No waiting events found dispatching network-vif-plugged-1baf23f4-ec91-408d-b406-37757eba550e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:35:03 compute-0 nova_compute[254092]: 2025-11-25 16:35:03.513 254096 WARNING nova.compute.manager [req-47e0f65a-ee16-472a-9bbe-0a6195c0cd2b req-f2d66f88-97dd-4ab1-8ab9-35fe75cd5fff a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Received unexpected event network-vif-plugged-1baf23f4-ec91-408d-b406-37757eba550e for instance with vm_state building and task_state spawning.
Nov 25 16:35:03 compute-0 nova_compute[254092]: 2025-11-25 16:35:03.513 254096 DEBUG nova.compute.manager [req-47e0f65a-ee16-472a-9bbe-0a6195c0cd2b req-f2d66f88-97dd-4ab1-8ab9-35fe75cd5fff a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Received event network-vif-deleted-ec7e033b-7a98-44cb-9aab-c96b985fd4a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:03 compute-0 nova_compute[254092]: 2025-11-25 16:35:03.514 254096 DEBUG nova.compute.manager [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:35:03 compute-0 nova_compute[254092]: 2025-11-25 16:35:03.518 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088503.5178125, 763ace49-fa88-443d-9733-e919b6f86fab => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:35:03 compute-0 nova_compute[254092]: 2025-11-25 16:35:03.518 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] VM Resumed (Lifecycle Event)
Nov 25 16:35:03 compute-0 nova_compute[254092]: 2025-11-25 16:35:03.520 254096 DEBUG nova.virt.libvirt.driver [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:35:03 compute-0 nova_compute[254092]: 2025-11-25 16:35:03.524 254096 INFO nova.virt.libvirt.driver [-] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Instance spawned successfully.
Nov 25 16:35:03 compute-0 nova_compute[254092]: 2025-11-25 16:35:03.525 254096 DEBUG nova.virt.libvirt.driver [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:35:03 compute-0 nova_compute[254092]: 2025-11-25 16:35:03.539 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:35:03 compute-0 nova_compute[254092]: 2025-11-25 16:35:03.544 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:35:03 compute-0 nova_compute[254092]: 2025-11-25 16:35:03.547 254096 DEBUG nova.virt.libvirt.driver [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:35:03 compute-0 nova_compute[254092]: 2025-11-25 16:35:03.547 254096 DEBUG nova.virt.libvirt.driver [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:35:03 compute-0 nova_compute[254092]: 2025-11-25 16:35:03.548 254096 DEBUG nova.virt.libvirt.driver [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:35:03 compute-0 nova_compute[254092]: 2025-11-25 16:35:03.548 254096 DEBUG nova.virt.libvirt.driver [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:35:03 compute-0 nova_compute[254092]: 2025-11-25 16:35:03.548 254096 DEBUG nova.virt.libvirt.driver [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:35:03 compute-0 nova_compute[254092]: 2025-11-25 16:35:03.549 254096 DEBUG nova.virt.libvirt.driver [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:35:03 compute-0 nova_compute[254092]: 2025-11-25 16:35:03.575 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:35:03 compute-0 nova_compute[254092]: 2025-11-25 16:35:03.631 254096 INFO nova.compute.manager [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Took 8.12 seconds to spawn the instance on the hypervisor.
Nov 25 16:35:03 compute-0 nova_compute[254092]: 2025-11-25 16:35:03.632 254096 DEBUG nova.compute.manager [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:35:03 compute-0 nova_compute[254092]: 2025-11-25 16:35:03.692 254096 INFO nova.compute.manager [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Took 9.13 seconds to build instance.
Nov 25 16:35:03 compute-0 nova_compute[254092]: 2025-11-25 16:35:03.706 254096 DEBUG oslo_concurrency.lockutils [None req-37275ab0-ec68-425f-a8b7-6628952fca79 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "763ace49-fa88-443d-9733-e919b6f86fab" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.208s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:35:04 compute-0 nova_compute[254092]: 2025-11-25 16:35:04.195 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:35:04 compute-0 nova_compute[254092]: 2025-11-25 16:35:04.196 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:35:04 compute-0 ceph-mon[74985]: pgmap v1435: 321 pgs: 321 active+clean; 214 MiB data, 475 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.3 MiB/s wr, 295 op/s
Nov 25 16:35:04 compute-0 nova_compute[254092]: 2025-11-25 16:35:04.769 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:05 compute-0 ovn_controller[153477]: 2025-11-25T16:35:05Z|00324|binding|INFO|Releasing lport 639a1689-3ed6-4bc6-98a0-e7a7773b6e05 from this chassis (sb_readonly=0)
Nov 25 16:35:05 compute-0 ovn_controller[153477]: 2025-11-25T16:35:05Z|00325|binding|INFO|Releasing lport 6381e38a-44c0-40e8-bec6-3ddd886bfc49 from this chassis (sb_readonly=0)
Nov 25 16:35:05 compute-0 ovn_controller[153477]: 2025-11-25T16:35:05Z|00326|binding|INFO|Releasing lport 9dd2e935-32e0-43d1-8a28-23e6ab045e91 from this chassis (sb_readonly=0)
Nov 25 16:35:05 compute-0 nova_compute[254092]: 2025-11-25 16:35:05.144 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1436: 321 pgs: 321 active+clean; 234 MiB data, 500 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 3.8 MiB/s wr, 294 op/s
Nov 25 16:35:05 compute-0 nova_compute[254092]: 2025-11-25 16:35:05.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:35:05 compute-0 nova_compute[254092]: 2025-11-25 16:35:05.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 16:35:05 compute-0 nova_compute[254092]: 2025-11-25 16:35:05.512 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 16:35:05 compute-0 nova_compute[254092]: 2025-11-25 16:35:05.584 254096 DEBUG oslo_concurrency.lockutils [None req-0aed37e6-46ee-4d78-8882-3ea26410f8c1 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "763ace49-fa88-443d-9733-e919b6f86fab" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:05 compute-0 nova_compute[254092]: 2025-11-25 16:35:05.585 254096 DEBUG oslo_concurrency.lockutils [None req-0aed37e6-46ee-4d78-8882-3ea26410f8c1 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "763ace49-fa88-443d-9733-e919b6f86fab" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:05 compute-0 nova_compute[254092]: 2025-11-25 16:35:05.585 254096 DEBUG nova.compute.manager [None req-0aed37e6-46ee-4d78-8882-3ea26410f8c1 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:35:05 compute-0 nova_compute[254092]: 2025-11-25 16:35:05.588 254096 DEBUG nova.compute.manager [None req-0aed37e6-46ee-4d78-8882-3ea26410f8c1 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Nov 25 16:35:05 compute-0 nova_compute[254092]: 2025-11-25 16:35:05.589 254096 DEBUG nova.objects.instance [None req-0aed37e6-46ee-4d78-8882-3ea26410f8c1 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lazy-loading 'flavor' on Instance uuid 763ace49-fa88-443d-9733-e919b6f86fab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:35:05 compute-0 nova_compute[254092]: 2025-11-25 16:35:05.609 254096 DEBUG nova.virt.libvirt.driver [None req-0aed37e6-46ee-4d78-8882-3ea26410f8c1 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 16:35:05 compute-0 ovn_controller[153477]: 2025-11-25T16:35:05Z|00050|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:06:28:4e 10.100.0.14
Nov 25 16:35:05 compute-0 ovn_controller[153477]: 2025-11-25T16:35:05Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:06:28:4e 10.100.0.14
Nov 25 16:35:06 compute-0 ceph-mon[74985]: pgmap v1436: 321 pgs: 321 active+clean; 234 MiB data, 500 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 3.8 MiB/s wr, 294 op/s
Nov 25 16:35:06 compute-0 nova_compute[254092]: 2025-11-25 16:35:06.995 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088491.9694364, 90437bdf-689c-4185-93de-c28fe2c2ab07 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:35:06 compute-0 nova_compute[254092]: 2025-11-25 16:35:06.996 254096 INFO nova.compute.manager [-] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] VM Stopped (Lifecycle Event)
Nov 25 16:35:07 compute-0 nova_compute[254092]: 2025-11-25 16:35:07.015 254096 DEBUG nova.compute.manager [None req-9be915bd-2730-4762-a744-8597b3d68e07 - - - - - -] [instance: 90437bdf-689c-4185-93de-c28fe2c2ab07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:35:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1437: 321 pgs: 321 active+clean; 246 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 3.4 MiB/s wr, 300 op/s
Nov 25 16:35:07 compute-0 nova_compute[254092]: 2025-11-25 16:35:07.507 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:35:07 compute-0 nova_compute[254092]: 2025-11-25 16:35:07.794 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:08 compute-0 ceph-mon[74985]: pgmap v1437: 321 pgs: 321 active+clean; 246 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 3.4 MiB/s wr, 300 op/s
Nov 25 16:35:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:35:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1438: 321 pgs: 321 active+clean; 246 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 3.2 MiB/s wr, 288 op/s
Nov 25 16:35:09 compute-0 nova_compute[254092]: 2025-11-25 16:35:09.771 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:35:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:35:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:35:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:35:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:35:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:35:10 compute-0 ceph-mon[74985]: pgmap v1438: 321 pgs: 321 active+clean; 246 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 3.2 MiB/s wr, 288 op/s
Nov 25 16:35:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1439: 321 pgs: 321 active+clean; 246 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.8 MiB/s wr, 252 op/s
Nov 25 16:35:12 compute-0 ceph-mon[74985]: pgmap v1439: 321 pgs: 321 active+clean; 246 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.8 MiB/s wr, 252 op/s
Nov 25 16:35:12 compute-0 nova_compute[254092]: 2025-11-25 16:35:12.796 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1440: 321 pgs: 321 active+clean; 246 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 145 op/s
Nov 25 16:35:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:13.609 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:13.610 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:13.611 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:13 compute-0 nova_compute[254092]: 2025-11-25 16:35:13.885 254096 DEBUG oslo_concurrency.lockutils [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Acquiring lock "8457c008-75d8-4c24-9ae2-6b8a526312ce" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:13 compute-0 nova_compute[254092]: 2025-11-25 16:35:13.886 254096 DEBUG oslo_concurrency.lockutils [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Lock "8457c008-75d8-4c24-9ae2-6b8a526312ce" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:13 compute-0 nova_compute[254092]: 2025-11-25 16:35:13.908 254096 DEBUG nova.compute.manager [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:35:13 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:35:13 compute-0 nova_compute[254092]: 2025-11-25 16:35:13.974 254096 DEBUG oslo_concurrency.lockutils [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:13 compute-0 nova_compute[254092]: 2025-11-25 16:35:13.975 254096 DEBUG oslo_concurrency.lockutils [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:13 compute-0 nova_compute[254092]: 2025-11-25 16:35:13.983 254096 DEBUG nova.virt.hardware [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:35:13 compute-0 nova_compute[254092]: 2025-11-25 16:35:13.983 254096 INFO nova.compute.claims [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:35:14 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Nov 25 16:35:14 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:35:14.069444) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 16:35:14 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Nov 25 16:35:14 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088514069475, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 1950, "num_deletes": 508, "total_data_size": 2567319, "memory_usage": 2620648, "flush_reason": "Manual Compaction"}
Nov 25 16:35:14 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Nov 25 16:35:14 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088514125923, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 1545420, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28085, "largest_seqno": 30034, "table_properties": {"data_size": 1538873, "index_size": 2917, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 19878, "raw_average_key_size": 19, "raw_value_size": 1522438, "raw_average_value_size": 1519, "num_data_blocks": 132, "num_entries": 1002, "num_filter_entries": 1002, "num_deletions": 508, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764088354, "oldest_key_time": 1764088354, "file_creation_time": 1764088514, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:35:14 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 56534 microseconds, and 4471 cpu microseconds.
Nov 25 16:35:14 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:35:14 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:35:14.125973) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 1545420 bytes OK
Nov 25 16:35:14 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:35:14.125992) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Nov 25 16:35:14 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:35:14.140491) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Nov 25 16:35:14 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:35:14.140543) EVENT_LOG_v1 {"time_micros": 1764088514140531, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 16:35:14 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:35:14.140568) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 16:35:14 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 2557913, prev total WAL file size 2557913, number of live WAL files 2.
Nov 25 16:35:14 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:35:14 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:35:14.141636) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303036' seq:72057594037927935, type:22 .. '6D6772737461740031323538' seq:0, type:0; will stop at (end)
Nov 25 16:35:14 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 16:35:14 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(1509KB)], [62(8642KB)]
Nov 25 16:35:14 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088514141713, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 10395793, "oldest_snapshot_seqno": -1}
Nov 25 16:35:14 compute-0 nova_compute[254092]: 2025-11-25 16:35:14.203 254096 DEBUG oslo_concurrency.processutils [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:35:14 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5402 keys, 7959488 bytes, temperature: kUnknown
Nov 25 16:35:14 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088514431582, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 7959488, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7923950, "index_size": 20900, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13573, "raw_key_size": 136020, "raw_average_key_size": 25, "raw_value_size": 7827308, "raw_average_value_size": 1448, "num_data_blocks": 859, "num_entries": 5402, "num_filter_entries": 5402, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764088514, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:35:14 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:35:14 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:35:14.431974) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 7959488 bytes
Nov 25 16:35:14 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:35:14.476034) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 35.9 rd, 27.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 8.4 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(11.9) write-amplify(5.2) OK, records in: 6361, records dropped: 959 output_compression: NoCompression
Nov 25 16:35:14 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:35:14.476083) EVENT_LOG_v1 {"time_micros": 1764088514476060, "job": 34, "event": "compaction_finished", "compaction_time_micros": 289979, "compaction_time_cpu_micros": 17727, "output_level": 6, "num_output_files": 1, "total_output_size": 7959488, "num_input_records": 6361, "num_output_records": 5402, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 16:35:14 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:35:14 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088514476548, "job": 34, "event": "table_file_deletion", "file_number": 64}
Nov 25 16:35:14 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:35:14 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088514478189, "job": 34, "event": "table_file_deletion", "file_number": 62}
Nov 25 16:35:14 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:35:14.141486) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:35:14 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:35:14.478285) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:35:14 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:35:14.478290) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:35:14 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:35:14.478291) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:35:14 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:35:14.478293) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:35:14 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:35:14.478294) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:35:14 compute-0 ceph-mon[74985]: pgmap v1440: 321 pgs: 321 active+clean; 246 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 145 op/s
Nov 25 16:35:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:35:14 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/958697776' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:35:14 compute-0 nova_compute[254092]: 2025-11-25 16:35:14.643 254096 DEBUG oslo_concurrency.processutils [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:35:14 compute-0 nova_compute[254092]: 2025-11-25 16:35:14.650 254096 DEBUG nova.compute.provider_tree [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:35:14 compute-0 nova_compute[254092]: 2025-11-25 16:35:14.671 254096 DEBUG nova.scheduler.client.report [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:35:14 compute-0 nova_compute[254092]: 2025-11-25 16:35:14.697 254096 DEBUG oslo_concurrency.lockutils [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:14 compute-0 nova_compute[254092]: 2025-11-25 16:35:14.698 254096 DEBUG nova.compute.manager [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:35:14 compute-0 nova_compute[254092]: 2025-11-25 16:35:14.707 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088499.7056758, 2c116477-6534-4f01-a0bb-ebdd9e027e05 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:35:14 compute-0 nova_compute[254092]: 2025-11-25 16:35:14.707 254096 INFO nova.compute.manager [-] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] VM Stopped (Lifecycle Event)
Nov 25 16:35:14 compute-0 nova_compute[254092]: 2025-11-25 16:35:14.732 254096 DEBUG nova.compute.manager [None req-20b7b155-f699-4224-b8af-de87d2e17ff4 - - - - - -] [instance: 2c116477-6534-4f01-a0bb-ebdd9e027e05] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:35:14 compute-0 nova_compute[254092]: 2025-11-25 16:35:14.755 254096 DEBUG nova.compute.manager [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:35:14 compute-0 nova_compute[254092]: 2025-11-25 16:35:14.755 254096 DEBUG nova.network.neutron [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:35:14 compute-0 nova_compute[254092]: 2025-11-25 16:35:14.796 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:14 compute-0 nova_compute[254092]: 2025-11-25 16:35:14.798 254096 INFO nova.virt.libvirt.driver [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:35:14 compute-0 nova_compute[254092]: 2025-11-25 16:35:14.816 254096 DEBUG nova.compute.manager [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:35:14 compute-0 nova_compute[254092]: 2025-11-25 16:35:14.905 254096 DEBUG nova.compute.manager [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:35:14 compute-0 nova_compute[254092]: 2025-11-25 16:35:14.908 254096 DEBUG nova.virt.libvirt.driver [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:35:14 compute-0 nova_compute[254092]: 2025-11-25 16:35:14.909 254096 INFO nova.virt.libvirt.driver [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Creating image(s)
Nov 25 16:35:14 compute-0 nova_compute[254092]: 2025-11-25 16:35:14.939 254096 DEBUG nova.storage.rbd_utils [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] rbd image 8457c008-75d8-4c24-9ae2-6b8a526312ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:35:14 compute-0 nova_compute[254092]: 2025-11-25 16:35:14.975 254096 DEBUG nova.storage.rbd_utils [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] rbd image 8457c008-75d8-4c24-9ae2-6b8a526312ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:35:15 compute-0 nova_compute[254092]: 2025-11-25 16:35:15.007 254096 DEBUG nova.storage.rbd_utils [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] rbd image 8457c008-75d8-4c24-9ae2-6b8a526312ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:35:15 compute-0 nova_compute[254092]: 2025-11-25 16:35:15.012 254096 DEBUG oslo_concurrency.processutils [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:35:15 compute-0 nova_compute[254092]: 2025-11-25 16:35:15.095 254096 DEBUG oslo_concurrency.processutils [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:35:15 compute-0 nova_compute[254092]: 2025-11-25 16:35:15.097 254096 DEBUG oslo_concurrency.lockutils [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:15 compute-0 nova_compute[254092]: 2025-11-25 16:35:15.098 254096 DEBUG oslo_concurrency.lockutils [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:15 compute-0 nova_compute[254092]: 2025-11-25 16:35:15.098 254096 DEBUG oslo_concurrency.lockutils [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:15 compute-0 nova_compute[254092]: 2025-11-25 16:35:15.165 254096 DEBUG nova.storage.rbd_utils [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] rbd image 8457c008-75d8-4c24-9ae2-6b8a526312ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:35:15 compute-0 nova_compute[254092]: 2025-11-25 16:35:15.169 254096 DEBUG oslo_concurrency.processutils [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 8457c008-75d8-4c24-9ae2-6b8a526312ce_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:35:15 compute-0 nova_compute[254092]: 2025-11-25 16:35:15.314 254096 DEBUG nova.policy [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '650f3d90afcd4e85b7042981dc353a2d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fe7901baa563491c8609089aa4334bf1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:35:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1441: 321 pgs: 321 active+clean; 252 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.8 MiB/s wr, 153 op/s
Nov 25 16:35:15 compute-0 nova_compute[254092]: 2025-11-25 16:35:15.499 254096 DEBUG oslo_concurrency.processutils [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 8457c008-75d8-4c24-9ae2-6b8a526312ce_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.330s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:35:15 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/958697776' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:35:15 compute-0 nova_compute[254092]: 2025-11-25 16:35:15.552 254096 DEBUG nova.storage.rbd_utils [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] resizing rbd image 8457c008-75d8-4c24-9ae2-6b8a526312ce_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:35:15 compute-0 nova_compute[254092]: 2025-11-25 16:35:15.630 254096 DEBUG nova.objects.instance [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Lazy-loading 'migration_context' on Instance uuid 8457c008-75d8-4c24-9ae2-6b8a526312ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:35:15 compute-0 nova_compute[254092]: 2025-11-25 16:35:15.650 254096 DEBUG nova.virt.libvirt.driver [None req-0aed37e6-46ee-4d78-8882-3ea26410f8c1 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 16:35:15 compute-0 nova_compute[254092]: 2025-11-25 16:35:15.659 254096 DEBUG nova.virt.libvirt.driver [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:35:15 compute-0 nova_compute[254092]: 2025-11-25 16:35:15.659 254096 DEBUG nova.virt.libvirt.driver [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Ensure instance console log exists: /var/lib/nova/instances/8457c008-75d8-4c24-9ae2-6b8a526312ce/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:35:15 compute-0 nova_compute[254092]: 2025-11-25 16:35:15.660 254096 DEBUG oslo_concurrency.lockutils [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:15 compute-0 nova_compute[254092]: 2025-11-25 16:35:15.660 254096 DEBUG oslo_concurrency.lockutils [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:15 compute-0 nova_compute[254092]: 2025-11-25 16:35:15.660 254096 DEBUG oslo_concurrency.lockutils [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:16 compute-0 ceph-mon[74985]: pgmap v1441: 321 pgs: 321 active+clean; 252 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.8 MiB/s wr, 153 op/s
Nov 25 16:35:16 compute-0 nova_compute[254092]: 2025-11-25 16:35:16.942 254096 DEBUG nova.network.neutron [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Successfully created port: 12524556-7486-4f17-95f0-2984a51a4542 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:35:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1442: 321 pgs: 321 active+clean; 271 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.6 MiB/s wr, 95 op/s
Nov 25 16:35:17 compute-0 nova_compute[254092]: 2025-11-25 16:35:17.799 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:18 compute-0 ceph-mon[74985]: pgmap v1442: 321 pgs: 321 active+clean; 271 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.6 MiB/s wr, 95 op/s
Nov 25 16:35:18 compute-0 nova_compute[254092]: 2025-11-25 16:35:18.754 254096 DEBUG nova.network.neutron [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Successfully updated port: 12524556-7486-4f17-95f0-2984a51a4542 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:35:18 compute-0 nova_compute[254092]: 2025-11-25 16:35:18.767 254096 DEBUG oslo_concurrency.lockutils [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Acquiring lock "refresh_cache-8457c008-75d8-4c24-9ae2-6b8a526312ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:35:18 compute-0 nova_compute[254092]: 2025-11-25 16:35:18.768 254096 DEBUG oslo_concurrency.lockutils [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Acquired lock "refresh_cache-8457c008-75d8-4c24-9ae2-6b8a526312ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:35:18 compute-0 nova_compute[254092]: 2025-11-25 16:35:18.768 254096 DEBUG nova.network.neutron [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:35:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:35:19 compute-0 nova_compute[254092]: 2025-11-25 16:35:19.032 254096 DEBUG nova.network.neutron [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:35:19 compute-0 kernel: tap1baf23f4-ec (unregistering): left promiscuous mode
Nov 25 16:35:19 compute-0 NetworkManager[48891]: <info>  [1764088519.0760] device (tap1baf23f4-ec): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:35:19 compute-0 ovn_controller[153477]: 2025-11-25T16:35:19Z|00327|binding|INFO|Releasing lport 1baf23f4-ec91-408d-b406-37757eba550e from this chassis (sb_readonly=0)
Nov 25 16:35:19 compute-0 ovn_controller[153477]: 2025-11-25T16:35:19Z|00328|binding|INFO|Setting lport 1baf23f4-ec91-408d-b406-37757eba550e down in Southbound
Nov 25 16:35:19 compute-0 ovn_controller[153477]: 2025-11-25T16:35:19Z|00329|binding|INFO|Removing iface tap1baf23f4-ec ovn-installed in OVS
Nov 25 16:35:19 compute-0 nova_compute[254092]: 2025-11-25 16:35:19.088 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:19 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:19.094 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:f5:37 10.100.0.7'], port_security=['fa:16:3e:11:f5:37 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '763ace49-fa88-443d-9733-e919b6f86fab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0816ae24-275c-455e-a549-929f4eb756e7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8bd01e0913564ac783fae350d6861e24', 'neutron:revision_number': '4', 'neutron:security_group_ids': '572f99e3-d1c9-4515-9799-c37da094cf6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=24fed12c-3cf8-4d51-9b64-c89f82ee1964, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1baf23f4-ec91-408d-b406-37757eba550e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:35:19 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:19.095 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1baf23f4-ec91-408d-b406-37757eba550e in datapath 0816ae24-275c-455e-a549-929f4eb756e7 unbound from our chassis
Nov 25 16:35:19 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:19.096 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0816ae24-275c-455e-a549-929f4eb756e7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:35:19 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:19.097 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[59551d50-b7c8-47cc-866b-3a41029d9f26]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:19 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:19.098 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7 namespace which is not needed anymore
Nov 25 16:35:19 compute-0 nova_compute[254092]: 2025-11-25 16:35:19.102 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:19 compute-0 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d0000002a.scope: Deactivated successfully.
Nov 25 16:35:19 compute-0 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d0000002a.scope: Consumed 13.238s CPU time.
Nov 25 16:35:19 compute-0 systemd-machined[216343]: Machine qemu-48-instance-0000002a terminated.
Nov 25 16:35:19 compute-0 neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7[302370]: [NOTICE]   (302374) : haproxy version is 2.8.14-c23fe91
Nov 25 16:35:19 compute-0 neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7[302370]: [NOTICE]   (302374) : path to executable is /usr/sbin/haproxy
Nov 25 16:35:19 compute-0 neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7[302370]: [WARNING]  (302374) : Exiting Master process...
Nov 25 16:35:19 compute-0 neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7[302370]: [WARNING]  (302374) : Exiting Master process...
Nov 25 16:35:19 compute-0 neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7[302370]: [ALERT]    (302374) : Current worker (302376) exited with code 143 (Terminated)
Nov 25 16:35:19 compute-0 neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7[302370]: [WARNING]  (302374) : All workers exited. Exiting... (0)
Nov 25 16:35:19 compute-0 systemd[1]: libpod-d635a9f0c343d200a7b799fe3ee0d6929298f3e4aefb4cd9951ee7f919426553.scope: Deactivated successfully.
Nov 25 16:35:19 compute-0 podman[302620]: 2025-11-25 16:35:19.258845726 +0000 UTC m=+0.056772169 container died d635a9f0c343d200a7b799fe3ee0d6929298f3e4aefb4cd9951ee7f919426553 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:35:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d635a9f0c343d200a7b799fe3ee0d6929298f3e4aefb4cd9951ee7f919426553-userdata-shm.mount: Deactivated successfully.
Nov 25 16:35:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae550447c10edefc18483709f98d3041cf94f3caea761c726a0f618ee3794a1c-merged.mount: Deactivated successfully.
Nov 25 16:35:19 compute-0 podman[302620]: 2025-11-25 16:35:19.297298628 +0000 UTC m=+0.095225051 container cleanup d635a9f0c343d200a7b799fe3ee0d6929298f3e4aefb4cd9951ee7f919426553 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:35:19 compute-0 systemd[1]: libpod-conmon-d635a9f0c343d200a7b799fe3ee0d6929298f3e4aefb4cd9951ee7f919426553.scope: Deactivated successfully.
Nov 25 16:35:19 compute-0 nova_compute[254092]: 2025-11-25 16:35:19.311 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:19 compute-0 nova_compute[254092]: 2025-11-25 16:35:19.319 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:19 compute-0 podman[302652]: 2025-11-25 16:35:19.374695055 +0000 UTC m=+0.048323980 container remove d635a9f0c343d200a7b799fe3ee0d6929298f3e4aefb4cd9951ee7f919426553 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 25 16:35:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1443: 321 pgs: 321 active+clean; 271 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 255 KiB/s rd, 1.9 MiB/s wr, 52 op/s
Nov 25 16:35:19 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:19.381 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8b2ea304-7f52-4d4b-85cb-bff56d2ceb7b]: (4, ('Tue Nov 25 04:35:19 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7 (d635a9f0c343d200a7b799fe3ee0d6929298f3e4aefb4cd9951ee7f919426553)\nd635a9f0c343d200a7b799fe3ee0d6929298f3e4aefb4cd9951ee7f919426553\nTue Nov 25 04:35:19 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7 (d635a9f0c343d200a7b799fe3ee0d6929298f3e4aefb4cd9951ee7f919426553)\nd635a9f0c343d200a7b799fe3ee0d6929298f3e4aefb4cd9951ee7f919426553\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:19 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:19.383 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[99246a7c-a09e-45f9-9234-5517c23a891e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:19 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:19.384 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0816ae24-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:19 compute-0 nova_compute[254092]: 2025-11-25 16:35:19.386 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:19 compute-0 kernel: tap0816ae24-20: left promiscuous mode
Nov 25 16:35:19 compute-0 nova_compute[254092]: 2025-11-25 16:35:19.405 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:19 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:19.409 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0e871ac7-22f5-4997-a1f2-be2b211e266a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:19 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:19.424 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d4056425-624c-49f8-95e1-20c031aa1ac9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:19 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:19.425 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[78b6b1dd-3905-40ec-a8f6-a40d8c240b6a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:19 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:19.445 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5b6011b8-511c-43c2-be84-cede71f3ce28]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495834, 'reachable_time': 42297, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302677, 'error': None, 'target': 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:19 compute-0 systemd[1]: run-netns-ovnmeta\x2d0816ae24\x2d275c\x2d455e\x2da549\x2d929f4eb756e7.mount: Deactivated successfully.
Nov 25 16:35:19 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:19.450 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:35:19 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:19.450 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[bf40f0ef-2c3b-40ec-bbce-5760434a22bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:19 compute-0 nova_compute[254092]: 2025-11-25 16:35:19.469 254096 DEBUG nova.compute.manager [req-e5a7ed69-7e06-41de-8ef1-0067133fbd83 req-f434c30a-ea64-4094-b851-0537113950af a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Received event network-changed-12524556-7486-4f17-95f0-2984a51a4542 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:19 compute-0 nova_compute[254092]: 2025-11-25 16:35:19.469 254096 DEBUG nova.compute.manager [req-e5a7ed69-7e06-41de-8ef1-0067133fbd83 req-f434c30a-ea64-4094-b851-0537113950af a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Refreshing instance network info cache due to event network-changed-12524556-7486-4f17-95f0-2984a51a4542. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:35:19 compute-0 nova_compute[254092]: 2025-11-25 16:35:19.469 254096 DEBUG oslo_concurrency.lockutils [req-e5a7ed69-7e06-41de-8ef1-0067133fbd83 req-f434c30a-ea64-4094-b851-0537113950af a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-8457c008-75d8-4c24-9ae2-6b8a526312ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:35:19 compute-0 nova_compute[254092]: 2025-11-25 16:35:19.594 254096 DEBUG oslo_concurrency.lockutils [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Acquiring lock "7a89b21c-79db-4e5f-88fd-35557c8c15ff" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:19 compute-0 nova_compute[254092]: 2025-11-25 16:35:19.594 254096 DEBUG oslo_concurrency.lockutils [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Lock "7a89b21c-79db-4e5f-88fd-35557c8c15ff" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:19 compute-0 nova_compute[254092]: 2025-11-25 16:35:19.615 254096 DEBUG nova.compute.manager [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:35:19 compute-0 nova_compute[254092]: 2025-11-25 16:35:19.671 254096 INFO nova.virt.libvirt.driver [None req-0aed37e6-46ee-4d78-8882-3ea26410f8c1 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Instance shutdown successfully after 14 seconds.
Nov 25 16:35:19 compute-0 nova_compute[254092]: 2025-11-25 16:35:19.689 254096 INFO nova.virt.libvirt.driver [-] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Instance destroyed successfully.
Nov 25 16:35:19 compute-0 nova_compute[254092]: 2025-11-25 16:35:19.690 254096 DEBUG nova.objects.instance [None req-0aed37e6-46ee-4d78-8882-3ea26410f8c1 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lazy-loading 'numa_topology' on Instance uuid 763ace49-fa88-443d-9733-e919b6f86fab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:35:19 compute-0 nova_compute[254092]: 2025-11-25 16:35:19.706 254096 DEBUG nova.compute.manager [None req-0aed37e6-46ee-4d78-8882-3ea26410f8c1 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:35:19 compute-0 nova_compute[254092]: 2025-11-25 16:35:19.708 254096 DEBUG oslo_concurrency.lockutils [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:19 compute-0 nova_compute[254092]: 2025-11-25 16:35:19.709 254096 DEBUG oslo_concurrency.lockutils [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:19 compute-0 nova_compute[254092]: 2025-11-25 16:35:19.717 254096 DEBUG nova.virt.hardware [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:35:19 compute-0 nova_compute[254092]: 2025-11-25 16:35:19.717 254096 INFO nova.compute.claims [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:35:19 compute-0 nova_compute[254092]: 2025-11-25 16:35:19.780 254096 DEBUG oslo_concurrency.lockutils [None req-0aed37e6-46ee-4d78-8882-3ea26410f8c1 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "763ace49-fa88-443d-9733-e919b6f86fab" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 14.196s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:19 compute-0 nova_compute[254092]: 2025-11-25 16:35:19.799 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:19 compute-0 nova_compute[254092]: 2025-11-25 16:35:19.937 254096 DEBUG oslo_concurrency.processutils [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:35:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:35:20 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/149382602' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:35:20 compute-0 nova_compute[254092]: 2025-11-25 16:35:20.357 254096 DEBUG oslo_concurrency.processutils [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:35:20 compute-0 nova_compute[254092]: 2025-11-25 16:35:20.365 254096 DEBUG nova.compute.provider_tree [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:35:20 compute-0 nova_compute[254092]: 2025-11-25 16:35:20.381 254096 DEBUG nova.scheduler.client.report [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:35:20 compute-0 nova_compute[254092]: 2025-11-25 16:35:20.420 254096 DEBUG oslo_concurrency.lockutils [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:20 compute-0 nova_compute[254092]: 2025-11-25 16:35:20.421 254096 DEBUG nova.compute.manager [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:35:20 compute-0 nova_compute[254092]: 2025-11-25 16:35:20.493 254096 DEBUG nova.compute.manager [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:35:20 compute-0 nova_compute[254092]: 2025-11-25 16:35:20.493 254096 DEBUG nova.network.neutron [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:35:20 compute-0 nova_compute[254092]: 2025-11-25 16:35:20.507 254096 DEBUG nova.network.neutron [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Updating instance_info_cache with network_info: [{"id": "12524556-7486-4f17-95f0-2984a51a4542", "address": "fa:16:3e:3a:8f:b2", "network": {"id": "82742f46-fb6e-443e-a99d-84c5367a4ccd", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-522194148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe7901baa563491c8609089aa4334bf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12524556-74", "ovs_interfaceid": "12524556-7486-4f17-95f0-2984a51a4542", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:35:20 compute-0 nova_compute[254092]: 2025-11-25 16:35:20.521 254096 INFO nova.virt.libvirt.driver [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:35:20 compute-0 nova_compute[254092]: 2025-11-25 16:35:20.540 254096 DEBUG oslo_concurrency.lockutils [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Releasing lock "refresh_cache-8457c008-75d8-4c24-9ae2-6b8a526312ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:35:20 compute-0 nova_compute[254092]: 2025-11-25 16:35:20.540 254096 DEBUG nova.compute.manager [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Instance network_info: |[{"id": "12524556-7486-4f17-95f0-2984a51a4542", "address": "fa:16:3e:3a:8f:b2", "network": {"id": "82742f46-fb6e-443e-a99d-84c5367a4ccd", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-522194148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe7901baa563491c8609089aa4334bf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12524556-74", "ovs_interfaceid": "12524556-7486-4f17-95f0-2984a51a4542", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:35:20 compute-0 nova_compute[254092]: 2025-11-25 16:35:20.541 254096 DEBUG oslo_concurrency.lockutils [req-e5a7ed69-7e06-41de-8ef1-0067133fbd83 req-f434c30a-ea64-4094-b851-0537113950af a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-8457c008-75d8-4c24-9ae2-6b8a526312ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:35:20 compute-0 nova_compute[254092]: 2025-11-25 16:35:20.541 254096 DEBUG nova.network.neutron [req-e5a7ed69-7e06-41de-8ef1-0067133fbd83 req-f434c30a-ea64-4094-b851-0537113950af a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Refreshing network info cache for port 12524556-7486-4f17-95f0-2984a51a4542 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:35:20 compute-0 nova_compute[254092]: 2025-11-25 16:35:20.544 254096 DEBUG nova.virt.libvirt.driver [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Start _get_guest_xml network_info=[{"id": "12524556-7486-4f17-95f0-2984a51a4542", "address": "fa:16:3e:3a:8f:b2", "network": {"id": "82742f46-fb6e-443e-a99d-84c5367a4ccd", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-522194148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe7901baa563491c8609089aa4334bf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12524556-74", "ovs_interfaceid": "12524556-7486-4f17-95f0-2984a51a4542", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:35:20 compute-0 nova_compute[254092]: 2025-11-25 16:35:20.545 254096 DEBUG nova.compute.manager [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:35:20 compute-0 nova_compute[254092]: 2025-11-25 16:35:20.551 254096 WARNING nova.virt.libvirt.driver [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:35:20 compute-0 nova_compute[254092]: 2025-11-25 16:35:20.558 254096 DEBUG nova.virt.libvirt.host [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:35:20 compute-0 nova_compute[254092]: 2025-11-25 16:35:20.558 254096 DEBUG nova.virt.libvirt.host [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:35:20 compute-0 nova_compute[254092]: 2025-11-25 16:35:20.565 254096 DEBUG nova.virt.libvirt.host [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:35:20 compute-0 nova_compute[254092]: 2025-11-25 16:35:20.565 254096 DEBUG nova.virt.libvirt.host [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:35:20 compute-0 nova_compute[254092]: 2025-11-25 16:35:20.566 254096 DEBUG nova.virt.libvirt.driver [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:35:20 compute-0 nova_compute[254092]: 2025-11-25 16:35:20.566 254096 DEBUG nova.virt.hardware [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:35:20 compute-0 nova_compute[254092]: 2025-11-25 16:35:20.566 254096 DEBUG nova.virt.hardware [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:35:20 compute-0 nova_compute[254092]: 2025-11-25 16:35:20.567 254096 DEBUG nova.virt.hardware [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:35:20 compute-0 nova_compute[254092]: 2025-11-25 16:35:20.567 254096 DEBUG nova.virt.hardware [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:35:20 compute-0 nova_compute[254092]: 2025-11-25 16:35:20.567 254096 DEBUG nova.virt.hardware [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:35:20 compute-0 nova_compute[254092]: 2025-11-25 16:35:20.567 254096 DEBUG nova.virt.hardware [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:35:20 compute-0 nova_compute[254092]: 2025-11-25 16:35:20.568 254096 DEBUG nova.virt.hardware [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:35:20 compute-0 nova_compute[254092]: 2025-11-25 16:35:20.568 254096 DEBUG nova.virt.hardware [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:35:20 compute-0 nova_compute[254092]: 2025-11-25 16:35:20.568 254096 DEBUG nova.virt.hardware [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:35:20 compute-0 nova_compute[254092]: 2025-11-25 16:35:20.568 254096 DEBUG nova.virt.hardware [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:35:20 compute-0 nova_compute[254092]: 2025-11-25 16:35:20.569 254096 DEBUG nova.virt.hardware [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:35:20 compute-0 ceph-mon[74985]: pgmap v1443: 321 pgs: 321 active+clean; 271 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 255 KiB/s rd, 1.9 MiB/s wr, 52 op/s
Nov 25 16:35:20 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/149382602' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:35:20 compute-0 nova_compute[254092]: 2025-11-25 16:35:20.572 254096 DEBUG oslo_concurrency.processutils [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:35:20 compute-0 nova_compute[254092]: 2025-11-25 16:35:20.659 254096 DEBUG nova.compute.manager [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:35:20 compute-0 nova_compute[254092]: 2025-11-25 16:35:20.660 254096 DEBUG nova.virt.libvirt.driver [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:35:20 compute-0 nova_compute[254092]: 2025-11-25 16:35:20.661 254096 INFO nova.virt.libvirt.driver [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Creating image(s)
Nov 25 16:35:20 compute-0 nova_compute[254092]: 2025-11-25 16:35:20.684 254096 DEBUG nova.storage.rbd_utils [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] rbd image 7a89b21c-79db-4e5f-88fd-35557c8c15ff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:35:20 compute-0 nova_compute[254092]: 2025-11-25 16:35:20.713 254096 DEBUG nova.storage.rbd_utils [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] rbd image 7a89b21c-79db-4e5f-88fd-35557c8c15ff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:35:20 compute-0 nova_compute[254092]: 2025-11-25 16:35:20.743 254096 DEBUG nova.storage.rbd_utils [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] rbd image 7a89b21c-79db-4e5f-88fd-35557c8c15ff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:35:20 compute-0 nova_compute[254092]: 2025-11-25 16:35:20.749 254096 DEBUG oslo_concurrency.processutils [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:35:20 compute-0 nova_compute[254092]: 2025-11-25 16:35:20.817 254096 DEBUG oslo_concurrency.processutils [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:35:20 compute-0 nova_compute[254092]: 2025-11-25 16:35:20.818 254096 DEBUG oslo_concurrency.lockutils [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:20 compute-0 nova_compute[254092]: 2025-11-25 16:35:20.819 254096 DEBUG oslo_concurrency.lockutils [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:20 compute-0 nova_compute[254092]: 2025-11-25 16:35:20.819 254096 DEBUG oslo_concurrency.lockutils [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:20 compute-0 nova_compute[254092]: 2025-11-25 16:35:20.840 254096 DEBUG nova.storage.rbd_utils [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] rbd image 7a89b21c-79db-4e5f-88fd-35557c8c15ff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:35:20 compute-0 nova_compute[254092]: 2025-11-25 16:35:20.844 254096 DEBUG oslo_concurrency.processutils [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 7a89b21c-79db-4e5f-88fd-35557c8c15ff_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:35:20 compute-0 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Nov 25 16:35:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:35:21 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/759856702' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:35:21 compute-0 nova_compute[254092]: 2025-11-25 16:35:21.042 254096 DEBUG oslo_concurrency.processutils [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:35:21 compute-0 nova_compute[254092]: 2025-11-25 16:35:21.082 254096 DEBUG nova.storage.rbd_utils [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] rbd image 8457c008-75d8-4c24-9ae2-6b8a526312ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:35:21 compute-0 nova_compute[254092]: 2025-11-25 16:35:21.086 254096 DEBUG oslo_concurrency.processutils [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:35:21 compute-0 nova_compute[254092]: 2025-11-25 16:35:21.164 254096 DEBUG oslo_concurrency.processutils [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 7a89b21c-79db-4e5f-88fd-35557c8c15ff_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.319s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:35:21 compute-0 nova_compute[254092]: 2025-11-25 16:35:21.212 254096 DEBUG nova.storage.rbd_utils [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] resizing rbd image 7a89b21c-79db-4e5f-88fd-35557c8c15ff_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:35:21 compute-0 nova_compute[254092]: 2025-11-25 16:35:21.300 254096 DEBUG nova.objects.instance [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Lazy-loading 'migration_context' on Instance uuid 7a89b21c-79db-4e5f-88fd-35557c8c15ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:35:21 compute-0 nova_compute[254092]: 2025-11-25 16:35:21.328 254096 DEBUG nova.virt.libvirt.driver [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:35:21 compute-0 nova_compute[254092]: 2025-11-25 16:35:21.328 254096 DEBUG nova.virt.libvirt.driver [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Ensure instance console log exists: /var/lib/nova/instances/7a89b21c-79db-4e5f-88fd-35557c8c15ff/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:35:21 compute-0 nova_compute[254092]: 2025-11-25 16:35:21.329 254096 DEBUG oslo_concurrency.lockutils [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:21 compute-0 nova_compute[254092]: 2025-11-25 16:35:21.329 254096 DEBUG oslo_concurrency.lockutils [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:21 compute-0 nova_compute[254092]: 2025-11-25 16:35:21.329 254096 DEBUG oslo_concurrency.lockutils [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1444: 321 pgs: 321 active+clean; 336 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 360 KiB/s rd, 4.4 MiB/s wr, 108 op/s
Nov 25 16:35:21 compute-0 nova_compute[254092]: 2025-11-25 16:35:21.427 254096 DEBUG nova.policy [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '70f122fae9644012973ae5b56c1d459b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3355a3ac2d6d4d5ea7f590f1e2ae3492', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:35:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:35:21 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2289229848' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:35:21 compute-0 nova_compute[254092]: 2025-11-25 16:35:21.548 254096 DEBUG oslo_concurrency.processutils [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:35:21 compute-0 nova_compute[254092]: 2025-11-25 16:35:21.549 254096 DEBUG nova.virt.libvirt.vif [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:35:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-960547750',display_name='tempest-SecurityGroupsTestJSON-server-960547750',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-960547750',id=43,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fe7901baa563491c8609089aa4334bf1',ramdisk_id='',reservation_id='r-ckiy49j5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-716261307',owner_user_name='tempest-SecurityGroupsTestJSON-716261307-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:35:14Z,user_data=None,user_id='650f3d90afcd4e85b7042981dc353a2d',uuid=8457c008-75d8-4c24-9ae2-6b8a526312ce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "12524556-7486-4f17-95f0-2984a51a4542", "address": "fa:16:3e:3a:8f:b2", "network": {"id": "82742f46-fb6e-443e-a99d-84c5367a4ccd", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-522194148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe7901baa563491c8609089aa4334bf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12524556-74", "ovs_interfaceid": "12524556-7486-4f17-95f0-2984a51a4542", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:35:21 compute-0 nova_compute[254092]: 2025-11-25 16:35:21.550 254096 DEBUG nova.network.os_vif_util [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Converting VIF {"id": "12524556-7486-4f17-95f0-2984a51a4542", "address": "fa:16:3e:3a:8f:b2", "network": {"id": "82742f46-fb6e-443e-a99d-84c5367a4ccd", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-522194148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe7901baa563491c8609089aa4334bf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12524556-74", "ovs_interfaceid": "12524556-7486-4f17-95f0-2984a51a4542", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:35:21 compute-0 nova_compute[254092]: 2025-11-25 16:35:21.551 254096 DEBUG nova.network.os_vif_util [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:8f:b2,bridge_name='br-int',has_traffic_filtering=True,id=12524556-7486-4f17-95f0-2984a51a4542,network=Network(82742f46-fb6e-443e-a99d-84c5367a4ccd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12524556-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:35:21 compute-0 nova_compute[254092]: 2025-11-25 16:35:21.551 254096 DEBUG nova.objects.instance [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8457c008-75d8-4c24-9ae2-6b8a526312ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:35:21 compute-0 nova_compute[254092]: 2025-11-25 16:35:21.569 254096 DEBUG nova.virt.libvirt.driver [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:35:21 compute-0 nova_compute[254092]:   <uuid>8457c008-75d8-4c24-9ae2-6b8a526312ce</uuid>
Nov 25 16:35:21 compute-0 nova_compute[254092]:   <name>instance-0000002b</name>
Nov 25 16:35:21 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:35:21 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:35:21 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:35:21 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:       <nova:name>tempest-SecurityGroupsTestJSON-server-960547750</nova:name>
Nov 25 16:35:21 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:35:20</nova:creationTime>
Nov 25 16:35:21 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:35:21 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:35:21 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:35:21 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:35:21 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:35:21 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:35:21 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:35:21 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:35:21 compute-0 nova_compute[254092]:         <nova:user uuid="650f3d90afcd4e85b7042981dc353a2d">tempest-SecurityGroupsTestJSON-716261307-project-member</nova:user>
Nov 25 16:35:21 compute-0 nova_compute[254092]:         <nova:project uuid="fe7901baa563491c8609089aa4334bf1">tempest-SecurityGroupsTestJSON-716261307</nova:project>
Nov 25 16:35:21 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:35:21 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:35:21 compute-0 nova_compute[254092]:         <nova:port uuid="12524556-7486-4f17-95f0-2984a51a4542">
Nov 25 16:35:21 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:35:21 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:35:21 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:35:21 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:35:21 compute-0 nova_compute[254092]:     <system>
Nov 25 16:35:21 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:35:21 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:35:21 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:35:21 compute-0 nova_compute[254092]:       <entry name="serial">8457c008-75d8-4c24-9ae2-6b8a526312ce</entry>
Nov 25 16:35:21 compute-0 nova_compute[254092]:       <entry name="uuid">8457c008-75d8-4c24-9ae2-6b8a526312ce</entry>
Nov 25 16:35:21 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     </system>
Nov 25 16:35:21 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:35:21 compute-0 nova_compute[254092]:   <os>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:   </os>
Nov 25 16:35:21 compute-0 nova_compute[254092]:   <features>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:   </features>
Nov 25 16:35:21 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:35:21 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:35:21 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:35:21 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:35:21 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:35:21 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/8457c008-75d8-4c24-9ae2-6b8a526312ce_disk">
Nov 25 16:35:21 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:       </source>
Nov 25 16:35:21 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:35:21 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:35:21 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:35:21 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/8457c008-75d8-4c24-9ae2-6b8a526312ce_disk.config">
Nov 25 16:35:21 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:       </source>
Nov 25 16:35:21 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:35:21 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:35:21 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:35:21 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:3a:8f:b2"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:       <target dev="tap12524556-74"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:35:21 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/8457c008-75d8-4c24-9ae2-6b8a526312ce/console.log" append="off"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     <video>
Nov 25 16:35:21 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     </video>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:35:21 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:35:21 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:35:21 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:35:21 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:35:21 compute-0 nova_compute[254092]: </domain>
Nov 25 16:35:21 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:35:21 compute-0 nova_compute[254092]: 2025-11-25 16:35:21.570 254096 DEBUG nova.compute.manager [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Preparing to wait for external event network-vif-plugged-12524556-7486-4f17-95f0-2984a51a4542 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:35:21 compute-0 nova_compute[254092]: 2025-11-25 16:35:21.570 254096 DEBUG oslo_concurrency.lockutils [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Acquiring lock "8457c008-75d8-4c24-9ae2-6b8a526312ce-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:21 compute-0 nova_compute[254092]: 2025-11-25 16:35:21.570 254096 DEBUG oslo_concurrency.lockutils [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Lock "8457c008-75d8-4c24-9ae2-6b8a526312ce-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:21 compute-0 nova_compute[254092]: 2025-11-25 16:35:21.570 254096 DEBUG oslo_concurrency.lockutils [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Lock "8457c008-75d8-4c24-9ae2-6b8a526312ce-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:21 compute-0 nova_compute[254092]: 2025-11-25 16:35:21.571 254096 DEBUG nova.virt.libvirt.vif [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:35:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-960547750',display_name='tempest-SecurityGroupsTestJSON-server-960547750',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-960547750',id=43,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fe7901baa563491c8609089aa4334bf1',ramdisk_id='',reservation_id='r-ckiy49j5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-716261307',owner_user_name='tempest-SecurityGroupsTestJSON-716261307-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:35:14Z,user_data=None,user_id='650f3d90afcd4e85b7042981dc353a2d',uuid=8457c008-75d8-4c24-9ae2-6b8a526312ce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "12524556-7486-4f17-95f0-2984a51a4542", "address": "fa:16:3e:3a:8f:b2", "network": {"id": "82742f46-fb6e-443e-a99d-84c5367a4ccd", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-522194148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe7901baa563491c8609089aa4334bf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12524556-74", "ovs_interfaceid": "12524556-7486-4f17-95f0-2984a51a4542", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:35:21 compute-0 nova_compute[254092]: 2025-11-25 16:35:21.571 254096 DEBUG nova.network.os_vif_util [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Converting VIF {"id": "12524556-7486-4f17-95f0-2984a51a4542", "address": "fa:16:3e:3a:8f:b2", "network": {"id": "82742f46-fb6e-443e-a99d-84c5367a4ccd", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-522194148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe7901baa563491c8609089aa4334bf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12524556-74", "ovs_interfaceid": "12524556-7486-4f17-95f0-2984a51a4542", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:35:21 compute-0 nova_compute[254092]: 2025-11-25 16:35:21.572 254096 DEBUG nova.network.os_vif_util [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:8f:b2,bridge_name='br-int',has_traffic_filtering=True,id=12524556-7486-4f17-95f0-2984a51a4542,network=Network(82742f46-fb6e-443e-a99d-84c5367a4ccd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12524556-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:35:21 compute-0 nova_compute[254092]: 2025-11-25 16:35:21.572 254096 DEBUG os_vif [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:8f:b2,bridge_name='br-int',has_traffic_filtering=True,id=12524556-7486-4f17-95f0-2984a51a4542,network=Network(82742f46-fb6e-443e-a99d-84c5367a4ccd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12524556-74') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:35:21 compute-0 nova_compute[254092]: 2025-11-25 16:35:21.573 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:21 compute-0 nova_compute[254092]: 2025-11-25 16:35:21.573 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:21 compute-0 nova_compute[254092]: 2025-11-25 16:35:21.573 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:35:21 compute-0 nova_compute[254092]: 2025-11-25 16:35:21.576 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:21 compute-0 nova_compute[254092]: 2025-11-25 16:35:21.576 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap12524556-74, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:21 compute-0 nova_compute[254092]: 2025-11-25 16:35:21.576 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap12524556-74, col_values=(('external_ids', {'iface-id': '12524556-7486-4f17-95f0-2984a51a4542', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3a:8f:b2', 'vm-uuid': '8457c008-75d8-4c24-9ae2-6b8a526312ce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:21 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/759856702' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:35:21 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2289229848' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:35:21 compute-0 NetworkManager[48891]: <info>  [1764088521.6166] manager: (tap12524556-74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/149)
Nov 25 16:35:21 compute-0 nova_compute[254092]: 2025-11-25 16:35:21.615 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:21 compute-0 nova_compute[254092]: 2025-11-25 16:35:21.621 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:35:21 compute-0 nova_compute[254092]: 2025-11-25 16:35:21.627 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:21 compute-0 nova_compute[254092]: 2025-11-25 16:35:21.629 254096 INFO os_vif [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:8f:b2,bridge_name='br-int',has_traffic_filtering=True,id=12524556-7486-4f17-95f0-2984a51a4542,network=Network(82742f46-fb6e-443e-a99d-84c5367a4ccd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12524556-74')
Nov 25 16:35:21 compute-0 nova_compute[254092]: 2025-11-25 16:35:21.700 254096 DEBUG nova.virt.libvirt.driver [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:35:21 compute-0 nova_compute[254092]: 2025-11-25 16:35:21.701 254096 DEBUG nova.virt.libvirt.driver [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:35:21 compute-0 nova_compute[254092]: 2025-11-25 16:35:21.702 254096 DEBUG nova.virt.libvirt.driver [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] No VIF found with MAC fa:16:3e:3a:8f:b2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:35:21 compute-0 nova_compute[254092]: 2025-11-25 16:35:21.702 254096 INFO nova.virt.libvirt.driver [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Using config drive
Nov 25 16:35:21 compute-0 nova_compute[254092]: 2025-11-25 16:35:21.728 254096 DEBUG nova.storage.rbd_utils [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] rbd image 8457c008-75d8-4c24-9ae2-6b8a526312ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:35:22 compute-0 nova_compute[254092]: 2025-11-25 16:35:22.017 254096 DEBUG nova.compute.manager [req-fc3c2471-4c97-4a41-a388-1091de316473 req-fa989ddd-bc40-4622-9e49-f481db71ed30 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Received event network-vif-unplugged-1baf23f4-ec91-408d-b406-37757eba550e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:22 compute-0 nova_compute[254092]: 2025-11-25 16:35:22.018 254096 DEBUG oslo_concurrency.lockutils [req-fc3c2471-4c97-4a41-a388-1091de316473 req-fa989ddd-bc40-4622-9e49-f481db71ed30 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "763ace49-fa88-443d-9733-e919b6f86fab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:22 compute-0 nova_compute[254092]: 2025-11-25 16:35:22.019 254096 DEBUG oslo_concurrency.lockutils [req-fc3c2471-4c97-4a41-a388-1091de316473 req-fa989ddd-bc40-4622-9e49-f481db71ed30 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "763ace49-fa88-443d-9733-e919b6f86fab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:22 compute-0 nova_compute[254092]: 2025-11-25 16:35:22.019 254096 DEBUG oslo_concurrency.lockutils [req-fc3c2471-4c97-4a41-a388-1091de316473 req-fa989ddd-bc40-4622-9e49-f481db71ed30 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "763ace49-fa88-443d-9733-e919b6f86fab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:22 compute-0 nova_compute[254092]: 2025-11-25 16:35:22.019 254096 DEBUG nova.compute.manager [req-fc3c2471-4c97-4a41-a388-1091de316473 req-fa989ddd-bc40-4622-9e49-f481db71ed30 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] No waiting events found dispatching network-vif-unplugged-1baf23f4-ec91-408d-b406-37757eba550e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:35:22 compute-0 nova_compute[254092]: 2025-11-25 16:35:22.020 254096 WARNING nova.compute.manager [req-fc3c2471-4c97-4a41-a388-1091de316473 req-fa989ddd-bc40-4622-9e49-f481db71ed30 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Received unexpected event network-vif-unplugged-1baf23f4-ec91-408d-b406-37757eba550e for instance with vm_state stopped and task_state None.
Nov 25 16:35:22 compute-0 nova_compute[254092]: 2025-11-25 16:35:22.020 254096 DEBUG nova.compute.manager [req-fc3c2471-4c97-4a41-a388-1091de316473 req-fa989ddd-bc40-4622-9e49-f481db71ed30 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Received event network-vif-plugged-1baf23f4-ec91-408d-b406-37757eba550e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:22 compute-0 nova_compute[254092]: 2025-11-25 16:35:22.020 254096 DEBUG oslo_concurrency.lockutils [req-fc3c2471-4c97-4a41-a388-1091de316473 req-fa989ddd-bc40-4622-9e49-f481db71ed30 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "763ace49-fa88-443d-9733-e919b6f86fab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:22 compute-0 nova_compute[254092]: 2025-11-25 16:35:22.021 254096 DEBUG oslo_concurrency.lockutils [req-fc3c2471-4c97-4a41-a388-1091de316473 req-fa989ddd-bc40-4622-9e49-f481db71ed30 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "763ace49-fa88-443d-9733-e919b6f86fab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:22 compute-0 nova_compute[254092]: 2025-11-25 16:35:22.021 254096 DEBUG oslo_concurrency.lockutils [req-fc3c2471-4c97-4a41-a388-1091de316473 req-fa989ddd-bc40-4622-9e49-f481db71ed30 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "763ace49-fa88-443d-9733-e919b6f86fab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:22 compute-0 nova_compute[254092]: 2025-11-25 16:35:22.021 254096 DEBUG nova.compute.manager [req-fc3c2471-4c97-4a41-a388-1091de316473 req-fa989ddd-bc40-4622-9e49-f481db71ed30 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] No waiting events found dispatching network-vif-plugged-1baf23f4-ec91-408d-b406-37757eba550e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:35:22 compute-0 nova_compute[254092]: 2025-11-25 16:35:22.022 254096 WARNING nova.compute.manager [req-fc3c2471-4c97-4a41-a388-1091de316473 req-fa989ddd-bc40-4622-9e49-f481db71ed30 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Received unexpected event network-vif-plugged-1baf23f4-ec91-408d-b406-37757eba550e for instance with vm_state stopped and task_state None.
Nov 25 16:35:22 compute-0 nova_compute[254092]: 2025-11-25 16:35:22.511 254096 INFO nova.virt.libvirt.driver [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Creating config drive at /var/lib/nova/instances/8457c008-75d8-4c24-9ae2-6b8a526312ce/disk.config
Nov 25 16:35:22 compute-0 nova_compute[254092]: 2025-11-25 16:35:22.516 254096 DEBUG oslo_concurrency.processutils [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8457c008-75d8-4c24-9ae2-6b8a526312ce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp608aam76 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:35:22 compute-0 ceph-mon[74985]: pgmap v1444: 321 pgs: 321 active+clean; 336 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 360 KiB/s rd, 4.4 MiB/s wr, 108 op/s
Nov 25 16:35:22 compute-0 nova_compute[254092]: 2025-11-25 16:35:22.660 254096 DEBUG oslo_concurrency.processutils [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8457c008-75d8-4c24-9ae2-6b8a526312ce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp608aam76" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:35:22 compute-0 nova_compute[254092]: 2025-11-25 16:35:22.683 254096 DEBUG nova.storage.rbd_utils [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] rbd image 8457c008-75d8-4c24-9ae2-6b8a526312ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:35:22 compute-0 nova_compute[254092]: 2025-11-25 16:35:22.687 254096 DEBUG oslo_concurrency.processutils [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8457c008-75d8-4c24-9ae2-6b8a526312ce/disk.config 8457c008-75d8-4c24-9ae2-6b8a526312ce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:35:22 compute-0 nova_compute[254092]: 2025-11-25 16:35:22.802 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:22 compute-0 nova_compute[254092]: 2025-11-25 16:35:22.853 254096 DEBUG oslo_concurrency.processutils [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8457c008-75d8-4c24-9ae2-6b8a526312ce/disk.config 8457c008-75d8-4c24-9ae2-6b8a526312ce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:35:22 compute-0 nova_compute[254092]: 2025-11-25 16:35:22.854 254096 INFO nova.virt.libvirt.driver [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Deleting local config drive /var/lib/nova/instances/8457c008-75d8-4c24-9ae2-6b8a526312ce/disk.config because it was imported into RBD.
Nov 25 16:35:22 compute-0 kernel: tap12524556-74: entered promiscuous mode
Nov 25 16:35:22 compute-0 NetworkManager[48891]: <info>  [1764088522.9102] manager: (tap12524556-74): new Tun device (/org/freedesktop/NetworkManager/Devices/150)
Nov 25 16:35:22 compute-0 nova_compute[254092]: 2025-11-25 16:35:22.914 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:22 compute-0 ovn_controller[153477]: 2025-11-25T16:35:22Z|00330|binding|INFO|Claiming lport 12524556-7486-4f17-95f0-2984a51a4542 for this chassis.
Nov 25 16:35:22 compute-0 ovn_controller[153477]: 2025-11-25T16:35:22Z|00331|binding|INFO|12524556-7486-4f17-95f0-2984a51a4542: Claiming fa:16:3e:3a:8f:b2 10.100.0.5
Nov 25 16:35:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:22.924 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:8f:b2 10.100.0.5'], port_security=['fa:16:3e:3a:8f:b2 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '8457c008-75d8-4c24-9ae2-6b8a526312ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-82742f46-fb6e-443e-a99d-84c5367a4ccd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fe7901baa563491c8609089aa4334bf1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '10995e2d-e9a2-4098-859c-5dcd4d5f741f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d6fc76d1-a8a4-44cf-ab8f-0304e50e033c, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=12524556-7486-4f17-95f0-2984a51a4542) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:35:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:22.925 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 12524556-7486-4f17-95f0-2984a51a4542 in datapath 82742f46-fb6e-443e-a99d-84c5367a4ccd bound to our chassis
Nov 25 16:35:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:22.926 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 82742f46-fb6e-443e-a99d-84c5367a4ccd
Nov 25 16:35:22 compute-0 ovn_controller[153477]: 2025-11-25T16:35:22Z|00332|binding|INFO|Setting lport 12524556-7486-4f17-95f0-2984a51a4542 ovn-installed in OVS
Nov 25 16:35:22 compute-0 ovn_controller[153477]: 2025-11-25T16:35:22Z|00333|binding|INFO|Setting lport 12524556-7486-4f17-95f0-2984a51a4542 up in Southbound
Nov 25 16:35:22 compute-0 nova_compute[254092]: 2025-11-25 16:35:22.937 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:22 compute-0 systemd-udevd[303005]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:35:22 compute-0 nova_compute[254092]: 2025-11-25 16:35:22.945 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:22.945 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[016cd8c2-2294-4d31-83f0-5933359ba74e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:22 compute-0 systemd-machined[216343]: New machine qemu-49-instance-0000002b.
Nov 25 16:35:22 compute-0 NetworkManager[48891]: <info>  [1764088522.9560] device (tap12524556-74): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:35:22 compute-0 NetworkManager[48891]: <info>  [1764088522.9570] device (tap12524556-74): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:35:22 compute-0 systemd[1]: Started Virtual Machine qemu-49-instance-0000002b.
Nov 25 16:35:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:22.978 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[57b4d2b5-8840-41b7-a265-689f2f275802]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:22.982 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[cce7631f-e0c1-4ae5-8a69-30a450ee4129]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:23.013 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[5ff2bc2a-6862-4e91-aef1-235e324d6394]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:23.035 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b31942aa-3ca5-4422-ac79-817f2d10fda3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap82742f46-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:00:df:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 83], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 494588, 'reachable_time': 19595, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303019, 'error': None, 'target': 'ovnmeta-82742f46-fb6e-443e-a99d-84c5367a4ccd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:23.056 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8b8b14c9-42bc-4e1a-8a94-b5b4b0f91632]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap82742f46-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 494599, 'tstamp': 494599}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303021, 'error': None, 'target': 'ovnmeta-82742f46-fb6e-443e-a99d-84c5367a4ccd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap82742f46-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 494601, 'tstamp': 494601}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303021, 'error': None, 'target': 'ovnmeta-82742f46-fb6e-443e-a99d-84c5367a4ccd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:23.057 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap82742f46-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:23 compute-0 nova_compute[254092]: 2025-11-25 16:35:23.059 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:23 compute-0 nova_compute[254092]: 2025-11-25 16:35:23.065 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:23.066 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap82742f46-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:23.066 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:35:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:23.067 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap82742f46-f0, col_values=(('external_ids', {'iface-id': '639a1689-3ed6-4bc6-98a0-e7a7773b6e05'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:23.067 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:35:23 compute-0 nova_compute[254092]: 2025-11-25 16:35:23.320 254096 DEBUG nova.compute.manager [None req-af07df9f-727b-487b-ac1d-1f28ee5d4ef7 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:35:23 compute-0 nova_compute[254092]: 2025-11-25 16:35:23.375 254096 INFO nova.compute.manager [None req-af07df9f-727b-487b-ac1d-1f28ee5d4ef7 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] instance snapshotting
Nov 25 16:35:23 compute-0 nova_compute[254092]: 2025-11-25 16:35:23.375 254096 WARNING nova.compute.manager [None req-af07df9f-727b-487b-ac1d-1f28ee5d4ef7 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] trying to snapshot a non-running instance: (state: 4 expected: 1)
Nov 25 16:35:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1445: 321 pgs: 321 active+clean; 336 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 348 KiB/s rd, 4.4 MiB/s wr, 105 op/s
Nov 25 16:35:23 compute-0 nova_compute[254092]: 2025-11-25 16:35:23.403 254096 DEBUG nova.network.neutron [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Successfully created port: 18d58a6f-2179-462b-993c-b9ce6369673f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:35:23 compute-0 nova_compute[254092]: 2025-11-25 16:35:23.524 254096 DEBUG nova.network.neutron [req-e5a7ed69-7e06-41de-8ef1-0067133fbd83 req-f434c30a-ea64-4094-b851-0537113950af a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Updated VIF entry in instance network info cache for port 12524556-7486-4f17-95f0-2984a51a4542. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:35:23 compute-0 nova_compute[254092]: 2025-11-25 16:35:23.524 254096 DEBUG nova.network.neutron [req-e5a7ed69-7e06-41de-8ef1-0067133fbd83 req-f434c30a-ea64-4094-b851-0537113950af a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Updating instance_info_cache with network_info: [{"id": "12524556-7486-4f17-95f0-2984a51a4542", "address": "fa:16:3e:3a:8f:b2", "network": {"id": "82742f46-fb6e-443e-a99d-84c5367a4ccd", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-522194148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe7901baa563491c8609089aa4334bf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12524556-74", "ovs_interfaceid": "12524556-7486-4f17-95f0-2984a51a4542", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:35:23 compute-0 nova_compute[254092]: 2025-11-25 16:35:23.542 254096 DEBUG oslo_concurrency.lockutils [req-e5a7ed69-7e06-41de-8ef1-0067133fbd83 req-f434c30a-ea64-4094-b851-0537113950af a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-8457c008-75d8-4c24-9ae2-6b8a526312ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:35:23 compute-0 nova_compute[254092]: 2025-11-25 16:35:23.681 254096 INFO nova.virt.libvirt.driver [None req-af07df9f-727b-487b-ac1d-1f28ee5d4ef7 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Beginning cold snapshot process
Nov 25 16:35:23 compute-0 nova_compute[254092]: 2025-11-25 16:35:23.820 254096 DEBUG nova.virt.libvirt.imagebackend [None req-af07df9f-727b-487b-ac1d-1f28ee5d4ef7 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] No parent info for 8b512c8e-2281-41de-a668-eb983e174ba0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 25 16:35:23 compute-0 nova_compute[254092]: 2025-11-25 16:35:23.885 254096 DEBUG nova.compute.manager [req-94335352-ce7d-4f3f-b07c-35f4c1502e1a req-679ba150-80d8-4237-b2f6-9d07d6fd87ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Received event network-vif-plugged-12524556-7486-4f17-95f0-2984a51a4542 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:23 compute-0 nova_compute[254092]: 2025-11-25 16:35:23.885 254096 DEBUG oslo_concurrency.lockutils [req-94335352-ce7d-4f3f-b07c-35f4c1502e1a req-679ba150-80d8-4237-b2f6-9d07d6fd87ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8457c008-75d8-4c24-9ae2-6b8a526312ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:23 compute-0 nova_compute[254092]: 2025-11-25 16:35:23.886 254096 DEBUG oslo_concurrency.lockutils [req-94335352-ce7d-4f3f-b07c-35f4c1502e1a req-679ba150-80d8-4237-b2f6-9d07d6fd87ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8457c008-75d8-4c24-9ae2-6b8a526312ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:23 compute-0 nova_compute[254092]: 2025-11-25 16:35:23.886 254096 DEBUG oslo_concurrency.lockutils [req-94335352-ce7d-4f3f-b07c-35f4c1502e1a req-679ba150-80d8-4237-b2f6-9d07d6fd87ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8457c008-75d8-4c24-9ae2-6b8a526312ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:23 compute-0 nova_compute[254092]: 2025-11-25 16:35:23.886 254096 DEBUG nova.compute.manager [req-94335352-ce7d-4f3f-b07c-35f4c1502e1a req-679ba150-80d8-4237-b2f6-9d07d6fd87ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Processing event network-vif-plugged-12524556-7486-4f17-95f0-2984a51a4542 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:35:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:35:24 compute-0 nova_compute[254092]: 2025-11-25 16:35:24.032 254096 DEBUG nova.compute.manager [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:35:24 compute-0 nova_compute[254092]: 2025-11-25 16:35:24.034 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088524.032208, 8457c008-75d8-4c24-9ae2-6b8a526312ce => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:35:24 compute-0 nova_compute[254092]: 2025-11-25 16:35:24.034 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] VM Started (Lifecycle Event)
Nov 25 16:35:24 compute-0 nova_compute[254092]: 2025-11-25 16:35:24.037 254096 DEBUG nova.virt.libvirt.driver [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:35:24 compute-0 nova_compute[254092]: 2025-11-25 16:35:24.040 254096 INFO nova.virt.libvirt.driver [-] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Instance spawned successfully.
Nov 25 16:35:24 compute-0 nova_compute[254092]: 2025-11-25 16:35:24.040 254096 DEBUG nova.virt.libvirt.driver [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:35:24 compute-0 nova_compute[254092]: 2025-11-25 16:35:24.055 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:35:24 compute-0 nova_compute[254092]: 2025-11-25 16:35:24.060 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:35:24 compute-0 nova_compute[254092]: 2025-11-25 16:35:24.063 254096 DEBUG nova.virt.libvirt.driver [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:35:24 compute-0 nova_compute[254092]: 2025-11-25 16:35:24.063 254096 DEBUG nova.virt.libvirt.driver [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:35:24 compute-0 nova_compute[254092]: 2025-11-25 16:35:24.064 254096 DEBUG nova.virt.libvirt.driver [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:35:24 compute-0 nova_compute[254092]: 2025-11-25 16:35:24.064 254096 DEBUG nova.virt.libvirt.driver [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:35:24 compute-0 nova_compute[254092]: 2025-11-25 16:35:24.064 254096 DEBUG nova.virt.libvirt.driver [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:35:24 compute-0 nova_compute[254092]: 2025-11-25 16:35:24.065 254096 DEBUG nova.virt.libvirt.driver [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:35:24 compute-0 nova_compute[254092]: 2025-11-25 16:35:24.109 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:35:24 compute-0 nova_compute[254092]: 2025-11-25 16:35:24.110 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088524.0331619, 8457c008-75d8-4c24-9ae2-6b8a526312ce => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:35:24 compute-0 nova_compute[254092]: 2025-11-25 16:35:24.110 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] VM Paused (Lifecycle Event)
Nov 25 16:35:24 compute-0 nova_compute[254092]: 2025-11-25 16:35:24.135 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:35:24 compute-0 nova_compute[254092]: 2025-11-25 16:35:24.138 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088524.036618, 8457c008-75d8-4c24-9ae2-6b8a526312ce => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:35:24 compute-0 nova_compute[254092]: 2025-11-25 16:35:24.138 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] VM Resumed (Lifecycle Event)
Nov 25 16:35:24 compute-0 nova_compute[254092]: 2025-11-25 16:35:24.149 254096 INFO nova.compute.manager [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Took 9.24 seconds to spawn the instance on the hypervisor.
Nov 25 16:35:24 compute-0 nova_compute[254092]: 2025-11-25 16:35:24.149 254096 DEBUG nova.compute.manager [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:35:24 compute-0 nova_compute[254092]: 2025-11-25 16:35:24.160 254096 DEBUG nova.storage.rbd_utils [None req-af07df9f-727b-487b-ac1d-1f28ee5d4ef7 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] creating snapshot(46a14873cb3e4a2cbfe9a7a443ac0319) on rbd image(763ace49-fa88-443d-9733-e919b6f86fab_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 16:35:24 compute-0 nova_compute[254092]: 2025-11-25 16:35:24.187 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:35:24 compute-0 nova_compute[254092]: 2025-11-25 16:35:24.190 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:35:24 compute-0 nova_compute[254092]: 2025-11-25 16:35:24.214 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:35:24 compute-0 nova_compute[254092]: 2025-11-25 16:35:24.215 254096 DEBUG nova.network.neutron [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Successfully created port: 80f0ea34-88eb-4091-912e-db28507e1f4b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:35:24 compute-0 nova_compute[254092]: 2025-11-25 16:35:24.225 254096 INFO nova.compute.manager [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Took 10.28 seconds to build instance.
Nov 25 16:35:24 compute-0 nova_compute[254092]: 2025-11-25 16:35:24.243 254096 DEBUG oslo_concurrency.lockutils [None req-152de9c6-853e-4393-9d34-32a689aec03c 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Lock "8457c008-75d8-4c24-9ae2-6b8a526312ce" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.357s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:24 compute-0 nova_compute[254092]: 2025-11-25 16:35:24.532 254096 DEBUG oslo_concurrency.lockutils [None req-8c06458f-1ab1-4491-9fb6-1a15af2f04dd b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Acquiring lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:24 compute-0 nova_compute[254092]: 2025-11-25 16:35:24.533 254096 DEBUG oslo_concurrency.lockutils [None req-8c06458f-1ab1-4491-9fb6-1a15af2f04dd b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:24 compute-0 nova_compute[254092]: 2025-11-25 16:35:24.533 254096 INFO nova.compute.manager [None req-8c06458f-1ab1-4491-9fb6-1a15af2f04dd b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Shelving
Nov 25 16:35:24 compute-0 nova_compute[254092]: 2025-11-25 16:35:24.559 254096 DEBUG nova.virt.libvirt.driver [None req-8c06458f-1ab1-4491-9fb6-1a15af2f04dd b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 16:35:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e173 do_prune osdmap full prune enabled
Nov 25 16:35:24 compute-0 ceph-mon[74985]: pgmap v1445: 321 pgs: 321 active+clean; 336 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 348 KiB/s rd, 4.4 MiB/s wr, 105 op/s
Nov 25 16:35:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e174 e174: 3 total, 3 up, 3 in
Nov 25 16:35:24 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e174: 3 total, 3 up, 3 in
Nov 25 16:35:24 compute-0 nova_compute[254092]: 2025-11-25 16:35:24.720 254096 DEBUG nova.storage.rbd_utils [None req-af07df9f-727b-487b-ac1d-1f28ee5d4ef7 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] cloning vms/763ace49-fa88-443d-9733-e919b6f86fab_disk@46a14873cb3e4a2cbfe9a7a443ac0319 to images/9be9a739-7b86-4d87-a44d-f0b0fdc04095 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 16:35:24 compute-0 nova_compute[254092]: 2025-11-25 16:35:24.816 254096 DEBUG nova.storage.rbd_utils [None req-af07df9f-727b-487b-ac1d-1f28ee5d4ef7 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] flattening images/9be9a739-7b86-4d87-a44d-f0b0fdc04095 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 16:35:25 compute-0 nova_compute[254092]: 2025-11-25 16:35:25.282 254096 DEBUG nova.storage.rbd_utils [None req-af07df9f-727b-487b-ac1d-1f28ee5d4ef7 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] removing snapshot(46a14873cb3e4a2cbfe9a7a443ac0319) on rbd image(763ace49-fa88-443d-9733-e919b6f86fab_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 16:35:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:25.346 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:35:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:25.347 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 16:35:25 compute-0 nova_compute[254092]: 2025-11-25 16:35:25.375 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1447: 321 pgs: 321 active+clean; 388 MiB data, 612 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 7.1 MiB/s wr, 199 op/s
Nov 25 16:35:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e174 do_prune osdmap full prune enabled
Nov 25 16:35:25 compute-0 ceph-mon[74985]: osdmap e174: 3 total, 3 up, 3 in
Nov 25 16:35:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e175 e175: 3 total, 3 up, 3 in
Nov 25 16:35:25 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e175: 3 total, 3 up, 3 in
Nov 25 16:35:25 compute-0 nova_compute[254092]: 2025-11-25 16:35:25.705 254096 DEBUG nova.storage.rbd_utils [None req-af07df9f-727b-487b-ac1d-1f28ee5d4ef7 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] creating snapshot(snap) on rbd image(9be9a739-7b86-4d87-a44d-f0b0fdc04095) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 16:35:25 compute-0 nova_compute[254092]: 2025-11-25 16:35:25.739 254096 DEBUG nova.network.neutron [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Successfully updated port: 18d58a6f-2179-462b-993c-b9ce6369673f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:35:25 compute-0 nova_compute[254092]: 2025-11-25 16:35:25.844 254096 DEBUG nova.compute.manager [req-d2ea30c2-f953-4955-974a-4e821af723d0 req-12046a5d-6482-4b56-914d-2a437814a0b8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Received event network-changed-18d58a6f-2179-462b-993c-b9ce6369673f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:25 compute-0 nova_compute[254092]: 2025-11-25 16:35:25.845 254096 DEBUG nova.compute.manager [req-d2ea30c2-f953-4955-974a-4e821af723d0 req-12046a5d-6482-4b56-914d-2a437814a0b8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Refreshing instance network info cache due to event network-changed-18d58a6f-2179-462b-993c-b9ce6369673f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:35:25 compute-0 nova_compute[254092]: 2025-11-25 16:35:25.845 254096 DEBUG oslo_concurrency.lockutils [req-d2ea30c2-f953-4955-974a-4e821af723d0 req-12046a5d-6482-4b56-914d-2a437814a0b8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-7a89b21c-79db-4e5f-88fd-35557c8c15ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:35:25 compute-0 nova_compute[254092]: 2025-11-25 16:35:25.845 254096 DEBUG oslo_concurrency.lockutils [req-d2ea30c2-f953-4955-974a-4e821af723d0 req-12046a5d-6482-4b56-914d-2a437814a0b8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-7a89b21c-79db-4e5f-88fd-35557c8c15ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:35:25 compute-0 nova_compute[254092]: 2025-11-25 16:35:25.846 254096 DEBUG nova.network.neutron [req-d2ea30c2-f953-4955-974a-4e821af723d0 req-12046a5d-6482-4b56-914d-2a437814a0b8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Refreshing network info cache for port 18d58a6f-2179-462b-993c-b9ce6369673f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:35:26 compute-0 nova_compute[254092]: 2025-11-25 16:35:26.061 254096 DEBUG nova.network.neutron [req-d2ea30c2-f953-4955-974a-4e821af723d0 req-12046a5d-6482-4b56-914d-2a437814a0b8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:35:26 compute-0 nova_compute[254092]: 2025-11-25 16:35:26.445 254096 DEBUG nova.compute.manager [req-ae317d9c-aa07-48be-819f-3295204fe043 req-6913af94-2eaf-4e8f-9a47-ad3f70183e7e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Received event network-vif-plugged-12524556-7486-4f17-95f0-2984a51a4542 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:26 compute-0 nova_compute[254092]: 2025-11-25 16:35:26.446 254096 DEBUG oslo_concurrency.lockutils [req-ae317d9c-aa07-48be-819f-3295204fe043 req-6913af94-2eaf-4e8f-9a47-ad3f70183e7e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8457c008-75d8-4c24-9ae2-6b8a526312ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:26 compute-0 nova_compute[254092]: 2025-11-25 16:35:26.447 254096 DEBUG oslo_concurrency.lockutils [req-ae317d9c-aa07-48be-819f-3295204fe043 req-6913af94-2eaf-4e8f-9a47-ad3f70183e7e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8457c008-75d8-4c24-9ae2-6b8a526312ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:26 compute-0 nova_compute[254092]: 2025-11-25 16:35:26.447 254096 DEBUG oslo_concurrency.lockutils [req-ae317d9c-aa07-48be-819f-3295204fe043 req-6913af94-2eaf-4e8f-9a47-ad3f70183e7e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8457c008-75d8-4c24-9ae2-6b8a526312ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:26 compute-0 nova_compute[254092]: 2025-11-25 16:35:26.448 254096 DEBUG nova.compute.manager [req-ae317d9c-aa07-48be-819f-3295204fe043 req-6913af94-2eaf-4e8f-9a47-ad3f70183e7e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] No waiting events found dispatching network-vif-plugged-12524556-7486-4f17-95f0-2984a51a4542 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:35:26 compute-0 nova_compute[254092]: 2025-11-25 16:35:26.448 254096 WARNING nova.compute.manager [req-ae317d9c-aa07-48be-819f-3295204fe043 req-6913af94-2eaf-4e8f-9a47-ad3f70183e7e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Received unexpected event network-vif-plugged-12524556-7486-4f17-95f0-2984a51a4542 for instance with vm_state active and task_state None.
Nov 25 16:35:26 compute-0 nova_compute[254092]: 2025-11-25 16:35:26.616 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Nov 25 16:35:26 compute-0 ceph-mon[74985]: pgmap v1447: 321 pgs: 321 active+clean; 388 MiB data, 612 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 7.1 MiB/s wr, 199 op/s
Nov 25 16:35:26 compute-0 ceph-mon[74985]: osdmap e175: 3 total, 3 up, 3 in
Nov 25 16:35:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Nov 25 16:35:26 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Nov 25 16:35:26 compute-0 kernel: tap660536bc-d4 (unregistering): left promiscuous mode
Nov 25 16:35:26 compute-0 NetworkManager[48891]: <info>  [1764088526.9414] device (tap660536bc-d4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:35:26 compute-0 ovn_controller[153477]: 2025-11-25T16:35:26Z|00334|binding|INFO|Releasing lport 660536bc-d4bf-4a4b-9515-06043951c25e from this chassis (sb_readonly=0)
Nov 25 16:35:26 compute-0 ovn_controller[153477]: 2025-11-25T16:35:26Z|00335|binding|INFO|Setting lport 660536bc-d4bf-4a4b-9515-06043951c25e down in Southbound
Nov 25 16:35:26 compute-0 ovn_controller[153477]: 2025-11-25T16:35:26Z|00336|binding|INFO|Removing iface tap660536bc-d4 ovn-installed in OVS
Nov 25 16:35:26 compute-0 nova_compute[254092]: 2025-11-25 16:35:26.957 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:26 compute-0 nova_compute[254092]: 2025-11-25 16:35:26.971 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:26 compute-0 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000025.scope: Deactivated successfully.
Nov 25 16:35:26 compute-0 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000025.scope: Consumed 18.110s CPU time.
Nov 25 16:35:26 compute-0 systemd-machined[216343]: Machine qemu-43-instance-00000025 terminated.
Nov 25 16:35:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:27.136 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:46:64 10.100.0.8'], port_security=['fa:16:3e:10:46:64 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'a3d5d205-98f0-4820-a96c-7f3e59d0cdd9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3960d4c5-60d7-49e3-b26d-f1317dd96f9f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d7c4dbc1eb44f39aa7ccb9b6363e554', 'neutron:revision_number': '4', 'neutron:security_group_ids': '11c41536-eaec-4c18-861e-ff1c71d0a6fa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9a9fa8a-e323-43f7-a155-b2b5010363da, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=660536bc-d4bf-4a4b-9515-06043951c25e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:35:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:27.137 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 660536bc-d4bf-4a4b-9515-06043951c25e in datapath 3960d4c5-60d7-49e3-b26d-f1317dd96f9f unbound from our chassis
Nov 25 16:35:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:27.138 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3960d4c5-60d7-49e3-b26d-f1317dd96f9f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:35:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:27.139 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4c9a38bb-20fb-4358-9f98-932991fdf5f4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:27.140 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f namespace which is not needed anymore
Nov 25 16:35:27 compute-0 neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f[298147]: [NOTICE]   (298151) : haproxy version is 2.8.14-c23fe91
Nov 25 16:35:27 compute-0 neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f[298147]: [NOTICE]   (298151) : path to executable is /usr/sbin/haproxy
Nov 25 16:35:27 compute-0 neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f[298147]: [WARNING]  (298151) : Exiting Master process...
Nov 25 16:35:27 compute-0 neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f[298147]: [ALERT]    (298151) : Current worker (298153) exited with code 143 (Terminated)
Nov 25 16:35:27 compute-0 neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f[298147]: [WARNING]  (298151) : All workers exited. Exiting... (0)
Nov 25 16:35:27 compute-0 systemd[1]: libpod-f13276be76992c60475338c687b99c5f317b2e436cba236d236d083e893cbbd0.scope: Deactivated successfully.
Nov 25 16:35:27 compute-0 podman[303247]: 2025-11-25 16:35:27.285600449 +0000 UTC m=+0.053332385 container died f13276be76992c60475338c687b99c5f317b2e436cba236d236d083e893cbbd0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:35:27 compute-0 nova_compute[254092]: 2025-11-25 16:35:27.308 254096 DEBUG nova.network.neutron [req-d2ea30c2-f953-4955-974a-4e821af723d0 req-12046a5d-6482-4b56-914d-2a437814a0b8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:35:27 compute-0 nova_compute[254092]: 2025-11-25 16:35:27.325 254096 DEBUG oslo_concurrency.lockutils [req-d2ea30c2-f953-4955-974a-4e821af723d0 req-12046a5d-6482-4b56-914d-2a437814a0b8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-7a89b21c-79db-4e5f-88fd-35557c8c15ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:35:27 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f13276be76992c60475338c687b99c5f317b2e436cba236d236d083e893cbbd0-userdata-shm.mount: Deactivated successfully.
Nov 25 16:35:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b5991d4fb967b0680449d6a41a8175c21fb80442f2c06a6e211d9a0ef3cb1a6-merged.mount: Deactivated successfully.
Nov 25 16:35:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1450: 321 pgs: 321 active+clean; 414 MiB data, 634 MiB used, 59 GiB / 60 GiB avail; 9.3 MiB/s rd, 7.0 MiB/s wr, 235 op/s
Nov 25 16:35:27 compute-0 podman[303247]: 2025-11-25 16:35:27.380402378 +0000 UTC m=+0.148134314 container cleanup f13276be76992c60475338c687b99c5f317b2e436cba236d236d083e893cbbd0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 16:35:27 compute-0 systemd[1]: libpod-conmon-f13276be76992c60475338c687b99c5f317b2e436cba236d236d083e893cbbd0.scope: Deactivated successfully.
Nov 25 16:35:27 compute-0 podman[303276]: 2025-11-25 16:35:27.444236746 +0000 UTC m=+0.041498434 container remove f13276be76992c60475338c687b99c5f317b2e436cba236d236d083e893cbbd0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:35:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:27.451 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1a5d1954-1a59-4eef-9026-038f7a6311b4]: (4, ('Tue Nov 25 04:35:27 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f (f13276be76992c60475338c687b99c5f317b2e436cba236d236d083e893cbbd0)\nf13276be76992c60475338c687b99c5f317b2e436cba236d236d083e893cbbd0\nTue Nov 25 04:35:27 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f (f13276be76992c60475338c687b99c5f317b2e436cba236d236d083e893cbbd0)\nf13276be76992c60475338c687b99c5f317b2e436cba236d236d083e893cbbd0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:27.453 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3d5a43fa-8e1e-45ea-8e95-621661afb155]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:27.456 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3960d4c5-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:27 compute-0 kernel: tap3960d4c5-60: left promiscuous mode
Nov 25 16:35:27 compute-0 nova_compute[254092]: 2025-11-25 16:35:27.459 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:27 compute-0 nova_compute[254092]: 2025-11-25 16:35:27.480 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:27.482 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5451fcff-3ea4-44a8-898d-e3c89cbe04f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:27.501 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a0d9c97a-16a3-4d3e-8520-88609c279e23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:27.502 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1c138b5b-b02b-4f68-8d6e-a6be70f94f06]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:27.517 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6196283a-d807-4000-ba49-25d2b2a6c85d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 487832, 'reachable_time': 39486, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303297, 'error': None, 'target': 'ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:27 compute-0 systemd[1]: run-netns-ovnmeta\x2d3960d4c5\x2d60d7\x2d49e3\x2db26d\x2df1317dd96f9f.mount: Deactivated successfully.
Nov 25 16:35:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:27.523 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:35:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:27.524 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[06bab06c-5896-4f7e-842c-d8be5de2cda4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:27 compute-0 nova_compute[254092]: 2025-11-25 16:35:27.583 254096 INFO nova.virt.libvirt.driver [None req-8c06458f-1ab1-4491-9fb6-1a15af2f04dd b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Instance shutdown successfully after 3 seconds.
Nov 25 16:35:27 compute-0 nova_compute[254092]: 2025-11-25 16:35:27.591 254096 INFO nova.virt.libvirt.driver [-] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Instance destroyed successfully.
Nov 25 16:35:27 compute-0 nova_compute[254092]: 2025-11-25 16:35:27.592 254096 DEBUG nova.objects.instance [None req-8c06458f-1ab1-4491-9fb6-1a15af2f04dd b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lazy-loading 'numa_topology' on Instance uuid a3d5d205-98f0-4820-a96c-7f3e59d0cdd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:35:27 compute-0 nova_compute[254092]: 2025-11-25 16:35:27.837 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:27 compute-0 ceph-mon[74985]: osdmap e176: 3 total, 3 up, 3 in
Nov 25 16:35:28 compute-0 nova_compute[254092]: 2025-11-25 16:35:28.017 254096 INFO nova.virt.libvirt.driver [None req-8c06458f-1ab1-4491-9fb6-1a15af2f04dd b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Beginning cold snapshot process
Nov 25 16:35:28 compute-0 nova_compute[254092]: 2025-11-25 16:35:28.142 254096 DEBUG nova.network.neutron [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Successfully updated port: 80f0ea34-88eb-4091-912e-db28507e1f4b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:35:28 compute-0 nova_compute[254092]: 2025-11-25 16:35:28.167 254096 DEBUG nova.compute.manager [req-e29825bc-e7d9-4053-b140-60ea800fc904 req-6983cfe2-b6b2-4510-b1ed-5c0237e41d9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Received event network-vif-unplugged-660536bc-d4bf-4a4b-9515-06043951c25e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:28 compute-0 nova_compute[254092]: 2025-11-25 16:35:28.167 254096 DEBUG oslo_concurrency.lockutils [req-e29825bc-e7d9-4053-b140-60ea800fc904 req-6983cfe2-b6b2-4510-b1ed-5c0237e41d9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:28 compute-0 nova_compute[254092]: 2025-11-25 16:35:28.168 254096 DEBUG oslo_concurrency.lockutils [req-e29825bc-e7d9-4053-b140-60ea800fc904 req-6983cfe2-b6b2-4510-b1ed-5c0237e41d9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:28 compute-0 nova_compute[254092]: 2025-11-25 16:35:28.168 254096 DEBUG oslo_concurrency.lockutils [req-e29825bc-e7d9-4053-b140-60ea800fc904 req-6983cfe2-b6b2-4510-b1ed-5c0237e41d9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:28 compute-0 nova_compute[254092]: 2025-11-25 16:35:28.168 254096 DEBUG nova.compute.manager [req-e29825bc-e7d9-4053-b140-60ea800fc904 req-6983cfe2-b6b2-4510-b1ed-5c0237e41d9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] No waiting events found dispatching network-vif-unplugged-660536bc-d4bf-4a4b-9515-06043951c25e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:35:28 compute-0 nova_compute[254092]: 2025-11-25 16:35:28.168 254096 WARNING nova.compute.manager [req-e29825bc-e7d9-4053-b140-60ea800fc904 req-6983cfe2-b6b2-4510-b1ed-5c0237e41d9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Received unexpected event network-vif-unplugged-660536bc-d4bf-4a4b-9515-06043951c25e for instance with vm_state active and task_state shelving_image_uploading.
Nov 25 16:35:28 compute-0 nova_compute[254092]: 2025-11-25 16:35:28.253 254096 DEBUG oslo_concurrency.lockutils [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Acquiring lock "refresh_cache-7a89b21c-79db-4e5f-88fd-35557c8c15ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:35:28 compute-0 nova_compute[254092]: 2025-11-25 16:35:28.253 254096 DEBUG oslo_concurrency.lockutils [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Acquired lock "refresh_cache-7a89b21c-79db-4e5f-88fd-35557c8c15ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:35:28 compute-0 nova_compute[254092]: 2025-11-25 16:35:28.253 254096 DEBUG nova.network.neutron [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:35:28 compute-0 nova_compute[254092]: 2025-11-25 16:35:28.258 254096 DEBUG nova.virt.libvirt.imagebackend [None req-8c06458f-1ab1-4491-9fb6-1a15af2f04dd b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] No parent info for 8b512c8e-2281-41de-a668-eb983e174ba0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 25 16:35:28 compute-0 nova_compute[254092]: 2025-11-25 16:35:28.405 254096 DEBUG nova.network.neutron [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:35:28 compute-0 nova_compute[254092]: 2025-11-25 16:35:28.494 254096 DEBUG nova.storage.rbd_utils [None req-8c06458f-1ab1-4491-9fb6-1a15af2f04dd b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] creating snapshot(181c706f45c34af995ddfe8288429eb1) on rbd image(a3d5d205-98f0-4820-a96c-7f3e59d0cdd9_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 16:35:28 compute-0 nova_compute[254092]: 2025-11-25 16:35:28.649 254096 DEBUG nova.compute.manager [req-35d09557-cc7a-45fd-b346-bbe3bcba1296 req-1a78586e-3ac1-4fd5-96e8-1ecf07e97219 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Received event network-changed-12524556-7486-4f17-95f0-2984a51a4542 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:28 compute-0 nova_compute[254092]: 2025-11-25 16:35:28.649 254096 DEBUG nova.compute.manager [req-35d09557-cc7a-45fd-b346-bbe3bcba1296 req-1a78586e-3ac1-4fd5-96e8-1ecf07e97219 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Refreshing instance network info cache due to event network-changed-12524556-7486-4f17-95f0-2984a51a4542. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:35:28 compute-0 nova_compute[254092]: 2025-11-25 16:35:28.650 254096 DEBUG oslo_concurrency.lockutils [req-35d09557-cc7a-45fd-b346-bbe3bcba1296 req-1a78586e-3ac1-4fd5-96e8-1ecf07e97219 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-8457c008-75d8-4c24-9ae2-6b8a526312ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:35:28 compute-0 nova_compute[254092]: 2025-11-25 16:35:28.650 254096 DEBUG oslo_concurrency.lockutils [req-35d09557-cc7a-45fd-b346-bbe3bcba1296 req-1a78586e-3ac1-4fd5-96e8-1ecf07e97219 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-8457c008-75d8-4c24-9ae2-6b8a526312ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:35:28 compute-0 nova_compute[254092]: 2025-11-25 16:35:28.650 254096 DEBUG nova.network.neutron [req-35d09557-cc7a-45fd-b346-bbe3bcba1296 req-1a78586e-3ac1-4fd5-96e8-1ecf07e97219 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Refreshing network info cache for port 12524556-7486-4f17-95f0-2984a51a4542 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:35:28 compute-0 nova_compute[254092]: 2025-11-25 16:35:28.677 254096 DEBUG oslo_concurrency.lockutils [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Acquiring lock "8457c008-75d8-4c24-9ae2-6b8a526312ce" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:28 compute-0 nova_compute[254092]: 2025-11-25 16:35:28.678 254096 DEBUG oslo_concurrency.lockutils [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Lock "8457c008-75d8-4c24-9ae2-6b8a526312ce" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:28 compute-0 nova_compute[254092]: 2025-11-25 16:35:28.678 254096 INFO nova.compute.manager [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Rebooting instance
Nov 25 16:35:28 compute-0 nova_compute[254092]: 2025-11-25 16:35:28.696 254096 DEBUG oslo_concurrency.lockutils [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Acquiring lock "refresh_cache-8457c008-75d8-4c24-9ae2-6b8a526312ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:35:28 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e176 do_prune osdmap full prune enabled
Nov 25 16:35:28 compute-0 ceph-mon[74985]: pgmap v1450: 321 pgs: 321 active+clean; 414 MiB data, 634 MiB used, 59 GiB / 60 GiB avail; 9.3 MiB/s rd, 7.0 MiB/s wr, 235 op/s
Nov 25 16:35:28 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e177 e177: 3 total, 3 up, 3 in
Nov 25 16:35:28 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e177: 3 total, 3 up, 3 in
Nov 25 16:35:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:35:29 compute-0 nova_compute[254092]: 2025-11-25 16:35:29.135 254096 DEBUG nova.storage.rbd_utils [None req-8c06458f-1ab1-4491-9fb6-1a15af2f04dd b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] cloning vms/a3d5d205-98f0-4820-a96c-7f3e59d0cdd9_disk@181c706f45c34af995ddfe8288429eb1 to images/6d9352ea-7650-4e12-a1b0-1f5e5bc16789 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 16:35:29 compute-0 nova_compute[254092]: 2025-11-25 16:35:29.323 254096 INFO nova.virt.libvirt.driver [None req-af07df9f-727b-487b-ac1d-1f28ee5d4ef7 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Snapshot image upload complete
Nov 25 16:35:29 compute-0 nova_compute[254092]: 2025-11-25 16:35:29.323 254096 INFO nova.compute.manager [None req-af07df9f-727b-487b-ac1d-1f28ee5d4ef7 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Took 5.94 seconds to snapshot the instance on the hypervisor.
Nov 25 16:35:29 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:29.357 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1452: 321 pgs: 321 active+clean; 414 MiB data, 634 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 8.9 MiB/s wr, 298 op/s
Nov 25 16:35:29 compute-0 nova_compute[254092]: 2025-11-25 16:35:29.669 254096 DEBUG nova.storage.rbd_utils [None req-8c06458f-1ab1-4491-9fb6-1a15af2f04dd b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] flattening images/6d9352ea-7650-4e12-a1b0-1f5e5bc16789 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 16:35:29 compute-0 podman[303386]: 2025-11-25 16:35:29.690926702 +0000 UTC m=+0.103718210 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 16:35:29 compute-0 podman[303385]: 2025-11-25 16:35:29.703390731 +0000 UTC m=+0.103408693 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 16:35:29 compute-0 podman[303422]: 2025-11-25 16:35:29.80339171 +0000 UTC m=+0.121507553 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 25 16:35:30 compute-0 ceph-mon[74985]: osdmap e177: 3 total, 3 up, 3 in
Nov 25 16:35:30 compute-0 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Nov 25 16:35:30 compute-0 nova_compute[254092]: 2025-11-25 16:35:30.397 254096 DEBUG nova.compute.manager [req-dfaee46e-9456-4ad9-9158-fbe22b340392 req-5e815b29-bfe7-4d6a-894b-185f2a8906a1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Received event network-vif-plugged-660536bc-d4bf-4a4b-9515-06043951c25e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:30 compute-0 nova_compute[254092]: 2025-11-25 16:35:30.397 254096 DEBUG oslo_concurrency.lockutils [req-dfaee46e-9456-4ad9-9158-fbe22b340392 req-5e815b29-bfe7-4d6a-894b-185f2a8906a1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:30 compute-0 nova_compute[254092]: 2025-11-25 16:35:30.398 254096 DEBUG oslo_concurrency.lockutils [req-dfaee46e-9456-4ad9-9158-fbe22b340392 req-5e815b29-bfe7-4d6a-894b-185f2a8906a1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:30 compute-0 nova_compute[254092]: 2025-11-25 16:35:30.398 254096 DEBUG oslo_concurrency.lockutils [req-dfaee46e-9456-4ad9-9158-fbe22b340392 req-5e815b29-bfe7-4d6a-894b-185f2a8906a1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:30 compute-0 nova_compute[254092]: 2025-11-25 16:35:30.398 254096 DEBUG nova.compute.manager [req-dfaee46e-9456-4ad9-9158-fbe22b340392 req-5e815b29-bfe7-4d6a-894b-185f2a8906a1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] No waiting events found dispatching network-vif-plugged-660536bc-d4bf-4a4b-9515-06043951c25e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:35:30 compute-0 nova_compute[254092]: 2025-11-25 16:35:30.398 254096 WARNING nova.compute.manager [req-dfaee46e-9456-4ad9-9158-fbe22b340392 req-5e815b29-bfe7-4d6a-894b-185f2a8906a1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Received unexpected event network-vif-plugged-660536bc-d4bf-4a4b-9515-06043951c25e for instance with vm_state active and task_state shelving_image_uploading.
Nov 25 16:35:30 compute-0 nova_compute[254092]: 2025-11-25 16:35:30.398 254096 DEBUG nova.compute.manager [req-dfaee46e-9456-4ad9-9158-fbe22b340392 req-5e815b29-bfe7-4d6a-894b-185f2a8906a1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Received event network-changed-80f0ea34-88eb-4091-912e-db28507e1f4b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:30 compute-0 nova_compute[254092]: 2025-11-25 16:35:30.398 254096 DEBUG nova.compute.manager [req-dfaee46e-9456-4ad9-9158-fbe22b340392 req-5e815b29-bfe7-4d6a-894b-185f2a8906a1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Refreshing instance network info cache due to event network-changed-80f0ea34-88eb-4091-912e-db28507e1f4b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:35:30 compute-0 nova_compute[254092]: 2025-11-25 16:35:30.399 254096 DEBUG oslo_concurrency.lockutils [req-dfaee46e-9456-4ad9-9158-fbe22b340392 req-5e815b29-bfe7-4d6a-894b-185f2a8906a1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-7a89b21c-79db-4e5f-88fd-35557c8c15ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:35:31 compute-0 ceph-mon[74985]: pgmap v1452: 321 pgs: 321 active+clean; 414 MiB data, 634 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 8.9 MiB/s wr, 298 op/s
Nov 25 16:35:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1453: 321 pgs: 321 active+clean; 460 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 8.5 MiB/s rd, 7.1 MiB/s wr, 287 op/s
Nov 25 16:35:31 compute-0 nova_compute[254092]: 2025-11-25 16:35:31.446 254096 DEBUG nova.network.neutron [req-35d09557-cc7a-45fd-b346-bbe3bcba1296 req-1a78586e-3ac1-4fd5-96e8-1ecf07e97219 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Updated VIF entry in instance network info cache for port 12524556-7486-4f17-95f0-2984a51a4542. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:35:31 compute-0 nova_compute[254092]: 2025-11-25 16:35:31.447 254096 DEBUG nova.network.neutron [req-35d09557-cc7a-45fd-b346-bbe3bcba1296 req-1a78586e-3ac1-4fd5-96e8-1ecf07e97219 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Updating instance_info_cache with network_info: [{"id": "12524556-7486-4f17-95f0-2984a51a4542", "address": "fa:16:3e:3a:8f:b2", "network": {"id": "82742f46-fb6e-443e-a99d-84c5367a4ccd", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-522194148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe7901baa563491c8609089aa4334bf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12524556-74", "ovs_interfaceid": "12524556-7486-4f17-95f0-2984a51a4542", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:35:31 compute-0 nova_compute[254092]: 2025-11-25 16:35:31.583 254096 DEBUG oslo_concurrency.lockutils [req-35d09557-cc7a-45fd-b346-bbe3bcba1296 req-1a78586e-3ac1-4fd5-96e8-1ecf07e97219 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-8457c008-75d8-4c24-9ae2-6b8a526312ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:35:31 compute-0 nova_compute[254092]: 2025-11-25 16:35:31.583 254096 DEBUG oslo_concurrency.lockutils [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Acquired lock "refresh_cache-8457c008-75d8-4c24-9ae2-6b8a526312ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:35:31 compute-0 nova_compute[254092]: 2025-11-25 16:35:31.583 254096 DEBUG nova.network.neutron [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:35:31 compute-0 nova_compute[254092]: 2025-11-25 16:35:31.619 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:32 compute-0 ceph-mon[74985]: pgmap v1453: 321 pgs: 321 active+clean; 460 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 8.5 MiB/s rd, 7.1 MiB/s wr, 287 op/s
Nov 25 16:35:32 compute-0 nova_compute[254092]: 2025-11-25 16:35:32.850 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:32 compute-0 nova_compute[254092]: 2025-11-25 16:35:32.957 254096 DEBUG nova.storage.rbd_utils [None req-8c06458f-1ab1-4491-9fb6-1a15af2f04dd b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] removing snapshot(181c706f45c34af995ddfe8288429eb1) on rbd image(a3d5d205-98f0-4820-a96c-7f3e59d0cdd9_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 16:35:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1454: 321 pgs: 321 active+clean; 460 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 5.6 MiB/s wr, 223 op/s
Nov 25 16:35:33 compute-0 nova_compute[254092]: 2025-11-25 16:35:33.428 254096 DEBUG nova.network.neutron [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Updating instance_info_cache with network_info: [{"id": "18d58a6f-2179-462b-993c-b9ce6369673f", "address": "fa:16:3e:8d:0b:6c", "network": {"id": "d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-23122884", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.32", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18d58a6f-21", "ovs_interfaceid": "18d58a6f-2179-462b-993c-b9ce6369673f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "80f0ea34-88eb-4091-912e-db28507e1f4b", "address": "fa:16:3e:cd:54:4a", "network": {"id": "d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-666818867", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, 
"tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80f0ea34-88", "ovs_interfaceid": "80f0ea34-88eb-4091-912e-db28507e1f4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:35:33 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e177 do_prune osdmap full prune enabled
Nov 25 16:35:33 compute-0 nova_compute[254092]: 2025-11-25 16:35:33.458 254096 DEBUG oslo_concurrency.lockutils [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Releasing lock "refresh_cache-7a89b21c-79db-4e5f-88fd-35557c8c15ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:35:33 compute-0 nova_compute[254092]: 2025-11-25 16:35:33.458 254096 DEBUG nova.compute.manager [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Instance network_info: |[{"id": "18d58a6f-2179-462b-993c-b9ce6369673f", "address": "fa:16:3e:8d:0b:6c", "network": {"id": "d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-23122884", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.32", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18d58a6f-21", "ovs_interfaceid": "18d58a6f-2179-462b-993c-b9ce6369673f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "80f0ea34-88eb-4091-912e-db28507e1f4b", "address": "fa:16:3e:cd:54:4a", "network": {"id": "d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-666818867", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80f0ea34-88", "ovs_interfaceid": "80f0ea34-88eb-4091-912e-db28507e1f4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:35:33 compute-0 nova_compute[254092]: 2025-11-25 16:35:33.459 254096 DEBUG oslo_concurrency.lockutils [req-dfaee46e-9456-4ad9-9158-fbe22b340392 req-5e815b29-bfe7-4d6a-894b-185f2a8906a1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-7a89b21c-79db-4e5f-88fd-35557c8c15ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:35:33 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e178 e178: 3 total, 3 up, 3 in
Nov 25 16:35:33 compute-0 nova_compute[254092]: 2025-11-25 16:35:33.459 254096 DEBUG nova.network.neutron [req-dfaee46e-9456-4ad9-9158-fbe22b340392 req-5e815b29-bfe7-4d6a-894b-185f2a8906a1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Refreshing network info cache for port 80f0ea34-88eb-4091-912e-db28507e1f4b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:35:33 compute-0 nova_compute[254092]: 2025-11-25 16:35:33.462 254096 DEBUG nova.virt.libvirt.driver [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Start _get_guest_xml network_info=[{"id": "18d58a6f-2179-462b-993c-b9ce6369673f", "address": "fa:16:3e:8d:0b:6c", "network": {"id": "d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-23122884", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.32", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18d58a6f-21", "ovs_interfaceid": "18d58a6f-2179-462b-993c-b9ce6369673f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "80f0ea34-88eb-4091-912e-db28507e1f4b", "address": "fa:16:3e:cd:54:4a", "network": {"id": "d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-666818867", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": 
true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80f0ea34-88", "ovs_interfaceid": "80f0ea34-88eb-4091-912e-db28507e1f4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:35:33 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e178: 3 total, 3 up, 3 in
Nov 25 16:35:33 compute-0 nova_compute[254092]: 2025-11-25 16:35:33.469 254096 WARNING nova.virt.libvirt.driver [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:35:33 compute-0 nova_compute[254092]: 2025-11-25 16:35:33.477 254096 DEBUG nova.virt.libvirt.host [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:35:33 compute-0 nova_compute[254092]: 2025-11-25 16:35:33.478 254096 DEBUG nova.virt.libvirt.host [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:35:33 compute-0 nova_compute[254092]: 2025-11-25 16:35:33.482 254096 DEBUG nova.virt.libvirt.host [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:35:33 compute-0 nova_compute[254092]: 2025-11-25 16:35:33.483 254096 DEBUG nova.virt.libvirt.host [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:35:33 compute-0 nova_compute[254092]: 2025-11-25 16:35:33.483 254096 DEBUG nova.virt.libvirt.driver [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:35:33 compute-0 nova_compute[254092]: 2025-11-25 16:35:33.483 254096 DEBUG nova.virt.hardware [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:35:33 compute-0 nova_compute[254092]: 2025-11-25 16:35:33.484 254096 DEBUG nova.virt.hardware [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:35:33 compute-0 nova_compute[254092]: 2025-11-25 16:35:33.484 254096 DEBUG nova.virt.hardware [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:35:33 compute-0 nova_compute[254092]: 2025-11-25 16:35:33.484 254096 DEBUG nova.virt.hardware [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:35:33 compute-0 nova_compute[254092]: 2025-11-25 16:35:33.485 254096 DEBUG nova.virt.hardware [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:35:33 compute-0 nova_compute[254092]: 2025-11-25 16:35:33.485 254096 DEBUG nova.virt.hardware [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:35:33 compute-0 nova_compute[254092]: 2025-11-25 16:35:33.485 254096 DEBUG nova.virt.hardware [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:35:33 compute-0 nova_compute[254092]: 2025-11-25 16:35:33.485 254096 DEBUG nova.virt.hardware [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:35:33 compute-0 nova_compute[254092]: 2025-11-25 16:35:33.486 254096 DEBUG nova.virt.hardware [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:35:33 compute-0 nova_compute[254092]: 2025-11-25 16:35:33.486 254096 DEBUG nova.virt.hardware [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:35:33 compute-0 nova_compute[254092]: 2025-11-25 16:35:33.486 254096 DEBUG nova.virt.hardware [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:35:33 compute-0 nova_compute[254092]: 2025-11-25 16:35:33.489 254096 DEBUG oslo_concurrency.processutils [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:35:33 compute-0 nova_compute[254092]: 2025-11-25 16:35:33.521 254096 DEBUG nova.storage.rbd_utils [None req-8c06458f-1ab1-4491-9fb6-1a15af2f04dd b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] creating snapshot(snap) on rbd image(6d9352ea-7650-4e12-a1b0-1f5e5bc16789) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 16:35:33 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:35:33 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/296860398' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:35:33 compute-0 nova_compute[254092]: 2025-11-25 16:35:33.910 254096 DEBUG oslo_concurrency.processutils [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:35:33 compute-0 nova_compute[254092]: 2025-11-25 16:35:33.932 254096 DEBUG nova.storage.rbd_utils [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] rbd image 7a89b21c-79db-4e5f-88fd-35557c8c15ff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:35:33 compute-0 nova_compute[254092]: 2025-11-25 16:35:33.936 254096 DEBUG oslo_concurrency.processutils [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:35:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:35:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Nov 25 16:35:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Nov 25 16:35:34 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.325 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088519.3239183, 763ace49-fa88-443d-9733-e919b6f86fab => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.325 254096 INFO nova.compute.manager [-] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] VM Stopped (Lifecycle Event)
Nov 25 16:35:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:35:34 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/875822583' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.354 254096 DEBUG nova.compute.manager [None req-3ef99929-8d69-4507-b202-567c44079623 - - - - - -] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.357 254096 DEBUG nova.compute.manager [None req-3ef99929-8d69-4507-b202-567c44079623 - - - - - -] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: stopped, current task_state: None, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.367 254096 DEBUG oslo_concurrency.processutils [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.368 254096 DEBUG nova.virt.libvirt.vif [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:35:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1960371828',display_name='tempest-ServersTestMultiNic-server-1960371828',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1960371828',id=44,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3355a3ac2d6d4d5ea7f590f1e2ae3492',ramdisk_id='',reservation_id='r-d7qhokwn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-809789765',owner_user_name='tempest-ServersTestMultiNic-809789765-p
roject-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:35:20Z,user_data=None,user_id='70f122fae9644012973ae5b56c1d459b',uuid=7a89b21c-79db-4e5f-88fd-35557c8c15ff,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "18d58a6f-2179-462b-993c-b9ce6369673f", "address": "fa:16:3e:8d:0b:6c", "network": {"id": "d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-23122884", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.32", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18d58a6f-21", "ovs_interfaceid": "18d58a6f-2179-462b-993c-b9ce6369673f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.369 254096 DEBUG nova.network.os_vif_util [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Converting VIF {"id": "18d58a6f-2179-462b-993c-b9ce6369673f", "address": "fa:16:3e:8d:0b:6c", "network": {"id": "d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-23122884", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.32", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18d58a6f-21", "ovs_interfaceid": "18d58a6f-2179-462b-993c-b9ce6369673f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.369 254096 DEBUG nova.network.os_vif_util [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:0b:6c,bridge_name='br-int',has_traffic_filtering=True,id=18d58a6f-2179-462b-993c-b9ce6369673f,network=Network(d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18d58a6f-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.370 254096 DEBUG nova.virt.libvirt.vif [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:35:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1960371828',display_name='tempest-ServersTestMultiNic-server-1960371828',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1960371828',id=44,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3355a3ac2d6d4d5ea7f590f1e2ae3492',ramdisk_id='',reservation_id='r-d7qhokwn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-809789765',owner_user_name='tempest-ServersTestMultiNic-809789765-p
roject-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:35:20Z,user_data=None,user_id='70f122fae9644012973ae5b56c1d459b',uuid=7a89b21c-79db-4e5f-88fd-35557c8c15ff,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "80f0ea34-88eb-4091-912e-db28507e1f4b", "address": "fa:16:3e:cd:54:4a", "network": {"id": "d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-666818867", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80f0ea34-88", "ovs_interfaceid": "80f0ea34-88eb-4091-912e-db28507e1f4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.371 254096 DEBUG nova.network.os_vif_util [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Converting VIF {"id": "80f0ea34-88eb-4091-912e-db28507e1f4b", "address": "fa:16:3e:cd:54:4a", "network": {"id": "d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-666818867", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80f0ea34-88", "ovs_interfaceid": "80f0ea34-88eb-4091-912e-db28507e1f4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.371 254096 DEBUG nova.network.os_vif_util [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:54:4a,bridge_name='br-int',has_traffic_filtering=True,id=80f0ea34-88eb-4091-912e-db28507e1f4b,network=Network(d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap80f0ea34-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.372 254096 DEBUG nova.objects.instance [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7a89b21c-79db-4e5f-88fd-35557c8c15ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.396 254096 DEBUG nova.virt.libvirt.driver [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:35:34 compute-0 nova_compute[254092]:   <uuid>7a89b21c-79db-4e5f-88fd-35557c8c15ff</uuid>
Nov 25 16:35:34 compute-0 nova_compute[254092]:   <name>instance-0000002c</name>
Nov 25 16:35:34 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:35:34 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:35:34 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:35:34 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:       <nova:name>tempest-ServersTestMultiNic-server-1960371828</nova:name>
Nov 25 16:35:34 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:35:33</nova:creationTime>
Nov 25 16:35:34 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:35:34 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:35:34 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:35:34 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:35:34 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:35:34 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:35:34 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:35:34 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:35:34 compute-0 nova_compute[254092]:         <nova:user uuid="70f122fae9644012973ae5b56c1d459b">tempest-ServersTestMultiNic-809789765-project-member</nova:user>
Nov 25 16:35:34 compute-0 nova_compute[254092]:         <nova:project uuid="3355a3ac2d6d4d5ea7f590f1e2ae3492">tempest-ServersTestMultiNic-809789765</nova:project>
Nov 25 16:35:34 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:35:34 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:35:34 compute-0 nova_compute[254092]:         <nova:port uuid="18d58a6f-2179-462b-993c-b9ce6369673f">
Nov 25 16:35:34 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.32" ipVersion="4"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:35:34 compute-0 nova_compute[254092]:         <nova:port uuid="80f0ea34-88eb-4091-912e-db28507e1f4b">
Nov 25 16:35:34 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.1.67" ipVersion="4"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:35:34 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:35:34 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:35:34 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:35:34 compute-0 nova_compute[254092]:     <system>
Nov 25 16:35:34 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:35:34 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:35:34 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:35:34 compute-0 nova_compute[254092]:       <entry name="serial">7a89b21c-79db-4e5f-88fd-35557c8c15ff</entry>
Nov 25 16:35:34 compute-0 nova_compute[254092]:       <entry name="uuid">7a89b21c-79db-4e5f-88fd-35557c8c15ff</entry>
Nov 25 16:35:34 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     </system>
Nov 25 16:35:34 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:35:34 compute-0 nova_compute[254092]:   <os>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:   </os>
Nov 25 16:35:34 compute-0 nova_compute[254092]:   <features>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:   </features>
Nov 25 16:35:34 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:35:34 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:35:34 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:35:34 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:35:34 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:35:34 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/7a89b21c-79db-4e5f-88fd-35557c8c15ff_disk">
Nov 25 16:35:34 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:       </source>
Nov 25 16:35:34 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:35:34 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:35:34 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:35:34 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/7a89b21c-79db-4e5f-88fd-35557c8c15ff_disk.config">
Nov 25 16:35:34 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:       </source>
Nov 25 16:35:34 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:35:34 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:35:34 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:35:34 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:8d:0b:6c"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:       <target dev="tap18d58a6f-21"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:35:34 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:cd:54:4a"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:       <target dev="tap80f0ea34-88"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:35:34 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/7a89b21c-79db-4e5f-88fd-35557c8c15ff/console.log" append="off"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     <video>
Nov 25 16:35:34 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     </video>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:35:34 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:35:34 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:35:34 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:35:34 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:35:34 compute-0 nova_compute[254092]: </domain>
Nov 25 16:35:34 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.397 254096 DEBUG nova.compute.manager [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Preparing to wait for external event network-vif-plugged-18d58a6f-2179-462b-993c-b9ce6369673f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.397 254096 DEBUG oslo_concurrency.lockutils [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Acquiring lock "7a89b21c-79db-4e5f-88fd-35557c8c15ff-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.398 254096 DEBUG oslo_concurrency.lockutils [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Lock "7a89b21c-79db-4e5f-88fd-35557c8c15ff-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.398 254096 DEBUG oslo_concurrency.lockutils [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Lock "7a89b21c-79db-4e5f-88fd-35557c8c15ff-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.398 254096 DEBUG nova.compute.manager [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Preparing to wait for external event network-vif-plugged-80f0ea34-88eb-4091-912e-db28507e1f4b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.398 254096 DEBUG oslo_concurrency.lockutils [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Acquiring lock "7a89b21c-79db-4e5f-88fd-35557c8c15ff-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.399 254096 DEBUG oslo_concurrency.lockutils [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Lock "7a89b21c-79db-4e5f-88fd-35557c8c15ff-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.399 254096 DEBUG oslo_concurrency.lockutils [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Lock "7a89b21c-79db-4e5f-88fd-35557c8c15ff-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.399 254096 DEBUG nova.virt.libvirt.vif [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:35:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1960371828',display_name='tempest-ServersTestMultiNic-server-1960371828',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1960371828',id=44,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3355a3ac2d6d4d5ea7f590f1e2ae3492',ramdisk_id='',reservation_id='r-d7qhokwn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-809789765',owner_user_name='tempest-ServersTestMultiNic-8
09789765-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:35:20Z,user_data=None,user_id='70f122fae9644012973ae5b56c1d459b',uuid=7a89b21c-79db-4e5f-88fd-35557c8c15ff,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "18d58a6f-2179-462b-993c-b9ce6369673f", "address": "fa:16:3e:8d:0b:6c", "network": {"id": "d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-23122884", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.32", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18d58a6f-21", "ovs_interfaceid": "18d58a6f-2179-462b-993c-b9ce6369673f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.400 254096 DEBUG nova.network.os_vif_util [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Converting VIF {"id": "18d58a6f-2179-462b-993c-b9ce6369673f", "address": "fa:16:3e:8d:0b:6c", "network": {"id": "d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-23122884", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.32", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18d58a6f-21", "ovs_interfaceid": "18d58a6f-2179-462b-993c-b9ce6369673f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.400 254096 DEBUG nova.network.os_vif_util [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:0b:6c,bridge_name='br-int',has_traffic_filtering=True,id=18d58a6f-2179-462b-993c-b9ce6369673f,network=Network(d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18d58a6f-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.401 254096 DEBUG os_vif [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:0b:6c,bridge_name='br-int',has_traffic_filtering=True,id=18d58a6f-2179-462b-993c-b9ce6369673f,network=Network(d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18d58a6f-21') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.403 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.404 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.404 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.410 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.410 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap18d58a6f-21, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.410 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap18d58a6f-21, col_values=(('external_ids', {'iface-id': '18d58a6f-2179-462b-993c-b9ce6369673f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8d:0b:6c', 'vm-uuid': '7a89b21c-79db-4e5f-88fd-35557c8c15ff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.412 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:34 compute-0 NetworkManager[48891]: <info>  [1764088534.4132] manager: (tap18d58a6f-21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/151)
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.414 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.433 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.436 254096 INFO os_vif [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:0b:6c,bridge_name='br-int',has_traffic_filtering=True,id=18d58a6f-2179-462b-993c-b9ce6369673f,network=Network(d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18d58a6f-21')
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.437 254096 DEBUG nova.virt.libvirt.vif [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:35:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1960371828',display_name='tempest-ServersTestMultiNic-server-1960371828',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1960371828',id=44,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3355a3ac2d6d4d5ea7f590f1e2ae3492',ramdisk_id='',reservation_id='r-d7qhokwn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-809789765',owner_user_name='tempest-ServersTestMultiNic-809789765-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:35:20Z,user_data=None,user_id='70f122fae9644012973ae5b56c1d459b',uuid=7a89b21c-79db-4e5f-88fd-35557c8c15ff,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "80f0ea34-88eb-4091-912e-db28507e1f4b", "address": "fa:16:3e:cd:54:4a", "network": {"id": "d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-666818867", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80f0ea34-88", "ovs_interfaceid": "80f0ea34-88eb-4091-912e-db28507e1f4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.437 254096 DEBUG nova.network.os_vif_util [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Converting VIF {"id": "80f0ea34-88eb-4091-912e-db28507e1f4b", "address": "fa:16:3e:cd:54:4a", "network": {"id": "d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-666818867", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80f0ea34-88", "ovs_interfaceid": "80f0ea34-88eb-4091-912e-db28507e1f4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.438 254096 DEBUG nova.network.os_vif_util [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:54:4a,bridge_name='br-int',has_traffic_filtering=True,id=80f0ea34-88eb-4091-912e-db28507e1f4b,network=Network(d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap80f0ea34-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.438 254096 DEBUG os_vif [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:54:4a,bridge_name='br-int',has_traffic_filtering=True,id=80f0ea34-88eb-4091-912e-db28507e1f4b,network=Network(d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap80f0ea34-88') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.439 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.439 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.439 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.441 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.441 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap80f0ea34-88, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.441 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap80f0ea34-88, col_values=(('external_ids', {'iface-id': '80f0ea34-88eb-4091-912e-db28507e1f4b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cd:54:4a', 'vm-uuid': '7a89b21c-79db-4e5f-88fd-35557c8c15ff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.442 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:34 compute-0 NetworkManager[48891]: <info>  [1764088534.4435] manager: (tap80f0ea34-88): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/152)
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.444 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.452 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.453 254096 INFO os_vif [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:54:4a,bridge_name='br-int',has_traffic_filtering=True,id=80f0ea34-88eb-4091-912e-db28507e1f4b,network=Network(d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap80f0ea34-88')
Nov 25 16:35:34 compute-0 ceph-mon[74985]: pgmap v1454: 321 pgs: 321 active+clean; 460 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 5.6 MiB/s wr, 223 op/s
Nov 25 16:35:34 compute-0 ceph-mon[74985]: osdmap e178: 3 total, 3 up, 3 in
Nov 25 16:35:34 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/296860398' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:35:34 compute-0 ceph-mon[74985]: osdmap e179: 3 total, 3 up, 3 in
Nov 25 16:35:34 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/875822583' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.505 254096 DEBUG nova.virt.libvirt.driver [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.506 254096 DEBUG nova.virt.libvirt.driver [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.506 254096 DEBUG nova.virt.libvirt.driver [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] No VIF found with MAC fa:16:3e:8d:0b:6c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.506 254096 DEBUG nova.virt.libvirt.driver [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] No VIF found with MAC fa:16:3e:cd:54:4a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.507 254096 INFO nova.virt.libvirt.driver [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Using config drive
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.532 254096 DEBUG nova.storage.rbd_utils [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] rbd image 7a89b21c-79db-4e5f-88fd-35557c8c15ff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.580 254096 DEBUG oslo_concurrency.lockutils [None req-9dc5481a-895c-4623-b109-4d297dbb9c7a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "763ace49-fa88-443d-9733-e919b6f86fab" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.581 254096 DEBUG oslo_concurrency.lockutils [None req-9dc5481a-895c-4623-b109-4d297dbb9c7a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "763ace49-fa88-443d-9733-e919b6f86fab" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.581 254096 DEBUG oslo_concurrency.lockutils [None req-9dc5481a-895c-4623-b109-4d297dbb9c7a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "763ace49-fa88-443d-9733-e919b6f86fab-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.582 254096 DEBUG oslo_concurrency.lockutils [None req-9dc5481a-895c-4623-b109-4d297dbb9c7a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "763ace49-fa88-443d-9733-e919b6f86fab-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.582 254096 DEBUG oslo_concurrency.lockutils [None req-9dc5481a-895c-4623-b109-4d297dbb9c7a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "763ace49-fa88-443d-9733-e919b6f86fab-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.584 254096 INFO nova.compute.manager [None req-9dc5481a-895c-4623-b109-4d297dbb9c7a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Terminating instance
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.585 254096 DEBUG nova.compute.manager [None req-9dc5481a-895c-4623-b109-4d297dbb9c7a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.602 254096 INFO nova.virt.libvirt.driver [-] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Instance destroyed successfully.
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.603 254096 DEBUG nova.objects.instance [None req-9dc5481a-895c-4623-b109-4d297dbb9c7a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lazy-loading 'resources' on Instance uuid 763ace49-fa88-443d-9733-e919b6f86fab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.617 254096 DEBUG nova.virt.libvirt.vif [None req-9dc5481a-895c-4623-b109-4d297dbb9c7a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:34:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1725494683',display_name='tempest-ImagesTestJSON-server-1725494683',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1725494683',id=42,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:35:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='8bd01e0913564ac783fae350d6861e24',ramdisk_id='',reservation_id='r-24wfgra9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-217824554',owner_user_name='tempest-ImagesTestJSON-217824554-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:35:29Z,user_data=None,user_id='75e65df891a54a2caabb073a427430b9',uuid=763ace49-fa88-443d-9733-e919b6f86fab,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "1baf23f4-ec91-408d-b406-37757eba550e", "address": "fa:16:3e:11:f5:37", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1baf23f4-ec", "ovs_interfaceid": "1baf23f4-ec91-408d-b406-37757eba550e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.617 254096 DEBUG nova.network.os_vif_util [None req-9dc5481a-895c-4623-b109-4d297dbb9c7a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converting VIF {"id": "1baf23f4-ec91-408d-b406-37757eba550e", "address": "fa:16:3e:11:f5:37", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1baf23f4-ec", "ovs_interfaceid": "1baf23f4-ec91-408d-b406-37757eba550e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.618 254096 DEBUG nova.network.os_vif_util [None req-9dc5481a-895c-4623-b109-4d297dbb9c7a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:f5:37,bridge_name='br-int',has_traffic_filtering=True,id=1baf23f4-ec91-408d-b406-37757eba550e,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1baf23f4-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.619 254096 DEBUG os_vif [None req-9dc5481a-895c-4623-b109-4d297dbb9c7a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:f5:37,bridge_name='br-int',has_traffic_filtering=True,id=1baf23f4-ec91-408d-b406-37757eba550e,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1baf23f4-ec') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.621 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.622 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1baf23f4-ec, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.624 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.626 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.634 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.639 254096 INFO os_vif [None req-9dc5481a-895c-4623-b109-4d297dbb9c7a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:f5:37,bridge_name='br-int',has_traffic_filtering=True,id=1baf23f4-ec91-408d-b406-37757eba550e,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1baf23f4-ec')
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.674 254096 DEBUG nova.network.neutron [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Updating instance_info_cache with network_info: [{"id": "12524556-7486-4f17-95f0-2984a51a4542", "address": "fa:16:3e:3a:8f:b2", "network": {"id": "82742f46-fb6e-443e-a99d-84c5367a4ccd", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-522194148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe7901baa563491c8609089aa4334bf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12524556-74", "ovs_interfaceid": "12524556-7486-4f17-95f0-2984a51a4542", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.688 254096 DEBUG oslo_concurrency.lockutils [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Releasing lock "refresh_cache-8457c008-75d8-4c24-9ae2-6b8a526312ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.689 254096 DEBUG nova.compute.manager [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:35:34 compute-0 kernel: tap12524556-74 (unregistering): left promiscuous mode
Nov 25 16:35:34 compute-0 NetworkManager[48891]: <info>  [1764088534.8492] device (tap12524556-74): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:35:34 compute-0 ovn_controller[153477]: 2025-11-25T16:35:34Z|00337|binding|INFO|Releasing lport 12524556-7486-4f17-95f0-2984a51a4542 from this chassis (sb_readonly=0)
Nov 25 16:35:34 compute-0 ovn_controller[153477]: 2025-11-25T16:35:34Z|00338|binding|INFO|Setting lport 12524556-7486-4f17-95f0-2984a51a4542 down in Southbound
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.872 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:34 compute-0 ovn_controller[153477]: 2025-11-25T16:35:34Z|00339|binding|INFO|Removing iface tap12524556-74 ovn-installed in OVS
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.876 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:34.881 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:8f:b2 10.100.0.5'], port_security=['fa:16:3e:3a:8f:b2 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '8457c008-75d8-4c24-9ae2-6b8a526312ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-82742f46-fb6e-443e-a99d-84c5367a4ccd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fe7901baa563491c8609089aa4334bf1', 'neutron:revision_number': '5', 'neutron:security_group_ids': '10995e2d-e9a2-4098-859c-5dcd4d5f741f 35930f85-6093-4ee4-bf72-0f56aa3fb356', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d6fc76d1-a8a4-44cf-ab8f-0304e50e033c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=12524556-7486-4f17-95f0-2984a51a4542) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:35:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:34.883 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 12524556-7486-4f17-95f0-2984a51a4542 in datapath 82742f46-fb6e-443e-a99d-84c5367a4ccd unbound from our chassis
Nov 25 16:35:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:34.884 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 82742f46-fb6e-443e-a99d-84c5367a4ccd
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.890 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:34.902 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[242305c6-d1e5-49df-86e4-e54e929b9b73]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:34 compute-0 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000002b.scope: Deactivated successfully.
Nov 25 16:35:34 compute-0 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000002b.scope: Consumed 11.968s CPU time.
Nov 25 16:35:34 compute-0 systemd-machined[216343]: Machine qemu-49-instance-0000002b terminated.
Nov 25 16:35:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:34.927 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f9f1a8e2-dbd8-4bb3-9782-da32148fd1db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:34.930 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e08b027e-da82-4b6e-b506-f83f8d6ea5bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:34.954 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[535edf2b-67d9-4fdb-9e02-d54f3aee771f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:34.969 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[736d1abb-530e-42f3-be20-5eb3b73e0c09]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap82742f46-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:00:df:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 83], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 494588, 'reachable_time': 26292, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303632, 'error': None, 'target': 'ovnmeta-82742f46-fb6e-443e-a99d-84c5367a4ccd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:34.982 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[672f149a-7609-43d5-adba-90c3171822f9]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap82742f46-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 494599, 'tstamp': 494599}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303633, 'error': None, 'target': 'ovnmeta-82742f46-fb6e-443e-a99d-84c5367a4ccd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap82742f46-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 494601, 'tstamp': 494601}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303633, 'error': None, 'target': 'ovnmeta-82742f46-fb6e-443e-a99d-84c5367a4ccd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:34.984 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap82742f46-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.985 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:34 compute-0 nova_compute[254092]: 2025-11-25 16:35:34.997 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:34.998 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap82742f46-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:34.999 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:35:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:34.999 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap82742f46-f0, col_values=(('external_ids', {'iface-id': '639a1689-3ed6-4bc6-98a0-e7a7773b6e05'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:34.999 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.037 254096 INFO nova.virt.libvirt.driver [None req-9dc5481a-895c-4623-b109-4d297dbb9c7a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Deleting instance files /var/lib/nova/instances/763ace49-fa88-443d-9733-e919b6f86fab_del
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.038 254096 INFO nova.virt.libvirt.driver [None req-9dc5481a-895c-4623-b109-4d297dbb9c7a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Deletion of /var/lib/nova/instances/763ace49-fa88-443d-9733-e919b6f86fab_del complete
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.044 254096 INFO nova.virt.libvirt.driver [-] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Instance destroyed successfully.
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.044 254096 DEBUG nova.objects.instance [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Lazy-loading 'resources' on Instance uuid 8457c008-75d8-4c24-9ae2-6b8a526312ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.060 254096 DEBUG nova.virt.libvirt.vif [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:35:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-960547750',display_name='tempest-SecurityGroupsTestJSON-server-960547750',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-960547750',id=43,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:35:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fe7901baa563491c8609089aa4334bf1',ramdisk_id='',reservation_id='r-ckiy49j5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-716261307',owner_user_name='tempest-SecurityGroupsTestJSON-716261307-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:35:34Z,user_data=None,user_id='650f3d90afcd4e85b7042981dc353a2d',uuid=8457c008-75d8-4c24-9ae2-6b8a526312ce,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "12524556-7486-4f17-95f0-2984a51a4542", "address": "fa:16:3e:3a:8f:b2", "network": {"id": "82742f46-fb6e-443e-a99d-84c5367a4ccd", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-522194148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe7901baa563491c8609089aa4334bf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12524556-74", "ovs_interfaceid": "12524556-7486-4f17-95f0-2984a51a4542", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.060 254096 DEBUG nova.network.os_vif_util [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Converting VIF {"id": "12524556-7486-4f17-95f0-2984a51a4542", "address": "fa:16:3e:3a:8f:b2", "network": {"id": "82742f46-fb6e-443e-a99d-84c5367a4ccd", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-522194148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe7901baa563491c8609089aa4334bf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12524556-74", "ovs_interfaceid": "12524556-7486-4f17-95f0-2984a51a4542", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.060 254096 DEBUG nova.network.os_vif_util [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3a:8f:b2,bridge_name='br-int',has_traffic_filtering=True,id=12524556-7486-4f17-95f0-2984a51a4542,network=Network(82742f46-fb6e-443e-a99d-84c5367a4ccd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12524556-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.061 254096 DEBUG os_vif [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3a:8f:b2,bridge_name='br-int',has_traffic_filtering=True,id=12524556-7486-4f17-95f0-2984a51a4542,network=Network(82742f46-fb6e-443e-a99d-84c5367a4ccd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12524556-74') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.062 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.062 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap12524556-74, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.064 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.066 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.072 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.074 254096 INFO os_vif [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3a:8f:b2,bridge_name='br-int',has_traffic_filtering=True,id=12524556-7486-4f17-95f0-2984a51a4542,network=Network(82742f46-fb6e-443e-a99d-84c5367a4ccd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12524556-74')
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.079 254096 DEBUG nova.virt.libvirt.driver [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Start _get_guest_xml network_info=[{"id": "12524556-7486-4f17-95f0-2984a51a4542", "address": "fa:16:3e:3a:8f:b2", "network": {"id": "82742f46-fb6e-443e-a99d-84c5367a4ccd", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-522194148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe7901baa563491c8609089aa4334bf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12524556-74", "ovs_interfaceid": "12524556-7486-4f17-95f0-2984a51a4542", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.082 254096 WARNING nova.virt.libvirt.driver [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.087 254096 DEBUG nova.virt.libvirt.host [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.088 254096 DEBUG nova.virt.libvirt.host [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.090 254096 DEBUG nova.virt.libvirt.host [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.090 254096 DEBUG nova.virt.libvirt.host [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.091 254096 DEBUG nova.virt.libvirt.driver [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.091 254096 DEBUG nova.virt.hardware [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.091 254096 DEBUG nova.virt.hardware [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.091 254096 DEBUG nova.virt.hardware [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.092 254096 DEBUG nova.virt.hardware [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.092 254096 DEBUG nova.virt.hardware [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.092 254096 DEBUG nova.virt.hardware [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.092 254096 DEBUG nova.virt.hardware [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.092 254096 DEBUG nova.virt.hardware [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.092 254096 DEBUG nova.virt.hardware [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.092 254096 DEBUG nova.virt.hardware [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.093 254096 DEBUG nova.virt.hardware [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.093 254096 DEBUG nova.objects.instance [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 8457c008-75d8-4c24-9ae2-6b8a526312ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.098 254096 INFO nova.compute.manager [None req-9dc5481a-895c-4623-b109-4d297dbb9c7a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Took 0.51 seconds to destroy the instance on the hypervisor.
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.098 254096 DEBUG oslo.service.loopingcall [None req-9dc5481a-895c-4623-b109-4d297dbb9c7a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.098 254096 DEBUG nova.compute.manager [-] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.098 254096 DEBUG nova.network.neutron [-] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.106 254096 DEBUG oslo_concurrency.processutils [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.239 254096 INFO nova.virt.libvirt.driver [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Creating config drive at /var/lib/nova/instances/7a89b21c-79db-4e5f-88fd-35557c8c15ff/disk.config
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.244 254096 DEBUG oslo_concurrency.processutils [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7a89b21c-79db-4e5f-88fd-35557c8c15ff/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd262g8wy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:35:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1457: 321 pgs: 321 active+clean; 402 MiB data, 668 MiB used, 59 GiB / 60 GiB avail; 9.4 MiB/s rd, 7.6 MiB/s wr, 262 op/s
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.390 254096 DEBUG oslo_concurrency.processutils [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7a89b21c-79db-4e5f-88fd-35557c8c15ff/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd262g8wy" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.418 254096 DEBUG nova.storage.rbd_utils [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] rbd image 7a89b21c-79db-4e5f-88fd-35557c8c15ff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.422 254096 DEBUG oslo_concurrency.processutils [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7a89b21c-79db-4e5f-88fd-35557c8c15ff/disk.config 7a89b21c-79db-4e5f-88fd-35557c8c15ff_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:35:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:35:35 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1910219140' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.539 254096 DEBUG oslo_concurrency.processutils [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.568 254096 DEBUG oslo_concurrency.processutils [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.719 254096 INFO nova.virt.libvirt.driver [None req-8c06458f-1ab1-4491-9fb6-1a15af2f04dd b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Snapshot image upload complete
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.720 254096 DEBUG nova.compute.manager [None req-8c06458f-1ab1-4491-9fb6-1a15af2f04dd b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.774 254096 DEBUG oslo_concurrency.processutils [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7a89b21c-79db-4e5f-88fd-35557c8c15ff/disk.config 7a89b21c-79db-4e5f-88fd-35557c8c15ff_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.352s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.775 254096 INFO nova.virt.libvirt.driver [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Deleting local config drive /var/lib/nova/instances/7a89b21c-79db-4e5f-88fd-35557c8c15ff/disk.config because it was imported into RBD.
Nov 25 16:35:35 compute-0 systemd-udevd[303624]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:35:35 compute-0 NetworkManager[48891]: <info>  [1764088535.8265] manager: (tap18d58a6f-21): new Tun device (/org/freedesktop/NetworkManager/Devices/153)
Nov 25 16:35:35 compute-0 kernel: tap18d58a6f-21: entered promiscuous mode
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.839 254096 DEBUG nova.network.neutron [req-dfaee46e-9456-4ad9-9158-fbe22b340392 req-5e815b29-bfe7-4d6a-894b-185f2a8906a1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Updated VIF entry in instance network info cache for port 80f0ea34-88eb-4091-912e-db28507e1f4b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.840 254096 DEBUG nova.network.neutron [req-dfaee46e-9456-4ad9-9158-fbe22b340392 req-5e815b29-bfe7-4d6a-894b-185f2a8906a1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Updating instance_info_cache with network_info: [{"id": "18d58a6f-2179-462b-993c-b9ce6369673f", "address": "fa:16:3e:8d:0b:6c", "network": {"id": "d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-23122884", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.32", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18d58a6f-21", "ovs_interfaceid": "18d58a6f-2179-462b-993c-b9ce6369673f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "80f0ea34-88eb-4091-912e-db28507e1f4b", "address": "fa:16:3e:cd:54:4a", "network": {"id": "d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-666818867", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80f0ea34-88", "ovs_interfaceid": "80f0ea34-88eb-4091-912e-db28507e1f4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:35:35 compute-0 ovn_controller[153477]: 2025-11-25T16:35:35Z|00340|binding|INFO|Claiming lport 18d58a6f-2179-462b-993c-b9ce6369673f for this chassis.
Nov 25 16:35:35 compute-0 ovn_controller[153477]: 2025-11-25T16:35:35Z|00341|binding|INFO|18d58a6f-2179-462b-993c-b9ce6369673f: Claiming fa:16:3e:8d:0b:6c 10.100.0.32
Nov 25 16:35:35 compute-0 NetworkManager[48891]: <info>  [1764088535.8433] device (tap18d58a6f-21): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:35:35 compute-0 NetworkManager[48891]: <info>  [1764088535.8448] device (tap18d58a6f-21): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.845 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:35 compute-0 NetworkManager[48891]: <info>  [1764088535.8542] manager: (tap80f0ea34-88): new Tun device (/org/freedesktop/NetworkManager/Devices/154)
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.858 254096 INFO nova.compute.manager [None req-8c06458f-1ab1-4491-9fb6-1a15af2f04dd b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Shelve offloading
Nov 25 16:35:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:35.861 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:0b:6c 10.100.0.32'], port_security=['fa:16:3e:8d:0b:6c 10.100.0.32'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.32/24', 'neutron:device_id': '7a89b21c-79db-4e5f-88fd-35557c8c15ff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3355a3ac2d6d4d5ea7f590f1e2ae3492', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fc762173-4fc8-48f6-b331-e2cedac5f655', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd0e5ac9-ec42-429c-a5ae-2cbedc7268a3, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=18d58a6f-2179-462b-993c-b9ce6369673f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.861 254096 DEBUG oslo_concurrency.lockutils [req-dfaee46e-9456-4ad9-9158-fbe22b340392 req-5e815b29-bfe7-4d6a-894b-185f2a8906a1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-7a89b21c-79db-4e5f-88fd-35557c8c15ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:35:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:35.862 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 18d58a6f-2179-462b-993c-b9ce6369673f in datapath d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0 bound to our chassis
Nov 25 16:35:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:35.863 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0
Nov 25 16:35:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:35.877 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7b06cf5a-1a16-4ad8-852b-34532166da20]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:35.878 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd8a880f3-c1 in ovnmeta-d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:35:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:35.880 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd8a880f3-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:35:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:35.880 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7c0deac3-c0c0-4539-a149-9cf82efeb3f2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:35.883 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b07baec7-bcf9-4ca1-a44b-edb350cf038f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:35.897 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[886ed5fa-98c4-41df-8bb7-8187de0713f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.902 254096 DEBUG nova.compute.manager [req-bb952ecd-d6c2-4719-b425-e577aefba0e2 req-3ee96c58-43f3-45b9-8441-140e35ade455 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Received event network-vif-unplugged-12524556-7486-4f17-95f0-2984a51a4542 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.902 254096 DEBUG oslo_concurrency.lockutils [req-bb952ecd-d6c2-4719-b425-e577aefba0e2 req-3ee96c58-43f3-45b9-8441-140e35ade455 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8457c008-75d8-4c24-9ae2-6b8a526312ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.902 254096 DEBUG oslo_concurrency.lockutils [req-bb952ecd-d6c2-4719-b425-e577aefba0e2 req-3ee96c58-43f3-45b9-8441-140e35ade455 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8457c008-75d8-4c24-9ae2-6b8a526312ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:35 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.902 254096 DEBUG oslo_concurrency.lockutils [req-bb952ecd-d6c2-4719-b425-e577aefba0e2 req-3ee96c58-43f3-45b9-8441-140e35ade455 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8457c008-75d8-4c24-9ae2-6b8a526312ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:35 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.903 254096 DEBUG nova.compute.manager [req-bb952ecd-d6c2-4719-b425-e577aefba0e2 req-3ee96c58-43f3-45b9-8441-140e35ade455 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] No waiting events found dispatching network-vif-unplugged-12524556-7486-4f17-95f0-2984a51a4542 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.903 254096 WARNING nova.compute.manager [req-bb952ecd-d6c2-4719-b425-e577aefba0e2 req-3ee96c58-43f3-45b9-8441-140e35ade455 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Received unexpected event network-vif-unplugged-12524556-7486-4f17-95f0-2984a51a4542 for instance with vm_state active and task_state reboot_started_hard.
Nov 25 16:35:35 compute-0 systemd-machined[216343]: New machine qemu-50-instance-0000002c.
Nov 25 16:35:35 compute-0 kernel: tap80f0ea34-88: entered promiscuous mode
Nov 25 16:35:35 compute-0 NetworkManager[48891]: <info>  [1764088535.9143] device (tap80f0ea34-88): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:35:35 compute-0 NetworkManager[48891]: <info>  [1764088535.9157] device (tap80f0ea34-88): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:35:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:35.916 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2079d629-86ac-4b23-b097-a624761c5eb9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:35 compute-0 systemd[1]: Started Virtual Machine qemu-50-instance-0000002c.
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.919 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:35 compute-0 ovn_controller[153477]: 2025-11-25T16:35:35Z|00342|binding|INFO|Claiming lport 80f0ea34-88eb-4091-912e-db28507e1f4b for this chassis.
Nov 25 16:35:35 compute-0 ovn_controller[153477]: 2025-11-25T16:35:35Z|00343|binding|INFO|80f0ea34-88eb-4091-912e-db28507e1f4b: Claiming fa:16:3e:cd:54:4a 10.100.1.67
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.926 254096 INFO nova.virt.libvirt.driver [-] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Instance destroyed successfully.
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.926 254096 DEBUG nova.compute.manager [None req-8c06458f-1ab1-4491-9fb6-1a15af2f04dd b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:35:35 compute-0 ovn_controller[153477]: 2025-11-25T16:35:35Z|00344|binding|INFO|Setting lport 18d58a6f-2179-462b-993c-b9ce6369673f ovn-installed in OVS
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.929 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:35 compute-0 ovn_controller[153477]: 2025-11-25T16:35:35Z|00345|binding|INFO|Setting lport 18d58a6f-2179-462b-993c-b9ce6369673f up in Southbound
Nov 25 16:35:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:35.931 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:54:4a 10.100.1.67'], port_security=['fa:16:3e:cd:54:4a 10.100.1.67'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.67/24', 'neutron:device_id': '7a89b21c-79db-4e5f-88fd-35557c8c15ff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3355a3ac2d6d4d5ea7f590f1e2ae3492', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fc762173-4fc8-48f6-b331-e2cedac5f655', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6f17bf35-a68d-4a50-857b-e307d883bf20, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=80f0ea34-88eb-4091-912e-db28507e1f4b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.941 254096 DEBUG oslo_concurrency.lockutils [None req-8c06458f-1ab1-4491-9fb6-1a15af2f04dd b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Acquiring lock "refresh_cache-a3d5d205-98f0-4820-a96c-7f3e59d0cdd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.941 254096 DEBUG oslo_concurrency.lockutils [None req-8c06458f-1ab1-4491-9fb6-1a15af2f04dd b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Acquired lock "refresh_cache-a3d5d205-98f0-4820-a96c-7f3e59d0cdd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.941 254096 DEBUG nova.network.neutron [None req-8c06458f-1ab1-4491-9fb6-1a15af2f04dd b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:35:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:35.949 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[64e2f2bb-0665-46a5-944d-41bd02c74bbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:35 compute-0 NetworkManager[48891]: <info>  [1764088535.9564] manager: (tapd8a880f3-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/155)
Nov 25 16:35:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:35.955 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a7a006df-e837-4cdd-9183-272b147595fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:35 compute-0 ovn_controller[153477]: 2025-11-25T16:35:35Z|00346|binding|INFO|Setting lport 80f0ea34-88eb-4091-912e-db28507e1f4b ovn-installed in OVS
Nov 25 16:35:35 compute-0 ovn_controller[153477]: 2025-11-25T16:35:35Z|00347|binding|INFO|Setting lport 80f0ea34-88eb-4091-912e-db28507e1f4b up in Southbound
Nov 25 16:35:35 compute-0 nova_compute[254092]: 2025-11-25 16:35:35.981 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:35.994 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[5dbecde8-7d1f-4679-8139-cf5e1512ad08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:35.997 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[72b5ace2-2251-40b7-b611-c57312ee0735]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:35:36 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/961738160' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:35:36 compute-0 NetworkManager[48891]: <info>  [1764088536.0229] device (tapd8a880f3-c0): carrier: link connected
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:36.028 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[527d33ef-0b45-48df-9b15-44f23f0a2305]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.031 254096 DEBUG oslo_concurrency.processutils [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.032 254096 DEBUG nova.virt.libvirt.vif [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:35:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-960547750',display_name='tempest-SecurityGroupsTestJSON-server-960547750',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-960547750',id=43,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:35:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fe7901baa563491c8609089aa4334bf1',ramdisk_id='',reservation_id='r-ckiy49j5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-716261307',owner_user_name='tempest-SecurityGroupsTestJSON-716261307-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:35:34Z,user_data=None,user_id='650f3d90afcd4e85b7042981dc353a2d',uuid=8457c008-75d8-4c24-9ae2-6b8a526312ce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "12524556-7486-4f17-95f0-2984a51a4542", "address": "fa:16:3e:3a:8f:b2", "network": {"id": "82742f46-fb6e-443e-a99d-84c5367a4ccd", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-522194148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe7901baa563491c8609089aa4334bf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12524556-74", "ovs_interfaceid": "12524556-7486-4f17-95f0-2984a51a4542", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.032 254096 DEBUG nova.network.os_vif_util [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Converting VIF {"id": "12524556-7486-4f17-95f0-2984a51a4542", "address": "fa:16:3e:3a:8f:b2", "network": {"id": "82742f46-fb6e-443e-a99d-84c5367a4ccd", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-522194148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe7901baa563491c8609089aa4334bf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12524556-74", "ovs_interfaceid": "12524556-7486-4f17-95f0-2984a51a4542", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.033 254096 DEBUG nova.network.os_vif_util [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3a:8f:b2,bridge_name='br-int',has_traffic_filtering=True,id=12524556-7486-4f17-95f0-2984a51a4542,network=Network(82742f46-fb6e-443e-a99d-84c5367a4ccd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12524556-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.034 254096 DEBUG nova.objects.instance [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8457c008-75d8-4c24-9ae2-6b8a526312ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:36.045 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[decc850d-a8b0-43d3-95a6-99f7e0445cea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd8a880f3-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:95:6d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 101], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499361, 'reachable_time': 23842, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303818, 'error': None, 'target': 'ovnmeta-d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.048 254096 DEBUG nova.virt.libvirt.driver [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:35:36 compute-0 nova_compute[254092]:   <uuid>8457c008-75d8-4c24-9ae2-6b8a526312ce</uuid>
Nov 25 16:35:36 compute-0 nova_compute[254092]:   <name>instance-0000002b</name>
Nov 25 16:35:36 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:35:36 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:35:36 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:35:36 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:       <nova:name>tempest-SecurityGroupsTestJSON-server-960547750</nova:name>
Nov 25 16:35:36 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:35:35</nova:creationTime>
Nov 25 16:35:36 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:35:36 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:35:36 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:35:36 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:35:36 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:35:36 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:35:36 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:35:36 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:35:36 compute-0 nova_compute[254092]:         <nova:user uuid="650f3d90afcd4e85b7042981dc353a2d">tempest-SecurityGroupsTestJSON-716261307-project-member</nova:user>
Nov 25 16:35:36 compute-0 nova_compute[254092]:         <nova:project uuid="fe7901baa563491c8609089aa4334bf1">tempest-SecurityGroupsTestJSON-716261307</nova:project>
Nov 25 16:35:36 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:35:36 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:35:36 compute-0 nova_compute[254092]:         <nova:port uuid="12524556-7486-4f17-95f0-2984a51a4542">
Nov 25 16:35:36 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:35:36 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:35:36 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:35:36 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:35:36 compute-0 nova_compute[254092]:     <system>
Nov 25 16:35:36 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:35:36 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:35:36 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:35:36 compute-0 nova_compute[254092]:       <entry name="serial">8457c008-75d8-4c24-9ae2-6b8a526312ce</entry>
Nov 25 16:35:36 compute-0 nova_compute[254092]:       <entry name="uuid">8457c008-75d8-4c24-9ae2-6b8a526312ce</entry>
Nov 25 16:35:36 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     </system>
Nov 25 16:35:36 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:35:36 compute-0 nova_compute[254092]:   <os>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:   </os>
Nov 25 16:35:36 compute-0 nova_compute[254092]:   <features>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:   </features>
Nov 25 16:35:36 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:35:36 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:35:36 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:35:36 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:35:36 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:35:36 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/8457c008-75d8-4c24-9ae2-6b8a526312ce_disk">
Nov 25 16:35:36 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:       </source>
Nov 25 16:35:36 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:35:36 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:35:36 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:35:36 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/8457c008-75d8-4c24-9ae2-6b8a526312ce_disk.config">
Nov 25 16:35:36 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:       </source>
Nov 25 16:35:36 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:35:36 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:35:36 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:35:36 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:3a:8f:b2"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:       <target dev="tap12524556-74"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:35:36 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/8457c008-75d8-4c24-9ae2-6b8a526312ce/console.log" append="off"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     <video>
Nov 25 16:35:36 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     </video>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     <input type="keyboard" bus="usb"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:35:36 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:35:36 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:35:36 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:35:36 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:35:36 compute-0 nova_compute[254092]: </domain>
Nov 25 16:35:36 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.048 254096 DEBUG nova.virt.libvirt.driver [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.049 254096 DEBUG nova.virt.libvirt.driver [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.049 254096 DEBUG nova.virt.libvirt.vif [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:35:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-960547750',display_name='tempest-SecurityGroupsTestJSON-server-960547750',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-960547750',id=43,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:35:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='fe7901baa563491c8609089aa4334bf1',ramdisk_id='',reservation_id='r-ckiy49j5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-716261307',owner_user_name='tempest-SecurityGroupsTestJSON-716261307-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:35:34Z,user_data=None,user_id='650f3d90afcd4e85b7042981dc353a2d',uuid=8457c008-75d8-4c24-9ae2-6b8a526312ce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "12524556-7486-4f17-95f0-2984a51a4542", "address": "fa:16:3e:3a:8f:b2", "network": {"id": "82742f46-fb6e-443e-a99d-84c5367a4ccd", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-522194148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe7901baa563491c8609089aa4334bf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12524556-74", "ovs_interfaceid": "12524556-7486-4f17-95f0-2984a51a4542", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.050 254096 DEBUG nova.network.os_vif_util [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Converting VIF {"id": "12524556-7486-4f17-95f0-2984a51a4542", "address": "fa:16:3e:3a:8f:b2", "network": {"id": "82742f46-fb6e-443e-a99d-84c5367a4ccd", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-522194148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe7901baa563491c8609089aa4334bf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12524556-74", "ovs_interfaceid": "12524556-7486-4f17-95f0-2984a51a4542", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.050 254096 DEBUG nova.network.os_vif_util [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3a:8f:b2,bridge_name='br-int',has_traffic_filtering=True,id=12524556-7486-4f17-95f0-2984a51a4542,network=Network(82742f46-fb6e-443e-a99d-84c5367a4ccd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12524556-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.050 254096 DEBUG os_vif [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3a:8f:b2,bridge_name='br-int',has_traffic_filtering=True,id=12524556-7486-4f17-95f0-2984a51a4542,network=Network(82742f46-fb6e-443e-a99d-84c5367a4ccd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12524556-74') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.051 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.051 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.051 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.053 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.053 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap12524556-74, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.054 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap12524556-74, col_values=(('external_ids', {'iface-id': '12524556-7486-4f17-95f0-2984a51a4542', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3a:8f:b2', 'vm-uuid': '8457c008-75d8-4c24-9ae2-6b8a526312ce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:36 compute-0 NetworkManager[48891]: <info>  [1764088536.0562] manager: (tap12524556-74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/156)
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.055 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.058 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.062 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.063 254096 INFO os_vif [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3a:8f:b2,bridge_name='br-int',has_traffic_filtering=True,id=12524556-7486-4f17-95f0-2984a51a4542,network=Network(82742f46-fb6e-443e-a99d-84c5367a4ccd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12524556-74')
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:36.062 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[af57ace9-ae88-4f62-ad5d-6504d42e4fb0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feef:956d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499361, 'tstamp': 499361}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303819, 'error': None, 'target': 'ovnmeta-d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:36.079 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[98fecd2f-7d3b-4b14-88aa-29aad89d0cca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd8a880f3-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:95:6d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 101], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499361, 'reachable_time': 23842, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 303822, 'error': None, 'target': 'ovnmeta-d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:36.110 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[97276d73-f0be-4137-86d1-05873438f0ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:36 compute-0 NetworkManager[48891]: <info>  [1764088536.1213] manager: (tap12524556-74): new Tun device (/org/freedesktop/NetworkManager/Devices/157)
Nov 25 16:35:36 compute-0 systemd-udevd[303802]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:35:36 compute-0 kernel: tap12524556-74: entered promiscuous mode
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.128 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:36 compute-0 ovn_controller[153477]: 2025-11-25T16:35:36Z|00348|binding|INFO|Claiming lport 12524556-7486-4f17-95f0-2984a51a4542 for this chassis.
Nov 25 16:35:36 compute-0 ovn_controller[153477]: 2025-11-25T16:35:36Z|00349|binding|INFO|12524556-7486-4f17-95f0-2984a51a4542: Claiming fa:16:3e:3a:8f:b2 10.100.0.5
Nov 25 16:35:36 compute-0 NetworkManager[48891]: <info>  [1764088536.1345] device (tap12524556-74): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:35:36 compute-0 NetworkManager[48891]: <info>  [1764088536.1357] device (tap12524556-74): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:36.136 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:8f:b2 10.100.0.5'], port_security=['fa:16:3e:3a:8f:b2 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '8457c008-75d8-4c24-9ae2-6b8a526312ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-82742f46-fb6e-443e-a99d-84c5367a4ccd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fe7901baa563491c8609089aa4334bf1', 'neutron:revision_number': '6', 'neutron:security_group_ids': '10995e2d-e9a2-4098-859c-5dcd4d5f741f 35930f85-6093-4ee4-bf72-0f56aa3fb356', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d6fc76d1-a8a4-44cf-ab8f-0304e50e033c, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=12524556-7486-4f17-95f0-2984a51a4542) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:35:36 compute-0 ovn_controller[153477]: 2025-11-25T16:35:36Z|00350|binding|INFO|Setting lport 12524556-7486-4f17-95f0-2984a51a4542 ovn-installed in OVS
Nov 25 16:35:36 compute-0 ovn_controller[153477]: 2025-11-25T16:35:36Z|00351|binding|INFO|Setting lport 12524556-7486-4f17-95f0-2984a51a4542 up in Southbound
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.149 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.153 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:36 compute-0 systemd-machined[216343]: New machine qemu-51-instance-0000002b.
Nov 25 16:35:36 compute-0 systemd[1]: Started Virtual Machine qemu-51-instance-0000002b.
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:36.176 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[dd205b73-17bd-4c35-98d1-18cdc9882727]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:36.178 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd8a880f3-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:36.178 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:36.178 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd8a880f3-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.180 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:36 compute-0 NetworkManager[48891]: <info>  [1764088536.1811] manager: (tapd8a880f3-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/158)
Nov 25 16:35:36 compute-0 kernel: tapd8a880f3-c0: entered promiscuous mode
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.186 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:36.187 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd8a880f3-c0, col_values=(('external_ids', {'iface-id': 'dee7a299-ba16-41dd-b64f-1986eb055138'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.188 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:36 compute-0 ovn_controller[153477]: 2025-11-25T16:35:36Z|00352|binding|INFO|Releasing lport dee7a299-ba16-41dd-b64f-1986eb055138 from this chassis (sb_readonly=0)
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.205 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:36.209 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.209 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:36.211 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ee88dfd4-b707-4b64-b942-66b7f4bfbae2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:36.211 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0.pid.haproxy
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:36.212 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0', 'env', 'PROCESS_TAG=haproxy-d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.320 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088536.3196497, 7a89b21c-79db-4e5f-88fd-35557c8c15ff => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.320 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] VM Started (Lifecycle Event)
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.342 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.346 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088536.319774, 7a89b21c-79db-4e5f-88fd-35557c8c15ff => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.346 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] VM Paused (Lifecycle Event)
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.361 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.365 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.382 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:35:36 compute-0 ceph-mon[74985]: pgmap v1457: 321 pgs: 321 active+clean; 402 MiB data, 668 MiB used, 59 GiB / 60 GiB avail; 9.4 MiB/s rd, 7.6 MiB/s wr, 262 op/s
Nov 25 16:35:36 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1910219140' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:35:36 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/961738160' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:35:36 compute-0 podman[303924]: 2025-11-25 16:35:36.576751822 +0000 UTC m=+0.061243529 container create 9f2c2a27f09626d90455530999ac8e8e46b080f177da202a06f271f99cd39cc2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 16:35:36 compute-0 systemd[1]: Started libpod-conmon-9f2c2a27f09626d90455530999ac8e8e46b080f177da202a06f271f99cd39cc2.scope.
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.623 254096 DEBUG nova.network.neutron [-] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:35:36 compute-0 podman[303924]: 2025-11-25 16:35:36.538089446 +0000 UTC m=+0.022581173 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:35:36 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.646 254096 INFO nova.compute.manager [-] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Took 1.55 seconds to deallocate network for instance.
Nov 25 16:35:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e4f214b571585608778b7ef11399703f4b6169c3938b0f2f6be2e8bd70c665c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:35:36 compute-0 podman[303924]: 2025-11-25 16:35:36.669630669 +0000 UTC m=+0.154122396 container init 9f2c2a27f09626d90455530999ac8e8e46b080f177da202a06f271f99cd39cc2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 16:35:36 compute-0 podman[303924]: 2025-11-25 16:35:36.67634054 +0000 UTC m=+0.160832237 container start 9f2c2a27f09626d90455530999ac8e8e46b080f177da202a06f271f99cd39cc2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 16:35:36 compute-0 neutron-haproxy-ovnmeta-d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0[303977]: [NOTICE]   (303988) : New worker (303990) forked
Nov 25 16:35:36 compute-0 neutron-haproxy-ovnmeta-d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0[303977]: [NOTICE]   (303988) : Loading success.
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.698 254096 DEBUG oslo_concurrency.lockutils [None req-9dc5481a-895c-4623-b109-4d297dbb9c7a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.698 254096 DEBUG oslo_concurrency.lockutils [None req-9dc5481a-895c-4623-b109-4d297dbb9c7a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.717 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for 8457c008-75d8-4c24-9ae2-6b8a526312ce due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.717 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088536.7167892, 8457c008-75d8-4c24-9ae2-6b8a526312ce => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.717 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] VM Resumed (Lifecycle Event)
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.720 254096 DEBUG nova.compute.manager [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.728 254096 INFO nova.virt.libvirt.driver [-] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Instance rebooted successfully.
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.728 254096 DEBUG nova.compute.manager [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.734 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.736 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:36.753 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 80f0ea34-88eb-4091-912e-db28507e1f4b in datapath d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2 unbound from our chassis
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:36.755 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.761 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.762 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088536.7194445, 8457c008-75d8-4c24-9ae2-6b8a526312ce => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.762 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] VM Started (Lifecycle Event)
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:36.766 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d5332926-0302-4998-91b0-1a2abd8f1222]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:36.767 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd30f2e6f-41 in ovnmeta-d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:36.771 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd30f2e6f-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:36.771 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2deb9cc9-62fc-4bf3-a631-2d25ad2cecbe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:36.772 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[055d7293-cac7-4aa2-b642-1de83d3287e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:36.785 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[4a3e383d-df59-4172-a01f-4fb2129d9e97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.787 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.794 254096 DEBUG oslo_concurrency.lockutils [None req-34abbac4-6332-45b0-acb0-12f25f534788 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Lock "8457c008-75d8-4c24-9ae2-6b8a526312ce" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 8.116s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.795 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:36.802 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c518fde9-1773-402e-a7a9-4221d07d9f65]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:36.830 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[3749f0e0-2170-4ae4-87fc-604bf95c35d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:36 compute-0 NetworkManager[48891]: <info>  [1764088536.8380] manager: (tapd30f2e6f-40): new Veth device (/org/freedesktop/NetworkManager/Devices/159)
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:36.837 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[aa59165d-a7c3-4d57-9310-559ecd15ea8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:36.871 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9d94b6b6-76e4-4e36-9338-863b92dc5efe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:36.875 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1ecbe20e-1d0c-4fa6-9eea-eea93c148e7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:36 compute-0 nova_compute[254092]: 2025-11-25 16:35:36.893 254096 DEBUG oslo_concurrency.processutils [None req-9dc5481a-895c-4623-b109-4d297dbb9c7a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:35:36 compute-0 NetworkManager[48891]: <info>  [1764088536.9044] device (tapd30f2e6f-40): carrier: link connected
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:36.913 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e3c92aa6-53ea-4e00-8801-f2e84f724ded]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:36.940 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[69f6aa97-d66c-4ac2-a72c-33fae4a32d42]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd30f2e6f-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:ad:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 103], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499449, 'reachable_time': 44131, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304013, 'error': None, 'target': 'ovnmeta-d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:36.962 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3b174d6d-33f8-49fc-9e07-3920ba3f9a46]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fede:adfd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499449, 'tstamp': 499449}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304014, 'error': None, 'target': 'ovnmeta-d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:36.984 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5c050781-152b-4b20-a8b0-f756be9156af]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd30f2e6f-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:ad:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 103], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499449, 'reachable_time': 44131, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 304015, 'error': None, 'target': 'ovnmeta-d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:37.020 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5e9683d3-9655-42c0-9ff2-5a3d3b21618e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:37 compute-0 nova_compute[254092]: 2025-11-25 16:35:37.046 254096 DEBUG nova.compute.manager [req-bb23b750-6d0c-4eb8-b228-4a702085acda req-01b66dfb-d127-4a27-9dd0-9942697ad67a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Received event network-vif-plugged-80f0ea34-88eb-4091-912e-db28507e1f4b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:37 compute-0 nova_compute[254092]: 2025-11-25 16:35:37.047 254096 DEBUG oslo_concurrency.lockutils [req-bb23b750-6d0c-4eb8-b228-4a702085acda req-01b66dfb-d127-4a27-9dd0-9942697ad67a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7a89b21c-79db-4e5f-88fd-35557c8c15ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:37 compute-0 nova_compute[254092]: 2025-11-25 16:35:37.047 254096 DEBUG oslo_concurrency.lockutils [req-bb23b750-6d0c-4eb8-b228-4a702085acda req-01b66dfb-d127-4a27-9dd0-9942697ad67a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7a89b21c-79db-4e5f-88fd-35557c8c15ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:37 compute-0 nova_compute[254092]: 2025-11-25 16:35:37.047 254096 DEBUG oslo_concurrency.lockutils [req-bb23b750-6d0c-4eb8-b228-4a702085acda req-01b66dfb-d127-4a27-9dd0-9942697ad67a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7a89b21c-79db-4e5f-88fd-35557c8c15ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:37 compute-0 nova_compute[254092]: 2025-11-25 16:35:37.047 254096 DEBUG nova.compute.manager [req-bb23b750-6d0c-4eb8-b228-4a702085acda req-01b66dfb-d127-4a27-9dd0-9942697ad67a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Processing event network-vif-plugged-80f0ea34-88eb-4091-912e-db28507e1f4b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:37.087 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a9f1896d-175a-4e63-a3fa-407f0e77dd80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:37.089 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd30f2e6f-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:37.089 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:37.089 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd30f2e6f-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:37 compute-0 NetworkManager[48891]: <info>  [1764088537.0919] manager: (tapd30f2e6f-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/160)
Nov 25 16:35:37 compute-0 nova_compute[254092]: 2025-11-25 16:35:37.091 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:37 compute-0 kernel: tapd30f2e6f-40: entered promiscuous mode
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:37.097 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd30f2e6f-40, col_values=(('external_ids', {'iface-id': '41715ae5-0066-4469-9258-5562b9273c08'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:37 compute-0 nova_compute[254092]: 2025-11-25 16:35:37.096 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:37 compute-0 nova_compute[254092]: 2025-11-25 16:35:37.098 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:37 compute-0 ovn_controller[153477]: 2025-11-25T16:35:37Z|00353|binding|INFO|Releasing lport 41715ae5-0066-4469-9258-5562b9273c08 from this chassis (sb_readonly=0)
Nov 25 16:35:37 compute-0 nova_compute[254092]: 2025-11-25 16:35:37.115 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:37 compute-0 nova_compute[254092]: 2025-11-25 16:35:37.119 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:37.120 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:37.121 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d001b4f5-3fd1-4c22-882f-9e0302dd9e88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:37.122 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2.pid.haproxy
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:37.122 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2', 'env', 'PROCESS_TAG=haproxy-d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:35:37 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:35:37 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1061810996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:35:37 compute-0 nova_compute[254092]: 2025-11-25 16:35:37.361 254096 DEBUG oslo_concurrency.processutils [None req-9dc5481a-895c-4623-b109-4d297dbb9c7a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:35:37 compute-0 nova_compute[254092]: 2025-11-25 16:35:37.366 254096 DEBUG nova.compute.provider_tree [None req-9dc5481a-895c-4623-b109-4d297dbb9c7a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:35:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1458: 321 pgs: 321 active+clean; 372 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 8.3 MiB/s wr, 292 op/s
Nov 25 16:35:37 compute-0 nova_compute[254092]: 2025-11-25 16:35:37.383 254096 DEBUG nova.scheduler.client.report [None req-9dc5481a-895c-4623-b109-4d297dbb9c7a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:35:37 compute-0 nova_compute[254092]: 2025-11-25 16:35:37.406 254096 DEBUG oslo_concurrency.lockutils [None req-9dc5481a-895c-4623-b109-4d297dbb9c7a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:37 compute-0 nova_compute[254092]: 2025-11-25 16:35:37.440 254096 INFO nova.scheduler.client.report [None req-9dc5481a-895c-4623-b109-4d297dbb9c7a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Deleted allocations for instance 763ace49-fa88-443d-9733-e919b6f86fab
Nov 25 16:35:37 compute-0 podman[304070]: 2025-11-25 16:35:37.470733289 +0000 UTC m=+0.048017143 container create a7d8d998cd595a786be3b97c4e3fe880a49b3b1cea7979498f511c136632e34e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:35:37 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1061810996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:35:37 compute-0 nova_compute[254092]: 2025-11-25 16:35:37.496 254096 DEBUG nova.network.neutron [None req-8c06458f-1ab1-4491-9fb6-1a15af2f04dd b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Updating instance_info_cache with network_info: [{"id": "660536bc-d4bf-4a4b-9515-06043951c25e", "address": "fa:16:3e:10:46:64", "network": {"id": "3960d4c5-60d7-49e3-b26d-f1317dd96f9f", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1770094643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d7c4dbc1eb44f39aa7ccb9b6363e554", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660536bc-d4", "ovs_interfaceid": "660536bc-d4bf-4a4b-9515-06043951c25e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:35:37 compute-0 systemd[1]: Started libpod-conmon-a7d8d998cd595a786be3b97c4e3fe880a49b3b1cea7979498f511c136632e34e.scope.
Nov 25 16:35:37 compute-0 nova_compute[254092]: 2025-11-25 16:35:37.506 254096 DEBUG oslo_concurrency.lockutils [None req-9dc5481a-895c-4623-b109-4d297dbb9c7a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "763ace49-fa88-443d-9733-e919b6f86fab" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.925s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:37 compute-0 nova_compute[254092]: 2025-11-25 16:35:37.521 254096 DEBUG oslo_concurrency.lockutils [None req-8c06458f-1ab1-4491-9fb6-1a15af2f04dd b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Releasing lock "refresh_cache-a3d5d205-98f0-4820-a96c-7f3e59d0cdd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:35:37 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:35:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c15d6b9062d4a29b3e55dfa2f9cf97f03a46bf0ba7ff987f335b874469962989/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:35:37 compute-0 podman[304070]: 2025-11-25 16:35:37.448028323 +0000 UTC m=+0.025312197 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:35:37 compute-0 podman[304070]: 2025-11-25 16:35:37.545787481 +0000 UTC m=+0.123071355 container init a7d8d998cd595a786be3b97c4e3fe880a49b3b1cea7979498f511c136632e34e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 16:35:37 compute-0 podman[304070]: 2025-11-25 16:35:37.551330191 +0000 UTC m=+0.128614045 container start a7d8d998cd595a786be3b97c4e3fe880a49b3b1cea7979498f511c136632e34e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 16:35:37 compute-0 neutron-haproxy-ovnmeta-d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2[304085]: [NOTICE]   (304089) : New worker (304091) forked
Nov 25 16:35:37 compute-0 neutron-haproxy-ovnmeta-d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2[304085]: [NOTICE]   (304089) : Loading success.
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:37.632 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 12524556-7486-4f17-95f0-2984a51a4542 in datapath 82742f46-fb6e-443e-a99d-84c5367a4ccd unbound from our chassis
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:37.636 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 82742f46-fb6e-443e-a99d-84c5367a4ccd
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:37.659 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3841d7ee-caba-4283-b88c-e55807bb369e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:37.695 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[468d0db0-1655-42a5-afb0-32fde9680202]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:37.699 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a350025e-7699-4122-a8cd-a4cf3966ffc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:37.732 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1514c721-a6fb-4640-93b4-61e976d08fc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:37.750 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d0ad71db-9799-43b3-80b7-81f49e42102c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap82742f46-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:00:df:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 83], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 494588, 'reachable_time': 26292, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304105, 'error': None, 'target': 'ovnmeta-82742f46-fb6e-443e-a99d-84c5367a4ccd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:37.771 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4cfcf162-e272-4f1b-8129-862825290c6e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap82742f46-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 494599, 'tstamp': 494599}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304106, 'error': None, 'target': 'ovnmeta-82742f46-fb6e-443e-a99d-84c5367a4ccd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap82742f46-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 494601, 'tstamp': 494601}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304106, 'error': None, 'target': 'ovnmeta-82742f46-fb6e-443e-a99d-84c5367a4ccd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:37.773 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap82742f46-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:37 compute-0 nova_compute[254092]: 2025-11-25 16:35:37.817 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:37 compute-0 nova_compute[254092]: 2025-11-25 16:35:37.822 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:37.823 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap82742f46-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:37.823 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:37.823 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap82742f46-f0, col_values=(('external_ids', {'iface-id': '639a1689-3ed6-4bc6-98a0-e7a7773b6e05'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:37.824 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:35:37 compute-0 nova_compute[254092]: 2025-11-25 16:35:37.842 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:37 compute-0 nova_compute[254092]: 2025-11-25 16:35:37.856 254096 DEBUG oslo_concurrency.lockutils [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "1f3bb552-1d70-4ad1-9304-9ed3896db3d4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:37 compute-0 nova_compute[254092]: 2025-11-25 16:35:37.856 254096 DEBUG oslo_concurrency.lockutils [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "1f3bb552-1d70-4ad1-9304-9ed3896db3d4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:37 compute-0 nova_compute[254092]: 2025-11-25 16:35:37.880 254096 DEBUG nova.compute.manager [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:35:37 compute-0 nova_compute[254092]: 2025-11-25 16:35:37.951 254096 DEBUG oslo_concurrency.lockutils [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:37 compute-0 nova_compute[254092]: 2025-11-25 16:35:37.952 254096 DEBUG oslo_concurrency.lockutils [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:37 compute-0 nova_compute[254092]: 2025-11-25 16:35:37.961 254096 DEBUG nova.virt.hardware [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:35:37 compute-0 nova_compute[254092]: 2025-11-25 16:35:37.961 254096 INFO nova.compute.claims [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.128 254096 DEBUG oslo_concurrency.processutils [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.213 254096 DEBUG nova.compute.manager [req-17e5f638-8e85-45cd-bbfc-e5828f0de0a9 req-d29a31f7-999a-4bcb-bbcc-80d308149268 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Received event network-vif-plugged-12524556-7486-4f17-95f0-2984a51a4542 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.214 254096 DEBUG oslo_concurrency.lockutils [req-17e5f638-8e85-45cd-bbfc-e5828f0de0a9 req-d29a31f7-999a-4bcb-bbcc-80d308149268 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8457c008-75d8-4c24-9ae2-6b8a526312ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.214 254096 DEBUG oslo_concurrency.lockutils [req-17e5f638-8e85-45cd-bbfc-e5828f0de0a9 req-d29a31f7-999a-4bcb-bbcc-80d308149268 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8457c008-75d8-4c24-9ae2-6b8a526312ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.214 254096 DEBUG oslo_concurrency.lockutils [req-17e5f638-8e85-45cd-bbfc-e5828f0de0a9 req-d29a31f7-999a-4bcb-bbcc-80d308149268 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8457c008-75d8-4c24-9ae2-6b8a526312ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.214 254096 DEBUG nova.compute.manager [req-17e5f638-8e85-45cd-bbfc-e5828f0de0a9 req-d29a31f7-999a-4bcb-bbcc-80d308149268 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] No waiting events found dispatching network-vif-plugged-12524556-7486-4f17-95f0-2984a51a4542 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.215 254096 WARNING nova.compute.manager [req-17e5f638-8e85-45cd-bbfc-e5828f0de0a9 req-d29a31f7-999a-4bcb-bbcc-80d308149268 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Received unexpected event network-vif-plugged-12524556-7486-4f17-95f0-2984a51a4542 for instance with vm_state active and task_state None.
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.215 254096 DEBUG nova.compute.manager [req-17e5f638-8e85-45cd-bbfc-e5828f0de0a9 req-d29a31f7-999a-4bcb-bbcc-80d308149268 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Received event network-vif-plugged-18d58a6f-2179-462b-993c-b9ce6369673f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.215 254096 DEBUG oslo_concurrency.lockutils [req-17e5f638-8e85-45cd-bbfc-e5828f0de0a9 req-d29a31f7-999a-4bcb-bbcc-80d308149268 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7a89b21c-79db-4e5f-88fd-35557c8c15ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.215 254096 DEBUG oslo_concurrency.lockutils [req-17e5f638-8e85-45cd-bbfc-e5828f0de0a9 req-d29a31f7-999a-4bcb-bbcc-80d308149268 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7a89b21c-79db-4e5f-88fd-35557c8c15ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.216 254096 DEBUG oslo_concurrency.lockutils [req-17e5f638-8e85-45cd-bbfc-e5828f0de0a9 req-d29a31f7-999a-4bcb-bbcc-80d308149268 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7a89b21c-79db-4e5f-88fd-35557c8c15ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.216 254096 DEBUG nova.compute.manager [req-17e5f638-8e85-45cd-bbfc-e5828f0de0a9 req-d29a31f7-999a-4bcb-bbcc-80d308149268 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Processing event network-vif-plugged-18d58a6f-2179-462b-993c-b9ce6369673f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.216 254096 DEBUG nova.compute.manager [req-17e5f638-8e85-45cd-bbfc-e5828f0de0a9 req-d29a31f7-999a-4bcb-bbcc-80d308149268 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Received event network-vif-plugged-18d58a6f-2179-462b-993c-b9ce6369673f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.216 254096 DEBUG oslo_concurrency.lockutils [req-17e5f638-8e85-45cd-bbfc-e5828f0de0a9 req-d29a31f7-999a-4bcb-bbcc-80d308149268 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7a89b21c-79db-4e5f-88fd-35557c8c15ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.216 254096 DEBUG oslo_concurrency.lockutils [req-17e5f638-8e85-45cd-bbfc-e5828f0de0a9 req-d29a31f7-999a-4bcb-bbcc-80d308149268 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7a89b21c-79db-4e5f-88fd-35557c8c15ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.217 254096 DEBUG oslo_concurrency.lockutils [req-17e5f638-8e85-45cd-bbfc-e5828f0de0a9 req-d29a31f7-999a-4bcb-bbcc-80d308149268 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7a89b21c-79db-4e5f-88fd-35557c8c15ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.217 254096 DEBUG nova.compute.manager [req-17e5f638-8e85-45cd-bbfc-e5828f0de0a9 req-d29a31f7-999a-4bcb-bbcc-80d308149268 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] No waiting events found dispatching network-vif-plugged-18d58a6f-2179-462b-993c-b9ce6369673f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.217 254096 WARNING nova.compute.manager [req-17e5f638-8e85-45cd-bbfc-e5828f0de0a9 req-d29a31f7-999a-4bcb-bbcc-80d308149268 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Received unexpected event network-vif-plugged-18d58a6f-2179-462b-993c-b9ce6369673f for instance with vm_state building and task_state spawning.
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.217 254096 DEBUG nova.compute.manager [req-17e5f638-8e85-45cd-bbfc-e5828f0de0a9 req-d29a31f7-999a-4bcb-bbcc-80d308149268 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Received event network-vif-plugged-12524556-7486-4f17-95f0-2984a51a4542 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.217 254096 DEBUG oslo_concurrency.lockutils [req-17e5f638-8e85-45cd-bbfc-e5828f0de0a9 req-d29a31f7-999a-4bcb-bbcc-80d308149268 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8457c008-75d8-4c24-9ae2-6b8a526312ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.217 254096 DEBUG oslo_concurrency.lockutils [req-17e5f638-8e85-45cd-bbfc-e5828f0de0a9 req-d29a31f7-999a-4bcb-bbcc-80d308149268 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8457c008-75d8-4c24-9ae2-6b8a526312ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.218 254096 DEBUG oslo_concurrency.lockutils [req-17e5f638-8e85-45cd-bbfc-e5828f0de0a9 req-d29a31f7-999a-4bcb-bbcc-80d308149268 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8457c008-75d8-4c24-9ae2-6b8a526312ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.218 254096 DEBUG nova.compute.manager [req-17e5f638-8e85-45cd-bbfc-e5828f0de0a9 req-d29a31f7-999a-4bcb-bbcc-80d308149268 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] No waiting events found dispatching network-vif-plugged-12524556-7486-4f17-95f0-2984a51a4542 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.218 254096 WARNING nova.compute.manager [req-17e5f638-8e85-45cd-bbfc-e5828f0de0a9 req-d29a31f7-999a-4bcb-bbcc-80d308149268 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Received unexpected event network-vif-plugged-12524556-7486-4f17-95f0-2984a51a4542 for instance with vm_state active and task_state None.
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.218 254096 DEBUG nova.compute.manager [req-17e5f638-8e85-45cd-bbfc-e5828f0de0a9 req-d29a31f7-999a-4bcb-bbcc-80d308149268 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Received event network-vif-plugged-12524556-7486-4f17-95f0-2984a51a4542 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.218 254096 DEBUG oslo_concurrency.lockutils [req-17e5f638-8e85-45cd-bbfc-e5828f0de0a9 req-d29a31f7-999a-4bcb-bbcc-80d308149268 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8457c008-75d8-4c24-9ae2-6b8a526312ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.218 254096 DEBUG oslo_concurrency.lockutils [req-17e5f638-8e85-45cd-bbfc-e5828f0de0a9 req-d29a31f7-999a-4bcb-bbcc-80d308149268 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8457c008-75d8-4c24-9ae2-6b8a526312ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.219 254096 DEBUG oslo_concurrency.lockutils [req-17e5f638-8e85-45cd-bbfc-e5828f0de0a9 req-d29a31f7-999a-4bcb-bbcc-80d308149268 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8457c008-75d8-4c24-9ae2-6b8a526312ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.219 254096 DEBUG nova.compute.manager [req-17e5f638-8e85-45cd-bbfc-e5828f0de0a9 req-d29a31f7-999a-4bcb-bbcc-80d308149268 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] No waiting events found dispatching network-vif-plugged-12524556-7486-4f17-95f0-2984a51a4542 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.219 254096 WARNING nova.compute.manager [req-17e5f638-8e85-45cd-bbfc-e5828f0de0a9 req-d29a31f7-999a-4bcb-bbcc-80d308149268 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Received unexpected event network-vif-plugged-12524556-7486-4f17-95f0-2984a51a4542 for instance with vm_state active and task_state None.
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.220 254096 DEBUG nova.compute.manager [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.223 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088538.223344, 7a89b21c-79db-4e5f-88fd-35557c8c15ff => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.223 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] VM Resumed (Lifecycle Event)
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.238 254096 DEBUG nova.virt.libvirt.driver [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.242 254096 INFO nova.virt.libvirt.driver [-] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Instance spawned successfully.
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.242 254096 DEBUG nova.virt.libvirt.driver [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.247 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.250 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.268 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.279 254096 DEBUG nova.virt.libvirt.driver [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.280 254096 DEBUG nova.virt.libvirt.driver [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.280 254096 DEBUG nova.virt.libvirt.driver [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.281 254096 DEBUG nova.virt.libvirt.driver [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.281 254096 DEBUG nova.virt.libvirt.driver [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.282 254096 DEBUG nova.virt.libvirt.driver [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.447 254096 INFO nova.compute.manager [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Took 17.79 seconds to spawn the instance on the hypervisor.
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.448 254096 DEBUG nova.compute.manager [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:35:38 compute-0 ceph-mon[74985]: pgmap v1458: 321 pgs: 321 active+clean; 372 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 8.3 MiB/s wr, 292 op/s
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.530 254096 INFO nova.compute.manager [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Took 18.86 seconds to build instance.
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.577 254096 DEBUG oslo_concurrency.lockutils [None req-8184a1c0-ce25-41bd-8665-e339291ed61a 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Lock "7a89b21c-79db-4e5f-88fd-35557c8c15ff" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.982s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:38 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:35:38 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/812201672' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.643 254096 DEBUG oslo_concurrency.processutils [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.649 254096 DEBUG nova.compute.provider_tree [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.662 254096 DEBUG nova.scheduler.client.report [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.705 254096 DEBUG oslo_concurrency.lockutils [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.706 254096 DEBUG nova.compute.manager [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.924 254096 DEBUG nova.compute.manager [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.925 254096 DEBUG nova.network.neutron [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.976 254096 INFO nova.virt.libvirt.driver [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:35:38 compute-0 nova_compute[254092]: 2025-11-25 16:35:38.994 254096 DEBUG nova.compute.manager [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:35:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:35:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Nov 25 16:35:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Nov 25 16:35:39 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Nov 25 16:35:39 compute-0 nova_compute[254092]: 2025-11-25 16:35:39.317 254096 DEBUG nova.compute.manager [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:35:39 compute-0 nova_compute[254092]: 2025-11-25 16:35:39.319 254096 DEBUG nova.virt.libvirt.driver [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:35:39 compute-0 nova_compute[254092]: 2025-11-25 16:35:39.319 254096 INFO nova.virt.libvirt.driver [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Creating image(s)
Nov 25 16:35:39 compute-0 nova_compute[254092]: 2025-11-25 16:35:39.339 254096 DEBUG nova.storage.rbd_utils [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image 1f3bb552-1d70-4ad1-9304-9ed3896db3d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:35:39 compute-0 nova_compute[254092]: 2025-11-25 16:35:39.363 254096 DEBUG nova.storage.rbd_utils [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image 1f3bb552-1d70-4ad1-9304-9ed3896db3d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:35:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1460: 321 pgs: 321 active+clean; 372 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 6.8 MiB/s wr, 199 op/s
Nov 25 16:35:39 compute-0 nova_compute[254092]: 2025-11-25 16:35:39.385 254096 DEBUG nova.storage.rbd_utils [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image 1f3bb552-1d70-4ad1-9304-9ed3896db3d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:35:39 compute-0 nova_compute[254092]: 2025-11-25 16:35:39.389 254096 DEBUG oslo_concurrency.processutils [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:35:39 compute-0 nova_compute[254092]: 2025-11-25 16:35:39.437 254096 DEBUG nova.policy [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '75e65df891a54a2caabb073a427430b9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8bd01e0913564ac783fae350d6861e24', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:35:39 compute-0 nova_compute[254092]: 2025-11-25 16:35:39.447 254096 INFO nova.virt.libvirt.driver [-] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Instance destroyed successfully.
Nov 25 16:35:39 compute-0 nova_compute[254092]: 2025-11-25 16:35:39.448 254096 DEBUG nova.objects.instance [None req-8c06458f-1ab1-4491-9fb6-1a15af2f04dd b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lazy-loading 'resources' on Instance uuid a3d5d205-98f0-4820-a96c-7f3e59d0cdd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:35:39 compute-0 nova_compute[254092]: 2025-11-25 16:35:39.464 254096 DEBUG nova.virt.libvirt.vif [None req-8c06458f-1ab1-4491-9fb6-1a15af2f04dd b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:33:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-2038779180',display_name='tempest-ServersNegativeTestJSON-server-2038779180',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-2038779180',id=37,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:33:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='2d7c4dbc1eb44f39aa7ccb9b6363e554',ramdisk_id='',reservation_id='r-j2n2s8mj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-549107942',owner_user_name='tempest-ServersNegativeTestJSON-549107942-project-member',shelved_at='2025-11-25T16:35:35.720387',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='6d9352ea-7650-4e12-a1b0-1f5e5bc16789'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:35:28Z,user_data=None,user_id='b228702c02db4cb69105bb4c939c15d7',uuid=a3d5d205-98f0-4820-a96c-7f3e59d0cdd9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "660536bc-d4bf-4a4b-9515-06043951c25e", "address": "fa:16:3e:10:46:64", "network": {"id": "3960d4c5-60d7-49e3-b26d-f1317dd96f9f", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1770094643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d7c4dbc1eb44f39aa7ccb9b6363e554", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660536bc-d4", "ovs_interfaceid": "660536bc-d4bf-4a4b-9515-06043951c25e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:35:39 compute-0 nova_compute[254092]: 2025-11-25 16:35:39.465 254096 DEBUG nova.network.os_vif_util [None req-8c06458f-1ab1-4491-9fb6-1a15af2f04dd b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Converting VIF {"id": "660536bc-d4bf-4a4b-9515-06043951c25e", "address": "fa:16:3e:10:46:64", "network": {"id": "3960d4c5-60d7-49e3-b26d-f1317dd96f9f", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1770094643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d7c4dbc1eb44f39aa7ccb9b6363e554", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660536bc-d4", "ovs_interfaceid": "660536bc-d4bf-4a4b-9515-06043951c25e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:35:39 compute-0 nova_compute[254092]: 2025-11-25 16:35:39.466 254096 DEBUG nova.network.os_vif_util [None req-8c06458f-1ab1-4491-9fb6-1a15af2f04dd b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:46:64,bridge_name='br-int',has_traffic_filtering=True,id=660536bc-d4bf-4a4b-9515-06043951c25e,network=Network(3960d4c5-60d7-49e3-b26d-f1317dd96f9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap660536bc-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:35:39 compute-0 nova_compute[254092]: 2025-11-25 16:35:39.467 254096 DEBUG os_vif [None req-8c06458f-1ab1-4491-9fb6-1a15af2f04dd b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:46:64,bridge_name='br-int',has_traffic_filtering=True,id=660536bc-d4bf-4a4b-9515-06043951c25e,network=Network(3960d4c5-60d7-49e3-b26d-f1317dd96f9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap660536bc-d4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:35:39 compute-0 nova_compute[254092]: 2025-11-25 16:35:39.472 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:39 compute-0 nova_compute[254092]: 2025-11-25 16:35:39.473 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap660536bc-d4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:39 compute-0 nova_compute[254092]: 2025-11-25 16:35:39.479 254096 DEBUG oslo_concurrency.processutils [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:35:39 compute-0 nova_compute[254092]: 2025-11-25 16:35:39.480 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:39 compute-0 nova_compute[254092]: 2025-11-25 16:35:39.482 254096 DEBUG oslo_concurrency.lockutils [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:39 compute-0 nova_compute[254092]: 2025-11-25 16:35:39.483 254096 DEBUG oslo_concurrency.lockutils [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:39 compute-0 nova_compute[254092]: 2025-11-25 16:35:39.484 254096 DEBUG oslo_concurrency.lockutils [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:39 compute-0 nova_compute[254092]: 2025-11-25 16:35:39.518 254096 DEBUG nova.storage.rbd_utils [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image 1f3bb552-1d70-4ad1-9304-9ed3896db3d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:35:39 compute-0 nova_compute[254092]: 2025-11-25 16:35:39.522 254096 DEBUG oslo_concurrency.processutils [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 1f3bb552-1d70-4ad1-9304-9ed3896db3d4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:35:39 compute-0 nova_compute[254092]: 2025-11-25 16:35:39.559 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:35:39 compute-0 nova_compute[254092]: 2025-11-25 16:35:39.563 254096 INFO os_vif [None req-8c06458f-1ab1-4491-9fb6-1a15af2f04dd b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:46:64,bridge_name='br-int',has_traffic_filtering=True,id=660536bc-d4bf-4a4b-9515-06043951c25e,network=Network(3960d4c5-60d7-49e3-b26d-f1317dd96f9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap660536bc-d4')
Nov 25 16:35:39 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/812201672' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:35:39 compute-0 ceph-mon[74985]: osdmap e180: 3 total, 3 up, 3 in
Nov 25 16:35:39 compute-0 nova_compute[254092]: 2025-11-25 16:35:39.933 254096 DEBUG nova.compute.manager [req-2a436a35-ed35-416c-bd8a-801b02bf5864 req-2fea2bfa-08be-47b1-87b6-f25f04672057 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 763ace49-fa88-443d-9733-e919b6f86fab] Received event network-vif-deleted-1baf23f4-ec91-408d-b406-37757eba550e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:39 compute-0 nova_compute[254092]: 2025-11-25 16:35:39.933 254096 DEBUG nova.compute.manager [req-2a436a35-ed35-416c-bd8a-801b02bf5864 req-2fea2bfa-08be-47b1-87b6-f25f04672057 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Received event network-vif-plugged-80f0ea34-88eb-4091-912e-db28507e1f4b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:39 compute-0 nova_compute[254092]: 2025-11-25 16:35:39.933 254096 DEBUG oslo_concurrency.lockutils [req-2a436a35-ed35-416c-bd8a-801b02bf5864 req-2fea2bfa-08be-47b1-87b6-f25f04672057 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7a89b21c-79db-4e5f-88fd-35557c8c15ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:39 compute-0 nova_compute[254092]: 2025-11-25 16:35:39.934 254096 DEBUG oslo_concurrency.lockutils [req-2a436a35-ed35-416c-bd8a-801b02bf5864 req-2fea2bfa-08be-47b1-87b6-f25f04672057 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7a89b21c-79db-4e5f-88fd-35557c8c15ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:39 compute-0 nova_compute[254092]: 2025-11-25 16:35:39.934 254096 DEBUG oslo_concurrency.lockutils [req-2a436a35-ed35-416c-bd8a-801b02bf5864 req-2fea2bfa-08be-47b1-87b6-f25f04672057 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7a89b21c-79db-4e5f-88fd-35557c8c15ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:39 compute-0 nova_compute[254092]: 2025-11-25 16:35:39.934 254096 DEBUG nova.compute.manager [req-2a436a35-ed35-416c-bd8a-801b02bf5864 req-2fea2bfa-08be-47b1-87b6-f25f04672057 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] No waiting events found dispatching network-vif-plugged-80f0ea34-88eb-4091-912e-db28507e1f4b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:35:39 compute-0 nova_compute[254092]: 2025-11-25 16:35:39.934 254096 WARNING nova.compute.manager [req-2a436a35-ed35-416c-bd8a-801b02bf5864 req-2fea2bfa-08be-47b1-87b6-f25f04672057 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Received unexpected event network-vif-plugged-80f0ea34-88eb-4091-912e-db28507e1f4b for instance with vm_state active and task_state None.
Nov 25 16:35:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:35:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:35:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:35:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:35:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:35:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.050 254096 DEBUG oslo_concurrency.processutils [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 1f3bb552-1d70-4ad1-9304-9ed3896db3d4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:35:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:35:40
Nov 25 16:35:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:35:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:35:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['vms', 'default.rgw.log', 'default.rgw.control', 'volumes', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', 'backups', '.rgw.root']
Nov 25 16:35:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.132 254096 DEBUG nova.storage.rbd_utils [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] resizing rbd image 1f3bb552-1d70-4ad1-9304-9ed3896db3d4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.260 254096 DEBUG nova.network.neutron [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Successfully created port: 1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.271 254096 DEBUG nova.objects.instance [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lazy-loading 'migration_context' on Instance uuid 1f3bb552-1d70-4ad1-9304-9ed3896db3d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.285 254096 DEBUG nova.virt.libvirt.driver [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.286 254096 DEBUG nova.virt.libvirt.driver [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Ensure instance console log exists: /var/lib/nova/instances/1f3bb552-1d70-4ad1-9304-9ed3896db3d4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.287 254096 DEBUG oslo_concurrency.lockutils [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.287 254096 DEBUG oslo_concurrency.lockutils [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.289 254096 DEBUG oslo_concurrency.lockutils [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:35:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:35:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:35:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:35:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.372 254096 INFO nova.virt.libvirt.driver [None req-8c06458f-1ab1-4491-9fb6-1a15af2f04dd b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Deleting instance files /var/lib/nova/instances/a3d5d205-98f0-4820-a96c-7f3e59d0cdd9_del
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.373 254096 INFO nova.virt.libvirt.driver [None req-8c06458f-1ab1-4491-9fb6-1a15af2f04dd b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Deletion of /var/lib/nova/instances/a3d5d205-98f0-4820-a96c-7f3e59d0cdd9_del complete
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.386 254096 DEBUG nova.compute.manager [req-2db4b77b-8f2c-49df-a72d-8c30a367695e req-76de8ce1-192d-4064-bff3-bd34488bd7c0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Received event network-changed-12524556-7486-4f17-95f0-2984a51a4542 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.387 254096 DEBUG nova.compute.manager [req-2db4b77b-8f2c-49df-a72d-8c30a367695e req-76de8ce1-192d-4064-bff3-bd34488bd7c0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Refreshing instance network info cache due to event network-changed-12524556-7486-4f17-95f0-2984a51a4542. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.387 254096 DEBUG oslo_concurrency.lockutils [req-2db4b77b-8f2c-49df-a72d-8c30a367695e req-76de8ce1-192d-4064-bff3-bd34488bd7c0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-8457c008-75d8-4c24-9ae2-6b8a526312ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.387 254096 DEBUG oslo_concurrency.lockutils [req-2db4b77b-8f2c-49df-a72d-8c30a367695e req-76de8ce1-192d-4064-bff3-bd34488bd7c0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-8457c008-75d8-4c24-9ae2-6b8a526312ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.388 254096 DEBUG nova.network.neutron [req-2db4b77b-8f2c-49df-a72d-8c30a367695e req-76de8ce1-192d-4064-bff3-bd34488bd7c0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Refreshing network info cache for port 12524556-7486-4f17-95f0-2984a51a4542 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.461 254096 INFO nova.scheduler.client.report [None req-8c06458f-1ab1-4491-9fb6-1a15af2f04dd b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Deleted allocations for instance a3d5d205-98f0-4820-a96c-7f3e59d0cdd9
Nov 25 16:35:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:35:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:35:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:35:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:35:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.505 254096 DEBUG oslo_concurrency.lockutils [None req-8c06458f-1ab1-4491-9fb6-1a15af2f04dd b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.505 254096 DEBUG oslo_concurrency.lockutils [None req-8c06458f-1ab1-4491-9fb6-1a15af2f04dd b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.561 254096 DEBUG oslo_concurrency.lockutils [None req-df179f2a-c58d-4642-b913-0366c935c7d5 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Acquiring lock "8457c008-75d8-4c24-9ae2-6b8a526312ce" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.562 254096 DEBUG oslo_concurrency.lockutils [None req-df179f2a-c58d-4642-b913-0366c935c7d5 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Lock "8457c008-75d8-4c24-9ae2-6b8a526312ce" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.562 254096 DEBUG oslo_concurrency.lockutils [None req-df179f2a-c58d-4642-b913-0366c935c7d5 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Acquiring lock "8457c008-75d8-4c24-9ae2-6b8a526312ce-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.562 254096 DEBUG oslo_concurrency.lockutils [None req-df179f2a-c58d-4642-b913-0366c935c7d5 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Lock "8457c008-75d8-4c24-9ae2-6b8a526312ce-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.562 254096 DEBUG oslo_concurrency.lockutils [None req-df179f2a-c58d-4642-b913-0366c935c7d5 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Lock "8457c008-75d8-4c24-9ae2-6b8a526312ce-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.563 254096 INFO nova.compute.manager [None req-df179f2a-c58d-4642-b913-0366c935c7d5 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Terminating instance
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.564 254096 DEBUG nova.compute.manager [None req-df179f2a-c58d-4642-b913-0366c935c7d5 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:35:40 compute-0 kernel: tap12524556-74 (unregistering): left promiscuous mode
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.621 254096 DEBUG oslo_concurrency.processutils [None req-8c06458f-1ab1-4491-9fb6-1a15af2f04dd b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:35:40 compute-0 NetworkManager[48891]: <info>  [1764088540.6301] device (tap12524556-74): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:35:40 compute-0 ovn_controller[153477]: 2025-11-25T16:35:40Z|00354|binding|INFO|Releasing lport 12524556-7486-4f17-95f0-2984a51a4542 from this chassis (sb_readonly=0)
Nov 25 16:35:40 compute-0 ovn_controller[153477]: 2025-11-25T16:35:40Z|00355|binding|INFO|Setting lport 12524556-7486-4f17-95f0-2984a51a4542 down in Southbound
Nov 25 16:35:40 compute-0 ovn_controller[153477]: 2025-11-25T16:35:40Z|00356|binding|INFO|Removing iface tap12524556-74 ovn-installed in OVS
Nov 25 16:35:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:40.649 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:8f:b2 10.100.0.5'], port_security=['fa:16:3e:3a:8f:b2 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '8457c008-75d8-4c24-9ae2-6b8a526312ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-82742f46-fb6e-443e-a99d-84c5367a4ccd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fe7901baa563491c8609089aa4334bf1', 'neutron:revision_number': '8', 'neutron:security_group_ids': '10995e2d-e9a2-4098-859c-5dcd4d5f741f 35930f85-6093-4ee4-bf72-0f56aa3fb356 fe878550-8968-4c37-9d8f-71556c702131', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d6fc76d1-a8a4-44cf-ab8f-0304e50e033c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=12524556-7486-4f17-95f0-2984a51a4542) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:35:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:40.650 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 12524556-7486-4f17-95f0-2984a51a4542 in datapath 82742f46-fb6e-443e-a99d-84c5367a4ccd unbound from our chassis
Nov 25 16:35:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:40.652 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 82742f46-fb6e-443e-a99d-84c5367a4ccd
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.655 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.658 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:40.671 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[95c8fbd9-2341-4f19-bbd3-7d8a2a673656]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:40 compute-0 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000002b.scope: Deactivated successfully.
Nov 25 16:35:40 compute-0 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000002b.scope: Consumed 4.433s CPU time.
Nov 25 16:35:40 compute-0 systemd-machined[216343]: Machine qemu-51-instance-0000002b terminated.
Nov 25 16:35:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:40.700 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[2a0f0ec8-b8b7-474b-b4aa-28ce15acbd29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:40.704 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[21b82d41-51a8-43e0-8821-838374000402]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.717 254096 DEBUG oslo_concurrency.lockutils [None req-ce54065d-5b9c-49bf-b7a3-9b3c8681093c 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Acquiring lock "7a89b21c-79db-4e5f-88fd-35557c8c15ff" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.718 254096 DEBUG oslo_concurrency.lockutils [None req-ce54065d-5b9c-49bf-b7a3-9b3c8681093c 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Lock "7a89b21c-79db-4e5f-88fd-35557c8c15ff" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.718 254096 DEBUG oslo_concurrency.lockutils [None req-ce54065d-5b9c-49bf-b7a3-9b3c8681093c 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Acquiring lock "7a89b21c-79db-4e5f-88fd-35557c8c15ff-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.718 254096 DEBUG oslo_concurrency.lockutils [None req-ce54065d-5b9c-49bf-b7a3-9b3c8681093c 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Lock "7a89b21c-79db-4e5f-88fd-35557c8c15ff-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.718 254096 DEBUG oslo_concurrency.lockutils [None req-ce54065d-5b9c-49bf-b7a3-9b3c8681093c 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Lock "7a89b21c-79db-4e5f-88fd-35557c8c15ff-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.720 254096 INFO nova.compute.manager [None req-ce54065d-5b9c-49bf-b7a3-9b3c8681093c 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Terminating instance
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.720 254096 DEBUG nova.compute.manager [None req-ce54065d-5b9c-49bf-b7a3-9b3c8681093c 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:35:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:40.730 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[fe67cbc4-96b2-48b3-9034-5eba132cf0f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:40.748 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b38fd0b5-dd39-47c9-9991-5a5aaa89d43c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap82742f46-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:00:df:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 83], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 494588, 'reachable_time': 26292, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304329, 'error': None, 'target': 'ovnmeta-82742f46-fb6e-443e-a99d-84c5367a4ccd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:40 compute-0 kernel: tap18d58a6f-21 (unregistering): left promiscuous mode
Nov 25 16:35:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:40.765 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1f7e30fb-0fe9-4f62-aaf2-4d7c418db206]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap82742f46-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 494599, 'tstamp': 494599}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304331, 'error': None, 'target': 'ovnmeta-82742f46-fb6e-443e-a99d-84c5367a4ccd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap82742f46-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 494601, 'tstamp': 494601}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304331, 'error': None, 'target': 'ovnmeta-82742f46-fb6e-443e-a99d-84c5367a4ccd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:40 compute-0 NetworkManager[48891]: <info>  [1764088540.7664] device (tap18d58a6f-21): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:35:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:40.766 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap82742f46-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.767 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:40 compute-0 ovn_controller[153477]: 2025-11-25T16:35:40Z|00357|binding|INFO|Releasing lport 18d58a6f-2179-462b-993c-b9ce6369673f from this chassis (sb_readonly=0)
Nov 25 16:35:40 compute-0 ovn_controller[153477]: 2025-11-25T16:35:40Z|00358|binding|INFO|Setting lport 18d58a6f-2179-462b-993c-b9ce6369673f down in Southbound
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.771 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:40 compute-0 ovn_controller[153477]: 2025-11-25T16:35:40Z|00359|binding|INFO|Removing iface tap18d58a6f-21 ovn-installed in OVS
Nov 25 16:35:40 compute-0 ceph-mon[74985]: pgmap v1460: 321 pgs: 321 active+clean; 372 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 6.8 MiB/s wr, 199 op/s
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.775 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:40 compute-0 kernel: tap80f0ea34-88 (unregistering): left promiscuous mode
Nov 25 16:35:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:40.781 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:0b:6c 10.100.0.32'], port_security=['fa:16:3e:8d:0b:6c 10.100.0.32'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.32/24', 'neutron:device_id': '7a89b21c-79db-4e5f-88fd-35557c8c15ff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3355a3ac2d6d4d5ea7f590f1e2ae3492', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fc762173-4fc8-48f6-b331-e2cedac5f655', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd0e5ac9-ec42-429c-a5ae-2cbedc7268a3, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=18d58a6f-2179-462b-993c-b9ce6369673f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:35:40 compute-0 NetworkManager[48891]: <info>  [1764088540.7884] device (tap80f0ea34-88): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.798 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.802 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:40.804 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap82742f46-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:40.804 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:35:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:40.805 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap82742f46-f0, col_values=(('external_ids', {'iface-id': '639a1689-3ed6-4bc6-98a0-e7a7773b6e05'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:40.805 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:35:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:40.806 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 18d58a6f-2179-462b-993c-b9ce6369673f in datapath d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0 unbound from our chassis
Nov 25 16:35:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:40.807 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:35:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:40.807 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[46487e1e-85cd-4aa9-a7d2-b1e16286ba02]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:40.808 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0 namespace which is not needed anymore
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.812 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:40 compute-0 ovn_controller[153477]: 2025-11-25T16:35:40Z|00360|binding|INFO|Releasing lport 80f0ea34-88eb-4091-912e-db28507e1f4b from this chassis (sb_readonly=0)
Nov 25 16:35:40 compute-0 ovn_controller[153477]: 2025-11-25T16:35:40Z|00361|binding|INFO|Setting lport 80f0ea34-88eb-4091-912e-db28507e1f4b down in Southbound
Nov 25 16:35:40 compute-0 ovn_controller[153477]: 2025-11-25T16:35:40Z|00362|binding|INFO|Removing iface tap80f0ea34-88 ovn-installed in OVS
Nov 25 16:35:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:40.821 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:54:4a 10.100.1.67'], port_security=['fa:16:3e:cd:54:4a 10.100.1.67'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.67/24', 'neutron:device_id': '7a89b21c-79db-4e5f-88fd-35557c8c15ff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3355a3ac2d6d4d5ea7f590f1e2ae3492', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fc762173-4fc8-48f6-b331-e2cedac5f655', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6f17bf35-a68d-4a50-857b-e307d883bf20, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=80f0ea34-88eb-4091-912e-db28507e1f4b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.829 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.831 254096 INFO nova.virt.libvirt.driver [-] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Instance destroyed successfully.
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.831 254096 DEBUG nova.objects.instance [None req-df179f2a-c58d-4642-b913-0366c935c7d5 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Lazy-loading 'resources' on Instance uuid 8457c008-75d8-4c24-9ae2-6b8a526312ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:35:40 compute-0 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000002c.scope: Deactivated successfully.
Nov 25 16:35:40 compute-0 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000002c.scope: Consumed 2.921s CPU time.
Nov 25 16:35:40 compute-0 systemd-machined[216343]: Machine qemu-50-instance-0000002c terminated.
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.846 254096 DEBUG nova.virt.libvirt.vif [None req-df179f2a-c58d-4642-b913-0366c935c7d5 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:35:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-960547750',display_name='tempest-SecurityGroupsTestJSON-server-960547750',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-960547750',id=43,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:35:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fe7901baa563491c8609089aa4334bf1',ramdisk_id='',reservation_id='r-ckiy49j5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-716261307',owner_user_name='tempest-SecurityGroupsTestJSON-716261307-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:35:36Z,user_data=None,user_id='650f3d90afcd4e85b7042981dc353a2d',uuid=8457c008-75d8-4c24-9ae2-6b8a526312ce,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "12524556-7486-4f17-95f0-2984a51a4542", "address": "fa:16:3e:3a:8f:b2", "network": {"id": "82742f46-fb6e-443e-a99d-84c5367a4ccd", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-522194148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe7901baa563491c8609089aa4334bf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12524556-74", "ovs_interfaceid": "12524556-7486-4f17-95f0-2984a51a4542", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.846 254096 DEBUG nova.network.os_vif_util [None req-df179f2a-c58d-4642-b913-0366c935c7d5 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Converting VIF {"id": "12524556-7486-4f17-95f0-2984a51a4542", "address": "fa:16:3e:3a:8f:b2", "network": {"id": "82742f46-fb6e-443e-a99d-84c5367a4ccd", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-522194148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe7901baa563491c8609089aa4334bf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12524556-74", "ovs_interfaceid": "12524556-7486-4f17-95f0-2984a51a4542", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.847 254096 DEBUG nova.network.os_vif_util [None req-df179f2a-c58d-4642-b913-0366c935c7d5 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3a:8f:b2,bridge_name='br-int',has_traffic_filtering=True,id=12524556-7486-4f17-95f0-2984a51a4542,network=Network(82742f46-fb6e-443e-a99d-84c5367a4ccd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12524556-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.847 254096 DEBUG os_vif [None req-df179f2a-c58d-4642-b913-0366c935c7d5 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3a:8f:b2,bridge_name='br-int',has_traffic_filtering=True,id=12524556-7486-4f17-95f0-2984a51a4542,network=Network(82742f46-fb6e-443e-a99d-84c5367a4ccd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12524556-74') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.849 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.849 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap12524556-74, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.852 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.858 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.861 254096 INFO os_vif [None req-df179f2a-c58d-4642-b913-0366c935c7d5 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3a:8f:b2,bridge_name='br-int',has_traffic_filtering=True,id=12524556-7486-4f17-95f0-2984a51a4542,network=Network(82742f46-fb6e-443e-a99d-84c5367a4ccd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12524556-74')
Nov 25 16:35:40 compute-0 neutron-haproxy-ovnmeta-d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0[303977]: [NOTICE]   (303988) : haproxy version is 2.8.14-c23fe91
Nov 25 16:35:40 compute-0 neutron-haproxy-ovnmeta-d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0[303977]: [NOTICE]   (303988) : path to executable is /usr/sbin/haproxy
Nov 25 16:35:40 compute-0 neutron-haproxy-ovnmeta-d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0[303977]: [WARNING]  (303988) : Exiting Master process...
Nov 25 16:35:40 compute-0 neutron-haproxy-ovnmeta-d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0[303977]: [ALERT]    (303988) : Current worker (303990) exited with code 143 (Terminated)
Nov 25 16:35:40 compute-0 neutron-haproxy-ovnmeta-d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0[303977]: [WARNING]  (303988) : All workers exited. Exiting... (0)
Nov 25 16:35:40 compute-0 systemd[1]: libpod-9f2c2a27f09626d90455530999ac8e8e46b080f177da202a06f271f99cd39cc2.scope: Deactivated successfully.
Nov 25 16:35:40 compute-0 podman[304403]: 2025-11-25 16:35:40.944348839 +0000 UTC m=+0.048521685 container died 9f2c2a27f09626d90455530999ac8e8e46b080f177da202a06f271f99cd39cc2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 16:35:40 compute-0 NetworkManager[48891]: <info>  [1764088540.9505] manager: (tap80f0ea34-88): new Tun device (/org/freedesktop/NetworkManager/Devices/161)
Nov 25 16:35:40 compute-0 sudo[304412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.971 254096 INFO nova.virt.libvirt.driver [-] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Instance destroyed successfully.
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.971 254096 DEBUG nova.objects.instance [None req-ce54065d-5b9c-49bf-b7a3-9b3c8681093c 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Lazy-loading 'resources' on Instance uuid 7a89b21c-79db-4e5f-88fd-35557c8c15ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:35:40 compute-0 sudo[304412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:35:40 compute-0 sudo[304412]: pam_unix(sudo:session): session closed for user root
Nov 25 16:35:40 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9f2c2a27f09626d90455530999ac8e8e46b080f177da202a06f271f99cd39cc2-userdata-shm.mount: Deactivated successfully.
Nov 25 16:35:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e4f214b571585608778b7ef11399703f4b6169c3938b0f2f6be2e8bd70c665c-merged.mount: Deactivated successfully.
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.987 254096 DEBUG nova.virt.libvirt.vif [None req-ce54065d-5b9c-49bf-b7a3-9b3c8681093c 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:35:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1960371828',display_name='tempest-ServersTestMultiNic-server-1960371828',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1960371828',id=44,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:35:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3355a3ac2d6d4d5ea7f590f1e2ae3492',ramdisk_id='',reservation_id='r-d7qhokwn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-809789765',owner_user_name='tempest-ServersTestMultiNic-809789765-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:35:38Z,user_data=None,user_id='70f122fae9644012973ae5b56c1d459b',uuid=7a89b21c-79db-4e5f-88fd-35557c8c15ff,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "18d58a6f-2179-462b-993c-b9ce6369673f", "address": "fa:16:3e:8d:0b:6c", "network": {"id": "d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-23122884", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.32", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18d58a6f-21", "ovs_interfaceid": "18d58a6f-2179-462b-993c-b9ce6369673f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.988 254096 DEBUG nova.network.os_vif_util [None req-ce54065d-5b9c-49bf-b7a3-9b3c8681093c 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Converting VIF {"id": "18d58a6f-2179-462b-993c-b9ce6369673f", "address": "fa:16:3e:8d:0b:6c", "network": {"id": "d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-23122884", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.32", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18d58a6f-21", "ovs_interfaceid": "18d58a6f-2179-462b-993c-b9ce6369673f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.988 254096 DEBUG nova.network.os_vif_util [None req-ce54065d-5b9c-49bf-b7a3-9b3c8681093c 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:0b:6c,bridge_name='br-int',has_traffic_filtering=True,id=18d58a6f-2179-462b-993c-b9ce6369673f,network=Network(d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18d58a6f-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.988 254096 DEBUG os_vif [None req-ce54065d-5b9c-49bf-b7a3-9b3c8681093c 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:0b:6c,bridge_name='br-int',has_traffic_filtering=True,id=18d58a6f-2179-462b-993c-b9ce6369673f,network=Network(d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18d58a6f-21') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.990 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.990 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18d58a6f-21, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.991 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:40 compute-0 nova_compute[254092]: 2025-11-25 16:35:40.998 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:41 compute-0 nova_compute[254092]: 2025-11-25 16:35:41.000 254096 INFO os_vif [None req-ce54065d-5b9c-49bf-b7a3-9b3c8681093c 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:0b:6c,bridge_name='br-int',has_traffic_filtering=True,id=18d58a6f-2179-462b-993c-b9ce6369673f,network=Network(d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18d58a6f-21')
Nov 25 16:35:41 compute-0 nova_compute[254092]: 2025-11-25 16:35:41.001 254096 DEBUG nova.virt.libvirt.vif [None req-ce54065d-5b9c-49bf-b7a3-9b3c8681093c 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:35:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1960371828',display_name='tempest-ServersTestMultiNic-server-1960371828',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1960371828',id=44,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:35:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3355a3ac2d6d4d5ea7f590f1e2ae3492',ramdisk_id='',reservation_id='r-d7qhokwn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-809789765',owner_user_name='tempest-ServersTestMultiNic-809789765-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:35:38Z,user_data=None,user_id='70f122fae9644012973ae5b56c1d459b',uuid=7a89b21c-79db-4e5f-88fd-35557c8c15ff,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "80f0ea34-88eb-4091-912e-db28507e1f4b", "address": "fa:16:3e:cd:54:4a", "network": {"id": "d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-666818867", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80f0ea34-88", "ovs_interfaceid": "80f0ea34-88eb-4091-912e-db28507e1f4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:35:41 compute-0 nova_compute[254092]: 2025-11-25 16:35:41.002 254096 DEBUG nova.network.os_vif_util [None req-ce54065d-5b9c-49bf-b7a3-9b3c8681093c 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Converting VIF {"id": "80f0ea34-88eb-4091-912e-db28507e1f4b", "address": "fa:16:3e:cd:54:4a", "network": {"id": "d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-666818867", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.67", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3355a3ac2d6d4d5ea7f590f1e2ae3492", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80f0ea34-88", "ovs_interfaceid": "80f0ea34-88eb-4091-912e-db28507e1f4b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:35:41 compute-0 nova_compute[254092]: 2025-11-25 16:35:41.002 254096 DEBUG nova.network.os_vif_util [None req-ce54065d-5b9c-49bf-b7a3-9b3c8681093c 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:54:4a,bridge_name='br-int',has_traffic_filtering=True,id=80f0ea34-88eb-4091-912e-db28507e1f4b,network=Network(d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap80f0ea34-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:35:41 compute-0 nova_compute[254092]: 2025-11-25 16:35:41.003 254096 DEBUG os_vif [None req-ce54065d-5b9c-49bf-b7a3-9b3c8681093c 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:54:4a,bridge_name='br-int',has_traffic_filtering=True,id=80f0ea34-88eb-4091-912e-db28507e1f4b,network=Network(d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap80f0ea34-88') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:35:41 compute-0 podman[304403]: 2025-11-25 16:35:41.003788489 +0000 UTC m=+0.107961335 container cleanup 9f2c2a27f09626d90455530999ac8e8e46b080f177da202a06f271f99cd39cc2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:35:41 compute-0 nova_compute[254092]: 2025-11-25 16:35:41.004 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:41 compute-0 nova_compute[254092]: 2025-11-25 16:35:41.005 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap80f0ea34-88, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:41 compute-0 nova_compute[254092]: 2025-11-25 16:35:41.007 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:41 compute-0 nova_compute[254092]: 2025-11-25 16:35:41.009 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:35:41 compute-0 nova_compute[254092]: 2025-11-25 16:35:41.011 254096 INFO os_vif [None req-ce54065d-5b9c-49bf-b7a3-9b3c8681093c 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:54:4a,bridge_name='br-int',has_traffic_filtering=True,id=80f0ea34-88eb-4091-912e-db28507e1f4b,network=Network(d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap80f0ea34-88')
Nov 25 16:35:41 compute-0 systemd[1]: libpod-conmon-9f2c2a27f09626d90455530999ac8e8e46b080f177da202a06f271f99cd39cc2.scope: Deactivated successfully.
Nov 25 16:35:41 compute-0 sudo[304475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:35:41 compute-0 sudo[304475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:35:41 compute-0 sudo[304475]: pam_unix(sudo:session): session closed for user root
Nov 25 16:35:41 compute-0 podman[304487]: 2025-11-25 16:35:41.088045632 +0000 UTC m=+0.055199437 container remove 9f2c2a27f09626d90455530999ac8e8e46b080f177da202a06f271f99cd39cc2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 16:35:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:35:41 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2094826353' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:35:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:41.094 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b2a52825-8399-440a-b7e8-aa9184889cab]: (4, ('Tue Nov 25 04:35:40 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0 (9f2c2a27f09626d90455530999ac8e8e46b080f177da202a06f271f99cd39cc2)\n9f2c2a27f09626d90455530999ac8e8e46b080f177da202a06f271f99cd39cc2\nTue Nov 25 04:35:41 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0 (9f2c2a27f09626d90455530999ac8e8e46b080f177da202a06f271f99cd39cc2)\n9f2c2a27f09626d90455530999ac8e8e46b080f177da202a06f271f99cd39cc2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:41.095 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c968678d-4350-4f6f-8e81-f046603e0a77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:41.096 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd8a880f3-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:41 compute-0 kernel: tapd8a880f3-c0: left promiscuous mode
Nov 25 16:35:41 compute-0 nova_compute[254092]: 2025-11-25 16:35:41.113 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:41 compute-0 nova_compute[254092]: 2025-11-25 16:35:41.115 254096 DEBUG oslo_concurrency.processutils [None req-8c06458f-1ab1-4491-9fb6-1a15af2f04dd b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:35:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:41.116 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[02522f9b-a237-4eaa-a84a-e7719b1d121a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:41 compute-0 sudo[304535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:35:41 compute-0 nova_compute[254092]: 2025-11-25 16:35:41.123 254096 DEBUG nova.compute.provider_tree [None req-8c06458f-1ab1-4491-9fb6-1a15af2f04dd b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:35:41 compute-0 sudo[304535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:35:41 compute-0 sudo[304535]: pam_unix(sudo:session): session closed for user root
Nov 25 16:35:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:41.131 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7ac73f0e-d912-4160-84de-1aa0e516fca1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:41.132 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f3eaae30-f147-4a73-9486-03d350801cfd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:41 compute-0 nova_compute[254092]: 2025-11-25 16:35:41.136 254096 DEBUG nova.scheduler.client.report [None req-8c06458f-1ab1-4491-9fb6-1a15af2f04dd b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:35:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:41.149 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8368b4e4-551a-4fb9-bffd-01a806baac89]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499353, 'reachable_time': 35120, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304566, 'error': None, 'target': 'ovnmeta-d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:41 compute-0 systemd[1]: run-netns-ovnmeta\x2dd8a880f3\x2dc52f\x2d4b0a\x2da545\x2d5eeb5ec2daa0.mount: Deactivated successfully.
Nov 25 16:35:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:41.153 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d8a880f3-c52f-4b0a-a545-5eeb5ec2daa0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:35:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:41.154 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[db15f92e-ed4d-4e5d-a263-23f24b858737]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:41.155 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 80f0ea34-88eb-4091-912e-db28507e1f4b in datapath d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2 unbound from our chassis
Nov 25 16:35:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:41.156 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:35:41 compute-0 nova_compute[254092]: 2025-11-25 16:35:41.156 254096 DEBUG oslo_concurrency.lockutils [None req-8c06458f-1ab1-4491-9fb6-1a15af2f04dd b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:41.157 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[103547a5-f336-4dcd-936c-2c02eec081a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:41.158 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2 namespace which is not needed anymore
Nov 25 16:35:41 compute-0 sudo[304565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 16:35:41 compute-0 sudo[304565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:35:41 compute-0 nova_compute[254092]: 2025-11-25 16:35:41.218 254096 DEBUG oslo_concurrency.lockutils [None req-8c06458f-1ab1-4491-9fb6-1a15af2f04dd b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 16.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:41 compute-0 neutron-haproxy-ovnmeta-d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2[304085]: [NOTICE]   (304089) : haproxy version is 2.8.14-c23fe91
Nov 25 16:35:41 compute-0 neutron-haproxy-ovnmeta-d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2[304085]: [NOTICE]   (304089) : path to executable is /usr/sbin/haproxy
Nov 25 16:35:41 compute-0 neutron-haproxy-ovnmeta-d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2[304085]: [WARNING]  (304089) : Exiting Master process...
Nov 25 16:35:41 compute-0 neutron-haproxy-ovnmeta-d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2[304085]: [WARNING]  (304089) : Exiting Master process...
Nov 25 16:35:41 compute-0 neutron-haproxy-ovnmeta-d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2[304085]: [ALERT]    (304089) : Current worker (304091) exited with code 143 (Terminated)
Nov 25 16:35:41 compute-0 neutron-haproxy-ovnmeta-d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2[304085]: [WARNING]  (304089) : All workers exited. Exiting... (0)
Nov 25 16:35:41 compute-0 systemd[1]: libpod-a7d8d998cd595a786be3b97c4e3fe880a49b3b1cea7979498f511c136632e34e.scope: Deactivated successfully.
Nov 25 16:35:41 compute-0 podman[304610]: 2025-11-25 16:35:41.300014333 +0000 UTC m=+0.054862347 container died a7d8d998cd595a786be3b97c4e3fe880a49b3b1cea7979498f511c136632e34e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:35:41 compute-0 nova_compute[254092]: 2025-11-25 16:35:41.319 254096 INFO nova.virt.libvirt.driver [None req-df179f2a-c58d-4642-b913-0366c935c7d5 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Deleting instance files /var/lib/nova/instances/8457c008-75d8-4c24-9ae2-6b8a526312ce_del
Nov 25 16:35:41 compute-0 nova_compute[254092]: 2025-11-25 16:35:41.321 254096 INFO nova.virt.libvirt.driver [None req-df179f2a-c58d-4642-b913-0366c935c7d5 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Deletion of /var/lib/nova/instances/8457c008-75d8-4c24-9ae2-6b8a526312ce_del complete
Nov 25 16:35:41 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a7d8d998cd595a786be3b97c4e3fe880a49b3b1cea7979498f511c136632e34e-userdata-shm.mount: Deactivated successfully.
Nov 25 16:35:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-c15d6b9062d4a29b3e55dfa2f9cf97f03a46bf0ba7ff987f335b874469962989-merged.mount: Deactivated successfully.
Nov 25 16:35:41 compute-0 podman[304610]: 2025-11-25 16:35:41.333245373 +0000 UTC m=+0.088093367 container cleanup a7d8d998cd595a786be3b97c4e3fe880a49b3b1cea7979498f511c136632e34e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 16:35:41 compute-0 systemd[1]: libpod-conmon-a7d8d998cd595a786be3b97c4e3fe880a49b3b1cea7979498f511c136632e34e.scope: Deactivated successfully.
Nov 25 16:35:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1461: 321 pgs: 321 active+clean; 390 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 7.2 MiB/s wr, 436 op/s
Nov 25 16:35:41 compute-0 nova_compute[254092]: 2025-11-25 16:35:41.386 254096 INFO nova.compute.manager [None req-df179f2a-c58d-4642-b913-0366c935c7d5 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Took 0.82 seconds to destroy the instance on the hypervisor.
Nov 25 16:35:41 compute-0 nova_compute[254092]: 2025-11-25 16:35:41.388 254096 DEBUG oslo.service.loopingcall [None req-df179f2a-c58d-4642-b913-0366c935c7d5 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:35:41 compute-0 nova_compute[254092]: 2025-11-25 16:35:41.388 254096 DEBUG nova.compute.manager [-] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:35:41 compute-0 nova_compute[254092]: 2025-11-25 16:35:41.388 254096 DEBUG nova.network.neutron [-] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:35:41 compute-0 nova_compute[254092]: 2025-11-25 16:35:41.397 254096 INFO nova.virt.libvirt.driver [None req-ce54065d-5b9c-49bf-b7a3-9b3c8681093c 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Deleting instance files /var/lib/nova/instances/7a89b21c-79db-4e5f-88fd-35557c8c15ff_del
Nov 25 16:35:41 compute-0 nova_compute[254092]: 2025-11-25 16:35:41.398 254096 INFO nova.virt.libvirt.driver [None req-ce54065d-5b9c-49bf-b7a3-9b3c8681093c 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Deletion of /var/lib/nova/instances/7a89b21c-79db-4e5f-88fd-35557c8c15ff_del complete
Nov 25 16:35:41 compute-0 podman[304649]: 2025-11-25 16:35:41.404416011 +0000 UTC m=+0.046719556 container remove a7d8d998cd595a786be3b97c4e3fe880a49b3b1cea7979498f511c136632e34e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 16:35:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:41.410 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[296a429b-efb4-4d86-946e-fb1004cfc5d3]: (4, ('Tue Nov 25 04:35:41 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2 (a7d8d998cd595a786be3b97c4e3fe880a49b3b1cea7979498f511c136632e34e)\na7d8d998cd595a786be3b97c4e3fe880a49b3b1cea7979498f511c136632e34e\nTue Nov 25 04:35:41 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2 (a7d8d998cd595a786be3b97c4e3fe880a49b3b1cea7979498f511c136632e34e)\na7d8d998cd595a786be3b97c4e3fe880a49b3b1cea7979498f511c136632e34e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:41.412 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d5f870f5-53eb-4474-a0d5-260f8ce82f5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:41.413 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd30f2e6f-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:41 compute-0 kernel: tapd30f2e6f-40: left promiscuous mode
Nov 25 16:35:41 compute-0 nova_compute[254092]: 2025-11-25 16:35:41.418 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:41 compute-0 nova_compute[254092]: 2025-11-25 16:35:41.432 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:41.436 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[565d393b-edfa-42af-961f-31a7abb9e0f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:41.451 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e4c4cdf9-d9f1-40a6-8459-463a01723318]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:41.452 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[46a39987-a28f-4312-aa1e-3862e3950f4a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:41 compute-0 nova_compute[254092]: 2025-11-25 16:35:41.466 254096 INFO nova.compute.manager [None req-ce54065d-5b9c-49bf-b7a3-9b3c8681093c 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Took 0.75 seconds to destroy the instance on the hypervisor.
Nov 25 16:35:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:41.466 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e9ddf070-9bbe-4968-85e8-e40625fa8743]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499441, 'reachable_time': 32678, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304666, 'error': None, 'target': 'ovnmeta-d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:41 compute-0 nova_compute[254092]: 2025-11-25 16:35:41.466 254096 DEBUG oslo.service.loopingcall [None req-ce54065d-5b9c-49bf-b7a3-9b3c8681093c 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:35:41 compute-0 nova_compute[254092]: 2025-11-25 16:35:41.467 254096 DEBUG nova.compute.manager [-] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:35:41 compute-0 nova_compute[254092]: 2025-11-25 16:35:41.467 254096 DEBUG nova.network.neutron [-] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:35:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:41.468 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d30f2e6f-48e1-4522-975e-ec3d0f1d5dc2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:35:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:41.468 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[6ecfc99a-cfd2-4bf5-ae42-7b62d754e8e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:41 compute-0 sudo[304565]: pam_unix(sudo:session): session closed for user root
Nov 25 16:35:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:35:41 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:35:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 16:35:41 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:35:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 16:35:41 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:35:41 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev cf3e5fe6-40c3-4548-9c9c-1742e30a8833 does not exist
Nov 25 16:35:41 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 51337795-8852-4d03-b777-d36f16027703 does not exist
Nov 25 16:35:41 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev ed8fe603-c6ff-4d94-858d-5fda0b26256a does not exist
Nov 25 16:35:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 16:35:41 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:35:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 16:35:41 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:35:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:35:41 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:35:41 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2094826353' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:35:41 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:35:41 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:35:41 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:35:41 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:35:41 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:35:41 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:35:41 compute-0 sudo[304684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:35:41 compute-0 sudo[304684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:35:41 compute-0 sudo[304684]: pam_unix(sudo:session): session closed for user root
Nov 25 16:35:41 compute-0 sudo[304709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:35:41 compute-0 sudo[304709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:35:41 compute-0 sudo[304709]: pam_unix(sudo:session): session closed for user root
Nov 25 16:35:41 compute-0 sudo[304734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:35:41 compute-0 sudo[304734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:35:41 compute-0 sudo[304734]: pam_unix(sudo:session): session closed for user root
Nov 25 16:35:41 compute-0 systemd[1]: run-netns-ovnmeta\x2dd30f2e6f\x2d48e1\x2d4522\x2d975e\x2dec3d0f1d5dc2.mount: Deactivated successfully.
Nov 25 16:35:41 compute-0 sudo[304759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 16:35:41 compute-0 sudo[304759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:35:42 compute-0 nova_compute[254092]: 2025-11-25 16:35:42.029 254096 DEBUG nova.compute.manager [req-a0542406-3906-4554-a154-155fafec72d7 req-bdaf52b2-5ff4-4b72-9370-72284cdcfae8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Received event network-vif-unplugged-80f0ea34-88eb-4091-912e-db28507e1f4b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:42 compute-0 nova_compute[254092]: 2025-11-25 16:35:42.029 254096 DEBUG oslo_concurrency.lockutils [req-a0542406-3906-4554-a154-155fafec72d7 req-bdaf52b2-5ff4-4b72-9370-72284cdcfae8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7a89b21c-79db-4e5f-88fd-35557c8c15ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:42 compute-0 nova_compute[254092]: 2025-11-25 16:35:42.029 254096 DEBUG oslo_concurrency.lockutils [req-a0542406-3906-4554-a154-155fafec72d7 req-bdaf52b2-5ff4-4b72-9370-72284cdcfae8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7a89b21c-79db-4e5f-88fd-35557c8c15ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:42 compute-0 nova_compute[254092]: 2025-11-25 16:35:42.030 254096 DEBUG oslo_concurrency.lockutils [req-a0542406-3906-4554-a154-155fafec72d7 req-bdaf52b2-5ff4-4b72-9370-72284cdcfae8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7a89b21c-79db-4e5f-88fd-35557c8c15ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:42 compute-0 nova_compute[254092]: 2025-11-25 16:35:42.030 254096 DEBUG nova.compute.manager [req-a0542406-3906-4554-a154-155fafec72d7 req-bdaf52b2-5ff4-4b72-9370-72284cdcfae8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] No waiting events found dispatching network-vif-unplugged-80f0ea34-88eb-4091-912e-db28507e1f4b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:35:42 compute-0 nova_compute[254092]: 2025-11-25 16:35:42.030 254096 DEBUG nova.compute.manager [req-a0542406-3906-4554-a154-155fafec72d7 req-bdaf52b2-5ff4-4b72-9370-72284cdcfae8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Received event network-vif-unplugged-80f0ea34-88eb-4091-912e-db28507e1f4b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:35:42 compute-0 nova_compute[254092]: 2025-11-25 16:35:42.186 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088527.1853058, a3d5d205-98f0-4820-a96c-7f3e59d0cdd9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:35:42 compute-0 nova_compute[254092]: 2025-11-25 16:35:42.186 254096 INFO nova.compute.manager [-] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] VM Stopped (Lifecycle Event)
Nov 25 16:35:42 compute-0 nova_compute[254092]: 2025-11-25 16:35:42.212 254096 DEBUG nova.compute.manager [None req-2e400b34-0bc1-4493-8707-383c931833ea - - - - - -] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:35:42 compute-0 podman[304824]: 2025-11-25 16:35:42.309065556 +0000 UTC m=+0.038470663 container create 9ace49d62b4cef98f74252411606b8e4a4710ab1a32484411db0d0fca0edfdd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_albattani, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 16:35:42 compute-0 systemd[1]: Started libpod-conmon-9ace49d62b4cef98f74252411606b8e4a4710ab1a32484411db0d0fca0edfdd2.scope.
Nov 25 16:35:42 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:35:42 compute-0 podman[304824]: 2025-11-25 16:35:42.293762641 +0000 UTC m=+0.023167768 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:35:42 compute-0 podman[304824]: 2025-11-25 16:35:42.408343455 +0000 UTC m=+0.137748592 container init 9ace49d62b4cef98f74252411606b8e4a4710ab1a32484411db0d0fca0edfdd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_albattani, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 16:35:42 compute-0 podman[304824]: 2025-11-25 16:35:42.416249309 +0000 UTC m=+0.145654626 container start 9ace49d62b4cef98f74252411606b8e4a4710ab1a32484411db0d0fca0edfdd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_albattani, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:35:42 compute-0 podman[304824]: 2025-11-25 16:35:42.41885515 +0000 UTC m=+0.148260257 container attach 9ace49d62b4cef98f74252411606b8e4a4710ab1a32484411db0d0fca0edfdd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_albattani, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:35:42 compute-0 priceless_albattani[304840]: 167 167
Nov 25 16:35:42 compute-0 systemd[1]: libpod-9ace49d62b4cef98f74252411606b8e4a4710ab1a32484411db0d0fca0edfdd2.scope: Deactivated successfully.
Nov 25 16:35:42 compute-0 podman[304824]: 2025-11-25 16:35:42.423237778 +0000 UTC m=+0.152642885 container died 9ace49d62b4cef98f74252411606b8e4a4710ab1a32484411db0d0fca0edfdd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_albattani, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 16:35:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d9222623b9c0293cf84da495eff84df3a8ac8c7a7a69e50e574c3bf30f2aa64-merged.mount: Deactivated successfully.
Nov 25 16:35:42 compute-0 podman[304824]: 2025-11-25 16:35:42.459399867 +0000 UTC m=+0.188804974 container remove 9ace49d62b4cef98f74252411606b8e4a4710ab1a32484411db0d0fca0edfdd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_albattani, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:35:42 compute-0 systemd[1]: libpod-conmon-9ace49d62b4cef98f74252411606b8e4a4710ab1a32484411db0d0fca0edfdd2.scope: Deactivated successfully.
Nov 25 16:35:42 compute-0 podman[304864]: 2025-11-25 16:35:42.649383084 +0000 UTC m=+0.037959399 container create 06506c043bb0e62c9f1aad8d2a78eb666bdcc128ccccb1945107d77f80bcc1da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_keldysh, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 16:35:42 compute-0 nova_compute[254092]: 2025-11-25 16:35:42.668 254096 DEBUG nova.network.neutron [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Successfully updated port: 1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:35:42 compute-0 nova_compute[254092]: 2025-11-25 16:35:42.679 254096 DEBUG oslo_concurrency.lockutils [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "refresh_cache-1f3bb552-1d70-4ad1-9304-9ed3896db3d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:35:42 compute-0 nova_compute[254092]: 2025-11-25 16:35:42.680 254096 DEBUG oslo_concurrency.lockutils [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquired lock "refresh_cache-1f3bb552-1d70-4ad1-9304-9ed3896db3d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:35:42 compute-0 nova_compute[254092]: 2025-11-25 16:35:42.680 254096 DEBUG nova.network.neutron [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:35:42 compute-0 systemd[1]: Started libpod-conmon-06506c043bb0e62c9f1aad8d2a78eb666bdcc128ccccb1945107d77f80bcc1da.scope.
Nov 25 16:35:42 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:35:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b4d3fdce26308bdeb91e03af04b17a0e97b6450ce0ecfc71d3a9580a6c039e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:35:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b4d3fdce26308bdeb91e03af04b17a0e97b6450ce0ecfc71d3a9580a6c039e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:35:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b4d3fdce26308bdeb91e03af04b17a0e97b6450ce0ecfc71d3a9580a6c039e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:35:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b4d3fdce26308bdeb91e03af04b17a0e97b6450ce0ecfc71d3a9580a6c039e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:35:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b4d3fdce26308bdeb91e03af04b17a0e97b6450ce0ecfc71d3a9580a6c039e7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 16:35:42 compute-0 podman[304864]: 2025-11-25 16:35:42.633039611 +0000 UTC m=+0.021615946 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:35:42 compute-0 podman[304864]: 2025-11-25 16:35:42.746033092 +0000 UTC m=+0.134609467 container init 06506c043bb0e62c9f1aad8d2a78eb666bdcc128ccccb1945107d77f80bcc1da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_keldysh, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 16:35:42 compute-0 podman[304864]: 2025-11-25 16:35:42.756117415 +0000 UTC m=+0.144693740 container start 06506c043bb0e62c9f1aad8d2a78eb666bdcc128ccccb1945107d77f80bcc1da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_keldysh, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 16:35:42 compute-0 podman[304864]: 2025-11-25 16:35:42.760958586 +0000 UTC m=+0.149534911 container attach 06506c043bb0e62c9f1aad8d2a78eb666bdcc128ccccb1945107d77f80bcc1da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_keldysh, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:35:42 compute-0 nova_compute[254092]: 2025-11-25 16:35:42.780 254096 DEBUG nova.compute.manager [req-a5c47495-2556-4e08-a68a-be14a0bbb0ac req-73d0a183-ddf6-4fd0-8310-5195526e3378 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Received event network-vif-unplugged-12524556-7486-4f17-95f0-2984a51a4542 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:42 compute-0 nova_compute[254092]: 2025-11-25 16:35:42.781 254096 DEBUG oslo_concurrency.lockutils [req-a5c47495-2556-4e08-a68a-be14a0bbb0ac req-73d0a183-ddf6-4fd0-8310-5195526e3378 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8457c008-75d8-4c24-9ae2-6b8a526312ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:42 compute-0 nova_compute[254092]: 2025-11-25 16:35:42.781 254096 DEBUG oslo_concurrency.lockutils [req-a5c47495-2556-4e08-a68a-be14a0bbb0ac req-73d0a183-ddf6-4fd0-8310-5195526e3378 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8457c008-75d8-4c24-9ae2-6b8a526312ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:42 compute-0 nova_compute[254092]: 2025-11-25 16:35:42.781 254096 DEBUG oslo_concurrency.lockutils [req-a5c47495-2556-4e08-a68a-be14a0bbb0ac req-73d0a183-ddf6-4fd0-8310-5195526e3378 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8457c008-75d8-4c24-9ae2-6b8a526312ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:42 compute-0 nova_compute[254092]: 2025-11-25 16:35:42.781 254096 DEBUG nova.compute.manager [req-a5c47495-2556-4e08-a68a-be14a0bbb0ac req-73d0a183-ddf6-4fd0-8310-5195526e3378 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] No waiting events found dispatching network-vif-unplugged-12524556-7486-4f17-95f0-2984a51a4542 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:35:42 compute-0 nova_compute[254092]: 2025-11-25 16:35:42.781 254096 DEBUG nova.compute.manager [req-a5c47495-2556-4e08-a68a-be14a0bbb0ac req-73d0a183-ddf6-4fd0-8310-5195526e3378 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Received event network-vif-unplugged-12524556-7486-4f17-95f0-2984a51a4542 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:35:42 compute-0 nova_compute[254092]: 2025-11-25 16:35:42.782 254096 DEBUG nova.compute.manager [req-a5c47495-2556-4e08-a68a-be14a0bbb0ac req-73d0a183-ddf6-4fd0-8310-5195526e3378 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Received event network-vif-plugged-12524556-7486-4f17-95f0-2984a51a4542 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:42 compute-0 nova_compute[254092]: 2025-11-25 16:35:42.782 254096 DEBUG oslo_concurrency.lockutils [req-a5c47495-2556-4e08-a68a-be14a0bbb0ac req-73d0a183-ddf6-4fd0-8310-5195526e3378 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8457c008-75d8-4c24-9ae2-6b8a526312ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:42 compute-0 nova_compute[254092]: 2025-11-25 16:35:42.782 254096 DEBUG oslo_concurrency.lockutils [req-a5c47495-2556-4e08-a68a-be14a0bbb0ac req-73d0a183-ddf6-4fd0-8310-5195526e3378 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8457c008-75d8-4c24-9ae2-6b8a526312ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:42 compute-0 nova_compute[254092]: 2025-11-25 16:35:42.782 254096 DEBUG oslo_concurrency.lockutils [req-a5c47495-2556-4e08-a68a-be14a0bbb0ac req-73d0a183-ddf6-4fd0-8310-5195526e3378 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8457c008-75d8-4c24-9ae2-6b8a526312ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:42 compute-0 nova_compute[254092]: 2025-11-25 16:35:42.782 254096 DEBUG nova.compute.manager [req-a5c47495-2556-4e08-a68a-be14a0bbb0ac req-73d0a183-ddf6-4fd0-8310-5195526e3378 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] No waiting events found dispatching network-vif-plugged-12524556-7486-4f17-95f0-2984a51a4542 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:35:42 compute-0 nova_compute[254092]: 2025-11-25 16:35:42.782 254096 WARNING nova.compute.manager [req-a5c47495-2556-4e08-a68a-be14a0bbb0ac req-73d0a183-ddf6-4fd0-8310-5195526e3378 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Received unexpected event network-vif-plugged-12524556-7486-4f17-95f0-2984a51a4542 for instance with vm_state active and task_state deleting.
Nov 25 16:35:42 compute-0 nova_compute[254092]: 2025-11-25 16:35:42.782 254096 DEBUG nova.compute.manager [req-a5c47495-2556-4e08-a68a-be14a0bbb0ac req-73d0a183-ddf6-4fd0-8310-5195526e3378 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Received event network-vif-unplugged-18d58a6f-2179-462b-993c-b9ce6369673f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:42 compute-0 nova_compute[254092]: 2025-11-25 16:35:42.783 254096 DEBUG oslo_concurrency.lockutils [req-a5c47495-2556-4e08-a68a-be14a0bbb0ac req-73d0a183-ddf6-4fd0-8310-5195526e3378 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7a89b21c-79db-4e5f-88fd-35557c8c15ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:42 compute-0 nova_compute[254092]: 2025-11-25 16:35:42.783 254096 DEBUG oslo_concurrency.lockutils [req-a5c47495-2556-4e08-a68a-be14a0bbb0ac req-73d0a183-ddf6-4fd0-8310-5195526e3378 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7a89b21c-79db-4e5f-88fd-35557c8c15ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:42 compute-0 nova_compute[254092]: 2025-11-25 16:35:42.783 254096 DEBUG oslo_concurrency.lockutils [req-a5c47495-2556-4e08-a68a-be14a0bbb0ac req-73d0a183-ddf6-4fd0-8310-5195526e3378 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7a89b21c-79db-4e5f-88fd-35557c8c15ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:42 compute-0 nova_compute[254092]: 2025-11-25 16:35:42.783 254096 DEBUG nova.compute.manager [req-a5c47495-2556-4e08-a68a-be14a0bbb0ac req-73d0a183-ddf6-4fd0-8310-5195526e3378 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] No waiting events found dispatching network-vif-unplugged-18d58a6f-2179-462b-993c-b9ce6369673f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:35:42 compute-0 nova_compute[254092]: 2025-11-25 16:35:42.783 254096 DEBUG nova.compute.manager [req-a5c47495-2556-4e08-a68a-be14a0bbb0ac req-73d0a183-ddf6-4fd0-8310-5195526e3378 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Received event network-vif-unplugged-18d58a6f-2179-462b-993c-b9ce6369673f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:35:42 compute-0 nova_compute[254092]: 2025-11-25 16:35:42.783 254096 DEBUG nova.compute.manager [req-a5c47495-2556-4e08-a68a-be14a0bbb0ac req-73d0a183-ddf6-4fd0-8310-5195526e3378 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Received event network-vif-plugged-18d58a6f-2179-462b-993c-b9ce6369673f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:42 compute-0 nova_compute[254092]: 2025-11-25 16:35:42.783 254096 DEBUG oslo_concurrency.lockutils [req-a5c47495-2556-4e08-a68a-be14a0bbb0ac req-73d0a183-ddf6-4fd0-8310-5195526e3378 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7a89b21c-79db-4e5f-88fd-35557c8c15ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:42 compute-0 nova_compute[254092]: 2025-11-25 16:35:42.784 254096 DEBUG oslo_concurrency.lockutils [req-a5c47495-2556-4e08-a68a-be14a0bbb0ac req-73d0a183-ddf6-4fd0-8310-5195526e3378 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7a89b21c-79db-4e5f-88fd-35557c8c15ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:42 compute-0 nova_compute[254092]: 2025-11-25 16:35:42.784 254096 DEBUG oslo_concurrency.lockutils [req-a5c47495-2556-4e08-a68a-be14a0bbb0ac req-73d0a183-ddf6-4fd0-8310-5195526e3378 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7a89b21c-79db-4e5f-88fd-35557c8c15ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:42 compute-0 nova_compute[254092]: 2025-11-25 16:35:42.784 254096 DEBUG nova.compute.manager [req-a5c47495-2556-4e08-a68a-be14a0bbb0ac req-73d0a183-ddf6-4fd0-8310-5195526e3378 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] No waiting events found dispatching network-vif-plugged-18d58a6f-2179-462b-993c-b9ce6369673f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:35:42 compute-0 nova_compute[254092]: 2025-11-25 16:35:42.784 254096 WARNING nova.compute.manager [req-a5c47495-2556-4e08-a68a-be14a0bbb0ac req-73d0a183-ddf6-4fd0-8310-5195526e3378 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Received unexpected event network-vif-plugged-18d58a6f-2179-462b-993c-b9ce6369673f for instance with vm_state active and task_state deleting.
Nov 25 16:35:42 compute-0 ceph-mon[74985]: pgmap v1461: 321 pgs: 321 active+clean; 390 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 7.2 MiB/s wr, 436 op/s
Nov 25 16:35:42 compute-0 nova_compute[254092]: 2025-11-25 16:35:42.845 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:42 compute-0 nova_compute[254092]: 2025-11-25 16:35:42.972 254096 DEBUG nova.network.neutron [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:35:43 compute-0 nova_compute[254092]: 2025-11-25 16:35:43.071 254096 DEBUG nova.network.neutron [req-2db4b77b-8f2c-49df-a72d-8c30a367695e req-76de8ce1-192d-4064-bff3-bd34488bd7c0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Updated VIF entry in instance network info cache for port 12524556-7486-4f17-95f0-2984a51a4542. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:35:43 compute-0 nova_compute[254092]: 2025-11-25 16:35:43.072 254096 DEBUG nova.network.neutron [req-2db4b77b-8f2c-49df-a72d-8c30a367695e req-76de8ce1-192d-4064-bff3-bd34488bd7c0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Updating instance_info_cache with network_info: [{"id": "12524556-7486-4f17-95f0-2984a51a4542", "address": "fa:16:3e:3a:8f:b2", "network": {"id": "82742f46-fb6e-443e-a99d-84c5367a4ccd", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-522194148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe7901baa563491c8609089aa4334bf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12524556-74", "ovs_interfaceid": "12524556-7486-4f17-95f0-2984a51a4542", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:35:43 compute-0 nova_compute[254092]: 2025-11-25 16:35:43.091 254096 DEBUG oslo_concurrency.lockutils [req-2db4b77b-8f2c-49df-a72d-8c30a367695e req-76de8ce1-192d-4064-bff3-bd34488bd7c0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-8457c008-75d8-4c24-9ae2-6b8a526312ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:35:43 compute-0 nova_compute[254092]: 2025-11-25 16:35:43.229 254096 DEBUG nova.network.neutron [-] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:35:43 compute-0 nova_compute[254092]: 2025-11-25 16:35:43.247 254096 INFO nova.compute.manager [-] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Took 1.86 seconds to deallocate network for instance.
Nov 25 16:35:43 compute-0 nova_compute[254092]: 2025-11-25 16:35:43.284 254096 DEBUG oslo_concurrency.lockutils [None req-df179f2a-c58d-4642-b913-0366c935c7d5 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:43 compute-0 nova_compute[254092]: 2025-11-25 16:35:43.285 254096 DEBUG oslo_concurrency.lockutils [None req-df179f2a-c58d-4642-b913-0366c935c7d5 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1462: 321 pgs: 321 active+clean; 390 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 9.3 MiB/s rd, 6.1 MiB/s wr, 369 op/s
Nov 25 16:35:43 compute-0 nova_compute[254092]: 2025-11-25 16:35:43.386 254096 DEBUG oslo_concurrency.processutils [None req-df179f2a-c58d-4642-b913-0366c935c7d5 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:35:43 compute-0 nova_compute[254092]: 2025-11-25 16:35:43.733 254096 DEBUG nova.network.neutron [-] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:35:43 compute-0 quirky_keldysh[304881]: --> passed data devices: 0 physical, 3 LVM
Nov 25 16:35:43 compute-0 quirky_keldysh[304881]: --> relative data size: 1.0
Nov 25 16:35:43 compute-0 quirky_keldysh[304881]: --> All data devices are unavailable
Nov 25 16:35:43 compute-0 nova_compute[254092]: 2025-11-25 16:35:43.750 254096 INFO nova.compute.manager [-] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Took 2.28 seconds to deallocate network for instance.
Nov 25 16:35:43 compute-0 systemd[1]: libpod-06506c043bb0e62c9f1aad8d2a78eb666bdcc128ccccb1945107d77f80bcc1da.scope: Deactivated successfully.
Nov 25 16:35:43 compute-0 podman[304864]: 2025-11-25 16:35:43.770012048 +0000 UTC m=+1.158588363 container died 06506c043bb0e62c9f1aad8d2a78eb666bdcc128ccccb1945107d77f80bcc1da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_keldysh, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:35:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:35:43 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3820542455' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:35:43 compute-0 nova_compute[254092]: 2025-11-25 16:35:43.833 254096 DEBUG oslo_concurrency.lockutils [None req-ce54065d-5b9c-49bf-b7a3-9b3c8681093c 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:43 compute-0 nova_compute[254092]: 2025-11-25 16:35:43.849 254096 DEBUG oslo_concurrency.processutils [None req-df179f2a-c58d-4642-b913-0366c935c7d5 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:35:43 compute-0 nova_compute[254092]: 2025-11-25 16:35:43.856 254096 DEBUG nova.compute.provider_tree [None req-df179f2a-c58d-4642-b913-0366c935c7d5 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:35:43 compute-0 nova_compute[254092]: 2025-11-25 16:35:43.867 254096 DEBUG nova.scheduler.client.report [None req-df179f2a-c58d-4642-b913-0366c935c7d5 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:35:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b4d3fdce26308bdeb91e03af04b17a0e97b6450ce0ecfc71d3a9580a6c039e7-merged.mount: Deactivated successfully.
Nov 25 16:35:43 compute-0 nova_compute[254092]: 2025-11-25 16:35:43.886 254096 DEBUG oslo_concurrency.lockutils [None req-df179f2a-c58d-4642-b913-0366c935c7d5 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:43 compute-0 nova_compute[254092]: 2025-11-25 16:35:43.888 254096 DEBUG oslo_concurrency.lockutils [None req-ce54065d-5b9c-49bf-b7a3-9b3c8681093c 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.055s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:43 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3820542455' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:35:43 compute-0 podman[304864]: 2025-11-25 16:35:43.951998088 +0000 UTC m=+1.340574403 container remove 06506c043bb0e62c9f1aad8d2a78eb666bdcc128ccccb1945107d77f80bcc1da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_keldysh, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:35:43 compute-0 sudo[304759]: pam_unix(sudo:session): session closed for user root
Nov 25 16:35:43 compute-0 nova_compute[254092]: 2025-11-25 16:35:43.991 254096 INFO nova.scheduler.client.report [None req-df179f2a-c58d-4642-b913-0366c935c7d5 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Deleted allocations for instance 8457c008-75d8-4c24-9ae2-6b8a526312ce
Nov 25 16:35:43 compute-0 systemd[1]: libpod-conmon-06506c043bb0e62c9f1aad8d2a78eb666bdcc128ccccb1945107d77f80bcc1da.scope: Deactivated successfully.
Nov 25 16:35:44 compute-0 nova_compute[254092]: 2025-11-25 16:35:44.042 254096 DEBUG oslo_concurrency.processutils [None req-ce54065d-5b9c-49bf-b7a3-9b3c8681093c 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:35:44 compute-0 sudo[304945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:35:44 compute-0 sudo[304945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:35:44 compute-0 sudo[304945]: pam_unix(sudo:session): session closed for user root
Nov 25 16:35:44 compute-0 nova_compute[254092]: 2025-11-25 16:35:44.073 254096 DEBUG oslo_concurrency.lockutils [None req-df179f2a-c58d-4642-b913-0366c935c7d5 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Lock "8457c008-75d8-4c24-9ae2-6b8a526312ce" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.511s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:44 compute-0 nova_compute[254092]: 2025-11-25 16:35:44.090 254096 DEBUG nova.network.neutron [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Updating instance_info_cache with network_info: [{"id": "1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d", "address": "fa:16:3e:cb:92:a2", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1aa5a9a0-4c", "ovs_interfaceid": "1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:35:44 compute-0 nova_compute[254092]: 2025-11-25 16:35:44.103 254096 DEBUG oslo_concurrency.lockutils [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Releasing lock "refresh_cache-1f3bb552-1d70-4ad1-9304-9ed3896db3d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:35:44 compute-0 nova_compute[254092]: 2025-11-25 16:35:44.103 254096 DEBUG nova.compute.manager [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Instance network_info: |[{"id": "1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d", "address": "fa:16:3e:cb:92:a2", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1aa5a9a0-4c", "ovs_interfaceid": "1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:35:44 compute-0 nova_compute[254092]: 2025-11-25 16:35:44.105 254096 DEBUG nova.virt.libvirt.driver [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Start _get_guest_xml network_info=[{"id": "1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d", "address": "fa:16:3e:cb:92:a2", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1aa5a9a0-4c", "ovs_interfaceid": "1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:35:44 compute-0 nova_compute[254092]: 2025-11-25 16:35:44.109 254096 WARNING nova.virt.libvirt.driver [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:35:44 compute-0 nova_compute[254092]: 2025-11-25 16:35:44.113 254096 DEBUG nova.virt.libvirt.host [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:35:44 compute-0 nova_compute[254092]: 2025-11-25 16:35:44.114 254096 DEBUG nova.virt.libvirt.host [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:35:44 compute-0 nova_compute[254092]: 2025-11-25 16:35:44.116 254096 DEBUG nova.virt.libvirt.host [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:35:44 compute-0 nova_compute[254092]: 2025-11-25 16:35:44.117 254096 DEBUG nova.virt.libvirt.host [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:35:44 compute-0 nova_compute[254092]: 2025-11-25 16:35:44.117 254096 DEBUG nova.virt.libvirt.driver [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:35:44 compute-0 nova_compute[254092]: 2025-11-25 16:35:44.117 254096 DEBUG nova.virt.hardware [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:35:44 compute-0 nova_compute[254092]: 2025-11-25 16:35:44.118 254096 DEBUG nova.virt.hardware [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:35:44 compute-0 nova_compute[254092]: 2025-11-25 16:35:44.118 254096 DEBUG nova.virt.hardware [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:35:44 compute-0 nova_compute[254092]: 2025-11-25 16:35:44.118 254096 DEBUG nova.virt.hardware [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:35:44 compute-0 nova_compute[254092]: 2025-11-25 16:35:44.118 254096 DEBUG nova.virt.hardware [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:35:44 compute-0 nova_compute[254092]: 2025-11-25 16:35:44.119 254096 DEBUG nova.virt.hardware [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:35:44 compute-0 nova_compute[254092]: 2025-11-25 16:35:44.119 254096 DEBUG nova.virt.hardware [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:35:44 compute-0 nova_compute[254092]: 2025-11-25 16:35:44.119 254096 DEBUG nova.virt.hardware [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:35:44 compute-0 nova_compute[254092]: 2025-11-25 16:35:44.119 254096 DEBUG nova.virt.hardware [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:35:44 compute-0 nova_compute[254092]: 2025-11-25 16:35:44.119 254096 DEBUG nova.virt.hardware [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:35:44 compute-0 nova_compute[254092]: 2025-11-25 16:35:44.120 254096 DEBUG nova.virt.hardware [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:35:44 compute-0 sudo[304971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:35:44 compute-0 nova_compute[254092]: 2025-11-25 16:35:44.122 254096 DEBUG oslo_concurrency.processutils [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:35:44 compute-0 sudo[304971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:35:44 compute-0 sudo[304971]: pam_unix(sudo:session): session closed for user root
Nov 25 16:35:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:35:44 compute-0 sudo[304997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:35:44 compute-0 sudo[304997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:35:44 compute-0 sudo[304997]: pam_unix(sudo:session): session closed for user root
Nov 25 16:35:44 compute-0 sudo[305041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 16:35:44 compute-0 sudo[305041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:35:44 compute-0 nova_compute[254092]: 2025-11-25 16:35:44.468 254096 DEBUG nova.compute.manager [req-b39a2fa7-1f3b-4bf7-b257-1f760c3988ab req-bd68328e-f33f-49af-984d-1c4981bff647 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Received event network-vif-plugged-80f0ea34-88eb-4091-912e-db28507e1f4b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:44 compute-0 nova_compute[254092]: 2025-11-25 16:35:44.469 254096 DEBUG oslo_concurrency.lockutils [req-b39a2fa7-1f3b-4bf7-b257-1f760c3988ab req-bd68328e-f33f-49af-984d-1c4981bff647 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7a89b21c-79db-4e5f-88fd-35557c8c15ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:44 compute-0 nova_compute[254092]: 2025-11-25 16:35:44.469 254096 DEBUG oslo_concurrency.lockutils [req-b39a2fa7-1f3b-4bf7-b257-1f760c3988ab req-bd68328e-f33f-49af-984d-1c4981bff647 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7a89b21c-79db-4e5f-88fd-35557c8c15ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:44 compute-0 nova_compute[254092]: 2025-11-25 16:35:44.470 254096 DEBUG oslo_concurrency.lockutils [req-b39a2fa7-1f3b-4bf7-b257-1f760c3988ab req-bd68328e-f33f-49af-984d-1c4981bff647 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7a89b21c-79db-4e5f-88fd-35557c8c15ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:44 compute-0 nova_compute[254092]: 2025-11-25 16:35:44.470 254096 DEBUG nova.compute.manager [req-b39a2fa7-1f3b-4bf7-b257-1f760c3988ab req-bd68328e-f33f-49af-984d-1c4981bff647 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] No waiting events found dispatching network-vif-plugged-80f0ea34-88eb-4091-912e-db28507e1f4b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:35:44 compute-0 nova_compute[254092]: 2025-11-25 16:35:44.470 254096 WARNING nova.compute.manager [req-b39a2fa7-1f3b-4bf7-b257-1f760c3988ab req-bd68328e-f33f-49af-984d-1c4981bff647 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Received unexpected event network-vif-plugged-80f0ea34-88eb-4091-912e-db28507e1f4b for instance with vm_state deleted and task_state None.
Nov 25 16:35:44 compute-0 nova_compute[254092]: 2025-11-25 16:35:44.470 254096 DEBUG nova.compute.manager [req-b39a2fa7-1f3b-4bf7-b257-1f760c3988ab req-bd68328e-f33f-49af-984d-1c4981bff647 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Received event network-changed-1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:44 compute-0 nova_compute[254092]: 2025-11-25 16:35:44.471 254096 DEBUG nova.compute.manager [req-b39a2fa7-1f3b-4bf7-b257-1f760c3988ab req-bd68328e-f33f-49af-984d-1c4981bff647 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Refreshing instance network info cache due to event network-changed-1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:35:44 compute-0 nova_compute[254092]: 2025-11-25 16:35:44.471 254096 DEBUG oslo_concurrency.lockutils [req-b39a2fa7-1f3b-4bf7-b257-1f760c3988ab req-bd68328e-f33f-49af-984d-1c4981bff647 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-1f3bb552-1d70-4ad1-9304-9ed3896db3d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:35:44 compute-0 nova_compute[254092]: 2025-11-25 16:35:44.471 254096 DEBUG oslo_concurrency.lockutils [req-b39a2fa7-1f3b-4bf7-b257-1f760c3988ab req-bd68328e-f33f-49af-984d-1c4981bff647 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-1f3bb552-1d70-4ad1-9304-9ed3896db3d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:35:44 compute-0 nova_compute[254092]: 2025-11-25 16:35:44.472 254096 DEBUG nova.network.neutron [req-b39a2fa7-1f3b-4bf7-b257-1f760c3988ab req-bd68328e-f33f-49af-984d-1c4981bff647 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Refreshing network info cache for port 1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:35:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:35:44 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/309492086' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:35:44 compute-0 nova_compute[254092]: 2025-11-25 16:35:44.493 254096 DEBUG oslo_concurrency.processutils [None req-ce54065d-5b9c-49bf-b7a3-9b3c8681093c 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:35:44 compute-0 nova_compute[254092]: 2025-11-25 16:35:44.501 254096 DEBUG nova.compute.provider_tree [None req-ce54065d-5b9c-49bf-b7a3-9b3c8681093c 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:35:44 compute-0 nova_compute[254092]: 2025-11-25 16:35:44.514 254096 DEBUG nova.scheduler.client.report [None req-ce54065d-5b9c-49bf-b7a3-9b3c8681093c 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:35:44 compute-0 nova_compute[254092]: 2025-11-25 16:35:44.534 254096 DEBUG oslo_concurrency.lockutils [None req-ce54065d-5b9c-49bf-b7a3-9b3c8681093c 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:35:44 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3301584950' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:35:44 compute-0 podman[305125]: 2025-11-25 16:35:44.557196741 +0000 UTC m=+0.038984037 container create aa730c00f7a2a2c691511ee6569c61d7de595397d28be71b097f0b5ff2dfe52e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_grothendieck, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 16:35:44 compute-0 nova_compute[254092]: 2025-11-25 16:35:44.569 254096 DEBUG oslo_concurrency.processutils [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:35:44 compute-0 nova_compute[254092]: 2025-11-25 16:35:44.590 254096 DEBUG nova.storage.rbd_utils [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image 1f3bb552-1d70-4ad1-9304-9ed3896db3d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:35:44 compute-0 systemd[1]: Started libpod-conmon-aa730c00f7a2a2c691511ee6569c61d7de595397d28be71b097f0b5ff2dfe52e.scope.
Nov 25 16:35:44 compute-0 nova_compute[254092]: 2025-11-25 16:35:44.593 254096 DEBUG oslo_concurrency.processutils [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:35:44 compute-0 nova_compute[254092]: 2025-11-25 16:35:44.617 254096 INFO nova.scheduler.client.report [None req-ce54065d-5b9c-49bf-b7a3-9b3c8681093c 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Deleted allocations for instance 7a89b21c-79db-4e5f-88fd-35557c8c15ff
Nov 25 16:35:44 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:35:44 compute-0 podman[305125]: 2025-11-25 16:35:44.539376839 +0000 UTC m=+0.021164165 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:35:44 compute-0 podman[305125]: 2025-11-25 16:35:44.639415178 +0000 UTC m=+0.121202464 container init aa730c00f7a2a2c691511ee6569c61d7de595397d28be71b097f0b5ff2dfe52e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_grothendieck, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 16:35:44 compute-0 podman[305125]: 2025-11-25 16:35:44.647583169 +0000 UTC m=+0.129370465 container start aa730c00f7a2a2c691511ee6569c61d7de595397d28be71b097f0b5ff2dfe52e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_grothendieck, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:35:44 compute-0 podman[305125]: 2025-11-25 16:35:44.650678614 +0000 UTC m=+0.132465910 container attach aa730c00f7a2a2c691511ee6569c61d7de595397d28be71b097f0b5ff2dfe52e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_grothendieck, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 16:35:44 compute-0 vigilant_grothendieck[305161]: 167 167
Nov 25 16:35:44 compute-0 systemd[1]: libpod-aa730c00f7a2a2c691511ee6569c61d7de595397d28be71b097f0b5ff2dfe52e.scope: Deactivated successfully.
Nov 25 16:35:44 compute-0 podman[305125]: 2025-11-25 16:35:44.653549032 +0000 UTC m=+0.135336328 container died aa730c00f7a2a2c691511ee6569c61d7de595397d28be71b097f0b5ff2dfe52e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:35:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c9ab3f70a30fb4ff6387d442d2b482afa72bc1c4c148cfe333b29a957c619de-merged.mount: Deactivated successfully.
Nov 25 16:35:44 compute-0 podman[305125]: 2025-11-25 16:35:44.688738254 +0000 UTC m=+0.170525550 container remove aa730c00f7a2a2c691511ee6569c61d7de595397d28be71b097f0b5ff2dfe52e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:35:44 compute-0 nova_compute[254092]: 2025-11-25 16:35:44.691 254096 DEBUG oslo_concurrency.lockutils [None req-ce54065d-5b9c-49bf-b7a3-9b3c8681093c 70f122fae9644012973ae5b56c1d459b 3355a3ac2d6d4d5ea7f590f1e2ae3492 - - default default] Lock "7a89b21c-79db-4e5f-88fd-35557c8c15ff" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.973s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:44 compute-0 systemd[1]: libpod-conmon-aa730c00f7a2a2c691511ee6569c61d7de595397d28be71b097f0b5ff2dfe52e.scope: Deactivated successfully.
Nov 25 16:35:44 compute-0 podman[305204]: 2025-11-25 16:35:44.842793087 +0000 UTC m=+0.037792984 container create 3fa9f124adcfc1b96ece97d40618b3cdf3ae1cc8e61a7e0b5db32c9ed8751528 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_moore, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:35:44 compute-0 systemd[1]: Started libpod-conmon-3fa9f124adcfc1b96ece97d40618b3cdf3ae1cc8e61a7e0b5db32c9ed8751528.scope.
Nov 25 16:35:44 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:35:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1a7271a80cdc380456f0babbd3476e3627265e776a523ba87b633096249cf67/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:35:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1a7271a80cdc380456f0babbd3476e3627265e776a523ba87b633096249cf67/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:35:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1a7271a80cdc380456f0babbd3476e3627265e776a523ba87b633096249cf67/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:35:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1a7271a80cdc380456f0babbd3476e3627265e776a523ba87b633096249cf67/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:35:44 compute-0 podman[305204]: 2025-11-25 16:35:44.910809309 +0000 UTC m=+0.105809236 container init 3fa9f124adcfc1b96ece97d40618b3cdf3ae1cc8e61a7e0b5db32c9ed8751528 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_moore, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 16:35:44 compute-0 podman[305204]: 2025-11-25 16:35:44.918748624 +0000 UTC m=+0.113748521 container start 3fa9f124adcfc1b96ece97d40618b3cdf3ae1cc8e61a7e0b5db32c9ed8751528 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 16:35:44 compute-0 podman[305204]: 2025-11-25 16:35:44.921975682 +0000 UTC m=+0.116975619 container attach 3fa9f124adcfc1b96ece97d40618b3cdf3ae1cc8e61a7e0b5db32c9ed8751528 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_moore, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:35:44 compute-0 podman[305204]: 2025-11-25 16:35:44.826733872 +0000 UTC m=+0.021733789 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:35:44 compute-0 ceph-mon[74985]: pgmap v1462: 321 pgs: 321 active+clean; 390 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 9.3 MiB/s rd, 6.1 MiB/s wr, 369 op/s
Nov 25 16:35:44 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/309492086' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:35:44 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3301584950' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:35:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:35:45 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3852507675' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:35:45 compute-0 nova_compute[254092]: 2025-11-25 16:35:45.029 254096 DEBUG oslo_concurrency.processutils [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:35:45 compute-0 nova_compute[254092]: 2025-11-25 16:35:45.031 254096 DEBUG nova.virt.libvirt.vif [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:35:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-450706863',display_name='tempest-ImagesTestJSON-server-450706863',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-450706863',id=45,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8bd01e0913564ac783fae350d6861e24',ramdisk_id='',reservation_id='r-qmu36f3f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-217824554',owner_user_name='tempest-ImagesTestJSON-217824554-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:35:39Z,user_data=None,user_id='75e65df891a54a2caabb073a427430b9',uuid=1f3bb552-1d70-4ad1-9304-9ed3896db3d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d", "address": "fa:16:3e:cb:92:a2", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1aa5a9a0-4c", "ovs_interfaceid": "1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:35:45 compute-0 nova_compute[254092]: 2025-11-25 16:35:45.031 254096 DEBUG nova.network.os_vif_util [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converting VIF {"id": "1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d", "address": "fa:16:3e:cb:92:a2", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1aa5a9a0-4c", "ovs_interfaceid": "1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:35:45 compute-0 nova_compute[254092]: 2025-11-25 16:35:45.032 254096 DEBUG nova.network.os_vif_util [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:92:a2,bridge_name='br-int',has_traffic_filtering=True,id=1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1aa5a9a0-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:35:45 compute-0 nova_compute[254092]: 2025-11-25 16:35:45.033 254096 DEBUG nova.objects.instance [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1f3bb552-1d70-4ad1-9304-9ed3896db3d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:35:45 compute-0 nova_compute[254092]: 2025-11-25 16:35:45.048 254096 DEBUG nova.virt.libvirt.driver [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:35:45 compute-0 nova_compute[254092]:   <uuid>1f3bb552-1d70-4ad1-9304-9ed3896db3d4</uuid>
Nov 25 16:35:45 compute-0 nova_compute[254092]:   <name>instance-0000002d</name>
Nov 25 16:35:45 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:35:45 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:35:45 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:35:45 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:       <nova:name>tempest-ImagesTestJSON-server-450706863</nova:name>
Nov 25 16:35:45 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:35:44</nova:creationTime>
Nov 25 16:35:45 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:35:45 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:35:45 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:35:45 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:35:45 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:35:45 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:35:45 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:35:45 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:35:45 compute-0 nova_compute[254092]:         <nova:user uuid="75e65df891a54a2caabb073a427430b9">tempest-ImagesTestJSON-217824554-project-member</nova:user>
Nov 25 16:35:45 compute-0 nova_compute[254092]:         <nova:project uuid="8bd01e0913564ac783fae350d6861e24">tempest-ImagesTestJSON-217824554</nova:project>
Nov 25 16:35:45 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:35:45 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:35:45 compute-0 nova_compute[254092]:         <nova:port uuid="1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d">
Nov 25 16:35:45 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:35:45 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:35:45 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:35:45 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:35:45 compute-0 nova_compute[254092]:     <system>
Nov 25 16:35:45 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:35:45 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:35:45 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:35:45 compute-0 nova_compute[254092]:       <entry name="serial">1f3bb552-1d70-4ad1-9304-9ed3896db3d4</entry>
Nov 25 16:35:45 compute-0 nova_compute[254092]:       <entry name="uuid">1f3bb552-1d70-4ad1-9304-9ed3896db3d4</entry>
Nov 25 16:35:45 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     </system>
Nov 25 16:35:45 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:35:45 compute-0 nova_compute[254092]:   <os>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:   </os>
Nov 25 16:35:45 compute-0 nova_compute[254092]:   <features>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:   </features>
Nov 25 16:35:45 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:35:45 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:35:45 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:35:45 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:35:45 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:35:45 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/1f3bb552-1d70-4ad1-9304-9ed3896db3d4_disk">
Nov 25 16:35:45 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:       </source>
Nov 25 16:35:45 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:35:45 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:35:45 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:35:45 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/1f3bb552-1d70-4ad1-9304-9ed3896db3d4_disk.config">
Nov 25 16:35:45 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:       </source>
Nov 25 16:35:45 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:35:45 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:35:45 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:35:45 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:cb:92:a2"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:       <target dev="tap1aa5a9a0-4c"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:35:45 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/1f3bb552-1d70-4ad1-9304-9ed3896db3d4/console.log" append="off"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     <video>
Nov 25 16:35:45 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     </video>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:35:45 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:35:45 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:35:45 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:35:45 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:35:45 compute-0 nova_compute[254092]: </domain>
Nov 25 16:35:45 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:35:45 compute-0 nova_compute[254092]: 2025-11-25 16:35:45.049 254096 DEBUG nova.compute.manager [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Preparing to wait for external event network-vif-plugged-1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:35:45 compute-0 nova_compute[254092]: 2025-11-25 16:35:45.049 254096 DEBUG oslo_concurrency.lockutils [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "1f3bb552-1d70-4ad1-9304-9ed3896db3d4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:45 compute-0 nova_compute[254092]: 2025-11-25 16:35:45.049 254096 DEBUG oslo_concurrency.lockutils [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "1f3bb552-1d70-4ad1-9304-9ed3896db3d4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:45 compute-0 nova_compute[254092]: 2025-11-25 16:35:45.049 254096 DEBUG oslo_concurrency.lockutils [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "1f3bb552-1d70-4ad1-9304-9ed3896db3d4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:45 compute-0 nova_compute[254092]: 2025-11-25 16:35:45.050 254096 DEBUG nova.virt.libvirt.vif [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:35:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-450706863',display_name='tempest-ImagesTestJSON-server-450706863',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-450706863',id=45,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8bd01e0913564ac783fae350d6861e24',ramdisk_id='',reservation_id='r-qmu36f3f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-217824554',owner_user_name='tempest-ImagesTestJSON-217824554-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:35:39Z,user_data=None,user_id='75e65df891a54a2caabb073a427430b9',uuid=1f3bb552-1d70-4ad1-9304-9ed3896db3d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d", "address": "fa:16:3e:cb:92:a2", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1aa5a9a0-4c", "ovs_interfaceid": "1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:35:45 compute-0 nova_compute[254092]: 2025-11-25 16:35:45.050 254096 DEBUG nova.network.os_vif_util [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converting VIF {"id": "1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d", "address": "fa:16:3e:cb:92:a2", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1aa5a9a0-4c", "ovs_interfaceid": "1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:35:45 compute-0 nova_compute[254092]: 2025-11-25 16:35:45.051 254096 DEBUG nova.network.os_vif_util [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:92:a2,bridge_name='br-int',has_traffic_filtering=True,id=1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1aa5a9a0-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:35:45 compute-0 nova_compute[254092]: 2025-11-25 16:35:45.051 254096 DEBUG os_vif [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:92:a2,bridge_name='br-int',has_traffic_filtering=True,id=1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1aa5a9a0-4c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:35:45 compute-0 nova_compute[254092]: 2025-11-25 16:35:45.051 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:45 compute-0 nova_compute[254092]: 2025-11-25 16:35:45.052 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:45 compute-0 nova_compute[254092]: 2025-11-25 16:35:45.052 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:35:45 compute-0 nova_compute[254092]: 2025-11-25 16:35:45.057 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:45 compute-0 nova_compute[254092]: 2025-11-25 16:35:45.057 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1aa5a9a0-4c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:45 compute-0 nova_compute[254092]: 2025-11-25 16:35:45.058 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1aa5a9a0-4c, col_values=(('external_ids', {'iface-id': '1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cb:92:a2', 'vm-uuid': '1f3bb552-1d70-4ad1-9304-9ed3896db3d4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:45 compute-0 NetworkManager[48891]: <info>  [1764088545.0601] manager: (tap1aa5a9a0-4c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/162)
Nov 25 16:35:45 compute-0 nova_compute[254092]: 2025-11-25 16:35:45.059 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:45 compute-0 nova_compute[254092]: 2025-11-25 16:35:45.062 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:35:45 compute-0 nova_compute[254092]: 2025-11-25 16:35:45.066 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:45 compute-0 nova_compute[254092]: 2025-11-25 16:35:45.067 254096 INFO os_vif [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:92:a2,bridge_name='br-int',has_traffic_filtering=True,id=1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1aa5a9a0-4c')
Nov 25 16:35:45 compute-0 nova_compute[254092]: 2025-11-25 16:35:45.117 254096 DEBUG nova.virt.libvirt.driver [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:35:45 compute-0 nova_compute[254092]: 2025-11-25 16:35:45.117 254096 DEBUG nova.virt.libvirt.driver [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:35:45 compute-0 nova_compute[254092]: 2025-11-25 16:35:45.118 254096 DEBUG nova.virt.libvirt.driver [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] No VIF found with MAC fa:16:3e:cb:92:a2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:35:45 compute-0 nova_compute[254092]: 2025-11-25 16:35:45.118 254096 INFO nova.virt.libvirt.driver [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Using config drive
Nov 25 16:35:45 compute-0 nova_compute[254092]: 2025-11-25 16:35:45.135 254096 DEBUG nova.storage.rbd_utils [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image 1f3bb552-1d70-4ad1-9304-9ed3896db3d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:35:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1463: 321 pgs: 321 active+clean; 257 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 3.9 MiB/s wr, 361 op/s
Nov 25 16:35:45 compute-0 nova_compute[254092]: 2025-11-25 16:35:45.540 254096 INFO nova.virt.libvirt.driver [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Creating config drive at /var/lib/nova/instances/1f3bb552-1d70-4ad1-9304-9ed3896db3d4/disk.config
Nov 25 16:35:45 compute-0 nova_compute[254092]: 2025-11-25 16:35:45.544 254096 DEBUG oslo_concurrency.processutils [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1f3bb552-1d70-4ad1-9304-9ed3896db3d4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj062ztwb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:35:45 compute-0 charming_moore[305221]: {
Nov 25 16:35:45 compute-0 charming_moore[305221]:     "0": [
Nov 25 16:35:45 compute-0 charming_moore[305221]:         {
Nov 25 16:35:45 compute-0 charming_moore[305221]:             "devices": [
Nov 25 16:35:45 compute-0 charming_moore[305221]:                 "/dev/loop3"
Nov 25 16:35:45 compute-0 charming_moore[305221]:             ],
Nov 25 16:35:45 compute-0 charming_moore[305221]:             "lv_name": "ceph_lv0",
Nov 25 16:35:45 compute-0 charming_moore[305221]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:35:45 compute-0 charming_moore[305221]:             "lv_size": "21470642176",
Nov 25 16:35:45 compute-0 charming_moore[305221]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:35:45 compute-0 charming_moore[305221]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:35:45 compute-0 charming_moore[305221]:             "name": "ceph_lv0",
Nov 25 16:35:45 compute-0 charming_moore[305221]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:35:45 compute-0 charming_moore[305221]:             "tags": {
Nov 25 16:35:45 compute-0 charming_moore[305221]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:35:45 compute-0 charming_moore[305221]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:35:45 compute-0 charming_moore[305221]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:35:45 compute-0 charming_moore[305221]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:35:45 compute-0 charming_moore[305221]:                 "ceph.cluster_name": "ceph",
Nov 25 16:35:45 compute-0 charming_moore[305221]:                 "ceph.crush_device_class": "",
Nov 25 16:35:45 compute-0 charming_moore[305221]:                 "ceph.encrypted": "0",
Nov 25 16:35:45 compute-0 charming_moore[305221]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:35:45 compute-0 charming_moore[305221]:                 "ceph.osd_id": "0",
Nov 25 16:35:45 compute-0 charming_moore[305221]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:35:45 compute-0 charming_moore[305221]:                 "ceph.type": "block",
Nov 25 16:35:45 compute-0 charming_moore[305221]:                 "ceph.vdo": "0"
Nov 25 16:35:45 compute-0 charming_moore[305221]:             },
Nov 25 16:35:45 compute-0 charming_moore[305221]:             "type": "block",
Nov 25 16:35:45 compute-0 charming_moore[305221]:             "vg_name": "ceph_vg0"
Nov 25 16:35:45 compute-0 charming_moore[305221]:         }
Nov 25 16:35:45 compute-0 charming_moore[305221]:     ],
Nov 25 16:35:45 compute-0 charming_moore[305221]:     "1": [
Nov 25 16:35:45 compute-0 charming_moore[305221]:         {
Nov 25 16:35:45 compute-0 charming_moore[305221]:             "devices": [
Nov 25 16:35:45 compute-0 charming_moore[305221]:                 "/dev/loop4"
Nov 25 16:35:45 compute-0 charming_moore[305221]:             ],
Nov 25 16:35:45 compute-0 charming_moore[305221]:             "lv_name": "ceph_lv1",
Nov 25 16:35:45 compute-0 charming_moore[305221]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:35:45 compute-0 charming_moore[305221]:             "lv_size": "21470642176",
Nov 25 16:35:45 compute-0 charming_moore[305221]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:35:45 compute-0 charming_moore[305221]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:35:45 compute-0 charming_moore[305221]:             "name": "ceph_lv1",
Nov 25 16:35:45 compute-0 charming_moore[305221]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:35:45 compute-0 charming_moore[305221]:             "tags": {
Nov 25 16:35:45 compute-0 charming_moore[305221]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:35:45 compute-0 charming_moore[305221]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:35:45 compute-0 charming_moore[305221]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:35:45 compute-0 charming_moore[305221]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:35:45 compute-0 charming_moore[305221]:                 "ceph.cluster_name": "ceph",
Nov 25 16:35:45 compute-0 charming_moore[305221]:                 "ceph.crush_device_class": "",
Nov 25 16:35:45 compute-0 charming_moore[305221]:                 "ceph.encrypted": "0",
Nov 25 16:35:45 compute-0 charming_moore[305221]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:35:45 compute-0 charming_moore[305221]:                 "ceph.osd_id": "1",
Nov 25 16:35:45 compute-0 charming_moore[305221]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:35:45 compute-0 charming_moore[305221]:                 "ceph.type": "block",
Nov 25 16:35:45 compute-0 charming_moore[305221]:                 "ceph.vdo": "0"
Nov 25 16:35:45 compute-0 charming_moore[305221]:             },
Nov 25 16:35:45 compute-0 charming_moore[305221]:             "type": "block",
Nov 25 16:35:45 compute-0 charming_moore[305221]:             "vg_name": "ceph_vg1"
Nov 25 16:35:45 compute-0 charming_moore[305221]:         }
Nov 25 16:35:45 compute-0 charming_moore[305221]:     ],
Nov 25 16:35:45 compute-0 charming_moore[305221]:     "2": [
Nov 25 16:35:45 compute-0 charming_moore[305221]:         {
Nov 25 16:35:45 compute-0 charming_moore[305221]:             "devices": [
Nov 25 16:35:45 compute-0 charming_moore[305221]:                 "/dev/loop5"
Nov 25 16:35:45 compute-0 charming_moore[305221]:             ],
Nov 25 16:35:45 compute-0 charming_moore[305221]:             "lv_name": "ceph_lv2",
Nov 25 16:35:45 compute-0 charming_moore[305221]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:35:45 compute-0 charming_moore[305221]:             "lv_size": "21470642176",
Nov 25 16:35:45 compute-0 charming_moore[305221]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:35:45 compute-0 charming_moore[305221]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:35:45 compute-0 charming_moore[305221]:             "name": "ceph_lv2",
Nov 25 16:35:45 compute-0 charming_moore[305221]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:35:45 compute-0 charming_moore[305221]:             "tags": {
Nov 25 16:35:45 compute-0 charming_moore[305221]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:35:45 compute-0 charming_moore[305221]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:35:45 compute-0 charming_moore[305221]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:35:45 compute-0 charming_moore[305221]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:35:45 compute-0 charming_moore[305221]:                 "ceph.cluster_name": "ceph",
Nov 25 16:35:45 compute-0 charming_moore[305221]:                 "ceph.crush_device_class": "",
Nov 25 16:35:45 compute-0 charming_moore[305221]:                 "ceph.encrypted": "0",
Nov 25 16:35:45 compute-0 charming_moore[305221]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:35:45 compute-0 charming_moore[305221]:                 "ceph.osd_id": "2",
Nov 25 16:35:45 compute-0 charming_moore[305221]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:35:45 compute-0 charming_moore[305221]:                 "ceph.type": "block",
Nov 25 16:35:45 compute-0 charming_moore[305221]:                 "ceph.vdo": "0"
Nov 25 16:35:45 compute-0 charming_moore[305221]:             },
Nov 25 16:35:45 compute-0 charming_moore[305221]:             "type": "block",
Nov 25 16:35:45 compute-0 charming_moore[305221]:             "vg_name": "ceph_vg2"
Nov 25 16:35:45 compute-0 charming_moore[305221]:         }
Nov 25 16:35:45 compute-0 charming_moore[305221]:     ]
Nov 25 16:35:45 compute-0 charming_moore[305221]: }
Nov 25 16:35:45 compute-0 nova_compute[254092]: 2025-11-25 16:35:45.676 254096 DEBUG oslo_concurrency.processutils [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1f3bb552-1d70-4ad1-9304-9ed3896db3d4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj062ztwb" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:35:45 compute-0 systemd[1]: libpod-3fa9f124adcfc1b96ece97d40618b3cdf3ae1cc8e61a7e0b5db32c9ed8751528.scope: Deactivated successfully.
Nov 25 16:35:45 compute-0 podman[305204]: 2025-11-25 16:35:45.694100797 +0000 UTC m=+0.889100694 container died 3fa9f124adcfc1b96ece97d40618b3cdf3ae1cc8e61a7e0b5db32c9ed8751528 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_moore, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 16:35:45 compute-0 nova_compute[254092]: 2025-11-25 16:35:45.703 254096 DEBUG nova.storage.rbd_utils [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image 1f3bb552-1d70-4ad1-9304-9ed3896db3d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:35:45 compute-0 nova_compute[254092]: 2025-11-25 16:35:45.710 254096 DEBUG oslo_concurrency.processutils [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1f3bb552-1d70-4ad1-9304-9ed3896db3d4/disk.config 1f3bb552-1d70-4ad1-9304-9ed3896db3d4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:35:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1a7271a80cdc380456f0babbd3476e3627265e776a523ba87b633096249cf67-merged.mount: Deactivated successfully.
Nov 25 16:35:45 compute-0 podman[305204]: 2025-11-25 16:35:45.746431705 +0000 UTC m=+0.941431592 container remove 3fa9f124adcfc1b96ece97d40618b3cdf3ae1cc8e61a7e0b5db32c9ed8751528 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_moore, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 16:35:45 compute-0 systemd[1]: libpod-conmon-3fa9f124adcfc1b96ece97d40618b3cdf3ae1cc8e61a7e0b5db32c9ed8751528.scope: Deactivated successfully.
Nov 25 16:35:45 compute-0 sudo[305041]: pam_unix(sudo:session): session closed for user root
Nov 25 16:35:45 compute-0 sudo[305300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:35:45 compute-0 sudo[305300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:35:45 compute-0 sudo[305300]: pam_unix(sudo:session): session closed for user root
Nov 25 16:35:45 compute-0 nova_compute[254092]: 2025-11-25 16:35:45.865 254096 DEBUG oslo_concurrency.processutils [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1f3bb552-1d70-4ad1-9304-9ed3896db3d4/disk.config 1f3bb552-1d70-4ad1-9304-9ed3896db3d4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:35:45 compute-0 nova_compute[254092]: 2025-11-25 16:35:45.866 254096 INFO nova.virt.libvirt.driver [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Deleting local config drive /var/lib/nova/instances/1f3bb552-1d70-4ad1-9304-9ed3896db3d4/disk.config because it was imported into RBD.
Nov 25 16:35:45 compute-0 sudo[305328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:35:45 compute-0 sudo[305328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:35:45 compute-0 sudo[305328]: pam_unix(sudo:session): session closed for user root
Nov 25 16:35:45 compute-0 kernel: tap1aa5a9a0-4c: entered promiscuous mode
Nov 25 16:35:45 compute-0 NetworkManager[48891]: <info>  [1764088545.9182] manager: (tap1aa5a9a0-4c): new Tun device (/org/freedesktop/NetworkManager/Devices/163)
Nov 25 16:35:45 compute-0 ovn_controller[153477]: 2025-11-25T16:35:45Z|00363|binding|INFO|Claiming lport 1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d for this chassis.
Nov 25 16:35:45 compute-0 ovn_controller[153477]: 2025-11-25T16:35:45Z|00364|binding|INFO|1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d: Claiming fa:16:3e:cb:92:a2 10.100.0.12
Nov 25 16:35:45 compute-0 nova_compute[254092]: 2025-11-25 16:35:45.918 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:45.930 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:92:a2 10.100.0.12'], port_security=['fa:16:3e:cb:92:a2 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '1f3bb552-1d70-4ad1-9304-9ed3896db3d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0816ae24-275c-455e-a549-929f4eb756e7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8bd01e0913564ac783fae350d6861e24', 'neutron:revision_number': '2', 'neutron:security_group_ids': '572f99e3-d1c9-4515-9799-c37da094cf6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=24fed12c-3cf8-4d51-9b64-c89f82ee1964, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:35:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:45.932 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d in datapath 0816ae24-275c-455e-a549-929f4eb756e7 bound to our chassis
Nov 25 16:35:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:45.933 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0816ae24-275c-455e-a549-929f4eb756e7
Nov 25 16:35:45 compute-0 ovn_controller[153477]: 2025-11-25T16:35:45Z|00365|binding|INFO|Setting lport 1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d ovn-installed in OVS
Nov 25 16:35:45 compute-0 ovn_controller[153477]: 2025-11-25T16:35:45Z|00366|binding|INFO|Setting lport 1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d up in Southbound
Nov 25 16:35:45 compute-0 nova_compute[254092]: 2025-11-25 16:35:45.940 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:45 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3852507675' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:35:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:45.946 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[289157e9-3e12-4da5-97a4-ff704c8fcaac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:45.946 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0816ae24-21 in ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:35:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:45.948 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0816ae24-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:35:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:45.948 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f9ecfa94-903a-4fe6-b007-c991ad4df482]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:45.949 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5a55accb-ca1c-4bc7-8404-d8cf0ef4ee4f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:45 compute-0 systemd-machined[216343]: New machine qemu-52-instance-0000002d.
Nov 25 16:35:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:45.961 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[1b3c4be7-a788-41e5-9fe0-3df0f6b5b169]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:45 compute-0 systemd-udevd[305390]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:35:45 compute-0 systemd[1]: Started Virtual Machine qemu-52-instance-0000002d.
Nov 25 16:35:45 compute-0 NetworkManager[48891]: <info>  [1764088545.9770] device (tap1aa5a9a0-4c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:35:45 compute-0 NetworkManager[48891]: <info>  [1764088545.9781] device (tap1aa5a9a0-4c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:35:45 compute-0 sudo[305362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:35:45 compute-0 sudo[305362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:35:45 compute-0 sudo[305362]: pam_unix(sudo:session): session closed for user root
Nov 25 16:35:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:45.988 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4279aac7-a292-4e4b-9796-08700958dffc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:46.019 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1a302179-216f-4018-a026-c040cf32bd3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:46 compute-0 nova_compute[254092]: 2025-11-25 16:35:46.021 254096 DEBUG nova.network.neutron [req-b39a2fa7-1f3b-4bf7-b257-1f760c3988ab req-bd68328e-f33f-49af-984d-1c4981bff647 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Updated VIF entry in instance network info cache for port 1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:35:46 compute-0 nova_compute[254092]: 2025-11-25 16:35:46.022 254096 DEBUG nova.network.neutron [req-b39a2fa7-1f3b-4bf7-b257-1f760c3988ab req-bd68328e-f33f-49af-984d-1c4981bff647 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Updating instance_info_cache with network_info: [{"id": "1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d", "address": "fa:16:3e:cb:92:a2", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1aa5a9a0-4c", "ovs_interfaceid": "1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:35:46 compute-0 NetworkManager[48891]: <info>  [1764088546.0265] manager: (tap0816ae24-20): new Veth device (/org/freedesktop/NetworkManager/Devices/164)
Nov 25 16:35:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:46.025 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c8f30c40-ffe1-49a9-b912-cd0775425ff0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:46 compute-0 nova_compute[254092]: 2025-11-25 16:35:46.039 254096 DEBUG oslo_concurrency.lockutils [req-b39a2fa7-1f3b-4bf7-b257-1f760c3988ab req-bd68328e-f33f-49af-984d-1c4981bff647 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-1f3bb552-1d70-4ad1-9304-9ed3896db3d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:35:46 compute-0 nova_compute[254092]: 2025-11-25 16:35:46.039 254096 DEBUG nova.compute.manager [req-b39a2fa7-1f3b-4bf7-b257-1f760c3988ab req-bd68328e-f33f-49af-984d-1c4981bff647 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Received event network-vif-deleted-18d58a6f-2179-462b-993c-b9ce6369673f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:46 compute-0 nova_compute[254092]: 2025-11-25 16:35:46.040 254096 DEBUG nova.compute.manager [req-b39a2fa7-1f3b-4bf7-b257-1f760c3988ab req-bd68328e-f33f-49af-984d-1c4981bff647 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Received event network-vif-deleted-12524556-7486-4f17-95f0-2984a51a4542 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:46 compute-0 nova_compute[254092]: 2025-11-25 16:35:46.040 254096 DEBUG nova.compute.manager [req-b39a2fa7-1f3b-4bf7-b257-1f760c3988ab req-bd68328e-f33f-49af-984d-1c4981bff647 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Received event network-vif-deleted-80f0ea34-88eb-4091-912e-db28507e1f4b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:46 compute-0 sudo[305397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 16:35:46 compute-0 sudo[305397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:35:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:46.057 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f75fd482-db0d-421d-a2cb-58994ff229e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:46.060 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[b6461af3-51bd-4a24-98cd-d2f8d59fffbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:46 compute-0 NetworkManager[48891]: <info>  [1764088546.0789] device (tap0816ae24-20): carrier: link connected
Nov 25 16:35:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:46.083 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[54ef00a1-e9b6-4c63-ad3b-54e4e2b299bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:46.098 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5819be04-f1ea-4e91-a602-df2348643bbe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0816ae24-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:52:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 108], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 500366, 'reachable_time': 32982, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305448, 'error': None, 'target': 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:46.116 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e05f6205-7873-4181-9bf9-b02b9630d340]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feca:524c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 500366, 'tstamp': 500366}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305449, 'error': None, 'target': 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:46.135 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f1c45b37-c470-4710-a333-8666f48c5c39]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0816ae24-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:52:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 108], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 500366, 'reachable_time': 32982, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305450, 'error': None, 'target': 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:46.170 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[41d4ddb3-78d8-40ef-a886-12e418aa7c7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:46.232 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8395f615-ff51-40a2-8060-b2296b9030a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:46.235 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0816ae24-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:46.235 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:35:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:46.236 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0816ae24-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:46 compute-0 kernel: tap0816ae24-20: entered promiscuous mode
Nov 25 16:35:46 compute-0 NetworkManager[48891]: <info>  [1764088546.2387] manager: (tap0816ae24-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/165)
Nov 25 16:35:46 compute-0 nova_compute[254092]: 2025-11-25 16:35:46.238 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:46 compute-0 nova_compute[254092]: 2025-11-25 16:35:46.240 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:46.243 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0816ae24-20, col_values=(('external_ids', {'iface-id': '6381e38a-44c0-40e8-bec6-3ddd886bfc49'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:46 compute-0 nova_compute[254092]: 2025-11-25 16:35:46.244 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:46 compute-0 nova_compute[254092]: 2025-11-25 16:35:46.245 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:46 compute-0 ovn_controller[153477]: 2025-11-25T16:35:46Z|00367|binding|INFO|Releasing lport 6381e38a-44c0-40e8-bec6-3ddd886bfc49 from this chassis (sb_readonly=0)
Nov 25 16:35:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:46.246 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0816ae24-275c-455e-a549-929f4eb756e7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0816ae24-275c-455e-a549-929f4eb756e7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:35:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:46.247 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a313be4c-2273-430f-a08b-a0a906fe787d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:46.248 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:35:46 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:35:46 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:35:46 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-0816ae24-275c-455e-a549-929f4eb756e7
Nov 25 16:35:46 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:35:46 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:35:46 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:35:46 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/0816ae24-275c-455e-a549-929f4eb756e7.pid.haproxy
Nov 25 16:35:46 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:35:46 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:35:46 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:35:46 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:35:46 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:35:46 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:35:46 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:35:46 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:35:46 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:35:46 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:35:46 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:35:46 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:35:46 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:35:46 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:35:46 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:35:46 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:35:46 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:35:46 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:35:46 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:35:46 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:35:46 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 0816ae24-275c-455e-a549-929f4eb756e7
Nov 25 16:35:46 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:35:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:46.248 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'env', 'PROCESS_TAG=haproxy-0816ae24-275c-455e-a549-929f4eb756e7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0816ae24-275c-455e-a549-929f4eb756e7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:35:46 compute-0 nova_compute[254092]: 2025-11-25 16:35:46.260 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:46 compute-0 podman[305498]: 2025-11-25 16:35:46.384452886 +0000 UTC m=+0.034917976 container create 0c51912d02ae04d54e983f89d9fa4d61b34916a3b43ed4f564b5900d0e7aec83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:35:46 compute-0 nova_compute[254092]: 2025-11-25 16:35:46.419 254096 DEBUG nova.compute.manager [req-64a3e344-e9d3-4c4a-a300-0fc75c61a7f0 req-2c208a99-8787-41cc-9edf-2a1d905315a3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Received event network-vif-plugged-1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:46 compute-0 nova_compute[254092]: 2025-11-25 16:35:46.420 254096 DEBUG oslo_concurrency.lockutils [req-64a3e344-e9d3-4c4a-a300-0fc75c61a7f0 req-2c208a99-8787-41cc-9edf-2a1d905315a3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1f3bb552-1d70-4ad1-9304-9ed3896db3d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:46 compute-0 nova_compute[254092]: 2025-11-25 16:35:46.421 254096 DEBUG oslo_concurrency.lockutils [req-64a3e344-e9d3-4c4a-a300-0fc75c61a7f0 req-2c208a99-8787-41cc-9edf-2a1d905315a3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1f3bb552-1d70-4ad1-9304-9ed3896db3d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:46 compute-0 nova_compute[254092]: 2025-11-25 16:35:46.421 254096 DEBUG oslo_concurrency.lockutils [req-64a3e344-e9d3-4c4a-a300-0fc75c61a7f0 req-2c208a99-8787-41cc-9edf-2a1d905315a3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1f3bb552-1d70-4ad1-9304-9ed3896db3d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:46 compute-0 nova_compute[254092]: 2025-11-25 16:35:46.421 254096 DEBUG nova.compute.manager [req-64a3e344-e9d3-4c4a-a300-0fc75c61a7f0 req-2c208a99-8787-41cc-9edf-2a1d905315a3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Processing event network-vif-plugged-1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:35:46 compute-0 systemd[1]: Started libpod-conmon-0c51912d02ae04d54e983f89d9fa4d61b34916a3b43ed4f564b5900d0e7aec83.scope.
Nov 25 16:35:46 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:35:46 compute-0 podman[305498]: 2025-11-25 16:35:46.369750948 +0000 UTC m=+0.020215858 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:35:46 compute-0 podman[305498]: 2025-11-25 16:35:46.479125231 +0000 UTC m=+0.129590131 container init 0c51912d02ae04d54e983f89d9fa4d61b34916a3b43ed4f564b5900d0e7aec83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bohr, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:35:46 compute-0 podman[305498]: 2025-11-25 16:35:46.487886928 +0000 UTC m=+0.138351808 container start 0c51912d02ae04d54e983f89d9fa4d61b34916a3b43ed4f564b5900d0e7aec83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:35:46 compute-0 podman[305498]: 2025-11-25 16:35:46.491685792 +0000 UTC m=+0.142150672 container attach 0c51912d02ae04d54e983f89d9fa4d61b34916a3b43ed4f564b5900d0e7aec83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bohr, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 16:35:46 compute-0 systemd[1]: libpod-0c51912d02ae04d54e983f89d9fa4d61b34916a3b43ed4f564b5900d0e7aec83.scope: Deactivated successfully.
Nov 25 16:35:46 compute-0 heuristic_bohr[305515]: 167 167
Nov 25 16:35:46 compute-0 podman[305498]: 2025-11-25 16:35:46.497186621 +0000 UTC m=+0.147651501 container died 0c51912d02ae04d54e983f89d9fa4d61b34916a3b43ed4f564b5900d0e7aec83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:35:46 compute-0 conmon[305515]: conmon 0c51912d02ae04d54e98 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0c51912d02ae04d54e983f89d9fa4d61b34916a3b43ed4f564b5900d0e7aec83.scope/container/memory.events
Nov 25 16:35:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f071c1e97f0a01fccc97933cebc58e3a16631131ea2ffe0a2477a8ff17693fc-merged.mount: Deactivated successfully.
Nov 25 16:35:46 compute-0 podman[305498]: 2025-11-25 16:35:46.535458237 +0000 UTC m=+0.185923117 container remove 0c51912d02ae04d54e983f89d9fa4d61b34916a3b43ed4f564b5900d0e7aec83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 16:35:46 compute-0 systemd[1]: libpod-conmon-0c51912d02ae04d54e983f89d9fa4d61b34916a3b43ed4f564b5900d0e7aec83.scope: Deactivated successfully.
Nov 25 16:35:46 compute-0 podman[305563]: 2025-11-25 16:35:46.659352933 +0000 UTC m=+0.052051231 container create a5cfc6bf3f49d8052e34484098ae6a13fdd873ee3416fe48ff236dd3feaee812 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:35:46 compute-0 systemd[1]: Started libpod-conmon-a5cfc6bf3f49d8052e34484098ae6a13fdd873ee3416fe48ff236dd3feaee812.scope.
Nov 25 16:35:46 compute-0 podman[305614]: 2025-11-25 16:35:46.725274228 +0000 UTC m=+0.042859181 container create 959c1f93d798e3699765193e8aa617102c64512de9b71d6571ffb905f1ea8d0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_borg, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 16:35:46 compute-0 podman[305563]: 2025-11-25 16:35:46.634564092 +0000 UTC m=+0.027262400 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:35:46 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:35:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3483c772ca17ded177e9a557307eaab88cd93a1ea39d5786231c6538101d2bb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:35:46 compute-0 systemd[1]: Started libpod-conmon-959c1f93d798e3699765193e8aa617102c64512de9b71d6571ffb905f1ea8d0e.scope.
Nov 25 16:35:46 compute-0 podman[305563]: 2025-11-25 16:35:46.762143547 +0000 UTC m=+0.154841875 container init a5cfc6bf3f49d8052e34484098ae6a13fdd873ee3416fe48ff236dd3feaee812 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 16:35:46 compute-0 nova_compute[254092]: 2025-11-25 16:35:46.763 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088546.7630658, 1f3bb552-1d70-4ad1-9304-9ed3896db3d4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:35:46 compute-0 nova_compute[254092]: 2025-11-25 16:35:46.764 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] VM Started (Lifecycle Event)
Nov 25 16:35:46 compute-0 nova_compute[254092]: 2025-11-25 16:35:46.766 254096 DEBUG nova.compute.manager [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:35:46 compute-0 podman[305563]: 2025-11-25 16:35:46.768816668 +0000 UTC m=+0.161514976 container start a5cfc6bf3f49d8052e34484098ae6a13fdd873ee3416fe48ff236dd3feaee812 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 25 16:35:46 compute-0 nova_compute[254092]: 2025-11-25 16:35:46.770 254096 DEBUG nova.virt.libvirt.driver [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:35:46 compute-0 nova_compute[254092]: 2025-11-25 16:35:46.774 254096 INFO nova.virt.libvirt.driver [-] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Instance spawned successfully.
Nov 25 16:35:46 compute-0 nova_compute[254092]: 2025-11-25 16:35:46.774 254096 DEBUG nova.virt.libvirt.driver [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:35:46 compute-0 nova_compute[254092]: 2025-11-25 16:35:46.783 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:35:46 compute-0 nova_compute[254092]: 2025-11-25 16:35:46.787 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:35:46 compute-0 neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7[305630]: [NOTICE]   (305640) : New worker (305643) forked
Nov 25 16:35:46 compute-0 neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7[305630]: [NOTICE]   (305640) : Loading success.
Nov 25 16:35:46 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:35:46 compute-0 podman[305614]: 2025-11-25 16:35:46.708903845 +0000 UTC m=+0.026488818 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:35:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ca57565e75d7eed3ad9d17122ddd9954aa88948f2f36f9bfb4aa6141bdd55fc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:35:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ca57565e75d7eed3ad9d17122ddd9954aa88948f2f36f9bfb4aa6141bdd55fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:35:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ca57565e75d7eed3ad9d17122ddd9954aa88948f2f36f9bfb4aa6141bdd55fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:35:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ca57565e75d7eed3ad9d17122ddd9954aa88948f2f36f9bfb4aa6141bdd55fc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:35:46 compute-0 nova_compute[254092]: 2025-11-25 16:35:46.812 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:35:46 compute-0 nova_compute[254092]: 2025-11-25 16:35:46.812 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088546.7640107, 1f3bb552-1d70-4ad1-9304-9ed3896db3d4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:35:46 compute-0 nova_compute[254092]: 2025-11-25 16:35:46.813 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] VM Paused (Lifecycle Event)
Nov 25 16:35:46 compute-0 nova_compute[254092]: 2025-11-25 16:35:46.818 254096 DEBUG nova.virt.libvirt.driver [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:35:46 compute-0 nova_compute[254092]: 2025-11-25 16:35:46.818 254096 DEBUG nova.virt.libvirt.driver [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:35:46 compute-0 nova_compute[254092]: 2025-11-25 16:35:46.818 254096 DEBUG nova.virt.libvirt.driver [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:35:46 compute-0 nova_compute[254092]: 2025-11-25 16:35:46.818 254096 DEBUG nova.virt.libvirt.driver [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:35:46 compute-0 nova_compute[254092]: 2025-11-25 16:35:46.819 254096 DEBUG nova.virt.libvirt.driver [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:35:46 compute-0 nova_compute[254092]: 2025-11-25 16:35:46.819 254096 DEBUG nova.virt.libvirt.driver [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:35:46 compute-0 podman[305614]: 2025-11-25 16:35:46.83129898 +0000 UTC m=+0.148883943 container init 959c1f93d798e3699765193e8aa617102c64512de9b71d6571ffb905f1ea8d0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 16:35:46 compute-0 podman[305614]: 2025-11-25 16:35:46.840413217 +0000 UTC m=+0.157998170 container start 959c1f93d798e3699765193e8aa617102c64512de9b71d6571ffb905f1ea8d0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_borg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Nov 25 16:35:46 compute-0 nova_compute[254092]: 2025-11-25 16:35:46.850 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:35:46 compute-0 podman[305614]: 2025-11-25 16:35:46.851625061 +0000 UTC m=+0.169210014 container attach 959c1f93d798e3699765193e8aa617102c64512de9b71d6571ffb905f1ea8d0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 16:35:46 compute-0 nova_compute[254092]: 2025-11-25 16:35:46.854 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088546.7690015, 1f3bb552-1d70-4ad1-9304-9ed3896db3d4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:35:46 compute-0 nova_compute[254092]: 2025-11-25 16:35:46.855 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] VM Resumed (Lifecycle Event)
Nov 25 16:35:46 compute-0 nova_compute[254092]: 2025-11-25 16:35:46.886 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:35:46 compute-0 nova_compute[254092]: 2025-11-25 16:35:46.891 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:35:46 compute-0 nova_compute[254092]: 2025-11-25 16:35:46.910 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:35:46 compute-0 nova_compute[254092]: 2025-11-25 16:35:46.916 254096 INFO nova.compute.manager [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Took 7.60 seconds to spawn the instance on the hypervisor.
Nov 25 16:35:46 compute-0 nova_compute[254092]: 2025-11-25 16:35:46.916 254096 DEBUG nova.compute.manager [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:35:46 compute-0 ceph-mon[74985]: pgmap v1463: 321 pgs: 321 active+clean; 257 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 3.9 MiB/s wr, 361 op/s
Nov 25 16:35:46 compute-0 nova_compute[254092]: 2025-11-25 16:35:46.984 254096 INFO nova.compute.manager [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Took 9.06 seconds to build instance.
Nov 25 16:35:47 compute-0 nova_compute[254092]: 2025-11-25 16:35:47.006 254096 DEBUG oslo_concurrency.lockutils [None req-7dd0c921-7b7d-4559-9c7d-a12fd339bd66 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "1f3bb552-1d70-4ad1-9304-9ed3896db3d4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.149s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1464: 321 pgs: 321 active+clean; 246 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 2.1 MiB/s wr, 307 op/s
Nov 25 16:35:47 compute-0 nova_compute[254092]: 2025-11-25 16:35:47.485 254096 DEBUG oslo_concurrency.lockutils [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Acquiring lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:47 compute-0 nova_compute[254092]: 2025-11-25 16:35:47.486 254096 DEBUG oslo_concurrency.lockutils [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:47 compute-0 nova_compute[254092]: 2025-11-25 16:35:47.486 254096 INFO nova.compute.manager [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Unshelving
Nov 25 16:35:47 compute-0 nova_compute[254092]: 2025-11-25 16:35:47.555 254096 DEBUG oslo_concurrency.lockutils [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:47 compute-0 nova_compute[254092]: 2025-11-25 16:35:47.555 254096 DEBUG oslo_concurrency.lockutils [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:47 compute-0 nova_compute[254092]: 2025-11-25 16:35:47.560 254096 DEBUG nova.objects.instance [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lazy-loading 'pci_requests' on Instance uuid a3d5d205-98f0-4820-a96c-7f3e59d0cdd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:35:47 compute-0 nova_compute[254092]: 2025-11-25 16:35:47.574 254096 DEBUG nova.objects.instance [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lazy-loading 'numa_topology' on Instance uuid a3d5d205-98f0-4820-a96c-7f3e59d0cdd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:35:47 compute-0 nova_compute[254092]: 2025-11-25 16:35:47.583 254096 DEBUG nova.virt.hardware [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:35:47 compute-0 nova_compute[254092]: 2025-11-25 16:35:47.583 254096 INFO nova.compute.claims [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:35:47 compute-0 ovn_controller[153477]: 2025-11-25T16:35:47Z|00368|binding|INFO|Releasing lport 639a1689-3ed6-4bc6-98a0-e7a7773b6e05 from this chassis (sb_readonly=0)
Nov 25 16:35:47 compute-0 ovn_controller[153477]: 2025-11-25T16:35:47Z|00369|binding|INFO|Releasing lport 6381e38a-44c0-40e8-bec6-3ddd886bfc49 from this chassis (sb_readonly=0)
Nov 25 16:35:47 compute-0 nova_compute[254092]: 2025-11-25 16:35:47.713 254096 DEBUG oslo_concurrency.processutils [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:35:47 compute-0 nova_compute[254092]: 2025-11-25 16:35:47.875 254096 DEBUG nova.objects.instance [None req-69e5fac2-3f59-4590-ac8c-b861b50ae548 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1f3bb552-1d70-4ad1-9304-9ed3896db3d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:35:47 compute-0 nova_compute[254092]: 2025-11-25 16:35:47.878 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:47 compute-0 nova_compute[254092]: 2025-11-25 16:35:47.905 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088547.9010558, 1f3bb552-1d70-4ad1-9304-9ed3896db3d4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:35:47 compute-0 nova_compute[254092]: 2025-11-25 16:35:47.906 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] VM Paused (Lifecycle Event)
Nov 25 16:35:47 compute-0 nice_borg[305637]: {
Nov 25 16:35:47 compute-0 nice_borg[305637]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 16:35:47 compute-0 nice_borg[305637]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:35:47 compute-0 nice_borg[305637]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 16:35:47 compute-0 nice_borg[305637]:         "osd_id": 1,
Nov 25 16:35:47 compute-0 nice_borg[305637]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:35:47 compute-0 nice_borg[305637]:         "type": "bluestore"
Nov 25 16:35:47 compute-0 nice_borg[305637]:     },
Nov 25 16:35:47 compute-0 nice_borg[305637]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 16:35:47 compute-0 nice_borg[305637]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:35:47 compute-0 nice_borg[305637]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 16:35:47 compute-0 nice_borg[305637]:         "osd_id": 2,
Nov 25 16:35:47 compute-0 nice_borg[305637]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:35:47 compute-0 nice_borg[305637]:         "type": "bluestore"
Nov 25 16:35:47 compute-0 nice_borg[305637]:     },
Nov 25 16:35:47 compute-0 nice_borg[305637]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 16:35:47 compute-0 nice_borg[305637]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:35:47 compute-0 nice_borg[305637]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 16:35:47 compute-0 nice_borg[305637]:         "osd_id": 0,
Nov 25 16:35:47 compute-0 nice_borg[305637]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:35:47 compute-0 nice_borg[305637]:         "type": "bluestore"
Nov 25 16:35:47 compute-0 nice_borg[305637]:     }
Nov 25 16:35:47 compute-0 nice_borg[305637]: }
Nov 25 16:35:47 compute-0 nova_compute[254092]: 2025-11-25 16:35:47.923 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:35:47 compute-0 nova_compute[254092]: 2025-11-25 16:35:47.927 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:35:47 compute-0 systemd[1]: libpod-959c1f93d798e3699765193e8aa617102c64512de9b71d6571ffb905f1ea8d0e.scope: Deactivated successfully.
Nov 25 16:35:47 compute-0 podman[305614]: 2025-11-25 16:35:47.944056612 +0000 UTC m=+1.261641565 container died 959c1f93d798e3699765193e8aa617102c64512de9b71d6571ffb905f1ea8d0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_borg, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:35:47 compute-0 systemd[1]: libpod-959c1f93d798e3699765193e8aa617102c64512de9b71d6571ffb905f1ea8d0e.scope: Consumed 1.022s CPU time.
Nov 25 16:35:47 compute-0 nova_compute[254092]: 2025-11-25 16:35:47.951 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] During sync_power_state the instance has a pending task (suspending). Skip.
Nov 25 16:35:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ca57565e75d7eed3ad9d17122ddd9954aa88948f2f36f9bfb4aa6141bdd55fc-merged.mount: Deactivated successfully.
Nov 25 16:35:48 compute-0 kernel: tap1aa5a9a0-4c (unregistering): left promiscuous mode
Nov 25 16:35:48 compute-0 podman[305614]: 2025-11-25 16:35:48.063030625 +0000 UTC m=+1.380615578 container remove 959c1f93d798e3699765193e8aa617102c64512de9b71d6571ffb905f1ea8d0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_borg, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:35:48 compute-0 NetworkManager[48891]: <info>  [1764088548.0664] device (tap1aa5a9a0-4c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:35:48 compute-0 ovn_controller[153477]: 2025-11-25T16:35:48Z|00370|binding|INFO|Releasing lport 1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d from this chassis (sb_readonly=0)
Nov 25 16:35:48 compute-0 ovn_controller[153477]: 2025-11-25T16:35:48Z|00371|binding|INFO|Setting lport 1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d down in Southbound
Nov 25 16:35:48 compute-0 ovn_controller[153477]: 2025-11-25T16:35:48Z|00372|binding|INFO|Removing iface tap1aa5a9a0-4c ovn-installed in OVS
Nov 25 16:35:48 compute-0 nova_compute[254092]: 2025-11-25 16:35:48.075 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:48 compute-0 nova_compute[254092]: 2025-11-25 16:35:48.077 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:48 compute-0 systemd[1]: libpod-conmon-959c1f93d798e3699765193e8aa617102c64512de9b71d6571ffb905f1ea8d0e.scope: Deactivated successfully.
Nov 25 16:35:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:48.085 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:92:a2 10.100.0.12'], port_security=['fa:16:3e:cb:92:a2 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '1f3bb552-1d70-4ad1-9304-9ed3896db3d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0816ae24-275c-455e-a549-929f4eb756e7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8bd01e0913564ac783fae350d6861e24', 'neutron:revision_number': '4', 'neutron:security_group_ids': '572f99e3-d1c9-4515-9799-c37da094cf6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=24fed12c-3cf8-4d51-9b64-c89f82ee1964, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:35:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:48.087 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d in datapath 0816ae24-275c-455e-a549-929f4eb756e7 unbound from our chassis
Nov 25 16:35:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:48.088 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0816ae24-275c-455e-a549-929f4eb756e7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:35:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:48.089 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3394d05e-c296-4250-a0b0-ecead4244e79]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:48.089 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7 namespace which is not needed anymore
Nov 25 16:35:48 compute-0 nova_compute[254092]: 2025-11-25 16:35:48.103 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:48 compute-0 sudo[305397]: pam_unix(sudo:session): session closed for user root
Nov 25 16:35:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:35:48 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:35:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:35:48 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:35:48 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 7e2f1619-9566-46c4-9341-43174b818eac does not exist
Nov 25 16:35:48 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 9e9f910d-0080-4c35-a68e-cb3500a38fb3 does not exist
Nov 25 16:35:48 compute-0 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d0000002d.scope: Deactivated successfully.
Nov 25 16:35:48 compute-0 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d0000002d.scope: Consumed 1.945s CPU time.
Nov 25 16:35:48 compute-0 systemd-machined[216343]: Machine qemu-52-instance-0000002d terminated.
Nov 25 16:35:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:35:48 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/687571090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:35:48 compute-0 sudo[305727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:35:48 compute-0 nova_compute[254092]: 2025-11-25 16:35:48.181 254096 DEBUG oslo_concurrency.processutils [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:35:48 compute-0 sudo[305727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:35:48 compute-0 sudo[305727]: pam_unix(sudo:session): session closed for user root
Nov 25 16:35:48 compute-0 nova_compute[254092]: 2025-11-25 16:35:48.188 254096 DEBUG nova.compute.provider_tree [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:35:48 compute-0 nova_compute[254092]: 2025-11-25 16:35:48.203 254096 DEBUG nova.scheduler.client.report [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:35:48 compute-0 kernel: tap1aa5a9a0-4c: entered promiscuous mode
Nov 25 16:35:48 compute-0 nova_compute[254092]: 2025-11-25 16:35:48.229 254096 DEBUG oslo_concurrency.lockutils [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:48 compute-0 NetworkManager[48891]: <info>  [1764088548.2324] manager: (tap1aa5a9a0-4c): new Tun device (/org/freedesktop/NetworkManager/Devices/166)
Nov 25 16:35:48 compute-0 ovn_controller[153477]: 2025-11-25T16:35:48Z|00373|binding|INFO|Claiming lport 1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d for this chassis.
Nov 25 16:35:48 compute-0 ovn_controller[153477]: 2025-11-25T16:35:48Z|00374|binding|INFO|1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d: Claiming fa:16:3e:cb:92:a2 10.100.0.12
Nov 25 16:35:48 compute-0 systemd-udevd[305429]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:35:48 compute-0 kernel: tap1aa5a9a0-4c (unregistering): left promiscuous mode
Nov 25 16:35:48 compute-0 nova_compute[254092]: 2025-11-25 16:35:48.234 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:48 compute-0 neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7[305630]: [NOTICE]   (305640) : haproxy version is 2.8.14-c23fe91
Nov 25 16:35:48 compute-0 neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7[305630]: [NOTICE]   (305640) : path to executable is /usr/sbin/haproxy
Nov 25 16:35:48 compute-0 neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7[305630]: [WARNING]  (305640) : Exiting Master process...
Nov 25 16:35:48 compute-0 neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7[305630]: [ALERT]    (305640) : Current worker (305643) exited with code 143 (Terminated)
Nov 25 16:35:48 compute-0 neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7[305630]: [WARNING]  (305640) : All workers exited. Exiting... (0)
Nov 25 16:35:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:48.239 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:92:a2 10.100.0.12'], port_security=['fa:16:3e:cb:92:a2 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '1f3bb552-1d70-4ad1-9304-9ed3896db3d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0816ae24-275c-455e-a549-929f4eb756e7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8bd01e0913564ac783fae350d6861e24', 'neutron:revision_number': '4', 'neutron:security_group_ids': '572f99e3-d1c9-4515-9799-c37da094cf6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=24fed12c-3cf8-4d51-9b64-c89f82ee1964, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:35:48 compute-0 systemd[1]: libpod-a5cfc6bf3f49d8052e34484098ae6a13fdd873ee3416fe48ff236dd3feaee812.scope: Deactivated successfully.
Nov 25 16:35:48 compute-0 podman[305766]: 2025-11-25 16:35:48.248408246 +0000 UTC m=+0.051664751 container stop a5cfc6bf3f49d8052e34484098ae6a13fdd873ee3416fe48ff236dd3feaee812 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:35:48 compute-0 conmon[305630]: conmon a5cfc6bf3f49d8052e34 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a5cfc6bf3f49d8052e34484098ae6a13fdd873ee3416fe48ff236dd3feaee812.scope/container/memory.events
Nov 25 16:35:48 compute-0 podman[305766]: 2025-11-25 16:35:48.25150194 +0000 UTC m=+0.054758465 container died a5cfc6bf3f49d8052e34484098ae6a13fdd873ee3416fe48ff236dd3feaee812 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 16:35:48 compute-0 sudo[305768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 16:35:48 compute-0 sudo[305768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:35:48 compute-0 ovn_controller[153477]: 2025-11-25T16:35:48Z|00375|binding|INFO|Setting lport 1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d ovn-installed in OVS
Nov 25 16:35:48 compute-0 ovn_controller[153477]: 2025-11-25T16:35:48Z|00376|binding|INFO|Setting lport 1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d up in Southbound
Nov 25 16:35:48 compute-0 ovn_controller[153477]: 2025-11-25T16:35:48Z|00377|binding|INFO|Releasing lport 1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d from this chassis (sb_readonly=1)
Nov 25 16:35:48 compute-0 ovn_controller[153477]: 2025-11-25T16:35:48Z|00378|if_status|INFO|Not setting lport 1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d down as sb is readonly
Nov 25 16:35:48 compute-0 ovn_controller[153477]: 2025-11-25T16:35:48Z|00379|binding|INFO|Removing iface tap1aa5a9a0-4c ovn-installed in OVS
Nov 25 16:35:48 compute-0 nova_compute[254092]: 2025-11-25 16:35:48.267 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:48 compute-0 sudo[305768]: pam_unix(sudo:session): session closed for user root
Nov 25 16:35:48 compute-0 ovn_controller[153477]: 2025-11-25T16:35:48Z|00380|binding|INFO|Releasing lport 1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d from this chassis (sb_readonly=0)
Nov 25 16:35:48 compute-0 ovn_controller[153477]: 2025-11-25T16:35:48Z|00381|binding|INFO|Setting lport 1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d down in Southbound
Nov 25 16:35:48 compute-0 nova_compute[254092]: 2025-11-25 16:35:48.275 254096 DEBUG nova.compute.manager [None req-69e5fac2-3f59-4590-ac8c-b861b50ae548 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:35:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:48.280 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:92:a2 10.100.0.12'], port_security=['fa:16:3e:cb:92:a2 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '1f3bb552-1d70-4ad1-9304-9ed3896db3d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0816ae24-275c-455e-a549-929f4eb756e7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8bd01e0913564ac783fae350d6861e24', 'neutron:revision_number': '4', 'neutron:security_group_ids': '572f99e3-d1c9-4515-9799-c37da094cf6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=24fed12c-3cf8-4d51-9b64-c89f82ee1964, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:35:48 compute-0 nova_compute[254092]: 2025-11-25 16:35:48.283 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:48 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a5cfc6bf3f49d8052e34484098ae6a13fdd873ee3416fe48ff236dd3feaee812-userdata-shm.mount: Deactivated successfully.
Nov 25 16:35:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3483c772ca17ded177e9a557307eaab88cd93a1ea39d5786231c6538101d2bb-merged.mount: Deactivated successfully.
Nov 25 16:35:48 compute-0 nova_compute[254092]: 2025-11-25 16:35:48.415 254096 INFO nova.network.neutron [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Updating port 660536bc-d4bf-4a4b-9515-06043951c25e with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Nov 25 16:35:48 compute-0 podman[305766]: 2025-11-25 16:35:48.458281101 +0000 UTC m=+0.261537596 container cleanup a5cfc6bf3f49d8052e34484098ae6a13fdd873ee3416fe48ff236dd3feaee812 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:35:48 compute-0 systemd[1]: libpod-conmon-a5cfc6bf3f49d8052e34484098ae6a13fdd873ee3416fe48ff236dd3feaee812.scope: Deactivated successfully.
Nov 25 16:35:48 compute-0 podman[305825]: 2025-11-25 16:35:48.596599488 +0000 UTC m=+0.116610380 container remove a5cfc6bf3f49d8052e34484098ae6a13fdd873ee3416fe48ff236dd3feaee812 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:35:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:48.602 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[06a0fde6-9471-4c04-83f6-0ef1b2ab184f]: (4, ('Tue Nov 25 04:35:48 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7 (a5cfc6bf3f49d8052e34484098ae6a13fdd873ee3416fe48ff236dd3feaee812)\na5cfc6bf3f49d8052e34484098ae6a13fdd873ee3416fe48ff236dd3feaee812\nTue Nov 25 04:35:48 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7 (a5cfc6bf3f49d8052e34484098ae6a13fdd873ee3416fe48ff236dd3feaee812)\na5cfc6bf3f49d8052e34484098ae6a13fdd873ee3416fe48ff236dd3feaee812\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:48.604 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[011a1949-27dd-4bda-be19-300b881dbee9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:48.605 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0816ae24-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:48 compute-0 nova_compute[254092]: 2025-11-25 16:35:48.606 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:48 compute-0 kernel: tap0816ae24-20: left promiscuous mode
Nov 25 16:35:48 compute-0 nova_compute[254092]: 2025-11-25 16:35:48.624 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:48.627 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1f0afe49-1642-4832-8d93-2b744de39c82]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:48.643 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e43403d4-e6fb-47e6-9081-2dd02ea79730]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:48.644 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e6050bda-d766-4acc-b89e-49fc006b9cc3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:48.659 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3d7089e0-efaa-4949-bdc8-c6f973487534]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 500360, 'reachable_time': 27891, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305844, 'error': None, 'target': 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:48 compute-0 systemd[1]: run-netns-ovnmeta\x2d0816ae24\x2d275c\x2d455e\x2da549\x2d929f4eb756e7.mount: Deactivated successfully.
Nov 25 16:35:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:48.663 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:35:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:48.663 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[5fddaefb-1fea-400e-96c6-9f8a46054f2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:48.664 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d in datapath 0816ae24-275c-455e-a549-929f4eb756e7 unbound from our chassis
Nov 25 16:35:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:48.665 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0816ae24-275c-455e-a549-929f4eb756e7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:35:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:48.666 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[123a1e15-8911-4a24-bafc-175b8f043acf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:48.667 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d in datapath 0816ae24-275c-455e-a549-929f4eb756e7 unbound from our chassis
Nov 25 16:35:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:48.668 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0816ae24-275c-455e-a549-929f4eb756e7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:35:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:48.669 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b39a62f4-74b3-4e18-a74b-9f84e689ec47]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:35:49 compute-0 ceph-mon[74985]: pgmap v1464: 321 pgs: 321 active+clean; 246 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 2.1 MiB/s wr, 307 op/s
Nov 25 16:35:49 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:35:49 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:35:49 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/687571090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:35:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1465: 321 pgs: 321 active+clean; 246 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 2.1 MiB/s wr, 304 op/s
Nov 25 16:35:49 compute-0 nova_compute[254092]: 2025-11-25 16:35:49.419 254096 DEBUG nova.compute.manager [req-087723e8-c7dd-4e19-9d61-a0f71f0ef558 req-6ce6d169-8d5c-4d60-a3f7-2a1819fae48f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Received event network-vif-plugged-1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:49 compute-0 nova_compute[254092]: 2025-11-25 16:35:49.419 254096 DEBUG oslo_concurrency.lockutils [req-087723e8-c7dd-4e19-9d61-a0f71f0ef558 req-6ce6d169-8d5c-4d60-a3f7-2a1819fae48f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1f3bb552-1d70-4ad1-9304-9ed3896db3d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:49 compute-0 nova_compute[254092]: 2025-11-25 16:35:49.420 254096 DEBUG oslo_concurrency.lockutils [req-087723e8-c7dd-4e19-9d61-a0f71f0ef558 req-6ce6d169-8d5c-4d60-a3f7-2a1819fae48f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1f3bb552-1d70-4ad1-9304-9ed3896db3d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:49 compute-0 nova_compute[254092]: 2025-11-25 16:35:49.420 254096 DEBUG oslo_concurrency.lockutils [req-087723e8-c7dd-4e19-9d61-a0f71f0ef558 req-6ce6d169-8d5c-4d60-a3f7-2a1819fae48f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1f3bb552-1d70-4ad1-9304-9ed3896db3d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:49 compute-0 nova_compute[254092]: 2025-11-25 16:35:49.420 254096 DEBUG nova.compute.manager [req-087723e8-c7dd-4e19-9d61-a0f71f0ef558 req-6ce6d169-8d5c-4d60-a3f7-2a1819fae48f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] No waiting events found dispatching network-vif-plugged-1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:35:49 compute-0 nova_compute[254092]: 2025-11-25 16:35:49.420 254096 WARNING nova.compute.manager [req-087723e8-c7dd-4e19-9d61-a0f71f0ef558 req-6ce6d169-8d5c-4d60-a3f7-2a1819fae48f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Received unexpected event network-vif-plugged-1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d for instance with vm_state suspended and task_state None.
Nov 25 16:35:49 compute-0 nova_compute[254092]: 2025-11-25 16:35:49.420 254096 DEBUG nova.compute.manager [req-087723e8-c7dd-4e19-9d61-a0f71f0ef558 req-6ce6d169-8d5c-4d60-a3f7-2a1819fae48f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Received event network-vif-unplugged-1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:49 compute-0 nova_compute[254092]: 2025-11-25 16:35:49.421 254096 DEBUG oslo_concurrency.lockutils [req-087723e8-c7dd-4e19-9d61-a0f71f0ef558 req-6ce6d169-8d5c-4d60-a3f7-2a1819fae48f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1f3bb552-1d70-4ad1-9304-9ed3896db3d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:49 compute-0 nova_compute[254092]: 2025-11-25 16:35:49.421 254096 DEBUG oslo_concurrency.lockutils [req-087723e8-c7dd-4e19-9d61-a0f71f0ef558 req-6ce6d169-8d5c-4d60-a3f7-2a1819fae48f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1f3bb552-1d70-4ad1-9304-9ed3896db3d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:49 compute-0 nova_compute[254092]: 2025-11-25 16:35:49.421 254096 DEBUG oslo_concurrency.lockutils [req-087723e8-c7dd-4e19-9d61-a0f71f0ef558 req-6ce6d169-8d5c-4d60-a3f7-2a1819fae48f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1f3bb552-1d70-4ad1-9304-9ed3896db3d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:49 compute-0 nova_compute[254092]: 2025-11-25 16:35:49.421 254096 DEBUG nova.compute.manager [req-087723e8-c7dd-4e19-9d61-a0f71f0ef558 req-6ce6d169-8d5c-4d60-a3f7-2a1819fae48f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] No waiting events found dispatching network-vif-unplugged-1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:35:49 compute-0 nova_compute[254092]: 2025-11-25 16:35:49.421 254096 WARNING nova.compute.manager [req-087723e8-c7dd-4e19-9d61-a0f71f0ef558 req-6ce6d169-8d5c-4d60-a3f7-2a1819fae48f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Received unexpected event network-vif-unplugged-1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d for instance with vm_state suspended and task_state None.
Nov 25 16:35:49 compute-0 nova_compute[254092]: 2025-11-25 16:35:49.703 254096 DEBUG oslo_concurrency.lockutils [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Acquiring lock "refresh_cache-a3d5d205-98f0-4820-a96c-7f3e59d0cdd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:35:49 compute-0 nova_compute[254092]: 2025-11-25 16:35:49.704 254096 DEBUG oslo_concurrency.lockutils [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Acquired lock "refresh_cache-a3d5d205-98f0-4820-a96c-7f3e59d0cdd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:35:49 compute-0 nova_compute[254092]: 2025-11-25 16:35:49.704 254096 DEBUG nova.network.neutron [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:35:50 compute-0 nova_compute[254092]: 2025-11-25 16:35:50.038 254096 DEBUG nova.compute.manager [req-0c02213b-e8a8-49c7-be7f-d36972c8a0de req-93a51eb3-4d6e-436f-bb35-6bb9e26edfc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Received event network-changed-660536bc-d4bf-4a4b-9515-06043951c25e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:50 compute-0 nova_compute[254092]: 2025-11-25 16:35:50.038 254096 DEBUG nova.compute.manager [req-0c02213b-e8a8-49c7-be7f-d36972c8a0de req-93a51eb3-4d6e-436f-bb35-6bb9e26edfc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Refreshing instance network info cache due to event network-changed-660536bc-d4bf-4a4b-9515-06043951c25e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:35:50 compute-0 nova_compute[254092]: 2025-11-25 16:35:50.039 254096 DEBUG oslo_concurrency.lockutils [req-0c02213b-e8a8-49c7-be7f-d36972c8a0de req-93a51eb3-4d6e-436f-bb35-6bb9e26edfc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-a3d5d205-98f0-4820-a96c-7f3e59d0cdd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:35:50 compute-0 nova_compute[254092]: 2025-11-25 16:35:50.060 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:50 compute-0 ceph-mon[74985]: pgmap v1465: 321 pgs: 321 active+clean; 246 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 2.1 MiB/s wr, 304 op/s
Nov 25 16:35:50 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Nov 25 16:35:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:35:50.255339) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 16:35:50 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Nov 25 16:35:50 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088550255362, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 702, "num_deletes": 254, "total_data_size": 705937, "memory_usage": 718872, "flush_reason": "Manual Compaction"}
Nov 25 16:35:50 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Nov 25 16:35:50 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088550260009, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 697284, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30035, "largest_seqno": 30736, "table_properties": {"data_size": 693619, "index_size": 1445, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8942, "raw_average_key_size": 20, "raw_value_size": 686003, "raw_average_value_size": 1541, "num_data_blocks": 63, "num_entries": 445, "num_filter_entries": 445, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764088514, "oldest_key_time": 1764088514, "file_creation_time": 1764088550, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:35:50 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 4703 microseconds, and 2411 cpu microseconds.
Nov 25 16:35:50 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:35:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:35:50.260040) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 697284 bytes OK
Nov 25 16:35:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:35:50.260054) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Nov 25 16:35:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:35:50.261183) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Nov 25 16:35:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:35:50.261193) EVENT_LOG_v1 {"time_micros": 1764088550261189, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 16:35:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:35:50.261205) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 16:35:50 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 702197, prev total WAL file size 702197, number of live WAL files 2.
Nov 25 16:35:50 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:35:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:35:50.261594) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Nov 25 16:35:50 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 16:35:50 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(680KB)], [65(7772KB)]
Nov 25 16:35:50 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088550261627, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 8656772, "oldest_snapshot_seqno": -1}
Nov 25 16:35:50 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 5324 keys, 6987743 bytes, temperature: kUnknown
Nov 25 16:35:50 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088550299725, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 6987743, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6953757, "index_size": 19604, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13317, "raw_key_size": 135192, "raw_average_key_size": 25, "raw_value_size": 6859450, "raw_average_value_size": 1288, "num_data_blocks": 796, "num_entries": 5324, "num_filter_entries": 5324, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764088550, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:35:50 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:35:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:35:50.299959) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 6987743 bytes
Nov 25 16:35:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:35:50.301367) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 226.9 rd, 183.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 7.6 +0.0 blob) out(6.7 +0.0 blob), read-write-amplify(22.4) write-amplify(10.0) OK, records in: 5847, records dropped: 523 output_compression: NoCompression
Nov 25 16:35:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:35:50.301406) EVENT_LOG_v1 {"time_micros": 1764088550301391, "job": 36, "event": "compaction_finished", "compaction_time_micros": 38153, "compaction_time_cpu_micros": 15583, "output_level": 6, "num_output_files": 1, "total_output_size": 6987743, "num_input_records": 5847, "num_output_records": 5324, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 16:35:50 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:35:50 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088550302018, "job": 36, "event": "table_file_deletion", "file_number": 67}
Nov 25 16:35:50 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:35:50 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088550303870, "job": 36, "event": "table_file_deletion", "file_number": 65}
Nov 25 16:35:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:35:50.261510) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:35:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:35:50.304018) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:35:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:35:50.304025) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:35:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:35:50.304026) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:35:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:35:50.304028) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:35:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:35:50.304030) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:35:50 compute-0 nova_compute[254092]: 2025-11-25 16:35:50.612 254096 DEBUG nova.compute.manager [None req-d10f4cb2-70c5-433d-b4f5-8e089da41d21 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:35:50 compute-0 nova_compute[254092]: 2025-11-25 16:35:50.670 254096 INFO nova.compute.manager [None req-d10f4cb2-70c5-433d-b4f5-8e089da41d21 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] instance snapshotting
Nov 25 16:35:50 compute-0 nova_compute[254092]: 2025-11-25 16:35:50.671 254096 WARNING nova.compute.manager [None req-d10f4cb2-70c5-433d-b4f5-8e089da41d21 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] trying to snapshot a non-running instance: (state: 4 expected: 1)
Nov 25 16:35:50 compute-0 nova_compute[254092]: 2025-11-25 16:35:50.871 254096 INFO nova.virt.libvirt.driver [None req-d10f4cb2-70c5-433d-b4f5-8e089da41d21 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Beginning cold snapshot process
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.027 254096 DEBUG nova.virt.libvirt.imagebackend [None req-d10f4cb2-70c5-433d-b4f5-8e089da41d21 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] No parent info for 8b512c8e-2281-41de-a668-eb983e174ba0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 25 16:35:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:35:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:35:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:35:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:35:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011050157297974865 of space, bias 1.0, pg target 0.33150471893924593 quantized to 32 (current 32)
Nov 25 16:35:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:35:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:35:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:35:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:35:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:35:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.001423096450315817 of space, bias 1.0, pg target 0.4269289350947451 quantized to 32 (current 32)
Nov 25 16:35:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:35:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 16:35:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:35:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:35:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:35:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 16:35:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:35:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 16:35:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:35:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:35:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:35:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.147 254096 DEBUG nova.network.neutron [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Updating instance_info_cache with network_info: [{"id": "660536bc-d4bf-4a4b-9515-06043951c25e", "address": "fa:16:3e:10:46:64", "network": {"id": "3960d4c5-60d7-49e3-b26d-f1317dd96f9f", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1770094643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d7c4dbc1eb44f39aa7ccb9b6363e554", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660536bc-d4", "ovs_interfaceid": "660536bc-d4bf-4a4b-9515-06043951c25e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.164 254096 DEBUG oslo_concurrency.lockutils [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Releasing lock "refresh_cache-a3d5d205-98f0-4820-a96c-7f3e59d0cdd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.165 254096 DEBUG nova.virt.libvirt.driver [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.166 254096 INFO nova.virt.libvirt.driver [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Creating image(s)
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.183 254096 DEBUG nova.storage.rbd_utils [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] rbd image a3d5d205-98f0-4820-a96c-7f3e59d0cdd9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.186 254096 DEBUG nova.objects.instance [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lazy-loading 'trusted_certs' on Instance uuid a3d5d205-98f0-4820-a96c-7f3e59d0cdd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.187 254096 DEBUG oslo_concurrency.lockutils [req-0c02213b-e8a8-49c7-be7f-d36972c8a0de req-93a51eb3-4d6e-436f-bb35-6bb9e26edfc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-a3d5d205-98f0-4820-a96c-7f3e59d0cdd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.187 254096 DEBUG nova.network.neutron [req-0c02213b-e8a8-49c7-be7f-d36972c8a0de req-93a51eb3-4d6e-436f-bb35-6bb9e26edfc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Refreshing network info cache for port 660536bc-d4bf-4a4b-9515-06043951c25e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.227 254096 DEBUG nova.storage.rbd_utils [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] rbd image a3d5d205-98f0-4820-a96c-7f3e59d0cdd9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.249 254096 DEBUG nova.storage.rbd_utils [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] rbd image a3d5d205-98f0-4820-a96c-7f3e59d0cdd9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.252 254096 DEBUG oslo_concurrency.lockutils [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Acquiring lock "27e8629442fa960187c0eade446ab07b5c8acd2f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.253 254096 DEBUG oslo_concurrency.lockutils [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "27e8629442fa960187c0eade446ab07b5c8acd2f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.257 254096 DEBUG nova.storage.rbd_utils [None req-d10f4cb2-70c5-433d-b4f5-8e089da41d21 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] creating snapshot(ed8264f1ca90478088744f625ee9ab98) on rbd image(1f3bb552-1d70-4ad1-9304-9ed3896db3d4_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 16:35:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Nov 25 16:35:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Nov 25 16:35:51 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Nov 25 16:35:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1467: 321 pgs: 321 active+clean; 246 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 564 KiB/s wr, 145 op/s
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.394 254096 DEBUG nova.storage.rbd_utils [None req-d10f4cb2-70c5-433d-b4f5-8e089da41d21 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] cloning vms/1f3bb552-1d70-4ad1-9304-9ed3896db3d4_disk@ed8264f1ca90478088744f625ee9ab98 to images/4ee8e35f-8b5b-4fa9-9166-c82c2334e104 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.478 254096 DEBUG nova.storage.rbd_utils [None req-d10f4cb2-70c5-433d-b4f5-8e089da41d21 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] flattening images/4ee8e35f-8b5b-4fa9-9166-c82c2334e104 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.510 254096 DEBUG nova.virt.libvirt.imagebackend [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Image locations are: [{'url': 'rbd://d82baeae-c742-50a4-b8f6-b5257c8a2c92/images/6d9352ea-7650-4e12-a1b0-1f5e5bc16789/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://d82baeae-c742-50a4-b8f6-b5257c8a2c92/images/6d9352ea-7650-4e12-a1b0-1f5e5bc16789/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.563 254096 DEBUG nova.compute.manager [req-0fd8d034-5739-40f0-8934-bcb8437852e6 req-609a9d0e-67c0-483e-bbe6-069fb0d08a10 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Received event network-vif-plugged-1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.564 254096 DEBUG oslo_concurrency.lockutils [req-0fd8d034-5739-40f0-8934-bcb8437852e6 req-609a9d0e-67c0-483e-bbe6-069fb0d08a10 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1f3bb552-1d70-4ad1-9304-9ed3896db3d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.564 254096 DEBUG oslo_concurrency.lockutils [req-0fd8d034-5739-40f0-8934-bcb8437852e6 req-609a9d0e-67c0-483e-bbe6-069fb0d08a10 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1f3bb552-1d70-4ad1-9304-9ed3896db3d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.564 254096 DEBUG oslo_concurrency.lockutils [req-0fd8d034-5739-40f0-8934-bcb8437852e6 req-609a9d0e-67c0-483e-bbe6-069fb0d08a10 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1f3bb552-1d70-4ad1-9304-9ed3896db3d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.564 254096 DEBUG nova.compute.manager [req-0fd8d034-5739-40f0-8934-bcb8437852e6 req-609a9d0e-67c0-483e-bbe6-069fb0d08a10 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] No waiting events found dispatching network-vif-plugged-1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.564 254096 WARNING nova.compute.manager [req-0fd8d034-5739-40f0-8934-bcb8437852e6 req-609a9d0e-67c0-483e-bbe6-069fb0d08a10 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Received unexpected event network-vif-plugged-1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d for instance with vm_state suspended and task_state image_uploading.
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.565 254096 DEBUG nova.compute.manager [req-0fd8d034-5739-40f0-8934-bcb8437852e6 req-609a9d0e-67c0-483e-bbe6-069fb0d08a10 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Received event network-vif-plugged-1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.565 254096 DEBUG oslo_concurrency.lockutils [req-0fd8d034-5739-40f0-8934-bcb8437852e6 req-609a9d0e-67c0-483e-bbe6-069fb0d08a10 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1f3bb552-1d70-4ad1-9304-9ed3896db3d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.565 254096 DEBUG oslo_concurrency.lockutils [req-0fd8d034-5739-40f0-8934-bcb8437852e6 req-609a9d0e-67c0-483e-bbe6-069fb0d08a10 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1f3bb552-1d70-4ad1-9304-9ed3896db3d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.565 254096 DEBUG oslo_concurrency.lockutils [req-0fd8d034-5739-40f0-8934-bcb8437852e6 req-609a9d0e-67c0-483e-bbe6-069fb0d08a10 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1f3bb552-1d70-4ad1-9304-9ed3896db3d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.565 254096 DEBUG nova.compute.manager [req-0fd8d034-5739-40f0-8934-bcb8437852e6 req-609a9d0e-67c0-483e-bbe6-069fb0d08a10 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] No waiting events found dispatching network-vif-plugged-1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.566 254096 WARNING nova.compute.manager [req-0fd8d034-5739-40f0-8934-bcb8437852e6 req-609a9d0e-67c0-483e-bbe6-069fb0d08a10 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Received unexpected event network-vif-plugged-1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d for instance with vm_state suspended and task_state image_uploading.
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.566 254096 DEBUG nova.compute.manager [req-0fd8d034-5739-40f0-8934-bcb8437852e6 req-609a9d0e-67c0-483e-bbe6-069fb0d08a10 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Received event network-vif-plugged-1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.566 254096 DEBUG oslo_concurrency.lockutils [req-0fd8d034-5739-40f0-8934-bcb8437852e6 req-609a9d0e-67c0-483e-bbe6-069fb0d08a10 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1f3bb552-1d70-4ad1-9304-9ed3896db3d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.566 254096 DEBUG oslo_concurrency.lockutils [req-0fd8d034-5739-40f0-8934-bcb8437852e6 req-609a9d0e-67c0-483e-bbe6-069fb0d08a10 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1f3bb552-1d70-4ad1-9304-9ed3896db3d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.566 254096 DEBUG oslo_concurrency.lockutils [req-0fd8d034-5739-40f0-8934-bcb8437852e6 req-609a9d0e-67c0-483e-bbe6-069fb0d08a10 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1f3bb552-1d70-4ad1-9304-9ed3896db3d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.566 254096 DEBUG nova.compute.manager [req-0fd8d034-5739-40f0-8934-bcb8437852e6 req-609a9d0e-67c0-483e-bbe6-069fb0d08a10 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] No waiting events found dispatching network-vif-plugged-1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.567 254096 WARNING nova.compute.manager [req-0fd8d034-5739-40f0-8934-bcb8437852e6 req-609a9d0e-67c0-483e-bbe6-069fb0d08a10 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Received unexpected event network-vif-plugged-1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d for instance with vm_state suspended and task_state image_uploading.
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.567 254096 DEBUG nova.compute.manager [req-0fd8d034-5739-40f0-8934-bcb8437852e6 req-609a9d0e-67c0-483e-bbe6-069fb0d08a10 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Received event network-vif-unplugged-1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.567 254096 DEBUG oslo_concurrency.lockutils [req-0fd8d034-5739-40f0-8934-bcb8437852e6 req-609a9d0e-67c0-483e-bbe6-069fb0d08a10 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1f3bb552-1d70-4ad1-9304-9ed3896db3d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.567 254096 DEBUG oslo_concurrency.lockutils [req-0fd8d034-5739-40f0-8934-bcb8437852e6 req-609a9d0e-67c0-483e-bbe6-069fb0d08a10 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1f3bb552-1d70-4ad1-9304-9ed3896db3d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.567 254096 DEBUG oslo_concurrency.lockutils [req-0fd8d034-5739-40f0-8934-bcb8437852e6 req-609a9d0e-67c0-483e-bbe6-069fb0d08a10 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1f3bb552-1d70-4ad1-9304-9ed3896db3d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.567 254096 DEBUG nova.compute.manager [req-0fd8d034-5739-40f0-8934-bcb8437852e6 req-609a9d0e-67c0-483e-bbe6-069fb0d08a10 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] No waiting events found dispatching network-vif-unplugged-1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.567 254096 WARNING nova.compute.manager [req-0fd8d034-5739-40f0-8934-bcb8437852e6 req-609a9d0e-67c0-483e-bbe6-069fb0d08a10 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Received unexpected event network-vif-unplugged-1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d for instance with vm_state suspended and task_state image_uploading.
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.568 254096 DEBUG nova.compute.manager [req-0fd8d034-5739-40f0-8934-bcb8437852e6 req-609a9d0e-67c0-483e-bbe6-069fb0d08a10 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Received event network-vif-plugged-1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.568 254096 DEBUG oslo_concurrency.lockutils [req-0fd8d034-5739-40f0-8934-bcb8437852e6 req-609a9d0e-67c0-483e-bbe6-069fb0d08a10 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1f3bb552-1d70-4ad1-9304-9ed3896db3d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.568 254096 DEBUG oslo_concurrency.lockutils [req-0fd8d034-5739-40f0-8934-bcb8437852e6 req-609a9d0e-67c0-483e-bbe6-069fb0d08a10 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1f3bb552-1d70-4ad1-9304-9ed3896db3d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.568 254096 DEBUG oslo_concurrency.lockutils [req-0fd8d034-5739-40f0-8934-bcb8437852e6 req-609a9d0e-67c0-483e-bbe6-069fb0d08a10 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1f3bb552-1d70-4ad1-9304-9ed3896db3d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.568 254096 DEBUG nova.compute.manager [req-0fd8d034-5739-40f0-8934-bcb8437852e6 req-609a9d0e-67c0-483e-bbe6-069fb0d08a10 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] No waiting events found dispatching network-vif-plugged-1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.568 254096 WARNING nova.compute.manager [req-0fd8d034-5739-40f0-8934-bcb8437852e6 req-609a9d0e-67c0-483e-bbe6-069fb0d08a10 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Received unexpected event network-vif-plugged-1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d for instance with vm_state suspended and task_state image_uploading.
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.573 254096 DEBUG nova.virt.libvirt.imagebackend [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Selected location: {'url': 'rbd://d82baeae-c742-50a4-b8f6-b5257c8a2c92/images/6d9352ea-7650-4e12-a1b0-1f5e5bc16789/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.573 254096 DEBUG nova.storage.rbd_utils [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] cloning images/6d9352ea-7650-4e12-a1b0-1f5e5bc16789@snap to None/a3d5d205-98f0-4820-a96c-7f3e59d0cdd9_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.720 254096 DEBUG oslo_concurrency.lockutils [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "27e8629442fa960187c0eade446ab07b5c8acd2f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.467s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.776 254096 DEBUG nova.storage.rbd_utils [None req-d10f4cb2-70c5-433d-b4f5-8e089da41d21 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] removing snapshot(ed8264f1ca90478088744f625ee9ab98) on rbd image(1f3bb552-1d70-4ad1-9304-9ed3896db3d4_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.810 254096 DEBUG nova.objects.instance [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lazy-loading 'migration_context' on Instance uuid a3d5d205-98f0-4820-a96c-7f3e59d0cdd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.851 254096 DEBUG nova.storage.rbd_utils [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] flattening vms/a3d5d205-98f0-4820-a96c-7f3e59d0cdd9_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.899 254096 DEBUG oslo_concurrency.lockutils [None req-ee8bc4cc-b86b-42bd-9e56-62c631f69005 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Acquiring lock "270ad7f6-74d4-4c29-9856-77768f170789" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.900 254096 DEBUG oslo_concurrency.lockutils [None req-ee8bc4cc-b86b-42bd-9e56-62c631f69005 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Lock "270ad7f6-74d4-4c29-9856-77768f170789" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.900 254096 DEBUG oslo_concurrency.lockutils [None req-ee8bc4cc-b86b-42bd-9e56-62c631f69005 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Acquiring lock "270ad7f6-74d4-4c29-9856-77768f170789-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.900 254096 DEBUG oslo_concurrency.lockutils [None req-ee8bc4cc-b86b-42bd-9e56-62c631f69005 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Lock "270ad7f6-74d4-4c29-9856-77768f170789-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.900 254096 DEBUG oslo_concurrency.lockutils [None req-ee8bc4cc-b86b-42bd-9e56-62c631f69005 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Lock "270ad7f6-74d4-4c29-9856-77768f170789-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.902 254096 INFO nova.compute.manager [None req-ee8bc4cc-b86b-42bd-9e56-62c631f69005 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Terminating instance
Nov 25 16:35:51 compute-0 nova_compute[254092]: 2025-11-25 16:35:51.903 254096 DEBUG nova.compute.manager [None req-ee8bc4cc-b86b-42bd-9e56-62c631f69005 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:35:52 compute-0 kernel: tap84ab0426-01 (unregistering): left promiscuous mode
Nov 25 16:35:52 compute-0 NetworkManager[48891]: <info>  [1764088552.2048] device (tap84ab0426-01): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.207 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:52 compute-0 ovn_controller[153477]: 2025-11-25T16:35:52Z|00382|binding|INFO|Releasing lport 84ab0426-0174-4297-bb2b-e5964e453530 from this chassis (sb_readonly=0)
Nov 25 16:35:52 compute-0 ovn_controller[153477]: 2025-11-25T16:35:52Z|00383|binding|INFO|Setting lport 84ab0426-0174-4297-bb2b-e5964e453530 down in Southbound
Nov 25 16:35:52 compute-0 ovn_controller[153477]: 2025-11-25T16:35:52Z|00384|binding|INFO|Removing iface tap84ab0426-01 ovn-installed in OVS
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.218 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.220 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:52.225 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:06:28:4e 10.100.0.14'], port_security=['fa:16:3e:06:28:4e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '270ad7f6-74d4-4c29-9856-77768f170789', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-82742f46-fb6e-443e-a99d-84c5367a4ccd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fe7901baa563491c8609089aa4334bf1', 'neutron:revision_number': '6', 'neutron:security_group_ids': '10995e2d-e9a2-4098-859c-5dcd4d5f741f a5545b41-8988-4934-9baf-3036fea9a6f6 f0e700c9-75a4-4d6f-8ebc-2c0597dea301', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d6fc76d1-a8a4-44cf-ab8f-0304e50e033c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=84ab0426-0174-4297-bb2b-e5964e453530) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:35:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:52.226 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 84ab0426-0174-4297-bb2b-e5964e453530 in datapath 82742f46-fb6e-443e-a99d-84c5367a4ccd unbound from our chassis
Nov 25 16:35:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:52.228 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 82742f46-fb6e-443e-a99d-84c5367a4ccd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:35:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:52.228 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f9776b30-c0c7-443f-89a4-84a7ba0c87aa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:52.229 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-82742f46-fb6e-443e-a99d-84c5367a4ccd namespace which is not needed anymore
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.229 254096 DEBUG nova.virt.libvirt.driver [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Image rbd:vms/a3d5d205-98f0-4820-a96c-7f3e59d0cdd9_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.229 254096 DEBUG nova.virt.libvirt.driver [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.229 254096 DEBUG nova.virt.libvirt.driver [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Ensure instance console log exists: /var/lib/nova/instances/a3d5d205-98f0-4820-a96c-7f3e59d0cdd9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.230 254096 DEBUG oslo_concurrency.lockutils [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.230 254096 DEBUG oslo_concurrency.lockutils [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.230 254096 DEBUG oslo_concurrency.lockutils [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.233 254096 DEBUG nova.virt.libvirt.driver [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Start _get_guest_xml network_info=[{"id": "660536bc-d4bf-4a4b-9515-06043951c25e", "address": "fa:16:3e:10:46:64", "network": {"id": "3960d4c5-60d7-49e3-b26d-f1317dd96f9f", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1770094643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d7c4dbc1eb44f39aa7ccb9b6363e554", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660536bc-d4", "ovs_interfaceid": "660536bc-d4bf-4a4b-9515-06043951c25e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-25T16:35:24Z,direct_url=<?>,disk_format='raw',id=6d9352ea-7650-4e12-a1b0-1f5e5bc16789,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-2038779180-shelved',owner='2d7c4dbc1eb44f39aa7ccb9b6363e554',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-25T16:35:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.238 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.241 254096 WARNING nova.virt.libvirt.driver [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.246 254096 DEBUG nova.virt.libvirt.host [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.246 254096 DEBUG nova.virt.libvirt.host [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:35:52 compute-0 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d00000029.scope: Deactivated successfully.
Nov 25 16:35:52 compute-0 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d00000029.scope: Consumed 15.170s CPU time.
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.249 254096 DEBUG nova.virt.libvirt.host [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.250 254096 DEBUG nova.virt.libvirt.host [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.251 254096 DEBUG nova.virt.libvirt.driver [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.251 254096 DEBUG nova.virt.hardware [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-25T16:35:24Z,direct_url=<?>,disk_format='raw',id=6d9352ea-7650-4e12-a1b0-1f5e5bc16789,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-2038779180-shelved',owner='2d7c4dbc1eb44f39aa7ccb9b6363e554',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-25T16:35:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.252 254096 DEBUG nova.virt.hardware [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.253 254096 DEBUG nova.virt.hardware [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.253 254096 DEBUG nova.virt.hardware [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:35:52 compute-0 systemd-machined[216343]: Machine qemu-46-instance-00000029 terminated.
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.254 254096 DEBUG nova.virt.hardware [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.255 254096 DEBUG nova.virt.hardware [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.256 254096 DEBUG nova.virt.hardware [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.256 254096 DEBUG nova.virt.hardware [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.256 254096 DEBUG nova.virt.hardware [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.256 254096 DEBUG nova.virt.hardware [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.256 254096 DEBUG nova.virt.hardware [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.256 254096 DEBUG nova.objects.instance [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lazy-loading 'vcpu_model' on Instance uuid a3d5d205-98f0-4820-a96c-7f3e59d0cdd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.271 254096 DEBUG oslo_concurrency.processutils [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:35:52 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.349 254096 INFO nova.virt.libvirt.driver [-] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Instance destroyed successfully.
Nov 25 16:35:52 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.350 254096 DEBUG nova.objects.instance [None req-ee8bc4cc-b86b-42bd-9e56-62c631f69005 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Lazy-loading 'resources' on Instance uuid 270ad7f6-74d4-4c29-9856-77768f170789 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:35:52 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Nov 25 16:35:52 compute-0 ceph-mon[74985]: osdmap e181: 3 total, 3 up, 3 in
Nov 25 16:35:52 compute-0 ceph-mon[74985]: pgmap v1467: 321 pgs: 321 active+clean; 246 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 564 KiB/s wr, 145 op/s
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.362 254096 DEBUG nova.virt.libvirt.vif [None req-ee8bc4cc-b86b-42bd-9e56-62c631f69005 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:34:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-402821829',display_name='tempest-SecurityGroupsTestJSON-server-402821829',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-402821829',id=41,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:34:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fe7901baa563491c8609089aa4334bf1',ramdisk_id='',reservation_id='r-qwbkjkuw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-716261307',owner_user_name='tempest-SecurityGroupsTestJSON-716261307-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:34:51Z,user_data=None,user_id='650f3d90afcd4e85b7042981dc353a2d',uuid=270ad7f6-74d4-4c29-9856-77768f170789,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "84ab0426-0174-4297-bb2b-e5964e453530", "address": "fa:16:3e:06:28:4e", "network": {"id": "82742f46-fb6e-443e-a99d-84c5367a4ccd", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-522194148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe7901baa563491c8609089aa4334bf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84ab0426-01", "ovs_interfaceid": "84ab0426-0174-4297-bb2b-e5964e453530", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.363 254096 DEBUG nova.network.os_vif_util [None req-ee8bc4cc-b86b-42bd-9e56-62c631f69005 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Converting VIF {"id": "84ab0426-0174-4297-bb2b-e5964e453530", "address": "fa:16:3e:06:28:4e", "network": {"id": "82742f46-fb6e-443e-a99d-84c5367a4ccd", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-522194148-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fe7901baa563491c8609089aa4334bf1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84ab0426-01", "ovs_interfaceid": "84ab0426-0174-4297-bb2b-e5964e453530", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.364 254096 DEBUG nova.network.os_vif_util [None req-ee8bc4cc-b86b-42bd-9e56-62c631f69005 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:06:28:4e,bridge_name='br-int',has_traffic_filtering=True,id=84ab0426-0174-4297-bb2b-e5964e453530,network=Network(82742f46-fb6e-443e-a99d-84c5367a4ccd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84ab0426-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.364 254096 DEBUG os_vif [None req-ee8bc4cc-b86b-42bd-9e56-62c631f69005 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:06:28:4e,bridge_name='br-int',has_traffic_filtering=True,id=84ab0426-0174-4297-bb2b-e5964e453530,network=Network(82742f46-fb6e-443e-a99d-84c5367a4ccd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84ab0426-01') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.366 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.366 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84ab0426-01, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.367 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.369 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.371 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.375 254096 INFO os_vif [None req-ee8bc4cc-b86b-42bd-9e56-62c631f69005 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:06:28:4e,bridge_name='br-int',has_traffic_filtering=True,id=84ab0426-0174-4297-bb2b-e5964e453530,network=Network(82742f46-fb6e-443e-a99d-84c5367a4ccd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84ab0426-01')
Nov 25 16:35:52 compute-0 neutron-haproxy-ovnmeta-82742f46-fb6e-443e-a99d-84c5367a4ccd[301083]: [NOTICE]   (301087) : haproxy version is 2.8.14-c23fe91
Nov 25 16:35:52 compute-0 neutron-haproxy-ovnmeta-82742f46-fb6e-443e-a99d-84c5367a4ccd[301083]: [NOTICE]   (301087) : path to executable is /usr/sbin/haproxy
Nov 25 16:35:52 compute-0 neutron-haproxy-ovnmeta-82742f46-fb6e-443e-a99d-84c5367a4ccd[301083]: [WARNING]  (301087) : Exiting Master process...
Nov 25 16:35:52 compute-0 neutron-haproxy-ovnmeta-82742f46-fb6e-443e-a99d-84c5367a4ccd[301083]: [WARNING]  (301087) : Exiting Master process...
Nov 25 16:35:52 compute-0 neutron-haproxy-ovnmeta-82742f46-fb6e-443e-a99d-84c5367a4ccd[301083]: [ALERT]    (301087) : Current worker (301089) exited with code 143 (Terminated)
Nov 25 16:35:52 compute-0 neutron-haproxy-ovnmeta-82742f46-fb6e-443e-a99d-84c5367a4ccd[301083]: [WARNING]  (301087) : All workers exited. Exiting... (0)
Nov 25 16:35:52 compute-0 systemd[1]: libpod-c4b831c7a775b898d953c2b93b587e2b837d01dc5240f6be9c79df11fce4cc26.scope: Deactivated successfully.
Nov 25 16:35:52 compute-0 podman[306208]: 2025-11-25 16:35:52.393057954 +0000 UTC m=+0.060107039 container died c4b831c7a775b898d953c2b93b587e2b837d01dc5240f6be9c79df11fce4cc26 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-82742f46-fb6e-443e-a99d-84c5367a4ccd, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:35:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c4b831c7a775b898d953c2b93b587e2b837d01dc5240f6be9c79df11fce4cc26-userdata-shm.mount: Deactivated successfully.
Nov 25 16:35:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a0ed159f9bbbb46b9f628e39e9f53d3c1be8d1c9987113299120773552e4043-merged.mount: Deactivated successfully.
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.439 254096 DEBUG nova.storage.rbd_utils [None req-d10f4cb2-70c5-433d-b4f5-8e089da41d21 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] creating snapshot(snap) on rbd image(4ee8e35f-8b5b-4fa9-9166-c82c2334e104) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 16:35:52 compute-0 podman[306208]: 2025-11-25 16:35:52.443370567 +0000 UTC m=+0.110419652 container cleanup c4b831c7a775b898d953c2b93b587e2b837d01dc5240f6be9c79df11fce4cc26 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-82742f46-fb6e-443e-a99d-84c5367a4ccd, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 16:35:52 compute-0 systemd[1]: libpod-conmon-c4b831c7a775b898d953c2b93b587e2b837d01dc5240f6be9c79df11fce4cc26.scope: Deactivated successfully.
Nov 25 16:35:52 compute-0 podman[306298]: 2025-11-25 16:35:52.522737017 +0000 UTC m=+0.057003216 container remove c4b831c7a775b898d953c2b93b587e2b837d01dc5240f6be9c79df11fce4cc26 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-82742f46-fb6e-443e-a99d-84c5367a4ccd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 16:35:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:52.533 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c0a05ab6-c039-46c1-bb20-1cd9f2825a4e]: (4, ('Tue Nov 25 04:35:52 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-82742f46-fb6e-443e-a99d-84c5367a4ccd (c4b831c7a775b898d953c2b93b587e2b837d01dc5240f6be9c79df11fce4cc26)\nc4b831c7a775b898d953c2b93b587e2b837d01dc5240f6be9c79df11fce4cc26\nTue Nov 25 04:35:52 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-82742f46-fb6e-443e-a99d-84c5367a4ccd (c4b831c7a775b898d953c2b93b587e2b837d01dc5240f6be9c79df11fce4cc26)\nc4b831c7a775b898d953c2b93b587e2b837d01dc5240f6be9c79df11fce4cc26\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:52.535 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a608b880-b69a-41af-8aec-7c2589749df1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:52.536 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap82742f46-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.538 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:52 compute-0 kernel: tap82742f46-f0: left promiscuous mode
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.566 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:52.573 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c2f4c5ab-1ff8-46a5-9135-8dee0e69234b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:52.590 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8e50491a-4452-4a2e-81db-a6fb37a3ffaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:52.592 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5a0fb949-a62a-448f-abd0-064ab816108f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:52.615 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c8a1688f-c3e2-4a0e-8abc-27f9c22ee557]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 494578, 'reachable_time': 27071, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306321, 'error': None, 'target': 'ovnmeta-82742f46-fb6e-443e-a99d-84c5367a4ccd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:52 compute-0 systemd[1]: run-netns-ovnmeta\x2d82742f46\x2dfb6e\x2d443e\x2da99d\x2d84c5367a4ccd.mount: Deactivated successfully.
Nov 25 16:35:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:52.621 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-82742f46-fb6e-443e-a99d-84c5367a4ccd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:35:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:52.621 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[8a55eeb5-47e6-487f-989a-1d9f4a9fce51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:52 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:35:52 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1276632678' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.758 254096 DEBUG oslo_concurrency.processutils [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.797 254096 DEBUG nova.storage.rbd_utils [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] rbd image a3d5d205-98f0-4820-a96c-7f3e59d0cdd9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.804 254096 DEBUG oslo_concurrency.processutils [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.879 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.895 254096 INFO nova.virt.libvirt.driver [None req-ee8bc4cc-b86b-42bd-9e56-62c631f69005 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Deleting instance files /var/lib/nova/instances/270ad7f6-74d4-4c29-9856-77768f170789_del
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.896 254096 INFO nova.virt.libvirt.driver [None req-ee8bc4cc-b86b-42bd-9e56-62c631f69005 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Deletion of /var/lib/nova/instances/270ad7f6-74d4-4c29-9856-77768f170789_del complete
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.956 254096 INFO nova.compute.manager [None req-ee8bc4cc-b86b-42bd-9e56-62c631f69005 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Took 1.05 seconds to destroy the instance on the hypervisor.
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.957 254096 DEBUG oslo.service.loopingcall [None req-ee8bc4cc-b86b-42bd-9e56-62c631f69005 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.957 254096 DEBUG nova.compute.manager [-] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:35:52 compute-0 nova_compute[254092]: 2025-11-25 16:35:52.957 254096 DEBUG nova.network.neutron [-] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.260 254096 DEBUG nova.network.neutron [req-0c02213b-e8a8-49c7-be7f-d36972c8a0de req-93a51eb3-4d6e-436f-bb35-6bb9e26edfc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Updated VIF entry in instance network info cache for port 660536bc-d4bf-4a4b-9515-06043951c25e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.261 254096 DEBUG nova.network.neutron [req-0c02213b-e8a8-49c7-be7f-d36972c8a0de req-93a51eb3-4d6e-436f-bb35-6bb9e26edfc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Updating instance_info_cache with network_info: [{"id": "660536bc-d4bf-4a4b-9515-06043951c25e", "address": "fa:16:3e:10:46:64", "network": {"id": "3960d4c5-60d7-49e3-b26d-f1317dd96f9f", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1770094643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d7c4dbc1eb44f39aa7ccb9b6363e554", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660536bc-d4", "ovs_interfaceid": "660536bc-d4bf-4a4b-9515-06043951c25e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:35:53 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:35:53 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1487872049' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.273 254096 DEBUG oslo_concurrency.lockutils [req-0c02213b-e8a8-49c7-be7f-d36972c8a0de req-93a51eb3-4d6e-436f-bb35-6bb9e26edfc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-a3d5d205-98f0-4820-a96c-7f3e59d0cdd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.292 254096 DEBUG oslo_concurrency.processutils [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.294 254096 DEBUG nova.virt.libvirt.vif [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:33:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-2038779180',display_name='tempest-ServersNegativeTestJSON-server-2038779180',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-2038779180',id=37,image_ref='6d9352ea-7650-4e12-a1b0-1f5e5bc16789',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:33:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='2d7c4dbc1eb44f39aa7ccb9b6363e554',ramdisk_id='',reservation_id='r-j2n2s8mj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virt
io',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-549107942',owner_user_name='tempest-ServersNegativeTestJSON-549107942-project-member',shelved_at='2025-11-25T16:35:35.720387',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='6d9352ea-7650-4e12-a1b0-1f5e5bc16789'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:35:47Z,user_data=None,user_id='b228702c02db4cb69105bb4c939c15d7',uuid=a3d5d205-98f0-4820-a96c-7f3e59d0cdd9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "660536bc-d4bf-4a4b-9515-06043951c25e", "address": "fa:16:3e:10:46:64", "network": {"id": "3960d4c5-60d7-49e3-b26d-f1317dd96f9f", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1770094643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d7c4dbc1eb44f39aa7ccb9b6363e554", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660536bc-d4", "ovs_interfaceid": "660536bc-d4bf-4a4b-9515-06043951c25e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.294 254096 DEBUG nova.network.os_vif_util [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Converting VIF {"id": "660536bc-d4bf-4a4b-9515-06043951c25e", "address": "fa:16:3e:10:46:64", "network": {"id": "3960d4c5-60d7-49e3-b26d-f1317dd96f9f", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1770094643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d7c4dbc1eb44f39aa7ccb9b6363e554", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660536bc-d4", "ovs_interfaceid": "660536bc-d4bf-4a4b-9515-06043951c25e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.295 254096 DEBUG nova.network.os_vif_util [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:46:64,bridge_name='br-int',has_traffic_filtering=True,id=660536bc-d4bf-4a4b-9515-06043951c25e,network=Network(3960d4c5-60d7-49e3-b26d-f1317dd96f9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap660536bc-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.297 254096 DEBUG nova.objects.instance [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lazy-loading 'pci_devices' on Instance uuid a3d5d205-98f0-4820-a96c-7f3e59d0cdd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.308 254096 DEBUG nova.virt.libvirt.driver [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:35:53 compute-0 nova_compute[254092]:   <uuid>a3d5d205-98f0-4820-a96c-7f3e59d0cdd9</uuid>
Nov 25 16:35:53 compute-0 nova_compute[254092]:   <name>instance-00000025</name>
Nov 25 16:35:53 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:35:53 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:35:53 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:35:53 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:       <nova:name>tempest-ServersNegativeTestJSON-server-2038779180</nova:name>
Nov 25 16:35:53 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:35:52</nova:creationTime>
Nov 25 16:35:53 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:35:53 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:35:53 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:35:53 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:35:53 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:35:53 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:35:53 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:35:53 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:35:53 compute-0 nova_compute[254092]:         <nova:user uuid="b228702c02db4cb69105bb4c939c15d7">tempest-ServersNegativeTestJSON-549107942-project-member</nova:user>
Nov 25 16:35:53 compute-0 nova_compute[254092]:         <nova:project uuid="2d7c4dbc1eb44f39aa7ccb9b6363e554">tempest-ServersNegativeTestJSON-549107942</nova:project>
Nov 25 16:35:53 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:35:53 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="6d9352ea-7650-4e12-a1b0-1f5e5bc16789"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:35:53 compute-0 nova_compute[254092]:         <nova:port uuid="660536bc-d4bf-4a4b-9515-06043951c25e">
Nov 25 16:35:53 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:35:53 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:35:53 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:35:53 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:35:53 compute-0 nova_compute[254092]:     <system>
Nov 25 16:35:53 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:35:53 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:35:53 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:35:53 compute-0 nova_compute[254092]:       <entry name="serial">a3d5d205-98f0-4820-a96c-7f3e59d0cdd9</entry>
Nov 25 16:35:53 compute-0 nova_compute[254092]:       <entry name="uuid">a3d5d205-98f0-4820-a96c-7f3e59d0cdd9</entry>
Nov 25 16:35:53 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     </system>
Nov 25 16:35:53 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:35:53 compute-0 nova_compute[254092]:   <os>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:   </os>
Nov 25 16:35:53 compute-0 nova_compute[254092]:   <features>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:   </features>
Nov 25 16:35:53 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:35:53 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:35:53 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:35:53 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:35:53 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:35:53 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/a3d5d205-98f0-4820-a96c-7f3e59d0cdd9_disk">
Nov 25 16:35:53 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:       </source>
Nov 25 16:35:53 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:35:53 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:35:53 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:35:53 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/a3d5d205-98f0-4820-a96c-7f3e59d0cdd9_disk.config">
Nov 25 16:35:53 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:       </source>
Nov 25 16:35:53 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:35:53 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:35:53 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:35:53 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:10:46:64"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:       <target dev="tap660536bc-d4"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:35:53 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/a3d5d205-98f0-4820-a96c-7f3e59d0cdd9/console.log" append="off"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     <video>
Nov 25 16:35:53 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     </video>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     <input type="keyboard" bus="usb"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:35:53 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:35:53 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:35:53 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:35:53 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:35:53 compute-0 nova_compute[254092]: </domain>
Nov 25 16:35:53 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.309 254096 DEBUG nova.compute.manager [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Preparing to wait for external event network-vif-plugged-660536bc-d4bf-4a4b-9515-06043951c25e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.309 254096 DEBUG oslo_concurrency.lockutils [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Acquiring lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.309 254096 DEBUG oslo_concurrency.lockutils [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.309 254096 DEBUG oslo_concurrency.lockutils [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.310 254096 DEBUG nova.virt.libvirt.vif [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:33:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-2038779180',display_name='tempest-ServersNegativeTestJSON-server-2038779180',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-2038779180',id=37,image_ref='6d9352ea-7650-4e12-a1b0-1f5e5bc16789',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:33:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='2d7c4dbc1eb44f39aa7ccb9b6363e554',ramdisk_id='',reservation_id='r-j2n2s8mj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_m
odel='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-549107942',owner_user_name='tempest-ServersNegativeTestJSON-549107942-project-member',shelved_at='2025-11-25T16:35:35.720387',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='6d9352ea-7650-4e12-a1b0-1f5e5bc16789'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:35:47Z,user_data=None,user_id='b228702c02db4cb69105bb4c939c15d7',uuid=a3d5d205-98f0-4820-a96c-7f3e59d0cdd9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "660536bc-d4bf-4a4b-9515-06043951c25e", "address": "fa:16:3e:10:46:64", "network": {"id": "3960d4c5-60d7-49e3-b26d-f1317dd96f9f", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1770094643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d7c4dbc1eb44f39aa7ccb9b6363e554", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660536bc-d4", "ovs_interfaceid": "660536bc-d4bf-4a4b-9515-06043951c25e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.310 254096 DEBUG nova.network.os_vif_util [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Converting VIF {"id": "660536bc-d4bf-4a4b-9515-06043951c25e", "address": "fa:16:3e:10:46:64", "network": {"id": "3960d4c5-60d7-49e3-b26d-f1317dd96f9f", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1770094643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d7c4dbc1eb44f39aa7ccb9b6363e554", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660536bc-d4", "ovs_interfaceid": "660536bc-d4bf-4a4b-9515-06043951c25e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.311 254096 DEBUG nova.network.os_vif_util [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:46:64,bridge_name='br-int',has_traffic_filtering=True,id=660536bc-d4bf-4a4b-9515-06043951c25e,network=Network(3960d4c5-60d7-49e3-b26d-f1317dd96f9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap660536bc-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.311 254096 DEBUG os_vif [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:46:64,bridge_name='br-int',has_traffic_filtering=True,id=660536bc-d4bf-4a4b-9515-06043951c25e,network=Network(3960d4c5-60d7-49e3-b26d-f1317dd96f9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap660536bc-d4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.311 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.312 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.312 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.318 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.318 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap660536bc-d4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.319 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap660536bc-d4, col_values=(('external_ids', {'iface-id': '660536bc-d4bf-4a4b-9515-06043951c25e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:10:46:64', 'vm-uuid': 'a3d5d205-98f0-4820-a96c-7f3e59d0cdd9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.321 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:53 compute-0 NetworkManager[48891]: <info>  [1764088553.3222] manager: (tap660536bc-d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/167)
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.324 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.333 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.334 254096 INFO os_vif [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:46:64,bridge_name='br-int',has_traffic_filtering=True,id=660536bc-d4bf-4a4b-9515-06043951c25e,network=Network(3960d4c5-60d7-49e3-b26d-f1317dd96f9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap660536bc-d4')
Nov 25 16:35:53 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Nov 25 16:35:53 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Nov 25 16:35:53 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Nov 25 16:35:53 compute-0 ceph-mon[74985]: osdmap e182: 3 total, 3 up, 3 in
Nov 25 16:35:53 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1276632678' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:35:53 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1487872049' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:35:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1470: 321 pgs: 321 active+clean; 246 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 24 KiB/s wr, 104 op/s
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.399 254096 DEBUG nova.virt.libvirt.driver [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.399 254096 DEBUG nova.virt.libvirt.driver [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.400 254096 DEBUG nova.virt.libvirt.driver [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] No VIF found with MAC fa:16:3e:10:46:64, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.400 254096 INFO nova.virt.libvirt.driver [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Using config drive
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.421 254096 DEBUG nova.storage.rbd_utils [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] rbd image a3d5d205-98f0-4820-a96c-7f3e59d0cdd9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.437 254096 DEBUG nova.objects.instance [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lazy-loading 'ec2_ids' on Instance uuid a3d5d205-98f0-4820-a96c-7f3e59d0cdd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.471 254096 DEBUG nova.objects.instance [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lazy-loading 'keypairs' on Instance uuid a3d5d205-98f0-4820-a96c-7f3e59d0cdd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.598 254096 DEBUG nova.compute.manager [req-88c1d8ae-592a-4175-bd08-cd6c5947528d req-d1af615d-30da-42ee-9783-a434c5c79f1b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Received event network-vif-unplugged-84ab0426-0174-4297-bb2b-e5964e453530 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.599 254096 DEBUG oslo_concurrency.lockutils [req-88c1d8ae-592a-4175-bd08-cd6c5947528d req-d1af615d-30da-42ee-9783-a434c5c79f1b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "270ad7f6-74d4-4c29-9856-77768f170789-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.599 254096 DEBUG oslo_concurrency.lockutils [req-88c1d8ae-592a-4175-bd08-cd6c5947528d req-d1af615d-30da-42ee-9783-a434c5c79f1b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "270ad7f6-74d4-4c29-9856-77768f170789-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.599 254096 DEBUG oslo_concurrency.lockutils [req-88c1d8ae-592a-4175-bd08-cd6c5947528d req-d1af615d-30da-42ee-9783-a434c5c79f1b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "270ad7f6-74d4-4c29-9856-77768f170789-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.600 254096 DEBUG nova.compute.manager [req-88c1d8ae-592a-4175-bd08-cd6c5947528d req-d1af615d-30da-42ee-9783-a434c5c79f1b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] No waiting events found dispatching network-vif-unplugged-84ab0426-0174-4297-bb2b-e5964e453530 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.600 254096 DEBUG nova.compute.manager [req-88c1d8ae-592a-4175-bd08-cd6c5947528d req-d1af615d-30da-42ee-9783-a434c5c79f1b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Received event network-vif-unplugged-84ab0426-0174-4297-bb2b-e5964e453530 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.600 254096 DEBUG nova.compute.manager [req-88c1d8ae-592a-4175-bd08-cd6c5947528d req-d1af615d-30da-42ee-9783-a434c5c79f1b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Received event network-vif-plugged-84ab0426-0174-4297-bb2b-e5964e453530 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.600 254096 DEBUG oslo_concurrency.lockutils [req-88c1d8ae-592a-4175-bd08-cd6c5947528d req-d1af615d-30da-42ee-9783-a434c5c79f1b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "270ad7f6-74d4-4c29-9856-77768f170789-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.600 254096 DEBUG oslo_concurrency.lockutils [req-88c1d8ae-592a-4175-bd08-cd6c5947528d req-d1af615d-30da-42ee-9783-a434c5c79f1b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "270ad7f6-74d4-4c29-9856-77768f170789-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.601 254096 DEBUG oslo_concurrency.lockutils [req-88c1d8ae-592a-4175-bd08-cd6c5947528d req-d1af615d-30da-42ee-9783-a434c5c79f1b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "270ad7f6-74d4-4c29-9856-77768f170789-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.601 254096 DEBUG nova.compute.manager [req-88c1d8ae-592a-4175-bd08-cd6c5947528d req-d1af615d-30da-42ee-9783-a434c5c79f1b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] No waiting events found dispatching network-vif-plugged-84ab0426-0174-4297-bb2b-e5964e453530 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.601 254096 WARNING nova.compute.manager [req-88c1d8ae-592a-4175-bd08-cd6c5947528d req-d1af615d-30da-42ee-9783-a434c5c79f1b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Received unexpected event network-vif-plugged-84ab0426-0174-4297-bb2b-e5964e453530 for instance with vm_state active and task_state deleting.
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.904 254096 INFO nova.virt.libvirt.driver [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Creating config drive at /var/lib/nova/instances/a3d5d205-98f0-4820-a96c-7f3e59d0cdd9/disk.config
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.908 254096 DEBUG oslo_concurrency.processutils [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a3d5d205-98f0-4820-a96c-7f3e59d0cdd9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz8mxxs3d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.990 254096 DEBUG nova.network.neutron [-] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.995 254096 DEBUG nova.compute.manager [req-571067a2-2f2d-4645-b48c-8ba867b8d910 req-00a679ac-012c-4af7-99b2-16d38519c41c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Received event network-vif-deleted-84ab0426-0174-4297-bb2b-e5964e453530 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.995 254096 INFO nova.compute.manager [req-571067a2-2f2d-4645-b48c-8ba867b8d910 req-00a679ac-012c-4af7-99b2-16d38519c41c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Neutron deleted interface 84ab0426-0174-4297-bb2b-e5964e453530; detaching it from the instance and deleting it from the info cache
Nov 25 16:35:53 compute-0 nova_compute[254092]: 2025-11-25 16:35:53.995 254096 DEBUG nova.network.neutron [req-571067a2-2f2d-4645-b48c-8ba867b8d910 req-00a679ac-012c-4af7-99b2-16d38519c41c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:35:54 compute-0 nova_compute[254092]: 2025-11-25 16:35:54.033 254096 INFO nova.compute.manager [-] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Took 1.08 seconds to deallocate network for instance.
Nov 25 16:35:54 compute-0 nova_compute[254092]: 2025-11-25 16:35:54.040 254096 DEBUG nova.compute.manager [req-571067a2-2f2d-4645-b48c-8ba867b8d910 req-00a679ac-012c-4af7-99b2-16d38519c41c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Detach interface failed, port_id=84ab0426-0174-4297-bb2b-e5964e453530, reason: Instance 270ad7f6-74d4-4c29-9856-77768f170789 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 25 16:35:54 compute-0 nova_compute[254092]: 2025-11-25 16:35:54.051 254096 DEBUG oslo_concurrency.processutils [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a3d5d205-98f0-4820-a96c-7f3e59d0cdd9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz8mxxs3d" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:35:54 compute-0 nova_compute[254092]: 2025-11-25 16:35:54.079 254096 DEBUG nova.storage.rbd_utils [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] rbd image a3d5d205-98f0-4820-a96c-7f3e59d0cdd9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:35:54 compute-0 nova_compute[254092]: 2025-11-25 16:35:54.083 254096 DEBUG oslo_concurrency.processutils [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a3d5d205-98f0-4820-a96c-7f3e59d0cdd9/disk.config a3d5d205-98f0-4820-a96c-7f3e59d0cdd9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:35:54 compute-0 nova_compute[254092]: 2025-11-25 16:35:54.124 254096 DEBUG oslo_concurrency.lockutils [None req-ee8bc4cc-b86b-42bd-9e56-62c631f69005 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:54 compute-0 nova_compute[254092]: 2025-11-25 16:35:54.125 254096 DEBUG oslo_concurrency.lockutils [None req-ee8bc4cc-b86b-42bd-9e56-62c631f69005 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:35:54 compute-0 ceph-mon[74985]: osdmap e183: 3 total, 3 up, 3 in
Nov 25 16:35:54 compute-0 ceph-mon[74985]: pgmap v1470: 321 pgs: 321 active+clean; 246 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 24 KiB/s wr, 104 op/s
Nov 25 16:35:54 compute-0 nova_compute[254092]: 2025-11-25 16:35:54.869 254096 DEBUG oslo_concurrency.processutils [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a3d5d205-98f0-4820-a96c-7f3e59d0cdd9/disk.config a3d5d205-98f0-4820-a96c-7f3e59d0cdd9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.786s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:35:54 compute-0 nova_compute[254092]: 2025-11-25 16:35:54.870 254096 INFO nova.virt.libvirt.driver [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Deleting local config drive /var/lib/nova/instances/a3d5d205-98f0-4820-a96c-7f3e59d0cdd9/disk.config because it was imported into RBD.
Nov 25 16:35:54 compute-0 nova_compute[254092]: 2025-11-25 16:35:54.885 254096 DEBUG oslo_concurrency.processutils [None req-ee8bc4cc-b86b-42bd-9e56-62c631f69005 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:35:54 compute-0 NetworkManager[48891]: <info>  [1764088554.9373] manager: (tap660536bc-d4): new Tun device (/org/freedesktop/NetworkManager/Devices/168)
Nov 25 16:35:54 compute-0 kernel: tap660536bc-d4: entered promiscuous mode
Nov 25 16:35:54 compute-0 systemd-udevd[306188]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:35:54 compute-0 NetworkManager[48891]: <info>  [1764088554.9773] device (tap660536bc-d4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:35:54 compute-0 NetworkManager[48891]: <info>  [1764088554.9784] device (tap660536bc-d4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:35:54 compute-0 nova_compute[254092]: 2025-11-25 16:35:54.977 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:54 compute-0 ovn_controller[153477]: 2025-11-25T16:35:54Z|00385|binding|INFO|Claiming lport 660536bc-d4bf-4a4b-9515-06043951c25e for this chassis.
Nov 25 16:35:54 compute-0 ovn_controller[153477]: 2025-11-25T16:35:54Z|00386|binding|INFO|660536bc-d4bf-4a4b-9515-06043951c25e: Claiming fa:16:3e:10:46:64 10.100.0.8
Nov 25 16:35:54 compute-0 nova_compute[254092]: 2025-11-25 16:35:54.985 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:55 compute-0 systemd-machined[216343]: New machine qemu-53-instance-00000025.
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:55.013 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:46:64 10.100.0.8'], port_security=['fa:16:3e:10:46:64 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'a3d5d205-98f0-4820-a96c-7f3e59d0cdd9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3960d4c5-60d7-49e3-b26d-f1317dd96f9f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d7c4dbc1eb44f39aa7ccb9b6363e554', 'neutron:revision_number': '7', 'neutron:security_group_ids': '11c41536-eaec-4c18-861e-ff1c71d0a6fa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9a9fa8a-e323-43f7-a155-b2b5010363da, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=660536bc-d4bf-4a4b-9515-06043951c25e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:55.014 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 660536bc-d4bf-4a4b-9515-06043951c25e in datapath 3960d4c5-60d7-49e3-b26d-f1317dd96f9f bound to our chassis
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:55.016 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3960d4c5-60d7-49e3-b26d-f1317dd96f9f
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:55.031 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b00d1f3b-ca76-453c-8c46-605dbd469570]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:55.032 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3960d4c5-61 in ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:55.034 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3960d4c5-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:55.034 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b9d36b58-798c-4563-a1a3-60af39928978]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:55.036 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a43ca2ab-c8a0-4353-b62a-b907cf64ce77]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:55.055 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[62376b1a-f0ad-451f-ba5f-77a994d12c83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:55 compute-0 nova_compute[254092]: 2025-11-25 16:35:55.066 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:55 compute-0 ovn_controller[153477]: 2025-11-25T16:35:55Z|00387|binding|INFO|Setting lport 660536bc-d4bf-4a4b-9515-06043951c25e ovn-installed in OVS
Nov 25 16:35:55 compute-0 ovn_controller[153477]: 2025-11-25T16:35:55Z|00388|binding|INFO|Setting lport 660536bc-d4bf-4a4b-9515-06043951c25e up in Southbound
Nov 25 16:35:55 compute-0 nova_compute[254092]: 2025-11-25 16:35:55.071 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:55 compute-0 systemd[1]: Started Virtual Machine qemu-53-instance-00000025.
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:55.087 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[01c42cc1-6077-4086-af6d-ff61d0636c16]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:55.134 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[27d2d283-2627-4041-bc9f-85fd29a2f168]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:55 compute-0 NetworkManager[48891]: <info>  [1764088555.1460] manager: (tap3960d4c5-60): new Veth device (/org/freedesktop/NetworkManager/Devices/169)
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:55.144 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[09e8f2e9-bca5-4f88-801e-d0e837800081]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:55.185 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[18856f76-f8e1-4de9-b7f7-ebfed94651c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:55.191 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8f107120-3fad-4247-81ce-03a40953e3a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 16:35:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2739608322' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:35:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 16:35:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2739608322' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:35:55 compute-0 NetworkManager[48891]: <info>  [1764088555.2256] device (tap3960d4c5-60): carrier: link connected
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:55.236 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[abe2740a-ad72-463a-b51c-16349efb4ed4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:55.257 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5111aa6d-7c0d-461a-b8d5-771a8fcdf5ae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3960d4c5-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:42:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 112], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 501281, 'reachable_time': 16853, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306497, 'error': None, 'target': 'ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:55.276 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fa62b3dd-173d-435a-ad54-6b3bd03a98fa]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8f:428f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 501281, 'tstamp': 501281}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306498, 'error': None, 'target': 'ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:55.297 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a330f639-9f28-4101-a137-a9afd94bb197]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3960d4c5-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:42:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 112], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 501281, 'reachable_time': 16853, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 306499, 'error': None, 'target': 'ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:55.339 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5222c337-2b17-4b49-b3dd-0c69ae053e83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:35:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1399684866' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:35:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1471: 321 pgs: 321 active+clean; 273 MiB data, 590 MiB used, 59 GiB / 60 GiB avail; 8.6 MiB/s rd, 7.9 MiB/s wr, 263 op/s
Nov 25 16:35:55 compute-0 nova_compute[254092]: 2025-11-25 16:35:55.400 254096 DEBUG oslo_concurrency.processutils [None req-ee8bc4cc-b86b-42bd-9e56-62c631f69005 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:55.400 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8fec2e44-6a55-4375-a5a6-2ccf1ca6c70a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:55.402 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3960d4c5-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:55.402 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:55.403 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3960d4c5-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:55 compute-0 nova_compute[254092]: 2025-11-25 16:35:55.404 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:55 compute-0 NetworkManager[48891]: <info>  [1764088555.4055] manager: (tap3960d4c5-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/170)
Nov 25 16:35:55 compute-0 nova_compute[254092]: 2025-11-25 16:35:55.409 254096 DEBUG nova.compute.provider_tree [None req-ee8bc4cc-b86b-42bd-9e56-62c631f69005 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:35:55 compute-0 kernel: tap3960d4c5-60: entered promiscuous mode
Nov 25 16:35:55 compute-0 nova_compute[254092]: 2025-11-25 16:35:55.412 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:55.413 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3960d4c5-60, col_values=(('external_ids', {'iface-id': '9dd2e935-32e0-43d1-8a28-23e6ab045e91'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:55 compute-0 nova_compute[254092]: 2025-11-25 16:35:55.414 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:55 compute-0 ovn_controller[153477]: 2025-11-25T16:35:55Z|00389|binding|INFO|Releasing lport 9dd2e935-32e0-43d1-8a28-23e6ab045e91 from this chassis (sb_readonly=0)
Nov 25 16:35:55 compute-0 nova_compute[254092]: 2025-11-25 16:35:55.426 254096 DEBUG nova.scheduler.client.report [None req-ee8bc4cc-b86b-42bd-9e56-62c631f69005 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:35:55 compute-0 nova_compute[254092]: 2025-11-25 16:35:55.436 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:55.438 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3960d4c5-60d7-49e3-b26d-f1317dd96f9f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3960d4c5-60d7-49e3-b26d-f1317dd96f9f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:55.439 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[866abd54-d11b-42f3-b4d6-f7c4f7a61985]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:55.440 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-3960d4c5-60d7-49e3-b26d-f1317dd96f9f
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/3960d4c5-60d7-49e3-b26d-f1317dd96f9f.pid.haproxy
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 3960d4c5-60d7-49e3-b26d-f1317dd96f9f
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:35:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:35:55.441 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f', 'env', 'PROCESS_TAG=haproxy-3960d4c5-60d7-49e3-b26d-f1317dd96f9f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3960d4c5-60d7-49e3-b26d-f1317dd96f9f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:35:55 compute-0 nova_compute[254092]: 2025-11-25 16:35:55.508 254096 DEBUG oslo_concurrency.lockutils [None req-ee8bc4cc-b86b-42bd-9e56-62c631f69005 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.383s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:55 compute-0 nova_compute[254092]: 2025-11-25 16:35:55.710 254096 INFO nova.scheduler.client.report [None req-ee8bc4cc-b86b-42bd-9e56-62c631f69005 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Deleted allocations for instance 270ad7f6-74d4-4c29-9856-77768f170789
Nov 25 16:35:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/2739608322' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:35:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/2739608322' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:35:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1399684866' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:35:55 compute-0 nova_compute[254092]: 2025-11-25 16:35:55.823 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088540.8143435, 8457c008-75d8-4c24-9ae2-6b8a526312ce => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:35:55 compute-0 nova_compute[254092]: 2025-11-25 16:35:55.824 254096 INFO nova.compute.manager [-] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] VM Stopped (Lifecycle Event)
Nov 25 16:35:55 compute-0 nova_compute[254092]: 2025-11-25 16:35:55.832 254096 DEBUG oslo_concurrency.lockutils [None req-ee8bc4cc-b86b-42bd-9e56-62c631f69005 650f3d90afcd4e85b7042981dc353a2d fe7901baa563491c8609089aa4334bf1 - - default default] Lock "270ad7f6-74d4-4c29-9856-77768f170789" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.933s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:55 compute-0 nova_compute[254092]: 2025-11-25 16:35:55.847 254096 DEBUG nova.compute.manager [None req-e60c1d23-91d9-464e-952f-e68afe252e80 - - - - - -] [instance: 8457c008-75d8-4c24-9ae2-6b8a526312ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:35:55 compute-0 nova_compute[254092]: 2025-11-25 16:35:55.912 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088555.9114428, a3d5d205-98f0-4820-a96c-7f3e59d0cdd9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:35:55 compute-0 nova_compute[254092]: 2025-11-25 16:35:55.912 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] VM Started (Lifecycle Event)
Nov 25 16:35:55 compute-0 podman[306573]: 2025-11-25 16:35:55.827477822 +0000 UTC m=+0.031250907 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:35:55 compute-0 nova_compute[254092]: 2025-11-25 16:35:55.931 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:35:55 compute-0 nova_compute[254092]: 2025-11-25 16:35:55.936 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088555.912402, a3d5d205-98f0-4820-a96c-7f3e59d0cdd9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:35:55 compute-0 nova_compute[254092]: 2025-11-25 16:35:55.936 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] VM Paused (Lifecycle Event)
Nov 25 16:35:55 compute-0 podman[306573]: 2025-11-25 16:35:55.949722404 +0000 UTC m=+0.153495469 container create 39d8001716f9ce725686deff3985192a2e179870858b401c7c640e7fa56e8095 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 16:35:55 compute-0 nova_compute[254092]: 2025-11-25 16:35:55.955 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:35:55 compute-0 nova_compute[254092]: 2025-11-25 16:35:55.961 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:35:55 compute-0 nova_compute[254092]: 2025-11-25 16:35:55.964 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088540.96294, 7a89b21c-79db-4e5f-88fd-35557c8c15ff => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:35:55 compute-0 nova_compute[254092]: 2025-11-25 16:35:55.964 254096 INFO nova.compute.manager [-] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] VM Stopped (Lifecycle Event)
Nov 25 16:35:55 compute-0 nova_compute[254092]: 2025-11-25 16:35:55.987 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:35:55 compute-0 nova_compute[254092]: 2025-11-25 16:35:55.988 254096 DEBUG nova.compute.manager [None req-4ecadcae-8fd1-487e-a391-0ca8ce254db6 - - - - - -] [instance: 7a89b21c-79db-4e5f-88fd-35557c8c15ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:35:55 compute-0 systemd[1]: Started libpod-conmon-39d8001716f9ce725686deff3985192a2e179870858b401c7c640e7fa56e8095.scope.
Nov 25 16:35:56 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:35:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59278409eece4f268f905d74d2f5e9016da291c00464cddcbac0e1a07e35273a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:35:56 compute-0 podman[306573]: 2025-11-25 16:35:56.056056734 +0000 UTC m=+0.259829819 container init 39d8001716f9ce725686deff3985192a2e179870858b401c7c640e7fa56e8095 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 25 16:35:56 compute-0 podman[306573]: 2025-11-25 16:35:56.068789738 +0000 UTC m=+0.272562793 container start 39d8001716f9ce725686deff3985192a2e179870858b401c7c640e7fa56e8095 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 16:35:56 compute-0 nova_compute[254092]: 2025-11-25 16:35:56.076 254096 DEBUG nova.compute.manager [req-bbf720de-46f3-4ec8-ae42-822f2c3b1c4d req-542a0557-cfc9-4ad1-b3aa-f8ecacc1c300 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Received event network-vif-plugged-660536bc-d4bf-4a4b-9515-06043951c25e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:56 compute-0 nova_compute[254092]: 2025-11-25 16:35:56.077 254096 DEBUG oslo_concurrency.lockutils [req-bbf720de-46f3-4ec8-ae42-822f2c3b1c4d req-542a0557-cfc9-4ad1-b3aa-f8ecacc1c300 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:56 compute-0 nova_compute[254092]: 2025-11-25 16:35:56.077 254096 DEBUG oslo_concurrency.lockutils [req-bbf720de-46f3-4ec8-ae42-822f2c3b1c4d req-542a0557-cfc9-4ad1-b3aa-f8ecacc1c300 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:56 compute-0 nova_compute[254092]: 2025-11-25 16:35:56.077 254096 DEBUG oslo_concurrency.lockutils [req-bbf720de-46f3-4ec8-ae42-822f2c3b1c4d req-542a0557-cfc9-4ad1-b3aa-f8ecacc1c300 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:56 compute-0 nova_compute[254092]: 2025-11-25 16:35:56.078 254096 DEBUG nova.compute.manager [req-bbf720de-46f3-4ec8-ae42-822f2c3b1c4d req-542a0557-cfc9-4ad1-b3aa-f8ecacc1c300 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Processing event network-vif-plugged-660536bc-d4bf-4a4b-9515-06043951c25e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:35:56 compute-0 nova_compute[254092]: 2025-11-25 16:35:56.078 254096 DEBUG nova.compute.manager [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:35:56 compute-0 nova_compute[254092]: 2025-11-25 16:35:56.081 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088556.0815015, a3d5d205-98f0-4820-a96c-7f3e59d0cdd9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:35:56 compute-0 nova_compute[254092]: 2025-11-25 16:35:56.082 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] VM Resumed (Lifecycle Event)
Nov 25 16:35:56 compute-0 neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f[306596]: [NOTICE]   (306600) : New worker (306602) forked
Nov 25 16:35:56 compute-0 neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f[306596]: [NOTICE]   (306600) : Loading success.
Nov 25 16:35:56 compute-0 nova_compute[254092]: 2025-11-25 16:35:56.102 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:35:56 compute-0 nova_compute[254092]: 2025-11-25 16:35:56.103 254096 DEBUG nova.virt.libvirt.driver [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:35:56 compute-0 nova_compute[254092]: 2025-11-25 16:35:56.106 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:35:56 compute-0 nova_compute[254092]: 2025-11-25 16:35:56.108 254096 INFO nova.virt.libvirt.driver [-] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Instance spawned successfully.
Nov 25 16:35:56 compute-0 nova_compute[254092]: 2025-11-25 16:35:56.135 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:35:56 compute-0 nova_compute[254092]: 2025-11-25 16:35:56.194 254096 INFO nova.virt.libvirt.driver [None req-d10f4cb2-70c5-433d-b4f5-8e089da41d21 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Snapshot image upload complete
Nov 25 16:35:56 compute-0 nova_compute[254092]: 2025-11-25 16:35:56.195 254096 INFO nova.compute.manager [None req-d10f4cb2-70c5-433d-b4f5-8e089da41d21 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Took 5.52 seconds to snapshot the instance on the hypervisor.
Nov 25 16:35:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e183 do_prune osdmap full prune enabled
Nov 25 16:35:56 compute-0 ceph-mon[74985]: pgmap v1471: 321 pgs: 321 active+clean; 273 MiB data, 590 MiB used, 59 GiB / 60 GiB avail; 8.6 MiB/s rd, 7.9 MiB/s wr, 263 op/s
Nov 25 16:35:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e184 e184: 3 total, 3 up, 3 in
Nov 25 16:35:56 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e184: 3 total, 3 up, 3 in
Nov 25 16:35:57 compute-0 nova_compute[254092]: 2025-11-25 16:35:57.138 254096 DEBUG nova.compute.manager [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:35:57 compute-0 nova_compute[254092]: 2025-11-25 16:35:57.212 254096 DEBUG oslo_concurrency.lockutils [None req-70db8d51-adbf-4e93-9e59-6f11a94ec6e0 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 9.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1473: 321 pgs: 321 active+clean; 292 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 11 MiB/s wr, 330 op/s
Nov 25 16:35:57 compute-0 ceph-mon[74985]: osdmap e184: 3 total, 3 up, 3 in
Nov 25 16:35:57 compute-0 nova_compute[254092]: 2025-11-25 16:35:57.881 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:58 compute-0 nova_compute[254092]: 2025-11-25 16:35:58.171 254096 DEBUG nova.compute.manager [req-f7c7ba7f-6059-4773-954d-a67ae65a5b14 req-b1e5f362-3fc3-4d54-9ed9-33cbcf12723c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Received event network-vif-plugged-660536bc-d4bf-4a4b-9515-06043951c25e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:35:58 compute-0 nova_compute[254092]: 2025-11-25 16:35:58.172 254096 DEBUG oslo_concurrency.lockutils [req-f7c7ba7f-6059-4773-954d-a67ae65a5b14 req-b1e5f362-3fc3-4d54-9ed9-33cbcf12723c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:58 compute-0 nova_compute[254092]: 2025-11-25 16:35:58.172 254096 DEBUG oslo_concurrency.lockutils [req-f7c7ba7f-6059-4773-954d-a67ae65a5b14 req-b1e5f362-3fc3-4d54-9ed9-33cbcf12723c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:58 compute-0 nova_compute[254092]: 2025-11-25 16:35:58.172 254096 DEBUG oslo_concurrency.lockutils [req-f7c7ba7f-6059-4773-954d-a67ae65a5b14 req-b1e5f362-3fc3-4d54-9ed9-33cbcf12723c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:58 compute-0 nova_compute[254092]: 2025-11-25 16:35:58.172 254096 DEBUG nova.compute.manager [req-f7c7ba7f-6059-4773-954d-a67ae65a5b14 req-b1e5f362-3fc3-4d54-9ed9-33cbcf12723c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] No waiting events found dispatching network-vif-plugged-660536bc-d4bf-4a4b-9515-06043951c25e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:35:58 compute-0 nova_compute[254092]: 2025-11-25 16:35:58.173 254096 WARNING nova.compute.manager [req-f7c7ba7f-6059-4773-954d-a67ae65a5b14 req-b1e5f362-3fc3-4d54-9ed9-33cbcf12723c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Received unexpected event network-vif-plugged-660536bc-d4bf-4a4b-9515-06043951c25e for instance with vm_state active and task_state None.
Nov 25 16:35:58 compute-0 nova_compute[254092]: 2025-11-25 16:35:58.322 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e184 do_prune osdmap full prune enabled
Nov 25 16:35:58 compute-0 ceph-mon[74985]: pgmap v1473: 321 pgs: 321 active+clean; 292 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 11 MiB/s wr, 330 op/s
Nov 25 16:35:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e185 e185: 3 total, 3 up, 3 in
Nov 25 16:35:58 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e185: 3 total, 3 up, 3 in
Nov 25 16:35:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:35:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e185 do_prune osdmap full prune enabled
Nov 25 16:35:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e186 e186: 3 total, 3 up, 3 in
Nov 25 16:35:59 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e186: 3 total, 3 up, 3 in
Nov 25 16:35:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1476: 321 pgs: 321 active+clean; 292 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 11 MiB/s wr, 330 op/s
Nov 25 16:35:59 compute-0 nova_compute[254092]: 2025-11-25 16:35:59.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:35:59 compute-0 nova_compute[254092]: 2025-11-25 16:35:59.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 16:35:59 compute-0 nova_compute[254092]: 2025-11-25 16:35:59.496 254096 DEBUG oslo_concurrency.lockutils [None req-2e7e1936-0fa4-4fc9-8a9d-1ea154bfbd3a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "1f3bb552-1d70-4ad1-9304-9ed3896db3d4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:59 compute-0 nova_compute[254092]: 2025-11-25 16:35:59.497 254096 DEBUG oslo_concurrency.lockutils [None req-2e7e1936-0fa4-4fc9-8a9d-1ea154bfbd3a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "1f3bb552-1d70-4ad1-9304-9ed3896db3d4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:59 compute-0 nova_compute[254092]: 2025-11-25 16:35:59.497 254096 DEBUG oslo_concurrency.lockutils [None req-2e7e1936-0fa4-4fc9-8a9d-1ea154bfbd3a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "1f3bb552-1d70-4ad1-9304-9ed3896db3d4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:35:59 compute-0 nova_compute[254092]: 2025-11-25 16:35:59.497 254096 DEBUG oslo_concurrency.lockutils [None req-2e7e1936-0fa4-4fc9-8a9d-1ea154bfbd3a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "1f3bb552-1d70-4ad1-9304-9ed3896db3d4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:35:59 compute-0 nova_compute[254092]: 2025-11-25 16:35:59.497 254096 DEBUG oslo_concurrency.lockutils [None req-2e7e1936-0fa4-4fc9-8a9d-1ea154bfbd3a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "1f3bb552-1d70-4ad1-9304-9ed3896db3d4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:35:59 compute-0 nova_compute[254092]: 2025-11-25 16:35:59.498 254096 INFO nova.compute.manager [None req-2e7e1936-0fa4-4fc9-8a9d-1ea154bfbd3a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Terminating instance
Nov 25 16:35:59 compute-0 nova_compute[254092]: 2025-11-25 16:35:59.499 254096 DEBUG nova.compute.manager [None req-2e7e1936-0fa4-4fc9-8a9d-1ea154bfbd3a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:35:59 compute-0 nova_compute[254092]: 2025-11-25 16:35:59.508 254096 INFO nova.virt.libvirt.driver [-] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Instance destroyed successfully.
Nov 25 16:35:59 compute-0 nova_compute[254092]: 2025-11-25 16:35:59.509 254096 DEBUG nova.objects.instance [None req-2e7e1936-0fa4-4fc9-8a9d-1ea154bfbd3a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lazy-loading 'resources' on Instance uuid 1f3bb552-1d70-4ad1-9304-9ed3896db3d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:35:59 compute-0 nova_compute[254092]: 2025-11-25 16:35:59.524 254096 DEBUG nova.virt.libvirt.vif [None req-2e7e1936-0fa4-4fc9-8a9d-1ea154bfbd3a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:35:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-450706863',display_name='tempest-ImagesTestJSON-server-450706863',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-450706863',id=45,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:35:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='8bd01e0913564ac783fae350d6861e24',ramdisk_id='',reservation_id='r-qmu36f3f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ImagesTestJSON-217824554',owner_user_name='tempest-ImagesTestJSON-217824554-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:35:56Z,user_data=None,user_id='75e65df891a54a2caabb073a427430b9',uuid=1f3bb552-1d70-4ad1-9304-9ed3896db3d4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d", "address": "fa:16:3e:cb:92:a2", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1aa5a9a0-4c", "ovs_interfaceid": "1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:35:59 compute-0 nova_compute[254092]: 2025-11-25 16:35:59.525 254096 DEBUG nova.network.os_vif_util [None req-2e7e1936-0fa4-4fc9-8a9d-1ea154bfbd3a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converting VIF {"id": "1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d", "address": "fa:16:3e:cb:92:a2", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1aa5a9a0-4c", "ovs_interfaceid": "1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:35:59 compute-0 nova_compute[254092]: 2025-11-25 16:35:59.526 254096 DEBUG nova.network.os_vif_util [None req-2e7e1936-0fa4-4fc9-8a9d-1ea154bfbd3a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:92:a2,bridge_name='br-int',has_traffic_filtering=True,id=1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1aa5a9a0-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:35:59 compute-0 nova_compute[254092]: 2025-11-25 16:35:59.526 254096 DEBUG os_vif [None req-2e7e1936-0fa4-4fc9-8a9d-1ea154bfbd3a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:92:a2,bridge_name='br-int',has_traffic_filtering=True,id=1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1aa5a9a0-4c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:35:59 compute-0 nova_compute[254092]: 2025-11-25 16:35:59.530 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:59 compute-0 nova_compute[254092]: 2025-11-25 16:35:59.530 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1aa5a9a0-4c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:35:59 compute-0 nova_compute[254092]: 2025-11-25 16:35:59.532 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:59 compute-0 nova_compute[254092]: 2025-11-25 16:35:59.534 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:35:59 compute-0 nova_compute[254092]: 2025-11-25 16:35:59.537 254096 INFO os_vif [None req-2e7e1936-0fa4-4fc9-8a9d-1ea154bfbd3a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:92:a2,bridge_name='br-int',has_traffic_filtering=True,id=1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1aa5a9a0-4c')
Nov 25 16:35:59 compute-0 nova_compute[254092]: 2025-11-25 16:35:59.825 254096 INFO nova.virt.libvirt.driver [None req-2e7e1936-0fa4-4fc9-8a9d-1ea154bfbd3a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Deleting instance files /var/lib/nova/instances/1f3bb552-1d70-4ad1-9304-9ed3896db3d4_del
Nov 25 16:35:59 compute-0 nova_compute[254092]: 2025-11-25 16:35:59.826 254096 INFO nova.virt.libvirt.driver [None req-2e7e1936-0fa4-4fc9-8a9d-1ea154bfbd3a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Deletion of /var/lib/nova/instances/1f3bb552-1d70-4ad1-9304-9ed3896db3d4_del complete
Nov 25 16:35:59 compute-0 ceph-mon[74985]: osdmap e185: 3 total, 3 up, 3 in
Nov 25 16:35:59 compute-0 ceph-mon[74985]: osdmap e186: 3 total, 3 up, 3 in
Nov 25 16:35:59 compute-0 nova_compute[254092]: 2025-11-25 16:35:59.887 254096 INFO nova.compute.manager [None req-2e7e1936-0fa4-4fc9-8a9d-1ea154bfbd3a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Took 0.39 seconds to destroy the instance on the hypervisor.
Nov 25 16:35:59 compute-0 nova_compute[254092]: 2025-11-25 16:35:59.888 254096 DEBUG oslo.service.loopingcall [None req-2e7e1936-0fa4-4fc9-8a9d-1ea154bfbd3a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:35:59 compute-0 nova_compute[254092]: 2025-11-25 16:35:59.889 254096 DEBUG nova.compute.manager [-] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:35:59 compute-0 nova_compute[254092]: 2025-11-25 16:35:59.889 254096 DEBUG nova.network.neutron [-] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:36:00 compute-0 nova_compute[254092]: 2025-11-25 16:36:00.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:36:00 compute-0 nova_compute[254092]: 2025-11-25 16:36:00.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:36:00 compute-0 nova_compute[254092]: 2025-11-25 16:36:00.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:00 compute-0 nova_compute[254092]: 2025-11-25 16:36:00.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:00 compute-0 nova_compute[254092]: 2025-11-25 16:36:00.523 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:00 compute-0 nova_compute[254092]: 2025-11-25 16:36:00.523 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 16:36:00 compute-0 nova_compute[254092]: 2025-11-25 16:36:00.523 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:36:00 compute-0 podman[306632]: 2025-11-25 16:36:00.635476289 +0000 UTC m=+0.051795765 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:36:00 compute-0 podman[306631]: 2025-11-25 16:36:00.640684039 +0000 UTC m=+0.058809434 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:36:00 compute-0 podman[306633]: 2025-11-25 16:36:00.658590174 +0000 UTC m=+0.072488834 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 16:36:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:36:00 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3433526533' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:36:00 compute-0 nova_compute[254092]: 2025-11-25 16:36:00.948 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:36:00 compute-0 ceph-mon[74985]: pgmap v1476: 321 pgs: 321 active+clean; 292 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 11 MiB/s wr, 330 op/s
Nov 25 16:36:01 compute-0 nova_compute[254092]: 2025-11-25 16:36:01.033 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000025 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:36:01 compute-0 nova_compute[254092]: 2025-11-25 16:36:01.034 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000025 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:36:01 compute-0 nova_compute[254092]: 2025-11-25 16:36:01.177 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:36:01 compute-0 nova_compute[254092]: 2025-11-25 16:36:01.178 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4060MB free_disk=59.92182922363281GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 16:36:01 compute-0 nova_compute[254092]: 2025-11-25 16:36:01.179 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:01 compute-0 nova_compute[254092]: 2025-11-25 16:36:01.179 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:01 compute-0 nova_compute[254092]: 2025-11-25 16:36:01.268 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 1f3bb552-1d70-4ad1-9304-9ed3896db3d4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:36:01 compute-0 nova_compute[254092]: 2025-11-25 16:36:01.269 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance a3d5d205-98f0-4820-a96c-7f3e59d0cdd9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:36:01 compute-0 nova_compute[254092]: 2025-11-25 16:36:01.269 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 16:36:01 compute-0 nova_compute[254092]: 2025-11-25 16:36:01.270 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 16:36:01 compute-0 nova_compute[254092]: 2025-11-25 16:36:01.327 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:36:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1477: 321 pgs: 321 active+clean; 121 MiB data, 512 MiB used, 59 GiB / 60 GiB avail; 9.3 MiB/s rd, 3.5 MiB/s wr, 469 op/s
Nov 25 16:36:01 compute-0 nova_compute[254092]: 2025-11-25 16:36:01.610 254096 DEBUG nova.network.neutron [-] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:36:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:36:01 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1338900085' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:36:01 compute-0 nova_compute[254092]: 2025-11-25 16:36:01.765 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:36:01 compute-0 nova_compute[254092]: 2025-11-25 16:36:01.771 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:36:01 compute-0 nova_compute[254092]: 2025-11-25 16:36:01.785 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:36:01 compute-0 nova_compute[254092]: 2025-11-25 16:36:01.828 254096 INFO nova.compute.manager [-] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Took 1.94 seconds to deallocate network for instance.
Nov 25 16:36:01 compute-0 nova_compute[254092]: 2025-11-25 16:36:01.899 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 16:36:01 compute-0 nova_compute[254092]: 2025-11-25 16:36:01.900 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:01 compute-0 nova_compute[254092]: 2025-11-25 16:36:01.997 254096 DEBUG oslo_concurrency.lockutils [None req-2e7e1936-0fa4-4fc9-8a9d-1ea154bfbd3a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:01 compute-0 nova_compute[254092]: 2025-11-25 16:36:01.998 254096 DEBUG oslo_concurrency.lockutils [None req-2e7e1936-0fa4-4fc9-8a9d-1ea154bfbd3a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:02 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3433526533' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:36:02 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1338900085' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:36:02 compute-0 nova_compute[254092]: 2025-11-25 16:36:02.168 254096 DEBUG oslo_concurrency.processutils [None req-2e7e1936-0fa4-4fc9-8a9d-1ea154bfbd3a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:36:02 compute-0 nova_compute[254092]: 2025-11-25 16:36:02.381 254096 DEBUG nova.compute.manager [req-f5075315-be15-47a7-90ac-a2337079e466 req-8b76d600-8360-47a6-8901-b8c523897948 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Received event network-vif-deleted-1aa5a9a0-4c3e-459f-a437-8aeb90df3a4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:36:02 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:36:02 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3982751841' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:36:02 compute-0 nova_compute[254092]: 2025-11-25 16:36:02.637 254096 DEBUG oslo_concurrency.processutils [None req-2e7e1936-0fa4-4fc9-8a9d-1ea154bfbd3a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:36:02 compute-0 nova_compute[254092]: 2025-11-25 16:36:02.643 254096 DEBUG nova.compute.provider_tree [None req-2e7e1936-0fa4-4fc9-8a9d-1ea154bfbd3a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:36:02 compute-0 nova_compute[254092]: 2025-11-25 16:36:02.656 254096 DEBUG nova.scheduler.client.report [None req-2e7e1936-0fa4-4fc9-8a9d-1ea154bfbd3a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:36:02 compute-0 nova_compute[254092]: 2025-11-25 16:36:02.877 254096 DEBUG oslo_concurrency.lockutils [None req-2e7e1936-0fa4-4fc9-8a9d-1ea154bfbd3a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.879s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:02 compute-0 nova_compute[254092]: 2025-11-25 16:36:02.901 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:36:02 compute-0 nova_compute[254092]: 2025-11-25 16:36:02.901 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:36:02 compute-0 nova_compute[254092]: 2025-11-25 16:36:02.902 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:36:02 compute-0 nova_compute[254092]: 2025-11-25 16:36:02.929 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:02 compute-0 nova_compute[254092]: 2025-11-25 16:36:02.977 254096 INFO nova.scheduler.client.report [None req-2e7e1936-0fa4-4fc9-8a9d-1ea154bfbd3a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Deleted allocations for instance 1f3bb552-1d70-4ad1-9304-9ed3896db3d4
Nov 25 16:36:03 compute-0 ovn_controller[153477]: 2025-11-25T16:36:03Z|00390|binding|INFO|Releasing lport 9dd2e935-32e0-43d1-8a28-23e6ab045e91 from this chassis (sb_readonly=0)
Nov 25 16:36:03 compute-0 nova_compute[254092]: 2025-11-25 16:36:03.182 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:03 compute-0 nova_compute[254092]: 2025-11-25 16:36:03.276 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088548.276095, 1f3bb552-1d70-4ad1-9304-9ed3896db3d4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:36:03 compute-0 nova_compute[254092]: 2025-11-25 16:36:03.277 254096 INFO nova.compute.manager [-] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] VM Stopped (Lifecycle Event)
Nov 25 16:36:03 compute-0 ceph-mon[74985]: pgmap v1477: 321 pgs: 321 active+clean; 121 MiB data, 512 MiB used, 59 GiB / 60 GiB avail; 9.3 MiB/s rd, 3.5 MiB/s wr, 469 op/s
Nov 25 16:36:03 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3982751841' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:36:03 compute-0 nova_compute[254092]: 2025-11-25 16:36:03.387 254096 DEBUG oslo_concurrency.lockutils [None req-2e7e1936-0fa4-4fc9-8a9d-1ea154bfbd3a 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "1f3bb552-1d70-4ad1-9304-9ed3896db3d4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.891s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1478: 321 pgs: 321 active+clean; 121 MiB data, 512 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 6.4 KiB/s wr, 272 op/s
Nov 25 16:36:03 compute-0 nova_compute[254092]: 2025-11-25 16:36:03.850 254096 DEBUG nova.compute.manager [None req-80f9a1ea-381e-4873-a26f-a05f75f58a7f - - - - - -] [instance: 1f3bb552-1d70-4ad1-9304-9ed3896db3d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:36:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:36:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e186 do_prune osdmap full prune enabled
Nov 25 16:36:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e187 e187: 3 total, 3 up, 3 in
Nov 25 16:36:04 compute-0 nova_compute[254092]: 2025-11-25 16:36:04.356 254096 DEBUG oslo_concurrency.lockutils [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "e549d4e8-b824-480b-b81a-83e2ea1eff12" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:04 compute-0 nova_compute[254092]: 2025-11-25 16:36:04.357 254096 DEBUG oslo_concurrency.lockutils [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "e549d4e8-b824-480b-b81a-83e2ea1eff12" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:04 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e187: 3 total, 3 up, 3 in
Nov 25 16:36:04 compute-0 nova_compute[254092]: 2025-11-25 16:36:04.458 254096 DEBUG nova.compute.manager [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:36:04 compute-0 nova_compute[254092]: 2025-11-25 16:36:04.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:36:04 compute-0 nova_compute[254092]: 2025-11-25 16:36:04.494 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:36:04 compute-0 ceph-mon[74985]: pgmap v1478: 321 pgs: 321 active+clean; 121 MiB data, 512 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 6.4 KiB/s wr, 272 op/s
Nov 25 16:36:04 compute-0 nova_compute[254092]: 2025-11-25 16:36:04.533 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:04 compute-0 nova_compute[254092]: 2025-11-25 16:36:04.667 254096 DEBUG oslo_concurrency.lockutils [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:04 compute-0 nova_compute[254092]: 2025-11-25 16:36:04.668 254096 DEBUG oslo_concurrency.lockutils [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:04 compute-0 nova_compute[254092]: 2025-11-25 16:36:04.674 254096 DEBUG nova.virt.hardware [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:36:04 compute-0 nova_compute[254092]: 2025-11-25 16:36:04.675 254096 INFO nova.compute.claims [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:36:04 compute-0 nova_compute[254092]: 2025-11-25 16:36:04.826 254096 DEBUG oslo_concurrency.processutils [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:36:05 compute-0 nova_compute[254092]: 2025-11-25 16:36:05.003 254096 DEBUG nova.objects.instance [None req-a84dd144-db5a-4c7b-88e4-d488bd081666 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lazy-loading 'pci_devices' on Instance uuid a3d5d205-98f0-4820-a96c-7f3e59d0cdd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:36:05 compute-0 nova_compute[254092]: 2025-11-25 16:36:05.029 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088565.0231917, a3d5d205-98f0-4820-a96c-7f3e59d0cdd9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:36:05 compute-0 nova_compute[254092]: 2025-11-25 16:36:05.030 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] VM Paused (Lifecycle Event)
Nov 25 16:36:05 compute-0 nova_compute[254092]: 2025-11-25 16:36:05.062 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:36:05 compute-0 nova_compute[254092]: 2025-11-25 16:36:05.067 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:36:05 compute-0 nova_compute[254092]: 2025-11-25 16:36:05.092 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] During sync_power_state the instance has a pending task (suspending). Skip.
Nov 25 16:36:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:36:05 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4154917831' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:36:05 compute-0 nova_compute[254092]: 2025-11-25 16:36:05.289 254096 DEBUG oslo_concurrency.processutils [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:36:05 compute-0 nova_compute[254092]: 2025-11-25 16:36:05.298 254096 DEBUG nova.compute.provider_tree [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:36:05 compute-0 nova_compute[254092]: 2025-11-25 16:36:05.313 254096 DEBUG nova.scheduler.client.report [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:36:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1480: 321 pgs: 321 active+clean; 121 MiB data, 512 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 6.4 KiB/s wr, 280 op/s
Nov 25 16:36:05 compute-0 nova_compute[254092]: 2025-11-25 16:36:05.684 254096 DEBUG oslo_concurrency.lockutils [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.015s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:05 compute-0 nova_compute[254092]: 2025-11-25 16:36:05.684 254096 DEBUG nova.compute.manager [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:36:06 compute-0 nova_compute[254092]: 2025-11-25 16:36:06.121 254096 DEBUG nova.compute.manager [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:36:06 compute-0 nova_compute[254092]: 2025-11-25 16:36:06.122 254096 DEBUG nova.network.neutron [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:36:06 compute-0 ceph-mon[74985]: osdmap e187: 3 total, 3 up, 3 in
Nov 25 16:36:06 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4154917831' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:36:06 compute-0 nova_compute[254092]: 2025-11-25 16:36:06.172 254096 INFO nova.virt.libvirt.driver [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:36:06 compute-0 nova_compute[254092]: 2025-11-25 16:36:06.303 254096 DEBUG nova.compute.manager [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:36:06 compute-0 nova_compute[254092]: 2025-11-25 16:36:06.621 254096 DEBUG nova.compute.manager [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:36:06 compute-0 nova_compute[254092]: 2025-11-25 16:36:06.622 254096 DEBUG nova.virt.libvirt.driver [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:36:06 compute-0 nova_compute[254092]: 2025-11-25 16:36:06.622 254096 INFO nova.virt.libvirt.driver [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Creating image(s)
Nov 25 16:36:06 compute-0 nova_compute[254092]: 2025-11-25 16:36:06.646 254096 DEBUG nova.storage.rbd_utils [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image e549d4e8-b824-480b-b81a-83e2ea1eff12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:36:06 compute-0 nova_compute[254092]: 2025-11-25 16:36:06.670 254096 DEBUG nova.storage.rbd_utils [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image e549d4e8-b824-480b-b81a-83e2ea1eff12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:36:06 compute-0 nova_compute[254092]: 2025-11-25 16:36:06.721 254096 DEBUG nova.storage.rbd_utils [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image e549d4e8-b824-480b-b81a-83e2ea1eff12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:36:06 compute-0 nova_compute[254092]: 2025-11-25 16:36:06.727 254096 DEBUG oslo_concurrency.processutils [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:36:06 compute-0 kernel: tap660536bc-d4 (unregistering): left promiscuous mode
Nov 25 16:36:06 compute-0 NetworkManager[48891]: <info>  [1764088566.7676] device (tap660536bc-d4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:36:06 compute-0 ovn_controller[153477]: 2025-11-25T16:36:06Z|00391|binding|INFO|Releasing lport 660536bc-d4bf-4a4b-9515-06043951c25e from this chassis (sb_readonly=0)
Nov 25 16:36:06 compute-0 nova_compute[254092]: 2025-11-25 16:36:06.772 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:06 compute-0 ovn_controller[153477]: 2025-11-25T16:36:06Z|00392|binding|INFO|Setting lport 660536bc-d4bf-4a4b-9515-06043951c25e down in Southbound
Nov 25 16:36:06 compute-0 ovn_controller[153477]: 2025-11-25T16:36:06Z|00393|binding|INFO|Removing iface tap660536bc-d4 ovn-installed in OVS
Nov 25 16:36:06 compute-0 nova_compute[254092]: 2025-11-25 16:36:06.804 254096 DEBUG oslo_concurrency.processutils [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:36:06 compute-0 nova_compute[254092]: 2025-11-25 16:36:06.805 254096 DEBUG oslo_concurrency.lockutils [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:06 compute-0 nova_compute[254092]: 2025-11-25 16:36:06.805 254096 DEBUG oslo_concurrency.lockutils [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:06 compute-0 nova_compute[254092]: 2025-11-25 16:36:06.806 254096 DEBUG oslo_concurrency.lockutils [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:06 compute-0 nova_compute[254092]: 2025-11-25 16:36:06.828 254096 DEBUG nova.storage.rbd_utils [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image e549d4e8-b824-480b-b81a-83e2ea1eff12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:36:06 compute-0 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d00000025.scope: Deactivated successfully.
Nov 25 16:36:06 compute-0 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d00000025.scope: Consumed 9.831s CPU time.
Nov 25 16:36:06 compute-0 nova_compute[254092]: 2025-11-25 16:36:06.834 254096 DEBUG oslo_concurrency.processutils [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 e549d4e8-b824-480b-b81a-83e2ea1eff12_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:36:06 compute-0 systemd-machined[216343]: Machine qemu-53-instance-00000025 terminated.
Nov 25 16:36:06 compute-0 nova_compute[254092]: 2025-11-25 16:36:06.862 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:06.873 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:46:64 10.100.0.8'], port_security=['fa:16:3e:10:46:64 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'a3d5d205-98f0-4820-a96c-7f3e59d0cdd9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3960d4c5-60d7-49e3-b26d-f1317dd96f9f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d7c4dbc1eb44f39aa7ccb9b6363e554', 'neutron:revision_number': '9', 'neutron:security_group_ids': '11c41536-eaec-4c18-861e-ff1c71d0a6fa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9a9fa8a-e323-43f7-a155-b2b5010363da, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=660536bc-d4bf-4a4b-9515-06043951c25e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:36:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:06.875 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 660536bc-d4bf-4a4b-9515-06043951c25e in datapath 3960d4c5-60d7-49e3-b26d-f1317dd96f9f unbound from our chassis
Nov 25 16:36:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:06.876 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3960d4c5-60d7-49e3-b26d-f1317dd96f9f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:36:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:06.878 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c7fddc9e-4c9b-433f-9523-d85d5f79fcea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:06.878 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f namespace which is not needed anymore
Nov 25 16:36:06 compute-0 nova_compute[254092]: 2025-11-25 16:36:06.988 254096 DEBUG nova.compute.manager [None req-a84dd144-db5a-4c7b-88e4-d488bd081666 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:36:07 compute-0 neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f[306596]: [NOTICE]   (306600) : haproxy version is 2.8.14-c23fe91
Nov 25 16:36:07 compute-0 neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f[306596]: [NOTICE]   (306600) : path to executable is /usr/sbin/haproxy
Nov 25 16:36:07 compute-0 neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f[306596]: [WARNING]  (306600) : Exiting Master process...
Nov 25 16:36:07 compute-0 neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f[306596]: [WARNING]  (306600) : Exiting Master process...
Nov 25 16:36:07 compute-0 neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f[306596]: [ALERT]    (306600) : Current worker (306602) exited with code 143 (Terminated)
Nov 25 16:36:07 compute-0 neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f[306596]: [WARNING]  (306600) : All workers exited. Exiting... (0)
Nov 25 16:36:07 compute-0 systemd[1]: libpod-39d8001716f9ce725686deff3985192a2e179870858b401c7c640e7fa56e8095.scope: Deactivated successfully.
Nov 25 16:36:07 compute-0 podman[306901]: 2025-11-25 16:36:07.041621902 +0000 UTC m=+0.064034575 container died 39d8001716f9ce725686deff3985192a2e179870858b401c7c640e7fa56e8095 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 16:36:07 compute-0 nova_compute[254092]: 2025-11-25 16:36:07.061 254096 DEBUG nova.policy [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '75e65df891a54a2caabb073a427430b9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8bd01e0913564ac783fae350d6861e24', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:36:07 compute-0 ceph-mon[74985]: pgmap v1480: 321 pgs: 321 active+clean; 121 MiB data, 512 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 6.4 KiB/s wr, 280 op/s
Nov 25 16:36:07 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-39d8001716f9ce725686deff3985192a2e179870858b401c7c640e7fa56e8095-userdata-shm.mount: Deactivated successfully.
Nov 25 16:36:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-59278409eece4f268f905d74d2f5e9016da291c00464cddcbac0e1a07e35273a-merged.mount: Deactivated successfully.
Nov 25 16:36:07 compute-0 podman[306901]: 2025-11-25 16:36:07.178167151 +0000 UTC m=+0.200579824 container cleanup 39d8001716f9ce725686deff3985192a2e179870858b401c7c640e7fa56e8095 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 25 16:36:07 compute-0 systemd[1]: libpod-conmon-39d8001716f9ce725686deff3985192a2e179870858b401c7c640e7fa56e8095.scope: Deactivated successfully.
Nov 25 16:36:07 compute-0 nova_compute[254092]: 2025-11-25 16:36:07.211 254096 DEBUG oslo_concurrency.processutils [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 e549d4e8-b824-480b-b81a-83e2ea1eff12_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.377s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:36:07 compute-0 podman[306941]: 2025-11-25 16:36:07.268739254 +0000 UTC m=+0.067954131 container remove 39d8001716f9ce725686deff3985192a2e179870858b401c7c640e7fa56e8095 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 16:36:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:07.274 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1bcdbe01-c01e-4515-88fa-f2ad999fd97e]: (4, ('Tue Nov 25 04:36:06 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f (39d8001716f9ce725686deff3985192a2e179870858b401c7c640e7fa56e8095)\n39d8001716f9ce725686deff3985192a2e179870858b401c7c640e7fa56e8095\nTue Nov 25 04:36:07 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f (39d8001716f9ce725686deff3985192a2e179870858b401c7c640e7fa56e8095)\n39d8001716f9ce725686deff3985192a2e179870858b401c7c640e7fa56e8095\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:07.276 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ef271240-b669-45ec-85cb-dc2bcd3e6809]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:07.277 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3960d4c5-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:36:07 compute-0 nova_compute[254092]: 2025-11-25 16:36:07.278 254096 DEBUG nova.storage.rbd_utils [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] resizing rbd image e549d4e8-b824-480b-b81a-83e2ea1eff12_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:36:07 compute-0 kernel: tap3960d4c5-60: left promiscuous mode
Nov 25 16:36:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:07.318 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8bf2e112-141e-4dcb-841f-7b94dc7eb8fd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:07 compute-0 nova_compute[254092]: 2025-11-25 16:36:07.327 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:07.332 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e2183c68-b019-4d36-88d2-527f50cb22e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:07.334 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e9eb5b81-34c7-4499-9e80-75fa7fce650e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:07 compute-0 nova_compute[254092]: 2025-11-25 16:36:07.343 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088552.3409839, 270ad7f6-74d4-4c29-9856-77768f170789 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:36:07 compute-0 nova_compute[254092]: 2025-11-25 16:36:07.344 254096 INFO nova.compute.manager [-] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] VM Stopped (Lifecycle Event)
Nov 25 16:36:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:07.349 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[21256e93-7c56-4a10-9786-546e9d4f2672]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 501270, 'reachable_time': 41809, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307012, 'error': None, 'target': 'ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:07 compute-0 systemd[1]: run-netns-ovnmeta\x2d3960d4c5\x2d60d7\x2d49e3\x2db26d\x2df1317dd96f9f.mount: Deactivated successfully.
Nov 25 16:36:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:07.352 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:36:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:07.353 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[47d9aa68-7eed-4999-ae83-75ae7c3511c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:07 compute-0 nova_compute[254092]: 2025-11-25 16:36:07.367 254096 DEBUG nova.compute.manager [None req-7ba60b64-78e3-4ef7-921f-6a409199533c - - - - - -] [instance: 270ad7f6-74d4-4c29-9856-77768f170789] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:36:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1481: 321 pgs: 321 active+clean; 121 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 5.1 KiB/s wr, 229 op/s
Nov 25 16:36:07 compute-0 nova_compute[254092]: 2025-11-25 16:36:07.420 254096 DEBUG nova.compute.manager [req-2e551e03-5f13-4a3e-a9d2-0d5f080dcaa9 req-4fa6b765-7df9-44e2-836c-8068b054fc0c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Received event network-vif-unplugged-660536bc-d4bf-4a4b-9515-06043951c25e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:36:07 compute-0 nova_compute[254092]: 2025-11-25 16:36:07.421 254096 DEBUG oslo_concurrency.lockutils [req-2e551e03-5f13-4a3e-a9d2-0d5f080dcaa9 req-4fa6b765-7df9-44e2-836c-8068b054fc0c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:07 compute-0 nova_compute[254092]: 2025-11-25 16:36:07.421 254096 DEBUG oslo_concurrency.lockutils [req-2e551e03-5f13-4a3e-a9d2-0d5f080dcaa9 req-4fa6b765-7df9-44e2-836c-8068b054fc0c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:07 compute-0 nova_compute[254092]: 2025-11-25 16:36:07.422 254096 DEBUG oslo_concurrency.lockutils [req-2e551e03-5f13-4a3e-a9d2-0d5f080dcaa9 req-4fa6b765-7df9-44e2-836c-8068b054fc0c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:07 compute-0 nova_compute[254092]: 2025-11-25 16:36:07.422 254096 DEBUG nova.compute.manager [req-2e551e03-5f13-4a3e-a9d2-0d5f080dcaa9 req-4fa6b765-7df9-44e2-836c-8068b054fc0c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] No waiting events found dispatching network-vif-unplugged-660536bc-d4bf-4a4b-9515-06043951c25e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:36:07 compute-0 nova_compute[254092]: 2025-11-25 16:36:07.422 254096 WARNING nova.compute.manager [req-2e551e03-5f13-4a3e-a9d2-0d5f080dcaa9 req-4fa6b765-7df9-44e2-836c-8068b054fc0c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Received unexpected event network-vif-unplugged-660536bc-d4bf-4a4b-9515-06043951c25e for instance with vm_state suspended and task_state None.
Nov 25 16:36:07 compute-0 nova_compute[254092]: 2025-11-25 16:36:07.427 254096 DEBUG nova.objects.instance [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lazy-loading 'migration_context' on Instance uuid e549d4e8-b824-480b-b81a-83e2ea1eff12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:36:07 compute-0 nova_compute[254092]: 2025-11-25 16:36:07.442 254096 DEBUG nova.virt.libvirt.driver [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:36:07 compute-0 nova_compute[254092]: 2025-11-25 16:36:07.443 254096 DEBUG nova.virt.libvirt.driver [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Ensure instance console log exists: /var/lib/nova/instances/e549d4e8-b824-480b-b81a-83e2ea1eff12/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:36:07 compute-0 nova_compute[254092]: 2025-11-25 16:36:07.443 254096 DEBUG oslo_concurrency.lockutils [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:07 compute-0 nova_compute[254092]: 2025-11-25 16:36:07.443 254096 DEBUG oslo_concurrency.lockutils [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:07 compute-0 nova_compute[254092]: 2025-11-25 16:36:07.444 254096 DEBUG oslo_concurrency.lockutils [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:07 compute-0 nova_compute[254092]: 2025-11-25 16:36:07.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:36:07 compute-0 nova_compute[254092]: 2025-11-25 16:36:07.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 16:36:07 compute-0 nova_compute[254092]: 2025-11-25 16:36:07.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 16:36:07 compute-0 nova_compute[254092]: 2025-11-25 16:36:07.526 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 25 16:36:07 compute-0 nova_compute[254092]: 2025-11-25 16:36:07.671 254096 DEBUG nova.network.neutron [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Successfully created port: 4ac8455e-46f9-4f4e-9acc-43b78589ef10 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:36:07 compute-0 nova_compute[254092]: 2025-11-25 16:36:07.753 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-a3d5d205-98f0-4820-a96c-7f3e59d0cdd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:36:07 compute-0 nova_compute[254092]: 2025-11-25 16:36:07.754 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-a3d5d205-98f0-4820-a96c-7f3e59d0cdd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:36:07 compute-0 nova_compute[254092]: 2025-11-25 16:36:07.754 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 16:36:07 compute-0 nova_compute[254092]: 2025-11-25 16:36:07.755 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a3d5d205-98f0-4820-a96c-7f3e59d0cdd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:36:07 compute-0 nova_compute[254092]: 2025-11-25 16:36:07.933 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:08 compute-0 ceph-mon[74985]: pgmap v1481: 321 pgs: 321 active+clean; 121 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 5.1 KiB/s wr, 229 op/s
Nov 25 16:36:08 compute-0 nova_compute[254092]: 2025-11-25 16:36:08.456 254096 DEBUG nova.network.neutron [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Successfully updated port: 4ac8455e-46f9-4f4e-9acc-43b78589ef10 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:36:08 compute-0 nova_compute[254092]: 2025-11-25 16:36:08.480 254096 DEBUG oslo_concurrency.lockutils [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "refresh_cache-e549d4e8-b824-480b-b81a-83e2ea1eff12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:36:08 compute-0 nova_compute[254092]: 2025-11-25 16:36:08.480 254096 DEBUG oslo_concurrency.lockutils [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquired lock "refresh_cache-e549d4e8-b824-480b-b81a-83e2ea1eff12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:36:08 compute-0 nova_compute[254092]: 2025-11-25 16:36:08.480 254096 DEBUG nova.network.neutron [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:36:08 compute-0 nova_compute[254092]: 2025-11-25 16:36:08.685 254096 DEBUG nova.network.neutron [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:36:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:36:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1482: 321 pgs: 321 active+clean; 121 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.2 KiB/s wr, 188 op/s
Nov 25 16:36:09 compute-0 nova_compute[254092]: 2025-11-25 16:36:09.534 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:09 compute-0 nova_compute[254092]: 2025-11-25 16:36:09.890 254096 DEBUG nova.compute.manager [req-52cddeb9-c4bc-4f13-b025-c8bf61b20276 req-8acda662-5113-4f2a-a392-79d5c8ca7926 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Received event network-changed-4ac8455e-46f9-4f4e-9acc-43b78589ef10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:36:09 compute-0 nova_compute[254092]: 2025-11-25 16:36:09.891 254096 DEBUG nova.compute.manager [req-52cddeb9-c4bc-4f13-b025-c8bf61b20276 req-8acda662-5113-4f2a-a392-79d5c8ca7926 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Refreshing instance network info cache due to event network-changed-4ac8455e-46f9-4f4e-9acc-43b78589ef10. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:36:09 compute-0 nova_compute[254092]: 2025-11-25 16:36:09.892 254096 DEBUG oslo_concurrency.lockutils [req-52cddeb9-c4bc-4f13-b025-c8bf61b20276 req-8acda662-5113-4f2a-a392-79d5c8ca7926 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-e549d4e8-b824-480b-b81a-83e2ea1eff12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:36:09 compute-0 nova_compute[254092]: 2025-11-25 16:36:09.924 254096 DEBUG nova.compute.manager [req-44ac47db-8328-482f-a2c3-b5216c60ab15 req-a45f1e03-9fae-46e7-8a72-1d0cf5317cd7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Received event network-vif-plugged-660536bc-d4bf-4a4b-9515-06043951c25e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:36:09 compute-0 nova_compute[254092]: 2025-11-25 16:36:09.925 254096 DEBUG oslo_concurrency.lockutils [req-44ac47db-8328-482f-a2c3-b5216c60ab15 req-a45f1e03-9fae-46e7-8a72-1d0cf5317cd7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:09 compute-0 nova_compute[254092]: 2025-11-25 16:36:09.925 254096 DEBUG oslo_concurrency.lockutils [req-44ac47db-8328-482f-a2c3-b5216c60ab15 req-a45f1e03-9fae-46e7-8a72-1d0cf5317cd7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:09 compute-0 nova_compute[254092]: 2025-11-25 16:36:09.925 254096 DEBUG oslo_concurrency.lockutils [req-44ac47db-8328-482f-a2c3-b5216c60ab15 req-a45f1e03-9fae-46e7-8a72-1d0cf5317cd7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:09 compute-0 nova_compute[254092]: 2025-11-25 16:36:09.926 254096 DEBUG nova.compute.manager [req-44ac47db-8328-482f-a2c3-b5216c60ab15 req-a45f1e03-9fae-46e7-8a72-1d0cf5317cd7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] No waiting events found dispatching network-vif-plugged-660536bc-d4bf-4a4b-9515-06043951c25e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:36:09 compute-0 nova_compute[254092]: 2025-11-25 16:36:09.926 254096 WARNING nova.compute.manager [req-44ac47db-8328-482f-a2c3-b5216c60ab15 req-a45f1e03-9fae-46e7-8a72-1d0cf5317cd7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Received unexpected event network-vif-plugged-660536bc-d4bf-4a4b-9515-06043951c25e for instance with vm_state suspended and task_state None.
Nov 25 16:36:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:36:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:36:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:36:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:36:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:36:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:36:10 compute-0 nova_compute[254092]: 2025-11-25 16:36:10.221 254096 INFO nova.compute.manager [None req-056cecf7-5cb2-45eb-bdbe-2b359bd1e709 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Resuming
Nov 25 16:36:10 compute-0 nova_compute[254092]: 2025-11-25 16:36:10.222 254096 DEBUG nova.objects.instance [None req-056cecf7-5cb2-45eb-bdbe-2b359bd1e709 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lazy-loading 'flavor' on Instance uuid a3d5d205-98f0-4820-a96c-7f3e59d0cdd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:36:10 compute-0 nova_compute[254092]: 2025-11-25 16:36:10.266 254096 DEBUG oslo_concurrency.lockutils [None req-056cecf7-5cb2-45eb-bdbe-2b359bd1e709 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Acquiring lock "refresh_cache-a3d5d205-98f0-4820-a96c-7f3e59d0cdd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:36:10 compute-0 nova_compute[254092]: 2025-11-25 16:36:10.282 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Updating instance_info_cache with network_info: [{"id": "660536bc-d4bf-4a4b-9515-06043951c25e", "address": "fa:16:3e:10:46:64", "network": {"id": "3960d4c5-60d7-49e3-b26d-f1317dd96f9f", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1770094643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d7c4dbc1eb44f39aa7ccb9b6363e554", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660536bc-d4", "ovs_interfaceid": "660536bc-d4bf-4a4b-9515-06043951c25e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:36:10 compute-0 nova_compute[254092]: 2025-11-25 16:36:10.300 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-a3d5d205-98f0-4820-a96c-7f3e59d0cdd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:36:10 compute-0 nova_compute[254092]: 2025-11-25 16:36:10.301 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 16:36:10 compute-0 nova_compute[254092]: 2025-11-25 16:36:10.301 254096 DEBUG oslo_concurrency.lockutils [None req-056cecf7-5cb2-45eb-bdbe-2b359bd1e709 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Acquired lock "refresh_cache-a3d5d205-98f0-4820-a96c-7f3e59d0cdd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:36:10 compute-0 nova_compute[254092]: 2025-11-25 16:36:10.301 254096 DEBUG nova.network.neutron [None req-056cecf7-5cb2-45eb-bdbe-2b359bd1e709 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:36:10 compute-0 ceph-mon[74985]: pgmap v1482: 321 pgs: 321 active+clean; 121 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.2 KiB/s wr, 188 op/s
Nov 25 16:36:10 compute-0 nova_compute[254092]: 2025-11-25 16:36:10.495 254096 DEBUG nova.network.neutron [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Updating instance_info_cache with network_info: [{"id": "4ac8455e-46f9-4f4e-9acc-43b78589ef10", "address": "fa:16:3e:36:7b:cf", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ac8455e-46", "ovs_interfaceid": "4ac8455e-46f9-4f4e-9acc-43b78589ef10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:36:10 compute-0 nova_compute[254092]: 2025-11-25 16:36:10.527 254096 DEBUG oslo_concurrency.lockutils [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Releasing lock "refresh_cache-e549d4e8-b824-480b-b81a-83e2ea1eff12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:36:10 compute-0 nova_compute[254092]: 2025-11-25 16:36:10.528 254096 DEBUG nova.compute.manager [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Instance network_info: |[{"id": "4ac8455e-46f9-4f4e-9acc-43b78589ef10", "address": "fa:16:3e:36:7b:cf", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ac8455e-46", "ovs_interfaceid": "4ac8455e-46f9-4f4e-9acc-43b78589ef10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:36:10 compute-0 nova_compute[254092]: 2025-11-25 16:36:10.528 254096 DEBUG oslo_concurrency.lockutils [req-52cddeb9-c4bc-4f13-b025-c8bf61b20276 req-8acda662-5113-4f2a-a392-79d5c8ca7926 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-e549d4e8-b824-480b-b81a-83e2ea1eff12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:36:10 compute-0 nova_compute[254092]: 2025-11-25 16:36:10.529 254096 DEBUG nova.network.neutron [req-52cddeb9-c4bc-4f13-b025-c8bf61b20276 req-8acda662-5113-4f2a-a392-79d5c8ca7926 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Refreshing network info cache for port 4ac8455e-46f9-4f4e-9acc-43b78589ef10 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:36:10 compute-0 nova_compute[254092]: 2025-11-25 16:36:10.532 254096 DEBUG nova.virt.libvirt.driver [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Start _get_guest_xml network_info=[{"id": "4ac8455e-46f9-4f4e-9acc-43b78589ef10", "address": "fa:16:3e:36:7b:cf", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ac8455e-46", "ovs_interfaceid": "4ac8455e-46f9-4f4e-9acc-43b78589ef10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:36:10 compute-0 nova_compute[254092]: 2025-11-25 16:36:10.537 254096 WARNING nova.virt.libvirt.driver [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:36:10 compute-0 nova_compute[254092]: 2025-11-25 16:36:10.546 254096 DEBUG nova.virt.libvirt.host [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:36:10 compute-0 nova_compute[254092]: 2025-11-25 16:36:10.548 254096 DEBUG nova.virt.libvirt.host [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:36:10 compute-0 nova_compute[254092]: 2025-11-25 16:36:10.556 254096 DEBUG nova.virt.libvirt.host [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:36:10 compute-0 nova_compute[254092]: 2025-11-25 16:36:10.557 254096 DEBUG nova.virt.libvirt.host [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:36:10 compute-0 nova_compute[254092]: 2025-11-25 16:36:10.557 254096 DEBUG nova.virt.libvirt.driver [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:36:10 compute-0 nova_compute[254092]: 2025-11-25 16:36:10.558 254096 DEBUG nova.virt.hardware [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:36:10 compute-0 nova_compute[254092]: 2025-11-25 16:36:10.558 254096 DEBUG nova.virt.hardware [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:36:10 compute-0 nova_compute[254092]: 2025-11-25 16:36:10.558 254096 DEBUG nova.virt.hardware [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:36:10 compute-0 nova_compute[254092]: 2025-11-25 16:36:10.559 254096 DEBUG nova.virt.hardware [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:36:10 compute-0 nova_compute[254092]: 2025-11-25 16:36:10.559 254096 DEBUG nova.virt.hardware [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:36:10 compute-0 nova_compute[254092]: 2025-11-25 16:36:10.559 254096 DEBUG nova.virt.hardware [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:36:10 compute-0 nova_compute[254092]: 2025-11-25 16:36:10.560 254096 DEBUG nova.virt.hardware [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:36:10 compute-0 nova_compute[254092]: 2025-11-25 16:36:10.560 254096 DEBUG nova.virt.hardware [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:36:10 compute-0 nova_compute[254092]: 2025-11-25 16:36:10.560 254096 DEBUG nova.virt.hardware [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:36:10 compute-0 nova_compute[254092]: 2025-11-25 16:36:10.560 254096 DEBUG nova.virt.hardware [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:36:10 compute-0 nova_compute[254092]: 2025-11-25 16:36:10.561 254096 DEBUG nova.virt.hardware [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:36:10 compute-0 nova_compute[254092]: 2025-11-25 16:36:10.564 254096 DEBUG oslo_concurrency.processutils [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:36:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:36:10 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1833647301' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.023 254096 DEBUG oslo_concurrency.processutils [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.049 254096 DEBUG nova.storage.rbd_utils [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image e549d4e8-b824-480b-b81a-83e2ea1eff12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.054 254096 DEBUG oslo_concurrency.processutils [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:36:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1483: 321 pgs: 321 active+clean; 167 MiB data, 528 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 2.1 MiB/s wr, 103 op/s
Nov 25 16:36:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:36:11 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1923766959' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:36:11 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1833647301' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.530 254096 DEBUG oslo_concurrency.processutils [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.532 254096 DEBUG nova.virt.libvirt.vif [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:36:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-609049756',display_name='tempest-ImagesTestJSON-server-609049756',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-609049756',id=46,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8bd01e0913564ac783fae350d6861e24',ramdisk_id='',reservation_id='r-6cjpy2nl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-217824554',owner_user_name='tempest-ImagesTestJSON-217824554-project-member'},tags=TagList
,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:36:06Z,user_data=None,user_id='75e65df891a54a2caabb073a427430b9',uuid=e549d4e8-b824-480b-b81a-83e2ea1eff12,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4ac8455e-46f9-4f4e-9acc-43b78589ef10", "address": "fa:16:3e:36:7b:cf", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ac8455e-46", "ovs_interfaceid": "4ac8455e-46f9-4f4e-9acc-43b78589ef10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.532 254096 DEBUG nova.network.os_vif_util [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converting VIF {"id": "4ac8455e-46f9-4f4e-9acc-43b78589ef10", "address": "fa:16:3e:36:7b:cf", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ac8455e-46", "ovs_interfaceid": "4ac8455e-46f9-4f4e-9acc-43b78589ef10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.533 254096 DEBUG nova.network.os_vif_util [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:7b:cf,bridge_name='br-int',has_traffic_filtering=True,id=4ac8455e-46f9-4f4e-9acc-43b78589ef10,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ac8455e-46') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.534 254096 DEBUG nova.objects.instance [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lazy-loading 'pci_devices' on Instance uuid e549d4e8-b824-480b-b81a-83e2ea1eff12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.548 254096 DEBUG nova.virt.libvirt.driver [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:36:11 compute-0 nova_compute[254092]:   <uuid>e549d4e8-b824-480b-b81a-83e2ea1eff12</uuid>
Nov 25 16:36:11 compute-0 nova_compute[254092]:   <name>instance-0000002e</name>
Nov 25 16:36:11 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:36:11 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:36:11 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:36:11 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:       <nova:name>tempest-ImagesTestJSON-server-609049756</nova:name>
Nov 25 16:36:11 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:36:10</nova:creationTime>
Nov 25 16:36:11 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:36:11 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:36:11 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:36:11 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:36:11 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:36:11 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:36:11 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:36:11 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:36:11 compute-0 nova_compute[254092]:         <nova:user uuid="75e65df891a54a2caabb073a427430b9">tempest-ImagesTestJSON-217824554-project-member</nova:user>
Nov 25 16:36:11 compute-0 nova_compute[254092]:         <nova:project uuid="8bd01e0913564ac783fae350d6861e24">tempest-ImagesTestJSON-217824554</nova:project>
Nov 25 16:36:11 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:36:11 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:36:11 compute-0 nova_compute[254092]:         <nova:port uuid="4ac8455e-46f9-4f4e-9acc-43b78589ef10">
Nov 25 16:36:11 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:36:11 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:36:11 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:36:11 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:36:11 compute-0 nova_compute[254092]:     <system>
Nov 25 16:36:11 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:36:11 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:36:11 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:36:11 compute-0 nova_compute[254092]:       <entry name="serial">e549d4e8-b824-480b-b81a-83e2ea1eff12</entry>
Nov 25 16:36:11 compute-0 nova_compute[254092]:       <entry name="uuid">e549d4e8-b824-480b-b81a-83e2ea1eff12</entry>
Nov 25 16:36:11 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     </system>
Nov 25 16:36:11 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:36:11 compute-0 nova_compute[254092]:   <os>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:   </os>
Nov 25 16:36:11 compute-0 nova_compute[254092]:   <features>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:   </features>
Nov 25 16:36:11 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:36:11 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:36:11 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:36:11 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:36:11 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:36:11 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/e549d4e8-b824-480b-b81a-83e2ea1eff12_disk">
Nov 25 16:36:11 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:       </source>
Nov 25 16:36:11 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:36:11 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:36:11 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:36:11 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/e549d4e8-b824-480b-b81a-83e2ea1eff12_disk.config">
Nov 25 16:36:11 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:       </source>
Nov 25 16:36:11 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:36:11 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:36:11 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:36:11 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:36:7b:cf"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:       <target dev="tap4ac8455e-46"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:36:11 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/e549d4e8-b824-480b-b81a-83e2ea1eff12/console.log" append="off"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     <video>
Nov 25 16:36:11 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     </video>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:36:11 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:36:11 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:36:11 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:36:11 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:36:11 compute-0 nova_compute[254092]: </domain>
Nov 25 16:36:11 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.549 254096 DEBUG nova.compute.manager [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Preparing to wait for external event network-vif-plugged-4ac8455e-46f9-4f4e-9acc-43b78589ef10 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.550 254096 DEBUG oslo_concurrency.lockutils [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "e549d4e8-b824-480b-b81a-83e2ea1eff12-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.551 254096 DEBUG oslo_concurrency.lockutils [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "e549d4e8-b824-480b-b81a-83e2ea1eff12-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.551 254096 DEBUG oslo_concurrency.lockutils [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "e549d4e8-b824-480b-b81a-83e2ea1eff12-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.551 254096 DEBUG nova.virt.libvirt.vif [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:36:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-609049756',display_name='tempest-ImagesTestJSON-server-609049756',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-609049756',id=46,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8bd01e0913564ac783fae350d6861e24',ramdisk_id='',reservation_id='r-6cjpy2nl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-217824554',owner_user_name='tempest-ImagesTestJSON-217824554-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:36:06Z,user_data=None,user_id='75e65df891a54a2caabb073a427430b9',uuid=e549d4e8-b824-480b-b81a-83e2ea1eff12,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4ac8455e-46f9-4f4e-9acc-43b78589ef10", "address": "fa:16:3e:36:7b:cf", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ac8455e-46", "ovs_interfaceid": "4ac8455e-46f9-4f4e-9acc-43b78589ef10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.552 254096 DEBUG nova.network.os_vif_util [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converting VIF {"id": "4ac8455e-46f9-4f4e-9acc-43b78589ef10", "address": "fa:16:3e:36:7b:cf", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ac8455e-46", "ovs_interfaceid": "4ac8455e-46f9-4f4e-9acc-43b78589ef10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.552 254096 DEBUG nova.network.os_vif_util [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:7b:cf,bridge_name='br-int',has_traffic_filtering=True,id=4ac8455e-46f9-4f4e-9acc-43b78589ef10,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ac8455e-46') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.553 254096 DEBUG os_vif [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:7b:cf,bridge_name='br-int',has_traffic_filtering=True,id=4ac8455e-46f9-4f4e-9acc-43b78589ef10,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ac8455e-46') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.553 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.554 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.554 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.556 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.556 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4ac8455e-46, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.557 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4ac8455e-46, col_values=(('external_ids', {'iface-id': '4ac8455e-46f9-4f4e-9acc-43b78589ef10', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:36:7b:cf', 'vm-uuid': 'e549d4e8-b824-480b-b81a-83e2ea1eff12'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.558 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:11 compute-0 NetworkManager[48891]: <info>  [1764088571.5593] manager: (tap4ac8455e-46): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/171)
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.561 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.566 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.567 254096 INFO os_vif [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:7b:cf,bridge_name='br-int',has_traffic_filtering=True,id=4ac8455e-46f9-4f4e-9acc-43b78589ef10,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ac8455e-46')
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.639 254096 DEBUG nova.virt.libvirt.driver [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.640 254096 DEBUG nova.virt.libvirt.driver [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.640 254096 DEBUG nova.virt.libvirt.driver [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] No VIF found with MAC fa:16:3e:36:7b:cf, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.641 254096 INFO nova.virt.libvirt.driver [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Using config drive
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.660 254096 DEBUG nova.storage.rbd_utils [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image e549d4e8-b824-480b-b81a-83e2ea1eff12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.745 254096 DEBUG nova.network.neutron [None req-056cecf7-5cb2-45eb-bdbe-2b359bd1e709 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Updating instance_info_cache with network_info: [{"id": "660536bc-d4bf-4a4b-9515-06043951c25e", "address": "fa:16:3e:10:46:64", "network": {"id": "3960d4c5-60d7-49e3-b26d-f1317dd96f9f", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1770094643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d7c4dbc1eb44f39aa7ccb9b6363e554", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660536bc-d4", "ovs_interfaceid": "660536bc-d4bf-4a4b-9515-06043951c25e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.769 254096 DEBUG oslo_concurrency.lockutils [None req-056cecf7-5cb2-45eb-bdbe-2b359bd1e709 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Releasing lock "refresh_cache-a3d5d205-98f0-4820-a96c-7f3e59d0cdd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.777 254096 DEBUG nova.virt.libvirt.vif [None req-056cecf7-5cb2-45eb-bdbe-2b359bd1e709 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:33:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-2038779180',display_name='tempest-ServersNegativeTestJSON-server-2038779180',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-2038779180',id=37,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:35:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='2d7c4dbc1eb44f39aa7ccb9b6363e554',ramdisk_id='',reservation_id='r-j2n2s8mj',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServersNegativeTestJSON-549107942',owner_user_name='tempest-ServersNegativeTestJSON-549107942-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:36:07Z,user_data=None,user_id='b228702c02db4cb69105bb4c939c15d7',uuid=a3d5d205-98f0-4820-a96c-7f3e59d0cdd9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "660536bc-d4bf-4a4b-9515-06043951c25e", "address": "fa:16:3e:10:46:64", "network": {"id": "3960d4c5-60d7-49e3-b26d-f1317dd96f9f", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1770094643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d7c4dbc1eb44f39aa7ccb9b6363e554", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660536bc-d4", "ovs_interfaceid": "660536bc-d4bf-4a4b-9515-06043951c25e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.777 254096 DEBUG nova.network.os_vif_util [None req-056cecf7-5cb2-45eb-bdbe-2b359bd1e709 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Converting VIF {"id": "660536bc-d4bf-4a4b-9515-06043951c25e", "address": "fa:16:3e:10:46:64", "network": {"id": "3960d4c5-60d7-49e3-b26d-f1317dd96f9f", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1770094643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d7c4dbc1eb44f39aa7ccb9b6363e554", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660536bc-d4", "ovs_interfaceid": "660536bc-d4bf-4a4b-9515-06043951c25e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.778 254096 DEBUG nova.network.os_vif_util [None req-056cecf7-5cb2-45eb-bdbe-2b359bd1e709 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:46:64,bridge_name='br-int',has_traffic_filtering=True,id=660536bc-d4bf-4a4b-9515-06043951c25e,network=Network(3960d4c5-60d7-49e3-b26d-f1317dd96f9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap660536bc-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.779 254096 DEBUG os_vif [None req-056cecf7-5cb2-45eb-bdbe-2b359bd1e709 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:46:64,bridge_name='br-int',has_traffic_filtering=True,id=660536bc-d4bf-4a4b-9515-06043951c25e,network=Network(3960d4c5-60d7-49e3-b26d-f1317dd96f9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap660536bc-d4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.780 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.781 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.781 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.784 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.784 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap660536bc-d4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.784 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap660536bc-d4, col_values=(('external_ids', {'iface-id': '660536bc-d4bf-4a4b-9515-06043951c25e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:10:46:64', 'vm-uuid': 'a3d5d205-98f0-4820-a96c-7f3e59d0cdd9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.786 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.786 254096 INFO os_vif [None req-056cecf7-5cb2-45eb-bdbe-2b359bd1e709 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:46:64,bridge_name='br-int',has_traffic_filtering=True,id=660536bc-d4bf-4a4b-9515-06043951c25e,network=Network(3960d4c5-60d7-49e3-b26d-f1317dd96f9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap660536bc-d4')
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.824 254096 DEBUG nova.objects.instance [None req-056cecf7-5cb2-45eb-bdbe-2b359bd1e709 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lazy-loading 'numa_topology' on Instance uuid a3d5d205-98f0-4820-a96c-7f3e59d0cdd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:36:11 compute-0 NetworkManager[48891]: <info>  [1764088571.9060] manager: (tap660536bc-d4): new Tun device (/org/freedesktop/NetworkManager/Devices/172)
Nov 25 16:36:11 compute-0 kernel: tap660536bc-d4: entered promiscuous mode
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.911 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:11 compute-0 ovn_controller[153477]: 2025-11-25T16:36:11Z|00394|binding|INFO|Claiming lport 660536bc-d4bf-4a4b-9515-06043951c25e for this chassis.
Nov 25 16:36:11 compute-0 ovn_controller[153477]: 2025-11-25T16:36:11Z|00395|binding|INFO|660536bc-d4bf-4a4b-9515-06043951c25e: Claiming fa:16:3e:10:46:64 10.100.0.8
Nov 25 16:36:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:11.923 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:46:64 10.100.0.8'], port_security=['fa:16:3e:10:46:64 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'a3d5d205-98f0-4820-a96c-7f3e59d0cdd9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3960d4c5-60d7-49e3-b26d-f1317dd96f9f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d7c4dbc1eb44f39aa7ccb9b6363e554', 'neutron:revision_number': '10', 'neutron:security_group_ids': '11c41536-eaec-4c18-861e-ff1c71d0a6fa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9a9fa8a-e323-43f7-a155-b2b5010363da, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=660536bc-d4bf-4a4b-9515-06043951c25e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:36:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:11.925 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 660536bc-d4bf-4a4b-9515-06043951c25e in datapath 3960d4c5-60d7-49e3-b26d-f1317dd96f9f bound to our chassis
Nov 25 16:36:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:11.926 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3960d4c5-60d7-49e3-b26d-f1317dd96f9f
Nov 25 16:36:11 compute-0 ovn_controller[153477]: 2025-11-25T16:36:11Z|00396|binding|INFO|Setting lport 660536bc-d4bf-4a4b-9515-06043951c25e up in Southbound
Nov 25 16:36:11 compute-0 ovn_controller[153477]: 2025-11-25T16:36:11Z|00397|binding|INFO|Setting lport 660536bc-d4bf-4a4b-9515-06043951c25e ovn-installed in OVS
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.935 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:11.938 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4bc47793-17a2-4d0b-bbae-a0b6cc612d0a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:11.939 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3960d4c5-61 in ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:36:11 compute-0 systemd-udevd[307131]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:36:11 compute-0 nova_compute[254092]: 2025-11-25 16:36:11.941 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:11.942 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3960d4c5-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:36:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:11.942 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0998d19c-9879-4129-83a9-366068787f76]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:11.943 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1740dfe7-377c-497b-a08b-ffadf55e5fc5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:11 compute-0 systemd-machined[216343]: New machine qemu-54-instance-00000025.
Nov 25 16:36:11 compute-0 NetworkManager[48891]: <info>  [1764088571.9541] device (tap660536bc-d4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:36:11 compute-0 systemd[1]: Started Virtual Machine qemu-54-instance-00000025.
Nov 25 16:36:11 compute-0 NetworkManager[48891]: <info>  [1764088571.9550] device (tap660536bc-d4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:36:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:11.963 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[9aef89e1-fce4-4cfe-ac81-406ad0438602]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:11.980 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4db29d5c-4f01-4058-aa10-093d28842652]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:12.014 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c2250074-b6b1-4ef9-876e-c4871927b09b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:12.019 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[60c4187d-31fd-4c61-a56e-b009d83e0e5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:12 compute-0 systemd-udevd[307135]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:36:12 compute-0 NetworkManager[48891]: <info>  [1764088572.0218] manager: (tap3960d4c5-60): new Veth device (/org/freedesktop/NetworkManager/Devices/173)
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:12.051 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[579b00c2-7238-43fb-9a42-6447255b5517]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:12.055 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[35b55d67-1b0f-410a-8195-73abefcd99f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:12 compute-0 NetworkManager[48891]: <info>  [1764088572.0827] device (tap3960d4c5-60): carrier: link connected
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:12.090 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[4b07fd90-40c8-47ab-bd6a-777c0cf0e47a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:12.112 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[95846627-127f-41cb-8298-12fd80f8f8fa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3960d4c5-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:42:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 115], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 502967, 'reachable_time': 20310, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307167, 'error': None, 'target': 'ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:12.134 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[526ac3dd-eb15-4d50-99c4-ff81cab08a1d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8f:428f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 502967, 'tstamp': 502967}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307168, 'error': None, 'target': 'ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:12.151 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[abddcf0e-6868-444b-886e-2dcb6f9b1fac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3960d4c5-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:42:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 115], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 502967, 'reachable_time': 20310, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 307173, 'error': None, 'target': 'ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:12.183 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ca44361e-9223-4fac-92e5-5339e6f4b34f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:12 compute-0 nova_compute[254092]: 2025-11-25 16:36:12.237 254096 INFO nova.virt.libvirt.driver [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Creating config drive at /var/lib/nova/instances/e549d4e8-b824-480b-b81a-83e2ea1eff12/disk.config
Nov 25 16:36:12 compute-0 nova_compute[254092]: 2025-11-25 16:36:12.242 254096 DEBUG oslo_concurrency.processutils [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e549d4e8-b824-480b-b81a-83e2ea1eff12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmvl6upla execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:12.253 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[27237323-5ae7-4c77-8543-ecb21a27d45b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:12.255 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3960d4c5-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:12.255 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:12.256 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3960d4c5-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:36:12 compute-0 NetworkManager[48891]: <info>  [1764088572.2587] manager: (tap3960d4c5-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/174)
Nov 25 16:36:12 compute-0 kernel: tap3960d4c5-60: entered promiscuous mode
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:12.264 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3960d4c5-60, col_values=(('external_ids', {'iface-id': '9dd2e935-32e0-43d1-8a28-23e6ab045e91'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:36:12 compute-0 ovn_controller[153477]: 2025-11-25T16:36:12Z|00398|binding|INFO|Releasing lport 9dd2e935-32e0-43d1-8a28-23e6ab045e91 from this chassis (sb_readonly=0)
Nov 25 16:36:12 compute-0 nova_compute[254092]: 2025-11-25 16:36:12.270 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:12 compute-0 nova_compute[254092]: 2025-11-25 16:36:12.283 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:12 compute-0 nova_compute[254092]: 2025-11-25 16:36:12.286 254096 DEBUG nova.network.neutron [req-52cddeb9-c4bc-4f13-b025-c8bf61b20276 req-8acda662-5113-4f2a-a392-79d5c8ca7926 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Updated VIF entry in instance network info cache for port 4ac8455e-46f9-4f4e-9acc-43b78589ef10. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:36:12 compute-0 nova_compute[254092]: 2025-11-25 16:36:12.286 254096 DEBUG nova.network.neutron [req-52cddeb9-c4bc-4f13-b025-c8bf61b20276 req-8acda662-5113-4f2a-a392-79d5c8ca7926 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Updating instance_info_cache with network_info: [{"id": "4ac8455e-46f9-4f4e-9acc-43b78589ef10", "address": "fa:16:3e:36:7b:cf", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ac8455e-46", "ovs_interfaceid": "4ac8455e-46f9-4f4e-9acc-43b78589ef10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:12.288 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3960d4c5-60d7-49e3-b26d-f1317dd96f9f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3960d4c5-60d7-49e3-b26d-f1317dd96f9f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:12.289 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9a36d62a-34e7-4cea-8b99-30575eaab8db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:12.290 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-3960d4c5-60d7-49e3-b26d-f1317dd96f9f
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/3960d4c5-60d7-49e3-b26d-f1317dd96f9f.pid.haproxy
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 3960d4c5-60d7-49e3-b26d-f1317dd96f9f
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:36:12 compute-0 nova_compute[254092]: 2025-11-25 16:36:12.290 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:12.291 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f', 'env', 'PROCESS_TAG=haproxy-3960d4c5-60d7-49e3-b26d-f1317dd96f9f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3960d4c5-60d7-49e3-b26d-f1317dd96f9f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:36:12 compute-0 nova_compute[254092]: 2025-11-25 16:36:12.310 254096 DEBUG oslo_concurrency.lockutils [req-52cddeb9-c4bc-4f13-b025-c8bf61b20276 req-8acda662-5113-4f2a-a392-79d5c8ca7926 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-e549d4e8-b824-480b-b81a-83e2ea1eff12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:36:12 compute-0 nova_compute[254092]: 2025-11-25 16:36:12.378 254096 DEBUG oslo_concurrency.processutils [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e549d4e8-b824-480b-b81a-83e2ea1eff12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmvl6upla" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:36:12 compute-0 nova_compute[254092]: 2025-11-25 16:36:12.412 254096 DEBUG nova.storage.rbd_utils [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image e549d4e8-b824-480b-b81a-83e2ea1eff12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:36:12 compute-0 nova_compute[254092]: 2025-11-25 16:36:12.417 254096 DEBUG oslo_concurrency.processutils [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e549d4e8-b824-480b-b81a-83e2ea1eff12/disk.config e549d4e8-b824-480b-b81a-83e2ea1eff12_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:36:12 compute-0 nova_compute[254092]: 2025-11-25 16:36:12.463 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for a3d5d205-98f0-4820-a96c-7f3e59d0cdd9 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 16:36:12 compute-0 nova_compute[254092]: 2025-11-25 16:36:12.463 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088572.415747, a3d5d205-98f0-4820-a96c-7f3e59d0cdd9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:36:12 compute-0 nova_compute[254092]: 2025-11-25 16:36:12.464 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] VM Started (Lifecycle Event)
Nov 25 16:36:12 compute-0 nova_compute[254092]: 2025-11-25 16:36:12.466 254096 DEBUG nova.compute.manager [None req-056cecf7-5cb2-45eb-bdbe-2b359bd1e709 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:36:12 compute-0 nova_compute[254092]: 2025-11-25 16:36:12.466 254096 DEBUG nova.objects.instance [None req-056cecf7-5cb2-45eb-bdbe-2b359bd1e709 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lazy-loading 'pci_devices' on Instance uuid a3d5d205-98f0-4820-a96c-7f3e59d0cdd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:36:12 compute-0 nova_compute[254092]: 2025-11-25 16:36:12.470 254096 DEBUG nova.compute.manager [req-461d46a4-51fa-48d0-9db5-52244f31ae9f req-2b648ab1-9752-4d36-8d4f-6c9c80e7cf81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Received event network-vif-plugged-660536bc-d4bf-4a4b-9515-06043951c25e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:36:12 compute-0 nova_compute[254092]: 2025-11-25 16:36:12.470 254096 DEBUG oslo_concurrency.lockutils [req-461d46a4-51fa-48d0-9db5-52244f31ae9f req-2b648ab1-9752-4d36-8d4f-6c9c80e7cf81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:12 compute-0 nova_compute[254092]: 2025-11-25 16:36:12.471 254096 DEBUG oslo_concurrency.lockutils [req-461d46a4-51fa-48d0-9db5-52244f31ae9f req-2b648ab1-9752-4d36-8d4f-6c9c80e7cf81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:12 compute-0 nova_compute[254092]: 2025-11-25 16:36:12.472 254096 DEBUG oslo_concurrency.lockutils [req-461d46a4-51fa-48d0-9db5-52244f31ae9f req-2b648ab1-9752-4d36-8d4f-6c9c80e7cf81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:12 compute-0 nova_compute[254092]: 2025-11-25 16:36:12.472 254096 DEBUG nova.compute.manager [req-461d46a4-51fa-48d0-9db5-52244f31ae9f req-2b648ab1-9752-4d36-8d4f-6c9c80e7cf81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] No waiting events found dispatching network-vif-plugged-660536bc-d4bf-4a4b-9515-06043951c25e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:36:12 compute-0 nova_compute[254092]: 2025-11-25 16:36:12.472 254096 WARNING nova.compute.manager [req-461d46a4-51fa-48d0-9db5-52244f31ae9f req-2b648ab1-9752-4d36-8d4f-6c9c80e7cf81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Received unexpected event network-vif-plugged-660536bc-d4bf-4a4b-9515-06043951c25e for instance with vm_state suspended and task_state resuming.
Nov 25 16:36:12 compute-0 nova_compute[254092]: 2025-11-25 16:36:12.504 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:36:12 compute-0 nova_compute[254092]: 2025-11-25 16:36:12.510 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:36:12 compute-0 nova_compute[254092]: 2025-11-25 16:36:12.519 254096 INFO nova.virt.libvirt.driver [-] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Instance running successfully.
Nov 25 16:36:12 compute-0 virtqemud[253880]: argument unsupported: QEMU guest agent is not configured
Nov 25 16:36:12 compute-0 nova_compute[254092]: 2025-11-25 16:36:12.523 254096 DEBUG nova.virt.libvirt.guest [None req-056cecf7-5cb2-45eb-bdbe-2b359bd1e709 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Nov 25 16:36:12 compute-0 nova_compute[254092]: 2025-11-25 16:36:12.523 254096 DEBUG nova.compute.manager [None req-056cecf7-5cb2-45eb-bdbe-2b359bd1e709 b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:36:12 compute-0 nova_compute[254092]: 2025-11-25 16:36:12.532 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] During sync_power_state the instance has a pending task (resuming). Skip.
Nov 25 16:36:12 compute-0 nova_compute[254092]: 2025-11-25 16:36:12.533 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088572.4390295, a3d5d205-98f0-4820-a96c-7f3e59d0cdd9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:36:12 compute-0 nova_compute[254092]: 2025-11-25 16:36:12.533 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] VM Resumed (Lifecycle Event)
Nov 25 16:36:12 compute-0 ceph-mon[74985]: pgmap v1483: 321 pgs: 321 active+clean; 167 MiB data, 528 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 2.1 MiB/s wr, 103 op/s
Nov 25 16:36:12 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1923766959' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:36:12 compute-0 nova_compute[254092]: 2025-11-25 16:36:12.558 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:36:12 compute-0 nova_compute[254092]: 2025-11-25 16:36:12.562 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:36:12 compute-0 nova_compute[254092]: 2025-11-25 16:36:12.596 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] During sync_power_state the instance has a pending task (resuming). Skip.
Nov 25 16:36:12 compute-0 podman[307284]: 2025-11-25 16:36:12.705270936 +0000 UTC m=+0.089054804 container create 62754ff8d06162540c4adf6e84e01e792b0f148520efcb2fa29e7c9422466e95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 25 16:36:12 compute-0 podman[307284]: 2025-11-25 16:36:12.643737789 +0000 UTC m=+0.027521667 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:36:12 compute-0 nova_compute[254092]: 2025-11-25 16:36:12.739 254096 DEBUG oslo_concurrency.processutils [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e549d4e8-b824-480b-b81a-83e2ea1eff12/disk.config e549d4e8-b824-480b-b81a-83e2ea1eff12_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.322s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:36:12 compute-0 nova_compute[254092]: 2025-11-25 16:36:12.740 254096 INFO nova.virt.libvirt.driver [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Deleting local config drive /var/lib/nova/instances/e549d4e8-b824-480b-b81a-83e2ea1eff12/disk.config because it was imported into RBD.
Nov 25 16:36:12 compute-0 systemd[1]: Started libpod-conmon-62754ff8d06162540c4adf6e84e01e792b0f148520efcb2fa29e7c9422466e95.scope.
Nov 25 16:36:12 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:36:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87e86859f0aeb96e23040d62d31f0a380e999b425012da5a016c66f9024dcf09/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:36:12 compute-0 kernel: tap4ac8455e-46: entered promiscuous mode
Nov 25 16:36:12 compute-0 NetworkManager[48891]: <info>  [1764088572.8049] manager: (tap4ac8455e-46): new Tun device (/org/freedesktop/NetworkManager/Devices/175)
Nov 25 16:36:12 compute-0 systemd-udevd[307157]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:36:12 compute-0 ovn_controller[153477]: 2025-11-25T16:36:12Z|00399|binding|INFO|Claiming lport 4ac8455e-46f9-4f4e-9acc-43b78589ef10 for this chassis.
Nov 25 16:36:12 compute-0 ovn_controller[153477]: 2025-11-25T16:36:12Z|00400|binding|INFO|4ac8455e-46f9-4f4e-9acc-43b78589ef10: Claiming fa:16:3e:36:7b:cf 10.100.0.6
Nov 25 16:36:12 compute-0 nova_compute[254092]: 2025-11-25 16:36:12.809 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:12.822 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:7b:cf 10.100.0.6'], port_security=['fa:16:3e:36:7b:cf 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'e549d4e8-b824-480b-b81a-83e2ea1eff12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0816ae24-275c-455e-a549-929f4eb756e7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8bd01e0913564ac783fae350d6861e24', 'neutron:revision_number': '2', 'neutron:security_group_ids': '572f99e3-d1c9-4515-9799-c37da094cf6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=24fed12c-3cf8-4d51-9b64-c89f82ee1964, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=4ac8455e-46f9-4f4e-9acc-43b78589ef10) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:36:12 compute-0 NetworkManager[48891]: <info>  [1764088572.8278] device (tap4ac8455e-46): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:36:12 compute-0 NetworkManager[48891]: <info>  [1764088572.8294] device (tap4ac8455e-46): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:36:12 compute-0 podman[307284]: 2025-11-25 16:36:12.841561008 +0000 UTC m=+0.225344906 container init 62754ff8d06162540c4adf6e84e01e792b0f148520efcb2fa29e7c9422466e95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 16:36:12 compute-0 podman[307284]: 2025-11-25 16:36:12.850721616 +0000 UTC m=+0.234505484 container start 62754ff8d06162540c4adf6e84e01e792b0f148520efcb2fa29e7c9422466e95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 25 16:36:12 compute-0 systemd-machined[216343]: New machine qemu-55-instance-0000002e.
Nov 25 16:36:12 compute-0 systemd[1]: Started Virtual Machine qemu-55-instance-0000002e.
Nov 25 16:36:12 compute-0 neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f[307306]: [NOTICE]   (307320) : New worker (307322) forked
Nov 25 16:36:12 compute-0 neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f[307306]: [NOTICE]   (307320) : Loading success.
Nov 25 16:36:12 compute-0 nova_compute[254092]: 2025-11-25 16:36:12.889 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:12 compute-0 ovn_controller[153477]: 2025-11-25T16:36:12Z|00401|binding|INFO|Setting lport 4ac8455e-46f9-4f4e-9acc-43b78589ef10 ovn-installed in OVS
Nov 25 16:36:12 compute-0 ovn_controller[153477]: 2025-11-25T16:36:12Z|00402|binding|INFO|Setting lport 4ac8455e-46f9-4f4e-9acc-43b78589ef10 up in Southbound
Nov 25 16:36:12 compute-0 nova_compute[254092]: 2025-11-25 16:36:12.898 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:12 compute-0 nova_compute[254092]: 2025-11-25 16:36:12.935 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:12.948 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 4ac8455e-46f9-4f4e-9acc-43b78589ef10 in datapath 0816ae24-275c-455e-a549-929f4eb756e7 unbound from our chassis
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:12.950 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0816ae24-275c-455e-a549-929f4eb756e7
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:12.962 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[32f6d9d2-7595-42c9-9e5e-65e96864a94f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:12.964 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0816ae24-21 in ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:12.967 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0816ae24-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:12.967 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c6fe0476-9313-4fed-ac7b-ca7156929b0e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:12.968 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[04a00b7d-6a28-4148-a494-86c2f5702feb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:12.980 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[e799283d-1ecc-4115-a3dc-44ecff044e35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:13.008 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[59da41b4-a39e-4c52-9fe0-cc0995f840cd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:13.048 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[13a328b1-6bf6-49ad-9255-8d0b1999b8af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:13.055 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cf6a077e-901b-4033-90b8-04076b32d3f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:13 compute-0 NetworkManager[48891]: <info>  [1764088573.0555] manager: (tap0816ae24-20): new Veth device (/org/freedesktop/NetworkManager/Devices/176)
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:13.094 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[cac02a0b-3970-42fb-9790-00013ffdfae6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:13.100 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[166dce64-72b7-4ccb-a420-b8e9dab14796]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:13 compute-0 NetworkManager[48891]: <info>  [1764088573.1309] device (tap0816ae24-20): carrier: link connected
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:13.141 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6f633c1f-8736-4eb0-a4fe-7325f1f1f747]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:13.160 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[20d5234c-6946-4a98-b233-1b2262dd680b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0816ae24-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:52:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 503072, 'reachable_time': 30629, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307347, 'error': None, 'target': 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:13.176 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[20e5730d-cecd-4f5a-a0b1-3906df1cc01a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feca:524c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 503072, 'tstamp': 503072}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307348, 'error': None, 'target': 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:13.194 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[46e1ad1e-1ffd-4994-a137-abfb030d1b3d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0816ae24-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:52:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 503072, 'reachable_time': 30629, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 307349, 'error': None, 'target': 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:13.233 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ad5416b8-4a6e-447d-a6d8-73869569a08d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:13.306 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4196944b-3b52-4062-ac14-3a6074e4cb68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:13.308 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0816ae24-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:13.308 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:13.309 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0816ae24-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:36:13 compute-0 nova_compute[254092]: 2025-11-25 16:36:13.311 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:13 compute-0 NetworkManager[48891]: <info>  [1764088573.3116] manager: (tap0816ae24-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/177)
Nov 25 16:36:13 compute-0 kernel: tap0816ae24-20: entered promiscuous mode
Nov 25 16:36:13 compute-0 nova_compute[254092]: 2025-11-25 16:36:13.314 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:13.317 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0816ae24-20, col_values=(('external_ids', {'iface-id': '6381e38a-44c0-40e8-bec6-3ddd886bfc49'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:36:13 compute-0 nova_compute[254092]: 2025-11-25 16:36:13.319 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:13 compute-0 ovn_controller[153477]: 2025-11-25T16:36:13Z|00403|binding|INFO|Releasing lport 6381e38a-44c0-40e8-bec6-3ddd886bfc49 from this chassis (sb_readonly=0)
Nov 25 16:36:13 compute-0 nova_compute[254092]: 2025-11-25 16:36:13.337 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:13.339 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0816ae24-275c-455e-a549-929f4eb756e7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0816ae24-275c-455e-a549-929f4eb756e7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:13.341 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[05418622-09ca-49de-9d58-9444b94b7aaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:13.342 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-0816ae24-275c-455e-a549-929f4eb756e7
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/0816ae24-275c-455e-a549-929f4eb756e7.pid.haproxy
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 0816ae24-275c-455e-a549-929f4eb756e7
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:13.344 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'env', 'PROCESS_TAG=haproxy-0816ae24-275c-455e-a549-929f4eb756e7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0816ae24-275c-455e-a549-929f4eb756e7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:36:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1484: 321 pgs: 321 active+clean; 167 MiB data, 528 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 2.1 MiB/s wr, 103 op/s
Nov 25 16:36:13 compute-0 nova_compute[254092]: 2025-11-25 16:36:13.444 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088573.4433224, e549d4e8-b824-480b-b81a-83e2ea1eff12 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:36:13 compute-0 nova_compute[254092]: 2025-11-25 16:36:13.444 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] VM Started (Lifecycle Event)
Nov 25 16:36:13 compute-0 nova_compute[254092]: 2025-11-25 16:36:13.461 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:36:13 compute-0 nova_compute[254092]: 2025-11-25 16:36:13.466 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088573.4452052, e549d4e8-b824-480b-b81a-83e2ea1eff12 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:36:13 compute-0 nova_compute[254092]: 2025-11-25 16:36:13.466 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] VM Paused (Lifecycle Event)
Nov 25 16:36:13 compute-0 nova_compute[254092]: 2025-11-25 16:36:13.482 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:36:13 compute-0 nova_compute[254092]: 2025-11-25 16:36:13.486 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:36:13 compute-0 nova_compute[254092]: 2025-11-25 16:36:13.509 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:13.613 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:13.615 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:13.616 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:13 compute-0 podman[307423]: 2025-11-25 16:36:13.795529778 +0000 UTC m=+0.072529386 container create e8196a8ea1bac3c8af38dd437aa6e881874ffcbae974e8b0267e2f6a10ddcbfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 25 16:36:13 compute-0 podman[307423]: 2025-11-25 16:36:13.751456165 +0000 UTC m=+0.028455563 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:36:13 compute-0 systemd[1]: Started libpod-conmon-e8196a8ea1bac3c8af38dd437aa6e881874ffcbae974e8b0267e2f6a10ddcbfb.scope.
Nov 25 16:36:13 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:36:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/902d37fe79f927afc8f719a0eb41b52feeeed28907f3b71bd4cc8a4842d1692e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:36:13 compute-0 podman[307423]: 2025-11-25 16:36:13.906241907 +0000 UTC m=+0.183241335 container init e8196a8ea1bac3c8af38dd437aa6e881874ffcbae974e8b0267e2f6a10ddcbfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 25 16:36:13 compute-0 podman[307423]: 2025-11-25 16:36:13.913245747 +0000 UTC m=+0.190245135 container start e8196a8ea1bac3c8af38dd437aa6e881874ffcbae974e8b0267e2f6a10ddcbfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:36:13 compute-0 neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7[307437]: [NOTICE]   (307441) : New worker (307443) forked
Nov 25 16:36:13 compute-0 neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7[307437]: [NOTICE]   (307441) : Loading success.
Nov 25 16:36:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:36:14 compute-0 ceph-mon[74985]: pgmap v1484: 321 pgs: 321 active+clean; 167 MiB data, 528 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 2.1 MiB/s wr, 103 op/s
Nov 25 16:36:14 compute-0 nova_compute[254092]: 2025-11-25 16:36:14.827 254096 DEBUG nova.compute.manager [req-c3638325-b600-496a-9f05-1582f4a9b06e req-40be8ede-b860-4889-b0b6-9d666109666f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Received event network-vif-plugged-660536bc-d4bf-4a4b-9515-06043951c25e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:36:14 compute-0 nova_compute[254092]: 2025-11-25 16:36:14.828 254096 DEBUG oslo_concurrency.lockutils [req-c3638325-b600-496a-9f05-1582f4a9b06e req-40be8ede-b860-4889-b0b6-9d666109666f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:14 compute-0 nova_compute[254092]: 2025-11-25 16:36:14.828 254096 DEBUG oslo_concurrency.lockutils [req-c3638325-b600-496a-9f05-1582f4a9b06e req-40be8ede-b860-4889-b0b6-9d666109666f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:14 compute-0 nova_compute[254092]: 2025-11-25 16:36:14.828 254096 DEBUG oslo_concurrency.lockutils [req-c3638325-b600-496a-9f05-1582f4a9b06e req-40be8ede-b860-4889-b0b6-9d666109666f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:14 compute-0 nova_compute[254092]: 2025-11-25 16:36:14.828 254096 DEBUG nova.compute.manager [req-c3638325-b600-496a-9f05-1582f4a9b06e req-40be8ede-b860-4889-b0b6-9d666109666f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] No waiting events found dispatching network-vif-plugged-660536bc-d4bf-4a4b-9515-06043951c25e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:36:14 compute-0 nova_compute[254092]: 2025-11-25 16:36:14.828 254096 WARNING nova.compute.manager [req-c3638325-b600-496a-9f05-1582f4a9b06e req-40be8ede-b860-4889-b0b6-9d666109666f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Received unexpected event network-vif-plugged-660536bc-d4bf-4a4b-9515-06043951c25e for instance with vm_state active and task_state None.
Nov 25 16:36:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1485: 321 pgs: 321 active+clean; 167 MiB data, 528 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 1.9 MiB/s wr, 105 op/s
Nov 25 16:36:16 compute-0 nova_compute[254092]: 2025-11-25 16:36:16.561 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:16 compute-0 ceph-mon[74985]: pgmap v1485: 321 pgs: 321 active+clean; 167 MiB data, 528 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 1.9 MiB/s wr, 105 op/s
Nov 25 16:36:17 compute-0 ovn_controller[153477]: 2025-11-25T16:36:17Z|00052|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:10:46:64 10.100.0.8
Nov 25 16:36:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1486: 321 pgs: 321 active+clean; 167 MiB data, 528 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 1.8 MiB/s wr, 97 op/s
Nov 25 16:36:17 compute-0 nova_compute[254092]: 2025-11-25 16:36:17.937 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:17 compute-0 nova_compute[254092]: 2025-11-25 16:36:17.943 254096 DEBUG oslo_concurrency.lockutils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "be1b8151-4e42-40db-813c-8b3b3e216949" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:17 compute-0 nova_compute[254092]: 2025-11-25 16:36:17.943 254096 DEBUG oslo_concurrency.lockutils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "be1b8151-4e42-40db-813c-8b3b3e216949" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:17 compute-0 nova_compute[254092]: 2025-11-25 16:36:17.994 254096 DEBUG nova.compute.manager [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:36:18 compute-0 nova_compute[254092]: 2025-11-25 16:36:18.126 254096 DEBUG oslo_concurrency.lockutils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:18 compute-0 nova_compute[254092]: 2025-11-25 16:36:18.127 254096 DEBUG oslo_concurrency.lockutils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:18 compute-0 nova_compute[254092]: 2025-11-25 16:36:18.134 254096 DEBUG nova.virt.hardware [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:36:18 compute-0 nova_compute[254092]: 2025-11-25 16:36:18.134 254096 INFO nova.compute.claims [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:36:18 compute-0 nova_compute[254092]: 2025-11-25 16:36:18.277 254096 DEBUG oslo_concurrency.processutils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:36:18 compute-0 ceph-mon[74985]: pgmap v1486: 321 pgs: 321 active+clean; 167 MiB data, 528 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 1.8 MiB/s wr, 97 op/s
Nov 25 16:36:18 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:36:18 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/147522495' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:36:18 compute-0 nova_compute[254092]: 2025-11-25 16:36:18.788 254096 DEBUG oslo_concurrency.processutils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:36:18 compute-0 nova_compute[254092]: 2025-11-25 16:36:18.796 254096 DEBUG nova.compute.provider_tree [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:36:18 compute-0 nova_compute[254092]: 2025-11-25 16:36:18.810 254096 DEBUG nova.scheduler.client.report [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:36:18 compute-0 nova_compute[254092]: 2025-11-25 16:36:18.865 254096 DEBUG oslo_concurrency.lockutils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:18 compute-0 nova_compute[254092]: 2025-11-25 16:36:18.866 254096 DEBUG nova.compute.manager [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:36:18 compute-0 nova_compute[254092]: 2025-11-25 16:36:18.953 254096 DEBUG nova.compute.manager [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:36:18 compute-0 nova_compute[254092]: 2025-11-25 16:36:18.954 254096 DEBUG nova.network.neutron [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:36:18 compute-0 nova_compute[254092]: 2025-11-25 16:36:18.982 254096 INFO nova.virt.libvirt.driver [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:36:19 compute-0 nova_compute[254092]: 2025-11-25 16:36:19.022 254096 DEBUG nova.compute.manager [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:36:19 compute-0 nova_compute[254092]: 2025-11-25 16:36:19.149 254096 DEBUG nova.policy [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '788efa7ba65347b69663f4ea7ba4cd9d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ff5d31ffc7934838a461d6ac8aa546c9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:36:19 compute-0 nova_compute[254092]: 2025-11-25 16:36:19.168 254096 DEBUG nova.compute.manager [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:36:19 compute-0 nova_compute[254092]: 2025-11-25 16:36:19.169 254096 DEBUG nova.virt.libvirt.driver [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:36:19 compute-0 nova_compute[254092]: 2025-11-25 16:36:19.169 254096 INFO nova.virt.libvirt.driver [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Creating image(s)
Nov 25 16:36:19 compute-0 nova_compute[254092]: 2025-11-25 16:36:19.189 254096 DEBUG nova.storage.rbd_utils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image be1b8151-4e42-40db-813c-8b3b3e216949_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:36:19 compute-0 nova_compute[254092]: 2025-11-25 16:36:19.211 254096 DEBUG nova.storage.rbd_utils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image be1b8151-4e42-40db-813c-8b3b3e216949_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:36:19 compute-0 nova_compute[254092]: 2025-11-25 16:36:19.235 254096 DEBUG nova.storage.rbd_utils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image be1b8151-4e42-40db-813c-8b3b3e216949_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:36:19 compute-0 nova_compute[254092]: 2025-11-25 16:36:19.239 254096 DEBUG oslo_concurrency.processutils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:36:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:36:19 compute-0 nova_compute[254092]: 2025-11-25 16:36:19.311 254096 DEBUG oslo_concurrency.processutils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:36:19 compute-0 nova_compute[254092]: 2025-11-25 16:36:19.312 254096 DEBUG oslo_concurrency.lockutils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:19 compute-0 nova_compute[254092]: 2025-11-25 16:36:19.313 254096 DEBUG oslo_concurrency.lockutils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:19 compute-0 nova_compute[254092]: 2025-11-25 16:36:19.313 254096 DEBUG oslo_concurrency.lockutils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:19 compute-0 nova_compute[254092]: 2025-11-25 16:36:19.333 254096 DEBUG nova.storage.rbd_utils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image be1b8151-4e42-40db-813c-8b3b3e216949_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:36:19 compute-0 nova_compute[254092]: 2025-11-25 16:36:19.337 254096 DEBUG oslo_concurrency.processutils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 be1b8151-4e42-40db-813c-8b3b3e216949_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:36:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1487: 321 pgs: 321 active+clean; 167 MiB data, 528 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 1.8 MiB/s wr, 92 op/s
Nov 25 16:36:19 compute-0 nova_compute[254092]: 2025-11-25 16:36:19.438 254096 DEBUG nova.compute.manager [req-66cecd96-062f-4b1b-834b-08b5bf7877a9 req-f2e2e66e-5868-4433-a358-be34c2ba65ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Received event network-vif-plugged-4ac8455e-46f9-4f4e-9acc-43b78589ef10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:36:19 compute-0 nova_compute[254092]: 2025-11-25 16:36:19.438 254096 DEBUG oslo_concurrency.lockutils [req-66cecd96-062f-4b1b-834b-08b5bf7877a9 req-f2e2e66e-5868-4433-a358-be34c2ba65ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e549d4e8-b824-480b-b81a-83e2ea1eff12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:19 compute-0 nova_compute[254092]: 2025-11-25 16:36:19.439 254096 DEBUG oslo_concurrency.lockutils [req-66cecd96-062f-4b1b-834b-08b5bf7877a9 req-f2e2e66e-5868-4433-a358-be34c2ba65ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e549d4e8-b824-480b-b81a-83e2ea1eff12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:19 compute-0 nova_compute[254092]: 2025-11-25 16:36:19.439 254096 DEBUG oslo_concurrency.lockutils [req-66cecd96-062f-4b1b-834b-08b5bf7877a9 req-f2e2e66e-5868-4433-a358-be34c2ba65ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e549d4e8-b824-480b-b81a-83e2ea1eff12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:19 compute-0 nova_compute[254092]: 2025-11-25 16:36:19.439 254096 DEBUG nova.compute.manager [req-66cecd96-062f-4b1b-834b-08b5bf7877a9 req-f2e2e66e-5868-4433-a358-be34c2ba65ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Processing event network-vif-plugged-4ac8455e-46f9-4f4e-9acc-43b78589ef10 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:36:19 compute-0 nova_compute[254092]: 2025-11-25 16:36:19.440 254096 DEBUG nova.compute.manager [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:36:19 compute-0 nova_compute[254092]: 2025-11-25 16:36:19.443 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088579.4434075, e549d4e8-b824-480b-b81a-83e2ea1eff12 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:36:19 compute-0 nova_compute[254092]: 2025-11-25 16:36:19.444 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] VM Resumed (Lifecycle Event)
Nov 25 16:36:19 compute-0 nova_compute[254092]: 2025-11-25 16:36:19.446 254096 DEBUG nova.virt.libvirt.driver [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:36:19 compute-0 nova_compute[254092]: 2025-11-25 16:36:19.449 254096 INFO nova.virt.libvirt.driver [-] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Instance spawned successfully.
Nov 25 16:36:19 compute-0 nova_compute[254092]: 2025-11-25 16:36:19.450 254096 DEBUG nova.virt.libvirt.driver [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:36:19 compute-0 nova_compute[254092]: 2025-11-25 16:36:19.467 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:36:19 compute-0 nova_compute[254092]: 2025-11-25 16:36:19.473 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:36:19 compute-0 nova_compute[254092]: 2025-11-25 16:36:19.476 254096 DEBUG nova.virt.libvirt.driver [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:36:19 compute-0 nova_compute[254092]: 2025-11-25 16:36:19.477 254096 DEBUG nova.virt.libvirt.driver [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:36:19 compute-0 nova_compute[254092]: 2025-11-25 16:36:19.478 254096 DEBUG nova.virt.libvirt.driver [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:36:19 compute-0 nova_compute[254092]: 2025-11-25 16:36:19.478 254096 DEBUG nova.virt.libvirt.driver [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:36:19 compute-0 nova_compute[254092]: 2025-11-25 16:36:19.478 254096 DEBUG nova.virt.libvirt.driver [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:36:19 compute-0 nova_compute[254092]: 2025-11-25 16:36:19.479 254096 DEBUG nova.virt.libvirt.driver [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:36:19 compute-0 nova_compute[254092]: 2025-11-25 16:36:19.504 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:36:19 compute-0 nova_compute[254092]: 2025-11-25 16:36:19.628 254096 INFO nova.compute.manager [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Took 13.01 seconds to spawn the instance on the hypervisor.
Nov 25 16:36:19 compute-0 nova_compute[254092]: 2025-11-25 16:36:19.629 254096 DEBUG nova.compute.manager [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:36:19 compute-0 nova_compute[254092]: 2025-11-25 16:36:19.697 254096 INFO nova.compute.manager [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Took 15.05 seconds to build instance.
Nov 25 16:36:19 compute-0 nova_compute[254092]: 2025-11-25 16:36:19.859 254096 DEBUG oslo_concurrency.lockutils [None req-23f80ead-e602-488c-b994-f09083abfdbc 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "e549d4e8-b824-480b-b81a-83e2ea1eff12" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.503s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:19 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/147522495' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:36:20 compute-0 nova_compute[254092]: 2025-11-25 16:36:20.394 254096 DEBUG nova.network.neutron [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Successfully created port: b2aa1e65-61e9-41b3-a2f4-400d17aafefb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:36:20 compute-0 ceph-mon[74985]: pgmap v1487: 321 pgs: 321 active+clean; 167 MiB data, 528 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 1.8 MiB/s wr, 92 op/s
Nov 25 16:36:21 compute-0 nova_compute[254092]: 2025-11-25 16:36:21.195 254096 DEBUG oslo_concurrency.processutils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 be1b8151-4e42-40db-813c-8b3b3e216949_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.858s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:36:21 compute-0 nova_compute[254092]: 2025-11-25 16:36:21.279 254096 DEBUG nova.storage.rbd_utils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] resizing rbd image be1b8151-4e42-40db-813c-8b3b3e216949_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:36:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1488: 321 pgs: 321 active+clean; 175 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 713 KiB/s rd, 2.4 MiB/s wr, 147 op/s
Nov 25 16:36:21 compute-0 nova_compute[254092]: 2025-11-25 16:36:21.490 254096 DEBUG nova.network.neutron [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Successfully updated port: b2aa1e65-61e9-41b3-a2f4-400d17aafefb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:36:21 compute-0 nova_compute[254092]: 2025-11-25 16:36:21.527 254096 DEBUG nova.compute.manager [req-386c4b3f-4e75-49f0-9c18-7e24905f28bb req-73ce2432-b5e2-44d3-8a98-4bcea9dd40e8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Received event network-changed-b2aa1e65-61e9-41b3-a2f4-400d17aafefb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:36:21 compute-0 nova_compute[254092]: 2025-11-25 16:36:21.528 254096 DEBUG nova.compute.manager [req-386c4b3f-4e75-49f0-9c18-7e24905f28bb req-73ce2432-b5e2-44d3-8a98-4bcea9dd40e8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Refreshing instance network info cache due to event network-changed-b2aa1e65-61e9-41b3-a2f4-400d17aafefb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:36:21 compute-0 nova_compute[254092]: 2025-11-25 16:36:21.528 254096 DEBUG oslo_concurrency.lockutils [req-386c4b3f-4e75-49f0-9c18-7e24905f28bb req-73ce2432-b5e2-44d3-8a98-4bcea9dd40e8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-be1b8151-4e42-40db-813c-8b3b3e216949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:36:21 compute-0 nova_compute[254092]: 2025-11-25 16:36:21.528 254096 DEBUG oslo_concurrency.lockutils [req-386c4b3f-4e75-49f0-9c18-7e24905f28bb req-73ce2432-b5e2-44d3-8a98-4bcea9dd40e8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-be1b8151-4e42-40db-813c-8b3b3e216949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:36:21 compute-0 nova_compute[254092]: 2025-11-25 16:36:21.529 254096 DEBUG nova.network.neutron [req-386c4b3f-4e75-49f0-9c18-7e24905f28bb req-73ce2432-b5e2-44d3-8a98-4bcea9dd40e8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Refreshing network info cache for port b2aa1e65-61e9-41b3-a2f4-400d17aafefb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:36:21 compute-0 nova_compute[254092]: 2025-11-25 16:36:21.539 254096 DEBUG oslo_concurrency.lockutils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "refresh_cache-be1b8151-4e42-40db-813c-8b3b3e216949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:36:21 compute-0 nova_compute[254092]: 2025-11-25 16:36:21.564 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:21 compute-0 nova_compute[254092]: 2025-11-25 16:36:21.689 254096 DEBUG nova.network.neutron [req-386c4b3f-4e75-49f0-9c18-7e24905f28bb req-73ce2432-b5e2-44d3-8a98-4bcea9dd40e8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:36:21 compute-0 nova_compute[254092]: 2025-11-25 16:36:21.743 254096 DEBUG nova.objects.instance [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lazy-loading 'migration_context' on Instance uuid be1b8151-4e42-40db-813c-8b3b3e216949 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:36:21 compute-0 nova_compute[254092]: 2025-11-25 16:36:21.764 254096 DEBUG nova.virt.libvirt.driver [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:36:21 compute-0 nova_compute[254092]: 2025-11-25 16:36:21.765 254096 DEBUG nova.virt.libvirt.driver [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Ensure instance console log exists: /var/lib/nova/instances/be1b8151-4e42-40db-813c-8b3b3e216949/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:36:21 compute-0 nova_compute[254092]: 2025-11-25 16:36:21.766 254096 DEBUG oslo_concurrency.lockutils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:21 compute-0 nova_compute[254092]: 2025-11-25 16:36:21.766 254096 DEBUG oslo_concurrency.lockutils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:21 compute-0 nova_compute[254092]: 2025-11-25 16:36:21.766 254096 DEBUG oslo_concurrency.lockutils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:21 compute-0 nova_compute[254092]: 2025-11-25 16:36:21.817 254096 DEBUG nova.compute.manager [req-b1bd9503-d7bc-4822-b1d6-bda5316232fa req-d3592cb0-0703-41be-a594-04c748942078 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Received event network-vif-plugged-4ac8455e-46f9-4f4e-9acc-43b78589ef10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:36:21 compute-0 nova_compute[254092]: 2025-11-25 16:36:21.818 254096 DEBUG oslo_concurrency.lockutils [req-b1bd9503-d7bc-4822-b1d6-bda5316232fa req-d3592cb0-0703-41be-a594-04c748942078 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e549d4e8-b824-480b-b81a-83e2ea1eff12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:21 compute-0 nova_compute[254092]: 2025-11-25 16:36:21.818 254096 DEBUG oslo_concurrency.lockutils [req-b1bd9503-d7bc-4822-b1d6-bda5316232fa req-d3592cb0-0703-41be-a594-04c748942078 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e549d4e8-b824-480b-b81a-83e2ea1eff12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:21 compute-0 nova_compute[254092]: 2025-11-25 16:36:21.819 254096 DEBUG oslo_concurrency.lockutils [req-b1bd9503-d7bc-4822-b1d6-bda5316232fa req-d3592cb0-0703-41be-a594-04c748942078 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e549d4e8-b824-480b-b81a-83e2ea1eff12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:21 compute-0 nova_compute[254092]: 2025-11-25 16:36:21.819 254096 DEBUG nova.compute.manager [req-b1bd9503-d7bc-4822-b1d6-bda5316232fa req-d3592cb0-0703-41be-a594-04c748942078 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] No waiting events found dispatching network-vif-plugged-4ac8455e-46f9-4f4e-9acc-43b78589ef10 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:36:21 compute-0 nova_compute[254092]: 2025-11-25 16:36:21.819 254096 WARNING nova.compute.manager [req-b1bd9503-d7bc-4822-b1d6-bda5316232fa req-d3592cb0-0703-41be-a594-04c748942078 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Received unexpected event network-vif-plugged-4ac8455e-46f9-4f4e-9acc-43b78589ef10 for instance with vm_state active and task_state image_snapshot_pending.
Nov 25 16:36:21 compute-0 nova_compute[254092]: 2025-11-25 16:36:21.823 254096 DEBUG nova.compute.manager [None req-036fa6df-fa14-425c-ad8d-c800fe4f1892 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:36:21 compute-0 nova_compute[254092]: 2025-11-25 16:36:21.878 254096 INFO nova.compute.manager [None req-036fa6df-fa14-425c-ad8d-c800fe4f1892 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] instance snapshotting
Nov 25 16:36:22 compute-0 rsyslogd[1006]: imjournal: 9270 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 25 16:36:22 compute-0 nova_compute[254092]: 2025-11-25 16:36:22.075 254096 DEBUG nova.network.neutron [req-386c4b3f-4e75-49f0-9c18-7e24905f28bb req-73ce2432-b5e2-44d3-8a98-4bcea9dd40e8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:36:22 compute-0 nova_compute[254092]: 2025-11-25 16:36:22.114 254096 DEBUG oslo_concurrency.lockutils [req-386c4b3f-4e75-49f0-9c18-7e24905f28bb req-73ce2432-b5e2-44d3-8a98-4bcea9dd40e8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-be1b8151-4e42-40db-813c-8b3b3e216949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:36:22 compute-0 nova_compute[254092]: 2025-11-25 16:36:22.115 254096 DEBUG oslo_concurrency.lockutils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquired lock "refresh_cache-be1b8151-4e42-40db-813c-8b3b3e216949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:36:22 compute-0 nova_compute[254092]: 2025-11-25 16:36:22.116 254096 DEBUG nova.network.neutron [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:36:22 compute-0 nova_compute[254092]: 2025-11-25 16:36:22.131 254096 INFO nova.virt.libvirt.driver [None req-036fa6df-fa14-425c-ad8d-c800fe4f1892 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Beginning live snapshot process
Nov 25 16:36:22 compute-0 nova_compute[254092]: 2025-11-25 16:36:22.257 254096 DEBUG nova.virt.libvirt.imagebackend [None req-036fa6df-fa14-425c-ad8d-c800fe4f1892 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] No parent info for 8b512c8e-2281-41de-a668-eb983e174ba0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 25 16:36:22 compute-0 nova_compute[254092]: 2025-11-25 16:36:22.377 254096 DEBUG nova.network.neutron [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:36:22 compute-0 nova_compute[254092]: 2025-11-25 16:36:22.502 254096 DEBUG nova.storage.rbd_utils [None req-036fa6df-fa14-425c-ad8d-c800fe4f1892 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] creating snapshot(3f3380e7e7b34d8ea4b1ffb0a0af6e38) on rbd image(e549d4e8-b824-480b-b81a-83e2ea1eff12_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 16:36:22 compute-0 nova_compute[254092]: 2025-11-25 16:36:22.941 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:22 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e187 do_prune osdmap full prune enabled
Nov 25 16:36:22 compute-0 ceph-mon[74985]: pgmap v1488: 321 pgs: 321 active+clean; 175 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 713 KiB/s rd, 2.4 MiB/s wr, 147 op/s
Nov 25 16:36:22 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e188 e188: 3 total, 3 up, 3 in
Nov 25 16:36:22 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e188: 3 total, 3 up, 3 in
Nov 25 16:36:23 compute-0 nova_compute[254092]: 2025-11-25 16:36:23.008 254096 DEBUG nova.storage.rbd_utils [None req-036fa6df-fa14-425c-ad8d-c800fe4f1892 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] cloning vms/e549d4e8-b824-480b-b81a-83e2ea1eff12_disk@3f3380e7e7b34d8ea4b1ffb0a0af6e38 to images/12e53e2d-2e42-4cb2-ac8c-9fb3fc1c4cbd clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 16:36:23 compute-0 nova_compute[254092]: 2025-11-25 16:36:23.082 254096 DEBUG nova.storage.rbd_utils [None req-036fa6df-fa14-425c-ad8d-c800fe4f1892 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] flattening images/12e53e2d-2e42-4cb2-ac8c-9fb3fc1c4cbd flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 16:36:23 compute-0 nova_compute[254092]: 2025-11-25 16:36:23.114 254096 DEBUG nova.network.neutron [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Updating instance_info_cache with network_info: [{"id": "b2aa1e65-61e9-41b3-a2f4-400d17aafefb", "address": "fa:16:3e:15:0c:af", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2aa1e65-61", "ovs_interfaceid": "b2aa1e65-61e9-41b3-a2f4-400d17aafefb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:36:23 compute-0 nova_compute[254092]: 2025-11-25 16:36:23.135 254096 DEBUG oslo_concurrency.lockutils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Releasing lock "refresh_cache-be1b8151-4e42-40db-813c-8b3b3e216949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:36:23 compute-0 nova_compute[254092]: 2025-11-25 16:36:23.136 254096 DEBUG nova.compute.manager [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Instance network_info: |[{"id": "b2aa1e65-61e9-41b3-a2f4-400d17aafefb", "address": "fa:16:3e:15:0c:af", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2aa1e65-61", "ovs_interfaceid": "b2aa1e65-61e9-41b3-a2f4-400d17aafefb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:36:23 compute-0 nova_compute[254092]: 2025-11-25 16:36:23.144 254096 DEBUG nova.virt.libvirt.driver [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Start _get_guest_xml network_info=[{"id": "b2aa1e65-61e9-41b3-a2f4-400d17aafefb", "address": "fa:16:3e:15:0c:af", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2aa1e65-61", "ovs_interfaceid": "b2aa1e65-61e9-41b3-a2f4-400d17aafefb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:36:23 compute-0 nova_compute[254092]: 2025-11-25 16:36:23.154 254096 WARNING nova.virt.libvirt.driver [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:36:23 compute-0 nova_compute[254092]: 2025-11-25 16:36:23.161 254096 DEBUG nova.virt.libvirt.host [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:36:23 compute-0 nova_compute[254092]: 2025-11-25 16:36:23.163 254096 DEBUG nova.virt.libvirt.host [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:36:23 compute-0 nova_compute[254092]: 2025-11-25 16:36:23.166 254096 DEBUG nova.virt.libvirt.host [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:36:23 compute-0 nova_compute[254092]: 2025-11-25 16:36:23.166 254096 DEBUG nova.virt.libvirt.host [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:36:23 compute-0 nova_compute[254092]: 2025-11-25 16:36:23.167 254096 DEBUG nova.virt.libvirt.driver [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:36:23 compute-0 nova_compute[254092]: 2025-11-25 16:36:23.169 254096 DEBUG nova.virt.hardware [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:36:23 compute-0 nova_compute[254092]: 2025-11-25 16:36:23.170 254096 DEBUG nova.virt.hardware [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:36:23 compute-0 nova_compute[254092]: 2025-11-25 16:36:23.171 254096 DEBUG nova.virt.hardware [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:36:23 compute-0 nova_compute[254092]: 2025-11-25 16:36:23.171 254096 DEBUG nova.virt.hardware [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:36:23 compute-0 nova_compute[254092]: 2025-11-25 16:36:23.171 254096 DEBUG nova.virt.hardware [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:36:23 compute-0 nova_compute[254092]: 2025-11-25 16:36:23.172 254096 DEBUG nova.virt.hardware [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:36:23 compute-0 nova_compute[254092]: 2025-11-25 16:36:23.172 254096 DEBUG nova.virt.hardware [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:36:23 compute-0 nova_compute[254092]: 2025-11-25 16:36:23.173 254096 DEBUG nova.virt.hardware [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:36:23 compute-0 nova_compute[254092]: 2025-11-25 16:36:23.173 254096 DEBUG nova.virt.hardware [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:36:23 compute-0 nova_compute[254092]: 2025-11-25 16:36:23.174 254096 DEBUG nova.virt.hardware [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:36:23 compute-0 nova_compute[254092]: 2025-11-25 16:36:23.174 254096 DEBUG nova.virt.hardware [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:36:23 compute-0 nova_compute[254092]: 2025-11-25 16:36:23.178 254096 DEBUG oslo_concurrency.processutils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:36:23 compute-0 nova_compute[254092]: 2025-11-25 16:36:23.310 254096 DEBUG nova.storage.rbd_utils [None req-036fa6df-fa14-425c-ad8d-c800fe4f1892 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] removing snapshot(3f3380e7e7b34d8ea4b1ffb0a0af6e38) on rbd image(e549d4e8-b824-480b-b81a-83e2ea1eff12_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 16:36:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1490: 321 pgs: 321 active+clean; 175 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 798 KiB/s rd, 746 KiB/s wr, 82 op/s
Nov 25 16:36:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:36:23 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1485946769' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:36:23 compute-0 nova_compute[254092]: 2025-11-25 16:36:23.631 254096 DEBUG oslo_concurrency.processutils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:36:23 compute-0 nova_compute[254092]: 2025-11-25 16:36:23.649 254096 DEBUG nova.storage.rbd_utils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image be1b8151-4e42-40db-813c-8b3b3e216949_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:36:23 compute-0 nova_compute[254092]: 2025-11-25 16:36:23.653 254096 DEBUG oslo_concurrency.processutils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:36:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e188 do_prune osdmap full prune enabled
Nov 25 16:36:23 compute-0 ceph-mon[74985]: osdmap e188: 3 total, 3 up, 3 in
Nov 25 16:36:23 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1485946769' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:36:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e189 e189: 3 total, 3 up, 3 in
Nov 25 16:36:23 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e189: 3 total, 3 up, 3 in
Nov 25 16:36:24 compute-0 nova_compute[254092]: 2025-11-25 16:36:24.024 254096 DEBUG nova.storage.rbd_utils [None req-036fa6df-fa14-425c-ad8d-c800fe4f1892 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] creating snapshot(snap) on rbd image(12e53e2d-2e42-4cb2-ac8c-9fb3fc1c4cbd) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 16:36:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:36:24 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4040185271' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:36:24 compute-0 nova_compute[254092]: 2025-11-25 16:36:24.161 254096 DEBUG oslo_concurrency.processutils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:36:24 compute-0 nova_compute[254092]: 2025-11-25 16:36:24.164 254096 DEBUG nova.virt.libvirt.vif [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:36:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1994523502',display_name='tempest-DeleteServersTestJSON-server-1994523502',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1994523502',id=47,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff5d31ffc7934838a461d6ac8aa546c9',ramdisk_id='',reservation_id='r-ahiysc8o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-150185898',owner_user_name='tempest-DeleteServersTestJSON-150185898-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:36:19Z,user_data=None,user_id='788efa7ba65347b69663f4ea7ba4cd9d',uuid=be1b8151-4e42-40db-813c-8b3b3e216949,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b2aa1e65-61e9-41b3-a2f4-400d17aafefb", "address": "fa:16:3e:15:0c:af", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2aa1e65-61", "ovs_interfaceid": "b2aa1e65-61e9-41b3-a2f4-400d17aafefb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:36:24 compute-0 nova_compute[254092]: 2025-11-25 16:36:24.165 254096 DEBUG nova.network.os_vif_util [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converting VIF {"id": "b2aa1e65-61e9-41b3-a2f4-400d17aafefb", "address": "fa:16:3e:15:0c:af", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2aa1e65-61", "ovs_interfaceid": "b2aa1e65-61e9-41b3-a2f4-400d17aafefb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:36:24 compute-0 nova_compute[254092]: 2025-11-25 16:36:24.167 254096 DEBUG nova.network.os_vif_util [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:0c:af,bridge_name='br-int',has_traffic_filtering=True,id=b2aa1e65-61e9-41b3-a2f4-400d17aafefb,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2aa1e65-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:36:24 compute-0 nova_compute[254092]: 2025-11-25 16:36:24.168 254096 DEBUG nova.objects.instance [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lazy-loading 'pci_devices' on Instance uuid be1b8151-4e42-40db-813c-8b3b3e216949 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:36:24 compute-0 nova_compute[254092]: 2025-11-25 16:36:24.186 254096 DEBUG nova.virt.libvirt.driver [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:36:24 compute-0 nova_compute[254092]:   <uuid>be1b8151-4e42-40db-813c-8b3b3e216949</uuid>
Nov 25 16:36:24 compute-0 nova_compute[254092]:   <name>instance-0000002f</name>
Nov 25 16:36:24 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:36:24 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:36:24 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:36:24 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:       <nova:name>tempest-DeleteServersTestJSON-server-1994523502</nova:name>
Nov 25 16:36:24 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:36:23</nova:creationTime>
Nov 25 16:36:24 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:36:24 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:36:24 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:36:24 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:36:24 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:36:24 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:36:24 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:36:24 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:36:24 compute-0 nova_compute[254092]:         <nova:user uuid="788efa7ba65347b69663f4ea7ba4cd9d">tempest-DeleteServersTestJSON-150185898-project-member</nova:user>
Nov 25 16:36:24 compute-0 nova_compute[254092]:         <nova:project uuid="ff5d31ffc7934838a461d6ac8aa546c9">tempest-DeleteServersTestJSON-150185898</nova:project>
Nov 25 16:36:24 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:36:24 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:36:24 compute-0 nova_compute[254092]:         <nova:port uuid="b2aa1e65-61e9-41b3-a2f4-400d17aafefb">
Nov 25 16:36:24 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:36:24 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:36:24 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:36:24 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:36:24 compute-0 nova_compute[254092]:     <system>
Nov 25 16:36:24 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:36:24 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:36:24 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:36:24 compute-0 nova_compute[254092]:       <entry name="serial">be1b8151-4e42-40db-813c-8b3b3e216949</entry>
Nov 25 16:36:24 compute-0 nova_compute[254092]:       <entry name="uuid">be1b8151-4e42-40db-813c-8b3b3e216949</entry>
Nov 25 16:36:24 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     </system>
Nov 25 16:36:24 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:36:24 compute-0 nova_compute[254092]:   <os>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:   </os>
Nov 25 16:36:24 compute-0 nova_compute[254092]:   <features>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:   </features>
Nov 25 16:36:24 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:36:24 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:36:24 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:36:24 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:36:24 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:36:24 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/be1b8151-4e42-40db-813c-8b3b3e216949_disk">
Nov 25 16:36:24 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:       </source>
Nov 25 16:36:24 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:36:24 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:36:24 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:36:24 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/be1b8151-4e42-40db-813c-8b3b3e216949_disk.config">
Nov 25 16:36:24 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:       </source>
Nov 25 16:36:24 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:36:24 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:36:24 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:36:24 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:15:0c:af"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:       <target dev="tapb2aa1e65-61"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:36:24 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/be1b8151-4e42-40db-813c-8b3b3e216949/console.log" append="off"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     <video>
Nov 25 16:36:24 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     </video>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:36:24 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:36:24 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:36:24 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:36:24 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:36:24 compute-0 nova_compute[254092]: </domain>
Nov 25 16:36:24 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:36:24 compute-0 nova_compute[254092]: 2025-11-25 16:36:24.193 254096 DEBUG nova.compute.manager [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Preparing to wait for external event network-vif-plugged-b2aa1e65-61e9-41b3-a2f4-400d17aafefb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:36:24 compute-0 nova_compute[254092]: 2025-11-25 16:36:24.194 254096 DEBUG oslo_concurrency.lockutils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "be1b8151-4e42-40db-813c-8b3b3e216949-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:24 compute-0 nova_compute[254092]: 2025-11-25 16:36:24.195 254096 DEBUG oslo_concurrency.lockutils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "be1b8151-4e42-40db-813c-8b3b3e216949-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:24 compute-0 nova_compute[254092]: 2025-11-25 16:36:24.195 254096 DEBUG oslo_concurrency.lockutils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "be1b8151-4e42-40db-813c-8b3b3e216949-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:24 compute-0 nova_compute[254092]: 2025-11-25 16:36:24.196 254096 DEBUG nova.virt.libvirt.vif [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:36:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1994523502',display_name='tempest-DeleteServersTestJSON-server-1994523502',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1994523502',id=47,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff5d31ffc7934838a461d6ac8aa546c9',ramdisk_id='',reservation_id='r-ahiysc8o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-150185898',owner_user_name='tempest-DeleteServersTestJSON-150185898-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:36:19Z,user_data=None,user_id='788efa7ba65347b69663f4ea7ba4cd9d',uuid=be1b8151-4e42-40db-813c-8b3b3e216949,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b2aa1e65-61e9-41b3-a2f4-400d17aafefb", "address": "fa:16:3e:15:0c:af", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2aa1e65-61", "ovs_interfaceid": "b2aa1e65-61e9-41b3-a2f4-400d17aafefb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:36:24 compute-0 nova_compute[254092]: 2025-11-25 16:36:24.197 254096 DEBUG nova.network.os_vif_util [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converting VIF {"id": "b2aa1e65-61e9-41b3-a2f4-400d17aafefb", "address": "fa:16:3e:15:0c:af", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2aa1e65-61", "ovs_interfaceid": "b2aa1e65-61e9-41b3-a2f4-400d17aafefb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:36:24 compute-0 nova_compute[254092]: 2025-11-25 16:36:24.198 254096 DEBUG nova.network.os_vif_util [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:0c:af,bridge_name='br-int',has_traffic_filtering=True,id=b2aa1e65-61e9-41b3-a2f4-400d17aafefb,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2aa1e65-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:36:24 compute-0 nova_compute[254092]: 2025-11-25 16:36:24.199 254096 DEBUG os_vif [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:0c:af,bridge_name='br-int',has_traffic_filtering=True,id=b2aa1e65-61e9-41b3-a2f4-400d17aafefb,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2aa1e65-61') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:36:24 compute-0 nova_compute[254092]: 2025-11-25 16:36:24.200 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:24 compute-0 nova_compute[254092]: 2025-11-25 16:36:24.201 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:36:24 compute-0 nova_compute[254092]: 2025-11-25 16:36:24.201 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:36:24 compute-0 nova_compute[254092]: 2025-11-25 16:36:24.207 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:24 compute-0 nova_compute[254092]: 2025-11-25 16:36:24.208 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb2aa1e65-61, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:36:24 compute-0 nova_compute[254092]: 2025-11-25 16:36:24.208 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb2aa1e65-61, col_values=(('external_ids', {'iface-id': 'b2aa1e65-61e9-41b3-a2f4-400d17aafefb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:15:0c:af', 'vm-uuid': 'be1b8151-4e42-40db-813c-8b3b3e216949'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:36:24 compute-0 NetworkManager[48891]: <info>  [1764088584.2118] manager: (tapb2aa1e65-61): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/178)
Nov 25 16:36:24 compute-0 nova_compute[254092]: 2025-11-25 16:36:24.214 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:24 compute-0 nova_compute[254092]: 2025-11-25 16:36:24.217 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:36:24 compute-0 nova_compute[254092]: 2025-11-25 16:36:24.220 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:24 compute-0 nova_compute[254092]: 2025-11-25 16:36:24.221 254096 INFO os_vif [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:0c:af,bridge_name='br-int',has_traffic_filtering=True,id=b2aa1e65-61e9-41b3-a2f4-400d17aafefb,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2aa1e65-61')
Nov 25 16:36:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:36:24 compute-0 nova_compute[254092]: 2025-11-25 16:36:24.290 254096 DEBUG nova.virt.libvirt.driver [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:36:24 compute-0 nova_compute[254092]: 2025-11-25 16:36:24.291 254096 DEBUG nova.virt.libvirt.driver [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:36:24 compute-0 nova_compute[254092]: 2025-11-25 16:36:24.291 254096 DEBUG nova.virt.libvirt.driver [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] No VIF found with MAC fa:16:3e:15:0c:af, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:36:24 compute-0 nova_compute[254092]: 2025-11-25 16:36:24.292 254096 INFO nova.virt.libvirt.driver [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Using config drive
Nov 25 16:36:24 compute-0 nova_compute[254092]: 2025-11-25 16:36:24.316 254096 DEBUG nova.storage.rbd_utils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image be1b8151-4e42-40db-813c-8b3b3e216949_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:36:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e189 do_prune osdmap full prune enabled
Nov 25 16:36:24 compute-0 ceph-mon[74985]: pgmap v1490: 321 pgs: 321 active+clean; 175 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 798 KiB/s rd, 746 KiB/s wr, 82 op/s
Nov 25 16:36:24 compute-0 ceph-mon[74985]: osdmap e189: 3 total, 3 up, 3 in
Nov 25 16:36:24 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4040185271' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:36:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e190 e190: 3 total, 3 up, 3 in
Nov 25 16:36:24 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e190: 3 total, 3 up, 3 in
Nov 25 16:36:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1493: 321 pgs: 321 active+clean; 236 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 8.4 MiB/s rd, 4.6 MiB/s wr, 284 op/s
Nov 25 16:36:25 compute-0 nova_compute[254092]: 2025-11-25 16:36:25.696 254096 INFO nova.virt.libvirt.driver [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Creating config drive at /var/lib/nova/instances/be1b8151-4e42-40db-813c-8b3b3e216949/disk.config
Nov 25 16:36:25 compute-0 nova_compute[254092]: 2025-11-25 16:36:25.701 254096 DEBUG oslo_concurrency.processutils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/be1b8151-4e42-40db-813c-8b3b3e216949/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_xmid3zv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:36:25 compute-0 nova_compute[254092]: 2025-11-25 16:36:25.842 254096 DEBUG oslo_concurrency.processutils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/be1b8151-4e42-40db-813c-8b3b3e216949/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_xmid3zv" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:36:25 compute-0 nova_compute[254092]: 2025-11-25 16:36:25.869 254096 DEBUG nova.storage.rbd_utils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image be1b8151-4e42-40db-813c-8b3b3e216949_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:36:25 compute-0 nova_compute[254092]: 2025-11-25 16:36:25.873 254096 DEBUG oslo_concurrency.processutils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/be1b8151-4e42-40db-813c-8b3b3e216949/disk.config be1b8151-4e42-40db-813c-8b3b3e216949_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:36:25 compute-0 ceph-mon[74985]: osdmap e190: 3 total, 3 up, 3 in
Nov 25 16:36:26 compute-0 nova_compute[254092]: 2025-11-25 16:36:26.027 254096 DEBUG oslo_concurrency.processutils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/be1b8151-4e42-40db-813c-8b3b3e216949/disk.config be1b8151-4e42-40db-813c-8b3b3e216949_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:36:26 compute-0 nova_compute[254092]: 2025-11-25 16:36:26.028 254096 INFO nova.virt.libvirt.driver [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Deleting local config drive /var/lib/nova/instances/be1b8151-4e42-40db-813c-8b3b3e216949/disk.config because it was imported into RBD.
Nov 25 16:36:26 compute-0 NetworkManager[48891]: <info>  [1764088586.0941] manager: (tapb2aa1e65-61): new Tun device (/org/freedesktop/NetworkManager/Devices/179)
Nov 25 16:36:26 compute-0 kernel: tapb2aa1e65-61: entered promiscuous mode
Nov 25 16:36:26 compute-0 nova_compute[254092]: 2025-11-25 16:36:26.132 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:26 compute-0 ovn_controller[153477]: 2025-11-25T16:36:26Z|00404|binding|INFO|Claiming lport b2aa1e65-61e9-41b3-a2f4-400d17aafefb for this chassis.
Nov 25 16:36:26 compute-0 ovn_controller[153477]: 2025-11-25T16:36:26Z|00405|binding|INFO|b2aa1e65-61e9-41b3-a2f4-400d17aafefb: Claiming fa:16:3e:15:0c:af 10.100.0.14
Nov 25 16:36:26 compute-0 systemd-udevd[307917]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:36:26 compute-0 systemd-machined[216343]: New machine qemu-56-instance-0000002f.
Nov 25 16:36:26 compute-0 NetworkManager[48891]: <info>  [1764088586.1824] device (tapb2aa1e65-61): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:36:26 compute-0 NetworkManager[48891]: <info>  [1764088586.1831] device (tapb2aa1e65-61): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:36:26 compute-0 systemd[1]: Started Virtual Machine qemu-56-instance-0000002f.
Nov 25 16:36:26 compute-0 ovn_controller[153477]: 2025-11-25T16:36:26Z|00406|binding|INFO|Setting lport b2aa1e65-61e9-41b3-a2f4-400d17aafefb ovn-installed in OVS
Nov 25 16:36:26 compute-0 nova_compute[254092]: 2025-11-25 16:36:26.228 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:26 compute-0 ovn_controller[153477]: 2025-11-25T16:36:26Z|00407|binding|INFO|Setting lport b2aa1e65-61e9-41b3-a2f4-400d17aafefb up in Southbound
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.362 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:0c:af 10.100.0.14'], port_security=['fa:16:3e:15:0c:af 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'be1b8151-4e42-40db-813c-8b3b3e216949', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e469a950-7044-48bc-a261-bc9effe2c2aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff5d31ffc7934838a461d6ac8aa546c9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dcbb2958-ef75-485e-8580-da48580c7577', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff181adb-f502-4aa2-aaa5-e1f9a56ea733, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=b2aa1e65-61e9-41b3-a2f4-400d17aafefb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.363 163338 INFO neutron.agent.ovn.metadata.agent [-] Port b2aa1e65-61e9-41b3-a2f4-400d17aafefb in datapath e469a950-7044-48bc-a261-bc9effe2c2aa bound to our chassis
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.364 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e469a950-7044-48bc-a261-bc9effe2c2aa
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.381 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7ac1f9e0-2934-498e-8b22-220cd94684c2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.382 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape469a950-71 in ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.385 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape469a950-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.385 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c19caa73-2133-4afd-a886-397e44749982]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.386 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a91fd537-27b4-40be-9f5b-856a6df08a07]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.401 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[299fbdca-d415-405b-b80d-6187c191b862]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.421 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[beebd969-999e-4009-95a9-d82fc3d919e0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:26 compute-0 nova_compute[254092]: 2025-11-25 16:36:26.459 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.461 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.463 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[4d882aa5-8b34-498a-8971-a4653b67a69b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:26 compute-0 systemd-udevd[307920]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.469 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bff4ea92-f0ce-4889-bb38-8b15932f79a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:26 compute-0 NetworkManager[48891]: <info>  [1764088586.4707] manager: (tape469a950-70): new Veth device (/org/freedesktop/NetworkManager/Devices/180)
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.510 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[720c2175-d96f-42b8-80fe-3e2154ac2804]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.515 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[7b97cb6f-e73e-4732-ae43-9209f2b61e0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:26 compute-0 NetworkManager[48891]: <info>  [1764088586.5458] device (tape469a950-70): carrier: link connected
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.553 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[3417f1a9-cb39-4643-9462-d2848e3ed914]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.574 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4dada1f8-d0f9-4f64-8ad1-b1adee6dbd06]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape469a950-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:78:e4:a6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 119], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 504413, 'reachable_time': 19143, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307951, 'error': None, 'target': 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.592 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7b148a1b-61be-42bf-808f-5dcaf253276a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe78:e4a6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 504413, 'tstamp': 504413}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307959, 'error': None, 'target': 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.618 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a0f2958c-ce62-4fe5-9d78-a1bcda9fad42]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape469a950-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:78:e4:a6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 119], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 504413, 'reachable_time': 19143, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 307971, 'error': None, 'target': 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.661 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0eecd0cf-7bfd-4db8-8c65-98e188c124e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:26 compute-0 nova_compute[254092]: 2025-11-25 16:36:26.673 254096 INFO nova.virt.libvirt.driver [None req-036fa6df-fa14-425c-ad8d-c800fe4f1892 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Snapshot image upload complete
Nov 25 16:36:26 compute-0 nova_compute[254092]: 2025-11-25 16:36:26.674 254096 INFO nova.compute.manager [None req-036fa6df-fa14-425c-ad8d-c800fe4f1892 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Took 4.79 seconds to snapshot the instance on the hypervisor.
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.740 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0dd3a109-0346-4204-8f5b-7d610ef0903b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.743 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape469a950-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.743 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.744 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape469a950-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:36:26 compute-0 NetworkManager[48891]: <info>  [1764088586.7473] manager: (tape469a950-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/181)
Nov 25 16:36:26 compute-0 kernel: tape469a950-70: entered promiscuous mode
Nov 25 16:36:26 compute-0 nova_compute[254092]: 2025-11-25 16:36:26.746 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.752 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape469a950-70, col_values=(('external_ids', {'iface-id': 'd55c2366-e0b4-4cb8-86a6-03b9430fb138'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:36:26 compute-0 nova_compute[254092]: 2025-11-25 16:36:26.754 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:26 compute-0 ovn_controller[153477]: 2025-11-25T16:36:26Z|00408|binding|INFO|Releasing lport d55c2366-e0b4-4cb8-86a6-03b9430fb138 from this chassis (sb_readonly=0)
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.758 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e469a950-7044-48bc-a261-bc9effe2c2aa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e469a950-7044-48bc-a261-bc9effe2c2aa.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.759 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a1aa85f5-d898-403a-8ff2-19c198d3f65a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.760 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-e469a950-7044-48bc-a261-bc9effe2c2aa
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/e469a950-7044-48bc-a261-bc9effe2c2aa.pid.haproxy
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID e469a950-7044-48bc-a261-bc9effe2c2aa
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:36:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:26.760 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'env', 'PROCESS_TAG=haproxy-e469a950-7044-48bc-a261-bc9effe2c2aa', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e469a950-7044-48bc-a261-bc9effe2c2aa.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:36:26 compute-0 nova_compute[254092]: 2025-11-25 16:36:26.773 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:26 compute-0 nova_compute[254092]: 2025-11-25 16:36:26.791 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088586.7909305, be1b8151-4e42-40db-813c-8b3b3e216949 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:36:26 compute-0 nova_compute[254092]: 2025-11-25 16:36:26.791 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] VM Started (Lifecycle Event)
Nov 25 16:36:26 compute-0 nova_compute[254092]: 2025-11-25 16:36:26.809 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:36:26 compute-0 nova_compute[254092]: 2025-11-25 16:36:26.813 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088586.7909877, be1b8151-4e42-40db-813c-8b3b3e216949 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:36:26 compute-0 nova_compute[254092]: 2025-11-25 16:36:26.814 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] VM Paused (Lifecycle Event)
Nov 25 16:36:26 compute-0 nova_compute[254092]: 2025-11-25 16:36:26.835 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:36:26 compute-0 nova_compute[254092]: 2025-11-25 16:36:26.838 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:36:26 compute-0 nova_compute[254092]: 2025-11-25 16:36:26.855 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:36:26 compute-0 ceph-mon[74985]: pgmap v1493: 321 pgs: 321 active+clean; 236 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 8.4 MiB/s rd, 4.6 MiB/s wr, 284 op/s
Nov 25 16:36:27 compute-0 nova_compute[254092]: 2025-11-25 16:36:27.215 254096 DEBUG nova.compute.manager [req-15999be9-f28e-46f1-bf77-90520a491d41 req-df1fada0-ef11-4463-bdfe-5c26f9580de9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Received event network-vif-plugged-b2aa1e65-61e9-41b3-a2f4-400d17aafefb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:36:27 compute-0 nova_compute[254092]: 2025-11-25 16:36:27.215 254096 DEBUG oslo_concurrency.lockutils [req-15999be9-f28e-46f1-bf77-90520a491d41 req-df1fada0-ef11-4463-bdfe-5c26f9580de9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "be1b8151-4e42-40db-813c-8b3b3e216949-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:27 compute-0 nova_compute[254092]: 2025-11-25 16:36:27.216 254096 DEBUG oslo_concurrency.lockutils [req-15999be9-f28e-46f1-bf77-90520a491d41 req-df1fada0-ef11-4463-bdfe-5c26f9580de9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "be1b8151-4e42-40db-813c-8b3b3e216949-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:27 compute-0 nova_compute[254092]: 2025-11-25 16:36:27.216 254096 DEBUG oslo_concurrency.lockutils [req-15999be9-f28e-46f1-bf77-90520a491d41 req-df1fada0-ef11-4463-bdfe-5c26f9580de9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "be1b8151-4e42-40db-813c-8b3b3e216949-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:27 compute-0 nova_compute[254092]: 2025-11-25 16:36:27.216 254096 DEBUG nova.compute.manager [req-15999be9-f28e-46f1-bf77-90520a491d41 req-df1fada0-ef11-4463-bdfe-5c26f9580de9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Processing event network-vif-plugged-b2aa1e65-61e9-41b3-a2f4-400d17aafefb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:36:27 compute-0 nova_compute[254092]: 2025-11-25 16:36:27.217 254096 DEBUG nova.compute.manager [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:36:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1494: 321 pgs: 321 active+clean; 261 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 7.2 MiB/s rd, 5.9 MiB/s wr, 287 op/s
Nov 25 16:36:27 compute-0 nova_compute[254092]: 2025-11-25 16:36:27.434 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088587.43278, be1b8151-4e42-40db-813c-8b3b3e216949 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:36:27 compute-0 nova_compute[254092]: 2025-11-25 16:36:27.436 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] VM Resumed (Lifecycle Event)
Nov 25 16:36:27 compute-0 podman[308027]: 2025-11-25 16:36:27.44995293 +0000 UTC m=+0.320320861 container create b76717ca6d1abe30b556e89ea9f93a1c1e7ec3c7c513faaa2daa39d82bb9dd39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 25 16:36:27 compute-0 nova_compute[254092]: 2025-11-25 16:36:27.452 254096 DEBUG nova.virt.libvirt.driver [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:36:27 compute-0 nova_compute[254092]: 2025-11-25 16:36:27.458 254096 INFO nova.virt.libvirt.driver [-] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Instance spawned successfully.
Nov 25 16:36:27 compute-0 nova_compute[254092]: 2025-11-25 16:36:27.458 254096 DEBUG nova.virt.libvirt.driver [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:36:27 compute-0 nova_compute[254092]: 2025-11-25 16:36:27.476 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:36:27 compute-0 nova_compute[254092]: 2025-11-25 16:36:27.484 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:36:27 compute-0 nova_compute[254092]: 2025-11-25 16:36:27.491 254096 DEBUG nova.virt.libvirt.driver [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:36:27 compute-0 nova_compute[254092]: 2025-11-25 16:36:27.492 254096 DEBUG nova.virt.libvirt.driver [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:36:27 compute-0 nova_compute[254092]: 2025-11-25 16:36:27.492 254096 DEBUG nova.virt.libvirt.driver [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:36:27 compute-0 nova_compute[254092]: 2025-11-25 16:36:27.493 254096 DEBUG nova.virt.libvirt.driver [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:36:27 compute-0 nova_compute[254092]: 2025-11-25 16:36:27.493 254096 DEBUG nova.virt.libvirt.driver [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:36:27 compute-0 nova_compute[254092]: 2025-11-25 16:36:27.494 254096 DEBUG nova.virt.libvirt.driver [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:36:27 compute-0 systemd[1]: Started libpod-conmon-b76717ca6d1abe30b556e89ea9f93a1c1e7ec3c7c513faaa2daa39d82bb9dd39.scope.
Nov 25 16:36:27 compute-0 nova_compute[254092]: 2025-11-25 16:36:27.513 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:36:27 compute-0 podman[308027]: 2025-11-25 16:36:27.421026866 +0000 UTC m=+0.291394827 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:36:27 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:36:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/865c78254137b37ca1978a730536ff10014696e33d25bc8a2e8e961241b00cca/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:36:27 compute-0 podman[308027]: 2025-11-25 16:36:27.546037107 +0000 UTC m=+0.416405078 container init b76717ca6d1abe30b556e89ea9f93a1c1e7ec3c7c513faaa2daa39d82bb9dd39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 16:36:27 compute-0 nova_compute[254092]: 2025-11-25 16:36:27.554 254096 INFO nova.compute.manager [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Took 8.39 seconds to spawn the instance on the hypervisor.
Nov 25 16:36:27 compute-0 nova_compute[254092]: 2025-11-25 16:36:27.554 254096 DEBUG nova.compute.manager [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:36:27 compute-0 podman[308027]: 2025-11-25 16:36:27.555312729 +0000 UTC m=+0.425680670 container start b76717ca6d1abe30b556e89ea9f93a1c1e7ec3c7c513faaa2daa39d82bb9dd39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 16:36:27 compute-0 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[308043]: [NOTICE]   (308047) : New worker (308049) forked
Nov 25 16:36:27 compute-0 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[308043]: [NOTICE]   (308047) : Loading success.
Nov 25 16:36:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:27.614 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 16:36:27 compute-0 nova_compute[254092]: 2025-11-25 16:36:27.614 254096 INFO nova.compute.manager [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Took 9.52 seconds to build instance.
Nov 25 16:36:27 compute-0 nova_compute[254092]: 2025-11-25 16:36:27.635 254096 DEBUG oslo_concurrency.lockutils [None req-9b421b9a-de1b-4ac9-9623-ee7fd25d89d5 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "be1b8151-4e42-40db-813c-8b3b3e216949" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:27 compute-0 nova_compute[254092]: 2025-11-25 16:36:27.944 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:28 compute-0 ceph-mon[74985]: pgmap v1494: 321 pgs: 321 active+clean; 261 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 7.2 MiB/s rd, 5.9 MiB/s wr, 287 op/s
Nov 25 16:36:29 compute-0 nova_compute[254092]: 2025-11-25 16:36:29.213 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:36:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e190 do_prune osdmap full prune enabled
Nov 25 16:36:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e191 e191: 3 total, 3 up, 3 in
Nov 25 16:36:29 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e191: 3 total, 3 up, 3 in
Nov 25 16:36:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1496: 321 pgs: 321 active+clean; 261 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 7.2 MiB/s rd, 5.9 MiB/s wr, 287 op/s
Nov 25 16:36:29 compute-0 nova_compute[254092]: 2025-11-25 16:36:29.694 254096 DEBUG nova.compute.manager [req-2e55b80c-49dd-450b-8225-2b891b16618e req-039e282a-4049-41ff-af33-f95944acad6a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Received event network-vif-plugged-b2aa1e65-61e9-41b3-a2f4-400d17aafefb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:36:29 compute-0 nova_compute[254092]: 2025-11-25 16:36:29.694 254096 DEBUG oslo_concurrency.lockutils [req-2e55b80c-49dd-450b-8225-2b891b16618e req-039e282a-4049-41ff-af33-f95944acad6a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "be1b8151-4e42-40db-813c-8b3b3e216949-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:29 compute-0 nova_compute[254092]: 2025-11-25 16:36:29.694 254096 DEBUG oslo_concurrency.lockutils [req-2e55b80c-49dd-450b-8225-2b891b16618e req-039e282a-4049-41ff-af33-f95944acad6a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "be1b8151-4e42-40db-813c-8b3b3e216949-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:29 compute-0 nova_compute[254092]: 2025-11-25 16:36:29.695 254096 DEBUG oslo_concurrency.lockutils [req-2e55b80c-49dd-450b-8225-2b891b16618e req-039e282a-4049-41ff-af33-f95944acad6a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "be1b8151-4e42-40db-813c-8b3b3e216949-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:29 compute-0 nova_compute[254092]: 2025-11-25 16:36:29.695 254096 DEBUG nova.compute.manager [req-2e55b80c-49dd-450b-8225-2b891b16618e req-039e282a-4049-41ff-af33-f95944acad6a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] No waiting events found dispatching network-vif-plugged-b2aa1e65-61e9-41b3-a2f4-400d17aafefb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:36:29 compute-0 nova_compute[254092]: 2025-11-25 16:36:29.695 254096 WARNING nova.compute.manager [req-2e55b80c-49dd-450b-8225-2b891b16618e req-039e282a-4049-41ff-af33-f95944acad6a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Received unexpected event network-vif-plugged-b2aa1e65-61e9-41b3-a2f4-400d17aafefb for instance with vm_state active and task_state None.
Nov 25 16:36:29 compute-0 nova_compute[254092]: 2025-11-25 16:36:29.957 254096 DEBUG oslo_concurrency.lockutils [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "be1b8151-4e42-40db-813c-8b3b3e216949" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:29 compute-0 nova_compute[254092]: 2025-11-25 16:36:29.959 254096 DEBUG oslo_concurrency.lockutils [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "be1b8151-4e42-40db-813c-8b3b3e216949" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:29 compute-0 nova_compute[254092]: 2025-11-25 16:36:29.959 254096 DEBUG oslo_concurrency.lockutils [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "be1b8151-4e42-40db-813c-8b3b3e216949-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:29 compute-0 nova_compute[254092]: 2025-11-25 16:36:29.960 254096 DEBUG oslo_concurrency.lockutils [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "be1b8151-4e42-40db-813c-8b3b3e216949-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:29 compute-0 nova_compute[254092]: 2025-11-25 16:36:29.960 254096 DEBUG oslo_concurrency.lockutils [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "be1b8151-4e42-40db-813c-8b3b3e216949-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:29 compute-0 nova_compute[254092]: 2025-11-25 16:36:29.963 254096 INFO nova.compute.manager [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Terminating instance
Nov 25 16:36:29 compute-0 nova_compute[254092]: 2025-11-25 16:36:29.967 254096 DEBUG nova.compute.manager [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:36:30 compute-0 ceph-mon[74985]: osdmap e191: 3 total, 3 up, 3 in
Nov 25 16:36:30 compute-0 ceph-mon[74985]: pgmap v1496: 321 pgs: 321 active+clean; 261 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 7.2 MiB/s rd, 5.9 MiB/s wr, 287 op/s
Nov 25 16:36:30 compute-0 kernel: tapb2aa1e65-61 (unregistering): left promiscuous mode
Nov 25 16:36:30 compute-0 NetworkManager[48891]: <info>  [1764088590.3895] device (tapb2aa1e65-61): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:36:30 compute-0 ovn_controller[153477]: 2025-11-25T16:36:30Z|00409|binding|INFO|Releasing lport b2aa1e65-61e9-41b3-a2f4-400d17aafefb from this chassis (sb_readonly=0)
Nov 25 16:36:30 compute-0 nova_compute[254092]: 2025-11-25 16:36:30.407 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:30 compute-0 ovn_controller[153477]: 2025-11-25T16:36:30Z|00410|binding|INFO|Setting lport b2aa1e65-61e9-41b3-a2f4-400d17aafefb down in Southbound
Nov 25 16:36:30 compute-0 ovn_controller[153477]: 2025-11-25T16:36:30Z|00411|binding|INFO|Removing iface tapb2aa1e65-61 ovn-installed in OVS
Nov 25 16:36:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:30.415 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:0c:af 10.100.0.14'], port_security=['fa:16:3e:15:0c:af 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'be1b8151-4e42-40db-813c-8b3b3e216949', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e469a950-7044-48bc-a261-bc9effe2c2aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff5d31ffc7934838a461d6ac8aa546c9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dcbb2958-ef75-485e-8580-da48580c7577', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff181adb-f502-4aa2-aaa5-e1f9a56ea733, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=b2aa1e65-61e9-41b3-a2f4-400d17aafefb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:36:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:30.417 163338 INFO neutron.agent.ovn.metadata.agent [-] Port b2aa1e65-61e9-41b3-a2f4-400d17aafefb in datapath e469a950-7044-48bc-a261-bc9effe2c2aa unbound from our chassis
Nov 25 16:36:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:30.418 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e469a950-7044-48bc-a261-bc9effe2c2aa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:36:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:30.420 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[25559ceb-f1f8-423e-95c2-d1e81c325a78]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:30.421 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa namespace which is not needed anymore
Nov 25 16:36:30 compute-0 nova_compute[254092]: 2025-11-25 16:36:30.430 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:30 compute-0 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d0000002f.scope: Deactivated successfully.
Nov 25 16:36:30 compute-0 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d0000002f.scope: Consumed 3.100s CPU time.
Nov 25 16:36:30 compute-0 systemd-machined[216343]: Machine qemu-56-instance-0000002f terminated.
Nov 25 16:36:30 compute-0 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[308043]: [NOTICE]   (308047) : haproxy version is 2.8.14-c23fe91
Nov 25 16:36:30 compute-0 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[308043]: [NOTICE]   (308047) : path to executable is /usr/sbin/haproxy
Nov 25 16:36:30 compute-0 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[308043]: [WARNING]  (308047) : Exiting Master process...
Nov 25 16:36:30 compute-0 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[308043]: [ALERT]    (308047) : Current worker (308049) exited with code 143 (Terminated)
Nov 25 16:36:30 compute-0 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[308043]: [WARNING]  (308047) : All workers exited. Exiting... (0)
Nov 25 16:36:30 compute-0 systemd[1]: libpod-b76717ca6d1abe30b556e89ea9f93a1c1e7ec3c7c513faaa2daa39d82bb9dd39.scope: Deactivated successfully.
Nov 25 16:36:30 compute-0 podman[308080]: 2025-11-25 16:36:30.592096965 +0000 UTC m=+0.064912142 container died b76717ca6d1abe30b556e89ea9f93a1c1e7ec3c7c513faaa2daa39d82bb9dd39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:36:30 compute-0 nova_compute[254092]: 2025-11-25 16:36:30.618 254096 INFO nova.virt.libvirt.driver [-] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Instance destroyed successfully.
Nov 25 16:36:30 compute-0 nova_compute[254092]: 2025-11-25 16:36:30.619 254096 DEBUG nova.objects.instance [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lazy-loading 'resources' on Instance uuid be1b8151-4e42-40db-813c-8b3b3e216949 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:36:30 compute-0 nova_compute[254092]: 2025-11-25 16:36:30.641 254096 DEBUG nova.virt.libvirt.vif [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:36:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1994523502',display_name='tempest-DeleteServersTestJSON-server-1994523502',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1994523502',id=47,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:36:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ff5d31ffc7934838a461d6ac8aa546c9',ramdisk_id='',reservation_id='r-ahiysc8o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-150185898',owner_user_name='tempest-DeleteServersTestJSON-150185898-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:36:27Z,user_data=None,user_id='788efa7ba65347b69663f4ea7ba4cd9d',uuid=be1b8151-4e42-40db-813c-8b3b3e216949,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b2aa1e65-61e9-41b3-a2f4-400d17aafefb", "address": "fa:16:3e:15:0c:af", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2aa1e65-61", "ovs_interfaceid": "b2aa1e65-61e9-41b3-a2f4-400d17aafefb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:36:30 compute-0 nova_compute[254092]: 2025-11-25 16:36:30.641 254096 DEBUG nova.network.os_vif_util [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converting VIF {"id": "b2aa1e65-61e9-41b3-a2f4-400d17aafefb", "address": "fa:16:3e:15:0c:af", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2aa1e65-61", "ovs_interfaceid": "b2aa1e65-61e9-41b3-a2f4-400d17aafefb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:36:30 compute-0 nova_compute[254092]: 2025-11-25 16:36:30.642 254096 DEBUG nova.network.os_vif_util [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:0c:af,bridge_name='br-int',has_traffic_filtering=True,id=b2aa1e65-61e9-41b3-a2f4-400d17aafefb,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2aa1e65-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:36:30 compute-0 nova_compute[254092]: 2025-11-25 16:36:30.642 254096 DEBUG os_vif [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:0c:af,bridge_name='br-int',has_traffic_filtering=True,id=b2aa1e65-61e9-41b3-a2f4-400d17aafefb,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2aa1e65-61') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:36:30 compute-0 nova_compute[254092]: 2025-11-25 16:36:30.646 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:30 compute-0 nova_compute[254092]: 2025-11-25 16:36:30.646 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb2aa1e65-61, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:36:30 compute-0 nova_compute[254092]: 2025-11-25 16:36:30.651 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:30 compute-0 nova_compute[254092]: 2025-11-25 16:36:30.653 254096 INFO os_vif [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:0c:af,bridge_name='br-int',has_traffic_filtering=True,id=b2aa1e65-61e9-41b3-a2f4-400d17aafefb,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2aa1e65-61')
Nov 25 16:36:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-865c78254137b37ca1978a730536ff10014696e33d25bc8a2e8e961241b00cca-merged.mount: Deactivated successfully.
Nov 25 16:36:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b76717ca6d1abe30b556e89ea9f93a1c1e7ec3c7c513faaa2daa39d82bb9dd39-userdata-shm.mount: Deactivated successfully.
Nov 25 16:36:30 compute-0 podman[308080]: 2025-11-25 16:36:30.678055318 +0000 UTC m=+0.150870495 container cleanup b76717ca6d1abe30b556e89ea9f93a1c1e7ec3c7c513faaa2daa39d82bb9dd39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:36:30 compute-0 systemd[1]: libpod-conmon-b76717ca6d1abe30b556e89ea9f93a1c1e7ec3c7c513faaa2daa39d82bb9dd39.scope: Deactivated successfully.
Nov 25 16:36:30 compute-0 podman[308118]: 2025-11-25 16:36:30.757086341 +0000 UTC m=+0.083495766 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 25 16:36:30 compute-0 podman[308147]: 2025-11-25 16:36:30.807346885 +0000 UTC m=+0.098390040 container remove b76717ca6d1abe30b556e89ea9f93a1c1e7ec3c7c513faaa2daa39d82bb9dd39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:36:30 compute-0 podman[308126]: 2025-11-25 16:36:30.813032619 +0000 UTC m=+0.117819487 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 16:36:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:30.813 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[11d86504-3077-4f15-b045-65efb721dcf3]: (4, ('Tue Nov 25 04:36:30 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa (b76717ca6d1abe30b556e89ea9f93a1c1e7ec3c7c513faaa2daa39d82bb9dd39)\nb76717ca6d1abe30b556e89ea9f93a1c1e7ec3c7c513faaa2daa39d82bb9dd39\nTue Nov 25 04:36:30 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa (b76717ca6d1abe30b556e89ea9f93a1c1e7ec3c7c513faaa2daa39d82bb9dd39)\nb76717ca6d1abe30b556e89ea9f93a1c1e7ec3c7c513faaa2daa39d82bb9dd39\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:30.858 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2636a78f-c9a1-4253-b78b-211516a69b1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:30.860 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape469a950-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:36:30 compute-0 kernel: tape469a950-70: left promiscuous mode
Nov 25 16:36:30 compute-0 nova_compute[254092]: 2025-11-25 16:36:30.863 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:30 compute-0 nova_compute[254092]: 2025-11-25 16:36:30.886 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:30 compute-0 podman[308141]: 2025-11-25 16:36:30.893161253 +0000 UTC m=+0.191970199 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 16:36:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:30.897 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[34f0a325-8436-476c-9463-4fe5d43abc09]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:30 compute-0 nova_compute[254092]: 2025-11-25 16:36:30.924 254096 DEBUG oslo_concurrency.lockutils [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Acquiring lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:30 compute-0 nova_compute[254092]: 2025-11-25 16:36:30.925 254096 DEBUG oslo_concurrency.lockutils [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:30 compute-0 nova_compute[254092]: 2025-11-25 16:36:30.925 254096 DEBUG oslo_concurrency.lockutils [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Acquiring lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:30 compute-0 nova_compute[254092]: 2025-11-25 16:36:30.926 254096 DEBUG oslo_concurrency.lockutils [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:30 compute-0 nova_compute[254092]: 2025-11-25 16:36:30.926 254096 DEBUG oslo_concurrency.lockutils [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:30 compute-0 nova_compute[254092]: 2025-11-25 16:36:30.928 254096 INFO nova.compute.manager [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Terminating instance
Nov 25 16:36:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:30.928 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9f75fd5e-0c65-4c6c-95db-29fdf367dab5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:30 compute-0 nova_compute[254092]: 2025-11-25 16:36:30.929 254096 DEBUG nova.compute.manager [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:36:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:30.930 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4c73a7c2-3470-4ed4-beec-421894c27d40]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:30.957 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9635bdb6-44e8-4b0d-8b9c-d6771ee3111e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 504404, 'reachable_time': 27699, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308215, 'error': None, 'target': 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:30.961 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:36:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:30.961 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[65fac341-9db2-474f-8f01-ba05ec35dae4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:30 compute-0 systemd[1]: run-netns-ovnmeta\x2de469a950\x2d7044\x2d48bc\x2da261\x2dbc9effe2c2aa.mount: Deactivated successfully.
Nov 25 16:36:31 compute-0 kernel: tap660536bc-d4 (unregistering): left promiscuous mode
Nov 25 16:36:31 compute-0 NetworkManager[48891]: <info>  [1764088591.0780] device (tap660536bc-d4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:36:31 compute-0 nova_compute[254092]: 2025-11-25 16:36:31.083 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:31 compute-0 nova_compute[254092]: 2025-11-25 16:36:31.086 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:31 compute-0 ovn_controller[153477]: 2025-11-25T16:36:31Z|00412|binding|INFO|Releasing lport 660536bc-d4bf-4a4b-9515-06043951c25e from this chassis (sb_readonly=0)
Nov 25 16:36:31 compute-0 ovn_controller[153477]: 2025-11-25T16:36:31Z|00413|binding|INFO|Setting lport 660536bc-d4bf-4a4b-9515-06043951c25e down in Southbound
Nov 25 16:36:31 compute-0 ovn_controller[153477]: 2025-11-25T16:36:31Z|00414|binding|INFO|Removing iface tap660536bc-d4 ovn-installed in OVS
Nov 25 16:36:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:31.096 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:46:64 10.100.0.8'], port_security=['fa:16:3e:10:46:64 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'a3d5d205-98f0-4820-a96c-7f3e59d0cdd9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3960d4c5-60d7-49e3-b26d-f1317dd96f9f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2d7c4dbc1eb44f39aa7ccb9b6363e554', 'neutron:revision_number': '11', 'neutron:security_group_ids': '11c41536-eaec-4c18-861e-ff1c71d0a6fa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9a9fa8a-e323-43f7-a155-b2b5010363da, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=660536bc-d4bf-4a4b-9515-06043951c25e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:36:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:31.097 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 660536bc-d4bf-4a4b-9515-06043951c25e in datapath 3960d4c5-60d7-49e3-b26d-f1317dd96f9f unbound from our chassis
Nov 25 16:36:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:31.099 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3960d4c5-60d7-49e3-b26d-f1317dd96f9f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:36:31 compute-0 nova_compute[254092]: 2025-11-25 16:36:31.109 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:31.110 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[670526fb-7c65-4215-9207-cd6fc444df94]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:31.111 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f namespace which is not needed anymore
Nov 25 16:36:31 compute-0 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000025.scope: Deactivated successfully.
Nov 25 16:36:31 compute-0 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000025.scope: Consumed 5.456s CPU time.
Nov 25 16:36:31 compute-0 systemd-machined[216343]: Machine qemu-54-instance-00000025 terminated.
Nov 25 16:36:31 compute-0 NetworkManager[48891]: <info>  [1764088591.3537] manager: (tap660536bc-d4): new Tun device (/org/freedesktop/NetworkManager/Devices/182)
Nov 25 16:36:31 compute-0 nova_compute[254092]: 2025-11-25 16:36:31.371 254096 INFO nova.virt.libvirt.driver [-] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Instance destroyed successfully.
Nov 25 16:36:31 compute-0 nova_compute[254092]: 2025-11-25 16:36:31.372 254096 DEBUG nova.objects.instance [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lazy-loading 'resources' on Instance uuid a3d5d205-98f0-4820-a96c-7f3e59d0cdd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:36:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1497: 321 pgs: 321 active+clean; 262 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 8.9 MiB/s rd, 4.8 MiB/s wr, 357 op/s
Nov 25 16:36:31 compute-0 neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f[307306]: [NOTICE]   (307320) : haproxy version is 2.8.14-c23fe91
Nov 25 16:36:31 compute-0 neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f[307306]: [NOTICE]   (307320) : path to executable is /usr/sbin/haproxy
Nov 25 16:36:31 compute-0 neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f[307306]: [WARNING]  (307320) : Exiting Master process...
Nov 25 16:36:31 compute-0 neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f[307306]: [WARNING]  (307320) : Exiting Master process...
Nov 25 16:36:31 compute-0 neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f[307306]: [ALERT]    (307320) : Current worker (307322) exited with code 143 (Terminated)
Nov 25 16:36:31 compute-0 neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f[307306]: [WARNING]  (307320) : All workers exited. Exiting... (0)
Nov 25 16:36:31 compute-0 systemd[1]: libpod-62754ff8d06162540c4adf6e84e01e792b0f148520efcb2fa29e7c9422466e95.scope: Deactivated successfully.
Nov 25 16:36:31 compute-0 podman[308238]: 2025-11-25 16:36:31.420794848 +0000 UTC m=+0.196860222 container died 62754ff8d06162540c4adf6e84e01e792b0f148520efcb2fa29e7c9422466e95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 16:36:31 compute-0 nova_compute[254092]: 2025-11-25 16:36:31.435 254096 DEBUG nova.virt.libvirt.vif [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:33:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-2038779180',display_name='tempest-ServersNegativeTestJSON-server-2038779180',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-2038779180',id=37,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:35:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2d7c4dbc1eb44f39aa7ccb9b6363e554',ramdisk_id='',reservation_id='r-j2n2s8mj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_
model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-549107942',owner_user_name='tempest-ServersNegativeTestJSON-549107942-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:36:12Z,user_data=None,user_id='b228702c02db4cb69105bb4c939c15d7',uuid=a3d5d205-98f0-4820-a96c-7f3e59d0cdd9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "660536bc-d4bf-4a4b-9515-06043951c25e", "address": "fa:16:3e:10:46:64", "network": {"id": "3960d4c5-60d7-49e3-b26d-f1317dd96f9f", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1770094643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d7c4dbc1eb44f39aa7ccb9b6363e554", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660536bc-d4", "ovs_interfaceid": "660536bc-d4bf-4a4b-9515-06043951c25e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:36:31 compute-0 nova_compute[254092]: 2025-11-25 16:36:31.436 254096 DEBUG nova.network.os_vif_util [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Converting VIF {"id": "660536bc-d4bf-4a4b-9515-06043951c25e", "address": "fa:16:3e:10:46:64", "network": {"id": "3960d4c5-60d7-49e3-b26d-f1317dd96f9f", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1770094643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2d7c4dbc1eb44f39aa7ccb9b6363e554", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660536bc-d4", "ovs_interfaceid": "660536bc-d4bf-4a4b-9515-06043951c25e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:36:31 compute-0 nova_compute[254092]: 2025-11-25 16:36:31.437 254096 DEBUG nova.network.os_vif_util [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:46:64,bridge_name='br-int',has_traffic_filtering=True,id=660536bc-d4bf-4a4b-9515-06043951c25e,network=Network(3960d4c5-60d7-49e3-b26d-f1317dd96f9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap660536bc-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:36:31 compute-0 nova_compute[254092]: 2025-11-25 16:36:31.438 254096 DEBUG os_vif [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:46:64,bridge_name='br-int',has_traffic_filtering=True,id=660536bc-d4bf-4a4b-9515-06043951c25e,network=Network(3960d4c5-60d7-49e3-b26d-f1317dd96f9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap660536bc-d4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:36:31 compute-0 nova_compute[254092]: 2025-11-25 16:36:31.440 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:31 compute-0 nova_compute[254092]: 2025-11-25 16:36:31.440 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap660536bc-d4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:36:31 compute-0 nova_compute[254092]: 2025-11-25 16:36:31.442 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:31 compute-0 nova_compute[254092]: 2025-11-25 16:36:31.445 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:31 compute-0 nova_compute[254092]: 2025-11-25 16:36:31.448 254096 INFO os_vif [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:46:64,bridge_name='br-int',has_traffic_filtering=True,id=660536bc-d4bf-4a4b-9515-06043951c25e,network=Network(3960d4c5-60d7-49e3-b26d-f1317dd96f9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap660536bc-d4')
Nov 25 16:36:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:31.617 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:36:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-62754ff8d06162540c4adf6e84e01e792b0f148520efcb2fa29e7c9422466e95-userdata-shm.mount: Deactivated successfully.
Nov 25 16:36:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-87e86859f0aeb96e23040d62d31f0a380e999b425012da5a016c66f9024dcf09-merged.mount: Deactivated successfully.
Nov 25 16:36:31 compute-0 nova_compute[254092]: 2025-11-25 16:36:31.772 254096 DEBUG nova.compute.manager [req-c67c6e66-f90d-42ad-9342-989ce79e39e9 req-62a0715d-a6c4-4700-823c-28dbaaa32f91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Received event network-vif-unplugged-b2aa1e65-61e9-41b3-a2f4-400d17aafefb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:36:31 compute-0 nova_compute[254092]: 2025-11-25 16:36:31.773 254096 DEBUG oslo_concurrency.lockutils [req-c67c6e66-f90d-42ad-9342-989ce79e39e9 req-62a0715d-a6c4-4700-823c-28dbaaa32f91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "be1b8151-4e42-40db-813c-8b3b3e216949-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:31 compute-0 nova_compute[254092]: 2025-11-25 16:36:31.773 254096 DEBUG oslo_concurrency.lockutils [req-c67c6e66-f90d-42ad-9342-989ce79e39e9 req-62a0715d-a6c4-4700-823c-28dbaaa32f91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "be1b8151-4e42-40db-813c-8b3b3e216949-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:31 compute-0 nova_compute[254092]: 2025-11-25 16:36:31.773 254096 DEBUG oslo_concurrency.lockutils [req-c67c6e66-f90d-42ad-9342-989ce79e39e9 req-62a0715d-a6c4-4700-823c-28dbaaa32f91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "be1b8151-4e42-40db-813c-8b3b3e216949-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:31 compute-0 nova_compute[254092]: 2025-11-25 16:36:31.773 254096 DEBUG nova.compute.manager [req-c67c6e66-f90d-42ad-9342-989ce79e39e9 req-62a0715d-a6c4-4700-823c-28dbaaa32f91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] No waiting events found dispatching network-vif-unplugged-b2aa1e65-61e9-41b3-a2f4-400d17aafefb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:36:31 compute-0 nova_compute[254092]: 2025-11-25 16:36:31.773 254096 DEBUG nova.compute.manager [req-c67c6e66-f90d-42ad-9342-989ce79e39e9 req-62a0715d-a6c4-4700-823c-28dbaaa32f91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Received event network-vif-unplugged-b2aa1e65-61e9-41b3-a2f4-400d17aafefb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:36:31 compute-0 nova_compute[254092]: 2025-11-25 16:36:31.773 254096 DEBUG nova.compute.manager [req-c67c6e66-f90d-42ad-9342-989ce79e39e9 req-62a0715d-a6c4-4700-823c-28dbaaa32f91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Received event network-vif-plugged-b2aa1e65-61e9-41b3-a2f4-400d17aafefb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:36:31 compute-0 nova_compute[254092]: 2025-11-25 16:36:31.774 254096 DEBUG oslo_concurrency.lockutils [req-c67c6e66-f90d-42ad-9342-989ce79e39e9 req-62a0715d-a6c4-4700-823c-28dbaaa32f91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "be1b8151-4e42-40db-813c-8b3b3e216949-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:31 compute-0 nova_compute[254092]: 2025-11-25 16:36:31.774 254096 DEBUG oslo_concurrency.lockutils [req-c67c6e66-f90d-42ad-9342-989ce79e39e9 req-62a0715d-a6c4-4700-823c-28dbaaa32f91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "be1b8151-4e42-40db-813c-8b3b3e216949-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:31 compute-0 nova_compute[254092]: 2025-11-25 16:36:31.774 254096 DEBUG oslo_concurrency.lockutils [req-c67c6e66-f90d-42ad-9342-989ce79e39e9 req-62a0715d-a6c4-4700-823c-28dbaaa32f91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "be1b8151-4e42-40db-813c-8b3b3e216949-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:31 compute-0 nova_compute[254092]: 2025-11-25 16:36:31.774 254096 DEBUG nova.compute.manager [req-c67c6e66-f90d-42ad-9342-989ce79e39e9 req-62a0715d-a6c4-4700-823c-28dbaaa32f91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] No waiting events found dispatching network-vif-plugged-b2aa1e65-61e9-41b3-a2f4-400d17aafefb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:36:31 compute-0 nova_compute[254092]: 2025-11-25 16:36:31.775 254096 WARNING nova.compute.manager [req-c67c6e66-f90d-42ad-9342-989ce79e39e9 req-62a0715d-a6c4-4700-823c-28dbaaa32f91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Received unexpected event network-vif-plugged-b2aa1e65-61e9-41b3-a2f4-400d17aafefb for instance with vm_state active and task_state deleting.
Nov 25 16:36:31 compute-0 nova_compute[254092]: 2025-11-25 16:36:31.775 254096 DEBUG nova.compute.manager [req-c67c6e66-f90d-42ad-9342-989ce79e39e9 req-62a0715d-a6c4-4700-823c-28dbaaa32f91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Received event network-vif-unplugged-660536bc-d4bf-4a4b-9515-06043951c25e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:36:31 compute-0 nova_compute[254092]: 2025-11-25 16:36:31.775 254096 DEBUG oslo_concurrency.lockutils [req-c67c6e66-f90d-42ad-9342-989ce79e39e9 req-62a0715d-a6c4-4700-823c-28dbaaa32f91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:31 compute-0 nova_compute[254092]: 2025-11-25 16:36:31.775 254096 DEBUG oslo_concurrency.lockutils [req-c67c6e66-f90d-42ad-9342-989ce79e39e9 req-62a0715d-a6c4-4700-823c-28dbaaa32f91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:31 compute-0 nova_compute[254092]: 2025-11-25 16:36:31.775 254096 DEBUG oslo_concurrency.lockutils [req-c67c6e66-f90d-42ad-9342-989ce79e39e9 req-62a0715d-a6c4-4700-823c-28dbaaa32f91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:31 compute-0 nova_compute[254092]: 2025-11-25 16:36:31.776 254096 DEBUG nova.compute.manager [req-c67c6e66-f90d-42ad-9342-989ce79e39e9 req-62a0715d-a6c4-4700-823c-28dbaaa32f91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] No waiting events found dispatching network-vif-unplugged-660536bc-d4bf-4a4b-9515-06043951c25e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:36:31 compute-0 nova_compute[254092]: 2025-11-25 16:36:31.776 254096 DEBUG nova.compute.manager [req-c67c6e66-f90d-42ad-9342-989ce79e39e9 req-62a0715d-a6c4-4700-823c-28dbaaa32f91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Received event network-vif-unplugged-660536bc-d4bf-4a4b-9515-06043951c25e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:36:31 compute-0 nova_compute[254092]: 2025-11-25 16:36:31.916 254096 DEBUG oslo_concurrency.lockutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "6ca43770-6e19-4279-9bf2-c44dcc4d5260" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:31 compute-0 nova_compute[254092]: 2025-11-25 16:36:31.916 254096 DEBUG oslo_concurrency.lockutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "6ca43770-6e19-4279-9bf2-c44dcc4d5260" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:31 compute-0 podman[308238]: 2025-11-25 16:36:31.984950983 +0000 UTC m=+0.761016367 container cleanup 62754ff8d06162540c4adf6e84e01e792b0f148520efcb2fa29e7c9422466e95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 16:36:31 compute-0 nova_compute[254092]: 2025-11-25 16:36:31.991 254096 DEBUG nova.compute.manager [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:36:31 compute-0 systemd[1]: libpod-conmon-62754ff8d06162540c4adf6e84e01e792b0f148520efcb2fa29e7c9422466e95.scope: Deactivated successfully.
Nov 25 16:36:32 compute-0 nova_compute[254092]: 2025-11-25 16:36:32.086 254096 DEBUG oslo_concurrency.lockutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:32 compute-0 nova_compute[254092]: 2025-11-25 16:36:32.087 254096 DEBUG oslo_concurrency.lockutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:32 compute-0 nova_compute[254092]: 2025-11-25 16:36:32.103 254096 DEBUG nova.virt.hardware [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:36:32 compute-0 nova_compute[254092]: 2025-11-25 16:36:32.105 254096 INFO nova.compute.claims [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:36:32 compute-0 podman[308296]: 2025-11-25 16:36:32.126584126 +0000 UTC m=+0.090862877 container remove 62754ff8d06162540c4adf6e84e01e792b0f148520efcb2fa29e7c9422466e95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 16:36:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:32.136 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f08875ca-4a51-4c6a-840f-e8f06092bdba]: (4, ('Tue Nov 25 04:36:31 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f (62754ff8d06162540c4adf6e84e01e792b0f148520efcb2fa29e7c9422466e95)\n62754ff8d06162540c4adf6e84e01e792b0f148520efcb2fa29e7c9422466e95\nTue Nov 25 04:36:32 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f (62754ff8d06162540c4adf6e84e01e792b0f148520efcb2fa29e7c9422466e95)\n62754ff8d06162540c4adf6e84e01e792b0f148520efcb2fa29e7c9422466e95\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:32.140 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8fd173a2-8d34-40fd-a2b4-ca1abca2726c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:32.142 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3960d4c5-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:36:32 compute-0 nova_compute[254092]: 2025-11-25 16:36:32.145 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:32 compute-0 kernel: tap3960d4c5-60: left promiscuous mode
Nov 25 16:36:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:32.152 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[87b87eb3-14eb-422b-9d15-26ac4ca1efe0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:32 compute-0 nova_compute[254092]: 2025-11-25 16:36:32.168 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:32.167 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2a3d7e49-770e-46ae-8ab0-5719229e31cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:32.171 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[80a26766-f4f2-4286-83fd-be0f819f5a24]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:32.189 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9f1ab4b4-68ac-47f3-bc35-3126b0eac51d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 502959, 'reachable_time': 26314, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308310, 'error': None, 'target': 'ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:32.192 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3960d4c5-60d7-49e3-b26d-f1317dd96f9f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:36:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:32.192 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[ea750019-4e26-4414-9675-d6e807aa1d30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:32 compute-0 systemd[1]: run-netns-ovnmeta\x2d3960d4c5\x2d60d7\x2d49e3\x2db26d\x2df1317dd96f9f.mount: Deactivated successfully.
Nov 25 16:36:32 compute-0 nova_compute[254092]: 2025-11-25 16:36:32.298 254096 DEBUG oslo_concurrency.processutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:36:32 compute-0 nova_compute[254092]: 2025-11-25 16:36:32.438 254096 INFO nova.virt.libvirt.driver [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Deleting instance files /var/lib/nova/instances/be1b8151-4e42-40db-813c-8b3b3e216949_del
Nov 25 16:36:32 compute-0 nova_compute[254092]: 2025-11-25 16:36:32.440 254096 INFO nova.virt.libvirt.driver [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Deletion of /var/lib/nova/instances/be1b8151-4e42-40db-813c-8b3b3e216949_del complete
Nov 25 16:36:32 compute-0 nova_compute[254092]: 2025-11-25 16:36:32.500 254096 INFO nova.compute.manager [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Took 2.53 seconds to destroy the instance on the hypervisor.
Nov 25 16:36:32 compute-0 nova_compute[254092]: 2025-11-25 16:36:32.501 254096 DEBUG oslo.service.loopingcall [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:36:32 compute-0 nova_compute[254092]: 2025-11-25 16:36:32.502 254096 DEBUG nova.compute.manager [-] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:36:32 compute-0 nova_compute[254092]: 2025-11-25 16:36:32.503 254096 DEBUG nova.network.neutron [-] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:36:32 compute-0 ceph-mon[74985]: pgmap v1497: 321 pgs: 321 active+clean; 262 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 8.9 MiB/s rd, 4.8 MiB/s wr, 357 op/s
Nov 25 16:36:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:36:32 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/876505788' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:36:32 compute-0 nova_compute[254092]: 2025-11-25 16:36:32.809 254096 DEBUG oslo_concurrency.processutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:36:32 compute-0 nova_compute[254092]: 2025-11-25 16:36:32.821 254096 DEBUG nova.compute.provider_tree [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:36:32 compute-0 nova_compute[254092]: 2025-11-25 16:36:32.839 254096 DEBUG nova.scheduler.client.report [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:36:32 compute-0 nova_compute[254092]: 2025-11-25 16:36:32.871 254096 INFO nova.virt.libvirt.driver [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Deleting instance files /var/lib/nova/instances/a3d5d205-98f0-4820-a96c-7f3e59d0cdd9_del
Nov 25 16:36:32 compute-0 nova_compute[254092]: 2025-11-25 16:36:32.873 254096 INFO nova.virt.libvirt.driver [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Deletion of /var/lib/nova/instances/a3d5d205-98f0-4820-a96c-7f3e59d0cdd9_del complete
Nov 25 16:36:32 compute-0 nova_compute[254092]: 2025-11-25 16:36:32.878 254096 DEBUG oslo_concurrency.lockutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:32 compute-0 nova_compute[254092]: 2025-11-25 16:36:32.879 254096 DEBUG nova.compute.manager [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:36:32 compute-0 nova_compute[254092]: 2025-11-25 16:36:32.946 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:32 compute-0 nova_compute[254092]: 2025-11-25 16:36:32.960 254096 DEBUG nova.compute.manager [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:36:32 compute-0 nova_compute[254092]: 2025-11-25 16:36:32.961 254096 DEBUG nova.network.neutron [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:36:32 compute-0 nova_compute[254092]: 2025-11-25 16:36:32.971 254096 INFO nova.compute.manager [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Took 2.04 seconds to destroy the instance on the hypervisor.
Nov 25 16:36:32 compute-0 nova_compute[254092]: 2025-11-25 16:36:32.972 254096 DEBUG oslo.service.loopingcall [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:36:32 compute-0 nova_compute[254092]: 2025-11-25 16:36:32.973 254096 DEBUG nova.compute.manager [-] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:36:32 compute-0 nova_compute[254092]: 2025-11-25 16:36:32.973 254096 DEBUG nova.network.neutron [-] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:36:32 compute-0 nova_compute[254092]: 2025-11-25 16:36:32.982 254096 INFO nova.virt.libvirt.driver [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:36:33 compute-0 nova_compute[254092]: 2025-11-25 16:36:33.010 254096 DEBUG nova.compute.manager [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:36:33 compute-0 nova_compute[254092]: 2025-11-25 16:36:33.150 254096 DEBUG nova.compute.manager [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:36:33 compute-0 nova_compute[254092]: 2025-11-25 16:36:33.152 254096 DEBUG nova.virt.libvirt.driver [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:36:33 compute-0 nova_compute[254092]: 2025-11-25 16:36:33.153 254096 INFO nova.virt.libvirt.driver [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Creating image(s)
Nov 25 16:36:33 compute-0 nova_compute[254092]: 2025-11-25 16:36:33.176 254096 DEBUG nova.storage.rbd_utils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image 6ca43770-6e19-4279-9bf2-c44dcc4d5260_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:36:33 compute-0 nova_compute[254092]: 2025-11-25 16:36:33.200 254096 DEBUG nova.storage.rbd_utils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image 6ca43770-6e19-4279-9bf2-c44dcc4d5260_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:36:33 compute-0 nova_compute[254092]: 2025-11-25 16:36:33.222 254096 DEBUG nova.storage.rbd_utils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image 6ca43770-6e19-4279-9bf2-c44dcc4d5260_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:36:33 compute-0 nova_compute[254092]: 2025-11-25 16:36:33.227 254096 DEBUG oslo_concurrency.lockutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "34d43962814c7a60ec771694e1897a2898936965" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:33 compute-0 nova_compute[254092]: 2025-11-25 16:36:33.228 254096 DEBUG oslo_concurrency.lockutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "34d43962814c7a60ec771694e1897a2898936965" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:33 compute-0 nova_compute[254092]: 2025-11-25 16:36:33.301 254096 DEBUG nova.policy [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '75e65df891a54a2caabb073a427430b9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8bd01e0913564ac783fae350d6861e24', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:36:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1498: 321 pgs: 321 active+clean; 262 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 4.2 MiB/s wr, 315 op/s
Nov 25 16:36:33 compute-0 nova_compute[254092]: 2025-11-25 16:36:33.556 254096 DEBUG nova.network.neutron [-] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:36:33 compute-0 nova_compute[254092]: 2025-11-25 16:36:33.618 254096 INFO nova.compute.manager [-] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Took 1.11 seconds to deallocate network for instance.
Nov 25 16:36:33 compute-0 ovn_controller[153477]: 2025-11-25T16:36:33Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:36:7b:cf 10.100.0.6
Nov 25 16:36:33 compute-0 ovn_controller[153477]: 2025-11-25T16:36:33Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:36:7b:cf 10.100.0.6
Nov 25 16:36:33 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/876505788' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:36:33 compute-0 nova_compute[254092]: 2025-11-25 16:36:33.678 254096 DEBUG oslo_concurrency.lockutils [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:33 compute-0 nova_compute[254092]: 2025-11-25 16:36:33.680 254096 DEBUG oslo_concurrency.lockutils [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:33 compute-0 nova_compute[254092]: 2025-11-25 16:36:33.810 254096 DEBUG nova.virt.libvirt.imagebackend [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Image locations are: [{'url': 'rbd://d82baeae-c742-50a4-b8f6-b5257c8a2c92/images/12e53e2d-2e42-4cb2-ac8c-9fb3fc1c4cbd/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://d82baeae-c742-50a4-b8f6-b5257c8a2c92/images/12e53e2d-2e42-4cb2-ac8c-9fb3fc1c4cbd/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 25 16:36:33 compute-0 nova_compute[254092]: 2025-11-25 16:36:33.878 254096 DEBUG oslo_concurrency.processutils [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:36:33 compute-0 nova_compute[254092]: 2025-11-25 16:36:33.920 254096 DEBUG nova.compute.manager [req-e47e3249-d311-437d-bf7b-bdff4ac861c1 req-6efc4ef6-9a49-4b50-8fe7-28922653b1b5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Received event network-vif-plugged-660536bc-d4bf-4a4b-9515-06043951c25e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:36:33 compute-0 nova_compute[254092]: 2025-11-25 16:36:33.921 254096 DEBUG oslo_concurrency.lockutils [req-e47e3249-d311-437d-bf7b-bdff4ac861c1 req-6efc4ef6-9a49-4b50-8fe7-28922653b1b5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:33 compute-0 nova_compute[254092]: 2025-11-25 16:36:33.921 254096 DEBUG oslo_concurrency.lockutils [req-e47e3249-d311-437d-bf7b-bdff4ac861c1 req-6efc4ef6-9a49-4b50-8fe7-28922653b1b5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:33 compute-0 nova_compute[254092]: 2025-11-25 16:36:33.921 254096 DEBUG oslo_concurrency.lockutils [req-e47e3249-d311-437d-bf7b-bdff4ac861c1 req-6efc4ef6-9a49-4b50-8fe7-28922653b1b5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:33 compute-0 nova_compute[254092]: 2025-11-25 16:36:33.921 254096 DEBUG nova.compute.manager [req-e47e3249-d311-437d-bf7b-bdff4ac861c1 req-6efc4ef6-9a49-4b50-8fe7-28922653b1b5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] No waiting events found dispatching network-vif-plugged-660536bc-d4bf-4a4b-9515-06043951c25e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:36:33 compute-0 nova_compute[254092]: 2025-11-25 16:36:33.922 254096 WARNING nova.compute.manager [req-e47e3249-d311-437d-bf7b-bdff4ac861c1 req-6efc4ef6-9a49-4b50-8fe7-28922653b1b5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Received unexpected event network-vif-plugged-660536bc-d4bf-4a4b-9515-06043951c25e for instance with vm_state active and task_state deleting.
Nov 25 16:36:33 compute-0 nova_compute[254092]: 2025-11-25 16:36:33.922 254096 DEBUG nova.compute.manager [req-e47e3249-d311-437d-bf7b-bdff4ac861c1 req-6efc4ef6-9a49-4b50-8fe7-28922653b1b5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Received event network-vif-deleted-b2aa1e65-61e9-41b3-a2f4-400d17aafefb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:36:33 compute-0 nova_compute[254092]: 2025-11-25 16:36:33.929 254096 DEBUG nova.virt.libvirt.imagebackend [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Selected location: {'url': 'rbd://d82baeae-c742-50a4-b8f6-b5257c8a2c92/images/12e53e2d-2e42-4cb2-ac8c-9fb3fc1c4cbd/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Nov 25 16:36:33 compute-0 nova_compute[254092]: 2025-11-25 16:36:33.931 254096 DEBUG nova.storage.rbd_utils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] cloning images/12e53e2d-2e42-4cb2-ac8c-9fb3fc1c4cbd@snap to None/6ca43770-6e19-4279-9bf2-c44dcc4d5260_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 16:36:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:36:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:36:34 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2825584110' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:36:34 compute-0 nova_compute[254092]: 2025-11-25 16:36:34.398 254096 DEBUG oslo_concurrency.lockutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "34d43962814c7a60ec771694e1897a2898936965" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.170s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:34 compute-0 nova_compute[254092]: 2025-11-25 16:36:34.442 254096 DEBUG oslo_concurrency.processutils [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:36:34 compute-0 nova_compute[254092]: 2025-11-25 16:36:34.446 254096 DEBUG nova.network.neutron [-] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:36:34 compute-0 nova_compute[254092]: 2025-11-25 16:36:34.498 254096 INFO nova.compute.manager [-] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Took 1.52 seconds to deallocate network for instance.
Nov 25 16:36:34 compute-0 nova_compute[254092]: 2025-11-25 16:36:34.505 254096 DEBUG nova.compute.manager [req-2a80050c-641b-4ee4-ae60-a3e0f3c211a3 req-0f984a44-05a3-4b6d-8e33-103746b41c39 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Received event network-vif-deleted-660536bc-d4bf-4a4b-9515-06043951c25e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:36:34 compute-0 nova_compute[254092]: 2025-11-25 16:36:34.554 254096 DEBUG nova.compute.provider_tree [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:36:34 compute-0 nova_compute[254092]: 2025-11-25 16:36:34.561 254096 DEBUG oslo_concurrency.lockutils [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:34 compute-0 nova_compute[254092]: 2025-11-25 16:36:34.565 254096 DEBUG nova.objects.instance [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lazy-loading 'migration_context' on Instance uuid 6ca43770-6e19-4279-9bf2-c44dcc4d5260 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:36:34 compute-0 nova_compute[254092]: 2025-11-25 16:36:34.574 254096 DEBUG nova.scheduler.client.report [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:36:34 compute-0 nova_compute[254092]: 2025-11-25 16:36:34.587 254096 DEBUG nova.virt.libvirt.driver [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:36:34 compute-0 nova_compute[254092]: 2025-11-25 16:36:34.588 254096 DEBUG nova.virt.libvirt.driver [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Ensure instance console log exists: /var/lib/nova/instances/6ca43770-6e19-4279-9bf2-c44dcc4d5260/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:36:34 compute-0 nova_compute[254092]: 2025-11-25 16:36:34.589 254096 DEBUG oslo_concurrency.lockutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:34 compute-0 nova_compute[254092]: 2025-11-25 16:36:34.589 254096 DEBUG oslo_concurrency.lockutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:34 compute-0 nova_compute[254092]: 2025-11-25 16:36:34.589 254096 DEBUG oslo_concurrency.lockutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:34 compute-0 nova_compute[254092]: 2025-11-25 16:36:34.597 254096 DEBUG oslo_concurrency.lockutils [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.918s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:34 compute-0 nova_compute[254092]: 2025-11-25 16:36:34.600 254096 DEBUG oslo_concurrency.lockutils [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.039s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:34 compute-0 nova_compute[254092]: 2025-11-25 16:36:34.639 254096 INFO nova.scheduler.client.report [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Deleted allocations for instance be1b8151-4e42-40db-813c-8b3b3e216949
Nov 25 16:36:34 compute-0 ceph-mon[74985]: pgmap v1498: 321 pgs: 321 active+clean; 262 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 4.2 MiB/s wr, 315 op/s
Nov 25 16:36:34 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2825584110' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:36:34 compute-0 nova_compute[254092]: 2025-11-25 16:36:34.693 254096 DEBUG oslo_concurrency.processutils [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:36:34 compute-0 nova_compute[254092]: 2025-11-25 16:36:34.740 254096 DEBUG oslo_concurrency.lockutils [None req-e08b0c3f-9688-4188-97d2-ac408d10ce65 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "be1b8151-4e42-40db-813c-8b3b3e216949" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:34 compute-0 nova_compute[254092]: 2025-11-25 16:36:34.809 254096 DEBUG nova.network.neutron [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Successfully created port: a7a57913-2b29-44b6-ba43-4f56bfad86dc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:36:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:36:35 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2418472395' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:36:35 compute-0 nova_compute[254092]: 2025-11-25 16:36:35.230 254096 DEBUG oslo_concurrency.processutils [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:36:35 compute-0 nova_compute[254092]: 2025-11-25 16:36:35.240 254096 DEBUG nova.compute.provider_tree [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:36:35 compute-0 nova_compute[254092]: 2025-11-25 16:36:35.269 254096 DEBUG nova.scheduler.client.report [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:36:35 compute-0 nova_compute[254092]: 2025-11-25 16:36:35.300 254096 DEBUG oslo_concurrency.lockutils [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:35 compute-0 nova_compute[254092]: 2025-11-25 16:36:35.349 254096 INFO nova.scheduler.client.report [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Deleted allocations for instance a3d5d205-98f0-4820-a96c-7f3e59d0cdd9
Nov 25 16:36:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1499: 321 pgs: 321 active+clean; 178 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.2 MiB/s wr, 275 op/s
Nov 25 16:36:35 compute-0 nova_compute[254092]: 2025-11-25 16:36:35.688 254096 DEBUG oslo_concurrency.lockutils [None req-e6181b65-400b-4129-a7c7-be4f9766956f b228702c02db4cb69105bb4c939c15d7 2d7c4dbc1eb44f39aa7ccb9b6363e554 - - default default] Lock "a3d5d205-98f0-4820-a96c-7f3e59d0cdd9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:35 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2418472395' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:36:36 compute-0 nova_compute[254092]: 2025-11-25 16:36:36.041 254096 DEBUG nova.network.neutron [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Successfully updated port: a7a57913-2b29-44b6-ba43-4f56bfad86dc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:36:36 compute-0 nova_compute[254092]: 2025-11-25 16:36:36.105 254096 DEBUG oslo_concurrency.lockutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "refresh_cache-6ca43770-6e19-4279-9bf2-c44dcc4d5260" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:36:36 compute-0 nova_compute[254092]: 2025-11-25 16:36:36.106 254096 DEBUG oslo_concurrency.lockutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquired lock "refresh_cache-6ca43770-6e19-4279-9bf2-c44dcc4d5260" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:36:36 compute-0 nova_compute[254092]: 2025-11-25 16:36:36.106 254096 DEBUG nova.network.neutron [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:36:36 compute-0 nova_compute[254092]: 2025-11-25 16:36:36.344 254096 DEBUG nova.network.neutron [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:36:36 compute-0 nova_compute[254092]: 2025-11-25 16:36:36.483 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:36 compute-0 nova_compute[254092]: 2025-11-25 16:36:36.632 254096 DEBUG nova.compute.manager [req-ec0e3868-331c-4473-b8d3-e07b6768c316 req-e1309059-e74e-4ef4-8cb3-754c51471c6a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Received event network-changed-a7a57913-2b29-44b6-ba43-4f56bfad86dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:36:36 compute-0 nova_compute[254092]: 2025-11-25 16:36:36.633 254096 DEBUG nova.compute.manager [req-ec0e3868-331c-4473-b8d3-e07b6768c316 req-e1309059-e74e-4ef4-8cb3-754c51471c6a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Refreshing instance network info cache due to event network-changed-a7a57913-2b29-44b6-ba43-4f56bfad86dc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:36:36 compute-0 nova_compute[254092]: 2025-11-25 16:36:36.633 254096 DEBUG oslo_concurrency.lockutils [req-ec0e3868-331c-4473-b8d3-e07b6768c316 req-e1309059-e74e-4ef4-8cb3-754c51471c6a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-6ca43770-6e19-4279-9bf2-c44dcc4d5260" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:36:36 compute-0 ceph-mon[74985]: pgmap v1499: 321 pgs: 321 active+clean; 178 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.2 MiB/s wr, 275 op/s
Nov 25 16:36:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1500: 321 pgs: 321 active+clean; 167 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.6 MiB/s wr, 268 op/s
Nov 25 16:36:37 compute-0 nova_compute[254092]: 2025-11-25 16:36:37.472 254096 DEBUG nova.network.neutron [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Updating instance_info_cache with network_info: [{"id": "a7a57913-2b29-44b6-ba43-4f56bfad86dc", "address": "fa:16:3e:ff:6f:40", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7a57913-2b", "ovs_interfaceid": "a7a57913-2b29-44b6-ba43-4f56bfad86dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:36:37 compute-0 nova_compute[254092]: 2025-11-25 16:36:37.518 254096 DEBUG oslo_concurrency.lockutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Releasing lock "refresh_cache-6ca43770-6e19-4279-9bf2-c44dcc4d5260" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:36:37 compute-0 nova_compute[254092]: 2025-11-25 16:36:37.518 254096 DEBUG nova.compute.manager [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Instance network_info: |[{"id": "a7a57913-2b29-44b6-ba43-4f56bfad86dc", "address": "fa:16:3e:ff:6f:40", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7a57913-2b", "ovs_interfaceid": "a7a57913-2b29-44b6-ba43-4f56bfad86dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:36:37 compute-0 nova_compute[254092]: 2025-11-25 16:36:37.519 254096 DEBUG oslo_concurrency.lockutils [req-ec0e3868-331c-4473-b8d3-e07b6768c316 req-e1309059-e74e-4ef4-8cb3-754c51471c6a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-6ca43770-6e19-4279-9bf2-c44dcc4d5260" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:36:37 compute-0 nova_compute[254092]: 2025-11-25 16:36:37.519 254096 DEBUG nova.network.neutron [req-ec0e3868-331c-4473-b8d3-e07b6768c316 req-e1309059-e74e-4ef4-8cb3-754c51471c6a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Refreshing network info cache for port a7a57913-2b29-44b6-ba43-4f56bfad86dc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:36:37 compute-0 nova_compute[254092]: 2025-11-25 16:36:37.521 254096 DEBUG nova.virt.libvirt.driver [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Start _get_guest_xml network_info=[{"id": "a7a57913-2b29-44b6-ba43-4f56bfad86dc", "address": "fa:16:3e:ff:6f:40", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7a57913-2b", "ovs_interfaceid": "a7a57913-2b29-44b6-ba43-4f56bfad86dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-25T16:36:21Z,direct_url=<?>,disk_format='raw',id=12e53e2d-2e42-4cb2-ac8c-9fb3fc1c4cbd,min_disk=1,min_ram=0,name='tempest-test-snap-2087609465',owner='8bd01e0913564ac783fae350d6861e24',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-25T16:36:26Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '12e53e2d-2e42-4cb2-ac8c-9fb3fc1c4cbd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:36:37 compute-0 nova_compute[254092]: 2025-11-25 16:36:37.527 254096 WARNING nova.virt.libvirt.driver [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:36:37 compute-0 nova_compute[254092]: 2025-11-25 16:36:37.531 254096 DEBUG nova.virt.libvirt.host [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:36:37 compute-0 nova_compute[254092]: 2025-11-25 16:36:37.531 254096 DEBUG nova.virt.libvirt.host [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:36:37 compute-0 nova_compute[254092]: 2025-11-25 16:36:37.538 254096 DEBUG nova.virt.libvirt.host [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:36:37 compute-0 nova_compute[254092]: 2025-11-25 16:36:37.539 254096 DEBUG nova.virt.libvirt.host [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:36:37 compute-0 nova_compute[254092]: 2025-11-25 16:36:37.539 254096 DEBUG nova.virt.libvirt.driver [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:36:37 compute-0 nova_compute[254092]: 2025-11-25 16:36:37.539 254096 DEBUG nova.virt.hardware [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-25T16:36:21Z,direct_url=<?>,disk_format='raw',id=12e53e2d-2e42-4cb2-ac8c-9fb3fc1c4cbd,min_disk=1,min_ram=0,name='tempest-test-snap-2087609465',owner='8bd01e0913564ac783fae350d6861e24',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-25T16:36:26Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:36:37 compute-0 nova_compute[254092]: 2025-11-25 16:36:37.540 254096 DEBUG nova.virt.hardware [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:36:37 compute-0 nova_compute[254092]: 2025-11-25 16:36:37.540 254096 DEBUG nova.virt.hardware [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:36:37 compute-0 nova_compute[254092]: 2025-11-25 16:36:37.540 254096 DEBUG nova.virt.hardware [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:36:37 compute-0 nova_compute[254092]: 2025-11-25 16:36:37.541 254096 DEBUG nova.virt.hardware [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:36:37 compute-0 nova_compute[254092]: 2025-11-25 16:36:37.541 254096 DEBUG nova.virt.hardware [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:36:37 compute-0 nova_compute[254092]: 2025-11-25 16:36:37.541 254096 DEBUG nova.virt.hardware [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:36:37 compute-0 nova_compute[254092]: 2025-11-25 16:36:37.541 254096 DEBUG nova.virt.hardware [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:36:37 compute-0 nova_compute[254092]: 2025-11-25 16:36:37.541 254096 DEBUG nova.virt.hardware [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:36:37 compute-0 nova_compute[254092]: 2025-11-25 16:36:37.542 254096 DEBUG nova.virt.hardware [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:36:37 compute-0 nova_compute[254092]: 2025-11-25 16:36:37.542 254096 DEBUG nova.virt.hardware [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:36:37 compute-0 nova_compute[254092]: 2025-11-25 16:36:37.545 254096 DEBUG oslo_concurrency.processutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:36:37 compute-0 nova_compute[254092]: 2025-11-25 16:36:37.949 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:37 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:36:37 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/278933619' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:36:37 compute-0 nova_compute[254092]: 2025-11-25 16:36:37.998 254096 DEBUG oslo_concurrency.processutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:36:38 compute-0 nova_compute[254092]: 2025-11-25 16:36:38.023 254096 DEBUG nova.storage.rbd_utils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image 6ca43770-6e19-4279-9bf2-c44dcc4d5260_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:36:38 compute-0 nova_compute[254092]: 2025-11-25 16:36:38.027 254096 DEBUG oslo_concurrency.processutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:36:38 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:36:38 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3527864369' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:36:38 compute-0 nova_compute[254092]: 2025-11-25 16:36:38.479 254096 DEBUG oslo_concurrency.processutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:36:38 compute-0 nova_compute[254092]: 2025-11-25 16:36:38.481 254096 DEBUG nova.virt.libvirt.vif [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:36:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1002945108',display_name='tempest-ImagesTestJSON-server-1002945108',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1002945108',id=48,image_ref='12e53e2d-2e42-4cb2-ac8c-9fb3fc1c4cbd',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8bd01e0913564ac783fae350d6861e24',ramdisk_id='',reservation_id='r-szp3bkr7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model
='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='e549d4e8-b824-480b-b81a-83e2ea1eff12',image_min_disk='1',image_min_ram='0',image_owner_id='8bd01e0913564ac783fae350d6861e24',image_owner_project_name='tempest-ImagesTestJSON-217824554',image_owner_user_name='tempest-ImagesTestJSON-217824554-project-member',image_user_id='75e65df891a54a2caabb073a427430b9',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-217824554',owner_user_name='tempest-ImagesTestJSON-217824554-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:36:33Z,user_data=None,user_id='75e65df891a54a2caabb073a427430b9',uuid=6ca43770-6e19-4279-9bf2-c44dcc4d5260,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a7a57913-2b29-44b6-ba43-4f56bfad86dc", "address": "fa:16:3e:ff:6f:40", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7a57913-2b", "ovs_interfaceid": "a7a57913-2b29-44b6-ba43-4f56bfad86dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:36:38 compute-0 nova_compute[254092]: 2025-11-25 16:36:38.481 254096 DEBUG nova.network.os_vif_util [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converting VIF {"id": "a7a57913-2b29-44b6-ba43-4f56bfad86dc", "address": "fa:16:3e:ff:6f:40", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7a57913-2b", "ovs_interfaceid": "a7a57913-2b29-44b6-ba43-4f56bfad86dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:36:38 compute-0 nova_compute[254092]: 2025-11-25 16:36:38.482 254096 DEBUG nova.network.os_vif_util [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ff:6f:40,bridge_name='br-int',has_traffic_filtering=True,id=a7a57913-2b29-44b6-ba43-4f56bfad86dc,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7a57913-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:36:38 compute-0 nova_compute[254092]: 2025-11-25 16:36:38.484 254096 DEBUG nova.objects.instance [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6ca43770-6e19-4279-9bf2-c44dcc4d5260 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:36:38 compute-0 nova_compute[254092]: 2025-11-25 16:36:38.496 254096 DEBUG nova.virt.libvirt.driver [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:36:38 compute-0 nova_compute[254092]:   <uuid>6ca43770-6e19-4279-9bf2-c44dcc4d5260</uuid>
Nov 25 16:36:38 compute-0 nova_compute[254092]:   <name>instance-00000030</name>
Nov 25 16:36:38 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:36:38 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:36:38 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:36:38 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:       <nova:name>tempest-ImagesTestJSON-server-1002945108</nova:name>
Nov 25 16:36:38 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:36:37</nova:creationTime>
Nov 25 16:36:38 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:36:38 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:36:38 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:36:38 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:36:38 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:36:38 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:36:38 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:36:38 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:36:38 compute-0 nova_compute[254092]:         <nova:user uuid="75e65df891a54a2caabb073a427430b9">tempest-ImagesTestJSON-217824554-project-member</nova:user>
Nov 25 16:36:38 compute-0 nova_compute[254092]:         <nova:project uuid="8bd01e0913564ac783fae350d6861e24">tempest-ImagesTestJSON-217824554</nova:project>
Nov 25 16:36:38 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:36:38 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="12e53e2d-2e42-4cb2-ac8c-9fb3fc1c4cbd"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:36:38 compute-0 nova_compute[254092]:         <nova:port uuid="a7a57913-2b29-44b6-ba43-4f56bfad86dc">
Nov 25 16:36:38 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:36:38 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:36:38 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:36:38 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:36:38 compute-0 nova_compute[254092]:     <system>
Nov 25 16:36:38 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:36:38 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:36:38 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:36:38 compute-0 nova_compute[254092]:       <entry name="serial">6ca43770-6e19-4279-9bf2-c44dcc4d5260</entry>
Nov 25 16:36:38 compute-0 nova_compute[254092]:       <entry name="uuid">6ca43770-6e19-4279-9bf2-c44dcc4d5260</entry>
Nov 25 16:36:38 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     </system>
Nov 25 16:36:38 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:36:38 compute-0 nova_compute[254092]:   <os>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:   </os>
Nov 25 16:36:38 compute-0 nova_compute[254092]:   <features>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:   </features>
Nov 25 16:36:38 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:36:38 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:36:38 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:36:38 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:36:38 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:36:38 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/6ca43770-6e19-4279-9bf2-c44dcc4d5260_disk">
Nov 25 16:36:38 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:       </source>
Nov 25 16:36:38 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:36:38 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:36:38 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:36:38 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/6ca43770-6e19-4279-9bf2-c44dcc4d5260_disk.config">
Nov 25 16:36:38 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:       </source>
Nov 25 16:36:38 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:36:38 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:36:38 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:36:38 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:ff:6f:40"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:       <target dev="tapa7a57913-2b"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:36:38 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/6ca43770-6e19-4279-9bf2-c44dcc4d5260/console.log" append="off"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     <video>
Nov 25 16:36:38 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     </video>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     <input type="keyboard" bus="usb"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:36:38 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:36:38 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:36:38 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:36:38 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:36:38 compute-0 nova_compute[254092]: </domain>
Nov 25 16:36:38 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:36:38 compute-0 nova_compute[254092]: 2025-11-25 16:36:38.498 254096 DEBUG nova.compute.manager [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Preparing to wait for external event network-vif-plugged-a7a57913-2b29-44b6-ba43-4f56bfad86dc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:36:38 compute-0 nova_compute[254092]: 2025-11-25 16:36:38.498 254096 DEBUG oslo_concurrency.lockutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "6ca43770-6e19-4279-9bf2-c44dcc4d5260-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:38 compute-0 nova_compute[254092]: 2025-11-25 16:36:38.499 254096 DEBUG oslo_concurrency.lockutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "6ca43770-6e19-4279-9bf2-c44dcc4d5260-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:38 compute-0 nova_compute[254092]: 2025-11-25 16:36:38.499 254096 DEBUG oslo_concurrency.lockutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "6ca43770-6e19-4279-9bf2-c44dcc4d5260-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:38 compute-0 nova_compute[254092]: 2025-11-25 16:36:38.500 254096 DEBUG nova.virt.libvirt.vif [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:36:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1002945108',display_name='tempest-ImagesTestJSON-server-1002945108',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1002945108',id=48,image_ref='12e53e2d-2e42-4cb2-ac8c-9fb3fc1c4cbd',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8bd01e0913564ac783fae350d6861e24',ramdisk_id='',reservation_id='r-szp3bkr7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw
_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='e549d4e8-b824-480b-b81a-83e2ea1eff12',image_min_disk='1',image_min_ram='0',image_owner_id='8bd01e0913564ac783fae350d6861e24',image_owner_project_name='tempest-ImagesTestJSON-217824554',image_owner_user_name='tempest-ImagesTestJSON-217824554-project-member',image_user_id='75e65df891a54a2caabb073a427430b9',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-217824554',owner_user_name='tempest-ImagesTestJSON-217824554-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:36:33Z,user_data=None,user_id='75e65df891a54a2caabb073a427430b9',uuid=6ca43770-6e19-4279-9bf2-c44dcc4d5260,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a7a57913-2b29-44b6-ba43-4f56bfad86dc", "address": "fa:16:3e:ff:6f:40", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7a57913-2b", "ovs_interfaceid": "a7a57913-2b29-44b6-ba43-4f56bfad86dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:36:38 compute-0 nova_compute[254092]: 2025-11-25 16:36:38.500 254096 DEBUG nova.network.os_vif_util [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converting VIF {"id": "a7a57913-2b29-44b6-ba43-4f56bfad86dc", "address": "fa:16:3e:ff:6f:40", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7a57913-2b", "ovs_interfaceid": "a7a57913-2b29-44b6-ba43-4f56bfad86dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:36:38 compute-0 nova_compute[254092]: 2025-11-25 16:36:38.500 254096 DEBUG nova.network.os_vif_util [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ff:6f:40,bridge_name='br-int',has_traffic_filtering=True,id=a7a57913-2b29-44b6-ba43-4f56bfad86dc,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7a57913-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:36:38 compute-0 nova_compute[254092]: 2025-11-25 16:36:38.501 254096 DEBUG os_vif [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:6f:40,bridge_name='br-int',has_traffic_filtering=True,id=a7a57913-2b29-44b6-ba43-4f56bfad86dc,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7a57913-2b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:36:38 compute-0 nova_compute[254092]: 2025-11-25 16:36:38.501 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:38 compute-0 nova_compute[254092]: 2025-11-25 16:36:38.502 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:36:38 compute-0 nova_compute[254092]: 2025-11-25 16:36:38.502 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:36:38 compute-0 nova_compute[254092]: 2025-11-25 16:36:38.504 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:38 compute-0 nova_compute[254092]: 2025-11-25 16:36:38.504 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa7a57913-2b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:36:38 compute-0 nova_compute[254092]: 2025-11-25 16:36:38.505 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa7a57913-2b, col_values=(('external_ids', {'iface-id': 'a7a57913-2b29-44b6-ba43-4f56bfad86dc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ff:6f:40', 'vm-uuid': '6ca43770-6e19-4279-9bf2-c44dcc4d5260'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:36:38 compute-0 nova_compute[254092]: 2025-11-25 16:36:38.506 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:38 compute-0 NetworkManager[48891]: <info>  [1764088598.5071] manager: (tapa7a57913-2b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/183)
Nov 25 16:36:38 compute-0 nova_compute[254092]: 2025-11-25 16:36:38.514 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:36:38 compute-0 nova_compute[254092]: 2025-11-25 16:36:38.516 254096 INFO os_vif [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:6f:40,bridge_name='br-int',has_traffic_filtering=True,id=a7a57913-2b29-44b6-ba43-4f56bfad86dc,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7a57913-2b')
Nov 25 16:36:38 compute-0 nova_compute[254092]: 2025-11-25 16:36:38.581 254096 DEBUG nova.virt.libvirt.driver [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:36:38 compute-0 nova_compute[254092]: 2025-11-25 16:36:38.582 254096 DEBUG nova.virt.libvirt.driver [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:36:38 compute-0 nova_compute[254092]: 2025-11-25 16:36:38.582 254096 DEBUG nova.virt.libvirt.driver [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] No VIF found with MAC fa:16:3e:ff:6f:40, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:36:38 compute-0 nova_compute[254092]: 2025-11-25 16:36:38.583 254096 INFO nova.virt.libvirt.driver [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Using config drive
Nov 25 16:36:38 compute-0 nova_compute[254092]: 2025-11-25 16:36:38.606 254096 DEBUG nova.storage.rbd_utils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image 6ca43770-6e19-4279-9bf2-c44dcc4d5260_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:36:38 compute-0 nova_compute[254092]: 2025-11-25 16:36:38.874 254096 DEBUG nova.network.neutron [req-ec0e3868-331c-4473-b8d3-e07b6768c316 req-e1309059-e74e-4ef4-8cb3-754c51471c6a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Updated VIF entry in instance network info cache for port a7a57913-2b29-44b6-ba43-4f56bfad86dc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:36:38 compute-0 nova_compute[254092]: 2025-11-25 16:36:38.875 254096 DEBUG nova.network.neutron [req-ec0e3868-331c-4473-b8d3-e07b6768c316 req-e1309059-e74e-4ef4-8cb3-754c51471c6a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Updating instance_info_cache with network_info: [{"id": "a7a57913-2b29-44b6-ba43-4f56bfad86dc", "address": "fa:16:3e:ff:6f:40", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7a57913-2b", "ovs_interfaceid": "a7a57913-2b29-44b6-ba43-4f56bfad86dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:36:38 compute-0 nova_compute[254092]: 2025-11-25 16:36:38.894 254096 DEBUG oslo_concurrency.lockutils [req-ec0e3868-331c-4473-b8d3-e07b6768c316 req-e1309059-e74e-4ef4-8cb3-754c51471c6a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-6ca43770-6e19-4279-9bf2-c44dcc4d5260" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:36:38 compute-0 ceph-mon[74985]: pgmap v1500: 321 pgs: 321 active+clean; 167 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.6 MiB/s wr, 268 op/s
Nov 25 16:36:38 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/278933619' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:36:38 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3527864369' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:36:39 compute-0 nova_compute[254092]: 2025-11-25 16:36:39.038 254096 INFO nova.virt.libvirt.driver [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Creating config drive at /var/lib/nova/instances/6ca43770-6e19-4279-9bf2-c44dcc4d5260/disk.config
Nov 25 16:36:39 compute-0 nova_compute[254092]: 2025-11-25 16:36:39.045 254096 DEBUG oslo_concurrency.processutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6ca43770-6e19-4279-9bf2-c44dcc4d5260/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz1fqgyfh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:36:39 compute-0 nova_compute[254092]: 2025-11-25 16:36:39.182 254096 DEBUG oslo_concurrency.processutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6ca43770-6e19-4279-9bf2-c44dcc4d5260/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz1fqgyfh" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:36:39 compute-0 nova_compute[254092]: 2025-11-25 16:36:39.207 254096 DEBUG nova.storage.rbd_utils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image 6ca43770-6e19-4279-9bf2-c44dcc4d5260_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:36:39 compute-0 nova_compute[254092]: 2025-11-25 16:36:39.211 254096 DEBUG oslo_concurrency.processutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6ca43770-6e19-4279-9bf2-c44dcc4d5260/disk.config 6ca43770-6e19-4279-9bf2-c44dcc4d5260_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:36:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:36:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1501: 321 pgs: 321 active+clean; 167 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.5 MiB/s wr, 264 op/s
Nov 25 16:36:39 compute-0 nova_compute[254092]: 2025-11-25 16:36:39.653 254096 DEBUG oslo_concurrency.processutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6ca43770-6e19-4279-9bf2-c44dcc4d5260/disk.config 6ca43770-6e19-4279-9bf2-c44dcc4d5260_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:36:39 compute-0 nova_compute[254092]: 2025-11-25 16:36:39.654 254096 INFO nova.virt.libvirt.driver [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Deleting local config drive /var/lib/nova/instances/6ca43770-6e19-4279-9bf2-c44dcc4d5260/disk.config because it was imported into RBD.
Nov 25 16:36:39 compute-0 kernel: tapa7a57913-2b: entered promiscuous mode
Nov 25 16:36:39 compute-0 ovn_controller[153477]: 2025-11-25T16:36:39Z|00415|binding|INFO|Claiming lport a7a57913-2b29-44b6-ba43-4f56bfad86dc for this chassis.
Nov 25 16:36:39 compute-0 ovn_controller[153477]: 2025-11-25T16:36:39Z|00416|binding|INFO|a7a57913-2b29-44b6-ba43-4f56bfad86dc: Claiming fa:16:3e:ff:6f:40 10.100.0.14
Nov 25 16:36:39 compute-0 nova_compute[254092]: 2025-11-25 16:36:39.724 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:39 compute-0 NetworkManager[48891]: <info>  [1764088599.7268] manager: (tapa7a57913-2b): new Tun device (/org/freedesktop/NetworkManager/Devices/184)
Nov 25 16:36:39 compute-0 ovn_controller[153477]: 2025-11-25T16:36:39Z|00417|binding|INFO|Setting lport a7a57913-2b29-44b6-ba43-4f56bfad86dc ovn-installed in OVS
Nov 25 16:36:39 compute-0 ovn_controller[153477]: 2025-11-25T16:36:39Z|00418|binding|INFO|Setting lport a7a57913-2b29-44b6-ba43-4f56bfad86dc up in Southbound
Nov 25 16:36:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:39.743 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:6f:40 10.100.0.14'], port_security=['fa:16:3e:ff:6f:40 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '6ca43770-6e19-4279-9bf2-c44dcc4d5260', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0816ae24-275c-455e-a549-929f4eb756e7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8bd01e0913564ac783fae350d6861e24', 'neutron:revision_number': '2', 'neutron:security_group_ids': '572f99e3-d1c9-4515-9799-c37da094cf6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=24fed12c-3cf8-4d51-9b64-c89f82ee1964, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=a7a57913-2b29-44b6-ba43-4f56bfad86dc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:36:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:39.745 163338 INFO neutron.agent.ovn.metadata.agent [-] Port a7a57913-2b29-44b6-ba43-4f56bfad86dc in datapath 0816ae24-275c-455e-a549-929f4eb756e7 bound to our chassis
Nov 25 16:36:39 compute-0 nova_compute[254092]: 2025-11-25 16:36:39.745 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:39.746 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0816ae24-275c-455e-a549-929f4eb756e7
Nov 25 16:36:39 compute-0 nova_compute[254092]: 2025-11-25 16:36:39.748 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:39 compute-0 systemd-udevd[308691]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:36:39 compute-0 NetworkManager[48891]: <info>  [1764088599.7668] device (tapa7a57913-2b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:36:39 compute-0 NetworkManager[48891]: <info>  [1764088599.7681] device (tapa7a57913-2b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:36:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:39.764 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b39935cb-57a7-40ba-927a-2e0cd7b804b3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:39 compute-0 systemd-machined[216343]: New machine qemu-57-instance-00000030.
Nov 25 16:36:39 compute-0 systemd[1]: Started Virtual Machine qemu-57-instance-00000030.
Nov 25 16:36:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:39.793 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[530cb36a-fe78-4259-b12e-29e88106906b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:39.798 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a9310f58-08d8-4ccd-93d1-b41177597e9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:39.826 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[29365dd5-a6fa-43db-9305-7670c482e1c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:39.849 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d9b74f69-ad03-4a70-a9f0-dd2a1d43b189]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0816ae24-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:52:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 5, 'rx_bytes': 574, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 5, 'rx_bytes': 574, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 503072, 'reachable_time': 30629, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308702, 'error': None, 'target': 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:39.869 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[de6d8265-308a-4fb7-ad00-fbec93e63f14]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0816ae24-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 503085, 'tstamp': 503085}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308705, 'error': None, 'target': 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0816ae24-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 503089, 'tstamp': 503089}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308705, 'error': None, 'target': 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:39.870 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0816ae24-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:36:39 compute-0 nova_compute[254092]: 2025-11-25 16:36:39.872 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:39.873 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0816ae24-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:36:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:39.874 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:36:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:39.874 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0816ae24-20, col_values=(('external_ids', {'iface-id': '6381e38a-44c0-40e8-bec6-3ddd886bfc49'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:36:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:39.874 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:36:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:36:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:36:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:36:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:36:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:36:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:36:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:36:40
Nov 25 16:36:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:36:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:36:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', 'backups', 'volumes', '.rgw.root', 'vms', 'default.rgw.log', '.mgr', 'default.rgw.control', 'images', 'cephfs.cephfs.data']
Nov 25 16:36:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:36:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:36:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:36:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:36:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:36:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:36:40 compute-0 ceph-mon[74985]: pgmap v1501: 321 pgs: 321 active+clean; 167 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.5 MiB/s wr, 264 op/s
Nov 25 16:36:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:36:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:36:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:36:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:36:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:36:40 compute-0 nova_compute[254092]: 2025-11-25 16:36:40.496 254096 DEBUG nova.compute.manager [req-b6048fb1-f1d7-42f1-aea0-6ba8a72418d9 req-1c0c384b-116a-4f4b-a445-4b19295d138e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Received event network-vif-plugged-a7a57913-2b29-44b6-ba43-4f56bfad86dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:36:40 compute-0 nova_compute[254092]: 2025-11-25 16:36:40.496 254096 DEBUG oslo_concurrency.lockutils [req-b6048fb1-f1d7-42f1-aea0-6ba8a72418d9 req-1c0c384b-116a-4f4b-a445-4b19295d138e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6ca43770-6e19-4279-9bf2-c44dcc4d5260-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:40 compute-0 nova_compute[254092]: 2025-11-25 16:36:40.497 254096 DEBUG oslo_concurrency.lockutils [req-b6048fb1-f1d7-42f1-aea0-6ba8a72418d9 req-1c0c384b-116a-4f4b-a445-4b19295d138e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6ca43770-6e19-4279-9bf2-c44dcc4d5260-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:40 compute-0 nova_compute[254092]: 2025-11-25 16:36:40.497 254096 DEBUG oslo_concurrency.lockutils [req-b6048fb1-f1d7-42f1-aea0-6ba8a72418d9 req-1c0c384b-116a-4f4b-a445-4b19295d138e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6ca43770-6e19-4279-9bf2-c44dcc4d5260-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:40 compute-0 nova_compute[254092]: 2025-11-25 16:36:40.497 254096 DEBUG nova.compute.manager [req-b6048fb1-f1d7-42f1-aea0-6ba8a72418d9 req-1c0c384b-116a-4f4b-a445-4b19295d138e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Processing event network-vif-plugged-a7a57913-2b29-44b6-ba43-4f56bfad86dc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:36:40 compute-0 ovn_controller[153477]: 2025-11-25T16:36:40Z|00419|binding|INFO|Releasing lport 6381e38a-44c0-40e8-bec6-3ddd886bfc49 from this chassis (sb_readonly=0)
Nov 25 16:36:40 compute-0 nova_compute[254092]: 2025-11-25 16:36:40.672 254096 DEBUG nova.compute.manager [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:36:40 compute-0 nova_compute[254092]: 2025-11-25 16:36:40.672 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088600.671455, 6ca43770-6e19-4279-9bf2-c44dcc4d5260 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:36:40 compute-0 nova_compute[254092]: 2025-11-25 16:36:40.672 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] VM Started (Lifecycle Event)
Nov 25 16:36:40 compute-0 nova_compute[254092]: 2025-11-25 16:36:40.678 254096 DEBUG nova.virt.libvirt.driver [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:36:40 compute-0 nova_compute[254092]: 2025-11-25 16:36:40.687 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:40 compute-0 nova_compute[254092]: 2025-11-25 16:36:40.690 254096 INFO nova.virt.libvirt.driver [-] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Instance spawned successfully.
Nov 25 16:36:40 compute-0 nova_compute[254092]: 2025-11-25 16:36:40.690 254096 INFO nova.compute.manager [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Took 7.54 seconds to spawn the instance on the hypervisor.
Nov 25 16:36:40 compute-0 nova_compute[254092]: 2025-11-25 16:36:40.691 254096 DEBUG nova.compute.manager [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:36:40 compute-0 nova_compute[254092]: 2025-11-25 16:36:40.698 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:36:40 compute-0 nova_compute[254092]: 2025-11-25 16:36:40.702 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:36:40 compute-0 nova_compute[254092]: 2025-11-25 16:36:40.727 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:36:40 compute-0 nova_compute[254092]: 2025-11-25 16:36:40.727 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088600.6747887, 6ca43770-6e19-4279-9bf2-c44dcc4d5260 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:36:40 compute-0 nova_compute[254092]: 2025-11-25 16:36:40.727 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] VM Paused (Lifecycle Event)
Nov 25 16:36:40 compute-0 nova_compute[254092]: 2025-11-25 16:36:40.759 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:36:40 compute-0 nova_compute[254092]: 2025-11-25 16:36:40.764 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088600.6762948, 6ca43770-6e19-4279-9bf2-c44dcc4d5260 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:36:40 compute-0 nova_compute[254092]: 2025-11-25 16:36:40.764 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] VM Resumed (Lifecycle Event)
Nov 25 16:36:40 compute-0 nova_compute[254092]: 2025-11-25 16:36:40.800 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:36:40 compute-0 nova_compute[254092]: 2025-11-25 16:36:40.804 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:36:40 compute-0 nova_compute[254092]: 2025-11-25 16:36:40.821 254096 INFO nova.compute.manager [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Took 8.77 seconds to build instance.
Nov 25 16:36:40 compute-0 nova_compute[254092]: 2025-11-25 16:36:40.844 254096 DEBUG oslo_concurrency.lockutils [None req-ed495c57-4a56-4041-8eea-9c7d5db347a9 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "6ca43770-6e19-4279-9bf2-c44dcc4d5260" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.928s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1502: 321 pgs: 321 active+clean; 167 MiB data, 532 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 234 op/s
Nov 25 16:36:42 compute-0 nova_compute[254092]: 2025-11-25 16:36:42.018 254096 DEBUG oslo_concurrency.lockutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:42 compute-0 nova_compute[254092]: 2025-11-25 16:36:42.018 254096 DEBUG oslo_concurrency.lockutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:42 compute-0 nova_compute[254092]: 2025-11-25 16:36:42.053 254096 DEBUG nova.compute.manager [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:36:42 compute-0 nova_compute[254092]: 2025-11-25 16:36:42.221 254096 DEBUG oslo_concurrency.lockutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:42 compute-0 nova_compute[254092]: 2025-11-25 16:36:42.222 254096 DEBUG oslo_concurrency.lockutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:42 compute-0 nova_compute[254092]: 2025-11-25 16:36:42.233 254096 DEBUG nova.virt.hardware [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:36:42 compute-0 nova_compute[254092]: 2025-11-25 16:36:42.234 254096 INFO nova.compute.claims [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:36:42 compute-0 nova_compute[254092]: 2025-11-25 16:36:42.409 254096 DEBUG oslo_concurrency.processutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:36:42 compute-0 ceph-mon[74985]: pgmap v1502: 321 pgs: 321 active+clean; 167 MiB data, 532 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 234 op/s
Nov 25 16:36:42 compute-0 nova_compute[254092]: 2025-11-25 16:36:42.831 254096 DEBUG nova.compute.manager [req-a0d711f4-8027-4e21-b81f-40cd0dcdd9cf req-1abca3da-d944-4947-b251-84888395d7aa a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Received event network-vif-plugged-a7a57913-2b29-44b6-ba43-4f56bfad86dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:36:42 compute-0 nova_compute[254092]: 2025-11-25 16:36:42.832 254096 DEBUG oslo_concurrency.lockutils [req-a0d711f4-8027-4e21-b81f-40cd0dcdd9cf req-1abca3da-d944-4947-b251-84888395d7aa a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6ca43770-6e19-4279-9bf2-c44dcc4d5260-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:42 compute-0 nova_compute[254092]: 2025-11-25 16:36:42.832 254096 DEBUG oslo_concurrency.lockutils [req-a0d711f4-8027-4e21-b81f-40cd0dcdd9cf req-1abca3da-d944-4947-b251-84888395d7aa a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6ca43770-6e19-4279-9bf2-c44dcc4d5260-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:42 compute-0 nova_compute[254092]: 2025-11-25 16:36:42.833 254096 DEBUG oslo_concurrency.lockutils [req-a0d711f4-8027-4e21-b81f-40cd0dcdd9cf req-1abca3da-d944-4947-b251-84888395d7aa a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6ca43770-6e19-4279-9bf2-c44dcc4d5260-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:42 compute-0 nova_compute[254092]: 2025-11-25 16:36:42.833 254096 DEBUG nova.compute.manager [req-a0d711f4-8027-4e21-b81f-40cd0dcdd9cf req-1abca3da-d944-4947-b251-84888395d7aa a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] No waiting events found dispatching network-vif-plugged-a7a57913-2b29-44b6-ba43-4f56bfad86dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:36:42 compute-0 nova_compute[254092]: 2025-11-25 16:36:42.833 254096 WARNING nova.compute.manager [req-a0d711f4-8027-4e21-b81f-40cd0dcdd9cf req-1abca3da-d944-4947-b251-84888395d7aa a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Received unexpected event network-vif-plugged-a7a57913-2b29-44b6-ba43-4f56bfad86dc for instance with vm_state active and task_state None.
Nov 25 16:36:42 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:36:42 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1198775094' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:36:42 compute-0 nova_compute[254092]: 2025-11-25 16:36:42.932 254096 DEBUG oslo_concurrency.processutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:36:42 compute-0 nova_compute[254092]: 2025-11-25 16:36:42.938 254096 DEBUG nova.compute.provider_tree [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:36:42 compute-0 nova_compute[254092]: 2025-11-25 16:36:42.984 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:43 compute-0 nova_compute[254092]: 2025-11-25 16:36:43.015 254096 DEBUG nova.scheduler.client.report [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:36:43 compute-0 nova_compute[254092]: 2025-11-25 16:36:43.239 254096 DEBUG oslo_concurrency.lockutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.017s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:43 compute-0 nova_compute[254092]: 2025-11-25 16:36:43.240 254096 DEBUG nova.compute.manager [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:36:43 compute-0 nova_compute[254092]: 2025-11-25 16:36:43.319 254096 DEBUG nova.compute.manager [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:36:43 compute-0 nova_compute[254092]: 2025-11-25 16:36:43.320 254096 DEBUG nova.network.neutron [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:36:43 compute-0 nova_compute[254092]: 2025-11-25 16:36:43.382 254096 INFO nova.virt.libvirt.driver [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:36:43 compute-0 nova_compute[254092]: 2025-11-25 16:36:43.395 254096 DEBUG oslo_concurrency.lockutils [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "6ca43770-6e19-4279-9bf2-c44dcc4d5260" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:43 compute-0 nova_compute[254092]: 2025-11-25 16:36:43.396 254096 DEBUG oslo_concurrency.lockutils [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "6ca43770-6e19-4279-9bf2-c44dcc4d5260" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:43 compute-0 nova_compute[254092]: 2025-11-25 16:36:43.396 254096 DEBUG oslo_concurrency.lockutils [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "6ca43770-6e19-4279-9bf2-c44dcc4d5260-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:43 compute-0 nova_compute[254092]: 2025-11-25 16:36:43.396 254096 DEBUG oslo_concurrency.lockutils [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "6ca43770-6e19-4279-9bf2-c44dcc4d5260-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:43 compute-0 nova_compute[254092]: 2025-11-25 16:36:43.397 254096 DEBUG oslo_concurrency.lockutils [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "6ca43770-6e19-4279-9bf2-c44dcc4d5260-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:43 compute-0 nova_compute[254092]: 2025-11-25 16:36:43.398 254096 INFO nova.compute.manager [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Terminating instance
Nov 25 16:36:43 compute-0 nova_compute[254092]: 2025-11-25 16:36:43.399 254096 DEBUG nova.compute.manager [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:36:43 compute-0 nova_compute[254092]: 2025-11-25 16:36:43.401 254096 DEBUG nova.compute.manager [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:36:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1503: 321 pgs: 321 active+clean; 167 MiB data, 532 MiB used, 59 GiB / 60 GiB avail; 395 KiB/s rd, 2.2 MiB/s wr, 156 op/s
Nov 25 16:36:43 compute-0 nova_compute[254092]: 2025-11-25 16:36:43.502 254096 DEBUG nova.policy [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '788efa7ba65347b69663f4ea7ba4cd9d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ff5d31ffc7934838a461d6ac8aa546c9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:36:43 compute-0 nova_compute[254092]: 2025-11-25 16:36:43.507 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:43 compute-0 nova_compute[254092]: 2025-11-25 16:36:43.561 254096 DEBUG nova.compute.manager [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:36:43 compute-0 nova_compute[254092]: 2025-11-25 16:36:43.562 254096 DEBUG nova.virt.libvirt.driver [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:36:43 compute-0 nova_compute[254092]: 2025-11-25 16:36:43.563 254096 INFO nova.virt.libvirt.driver [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Creating image(s)
Nov 25 16:36:43 compute-0 kernel: tapa7a57913-2b (unregistering): left promiscuous mode
Nov 25 16:36:43 compute-0 NetworkManager[48891]: <info>  [1764088603.5889] device (tapa7a57913-2b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:36:43 compute-0 nova_compute[254092]: 2025-11-25 16:36:43.594 254096 DEBUG nova.storage.rbd_utils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:36:43 compute-0 ovn_controller[153477]: 2025-11-25T16:36:43Z|00420|binding|INFO|Releasing lport a7a57913-2b29-44b6-ba43-4f56bfad86dc from this chassis (sb_readonly=0)
Nov 25 16:36:43 compute-0 ovn_controller[153477]: 2025-11-25T16:36:43Z|00421|binding|INFO|Setting lport a7a57913-2b29-44b6-ba43-4f56bfad86dc down in Southbound
Nov 25 16:36:43 compute-0 ovn_controller[153477]: 2025-11-25T16:36:43Z|00422|binding|INFO|Removing iface tapa7a57913-2b ovn-installed in OVS
Nov 25 16:36:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:43.611 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:6f:40 10.100.0.14'], port_security=['fa:16:3e:ff:6f:40 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '6ca43770-6e19-4279-9bf2-c44dcc4d5260', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0816ae24-275c-455e-a549-929f4eb756e7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8bd01e0913564ac783fae350d6861e24', 'neutron:revision_number': '4', 'neutron:security_group_ids': '572f99e3-d1c9-4515-9799-c37da094cf6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=24fed12c-3cf8-4d51-9b64-c89f82ee1964, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=a7a57913-2b29-44b6-ba43-4f56bfad86dc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:36:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:43.612 163338 INFO neutron.agent.ovn.metadata.agent [-] Port a7a57913-2b29-44b6-ba43-4f56bfad86dc in datapath 0816ae24-275c-455e-a549-929f4eb756e7 unbound from our chassis
Nov 25 16:36:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:43.613 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0816ae24-275c-455e-a549-929f4eb756e7
Nov 25 16:36:43 compute-0 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000030.scope: Deactivated successfully.
Nov 25 16:36:43 compute-0 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000030.scope: Consumed 3.666s CPU time.
Nov 25 16:36:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:43.631 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8d37bac8-20e6-4112-86ad-b0b040f0508b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:43 compute-0 systemd-machined[216343]: Machine qemu-57-instance-00000030 terminated.
Nov 25 16:36:43 compute-0 nova_compute[254092]: 2025-11-25 16:36:43.636 254096 DEBUG nova.storage.rbd_utils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:36:43 compute-0 nova_compute[254092]: 2025-11-25 16:36:43.659 254096 DEBUG nova.storage.rbd_utils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:36:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:43.664 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8bdb32c1-c12d-418f-a0a7-80e63bc27b24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:43 compute-0 nova_compute[254092]: 2025-11-25 16:36:43.664 254096 DEBUG oslo_concurrency.processutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:36:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:43.666 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1ea7196b-866b-4a7c-ad8b-ec8269809165]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:43 compute-0 nova_compute[254092]: 2025-11-25 16:36:43.695 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:43.698 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[52cf40a3-d141-485d-a8e2-fd5a51b903f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:43.717 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[20a1b4ca-ff53-43ff-9e7c-873249b53275]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0816ae24-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:52:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 503072, 'reachable_time': 30629, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308838, 'error': None, 'target': 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:43 compute-0 nova_compute[254092]: 2025-11-25 16:36:43.737 254096 DEBUG oslo_concurrency.processutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:36:43 compute-0 nova_compute[254092]: 2025-11-25 16:36:43.738 254096 DEBUG oslo_concurrency.lockutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:43 compute-0 nova_compute[254092]: 2025-11-25 16:36:43.739 254096 DEBUG oslo_concurrency.lockutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:43 compute-0 nova_compute[254092]: 2025-11-25 16:36:43.740 254096 DEBUG oslo_concurrency.lockutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:43.739 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[824687e7-6546-4e3c-b553-ed43b4e7ec57]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0816ae24-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 503085, 'tstamp': 503085}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308839, 'error': None, 'target': 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0816ae24-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 503089, 'tstamp': 503089}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308839, 'error': None, 'target': 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:43.741 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0816ae24-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:36:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:43.747 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0816ae24-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:36:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:43.748 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:36:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:43.748 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0816ae24-20, col_values=(('external_ids', {'iface-id': '6381e38a-44c0-40e8-bec6-3ddd886bfc49'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:36:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:43.748 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:36:43 compute-0 nova_compute[254092]: 2025-11-25 16:36:43.801 254096 DEBUG nova.storage.rbd_utils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:36:43 compute-0 nova_compute[254092]: 2025-11-25 16:36:43.809 254096 DEBUG oslo_concurrency.processutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:36:43 compute-0 kernel: tapa7a57913-2b: entered promiscuous mode
Nov 25 16:36:43 compute-0 systemd-udevd[308806]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:36:43 compute-0 NetworkManager[48891]: <info>  [1764088603.8319] manager: (tapa7a57913-2b): new Tun device (/org/freedesktop/NetworkManager/Devices/185)
Nov 25 16:36:43 compute-0 kernel: tapa7a57913-2b (unregistering): left promiscuous mode
Nov 25 16:36:43 compute-0 ovn_controller[153477]: 2025-11-25T16:36:43Z|00423|binding|INFO|Claiming lport a7a57913-2b29-44b6-ba43-4f56bfad86dc for this chassis.
Nov 25 16:36:43 compute-0 ovn_controller[153477]: 2025-11-25T16:36:43Z|00424|binding|INFO|a7a57913-2b29-44b6-ba43-4f56bfad86dc: Claiming fa:16:3e:ff:6f:40 10.100.0.14
Nov 25 16:36:43 compute-0 nova_compute[254092]: 2025-11-25 16:36:43.858 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:43 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1198775094' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:36:43 compute-0 ovn_controller[153477]: 2025-11-25T16:36:43Z|00425|if_status|INFO|Dropped 2 log messages in last 56 seconds (most recently, 56 seconds ago) due to excessive rate
Nov 25 16:36:43 compute-0 ovn_controller[153477]: 2025-11-25T16:36:43Z|00426|if_status|INFO|Not setting lport a7a57913-2b29-44b6-ba43-4f56bfad86dc down as sb is readonly
Nov 25 16:36:43 compute-0 nova_compute[254092]: 2025-11-25 16:36:43.886 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:43 compute-0 nova_compute[254092]: 2025-11-25 16:36:43.890 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:43 compute-0 nova_compute[254092]: 2025-11-25 16:36:43.895 254096 INFO nova.virt.libvirt.driver [-] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Instance destroyed successfully.
Nov 25 16:36:43 compute-0 nova_compute[254092]: 2025-11-25 16:36:43.896 254096 DEBUG nova.objects.instance [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lazy-loading 'resources' on Instance uuid 6ca43770-6e19-4279-9bf2-c44dcc4d5260 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:36:43 compute-0 nova_compute[254092]: 2025-11-25 16:36:43.908 254096 DEBUG nova.virt.libvirt.vif [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:36:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1002945108',display_name='tempest-ImagesTestJSON-server-1002945108',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1002945108',id=48,image_ref='12e53e2d-2e42-4cb2-ac8c-9fb3fc1c4cbd',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:36:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8bd01e0913564ac783fae350d6861e24',ramdisk_id='',reservation_id='r-szp3bkr7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='e549d4e8-b824-480b-b81a-83e2ea1eff12',image_min_disk='1',image_min_ram='0',image_owner_id='8bd01e0913564ac783fae350d6861e24',image_owner_project_name='tempest-ImagesTestJSON-217824554',image_owner_user_name='tempest-ImagesTestJSON-217824554-project-member',image_user_id='75e65df891a54a2caabb073a427430b9',owner_project_name='tempest-ImagesTestJSON-217824554',owner_user_name='tempest-ImagesTestJSON-217824554-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:36:40Z,user_data=None,user_id='75e65df891a54a2caabb073a427430b9',uuid=6ca43770-6e19-4279-9bf2-c44dcc4d5260,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a7a57913-2b29-44b6-ba43-4f56bfad86dc", "address": "fa:16:3e:ff:6f:40", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7a57913-2b", "ovs_interfaceid": "a7a57913-2b29-44b6-ba43-4f56bfad86dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:36:43 compute-0 nova_compute[254092]: 2025-11-25 16:36:43.908 254096 DEBUG nova.network.os_vif_util [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converting VIF {"id": "a7a57913-2b29-44b6-ba43-4f56bfad86dc", "address": "fa:16:3e:ff:6f:40", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa7a57913-2b", "ovs_interfaceid": "a7a57913-2b29-44b6-ba43-4f56bfad86dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:36:43 compute-0 nova_compute[254092]: 2025-11-25 16:36:43.909 254096 DEBUG nova.network.os_vif_util [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ff:6f:40,bridge_name='br-int',has_traffic_filtering=True,id=a7a57913-2b29-44b6-ba43-4f56bfad86dc,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7a57913-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:36:43 compute-0 nova_compute[254092]: 2025-11-25 16:36:43.910 254096 DEBUG os_vif [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:6f:40,bridge_name='br-int',has_traffic_filtering=True,id=a7a57913-2b29-44b6-ba43-4f56bfad86dc,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7a57913-2b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:36:43 compute-0 ovn_controller[153477]: 2025-11-25T16:36:43Z|00427|binding|INFO|Releasing lport a7a57913-2b29-44b6-ba43-4f56bfad86dc from this chassis (sb_readonly=0)
Nov 25 16:36:43 compute-0 nova_compute[254092]: 2025-11-25 16:36:43.913 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:43 compute-0 nova_compute[254092]: 2025-11-25 16:36:43.913 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa7a57913-2b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:36:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:43.913 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:6f:40 10.100.0.14'], port_security=['fa:16:3e:ff:6f:40 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '6ca43770-6e19-4279-9bf2-c44dcc4d5260', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0816ae24-275c-455e-a549-929f4eb756e7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8bd01e0913564ac783fae350d6861e24', 'neutron:revision_number': '4', 'neutron:security_group_ids': '572f99e3-d1c9-4515-9799-c37da094cf6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=24fed12c-3cf8-4d51-9b64-c89f82ee1964, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=a7a57913-2b29-44b6-ba43-4f56bfad86dc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:36:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:43.915 163338 INFO neutron.agent.ovn.metadata.agent [-] Port a7a57913-2b29-44b6-ba43-4f56bfad86dc in datapath 0816ae24-275c-455e-a549-929f4eb756e7 bound to our chassis
Nov 25 16:36:43 compute-0 nova_compute[254092]: 2025-11-25 16:36:43.915 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:43.917 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0816ae24-275c-455e-a549-929f4eb756e7
Nov 25 16:36:43 compute-0 nova_compute[254092]: 2025-11-25 16:36:43.917 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:36:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:43.932 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:6f:40 10.100.0.14'], port_security=['fa:16:3e:ff:6f:40 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '6ca43770-6e19-4279-9bf2-c44dcc4d5260', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0816ae24-275c-455e-a549-929f4eb756e7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8bd01e0913564ac783fae350d6861e24', 'neutron:revision_number': '4', 'neutron:security_group_ids': '572f99e3-d1c9-4515-9799-c37da094cf6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=24fed12c-3cf8-4d51-9b64-c89f82ee1964, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=a7a57913-2b29-44b6-ba43-4f56bfad86dc) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:36:43 compute-0 nova_compute[254092]: 2025-11-25 16:36:43.933 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:43 compute-0 nova_compute[254092]: 2025-11-25 16:36:43.941 254096 INFO os_vif [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:6f:40,bridge_name='br-int',has_traffic_filtering=True,id=a7a57913-2b29-44b6-ba43-4f56bfad86dc,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa7a57913-2b')
Nov 25 16:36:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:43.947 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[afc86e7a-107b-42d1-9ba5-1ba91295b446]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:43.982 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d14b5643-30f1-4e0d-bcf1-7d8de098431e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:43.987 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c3becc86-2238-4e58-a193-2059d251ea28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:44.020 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[2ab4d9d2-2b23-4085-a412-d74eb73c4de5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:44.038 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5fffeb4e-6b67-4ad2-b2a0-f97b9f5bf125]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0816ae24-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:52:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 503072, 'reachable_time': 30629, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308901, 'error': None, 'target': 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:44.058 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c35c2bdf-ff78-44db-aba0-8fc6c7b3f62a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0816ae24-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 503085, 'tstamp': 503085}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308902, 'error': None, 'target': 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0816ae24-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 503089, 'tstamp': 503089}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308902, 'error': None, 'target': 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:44.060 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0816ae24-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:36:44 compute-0 nova_compute[254092]: 2025-11-25 16:36:44.131 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:44 compute-0 nova_compute[254092]: 2025-11-25 16:36:44.133 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:44.135 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0816ae24-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:36:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:44.139 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:36:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:44.141 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0816ae24-20, col_values=(('external_ids', {'iface-id': '6381e38a-44c0-40e8-bec6-3ddd886bfc49'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:36:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:44.141 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:36:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:44.142 163338 INFO neutron.agent.ovn.metadata.agent [-] Port a7a57913-2b29-44b6-ba43-4f56bfad86dc in datapath 0816ae24-275c-455e-a549-929f4eb756e7 unbound from our chassis
Nov 25 16:36:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:44.143 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0816ae24-275c-455e-a549-929f4eb756e7
Nov 25 16:36:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:44.164 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f8055af2-eb7c-4e79-abc0-4db17b23f233]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:44.202 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[fdbbec43-d8e1-40c6-b0e8-d9161b11b0a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:44.206 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ca962b42-7286-4929-a9d5-c2b2260d46b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:44.241 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[2e0ca92f-c262-462f-8c67-e0338e1cfd03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:36:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:44.261 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[df620135-1148-4993-b463-c15c795023d4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0816ae24-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:52:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 503072, 'reachable_time': 30629, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308911, 'error': None, 'target': 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:44.281 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[15c75a20-589b-4580-969d-fd762be96e74]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0816ae24-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 503085, 'tstamp': 503085}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308912, 'error': None, 'target': 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0816ae24-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 503089, 'tstamp': 503089}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308912, 'error': None, 'target': 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:44.283 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0816ae24-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:36:44 compute-0 nova_compute[254092]: 2025-11-25 16:36:44.284 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:44 compute-0 nova_compute[254092]: 2025-11-25 16:36:44.285 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:44.286 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0816ae24-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:36:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:44.286 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:36:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:44.286 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0816ae24-20, col_values=(('external_ids', {'iface-id': '6381e38a-44c0-40e8-bec6-3ddd886bfc49'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:36:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:44.286 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:36:44 compute-0 nova_compute[254092]: 2025-11-25 16:36:44.911 254096 DEBUG nova.compute.manager [req-3e527c8b-1868-45f0-9ebb-6067b3b780c4 req-a1bedcc5-ee40-48f0-a1a7-2a64829e54c2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Received event network-vif-unplugged-a7a57913-2b29-44b6-ba43-4f56bfad86dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:36:44 compute-0 nova_compute[254092]: 2025-11-25 16:36:44.911 254096 DEBUG oslo_concurrency.lockutils [req-3e527c8b-1868-45f0-9ebb-6067b3b780c4 req-a1bedcc5-ee40-48f0-a1a7-2a64829e54c2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6ca43770-6e19-4279-9bf2-c44dcc4d5260-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:44 compute-0 nova_compute[254092]: 2025-11-25 16:36:44.911 254096 DEBUG oslo_concurrency.lockutils [req-3e527c8b-1868-45f0-9ebb-6067b3b780c4 req-a1bedcc5-ee40-48f0-a1a7-2a64829e54c2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6ca43770-6e19-4279-9bf2-c44dcc4d5260-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:44 compute-0 nova_compute[254092]: 2025-11-25 16:36:44.911 254096 DEBUG oslo_concurrency.lockutils [req-3e527c8b-1868-45f0-9ebb-6067b3b780c4 req-a1bedcc5-ee40-48f0-a1a7-2a64829e54c2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6ca43770-6e19-4279-9bf2-c44dcc4d5260-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:44 compute-0 nova_compute[254092]: 2025-11-25 16:36:44.911 254096 DEBUG nova.compute.manager [req-3e527c8b-1868-45f0-9ebb-6067b3b780c4 req-a1bedcc5-ee40-48f0-a1a7-2a64829e54c2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] No waiting events found dispatching network-vif-unplugged-a7a57913-2b29-44b6-ba43-4f56bfad86dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:36:44 compute-0 nova_compute[254092]: 2025-11-25 16:36:44.912 254096 DEBUG nova.compute.manager [req-3e527c8b-1868-45f0-9ebb-6067b3b780c4 req-a1bedcc5-ee40-48f0-a1a7-2a64829e54c2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Received event network-vif-unplugged-a7a57913-2b29-44b6-ba43-4f56bfad86dc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:36:45 compute-0 ceph-mon[74985]: pgmap v1503: 321 pgs: 321 active+clean; 167 MiB data, 532 MiB used, 59 GiB / 60 GiB avail; 395 KiB/s rd, 2.2 MiB/s wr, 156 op/s
Nov 25 16:36:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1504: 321 pgs: 321 active+clean; 167 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 870 KiB/s rd, 2.2 MiB/s wr, 175 op/s
Nov 25 16:36:45 compute-0 nova_compute[254092]: 2025-11-25 16:36:45.419 254096 DEBUG nova.network.neutron [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Successfully created port: 379de8c7-cb88-4a89-8008-f7bb1cbfc09b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:36:45 compute-0 nova_compute[254092]: 2025-11-25 16:36:45.616 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088590.6141033, be1b8151-4e42-40db-813c-8b3b3e216949 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:36:45 compute-0 nova_compute[254092]: 2025-11-25 16:36:45.616 254096 INFO nova.compute.manager [-] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] VM Stopped (Lifecycle Event)
Nov 25 16:36:45 compute-0 nova_compute[254092]: 2025-11-25 16:36:45.656 254096 DEBUG nova.compute.manager [None req-66ddf397-aaef-48a2-99c6-1d8ef982d90a - - - - - -] [instance: be1b8151-4e42-40db-813c-8b3b3e216949] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:36:46 compute-0 nova_compute[254092]: 2025-11-25 16:36:46.370 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088591.3692746, a3d5d205-98f0-4820-a96c-7f3e59d0cdd9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:36:46 compute-0 nova_compute[254092]: 2025-11-25 16:36:46.371 254096 INFO nova.compute.manager [-] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] VM Stopped (Lifecycle Event)
Nov 25 16:36:46 compute-0 nova_compute[254092]: 2025-11-25 16:36:46.392 254096 DEBUG oslo_concurrency.processutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:36:46 compute-0 nova_compute[254092]: 2025-11-25 16:36:46.417 254096 DEBUG nova.compute.manager [None req-b53db4e9-3526-443c-85d1-04be77966ecd - - - - - -] [instance: a3d5d205-98f0-4820-a96c-7f3e59d0cdd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:36:46 compute-0 nova_compute[254092]: 2025-11-25 16:36:46.448 254096 DEBUG nova.storage.rbd_utils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] resizing rbd image 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:36:46 compute-0 ceph-mon[74985]: pgmap v1504: 321 pgs: 321 active+clean; 167 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 870 KiB/s rd, 2.2 MiB/s wr, 175 op/s
Nov 25 16:36:46 compute-0 nova_compute[254092]: 2025-11-25 16:36:46.918 254096 DEBUG nova.network.neutron [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Successfully updated port: 379de8c7-cb88-4a89-8008-f7bb1cbfc09b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:36:46 compute-0 nova_compute[254092]: 2025-11-25 16:36:46.971 254096 DEBUG oslo_concurrency.lockutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "refresh_cache-288ada45-c7fc-4ddc-8b83-1c03ffa14fe6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:36:46 compute-0 nova_compute[254092]: 2025-11-25 16:36:46.971 254096 DEBUG oslo_concurrency.lockutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquired lock "refresh_cache-288ada45-c7fc-4ddc-8b83-1c03ffa14fe6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:36:46 compute-0 nova_compute[254092]: 2025-11-25 16:36:46.971 254096 DEBUG nova.network.neutron [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:36:46 compute-0 nova_compute[254092]: 2025-11-25 16:36:46.979 254096 DEBUG nova.objects.instance [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lazy-loading 'migration_context' on Instance uuid 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:36:46 compute-0 nova_compute[254092]: 2025-11-25 16:36:46.992 254096 DEBUG nova.virt.libvirt.driver [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:36:46 compute-0 nova_compute[254092]: 2025-11-25 16:36:46.992 254096 DEBUG nova.virt.libvirt.driver [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Ensure instance console log exists: /var/lib/nova/instances/288ada45-c7fc-4ddc-8b83-1c03ffa14fe6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:36:46 compute-0 nova_compute[254092]: 2025-11-25 16:36:46.993 254096 DEBUG oslo_concurrency.lockutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:46 compute-0 nova_compute[254092]: 2025-11-25 16:36:46.993 254096 DEBUG oslo_concurrency.lockutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:46 compute-0 nova_compute[254092]: 2025-11-25 16:36:46.993 254096 DEBUG oslo_concurrency.lockutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:47 compute-0 nova_compute[254092]: 2025-11-25 16:36:47.128 254096 DEBUG nova.compute.manager [req-9f181d1d-639b-4bef-9caf-3019e8c1c1a6 req-3483ae9f-935b-4471-9a08-ad5d1c6a7888 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Received event network-vif-plugged-a7a57913-2b29-44b6-ba43-4f56bfad86dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:36:47 compute-0 nova_compute[254092]: 2025-11-25 16:36:47.129 254096 DEBUG oslo_concurrency.lockutils [req-9f181d1d-639b-4bef-9caf-3019e8c1c1a6 req-3483ae9f-935b-4471-9a08-ad5d1c6a7888 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6ca43770-6e19-4279-9bf2-c44dcc4d5260-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:47 compute-0 nova_compute[254092]: 2025-11-25 16:36:47.129 254096 DEBUG oslo_concurrency.lockutils [req-9f181d1d-639b-4bef-9caf-3019e8c1c1a6 req-3483ae9f-935b-4471-9a08-ad5d1c6a7888 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6ca43770-6e19-4279-9bf2-c44dcc4d5260-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:47 compute-0 nova_compute[254092]: 2025-11-25 16:36:47.130 254096 DEBUG oslo_concurrency.lockutils [req-9f181d1d-639b-4bef-9caf-3019e8c1c1a6 req-3483ae9f-935b-4471-9a08-ad5d1c6a7888 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6ca43770-6e19-4279-9bf2-c44dcc4d5260-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:47 compute-0 nova_compute[254092]: 2025-11-25 16:36:47.130 254096 DEBUG nova.compute.manager [req-9f181d1d-639b-4bef-9caf-3019e8c1c1a6 req-3483ae9f-935b-4471-9a08-ad5d1c6a7888 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] No waiting events found dispatching network-vif-plugged-a7a57913-2b29-44b6-ba43-4f56bfad86dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:36:47 compute-0 nova_compute[254092]: 2025-11-25 16:36:47.130 254096 WARNING nova.compute.manager [req-9f181d1d-639b-4bef-9caf-3019e8c1c1a6 req-3483ae9f-935b-4471-9a08-ad5d1c6a7888 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Received unexpected event network-vif-plugged-a7a57913-2b29-44b6-ba43-4f56bfad86dc for instance with vm_state active and task_state deleting.
Nov 25 16:36:47 compute-0 nova_compute[254092]: 2025-11-25 16:36:47.176 254096 DEBUG nova.compute.manager [req-face0b2c-da92-4151-8393-023d145866f0 req-ff86fc1b-51c0-43da-8f12-5bd616418639 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Received event network-changed-379de8c7-cb88-4a89-8008-f7bb1cbfc09b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:36:47 compute-0 nova_compute[254092]: 2025-11-25 16:36:47.176 254096 DEBUG nova.compute.manager [req-face0b2c-da92-4151-8393-023d145866f0 req-ff86fc1b-51c0-43da-8f12-5bd616418639 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Refreshing instance network info cache due to event network-changed-379de8c7-cb88-4a89-8008-f7bb1cbfc09b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:36:47 compute-0 nova_compute[254092]: 2025-11-25 16:36:47.177 254096 DEBUG oslo_concurrency.lockutils [req-face0b2c-da92-4151-8393-023d145866f0 req-ff86fc1b-51c0-43da-8f12-5bd616418639 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-288ada45-c7fc-4ddc-8b83-1c03ffa14fe6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:36:47 compute-0 nova_compute[254092]: 2025-11-25 16:36:47.301 254096 DEBUG nova.network.neutron [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:36:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1505: 321 pgs: 321 active+clean; 187 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 147 op/s
Nov 25 16:36:47 compute-0 nova_compute[254092]: 2025-11-25 16:36:47.967 254096 INFO nova.virt.libvirt.driver [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Deleting instance files /var/lib/nova/instances/6ca43770-6e19-4279-9bf2-c44dcc4d5260_del
Nov 25 16:36:47 compute-0 nova_compute[254092]: 2025-11-25 16:36:47.968 254096 INFO nova.virt.libvirt.driver [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Deletion of /var/lib/nova/instances/6ca43770-6e19-4279-9bf2-c44dcc4d5260_del complete
Nov 25 16:36:47 compute-0 nova_compute[254092]: 2025-11-25 16:36:47.986 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:48 compute-0 nova_compute[254092]: 2025-11-25 16:36:48.131 254096 INFO nova.compute.manager [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Took 4.73 seconds to destroy the instance on the hypervisor.
Nov 25 16:36:48 compute-0 nova_compute[254092]: 2025-11-25 16:36:48.132 254096 DEBUG oslo.service.loopingcall [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:36:48 compute-0 nova_compute[254092]: 2025-11-25 16:36:48.132 254096 DEBUG nova.compute.manager [-] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:36:48 compute-0 nova_compute[254092]: 2025-11-25 16:36:48.133 254096 DEBUG nova.network.neutron [-] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:36:48 compute-0 nova_compute[254092]: 2025-11-25 16:36:48.234 254096 DEBUG nova.network.neutron [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Updating instance_info_cache with network_info: [{"id": "379de8c7-cb88-4a89-8008-f7bb1cbfc09b", "address": "fa:16:3e:33:4b:3b", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap379de8c7-cb", "ovs_interfaceid": "379de8c7-cb88-4a89-8008-f7bb1cbfc09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:36:48 compute-0 nova_compute[254092]: 2025-11-25 16:36:48.297 254096 DEBUG oslo_concurrency.lockutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Releasing lock "refresh_cache-288ada45-c7fc-4ddc-8b83-1c03ffa14fe6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:36:48 compute-0 nova_compute[254092]: 2025-11-25 16:36:48.298 254096 DEBUG nova.compute.manager [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Instance network_info: |[{"id": "379de8c7-cb88-4a89-8008-f7bb1cbfc09b", "address": "fa:16:3e:33:4b:3b", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap379de8c7-cb", "ovs_interfaceid": "379de8c7-cb88-4a89-8008-f7bb1cbfc09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:36:48 compute-0 nova_compute[254092]: 2025-11-25 16:36:48.298 254096 DEBUG oslo_concurrency.lockutils [req-face0b2c-da92-4151-8393-023d145866f0 req-ff86fc1b-51c0-43da-8f12-5bd616418639 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-288ada45-c7fc-4ddc-8b83-1c03ffa14fe6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:36:48 compute-0 nova_compute[254092]: 2025-11-25 16:36:48.298 254096 DEBUG nova.network.neutron [req-face0b2c-da92-4151-8393-023d145866f0 req-ff86fc1b-51c0-43da-8f12-5bd616418639 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Refreshing network info cache for port 379de8c7-cb88-4a89-8008-f7bb1cbfc09b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:36:48 compute-0 nova_compute[254092]: 2025-11-25 16:36:48.302 254096 DEBUG nova.virt.libvirt.driver [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Start _get_guest_xml network_info=[{"id": "379de8c7-cb88-4a89-8008-f7bb1cbfc09b", "address": "fa:16:3e:33:4b:3b", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap379de8c7-cb", "ovs_interfaceid": "379de8c7-cb88-4a89-8008-f7bb1cbfc09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:36:48 compute-0 nova_compute[254092]: 2025-11-25 16:36:48.310 254096 WARNING nova.virt.libvirt.driver [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:36:48 compute-0 nova_compute[254092]: 2025-11-25 16:36:48.316 254096 DEBUG nova.virt.libvirt.host [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:36:48 compute-0 nova_compute[254092]: 2025-11-25 16:36:48.318 254096 DEBUG nova.virt.libvirt.host [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:36:48 compute-0 nova_compute[254092]: 2025-11-25 16:36:48.330 254096 DEBUG nova.virt.libvirt.host [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:36:48 compute-0 nova_compute[254092]: 2025-11-25 16:36:48.331 254096 DEBUG nova.virt.libvirt.host [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:36:48 compute-0 nova_compute[254092]: 2025-11-25 16:36:48.331 254096 DEBUG nova.virt.libvirt.driver [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:36:48 compute-0 nova_compute[254092]: 2025-11-25 16:36:48.331 254096 DEBUG nova.virt.hardware [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:36:48 compute-0 nova_compute[254092]: 2025-11-25 16:36:48.332 254096 DEBUG nova.virt.hardware [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:36:48 compute-0 nova_compute[254092]: 2025-11-25 16:36:48.332 254096 DEBUG nova.virt.hardware [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:36:48 compute-0 nova_compute[254092]: 2025-11-25 16:36:48.333 254096 DEBUG nova.virt.hardware [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:36:48 compute-0 nova_compute[254092]: 2025-11-25 16:36:48.333 254096 DEBUG nova.virt.hardware [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:36:48 compute-0 nova_compute[254092]: 2025-11-25 16:36:48.333 254096 DEBUG nova.virt.hardware [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:36:48 compute-0 nova_compute[254092]: 2025-11-25 16:36:48.333 254096 DEBUG nova.virt.hardware [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:36:48 compute-0 nova_compute[254092]: 2025-11-25 16:36:48.334 254096 DEBUG nova.virt.hardware [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:36:48 compute-0 nova_compute[254092]: 2025-11-25 16:36:48.334 254096 DEBUG nova.virt.hardware [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:36:48 compute-0 nova_compute[254092]: 2025-11-25 16:36:48.334 254096 DEBUG nova.virt.hardware [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:36:48 compute-0 nova_compute[254092]: 2025-11-25 16:36:48.335 254096 DEBUG nova.virt.hardware [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:36:48 compute-0 nova_compute[254092]: 2025-11-25 16:36:48.339 254096 DEBUG oslo_concurrency.processutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:36:48 compute-0 sudo[308990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:36:48 compute-0 sudo[308990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:36:48 compute-0 sudo[308990]: pam_unix(sudo:session): session closed for user root
Nov 25 16:36:48 compute-0 sudo[309016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:36:48 compute-0 sudo[309016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:36:48 compute-0 sudo[309016]: pam_unix(sudo:session): session closed for user root
Nov 25 16:36:48 compute-0 sudo[309041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:36:48 compute-0 sudo[309041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:36:48 compute-0 sudo[309041]: pam_unix(sudo:session): session closed for user root
Nov 25 16:36:48 compute-0 sudo[309085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 16:36:48 compute-0 sudo[309085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:36:48 compute-0 ceph-mon[74985]: pgmap v1505: 321 pgs: 321 active+clean; 187 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 147 op/s
Nov 25 16:36:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:36:48 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4049767500' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:36:48 compute-0 nova_compute[254092]: 2025-11-25 16:36:48.837 254096 DEBUG oslo_concurrency.processutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:36:48 compute-0 nova_compute[254092]: 2025-11-25 16:36:48.867 254096 DEBUG nova.storage.rbd_utils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:36:48 compute-0 nova_compute[254092]: 2025-11-25 16:36:48.873 254096 DEBUG oslo_concurrency.processutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:36:48 compute-0 nova_compute[254092]: 2025-11-25 16:36:48.916 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:49 compute-0 sudo[309085]: pam_unix(sudo:session): session closed for user root
Nov 25 16:36:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 16:36:49 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 16:36:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:36:49 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:36:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 16:36:49 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:36:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 16:36:49 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:36:49 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 81250df2-611f-4359-94d7-cff18cf68d2a does not exist
Nov 25 16:36:49 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 864aad08-dcaf-4ef4-b33e-5a10c4bc400e does not exist
Nov 25 16:36:49 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 80a7b768-b0fb-44f8-ab45-7bf091435fc0 does not exist
Nov 25 16:36:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 16:36:49 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:36:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 16:36:49 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:36:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:36:49 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:36:49 compute-0 sudo[309182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:36:49 compute-0 sudo[309182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:36:49 compute-0 sudo[309182]: pam_unix(sudo:session): session closed for user root
Nov 25 16:36:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:36:49 compute-0 sudo[309207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:36:49 compute-0 sudo[309207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:36:49 compute-0 sudo[309207]: pam_unix(sudo:session): session closed for user root
Nov 25 16:36:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:36:49 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2471800074' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:36:49 compute-0 sudo[309232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:36:49 compute-0 sudo[309232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:36:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1506: 321 pgs: 321 active+clean; 187 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 675 KiB/s wr, 98 op/s
Nov 25 16:36:49 compute-0 sudo[309232]: pam_unix(sudo:session): session closed for user root
Nov 25 16:36:49 compute-0 sudo[309259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 16:36:49 compute-0 sudo[309259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:36:49 compute-0 nova_compute[254092]: 2025-11-25 16:36:49.519 254096 DEBUG oslo_concurrency.processutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.647s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:36:49 compute-0 nova_compute[254092]: 2025-11-25 16:36:49.520 254096 DEBUG nova.virt.libvirt.vif [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:36:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1926162330',display_name='tempest-DeleteServersTestJSON-server-1926162330',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1926162330',id=49,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff5d31ffc7934838a461d6ac8aa546c9',ramdisk_id='',reservation_id='r-3hu4p32f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-150185898',owner_user_name='tempest-DeleteServersTestJSON-150185898-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:36:43Z,user_data=None,user_id='788efa7ba65347b69663f4ea7ba4cd9d',uuid=288ada45-c7fc-4ddc-8b83-1c03ffa14fe6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "379de8c7-cb88-4a89-8008-f7bb1cbfc09b", "address": "fa:16:3e:33:4b:3b", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap379de8c7-cb", "ovs_interfaceid": "379de8c7-cb88-4a89-8008-f7bb1cbfc09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:36:49 compute-0 nova_compute[254092]: 2025-11-25 16:36:49.521 254096 DEBUG nova.network.os_vif_util [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converting VIF {"id": "379de8c7-cb88-4a89-8008-f7bb1cbfc09b", "address": "fa:16:3e:33:4b:3b", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap379de8c7-cb", "ovs_interfaceid": "379de8c7-cb88-4a89-8008-f7bb1cbfc09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:36:49 compute-0 nova_compute[254092]: 2025-11-25 16:36:49.521 254096 DEBUG nova.network.os_vif_util [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:4b:3b,bridge_name='br-int',has_traffic_filtering=True,id=379de8c7-cb88-4a89-8008-f7bb1cbfc09b,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap379de8c7-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:36:49 compute-0 nova_compute[254092]: 2025-11-25 16:36:49.522 254096 DEBUG nova.objects.instance [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:36:49 compute-0 nova_compute[254092]: 2025-11-25 16:36:49.536 254096 DEBUG nova.virt.libvirt.driver [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:36:49 compute-0 nova_compute[254092]:   <uuid>288ada45-c7fc-4ddc-8b83-1c03ffa14fe6</uuid>
Nov 25 16:36:49 compute-0 nova_compute[254092]:   <name>instance-00000031</name>
Nov 25 16:36:49 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:36:49 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:36:49 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:36:49 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:       <nova:name>tempest-DeleteServersTestJSON-server-1926162330</nova:name>
Nov 25 16:36:49 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:36:48</nova:creationTime>
Nov 25 16:36:49 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:36:49 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:36:49 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:36:49 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:36:49 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:36:49 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:36:49 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:36:49 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:36:49 compute-0 nova_compute[254092]:         <nova:user uuid="788efa7ba65347b69663f4ea7ba4cd9d">tempest-DeleteServersTestJSON-150185898-project-member</nova:user>
Nov 25 16:36:49 compute-0 nova_compute[254092]:         <nova:project uuid="ff5d31ffc7934838a461d6ac8aa546c9">tempest-DeleteServersTestJSON-150185898</nova:project>
Nov 25 16:36:49 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:36:49 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:36:49 compute-0 nova_compute[254092]:         <nova:port uuid="379de8c7-cb88-4a89-8008-f7bb1cbfc09b">
Nov 25 16:36:49 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:36:49 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:36:49 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:36:49 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:36:49 compute-0 nova_compute[254092]:     <system>
Nov 25 16:36:49 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:36:49 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:36:49 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:36:49 compute-0 nova_compute[254092]:       <entry name="serial">288ada45-c7fc-4ddc-8b83-1c03ffa14fe6</entry>
Nov 25 16:36:49 compute-0 nova_compute[254092]:       <entry name="uuid">288ada45-c7fc-4ddc-8b83-1c03ffa14fe6</entry>
Nov 25 16:36:49 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     </system>
Nov 25 16:36:49 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:36:49 compute-0 nova_compute[254092]:   <os>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:   </os>
Nov 25 16:36:49 compute-0 nova_compute[254092]:   <features>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:   </features>
Nov 25 16:36:49 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:36:49 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:36:49 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:36:49 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:36:49 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:36:49 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/288ada45-c7fc-4ddc-8b83-1c03ffa14fe6_disk">
Nov 25 16:36:49 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:       </source>
Nov 25 16:36:49 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:36:49 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:36:49 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:36:49 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/288ada45-c7fc-4ddc-8b83-1c03ffa14fe6_disk.config">
Nov 25 16:36:49 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:       </source>
Nov 25 16:36:49 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:36:49 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:36:49 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:36:49 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:33:4b:3b"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:       <target dev="tap379de8c7-cb"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:36:49 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/288ada45-c7fc-4ddc-8b83-1c03ffa14fe6/console.log" append="off"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     <video>
Nov 25 16:36:49 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     </video>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:36:49 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:36:49 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:36:49 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:36:49 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:36:49 compute-0 nova_compute[254092]: </domain>
Nov 25 16:36:49 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:36:49 compute-0 nova_compute[254092]: 2025-11-25 16:36:49.538 254096 DEBUG nova.compute.manager [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Preparing to wait for external event network-vif-plugged-379de8c7-cb88-4a89-8008-f7bb1cbfc09b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:36:49 compute-0 nova_compute[254092]: 2025-11-25 16:36:49.538 254096 DEBUG oslo_concurrency.lockutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:49 compute-0 nova_compute[254092]: 2025-11-25 16:36:49.538 254096 DEBUG oslo_concurrency.lockutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:49 compute-0 nova_compute[254092]: 2025-11-25 16:36:49.539 254096 DEBUG oslo_concurrency.lockutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:49 compute-0 nova_compute[254092]: 2025-11-25 16:36:49.539 254096 DEBUG nova.virt.libvirt.vif [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:36:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1926162330',display_name='tempest-DeleteServersTestJSON-server-1926162330',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1926162330',id=49,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff5d31ffc7934838a461d6ac8aa546c9',ramdisk_id='',reservation_id='r-3hu4p32f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-150185898',owner_user_name='tempest-DeleteServersTestJSON-150185898-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:36:43Z,user_data=None,user_id='788efa7ba65347b69663f4ea7ba4cd9d',uuid=288ada45-c7fc-4ddc-8b83-1c03ffa14fe6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "379de8c7-cb88-4a89-8008-f7bb1cbfc09b", "address": "fa:16:3e:33:4b:3b", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap379de8c7-cb", "ovs_interfaceid": "379de8c7-cb88-4a89-8008-f7bb1cbfc09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:36:49 compute-0 nova_compute[254092]: 2025-11-25 16:36:49.540 254096 DEBUG nova.network.os_vif_util [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converting VIF {"id": "379de8c7-cb88-4a89-8008-f7bb1cbfc09b", "address": "fa:16:3e:33:4b:3b", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap379de8c7-cb", "ovs_interfaceid": "379de8c7-cb88-4a89-8008-f7bb1cbfc09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:36:49 compute-0 nova_compute[254092]: 2025-11-25 16:36:49.540 254096 DEBUG nova.network.os_vif_util [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:4b:3b,bridge_name='br-int',has_traffic_filtering=True,id=379de8c7-cb88-4a89-8008-f7bb1cbfc09b,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap379de8c7-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:36:49 compute-0 nova_compute[254092]: 2025-11-25 16:36:49.541 254096 DEBUG os_vif [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:4b:3b,bridge_name='br-int',has_traffic_filtering=True,id=379de8c7-cb88-4a89-8008-f7bb1cbfc09b,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap379de8c7-cb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:36:49 compute-0 nova_compute[254092]: 2025-11-25 16:36:49.541 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:49 compute-0 nova_compute[254092]: 2025-11-25 16:36:49.542 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:36:49 compute-0 nova_compute[254092]: 2025-11-25 16:36:49.542 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:36:49 compute-0 nova_compute[254092]: 2025-11-25 16:36:49.545 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:49 compute-0 nova_compute[254092]: 2025-11-25 16:36:49.545 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap379de8c7-cb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:36:49 compute-0 nova_compute[254092]: 2025-11-25 16:36:49.545 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap379de8c7-cb, col_values=(('external_ids', {'iface-id': '379de8c7-cb88-4a89-8008-f7bb1cbfc09b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:33:4b:3b', 'vm-uuid': '288ada45-c7fc-4ddc-8b83-1c03ffa14fe6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:36:49 compute-0 nova_compute[254092]: 2025-11-25 16:36:49.547 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:49 compute-0 NetworkManager[48891]: <info>  [1764088609.5486] manager: (tap379de8c7-cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/186)
Nov 25 16:36:49 compute-0 nova_compute[254092]: 2025-11-25 16:36:49.549 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:36:49 compute-0 nova_compute[254092]: 2025-11-25 16:36:49.553 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:49 compute-0 nova_compute[254092]: 2025-11-25 16:36:49.555 254096 INFO os_vif [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:4b:3b,bridge_name='br-int',has_traffic_filtering=True,id=379de8c7-cb88-4a89-8008-f7bb1cbfc09b,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap379de8c7-cb')
Nov 25 16:36:49 compute-0 nova_compute[254092]: 2025-11-25 16:36:49.608 254096 DEBUG nova.virt.libvirt.driver [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:36:49 compute-0 nova_compute[254092]: 2025-11-25 16:36:49.608 254096 DEBUG nova.virt.libvirt.driver [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:36:49 compute-0 nova_compute[254092]: 2025-11-25 16:36:49.608 254096 DEBUG nova.virt.libvirt.driver [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] No VIF found with MAC fa:16:3e:33:4b:3b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:36:49 compute-0 nova_compute[254092]: 2025-11-25 16:36:49.609 254096 INFO nova.virt.libvirt.driver [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Using config drive
Nov 25 16:36:49 compute-0 nova_compute[254092]: 2025-11-25 16:36:49.644 254096 DEBUG nova.storage.rbd_utils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:36:49 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4049767500' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:36:49 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 16:36:49 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:36:49 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:36:49 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:36:49 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:36:49 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:36:49 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:36:49 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2471800074' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:36:49 compute-0 podman[309345]: 2025-11-25 16:36:49.917512252 +0000 UTC m=+0.110221562 container create 9253bbe2bf98b0ca1b735ca9e9718d54627c1c5583c3c8ac7627c68bba7b68d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_davinci, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Nov 25 16:36:49 compute-0 podman[309345]: 2025-11-25 16:36:49.835967909 +0000 UTC m=+0.028677239 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:36:50 compute-0 systemd[1]: Started libpod-conmon-9253bbe2bf98b0ca1b735ca9e9718d54627c1c5583c3c8ac7627c68bba7b68d0.scope.
Nov 25 16:36:50 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:36:50 compute-0 podman[309345]: 2025-11-25 16:36:50.081277214 +0000 UTC m=+0.273986544 container init 9253bbe2bf98b0ca1b735ca9e9718d54627c1c5583c3c8ac7627c68bba7b68d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_davinci, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:36:50 compute-0 podman[309345]: 2025-11-25 16:36:50.093961989 +0000 UTC m=+0.286671299 container start 9253bbe2bf98b0ca1b735ca9e9718d54627c1c5583c3c8ac7627c68bba7b68d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:36:50 compute-0 elated_davinci[309361]: 167 167
Nov 25 16:36:50 compute-0 systemd[1]: libpod-9253bbe2bf98b0ca1b735ca9e9718d54627c1c5583c3c8ac7627c68bba7b68d0.scope: Deactivated successfully.
Nov 25 16:36:50 compute-0 conmon[309361]: conmon 9253bbe2bf98b0ca1b73 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9253bbe2bf98b0ca1b735ca9e9718d54627c1c5583c3c8ac7627c68bba7b68d0.scope/container/memory.events
Nov 25 16:36:50 compute-0 podman[309345]: 2025-11-25 16:36:50.125163525 +0000 UTC m=+0.317872835 container attach 9253bbe2bf98b0ca1b735ca9e9718d54627c1c5583c3c8ac7627c68bba7b68d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_davinci, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:36:50 compute-0 podman[309345]: 2025-11-25 16:36:50.126173802 +0000 UTC m=+0.318883122 container died 9253bbe2bf98b0ca1b735ca9e9718d54627c1c5583c3c8ac7627c68bba7b68d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 16:36:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-310f3666428d13a55798413474193d4c7986f942c0020f54d37ff7187a07444d-merged.mount: Deactivated successfully.
Nov 25 16:36:50 compute-0 podman[309345]: 2025-11-25 16:36:50.342750958 +0000 UTC m=+0.535460268 container remove 9253bbe2bf98b0ca1b735ca9e9718d54627c1c5583c3c8ac7627c68bba7b68d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_davinci, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 16:36:50 compute-0 systemd[1]: libpod-conmon-9253bbe2bf98b0ca1b735ca9e9718d54627c1c5583c3c8ac7627c68bba7b68d0.scope: Deactivated successfully.
Nov 25 16:36:50 compute-0 nova_compute[254092]: 2025-11-25 16:36:50.474 254096 DEBUG nova.network.neutron [-] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:36:50 compute-0 nova_compute[254092]: 2025-11-25 16:36:50.499 254096 INFO nova.compute.manager [-] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Took 2.37 seconds to deallocate network for instance.
Nov 25 16:36:50 compute-0 nova_compute[254092]: 2025-11-25 16:36:50.543 254096 DEBUG nova.compute.manager [req-9148fff9-0440-460f-a96c-e2ef305080af req-c6c08cdc-53df-4c38-87e3-715dd7fd52d2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Received event network-vif-deleted-a7a57913-2b29-44b6-ba43-4f56bfad86dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:36:50 compute-0 podman[309385]: 2025-11-25 16:36:50.543597887 +0000 UTC m=+0.052639630 container create 0da40303ed0385d78a7ca9790b535f0bbecb907a282ba0e470d3f323a36ca132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:36:50 compute-0 nova_compute[254092]: 2025-11-25 16:36:50.561 254096 DEBUG oslo_concurrency.lockutils [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:50 compute-0 nova_compute[254092]: 2025-11-25 16:36:50.562 254096 DEBUG oslo_concurrency.lockutils [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:50 compute-0 systemd[1]: Started libpod-conmon-0da40303ed0385d78a7ca9790b535f0bbecb907a282ba0e470d3f323a36ca132.scope.
Nov 25 16:36:50 compute-0 podman[309385]: 2025-11-25 16:36:50.518717602 +0000 UTC m=+0.027759365 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:36:50 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:36:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/252f30cc32666a06b7344e67c326e32e093690e94354ed63f9925e9915a2a151/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:36:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/252f30cc32666a06b7344e67c326e32e093690e94354ed63f9925e9915a2a151/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:36:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/252f30cc32666a06b7344e67c326e32e093690e94354ed63f9925e9915a2a151/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:36:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/252f30cc32666a06b7344e67c326e32e093690e94354ed63f9925e9915a2a151/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:36:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/252f30cc32666a06b7344e67c326e32e093690e94354ed63f9925e9915a2a151/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 16:36:50 compute-0 podman[309385]: 2025-11-25 16:36:50.654920007 +0000 UTC m=+0.163961780 container init 0da40303ed0385d78a7ca9790b535f0bbecb907a282ba0e470d3f323a36ca132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_payne, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:36:50 compute-0 podman[309385]: 2025-11-25 16:36:50.666419329 +0000 UTC m=+0.175461072 container start 0da40303ed0385d78a7ca9790b535f0bbecb907a282ba0e470d3f323a36ca132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_payne, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:36:50 compute-0 podman[309385]: 2025-11-25 16:36:50.674812787 +0000 UTC m=+0.183854570 container attach 0da40303ed0385d78a7ca9790b535f0bbecb907a282ba0e470d3f323a36ca132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_payne, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 16:36:50 compute-0 nova_compute[254092]: 2025-11-25 16:36:50.688 254096 DEBUG nova.network.neutron [req-face0b2c-da92-4151-8393-023d145866f0 req-ff86fc1b-51c0-43da-8f12-5bd616418639 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Updated VIF entry in instance network info cache for port 379de8c7-cb88-4a89-8008-f7bb1cbfc09b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:36:50 compute-0 nova_compute[254092]: 2025-11-25 16:36:50.689 254096 DEBUG nova.network.neutron [req-face0b2c-da92-4151-8393-023d145866f0 req-ff86fc1b-51c0-43da-8f12-5bd616418639 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Updating instance_info_cache with network_info: [{"id": "379de8c7-cb88-4a89-8008-f7bb1cbfc09b", "address": "fa:16:3e:33:4b:3b", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap379de8c7-cb", "ovs_interfaceid": "379de8c7-cb88-4a89-8008-f7bb1cbfc09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:36:50 compute-0 nova_compute[254092]: 2025-11-25 16:36:50.707 254096 DEBUG oslo_concurrency.lockutils [req-face0b2c-da92-4151-8393-023d145866f0 req-ff86fc1b-51c0-43da-8f12-5bd616418639 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-288ada45-c7fc-4ddc-8b83-1c03ffa14fe6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:36:50 compute-0 nova_compute[254092]: 2025-11-25 16:36:50.708 254096 DEBUG oslo_concurrency.processutils [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:36:50 compute-0 ceph-mon[74985]: pgmap v1506: 321 pgs: 321 active+clean; 187 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 675 KiB/s wr, 98 op/s
Nov 25 16:36:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:36:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:36:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:36:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:36:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011067962696164615 of space, bias 1.0, pg target 0.33203888088493844 quantized to 32 (current 32)
Nov 25 16:36:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:36:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:36:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:36:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:36:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:36:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010121097056716806 of space, bias 1.0, pg target 0.3036329117015042 quantized to 32 (current 32)
Nov 25 16:36:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:36:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 16:36:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:36:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:36:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:36:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 16:36:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:36:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 16:36:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:36:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:36:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:36:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 16:36:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:36:51 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3247762840' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:36:51 compute-0 nova_compute[254092]: 2025-11-25 16:36:51.218 254096 DEBUG oslo_concurrency.processutils [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:36:51 compute-0 nova_compute[254092]: 2025-11-25 16:36:51.225 254096 DEBUG nova.compute.provider_tree [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:36:51 compute-0 nova_compute[254092]: 2025-11-25 16:36:51.238 254096 DEBUG nova.scheduler.client.report [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:36:51 compute-0 nova_compute[254092]: 2025-11-25 16:36:51.258 254096 DEBUG oslo_concurrency.lockutils [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:51 compute-0 nova_compute[254092]: 2025-11-25 16:36:51.297 254096 INFO nova.scheduler.client.report [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Deleted allocations for instance 6ca43770-6e19-4279-9bf2-c44dcc4d5260
Nov 25 16:36:51 compute-0 nova_compute[254092]: 2025-11-25 16:36:51.359 254096 DEBUG oslo_concurrency.lockutils [None req-9a951ee3-e3bc-4b88-bf3c-f12139c813b4 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "6ca43770-6e19-4279-9bf2-c44dcc4d5260" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.964s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1507: 321 pgs: 321 active+clean; 213 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Nov 25 16:36:51 compute-0 nova_compute[254092]: 2025-11-25 16:36:51.741 254096 INFO nova.virt.libvirt.driver [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Creating config drive at /var/lib/nova/instances/288ada45-c7fc-4ddc-8b83-1c03ffa14fe6/disk.config
Nov 25 16:36:51 compute-0 nova_compute[254092]: 2025-11-25 16:36:51.746 254096 DEBUG oslo_concurrency.processutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/288ada45-c7fc-4ddc-8b83-1c03ffa14fe6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2zmx2777 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:36:51 compute-0 zealous_payne[309401]: --> passed data devices: 0 physical, 3 LVM
Nov 25 16:36:51 compute-0 zealous_payne[309401]: --> relative data size: 1.0
Nov 25 16:36:51 compute-0 zealous_payne[309401]: --> All data devices are unavailable
Nov 25 16:36:51 compute-0 systemd[1]: libpod-0da40303ed0385d78a7ca9790b535f0bbecb907a282ba0e470d3f323a36ca132.scope: Deactivated successfully.
Nov 25 16:36:51 compute-0 systemd[1]: libpod-0da40303ed0385d78a7ca9790b535f0bbecb907a282ba0e470d3f323a36ca132.scope: Consumed 1.156s CPU time.
Nov 25 16:36:51 compute-0 podman[309385]: 2025-11-25 16:36:51.883724123 +0000 UTC m=+1.392765886 container died 0da40303ed0385d78a7ca9790b535f0bbecb907a282ba0e470d3f323a36ca132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 16:36:51 compute-0 nova_compute[254092]: 2025-11-25 16:36:51.890 254096 DEBUG oslo_concurrency.processutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/288ada45-c7fc-4ddc-8b83-1c03ffa14fe6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2zmx2777" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:36:52 compute-0 nova_compute[254092]: 2025-11-25 16:36:52.076 254096 DEBUG nova.storage.rbd_utils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:36:52 compute-0 nova_compute[254092]: 2025-11-25 16:36:52.080 254096 DEBUG oslo_concurrency.processutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/288ada45-c7fc-4ddc-8b83-1c03ffa14fe6/disk.config 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:36:52 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3247762840' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:36:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-252f30cc32666a06b7344e67c326e32e093690e94354ed63f9925e9915a2a151-merged.mount: Deactivated successfully.
Nov 25 16:36:52 compute-0 nova_compute[254092]: 2025-11-25 16:36:52.990 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1508: 321 pgs: 321 active+clean; 213 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 118 op/s
Nov 25 16:36:53 compute-0 podman[309385]: 2025-11-25 16:36:53.988926677 +0000 UTC m=+3.497968420 container remove 0da40303ed0385d78a7ca9790b535f0bbecb907a282ba0e470d3f323a36ca132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_payne, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 16:36:54 compute-0 systemd[1]: libpod-conmon-0da40303ed0385d78a7ca9790b535f0bbecb907a282ba0e470d3f323a36ca132.scope: Deactivated successfully.
Nov 25 16:36:54 compute-0 sudo[309259]: pam_unix(sudo:session): session closed for user root
Nov 25 16:36:54 compute-0 sudo[309506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:36:54 compute-0 sudo[309506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:36:54 compute-0 ceph-mon[74985]: pgmap v1507: 321 pgs: 321 active+clean; 213 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Nov 25 16:36:54 compute-0 sudo[309506]: pam_unix(sudo:session): session closed for user root
Nov 25 16:36:54 compute-0 sudo[309531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:36:54 compute-0 sudo[309531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:36:54 compute-0 sudo[309531]: pam_unix(sudo:session): session closed for user root
Nov 25 16:36:54 compute-0 sudo[309556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:36:54 compute-0 sudo[309556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:36:54 compute-0 sudo[309556]: pam_unix(sudo:session): session closed for user root
Nov 25 16:36:54 compute-0 sudo[309581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 16:36:54 compute-0 sudo[309581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:36:54 compute-0 nova_compute[254092]: 2025-11-25 16:36:54.548 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:36:54 compute-0 podman[309645]: 2025-11-25 16:36:54.607954221 +0000 UTC m=+0.024031103 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:36:55 compute-0 podman[309645]: 2025-11-25 16:36:55.033444844 +0000 UTC m=+0.449521746 container create f4d1d2fb22ec231982c7fd3f0ab3e7268b9d1bfd25d0fd0ddba274a6b4a6927d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_payne, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:36:55 compute-0 systemd[1]: Started libpod-conmon-f4d1d2fb22ec231982c7fd3f0ab3e7268b9d1bfd25d0fd0ddba274a6b4a6927d.scope.
Nov 25 16:36:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 16:36:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/508781239' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:36:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 16:36:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/508781239' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:36:55 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:36:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1509: 321 pgs: 321 active+clean; 213 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 118 op/s
Nov 25 16:36:55 compute-0 ceph-mon[74985]: pgmap v1508: 321 pgs: 321 active+clean; 213 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 118 op/s
Nov 25 16:36:55 compute-0 podman[309645]: 2025-11-25 16:36:55.965242133 +0000 UTC m=+1.381319015 container init f4d1d2fb22ec231982c7fd3f0ab3e7268b9d1bfd25d0fd0ddba274a6b4a6927d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_payne, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 16:36:55 compute-0 podman[309645]: 2025-11-25 16:36:55.973533578 +0000 UTC m=+1.389610440 container start f4d1d2fb22ec231982c7fd3f0ab3e7268b9d1bfd25d0fd0ddba274a6b4a6927d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:36:56 compute-0 romantic_payne[309661]: 167 167
Nov 25 16:36:56 compute-0 systemd[1]: libpod-f4d1d2fb22ec231982c7fd3f0ab3e7268b9d1bfd25d0fd0ddba274a6b4a6927d.scope: Deactivated successfully.
Nov 25 16:36:56 compute-0 podman[309645]: 2025-11-25 16:36:56.481088699 +0000 UTC m=+1.897165591 container attach f4d1d2fb22ec231982c7fd3f0ab3e7268b9d1bfd25d0fd0ddba274a6b4a6927d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_payne, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Nov 25 16:36:56 compute-0 podman[309645]: 2025-11-25 16:36:56.482045884 +0000 UTC m=+1.898122756 container died f4d1d2fb22ec231982c7fd3f0ab3e7268b9d1bfd25d0fd0ddba274a6b4a6927d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 16:36:56 compute-0 nova_compute[254092]: 2025-11-25 16:36:56.729 254096 DEBUG oslo_concurrency.processutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/288ada45-c7fc-4ddc-8b83-1c03ffa14fe6/disk.config 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.649s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:36:56 compute-0 nova_compute[254092]: 2025-11-25 16:36:56.731 254096 INFO nova.virt.libvirt.driver [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Deleting local config drive /var/lib/nova/instances/288ada45-c7fc-4ddc-8b83-1c03ffa14fe6/disk.config because it was imported into RBD.
Nov 25 16:36:56 compute-0 kernel: tap379de8c7-cb: entered promiscuous mode
Nov 25 16:36:56 compute-0 NetworkManager[48891]: <info>  [1764088616.7875] manager: (tap379de8c7-cb): new Tun device (/org/freedesktop/NetworkManager/Devices/187)
Nov 25 16:36:56 compute-0 ovn_controller[153477]: 2025-11-25T16:36:56Z|00428|binding|INFO|Claiming lport 379de8c7-cb88-4a89-8008-f7bb1cbfc09b for this chassis.
Nov 25 16:36:56 compute-0 ovn_controller[153477]: 2025-11-25T16:36:56Z|00429|binding|INFO|379de8c7-cb88-4a89-8008-f7bb1cbfc09b: Claiming fa:16:3e:33:4b:3b 10.100.0.8
Nov 25 16:36:56 compute-0 nova_compute[254092]: 2025-11-25 16:36:56.788 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:56 compute-0 nova_compute[254092]: 2025-11-25 16:36:56.794 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:56 compute-0 systemd-udevd[309688]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:36:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/508781239' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:36:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/508781239' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:36:56 compute-0 ceph-mon[74985]: pgmap v1509: 321 pgs: 321 active+clean; 213 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 118 op/s
Nov 25 16:36:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a811f387cdcd39b961103710d0e8c18c0edaa33832bd59ffd3ed72815f9af59-merged.mount: Deactivated successfully.
Nov 25 16:36:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:56.827 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:4b:3b 10.100.0.8'], port_security=['fa:16:3e:33:4b:3b 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '288ada45-c7fc-4ddc-8b83-1c03ffa14fe6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e469a950-7044-48bc-a261-bc9effe2c2aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff5d31ffc7934838a461d6ac8aa546c9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dcbb2958-ef75-485e-8580-da48580c7577', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff181adb-f502-4aa2-aaa5-e1f9a56ea733, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=379de8c7-cb88-4a89-8008-f7bb1cbfc09b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:36:56 compute-0 NetworkManager[48891]: <info>  [1764088616.8299] device (tap379de8c7-cb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:36:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:56.828 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 379de8c7-cb88-4a89-8008-f7bb1cbfc09b in datapath e469a950-7044-48bc-a261-bc9effe2c2aa bound to our chassis
Nov 25 16:36:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:56.829 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e469a950-7044-48bc-a261-bc9effe2c2aa
Nov 25 16:36:56 compute-0 NetworkManager[48891]: <info>  [1764088616.8314] device (tap379de8c7-cb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:36:56 compute-0 systemd-machined[216343]: New machine qemu-58-instance-00000031.
Nov 25 16:36:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:56.842 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2242b911-42d3-4d9a-80f2-a93ec38e3f0d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:56.843 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape469a950-71 in ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:36:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:56.845 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape469a950-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:36:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:56.845 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[66b72d6c-fb37-41d4-8bef-fdb7c09dc2c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:56.846 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[47370103-572e-45ee-86e5-bf2696653fbb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:56.858 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[166b9a5a-6c76-4e6b-9993-6854735f00e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:56 compute-0 systemd[1]: Started Virtual Machine qemu-58-instance-00000031.
Nov 25 16:36:56 compute-0 nova_compute[254092]: 2025-11-25 16:36:56.879 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:56 compute-0 ovn_controller[153477]: 2025-11-25T16:36:56Z|00430|binding|INFO|Setting lport 379de8c7-cb88-4a89-8008-f7bb1cbfc09b ovn-installed in OVS
Nov 25 16:36:56 compute-0 ovn_controller[153477]: 2025-11-25T16:36:56Z|00431|binding|INFO|Setting lport 379de8c7-cb88-4a89-8008-f7bb1cbfc09b up in Southbound
Nov 25 16:36:56 compute-0 nova_compute[254092]: 2025-11-25 16:36:56.883 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:56.883 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[93da4d28-c456-4f8d-afce-cf242d672333]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:56.910 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c47fac93-823a-48cc-ae03-2ebcb7f053c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:56.916 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[04e73cc5-121b-48ad-a814-afeb23e7fbcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:56 compute-0 NetworkManager[48891]: <info>  [1764088616.9177] manager: (tape469a950-70): new Veth device (/org/freedesktop/NetworkManager/Devices/188)
Nov 25 16:36:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:56.948 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[76ca34fb-ff27-4857-b68b-bb16bff69956]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:56.951 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[3829d4f4-f660-4cbc-9e46-0f60e911312e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:56 compute-0 NetworkManager[48891]: <info>  [1764088616.9699] device (tape469a950-70): carrier: link connected
Nov 25 16:36:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:56.975 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[721c6ee3-a82f-4044-8b1f-88e62379e66d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:56.991 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a8438512-d3b3-437e-86cd-2daac9872407]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape469a950-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:78:e4:a6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 125], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507456, 'reachable_time': 36528, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309724, 'error': None, 'target': 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:57.004 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[05c8b3cf-5525-4a2f-9c45-583274f000fd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe78:e4a6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 507456, 'tstamp': 507456}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309725, 'error': None, 'target': 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:57.021 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c526922c-97f6-4ab7-97ec-7ad7aa20795d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape469a950-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:78:e4:a6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 125], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507456, 'reachable_time': 36528, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309726, 'error': None, 'target': 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:57.050 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2e95576f-2b1c-4c7c-bb7f-8df65d1017f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:57.107 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9a06ca57-c8c9-41e3-beb2-61632836d786]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:57.108 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape469a950-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:36:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:57.109 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:36:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:57.109 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape469a950-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:36:57 compute-0 nova_compute[254092]: 2025-11-25 16:36:57.111 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:57 compute-0 NetworkManager[48891]: <info>  [1764088617.1116] manager: (tape469a950-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/189)
Nov 25 16:36:57 compute-0 kernel: tape469a950-70: entered promiscuous mode
Nov 25 16:36:57 compute-0 nova_compute[254092]: 2025-11-25 16:36:57.113 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:57.113 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape469a950-70, col_values=(('external_ids', {'iface-id': 'd55c2366-e0b4-4cb8-86a6-03b9430fb138'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:36:57 compute-0 nova_compute[254092]: 2025-11-25 16:36:57.114 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:57 compute-0 ovn_controller[153477]: 2025-11-25T16:36:57Z|00432|binding|INFO|Releasing lport d55c2366-e0b4-4cb8-86a6-03b9430fb138 from this chassis (sb_readonly=0)
Nov 25 16:36:57 compute-0 nova_compute[254092]: 2025-11-25 16:36:57.130 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:57.131 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e469a950-7044-48bc-a261-bc9effe2c2aa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e469a950-7044-48bc-a261-bc9effe2c2aa.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:36:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:57.132 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[42679748-0f75-4cc5-b7fb-07aa01b8fae1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:36:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:57.132 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:36:57 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:36:57 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:36:57 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-e469a950-7044-48bc-a261-bc9effe2c2aa
Nov 25 16:36:57 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:36:57 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:36:57 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:36:57 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/e469a950-7044-48bc-a261-bc9effe2c2aa.pid.haproxy
Nov 25 16:36:57 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:36:57 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:36:57 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:36:57 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:36:57 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:36:57 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:36:57 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:36:57 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:36:57 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:36:57 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:36:57 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:36:57 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:36:57 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:36:57 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:36:57 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:36:57 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:36:57 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:36:57 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:36:57 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:36:57 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:36:57 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID e469a950-7044-48bc-a261-bc9effe2c2aa
Nov 25 16:36:57 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:36:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:36:57.133 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'env', 'PROCESS_TAG=haproxy-e469a950-7044-48bc-a261-bc9effe2c2aa', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e469a950-7044-48bc-a261-bc9effe2c2aa.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:36:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1510: 321 pgs: 321 active+clean; 213 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Nov 25 16:36:57 compute-0 podman[309645]: 2025-11-25 16:36:57.560315667 +0000 UTC m=+2.976392529 container remove f4d1d2fb22ec231982c7fd3f0ab3e7268b9d1bfd25d0fd0ddba274a6b4a6927d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_payne, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 16:36:57 compute-0 systemd[1]: libpod-conmon-f4d1d2fb22ec231982c7fd3f0ab3e7268b9d1bfd25d0fd0ddba274a6b4a6927d.scope: Deactivated successfully.
Nov 25 16:36:57 compute-0 nova_compute[254092]: 2025-11-25 16:36:57.701 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088617.7009575, 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:36:57 compute-0 nova_compute[254092]: 2025-11-25 16:36:57.702 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] VM Started (Lifecycle Event)
Nov 25 16:36:57 compute-0 nova_compute[254092]: 2025-11-25 16:36:57.724 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:36:57 compute-0 nova_compute[254092]: 2025-11-25 16:36:57.730 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088617.702132, 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:36:57 compute-0 nova_compute[254092]: 2025-11-25 16:36:57.730 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] VM Paused (Lifecycle Event)
Nov 25 16:36:57 compute-0 nova_compute[254092]: 2025-11-25 16:36:57.747 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:36:57 compute-0 nova_compute[254092]: 2025-11-25 16:36:57.753 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:36:57 compute-0 podman[309801]: 2025-11-25 16:36:57.666362844 +0000 UTC m=+0.019894921 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:36:57 compute-0 nova_compute[254092]: 2025-11-25 16:36:57.769 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:36:57 compute-0 nova_compute[254092]: 2025-11-25 16:36:57.991 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:58 compute-0 nova_compute[254092]: 2025-11-25 16:36:58.366 254096 DEBUG nova.compute.manager [req-8f75caff-51d3-4b63-8fec-68f40aed7f62 req-107736f8-c9ed-436f-ae91-be2bc2d593dc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Received event network-vif-plugged-379de8c7-cb88-4a89-8008-f7bb1cbfc09b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:36:58 compute-0 nova_compute[254092]: 2025-11-25 16:36:58.367 254096 DEBUG oslo_concurrency.lockutils [req-8f75caff-51d3-4b63-8fec-68f40aed7f62 req-107736f8-c9ed-436f-ae91-be2bc2d593dc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:36:58 compute-0 nova_compute[254092]: 2025-11-25 16:36:58.367 254096 DEBUG oslo_concurrency.lockutils [req-8f75caff-51d3-4b63-8fec-68f40aed7f62 req-107736f8-c9ed-436f-ae91-be2bc2d593dc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:36:58 compute-0 nova_compute[254092]: 2025-11-25 16:36:58.367 254096 DEBUG oslo_concurrency.lockutils [req-8f75caff-51d3-4b63-8fec-68f40aed7f62 req-107736f8-c9ed-436f-ae91-be2bc2d593dc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:58 compute-0 nova_compute[254092]: 2025-11-25 16:36:58.368 254096 DEBUG nova.compute.manager [req-8f75caff-51d3-4b63-8fec-68f40aed7f62 req-107736f8-c9ed-436f-ae91-be2bc2d593dc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Processing event network-vif-plugged-379de8c7-cb88-4a89-8008-f7bb1cbfc09b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:36:58 compute-0 nova_compute[254092]: 2025-11-25 16:36:58.368 254096 DEBUG nova.compute.manager [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:36:58 compute-0 nova_compute[254092]: 2025-11-25 16:36:58.372 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088618.372311, 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:36:58 compute-0 nova_compute[254092]: 2025-11-25 16:36:58.372 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] VM Resumed (Lifecycle Event)
Nov 25 16:36:58 compute-0 nova_compute[254092]: 2025-11-25 16:36:58.374 254096 DEBUG nova.virt.libvirt.driver [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:36:58 compute-0 nova_compute[254092]: 2025-11-25 16:36:58.376 254096 INFO nova.virt.libvirt.driver [-] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Instance spawned successfully.
Nov 25 16:36:58 compute-0 nova_compute[254092]: 2025-11-25 16:36:58.377 254096 DEBUG nova.virt.libvirt.driver [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:36:58 compute-0 nova_compute[254092]: 2025-11-25 16:36:58.403 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:36:58 compute-0 nova_compute[254092]: 2025-11-25 16:36:58.407 254096 DEBUG nova.virt.libvirt.driver [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:36:58 compute-0 nova_compute[254092]: 2025-11-25 16:36:58.408 254096 DEBUG nova.virt.libvirt.driver [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:36:58 compute-0 nova_compute[254092]: 2025-11-25 16:36:58.408 254096 DEBUG nova.virt.libvirt.driver [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:36:58 compute-0 nova_compute[254092]: 2025-11-25 16:36:58.408 254096 DEBUG nova.virt.libvirt.driver [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:36:58 compute-0 nova_compute[254092]: 2025-11-25 16:36:58.409 254096 DEBUG nova.virt.libvirt.driver [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:36:58 compute-0 nova_compute[254092]: 2025-11-25 16:36:58.409 254096 DEBUG nova.virt.libvirt.driver [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:36:58 compute-0 nova_compute[254092]: 2025-11-25 16:36:58.413 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:36:58 compute-0 nova_compute[254092]: 2025-11-25 16:36:58.440 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:36:58 compute-0 nova_compute[254092]: 2025-11-25 16:36:58.593 254096 INFO nova.compute.manager [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Took 15.03 seconds to spawn the instance on the hypervisor.
Nov 25 16:36:58 compute-0 nova_compute[254092]: 2025-11-25 16:36:58.594 254096 DEBUG nova.compute.manager [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:36:58 compute-0 podman[309820]: 2025-11-25 16:36:58.57522544 +0000 UTC m=+0.872002337 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:36:58 compute-0 ceph-mon[74985]: pgmap v1510: 321 pgs: 321 active+clean; 213 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Nov 25 16:36:58 compute-0 nova_compute[254092]: 2025-11-25 16:36:58.892 254096 INFO nova.compute.manager [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Took 16.72 seconds to build instance.
Nov 25 16:36:58 compute-0 nova_compute[254092]: 2025-11-25 16:36:58.894 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088603.8917487, 6ca43770-6e19-4279-9bf2-c44dcc4d5260 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:36:58 compute-0 nova_compute[254092]: 2025-11-25 16:36:58.894 254096 INFO nova.compute.manager [-] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] VM Stopped (Lifecycle Event)
Nov 25 16:36:58 compute-0 nova_compute[254092]: 2025-11-25 16:36:58.925 254096 DEBUG nova.compute.manager [None req-93336a62-10ea-4730-a8a3-944c3bc44474 - - - - - -] [instance: 6ca43770-6e19-4279-9bf2-c44dcc4d5260] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:36:58 compute-0 podman[309801]: 2025-11-25 16:36:58.928228997 +0000 UTC m=+1.281761054 container create ceffc45733c669125a2e5d950ace0cea425fadb832b19c6cd62ea214a3883276 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 16:36:58 compute-0 nova_compute[254092]: 2025-11-25 16:36:58.965 254096 DEBUG oslo_concurrency.lockutils [None req-7f62bd0c-a7ab-457e-86c7-3849f748be21 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.947s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:36:59 compute-0 systemd[1]: Started libpod-conmon-ceffc45733c669125a2e5d950ace0cea425fadb832b19c6cd62ea214a3883276.scope.
Nov 25 16:36:59 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:36:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efe71abb6aef0ccd46cc56760891a8c6424dbecf1dc6c300940a7bd657aa611f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:36:59 compute-0 podman[309820]: 2025-11-25 16:36:59.352798916 +0000 UTC m=+1.649575773 container create d8dc39cccfaa15a436523e96e0e02cb63a387314bc717d029a2862c14d31c162 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 16:36:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1511: 321 pgs: 321 active+clean; 213 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.2 MiB/s wr, 35 op/s
Nov 25 16:36:59 compute-0 nova_compute[254092]: 2025-11-25 16:36:59.551 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:36:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:36:59 compute-0 systemd[1]: Started libpod-conmon-d8dc39cccfaa15a436523e96e0e02cb63a387314bc717d029a2862c14d31c162.scope.
Nov 25 16:36:59 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:36:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55f807d57a878fe24f50f2f9a64393172c0df0b4c38c5e482f0cf834c5d77ab6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:36:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55f807d57a878fe24f50f2f9a64393172c0df0b4c38c5e482f0cf834c5d77ab6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:36:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55f807d57a878fe24f50f2f9a64393172c0df0b4c38c5e482f0cf834c5d77ab6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:36:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55f807d57a878fe24f50f2f9a64393172c0df0b4c38c5e482f0cf834c5d77ab6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:36:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e191 do_prune osdmap full prune enabled
Nov 25 16:36:59 compute-0 podman[309820]: 2025-11-25 16:36:59.800835171 +0000 UTC m=+2.097612038 container init d8dc39cccfaa15a436523e96e0e02cb63a387314bc717d029a2862c14d31c162 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_gauss, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:36:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e192 e192: 3 total, 3 up, 3 in
Nov 25 16:36:59 compute-0 podman[309820]: 2025-11-25 16:36:59.81704746 +0000 UTC m=+2.113824317 container start d8dc39cccfaa15a436523e96e0e02cb63a387314bc717d029a2862c14d31c162 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 16:37:00 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e192: 3 total, 3 up, 3 in
Nov 25 16:37:00 compute-0 podman[309801]: 2025-11-25 16:37:00.277576404 +0000 UTC m=+2.631108471 container init ceffc45733c669125a2e5d950ace0cea425fadb832b19c6cd62ea214a3883276 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 25 16:37:00 compute-0 podman[309801]: 2025-11-25 16:37:00.284552903 +0000 UTC m=+2.638084960 container start ceffc45733c669125a2e5d950ace0cea425fadb832b19c6cd62ea214a3883276 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:37:00 compute-0 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[309837]: [NOTICE]   (309848) : New worker (309850) forked
Nov 25 16:37:00 compute-0 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[309837]: [NOTICE]   (309848) : Loading success.
Nov 25 16:37:00 compute-0 nova_compute[254092]: 2025-11-25 16:37:00.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]: {
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:     "0": [
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:         {
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:             "devices": [
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:                 "/dev/loop3"
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:             ],
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:             "lv_name": "ceph_lv0",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:             "lv_size": "21470642176",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:             "name": "ceph_lv0",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:             "tags": {
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:                 "ceph.cluster_name": "ceph",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:                 "ceph.crush_device_class": "",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:                 "ceph.encrypted": "0",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:                 "ceph.osd_id": "0",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:                 "ceph.type": "block",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:                 "ceph.vdo": "0"
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:             },
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:             "type": "block",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:             "vg_name": "ceph_vg0"
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:         }
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:     ],
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:     "1": [
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:         {
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:             "devices": [
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:                 "/dev/loop4"
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:             ],
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:             "lv_name": "ceph_lv1",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:             "lv_size": "21470642176",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:             "name": "ceph_lv1",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:             "tags": {
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:                 "ceph.cluster_name": "ceph",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:                 "ceph.crush_device_class": "",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:                 "ceph.encrypted": "0",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:                 "ceph.osd_id": "1",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:                 "ceph.type": "block",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:                 "ceph.vdo": "0"
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:             },
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:             "type": "block",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:             "vg_name": "ceph_vg1"
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:         }
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:     ],
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:     "2": [
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:         {
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:             "devices": [
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:                 "/dev/loop5"
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:             ],
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:             "lv_name": "ceph_lv2",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:             "lv_size": "21470642176",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:             "name": "ceph_lv2",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:             "tags": {
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:                 "ceph.cluster_name": "ceph",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:                 "ceph.crush_device_class": "",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:                 "ceph.encrypted": "0",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:                 "ceph.osd_id": "2",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:                 "ceph.type": "block",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:                 "ceph.vdo": "0"
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:             },
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:             "type": "block",
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:             "vg_name": "ceph_vg2"
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:         }
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]:     ]
Nov 25 16:37:00 compute-0 vigilant_gauss[309842]: }
Nov 25 16:37:00 compute-0 systemd[1]: libpod-d8dc39cccfaa15a436523e96e0e02cb63a387314bc717d029a2862c14d31c162.scope: Deactivated successfully.
Nov 25 16:37:01 compute-0 nova_compute[254092]: 2025-11-25 16:37:01.216 254096 DEBUG nova.compute.manager [req-76f2a5d7-dd66-4742-8279-bf6ba1d32742 req-a72e2f33-853b-408f-b1a8-adca46b54618 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Received event network-vif-plugged-379de8c7-cb88-4a89-8008-f7bb1cbfc09b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:37:01 compute-0 nova_compute[254092]: 2025-11-25 16:37:01.217 254096 DEBUG oslo_concurrency.lockutils [req-76f2a5d7-dd66-4742-8279-bf6ba1d32742 req-a72e2f33-853b-408f-b1a8-adca46b54618 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:37:01 compute-0 nova_compute[254092]: 2025-11-25 16:37:01.218 254096 DEBUG oslo_concurrency.lockutils [req-76f2a5d7-dd66-4742-8279-bf6ba1d32742 req-a72e2f33-853b-408f-b1a8-adca46b54618 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:37:01 compute-0 nova_compute[254092]: 2025-11-25 16:37:01.218 254096 DEBUG oslo_concurrency.lockutils [req-76f2a5d7-dd66-4742-8279-bf6ba1d32742 req-a72e2f33-853b-408f-b1a8-adca46b54618 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:37:01 compute-0 nova_compute[254092]: 2025-11-25 16:37:01.219 254096 DEBUG nova.compute.manager [req-76f2a5d7-dd66-4742-8279-bf6ba1d32742 req-a72e2f33-853b-408f-b1a8-adca46b54618 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] No waiting events found dispatching network-vif-plugged-379de8c7-cb88-4a89-8008-f7bb1cbfc09b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:37:01 compute-0 nova_compute[254092]: 2025-11-25 16:37:01.219 254096 WARNING nova.compute.manager [req-76f2a5d7-dd66-4742-8279-bf6ba1d32742 req-a72e2f33-853b-408f-b1a8-adca46b54618 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Received unexpected event network-vif-plugged-379de8c7-cb88-4a89-8008-f7bb1cbfc09b for instance with vm_state active and task_state None.
Nov 25 16:37:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1513: 321 pgs: 321 active+clean; 213 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 97 op/s
Nov 25 16:37:01 compute-0 podman[309820]: 2025-11-25 16:37:01.439476377 +0000 UTC m=+3.736253234 container attach d8dc39cccfaa15a436523e96e0e02cb63a387314bc717d029a2862c14d31c162 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_gauss, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:37:01 compute-0 podman[309820]: 2025-11-25 16:37:01.441151221 +0000 UTC m=+3.737928078 container died d8dc39cccfaa15a436523e96e0e02cb63a387314bc717d029a2862c14d31c162 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 16:37:01 compute-0 ceph-mon[74985]: pgmap v1511: 321 pgs: 321 active+clean; 213 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.2 MiB/s wr, 35 op/s
Nov 25 16:37:01 compute-0 ceph-mon[74985]: osdmap e192: 3 total, 3 up, 3 in
Nov 25 16:37:01 compute-0 nova_compute[254092]: 2025-11-25 16:37:01.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:37:01 compute-0 nova_compute[254092]: 2025-11-25 16:37:01.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:37:01 compute-0 nova_compute[254092]: 2025-11-25 16:37:01.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 16:37:01 compute-0 nova_compute[254092]: 2025-11-25 16:37:01.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:37:01 compute-0 nova_compute[254092]: 2025-11-25 16:37:01.541 254096 INFO nova.compute.manager [None req-1a9b566b-347a-4ea2-af78-c5c4a00ff337 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Pausing
Nov 25 16:37:01 compute-0 nova_compute[254092]: 2025-11-25 16:37:01.543 254096 DEBUG nova.objects.instance [None req-1a9b566b-347a-4ea2-af78-c5c4a00ff337 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lazy-loading 'flavor' on Instance uuid 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:37:01 compute-0 nova_compute[254092]: 2025-11-25 16:37:01.566 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:37:01 compute-0 nova_compute[254092]: 2025-11-25 16:37:01.566 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:37:01 compute-0 nova_compute[254092]: 2025-11-25 16:37:01.566 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:37:01 compute-0 nova_compute[254092]: 2025-11-25 16:37:01.567 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 16:37:01 compute-0 nova_compute[254092]: 2025-11-25 16:37:01.567 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:37:01 compute-0 nova_compute[254092]: 2025-11-25 16:37:01.629 254096 DEBUG nova.compute.manager [None req-1a9b566b-347a-4ea2-af78-c5c4a00ff337 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:37:01 compute-0 nova_compute[254092]: 2025-11-25 16:37:01.631 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088621.62802, 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:37:01 compute-0 nova_compute[254092]: 2025-11-25 16:37:01.632 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] VM Paused (Lifecycle Event)
Nov 25 16:37:01 compute-0 nova_compute[254092]: 2025-11-25 16:37:01.657 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:37:01 compute-0 nova_compute[254092]: 2025-11-25 16:37:01.660 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:37:01 compute-0 nova_compute[254092]: 2025-11-25 16:37:01.673 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] During sync_power_state the instance has a pending task (pausing). Skip.
Nov 25 16:37:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-55f807d57a878fe24f50f2f9a64393172c0df0b4c38c5e482f0cf834c5d77ab6-merged.mount: Deactivated successfully.
Nov 25 16:37:02 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:37:02 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3675372627' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:37:02 compute-0 nova_compute[254092]: 2025-11-25 16:37:02.207 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.640s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:37:02 compute-0 podman[309820]: 2025-11-25 16:37:02.28590454 +0000 UTC m=+4.582681447 container remove d8dc39cccfaa15a436523e96e0e02cb63a387314bc717d029a2862c14d31c162 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 16:37:02 compute-0 nova_compute[254092]: 2025-11-25 16:37:02.309 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000002e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:37:02 compute-0 nova_compute[254092]: 2025-11-25 16:37:02.309 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000002e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:37:02 compute-0 nova_compute[254092]: 2025-11-25 16:37:02.315 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000031 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:37:02 compute-0 nova_compute[254092]: 2025-11-25 16:37:02.315 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000031 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:37:02 compute-0 sudo[309581]: pam_unix(sudo:session): session closed for user root
Nov 25 16:37:02 compute-0 podman[309876]: 2025-11-25 16:37:02.385533642 +0000 UTC m=+0.796678944 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:37:02 compute-0 systemd[1]: libpod-conmon-d8dc39cccfaa15a436523e96e0e02cb63a387314bc717d029a2862c14d31c162.scope: Deactivated successfully.
Nov 25 16:37:02 compute-0 sudo[309939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:37:02 compute-0 sudo[309939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:37:02 compute-0 sudo[309939]: pam_unix(sudo:session): session closed for user root
Nov 25 16:37:02 compute-0 podman[309875]: 2025-11-25 16:37:02.416605275 +0000 UTC m=+0.827997324 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 16:37:02 compute-0 podman[309874]: 2025-11-25 16:37:02.416617495 +0000 UTC m=+0.823268006 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:37:02 compute-0 sudo[309986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:37:02 compute-0 sudo[309986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:37:02 compute-0 sudo[309986]: pam_unix(sudo:session): session closed for user root
Nov 25 16:37:02 compute-0 ceph-mon[74985]: pgmap v1513: 321 pgs: 321 active+clean; 213 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 97 op/s
Nov 25 16:37:02 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3675372627' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:37:02 compute-0 nova_compute[254092]: 2025-11-25 16:37:02.542 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:37:02 compute-0 nova_compute[254092]: 2025-11-25 16:37:02.544 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3801MB free_disk=59.92188262939453GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 16:37:02 compute-0 nova_compute[254092]: 2025-11-25 16:37:02.544 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:37:02 compute-0 nova_compute[254092]: 2025-11-25 16:37:02.544 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:37:02 compute-0 sudo[310012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:37:02 compute-0 sudo[310012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:37:02 compute-0 sudo[310012]: pam_unix(sudo:session): session closed for user root
Nov 25 16:37:02 compute-0 sudo[310037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 16:37:02 compute-0 sudo[310037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:37:02 compute-0 nova_compute[254092]: 2025-11-25 16:37:02.622 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance e549d4e8-b824-480b-b81a-83e2ea1eff12 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:37:02 compute-0 nova_compute[254092]: 2025-11-25 16:37:02.622 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:37:02 compute-0 nova_compute[254092]: 2025-11-25 16:37:02.623 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 16:37:02 compute-0 nova_compute[254092]: 2025-11-25 16:37:02.623 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 16:37:02 compute-0 nova_compute[254092]: 2025-11-25 16:37:02.680 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:37:02 compute-0 nova_compute[254092]: 2025-11-25 16:37:02.993 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:03 compute-0 podman[310121]: 2025-11-25 16:37:02.941131446 +0000 UTC m=+0.024441455 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:37:03 compute-0 podman[310121]: 2025-11-25 16:37:03.040336707 +0000 UTC m=+0.123646696 container create a7ffa1f52413192c7802dd792c667a142c53526de22ca45e4231850b661ee73f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 16:37:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:37:03 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/615259839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:37:03 compute-0 nova_compute[254092]: 2025-11-25 16:37:03.135 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:37:03 compute-0 nova_compute[254092]: 2025-11-25 16:37:03.141 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:37:03 compute-0 nova_compute[254092]: 2025-11-25 16:37:03.156 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:37:03 compute-0 nova_compute[254092]: 2025-11-25 16:37:03.219 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 16:37:03 compute-0 nova_compute[254092]: 2025-11-25 16:37:03.219 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:37:03 compute-0 systemd[1]: Started libpod-conmon-a7ffa1f52413192c7802dd792c667a142c53526de22ca45e4231850b661ee73f.scope.
Nov 25 16:37:03 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:37:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1514: 321 pgs: 321 active+clean; 213 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 97 op/s
Nov 25 16:37:03 compute-0 podman[310121]: 2025-11-25 16:37:03.42713041 +0000 UTC m=+0.510440399 container init a7ffa1f52413192c7802dd792c667a142c53526de22ca45e4231850b661ee73f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_saha, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:37:03 compute-0 podman[310121]: 2025-11-25 16:37:03.437654816 +0000 UTC m=+0.520964815 container start a7ffa1f52413192c7802dd792c667a142c53526de22ca45e4231850b661ee73f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 16:37:03 compute-0 compassionate_saha[310139]: 167 167
Nov 25 16:37:03 compute-0 systemd[1]: libpod-a7ffa1f52413192c7802dd792c667a142c53526de22ca45e4231850b661ee73f.scope: Deactivated successfully.
Nov 25 16:37:03 compute-0 podman[310121]: 2025-11-25 16:37:03.49421493 +0000 UTC m=+0.577524919 container attach a7ffa1f52413192c7802dd792c667a142c53526de22ca45e4231850b661ee73f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_saha, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:37:03 compute-0 podman[310121]: 2025-11-25 16:37:03.49566714 +0000 UTC m=+0.578977129 container died a7ffa1f52413192c7802dd792c667a142c53526de22ca45e4231850b661ee73f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_saha, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:37:03 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/615259839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:37:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-47d967a0046aeff6ac0de221fa83a3106612c74ffe71dc114791a7d0b77a2637-merged.mount: Deactivated successfully.
Nov 25 16:37:04 compute-0 nova_compute[254092]: 2025-11-25 16:37:04.219 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:37:04 compute-0 nova_compute[254092]: 2025-11-25 16:37:04.220 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:37:04 compute-0 podman[310121]: 2025-11-25 16:37:04.356069442 +0000 UTC m=+1.439379431 container remove a7ffa1f52413192c7802dd792c667a142c53526de22ca45e4231850b661ee73f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:37:04 compute-0 systemd[1]: libpod-conmon-a7ffa1f52413192c7802dd792c667a142c53526de22ca45e4231850b661ee73f.scope: Deactivated successfully.
Nov 25 16:37:04 compute-0 nova_compute[254092]: 2025-11-25 16:37:04.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:37:04 compute-0 nova_compute[254092]: 2025-11-25 16:37:04.552 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:37:04 compute-0 podman[310164]: 2025-11-25 16:37:04.574367005 +0000 UTC m=+0.037157420 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:37:04 compute-0 podman[310164]: 2025-11-25 16:37:04.698975485 +0000 UTC m=+0.161765880 container create 926b0f97939e996374fd3d0d2825593f287c802b8ab86e36ea1e7c5d16099ce7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_sanderson, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:37:04 compute-0 nova_compute[254092]: 2025-11-25 16:37:04.709 254096 DEBUG oslo_concurrency.lockutils [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:37:04 compute-0 nova_compute[254092]: 2025-11-25 16:37:04.710 254096 DEBUG oslo_concurrency.lockutils [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:37:04 compute-0 nova_compute[254092]: 2025-11-25 16:37:04.710 254096 DEBUG oslo_concurrency.lockutils [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:37:04 compute-0 nova_compute[254092]: 2025-11-25 16:37:04.710 254096 DEBUG oslo_concurrency.lockutils [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:37:04 compute-0 nova_compute[254092]: 2025-11-25 16:37:04.711 254096 DEBUG oslo_concurrency.lockutils [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:37:04 compute-0 nova_compute[254092]: 2025-11-25 16:37:04.712 254096 INFO nova.compute.manager [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Terminating instance
Nov 25 16:37:04 compute-0 nova_compute[254092]: 2025-11-25 16:37:04.713 254096 DEBUG nova.compute.manager [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:37:04 compute-0 ceph-mon[74985]: pgmap v1514: 321 pgs: 321 active+clean; 213 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 97 op/s
Nov 25 16:37:04 compute-0 kernel: tap379de8c7-cb (unregistering): left promiscuous mode
Nov 25 16:37:04 compute-0 NetworkManager[48891]: <info>  [1764088624.7597] device (tap379de8c7-cb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:37:04 compute-0 systemd[1]: Started libpod-conmon-926b0f97939e996374fd3d0d2825593f287c802b8ab86e36ea1e7c5d16099ce7.scope.
Nov 25 16:37:04 compute-0 ovn_controller[153477]: 2025-11-25T16:37:04Z|00433|binding|INFO|Releasing lport 379de8c7-cb88-4a89-8008-f7bb1cbfc09b from this chassis (sb_readonly=0)
Nov 25 16:37:04 compute-0 ovn_controller[153477]: 2025-11-25T16:37:04Z|00434|binding|INFO|Setting lport 379de8c7-cb88-4a89-8008-f7bb1cbfc09b down in Southbound
Nov 25 16:37:04 compute-0 ovn_controller[153477]: 2025-11-25T16:37:04Z|00435|binding|INFO|Removing iface tap379de8c7-cb ovn-installed in OVS
Nov 25 16:37:04 compute-0 nova_compute[254092]: 2025-11-25 16:37:04.769 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:04 compute-0 nova_compute[254092]: 2025-11-25 16:37:04.771 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:04.776 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:4b:3b 10.100.0.8'], port_security=['fa:16:3e:33:4b:3b 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '288ada45-c7fc-4ddc-8b83-1c03ffa14fe6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e469a950-7044-48bc-a261-bc9effe2c2aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff5d31ffc7934838a461d6ac8aa546c9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dcbb2958-ef75-485e-8580-da48580c7577', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff181adb-f502-4aa2-aaa5-e1f9a56ea733, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=379de8c7-cb88-4a89-8008-f7bb1cbfc09b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:37:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:04.778 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 379de8c7-cb88-4a89-8008-f7bb1cbfc09b in datapath e469a950-7044-48bc-a261-bc9effe2c2aa unbound from our chassis
Nov 25 16:37:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:04.779 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e469a950-7044-48bc-a261-bc9effe2c2aa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:37:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:04.780 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[693149d5-978f-4258-933a-d5d8c568d15d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:04.781 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa namespace which is not needed anymore
Nov 25 16:37:04 compute-0 nova_compute[254092]: 2025-11-25 16:37:04.790 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:04 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:37:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1281102363eb5769abdcfc0f1da93f8f4087da8e690141e1c9e4905d1a60a96/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:37:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1281102363eb5769abdcfc0f1da93f8f4087da8e690141e1c9e4905d1a60a96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:37:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1281102363eb5769abdcfc0f1da93f8f4087da8e690141e1c9e4905d1a60a96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:37:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1281102363eb5769abdcfc0f1da93f8f4087da8e690141e1c9e4905d1a60a96/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:37:04 compute-0 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000031.scope: Deactivated successfully.
Nov 25 16:37:04 compute-0 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000031.scope: Consumed 3.711s CPU time.
Nov 25 16:37:04 compute-0 systemd-machined[216343]: Machine qemu-58-instance-00000031 terminated.
Nov 25 16:37:04 compute-0 podman[310164]: 2025-11-25 16:37:04.819132675 +0000 UTC m=+0.281923090 container init 926b0f97939e996374fd3d0d2825593f287c802b8ab86e36ea1e7c5d16099ce7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_sanderson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 16:37:04 compute-0 podman[310164]: 2025-11-25 16:37:04.829509907 +0000 UTC m=+0.292300302 container start 926b0f97939e996374fd3d0d2825593f287c802b8ab86e36ea1e7c5d16099ce7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_sanderson, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 16:37:04 compute-0 podman[310164]: 2025-11-25 16:37:04.834439149 +0000 UTC m=+0.297229565 container attach 926b0f97939e996374fd3d0d2825593f287c802b8ab86e36ea1e7c5d16099ce7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 16:37:04 compute-0 kernel: tap379de8c7-cb: entered promiscuous mode
Nov 25 16:37:04 compute-0 NetworkManager[48891]: <info>  [1764088624.9368] manager: (tap379de8c7-cb): new Tun device (/org/freedesktop/NetworkManager/Devices/190)
Nov 25 16:37:04 compute-0 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[309837]: [NOTICE]   (309848) : haproxy version is 2.8.14-c23fe91
Nov 25 16:37:04 compute-0 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[309837]: [NOTICE]   (309848) : path to executable is /usr/sbin/haproxy
Nov 25 16:37:04 compute-0 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[309837]: [WARNING]  (309848) : Exiting Master process...
Nov 25 16:37:04 compute-0 ovn_controller[153477]: 2025-11-25T16:37:04Z|00436|binding|INFO|Claiming lport 379de8c7-cb88-4a89-8008-f7bb1cbfc09b for this chassis.
Nov 25 16:37:04 compute-0 ovn_controller[153477]: 2025-11-25T16:37:04Z|00437|binding|INFO|379de8c7-cb88-4a89-8008-f7bb1cbfc09b: Claiming fa:16:3e:33:4b:3b 10.100.0.8
Nov 25 16:37:04 compute-0 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[309837]: [WARNING]  (309848) : Exiting Master process...
Nov 25 16:37:04 compute-0 kernel: tap379de8c7-cb (unregistering): left promiscuous mode
Nov 25 16:37:04 compute-0 nova_compute[254092]: 2025-11-25 16:37:04.940 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:04 compute-0 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[309837]: [ALERT]    (309848) : Current worker (309850) exited with code 143 (Terminated)
Nov 25 16:37:04 compute-0 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[309837]: [WARNING]  (309848) : All workers exited. Exiting... (0)
Nov 25 16:37:04 compute-0 systemd[1]: libpod-ceffc45733c669125a2e5d950ace0cea425fadb832b19c6cd62ea214a3883276.scope: Deactivated successfully.
Nov 25 16:37:04 compute-0 conmon[309837]: conmon ceffc45733c669125a2e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ceffc45733c669125a2e5d950ace0cea425fadb832b19c6cd62ea214a3883276.scope/container/memory.events
Nov 25 16:37:04 compute-0 podman[310210]: 2025-11-25 16:37:04.950127729 +0000 UTC m=+0.060709099 container died ceffc45733c669125a2e5d950ace0cea425fadb832b19c6cd62ea214a3883276 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 16:37:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:04.956 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:4b:3b 10.100.0.8'], port_security=['fa:16:3e:33:4b:3b 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '288ada45-c7fc-4ddc-8b83-1c03ffa14fe6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e469a950-7044-48bc-a261-bc9effe2c2aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff5d31ffc7934838a461d6ac8aa546c9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dcbb2958-ef75-485e-8580-da48580c7577', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff181adb-f502-4aa2-aaa5-e1f9a56ea733, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=379de8c7-cb88-4a89-8008-f7bb1cbfc09b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:37:04 compute-0 ovn_controller[153477]: 2025-11-25T16:37:04Z|00438|binding|INFO|Setting lport 379de8c7-cb88-4a89-8008-f7bb1cbfc09b ovn-installed in OVS
Nov 25 16:37:04 compute-0 ovn_controller[153477]: 2025-11-25T16:37:04Z|00439|binding|INFO|Setting lport 379de8c7-cb88-4a89-8008-f7bb1cbfc09b up in Southbound
Nov 25 16:37:04 compute-0 nova_compute[254092]: 2025-11-25 16:37:04.970 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:04 compute-0 nova_compute[254092]: 2025-11-25 16:37:04.972 254096 INFO nova.virt.libvirt.driver [-] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Instance destroyed successfully.
Nov 25 16:37:04 compute-0 nova_compute[254092]: 2025-11-25 16:37:04.972 254096 DEBUG nova.objects.instance [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lazy-loading 'resources' on Instance uuid 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:37:04 compute-0 ovn_controller[153477]: 2025-11-25T16:37:04Z|00440|binding|INFO|Releasing lport 379de8c7-cb88-4a89-8008-f7bb1cbfc09b from this chassis (sb_readonly=0)
Nov 25 16:37:04 compute-0 ovn_controller[153477]: 2025-11-25T16:37:04Z|00441|binding|INFO|Setting lport 379de8c7-cb88-4a89-8008-f7bb1cbfc09b down in Southbound
Nov 25 16:37:04 compute-0 ovn_controller[153477]: 2025-11-25T16:37:04Z|00442|binding|INFO|Removing iface tap379de8c7-cb ovn-installed in OVS
Nov 25 16:37:04 compute-0 nova_compute[254092]: 2025-11-25 16:37:04.976 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:04 compute-0 nova_compute[254092]: 2025-11-25 16:37:04.977 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:04.991 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:4b:3b 10.100.0.8'], port_security=['fa:16:3e:33:4b:3b 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '288ada45-c7fc-4ddc-8b83-1c03ffa14fe6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e469a950-7044-48bc-a261-bc9effe2c2aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff5d31ffc7934838a461d6ac8aa546c9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dcbb2958-ef75-485e-8580-da48580c7577', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff181adb-f502-4aa2-aaa5-e1f9a56ea733, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=379de8c7-cb88-4a89-8008-f7bb1cbfc09b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:37:04 compute-0 nova_compute[254092]: 2025-11-25 16:37:04.995 254096 DEBUG nova.virt.libvirt.vif [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:36:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1926162330',display_name='tempest-DeleteServersTestJSON-server-1926162330',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1926162330',id=49,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:36:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='ff5d31ffc7934838a461d6ac8aa546c9',ramdisk_id='',reservation_id='r-3hu4p32f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-150185898',owner_user_name='tempest-DeleteServersTestJSON-150185898-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:37:01Z,user_data=None,user_id='788efa7ba65347b69663f4ea7ba4cd9d',uuid=288ada45-c7fc-4ddc-8b83-1c03ffa14fe6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "379de8c7-cb88-4a89-8008-f7bb1cbfc09b", "address": "fa:16:3e:33:4b:3b", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap379de8c7-cb", "ovs_interfaceid": "379de8c7-cb88-4a89-8008-f7bb1cbfc09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:37:04 compute-0 nova_compute[254092]: 2025-11-25 16:37:04.995 254096 DEBUG nova.network.os_vif_util [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converting VIF {"id": "379de8c7-cb88-4a89-8008-f7bb1cbfc09b", "address": "fa:16:3e:33:4b:3b", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap379de8c7-cb", "ovs_interfaceid": "379de8c7-cb88-4a89-8008-f7bb1cbfc09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:37:04 compute-0 nova_compute[254092]: 2025-11-25 16:37:04.996 254096 DEBUG nova.network.os_vif_util [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:4b:3b,bridge_name='br-int',has_traffic_filtering=True,id=379de8c7-cb88-4a89-8008-f7bb1cbfc09b,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap379de8c7-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:37:04 compute-0 nova_compute[254092]: 2025-11-25 16:37:04.996 254096 DEBUG os_vif [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:4b:3b,bridge_name='br-int',has_traffic_filtering=True,id=379de8c7-cb88-4a89-8008-f7bb1cbfc09b,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap379de8c7-cb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:37:04 compute-0 nova_compute[254092]: 2025-11-25 16:37:04.999 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:05 compute-0 nova_compute[254092]: 2025-11-25 16:37:04.999 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap379de8c7-cb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:37:05 compute-0 nova_compute[254092]: 2025-11-25 16:37:05.000 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:05 compute-0 nova_compute[254092]: 2025-11-25 16:37:05.003 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:37:05 compute-0 nova_compute[254092]: 2025-11-25 16:37:05.004 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:05 compute-0 nova_compute[254092]: 2025-11-25 16:37:05.008 254096 INFO os_vif [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:4b:3b,bridge_name='br-int',has_traffic_filtering=True,id=379de8c7-cb88-4a89-8008-f7bb1cbfc09b,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap379de8c7-cb')
Nov 25 16:37:05 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ceffc45733c669125a2e5d950ace0cea425fadb832b19c6cd62ea214a3883276-userdata-shm.mount: Deactivated successfully.
Nov 25 16:37:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-efe71abb6aef0ccd46cc56760891a8c6424dbecf1dc6c300940a7bd657aa611f-merged.mount: Deactivated successfully.
Nov 25 16:37:05 compute-0 podman[310210]: 2025-11-25 16:37:05.056248738 +0000 UTC m=+0.166830108 container cleanup ceffc45733c669125a2e5d950ace0cea425fadb832b19c6cd62ea214a3883276 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 16:37:05 compute-0 systemd[1]: libpod-conmon-ceffc45733c669125a2e5d950ace0cea425fadb832b19c6cd62ea214a3883276.scope: Deactivated successfully.
Nov 25 16:37:05 compute-0 podman[310264]: 2025-11-25 16:37:05.146965659 +0000 UTC m=+0.065163899 container remove ceffc45733c669125a2e5d950ace0cea425fadb832b19c6cd62ea214a3883276 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 16:37:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.156 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a76a1077-8c8c-471c-8466-cec461424c1f]: (4, ('Tue Nov 25 04:37:04 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa (ceffc45733c669125a2e5d950ace0cea425fadb832b19c6cd62ea214a3883276)\nceffc45733c669125a2e5d950ace0cea425fadb832b19c6cd62ea214a3883276\nTue Nov 25 04:37:05 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa (ceffc45733c669125a2e5d950ace0cea425fadb832b19c6cd62ea214a3883276)\nceffc45733c669125a2e5d950ace0cea425fadb832b19c6cd62ea214a3883276\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.158 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e95ac38a-88c8-4cd3-96be-54ac7d47300a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.160 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape469a950-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:37:05 compute-0 nova_compute[254092]: 2025-11-25 16:37:05.162 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:05 compute-0 kernel: tape469a950-70: left promiscuous mode
Nov 25 16:37:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.167 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2bcf7b8a-a6b4-440d-8155-88bfef0e54a3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.185 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[99c90c19-5978-4efe-8297-135c43a7256e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.189 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[48b16d07-f6fe-40f2-9fb4-357a6b2be43a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:05 compute-0 nova_compute[254092]: 2025-11-25 16:37:05.188 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.215 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8e9ef0bb-f615-49cb-a191-72a6d82b7a0f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507449, 'reachable_time': 23267, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310279, 'error': None, 'target': 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:05 compute-0 nova_compute[254092]: 2025-11-25 16:37:05.217 254096 DEBUG nova.compute.manager [req-d1eb51d8-43b5-4ff2-a168-e2cdd65a7e46 req-8cb3726d-9a9c-4f9d-941c-9ce6d000c758 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Received event network-vif-unplugged-379de8c7-cb88-4a89-8008-f7bb1cbfc09b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:37:05 compute-0 nova_compute[254092]: 2025-11-25 16:37:05.220 254096 DEBUG oslo_concurrency.lockutils [req-d1eb51d8-43b5-4ff2-a168-e2cdd65a7e46 req-8cb3726d-9a9c-4f9d-941c-9ce6d000c758 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:37:05 compute-0 nova_compute[254092]: 2025-11-25 16:37:05.221 254096 DEBUG oslo_concurrency.lockutils [req-d1eb51d8-43b5-4ff2-a168-e2cdd65a7e46 req-8cb3726d-9a9c-4f9d-941c-9ce6d000c758 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:37:05 compute-0 systemd[1]: run-netns-ovnmeta\x2de469a950\x2d7044\x2d48bc\x2da261\x2dbc9effe2c2aa.mount: Deactivated successfully.
Nov 25 16:37:05 compute-0 nova_compute[254092]: 2025-11-25 16:37:05.223 254096 DEBUG oslo_concurrency.lockutils [req-d1eb51d8-43b5-4ff2-a168-e2cdd65a7e46 req-8cb3726d-9a9c-4f9d-941c-9ce6d000c758 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:37:05 compute-0 nova_compute[254092]: 2025-11-25 16:37:05.224 254096 DEBUG nova.compute.manager [req-d1eb51d8-43b5-4ff2-a168-e2cdd65a7e46 req-8cb3726d-9a9c-4f9d-941c-9ce6d000c758 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] No waiting events found dispatching network-vif-unplugged-379de8c7-cb88-4a89-8008-f7bb1cbfc09b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:37:05 compute-0 nova_compute[254092]: 2025-11-25 16:37:05.224 254096 DEBUG nova.compute.manager [req-d1eb51d8-43b5-4ff2-a168-e2cdd65a7e46 req-8cb3726d-9a9c-4f9d-941c-9ce6d000c758 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Received event network-vif-unplugged-379de8c7-cb88-4a89-8008-f7bb1cbfc09b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:37:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.224 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:37:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.224 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[f920de99-2d81-4da0-9ffd-07bcf9960bb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.226 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 379de8c7-cb88-4a89-8008-f7bb1cbfc09b in datapath e469a950-7044-48bc-a261-bc9effe2c2aa unbound from our chassis
Nov 25 16:37:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.228 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e469a950-7044-48bc-a261-bc9effe2c2aa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:37:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.231 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[24046d25-806d-42bf-8ef6-cb1c722e920f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.232 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 379de8c7-cb88-4a89-8008-f7bb1cbfc09b in datapath e469a950-7044-48bc-a261-bc9effe2c2aa unbound from our chassis
Nov 25 16:37:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.234 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e469a950-7044-48bc-a261-bc9effe2c2aa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:37:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.236 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4fc99993-1b96-4cb8-8984-a616b9b79cbe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1515: 321 pgs: 321 active+clean; 201 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 18 KiB/s wr, 97 op/s
Nov 25 16:37:05 compute-0 nova_compute[254092]: 2025-11-25 16:37:05.467 254096 DEBUG oslo_concurrency.lockutils [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "e549d4e8-b824-480b-b81a-83e2ea1eff12" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:37:05 compute-0 nova_compute[254092]: 2025-11-25 16:37:05.468 254096 DEBUG oslo_concurrency.lockutils [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "e549d4e8-b824-480b-b81a-83e2ea1eff12" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:37:05 compute-0 nova_compute[254092]: 2025-11-25 16:37:05.469 254096 DEBUG oslo_concurrency.lockutils [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "e549d4e8-b824-480b-b81a-83e2ea1eff12-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:37:05 compute-0 nova_compute[254092]: 2025-11-25 16:37:05.469 254096 DEBUG oslo_concurrency.lockutils [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "e549d4e8-b824-480b-b81a-83e2ea1eff12-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:37:05 compute-0 nova_compute[254092]: 2025-11-25 16:37:05.469 254096 DEBUG oslo_concurrency.lockutils [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "e549d4e8-b824-480b-b81a-83e2ea1eff12-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:37:05 compute-0 nova_compute[254092]: 2025-11-25 16:37:05.471 254096 INFO nova.compute.manager [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Terminating instance
Nov 25 16:37:05 compute-0 nova_compute[254092]: 2025-11-25 16:37:05.472 254096 DEBUG nova.compute.manager [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:37:05 compute-0 nova_compute[254092]: 2025-11-25 16:37:05.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:37:05 compute-0 kernel: tap4ac8455e-46 (unregistering): left promiscuous mode
Nov 25 16:37:05 compute-0 NetworkManager[48891]: <info>  [1764088625.5430] device (tap4ac8455e-46): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:37:05 compute-0 nova_compute[254092]: 2025-11-25 16:37:05.550 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:05 compute-0 ovn_controller[153477]: 2025-11-25T16:37:05Z|00443|binding|INFO|Releasing lport 4ac8455e-46f9-4f4e-9acc-43b78589ef10 from this chassis (sb_readonly=0)
Nov 25 16:37:05 compute-0 ovn_controller[153477]: 2025-11-25T16:37:05Z|00444|binding|INFO|Setting lport 4ac8455e-46f9-4f4e-9acc-43b78589ef10 down in Southbound
Nov 25 16:37:05 compute-0 ovn_controller[153477]: 2025-11-25T16:37:05Z|00445|binding|INFO|Removing iface tap4ac8455e-46 ovn-installed in OVS
Nov 25 16:37:05 compute-0 nova_compute[254092]: 2025-11-25 16:37:05.554 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.565 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:7b:cf 10.100.0.6'], port_security=['fa:16:3e:36:7b:cf 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'e549d4e8-b824-480b-b81a-83e2ea1eff12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0816ae24-275c-455e-a549-929f4eb756e7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8bd01e0913564ac783fae350d6861e24', 'neutron:revision_number': '4', 'neutron:security_group_ids': '572f99e3-d1c9-4515-9799-c37da094cf6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=24fed12c-3cf8-4d51-9b64-c89f82ee1964, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=4ac8455e-46f9-4f4e-9acc-43b78589ef10) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:37:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.567 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 4ac8455e-46f9-4f4e-9acc-43b78589ef10 in datapath 0816ae24-275c-455e-a549-929f4eb756e7 unbound from our chassis
Nov 25 16:37:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.568 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0816ae24-275c-455e-a549-929f4eb756e7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:37:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.569 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[522d4fce-1e41-4e47-a01e-5ac2022671df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.570 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7 namespace which is not needed anymore
Nov 25 16:37:05 compute-0 nova_compute[254092]: 2025-11-25 16:37:05.579 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:05 compute-0 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d0000002e.scope: Deactivated successfully.
Nov 25 16:37:05 compute-0 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d0000002e.scope: Consumed 15.485s CPU time.
Nov 25 16:37:05 compute-0 systemd-machined[216343]: Machine qemu-55-instance-0000002e terminated.
Nov 25 16:37:05 compute-0 NetworkManager[48891]: <info>  [1764088625.6993] manager: (tap4ac8455e-46): new Tun device (/org/freedesktop/NetworkManager/Devices/191)
Nov 25 16:37:05 compute-0 nova_compute[254092]: 2025-11-25 16:37:05.704 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:05 compute-0 nova_compute[254092]: 2025-11-25 16:37:05.715 254096 INFO nova.virt.libvirt.driver [-] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Instance destroyed successfully.
Nov 25 16:37:05 compute-0 nova_compute[254092]: 2025-11-25 16:37:05.715 254096 DEBUG nova.objects.instance [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lazy-loading 'resources' on Instance uuid e549d4e8-b824-480b-b81a-83e2ea1eff12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:37:05 compute-0 nova_compute[254092]: 2025-11-25 16:37:05.734 254096 INFO nova.virt.libvirt.driver [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Deleting instance files /var/lib/nova/instances/288ada45-c7fc-4ddc-8b83-1c03ffa14fe6_del
Nov 25 16:37:05 compute-0 nova_compute[254092]: 2025-11-25 16:37:05.735 254096 INFO nova.virt.libvirt.driver [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Deletion of /var/lib/nova/instances/288ada45-c7fc-4ddc-8b83-1c03ffa14fe6_del complete
Nov 25 16:37:05 compute-0 nova_compute[254092]: 2025-11-25 16:37:05.741 254096 DEBUG nova.virt.libvirt.vif [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:36:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-609049756',display_name='tempest-ImagesTestJSON-server-609049756',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-609049756',id=46,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:36:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8bd01e0913564ac783fae350d6861e24',ramdisk_id='',reservation_id='r-6cjpy2nl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram=
'0',owner_project_name='tempest-ImagesTestJSON-217824554',owner_user_name='tempest-ImagesTestJSON-217824554-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:36:26Z,user_data=None,user_id='75e65df891a54a2caabb073a427430b9',uuid=e549d4e8-b824-480b-b81a-83e2ea1eff12,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4ac8455e-46f9-4f4e-9acc-43b78589ef10", "address": "fa:16:3e:36:7b:cf", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ac8455e-46", "ovs_interfaceid": "4ac8455e-46f9-4f4e-9acc-43b78589ef10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:37:05 compute-0 nova_compute[254092]: 2025-11-25 16:37:05.742 254096 DEBUG nova.network.os_vif_util [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converting VIF {"id": "4ac8455e-46f9-4f4e-9acc-43b78589ef10", "address": "fa:16:3e:36:7b:cf", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ac8455e-46", "ovs_interfaceid": "4ac8455e-46f9-4f4e-9acc-43b78589ef10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:37:05 compute-0 nova_compute[254092]: 2025-11-25 16:37:05.743 254096 DEBUG nova.network.os_vif_util [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:7b:cf,bridge_name='br-int',has_traffic_filtering=True,id=4ac8455e-46f9-4f4e-9acc-43b78589ef10,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ac8455e-46') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:37:05 compute-0 nova_compute[254092]: 2025-11-25 16:37:05.744 254096 DEBUG os_vif [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:7b:cf,bridge_name='br-int',has_traffic_filtering=True,id=4ac8455e-46f9-4f4e-9acc-43b78589ef10,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ac8455e-46') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:37:05 compute-0 nova_compute[254092]: 2025-11-25 16:37:05.746 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:05 compute-0 nova_compute[254092]: 2025-11-25 16:37:05.746 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ac8455e-46, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:37:05 compute-0 nova_compute[254092]: 2025-11-25 16:37:05.749 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:05 compute-0 nova_compute[254092]: 2025-11-25 16:37:05.753 254096 INFO os_vif [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:7b:cf,bridge_name='br-int',has_traffic_filtering=True,id=4ac8455e-46f9-4f4e-9acc-43b78589ef10,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ac8455e-46')
Nov 25 16:37:05 compute-0 neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7[307437]: [NOTICE]   (307441) : haproxy version is 2.8.14-c23fe91
Nov 25 16:37:05 compute-0 neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7[307437]: [NOTICE]   (307441) : path to executable is /usr/sbin/haproxy
Nov 25 16:37:05 compute-0 neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7[307437]: [WARNING]  (307441) : Exiting Master process...
Nov 25 16:37:05 compute-0 neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7[307437]: [ALERT]    (307441) : Current worker (307443) exited with code 143 (Terminated)
Nov 25 16:37:05 compute-0 neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7[307437]: [WARNING]  (307441) : All workers exited. Exiting... (0)
Nov 25 16:37:05 compute-0 systemd[1]: libpod-e8196a8ea1bac3c8af38dd437aa6e881874ffcbae974e8b0267e2f6a10ddcbfb.scope: Deactivated successfully.
Nov 25 16:37:05 compute-0 podman[310304]: 2025-11-25 16:37:05.775417648 +0000 UTC m=+0.093666732 container died e8196a8ea1bac3c8af38dd437aa6e881874ffcbae974e8b0267e2f6a10ddcbfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 16:37:05 compute-0 nova_compute[254092]: 2025-11-25 16:37:05.794 254096 INFO nova.compute.manager [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Took 1.08 seconds to destroy the instance on the hypervisor.
Nov 25 16:37:05 compute-0 nova_compute[254092]: 2025-11-25 16:37:05.794 254096 DEBUG oslo.service.loopingcall [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:37:05 compute-0 nova_compute[254092]: 2025-11-25 16:37:05.795 254096 DEBUG nova.compute.manager [-] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:37:05 compute-0 nova_compute[254092]: 2025-11-25 16:37:05.796 254096 DEBUG nova.network.neutron [-] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:37:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-902d37fe79f927afc8f719a0eb41b52feeeed28907f3b71bd4cc8a4842d1692e-merged.mount: Deactivated successfully.
Nov 25 16:37:05 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e8196a8ea1bac3c8af38dd437aa6e881874ffcbae974e8b0267e2f6a10ddcbfb-userdata-shm.mount: Deactivated successfully.
Nov 25 16:37:05 compute-0 podman[310304]: 2025-11-25 16:37:05.841242634 +0000 UTC m=+0.159491728 container cleanup e8196a8ea1bac3c8af38dd437aa6e881874ffcbae974e8b0267e2f6a10ddcbfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 16:37:05 compute-0 systemd[1]: libpod-conmon-e8196a8ea1bac3c8af38dd437aa6e881874ffcbae974e8b0267e2f6a10ddcbfb.scope: Deactivated successfully.
Nov 25 16:37:05 compute-0 podman[310372]: 2025-11-25 16:37:05.934583776 +0000 UTC m=+0.062245569 container remove e8196a8ea1bac3c8af38dd437aa6e881874ffcbae974e8b0267e2f6a10ddcbfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 16:37:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.941 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[500e7dd0-9f04-4cd2-96d8-00ac85f9c30b]: (4, ('Tue Nov 25 04:37:05 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7 (e8196a8ea1bac3c8af38dd437aa6e881874ffcbae974e8b0267e2f6a10ddcbfb)\ne8196a8ea1bac3c8af38dd437aa6e881874ffcbae974e8b0267e2f6a10ddcbfb\nTue Nov 25 04:37:05 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7 (e8196a8ea1bac3c8af38dd437aa6e881874ffcbae974e8b0267e2f6a10ddcbfb)\ne8196a8ea1bac3c8af38dd437aa6e881874ffcbae974e8b0267e2f6a10ddcbfb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.943 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7e045bf8-87f9-4380-9602-1fe8ebf4a2f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.946 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0816ae24-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:37:05 compute-0 nova_compute[254092]: 2025-11-25 16:37:05.949 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:05 compute-0 kernel: tap0816ae24-20: left promiscuous mode
Nov 25 16:37:05 compute-0 nova_compute[254092]: 2025-11-25 16:37:05.967 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.972 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9db4ccbf-4f63-46b8-844c-38dd2bdcfd1f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.989 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bf12fe8e-15a8-4722-bbc0-5bd0212e86c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:05.991 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4a9a1abe-5d01-402e-9e08-1a02610c7593]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:06.010 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4d49504c-854d-460d-8eee-f82aee417897]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 503063, 'reachable_time': 22887, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310398, 'error': None, 'target': 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:06 compute-0 systemd[1]: run-netns-ovnmeta\x2d0816ae24\x2d275c\x2d455e\x2da549\x2d929f4eb756e7.mount: Deactivated successfully.
Nov 25 16:37:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:06.014 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:37:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:06.014 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[fe2b88ba-21d3-47f9-84d5-98936aac5158]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:06 compute-0 jolly_sanderson[310183]: {
Nov 25 16:37:06 compute-0 jolly_sanderson[310183]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 16:37:06 compute-0 jolly_sanderson[310183]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:37:06 compute-0 jolly_sanderson[310183]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 16:37:06 compute-0 jolly_sanderson[310183]:         "osd_id": 1,
Nov 25 16:37:06 compute-0 jolly_sanderson[310183]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:37:06 compute-0 jolly_sanderson[310183]:         "type": "bluestore"
Nov 25 16:37:06 compute-0 jolly_sanderson[310183]:     },
Nov 25 16:37:06 compute-0 jolly_sanderson[310183]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 16:37:06 compute-0 jolly_sanderson[310183]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:37:06 compute-0 jolly_sanderson[310183]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 16:37:06 compute-0 jolly_sanderson[310183]:         "osd_id": 2,
Nov 25 16:37:06 compute-0 jolly_sanderson[310183]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:37:06 compute-0 jolly_sanderson[310183]:         "type": "bluestore"
Nov 25 16:37:06 compute-0 jolly_sanderson[310183]:     },
Nov 25 16:37:06 compute-0 jolly_sanderson[310183]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 16:37:06 compute-0 jolly_sanderson[310183]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:37:06 compute-0 jolly_sanderson[310183]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 16:37:06 compute-0 jolly_sanderson[310183]:         "osd_id": 0,
Nov 25 16:37:06 compute-0 jolly_sanderson[310183]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:37:06 compute-0 jolly_sanderson[310183]:         "type": "bluestore"
Nov 25 16:37:06 compute-0 jolly_sanderson[310183]:     }
Nov 25 16:37:06 compute-0 jolly_sanderson[310183]: }
Nov 25 16:37:06 compute-0 systemd[1]: libpod-926b0f97939e996374fd3d0d2825593f287c802b8ab86e36ea1e7c5d16099ce7.scope: Deactivated successfully.
Nov 25 16:37:06 compute-0 systemd[1]: libpod-926b0f97939e996374fd3d0d2825593f287c802b8ab86e36ea1e7c5d16099ce7.scope: Consumed 1.217s CPU time.
Nov 25 16:37:06 compute-0 podman[310403]: 2025-11-25 16:37:06.140436631 +0000 UTC m=+0.031718172 container died 926b0f97939e996374fd3d0d2825593f287c802b8ab86e36ea1e7c5d16099ce7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 16:37:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1281102363eb5769abdcfc0f1da93f8f4087da8e690141e1c9e4905d1a60a96-merged.mount: Deactivated successfully.
Nov 25 16:37:06 compute-0 podman[310403]: 2025-11-25 16:37:06.212769733 +0000 UTC m=+0.104051264 container remove 926b0f97939e996374fd3d0d2825593f287c802b8ab86e36ea1e7c5d16099ce7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_sanderson, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 16:37:06 compute-0 systemd[1]: libpod-conmon-926b0f97939e996374fd3d0d2825593f287c802b8ab86e36ea1e7c5d16099ce7.scope: Deactivated successfully.
Nov 25 16:37:06 compute-0 nova_compute[254092]: 2025-11-25 16:37:06.248 254096 INFO nova.virt.libvirt.driver [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Deleting instance files /var/lib/nova/instances/e549d4e8-b824-480b-b81a-83e2ea1eff12_del
Nov 25 16:37:06 compute-0 nova_compute[254092]: 2025-11-25 16:37:06.249 254096 INFO nova.virt.libvirt.driver [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Deletion of /var/lib/nova/instances/e549d4e8-b824-480b-b81a-83e2ea1eff12_del complete
Nov 25 16:37:06 compute-0 sudo[310037]: pam_unix(sudo:session): session closed for user root
Nov 25 16:37:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:37:06 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:37:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:37:06 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:37:06 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev b601c7f1-6f70-447f-87bd-377417b418a8 does not exist
Nov 25 16:37:06 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 64bc2ae8-e1d1-4124-91f3-b29a1526900e does not exist
Nov 25 16:37:06 compute-0 nova_compute[254092]: 2025-11-25 16:37:06.306 254096 INFO nova.compute.manager [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Took 0.83 seconds to destroy the instance on the hypervisor.
Nov 25 16:37:06 compute-0 nova_compute[254092]: 2025-11-25 16:37:06.307 254096 DEBUG oslo.service.loopingcall [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:37:06 compute-0 nova_compute[254092]: 2025-11-25 16:37:06.307 254096 DEBUG nova.compute.manager [-] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:37:06 compute-0 nova_compute[254092]: 2025-11-25 16:37:06.308 254096 DEBUG nova.network.neutron [-] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:37:06 compute-0 sudo[310418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:37:06 compute-0 sudo[310418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:37:06 compute-0 sudo[310418]: pam_unix(sudo:session): session closed for user root
Nov 25 16:37:06 compute-0 sudo[310443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 16:37:06 compute-0 sudo[310443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:37:06 compute-0 sudo[310443]: pam_unix(sudo:session): session closed for user root
Nov 25 16:37:06 compute-0 ceph-mon[74985]: pgmap v1515: 321 pgs: 321 active+clean; 201 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 18 KiB/s wr, 97 op/s
Nov 25 16:37:06 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:37:06 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:37:07 compute-0 nova_compute[254092]: 2025-11-25 16:37:07.087 254096 DEBUG nova.network.neutron [-] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:37:07 compute-0 nova_compute[254092]: 2025-11-25 16:37:07.111 254096 INFO nova.compute.manager [-] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Took 1.32 seconds to deallocate network for instance.
Nov 25 16:37:07 compute-0 nova_compute[254092]: 2025-11-25 16:37:07.163 254096 DEBUG oslo_concurrency.lockutils [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:37:07 compute-0 nova_compute[254092]: 2025-11-25 16:37:07.163 254096 DEBUG oslo_concurrency.lockutils [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:37:07 compute-0 nova_compute[254092]: 2025-11-25 16:37:07.244 254096 DEBUG oslo_concurrency.processutils [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:37:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1516: 321 pgs: 321 active+clean; 121 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.6 KiB/s wr, 144 op/s
Nov 25 16:37:07 compute-0 nova_compute[254092]: 2025-11-25 16:37:07.507 254096 DEBUG nova.network.neutron [-] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:37:07 compute-0 nova_compute[254092]: 2025-11-25 16:37:07.571 254096 INFO nova.compute.manager [-] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Took 1.26 seconds to deallocate network for instance.
Nov 25 16:37:07 compute-0 nova_compute[254092]: 2025-11-25 16:37:07.685 254096 DEBUG oslo_concurrency.lockutils [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:37:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:37:07 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/843938009' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:37:07 compute-0 nova_compute[254092]: 2025-11-25 16:37:07.719 254096 DEBUG oslo_concurrency.processutils [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:37:07 compute-0 nova_compute[254092]: 2025-11-25 16:37:07.726 254096 DEBUG nova.compute.provider_tree [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:37:07 compute-0 nova_compute[254092]: 2025-11-25 16:37:07.745 254096 DEBUG nova.scheduler.client.report [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:37:07 compute-0 nova_compute[254092]: 2025-11-25 16:37:07.994 254096 DEBUG oslo_concurrency.lockutils [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.830s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:37:07 compute-0 nova_compute[254092]: 2025-11-25 16:37:07.997 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:07.999 254096 DEBUG oslo_concurrency.lockutils [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.314s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.061 254096 DEBUG oslo_concurrency.processutils [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:37:08 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/843938009' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.265 254096 INFO nova.scheduler.client.report [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Deleted allocations for instance 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.464 254096 DEBUG nova.compute.manager [req-2c2e33f6-dadc-45ca-b6e1-9525849205ea req-8bd12274-dbd9-41a3-bd67-414308fe1741 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Received event network-vif-deleted-379de8c7-cb88-4a89-8008-f7bb1cbfc09b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.526 254096 DEBUG oslo_concurrency.lockutils [None req-a6fab936-358d-443d-9b8b-c5da3c5f40dc 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.816s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.600 254096 DEBUG nova.compute.manager [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Received event network-vif-plugged-379de8c7-cb88-4a89-8008-f7bb1cbfc09b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.600 254096 DEBUG oslo_concurrency.lockutils [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.601 254096 DEBUG oslo_concurrency.lockutils [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.601 254096 DEBUG oslo_concurrency.lockutils [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.601 254096 DEBUG nova.compute.manager [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] No waiting events found dispatching network-vif-plugged-379de8c7-cb88-4a89-8008-f7bb1cbfc09b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.601 254096 WARNING nova.compute.manager [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Received unexpected event network-vif-plugged-379de8c7-cb88-4a89-8008-f7bb1cbfc09b for instance with vm_state deleted and task_state None.
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.602 254096 DEBUG nova.compute.manager [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Received event network-vif-plugged-379de8c7-cb88-4a89-8008-f7bb1cbfc09b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.602 254096 DEBUG oslo_concurrency.lockutils [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.602 254096 DEBUG oslo_concurrency.lockutils [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.603 254096 DEBUG oslo_concurrency.lockutils [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.603 254096 DEBUG nova.compute.manager [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] No waiting events found dispatching network-vif-plugged-379de8c7-cb88-4a89-8008-f7bb1cbfc09b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.603 254096 WARNING nova.compute.manager [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Received unexpected event network-vif-plugged-379de8c7-cb88-4a89-8008-f7bb1cbfc09b for instance with vm_state deleted and task_state None.
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.603 254096 DEBUG nova.compute.manager [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Received event network-vif-plugged-379de8c7-cb88-4a89-8008-f7bb1cbfc09b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.603 254096 DEBUG oslo_concurrency.lockutils [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.604 254096 DEBUG oslo_concurrency.lockutils [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.604 254096 DEBUG oslo_concurrency.lockutils [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.604 254096 DEBUG nova.compute.manager [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] No waiting events found dispatching network-vif-plugged-379de8c7-cb88-4a89-8008-f7bb1cbfc09b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.604 254096 WARNING nova.compute.manager [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Received unexpected event network-vif-plugged-379de8c7-cb88-4a89-8008-f7bb1cbfc09b for instance with vm_state deleted and task_state None.
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.605 254096 DEBUG nova.compute.manager [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Received event network-vif-unplugged-379de8c7-cb88-4a89-8008-f7bb1cbfc09b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.605 254096 DEBUG oslo_concurrency.lockutils [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.605 254096 DEBUG oslo_concurrency.lockutils [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.605 254096 DEBUG oslo_concurrency.lockutils [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.605 254096 DEBUG nova.compute.manager [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] No waiting events found dispatching network-vif-unplugged-379de8c7-cb88-4a89-8008-f7bb1cbfc09b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.606 254096 WARNING nova.compute.manager [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Received unexpected event network-vif-unplugged-379de8c7-cb88-4a89-8008-f7bb1cbfc09b for instance with vm_state deleted and task_state None.
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.606 254096 DEBUG nova.compute.manager [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Received event network-vif-plugged-379de8c7-cb88-4a89-8008-f7bb1cbfc09b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.606 254096 DEBUG oslo_concurrency.lockutils [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.606 254096 DEBUG oslo_concurrency.lockutils [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.606 254096 DEBUG oslo_concurrency.lockutils [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "288ada45-c7fc-4ddc-8b83-1c03ffa14fe6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.607 254096 DEBUG nova.compute.manager [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] No waiting events found dispatching network-vif-plugged-379de8c7-cb88-4a89-8008-f7bb1cbfc09b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.607 254096 WARNING nova.compute.manager [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Received unexpected event network-vif-plugged-379de8c7-cb88-4a89-8008-f7bb1cbfc09b for instance with vm_state deleted and task_state None.
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.607 254096 DEBUG nova.compute.manager [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Received event network-vif-unplugged-4ac8455e-46f9-4f4e-9acc-43b78589ef10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.607 254096 DEBUG oslo_concurrency.lockutils [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e549d4e8-b824-480b-b81a-83e2ea1eff12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.607 254096 DEBUG oslo_concurrency.lockutils [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e549d4e8-b824-480b-b81a-83e2ea1eff12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.608 254096 DEBUG oslo_concurrency.lockutils [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e549d4e8-b824-480b-b81a-83e2ea1eff12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.608 254096 DEBUG nova.compute.manager [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] No waiting events found dispatching network-vif-unplugged-4ac8455e-46f9-4f4e-9acc-43b78589ef10 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.608 254096 WARNING nova.compute.manager [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Received unexpected event network-vif-unplugged-4ac8455e-46f9-4f4e-9acc-43b78589ef10 for instance with vm_state deleted and task_state None.
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.608 254096 DEBUG nova.compute.manager [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Received event network-vif-plugged-4ac8455e-46f9-4f4e-9acc-43b78589ef10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.608 254096 DEBUG oslo_concurrency.lockutils [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e549d4e8-b824-480b-b81a-83e2ea1eff12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.609 254096 DEBUG oslo_concurrency.lockutils [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e549d4e8-b824-480b-b81a-83e2ea1eff12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.609 254096 DEBUG oslo_concurrency.lockutils [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e549d4e8-b824-480b-b81a-83e2ea1eff12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.609 254096 DEBUG nova.compute.manager [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] No waiting events found dispatching network-vif-plugged-4ac8455e-46f9-4f4e-9acc-43b78589ef10 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.609 254096 WARNING nova.compute.manager [req-e3e25af2-b113-4306-8d4d-5b9cc1da102f req-a245a534-a715-4e4a-a7e6-f171e157ab44 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Received unexpected event network-vif-plugged-4ac8455e-46f9-4f4e-9acc-43b78589ef10 for instance with vm_state deleted and task_state None.
Nov 25 16:37:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:37:08 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3493393454' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.700 254096 DEBUG oslo_concurrency.processutils [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.710 254096 DEBUG nova.compute.provider_tree [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.727 254096 DEBUG nova.scheduler.client.report [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.756 254096 DEBUG oslo_concurrency.lockutils [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.809 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-e549d4e8-b824-480b-b81a-83e2ea1eff12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.810 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-e549d4e8-b824-480b-b81a-83e2ea1eff12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.810 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.810 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e549d4e8-b824-480b-b81a-83e2ea1eff12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.819 254096 INFO nova.scheduler.client.report [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Deleted allocations for instance e549d4e8-b824-480b-b81a-83e2ea1eff12
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.884 254096 DEBUG oslo_concurrency.lockutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Acquiring lock "d3356685-91bc-46b9-9b9f-87ffce31a4ab" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.885 254096 DEBUG oslo_concurrency.lockutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Lock "d3356685-91bc-46b9-9b9f-87ffce31a4ab" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.941 254096 DEBUG nova.compute.manager [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:37:08 compute-0 nova_compute[254092]: 2025-11-25 16:37:08.952 254096 DEBUG oslo_concurrency.lockutils [None req-d42e595f-7129-4ecc-81a1-48077b85c2c0 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "e549d4e8-b824-480b-b81a-83e2ea1eff12" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.483s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:37:09 compute-0 nova_compute[254092]: 2025-11-25 16:37:09.067 254096 DEBUG oslo_concurrency.lockutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:37:09 compute-0 nova_compute[254092]: 2025-11-25 16:37:09.068 254096 DEBUG oslo_concurrency.lockutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:37:09 compute-0 nova_compute[254092]: 2025-11-25 16:37:09.076 254096 DEBUG nova.virt.hardware [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:37:09 compute-0 nova_compute[254092]: 2025-11-25 16:37:09.077 254096 INFO nova.compute.claims [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:37:09 compute-0 ceph-mon[74985]: pgmap v1516: 321 pgs: 321 active+clean; 121 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.6 KiB/s wr, 144 op/s
Nov 25 16:37:09 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3493393454' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:37:09 compute-0 nova_compute[254092]: 2025-11-25 16:37:09.251 254096 DEBUG oslo_concurrency.processutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:37:09 compute-0 nova_compute[254092]: 2025-11-25 16:37:09.285 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:37:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1517: 321 pgs: 321 active+clean; 121 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.6 KiB/s wr, 144 op/s
Nov 25 16:37:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:37:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e192 do_prune osdmap full prune enabled
Nov 25 16:37:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:37:09 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/473220336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:37:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e193 e193: 3 total, 3 up, 3 in
Nov 25 16:37:09 compute-0 nova_compute[254092]: 2025-11-25 16:37:09.824 254096 DEBUG oslo_concurrency.processutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:37:09 compute-0 nova_compute[254092]: 2025-11-25 16:37:09.832 254096 DEBUG nova.compute.provider_tree [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:37:09 compute-0 nova_compute[254092]: 2025-11-25 16:37:09.861 254096 DEBUG nova.scheduler.client.report [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:37:09 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e193: 3 total, 3 up, 3 in
Nov 25 16:37:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:37:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:37:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:37:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:37:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:37:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:37:10 compute-0 nova_compute[254092]: 2025-11-25 16:37:10.108 254096 DEBUG oslo_concurrency.lockutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.041s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:37:10 compute-0 nova_compute[254092]: 2025-11-25 16:37:10.109 254096 DEBUG nova.compute.manager [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:37:10 compute-0 ceph-mon[74985]: pgmap v1517: 321 pgs: 321 active+clean; 121 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.6 KiB/s wr, 144 op/s
Nov 25 16:37:10 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/473220336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:37:10 compute-0 ceph-mon[74985]: osdmap e193: 3 total, 3 up, 3 in
Nov 25 16:37:10 compute-0 nova_compute[254092]: 2025-11-25 16:37:10.195 254096 DEBUG nova.compute.manager [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:37:10 compute-0 nova_compute[254092]: 2025-11-25 16:37:10.197 254096 DEBUG nova.network.neutron [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:37:10 compute-0 nova_compute[254092]: 2025-11-25 16:37:10.224 254096 INFO nova.virt.libvirt.driver [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:37:10 compute-0 nova_compute[254092]: 2025-11-25 16:37:10.241 254096 DEBUG nova.compute.manager [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:37:10 compute-0 nova_compute[254092]: 2025-11-25 16:37:10.304 254096 DEBUG oslo_concurrency.lockutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "8ed9342e-e179-467c-993f-a92f2f7b0dff" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:37:10 compute-0 nova_compute[254092]: 2025-11-25 16:37:10.305 254096 DEBUG oslo_concurrency.lockutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "8ed9342e-e179-467c-993f-a92f2f7b0dff" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:37:10 compute-0 nova_compute[254092]: 2025-11-25 16:37:10.347 254096 DEBUG nova.compute.manager [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:37:10 compute-0 nova_compute[254092]: 2025-11-25 16:37:10.353 254096 DEBUG nova.compute.manager [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:37:10 compute-0 nova_compute[254092]: 2025-11-25 16:37:10.355 254096 DEBUG nova.virt.libvirt.driver [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:37:10 compute-0 nova_compute[254092]: 2025-11-25 16:37:10.355 254096 INFO nova.virt.libvirt.driver [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Creating image(s)
Nov 25 16:37:10 compute-0 nova_compute[254092]: 2025-11-25 16:37:10.390 254096 DEBUG nova.storage.rbd_utils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] rbd image d3356685-91bc-46b9-9b9f-87ffce31a4ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:37:10 compute-0 nova_compute[254092]: 2025-11-25 16:37:10.419 254096 DEBUG nova.storage.rbd_utils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] rbd image d3356685-91bc-46b9-9b9f-87ffce31a4ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:37:10 compute-0 nova_compute[254092]: 2025-11-25 16:37:10.444 254096 DEBUG nova.storage.rbd_utils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] rbd image d3356685-91bc-46b9-9b9f-87ffce31a4ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:37:10 compute-0 nova_compute[254092]: 2025-11-25 16:37:10.448 254096 DEBUG oslo_concurrency.processutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:37:10 compute-0 nova_compute[254092]: 2025-11-25 16:37:10.497 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:37:10 compute-0 nova_compute[254092]: 2025-11-25 16:37:10.508 254096 DEBUG nova.policy [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ec49417447ad4a98b1f890ed78fd5b41', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c8b9be1565d148a3ac487eacb391dc1f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:37:10 compute-0 nova_compute[254092]: 2025-11-25 16:37:10.537 254096 DEBUG oslo_concurrency.lockutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:37:10 compute-0 nova_compute[254092]: 2025-11-25 16:37:10.538 254096 DEBUG oslo_concurrency.lockutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:37:10 compute-0 nova_compute[254092]: 2025-11-25 16:37:10.541 254096 DEBUG oslo_concurrency.processutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:37:10 compute-0 nova_compute[254092]: 2025-11-25 16:37:10.543 254096 DEBUG oslo_concurrency.lockutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:37:10 compute-0 nova_compute[254092]: 2025-11-25 16:37:10.544 254096 DEBUG oslo_concurrency.lockutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:37:10 compute-0 nova_compute[254092]: 2025-11-25 16:37:10.545 254096 DEBUG oslo_concurrency.lockutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:37:10 compute-0 nova_compute[254092]: 2025-11-25 16:37:10.565 254096 DEBUG nova.storage.rbd_utils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] rbd image d3356685-91bc-46b9-9b9f-87ffce31a4ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:37:10 compute-0 nova_compute[254092]: 2025-11-25 16:37:10.573 254096 DEBUG oslo_concurrency.processutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 d3356685-91bc-46b9-9b9f-87ffce31a4ab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:37:10 compute-0 nova_compute[254092]: 2025-11-25 16:37:10.600 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-e549d4e8-b824-480b-b81a-83e2ea1eff12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:37:10 compute-0 nova_compute[254092]: 2025-11-25 16:37:10.601 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 16:37:10 compute-0 nova_compute[254092]: 2025-11-25 16:37:10.606 254096 DEBUG nova.virt.hardware [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:37:10 compute-0 nova_compute[254092]: 2025-11-25 16:37:10.606 254096 INFO nova.compute.claims [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:37:10 compute-0 nova_compute[254092]: 2025-11-25 16:37:10.751 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:10 compute-0 nova_compute[254092]: 2025-11-25 16:37:10.813 254096 DEBUG oslo_concurrency.processutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:37:11 compute-0 nova_compute[254092]: 2025-11-25 16:37:11.112 254096 DEBUG oslo_concurrency.processutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 d3356685-91bc-46b9-9b9f-87ffce31a4ab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:37:11 compute-0 nova_compute[254092]: 2025-11-25 16:37:11.166 254096 DEBUG nova.storage.rbd_utils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] resizing rbd image d3356685-91bc-46b9-9b9f-87ffce31a4ab_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:37:11 compute-0 nova_compute[254092]: 2025-11-25 16:37:11.217 254096 DEBUG nova.compute.manager [req-14646ea2-bd22-449f-9487-0b600ca20803 req-f02fbc28-66cf-4798-8fe6-388114ab24ed a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Received event network-vif-deleted-4ac8455e-46f9-4f4e-9acc-43b78589ef10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:37:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:37:11 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1902254106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:37:11 compute-0 nova_compute[254092]: 2025-11-25 16:37:11.333 254096 DEBUG oslo_concurrency.processutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:37:11 compute-0 nova_compute[254092]: 2025-11-25 16:37:11.339 254096 DEBUG nova.compute.provider_tree [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:37:11 compute-0 nova_compute[254092]: 2025-11-25 16:37:11.360 254096 DEBUG nova.scheduler.client.report [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:37:11 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1902254106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:37:11 compute-0 nova_compute[254092]: 2025-11-25 16:37:11.397 254096 DEBUG oslo_concurrency.lockutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:37:11 compute-0 nova_compute[254092]: 2025-11-25 16:37:11.398 254096 DEBUG nova.compute.manager [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:37:11 compute-0 nova_compute[254092]: 2025-11-25 16:37:11.407 254096 DEBUG nova.objects.instance [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Lazy-loading 'migration_context' on Instance uuid d3356685-91bc-46b9-9b9f-87ffce31a4ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:37:11 compute-0 nova_compute[254092]: 2025-11-25 16:37:11.423 254096 DEBUG nova.virt.libvirt.driver [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:37:11 compute-0 nova_compute[254092]: 2025-11-25 16:37:11.424 254096 DEBUG nova.virt.libvirt.driver [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Ensure instance console log exists: /var/lib/nova/instances/d3356685-91bc-46b9-9b9f-87ffce31a4ab/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:37:11 compute-0 nova_compute[254092]: 2025-11-25 16:37:11.425 254096 DEBUG oslo_concurrency.lockutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:37:11 compute-0 nova_compute[254092]: 2025-11-25 16:37:11.425 254096 DEBUG oslo_concurrency.lockutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:37:11 compute-0 nova_compute[254092]: 2025-11-25 16:37:11.425 254096 DEBUG oslo_concurrency.lockutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:37:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1519: 321 pgs: 321 active+clean; 53 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 634 KiB/s wr, 87 op/s
Nov 25 16:37:11 compute-0 nova_compute[254092]: 2025-11-25 16:37:11.458 254096 DEBUG nova.compute.manager [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:37:11 compute-0 nova_compute[254092]: 2025-11-25 16:37:11.459 254096 DEBUG nova.network.neutron [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:37:11 compute-0 nova_compute[254092]: 2025-11-25 16:37:11.479 254096 INFO nova.virt.libvirt.driver [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:37:11 compute-0 nova_compute[254092]: 2025-11-25 16:37:11.496 254096 DEBUG nova.compute.manager [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:37:11 compute-0 nova_compute[254092]: 2025-11-25 16:37:11.601 254096 DEBUG nova.compute.manager [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:37:11 compute-0 nova_compute[254092]: 2025-11-25 16:37:11.602 254096 DEBUG nova.virt.libvirt.driver [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:37:11 compute-0 nova_compute[254092]: 2025-11-25 16:37:11.603 254096 INFO nova.virt.libvirt.driver [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Creating image(s)
Nov 25 16:37:11 compute-0 nova_compute[254092]: 2025-11-25 16:37:11.630 254096 DEBUG nova.storage.rbd_utils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image 8ed9342e-e179-467c-993f-a92f2f7b0dff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:37:11 compute-0 nova_compute[254092]: 2025-11-25 16:37:11.655 254096 DEBUG nova.storage.rbd_utils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image 8ed9342e-e179-467c-993f-a92f2f7b0dff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:37:11 compute-0 nova_compute[254092]: 2025-11-25 16:37:11.683 254096 DEBUG nova.storage.rbd_utils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image 8ed9342e-e179-467c-993f-a92f2f7b0dff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:37:11 compute-0 nova_compute[254092]: 2025-11-25 16:37:11.687 254096 DEBUG oslo_concurrency.processutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:37:11 compute-0 nova_compute[254092]: 2025-11-25 16:37:11.759 254096 DEBUG oslo_concurrency.processutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:37:11 compute-0 nova_compute[254092]: 2025-11-25 16:37:11.760 254096 DEBUG oslo_concurrency.lockutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:37:11 compute-0 nova_compute[254092]: 2025-11-25 16:37:11.760 254096 DEBUG oslo_concurrency.lockutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:37:11 compute-0 nova_compute[254092]: 2025-11-25 16:37:11.761 254096 DEBUG oslo_concurrency.lockutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:37:11 compute-0 nova_compute[254092]: 2025-11-25 16:37:11.787 254096 DEBUG nova.storage.rbd_utils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image 8ed9342e-e179-467c-993f-a92f2f7b0dff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:37:11 compute-0 nova_compute[254092]: 2025-11-25 16:37:11.791 254096 DEBUG oslo_concurrency.processutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 8ed9342e-e179-467c-993f-a92f2f7b0dff_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:37:11 compute-0 nova_compute[254092]: 2025-11-25 16:37:11.822 254096 DEBUG nova.network.neutron [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Successfully created port: 1693689c-371b-40fb-8153-5313e926d910 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:37:11 compute-0 nova_compute[254092]: 2025-11-25 16:37:11.846 254096 DEBUG nova.policy [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '75e65df891a54a2caabb073a427430b9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8bd01e0913564ac783fae350d6861e24', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:37:12 compute-0 ceph-mon[74985]: pgmap v1519: 321 pgs: 321 active+clean; 53 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 634 KiB/s wr, 87 op/s
Nov 25 16:37:12 compute-0 nova_compute[254092]: 2025-11-25 16:37:12.920 254096 DEBUG oslo_concurrency.processutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 8ed9342e-e179-467c-993f-a92f2f7b0dff_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:37:12 compute-0 nova_compute[254092]: 2025-11-25 16:37:12.981 254096 DEBUG nova.storage.rbd_utils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] resizing rbd image 8ed9342e-e179-467c-993f-a92f2f7b0dff_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:37:13 compute-0 nova_compute[254092]: 2025-11-25 16:37:13.016 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:13 compute-0 nova_compute[254092]: 2025-11-25 16:37:13.103 254096 DEBUG nova.objects.instance [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lazy-loading 'migration_context' on Instance uuid 8ed9342e-e179-467c-993f-a92f2f7b0dff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:37:13 compute-0 nova_compute[254092]: 2025-11-25 16:37:13.118 254096 DEBUG nova.virt.libvirt.driver [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:37:13 compute-0 nova_compute[254092]: 2025-11-25 16:37:13.119 254096 DEBUG nova.virt.libvirt.driver [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Ensure instance console log exists: /var/lib/nova/instances/8ed9342e-e179-467c-993f-a92f2f7b0dff/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:37:13 compute-0 nova_compute[254092]: 2025-11-25 16:37:13.119 254096 DEBUG oslo_concurrency.lockutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:37:13 compute-0 nova_compute[254092]: 2025-11-25 16:37:13.119 254096 DEBUG oslo_concurrency.lockutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:37:13 compute-0 nova_compute[254092]: 2025-11-25 16:37:13.120 254096 DEBUG oslo_concurrency.lockutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:37:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1520: 321 pgs: 321 active+clean; 53 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 634 KiB/s wr, 87 op/s
Nov 25 16:37:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:13.614 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:37:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:13.615 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:37:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:13.615 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:37:14 compute-0 nova_compute[254092]: 2025-11-25 16:37:14.485 254096 DEBUG oslo_concurrency.lockutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "497caf1f-53fe-425d-8e5c-10b2f0a2506d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:37:14 compute-0 nova_compute[254092]: 2025-11-25 16:37:14.485 254096 DEBUG oslo_concurrency.lockutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "497caf1f-53fe-425d-8e5c-10b2f0a2506d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:37:14 compute-0 nova_compute[254092]: 2025-11-25 16:37:14.505 254096 DEBUG nova.compute.manager [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:37:14 compute-0 nova_compute[254092]: 2025-11-25 16:37:14.596 254096 DEBUG oslo_concurrency.lockutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:37:14 compute-0 nova_compute[254092]: 2025-11-25 16:37:14.597 254096 DEBUG oslo_concurrency.lockutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:37:14 compute-0 nova_compute[254092]: 2025-11-25 16:37:14.604 254096 DEBUG nova.virt.hardware [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:37:14 compute-0 nova_compute[254092]: 2025-11-25 16:37:14.605 254096 INFO nova.compute.claims [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:37:14 compute-0 nova_compute[254092]: 2025-11-25 16:37:14.608 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:37:14 compute-0 nova_compute[254092]: 2025-11-25 16:37:14.775 254096 DEBUG oslo_concurrency.processutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:37:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:37:14 compute-0 nova_compute[254092]: 2025-11-25 16:37:14.808 254096 DEBUG nova.network.neutron [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Successfully created port: 27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:37:14 compute-0 nova_compute[254092]: 2025-11-25 16:37:14.978 254096 DEBUG nova.network.neutron [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Successfully updated port: 1693689c-371b-40fb-8153-5313e926d910 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:37:15 compute-0 nova_compute[254092]: 2025-11-25 16:37:15.008 254096 DEBUG oslo_concurrency.lockutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Acquiring lock "refresh_cache-d3356685-91bc-46b9-9b9f-87ffce31a4ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:37:15 compute-0 nova_compute[254092]: 2025-11-25 16:37:15.009 254096 DEBUG oslo_concurrency.lockutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Acquired lock "refresh_cache-d3356685-91bc-46b9-9b9f-87ffce31a4ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:37:15 compute-0 nova_compute[254092]: 2025-11-25 16:37:15.009 254096 DEBUG nova.network.neutron [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:37:15 compute-0 ceph-mon[74985]: pgmap v1520: 321 pgs: 321 active+clean; 53 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 634 KiB/s wr, 87 op/s
Nov 25 16:37:15 compute-0 nova_compute[254092]: 2025-11-25 16:37:15.167 254096 DEBUG nova.compute.manager [req-870b3f43-8b81-4b2b-98e8-df7155a5d80d req-1264d1a5-db5f-42ba-8ea3-31c79de35464 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Received event network-changed-1693689c-371b-40fb-8153-5313e926d910 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:37:15 compute-0 nova_compute[254092]: 2025-11-25 16:37:15.168 254096 DEBUG nova.compute.manager [req-870b3f43-8b81-4b2b-98e8-df7155a5d80d req-1264d1a5-db5f-42ba-8ea3-31c79de35464 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Refreshing instance network info cache due to event network-changed-1693689c-371b-40fb-8153-5313e926d910. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:37:15 compute-0 nova_compute[254092]: 2025-11-25 16:37:15.169 254096 DEBUG oslo_concurrency.lockutils [req-870b3f43-8b81-4b2b-98e8-df7155a5d80d req-1264d1a5-db5f-42ba-8ea3-31c79de35464 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-d3356685-91bc-46b9-9b9f-87ffce31a4ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:37:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:37:15 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/485257019' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:37:15 compute-0 nova_compute[254092]: 2025-11-25 16:37:15.210 254096 DEBUG nova.network.neutron [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:37:15 compute-0 nova_compute[254092]: 2025-11-25 16:37:15.224 254096 DEBUG oslo_concurrency.processutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:37:15 compute-0 nova_compute[254092]: 2025-11-25 16:37:15.232 254096 DEBUG nova.compute.provider_tree [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:37:15 compute-0 nova_compute[254092]: 2025-11-25 16:37:15.251 254096 DEBUG nova.scheduler.client.report [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:37:15 compute-0 nova_compute[254092]: 2025-11-25 16:37:15.284 254096 DEBUG oslo_concurrency.lockutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:37:15 compute-0 nova_compute[254092]: 2025-11-25 16:37:15.285 254096 DEBUG nova.compute.manager [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:37:15 compute-0 nova_compute[254092]: 2025-11-25 16:37:15.330 254096 DEBUG nova.compute.manager [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:37:15 compute-0 nova_compute[254092]: 2025-11-25 16:37:15.331 254096 DEBUG nova.network.neutron [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:37:15 compute-0 nova_compute[254092]: 2025-11-25 16:37:15.349 254096 INFO nova.virt.libvirt.driver [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:37:15 compute-0 nova_compute[254092]: 2025-11-25 16:37:15.375 254096 DEBUG nova.compute.manager [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:37:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1521: 321 pgs: 321 active+clean; 90 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 2.1 MiB/s wr, 103 op/s
Nov 25 16:37:15 compute-0 nova_compute[254092]: 2025-11-25 16:37:15.496 254096 DEBUG nova.compute.manager [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:37:15 compute-0 nova_compute[254092]: 2025-11-25 16:37:15.498 254096 DEBUG nova.virt.libvirt.driver [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:37:15 compute-0 nova_compute[254092]: 2025-11-25 16:37:15.498 254096 INFO nova.virt.libvirt.driver [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Creating image(s)
Nov 25 16:37:15 compute-0 nova_compute[254092]: 2025-11-25 16:37:15.526 254096 DEBUG nova.storage.rbd_utils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image 497caf1f-53fe-425d-8e5c-10b2f0a2506d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:37:15 compute-0 nova_compute[254092]: 2025-11-25 16:37:15.552 254096 DEBUG nova.storage.rbd_utils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image 497caf1f-53fe-425d-8e5c-10b2f0a2506d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:37:15 compute-0 nova_compute[254092]: 2025-11-25 16:37:15.579 254096 DEBUG nova.storage.rbd_utils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image 497caf1f-53fe-425d-8e5c-10b2f0a2506d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:37:15 compute-0 nova_compute[254092]: 2025-11-25 16:37:15.583 254096 DEBUG oslo_concurrency.processutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:37:15 compute-0 nova_compute[254092]: 2025-11-25 16:37:15.668 254096 DEBUG oslo_concurrency.processutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:37:15 compute-0 nova_compute[254092]: 2025-11-25 16:37:15.669 254096 DEBUG oslo_concurrency.lockutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:37:15 compute-0 nova_compute[254092]: 2025-11-25 16:37:15.670 254096 DEBUG oslo_concurrency.lockutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:37:15 compute-0 nova_compute[254092]: 2025-11-25 16:37:15.670 254096 DEBUG oslo_concurrency.lockutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:37:15 compute-0 nova_compute[254092]: 2025-11-25 16:37:15.693 254096 DEBUG nova.storage.rbd_utils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image 497caf1f-53fe-425d-8e5c-10b2f0a2506d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:37:15 compute-0 nova_compute[254092]: 2025-11-25 16:37:15.697 254096 DEBUG oslo_concurrency.processutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 497caf1f-53fe-425d-8e5c-10b2f0a2506d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:37:15 compute-0 nova_compute[254092]: 2025-11-25 16:37:15.755 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:16 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/485257019' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:37:16 compute-0 nova_compute[254092]: 2025-11-25 16:37:16.081 254096 DEBUG oslo_concurrency.processutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 497caf1f-53fe-425d-8e5c-10b2f0a2506d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.384s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:37:16 compute-0 nova_compute[254092]: 2025-11-25 16:37:16.150 254096 DEBUG nova.storage.rbd_utils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] resizing rbd image 497caf1f-53fe-425d-8e5c-10b2f0a2506d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:37:16 compute-0 nova_compute[254092]: 2025-11-25 16:37:16.187 254096 DEBUG nova.policy [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '788efa7ba65347b69663f4ea7ba4cd9d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ff5d31ffc7934838a461d6ac8aa546c9', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:37:16 compute-0 nova_compute[254092]: 2025-11-25 16:37:16.254 254096 DEBUG nova.objects.instance [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lazy-loading 'migration_context' on Instance uuid 497caf1f-53fe-425d-8e5c-10b2f0a2506d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:37:16 compute-0 nova_compute[254092]: 2025-11-25 16:37:16.270 254096 DEBUG nova.virt.libvirt.driver [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:37:16 compute-0 nova_compute[254092]: 2025-11-25 16:37:16.271 254096 DEBUG nova.virt.libvirt.driver [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Ensure instance console log exists: /var/lib/nova/instances/497caf1f-53fe-425d-8e5c-10b2f0a2506d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:37:16 compute-0 nova_compute[254092]: 2025-11-25 16:37:16.271 254096 DEBUG oslo_concurrency.lockutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:37:16 compute-0 nova_compute[254092]: 2025-11-25 16:37:16.271 254096 DEBUG oslo_concurrency.lockutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:37:16 compute-0 nova_compute[254092]: 2025-11-25 16:37:16.272 254096 DEBUG oslo_concurrency.lockutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:37:16 compute-0 nova_compute[254092]: 2025-11-25 16:37:16.425 254096 DEBUG nova.network.neutron [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Successfully updated port: 27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:37:16 compute-0 nova_compute[254092]: 2025-11-25 16:37:16.446 254096 DEBUG oslo_concurrency.lockutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "refresh_cache-8ed9342e-e179-467c-993f-a92f2f7b0dff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:37:16 compute-0 nova_compute[254092]: 2025-11-25 16:37:16.446 254096 DEBUG oslo_concurrency.lockutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquired lock "refresh_cache-8ed9342e-e179-467c-993f-a92f2f7b0dff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:37:16 compute-0 nova_compute[254092]: 2025-11-25 16:37:16.446 254096 DEBUG nova.network.neutron [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:37:16 compute-0 nova_compute[254092]: 2025-11-25 16:37:16.768 254096 DEBUG nova.network.neutron [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:37:17 compute-0 nova_compute[254092]: 2025-11-25 16:37:17.091 254096 DEBUG nova.network.neutron [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Successfully created port: d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:37:17 compute-0 ceph-mon[74985]: pgmap v1521: 321 pgs: 321 active+clean; 90 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 2.1 MiB/s wr, 103 op/s
Nov 25 16:37:17 compute-0 nova_compute[254092]: 2025-11-25 16:37:17.318 254096 DEBUG nova.compute.manager [req-b3588db7-80be-4c4a-8ce0-40cc4c129067 req-427a3640-d485-4c19-93fa-82d193a7fc80 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Received event network-changed-27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:37:17 compute-0 nova_compute[254092]: 2025-11-25 16:37:17.318 254096 DEBUG nova.compute.manager [req-b3588db7-80be-4c4a-8ce0-40cc4c129067 req-427a3640-d485-4c19-93fa-82d193a7fc80 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Refreshing instance network info cache due to event network-changed-27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:37:17 compute-0 nova_compute[254092]: 2025-11-25 16:37:17.318 254096 DEBUG oslo_concurrency.lockutils [req-b3588db7-80be-4c4a-8ce0-40cc4c129067 req-427a3640-d485-4c19-93fa-82d193a7fc80 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-8ed9342e-e179-467c-993f-a92f2f7b0dff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:37:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1522: 321 pgs: 321 active+clean; 135 MiB data, 505 MiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 4.4 MiB/s wr, 96 op/s
Nov 25 16:37:17 compute-0 nova_compute[254092]: 2025-11-25 16:37:17.554 254096 DEBUG nova.network.neutron [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Updating instance_info_cache with network_info: [{"id": "1693689c-371b-40fb-8153-5313e926d910", "address": "fa:16:3e:fc:c2:fe", "network": {"id": "0c61a44f-bcff-4141-9691-0b0cd16e5793", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1067646278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8b9be1565d148a3ac487eacb391dc1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1693689c-37", "ovs_interfaceid": "1693689c-371b-40fb-8153-5313e926d910", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:37:17 compute-0 nova_compute[254092]: 2025-11-25 16:37:17.578 254096 DEBUG oslo_concurrency.lockutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Releasing lock "refresh_cache-d3356685-91bc-46b9-9b9f-87ffce31a4ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:37:17 compute-0 nova_compute[254092]: 2025-11-25 16:37:17.579 254096 DEBUG nova.compute.manager [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Instance network_info: |[{"id": "1693689c-371b-40fb-8153-5313e926d910", "address": "fa:16:3e:fc:c2:fe", "network": {"id": "0c61a44f-bcff-4141-9691-0b0cd16e5793", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1067646278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8b9be1565d148a3ac487eacb391dc1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1693689c-37", "ovs_interfaceid": "1693689c-371b-40fb-8153-5313e926d910", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:37:17 compute-0 nova_compute[254092]: 2025-11-25 16:37:17.579 254096 DEBUG oslo_concurrency.lockutils [req-870b3f43-8b81-4b2b-98e8-df7155a5d80d req-1264d1a5-db5f-42ba-8ea3-31c79de35464 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-d3356685-91bc-46b9-9b9f-87ffce31a4ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:37:17 compute-0 nova_compute[254092]: 2025-11-25 16:37:17.580 254096 DEBUG nova.network.neutron [req-870b3f43-8b81-4b2b-98e8-df7155a5d80d req-1264d1a5-db5f-42ba-8ea3-31c79de35464 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Refreshing network info cache for port 1693689c-371b-40fb-8153-5313e926d910 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:37:17 compute-0 nova_compute[254092]: 2025-11-25 16:37:17.583 254096 DEBUG nova.virt.libvirt.driver [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Start _get_guest_xml network_info=[{"id": "1693689c-371b-40fb-8153-5313e926d910", "address": "fa:16:3e:fc:c2:fe", "network": {"id": "0c61a44f-bcff-4141-9691-0b0cd16e5793", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1067646278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8b9be1565d148a3ac487eacb391dc1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1693689c-37", "ovs_interfaceid": "1693689c-371b-40fb-8153-5313e926d910", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:37:17 compute-0 nova_compute[254092]: 2025-11-25 16:37:17.589 254096 WARNING nova.virt.libvirt.driver [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:37:17 compute-0 nova_compute[254092]: 2025-11-25 16:37:17.594 254096 DEBUG nova.virt.libvirt.host [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:37:17 compute-0 nova_compute[254092]: 2025-11-25 16:37:17.595 254096 DEBUG nova.virt.libvirt.host [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:37:17 compute-0 nova_compute[254092]: 2025-11-25 16:37:17.602 254096 DEBUG nova.virt.libvirt.host [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:37:17 compute-0 nova_compute[254092]: 2025-11-25 16:37:17.603 254096 DEBUG nova.virt.libvirt.host [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:37:17 compute-0 nova_compute[254092]: 2025-11-25 16:37:17.603 254096 DEBUG nova.virt.libvirt.driver [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:37:17 compute-0 nova_compute[254092]: 2025-11-25 16:37:17.604 254096 DEBUG nova.virt.hardware [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:37:17 compute-0 nova_compute[254092]: 2025-11-25 16:37:17.604 254096 DEBUG nova.virt.hardware [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:37:17 compute-0 nova_compute[254092]: 2025-11-25 16:37:17.604 254096 DEBUG nova.virt.hardware [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:37:17 compute-0 nova_compute[254092]: 2025-11-25 16:37:17.605 254096 DEBUG nova.virt.hardware [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:37:17 compute-0 nova_compute[254092]: 2025-11-25 16:37:17.605 254096 DEBUG nova.virt.hardware [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:37:17 compute-0 nova_compute[254092]: 2025-11-25 16:37:17.605 254096 DEBUG nova.virt.hardware [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:37:17 compute-0 nova_compute[254092]: 2025-11-25 16:37:17.605 254096 DEBUG nova.virt.hardware [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:37:17 compute-0 nova_compute[254092]: 2025-11-25 16:37:17.606 254096 DEBUG nova.virt.hardware [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:37:17 compute-0 nova_compute[254092]: 2025-11-25 16:37:17.606 254096 DEBUG nova.virt.hardware [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:37:17 compute-0 nova_compute[254092]: 2025-11-25 16:37:17.606 254096 DEBUG nova.virt.hardware [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:37:17 compute-0 nova_compute[254092]: 2025-11-25 16:37:17.606 254096 DEBUG nova.virt.hardware [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:37:17 compute-0 nova_compute[254092]: 2025-11-25 16:37:17.610 254096 DEBUG oslo_concurrency.processutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:17.999 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:18 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:37:18 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/201025939' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.113 254096 DEBUG oslo_concurrency.processutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:37:18 compute-0 ceph-mon[74985]: pgmap v1522: 321 pgs: 321 active+clean; 135 MiB data, 505 MiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 4.4 MiB/s wr, 96 op/s
Nov 25 16:37:18 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/201025939' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.207 254096 DEBUG nova.storage.rbd_utils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] rbd image d3356685-91bc-46b9-9b9f-87ffce31a4ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.212 254096 DEBUG oslo_concurrency.processutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.609 254096 DEBUG nova.network.neutron [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Updating instance_info_cache with network_info: [{"id": "27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc", "address": "fa:16:3e:fc:e5:5c", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27bf7a08-6d", "ovs_interfaceid": "27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.634 254096 DEBUG oslo_concurrency.lockutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Releasing lock "refresh_cache-8ed9342e-e179-467c-993f-a92f2f7b0dff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.635 254096 DEBUG nova.compute.manager [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Instance network_info: |[{"id": "27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc", "address": "fa:16:3e:fc:e5:5c", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27bf7a08-6d", "ovs_interfaceid": "27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.636 254096 DEBUG oslo_concurrency.lockutils [req-b3588db7-80be-4c4a-8ce0-40cc4c129067 req-427a3640-d485-4c19-93fa-82d193a7fc80 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-8ed9342e-e179-467c-993f-a92f2f7b0dff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.636 254096 DEBUG nova.network.neutron [req-b3588db7-80be-4c4a-8ce0-40cc4c129067 req-427a3640-d485-4c19-93fa-82d193a7fc80 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Refreshing network info cache for port 27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.640 254096 DEBUG nova.virt.libvirt.driver [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Start _get_guest_xml network_info=[{"id": "27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc", "address": "fa:16:3e:fc:e5:5c", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27bf7a08-6d", "ovs_interfaceid": "27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.647 254096 WARNING nova.virt.libvirt.driver [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.655 254096 DEBUG nova.virt.libvirt.host [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.656 254096 DEBUG nova.virt.libvirt.host [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.661 254096 DEBUG nova.virt.libvirt.host [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.662 254096 DEBUG nova.virt.libvirt.host [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.663 254096 DEBUG nova.virt.libvirt.driver [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.663 254096 DEBUG nova.virt.hardware [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.664 254096 DEBUG nova.virt.hardware [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.664 254096 DEBUG nova.virt.hardware [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.664 254096 DEBUG nova.virt.hardware [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.664 254096 DEBUG nova.virt.hardware [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.665 254096 DEBUG nova.virt.hardware [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.665 254096 DEBUG nova.virt.hardware [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.665 254096 DEBUG nova.virt.hardware [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.666 254096 DEBUG nova.virt.hardware [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.666 254096 DEBUG nova.virt.hardware [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.666 254096 DEBUG nova.virt.hardware [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.670 254096 DEBUG oslo_concurrency.processutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:37:18 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:37:18 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1732494780' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.716 254096 DEBUG oslo_concurrency.processutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.718 254096 DEBUG nova.virt.libvirt.vif [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:37:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-2097469038',display_name='tempest-AttachInterfacesUnderV243Test-server-2097469038',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-2097469038',id=50,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK35kG1vwXJXdI4r/EHWJZkHbZ92CrcmDm6T8HHIBEabt8dsD4hwgL2ByxTJp0aD3PDPswuWtqGhIZZ1n6EYekgLgLZqS6KsMbAxaY/ldKY87IH4bSdNTYm2tWLgSZE5MA==',key_name='tempest-keypair-935791717',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c8b9be1565d148a3ac487eacb391dc1f',ramdisk_id='',reservation_id='r-nejw6g9z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1206829083',owner_user_name='tempest-AttachInterfacesUnderV243Test-1206829083-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:37:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ec49417447ad4a98b1f890ed78fd5b41',uuid=d3356685-91bc-46b9-9b9f-87ffce31a4ab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1693689c-371b-40fb-8153-5313e926d910", "address": "fa:16:3e:fc:c2:fe", "network": {"id": "0c61a44f-bcff-4141-9691-0b0cd16e5793", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1067646278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8b9be1565d148a3ac487eacb391dc1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1693689c-37", "ovs_interfaceid": "1693689c-371b-40fb-8153-5313e926d910", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.719 254096 DEBUG nova.network.os_vif_util [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Converting VIF {"id": "1693689c-371b-40fb-8153-5313e926d910", "address": "fa:16:3e:fc:c2:fe", "network": {"id": "0c61a44f-bcff-4141-9691-0b0cd16e5793", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1067646278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8b9be1565d148a3ac487eacb391dc1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1693689c-37", "ovs_interfaceid": "1693689c-371b-40fb-8153-5313e926d910", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.720 254096 DEBUG nova.network.os_vif_util [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:c2:fe,bridge_name='br-int',has_traffic_filtering=True,id=1693689c-371b-40fb-8153-5313e926d910,network=Network(0c61a44f-bcff-4141-9691-0b0cd16e5793),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1693689c-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.721 254096 DEBUG nova.objects.instance [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Lazy-loading 'pci_devices' on Instance uuid d3356685-91bc-46b9-9b9f-87ffce31a4ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.738 254096 DEBUG nova.virt.libvirt.driver [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:37:18 compute-0 nova_compute[254092]:   <uuid>d3356685-91bc-46b9-9b9f-87ffce31a4ab</uuid>
Nov 25 16:37:18 compute-0 nova_compute[254092]:   <name>instance-00000032</name>
Nov 25 16:37:18 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:37:18 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:37:18 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:37:18 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:       <nova:name>tempest-AttachInterfacesUnderV243Test-server-2097469038</nova:name>
Nov 25 16:37:18 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:37:17</nova:creationTime>
Nov 25 16:37:18 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:37:18 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:37:18 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:37:18 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:37:18 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:37:18 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:37:18 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:37:18 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:37:18 compute-0 nova_compute[254092]:         <nova:user uuid="ec49417447ad4a98b1f890ed78fd5b41">tempest-AttachInterfacesUnderV243Test-1206829083-project-member</nova:user>
Nov 25 16:37:18 compute-0 nova_compute[254092]:         <nova:project uuid="c8b9be1565d148a3ac487eacb391dc1f">tempest-AttachInterfacesUnderV243Test-1206829083</nova:project>
Nov 25 16:37:18 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:37:18 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:37:18 compute-0 nova_compute[254092]:         <nova:port uuid="1693689c-371b-40fb-8153-5313e926d910">
Nov 25 16:37:18 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:37:18 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:37:18 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:37:18 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:37:18 compute-0 nova_compute[254092]:     <system>
Nov 25 16:37:18 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:37:18 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:37:18 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:37:18 compute-0 nova_compute[254092]:       <entry name="serial">d3356685-91bc-46b9-9b9f-87ffce31a4ab</entry>
Nov 25 16:37:18 compute-0 nova_compute[254092]:       <entry name="uuid">d3356685-91bc-46b9-9b9f-87ffce31a4ab</entry>
Nov 25 16:37:18 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     </system>
Nov 25 16:37:18 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:37:18 compute-0 nova_compute[254092]:   <os>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:   </os>
Nov 25 16:37:18 compute-0 nova_compute[254092]:   <features>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:   </features>
Nov 25 16:37:18 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:37:18 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:37:18 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:37:18 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:37:18 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:37:18 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/d3356685-91bc-46b9-9b9f-87ffce31a4ab_disk">
Nov 25 16:37:18 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:       </source>
Nov 25 16:37:18 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:37:18 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:37:18 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:37:18 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/d3356685-91bc-46b9-9b9f-87ffce31a4ab_disk.config">
Nov 25 16:37:18 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:       </source>
Nov 25 16:37:18 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:37:18 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:37:18 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:37:18 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:fc:c2:fe"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:       <target dev="tap1693689c-37"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:37:18 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/d3356685-91bc-46b9-9b9f-87ffce31a4ab/console.log" append="off"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     <video>
Nov 25 16:37:18 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     </video>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:37:18 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:37:18 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:37:18 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:37:18 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:37:18 compute-0 nova_compute[254092]: </domain>
Nov 25 16:37:18 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.740 254096 DEBUG nova.compute.manager [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Preparing to wait for external event network-vif-plugged-1693689c-371b-40fb-8153-5313e926d910 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.740 254096 DEBUG oslo_concurrency.lockutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Acquiring lock "d3356685-91bc-46b9-9b9f-87ffce31a4ab-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.741 254096 DEBUG oslo_concurrency.lockutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Lock "d3356685-91bc-46b9-9b9f-87ffce31a4ab-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.741 254096 DEBUG oslo_concurrency.lockutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Lock "d3356685-91bc-46b9-9b9f-87ffce31a4ab-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.742 254096 DEBUG nova.virt.libvirt.vif [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:37:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-2097469038',display_name='tempest-AttachInterfacesUnderV243Test-server-2097469038',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-2097469038',id=50,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK35kG1vwXJXdI4r/EHWJZkHbZ92CrcmDm6T8HHIBEabt8dsD4hwgL2ByxTJp0aD3PDPswuWtqGhIZZ1n6EYekgLgLZqS6KsMbAxaY/ldKY87IH4bSdNTYm2tWLgSZE5MA==',key_name='tempest-keypair-935791717',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c8b9be1565d148a3ac487eacb391dc1f',ramdisk_id='',reservation_id='r-nejw6g9z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1206829083',owner_user_name='tempest-AttachInterfacesUnderV243Test-1206829083-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:37:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ec49417447ad4a98b1f890ed78fd5b41',uuid=d3356685-91bc-46b9-9b9f-87ffce31a4ab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1693689c-371b-40fb-8153-5313e926d910", "address": "fa:16:3e:fc:c2:fe", "network": {"id": "0c61a44f-bcff-4141-9691-0b0cd16e5793", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1067646278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8b9be1565d148a3ac487eacb391dc1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1693689c-37", "ovs_interfaceid": "1693689c-371b-40fb-8153-5313e926d910", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.742 254096 DEBUG nova.network.os_vif_util [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Converting VIF {"id": "1693689c-371b-40fb-8153-5313e926d910", "address": "fa:16:3e:fc:c2:fe", "network": {"id": "0c61a44f-bcff-4141-9691-0b0cd16e5793", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1067646278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8b9be1565d148a3ac487eacb391dc1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1693689c-37", "ovs_interfaceid": "1693689c-371b-40fb-8153-5313e926d910", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.743 254096 DEBUG nova.network.os_vif_util [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:c2:fe,bridge_name='br-int',has_traffic_filtering=True,id=1693689c-371b-40fb-8153-5313e926d910,network=Network(0c61a44f-bcff-4141-9691-0b0cd16e5793),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1693689c-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.743 254096 DEBUG os_vif [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:c2:fe,bridge_name='br-int',has_traffic_filtering=True,id=1693689c-371b-40fb-8153-5313e926d910,network=Network(0c61a44f-bcff-4141-9691-0b0cd16e5793),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1693689c-37') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.744 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.744 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.745 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.749 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.749 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1693689c-37, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.750 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1693689c-37, col_values=(('external_ids', {'iface-id': '1693689c-371b-40fb-8153-5313e926d910', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fc:c2:fe', 'vm-uuid': 'd3356685-91bc-46b9-9b9f-87ffce31a4ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.752 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:18 compute-0 NetworkManager[48891]: <info>  [1764088638.7532] manager: (tap1693689c-37): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/192)
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.757 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.762 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.763 254096 INFO os_vif [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:c2:fe,bridge_name='br-int',has_traffic_filtering=True,id=1693689c-371b-40fb-8153-5313e926d910,network=Network(0c61a44f-bcff-4141-9691-0b0cd16e5793),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1693689c-37')
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.829 254096 DEBUG nova.virt.libvirt.driver [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.830 254096 DEBUG nova.virt.libvirt.driver [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.830 254096 DEBUG nova.virt.libvirt.driver [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] No VIF found with MAC fa:16:3e:fc:c2:fe, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.830 254096 INFO nova.virt.libvirt.driver [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Using config drive
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.863 254096 DEBUG nova.storage.rbd_utils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] rbd image d3356685-91bc-46b9-9b9f-87ffce31a4ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.871 254096 DEBUG nova.network.neutron [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Successfully updated port: d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.954 254096 DEBUG oslo_concurrency.lockutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "refresh_cache-497caf1f-53fe-425d-8e5c-10b2f0a2506d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.954 254096 DEBUG oslo_concurrency.lockutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquired lock "refresh_cache-497caf1f-53fe-425d-8e5c-10b2f0a2506d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.955 254096 DEBUG nova.network.neutron [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.976 254096 DEBUG nova.compute.manager [req-29d319a5-472b-4609-87d6-02c3ecab0422 req-0923fd27-247d-45be-ad47-e1d05669bcd9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Received event network-changed-d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.977 254096 DEBUG nova.compute.manager [req-29d319a5-472b-4609-87d6-02c3ecab0422 req-0923fd27-247d-45be-ad47-e1d05669bcd9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Refreshing instance network info cache due to event network-changed-d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:37:18 compute-0 nova_compute[254092]: 2025-11-25 16:37:18.977 254096 DEBUG oslo_concurrency.lockutils [req-29d319a5-472b-4609-87d6-02c3ecab0422 req-0923fd27-247d-45be-ad47-e1d05669bcd9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-497caf1f-53fe-425d-8e5c-10b2f0a2506d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:37:19 compute-0 nova_compute[254092]: 2025-11-25 16:37:19.191 254096 DEBUG nova.network.neutron [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:37:19 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1732494780' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:37:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:37:19 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1592362619' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:37:19 compute-0 nova_compute[254092]: 2025-11-25 16:37:19.333 254096 DEBUG oslo_concurrency.processutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.663s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:37:19 compute-0 nova_compute[254092]: 2025-11-25 16:37:19.362 254096 DEBUG nova.storage.rbd_utils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image 8ed9342e-e179-467c-993f-a92f2f7b0dff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:37:19 compute-0 nova_compute[254092]: 2025-11-25 16:37:19.368 254096 DEBUG oslo_concurrency.processutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:37:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1523: 321 pgs: 321 active+clean; 135 MiB data, 505 MiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 4.4 MiB/s wr, 96 op/s
Nov 25 16:37:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:37:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:37:19 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1751106983' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:37:19 compute-0 nova_compute[254092]: 2025-11-25 16:37:19.890 254096 DEBUG oslo_concurrency.processutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:37:19 compute-0 nova_compute[254092]: 2025-11-25 16:37:19.892 254096 DEBUG nova.virt.libvirt.vif [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:37:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1535795979',display_name='tempest-ImagesTestJSON-server-1535795979',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1535795979',id=51,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8bd01e0913564ac783fae350d6861e24',ramdisk_id='',reservation_id='r-no9ayfye',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-217824554',owner_user_name='tempest-ImagesTestJSON-217824554-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:37:11Z,user_data=None,user_id='75e65df891a54a2caabb073a427430b9',uuid=8ed9342e-e179-467c-993f-a92f2f7b0dff,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc", "address": "fa:16:3e:fc:e5:5c", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27bf7a08-6d", "ovs_interfaceid": "27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:37:19 compute-0 nova_compute[254092]: 2025-11-25 16:37:19.892 254096 DEBUG nova.network.os_vif_util [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converting VIF {"id": "27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc", "address": "fa:16:3e:fc:e5:5c", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27bf7a08-6d", "ovs_interfaceid": "27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:37:19 compute-0 nova_compute[254092]: 2025-11-25 16:37:19.893 254096 DEBUG nova.network.os_vif_util [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:e5:5c,bridge_name='br-int',has_traffic_filtering=True,id=27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27bf7a08-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:37:19 compute-0 nova_compute[254092]: 2025-11-25 16:37:19.894 254096 DEBUG nova.objects.instance [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8ed9342e-e179-467c-993f-a92f2f7b0dff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:37:19 compute-0 nova_compute[254092]: 2025-11-25 16:37:19.923 254096 DEBUG nova.virt.libvirt.driver [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:37:19 compute-0 nova_compute[254092]:   <uuid>8ed9342e-e179-467c-993f-a92f2f7b0dff</uuid>
Nov 25 16:37:19 compute-0 nova_compute[254092]:   <name>instance-00000033</name>
Nov 25 16:37:19 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:37:19 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:37:19 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:37:19 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:       <nova:name>tempest-ImagesTestJSON-server-1535795979</nova:name>
Nov 25 16:37:19 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:37:18</nova:creationTime>
Nov 25 16:37:19 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:37:19 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:37:19 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:37:19 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:37:19 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:37:19 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:37:19 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:37:19 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:37:19 compute-0 nova_compute[254092]:         <nova:user uuid="75e65df891a54a2caabb073a427430b9">tempest-ImagesTestJSON-217824554-project-member</nova:user>
Nov 25 16:37:19 compute-0 nova_compute[254092]:         <nova:project uuid="8bd01e0913564ac783fae350d6861e24">tempest-ImagesTestJSON-217824554</nova:project>
Nov 25 16:37:19 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:37:19 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:37:19 compute-0 nova_compute[254092]:         <nova:port uuid="27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc">
Nov 25 16:37:19 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:37:19 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:37:19 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:37:19 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:37:19 compute-0 nova_compute[254092]:     <system>
Nov 25 16:37:19 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:37:19 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:37:19 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:37:19 compute-0 nova_compute[254092]:       <entry name="serial">8ed9342e-e179-467c-993f-a92f2f7b0dff</entry>
Nov 25 16:37:19 compute-0 nova_compute[254092]:       <entry name="uuid">8ed9342e-e179-467c-993f-a92f2f7b0dff</entry>
Nov 25 16:37:19 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     </system>
Nov 25 16:37:19 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:37:19 compute-0 nova_compute[254092]:   <os>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:   </os>
Nov 25 16:37:19 compute-0 nova_compute[254092]:   <features>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:   </features>
Nov 25 16:37:19 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:37:19 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:37:19 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:37:19 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:37:19 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:37:19 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/8ed9342e-e179-467c-993f-a92f2f7b0dff_disk">
Nov 25 16:37:19 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:       </source>
Nov 25 16:37:19 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:37:19 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:37:19 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:37:19 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/8ed9342e-e179-467c-993f-a92f2f7b0dff_disk.config">
Nov 25 16:37:19 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:       </source>
Nov 25 16:37:19 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:37:19 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:37:19 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:37:19 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:fc:e5:5c"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:       <target dev="tap27bf7a08-6d"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:37:19 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/8ed9342e-e179-467c-993f-a92f2f7b0dff/console.log" append="off"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     <video>
Nov 25 16:37:19 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     </video>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:37:19 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:37:19 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:37:19 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:37:19 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:37:19 compute-0 nova_compute[254092]: </domain>
Nov 25 16:37:19 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:37:19 compute-0 nova_compute[254092]: 2025-11-25 16:37:19.925 254096 DEBUG nova.compute.manager [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Preparing to wait for external event network-vif-plugged-27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:37:19 compute-0 nova_compute[254092]: 2025-11-25 16:37:19.926 254096 DEBUG oslo_concurrency.lockutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "8ed9342e-e179-467c-993f-a92f2f7b0dff-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:37:19 compute-0 nova_compute[254092]: 2025-11-25 16:37:19.926 254096 DEBUG oslo_concurrency.lockutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "8ed9342e-e179-467c-993f-a92f2f7b0dff-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:37:19 compute-0 nova_compute[254092]: 2025-11-25 16:37:19.926 254096 DEBUG oslo_concurrency.lockutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "8ed9342e-e179-467c-993f-a92f2f7b0dff-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:37:19 compute-0 nova_compute[254092]: 2025-11-25 16:37:19.927 254096 DEBUG nova.virt.libvirt.vif [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:37:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1535795979',display_name='tempest-ImagesTestJSON-server-1535795979',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1535795979',id=51,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8bd01e0913564ac783fae350d6861e24',ramdisk_id='',reservation_id='r-no9ayfye',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-217824554',owner_user_name='tempest-ImagesTestJSON-217824554-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:37:11Z,user_data=None,user_id='75e65df891a54a2caabb073a427430b9',uuid=8ed9342e-e179-467c-993f-a92f2f7b0dff,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc", "address": "fa:16:3e:fc:e5:5c", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27bf7a08-6d", "ovs_interfaceid": "27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:37:19 compute-0 nova_compute[254092]: 2025-11-25 16:37:19.927 254096 DEBUG nova.network.os_vif_util [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converting VIF {"id": "27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc", "address": "fa:16:3e:fc:e5:5c", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27bf7a08-6d", "ovs_interfaceid": "27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:37:19 compute-0 nova_compute[254092]: 2025-11-25 16:37:19.928 254096 DEBUG nova.network.os_vif_util [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:e5:5c,bridge_name='br-int',has_traffic_filtering=True,id=27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27bf7a08-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:37:19 compute-0 nova_compute[254092]: 2025-11-25 16:37:19.929 254096 DEBUG os_vif [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:e5:5c,bridge_name='br-int',has_traffic_filtering=True,id=27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27bf7a08-6d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:37:19 compute-0 nova_compute[254092]: 2025-11-25 16:37:19.929 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:19 compute-0 nova_compute[254092]: 2025-11-25 16:37:19.930 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:37:19 compute-0 nova_compute[254092]: 2025-11-25 16:37:19.930 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:37:19 compute-0 nova_compute[254092]: 2025-11-25 16:37:19.933 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:19 compute-0 nova_compute[254092]: 2025-11-25 16:37:19.933 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap27bf7a08-6d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:37:19 compute-0 nova_compute[254092]: 2025-11-25 16:37:19.934 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap27bf7a08-6d, col_values=(('external_ids', {'iface-id': '27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fc:e5:5c', 'vm-uuid': '8ed9342e-e179-467c-993f-a92f2f7b0dff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:37:19 compute-0 NetworkManager[48891]: <info>  [1764088639.9364] manager: (tap27bf7a08-6d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/193)
Nov 25 16:37:19 compute-0 nova_compute[254092]: 2025-11-25 16:37:19.939 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:37:19 compute-0 nova_compute[254092]: 2025-11-25 16:37:19.944 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:19 compute-0 nova_compute[254092]: 2025-11-25 16:37:19.945 254096 INFO os_vif [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:e5:5c,bridge_name='br-int',has_traffic_filtering=True,id=27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27bf7a08-6d')
Nov 25 16:37:19 compute-0 nova_compute[254092]: 2025-11-25 16:37:19.964 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088624.9616897, 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:37:19 compute-0 nova_compute[254092]: 2025-11-25 16:37:19.965 254096 INFO nova.compute.manager [-] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] VM Stopped (Lifecycle Event)
Nov 25 16:37:19 compute-0 nova_compute[254092]: 2025-11-25 16:37:19.982 254096 DEBUG nova.compute.manager [None req-0d9e5fd0-4f5a-47b3-bda5-7b34cbe34c5c - - - - - -] [instance: 288ada45-c7fc-4ddc-8b83-1c03ffa14fe6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:37:20 compute-0 nova_compute[254092]: 2025-11-25 16:37:20.048 254096 DEBUG nova.virt.libvirt.driver [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:37:20 compute-0 nova_compute[254092]: 2025-11-25 16:37:20.049 254096 DEBUG nova.virt.libvirt.driver [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:37:20 compute-0 nova_compute[254092]: 2025-11-25 16:37:20.049 254096 DEBUG nova.virt.libvirt.driver [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] No VIF found with MAC fa:16:3e:fc:e5:5c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:37:20 compute-0 nova_compute[254092]: 2025-11-25 16:37:20.049 254096 INFO nova.virt.libvirt.driver [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Using config drive
Nov 25 16:37:20 compute-0 nova_compute[254092]: 2025-11-25 16:37:20.073 254096 DEBUG nova.storage.rbd_utils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image 8ed9342e-e179-467c-993f-a92f2f7b0dff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:37:20 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1592362619' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:37:20 compute-0 ceph-mon[74985]: pgmap v1523: 321 pgs: 321 active+clean; 135 MiB data, 505 MiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 4.4 MiB/s wr, 96 op/s
Nov 25 16:37:20 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1751106983' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:37:20 compute-0 nova_compute[254092]: 2025-11-25 16:37:20.446 254096 INFO nova.virt.libvirt.driver [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Creating config drive at /var/lib/nova/instances/d3356685-91bc-46b9-9b9f-87ffce31a4ab/disk.config
Nov 25 16:37:20 compute-0 nova_compute[254092]: 2025-11-25 16:37:20.455 254096 DEBUG oslo_concurrency.processutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d3356685-91bc-46b9-9b9f-87ffce31a4ab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt_avtw57 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:37:20 compute-0 nova_compute[254092]: 2025-11-25 16:37:20.600 254096 DEBUG oslo_concurrency.processutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d3356685-91bc-46b9-9b9f-87ffce31a4ab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt_avtw57" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:37:20 compute-0 nova_compute[254092]: 2025-11-25 16:37:20.639 254096 DEBUG nova.storage.rbd_utils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] rbd image d3356685-91bc-46b9-9b9f-87ffce31a4ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:37:20 compute-0 nova_compute[254092]: 2025-11-25 16:37:20.645 254096 DEBUG oslo_concurrency.processutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d3356685-91bc-46b9-9b9f-87ffce31a4ab/disk.config d3356685-91bc-46b9-9b9f-87ffce31a4ab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:37:20 compute-0 nova_compute[254092]: 2025-11-25 16:37:20.712 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088625.7113466, e549d4e8-b824-480b-b81a-83e2ea1eff12 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:37:20 compute-0 nova_compute[254092]: 2025-11-25 16:37:20.713 254096 INFO nova.compute.manager [-] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] VM Stopped (Lifecycle Event)
Nov 25 16:37:20 compute-0 nova_compute[254092]: 2025-11-25 16:37:20.736 254096 DEBUG nova.compute.manager [None req-89873492-4360-48dc-919c-235fc11aa7ad - - - - - -] [instance: e549d4e8-b824-480b-b81a-83e2ea1eff12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:37:20 compute-0 nova_compute[254092]: 2025-11-25 16:37:20.739 254096 DEBUG nova.network.neutron [req-870b3f43-8b81-4b2b-98e8-df7155a5d80d req-1264d1a5-db5f-42ba-8ea3-31c79de35464 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Updated VIF entry in instance network info cache for port 1693689c-371b-40fb-8153-5313e926d910. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:37:20 compute-0 nova_compute[254092]: 2025-11-25 16:37:20.740 254096 DEBUG nova.network.neutron [req-870b3f43-8b81-4b2b-98e8-df7155a5d80d req-1264d1a5-db5f-42ba-8ea3-31c79de35464 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Updating instance_info_cache with network_info: [{"id": "1693689c-371b-40fb-8153-5313e926d910", "address": "fa:16:3e:fc:c2:fe", "network": {"id": "0c61a44f-bcff-4141-9691-0b0cd16e5793", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1067646278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8b9be1565d148a3ac487eacb391dc1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1693689c-37", "ovs_interfaceid": "1693689c-371b-40fb-8153-5313e926d910", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:37:20 compute-0 nova_compute[254092]: 2025-11-25 16:37:20.755 254096 DEBUG oslo_concurrency.lockutils [req-870b3f43-8b81-4b2b-98e8-df7155a5d80d req-1264d1a5-db5f-42ba-8ea3-31c79de35464 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-d3356685-91bc-46b9-9b9f-87ffce31a4ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:37:20 compute-0 nova_compute[254092]: 2025-11-25 16:37:20.805 254096 INFO nova.virt.libvirt.driver [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Creating config drive at /var/lib/nova/instances/8ed9342e-e179-467c-993f-a92f2f7b0dff/disk.config
Nov 25 16:37:20 compute-0 nova_compute[254092]: 2025-11-25 16:37:20.810 254096 DEBUG oslo_concurrency.processutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8ed9342e-e179-467c-993f-a92f2f7b0dff/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4rmxs87b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:37:20 compute-0 nova_compute[254092]: 2025-11-25 16:37:20.959 254096 DEBUG oslo_concurrency.processutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8ed9342e-e179-467c-993f-a92f2f7b0dff/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4rmxs87b" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:37:20 compute-0 nova_compute[254092]: 2025-11-25 16:37:20.980 254096 DEBUG nova.storage.rbd_utils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] rbd image 8ed9342e-e179-467c-993f-a92f2f7b0dff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:37:20 compute-0 nova_compute[254092]: 2025-11-25 16:37:20.983 254096 DEBUG oslo_concurrency.processutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8ed9342e-e179-467c-993f-a92f2f7b0dff/disk.config 8ed9342e-e179-467c-993f-a92f2f7b0dff_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:37:21 compute-0 nova_compute[254092]: 2025-11-25 16:37:21.414 254096 DEBUG nova.network.neutron [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Updating instance_info_cache with network_info: [{"id": "d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca", "address": "fa:16:3e:b0:76:f8", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd90f4f5a-3c", "ovs_interfaceid": "d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:37:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1524: 321 pgs: 321 active+clean; 180 MiB data, 524 MiB used, 59 GiB / 60 GiB avail; 72 KiB/s rd, 5.5 MiB/s wr, 113 op/s
Nov 25 16:37:21 compute-0 nova_compute[254092]: 2025-11-25 16:37:21.480 254096 DEBUG oslo_concurrency.lockutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Releasing lock "refresh_cache-497caf1f-53fe-425d-8e5c-10b2f0a2506d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:37:21 compute-0 nova_compute[254092]: 2025-11-25 16:37:21.481 254096 DEBUG nova.compute.manager [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Instance network_info: |[{"id": "d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca", "address": "fa:16:3e:b0:76:f8", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd90f4f5a-3c", "ovs_interfaceid": "d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:37:21 compute-0 nova_compute[254092]: 2025-11-25 16:37:21.482 254096 DEBUG oslo_concurrency.lockutils [req-29d319a5-472b-4609-87d6-02c3ecab0422 req-0923fd27-247d-45be-ad47-e1d05669bcd9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-497caf1f-53fe-425d-8e5c-10b2f0a2506d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:37:21 compute-0 nova_compute[254092]: 2025-11-25 16:37:21.482 254096 DEBUG nova.network.neutron [req-29d319a5-472b-4609-87d6-02c3ecab0422 req-0923fd27-247d-45be-ad47-e1d05669bcd9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Refreshing network info cache for port d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:37:21 compute-0 nova_compute[254092]: 2025-11-25 16:37:21.485 254096 DEBUG nova.virt.libvirt.driver [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Start _get_guest_xml network_info=[{"id": "d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca", "address": "fa:16:3e:b0:76:f8", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd90f4f5a-3c", "ovs_interfaceid": "d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:37:21 compute-0 nova_compute[254092]: 2025-11-25 16:37:21.491 254096 WARNING nova.virt.libvirt.driver [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:37:21 compute-0 nova_compute[254092]: 2025-11-25 16:37:21.496 254096 DEBUG nova.virt.libvirt.host [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:37:21 compute-0 nova_compute[254092]: 2025-11-25 16:37:21.496 254096 DEBUG nova.virt.libvirt.host [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:37:21 compute-0 nova_compute[254092]: 2025-11-25 16:37:21.504 254096 DEBUG nova.virt.libvirt.host [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:37:21 compute-0 nova_compute[254092]: 2025-11-25 16:37:21.504 254096 DEBUG nova.virt.libvirt.host [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:37:21 compute-0 nova_compute[254092]: 2025-11-25 16:37:21.505 254096 DEBUG nova.virt.libvirt.driver [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:37:21 compute-0 nova_compute[254092]: 2025-11-25 16:37:21.505 254096 DEBUG nova.virt.hardware [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:37:21 compute-0 nova_compute[254092]: 2025-11-25 16:37:21.506 254096 DEBUG nova.virt.hardware [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:37:21 compute-0 nova_compute[254092]: 2025-11-25 16:37:21.506 254096 DEBUG nova.virt.hardware [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:37:21 compute-0 nova_compute[254092]: 2025-11-25 16:37:21.506 254096 DEBUG nova.virt.hardware [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:37:21 compute-0 nova_compute[254092]: 2025-11-25 16:37:21.506 254096 DEBUG nova.virt.hardware [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:37:21 compute-0 nova_compute[254092]: 2025-11-25 16:37:21.507 254096 DEBUG nova.virt.hardware [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:37:21 compute-0 nova_compute[254092]: 2025-11-25 16:37:21.507 254096 DEBUG nova.virt.hardware [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:37:21 compute-0 nova_compute[254092]: 2025-11-25 16:37:21.507 254096 DEBUG nova.virt.hardware [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:37:21 compute-0 nova_compute[254092]: 2025-11-25 16:37:21.507 254096 DEBUG nova.virt.hardware [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:37:21 compute-0 nova_compute[254092]: 2025-11-25 16:37:21.508 254096 DEBUG nova.virt.hardware [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:37:21 compute-0 nova_compute[254092]: 2025-11-25 16:37:21.508 254096 DEBUG nova.virt.hardware [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:37:21 compute-0 nova_compute[254092]: 2025-11-25 16:37:21.511 254096 DEBUG oslo_concurrency.processutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:37:21 compute-0 nova_compute[254092]: 2025-11-25 16:37:21.836 254096 DEBUG nova.network.neutron [req-b3588db7-80be-4c4a-8ce0-40cc4c129067 req-427a3640-d485-4c19-93fa-82d193a7fc80 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Updated VIF entry in instance network info cache for port 27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:37:21 compute-0 nova_compute[254092]: 2025-11-25 16:37:21.838 254096 DEBUG nova.network.neutron [req-b3588db7-80be-4c4a-8ce0-40cc4c129067 req-427a3640-d485-4c19-93fa-82d193a7fc80 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Updating instance_info_cache with network_info: [{"id": "27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc", "address": "fa:16:3e:fc:e5:5c", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27bf7a08-6d", "ovs_interfaceid": "27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:37:21 compute-0 nova_compute[254092]: 2025-11-25 16:37:21.866 254096 DEBUG oslo_concurrency.lockutils [req-b3588db7-80be-4c4a-8ce0-40cc4c129067 req-427a3640-d485-4c19-93fa-82d193a7fc80 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-8ed9342e-e179-467c-993f-a92f2f7b0dff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:37:21 compute-0 nova_compute[254092]: 2025-11-25 16:37:21.868 254096 DEBUG oslo_concurrency.processutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d3356685-91bc-46b9-9b9f-87ffce31a4ab/disk.config d3356685-91bc-46b9-9b9f-87ffce31a4ab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.222s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:37:21 compute-0 nova_compute[254092]: 2025-11-25 16:37:21.870 254096 INFO nova.virt.libvirt.driver [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Deleting local config drive /var/lib/nova/instances/d3356685-91bc-46b9-9b9f-87ffce31a4ab/disk.config because it was imported into RBD.
Nov 25 16:37:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:37:21 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4052910765' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:37:21 compute-0 kernel: tap1693689c-37: entered promiscuous mode
Nov 25 16:37:21 compute-0 NetworkManager[48891]: <info>  [1764088641.9460] manager: (tap1693689c-37): new Tun device (/org/freedesktop/NetworkManager/Devices/194)
Nov 25 16:37:21 compute-0 ovn_controller[153477]: 2025-11-25T16:37:21Z|00446|binding|INFO|Claiming lport 1693689c-371b-40fb-8153-5313e926d910 for this chassis.
Nov 25 16:37:21 compute-0 ovn_controller[153477]: 2025-11-25T16:37:21Z|00447|binding|INFO|1693689c-371b-40fb-8153-5313e926d910: Claiming fa:16:3e:fc:c2:fe 10.100.0.14
Nov 25 16:37:21 compute-0 nova_compute[254092]: 2025-11-25 16:37:21.949 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:21.996 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:c2:fe 10.100.0.14'], port_security=['fa:16:3e:fc:c2:fe 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'd3356685-91bc-46b9-9b9f-87ffce31a4ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0c61a44f-bcff-4141-9691-0b0cd16e5793', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c8b9be1565d148a3ac487eacb391dc1f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f328e556-e196-4e21-8b60-04c34108b4ac', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d8c5f31d-3e5c-4add-b1b6-dfe24828d28e, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1693689c-371b-40fb-8153-5313e926d910) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:37:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:21.997 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1693689c-371b-40fb-8153-5313e926d910 in datapath 0c61a44f-bcff-4141-9691-0b0cd16e5793 bound to our chassis
Nov 25 16:37:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:21.998 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0c61a44f-bcff-4141-9691-0b0cd16e5793
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.006 254096 DEBUG oslo_concurrency.processutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.014 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[edab9812-4462-4201-bcc7-7d3efae8c2b3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.015 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0c61a44f-b1 in ovnmeta-0c61a44f-bcff-4141-9691-0b0cd16e5793 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.018 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0c61a44f-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.018 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[faf89c31-2482-445e-b91c-48afa9ba548d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:22 compute-0 systemd-udevd[311370]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.019 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[61036d8f-a30a-498d-8051-5816b15d2c56]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:22 compute-0 systemd-machined[216343]: New machine qemu-59-instance-00000032.
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.032 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[3984a28d-4653-4a98-91d9-5584204acd9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:22 compute-0 NetworkManager[48891]: <info>  [1764088642.0368] device (tap1693689c-37): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:37:22 compute-0 NetworkManager[48891]: <info>  [1764088642.0377] device (tap1693689c-37): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.040 254096 DEBUG nova.storage.rbd_utils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image 497caf1f-53fe-425d-8e5c-10b2f0a2506d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:37:22 compute-0 systemd[1]: Started Virtual Machine qemu-59-instance-00000032.
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.044 254096 DEBUG oslo_concurrency.processutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.059 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b35dd095-7e84-4426-8aa9-ed2e1f0f77ce]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.075 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:22 compute-0 ovn_controller[153477]: 2025-11-25T16:37:22Z|00448|binding|INFO|Setting lport 1693689c-371b-40fb-8153-5313e926d910 ovn-installed in OVS
Nov 25 16:37:22 compute-0 ovn_controller[153477]: 2025-11-25T16:37:22Z|00449|binding|INFO|Setting lport 1693689c-371b-40fb-8153-5313e926d910 up in Southbound
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.078 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.095 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ff1993ea-90a7-420f-81ce-a0a5e9297c7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.101 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0d6fad8c-a2a6-45a4-ad3c-ab53941ac990]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:22 compute-0 NetworkManager[48891]: <info>  [1764088642.1022] manager: (tap0c61a44f-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/195)
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.150 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c5a5d3f9-13cf-4b60-8b05-cb316dd26450]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.154 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[7eabc477-2d1f-4188-9798-65a7dd9f952e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:22 compute-0 NetworkManager[48891]: <info>  [1764088642.1770] device (tap0c61a44f-b0): carrier: link connected
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.183 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8111d2fe-66dc-45ed-a409-7016bb2e8704]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.189 254096 DEBUG oslo_concurrency.processutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8ed9342e-e179-467c-993f-a92f2f7b0dff/disk.config 8ed9342e-e179-467c-993f-a92f2f7b0dff_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.205s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.189 254096 INFO nova.virt.libvirt.driver [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Deleting local config drive /var/lib/nova/instances/8ed9342e-e179-467c-993f-a92f2f7b0dff/disk.config because it was imported into RBD.
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.204 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[af5a09f1-ce4d-46a7-9f3f-7eefbfb9bc65]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0c61a44f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:38:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 129], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 509976, 'reachable_time': 16458, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311423, 'error': None, 'target': 'ovnmeta-0c61a44f-bcff-4141-9691-0b0cd16e5793', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.222 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2b055186-d306-4e26-8522-2a1f8a8cb07f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecf:385d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 509976, 'tstamp': 509976}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311441, 'error': None, 'target': 'ovnmeta-0c61a44f-bcff-4141-9691-0b0cd16e5793', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.240 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[350fbc43-3961-4671-b5be-aa232fc0aaf5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0c61a44f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:38:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 129], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 509976, 'reachable_time': 16458, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311444, 'error': None, 'target': 'ovnmeta-0c61a44f-bcff-4141-9691-0b0cd16e5793', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:22 compute-0 NetworkManager[48891]: <info>  [1764088642.2535] manager: (tap27bf7a08-6d): new Tun device (/org/freedesktop/NetworkManager/Devices/196)
Nov 25 16:37:22 compute-0 kernel: tap27bf7a08-6d: entered promiscuous mode
Nov 25 16:37:22 compute-0 ovn_controller[153477]: 2025-11-25T16:37:22Z|00450|binding|INFO|Claiming lport 27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc for this chassis.
Nov 25 16:37:22 compute-0 ovn_controller[153477]: 2025-11-25T16:37:22Z|00451|binding|INFO|27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc: Claiming fa:16:3e:fc:e5:5c 10.100.0.13
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.256 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:22 compute-0 NetworkManager[48891]: <info>  [1764088642.2664] device (tap27bf7a08-6d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:37:22 compute-0 NetworkManager[48891]: <info>  [1764088642.2675] device (tap27bf7a08-6d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:37:22 compute-0 ovn_controller[153477]: 2025-11-25T16:37:22Z|00452|binding|INFO|Setting lport 27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc ovn-installed in OVS
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.273 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.275 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:22 compute-0 ovn_controller[153477]: 2025-11-25T16:37:22Z|00453|binding|INFO|Setting lport 27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc up in Southbound
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.288 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:e5:5c 10.100.0.13'], port_security=['fa:16:3e:fc:e5:5c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '8ed9342e-e179-467c-993f-a92f2f7b0dff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0816ae24-275c-455e-a549-929f4eb756e7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8bd01e0913564ac783fae350d6861e24', 'neutron:revision_number': '2', 'neutron:security_group_ids': '572f99e3-d1c9-4515-9799-c37da094cf6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=24fed12c-3cf8-4d51-9b64-c89f82ee1964, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.287 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6e8b7de0-704d-480f-9673-c4b14b306a46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:22 compute-0 systemd-machined[216343]: New machine qemu-60-instance-00000033.
Nov 25 16:37:22 compute-0 systemd[1]: Started Virtual Machine qemu-60-instance-00000033.
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.361 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[772eb7f6-9ee6-422b-9945-1b25cfd1ea17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.364 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0c61a44f-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.364 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.365 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0c61a44f-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.366 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:22 compute-0 NetworkManager[48891]: <info>  [1764088642.3678] manager: (tap0c61a44f-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/197)
Nov 25 16:37:22 compute-0 kernel: tap0c61a44f-b0: entered promiscuous mode
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.369 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.374 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0c61a44f-b0, col_values=(('external_ids', {'iface-id': 'eafb5caa-fac0-4eb9-b6f2-3dbe2ea21bda'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:37:22 compute-0 ovn_controller[153477]: 2025-11-25T16:37:22Z|00454|binding|INFO|Releasing lport eafb5caa-fac0-4eb9-b6f2-3dbe2ea21bda from this chassis (sb_readonly=1)
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.377 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.379 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0c61a44f-bcff-4141-9691-0b0cd16e5793.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0c61a44f-bcff-4141-9691-0b0cd16e5793.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.381 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a1224f08-9fef-4c52-8ecc-2e879b530722]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.382 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-0c61a44f-bcff-4141-9691-0b0cd16e5793
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/0c61a44f-bcff-4141-9691-0b0cd16e5793.pid.haproxy
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 0c61a44f-bcff-4141-9691-0b0cd16e5793
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:37:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:22.384 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0c61a44f-bcff-4141-9691-0b0cd16e5793', 'env', 'PROCESS_TAG=haproxy-0c61a44f-bcff-4141-9691-0b0cd16e5793', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0c61a44f-bcff-4141-9691-0b0cd16e5793.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.394 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:22 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:37:22 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/352025268' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.549 254096 DEBUG oslo_concurrency.processutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.552 254096 DEBUG nova.virt.libvirt.vif [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:37:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1046040473',display_name='tempest-DeleteServersTestJSON-server-1046040473',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1046040473',id=52,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff5d31ffc7934838a461d6ac8aa546c9',ramdisk_id='',reservation_id='r-ia105ljl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-150185898',owner_user_name='tempest-DeleteServersTestJSON-1
50185898-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:37:15Z,user_data=None,user_id='788efa7ba65347b69663f4ea7ba4cd9d',uuid=497caf1f-53fe-425d-8e5c-10b2f0a2506d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca", "address": "fa:16:3e:b0:76:f8", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd90f4f5a-3c", "ovs_interfaceid": "d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.553 254096 DEBUG nova.network.os_vif_util [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converting VIF {"id": "d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca", "address": "fa:16:3e:b0:76:f8", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd90f4f5a-3c", "ovs_interfaceid": "d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.554 254096 DEBUG nova.network.os_vif_util [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:76:f8,bridge_name='br-int',has_traffic_filtering=True,id=d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd90f4f5a-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.556 254096 DEBUG nova.objects.instance [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 497caf1f-53fe-425d-8e5c-10b2f0a2506d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.572 254096 DEBUG nova.virt.libvirt.driver [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:37:22 compute-0 nova_compute[254092]:   <uuid>497caf1f-53fe-425d-8e5c-10b2f0a2506d</uuid>
Nov 25 16:37:22 compute-0 nova_compute[254092]:   <name>instance-00000034</name>
Nov 25 16:37:22 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:37:22 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:37:22 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:37:22 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:       <nova:name>tempest-DeleteServersTestJSON-server-1046040473</nova:name>
Nov 25 16:37:22 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:37:21</nova:creationTime>
Nov 25 16:37:22 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:37:22 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:37:22 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:37:22 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:37:22 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:37:22 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:37:22 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:37:22 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:37:22 compute-0 nova_compute[254092]:         <nova:user uuid="788efa7ba65347b69663f4ea7ba4cd9d">tempest-DeleteServersTestJSON-150185898-project-member</nova:user>
Nov 25 16:37:22 compute-0 nova_compute[254092]:         <nova:project uuid="ff5d31ffc7934838a461d6ac8aa546c9">tempest-DeleteServersTestJSON-150185898</nova:project>
Nov 25 16:37:22 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:37:22 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:37:22 compute-0 nova_compute[254092]:         <nova:port uuid="d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca">
Nov 25 16:37:22 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:37:22 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:37:22 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:37:22 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:37:22 compute-0 nova_compute[254092]:     <system>
Nov 25 16:37:22 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:37:22 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:37:22 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:37:22 compute-0 nova_compute[254092]:       <entry name="serial">497caf1f-53fe-425d-8e5c-10b2f0a2506d</entry>
Nov 25 16:37:22 compute-0 nova_compute[254092]:       <entry name="uuid">497caf1f-53fe-425d-8e5c-10b2f0a2506d</entry>
Nov 25 16:37:22 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     </system>
Nov 25 16:37:22 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:37:22 compute-0 nova_compute[254092]:   <os>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:   </os>
Nov 25 16:37:22 compute-0 nova_compute[254092]:   <features>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:   </features>
Nov 25 16:37:22 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:37:22 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:37:22 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:37:22 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:37:22 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:37:22 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/497caf1f-53fe-425d-8e5c-10b2f0a2506d_disk">
Nov 25 16:37:22 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:       </source>
Nov 25 16:37:22 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:37:22 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:37:22 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:37:22 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/497caf1f-53fe-425d-8e5c-10b2f0a2506d_disk.config">
Nov 25 16:37:22 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:       </source>
Nov 25 16:37:22 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:37:22 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:37:22 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:37:22 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:b0:76:f8"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:       <target dev="tapd90f4f5a-3c"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:37:22 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/497caf1f-53fe-425d-8e5c-10b2f0a2506d/console.log" append="off"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     <video>
Nov 25 16:37:22 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     </video>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:37:22 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:37:22 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:37:22 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:37:22 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:37:22 compute-0 nova_compute[254092]: </domain>
Nov 25 16:37:22 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.579 254096 DEBUG nova.compute.manager [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Preparing to wait for external event network-vif-plugged-d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.579 254096 DEBUG oslo_concurrency.lockutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "497caf1f-53fe-425d-8e5c-10b2f0a2506d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.580 254096 DEBUG oslo_concurrency.lockutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "497caf1f-53fe-425d-8e5c-10b2f0a2506d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.580 254096 DEBUG oslo_concurrency.lockutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "497caf1f-53fe-425d-8e5c-10b2f0a2506d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.581 254096 DEBUG nova.virt.libvirt.vif [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:37:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1046040473',display_name='tempest-DeleteServersTestJSON-server-1046040473',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1046040473',id=52,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff5d31ffc7934838a461d6ac8aa546c9',ramdisk_id='',reservation_id='r-ia105ljl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-150185898',owner_user_name='tempest-DeleteServersTestJSON-150185898-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:37:15Z,user_data=None,user_id='788efa7ba65347b69663f4ea7ba4cd9d',uuid=497caf1f-53fe-425d-8e5c-10b2f0a2506d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca", "address": "fa:16:3e:b0:76:f8", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd90f4f5a-3c", "ovs_interfaceid": "d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.581 254096 DEBUG nova.network.os_vif_util [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converting VIF {"id": "d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca", "address": "fa:16:3e:b0:76:f8", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd90f4f5a-3c", "ovs_interfaceid": "d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.582 254096 DEBUG nova.network.os_vif_util [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:76:f8,bridge_name='br-int',has_traffic_filtering=True,id=d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd90f4f5a-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.583 254096 DEBUG os_vif [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:76:f8,bridge_name='br-int',has_traffic_filtering=True,id=d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd90f4f5a-3c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.583 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.584 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.584 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.588 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.588 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd90f4f5a-3c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.588 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd90f4f5a-3c, col_values=(('external_ids', {'iface-id': 'd90f4f5a-3cd7-4c5d-bf11-0e669fb736ca', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b0:76:f8', 'vm-uuid': '497caf1f-53fe-425d-8e5c-10b2f0a2506d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.590 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:22 compute-0 NetworkManager[48891]: <info>  [1764088642.5914] manager: (tapd90f4f5a-3c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/198)
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.593 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.598 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.599 254096 INFO os_vif [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:76:f8,bridge_name='br-int',has_traffic_filtering=True,id=d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd90f4f5a-3c')
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.715 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088642.7149215, d3356685-91bc-46b9-9b9f-87ffce31a4ab => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.716 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] VM Started (Lifecycle Event)
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.743 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.748 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088642.7169144, d3356685-91bc-46b9-9b9f-87ffce31a4ab => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.748 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] VM Paused (Lifecycle Event)
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.757 254096 DEBUG nova.virt.libvirt.driver [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.757 254096 DEBUG nova.virt.libvirt.driver [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.758 254096 DEBUG nova.virt.libvirt.driver [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] No VIF found with MAC fa:16:3e:b0:76:f8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.758 254096 INFO nova.virt.libvirt.driver [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Using config drive
Nov 25 16:37:22 compute-0 ceph-mon[74985]: pgmap v1524: 321 pgs: 321 active+clean; 180 MiB data, 524 MiB used, 59 GiB / 60 GiB avail; 72 KiB/s rd, 5.5 MiB/s wr, 113 op/s
Nov 25 16:37:22 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4052910765' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:37:22 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/352025268' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.810 254096 DEBUG nova.storage.rbd_utils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image 497caf1f-53fe-425d-8e5c-10b2f0a2506d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.826 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.832 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:37:22 compute-0 podman[311556]: 2025-11-25 16:37:22.763412903 +0000 UTC m=+0.032316087 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:37:22 compute-0 nova_compute[254092]: 2025-11-25 16:37:22.859 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:37:23 compute-0 nova_compute[254092]: 2025-11-25 16:37:23.002 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:23 compute-0 nova_compute[254092]: 2025-11-25 16:37:23.092 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088643.0918915, 8ed9342e-e179-467c-993f-a92f2f7b0dff => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:37:23 compute-0 nova_compute[254092]: 2025-11-25 16:37:23.092 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] VM Started (Lifecycle Event)
Nov 25 16:37:23 compute-0 nova_compute[254092]: 2025-11-25 16:37:23.119 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:37:23 compute-0 nova_compute[254092]: 2025-11-25 16:37:23.125 254096 DEBUG nova.compute.manager [req-bf9a1555-d3bd-4416-a483-e566e9b45baf req-2f10d1c2-4194-4ff8-967a-3144f035c5d0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Received event network-vif-plugged-1693689c-371b-40fb-8153-5313e926d910 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:37:23 compute-0 nova_compute[254092]: 2025-11-25 16:37:23.126 254096 DEBUG oslo_concurrency.lockutils [req-bf9a1555-d3bd-4416-a483-e566e9b45baf req-2f10d1c2-4194-4ff8-967a-3144f035c5d0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "d3356685-91bc-46b9-9b9f-87ffce31a4ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:37:23 compute-0 nova_compute[254092]: 2025-11-25 16:37:23.126 254096 DEBUG oslo_concurrency.lockutils [req-bf9a1555-d3bd-4416-a483-e566e9b45baf req-2f10d1c2-4194-4ff8-967a-3144f035c5d0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d3356685-91bc-46b9-9b9f-87ffce31a4ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:37:23 compute-0 nova_compute[254092]: 2025-11-25 16:37:23.127 254096 DEBUG oslo_concurrency.lockutils [req-bf9a1555-d3bd-4416-a483-e566e9b45baf req-2f10d1c2-4194-4ff8-967a-3144f035c5d0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d3356685-91bc-46b9-9b9f-87ffce31a4ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:37:23 compute-0 nova_compute[254092]: 2025-11-25 16:37:23.127 254096 DEBUG nova.compute.manager [req-bf9a1555-d3bd-4416-a483-e566e9b45baf req-2f10d1c2-4194-4ff8-967a-3144f035c5d0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Processing event network-vif-plugged-1693689c-371b-40fb-8153-5313e926d910 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:37:23 compute-0 nova_compute[254092]: 2025-11-25 16:37:23.128 254096 DEBUG nova.compute.manager [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:37:23 compute-0 nova_compute[254092]: 2025-11-25 16:37:23.133 254096 DEBUG nova.virt.libvirt.driver [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:37:23 compute-0 nova_compute[254092]: 2025-11-25 16:37:23.134 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088643.0927892, 8ed9342e-e179-467c-993f-a92f2f7b0dff => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:37:23 compute-0 nova_compute[254092]: 2025-11-25 16:37:23.134 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] VM Paused (Lifecycle Event)
Nov 25 16:37:23 compute-0 nova_compute[254092]: 2025-11-25 16:37:23.139 254096 INFO nova.virt.libvirt.driver [-] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Instance spawned successfully.
Nov 25 16:37:23 compute-0 nova_compute[254092]: 2025-11-25 16:37:23.139 254096 DEBUG nova.virt.libvirt.driver [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:37:23 compute-0 podman[311556]: 2025-11-25 16:37:23.154123502 +0000 UTC m=+0.423026666 container create f9fd7f0c450a559eb99daa3c4e9a7352bab1af0831ec4c8ea0d3f03c8f107440 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0c61a44f-bcff-4141-9691-0b0cd16e5793, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS)
Nov 25 16:37:23 compute-0 nova_compute[254092]: 2025-11-25 16:37:23.156 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:37:23 compute-0 nova_compute[254092]: 2025-11-25 16:37:23.163 254096 DEBUG nova.virt.libvirt.driver [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:37:23 compute-0 nova_compute[254092]: 2025-11-25 16:37:23.164 254096 DEBUG nova.virt.libvirt.driver [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:37:23 compute-0 nova_compute[254092]: 2025-11-25 16:37:23.165 254096 DEBUG nova.virt.libvirt.driver [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:37:23 compute-0 nova_compute[254092]: 2025-11-25 16:37:23.165 254096 DEBUG nova.virt.libvirt.driver [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:37:23 compute-0 nova_compute[254092]: 2025-11-25 16:37:23.166 254096 DEBUG nova.virt.libvirt.driver [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:37:23 compute-0 nova_compute[254092]: 2025-11-25 16:37:23.166 254096 DEBUG nova.virt.libvirt.driver [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:37:23 compute-0 nova_compute[254092]: 2025-11-25 16:37:23.171 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:37:23 compute-0 nova_compute[254092]: 2025-11-25 16:37:23.203 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:37:23 compute-0 nova_compute[254092]: 2025-11-25 16:37:23.204 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088643.132422, d3356685-91bc-46b9-9b9f-87ffce31a4ab => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:37:23 compute-0 nova_compute[254092]: 2025-11-25 16:37:23.204 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] VM Resumed (Lifecycle Event)
Nov 25 16:37:23 compute-0 nova_compute[254092]: 2025-11-25 16:37:23.227 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:37:23 compute-0 nova_compute[254092]: 2025-11-25 16:37:23.233 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:37:23 compute-0 systemd[1]: Started libpod-conmon-f9fd7f0c450a559eb99daa3c4e9a7352bab1af0831ec4c8ea0d3f03c8f107440.scope.
Nov 25 16:37:23 compute-0 nova_compute[254092]: 2025-11-25 16:37:23.252 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:37:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:37:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8389ed7fca4d6f0aa03377187ba2932fde612ac6153d8db17d6a663aaf209d6a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:37:23 compute-0 nova_compute[254092]: 2025-11-25 16:37:23.309 254096 INFO nova.compute.manager [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Took 12.96 seconds to spawn the instance on the hypervisor.
Nov 25 16:37:23 compute-0 nova_compute[254092]: 2025-11-25 16:37:23.311 254096 DEBUG nova.compute.manager [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:37:23 compute-0 podman[311556]: 2025-11-25 16:37:23.403513138 +0000 UTC m=+0.672416322 container init f9fd7f0c450a559eb99daa3c4e9a7352bab1af0831ec4c8ea0d3f03c8f107440 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0c61a44f-bcff-4141-9691-0b0cd16e5793, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 16:37:23 compute-0 podman[311556]: 2025-11-25 16:37:23.40910703 +0000 UTC m=+0.678010194 container start f9fd7f0c450a559eb99daa3c4e9a7352bab1af0831ec4c8ea0d3f03c8f107440 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0c61a44f-bcff-4141-9691-0b0cd16e5793, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 16:37:23 compute-0 neutron-haproxy-ovnmeta-0c61a44f-bcff-4141-9691-0b0cd16e5793[311615]: [NOTICE]   (311619) : New worker (311621) forked
Nov 25 16:37:23 compute-0 neutron-haproxy-ovnmeta-0c61a44f-bcff-4141-9691-0b0cd16e5793[311615]: [NOTICE]   (311619) : Loading success.
Nov 25 16:37:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1525: 321 pgs: 321 active+clean; 180 MiB data, 524 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 4.8 MiB/s wr, 80 op/s
Nov 25 16:37:23 compute-0 nova_compute[254092]: 2025-11-25 16:37:23.474 254096 INFO nova.compute.manager [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Took 14.44 seconds to build instance.
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.539 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc in datapath 0816ae24-275c-455e-a549-929f4eb756e7 unbound from our chassis
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.542 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0816ae24-275c-455e-a549-929f4eb756e7
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.553 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[446d1518-9602-4a47-a8a9-1e4c27aefad1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.554 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0816ae24-21 in ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.556 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0816ae24-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.556 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[753ed2c5-ae78-489e-9855-3ebd384764ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.557 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[aa141f45-6dfe-4f76-b386-2c64e1366428]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.570 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[bb227fc1-1065-42d4-a686-6694c2c6b2e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.584 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5e85e3fc-c3fe-4fd0-93a2-31022d4669c6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:23 compute-0 nova_compute[254092]: 2025-11-25 16:37:23.610 254096 DEBUG oslo_concurrency.lockutils [None req-9f080f56-51bb-480a-a73a-e9a5ce89e4b3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Lock "d3356685-91bc-46b9-9b9f-87ffce31a4ab" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.614 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[13c12584-224b-4ef6-94a3-a309aba64498]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:23 compute-0 NetworkManager[48891]: <info>  [1764088643.6216] manager: (tap0816ae24-20): new Veth device (/org/freedesktop/NetworkManager/Devices/199)
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.623 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9e8b274a-ed97-4cb9-a79d-2cbf2d769fc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.656 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c5609533-791b-4fec-823e-79fbdc8e6372]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.659 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e2b31990-d168-4903-a84c-1bf064d0eee7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:23 compute-0 NetworkManager[48891]: <info>  [1764088643.6827] device (tap0816ae24-20): carrier: link connected
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.688 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a5107ba1-eadc-4f9c-bcf5-da40d5ffb499]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.706 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a0cc8ee5-9f60-4cb0-9f5f-4219d8cfba73]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0816ae24-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:52:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 131], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 510127, 'reachable_time': 24726, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311643, 'error': None, 'target': 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.723 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ddc555e1-19e9-43b4-976e-95b9a055a347]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feca:524c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 510127, 'tstamp': 510127}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311644, 'error': None, 'target': 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.741 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8794fc7a-bfd3-4a6e-a919-b34914b29599]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0816ae24-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:52:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 131], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 510127, 'reachable_time': 24726, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311645, 'error': None, 'target': 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.774 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1099388a-8e4b-4708-9708-09a7224ff354]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.836 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4e3fbd25-824e-4ad7-bc3d-693f32e028ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.839 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0816ae24-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.839 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.840 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0816ae24-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:37:23 compute-0 kernel: tap0816ae24-20: entered promiscuous mode
Nov 25 16:37:23 compute-0 NetworkManager[48891]: <info>  [1764088643.8730] manager: (tap0816ae24-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/200)
Nov 25 16:37:23 compute-0 nova_compute[254092]: 2025-11-25 16:37:23.871 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.878 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0816ae24-20, col_values=(('external_ids', {'iface-id': '6381e38a-44c0-40e8-bec6-3ddd886bfc49'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:37:23 compute-0 nova_compute[254092]: 2025-11-25 16:37:23.879 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:23 compute-0 ovn_controller[153477]: 2025-11-25T16:37:23Z|00455|binding|INFO|Releasing lport 6381e38a-44c0-40e8-bec6-3ddd886bfc49 from this chassis (sb_readonly=0)
Nov 25 16:37:23 compute-0 nova_compute[254092]: 2025-11-25 16:37:23.900 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:23 compute-0 nova_compute[254092]: 2025-11-25 16:37:23.906 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.907 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0816ae24-275c-455e-a549-929f4eb756e7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0816ae24-275c-455e-a549-929f4eb756e7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.908 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9bebb218-065d-4850-b6f2-4d5bc777db50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.909 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-0816ae24-275c-455e-a549-929f4eb756e7
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/0816ae24-275c-455e-a549-929f4eb756e7.pid.haproxy
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 0816ae24-275c-455e-a549-929f4eb756e7
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:37:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:23.911 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'env', 'PROCESS_TAG=haproxy-0816ae24-275c-455e-a549-929f4eb756e7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0816ae24-275c-455e-a549-929f4eb756e7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:37:23 compute-0 nova_compute[254092]: 2025-11-25 16:37:23.931 254096 INFO nova.virt.libvirt.driver [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Creating config drive at /var/lib/nova/instances/497caf1f-53fe-425d-8e5c-10b2f0a2506d/disk.config
Nov 25 16:37:23 compute-0 nova_compute[254092]: 2025-11-25 16:37:23.936 254096 DEBUG oslo_concurrency.processutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/497caf1f-53fe-425d-8e5c-10b2f0a2506d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp22yhnsr3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:37:24 compute-0 nova_compute[254092]: 2025-11-25 16:37:24.081 254096 DEBUG oslo_concurrency.processutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/497caf1f-53fe-425d-8e5c-10b2f0a2506d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp22yhnsr3" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:37:24 compute-0 nova_compute[254092]: 2025-11-25 16:37:24.103 254096 DEBUG nova.storage.rbd_utils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image 497caf1f-53fe-425d-8e5c-10b2f0a2506d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:37:24 compute-0 nova_compute[254092]: 2025-11-25 16:37:24.114 254096 DEBUG oslo_concurrency.processutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/497caf1f-53fe-425d-8e5c-10b2f0a2506d/disk.config 497caf1f-53fe-425d-8e5c-10b2f0a2506d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:37:24 compute-0 podman[311715]: 2025-11-25 16:37:24.28122803 +0000 UTC m=+0.022400659 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:37:24 compute-0 nova_compute[254092]: 2025-11-25 16:37:24.474 254096 DEBUG nova.network.neutron [req-29d319a5-472b-4609-87d6-02c3ecab0422 req-0923fd27-247d-45be-ad47-e1d05669bcd9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Updated VIF entry in instance network info cache for port d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:37:24 compute-0 nova_compute[254092]: 2025-11-25 16:37:24.475 254096 DEBUG nova.network.neutron [req-29d319a5-472b-4609-87d6-02c3ecab0422 req-0923fd27-247d-45be-ad47-e1d05669bcd9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Updating instance_info_cache with network_info: [{"id": "d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca", "address": "fa:16:3e:b0:76:f8", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd90f4f5a-3c", "ovs_interfaceid": "d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:37:24 compute-0 nova_compute[254092]: 2025-11-25 16:37:24.489 254096 DEBUG oslo_concurrency.lockutils [req-29d319a5-472b-4609-87d6-02c3ecab0422 req-0923fd27-247d-45be-ad47-e1d05669bcd9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-497caf1f-53fe-425d-8e5c-10b2f0a2506d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:37:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:37:24 compute-0 podman[311715]: 2025-11-25 16:37:24.910233965 +0000 UTC m=+0.651406574 container create 0ab9faacd1c3894c4dce8c20f59716038307427870e37793dcc381527599cbe1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 25 16:37:24 compute-0 ceph-mon[74985]: pgmap v1525: 321 pgs: 321 active+clean; 180 MiB data, 524 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 4.8 MiB/s wr, 80 op/s
Nov 25 16:37:24 compute-0 systemd[1]: Started libpod-conmon-0ab9faacd1c3894c4dce8c20f59716038307427870e37793dcc381527599cbe1.scope.
Nov 25 16:37:25 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:37:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc0b7ea57d4fc30664551c01fa106099d478380f35527ead310f6fbe933d4f2a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:37:25 compute-0 podman[311715]: 2025-11-25 16:37:25.041027903 +0000 UTC m=+0.782200542 container init 0ab9faacd1c3894c4dce8c20f59716038307427870e37793dcc381527599cbe1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 25 16:37:25 compute-0 podman[311715]: 2025-11-25 16:37:25.047577381 +0000 UTC m=+0.788749990 container start 0ab9faacd1c3894c4dce8c20f59716038307427870e37793dcc381527599cbe1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 16:37:25 compute-0 neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7[311734]: [NOTICE]   (311738) : New worker (311740) forked
Nov 25 16:37:25 compute-0 neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7[311734]: [NOTICE]   (311738) : Loading success.
Nov 25 16:37:25 compute-0 nova_compute[254092]: 2025-11-25 16:37:25.208 254096 DEBUG nova.compute.manager [req-9ce1e26b-16d5-4c84-a594-35000c408bb3 req-cd640ef5-45cc-4c69-ab1f-165686b99ef2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Received event network-vif-plugged-1693689c-371b-40fb-8153-5313e926d910 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:37:25 compute-0 nova_compute[254092]: 2025-11-25 16:37:25.208 254096 DEBUG oslo_concurrency.lockutils [req-9ce1e26b-16d5-4c84-a594-35000c408bb3 req-cd640ef5-45cc-4c69-ab1f-165686b99ef2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "d3356685-91bc-46b9-9b9f-87ffce31a4ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:37:25 compute-0 nova_compute[254092]: 2025-11-25 16:37:25.209 254096 DEBUG oslo_concurrency.lockutils [req-9ce1e26b-16d5-4c84-a594-35000c408bb3 req-cd640ef5-45cc-4c69-ab1f-165686b99ef2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d3356685-91bc-46b9-9b9f-87ffce31a4ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:37:25 compute-0 nova_compute[254092]: 2025-11-25 16:37:25.209 254096 DEBUG oslo_concurrency.lockutils [req-9ce1e26b-16d5-4c84-a594-35000c408bb3 req-cd640ef5-45cc-4c69-ab1f-165686b99ef2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d3356685-91bc-46b9-9b9f-87ffce31a4ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:37:25 compute-0 nova_compute[254092]: 2025-11-25 16:37:25.209 254096 DEBUG nova.compute.manager [req-9ce1e26b-16d5-4c84-a594-35000c408bb3 req-cd640ef5-45cc-4c69-ab1f-165686b99ef2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] No waiting events found dispatching network-vif-plugged-1693689c-371b-40fb-8153-5313e926d910 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:37:25 compute-0 nova_compute[254092]: 2025-11-25 16:37:25.209 254096 WARNING nova.compute.manager [req-9ce1e26b-16d5-4c84-a594-35000c408bb3 req-cd640ef5-45cc-4c69-ab1f-165686b99ef2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Received unexpected event network-vif-plugged-1693689c-371b-40fb-8153-5313e926d910 for instance with vm_state active and task_state None.
Nov 25 16:37:25 compute-0 nova_compute[254092]: 2025-11-25 16:37:25.423 254096 DEBUG oslo_concurrency.processutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/497caf1f-53fe-425d-8e5c-10b2f0a2506d/disk.config 497caf1f-53fe-425d-8e5c-10b2f0a2506d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.309s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:37:25 compute-0 nova_compute[254092]: 2025-11-25 16:37:25.424 254096 INFO nova.virt.libvirt.driver [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Deleting local config drive /var/lib/nova/instances/497caf1f-53fe-425d-8e5c-10b2f0a2506d/disk.config because it was imported into RBD.
Nov 25 16:37:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1526: 321 pgs: 321 active+clean; 180 MiB data, 524 MiB used, 59 GiB / 60 GiB avail; 516 KiB/s rd, 4.8 MiB/s wr, 101 op/s
Nov 25 16:37:25 compute-0 kernel: tapd90f4f5a-3c: entered promiscuous mode
Nov 25 16:37:25 compute-0 NetworkManager[48891]: <info>  [1764088645.4997] manager: (tapd90f4f5a-3c): new Tun device (/org/freedesktop/NetworkManager/Devices/201)
Nov 25 16:37:25 compute-0 ovn_controller[153477]: 2025-11-25T16:37:25Z|00456|binding|INFO|Claiming lport d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca for this chassis.
Nov 25 16:37:25 compute-0 ovn_controller[153477]: 2025-11-25T16:37:25Z|00457|binding|INFO|d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca: Claiming fa:16:3e:b0:76:f8 10.100.0.10
Nov 25 16:37:25 compute-0 nova_compute[254092]: 2025-11-25 16:37:25.514 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:25 compute-0 ovn_controller[153477]: 2025-11-25T16:37:25Z|00458|binding|INFO|Setting lport d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca ovn-installed in OVS
Nov 25 16:37:25 compute-0 nova_compute[254092]: 2025-11-25 16:37:25.536 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:25 compute-0 nova_compute[254092]: 2025-11-25 16:37:25.539 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:25 compute-0 systemd-udevd[311762]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:37:25 compute-0 ovn_controller[153477]: 2025-11-25T16:37:25Z|00459|binding|INFO|Setting lport d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca up in Southbound
Nov 25 16:37:25 compute-0 systemd-machined[216343]: New machine qemu-61-instance-00000034.
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.562 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:76:f8 10.100.0.10'], port_security=['fa:16:3e:b0:76:f8 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '497caf1f-53fe-425d-8e5c-10b2f0a2506d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e469a950-7044-48bc-a261-bc9effe2c2aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff5d31ffc7934838a461d6ac8aa546c9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dcbb2958-ef75-485e-8580-da48580c7577', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff181adb-f502-4aa2-aaa5-e1f9a56ea733, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.564 163338 INFO neutron.agent.ovn.metadata.agent [-] Port d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca in datapath e469a950-7044-48bc-a261-bc9effe2c2aa bound to our chassis
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.565 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e469a950-7044-48bc-a261-bc9effe2c2aa
Nov 25 16:37:25 compute-0 systemd[1]: Started Virtual Machine qemu-61-instance-00000034.
Nov 25 16:37:25 compute-0 NetworkManager[48891]: <info>  [1764088645.5778] device (tapd90f4f5a-3c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:37:25 compute-0 NetworkManager[48891]: <info>  [1764088645.5799] device (tapd90f4f5a-3c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.583 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[64bf9c1c-425e-4dab-bcbd-565a8e39a6a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.584 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape469a950-71 in ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.586 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape469a950-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.587 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1c251e9a-eadf-4b03-b02c-34043ac9251f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.591 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b8de04c3-ca24-444c-b60f-20546bb649a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.606 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[8a484232-7a66-4c13-bfb4-72bea70da16d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.644 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[79ae4e10-f739-45be-95b1-b3ccb3f6d245]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.685 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1cb224a6-c42c-481c-a794-70121e6fe2a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:25 compute-0 NetworkManager[48891]: <info>  [1764088645.6969] manager: (tape469a950-70): new Veth device (/org/freedesktop/NetworkManager/Devices/202)
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.698 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e95c51ba-775c-48b7-a2f9-1260cd58d846]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.737 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6de3f445-27e5-4646-bd06-3d0250395d1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.741 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a34a1376-b08f-47bd-9300-a9126001f190]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:25 compute-0 NetworkManager[48891]: <info>  [1764088645.7656] device (tape469a950-70): carrier: link connected
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.772 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[80950a39-7142-4bf0-9932-61e445f45b87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.789 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d244d9ff-c3c1-4e4d-9eee-04733f71de4e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape469a950-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:78:e4:a6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 133], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 510335, 'reachable_time': 31348, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311796, 'error': None, 'target': 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.806 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[992662e5-9472-46ed-a568-206f39ff3c89]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe78:e4a6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 510335, 'tstamp': 510335}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311797, 'error': None, 'target': 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.824 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e2da4462-fd3b-4420-bfea-4afeabeb89b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape469a950-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:78:e4:a6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 133], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 510335, 'reachable_time': 31348, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311798, 'error': None, 'target': 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.864 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[177f6cf3-6854-4ebf-bd1b-bcb641c62646]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.954 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5b51d92a-903e-402a-87d1-6726d3922d38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.956 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape469a950-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.957 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.958 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape469a950-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:37:25 compute-0 nova_compute[254092]: 2025-11-25 16:37:25.960 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:25 compute-0 NetworkManager[48891]: <info>  [1764088645.9620] manager: (tape469a950-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/203)
Nov 25 16:37:25 compute-0 kernel: tape469a950-70: entered promiscuous mode
Nov 25 16:37:25 compute-0 nova_compute[254092]: 2025-11-25 16:37:25.965 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.967 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape469a950-70, col_values=(('external_ids', {'iface-id': 'd55c2366-e0b4-4cb8-86a6-03b9430fb138'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:37:25 compute-0 nova_compute[254092]: 2025-11-25 16:37:25.968 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:25 compute-0 ovn_controller[153477]: 2025-11-25T16:37:25Z|00460|binding|INFO|Releasing lport d55c2366-e0b4-4cb8-86a6-03b9430fb138 from this chassis (sb_readonly=0)
Nov 25 16:37:25 compute-0 nova_compute[254092]: 2025-11-25 16:37:25.971 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.972 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e469a950-7044-48bc-a261-bc9effe2c2aa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e469a950-7044-48bc-a261-bc9effe2c2aa.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.973 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cf3357b2-aa20-41d3-9cb2-1bed5709e9f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.974 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-e469a950-7044-48bc-a261-bc9effe2c2aa
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/e469a950-7044-48bc-a261-bc9effe2c2aa.pid.haproxy
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID e469a950-7044-48bc-a261-bc9effe2c2aa
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:37:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:25.975 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'env', 'PROCESS_TAG=haproxy-e469a950-7044-48bc-a261-bc9effe2c2aa', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e469a950-7044-48bc-a261-bc9effe2c2aa.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:37:25 compute-0 nova_compute[254092]: 2025-11-25 16:37:25.988 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:26 compute-0 nova_compute[254092]: 2025-11-25 16:37:26.047 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088646.0470612, 497caf1f-53fe-425d-8e5c-10b2f0a2506d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:37:26 compute-0 nova_compute[254092]: 2025-11-25 16:37:26.047 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] VM Started (Lifecycle Event)
Nov 25 16:37:26 compute-0 nova_compute[254092]: 2025-11-25 16:37:26.081 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:37:26 compute-0 nova_compute[254092]: 2025-11-25 16:37:26.086 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088646.0478446, 497caf1f-53fe-425d-8e5c-10b2f0a2506d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:37:26 compute-0 nova_compute[254092]: 2025-11-25 16:37:26.086 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] VM Paused (Lifecycle Event)
Nov 25 16:37:26 compute-0 nova_compute[254092]: 2025-11-25 16:37:26.113 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:37:26 compute-0 nova_compute[254092]: 2025-11-25 16:37:26.118 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:37:26 compute-0 nova_compute[254092]: 2025-11-25 16:37:26.143 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:37:26 compute-0 podman[311872]: 2025-11-25 16:37:26.394104591 +0000 UTC m=+0.058662621 container create 8ed97d98a6d8c8ff4fc6850b6df849b8a8c49c36c837c0327bf8875b87b215e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 16:37:26 compute-0 podman[311872]: 2025-11-25 16:37:26.359863542 +0000 UTC m=+0.024421592 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:37:26 compute-0 systemd[1]: Started libpod-conmon-8ed97d98a6d8c8ff4fc6850b6df849b8a8c49c36c837c0327bf8875b87b215e8.scope.
Nov 25 16:37:26 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:37:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4af3758759c7473d6d0c9e6e56bd70b29b7aef6fe03951135d48e66ecc34c2f1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:37:26 compute-0 podman[311872]: 2025-11-25 16:37:26.578822412 +0000 UTC m=+0.243380462 container init 8ed97d98a6d8c8ff4fc6850b6df849b8a8c49c36c837c0327bf8875b87b215e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:37:26 compute-0 podman[311872]: 2025-11-25 16:37:26.58647486 +0000 UTC m=+0.251032890 container start 8ed97d98a6d8c8ff4fc6850b6df849b8a8c49c36c837c0327bf8875b87b215e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 16:37:26 compute-0 NetworkManager[48891]: <info>  [1764088646.5913] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/204)
Nov 25 16:37:26 compute-0 NetworkManager[48891]: <info>  [1764088646.5929] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/205)
Nov 25 16:37:26 compute-0 nova_compute[254092]: 2025-11-25 16:37:26.590 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:26 compute-0 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[311886]: [NOTICE]   (311890) : New worker (311892) forked
Nov 25 16:37:26 compute-0 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[311886]: [NOTICE]   (311890) : Loading success.
Nov 25 16:37:26 compute-0 ovn_controller[153477]: 2025-11-25T16:37:26Z|00461|binding|INFO|Releasing lport eafb5caa-fac0-4eb9-b6f2-3dbe2ea21bda from this chassis (sb_readonly=0)
Nov 25 16:37:26 compute-0 ovn_controller[153477]: 2025-11-25T16:37:26Z|00462|binding|INFO|Releasing lport 6381e38a-44c0-40e8-bec6-3ddd886bfc49 from this chassis (sb_readonly=0)
Nov 25 16:37:26 compute-0 ovn_controller[153477]: 2025-11-25T16:37:26Z|00463|binding|INFO|Releasing lport d55c2366-e0b4-4cb8-86a6-03b9430fb138 from this chassis (sb_readonly=0)
Nov 25 16:37:26 compute-0 nova_compute[254092]: 2025-11-25 16:37:26.642 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:26 compute-0 ovn_controller[153477]: 2025-11-25T16:37:26Z|00464|binding|INFO|Releasing lport eafb5caa-fac0-4eb9-b6f2-3dbe2ea21bda from this chassis (sb_readonly=0)
Nov 25 16:37:26 compute-0 ovn_controller[153477]: 2025-11-25T16:37:26Z|00465|binding|INFO|Releasing lport 6381e38a-44c0-40e8-bec6-3ddd886bfc49 from this chassis (sb_readonly=0)
Nov 25 16:37:26 compute-0 ovn_controller[153477]: 2025-11-25T16:37:26Z|00466|binding|INFO|Releasing lport d55c2366-e0b4-4cb8-86a6-03b9430fb138 from this chassis (sb_readonly=0)
Nov 25 16:37:26 compute-0 nova_compute[254092]: 2025-11-25 16:37:26.674 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:27 compute-0 ceph-mon[74985]: pgmap v1526: 321 pgs: 321 active+clean; 180 MiB data, 524 MiB used, 59 GiB / 60 GiB avail; 516 KiB/s rd, 4.8 MiB/s wr, 101 op/s
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.378 254096 DEBUG nova.compute.manager [req-aadb180e-dcdc-4dc5-a72b-32a160730fc1 req-586dfcd5-e20e-4568-85a0-1a4b2ec0635f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Received event network-vif-plugged-27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.379 254096 DEBUG oslo_concurrency.lockutils [req-aadb180e-dcdc-4dc5-a72b-32a160730fc1 req-586dfcd5-e20e-4568-85a0-1a4b2ec0635f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8ed9342e-e179-467c-993f-a92f2f7b0dff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.379 254096 DEBUG oslo_concurrency.lockutils [req-aadb180e-dcdc-4dc5-a72b-32a160730fc1 req-586dfcd5-e20e-4568-85a0-1a4b2ec0635f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8ed9342e-e179-467c-993f-a92f2f7b0dff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.379 254096 DEBUG oslo_concurrency.lockutils [req-aadb180e-dcdc-4dc5-a72b-32a160730fc1 req-586dfcd5-e20e-4568-85a0-1a4b2ec0635f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8ed9342e-e179-467c-993f-a92f2f7b0dff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.380 254096 DEBUG nova.compute.manager [req-aadb180e-dcdc-4dc5-a72b-32a160730fc1 req-586dfcd5-e20e-4568-85a0-1a4b2ec0635f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Processing event network-vif-plugged-27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.380 254096 DEBUG nova.compute.manager [req-aadb180e-dcdc-4dc5-a72b-32a160730fc1 req-586dfcd5-e20e-4568-85a0-1a4b2ec0635f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Received event network-vif-plugged-27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.380 254096 DEBUG oslo_concurrency.lockutils [req-aadb180e-dcdc-4dc5-a72b-32a160730fc1 req-586dfcd5-e20e-4568-85a0-1a4b2ec0635f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8ed9342e-e179-467c-993f-a92f2f7b0dff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.381 254096 DEBUG oslo_concurrency.lockutils [req-aadb180e-dcdc-4dc5-a72b-32a160730fc1 req-586dfcd5-e20e-4568-85a0-1a4b2ec0635f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8ed9342e-e179-467c-993f-a92f2f7b0dff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.381 254096 DEBUG oslo_concurrency.lockutils [req-aadb180e-dcdc-4dc5-a72b-32a160730fc1 req-586dfcd5-e20e-4568-85a0-1a4b2ec0635f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8ed9342e-e179-467c-993f-a92f2f7b0dff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.381 254096 DEBUG nova.compute.manager [req-aadb180e-dcdc-4dc5-a72b-32a160730fc1 req-586dfcd5-e20e-4568-85a0-1a4b2ec0635f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] No waiting events found dispatching network-vif-plugged-27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.382 254096 WARNING nova.compute.manager [req-aadb180e-dcdc-4dc5-a72b-32a160730fc1 req-586dfcd5-e20e-4568-85a0-1a4b2ec0635f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Received unexpected event network-vif-plugged-27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc for instance with vm_state building and task_state spawning.
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.382 254096 DEBUG nova.compute.manager [req-aadb180e-dcdc-4dc5-a72b-32a160730fc1 req-586dfcd5-e20e-4568-85a0-1a4b2ec0635f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Received event network-vif-plugged-d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.382 254096 DEBUG oslo_concurrency.lockutils [req-aadb180e-dcdc-4dc5-a72b-32a160730fc1 req-586dfcd5-e20e-4568-85a0-1a4b2ec0635f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "497caf1f-53fe-425d-8e5c-10b2f0a2506d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.382 254096 DEBUG oslo_concurrency.lockutils [req-aadb180e-dcdc-4dc5-a72b-32a160730fc1 req-586dfcd5-e20e-4568-85a0-1a4b2ec0635f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "497caf1f-53fe-425d-8e5c-10b2f0a2506d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.383 254096 DEBUG oslo_concurrency.lockutils [req-aadb180e-dcdc-4dc5-a72b-32a160730fc1 req-586dfcd5-e20e-4568-85a0-1a4b2ec0635f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "497caf1f-53fe-425d-8e5c-10b2f0a2506d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.383 254096 DEBUG nova.compute.manager [req-aadb180e-dcdc-4dc5-a72b-32a160730fc1 req-586dfcd5-e20e-4568-85a0-1a4b2ec0635f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Processing event network-vif-plugged-d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.383 254096 DEBUG nova.compute.manager [req-aadb180e-dcdc-4dc5-a72b-32a160730fc1 req-586dfcd5-e20e-4568-85a0-1a4b2ec0635f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Received event network-changed-1693689c-371b-40fb-8153-5313e926d910 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.384 254096 DEBUG nova.compute.manager [req-aadb180e-dcdc-4dc5-a72b-32a160730fc1 req-586dfcd5-e20e-4568-85a0-1a4b2ec0635f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Refreshing instance network info cache due to event network-changed-1693689c-371b-40fb-8153-5313e926d910. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.384 254096 DEBUG oslo_concurrency.lockutils [req-aadb180e-dcdc-4dc5-a72b-32a160730fc1 req-586dfcd5-e20e-4568-85a0-1a4b2ec0635f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-d3356685-91bc-46b9-9b9f-87ffce31a4ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.384 254096 DEBUG oslo_concurrency.lockutils [req-aadb180e-dcdc-4dc5-a72b-32a160730fc1 req-586dfcd5-e20e-4568-85a0-1a4b2ec0635f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-d3356685-91bc-46b9-9b9f-87ffce31a4ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.384 254096 DEBUG nova.network.neutron [req-aadb180e-dcdc-4dc5-a72b-32a160730fc1 req-586dfcd5-e20e-4568-85a0-1a4b2ec0635f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Refreshing network info cache for port 1693689c-371b-40fb-8153-5313e926d910 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.385 254096 DEBUG nova.compute.manager [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.386 254096 DEBUG nova.compute.manager [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.395 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088647.3952847, 8ed9342e-e179-467c-993f-a92f2f7b0dff => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.396 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] VM Resumed (Lifecycle Event)
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.398 254096 DEBUG nova.virt.libvirt.driver [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.418 254096 DEBUG nova.virt.libvirt.driver [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.419 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.424 254096 INFO nova.virt.libvirt.driver [-] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Instance spawned successfully.
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.425 254096 DEBUG nova.virt.libvirt.driver [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.426 254096 INFO nova.virt.libvirt.driver [-] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Instance spawned successfully.
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.427 254096 DEBUG nova.virt.libvirt.driver [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.428 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:37:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1527: 321 pgs: 321 active+clean; 181 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 152 op/s
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.451 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.452 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088647.3954356, 497caf1f-53fe-425d-8e5c-10b2f0a2506d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.452 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] VM Resumed (Lifecycle Event)
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.455 254096 DEBUG nova.virt.libvirt.driver [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.456 254096 DEBUG nova.virt.libvirt.driver [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.456 254096 DEBUG nova.virt.libvirt.driver [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.457 254096 DEBUG nova.virt.libvirt.driver [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.457 254096 DEBUG nova.virt.libvirt.driver [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.458 254096 DEBUG nova.virt.libvirt.driver [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.463 254096 DEBUG nova.virt.libvirt.driver [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.463 254096 DEBUG nova.virt.libvirt.driver [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.464 254096 DEBUG nova.virt.libvirt.driver [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.465 254096 DEBUG nova.virt.libvirt.driver [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.465 254096 DEBUG nova.virt.libvirt.driver [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.465 254096 DEBUG nova.virt.libvirt.driver [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:37:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:27.496 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:37:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:27.497 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 16:37:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:27.498 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.501 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.507 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.511 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.526 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.590 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.691 254096 INFO nova.compute.manager [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Took 16.09 seconds to spawn the instance on the hypervisor.
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.692 254096 DEBUG nova.compute.manager [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.698 254096 INFO nova.compute.manager [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Took 12.20 seconds to spawn the instance on the hypervisor.
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.699 254096 DEBUG nova.compute.manager [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.795 254096 INFO nova.compute.manager [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Took 17.29 seconds to build instance.
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.799 254096 INFO nova.compute.manager [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Took 13.23 seconds to build instance.
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.827 254096 DEBUG oslo_concurrency.lockutils [None req-08248b5e-fdfb-487b-a8ca-3319b9dae617 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "8ed9342e-e179-467c-993f-a92f2f7b0dff" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.522s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:37:27 compute-0 nova_compute[254092]: 2025-11-25 16:37:27.830 254096 DEBUG oslo_concurrency.lockutils [None req-17f478ef-717a-418e-9c7d-25c42fb9f877 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "497caf1f-53fe-425d-8e5c-10b2f0a2506d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.345s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:37:28 compute-0 nova_compute[254092]: 2025-11-25 16:37:28.011 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:28 compute-0 ceph-mon[74985]: pgmap v1527: 321 pgs: 321 active+clean; 181 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 152 op/s
Nov 25 16:37:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1528: 321 pgs: 321 active+clean; 181 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 115 op/s
Nov 25 16:37:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:37:30 compute-0 nova_compute[254092]: 2025-11-25 16:37:30.730 254096 DEBUG nova.network.neutron [req-aadb180e-dcdc-4dc5-a72b-32a160730fc1 req-586dfcd5-e20e-4568-85a0-1a4b2ec0635f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Updated VIF entry in instance network info cache for port 1693689c-371b-40fb-8153-5313e926d910. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:37:30 compute-0 nova_compute[254092]: 2025-11-25 16:37:30.730 254096 DEBUG nova.network.neutron [req-aadb180e-dcdc-4dc5-a72b-32a160730fc1 req-586dfcd5-e20e-4568-85a0-1a4b2ec0635f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Updating instance_info_cache with network_info: [{"id": "1693689c-371b-40fb-8153-5313e926d910", "address": "fa:16:3e:fc:c2:fe", "network": {"id": "0c61a44f-bcff-4141-9691-0b0cd16e5793", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1067646278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8b9be1565d148a3ac487eacb391dc1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1693689c-37", "ovs_interfaceid": "1693689c-371b-40fb-8153-5313e926d910", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:37:30 compute-0 nova_compute[254092]: 2025-11-25 16:37:30.805 254096 DEBUG nova.compute.manager [req-5ca34b40-9f4f-4084-8774-05101eda0fb5 req-fe4ec86a-6880-4d5a-88a1-4495bf930c0c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Received event network-vif-plugged-d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:37:30 compute-0 nova_compute[254092]: 2025-11-25 16:37:30.805 254096 DEBUG oslo_concurrency.lockutils [req-5ca34b40-9f4f-4084-8774-05101eda0fb5 req-fe4ec86a-6880-4d5a-88a1-4495bf930c0c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "497caf1f-53fe-425d-8e5c-10b2f0a2506d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:37:30 compute-0 nova_compute[254092]: 2025-11-25 16:37:30.806 254096 DEBUG oslo_concurrency.lockutils [req-5ca34b40-9f4f-4084-8774-05101eda0fb5 req-fe4ec86a-6880-4d5a-88a1-4495bf930c0c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "497caf1f-53fe-425d-8e5c-10b2f0a2506d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:37:30 compute-0 nova_compute[254092]: 2025-11-25 16:37:30.806 254096 DEBUG oslo_concurrency.lockutils [req-5ca34b40-9f4f-4084-8774-05101eda0fb5 req-fe4ec86a-6880-4d5a-88a1-4495bf930c0c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "497caf1f-53fe-425d-8e5c-10b2f0a2506d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:37:30 compute-0 nova_compute[254092]: 2025-11-25 16:37:30.807 254096 DEBUG nova.compute.manager [req-5ca34b40-9f4f-4084-8774-05101eda0fb5 req-fe4ec86a-6880-4d5a-88a1-4495bf930c0c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] No waiting events found dispatching network-vif-plugged-d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:37:30 compute-0 nova_compute[254092]: 2025-11-25 16:37:30.807 254096 WARNING nova.compute.manager [req-5ca34b40-9f4f-4084-8774-05101eda0fb5 req-fe4ec86a-6880-4d5a-88a1-4495bf930c0c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Received unexpected event network-vif-plugged-d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca for instance with vm_state active and task_state None.
Nov 25 16:37:30 compute-0 ceph-mon[74985]: pgmap v1528: 321 pgs: 321 active+clean; 181 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 115 op/s
Nov 25 16:37:30 compute-0 nova_compute[254092]: 2025-11-25 16:37:30.862 254096 DEBUG oslo_concurrency.lockutils [req-aadb180e-dcdc-4dc5-a72b-32a160730fc1 req-586dfcd5-e20e-4568-85a0-1a4b2ec0635f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-d3356685-91bc-46b9-9b9f-87ffce31a4ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:37:31 compute-0 nova_compute[254092]: 2025-11-25 16:37:31.360 254096 DEBUG nova.compute.manager [None req-70d0c83f-8692-4d99-a158-3483f9dbc267 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:37:31 compute-0 nova_compute[254092]: 2025-11-25 16:37:31.421 254096 INFO nova.compute.manager [None req-70d0c83f-8692-4d99-a158-3483f9dbc267 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] instance snapshotting
Nov 25 16:37:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1529: 321 pgs: 321 active+clean; 181 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.7 MiB/s wr, 247 op/s
Nov 25 16:37:31 compute-0 nova_compute[254092]: 2025-11-25 16:37:31.692 254096 DEBUG oslo_concurrency.lockutils [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "497caf1f-53fe-425d-8e5c-10b2f0a2506d" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:37:31 compute-0 nova_compute[254092]: 2025-11-25 16:37:31.692 254096 DEBUG oslo_concurrency.lockutils [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "497caf1f-53fe-425d-8e5c-10b2f0a2506d" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:37:31 compute-0 nova_compute[254092]: 2025-11-25 16:37:31.692 254096 INFO nova.compute.manager [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Shelving
Nov 25 16:37:31 compute-0 nova_compute[254092]: 2025-11-25 16:37:31.716 254096 DEBUG nova.virt.libvirt.driver [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 16:37:31 compute-0 nova_compute[254092]: 2025-11-25 16:37:31.819 254096 INFO nova.virt.libvirt.driver [None req-70d0c83f-8692-4d99-a158-3483f9dbc267 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Beginning live snapshot process
Nov 25 16:37:32 compute-0 nova_compute[254092]: 2025-11-25 16:37:32.129 254096 DEBUG nova.virt.libvirt.imagebackend [None req-70d0c83f-8692-4d99-a158-3483f9dbc267 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] No parent info for 8b512c8e-2281-41de-a668-eb983e174ba0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 25 16:37:32 compute-0 nova_compute[254092]: 2025-11-25 16:37:32.452 254096 DEBUG nova.storage.rbd_utils [None req-70d0c83f-8692-4d99-a158-3483f9dbc267 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] creating snapshot(a48b4200ef294c0cb47c992fa8f0f5ce) on rbd image(8ed9342e-e179-467c-993f-a92f2f7b0dff_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 16:37:32 compute-0 nova_compute[254092]: 2025-11-25 16:37:32.604 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:32 compute-0 podman[311952]: 2025-11-25 16:37:32.678950246 +0000 UTC m=+0.089993582 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Nov 25 16:37:32 compute-0 podman[311953]: 2025-11-25 16:37:32.684028624 +0000 UTC m=+0.094656929 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent)
Nov 25 16:37:32 compute-0 podman[311954]: 2025-11-25 16:37:32.717194124 +0000 UTC m=+0.118879776 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 16:37:33 compute-0 nova_compute[254092]: 2025-11-25 16:37:33.014 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:33 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e193 do_prune osdmap full prune enabled
Nov 25 16:37:33 compute-0 ceph-mon[74985]: pgmap v1529: 321 pgs: 321 active+clean; 181 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.7 MiB/s wr, 247 op/s
Nov 25 16:37:33 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e194 e194: 3 total, 3 up, 3 in
Nov 25 16:37:33 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e194: 3 total, 3 up, 3 in
Nov 25 16:37:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1531: 321 pgs: 321 active+clean; 181 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 47 KiB/s wr, 262 op/s
Nov 25 16:37:34 compute-0 ceph-mon[74985]: osdmap e194: 3 total, 3 up, 3 in
Nov 25 16:37:34 compute-0 ceph-mon[74985]: pgmap v1531: 321 pgs: 321 active+clean; 181 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 47 KiB/s wr, 262 op/s
Nov 25 16:37:34 compute-0 nova_compute[254092]: 2025-11-25 16:37:34.528 254096 DEBUG nova.storage.rbd_utils [None req-70d0c83f-8692-4d99-a158-3483f9dbc267 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] cloning vms/8ed9342e-e179-467c-993f-a92f2f7b0dff_disk@a48b4200ef294c0cb47c992fa8f0f5ce to images/1a87d43c-b6eb-4995-8c8a-ad36a6013e2b clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 16:37:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:37:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1532: 321 pgs: 321 active+clean; 181 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 32 KiB/s wr, 241 op/s
Nov 25 16:37:37 compute-0 ceph-mon[74985]: pgmap v1532: 321 pgs: 321 active+clean; 181 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 32 KiB/s wr, 241 op/s
Nov 25 16:37:37 compute-0 nova_compute[254092]: 2025-11-25 16:37:37.364 254096 DEBUG nova.storage.rbd_utils [None req-70d0c83f-8692-4d99-a158-3483f9dbc267 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] flattening images/1a87d43c-b6eb-4995-8c8a-ad36a6013e2b flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 16:37:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1533: 321 pgs: 321 active+clean; 187 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 577 KiB/s wr, 186 op/s
Nov 25 16:37:37 compute-0 nova_compute[254092]: 2025-11-25 16:37:37.696 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:38 compute-0 nova_compute[254092]: 2025-11-25 16:37:38.016 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:38 compute-0 ceph-mon[74985]: pgmap v1533: 321 pgs: 321 active+clean; 187 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 577 KiB/s wr, 186 op/s
Nov 25 16:37:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1534: 321 pgs: 321 active+clean; 187 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 577 KiB/s wr, 186 op/s
Nov 25 16:37:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:37:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:37:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:37:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:37:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:37:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:37:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:37:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:37:40
Nov 25 16:37:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:37:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:37:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'volumes', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', 'backups', 'images', '.rgw.root']
Nov 25 16:37:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:37:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:37:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:37:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:37:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:37:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:37:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:37:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:37:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:37:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:37:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:37:41 compute-0 ceph-mon[74985]: pgmap v1534: 321 pgs: 321 active+clean; 187 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 577 KiB/s wr, 186 op/s
Nov 25 16:37:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1535: 321 pgs: 321 active+clean; 253 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.1 MiB/s wr, 105 op/s
Nov 25 16:37:41 compute-0 nova_compute[254092]: 2025-11-25 16:37:41.764 254096 DEBUG nova.virt.libvirt.driver [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 16:37:42 compute-0 ceph-mon[74985]: pgmap v1535: 321 pgs: 321 active+clean; 253 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.1 MiB/s wr, 105 op/s
Nov 25 16:37:42 compute-0 nova_compute[254092]: 2025-11-25 16:37:42.776 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:43 compute-0 nova_compute[254092]: 2025-11-25 16:37:43.019 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1536: 321 pgs: 321 active+clean; 253 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.1 MiB/s wr, 105 op/s
Nov 25 16:37:44 compute-0 ceph-mon[74985]: pgmap v1536: 321 pgs: 321 active+clean; 253 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.1 MiB/s wr, 105 op/s
Nov 25 16:37:44 compute-0 ovn_controller[153477]: 2025-11-25T16:37:44Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fc:c2:fe 10.100.0.14
Nov 25 16:37:44 compute-0 ovn_controller[153477]: 2025-11-25T16:37:44Z|00056|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fc:c2:fe 10.100.0.14
Nov 25 16:37:45 compute-0 nova_compute[254092]: 2025-11-25 16:37:45.132 254096 DEBUG nova.storage.rbd_utils [None req-70d0c83f-8692-4d99-a158-3483f9dbc267 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] removing snapshot(a48b4200ef294c0cb47c992fa8f0f5ce) on rbd image(8ed9342e-e179-467c-993f-a92f2f7b0dff_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 16:37:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:37:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1537: 321 pgs: 321 active+clean; 258 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.9 MiB/s wr, 103 op/s
Nov 25 16:37:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e194 do_prune osdmap full prune enabled
Nov 25 16:37:46 compute-0 ceph-mon[74985]: pgmap v1537: 321 pgs: 321 active+clean; 258 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.9 MiB/s wr, 103 op/s
Nov 25 16:37:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e195 e195: 3 total, 3 up, 3 in
Nov 25 16:37:46 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e195: 3 total, 3 up, 3 in
Nov 25 16:37:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1539: 321 pgs: 321 active+clean; 299 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 8.5 MiB/s wr, 154 op/s
Nov 25 16:37:47 compute-0 ovn_controller[153477]: 2025-11-25T16:37:47Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b0:76:f8 10.100.0.10
Nov 25 16:37:47 compute-0 ovn_controller[153477]: 2025-11-25T16:37:47Z|00058|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b0:76:f8 10.100.0.10
Nov 25 16:37:47 compute-0 nova_compute[254092]: 2025-11-25 16:37:47.524 254096 DEBUG nova.storage.rbd_utils [None req-70d0c83f-8692-4d99-a158-3483f9dbc267 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] creating snapshot(snap) on rbd image(1a87d43c-b6eb-4995-8c8a-ad36a6013e2b) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 16:37:47 compute-0 ceph-mon[74985]: osdmap e195: 3 total, 3 up, 3 in
Nov 25 16:37:47 compute-0 nova_compute[254092]: 2025-11-25 16:37:47.939 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:48 compute-0 nova_compute[254092]: 2025-11-25 16:37:48.021 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e195 do_prune osdmap full prune enabled
Nov 25 16:37:49 compute-0 ceph-mon[74985]: pgmap v1539: 321 pgs: 321 active+clean; 299 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 8.5 MiB/s wr, 154 op/s
Nov 25 16:37:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e196 e196: 3 total, 3 up, 3 in
Nov 25 16:37:49 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e196: 3 total, 3 up, 3 in
Nov 25 16:37:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1541: 321 pgs: 321 active+clean; 299 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 416 KiB/s rd, 4.8 MiB/s wr, 96 op/s
Nov 25 16:37:49 compute-0 ovn_controller[153477]: 2025-11-25T16:37:49Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fc:e5:5c 10.100.0.13
Nov 25 16:37:49 compute-0 ovn_controller[153477]: 2025-11-25T16:37:49Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fc:e5:5c 10.100.0.13
Nov 25 16:37:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:37:50 compute-0 ceph-mon[74985]: osdmap e196: 3 total, 3 up, 3 in
Nov 25 16:37:50 compute-0 ceph-mon[74985]: pgmap v1541: 321 pgs: 321 active+clean; 299 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 416 KiB/s rd, 4.8 MiB/s wr, 96 op/s
Nov 25 16:37:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:37:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:37:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:37:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:37:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002201510304747021 of space, bias 1.0, pg target 0.6604530914241062 quantized to 32 (current 32)
Nov 25 16:37:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:37:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:37:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:37:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:37:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:37:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010120461149638602 of space, bias 1.0, pg target 0.3036138344891581 quantized to 32 (current 32)
Nov 25 16:37:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:37:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 16:37:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:37:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:37:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:37:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 16:37:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:37:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 16:37:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:37:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:37:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:37:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 16:37:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1542: 321 pgs: 321 active+clean; 311 MiB data, 620 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 5.6 MiB/s wr, 217 op/s
Nov 25 16:37:52 compute-0 ceph-mon[74985]: pgmap v1542: 321 pgs: 321 active+clean; 311 MiB data, 620 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 5.6 MiB/s wr, 217 op/s
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver [None req-70d0c83f-8692-4d99-a158-3483f9dbc267 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image 1a87d43c-b6eb-4995-8c8a-ad36a6013e2b could not be found.
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver     image = self._client.call(
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID 1a87d43c-b6eb-4995-8c8a-ad36a6013e2b
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver 
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver 
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver     image = self._client.call(
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image 1a87d43c-b6eb-4995-8c8a-ad36a6013e2b could not be found.
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.379 254096 ERROR nova.virt.libvirt.driver 
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.866 254096 DEBUG nova.storage.rbd_utils [None req-70d0c83f-8692-4d99-a158-3483f9dbc267 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] removing snapshot(snap) on rbd image(1a87d43c-b6eb-4995-8c8a-ad36a6013e2b) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.941 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:52 compute-0 nova_compute[254092]: 2025-11-25 16:37:52.982 254096 DEBUG nova.virt.libvirt.driver [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 16:37:53 compute-0 nova_compute[254092]: 2025-11-25 16:37:53.024 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1543: 321 pgs: 321 active+clean; 311 MiB data, 620 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 4.7 MiB/s wr, 195 op/s
Nov 25 16:37:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e196 do_prune osdmap full prune enabled
Nov 25 16:37:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e197 e197: 3 total, 3 up, 3 in
Nov 25 16:37:54 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e197: 3 total, 3 up, 3 in
Nov 25 16:37:54 compute-0 ceph-mon[74985]: pgmap v1543: 321 pgs: 321 active+clean; 311 MiB data, 620 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 4.7 MiB/s wr, 195 op/s
Nov 25 16:37:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:37:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e197 do_prune osdmap full prune enabled
Nov 25 16:37:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e198 e198: 3 total, 3 up, 3 in
Nov 25 16:37:55 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e198: 3 total, 3 up, 3 in
Nov 25 16:37:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 16:37:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3747353024' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:37:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 16:37:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3747353024' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:37:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1546: 321 pgs: 321 active+clean; 308 MiB data, 621 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.2 MiB/s wr, 218 op/s
Nov 25 16:37:55 compute-0 ceph-mon[74985]: osdmap e197: 3 total, 3 up, 3 in
Nov 25 16:37:55 compute-0 ceph-mon[74985]: osdmap e198: 3 total, 3 up, 3 in
Nov 25 16:37:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/3747353024' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:37:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/3747353024' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:37:56 compute-0 ovn_controller[153477]: 2025-11-25T16:37:56Z|00467|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 25 16:37:56 compute-0 nova_compute[254092]: 2025-11-25 16:37:56.750 254096 WARNING nova.compute.manager [None req-70d0c83f-8692-4d99-a158-3483f9dbc267 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Image not found during snapshot: nova.exception.ImageNotFound: Image 1a87d43c-b6eb-4995-8c8a-ad36a6013e2b could not be found.
Nov 25 16:37:56 compute-0 ceph-mon[74985]: pgmap v1546: 321 pgs: 321 active+clean; 308 MiB data, 621 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.2 MiB/s wr, 218 op/s
Nov 25 16:37:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1547: 321 pgs: 321 active+clean; 280 MiB data, 615 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.0 MiB/s wr, 204 op/s
Nov 25 16:37:57 compute-0 nova_compute[254092]: 2025-11-25 16:37:57.824 254096 DEBUG oslo_concurrency.lockutils [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "8ed9342e-e179-467c-993f-a92f2f7b0dff" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:37:57 compute-0 nova_compute[254092]: 2025-11-25 16:37:57.825 254096 DEBUG oslo_concurrency.lockutils [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "8ed9342e-e179-467c-993f-a92f2f7b0dff" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:37:57 compute-0 nova_compute[254092]: 2025-11-25 16:37:57.825 254096 DEBUG oslo_concurrency.lockutils [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "8ed9342e-e179-467c-993f-a92f2f7b0dff-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:37:57 compute-0 nova_compute[254092]: 2025-11-25 16:37:57.825 254096 DEBUG oslo_concurrency.lockutils [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "8ed9342e-e179-467c-993f-a92f2f7b0dff-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:37:57 compute-0 nova_compute[254092]: 2025-11-25 16:37:57.825 254096 DEBUG oslo_concurrency.lockutils [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "8ed9342e-e179-467c-993f-a92f2f7b0dff-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:37:57 compute-0 nova_compute[254092]: 2025-11-25 16:37:57.827 254096 INFO nova.compute.manager [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Terminating instance
Nov 25 16:37:57 compute-0 nova_compute[254092]: 2025-11-25 16:37:57.827 254096 DEBUG nova.compute.manager [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:37:57 compute-0 sshd-session[312137]: Connection closed by authenticating user root 171.244.51.45 port 44988 [preauth]
Nov 25 16:37:57 compute-0 nova_compute[254092]: 2025-11-25 16:37:57.944 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:58 compute-0 nova_compute[254092]: 2025-11-25 16:37:58.027 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:58 compute-0 kernel: tap27bf7a08-6d (unregistering): left promiscuous mode
Nov 25 16:37:58 compute-0 NetworkManager[48891]: <info>  [1764088678.0519] device (tap27bf7a08-6d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:37:58 compute-0 nova_compute[254092]: 2025-11-25 16:37:58.058 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:58 compute-0 ovn_controller[153477]: 2025-11-25T16:37:58Z|00468|binding|INFO|Releasing lport 27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc from this chassis (sb_readonly=0)
Nov 25 16:37:58 compute-0 ovn_controller[153477]: 2025-11-25T16:37:58Z|00469|binding|INFO|Setting lport 27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc down in Southbound
Nov 25 16:37:58 compute-0 ovn_controller[153477]: 2025-11-25T16:37:58Z|00470|binding|INFO|Removing iface tap27bf7a08-6d ovn-installed in OVS
Nov 25 16:37:58 compute-0 nova_compute[254092]: 2025-11-25 16:37:58.060 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:58 compute-0 nova_compute[254092]: 2025-11-25 16:37:58.076 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:58 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:58.097 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:e5:5c 10.100.0.13'], port_security=['fa:16:3e:fc:e5:5c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '8ed9342e-e179-467c-993f-a92f2f7b0dff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0816ae24-275c-455e-a549-929f4eb756e7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8bd01e0913564ac783fae350d6861e24', 'neutron:revision_number': '4', 'neutron:security_group_ids': '572f99e3-d1c9-4515-9799-c37da094cf6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=24fed12c-3cf8-4d51-9b64-c89f82ee1964, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:37:58 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:58.099 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc in datapath 0816ae24-275c-455e-a549-929f4eb756e7 unbound from our chassis
Nov 25 16:37:58 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:58.100 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0816ae24-275c-455e-a549-929f4eb756e7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:37:58 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:58.101 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f199b019-5d09-4431-ae8d-08eacd3dac5d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:58 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:58.101 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7 namespace which is not needed anymore
Nov 25 16:37:58 compute-0 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000033.scope: Deactivated successfully.
Nov 25 16:37:58 compute-0 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000033.scope: Consumed 14.091s CPU time.
Nov 25 16:37:58 compute-0 systemd-machined[216343]: Machine qemu-60-instance-00000033 terminated.
Nov 25 16:37:58 compute-0 neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7[311734]: [NOTICE]   (311738) : haproxy version is 2.8.14-c23fe91
Nov 25 16:37:58 compute-0 neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7[311734]: [NOTICE]   (311738) : path to executable is /usr/sbin/haproxy
Nov 25 16:37:58 compute-0 neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7[311734]: [WARNING]  (311738) : Exiting Master process...
Nov 25 16:37:58 compute-0 neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7[311734]: [ALERT]    (311738) : Current worker (311740) exited with code 143 (Terminated)
Nov 25 16:37:58 compute-0 neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7[311734]: [WARNING]  (311738) : All workers exited. Exiting... (0)
Nov 25 16:37:58 compute-0 systemd[1]: libpod-0ab9faacd1c3894c4dce8c20f59716038307427870e37793dcc381527599cbe1.scope: Deactivated successfully.
Nov 25 16:37:58 compute-0 podman[312162]: 2025-11-25 16:37:58.238399172 +0000 UTC m=+0.052511116 container died 0ab9faacd1c3894c4dce8c20f59716038307427870e37793dcc381527599cbe1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 16:37:58 compute-0 nova_compute[254092]: 2025-11-25 16:37:58.263 254096 INFO nova.virt.libvirt.driver [-] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Instance destroyed successfully.
Nov 25 16:37:58 compute-0 nova_compute[254092]: 2025-11-25 16:37:58.263 254096 DEBUG nova.objects.instance [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lazy-loading 'resources' on Instance uuid 8ed9342e-e179-467c-993f-a92f2f7b0dff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:37:58 compute-0 nova_compute[254092]: 2025-11-25 16:37:58.365 254096 DEBUG nova.virt.libvirt.vif [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:37:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1535795979',display_name='tempest-ImagesTestJSON-server-1535795979',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1535795979',id=51,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:37:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8bd01e0913564ac783fae350d6861e24',ramdisk_id='',reservation_id='r-no9ayfye',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_r
am='0',owner_project_name='tempest-ImagesTestJSON-217824554',owner_user_name='tempest-ImagesTestJSON-217824554-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:37:56Z,user_data=None,user_id='75e65df891a54a2caabb073a427430b9',uuid=8ed9342e-e179-467c-993f-a92f2f7b0dff,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc", "address": "fa:16:3e:fc:e5:5c", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27bf7a08-6d", "ovs_interfaceid": "27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:37:58 compute-0 nova_compute[254092]: 2025-11-25 16:37:58.366 254096 DEBUG nova.network.os_vif_util [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converting VIF {"id": "27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc", "address": "fa:16:3e:fc:e5:5c", "network": {"id": "0816ae24-275c-455e-a549-929f4eb756e7", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1362485437-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8bd01e0913564ac783fae350d6861e24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27bf7a08-6d", "ovs_interfaceid": "27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:37:58 compute-0 nova_compute[254092]: 2025-11-25 16:37:58.367 254096 DEBUG nova.network.os_vif_util [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:e5:5c,bridge_name='br-int',has_traffic_filtering=True,id=27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27bf7a08-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:37:58 compute-0 nova_compute[254092]: 2025-11-25 16:37:58.367 254096 DEBUG os_vif [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:e5:5c,bridge_name='br-int',has_traffic_filtering=True,id=27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27bf7a08-6d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:37:58 compute-0 nova_compute[254092]: 2025-11-25 16:37:58.369 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:58 compute-0 nova_compute[254092]: 2025-11-25 16:37:58.369 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap27bf7a08-6d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:37:58 compute-0 nova_compute[254092]: 2025-11-25 16:37:58.370 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:58 compute-0 nova_compute[254092]: 2025-11-25 16:37:58.372 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:58 compute-0 nova_compute[254092]: 2025-11-25 16:37:58.375 254096 INFO os_vif [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:e5:5c,bridge_name='br-int',has_traffic_filtering=True,id=27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc,network=Network(0816ae24-275c-455e-a549-929f4eb756e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27bf7a08-6d')
Nov 25 16:37:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc0b7ea57d4fc30664551c01fa106099d478380f35527ead310f6fbe933d4f2a-merged.mount: Deactivated successfully.
Nov 25 16:37:58 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0ab9faacd1c3894c4dce8c20f59716038307427870e37793dcc381527599cbe1-userdata-shm.mount: Deactivated successfully.
Nov 25 16:37:58 compute-0 podman[312162]: 2025-11-25 16:37:58.434697357 +0000 UTC m=+0.248809301 container cleanup 0ab9faacd1c3894c4dce8c20f59716038307427870e37793dcc381527599cbe1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 16:37:58 compute-0 systemd[1]: libpod-conmon-0ab9faacd1c3894c4dce8c20f59716038307427870e37793dcc381527599cbe1.scope: Deactivated successfully.
Nov 25 16:37:58 compute-0 podman[312219]: 2025-11-25 16:37:58.577998035 +0000 UTC m=+0.118173757 container remove 0ab9faacd1c3894c4dce8c20f59716038307427870e37793dcc381527599cbe1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 16:37:58 compute-0 nova_compute[254092]: 2025-11-25 16:37:58.580 254096 DEBUG nova.compute.manager [req-fc145f0a-8757-419c-99b5-933171a08007 req-1554e6a9-34f5-4f6c-bf96-ce0816141718 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Received event network-vif-unplugged-27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:37:58 compute-0 nova_compute[254092]: 2025-11-25 16:37:58.581 254096 DEBUG oslo_concurrency.lockutils [req-fc145f0a-8757-419c-99b5-933171a08007 req-1554e6a9-34f5-4f6c-bf96-ce0816141718 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8ed9342e-e179-467c-993f-a92f2f7b0dff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:37:58 compute-0 nova_compute[254092]: 2025-11-25 16:37:58.581 254096 DEBUG oslo_concurrency.lockutils [req-fc145f0a-8757-419c-99b5-933171a08007 req-1554e6a9-34f5-4f6c-bf96-ce0816141718 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8ed9342e-e179-467c-993f-a92f2f7b0dff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:37:58 compute-0 nova_compute[254092]: 2025-11-25 16:37:58.582 254096 DEBUG oslo_concurrency.lockutils [req-fc145f0a-8757-419c-99b5-933171a08007 req-1554e6a9-34f5-4f6c-bf96-ce0816141718 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8ed9342e-e179-467c-993f-a92f2f7b0dff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:37:58 compute-0 nova_compute[254092]: 2025-11-25 16:37:58.582 254096 DEBUG nova.compute.manager [req-fc145f0a-8757-419c-99b5-933171a08007 req-1554e6a9-34f5-4f6c-bf96-ce0816141718 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] No waiting events found dispatching network-vif-unplugged-27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:37:58 compute-0 nova_compute[254092]: 2025-11-25 16:37:58.582 254096 DEBUG nova.compute.manager [req-fc145f0a-8757-419c-99b5-933171a08007 req-1554e6a9-34f5-4f6c-bf96-ce0816141718 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Received event network-vif-unplugged-27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:37:58 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:58.585 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[27f24b0f-46c9-4958-88cd-b64c0af26dd9]: (4, ('Tue Nov 25 04:37:58 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7 (0ab9faacd1c3894c4dce8c20f59716038307427870e37793dcc381527599cbe1)\n0ab9faacd1c3894c4dce8c20f59716038307427870e37793dcc381527599cbe1\nTue Nov 25 04:37:58 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7 (0ab9faacd1c3894c4dce8c20f59716038307427870e37793dcc381527599cbe1)\n0ab9faacd1c3894c4dce8c20f59716038307427870e37793dcc381527599cbe1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:58 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:58.587 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ff6bb656-d603-4678-b7ee-2766de9930f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:58 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:58.588 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0816ae24-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:37:58 compute-0 nova_compute[254092]: 2025-11-25 16:37:58.590 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:58 compute-0 kernel: tap0816ae24-20: left promiscuous mode
Nov 25 16:37:58 compute-0 nova_compute[254092]: 2025-11-25 16:37:58.606 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:58 compute-0 nova_compute[254092]: 2025-11-25 16:37:58.608 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:37:58 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:58.609 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[318d9e07-20c1-45dc-9c02-04c3f9225cf4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:58 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:58.623 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cad173c7-8b47-409c-ab78-f01346fa433a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:58 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:58.625 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2f1d34a4-d9c0-4cfa-a05e-b5ad01334c5a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:58 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:58.645 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[54ecff4f-99a0-41fc-9f68-eb0c4793010f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 510119, 'reachable_time': 29977, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312234, 'error': None, 'target': 'ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:58 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:58.649 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0816ae24-275c-455e-a549-929f4eb756e7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:37:58 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:37:58.649 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[da9af182-47fb-455f-bedd-e287d88e8123]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:37:58 compute-0 systemd[1]: run-netns-ovnmeta\x2d0816ae24\x2d275c\x2d455e\x2da549\x2d929f4eb756e7.mount: Deactivated successfully.
Nov 25 16:37:58 compute-0 ceph-mon[74985]: pgmap v1547: 321 pgs: 321 active+clean; 280 MiB data, 615 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.0 MiB/s wr, 204 op/s
Nov 25 16:37:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1548: 321 pgs: 321 active+clean; 280 MiB data, 615 MiB used, 59 GiB / 60 GiB avail; 184 KiB/s rd, 262 KiB/s wr, 83 op/s
Nov 25 16:37:59 compute-0 nova_compute[254092]: 2025-11-25 16:37:59.666 254096 INFO nova.virt.libvirt.driver [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Deleting instance files /var/lib/nova/instances/8ed9342e-e179-467c-993f-a92f2f7b0dff_del
Nov 25 16:37:59 compute-0 nova_compute[254092]: 2025-11-25 16:37:59.667 254096 INFO nova.virt.libvirt.driver [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Deletion of /var/lib/nova/instances/8ed9342e-e179-467c-993f-a92f2f7b0dff_del complete
Nov 25 16:37:59 compute-0 nova_compute[254092]: 2025-11-25 16:37:59.729 254096 INFO nova.compute.manager [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Took 1.90 seconds to destroy the instance on the hypervisor.
Nov 25 16:37:59 compute-0 nova_compute[254092]: 2025-11-25 16:37:59.729 254096 DEBUG oslo.service.loopingcall [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:37:59 compute-0 nova_compute[254092]: 2025-11-25 16:37:59.729 254096 DEBUG nova.compute.manager [-] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:37:59 compute-0 nova_compute[254092]: 2025-11-25 16:37:59.730 254096 DEBUG nova.network.neutron [-] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:38:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:38:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e198 do_prune osdmap full prune enabled
Nov 25 16:38:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e199 e199: 3 total, 3 up, 3 in
Nov 25 16:38:00 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e199: 3 total, 3 up, 3 in
Nov 25 16:38:00 compute-0 ceph-mon[74985]: pgmap v1548: 321 pgs: 321 active+clean; 280 MiB data, 615 MiB used, 59 GiB / 60 GiB avail; 184 KiB/s rd, 262 KiB/s wr, 83 op/s
Nov 25 16:38:00 compute-0 ceph-mon[74985]: osdmap e199: 3 total, 3 up, 3 in
Nov 25 16:38:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1550: 321 pgs: 321 active+clean; 200 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 247 KiB/s rd, 305 KiB/s wr, 145 op/s
Nov 25 16:38:02 compute-0 nova_compute[254092]: 2025-11-25 16:38:02.461 254096 DEBUG nova.network.neutron [-] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:38:02 compute-0 nova_compute[254092]: 2025-11-25 16:38:02.494 254096 INFO nova.compute.manager [-] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Took 2.76 seconds to deallocate network for instance.
Nov 25 16:38:02 compute-0 nova_compute[254092]: 2025-11-25 16:38:02.494 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:38:02 compute-0 nova_compute[254092]: 2025-11-25 16:38:02.499 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:38:02 compute-0 nova_compute[254092]: 2025-11-25 16:38:02.499 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:38:02 compute-0 nova_compute[254092]: 2025-11-25 16:38:02.544 254096 DEBUG oslo_concurrency.lockutils [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:38:02 compute-0 nova_compute[254092]: 2025-11-25 16:38:02.544 254096 DEBUG oslo_concurrency.lockutils [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:38:02 compute-0 nova_compute[254092]: 2025-11-25 16:38:02.637 254096 DEBUG nova.compute.manager [req-977c1a26-01c7-4b5b-a50c-753024dd05a2 req-91b5f127-c255-4994-b0e9-79d50ea0fbea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Received event network-vif-plugged-27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:38:02 compute-0 nova_compute[254092]: 2025-11-25 16:38:02.637 254096 DEBUG oslo_concurrency.lockutils [req-977c1a26-01c7-4b5b-a50c-753024dd05a2 req-91b5f127-c255-4994-b0e9-79d50ea0fbea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8ed9342e-e179-467c-993f-a92f2f7b0dff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:38:02 compute-0 nova_compute[254092]: 2025-11-25 16:38:02.637 254096 DEBUG oslo_concurrency.lockutils [req-977c1a26-01c7-4b5b-a50c-753024dd05a2 req-91b5f127-c255-4994-b0e9-79d50ea0fbea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8ed9342e-e179-467c-993f-a92f2f7b0dff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:38:02 compute-0 nova_compute[254092]: 2025-11-25 16:38:02.637 254096 DEBUG oslo_concurrency.lockutils [req-977c1a26-01c7-4b5b-a50c-753024dd05a2 req-91b5f127-c255-4994-b0e9-79d50ea0fbea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8ed9342e-e179-467c-993f-a92f2f7b0dff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:38:02 compute-0 nova_compute[254092]: 2025-11-25 16:38:02.638 254096 DEBUG nova.compute.manager [req-977c1a26-01c7-4b5b-a50c-753024dd05a2 req-91b5f127-c255-4994-b0e9-79d50ea0fbea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] No waiting events found dispatching network-vif-plugged-27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:38:02 compute-0 nova_compute[254092]: 2025-11-25 16:38:02.638 254096 WARNING nova.compute.manager [req-977c1a26-01c7-4b5b-a50c-753024dd05a2 req-91b5f127-c255-4994-b0e9-79d50ea0fbea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Received unexpected event network-vif-plugged-27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc for instance with vm_state deleted and task_state None.
Nov 25 16:38:02 compute-0 nova_compute[254092]: 2025-11-25 16:38:02.645 254096 DEBUG oslo_concurrency.processutils [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:38:02 compute-0 ceph-mon[74985]: pgmap v1550: 321 pgs: 321 active+clean; 200 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 247 KiB/s rd, 305 KiB/s wr, 145 op/s
Nov 25 16:38:03 compute-0 nova_compute[254092]: 2025-11-25 16:38:03.030 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:03 compute-0 nova_compute[254092]: 2025-11-25 16:38:03.062 254096 DEBUG nova.objects.instance [None req-55ecfb9a-5b0b-484a-bf51-eee2289e189a ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Lazy-loading 'flavor' on Instance uuid d3356685-91bc-46b9-9b9f-87ffce31a4ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:38:03 compute-0 nova_compute[254092]: 2025-11-25 16:38:03.086 254096 DEBUG oslo_concurrency.lockutils [None req-55ecfb9a-5b0b-484a-bf51-eee2289e189a ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Acquiring lock "refresh_cache-d3356685-91bc-46b9-9b9f-87ffce31a4ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:38:03 compute-0 nova_compute[254092]: 2025-11-25 16:38:03.087 254096 DEBUG oslo_concurrency.lockutils [None req-55ecfb9a-5b0b-484a-bf51-eee2289e189a ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Acquired lock "refresh_cache-d3356685-91bc-46b9-9b9f-87ffce31a4ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:38:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:38:03 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4120555344' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:38:03 compute-0 nova_compute[254092]: 2025-11-25 16:38:03.178 254096 DEBUG oslo_concurrency.processutils [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:38:03 compute-0 nova_compute[254092]: 2025-11-25 16:38:03.184 254096 DEBUG nova.compute.provider_tree [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:38:03 compute-0 nova_compute[254092]: 2025-11-25 16:38:03.203 254096 DEBUG nova.scheduler.client.report [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:38:03 compute-0 nova_compute[254092]: 2025-11-25 16:38:03.226 254096 DEBUG oslo_concurrency.lockutils [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:38:03 compute-0 nova_compute[254092]: 2025-11-25 16:38:03.273 254096 INFO nova.scheduler.client.report [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Deleted allocations for instance 8ed9342e-e179-467c-993f-a92f2f7b0dff
Nov 25 16:38:03 compute-0 nova_compute[254092]: 2025-11-25 16:38:03.348 254096 DEBUG oslo_concurrency.lockutils [None req-63233c64-d41e-4547-aa10-5b393af4e50c 75e65df891a54a2caabb073a427430b9 8bd01e0913564ac783fae350d6861e24 - - default default] Lock "8ed9342e-e179-467c-993f-a92f2f7b0dff" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.523s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:38:03 compute-0 nova_compute[254092]: 2025-11-25 16:38:03.371 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1551: 321 pgs: 321 active+clean; 200 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 134 KiB/s rd, 109 KiB/s wr, 80 op/s
Nov 25 16:38:03 compute-0 nova_compute[254092]: 2025-11-25 16:38:03.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:38:03 compute-0 nova_compute[254092]: 2025-11-25 16:38:03.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 16:38:03 compute-0 nova_compute[254092]: 2025-11-25 16:38:03.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:38:03 compute-0 nova_compute[254092]: 2025-11-25 16:38:03.518 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:38:03 compute-0 nova_compute[254092]: 2025-11-25 16:38:03.519 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:38:03 compute-0 nova_compute[254092]: 2025-11-25 16:38:03.519 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:38:03 compute-0 nova_compute[254092]: 2025-11-25 16:38:03.519 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 16:38:03 compute-0 nova_compute[254092]: 2025-11-25 16:38:03.519 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:38:03 compute-0 podman[312260]: 2025-11-25 16:38:03.655577438 +0000 UTC m=+0.068536861 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Nov 25 16:38:03 compute-0 podman[312259]: 2025-11-25 16:38:03.659244047 +0000 UTC m=+0.073399212 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible)
Nov 25 16:38:03 compute-0 podman[312261]: 2025-11-25 16:38:03.716731456 +0000 UTC m=+0.122316819 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller)
Nov 25 16:38:03 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4120555344' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:38:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:38:03 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2032760713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:38:03 compute-0 nova_compute[254092]: 2025-11-25 16:38:03.980 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:38:04 compute-0 nova_compute[254092]: 2025-11-25 16:38:04.026 254096 DEBUG nova.virt.libvirt.driver [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Instance in state 1 after 32 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 16:38:04 compute-0 nova_compute[254092]: 2025-11-25 16:38:04.068 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000034 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:38:04 compute-0 nova_compute[254092]: 2025-11-25 16:38:04.068 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000034 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:38:04 compute-0 nova_compute[254092]: 2025-11-25 16:38:04.074 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:38:04 compute-0 nova_compute[254092]: 2025-11-25 16:38:04.074 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:38:04 compute-0 nova_compute[254092]: 2025-11-25 16:38:04.142 254096 DEBUG nova.network.neutron [None req-55ecfb9a-5b0b-484a-bf51-eee2289e189a ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:38:04 compute-0 nova_compute[254092]: 2025-11-25 16:38:04.283 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:38:04 compute-0 nova_compute[254092]: 2025-11-25 16:38:04.284 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3823MB free_disk=59.897247314453125GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 16:38:04 compute-0 nova_compute[254092]: 2025-11-25 16:38:04.285 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:38:04 compute-0 nova_compute[254092]: 2025-11-25 16:38:04.285 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:38:04 compute-0 nova_compute[254092]: 2025-11-25 16:38:04.362 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance d3356685-91bc-46b9-9b9f-87ffce31a4ab actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:38:04 compute-0 nova_compute[254092]: 2025-11-25 16:38:04.362 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 497caf1f-53fe-425d-8e5c-10b2f0a2506d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:38:04 compute-0 nova_compute[254092]: 2025-11-25 16:38:04.363 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 16:38:04 compute-0 nova_compute[254092]: 2025-11-25 16:38:04.363 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 16:38:04 compute-0 nova_compute[254092]: 2025-11-25 16:38:04.454 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:38:04 compute-0 nova_compute[254092]: 2025-11-25 16:38:04.738 254096 DEBUG nova.compute.manager [req-681b53b8-92ae-4f59-8ac9-a4f94bdf8e55 req-23db65a5-e799-40e4-8ade-960c15f96d8d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Received event network-vif-deleted-27bf7a08-6d8a-4e0b-a070-d4c8f14e65dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:38:04 compute-0 nova_compute[254092]: 2025-11-25 16:38:04.740 254096 DEBUG nova.compute.manager [req-681b53b8-92ae-4f59-8ac9-a4f94bdf8e55 req-23db65a5-e799-40e4-8ade-960c15f96d8d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Received event network-changed-1693689c-371b-40fb-8153-5313e926d910 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:38:04 compute-0 nova_compute[254092]: 2025-11-25 16:38:04.741 254096 DEBUG nova.compute.manager [req-681b53b8-92ae-4f59-8ac9-a4f94bdf8e55 req-23db65a5-e799-40e4-8ade-960c15f96d8d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Refreshing instance network info cache due to event network-changed-1693689c-371b-40fb-8153-5313e926d910. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:38:04 compute-0 nova_compute[254092]: 2025-11-25 16:38:04.742 254096 DEBUG oslo_concurrency.lockutils [req-681b53b8-92ae-4f59-8ac9-a4f94bdf8e55 req-23db65a5-e799-40e4-8ade-960c15f96d8d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-d3356685-91bc-46b9-9b9f-87ffce31a4ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:38:04 compute-0 ceph-mon[74985]: pgmap v1551: 321 pgs: 321 active+clean; 200 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 134 KiB/s rd, 109 KiB/s wr, 80 op/s
Nov 25 16:38:04 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2032760713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:38:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:38:04 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2162550706' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:38:04 compute-0 nova_compute[254092]: 2025-11-25 16:38:04.945 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:38:04 compute-0 nova_compute[254092]: 2025-11-25 16:38:04.953 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:38:04 compute-0 nova_compute[254092]: 2025-11-25 16:38:04.991 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:38:05 compute-0 nova_compute[254092]: 2025-11-25 16:38:05.100 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 16:38:05 compute-0 nova_compute[254092]: 2025-11-25 16:38:05.101 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.816s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:38:05 compute-0 nova_compute[254092]: 2025-11-25 16:38:05.102 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:38:05 compute-0 nova_compute[254092]: 2025-11-25 16:38:05.102 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 16:38:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:38:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1552: 321 pgs: 321 active+clean; 200 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 108 KiB/s rd, 94 KiB/s wr, 65 op/s
Nov 25 16:38:05 compute-0 nova_compute[254092]: 2025-11-25 16:38:05.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:38:05 compute-0 nova_compute[254092]: 2025-11-25 16:38:05.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:38:05 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2162550706' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:38:06 compute-0 nova_compute[254092]: 2025-11-25 16:38:06.505 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:38:06 compute-0 nova_compute[254092]: 2025-11-25 16:38:06.505 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:38:06 compute-0 sudo[312363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:38:06 compute-0 sudo[312363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:38:06 compute-0 sudo[312363]: pam_unix(sudo:session): session closed for user root
Nov 25 16:38:06 compute-0 nova_compute[254092]: 2025-11-25 16:38:06.561 254096 DEBUG nova.network.neutron [None req-55ecfb9a-5b0b-484a-bf51-eee2289e189a ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Updating instance_info_cache with network_info: [{"id": "1693689c-371b-40fb-8153-5313e926d910", "address": "fa:16:3e:fc:c2:fe", "network": {"id": "0c61a44f-bcff-4141-9691-0b0cd16e5793", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1067646278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8b9be1565d148a3ac487eacb391dc1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1693689c-37", "ovs_interfaceid": "1693689c-371b-40fb-8153-5313e926d910", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:38:06 compute-0 nova_compute[254092]: 2025-11-25 16:38:06.578 254096 DEBUG oslo_concurrency.lockutils [None req-55ecfb9a-5b0b-484a-bf51-eee2289e189a ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Releasing lock "refresh_cache-d3356685-91bc-46b9-9b9f-87ffce31a4ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:38:06 compute-0 nova_compute[254092]: 2025-11-25 16:38:06.578 254096 DEBUG nova.compute.manager [None req-55ecfb9a-5b0b-484a-bf51-eee2289e189a ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Nov 25 16:38:06 compute-0 nova_compute[254092]: 2025-11-25 16:38:06.578 254096 DEBUG nova.compute.manager [None req-55ecfb9a-5b0b-484a-bf51-eee2289e189a ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] network_info to inject: |[{"id": "1693689c-371b-40fb-8153-5313e926d910", "address": "fa:16:3e:fc:c2:fe", "network": {"id": "0c61a44f-bcff-4141-9691-0b0cd16e5793", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1067646278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8b9be1565d148a3ac487eacb391dc1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1693689c-37", "ovs_interfaceid": "1693689c-371b-40fb-8153-5313e926d910", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Nov 25 16:38:06 compute-0 nova_compute[254092]: 2025-11-25 16:38:06.580 254096 DEBUG oslo_concurrency.lockutils [req-681b53b8-92ae-4f59-8ac9-a4f94bdf8e55 req-23db65a5-e799-40e4-8ade-960c15f96d8d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-d3356685-91bc-46b9-9b9f-87ffce31a4ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:38:06 compute-0 nova_compute[254092]: 2025-11-25 16:38:06.581 254096 DEBUG nova.network.neutron [req-681b53b8-92ae-4f59-8ac9-a4f94bdf8e55 req-23db65a5-e799-40e4-8ade-960c15f96d8d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Refreshing network info cache for port 1693689c-371b-40fb-8153-5313e926d910 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:38:06 compute-0 sudo[312388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:38:06 compute-0 sudo[312388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:38:06 compute-0 sudo[312388]: pam_unix(sudo:session): session closed for user root
Nov 25 16:38:06 compute-0 sudo[312413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:38:06 compute-0 sudo[312413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:38:06 compute-0 sudo[312413]: pam_unix(sudo:session): session closed for user root
Nov 25 16:38:06 compute-0 sudo[312438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 25 16:38:06 compute-0 sudo[312438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:38:06 compute-0 ceph-mon[74985]: pgmap v1552: 321 pgs: 321 active+clean; 200 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 108 KiB/s rd, 94 KiB/s wr, 65 op/s
Nov 25 16:38:07 compute-0 podman[312534]: 2025-11-25 16:38:07.346954091 +0000 UTC m=+0.078542132 container exec 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:38:07 compute-0 podman[312534]: 2025-11-25 16:38:07.450000306 +0000 UTC m=+0.181588347 container exec_died 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:38:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1553: 321 pgs: 321 active+clean; 200 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 9.4 KiB/s wr, 35 op/s
Nov 25 16:38:08 compute-0 nova_compute[254092]: 2025-11-25 16:38:08.032 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:08 compute-0 nova_compute[254092]: 2025-11-25 16:38:08.063 254096 DEBUG nova.objects.instance [None req-f956ac81-8a47-4483-b4f7-97d829c5976a ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Lazy-loading 'flavor' on Instance uuid d3356685-91bc-46b9-9b9f-87ffce31a4ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:38:08 compute-0 nova_compute[254092]: 2025-11-25 16:38:08.087 254096 DEBUG oslo_concurrency.lockutils [None req-f956ac81-8a47-4483-b4f7-97d829c5976a ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Acquiring lock "refresh_cache-d3356685-91bc-46b9-9b9f-87ffce31a4ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:38:08 compute-0 sudo[312438]: pam_unix(sudo:session): session closed for user root
Nov 25 16:38:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:38:08 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:38:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:38:08 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:38:08 compute-0 sudo[312691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:38:08 compute-0 sudo[312691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:38:08 compute-0 sudo[312691]: pam_unix(sudo:session): session closed for user root
Nov 25 16:38:08 compute-0 sudo[312716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:38:08 compute-0 sudo[312716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:38:08 compute-0 sudo[312716]: pam_unix(sudo:session): session closed for user root
Nov 25 16:38:08 compute-0 sudo[312741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:38:08 compute-0 sudo[312741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:38:08 compute-0 sudo[312741]: pam_unix(sudo:session): session closed for user root
Nov 25 16:38:08 compute-0 nova_compute[254092]: 2025-11-25 16:38:08.373 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:08 compute-0 sudo[312766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 16:38:08 compute-0 sudo[312766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:38:08 compute-0 nova_compute[254092]: 2025-11-25 16:38:08.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:38:08 compute-0 nova_compute[254092]: 2025-11-25 16:38:08.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 16:38:08 compute-0 nova_compute[254092]: 2025-11-25 16:38:08.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 16:38:08 compute-0 sudo[312766]: pam_unix(sudo:session): session closed for user root
Nov 25 16:38:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:38:08 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:38:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 16:38:08 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:38:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 16:38:08 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:38:08 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 0a3e9c55-611b-44ce-b0e6-68c68c30eae5 does not exist
Nov 25 16:38:08 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 2a9ab26a-ad31-4f7e-89d1-29fdc81b9d54 does not exist
Nov 25 16:38:08 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 733ef74d-8d74-4899-a760-17a112ecd26a does not exist
Nov 25 16:38:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 16:38:08 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:38:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 16:38:08 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:38:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:38:08 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:38:08 compute-0 ceph-mon[74985]: pgmap v1553: 321 pgs: 321 active+clean; 200 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 9.4 KiB/s wr, 35 op/s
Nov 25 16:38:08 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:38:08 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:38:08 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:38:08 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:38:08 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:38:08 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:38:08 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:38:08 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:38:08 compute-0 sudo[312823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:38:08 compute-0 sudo[312823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:38:08 compute-0 sudo[312823]: pam_unix(sudo:session): session closed for user root
Nov 25 16:38:09 compute-0 sudo[312848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:38:09 compute-0 sudo[312848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:38:09 compute-0 sudo[312848]: pam_unix(sudo:session): session closed for user root
Nov 25 16:38:09 compute-0 sudo[312873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:38:09 compute-0 sudo[312873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:38:09 compute-0 sudo[312873]: pam_unix(sudo:session): session closed for user root
Nov 25 16:38:09 compute-0 sudo[312898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 16:38:09 compute-0 sudo[312898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:38:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1554: 321 pgs: 321 active+clean; 200 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 9.4 KiB/s wr, 35 op/s
Nov 25 16:38:09 compute-0 nova_compute[254092]: 2025-11-25 16:38:09.489 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-d3356685-91bc-46b9-9b9f-87ffce31a4ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:38:09 compute-0 podman[312963]: 2025-11-25 16:38:09.492436647 +0000 UTC m=+0.041643011 container create 2647e77759885a7ede3e644b973375224938aef4fe3184c8cca92b0162961681 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:38:09 compute-0 systemd[1]: Started libpod-conmon-2647e77759885a7ede3e644b973375224938aef4fe3184c8cca92b0162961681.scope.
Nov 25 16:38:09 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:38:09 compute-0 podman[312963]: 2025-11-25 16:38:09.474234193 +0000 UTC m=+0.023440587 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:38:09 compute-0 podman[312963]: 2025-11-25 16:38:09.611052343 +0000 UTC m=+0.160258737 container init 2647e77759885a7ede3e644b973375224938aef4fe3184c8cca92b0162961681 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 16:38:09 compute-0 podman[312963]: 2025-11-25 16:38:09.619491823 +0000 UTC m=+0.168698197 container start 2647e77759885a7ede3e644b973375224938aef4fe3184c8cca92b0162961681 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:38:09 compute-0 podman[312963]: 2025-11-25 16:38:09.624725455 +0000 UTC m=+0.173931849 container attach 2647e77759885a7ede3e644b973375224938aef4fe3184c8cca92b0162961681 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:38:09 compute-0 gifted_golick[312979]: 167 167
Nov 25 16:38:09 compute-0 systemd[1]: libpod-2647e77759885a7ede3e644b973375224938aef4fe3184c8cca92b0162961681.scope: Deactivated successfully.
Nov 25 16:38:09 compute-0 podman[312963]: 2025-11-25 16:38:09.626225066 +0000 UTC m=+0.175431450 container died 2647e77759885a7ede3e644b973375224938aef4fe3184c8cca92b0162961681 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 16:38:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ddc078d2fdc38967bdfae6663830df767a1f0a992580ffba4cead93816d4698-merged.mount: Deactivated successfully.
Nov 25 16:38:09 compute-0 podman[312963]: 2025-11-25 16:38:09.670254951 +0000 UTC m=+0.219461325 container remove 2647e77759885a7ede3e644b973375224938aef4fe3184c8cca92b0162961681 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_golick, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:38:09 compute-0 systemd[1]: libpod-conmon-2647e77759885a7ede3e644b973375224938aef4fe3184c8cca92b0162961681.scope: Deactivated successfully.
Nov 25 16:38:09 compute-0 podman[313003]: 2025-11-25 16:38:09.85820152 +0000 UTC m=+0.065140209 container create bb865c6d0a2e7968bc22ebc5f6f734507121313cf1d845376c1fbdc889d9a81c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bell, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:38:09 compute-0 podman[313003]: 2025-11-25 16:38:09.817393043 +0000 UTC m=+0.024331752 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:38:09 compute-0 systemd[1]: Started libpod-conmon-bb865c6d0a2e7968bc22ebc5f6f734507121313cf1d845376c1fbdc889d9a81c.scope.
Nov 25 16:38:09 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:38:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ca22d712e55b26dfb46e3342536ea4cc9d825b7c442849aa989a361cbcdb376/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:38:09 compute-0 nova_compute[254092]: 2025-11-25 16:38:09.947 254096 DEBUG nova.network.neutron [req-681b53b8-92ae-4f59-8ac9-a4f94bdf8e55 req-23db65a5-e799-40e4-8ade-960c15f96d8d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Updated VIF entry in instance network info cache for port 1693689c-371b-40fb-8153-5313e926d910. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:38:09 compute-0 nova_compute[254092]: 2025-11-25 16:38:09.948 254096 DEBUG nova.network.neutron [req-681b53b8-92ae-4f59-8ac9-a4f94bdf8e55 req-23db65a5-e799-40e4-8ade-960c15f96d8d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Updating instance_info_cache with network_info: [{"id": "1693689c-371b-40fb-8153-5313e926d910", "address": "fa:16:3e:fc:c2:fe", "network": {"id": "0c61a44f-bcff-4141-9691-0b0cd16e5793", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1067646278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8b9be1565d148a3ac487eacb391dc1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1693689c-37", "ovs_interfaceid": "1693689c-371b-40fb-8153-5313e926d910", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:38:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ca22d712e55b26dfb46e3342536ea4cc9d825b7c442849aa989a361cbcdb376/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:38:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ca22d712e55b26dfb46e3342536ea4cc9d825b7c442849aa989a361cbcdb376/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:38:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ca22d712e55b26dfb46e3342536ea4cc9d825b7c442849aa989a361cbcdb376/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:38:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ca22d712e55b26dfb46e3342536ea4cc9d825b7c442849aa989a361cbcdb376/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 16:38:09 compute-0 nova_compute[254092]: 2025-11-25 16:38:09.965 254096 DEBUG oslo_concurrency.lockutils [req-681b53b8-92ae-4f59-8ac9-a4f94bdf8e55 req-23db65a5-e799-40e4-8ade-960c15f96d8d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-d3356685-91bc-46b9-9b9f-87ffce31a4ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:38:09 compute-0 nova_compute[254092]: 2025-11-25 16:38:09.965 254096 DEBUG oslo_concurrency.lockutils [None req-f956ac81-8a47-4483-b4f7-97d829c5976a ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Acquired lock "refresh_cache-d3356685-91bc-46b9-9b9f-87ffce31a4ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:38:09 compute-0 podman[313003]: 2025-11-25 16:38:09.986262523 +0000 UTC m=+0.193201232 container init bb865c6d0a2e7968bc22ebc5f6f734507121313cf1d845376c1fbdc889d9a81c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 16:38:09 compute-0 podman[313003]: 2025-11-25 16:38:09.993333426 +0000 UTC m=+0.200272115 container start bb865c6d0a2e7968bc22ebc5f6f734507121313cf1d845376c1fbdc889d9a81c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bell, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 16:38:10 compute-0 podman[313003]: 2025-11-25 16:38:10.030773891 +0000 UTC m=+0.237712610 container attach bb865c6d0a2e7968bc22ebc5f6f734507121313cf1d845376c1fbdc889d9a81c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 16:38:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:38:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:38:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:38:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:38:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:38:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:38:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:38:10 compute-0 ceph-mon[74985]: pgmap v1554: 321 pgs: 321 active+clean; 200 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 9.4 KiB/s wr, 35 op/s
Nov 25 16:38:11 compute-0 determined_bell[313019]: --> passed data devices: 0 physical, 3 LVM
Nov 25 16:38:11 compute-0 determined_bell[313019]: --> relative data size: 1.0
Nov 25 16:38:11 compute-0 determined_bell[313019]: --> All data devices are unavailable
Nov 25 16:38:11 compute-0 systemd[1]: libpod-bb865c6d0a2e7968bc22ebc5f6f734507121313cf1d845376c1fbdc889d9a81c.scope: Deactivated successfully.
Nov 25 16:38:11 compute-0 podman[313003]: 2025-11-25 16:38:11.060312512 +0000 UTC m=+1.267251201 container died bb865c6d0a2e7968bc22ebc5f6f734507121313cf1d845376c1fbdc889d9a81c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 16:38:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ca22d712e55b26dfb46e3342536ea4cc9d825b7c442849aa989a361cbcdb376-merged.mount: Deactivated successfully.
Nov 25 16:38:11 compute-0 podman[313003]: 2025-11-25 16:38:11.118051988 +0000 UTC m=+1.324990677 container remove bb865c6d0a2e7968bc22ebc5f6f734507121313cf1d845376c1fbdc889d9a81c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bell, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:38:11 compute-0 systemd[1]: libpod-conmon-bb865c6d0a2e7968bc22ebc5f6f734507121313cf1d845376c1fbdc889d9a81c.scope: Deactivated successfully.
Nov 25 16:38:11 compute-0 sudo[312898]: pam_unix(sudo:session): session closed for user root
Nov 25 16:38:11 compute-0 sudo[313060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:38:11 compute-0 sudo[313060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:38:11 compute-0 sudo[313060]: pam_unix(sudo:session): session closed for user root
Nov 25 16:38:11 compute-0 sudo[313085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:38:11 compute-0 sudo[313085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:38:11 compute-0 sudo[313085]: pam_unix(sudo:session): session closed for user root
Nov 25 16:38:11 compute-0 sudo[313110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:38:11 compute-0 sudo[313110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:38:11 compute-0 sudo[313110]: pam_unix(sudo:session): session closed for user root
Nov 25 16:38:11 compute-0 sudo[313135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 16:38:11 compute-0 sudo[313135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:38:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1555: 321 pgs: 321 active+clean; 200 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 8.6 KiB/s wr, 25 op/s
Nov 25 16:38:11 compute-0 podman[313201]: 2025-11-25 16:38:11.760291133 +0000 UTC m=+0.025269277 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:38:11 compute-0 podman[313201]: 2025-11-25 16:38:11.870232655 +0000 UTC m=+0.135210769 container create c2bc3d067eb8c5e5fa3a5c6ee7ff6430e571bdc7b4d3017cae4de12b9fb79b4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:38:11 compute-0 ovn_controller[153477]: 2025-11-25T16:38:11Z|00471|binding|INFO|Releasing lport eafb5caa-fac0-4eb9-b6f2-3dbe2ea21bda from this chassis (sb_readonly=0)
Nov 25 16:38:11 compute-0 ovn_controller[153477]: 2025-11-25T16:38:11Z|00472|binding|INFO|Releasing lport d55c2366-e0b4-4cb8-86a6-03b9430fb138 from this chassis (sb_readonly=0)
Nov 25 16:38:11 compute-0 systemd[1]: Started libpod-conmon-c2bc3d067eb8c5e5fa3a5c6ee7ff6430e571bdc7b4d3017cae4de12b9fb79b4e.scope.
Nov 25 16:38:11 compute-0 nova_compute[254092]: 2025-11-25 16:38:11.951 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:38:11 compute-0 podman[313201]: 2025-11-25 16:38:11.987026554 +0000 UTC m=+0.252004688 container init c2bc3d067eb8c5e5fa3a5c6ee7ff6430e571bdc7b4d3017cae4de12b9fb79b4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_shtern, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:38:11 compute-0 podman[313201]: 2025-11-25 16:38:11.994096095 +0000 UTC m=+0.259074209 container start c2bc3d067eb8c5e5fa3a5c6ee7ff6430e571bdc7b4d3017cae4de12b9fb79b4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 16:38:11 compute-0 festive_shtern[313217]: 167 167
Nov 25 16:38:11 compute-0 systemd[1]: libpod-c2bc3d067eb8c5e5fa3a5c6ee7ff6430e571bdc7b4d3017cae4de12b9fb79b4e.scope: Deactivated successfully.
Nov 25 16:38:11 compute-0 podman[313201]: 2025-11-25 16:38:11.998741032 +0000 UTC m=+0.263719146 container attach c2bc3d067eb8c5e5fa3a5c6ee7ff6430e571bdc7b4d3017cae4de12b9fb79b4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 16:38:12 compute-0 podman[313201]: 2025-11-25 16:38:12.000101008 +0000 UTC m=+0.265079142 container died c2bc3d067eb8c5e5fa3a5c6ee7ff6430e571bdc7b4d3017cae4de12b9fb79b4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_shtern, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Nov 25 16:38:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd06d4366ddb525501842901f5e5822facae99d3bb8735dac61793f3d56062b8-merged.mount: Deactivated successfully.
Nov 25 16:38:12 compute-0 podman[313201]: 2025-11-25 16:38:12.039738373 +0000 UTC m=+0.304716487 container remove c2bc3d067eb8c5e5fa3a5c6ee7ff6430e571bdc7b4d3017cae4de12b9fb79b4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 16:38:12 compute-0 systemd[1]: libpod-conmon-c2bc3d067eb8c5e5fa3a5c6ee7ff6430e571bdc7b4d3017cae4de12b9fb79b4e.scope: Deactivated successfully.
Nov 25 16:38:12 compute-0 podman[313240]: 2025-11-25 16:38:12.210903708 +0000 UTC m=+0.041072526 container create 4269de6bbb3be704d8c1b01e7fb777a41dc0125ae5cd24bbeb23443adc1676b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 16:38:12 compute-0 systemd[1]: Started libpod-conmon-4269de6bbb3be704d8c1b01e7fb777a41dc0125ae5cd24bbeb23443adc1676b3.scope.
Nov 25 16:38:12 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:38:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1d3cc21c3b0e1a95e4584b933fe0637a51ebdad6840ca5a21d811d33c5c5140/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:38:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1d3cc21c3b0e1a95e4584b933fe0637a51ebdad6840ca5a21d811d33c5c5140/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:38:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1d3cc21c3b0e1a95e4584b933fe0637a51ebdad6840ca5a21d811d33c5c5140/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:38:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1d3cc21c3b0e1a95e4584b933fe0637a51ebdad6840ca5a21d811d33c5c5140/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:38:12 compute-0 podman[313240]: 2025-11-25 16:38:12.28177257 +0000 UTC m=+0.111941408 container init 4269de6bbb3be704d8c1b01e7fb777a41dc0125ae5cd24bbeb23443adc1676b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_blackwell, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 16:38:12 compute-0 podman[313240]: 2025-11-25 16:38:12.192550729 +0000 UTC m=+0.022719577 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:38:12 compute-0 podman[313240]: 2025-11-25 16:38:12.287997669 +0000 UTC m=+0.118166487 container start 4269de6bbb3be704d8c1b01e7fb777a41dc0125ae5cd24bbeb23443adc1676b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_blackwell, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 16:38:12 compute-0 podman[313240]: 2025-11-25 16:38:12.29206398 +0000 UTC m=+0.122232828 container attach 4269de6bbb3be704d8c1b01e7fb777a41dc0125ae5cd24bbeb23443adc1676b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_blackwell, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 16:38:12 compute-0 nova_compute[254092]: 2025-11-25 16:38:12.523 254096 DEBUG nova.network.neutron [None req-f956ac81-8a47-4483-b4f7-97d829c5976a ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:38:12 compute-0 nova_compute[254092]: 2025-11-25 16:38:12.618 254096 DEBUG nova.compute.manager [req-778dada2-2b5f-4f97-9d49-b48e3b07820e req-f59f0d11-1ae6-46c7-a778-743b56917899 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Received event network-changed-1693689c-371b-40fb-8153-5313e926d910 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:38:12 compute-0 nova_compute[254092]: 2025-11-25 16:38:12.618 254096 DEBUG nova.compute.manager [req-778dada2-2b5f-4f97-9d49-b48e3b07820e req-f59f0d11-1ae6-46c7-a778-743b56917899 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Refreshing instance network info cache due to event network-changed-1693689c-371b-40fb-8153-5313e926d910. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:38:12 compute-0 nova_compute[254092]: 2025-11-25 16:38:12.619 254096 DEBUG oslo_concurrency.lockutils [req-778dada2-2b5f-4f97-9d49-b48e3b07820e req-f59f0d11-1ae6-46c7-a778-743b56917899 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-d3356685-91bc-46b9-9b9f-87ffce31a4ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:38:12 compute-0 ceph-mon[74985]: pgmap v1555: 321 pgs: 321 active+clean; 200 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 8.6 KiB/s wr, 25 op/s
Nov 25 16:38:13 compute-0 nova_compute[254092]: 2025-11-25 16:38:13.034 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]: {
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:     "0": [
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:         {
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:             "devices": [
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:                 "/dev/loop3"
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:             ],
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:             "lv_name": "ceph_lv0",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:             "lv_size": "21470642176",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:             "name": "ceph_lv0",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:             "tags": {
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:                 "ceph.cluster_name": "ceph",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:                 "ceph.crush_device_class": "",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:                 "ceph.encrypted": "0",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:                 "ceph.osd_id": "0",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:                 "ceph.type": "block",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:                 "ceph.vdo": "0"
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:             },
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:             "type": "block",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:             "vg_name": "ceph_vg0"
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:         }
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:     ],
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:     "1": [
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:         {
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:             "devices": [
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:                 "/dev/loop4"
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:             ],
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:             "lv_name": "ceph_lv1",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:             "lv_size": "21470642176",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:             "name": "ceph_lv1",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:             "tags": {
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:                 "ceph.cluster_name": "ceph",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:                 "ceph.crush_device_class": "",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:                 "ceph.encrypted": "0",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:                 "ceph.osd_id": "1",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:                 "ceph.type": "block",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:                 "ceph.vdo": "0"
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:             },
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:             "type": "block",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:             "vg_name": "ceph_vg1"
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:         }
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:     ],
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:     "2": [
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:         {
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:             "devices": [
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:                 "/dev/loop5"
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:             ],
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:             "lv_name": "ceph_lv2",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:             "lv_size": "21470642176",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:             "name": "ceph_lv2",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:             "tags": {
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:                 "ceph.cluster_name": "ceph",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:                 "ceph.crush_device_class": "",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:                 "ceph.encrypted": "0",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:                 "ceph.osd_id": "2",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:                 "ceph.type": "block",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:                 "ceph.vdo": "0"
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:             },
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:             "type": "block",
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:             "vg_name": "ceph_vg2"
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:         }
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]:     ]
Nov 25 16:38:13 compute-0 upbeat_blackwell[313256]: }
Nov 25 16:38:13 compute-0 systemd[1]: libpod-4269de6bbb3be704d8c1b01e7fb777a41dc0125ae5cd24bbeb23443adc1676b3.scope: Deactivated successfully.
Nov 25 16:38:13 compute-0 podman[313240]: 2025-11-25 16:38:13.069816379 +0000 UTC m=+0.899985247 container died 4269de6bbb3be704d8c1b01e7fb777a41dc0125ae5cd24bbeb23443adc1676b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:38:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1d3cc21c3b0e1a95e4584b933fe0637a51ebdad6840ca5a21d811d33c5c5140-merged.mount: Deactivated successfully.
Nov 25 16:38:13 compute-0 podman[313240]: 2025-11-25 16:38:13.130134656 +0000 UTC m=+0.960303474 container remove 4269de6bbb3be704d8c1b01e7fb777a41dc0125ae5cd24bbeb23443adc1676b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_blackwell, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:38:13 compute-0 systemd[1]: libpod-conmon-4269de6bbb3be704d8c1b01e7fb777a41dc0125ae5cd24bbeb23443adc1676b3.scope: Deactivated successfully.
Nov 25 16:38:13 compute-0 sudo[313135]: pam_unix(sudo:session): session closed for user root
Nov 25 16:38:13 compute-0 sudo[313279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:38:13 compute-0 sudo[313279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:38:13 compute-0 sudo[313279]: pam_unix(sudo:session): session closed for user root
Nov 25 16:38:13 compute-0 nova_compute[254092]: 2025-11-25 16:38:13.262 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088678.2608998, 8ed9342e-e179-467c-993f-a92f2f7b0dff => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:38:13 compute-0 nova_compute[254092]: 2025-11-25 16:38:13.263 254096 INFO nova.compute.manager [-] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] VM Stopped (Lifecycle Event)
Nov 25 16:38:13 compute-0 sudo[313304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:38:13 compute-0 sudo[313304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:38:13 compute-0 nova_compute[254092]: 2025-11-25 16:38:13.290 254096 DEBUG nova.compute.manager [None req-dda4a91c-31df-4c87-88e8-457a539171b8 - - - - - -] [instance: 8ed9342e-e179-467c-993f-a92f2f7b0dff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:38:13 compute-0 sudo[313304]: pam_unix(sudo:session): session closed for user root
Nov 25 16:38:13 compute-0 sudo[313329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:38:13 compute-0 sudo[313329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:38:13 compute-0 sudo[313329]: pam_unix(sudo:session): session closed for user root
Nov 25 16:38:13 compute-0 nova_compute[254092]: 2025-11-25 16:38:13.376 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:13 compute-0 sudo[313354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 16:38:13 compute-0 sudo[313354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:38:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1556: 321 pgs: 321 active+clean; 200 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s wr, 1 op/s
Nov 25 16:38:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:13.615 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:38:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:13.616 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:38:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:13.617 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:38:13 compute-0 podman[313418]: 2025-11-25 16:38:13.739434585 +0000 UTC m=+0.036373798 container create b1e5a8a2e1ce763634d3dca8aca1dc431e0419f9074663ee8b018023d4cf2830 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_greider, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 16:38:13 compute-0 systemd[1]: Started libpod-conmon-b1e5a8a2e1ce763634d3dca8aca1dc431e0419f9074663ee8b018023d4cf2830.scope.
Nov 25 16:38:13 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:38:13 compute-0 podman[313418]: 2025-11-25 16:38:13.808586332 +0000 UTC m=+0.105525565 container init b1e5a8a2e1ce763634d3dca8aca1dc431e0419f9074663ee8b018023d4cf2830 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_greider, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 16:38:13 compute-0 podman[313418]: 2025-11-25 16:38:13.815083548 +0000 UTC m=+0.112022761 container start b1e5a8a2e1ce763634d3dca8aca1dc431e0419f9074663ee8b018023d4cf2830 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_greider, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:38:13 compute-0 podman[313418]: 2025-11-25 16:38:13.817673078 +0000 UTC m=+0.114612311 container attach b1e5a8a2e1ce763634d3dca8aca1dc431e0419f9074663ee8b018023d4cf2830 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_greider, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:38:13 compute-0 elegant_greider[313434]: 167 167
Nov 25 16:38:13 compute-0 podman[313418]: 2025-11-25 16:38:13.723997767 +0000 UTC m=+0.020937000 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:38:13 compute-0 systemd[1]: libpod-b1e5a8a2e1ce763634d3dca8aca1dc431e0419f9074663ee8b018023d4cf2830.scope: Deactivated successfully.
Nov 25 16:38:13 compute-0 conmon[313434]: conmon b1e5a8a2e1ce763634d3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b1e5a8a2e1ce763634d3dca8aca1dc431e0419f9074663ee8b018023d4cf2830.scope/container/memory.events
Nov 25 16:38:13 compute-0 podman[313418]: 2025-11-25 16:38:13.821318396 +0000 UTC m=+0.118257609 container died b1e5a8a2e1ce763634d3dca8aca1dc431e0419f9074663ee8b018023d4cf2830 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_greider, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 16:38:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ecf01bbd3b7b6c3a716cdca94eb1e490d2da789fe311361e4970ffb62142bae-merged.mount: Deactivated successfully.
Nov 25 16:38:13 compute-0 podman[313418]: 2025-11-25 16:38:13.856332507 +0000 UTC m=+0.153271720 container remove b1e5a8a2e1ce763634d3dca8aca1dc431e0419f9074663ee8b018023d4cf2830 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_greider, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 16:38:13 compute-0 systemd[1]: libpod-conmon-b1e5a8a2e1ce763634d3dca8aca1dc431e0419f9074663ee8b018023d4cf2830.scope: Deactivated successfully.
Nov 25 16:38:14 compute-0 podman[313459]: 2025-11-25 16:38:14.031055017 +0000 UTC m=+0.037161129 container create 147e3c876c784841e471876b2c88f3012af926d878ed5a49d11e56ed94aba68a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_perlman, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 16:38:14 compute-0 systemd[1]: Started libpod-conmon-147e3c876c784841e471876b2c88f3012af926d878ed5a49d11e56ed94aba68a.scope.
Nov 25 16:38:14 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:38:14 compute-0 podman[313459]: 2025-11-25 16:38:14.014619871 +0000 UTC m=+0.020726003 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1df4501f04cf9e44d4866daa8994463853d444555c973ff3a08940ea0dbd5b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1df4501f04cf9e44d4866daa8994463853d444555c973ff3a08940ea0dbd5b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1df4501f04cf9e44d4866daa8994463853d444555c973ff3a08940ea0dbd5b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1df4501f04cf9e44d4866daa8994463853d444555c973ff3a08940ea0dbd5b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:38:14 compute-0 podman[313459]: 2025-11-25 16:38:14.130260269 +0000 UTC m=+0.136366401 container init 147e3c876c784841e471876b2c88f3012af926d878ed5a49d11e56ed94aba68a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:38:14 compute-0 podman[313459]: 2025-11-25 16:38:14.138631806 +0000 UTC m=+0.144737938 container start 147e3c876c784841e471876b2c88f3012af926d878ed5a49d11e56ed94aba68a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_perlman, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 16:38:14 compute-0 podman[313459]: 2025-11-25 16:38:14.142235653 +0000 UTC m=+0.148341785 container attach 147e3c876c784841e471876b2c88f3012af926d878ed5a49d11e56ed94aba68a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 16:38:14 compute-0 nova_compute[254092]: 2025-11-25 16:38:14.961 254096 DEBUG nova.network.neutron [None req-f956ac81-8a47-4483-b4f7-97d829c5976a ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Updating instance_info_cache with network_info: [{"id": "1693689c-371b-40fb-8153-5313e926d910", "address": "fa:16:3e:fc:c2:fe", "network": {"id": "0c61a44f-bcff-4141-9691-0b0cd16e5793", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1067646278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8b9be1565d148a3ac487eacb391dc1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1693689c-37", "ovs_interfaceid": "1693689c-371b-40fb-8153-5313e926d910", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:38:14 compute-0 nova_compute[254092]: 2025-11-25 16:38:14.990 254096 DEBUG oslo_concurrency.lockutils [None req-f956ac81-8a47-4483-b4f7-97d829c5976a ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Releasing lock "refresh_cache-d3356685-91bc-46b9-9b9f-87ffce31a4ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:38:14 compute-0 nova_compute[254092]: 2025-11-25 16:38:14.991 254096 DEBUG nova.compute.manager [None req-f956ac81-8a47-4483-b4f7-97d829c5976a ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Nov 25 16:38:14 compute-0 nova_compute[254092]: 2025-11-25 16:38:14.991 254096 DEBUG nova.compute.manager [None req-f956ac81-8a47-4483-b4f7-97d829c5976a ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] network_info to inject: |[{"id": "1693689c-371b-40fb-8153-5313e926d910", "address": "fa:16:3e:fc:c2:fe", "network": {"id": "0c61a44f-bcff-4141-9691-0b0cd16e5793", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1067646278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8b9be1565d148a3ac487eacb391dc1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1693689c-37", "ovs_interfaceid": "1693689c-371b-40fb-8153-5313e926d910", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Nov 25 16:38:14 compute-0 nova_compute[254092]: 2025-11-25 16:38:14.994 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-d3356685-91bc-46b9-9b9f-87ffce31a4ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:38:14 compute-0 nova_compute[254092]: 2025-11-25 16:38:14.994 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 16:38:14 compute-0 nova_compute[254092]: 2025-11-25 16:38:14.994 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d3356685-91bc-46b9-9b9f-87ffce31a4ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:38:15 compute-0 ceph-mon[74985]: pgmap v1556: 321 pgs: 321 active+clean; 200 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s wr, 1 op/s
Nov 25 16:38:15 compute-0 nova_compute[254092]: 2025-11-25 16:38:15.080 254096 DEBUG nova.virt.libvirt.driver [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Instance in state 1 after 43 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 16:38:15 compute-0 gracious_perlman[313475]: {
Nov 25 16:38:15 compute-0 gracious_perlman[313475]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 16:38:15 compute-0 gracious_perlman[313475]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:38:15 compute-0 gracious_perlman[313475]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 16:38:15 compute-0 gracious_perlman[313475]:         "osd_id": 1,
Nov 25 16:38:15 compute-0 gracious_perlman[313475]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:38:15 compute-0 gracious_perlman[313475]:         "type": "bluestore"
Nov 25 16:38:15 compute-0 gracious_perlman[313475]:     },
Nov 25 16:38:15 compute-0 gracious_perlman[313475]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 16:38:15 compute-0 gracious_perlman[313475]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:38:15 compute-0 gracious_perlman[313475]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 16:38:15 compute-0 gracious_perlman[313475]:         "osd_id": 2,
Nov 25 16:38:15 compute-0 gracious_perlman[313475]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:38:15 compute-0 gracious_perlman[313475]:         "type": "bluestore"
Nov 25 16:38:15 compute-0 gracious_perlman[313475]:     },
Nov 25 16:38:15 compute-0 gracious_perlman[313475]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 16:38:15 compute-0 gracious_perlman[313475]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:38:15 compute-0 gracious_perlman[313475]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 16:38:15 compute-0 gracious_perlman[313475]:         "osd_id": 0,
Nov 25 16:38:15 compute-0 gracious_perlman[313475]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:38:15 compute-0 gracious_perlman[313475]:         "type": "bluestore"
Nov 25 16:38:15 compute-0 gracious_perlman[313475]:     }
Nov 25 16:38:15 compute-0 gracious_perlman[313475]: }
Nov 25 16:38:15 compute-0 systemd[1]: libpod-147e3c876c784841e471876b2c88f3012af926d878ed5a49d11e56ed94aba68a.scope: Deactivated successfully.
Nov 25 16:38:15 compute-0 podman[313459]: 2025-11-25 16:38:15.205612513 +0000 UTC m=+1.211718625 container died 147e3c876c784841e471876b2c88f3012af926d878ed5a49d11e56ed94aba68a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_perlman, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:38:15 compute-0 systemd[1]: libpod-147e3c876c784841e471876b2c88f3012af926d878ed5a49d11e56ed94aba68a.scope: Consumed 1.068s CPU time.
Nov 25 16:38:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1df4501f04cf9e44d4866daa8994463853d444555c973ff3a08940ea0dbd5b8-merged.mount: Deactivated successfully.
Nov 25 16:38:15 compute-0 podman[313459]: 2025-11-25 16:38:15.263057931 +0000 UTC m=+1.269164043 container remove 147e3c876c784841e471876b2c88f3012af926d878ed5a49d11e56ed94aba68a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_perlman, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Nov 25 16:38:15 compute-0 systemd[1]: libpod-conmon-147e3c876c784841e471876b2c88f3012af926d878ed5a49d11e56ed94aba68a.scope: Deactivated successfully.
Nov 25 16:38:15 compute-0 nova_compute[254092]: 2025-11-25 16:38:15.310 254096 DEBUG oslo_concurrency.lockutils [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Acquiring lock "d3356685-91bc-46b9-9b9f-87ffce31a4ab" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:38:15 compute-0 nova_compute[254092]: 2025-11-25 16:38:15.311 254096 DEBUG oslo_concurrency.lockutils [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Lock "d3356685-91bc-46b9-9b9f-87ffce31a4ab" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:38:15 compute-0 nova_compute[254092]: 2025-11-25 16:38:15.311 254096 DEBUG oslo_concurrency.lockutils [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Acquiring lock "d3356685-91bc-46b9-9b9f-87ffce31a4ab-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:38:15 compute-0 nova_compute[254092]: 2025-11-25 16:38:15.311 254096 DEBUG oslo_concurrency.lockutils [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Lock "d3356685-91bc-46b9-9b9f-87ffce31a4ab-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:38:15 compute-0 nova_compute[254092]: 2025-11-25 16:38:15.311 254096 DEBUG oslo_concurrency.lockutils [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Lock "d3356685-91bc-46b9-9b9f-87ffce31a4ab-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:38:15 compute-0 nova_compute[254092]: 2025-11-25 16:38:15.313 254096 INFO nova.compute.manager [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Terminating instance
Nov 25 16:38:15 compute-0 nova_compute[254092]: 2025-11-25 16:38:15.313 254096 DEBUG nova.compute.manager [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:38:15 compute-0 sudo[313354]: pam_unix(sudo:session): session closed for user root
Nov 25 16:38:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:38:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:38:15 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:38:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:38:15 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:38:15 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev e226de30-7687-4daf-ab29-ced4fefb7d15 does not exist
Nov 25 16:38:15 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev c3d74471-60c4-44e4-8669-99240d9a3dcc does not exist
Nov 25 16:38:15 compute-0 kernel: tap1693689c-37 (unregistering): left promiscuous mode
Nov 25 16:38:15 compute-0 NetworkManager[48891]: <info>  [1764088695.3809] device (tap1693689c-37): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:38:15 compute-0 ovn_controller[153477]: 2025-11-25T16:38:15Z|00473|binding|INFO|Releasing lport 1693689c-371b-40fb-8153-5313e926d910 from this chassis (sb_readonly=0)
Nov 25 16:38:15 compute-0 ovn_controller[153477]: 2025-11-25T16:38:15Z|00474|binding|INFO|Setting lport 1693689c-371b-40fb-8153-5313e926d910 down in Southbound
Nov 25 16:38:15 compute-0 nova_compute[254092]: 2025-11-25 16:38:15.389 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:15 compute-0 ovn_controller[153477]: 2025-11-25T16:38:15Z|00475|binding|INFO|Removing iface tap1693689c-37 ovn-installed in OVS
Nov 25 16:38:15 compute-0 nova_compute[254092]: 2025-11-25 16:38:15.392 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:15.397 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:c2:fe 10.100.0.14'], port_security=['fa:16:3e:fc:c2:fe 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'd3356685-91bc-46b9-9b9f-87ffce31a4ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0c61a44f-bcff-4141-9691-0b0cd16e5793', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c8b9be1565d148a3ac487eacb391dc1f', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'f328e556-e196-4e21-8b60-04c34108b4ac', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.177'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d8c5f31d-3e5c-4add-b1b6-dfe24828d28e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1693689c-371b-40fb-8153-5313e926d910) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:38:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:15.398 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1693689c-371b-40fb-8153-5313e926d910 in datapath 0c61a44f-bcff-4141-9691-0b0cd16e5793 unbound from our chassis
Nov 25 16:38:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:15.399 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0c61a44f-bcff-4141-9691-0b0cd16e5793, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:38:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:15.401 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[40a9d26f-968c-423f-ad14-67693c7f0a55]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:38:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:15.401 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0c61a44f-bcff-4141-9691-0b0cd16e5793 namespace which is not needed anymore
Nov 25 16:38:15 compute-0 nova_compute[254092]: 2025-11-25 16:38:15.406 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:15 compute-0 sudo[313522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:38:15 compute-0 sudo[313522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:38:15 compute-0 sudo[313522]: pam_unix(sudo:session): session closed for user root
Nov 25 16:38:15 compute-0 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000032.scope: Deactivated successfully.
Nov 25 16:38:15 compute-0 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000032.scope: Consumed 15.622s CPU time.
Nov 25 16:38:15 compute-0 systemd-machined[216343]: Machine qemu-59-instance-00000032 terminated.
Nov 25 16:38:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1557: 321 pgs: 321 active+clean; 200 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s wr, 1 op/s
Nov 25 16:38:15 compute-0 sudo[313558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 16:38:15 compute-0 sudo[313558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:38:15 compute-0 sudo[313558]: pam_unix(sudo:session): session closed for user root
Nov 25 16:38:15 compute-0 nova_compute[254092]: 2025-11-25 16:38:15.555 254096 INFO nova.virt.libvirt.driver [-] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Instance destroyed successfully.
Nov 25 16:38:15 compute-0 nova_compute[254092]: 2025-11-25 16:38:15.557 254096 DEBUG nova.objects.instance [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Lazy-loading 'resources' on Instance uuid d3356685-91bc-46b9-9b9f-87ffce31a4ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:38:15 compute-0 neutron-haproxy-ovnmeta-0c61a44f-bcff-4141-9691-0b0cd16e5793[311615]: [NOTICE]   (311619) : haproxy version is 2.8.14-c23fe91
Nov 25 16:38:15 compute-0 neutron-haproxy-ovnmeta-0c61a44f-bcff-4141-9691-0b0cd16e5793[311615]: [NOTICE]   (311619) : path to executable is /usr/sbin/haproxy
Nov 25 16:38:15 compute-0 neutron-haproxy-ovnmeta-0c61a44f-bcff-4141-9691-0b0cd16e5793[311615]: [WARNING]  (311619) : Exiting Master process...
Nov 25 16:38:15 compute-0 nova_compute[254092]: 2025-11-25 16:38:15.576 254096 DEBUG nova.virt.libvirt.vif [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:37:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-2097469038',display_name='tempest-AttachInterfacesUnderV243Test-server-2097469038',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-2097469038',id=50,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK35kG1vwXJXdI4r/EHWJZkHbZ92CrcmDm6T8HHIBEabt8dsD4hwgL2ByxTJp0aD3PDPswuWtqGhIZZ1n6EYekgLgLZqS6KsMbAxaY/ldKY87IH4bSdNTYm2tWLgSZE5MA==',key_name='tempest-keypair-935791717',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:37:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c8b9be1565d148a3ac487eacb391dc1f',ramdisk_id='',reservation_id='r-nejw6g9z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-1206829083',owner_user_name='tempest-AttachInterfacesUnderV243Test-1206829083-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:38:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ec49417447ad4a98b1f890ed78fd5b41',uuid=d3356685-91bc-46b9-9b9f-87ffce31a4ab,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1693689c-371b-40fb-8153-5313e926d910", "address": "fa:16:3e:fc:c2:fe", "network": {"id": "0c61a44f-bcff-4141-9691-0b0cd16e5793", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1067646278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8b9be1565d148a3ac487eacb391dc1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1693689c-37", "ovs_interfaceid": "1693689c-371b-40fb-8153-5313e926d910", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:38:15 compute-0 nova_compute[254092]: 2025-11-25 16:38:15.576 254096 DEBUG nova.network.os_vif_util [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Converting VIF {"id": "1693689c-371b-40fb-8153-5313e926d910", "address": "fa:16:3e:fc:c2:fe", "network": {"id": "0c61a44f-bcff-4141-9691-0b0cd16e5793", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1067646278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8b9be1565d148a3ac487eacb391dc1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1693689c-37", "ovs_interfaceid": "1693689c-371b-40fb-8153-5313e926d910", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:38:15 compute-0 nova_compute[254092]: 2025-11-25 16:38:15.577 254096 DEBUG nova.network.os_vif_util [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fc:c2:fe,bridge_name='br-int',has_traffic_filtering=True,id=1693689c-371b-40fb-8153-5313e926d910,network=Network(0c61a44f-bcff-4141-9691-0b0cd16e5793),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1693689c-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:38:15 compute-0 neutron-haproxy-ovnmeta-0c61a44f-bcff-4141-9691-0b0cd16e5793[311615]: [ALERT]    (311619) : Current worker (311621) exited with code 143 (Terminated)
Nov 25 16:38:15 compute-0 neutron-haproxy-ovnmeta-0c61a44f-bcff-4141-9691-0b0cd16e5793[311615]: [WARNING]  (311619) : All workers exited. Exiting... (0)
Nov 25 16:38:15 compute-0 nova_compute[254092]: 2025-11-25 16:38:15.577 254096 DEBUG os_vif [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fc:c2:fe,bridge_name='br-int',has_traffic_filtering=True,id=1693689c-371b-40fb-8153-5313e926d910,network=Network(0c61a44f-bcff-4141-9691-0b0cd16e5793),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1693689c-37') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:38:15 compute-0 nova_compute[254092]: 2025-11-25 16:38:15.579 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:15 compute-0 nova_compute[254092]: 2025-11-25 16:38:15.579 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1693689c-37, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:38:15 compute-0 systemd[1]: libpod-f9fd7f0c450a559eb99daa3c4e9a7352bab1af0831ec4c8ea0d3f03c8f107440.scope: Deactivated successfully.
Nov 25 16:38:15 compute-0 nova_compute[254092]: 2025-11-25 16:38:15.581 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:15 compute-0 nova_compute[254092]: 2025-11-25 16:38:15.584 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:15 compute-0 podman[313594]: 2025-11-25 16:38:15.587577815 +0000 UTC m=+0.066325581 container died f9fd7f0c450a559eb99daa3c4e9a7352bab1af0831ec4c8ea0d3f03c8f107440 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0c61a44f-bcff-4141-9691-0b0cd16e5793, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 16:38:15 compute-0 nova_compute[254092]: 2025-11-25 16:38:15.588 254096 INFO os_vif [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fc:c2:fe,bridge_name='br-int',has_traffic_filtering=True,id=1693689c-371b-40fb-8153-5313e926d910,network=Network(0c61a44f-bcff-4141-9691-0b0cd16e5793),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1693689c-37')
Nov 25 16:38:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-8389ed7fca4d6f0aa03377187ba2932fde612ac6153d8db17d6a663aaf209d6a-merged.mount: Deactivated successfully.
Nov 25 16:38:15 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f9fd7f0c450a559eb99daa3c4e9a7352bab1af0831ec4c8ea0d3f03c8f107440-userdata-shm.mount: Deactivated successfully.
Nov 25 16:38:15 compute-0 podman[313594]: 2025-11-25 16:38:15.633040998 +0000 UTC m=+0.111788754 container cleanup f9fd7f0c450a559eb99daa3c4e9a7352bab1af0831ec4c8ea0d3f03c8f107440 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0c61a44f-bcff-4141-9691-0b0cd16e5793, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 16:38:15 compute-0 systemd[1]: libpod-conmon-f9fd7f0c450a559eb99daa3c4e9a7352bab1af0831ec4c8ea0d3f03c8f107440.scope: Deactivated successfully.
Nov 25 16:38:15 compute-0 podman[313654]: 2025-11-25 16:38:15.702585515 +0000 UTC m=+0.047016277 container remove f9fd7f0c450a559eb99daa3c4e9a7352bab1af0831ec4c8ea0d3f03c8f107440 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0c61a44f-bcff-4141-9691-0b0cd16e5793, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:38:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:15.709 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[41772acc-d847-4d07-b2bb-e5ce11a69edf]: (4, ('Tue Nov 25 04:38:15 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0c61a44f-bcff-4141-9691-0b0cd16e5793 (f9fd7f0c450a559eb99daa3c4e9a7352bab1af0831ec4c8ea0d3f03c8f107440)\nf9fd7f0c450a559eb99daa3c4e9a7352bab1af0831ec4c8ea0d3f03c8f107440\nTue Nov 25 04:38:15 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0c61a44f-bcff-4141-9691-0b0cd16e5793 (f9fd7f0c450a559eb99daa3c4e9a7352bab1af0831ec4c8ea0d3f03c8f107440)\nf9fd7f0c450a559eb99daa3c4e9a7352bab1af0831ec4c8ea0d3f03c8f107440\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:38:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:15.711 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[aa20833c-3a08-497c-84d4-83349b34b214]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:38:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:15.712 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0c61a44f-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:38:15 compute-0 nova_compute[254092]: 2025-11-25 16:38:15.714 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:15 compute-0 kernel: tap0c61a44f-b0: left promiscuous mode
Nov 25 16:38:15 compute-0 nova_compute[254092]: 2025-11-25 16:38:15.730 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:15.733 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f45e02e4-2b19-4081-b520-5182b0fc8bbf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:38:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:15.746 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f2b76769-238e-4807-99f8-d7048edad45f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:38:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:15.747 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b520c478-1c1e-4187-97c2-938864973bc6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:38:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:15.768 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[099a2e90-2519-477f-b1cc-5781c0fbd8ba]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 509968, 'reachable_time': 44554, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313667, 'error': None, 'target': 'ovnmeta-0c61a44f-bcff-4141-9691-0b0cd16e5793', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:38:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:15.771 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0c61a44f-bcff-4141-9691-0b0cd16e5793 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:38:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:15.772 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[5e64deeb-b702-44e5-8bdc-d5463a095850]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:38:15 compute-0 systemd[1]: run-netns-ovnmeta\x2d0c61a44f\x2dbcff\x2d4141\x2d9691\x2d0b0cd16e5793.mount: Deactivated successfully.
Nov 25 16:38:16 compute-0 nova_compute[254092]: 2025-11-25 16:38:16.050 254096 INFO nova.virt.libvirt.driver [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Deleting instance files /var/lib/nova/instances/d3356685-91bc-46b9-9b9f-87ffce31a4ab_del
Nov 25 16:38:16 compute-0 nova_compute[254092]: 2025-11-25 16:38:16.052 254096 INFO nova.virt.libvirt.driver [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Deletion of /var/lib/nova/instances/d3356685-91bc-46b9-9b9f-87ffce31a4ab_del complete
Nov 25 16:38:16 compute-0 nova_compute[254092]: 2025-11-25 16:38:16.254 254096 INFO nova.compute.manager [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Took 0.94 seconds to destroy the instance on the hypervisor.
Nov 25 16:38:16 compute-0 nova_compute[254092]: 2025-11-25 16:38:16.254 254096 DEBUG oslo.service.loopingcall [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:38:16 compute-0 nova_compute[254092]: 2025-11-25 16:38:16.254 254096 DEBUG nova.compute.manager [-] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:38:16 compute-0 nova_compute[254092]: 2025-11-25 16:38:16.255 254096 DEBUG nova.network.neutron [-] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:38:16 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:38:16 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:38:16 compute-0 ceph-mon[74985]: pgmap v1557: 321 pgs: 321 active+clean; 200 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s wr, 1 op/s
Nov 25 16:38:16 compute-0 nova_compute[254092]: 2025-11-25 16:38:16.517 254096 DEBUG nova.compute.manager [req-798306d1-38fc-4cdf-9ca3-f138649e9e90 req-d1177f04-daf0-450d-a65a-838f6a11cfa3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Received event network-vif-unplugged-1693689c-371b-40fb-8153-5313e926d910 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:38:16 compute-0 nova_compute[254092]: 2025-11-25 16:38:16.517 254096 DEBUG oslo_concurrency.lockutils [req-798306d1-38fc-4cdf-9ca3-f138649e9e90 req-d1177f04-daf0-450d-a65a-838f6a11cfa3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "d3356685-91bc-46b9-9b9f-87ffce31a4ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:38:16 compute-0 nova_compute[254092]: 2025-11-25 16:38:16.517 254096 DEBUG oslo_concurrency.lockutils [req-798306d1-38fc-4cdf-9ca3-f138649e9e90 req-d1177f04-daf0-450d-a65a-838f6a11cfa3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d3356685-91bc-46b9-9b9f-87ffce31a4ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:38:16 compute-0 nova_compute[254092]: 2025-11-25 16:38:16.518 254096 DEBUG oslo_concurrency.lockutils [req-798306d1-38fc-4cdf-9ca3-f138649e9e90 req-d1177f04-daf0-450d-a65a-838f6a11cfa3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d3356685-91bc-46b9-9b9f-87ffce31a4ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:38:16 compute-0 nova_compute[254092]: 2025-11-25 16:38:16.518 254096 DEBUG nova.compute.manager [req-798306d1-38fc-4cdf-9ca3-f138649e9e90 req-d1177f04-daf0-450d-a65a-838f6a11cfa3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] No waiting events found dispatching network-vif-unplugged-1693689c-371b-40fb-8153-5313e926d910 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:38:16 compute-0 nova_compute[254092]: 2025-11-25 16:38:16.518 254096 DEBUG nova.compute.manager [req-798306d1-38fc-4cdf-9ca3-f138649e9e90 req-d1177f04-daf0-450d-a65a-838f6a11cfa3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Received event network-vif-unplugged-1693689c-371b-40fb-8153-5313e926d910 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:38:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1558: 321 pgs: 321 active+clean; 168 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 4.0 KiB/s wr, 17 op/s
Nov 25 16:38:17 compute-0 nova_compute[254092]: 2025-11-25 16:38:17.518 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Updating instance_info_cache with network_info: [{"id": "1693689c-371b-40fb-8153-5313e926d910", "address": "fa:16:3e:fc:c2:fe", "network": {"id": "0c61a44f-bcff-4141-9691-0b0cd16e5793", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1067646278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8b9be1565d148a3ac487eacb391dc1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1693689c-37", "ovs_interfaceid": "1693689c-371b-40fb-8153-5313e926d910", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:38:17 compute-0 nova_compute[254092]: 2025-11-25 16:38:17.531 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-d3356685-91bc-46b9-9b9f-87ffce31a4ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:38:17 compute-0 nova_compute[254092]: 2025-11-25 16:38:17.532 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 16:38:17 compute-0 nova_compute[254092]: 2025-11-25 16:38:17.532 254096 DEBUG oslo_concurrency.lockutils [req-778dada2-2b5f-4f97-9d49-b48e3b07820e req-f59f0d11-1ae6-46c7-a778-743b56917899 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-d3356685-91bc-46b9-9b9f-87ffce31a4ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:38:17 compute-0 nova_compute[254092]: 2025-11-25 16:38:17.532 254096 DEBUG nova.network.neutron [req-778dada2-2b5f-4f97-9d49-b48e3b07820e req-f59f0d11-1ae6-46c7-a778-743b56917899 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Refreshing network info cache for port 1693689c-371b-40fb-8153-5313e926d910 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:38:17 compute-0 nova_compute[254092]: 2025-11-25 16:38:17.533 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:38:17 compute-0 nova_compute[254092]: 2025-11-25 16:38:17.533 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 16:38:17 compute-0 nova_compute[254092]: 2025-11-25 16:38:17.560 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 16:38:17 compute-0 nova_compute[254092]: 2025-11-25 16:38:17.578 254096 DEBUG nova.network.neutron [-] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:38:17 compute-0 nova_compute[254092]: 2025-11-25 16:38:17.607 254096 INFO nova.compute.manager [-] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Took 1.35 seconds to deallocate network for instance.
Nov 25 16:38:17 compute-0 nova_compute[254092]: 2025-11-25 16:38:17.649 254096 DEBUG oslo_concurrency.lockutils [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:38:17 compute-0 nova_compute[254092]: 2025-11-25 16:38:17.650 254096 DEBUG oslo_concurrency.lockutils [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:38:17 compute-0 nova_compute[254092]: 2025-11-25 16:38:17.735 254096 DEBUG oslo_concurrency.processutils [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:38:17 compute-0 nova_compute[254092]: 2025-11-25 16:38:17.921 254096 INFO nova.network.neutron [req-778dada2-2b5f-4f97-9d49-b48e3b07820e req-f59f0d11-1ae6-46c7-a778-743b56917899 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Port 1693689c-371b-40fb-8153-5313e926d910 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Nov 25 16:38:17 compute-0 nova_compute[254092]: 2025-11-25 16:38:17.922 254096 DEBUG nova.network.neutron [req-778dada2-2b5f-4f97-9d49-b48e3b07820e req-f59f0d11-1ae6-46c7-a778-743b56917899 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:38:17 compute-0 nova_compute[254092]: 2025-11-25 16:38:17.936 254096 DEBUG oslo_concurrency.lockutils [req-778dada2-2b5f-4f97-9d49-b48e3b07820e req-f59f0d11-1ae6-46c7-a778-743b56917899 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-d3356685-91bc-46b9-9b9f-87ffce31a4ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:38:18 compute-0 nova_compute[254092]: 2025-11-25 16:38:18.037 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:18 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:38:18 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3355352229' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:38:18 compute-0 nova_compute[254092]: 2025-11-25 16:38:18.199 254096 DEBUG oslo_concurrency.processutils [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:38:18 compute-0 nova_compute[254092]: 2025-11-25 16:38:18.208 254096 DEBUG nova.compute.provider_tree [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:38:18 compute-0 nova_compute[254092]: 2025-11-25 16:38:18.223 254096 DEBUG nova.scheduler.client.report [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:38:18 compute-0 nova_compute[254092]: 2025-11-25 16:38:18.242 254096 DEBUG oslo_concurrency.lockutils [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:38:18 compute-0 nova_compute[254092]: 2025-11-25 16:38:18.291 254096 INFO nova.scheduler.client.report [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Deleted allocations for instance d3356685-91bc-46b9-9b9f-87ffce31a4ab
Nov 25 16:38:18 compute-0 nova_compute[254092]: 2025-11-25 16:38:18.375 254096 DEBUG oslo_concurrency.lockutils [None req-7d3264e4-ae8c-48eb-b4f2-7cd8bc1c7ab3 ec49417447ad4a98b1f890ed78fd5b41 c8b9be1565d148a3ac487eacb391dc1f - - default default] Lock "d3356685-91bc-46b9-9b9f-87ffce31a4ab" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:38:18 compute-0 ceph-mon[74985]: pgmap v1558: 321 pgs: 321 active+clean; 168 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 4.0 KiB/s wr, 17 op/s
Nov 25 16:38:18 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3355352229' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:38:18 compute-0 nova_compute[254092]: 2025-11-25 16:38:18.656 254096 DEBUG nova.compute.manager [req-d82a5497-7687-4267-8eed-776932463eb9 req-ac713e57-5b31-41cd-9f1f-82b4ecad416f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Received event network-vif-plugged-1693689c-371b-40fb-8153-5313e926d910 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:38:18 compute-0 nova_compute[254092]: 2025-11-25 16:38:18.656 254096 DEBUG oslo_concurrency.lockutils [req-d82a5497-7687-4267-8eed-776932463eb9 req-ac713e57-5b31-41cd-9f1f-82b4ecad416f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "d3356685-91bc-46b9-9b9f-87ffce31a4ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:38:18 compute-0 nova_compute[254092]: 2025-11-25 16:38:18.657 254096 DEBUG oslo_concurrency.lockutils [req-d82a5497-7687-4267-8eed-776932463eb9 req-ac713e57-5b31-41cd-9f1f-82b4ecad416f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d3356685-91bc-46b9-9b9f-87ffce31a4ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:38:18 compute-0 nova_compute[254092]: 2025-11-25 16:38:18.657 254096 DEBUG oslo_concurrency.lockutils [req-d82a5497-7687-4267-8eed-776932463eb9 req-ac713e57-5b31-41cd-9f1f-82b4ecad416f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d3356685-91bc-46b9-9b9f-87ffce31a4ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:38:18 compute-0 nova_compute[254092]: 2025-11-25 16:38:18.657 254096 DEBUG nova.compute.manager [req-d82a5497-7687-4267-8eed-776932463eb9 req-ac713e57-5b31-41cd-9f1f-82b4ecad416f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] No waiting events found dispatching network-vif-plugged-1693689c-371b-40fb-8153-5313e926d910 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:38:18 compute-0 nova_compute[254092]: 2025-11-25 16:38:18.658 254096 WARNING nova.compute.manager [req-d82a5497-7687-4267-8eed-776932463eb9 req-ac713e57-5b31-41cd-9f1f-82b4ecad416f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Received unexpected event network-vif-plugged-1693689c-371b-40fb-8153-5313e926d910 for instance with vm_state deleted and task_state None.
Nov 25 16:38:18 compute-0 nova_compute[254092]: 2025-11-25 16:38:18.658 254096 DEBUG nova.compute.manager [req-d82a5497-7687-4267-8eed-776932463eb9 req-ac713e57-5b31-41cd-9f1f-82b4ecad416f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Received event network-vif-deleted-1693689c-371b-40fb-8153-5313e926d910 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:38:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1559: 321 pgs: 321 active+clean; 168 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.7 KiB/s wr, 17 op/s
Nov 25 16:38:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:38:20 compute-0 ceph-mon[74985]: pgmap v1559: 321 pgs: 321 active+clean; 168 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.7 KiB/s wr, 17 op/s
Nov 25 16:38:20 compute-0 nova_compute[254092]: 2025-11-25 16:38:20.582 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1560: 321 pgs: 321 active+clean; 121 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Nov 25 16:38:22 compute-0 ceph-mon[74985]: pgmap v1560: 321 pgs: 321 active+clean; 121 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Nov 25 16:38:22 compute-0 nova_compute[254092]: 2025-11-25 16:38:22.991 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:23 compute-0 nova_compute[254092]: 2025-11-25 16:38:23.040 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1561: 321 pgs: 321 active+clean; 121 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 4.2 KiB/s wr, 28 op/s
Nov 25 16:38:24 compute-0 ceph-mon[74985]: pgmap v1561: 321 pgs: 321 active+clean; 121 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 4.2 KiB/s wr, 28 op/s
Nov 25 16:38:25 compute-0 nova_compute[254092]: 2025-11-25 16:38:25.088 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:25 compute-0 ovn_controller[153477]: 2025-11-25T16:38:25Z|00476|binding|INFO|Releasing lport d55c2366-e0b4-4cb8-86a6-03b9430fb138 from this chassis (sb_readonly=0)
Nov 25 16:38:25 compute-0 nova_compute[254092]: 2025-11-25 16:38:25.314 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:38:25 compute-0 ovn_controller[153477]: 2025-11-25T16:38:25Z|00477|binding|INFO|Releasing lport d55c2366-e0b4-4cb8-86a6-03b9430fb138 from this chassis (sb_readonly=0)
Nov 25 16:38:25 compute-0 nova_compute[254092]: 2025-11-25 16:38:25.445 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1562: 321 pgs: 321 active+clean; 121 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 4.2 KiB/s wr, 28 op/s
Nov 25 16:38:25 compute-0 nova_compute[254092]: 2025-11-25 16:38:25.584 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:26 compute-0 nova_compute[254092]: 2025-11-25 16:38:26.132 254096 DEBUG nova.virt.libvirt.driver [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Instance in state 1 after 54 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 16:38:26 compute-0 ceph-mon[74985]: pgmap v1562: 321 pgs: 321 active+clean; 121 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 4.2 KiB/s wr, 28 op/s
Nov 25 16:38:27 compute-0 kernel: tapd90f4f5a-3c (unregistering): left promiscuous mode
Nov 25 16:38:27 compute-0 NetworkManager[48891]: <info>  [1764088707.4332] device (tapd90f4f5a-3c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:38:27 compute-0 nova_compute[254092]: 2025-11-25 16:38:27.446 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:27 compute-0 ovn_controller[153477]: 2025-11-25T16:38:27Z|00478|binding|INFO|Releasing lport d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca from this chassis (sb_readonly=0)
Nov 25 16:38:27 compute-0 ovn_controller[153477]: 2025-11-25T16:38:27Z|00479|binding|INFO|Setting lport d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca down in Southbound
Nov 25 16:38:27 compute-0 ovn_controller[153477]: 2025-11-25T16:38:27Z|00480|binding|INFO|Removing iface tapd90f4f5a-3c ovn-installed in OVS
Nov 25 16:38:27 compute-0 nova_compute[254092]: 2025-11-25 16:38:27.449 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:27.455 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:76:f8 10.100.0.10'], port_security=['fa:16:3e:b0:76:f8 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '497caf1f-53fe-425d-8e5c-10b2f0a2506d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e469a950-7044-48bc-a261-bc9effe2c2aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff5d31ffc7934838a461d6ac8aa546c9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dcbb2958-ef75-485e-8580-da48580c7577', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff181adb-f502-4aa2-aaa5-e1f9a56ea733, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:38:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:27.457 163338 INFO neutron.agent.ovn.metadata.agent [-] Port d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca in datapath e469a950-7044-48bc-a261-bc9effe2c2aa unbound from our chassis
Nov 25 16:38:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:27.459 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e469a950-7044-48bc-a261-bc9effe2c2aa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:38:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:27.460 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[93ff0d7a-079b-4c23-a663-e135e44ad8b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:38:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:27.461 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa namespace which is not needed anymore
Nov 25 16:38:27 compute-0 nova_compute[254092]: 2025-11-25 16:38:27.467 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1563: 321 pgs: 321 active+clean; 121 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.5 KiB/s wr, 28 op/s
Nov 25 16:38:27 compute-0 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000034.scope: Deactivated successfully.
Nov 25 16:38:27 compute-0 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000034.scope: Consumed 15.691s CPU time.
Nov 25 16:38:27 compute-0 systemd-machined[216343]: Machine qemu-61-instance-00000034 terminated.
Nov 25 16:38:27 compute-0 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[311886]: [NOTICE]   (311890) : haproxy version is 2.8.14-c23fe91
Nov 25 16:38:27 compute-0 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[311886]: [NOTICE]   (311890) : path to executable is /usr/sbin/haproxy
Nov 25 16:38:27 compute-0 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[311886]: [WARNING]  (311890) : Exiting Master process...
Nov 25 16:38:27 compute-0 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[311886]: [WARNING]  (311890) : Exiting Master process...
Nov 25 16:38:27 compute-0 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[311886]: [ALERT]    (311890) : Current worker (311892) exited with code 143 (Terminated)
Nov 25 16:38:27 compute-0 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[311886]: [WARNING]  (311890) : All workers exited. Exiting... (0)
Nov 25 16:38:27 compute-0 systemd[1]: libpod-8ed97d98a6d8c8ff4fc6850b6df849b8a8c49c36c837c0327bf8875b87b215e8.scope: Deactivated successfully.
Nov 25 16:38:27 compute-0 podman[313716]: 2025-11-25 16:38:27.601918748 +0000 UTC m=+0.050490890 container died 8ed97d98a6d8c8ff4fc6850b6df849b8a8c49c36c837c0327bf8875b87b215e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 16:38:27 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8ed97d98a6d8c8ff4fc6850b6df849b8a8c49c36c837c0327bf8875b87b215e8-userdata-shm.mount: Deactivated successfully.
Nov 25 16:38:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-4af3758759c7473d6d0c9e6e56bd70b29b7aef6fe03951135d48e66ecc34c2f1-merged.mount: Deactivated successfully.
Nov 25 16:38:27 compute-0 podman[313716]: 2025-11-25 16:38:27.649381416 +0000 UTC m=+0.097953548 container cleanup 8ed97d98a6d8c8ff4fc6850b6df849b8a8c49c36c837c0327bf8875b87b215e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:38:27 compute-0 systemd[1]: libpod-conmon-8ed97d98a6d8c8ff4fc6850b6df849b8a8c49c36c837c0327bf8875b87b215e8.scope: Deactivated successfully.
Nov 25 16:38:27 compute-0 nova_compute[254092]: 2025-11-25 16:38:27.705 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:27 compute-0 podman[313747]: 2025-11-25 16:38:27.762912306 +0000 UTC m=+0.047317254 container remove 8ed97d98a6d8c8ff4fc6850b6df849b8a8c49c36c837c0327bf8875b87b215e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS)
Nov 25 16:38:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:27.771 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8a746292-edd1-4128-a701-bcf01cd77ed4]: (4, ('Tue Nov 25 04:38:27 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa (8ed97d98a6d8c8ff4fc6850b6df849b8a8c49c36c837c0327bf8875b87b215e8)\n8ed97d98a6d8c8ff4fc6850b6df849b8a8c49c36c837c0327bf8875b87b215e8\nTue Nov 25 04:38:27 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa (8ed97d98a6d8c8ff4fc6850b6df849b8a8c49c36c837c0327bf8875b87b215e8)\n8ed97d98a6d8c8ff4fc6850b6df849b8a8c49c36c837c0327bf8875b87b215e8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:38:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:27.775 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[09c4902c-4187-4285-a103-e21681555ee8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:38:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:27.776 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape469a950-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:38:27 compute-0 kernel: tape469a950-70: left promiscuous mode
Nov 25 16:38:27 compute-0 nova_compute[254092]: 2025-11-25 16:38:27.779 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:27 compute-0 nova_compute[254092]: 2025-11-25 16:38:27.797 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:27.801 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bcca25bb-2514-4a96-889f-697cc02065ff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:38:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:27.823 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f472ffa3-4045-4981-bb24-007eadfa610f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:38:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:27.825 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[62058701-1601-492f-9767-04d43d32a536]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:38:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:27.846 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9878da22-ff60-48c0-a307-b1ac11c3008d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 510326, 'reachable_time': 28441, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313774, 'error': None, 'target': 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:38:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:27.849 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:38:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:27.849 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[7d4afcc7-21c9-4ba1-b9dd-47c29578f9c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:38:27 compute-0 systemd[1]: run-netns-ovnmeta\x2de469a950\x2d7044\x2d48bc\x2da261\x2dbc9effe2c2aa.mount: Deactivated successfully.
Nov 25 16:38:28 compute-0 nova_compute[254092]: 2025-11-25 16:38:28.042 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:28 compute-0 nova_compute[254092]: 2025-11-25 16:38:28.143 254096 INFO nova.virt.libvirt.driver [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Instance shutdown successfully after 56 seconds.
Nov 25 16:38:28 compute-0 nova_compute[254092]: 2025-11-25 16:38:28.148 254096 INFO nova.virt.libvirt.driver [-] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Instance destroyed successfully.
Nov 25 16:38:28 compute-0 nova_compute[254092]: 2025-11-25 16:38:28.148 254096 DEBUG nova.objects.instance [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lazy-loading 'numa_topology' on Instance uuid 497caf1f-53fe-425d-8e5c-10b2f0a2506d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:38:28 compute-0 ceph-mon[74985]: pgmap v1563: 321 pgs: 321 active+clean; 121 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.5 KiB/s wr, 28 op/s
Nov 25 16:38:29 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:29.689 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:38:29 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:29.690 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 16:38:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1564: 321 pgs: 321 active+clean; 121 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 5.6 KiB/s rd, 4.8 KiB/s wr, 11 op/s
Nov 25 16:38:29 compute-0 nova_compute[254092]: 2025-11-25 16:38:29.761 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:29 compute-0 nova_compute[254092]: 2025-11-25 16:38:29.764 254096 INFO nova.virt.libvirt.driver [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Beginning cold snapshot process
Nov 25 16:38:29 compute-0 nova_compute[254092]: 2025-11-25 16:38:29.970 254096 DEBUG nova.virt.libvirt.imagebackend [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] No parent info for 8b512c8e-2281-41de-a668-eb983e174ba0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 25 16:38:30 compute-0 nova_compute[254092]: 2025-11-25 16:38:30.265 254096 DEBUG nova.storage.rbd_utils [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] creating snapshot(79daa9a6eec84791953396c480b0a63f) on rbd image(497caf1f-53fe-425d-8e5c-10b2f0a2506d_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 16:38:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:38:30 compute-0 nova_compute[254092]: 2025-11-25 16:38:30.553 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088695.5517426, d3356685-91bc-46b9-9b9f-87ffce31a4ab => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:38:30 compute-0 nova_compute[254092]: 2025-11-25 16:38:30.553 254096 INFO nova.compute.manager [-] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] VM Stopped (Lifecycle Event)
Nov 25 16:38:30 compute-0 nova_compute[254092]: 2025-11-25 16:38:30.582 254096 DEBUG nova.compute.manager [None req-1ded6a8e-6b26-4619-b1c0-7bd0efca1f85 - - - - - -] [instance: d3356685-91bc-46b9-9b9f-87ffce31a4ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:38:30 compute-0 nova_compute[254092]: 2025-11-25 16:38:30.587 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e199 do_prune osdmap full prune enabled
Nov 25 16:38:30 compute-0 ceph-mon[74985]: pgmap v1564: 321 pgs: 321 active+clean; 121 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 5.6 KiB/s rd, 4.8 KiB/s wr, 11 op/s
Nov 25 16:38:30 compute-0 nova_compute[254092]: 2025-11-25 16:38:30.821 254096 DEBUG oslo_concurrency.lockutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Acquiring lock "52c13ebd-df79-43b9-8d5f-e4bf4a2e0738" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:38:30 compute-0 nova_compute[254092]: 2025-11-25 16:38:30.823 254096 DEBUG oslo_concurrency.lockutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Lock "52c13ebd-df79-43b9-8d5f-e4bf4a2e0738" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:38:30 compute-0 nova_compute[254092]: 2025-11-25 16:38:30.914 254096 DEBUG nova.compute.manager [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:38:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e200 e200: 3 total, 3 up, 3 in
Nov 25 16:38:30 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e200: 3 total, 3 up, 3 in
Nov 25 16:38:31 compute-0 nova_compute[254092]: 2025-11-25 16:38:31.016 254096 DEBUG oslo_concurrency.lockutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:38:31 compute-0 nova_compute[254092]: 2025-11-25 16:38:31.016 254096 DEBUG oslo_concurrency.lockutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:38:31 compute-0 nova_compute[254092]: 2025-11-25 16:38:31.025 254096 DEBUG nova.virt.hardware [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:38:31 compute-0 nova_compute[254092]: 2025-11-25 16:38:31.026 254096 INFO nova.compute.claims [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:38:31 compute-0 nova_compute[254092]: 2025-11-25 16:38:31.075 254096 DEBUG nova.storage.rbd_utils [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] cloning vms/497caf1f-53fe-425d-8e5c-10b2f0a2506d_disk@79daa9a6eec84791953396c480b0a63f to images/fedb4aef-bdad-4b3a-abdc-073591bfcffa clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 16:38:31 compute-0 nova_compute[254092]: 2025-11-25 16:38:31.192 254096 DEBUG oslo_concurrency.processutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:38:31 compute-0 nova_compute[254092]: 2025-11-25 16:38:31.233 254096 DEBUG nova.storage.rbd_utils [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] flattening images/fedb4aef-bdad-4b3a-abdc-073591bfcffa flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 16:38:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1566: 321 pgs: 321 active+clean; 121 MiB data, 512 MiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 17 KiB/s wr, 6 op/s
Nov 25 16:38:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:38:31 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3629628229' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:38:31 compute-0 nova_compute[254092]: 2025-11-25 16:38:31.662 254096 DEBUG oslo_concurrency.processutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:38:31 compute-0 nova_compute[254092]: 2025-11-25 16:38:31.668 254096 DEBUG nova.compute.provider_tree [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:38:31 compute-0 nova_compute[254092]: 2025-11-25 16:38:31.684 254096 DEBUG nova.scheduler.client.report [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:38:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:31.692 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:38:31 compute-0 nova_compute[254092]: 2025-11-25 16:38:31.774 254096 DEBUG oslo_concurrency.lockutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:38:31 compute-0 nova_compute[254092]: 2025-11-25 16:38:31.775 254096 DEBUG nova.compute.manager [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:38:31 compute-0 nova_compute[254092]: 2025-11-25 16:38:31.854 254096 DEBUG nova.compute.manager [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:38:31 compute-0 nova_compute[254092]: 2025-11-25 16:38:31.854 254096 DEBUG nova.network.neutron [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:38:31 compute-0 nova_compute[254092]: 2025-11-25 16:38:31.899 254096 INFO nova.virt.libvirt.driver [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:38:31 compute-0 nova_compute[254092]: 2025-11-25 16:38:31.936 254096 DEBUG nova.compute.manager [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:38:32 compute-0 nova_compute[254092]: 2025-11-25 16:38:32.146 254096 DEBUG nova.compute.manager [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:38:32 compute-0 nova_compute[254092]: 2025-11-25 16:38:32.147 254096 DEBUG nova.virt.libvirt.driver [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:38:32 compute-0 nova_compute[254092]: 2025-11-25 16:38:32.147 254096 INFO nova.virt.libvirt.driver [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Creating image(s)
Nov 25 16:38:32 compute-0 ceph-mon[74985]: osdmap e200: 3 total, 3 up, 3 in
Nov 25 16:38:32 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3629628229' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:38:32 compute-0 nova_compute[254092]: 2025-11-25 16:38:32.174 254096 DEBUG nova.storage.rbd_utils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] rbd image 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:38:32 compute-0 nova_compute[254092]: 2025-11-25 16:38:32.282 254096 DEBUG nova.storage.rbd_utils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] rbd image 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:38:32 compute-0 nova_compute[254092]: 2025-11-25 16:38:32.305 254096 DEBUG nova.storage.rbd_utils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] rbd image 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:38:32 compute-0 nova_compute[254092]: 2025-11-25 16:38:32.310 254096 DEBUG oslo_concurrency.processutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:38:32 compute-0 nova_compute[254092]: 2025-11-25 16:38:32.363 254096 DEBUG nova.policy [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '34706428d3f94a60b53f4a535d408fd1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '80a627278d934815a3ea621e9d6402d2', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:38:32 compute-0 nova_compute[254092]: 2025-11-25 16:38:32.370 254096 DEBUG nova.compute.manager [req-0eac4003-84f0-47da-968a-bde0782082ae req-b033e05e-36f4-4995-b302-19b30ae9aa40 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Received event network-vif-unplugged-d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:38:32 compute-0 nova_compute[254092]: 2025-11-25 16:38:32.370 254096 DEBUG oslo_concurrency.lockutils [req-0eac4003-84f0-47da-968a-bde0782082ae req-b033e05e-36f4-4995-b302-19b30ae9aa40 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "497caf1f-53fe-425d-8e5c-10b2f0a2506d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:38:32 compute-0 nova_compute[254092]: 2025-11-25 16:38:32.371 254096 DEBUG oslo_concurrency.lockutils [req-0eac4003-84f0-47da-968a-bde0782082ae req-b033e05e-36f4-4995-b302-19b30ae9aa40 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "497caf1f-53fe-425d-8e5c-10b2f0a2506d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:38:32 compute-0 nova_compute[254092]: 2025-11-25 16:38:32.371 254096 DEBUG oslo_concurrency.lockutils [req-0eac4003-84f0-47da-968a-bde0782082ae req-b033e05e-36f4-4995-b302-19b30ae9aa40 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "497caf1f-53fe-425d-8e5c-10b2f0a2506d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:38:32 compute-0 nova_compute[254092]: 2025-11-25 16:38:32.371 254096 DEBUG nova.compute.manager [req-0eac4003-84f0-47da-968a-bde0782082ae req-b033e05e-36f4-4995-b302-19b30ae9aa40 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] No waiting events found dispatching network-vif-unplugged-d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:38:32 compute-0 nova_compute[254092]: 2025-11-25 16:38:32.371 254096 WARNING nova.compute.manager [req-0eac4003-84f0-47da-968a-bde0782082ae req-b033e05e-36f4-4995-b302-19b30ae9aa40 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Received unexpected event network-vif-unplugged-d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca for instance with vm_state active and task_state shelving_image_uploading.
Nov 25 16:38:32 compute-0 nova_compute[254092]: 2025-11-25 16:38:32.404 254096 DEBUG oslo_concurrency.processutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:38:32 compute-0 nova_compute[254092]: 2025-11-25 16:38:32.406 254096 DEBUG oslo_concurrency.lockutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:38:32 compute-0 nova_compute[254092]: 2025-11-25 16:38:32.406 254096 DEBUG oslo_concurrency.lockutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:38:32 compute-0 nova_compute[254092]: 2025-11-25 16:38:32.407 254096 DEBUG oslo_concurrency.lockutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:38:32 compute-0 nova_compute[254092]: 2025-11-25 16:38:32.428 254096 DEBUG nova.storage.rbd_utils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] rbd image 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:38:32 compute-0 nova_compute[254092]: 2025-11-25 16:38:32.433 254096 DEBUG oslo_concurrency.processutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:38:32 compute-0 nova_compute[254092]: 2025-11-25 16:38:32.583 254096 DEBUG nova.storage.rbd_utils [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] removing snapshot(79daa9a6eec84791953396c480b0a63f) on rbd image(497caf1f-53fe-425d-8e5c-10b2f0a2506d_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 16:38:32 compute-0 nova_compute[254092]: 2025-11-25 16:38:32.844 254096 DEBUG oslo_concurrency.processutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:38:32 compute-0 nova_compute[254092]: 2025-11-25 16:38:32.912 254096 DEBUG nova.storage.rbd_utils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] resizing rbd image 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:38:33 compute-0 nova_compute[254092]: 2025-11-25 16:38:33.018 254096 DEBUG nova.objects.instance [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Lazy-loading 'migration_context' on Instance uuid 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:38:33 compute-0 nova_compute[254092]: 2025-11-25 16:38:33.044 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:33 compute-0 nova_compute[254092]: 2025-11-25 16:38:33.048 254096 DEBUG nova.virt.libvirt.driver [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:38:33 compute-0 nova_compute[254092]: 2025-11-25 16:38:33.048 254096 DEBUG nova.virt.libvirt.driver [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Ensure instance console log exists: /var/lib/nova/instances/52c13ebd-df79-43b9-8d5f-e4bf4a2e0738/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:38:33 compute-0 nova_compute[254092]: 2025-11-25 16:38:33.049 254096 DEBUG oslo_concurrency.lockutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:38:33 compute-0 nova_compute[254092]: 2025-11-25 16:38:33.049 254096 DEBUG oslo_concurrency.lockutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:38:33 compute-0 nova_compute[254092]: 2025-11-25 16:38:33.049 254096 DEBUG oslo_concurrency.lockutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:38:33 compute-0 nova_compute[254092]: 2025-11-25 16:38:33.051 254096 DEBUG nova.network.neutron [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Successfully created port: a1b0e8cf-d5e8-4b48-b591-6e3d110aff41 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:38:33 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e200 do_prune osdmap full prune enabled
Nov 25 16:38:33 compute-0 ceph-mon[74985]: pgmap v1566: 321 pgs: 321 active+clean; 121 MiB data, 512 MiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 17 KiB/s wr, 6 op/s
Nov 25 16:38:33 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e201 e201: 3 total, 3 up, 3 in
Nov 25 16:38:33 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e201: 3 total, 3 up, 3 in
Nov 25 16:38:33 compute-0 nova_compute[254092]: 2025-11-25 16:38:33.225 254096 DEBUG nova.storage.rbd_utils [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] creating snapshot(snap) on rbd image(fedb4aef-bdad-4b3a-abdc-073591bfcffa) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 16:38:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1568: 321 pgs: 321 active+clean; 121 MiB data, 512 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 22 KiB/s wr, 8 op/s
Nov 25 16:38:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e201 do_prune osdmap full prune enabled
Nov 25 16:38:34 compute-0 ceph-mon[74985]: osdmap e201: 3 total, 3 up, 3 in
Nov 25 16:38:34 compute-0 ceph-mon[74985]: pgmap v1568: 321 pgs: 321 active+clean; 121 MiB data, 512 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 22 KiB/s wr, 8 op/s
Nov 25 16:38:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e202 e202: 3 total, 3 up, 3 in
Nov 25 16:38:34 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e202: 3 total, 3 up, 3 in
Nov 25 16:38:34 compute-0 nova_compute[254092]: 2025-11-25 16:38:34.358 254096 DEBUG nova.network.neutron [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Successfully updated port: a1b0e8cf-d5e8-4b48-b591-6e3d110aff41 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:38:34 compute-0 nova_compute[254092]: 2025-11-25 16:38:34.390 254096 DEBUG oslo_concurrency.lockutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Acquiring lock "refresh_cache-52c13ebd-df79-43b9-8d5f-e4bf4a2e0738" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:38:34 compute-0 nova_compute[254092]: 2025-11-25 16:38:34.391 254096 DEBUG oslo_concurrency.lockutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Acquired lock "refresh_cache-52c13ebd-df79-43b9-8d5f-e4bf4a2e0738" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:38:34 compute-0 nova_compute[254092]: 2025-11-25 16:38:34.391 254096 DEBUG nova.network.neutron [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:38:34 compute-0 nova_compute[254092]: 2025-11-25 16:38:34.479 254096 DEBUG nova.compute.manager [req-0d741e31-3a85-430c-9a26-05eec95c92a2 req-69c272a0-7fb1-43a4-91b3-68289adcf40e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Received event network-changed-a1b0e8cf-d5e8-4b48-b591-6e3d110aff41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:38:34 compute-0 nova_compute[254092]: 2025-11-25 16:38:34.479 254096 DEBUG nova.compute.manager [req-0d741e31-3a85-430c-9a26-05eec95c92a2 req-69c272a0-7fb1-43a4-91b3-68289adcf40e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Refreshing instance network info cache due to event network-changed-a1b0e8cf-d5e8-4b48-b591-6e3d110aff41. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:38:34 compute-0 nova_compute[254092]: 2025-11-25 16:38:34.479 254096 DEBUG oslo_concurrency.lockutils [req-0d741e31-3a85-430c-9a26-05eec95c92a2 req-69c272a0-7fb1-43a4-91b3-68289adcf40e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-52c13ebd-df79-43b9-8d5f-e4bf4a2e0738" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:38:34 compute-0 nova_compute[254092]: 2025-11-25 16:38:34.482 254096 DEBUG nova.compute.manager [req-af2e5563-b00f-4ea4-bbb9-6ba9d53c4654 req-6dfeb192-bd72-434e-bba6-0e8ac306afb6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Received event network-vif-plugged-d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:38:34 compute-0 nova_compute[254092]: 2025-11-25 16:38:34.482 254096 DEBUG oslo_concurrency.lockutils [req-af2e5563-b00f-4ea4-bbb9-6ba9d53c4654 req-6dfeb192-bd72-434e-bba6-0e8ac306afb6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "497caf1f-53fe-425d-8e5c-10b2f0a2506d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:38:34 compute-0 nova_compute[254092]: 2025-11-25 16:38:34.482 254096 DEBUG oslo_concurrency.lockutils [req-af2e5563-b00f-4ea4-bbb9-6ba9d53c4654 req-6dfeb192-bd72-434e-bba6-0e8ac306afb6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "497caf1f-53fe-425d-8e5c-10b2f0a2506d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:38:34 compute-0 nova_compute[254092]: 2025-11-25 16:38:34.483 254096 DEBUG oslo_concurrency.lockutils [req-af2e5563-b00f-4ea4-bbb9-6ba9d53c4654 req-6dfeb192-bd72-434e-bba6-0e8ac306afb6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "497caf1f-53fe-425d-8e5c-10b2f0a2506d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:38:34 compute-0 nova_compute[254092]: 2025-11-25 16:38:34.483 254096 DEBUG nova.compute.manager [req-af2e5563-b00f-4ea4-bbb9-6ba9d53c4654 req-6dfeb192-bd72-434e-bba6-0e8ac306afb6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] No waiting events found dispatching network-vif-plugged-d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:38:34 compute-0 nova_compute[254092]: 2025-11-25 16:38:34.483 254096 WARNING nova.compute.manager [req-af2e5563-b00f-4ea4-bbb9-6ba9d53c4654 req-6dfeb192-bd72-434e-bba6-0e8ac306afb6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Received unexpected event network-vif-plugged-d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca for instance with vm_state active and task_state shelving_image_uploading.
Nov 25 16:38:34 compute-0 podman[314105]: 2025-11-25 16:38:34.643695608 +0000 UTC m=+0.063090672 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 16:38:34 compute-0 podman[314104]: 2025-11-25 16:38:34.646953826 +0000 UTC m=+0.066037382 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible)
Nov 25 16:38:34 compute-0 podman[314106]: 2025-11-25 16:38:34.67511818 +0000 UTC m=+0.090786303 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:38:34 compute-0 nova_compute[254092]: 2025-11-25 16:38:34.687 254096 DEBUG nova.network.neutron [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:38:35 compute-0 ceph-mon[74985]: osdmap e202: 3 total, 3 up, 3 in
Nov 25 16:38:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:38:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1570: 321 pgs: 321 active+clean; 182 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 5.0 MiB/s wr, 151 op/s
Nov 25 16:38:35 compute-0 nova_compute[254092]: 2025-11-25 16:38:35.591 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:36 compute-0 ceph-mon[74985]: pgmap v1570: 321 pgs: 321 active+clean; 182 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 5.0 MiB/s wr, 151 op/s
Nov 25 16:38:36 compute-0 nova_compute[254092]: 2025-11-25 16:38:36.440 254096 DEBUG nova.network.neutron [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Updating instance_info_cache with network_info: [{"id": "a1b0e8cf-d5e8-4b48-b591-6e3d110aff41", "address": "fa:16:3e:d3:ad:75", "network": {"id": "abda97f3-dcb7-42ee-af40-cfc387fadfda", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-292420344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80a627278d934815a3ea621e9d6402d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1b0e8cf-d5", "ovs_interfaceid": "a1b0e8cf-d5e8-4b48-b591-6e3d110aff41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:38:36 compute-0 nova_compute[254092]: 2025-11-25 16:38:36.462 254096 DEBUG oslo_concurrency.lockutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Releasing lock "refresh_cache-52c13ebd-df79-43b9-8d5f-e4bf4a2e0738" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:38:36 compute-0 nova_compute[254092]: 2025-11-25 16:38:36.462 254096 DEBUG nova.compute.manager [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Instance network_info: |[{"id": "a1b0e8cf-d5e8-4b48-b591-6e3d110aff41", "address": "fa:16:3e:d3:ad:75", "network": {"id": "abda97f3-dcb7-42ee-af40-cfc387fadfda", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-292420344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80a627278d934815a3ea621e9d6402d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1b0e8cf-d5", "ovs_interfaceid": "a1b0e8cf-d5e8-4b48-b591-6e3d110aff41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:38:36 compute-0 nova_compute[254092]: 2025-11-25 16:38:36.463 254096 DEBUG oslo_concurrency.lockutils [req-0d741e31-3a85-430c-9a26-05eec95c92a2 req-69c272a0-7fb1-43a4-91b3-68289adcf40e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-52c13ebd-df79-43b9-8d5f-e4bf4a2e0738" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:38:36 compute-0 nova_compute[254092]: 2025-11-25 16:38:36.463 254096 DEBUG nova.network.neutron [req-0d741e31-3a85-430c-9a26-05eec95c92a2 req-69c272a0-7fb1-43a4-91b3-68289adcf40e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Refreshing network info cache for port a1b0e8cf-d5e8-4b48-b591-6e3d110aff41 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:38:36 compute-0 nova_compute[254092]: 2025-11-25 16:38:36.466 254096 DEBUG nova.virt.libvirt.driver [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Start _get_guest_xml network_info=[{"id": "a1b0e8cf-d5e8-4b48-b591-6e3d110aff41", "address": "fa:16:3e:d3:ad:75", "network": {"id": "abda97f3-dcb7-42ee-af40-cfc387fadfda", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-292420344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80a627278d934815a3ea621e9d6402d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1b0e8cf-d5", "ovs_interfaceid": "a1b0e8cf-d5e8-4b48-b591-6e3d110aff41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:38:36 compute-0 nova_compute[254092]: 2025-11-25 16:38:36.469 254096 INFO nova.virt.libvirt.driver [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Snapshot image upload complete
Nov 25 16:38:36 compute-0 nova_compute[254092]: 2025-11-25 16:38:36.469 254096 DEBUG nova.compute.manager [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:38:36 compute-0 nova_compute[254092]: 2025-11-25 16:38:36.478 254096 WARNING nova.virt.libvirt.driver [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:38:36 compute-0 nova_compute[254092]: 2025-11-25 16:38:36.483 254096 DEBUG nova.virt.libvirt.host [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:38:36 compute-0 nova_compute[254092]: 2025-11-25 16:38:36.484 254096 DEBUG nova.virt.libvirt.host [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:38:36 compute-0 nova_compute[254092]: 2025-11-25 16:38:36.487 254096 DEBUG nova.virt.libvirt.host [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:38:36 compute-0 nova_compute[254092]: 2025-11-25 16:38:36.487 254096 DEBUG nova.virt.libvirt.host [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:38:36 compute-0 nova_compute[254092]: 2025-11-25 16:38:36.488 254096 DEBUG nova.virt.libvirt.driver [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:38:36 compute-0 nova_compute[254092]: 2025-11-25 16:38:36.488 254096 DEBUG nova.virt.hardware [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:38:36 compute-0 nova_compute[254092]: 2025-11-25 16:38:36.488 254096 DEBUG nova.virt.hardware [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:38:36 compute-0 nova_compute[254092]: 2025-11-25 16:38:36.489 254096 DEBUG nova.virt.hardware [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:38:36 compute-0 nova_compute[254092]: 2025-11-25 16:38:36.489 254096 DEBUG nova.virt.hardware [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:38:36 compute-0 nova_compute[254092]: 2025-11-25 16:38:36.489 254096 DEBUG nova.virt.hardware [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:38:36 compute-0 nova_compute[254092]: 2025-11-25 16:38:36.490 254096 DEBUG nova.virt.hardware [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:38:36 compute-0 nova_compute[254092]: 2025-11-25 16:38:36.490 254096 DEBUG nova.virt.hardware [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:38:36 compute-0 nova_compute[254092]: 2025-11-25 16:38:36.490 254096 DEBUG nova.virt.hardware [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:38:36 compute-0 nova_compute[254092]: 2025-11-25 16:38:36.490 254096 DEBUG nova.virt.hardware [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:38:36 compute-0 nova_compute[254092]: 2025-11-25 16:38:36.490 254096 DEBUG nova.virt.hardware [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:38:36 compute-0 nova_compute[254092]: 2025-11-25 16:38:36.491 254096 DEBUG nova.virt.hardware [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:38:36 compute-0 nova_compute[254092]: 2025-11-25 16:38:36.493 254096 DEBUG oslo_concurrency.processutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:38:36 compute-0 nova_compute[254092]: 2025-11-25 16:38:36.533 254096 INFO nova.compute.manager [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Shelve offloading
Nov 25 16:38:36 compute-0 nova_compute[254092]: 2025-11-25 16:38:36.541 254096 INFO nova.virt.libvirt.driver [-] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Instance destroyed successfully.
Nov 25 16:38:36 compute-0 nova_compute[254092]: 2025-11-25 16:38:36.542 254096 DEBUG nova.compute.manager [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:38:36 compute-0 nova_compute[254092]: 2025-11-25 16:38:36.544 254096 DEBUG oslo_concurrency.lockutils [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "refresh_cache-497caf1f-53fe-425d-8e5c-10b2f0a2506d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:38:36 compute-0 nova_compute[254092]: 2025-11-25 16:38:36.545 254096 DEBUG oslo_concurrency.lockutils [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquired lock "refresh_cache-497caf1f-53fe-425d-8e5c-10b2f0a2506d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:38:36 compute-0 nova_compute[254092]: 2025-11-25 16:38:36.545 254096 DEBUG nova.network.neutron [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:38:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:38:36 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1464012297' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:38:36 compute-0 nova_compute[254092]: 2025-11-25 16:38:36.946 254096 DEBUG oslo_concurrency.processutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:38:36 compute-0 nova_compute[254092]: 2025-11-25 16:38:36.967 254096 DEBUG nova.storage.rbd_utils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] rbd image 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:38:36 compute-0 nova_compute[254092]: 2025-11-25 16:38:36.971 254096 DEBUG oslo_concurrency.processutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:38:37 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1464012297' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:38:37 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:38:37 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2749163616' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:38:37 compute-0 nova_compute[254092]: 2025-11-25 16:38:37.404 254096 DEBUG oslo_concurrency.processutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:38:37 compute-0 nova_compute[254092]: 2025-11-25 16:38:37.407 254096 DEBUG nova.virt.libvirt.vif [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:38:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-876908934',display_name='tempest-ImagesOneServerTestJSON-server-876908934',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-876908934',id=53,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='80a627278d934815a3ea621e9d6402d2',ramdisk_id='',reservation_id='r-mt56k20g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-941588767',owner_user_name='tempest-ImagesOneServerTestJSON-941588767-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:38:31Z,user_data=None,user_id='34706428d3f94a60b53f4a535d408fd1',uuid=52c13ebd-df79-43b9-8d5f-e4bf4a2e0738,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a1b0e8cf-d5e8-4b48-b591-6e3d110aff41", "address": "fa:16:3e:d3:ad:75", "network": {"id": "abda97f3-dcb7-42ee-af40-cfc387fadfda", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-292420344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80a627278d934815a3ea621e9d6402d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1b0e8cf-d5", "ovs_interfaceid": "a1b0e8cf-d5e8-4b48-b591-6e3d110aff41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:38:37 compute-0 nova_compute[254092]: 2025-11-25 16:38:37.407 254096 DEBUG nova.network.os_vif_util [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Converting VIF {"id": "a1b0e8cf-d5e8-4b48-b591-6e3d110aff41", "address": "fa:16:3e:d3:ad:75", "network": {"id": "abda97f3-dcb7-42ee-af40-cfc387fadfda", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-292420344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80a627278d934815a3ea621e9d6402d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1b0e8cf-d5", "ovs_interfaceid": "a1b0e8cf-d5e8-4b48-b591-6e3d110aff41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:38:37 compute-0 nova_compute[254092]: 2025-11-25 16:38:37.409 254096 DEBUG nova.network.os_vif_util [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:ad:75,bridge_name='br-int',has_traffic_filtering=True,id=a1b0e8cf-d5e8-4b48-b591-6e3d110aff41,network=Network(abda97f3-dcb7-42ee-af40-cfc387fadfda),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1b0e8cf-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:38:37 compute-0 nova_compute[254092]: 2025-11-25 16:38:37.410 254096 DEBUG nova.objects.instance [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:38:37 compute-0 nova_compute[254092]: 2025-11-25 16:38:37.423 254096 DEBUG nova.virt.libvirt.driver [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:38:37 compute-0 nova_compute[254092]:   <uuid>52c13ebd-df79-43b9-8d5f-e4bf4a2e0738</uuid>
Nov 25 16:38:37 compute-0 nova_compute[254092]:   <name>instance-00000035</name>
Nov 25 16:38:37 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:38:37 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:38:37 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:38:37 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:       <nova:name>tempest-ImagesOneServerTestJSON-server-876908934</nova:name>
Nov 25 16:38:37 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:38:36</nova:creationTime>
Nov 25 16:38:37 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:38:37 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:38:37 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:38:37 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:38:37 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:38:37 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:38:37 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:38:37 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:38:37 compute-0 nova_compute[254092]:         <nova:user uuid="34706428d3f94a60b53f4a535d408fd1">tempest-ImagesOneServerTestJSON-941588767-project-member</nova:user>
Nov 25 16:38:37 compute-0 nova_compute[254092]:         <nova:project uuid="80a627278d934815a3ea621e9d6402d2">tempest-ImagesOneServerTestJSON-941588767</nova:project>
Nov 25 16:38:37 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:38:37 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:38:37 compute-0 nova_compute[254092]:         <nova:port uuid="a1b0e8cf-d5e8-4b48-b591-6e3d110aff41">
Nov 25 16:38:37 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:38:37 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:38:37 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:38:37 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:38:37 compute-0 nova_compute[254092]:     <system>
Nov 25 16:38:37 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:38:37 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:38:37 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:38:37 compute-0 nova_compute[254092]:       <entry name="serial">52c13ebd-df79-43b9-8d5f-e4bf4a2e0738</entry>
Nov 25 16:38:37 compute-0 nova_compute[254092]:       <entry name="uuid">52c13ebd-df79-43b9-8d5f-e4bf4a2e0738</entry>
Nov 25 16:38:37 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     </system>
Nov 25 16:38:37 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:38:37 compute-0 nova_compute[254092]:   <os>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:   </os>
Nov 25 16:38:37 compute-0 nova_compute[254092]:   <features>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:   </features>
Nov 25 16:38:37 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:38:37 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:38:37 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:38:37 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:38:37 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:38:37 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/52c13ebd-df79-43b9-8d5f-e4bf4a2e0738_disk">
Nov 25 16:38:37 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:       </source>
Nov 25 16:38:37 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:38:37 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:38:37 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:38:37 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/52c13ebd-df79-43b9-8d5f-e4bf4a2e0738_disk.config">
Nov 25 16:38:37 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:       </source>
Nov 25 16:38:37 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:38:37 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:38:37 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:38:37 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:d3:ad:75"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:       <target dev="tapa1b0e8cf-d5"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:38:37 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/52c13ebd-df79-43b9-8d5f-e4bf4a2e0738/console.log" append="off"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     <video>
Nov 25 16:38:37 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     </video>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:38:37 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:38:37 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:38:37 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:38:37 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:38:37 compute-0 nova_compute[254092]: </domain>
Nov 25 16:38:37 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:38:37 compute-0 nova_compute[254092]: 2025-11-25 16:38:37.424 254096 DEBUG nova.compute.manager [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Preparing to wait for external event network-vif-plugged-a1b0e8cf-d5e8-4b48-b591-6e3d110aff41 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:38:37 compute-0 nova_compute[254092]: 2025-11-25 16:38:37.425 254096 DEBUG oslo_concurrency.lockutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Acquiring lock "52c13ebd-df79-43b9-8d5f-e4bf4a2e0738-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:38:37 compute-0 nova_compute[254092]: 2025-11-25 16:38:37.425 254096 DEBUG oslo_concurrency.lockutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Lock "52c13ebd-df79-43b9-8d5f-e4bf4a2e0738-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:38:37 compute-0 nova_compute[254092]: 2025-11-25 16:38:37.425 254096 DEBUG oslo_concurrency.lockutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Lock "52c13ebd-df79-43b9-8d5f-e4bf4a2e0738-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:38:37 compute-0 nova_compute[254092]: 2025-11-25 16:38:37.426 254096 DEBUG nova.virt.libvirt.vif [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:38:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-876908934',display_name='tempest-ImagesOneServerTestJSON-server-876908934',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-876908934',id=53,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='80a627278d934815a3ea621e9d6402d2',ramdisk_id='',reservation_id='r-mt56k20g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-941588767',owner_user_name='tempest-ImagesOneServerTestJSON-941588767-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:38:31Z,user_data=None,user_id='34706428d3f94a60b53f4a535d408fd1',uuid=52c13ebd-df79-43b9-8d5f-e4bf4a2e0738,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a1b0e8cf-d5e8-4b48-b591-6e3d110aff41", "address": "fa:16:3e:d3:ad:75", "network": {"id": "abda97f3-dcb7-42ee-af40-cfc387fadfda", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-292420344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80a627278d934815a3ea621e9d6402d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1b0e8cf-d5", "ovs_interfaceid": "a1b0e8cf-d5e8-4b48-b591-6e3d110aff41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:38:37 compute-0 nova_compute[254092]: 2025-11-25 16:38:37.426 254096 DEBUG nova.network.os_vif_util [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Converting VIF {"id": "a1b0e8cf-d5e8-4b48-b591-6e3d110aff41", "address": "fa:16:3e:d3:ad:75", "network": {"id": "abda97f3-dcb7-42ee-af40-cfc387fadfda", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-292420344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80a627278d934815a3ea621e9d6402d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1b0e8cf-d5", "ovs_interfaceid": "a1b0e8cf-d5e8-4b48-b591-6e3d110aff41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:38:37 compute-0 nova_compute[254092]: 2025-11-25 16:38:37.427 254096 DEBUG nova.network.os_vif_util [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:ad:75,bridge_name='br-int',has_traffic_filtering=True,id=a1b0e8cf-d5e8-4b48-b591-6e3d110aff41,network=Network(abda97f3-dcb7-42ee-af40-cfc387fadfda),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1b0e8cf-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:38:37 compute-0 nova_compute[254092]: 2025-11-25 16:38:37.427 254096 DEBUG os_vif [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:ad:75,bridge_name='br-int',has_traffic_filtering=True,id=a1b0e8cf-d5e8-4b48-b591-6e3d110aff41,network=Network(abda97f3-dcb7-42ee-af40-cfc387fadfda),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1b0e8cf-d5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:38:37 compute-0 nova_compute[254092]: 2025-11-25 16:38:37.428 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:37 compute-0 nova_compute[254092]: 2025-11-25 16:38:37.428 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:38:37 compute-0 nova_compute[254092]: 2025-11-25 16:38:37.429 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:38:37 compute-0 nova_compute[254092]: 2025-11-25 16:38:37.431 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:37 compute-0 nova_compute[254092]: 2025-11-25 16:38:37.432 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa1b0e8cf-d5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:38:37 compute-0 nova_compute[254092]: 2025-11-25 16:38:37.432 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa1b0e8cf-d5, col_values=(('external_ids', {'iface-id': 'a1b0e8cf-d5e8-4b48-b591-6e3d110aff41', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d3:ad:75', 'vm-uuid': '52c13ebd-df79-43b9-8d5f-e4bf4a2e0738'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:38:37 compute-0 nova_compute[254092]: 2025-11-25 16:38:37.434 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:37 compute-0 NetworkManager[48891]: <info>  [1764088717.4353] manager: (tapa1b0e8cf-d5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/206)
Nov 25 16:38:37 compute-0 nova_compute[254092]: 2025-11-25 16:38:37.437 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:38:37 compute-0 nova_compute[254092]: 2025-11-25 16:38:37.441 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:37 compute-0 nova_compute[254092]: 2025-11-25 16:38:37.442 254096 INFO os_vif [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:ad:75,bridge_name='br-int',has_traffic_filtering=True,id=a1b0e8cf-d5e8-4b48-b591-6e3d110aff41,network=Network(abda97f3-dcb7-42ee-af40-cfc387fadfda),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1b0e8cf-d5')
Nov 25 16:38:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1571: 321 pgs: 321 active+clean; 246 MiB data, 579 MiB used, 59 GiB / 60 GiB avail; 7.3 MiB/s rd, 10 MiB/s wr, 201 op/s
Nov 25 16:38:37 compute-0 nova_compute[254092]: 2025-11-25 16:38:37.506 254096 DEBUG nova.virt.libvirt.driver [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:38:37 compute-0 nova_compute[254092]: 2025-11-25 16:38:37.507 254096 DEBUG nova.virt.libvirt.driver [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:38:37 compute-0 nova_compute[254092]: 2025-11-25 16:38:37.507 254096 DEBUG nova.virt.libvirt.driver [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] No VIF found with MAC fa:16:3e:d3:ad:75, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:38:37 compute-0 nova_compute[254092]: 2025-11-25 16:38:37.507 254096 INFO nova.virt.libvirt.driver [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Using config drive
Nov 25 16:38:37 compute-0 nova_compute[254092]: 2025-11-25 16:38:37.527 254096 DEBUG nova.storage.rbd_utils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] rbd image 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:38:38 compute-0 nova_compute[254092]: 2025-11-25 16:38:38.045 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:38 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2749163616' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:38:38 compute-0 ceph-mon[74985]: pgmap v1571: 321 pgs: 321 active+clean; 246 MiB data, 579 MiB used, 59 GiB / 60 GiB avail; 7.3 MiB/s rd, 10 MiB/s wr, 201 op/s
Nov 25 16:38:38 compute-0 nova_compute[254092]: 2025-11-25 16:38:38.625 254096 INFO nova.virt.libvirt.driver [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Creating config drive at /var/lib/nova/instances/52c13ebd-df79-43b9-8d5f-e4bf4a2e0738/disk.config
Nov 25 16:38:38 compute-0 nova_compute[254092]: 2025-11-25 16:38:38.631 254096 DEBUG oslo_concurrency.processutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/52c13ebd-df79-43b9-8d5f-e4bf4a2e0738/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2dvl4yb1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:38:38 compute-0 nova_compute[254092]: 2025-11-25 16:38:38.720 254096 DEBUG nova.network.neutron [req-0d741e31-3a85-430c-9a26-05eec95c92a2 req-69c272a0-7fb1-43a4-91b3-68289adcf40e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Updated VIF entry in instance network info cache for port a1b0e8cf-d5e8-4b48-b591-6e3d110aff41. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:38:38 compute-0 nova_compute[254092]: 2025-11-25 16:38:38.721 254096 DEBUG nova.network.neutron [req-0d741e31-3a85-430c-9a26-05eec95c92a2 req-69c272a0-7fb1-43a4-91b3-68289adcf40e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Updating instance_info_cache with network_info: [{"id": "a1b0e8cf-d5e8-4b48-b591-6e3d110aff41", "address": "fa:16:3e:d3:ad:75", "network": {"id": "abda97f3-dcb7-42ee-af40-cfc387fadfda", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-292420344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80a627278d934815a3ea621e9d6402d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1b0e8cf-d5", "ovs_interfaceid": "a1b0e8cf-d5e8-4b48-b591-6e3d110aff41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:38:38 compute-0 nova_compute[254092]: 2025-11-25 16:38:38.737 254096 DEBUG oslo_concurrency.lockutils [req-0d741e31-3a85-430c-9a26-05eec95c92a2 req-69c272a0-7fb1-43a4-91b3-68289adcf40e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-52c13ebd-df79-43b9-8d5f-e4bf4a2e0738" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:38:38 compute-0 nova_compute[254092]: 2025-11-25 16:38:38.765 254096 DEBUG oslo_concurrency.processutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/52c13ebd-df79-43b9-8d5f-e4bf4a2e0738/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2dvl4yb1" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:38:38 compute-0 nova_compute[254092]: 2025-11-25 16:38:38.786 254096 DEBUG nova.storage.rbd_utils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] rbd image 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:38:38 compute-0 nova_compute[254092]: 2025-11-25 16:38:38.789 254096 DEBUG oslo_concurrency.processutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/52c13ebd-df79-43b9-8d5f-e4bf4a2e0738/disk.config 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:38:39 compute-0 nova_compute[254092]: 2025-11-25 16:38:39.204 254096 DEBUG nova.network.neutron [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Updating instance_info_cache with network_info: [{"id": "d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca", "address": "fa:16:3e:b0:76:f8", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd90f4f5a-3c", "ovs_interfaceid": "d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:38:39 compute-0 nova_compute[254092]: 2025-11-25 16:38:39.218 254096 DEBUG oslo_concurrency.lockutils [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Releasing lock "refresh_cache-497caf1f-53fe-425d-8e5c-10b2f0a2506d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:38:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1572: 321 pgs: 321 active+clean; 246 MiB data, 579 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 8.5 MiB/s wr, 161 op/s
Nov 25 16:38:39 compute-0 nova_compute[254092]: 2025-11-25 16:38:39.957 254096 DEBUG oslo_concurrency.processutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/52c13ebd-df79-43b9-8d5f-e4bf4a2e0738/disk.config 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:38:39 compute-0 nova_compute[254092]: 2025-11-25 16:38:39.958 254096 INFO nova.virt.libvirt.driver [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Deleting local config drive /var/lib/nova/instances/52c13ebd-df79-43b9-8d5f-e4bf4a2e0738/disk.config because it was imported into RBD.
Nov 25 16:38:40 compute-0 kernel: tapa1b0e8cf-d5: entered promiscuous mode
Nov 25 16:38:40 compute-0 NetworkManager[48891]: <info>  [1764088720.0116] manager: (tapa1b0e8cf-d5): new Tun device (/org/freedesktop/NetworkManager/Devices/207)
Nov 25 16:38:40 compute-0 nova_compute[254092]: 2025-11-25 16:38:40.014 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:40 compute-0 ovn_controller[153477]: 2025-11-25T16:38:40Z|00481|binding|INFO|Claiming lport a1b0e8cf-d5e8-4b48-b591-6e3d110aff41 for this chassis.
Nov 25 16:38:40 compute-0 ovn_controller[153477]: 2025-11-25T16:38:40Z|00482|binding|INFO|a1b0e8cf-d5e8-4b48-b591-6e3d110aff41: Claiming fa:16:3e:d3:ad:75 10.100.0.9
Nov 25 16:38:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:38:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:38:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:38:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:38:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:38:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:38:40 compute-0 systemd-machined[216343]: New machine qemu-62-instance-00000035.
Nov 25 16:38:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:38:40
Nov 25 16:38:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:38:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:38:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['.rgw.root', 'vms', 'default.rgw.log', 'default.rgw.control', 'backups', '.mgr', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes']
Nov 25 16:38:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:38:40 compute-0 systemd[1]: Started Virtual Machine qemu-62-instance-00000035.
Nov 25 16:38:40 compute-0 systemd-udevd[314301]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.098 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:ad:75 10.100.0.9'], port_security=['fa:16:3e:d3:ad:75 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '52c13ebd-df79-43b9-8d5f-e4bf4a2e0738', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abda97f3-dcb7-42ee-af40-cfc387fadfda', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '80a627278d934815a3ea621e9d6402d2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1671917c-f980-406a-8c8d-043f07074abb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f06ea60c-86e3-46a2-b0dc-014d0b0b5949, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=a1b0e8cf-d5e8-4b48-b591-6e3d110aff41) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:38:40 compute-0 NetworkManager[48891]: <info>  [1764088720.1003] device (tapa1b0e8cf-d5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:38:40 compute-0 NetworkManager[48891]: <info>  [1764088720.1012] device (tapa1b0e8cf-d5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.100 163338 INFO neutron.agent.ovn.metadata.agent [-] Port a1b0e8cf-d5e8-4b48-b591-6e3d110aff41 in datapath abda97f3-dcb7-42ee-af40-cfc387fadfda bound to our chassis
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.101 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network abda97f3-dcb7-42ee-af40-cfc387fadfda
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.115 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[73c8d346-4d99-4ebd-b619-78d64886dfba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.116 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapabda97f3-d1 in ovnmeta-abda97f3-dcb7-42ee-af40-cfc387fadfda namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.118 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapabda97f3-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.118 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5b5fc00b-31be-430e-a41d-88feee7384cd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.119 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8fbf4365-ffb0-4eb9-9dc6-133a4440f74b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.134 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[4b7d1c41-5804-49cf-bf7a-959a8009a165]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:38:40 compute-0 nova_compute[254092]: 2025-11-25 16:38:40.143 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:40 compute-0 ovn_controller[153477]: 2025-11-25T16:38:40Z|00483|binding|INFO|Setting lport a1b0e8cf-d5e8-4b48-b591-6e3d110aff41 ovn-installed in OVS
Nov 25 16:38:40 compute-0 ovn_controller[153477]: 2025-11-25T16:38:40Z|00484|binding|INFO|Setting lport a1b0e8cf-d5e8-4b48-b591-6e3d110aff41 up in Southbound
Nov 25 16:38:40 compute-0 nova_compute[254092]: 2025-11-25 16:38:40.152 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.166 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ac633b7f-87d7-4ac8-a34d-069d9b7bdf7e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.200 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c41ca1e6-9dea-4d67-b50e-ed77a7cea6f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:38:40 compute-0 NetworkManager[48891]: <info>  [1764088720.2079] manager: (tapabda97f3-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/208)
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.209 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6e2dee14-0f73-4b7c-90cc-d26e7e48191e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.242 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[7089d7ad-ae58-4bd2-95fb-9f5ea754dd53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.247 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6a9d3483-1dfa-4976-9758-a79ed52581c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:38:40 compute-0 NetworkManager[48891]: <info>  [1764088720.2762] device (tapabda97f3-d0): carrier: link connected
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.281 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e90519fb-518d-4954-90c1-cba1f03c9459]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.298 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[306a1cc0-f7c9-4745-82a0-d275db37fb6c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabda97f3-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:a9:e7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 138], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517786, 'reachable_time': 26951, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314340, 'error': None, 'target': 'ovnmeta-abda97f3-dcb7-42ee-af40-cfc387fadfda', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:38:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:38:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:38:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:38:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:38:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.314 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a0b18c9f-8aab-4ce3-991d-9fb4bb0af029]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe64:a9e7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 517786, 'tstamp': 517786}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314341, 'error': None, 'target': 'ovnmeta-abda97f3-dcb7-42ee-af40-cfc387fadfda', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:38:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:38:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e202 do_prune osdmap full prune enabled
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.330 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a021a32b-62e3-4b16-9a99-ba2c9565016b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabda97f3-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:a9:e7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 138], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517786, 'reachable_time': 26951, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314342, 'error': None, 'target': 'ovnmeta-abda97f3-dcb7-42ee-af40-cfc387fadfda', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.358 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[37b4507c-ecc5-40ec-85c5-d5255ffac275]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.413 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[25d6c23a-0718-4bc0-80c4-30191b129a0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.415 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabda97f3-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.415 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.415 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapabda97f3-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:38:40 compute-0 NetworkManager[48891]: <info>  [1764088720.4182] manager: (tapabda97f3-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/209)
Nov 25 16:38:40 compute-0 kernel: tapabda97f3-d0: entered promiscuous mode
Nov 25 16:38:40 compute-0 nova_compute[254092]: 2025-11-25 16:38:40.417 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:40 compute-0 nova_compute[254092]: 2025-11-25 16:38:40.422 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.424 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapabda97f3-d0, col_values=(('external_ids', {'iface-id': '4b466e06-fb69-4706-83df-d7865671165a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:38:40 compute-0 ovn_controller[153477]: 2025-11-25T16:38:40Z|00485|binding|INFO|Releasing lport 4b466e06-fb69-4706-83df-d7865671165a from this chassis (sb_readonly=0)
Nov 25 16:38:40 compute-0 nova_compute[254092]: 2025-11-25 16:38:40.425 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:40 compute-0 nova_compute[254092]: 2025-11-25 16:38:40.440 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:40 compute-0 nova_compute[254092]: 2025-11-25 16:38:40.444 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.445 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/abda97f3-dcb7-42ee-af40-cfc387fadfda.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/abda97f3-dcb7-42ee-af40-cfc387fadfda.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.446 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f8f5bc87-34ae-4330-af62-62ac409bc88c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.447 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-abda97f3-dcb7-42ee-af40-cfc387fadfda
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/abda97f3-dcb7-42ee-af40-cfc387fadfda.pid.haproxy
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID abda97f3-dcb7-42ee-af40-cfc387fadfda
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:38:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:38:40.448 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-abda97f3-dcb7-42ee-af40-cfc387fadfda', 'env', 'PROCESS_TAG=haproxy-abda97f3-dcb7-42ee-af40-cfc387fadfda', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/abda97f3-dcb7-42ee-af40-cfc387fadfda.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:38:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e203 e203: 3 total, 3 up, 3 in
Nov 25 16:38:40 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e203: 3 total, 3 up, 3 in
Nov 25 16:38:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:38:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:38:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:38:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:38:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:38:40 compute-0 podman[314376]: 2025-11-25 16:38:40.822579858 +0000 UTC m=+0.029631475 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:38:41 compute-0 ceph-mon[74985]: pgmap v1572: 321 pgs: 321 active+clean; 246 MiB data, 579 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 8.5 MiB/s wr, 161 op/s
Nov 25 16:38:41 compute-0 ceph-mon[74985]: osdmap e203: 3 total, 3 up, 3 in
Nov 25 16:38:41 compute-0 podman[314376]: 2025-11-25 16:38:41.419401069 +0000 UTC m=+0.626452656 container create ad0a0b8fc7ae04241a33b2e51504aca19cf04d644efc5475e0c2d4a3b12b451c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-abda97f3-dcb7-42ee-af40-cfc387fadfda, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 25 16:38:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1574: 321 pgs: 321 active+clean; 246 MiB data, 580 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 8.5 MiB/s wr, 172 op/s
Nov 25 16:38:41 compute-0 systemd[1]: Started libpod-conmon-ad0a0b8fc7ae04241a33b2e51504aca19cf04d644efc5475e0c2d4a3b12b451c.scope.
Nov 25 16:38:41 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:38:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87d652c06ceeec69f5cac5c1cc50b800c7a964f2bc7a8ca4a2399870c6c25d11/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:38:41 compute-0 podman[314376]: 2025-11-25 16:38:41.578115045 +0000 UTC m=+0.785166632 container init ad0a0b8fc7ae04241a33b2e51504aca19cf04d644efc5475e0c2d4a3b12b451c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-abda97f3-dcb7-42ee-af40-cfc387fadfda, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 16:38:41 compute-0 podman[314376]: 2025-11-25 16:38:41.588945639 +0000 UTC m=+0.795997216 container start ad0a0b8fc7ae04241a33b2e51504aca19cf04d644efc5475e0c2d4a3b12b451c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-abda97f3-dcb7-42ee-af40-cfc387fadfda, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 25 16:38:41 compute-0 neutron-haproxy-ovnmeta-abda97f3-dcb7-42ee-af40-cfc387fadfda[314427]: [NOTICE]   (314439) : New worker (314441) forked
Nov 25 16:38:41 compute-0 neutron-haproxy-ovnmeta-abda97f3-dcb7-42ee-af40-cfc387fadfda[314427]: [NOTICE]   (314439) : Loading success.
Nov 25 16:38:41 compute-0 nova_compute[254092]: 2025-11-25 16:38:41.664 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088721.6643987, 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:38:41 compute-0 nova_compute[254092]: 2025-11-25 16:38:41.665 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] VM Started (Lifecycle Event)
Nov 25 16:38:41 compute-0 nova_compute[254092]: 2025-11-25 16:38:41.690 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:38:41 compute-0 nova_compute[254092]: 2025-11-25 16:38:41.695 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088721.6654434, 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:38:41 compute-0 nova_compute[254092]: 2025-11-25 16:38:41.696 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] VM Paused (Lifecycle Event)
Nov 25 16:38:41 compute-0 nova_compute[254092]: 2025-11-25 16:38:41.710 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:38:41 compute-0 nova_compute[254092]: 2025-11-25 16:38:41.714 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:38:41 compute-0 nova_compute[254092]: 2025-11-25 16:38:41.730 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:38:42 compute-0 ceph-mon[74985]: pgmap v1574: 321 pgs: 321 active+clean; 246 MiB data, 580 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 8.5 MiB/s wr, 172 op/s
Nov 25 16:38:42 compute-0 nova_compute[254092]: 2025-11-25 16:38:42.434 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:42 compute-0 sshd-session[314451]: Connection closed by 139.171.194.111 port 42994
Nov 25 16:38:42 compute-0 sshd-session[314452]: Invalid user admin from 139.171.194.111 port 42996
Nov 25 16:38:42 compute-0 sshd-session[314452]: Connection closed by invalid user admin 139.171.194.111 port 42996 [preauth]
Nov 25 16:38:42 compute-0 nova_compute[254092]: 2025-11-25 16:38:42.720 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088707.7184463, 497caf1f-53fe-425d-8e5c-10b2f0a2506d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:38:42 compute-0 nova_compute[254092]: 2025-11-25 16:38:42.722 254096 INFO nova.compute.manager [-] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] VM Stopped (Lifecycle Event)
Nov 25 16:38:42 compute-0 nova_compute[254092]: 2025-11-25 16:38:42.790 254096 DEBUG nova.compute.manager [None req-e2e8c774-34c0-483e-bcd9-dfa22ea1d05b - - - - - -] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:38:42 compute-0 nova_compute[254092]: 2025-11-25 16:38:42.794 254096 DEBUG nova.compute.manager [None req-e2e8c774-34c0-483e-bcd9-dfa22ea1d05b - - - - - -] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: shelved, current task_state: shelving_offloading, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:38:42 compute-0 nova_compute[254092]: 2025-11-25 16:38:42.820 254096 INFO nova.compute.manager [None req-e2e8c774-34c0-483e-bcd9-dfa22ea1d05b - - - - - -] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] During sync_power_state the instance has a pending task (shelving_offloading). Skip.
Nov 25 16:38:43 compute-0 nova_compute[254092]: 2025-11-25 16:38:43.047 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1575: 321 pgs: 321 active+clean; 246 MiB data, 580 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 7.3 MiB/s wr, 148 op/s
Nov 25 16:38:43 compute-0 nova_compute[254092]: 2025-11-25 16:38:43.690 254096 DEBUG nova.compute.manager [req-38f8d451-ede3-4bd7-bef8-ff12f2c664c4 req-04e769e3-21f3-4f4e-8c7d-cc82788d1fc2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Received event network-vif-plugged-a1b0e8cf-d5e8-4b48-b591-6e3d110aff41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:38:43 compute-0 nova_compute[254092]: 2025-11-25 16:38:43.691 254096 DEBUG oslo_concurrency.lockutils [req-38f8d451-ede3-4bd7-bef8-ff12f2c664c4 req-04e769e3-21f3-4f4e-8c7d-cc82788d1fc2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "52c13ebd-df79-43b9-8d5f-e4bf4a2e0738-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:38:43 compute-0 nova_compute[254092]: 2025-11-25 16:38:43.692 254096 DEBUG oslo_concurrency.lockutils [req-38f8d451-ede3-4bd7-bef8-ff12f2c664c4 req-04e769e3-21f3-4f4e-8c7d-cc82788d1fc2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "52c13ebd-df79-43b9-8d5f-e4bf4a2e0738-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:38:43 compute-0 nova_compute[254092]: 2025-11-25 16:38:43.692 254096 DEBUG oslo_concurrency.lockutils [req-38f8d451-ede3-4bd7-bef8-ff12f2c664c4 req-04e769e3-21f3-4f4e-8c7d-cc82788d1fc2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "52c13ebd-df79-43b9-8d5f-e4bf4a2e0738-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:38:43 compute-0 nova_compute[254092]: 2025-11-25 16:38:43.692 254096 DEBUG nova.compute.manager [req-38f8d451-ede3-4bd7-bef8-ff12f2c664c4 req-04e769e3-21f3-4f4e-8c7d-cc82788d1fc2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Processing event network-vif-plugged-a1b0e8cf-d5e8-4b48-b591-6e3d110aff41 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:38:43 compute-0 nova_compute[254092]: 2025-11-25 16:38:43.694 254096 DEBUG nova.compute.manager [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:38:43 compute-0 nova_compute[254092]: 2025-11-25 16:38:43.699 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088723.698265, 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:38:43 compute-0 nova_compute[254092]: 2025-11-25 16:38:43.699 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] VM Resumed (Lifecycle Event)
Nov 25 16:38:43 compute-0 nova_compute[254092]: 2025-11-25 16:38:43.700 254096 DEBUG nova.virt.libvirt.driver [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:38:43 compute-0 nova_compute[254092]: 2025-11-25 16:38:43.704 254096 INFO nova.virt.libvirt.driver [-] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Instance spawned successfully.
Nov 25 16:38:43 compute-0 nova_compute[254092]: 2025-11-25 16:38:43.705 254096 DEBUG nova.virt.libvirt.driver [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:38:43 compute-0 nova_compute[254092]: 2025-11-25 16:38:43.720 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:38:43 compute-0 nova_compute[254092]: 2025-11-25 16:38:43.727 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:38:43 compute-0 nova_compute[254092]: 2025-11-25 16:38:43.732 254096 DEBUG nova.virt.libvirt.driver [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:38:43 compute-0 nova_compute[254092]: 2025-11-25 16:38:43.733 254096 DEBUG nova.virt.libvirt.driver [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:38:43 compute-0 nova_compute[254092]: 2025-11-25 16:38:43.733 254096 DEBUG nova.virt.libvirt.driver [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:38:43 compute-0 nova_compute[254092]: 2025-11-25 16:38:43.734 254096 DEBUG nova.virt.libvirt.driver [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:38:43 compute-0 nova_compute[254092]: 2025-11-25 16:38:43.734 254096 DEBUG nova.virt.libvirt.driver [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:38:43 compute-0 nova_compute[254092]: 2025-11-25 16:38:43.735 254096 DEBUG nova.virt.libvirt.driver [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:38:43 compute-0 nova_compute[254092]: 2025-11-25 16:38:43.766 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:38:43 compute-0 nova_compute[254092]: 2025-11-25 16:38:43.953 254096 INFO nova.compute.manager [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Took 11.81 seconds to spawn the instance on the hypervisor.
Nov 25 16:38:43 compute-0 nova_compute[254092]: 2025-11-25 16:38:43.954 254096 DEBUG nova.compute.manager [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:38:44 compute-0 nova_compute[254092]: 2025-11-25 16:38:44.247 254096 INFO nova.compute.manager [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Took 13.26 seconds to build instance.
Nov 25 16:38:44 compute-0 nova_compute[254092]: 2025-11-25 16:38:44.263 254096 DEBUG oslo_concurrency.lockutils [None req-2ae93aa8-b9ce-4da8-8323-f35e40c8d36a 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Lock "52c13ebd-df79-43b9-8d5f-e4bf4a2e0738" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.441s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:38:44 compute-0 ceph-mon[74985]: pgmap v1575: 321 pgs: 321 active+clean; 246 MiB data, 580 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 7.3 MiB/s wr, 148 op/s
Nov 25 16:38:44 compute-0 nova_compute[254092]: 2025-11-25 16:38:44.777 254096 INFO nova.virt.libvirt.driver [-] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Instance destroyed successfully.
Nov 25 16:38:44 compute-0 nova_compute[254092]: 2025-11-25 16:38:44.778 254096 DEBUG nova.objects.instance [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lazy-loading 'resources' on Instance uuid 497caf1f-53fe-425d-8e5c-10b2f0a2506d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:38:44 compute-0 nova_compute[254092]: 2025-11-25 16:38:44.796 254096 DEBUG nova.virt.libvirt.vif [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:37:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1046040473',display_name='tempest-DeleteServersTestJSON-server-1046040473',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1046040473',id=52,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:37:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='ff5d31ffc7934838a461d6ac8aa546c9',ramdisk_id='',reservation_id='r-ia105ljl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-150185898',owner_user_name='tempest-DeleteServersTestJSON-150185898-project-member',shelved_at='2025-11-25T16:38:36.469708',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='fedb4aef-bdad-4b3a-abdc-073591bfcffa'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:38:29Z,user_data=None,user_id='788efa7ba65347b69663f4ea7ba4cd9d',uuid=497caf1f-53fe-425d-8e5c-10b2f0a2506d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca", "address": "fa:16:3e:b0:76:f8", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd90f4f5a-3c", "ovs_interfaceid": "d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:38:44 compute-0 nova_compute[254092]: 2025-11-25 16:38:44.797 254096 DEBUG nova.network.os_vif_util [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converting VIF {"id": "d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca", "address": "fa:16:3e:b0:76:f8", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd90f4f5a-3c", "ovs_interfaceid": "d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:38:44 compute-0 nova_compute[254092]: 2025-11-25 16:38:44.798 254096 DEBUG nova.network.os_vif_util [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:76:f8,bridge_name='br-int',has_traffic_filtering=True,id=d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd90f4f5a-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:38:44 compute-0 nova_compute[254092]: 2025-11-25 16:38:44.798 254096 DEBUG os_vif [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:76:f8,bridge_name='br-int',has_traffic_filtering=True,id=d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd90f4f5a-3c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:38:44 compute-0 nova_compute[254092]: 2025-11-25 16:38:44.801 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:44 compute-0 nova_compute[254092]: 2025-11-25 16:38:44.801 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd90f4f5a-3c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:38:44 compute-0 nova_compute[254092]: 2025-11-25 16:38:44.803 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:44 compute-0 nova_compute[254092]: 2025-11-25 16:38:44.806 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:38:44 compute-0 nova_compute[254092]: 2025-11-25 16:38:44.808 254096 INFO os_vif [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:76:f8,bridge_name='br-int',has_traffic_filtering=True,id=d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd90f4f5a-3c')
Nov 25 16:38:44 compute-0 nova_compute[254092]: 2025-11-25 16:38:44.989 254096 DEBUG nova.compute.manager [req-c323f3b6-b0e8-4bc4-be7d-72348cff0f70 req-bddbec20-523c-4e40-b8fb-2f4f514a732c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Received event network-changed-d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:38:44 compute-0 nova_compute[254092]: 2025-11-25 16:38:44.990 254096 DEBUG nova.compute.manager [req-c323f3b6-b0e8-4bc4-be7d-72348cff0f70 req-bddbec20-523c-4e40-b8fb-2f4f514a732c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Refreshing instance network info cache due to event network-changed-d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:38:44 compute-0 nova_compute[254092]: 2025-11-25 16:38:44.990 254096 DEBUG oslo_concurrency.lockutils [req-c323f3b6-b0e8-4bc4-be7d-72348cff0f70 req-bddbec20-523c-4e40-b8fb-2f4f514a732c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-497caf1f-53fe-425d-8e5c-10b2f0a2506d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:38:44 compute-0 nova_compute[254092]: 2025-11-25 16:38:44.991 254096 DEBUG oslo_concurrency.lockutils [req-c323f3b6-b0e8-4bc4-be7d-72348cff0f70 req-bddbec20-523c-4e40-b8fb-2f4f514a732c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-497caf1f-53fe-425d-8e5c-10b2f0a2506d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:38:44 compute-0 nova_compute[254092]: 2025-11-25 16:38:44.991 254096 DEBUG nova.network.neutron [req-c323f3b6-b0e8-4bc4-be7d-72348cff0f70 req-bddbec20-523c-4e40-b8fb-2f4f514a732c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Refreshing network info cache for port d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:38:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:38:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1576: 321 pgs: 321 active+clean; 246 MiB data, 580 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 3.8 MiB/s wr, 100 op/s
Nov 25 16:38:45 compute-0 nova_compute[254092]: 2025-11-25 16:38:45.542 254096 INFO nova.virt.libvirt.driver [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Deleting instance files /var/lib/nova/instances/497caf1f-53fe-425d-8e5c-10b2f0a2506d_del
Nov 25 16:38:45 compute-0 nova_compute[254092]: 2025-11-25 16:38:45.547 254096 INFO nova.virt.libvirt.driver [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Deletion of /var/lib/nova/instances/497caf1f-53fe-425d-8e5c-10b2f0a2506d_del complete
Nov 25 16:38:45 compute-0 nova_compute[254092]: 2025-11-25 16:38:45.689 254096 INFO nova.scheduler.client.report [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Deleted allocations for instance 497caf1f-53fe-425d-8e5c-10b2f0a2506d
Nov 25 16:38:45 compute-0 nova_compute[254092]: 2025-11-25 16:38:45.762 254096 DEBUG oslo_concurrency.lockutils [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:38:45 compute-0 nova_compute[254092]: 2025-11-25 16:38:45.764 254096 DEBUG oslo_concurrency.lockutils [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:38:45 compute-0 nova_compute[254092]: 2025-11-25 16:38:45.819 254096 DEBUG oslo_concurrency.processutils [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:38:45 compute-0 nova_compute[254092]: 2025-11-25 16:38:45.873 254096 DEBUG nova.compute.manager [req-443da545-706e-46f9-bc52-97a68605cfb7 req-88d9fedd-a93e-43b8-87a0-fdac0d7190a8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Received event network-vif-plugged-a1b0e8cf-d5e8-4b48-b591-6e3d110aff41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:38:45 compute-0 nova_compute[254092]: 2025-11-25 16:38:45.875 254096 DEBUG oslo_concurrency.lockutils [req-443da545-706e-46f9-bc52-97a68605cfb7 req-88d9fedd-a93e-43b8-87a0-fdac0d7190a8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "52c13ebd-df79-43b9-8d5f-e4bf4a2e0738-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:38:45 compute-0 nova_compute[254092]: 2025-11-25 16:38:45.875 254096 DEBUG oslo_concurrency.lockutils [req-443da545-706e-46f9-bc52-97a68605cfb7 req-88d9fedd-a93e-43b8-87a0-fdac0d7190a8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "52c13ebd-df79-43b9-8d5f-e4bf4a2e0738-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:38:45 compute-0 nova_compute[254092]: 2025-11-25 16:38:45.875 254096 DEBUG oslo_concurrency.lockutils [req-443da545-706e-46f9-bc52-97a68605cfb7 req-88d9fedd-a93e-43b8-87a0-fdac0d7190a8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "52c13ebd-df79-43b9-8d5f-e4bf4a2e0738-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:38:45 compute-0 nova_compute[254092]: 2025-11-25 16:38:45.876 254096 DEBUG nova.compute.manager [req-443da545-706e-46f9-bc52-97a68605cfb7 req-88d9fedd-a93e-43b8-87a0-fdac0d7190a8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] No waiting events found dispatching network-vif-plugged-a1b0e8cf-d5e8-4b48-b591-6e3d110aff41 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:38:45 compute-0 nova_compute[254092]: 2025-11-25 16:38:45.876 254096 WARNING nova.compute.manager [req-443da545-706e-46f9-bc52-97a68605cfb7 req-88d9fedd-a93e-43b8-87a0-fdac0d7190a8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Received unexpected event network-vif-plugged-a1b0e8cf-d5e8-4b48-b591-6e3d110aff41 for instance with vm_state active and task_state None.
Nov 25 16:38:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:38:46 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2557752528' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:38:46 compute-0 nova_compute[254092]: 2025-11-25 16:38:46.318 254096 DEBUG oslo_concurrency.processutils [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:38:46 compute-0 nova_compute[254092]: 2025-11-25 16:38:46.325 254096 DEBUG nova.compute.provider_tree [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:38:46 compute-0 nova_compute[254092]: 2025-11-25 16:38:46.345 254096 DEBUG nova.scheduler.client.report [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:38:46 compute-0 nova_compute[254092]: 2025-11-25 16:38:46.376 254096 DEBUG oslo_concurrency.lockutils [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:38:46 compute-0 nova_compute[254092]: 2025-11-25 16:38:46.456 254096 DEBUG oslo_concurrency.lockutils [None req-e2576c68-a6f1-42ae-9c07-21f4d1f124a3 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "497caf1f-53fe-425d-8e5c-10b2f0a2506d" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 74.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:38:46 compute-0 ceph-mon[74985]: pgmap v1576: 321 pgs: 321 active+clean; 246 MiB data, 580 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 3.8 MiB/s wr, 100 op/s
Nov 25 16:38:46 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2557752528' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:38:47 compute-0 nova_compute[254092]: 2025-11-25 16:38:47.453 254096 DEBUG nova.compute.manager [None req-fe6afb05-0e90-431b-8603-98be392c979b 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:38:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1577: 321 pgs: 321 active+clean; 188 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 108 op/s
Nov 25 16:38:47 compute-0 nova_compute[254092]: 2025-11-25 16:38:47.484 254096 DEBUG nova.network.neutron [req-c323f3b6-b0e8-4bc4-be7d-72348cff0f70 req-bddbec20-523c-4e40-b8fb-2f4f514a732c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Updated VIF entry in instance network info cache for port d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:38:47 compute-0 nova_compute[254092]: 2025-11-25 16:38:47.485 254096 DEBUG nova.network.neutron [req-c323f3b6-b0e8-4bc4-be7d-72348cff0f70 req-bddbec20-523c-4e40-b8fb-2f4f514a732c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 497caf1f-53fe-425d-8e5c-10b2f0a2506d] Updating instance_info_cache with network_info: [{"id": "d90f4f5a-3cd7-4c5d-bf11-0e669fb736ca", "address": "fa:16:3e:b0:76:f8", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": null, "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tapd90f4f5a-3c", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:38:47 compute-0 nova_compute[254092]: 2025-11-25 16:38:47.509 254096 INFO nova.compute.manager [None req-fe6afb05-0e90-431b-8603-98be392c979b 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] instance snapshotting
Nov 25 16:38:47 compute-0 nova_compute[254092]: 2025-11-25 16:38:47.523 254096 DEBUG oslo_concurrency.lockutils [req-c323f3b6-b0e8-4bc4-be7d-72348cff0f70 req-bddbec20-523c-4e40-b8fb-2f4f514a732c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-497caf1f-53fe-425d-8e5c-10b2f0a2506d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:38:47 compute-0 nova_compute[254092]: 2025-11-25 16:38:47.991 254096 INFO nova.virt.libvirt.driver [None req-fe6afb05-0e90-431b-8603-98be392c979b 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Beginning live snapshot process
Nov 25 16:38:48 compute-0 nova_compute[254092]: 2025-11-25 16:38:48.050 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:48 compute-0 nova_compute[254092]: 2025-11-25 16:38:48.198 254096 DEBUG nova.virt.libvirt.imagebackend [None req-fe6afb05-0e90-431b-8603-98be392c979b 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] No parent info for 8b512c8e-2281-41de-a668-eb983e174ba0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 25 16:38:48 compute-0 ceph-mon[74985]: pgmap v1577: 321 pgs: 321 active+clean; 188 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 108 op/s
Nov 25 16:38:48 compute-0 nova_compute[254092]: 2025-11-25 16:38:48.613 254096 DEBUG nova.storage.rbd_utils [None req-fe6afb05-0e90-431b-8603-98be392c979b 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] creating snapshot(b91028f01b13453bb03c3748cb1f0430) on rbd image(52c13ebd-df79-43b9-8d5f-e4bf4a2e0738_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 16:38:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1578: 321 pgs: 321 active+clean; 188 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 108 op/s
Nov 25 16:38:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e203 do_prune osdmap full prune enabled
Nov 25 16:38:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e204 e204: 3 total, 3 up, 3 in
Nov 25 16:38:49 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e204: 3 total, 3 up, 3 in
Nov 25 16:38:49 compute-0 nova_compute[254092]: 2025-11-25 16:38:49.803 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:49 compute-0 nova_compute[254092]: 2025-11-25 16:38:49.837 254096 DEBUG nova.storage.rbd_utils [None req-fe6afb05-0e90-431b-8603-98be392c979b 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] cloning vms/52c13ebd-df79-43b9-8d5f-e4bf4a2e0738_disk@b91028f01b13453bb03c3748cb1f0430 to images/82803f77-dd79-40bf-9575-d6c61a15ce8a clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 16:38:50 compute-0 nova_compute[254092]: 2025-11-25 16:38:50.089 254096 DEBUG nova.storage.rbd_utils [None req-fe6afb05-0e90-431b-8603-98be392c979b 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] flattening images/82803f77-dd79-40bf-9575-d6c61a15ce8a flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 16:38:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e204 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:38:50 compute-0 ceph-mon[74985]: pgmap v1578: 321 pgs: 321 active+clean; 188 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 108 op/s
Nov 25 16:38:50 compute-0 ceph-mon[74985]: osdmap e204: 3 total, 3 up, 3 in
Nov 25 16:38:50 compute-0 nova_compute[254092]: 2025-11-25 16:38:50.923 254096 DEBUG oslo_concurrency.lockutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Acquiring lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:38:50 compute-0 nova_compute[254092]: 2025-11-25 16:38:50.923 254096 DEBUG oslo_concurrency.lockutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:38:50 compute-0 nova_compute[254092]: 2025-11-25 16:38:50.989 254096 DEBUG nova.compute.manager [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:38:51 compute-0 nova_compute[254092]: 2025-11-25 16:38:51.022 254096 DEBUG nova.storage.rbd_utils [None req-fe6afb05-0e90-431b-8603-98be392c979b 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] removing snapshot(b91028f01b13453bb03c3748cb1f0430) on rbd image(52c13ebd-df79-43b9-8d5f-e4bf4a2e0738_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 16:38:51 compute-0 nova_compute[254092]: 2025-11-25 16:38:51.062 254096 DEBUG oslo_concurrency.lockutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:38:51 compute-0 nova_compute[254092]: 2025-11-25 16:38:51.062 254096 DEBUG oslo_concurrency.lockutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:38:51 compute-0 nova_compute[254092]: 2025-11-25 16:38:51.067 254096 DEBUG nova.virt.hardware [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:38:51 compute-0 nova_compute[254092]: 2025-11-25 16:38:51.067 254096 INFO nova.compute.claims [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:38:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:38:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:38:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:38:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:38:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005052917643419938 of space, bias 1.0, pg target 0.15158752930259814 quantized to 32 (current 32)
Nov 25 16:38:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:38:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:38:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:38:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:38:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:38:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014241774923487661 of space, bias 1.0, pg target 0.42725324770462986 quantized to 32 (current 32)
Nov 25 16:38:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:38:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 16:38:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:38:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:38:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:38:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 16:38:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:38:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 16:38:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:38:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:38:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:38:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 16:38:51 compute-0 nova_compute[254092]: 2025-11-25 16:38:51.217 254096 DEBUG oslo_concurrency.processutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:38:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1580: 321 pgs: 321 active+clean; 148 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 1.3 MiB/s wr, 179 op/s
Nov 25 16:38:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:38:51 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/320757746' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:38:51 compute-0 nova_compute[254092]: 2025-11-25 16:38:51.654 254096 DEBUG oslo_concurrency.processutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:38:51 compute-0 nova_compute[254092]: 2025-11-25 16:38:51.660 254096 DEBUG nova.compute.provider_tree [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:38:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e204 do_prune osdmap full prune enabled
Nov 25 16:38:51 compute-0 nova_compute[254092]: 2025-11-25 16:38:51.676 254096 DEBUG nova.scheduler.client.report [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:38:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e205 e205: 3 total, 3 up, 3 in
Nov 25 16:38:51 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e205: 3 total, 3 up, 3 in
Nov 25 16:38:51 compute-0 nova_compute[254092]: 2025-11-25 16:38:51.705 254096 DEBUG oslo_concurrency.lockutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:38:51 compute-0 nova_compute[254092]: 2025-11-25 16:38:51.706 254096 DEBUG nova.compute.manager [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:38:51 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/320757746' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:38:51 compute-0 nova_compute[254092]: 2025-11-25 16:38:51.772 254096 DEBUG nova.compute.manager [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:38:51 compute-0 nova_compute[254092]: 2025-11-25 16:38:51.773 254096 DEBUG nova.network.neutron [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:38:51 compute-0 nova_compute[254092]: 2025-11-25 16:38:51.842 254096 INFO nova.virt.libvirt.driver [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:38:51 compute-0 nova_compute[254092]: 2025-11-25 16:38:51.880 254096 DEBUG nova.compute.manager [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:38:51 compute-0 nova_compute[254092]: 2025-11-25 16:38:51.985 254096 DEBUG nova.compute.manager [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:38:51 compute-0 nova_compute[254092]: 2025-11-25 16:38:51.987 254096 DEBUG nova.virt.libvirt.driver [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:38:51 compute-0 nova_compute[254092]: 2025-11-25 16:38:51.988 254096 INFO nova.virt.libvirt.driver [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Creating image(s)
Nov 25 16:38:52 compute-0 nova_compute[254092]: 2025-11-25 16:38:52.066 254096 DEBUG nova.storage.rbd_utils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] rbd image 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:38:52 compute-0 nova_compute[254092]: 2025-11-25 16:38:52.097 254096 DEBUG nova.storage.rbd_utils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] rbd image 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:38:52 compute-0 nova_compute[254092]: 2025-11-25 16:38:52.121 254096 DEBUG nova.storage.rbd_utils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] rbd image 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:38:52 compute-0 nova_compute[254092]: 2025-11-25 16:38:52.125 254096 DEBUG oslo_concurrency.processutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:38:52 compute-0 nova_compute[254092]: 2025-11-25 16:38:52.163 254096 DEBUG nova.storage.rbd_utils [None req-fe6afb05-0e90-431b-8603-98be392c979b 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] creating snapshot(snap) on rbd image(82803f77-dd79-40bf-9575-d6c61a15ce8a) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 16:38:52 compute-0 nova_compute[254092]: 2025-11-25 16:38:52.236 254096 DEBUG nova.policy [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '96cb4b65d4074373a38534856574dc8f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a92e4b86655441c59ead5a1bd83173e5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:38:52 compute-0 nova_compute[254092]: 2025-11-25 16:38:52.239 254096 DEBUG oslo_concurrency.processutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.114s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:38:52 compute-0 nova_compute[254092]: 2025-11-25 16:38:52.240 254096 DEBUG oslo_concurrency.lockutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:38:52 compute-0 nova_compute[254092]: 2025-11-25 16:38:52.241 254096 DEBUG oslo_concurrency.lockutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:38:52 compute-0 nova_compute[254092]: 2025-11-25 16:38:52.241 254096 DEBUG oslo_concurrency.lockutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:38:52 compute-0 nova_compute[254092]: 2025-11-25 16:38:52.262 254096 DEBUG nova.storage.rbd_utils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] rbd image 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:38:52 compute-0 nova_compute[254092]: 2025-11-25 16:38:52.265 254096 DEBUG oslo_concurrency.processutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:38:52 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e205 do_prune osdmap full prune enabled
Nov 25 16:38:52 compute-0 ceph-mon[74985]: pgmap v1580: 321 pgs: 321 active+clean; 148 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 1.3 MiB/s wr, 179 op/s
Nov 25 16:38:52 compute-0 ceph-mon[74985]: osdmap e205: 3 total, 3 up, 3 in
Nov 25 16:38:52 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e206 e206: 3 total, 3 up, 3 in
Nov 25 16:38:53 compute-0 nova_compute[254092]: 2025-11-25 16:38:53.052 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:53 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e206: 3 total, 3 up, 3 in
Nov 25 16:38:53 compute-0 nova_compute[254092]: 2025-11-25 16:38:53.472 254096 DEBUG oslo_concurrency.processutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.207s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:38:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1583: 321 pgs: 321 active+clean; 148 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Nov 25 16:38:53 compute-0 nova_compute[254092]: 2025-11-25 16:38:53.535 254096 DEBUG nova.storage.rbd_utils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] resizing rbd image 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:38:53 compute-0 nova_compute[254092]: 2025-11-25 16:38:53.580 254096 DEBUG nova.network.neutron [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Successfully created port: a6f06f5d-486f-4039-a0cb-30b122e69258 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:38:53 compute-0 nova_compute[254092]: 2025-11-25 16:38:53.728 254096 DEBUG nova.objects.instance [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Lazy-loading 'migration_context' on Instance uuid 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:38:53 compute-0 nova_compute[254092]: 2025-11-25 16:38:53.743 254096 DEBUG nova.virt.libvirt.driver [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:38:53 compute-0 nova_compute[254092]: 2025-11-25 16:38:53.744 254096 DEBUG nova.virt.libvirt.driver [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Ensure instance console log exists: /var/lib/nova/instances/97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:38:53 compute-0 nova_compute[254092]: 2025-11-25 16:38:53.744 254096 DEBUG oslo_concurrency.lockutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:38:53 compute-0 nova_compute[254092]: 2025-11-25 16:38:53.745 254096 DEBUG oslo_concurrency.lockutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:38:53 compute-0 nova_compute[254092]: 2025-11-25 16:38:53.745 254096 DEBUG oslo_concurrency.lockutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:38:54 compute-0 ceph-mon[74985]: osdmap e206: 3 total, 3 up, 3 in
Nov 25 16:38:54 compute-0 nova_compute[254092]: 2025-11-25 16:38:54.807 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:54 compute-0 nova_compute[254092]: 2025-11-25 16:38:54.981 254096 DEBUG nova.network.neutron [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Successfully updated port: a6f06f5d-486f-4039-a0cb-30b122e69258 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:38:55 compute-0 nova_compute[254092]: 2025-11-25 16:38:55.046 254096 DEBUG oslo_concurrency.lockutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Acquiring lock "refresh_cache-97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:38:55 compute-0 nova_compute[254092]: 2025-11-25 16:38:55.046 254096 DEBUG oslo_concurrency.lockutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Acquired lock "refresh_cache-97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:38:55 compute-0 nova_compute[254092]: 2025-11-25 16:38:55.047 254096 DEBUG nova.network.neutron [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:38:55 compute-0 ceph-mon[74985]: pgmap v1583: 321 pgs: 321 active+clean; 148 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Nov 25 16:38:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 16:38:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2035909139' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:38:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 16:38:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2035909139' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:38:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:38:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e206 do_prune osdmap full prune enabled
Nov 25 16:38:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e207 e207: 3 total, 3 up, 3 in
Nov 25 16:38:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1584: 321 pgs: 321 active+clean; 152 MiB data, 518 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 4.8 MiB/s wr, 201 op/s
Nov 25 16:38:55 compute-0 nova_compute[254092]: 2025-11-25 16:38:55.530 254096 DEBUG nova.compute.manager [req-e0b1cd8e-a818-4e06-8a4f-1084d827b852 req-18c16735-e400-4919-a5a5-8c0c93ee487d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Received event network-changed-a6f06f5d-486f-4039-a0cb-30b122e69258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:38:55 compute-0 nova_compute[254092]: 2025-11-25 16:38:55.530 254096 DEBUG nova.compute.manager [req-e0b1cd8e-a818-4e06-8a4f-1084d827b852 req-18c16735-e400-4919-a5a5-8c0c93ee487d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Refreshing instance network info cache due to event network-changed-a6f06f5d-486f-4039-a0cb-30b122e69258. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:38:55 compute-0 nova_compute[254092]: 2025-11-25 16:38:55.530 254096 DEBUG oslo_concurrency.lockutils [req-e0b1cd8e-a818-4e06-8a4f-1084d827b852 req-18c16735-e400-4919-a5a5-8c0c93ee487d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:38:55 compute-0 nova_compute[254092]: 2025-11-25 16:38:55.532 254096 DEBUG nova.network.neutron [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:38:55 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e207: 3 total, 3 up, 3 in
Nov 25 16:38:55 compute-0 nova_compute[254092]: 2025-11-25 16:38:55.598 254096 INFO nova.virt.libvirt.driver [None req-fe6afb05-0e90-431b-8603-98be392c979b 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Snapshot image upload complete
Nov 25 16:38:55 compute-0 nova_compute[254092]: 2025-11-25 16:38:55.598 254096 INFO nova.compute.manager [None req-fe6afb05-0e90-431b-8603-98be392c979b 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Took 8.09 seconds to snapshot the instance on the hypervisor.
Nov 25 16:38:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/2035909139' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:38:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/2035909139' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:38:56 compute-0 ceph-mon[74985]: pgmap v1584: 321 pgs: 321 active+clean; 152 MiB data, 518 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 4.8 MiB/s wr, 201 op/s
Nov 25 16:38:56 compute-0 ceph-mon[74985]: osdmap e207: 3 total, 3 up, 3 in
Nov 25 16:38:56 compute-0 nova_compute[254092]: 2025-11-25 16:38:56.754 254096 DEBUG oslo_concurrency.lockutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "e5947529-cfda-4753-94cd-b764da9d5c2c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:38:56 compute-0 nova_compute[254092]: 2025-11-25 16:38:56.754 254096 DEBUG oslo_concurrency.lockutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:38:56 compute-0 nova_compute[254092]: 2025-11-25 16:38:56.787 254096 DEBUG nova.compute.manager [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:38:57 compute-0 nova_compute[254092]: 2025-11-25 16:38:57.041 254096 DEBUG nova.network.neutron [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Updating instance_info_cache with network_info: [{"id": "a6f06f5d-486f-4039-a0cb-30b122e69258", "address": "fa:16:3e:76:39:37", "network": {"id": "d5808bee-5100-4cdf-b578-a1bc323dafe9", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-286790264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a92e4b86655441c59ead5a1bd83173e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6f06f5d-48", "ovs_interfaceid": "a6f06f5d-486f-4039-a0cb-30b122e69258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:38:57 compute-0 nova_compute[254092]: 2025-11-25 16:38:57.076 254096 DEBUG oslo_concurrency.lockutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:38:57 compute-0 nova_compute[254092]: 2025-11-25 16:38:57.077 254096 DEBUG oslo_concurrency.lockutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:38:57 compute-0 nova_compute[254092]: 2025-11-25 16:38:57.086 254096 DEBUG nova.virt.hardware [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:38:57 compute-0 nova_compute[254092]: 2025-11-25 16:38:57.086 254096 INFO nova.compute.claims [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:38:57 compute-0 nova_compute[254092]: 2025-11-25 16:38:57.111 254096 DEBUG oslo_concurrency.lockutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Releasing lock "refresh_cache-97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:38:57 compute-0 nova_compute[254092]: 2025-11-25 16:38:57.112 254096 DEBUG nova.compute.manager [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Instance network_info: |[{"id": "a6f06f5d-486f-4039-a0cb-30b122e69258", "address": "fa:16:3e:76:39:37", "network": {"id": "d5808bee-5100-4cdf-b578-a1bc323dafe9", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-286790264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a92e4b86655441c59ead5a1bd83173e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6f06f5d-48", "ovs_interfaceid": "a6f06f5d-486f-4039-a0cb-30b122e69258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:38:57 compute-0 nova_compute[254092]: 2025-11-25 16:38:57.112 254096 DEBUG oslo_concurrency.lockutils [req-e0b1cd8e-a818-4e06-8a4f-1084d827b852 req-18c16735-e400-4919-a5a5-8c0c93ee487d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:38:57 compute-0 nova_compute[254092]: 2025-11-25 16:38:57.112 254096 DEBUG nova.network.neutron [req-e0b1cd8e-a818-4e06-8a4f-1084d827b852 req-18c16735-e400-4919-a5a5-8c0c93ee487d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Refreshing network info cache for port a6f06f5d-486f-4039-a0cb-30b122e69258 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:38:57 compute-0 nova_compute[254092]: 2025-11-25 16:38:57.115 254096 DEBUG nova.virt.libvirt.driver [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Start _get_guest_xml network_info=[{"id": "a6f06f5d-486f-4039-a0cb-30b122e69258", "address": "fa:16:3e:76:39:37", "network": {"id": "d5808bee-5100-4cdf-b578-a1bc323dafe9", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-286790264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a92e4b86655441c59ead5a1bd83173e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6f06f5d-48", "ovs_interfaceid": "a6f06f5d-486f-4039-a0cb-30b122e69258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:38:57 compute-0 nova_compute[254092]: 2025-11-25 16:38:57.120 254096 WARNING nova.virt.libvirt.driver [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:38:57 compute-0 nova_compute[254092]: 2025-11-25 16:38:57.124 254096 DEBUG nova.virt.libvirt.host [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:38:57 compute-0 nova_compute[254092]: 2025-11-25 16:38:57.126 254096 DEBUG nova.virt.libvirt.host [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:38:57 compute-0 nova_compute[254092]: 2025-11-25 16:38:57.133 254096 DEBUG nova.virt.libvirt.host [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:38:57 compute-0 nova_compute[254092]: 2025-11-25 16:38:57.134 254096 DEBUG nova.virt.libvirt.host [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:38:57 compute-0 nova_compute[254092]: 2025-11-25 16:38:57.134 254096 DEBUG nova.virt.libvirt.driver [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:38:57 compute-0 nova_compute[254092]: 2025-11-25 16:38:57.135 254096 DEBUG nova.virt.hardware [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:38:57 compute-0 nova_compute[254092]: 2025-11-25 16:38:57.135 254096 DEBUG nova.virt.hardware [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:38:57 compute-0 nova_compute[254092]: 2025-11-25 16:38:57.135 254096 DEBUG nova.virt.hardware [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:38:57 compute-0 nova_compute[254092]: 2025-11-25 16:38:57.135 254096 DEBUG nova.virt.hardware [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:38:57 compute-0 nova_compute[254092]: 2025-11-25 16:38:57.136 254096 DEBUG nova.virt.hardware [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:38:57 compute-0 nova_compute[254092]: 2025-11-25 16:38:57.136 254096 DEBUG nova.virt.hardware [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:38:57 compute-0 nova_compute[254092]: 2025-11-25 16:38:57.136 254096 DEBUG nova.virt.hardware [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:38:57 compute-0 nova_compute[254092]: 2025-11-25 16:38:57.136 254096 DEBUG nova.virt.hardware [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:38:57 compute-0 nova_compute[254092]: 2025-11-25 16:38:57.137 254096 DEBUG nova.virt.hardware [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:38:57 compute-0 nova_compute[254092]: 2025-11-25 16:38:57.137 254096 DEBUG nova.virt.hardware [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:38:57 compute-0 nova_compute[254092]: 2025-11-25 16:38:57.137 254096 DEBUG nova.virt.hardware [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:38:57 compute-0 nova_compute[254092]: 2025-11-25 16:38:57.140 254096 DEBUG oslo_concurrency.processutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:38:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1586: 321 pgs: 321 active+clean; 180 MiB data, 530 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 5.1 MiB/s wr, 132 op/s
Nov 25 16:38:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:38:57 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3770989582' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:38:57 compute-0 nova_compute[254092]: 2025-11-25 16:38:57.617 254096 DEBUG oslo_concurrency.processutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:38:57 compute-0 nova_compute[254092]: 2025-11-25 16:38:57.642 254096 DEBUG nova.storage.rbd_utils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] rbd image 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:38:57 compute-0 nova_compute[254092]: 2025-11-25 16:38:57.647 254096 DEBUG oslo_concurrency.processutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:38:57 compute-0 nova_compute[254092]: 2025-11-25 16:38:57.882 254096 DEBUG oslo_concurrency.processutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.054 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:58 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3770989582' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:38:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:38:58 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3772629285' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.207 254096 DEBUG oslo_concurrency.processutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.208 254096 DEBUG nova.virt.libvirt.vif [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:38:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-540246934',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-540246934',id=54,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a92e4b86655441c59ead5a1bd83173e5',ramdisk_id='',reservation_id='r-yc4qlfex',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-1255379647',owner_user_name='tempest-AttachInterfacesV270Test-1255379647-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:38:51Z,user_data=None,user_id='96cb4b65d4074373a38534856574dc8f',uuid=97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a6f06f5d-486f-4039-a0cb-30b122e69258", "address": "fa:16:3e:76:39:37", "network": {"id": "d5808bee-5100-4cdf-b578-a1bc323dafe9", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-286790264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a92e4b86655441c59ead5a1bd83173e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6f06f5d-48", "ovs_interfaceid": "a6f06f5d-486f-4039-a0cb-30b122e69258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.209 254096 DEBUG nova.network.os_vif_util [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Converting VIF {"id": "a6f06f5d-486f-4039-a0cb-30b122e69258", "address": "fa:16:3e:76:39:37", "network": {"id": "d5808bee-5100-4cdf-b578-a1bc323dafe9", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-286790264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a92e4b86655441c59ead5a1bd83173e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6f06f5d-48", "ovs_interfaceid": "a6f06f5d-486f-4039-a0cb-30b122e69258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.210 254096 DEBUG nova.network.os_vif_util [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:39:37,bridge_name='br-int',has_traffic_filtering=True,id=a6f06f5d-486f-4039-a0cb-30b122e69258,network=Network(d5808bee-5100-4cdf-b578-a1bc323dafe9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6f06f5d-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.211 254096 DEBUG nova.objects.instance [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.225 254096 DEBUG nova.virt.libvirt.driver [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:38:58 compute-0 nova_compute[254092]:   <uuid>97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211</uuid>
Nov 25 16:38:58 compute-0 nova_compute[254092]:   <name>instance-00000036</name>
Nov 25 16:38:58 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:38:58 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:38:58 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:38:58 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:       <nova:name>tempest-AttachInterfacesV270Test-server-540246934</nova:name>
Nov 25 16:38:58 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:38:57</nova:creationTime>
Nov 25 16:38:58 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:38:58 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:38:58 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:38:58 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:38:58 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:38:58 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:38:58 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:38:58 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:38:58 compute-0 nova_compute[254092]:         <nova:user uuid="96cb4b65d4074373a38534856574dc8f">tempest-AttachInterfacesV270Test-1255379647-project-member</nova:user>
Nov 25 16:38:58 compute-0 nova_compute[254092]:         <nova:project uuid="a92e4b86655441c59ead5a1bd83173e5">tempest-AttachInterfacesV270Test-1255379647</nova:project>
Nov 25 16:38:58 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:38:58 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:38:58 compute-0 nova_compute[254092]:         <nova:port uuid="a6f06f5d-486f-4039-a0cb-30b122e69258">
Nov 25 16:38:58 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:38:58 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:38:58 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:38:58 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:38:58 compute-0 nova_compute[254092]:     <system>
Nov 25 16:38:58 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:38:58 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:38:58 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:38:58 compute-0 nova_compute[254092]:       <entry name="serial">97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211</entry>
Nov 25 16:38:58 compute-0 nova_compute[254092]:       <entry name="uuid">97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211</entry>
Nov 25 16:38:58 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     </system>
Nov 25 16:38:58 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:38:58 compute-0 nova_compute[254092]:   <os>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:   </os>
Nov 25 16:38:58 compute-0 nova_compute[254092]:   <features>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:   </features>
Nov 25 16:38:58 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:38:58 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:38:58 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:38:58 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:38:58 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:38:58 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211_disk">
Nov 25 16:38:58 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:       </source>
Nov 25 16:38:58 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:38:58 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:38:58 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:38:58 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211_disk.config">
Nov 25 16:38:58 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:       </source>
Nov 25 16:38:58 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:38:58 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:38:58 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:38:58 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:76:39:37"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:       <target dev="tapa6f06f5d-48"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:38:58 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211/console.log" append="off"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     <video>
Nov 25 16:38:58 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     </video>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:38:58 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:38:58 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:38:58 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:38:58 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:38:58 compute-0 nova_compute[254092]: </domain>
Nov 25 16:38:58 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.227 254096 DEBUG nova.compute.manager [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Preparing to wait for external event network-vif-plugged-a6f06f5d-486f-4039-a0cb-30b122e69258 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.227 254096 DEBUG oslo_concurrency.lockutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Acquiring lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.227 254096 DEBUG oslo_concurrency.lockutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.227 254096 DEBUG oslo_concurrency.lockutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.228 254096 DEBUG nova.virt.libvirt.vif [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:38:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-540246934',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-540246934',id=54,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a92e4b86655441c59ead5a1bd83173e5',ramdisk_id='',reservation_id='r-yc4qlfex',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-1255379647',owner_user_name='tempest-AttachInterfacesV270Test-1255379647-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:38:51Z,user_data=None,user_id='96cb4b65d4074373a38534856574dc8f',uuid=97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a6f06f5d-486f-4039-a0cb-30b122e69258", "address": "fa:16:3e:76:39:37", "network": {"id": "d5808bee-5100-4cdf-b578-a1bc323dafe9", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-286790264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a92e4b86655441c59ead5a1bd83173e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6f06f5d-48", "ovs_interfaceid": "a6f06f5d-486f-4039-a0cb-30b122e69258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.228 254096 DEBUG nova.network.os_vif_util [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Converting VIF {"id": "a6f06f5d-486f-4039-a0cb-30b122e69258", "address": "fa:16:3e:76:39:37", "network": {"id": "d5808bee-5100-4cdf-b578-a1bc323dafe9", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-286790264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a92e4b86655441c59ead5a1bd83173e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6f06f5d-48", "ovs_interfaceid": "a6f06f5d-486f-4039-a0cb-30b122e69258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.229 254096 DEBUG nova.network.os_vif_util [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:39:37,bridge_name='br-int',has_traffic_filtering=True,id=a6f06f5d-486f-4039-a0cb-30b122e69258,network=Network(d5808bee-5100-4cdf-b578-a1bc323dafe9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6f06f5d-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.229 254096 DEBUG os_vif [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:39:37,bridge_name='br-int',has_traffic_filtering=True,id=a6f06f5d-486f-4039-a0cb-30b122e69258,network=Network(d5808bee-5100-4cdf-b578-a1bc323dafe9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6f06f5d-48') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.230 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.230 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.230 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.233 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.234 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa6f06f5d-48, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.234 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa6f06f5d-48, col_values=(('external_ids', {'iface-id': 'a6f06f5d-486f-4039-a0cb-30b122e69258', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:76:39:37', 'vm-uuid': '97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.236 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:58 compute-0 NetworkManager[48891]: <info>  [1764088738.2375] manager: (tapa6f06f5d-48): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/210)
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.240 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.242 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.243 254096 INFO os_vif [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:39:37,bridge_name='br-int',has_traffic_filtering=True,id=a6f06f5d-486f-4039-a0cb-30b122e69258,network=Network(d5808bee-5100-4cdf-b578-a1bc323dafe9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6f06f5d-48')
Nov 25 16:38:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:38:58 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2567130199' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.333 254096 DEBUG oslo_concurrency.processutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.340 254096 DEBUG nova.compute.provider_tree [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.359 254096 DEBUG nova.scheduler.client.report [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.384 254096 DEBUG nova.virt.libvirt.driver [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.385 254096 DEBUG nova.virt.libvirt.driver [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.385 254096 DEBUG nova.virt.libvirt.driver [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] No VIF found with MAC fa:16:3e:76:39:37, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.385 254096 INFO nova.virt.libvirt.driver [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Using config drive
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.408 254096 DEBUG nova.storage.rbd_utils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] rbd image 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.414 254096 DEBUG oslo_concurrency.lockutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.338s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.415 254096 DEBUG nova.compute.manager [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.474 254096 DEBUG nova.compute.manager [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.475 254096 DEBUG nova.network.neutron [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.495 254096 INFO nova.virt.libvirt.driver [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.524 254096 DEBUG nova.compute.manager [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.630 254096 DEBUG nova.compute.manager [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.632 254096 DEBUG nova.virt.libvirt.driver [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.633 254096 INFO nova.virt.libvirt.driver [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Creating image(s)
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.651 254096 DEBUG nova.storage.rbd_utils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image e5947529-cfda-4753-94cd-b764da9d5c2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.673 254096 DEBUG nova.storage.rbd_utils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image e5947529-cfda-4753-94cd-b764da9d5c2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.695 254096 DEBUG nova.storage.rbd_utils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image e5947529-cfda-4753-94cd-b764da9d5c2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.699 254096 DEBUG oslo_concurrency.processutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.779 254096 DEBUG nova.policy [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '788efa7ba65347b69663f4ea7ba4cd9d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ff5d31ffc7934838a461d6ac8aa546c9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.783 254096 DEBUG oslo_concurrency.processutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.784 254096 DEBUG oslo_concurrency.lockutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.785 254096 DEBUG oslo_concurrency.lockutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.785 254096 DEBUG oslo_concurrency.lockutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.810 254096 DEBUG nova.storage.rbd_utils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image e5947529-cfda-4753-94cd-b764da9d5c2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:38:58 compute-0 nova_compute[254092]: 2025-11-25 16:38:58.816 254096 DEBUG oslo_concurrency.processutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 e5947529-cfda-4753-94cd-b764da9d5c2c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:38:59 compute-0 ceph-mon[74985]: pgmap v1586: 321 pgs: 321 active+clean; 180 MiB data, 530 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 5.1 MiB/s wr, 132 op/s
Nov 25 16:38:59 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3772629285' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:38:59 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2567130199' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:38:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e207 do_prune osdmap full prune enabled
Nov 25 16:38:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e208 e208: 3 total, 3 up, 3 in
Nov 25 16:38:59 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e208: 3 total, 3 up, 3 in
Nov 25 16:38:59 compute-0 nova_compute[254092]: 2025-11-25 16:38:59.282 254096 DEBUG oslo_concurrency.processutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 e5947529-cfda-4753-94cd-b764da9d5c2c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:38:59 compute-0 nova_compute[254092]: 2025-11-25 16:38:59.343 254096 DEBUG nova.storage.rbd_utils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] resizing rbd image e5947529-cfda-4753-94cd-b764da9d5c2c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:38:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1588: 321 pgs: 321 active+clean; 180 MiB data, 530 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 4.8 MiB/s wr, 125 op/s
Nov 25 16:38:59 compute-0 nova_compute[254092]: 2025-11-25 16:38:59.481 254096 DEBUG nova.objects.instance [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lazy-loading 'migration_context' on Instance uuid e5947529-cfda-4753-94cd-b764da9d5c2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:38:59 compute-0 nova_compute[254092]: 2025-11-25 16:38:59.494 254096 DEBUG nova.virt.libvirt.driver [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:38:59 compute-0 nova_compute[254092]: 2025-11-25 16:38:59.494 254096 DEBUG nova.virt.libvirt.driver [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Ensure instance console log exists: /var/lib/nova/instances/e5947529-cfda-4753-94cd-b764da9d5c2c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:38:59 compute-0 nova_compute[254092]: 2025-11-25 16:38:59.495 254096 DEBUG oslo_concurrency.lockutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:38:59 compute-0 nova_compute[254092]: 2025-11-25 16:38:59.495 254096 DEBUG oslo_concurrency.lockutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:38:59 compute-0 nova_compute[254092]: 2025-11-25 16:38:59.496 254096 DEBUG oslo_concurrency.lockutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:38:59 compute-0 nova_compute[254092]: 2025-11-25 16:38:59.588 254096 INFO nova.virt.libvirt.driver [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Creating config drive at /var/lib/nova/instances/97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211/disk.config
Nov 25 16:38:59 compute-0 nova_compute[254092]: 2025-11-25 16:38:59.594 254096 DEBUG oslo_concurrency.processutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw3iiwimi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:38:59 compute-0 nova_compute[254092]: 2025-11-25 16:38:59.755 254096 DEBUG oslo_concurrency.processutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw3iiwimi" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:38:59 compute-0 nova_compute[254092]: 2025-11-25 16:38:59.789 254096 DEBUG nova.storage.rbd_utils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] rbd image 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:38:59 compute-0 nova_compute[254092]: 2025-11-25 16:38:59.794 254096 DEBUG oslo_concurrency.processutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211/disk.config 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:38:59 compute-0 nova_compute[254092]: 2025-11-25 16:38:59.976 254096 DEBUG oslo_concurrency.processutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211/disk.config 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:38:59 compute-0 nova_compute[254092]: 2025-11-25 16:38:59.977 254096 INFO nova.virt.libvirt.driver [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Deleting local config drive /var/lib/nova/instances/97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211/disk.config because it was imported into RBD.
Nov 25 16:39:00 compute-0 kernel: tapa6f06f5d-48: entered promiscuous mode
Nov 25 16:39:00 compute-0 NetworkManager[48891]: <info>  [1764088740.0262] manager: (tapa6f06f5d-48): new Tun device (/org/freedesktop/NetworkManager/Devices/211)
Nov 25 16:39:00 compute-0 nova_compute[254092]: 2025-11-25 16:39:00.030 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:00 compute-0 ovn_controller[153477]: 2025-11-25T16:39:00Z|00486|binding|INFO|Claiming lport a6f06f5d-486f-4039-a0cb-30b122e69258 for this chassis.
Nov 25 16:39:00 compute-0 ovn_controller[153477]: 2025-11-25T16:39:00Z|00487|binding|INFO|a6f06f5d-486f-4039-a0cb-30b122e69258: Claiming fa:16:3e:76:39:37 10.100.0.6
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.046 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:39:37 10.100.0.6'], port_security=['fa:16:3e:76:39:37 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d5808bee-5100-4cdf-b578-a1bc323dafe9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a92e4b86655441c59ead5a1bd83173e5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f22d4d9b-3d2b-4a51-a756-13e1e296ee57', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d18acd19-221e-4d09-97bf-0871da349301, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=a6f06f5d-486f-4039-a0cb-30b122e69258) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.049 163338 INFO neutron.agent.ovn.metadata.agent [-] Port a6f06f5d-486f-4039-a0cb-30b122e69258 in datapath d5808bee-5100-4cdf-b578-a1bc323dafe9 bound to our chassis
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.052 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d5808bee-5100-4cdf-b578-a1bc323dafe9
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.066 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ca139968-8e23-4c15-8cd3-2a5fdc5765c3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.067 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd5808bee-51 in ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.073 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd5808bee-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.073 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[409c4890-bb7c-4cee-9c78-f95540fb364a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:00 compute-0 systemd-machined[216343]: New machine qemu-63-instance-00000036.
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.074 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cb878d25-6c38-420e-b7be-fa9d1d942691]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:00 compute-0 systemd-udevd[315149]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:39:00 compute-0 systemd[1]: Started Virtual Machine qemu-63-instance-00000036.
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.091 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[9196601d-c37b-4eec-beee-f93b128a81d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:00 compute-0 NetworkManager[48891]: <info>  [1764088740.0962] device (tapa6f06f5d-48): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:39:00 compute-0 NetworkManager[48891]: <info>  [1764088740.0975] device (tapa6f06f5d-48): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:39:00 compute-0 ovn_controller[153477]: 2025-11-25T16:39:00Z|00488|binding|INFO|Setting lport a6f06f5d-486f-4039-a0cb-30b122e69258 ovn-installed in OVS
Nov 25 16:39:00 compute-0 ovn_controller[153477]: 2025-11-25T16:39:00Z|00489|binding|INFO|Setting lport a6f06f5d-486f-4039-a0cb-30b122e69258 up in Southbound
Nov 25 16:39:00 compute-0 nova_compute[254092]: 2025-11-25 16:39:00.111 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.113 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5125b11f-0a3d-4e1f-abe7-352f8e8eacb5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.146 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9d3b09e7-bda3-4dc7-9371-fb33cf3219d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:00 compute-0 NetworkManager[48891]: <info>  [1764088740.1535] manager: (tapd5808bee-50): new Veth device (/org/freedesktop/NetworkManager/Devices/212)
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.152 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[07519d69-19ba-4136-8af9-f1f0511cce95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.197 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9e1bf113-ead5-44d4-98b0-d3c5029c3b11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.202 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[173ae467-0e75-4422-a502-0651075662bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:00 compute-0 NetworkManager[48891]: <info>  [1764088740.2298] device (tapd5808bee-50): carrier: link connected
Nov 25 16:39:00 compute-0 ceph-mon[74985]: osdmap e208: 3 total, 3 up, 3 in
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.237 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9a50ce31-7635-497c-9ede-22df7eee4b9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:00 compute-0 ceph-mon[74985]: pgmap v1588: 321 pgs: 321 active+clean; 180 MiB data, 530 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 4.8 MiB/s wr, 125 op/s
Nov 25 16:39:00 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Nov 25 16:39:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:39:00.250027) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 16:39:00 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Nov 25 16:39:00 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088740250113, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2219, "num_deletes": 260, "total_data_size": 3336185, "memory_usage": 3378272, "flush_reason": "Manual Compaction"}
Nov 25 16:39:00 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.262 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b74a44f1-ac8f-4a5a-a614-ab075d2cb05b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd5808bee-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:9e:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 140], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519782, 'reachable_time': 24363, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315181, 'error': None, 'target': 'ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:00 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088740274834, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 3274719, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30737, "largest_seqno": 32955, "table_properties": {"data_size": 3264596, "index_size": 6489, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 21317, "raw_average_key_size": 20, "raw_value_size": 3244167, "raw_average_value_size": 3177, "num_data_blocks": 283, "num_entries": 1021, "num_filter_entries": 1021, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764088551, "oldest_key_time": 1764088551, "file_creation_time": 1764088740, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:39:00 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 24996 microseconds, and 7341 cpu microseconds.
Nov 25 16:39:00 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:39:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:39:00.275035) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 3274719 bytes OK
Nov 25 16:39:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:39:00.275099) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Nov 25 16:39:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:39:00.276774) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Nov 25 16:39:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:39:00.276792) EVENT_LOG_v1 {"time_micros": 1764088740276786, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 16:39:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:39:00.276819) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 16:39:00 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 3326752, prev total WAL file size 3326752, number of live WAL files 2.
Nov 25 16:39:00 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:39:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:39:00.278179) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Nov 25 16:39:00 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 16:39:00 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(3197KB)], [68(6823KB)]
Nov 25 16:39:00 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088740278258, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 10262462, "oldest_snapshot_seqno": -1}
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.281 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ad733870-06ad-475f-84f5-13d155f33b87]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe98:9ed0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 519782, 'tstamp': 519782}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315182, 'error': None, 'target': 'ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.307 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6ffcf78a-f52a-4079-ba20-29500048b89d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd5808bee-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:9e:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 140], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519782, 'reachable_time': 24363, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 315183, 'error': None, 'target': 'ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:00 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 5816 keys, 8664787 bytes, temperature: kUnknown
Nov 25 16:39:00 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088740333481, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 8664787, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8625423, "index_size": 23702, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14597, "raw_key_size": 146457, "raw_average_key_size": 25, "raw_value_size": 8520436, "raw_average_value_size": 1464, "num_data_blocks": 964, "num_entries": 5816, "num_filter_entries": 5816, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764088740, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:39:00 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:39:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:39:00.333755) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 8664787 bytes
Nov 25 16:39:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:39:00.334874) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 185.6 rd, 156.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 6.7 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(5.8) write-amplify(2.6) OK, records in: 6345, records dropped: 529 output_compression: NoCompression
Nov 25 16:39:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:39:00.334890) EVENT_LOG_v1 {"time_micros": 1764088740334881, "job": 38, "event": "compaction_finished", "compaction_time_micros": 55293, "compaction_time_cpu_micros": 21055, "output_level": 6, "num_output_files": 1, "total_output_size": 8664787, "num_input_records": 6345, "num_output_records": 5816, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 16:39:00 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:39:00 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088740335413, "job": 38, "event": "table_file_deletion", "file_number": 70}
Nov 25 16:39:00 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:39:00 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088740336399, "job": 38, "event": "table_file_deletion", "file_number": 68}
Nov 25 16:39:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:39:00.278046) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:39:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:39:00.336493) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:39:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:39:00.336501) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:39:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:39:00.336503) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:39:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:39:00.336505) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:39:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:39:00.336507) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.336 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c3a97ef4-bdd0-4137-b416-c83de9a896f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.400 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[043bdb7f-8c57-4b8c-ba46-238a98fed43d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.402 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5808bee-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.402 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.402 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd5808bee-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:39:00 compute-0 nova_compute[254092]: 2025-11-25 16:39:00.404 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:00 compute-0 kernel: tapd5808bee-50: entered promiscuous mode
Nov 25 16:39:00 compute-0 NetworkManager[48891]: <info>  [1764088740.4051] manager: (tapd5808bee-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/213)
Nov 25 16:39:00 compute-0 nova_compute[254092]: 2025-11-25 16:39:00.407 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.407 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd5808bee-50, col_values=(('external_ids', {'iface-id': 'ab0217e4-1718-4cef-9483-cef43176b686'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:39:00 compute-0 nova_compute[254092]: 2025-11-25 16:39:00.408 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:00 compute-0 ovn_controller[153477]: 2025-11-25T16:39:00Z|00490|binding|INFO|Releasing lport ab0217e4-1718-4cef-9483-cef43176b686 from this chassis (sb_readonly=0)
Nov 25 16:39:00 compute-0 nova_compute[254092]: 2025-11-25 16:39:00.431 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.431 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d5808bee-5100-4cdf-b578-a1bc323dafe9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d5808bee-5100-4cdf-b578-a1bc323dafe9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.432 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d42f6565-bad9-4eba-aef1-7a38e500c7b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.433 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-d5808bee-5100-4cdf-b578-a1bc323dafe9
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/d5808bee-5100-4cdf-b578-a1bc323dafe9.pid.haproxy
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID d5808bee-5100-4cdf-b578-a1bc323dafe9
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:39:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:00.433 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9', 'env', 'PROCESS_TAG=haproxy-d5808bee-5100-4cdf-b578-a1bc323dafe9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d5808bee-5100-4cdf-b578-a1bc323dafe9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:39:00 compute-0 nova_compute[254092]: 2025-11-25 16:39:00.437 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:39:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e208 do_prune osdmap full prune enabled
Nov 25 16:39:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e209 e209: 3 total, 3 up, 3 in
Nov 25 16:39:00 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e209: 3 total, 3 up, 3 in
Nov 25 16:39:00 compute-0 ovn_controller[153477]: 2025-11-25T16:39:00Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d3:ad:75 10.100.0.9
Nov 25 16:39:00 compute-0 ovn_controller[153477]: 2025-11-25T16:39:00Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d3:ad:75 10.100.0.9
Nov 25 16:39:00 compute-0 nova_compute[254092]: 2025-11-25 16:39:00.589 254096 DEBUG nova.network.neutron [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Successfully created port: aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:39:00 compute-0 nova_compute[254092]: 2025-11-25 16:39:00.634 254096 DEBUG nova.compute.manager [req-0116ce20-6c1b-4168-b6f5-0fceaa533f4a req-75290997-6293-4729-b3ff-80bebf0298d6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Received event network-vif-plugged-a6f06f5d-486f-4039-a0cb-30b122e69258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:39:00 compute-0 nova_compute[254092]: 2025-11-25 16:39:00.635 254096 DEBUG oslo_concurrency.lockutils [req-0116ce20-6c1b-4168-b6f5-0fceaa533f4a req-75290997-6293-4729-b3ff-80bebf0298d6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:39:00 compute-0 nova_compute[254092]: 2025-11-25 16:39:00.635 254096 DEBUG oslo_concurrency.lockutils [req-0116ce20-6c1b-4168-b6f5-0fceaa533f4a req-75290997-6293-4729-b3ff-80bebf0298d6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:39:00 compute-0 nova_compute[254092]: 2025-11-25 16:39:00.635 254096 DEBUG oslo_concurrency.lockutils [req-0116ce20-6c1b-4168-b6f5-0fceaa533f4a req-75290997-6293-4729-b3ff-80bebf0298d6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:00 compute-0 nova_compute[254092]: 2025-11-25 16:39:00.635 254096 DEBUG nova.compute.manager [req-0116ce20-6c1b-4168-b6f5-0fceaa533f4a req-75290997-6293-4729-b3ff-80bebf0298d6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Processing event network-vif-plugged-a6f06f5d-486f-4039-a0cb-30b122e69258 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:39:00 compute-0 nova_compute[254092]: 2025-11-25 16:39:00.673 254096 DEBUG nova.compute.manager [None req-ab02db88-93f5-49fa-8291-59386d4b3955 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:39:00 compute-0 nova_compute[254092]: 2025-11-25 16:39:00.720 254096 INFO nova.compute.manager [None req-ab02db88-93f5-49fa-8291-59386d4b3955 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] instance snapshotting
Nov 25 16:39:00 compute-0 podman[315215]: 2025-11-25 16:39:00.842529299 +0000 UTC m=+0.064816920 container create 8d0d55c069856f731cd8f8155c451ddd5465869f50fd985fd9c7bab54624cded (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 16:39:00 compute-0 systemd[1]: Started libpod-conmon-8d0d55c069856f731cd8f8155c451ddd5465869f50fd985fd9c7bab54624cded.scope.
Nov 25 16:39:00 compute-0 podman[315215]: 2025-11-25 16:39:00.808602738 +0000 UTC m=+0.030890359 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:39:00 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:39:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14c248a764d5721c6e561451525ee03f5d19e082815bceb4132727d099f101a8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:39:00 compute-0 podman[315215]: 2025-11-25 16:39:00.942012938 +0000 UTC m=+0.164300559 container init 8d0d55c069856f731cd8f8155c451ddd5465869f50fd985fd9c7bab54624cded (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 16:39:00 compute-0 podman[315215]: 2025-11-25 16:39:00.948662278 +0000 UTC m=+0.170949909 container start 8d0d55c069856f731cd8f8155c451ddd5465869f50fd985fd9c7bab54624cded (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 16:39:00 compute-0 neutron-haproxy-ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9[315230]: [NOTICE]   (315235) : New worker (315237) forked
Nov 25 16:39:00 compute-0 neutron-haproxy-ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9[315230]: [NOTICE]   (315235) : Loading success.
Nov 25 16:39:00 compute-0 nova_compute[254092]: 2025-11-25 16:39:00.984 254096 INFO nova.virt.libvirt.driver [None req-ab02db88-93f5-49fa-8291-59386d4b3955 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Beginning live snapshot process
Nov 25 16:39:01 compute-0 nova_compute[254092]: 2025-11-25 16:39:01.147 254096 DEBUG nova.network.neutron [req-e0b1cd8e-a818-4e06-8a4f-1084d827b852 req-18c16735-e400-4919-a5a5-8c0c93ee487d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Updated VIF entry in instance network info cache for port a6f06f5d-486f-4039-a0cb-30b122e69258. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:39:01 compute-0 nova_compute[254092]: 2025-11-25 16:39:01.148 254096 DEBUG nova.network.neutron [req-e0b1cd8e-a818-4e06-8a4f-1084d827b852 req-18c16735-e400-4919-a5a5-8c0c93ee487d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Updating instance_info_cache with network_info: [{"id": "a6f06f5d-486f-4039-a0cb-30b122e69258", "address": "fa:16:3e:76:39:37", "network": {"id": "d5808bee-5100-4cdf-b578-a1bc323dafe9", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-286790264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a92e4b86655441c59ead5a1bd83173e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6f06f5d-48", "ovs_interfaceid": "a6f06f5d-486f-4039-a0cb-30b122e69258", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:39:01 compute-0 nova_compute[254092]: 2025-11-25 16:39:01.154 254096 DEBUG nova.virt.libvirt.imagebackend [None req-ab02db88-93f5-49fa-8291-59386d4b3955 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] No parent info for 8b512c8e-2281-41de-a668-eb983e174ba0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 25 16:39:01 compute-0 nova_compute[254092]: 2025-11-25 16:39:01.161 254096 DEBUG oslo_concurrency.lockutils [req-e0b1cd8e-a818-4e06-8a4f-1084d827b852 req-18c16735-e400-4919-a5a5-8c0c93ee487d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:39:01 compute-0 nova_compute[254092]: 2025-11-25 16:39:01.309 254096 DEBUG nova.storage.rbd_utils [None req-ab02db88-93f5-49fa-8291-59386d4b3955 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] creating snapshot(1e8da10f0cb94a5f915a173d7d11e28b) on rbd image(52c13ebd-df79-43b9-8d5f-e4bf4a2e0738_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 16:39:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e209 do_prune osdmap full prune enabled
Nov 25 16:39:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e210 e210: 3 total, 3 up, 3 in
Nov 25 16:39:01 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e210: 3 total, 3 up, 3 in
Nov 25 16:39:01 compute-0 ceph-mon[74985]: osdmap e209: 3 total, 3 up, 3 in
Nov 25 16:39:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1591: 321 pgs: 321 active+clean; 213 MiB data, 580 MiB used, 59 GiB / 60 GiB avail; 771 KiB/s rd, 10 MiB/s wr, 300 op/s
Nov 25 16:39:01 compute-0 nova_compute[254092]: 2025-11-25 16:39:01.540 254096 DEBUG nova.storage.rbd_utils [None req-ab02db88-93f5-49fa-8291-59386d4b3955 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] cloning vms/52c13ebd-df79-43b9-8d5f-e4bf4a2e0738_disk@1e8da10f0cb94a5f915a173d7d11e28b to images/0216513c-fd2f-4f07-aa1e-cd470e84e4a4 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 16:39:01 compute-0 nova_compute[254092]: 2025-11-25 16:39:01.734 254096 DEBUG nova.storage.rbd_utils [None req-ab02db88-93f5-49fa-8291-59386d4b3955 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] flattening images/0216513c-fd2f-4f07-aa1e-cd470e84e4a4 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 16:39:01 compute-0 nova_compute[254092]: 2025-11-25 16:39:01.798 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088741.7631524, 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:39:01 compute-0 nova_compute[254092]: 2025-11-25 16:39:01.798 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] VM Started (Lifecycle Event)
Nov 25 16:39:01 compute-0 nova_compute[254092]: 2025-11-25 16:39:01.800 254096 DEBUG nova.compute.manager [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:39:01 compute-0 nova_compute[254092]: 2025-11-25 16:39:01.803 254096 DEBUG nova.virt.libvirt.driver [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:39:01 compute-0 nova_compute[254092]: 2025-11-25 16:39:01.806 254096 INFO nova.virt.libvirt.driver [-] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Instance spawned successfully.
Nov 25 16:39:01 compute-0 nova_compute[254092]: 2025-11-25 16:39:01.807 254096 DEBUG nova.virt.libvirt.driver [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:39:01 compute-0 nova_compute[254092]: 2025-11-25 16:39:01.830 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:39:01 compute-0 nova_compute[254092]: 2025-11-25 16:39:01.836 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:39:01 compute-0 nova_compute[254092]: 2025-11-25 16:39:01.839 254096 DEBUG nova.virt.libvirt.driver [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:39:01 compute-0 nova_compute[254092]: 2025-11-25 16:39:01.839 254096 DEBUG nova.virt.libvirt.driver [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:39:01 compute-0 nova_compute[254092]: 2025-11-25 16:39:01.840 254096 DEBUG nova.virt.libvirt.driver [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:39:01 compute-0 nova_compute[254092]: 2025-11-25 16:39:01.840 254096 DEBUG nova.virt.libvirt.driver [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:39:01 compute-0 nova_compute[254092]: 2025-11-25 16:39:01.841 254096 DEBUG nova.virt.libvirt.driver [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:39:01 compute-0 nova_compute[254092]: 2025-11-25 16:39:01.841 254096 DEBUG nova.virt.libvirt.driver [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:39:01 compute-0 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Nov 25 16:39:01 compute-0 nova_compute[254092]: 2025-11-25 16:39:01.875 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:39:01 compute-0 nova_compute[254092]: 2025-11-25 16:39:01.876 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088741.763254, 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:39:01 compute-0 nova_compute[254092]: 2025-11-25 16:39:01.876 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] VM Paused (Lifecycle Event)
Nov 25 16:39:01 compute-0 nova_compute[254092]: 2025-11-25 16:39:01.904 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:39:01 compute-0 nova_compute[254092]: 2025-11-25 16:39:01.909 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088741.8031816, 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:39:01 compute-0 nova_compute[254092]: 2025-11-25 16:39:01.910 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] VM Resumed (Lifecycle Event)
Nov 25 16:39:01 compute-0 nova_compute[254092]: 2025-11-25 16:39:01.934 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:39:01 compute-0 nova_compute[254092]: 2025-11-25 16:39:01.938 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:39:01 compute-0 nova_compute[254092]: 2025-11-25 16:39:01.963 254096 INFO nova.compute.manager [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Took 9.98 seconds to spawn the instance on the hypervisor.
Nov 25 16:39:01 compute-0 nova_compute[254092]: 2025-11-25 16:39:01.964 254096 DEBUG nova.compute.manager [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:39:01 compute-0 nova_compute[254092]: 2025-11-25 16:39:01.965 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:39:02 compute-0 nova_compute[254092]: 2025-11-25 16:39:02.077 254096 INFO nova.compute.manager [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Took 11.04 seconds to build instance.
Nov 25 16:39:02 compute-0 nova_compute[254092]: 2025-11-25 16:39:02.135 254096 DEBUG oslo_concurrency.lockutils [None req-303706b9-6a6c-43e9-a60c-58ad4ce5265b 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.212s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:02 compute-0 nova_compute[254092]: 2025-11-25 16:39:02.524 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:39:02 compute-0 ceph-mon[74985]: osdmap e210: 3 total, 3 up, 3 in
Nov 25 16:39:02 compute-0 ceph-mon[74985]: pgmap v1591: 321 pgs: 321 active+clean; 213 MiB data, 580 MiB used, 59 GiB / 60 GiB avail; 771 KiB/s rd, 10 MiB/s wr, 300 op/s
Nov 25 16:39:02 compute-0 nova_compute[254092]: 2025-11-25 16:39:02.691 254096 DEBUG nova.storage.rbd_utils [None req-ab02db88-93f5-49fa-8291-59386d4b3955 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] removing snapshot(1e8da10f0cb94a5f915a173d7d11e28b) on rbd image(52c13ebd-df79-43b9-8d5f-e4bf4a2e0738_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 16:39:02 compute-0 nova_compute[254092]: 2025-11-25 16:39:02.877 254096 DEBUG nova.compute.manager [req-9d93a66c-4246-4032-80fd-b188a23750c2 req-49043beb-d370-48a3-bd13-f286232cb927 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Received event network-vif-plugged-a6f06f5d-486f-4039-a0cb-30b122e69258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:39:02 compute-0 nova_compute[254092]: 2025-11-25 16:39:02.877 254096 DEBUG oslo_concurrency.lockutils [req-9d93a66c-4246-4032-80fd-b188a23750c2 req-49043beb-d370-48a3-bd13-f286232cb927 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:39:02 compute-0 nova_compute[254092]: 2025-11-25 16:39:02.877 254096 DEBUG oslo_concurrency.lockutils [req-9d93a66c-4246-4032-80fd-b188a23750c2 req-49043beb-d370-48a3-bd13-f286232cb927 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:39:02 compute-0 nova_compute[254092]: 2025-11-25 16:39:02.877 254096 DEBUG oslo_concurrency.lockutils [req-9d93a66c-4246-4032-80fd-b188a23750c2 req-49043beb-d370-48a3-bd13-f286232cb927 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:02 compute-0 nova_compute[254092]: 2025-11-25 16:39:02.877 254096 DEBUG nova.compute.manager [req-9d93a66c-4246-4032-80fd-b188a23750c2 req-49043beb-d370-48a3-bd13-f286232cb927 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] No waiting events found dispatching network-vif-plugged-a6f06f5d-486f-4039-a0cb-30b122e69258 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:39:02 compute-0 nova_compute[254092]: 2025-11-25 16:39:02.878 254096 WARNING nova.compute.manager [req-9d93a66c-4246-4032-80fd-b188a23750c2 req-49043beb-d370-48a3-bd13-f286232cb927 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Received unexpected event network-vif-plugged-a6f06f5d-486f-4039-a0cb-30b122e69258 for instance with vm_state active and task_state None.
Nov 25 16:39:02 compute-0 nova_compute[254092]: 2025-11-25 16:39:02.915 254096 DEBUG nova.network.neutron [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Successfully updated port: aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:39:02 compute-0 nova_compute[254092]: 2025-11-25 16:39:02.936 254096 DEBUG oslo_concurrency.lockutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "refresh_cache-e5947529-cfda-4753-94cd-b764da9d5c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:39:02 compute-0 nova_compute[254092]: 2025-11-25 16:39:02.936 254096 DEBUG oslo_concurrency.lockutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquired lock "refresh_cache-e5947529-cfda-4753-94cd-b764da9d5c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:39:02 compute-0 nova_compute[254092]: 2025-11-25 16:39:02.936 254096 DEBUG nova.network.neutron [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:39:03 compute-0 nova_compute[254092]: 2025-11-25 16:39:03.056 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:03 compute-0 nova_compute[254092]: 2025-11-25 16:39:03.237 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:03 compute-0 nova_compute[254092]: 2025-11-25 16:39:03.307 254096 DEBUG nova.network.neutron [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:39:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1592: 321 pgs: 321 active+clean; 213 MiB data, 580 MiB used, 59 GiB / 60 GiB avail; 704 KiB/s rd, 7.7 MiB/s wr, 233 op/s
Nov 25 16:39:03 compute-0 nova_compute[254092]: 2025-11-25 16:39:03.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:39:03 compute-0 nova_compute[254092]: 2025-11-25 16:39:03.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:39:03 compute-0 nova_compute[254092]: 2025-11-25 16:39:03.516 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:39:03 compute-0 nova_compute[254092]: 2025-11-25 16:39:03.517 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:39:03 compute-0 nova_compute[254092]: 2025-11-25 16:39:03.517 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:03 compute-0 nova_compute[254092]: 2025-11-25 16:39:03.517 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 16:39:03 compute-0 nova_compute[254092]: 2025-11-25 16:39:03.517 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:39:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e210 do_prune osdmap full prune enabled
Nov 25 16:39:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e211 e211: 3 total, 3 up, 3 in
Nov 25 16:39:03 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e211: 3 total, 3 up, 3 in
Nov 25 16:39:03 compute-0 nova_compute[254092]: 2025-11-25 16:39:03.966 254096 DEBUG nova.storage.rbd_utils [None req-ab02db88-93f5-49fa-8291-59386d4b3955 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] creating snapshot(snap) on rbd image(0216513c-fd2f-4f07-aa1e-cd470e84e4a4) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 16:39:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:39:03 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2532262061' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:39:04 compute-0 nova_compute[254092]: 2025-11-25 16:39:04.128 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:39:04 compute-0 nova_compute[254092]: 2025-11-25 16:39:04.214 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:39:04 compute-0 nova_compute[254092]: 2025-11-25 16:39:04.215 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:39:04 compute-0 nova_compute[254092]: 2025-11-25 16:39:04.219 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000036 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:39:04 compute-0 nova_compute[254092]: 2025-11-25 16:39:04.219 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000036 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:39:04 compute-0 nova_compute[254092]: 2025-11-25 16:39:04.392 254096 DEBUG nova.network.neutron [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Updating instance_info_cache with network_info: [{"id": "aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2", "address": "fa:16:3e:38:a9:78", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaabf40d8-e3", "ovs_interfaceid": "aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:39:04 compute-0 nova_compute[254092]: 2025-11-25 16:39:04.446 254096 DEBUG oslo_concurrency.lockutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Releasing lock "refresh_cache-e5947529-cfda-4753-94cd-b764da9d5c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:39:04 compute-0 nova_compute[254092]: 2025-11-25 16:39:04.447 254096 DEBUG nova.compute.manager [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Instance network_info: |[{"id": "aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2", "address": "fa:16:3e:38:a9:78", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaabf40d8-e3", "ovs_interfaceid": "aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:39:04 compute-0 nova_compute[254092]: 2025-11-25 16:39:04.449 254096 DEBUG nova.virt.libvirt.driver [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Start _get_guest_xml network_info=[{"id": "aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2", "address": "fa:16:3e:38:a9:78", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaabf40d8-e3", "ovs_interfaceid": "aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:39:04 compute-0 nova_compute[254092]: 2025-11-25 16:39:04.453 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:39:04 compute-0 nova_compute[254092]: 2025-11-25 16:39:04.454 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3793MB free_disk=59.90127944946289GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 16:39:04 compute-0 nova_compute[254092]: 2025-11-25 16:39:04.454 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:39:04 compute-0 nova_compute[254092]: 2025-11-25 16:39:04.455 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:39:04 compute-0 nova_compute[254092]: 2025-11-25 16:39:04.456 254096 WARNING nova.virt.libvirt.driver [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:39:04 compute-0 nova_compute[254092]: 2025-11-25 16:39:04.460 254096 DEBUG nova.virt.libvirt.host [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:39:04 compute-0 nova_compute[254092]: 2025-11-25 16:39:04.461 254096 DEBUG nova.virt.libvirt.host [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:39:04 compute-0 nova_compute[254092]: 2025-11-25 16:39:04.463 254096 DEBUG nova.virt.libvirt.host [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:39:04 compute-0 nova_compute[254092]: 2025-11-25 16:39:04.463 254096 DEBUG nova.virt.libvirt.host [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:39:04 compute-0 nova_compute[254092]: 2025-11-25 16:39:04.464 254096 DEBUG nova.virt.libvirt.driver [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:39:04 compute-0 nova_compute[254092]: 2025-11-25 16:39:04.464 254096 DEBUG nova.virt.hardware [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:39:04 compute-0 nova_compute[254092]: 2025-11-25 16:39:04.464 254096 DEBUG nova.virt.hardware [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:39:04 compute-0 nova_compute[254092]: 2025-11-25 16:39:04.465 254096 DEBUG nova.virt.hardware [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:39:04 compute-0 nova_compute[254092]: 2025-11-25 16:39:04.465 254096 DEBUG nova.virt.hardware [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:39:04 compute-0 nova_compute[254092]: 2025-11-25 16:39:04.465 254096 DEBUG nova.virt.hardware [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:39:04 compute-0 nova_compute[254092]: 2025-11-25 16:39:04.465 254096 DEBUG nova.virt.hardware [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:39:04 compute-0 nova_compute[254092]: 2025-11-25 16:39:04.465 254096 DEBUG nova.virt.hardware [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:39:04 compute-0 nova_compute[254092]: 2025-11-25 16:39:04.466 254096 DEBUG nova.virt.hardware [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:39:04 compute-0 nova_compute[254092]: 2025-11-25 16:39:04.466 254096 DEBUG nova.virt.hardware [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:39:04 compute-0 nova_compute[254092]: 2025-11-25 16:39:04.466 254096 DEBUG nova.virt.hardware [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:39:04 compute-0 nova_compute[254092]: 2025-11-25 16:39:04.466 254096 DEBUG nova.virt.hardware [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:39:04 compute-0 nova_compute[254092]: 2025-11-25 16:39:04.468 254096 DEBUG oslo_concurrency.processutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:39:04 compute-0 ceph-mon[74985]: pgmap v1592: 321 pgs: 321 active+clean; 213 MiB data, 580 MiB used, 59 GiB / 60 GiB avail; 704 KiB/s rd, 7.7 MiB/s wr, 233 op/s
Nov 25 16:39:04 compute-0 ceph-mon[74985]: osdmap e211: 3 total, 3 up, 3 in
Nov 25 16:39:04 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2532262061' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:39:04 compute-0 nova_compute[254092]: 2025-11-25 16:39:04.553 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:39:04 compute-0 nova_compute[254092]: 2025-11-25 16:39:04.553 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:39:04 compute-0 nova_compute[254092]: 2025-11-25 16:39:04.554 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance e5947529-cfda-4753-94cd-b764da9d5c2c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:39:04 compute-0 nova_compute[254092]: 2025-11-25 16:39:04.554 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 16:39:04 compute-0 nova_compute[254092]: 2025-11-25 16:39:04.554 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 16:39:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e211 do_prune osdmap full prune enabled
Nov 25 16:39:04 compute-0 nova_compute[254092]: 2025-11-25 16:39:04.574 254096 DEBUG oslo_concurrency.lockutils [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Acquiring lock "interface-97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:39:04 compute-0 nova_compute[254092]: 2025-11-25 16:39:04.575 254096 DEBUG oslo_concurrency.lockutils [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Lock "interface-97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:39:04 compute-0 nova_compute[254092]: 2025-11-25 16:39:04.575 254096 DEBUG nova.objects.instance [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Lazy-loading 'flavor' on Instance uuid 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:39:04 compute-0 nova_compute[254092]: 2025-11-25 16:39:04.612 254096 DEBUG nova.objects.instance [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Lazy-loading 'pci_requests' on Instance uuid 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:39:04 compute-0 nova_compute[254092]: 2025-11-25 16:39:04.620 254096 DEBUG nova.network.neutron [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:39:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e212 e212: 3 total, 3 up, 3 in
Nov 25 16:39:04 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e212: 3 total, 3 up, 3 in
Nov 25 16:39:04 compute-0 nova_compute[254092]: 2025-11-25 16:39:04.641 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:39:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:39:04 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2091903382' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:39:04 compute-0 nova_compute[254092]: 2025-11-25 16:39:04.934 254096 DEBUG oslo_concurrency.processutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:39:04 compute-0 nova_compute[254092]: 2025-11-25 16:39:04.955 254096 DEBUG nova.storage.rbd_utils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image e5947529-cfda-4753-94cd-b764da9d5c2c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:39:04 compute-0 nova_compute[254092]: 2025-11-25 16:39:04.958 254096 DEBUG oslo_concurrency.processutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:39:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:39:05 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3205301590' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:39:05 compute-0 nova_compute[254092]: 2025-11-25 16:39:05.109 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:39:05 compute-0 nova_compute[254092]: 2025-11-25 16:39:05.124 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:39:05 compute-0 nova_compute[254092]: 2025-11-25 16:39:05.132 254096 DEBUG nova.compute.manager [req-8028a58c-c2fe-47df-b06a-bb53dbbbbf88 req-df6cee12-ae24-44d1-ad00-96f89987140f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Received event network-changed-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:39:05 compute-0 nova_compute[254092]: 2025-11-25 16:39:05.132 254096 DEBUG nova.compute.manager [req-8028a58c-c2fe-47df-b06a-bb53dbbbbf88 req-df6cee12-ae24-44d1-ad00-96f89987140f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Refreshing instance network info cache due to event network-changed-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:39:05 compute-0 nova_compute[254092]: 2025-11-25 16:39:05.132 254096 DEBUG oslo_concurrency.lockutils [req-8028a58c-c2fe-47df-b06a-bb53dbbbbf88 req-df6cee12-ae24-44d1-ad00-96f89987140f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-e5947529-cfda-4753-94cd-b764da9d5c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:39:05 compute-0 nova_compute[254092]: 2025-11-25 16:39:05.132 254096 DEBUG oslo_concurrency.lockutils [req-8028a58c-c2fe-47df-b06a-bb53dbbbbf88 req-df6cee12-ae24-44d1-ad00-96f89987140f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-e5947529-cfda-4753-94cd-b764da9d5c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:39:05 compute-0 nova_compute[254092]: 2025-11-25 16:39:05.133 254096 DEBUG nova.network.neutron [req-8028a58c-c2fe-47df-b06a-bb53dbbbbf88 req-df6cee12-ae24-44d1-ad00-96f89987140f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Refreshing network info cache for port aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:39:05 compute-0 nova_compute[254092]: 2025-11-25 16:39:05.146 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:39:05 compute-0 nova_compute[254092]: 2025-11-25 16:39:05.177 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 16:39:05 compute-0 nova_compute[254092]: 2025-11-25 16:39:05.178 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:39:05 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1019557981' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:39:05 compute-0 nova_compute[254092]: 2025-11-25 16:39:05.440 254096 DEBUG oslo_concurrency.processutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:39:05 compute-0 nova_compute[254092]: 2025-11-25 16:39:05.441 254096 DEBUG nova.virt.libvirt.vif [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:38:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-251232744',display_name='tempest-DeleteServersTestJSON-server-251232744',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-251232744',id=55,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff5d31ffc7934838a461d6ac8aa546c9',ramdisk_id='',reservation_id='r-xpvmx0am',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-150185898',owner_user_name='tempest-DeleteServersTestJSON-1501
85898-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:38:58Z,user_data=None,user_id='788efa7ba65347b69663f4ea7ba4cd9d',uuid=e5947529-cfda-4753-94cd-b764da9d5c2c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2", "address": "fa:16:3e:38:a9:78", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaabf40d8-e3", "ovs_interfaceid": "aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:39:05 compute-0 nova_compute[254092]: 2025-11-25 16:39:05.442 254096 DEBUG nova.network.os_vif_util [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converting VIF {"id": "aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2", "address": "fa:16:3e:38:a9:78", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaabf40d8-e3", "ovs_interfaceid": "aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:39:05 compute-0 nova_compute[254092]: 2025-11-25 16:39:05.442 254096 DEBUG nova.network.os_vif_util [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:a9:78,bridge_name='br-int',has_traffic_filtering=True,id=aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaabf40d8-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:39:05 compute-0 nova_compute[254092]: 2025-11-25 16:39:05.444 254096 DEBUG nova.objects.instance [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lazy-loading 'pci_devices' on Instance uuid e5947529-cfda-4753-94cd-b764da9d5c2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:39:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:39:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e212 do_prune osdmap full prune enabled
Nov 25 16:39:05 compute-0 nova_compute[254092]: 2025-11-25 16:39:05.461 254096 DEBUG nova.virt.libvirt.driver [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:39:05 compute-0 nova_compute[254092]:   <uuid>e5947529-cfda-4753-94cd-b764da9d5c2c</uuid>
Nov 25 16:39:05 compute-0 nova_compute[254092]:   <name>instance-00000037</name>
Nov 25 16:39:05 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:39:05 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:39:05 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:39:05 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:       <nova:name>tempest-DeleteServersTestJSON-server-251232744</nova:name>
Nov 25 16:39:05 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:39:04</nova:creationTime>
Nov 25 16:39:05 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:39:05 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:39:05 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:39:05 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:39:05 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:39:05 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:39:05 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:39:05 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:39:05 compute-0 nova_compute[254092]:         <nova:user uuid="788efa7ba65347b69663f4ea7ba4cd9d">tempest-DeleteServersTestJSON-150185898-project-member</nova:user>
Nov 25 16:39:05 compute-0 nova_compute[254092]:         <nova:project uuid="ff5d31ffc7934838a461d6ac8aa546c9">tempest-DeleteServersTestJSON-150185898</nova:project>
Nov 25 16:39:05 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:39:05 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:39:05 compute-0 nova_compute[254092]:         <nova:port uuid="aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2">
Nov 25 16:39:05 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:39:05 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:39:05 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:39:05 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:39:05 compute-0 nova_compute[254092]:     <system>
Nov 25 16:39:05 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:39:05 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:39:05 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:39:05 compute-0 nova_compute[254092]:       <entry name="serial">e5947529-cfda-4753-94cd-b764da9d5c2c</entry>
Nov 25 16:39:05 compute-0 nova_compute[254092]:       <entry name="uuid">e5947529-cfda-4753-94cd-b764da9d5c2c</entry>
Nov 25 16:39:05 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     </system>
Nov 25 16:39:05 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:39:05 compute-0 nova_compute[254092]:   <os>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:   </os>
Nov 25 16:39:05 compute-0 nova_compute[254092]:   <features>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:   </features>
Nov 25 16:39:05 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:39:05 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:39:05 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:39:05 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:39:05 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:39:05 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/e5947529-cfda-4753-94cd-b764da9d5c2c_disk">
Nov 25 16:39:05 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:       </source>
Nov 25 16:39:05 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:39:05 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:39:05 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:39:05 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/e5947529-cfda-4753-94cd-b764da9d5c2c_disk.config">
Nov 25 16:39:05 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:       </source>
Nov 25 16:39:05 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:39:05 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:39:05 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:39:05 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:38:a9:78"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:       <target dev="tapaabf40d8-e3"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:39:05 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/e5947529-cfda-4753-94cd-b764da9d5c2c/console.log" append="off"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     <video>
Nov 25 16:39:05 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     </video>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:39:05 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:39:05 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:39:05 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:39:05 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:39:05 compute-0 nova_compute[254092]: </domain>
Nov 25 16:39:05 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:39:05 compute-0 nova_compute[254092]: 2025-11-25 16:39:05.461 254096 DEBUG nova.compute.manager [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Preparing to wait for external event network-vif-plugged-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:39:05 compute-0 nova_compute[254092]: 2025-11-25 16:39:05.462 254096 DEBUG oslo_concurrency.lockutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:39:05 compute-0 nova_compute[254092]: 2025-11-25 16:39:05.462 254096 DEBUG oslo_concurrency.lockutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:39:05 compute-0 nova_compute[254092]: 2025-11-25 16:39:05.462 254096 DEBUG oslo_concurrency.lockutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:05 compute-0 nova_compute[254092]: 2025-11-25 16:39:05.463 254096 DEBUG nova.virt.libvirt.vif [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:38:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-251232744',display_name='tempest-DeleteServersTestJSON-server-251232744',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-251232744',id=55,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff5d31ffc7934838a461d6ac8aa546c9',ramdisk_id='',reservation_id='r-xpvmx0am',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-150185898',owner_user_name='tempest-DeleteServersTes
tJSON-150185898-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:38:58Z,user_data=None,user_id='788efa7ba65347b69663f4ea7ba4cd9d',uuid=e5947529-cfda-4753-94cd-b764da9d5c2c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2", "address": "fa:16:3e:38:a9:78", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaabf40d8-e3", "ovs_interfaceid": "aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:39:05 compute-0 nova_compute[254092]: 2025-11-25 16:39:05.463 254096 DEBUG nova.network.os_vif_util [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converting VIF {"id": "aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2", "address": "fa:16:3e:38:a9:78", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaabf40d8-e3", "ovs_interfaceid": "aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:39:05 compute-0 nova_compute[254092]: 2025-11-25 16:39:05.464 254096 DEBUG nova.network.os_vif_util [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:a9:78,bridge_name='br-int',has_traffic_filtering=True,id=aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaabf40d8-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:39:05 compute-0 nova_compute[254092]: 2025-11-25 16:39:05.464 254096 DEBUG os_vif [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:a9:78,bridge_name='br-int',has_traffic_filtering=True,id=aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaabf40d8-e3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:39:05 compute-0 nova_compute[254092]: 2025-11-25 16:39:05.465 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:05 compute-0 nova_compute[254092]: 2025-11-25 16:39:05.466 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:39:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e213 e213: 3 total, 3 up, 3 in
Nov 25 16:39:05 compute-0 nova_compute[254092]: 2025-11-25 16:39:05.466 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:39:05 compute-0 nova_compute[254092]: 2025-11-25 16:39:05.470 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:05 compute-0 nova_compute[254092]: 2025-11-25 16:39:05.470 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaabf40d8-e3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:39:05 compute-0 nova_compute[254092]: 2025-11-25 16:39:05.470 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapaabf40d8-e3, col_values=(('external_ids', {'iface-id': 'aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:38:a9:78', 'vm-uuid': 'e5947529-cfda-4753-94cd-b764da9d5c2c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:39:05 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e213: 3 total, 3 up, 3 in
Nov 25 16:39:05 compute-0 nova_compute[254092]: 2025-11-25 16:39:05.472 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:05 compute-0 NetworkManager[48891]: <info>  [1764088745.4731] manager: (tapaabf40d8-e3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/214)
Nov 25 16:39:05 compute-0 nova_compute[254092]: 2025-11-25 16:39:05.474 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:39:05 compute-0 nova_compute[254092]: 2025-11-25 16:39:05.478 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:05 compute-0 nova_compute[254092]: 2025-11-25 16:39:05.479 254096 INFO os_vif [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:a9:78,bridge_name='br-int',has_traffic_filtering=True,id=aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaabf40d8-e3')
Nov 25 16:39:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1596: 321 pgs: 321 active+clean; 235 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 7.2 MiB/s rd, 3.7 MiB/s wr, 235 op/s
Nov 25 16:39:05 compute-0 nova_compute[254092]: 2025-11-25 16:39:05.565 254096 DEBUG nova.policy [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '96cb4b65d4074373a38534856574dc8f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a92e4b86655441c59ead5a1bd83173e5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:39:05 compute-0 nova_compute[254092]: 2025-11-25 16:39:05.613 254096 DEBUG nova.virt.libvirt.driver [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:39:05 compute-0 nova_compute[254092]: 2025-11-25 16:39:05.615 254096 DEBUG nova.virt.libvirt.driver [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:39:05 compute-0 nova_compute[254092]: 2025-11-25 16:39:05.615 254096 DEBUG nova.virt.libvirt.driver [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] No VIF found with MAC fa:16:3e:38:a9:78, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:39:05 compute-0 nova_compute[254092]: 2025-11-25 16:39:05.615 254096 INFO nova.virt.libvirt.driver [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Using config drive
Nov 25 16:39:05 compute-0 ceph-mon[74985]: osdmap e212: 3 total, 3 up, 3 in
Nov 25 16:39:05 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2091903382' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:39:05 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3205301590' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:39:05 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1019557981' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:39:05 compute-0 ceph-mon[74985]: osdmap e213: 3 total, 3 up, 3 in
Nov 25 16:39:05 compute-0 podman[315538]: 2025-11-25 16:39:05.672127613 +0000 UTC m=+0.074293415 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 16:39:05 compute-0 podman[315539]: 2025-11-25 16:39:05.697828552 +0000 UTC m=+0.099658146 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent)
Nov 25 16:39:05 compute-0 podman[315540]: 2025-11-25 16:39:05.711585254 +0000 UTC m=+0.113509120 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 16:39:05 compute-0 nova_compute[254092]: 2025-11-25 16:39:05.808 254096 DEBUG nova.storage.rbd_utils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image e5947529-cfda-4753-94cd-b764da9d5c2c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:39:06 compute-0 nova_compute[254092]: 2025-11-25 16:39:06.178 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:39:06 compute-0 nova_compute[254092]: 2025-11-25 16:39:06.178 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:39:06 compute-0 nova_compute[254092]: 2025-11-25 16:39:06.178 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:39:06 compute-0 nova_compute[254092]: 2025-11-25 16:39:06.179 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 16:39:06 compute-0 nova_compute[254092]: 2025-11-25 16:39:06.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:39:06 compute-0 ceph-mon[74985]: pgmap v1596: 321 pgs: 321 active+clean; 235 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 7.2 MiB/s rd, 3.7 MiB/s wr, 235 op/s
Nov 25 16:39:06 compute-0 nova_compute[254092]: 2025-11-25 16:39:06.655 254096 INFO nova.virt.libvirt.driver [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Creating config drive at /var/lib/nova/instances/e5947529-cfda-4753-94cd-b764da9d5c2c/disk.config
Nov 25 16:39:06 compute-0 nova_compute[254092]: 2025-11-25 16:39:06.661 254096 DEBUG oslo_concurrency.processutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e5947529-cfda-4753-94cd-b764da9d5c2c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuqbonlcb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:39:06 compute-0 nova_compute[254092]: 2025-11-25 16:39:06.808 254096 INFO nova.virt.libvirt.driver [None req-ab02db88-93f5-49fa-8291-59386d4b3955 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Snapshot image upload complete
Nov 25 16:39:06 compute-0 nova_compute[254092]: 2025-11-25 16:39:06.809 254096 INFO nova.compute.manager [None req-ab02db88-93f5-49fa-8291-59386d4b3955 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Took 6.09 seconds to snapshot the instance on the hypervisor.
Nov 25 16:39:06 compute-0 nova_compute[254092]: 2025-11-25 16:39:06.812 254096 DEBUG oslo_concurrency.processutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e5947529-cfda-4753-94cd-b764da9d5c2c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuqbonlcb" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:39:06 compute-0 nova_compute[254092]: 2025-11-25 16:39:06.834 254096 DEBUG nova.storage.rbd_utils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image e5947529-cfda-4753-94cd-b764da9d5c2c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:39:06 compute-0 nova_compute[254092]: 2025-11-25 16:39:06.837 254096 DEBUG oslo_concurrency.processutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e5947529-cfda-4753-94cd-b764da9d5c2c/disk.config e5947529-cfda-4753-94cd-b764da9d5c2c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:39:07 compute-0 nova_compute[254092]: 2025-11-25 16:39:07.015 254096 DEBUG nova.network.neutron [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Successfully created port: c66763cf-d7ff-412d-89d8-fb6db38952f9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:39:07 compute-0 nova_compute[254092]: 2025-11-25 16:39:07.058 254096 DEBUG oslo_concurrency.processutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e5947529-cfda-4753-94cd-b764da9d5c2c/disk.config e5947529-cfda-4753-94cd-b764da9d5c2c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.221s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:39:07 compute-0 nova_compute[254092]: 2025-11-25 16:39:07.059 254096 INFO nova.virt.libvirt.driver [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Deleting local config drive /var/lib/nova/instances/e5947529-cfda-4753-94cd-b764da9d5c2c/disk.config because it was imported into RBD.
Nov 25 16:39:07 compute-0 kernel: tapaabf40d8-e3: entered promiscuous mode
Nov 25 16:39:07 compute-0 NetworkManager[48891]: <info>  [1764088747.1088] manager: (tapaabf40d8-e3): new Tun device (/org/freedesktop/NetworkManager/Devices/215)
Nov 25 16:39:07 compute-0 nova_compute[254092]: 2025-11-25 16:39:07.159 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:07 compute-0 ovn_controller[153477]: 2025-11-25T16:39:07Z|00491|binding|INFO|Claiming lport aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 for this chassis.
Nov 25 16:39:07 compute-0 ovn_controller[153477]: 2025-11-25T16:39:07Z|00492|binding|INFO|aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2: Claiming fa:16:3e:38:a9:78 10.100.0.6
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.168 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:a9:78 10.100.0.6'], port_security=['fa:16:3e:38:a9:78 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'e5947529-cfda-4753-94cd-b764da9d5c2c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e469a950-7044-48bc-a261-bc9effe2c2aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff5d31ffc7934838a461d6ac8aa546c9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dcbb2958-ef75-485e-8580-da48580c7577', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff181adb-f502-4aa2-aaa5-e1f9a56ea733, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.169 163338 INFO neutron.agent.ovn.metadata.agent [-] Port aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 in datapath e469a950-7044-48bc-a261-bc9effe2c2aa bound to our chassis
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.170 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e469a950-7044-48bc-a261-bc9effe2c2aa
Nov 25 16:39:07 compute-0 ovn_controller[153477]: 2025-11-25T16:39:07Z|00493|binding|INFO|Setting lport aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 ovn-installed in OVS
Nov 25 16:39:07 compute-0 ovn_controller[153477]: 2025-11-25T16:39:07Z|00494|binding|INFO|Setting lport aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 up in Southbound
Nov 25 16:39:07 compute-0 nova_compute[254092]: 2025-11-25 16:39:07.180 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:07 compute-0 nova_compute[254092]: 2025-11-25 16:39:07.182 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.182 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1f293f42-bccd-4be0-8741-c67ca87a4c2d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.184 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape469a950-71 in ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.186 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape469a950-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.186 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e63288c1-79eb-4fbd-9266-a8675b01368e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.187 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5df82957-5678-4af6-9d75-8cf0ee792d64]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:07 compute-0 systemd-udevd[315669]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.201 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[f25373b1-f528-4eca-ad4e-ab80c8b904f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:07 compute-0 systemd-machined[216343]: New machine qemu-64-instance-00000037.
Nov 25 16:39:07 compute-0 NetworkManager[48891]: <info>  [1764088747.2113] device (tapaabf40d8-e3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:39:07 compute-0 NetworkManager[48891]: <info>  [1764088747.2119] device (tapaabf40d8-e3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:39:07 compute-0 systemd[1]: Started Virtual Machine qemu-64-instance-00000037.
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.229 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[578971f8-cf5c-435f-b88f-9246d9a3fb84]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.261 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[fb7ab98f-0755-4ded-a60f-16cce7651591]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.269 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4ec7ad80-0081-4e59-a284-273c6f0fd7d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:07 compute-0 NetworkManager[48891]: <info>  [1764088747.2708] manager: (tape469a950-70): new Veth device (/org/freedesktop/NetworkManager/Devices/216)
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.306 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[46a19992-1b5f-4018-8895-236c999d8cce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.310 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[87560e57-f8bb-4a72-9a38-967ec66fdf7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:07 compute-0 NetworkManager[48891]: <info>  [1764088747.3363] device (tape469a950-70): carrier: link connected
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.342 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[0413e089-02ef-496c-94bd-b4edd7ef42ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.361 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[07a1f60a-c484-48e1-a133-3d2738ce9bab]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape469a950-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:78:e4:a6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 142], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 520492, 'reachable_time': 43790, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315701, 'error': None, 'target': 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.378 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d84fd010-fd35-4bdd-8753-3d2891853582]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe78:e4a6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 520492, 'tstamp': 520492}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315702, 'error': None, 'target': 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.396 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a1802a45-69a2-41d2-9db1-57c7c54b4886]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape469a950-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:78:e4:a6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 142], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 520492, 'reachable_time': 43790, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 315703, 'error': None, 'target': 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.424 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fba259d0-0b28-4b41-868d-78382732e333]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:07 compute-0 nova_compute[254092]: 2025-11-25 16:39:07.455 254096 DEBUG nova.network.neutron [req-8028a58c-c2fe-47df-b06a-bb53dbbbbf88 req-df6cee12-ae24-44d1-ad00-96f89987140f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Updated VIF entry in instance network info cache for port aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:39:07 compute-0 nova_compute[254092]: 2025-11-25 16:39:07.455 254096 DEBUG nova.network.neutron [req-8028a58c-c2fe-47df-b06a-bb53dbbbbf88 req-df6cee12-ae24-44d1-ad00-96f89987140f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Updating instance_info_cache with network_info: [{"id": "aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2", "address": "fa:16:3e:38:a9:78", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaabf40d8-e3", "ovs_interfaceid": "aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:39:07 compute-0 nova_compute[254092]: 2025-11-25 16:39:07.468 254096 DEBUG oslo_concurrency.lockutils [req-8028a58c-c2fe-47df-b06a-bb53dbbbbf88 req-df6cee12-ae24-44d1-ad00-96f89987140f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-e5947529-cfda-4753-94cd-b764da9d5c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:39:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1597: 321 pgs: 321 active+clean; 292 MiB data, 636 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 7.7 MiB/s wr, 296 op/s
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.492 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8b5f1758-8f9c-4880-b2d7-b0f438c512a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.494 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape469a950-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.494 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.494 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape469a950-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:39:07 compute-0 NetworkManager[48891]: <info>  [1764088747.4970] manager: (tape469a950-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/217)
Nov 25 16:39:07 compute-0 kernel: tape469a950-70: entered promiscuous mode
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.499 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape469a950-70, col_values=(('external_ids', {'iface-id': 'd55c2366-e0b4-4cb8-86a6-03b9430fb138'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:39:07 compute-0 ovn_controller[153477]: 2025-11-25T16:39:07Z|00495|binding|INFO|Releasing lport d55c2366-e0b4-4cb8-86a6-03b9430fb138 from this chassis (sb_readonly=0)
Nov 25 16:39:07 compute-0 nova_compute[254092]: 2025-11-25 16:39:07.501 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:07 compute-0 nova_compute[254092]: 2025-11-25 16:39:07.521 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.522 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e469a950-7044-48bc-a261-bc9effe2c2aa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e469a950-7044-48bc-a261-bc9effe2c2aa.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.523 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e6d5fcc8-e730-458d-bb0e-0d6aeb53bd59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.524 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-e469a950-7044-48bc-a261-bc9effe2c2aa
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/e469a950-7044-48bc-a261-bc9effe2c2aa.pid.haproxy
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID e469a950-7044-48bc-a261-bc9effe2c2aa
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:39:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:07.525 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'env', 'PROCESS_TAG=haproxy-e469a950-7044-48bc-a261-bc9effe2c2aa', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e469a950-7044-48bc-a261-bc9effe2c2aa.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:39:07 compute-0 nova_compute[254092]: 2025-11-25 16:39:07.675 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088747.6749299, e5947529-cfda-4753-94cd-b764da9d5c2c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:39:07 compute-0 nova_compute[254092]: 2025-11-25 16:39:07.675 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] VM Started (Lifecycle Event)
Nov 25 16:39:07 compute-0 nova_compute[254092]: 2025-11-25 16:39:07.891 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:39:07 compute-0 nova_compute[254092]: 2025-11-25 16:39:07.899 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088747.6751065, e5947529-cfda-4753-94cd-b764da9d5c2c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:39:07 compute-0 nova_compute[254092]: 2025-11-25 16:39:07.900 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] VM Paused (Lifecycle Event)
Nov 25 16:39:07 compute-0 nova_compute[254092]: 2025-11-25 16:39:07.927 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:39:07 compute-0 nova_compute[254092]: 2025-11-25 16:39:07.931 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:39:07 compute-0 nova_compute[254092]: 2025-11-25 16:39:07.960 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:39:07 compute-0 podman[315777]: 2025-11-25 16:39:07.875708336 +0000 UTC m=+0.021751731 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:39:08 compute-0 nova_compute[254092]: 2025-11-25 16:39:08.058 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:08 compute-0 podman[315777]: 2025-11-25 16:39:08.150973594 +0000 UTC m=+0.297016969 container create c066cc5455532f1c6bff5f5897b54ace7a9ea17e5d7ebd3b84dea3e4e4958957 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 25 16:39:08 compute-0 systemd[1]: Started libpod-conmon-c066cc5455532f1c6bff5f5897b54ace7a9ea17e5d7ebd3b84dea3e4e4958957.scope.
Nov 25 16:39:08 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:39:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3ad08b84d0545a65c2b91231b7123f70815850aa282c0b4fc83b481c7c3e304/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:39:08 compute-0 podman[315777]: 2025-11-25 16:39:08.325993512 +0000 UTC m=+0.472036907 container init c066cc5455532f1c6bff5f5897b54ace7a9ea17e5d7ebd3b84dea3e4e4958957 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 16:39:08 compute-0 podman[315777]: 2025-11-25 16:39:08.331771309 +0000 UTC m=+0.477814684 container start c066cc5455532f1c6bff5f5897b54ace7a9ea17e5d7ebd3b84dea3e4e4958957 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:39:08 compute-0 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[315792]: [NOTICE]   (315796) : New worker (315798) forked
Nov 25 16:39:08 compute-0 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[315792]: [NOTICE]   (315796) : Loading success.
Nov 25 16:39:08 compute-0 nova_compute[254092]: 2025-11-25 16:39:08.381 254096 DEBUG nova.compute.manager [req-ac7e70d5-f156-4cd8-876b-8484f5fcb6fa req-27046837-1ff8-40a7-b83c-4211acedae7f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Received event network-vif-plugged-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:39:08 compute-0 nova_compute[254092]: 2025-11-25 16:39:08.381 254096 DEBUG oslo_concurrency.lockutils [req-ac7e70d5-f156-4cd8-876b-8484f5fcb6fa req-27046837-1ff8-40a7-b83c-4211acedae7f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:39:08 compute-0 nova_compute[254092]: 2025-11-25 16:39:08.382 254096 DEBUG oslo_concurrency.lockutils [req-ac7e70d5-f156-4cd8-876b-8484f5fcb6fa req-27046837-1ff8-40a7-b83c-4211acedae7f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:39:08 compute-0 nova_compute[254092]: 2025-11-25 16:39:08.382 254096 DEBUG oslo_concurrency.lockutils [req-ac7e70d5-f156-4cd8-876b-8484f5fcb6fa req-27046837-1ff8-40a7-b83c-4211acedae7f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:08 compute-0 nova_compute[254092]: 2025-11-25 16:39:08.382 254096 DEBUG nova.compute.manager [req-ac7e70d5-f156-4cd8-876b-8484f5fcb6fa req-27046837-1ff8-40a7-b83c-4211acedae7f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Processing event network-vif-plugged-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:39:08 compute-0 nova_compute[254092]: 2025-11-25 16:39:08.383 254096 DEBUG nova.compute.manager [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:39:08 compute-0 nova_compute[254092]: 2025-11-25 16:39:08.388 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088748.3882308, e5947529-cfda-4753-94cd-b764da9d5c2c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:39:08 compute-0 nova_compute[254092]: 2025-11-25 16:39:08.388 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] VM Resumed (Lifecycle Event)
Nov 25 16:39:08 compute-0 nova_compute[254092]: 2025-11-25 16:39:08.391 254096 DEBUG nova.virt.libvirt.driver [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:39:08 compute-0 nova_compute[254092]: 2025-11-25 16:39:08.395 254096 INFO nova.virt.libvirt.driver [-] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Instance spawned successfully.
Nov 25 16:39:08 compute-0 nova_compute[254092]: 2025-11-25 16:39:08.396 254096 DEBUG nova.virt.libvirt.driver [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:39:08 compute-0 nova_compute[254092]: 2025-11-25 16:39:08.412 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:39:08 compute-0 nova_compute[254092]: 2025-11-25 16:39:08.415 254096 DEBUG nova.virt.libvirt.driver [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:39:08 compute-0 nova_compute[254092]: 2025-11-25 16:39:08.415 254096 DEBUG nova.virt.libvirt.driver [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:39:08 compute-0 nova_compute[254092]: 2025-11-25 16:39:08.416 254096 DEBUG nova.virt.libvirt.driver [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:39:08 compute-0 nova_compute[254092]: 2025-11-25 16:39:08.416 254096 DEBUG nova.virt.libvirt.driver [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:39:08 compute-0 nova_compute[254092]: 2025-11-25 16:39:08.417 254096 DEBUG nova.virt.libvirt.driver [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:39:08 compute-0 nova_compute[254092]: 2025-11-25 16:39:08.417 254096 DEBUG nova.virt.libvirt.driver [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:39:08 compute-0 nova_compute[254092]: 2025-11-25 16:39:08.424 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:39:08 compute-0 nova_compute[254092]: 2025-11-25 16:39:08.456 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:39:08 compute-0 nova_compute[254092]: 2025-11-25 16:39:08.490 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:39:08 compute-0 nova_compute[254092]: 2025-11-25 16:39:08.516 254096 INFO nova.compute.manager [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Took 9.88 seconds to spawn the instance on the hypervisor.
Nov 25 16:39:08 compute-0 nova_compute[254092]: 2025-11-25 16:39:08.516 254096 DEBUG nova.compute.manager [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:39:08 compute-0 nova_compute[254092]: 2025-11-25 16:39:08.584 254096 INFO nova.compute.manager [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Took 11.69 seconds to build instance.
Nov 25 16:39:08 compute-0 nova_compute[254092]: 2025-11-25 16:39:08.612 254096 DEBUG oslo_concurrency.lockutils [None req-e93fdb13-a0d3-4480-ac2e-992d71ddaf50 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.857s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:08 compute-0 ceph-mon[74985]: pgmap v1597: 321 pgs: 321 active+clean; 292 MiB data, 636 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 7.7 MiB/s wr, 296 op/s
Nov 25 16:39:08 compute-0 nova_compute[254092]: 2025-11-25 16:39:08.701 254096 DEBUG nova.network.neutron [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Successfully updated port: c66763cf-d7ff-412d-89d8-fb6db38952f9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:39:08 compute-0 nova_compute[254092]: 2025-11-25 16:39:08.716 254096 DEBUG oslo_concurrency.lockutils [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Acquiring lock "refresh_cache-97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:39:08 compute-0 nova_compute[254092]: 2025-11-25 16:39:08.716 254096 DEBUG oslo_concurrency.lockutils [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Acquired lock "refresh_cache-97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:39:08 compute-0 nova_compute[254092]: 2025-11-25 16:39:08.716 254096 DEBUG nova.network.neutron [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:39:08 compute-0 nova_compute[254092]: 2025-11-25 16:39:08.935 254096 WARNING nova.network.neutron [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] d5808bee-5100-4cdf-b578-a1bc323dafe9 already exists in list: networks containing: ['d5808bee-5100-4cdf-b578-a1bc323dafe9']. ignoring it
Nov 25 16:39:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1598: 321 pgs: 321 active+clean; 292 MiB data, 636 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 7.7 MiB/s wr, 296 op/s
Nov 25 16:39:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:39:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:39:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:39:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:39:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:39:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:39:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:39:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e213 do_prune osdmap full prune enabled
Nov 25 16:39:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e214 e214: 3 total, 3 up, 3 in
Nov 25 16:39:10 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e214: 3 total, 3 up, 3 in
Nov 25 16:39:10 compute-0 nova_compute[254092]: 2025-11-25 16:39:10.473 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:10 compute-0 nova_compute[254092]: 2025-11-25 16:39:10.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:39:10 compute-0 nova_compute[254092]: 2025-11-25 16:39:10.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 16:39:10 compute-0 nova_compute[254092]: 2025-11-25 16:39:10.513 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 16:39:10 compute-0 ceph-mon[74985]: pgmap v1598: 321 pgs: 321 active+clean; 292 MiB data, 636 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 7.7 MiB/s wr, 296 op/s
Nov 25 16:39:10 compute-0 ceph-mon[74985]: osdmap e214: 3 total, 3 up, 3 in
Nov 25 16:39:10 compute-0 nova_compute[254092]: 2025-11-25 16:39:10.841 254096 DEBUG nova.network.neutron [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Updating instance_info_cache with network_info: [{"id": "a6f06f5d-486f-4039-a0cb-30b122e69258", "address": "fa:16:3e:76:39:37", "network": {"id": "d5808bee-5100-4cdf-b578-a1bc323dafe9", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-286790264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a92e4b86655441c59ead5a1bd83173e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6f06f5d-48", "ovs_interfaceid": "a6f06f5d-486f-4039-a0cb-30b122e69258", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c66763cf-d7ff-412d-89d8-fb6db38952f9", "address": "fa:16:3e:a6:7c:94", "network": {"id": "d5808bee-5100-4cdf-b578-a1bc323dafe9", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-286790264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a92e4b86655441c59ead5a1bd83173e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66763cf-d7", "ovs_interfaceid": "c66763cf-d7ff-412d-89d8-fb6db38952f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:39:10 compute-0 nova_compute[254092]: 2025-11-25 16:39:10.872 254096 DEBUG oslo_concurrency.lockutils [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Releasing lock "refresh_cache-97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:39:10 compute-0 nova_compute[254092]: 2025-11-25 16:39:10.875 254096 DEBUG nova.virt.libvirt.vif [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:38:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-540246934',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-540246934',id=54,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:39:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a92e4b86655441c59ead5a1bd83173e5',ramdisk_id='',reservation_id='r-yc4qlfex',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesV270Test-1255379647',owner_user_name='tempest-AttachInterfacesV270Test-1255379647-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:39:02Z,user_data=None,user_id='96cb4b65d4074373a38534856574dc8f',uuid=97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c66763cf-d7ff-412d-89d8-fb6db38952f9", "address": "fa:16:3e:a6:7c:94", "network": {"id": "d5808bee-5100-4cdf-b578-a1bc323dafe9", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-286790264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a92e4b86655441c59ead5a1bd83173e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66763cf-d7", "ovs_interfaceid": "c66763cf-d7ff-412d-89d8-fb6db38952f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:39:10 compute-0 nova_compute[254092]: 2025-11-25 16:39:10.875 254096 DEBUG nova.network.os_vif_util [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Converting VIF {"id": "c66763cf-d7ff-412d-89d8-fb6db38952f9", "address": "fa:16:3e:a6:7c:94", "network": {"id": "d5808bee-5100-4cdf-b578-a1bc323dafe9", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-286790264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a92e4b86655441c59ead5a1bd83173e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66763cf-d7", "ovs_interfaceid": "c66763cf-d7ff-412d-89d8-fb6db38952f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:39:10 compute-0 nova_compute[254092]: 2025-11-25 16:39:10.876 254096 DEBUG nova.network.os_vif_util [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:7c:94,bridge_name='br-int',has_traffic_filtering=True,id=c66763cf-d7ff-412d-89d8-fb6db38952f9,network=Network(d5808bee-5100-4cdf-b578-a1bc323dafe9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc66763cf-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:39:10 compute-0 nova_compute[254092]: 2025-11-25 16:39:10.876 254096 DEBUG os_vif [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:7c:94,bridge_name='br-int',has_traffic_filtering=True,id=c66763cf-d7ff-412d-89d8-fb6db38952f9,network=Network(d5808bee-5100-4cdf-b578-a1bc323dafe9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc66763cf-d7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:39:10 compute-0 nova_compute[254092]: 2025-11-25 16:39:10.877 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:10 compute-0 nova_compute[254092]: 2025-11-25 16:39:10.877 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:39:10 compute-0 nova_compute[254092]: 2025-11-25 16:39:10.877 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:39:10 compute-0 nova_compute[254092]: 2025-11-25 16:39:10.880 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:10 compute-0 nova_compute[254092]: 2025-11-25 16:39:10.881 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc66763cf-d7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:39:10 compute-0 nova_compute[254092]: 2025-11-25 16:39:10.881 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc66763cf-d7, col_values=(('external_ids', {'iface-id': 'c66763cf-d7ff-412d-89d8-fb6db38952f9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a6:7c:94', 'vm-uuid': '97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:39:10 compute-0 nova_compute[254092]: 2025-11-25 16:39:10.883 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:10 compute-0 NetworkManager[48891]: <info>  [1764088750.8838] manager: (tapc66763cf-d7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/218)
Nov 25 16:39:10 compute-0 nova_compute[254092]: 2025-11-25 16:39:10.885 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:39:10 compute-0 nova_compute[254092]: 2025-11-25 16:39:10.890 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:10 compute-0 nova_compute[254092]: 2025-11-25 16:39:10.891 254096 INFO os_vif [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:7c:94,bridge_name='br-int',has_traffic_filtering=True,id=c66763cf-d7ff-412d-89d8-fb6db38952f9,network=Network(d5808bee-5100-4cdf-b578-a1bc323dafe9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc66763cf-d7')
Nov 25 16:39:10 compute-0 nova_compute[254092]: 2025-11-25 16:39:10.892 254096 DEBUG nova.virt.libvirt.vif [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:38:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-540246934',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-540246934',id=54,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:39:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a92e4b86655441c59ead5a1bd83173e5',ramdisk_id='',reservation_id='r-yc4qlfex',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesV270Test-1255379647',owner_user_name='tempest-AttachInterfacesV270Test-1255379647-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:39:02Z,user_data=None,user_id='96cb4b65d4074373a38534856574dc8f',uuid=97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c66763cf-d7ff-412d-89d8-fb6db38952f9", "address": "fa:16:3e:a6:7c:94", "network": {"id": "d5808bee-5100-4cdf-b578-a1bc323dafe9", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-286790264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a92e4b86655441c59ead5a1bd83173e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66763cf-d7", "ovs_interfaceid": "c66763cf-d7ff-412d-89d8-fb6db38952f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:39:10 compute-0 nova_compute[254092]: 2025-11-25 16:39:10.892 254096 DEBUG nova.network.os_vif_util [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Converting VIF {"id": "c66763cf-d7ff-412d-89d8-fb6db38952f9", "address": "fa:16:3e:a6:7c:94", "network": {"id": "d5808bee-5100-4cdf-b578-a1bc323dafe9", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-286790264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a92e4b86655441c59ead5a1bd83173e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66763cf-d7", "ovs_interfaceid": "c66763cf-d7ff-412d-89d8-fb6db38952f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:39:10 compute-0 nova_compute[254092]: 2025-11-25 16:39:10.893 254096 DEBUG nova.network.os_vif_util [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:7c:94,bridge_name='br-int',has_traffic_filtering=True,id=c66763cf-d7ff-412d-89d8-fb6db38952f9,network=Network(d5808bee-5100-4cdf-b578-a1bc323dafe9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc66763cf-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:39:10 compute-0 nova_compute[254092]: 2025-11-25 16:39:10.896 254096 DEBUG nova.virt.libvirt.guest [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] attach device xml: <interface type="ethernet">
Nov 25 16:39:10 compute-0 nova_compute[254092]:   <mac address="fa:16:3e:a6:7c:94"/>
Nov 25 16:39:10 compute-0 nova_compute[254092]:   <model type="virtio"/>
Nov 25 16:39:10 compute-0 nova_compute[254092]:   <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:39:10 compute-0 nova_compute[254092]:   <mtu size="1442"/>
Nov 25 16:39:10 compute-0 nova_compute[254092]:   <target dev="tapc66763cf-d7"/>
Nov 25 16:39:10 compute-0 nova_compute[254092]: </interface>
Nov 25 16:39:10 compute-0 nova_compute[254092]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 25 16:39:10 compute-0 kernel: tapc66763cf-d7: entered promiscuous mode
Nov 25 16:39:10 compute-0 NetworkManager[48891]: <info>  [1764088750.9065] manager: (tapc66763cf-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/219)
Nov 25 16:39:10 compute-0 ovn_controller[153477]: 2025-11-25T16:39:10Z|00496|binding|INFO|Claiming lport c66763cf-d7ff-412d-89d8-fb6db38952f9 for this chassis.
Nov 25 16:39:10 compute-0 ovn_controller[153477]: 2025-11-25T16:39:10Z|00497|binding|INFO|c66763cf-d7ff-412d-89d8-fb6db38952f9: Claiming fa:16:3e:a6:7c:94 10.100.0.9
Nov 25 16:39:10 compute-0 nova_compute[254092]: 2025-11-25 16:39:10.918 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:10.922 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a6:7c:94 10.100.0.9'], port_security=['fa:16:3e:a6:7c:94 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d5808bee-5100-4cdf-b578-a1bc323dafe9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a92e4b86655441c59ead5a1bd83173e5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f22d4d9b-3d2b-4a51-a756-13e1e296ee57', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d18acd19-221e-4d09-97bf-0871da349301, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=c66763cf-d7ff-412d-89d8-fb6db38952f9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:39:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:10.923 163338 INFO neutron.agent.ovn.metadata.agent [-] Port c66763cf-d7ff-412d-89d8-fb6db38952f9 in datapath d5808bee-5100-4cdf-b578-a1bc323dafe9 bound to our chassis
Nov 25 16:39:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:10.925 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d5808bee-5100-4cdf-b578-a1bc323dafe9
Nov 25 16:39:10 compute-0 ovn_controller[153477]: 2025-11-25T16:39:10Z|00498|binding|INFO|Setting lport c66763cf-d7ff-412d-89d8-fb6db38952f9 ovn-installed in OVS
Nov 25 16:39:10 compute-0 ovn_controller[153477]: 2025-11-25T16:39:10Z|00499|binding|INFO|Setting lport c66763cf-d7ff-412d-89d8-fb6db38952f9 up in Southbound
Nov 25 16:39:10 compute-0 nova_compute[254092]: 2025-11-25 16:39:10.938 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:10.949 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f4933c86-ed5a-4a30-982a-fd6edb6bea3b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:10 compute-0 systemd-udevd[315814]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:39:10 compute-0 NetworkManager[48891]: <info>  [1764088750.9695] device (tapc66763cf-d7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:39:10 compute-0 NetworkManager[48891]: <info>  [1764088750.9704] device (tapc66763cf-d7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:39:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:10.992 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[03bc5ef1-259f-48c1-a85c-b3fa5537aa97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:10.996 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ab15791a-d884-49f6-8006-cb6f4c6f861e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:11 compute-0 nova_compute[254092]: 2025-11-25 16:39:11.020 254096 DEBUG nova.virt.libvirt.driver [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:39:11 compute-0 nova_compute[254092]: 2025-11-25 16:39:11.021 254096 DEBUG nova.virt.libvirt.driver [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:39:11 compute-0 nova_compute[254092]: 2025-11-25 16:39:11.021 254096 DEBUG nova.virt.libvirt.driver [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] No VIF found with MAC fa:16:3e:76:39:37, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:39:11 compute-0 nova_compute[254092]: 2025-11-25 16:39:11.021 254096 DEBUG nova.virt.libvirt.driver [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] No VIF found with MAC fa:16:3e:a6:7c:94, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:39:11 compute-0 nova_compute[254092]: 2025-11-25 16:39:11.045 254096 DEBUG nova.virt.libvirt.guest [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:39:11 compute-0 nova_compute[254092]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:39:11 compute-0 nova_compute[254092]:   <nova:name>tempest-AttachInterfacesV270Test-server-540246934</nova:name>
Nov 25 16:39:11 compute-0 nova_compute[254092]:   <nova:creationTime>2025-11-25 16:39:11</nova:creationTime>
Nov 25 16:39:11 compute-0 nova_compute[254092]:   <nova:flavor name="m1.nano">
Nov 25 16:39:11 compute-0 nova_compute[254092]:     <nova:memory>128</nova:memory>
Nov 25 16:39:11 compute-0 nova_compute[254092]:     <nova:disk>1</nova:disk>
Nov 25 16:39:11 compute-0 nova_compute[254092]:     <nova:swap>0</nova:swap>
Nov 25 16:39:11 compute-0 nova_compute[254092]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:39:11 compute-0 nova_compute[254092]:     <nova:vcpus>1</nova:vcpus>
Nov 25 16:39:11 compute-0 nova_compute[254092]:   </nova:flavor>
Nov 25 16:39:11 compute-0 nova_compute[254092]:   <nova:owner>
Nov 25 16:39:11 compute-0 nova_compute[254092]:     <nova:user uuid="96cb4b65d4074373a38534856574dc8f">tempest-AttachInterfacesV270Test-1255379647-project-member</nova:user>
Nov 25 16:39:11 compute-0 nova_compute[254092]:     <nova:project uuid="a92e4b86655441c59ead5a1bd83173e5">tempest-AttachInterfacesV270Test-1255379647</nova:project>
Nov 25 16:39:11 compute-0 nova_compute[254092]:   </nova:owner>
Nov 25 16:39:11 compute-0 nova_compute[254092]:   <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:39:11 compute-0 nova_compute[254092]:   <nova:ports>
Nov 25 16:39:11 compute-0 nova_compute[254092]:     <nova:port uuid="a6f06f5d-486f-4039-a0cb-30b122e69258">
Nov 25 16:39:11 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 16:39:11 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:39:11 compute-0 nova_compute[254092]:     <nova:port uuid="c66763cf-d7ff-412d-89d8-fb6db38952f9">
Nov 25 16:39:11 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 16:39:11 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 16:39:11 compute-0 nova_compute[254092]:   </nova:ports>
Nov 25 16:39:11 compute-0 nova_compute[254092]: </nova:instance>
Nov 25 16:39:11 compute-0 nova_compute[254092]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 25 16:39:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:11.049 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9a5ca7ef-3d2d-4f09-acaf-f6275d78dd4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:11.068 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6b4157ea-d974-4b0a-9694-ab0eec896d08]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd5808bee-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:9e:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 140], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519782, 'reachable_time': 24363, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315821, 'error': None, 'target': 'ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:11 compute-0 nova_compute[254092]: 2025-11-25 16:39:11.072 254096 DEBUG oslo_concurrency.lockutils [None req-4b8c8da5-1459-47f3-8736-a4cd50727e53 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Lock "interface-97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.497s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:11.094 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a8af4b4c-47c2-4858-9b5f-99911035e6e8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd5808bee-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 519795, 'tstamp': 519795}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315822, 'error': None, 'target': 'ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd5808bee-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 519798, 'tstamp': 519798}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315822, 'error': None, 'target': 'ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:11.104 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5808bee-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:39:11 compute-0 nova_compute[254092]: 2025-11-25 16:39:11.106 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:11 compute-0 nova_compute[254092]: 2025-11-25 16:39:11.108 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:11.109 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd5808bee-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:39:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:11.109 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:39:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:11.110 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd5808bee-50, col_values=(('external_ids', {'iface-id': 'ab0217e4-1718-4cef-9483-cef43176b686'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:39:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:11.110 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:39:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1600: 321 pgs: 321 active+clean; 252 MiB data, 636 MiB used, 59 GiB / 60 GiB avail; 14 MiB/s rd, 6.8 MiB/s wr, 435 op/s
Nov 25 16:39:11 compute-0 nova_compute[254092]: 2025-11-25 16:39:11.875 254096 DEBUG nova.compute.manager [req-0597da81-faf8-4ddd-99ae-dd68ff7c6610 req-861b89c4-b0e9-4c96-a340-fbea895d032b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Received event network-vif-plugged-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:39:11 compute-0 nova_compute[254092]: 2025-11-25 16:39:11.876 254096 DEBUG oslo_concurrency.lockutils [req-0597da81-faf8-4ddd-99ae-dd68ff7c6610 req-861b89c4-b0e9-4c96-a340-fbea895d032b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:39:11 compute-0 nova_compute[254092]: 2025-11-25 16:39:11.876 254096 DEBUG oslo_concurrency.lockutils [req-0597da81-faf8-4ddd-99ae-dd68ff7c6610 req-861b89c4-b0e9-4c96-a340-fbea895d032b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:39:11 compute-0 nova_compute[254092]: 2025-11-25 16:39:11.876 254096 DEBUG oslo_concurrency.lockutils [req-0597da81-faf8-4ddd-99ae-dd68ff7c6610 req-861b89c4-b0e9-4c96-a340-fbea895d032b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:11 compute-0 nova_compute[254092]: 2025-11-25 16:39:11.877 254096 DEBUG nova.compute.manager [req-0597da81-faf8-4ddd-99ae-dd68ff7c6610 req-861b89c4-b0e9-4c96-a340-fbea895d032b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] No waiting events found dispatching network-vif-plugged-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:39:11 compute-0 nova_compute[254092]: 2025-11-25 16:39:11.877 254096 WARNING nova.compute.manager [req-0597da81-faf8-4ddd-99ae-dd68ff7c6610 req-861b89c4-b0e9-4c96-a340-fbea895d032b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Received unexpected event network-vif-plugged-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 for instance with vm_state active and task_state None.
Nov 25 16:39:11 compute-0 nova_compute[254092]: 2025-11-25 16:39:11.877 254096 DEBUG nova.compute.manager [req-0597da81-faf8-4ddd-99ae-dd68ff7c6610 req-861b89c4-b0e9-4c96-a340-fbea895d032b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Received event network-changed-c66763cf-d7ff-412d-89d8-fb6db38952f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:39:11 compute-0 nova_compute[254092]: 2025-11-25 16:39:11.877 254096 DEBUG nova.compute.manager [req-0597da81-faf8-4ddd-99ae-dd68ff7c6610 req-861b89c4-b0e9-4c96-a340-fbea895d032b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Refreshing instance network info cache due to event network-changed-c66763cf-d7ff-412d-89d8-fb6db38952f9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:39:11 compute-0 nova_compute[254092]: 2025-11-25 16:39:11.877 254096 DEBUG oslo_concurrency.lockutils [req-0597da81-faf8-4ddd-99ae-dd68ff7c6610 req-861b89c4-b0e9-4c96-a340-fbea895d032b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:39:11 compute-0 nova_compute[254092]: 2025-11-25 16:39:11.878 254096 DEBUG oslo_concurrency.lockutils [req-0597da81-faf8-4ddd-99ae-dd68ff7c6610 req-861b89c4-b0e9-4c96-a340-fbea895d032b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:39:11 compute-0 nova_compute[254092]: 2025-11-25 16:39:11.878 254096 DEBUG nova.network.neutron [req-0597da81-faf8-4ddd-99ae-dd68ff7c6610 req-861b89c4-b0e9-4c96-a340-fbea895d032b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Refreshing network info cache for port c66763cf-d7ff-412d-89d8-fb6db38952f9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:39:12 compute-0 nova_compute[254092]: 2025-11-25 16:39:12.047 254096 DEBUG oslo_concurrency.lockutils [None req-51763a1a-8e52-4ad8-95a6-147f9ccb06d4 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "e5947529-cfda-4753-94cd-b764da9d5c2c" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:39:12 compute-0 nova_compute[254092]: 2025-11-25 16:39:12.048 254096 DEBUG oslo_concurrency.lockutils [None req-51763a1a-8e52-4ad8-95a6-147f9ccb06d4 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:39:12 compute-0 nova_compute[254092]: 2025-11-25 16:39:12.048 254096 DEBUG nova.compute.manager [None req-51763a1a-8e52-4ad8-95a6-147f9ccb06d4 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:39:12 compute-0 nova_compute[254092]: 2025-11-25 16:39:12.051 254096 DEBUG nova.compute.manager [None req-51763a1a-8e52-4ad8-95a6-147f9ccb06d4 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Nov 25 16:39:12 compute-0 nova_compute[254092]: 2025-11-25 16:39:12.051 254096 DEBUG nova.objects.instance [None req-51763a1a-8e52-4ad8-95a6-147f9ccb06d4 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lazy-loading 'flavor' on Instance uuid e5947529-cfda-4753-94cd-b764da9d5c2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:39:12 compute-0 nova_compute[254092]: 2025-11-25 16:39:12.073 254096 DEBUG nova.virt.libvirt.driver [None req-51763a1a-8e52-4ad8-95a6-147f9ccb06d4 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 16:39:12 compute-0 nova_compute[254092]: 2025-11-25 16:39:12.692 254096 DEBUG oslo_concurrency.lockutils [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Acquiring lock "52c13ebd-df79-43b9-8d5f-e4bf4a2e0738" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:39:12 compute-0 nova_compute[254092]: 2025-11-25 16:39:12.693 254096 DEBUG oslo_concurrency.lockutils [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Lock "52c13ebd-df79-43b9-8d5f-e4bf4a2e0738" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:39:12 compute-0 nova_compute[254092]: 2025-11-25 16:39:12.693 254096 DEBUG oslo_concurrency.lockutils [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Acquiring lock "52c13ebd-df79-43b9-8d5f-e4bf4a2e0738-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:39:12 compute-0 nova_compute[254092]: 2025-11-25 16:39:12.693 254096 DEBUG oslo_concurrency.lockutils [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Lock "52c13ebd-df79-43b9-8d5f-e4bf4a2e0738-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:39:12 compute-0 nova_compute[254092]: 2025-11-25 16:39:12.694 254096 DEBUG oslo_concurrency.lockutils [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Lock "52c13ebd-df79-43b9-8d5f-e4bf4a2e0738-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:12 compute-0 nova_compute[254092]: 2025-11-25 16:39:12.695 254096 INFO nova.compute.manager [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Terminating instance
Nov 25 16:39:12 compute-0 nova_compute[254092]: 2025-11-25 16:39:12.696 254096 DEBUG nova.compute.manager [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:39:12 compute-0 ceph-mon[74985]: pgmap v1600: 321 pgs: 321 active+clean; 252 MiB data, 636 MiB used, 59 GiB / 60 GiB avail; 14 MiB/s rd, 6.8 MiB/s wr, 435 op/s
Nov 25 16:39:12 compute-0 kernel: tapa1b0e8cf-d5 (unregistering): left promiscuous mode
Nov 25 16:39:12 compute-0 NetworkManager[48891]: <info>  [1764088752.7741] device (tapa1b0e8cf-d5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:39:12 compute-0 ovn_controller[153477]: 2025-11-25T16:39:12Z|00500|binding|INFO|Releasing lport a1b0e8cf-d5e8-4b48-b591-6e3d110aff41 from this chassis (sb_readonly=0)
Nov 25 16:39:12 compute-0 ovn_controller[153477]: 2025-11-25T16:39:12Z|00501|binding|INFO|Setting lport a1b0e8cf-d5e8-4b48-b591-6e3d110aff41 down in Southbound
Nov 25 16:39:12 compute-0 ovn_controller[153477]: 2025-11-25T16:39:12Z|00502|binding|INFO|Removing iface tapa1b0e8cf-d5 ovn-installed in OVS
Nov 25 16:39:12 compute-0 nova_compute[254092]: 2025-11-25 16:39:12.828 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:12 compute-0 nova_compute[254092]: 2025-11-25 16:39:12.830 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:12.839 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:ad:75 10.100.0.9'], port_security=['fa:16:3e:d3:ad:75 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '52c13ebd-df79-43b9-8d5f-e4bf4a2e0738', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abda97f3-dcb7-42ee-af40-cfc387fadfda', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '80a627278d934815a3ea621e9d6402d2', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1671917c-f980-406a-8c8d-043f07074abb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f06ea60c-86e3-46a2-b0dc-014d0b0b5949, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=a1b0e8cf-d5e8-4b48-b591-6e3d110aff41) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:39:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:12.840 163338 INFO neutron.agent.ovn.metadata.agent [-] Port a1b0e8cf-d5e8-4b48-b591-6e3d110aff41 in datapath abda97f3-dcb7-42ee-af40-cfc387fadfda unbound from our chassis
Nov 25 16:39:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:12.841 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network abda97f3-dcb7-42ee-af40-cfc387fadfda, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:39:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:12.842 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ca78ea7f-4888-4848-9d73-9708d1087678]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:12.843 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-abda97f3-dcb7-42ee-af40-cfc387fadfda namespace which is not needed anymore
Nov 25 16:39:12 compute-0 nova_compute[254092]: 2025-11-25 16:39:12.848 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:12 compute-0 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000035.scope: Deactivated successfully.
Nov 25 16:39:12 compute-0 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000035.scope: Consumed 14.148s CPU time.
Nov 25 16:39:12 compute-0 systemd-machined[216343]: Machine qemu-62-instance-00000035 terminated.
Nov 25 16:39:12 compute-0 nova_compute[254092]: 2025-11-25 16:39:12.941 254096 INFO nova.virt.libvirt.driver [-] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Instance destroyed successfully.
Nov 25 16:39:12 compute-0 nova_compute[254092]: 2025-11-25 16:39:12.941 254096 DEBUG nova.objects.instance [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Lazy-loading 'resources' on Instance uuid 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:39:12 compute-0 nova_compute[254092]: 2025-11-25 16:39:12.961 254096 DEBUG nova.virt.libvirt.vif [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:38:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-876908934',display_name='tempest-ImagesOneServerTestJSON-server-876908934',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-876908934',id=53,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:38:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='80a627278d934815a3ea621e9d6402d2',ramdisk_id='',reservation_id='r-mt56k20g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerTestJSON-941588767',owner_user_name='tempest-ImagesOneServerTestJSON-941588767-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:39:06Z,user_data=None,user_id='34706428d3f94a60b53f4a535d408fd1',uuid=52c13ebd-df79-43b9-8d5f-e4bf4a2e0738,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a1b0e8cf-d5e8-4b48-b591-6e3d110aff41", "address": "fa:16:3e:d3:ad:75", "network": {"id": "abda97f3-dcb7-42ee-af40-cfc387fadfda", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-292420344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80a627278d934815a3ea621e9d6402d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1b0e8cf-d5", "ovs_interfaceid": "a1b0e8cf-d5e8-4b48-b591-6e3d110aff41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:39:12 compute-0 nova_compute[254092]: 2025-11-25 16:39:12.963 254096 DEBUG nova.network.os_vif_util [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Converting VIF {"id": "a1b0e8cf-d5e8-4b48-b591-6e3d110aff41", "address": "fa:16:3e:d3:ad:75", "network": {"id": "abda97f3-dcb7-42ee-af40-cfc387fadfda", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-292420344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "80a627278d934815a3ea621e9d6402d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1b0e8cf-d5", "ovs_interfaceid": "a1b0e8cf-d5e8-4b48-b591-6e3d110aff41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:39:12 compute-0 nova_compute[254092]: 2025-11-25 16:39:12.966 254096 DEBUG nova.network.os_vif_util [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:ad:75,bridge_name='br-int',has_traffic_filtering=True,id=a1b0e8cf-d5e8-4b48-b591-6e3d110aff41,network=Network(abda97f3-dcb7-42ee-af40-cfc387fadfda),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1b0e8cf-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:39:12 compute-0 nova_compute[254092]: 2025-11-25 16:39:12.967 254096 DEBUG os_vif [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:ad:75,bridge_name='br-int',has_traffic_filtering=True,id=a1b0e8cf-d5e8-4b48-b591-6e3d110aff41,network=Network(abda97f3-dcb7-42ee-af40-cfc387fadfda),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1b0e8cf-d5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:39:12 compute-0 nova_compute[254092]: 2025-11-25 16:39:12.970 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:12 compute-0 nova_compute[254092]: 2025-11-25 16:39:12.971 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa1b0e8cf-d5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:39:12 compute-0 nova_compute[254092]: 2025-11-25 16:39:12.973 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:12 compute-0 nova_compute[254092]: 2025-11-25 16:39:12.975 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:12 compute-0 nova_compute[254092]: 2025-11-25 16:39:12.979 254096 INFO os_vif [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:ad:75,bridge_name='br-int',has_traffic_filtering=True,id=a1b0e8cf-d5e8-4b48-b591-6e3d110aff41,network=Network(abda97f3-dcb7-42ee-af40-cfc387fadfda),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1b0e8cf-d5')
Nov 25 16:39:13 compute-0 neutron-haproxy-ovnmeta-abda97f3-dcb7-42ee-af40-cfc387fadfda[314427]: [NOTICE]   (314439) : haproxy version is 2.8.14-c23fe91
Nov 25 16:39:13 compute-0 neutron-haproxy-ovnmeta-abda97f3-dcb7-42ee-af40-cfc387fadfda[314427]: [NOTICE]   (314439) : path to executable is /usr/sbin/haproxy
Nov 25 16:39:13 compute-0 neutron-haproxy-ovnmeta-abda97f3-dcb7-42ee-af40-cfc387fadfda[314427]: [WARNING]  (314439) : Exiting Master process...
Nov 25 16:39:13 compute-0 neutron-haproxy-ovnmeta-abda97f3-dcb7-42ee-af40-cfc387fadfda[314427]: [ALERT]    (314439) : Current worker (314441) exited with code 143 (Terminated)
Nov 25 16:39:13 compute-0 neutron-haproxy-ovnmeta-abda97f3-dcb7-42ee-af40-cfc387fadfda[314427]: [WARNING]  (314439) : All workers exited. Exiting... (0)
Nov 25 16:39:13 compute-0 systemd[1]: libpod-ad0a0b8fc7ae04241a33b2e51504aca19cf04d644efc5475e0c2d4a3b12b451c.scope: Deactivated successfully.
Nov 25 16:39:13 compute-0 podman[315852]: 2025-11-25 16:39:13.014937531 +0000 UTC m=+0.053015199 container died ad0a0b8fc7ae04241a33b2e51504aca19cf04d644efc5475e0c2d4a3b12b451c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-abda97f3-dcb7-42ee-af40-cfc387fadfda, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:39:13 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ad0a0b8fc7ae04241a33b2e51504aca19cf04d644efc5475e0c2d4a3b12b451c-userdata-shm.mount: Deactivated successfully.
Nov 25 16:39:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-87d652c06ceeec69f5cac5c1cc50b800c7a964f2bc7a8ca4a2399870c6c25d11-merged.mount: Deactivated successfully.
Nov 25 16:39:13 compute-0 nova_compute[254092]: 2025-11-25 16:39:13.062 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:13 compute-0 podman[315852]: 2025-11-25 16:39:13.072872802 +0000 UTC m=+0.110950450 container cleanup ad0a0b8fc7ae04241a33b2e51504aca19cf04d644efc5475e0c2d4a3b12b451c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-abda97f3-dcb7-42ee-af40-cfc387fadfda, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:39:13 compute-0 systemd[1]: libpod-conmon-ad0a0b8fc7ae04241a33b2e51504aca19cf04d644efc5475e0c2d4a3b12b451c.scope: Deactivated successfully.
Nov 25 16:39:13 compute-0 podman[315898]: 2025-11-25 16:39:13.161011094 +0000 UTC m=+0.058455797 container remove ad0a0b8fc7ae04241a33b2e51504aca19cf04d644efc5475e0c2d4a3b12b451c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-abda97f3-dcb7-42ee-af40-cfc387fadfda, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:39:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:13.173 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cbc6027a-4549-4c28-af08-2fbe4d7701f1]: (4, ('Tue Nov 25 04:39:12 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-abda97f3-dcb7-42ee-af40-cfc387fadfda (ad0a0b8fc7ae04241a33b2e51504aca19cf04d644efc5475e0c2d4a3b12b451c)\nad0a0b8fc7ae04241a33b2e51504aca19cf04d644efc5475e0c2d4a3b12b451c\nTue Nov 25 04:39:13 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-abda97f3-dcb7-42ee-af40-cfc387fadfda (ad0a0b8fc7ae04241a33b2e51504aca19cf04d644efc5475e0c2d4a3b12b451c)\nad0a0b8fc7ae04241a33b2e51504aca19cf04d644efc5475e0c2d4a3b12b451c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:13.177 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[af556460-f02e-4046-a81b-0ceb010ef38b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:13.179 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabda97f3-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:39:13 compute-0 nova_compute[254092]: 2025-11-25 16:39:13.181 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:13 compute-0 kernel: tapabda97f3-d0: left promiscuous mode
Nov 25 16:39:13 compute-0 nova_compute[254092]: 2025-11-25 16:39:13.202 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:13.211 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bf16140f-9fee-4a5a-9c91-ed2e1d7cc9b5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:13.221 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[df8013d2-7308-4b02-86b5-faebb5250687]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:13.225 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[af1622db-23b7-41bc-ba07-a140f6f0375d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:13.244 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6d5b1f1e-dcce-4b5e-b1c9-61fdc1f8eefa]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517778, 'reachable_time': 34569, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315912, 'error': None, 'target': 'ovnmeta-abda97f3-dcb7-42ee-af40-cfc387fadfda', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:13.250 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-abda97f3-dcb7-42ee-af40-cfc387fadfda deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:39:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:13.250 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[4849b1ba-069e-4335-a131-0f04573fa07b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:13 compute-0 systemd[1]: run-netns-ovnmeta\x2dabda97f3\x2ddcb7\x2d42ee\x2daf40\x2dcfc387fadfda.mount: Deactivated successfully.
Nov 25 16:39:13 compute-0 nova_compute[254092]: 2025-11-25 16:39:13.423 254096 INFO nova.virt.libvirt.driver [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Deleting instance files /var/lib/nova/instances/52c13ebd-df79-43b9-8d5f-e4bf4a2e0738_del
Nov 25 16:39:13 compute-0 nova_compute[254092]: 2025-11-25 16:39:13.425 254096 INFO nova.virt.libvirt.driver [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Deletion of /var/lib/nova/instances/52c13ebd-df79-43b9-8d5f-e4bf4a2e0738_del complete
Nov 25 16:39:13 compute-0 nova_compute[254092]: 2025-11-25 16:39:13.475 254096 INFO nova.compute.manager [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Took 0.78 seconds to destroy the instance on the hypervisor.
Nov 25 16:39:13 compute-0 nova_compute[254092]: 2025-11-25 16:39:13.476 254096 DEBUG oslo.service.loopingcall [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:39:13 compute-0 nova_compute[254092]: 2025-11-25 16:39:13.476 254096 DEBUG nova.compute.manager [-] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:39:13 compute-0 nova_compute[254092]: 2025-11-25 16:39:13.476 254096 DEBUG nova.network.neutron [-] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:39:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1601: 321 pgs: 321 active+clean; 252 MiB data, 636 MiB used, 59 GiB / 60 GiB avail; 8.0 MiB/s rd, 4.0 MiB/s wr, 253 op/s
Nov 25 16:39:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:13.616 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:39:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:13.618 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:39:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:13.618 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.054 254096 DEBUG nova.compute.manager [req-37355c7d-8649-4eff-b68b-58e7bd9228b8 req-5c96baa2-a8c7-4677-b1bc-5d0689df5753 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Received event network-vif-unplugged-a1b0e8cf-d5e8-4b48-b591-6e3d110aff41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.055 254096 DEBUG oslo_concurrency.lockutils [req-37355c7d-8649-4eff-b68b-58e7bd9228b8 req-5c96baa2-a8c7-4677-b1bc-5d0689df5753 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "52c13ebd-df79-43b9-8d5f-e4bf4a2e0738-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.055 254096 DEBUG oslo_concurrency.lockutils [req-37355c7d-8649-4eff-b68b-58e7bd9228b8 req-5c96baa2-a8c7-4677-b1bc-5d0689df5753 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "52c13ebd-df79-43b9-8d5f-e4bf4a2e0738-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.055 254096 DEBUG oslo_concurrency.lockutils [req-37355c7d-8649-4eff-b68b-58e7bd9228b8 req-5c96baa2-a8c7-4677-b1bc-5d0689df5753 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "52c13ebd-df79-43b9-8d5f-e4bf4a2e0738-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.056 254096 DEBUG nova.compute.manager [req-37355c7d-8649-4eff-b68b-58e7bd9228b8 req-5c96baa2-a8c7-4677-b1bc-5d0689df5753 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] No waiting events found dispatching network-vif-unplugged-a1b0e8cf-d5e8-4b48-b591-6e3d110aff41 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.056 254096 DEBUG nova.compute.manager [req-37355c7d-8649-4eff-b68b-58e7bd9228b8 req-5c96baa2-a8c7-4677-b1bc-5d0689df5753 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Received event network-vif-unplugged-a1b0e8cf-d5e8-4b48-b591-6e3d110aff41 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.218 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:14.219 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:39:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:14.221 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.269 254096 DEBUG nova.compute.manager [req-9aa51a9b-823e-4e61-8fc5-3dc17ebdb069 req-ac23545b-7b6f-4049-9a14-5095618259c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Received event network-vif-plugged-c66763cf-d7ff-412d-89d8-fb6db38952f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.269 254096 DEBUG oslo_concurrency.lockutils [req-9aa51a9b-823e-4e61-8fc5-3dc17ebdb069 req-ac23545b-7b6f-4049-9a14-5095618259c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.270 254096 DEBUG oslo_concurrency.lockutils [req-9aa51a9b-823e-4e61-8fc5-3dc17ebdb069 req-ac23545b-7b6f-4049-9a14-5095618259c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.270 254096 DEBUG oslo_concurrency.lockutils [req-9aa51a9b-823e-4e61-8fc5-3dc17ebdb069 req-ac23545b-7b6f-4049-9a14-5095618259c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.270 254096 DEBUG nova.compute.manager [req-9aa51a9b-823e-4e61-8fc5-3dc17ebdb069 req-ac23545b-7b6f-4049-9a14-5095618259c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] No waiting events found dispatching network-vif-plugged-c66763cf-d7ff-412d-89d8-fb6db38952f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.270 254096 WARNING nova.compute.manager [req-9aa51a9b-823e-4e61-8fc5-3dc17ebdb069 req-ac23545b-7b6f-4049-9a14-5095618259c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Received unexpected event network-vif-plugged-c66763cf-d7ff-412d-89d8-fb6db38952f9 for instance with vm_state active and task_state None.
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.270 254096 DEBUG nova.compute.manager [req-9aa51a9b-823e-4e61-8fc5-3dc17ebdb069 req-ac23545b-7b6f-4049-9a14-5095618259c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Received event network-vif-plugged-c66763cf-d7ff-412d-89d8-fb6db38952f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.271 254096 DEBUG oslo_concurrency.lockutils [req-9aa51a9b-823e-4e61-8fc5-3dc17ebdb069 req-ac23545b-7b6f-4049-9a14-5095618259c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.271 254096 DEBUG oslo_concurrency.lockutils [req-9aa51a9b-823e-4e61-8fc5-3dc17ebdb069 req-ac23545b-7b6f-4049-9a14-5095618259c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.271 254096 DEBUG oslo_concurrency.lockutils [req-9aa51a9b-823e-4e61-8fc5-3dc17ebdb069 req-ac23545b-7b6f-4049-9a14-5095618259c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.271 254096 DEBUG nova.compute.manager [req-9aa51a9b-823e-4e61-8fc5-3dc17ebdb069 req-ac23545b-7b6f-4049-9a14-5095618259c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] No waiting events found dispatching network-vif-plugged-c66763cf-d7ff-412d-89d8-fb6db38952f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.271 254096 WARNING nova.compute.manager [req-9aa51a9b-823e-4e61-8fc5-3dc17ebdb069 req-ac23545b-7b6f-4049-9a14-5095618259c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Received unexpected event network-vif-plugged-c66763cf-d7ff-412d-89d8-fb6db38952f9 for instance with vm_state active and task_state None.
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.340 254096 DEBUG nova.network.neutron [-] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.371 254096 INFO nova.compute.manager [-] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Took 0.89 seconds to deallocate network for instance.
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.441 254096 DEBUG oslo_concurrency.lockutils [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Acquiring lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.442 254096 DEBUG oslo_concurrency.lockutils [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.442 254096 DEBUG oslo_concurrency.lockutils [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Acquiring lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.442 254096 DEBUG oslo_concurrency.lockutils [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.443 254096 DEBUG oslo_concurrency.lockutils [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.444 254096 INFO nova.compute.manager [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Terminating instance
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.446 254096 DEBUG nova.compute.manager [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.449 254096 DEBUG oslo_concurrency.lockutils [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.449 254096 DEBUG oslo_concurrency.lockutils [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.559 254096 DEBUG oslo_concurrency.processutils [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.618 254096 DEBUG nova.network.neutron [req-0597da81-faf8-4ddd-99ae-dd68ff7c6610 req-861b89c4-b0e9-4c96-a340-fbea895d032b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Updated VIF entry in instance network info cache for port c66763cf-d7ff-412d-89d8-fb6db38952f9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.621 254096 DEBUG nova.network.neutron [req-0597da81-faf8-4ddd-99ae-dd68ff7c6610 req-861b89c4-b0e9-4c96-a340-fbea895d032b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Updating instance_info_cache with network_info: [{"id": "a6f06f5d-486f-4039-a0cb-30b122e69258", "address": "fa:16:3e:76:39:37", "network": {"id": "d5808bee-5100-4cdf-b578-a1bc323dafe9", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-286790264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a92e4b86655441c59ead5a1bd83173e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6f06f5d-48", "ovs_interfaceid": "a6f06f5d-486f-4039-a0cb-30b122e69258", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c66763cf-d7ff-412d-89d8-fb6db38952f9", "address": "fa:16:3e:a6:7c:94", "network": {"id": "d5808bee-5100-4cdf-b578-a1bc323dafe9", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-286790264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"a92e4b86655441c59ead5a1bd83173e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66763cf-d7", "ovs_interfaceid": "c66763cf-d7ff-412d-89d8-fb6db38952f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:39:14 compute-0 kernel: tapa6f06f5d-48 (unregistering): left promiscuous mode
Nov 25 16:39:14 compute-0 NetworkManager[48891]: <info>  [1764088754.6549] device (tapa6f06f5d-48): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.661 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:14 compute-0 ovn_controller[153477]: 2025-11-25T16:39:14Z|00503|binding|INFO|Releasing lport a6f06f5d-486f-4039-a0cb-30b122e69258 from this chassis (sb_readonly=0)
Nov 25 16:39:14 compute-0 ovn_controller[153477]: 2025-11-25T16:39:14Z|00504|binding|INFO|Setting lport a6f06f5d-486f-4039-a0cb-30b122e69258 down in Southbound
Nov 25 16:39:14 compute-0 ovn_controller[153477]: 2025-11-25T16:39:14Z|00505|binding|INFO|Removing iface tapa6f06f5d-48 ovn-installed in OVS
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.664 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.668 254096 DEBUG oslo_concurrency.lockutils [req-0597da81-faf8-4ddd-99ae-dd68ff7c6610 req-861b89c4-b0e9-4c96-a340-fbea895d032b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:39:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:14.677 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:39:37 10.100.0.6'], port_security=['fa:16:3e:76:39:37 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d5808bee-5100-4cdf-b578-a1bc323dafe9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a92e4b86655441c59ead5a1bd83173e5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f22d4d9b-3d2b-4a51-a756-13e1e296ee57', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d18acd19-221e-4d09-97bf-0871da349301, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=a6f06f5d-486f-4039-a0cb-30b122e69258) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:39:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:14.679 163338 INFO neutron.agent.ovn.metadata.agent [-] Port a6f06f5d-486f-4039-a0cb-30b122e69258 in datapath d5808bee-5100-4cdf-b578-a1bc323dafe9 unbound from our chassis
Nov 25 16:39:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:14.680 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d5808bee-5100-4cdf-b578-a1bc323dafe9
Nov 25 16:39:14 compute-0 kernel: tapc66763cf-d7 (unregistering): left promiscuous mode
Nov 25 16:39:14 compute-0 NetworkManager[48891]: <info>  [1764088754.6889] device (tapc66763cf-d7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.691 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.696 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:14 compute-0 ovn_controller[153477]: 2025-11-25T16:39:14Z|00506|binding|INFO|Releasing lport c66763cf-d7ff-412d-89d8-fb6db38952f9 from this chassis (sb_readonly=0)
Nov 25 16:39:14 compute-0 ovn_controller[153477]: 2025-11-25T16:39:14Z|00507|binding|INFO|Setting lport c66763cf-d7ff-412d-89d8-fb6db38952f9 down in Southbound
Nov 25 16:39:14 compute-0 ovn_controller[153477]: 2025-11-25T16:39:14Z|00508|binding|INFO|Removing iface tapc66763cf-d7 ovn-installed in OVS
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.703 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:14.719 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a6:7c:94 10.100.0.9'], port_security=['fa:16:3e:a6:7c:94 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d5808bee-5100-4cdf-b578-a1bc323dafe9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a92e4b86655441c59ead5a1bd83173e5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f22d4d9b-3d2b-4a51-a756-13e1e296ee57', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d18acd19-221e-4d09-97bf-0871da349301, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=c66763cf-d7ff-412d-89d8-fb6db38952f9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:39:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:14.719 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c86b3baf-2a65-4517-bf90-202234b17ab1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.730 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:14 compute-0 ceph-mon[74985]: pgmap v1601: 321 pgs: 321 active+clean; 252 MiB data, 636 MiB used, 59 GiB / 60 GiB avail; 8.0 MiB/s rd, 4.0 MiB/s wr, 253 op/s
Nov 25 16:39:14 compute-0 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000036.scope: Deactivated successfully.
Nov 25 16:39:14 compute-0 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000036.scope: Consumed 13.563s CPU time.
Nov 25 16:39:14 compute-0 systemd-machined[216343]: Machine qemu-63-instance-00000036 terminated.
Nov 25 16:39:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:14.763 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[3568e660-5a36-417f-b6ea-2e4e5fdb1c4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:14.767 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[71d7ba19-1be9-4ae6-a377-6b53ed365017]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:14.800 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[4b814250-30c9-4eca-b52b-395ae8ce416e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:14.821 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2989b42f-dbb1-4006-8b7f-fd07675bb5b8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd5808bee-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:9e:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 140], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519782, 'reachable_time': 24363, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315949, 'error': None, 'target': 'ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:14.846 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d8049e4f-4dac-49f6-ac3f-1de260d47e56]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd5808bee-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 519795, 'tstamp': 519795}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315950, 'error': None, 'target': 'ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd5808bee-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 519798, 'tstamp': 519798}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315950, 'error': None, 'target': 'ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:14.847 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5808bee-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.849 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.861 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:14.862 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd5808bee-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:39:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:14.863 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:39:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:14.863 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd5808bee-50, col_values=(('external_ids', {'iface-id': 'ab0217e4-1718-4cef-9483-cef43176b686'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:39:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:14.864 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:39:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:14.865 163338 INFO neutron.agent.ovn.metadata.agent [-] Port c66763cf-d7ff-412d-89d8-fb6db38952f9 in datapath d5808bee-5100-4cdf-b578-a1bc323dafe9 unbound from our chassis
Nov 25 16:39:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:14.866 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d5808bee-5100-4cdf-b578-a1bc323dafe9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:39:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:14.867 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[023a8ac1-b0ca-40b8-aebf-19cebb710526]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:14.867 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9 namespace which is not needed anymore
Nov 25 16:39:14 compute-0 kernel: tapa6f06f5d-48: entered promiscuous mode
Nov 25 16:39:14 compute-0 NetworkManager[48891]: <info>  [1764088754.8717] manager: (tapa6f06f5d-48): new Tun device (/org/freedesktop/NetworkManager/Devices/220)
Nov 25 16:39:14 compute-0 kernel: tapa6f06f5d-48 (unregistering): left promiscuous mode
Nov 25 16:39:14 compute-0 ovn_controller[153477]: 2025-11-25T16:39:14Z|00509|binding|INFO|Claiming lport a6f06f5d-486f-4039-a0cb-30b122e69258 for this chassis.
Nov 25 16:39:14 compute-0 ovn_controller[153477]: 2025-11-25T16:39:14Z|00510|binding|INFO|a6f06f5d-486f-4039-a0cb-30b122e69258: Claiming fa:16:3e:76:39:37 10.100.0.6
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.884 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:14.891 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:39:37 10.100.0.6'], port_security=['fa:16:3e:76:39:37 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d5808bee-5100-4cdf-b578-a1bc323dafe9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a92e4b86655441c59ead5a1bd83173e5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f22d4d9b-3d2b-4a51-a756-13e1e296ee57', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d18acd19-221e-4d09-97bf-0871da349301, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=a6f06f5d-486f-4039-a0cb-30b122e69258) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:39:14 compute-0 NetworkManager[48891]: <info>  [1764088754.8931] manager: (tapc66763cf-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/221)
Nov 25 16:39:14 compute-0 ovn_controller[153477]: 2025-11-25T16:39:14Z|00511|binding|INFO|Releasing lport a6f06f5d-486f-4039-a0cb-30b122e69258 from this chassis (sb_readonly=0)
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.914 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:14.927 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:39:37 10.100.0.6'], port_security=['fa:16:3e:76:39:37 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d5808bee-5100-4cdf-b578-a1bc323dafe9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a92e4b86655441c59ead5a1bd83173e5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f22d4d9b-3d2b-4a51-a756-13e1e296ee57', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d18acd19-221e-4d09-97bf-0871da349301, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=a6f06f5d-486f-4039-a0cb-30b122e69258) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.933 254096 INFO nova.virt.libvirt.driver [-] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Instance destroyed successfully.
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.934 254096 DEBUG nova.objects.instance [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Lazy-loading 'resources' on Instance uuid 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.947 254096 DEBUG nova.virt.libvirt.vif [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:38:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-540246934',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-540246934',id=54,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:39:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a92e4b86655441c59ead5a1bd83173e5',ramdisk_id='',reservation_id='r-yc4qlfex',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project
_name='tempest-AttachInterfacesV270Test-1255379647',owner_user_name='tempest-AttachInterfacesV270Test-1255379647-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:39:02Z,user_data=None,user_id='96cb4b65d4074373a38534856574dc8f',uuid=97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a6f06f5d-486f-4039-a0cb-30b122e69258", "address": "fa:16:3e:76:39:37", "network": {"id": "d5808bee-5100-4cdf-b578-a1bc323dafe9", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-286790264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a92e4b86655441c59ead5a1bd83173e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6f06f5d-48", "ovs_interfaceid": "a6f06f5d-486f-4039-a0cb-30b122e69258", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.947 254096 DEBUG nova.network.os_vif_util [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Converting VIF {"id": "a6f06f5d-486f-4039-a0cb-30b122e69258", "address": "fa:16:3e:76:39:37", "network": {"id": "d5808bee-5100-4cdf-b578-a1bc323dafe9", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-286790264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a92e4b86655441c59ead5a1bd83173e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6f06f5d-48", "ovs_interfaceid": "a6f06f5d-486f-4039-a0cb-30b122e69258", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.948 254096 DEBUG nova.network.os_vif_util [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:76:39:37,bridge_name='br-int',has_traffic_filtering=True,id=a6f06f5d-486f-4039-a0cb-30b122e69258,network=Network(d5808bee-5100-4cdf-b578-a1bc323dafe9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6f06f5d-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.949 254096 DEBUG os_vif [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:39:37,bridge_name='br-int',has_traffic_filtering=True,id=a6f06f5d-486f-4039-a0cb-30b122e69258,network=Network(d5808bee-5100-4cdf-b578-a1bc323dafe9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6f06f5d-48') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.950 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.951 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa6f06f5d-48, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.952 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.957 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.960 254096 INFO os_vif [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:39:37,bridge_name='br-int',has_traffic_filtering=True,id=a6f06f5d-486f-4039-a0cb-30b122e69258,network=Network(d5808bee-5100-4cdf-b578-a1bc323dafe9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6f06f5d-48')
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.961 254096 DEBUG nova.virt.libvirt.vif [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:38:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-540246934',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-540246934',id=54,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:39:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a92e4b86655441c59ead5a1bd83173e5',ramdisk_id='',reservation_id='r-yc4qlfex',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project
_name='tempest-AttachInterfacesV270Test-1255379647',owner_user_name='tempest-AttachInterfacesV270Test-1255379647-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:39:02Z,user_data=None,user_id='96cb4b65d4074373a38534856574dc8f',uuid=97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c66763cf-d7ff-412d-89d8-fb6db38952f9", "address": "fa:16:3e:a6:7c:94", "network": {"id": "d5808bee-5100-4cdf-b578-a1bc323dafe9", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-286790264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a92e4b86655441c59ead5a1bd83173e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66763cf-d7", "ovs_interfaceid": "c66763cf-d7ff-412d-89d8-fb6db38952f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.961 254096 DEBUG nova.network.os_vif_util [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Converting VIF {"id": "c66763cf-d7ff-412d-89d8-fb6db38952f9", "address": "fa:16:3e:a6:7c:94", "network": {"id": "d5808bee-5100-4cdf-b578-a1bc323dafe9", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-286790264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a92e4b86655441c59ead5a1bd83173e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc66763cf-d7", "ovs_interfaceid": "c66763cf-d7ff-412d-89d8-fb6db38952f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.962 254096 DEBUG nova.network.os_vif_util [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:7c:94,bridge_name='br-int',has_traffic_filtering=True,id=c66763cf-d7ff-412d-89d8-fb6db38952f9,network=Network(d5808bee-5100-4cdf-b578-a1bc323dafe9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc66763cf-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.962 254096 DEBUG os_vif [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:7c:94,bridge_name='br-int',has_traffic_filtering=True,id=c66763cf-d7ff-412d-89d8-fb6db38952f9,network=Network(d5808bee-5100-4cdf-b578-a1bc323dafe9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc66763cf-d7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.965 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.965 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc66763cf-d7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.966 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.967 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:14 compute-0 nova_compute[254092]: 2025-11-25 16:39:14.970 254096 INFO os_vif [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:7c:94,bridge_name='br-int',has_traffic_filtering=True,id=c66763cf-d7ff-412d-89d8-fb6db38952f9,network=Network(d5808bee-5100-4cdf-b578-a1bc323dafe9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc66763cf-d7')
Nov 25 16:39:15 compute-0 neutron-haproxy-ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9[315230]: [NOTICE]   (315235) : haproxy version is 2.8.14-c23fe91
Nov 25 16:39:15 compute-0 neutron-haproxy-ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9[315230]: [NOTICE]   (315235) : path to executable is /usr/sbin/haproxy
Nov 25 16:39:15 compute-0 neutron-haproxy-ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9[315230]: [WARNING]  (315235) : Exiting Master process...
Nov 25 16:39:15 compute-0 neutron-haproxy-ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9[315230]: [ALERT]    (315235) : Current worker (315237) exited with code 143 (Terminated)
Nov 25 16:39:15 compute-0 neutron-haproxy-ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9[315230]: [WARNING]  (315235) : All workers exited. Exiting... (0)
Nov 25 16:39:15 compute-0 systemd[1]: libpod-8d0d55c069856f731cd8f8155c451ddd5465869f50fd985fd9c7bab54624cded.scope: Deactivated successfully.
Nov 25 16:39:15 compute-0 podman[315990]: 2025-11-25 16:39:15.049292922 +0000 UTC m=+0.056797423 container died 8d0d55c069856f731cd8f8155c451ddd5465869f50fd985fd9c7bab54624cded (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 16:39:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:39:15 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/602312147' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:39:15 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8d0d55c069856f731cd8f8155c451ddd5465869f50fd985fd9c7bab54624cded-userdata-shm.mount: Deactivated successfully.
Nov 25 16:39:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-14c248a764d5721c6e561451525ee03f5d19e082815bceb4132727d099f101a8-merged.mount: Deactivated successfully.
Nov 25 16:39:15 compute-0 nova_compute[254092]: 2025-11-25 16:39:15.163 254096 DEBUG oslo_concurrency.processutils [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:39:15 compute-0 nova_compute[254092]: 2025-11-25 16:39:15.169 254096 DEBUG nova.compute.provider_tree [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:39:15 compute-0 podman[315990]: 2025-11-25 16:39:15.179561366 +0000 UTC m=+0.187065867 container cleanup 8d0d55c069856f731cd8f8155c451ddd5465869f50fd985fd9c7bab54624cded (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:39:15 compute-0 systemd[1]: libpod-conmon-8d0d55c069856f731cd8f8155c451ddd5465869f50fd985fd9c7bab54624cded.scope: Deactivated successfully.
Nov 25 16:39:15 compute-0 nova_compute[254092]: 2025-11-25 16:39:15.197 254096 DEBUG nova.scheduler.client.report [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:39:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:15.222 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:39:15 compute-0 nova_compute[254092]: 2025-11-25 16:39:15.223 254096 DEBUG oslo_concurrency.lockutils [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:15 compute-0 nova_compute[254092]: 2025-11-25 16:39:15.260 254096 INFO nova.scheduler.client.report [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Deleted allocations for instance 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738
Nov 25 16:39:15 compute-0 podman[316039]: 2025-11-25 16:39:15.2674255 +0000 UTC m=+0.057524312 container remove 8d0d55c069856f731cd8f8155c451ddd5465869f50fd985fd9c7bab54624cded (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 16:39:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:15.278 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7e8916aa-b1a8-46c0-88ef-2550af144840]: (4, ('Tue Nov 25 04:39:14 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9 (8d0d55c069856f731cd8f8155c451ddd5465869f50fd985fd9c7bab54624cded)\n8d0d55c069856f731cd8f8155c451ddd5465869f50fd985fd9c7bab54624cded\nTue Nov 25 04:39:15 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9 (8d0d55c069856f731cd8f8155c451ddd5465869f50fd985fd9c7bab54624cded)\n8d0d55c069856f731cd8f8155c451ddd5465869f50fd985fd9c7bab54624cded\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:15.281 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[154dff44-5c27-4474-9dfe-ea6f090c8585]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:15.282 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5808bee-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:39:15 compute-0 nova_compute[254092]: 2025-11-25 16:39:15.284 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:15 compute-0 kernel: tapd5808bee-50: left promiscuous mode
Nov 25 16:39:15 compute-0 nova_compute[254092]: 2025-11-25 16:39:15.305 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:15.307 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a08bb265-eea7-4bcd-bcf8-b75420ec7ffd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:15 compute-0 nova_compute[254092]: 2025-11-25 16:39:15.328 254096 DEBUG oslo_concurrency.lockutils [None req-775e66b0-3e50-49c5-95c8-1b201905604f 34706428d3f94a60b53f4a535d408fd1 80a627278d934815a3ea621e9d6402d2 - - default default] Lock "52c13ebd-df79-43b9-8d5f-e4bf4a2e0738" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:15.329 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4bd5f481-31dd-4dd2-8c6f-4932846c7f5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:15.332 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4c00d867-5d4a-4de9-8ed7-2050ffb0817c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:15.351 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b1810142-368e-4cb4-9e47-fe1ec04b0259]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519773, 'reachable_time': 20090, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316055, 'error': None, 'target': 'ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:15 compute-0 systemd[1]: run-netns-ovnmeta\x2dd5808bee\x2d5100\x2d4cdf\x2db578\x2da1bc323dafe9.mount: Deactivated successfully.
Nov 25 16:39:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:15.355 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d5808bee-5100-4cdf-b578-a1bc323dafe9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:39:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:15.355 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[ea428feb-8e0b-41e8-8da0-c436e65e7869]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:15.358 163338 INFO neutron.agent.ovn.metadata.agent [-] Port a6f06f5d-486f-4039-a0cb-30b122e69258 in datapath d5808bee-5100-4cdf-b578-a1bc323dafe9 unbound from our chassis
Nov 25 16:39:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:15.359 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d5808bee-5100-4cdf-b578-a1bc323dafe9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:39:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:15.361 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8eae78a6-cb5b-451d-b3ed-ea3d90f544cb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:15.362 163338 INFO neutron.agent.ovn.metadata.agent [-] Port a6f06f5d-486f-4039-a0cb-30b122e69258 in datapath d5808bee-5100-4cdf-b578-a1bc323dafe9 unbound from our chassis
Nov 25 16:39:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:15.363 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d5808bee-5100-4cdf-b578-a1bc323dafe9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:39:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:15.364 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[71ecdab1-813e-434c-bb42-6a37dfde8808]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:39:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1602: 321 pgs: 321 active+clean; 164 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 4.5 MiB/s wr, 254 op/s
Nov 25 16:39:15 compute-0 nova_compute[254092]: 2025-11-25 16:39:15.509 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:39:15 compute-0 sudo[316057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:39:15 compute-0 sudo[316057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:39:15 compute-0 sudo[316057]: pam_unix(sudo:session): session closed for user root
Nov 25 16:39:15 compute-0 nova_compute[254092]: 2025-11-25 16:39:15.686 254096 INFO nova.virt.libvirt.driver [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Deleting instance files /var/lib/nova/instances/97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211_del
Nov 25 16:39:15 compute-0 nova_compute[254092]: 2025-11-25 16:39:15.688 254096 INFO nova.virt.libvirt.driver [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Deletion of /var/lib/nova/instances/97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211_del complete
Nov 25 16:39:15 compute-0 sudo[316082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:39:15 compute-0 sudo[316082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:39:15 compute-0 sudo[316082]: pam_unix(sudo:session): session closed for user root
Nov 25 16:39:15 compute-0 nova_compute[254092]: 2025-11-25 16:39:15.741 254096 INFO nova.compute.manager [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Took 1.30 seconds to destroy the instance on the hypervisor.
Nov 25 16:39:15 compute-0 nova_compute[254092]: 2025-11-25 16:39:15.742 254096 DEBUG oslo.service.loopingcall [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:39:15 compute-0 nova_compute[254092]: 2025-11-25 16:39:15.742 254096 DEBUG nova.compute.manager [-] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:39:15 compute-0 nova_compute[254092]: 2025-11-25 16:39:15.743 254096 DEBUG nova.network.neutron [-] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:39:15 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/602312147' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:39:15 compute-0 sudo[316107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:39:15 compute-0 sudo[316107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:39:15 compute-0 sudo[316107]: pam_unix(sudo:session): session closed for user root
Nov 25 16:39:15 compute-0 sudo[316132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 16:39:15 compute-0 sudo[316132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:39:16 compute-0 nova_compute[254092]: 2025-11-25 16:39:16.229 254096 DEBUG nova.compute.manager [req-fff1762e-8b92-42d4-9fb0-9cefd1e1b851 req-9056ee45-ce97-4178-b1ed-ef64f54bbb55 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Received event network-vif-plugged-a1b0e8cf-d5e8-4b48-b591-6e3d110aff41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:39:16 compute-0 nova_compute[254092]: 2025-11-25 16:39:16.229 254096 DEBUG oslo_concurrency.lockutils [req-fff1762e-8b92-42d4-9fb0-9cefd1e1b851 req-9056ee45-ce97-4178-b1ed-ef64f54bbb55 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "52c13ebd-df79-43b9-8d5f-e4bf4a2e0738-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:39:16 compute-0 nova_compute[254092]: 2025-11-25 16:39:16.230 254096 DEBUG oslo_concurrency.lockutils [req-fff1762e-8b92-42d4-9fb0-9cefd1e1b851 req-9056ee45-ce97-4178-b1ed-ef64f54bbb55 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "52c13ebd-df79-43b9-8d5f-e4bf4a2e0738-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:39:16 compute-0 nova_compute[254092]: 2025-11-25 16:39:16.230 254096 DEBUG oslo_concurrency.lockutils [req-fff1762e-8b92-42d4-9fb0-9cefd1e1b851 req-9056ee45-ce97-4178-b1ed-ef64f54bbb55 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "52c13ebd-df79-43b9-8d5f-e4bf4a2e0738-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:16 compute-0 nova_compute[254092]: 2025-11-25 16:39:16.230 254096 DEBUG nova.compute.manager [req-fff1762e-8b92-42d4-9fb0-9cefd1e1b851 req-9056ee45-ce97-4178-b1ed-ef64f54bbb55 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] No waiting events found dispatching network-vif-plugged-a1b0e8cf-d5e8-4b48-b591-6e3d110aff41 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:39:16 compute-0 nova_compute[254092]: 2025-11-25 16:39:16.230 254096 WARNING nova.compute.manager [req-fff1762e-8b92-42d4-9fb0-9cefd1e1b851 req-9056ee45-ce97-4178-b1ed-ef64f54bbb55 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Received unexpected event network-vif-plugged-a1b0e8cf-d5e8-4b48-b591-6e3d110aff41 for instance with vm_state deleted and task_state None.
Nov 25 16:39:16 compute-0 nova_compute[254092]: 2025-11-25 16:39:16.230 254096 DEBUG nova.compute.manager [req-fff1762e-8b92-42d4-9fb0-9cefd1e1b851 req-9056ee45-ce97-4178-b1ed-ef64f54bbb55 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Received event network-vif-deleted-a1b0e8cf-d5e8-4b48-b591-6e3d110aff41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:39:16 compute-0 sudo[316132]: pam_unix(sudo:session): session closed for user root
Nov 25 16:39:16 compute-0 nova_compute[254092]: 2025-11-25 16:39:16.356 254096 DEBUG nova.compute.manager [req-1646e150-c2c0-45f5-af28-fea81ae6b872 req-9c21d158-e76e-499c-af08-76fc5d1af608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Received event network-vif-unplugged-a6f06f5d-486f-4039-a0cb-30b122e69258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:39:16 compute-0 nova_compute[254092]: 2025-11-25 16:39:16.357 254096 DEBUG oslo_concurrency.lockutils [req-1646e150-c2c0-45f5-af28-fea81ae6b872 req-9c21d158-e76e-499c-af08-76fc5d1af608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:39:16 compute-0 nova_compute[254092]: 2025-11-25 16:39:16.357 254096 DEBUG oslo_concurrency.lockutils [req-1646e150-c2c0-45f5-af28-fea81ae6b872 req-9c21d158-e76e-499c-af08-76fc5d1af608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:39:16 compute-0 nova_compute[254092]: 2025-11-25 16:39:16.357 254096 DEBUG oslo_concurrency.lockutils [req-1646e150-c2c0-45f5-af28-fea81ae6b872 req-9c21d158-e76e-499c-af08-76fc5d1af608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:16 compute-0 nova_compute[254092]: 2025-11-25 16:39:16.357 254096 DEBUG nova.compute.manager [req-1646e150-c2c0-45f5-af28-fea81ae6b872 req-9c21d158-e76e-499c-af08-76fc5d1af608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] No waiting events found dispatching network-vif-unplugged-a6f06f5d-486f-4039-a0cb-30b122e69258 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:39:16 compute-0 nova_compute[254092]: 2025-11-25 16:39:16.357 254096 DEBUG nova.compute.manager [req-1646e150-c2c0-45f5-af28-fea81ae6b872 req-9c21d158-e76e-499c-af08-76fc5d1af608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Received event network-vif-unplugged-a6f06f5d-486f-4039-a0cb-30b122e69258 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:39:16 compute-0 nova_compute[254092]: 2025-11-25 16:39:16.358 254096 DEBUG nova.compute.manager [req-1646e150-c2c0-45f5-af28-fea81ae6b872 req-9c21d158-e76e-499c-af08-76fc5d1af608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Received event network-vif-plugged-a6f06f5d-486f-4039-a0cb-30b122e69258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:39:16 compute-0 nova_compute[254092]: 2025-11-25 16:39:16.358 254096 DEBUG oslo_concurrency.lockutils [req-1646e150-c2c0-45f5-af28-fea81ae6b872 req-9c21d158-e76e-499c-af08-76fc5d1af608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:39:16 compute-0 nova_compute[254092]: 2025-11-25 16:39:16.358 254096 DEBUG oslo_concurrency.lockutils [req-1646e150-c2c0-45f5-af28-fea81ae6b872 req-9c21d158-e76e-499c-af08-76fc5d1af608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:39:16 compute-0 nova_compute[254092]: 2025-11-25 16:39:16.358 254096 DEBUG oslo_concurrency.lockutils [req-1646e150-c2c0-45f5-af28-fea81ae6b872 req-9c21d158-e76e-499c-af08-76fc5d1af608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:16 compute-0 nova_compute[254092]: 2025-11-25 16:39:16.358 254096 DEBUG nova.compute.manager [req-1646e150-c2c0-45f5-af28-fea81ae6b872 req-9c21d158-e76e-499c-af08-76fc5d1af608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] No waiting events found dispatching network-vif-plugged-a6f06f5d-486f-4039-a0cb-30b122e69258 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:39:16 compute-0 nova_compute[254092]: 2025-11-25 16:39:16.358 254096 WARNING nova.compute.manager [req-1646e150-c2c0-45f5-af28-fea81ae6b872 req-9c21d158-e76e-499c-af08-76fc5d1af608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Received unexpected event network-vif-plugged-a6f06f5d-486f-4039-a0cb-30b122e69258 for instance with vm_state active and task_state deleting.
Nov 25 16:39:16 compute-0 nova_compute[254092]: 2025-11-25 16:39:16.358 254096 DEBUG nova.compute.manager [req-1646e150-c2c0-45f5-af28-fea81ae6b872 req-9c21d158-e76e-499c-af08-76fc5d1af608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Received event network-vif-unplugged-c66763cf-d7ff-412d-89d8-fb6db38952f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:39:16 compute-0 nova_compute[254092]: 2025-11-25 16:39:16.359 254096 DEBUG oslo_concurrency.lockutils [req-1646e150-c2c0-45f5-af28-fea81ae6b872 req-9c21d158-e76e-499c-af08-76fc5d1af608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:39:16 compute-0 nova_compute[254092]: 2025-11-25 16:39:16.359 254096 DEBUG oslo_concurrency.lockutils [req-1646e150-c2c0-45f5-af28-fea81ae6b872 req-9c21d158-e76e-499c-af08-76fc5d1af608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:39:16 compute-0 nova_compute[254092]: 2025-11-25 16:39:16.359 254096 DEBUG oslo_concurrency.lockutils [req-1646e150-c2c0-45f5-af28-fea81ae6b872 req-9c21d158-e76e-499c-af08-76fc5d1af608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:16 compute-0 nova_compute[254092]: 2025-11-25 16:39:16.359 254096 DEBUG nova.compute.manager [req-1646e150-c2c0-45f5-af28-fea81ae6b872 req-9c21d158-e76e-499c-af08-76fc5d1af608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] No waiting events found dispatching network-vif-unplugged-c66763cf-d7ff-412d-89d8-fb6db38952f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:39:16 compute-0 nova_compute[254092]: 2025-11-25 16:39:16.359 254096 DEBUG nova.compute.manager [req-1646e150-c2c0-45f5-af28-fea81ae6b872 req-9c21d158-e76e-499c-af08-76fc5d1af608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Received event network-vif-unplugged-c66763cf-d7ff-412d-89d8-fb6db38952f9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:39:16 compute-0 nova_compute[254092]: 2025-11-25 16:39:16.359 254096 DEBUG nova.compute.manager [req-1646e150-c2c0-45f5-af28-fea81ae6b872 req-9c21d158-e76e-499c-af08-76fc5d1af608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Received event network-vif-plugged-c66763cf-d7ff-412d-89d8-fb6db38952f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:39:16 compute-0 nova_compute[254092]: 2025-11-25 16:39:16.360 254096 DEBUG oslo_concurrency.lockutils [req-1646e150-c2c0-45f5-af28-fea81ae6b872 req-9c21d158-e76e-499c-af08-76fc5d1af608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:39:16 compute-0 nova_compute[254092]: 2025-11-25 16:39:16.360 254096 DEBUG oslo_concurrency.lockutils [req-1646e150-c2c0-45f5-af28-fea81ae6b872 req-9c21d158-e76e-499c-af08-76fc5d1af608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:39:16 compute-0 nova_compute[254092]: 2025-11-25 16:39:16.360 254096 DEBUG oslo_concurrency.lockutils [req-1646e150-c2c0-45f5-af28-fea81ae6b872 req-9c21d158-e76e-499c-af08-76fc5d1af608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:16 compute-0 nova_compute[254092]: 2025-11-25 16:39:16.360 254096 DEBUG nova.compute.manager [req-1646e150-c2c0-45f5-af28-fea81ae6b872 req-9c21d158-e76e-499c-af08-76fc5d1af608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] No waiting events found dispatching network-vif-plugged-c66763cf-d7ff-412d-89d8-fb6db38952f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:39:16 compute-0 nova_compute[254092]: 2025-11-25 16:39:16.360 254096 WARNING nova.compute.manager [req-1646e150-c2c0-45f5-af28-fea81ae6b872 req-9c21d158-e76e-499c-af08-76fc5d1af608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Received unexpected event network-vif-plugged-c66763cf-d7ff-412d-89d8-fb6db38952f9 for instance with vm_state active and task_state deleting.
Nov 25 16:39:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:39:16 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:39:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 16:39:16 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:39:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 16:39:16 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:39:16 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 106d5375-87cc-4d10-bce1-452950ee8def does not exist
Nov 25 16:39:16 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev c6e90fd5-c60e-4eae-a94e-4d82cfb727aa does not exist
Nov 25 16:39:16 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev ff225544-2378-4eb1-80ec-4623e5a3736d does not exist
Nov 25 16:39:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 16:39:16 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:39:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 16:39:16 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:39:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:39:16 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:39:16 compute-0 sudo[316188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:39:16 compute-0 sudo[316188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:39:16 compute-0 sudo[316188]: pam_unix(sudo:session): session closed for user root
Nov 25 16:39:16 compute-0 sudo[316213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:39:16 compute-0 sudo[316213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:39:16 compute-0 sudo[316213]: pam_unix(sudo:session): session closed for user root
Nov 25 16:39:16 compute-0 sudo[316238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:39:16 compute-0 sudo[316238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:39:16 compute-0 sudo[316238]: pam_unix(sudo:session): session closed for user root
Nov 25 16:39:16 compute-0 sudo[316263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 16:39:16 compute-0 sudo[316263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:39:16 compute-0 ceph-mon[74985]: pgmap v1602: 321 pgs: 321 active+clean; 164 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 4.5 MiB/s wr, 254 op/s
Nov 25 16:39:16 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:39:16 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:39:16 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:39:16 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:39:16 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:39:16 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:39:17 compute-0 podman[316328]: 2025-11-25 16:39:17.11338773 +0000 UTC m=+0.054280314 container create c75c84d1b57d76a952daf0e07d199b15d14b0618deaf1ae47df82227d87921e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_liskov, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 16:39:17 compute-0 systemd[1]: Started libpod-conmon-c75c84d1b57d76a952daf0e07d199b15d14b0618deaf1ae47df82227d87921e9.scope.
Nov 25 16:39:17 compute-0 podman[316328]: 2025-11-25 16:39:17.083219312 +0000 UTC m=+0.024111916 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:39:17 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:39:17 compute-0 podman[316328]: 2025-11-25 16:39:17.226874109 +0000 UTC m=+0.167766723 container init c75c84d1b57d76a952daf0e07d199b15d14b0618deaf1ae47df82227d87921e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:39:17 compute-0 podman[316328]: 2025-11-25 16:39:17.238027761 +0000 UTC m=+0.178920355 container start c75c84d1b57d76a952daf0e07d199b15d14b0618deaf1ae47df82227d87921e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:39:17 compute-0 podman[316328]: 2025-11-25 16:39:17.242884923 +0000 UTC m=+0.183777507 container attach c75c84d1b57d76a952daf0e07d199b15d14b0618deaf1ae47df82227d87921e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_liskov, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 16:39:17 compute-0 elated_liskov[316344]: 167 167
Nov 25 16:39:17 compute-0 systemd[1]: libpod-c75c84d1b57d76a952daf0e07d199b15d14b0618deaf1ae47df82227d87921e9.scope: Deactivated successfully.
Nov 25 16:39:17 compute-0 podman[316328]: 2025-11-25 16:39:17.244780904 +0000 UTC m=+0.185673498 container died c75c84d1b57d76a952daf0e07d199b15d14b0618deaf1ae47df82227d87921e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:39:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-fba6000f2dee9160c206b843f23a29c761c41ddd68712fa2809909027d1a069d-merged.mount: Deactivated successfully.
Nov 25 16:39:17 compute-0 podman[316328]: 2025-11-25 16:39:17.28479378 +0000 UTC m=+0.225686364 container remove c75c84d1b57d76a952daf0e07d199b15d14b0618deaf1ae47df82227d87921e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_liskov, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 16:39:17 compute-0 systemd[1]: libpod-conmon-c75c84d1b57d76a952daf0e07d199b15d14b0618deaf1ae47df82227d87921e9.scope: Deactivated successfully.
Nov 25 16:39:17 compute-0 nova_compute[254092]: 2025-11-25 16:39:17.352 254096 DEBUG nova.network.neutron [-] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:39:17 compute-0 nova_compute[254092]: 2025-11-25 16:39:17.373 254096 INFO nova.compute.manager [-] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Took 1.63 seconds to deallocate network for instance.
Nov 25 16:39:17 compute-0 nova_compute[254092]: 2025-11-25 16:39:17.421 254096 DEBUG oslo_concurrency.lockutils [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:39:17 compute-0 nova_compute[254092]: 2025-11-25 16:39:17.422 254096 DEBUG oslo_concurrency.lockutils [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:39:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1603: 321 pgs: 321 active+clean; 129 MiB data, 568 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.7 MiB/s wr, 202 op/s
Nov 25 16:39:17 compute-0 nova_compute[254092]: 2025-11-25 16:39:17.493 254096 DEBUG oslo_concurrency.processutils [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:39:17 compute-0 podman[316368]: 2025-11-25 16:39:17.495048294 +0000 UTC m=+0.057213693 container create 768e7ef8ca16a19c49f497477a07ba41f7cea40ceb2b9f8c06f0232343e8dc52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jackson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:39:17 compute-0 systemd[1]: Started libpod-conmon-768e7ef8ca16a19c49f497477a07ba41f7cea40ceb2b9f8c06f0232343e8dc52.scope.
Nov 25 16:39:17 compute-0 podman[316368]: 2025-11-25 16:39:17.471527166 +0000 UTC m=+0.033692665 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:39:17 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:39:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5b72d55348ef279e0dcaa8b08b7512a27ff610f36f354f0ec2858e8ceaa1905/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:39:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5b72d55348ef279e0dcaa8b08b7512a27ff610f36f354f0ec2858e8ceaa1905/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:39:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5b72d55348ef279e0dcaa8b08b7512a27ff610f36f354f0ec2858e8ceaa1905/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:39:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5b72d55348ef279e0dcaa8b08b7512a27ff610f36f354f0ec2858e8ceaa1905/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:39:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5b72d55348ef279e0dcaa8b08b7512a27ff610f36f354f0ec2858e8ceaa1905/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 16:39:17 compute-0 podman[316368]: 2025-11-25 16:39:17.615009979 +0000 UTC m=+0.177175398 container init 768e7ef8ca16a19c49f497477a07ba41f7cea40ceb2b9f8c06f0232343e8dc52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jackson, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 16:39:17 compute-0 podman[316368]: 2025-11-25 16:39:17.62646865 +0000 UTC m=+0.188634049 container start 768e7ef8ca16a19c49f497477a07ba41f7cea40ceb2b9f8c06f0232343e8dc52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jackson, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 16:39:17 compute-0 podman[316368]: 2025-11-25 16:39:17.631932618 +0000 UTC m=+0.194098057 container attach 768e7ef8ca16a19c49f497477a07ba41f7cea40ceb2b9f8c06f0232343e8dc52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:39:18 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:39:18 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3902064392' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:39:18 compute-0 nova_compute[254092]: 2025-11-25 16:39:18.035 254096 DEBUG oslo_concurrency.processutils [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:39:18 compute-0 nova_compute[254092]: 2025-11-25 16:39:18.043 254096 DEBUG nova.compute.provider_tree [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:39:18 compute-0 nova_compute[254092]: 2025-11-25 16:39:18.067 254096 DEBUG nova.scheduler.client.report [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:39:18 compute-0 nova_compute[254092]: 2025-11-25 16:39:18.072 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:18 compute-0 nova_compute[254092]: 2025-11-25 16:39:18.093 254096 DEBUG oslo_concurrency.lockutils [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:18 compute-0 nova_compute[254092]: 2025-11-25 16:39:18.126 254096 INFO nova.scheduler.client.report [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Deleted allocations for instance 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211
Nov 25 16:39:18 compute-0 nova_compute[254092]: 2025-11-25 16:39:18.198 254096 DEBUG oslo_concurrency.lockutils [None req-2202d48b-6847-442e-a893-06d842053929 96cb4b65d4074373a38534856574dc8f a92e4b86655441c59ead5a1bd83173e5 - - default default] Lock "97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:18 compute-0 nova_compute[254092]: 2025-11-25 16:39:18.445 254096 DEBUG nova.compute.manager [req-691e2723-5830-47bd-a30c-aba43b4bcf6a req-bfef7b54-3606-465b-b623-c83929669519 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Received event network-vif-deleted-a6f06f5d-486f-4039-a0cb-30b122e69258 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:39:18 compute-0 nova_compute[254092]: 2025-11-25 16:39:18.447 254096 DEBUG nova.compute.manager [req-691e2723-5830-47bd-a30c-aba43b4bcf6a req-bfef7b54-3606-465b-b623-c83929669519 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Received event network-vif-deleted-c66763cf-d7ff-412d-89d8-fb6db38952f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:39:18 compute-0 hardcore_jackson[316386]: --> passed data devices: 0 physical, 3 LVM
Nov 25 16:39:18 compute-0 hardcore_jackson[316386]: --> relative data size: 1.0
Nov 25 16:39:18 compute-0 hardcore_jackson[316386]: --> All data devices are unavailable
Nov 25 16:39:18 compute-0 ceph-mon[74985]: pgmap v1603: 321 pgs: 321 active+clean; 129 MiB data, 568 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.7 MiB/s wr, 202 op/s
Nov 25 16:39:18 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3902064392' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:39:18 compute-0 systemd[1]: libpod-768e7ef8ca16a19c49f497477a07ba41f7cea40ceb2b9f8c06f0232343e8dc52.scope: Deactivated successfully.
Nov 25 16:39:18 compute-0 systemd[1]: libpod-768e7ef8ca16a19c49f497477a07ba41f7cea40ceb2b9f8c06f0232343e8dc52.scope: Consumed 1.108s CPU time.
Nov 25 16:39:18 compute-0 conmon[316386]: conmon 768e7ef8ca16a19c49f4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-768e7ef8ca16a19c49f497477a07ba41f7cea40ceb2b9f8c06f0232343e8dc52.scope/container/memory.events
Nov 25 16:39:18 compute-0 podman[316368]: 2025-11-25 16:39:18.798361672 +0000 UTC m=+1.360527081 container died 768e7ef8ca16a19c49f497477a07ba41f7cea40ceb2b9f8c06f0232343e8dc52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jackson, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 16:39:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-a5b72d55348ef279e0dcaa8b08b7512a27ff610f36f354f0ec2858e8ceaa1905-merged.mount: Deactivated successfully.
Nov 25 16:39:18 compute-0 podman[316368]: 2025-11-25 16:39:18.856609963 +0000 UTC m=+1.418775362 container remove 768e7ef8ca16a19c49f497477a07ba41f7cea40ceb2b9f8c06f0232343e8dc52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jackson, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:39:18 compute-0 systemd[1]: libpod-conmon-768e7ef8ca16a19c49f497477a07ba41f7cea40ceb2b9f8c06f0232343e8dc52.scope: Deactivated successfully.
Nov 25 16:39:18 compute-0 sudo[316263]: pam_unix(sudo:session): session closed for user root
Nov 25 16:39:18 compute-0 sudo[316447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:39:18 compute-0 sudo[316447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:39:18 compute-0 sudo[316447]: pam_unix(sudo:session): session closed for user root
Nov 25 16:39:19 compute-0 sudo[316472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:39:19 compute-0 sudo[316472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:39:19 compute-0 sudo[316472]: pam_unix(sudo:session): session closed for user root
Nov 25 16:39:19 compute-0 sudo[316497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:39:19 compute-0 sudo[316497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:39:19 compute-0 sudo[316497]: pam_unix(sudo:session): session closed for user root
Nov 25 16:39:19 compute-0 sudo[316522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 16:39:19 compute-0 sudo[316522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:39:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1604: 321 pgs: 321 active+clean; 129 MiB data, 568 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.7 MiB/s wr, 202 op/s
Nov 25 16:39:19 compute-0 podman[316585]: 2025-11-25 16:39:19.614899584 +0000 UTC m=+0.069301410 container create 24559e262c3515e7ed01bddb8cc94345e656186d90d9ee5889262744680ac65a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sanderson, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Nov 25 16:39:19 compute-0 podman[316585]: 2025-11-25 16:39:19.577420738 +0000 UTC m=+0.031822634 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:39:19 compute-0 systemd[1]: Started libpod-conmon-24559e262c3515e7ed01bddb8cc94345e656186d90d9ee5889262744680ac65a.scope.
Nov 25 16:39:19 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:39:19 compute-0 podman[316585]: 2025-11-25 16:39:19.734451728 +0000 UTC m=+0.188853594 container init 24559e262c3515e7ed01bddb8cc94345e656186d90d9ee5889262744680ac65a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sanderson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 16:39:19 compute-0 podman[316585]: 2025-11-25 16:39:19.744120871 +0000 UTC m=+0.198522687 container start 24559e262c3515e7ed01bddb8cc94345e656186d90d9ee5889262744680ac65a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sanderson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:39:19 compute-0 eloquent_sanderson[316602]: 167 167
Nov 25 16:39:19 compute-0 systemd[1]: libpod-24559e262c3515e7ed01bddb8cc94345e656186d90d9ee5889262744680ac65a.scope: Deactivated successfully.
Nov 25 16:39:19 compute-0 podman[316585]: 2025-11-25 16:39:19.75112596 +0000 UTC m=+0.205527806 container attach 24559e262c3515e7ed01bddb8cc94345e656186d90d9ee5889262744680ac65a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sanderson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 16:39:19 compute-0 podman[316585]: 2025-11-25 16:39:19.751984383 +0000 UTC m=+0.206386219 container died 24559e262c3515e7ed01bddb8cc94345e656186d90d9ee5889262744680ac65a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 16:39:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-610f4ac3aeb6bfcbdda3a911d2b1660e488f72926cca6c9ff92b125b80f60428-merged.mount: Deactivated successfully.
Nov 25 16:39:19 compute-0 podman[316585]: 2025-11-25 16:39:19.821177471 +0000 UTC m=+0.275579287 container remove 24559e262c3515e7ed01bddb8cc94345e656186d90d9ee5889262744680ac65a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 16:39:19 compute-0 systemd[1]: libpod-conmon-24559e262c3515e7ed01bddb8cc94345e656186d90d9ee5889262744680ac65a.scope: Deactivated successfully.
Nov 25 16:39:19 compute-0 nova_compute[254092]: 2025-11-25 16:39:19.967 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:20 compute-0 podman[316627]: 2025-11-25 16:39:20.025847463 +0000 UTC m=+0.038589638 container create dc2e663bb5d2ce53789fac2e626bf81aa10f394d28e06bed8c26f7a2972cfec3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_brahmagupta, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:39:20 compute-0 systemd[1]: Started libpod-conmon-dc2e663bb5d2ce53789fac2e626bf81aa10f394d28e06bed8c26f7a2972cfec3.scope.
Nov 25 16:39:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:39:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c8be19d1e00e61cb365f739483d5171911b1e03a3510a345a191aa4b903f3aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:39:20 compute-0 podman[316627]: 2025-11-25 16:39:20.010142238 +0000 UTC m=+0.022884443 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:39:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c8be19d1e00e61cb365f739483d5171911b1e03a3510a345a191aa4b903f3aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:39:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c8be19d1e00e61cb365f739483d5171911b1e03a3510a345a191aa4b903f3aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:39:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c8be19d1e00e61cb365f739483d5171911b1e03a3510a345a191aa4b903f3aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:39:20 compute-0 podman[316627]: 2025-11-25 16:39:20.126617867 +0000 UTC m=+0.139360062 container init dc2e663bb5d2ce53789fac2e626bf81aa10f394d28e06bed8c26f7a2972cfec3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:39:20 compute-0 podman[316627]: 2025-11-25 16:39:20.13999371 +0000 UTC m=+0.152735885 container start dc2e663bb5d2ce53789fac2e626bf81aa10f394d28e06bed8c26f7a2972cfec3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_brahmagupta, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 16:39:20 compute-0 podman[316627]: 2025-11-25 16:39:20.146473816 +0000 UTC m=+0.159216021 container attach dc2e663bb5d2ce53789fac2e626bf81aa10f394d28e06bed8c26f7a2972cfec3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_brahmagupta, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:39:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:39:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e214 do_prune osdmap full prune enabled
Nov 25 16:39:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e215 e215: 3 total, 3 up, 3 in
Nov 25 16:39:20 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e215: 3 total, 3 up, 3 in
Nov 25 16:39:20 compute-0 ovn_controller[153477]: 2025-11-25T16:39:20Z|00512|binding|INFO|Releasing lport d55c2366-e0b4-4cb8-86a6-03b9430fb138 from this chassis (sb_readonly=0)
Nov 25 16:39:20 compute-0 nova_compute[254092]: 2025-11-25 16:39:20.664 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:20 compute-0 ceph-mon[74985]: pgmap v1604: 321 pgs: 321 active+clean; 129 MiB data, 568 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.7 MiB/s wr, 202 op/s
Nov 25 16:39:20 compute-0 ceph-mon[74985]: osdmap e215: 3 total, 3 up, 3 in
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]: {
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:     "0": [
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:         {
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:             "devices": [
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:                 "/dev/loop3"
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:             ],
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:             "lv_name": "ceph_lv0",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:             "lv_size": "21470642176",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:             "name": "ceph_lv0",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:             "tags": {
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:                 "ceph.cluster_name": "ceph",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:                 "ceph.crush_device_class": "",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:                 "ceph.encrypted": "0",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:                 "ceph.osd_id": "0",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:                 "ceph.type": "block",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:                 "ceph.vdo": "0"
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:             },
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:             "type": "block",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:             "vg_name": "ceph_vg0"
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:         }
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:     ],
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:     "1": [
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:         {
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:             "devices": [
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:                 "/dev/loop4"
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:             ],
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:             "lv_name": "ceph_lv1",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:             "lv_size": "21470642176",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:             "name": "ceph_lv1",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:             "tags": {
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:                 "ceph.cluster_name": "ceph",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:                 "ceph.crush_device_class": "",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:                 "ceph.encrypted": "0",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:                 "ceph.osd_id": "1",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:                 "ceph.type": "block",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:                 "ceph.vdo": "0"
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:             },
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:             "type": "block",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:             "vg_name": "ceph_vg1"
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:         }
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:     ],
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:     "2": [
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:         {
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:             "devices": [
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:                 "/dev/loop5"
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:             ],
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:             "lv_name": "ceph_lv2",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:             "lv_size": "21470642176",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:             "name": "ceph_lv2",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:             "tags": {
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:                 "ceph.cluster_name": "ceph",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:                 "ceph.crush_device_class": "",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:                 "ceph.encrypted": "0",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:                 "ceph.osd_id": "2",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:                 "ceph.type": "block",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:                 "ceph.vdo": "0"
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:             },
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:             "type": "block",
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:             "vg_name": "ceph_vg2"
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:         }
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]:     ]
Nov 25 16:39:20 compute-0 hungry_brahmagupta[316643]: }
Nov 25 16:39:20 compute-0 systemd[1]: libpod-dc2e663bb5d2ce53789fac2e626bf81aa10f394d28e06bed8c26f7a2972cfec3.scope: Deactivated successfully.
Nov 25 16:39:20 compute-0 podman[316627]: 2025-11-25 16:39:20.963666856 +0000 UTC m=+0.976409041 container died dc2e663bb5d2ce53789fac2e626bf81aa10f394d28e06bed8c26f7a2972cfec3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 16:39:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c8be19d1e00e61cb365f739483d5171911b1e03a3510a345a191aa4b903f3aa-merged.mount: Deactivated successfully.
Nov 25 16:39:21 compute-0 podman[316627]: 2025-11-25 16:39:21.041773435 +0000 UTC m=+1.054515610 container remove dc2e663bb5d2ce53789fac2e626bf81aa10f394d28e06bed8c26f7a2972cfec3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 16:39:21 compute-0 systemd[1]: libpod-conmon-dc2e663bb5d2ce53789fac2e626bf81aa10f394d28e06bed8c26f7a2972cfec3.scope: Deactivated successfully.
Nov 25 16:39:21 compute-0 sudo[316522]: pam_unix(sudo:session): session closed for user root
Nov 25 16:39:21 compute-0 sudo[316665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:39:21 compute-0 sudo[316665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:39:21 compute-0 sudo[316665]: pam_unix(sudo:session): session closed for user root
Nov 25 16:39:21 compute-0 sudo[316690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:39:21 compute-0 sudo[316690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:39:21 compute-0 sudo[316690]: pam_unix(sudo:session): session closed for user root
Nov 25 16:39:21 compute-0 ovn_controller[153477]: 2025-11-25T16:39:21Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:38:a9:78 10.100.0.6
Nov 25 16:39:21 compute-0 ovn_controller[153477]: 2025-11-25T16:39:21Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:38:a9:78 10.100.0.6
Nov 25 16:39:21 compute-0 sudo[316715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:39:21 compute-0 sudo[316715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:39:21 compute-0 sudo[316715]: pam_unix(sudo:session): session closed for user root
Nov 25 16:39:21 compute-0 sudo[316740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 16:39:21 compute-0 sudo[316740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:39:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1606: 321 pgs: 321 active+clean; 92 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 111 KiB/s rd, 2.1 MiB/s wr, 99 op/s
Nov 25 16:39:21 compute-0 podman[316807]: 2025-11-25 16:39:21.782422138 +0000 UTC m=+0.045387372 container create ff3bf16ac582f2061907eb1f27376c30540754074f40edbfb51f547669ad2ca9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_knuth, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:39:21 compute-0 systemd[1]: Started libpod-conmon-ff3bf16ac582f2061907eb1f27376c30540754074f40edbfb51f547669ad2ca9.scope.
Nov 25 16:39:21 compute-0 podman[316807]: 2025-11-25 16:39:21.762013015 +0000 UTC m=+0.024978279 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:39:21 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:39:21 compute-0 podman[316807]: 2025-11-25 16:39:21.885392012 +0000 UTC m=+0.148357276 container init ff3bf16ac582f2061907eb1f27376c30540754074f40edbfb51f547669ad2ca9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:39:21 compute-0 podman[316807]: 2025-11-25 16:39:21.895242839 +0000 UTC m=+0.158208093 container start ff3bf16ac582f2061907eb1f27376c30540754074f40edbfb51f547669ad2ca9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_knuth, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:39:21 compute-0 podman[316807]: 2025-11-25 16:39:21.899242838 +0000 UTC m=+0.162208132 container attach ff3bf16ac582f2061907eb1f27376c30540754074f40edbfb51f547669ad2ca9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:39:21 compute-0 amazing_knuth[316823]: 167 167
Nov 25 16:39:21 compute-0 systemd[1]: libpod-ff3bf16ac582f2061907eb1f27376c30540754074f40edbfb51f547669ad2ca9.scope: Deactivated successfully.
Nov 25 16:39:21 compute-0 podman[316807]: 2025-11-25 16:39:21.902956068 +0000 UTC m=+0.165921322 container died ff3bf16ac582f2061907eb1f27376c30540754074f40edbfb51f547669ad2ca9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_knuth, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 16:39:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-21c7f1e67c36deabdfc962b171b6c88d470e58b2f42d8630cfc8d2678175b227-merged.mount: Deactivated successfully.
Nov 25 16:39:21 compute-0 podman[316807]: 2025-11-25 16:39:21.943445877 +0000 UTC m=+0.206411111 container remove ff3bf16ac582f2061907eb1f27376c30540754074f40edbfb51f547669ad2ca9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_knuth, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 16:39:21 compute-0 systemd[1]: libpod-conmon-ff3bf16ac582f2061907eb1f27376c30540754074f40edbfb51f547669ad2ca9.scope: Deactivated successfully.
Nov 25 16:39:22 compute-0 nova_compute[254092]: 2025-11-25 16:39:22.136 254096 DEBUG nova.virt.libvirt.driver [None req-51763a1a-8e52-4ad8-95a6-147f9ccb06d4 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 16:39:22 compute-0 podman[316846]: 2025-11-25 16:39:22.145660473 +0000 UTC m=+0.056728980 container create 4b178c3b67691efe7ebbe02f1cbf85163ed30b1f226e7d802a95e602eebd0ac3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_spence, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 16:39:22 compute-0 systemd[1]: Started libpod-conmon-4b178c3b67691efe7ebbe02f1cbf85163ed30b1f226e7d802a95e602eebd0ac3.scope.
Nov 25 16:39:22 compute-0 podman[316846]: 2025-11-25 16:39:22.125110286 +0000 UTC m=+0.036178823 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:39:22 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:39:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2124a65847f8daaa543f2a30aa7223e8d0b4a270213bd4669b2b7622ce37c2d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:39:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2124a65847f8daaa543f2a30aa7223e8d0b4a270213bd4669b2b7622ce37c2d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:39:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2124a65847f8daaa543f2a30aa7223e8d0b4a270213bd4669b2b7622ce37c2d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:39:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2124a65847f8daaa543f2a30aa7223e8d0b4a270213bd4669b2b7622ce37c2d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:39:22 compute-0 podman[316846]: 2025-11-25 16:39:22.251579327 +0000 UTC m=+0.162647864 container init 4b178c3b67691efe7ebbe02f1cbf85163ed30b1f226e7d802a95e602eebd0ac3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_spence, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 16:39:22 compute-0 podman[316846]: 2025-11-25 16:39:22.261735422 +0000 UTC m=+0.172803929 container start 4b178c3b67691efe7ebbe02f1cbf85163ed30b1f226e7d802a95e602eebd0ac3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_spence, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 16:39:22 compute-0 podman[316846]: 2025-11-25 16:39:22.265032281 +0000 UTC m=+0.176100818 container attach 4b178c3b67691efe7ebbe02f1cbf85163ed30b1f226e7d802a95e602eebd0ac3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 16:39:22 compute-0 ceph-mon[74985]: pgmap v1606: 321 pgs: 321 active+clean; 92 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 111 KiB/s rd, 2.1 MiB/s wr, 99 op/s
Nov 25 16:39:23 compute-0 nova_compute[254092]: 2025-11-25 16:39:23.067 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:23 compute-0 sweet_spence[316863]: {
Nov 25 16:39:23 compute-0 sweet_spence[316863]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 16:39:23 compute-0 sweet_spence[316863]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:39:23 compute-0 sweet_spence[316863]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 16:39:23 compute-0 sweet_spence[316863]:         "osd_id": 1,
Nov 25 16:39:23 compute-0 sweet_spence[316863]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:39:23 compute-0 sweet_spence[316863]:         "type": "bluestore"
Nov 25 16:39:23 compute-0 sweet_spence[316863]:     },
Nov 25 16:39:23 compute-0 sweet_spence[316863]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 16:39:23 compute-0 sweet_spence[316863]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:39:23 compute-0 sweet_spence[316863]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 16:39:23 compute-0 sweet_spence[316863]:         "osd_id": 2,
Nov 25 16:39:23 compute-0 sweet_spence[316863]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:39:23 compute-0 sweet_spence[316863]:         "type": "bluestore"
Nov 25 16:39:23 compute-0 sweet_spence[316863]:     },
Nov 25 16:39:23 compute-0 sweet_spence[316863]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 16:39:23 compute-0 sweet_spence[316863]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:39:23 compute-0 sweet_spence[316863]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 16:39:23 compute-0 sweet_spence[316863]:         "osd_id": 0,
Nov 25 16:39:23 compute-0 sweet_spence[316863]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:39:23 compute-0 sweet_spence[316863]:         "type": "bluestore"
Nov 25 16:39:23 compute-0 sweet_spence[316863]:     }
Nov 25 16:39:23 compute-0 sweet_spence[316863]: }
Nov 25 16:39:23 compute-0 systemd[1]: libpod-4b178c3b67691efe7ebbe02f1cbf85163ed30b1f226e7d802a95e602eebd0ac3.scope: Deactivated successfully.
Nov 25 16:39:23 compute-0 systemd[1]: libpod-4b178c3b67691efe7ebbe02f1cbf85163ed30b1f226e7d802a95e602eebd0ac3.scope: Consumed 1.069s CPU time.
Nov 25 16:39:23 compute-0 podman[316846]: 2025-11-25 16:39:23.330888318 +0000 UTC m=+1.241956825 container died 4b178c3b67691efe7ebbe02f1cbf85163ed30b1f226e7d802a95e602eebd0ac3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_spence, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:39:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-2124a65847f8daaa543f2a30aa7223e8d0b4a270213bd4669b2b7622ce37c2d9-merged.mount: Deactivated successfully.
Nov 25 16:39:23 compute-0 podman[316846]: 2025-11-25 16:39:23.391765589 +0000 UTC m=+1.302834096 container remove 4b178c3b67691efe7ebbe02f1cbf85163ed30b1f226e7d802a95e602eebd0ac3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_spence, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 16:39:23 compute-0 systemd[1]: libpod-conmon-4b178c3b67691efe7ebbe02f1cbf85163ed30b1f226e7d802a95e602eebd0ac3.scope: Deactivated successfully.
Nov 25 16:39:23 compute-0 sudo[316740]: pam_unix(sudo:session): session closed for user root
Nov 25 16:39:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:39:23 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:39:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:39:23 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:39:23 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev c3133126-8cc0-4927-891c-69fbc3fc1703 does not exist
Nov 25 16:39:23 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 65c0cfed-f922-4d8f-bf2c-f2aad2053c6b does not exist
Nov 25 16:39:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1607: 321 pgs: 321 active+clean; 92 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 111 KiB/s rd, 2.1 MiB/s wr, 99 op/s
Nov 25 16:39:23 compute-0 sudo[316910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:39:23 compute-0 sudo[316910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:39:23 compute-0 sudo[316910]: pam_unix(sudo:session): session closed for user root
Nov 25 16:39:23 compute-0 sudo[316935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 16:39:23 compute-0 sudo[316935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:39:23 compute-0 sudo[316935]: pam_unix(sudo:session): session closed for user root
Nov 25 16:39:24 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:39:24 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:39:24 compute-0 ceph-mon[74985]: pgmap v1607: 321 pgs: 321 active+clean; 92 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 111 KiB/s rd, 2.1 MiB/s wr, 99 op/s
Nov 25 16:39:24 compute-0 kernel: tapaabf40d8-e3 (unregistering): left promiscuous mode
Nov 25 16:39:24 compute-0 NetworkManager[48891]: <info>  [1764088764.6682] device (tapaabf40d8-e3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:39:24 compute-0 ovn_controller[153477]: 2025-11-25T16:39:24Z|00513|binding|INFO|Releasing lport aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 from this chassis (sb_readonly=0)
Nov 25 16:39:24 compute-0 ovn_controller[153477]: 2025-11-25T16:39:24Z|00514|binding|INFO|Setting lport aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 down in Southbound
Nov 25 16:39:24 compute-0 nova_compute[254092]: 2025-11-25 16:39:24.680 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:24 compute-0 ovn_controller[153477]: 2025-11-25T16:39:24Z|00515|binding|INFO|Removing iface tapaabf40d8-e3 ovn-installed in OVS
Nov 25 16:39:24 compute-0 nova_compute[254092]: 2025-11-25 16:39:24.683 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:24.689 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:a9:78 10.100.0.6'], port_security=['fa:16:3e:38:a9:78 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'e5947529-cfda-4753-94cd-b764da9d5c2c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e469a950-7044-48bc-a261-bc9effe2c2aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff5d31ffc7934838a461d6ac8aa546c9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dcbb2958-ef75-485e-8580-da48580c7577', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff181adb-f502-4aa2-aaa5-e1f9a56ea733, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:39:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:24.691 163338 INFO neutron.agent.ovn.metadata.agent [-] Port aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 in datapath e469a950-7044-48bc-a261-bc9effe2c2aa unbound from our chassis
Nov 25 16:39:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:24.693 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e469a950-7044-48bc-a261-bc9effe2c2aa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:39:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:24.695 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[415e4580-621e-4de3-a080-bf47c24bdb62]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:24.696 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa namespace which is not needed anymore
Nov 25 16:39:24 compute-0 nova_compute[254092]: 2025-11-25 16:39:24.699 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:24 compute-0 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000037.scope: Deactivated successfully.
Nov 25 16:39:24 compute-0 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000037.scope: Consumed 13.834s CPU time.
Nov 25 16:39:24 compute-0 systemd-machined[216343]: Machine qemu-64-instance-00000037 terminated.
Nov 25 16:39:24 compute-0 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[315792]: [NOTICE]   (315796) : haproxy version is 2.8.14-c23fe91
Nov 25 16:39:24 compute-0 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[315792]: [NOTICE]   (315796) : path to executable is /usr/sbin/haproxy
Nov 25 16:39:24 compute-0 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[315792]: [WARNING]  (315796) : Exiting Master process...
Nov 25 16:39:24 compute-0 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[315792]: [ALERT]    (315796) : Current worker (315798) exited with code 143 (Terminated)
Nov 25 16:39:24 compute-0 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[315792]: [WARNING]  (315796) : All workers exited. Exiting... (0)
Nov 25 16:39:24 compute-0 systemd[1]: libpod-c066cc5455532f1c6bff5f5897b54ace7a9ea17e5d7ebd3b84dea3e4e4958957.scope: Deactivated successfully.
Nov 25 16:39:24 compute-0 podman[316985]: 2025-11-25 16:39:24.819350669 +0000 UTC m=+0.040340465 container died c066cc5455532f1c6bff5f5897b54ace7a9ea17e5d7ebd3b84dea3e4e4958957 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 16:39:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3ad08b84d0545a65c2b91231b7123f70815850aa282c0b4fc83b481c7c3e304-merged.mount: Deactivated successfully.
Nov 25 16:39:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c066cc5455532f1c6bff5f5897b54ace7a9ea17e5d7ebd3b84dea3e4e4958957-userdata-shm.mount: Deactivated successfully.
Nov 25 16:39:24 compute-0 podman[316985]: 2025-11-25 16:39:24.855164341 +0000 UTC m=+0.076154137 container cleanup c066cc5455532f1c6bff5f5897b54ace7a9ea17e5d7ebd3b84dea3e4e4958957 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:39:24 compute-0 systemd[1]: libpod-conmon-c066cc5455532f1c6bff5f5897b54ace7a9ea17e5d7ebd3b84dea3e4e4958957.scope: Deactivated successfully.
Nov 25 16:39:24 compute-0 kernel: tapaabf40d8-e3: entered promiscuous mode
Nov 25 16:39:24 compute-0 NetworkManager[48891]: <info>  [1764088764.9035] manager: (tapaabf40d8-e3): new Tun device (/org/freedesktop/NetworkManager/Devices/222)
Nov 25 16:39:24 compute-0 systemd-udevd[316967]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:39:24 compute-0 nova_compute[254092]: 2025-11-25 16:39:24.905 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:24 compute-0 ovn_controller[153477]: 2025-11-25T16:39:24Z|00516|binding|INFO|Claiming lport aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 for this chassis.
Nov 25 16:39:24 compute-0 ovn_controller[153477]: 2025-11-25T16:39:24Z|00517|binding|INFO|aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2: Claiming fa:16:3e:38:a9:78 10.100.0.6
Nov 25 16:39:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:24.914 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:a9:78 10.100.0.6'], port_security=['fa:16:3e:38:a9:78 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'e5947529-cfda-4753-94cd-b764da9d5c2c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e469a950-7044-48bc-a261-bc9effe2c2aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff5d31ffc7934838a461d6ac8aa546c9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dcbb2958-ef75-485e-8580-da48580c7577', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff181adb-f502-4aa2-aaa5-e1f9a56ea733, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:39:24 compute-0 kernel: tapaabf40d8-e3 (unregistering): left promiscuous mode
Nov 25 16:39:24 compute-0 podman[317016]: 2025-11-25 16:39:24.930365221 +0000 UTC m=+0.052772323 container remove c066cc5455532f1c6bff5f5897b54ace7a9ea17e5d7ebd3b84dea3e4e4958957 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 16:39:24 compute-0 virtnodedevd[254417]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 25 16:39:24 compute-0 virtnodedevd[254417]: hostname: compute-0
Nov 25 16:39:24 compute-0 virtnodedevd[254417]: ethtool ioctl error on tapaabf40d8-e3: No such device
Nov 25 16:39:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:24.936 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f899d12b-2135-42b6-931c-fb39c36e688c]: (4, ('Tue Nov 25 04:39:24 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa (c066cc5455532f1c6bff5f5897b54ace7a9ea17e5d7ebd3b84dea3e4e4958957)\nc066cc5455532f1c6bff5f5897b54ace7a9ea17e5d7ebd3b84dea3e4e4958957\nTue Nov 25 04:39:24 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa (c066cc5455532f1c6bff5f5897b54ace7a9ea17e5d7ebd3b84dea3e4e4958957)\nc066cc5455532f1c6bff5f5897b54ace7a9ea17e5d7ebd3b84dea3e4e4958957\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:24 compute-0 ovn_controller[153477]: 2025-11-25T16:39:24Z|00518|binding|INFO|Setting lport aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 ovn-installed in OVS
Nov 25 16:39:24 compute-0 ovn_controller[153477]: 2025-11-25T16:39:24Z|00519|binding|INFO|Setting lport aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 up in Southbound
Nov 25 16:39:24 compute-0 ovn_controller[153477]: 2025-11-25T16:39:24Z|00520|binding|INFO|Releasing lport aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 from this chassis (sb_readonly=1)
Nov 25 16:39:24 compute-0 ovn_controller[153477]: 2025-11-25T16:39:24Z|00521|if_status|INFO|Dropped 2 log messages in last 161 seconds (most recently, 161 seconds ago) due to excessive rate
Nov 25 16:39:24 compute-0 ovn_controller[153477]: 2025-11-25T16:39:24Z|00522|if_status|INFO|Not setting lport aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 down as sb is readonly
Nov 25 16:39:24 compute-0 ovn_controller[153477]: 2025-11-25T16:39:24Z|00523|binding|INFO|Removing iface tapaabf40d8-e3 ovn-installed in OVS
Nov 25 16:39:24 compute-0 nova_compute[254092]: 2025-11-25 16:39:24.939 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:24.939 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cbbcb089-8588-4b90-aec0-24c620cc67c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:24.940 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape469a950-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:39:24 compute-0 nova_compute[254092]: 2025-11-25 16:39:24.941 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:24 compute-0 virtnodedevd[254417]: ethtool ioctl error on tapaabf40d8-e3: No such device
Nov 25 16:39:24 compute-0 virtnodedevd[254417]: ethtool ioctl error on tapaabf40d8-e3: No such device
Nov 25 16:39:24 compute-0 virtnodedevd[254417]: ethtool ioctl error on tapaabf40d8-e3: No such device
Nov 25 16:39:24 compute-0 virtnodedevd[254417]: ethtool ioctl error on tapaabf40d8-e3: No such device
Nov 25 16:39:24 compute-0 ovn_controller[153477]: 2025-11-25T16:39:24Z|00524|binding|INFO|Releasing lport aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 from this chassis (sb_readonly=0)
Nov 25 16:39:24 compute-0 ovn_controller[153477]: 2025-11-25T16:39:24Z|00525|binding|INFO|Setting lport aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 down in Southbound
Nov 25 16:39:24 compute-0 nova_compute[254092]: 2025-11-25 16:39:24.961 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:24 compute-0 virtnodedevd[254417]: ethtool ioctl error on tapaabf40d8-e3: No such device
Nov 25 16:39:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:24.964 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:a9:78 10.100.0.6'], port_security=['fa:16:3e:38:a9:78 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'e5947529-cfda-4753-94cd-b764da9d5c2c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e469a950-7044-48bc-a261-bc9effe2c2aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff5d31ffc7934838a461d6ac8aa546c9', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'dcbb2958-ef75-485e-8580-da48580c7577', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff181adb-f502-4aa2-aaa5-e1f9a56ea733, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:39:24 compute-0 nova_compute[254092]: 2025-11-25 16:39:24.969 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:24 compute-0 virtnodedevd[254417]: ethtool ioctl error on tapaabf40d8-e3: No such device
Nov 25 16:39:24 compute-0 nova_compute[254092]: 2025-11-25 16:39:24.974 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:24 compute-0 kernel: tape469a950-70: left promiscuous mode
Nov 25 16:39:24 compute-0 virtnodedevd[254417]: ethtool ioctl error on tapaabf40d8-e3: No such device
Nov 25 16:39:24 compute-0 nova_compute[254092]: 2025-11-25 16:39:24.979 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:24.982 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3876ca06-4cd5-432f-9c6e-1833690a3c33]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:24 compute-0 nova_compute[254092]: 2025-11-25 16:39:24.991 254096 DEBUG nova.compute.manager [req-ce281c15-f3ca-4389-95d1-b72fd2b68f6a req-5221760a-79a4-4619-b4df-512db83865ed a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Received event network-vif-unplugged-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:39:24 compute-0 nova_compute[254092]: 2025-11-25 16:39:24.991 254096 DEBUG oslo_concurrency.lockutils [req-ce281c15-f3ca-4389-95d1-b72fd2b68f6a req-5221760a-79a4-4619-b4df-512db83865ed a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:39:24 compute-0 nova_compute[254092]: 2025-11-25 16:39:24.992 254096 DEBUG oslo_concurrency.lockutils [req-ce281c15-f3ca-4389-95d1-b72fd2b68f6a req-5221760a-79a4-4619-b4df-512db83865ed a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:39:24 compute-0 nova_compute[254092]: 2025-11-25 16:39:24.992 254096 DEBUG oslo_concurrency.lockutils [req-ce281c15-f3ca-4389-95d1-b72fd2b68f6a req-5221760a-79a4-4619-b4df-512db83865ed a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:24 compute-0 nova_compute[254092]: 2025-11-25 16:39:24.992 254096 DEBUG nova.compute.manager [req-ce281c15-f3ca-4389-95d1-b72fd2b68f6a req-5221760a-79a4-4619-b4df-512db83865ed a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] No waiting events found dispatching network-vif-unplugged-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:39:24 compute-0 nova_compute[254092]: 2025-11-25 16:39:24.992 254096 WARNING nova.compute.manager [req-ce281c15-f3ca-4389-95d1-b72fd2b68f6a req-5221760a-79a4-4619-b4df-512db83865ed a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Received unexpected event network-vif-unplugged-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 for instance with vm_state active and task_state powering-off.
Nov 25 16:39:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:24.995 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[120325f8-24f7-4e46-a4f1-00ddd1056596]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:24.996 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5a6f5cc1-b04f-4a49-86e3-f23679f5d1e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:25.013 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[70ac3ab7-879c-4c88-bc99-a58708661a5e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 520484, 'reachable_time': 42622, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317057, 'error': None, 'target': 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:25.015 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:39:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:25.015 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[76823b99-6212-4bc1-bfeb-b3b88de02506]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:25.016 163338 INFO neutron.agent.ovn.metadata.agent [-] Port aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 in datapath e469a950-7044-48bc-a261-bc9effe2c2aa unbound from our chassis
Nov 25 16:39:25 compute-0 systemd[1]: run-netns-ovnmeta\x2de469a950\x2d7044\x2d48bc\x2da261\x2dbc9effe2c2aa.mount: Deactivated successfully.
Nov 25 16:39:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:25.016 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e469a950-7044-48bc-a261-bc9effe2c2aa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:39:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:25.017 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ab8d89d4-8116-43c1-8271-ba15da8d5493]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:25.018 163338 INFO neutron.agent.ovn.metadata.agent [-] Port aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 in datapath e469a950-7044-48bc-a261-bc9effe2c2aa unbound from our chassis
Nov 25 16:39:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:25.019 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e469a950-7044-48bc-a261-bc9effe2c2aa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:39:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:25.019 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6c825f09-af7f-4d9a-9fe8-c4bae1a885b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:25 compute-0 nova_compute[254092]: 2025-11-25 16:39:25.152 254096 INFO nova.virt.libvirt.driver [None req-51763a1a-8e52-4ad8-95a6-147f9ccb06d4 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Instance shutdown successfully after 13 seconds.
Nov 25 16:39:25 compute-0 nova_compute[254092]: 2025-11-25 16:39:25.157 254096 INFO nova.virt.libvirt.driver [-] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Instance destroyed successfully.
Nov 25 16:39:25 compute-0 nova_compute[254092]: 2025-11-25 16:39:25.157 254096 DEBUG nova.objects.instance [None req-51763a1a-8e52-4ad8-95a6-147f9ccb06d4 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lazy-loading 'numa_topology' on Instance uuid e5947529-cfda-4753-94cd-b764da9d5c2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:39:25 compute-0 nova_compute[254092]: 2025-11-25 16:39:25.168 254096 DEBUG nova.compute.manager [None req-51763a1a-8e52-4ad8-95a6-147f9ccb06d4 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:39:25 compute-0 nova_compute[254092]: 2025-11-25 16:39:25.208 254096 DEBUG oslo_concurrency.lockutils [None req-51763a1a-8e52-4ad8-95a6-147f9ccb06d4 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.160s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:39:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1608: 321 pgs: 321 active+clean; 119 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 431 KiB/s rd, 2.9 MiB/s wr, 121 op/s
Nov 25 16:39:26 compute-0 ceph-mon[74985]: pgmap v1608: 321 pgs: 321 active+clean; 119 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 431 KiB/s rd, 2.9 MiB/s wr, 121 op/s
Nov 25 16:39:27 compute-0 nova_compute[254092]: 2025-11-25 16:39:27.139 254096 DEBUG nova.compute.manager [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Received event network-vif-plugged-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:39:27 compute-0 nova_compute[254092]: 2025-11-25 16:39:27.140 254096 DEBUG oslo_concurrency.lockutils [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:39:27 compute-0 nova_compute[254092]: 2025-11-25 16:39:27.140 254096 DEBUG oslo_concurrency.lockutils [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:39:27 compute-0 nova_compute[254092]: 2025-11-25 16:39:27.140 254096 DEBUG oslo_concurrency.lockutils [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:27 compute-0 nova_compute[254092]: 2025-11-25 16:39:27.140 254096 DEBUG nova.compute.manager [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] No waiting events found dispatching network-vif-plugged-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:39:27 compute-0 nova_compute[254092]: 2025-11-25 16:39:27.141 254096 WARNING nova.compute.manager [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Received unexpected event network-vif-plugged-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 for instance with vm_state stopped and task_state None.
Nov 25 16:39:27 compute-0 nova_compute[254092]: 2025-11-25 16:39:27.141 254096 DEBUG nova.compute.manager [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Received event network-vif-plugged-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:39:27 compute-0 nova_compute[254092]: 2025-11-25 16:39:27.141 254096 DEBUG oslo_concurrency.lockutils [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:39:27 compute-0 nova_compute[254092]: 2025-11-25 16:39:27.142 254096 DEBUG oslo_concurrency.lockutils [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:39:27 compute-0 nova_compute[254092]: 2025-11-25 16:39:27.142 254096 DEBUG oslo_concurrency.lockutils [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:27 compute-0 nova_compute[254092]: 2025-11-25 16:39:27.142 254096 DEBUG nova.compute.manager [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] No waiting events found dispatching network-vif-plugged-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:39:27 compute-0 nova_compute[254092]: 2025-11-25 16:39:27.142 254096 WARNING nova.compute.manager [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Received unexpected event network-vif-plugged-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 for instance with vm_state stopped and task_state None.
Nov 25 16:39:27 compute-0 nova_compute[254092]: 2025-11-25 16:39:27.142 254096 DEBUG nova.compute.manager [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Received event network-vif-plugged-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:39:27 compute-0 nova_compute[254092]: 2025-11-25 16:39:27.143 254096 DEBUG oslo_concurrency.lockutils [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:39:27 compute-0 nova_compute[254092]: 2025-11-25 16:39:27.143 254096 DEBUG oslo_concurrency.lockutils [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:39:27 compute-0 nova_compute[254092]: 2025-11-25 16:39:27.143 254096 DEBUG oslo_concurrency.lockutils [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:27 compute-0 nova_compute[254092]: 2025-11-25 16:39:27.144 254096 DEBUG nova.compute.manager [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] No waiting events found dispatching network-vif-plugged-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:39:27 compute-0 nova_compute[254092]: 2025-11-25 16:39:27.144 254096 WARNING nova.compute.manager [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Received unexpected event network-vif-plugged-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 for instance with vm_state stopped and task_state None.
Nov 25 16:39:27 compute-0 nova_compute[254092]: 2025-11-25 16:39:27.144 254096 DEBUG nova.compute.manager [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Received event network-vif-unplugged-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:39:27 compute-0 nova_compute[254092]: 2025-11-25 16:39:27.145 254096 DEBUG oslo_concurrency.lockutils [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:39:27 compute-0 nova_compute[254092]: 2025-11-25 16:39:27.145 254096 DEBUG oslo_concurrency.lockutils [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:39:27 compute-0 nova_compute[254092]: 2025-11-25 16:39:27.145 254096 DEBUG oslo_concurrency.lockutils [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:27 compute-0 nova_compute[254092]: 2025-11-25 16:39:27.145 254096 DEBUG nova.compute.manager [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] No waiting events found dispatching network-vif-unplugged-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:39:27 compute-0 nova_compute[254092]: 2025-11-25 16:39:27.146 254096 WARNING nova.compute.manager [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Received unexpected event network-vif-unplugged-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 for instance with vm_state stopped and task_state None.
Nov 25 16:39:27 compute-0 nova_compute[254092]: 2025-11-25 16:39:27.146 254096 DEBUG nova.compute.manager [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Received event network-vif-plugged-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:39:27 compute-0 nova_compute[254092]: 2025-11-25 16:39:27.146 254096 DEBUG oslo_concurrency.lockutils [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:39:27 compute-0 nova_compute[254092]: 2025-11-25 16:39:27.146 254096 DEBUG oslo_concurrency.lockutils [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:39:27 compute-0 nova_compute[254092]: 2025-11-25 16:39:27.147 254096 DEBUG oslo_concurrency.lockutils [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:27 compute-0 nova_compute[254092]: 2025-11-25 16:39:27.147 254096 DEBUG nova.compute.manager [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] No waiting events found dispatching network-vif-plugged-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:39:27 compute-0 nova_compute[254092]: 2025-11-25 16:39:27.147 254096 WARNING nova.compute.manager [req-afca4a8a-1314-42db-8df8-e95c12601ddc req-81f628a6-55cd-4c6b-b1cb-78016b6807b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Received unexpected event network-vif-plugged-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 for instance with vm_state stopped and task_state None.
Nov 25 16:39:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1609: 321 pgs: 321 active+clean; 121 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 396 KiB/s rd, 2.6 MiB/s wr, 90 op/s
Nov 25 16:39:27 compute-0 nova_compute[254092]: 2025-11-25 16:39:27.940 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088752.939042, 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:39:27 compute-0 nova_compute[254092]: 2025-11-25 16:39:27.941 254096 INFO nova.compute.manager [-] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] VM Stopped (Lifecycle Event)
Nov 25 16:39:27 compute-0 nova_compute[254092]: 2025-11-25 16:39:27.965 254096 DEBUG nova.compute.manager [None req-e0cc94df-4d5a-4a64-974f-285d6bc301e3 - - - - - -] [instance: 52c13ebd-df79-43b9-8d5f-e4bf4a2e0738] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:39:28 compute-0 nova_compute[254092]: 2025-11-25 16:39:28.068 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:28 compute-0 ceph-mon[74985]: pgmap v1609: 321 pgs: 321 active+clean; 121 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 396 KiB/s rd, 2.6 MiB/s wr, 90 op/s
Nov 25 16:39:28 compute-0 nova_compute[254092]: 2025-11-25 16:39:28.931 254096 DEBUG oslo_concurrency.lockutils [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "e5947529-cfda-4753-94cd-b764da9d5c2c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:39:28 compute-0 nova_compute[254092]: 2025-11-25 16:39:28.932 254096 DEBUG oslo_concurrency.lockutils [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:39:28 compute-0 nova_compute[254092]: 2025-11-25 16:39:28.932 254096 DEBUG oslo_concurrency.lockutils [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:39:28 compute-0 nova_compute[254092]: 2025-11-25 16:39:28.932 254096 DEBUG oslo_concurrency.lockutils [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:39:28 compute-0 nova_compute[254092]: 2025-11-25 16:39:28.932 254096 DEBUG oslo_concurrency.lockutils [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:28 compute-0 nova_compute[254092]: 2025-11-25 16:39:28.933 254096 INFO nova.compute.manager [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Terminating instance
Nov 25 16:39:28 compute-0 nova_compute[254092]: 2025-11-25 16:39:28.934 254096 DEBUG nova.compute.manager [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:39:28 compute-0 nova_compute[254092]: 2025-11-25 16:39:28.940 254096 INFO nova.virt.libvirt.driver [-] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Instance destroyed successfully.
Nov 25 16:39:28 compute-0 nova_compute[254092]: 2025-11-25 16:39:28.941 254096 DEBUG nova.objects.instance [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lazy-loading 'resources' on Instance uuid e5947529-cfda-4753-94cd-b764da9d5c2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:39:28 compute-0 nova_compute[254092]: 2025-11-25 16:39:28.953 254096 DEBUG nova.virt.libvirt.vif [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:38:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-251232744',display_name='tempest-DeleteServersTestJSON-server-251232744',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-251232744',id=55,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:39:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='ff5d31ffc7934838a461d6ac8aa546c9',ramdisk_id='',reservation_id='r-xpvmx0am',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-150185898',owner_user_name='tempest-DeleteServersTestJSON-150185898-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:39:25Z,user_data=None,user_id='788efa7ba65347b69663f4ea7ba4cd9d',uuid=e5947529-cfda-4753-94cd-b764da9d5c2c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2", "address": "fa:16:3e:38:a9:78", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaabf40d8-e3", "ovs_interfaceid": "aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:39:28 compute-0 nova_compute[254092]: 2025-11-25 16:39:28.954 254096 DEBUG nova.network.os_vif_util [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converting VIF {"id": "aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2", "address": "fa:16:3e:38:a9:78", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaabf40d8-e3", "ovs_interfaceid": "aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:39:28 compute-0 nova_compute[254092]: 2025-11-25 16:39:28.955 254096 DEBUG nova.network.os_vif_util [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:a9:78,bridge_name='br-int',has_traffic_filtering=True,id=aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaabf40d8-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:39:28 compute-0 nova_compute[254092]: 2025-11-25 16:39:28.955 254096 DEBUG os_vif [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:a9:78,bridge_name='br-int',has_traffic_filtering=True,id=aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaabf40d8-e3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:39:28 compute-0 nova_compute[254092]: 2025-11-25 16:39:28.957 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:28 compute-0 nova_compute[254092]: 2025-11-25 16:39:28.958 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaabf40d8-e3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:39:28 compute-0 nova_compute[254092]: 2025-11-25 16:39:28.962 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:28 compute-0 nova_compute[254092]: 2025-11-25 16:39:28.964 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:39:28 compute-0 nova_compute[254092]: 2025-11-25 16:39:28.968 254096 INFO os_vif [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:a9:78,bridge_name='br-int',has_traffic_filtering=True,id=aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaabf40d8-e3')
Nov 25 16:39:29 compute-0 nova_compute[254092]: 2025-11-25 16:39:29.464 254096 INFO nova.virt.libvirt.driver [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Deleting instance files /var/lib/nova/instances/e5947529-cfda-4753-94cd-b764da9d5c2c_del
Nov 25 16:39:29 compute-0 nova_compute[254092]: 2025-11-25 16:39:29.465 254096 INFO nova.virt.libvirt.driver [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Deletion of /var/lib/nova/instances/e5947529-cfda-4753-94cd-b764da9d5c2c_del complete
Nov 25 16:39:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1610: 321 pgs: 321 active+clean; 121 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 396 KiB/s rd, 2.6 MiB/s wr, 90 op/s
Nov 25 16:39:29 compute-0 nova_compute[254092]: 2025-11-25 16:39:29.713 254096 INFO nova.compute.manager [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Took 0.78 seconds to destroy the instance on the hypervisor.
Nov 25 16:39:29 compute-0 nova_compute[254092]: 2025-11-25 16:39:29.713 254096 DEBUG oslo.service.loopingcall [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:39:29 compute-0 nova_compute[254092]: 2025-11-25 16:39:29.714 254096 DEBUG nova.compute.manager [-] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:39:29 compute-0 nova_compute[254092]: 2025-11-25 16:39:29.714 254096 DEBUG nova.network.neutron [-] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:39:29 compute-0 nova_compute[254092]: 2025-11-25 16:39:29.914 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088754.912875, 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:39:29 compute-0 nova_compute[254092]: 2025-11-25 16:39:29.914 254096 INFO nova.compute.manager [-] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] VM Stopped (Lifecycle Event)
Nov 25 16:39:29 compute-0 nova_compute[254092]: 2025-11-25 16:39:29.932 254096 DEBUG nova.compute.manager [None req-97464e98-c13b-43fd-b2a0-1a2c15ea3a58 - - - - - -] [instance: 97f4b6cb-b3c9-45d8-8b9f-f9f12beb3211] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:39:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:39:30 compute-0 ceph-mon[74985]: pgmap v1610: 321 pgs: 321 active+clean; 121 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 396 KiB/s rd, 2.6 MiB/s wr, 90 op/s
Nov 25 16:39:30 compute-0 nova_compute[254092]: 2025-11-25 16:39:30.605 254096 DEBUG nova.network.neutron [-] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:39:30 compute-0 nova_compute[254092]: 2025-11-25 16:39:30.626 254096 INFO nova.compute.manager [-] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Took 0.91 seconds to deallocate network for instance.
Nov 25 16:39:30 compute-0 nova_compute[254092]: 2025-11-25 16:39:30.670 254096 DEBUG oslo_concurrency.lockutils [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:39:30 compute-0 nova_compute[254092]: 2025-11-25 16:39:30.671 254096 DEBUG oslo_concurrency.lockutils [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:39:30 compute-0 nova_compute[254092]: 2025-11-25 16:39:30.746 254096 DEBUG nova.compute.manager [req-10c8ec24-eb52-4e9f-948c-c848a54efff1 req-7ef5c5f8-a626-4268-a9cc-900fec7977c3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Received event network-vif-deleted-aabf40d8-e3a1-4e60-a7c9-7ba5dd4570d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:39:30 compute-0 nova_compute[254092]: 2025-11-25 16:39:30.747 254096 DEBUG oslo_concurrency.processutils [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:39:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:39:31 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3660174848' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:39:31 compute-0 nova_compute[254092]: 2025-11-25 16:39:31.207 254096 DEBUG oslo_concurrency.processutils [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:39:31 compute-0 nova_compute[254092]: 2025-11-25 16:39:31.215 254096 DEBUG nova.compute.provider_tree [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:39:31 compute-0 nova_compute[254092]: 2025-11-25 16:39:31.231 254096 DEBUG nova.scheduler.client.report [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:39:31 compute-0 nova_compute[254092]: 2025-11-25 16:39:31.257 254096 DEBUG oslo_concurrency.lockutils [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:31 compute-0 nova_compute[254092]: 2025-11-25 16:39:31.308 254096 INFO nova.scheduler.client.report [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Deleted allocations for instance e5947529-cfda-4753-94cd-b764da9d5c2c
Nov 25 16:39:31 compute-0 nova_compute[254092]: 2025-11-25 16:39:31.374 254096 DEBUG oslo_concurrency.lockutils [None req-6e7cf20d-b690-4b1b-88e6-5c2ea18c75ad 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "e5947529-cfda-4753-94cd-b764da9d5c2c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.442s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1611: 321 pgs: 321 active+clean; 41 MiB data, 512 MiB used, 59 GiB / 60 GiB avail; 372 KiB/s rd, 2.3 MiB/s wr, 101 op/s
Nov 25 16:39:31 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3660174848' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:39:32 compute-0 ceph-mon[74985]: pgmap v1611: 321 pgs: 321 active+clean; 41 MiB data, 512 MiB used, 59 GiB / 60 GiB avail; 372 KiB/s rd, 2.3 MiB/s wr, 101 op/s
Nov 25 16:39:33 compute-0 nova_compute[254092]: 2025-11-25 16:39:33.072 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1612: 321 pgs: 321 active+clean; 41 MiB data, 512 MiB used, 59 GiB / 60 GiB avail; 321 KiB/s rd, 1.8 MiB/s wr, 88 op/s
Nov 25 16:39:33 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e215 do_prune osdmap full prune enabled
Nov 25 16:39:33 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e216 e216: 3 total, 3 up, 3 in
Nov 25 16:39:33 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e216: 3 total, 3 up, 3 in
Nov 25 16:39:33 compute-0 nova_compute[254092]: 2025-11-25 16:39:33.961 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:34 compute-0 ceph-mon[74985]: pgmap v1612: 321 pgs: 321 active+clean; 41 MiB data, 512 MiB used, 59 GiB / 60 GiB avail; 321 KiB/s rd, 1.8 MiB/s wr, 88 op/s
Nov 25 16:39:34 compute-0 ceph-mon[74985]: osdmap e216: 3 total, 3 up, 3 in
Nov 25 16:39:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:39:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1614: 321 pgs: 321 active+clean; 41 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 6.0 KiB/s wr, 55 op/s
Nov 25 16:39:36 compute-0 podman[317100]: 2025-11-25 16:39:36.626975656 +0000 UTC m=+0.048894027 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent)
Nov 25 16:39:36 compute-0 podman[317099]: 2025-11-25 16:39:36.634359416 +0000 UTC m=+0.056305568 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 16:39:36 compute-0 podman[317101]: 2025-11-25 16:39:36.659354925 +0000 UTC m=+0.076478717 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 16:39:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1615: 321 pgs: 321 active+clean; 41 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 3.1 KiB/s wr, 56 op/s
Nov 25 16:39:37 compute-0 ceph-mon[74985]: pgmap v1614: 321 pgs: 321 active+clean; 41 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 6.0 KiB/s wr, 55 op/s
Nov 25 16:39:38 compute-0 nova_compute[254092]: 2025-11-25 16:39:38.084 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:38 compute-0 nova_compute[254092]: 2025-11-25 16:39:38.335 254096 DEBUG oslo_concurrency.lockutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:39:38 compute-0 nova_compute[254092]: 2025-11-25 16:39:38.336 254096 DEBUG oslo_concurrency.lockutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:39:38 compute-0 nova_compute[254092]: 2025-11-25 16:39:38.395 254096 DEBUG nova.compute.manager [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:39:38 compute-0 nova_compute[254092]: 2025-11-25 16:39:38.618 254096 DEBUG oslo_concurrency.lockutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:39:38 compute-0 nova_compute[254092]: 2025-11-25 16:39:38.618 254096 DEBUG oslo_concurrency.lockutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:39:38 compute-0 nova_compute[254092]: 2025-11-25 16:39:38.623 254096 DEBUG nova.virt.hardware [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:39:38 compute-0 nova_compute[254092]: 2025-11-25 16:39:38.624 254096 INFO nova.compute.claims [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:39:38 compute-0 nova_compute[254092]: 2025-11-25 16:39:38.792 254096 DEBUG oslo_concurrency.processutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:39:38 compute-0 nova_compute[254092]: 2025-11-25 16:39:38.963 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e216 do_prune osdmap full prune enabled
Nov 25 16:39:39 compute-0 ceph-mon[74985]: pgmap v1615: 321 pgs: 321 active+clean; 41 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 3.1 KiB/s wr, 56 op/s
Nov 25 16:39:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:39:39 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2895456730' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:39:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e217 e217: 3 total, 3 up, 3 in
Nov 25 16:39:39 compute-0 nova_compute[254092]: 2025-11-25 16:39:39.268 254096 DEBUG oslo_concurrency.processutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:39:39 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e217: 3 total, 3 up, 3 in
Nov 25 16:39:39 compute-0 nova_compute[254092]: 2025-11-25 16:39:39.275 254096 DEBUG nova.compute.provider_tree [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:39:39 compute-0 nova_compute[254092]: 2025-11-25 16:39:39.314 254096 DEBUG nova.scheduler.client.report [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:39:39 compute-0 nova_compute[254092]: 2025-11-25 16:39:39.370 254096 DEBUG oslo_concurrency.lockutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:39 compute-0 nova_compute[254092]: 2025-11-25 16:39:39.371 254096 DEBUG nova.compute.manager [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:39:39 compute-0 nova_compute[254092]: 2025-11-25 16:39:39.451 254096 DEBUG nova.compute.manager [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:39:39 compute-0 nova_compute[254092]: 2025-11-25 16:39:39.452 254096 DEBUG nova.network.neutron [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:39:39 compute-0 nova_compute[254092]: 2025-11-25 16:39:39.485 254096 INFO nova.virt.libvirt.driver [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:39:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1617: 321 pgs: 321 active+clean; 41 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 KiB/s wr, 28 op/s
Nov 25 16:39:39 compute-0 nova_compute[254092]: 2025-11-25 16:39:39.515 254096 DEBUG nova.compute.manager [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:39:39 compute-0 nova_compute[254092]: 2025-11-25 16:39:39.655 254096 DEBUG nova.compute.manager [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:39:39 compute-0 nova_compute[254092]: 2025-11-25 16:39:39.657 254096 DEBUG nova.virt.libvirt.driver [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:39:39 compute-0 nova_compute[254092]: 2025-11-25 16:39:39.658 254096 INFO nova.virt.libvirt.driver [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Creating image(s)
Nov 25 16:39:39 compute-0 nova_compute[254092]: 2025-11-25 16:39:39.684 254096 DEBUG nova.storage.rbd_utils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:39:39 compute-0 nova_compute[254092]: 2025-11-25 16:39:39.711 254096 DEBUG nova.storage.rbd_utils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:39:39 compute-0 nova_compute[254092]: 2025-11-25 16:39:39.740 254096 DEBUG nova.storage.rbd_utils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:39:39 compute-0 nova_compute[254092]: 2025-11-25 16:39:39.745 254096 DEBUG oslo_concurrency.processutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:39:39 compute-0 nova_compute[254092]: 2025-11-25 16:39:39.858 254096 DEBUG oslo_concurrency.processutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.113s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:39:39 compute-0 nova_compute[254092]: 2025-11-25 16:39:39.859 254096 DEBUG oslo_concurrency.lockutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:39:39 compute-0 nova_compute[254092]: 2025-11-25 16:39:39.859 254096 DEBUG oslo_concurrency.lockutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:39:39 compute-0 nova_compute[254092]: 2025-11-25 16:39:39.860 254096 DEBUG oslo_concurrency.lockutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:39 compute-0 nova_compute[254092]: 2025-11-25 16:39:39.900 254096 DEBUG nova.storage.rbd_utils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:39:39 compute-0 nova_compute[254092]: 2025-11-25 16:39:39.904 254096 DEBUG oslo_concurrency.processutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:39:39 compute-0 nova_compute[254092]: 2025-11-25 16:39:39.944 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088764.9432306, e5947529-cfda-4753-94cd-b764da9d5c2c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:39:39 compute-0 nova_compute[254092]: 2025-11-25 16:39:39.944 254096 INFO nova.compute.manager [-] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] VM Stopped (Lifecycle Event)
Nov 25 16:39:39 compute-0 nova_compute[254092]: 2025-11-25 16:39:39.978 254096 DEBUG nova.compute.manager [None req-d2e9de6f-9c83-434a-8e6b-4129674188f3 - - - - - -] [instance: e5947529-cfda-4753-94cd-b764da9d5c2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:39:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:39:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:39:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:39:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:39:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:39:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:39:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:39:40
Nov 25 16:39:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:39:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:39:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'images', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', '.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'backups']
Nov 25 16:39:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:39:40 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2895456730' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:39:40 compute-0 ceph-mon[74985]: osdmap e217: 3 total, 3 up, 3 in
Nov 25 16:39:40 compute-0 ceph-mon[74985]: pgmap v1617: 321 pgs: 321 active+clean; 41 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 KiB/s wr, 28 op/s
Nov 25 16:39:40 compute-0 nova_compute[254092]: 2025-11-25 16:39:40.256 254096 DEBUG nova.policy [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '788efa7ba65347b69663f4ea7ba4cd9d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ff5d31ffc7934838a461d6ac8aa546c9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:39:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:39:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:39:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:39:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:39:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:39:40 compute-0 nova_compute[254092]: 2025-11-25 16:39:40.461 254096 DEBUG oslo_concurrency.processutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:39:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:39:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:39:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:39:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:39:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:39:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:39:40 compute-0 nova_compute[254092]: 2025-11-25 16:39:40.513 254096 DEBUG nova.storage.rbd_utils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] resizing rbd image bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:39:40 compute-0 nova_compute[254092]: 2025-11-25 16:39:40.615 254096 DEBUG nova.objects.instance [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lazy-loading 'migration_context' on Instance uuid bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:39:40 compute-0 nova_compute[254092]: 2025-11-25 16:39:40.697 254096 DEBUG nova.virt.libvirt.driver [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:39:40 compute-0 nova_compute[254092]: 2025-11-25 16:39:40.698 254096 DEBUG nova.virt.libvirt.driver [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Ensure instance console log exists: /var/lib/nova/instances/bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:39:40 compute-0 nova_compute[254092]: 2025-11-25 16:39:40.699 254096 DEBUG oslo_concurrency.lockutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:39:40 compute-0 nova_compute[254092]: 2025-11-25 16:39:40.699 254096 DEBUG oslo_concurrency.lockutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:39:40 compute-0 nova_compute[254092]: 2025-11-25 16:39:40.699 254096 DEBUG oslo_concurrency.lockutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:41 compute-0 nova_compute[254092]: 2025-11-25 16:39:41.378 254096 DEBUG nova.network.neutron [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Successfully created port: 9d4276f1-91e9-418f-9a7b-844c83aea8f4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:39:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1618: 321 pgs: 321 active+clean; 58 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 949 KiB/s wr, 51 op/s
Nov 25 16:39:42 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e217 do_prune osdmap full prune enabled
Nov 25 16:39:42 compute-0 nova_compute[254092]: 2025-11-25 16:39:42.657 254096 DEBUG nova.network.neutron [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Successfully updated port: 9d4276f1-91e9-418f-9a7b-844c83aea8f4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:39:42 compute-0 ceph-mon[74985]: pgmap v1618: 321 pgs: 321 active+clean; 58 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 949 KiB/s wr, 51 op/s
Nov 25 16:39:42 compute-0 nova_compute[254092]: 2025-11-25 16:39:42.713 254096 DEBUG oslo_concurrency.lockutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "refresh_cache-bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:39:42 compute-0 nova_compute[254092]: 2025-11-25 16:39:42.713 254096 DEBUG oslo_concurrency.lockutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquired lock "refresh_cache-bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:39:42 compute-0 nova_compute[254092]: 2025-11-25 16:39:42.714 254096 DEBUG nova.network.neutron [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:39:42 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e218 e218: 3 total, 3 up, 3 in
Nov 25 16:39:42 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e218: 3 total, 3 up, 3 in
Nov 25 16:39:42 compute-0 nova_compute[254092]: 2025-11-25 16:39:42.776 254096 DEBUG nova.compute.manager [req-c5d17e26-cdd3-460c-91b2-1156b03cb4d2 req-e66ce7cf-49b3-4980-9efb-b967a65f44d0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Received event network-changed-9d4276f1-91e9-418f-9a7b-844c83aea8f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:39:42 compute-0 nova_compute[254092]: 2025-11-25 16:39:42.776 254096 DEBUG nova.compute.manager [req-c5d17e26-cdd3-460c-91b2-1156b03cb4d2 req-e66ce7cf-49b3-4980-9efb-b967a65f44d0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Refreshing instance network info cache due to event network-changed-9d4276f1-91e9-418f-9a7b-844c83aea8f4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:39:42 compute-0 nova_compute[254092]: 2025-11-25 16:39:42.777 254096 DEBUG oslo_concurrency.lockutils [req-c5d17e26-cdd3-460c-91b2-1156b03cb4d2 req-e66ce7cf-49b3-4980-9efb-b967a65f44d0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:39:42 compute-0 nova_compute[254092]: 2025-11-25 16:39:42.913 254096 DEBUG nova.network.neutron [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:39:43 compute-0 nova_compute[254092]: 2025-11-25 16:39:43.086 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1620: 321 pgs: 321 active+clean; 58 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 947 KiB/s wr, 23 op/s
Nov 25 16:39:43 compute-0 ceph-mon[74985]: osdmap e218: 3 total, 3 up, 3 in
Nov 25 16:39:43 compute-0 nova_compute[254092]: 2025-11-25 16:39:43.923 254096 DEBUG nova.network.neutron [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Updating instance_info_cache with network_info: [{"id": "9d4276f1-91e9-418f-9a7b-844c83aea8f4", "address": "fa:16:3e:8d:00:0e", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d4276f1-91", "ovs_interfaceid": "9d4276f1-91e9-418f-9a7b-844c83aea8f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:39:43 compute-0 nova_compute[254092]: 2025-11-25 16:39:43.968 254096 DEBUG oslo_concurrency.lockutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Releasing lock "refresh_cache-bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:39:43 compute-0 nova_compute[254092]: 2025-11-25 16:39:43.969 254096 DEBUG nova.compute.manager [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Instance network_info: |[{"id": "9d4276f1-91e9-418f-9a7b-844c83aea8f4", "address": "fa:16:3e:8d:00:0e", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d4276f1-91", "ovs_interfaceid": "9d4276f1-91e9-418f-9a7b-844c83aea8f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:39:43 compute-0 nova_compute[254092]: 2025-11-25 16:39:43.969 254096 DEBUG oslo_concurrency.lockutils [req-c5d17e26-cdd3-460c-91b2-1156b03cb4d2 req-e66ce7cf-49b3-4980-9efb-b967a65f44d0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:39:43 compute-0 nova_compute[254092]: 2025-11-25 16:39:43.969 254096 DEBUG nova.network.neutron [req-c5d17e26-cdd3-460c-91b2-1156b03cb4d2 req-e66ce7cf-49b3-4980-9efb-b967a65f44d0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Refreshing network info cache for port 9d4276f1-91e9-418f-9a7b-844c83aea8f4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:39:43 compute-0 nova_compute[254092]: 2025-11-25 16:39:43.973 254096 DEBUG nova.virt.libvirt.driver [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Start _get_guest_xml network_info=[{"id": "9d4276f1-91e9-418f-9a7b-844c83aea8f4", "address": "fa:16:3e:8d:00:0e", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d4276f1-91", "ovs_interfaceid": "9d4276f1-91e9-418f-9a7b-844c83aea8f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:39:44 compute-0 nova_compute[254092]: 2025-11-25 16:39:44.015 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:44 compute-0 nova_compute[254092]: 2025-11-25 16:39:44.020 254096 WARNING nova.virt.libvirt.driver [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:39:44 compute-0 nova_compute[254092]: 2025-11-25 16:39:44.027 254096 DEBUG nova.virt.libvirt.host [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:39:44 compute-0 nova_compute[254092]: 2025-11-25 16:39:44.028 254096 DEBUG nova.virt.libvirt.host [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:39:44 compute-0 nova_compute[254092]: 2025-11-25 16:39:44.031 254096 DEBUG nova.virt.libvirt.host [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:39:44 compute-0 nova_compute[254092]: 2025-11-25 16:39:44.031 254096 DEBUG nova.virt.libvirt.host [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:39:44 compute-0 nova_compute[254092]: 2025-11-25 16:39:44.032 254096 DEBUG nova.virt.libvirt.driver [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:39:44 compute-0 nova_compute[254092]: 2025-11-25 16:39:44.032 254096 DEBUG nova.virt.hardware [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:39:44 compute-0 nova_compute[254092]: 2025-11-25 16:39:44.033 254096 DEBUG nova.virt.hardware [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:39:44 compute-0 nova_compute[254092]: 2025-11-25 16:39:44.033 254096 DEBUG nova.virt.hardware [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:39:44 compute-0 nova_compute[254092]: 2025-11-25 16:39:44.034 254096 DEBUG nova.virt.hardware [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:39:44 compute-0 nova_compute[254092]: 2025-11-25 16:39:44.034 254096 DEBUG nova.virt.hardware [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:39:44 compute-0 nova_compute[254092]: 2025-11-25 16:39:44.035 254096 DEBUG nova.virt.hardware [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:39:44 compute-0 nova_compute[254092]: 2025-11-25 16:39:44.035 254096 DEBUG nova.virt.hardware [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:39:44 compute-0 nova_compute[254092]: 2025-11-25 16:39:44.035 254096 DEBUG nova.virt.hardware [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:39:44 compute-0 nova_compute[254092]: 2025-11-25 16:39:44.036 254096 DEBUG nova.virt.hardware [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:39:44 compute-0 nova_compute[254092]: 2025-11-25 16:39:44.036 254096 DEBUG nova.virt.hardware [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:39:44 compute-0 nova_compute[254092]: 2025-11-25 16:39:44.036 254096 DEBUG nova.virt.hardware [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:39:44 compute-0 nova_compute[254092]: 2025-11-25 16:39:44.040 254096 DEBUG oslo_concurrency.processutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:39:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:39:44 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2120606357' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:39:44 compute-0 nova_compute[254092]: 2025-11-25 16:39:44.485 254096 DEBUG oslo_concurrency.processutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:39:44 compute-0 nova_compute[254092]: 2025-11-25 16:39:44.516 254096 DEBUG nova.storage.rbd_utils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:39:44 compute-0 nova_compute[254092]: 2025-11-25 16:39:44.522 254096 DEBUG oslo_concurrency.processutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:39:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e218 do_prune osdmap full prune enabled
Nov 25 16:39:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e219 e219: 3 total, 3 up, 3 in
Nov 25 16:39:44 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e219: 3 total, 3 up, 3 in
Nov 25 16:39:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:39:44 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3604200154' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:39:44 compute-0 ceph-mon[74985]: pgmap v1620: 321 pgs: 321 active+clean; 58 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 947 KiB/s wr, 23 op/s
Nov 25 16:39:44 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2120606357' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:39:45 compute-0 nova_compute[254092]: 2025-11-25 16:39:45.002 254096 DEBUG oslo_concurrency.processutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:39:45 compute-0 nova_compute[254092]: 2025-11-25 16:39:45.004 254096 DEBUG nova.virt.libvirt.vif [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:39:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-2082208700',display_name='tempest-DeleteServersTestJSON-server-2082208700',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-2082208700',id=56,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff5d31ffc7934838a461d6ac8aa546c9',ramdisk_id='',reservation_id='r-w2u0qqei',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-150185898',owner_user_name='tempest-DeleteServersTestJSON-150185898-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:39:39Z,user_data=None,user_id='788efa7ba65347b69663f4ea7ba4cd9d',uuid=bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9d4276f1-91e9-418f-9a7b-844c83aea8f4", "address": "fa:16:3e:8d:00:0e", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d4276f1-91", "ovs_interfaceid": "9d4276f1-91e9-418f-9a7b-844c83aea8f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:39:45 compute-0 nova_compute[254092]: 2025-11-25 16:39:45.004 254096 DEBUG nova.network.os_vif_util [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converting VIF {"id": "9d4276f1-91e9-418f-9a7b-844c83aea8f4", "address": "fa:16:3e:8d:00:0e", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d4276f1-91", "ovs_interfaceid": "9d4276f1-91e9-418f-9a7b-844c83aea8f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:39:45 compute-0 nova_compute[254092]: 2025-11-25 16:39:45.005 254096 DEBUG nova.network.os_vif_util [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:00:0e,bridge_name='br-int',has_traffic_filtering=True,id=9d4276f1-91e9-418f-9a7b-844c83aea8f4,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d4276f1-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:39:45 compute-0 nova_compute[254092]: 2025-11-25 16:39:45.007 254096 DEBUG nova.objects.instance [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lazy-loading 'pci_devices' on Instance uuid bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:39:45 compute-0 nova_compute[254092]: 2025-11-25 16:39:45.028 254096 DEBUG nova.virt.libvirt.driver [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:39:45 compute-0 nova_compute[254092]:   <uuid>bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68</uuid>
Nov 25 16:39:45 compute-0 nova_compute[254092]:   <name>instance-00000038</name>
Nov 25 16:39:45 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:39:45 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:39:45 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:39:45 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:       <nova:name>tempest-DeleteServersTestJSON-server-2082208700</nova:name>
Nov 25 16:39:45 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:39:44</nova:creationTime>
Nov 25 16:39:45 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:39:45 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:39:45 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:39:45 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:39:45 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:39:45 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:39:45 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:39:45 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:39:45 compute-0 nova_compute[254092]:         <nova:user uuid="788efa7ba65347b69663f4ea7ba4cd9d">tempest-DeleteServersTestJSON-150185898-project-member</nova:user>
Nov 25 16:39:45 compute-0 nova_compute[254092]:         <nova:project uuid="ff5d31ffc7934838a461d6ac8aa546c9">tempest-DeleteServersTestJSON-150185898</nova:project>
Nov 25 16:39:45 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:39:45 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:39:45 compute-0 nova_compute[254092]:         <nova:port uuid="9d4276f1-91e9-418f-9a7b-844c83aea8f4">
Nov 25 16:39:45 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:39:45 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:39:45 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:39:45 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:39:45 compute-0 nova_compute[254092]:     <system>
Nov 25 16:39:45 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:39:45 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:39:45 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:39:45 compute-0 nova_compute[254092]:       <entry name="serial">bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68</entry>
Nov 25 16:39:45 compute-0 nova_compute[254092]:       <entry name="uuid">bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68</entry>
Nov 25 16:39:45 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     </system>
Nov 25 16:39:45 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:39:45 compute-0 nova_compute[254092]:   <os>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:   </os>
Nov 25 16:39:45 compute-0 nova_compute[254092]:   <features>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:   </features>
Nov 25 16:39:45 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:39:45 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:39:45 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:39:45 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:39:45 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:39:45 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68_disk">
Nov 25 16:39:45 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:       </source>
Nov 25 16:39:45 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:39:45 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:39:45 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:39:45 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68_disk.config">
Nov 25 16:39:45 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:       </source>
Nov 25 16:39:45 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:39:45 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:39:45 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:39:45 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:8d:00:0e"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:       <target dev="tap9d4276f1-91"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:39:45 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68/console.log" append="off"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     <video>
Nov 25 16:39:45 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     </video>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:39:45 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:39:45 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:39:45 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:39:45 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:39:45 compute-0 nova_compute[254092]: </domain>
Nov 25 16:39:45 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:39:45 compute-0 nova_compute[254092]: 2025-11-25 16:39:45.029 254096 DEBUG nova.compute.manager [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Preparing to wait for external event network-vif-plugged-9d4276f1-91e9-418f-9a7b-844c83aea8f4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:39:45 compute-0 nova_compute[254092]: 2025-11-25 16:39:45.029 254096 DEBUG oslo_concurrency.lockutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:39:45 compute-0 nova_compute[254092]: 2025-11-25 16:39:45.030 254096 DEBUG oslo_concurrency.lockutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:39:45 compute-0 nova_compute[254092]: 2025-11-25 16:39:45.030 254096 DEBUG oslo_concurrency.lockutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:45 compute-0 nova_compute[254092]: 2025-11-25 16:39:45.031 254096 DEBUG nova.virt.libvirt.vif [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:39:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-2082208700',display_name='tempest-DeleteServersTestJSON-server-2082208700',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-2082208700',id=56,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff5d31ffc7934838a461d6ac8aa546c9',ramdisk_id='',reservation_id='r-w2u0qqei',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-150185898',owner_user_name='tempest-DeleteServers
TestJSON-150185898-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:39:39Z,user_data=None,user_id='788efa7ba65347b69663f4ea7ba4cd9d',uuid=bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9d4276f1-91e9-418f-9a7b-844c83aea8f4", "address": "fa:16:3e:8d:00:0e", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d4276f1-91", "ovs_interfaceid": "9d4276f1-91e9-418f-9a7b-844c83aea8f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:39:45 compute-0 nova_compute[254092]: 2025-11-25 16:39:45.031 254096 DEBUG nova.network.os_vif_util [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converting VIF {"id": "9d4276f1-91e9-418f-9a7b-844c83aea8f4", "address": "fa:16:3e:8d:00:0e", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d4276f1-91", "ovs_interfaceid": "9d4276f1-91e9-418f-9a7b-844c83aea8f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:39:45 compute-0 nova_compute[254092]: 2025-11-25 16:39:45.032 254096 DEBUG nova.network.os_vif_util [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:00:0e,bridge_name='br-int',has_traffic_filtering=True,id=9d4276f1-91e9-418f-9a7b-844c83aea8f4,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d4276f1-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:39:45 compute-0 nova_compute[254092]: 2025-11-25 16:39:45.033 254096 DEBUG os_vif [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:00:0e,bridge_name='br-int',has_traffic_filtering=True,id=9d4276f1-91e9-418f-9a7b-844c83aea8f4,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d4276f1-91') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:39:45 compute-0 nova_compute[254092]: 2025-11-25 16:39:45.033 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:45 compute-0 nova_compute[254092]: 2025-11-25 16:39:45.034 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:39:45 compute-0 nova_compute[254092]: 2025-11-25 16:39:45.035 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:39:45 compute-0 nova_compute[254092]: 2025-11-25 16:39:45.039 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:45 compute-0 nova_compute[254092]: 2025-11-25 16:39:45.039 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9d4276f1-91, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:39:45 compute-0 nova_compute[254092]: 2025-11-25 16:39:45.040 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9d4276f1-91, col_values=(('external_ids', {'iface-id': '9d4276f1-91e9-418f-9a7b-844c83aea8f4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8d:00:0e', 'vm-uuid': 'bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:39:45 compute-0 nova_compute[254092]: 2025-11-25 16:39:45.042 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:45 compute-0 NetworkManager[48891]: <info>  [1764088785.0432] manager: (tap9d4276f1-91): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/223)
Nov 25 16:39:45 compute-0 nova_compute[254092]: 2025-11-25 16:39:45.045 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:39:45 compute-0 nova_compute[254092]: 2025-11-25 16:39:45.048 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:45 compute-0 nova_compute[254092]: 2025-11-25 16:39:45.049 254096 INFO os_vif [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:00:0e,bridge_name='br-int',has_traffic_filtering=True,id=9d4276f1-91e9-418f-9a7b-844c83aea8f4,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d4276f1-91')
Nov 25 16:39:45 compute-0 nova_compute[254092]: 2025-11-25 16:39:45.114 254096 DEBUG nova.virt.libvirt.driver [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:39:45 compute-0 nova_compute[254092]: 2025-11-25 16:39:45.114 254096 DEBUG nova.virt.libvirt.driver [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:39:45 compute-0 nova_compute[254092]: 2025-11-25 16:39:45.114 254096 DEBUG nova.virt.libvirt.driver [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] No VIF found with MAC fa:16:3e:8d:00:0e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:39:45 compute-0 nova_compute[254092]: 2025-11-25 16:39:45.115 254096 INFO nova.virt.libvirt.driver [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Using config drive
Nov 25 16:39:45 compute-0 nova_compute[254092]: 2025-11-25 16:39:45.137 254096 DEBUG nova.storage.rbd_utils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:39:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:39:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1622: 321 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 310 active+clean; 88 MiB data, 530 MiB used, 59 GiB / 60 GiB avail; 98 KiB/s rd, 3.4 MiB/s wr, 141 op/s
Nov 25 16:39:45 compute-0 nova_compute[254092]: 2025-11-25 16:39:45.824 254096 INFO nova.virt.libvirt.driver [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Creating config drive at /var/lib/nova/instances/bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68/disk.config
Nov 25 16:39:45 compute-0 nova_compute[254092]: 2025-11-25 16:39:45.829 254096 DEBUG oslo_concurrency.processutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxx0i954y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:39:45 compute-0 nova_compute[254092]: 2025-11-25 16:39:45.978 254096 DEBUG oslo_concurrency.processutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxx0i954y" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:39:45 compute-0 ceph-mon[74985]: osdmap e219: 3 total, 3 up, 3 in
Nov 25 16:39:45 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3604200154' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:39:46 compute-0 nova_compute[254092]: 2025-11-25 16:39:46.020 254096 DEBUG nova.storage.rbd_utils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] rbd image bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:39:46 compute-0 nova_compute[254092]: 2025-11-25 16:39:46.024 254096 DEBUG oslo_concurrency.processutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68/disk.config bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:39:46 compute-0 nova_compute[254092]: 2025-11-25 16:39:46.183 254096 DEBUG oslo_concurrency.processutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68/disk.config bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:39:46 compute-0 nova_compute[254092]: 2025-11-25 16:39:46.184 254096 INFO nova.virt.libvirt.driver [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Deleting local config drive /var/lib/nova/instances/bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68/disk.config because it was imported into RBD.
Nov 25 16:39:46 compute-0 kernel: tap9d4276f1-91: entered promiscuous mode
Nov 25 16:39:46 compute-0 NetworkManager[48891]: <info>  [1764088786.2521] manager: (tap9d4276f1-91): new Tun device (/org/freedesktop/NetworkManager/Devices/224)
Nov 25 16:39:46 compute-0 systemd-udevd[317481]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:39:46 compute-0 ovn_controller[153477]: 2025-11-25T16:39:46Z|00526|binding|INFO|Claiming lport 9d4276f1-91e9-418f-9a7b-844c83aea8f4 for this chassis.
Nov 25 16:39:46 compute-0 ovn_controller[153477]: 2025-11-25T16:39:46Z|00527|binding|INFO|9d4276f1-91e9-418f-9a7b-844c83aea8f4: Claiming fa:16:3e:8d:00:0e 10.100.0.4
Nov 25 16:39:46 compute-0 nova_compute[254092]: 2025-11-25 16:39:46.288 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:46 compute-0 NetworkManager[48891]: <info>  [1764088786.3015] device (tap9d4276f1-91): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:39:46 compute-0 NetworkManager[48891]: <info>  [1764088786.3022] device (tap9d4276f1-91): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.303 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:00:0e 10.100.0.4'], port_security=['fa:16:3e:8d:00:0e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e469a950-7044-48bc-a261-bc9effe2c2aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff5d31ffc7934838a461d6ac8aa546c9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dcbb2958-ef75-485e-8580-da48580c7577', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff181adb-f502-4aa2-aaa5-e1f9a56ea733, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=9d4276f1-91e9-418f-9a7b-844c83aea8f4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:39:46 compute-0 ovn_controller[153477]: 2025-11-25T16:39:46Z|00528|binding|INFO|Setting lport 9d4276f1-91e9-418f-9a7b-844c83aea8f4 ovn-installed in OVS
Nov 25 16:39:46 compute-0 ovn_controller[153477]: 2025-11-25T16:39:46Z|00529|binding|INFO|Setting lport 9d4276f1-91e9-418f-9a7b-844c83aea8f4 up in Southbound
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.305 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 9d4276f1-91e9-418f-9a7b-844c83aea8f4 in datapath e469a950-7044-48bc-a261-bc9effe2c2aa bound to our chassis
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.307 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e469a950-7044-48bc-a261-bc9effe2c2aa
Nov 25 16:39:46 compute-0 nova_compute[254092]: 2025-11-25 16:39:46.306 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.319 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a7355b6e-6981-4fb4-af78-9095889b17bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.320 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape469a950-71 in ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.322 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape469a950-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.322 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f0d242e9-ff99-4353-874e-5316f7f47271]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:46 compute-0 systemd-machined[216343]: New machine qemu-65-instance-00000038.
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.323 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3def81ed-0b6e-4b5e-979d-86d8e3bf163d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.333 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[d1406a0d-7bd1-40f3-8ffb-f001aac4a27a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:46 compute-0 systemd[1]: Started Virtual Machine qemu-65-instance-00000038.
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.356 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d7739188-6e20-4dd2-b7f7-b4dcfae29ab0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.397 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d8141ed5-ddf0-4ea2-9ef2-666bfa4130fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:46 compute-0 NetworkManager[48891]: <info>  [1764088786.4043] manager: (tape469a950-70): new Veth device (/org/freedesktop/NetworkManager/Devices/225)
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.403 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[03063a96-2088-4495-947e-a2321c3dbc67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.440 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[70ba8400-8abd-4933-843f-eaa56822c491]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.444 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[50751263-c972-4d6c-b5ad-d6b6fe4d7ca6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:46 compute-0 NetworkManager[48891]: <info>  [1764088786.4737] device (tape469a950-70): carrier: link connected
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.481 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[44c8fd3d-93b8-4347-9088-d454b7681c46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.507 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[622edd37-16d6-4a98-876f-6982b3f60a3e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape469a950-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:78:e4:a6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 149], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524406, 'reachable_time': 17584, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317517, 'error': None, 'target': 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.526 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a414599d-15b8-491f-8bc3-70e1efdb4b3b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe78:e4a6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 524406, 'tstamp': 524406}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317518, 'error': None, 'target': 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.546 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2fdf39a0-5ef0-4160-93ad-0f4460d8aa59]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape469a950-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:78:e4:a6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 149], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524406, 'reachable_time': 17584, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 317526, 'error': None, 'target': 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.587 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c04d4ea3-c375-4dd7-af4f-6463dc0216d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.668 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c16c94fb-5904-479f-8d35-baaf95d02f05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.670 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape469a950-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.670 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.671 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape469a950-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:39:46 compute-0 nova_compute[254092]: 2025-11-25 16:39:46.673 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:46 compute-0 NetworkManager[48891]: <info>  [1764088786.6742] manager: (tape469a950-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/226)
Nov 25 16:39:46 compute-0 kernel: tape469a950-70: entered promiscuous mode
Nov 25 16:39:46 compute-0 nova_compute[254092]: 2025-11-25 16:39:46.676 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.677 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape469a950-70, col_values=(('external_ids', {'iface-id': 'd55c2366-e0b4-4cb8-86a6-03b9430fb138'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:39:46 compute-0 ovn_controller[153477]: 2025-11-25T16:39:46Z|00530|binding|INFO|Releasing lport d55c2366-e0b4-4cb8-86a6-03b9430fb138 from this chassis (sb_readonly=0)
Nov 25 16:39:46 compute-0 nova_compute[254092]: 2025-11-25 16:39:46.678 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:46 compute-0 nova_compute[254092]: 2025-11-25 16:39:46.693 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.695 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e469a950-7044-48bc-a261-bc9effe2c2aa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e469a950-7044-48bc-a261-bc9effe2c2aa.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.696 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1b6b1f1e-b9b8-438b-a9b8-e79801346912]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.698 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-e469a950-7044-48bc-a261-bc9effe2c2aa
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/e469a950-7044-48bc-a261-bc9effe2c2aa.pid.haproxy
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID e469a950-7044-48bc-a261-bc9effe2c2aa
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:39:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:46.699 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'env', 'PROCESS_TAG=haproxy-e469a950-7044-48bc-a261-bc9effe2c2aa', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e469a950-7044-48bc-a261-bc9effe2c2aa.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:39:46 compute-0 nova_compute[254092]: 2025-11-25 16:39:46.741 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088786.7408016, bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:39:46 compute-0 nova_compute[254092]: 2025-11-25 16:39:46.743 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] VM Started (Lifecycle Event)
Nov 25 16:39:46 compute-0 nova_compute[254092]: 2025-11-25 16:39:46.766 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:39:46 compute-0 nova_compute[254092]: 2025-11-25 16:39:46.772 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088786.7411115, bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:39:46 compute-0 nova_compute[254092]: 2025-11-25 16:39:46.772 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] VM Paused (Lifecycle Event)
Nov 25 16:39:46 compute-0 nova_compute[254092]: 2025-11-25 16:39:46.802 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:39:46 compute-0 nova_compute[254092]: 2025-11-25 16:39:46.806 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:39:46 compute-0 nova_compute[254092]: 2025-11-25 16:39:46.828 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:39:47 compute-0 ceph-mon[74985]: pgmap v1622: 321 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 310 active+clean; 88 MiB data, 530 MiB used, 59 GiB / 60 GiB avail; 98 KiB/s rd, 3.4 MiB/s wr, 141 op/s
Nov 25 16:39:47 compute-0 nova_compute[254092]: 2025-11-25 16:39:47.043 254096 DEBUG nova.network.neutron [req-c5d17e26-cdd3-460c-91b2-1156b03cb4d2 req-e66ce7cf-49b3-4980-9efb-b967a65f44d0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Updated VIF entry in instance network info cache for port 9d4276f1-91e9-418f-9a7b-844c83aea8f4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:39:47 compute-0 nova_compute[254092]: 2025-11-25 16:39:47.043 254096 DEBUG nova.network.neutron [req-c5d17e26-cdd3-460c-91b2-1156b03cb4d2 req-e66ce7cf-49b3-4980-9efb-b967a65f44d0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Updating instance_info_cache with network_info: [{"id": "9d4276f1-91e9-418f-9a7b-844c83aea8f4", "address": "fa:16:3e:8d:00:0e", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d4276f1-91", "ovs_interfaceid": "9d4276f1-91e9-418f-9a7b-844c83aea8f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:39:47 compute-0 nova_compute[254092]: 2025-11-25 16:39:47.059 254096 DEBUG oslo_concurrency.lockutils [req-c5d17e26-cdd3-460c-91b2-1156b03cb4d2 req-e66ce7cf-49b3-4980-9efb-b967a65f44d0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:39:47 compute-0 podman[317591]: 2025-11-25 16:39:47.138256353 +0000 UTC m=+0.056402131 container create 2d0711218be98a429ca580d2bf75562fc8d99eacaa3e0fb4a51693f657d62c55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 25 16:39:47 compute-0 systemd[1]: Started libpod-conmon-2d0711218be98a429ca580d2bf75562fc8d99eacaa3e0fb4a51693f657d62c55.scope.
Nov 25 16:39:47 compute-0 podman[317591]: 2025-11-25 16:39:47.109893724 +0000 UTC m=+0.028039512 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:39:47 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:39:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54a6a91dce05b8ef142c987667939fc82ec5e5aaaa51b344a4808d5d7ff32d46/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:39:47 compute-0 podman[317591]: 2025-11-25 16:39:47.239508731 +0000 UTC m=+0.157654499 container init 2d0711218be98a429ca580d2bf75562fc8d99eacaa3e0fb4a51693f657d62c55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 25 16:39:47 compute-0 podman[317591]: 2025-11-25 16:39:47.246838569 +0000 UTC m=+0.164984327 container start 2d0711218be98a429ca580d2bf75562fc8d99eacaa3e0fb4a51693f657d62c55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:39:47 compute-0 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[317606]: [NOTICE]   (317611) : New worker (317613) forked
Nov 25 16:39:47 compute-0 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[317606]: [NOTICE]   (317611) : Loading success.
Nov 25 16:39:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1623: 321 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 310 active+clean; 88 MiB data, 530 MiB used, 59 GiB / 60 GiB avail; 78 KiB/s rd, 2.7 MiB/s wr, 112 op/s
Nov 25 16:39:48 compute-0 nova_compute[254092]: 2025-11-25 16:39:48.088 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:48 compute-0 nova_compute[254092]: 2025-11-25 16:39:48.793 254096 DEBUG nova.compute.manager [req-67430b87-c61b-41d5-bf0f-84458bf5677d req-78cdd824-73c9-4609-a6c6-e59578d0298b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Received event network-vif-plugged-9d4276f1-91e9-418f-9a7b-844c83aea8f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:39:48 compute-0 nova_compute[254092]: 2025-11-25 16:39:48.794 254096 DEBUG oslo_concurrency.lockutils [req-67430b87-c61b-41d5-bf0f-84458bf5677d req-78cdd824-73c9-4609-a6c6-e59578d0298b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:39:48 compute-0 nova_compute[254092]: 2025-11-25 16:39:48.794 254096 DEBUG oslo_concurrency.lockutils [req-67430b87-c61b-41d5-bf0f-84458bf5677d req-78cdd824-73c9-4609-a6c6-e59578d0298b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:39:48 compute-0 nova_compute[254092]: 2025-11-25 16:39:48.795 254096 DEBUG oslo_concurrency.lockutils [req-67430b87-c61b-41d5-bf0f-84458bf5677d req-78cdd824-73c9-4609-a6c6-e59578d0298b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:48 compute-0 nova_compute[254092]: 2025-11-25 16:39:48.795 254096 DEBUG nova.compute.manager [req-67430b87-c61b-41d5-bf0f-84458bf5677d req-78cdd824-73c9-4609-a6c6-e59578d0298b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Processing event network-vif-plugged-9d4276f1-91e9-418f-9a7b-844c83aea8f4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:39:48 compute-0 nova_compute[254092]: 2025-11-25 16:39:48.797 254096 DEBUG nova.compute.manager [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:39:48 compute-0 nova_compute[254092]: 2025-11-25 16:39:48.802 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088788.8021367, bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:39:48 compute-0 nova_compute[254092]: 2025-11-25 16:39:48.803 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] VM Resumed (Lifecycle Event)
Nov 25 16:39:48 compute-0 nova_compute[254092]: 2025-11-25 16:39:48.806 254096 DEBUG nova.virt.libvirt.driver [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:39:48 compute-0 nova_compute[254092]: 2025-11-25 16:39:48.811 254096 INFO nova.virt.libvirt.driver [-] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Instance spawned successfully.
Nov 25 16:39:48 compute-0 nova_compute[254092]: 2025-11-25 16:39:48.812 254096 DEBUG nova.virt.libvirt.driver [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:39:48 compute-0 nova_compute[254092]: 2025-11-25 16:39:48.845 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:39:48 compute-0 nova_compute[254092]: 2025-11-25 16:39:48.855 254096 DEBUG nova.virt.libvirt.driver [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:39:48 compute-0 nova_compute[254092]: 2025-11-25 16:39:48.856 254096 DEBUG nova.virt.libvirt.driver [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:39:48 compute-0 nova_compute[254092]: 2025-11-25 16:39:48.857 254096 DEBUG nova.virt.libvirt.driver [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:39:48 compute-0 nova_compute[254092]: 2025-11-25 16:39:48.858 254096 DEBUG nova.virt.libvirt.driver [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:39:48 compute-0 nova_compute[254092]: 2025-11-25 16:39:48.858 254096 DEBUG nova.virt.libvirt.driver [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:39:48 compute-0 nova_compute[254092]: 2025-11-25 16:39:48.859 254096 DEBUG nova.virt.libvirt.driver [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:39:48 compute-0 nova_compute[254092]: 2025-11-25 16:39:48.867 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:39:48 compute-0 nova_compute[254092]: 2025-11-25 16:39:48.913 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:39:49 compute-0 ceph-mon[74985]: pgmap v1623: 321 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 310 active+clean; 88 MiB data, 530 MiB used, 59 GiB / 60 GiB avail; 78 KiB/s rd, 2.7 MiB/s wr, 112 op/s
Nov 25 16:39:49 compute-0 nova_compute[254092]: 2025-11-25 16:39:49.258 254096 INFO nova.compute.manager [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Took 9.60 seconds to spawn the instance on the hypervisor.
Nov 25 16:39:49 compute-0 nova_compute[254092]: 2025-11-25 16:39:49.259 254096 DEBUG nova.compute.manager [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:39:49 compute-0 nova_compute[254092]: 2025-11-25 16:39:49.366 254096 INFO nova.compute.manager [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Took 10.78 seconds to build instance.
Nov 25 16:39:49 compute-0 nova_compute[254092]: 2025-11-25 16:39:49.388 254096 DEBUG oslo_concurrency.lockutils [None req-6d950193-27e6-4038-916c-a7c03c28af61 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.052s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1624: 321 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 310 active+clean; 88 MiB data, 530 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 1.7 MiB/s wr, 89 op/s
Nov 25 16:39:50 compute-0 nova_compute[254092]: 2025-11-25 16:39:50.042 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:50 compute-0 ceph-mon[74985]: pgmap v1624: 321 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 310 active+clean; 88 MiB data, 530 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 1.7 MiB/s wr, 89 op/s
Nov 25 16:39:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:39:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e219 do_prune osdmap full prune enabled
Nov 25 16:39:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e220 e220: 3 total, 3 up, 3 in
Nov 25 16:39:50 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e220: 3 total, 3 up, 3 in
Nov 25 16:39:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:39:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:39:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:39:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:39:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034834989744090644 of space, bias 1.0, pg target 0.10450496923227193 quantized to 32 (current 32)
Nov 25 16:39:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:39:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:39:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:39:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:39:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:39:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 25 16:39:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:39:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 16:39:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:39:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:39:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:39:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 16:39:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:39:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 16:39:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:39:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:39:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:39:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 16:39:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1626: 321 pgs: 321 active+clean; 88 MiB data, 530 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 212 op/s
Nov 25 16:39:51 compute-0 ceph-mon[74985]: osdmap e220: 3 total, 3 up, 3 in
Nov 25 16:39:51 compute-0 nova_compute[254092]: 2025-11-25 16:39:51.946 254096 DEBUG nova.compute.manager [req-26a86953-a8cc-40f6-a19c-371dfeccf6d4 req-3591addf-6091-4d84-b25f-f60e0d234f08 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Received event network-vif-plugged-9d4276f1-91e9-418f-9a7b-844c83aea8f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:39:51 compute-0 nova_compute[254092]: 2025-11-25 16:39:51.946 254096 DEBUG oslo_concurrency.lockutils [req-26a86953-a8cc-40f6-a19c-371dfeccf6d4 req-3591addf-6091-4d84-b25f-f60e0d234f08 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:39:51 compute-0 nova_compute[254092]: 2025-11-25 16:39:51.947 254096 DEBUG oslo_concurrency.lockutils [req-26a86953-a8cc-40f6-a19c-371dfeccf6d4 req-3591addf-6091-4d84-b25f-f60e0d234f08 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:39:51 compute-0 nova_compute[254092]: 2025-11-25 16:39:51.947 254096 DEBUG oslo_concurrency.lockutils [req-26a86953-a8cc-40f6-a19c-371dfeccf6d4 req-3591addf-6091-4d84-b25f-f60e0d234f08 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:51 compute-0 nova_compute[254092]: 2025-11-25 16:39:51.947 254096 DEBUG nova.compute.manager [req-26a86953-a8cc-40f6-a19c-371dfeccf6d4 req-3591addf-6091-4d84-b25f-f60e0d234f08 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] No waiting events found dispatching network-vif-plugged-9d4276f1-91e9-418f-9a7b-844c83aea8f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:39:51 compute-0 nova_compute[254092]: 2025-11-25 16:39:51.947 254096 WARNING nova.compute.manager [req-26a86953-a8cc-40f6-a19c-371dfeccf6d4 req-3591addf-6091-4d84-b25f-f60e0d234f08 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Received unexpected event network-vif-plugged-9d4276f1-91e9-418f-9a7b-844c83aea8f4 for instance with vm_state active and task_state None.
Nov 25 16:39:52 compute-0 ceph-mon[74985]: pgmap v1626: 321 pgs: 321 active+clean; 88 MiB data, 530 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 212 op/s
Nov 25 16:39:52 compute-0 nova_compute[254092]: 2025-11-25 16:39:52.618 254096 DEBUG nova.objects.instance [None req-bf505fd5-32a3-49fb-b25c-5a85f27cdb1d 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lazy-loading 'pci_devices' on Instance uuid bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:39:52 compute-0 nova_compute[254092]: 2025-11-25 16:39:52.644 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088792.6446714, bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:39:52 compute-0 nova_compute[254092]: 2025-11-25 16:39:52.645 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] VM Paused (Lifecycle Event)
Nov 25 16:39:52 compute-0 nova_compute[254092]: 2025-11-25 16:39:52.663 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:39:52 compute-0 nova_compute[254092]: 2025-11-25 16:39:52.666 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:39:52 compute-0 nova_compute[254092]: 2025-11-25 16:39:52.687 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] During sync_power_state the instance has a pending task (suspending). Skip.
Nov 25 16:39:53 compute-0 nova_compute[254092]: 2025-11-25 16:39:53.146 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:53 compute-0 kernel: tap9d4276f1-91 (unregistering): left promiscuous mode
Nov 25 16:39:53 compute-0 NetworkManager[48891]: <info>  [1764088793.1915] device (tap9d4276f1-91): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:39:53 compute-0 ovn_controller[153477]: 2025-11-25T16:39:53Z|00531|binding|INFO|Releasing lport 9d4276f1-91e9-418f-9a7b-844c83aea8f4 from this chassis (sb_readonly=0)
Nov 25 16:39:53 compute-0 ovn_controller[153477]: 2025-11-25T16:39:53Z|00532|binding|INFO|Setting lport 9d4276f1-91e9-418f-9a7b-844c83aea8f4 down in Southbound
Nov 25 16:39:53 compute-0 ovn_controller[153477]: 2025-11-25T16:39:53Z|00533|binding|INFO|Removing iface tap9d4276f1-91 ovn-installed in OVS
Nov 25 16:39:53 compute-0 nova_compute[254092]: 2025-11-25 16:39:53.201 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:53 compute-0 nova_compute[254092]: 2025-11-25 16:39:53.203 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:53.216 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:00:0e 10.100.0.4'], port_security=['fa:16:3e:8d:00:0e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e469a950-7044-48bc-a261-bc9effe2c2aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff5d31ffc7934838a461d6ac8aa546c9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dcbb2958-ef75-485e-8580-da48580c7577', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff181adb-f502-4aa2-aaa5-e1f9a56ea733, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=9d4276f1-91e9-418f-9a7b-844c83aea8f4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:39:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:53.219 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 9d4276f1-91e9-418f-9a7b-844c83aea8f4 in datapath e469a950-7044-48bc-a261-bc9effe2c2aa unbound from our chassis
Nov 25 16:39:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:53.220 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e469a950-7044-48bc-a261-bc9effe2c2aa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:39:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:53.222 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8614d029-0156-4486-a7d0-0ba85942f748]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:53.225 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa namespace which is not needed anymore
Nov 25 16:39:53 compute-0 nova_compute[254092]: 2025-11-25 16:39:53.225 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:53 compute-0 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d00000038.scope: Deactivated successfully.
Nov 25 16:39:53 compute-0 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d00000038.scope: Consumed 4.423s CPU time.
Nov 25 16:39:53 compute-0 systemd-machined[216343]: Machine qemu-65-instance-00000038 terminated.
Nov 25 16:39:53 compute-0 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[317606]: [NOTICE]   (317611) : haproxy version is 2.8.14-c23fe91
Nov 25 16:39:53 compute-0 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[317606]: [NOTICE]   (317611) : path to executable is /usr/sbin/haproxy
Nov 25 16:39:53 compute-0 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[317606]: [WARNING]  (317611) : Exiting Master process...
Nov 25 16:39:53 compute-0 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[317606]: [ALERT]    (317611) : Current worker (317613) exited with code 143 (Terminated)
Nov 25 16:39:53 compute-0 neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa[317606]: [WARNING]  (317611) : All workers exited. Exiting... (0)
Nov 25 16:39:53 compute-0 nova_compute[254092]: 2025-11-25 16:39:53.364 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:53 compute-0 systemd[1]: libpod-2d0711218be98a429ca580d2bf75562fc8d99eacaa3e0fb4a51693f657d62c55.scope: Deactivated successfully.
Nov 25 16:39:53 compute-0 nova_compute[254092]: 2025-11-25 16:39:53.371 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:53 compute-0 podman[317649]: 2025-11-25 16:39:53.374507458 +0000 UTC m=+0.055564478 container died 2d0711218be98a429ca580d2bf75562fc8d99eacaa3e0fb4a51693f657d62c55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:39:53 compute-0 nova_compute[254092]: 2025-11-25 16:39:53.381 254096 DEBUG nova.compute.manager [None req-bf505fd5-32a3-49fb-b25c-5a85f27cdb1d 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:39:53 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2d0711218be98a429ca580d2bf75562fc8d99eacaa3e0fb4a51693f657d62c55-userdata-shm.mount: Deactivated successfully.
Nov 25 16:39:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-54a6a91dce05b8ef142c987667939fc82ec5e5aaaa51b344a4808d5d7ff32d46-merged.mount: Deactivated successfully.
Nov 25 16:39:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1627: 321 pgs: 321 active+clean; 88 MiB data, 530 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 806 KiB/s wr, 152 op/s
Nov 25 16:39:53 compute-0 podman[317649]: 2025-11-25 16:39:53.592021628 +0000 UTC m=+0.273078638 container cleanup 2d0711218be98a429ca580d2bf75562fc8d99eacaa3e0fb4a51693f657d62c55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:39:53 compute-0 systemd[1]: libpod-conmon-2d0711218be98a429ca580d2bf75562fc8d99eacaa3e0fb4a51693f657d62c55.scope: Deactivated successfully.
Nov 25 16:39:53 compute-0 podman[317685]: 2025-11-25 16:39:53.70521123 +0000 UTC m=+0.086343634 container remove 2d0711218be98a429ca580d2bf75562fc8d99eacaa3e0fb4a51693f657d62c55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 16:39:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:53.714 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c078ee4f-94a2-454a-bdc1-f012f40511f0]: (4, ('Tue Nov 25 04:39:53 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa (2d0711218be98a429ca580d2bf75562fc8d99eacaa3e0fb4a51693f657d62c55)\n2d0711218be98a429ca580d2bf75562fc8d99eacaa3e0fb4a51693f657d62c55\nTue Nov 25 04:39:53 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa (2d0711218be98a429ca580d2bf75562fc8d99eacaa3e0fb4a51693f657d62c55)\n2d0711218be98a429ca580d2bf75562fc8d99eacaa3e0fb4a51693f657d62c55\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:53 compute-0 nova_compute[254092]: 2025-11-25 16:39:53.714 254096 DEBUG oslo_concurrency.lockutils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Acquiring lock "440746e9-455f-4a2f-8412-a24d1c93cb21" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:39:53 compute-0 nova_compute[254092]: 2025-11-25 16:39:53.715 254096 DEBUG oslo_concurrency.lockutils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Lock "440746e9-455f-4a2f-8412-a24d1c93cb21" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:39:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:53.716 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cd4720ae-b149-4a3b-a796-644a0442c5a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:53.717 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape469a950-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:39:53 compute-0 nova_compute[254092]: 2025-11-25 16:39:53.719 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:53 compute-0 kernel: tape469a950-70: left promiscuous mode
Nov 25 16:39:53 compute-0 nova_compute[254092]: 2025-11-25 16:39:53.730 254096 DEBUG nova.compute.manager [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:39:53 compute-0 nova_compute[254092]: 2025-11-25 16:39:53.739 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:53.742 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e232207e-e970-4dd9-aa79-33065cff92ea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:53.753 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[55f38787-f45d-4221-a05e-ce9d53d77b6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:53.754 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[461b5c9b-6080-432b-a932-01ab94969f2b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:53.770 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4092d35a-bce7-4791-80ca-4e128c6ffe8d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524398, 'reachable_time': 20227, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317703, 'error': None, 'target': 'ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:53.774 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e469a950-7044-48bc-a261-bc9effe2c2aa deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:39:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:39:53.774 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[17e345ca-9b03-464e-aae9-87ec0b3fe7d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:39:53 compute-0 systemd[1]: run-netns-ovnmeta\x2de469a950\x2d7044\x2d48bc\x2da261\x2dbc9effe2c2aa.mount: Deactivated successfully.
Nov 25 16:39:53 compute-0 nova_compute[254092]: 2025-11-25 16:39:53.816 254096 DEBUG oslo_concurrency.lockutils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:39:53 compute-0 nova_compute[254092]: 2025-11-25 16:39:53.816 254096 DEBUG oslo_concurrency.lockutils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:39:53 compute-0 nova_compute[254092]: 2025-11-25 16:39:53.825 254096 DEBUG nova.virt.hardware [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:39:53 compute-0 nova_compute[254092]: 2025-11-25 16:39:53.826 254096 INFO nova.compute.claims [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:39:53 compute-0 nova_compute[254092]: 2025-11-25 16:39:53.900 254096 DEBUG nova.scheduler.client.report [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Refreshing inventories for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 16:39:53 compute-0 nova_compute[254092]: 2025-11-25 16:39:53.918 254096 DEBUG nova.scheduler.client.report [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Updating ProviderTree inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 16:39:53 compute-0 nova_compute[254092]: 2025-11-25 16:39:53.919 254096 DEBUG nova.compute.provider_tree [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 16:39:53 compute-0 nova_compute[254092]: 2025-11-25 16:39:53.936 254096 DEBUG nova.scheduler.client.report [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Refreshing aggregate associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 16:39:53 compute-0 nova_compute[254092]: 2025-11-25 16:39:53.974 254096 DEBUG nova.scheduler.client.report [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Refreshing trait associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 16:39:54 compute-0 nova_compute[254092]: 2025-11-25 16:39:54.041 254096 DEBUG oslo_concurrency.processutils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:39:54 compute-0 nova_compute[254092]: 2025-11-25 16:39:54.087 254096 DEBUG nova.compute.manager [req-2f28b977-2ed5-4c8b-9a6a-661092f27b1d req-bb3492c4-5d62-4d2e-a052-d08844a8d352 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Received event network-vif-unplugged-9d4276f1-91e9-418f-9a7b-844c83aea8f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:39:54 compute-0 nova_compute[254092]: 2025-11-25 16:39:54.087 254096 DEBUG oslo_concurrency.lockutils [req-2f28b977-2ed5-4c8b-9a6a-661092f27b1d req-bb3492c4-5d62-4d2e-a052-d08844a8d352 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:39:54 compute-0 nova_compute[254092]: 2025-11-25 16:39:54.088 254096 DEBUG oslo_concurrency.lockutils [req-2f28b977-2ed5-4c8b-9a6a-661092f27b1d req-bb3492c4-5d62-4d2e-a052-d08844a8d352 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:39:54 compute-0 nova_compute[254092]: 2025-11-25 16:39:54.088 254096 DEBUG oslo_concurrency.lockutils [req-2f28b977-2ed5-4c8b-9a6a-661092f27b1d req-bb3492c4-5d62-4d2e-a052-d08844a8d352 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:54 compute-0 nova_compute[254092]: 2025-11-25 16:39:54.088 254096 DEBUG nova.compute.manager [req-2f28b977-2ed5-4c8b-9a6a-661092f27b1d req-bb3492c4-5d62-4d2e-a052-d08844a8d352 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] No waiting events found dispatching network-vif-unplugged-9d4276f1-91e9-418f-9a7b-844c83aea8f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:39:54 compute-0 nova_compute[254092]: 2025-11-25 16:39:54.088 254096 WARNING nova.compute.manager [req-2f28b977-2ed5-4c8b-9a6a-661092f27b1d req-bb3492c4-5d62-4d2e-a052-d08844a8d352 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Received unexpected event network-vif-unplugged-9d4276f1-91e9-418f-9a7b-844c83aea8f4 for instance with vm_state suspended and task_state None.
Nov 25 16:39:54 compute-0 nova_compute[254092]: 2025-11-25 16:39:54.089 254096 DEBUG nova.compute.manager [req-2f28b977-2ed5-4c8b-9a6a-661092f27b1d req-bb3492c4-5d62-4d2e-a052-d08844a8d352 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Received event network-vif-plugged-9d4276f1-91e9-418f-9a7b-844c83aea8f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:39:54 compute-0 nova_compute[254092]: 2025-11-25 16:39:54.089 254096 DEBUG oslo_concurrency.lockutils [req-2f28b977-2ed5-4c8b-9a6a-661092f27b1d req-bb3492c4-5d62-4d2e-a052-d08844a8d352 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:39:54 compute-0 nova_compute[254092]: 2025-11-25 16:39:54.089 254096 DEBUG oslo_concurrency.lockutils [req-2f28b977-2ed5-4c8b-9a6a-661092f27b1d req-bb3492c4-5d62-4d2e-a052-d08844a8d352 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:39:54 compute-0 nova_compute[254092]: 2025-11-25 16:39:54.089 254096 DEBUG oslo_concurrency.lockutils [req-2f28b977-2ed5-4c8b-9a6a-661092f27b1d req-bb3492c4-5d62-4d2e-a052-d08844a8d352 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:54 compute-0 nova_compute[254092]: 2025-11-25 16:39:54.089 254096 DEBUG nova.compute.manager [req-2f28b977-2ed5-4c8b-9a6a-661092f27b1d req-bb3492c4-5d62-4d2e-a052-d08844a8d352 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] No waiting events found dispatching network-vif-plugged-9d4276f1-91e9-418f-9a7b-844c83aea8f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:39:54 compute-0 nova_compute[254092]: 2025-11-25 16:39:54.090 254096 WARNING nova.compute.manager [req-2f28b977-2ed5-4c8b-9a6a-661092f27b1d req-bb3492c4-5d62-4d2e-a052-d08844a8d352 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Received unexpected event network-vif-plugged-9d4276f1-91e9-418f-9a7b-844c83aea8f4 for instance with vm_state suspended and task_state None.
Nov 25 16:39:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:39:54 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3641268344' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:39:54 compute-0 nova_compute[254092]: 2025-11-25 16:39:54.502 254096 DEBUG oslo_concurrency.processutils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:39:54 compute-0 nova_compute[254092]: 2025-11-25 16:39:54.508 254096 DEBUG nova.compute.provider_tree [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:39:54 compute-0 nova_compute[254092]: 2025-11-25 16:39:54.521 254096 DEBUG nova.scheduler.client.report [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:39:54 compute-0 nova_compute[254092]: 2025-11-25 16:39:54.546 254096 DEBUG oslo_concurrency.lockutils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:54 compute-0 nova_compute[254092]: 2025-11-25 16:39:54.547 254096 DEBUG nova.compute.manager [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:39:54 compute-0 ceph-mon[74985]: pgmap v1627: 321 pgs: 321 active+clean; 88 MiB data, 530 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 806 KiB/s wr, 152 op/s
Nov 25 16:39:54 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3641268344' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:39:54 compute-0 nova_compute[254092]: 2025-11-25 16:39:54.606 254096 DEBUG nova.compute.manager [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:39:54 compute-0 nova_compute[254092]: 2025-11-25 16:39:54.606 254096 DEBUG nova.network.neutron [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:39:54 compute-0 nova_compute[254092]: 2025-11-25 16:39:54.627 254096 INFO nova.virt.libvirt.driver [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:39:54 compute-0 nova_compute[254092]: 2025-11-25 16:39:54.644 254096 DEBUG nova.compute.manager [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:39:54 compute-0 nova_compute[254092]: 2025-11-25 16:39:54.756 254096 DEBUG nova.compute.manager [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:39:54 compute-0 nova_compute[254092]: 2025-11-25 16:39:54.758 254096 DEBUG nova.virt.libvirt.driver [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:39:54 compute-0 nova_compute[254092]: 2025-11-25 16:39:54.758 254096 INFO nova.virt.libvirt.driver [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Creating image(s)
Nov 25 16:39:54 compute-0 nova_compute[254092]: 2025-11-25 16:39:54.783 254096 DEBUG nova.storage.rbd_utils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] rbd image 440746e9-455f-4a2f-8412-a24d1c93cb21_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:39:54 compute-0 nova_compute[254092]: 2025-11-25 16:39:54.810 254096 DEBUG nova.storage.rbd_utils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] rbd image 440746e9-455f-4a2f-8412-a24d1c93cb21_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:39:54 compute-0 nova_compute[254092]: 2025-11-25 16:39:54.829 254096 DEBUG nova.storage.rbd_utils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] rbd image 440746e9-455f-4a2f-8412-a24d1c93cb21_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:39:54 compute-0 nova_compute[254092]: 2025-11-25 16:39:54.832 254096 DEBUG oslo_concurrency.lockutils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Acquiring lock "15cb5c745a0e602074dbadf61d84b40262c4f70d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:39:54 compute-0 nova_compute[254092]: 2025-11-25 16:39:54.833 254096 DEBUG oslo_concurrency.lockutils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Lock "15cb5c745a0e602074dbadf61d84b40262c4f70d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.010 254096 DEBUG nova.network.neutron [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.011 254096 DEBUG nova.compute.manager [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.044 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.122 254096 DEBUG nova.virt.libvirt.imagebackend [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Image locations are: [{'url': 'rbd://d82baeae-c742-50a4-b8f6-b5257c8a2c92/images/be0cdf5e-9d4a-430a-ba46-c7875458b1f8/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://d82baeae-c742-50a4-b8f6-b5257c8a2c92/images/be0cdf5e-9d4a-430a-ba46-c7875458b1f8/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.181 254096 DEBUG nova.virt.libvirt.imagebackend [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Selected location: {'url': 'rbd://d82baeae-c742-50a4-b8f6-b5257c8a2c92/images/be0cdf5e-9d4a-430a-ba46-c7875458b1f8/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.182 254096 DEBUG nova.storage.rbd_utils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] cloning images/be0cdf5e-9d4a-430a-ba46-c7875458b1f8@snap to None/440746e9-455f-4a2f-8412-a24d1c93cb21_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 16:39:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 16:39:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1385576439' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:39:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 16:39:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1385576439' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.285 254096 DEBUG oslo_concurrency.lockutils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Lock "15cb5c745a0e602074dbadf61d84b40262c4f70d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.452s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.430 254096 DEBUG nova.storage.rbd_utils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] resizing rbd image 440746e9-455f-4a2f-8412-a24d1c93cb21_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.498 254096 DEBUG oslo_concurrency.lockutils [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.498 254096 DEBUG oslo_concurrency.lockutils [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.499 254096 DEBUG oslo_concurrency.lockutils [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.499 254096 DEBUG oslo_concurrency.lockutils [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.499 254096 DEBUG oslo_concurrency.lockutils [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.501 254096 INFO nova.compute.manager [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Terminating instance
Nov 25 16:39:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:39:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e220 do_prune osdmap full prune enabled
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.503 254096 DEBUG nova.compute.manager [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:39:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1628: 321 pgs: 321 active+clean; 88 MiB data, 530 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 120 op/s
Nov 25 16:39:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e221 e221: 3 total, 3 up, 3 in
Nov 25 16:39:55 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e221: 3 total, 3 up, 3 in
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.513 254096 DEBUG nova.objects.instance [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Lazy-loading 'migration_context' on Instance uuid 440746e9-455f-4a2f-8412-a24d1c93cb21 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.522 254096 INFO nova.virt.libvirt.driver [-] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Instance destroyed successfully.
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.523 254096 DEBUG nova.objects.instance [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lazy-loading 'resources' on Instance uuid bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.527 254096 DEBUG nova.virt.libvirt.driver [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.527 254096 DEBUG nova.virt.libvirt.driver [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Ensure instance console log exists: /var/lib/nova/instances/440746e9-455f-4a2f-8412-a24d1c93cb21/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.527 254096 DEBUG oslo_concurrency.lockutils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.527 254096 DEBUG oslo_concurrency.lockutils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.528 254096 DEBUG oslo_concurrency.lockutils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.529 254096 DEBUG nova.virt.libvirt.driver [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='6912c9181ae0be6aa2d2706c24ed15a5',container_format='bare',created_at=2025-11-25T16:39:49Z,direct_url=<?>,disk_format='raw',id=be0cdf5e-9d4a-430a-ba46-c7875458b1f8,min_disk=0,min_ram=0,name='tempest-image-dependency-test-1147967949',owner='805768b696874b00aa9b3bac89550ed7',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2025-11-25T16:39:50Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': 'be0cdf5e-9d4a-430a-ba46-c7875458b1f8'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.533 254096 WARNING nova.virt.libvirt.driver [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.535 254096 DEBUG nova.virt.libvirt.vif [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:39:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-2082208700',display_name='tempest-DeleteServersTestJSON-server-2082208700',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-2082208700',id=56,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:39:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='ff5d31ffc7934838a461d6ac8aa546c9',ramdisk_id='',reservation_id='r-w2u0qqei',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-150185898',owner_user_name='tempest-DeleteServersTestJSON-150185898-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:39:53Z,user_data=None,user_id='788efa7ba65347b69663f4ea7ba4cd9d',uuid=bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "9d4276f1-91e9-418f-9a7b-844c83aea8f4", "address": "fa:16:3e:8d:00:0e", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d4276f1-91", "ovs_interfaceid": "9d4276f1-91e9-418f-9a7b-844c83aea8f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.536 254096 DEBUG nova.network.os_vif_util [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converting VIF {"id": "9d4276f1-91e9-418f-9a7b-844c83aea8f4", "address": "fa:16:3e:8d:00:0e", "network": {"id": "e469a950-7044-48bc-a261-bc9effe2c2aa", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-2042384638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff5d31ffc7934838a461d6ac8aa546c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d4276f1-91", "ovs_interfaceid": "9d4276f1-91e9-418f-9a7b-844c83aea8f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.536 254096 DEBUG nova.network.os_vif_util [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:00:0e,bridge_name='br-int',has_traffic_filtering=True,id=9d4276f1-91e9-418f-9a7b-844c83aea8f4,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d4276f1-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.537 254096 DEBUG os_vif [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:00:0e,bridge_name='br-int',has_traffic_filtering=True,id=9d4276f1-91e9-418f-9a7b-844c83aea8f4,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d4276f1-91') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.539 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.540 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d4276f1-91, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.544 254096 DEBUG nova.virt.libvirt.host [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.545 254096 DEBUG nova.virt.libvirt.host [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.592 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.594 254096 DEBUG nova.virt.libvirt.host [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.594 254096 DEBUG nova.virt.libvirt.host [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.595 254096 DEBUG nova.virt.libvirt.driver [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:39:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/1385576439' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:39:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/1385576439' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:39:55 compute-0 ceph-mon[74985]: osdmap e221: 3 total, 3 up, 3 in
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.595 254096 DEBUG nova.virt.hardware [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='6912c9181ae0be6aa2d2706c24ed15a5',container_format='bare',created_at=2025-11-25T16:39:49Z,direct_url=<?>,disk_format='raw',id=be0cdf5e-9d4a-430a-ba46-c7875458b1f8,min_disk=0,min_ram=0,name='tempest-image-dependency-test-1147967949',owner='805768b696874b00aa9b3bac89550ed7',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2025-11-25T16:39:50Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.596 254096 DEBUG nova.virt.hardware [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.596 254096 DEBUG nova.virt.hardware [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.596 254096 DEBUG nova.virt.hardware [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.596 254096 DEBUG nova.virt.hardware [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.597 254096 DEBUG nova.virt.hardware [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.597 254096 DEBUG nova.virt.hardware [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.597 254096 DEBUG nova.virt.hardware [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.597 254096 DEBUG nova.virt.hardware [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.598 254096 DEBUG nova.virt.hardware [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.598 254096 DEBUG nova.virt.hardware [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.601 254096 DEBUG oslo_concurrency.processutils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.634 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:55 compute-0 nova_compute[254092]: 2025-11-25 16:39:55.640 254096 INFO os_vif [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:00:0e,bridge_name='br-int',has_traffic_filtering=True,id=9d4276f1-91e9-418f-9a7b-844c83aea8f4,network=Network(e469a950-7044-48bc-a261-bc9effe2c2aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d4276f1-91')
Nov 25 16:39:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:39:56 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3835229537' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:39:56 compute-0 nova_compute[254092]: 2025-11-25 16:39:56.095 254096 INFO nova.virt.libvirt.driver [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Deleting instance files /var/lib/nova/instances/bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68_del
Nov 25 16:39:56 compute-0 nova_compute[254092]: 2025-11-25 16:39:56.096 254096 INFO nova.virt.libvirt.driver [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Deletion of /var/lib/nova/instances/bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68_del complete
Nov 25 16:39:56 compute-0 nova_compute[254092]: 2025-11-25 16:39:56.104 254096 DEBUG oslo_concurrency.processutils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:39:56 compute-0 nova_compute[254092]: 2025-11-25 16:39:56.121 254096 DEBUG nova.storage.rbd_utils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] rbd image 440746e9-455f-4a2f-8412-a24d1c93cb21_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:39:56 compute-0 nova_compute[254092]: 2025-11-25 16:39:56.124 254096 DEBUG oslo_concurrency.processutils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:39:56 compute-0 nova_compute[254092]: 2025-11-25 16:39:56.336 254096 INFO nova.compute.manager [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Took 0.83 seconds to destroy the instance on the hypervisor.
Nov 25 16:39:56 compute-0 nova_compute[254092]: 2025-11-25 16:39:56.337 254096 DEBUG oslo.service.loopingcall [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:39:56 compute-0 nova_compute[254092]: 2025-11-25 16:39:56.337 254096 DEBUG nova.compute.manager [-] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:39:56 compute-0 nova_compute[254092]: 2025-11-25 16:39:56.337 254096 DEBUG nova.network.neutron [-] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:39:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:39:56 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2638881181' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:39:56 compute-0 nova_compute[254092]: 2025-11-25 16:39:56.571 254096 DEBUG oslo_concurrency.processutils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:39:56 compute-0 nova_compute[254092]: 2025-11-25 16:39:56.572 254096 DEBUG nova.objects.instance [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 440746e9-455f-4a2f-8412-a24d1c93cb21 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:39:56 compute-0 nova_compute[254092]: 2025-11-25 16:39:56.590 254096 DEBUG nova.virt.libvirt.driver [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:39:56 compute-0 nova_compute[254092]:   <uuid>440746e9-455f-4a2f-8412-a24d1c93cb21</uuid>
Nov 25 16:39:56 compute-0 nova_compute[254092]:   <name>instance-00000039</name>
Nov 25 16:39:56 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:39:56 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:39:56 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:39:56 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:       <nova:name>instance-depend-image</nova:name>
Nov 25 16:39:56 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:39:55</nova:creationTime>
Nov 25 16:39:56 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:39:56 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:39:56 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:39:56 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:39:56 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:39:56 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:39:56 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:39:56 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:39:56 compute-0 nova_compute[254092]:         <nova:user uuid="810b628f1c824b55930a996d843cc85f">tempest-ImageDependencyTests-2018589381-project-member</nova:user>
Nov 25 16:39:56 compute-0 nova_compute[254092]:         <nova:project uuid="805768b696874b00aa9b3bac89550ed7">tempest-ImageDependencyTests-2018589381</nova:project>
Nov 25 16:39:56 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:39:56 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="be0cdf5e-9d4a-430a-ba46-c7875458b1f8"/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:       <nova:ports/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:39:56 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:39:56 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:39:56 compute-0 nova_compute[254092]:     <system>
Nov 25 16:39:56 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:39:56 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:39:56 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:39:56 compute-0 nova_compute[254092]:       <entry name="serial">440746e9-455f-4a2f-8412-a24d1c93cb21</entry>
Nov 25 16:39:56 compute-0 nova_compute[254092]:       <entry name="uuid">440746e9-455f-4a2f-8412-a24d1c93cb21</entry>
Nov 25 16:39:56 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     </system>
Nov 25 16:39:56 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:39:56 compute-0 nova_compute[254092]:   <os>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:   </os>
Nov 25 16:39:56 compute-0 nova_compute[254092]:   <features>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:   </features>
Nov 25 16:39:56 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:39:56 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:39:56 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:39:56 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:39:56 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:39:56 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/440746e9-455f-4a2f-8412-a24d1c93cb21_disk">
Nov 25 16:39:56 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:       </source>
Nov 25 16:39:56 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:39:56 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:39:56 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:39:56 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/440746e9-455f-4a2f-8412-a24d1c93cb21_disk.config">
Nov 25 16:39:56 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:       </source>
Nov 25 16:39:56 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:39:56 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:39:56 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:39:56 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/440746e9-455f-4a2f-8412-a24d1c93cb21/console.log" append="off"/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     <video>
Nov 25 16:39:56 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     </video>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:39:56 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:39:56 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:39:56 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:39:56 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:39:56 compute-0 nova_compute[254092]: </domain>
Nov 25 16:39:56 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:39:56 compute-0 ceph-mon[74985]: pgmap v1628: 321 pgs: 321 active+clean; 88 MiB data, 530 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 120 op/s
Nov 25 16:39:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3835229537' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:39:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2638881181' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:39:56 compute-0 nova_compute[254092]: 2025-11-25 16:39:56.640 254096 DEBUG nova.virt.libvirt.driver [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:39:56 compute-0 nova_compute[254092]: 2025-11-25 16:39:56.640 254096 DEBUG nova.virt.libvirt.driver [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:39:56 compute-0 nova_compute[254092]: 2025-11-25 16:39:56.641 254096 INFO nova.virt.libvirt.driver [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Using config drive
Nov 25 16:39:56 compute-0 nova_compute[254092]: 2025-11-25 16:39:56.657 254096 DEBUG nova.storage.rbd_utils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] rbd image 440746e9-455f-4a2f-8412-a24d1c93cb21_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:39:56 compute-0 nova_compute[254092]: 2025-11-25 16:39:56.855 254096 INFO nova.virt.libvirt.driver [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Creating config drive at /var/lib/nova/instances/440746e9-455f-4a2f-8412-a24d1c93cb21/disk.config
Nov 25 16:39:56 compute-0 nova_compute[254092]: 2025-11-25 16:39:56.860 254096 DEBUG oslo_concurrency.processutils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/440746e9-455f-4a2f-8412-a24d1c93cb21/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx3fd0jc6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:39:57 compute-0 nova_compute[254092]: 2025-11-25 16:39:56.999 254096 DEBUG oslo_concurrency.processutils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/440746e9-455f-4a2f-8412-a24d1c93cb21/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx3fd0jc6" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:39:57 compute-0 nova_compute[254092]: 2025-11-25 16:39:57.026 254096 DEBUG nova.storage.rbd_utils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] rbd image 440746e9-455f-4a2f-8412-a24d1c93cb21_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:39:57 compute-0 nova_compute[254092]: 2025-11-25 16:39:57.031 254096 DEBUG oslo_concurrency.processutils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/440746e9-455f-4a2f-8412-a24d1c93cb21/disk.config 440746e9-455f-4a2f-8412-a24d1c93cb21_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:39:57 compute-0 nova_compute[254092]: 2025-11-25 16:39:57.176 254096 DEBUG oslo_concurrency.processutils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/440746e9-455f-4a2f-8412-a24d1c93cb21/disk.config 440746e9-455f-4a2f-8412-a24d1c93cb21_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:39:57 compute-0 nova_compute[254092]: 2025-11-25 16:39:57.177 254096 INFO nova.virt.libvirt.driver [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Deleting local config drive /var/lib/nova/instances/440746e9-455f-4a2f-8412-a24d1c93cb21/disk.config because it was imported into RBD.
Nov 25 16:39:57 compute-0 systemd-machined[216343]: New machine qemu-66-instance-00000039.
Nov 25 16:39:57 compute-0 systemd[1]: Started Virtual Machine qemu-66-instance-00000039.
Nov 25 16:39:57 compute-0 nova_compute[254092]: 2025-11-25 16:39:57.323 254096 DEBUG nova.network.neutron [-] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:39:57 compute-0 nova_compute[254092]: 2025-11-25 16:39:57.341 254096 INFO nova.compute.manager [-] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Took 1.00 seconds to deallocate network for instance.
Nov 25 16:39:57 compute-0 nova_compute[254092]: 2025-11-25 16:39:57.390 254096 DEBUG oslo_concurrency.lockutils [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:39:57 compute-0 nova_compute[254092]: 2025-11-25 16:39:57.391 254096 DEBUG oslo_concurrency.lockutils [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:39:57 compute-0 nova_compute[254092]: 2025-11-25 16:39:57.452 254096 DEBUG oslo_concurrency.processutils [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:39:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1630: 321 pgs: 321 active+clean; 71 MiB data, 530 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 22 KiB/s wr, 176 op/s
Nov 25 16:39:57 compute-0 nova_compute[254092]: 2025-11-25 16:39:57.755 254096 DEBUG nova.compute.manager [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:39:57 compute-0 nova_compute[254092]: 2025-11-25 16:39:57.756 254096 DEBUG nova.virt.libvirt.driver [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:39:57 compute-0 nova_compute[254092]: 2025-11-25 16:39:57.757 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088797.7564464, 440746e9-455f-4a2f-8412-a24d1c93cb21 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:39:57 compute-0 nova_compute[254092]: 2025-11-25 16:39:57.757 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] VM Resumed (Lifecycle Event)
Nov 25 16:39:57 compute-0 nova_compute[254092]: 2025-11-25 16:39:57.766 254096 INFO nova.virt.libvirt.driver [-] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Instance spawned successfully.
Nov 25 16:39:57 compute-0 nova_compute[254092]: 2025-11-25 16:39:57.766 254096 DEBUG nova.virt.libvirt.driver [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:39:57 compute-0 nova_compute[254092]: 2025-11-25 16:39:57.793 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:39:57 compute-0 nova_compute[254092]: 2025-11-25 16:39:57.801 254096 DEBUG nova.compute.manager [req-f0cc0f44-1586-482e-98c9-998fec689d39 req-1a91ea79-8fc9-4e87-8203-b342a60c6bec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Received event network-vif-deleted-9d4276f1-91e9-418f-9a7b-844c83aea8f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:39:57 compute-0 nova_compute[254092]: 2025-11-25 16:39:57.805 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:39:57 compute-0 nova_compute[254092]: 2025-11-25 16:39:57.811 254096 DEBUG nova.virt.libvirt.driver [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:39:57 compute-0 nova_compute[254092]: 2025-11-25 16:39:57.811 254096 DEBUG nova.virt.libvirt.driver [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:39:57 compute-0 nova_compute[254092]: 2025-11-25 16:39:57.812 254096 DEBUG nova.virt.libvirt.driver [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:39:57 compute-0 nova_compute[254092]: 2025-11-25 16:39:57.812 254096 DEBUG nova.virt.libvirt.driver [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:39:57 compute-0 nova_compute[254092]: 2025-11-25 16:39:57.812 254096 DEBUG nova.virt.libvirt.driver [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:39:57 compute-0 nova_compute[254092]: 2025-11-25 16:39:57.813 254096 DEBUG nova.virt.libvirt.driver [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:39:57 compute-0 nova_compute[254092]: 2025-11-25 16:39:57.851 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:39:57 compute-0 nova_compute[254092]: 2025-11-25 16:39:57.851 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088797.7607162, 440746e9-455f-4a2f-8412-a24d1c93cb21 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:39:57 compute-0 nova_compute[254092]: 2025-11-25 16:39:57.852 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] VM Started (Lifecycle Event)
Nov 25 16:39:57 compute-0 nova_compute[254092]: 2025-11-25 16:39:57.874 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:39:57 compute-0 nova_compute[254092]: 2025-11-25 16:39:57.877 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:39:57 compute-0 nova_compute[254092]: 2025-11-25 16:39:57.897 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:39:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:39:57 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/683262055' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:39:57 compute-0 nova_compute[254092]: 2025-11-25 16:39:57.993 254096 INFO nova.compute.manager [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Took 3.24 seconds to spawn the instance on the hypervisor.
Nov 25 16:39:57 compute-0 nova_compute[254092]: 2025-11-25 16:39:57.994 254096 DEBUG nova.compute.manager [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:39:58 compute-0 nova_compute[254092]: 2025-11-25 16:39:58.004 254096 DEBUG oslo_concurrency.processutils [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:39:58 compute-0 nova_compute[254092]: 2025-11-25 16:39:58.011 254096 DEBUG nova.compute.provider_tree [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:39:58 compute-0 nova_compute[254092]: 2025-11-25 16:39:58.026 254096 DEBUG nova.scheduler.client.report [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:39:58 compute-0 nova_compute[254092]: 2025-11-25 16:39:58.093 254096 DEBUG oslo_concurrency.lockutils [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:58 compute-0 nova_compute[254092]: 2025-11-25 16:39:58.137 254096 INFO nova.compute.manager [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Took 4.36 seconds to build instance.
Nov 25 16:39:58 compute-0 nova_compute[254092]: 2025-11-25 16:39:58.147 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:39:58 compute-0 nova_compute[254092]: 2025-11-25 16:39:58.160 254096 DEBUG oslo_concurrency.lockutils [None req-eb82aca1-3000-4b0c-9044-146425739037 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Lock "440746e9-455f-4a2f-8412-a24d1c93cb21" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.445s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:58 compute-0 nova_compute[254092]: 2025-11-25 16:39:58.161 254096 INFO nova.scheduler.client.report [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Deleted allocations for instance bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68
Nov 25 16:39:58 compute-0 nova_compute[254092]: 2025-11-25 16:39:58.231 254096 DEBUG oslo_concurrency.lockutils [None req-32dae80d-84c3-4755-b48e-2508a2720e1a 788efa7ba65347b69663f4ea7ba4cd9d ff5d31ffc7934838a461d6ac8aa546c9 - - default default] Lock "bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:39:58 compute-0 ceph-mon[74985]: pgmap v1630: 321 pgs: 321 active+clean; 71 MiB data, 530 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 22 KiB/s wr, 176 op/s
Nov 25 16:39:58 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/683262055' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:39:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1631: 321 pgs: 321 active+clean; 71 MiB data, 530 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.6 KiB/s wr, 96 op/s
Nov 25 16:40:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:40:00 compute-0 nova_compute[254092]: 2025-11-25 16:40:00.592 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:00 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Nov 25 16:40:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:40:00.601837) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 16:40:00 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Nov 25 16:40:00 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088800601908, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 933, "num_deletes": 261, "total_data_size": 1132135, "memory_usage": 1154544, "flush_reason": "Manual Compaction"}
Nov 25 16:40:00 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Nov 25 16:40:00 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088800660974, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 1118801, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32956, "largest_seqno": 33888, "table_properties": {"data_size": 1114047, "index_size": 2342, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10666, "raw_average_key_size": 20, "raw_value_size": 1104317, "raw_average_value_size": 2075, "num_data_blocks": 103, "num_entries": 532, "num_filter_entries": 532, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764088740, "oldest_key_time": 1764088740, "file_creation_time": 1764088800, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:40:00 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 59173 microseconds, and 4809 cpu microseconds.
Nov 25 16:40:00 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:40:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:40:00.661018) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 1118801 bytes OK
Nov 25 16:40:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:40:00.661036) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Nov 25 16:40:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:40:00.673391) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Nov 25 16:40:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:40:00.673491) EVENT_LOG_v1 {"time_micros": 1764088800673480, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 16:40:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:40:00.673529) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 16:40:00 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 1127515, prev total WAL file size 1127515, number of live WAL files 2.
Nov 25 16:40:00 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:40:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:40:00.674402) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303035' seq:72057594037927935, type:22 .. '6C6F676D0031323536' seq:0, type:0; will stop at (end)
Nov 25 16:40:00 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 16:40:00 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(1092KB)], [71(8461KB)]
Nov 25 16:40:00 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088800674481, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 9783588, "oldest_snapshot_seqno": -1}
Nov 25 16:40:00 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 5812 keys, 9666328 bytes, temperature: kUnknown
Nov 25 16:40:00 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088800743141, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 9666328, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9625087, "index_size": 25579, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14597, "raw_key_size": 147435, "raw_average_key_size": 25, "raw_value_size": 9518266, "raw_average_value_size": 1637, "num_data_blocks": 1042, "num_entries": 5812, "num_filter_entries": 5812, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764088800, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:40:00 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:40:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:40:00.743408) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 9666328 bytes
Nov 25 16:40:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:40:00.748713) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 142.3 rd, 140.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 8.3 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(17.4) write-amplify(8.6) OK, records in: 6348, records dropped: 536 output_compression: NoCompression
Nov 25 16:40:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:40:00.748749) EVENT_LOG_v1 {"time_micros": 1764088800748736, "job": 40, "event": "compaction_finished", "compaction_time_micros": 68735, "compaction_time_cpu_micros": 24773, "output_level": 6, "num_output_files": 1, "total_output_size": 9666328, "num_input_records": 6348, "num_output_records": 5812, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 16:40:00 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:40:00 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088800749192, "job": 40, "event": "table_file_deletion", "file_number": 73}
Nov 25 16:40:00 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:40:00 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088800751130, "job": 40, "event": "table_file_deletion", "file_number": 71}
Nov 25 16:40:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:40:00.674229) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:40:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:40:00.751175) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:40:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:40:00.751180) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:40:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:40:00.751350) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:40:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:40:00.751353) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:40:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:40:00.751355) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
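The RocksDB entries above embed machine-readable `EVENT_LOG_v1` JSON payloads (flush_finished, compaction_started, compaction_finished, table_file_deletion) inside ordinary syslog lines. A minimal sketch of pulling those payloads out of a captured log, assuming only the line format shown above (the JSON object runs to the end of the line after the `EVENT_LOG_v1 ` marker); the `output_bytes_per_record` helper is a hypothetical convenience, not a metric RocksDB itself reports:

```python
import json
import re

# The JSON payload follows the literal marker "EVENT_LOG_v1 " and extends
# to the end of the line (greedy match keeps nested braces intact).
EVENT_RE = re.compile(r"EVENT_LOG_v1 (\{.*\})")

def parse_rocksdb_events(lines):
    """Return the decoded JSON payload of every EVENT_LOG_v1 entry."""
    events = []
    for line in lines:
        m = EVENT_RE.search(line)
        if m:
            events.append(json.loads(m.group(1)))
    return events

def output_bytes_per_record(event):
    """Average output bytes per input record for a compaction_finished event
    (hypothetical derived metric; RocksDB logs the raw counters only)."""
    return event["total_output_size"] / event["num_input_records"]
```

Filtering the parsed events by `event["event"] == "compaction_finished"` gives the per-job counters (input/output records, compaction time) that the human-readable summary lines also report.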
Nov 25 16:40:00 compute-0 ceph-mon[74985]: pgmap v1631: 321 pgs: 321 active+clean; 71 MiB data, 530 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.6 KiB/s wr, 96 op/s
Nov 25 16:40:00 compute-0 nova_compute[254092]: 2025-11-25 16:40:00.922 254096 DEBUG nova.compute.manager [None req-d6b3c1bc-a106-4fc3-ab9e-cfd70843e93a 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:40:00 compute-0 nova_compute[254092]: 2025-11-25 16:40:00.977 254096 INFO nova.compute.manager [None req-d6b3c1bc-a106-4fc3-ab9e-cfd70843e93a 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] instance snapshotting
Nov 25 16:40:01 compute-0 nova_compute[254092]: 2025-11-25 16:40:01.372 254096 INFO nova.virt.libvirt.driver [None req-d6b3c1bc-a106-4fc3-ab9e-cfd70843e93a 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Beginning live snapshot process
Nov 25 16:40:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1632: 321 pgs: 321 active+clean; 41 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 681 KiB/s rd, 18 KiB/s wr, 108 op/s
Nov 25 16:40:01 compute-0 nova_compute[254092]: 2025-11-25 16:40:01.528 254096 DEBUG nova.storage.rbd_utils [None req-d6b3c1bc-a106-4fc3-ab9e-cfd70843e93a 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] creating snapshot(f083c1174065407790877006dc67cc02) on rbd image(440746e9-455f-4a2f-8412-a24d1c93cb21_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 16:40:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e221 do_prune osdmap full prune enabled
Nov 25 16:40:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e222 e222: 3 total, 3 up, 3 in
Nov 25 16:40:01 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e222: 3 total, 3 up, 3 in
Nov 25 16:40:01 compute-0 nova_compute[254092]: 2025-11-25 16:40:01.941 254096 DEBUG nova.storage.rbd_utils [None req-d6b3c1bc-a106-4fc3-ab9e-cfd70843e93a 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] cloning vms/440746e9-455f-4a2f-8412-a24d1c93cb21_disk@f083c1174065407790877006dc67cc02 to images/3281111a-0357-46ec-9d85-c1808ddbe20b clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 16:40:02 compute-0 nova_compute[254092]: 2025-11-25 16:40:02.078 254096 DEBUG nova.storage.rbd_utils [None req-d6b3c1bc-a106-4fc3-ab9e-cfd70843e93a 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] flattening images/3281111a-0357-46ec-9d85-c1808ddbe20b flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 16:40:02 compute-0 nova_compute[254092]: 2025-11-25 16:40:02.254 254096 DEBUG nova.storage.rbd_utils [None req-d6b3c1bc-a106-4fc3-ab9e-cfd70843e93a 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] removing snapshot(f083c1174065407790877006dc67cc02) on rbd image(440746e9-455f-4a2f-8412-a24d1c93cb21_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 16:40:02 compute-0 nova_compute[254092]: 2025-11-25 16:40:02.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:40:02 compute-0 ceph-mon[74985]: pgmap v1632: 321 pgs: 321 active+clean; 41 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 681 KiB/s rd, 18 KiB/s wr, 108 op/s
Nov 25 16:40:02 compute-0 ceph-mon[74985]: osdmap e222: 3 total, 3 up, 3 in
Nov 25 16:40:02 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e222 do_prune osdmap full prune enabled
Nov 25 16:40:02 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e223 e223: 3 total, 3 up, 3 in
Nov 25 16:40:02 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e223: 3 total, 3 up, 3 in
Nov 25 16:40:02 compute-0 nova_compute[254092]: 2025-11-25 16:40:02.937 254096 DEBUG nova.storage.rbd_utils [None req-d6b3c1bc-a106-4fc3-ab9e-cfd70843e93a 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] creating snapshot(snap) on rbd image(3281111a-0357-46ec-9d85-c1808ddbe20b) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
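The nova_compute sequence above (create_snap on the source disk, clone into the images pool, flatten the clone, remove_snap) is Nova's RBD live-snapshot flow driven through librbd. A rough sketch of the equivalent `rbd` CLI sequence, assuming the `--id openstack --conf /etc/ceph/ceph.conf` credentials seen elsewhere in this log; the snap protect/unprotect steps are an assumption needed for v1 clones and do not appear in the librbd-driven log above, and the injectable `runner` is purely for illustration:

```python
import subprocess

def live_snapshot(src_pool, src_image, snap, dst_pool, dst_image,
                  runner=subprocess.check_call):
    """Mirror Nova's snapshot -> clone -> flatten -> cleanup sequence
    as rbd CLI calls (sketch; real Nova uses librbd bindings directly)."""
    base = ["rbd", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    src = f"{src_pool}/{src_image}@{snap}"
    dst = f"{dst_pool}/{dst_image}"
    runner(base + ["snap", "create", src])
    runner(base + ["snap", "protect", src])   # assumption: required for v1 clones
    runner(base + ["clone", src, dst])
    runner(base + ["flatten", dst])           # detach the clone from its parent
    runner(base + ["snap", "unprotect", src])
    runner(base + ["snap", "rm", src])
```

Flattening before deleting the source snapshot is the key ordering constraint: once flattened, the image in the `images` pool no longer references the parent snapshot, so the snapshot can be removed, as the log shows at 16:40:02.254.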
Nov 25 16:40:03 compute-0 nova_compute[254092]: 2025-11-25 16:40:03.151 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:03 compute-0 nova_compute[254092]: 2025-11-25 16:40:03.424 254096 DEBUG oslo_concurrency.lockutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:03 compute-0 nova_compute[254092]: 2025-11-25 16:40:03.424 254096 DEBUG oslo_concurrency.lockutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:03 compute-0 nova_compute[254092]: 2025-11-25 16:40:03.443 254096 DEBUG nova.compute.manager [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:40:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1635: 321 pgs: 321 active+clean; 41 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 84 KiB/s rd, 22 KiB/s wr, 110 op/s
Nov 25 16:40:03 compute-0 nova_compute[254092]: 2025-11-25 16:40:03.511 254096 DEBUG oslo_concurrency.lockutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:03 compute-0 nova_compute[254092]: 2025-11-25 16:40:03.511 254096 DEBUG oslo_concurrency.lockutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:03 compute-0 nova_compute[254092]: 2025-11-25 16:40:03.519 254096 DEBUG nova.virt.hardware [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:40:03 compute-0 nova_compute[254092]: 2025-11-25 16:40:03.520 254096 INFO nova.compute.claims [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:40:03 compute-0 nova_compute[254092]: 2025-11-25 16:40:03.641 254096 DEBUG oslo_concurrency.processutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:40:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e223 do_prune osdmap full prune enabled
Nov 25 16:40:03 compute-0 ceph-mon[74985]: osdmap e223: 3 total, 3 up, 3 in
Nov 25 16:40:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e224 e224: 3 total, 3 up, 3 in
Nov 25 16:40:03 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e224: 3 total, 3 up, 3 in
Nov 25 16:40:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:40:04 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/417381252' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:40:04 compute-0 nova_compute[254092]: 2025-11-25 16:40:04.093 254096 DEBUG oslo_concurrency.processutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:40:04 compute-0 nova_compute[254092]: 2025-11-25 16:40:04.098 254096 DEBUG nova.compute.provider_tree [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:40:04 compute-0 nova_compute[254092]: 2025-11-25 16:40:04.114 254096 DEBUG nova.scheduler.client.report [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
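The inventory dict logged above is what Placement uses to size the host: schedulable capacity per resource class is `(total - reserved) * allocation_ratio`. A minimal sketch applying that formula to the exact values in the log line (so this 8-vCPU guest with `allocation_ratio: 4.0` advertises 32 schedulable VCPUs):

```python
# Placement capacity formula: capacity = (total - reserved) * allocation_ratio,
# applied per resource class to the inventory nova.scheduler.client.report logs.
def effective_capacity(inventory):
    return {
        rc: (spec["total"] - spec["reserved"]) * spec["allocation_ratio"]
        for rc, spec in inventory.items()
    }

# Values copied from the provider 4f066da7-... inventory above.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}
```

With `allocation_ratio` below 1.0 on DISK_GB, the host deliberately under-reports disk (58 GiB physical presents as ~52 GiB schedulable), which leaves headroom for RBD thin-provisioning growth.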
Nov 25 16:40:04 compute-0 nova_compute[254092]: 2025-11-25 16:40:04.185 254096 DEBUG oslo_concurrency.lockutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:04 compute-0 nova_compute[254092]: 2025-11-25 16:40:04.185 254096 DEBUG nova.compute.manager [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:40:04 compute-0 nova_compute[254092]: 2025-11-25 16:40:04.246 254096 DEBUG nova.compute.manager [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:40:04 compute-0 nova_compute[254092]: 2025-11-25 16:40:04.247 254096 DEBUG nova.network.neutron [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:40:04 compute-0 nova_compute[254092]: 2025-11-25 16:40:04.282 254096 INFO nova.virt.libvirt.driver [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:40:04 compute-0 nova_compute[254092]: 2025-11-25 16:40:04.301 254096 DEBUG nova.compute.manager [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:40:04 compute-0 nova_compute[254092]: 2025-11-25 16:40:04.406 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:04 compute-0 nova_compute[254092]: 2025-11-25 16:40:04.426 254096 DEBUG nova.compute.manager [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:40:04 compute-0 nova_compute[254092]: 2025-11-25 16:40:04.428 254096 DEBUG nova.virt.libvirt.driver [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:40:04 compute-0 nova_compute[254092]: 2025-11-25 16:40:04.428 254096 INFO nova.virt.libvirt.driver [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Creating image(s)
Nov 25 16:40:04 compute-0 nova_compute[254092]: 2025-11-25 16:40:04.448 254096 DEBUG nova.storage.rbd_utils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:40:04 compute-0 nova_compute[254092]: 2025-11-25 16:40:04.466 254096 DEBUG nova.storage.rbd_utils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:40:04 compute-0 nova_compute[254092]: 2025-11-25 16:40:04.484 254096 DEBUG nova.storage.rbd_utils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:40:04 compute-0 nova_compute[254092]: 2025-11-25 16:40:04.487 254096 DEBUG oslo_concurrency.processutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:40:04 compute-0 nova_compute[254092]: 2025-11-25 16:40:04.516 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:40:04 compute-0 nova_compute[254092]: 2025-11-25 16:40:04.517 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:40:04 compute-0 nova_compute[254092]: 2025-11-25 16:40:04.517 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:40:04 compute-0 nova_compute[254092]: 2025-11-25 16:40:04.517 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 16:40:04 compute-0 nova_compute[254092]: 2025-11-25 16:40:04.555 254096 DEBUG oslo_concurrency.processutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:40:04 compute-0 nova_compute[254092]: 2025-11-25 16:40:04.556 254096 DEBUG oslo_concurrency.lockutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:04 compute-0 nova_compute[254092]: 2025-11-25 16:40:04.557 254096 DEBUG oslo_concurrency.lockutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:04 compute-0 nova_compute[254092]: 2025-11-25 16:40:04.557 254096 DEBUG oslo_concurrency.lockutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:04 compute-0 nova_compute[254092]: 2025-11-25 16:40:04.577 254096 DEBUG nova.storage.rbd_utils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:40:04 compute-0 nova_compute[254092]: 2025-11-25 16:40:04.581 254096 DEBUG oslo_concurrency.processutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:40:04 compute-0 nova_compute[254092]: 2025-11-25 16:40:04.813 254096 DEBUG nova.policy [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3c1fd56de7cd4f5c9b1d85ffe8545c90', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6ff6b3c59785407bb06c9d7e3969ea4f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:40:04 compute-0 ceph-mon[74985]: pgmap v1635: 321 pgs: 321 active+clean; 41 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 84 KiB/s rd, 22 KiB/s wr, 110 op/s
Nov 25 16:40:04 compute-0 ceph-mon[74985]: osdmap e224: 3 total, 3 up, 3 in
Nov 25 16:40:04 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/417381252' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:40:04 compute-0 nova_compute[254092]: 2025-11-25 16:40:04.985 254096 DEBUG oslo_concurrency.processutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:40:05 compute-0 nova_compute[254092]: 2025-11-25 16:40:05.042 254096 DEBUG nova.storage.rbd_utils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] resizing rbd image d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:40:05 compute-0 nova_compute[254092]: 2025-11-25 16:40:05.139 254096 DEBUG nova.objects.instance [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'migration_context' on Instance uuid d1ceaafd-59a6-45b1-833d-eb2a76e789be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:40:05 compute-0 nova_compute[254092]: 2025-11-25 16:40:05.161 254096 DEBUG nova.virt.libvirt.driver [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:40:05 compute-0 nova_compute[254092]: 2025-11-25 16:40:05.162 254096 DEBUG nova.virt.libvirt.driver [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Ensure instance console log exists: /var/lib/nova/instances/d1ceaafd-59a6-45b1-833d-eb2a76e789be/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:40:05 compute-0 nova_compute[254092]: 2025-11-25 16:40:05.162 254096 DEBUG oslo_concurrency.lockutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:05 compute-0 nova_compute[254092]: 2025-11-25 16:40:05.163 254096 DEBUG oslo_concurrency.lockutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:05 compute-0 nova_compute[254092]: 2025-11-25 16:40:05.163 254096 DEBUG oslo_concurrency.lockutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:05 compute-0 nova_compute[254092]: 2025-11-25 16:40:05.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:40:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1637: 321 pgs: 321 active+clean; 63 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 149 KiB/s rd, 1.5 MiB/s wr, 207 op/s
Nov 25 16:40:05 compute-0 nova_compute[254092]: 2025-11-25 16:40:05.528 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:05 compute-0 nova_compute[254092]: 2025-11-25 16:40:05.528 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:05 compute-0 nova_compute[254092]: 2025-11-25 16:40:05.528 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:05 compute-0 nova_compute[254092]: 2025-11-25 16:40:05.529 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 16:40:05 compute-0 nova_compute[254092]: 2025-11-25 16:40:05.529 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:40:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:40:05 compute-0 nova_compute[254092]: 2025-11-25 16:40:05.594 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:40:06 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2873390419' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:40:06 compute-0 nova_compute[254092]: 2025-11-25 16:40:06.053 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:40:06 compute-0 nova_compute[254092]: 2025-11-25 16:40:06.139 254096 DEBUG nova.network.neutron [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Successfully created port: d79fd017-c7a6-4bfe-8c90-b3295f62f83c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:40:06 compute-0 nova_compute[254092]: 2025-11-25 16:40:06.152 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:40:06 compute-0 nova_compute[254092]: 2025-11-25 16:40:06.153 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:40:06 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2873390419' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:40:06 compute-0 nova_compute[254092]: 2025-11-25 16:40:06.315 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:40:06 compute-0 nova_compute[254092]: 2025-11-25 16:40:06.316 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4057MB free_disk=59.98812484741211GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 16:40:06 compute-0 nova_compute[254092]: 2025-11-25 16:40:06.316 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:06 compute-0 nova_compute[254092]: 2025-11-25 16:40:06.317 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:06 compute-0 nova_compute[254092]: 2025-11-25 16:40:06.340 254096 INFO nova.virt.libvirt.driver [None req-d6b3c1bc-a106-4fc3-ab9e-cfd70843e93a 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Snapshot image upload complete
Nov 25 16:40:06 compute-0 nova_compute[254092]: 2025-11-25 16:40:06.341 254096 INFO nova.compute.manager [None req-d6b3c1bc-a106-4fc3-ab9e-cfd70843e93a 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Took 5.36 seconds to snapshot the instance on the hypervisor.
Nov 25 16:40:06 compute-0 nova_compute[254092]: 2025-11-25 16:40:06.411 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 440746e9-455f-4a2f-8412-a24d1c93cb21 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:40:06 compute-0 nova_compute[254092]: 2025-11-25 16:40:06.412 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance d1ceaafd-59a6-45b1-833d-eb2a76e789be actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:40:06 compute-0 nova_compute[254092]: 2025-11-25 16:40:06.412 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 16:40:06 compute-0 nova_compute[254092]: 2025-11-25 16:40:06.412 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 16:40:06 compute-0 nova_compute[254092]: 2025-11-25 16:40:06.469 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:40:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:40:06 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/273605996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:40:06 compute-0 nova_compute[254092]: 2025-11-25 16:40:06.963 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:40:06 compute-0 nova_compute[254092]: 2025-11-25 16:40:06.969 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:40:06 compute-0 nova_compute[254092]: 2025-11-25 16:40:06.982 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:40:07 compute-0 nova_compute[254092]: 2025-11-25 16:40:07.097 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 16:40:07 compute-0 nova_compute[254092]: 2025-11-25 16:40:07.098 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:07 compute-0 ceph-mon[74985]: pgmap v1637: 321 pgs: 321 active+clean; 63 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 149 KiB/s rd, 1.5 MiB/s wr, 207 op/s
Nov 25 16:40:07 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/273605996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:40:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1638: 321 pgs: 321 active+clean; 78 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 129 KiB/s rd, 2.6 MiB/s wr, 172 op/s
Nov 25 16:40:07 compute-0 podman[318514]: 2025-11-25 16:40:07.668382132 +0000 UTC m=+0.075276293 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 25 16:40:07 compute-0 podman[318515]: 2025-11-25 16:40:07.669582084 +0000 UTC m=+0.070410431 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 25 16:40:07 compute-0 podman[318516]: 2025-11-25 16:40:07.737357434 +0000 UTC m=+0.142148548 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:40:08 compute-0 nova_compute[254092]: 2025-11-25 16:40:08.098 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:40:08 compute-0 nova_compute[254092]: 2025-11-25 16:40:08.098 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:40:08 compute-0 nova_compute[254092]: 2025-11-25 16:40:08.153 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:08 compute-0 nova_compute[254092]: 2025-11-25 16:40:08.384 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088793.381286, bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:40:08 compute-0 nova_compute[254092]: 2025-11-25 16:40:08.384 254096 INFO nova.compute.manager [-] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] VM Stopped (Lifecycle Event)
Nov 25 16:40:08 compute-0 nova_compute[254092]: 2025-11-25 16:40:08.419 254096 DEBUG nova.compute.manager [None req-9105e228-2be1-4c81-9949-e028ea2f97e6 - - - - - -] [instance: bc3e628c-31ee-4fc7-93f7-4fbba8e4ef68] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:40:08 compute-0 ceph-mon[74985]: pgmap v1638: 321 pgs: 321 active+clean; 78 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 129 KiB/s rd, 2.6 MiB/s wr, 172 op/s
Nov 25 16:40:08 compute-0 nova_compute[254092]: 2025-11-25 16:40:08.986 254096 DEBUG nova.network.neutron [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Successfully updated port: d79fd017-c7a6-4bfe-8c90-b3295f62f83c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:40:09 compute-0 nova_compute[254092]: 2025-11-25 16:40:09.006 254096 DEBUG oslo_concurrency.lockutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "refresh_cache-d1ceaafd-59a6-45b1-833d-eb2a76e789be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:40:09 compute-0 nova_compute[254092]: 2025-11-25 16:40:09.007 254096 DEBUG oslo_concurrency.lockutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquired lock "refresh_cache-d1ceaafd-59a6-45b1-833d-eb2a76e789be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:40:09 compute-0 nova_compute[254092]: 2025-11-25 16:40:09.008 254096 DEBUG nova.network.neutron [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:40:09 compute-0 nova_compute[254092]: 2025-11-25 16:40:09.085 254096 DEBUG nova.compute.manager [req-cf9fd9c4-7827-4571-9fb9-ea08d9f4d424 req-02408044-b13b-4a5a-a263-0ab0a749ffc9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Received event network-changed-d79fd017-c7a6-4bfe-8c90-b3295f62f83c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:40:09 compute-0 nova_compute[254092]: 2025-11-25 16:40:09.085 254096 DEBUG nova.compute.manager [req-cf9fd9c4-7827-4571-9fb9-ea08d9f4d424 req-02408044-b13b-4a5a-a263-0ab0a749ffc9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Refreshing instance network info cache due to event network-changed-d79fd017-c7a6-4bfe-8c90-b3295f62f83c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:40:09 compute-0 nova_compute[254092]: 2025-11-25 16:40:09.086 254096 DEBUG oslo_concurrency.lockutils [req-cf9fd9c4-7827-4571-9fb9-ea08d9f4d424 req-02408044-b13b-4a5a-a263-0ab0a749ffc9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-d1ceaafd-59a6-45b1-833d-eb2a76e789be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:40:09 compute-0 nova_compute[254092]: 2025-11-25 16:40:09.243 254096 DEBUG nova.network.neutron [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:40:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1639: 321 pgs: 321 active+clean; 78 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 101 KiB/s rd, 2.0 MiB/s wr, 135 op/s
Nov 25 16:40:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:40:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:40:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:40:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:40:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:40:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:40:10 compute-0 nova_compute[254092]: 2025-11-25 16:40:10.284 254096 DEBUG nova.network.neutron [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Updating instance_info_cache with network_info: [{"id": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "address": "fa:16:3e:e2:7b:b0", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79fd017-c7", "ovs_interfaceid": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:40:10 compute-0 nova_compute[254092]: 2025-11-25 16:40:10.359 254096 DEBUG oslo_concurrency.lockutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Releasing lock "refresh_cache-d1ceaafd-59a6-45b1-833d-eb2a76e789be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:40:10 compute-0 nova_compute[254092]: 2025-11-25 16:40:10.359 254096 DEBUG nova.compute.manager [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Instance network_info: |[{"id": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "address": "fa:16:3e:e2:7b:b0", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79fd017-c7", "ovs_interfaceid": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:40:10 compute-0 nova_compute[254092]: 2025-11-25 16:40:10.359 254096 DEBUG oslo_concurrency.lockutils [req-cf9fd9c4-7827-4571-9fb9-ea08d9f4d424 req-02408044-b13b-4a5a-a263-0ab0a749ffc9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-d1ceaafd-59a6-45b1-833d-eb2a76e789be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:40:10 compute-0 nova_compute[254092]: 2025-11-25 16:40:10.360 254096 DEBUG nova.network.neutron [req-cf9fd9c4-7827-4571-9fb9-ea08d9f4d424 req-02408044-b13b-4a5a-a263-0ab0a749ffc9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Refreshing network info cache for port d79fd017-c7a6-4bfe-8c90-b3295f62f83c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:40:10 compute-0 nova_compute[254092]: 2025-11-25 16:40:10.362 254096 DEBUG nova.virt.libvirt.driver [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Start _get_guest_xml network_info=[{"id": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "address": "fa:16:3e:e2:7b:b0", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79fd017-c7", "ovs_interfaceid": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:40:10 compute-0 nova_compute[254092]: 2025-11-25 16:40:10.367 254096 WARNING nova.virt.libvirt.driver [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:40:10 compute-0 nova_compute[254092]: 2025-11-25 16:40:10.377 254096 DEBUG nova.virt.libvirt.host [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:40:10 compute-0 nova_compute[254092]: 2025-11-25 16:40:10.378 254096 DEBUG nova.virt.libvirt.host [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:40:10 compute-0 nova_compute[254092]: 2025-11-25 16:40:10.382 254096 DEBUG nova.virt.libvirt.host [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:40:10 compute-0 nova_compute[254092]: 2025-11-25 16:40:10.382 254096 DEBUG nova.virt.libvirt.host [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:40:10 compute-0 nova_compute[254092]: 2025-11-25 16:40:10.383 254096 DEBUG nova.virt.libvirt.driver [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:40:10 compute-0 nova_compute[254092]: 2025-11-25 16:40:10.383 254096 DEBUG nova.virt.hardware [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:40:10 compute-0 nova_compute[254092]: 2025-11-25 16:40:10.383 254096 DEBUG nova.virt.hardware [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:40:10 compute-0 nova_compute[254092]: 2025-11-25 16:40:10.384 254096 DEBUG nova.virt.hardware [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:40:10 compute-0 nova_compute[254092]: 2025-11-25 16:40:10.384 254096 DEBUG nova.virt.hardware [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:40:10 compute-0 nova_compute[254092]: 2025-11-25 16:40:10.384 254096 DEBUG nova.virt.hardware [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:40:10 compute-0 nova_compute[254092]: 2025-11-25 16:40:10.384 254096 DEBUG nova.virt.hardware [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:40:10 compute-0 nova_compute[254092]: 2025-11-25 16:40:10.385 254096 DEBUG nova.virt.hardware [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:40:10 compute-0 nova_compute[254092]: 2025-11-25 16:40:10.385 254096 DEBUG nova.virt.hardware [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:40:10 compute-0 nova_compute[254092]: 2025-11-25 16:40:10.385 254096 DEBUG nova.virt.hardware [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:40:10 compute-0 nova_compute[254092]: 2025-11-25 16:40:10.385 254096 DEBUG nova.virt.hardware [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:40:10 compute-0 nova_compute[254092]: 2025-11-25 16:40:10.386 254096 DEBUG nova.virt.hardware [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:40:10 compute-0 nova_compute[254092]: 2025-11-25 16:40:10.389 254096 DEBUG oslo_concurrency.processutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:40:10 compute-0 nova_compute[254092]: 2025-11-25 16:40:10.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:40:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:40:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e224 do_prune osdmap full prune enabled
Nov 25 16:40:10 compute-0 nova_compute[254092]: 2025-11-25 16:40:10.611 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e225 e225: 3 total, 3 up, 3 in
Nov 25 16:40:10 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e225: 3 total, 3 up, 3 in
Nov 25 16:40:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:40:10 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/392095743' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:40:10 compute-0 nova_compute[254092]: 2025-11-25 16:40:10.878 254096 DEBUG oslo_concurrency.processutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:40:10 compute-0 ceph-mon[74985]: pgmap v1639: 321 pgs: 321 active+clean; 78 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 101 KiB/s rd, 2.0 MiB/s wr, 135 op/s
Nov 25 16:40:10 compute-0 ceph-mon[74985]: osdmap e225: 3 total, 3 up, 3 in
Nov 25 16:40:10 compute-0 nova_compute[254092]: 2025-11-25 16:40:10.943 254096 DEBUG nova.storage.rbd_utils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:40:10 compute-0 nova_compute[254092]: 2025-11-25 16:40:10.949 254096 DEBUG oslo_concurrency.processutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:40:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:40:11 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2022952161' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:40:11 compute-0 nova_compute[254092]: 2025-11-25 16:40:11.438 254096 DEBUG oslo_concurrency.processutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:40:11 compute-0 nova_compute[254092]: 2025-11-25 16:40:11.441 254096 DEBUG nova.virt.libvirt.vif [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:40:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1732543352',display_name='tempest-ServerDiskConfigTestJSON-server-1732543352',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1732543352',id=58,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6ff6b3c59785407bb06c9d7e3969ea4f',ramdisk_id='',reservation_id='r-nrd7tmad',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1055022473',owner_user_name='tempest-ServerDiskConfigTestJSON-1055022473-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:40:04Z,user_data=None,user_id='3c1fd56de7cd4f5c9b1d85ffe8545c90',uuid=d1ceaafd-59a6-45b1-833d-eb2a76e789be,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "address": "fa:16:3e:e2:7b:b0", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79fd017-c7", "ovs_interfaceid": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:40:11 compute-0 nova_compute[254092]: 2025-11-25 16:40:11.442 254096 DEBUG nova.network.os_vif_util [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converting VIF {"id": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "address": "fa:16:3e:e2:7b:b0", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79fd017-c7", "ovs_interfaceid": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:40:11 compute-0 nova_compute[254092]: 2025-11-25 16:40:11.443 254096 DEBUG nova.network.os_vif_util [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:7b:b0,bridge_name='br-int',has_traffic_filtering=True,id=d79fd017-c7a6-4bfe-8c90-b3295f62f83c,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd79fd017-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:40:11 compute-0 nova_compute[254092]: 2025-11-25 16:40:11.445 254096 DEBUG nova.objects.instance [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'pci_devices' on Instance uuid d1ceaafd-59a6-45b1-833d-eb2a76e789be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:40:11 compute-0 nova_compute[254092]: 2025-11-25 16:40:11.462 254096 DEBUG nova.virt.libvirt.driver [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:40:11 compute-0 nova_compute[254092]:   <uuid>d1ceaafd-59a6-45b1-833d-eb2a76e789be</uuid>
Nov 25 16:40:11 compute-0 nova_compute[254092]:   <name>instance-0000003a</name>
Nov 25 16:40:11 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:40:11 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:40:11 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:40:11 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1732543352</nova:name>
Nov 25 16:40:11 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:40:10</nova:creationTime>
Nov 25 16:40:11 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:40:11 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:40:11 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:40:11 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:40:11 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:40:11 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:40:11 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:40:11 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:40:11 compute-0 nova_compute[254092]:         <nova:user uuid="3c1fd56de7cd4f5c9b1d85ffe8545c90">tempest-ServerDiskConfigTestJSON-1055022473-project-member</nova:user>
Nov 25 16:40:11 compute-0 nova_compute[254092]:         <nova:project uuid="6ff6b3c59785407bb06c9d7e3969ea4f">tempest-ServerDiskConfigTestJSON-1055022473</nova:project>
Nov 25 16:40:11 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:40:11 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:40:11 compute-0 nova_compute[254092]:         <nova:port uuid="d79fd017-c7a6-4bfe-8c90-b3295f62f83c">
Nov 25 16:40:11 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:40:11 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:40:11 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:40:11 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:40:11 compute-0 nova_compute[254092]:     <system>
Nov 25 16:40:11 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:40:11 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:40:11 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:40:11 compute-0 nova_compute[254092]:       <entry name="serial">d1ceaafd-59a6-45b1-833d-eb2a76e789be</entry>
Nov 25 16:40:11 compute-0 nova_compute[254092]:       <entry name="uuid">d1ceaafd-59a6-45b1-833d-eb2a76e789be</entry>
Nov 25 16:40:11 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     </system>
Nov 25 16:40:11 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:40:11 compute-0 nova_compute[254092]:   <os>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:   </os>
Nov 25 16:40:11 compute-0 nova_compute[254092]:   <features>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:   </features>
Nov 25 16:40:11 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:40:11 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:40:11 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:40:11 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:40:11 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:40:11 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk">
Nov 25 16:40:11 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:       </source>
Nov 25 16:40:11 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:40:11 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:40:11 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:40:11 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk.config">
Nov 25 16:40:11 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:       </source>
Nov 25 16:40:11 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:40:11 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:40:11 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:40:11 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:e2:7b:b0"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:       <target dev="tapd79fd017-c7"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:40:11 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/d1ceaafd-59a6-45b1-833d-eb2a76e789be/console.log" append="off"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     <video>
Nov 25 16:40:11 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     </video>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:40:11 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:40:11 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:40:11 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:40:11 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:40:11 compute-0 nova_compute[254092]: </domain>
Nov 25 16:40:11 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:40:11 compute-0 nova_compute[254092]: 2025-11-25 16:40:11.464 254096 DEBUG nova.compute.manager [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Preparing to wait for external event network-vif-plugged-d79fd017-c7a6-4bfe-8c90-b3295f62f83c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:40:11 compute-0 nova_compute[254092]: 2025-11-25 16:40:11.465 254096 DEBUG oslo_concurrency.lockutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:11 compute-0 nova_compute[254092]: 2025-11-25 16:40:11.465 254096 DEBUG oslo_concurrency.lockutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:11 compute-0 nova_compute[254092]: 2025-11-25 16:40:11.465 254096 DEBUG oslo_concurrency.lockutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:11 compute-0 nova_compute[254092]: 2025-11-25 16:40:11.466 254096 DEBUG nova.virt.libvirt.vif [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:40:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1732543352',display_name='tempest-ServerDiskConfigTestJSON-server-1732543352',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1732543352',id=58,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6ff6b3c59785407bb06c9d7e3969ea4f',ramdisk_id='',reservation_id='r-nrd7tmad',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1055022473',owner_user_name='tempest-ServerDiskConfigTestJSON-1055022473-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:40:04Z,user_data=None,user_id='3c1fd56de7cd4f5c9b1d85ffe8545c90',uuid=d1ceaafd-59a6-45b1-833d-eb2a76e789be,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "address": "fa:16:3e:e2:7b:b0", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79fd017-c7", "ovs_interfaceid": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:40:11 compute-0 nova_compute[254092]: 2025-11-25 16:40:11.467 254096 DEBUG nova.network.os_vif_util [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converting VIF {"id": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "address": "fa:16:3e:e2:7b:b0", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79fd017-c7", "ovs_interfaceid": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:40:11 compute-0 nova_compute[254092]: 2025-11-25 16:40:11.468 254096 DEBUG nova.network.os_vif_util [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:7b:b0,bridge_name='br-int',has_traffic_filtering=True,id=d79fd017-c7a6-4bfe-8c90-b3295f62f83c,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd79fd017-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:40:11 compute-0 nova_compute[254092]: 2025-11-25 16:40:11.468 254096 DEBUG os_vif [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:7b:b0,bridge_name='br-int',has_traffic_filtering=True,id=d79fd017-c7a6-4bfe-8c90-b3295f62f83c,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd79fd017-c7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:40:11 compute-0 nova_compute[254092]: 2025-11-25 16:40:11.469 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:11 compute-0 nova_compute[254092]: 2025-11-25 16:40:11.470 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:40:11 compute-0 nova_compute[254092]: 2025-11-25 16:40:11.471 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:40:11 compute-0 nova_compute[254092]: 2025-11-25 16:40:11.474 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:11 compute-0 nova_compute[254092]: 2025-11-25 16:40:11.474 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd79fd017-c7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:40:11 compute-0 nova_compute[254092]: 2025-11-25 16:40:11.474 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd79fd017-c7, col_values=(('external_ids', {'iface-id': 'd79fd017-c7a6-4bfe-8c90-b3295f62f83c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e2:7b:b0', 'vm-uuid': 'd1ceaafd-59a6-45b1-833d-eb2a76e789be'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:40:11 compute-0 nova_compute[254092]: 2025-11-25 16:40:11.476 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:11 compute-0 NetworkManager[48891]: <info>  [1764088811.4775] manager: (tapd79fd017-c7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/227)
Nov 25 16:40:11 compute-0 nova_compute[254092]: 2025-11-25 16:40:11.479 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:40:11 compute-0 nova_compute[254092]: 2025-11-25 16:40:11.483 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:11 compute-0 nova_compute[254092]: 2025-11-25 16:40:11.489 254096 INFO os_vif [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:7b:b0,bridge_name='br-int',has_traffic_filtering=True,id=d79fd017-c7a6-4bfe-8c90-b3295f62f83c,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd79fd017-c7')
Nov 25 16:40:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1641: 321 pgs: 321 active+clean; 88 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 124 KiB/s rd, 2.7 MiB/s wr, 167 op/s
Nov 25 16:40:11 compute-0 nova_compute[254092]: 2025-11-25 16:40:11.544 254096 DEBUG nova.virt.libvirt.driver [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:40:11 compute-0 nova_compute[254092]: 2025-11-25 16:40:11.545 254096 DEBUG nova.virt.libvirt.driver [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:40:11 compute-0 nova_compute[254092]: 2025-11-25 16:40:11.545 254096 DEBUG nova.virt.libvirt.driver [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] No VIF found with MAC fa:16:3e:e2:7b:b0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:40:11 compute-0 nova_compute[254092]: 2025-11-25 16:40:11.546 254096 INFO nova.virt.libvirt.driver [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Using config drive
Nov 25 16:40:11 compute-0 nova_compute[254092]: 2025-11-25 16:40:11.577 254096 DEBUG nova.storage.rbd_utils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:40:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e225 do_prune osdmap full prune enabled
Nov 25 16:40:11 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/392095743' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:40:11 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2022952161' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:40:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e226 e226: 3 total, 3 up, 3 in
Nov 25 16:40:11 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e226: 3 total, 3 up, 3 in
Nov 25 16:40:12 compute-0 nova_compute[254092]: 2025-11-25 16:40:12.132 254096 DEBUG nova.network.neutron [req-cf9fd9c4-7827-4571-9fb9-ea08d9f4d424 req-02408044-b13b-4a5a-a263-0ab0a749ffc9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Updated VIF entry in instance network info cache for port d79fd017-c7a6-4bfe-8c90-b3295f62f83c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:40:12 compute-0 nova_compute[254092]: 2025-11-25 16:40:12.133 254096 DEBUG nova.network.neutron [req-cf9fd9c4-7827-4571-9fb9-ea08d9f4d424 req-02408044-b13b-4a5a-a263-0ab0a749ffc9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Updating instance_info_cache with network_info: [{"id": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "address": "fa:16:3e:e2:7b:b0", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79fd017-c7", "ovs_interfaceid": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:40:12 compute-0 nova_compute[254092]: 2025-11-25 16:40:12.150 254096 DEBUG oslo_concurrency.lockutils [req-cf9fd9c4-7827-4571-9fb9-ea08d9f4d424 req-02408044-b13b-4a5a-a263-0ab0a749ffc9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-d1ceaafd-59a6-45b1-833d-eb2a76e789be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:40:12 compute-0 nova_compute[254092]: 2025-11-25 16:40:12.193 254096 INFO nova.virt.libvirt.driver [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Creating config drive at /var/lib/nova/instances/d1ceaafd-59a6-45b1-833d-eb2a76e789be/disk.config
Nov 25 16:40:12 compute-0 nova_compute[254092]: 2025-11-25 16:40:12.198 254096 DEBUG oslo_concurrency.processutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d1ceaafd-59a6-45b1-833d-eb2a76e789be/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpin2mst4b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:40:12 compute-0 nova_compute[254092]: 2025-11-25 16:40:12.336 254096 DEBUG oslo_concurrency.processutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d1ceaafd-59a6-45b1-833d-eb2a76e789be/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpin2mst4b" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:40:12 compute-0 nova_compute[254092]: 2025-11-25 16:40:12.366 254096 DEBUG nova.storage.rbd_utils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:40:12 compute-0 nova_compute[254092]: 2025-11-25 16:40:12.370 254096 DEBUG oslo_concurrency.processutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d1ceaafd-59a6-45b1-833d-eb2a76e789be/disk.config d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:40:12 compute-0 nova_compute[254092]: 2025-11-25 16:40:12.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:40:12 compute-0 nova_compute[254092]: 2025-11-25 16:40:12.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 16:40:12 compute-0 nova_compute[254092]: 2025-11-25 16:40:12.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 16:40:12 compute-0 nova_compute[254092]: 2025-11-25 16:40:12.533 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 25 16:40:12 compute-0 nova_compute[254092]: 2025-11-25 16:40:12.539 254096 DEBUG oslo_concurrency.processutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d1ceaafd-59a6-45b1-833d-eb2a76e789be/disk.config d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:40:12 compute-0 nova_compute[254092]: 2025-11-25 16:40:12.541 254096 INFO nova.virt.libvirt.driver [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Deleting local config drive /var/lib/nova/instances/d1ceaafd-59a6-45b1-833d-eb2a76e789be/disk.config because it was imported into RBD.
Nov 25 16:40:12 compute-0 kernel: tapd79fd017-c7: entered promiscuous mode
Nov 25 16:40:12 compute-0 NetworkManager[48891]: <info>  [1764088812.6039] manager: (tapd79fd017-c7): new Tun device (/org/freedesktop/NetworkManager/Devices/228)
Nov 25 16:40:12 compute-0 nova_compute[254092]: 2025-11-25 16:40:12.603 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:12 compute-0 ovn_controller[153477]: 2025-11-25T16:40:12Z|00534|binding|INFO|Claiming lport d79fd017-c7a6-4bfe-8c90-b3295f62f83c for this chassis.
Nov 25 16:40:12 compute-0 ovn_controller[153477]: 2025-11-25T16:40:12Z|00535|binding|INFO|d79fd017-c7a6-4bfe-8c90-b3295f62f83c: Claiming fa:16:3e:e2:7b:b0 10.100.0.8
Nov 25 16:40:12 compute-0 nova_compute[254092]: 2025-11-25 16:40:12.606 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:12 compute-0 nova_compute[254092]: 2025-11-25 16:40:12.610 254096 DEBUG oslo_concurrency.lockutils [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Acquiring lock "440746e9-455f-4a2f-8412-a24d1c93cb21" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:12 compute-0 nova_compute[254092]: 2025-11-25 16:40:12.611 254096 DEBUG oslo_concurrency.lockutils [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Lock "440746e9-455f-4a2f-8412-a24d1c93cb21" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:12 compute-0 nova_compute[254092]: 2025-11-25 16:40:12.611 254096 DEBUG oslo_concurrency.lockutils [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Acquiring lock "440746e9-455f-4a2f-8412-a24d1c93cb21-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:12 compute-0 nova_compute[254092]: 2025-11-25 16:40:12.611 254096 DEBUG oslo_concurrency.lockutils [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Lock "440746e9-455f-4a2f-8412-a24d1c93cb21-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:12 compute-0 nova_compute[254092]: 2025-11-25 16:40:12.611 254096 DEBUG oslo_concurrency.lockutils [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Lock "440746e9-455f-4a2f-8412-a24d1c93cb21-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:12 compute-0 nova_compute[254092]: 2025-11-25 16:40:12.612 254096 INFO nova.compute.manager [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Terminating instance
Nov 25 16:40:12 compute-0 nova_compute[254092]: 2025-11-25 16:40:12.613 254096 DEBUG oslo_concurrency.lockutils [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Acquiring lock "refresh_cache-440746e9-455f-4a2f-8412-a24d1c93cb21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:40:12 compute-0 nova_compute[254092]: 2025-11-25 16:40:12.613 254096 DEBUG oslo_concurrency.lockutils [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Acquired lock "refresh_cache-440746e9-455f-4a2f-8412-a24d1c93cb21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:40:12 compute-0 nova_compute[254092]: 2025-11-25 16:40:12.614 254096 DEBUG nova.network.neutron [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.614 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:7b:b0 10.100.0.8'], port_security=['fa:16:3e:e2:7b:b0 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd1ceaafd-59a6-45b1-833d-eb2a76e789be', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ff6b3c59785407bb06c9d7e3969ea4f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b774cfb5-cd97-4ca0-9d7e-110986406220', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d2dad0a-51f1-4edf-9261-6f40860e89ef, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=d79fd017-c7a6-4bfe-8c90-b3295f62f83c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.616 163338 INFO neutron.agent.ovn.metadata.agent [-] Port d79fd017-c7a6-4bfe-8c90-b3295f62f83c in datapath 62c0a8be-b828-4765-9cf8-f2477a8ef1d9 bound to our chassis
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.617 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 62c0a8be-b828-4765-9cf8-f2477a8ef1d9
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.631 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a15cac4c-bcb2-45c0-b1eb-6d492721c4b5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.632 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap62c0a8be-b1 in ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:40:12 compute-0 systemd-udevd[318711]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.634 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap62c0a8be-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.634 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a10320d0-6d7b-4d92-bf36-8a7db67215ff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.636 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[097e22e8-12b3-482f-9aba-422fa2a99171]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:12 compute-0 systemd-machined[216343]: New machine qemu-67-instance-0000003a.
Nov 25 16:40:12 compute-0 NetworkManager[48891]: <info>  [1764088812.6462] device (tapd79fd017-c7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:40:12 compute-0 NetworkManager[48891]: <info>  [1764088812.6475] device (tapd79fd017-c7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.649 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[d2d41406-549b-42e7-8a45-91684eb14d8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:12 compute-0 systemd[1]: Started Virtual Machine qemu-67-instance-0000003a.
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.679 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[01b35123-e976-4be1-add9-8ab3f6c6606b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:12 compute-0 nova_compute[254092]: 2025-11-25 16:40:12.686 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:12 compute-0 ovn_controller[153477]: 2025-11-25T16:40:12Z|00536|binding|INFO|Setting lport d79fd017-c7a6-4bfe-8c90-b3295f62f83c ovn-installed in OVS
Nov 25 16:40:12 compute-0 ovn_controller[153477]: 2025-11-25T16:40:12Z|00537|binding|INFO|Setting lport d79fd017-c7a6-4bfe-8c90-b3295f62f83c up in Southbound
Nov 25 16:40:12 compute-0 nova_compute[254092]: 2025-11-25 16:40:12.693 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.712 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a281eb3c-e9ac-4eb5-98f3-7321ac6637fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:12 compute-0 nova_compute[254092]: 2025-11-25 16:40:12.715 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-440746e9-455f-4a2f-8412-a24d1c93cb21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.717 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[06b9dda6-7912-4c0a-9108-c98596b3dbe7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:12 compute-0 systemd-udevd[318715]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:40:12 compute-0 NetworkManager[48891]: <info>  [1764088812.7186] manager: (tap62c0a8be-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/229)
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.755 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[5187c3bc-dcb4-4289-a77d-81edb8d17e94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:12 compute-0 nova_compute[254092]: 2025-11-25 16:40:12.758 254096 DEBUG nova.network.neutron [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.764 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e12a319b-4100-415c-ae01-5c4341aa07b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:12 compute-0 NetworkManager[48891]: <info>  [1764088812.7843] device (tap62c0a8be-b0): carrier: link connected
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.788 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ed2acfc7-b6f7-46cc-b770-08ce9acea0dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.808 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0cf0a6bb-ffa7-4a89-94db-3c504b6c7e3b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62c0a8be-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:da:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 152], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527037, 'reachable_time': 17142, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318744, 'error': None, 'target': 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.831 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[94714e43-40d2-49c4-9d50-a97d5c4f9c34]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedc:dab2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527037, 'tstamp': 527037}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318745, 'error': None, 'target': 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.854 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3b63be33-afac-4f97-b688-26de5bbfd7f6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62c0a8be-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:da:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 152], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527037, 'reachable_time': 17142, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 318746, 'error': None, 'target': 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.890 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[313764e7-d3ea-4a00-b817-065d9be495ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:12 compute-0 ceph-mon[74985]: pgmap v1641: 321 pgs: 321 active+clean; 88 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 124 KiB/s rd, 2.7 MiB/s wr, 167 op/s
Nov 25 16:40:12 compute-0 ceph-mon[74985]: osdmap e226: 3 total, 3 up, 3 in
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.957 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b7ccfcd0-1965-4565-bbbb-14decd97a95b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.959 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62c0a8be-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.959 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.960 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62c0a8be-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:40:12 compute-0 NetworkManager[48891]: <info>  [1764088812.9632] manager: (tap62c0a8be-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/230)
Nov 25 16:40:12 compute-0 kernel: tap62c0a8be-b0: entered promiscuous mode
Nov 25 16:40:12 compute-0 nova_compute[254092]: 2025-11-25 16:40:12.963 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.966 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap62c0a8be-b0, col_values=(('external_ids', {'iface-id': '4c0a1e28-b18a-4839-ae5a-4472548ad46f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:40:12 compute-0 nova_compute[254092]: 2025-11-25 16:40:12.967 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:12 compute-0 ovn_controller[153477]: 2025-11-25T16:40:12Z|00538|binding|INFO|Releasing lport 4c0a1e28-b18a-4839-ae5a-4472548ad46f from this chassis (sb_readonly=0)
Nov 25 16:40:12 compute-0 nova_compute[254092]: 2025-11-25 16:40:12.989 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.991 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/62c0a8be-b828-4765-9cf8-f2477a8ef1d9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/62c0a8be-b828-4765-9cf8-f2477a8ef1d9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.992 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6b9c6a74-5b3f-4a33-bf85-2c7e2f4e4447]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.993 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-62c0a8be-b828-4765-9cf8-f2477a8ef1d9
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/62c0a8be-b828-4765-9cf8-f2477a8ef1d9.pid.haproxy
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 62c0a8be-b828-4765-9cf8-f2477a8ef1d9
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:40:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:12.996 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'env', 'PROCESS_TAG=haproxy-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/62c0a8be-b828-4765-9cf8-f2477a8ef1d9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:40:13 compute-0 nova_compute[254092]: 2025-11-25 16:40:13.093 254096 DEBUG nova.network.neutron [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:40:13 compute-0 nova_compute[254092]: 2025-11-25 16:40:13.112 254096 DEBUG oslo_concurrency.lockutils [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Releasing lock "refresh_cache-440746e9-455f-4a2f-8412-a24d1c93cb21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:40:13 compute-0 nova_compute[254092]: 2025-11-25 16:40:13.113 254096 DEBUG nova.compute.manager [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:40:13 compute-0 nova_compute[254092]: 2025-11-25 16:40:13.114 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-440746e9-455f-4a2f-8412-a24d1c93cb21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:40:13 compute-0 nova_compute[254092]: 2025-11-25 16:40:13.115 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 16:40:13 compute-0 nova_compute[254092]: 2025-11-25 16:40:13.115 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 440746e9-455f-4a2f-8412-a24d1c93cb21 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:40:13 compute-0 nova_compute[254092]: 2025-11-25 16:40:13.132 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088813.132157, d1ceaafd-59a6-45b1-833d-eb2a76e789be => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:40:13 compute-0 nova_compute[254092]: 2025-11-25 16:40:13.133 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] VM Started (Lifecycle Event)
Nov 25 16:40:13 compute-0 nova_compute[254092]: 2025-11-25 16:40:13.154 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:40:13 compute-0 nova_compute[254092]: 2025-11-25 16:40:13.156 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:13 compute-0 nova_compute[254092]: 2025-11-25 16:40:13.160 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088813.1331186, d1ceaafd-59a6-45b1-833d-eb2a76e789be => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:40:13 compute-0 nova_compute[254092]: 2025-11-25 16:40:13.161 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] VM Paused (Lifecycle Event)
Nov 25 16:40:13 compute-0 nova_compute[254092]: 2025-11-25 16:40:13.180 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:40:13 compute-0 nova_compute[254092]: 2025-11-25 16:40:13.185 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:40:13 compute-0 nova_compute[254092]: 2025-11-25 16:40:13.206 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:40:13 compute-0 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d00000039.scope: Deactivated successfully.
Nov 25 16:40:13 compute-0 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d00000039.scope: Consumed 1.028s CPU time.
Nov 25 16:40:13 compute-0 systemd-machined[216343]: Machine qemu-66-instance-00000039 terminated.
Nov 25 16:40:13 compute-0 nova_compute[254092]: 2025-11-25 16:40:13.289 254096 DEBUG nova.compute.manager [req-61293951-8a0f-4eb6-916c-024436531f20 req-a62675a7-28ee-4efc-b044-644f550a6189 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Received event network-vif-plugged-d79fd017-c7a6-4bfe-8c90-b3295f62f83c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:40:13 compute-0 nova_compute[254092]: 2025-11-25 16:40:13.290 254096 DEBUG oslo_concurrency.lockutils [req-61293951-8a0f-4eb6-916c-024436531f20 req-a62675a7-28ee-4efc-b044-644f550a6189 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:13 compute-0 nova_compute[254092]: 2025-11-25 16:40:13.290 254096 DEBUG oslo_concurrency.lockutils [req-61293951-8a0f-4eb6-916c-024436531f20 req-a62675a7-28ee-4efc-b044-644f550a6189 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:13 compute-0 nova_compute[254092]: 2025-11-25 16:40:13.291 254096 DEBUG oslo_concurrency.lockutils [req-61293951-8a0f-4eb6-916c-024436531f20 req-a62675a7-28ee-4efc-b044-644f550a6189 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:13 compute-0 nova_compute[254092]: 2025-11-25 16:40:13.291 254096 DEBUG nova.compute.manager [req-61293951-8a0f-4eb6-916c-024436531f20 req-a62675a7-28ee-4efc-b044-644f550a6189 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Processing event network-vif-plugged-d79fd017-c7a6-4bfe-8c90-b3295f62f83c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:40:13 compute-0 nova_compute[254092]: 2025-11-25 16:40:13.292 254096 DEBUG nova.compute.manager [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:40:13 compute-0 nova_compute[254092]: 2025-11-25 16:40:13.297 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088813.2968247, d1ceaafd-59a6-45b1-833d-eb2a76e789be => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:40:13 compute-0 nova_compute[254092]: 2025-11-25 16:40:13.298 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] VM Resumed (Lifecycle Event)
Nov 25 16:40:13 compute-0 nova_compute[254092]: 2025-11-25 16:40:13.300 254096 DEBUG nova.virt.libvirt.driver [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:40:13 compute-0 nova_compute[254092]: 2025-11-25 16:40:13.305 254096 INFO nova.virt.libvirt.driver [-] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Instance spawned successfully.
Nov 25 16:40:13 compute-0 nova_compute[254092]: 2025-11-25 16:40:13.305 254096 DEBUG nova.virt.libvirt.driver [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:40:13 compute-0 nova_compute[254092]: 2025-11-25 16:40:13.308 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:40:13 compute-0 nova_compute[254092]: 2025-11-25 16:40:13.321 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:40:13 compute-0 nova_compute[254092]: 2025-11-25 16:40:13.335 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:40:13 compute-0 nova_compute[254092]: 2025-11-25 16:40:13.340 254096 DEBUG nova.virt.libvirt.driver [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:40:13 compute-0 nova_compute[254092]: 2025-11-25 16:40:13.340 254096 DEBUG nova.virt.libvirt.driver [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:40:13 compute-0 nova_compute[254092]: 2025-11-25 16:40:13.341 254096 DEBUG nova.virt.libvirt.driver [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:40:13 compute-0 nova_compute[254092]: 2025-11-25 16:40:13.341 254096 DEBUG nova.virt.libvirt.driver [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:40:13 compute-0 nova_compute[254092]: 2025-11-25 16:40:13.341 254096 DEBUG nova.virt.libvirt.driver [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:40:13 compute-0 nova_compute[254092]: 2025-11-25 16:40:13.342 254096 DEBUG nova.virt.libvirt.driver [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:40:13 compute-0 nova_compute[254092]: 2025-11-25 16:40:13.351 254096 INFO nova.virt.libvirt.driver [-] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Instance destroyed successfully.
Nov 25 16:40:13 compute-0 nova_compute[254092]: 2025-11-25 16:40:13.352 254096 DEBUG nova.objects.instance [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Lazy-loading 'resources' on Instance uuid 440746e9-455f-4a2f-8412-a24d1c93cb21 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:40:13 compute-0 nova_compute[254092]: 2025-11-25 16:40:13.374 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:40:13 compute-0 nova_compute[254092]: 2025-11-25 16:40:13.420 254096 INFO nova.compute.manager [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Took 8.99 seconds to spawn the instance on the hypervisor.
Nov 25 16:40:13 compute-0 nova_compute[254092]: 2025-11-25 16:40:13.421 254096 DEBUG nova.compute.manager [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:40:13 compute-0 podman[318821]: 2025-11-25 16:40:13.433354401 +0000 UTC m=+0.066451974 container create 03cc187dd0506b8a24150f3662aeaeb616589c7e425b44690b7dd1dbb82d1aa6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 25 16:40:13 compute-0 systemd[1]: Started libpod-conmon-03cc187dd0506b8a24150f3662aeaeb616589c7e425b44690b7dd1dbb82d1aa6.scope.
Nov 25 16:40:13 compute-0 nova_compute[254092]: 2025-11-25 16:40:13.484 254096 INFO nova.compute.manager [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Took 9.99 seconds to build instance.
Nov 25 16:40:13 compute-0 podman[318821]: 2025-11-25 16:40:13.399886313 +0000 UTC m=+0.032983906 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:40:13 compute-0 nova_compute[254092]: 2025-11-25 16:40:13.504 254096 DEBUG oslo_concurrency.lockutils [None req-2c9495d2-0781-4809-b8e5-ed555144031f 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.080s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:13 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:40:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1643: 321 pgs: 321 active+clean; 88 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 72 KiB/s rd, 1.5 MiB/s wr, 94 op/s
Nov 25 16:40:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6577cfdadc0151800bbbfb7a549b8162c86092c3bd8a05ddb66da72e8582cad/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:40:13 compute-0 podman[318821]: 2025-11-25 16:40:13.54202407 +0000 UTC m=+0.175121673 container init 03cc187dd0506b8a24150f3662aeaeb616589c7e425b44690b7dd1dbb82d1aa6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:40:13 compute-0 podman[318821]: 2025-11-25 16:40:13.548234018 +0000 UTC m=+0.181331591 container start 03cc187dd0506b8a24150f3662aeaeb616589c7e425b44690b7dd1dbb82d1aa6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 16:40:13 compute-0 nova_compute[254092]: 2025-11-25 16:40:13.570 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:40:13 compute-0 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[318854]: [NOTICE]   (318858) : New worker (318860) forked
Nov 25 16:40:13 compute-0 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[318854]: [NOTICE]   (318858) : Loading success.
Nov 25 16:40:13 compute-0 nova_compute[254092]: 2025-11-25 16:40:13.596 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-440746e9-455f-4a2f-8412-a24d1c93cb21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:40:13 compute-0 nova_compute[254092]: 2025-11-25 16:40:13.596 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 16:40:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:13.617 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:13.617 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:13.618 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:13 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e226 do_prune osdmap full prune enabled
Nov 25 16:40:13 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e227 e227: 3 total, 3 up, 3 in
Nov 25 16:40:13 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e227: 3 total, 3 up, 3 in
Nov 25 16:40:14 compute-0 nova_compute[254092]: 2025-11-25 16:40:14.302 254096 INFO nova.virt.libvirt.driver [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Deleting instance files /var/lib/nova/instances/440746e9-455f-4a2f-8412-a24d1c93cb21_del
Nov 25 16:40:14 compute-0 nova_compute[254092]: 2025-11-25 16:40:14.303 254096 INFO nova.virt.libvirt.driver [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Deletion of /var/lib/nova/instances/440746e9-455f-4a2f-8412-a24d1c93cb21_del complete
Nov 25 16:40:14 compute-0 nova_compute[254092]: 2025-11-25 16:40:14.361 254096 INFO nova.compute.manager [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Took 1.25 seconds to destroy the instance on the hypervisor.
Nov 25 16:40:14 compute-0 nova_compute[254092]: 2025-11-25 16:40:14.362 254096 DEBUG oslo.service.loopingcall [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:40:14 compute-0 nova_compute[254092]: 2025-11-25 16:40:14.362 254096 DEBUG nova.compute.manager [-] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:40:14 compute-0 nova_compute[254092]: 2025-11-25 16:40:14.362 254096 DEBUG nova.network.neutron [-] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:40:14 compute-0 nova_compute[254092]: 2025-11-25 16:40:14.644 254096 DEBUG nova.network.neutron [-] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:40:14 compute-0 nova_compute[254092]: 2025-11-25 16:40:14.656 254096 DEBUG nova.network.neutron [-] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:40:14 compute-0 nova_compute[254092]: 2025-11-25 16:40:14.669 254096 INFO nova.compute.manager [-] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Took 0.31 seconds to deallocate network for instance.
Nov 25 16:40:14 compute-0 nova_compute[254092]: 2025-11-25 16:40:14.723 254096 DEBUG oslo_concurrency.lockutils [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:14 compute-0 nova_compute[254092]: 2025-11-25 16:40:14.723 254096 DEBUG oslo_concurrency.lockutils [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:14 compute-0 nova_compute[254092]: 2025-11-25 16:40:14.848 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:14.848 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:40:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:14.850 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 16:40:14 compute-0 nova_compute[254092]: 2025-11-25 16:40:14.850 254096 DEBUG oslo_concurrency.processutils [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:40:14 compute-0 ceph-mon[74985]: pgmap v1643: 321 pgs: 321 active+clean; 88 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 72 KiB/s rd, 1.5 MiB/s wr, 94 op/s
Nov 25 16:40:14 compute-0 ceph-mon[74985]: osdmap e227: 3 total, 3 up, 3 in
Nov 25 16:40:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:40:15 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1176849215' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:40:15 compute-0 nova_compute[254092]: 2025-11-25 16:40:15.316 254096 DEBUG oslo_concurrency.processutils [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:40:15 compute-0 nova_compute[254092]: 2025-11-25 16:40:15.322 254096 DEBUG nova.compute.provider_tree [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:40:15 compute-0 nova_compute[254092]: 2025-11-25 16:40:15.341 254096 DEBUG nova.scheduler.client.report [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:40:15 compute-0 nova_compute[254092]: 2025-11-25 16:40:15.359 254096 DEBUG oslo_concurrency.lockutils [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:15 compute-0 nova_compute[254092]: 2025-11-25 16:40:15.381 254096 INFO nova.scheduler.client.report [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Deleted allocations for instance 440746e9-455f-4a2f-8412-a24d1c93cb21
Nov 25 16:40:15 compute-0 nova_compute[254092]: 2025-11-25 16:40:15.385 254096 DEBUG nova.compute.manager [req-e3247502-0022-42df-be2c-d0b3f91527d0 req-46ea4994-86ce-49e4-8453-385283c9e533 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Received event network-vif-plugged-d79fd017-c7a6-4bfe-8c90-b3295f62f83c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:40:15 compute-0 nova_compute[254092]: 2025-11-25 16:40:15.385 254096 DEBUG oslo_concurrency.lockutils [req-e3247502-0022-42df-be2c-d0b3f91527d0 req-46ea4994-86ce-49e4-8453-385283c9e533 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:15 compute-0 nova_compute[254092]: 2025-11-25 16:40:15.385 254096 DEBUG oslo_concurrency.lockutils [req-e3247502-0022-42df-be2c-d0b3f91527d0 req-46ea4994-86ce-49e4-8453-385283c9e533 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:15 compute-0 nova_compute[254092]: 2025-11-25 16:40:15.385 254096 DEBUG oslo_concurrency.lockutils [req-e3247502-0022-42df-be2c-d0b3f91527d0 req-46ea4994-86ce-49e4-8453-385283c9e533 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:15 compute-0 nova_compute[254092]: 2025-11-25 16:40:15.385 254096 DEBUG nova.compute.manager [req-e3247502-0022-42df-be2c-d0b3f91527d0 req-46ea4994-86ce-49e4-8453-385283c9e533 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] No waiting events found dispatching network-vif-plugged-d79fd017-c7a6-4bfe-8c90-b3295f62f83c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:40:15 compute-0 nova_compute[254092]: 2025-11-25 16:40:15.386 254096 WARNING nova.compute.manager [req-e3247502-0022-42df-be2c-d0b3f91527d0 req-46ea4994-86ce-49e4-8453-385283c9e533 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Received unexpected event network-vif-plugged-d79fd017-c7a6-4bfe-8c90-b3295f62f83c for instance with vm_state active and task_state None.
Nov 25 16:40:15 compute-0 nova_compute[254092]: 2025-11-25 16:40:15.446 254096 DEBUG oslo_concurrency.lockutils [None req-32fbef60-08b9-422c-9496-f63af111c69f 810b628f1c824b55930a996d843cc85f 805768b696874b00aa9b3bac89550ed7 - - default default] Lock "440746e9-455f-4a2f-8412-a24d1c93cb21" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.835s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1645: 321 pgs: 321 active+clean; 88 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.0 MiB/s wr, 203 op/s
Nov 25 16:40:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:40:15 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1176849215' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:40:16 compute-0 nova_compute[254092]: 2025-11-25 16:40:16.478 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:17 compute-0 ceph-mon[74985]: pgmap v1645: 321 pgs: 321 active+clean; 88 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.0 MiB/s wr, 203 op/s
Nov 25 16:40:17 compute-0 nova_compute[254092]: 2025-11-25 16:40:17.324 254096 INFO nova.compute.manager [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Rebuilding instance
Nov 25 16:40:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1646: 321 pgs: 321 active+clean; 88 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 27 KiB/s wr, 251 op/s
Nov 25 16:40:17 compute-0 nova_compute[254092]: 2025-11-25 16:40:17.565 254096 DEBUG nova.objects.instance [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'trusted_certs' on Instance uuid d1ceaafd-59a6-45b1-833d-eb2a76e789be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:40:17 compute-0 nova_compute[254092]: 2025-11-25 16:40:17.577 254096 DEBUG nova.compute.manager [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:40:17 compute-0 nova_compute[254092]: 2025-11-25 16:40:17.749 254096 DEBUG nova.objects.instance [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'pci_requests' on Instance uuid d1ceaafd-59a6-45b1-833d-eb2a76e789be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:40:17 compute-0 nova_compute[254092]: 2025-11-25 16:40:17.760 254096 DEBUG nova.objects.instance [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'pci_devices' on Instance uuid d1ceaafd-59a6-45b1-833d-eb2a76e789be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:40:17 compute-0 nova_compute[254092]: 2025-11-25 16:40:17.777 254096 DEBUG nova.objects.instance [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'resources' on Instance uuid d1ceaafd-59a6-45b1-833d-eb2a76e789be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:40:17 compute-0 nova_compute[254092]: 2025-11-25 16:40:17.787 254096 DEBUG nova.objects.instance [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'migration_context' on Instance uuid d1ceaafd-59a6-45b1-833d-eb2a76e789be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:40:17 compute-0 nova_compute[254092]: 2025-11-25 16:40:17.797 254096 DEBUG nova.objects.instance [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 16:40:17 compute-0 nova_compute[254092]: 2025-11-25 16:40:17.800 254096 DEBUG nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 16:40:18 compute-0 ceph-mon[74985]: pgmap v1646: 321 pgs: 321 active+clean; 88 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 27 KiB/s wr, 251 op/s
Nov 25 16:40:18 compute-0 nova_compute[254092]: 2025-11-25 16:40:18.157 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1647: 321 pgs: 321 active+clean; 88 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 22 KiB/s wr, 186 op/s
Nov 25 16:40:20 compute-0 ceph-mon[74985]: pgmap v1647: 321 pgs: 321 active+clean; 88 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 22 KiB/s wr, 186 op/s
Nov 25 16:40:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:40:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e227 do_prune osdmap full prune enabled
Nov 25 16:40:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 e228: 3 total, 3 up, 3 in
Nov 25 16:40:20 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e228: 3 total, 3 up, 3 in
Nov 25 16:40:21 compute-0 nova_compute[254092]: 2025-11-25 16:40:21.481 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1649: 321 pgs: 321 active+clean; 88 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 22 KiB/s wr, 186 op/s
Nov 25 16:40:21 compute-0 ceph-mon[74985]: osdmap e228: 3 total, 3 up, 3 in
Nov 25 16:40:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:21.852 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:40:22 compute-0 ceph-mon[74985]: pgmap v1649: 321 pgs: 321 active+clean; 88 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 22 KiB/s wr, 186 op/s
Nov 25 16:40:23 compute-0 nova_compute[254092]: 2025-11-25 16:40:23.159 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1650: 321 pgs: 321 active+clean; 88 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 19 KiB/s wr, 155 op/s
Nov 25 16:40:23 compute-0 sudo[318893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:40:23 compute-0 sudo[318893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:40:23 compute-0 sudo[318893]: pam_unix(sudo:session): session closed for user root
Nov 25 16:40:23 compute-0 sudo[318918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:40:23 compute-0 sudo[318918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:40:23 compute-0 sudo[318918]: pam_unix(sudo:session): session closed for user root
Nov 25 16:40:23 compute-0 sudo[318943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:40:23 compute-0 sudo[318943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:40:23 compute-0 sudo[318943]: pam_unix(sudo:session): session closed for user root
Nov 25 16:40:23 compute-0 sudo[318968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 16:40:23 compute-0 sudo[318968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:40:24 compute-0 sudo[318968]: pam_unix(sudo:session): session closed for user root
Nov 25 16:40:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:40:24 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:40:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 16:40:24 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:40:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 16:40:24 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:40:24 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev ad4a56ac-c18f-41c6-9c4f-644d39d15804 does not exist
Nov 25 16:40:24 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev d3b2f856-fb28-40ba-a7fa-0a176a19862e does not exist
Nov 25 16:40:24 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 40e2874e-d650-4655-a137-1acfc4fc3894 does not exist
Nov 25 16:40:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 16:40:24 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:40:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 16:40:24 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:40:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:40:24 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:40:24 compute-0 sudo[319025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:40:24 compute-0 sudo[319025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:40:24 compute-0 sudo[319025]: pam_unix(sudo:session): session closed for user root
Nov 25 16:40:24 compute-0 sudo[319050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:40:24 compute-0 sudo[319050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:40:24 compute-0 sudo[319050]: pam_unix(sudo:session): session closed for user root
Nov 25 16:40:24 compute-0 sudo[319075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:40:24 compute-0 sudo[319075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:40:24 compute-0 sudo[319075]: pam_unix(sudo:session): session closed for user root
Nov 25 16:40:24 compute-0 sudo[319100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 16:40:24 compute-0 sudo[319100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:40:25 compute-0 ceph-mon[74985]: pgmap v1650: 321 pgs: 321 active+clean; 88 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 19 KiB/s wr, 155 op/s
Nov 25 16:40:25 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:40:25 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:40:25 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:40:25 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:40:25 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:40:25 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:40:25 compute-0 podman[319163]: 2025-11-25 16:40:25.258468061 +0000 UTC m=+0.113169911 container create 5cdd62e13b96300f8e0270511bce07038e5e727f320c8662d6a920e39eecc983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_williamson, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 16:40:25 compute-0 podman[319163]: 2025-11-25 16:40:25.169957749 +0000 UTC m=+0.024659629 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:40:25 compute-0 systemd[1]: Started libpod-conmon-5cdd62e13b96300f8e0270511bce07038e5e727f320c8662d6a920e39eecc983.scope.
Nov 25 16:40:25 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:40:25 compute-0 podman[319163]: 2025-11-25 16:40:25.48589098 +0000 UTC m=+0.340592860 container init 5cdd62e13b96300f8e0270511bce07038e5e727f320c8662d6a920e39eecc983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_williamson, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 16:40:25 compute-0 podman[319163]: 2025-11-25 16:40:25.496038856 +0000 UTC m=+0.350740706 container start 5cdd62e13b96300f8e0270511bce07038e5e727f320c8662d6a920e39eecc983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 16:40:25 compute-0 tender_williamson[319179]: 167 167
Nov 25 16:40:25 compute-0 systemd[1]: libpod-5cdd62e13b96300f8e0270511bce07038e5e727f320c8662d6a920e39eecc983.scope: Deactivated successfully.
Nov 25 16:40:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1651: 321 pgs: 321 active+clean; 88 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 839 KiB/s rd, 1.7 KiB/s wr, 62 op/s
Nov 25 16:40:25 compute-0 podman[319163]: 2025-11-25 16:40:25.639370854 +0000 UTC m=+0.494072764 container attach 5cdd62e13b96300f8e0270511bce07038e5e727f320c8662d6a920e39eecc983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:40:25 compute-0 podman[319163]: 2025-11-25 16:40:25.641622786 +0000 UTC m=+0.496324636 container died 5cdd62e13b96300f8e0270511bce07038e5e727f320c8662d6a920e39eecc983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_williamson, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 16:40:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:40:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e10a12645e05a225d914b79c0acc5cf62a63c48ce116915f0a650b04d783ef0-merged.mount: Deactivated successfully.
Nov 25 16:40:25 compute-0 podman[319163]: 2025-11-25 16:40:25.838152377 +0000 UTC m=+0.692854227 container remove 5cdd62e13b96300f8e0270511bce07038e5e727f320c8662d6a920e39eecc983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_williamson, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 16:40:25 compute-0 systemd[1]: libpod-conmon-5cdd62e13b96300f8e0270511bce07038e5e727f320c8662d6a920e39eecc983.scope: Deactivated successfully.
Nov 25 16:40:26 compute-0 podman[319203]: 2025-11-25 16:40:26.068991009 +0000 UTC m=+0.089295293 container create ec853f674729c2b36eb7b4c7351a2093967144b2afad601950c02ad899deb8be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 16:40:26 compute-0 podman[319203]: 2025-11-25 16:40:26.002598889 +0000 UTC m=+0.022903203 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:40:26 compute-0 systemd[1]: Started libpod-conmon-ec853f674729c2b36eb7b4c7351a2093967144b2afad601950c02ad899deb8be.scope.
Nov 25 16:40:26 compute-0 ceph-mon[74985]: pgmap v1651: 321 pgs: 321 active+clean; 88 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 839 KiB/s rd, 1.7 KiB/s wr, 62 op/s
Nov 25 16:40:26 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:40:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21791b7201701011e92a42bf95ebbed88012fcad31f0ee07f7b7dabf465b06ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:40:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21791b7201701011e92a42bf95ebbed88012fcad31f0ee07f7b7dabf465b06ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:40:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21791b7201701011e92a42bf95ebbed88012fcad31f0ee07f7b7dabf465b06ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:40:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21791b7201701011e92a42bf95ebbed88012fcad31f0ee07f7b7dabf465b06ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:40:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21791b7201701011e92a42bf95ebbed88012fcad31f0ee07f7b7dabf465b06ed/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 16:40:26 compute-0 podman[319203]: 2025-11-25 16:40:26.175362415 +0000 UTC m=+0.195666729 container init ec853f674729c2b36eb7b4c7351a2093967144b2afad601950c02ad899deb8be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 16:40:26 compute-0 podman[319203]: 2025-11-25 16:40:26.18729342 +0000 UTC m=+0.207597704 container start ec853f674729c2b36eb7b4c7351a2093967144b2afad601950c02ad899deb8be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_gould, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 16:40:26 compute-0 podman[319203]: 2025-11-25 16:40:26.196854649 +0000 UTC m=+0.217159023 container attach ec853f674729c2b36eb7b4c7351a2093967144b2afad601950c02ad899deb8be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:40:26 compute-0 ovn_controller[153477]: 2025-11-25T16:40:26Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e2:7b:b0 10.100.0.8
Nov 25 16:40:26 compute-0 ovn_controller[153477]: 2025-11-25T16:40:26Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e2:7b:b0 10.100.0.8
Nov 25 16:40:26 compute-0 nova_compute[254092]: 2025-11-25 16:40:26.484 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:27 compute-0 naughty_gould[319220]: --> passed data devices: 0 physical, 3 LVM
Nov 25 16:40:27 compute-0 naughty_gould[319220]: --> relative data size: 1.0
Nov 25 16:40:27 compute-0 naughty_gould[319220]: --> All data devices are unavailable
Nov 25 16:40:27 compute-0 systemd[1]: libpod-ec853f674729c2b36eb7b4c7351a2093967144b2afad601950c02ad899deb8be.scope: Deactivated successfully.
Nov 25 16:40:27 compute-0 systemd[1]: libpod-ec853f674729c2b36eb7b4c7351a2093967144b2afad601950c02ad899deb8be.scope: Consumed 1.013s CPU time.
Nov 25 16:40:27 compute-0 podman[319203]: 2025-11-25 16:40:27.265286734 +0000 UTC m=+1.285591018 container died ec853f674729c2b36eb7b4c7351a2093967144b2afad601950c02ad899deb8be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_gould, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:40:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1652: 321 pgs: 321 active+clean; 89 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 606 KiB/s wr, 13 op/s
Nov 25 16:40:27 compute-0 nova_compute[254092]: 2025-11-25 16:40:27.848 254096 DEBUG nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 16:40:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-21791b7201701011e92a42bf95ebbed88012fcad31f0ee07f7b7dabf465b06ed-merged.mount: Deactivated successfully.
Nov 25 16:40:28 compute-0 nova_compute[254092]: 2025-11-25 16:40:28.161 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:28 compute-0 nova_compute[254092]: 2025-11-25 16:40:28.377 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088813.3462272, 440746e9-455f-4a2f-8412-a24d1c93cb21 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:40:28 compute-0 nova_compute[254092]: 2025-11-25 16:40:28.377 254096 INFO nova.compute.manager [-] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] VM Stopped (Lifecycle Event)
Nov 25 16:40:28 compute-0 nova_compute[254092]: 2025-11-25 16:40:28.395 254096 DEBUG nova.compute.manager [None req-1dce0595-399d-4a18-9630-cf72d57deefc - - - - - -] [instance: 440746e9-455f-4a2f-8412-a24d1c93cb21] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:40:28 compute-0 podman[319203]: 2025-11-25 16:40:28.507896907 +0000 UTC m=+2.528201191 container remove ec853f674729c2b36eb7b4c7351a2093967144b2afad601950c02ad899deb8be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_gould, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:40:28 compute-0 systemd[1]: libpod-conmon-ec853f674729c2b36eb7b4c7351a2093967144b2afad601950c02ad899deb8be.scope: Deactivated successfully.
Nov 25 16:40:28 compute-0 nova_compute[254092]: 2025-11-25 16:40:28.534 254096 DEBUG oslo_concurrency.lockutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Acquiring lock "32b30534-761a-439a-85e5-4e2fe8f507df" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:28 compute-0 nova_compute[254092]: 2025-11-25 16:40:28.535 254096 DEBUG oslo_concurrency.lockutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Lock "32b30534-761a-439a-85e5-4e2fe8f507df" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:28 compute-0 sudo[319100]: pam_unix(sudo:session): session closed for user root
Nov 25 16:40:28 compute-0 nova_compute[254092]: 2025-11-25 16:40:28.561 254096 DEBUG nova.compute.manager [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:40:28 compute-0 sudo[319261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:40:28 compute-0 sudo[319261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:40:28 compute-0 sudo[319261]: pam_unix(sudo:session): session closed for user root
Nov 25 16:40:28 compute-0 sudo[319286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:40:28 compute-0 sudo[319286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:40:28 compute-0 sudo[319286]: pam_unix(sudo:session): session closed for user root
Nov 25 16:40:28 compute-0 nova_compute[254092]: 2025-11-25 16:40:28.676 254096 DEBUG oslo_concurrency.lockutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:28 compute-0 nova_compute[254092]: 2025-11-25 16:40:28.678 254096 DEBUG oslo_concurrency.lockutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:28 compute-0 ceph-mon[74985]: pgmap v1652: 321 pgs: 321 active+clean; 89 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 606 KiB/s wr, 13 op/s
Nov 25 16:40:28 compute-0 nova_compute[254092]: 2025-11-25 16:40:28.685 254096 DEBUG nova.virt.hardware [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:40:28 compute-0 nova_compute[254092]: 2025-11-25 16:40:28.686 254096 INFO nova.compute.claims [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:40:28 compute-0 sudo[319311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:40:28 compute-0 sudo[319311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:40:28 compute-0 sudo[319311]: pam_unix(sudo:session): session closed for user root
Nov 25 16:40:28 compute-0 sudo[319336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 16:40:28 compute-0 sudo[319336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:40:28 compute-0 nova_compute[254092]: 2025-11-25 16:40:28.830 254096 DEBUG oslo_concurrency.processutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:40:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:40:29 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/790846405' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:40:29 compute-0 nova_compute[254092]: 2025-11-25 16:40:29.287 254096 DEBUG oslo_concurrency.processutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:40:29 compute-0 nova_compute[254092]: 2025-11-25 16:40:29.295 254096 DEBUG nova.compute.provider_tree [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:40:29 compute-0 nova_compute[254092]: 2025-11-25 16:40:29.310 254096 DEBUG nova.scheduler.client.report [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:40:29 compute-0 nova_compute[254092]: 2025-11-25 16:40:29.348 254096 DEBUG oslo_concurrency.lockutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:29 compute-0 nova_compute[254092]: 2025-11-25 16:40:29.349 254096 DEBUG nova.compute.manager [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:40:29 compute-0 podman[319422]: 2025-11-25 16:40:29.348767759 +0000 UTC m=+0.062371913 container create 70f22833e47b1d5ee682e97be4f8097bca21beee7c1f496290a976349501c8ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:40:29 compute-0 systemd[1]: Started libpod-conmon-70f22833e47b1d5ee682e97be4f8097bca21beee7c1f496290a976349501c8ba.scope.
Nov 25 16:40:29 compute-0 podman[319422]: 2025-11-25 16:40:29.312409742 +0000 UTC m=+0.026013916 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:40:29 compute-0 nova_compute[254092]: 2025-11-25 16:40:29.423 254096 DEBUG nova.compute.manager [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:40:29 compute-0 nova_compute[254092]: 2025-11-25 16:40:29.424 254096 DEBUG nova.network.neutron [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:40:29 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:40:29 compute-0 nova_compute[254092]: 2025-11-25 16:40:29.477 254096 INFO nova.virt.libvirt.driver [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:40:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1653: 321 pgs: 321 active+clean; 89 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 606 KiB/s wr, 13 op/s
Nov 25 16:40:29 compute-0 nova_compute[254092]: 2025-11-25 16:40:29.568 254096 DEBUG nova.compute.manager [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:40:29 compute-0 podman[319422]: 2025-11-25 16:40:29.666798376 +0000 UTC m=+0.380402550 container init 70f22833e47b1d5ee682e97be4f8097bca21beee7c1f496290a976349501c8ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_snyder, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 16:40:29 compute-0 podman[319422]: 2025-11-25 16:40:29.674056572 +0000 UTC m=+0.387660726 container start 70f22833e47b1d5ee682e97be4f8097bca21beee7c1f496290a976349501c8ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 16:40:29 compute-0 keen_snyder[319438]: 167 167
Nov 25 16:40:29 compute-0 systemd[1]: libpod-70f22833e47b1d5ee682e97be4f8097bca21beee7c1f496290a976349501c8ba.scope: Deactivated successfully.
Nov 25 16:40:29 compute-0 conmon[319438]: conmon 70f22833e47b1d5ee682 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-70f22833e47b1d5ee682e97be4f8097bca21beee7c1f496290a976349501c8ba.scope/container/memory.events
Nov 25 16:40:29 compute-0 nova_compute[254092]: 2025-11-25 16:40:29.697 254096 DEBUG nova.policy [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '509d158fe3f34e219f96739bb51bd6d9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1ea58a9bae9c474bb9f0b9c821689054', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:40:29 compute-0 podman[319422]: 2025-11-25 16:40:29.774277731 +0000 UTC m=+0.487881905 container attach 70f22833e47b1d5ee682e97be4f8097bca21beee7c1f496290a976349501c8ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_snyder, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Nov 25 16:40:29 compute-0 podman[319422]: 2025-11-25 16:40:29.77533882 +0000 UTC m=+0.488942994 container died 70f22833e47b1d5ee682e97be4f8097bca21beee7c1f496290a976349501c8ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_snyder, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 16:40:29 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/790846405' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:40:29 compute-0 nova_compute[254092]: 2025-11-25 16:40:29.808 254096 DEBUG nova.compute.manager [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:40:29 compute-0 nova_compute[254092]: 2025-11-25 16:40:29.810 254096 DEBUG nova.virt.libvirt.driver [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:40:29 compute-0 nova_compute[254092]: 2025-11-25 16:40:29.810 254096 INFO nova.virt.libvirt.driver [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Creating image(s)
Nov 25 16:40:29 compute-0 nova_compute[254092]: 2025-11-25 16:40:29.833 254096 DEBUG nova.storage.rbd_utils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] rbd image 32b30534-761a-439a-85e5-4e2fe8f507df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:40:29 compute-0 nova_compute[254092]: 2025-11-25 16:40:29.855 254096 DEBUG nova.storage.rbd_utils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] rbd image 32b30534-761a-439a-85e5-4e2fe8f507df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:40:29 compute-0 nova_compute[254092]: 2025-11-25 16:40:29.880 254096 DEBUG nova.storage.rbd_utils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] rbd image 32b30534-761a-439a-85e5-4e2fe8f507df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:40:29 compute-0 nova_compute[254092]: 2025-11-25 16:40:29.886 254096 DEBUG oslo_concurrency.processutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:40:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-089a85581a67d88e7e7cfb015518a4206f4c5c430a3744ff4c3b8f4c600dddc5-merged.mount: Deactivated successfully.
Nov 25 16:40:29 compute-0 nova_compute[254092]: 2025-11-25 16:40:29.965 254096 DEBUG oslo_concurrency.processutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:40:29 compute-0 nova_compute[254092]: 2025-11-25 16:40:29.967 254096 DEBUG oslo_concurrency.lockutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:29 compute-0 nova_compute[254092]: 2025-11-25 16:40:29.968 254096 DEBUG oslo_concurrency.lockutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:29 compute-0 nova_compute[254092]: 2025-11-25 16:40:29.968 254096 DEBUG oslo_concurrency.lockutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:29 compute-0 nova_compute[254092]: 2025-11-25 16:40:29.990 254096 DEBUG nova.storage.rbd_utils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] rbd image 32b30534-761a-439a-85e5-4e2fe8f507df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:40:29 compute-0 nova_compute[254092]: 2025-11-25 16:40:29.994 254096 DEBUG oslo_concurrency.processutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 32b30534-761a-439a-85e5-4e2fe8f507df_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:40:30 compute-0 podman[319422]: 2025-11-25 16:40:30.190699979 +0000 UTC m=+0.904304123 container remove 70f22833e47b1d5ee682e97be4f8097bca21beee7c1f496290a976349501c8ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 16:40:30 compute-0 systemd[1]: libpod-conmon-70f22833e47b1d5ee682e97be4f8097bca21beee7c1f496290a976349501c8ba.scope: Deactivated successfully.
Nov 25 16:40:30 compute-0 podman[319553]: 2025-11-25 16:40:30.347700158 +0000 UTC m=+0.023897500 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:40:30 compute-0 podman[319553]: 2025-11-25 16:40:30.531250847 +0000 UTC m=+0.207448169 container create e224217e5590913134264b727cd02d30b1a89a41fa3f2fa8c8dfaabd09c39a85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:40:30 compute-0 systemd[1]: Started libpod-conmon-e224217e5590913134264b727cd02d30b1a89a41fa3f2fa8c8dfaabd09c39a85.scope.
Nov 25 16:40:30 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:40:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f99b1e51747860151e08141927ab9dfc1f0bbd48946f1ac64a2a6cb32e1e51cf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:40:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f99b1e51747860151e08141927ab9dfc1f0bbd48946f1ac64a2a6cb32e1e51cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:40:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f99b1e51747860151e08141927ab9dfc1f0bbd48946f1ac64a2a6cb32e1e51cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:40:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f99b1e51747860151e08141927ab9dfc1f0bbd48946f1ac64a2a6cb32e1e51cf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:40:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:40:30 compute-0 podman[319553]: 2025-11-25 16:40:30.791023845 +0000 UTC m=+0.467221177 container init e224217e5590913134264b727cd02d30b1a89a41fa3f2fa8c8dfaabd09c39a85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 16:40:30 compute-0 podman[319553]: 2025-11-25 16:40:30.797999733 +0000 UTC m=+0.474197055 container start e224217e5590913134264b727cd02d30b1a89a41fa3f2fa8c8dfaabd09c39a85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 16:40:31 compute-0 ceph-mon[74985]: pgmap v1653: 321 pgs: 321 active+clean; 89 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 606 KiB/s wr, 13 op/s
Nov 25 16:40:31 compute-0 podman[319553]: 2025-11-25 16:40:31.027631823 +0000 UTC m=+0.703829175 container attach e224217e5590913134264b727cd02d30b1a89a41fa3f2fa8c8dfaabd09c39a85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ardinghelli, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:40:31 compute-0 nova_compute[254092]: 2025-11-25 16:40:31.422 254096 DEBUG oslo_concurrency.processutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 32b30534-761a-439a-85e5-4e2fe8f507df_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:40:31 compute-0 nova_compute[254092]: 2025-11-25 16:40:31.497 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:31 compute-0 nova_compute[254092]: 2025-11-25 16:40:31.505 254096 DEBUG nova.storage.rbd_utils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] resizing rbd image 32b30534-761a-439a-85e5-4e2fe8f507df_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:40:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1654: 321 pgs: 321 active+clean; 121 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 362 KiB/s rd, 2.4 MiB/s wr, 72 op/s
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]: {
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:     "0": [
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:         {
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:             "devices": [
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:                 "/dev/loop3"
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:             ],
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:             "lv_name": "ceph_lv0",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:             "lv_size": "21470642176",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:             "name": "ceph_lv0",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:             "tags": {
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:                 "ceph.cluster_name": "ceph",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:                 "ceph.crush_device_class": "",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:                 "ceph.encrypted": "0",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:                 "ceph.osd_id": "0",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:                 "ceph.type": "block",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:                 "ceph.vdo": "0"
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:             },
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:             "type": "block",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:             "vg_name": "ceph_vg0"
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:         }
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:     ],
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:     "1": [
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:         {
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:             "devices": [
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:                 "/dev/loop4"
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:             ],
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:             "lv_name": "ceph_lv1",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:             "lv_size": "21470642176",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:             "name": "ceph_lv1",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:             "tags": {
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:                 "ceph.cluster_name": "ceph",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:                 "ceph.crush_device_class": "",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:                 "ceph.encrypted": "0",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:                 "ceph.osd_id": "1",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:                 "ceph.type": "block",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:                 "ceph.vdo": "0"
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:             },
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:             "type": "block",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:             "vg_name": "ceph_vg1"
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:         }
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:     ],
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:     "2": [
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:         {
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:             "devices": [
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:                 "/dev/loop5"
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:             ],
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:             "lv_name": "ceph_lv2",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:             "lv_size": "21470642176",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:             "name": "ceph_lv2",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:             "tags": {
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:                 "ceph.cluster_name": "ceph",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:                 "ceph.crush_device_class": "",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:                 "ceph.encrypted": "0",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:                 "ceph.osd_id": "2",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:                 "ceph.type": "block",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:                 "ceph.vdo": "0"
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:             },
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:             "type": "block",
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:             "vg_name": "ceph_vg2"
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:         }
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]:     ]
Nov 25 16:40:31 compute-0 pedantic_ardinghelli[319570]: }
Nov 25 16:40:31 compute-0 nova_compute[254092]: 2025-11-25 16:40:31.636 254096 DEBUG nova.objects.instance [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Lazy-loading 'migration_context' on Instance uuid 32b30534-761a-439a-85e5-4e2fe8f507df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:40:31 compute-0 nova_compute[254092]: 2025-11-25 16:40:31.650 254096 DEBUG nova.virt.libvirt.driver [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:40:31 compute-0 nova_compute[254092]: 2025-11-25 16:40:31.650 254096 DEBUG nova.virt.libvirt.driver [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Ensure instance console log exists: /var/lib/nova/instances/32b30534-761a-439a-85e5-4e2fe8f507df/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:40:31 compute-0 nova_compute[254092]: 2025-11-25 16:40:31.651 254096 DEBUG oslo_concurrency.lockutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:31 compute-0 nova_compute[254092]: 2025-11-25 16:40:31.651 254096 DEBUG oslo_concurrency.lockutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:31 compute-0 nova_compute[254092]: 2025-11-25 16:40:31.652 254096 DEBUG oslo_concurrency.lockutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:31 compute-0 systemd[1]: libpod-e224217e5590913134264b727cd02d30b1a89a41fa3f2fa8c8dfaabd09c39a85.scope: Deactivated successfully.
Nov 25 16:40:31 compute-0 podman[319553]: 2025-11-25 16:40:31.663180826 +0000 UTC m=+1.339378148 container died e224217e5590913134264b727cd02d30b1a89a41fa3f2fa8c8dfaabd09c39a85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ardinghelli, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 16:40:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-f99b1e51747860151e08141927ab9dfc1f0bbd48946f1ac64a2a6cb32e1e51cf-merged.mount: Deactivated successfully.
Nov 25 16:40:31 compute-0 podman[319553]: 2025-11-25 16:40:31.730855021 +0000 UTC m=+1.407052353 container remove e224217e5590913134264b727cd02d30b1a89a41fa3f2fa8c8dfaabd09c39a85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:40:31 compute-0 systemd[1]: libpod-conmon-e224217e5590913134264b727cd02d30b1a89a41fa3f2fa8c8dfaabd09c39a85.scope: Deactivated successfully.
Nov 25 16:40:31 compute-0 sudo[319336]: pam_unix(sudo:session): session closed for user root
Nov 25 16:40:31 compute-0 sudo[319665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:40:31 compute-0 sudo[319665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:40:31 compute-0 sudo[319665]: pam_unix(sudo:session): session closed for user root
Nov 25 16:40:31 compute-0 sudo[319690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:40:31 compute-0 sudo[319690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:40:31 compute-0 sudo[319690]: pam_unix(sudo:session): session closed for user root
Nov 25 16:40:31 compute-0 sudo[319715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:40:31 compute-0 sudo[319715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:40:31 compute-0 sudo[319715]: pam_unix(sudo:session): session closed for user root
Nov 25 16:40:32 compute-0 sudo[319740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 16:40:32 compute-0 sudo[319740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:40:32 compute-0 podman[319805]: 2025-11-25 16:40:32.325381991 +0000 UTC m=+0.040841989 container create f12d9762072738ca4b7a25d20770876bfd1100533053b4b2bba4760d91fe911e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:40:32 compute-0 systemd[1]: Started libpod-conmon-f12d9762072738ca4b7a25d20770876bfd1100533053b4b2bba4760d91fe911e.scope.
Nov 25 16:40:32 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:40:32 compute-0 podman[319805]: 2025-11-25 16:40:32.403906712 +0000 UTC m=+0.119366750 container init f12d9762072738ca4b7a25d20770876bfd1100533053b4b2bba4760d91fe911e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mestorf, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:40:32 compute-0 podman[319805]: 2025-11-25 16:40:32.309409998 +0000 UTC m=+0.024870026 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:40:32 compute-0 podman[319805]: 2025-11-25 16:40:32.410916031 +0000 UTC m=+0.126376039 container start f12d9762072738ca4b7a25d20770876bfd1100533053b4b2bba4760d91fe911e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 16:40:32 compute-0 podman[319805]: 2025-11-25 16:40:32.414320583 +0000 UTC m=+0.129780621 container attach f12d9762072738ca4b7a25d20770876bfd1100533053b4b2bba4760d91fe911e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mestorf, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:40:32 compute-0 sharp_mestorf[319821]: 167 167
Nov 25 16:40:32 compute-0 systemd[1]: libpod-f12d9762072738ca4b7a25d20770876bfd1100533053b4b2bba4760d91fe911e.scope: Deactivated successfully.
Nov 25 16:40:32 compute-0 podman[319805]: 2025-11-25 16:40:32.415609399 +0000 UTC m=+0.131069407 container died f12d9762072738ca4b7a25d20770876bfd1100533053b4b2bba4760d91fe911e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mestorf, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:40:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-fdfea0d00eb1c161c578aaa9156760f774feba7f3b10345be3524b5c3813c858-merged.mount: Deactivated successfully.
Nov 25 16:40:32 compute-0 podman[319805]: 2025-11-25 16:40:32.48532109 +0000 UTC m=+0.200781098 container remove f12d9762072738ca4b7a25d20770876bfd1100533053b4b2bba4760d91fe911e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mestorf, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:40:32 compute-0 systemd[1]: libpod-conmon-f12d9762072738ca4b7a25d20770876bfd1100533053b4b2bba4760d91fe911e.scope: Deactivated successfully.
Nov 25 16:40:32 compute-0 podman[319845]: 2025-11-25 16:40:32.659421583 +0000 UTC m=+0.052677340 container create be889961c8ca59b02a7059b1d8328081cced48b478282802ae12da75d5e63ef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:40:32 compute-0 systemd[1]: Started libpod-conmon-be889961c8ca59b02a7059b1d8328081cced48b478282802ae12da75d5e63ef5.scope.
Nov 25 16:40:32 compute-0 podman[319845]: 2025-11-25 16:40:32.628107784 +0000 UTC m=+0.021363561 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:40:32 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:40:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c351f43175589723447706a0b008137a5f75574a7e960b6a18e43dab0ea2331/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:40:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c351f43175589723447706a0b008137a5f75574a7e960b6a18e43dab0ea2331/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:40:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c351f43175589723447706a0b008137a5f75574a7e960b6a18e43dab0ea2331/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:40:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c351f43175589723447706a0b008137a5f75574a7e960b6a18e43dab0ea2331/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:40:32 compute-0 podman[319845]: 2025-11-25 16:40:32.747976476 +0000 UTC m=+0.141232253 container init be889961c8ca59b02a7059b1d8328081cced48b478282802ae12da75d5e63ef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hellman, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 16:40:32 compute-0 podman[319845]: 2025-11-25 16:40:32.754110542 +0000 UTC m=+0.147366299 container start be889961c8ca59b02a7059b1d8328081cced48b478282802ae12da75d5e63ef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Nov 25 16:40:32 compute-0 podman[319845]: 2025-11-25 16:40:32.767092204 +0000 UTC m=+0.160347961 container attach be889961c8ca59b02a7059b1d8328081cced48b478282802ae12da75d5e63ef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hellman, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 16:40:33 compute-0 ceph-mon[74985]: pgmap v1654: 321 pgs: 321 active+clean; 121 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 362 KiB/s rd, 2.4 MiB/s wr, 72 op/s
Nov 25 16:40:33 compute-0 nova_compute[254092]: 2025-11-25 16:40:33.163 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:33 compute-0 kernel: tapd79fd017-c7 (unregistering): left promiscuous mode
Nov 25 16:40:33 compute-0 NetworkManager[48891]: <info>  [1764088833.2078] device (tapd79fd017-c7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:40:33 compute-0 ovn_controller[153477]: 2025-11-25T16:40:33Z|00539|binding|INFO|Releasing lport d79fd017-c7a6-4bfe-8c90-b3295f62f83c from this chassis (sb_readonly=0)
Nov 25 16:40:33 compute-0 ovn_controller[153477]: 2025-11-25T16:40:33Z|00540|binding|INFO|Setting lport d79fd017-c7a6-4bfe-8c90-b3295f62f83c down in Southbound
Nov 25 16:40:33 compute-0 nova_compute[254092]: 2025-11-25 16:40:33.214 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:33 compute-0 ovn_controller[153477]: 2025-11-25T16:40:33Z|00541|binding|INFO|Removing iface tapd79fd017-c7 ovn-installed in OVS
Nov 25 16:40:33 compute-0 nova_compute[254092]: 2025-11-25 16:40:33.218 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:33 compute-0 nova_compute[254092]: 2025-11-25 16:40:33.219 254096 INFO nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Instance shutdown successfully after 15 seconds.
Nov 25 16:40:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:33.226 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:7b:b0 10.100.0.8'], port_security=['fa:16:3e:e2:7b:b0 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd1ceaafd-59a6-45b1-833d-eb2a76e789be', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ff6b3c59785407bb06c9d7e3969ea4f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b774cfb5-cd97-4ca0-9d7e-110986406220', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d2dad0a-51f1-4edf-9261-6f40860e89ef, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=d79fd017-c7a6-4bfe-8c90-b3295f62f83c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:40:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:33.227 163338 INFO neutron.agent.ovn.metadata.agent [-] Port d79fd017-c7a6-4bfe-8c90-b3295f62f83c in datapath 62c0a8be-b828-4765-9cf8-f2477a8ef1d9 unbound from our chassis
Nov 25 16:40:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:33.228 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 62c0a8be-b828-4765-9cf8-f2477a8ef1d9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:40:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:33.231 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[31e8acf2-46d5-4b25-9be7-80de8b284029]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:33.231 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 namespace which is not needed anymore
Nov 25 16:40:33 compute-0 nova_compute[254092]: 2025-11-25 16:40:33.237 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:33 compute-0 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000003a.scope: Deactivated successfully.
Nov 25 16:40:33 compute-0 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000003a.scope: Consumed 13.295s CPU time.
Nov 25 16:40:33 compute-0 systemd-machined[216343]: Machine qemu-67-instance-0000003a terminated.
Nov 25 16:40:33 compute-0 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[318854]: [NOTICE]   (318858) : haproxy version is 2.8.14-c23fe91
Nov 25 16:40:33 compute-0 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[318854]: [NOTICE]   (318858) : path to executable is /usr/sbin/haproxy
Nov 25 16:40:33 compute-0 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[318854]: [WARNING]  (318858) : Exiting Master process...
Nov 25 16:40:33 compute-0 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[318854]: [ALERT]    (318858) : Current worker (318860) exited with code 143 (Terminated)
Nov 25 16:40:33 compute-0 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[318854]: [WARNING]  (318858) : All workers exited. Exiting... (0)
Nov 25 16:40:33 compute-0 systemd[1]: libpod-03cc187dd0506b8a24150f3662aeaeb616589c7e425b44690b7dd1dbb82d1aa6.scope: Deactivated successfully.
Nov 25 16:40:33 compute-0 conmon[318854]: conmon 03cc187dd0506b8a2415 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-03cc187dd0506b8a24150f3662aeaeb616589c7e425b44690b7dd1dbb82d1aa6.scope/container/memory.events
Nov 25 16:40:33 compute-0 podman[319891]: 2025-11-25 16:40:33.376354273 +0000 UTC m=+0.043640295 container died 03cc187dd0506b8a24150f3662aeaeb616589c7e425b44690b7dd1dbb82d1aa6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 16:40:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6577cfdadc0151800bbbfb7a549b8162c86092c3bd8a05ddb66da72e8582cad-merged.mount: Deactivated successfully.
Nov 25 16:40:33 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-03cc187dd0506b8a24150f3662aeaeb616589c7e425b44690b7dd1dbb82d1aa6-userdata-shm.mount: Deactivated successfully.
Nov 25 16:40:33 compute-0 nova_compute[254092]: 2025-11-25 16:40:33.447 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:33 compute-0 podman[319891]: 2025-11-25 16:40:33.451074991 +0000 UTC m=+0.118361013 container cleanup 03cc187dd0506b8a24150f3662aeaeb616589c7e425b44690b7dd1dbb82d1aa6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 16:40:33 compute-0 nova_compute[254092]: 2025-11-25 16:40:33.455 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:33 compute-0 systemd[1]: libpod-conmon-03cc187dd0506b8a24150f3662aeaeb616589c7e425b44690b7dd1dbb82d1aa6.scope: Deactivated successfully.
Nov 25 16:40:33 compute-0 nova_compute[254092]: 2025-11-25 16:40:33.467 254096 INFO nova.virt.libvirt.driver [-] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Instance destroyed successfully.
Nov 25 16:40:33 compute-0 nova_compute[254092]: 2025-11-25 16:40:33.474 254096 INFO nova.virt.libvirt.driver [-] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Instance destroyed successfully.
Nov 25 16:40:33 compute-0 nova_compute[254092]: 2025-11-25 16:40:33.475 254096 DEBUG nova.virt.libvirt.vif [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:40:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1732543352',display_name='tempest-ServerDiskConfigTestJSON-server-1732543352',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1732543352',id=58,image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:40:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6ff6b3c59785407bb06c9d7e3969ea4f',ramdisk_id='',reservation_id='r-nrd7tmad',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1055022473',owner_user_name='tempest-ServerDiskConfigTestJSON-1055022473-project-me
mber'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:40:16Z,user_data=None,user_id='3c1fd56de7cd4f5c9b1d85ffe8545c90',uuid=d1ceaafd-59a6-45b1-833d-eb2a76e789be,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "address": "fa:16:3e:e2:7b:b0", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79fd017-c7", "ovs_interfaceid": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:40:33 compute-0 nova_compute[254092]: 2025-11-25 16:40:33.476 254096 DEBUG nova.network.os_vif_util [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converting VIF {"id": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "address": "fa:16:3e:e2:7b:b0", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79fd017-c7", "ovs_interfaceid": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:40:33 compute-0 nova_compute[254092]: 2025-11-25 16:40:33.477 254096 DEBUG nova.network.os_vif_util [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:7b:b0,bridge_name='br-int',has_traffic_filtering=True,id=d79fd017-c7a6-4bfe-8c90-b3295f62f83c,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd79fd017-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:40:33 compute-0 nova_compute[254092]: 2025-11-25 16:40:33.478 254096 DEBUG os_vif [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:7b:b0,bridge_name='br-int',has_traffic_filtering=True,id=d79fd017-c7a6-4bfe-8c90-b3295f62f83c,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd79fd017-c7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:40:33 compute-0 nova_compute[254092]: 2025-11-25 16:40:33.480 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:33 compute-0 nova_compute[254092]: 2025-11-25 16:40:33.482 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd79fd017-c7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:40:33 compute-0 nova_compute[254092]: 2025-11-25 16:40:33.484 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:33 compute-0 nova_compute[254092]: 2025-11-25 16:40:33.485 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:33 compute-0 nova_compute[254092]: 2025-11-25 16:40:33.488 254096 INFO os_vif [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:7b:b0,bridge_name='br-int',has_traffic_filtering=True,id=d79fd017-c7a6-4bfe-8c90-b3295f62f83c,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd79fd017-c7')
Nov 25 16:40:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1655: 321 pgs: 321 active+clean; 121 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 16:40:33 compute-0 podman[319929]: 2025-11-25 16:40:33.713391387 +0000 UTC m=+0.235148371 container remove 03cc187dd0506b8a24150f3662aeaeb616589c7e425b44690b7dd1dbb82d1aa6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:40:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:33.720 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d8bbf6dc-2ba0-4274-b59b-cba74359e7f0]: (4, ('Tue Nov 25 04:40:33 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 (03cc187dd0506b8a24150f3662aeaeb616589c7e425b44690b7dd1dbb82d1aa6)\n03cc187dd0506b8a24150f3662aeaeb616589c7e425b44690b7dd1dbb82d1aa6\nTue Nov 25 04:40:33 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 (03cc187dd0506b8a24150f3662aeaeb616589c7e425b44690b7dd1dbb82d1aa6)\n03cc187dd0506b8a24150f3662aeaeb616589c7e425b44690b7dd1dbb82d1aa6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:33.723 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[eb79b536-143d-4f24-9d68-0779b6422300]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:33.724 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62c0a8be-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:40:33 compute-0 nova_compute[254092]: 2025-11-25 16:40:33.726 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:33 compute-0 kernel: tap62c0a8be-b0: left promiscuous mode
Nov 25 16:40:33 compute-0 nova_compute[254092]: 2025-11-25 16:40:33.742 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:33.746 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0ef4755e-9cfc-4f3d-bca9-e90daf577db1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:33 compute-0 elated_hellman[319861]: {
Nov 25 16:40:33 compute-0 elated_hellman[319861]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 16:40:33 compute-0 elated_hellman[319861]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:40:33 compute-0 elated_hellman[319861]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 16:40:33 compute-0 elated_hellman[319861]:         "osd_id": 1,
Nov 25 16:40:33 compute-0 elated_hellman[319861]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:40:33 compute-0 elated_hellman[319861]:         "type": "bluestore"
Nov 25 16:40:33 compute-0 elated_hellman[319861]:     },
Nov 25 16:40:33 compute-0 elated_hellman[319861]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 16:40:33 compute-0 elated_hellman[319861]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:40:33 compute-0 elated_hellman[319861]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 16:40:33 compute-0 elated_hellman[319861]:         "osd_id": 2,
Nov 25 16:40:33 compute-0 elated_hellman[319861]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:40:33 compute-0 elated_hellman[319861]:         "type": "bluestore"
Nov 25 16:40:33 compute-0 elated_hellman[319861]:     },
Nov 25 16:40:33 compute-0 elated_hellman[319861]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 16:40:33 compute-0 elated_hellman[319861]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:40:33 compute-0 elated_hellman[319861]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 16:40:33 compute-0 elated_hellman[319861]:         "osd_id": 0,
Nov 25 16:40:33 compute-0 elated_hellman[319861]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:40:33 compute-0 elated_hellman[319861]:         "type": "bluestore"
Nov 25 16:40:33 compute-0 elated_hellman[319861]:     }
Nov 25 16:40:33 compute-0 elated_hellman[319861]: }
Nov 25 16:40:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:33.762 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d16d3ece-a0b8-4a90-b656-dea68c0c53fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:33.764 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3c246023-bf7f-42c3-9af0-52411961fb84]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:33 compute-0 systemd[1]: libpod-be889961c8ca59b02a7059b1d8328081cced48b478282802ae12da75d5e63ef5.scope: Deactivated successfully.
Nov 25 16:40:33 compute-0 systemd[1]: libpod-be889961c8ca59b02a7059b1d8328081cced48b478282802ae12da75d5e63ef5.scope: Consumed 1.003s CPU time.
Nov 25 16:40:33 compute-0 podman[319845]: 2025-11-25 16:40:33.77948177 +0000 UTC m=+1.172737527 container died be889961c8ca59b02a7059b1d8328081cced48b478282802ae12da75d5e63ef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:40:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:33.787 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[93ad2421-1aa7-4ff4-b001-b9344e94bb41]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527029, 'reachable_time': 21510, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319992, 'error': None, 'target': 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:33.792 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:40:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:33.792 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[8f0f90c2-9d63-4355-9f6b-461d0144ea6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:33 compute-0 systemd[1]: run-netns-ovnmeta\x2d62c0a8be\x2db828\x2d4765\x2d9cf8\x2df2477a8ef1d9.mount: Deactivated successfully.
Nov 25 16:40:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c351f43175589723447706a0b008137a5f75574a7e960b6a18e43dab0ea2331-merged.mount: Deactivated successfully.
Nov 25 16:40:33 compute-0 podman[319845]: 2025-11-25 16:40:33.967445069 +0000 UTC m=+1.360700826 container remove be889961c8ca59b02a7059b1d8328081cced48b478282802ae12da75d5e63ef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hellman, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:40:33 compute-0 systemd[1]: libpod-conmon-be889961c8ca59b02a7059b1d8328081cced48b478282802ae12da75d5e63ef5.scope: Deactivated successfully.
Nov 25 16:40:34 compute-0 sudo[319740]: pam_unix(sudo:session): session closed for user root
Nov 25 16:40:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:40:34 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:40:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:40:34 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:40:34 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 7fa11e78-f5be-4192-8f3a-937177935258 does not exist
Nov 25 16:40:34 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev b04c556a-1bfb-42bb-980e-4344e2143822 does not exist
Nov 25 16:40:34 compute-0 sudo[320003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:40:34 compute-0 sudo[320003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:40:34 compute-0 sudo[320003]: pam_unix(sudo:session): session closed for user root
Nov 25 16:40:34 compute-0 nova_compute[254092]: 2025-11-25 16:40:34.111 254096 DEBUG nova.network.neutron [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Successfully created port: 9691504b-429d-44e8-bdf5-7f223c5b0527 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:40:34 compute-0 sudo[320029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 16:40:34 compute-0 sudo[320029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:40:34 compute-0 sudo[320029]: pam_unix(sudo:session): session closed for user root
Nov 25 16:40:34 compute-0 nova_compute[254092]: 2025-11-25 16:40:34.290 254096 INFO nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Deleting instance files /var/lib/nova/instances/d1ceaafd-59a6-45b1-833d-eb2a76e789be_del
Nov 25 16:40:34 compute-0 nova_compute[254092]: 2025-11-25 16:40:34.291 254096 INFO nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Deletion of /var/lib/nova/instances/d1ceaafd-59a6-45b1-833d-eb2a76e789be_del complete
Nov 25 16:40:34 compute-0 nova_compute[254092]: 2025-11-25 16:40:34.430 254096 DEBUG nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:40:34 compute-0 nova_compute[254092]: 2025-11-25 16:40:34.431 254096 INFO nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Creating image(s)
Nov 25 16:40:34 compute-0 nova_compute[254092]: 2025-11-25 16:40:34.453 254096 DEBUG nova.storage.rbd_utils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:40:34 compute-0 nova_compute[254092]: 2025-11-25 16:40:34.484 254096 DEBUG nova.storage.rbd_utils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:40:34 compute-0 nova_compute[254092]: 2025-11-25 16:40:34.511 254096 DEBUG nova.storage.rbd_utils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:40:34 compute-0 nova_compute[254092]: 2025-11-25 16:40:34.516 254096 DEBUG oslo_concurrency.processutils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:40:34 compute-0 nova_compute[254092]: 2025-11-25 16:40:34.558 254096 DEBUG nova.compute.manager [req-f6577270-b7c5-4403-b28f-2a99b6f82903 req-6420763e-8cad-4a19-9f76-1dbe6305d689 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Received event network-vif-unplugged-d79fd017-c7a6-4bfe-8c90-b3295f62f83c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:40:34 compute-0 nova_compute[254092]: 2025-11-25 16:40:34.559 254096 DEBUG oslo_concurrency.lockutils [req-f6577270-b7c5-4403-b28f-2a99b6f82903 req-6420763e-8cad-4a19-9f76-1dbe6305d689 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:34 compute-0 nova_compute[254092]: 2025-11-25 16:40:34.560 254096 DEBUG oslo_concurrency.lockutils [req-f6577270-b7c5-4403-b28f-2a99b6f82903 req-6420763e-8cad-4a19-9f76-1dbe6305d689 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:34 compute-0 nova_compute[254092]: 2025-11-25 16:40:34.560 254096 DEBUG oslo_concurrency.lockutils [req-f6577270-b7c5-4403-b28f-2a99b6f82903 req-6420763e-8cad-4a19-9f76-1dbe6305d689 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:34 compute-0 nova_compute[254092]: 2025-11-25 16:40:34.561 254096 DEBUG nova.compute.manager [req-f6577270-b7c5-4403-b28f-2a99b6f82903 req-6420763e-8cad-4a19-9f76-1dbe6305d689 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] No waiting events found dispatching network-vif-unplugged-d79fd017-c7a6-4bfe-8c90-b3295f62f83c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:40:34 compute-0 nova_compute[254092]: 2025-11-25 16:40:34.561 254096 WARNING nova.compute.manager [req-f6577270-b7c5-4403-b28f-2a99b6f82903 req-6420763e-8cad-4a19-9f76-1dbe6305d689 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Received unexpected event network-vif-unplugged-d79fd017-c7a6-4bfe-8c90-b3295f62f83c for instance with vm_state active and task_state rebuild_spawning.
Nov 25 16:40:34 compute-0 nova_compute[254092]: 2025-11-25 16:40:34.570 254096 DEBUG oslo_concurrency.lockutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Acquiring lock "bd217c57-20a2-41c4-a969-7a4d94f0c7ce" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:34 compute-0 nova_compute[254092]: 2025-11-25 16:40:34.570 254096 DEBUG oslo_concurrency.lockutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Lock "bd217c57-20a2-41c4-a969-7a4d94f0c7ce" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:34 compute-0 nova_compute[254092]: 2025-11-25 16:40:34.588 254096 DEBUG nova.compute.manager [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:40:34 compute-0 nova_compute[254092]: 2025-11-25 16:40:34.610 254096 DEBUG oslo_concurrency.processutils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:40:34 compute-0 nova_compute[254092]: 2025-11-25 16:40:34.611 254096 DEBUG oslo_concurrency.lockutils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:34 compute-0 nova_compute[254092]: 2025-11-25 16:40:34.612 254096 DEBUG oslo_concurrency.lockutils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:34 compute-0 nova_compute[254092]: 2025-11-25 16:40:34.612 254096 DEBUG oslo_concurrency.lockutils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:34 compute-0 nova_compute[254092]: 2025-11-25 16:40:34.634 254096 DEBUG nova.storage.rbd_utils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:40:34 compute-0 nova_compute[254092]: 2025-11-25 16:40:34.638 254096 DEBUG oslo_concurrency.processutils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:40:34 compute-0 nova_compute[254092]: 2025-11-25 16:40:34.707 254096 DEBUG oslo_concurrency.lockutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:34 compute-0 nova_compute[254092]: 2025-11-25 16:40:34.708 254096 DEBUG oslo_concurrency.lockutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:34 compute-0 nova_compute[254092]: 2025-11-25 16:40:34.716 254096 DEBUG nova.virt.hardware [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:40:34 compute-0 nova_compute[254092]: 2025-11-25 16:40:34.716 254096 INFO nova.compute.claims [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:40:34 compute-0 nova_compute[254092]: 2025-11-25 16:40:34.862 254096 DEBUG oslo_concurrency.processutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:40:34 compute-0 nova_compute[254092]: 2025-11-25 16:40:34.980 254096 DEBUG oslo_concurrency.processutils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.342s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:40:35 compute-0 ceph-mon[74985]: pgmap v1655: 321 pgs: 321 active+clean; 121 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 16:40:35 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:40:35 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.046 254096 DEBUG nova.storage.rbd_utils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] resizing rbd image d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.126 254096 DEBUG nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.127 254096 DEBUG nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Ensure instance console log exists: /var/lib/nova/instances/d1ceaafd-59a6-45b1-833d-eb2a76e789be/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.127 254096 DEBUG oslo_concurrency.lockutils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.127 254096 DEBUG oslo_concurrency.lockutils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.128 254096 DEBUG oslo_concurrency.lockutils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.130 254096 DEBUG nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Start _get_guest_xml network_info=[{"id": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "address": "fa:16:3e:e2:7b:b0", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79fd017-c7", "ovs_interfaceid": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:36Z,direct_url=<?>,disk_format='qcow2',id=a4aa3708-bb73-4b5a-b3f3-42153358021e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.134 254096 WARNING nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.138 254096 DEBUG nova.virt.libvirt.host [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.139 254096 DEBUG nova.virt.libvirt.host [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.141 254096 DEBUG nova.virt.libvirt.host [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.142 254096 DEBUG nova.virt.libvirt.host [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.142 254096 DEBUG nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.142 254096 DEBUG nova.virt.hardware [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:36Z,direct_url=<?>,disk_format='qcow2',id=a4aa3708-bb73-4b5a-b3f3-42153358021e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.143 254096 DEBUG nova.virt.hardware [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.143 254096 DEBUG nova.virt.hardware [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.144 254096 DEBUG nova.virt.hardware [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.145 254096 DEBUG nova.virt.hardware [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.145 254096 DEBUG nova.virt.hardware [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.145 254096 DEBUG nova.virt.hardware [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.145 254096 DEBUG nova.virt.hardware [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.146 254096 DEBUG nova.virt.hardware [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.146 254096 DEBUG nova.virt.hardware [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.146 254096 DEBUG nova.virt.hardware [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.146 254096 DEBUG nova.objects.instance [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'vcpu_model' on Instance uuid d1ceaafd-59a6-45b1-833d-eb2a76e789be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.159 254096 DEBUG oslo_concurrency.processutils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:40:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:40:35 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2956529383' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.320 254096 DEBUG oslo_concurrency.processutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.326 254096 DEBUG nova.compute.provider_tree [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.342 254096 DEBUG nova.scheduler.client.report [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.368 254096 DEBUG oslo_concurrency.lockutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.368 254096 DEBUG nova.compute.manager [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.415 254096 DEBUG nova.compute.manager [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.416 254096 DEBUG nova.network.neutron [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.432 254096 INFO nova.virt.libvirt.driver [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.459 254096 DEBUG nova.compute.manager [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:40:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1656: 321 pgs: 321 active+clean; 122 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 368 KiB/s rd, 4.4 MiB/s wr, 129 op/s
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.546 254096 DEBUG nova.compute.manager [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.549 254096 DEBUG nova.virt.libvirt.driver [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.550 254096 INFO nova.virt.libvirt.driver [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Creating image(s)
Nov 25 16:40:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:40:35 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2232557520' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.580 254096 DEBUG nova.storage.rbd_utils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] rbd image bd217c57-20a2-41c4-a969-7a4d94f0c7ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.602 254096 DEBUG nova.storage.rbd_utils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] rbd image bd217c57-20a2-41c4-a969-7a4d94f0c7ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.625 254096 DEBUG nova.storage.rbd_utils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] rbd image bd217c57-20a2-41c4-a969-7a4d94f0c7ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.628 254096 DEBUG oslo_concurrency.processutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.659 254096 DEBUG oslo_concurrency.processutils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.679 254096 DEBUG nova.storage.rbd_utils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.685 254096 DEBUG oslo_concurrency.processutils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:40:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.718 254096 DEBUG oslo_concurrency.processutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.719 254096 DEBUG oslo_concurrency.lockutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.720 254096 DEBUG oslo_concurrency.lockutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.720 254096 DEBUG oslo_concurrency.lockutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.738 254096 DEBUG nova.storage.rbd_utils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] rbd image bd217c57-20a2-41c4-a969-7a4d94f0c7ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.741 254096 DEBUG oslo_concurrency.processutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 bd217c57-20a2-41c4-a969-7a4d94f0c7ce_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:40:35 compute-0 nova_compute[254092]: 2025-11-25 16:40:35.854 254096 DEBUG nova.policy [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8a304e40f25749e49b171b1db4828ff1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8a6a1f7b5bb9482d85239a3b39051837', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:40:36 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2956529383' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:40:36 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2232557520' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.068 254096 DEBUG oslo_concurrency.processutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 bd217c57-20a2-41c4-a969-7a4d94f0c7ce_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.326s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:40:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:40:36 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2965256065' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.119 254096 DEBUG nova.storage.rbd_utils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] resizing rbd image bd217c57-20a2-41c4-a969-7a4d94f0c7ce_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.143 254096 DEBUG oslo_concurrency.processutils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.145 254096 DEBUG nova.virt.libvirt.vif [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:40:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1732543352',display_name='tempest-ServerDiskConfigTestJSON-server-1732543352',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1732543352',id=58,image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:40:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6ff6b3c59785407bb06c9d7e3969ea4f',ramdisk_id='',reservation_id='r-nrd7tmad',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1055022473',owner_user_name='tempest-
ServerDiskConfigTestJSON-1055022473-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:40:34Z,user_data=None,user_id='3c1fd56de7cd4f5c9b1d85ffe8545c90',uuid=d1ceaafd-59a6-45b1-833d-eb2a76e789be,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "address": "fa:16:3e:e2:7b:b0", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79fd017-c7", "ovs_interfaceid": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.145 254096 DEBUG nova.network.os_vif_util [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converting VIF {"id": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "address": "fa:16:3e:e2:7b:b0", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79fd017-c7", "ovs_interfaceid": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.146 254096 DEBUG nova.network.os_vif_util [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:7b:b0,bridge_name='br-int',has_traffic_filtering=True,id=d79fd017-c7a6-4bfe-8c90-b3295f62f83c,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd79fd017-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.148 254096 DEBUG nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:40:36 compute-0 nova_compute[254092]:   <uuid>d1ceaafd-59a6-45b1-833d-eb2a76e789be</uuid>
Nov 25 16:40:36 compute-0 nova_compute[254092]:   <name>instance-0000003a</name>
Nov 25 16:40:36 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:40:36 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:40:36 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:40:36 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1732543352</nova:name>
Nov 25 16:40:36 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:40:35</nova:creationTime>
Nov 25 16:40:36 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:40:36 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:40:36 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:40:36 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:40:36 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:40:36 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:40:36 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:40:36 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:40:36 compute-0 nova_compute[254092]:         <nova:user uuid="3c1fd56de7cd4f5c9b1d85ffe8545c90">tempest-ServerDiskConfigTestJSON-1055022473-project-member</nova:user>
Nov 25 16:40:36 compute-0 nova_compute[254092]:         <nova:project uuid="6ff6b3c59785407bb06c9d7e3969ea4f">tempest-ServerDiskConfigTestJSON-1055022473</nova:project>
Nov 25 16:40:36 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:40:36 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="a4aa3708-bb73-4b5a-b3f3-42153358021e"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:40:36 compute-0 nova_compute[254092]:         <nova:port uuid="d79fd017-c7a6-4bfe-8c90-b3295f62f83c">
Nov 25 16:40:36 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:40:36 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:40:36 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:40:36 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:40:36 compute-0 nova_compute[254092]:     <system>
Nov 25 16:40:36 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:40:36 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:40:36 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:40:36 compute-0 nova_compute[254092]:       <entry name="serial">d1ceaafd-59a6-45b1-833d-eb2a76e789be</entry>
Nov 25 16:40:36 compute-0 nova_compute[254092]:       <entry name="uuid">d1ceaafd-59a6-45b1-833d-eb2a76e789be</entry>
Nov 25 16:40:36 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     </system>
Nov 25 16:40:36 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:40:36 compute-0 nova_compute[254092]:   <os>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:   </os>
Nov 25 16:40:36 compute-0 nova_compute[254092]:   <features>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:   </features>
Nov 25 16:40:36 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:40:36 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:40:36 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:40:36 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:40:36 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:40:36 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk">
Nov 25 16:40:36 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:       </source>
Nov 25 16:40:36 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:40:36 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:40:36 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:40:36 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk.config">
Nov 25 16:40:36 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:       </source>
Nov 25 16:40:36 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:40:36 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:40:36 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:40:36 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:e2:7b:b0"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:       <target dev="tapd79fd017-c7"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:40:36 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/d1ceaafd-59a6-45b1-833d-eb2a76e789be/console.log" append="off"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     <video>
Nov 25 16:40:36 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     </video>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:40:36 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:40:36 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:40:36 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:40:36 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:40:36 compute-0 nova_compute[254092]: </domain>
Nov 25 16:40:36 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.150 254096 DEBUG nova.compute.manager [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Preparing to wait for external event network-vif-plugged-d79fd017-c7a6-4bfe-8c90-b3295f62f83c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.150 254096 DEBUG oslo_concurrency.lockutils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.151 254096 DEBUG oslo_concurrency.lockutils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.151 254096 DEBUG oslo_concurrency.lockutils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.151 254096 DEBUG nova.virt.libvirt.vif [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:40:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1732543352',display_name='tempest-ServerDiskConfigTestJSON-server-1732543352',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1732543352',id=58,image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:40:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6ff6b3c59785407bb06c9d7e3969ea4f',ramdisk_id='',reservation_id='r-nrd7tmad',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1055022473',owner_user_name='tempest-
ServerDiskConfigTestJSON-1055022473-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:40:34Z,user_data=None,user_id='3c1fd56de7cd4f5c9b1d85ffe8545c90',uuid=d1ceaafd-59a6-45b1-833d-eb2a76e789be,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "address": "fa:16:3e:e2:7b:b0", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79fd017-c7", "ovs_interfaceid": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.152 254096 DEBUG nova.network.os_vif_util [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converting VIF {"id": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "address": "fa:16:3e:e2:7b:b0", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79fd017-c7", "ovs_interfaceid": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.152 254096 DEBUG nova.network.os_vif_util [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:7b:b0,bridge_name='br-int',has_traffic_filtering=True,id=d79fd017-c7a6-4bfe-8c90-b3295f62f83c,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd79fd017-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.153 254096 DEBUG os_vif [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:7b:b0,bridge_name='br-int',has_traffic_filtering=True,id=d79fd017-c7a6-4bfe-8c90-b3295f62f83c,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd79fd017-c7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.153 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.154 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.154 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.156 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.157 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd79fd017-c7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.157 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd79fd017-c7, col_values=(('external_ids', {'iface-id': 'd79fd017-c7a6-4bfe-8c90-b3295f62f83c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e2:7b:b0', 'vm-uuid': 'd1ceaafd-59a6-45b1-833d-eb2a76e789be'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.158 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:36 compute-0 NetworkManager[48891]: <info>  [1764088836.1595] manager: (tapd79fd017-c7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/231)
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.161 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.164 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.165 254096 INFO os_vif [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:7b:b0,bridge_name='br-int',has_traffic_filtering=True,id=d79fd017-c7a6-4bfe-8c90-b3295f62f83c,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd79fd017-c7')
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.202 254096 DEBUG nova.objects.instance [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Lazy-loading 'migration_context' on Instance uuid bd217c57-20a2-41c4-a969-7a4d94f0c7ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.219 254096 DEBUG nova.virt.libvirt.driver [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.220 254096 DEBUG nova.virt.libvirt.driver [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Ensure instance console log exists: /var/lib/nova/instances/bd217c57-20a2-41c4-a969-7a4d94f0c7ce/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.220 254096 DEBUG oslo_concurrency.lockutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.220 254096 DEBUG oslo_concurrency.lockutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.221 254096 DEBUG oslo_concurrency.lockutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.225 254096 DEBUG nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.226 254096 DEBUG nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.226 254096 DEBUG nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] No VIF found with MAC fa:16:3e:e2:7b:b0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.226 254096 INFO nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Using config drive
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.244 254096 DEBUG nova.storage.rbd_utils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.259 254096 DEBUG nova.objects.instance [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'ec2_ids' on Instance uuid d1ceaafd-59a6-45b1-833d-eb2a76e789be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.313 254096 DEBUG nova.objects.instance [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'keypairs' on Instance uuid d1ceaafd-59a6-45b1-833d-eb2a76e789be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.616 254096 DEBUG nova.compute.manager [req-610e681c-104f-4760-8340-f0654961d42c req-cdc63b78-5e8d-410f-9f3e-758ebf1d79f9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Received event network-vif-plugged-d79fd017-c7a6-4bfe-8c90-b3295f62f83c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.616 254096 DEBUG oslo_concurrency.lockutils [req-610e681c-104f-4760-8340-f0654961d42c req-cdc63b78-5e8d-410f-9f3e-758ebf1d79f9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.617 254096 DEBUG oslo_concurrency.lockutils [req-610e681c-104f-4760-8340-f0654961d42c req-cdc63b78-5e8d-410f-9f3e-758ebf1d79f9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.617 254096 DEBUG oslo_concurrency.lockutils [req-610e681c-104f-4760-8340-f0654961d42c req-cdc63b78-5e8d-410f-9f3e-758ebf1d79f9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.617 254096 DEBUG nova.compute.manager [req-610e681c-104f-4760-8340-f0654961d42c req-cdc63b78-5e8d-410f-9f3e-758ebf1d79f9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Processing event network-vif-plugged-d79fd017-c7a6-4bfe-8c90-b3295f62f83c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.661 254096 DEBUG nova.network.neutron [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Successfully updated port: 9691504b-429d-44e8-bdf5-7f223c5b0527 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.684 254096 DEBUG oslo_concurrency.lockutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Acquiring lock "refresh_cache-32b30534-761a-439a-85e5-4e2fe8f507df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.685 254096 DEBUG oslo_concurrency.lockutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Acquired lock "refresh_cache-32b30534-761a-439a-85e5-4e2fe8f507df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.685 254096 DEBUG nova.network.neutron [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.793 254096 INFO nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Creating config drive at /var/lib/nova/instances/d1ceaafd-59a6-45b1-833d-eb2a76e789be/disk.config
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.797 254096 DEBUG oslo_concurrency.processutils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d1ceaafd-59a6-45b1-833d-eb2a76e789be/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6bm0ixpo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.889 254096 DEBUG nova.network.neutron [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.934 254096 DEBUG oslo_concurrency.processutils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d1ceaafd-59a6-45b1-833d-eb2a76e789be/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6bm0ixpo" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.958 254096 DEBUG nova.storage.rbd_utils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:40:36 compute-0 nova_compute[254092]: 2025-11-25 16:40:36.962 254096 DEBUG oslo_concurrency.processutils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d1ceaafd-59a6-45b1-833d-eb2a76e789be/disk.config d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:40:37 compute-0 ceph-mon[74985]: pgmap v1656: 321 pgs: 321 active+clean; 122 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 368 KiB/s rd, 4.4 MiB/s wr, 129 op/s
Nov 25 16:40:37 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2965256065' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:40:37 compute-0 nova_compute[254092]: 2025-11-25 16:40:37.117 254096 DEBUG oslo_concurrency.processutils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d1ceaafd-59a6-45b1-833d-eb2a76e789be/disk.config d1ceaafd-59a6-45b1-833d-eb2a76e789be_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:40:37 compute-0 nova_compute[254092]: 2025-11-25 16:40:37.118 254096 INFO nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Deleting local config drive /var/lib/nova/instances/d1ceaafd-59a6-45b1-833d-eb2a76e789be/disk.config because it was imported into RBD.
Nov 25 16:40:37 compute-0 kernel: tapd79fd017-c7: entered promiscuous mode
Nov 25 16:40:37 compute-0 NetworkManager[48891]: <info>  [1764088837.1671] manager: (tapd79fd017-c7): new Tun device (/org/freedesktop/NetworkManager/Devices/232)
Nov 25 16:40:37 compute-0 ovn_controller[153477]: 2025-11-25T16:40:37Z|00542|binding|INFO|Claiming lport d79fd017-c7a6-4bfe-8c90-b3295f62f83c for this chassis.
Nov 25 16:40:37 compute-0 nova_compute[254092]: 2025-11-25 16:40:37.169 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:37 compute-0 ovn_controller[153477]: 2025-11-25T16:40:37Z|00543|binding|INFO|d79fd017-c7a6-4bfe-8c90-b3295f62f83c: Claiming fa:16:3e:e2:7b:b0 10.100.0.8
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.177 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:7b:b0 10.100.0.8'], port_security=['fa:16:3e:e2:7b:b0 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd1ceaafd-59a6-45b1-833d-eb2a76e789be', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ff6b3c59785407bb06c9d7e3969ea4f', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'b774cfb5-cd97-4ca0-9d7e-110986406220', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d2dad0a-51f1-4edf-9261-6f40860e89ef, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=d79fd017-c7a6-4bfe-8c90-b3295f62f83c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.178 163338 INFO neutron.agent.ovn.metadata.agent [-] Port d79fd017-c7a6-4bfe-8c90-b3295f62f83c in datapath 62c0a8be-b828-4765-9cf8-f2477a8ef1d9 bound to our chassis
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.179 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 62c0a8be-b828-4765-9cf8-f2477a8ef1d9
Nov 25 16:40:37 compute-0 ovn_controller[153477]: 2025-11-25T16:40:37Z|00544|binding|INFO|Setting lport d79fd017-c7a6-4bfe-8c90-b3295f62f83c ovn-installed in OVS
Nov 25 16:40:37 compute-0 ovn_controller[153477]: 2025-11-25T16:40:37Z|00545|binding|INFO|Setting lport d79fd017-c7a6-4bfe-8c90-b3295f62f83c up in Southbound
Nov 25 16:40:37 compute-0 nova_compute[254092]: 2025-11-25 16:40:37.186 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.191 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[baf01e6c-8a47-4618-98a2-04a2b9bdc433]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.192 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap62c0a8be-b1 in ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.193 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap62c0a8be-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.193 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cbf5d70f-bdbc-445a-b67f-dbdd1878bede]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.194 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[83ac86fc-f067-4dcf-8879-d57e18fecf56]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:37 compute-0 systemd-udevd[320544]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:40:37 compute-0 systemd-machined[216343]: New machine qemu-68-instance-0000003a.
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.204 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[c9b4233e-56b8-4751-813c-20d2454d5f6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:37 compute-0 NetworkManager[48891]: <info>  [1764088837.2059] device (tapd79fd017-c7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:40:37 compute-0 NetworkManager[48891]: <info>  [1764088837.2069] device (tapd79fd017-c7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:40:37 compute-0 systemd[1]: Started Virtual Machine qemu-68-instance-0000003a.
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.227 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[45c1d692-4fd3-4e6c-8c76-232bf9439763]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.255 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c0fbdbe9-af2e-4009-b950-50f8d8212d13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.261 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9d065bc4-79a2-4361-be0f-987329d96152]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:37 compute-0 NetworkManager[48891]: <info>  [1764088837.2627] manager: (tap62c0a8be-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/233)
Nov 25 16:40:37 compute-0 systemd-udevd[320548]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:40:37 compute-0 nova_compute[254092]: 2025-11-25 16:40:37.270 254096 DEBUG nova.network.neutron [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Successfully created port: 341145f6-8319-4c2d-aa9d-9d7475a5e7eb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.290 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[fd6e99ce-c68d-47b7-b080-23d069a15728]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.296 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ba71563a-3630-477b-abdb-2a84f67f60d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:37 compute-0 NetworkManager[48891]: <info>  [1764088837.3146] device (tap62c0a8be-b0): carrier: link connected
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.323 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c6a1923b-fa87-4348-80ca-2ba9503e42af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.340 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b521e5f9-59e1-4321-abe1-81a6f664c4e3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62c0a8be-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:da:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 529490, 'reachable_time': 26368, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320577, 'error': None, 'target': 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.356 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[823875e6-92f9-415e-828a-eed3f01e0b35]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedc:dab2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 529490, 'tstamp': 529490}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320578, 'error': None, 'target': 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.370 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6ada9c8b-06ad-4680-8ab8-cf9ce2eaf54b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62c0a8be-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:da:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 529490, 'reachable_time': 26368, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 320579, 'error': None, 'target': 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.404 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0bbe2070-2258-41b0-b0c6-7f33beda7056]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.466 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7210499c-75d1-4d72-8656-dcf029908d8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.468 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62c0a8be-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.469 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.469 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62c0a8be-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:40:37 compute-0 NetworkManager[48891]: <info>  [1764088837.4714] manager: (tap62c0a8be-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/234)
Nov 25 16:40:37 compute-0 kernel: tap62c0a8be-b0: entered promiscuous mode
Nov 25 16:40:37 compute-0 nova_compute[254092]: 2025-11-25 16:40:37.471 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.475 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap62c0a8be-b0, col_values=(('external_ids', {'iface-id': '4c0a1e28-b18a-4839-ae5a-4472548ad46f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:40:37 compute-0 ovn_controller[153477]: 2025-11-25T16:40:37Z|00546|binding|INFO|Releasing lport 4c0a1e28-b18a-4839-ae5a-4472548ad46f from this chassis (sb_readonly=0)
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.477 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/62c0a8be-b828-4765-9cf8-f2477a8ef1d9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/62c0a8be-b828-4765-9cf8-f2477a8ef1d9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.478 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[58b38619-62e2-4bee-a8b2-1e5e26854d7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.479 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-62c0a8be-b828-4765-9cf8-f2477a8ef1d9
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/62c0a8be-b828-4765-9cf8-f2477a8ef1d9.pid.haproxy
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 62c0a8be-b828-4765-9cf8-f2477a8ef1d9
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:40:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:37.480 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'env', 'PROCESS_TAG=haproxy-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/62c0a8be-b828-4765-9cf8-f2477a8ef1d9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:40:37 compute-0 nova_compute[254092]: 2025-11-25 16:40:37.492 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1657: 321 pgs: 321 active+clean; 134 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 5.7 MiB/s wr, 135 op/s
Nov 25 16:40:37 compute-0 nova_compute[254092]: 2025-11-25 16:40:37.617 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for d1ceaafd-59a6-45b1-833d-eb2a76e789be due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 16:40:37 compute-0 nova_compute[254092]: 2025-11-25 16:40:37.618 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088837.6171453, d1ceaafd-59a6-45b1-833d-eb2a76e789be => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:40:37 compute-0 nova_compute[254092]: 2025-11-25 16:40:37.618 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] VM Started (Lifecycle Event)
Nov 25 16:40:37 compute-0 nova_compute[254092]: 2025-11-25 16:40:37.621 254096 DEBUG nova.compute.manager [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:40:37 compute-0 nova_compute[254092]: 2025-11-25 16:40:37.624 254096 DEBUG nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:40:37 compute-0 nova_compute[254092]: 2025-11-25 16:40:37.627 254096 INFO nova.virt.libvirt.driver [-] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Instance spawned successfully.
Nov 25 16:40:37 compute-0 nova_compute[254092]: 2025-11-25 16:40:37.628 254096 DEBUG nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:40:37 compute-0 nova_compute[254092]: 2025-11-25 16:40:37.640 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:40:37 compute-0 nova_compute[254092]: 2025-11-25 16:40:37.646 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:40:37 compute-0 nova_compute[254092]: 2025-11-25 16:40:37.650 254096 DEBUG nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:40:37 compute-0 nova_compute[254092]: 2025-11-25 16:40:37.651 254096 DEBUG nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:40:37 compute-0 nova_compute[254092]: 2025-11-25 16:40:37.651 254096 DEBUG nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:40:37 compute-0 nova_compute[254092]: 2025-11-25 16:40:37.651 254096 DEBUG nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:40:37 compute-0 nova_compute[254092]: 2025-11-25 16:40:37.652 254096 DEBUG nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:40:37 compute-0 nova_compute[254092]: 2025-11-25 16:40:37.652 254096 DEBUG nova.virt.libvirt.driver [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:40:37 compute-0 nova_compute[254092]: 2025-11-25 16:40:37.679 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 16:40:37 compute-0 nova_compute[254092]: 2025-11-25 16:40:37.679 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088837.62036, d1ceaafd-59a6-45b1-833d-eb2a76e789be => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:40:37 compute-0 nova_compute[254092]: 2025-11-25 16:40:37.679 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] VM Paused (Lifecycle Event)
Nov 25 16:40:37 compute-0 nova_compute[254092]: 2025-11-25 16:40:37.691 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:40:37 compute-0 nova_compute[254092]: 2025-11-25 16:40:37.694 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088837.6240833, d1ceaafd-59a6-45b1-833d-eb2a76e789be => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:40:37 compute-0 nova_compute[254092]: 2025-11-25 16:40:37.694 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] VM Resumed (Lifecycle Event)
Nov 25 16:40:37 compute-0 nova_compute[254092]: 2025-11-25 16:40:37.715 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:40:37 compute-0 nova_compute[254092]: 2025-11-25 16:40:37.717 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:40:37 compute-0 nova_compute[254092]: 2025-11-25 16:40:37.745 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 16:40:37 compute-0 nova_compute[254092]: 2025-11-25 16:40:37.807 254096 DEBUG nova.compute.manager [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:40:37 compute-0 nova_compute[254092]: 2025-11-25 16:40:37.892 254096 DEBUG oslo_concurrency.lockutils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:37 compute-0 nova_compute[254092]: 2025-11-25 16:40:37.892 254096 DEBUG oslo_concurrency.lockutils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:37 compute-0 nova_compute[254092]: 2025-11-25 16:40:37.892 254096 DEBUG nova.objects.instance [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 16:40:37 compute-0 podman[320653]: 2025-11-25 16:40:37.938408459 +0000 UTC m=+0.114978100 container create a8239858891d44d07be03ebd422bfd16dbcee78681c2987e30b15dd5b0159c20 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 16:40:37 compute-0 podman[320653]: 2025-11-25 16:40:37.843941167 +0000 UTC m=+0.020510798 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:40:37 compute-0 systemd[1]: Started libpod-conmon-a8239858891d44d07be03ebd422bfd16dbcee78681c2987e30b15dd5b0159c20.scope.
Nov 25 16:40:38 compute-0 nova_compute[254092]: 2025-11-25 16:40:38.008 254096 DEBUG oslo_concurrency.lockutils [None req-7154df91-2dce-4caa-a836-09604f3db778 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.116s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:38 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:40:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c090a3c2d63b20744daa816dd216a1d72f1b7b16b76ad011f3f5fdfd906ac44b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:40:38 compute-0 podman[320653]: 2025-11-25 16:40:38.036618373 +0000 UTC m=+0.213188034 container init a8239858891d44d07be03ebd422bfd16dbcee78681c2987e30b15dd5b0159c20 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 25 16:40:38 compute-0 podman[320653]: 2025-11-25 16:40:38.042674428 +0000 UTC m=+0.219244069 container start a8239858891d44d07be03ebd422bfd16dbcee78681c2987e30b15dd5b0159c20 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 25 16:40:38 compute-0 podman[320666]: 2025-11-25 16:40:38.04825359 +0000 UTC m=+0.072136669 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 16:40:38 compute-0 podman[320670]: 2025-11-25 16:40:38.057988273 +0000 UTC m=+0.078128260 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 16:40:38 compute-0 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[320695]: [NOTICE]   (320727) : New worker (320735) forked
Nov 25 16:40:38 compute-0 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[320695]: [NOTICE]   (320727) : Loading success.
Nov 25 16:40:38 compute-0 podman[320671]: 2025-11-25 16:40:38.071041568 +0000 UTC m=+0.087054373 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:40:38 compute-0 nova_compute[254092]: 2025-11-25 16:40:38.164 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:38 compute-0 nova_compute[254092]: 2025-11-25 16:40:38.775 254096 DEBUG nova.compute.manager [req-d013a57f-ead6-4135-abc9-e9effd04134f req-bc1fa311-a875-46dc-8195-f84847c2ded5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Received event network-changed-9691504b-429d-44e8-bdf5-7f223c5b0527 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:40:38 compute-0 nova_compute[254092]: 2025-11-25 16:40:38.775 254096 DEBUG nova.compute.manager [req-d013a57f-ead6-4135-abc9-e9effd04134f req-bc1fa311-a875-46dc-8195-f84847c2ded5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Refreshing instance network info cache due to event network-changed-9691504b-429d-44e8-bdf5-7f223c5b0527. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:40:38 compute-0 nova_compute[254092]: 2025-11-25 16:40:38.775 254096 DEBUG oslo_concurrency.lockutils [req-d013a57f-ead6-4135-abc9-e9effd04134f req-bc1fa311-a875-46dc-8195-f84847c2ded5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-32b30534-761a-439a-85e5-4e2fe8f507df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:40:39 compute-0 nova_compute[254092]: 2025-11-25 16:40:39.047 254096 DEBUG nova.network.neutron [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Updating instance_info_cache with network_info: [{"id": "9691504b-429d-44e8-bdf5-7f223c5b0527", "address": "fa:16:3e:f4:ec:f2", "network": {"id": "73476371-a7cf-4563-aeaf-a32b30db040e", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-535081531-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ea58a9bae9c474bb9f0b9c821689054", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9691504b-42", "ovs_interfaceid": "9691504b-429d-44e8-bdf5-7f223c5b0527", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:40:39 compute-0 ceph-mon[74985]: pgmap v1657: 321 pgs: 321 active+clean; 134 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 5.7 MiB/s wr, 135 op/s
Nov 25 16:40:39 compute-0 nova_compute[254092]: 2025-11-25 16:40:39.075 254096 DEBUG oslo_concurrency.lockutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Releasing lock "refresh_cache-32b30534-761a-439a-85e5-4e2fe8f507df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:40:39 compute-0 nova_compute[254092]: 2025-11-25 16:40:39.076 254096 DEBUG nova.compute.manager [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Instance network_info: |[{"id": "9691504b-429d-44e8-bdf5-7f223c5b0527", "address": "fa:16:3e:f4:ec:f2", "network": {"id": "73476371-a7cf-4563-aeaf-a32b30db040e", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-535081531-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ea58a9bae9c474bb9f0b9c821689054", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9691504b-42", "ovs_interfaceid": "9691504b-429d-44e8-bdf5-7f223c5b0527", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:40:39 compute-0 nova_compute[254092]: 2025-11-25 16:40:39.077 254096 DEBUG oslo_concurrency.lockutils [req-d013a57f-ead6-4135-abc9-e9effd04134f req-bc1fa311-a875-46dc-8195-f84847c2ded5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-32b30534-761a-439a-85e5-4e2fe8f507df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:40:39 compute-0 nova_compute[254092]: 2025-11-25 16:40:39.077 254096 DEBUG nova.network.neutron [req-d013a57f-ead6-4135-abc9-e9effd04134f req-bc1fa311-a875-46dc-8195-f84847c2ded5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Refreshing network info cache for port 9691504b-429d-44e8-bdf5-7f223c5b0527 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:40:39 compute-0 nova_compute[254092]: 2025-11-25 16:40:39.080 254096 DEBUG nova.virt.libvirt.driver [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Start _get_guest_xml network_info=[{"id": "9691504b-429d-44e8-bdf5-7f223c5b0527", "address": "fa:16:3e:f4:ec:f2", "network": {"id": "73476371-a7cf-4563-aeaf-a32b30db040e", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-535081531-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ea58a9bae9c474bb9f0b9c821689054", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9691504b-42", "ovs_interfaceid": "9691504b-429d-44e8-bdf5-7f223c5b0527", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:40:39 compute-0 nova_compute[254092]: 2025-11-25 16:40:39.084 254096 WARNING nova.virt.libvirt.driver [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:40:39 compute-0 nova_compute[254092]: 2025-11-25 16:40:39.090 254096 DEBUG nova.virt.libvirt.host [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:40:39 compute-0 nova_compute[254092]: 2025-11-25 16:40:39.090 254096 DEBUG nova.virt.libvirt.host [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:40:39 compute-0 nova_compute[254092]: 2025-11-25 16:40:39.093 254096 DEBUG nova.virt.libvirt.host [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:40:39 compute-0 nova_compute[254092]: 2025-11-25 16:40:39.094 254096 DEBUG nova.virt.libvirt.host [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:40:39 compute-0 nova_compute[254092]: 2025-11-25 16:40:39.094 254096 DEBUG nova.virt.libvirt.driver [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:40:39 compute-0 nova_compute[254092]: 2025-11-25 16:40:39.095 254096 DEBUG nova.virt.hardware [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:40:39 compute-0 nova_compute[254092]: 2025-11-25 16:40:39.095 254096 DEBUG nova.virt.hardware [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:40:39 compute-0 nova_compute[254092]: 2025-11-25 16:40:39.095 254096 DEBUG nova.virt.hardware [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:40:39 compute-0 nova_compute[254092]: 2025-11-25 16:40:39.096 254096 DEBUG nova.virt.hardware [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:40:39 compute-0 nova_compute[254092]: 2025-11-25 16:40:39.096 254096 DEBUG nova.virt.hardware [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:40:39 compute-0 nova_compute[254092]: 2025-11-25 16:40:39.096 254096 DEBUG nova.virt.hardware [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:40:39 compute-0 nova_compute[254092]: 2025-11-25 16:40:39.096 254096 DEBUG nova.virt.hardware [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:40:39 compute-0 nova_compute[254092]: 2025-11-25 16:40:39.097 254096 DEBUG nova.virt.hardware [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:40:39 compute-0 nova_compute[254092]: 2025-11-25 16:40:39.097 254096 DEBUG nova.virt.hardware [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:40:39 compute-0 nova_compute[254092]: 2025-11-25 16:40:39.097 254096 DEBUG nova.virt.hardware [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:40:39 compute-0 nova_compute[254092]: 2025-11-25 16:40:39.097 254096 DEBUG nova.virt.hardware [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:40:39 compute-0 nova_compute[254092]: 2025-11-25 16:40:39.099 254096 DEBUG oslo_concurrency.processutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:40:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1658: 321 pgs: 321 active+clean; 134 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 5.2 MiB/s wr, 128 op/s
Nov 25 16:40:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:40:39 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3099929923' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:40:39 compute-0 nova_compute[254092]: 2025-11-25 16:40:39.555 254096 DEBUG oslo_concurrency.processutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:40:39 compute-0 nova_compute[254092]: 2025-11-25 16:40:39.577 254096 DEBUG nova.storage.rbd_utils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] rbd image 32b30534-761a-439a-85e5-4e2fe8f507df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:40:39 compute-0 nova_compute[254092]: 2025-11-25 16:40:39.581 254096 DEBUG oslo_concurrency.processutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:40:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:40:40 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/352668002' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:40:40 compute-0 nova_compute[254092]: 2025-11-25 16:40:40.051 254096 DEBUG oslo_concurrency.processutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:40:40 compute-0 nova_compute[254092]: 2025-11-25 16:40:40.054 254096 DEBUG nova.virt.libvirt.vif [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:40:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-849851407',display_name='tempest-InstanceActionsNegativeTestJSON-server-849851407',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-849851407',id=59,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1ea58a9bae9c474bb9f0b9c821689054',ramdisk_id='',reservation_id='r-umqrt77x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-1833085706',owner_user_name='tempest-InstanceActionsNegativeTestJSON-1833085706-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:40:29Z,user_data=None,user_id='509d158fe3f34e219f96739bb51bd6d9',uuid=32b30534-761a-439a-85e5-4e2fe8f507df,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9691504b-429d-44e8-bdf5-7f223c5b0527", "address": "fa:16:3e:f4:ec:f2", "network": {"id": "73476371-a7cf-4563-aeaf-a32b30db040e", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-535081531-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ea58a9bae9c474bb9f0b9c821689054", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9691504b-42", "ovs_interfaceid": "9691504b-429d-44e8-bdf5-7f223c5b0527", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:40:40 compute-0 nova_compute[254092]: 2025-11-25 16:40:40.055 254096 DEBUG nova.network.os_vif_util [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Converting VIF {"id": "9691504b-429d-44e8-bdf5-7f223c5b0527", "address": "fa:16:3e:f4:ec:f2", "network": {"id": "73476371-a7cf-4563-aeaf-a32b30db040e", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-535081531-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ea58a9bae9c474bb9f0b9c821689054", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9691504b-42", "ovs_interfaceid": "9691504b-429d-44e8-bdf5-7f223c5b0527", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:40:40 compute-0 nova_compute[254092]: 2025-11-25 16:40:40.056 254096 DEBUG nova.network.os_vif_util [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:ec:f2,bridge_name='br-int',has_traffic_filtering=True,id=9691504b-429d-44e8-bdf5-7f223c5b0527,network=Network(73476371-a7cf-4563-aeaf-a32b30db040e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9691504b-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:40:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:40:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:40:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:40:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:40:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:40:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:40:40 compute-0 nova_compute[254092]: 2025-11-25 16:40:40.057 254096 DEBUG nova.objects.instance [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Lazy-loading 'pci_devices' on Instance uuid 32b30534-761a-439a-85e5-4e2fe8f507df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:40:40 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3099929923' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:40:40 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/352668002' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:40:40 compute-0 nova_compute[254092]: 2025-11-25 16:40:40.077 254096 DEBUG nova.virt.libvirt.driver [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:40:40 compute-0 nova_compute[254092]:   <uuid>32b30534-761a-439a-85e5-4e2fe8f507df</uuid>
Nov 25 16:40:40 compute-0 nova_compute[254092]:   <name>instance-0000003b</name>
Nov 25 16:40:40 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:40:40 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:40:40 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:40:40 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:       <nova:name>tempest-InstanceActionsNegativeTestJSON-server-849851407</nova:name>
Nov 25 16:40:40 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:40:39</nova:creationTime>
Nov 25 16:40:40 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:40:40 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:40:40 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:40:40 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:40:40 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:40:40 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:40:40 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:40:40 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:40:40 compute-0 nova_compute[254092]:         <nova:user uuid="509d158fe3f34e219f96739bb51bd6d9">tempest-InstanceActionsNegativeTestJSON-1833085706-project-member</nova:user>
Nov 25 16:40:40 compute-0 nova_compute[254092]:         <nova:project uuid="1ea58a9bae9c474bb9f0b9c821689054">tempest-InstanceActionsNegativeTestJSON-1833085706</nova:project>
Nov 25 16:40:40 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:40:40 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:40:40 compute-0 nova_compute[254092]:         <nova:port uuid="9691504b-429d-44e8-bdf5-7f223c5b0527">
Nov 25 16:40:40 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:40:40 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:40:40 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:40:40 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:40:40 compute-0 nova_compute[254092]:     <system>
Nov 25 16:40:40 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:40:40 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:40:40 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:40:40 compute-0 nova_compute[254092]:       <entry name="serial">32b30534-761a-439a-85e5-4e2fe8f507df</entry>
Nov 25 16:40:40 compute-0 nova_compute[254092]:       <entry name="uuid">32b30534-761a-439a-85e5-4e2fe8f507df</entry>
Nov 25 16:40:40 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     </system>
Nov 25 16:40:40 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:40:40 compute-0 nova_compute[254092]:   <os>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:   </os>
Nov 25 16:40:40 compute-0 nova_compute[254092]:   <features>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:   </features>
Nov 25 16:40:40 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:40:40 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:40:40 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:40:40 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:40:40 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:40:40 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/32b30534-761a-439a-85e5-4e2fe8f507df_disk">
Nov 25 16:40:40 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:       </source>
Nov 25 16:40:40 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:40:40 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:40:40 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:40:40 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/32b30534-761a-439a-85e5-4e2fe8f507df_disk.config">
Nov 25 16:40:40 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:       </source>
Nov 25 16:40:40 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:40:40 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:40:40 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:40:40 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:f4:ec:f2"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:       <target dev="tap9691504b-42"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:40:40 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/32b30534-761a-439a-85e5-4e2fe8f507df/console.log" append="off"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     <video>
Nov 25 16:40:40 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     </video>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:40:40 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:40:40 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:40:40 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:40:40 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:40:40 compute-0 nova_compute[254092]: </domain>
Nov 25 16:40:40 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:40:40 compute-0 nova_compute[254092]: 2025-11-25 16:40:40.085 254096 DEBUG nova.compute.manager [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Preparing to wait for external event network-vif-plugged-9691504b-429d-44e8-bdf5-7f223c5b0527 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:40:40 compute-0 nova_compute[254092]: 2025-11-25 16:40:40.086 254096 DEBUG oslo_concurrency.lockutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Acquiring lock "32b30534-761a-439a-85e5-4e2fe8f507df-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:40 compute-0 nova_compute[254092]: 2025-11-25 16:40:40.086 254096 DEBUG oslo_concurrency.lockutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Lock "32b30534-761a-439a-85e5-4e2fe8f507df-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:40 compute-0 nova_compute[254092]: 2025-11-25 16:40:40.086 254096 DEBUG oslo_concurrency.lockutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Lock "32b30534-761a-439a-85e5-4e2fe8f507df-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:40 compute-0 nova_compute[254092]: 2025-11-25 16:40:40.087 254096 DEBUG nova.virt.libvirt.vif [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:40:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-849851407',display_name='tempest-InstanceActionsNegativeTestJSON-server-849851407',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-849851407',id=59,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1ea58a9bae9c474bb9f0b9c821689054',ramdisk_id='',reservation_id='r-umqrt77x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-1833085706',
owner_user_name='tempest-InstanceActionsNegativeTestJSON-1833085706-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:40:29Z,user_data=None,user_id='509d158fe3f34e219f96739bb51bd6d9',uuid=32b30534-761a-439a-85e5-4e2fe8f507df,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9691504b-429d-44e8-bdf5-7f223c5b0527", "address": "fa:16:3e:f4:ec:f2", "network": {"id": "73476371-a7cf-4563-aeaf-a32b30db040e", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-535081531-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ea58a9bae9c474bb9f0b9c821689054", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9691504b-42", "ovs_interfaceid": "9691504b-429d-44e8-bdf5-7f223c5b0527", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:40:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:40:40
Nov 25 16:40:40 compute-0 nova_compute[254092]: 2025-11-25 16:40:40.087 254096 DEBUG nova.network.os_vif_util [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Converting VIF {"id": "9691504b-429d-44e8-bdf5-7f223c5b0527", "address": "fa:16:3e:f4:ec:f2", "network": {"id": "73476371-a7cf-4563-aeaf-a32b30db040e", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-535081531-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ea58a9bae9c474bb9f0b9c821689054", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9691504b-42", "ovs_interfaceid": "9691504b-429d-44e8-bdf5-7f223c5b0527", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:40:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:40:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:40:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['.rgw.root', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'images', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes', '.mgr']
Nov 25 16:40:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:40:40 compute-0 nova_compute[254092]: 2025-11-25 16:40:40.089 254096 DEBUG nova.network.os_vif_util [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:ec:f2,bridge_name='br-int',has_traffic_filtering=True,id=9691504b-429d-44e8-bdf5-7f223c5b0527,network=Network(73476371-a7cf-4563-aeaf-a32b30db040e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9691504b-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:40:40 compute-0 nova_compute[254092]: 2025-11-25 16:40:40.090 254096 DEBUG os_vif [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:ec:f2,bridge_name='br-int',has_traffic_filtering=True,id=9691504b-429d-44e8-bdf5-7f223c5b0527,network=Network(73476371-a7cf-4563-aeaf-a32b30db040e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9691504b-42') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:40:40 compute-0 nova_compute[254092]: 2025-11-25 16:40:40.091 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:40 compute-0 nova_compute[254092]: 2025-11-25 16:40:40.091 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:40:40 compute-0 nova_compute[254092]: 2025-11-25 16:40:40.092 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:40:40 compute-0 nova_compute[254092]: 2025-11-25 16:40:40.095 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:40 compute-0 nova_compute[254092]: 2025-11-25 16:40:40.095 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9691504b-42, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:40:40 compute-0 nova_compute[254092]: 2025-11-25 16:40:40.096 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9691504b-42, col_values=(('external_ids', {'iface-id': '9691504b-429d-44e8-bdf5-7f223c5b0527', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f4:ec:f2', 'vm-uuid': '32b30534-761a-439a-85e5-4e2fe8f507df'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:40:40 compute-0 nova_compute[254092]: 2025-11-25 16:40:40.097 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:40 compute-0 NetworkManager[48891]: <info>  [1764088840.0984] manager: (tap9691504b-42): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/235)
Nov 25 16:40:40 compute-0 nova_compute[254092]: 2025-11-25 16:40:40.103 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:40:40 compute-0 nova_compute[254092]: 2025-11-25 16:40:40.104 254096 INFO os_vif [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:ec:f2,bridge_name='br-int',has_traffic_filtering=True,id=9691504b-429d-44e8-bdf5-7f223c5b0527,network=Network(73476371-a7cf-4563-aeaf-a32b30db040e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9691504b-42')
Nov 25 16:40:40 compute-0 nova_compute[254092]: 2025-11-25 16:40:40.148 254096 DEBUG nova.virt.libvirt.driver [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:40:40 compute-0 nova_compute[254092]: 2025-11-25 16:40:40.149 254096 DEBUG nova.virt.libvirt.driver [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:40:40 compute-0 nova_compute[254092]: 2025-11-25 16:40:40.150 254096 DEBUG nova.virt.libvirt.driver [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] No VIF found with MAC fa:16:3e:f4:ec:f2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:40:40 compute-0 nova_compute[254092]: 2025-11-25 16:40:40.151 254096 INFO nova.virt.libvirt.driver [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Using config drive
Nov 25 16:40:40 compute-0 nova_compute[254092]: 2025-11-25 16:40:40.174 254096 DEBUG nova.storage.rbd_utils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] rbd image 32b30534-761a-439a-85e5-4e2fe8f507df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:40:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:40:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:40:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:40:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:40:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:40:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:40:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:40:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:40:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:40:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:40:40 compute-0 nova_compute[254092]: 2025-11-25 16:40:40.685 254096 INFO nova.virt.libvirt.driver [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Creating config drive at /var/lib/nova/instances/32b30534-761a-439a-85e5-4e2fe8f507df/disk.config
Nov 25 16:40:40 compute-0 nova_compute[254092]: 2025-11-25 16:40:40.691 254096 DEBUG oslo_concurrency.processutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/32b30534-761a-439a-85e5-4e2fe8f507df/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe472clc4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:40:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:40:40 compute-0 nova_compute[254092]: 2025-11-25 16:40:40.727 254096 DEBUG nova.network.neutron [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Successfully updated port: 341145f6-8319-4c2d-aa9d-9d7475a5e7eb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:40:40 compute-0 nova_compute[254092]: 2025-11-25 16:40:40.740 254096 DEBUG oslo_concurrency.lockutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Acquiring lock "refresh_cache-bd217c57-20a2-41c4-a969-7a4d94f0c7ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:40:40 compute-0 nova_compute[254092]: 2025-11-25 16:40:40.741 254096 DEBUG oslo_concurrency.lockutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Acquired lock "refresh_cache-bd217c57-20a2-41c4-a969-7a4d94f0c7ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:40:40 compute-0 nova_compute[254092]: 2025-11-25 16:40:40.741 254096 DEBUG nova.network.neutron [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:40:40 compute-0 nova_compute[254092]: 2025-11-25 16:40:40.828 254096 DEBUG oslo_concurrency.processutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/32b30534-761a-439a-85e5-4e2fe8f507df/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe472clc4" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:40:40 compute-0 nova_compute[254092]: 2025-11-25 16:40:40.851 254096 DEBUG nova.storage.rbd_utils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] rbd image 32b30534-761a-439a-85e5-4e2fe8f507df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:40:40 compute-0 nova_compute[254092]: 2025-11-25 16:40:40.855 254096 DEBUG oslo_concurrency.processutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/32b30534-761a-439a-85e5-4e2fe8f507df/disk.config 32b30534-761a-439a-85e5-4e2fe8f507df_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:40:40 compute-0 nova_compute[254092]: 2025-11-25 16:40:40.915 254096 DEBUG nova.compute.manager [req-1f2b69b8-b670-4db2-b598-846d1b99974d req-e566986b-0775-45fb-b6ca-16c145b322a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Received event network-vif-plugged-d79fd017-c7a6-4bfe-8c90-b3295f62f83c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:40:40 compute-0 nova_compute[254092]: 2025-11-25 16:40:40.916 254096 DEBUG oslo_concurrency.lockutils [req-1f2b69b8-b670-4db2-b598-846d1b99974d req-e566986b-0775-45fb-b6ca-16c145b322a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:40 compute-0 nova_compute[254092]: 2025-11-25 16:40:40.916 254096 DEBUG oslo_concurrency.lockutils [req-1f2b69b8-b670-4db2-b598-846d1b99974d req-e566986b-0775-45fb-b6ca-16c145b322a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:40 compute-0 nova_compute[254092]: 2025-11-25 16:40:40.916 254096 DEBUG oslo_concurrency.lockutils [req-1f2b69b8-b670-4db2-b598-846d1b99974d req-e566986b-0775-45fb-b6ca-16c145b322a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:40 compute-0 nova_compute[254092]: 2025-11-25 16:40:40.916 254096 DEBUG nova.compute.manager [req-1f2b69b8-b670-4db2-b598-846d1b99974d req-e566986b-0775-45fb-b6ca-16c145b322a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] No waiting events found dispatching network-vif-plugged-d79fd017-c7a6-4bfe-8c90-b3295f62f83c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:40:40 compute-0 nova_compute[254092]: 2025-11-25 16:40:40.917 254096 WARNING nova.compute.manager [req-1f2b69b8-b670-4db2-b598-846d1b99974d req-e566986b-0775-45fb-b6ca-16c145b322a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Received unexpected event network-vif-plugged-d79fd017-c7a6-4bfe-8c90-b3295f62f83c for instance with vm_state active and task_state None.
Nov 25 16:40:40 compute-0 nova_compute[254092]: 2025-11-25 16:40:40.917 254096 DEBUG nova.compute.manager [req-1f2b69b8-b670-4db2-b598-846d1b99974d req-e566986b-0775-45fb-b6ca-16c145b322a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Received event network-changed-341145f6-8319-4c2d-aa9d-9d7475a5e7eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:40:40 compute-0 nova_compute[254092]: 2025-11-25 16:40:40.917 254096 DEBUG nova.compute.manager [req-1f2b69b8-b670-4db2-b598-846d1b99974d req-e566986b-0775-45fb-b6ca-16c145b322a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Refreshing instance network info cache due to event network-changed-341145f6-8319-4c2d-aa9d-9d7475a5e7eb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:40:40 compute-0 nova_compute[254092]: 2025-11-25 16:40:40.918 254096 DEBUG oslo_concurrency.lockutils [req-1f2b69b8-b670-4db2-b598-846d1b99974d req-e566986b-0775-45fb-b6ca-16c145b322a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-bd217c57-20a2-41c4-a969-7a4d94f0c7ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:40:40 compute-0 nova_compute[254092]: 2025-11-25 16:40:40.955 254096 DEBUG nova.network.neutron [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.026 254096 DEBUG oslo_concurrency.processutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/32b30534-761a-439a-85e5-4e2fe8f507df/disk.config 32b30534-761a-439a-85e5-4e2fe8f507df_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.027 254096 INFO nova.virt.libvirt.driver [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Deleting local config drive /var/lib/nova/instances/32b30534-761a-439a-85e5-4e2fe8f507df/disk.config because it was imported into RBD.
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.068 254096 DEBUG oslo_concurrency.lockutils [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.070 254096 DEBUG oslo_concurrency.lockutils [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.070 254096 DEBUG oslo_concurrency.lockutils [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.070 254096 DEBUG oslo_concurrency.lockutils [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.071 254096 DEBUG oslo_concurrency.lockutils [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.072 254096 INFO nova.compute.manager [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Terminating instance
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.073 254096 DEBUG nova.compute.manager [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:40:41 compute-0 kernel: tap9691504b-42: entered promiscuous mode
Nov 25 16:40:41 compute-0 NetworkManager[48891]: <info>  [1764088841.0813] manager: (tap9691504b-42): new Tun device (/org/freedesktop/NetworkManager/Devices/236)
Nov 25 16:40:41 compute-0 ceph-mon[74985]: pgmap v1658: 321 pgs: 321 active+clean; 134 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 5.2 MiB/s wr, 128 op/s
Nov 25 16:40:41 compute-0 ovn_controller[153477]: 2025-11-25T16:40:41Z|00547|binding|INFO|Claiming lport 9691504b-429d-44e8-bdf5-7f223c5b0527 for this chassis.
Nov 25 16:40:41 compute-0 ovn_controller[153477]: 2025-11-25T16:40:41Z|00548|binding|INFO|9691504b-429d-44e8-bdf5-7f223c5b0527: Claiming fa:16:3e:f4:ec:f2 10.100.0.11
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.085 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.090 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.107 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:ec:f2 10.100.0.11'], port_security=['fa:16:3e:f4:ec:f2 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '32b30534-761a-439a-85e5-4e2fe8f507df', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-73476371-a7cf-4563-aeaf-a32b30db040e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1ea58a9bae9c474bb9f0b9c821689054', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a64acc35-5bf2-40e8-88f9-5321cc37448d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f94966a1-d8f7-4eaa-bd8c-6d046b12f822, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=9691504b-429d-44e8-bdf5-7f223c5b0527) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.108 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 9691504b-429d-44e8-bdf5-7f223c5b0527 in datapath 73476371-a7cf-4563-aeaf-a32b30db040e bound to our chassis
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.109 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 73476371-a7cf-4563-aeaf-a32b30db040e
Nov 25 16:40:41 compute-0 systemd-udevd[320880]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:40:41 compute-0 systemd-machined[216343]: New machine qemu-69-instance-0000003b.
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.121 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c3aa91dd-a66e-4e50-ac3a-3d0b85708f68]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.122 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap73476371-a1 in ovnmeta-73476371-a7cf-4563-aeaf-a32b30db040e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.123 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap73476371-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.123 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ee24e5ca-9422-4df2-8a26-811f46b7b1a8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.123 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b71819a3-5b1c-4c30-bc71-23a5a5e25846]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:41 compute-0 NetworkManager[48891]: <info>  [1764088841.1342] device (tap9691504b-42): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:40:41 compute-0 NetworkManager[48891]: <info>  [1764088841.1353] device (tap9691504b-42): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:40:41 compute-0 systemd[1]: Started Virtual Machine qemu-69-instance-0000003b.
Nov 25 16:40:41 compute-0 kernel: tapd79fd017-c7 (unregistering): left promiscuous mode
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.140 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[91a7c9be-147d-4f82-a4a0-d06bc2b7dbca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:41 compute-0 NetworkManager[48891]: <info>  [1764088841.1468] device (tapd79fd017-c7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.169 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[691e9791-311b-44dd-b9d5-58f224e72566]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:41 compute-0 ovn_controller[153477]: 2025-11-25T16:40:41Z|00549|binding|INFO|Releasing lport d79fd017-c7a6-4bfe-8c90-b3295f62f83c from this chassis (sb_readonly=0)
Nov 25 16:40:41 compute-0 ovn_controller[153477]: 2025-11-25T16:40:41Z|00550|binding|INFO|Setting lport d79fd017-c7a6-4bfe-8c90-b3295f62f83c down in Southbound
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.178 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:41 compute-0 ovn_controller[153477]: 2025-11-25T16:40:41Z|00551|binding|INFO|Removing iface tapd79fd017-c7 ovn-installed in OVS
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.180 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.185 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.186 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:7b:b0 10.100.0.8'], port_security=['fa:16:3e:e2:7b:b0 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd1ceaafd-59a6-45b1-833d-eb2a76e789be', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ff6b3c59785407bb06c9d7e3969ea4f', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'b774cfb5-cd97-4ca0-9d7e-110986406220', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d2dad0a-51f1-4edf-9261-6f40860e89ef, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=d79fd017-c7a6-4bfe-8c90-b3295f62f83c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:40:41 compute-0 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d0000003a.scope: Deactivated successfully.
Nov 25 16:40:41 compute-0 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d0000003a.scope: Consumed 3.881s CPU time.
Nov 25 16:40:41 compute-0 systemd-machined[216343]: Machine qemu-68-instance-0000003a terminated.
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.197 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[14ed5e64-bd0e-4278-99ed-70fad21cecb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:41 compute-0 ovn_controller[153477]: 2025-11-25T16:40:41Z|00552|binding|INFO|Setting lport 9691504b-429d-44e8-bdf5-7f223c5b0527 up in Southbound
Nov 25 16:40:41 compute-0 systemd-udevd[320884]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:40:41 compute-0 ovn_controller[153477]: 2025-11-25T16:40:41Z|00553|binding|INFO|Setting lport 9691504b-429d-44e8-bdf5-7f223c5b0527 ovn-installed in OVS
Nov 25 16:40:41 compute-0 NetworkManager[48891]: <info>  [1764088841.2053] manager: (tap73476371-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/237)
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.204 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[60caa7ca-ba3d-4e2e-beb3-f4ff43ba283f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.206 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.241 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[641883ae-e6c4-4651-a53e-f8ca1184a8fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.244 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[fad5fae0-4430-4c81-86e0-960f117d8e2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.254 254096 DEBUG nova.network.neutron [req-d013a57f-ead6-4135-abc9-e9effd04134f req-bc1fa311-a875-46dc-8195-f84847c2ded5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Updated VIF entry in instance network info cache for port 9691504b-429d-44e8-bdf5-7f223c5b0527. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.255 254096 DEBUG nova.network.neutron [req-d013a57f-ead6-4135-abc9-e9effd04134f req-bc1fa311-a875-46dc-8195-f84847c2ded5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Updating instance_info_cache with network_info: [{"id": "9691504b-429d-44e8-bdf5-7f223c5b0527", "address": "fa:16:3e:f4:ec:f2", "network": {"id": "73476371-a7cf-4563-aeaf-a32b30db040e", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-535081531-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ea58a9bae9c474bb9f0b9c821689054", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9691504b-42", "ovs_interfaceid": "9691504b-429d-44e8-bdf5-7f223c5b0527", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:40:41 compute-0 NetworkManager[48891]: <info>  [1764088841.2679] device (tap73476371-a0): carrier: link connected
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.269 254096 DEBUG oslo_concurrency.lockutils [req-d013a57f-ead6-4135-abc9-e9effd04134f req-bc1fa311-a875-46dc-8195-f84847c2ded5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-32b30534-761a-439a-85e5-4e2fe8f507df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.269 254096 DEBUG nova.compute.manager [req-d013a57f-ead6-4135-abc9-e9effd04134f req-bc1fa311-a875-46dc-8195-f84847c2ded5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Received event network-vif-plugged-d79fd017-c7a6-4bfe-8c90-b3295f62f83c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.270 254096 DEBUG oslo_concurrency.lockutils [req-d013a57f-ead6-4135-abc9-e9effd04134f req-bc1fa311-a875-46dc-8195-f84847c2ded5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.270 254096 DEBUG oslo_concurrency.lockutils [req-d013a57f-ead6-4135-abc9-e9effd04134f req-bc1fa311-a875-46dc-8195-f84847c2ded5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.270 254096 DEBUG oslo_concurrency.lockutils [req-d013a57f-ead6-4135-abc9-e9effd04134f req-bc1fa311-a875-46dc-8195-f84847c2ded5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.270 254096 DEBUG nova.compute.manager [req-d013a57f-ead6-4135-abc9-e9effd04134f req-bc1fa311-a875-46dc-8195-f84847c2ded5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] No waiting events found dispatching network-vif-plugged-d79fd017-c7a6-4bfe-8c90-b3295f62f83c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.271 254096 WARNING nova.compute.manager [req-d013a57f-ead6-4135-abc9-e9effd04134f req-bc1fa311-a875-46dc-8195-f84847c2ded5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Received unexpected event network-vif-plugged-d79fd017-c7a6-4bfe-8c90-b3295f62f83c for instance with vm_state active and task_state None.
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.275 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[480415de-363b-45fa-90be-046e1e69df86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:41 compute-0 NetworkManager[48891]: <info>  [1764088841.2951] manager: (tapd79fd017-c7): new Tun device (/org/freedesktop/NetworkManager/Devices/238)
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.297 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.299 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6a1e1e74-ae4a-4b91-bdb2-a8db95e5247b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap73476371-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ea:5f:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 529885, 'reachable_time': 20577, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320920, 'error': None, 'target': 'ovnmeta-73476371-a7cf-4563-aeaf-a32b30db040e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.302 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.310 254096 INFO nova.virt.libvirt.driver [-] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Instance destroyed successfully.
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.311 254096 DEBUG nova.objects.instance [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'resources' on Instance uuid d1ceaafd-59a6-45b1-833d-eb2a76e789be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.315 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ca3a4902-486a-4f85-884a-2a2f4f0e2fa1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feea:5f9e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 529885, 'tstamp': 529885}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320925, 'error': None, 'target': 'ovnmeta-73476371-a7cf-4563-aeaf-a32b30db040e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.328 254096 DEBUG nova.virt.libvirt.vif [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:40:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1732543352',display_name='tempest-ServerDiskConfigTestJSON-server-1732543352',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1732543352',id=58,image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:40:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6ff6b3c59785407bb06c9d7e3969ea4f',ramdisk_id='',reservation_id='r-nrd7tmad',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vi
f_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1055022473',owner_user_name='tempest-ServerDiskConfigTestJSON-1055022473-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:40:37Z,user_data=None,user_id='3c1fd56de7cd4f5c9b1d85ffe8545c90',uuid=d1ceaafd-59a6-45b1-833d-eb2a76e789be,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "address": "fa:16:3e:e2:7b:b0", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79fd017-c7", "ovs_interfaceid": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.329 254096 DEBUG nova.network.os_vif_util [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converting VIF {"id": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "address": "fa:16:3e:e2:7b:b0", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd79fd017-c7", "ovs_interfaceid": "d79fd017-c7a6-4bfe-8c90-b3295f62f83c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.330 254096 DEBUG nova.network.os_vif_util [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:7b:b0,bridge_name='br-int',has_traffic_filtering=True,id=d79fd017-c7a6-4bfe-8c90-b3295f62f83c,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd79fd017-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.331 254096 DEBUG os_vif [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:7b:b0,bridge_name='br-int',has_traffic_filtering=True,id=d79fd017-c7a6-4bfe-8c90-b3295f62f83c,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd79fd017-c7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.333 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.334 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd79fd017-c7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.333 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0e16f390-59b6-49ee-9e63-4d88201996c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap73476371-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ea:5f:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 529885, 'reachable_time': 20577, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 320931, 'error': None, 'target': 'ovnmeta-73476371-a7cf-4563-aeaf-a32b30db040e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.336 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.338 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.340 254096 INFO os_vif [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:7b:b0,bridge_name='br-int',has_traffic_filtering=True,id=d79fd017-c7a6-4bfe-8c90-b3295f62f83c,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd79fd017-c7')
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.364 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[60b5b1cc-8514-4ddd-a40d-18f865adb9c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.435 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e102ad28-6e3d-47f3-ae72-96bf34c29c7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.436 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap73476371-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.436 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.436 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap73476371-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.438 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:41 compute-0 NetworkManager[48891]: <info>  [1764088841.4389] manager: (tap73476371-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/239)
Nov 25 16:40:41 compute-0 kernel: tap73476371-a0: entered promiscuous mode
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.442 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap73476371-a0, col_values=(('external_ids', {'iface-id': '4f2051e7-8850-4104-b92b-9573772689cb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:40:41 compute-0 ovn_controller[153477]: 2025-11-25T16:40:41Z|00554|binding|INFO|Releasing lport 4f2051e7-8850-4104-b92b-9573772689cb from this chassis (sb_readonly=0)
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.443 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.465 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.467 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/73476371-a7cf-4563-aeaf-a32b30db040e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/73476371-a7cf-4563-aeaf-a32b30db040e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.468 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e906d93a-d114-4836-b6cf-44496339724a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.469 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-73476371-a7cf-4563-aeaf-a32b30db040e
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/73476371-a7cf-4563-aeaf-a32b30db040e.pid.haproxy
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 73476371-a7cf-4563-aeaf-a32b30db040e
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:40:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:41.469 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-73476371-a7cf-4563-aeaf-a32b30db040e', 'env', 'PROCESS_TAG=haproxy-73476371-a7cf-4563-aeaf-a32b30db040e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/73476371-a7cf-4563-aeaf-a32b30db040e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:40:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1659: 321 pgs: 321 active+clean; 180 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 7.0 MiB/s wr, 237 op/s
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.543 254096 DEBUG nova.compute.manager [req-f19eefda-0bb3-4042-8887-c0ee159968f7 req-25d746b2-fb4d-4dad-a83b-65eadf65960c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Received event network-vif-plugged-9691504b-429d-44e8-bdf5-7f223c5b0527 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.544 254096 DEBUG oslo_concurrency.lockutils [req-f19eefda-0bb3-4042-8887-c0ee159968f7 req-25d746b2-fb4d-4dad-a83b-65eadf65960c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "32b30534-761a-439a-85e5-4e2fe8f507df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.544 254096 DEBUG oslo_concurrency.lockutils [req-f19eefda-0bb3-4042-8887-c0ee159968f7 req-25d746b2-fb4d-4dad-a83b-65eadf65960c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "32b30534-761a-439a-85e5-4e2fe8f507df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.545 254096 DEBUG oslo_concurrency.lockutils [req-f19eefda-0bb3-4042-8887-c0ee159968f7 req-25d746b2-fb4d-4dad-a83b-65eadf65960c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "32b30534-761a-439a-85e5-4e2fe8f507df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.545 254096 DEBUG nova.compute.manager [req-f19eefda-0bb3-4042-8887-c0ee159968f7 req-25d746b2-fb4d-4dad-a83b-65eadf65960c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Processing event network-vif-plugged-9691504b-429d-44e8-bdf5-7f223c5b0527 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.565 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088841.5648766, 32b30534-761a-439a-85e5-4e2fe8f507df => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.566 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] VM Started (Lifecycle Event)
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.568 254096 DEBUG nova.compute.manager [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.571 254096 DEBUG nova.virt.libvirt.driver [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.574 254096 INFO nova.virt.libvirt.driver [-] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Instance spawned successfully.
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.574 254096 DEBUG nova.virt.libvirt.driver [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.588 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.594 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.598 254096 DEBUG nova.virt.libvirt.driver [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.599 254096 DEBUG nova.virt.libvirt.driver [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.599 254096 DEBUG nova.virt.libvirt.driver [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.600 254096 DEBUG nova.virt.libvirt.driver [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.600 254096 DEBUG nova.virt.libvirt.driver [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.601 254096 DEBUG nova.virt.libvirt.driver [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.626 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.626 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088841.5649772, 32b30534-761a-439a-85e5-4e2fe8f507df => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.627 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] VM Paused (Lifecycle Event)
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.648 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.650 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088841.5701854, 32b30534-761a-439a-85e5-4e2fe8f507df => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.650 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] VM Resumed (Lifecycle Event)
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.658 254096 INFO nova.compute.manager [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Took 11.85 seconds to spawn the instance on the hypervisor.
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.658 254096 DEBUG nova.compute.manager [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.681 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.684 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.711 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.721 254096 INFO nova.compute.manager [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Took 13.07 seconds to build instance.
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.736 254096 DEBUG oslo_concurrency.lockutils [None req-644cfa15-9754-4f5e-aa58-cb2bb7901726 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Lock "32b30534-761a-439a-85e5-4e2fe8f507df" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.201s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:41 compute-0 podman[321023]: 2025-11-25 16:40:41.85631541 +0000 UTC m=+0.050809959 container create 5b4352167d58119796965df3af8ebf8e85aeb97867b4b3d5f0abee23e2e41e27 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-73476371-a7cf-4563-aeaf-a32b30db040e, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.885 254096 INFO nova.virt.libvirt.driver [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Deleting instance files /var/lib/nova/instances/d1ceaafd-59a6-45b1-833d-eb2a76e789be_del
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.887 254096 INFO nova.virt.libvirt.driver [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Deletion of /var/lib/nova/instances/d1ceaafd-59a6-45b1-833d-eb2a76e789be_del complete
Nov 25 16:40:41 compute-0 systemd[1]: Started libpod-conmon-5b4352167d58119796965df3af8ebf8e85aeb97867b4b3d5f0abee23e2e41e27.scope.
Nov 25 16:40:41 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:40:41 compute-0 podman[321023]: 2025-11-25 16:40:41.833108721 +0000 UTC m=+0.027603300 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:40:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a438c9251a188e84b6838f46344ecd1bfc883a1c4248292af4a79de88d209154/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.939 254096 INFO nova.compute.manager [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Took 0.87 seconds to destroy the instance on the hypervisor.
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.940 254096 DEBUG oslo.service.loopingcall [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.940 254096 DEBUG nova.compute.manager [-] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:40:41 compute-0 nova_compute[254092]: 2025-11-25 16:40:41.941 254096 DEBUG nova.network.neutron [-] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:40:41 compute-0 podman[321023]: 2025-11-25 16:40:41.944819011 +0000 UTC m=+0.139313590 container init 5b4352167d58119796965df3af8ebf8e85aeb97867b4b3d5f0abee23e2e41e27 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-73476371-a7cf-4563-aeaf-a32b30db040e, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 16:40:41 compute-0 podman[321023]: 2025-11-25 16:40:41.952324825 +0000 UTC m=+0.146819384 container start 5b4352167d58119796965df3af8ebf8e85aeb97867b4b3d5f0abee23e2e41e27 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-73476371-a7cf-4563-aeaf-a32b30db040e, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 25 16:40:41 compute-0 neutron-haproxy-ovnmeta-73476371-a7cf-4563-aeaf-a32b30db040e[321039]: [NOTICE]   (321043) : New worker (321045) forked
Nov 25 16:40:41 compute-0 neutron-haproxy-ovnmeta-73476371-a7cf-4563-aeaf-a32b30db040e[321039]: [NOTICE]   (321043) : Loading success.
Nov 25 16:40:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:42.050 163338 INFO neutron.agent.ovn.metadata.agent [-] Port d79fd017-c7a6-4bfe-8c90-b3295f62f83c in datapath 62c0a8be-b828-4765-9cf8-f2477a8ef1d9 unbound from our chassis
Nov 25 16:40:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:42.052 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 62c0a8be-b828-4765-9cf8-f2477a8ef1d9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:40:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:42.053 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c55713bf-b565-4b3a-a5f2-42d2d9a9c741]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:42.054 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 namespace which is not needed anymore
Nov 25 16:40:42 compute-0 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[320695]: [NOTICE]   (320727) : haproxy version is 2.8.14-c23fe91
Nov 25 16:40:42 compute-0 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[320695]: [NOTICE]   (320727) : path to executable is /usr/sbin/haproxy
Nov 25 16:40:42 compute-0 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[320695]: [WARNING]  (320727) : Exiting Master process...
Nov 25 16:40:42 compute-0 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[320695]: [ALERT]    (320727) : Current worker (320735) exited with code 143 (Terminated)
Nov 25 16:40:42 compute-0 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[320695]: [WARNING]  (320727) : All workers exited. Exiting... (0)
Nov 25 16:40:42 compute-0 systemd[1]: libpod-a8239858891d44d07be03ebd422bfd16dbcee78681c2987e30b15dd5b0159c20.scope: Deactivated successfully.
Nov 25 16:40:42 compute-0 podman[321072]: 2025-11-25 16:40:42.228022185 +0000 UTC m=+0.083269000 container died a8239858891d44d07be03ebd422bfd16dbcee78681c2987e30b15dd5b0159c20 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 16:40:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a8239858891d44d07be03ebd422bfd16dbcee78681c2987e30b15dd5b0159c20-userdata-shm.mount: Deactivated successfully.
Nov 25 16:40:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-c090a3c2d63b20744daa816dd216a1d72f1b7b16b76ad011f3f5fdfd906ac44b-merged.mount: Deactivated successfully.
Nov 25 16:40:42 compute-0 podman[321072]: 2025-11-25 16:40:42.581139165 +0000 UTC m=+0.436385960 container cleanup a8239858891d44d07be03ebd422bfd16dbcee78681c2987e30b15dd5b0159c20 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 16:40:42 compute-0 systemd[1]: libpod-conmon-a8239858891d44d07be03ebd422bfd16dbcee78681c2987e30b15dd5b0159c20.scope: Deactivated successfully.
Nov 25 16:40:42 compute-0 nova_compute[254092]: 2025-11-25 16:40:42.638 254096 DEBUG nova.network.neutron [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Updating instance_info_cache with network_info: [{"id": "341145f6-8319-4c2d-aa9d-9d7475a5e7eb", "address": "fa:16:3e:cc:6c:f3", "network": {"id": "4fd021dd-a32c-4996-b105-78f4369f31fc", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1334163951-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a6a1f7b5bb9482d85239a3b39051837", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341145f6-83", "ovs_interfaceid": "341145f6-8319-4c2d-aa9d-9d7475a5e7eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:40:42 compute-0 nova_compute[254092]: 2025-11-25 16:40:42.703 254096 DEBUG oslo_concurrency.lockutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Releasing lock "refresh_cache-bd217c57-20a2-41c4-a969-7a4d94f0c7ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:40:42 compute-0 nova_compute[254092]: 2025-11-25 16:40:42.704 254096 DEBUG nova.compute.manager [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Instance network_info: |[{"id": "341145f6-8319-4c2d-aa9d-9d7475a5e7eb", "address": "fa:16:3e:cc:6c:f3", "network": {"id": "4fd021dd-a32c-4996-b105-78f4369f31fc", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1334163951-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a6a1f7b5bb9482d85239a3b39051837", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341145f6-83", "ovs_interfaceid": "341145f6-8319-4c2d-aa9d-9d7475a5e7eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:40:42 compute-0 nova_compute[254092]: 2025-11-25 16:40:42.704 254096 DEBUG oslo_concurrency.lockutils [req-1f2b69b8-b670-4db2-b598-846d1b99974d req-e566986b-0775-45fb-b6ca-16c145b322a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-bd217c57-20a2-41c4-a969-7a4d94f0c7ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:40:42 compute-0 nova_compute[254092]: 2025-11-25 16:40:42.705 254096 DEBUG nova.network.neutron [req-1f2b69b8-b670-4db2-b598-846d1b99974d req-e566986b-0775-45fb-b6ca-16c145b322a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Refreshing network info cache for port 341145f6-8319-4c2d-aa9d-9d7475a5e7eb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:40:42 compute-0 nova_compute[254092]: 2025-11-25 16:40:42.708 254096 DEBUG nova.virt.libvirt.driver [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Start _get_guest_xml network_info=[{"id": "341145f6-8319-4c2d-aa9d-9d7475a5e7eb", "address": "fa:16:3e:cc:6c:f3", "network": {"id": "4fd021dd-a32c-4996-b105-78f4369f31fc", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1334163951-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a6a1f7b5bb9482d85239a3b39051837", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341145f6-83", "ovs_interfaceid": "341145f6-8319-4c2d-aa9d-9d7475a5e7eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:40:42 compute-0 nova_compute[254092]: 2025-11-25 16:40:42.713 254096 WARNING nova.virt.libvirt.driver [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:40:42 compute-0 nova_compute[254092]: 2025-11-25 16:40:42.719 254096 DEBUG nova.virt.libvirt.host [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:40:42 compute-0 nova_compute[254092]: 2025-11-25 16:40:42.720 254096 DEBUG nova.virt.libvirt.host [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:40:42 compute-0 nova_compute[254092]: 2025-11-25 16:40:42.729 254096 DEBUG nova.virt.libvirt.host [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:40:42 compute-0 nova_compute[254092]: 2025-11-25 16:40:42.730 254096 DEBUG nova.virt.libvirt.host [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:40:42 compute-0 nova_compute[254092]: 2025-11-25 16:40:42.731 254096 DEBUG nova.virt.libvirt.driver [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:40:42 compute-0 nova_compute[254092]: 2025-11-25 16:40:42.731 254096 DEBUG nova.virt.hardware [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:40:42 compute-0 nova_compute[254092]: 2025-11-25 16:40:42.732 254096 DEBUG nova.virt.hardware [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:40:42 compute-0 nova_compute[254092]: 2025-11-25 16:40:42.733 254096 DEBUG nova.virt.hardware [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:40:42 compute-0 nova_compute[254092]: 2025-11-25 16:40:42.733 254096 DEBUG nova.virt.hardware [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:40:42 compute-0 nova_compute[254092]: 2025-11-25 16:40:42.734 254096 DEBUG nova.virt.hardware [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:40:42 compute-0 podman[321098]: 2025-11-25 16:40:42.734366531 +0000 UTC m=+0.132151396 container remove a8239858891d44d07be03ebd422bfd16dbcee78681c2987e30b15dd5b0159c20 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:40:42 compute-0 nova_compute[254092]: 2025-11-25 16:40:42.735 254096 DEBUG nova.virt.hardware [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:40:42 compute-0 nova_compute[254092]: 2025-11-25 16:40:42.735 254096 DEBUG nova.virt.hardware [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:40:42 compute-0 nova_compute[254092]: 2025-11-25 16:40:42.736 254096 DEBUG nova.virt.hardware [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:40:42 compute-0 nova_compute[254092]: 2025-11-25 16:40:42.736 254096 DEBUG nova.virt.hardware [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:40:42 compute-0 nova_compute[254092]: 2025-11-25 16:40:42.736 254096 DEBUG nova.virt.hardware [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:40:42 compute-0 nova_compute[254092]: 2025-11-25 16:40:42.737 254096 DEBUG nova.virt.hardware [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:40:42 compute-0 nova_compute[254092]: 2025-11-25 16:40:42.740 254096 DEBUG oslo_concurrency.processutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:40:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:42.740 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[82137892-5adf-44eb-b9b1-9c982bfeea41]: (4, ('Tue Nov 25 04:40:42 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 (a8239858891d44d07be03ebd422bfd16dbcee78681c2987e30b15dd5b0159c20)\na8239858891d44d07be03ebd422bfd16dbcee78681c2987e30b15dd5b0159c20\nTue Nov 25 04:40:42 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 (a8239858891d44d07be03ebd422bfd16dbcee78681c2987e30b15dd5b0159c20)\na8239858891d44d07be03ebd422bfd16dbcee78681c2987e30b15dd5b0159c20\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:42.742 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5b467f97-fc4b-4d67-9368-45f070caf66d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:42.743 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62c0a8be-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:40:42 compute-0 kernel: tap62c0a8be-b0: left promiscuous mode
Nov 25 16:40:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:42.765 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4279717f-c7ac-4d66-92f5-f282a2c66eb6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:42 compute-0 nova_compute[254092]: 2025-11-25 16:40:42.776 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:42.777 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f6b65acb-f292-47af-824e-d81259973119]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:42.778 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[518d9455-27d3-4d78-a16b-146f55ceb935]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:42.797 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e988a641-f4e3-46d5-a861-8c03d907fbef]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 529484, 'reachable_time': 21588, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321115, 'error': None, 'target': 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:42 compute-0 systemd[1]: run-netns-ovnmeta\x2d62c0a8be\x2db828\x2d4765\x2d9cf8\x2df2477a8ef1d9.mount: Deactivated successfully.
Nov 25 16:40:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:42.802 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:40:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:42.802 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[cede02c2-d52d-422a-8efe-394ea929e7a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:43 compute-0 ceph-mon[74985]: pgmap v1659: 321 pgs: 321 active+clean; 180 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 7.0 MiB/s wr, 237 op/s
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.125 254096 DEBUG nova.compute.manager [req-f3d0e5e3-216e-406d-aef7-ade60fd3e266 req-11422a39-03c0-4091-a71d-afa7af1d8c72 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Received event network-vif-unplugged-d79fd017-c7a6-4bfe-8c90-b3295f62f83c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.126 254096 DEBUG oslo_concurrency.lockutils [req-f3d0e5e3-216e-406d-aef7-ade60fd3e266 req-11422a39-03c0-4091-a71d-afa7af1d8c72 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.126 254096 DEBUG oslo_concurrency.lockutils [req-f3d0e5e3-216e-406d-aef7-ade60fd3e266 req-11422a39-03c0-4091-a71d-afa7af1d8c72 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.127 254096 DEBUG oslo_concurrency.lockutils [req-f3d0e5e3-216e-406d-aef7-ade60fd3e266 req-11422a39-03c0-4091-a71d-afa7af1d8c72 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.127 254096 DEBUG nova.compute.manager [req-f3d0e5e3-216e-406d-aef7-ade60fd3e266 req-11422a39-03c0-4091-a71d-afa7af1d8c72 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] No waiting events found dispatching network-vif-unplugged-d79fd017-c7a6-4bfe-8c90-b3295f62f83c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.128 254096 DEBUG nova.compute.manager [req-f3d0e5e3-216e-406d-aef7-ade60fd3e266 req-11422a39-03c0-4091-a71d-afa7af1d8c72 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Received event network-vif-unplugged-d79fd017-c7a6-4bfe-8c90-b3295f62f83c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.128 254096 DEBUG nova.compute.manager [req-f3d0e5e3-216e-406d-aef7-ade60fd3e266 req-11422a39-03c0-4091-a71d-afa7af1d8c72 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Received event network-vif-plugged-d79fd017-c7a6-4bfe-8c90-b3295f62f83c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.128 254096 DEBUG oslo_concurrency.lockutils [req-f3d0e5e3-216e-406d-aef7-ade60fd3e266 req-11422a39-03c0-4091-a71d-afa7af1d8c72 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.129 254096 DEBUG oslo_concurrency.lockutils [req-f3d0e5e3-216e-406d-aef7-ade60fd3e266 req-11422a39-03c0-4091-a71d-afa7af1d8c72 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.129 254096 DEBUG oslo_concurrency.lockutils [req-f3d0e5e3-216e-406d-aef7-ade60fd3e266 req-11422a39-03c0-4091-a71d-afa7af1d8c72 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.129 254096 DEBUG nova.compute.manager [req-f3d0e5e3-216e-406d-aef7-ade60fd3e266 req-11422a39-03c0-4091-a71d-afa7af1d8c72 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] No waiting events found dispatching network-vif-plugged-d79fd017-c7a6-4bfe-8c90-b3295f62f83c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.130 254096 WARNING nova.compute.manager [req-f3d0e5e3-216e-406d-aef7-ade60fd3e266 req-11422a39-03c0-4091-a71d-afa7af1d8c72 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Received unexpected event network-vif-plugged-d79fd017-c7a6-4bfe-8c90-b3295f62f83c for instance with vm_state active and task_state deleting.
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.165 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:40:43 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1356384302' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.215 254096 DEBUG oslo_concurrency.processutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.236 254096 DEBUG nova.storage.rbd_utils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] rbd image bd217c57-20a2-41c4-a969-7a4d94f0c7ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.241 254096 DEBUG oslo_concurrency.processutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.272 254096 DEBUG nova.network.neutron [-] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.289 254096 INFO nova.compute.manager [-] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Took 1.35 seconds to deallocate network for instance.
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.344 254096 DEBUG oslo_concurrency.lockutils [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.345 254096 DEBUG oslo_concurrency.lockutils [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.444 254096 DEBUG oslo_concurrency.processutils [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.488 254096 DEBUG oslo_concurrency.lockutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "fef208e1-3706-4d03-8385-12418e9dc230" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.488 254096 DEBUG oslo_concurrency.lockutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "fef208e1-3706-4d03-8385-12418e9dc230" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.517 254096 DEBUG nova.compute.manager [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:40:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1660: 321 pgs: 321 active+clean; 180 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.4 MiB/s wr, 184 op/s
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.604 254096 DEBUG oslo_concurrency.lockutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.624 254096 DEBUG nova.compute.manager [req-cb40eff7-346c-4228-987d-1f6b57b50b83 req-c2ae2912-9df8-4953-902d-3a450786e182 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Received event network-vif-plugged-9691504b-429d-44e8-bdf5-7f223c5b0527 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.625 254096 DEBUG oslo_concurrency.lockutils [req-cb40eff7-346c-4228-987d-1f6b57b50b83 req-c2ae2912-9df8-4953-902d-3a450786e182 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "32b30534-761a-439a-85e5-4e2fe8f507df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.626 254096 DEBUG oslo_concurrency.lockutils [req-cb40eff7-346c-4228-987d-1f6b57b50b83 req-c2ae2912-9df8-4953-902d-3a450786e182 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "32b30534-761a-439a-85e5-4e2fe8f507df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.627 254096 DEBUG oslo_concurrency.lockutils [req-cb40eff7-346c-4228-987d-1f6b57b50b83 req-c2ae2912-9df8-4953-902d-3a450786e182 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "32b30534-761a-439a-85e5-4e2fe8f507df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.628 254096 DEBUG nova.compute.manager [req-cb40eff7-346c-4228-987d-1f6b57b50b83 req-c2ae2912-9df8-4953-902d-3a450786e182 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] No waiting events found dispatching network-vif-plugged-9691504b-429d-44e8-bdf5-7f223c5b0527 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.629 254096 WARNING nova.compute.manager [req-cb40eff7-346c-4228-987d-1f6b57b50b83 req-c2ae2912-9df8-4953-902d-3a450786e182 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Received unexpected event network-vif-plugged-9691504b-429d-44e8-bdf5-7f223c5b0527 for instance with vm_state active and task_state None.
Nov 25 16:40:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:40:43 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4193363460' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.690 254096 DEBUG oslo_concurrency.processutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.694 254096 DEBUG nova.virt.libvirt.vif [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:40:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-2012655337',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-2012655337',id=60,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8a6a1f7b5bb9482d85239a3b39051837',ramdisk_id='',reservation_id='r-7kghzncd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-694375426',owner_user_name='tempest-InstanceActionsV221TestJSON-694375426-project-memb
er'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:40:35Z,user_data=None,user_id='8a304e40f25749e49b171b1db4828ff1',uuid=bd217c57-20a2-41c4-a969-7a4d94f0c7ce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "341145f6-8319-4c2d-aa9d-9d7475a5e7eb", "address": "fa:16:3e:cc:6c:f3", "network": {"id": "4fd021dd-a32c-4996-b105-78f4369f31fc", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1334163951-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a6a1f7b5bb9482d85239a3b39051837", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341145f6-83", "ovs_interfaceid": "341145f6-8319-4c2d-aa9d-9d7475a5e7eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.695 254096 DEBUG nova.network.os_vif_util [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Converting VIF {"id": "341145f6-8319-4c2d-aa9d-9d7475a5e7eb", "address": "fa:16:3e:cc:6c:f3", "network": {"id": "4fd021dd-a32c-4996-b105-78f4369f31fc", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1334163951-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a6a1f7b5bb9482d85239a3b39051837", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341145f6-83", "ovs_interfaceid": "341145f6-8319-4c2d-aa9d-9d7475a5e7eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.698 254096 DEBUG nova.network.os_vif_util [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:6c:f3,bridge_name='br-int',has_traffic_filtering=True,id=341145f6-8319-4c2d-aa9d-9d7475a5e7eb,network=Network(4fd021dd-a32c-4996-b105-78f4369f31fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap341145f6-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.702 254096 DEBUG nova.objects.instance [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Lazy-loading 'pci_devices' on Instance uuid bd217c57-20a2-41c4-a969-7a4d94f0c7ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.728 254096 DEBUG nova.virt.libvirt.driver [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:40:43 compute-0 nova_compute[254092]:   <uuid>bd217c57-20a2-41c4-a969-7a4d94f0c7ce</uuid>
Nov 25 16:40:43 compute-0 nova_compute[254092]:   <name>instance-0000003c</name>
Nov 25 16:40:43 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:40:43 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:40:43 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:40:43 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:       <nova:name>tempest-InstanceActionsV221TestJSON-server-2012655337</nova:name>
Nov 25 16:40:43 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:40:42</nova:creationTime>
Nov 25 16:40:43 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:40:43 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:40:43 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:40:43 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:40:43 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:40:43 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:40:43 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:40:43 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:40:43 compute-0 nova_compute[254092]:         <nova:user uuid="8a304e40f25749e49b171b1db4828ff1">tempest-InstanceActionsV221TestJSON-694375426-project-member</nova:user>
Nov 25 16:40:43 compute-0 nova_compute[254092]:         <nova:project uuid="8a6a1f7b5bb9482d85239a3b39051837">tempest-InstanceActionsV221TestJSON-694375426</nova:project>
Nov 25 16:40:43 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:40:43 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:40:43 compute-0 nova_compute[254092]:         <nova:port uuid="341145f6-8319-4c2d-aa9d-9d7475a5e7eb">
Nov 25 16:40:43 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:40:43 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:40:43 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:40:43 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:40:43 compute-0 nova_compute[254092]:     <system>
Nov 25 16:40:43 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:40:43 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:40:43 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:40:43 compute-0 nova_compute[254092]:       <entry name="serial">bd217c57-20a2-41c4-a969-7a4d94f0c7ce</entry>
Nov 25 16:40:43 compute-0 nova_compute[254092]:       <entry name="uuid">bd217c57-20a2-41c4-a969-7a4d94f0c7ce</entry>
Nov 25 16:40:43 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     </system>
Nov 25 16:40:43 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:40:43 compute-0 nova_compute[254092]:   <os>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:   </os>
Nov 25 16:40:43 compute-0 nova_compute[254092]:   <features>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:   </features>
Nov 25 16:40:43 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:40:43 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:40:43 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:40:43 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:40:43 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:40:43 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/bd217c57-20a2-41c4-a969-7a4d94f0c7ce_disk">
Nov 25 16:40:43 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:       </source>
Nov 25 16:40:43 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:40:43 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:40:43 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:40:43 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/bd217c57-20a2-41c4-a969-7a4d94f0c7ce_disk.config">
Nov 25 16:40:43 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:       </source>
Nov 25 16:40:43 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:40:43 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:40:43 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:40:43 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:cc:6c:f3"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:       <target dev="tap341145f6-83"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:40:43 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/bd217c57-20a2-41c4-a969-7a4d94f0c7ce/console.log" append="off"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     <video>
Nov 25 16:40:43 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     </video>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:40:43 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:40:43 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:40:43 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:40:43 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:40:43 compute-0 nova_compute[254092]: </domain>
Nov 25 16:40:43 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.731 254096 DEBUG nova.compute.manager [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Preparing to wait for external event network-vif-plugged-341145f6-8319-4c2d-aa9d-9d7475a5e7eb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.731 254096 DEBUG oslo_concurrency.lockutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Acquiring lock "bd217c57-20a2-41c4-a969-7a4d94f0c7ce-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.732 254096 DEBUG oslo_concurrency.lockutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Lock "bd217c57-20a2-41c4-a969-7a4d94f0c7ce-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.732 254096 DEBUG oslo_concurrency.lockutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Lock "bd217c57-20a2-41c4-a969-7a4d94f0c7ce-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.735 254096 DEBUG nova.virt.libvirt.vif [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:40:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-2012655337',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-2012655337',id=60,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8a6a1f7b5bb9482d85239a3b39051837',ramdisk_id='',reservation_id='r-7kghzncd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-694375426',owner_user_name='tempest-InstanceActionsV221TestJSON-694375426-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:40:35Z,user_data=None,user_id='8a304e40f25749e49b171b1db4828ff1',uuid=bd217c57-20a2-41c4-a969-7a4d94f0c7ce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "341145f6-8319-4c2d-aa9d-9d7475a5e7eb", "address": "fa:16:3e:cc:6c:f3", "network": {"id": "4fd021dd-a32c-4996-b105-78f4369f31fc", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1334163951-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a6a1f7b5bb9482d85239a3b39051837", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341145f6-83", "ovs_interfaceid": "341145f6-8319-4c2d-aa9d-9d7475a5e7eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.736 254096 DEBUG nova.network.os_vif_util [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Converting VIF {"id": "341145f6-8319-4c2d-aa9d-9d7475a5e7eb", "address": "fa:16:3e:cc:6c:f3", "network": {"id": "4fd021dd-a32c-4996-b105-78f4369f31fc", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1334163951-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a6a1f7b5bb9482d85239a3b39051837", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341145f6-83", "ovs_interfaceid": "341145f6-8319-4c2d-aa9d-9d7475a5e7eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.738 254096 DEBUG nova.network.os_vif_util [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:6c:f3,bridge_name='br-int',has_traffic_filtering=True,id=341145f6-8319-4c2d-aa9d-9d7475a5e7eb,network=Network(4fd021dd-a32c-4996-b105-78f4369f31fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap341145f6-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.739 254096 DEBUG os_vif [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:6c:f3,bridge_name='br-int',has_traffic_filtering=True,id=341145f6-8319-4c2d-aa9d-9d7475a5e7eb,network=Network(4fd021dd-a32c-4996-b105-78f4369f31fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap341145f6-83') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.741 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.742 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.744 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.749 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.750 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap341145f6-83, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.751 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap341145f6-83, col_values=(('external_ids', {'iface-id': '341145f6-8319-4c2d-aa9d-9d7475a5e7eb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cc:6c:f3', 'vm-uuid': 'bd217c57-20a2-41c4-a969-7a4d94f0c7ce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:40:43 compute-0 NetworkManager[48891]: <info>  [1764088843.7558] manager: (tap341145f6-83): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/240)
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.755 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.760 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.762 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.764 254096 INFO os_vif [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:6c:f3,bridge_name='br-int',has_traffic_filtering=True,id=341145f6-8319-4c2d-aa9d-9d7475a5e7eb,network=Network(4fd021dd-a32c-4996-b105-78f4369f31fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap341145f6-83')
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.817 254096 DEBUG nova.virt.libvirt.driver [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.818 254096 DEBUG nova.virt.libvirt.driver [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.819 254096 DEBUG nova.virt.libvirt.driver [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] No VIF found with MAC fa:16:3e:cc:6c:f3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.820 254096 INFO nova.virt.libvirt.driver [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Using config drive
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.857 254096 DEBUG nova.storage.rbd_utils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] rbd image bd217c57-20a2-41c4-a969-7a4d94f0c7ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:40:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:40:43 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3026401179' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.896 254096 DEBUG oslo_concurrency.processutils [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.902 254096 DEBUG nova.compute.provider_tree [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.915 254096 DEBUG nova.scheduler.client.report [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.935 254096 DEBUG oslo_concurrency.lockutils [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.937 254096 DEBUG oslo_concurrency.lockutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.333s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.942 254096 DEBUG nova.virt.hardware [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.942 254096 INFO nova.compute.claims [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:40:43 compute-0 nova_compute[254092]: 2025-11-25 16:40:43.979 254096 INFO nova.scheduler.client.report [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Deleted allocations for instance d1ceaafd-59a6-45b1-833d-eb2a76e789be
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.037 254096 DEBUG oslo_concurrency.lockutils [None req-bc9aff56-bb2e-49bc-ad91-b881f45033fe 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "d1ceaafd-59a6-45b1-833d-eb2a76e789be" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.967s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.083 254096 DEBUG oslo_concurrency.processutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:40:44 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1356384302' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:40:44 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4193363460' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:40:44 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3026401179' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.146 254096 DEBUG nova.network.neutron [req-1f2b69b8-b670-4db2-b598-846d1b99974d req-e566986b-0775-45fb-b6ca-16c145b322a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Updated VIF entry in instance network info cache for port 341145f6-8319-4c2d-aa9d-9d7475a5e7eb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.146 254096 DEBUG nova.network.neutron [req-1f2b69b8-b670-4db2-b598-846d1b99974d req-e566986b-0775-45fb-b6ca-16c145b322a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Updating instance_info_cache with network_info: [{"id": "341145f6-8319-4c2d-aa9d-9d7475a5e7eb", "address": "fa:16:3e:cc:6c:f3", "network": {"id": "4fd021dd-a32c-4996-b105-78f4369f31fc", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1334163951-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a6a1f7b5bb9482d85239a3b39051837", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341145f6-83", "ovs_interfaceid": "341145f6-8319-4c2d-aa9d-9d7475a5e7eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.159 254096 DEBUG oslo_concurrency.lockutils [req-1f2b69b8-b670-4db2-b598-846d1b99974d req-e566986b-0775-45fb-b6ca-16c145b322a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-bd217c57-20a2-41c4-a969-7a4d94f0c7ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.268 254096 INFO nova.virt.libvirt.driver [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Creating config drive at /var/lib/nova/instances/bd217c57-20a2-41c4-a969-7a4d94f0c7ce/disk.config
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.274 254096 DEBUG oslo_concurrency.processutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bd217c57-20a2-41c4-a969-7a4d94f0c7ce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl4tpxgaj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.419 254096 DEBUG oslo_concurrency.processutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bd217c57-20a2-41c4-a969-7a4d94f0c7ce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl4tpxgaj" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.439 254096 DEBUG nova.storage.rbd_utils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] rbd image bd217c57-20a2-41c4-a969-7a4d94f0c7ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.442 254096 DEBUG oslo_concurrency.processutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bd217c57-20a2-41c4-a969-7a4d94f0c7ce/disk.config bd217c57-20a2-41c4-a969-7a4d94f0c7ce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:40:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:40:44 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3253537573' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.547 254096 DEBUG oslo_concurrency.processutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.556 254096 DEBUG nova.compute.provider_tree [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.572 254096 DEBUG nova.scheduler.client.report [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.584 254096 DEBUG oslo_concurrency.processutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bd217c57-20a2-41c4-a969-7a4d94f0c7ce/disk.config bd217c57-20a2-41c4-a969-7a4d94f0c7ce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.585 254096 INFO nova.virt.libvirt.driver [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Deleting local config drive /var/lib/nova/instances/bd217c57-20a2-41c4-a969-7a4d94f0c7ce/disk.config because it was imported into RBD.
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.611 254096 DEBUG oslo_concurrency.lockutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.612 254096 DEBUG nova.compute.manager [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.614 254096 DEBUG oslo_concurrency.lockutils [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Acquiring lock "32b30534-761a-439a-85e5-4e2fe8f507df" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.614 254096 DEBUG oslo_concurrency.lockutils [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Lock "32b30534-761a-439a-85e5-4e2fe8f507df" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.614 254096 DEBUG oslo_concurrency.lockutils [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Acquiring lock "32b30534-761a-439a-85e5-4e2fe8f507df-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.615 254096 DEBUG oslo_concurrency.lockutils [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Lock "32b30534-761a-439a-85e5-4e2fe8f507df-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.615 254096 DEBUG oslo_concurrency.lockutils [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Lock "32b30534-761a-439a-85e5-4e2fe8f507df-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.616 254096 INFO nova.compute.manager [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Terminating instance
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.617 254096 DEBUG nova.compute.manager [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:40:44 compute-0 kernel: tap341145f6-83: entered promiscuous mode
Nov 25 16:40:44 compute-0 NetworkManager[48891]: <info>  [1764088844.6297] manager: (tap341145f6-83): new Tun device (/org/freedesktop/NetworkManager/Devices/241)
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.635 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:44 compute-0 ovn_controller[153477]: 2025-11-25T16:40:44Z|00555|binding|INFO|Claiming lport 341145f6-8319-4c2d-aa9d-9d7475a5e7eb for this chassis.
Nov 25 16:40:44 compute-0 ovn_controller[153477]: 2025-11-25T16:40:44Z|00556|binding|INFO|341145f6-8319-4c2d-aa9d-9d7475a5e7eb: Claiming fa:16:3e:cc:6c:f3 10.100.0.10
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.637 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.651 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:6c:f3 10.100.0.10'], port_security=['fa:16:3e:cc:6c:f3 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'bd217c57-20a2-41c4-a969-7a4d94f0c7ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4fd021dd-a32c-4996-b105-78f4369f31fc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8a6a1f7b5bb9482d85239a3b39051837', 'neutron:revision_number': '2', 'neutron:security_group_ids': '06e61d98-1ba5-4078-aa67-4e1101d6ea8d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0771bd5-9169-46dc-85a6-0c2b379eeaa2, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=341145f6-8319-4c2d-aa9d-9d7475a5e7eb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.652 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 341145f6-8319-4c2d-aa9d-9d7475a5e7eb in datapath 4fd021dd-a32c-4996-b105-78f4369f31fc bound to our chassis
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.653 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4fd021dd-a32c-4996-b105-78f4369f31fc
Nov 25 16:40:44 compute-0 systemd-udevd[321295]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:40:44 compute-0 systemd-machined[216343]: New machine qemu-70-instance-0000003c.
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.664 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8a1d5240-00d4-48b3-9578-3adfe742bbe7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.665 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4fd021dd-a1 in ovnmeta-4fd021dd-a32c-4996-b105-78f4369f31fc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.667 254096 DEBUG nova.compute.manager [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.668 254096 DEBUG nova.network.neutron [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.669 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4fd021dd-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.669 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[39973123-1d8d-47d6-89a7-33da9b55ef25]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.670 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3c2e5b41-151b-429b-9619-e05a15ad3fde]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:44 compute-0 kernel: tap9691504b-42 (unregistering): left promiscuous mode
Nov 25 16:40:44 compute-0 systemd[1]: Started Virtual Machine qemu-70-instance-0000003c.
Nov 25 16:40:44 compute-0 NetworkManager[48891]: <info>  [1764088844.6778] device (tap9691504b-42): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:40:44 compute-0 NetworkManager[48891]: <info>  [1764088844.6786] device (tap341145f6-83): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:40:44 compute-0 NetworkManager[48891]: <info>  [1764088844.6791] device (tap341145f6-83): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.681 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[4b6feb9c-0b69-4059-ab51-70a1b3707514]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.687 254096 INFO nova.virt.libvirt.driver [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.708 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[eb6319db-d7fe-486d-9063-d5f4cec6c87e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.713 254096 DEBUG nova.compute.manager [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.716 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.724 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:44 compute-0 ovn_controller[153477]: 2025-11-25T16:40:44Z|00557|binding|INFO|Releasing lport 9691504b-429d-44e8-bdf5-7f223c5b0527 from this chassis (sb_readonly=0)
Nov 25 16:40:44 compute-0 ovn_controller[153477]: 2025-11-25T16:40:44Z|00558|binding|INFO|Setting lport 9691504b-429d-44e8-bdf5-7f223c5b0527 down in Southbound
Nov 25 16:40:44 compute-0 ovn_controller[153477]: 2025-11-25T16:40:44Z|00559|binding|INFO|Removing iface tap9691504b-42 ovn-installed in OVS
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.726 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.731 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.736 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:ec:f2 10.100.0.11'], port_security=['fa:16:3e:f4:ec:f2 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '32b30534-761a-439a-85e5-4e2fe8f507df', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-73476371-a7cf-4563-aeaf-a32b30db040e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1ea58a9bae9c474bb9f0b9c821689054', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a64acc35-5bf2-40e8-88f9-5321cc37448d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f94966a1-d8f7-4eaa-bd8c-6d046b12f822, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=9691504b-429d-44e8-bdf5-7f223c5b0527) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.739 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ccfcac93-d3cc-4625-a586-2b64ffe34320]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:44 compute-0 ovn_controller[153477]: 2025-11-25T16:40:44Z|00560|binding|INFO|Setting lport 341145f6-8319-4c2d-aa9d-9d7475a5e7eb up in Southbound
Nov 25 16:40:44 compute-0 ovn_controller[153477]: 2025-11-25T16:40:44Z|00561|binding|INFO|Setting lport 341145f6-8319-4c2d-aa9d-9d7475a5e7eb ovn-installed in OVS
Nov 25 16:40:44 compute-0 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000003b.scope: Deactivated successfully.
Nov 25 16:40:44 compute-0 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000003b.scope: Consumed 3.442s CPU time.
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.747 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[749a39c2-c375-456e-85ea-50bd4f76d47c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.745 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:44 compute-0 NetworkManager[48891]: <info>  [1764088844.7487] manager: (tap4fd021dd-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/242)
Nov 25 16:40:44 compute-0 systemd-udevd[321298]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:40:44 compute-0 systemd-machined[216343]: Machine qemu-69-instance-0000003b terminated.
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.778 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[fc9b4402-3f0a-4fd6-ba1b-ca6fa26aa2ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.781 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[71833f51-72fd-4913-b541-580357030091]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:44 compute-0 NetworkManager[48891]: <info>  [1764088844.8020] device (tap4fd021dd-a0): carrier: link connected
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.808 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e390fc4d-67e1-47dc-864a-100d7eadd9cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.821 254096 DEBUG nova.compute.manager [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.823 254096 DEBUG nova.virt.libvirt.driver [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.823 254096 INFO nova.virt.libvirt.driver [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Creating image(s)
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.824 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4fb8b512-559f-4d59-a3bf-af3b3a1e885a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4fd021dd-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:61:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 160], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 530239, 'reachable_time': 31722, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321333, 'error': None, 'target': 'ovnmeta-4fd021dd-a32c-4996-b105-78f4369f31fc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:44 compute-0 NetworkManager[48891]: <info>  [1764088844.8350] manager: (tap9691504b-42): new Tun device (/org/freedesktop/NetworkManager/Devices/243)
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.841 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f7b3d64f-1b3e-4c17-b167-25ff994e5708]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe11:6112'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 530239, 'tstamp': 530239}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321337, 'error': None, 'target': 'ovnmeta-4fd021dd-a32c-4996-b105-78f4369f31fc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.850 254096 DEBUG nova.storage.rbd_utils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image fef208e1-3706-4d03-8385-12418e9dc230_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.861 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[be1a7744-fc8e-4eb1-b82c-65b810782765]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4fd021dd-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:61:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 160], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 530239, 'reachable_time': 31722, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 321359, 'error': None, 'target': 'ovnmeta-4fd021dd-a32c-4996-b105-78f4369f31fc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.881 254096 DEBUG nova.storage.rbd_utils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image fef208e1-3706-4d03-8385-12418e9dc230_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.889 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5a8593a7-0591-4d86-a55b-c42fcb72e9f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.907 254096 DEBUG nova.storage.rbd_utils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image fef208e1-3706-4d03-8385-12418e9dc230_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.911 254096 DEBUG oslo_concurrency.processutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.945 254096 DEBUG nova.policy [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3c1fd56de7cd4f5c9b1d85ffe8545c90', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6ff6b3c59785407bb06c9d7e3969ea4f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.946 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[48b5e09a-5251-48c8-8955-c4fc5a6fa77b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.947 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4fd021dd-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.948 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.948 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4fd021dd-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:40:44 compute-0 NetworkManager[48891]: <info>  [1764088844.9508] manager: (tap4fd021dd-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/244)
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.950 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:44 compute-0 kernel: tap4fd021dd-a0: entered promiscuous mode
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.956 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.958 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4fd021dd-a0, col_values=(('external_ids', {'iface-id': 'a9d87038-3e5b-43cb-8024-6dad7e852af0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.959 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:44 compute-0 ovn_controller[153477]: 2025-11-25T16:40:44Z|00562|binding|INFO|Releasing lport a9d87038-3e5b-43cb-8024-6dad7e852af0 from this chassis (sb_readonly=0)
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.961 254096 INFO nova.virt.libvirt.driver [-] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Instance destroyed successfully.
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.961 254096 DEBUG nova.objects.instance [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Lazy-loading 'resources' on Instance uuid 32b30534-761a-439a-85e5-4e2fe8f507df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.976 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.981 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.982 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4fd021dd-a32c-4996-b105-78f4369f31fc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4fd021dd-a32c-4996-b105-78f4369f31fc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.982 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[143a875e-1f55-45cf-8c53-17cfb22b59e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.983 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-4fd021dd-a32c-4996-b105-78f4369f31fc
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/4fd021dd-a32c-4996-b105-78f4369f31fc.pid.haproxy
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 4fd021dd-a32c-4996-b105-78f4369f31fc
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:40:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:44.985 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4fd021dd-a32c-4996-b105-78f4369f31fc', 'env', 'PROCESS_TAG=haproxy-4fd021dd-a32c-4996-b105-78f4369f31fc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4fd021dd-a32c-4996-b105-78f4369f31fc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.985 254096 DEBUG nova.virt.libvirt.vif [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:40:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-849851407',display_name='tempest-InstanceActionsNegativeTestJSON-server-849851407',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-849851407',id=59,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:40:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1ea58a9bae9c474bb9f0b9c821689054',ramdisk_id='',reservation_id='r-umqrt77x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsNegativeTestJSON-1833085706',owner_user_name='tempest-InstanceActionsNegativeTestJSON-1833085706-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:40:41Z,user_data=None,user_id='509d158fe3f34e219f96739bb51bd6d9',uuid=32b30534-761a-439a-85e5-4e2fe8f507df,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9691504b-429d-44e8-bdf5-7f223c5b0527", "address": "fa:16:3e:f4:ec:f2", "network": {"id": "73476371-a7cf-4563-aeaf-a32b30db040e", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-535081531-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ea58a9bae9c474bb9f0b9c821689054", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9691504b-42", "ovs_interfaceid": "9691504b-429d-44e8-bdf5-7f223c5b0527", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.985 254096 DEBUG nova.network.os_vif_util [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Converting VIF {"id": "9691504b-429d-44e8-bdf5-7f223c5b0527", "address": "fa:16:3e:f4:ec:f2", "network": {"id": "73476371-a7cf-4563-aeaf-a32b30db040e", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-535081531-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ea58a9bae9c474bb9f0b9c821689054", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9691504b-42", "ovs_interfaceid": "9691504b-429d-44e8-bdf5-7f223c5b0527", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.986 254096 DEBUG nova.network.os_vif_util [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:ec:f2,bridge_name='br-int',has_traffic_filtering=True,id=9691504b-429d-44e8-bdf5-7f223c5b0527,network=Network(73476371-a7cf-4563-aeaf-a32b30db040e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9691504b-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.987 254096 DEBUG os_vif [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:ec:f2,bridge_name='br-int',has_traffic_filtering=True,id=9691504b-429d-44e8-bdf5-7f223c5b0527,network=Network(73476371-a7cf-4563-aeaf-a32b30db040e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9691504b-42') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.988 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.989 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9691504b-42, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.990 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.991 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:44 compute-0 nova_compute[254092]: 2025-11-25 16:40:44.993 254096 INFO os_vif [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:ec:f2,bridge_name='br-int',has_traffic_filtering=True,id=9691504b-429d-44e8-bdf5-7f223c5b0527,network=Network(73476371-a7cf-4563-aeaf-a32b30db040e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9691504b-42')
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.012 254096 DEBUG oslo_concurrency.processutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.012 254096 DEBUG oslo_concurrency.lockutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.013 254096 DEBUG oslo_concurrency.lockutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.013 254096 DEBUG oslo_concurrency.lockutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.037 254096 DEBUG nova.storage.rbd_utils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image fef208e1-3706-4d03-8385-12418e9dc230_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.041 254096 DEBUG oslo_concurrency.processutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 fef208e1-3706-4d03-8385-12418e9dc230_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.074 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088845.0274746, bd217c57-20a2-41c4-a969-7a4d94f0c7ce => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.075 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] VM Started (Lifecycle Event)
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.100 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:40:45 compute-0 ceph-mon[74985]: pgmap v1660: 321 pgs: 321 active+clean; 180 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.4 MiB/s wr, 184 op/s
Nov 25 16:40:45 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3253537573' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.104 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088845.0275977, bd217c57-20a2-41c4-a969-7a4d94f0c7ce => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.105 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] VM Paused (Lifecycle Event)
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.129 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.133 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.162 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.228 254096 DEBUG nova.compute.manager [req-cd27c004-0259-434b-98d8-65b67804829d req-c0ed70bc-f04c-40cc-af05-d54a865e0330 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Received event network-vif-deleted-d79fd017-c7a6-4bfe-8c90-b3295f62f83c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.229 254096 DEBUG nova.compute.manager [req-cd27c004-0259-434b-98d8-65b67804829d req-c0ed70bc-f04c-40cc-af05-d54a865e0330 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Received event network-vif-plugged-341145f6-8319-4c2d-aa9d-9d7475a5e7eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.229 254096 DEBUG oslo_concurrency.lockutils [req-cd27c004-0259-434b-98d8-65b67804829d req-c0ed70bc-f04c-40cc-af05-d54a865e0330 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "bd217c57-20a2-41c4-a969-7a4d94f0c7ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.229 254096 DEBUG oslo_concurrency.lockutils [req-cd27c004-0259-434b-98d8-65b67804829d req-c0ed70bc-f04c-40cc-af05-d54a865e0330 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "bd217c57-20a2-41c4-a969-7a4d94f0c7ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.230 254096 DEBUG oslo_concurrency.lockutils [req-cd27c004-0259-434b-98d8-65b67804829d req-c0ed70bc-f04c-40cc-af05-d54a865e0330 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "bd217c57-20a2-41c4-a969-7a4d94f0c7ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.230 254096 DEBUG nova.compute.manager [req-cd27c004-0259-434b-98d8-65b67804829d req-c0ed70bc-f04c-40cc-af05-d54a865e0330 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Processing event network-vif-plugged-341145f6-8319-4c2d-aa9d-9d7475a5e7eb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.231 254096 DEBUG nova.compute.manager [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.245 254096 DEBUG nova.virt.libvirt.driver [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.246 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088845.245339, bd217c57-20a2-41c4-a969-7a4d94f0c7ce => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.247 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] VM Resumed (Lifecycle Event)
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.252 254096 INFO nova.virt.libvirt.driver [-] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Instance spawned successfully.
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.252 254096 DEBUG nova.virt.libvirt.driver [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.272 254096 DEBUG nova.virt.libvirt.driver [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.272 254096 DEBUG nova.virt.libvirt.driver [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.273 254096 DEBUG nova.virt.libvirt.driver [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.273 254096 DEBUG nova.virt.libvirt.driver [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.273 254096 DEBUG nova.virt.libvirt.driver [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.274 254096 DEBUG nova.virt.libvirt.driver [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.277 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.281 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.299 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.348 254096 INFO nova.compute.manager [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Took 9.80 seconds to spawn the instance on the hypervisor.
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.349 254096 DEBUG nova.compute.manager [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:40:45 compute-0 podman[321533]: 2025-11-25 16:40:45.405033536 +0000 UTC m=+0.071007429 container create 0219b4e6abb03a7e935022b0711d0dfa24f40c782c1a8506240a68cc61cefd62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4fd021dd-a32c-4996-b105-78f4369f31fc, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.413 254096 INFO nova.compute.manager [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Took 10.74 seconds to build instance.
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.434 254096 DEBUG oslo_concurrency.lockutils [None req-8dd45246-5858-414c-bf16-e2b6b46fd5c7 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Lock "bd217c57-20a2-41c4-a969-7a4d94f0c7ce" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.863s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:45 compute-0 systemd[1]: Started libpod-conmon-0219b4e6abb03a7e935022b0711d0dfa24f40c782c1a8506240a68cc61cefd62.scope.
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.448 254096 DEBUG oslo_concurrency.processutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 fef208e1-3706-4d03-8385-12418e9dc230_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:40:45 compute-0 podman[321533]: 2025-11-25 16:40:45.357877766 +0000 UTC m=+0.023851689 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:40:45 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:40:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9a24bbb3b2d46df9d259f5d274fdace0aea75909df6f8ee405b5020d775c516/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:40:45 compute-0 podman[321533]: 2025-11-25 16:40:45.498949103 +0000 UTC m=+0.164923026 container init 0219b4e6abb03a7e935022b0711d0dfa24f40c782c1a8506240a68cc61cefd62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4fd021dd-a32c-4996-b105-78f4369f31fc, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 16:40:45 compute-0 podman[321533]: 2025-11-25 16:40:45.50658254 +0000 UTC m=+0.172556433 container start 0219b4e6abb03a7e935022b0711d0dfa24f40c782c1a8506240a68cc61cefd62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4fd021dd-a32c-4996-b105-78f4369f31fc, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 16:40:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1661: 321 pgs: 321 active+clean; 156 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 5.4 MiB/s wr, 264 op/s
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.536 254096 DEBUG nova.storage.rbd_utils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] resizing rbd image fef208e1-3706-4d03-8385-12418e9dc230_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:40:45 compute-0 neutron-haproxy-ovnmeta-4fd021dd-a32c-4996-b105-78f4369f31fc[321549]: [NOTICE]   (321586) : New worker (321591) forked
Nov 25 16:40:45 compute-0 neutron-haproxy-ovnmeta-4fd021dd-a32c-4996-b105-78f4369f31fc[321549]: [NOTICE]   (321586) : Loading success.
Nov 25 16:40:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:45.574 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 9691504b-429d-44e8-bdf5-7f223c5b0527 in datapath 73476371-a7cf-4563-aeaf-a32b30db040e unbound from our chassis
Nov 25 16:40:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:45.577 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 73476371-a7cf-4563-aeaf-a32b30db040e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:40:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:45.578 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[aeec7b8e-e870-4acf-9ffe-387e54275a21]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:45.579 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-73476371-a7cf-4563-aeaf-a32b30db040e namespace which is not needed anymore
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.644 254096 DEBUG nova.objects.instance [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'migration_context' on Instance uuid fef208e1-3706-4d03-8385-12418e9dc230 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.666 254096 INFO nova.virt.libvirt.driver [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Deleting instance files /var/lib/nova/instances/32b30534-761a-439a-85e5-4e2fe8f507df_del
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.667 254096 INFO nova.virt.libvirt.driver [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Deletion of /var/lib/nova/instances/32b30534-761a-439a-85e5-4e2fe8f507df_del complete
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.671 254096 DEBUG nova.virt.libvirt.driver [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.671 254096 DEBUG nova.virt.libvirt.driver [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Ensure instance console log exists: /var/lib/nova/instances/fef208e1-3706-4d03-8385-12418e9dc230/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.672 254096 DEBUG oslo_concurrency.lockutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.673 254096 DEBUG oslo_concurrency.lockutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.674 254096 DEBUG oslo_concurrency.lockutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:40:45 compute-0 neutron-haproxy-ovnmeta-73476371-a7cf-4563-aeaf-a32b30db040e[321039]: [NOTICE]   (321043) : haproxy version is 2.8.14-c23fe91
Nov 25 16:40:45 compute-0 neutron-haproxy-ovnmeta-73476371-a7cf-4563-aeaf-a32b30db040e[321039]: [NOTICE]   (321043) : path to executable is /usr/sbin/haproxy
Nov 25 16:40:45 compute-0 neutron-haproxy-ovnmeta-73476371-a7cf-4563-aeaf-a32b30db040e[321039]: [WARNING]  (321043) : Exiting Master process...
Nov 25 16:40:45 compute-0 neutron-haproxy-ovnmeta-73476371-a7cf-4563-aeaf-a32b30db040e[321039]: [ALERT]    (321043) : Current worker (321045) exited with code 143 (Terminated)
Nov 25 16:40:45 compute-0 neutron-haproxy-ovnmeta-73476371-a7cf-4563-aeaf-a32b30db040e[321039]: [WARNING]  (321043) : All workers exited. Exiting... (0)
Nov 25 16:40:45 compute-0 systemd[1]: libpod-5b4352167d58119796965df3af8ebf8e85aeb97867b4b3d5f0abee23e2e41e27.scope: Deactivated successfully.
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.733 254096 INFO nova.compute.manager [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Took 1.12 seconds to destroy the instance on the hypervisor.
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.733 254096 DEBUG oslo.service.loopingcall [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.734 254096 DEBUG nova.compute.manager [-] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.734 254096 DEBUG nova.network.neutron [-] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.738 254096 DEBUG nova.compute.manager [req-3ddf7be5-3fcd-4720-b82f-c6f74955e24a req-b91b678e-0a1c-468c-ba25-dd84249aa10a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Received event network-vif-unplugged-9691504b-429d-44e8-bdf5-7f223c5b0527 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.738 254096 DEBUG oslo_concurrency.lockutils [req-3ddf7be5-3fcd-4720-b82f-c6f74955e24a req-b91b678e-0a1c-468c-ba25-dd84249aa10a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "32b30534-761a-439a-85e5-4e2fe8f507df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.739 254096 DEBUG oslo_concurrency.lockutils [req-3ddf7be5-3fcd-4720-b82f-c6f74955e24a req-b91b678e-0a1c-468c-ba25-dd84249aa10a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "32b30534-761a-439a-85e5-4e2fe8f507df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.739 254096 DEBUG oslo_concurrency.lockutils [req-3ddf7be5-3fcd-4720-b82f-c6f74955e24a req-b91b678e-0a1c-468c-ba25-dd84249aa10a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "32b30534-761a-439a-85e5-4e2fe8f507df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.739 254096 DEBUG nova.compute.manager [req-3ddf7be5-3fcd-4720-b82f-c6f74955e24a req-b91b678e-0a1c-468c-ba25-dd84249aa10a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] No waiting events found dispatching network-vif-unplugged-9691504b-429d-44e8-bdf5-7f223c5b0527 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.739 254096 DEBUG nova.compute.manager [req-3ddf7be5-3fcd-4720-b82f-c6f74955e24a req-b91b678e-0a1c-468c-ba25-dd84249aa10a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Received event network-vif-unplugged-9691504b-429d-44e8-bdf5-7f223c5b0527 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.739 254096 DEBUG nova.compute.manager [req-3ddf7be5-3fcd-4720-b82f-c6f74955e24a req-b91b678e-0a1c-468c-ba25-dd84249aa10a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Received event network-vif-plugged-9691504b-429d-44e8-bdf5-7f223c5b0527 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.740 254096 DEBUG oslo_concurrency.lockutils [req-3ddf7be5-3fcd-4720-b82f-c6f74955e24a req-b91b678e-0a1c-468c-ba25-dd84249aa10a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "32b30534-761a-439a-85e5-4e2fe8f507df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.740 254096 DEBUG oslo_concurrency.lockutils [req-3ddf7be5-3fcd-4720-b82f-c6f74955e24a req-b91b678e-0a1c-468c-ba25-dd84249aa10a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "32b30534-761a-439a-85e5-4e2fe8f507df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.740 254096 DEBUG oslo_concurrency.lockutils [req-3ddf7be5-3fcd-4720-b82f-c6f74955e24a req-b91b678e-0a1c-468c-ba25-dd84249aa10a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "32b30534-761a-439a-85e5-4e2fe8f507df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.740 254096 DEBUG nova.compute.manager [req-3ddf7be5-3fcd-4720-b82f-c6f74955e24a req-b91b678e-0a1c-468c-ba25-dd84249aa10a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] No waiting events found dispatching network-vif-plugged-9691504b-429d-44e8-bdf5-7f223c5b0527 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.740 254096 WARNING nova.compute.manager [req-3ddf7be5-3fcd-4720-b82f-c6f74955e24a req-b91b678e-0a1c-468c-ba25-dd84249aa10a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Received unexpected event network-vif-plugged-9691504b-429d-44e8-bdf5-7f223c5b0527 for instance with vm_state active and task_state deleting.
Nov 25 16:40:45 compute-0 podman[321653]: 2025-11-25 16:40:45.742708966 +0000 UTC m=+0.048912848 container died 5b4352167d58119796965df3af8ebf8e85aeb97867b4b3d5f0abee23e2e41e27 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-73476371-a7cf-4563-aeaf-a32b30db040e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 16:40:45 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5b4352167d58119796965df3af8ebf8e85aeb97867b4b3d5f0abee23e2e41e27-userdata-shm.mount: Deactivated successfully.
Nov 25 16:40:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-a438c9251a188e84b6838f46344ecd1bfc883a1c4248292af4a79de88d209154-merged.mount: Deactivated successfully.
Nov 25 16:40:45 compute-0 podman[321653]: 2025-11-25 16:40:45.78671126 +0000 UTC m=+0.092915122 container cleanup 5b4352167d58119796965df3af8ebf8e85aeb97867b4b3d5f0abee23e2e41e27 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-73476371-a7cf-4563-aeaf-a32b30db040e, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 16:40:45 compute-0 systemd[1]: libpod-conmon-5b4352167d58119796965df3af8ebf8e85aeb97867b4b3d5f0abee23e2e41e27.scope: Deactivated successfully.
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.807 254096 DEBUG nova.network.neutron [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Successfully created port: 60897ca4-9177-413c-b0f0-808dbc7d34dc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:40:45 compute-0 podman[321681]: 2025-11-25 16:40:45.857668025 +0000 UTC m=+0.047090689 container remove 5b4352167d58119796965df3af8ebf8e85aeb97867b4b3d5f0abee23e2e41e27 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-73476371-a7cf-4563-aeaf-a32b30db040e, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 25 16:40:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:45.868 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4f51c9b3-5178-41d5-8334-b7443cae5acf]: (4, ('Tue Nov 25 04:40:45 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-73476371-a7cf-4563-aeaf-a32b30db040e (5b4352167d58119796965df3af8ebf8e85aeb97867b4b3d5f0abee23e2e41e27)\n5b4352167d58119796965df3af8ebf8e85aeb97867b4b3d5f0abee23e2e41e27\nTue Nov 25 04:40:45 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-73476371-a7cf-4563-aeaf-a32b30db040e (5b4352167d58119796965df3af8ebf8e85aeb97867b4b3d5f0abee23e2e41e27)\n5b4352167d58119796965df3af8ebf8e85aeb97867b4b3d5f0abee23e2e41e27\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:45.870 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[718b0d64-8572-43cd-82e4-7e7ff9e44be2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:45.871 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap73476371-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.873 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:45 compute-0 kernel: tap73476371-a0: left promiscuous mode
Nov 25 16:40:45 compute-0 nova_compute[254092]: 2025-11-25 16:40:45.890 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:45.892 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[837ce464-687c-4e32-99ab-033198b23cca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:45.906 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cee1b144-e3fd-42fc-b260-c0054e7637ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:45.907 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f74c302b-821e-4191-8bd0-a1442f0ad3a8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:45.924 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d5a28bf8-ca95-458b-beea-263529f6ce3c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 529878, 'reachable_time': 25319, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321697, 'error': None, 'target': 'ovnmeta-73476371-a7cf-4563-aeaf-a32b30db040e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:45 compute-0 systemd[1]: run-netns-ovnmeta\x2d73476371\x2da7cf\x2d4563\x2daeaf\x2da32b30db040e.mount: Deactivated successfully.
Nov 25 16:40:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:45.929 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-73476371-a7cf-4563-aeaf-a32b30db040e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:40:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:45.929 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[b734c298-66e7-484b-b7a7-106f4ba25244]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:46 compute-0 ceph-mon[74985]: pgmap v1661: 321 pgs: 321 active+clean; 156 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 5.4 MiB/s wr, 264 op/s
Nov 25 16:40:46 compute-0 nova_compute[254092]: 2025-11-25 16:40:46.617 254096 DEBUG nova.network.neutron [-] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:40:46 compute-0 nova_compute[254092]: 2025-11-25 16:40:46.630 254096 INFO nova.compute.manager [-] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Took 0.90 seconds to deallocate network for instance.
Nov 25 16:40:46 compute-0 nova_compute[254092]: 2025-11-25 16:40:46.671 254096 DEBUG oslo_concurrency.lockutils [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:46 compute-0 nova_compute[254092]: 2025-11-25 16:40:46.671 254096 DEBUG oslo_concurrency.lockutils [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:46 compute-0 nova_compute[254092]: 2025-11-25 16:40:46.686 254096 DEBUG nova.network.neutron [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Successfully updated port: 60897ca4-9177-413c-b0f0-808dbc7d34dc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:40:46 compute-0 nova_compute[254092]: 2025-11-25 16:40:46.696 254096 DEBUG oslo_concurrency.lockutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "refresh_cache-fef208e1-3706-4d03-8385-12418e9dc230" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:40:46 compute-0 nova_compute[254092]: 2025-11-25 16:40:46.696 254096 DEBUG oslo_concurrency.lockutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquired lock "refresh_cache-fef208e1-3706-4d03-8385-12418e9dc230" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:40:46 compute-0 nova_compute[254092]: 2025-11-25 16:40:46.696 254096 DEBUG nova.network.neutron [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:40:46 compute-0 nova_compute[254092]: 2025-11-25 16:40:46.748 254096 DEBUG oslo_concurrency.lockutils [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Acquiring lock "bd217c57-20a2-41c4-a969-7a4d94f0c7ce" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:46 compute-0 nova_compute[254092]: 2025-11-25 16:40:46.749 254096 DEBUG oslo_concurrency.lockutils [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Lock "bd217c57-20a2-41c4-a969-7a4d94f0c7ce" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:46 compute-0 nova_compute[254092]: 2025-11-25 16:40:46.749 254096 DEBUG oslo_concurrency.lockutils [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Acquiring lock "bd217c57-20a2-41c4-a969-7a4d94f0c7ce-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:46 compute-0 nova_compute[254092]: 2025-11-25 16:40:46.749 254096 DEBUG oslo_concurrency.lockutils [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Lock "bd217c57-20a2-41c4-a969-7a4d94f0c7ce-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:46 compute-0 nova_compute[254092]: 2025-11-25 16:40:46.749 254096 DEBUG oslo_concurrency.lockutils [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Lock "bd217c57-20a2-41c4-a969-7a4d94f0c7ce-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:46 compute-0 nova_compute[254092]: 2025-11-25 16:40:46.750 254096 INFO nova.compute.manager [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Terminating instance
Nov 25 16:40:46 compute-0 nova_compute[254092]: 2025-11-25 16:40:46.751 254096 DEBUG nova.compute.manager [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:40:46 compute-0 nova_compute[254092]: 2025-11-25 16:40:46.757 254096 DEBUG oslo_concurrency.processutils [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:40:46 compute-0 kernel: tap341145f6-83 (unregistering): left promiscuous mode
Nov 25 16:40:46 compute-0 NetworkManager[48891]: <info>  [1764088846.7947] device (tap341145f6-83): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:40:46 compute-0 ovn_controller[153477]: 2025-11-25T16:40:46Z|00563|binding|INFO|Releasing lport 341145f6-8319-4c2d-aa9d-9d7475a5e7eb from this chassis (sb_readonly=0)
Nov 25 16:40:46 compute-0 ovn_controller[153477]: 2025-11-25T16:40:46Z|00564|binding|INFO|Setting lport 341145f6-8319-4c2d-aa9d-9d7475a5e7eb down in Southbound
Nov 25 16:40:46 compute-0 nova_compute[254092]: 2025-11-25 16:40:46.803 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:46 compute-0 ovn_controller[153477]: 2025-11-25T16:40:46Z|00565|binding|INFO|Removing iface tap341145f6-83 ovn-installed in OVS
Nov 25 16:40:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:46.811 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:6c:f3 10.100.0.10'], port_security=['fa:16:3e:cc:6c:f3 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'bd217c57-20a2-41c4-a969-7a4d94f0c7ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4fd021dd-a32c-4996-b105-78f4369f31fc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8a6a1f7b5bb9482d85239a3b39051837', 'neutron:revision_number': '4', 'neutron:security_group_ids': '06e61d98-1ba5-4078-aa67-4e1101d6ea8d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0771bd5-9169-46dc-85a6-0c2b379eeaa2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=341145f6-8319-4c2d-aa9d-9d7475a5e7eb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:40:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:46.812 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 341145f6-8319-4c2d-aa9d-9d7475a5e7eb in datapath 4fd021dd-a32c-4996-b105-78f4369f31fc unbound from our chassis
Nov 25 16:40:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:46.813 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4fd021dd-a32c-4996-b105-78f4369f31fc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:40:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:46.814 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[15283d85-3895-4375-89a0-630a68d31ec6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:46.814 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4fd021dd-a32c-4996-b105-78f4369f31fc namespace which is not needed anymore
Nov 25 16:40:46 compute-0 nova_compute[254092]: 2025-11-25 16:40:46.825 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:46 compute-0 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000003c.scope: Deactivated successfully.
Nov 25 16:40:46 compute-0 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000003c.scope: Consumed 1.886s CPU time.
Nov 25 16:40:46 compute-0 systemd-machined[216343]: Machine qemu-70-instance-0000003c terminated.
Nov 25 16:40:46 compute-0 nova_compute[254092]: 2025-11-25 16:40:46.898 254096 DEBUG nova.network.neutron [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:40:46 compute-0 neutron-haproxy-ovnmeta-4fd021dd-a32c-4996-b105-78f4369f31fc[321549]: [NOTICE]   (321586) : haproxy version is 2.8.14-c23fe91
Nov 25 16:40:46 compute-0 neutron-haproxy-ovnmeta-4fd021dd-a32c-4996-b105-78f4369f31fc[321549]: [NOTICE]   (321586) : path to executable is /usr/sbin/haproxy
Nov 25 16:40:46 compute-0 neutron-haproxy-ovnmeta-4fd021dd-a32c-4996-b105-78f4369f31fc[321549]: [WARNING]  (321586) : Exiting Master process...
Nov 25 16:40:46 compute-0 neutron-haproxy-ovnmeta-4fd021dd-a32c-4996-b105-78f4369f31fc[321549]: [WARNING]  (321586) : Exiting Master process...
Nov 25 16:40:46 compute-0 neutron-haproxy-ovnmeta-4fd021dd-a32c-4996-b105-78f4369f31fc[321549]: [ALERT]    (321586) : Current worker (321591) exited with code 143 (Terminated)
Nov 25 16:40:46 compute-0 neutron-haproxy-ovnmeta-4fd021dd-a32c-4996-b105-78f4369f31fc[321549]: [WARNING]  (321586) : All workers exited. Exiting... (0)
Nov 25 16:40:46 compute-0 systemd[1]: libpod-0219b4e6abb03a7e935022b0711d0dfa24f40c782c1a8506240a68cc61cefd62.scope: Deactivated successfully.
Nov 25 16:40:46 compute-0 podman[321726]: 2025-11-25 16:40:46.941098878 +0000 UTC m=+0.041423185 container died 0219b4e6abb03a7e935022b0711d0dfa24f40c782c1a8506240a68cc61cefd62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4fd021dd-a32c-4996-b105-78f4369f31fc, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 16:40:46 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0219b4e6abb03a7e935022b0711d0dfa24f40c782c1a8506240a68cc61cefd62-userdata-shm.mount: Deactivated successfully.
Nov 25 16:40:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9a24bbb3b2d46df9d259f5d274fdace0aea75909df6f8ee405b5020d775c516-merged.mount: Deactivated successfully.
Nov 25 16:40:46 compute-0 NetworkManager[48891]: <info>  [1764088846.9703] manager: (tap341145f6-83): new Tun device (/org/freedesktop/NetworkManager/Devices/245)
Nov 25 16:40:46 compute-0 nova_compute[254092]: 2025-11-25 16:40:46.971 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:46 compute-0 nova_compute[254092]: 2025-11-25 16:40:46.975 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:46 compute-0 podman[321726]: 2025-11-25 16:40:46.980771594 +0000 UTC m=+0.081095901 container cleanup 0219b4e6abb03a7e935022b0711d0dfa24f40c782c1a8506240a68cc61cefd62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4fd021dd-a32c-4996-b105-78f4369f31fc, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:40:46 compute-0 nova_compute[254092]: 2025-11-25 16:40:46.994 254096 INFO nova.virt.libvirt.driver [-] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Instance destroyed successfully.
Nov 25 16:40:46 compute-0 nova_compute[254092]: 2025-11-25 16:40:46.995 254096 DEBUG nova.objects.instance [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Lazy-loading 'resources' on Instance uuid bd217c57-20a2-41c4-a969-7a4d94f0c7ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:40:47 compute-0 systemd[1]: libpod-conmon-0219b4e6abb03a7e935022b0711d0dfa24f40c782c1a8506240a68cc61cefd62.scope: Deactivated successfully.
Nov 25 16:40:47 compute-0 nova_compute[254092]: 2025-11-25 16:40:47.004 254096 DEBUG nova.virt.libvirt.vif [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:40:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-2012655337',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-2012655337',id=60,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:40:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8a6a1f7b5bb9482d85239a3b39051837',ramdisk_id='',reservation_id='r-7kghzncd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsV221TestJSON-694375426',owner_user_name='tempest-InstanceActionsV221TestJSON-694375426-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:40:45Z,user_data=None,user_id='8a304e40f25749e49b171b1db4828ff1',uuid=bd217c57-20a2-41c4-a969-7a4d94f0c7ce,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "341145f6-8319-4c2d-aa9d-9d7475a5e7eb", "address": "fa:16:3e:cc:6c:f3", "network": {"id": "4fd021dd-a32c-4996-b105-78f4369f31fc", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1334163951-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a6a1f7b5bb9482d85239a3b39051837", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341145f6-83", "ovs_interfaceid": "341145f6-8319-4c2d-aa9d-9d7475a5e7eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:40:47 compute-0 nova_compute[254092]: 2025-11-25 16:40:47.004 254096 DEBUG nova.network.os_vif_util [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Converting VIF {"id": "341145f6-8319-4c2d-aa9d-9d7475a5e7eb", "address": "fa:16:3e:cc:6c:f3", "network": {"id": "4fd021dd-a32c-4996-b105-78f4369f31fc", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1334163951-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a6a1f7b5bb9482d85239a3b39051837", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap341145f6-83", "ovs_interfaceid": "341145f6-8319-4c2d-aa9d-9d7475a5e7eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:40:47 compute-0 nova_compute[254092]: 2025-11-25 16:40:47.005 254096 DEBUG nova.network.os_vif_util [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:6c:f3,bridge_name='br-int',has_traffic_filtering=True,id=341145f6-8319-4c2d-aa9d-9d7475a5e7eb,network=Network(4fd021dd-a32c-4996-b105-78f4369f31fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap341145f6-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:40:47 compute-0 nova_compute[254092]: 2025-11-25 16:40:47.005 254096 DEBUG os_vif [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:6c:f3,bridge_name='br-int',has_traffic_filtering=True,id=341145f6-8319-4c2d-aa9d-9d7475a5e7eb,network=Network(4fd021dd-a32c-4996-b105-78f4369f31fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap341145f6-83') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:40:47 compute-0 nova_compute[254092]: 2025-11-25 16:40:47.007 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:47 compute-0 nova_compute[254092]: 2025-11-25 16:40:47.007 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap341145f6-83, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:40:47 compute-0 nova_compute[254092]: 2025-11-25 16:40:47.009 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:47 compute-0 nova_compute[254092]: 2025-11-25 16:40:47.010 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:47 compute-0 nova_compute[254092]: 2025-11-25 16:40:47.013 254096 INFO os_vif [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:6c:f3,bridge_name='br-int',has_traffic_filtering=True,id=341145f6-8319-4c2d-aa9d-9d7475a5e7eb,network=Network(4fd021dd-a32c-4996-b105-78f4369f31fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap341145f6-83')
Nov 25 16:40:47 compute-0 podman[321772]: 2025-11-25 16:40:47.077348385 +0000 UTC m=+0.071495871 container remove 0219b4e6abb03a7e935022b0711d0dfa24f40c782c1a8506240a68cc61cefd62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4fd021dd-a32c-4996-b105-78f4369f31fc, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:40:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:47.084 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5214f364-8d8b-4b8c-84dd-fbfa04f1d29f]: (4, ('Tue Nov 25 04:40:46 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4fd021dd-a32c-4996-b105-78f4369f31fc (0219b4e6abb03a7e935022b0711d0dfa24f40c782c1a8506240a68cc61cefd62)\n0219b4e6abb03a7e935022b0711d0dfa24f40c782c1a8506240a68cc61cefd62\nTue Nov 25 04:40:46 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4fd021dd-a32c-4996-b105-78f4369f31fc (0219b4e6abb03a7e935022b0711d0dfa24f40c782c1a8506240a68cc61cefd62)\n0219b4e6abb03a7e935022b0711d0dfa24f40c782c1a8506240a68cc61cefd62\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:47.086 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fd848a5c-435e-40e6-838c-7941fe6c4fe4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:47.087 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4fd021dd-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:40:47 compute-0 nova_compute[254092]: 2025-11-25 16:40:47.088 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:47 compute-0 kernel: tap4fd021dd-a0: left promiscuous mode
Nov 25 16:40:47 compute-0 nova_compute[254092]: 2025-11-25 16:40:47.103 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:47.106 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b4dcd9d3-34a8-412e-9da3-659a712cd982]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:47.121 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e473b3f5-52a5-476b-8068-82c08d643f5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:47.122 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[11afd887-6413-4f3c-abf5-85d78f16cb29]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:47.137 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a2ecf6bb-0e2a-43d7-a32d-eabeefab0d64]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 530232, 'reachable_time': 20508, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321805, 'error': None, 'target': 'ovnmeta-4fd021dd-a32c-4996-b105-78f4369f31fc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:47 compute-0 systemd[1]: run-netns-ovnmeta\x2d4fd021dd\x2da32c\x2d4996\x2db105\x2d78f4369f31fc.mount: Deactivated successfully.
Nov 25 16:40:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:47.142 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4fd021dd-a32c-4996-b105-78f4369f31fc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:40:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:47.142 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[81c22bb6-eedb-4f1e-b67e-f13bdc6a8475]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:47 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:40:47 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1918936432' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:40:47 compute-0 nova_compute[254092]: 2025-11-25 16:40:47.230 254096 DEBUG oslo_concurrency.processutils [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:40:47 compute-0 nova_compute[254092]: 2025-11-25 16:40:47.237 254096 DEBUG nova.compute.provider_tree [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:40:47 compute-0 nova_compute[254092]: 2025-11-25 16:40:47.251 254096 DEBUG nova.scheduler.client.report [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:40:47 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1918936432' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:40:47 compute-0 nova_compute[254092]: 2025-11-25 16:40:47.273 254096 DEBUG oslo_concurrency.lockutils [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:47 compute-0 nova_compute[254092]: 2025-11-25 16:40:47.309 254096 INFO nova.scheduler.client.report [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Deleted allocations for instance 32b30534-761a-439a-85e5-4e2fe8f507df
Nov 25 16:40:47 compute-0 nova_compute[254092]: 2025-11-25 16:40:47.382 254096 DEBUG oslo_concurrency.lockutils [None req-3a67494d-ee63-4525-b602-933732dd843b 509d158fe3f34e219f96739bb51bd6d9 1ea58a9bae9c474bb9f0b9c821689054 - - default default] Lock "32b30534-761a-439a-85e5-4e2fe8f507df" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:47 compute-0 nova_compute[254092]: 2025-11-25 16:40:47.400 254096 DEBUG nova.compute.manager [req-c6d5ac70-638e-426e-88bb-041a22ad51f4 req-aa139497-006f-4c0c-8343-eba2580c5fc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Received event network-vif-plugged-341145f6-8319-4c2d-aa9d-9d7475a5e7eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:40:47 compute-0 nova_compute[254092]: 2025-11-25 16:40:47.401 254096 DEBUG oslo_concurrency.lockutils [req-c6d5ac70-638e-426e-88bb-041a22ad51f4 req-aa139497-006f-4c0c-8343-eba2580c5fc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "bd217c57-20a2-41c4-a969-7a4d94f0c7ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:47 compute-0 nova_compute[254092]: 2025-11-25 16:40:47.401 254096 DEBUG oslo_concurrency.lockutils [req-c6d5ac70-638e-426e-88bb-041a22ad51f4 req-aa139497-006f-4c0c-8343-eba2580c5fc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "bd217c57-20a2-41c4-a969-7a4d94f0c7ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:47 compute-0 nova_compute[254092]: 2025-11-25 16:40:47.402 254096 DEBUG oslo_concurrency.lockutils [req-c6d5ac70-638e-426e-88bb-041a22ad51f4 req-aa139497-006f-4c0c-8343-eba2580c5fc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "bd217c57-20a2-41c4-a969-7a4d94f0c7ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:47 compute-0 nova_compute[254092]: 2025-11-25 16:40:47.403 254096 DEBUG nova.compute.manager [req-c6d5ac70-638e-426e-88bb-041a22ad51f4 req-aa139497-006f-4c0c-8343-eba2580c5fc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] No waiting events found dispatching network-vif-plugged-341145f6-8319-4c2d-aa9d-9d7475a5e7eb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:40:47 compute-0 nova_compute[254092]: 2025-11-25 16:40:47.403 254096 WARNING nova.compute.manager [req-c6d5ac70-638e-426e-88bb-041a22ad51f4 req-aa139497-006f-4c0c-8343-eba2580c5fc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Received unexpected event network-vif-plugged-341145f6-8319-4c2d-aa9d-9d7475a5e7eb for instance with vm_state active and task_state deleting.
Nov 25 16:40:47 compute-0 nova_compute[254092]: 2025-11-25 16:40:47.403 254096 DEBUG nova.compute.manager [req-c6d5ac70-638e-426e-88bb-041a22ad51f4 req-aa139497-006f-4c0c-8343-eba2580c5fc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Received event network-vif-deleted-9691504b-429d-44e8-bdf5-7f223c5b0527 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:40:47 compute-0 nova_compute[254092]: 2025-11-25 16:40:47.404 254096 DEBUG nova.compute.manager [req-c6d5ac70-638e-426e-88bb-041a22ad51f4 req-aa139497-006f-4c0c-8343-eba2580c5fc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Received event network-vif-unplugged-341145f6-8319-4c2d-aa9d-9d7475a5e7eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:40:47 compute-0 nova_compute[254092]: 2025-11-25 16:40:47.404 254096 DEBUG oslo_concurrency.lockutils [req-c6d5ac70-638e-426e-88bb-041a22ad51f4 req-aa139497-006f-4c0c-8343-eba2580c5fc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "bd217c57-20a2-41c4-a969-7a4d94f0c7ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:47 compute-0 nova_compute[254092]: 2025-11-25 16:40:47.404 254096 DEBUG oslo_concurrency.lockutils [req-c6d5ac70-638e-426e-88bb-041a22ad51f4 req-aa139497-006f-4c0c-8343-eba2580c5fc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "bd217c57-20a2-41c4-a969-7a4d94f0c7ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:47 compute-0 nova_compute[254092]: 2025-11-25 16:40:47.404 254096 DEBUG oslo_concurrency.lockutils [req-c6d5ac70-638e-426e-88bb-041a22ad51f4 req-aa139497-006f-4c0c-8343-eba2580c5fc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "bd217c57-20a2-41c4-a969-7a4d94f0c7ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:47 compute-0 nova_compute[254092]: 2025-11-25 16:40:47.405 254096 DEBUG nova.compute.manager [req-c6d5ac70-638e-426e-88bb-041a22ad51f4 req-aa139497-006f-4c0c-8343-eba2580c5fc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] No waiting events found dispatching network-vif-unplugged-341145f6-8319-4c2d-aa9d-9d7475a5e7eb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:40:47 compute-0 nova_compute[254092]: 2025-11-25 16:40:47.405 254096 DEBUG nova.compute.manager [req-c6d5ac70-638e-426e-88bb-041a22ad51f4 req-aa139497-006f-4c0c-8343-eba2580c5fc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Received event network-vif-unplugged-341145f6-8319-4c2d-aa9d-9d7475a5e7eb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:40:47 compute-0 nova_compute[254092]: 2025-11-25 16:40:47.496 254096 INFO nova.virt.libvirt.driver [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Deleting instance files /var/lib/nova/instances/bd217c57-20a2-41c4-a969-7a4d94f0c7ce_del
Nov 25 16:40:47 compute-0 nova_compute[254092]: 2025-11-25 16:40:47.496 254096 INFO nova.virt.libvirt.driver [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Deletion of /var/lib/nova/instances/bd217c57-20a2-41c4-a969-7a4d94f0c7ce_del complete
Nov 25 16:40:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1662: 321 pgs: 321 active+clean; 133 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.8 MiB/s wr, 239 op/s
Nov 25 16:40:47 compute-0 nova_compute[254092]: 2025-11-25 16:40:47.539 254096 INFO nova.compute.manager [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Took 0.79 seconds to destroy the instance on the hypervisor.
Nov 25 16:40:47 compute-0 nova_compute[254092]: 2025-11-25 16:40:47.540 254096 DEBUG oslo.service.loopingcall [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:40:47 compute-0 nova_compute[254092]: 2025-11-25 16:40:47.540 254096 DEBUG nova.compute.manager [-] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:40:47 compute-0 nova_compute[254092]: 2025-11-25 16:40:47.540 254096 DEBUG nova.network.neutron [-] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:40:47 compute-0 nova_compute[254092]: 2025-11-25 16:40:47.845 254096 DEBUG nova.compute.manager [req-32c08bd8-dae5-4369-9dc2-1737fd246eae req-f1201fce-df89-4867-a9cc-1aeea767ff31 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Received event network-changed-60897ca4-9177-413c-b0f0-808dbc7d34dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:40:47 compute-0 nova_compute[254092]: 2025-11-25 16:40:47.845 254096 DEBUG nova.compute.manager [req-32c08bd8-dae5-4369-9dc2-1737fd246eae req-f1201fce-df89-4867-a9cc-1aeea767ff31 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Refreshing instance network info cache due to event network-changed-60897ca4-9177-413c-b0f0-808dbc7d34dc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:40:47 compute-0 nova_compute[254092]: 2025-11-25 16:40:47.845 254096 DEBUG oslo_concurrency.lockutils [req-32c08bd8-dae5-4369-9dc2-1737fd246eae req-f1201fce-df89-4867-a9cc-1aeea767ff31 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-fef208e1-3706-4d03-8385-12418e9dc230" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.021 254096 DEBUG nova.network.neutron [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Updating instance_info_cache with network_info: [{"id": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "address": "fa:16:3e:43:f9:27", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60897ca4-91", "ovs_interfaceid": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.036 254096 DEBUG oslo_concurrency.lockutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Releasing lock "refresh_cache-fef208e1-3706-4d03-8385-12418e9dc230" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.037 254096 DEBUG nova.compute.manager [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Instance network_info: |[{"id": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "address": "fa:16:3e:43:f9:27", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60897ca4-91", "ovs_interfaceid": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.037 254096 DEBUG oslo_concurrency.lockutils [req-32c08bd8-dae5-4369-9dc2-1737fd246eae req-f1201fce-df89-4867-a9cc-1aeea767ff31 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-fef208e1-3706-4d03-8385-12418e9dc230" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.037 254096 DEBUG nova.network.neutron [req-32c08bd8-dae5-4369-9dc2-1737fd246eae req-f1201fce-df89-4867-a9cc-1aeea767ff31 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Refreshing network info cache for port 60897ca4-9177-413c-b0f0-808dbc7d34dc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.040 254096 DEBUG nova.virt.libvirt.driver [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Start _get_guest_xml network_info=[{"id": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "address": "fa:16:3e:43:f9:27", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60897ca4-91", "ovs_interfaceid": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.044 254096 WARNING nova.virt.libvirt.driver [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.049 254096 DEBUG nova.virt.libvirt.host [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.049 254096 DEBUG nova.virt.libvirt.host [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.055 254096 DEBUG nova.virt.libvirt.host [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.055 254096 DEBUG nova.virt.libvirt.host [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.056 254096 DEBUG nova.virt.libvirt.driver [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.056 254096 DEBUG nova.virt.hardware [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.056 254096 DEBUG nova.virt.hardware [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.057 254096 DEBUG nova.virt.hardware [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.057 254096 DEBUG nova.virt.hardware [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.057 254096 DEBUG nova.virt.hardware [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.057 254096 DEBUG nova.virt.hardware [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.057 254096 DEBUG nova.virt.hardware [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.058 254096 DEBUG nova.virt.hardware [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.058 254096 DEBUG nova.virt.hardware [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.058 254096 DEBUG nova.virt.hardware [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.058 254096 DEBUG nova.virt.hardware [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.061 254096 DEBUG oslo_concurrency.processutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.168 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:48 compute-0 ceph-mon[74985]: pgmap v1662: 321 pgs: 321 active+clean; 133 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.8 MiB/s wr, 239 op/s
Nov 25 16:40:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:40:48 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1769453002' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.492 254096 DEBUG oslo_concurrency.processutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.513 254096 DEBUG nova.storage.rbd_utils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image fef208e1-3706-4d03-8385-12418e9dc230_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.516 254096 DEBUG oslo_concurrency.processutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.896 254096 DEBUG nova.network.neutron [-] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.915 254096 INFO nova.compute.manager [-] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Took 1.37 seconds to deallocate network for instance.
Nov 25 16:40:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:40:48 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3709853231' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.958 254096 DEBUG oslo_concurrency.processutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.959 254096 DEBUG nova.virt.libvirt.vif [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:40:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1975310078',display_name='tempest-ServerDiskConfigTestJSON-server-1975310078',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1975310078',id=61,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6ff6b3c59785407bb06c9d7e3969ea4f',ramdisk_id='',reservation_id='r-b4grt1v4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1055022473',owner_user_name='tempest-ServerDisk
ConfigTestJSON-1055022473-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:40:44Z,user_data=None,user_id='3c1fd56de7cd4f5c9b1d85ffe8545c90',uuid=fef208e1-3706-4d03-8385-12418e9dc230,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "address": "fa:16:3e:43:f9:27", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60897ca4-91", "ovs_interfaceid": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.959 254096 DEBUG nova.network.os_vif_util [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converting VIF {"id": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "address": "fa:16:3e:43:f9:27", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60897ca4-91", "ovs_interfaceid": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.960 254096 DEBUG nova.network.os_vif_util [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:f9:27,bridge_name='br-int',has_traffic_filtering=True,id=60897ca4-9177-413c-b0f0-808dbc7d34dc,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60897ca4-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.961 254096 DEBUG nova.objects.instance [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'pci_devices' on Instance uuid fef208e1-3706-4d03-8385-12418e9dc230 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.969 254096 DEBUG oslo_concurrency.lockutils [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.970 254096 DEBUG oslo_concurrency.lockutils [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.979 254096 DEBUG nova.virt.libvirt.driver [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:40:48 compute-0 nova_compute[254092]:   <uuid>fef208e1-3706-4d03-8385-12418e9dc230</uuid>
Nov 25 16:40:48 compute-0 nova_compute[254092]:   <name>instance-0000003d</name>
Nov 25 16:40:48 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:40:48 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:40:48 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:40:48 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1975310078</nova:name>
Nov 25 16:40:48 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:40:48</nova:creationTime>
Nov 25 16:40:48 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:40:48 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:40:48 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:40:48 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:40:48 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:40:48 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:40:48 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:40:48 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:40:48 compute-0 nova_compute[254092]:         <nova:user uuid="3c1fd56de7cd4f5c9b1d85ffe8545c90">tempest-ServerDiskConfigTestJSON-1055022473-project-member</nova:user>
Nov 25 16:40:48 compute-0 nova_compute[254092]:         <nova:project uuid="6ff6b3c59785407bb06c9d7e3969ea4f">tempest-ServerDiskConfigTestJSON-1055022473</nova:project>
Nov 25 16:40:48 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:40:48 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:40:48 compute-0 nova_compute[254092]:         <nova:port uuid="60897ca4-9177-413c-b0f0-808dbc7d34dc">
Nov 25 16:40:48 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:40:48 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:40:48 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:40:48 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:40:48 compute-0 nova_compute[254092]:     <system>
Nov 25 16:40:48 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:40:48 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:40:48 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:40:48 compute-0 nova_compute[254092]:       <entry name="serial">fef208e1-3706-4d03-8385-12418e9dc230</entry>
Nov 25 16:40:48 compute-0 nova_compute[254092]:       <entry name="uuid">fef208e1-3706-4d03-8385-12418e9dc230</entry>
Nov 25 16:40:48 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     </system>
Nov 25 16:40:48 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:40:48 compute-0 nova_compute[254092]:   <os>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:   </os>
Nov 25 16:40:48 compute-0 nova_compute[254092]:   <features>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:   </features>
Nov 25 16:40:48 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:40:48 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:40:48 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:40:48 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:40:48 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:40:48 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/fef208e1-3706-4d03-8385-12418e9dc230_disk">
Nov 25 16:40:48 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:       </source>
Nov 25 16:40:48 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:40:48 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:40:48 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:40:48 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/fef208e1-3706-4d03-8385-12418e9dc230_disk.config">
Nov 25 16:40:48 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:       </source>
Nov 25 16:40:48 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:40:48 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:40:48 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:40:48 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:43:f9:27"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:       <target dev="tap60897ca4-91"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:40:48 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/fef208e1-3706-4d03-8385-12418e9dc230/console.log" append="off"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     <video>
Nov 25 16:40:48 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     </video>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:40:48 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:40:48 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:40:48 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:40:48 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:40:48 compute-0 nova_compute[254092]: </domain>
Nov 25 16:40:48 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.981 254096 DEBUG nova.compute.manager [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Preparing to wait for external event network-vif-plugged-60897ca4-9177-413c-b0f0-808dbc7d34dc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.981 254096 DEBUG oslo_concurrency.lockutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "fef208e1-3706-4d03-8385-12418e9dc230-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.982 254096 DEBUG oslo_concurrency.lockutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "fef208e1-3706-4d03-8385-12418e9dc230-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.982 254096 DEBUG oslo_concurrency.lockutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "fef208e1-3706-4d03-8385-12418e9dc230-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.983 254096 DEBUG nova.virt.libvirt.vif [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:40:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1975310078',display_name='tempest-ServerDiskConfigTestJSON-server-1975310078',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1975310078',id=61,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6ff6b3c59785407bb06c9d7e3969ea4f',ramdisk_id='',reservation_id='r-b4grt1v4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1055022473',owner_user_name='tempest-ServerDiskConfigTestJSON-1055022473-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:40:44Z,user_data=None,user_id='3c1fd56de7cd4f5c9b1d85ffe8545c90',uuid=fef208e1-3706-4d03-8385-12418e9dc230,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "address": "fa:16:3e:43:f9:27", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60897ca4-91", "ovs_interfaceid": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.983 254096 DEBUG nova.network.os_vif_util [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converting VIF {"id": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "address": "fa:16:3e:43:f9:27", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60897ca4-91", "ovs_interfaceid": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.984 254096 DEBUG nova.network.os_vif_util [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:f9:27,bridge_name='br-int',has_traffic_filtering=True,id=60897ca4-9177-413c-b0f0-808dbc7d34dc,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60897ca4-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.984 254096 DEBUG os_vif [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:f9:27,bridge_name='br-int',has_traffic_filtering=True,id=60897ca4-9177-413c-b0f0-808dbc7d34dc,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60897ca4-91') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.985 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.985 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.986 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.988 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.989 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60897ca4-91, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.989 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap60897ca4-91, col_values=(('external_ids', {'iface-id': '60897ca4-9177-413c-b0f0-808dbc7d34dc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:43:f9:27', 'vm-uuid': 'fef208e1-3706-4d03-8385-12418e9dc230'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.991 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:48 compute-0 NetworkManager[48891]: <info>  [1764088848.9919] manager: (tap60897ca4-91): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/246)
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.993 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.998 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:48 compute-0 nova_compute[254092]: 2025-11-25 16:40:48.999 254096 INFO os_vif [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:f9:27,bridge_name='br-int',has_traffic_filtering=True,id=60897ca4-9177-413c-b0f0-808dbc7d34dc,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60897ca4-91')
Nov 25 16:40:49 compute-0 nova_compute[254092]: 2025-11-25 16:40:49.035 254096 DEBUG oslo_concurrency.processutils [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:40:49 compute-0 nova_compute[254092]: 2025-11-25 16:40:49.099 254096 DEBUG nova.virt.libvirt.driver [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:40:49 compute-0 nova_compute[254092]: 2025-11-25 16:40:49.100 254096 DEBUG nova.virt.libvirt.driver [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:40:49 compute-0 nova_compute[254092]: 2025-11-25 16:40:49.100 254096 DEBUG nova.virt.libvirt.driver [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] No VIF found with MAC fa:16:3e:43:f9:27, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:40:49 compute-0 nova_compute[254092]: 2025-11-25 16:40:49.100 254096 INFO nova.virt.libvirt.driver [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Using config drive
Nov 25 16:40:49 compute-0 nova_compute[254092]: 2025-11-25 16:40:49.119 254096 DEBUG nova.storage.rbd_utils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image fef208e1-3706-4d03-8385-12418e9dc230_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:40:49 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1769453002' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:40:49 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3709853231' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:40:49 compute-0 nova_compute[254092]: 2025-11-25 16:40:49.297 254096 DEBUG nova.network.neutron [req-32c08bd8-dae5-4369-9dc2-1737fd246eae req-f1201fce-df89-4867-a9cc-1aeea767ff31 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Updated VIF entry in instance network info cache for port 60897ca4-9177-413c-b0f0-808dbc7d34dc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:40:49 compute-0 nova_compute[254092]: 2025-11-25 16:40:49.298 254096 DEBUG nova.network.neutron [req-32c08bd8-dae5-4369-9dc2-1737fd246eae req-f1201fce-df89-4867-a9cc-1aeea767ff31 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Updating instance_info_cache with network_info: [{"id": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "address": "fa:16:3e:43:f9:27", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60897ca4-91", "ovs_interfaceid": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:40:49 compute-0 nova_compute[254092]: 2025-11-25 16:40:49.311 254096 DEBUG oslo_concurrency.lockutils [req-32c08bd8-dae5-4369-9dc2-1737fd246eae req-f1201fce-df89-4867-a9cc-1aeea767ff31 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-fef208e1-3706-4d03-8385-12418e9dc230" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:40:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:40:49 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1042085003' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:40:49 compute-0 nova_compute[254092]: 2025-11-25 16:40:49.488 254096 DEBUG nova.compute.manager [req-64e2c626-016e-4153-ab9d-46c7dd844fab req-b6837e28-f391-430a-8940-aa793a6c9d7d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Received event network-vif-plugged-341145f6-8319-4c2d-aa9d-9d7475a5e7eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:40:49 compute-0 nova_compute[254092]: 2025-11-25 16:40:49.488 254096 DEBUG oslo_concurrency.lockutils [req-64e2c626-016e-4153-ab9d-46c7dd844fab req-b6837e28-f391-430a-8940-aa793a6c9d7d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "bd217c57-20a2-41c4-a969-7a4d94f0c7ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:49 compute-0 nova_compute[254092]: 2025-11-25 16:40:49.489 254096 DEBUG oslo_concurrency.lockutils [req-64e2c626-016e-4153-ab9d-46c7dd844fab req-b6837e28-f391-430a-8940-aa793a6c9d7d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "bd217c57-20a2-41c4-a969-7a4d94f0c7ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:49 compute-0 nova_compute[254092]: 2025-11-25 16:40:49.489 254096 DEBUG oslo_concurrency.lockutils [req-64e2c626-016e-4153-ab9d-46c7dd844fab req-b6837e28-f391-430a-8940-aa793a6c9d7d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "bd217c57-20a2-41c4-a969-7a4d94f0c7ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:49 compute-0 nova_compute[254092]: 2025-11-25 16:40:49.489 254096 DEBUG nova.compute.manager [req-64e2c626-016e-4153-ab9d-46c7dd844fab req-b6837e28-f391-430a-8940-aa793a6c9d7d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] No waiting events found dispatching network-vif-plugged-341145f6-8319-4c2d-aa9d-9d7475a5e7eb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:40:49 compute-0 nova_compute[254092]: 2025-11-25 16:40:49.490 254096 WARNING nova.compute.manager [req-64e2c626-016e-4153-ab9d-46c7dd844fab req-b6837e28-f391-430a-8940-aa793a6c9d7d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Received unexpected event network-vif-plugged-341145f6-8319-4c2d-aa9d-9d7475a5e7eb for instance with vm_state deleted and task_state None.
Nov 25 16:40:49 compute-0 nova_compute[254092]: 2025-11-25 16:40:49.490 254096 DEBUG nova.compute.manager [req-64e2c626-016e-4153-ab9d-46c7dd844fab req-b6837e28-f391-430a-8940-aa793a6c9d7d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Received event network-vif-deleted-341145f6-8319-4c2d-aa9d-9d7475a5e7eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:40:49 compute-0 nova_compute[254092]: 2025-11-25 16:40:49.490 254096 DEBUG oslo_concurrency.processutils [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:40:49 compute-0 nova_compute[254092]: 2025-11-25 16:40:49.495 254096 DEBUG nova.compute.provider_tree [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:40:49 compute-0 nova_compute[254092]: 2025-11-25 16:40:49.507 254096 DEBUG nova.scheduler.client.report [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:40:49 compute-0 nova_compute[254092]: 2025-11-25 16:40:49.528 254096 DEBUG oslo_concurrency.lockutils [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1663: 321 pgs: 321 active+clean; 133 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.5 MiB/s wr, 229 op/s
Nov 25 16:40:49 compute-0 nova_compute[254092]: 2025-11-25 16:40:49.560 254096 INFO nova.scheduler.client.report [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Deleted allocations for instance bd217c57-20a2-41c4-a969-7a4d94f0c7ce
Nov 25 16:40:49 compute-0 nova_compute[254092]: 2025-11-25 16:40:49.584 254096 INFO nova.virt.libvirt.driver [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Creating config drive at /var/lib/nova/instances/fef208e1-3706-4d03-8385-12418e9dc230/disk.config
Nov 25 16:40:49 compute-0 nova_compute[254092]: 2025-11-25 16:40:49.589 254096 DEBUG oslo_concurrency.processutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fef208e1-3706-4d03-8385-12418e9dc230/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb4cjicgz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:40:49 compute-0 nova_compute[254092]: 2025-11-25 16:40:49.632 254096 DEBUG oslo_concurrency.lockutils [None req-48d6d1bf-faf2-4a13-9520-1bed65daa240 8a304e40f25749e49b171b1db4828ff1 8a6a1f7b5bb9482d85239a3b39051837 - - default default] Lock "bd217c57-20a2-41c4-a969-7a4d94f0c7ce" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.883s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:49 compute-0 nova_compute[254092]: 2025-11-25 16:40:49.725 254096 DEBUG oslo_concurrency.processutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fef208e1-3706-4d03-8385-12418e9dc230/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb4cjicgz" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:40:49 compute-0 nova_compute[254092]: 2025-11-25 16:40:49.746 254096 DEBUG nova.storage.rbd_utils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image fef208e1-3706-4d03-8385-12418e9dc230_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:40:49 compute-0 nova_compute[254092]: 2025-11-25 16:40:49.749 254096 DEBUG oslo_concurrency.processutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fef208e1-3706-4d03-8385-12418e9dc230/disk.config fef208e1-3706-4d03-8385-12418e9dc230_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:40:49 compute-0 nova_compute[254092]: 2025-11-25 16:40:49.891 254096 DEBUG oslo_concurrency.processutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fef208e1-3706-4d03-8385-12418e9dc230/disk.config fef208e1-3706-4d03-8385-12418e9dc230_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:40:49 compute-0 nova_compute[254092]: 2025-11-25 16:40:49.891 254096 INFO nova.virt.libvirt.driver [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Deleting local config drive /var/lib/nova/instances/fef208e1-3706-4d03-8385-12418e9dc230/disk.config because it was imported into RBD.
Nov 25 16:40:49 compute-0 kernel: tap60897ca4-91: entered promiscuous mode
Nov 25 16:40:49 compute-0 ovn_controller[153477]: 2025-11-25T16:40:49Z|00566|binding|INFO|Claiming lport 60897ca4-9177-413c-b0f0-808dbc7d34dc for this chassis.
Nov 25 16:40:49 compute-0 ovn_controller[153477]: 2025-11-25T16:40:49Z|00567|binding|INFO|60897ca4-9177-413c-b0f0-808dbc7d34dc: Claiming fa:16:3e:43:f9:27 10.100.0.13
Nov 25 16:40:49 compute-0 NetworkManager[48891]: <info>  [1764088849.9424] manager: (tap60897ca4-91): new Tun device (/org/freedesktop/NetworkManager/Devices/247)
Nov 25 16:40:49 compute-0 nova_compute[254092]: 2025-11-25 16:40:49.943 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:49.948 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:f9:27 10.100.0.13'], port_security=['fa:16:3e:43:f9:27 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'fef208e1-3706-4d03-8385-12418e9dc230', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ff6b3c59785407bb06c9d7e3969ea4f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b774cfb5-cd97-4ca0-9d7e-110986406220', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d2dad0a-51f1-4edf-9261-6f40860e89ef, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=60897ca4-9177-413c-b0f0-808dbc7d34dc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:40:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:49.950 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 60897ca4-9177-413c-b0f0-808dbc7d34dc in datapath 62c0a8be-b828-4765-9cf8-f2477a8ef1d9 bound to our chassis
Nov 25 16:40:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:49.951 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 62c0a8be-b828-4765-9cf8-f2477a8ef1d9
Nov 25 16:40:49 compute-0 ovn_controller[153477]: 2025-11-25T16:40:49Z|00568|binding|INFO|Setting lport 60897ca4-9177-413c-b0f0-808dbc7d34dc ovn-installed in OVS
Nov 25 16:40:49 compute-0 ovn_controller[153477]: 2025-11-25T16:40:49Z|00569|binding|INFO|Setting lport 60897ca4-9177-413c-b0f0-808dbc7d34dc up in Southbound
Nov 25 16:40:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:49.965 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[656aae32-eb43-4c55-bbb8-21dae40a426c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:49.967 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap62c0a8be-b1 in ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:40:49 compute-0 nova_compute[254092]: 2025-11-25 16:40:49.967 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:49 compute-0 nova_compute[254092]: 2025-11-25 16:40:49.970 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:49.969 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap62c0a8be-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:40:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:49.969 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[69b26c1f-a9d1-4753-8ff1-b87ba208ff11]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:49.972 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d1a931f1-1d66-40c0-ae92-ae849c54a839]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:49 compute-0 systemd-machined[216343]: New machine qemu-71-instance-0000003d.
Nov 25 16:40:49 compute-0 systemd-udevd[321967]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:40:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:49.985 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[2a5e2cf5-ba60-4d43-a092-96d8746904af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:49 compute-0 NetworkManager[48891]: <info>  [1764088849.9904] device (tap60897ca4-91): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:40:49 compute-0 NetworkManager[48891]: <info>  [1764088849.9916] device (tap60897ca4-91): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:40:49 compute-0 systemd[1]: Started Virtual Machine qemu-71-instance-0000003d.
Nov 25 16:40:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:50.011 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bbb9993a-7a93-450e-929e-dec331f11331]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:50.040 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f4a5c2ba-5131-481d-8828-43a9e680e1c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:50.045 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[365660c4-1ba1-4b84-a201-d30ce4c21bd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:50 compute-0 NetworkManager[48891]: <info>  [1764088850.0457] manager: (tap62c0a8be-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/248)
Nov 25 16:40:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:50.073 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ec808376-12b5-43bc-8b22-931d6ce5b9fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:50.077 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[90156b18-582a-448c-af6a-2b50fb8c1052]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:50 compute-0 NetworkManager[48891]: <info>  [1764088850.0960] device (tap62c0a8be-b0): carrier: link connected
Nov 25 16:40:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:50.104 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[eae810f4-9762-44f9-bd8a-3e960f4451e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:50.119 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7e6b1fc5-4d30-4757-a68e-35374931e4bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62c0a8be-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:da:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 530768, 'reachable_time': 29192, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321999, 'error': None, 'target': 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:50.132 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0a99f7d3-611f-4bf3-b53a-420bc57ac5df]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedc:dab2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 530768, 'tstamp': 530768}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322000, 'error': None, 'target': 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:50.148 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[89e80b2e-dbd7-45a7-a0ef-4a87755276ae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62c0a8be-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:da:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 530768, 'reachable_time': 29192, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 322001, 'error': None, 'target': 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:50 compute-0 nova_compute[254092]: 2025-11-25 16:40:50.158 254096 DEBUG nova.compute.manager [req-26996588-4cce-4f12-b846-5b8c9fa2b427 req-f6782143-bf0f-4f58-9e4f-0a185fd50790 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Received event network-vif-plugged-60897ca4-9177-413c-b0f0-808dbc7d34dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:40:50 compute-0 nova_compute[254092]: 2025-11-25 16:40:50.158 254096 DEBUG oslo_concurrency.lockutils [req-26996588-4cce-4f12-b846-5b8c9fa2b427 req-f6782143-bf0f-4f58-9e4f-0a185fd50790 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "fef208e1-3706-4d03-8385-12418e9dc230-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:50 compute-0 nova_compute[254092]: 2025-11-25 16:40:50.158 254096 DEBUG oslo_concurrency.lockutils [req-26996588-4cce-4f12-b846-5b8c9fa2b427 req-f6782143-bf0f-4f58-9e4f-0a185fd50790 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fef208e1-3706-4d03-8385-12418e9dc230-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:50 compute-0 nova_compute[254092]: 2025-11-25 16:40:50.158 254096 DEBUG oslo_concurrency.lockutils [req-26996588-4cce-4f12-b846-5b8c9fa2b427 req-f6782143-bf0f-4f58-9e4f-0a185fd50790 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fef208e1-3706-4d03-8385-12418e9dc230-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:50 compute-0 nova_compute[254092]: 2025-11-25 16:40:50.159 254096 DEBUG nova.compute.manager [req-26996588-4cce-4f12-b846-5b8c9fa2b427 req-f6782143-bf0f-4f58-9e4f-0a185fd50790 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Processing event network-vif-plugged-60897ca4-9177-413c-b0f0-808dbc7d34dc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:40:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:50.183 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0ae90551-0e41-4cd0-b982-c1739282c4f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:50.238 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[26bedeb9-89f4-4583-a08e-da3047e55d49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:50.240 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62c0a8be-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:40:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:50.240 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:40:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:50.240 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62c0a8be-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:40:50 compute-0 nova_compute[254092]: 2025-11-25 16:40:50.242 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:50 compute-0 NetworkManager[48891]: <info>  [1764088850.2430] manager: (tap62c0a8be-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/249)
Nov 25 16:40:50 compute-0 kernel: tap62c0a8be-b0: entered promiscuous mode
Nov 25 16:40:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:50.249 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap62c0a8be-b0, col_values=(('external_ids', {'iface-id': '4c0a1e28-b18a-4839-ae5a-4472548ad46f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:40:50 compute-0 nova_compute[254092]: 2025-11-25 16:40:50.250 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:50 compute-0 ovn_controller[153477]: 2025-11-25T16:40:50Z|00570|binding|INFO|Releasing lport 4c0a1e28-b18a-4839-ae5a-4472548ad46f from this chassis (sb_readonly=0)
Nov 25 16:40:50 compute-0 nova_compute[254092]: 2025-11-25 16:40:50.265 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:50.265 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/62c0a8be-b828-4765-9cf8-f2477a8ef1d9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/62c0a8be-b828-4765-9cf8-f2477a8ef1d9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:40:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:50.266 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[28ce38e6-71d9-43e1-bff9-b0455f1014fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:40:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:50.267 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:40:50 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:40:50 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:40:50 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-62c0a8be-b828-4765-9cf8-f2477a8ef1d9
Nov 25 16:40:50 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:40:50 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:40:50 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:40:50 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/62c0a8be-b828-4765-9cf8-f2477a8ef1d9.pid.haproxy
Nov 25 16:40:50 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:40:50 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:40:50 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:40:50 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:40:50 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:40:50 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:40:50 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:40:50 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:40:50 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:40:50 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:40:50 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:40:50 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:40:50 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:40:50 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:40:50 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:40:50 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:40:50 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:40:50 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:40:50 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:40:50 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:40:50 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 62c0a8be-b828-4765-9cf8-f2477a8ef1d9
Nov 25 16:40:50 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:40:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:40:50.267 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'env', 'PROCESS_TAG=haproxy-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/62c0a8be-b828-4765-9cf8-f2477a8ef1d9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:40:50 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1042085003' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:40:50 compute-0 ceph-mon[74985]: pgmap v1663: 321 pgs: 321 active+clean; 133 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.5 MiB/s wr, 229 op/s
Nov 25 16:40:50 compute-0 nova_compute[254092]: 2025-11-25 16:40:50.472 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088850.4715316, fef208e1-3706-4d03-8385-12418e9dc230 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:40:50 compute-0 nova_compute[254092]: 2025-11-25 16:40:50.472 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fef208e1-3706-4d03-8385-12418e9dc230] VM Started (Lifecycle Event)
Nov 25 16:40:50 compute-0 nova_compute[254092]: 2025-11-25 16:40:50.474 254096 DEBUG nova.compute.manager [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:40:50 compute-0 nova_compute[254092]: 2025-11-25 16:40:50.477 254096 DEBUG nova.virt.libvirt.driver [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:40:50 compute-0 nova_compute[254092]: 2025-11-25 16:40:50.481 254096 INFO nova.virt.libvirt.driver [-] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Instance spawned successfully.
Nov 25 16:40:50 compute-0 nova_compute[254092]: 2025-11-25 16:40:50.481 254096 DEBUG nova.virt.libvirt.driver [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:40:50 compute-0 nova_compute[254092]: 2025-11-25 16:40:50.502 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:40:50 compute-0 nova_compute[254092]: 2025-11-25 16:40:50.508 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:40:50 compute-0 nova_compute[254092]: 2025-11-25 16:40:50.510 254096 DEBUG nova.virt.libvirt.driver [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:40:50 compute-0 nova_compute[254092]: 2025-11-25 16:40:50.511 254096 DEBUG nova.virt.libvirt.driver [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:40:50 compute-0 nova_compute[254092]: 2025-11-25 16:40:50.511 254096 DEBUG nova.virt.libvirt.driver [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:40:50 compute-0 nova_compute[254092]: 2025-11-25 16:40:50.512 254096 DEBUG nova.virt.libvirt.driver [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:40:50 compute-0 nova_compute[254092]: 2025-11-25 16:40:50.512 254096 DEBUG nova.virt.libvirt.driver [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:40:50 compute-0 nova_compute[254092]: 2025-11-25 16:40:50.512 254096 DEBUG nova.virt.libvirt.driver [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:40:50 compute-0 nova_compute[254092]: 2025-11-25 16:40:50.559 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fef208e1-3706-4d03-8385-12418e9dc230] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:40:50 compute-0 nova_compute[254092]: 2025-11-25 16:40:50.559 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088850.4717445, fef208e1-3706-4d03-8385-12418e9dc230 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:40:50 compute-0 nova_compute[254092]: 2025-11-25 16:40:50.560 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fef208e1-3706-4d03-8385-12418e9dc230] VM Paused (Lifecycle Event)
Nov 25 16:40:50 compute-0 nova_compute[254092]: 2025-11-25 16:40:50.588 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:40:50 compute-0 nova_compute[254092]: 2025-11-25 16:40:50.592 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088850.477283, fef208e1-3706-4d03-8385-12418e9dc230 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:40:50 compute-0 nova_compute[254092]: 2025-11-25 16:40:50.592 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fef208e1-3706-4d03-8385-12418e9dc230] VM Resumed (Lifecycle Event)
Nov 25 16:40:50 compute-0 nova_compute[254092]: 2025-11-25 16:40:50.599 254096 INFO nova.compute.manager [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Took 5.78 seconds to spawn the instance on the hypervisor.
Nov 25 16:40:50 compute-0 nova_compute[254092]: 2025-11-25 16:40:50.600 254096 DEBUG nova.compute.manager [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:40:50 compute-0 podman[322076]: 2025-11-25 16:40:50.623249383 +0000 UTC m=+0.053429971 container create f9f20c763ba8ffa0a44414ec4814fe868d695e5c9a6e9af09a59322126dad839 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:40:50 compute-0 nova_compute[254092]: 2025-11-25 16:40:50.623 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:40:50 compute-0 nova_compute[254092]: 2025-11-25 16:40:50.627 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:40:50 compute-0 systemd[1]: Started libpod-conmon-f9f20c763ba8ffa0a44414ec4814fe868d695e5c9a6e9af09a59322126dad839.scope.
Nov 25 16:40:50 compute-0 nova_compute[254092]: 2025-11-25 16:40:50.662 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fef208e1-3706-4d03-8385-12418e9dc230] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:40:50 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:40:50 compute-0 nova_compute[254092]: 2025-11-25 16:40:50.684 254096 INFO nova.compute.manager [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Took 7.11 seconds to build instance.
Nov 25 16:40:50 compute-0 podman[322076]: 2025-11-25 16:40:50.596889468 +0000 UTC m=+0.027070086 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:40:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41736214c61e8e784bfe20f83e87831dc58ca28b27d833d91e8e86513c66c372/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:40:50 compute-0 podman[322076]: 2025-11-25 16:40:50.700999152 +0000 UTC m=+0.131179760 container init f9f20c763ba8ffa0a44414ec4814fe868d695e5c9a6e9af09a59322126dad839 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 16:40:50 compute-0 nova_compute[254092]: 2025-11-25 16:40:50.700 254096 DEBUG oslo_concurrency.lockutils [None req-8b65ad4c-9166-4dc7-9ea0-ac3f719179e9 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "fef208e1-3706-4d03-8385-12418e9dc230" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.212s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:50 compute-0 podman[322076]: 2025-11-25 16:40:50.705991308 +0000 UTC m=+0.136171896 container start f9f20c763ba8ffa0a44414ec4814fe868d695e5c9a6e9af09a59322126dad839 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 16:40:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:40:50 compute-0 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[322091]: [NOTICE]   (322095) : New worker (322097) forked
Nov 25 16:40:50 compute-0 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[322091]: [NOTICE]   (322095) : Loading success.
Nov 25 16:40:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:40:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:40:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:40:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:40:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007549488832454256 of space, bias 1.0, pg target 0.22648466497362765 quantized to 32 (current 32)
Nov 25 16:40:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:40:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:40:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:40:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:40:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:40:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 16:40:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:40:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 16:40:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:40:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:40:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:40:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 16:40:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:40:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 16:40:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:40:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:40:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:40:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 16:40:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1664: 321 pgs: 321 active+clean; 88 MiB data, 532 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.6 MiB/s wr, 369 op/s
Nov 25 16:40:51 compute-0 ovn_controller[153477]: 2025-11-25T16:40:51Z|00571|binding|INFO|Releasing lport 4c0a1e28-b18a-4839-ae5a-4472548ad46f from this chassis (sb_readonly=0)
Nov 25 16:40:51 compute-0 nova_compute[254092]: 2025-11-25 16:40:51.752 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:52 compute-0 nova_compute[254092]: 2025-11-25 16:40:52.250 254096 DEBUG nova.compute.manager [req-af8b2503-d949-4cc9-adc4-4bfa29dfe177 req-243f1751-a30a-4094-aa62-83c58aae6855 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Received event network-vif-plugged-60897ca4-9177-413c-b0f0-808dbc7d34dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:40:52 compute-0 nova_compute[254092]: 2025-11-25 16:40:52.250 254096 DEBUG oslo_concurrency.lockutils [req-af8b2503-d949-4cc9-adc4-4bfa29dfe177 req-243f1751-a30a-4094-aa62-83c58aae6855 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "fef208e1-3706-4d03-8385-12418e9dc230-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:40:52 compute-0 nova_compute[254092]: 2025-11-25 16:40:52.250 254096 DEBUG oslo_concurrency.lockutils [req-af8b2503-d949-4cc9-adc4-4bfa29dfe177 req-243f1751-a30a-4094-aa62-83c58aae6855 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fef208e1-3706-4d03-8385-12418e9dc230-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:40:52 compute-0 nova_compute[254092]: 2025-11-25 16:40:52.251 254096 DEBUG oslo_concurrency.lockutils [req-af8b2503-d949-4cc9-adc4-4bfa29dfe177 req-243f1751-a30a-4094-aa62-83c58aae6855 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fef208e1-3706-4d03-8385-12418e9dc230-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:40:52 compute-0 nova_compute[254092]: 2025-11-25 16:40:52.251 254096 DEBUG nova.compute.manager [req-af8b2503-d949-4cc9-adc4-4bfa29dfe177 req-243f1751-a30a-4094-aa62-83c58aae6855 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] No waiting events found dispatching network-vif-plugged-60897ca4-9177-413c-b0f0-808dbc7d34dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:40:52 compute-0 nova_compute[254092]: 2025-11-25 16:40:52.251 254096 WARNING nova.compute.manager [req-af8b2503-d949-4cc9-adc4-4bfa29dfe177 req-243f1751-a30a-4094-aa62-83c58aae6855 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Received unexpected event network-vif-plugged-60897ca4-9177-413c-b0f0-808dbc7d34dc for instance with vm_state active and task_state None.
Nov 25 16:40:52 compute-0 ceph-mon[74985]: pgmap v1664: 321 pgs: 321 active+clean; 88 MiB data, 532 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.6 MiB/s wr, 369 op/s
Nov 25 16:40:53 compute-0 nova_compute[254092]: 2025-11-25 16:40:53.171 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1665: 321 pgs: 321 active+clean; 88 MiB data, 532 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 260 op/s
Nov 25 16:40:54 compute-0 nova_compute[254092]: 2025-11-25 16:40:54.006 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:54 compute-0 ceph-mon[74985]: pgmap v1665: 321 pgs: 321 active+clean; 88 MiB data, 532 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 260 op/s
Nov 25 16:40:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 16:40:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3373983916' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:40:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 16:40:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3373983916' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:40:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1666: 321 pgs: 321 active+clean; 88 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 1.8 MiB/s wr, 303 op/s
Nov 25 16:40:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/3373983916' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:40:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/3373983916' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:40:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:40:56 compute-0 nova_compute[254092]: 2025-11-25 16:40:56.309 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088841.3079152, d1ceaafd-59a6-45b1-833d-eb2a76e789be => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:40:56 compute-0 nova_compute[254092]: 2025-11-25 16:40:56.309 254096 INFO nova.compute.manager [-] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] VM Stopped (Lifecycle Event)
Nov 25 16:40:56 compute-0 nova_compute[254092]: 2025-11-25 16:40:56.335 254096 DEBUG nova.compute.manager [None req-691f60b8-290b-4390-845e-6ae545da3643 - - - - - -] [instance: d1ceaafd-59a6-45b1-833d-eb2a76e789be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:40:56 compute-0 ceph-mon[74985]: pgmap v1666: 321 pgs: 321 active+clean; 88 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 1.8 MiB/s wr, 303 op/s
Nov 25 16:40:56 compute-0 nova_compute[254092]: 2025-11-25 16:40:56.987 254096 INFO nova.compute.manager [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Rebuilding instance
Nov 25 16:40:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1667: 321 pgs: 321 active+clean; 88 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 1.8 MiB/s wr, 247 op/s
Nov 25 16:40:57 compute-0 nova_compute[254092]: 2025-11-25 16:40:57.548 254096 DEBUG nova.objects.instance [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'trusted_certs' on Instance uuid fef208e1-3706-4d03-8385-12418e9dc230 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:40:57 compute-0 nova_compute[254092]: 2025-11-25 16:40:57.562 254096 DEBUG nova.compute.manager [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:40:57 compute-0 nova_compute[254092]: 2025-11-25 16:40:57.617 254096 DEBUG nova.objects.instance [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'pci_requests' on Instance uuid fef208e1-3706-4d03-8385-12418e9dc230 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:40:57 compute-0 nova_compute[254092]: 2025-11-25 16:40:57.627 254096 DEBUG nova.objects.instance [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'pci_devices' on Instance uuid fef208e1-3706-4d03-8385-12418e9dc230 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:40:57 compute-0 nova_compute[254092]: 2025-11-25 16:40:57.639 254096 DEBUG nova.objects.instance [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'resources' on Instance uuid fef208e1-3706-4d03-8385-12418e9dc230 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:40:57 compute-0 nova_compute[254092]: 2025-11-25 16:40:57.657 254096 DEBUG nova.objects.instance [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'migration_context' on Instance uuid fef208e1-3706-4d03-8385-12418e9dc230 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:40:57 compute-0 nova_compute[254092]: 2025-11-25 16:40:57.665 254096 DEBUG nova.objects.instance [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 16:40:57 compute-0 nova_compute[254092]: 2025-11-25 16:40:57.668 254096 DEBUG nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 16:40:58 compute-0 nova_compute[254092]: 2025-11-25 16:40:58.174 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:58 compute-0 ceph-mon[74985]: pgmap v1667: 321 pgs: 321 active+clean; 88 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 1.8 MiB/s wr, 247 op/s
Nov 25 16:40:59 compute-0 nova_compute[254092]: 2025-11-25 16:40:59.052 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:40:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1668: 321 pgs: 321 active+clean; 88 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.1 MiB/s wr, 207 op/s
Nov 25 16:40:59 compute-0 nova_compute[254092]: 2025-11-25 16:40:59.944 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088844.8449953, 32b30534-761a-439a-85e5-4e2fe8f507df => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:40:59 compute-0 nova_compute[254092]: 2025-11-25 16:40:59.944 254096 INFO nova.compute.manager [-] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] VM Stopped (Lifecycle Event)
Nov 25 16:40:59 compute-0 sshd-session[322106]: Received disconnect from 80.94.93.119 port 43756:11:  [preauth]
Nov 25 16:40:59 compute-0 sshd-session[322106]: Disconnected from authenticating user root 80.94.93.119 port 43756 [preauth]
Nov 25 16:40:59 compute-0 nova_compute[254092]: 2025-11-25 16:40:59.964 254096 DEBUG nova.compute.manager [None req-5dcc45d8-ba3c-4fa3-aab7-fb410213162b - - - - - -] [instance: 32b30534-761a-439a-85e5-4e2fe8f507df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:41:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:41:00 compute-0 ceph-mon[74985]: pgmap v1668: 321 pgs: 321 active+clean; 88 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.1 MiB/s wr, 207 op/s
Nov 25 16:41:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1669: 321 pgs: 321 active+clean; 88 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.1 MiB/s wr, 207 op/s
Nov 25 16:41:01 compute-0 nova_compute[254092]: 2025-11-25 16:41:01.991 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088846.9888165, bd217c57-20a2-41c4-a969-7a4d94f0c7ce => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:41:01 compute-0 nova_compute[254092]: 2025-11-25 16:41:01.992 254096 INFO nova.compute.manager [-] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] VM Stopped (Lifecycle Event)
Nov 25 16:41:02 compute-0 nova_compute[254092]: 2025-11-25 16:41:02.009 254096 DEBUG nova.compute.manager [None req-5d329d7f-1a65-492e-af01-773778943f64 - - - - - -] [instance: bd217c57-20a2-41c4-a969-7a4d94f0c7ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:41:02 compute-0 ovn_controller[153477]: 2025-11-25T16:41:02Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:43:f9:27 10.100.0.13
Nov 25 16:41:02 compute-0 ovn_controller[153477]: 2025-11-25T16:41:02Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:43:f9:27 10.100.0.13
Nov 25 16:41:02 compute-0 ceph-mon[74985]: pgmap v1669: 321 pgs: 321 active+clean; 88 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.1 MiB/s wr, 207 op/s
Nov 25 16:41:03 compute-0 nova_compute[254092]: 2025-11-25 16:41:03.176 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:03 compute-0 nova_compute[254092]: 2025-11-25 16:41:03.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:41:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1670: 321 pgs: 321 active+clean; 88 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 66 op/s
Nov 25 16:41:04 compute-0 nova_compute[254092]: 2025-11-25 16:41:04.054 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:04 compute-0 nova_compute[254092]: 2025-11-25 16:41:04.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:41:04 compute-0 nova_compute[254092]: 2025-11-25 16:41:04.504 254096 DEBUG oslo_concurrency.lockutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:04 compute-0 nova_compute[254092]: 2025-11-25 16:41:04.504 254096 DEBUG oslo_concurrency.lockutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:04 compute-0 nova_compute[254092]: 2025-11-25 16:41:04.549 254096 DEBUG nova.compute.manager [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:41:04 compute-0 nova_compute[254092]: 2025-11-25 16:41:04.673 254096 DEBUG oslo_concurrency.lockutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:04 compute-0 nova_compute[254092]: 2025-11-25 16:41:04.673 254096 DEBUG oslo_concurrency.lockutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:04 compute-0 nova_compute[254092]: 2025-11-25 16:41:04.684 254096 DEBUG nova.virt.hardware [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:41:04 compute-0 nova_compute[254092]: 2025-11-25 16:41:04.684 254096 INFO nova.compute.claims [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:41:04 compute-0 ceph-mon[74985]: pgmap v1670: 321 pgs: 321 active+clean; 88 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 66 op/s
Nov 25 16:41:04 compute-0 nova_compute[254092]: 2025-11-25 16:41:04.814 254096 DEBUG oslo_concurrency.processutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:41:05 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/520643533' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:41:05 compute-0 nova_compute[254092]: 2025-11-25 16:41:05.265 254096 DEBUG oslo_concurrency.processutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:05 compute-0 nova_compute[254092]: 2025-11-25 16:41:05.272 254096 DEBUG nova.compute.provider_tree [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:41:05 compute-0 nova_compute[254092]: 2025-11-25 16:41:05.290 254096 DEBUG nova.scheduler.client.report [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:41:05 compute-0 nova_compute[254092]: 2025-11-25 16:41:05.317 254096 DEBUG oslo_concurrency.lockutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:05 compute-0 nova_compute[254092]: 2025-11-25 16:41:05.318 254096 DEBUG nova.compute.manager [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:41:05 compute-0 nova_compute[254092]: 2025-11-25 16:41:05.384 254096 DEBUG nova.compute.manager [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:41:05 compute-0 nova_compute[254092]: 2025-11-25 16:41:05.385 254096 DEBUG nova.network.neutron [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:41:05 compute-0 nova_compute[254092]: 2025-11-25 16:41:05.414 254096 INFO nova.virt.libvirt.driver [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:41:05 compute-0 nova_compute[254092]: 2025-11-25 16:41:05.440 254096 DEBUG nova.compute.manager [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:41:05 compute-0 nova_compute[254092]: 2025-11-25 16:41:05.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:41:05 compute-0 nova_compute[254092]: 2025-11-25 16:41:05.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 16:41:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1671: 321 pgs: 321 active+clean; 111 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.7 MiB/s wr, 120 op/s
Nov 25 16:41:05 compute-0 nova_compute[254092]: 2025-11-25 16:41:05.564 254096 DEBUG nova.compute.manager [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:41:05 compute-0 nova_compute[254092]: 2025-11-25 16:41:05.566 254096 DEBUG nova.virt.libvirt.driver [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:41:05 compute-0 nova_compute[254092]: 2025-11-25 16:41:05.566 254096 INFO nova.virt.libvirt.driver [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Creating image(s)
Nov 25 16:41:05 compute-0 nova_compute[254092]: 2025-11-25 16:41:05.590 254096 DEBUG nova.storage.rbd_utils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] rbd image 013dc18e-57cd-4733-8e98-7d20e3b5c4db_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:05 compute-0 nova_compute[254092]: 2025-11-25 16:41:05.614 254096 DEBUG nova.storage.rbd_utils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] rbd image 013dc18e-57cd-4733-8e98-7d20e3b5c4db_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:05 compute-0 nova_compute[254092]: 2025-11-25 16:41:05.643 254096 DEBUG nova.storage.rbd_utils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] rbd image 013dc18e-57cd-4733-8e98-7d20e3b5c4db_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:05 compute-0 nova_compute[254092]: 2025-11-25 16:41:05.648 254096 DEBUG oslo_concurrency.processutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:41:05 compute-0 nova_compute[254092]: 2025-11-25 16:41:05.716 254096 DEBUG oslo_concurrency.processutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:05 compute-0 nova_compute[254092]: 2025-11-25 16:41:05.717 254096 DEBUG oslo_concurrency.lockutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:05 compute-0 nova_compute[254092]: 2025-11-25 16:41:05.718 254096 DEBUG oslo_concurrency.lockutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:05 compute-0 nova_compute[254092]: 2025-11-25 16:41:05.718 254096 DEBUG oslo_concurrency.lockutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:05 compute-0 nova_compute[254092]: 2025-11-25 16:41:05.741 254096 DEBUG nova.storage.rbd_utils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] rbd image 013dc18e-57cd-4733-8e98-7d20e3b5c4db_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:05 compute-0 nova_compute[254092]: 2025-11-25 16:41:05.745 254096 DEBUG oslo_concurrency.processutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 013dc18e-57cd-4733-8e98-7d20e3b5c4db_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:05 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/520643533' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:41:06 compute-0 nova_compute[254092]: 2025-11-25 16:41:06.082 254096 DEBUG oslo_concurrency.processutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 013dc18e-57cd-4733-8e98-7d20e3b5c4db_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:06 compute-0 nova_compute[254092]: 2025-11-25 16:41:06.146 254096 DEBUG nova.storage.rbd_utils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] resizing rbd image 013dc18e-57cd-4733-8e98-7d20e3b5c4db_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:41:06 compute-0 nova_compute[254092]: 2025-11-25 16:41:06.229 254096 DEBUG nova.objects.instance [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lazy-loading 'migration_context' on Instance uuid 013dc18e-57cd-4733-8e98-7d20e3b5c4db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:41:06 compute-0 nova_compute[254092]: 2025-11-25 16:41:06.245 254096 DEBUG nova.virt.libvirt.driver [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:41:06 compute-0 nova_compute[254092]: 2025-11-25 16:41:06.247 254096 DEBUG nova.virt.libvirt.driver [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Ensure instance console log exists: /var/lib/nova/instances/013dc18e-57cd-4733-8e98-7d20e3b5c4db/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:41:06 compute-0 nova_compute[254092]: 2025-11-25 16:41:06.248 254096 DEBUG oslo_concurrency.lockutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:06 compute-0 nova_compute[254092]: 2025-11-25 16:41:06.248 254096 DEBUG oslo_concurrency.lockutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:06 compute-0 nova_compute[254092]: 2025-11-25 16:41:06.248 254096 DEBUG oslo_concurrency.lockutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:06 compute-0 nova_compute[254092]: 2025-11-25 16:41:06.476 254096 DEBUG nova.policy [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e0d077039e0b4d9e8d5663768f40fa48', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2319f378c7fb448ebe89427d8dfa7e43', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:41:06 compute-0 nova_compute[254092]: 2025-11-25 16:41:06.498 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:41:06 compute-0 ceph-mon[74985]: pgmap v1671: 321 pgs: 321 active+clean; 111 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.7 MiB/s wr, 120 op/s
Nov 25 16:41:07 compute-0 nova_compute[254092]: 2025-11-25 16:41:07.225 254096 DEBUG oslo_concurrency.lockutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "dce3a591-9fb6-4495-a7fb-867af2de384f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:07 compute-0 nova_compute[254092]: 2025-11-25 16:41:07.226 254096 DEBUG oslo_concurrency.lockutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "dce3a591-9fb6-4495-a7fb-867af2de384f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:07 compute-0 nova_compute[254092]: 2025-11-25 16:41:07.242 254096 DEBUG nova.compute.manager [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:41:07 compute-0 nova_compute[254092]: 2025-11-25 16:41:07.324 254096 DEBUG oslo_concurrency.lockutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:07 compute-0 nova_compute[254092]: 2025-11-25 16:41:07.325 254096 DEBUG oslo_concurrency.lockutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:07 compute-0 nova_compute[254092]: 2025-11-25 16:41:07.335 254096 DEBUG nova.virt.hardware [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:41:07 compute-0 nova_compute[254092]: 2025-11-25 16:41:07.335 254096 INFO nova.compute.claims [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:41:07 compute-0 nova_compute[254092]: 2025-11-25 16:41:07.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:41:07 compute-0 nova_compute[254092]: 2025-11-25 16:41:07.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:41:07 compute-0 nova_compute[254092]: 2025-11-25 16:41:07.509 254096 DEBUG oslo_concurrency.processutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:07 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 16:41:07 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 16:41:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1672: 321 pgs: 321 active+clean; 121 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 87 op/s
Nov 25 16:41:07 compute-0 nova_compute[254092]: 2025-11-25 16:41:07.541 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:07 compute-0 nova_compute[254092]: 2025-11-25 16:41:07.712 254096 DEBUG nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 16:41:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:41:07 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1894277031' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:41:07 compute-0 nova_compute[254092]: 2025-11-25 16:41:07.974 254096 DEBUG oslo_concurrency.processutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:07 compute-0 nova_compute[254092]: 2025-11-25 16:41:07.980 254096 DEBUG nova.compute.provider_tree [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:41:07 compute-0 nova_compute[254092]: 2025-11-25 16:41:07.996 254096 DEBUG nova.scheduler.client.report [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:41:08 compute-0 nova_compute[254092]: 2025-11-25 16:41:08.023 254096 DEBUG oslo_concurrency.lockutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:08 compute-0 nova_compute[254092]: 2025-11-25 16:41:08.023 254096 DEBUG nova.compute.manager [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:41:08 compute-0 nova_compute[254092]: 2025-11-25 16:41:08.027 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.486s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:08 compute-0 nova_compute[254092]: 2025-11-25 16:41:08.027 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:08 compute-0 nova_compute[254092]: 2025-11-25 16:41:08.027 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 16:41:08 compute-0 nova_compute[254092]: 2025-11-25 16:41:08.028 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:08 compute-0 nova_compute[254092]: 2025-11-25 16:41:08.119 254096 DEBUG nova.compute.manager [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:41:08 compute-0 nova_compute[254092]: 2025-11-25 16:41:08.120 254096 DEBUG nova.network.neutron [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:41:08 compute-0 nova_compute[254092]: 2025-11-25 16:41:08.147 254096 INFO nova.virt.libvirt.driver [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:41:08 compute-0 nova_compute[254092]: 2025-11-25 16:41:08.178 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:08 compute-0 nova_compute[254092]: 2025-11-25 16:41:08.184 254096 DEBUG nova.compute.manager [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:41:08 compute-0 nova_compute[254092]: 2025-11-25 16:41:08.316 254096 DEBUG nova.compute.manager [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:41:08 compute-0 nova_compute[254092]: 2025-11-25 16:41:08.319 254096 DEBUG nova.virt.libvirt.driver [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:41:08 compute-0 nova_compute[254092]: 2025-11-25 16:41:08.319 254096 INFO nova.virt.libvirt.driver [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Creating image(s)
Nov 25 16:41:08 compute-0 nova_compute[254092]: 2025-11-25 16:41:08.344 254096 DEBUG nova.storage.rbd_utils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] rbd image dce3a591-9fb6-4495-a7fb-867af2de384f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:08 compute-0 nova_compute[254092]: 2025-11-25 16:41:08.377 254096 DEBUG nova.storage.rbd_utils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] rbd image dce3a591-9fb6-4495-a7fb-867af2de384f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:08 compute-0 nova_compute[254092]: 2025-11-25 16:41:08.406 254096 DEBUG nova.storage.rbd_utils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] rbd image dce3a591-9fb6-4495-a7fb-867af2de384f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:08 compute-0 nova_compute[254092]: 2025-11-25 16:41:08.411 254096 DEBUG oslo_concurrency.processutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:41:08 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2867797437' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:41:08 compute-0 nova_compute[254092]: 2025-11-25 16:41:08.477 254096 DEBUG oslo_concurrency.processutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:08 compute-0 nova_compute[254092]: 2025-11-25 16:41:08.478 254096 DEBUG oslo_concurrency.lockutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:08 compute-0 nova_compute[254092]: 2025-11-25 16:41:08.478 254096 DEBUG oslo_concurrency.lockutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:08 compute-0 nova_compute[254092]: 2025-11-25 16:41:08.479 254096 DEBUG oslo_concurrency.lockutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:08 compute-0 nova_compute[254092]: 2025-11-25 16:41:08.498 254096 DEBUG nova.storage.rbd_utils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] rbd image dce3a591-9fb6-4495-a7fb-867af2de384f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:08 compute-0 nova_compute[254092]: 2025-11-25 16:41:08.501 254096 DEBUG oslo_concurrency.processutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 dce3a591-9fb6-4495-a7fb-867af2de384f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:08 compute-0 nova_compute[254092]: 2025-11-25 16:41:08.529 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:08 compute-0 nova_compute[254092]: 2025-11-25 16:41:08.614 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000003d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:41:08 compute-0 nova_compute[254092]: 2025-11-25 16:41:08.615 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000003d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:41:08 compute-0 podman[322426]: 2025-11-25 16:41:08.639746842 +0000 UTC m=+0.064699916 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 25 16:41:08 compute-0 podman[322418]: 2025-11-25 16:41:08.639842254 +0000 UTC m=+0.065076877 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Nov 25 16:41:08 compute-0 podman[322427]: 2025-11-25 16:41:08.669980091 +0000 UTC m=+0.090621759 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Nov 25 16:41:08 compute-0 nova_compute[254092]: 2025-11-25 16:41:08.794 254096 DEBUG nova.policy [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e0d077039e0b4d9e8d5663768f40fa48', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2319f378c7fb448ebe89427d8dfa7e43', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:41:08 compute-0 ceph-mon[74985]: pgmap v1672: 321 pgs: 321 active+clean; 121 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 87 op/s
Nov 25 16:41:08 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1894277031' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:41:08 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2867797437' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:41:08 compute-0 nova_compute[254092]: 2025-11-25 16:41:08.827 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:41:08 compute-0 nova_compute[254092]: 2025-11-25 16:41:08.829 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3988MB free_disk=59.94293212890625GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 16:41:08 compute-0 nova_compute[254092]: 2025-11-25 16:41:08.829 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:08 compute-0 nova_compute[254092]: 2025-11-25 16:41:08.829 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:08 compute-0 nova_compute[254092]: 2025-11-25 16:41:08.869 254096 DEBUG oslo_concurrency.processutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 dce3a591-9fb6-4495-a7fb-867af2de384f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.368s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:08 compute-0 nova_compute[254092]: 2025-11-25 16:41:08.925 254096 DEBUG nova.storage.rbd_utils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] resizing rbd image dce3a591-9fb6-4495-a7fb-867af2de384f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:41:08 compute-0 nova_compute[254092]: 2025-11-25 16:41:08.981 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance fef208e1-3706-4d03-8385-12418e9dc230 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:41:08 compute-0 nova_compute[254092]: 2025-11-25 16:41:08.981 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 013dc18e-57cd-4733-8e98-7d20e3b5c4db actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:41:08 compute-0 nova_compute[254092]: 2025-11-25 16:41:08.982 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance dce3a591-9fb6-4495-a7fb-867af2de384f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:41:08 compute-0 nova_compute[254092]: 2025-11-25 16:41:08.982 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 16:41:08 compute-0 nova_compute[254092]: 2025-11-25 16:41:08.982 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 16:41:09 compute-0 nova_compute[254092]: 2025-11-25 16:41:09.056 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:09 compute-0 nova_compute[254092]: 2025-11-25 16:41:09.119 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:09 compute-0 nova_compute[254092]: 2025-11-25 16:41:09.182 254096 DEBUG nova.objects.instance [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lazy-loading 'migration_context' on Instance uuid dce3a591-9fb6-4495-a7fb-867af2de384f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:41:09 compute-0 nova_compute[254092]: 2025-11-25 16:41:09.212 254096 DEBUG nova.virt.libvirt.driver [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:41:09 compute-0 nova_compute[254092]: 2025-11-25 16:41:09.212 254096 DEBUG nova.virt.libvirt.driver [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Ensure instance console log exists: /var/lib/nova/instances/dce3a591-9fb6-4495-a7fb-867af2de384f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:41:09 compute-0 nova_compute[254092]: 2025-11-25 16:41:09.213 254096 DEBUG oslo_concurrency.lockutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:09 compute-0 nova_compute[254092]: 2025-11-25 16:41:09.213 254096 DEBUG oslo_concurrency.lockutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:09 compute-0 nova_compute[254092]: 2025-11-25 16:41:09.213 254096 DEBUG oslo_concurrency.lockutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:41:09 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1077200639' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:41:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1673: 321 pgs: 321 active+clean; 121 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 16:41:09 compute-0 nova_compute[254092]: 2025-11-25 16:41:09.557 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:09 compute-0 nova_compute[254092]: 2025-11-25 16:41:09.566 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:41:09 compute-0 nova_compute[254092]: 2025-11-25 16:41:09.636 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:41:09 compute-0 nova_compute[254092]: 2025-11-25 16:41:09.704 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 16:41:09 compute-0 nova_compute[254092]: 2025-11-25 16:41:09.704 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.875s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:09 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1077200639' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:41:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:41:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:41:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:41:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:41:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:41:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:41:10 compute-0 kernel: tap60897ca4-91 (unregistering): left promiscuous mode
Nov 25 16:41:10 compute-0 NetworkManager[48891]: <info>  [1764088870.4542] device (tap60897ca4-91): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:41:10 compute-0 ovn_controller[153477]: 2025-11-25T16:41:10Z|00572|binding|INFO|Releasing lport 60897ca4-9177-413c-b0f0-808dbc7d34dc from this chassis (sb_readonly=0)
Nov 25 16:41:10 compute-0 ovn_controller[153477]: 2025-11-25T16:41:10Z|00573|binding|INFO|Setting lport 60897ca4-9177-413c-b0f0-808dbc7d34dc down in Southbound
Nov 25 16:41:10 compute-0 nova_compute[254092]: 2025-11-25 16:41:10.461 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:10 compute-0 ovn_controller[153477]: 2025-11-25T16:41:10Z|00574|binding|INFO|Removing iface tap60897ca4-91 ovn-installed in OVS
Nov 25 16:41:10 compute-0 nova_compute[254092]: 2025-11-25 16:41:10.464 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:10 compute-0 nova_compute[254092]: 2025-11-25 16:41:10.484 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:10.494 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:f9:27 10.100.0.13'], port_security=['fa:16:3e:43:f9:27 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'fef208e1-3706-4d03-8385-12418e9dc230', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ff6b3c59785407bb06c9d7e3969ea4f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b774cfb5-cd97-4ca0-9d7e-110986406220', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d2dad0a-51f1-4edf-9261-6f40860e89ef, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=60897ca4-9177-413c-b0f0-808dbc7d34dc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:41:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:10.496 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 60897ca4-9177-413c-b0f0-808dbc7d34dc in datapath 62c0a8be-b828-4765-9cf8-f2477a8ef1d9 unbound from our chassis
Nov 25 16:41:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:10.497 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 62c0a8be-b828-4765-9cf8-f2477a8ef1d9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:41:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:10.498 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5d577917-b312-4529-afbf-7985c9d138bc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:10.498 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 namespace which is not needed anymore
Nov 25 16:41:10 compute-0 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d0000003d.scope: Deactivated successfully.
Nov 25 16:41:10 compute-0 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d0000003d.scope: Consumed 12.890s CPU time.
Nov 25 16:41:10 compute-0 systemd-machined[216343]: Machine qemu-71-instance-0000003d terminated.
Nov 25 16:41:10 compute-0 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[322091]: [NOTICE]   (322095) : haproxy version is 2.8.14-c23fe91
Nov 25 16:41:10 compute-0 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[322091]: [NOTICE]   (322095) : path to executable is /usr/sbin/haproxy
Nov 25 16:41:10 compute-0 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[322091]: [WARNING]  (322095) : Exiting Master process...
Nov 25 16:41:10 compute-0 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[322091]: [ALERT]    (322095) : Current worker (322097) exited with code 143 (Terminated)
Nov 25 16:41:10 compute-0 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[322091]: [WARNING]  (322095) : All workers exited. Exiting... (0)
Nov 25 16:41:10 compute-0 systemd[1]: libpod-f9f20c763ba8ffa0a44414ec4814fe868d695e5c9a6e9af09a59322126dad839.scope: Deactivated successfully.
Nov 25 16:41:10 compute-0 podman[322619]: 2025-11-25 16:41:10.621617548 +0000 UTC m=+0.042476673 container died f9f20c763ba8ffa0a44414ec4814fe868d695e5c9a6e9af09a59322126dad839 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 16:41:10 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f9f20c763ba8ffa0a44414ec4814fe868d695e5c9a6e9af09a59322126dad839-userdata-shm.mount: Deactivated successfully.
Nov 25 16:41:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-41736214c61e8e784bfe20f83e87831dc58ca28b27d833d91e8e86513c66c372-merged.mount: Deactivated successfully.
Nov 25 16:41:10 compute-0 podman[322619]: 2025-11-25 16:41:10.676942059 +0000 UTC m=+0.097801104 container cleanup f9f20c763ba8ffa0a44414ec4814fe868d695e5c9a6e9af09a59322126dad839 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:41:10 compute-0 systemd[1]: libpod-conmon-f9f20c763ba8ffa0a44414ec4814fe868d695e5c9a6e9af09a59322126dad839.scope: Deactivated successfully.
Nov 25 16:41:10 compute-0 nova_compute[254092]: 2025-11-25 16:41:10.704 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:41:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:41:10 compute-0 nova_compute[254092]: 2025-11-25 16:41:10.726 254096 INFO nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Instance shutdown successfully after 13 seconds.
Nov 25 16:41:10 compute-0 nova_compute[254092]: 2025-11-25 16:41:10.733 254096 INFO nova.virt.libvirt.driver [-] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Instance destroyed successfully.
Nov 25 16:41:10 compute-0 nova_compute[254092]: 2025-11-25 16:41:10.739 254096 INFO nova.virt.libvirt.driver [-] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Instance destroyed successfully.
Nov 25 16:41:10 compute-0 nova_compute[254092]: 2025-11-25 16:41:10.740 254096 DEBUG nova.virt.libvirt.vif [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:40:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1975310078',display_name='tempest-ServerDiskConfigTestJSON-server-1975310078',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1975310078',id=61,image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:40:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6ff6b3c59785407bb06c9d7e3969ea4f',ramdisk_id='',reservation_id='r-b4grt1v4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1055022473',owner_user_name='tempest-ServerDiskConfigTestJSON-1055022473-project-m
ember'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:40:55Z,user_data=None,user_id='3c1fd56de7cd4f5c9b1d85ffe8545c90',uuid=fef208e1-3706-4d03-8385-12418e9dc230,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "address": "fa:16:3e:43:f9:27", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60897ca4-91", "ovs_interfaceid": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:41:10 compute-0 nova_compute[254092]: 2025-11-25 16:41:10.740 254096 DEBUG nova.network.os_vif_util [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converting VIF {"id": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "address": "fa:16:3e:43:f9:27", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60897ca4-91", "ovs_interfaceid": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:41:10 compute-0 nova_compute[254092]: 2025-11-25 16:41:10.741 254096 DEBUG nova.network.os_vif_util [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:f9:27,bridge_name='br-int',has_traffic_filtering=True,id=60897ca4-9177-413c-b0f0-808dbc7d34dc,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60897ca4-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:41:10 compute-0 nova_compute[254092]: 2025-11-25 16:41:10.741 254096 DEBUG os_vif [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:f9:27,bridge_name='br-int',has_traffic_filtering=True,id=60897ca4-9177-413c-b0f0-808dbc7d34dc,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60897ca4-91') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:41:10 compute-0 nova_compute[254092]: 2025-11-25 16:41:10.743 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:10 compute-0 nova_compute[254092]: 2025-11-25 16:41:10.743 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60897ca4-91, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:10 compute-0 podman[322654]: 2025-11-25 16:41:10.748087019 +0000 UTC m=+0.050290645 container remove f9f20c763ba8ffa0a44414ec4814fe868d695e5c9a6e9af09a59322126dad839 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 16:41:10 compute-0 nova_compute[254092]: 2025-11-25 16:41:10.748 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:41:10 compute-0 nova_compute[254092]: 2025-11-25 16:41:10.750 254096 INFO os_vif [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:f9:27,bridge_name='br-int',has_traffic_filtering=True,id=60897ca4-9177-413c-b0f0-808dbc7d34dc,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60897ca4-91')
Nov 25 16:41:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:10.753 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bea5f54d-59df-43c7-861d-7e841f54147a]: (4, ('Tue Nov 25 04:41:10 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 (f9f20c763ba8ffa0a44414ec4814fe868d695e5c9a6e9af09a59322126dad839)\nf9f20c763ba8ffa0a44414ec4814fe868d695e5c9a6e9af09a59322126dad839\nTue Nov 25 04:41:10 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 (f9f20c763ba8ffa0a44414ec4814fe868d695e5c9a6e9af09a59322126dad839)\nf9f20c763ba8ffa0a44414ec4814fe868d695e5c9a6e9af09a59322126dad839\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:10.755 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[608a7085-1069-4e0e-9d79-76a867317cfb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:10.756 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62c0a8be-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:10 compute-0 kernel: tap62c0a8be-b0: left promiscuous mode
Nov 25 16:41:10 compute-0 nova_compute[254092]: 2025-11-25 16:41:10.764 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:10 compute-0 nova_compute[254092]: 2025-11-25 16:41:10.775 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:10.775 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e9be7268-95cd-47c5-8b0e-955de14665cd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:10.790 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[639d685b-a2bd-4c4e-b20e-111c4e7e6556]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:10.791 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[dd24a085-cf68-40c4-84bb-1812705b3577]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:10.805 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5c4172e0-5d74-4df1-b361-927f321c4bf5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 530762, 'reachable_time': 38528, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322692, 'error': None, 'target': 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:10.807 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:41:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:10.807 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[31700902-6c8d-4502-be4a-691890b3e5c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:10 compute-0 systemd[1]: run-netns-ovnmeta\x2d62c0a8be\x2db828\x2d4765\x2d9cf8\x2df2477a8ef1d9.mount: Deactivated successfully.
Nov 25 16:41:10 compute-0 ceph-mon[74985]: pgmap v1673: 321 pgs: 321 active+clean; 121 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 16:41:11 compute-0 nova_compute[254092]: 2025-11-25 16:41:11.122 254096 DEBUG nova.compute.manager [req-8bbd3a07-5617-4429-be30-ecf8d6bc6259 req-da684d24-80d8-4902-babc-3878487c6766 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Received event network-vif-unplugged-60897ca4-9177-413c-b0f0-808dbc7d34dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:41:11 compute-0 nova_compute[254092]: 2025-11-25 16:41:11.122 254096 DEBUG oslo_concurrency.lockutils [req-8bbd3a07-5617-4429-be30-ecf8d6bc6259 req-da684d24-80d8-4902-babc-3878487c6766 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "fef208e1-3706-4d03-8385-12418e9dc230-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:11 compute-0 nova_compute[254092]: 2025-11-25 16:41:11.123 254096 DEBUG oslo_concurrency.lockutils [req-8bbd3a07-5617-4429-be30-ecf8d6bc6259 req-da684d24-80d8-4902-babc-3878487c6766 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fef208e1-3706-4d03-8385-12418e9dc230-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:11 compute-0 nova_compute[254092]: 2025-11-25 16:41:11.123 254096 DEBUG oslo_concurrency.lockutils [req-8bbd3a07-5617-4429-be30-ecf8d6bc6259 req-da684d24-80d8-4902-babc-3878487c6766 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fef208e1-3706-4d03-8385-12418e9dc230-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:11 compute-0 nova_compute[254092]: 2025-11-25 16:41:11.123 254096 DEBUG nova.compute.manager [req-8bbd3a07-5617-4429-be30-ecf8d6bc6259 req-da684d24-80d8-4902-babc-3878487c6766 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] No waiting events found dispatching network-vif-unplugged-60897ca4-9177-413c-b0f0-808dbc7d34dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:41:11 compute-0 nova_compute[254092]: 2025-11-25 16:41:11.123 254096 WARNING nova.compute.manager [req-8bbd3a07-5617-4429-be30-ecf8d6bc6259 req-da684d24-80d8-4902-babc-3878487c6766 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Received unexpected event network-vif-unplugged-60897ca4-9177-413c-b0f0-808dbc7d34dc for instance with vm_state active and task_state rebuilding.
Nov 25 16:41:11 compute-0 nova_compute[254092]: 2025-11-25 16:41:11.194 254096 DEBUG nova.network.neutron [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Successfully created port: 5136afea-102e-46a1-8fdb-0af970c5af04 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:41:11 compute-0 nova_compute[254092]: 2025-11-25 16:41:11.235 254096 INFO nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Deleting instance files /var/lib/nova/instances/fef208e1-3706-4d03-8385-12418e9dc230_del
Nov 25 16:41:11 compute-0 nova_compute[254092]: 2025-11-25 16:41:11.236 254096 INFO nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Deletion of /var/lib/nova/instances/fef208e1-3706-4d03-8385-12418e9dc230_del complete
Nov 25 16:41:11 compute-0 nova_compute[254092]: 2025-11-25 16:41:11.419 254096 DEBUG nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:41:11 compute-0 nova_compute[254092]: 2025-11-25 16:41:11.420 254096 INFO nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Creating image(s)
Nov 25 16:41:11 compute-0 nova_compute[254092]: 2025-11-25 16:41:11.448 254096 DEBUG nova.storage.rbd_utils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image fef208e1-3706-4d03-8385-12418e9dc230_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:11 compute-0 nova_compute[254092]: 2025-11-25 16:41:11.473 254096 DEBUG nova.storage.rbd_utils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image fef208e1-3706-4d03-8385-12418e9dc230_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:11 compute-0 nova_compute[254092]: 2025-11-25 16:41:11.498 254096 DEBUG nova.storage.rbd_utils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image fef208e1-3706-4d03-8385-12418e9dc230_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:11 compute-0 nova_compute[254092]: 2025-11-25 16:41:11.502 254096 DEBUG oslo_concurrency.processutils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1674: 321 pgs: 321 active+clean; 213 MiB data, 616 MiB used, 59 GiB / 60 GiB avail; 368 KiB/s rd, 5.7 MiB/s wr, 120 op/s
Nov 25 16:41:11 compute-0 nova_compute[254092]: 2025-11-25 16:41:11.560 254096 DEBUG oslo_concurrency.lockutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "b5c5a442-8e8e-40c5-9634-e36c49e6e41b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:11 compute-0 nova_compute[254092]: 2025-11-25 16:41:11.561 254096 DEBUG oslo_concurrency.lockutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "b5c5a442-8e8e-40c5-9634-e36c49e6e41b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:11 compute-0 nova_compute[254092]: 2025-11-25 16:41:11.580 254096 DEBUG nova.compute.manager [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:41:11 compute-0 nova_compute[254092]: 2025-11-25 16:41:11.595 254096 DEBUG oslo_concurrency.processutils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:11 compute-0 nova_compute[254092]: 2025-11-25 16:41:11.596 254096 DEBUG oslo_concurrency.lockutils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:11 compute-0 nova_compute[254092]: 2025-11-25 16:41:11.597 254096 DEBUG oslo_concurrency.lockutils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:11 compute-0 nova_compute[254092]: 2025-11-25 16:41:11.597 254096 DEBUG oslo_concurrency.lockutils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:11 compute-0 nova_compute[254092]: 2025-11-25 16:41:11.625 254096 DEBUG nova.storage.rbd_utils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image fef208e1-3706-4d03-8385-12418e9dc230_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:11 compute-0 nova_compute[254092]: 2025-11-25 16:41:11.630 254096 DEBUG oslo_concurrency.processutils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 fef208e1-3706-4d03-8385-12418e9dc230_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:11 compute-0 nova_compute[254092]: 2025-11-25 16:41:11.698 254096 DEBUG oslo_concurrency.lockutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:11 compute-0 nova_compute[254092]: 2025-11-25 16:41:11.699 254096 DEBUG oslo_concurrency.lockutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:11 compute-0 nova_compute[254092]: 2025-11-25 16:41:11.709 254096 DEBUG nova.virt.hardware [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:41:11 compute-0 nova_compute[254092]: 2025-11-25 16:41:11.709 254096 INFO nova.compute.claims [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:41:11 compute-0 nova_compute[254092]: 2025-11-25 16:41:11.916 254096 DEBUG oslo_concurrency.processutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:12 compute-0 nova_compute[254092]: 2025-11-25 16:41:12.044 254096 DEBUG oslo_concurrency.processutils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 fef208e1-3706-4d03-8385-12418e9dc230_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:12 compute-0 nova_compute[254092]: 2025-11-25 16:41:12.098 254096 DEBUG nova.storage.rbd_utils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] resizing rbd image fef208e1-3706-4d03-8385-12418e9dc230_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:41:12 compute-0 nova_compute[254092]: 2025-11-25 16:41:12.183 254096 DEBUG nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:41:12 compute-0 nova_compute[254092]: 2025-11-25 16:41:12.184 254096 DEBUG nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Ensure instance console log exists: /var/lib/nova/instances/fef208e1-3706-4d03-8385-12418e9dc230/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:41:12 compute-0 nova_compute[254092]: 2025-11-25 16:41:12.184 254096 DEBUG oslo_concurrency.lockutils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:12 compute-0 nova_compute[254092]: 2025-11-25 16:41:12.185 254096 DEBUG oslo_concurrency.lockutils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:12 compute-0 nova_compute[254092]: 2025-11-25 16:41:12.185 254096 DEBUG oslo_concurrency.lockutils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:12 compute-0 nova_compute[254092]: 2025-11-25 16:41:12.187 254096 DEBUG nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Start _get_guest_xml network_info=[{"id": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "address": "fa:16:3e:43:f9:27", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60897ca4-91", "ovs_interfaceid": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:36Z,direct_url=<?>,disk_format='qcow2',id=a4aa3708-bb73-4b5a-b3f3-42153358021e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:41:12 compute-0 nova_compute[254092]: 2025-11-25 16:41:12.191 254096 WARNING nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Nov 25 16:41:12 compute-0 nova_compute[254092]: 2025-11-25 16:41:12.197 254096 DEBUG nova.virt.libvirt.host [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:41:12 compute-0 nova_compute[254092]: 2025-11-25 16:41:12.198 254096 DEBUG nova.virt.libvirt.host [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:41:12 compute-0 nova_compute[254092]: 2025-11-25 16:41:12.204 254096 DEBUG nova.virt.libvirt.host [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:41:12 compute-0 nova_compute[254092]: 2025-11-25 16:41:12.204 254096 DEBUG nova.virt.libvirt.host [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:41:12 compute-0 nova_compute[254092]: 2025-11-25 16:41:12.205 254096 DEBUG nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:41:12 compute-0 nova_compute[254092]: 2025-11-25 16:41:12.205 254096 DEBUG nova.virt.hardware [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:36Z,direct_url=<?>,disk_format='qcow2',id=a4aa3708-bb73-4b5a-b3f3-42153358021e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:41:12 compute-0 nova_compute[254092]: 2025-11-25 16:41:12.205 254096 DEBUG nova.virt.hardware [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:41:12 compute-0 nova_compute[254092]: 2025-11-25 16:41:12.206 254096 DEBUG nova.virt.hardware [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:41:12 compute-0 nova_compute[254092]: 2025-11-25 16:41:12.206 254096 DEBUG nova.virt.hardware [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:41:12 compute-0 nova_compute[254092]: 2025-11-25 16:41:12.206 254096 DEBUG nova.virt.hardware [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:41:12 compute-0 nova_compute[254092]: 2025-11-25 16:41:12.206 254096 DEBUG nova.virt.hardware [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:41:12 compute-0 nova_compute[254092]: 2025-11-25 16:41:12.207 254096 DEBUG nova.virt.hardware [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:41:12 compute-0 nova_compute[254092]: 2025-11-25 16:41:12.207 254096 DEBUG nova.virt.hardware [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:41:12 compute-0 nova_compute[254092]: 2025-11-25 16:41:12.207 254096 DEBUG nova.virt.hardware [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:41:12 compute-0 nova_compute[254092]: 2025-11-25 16:41:12.207 254096 DEBUG nova.virt.hardware [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:41:12 compute-0 nova_compute[254092]: 2025-11-25 16:41:12.208 254096 DEBUG nova.virt.hardware [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:41:12 compute-0 nova_compute[254092]: 2025-11-25 16:41:12.208 254096 DEBUG nova.objects.instance [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'vcpu_model' on Instance uuid fef208e1-3706-4d03-8385-12418e9dc230 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:41:12 compute-0 nova_compute[254092]: 2025-11-25 16:41:12.234 254096 DEBUG oslo_concurrency.processutils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:12 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:41:12 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3553204244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:41:12 compute-0 nova_compute[254092]: 2025-11-25 16:41:12.393 254096 DEBUG oslo_concurrency.processutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:12 compute-0 nova_compute[254092]: 2025-11-25 16:41:12.400 254096 DEBUG nova.compute.provider_tree [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:41:12 compute-0 nova_compute[254092]: 2025-11-25 16:41:12.416 254096 DEBUG nova.scheduler.client.report [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:41:12 compute-0 nova_compute[254092]: 2025-11-25 16:41:12.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:41:12 compute-0 nova_compute[254092]: 2025-11-25 16:41:12.539 254096 DEBUG oslo_concurrency.lockutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:12 compute-0 nova_compute[254092]: 2025-11-25 16:41:12.539 254096 DEBUG nova.compute.manager [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:41:12 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:41:12 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/890891610' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:41:12 compute-0 nova_compute[254092]: 2025-11-25 16:41:12.675 254096 DEBUG oslo_concurrency.processutils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:12 compute-0 nova_compute[254092]: 2025-11-25 16:41:12.711 254096 DEBUG nova.storage.rbd_utils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image fef208e1-3706-4d03-8385-12418e9dc230_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:12 compute-0 nova_compute[254092]: 2025-11-25 16:41:12.718 254096 DEBUG oslo_concurrency.processutils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:12 compute-0 nova_compute[254092]: 2025-11-25 16:41:12.749 254096 DEBUG nova.compute.manager [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:41:12 compute-0 nova_compute[254092]: 2025-11-25 16:41:12.749 254096 DEBUG nova.network.neutron [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:41:12 compute-0 nova_compute[254092]: 2025-11-25 16:41:12.834 254096 INFO nova.virt.libvirt.driver [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:41:12 compute-0 nova_compute[254092]: 2025-11-25 16:41:12.913 254096 DEBUG nova.compute.manager [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:41:13 compute-0 ceph-mon[74985]: pgmap v1674: 321 pgs: 321 active+clean; 213 MiB data, 616 MiB used, 59 GiB / 60 GiB avail; 368 KiB/s rd, 5.7 MiB/s wr, 120 op/s
Nov 25 16:41:13 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3553204244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:41:13 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/890891610' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:41:13 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:41:13 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3593126229' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.153 254096 DEBUG nova.policy [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e0d077039e0b4d9e8d5663768f40fa48', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2319f378c7fb448ebe89427d8dfa7e43', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.158 254096 DEBUG oslo_concurrency.processutils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.159 254096 DEBUG nova.virt.libvirt.vif [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:40:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1975310078',display_name='tempest-ServerDiskConfigTestJSON-server-1975310078',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1975310078',id=61,image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:40:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6ff6b3c59785407bb06c9d7e3969ea4f',ramdisk_id='',reservation_id='r-b4grt1v4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1055022473',owner_user_name='tempest
-ServerDiskConfigTestJSON-1055022473-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:41:11Z,user_data=None,user_id='3c1fd56de7cd4f5c9b1d85ffe8545c90',uuid=fef208e1-3706-4d03-8385-12418e9dc230,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "address": "fa:16:3e:43:f9:27", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60897ca4-91", "ovs_interfaceid": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.159 254096 DEBUG nova.network.os_vif_util [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converting VIF {"id": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "address": "fa:16:3e:43:f9:27", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60897ca4-91", "ovs_interfaceid": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.160 254096 DEBUG nova.network.os_vif_util [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:f9:27,bridge_name='br-int',has_traffic_filtering=True,id=60897ca4-9177-413c-b0f0-808dbc7d34dc,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60897ca4-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.162 254096 DEBUG nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:41:13 compute-0 nova_compute[254092]:   <uuid>fef208e1-3706-4d03-8385-12418e9dc230</uuid>
Nov 25 16:41:13 compute-0 nova_compute[254092]:   <name>instance-0000003d</name>
Nov 25 16:41:13 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:41:13 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:41:13 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:41:13 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1975310078</nova:name>
Nov 25 16:41:13 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:41:12</nova:creationTime>
Nov 25 16:41:13 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:41:13 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:41:13 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:41:13 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:41:13 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:41:13 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:41:13 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:41:13 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:41:13 compute-0 nova_compute[254092]:         <nova:user uuid="3c1fd56de7cd4f5c9b1d85ffe8545c90">tempest-ServerDiskConfigTestJSON-1055022473-project-member</nova:user>
Nov 25 16:41:13 compute-0 nova_compute[254092]:         <nova:project uuid="6ff6b3c59785407bb06c9d7e3969ea4f">tempest-ServerDiskConfigTestJSON-1055022473</nova:project>
Nov 25 16:41:13 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:41:13 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="a4aa3708-bb73-4b5a-b3f3-42153358021e"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:41:13 compute-0 nova_compute[254092]:         <nova:port uuid="60897ca4-9177-413c-b0f0-808dbc7d34dc">
Nov 25 16:41:13 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:41:13 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:41:13 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:41:13 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:41:13 compute-0 nova_compute[254092]:     <system>
Nov 25 16:41:13 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:41:13 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:41:13 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:41:13 compute-0 nova_compute[254092]:       <entry name="serial">fef208e1-3706-4d03-8385-12418e9dc230</entry>
Nov 25 16:41:13 compute-0 nova_compute[254092]:       <entry name="uuid">fef208e1-3706-4d03-8385-12418e9dc230</entry>
Nov 25 16:41:13 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     </system>
Nov 25 16:41:13 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:41:13 compute-0 nova_compute[254092]:   <os>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:   </os>
Nov 25 16:41:13 compute-0 nova_compute[254092]:   <features>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:   </features>
Nov 25 16:41:13 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:41:13 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:41:13 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:41:13 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:41:13 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:41:13 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/fef208e1-3706-4d03-8385-12418e9dc230_disk">
Nov 25 16:41:13 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:       </source>
Nov 25 16:41:13 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:41:13 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:41:13 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:41:13 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/fef208e1-3706-4d03-8385-12418e9dc230_disk.config">
Nov 25 16:41:13 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:       </source>
Nov 25 16:41:13 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:41:13 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:41:13 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:41:13 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:43:f9:27"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:       <target dev="tap60897ca4-91"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:41:13 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/fef208e1-3706-4d03-8385-12418e9dc230/console.log" append="off"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     <video>
Nov 25 16:41:13 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     </video>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:41:13 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:41:13 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:41:13 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:41:13 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:41:13 compute-0 nova_compute[254092]: </domain>
Nov 25 16:41:13 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.163 254096 DEBUG nova.compute.manager [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Preparing to wait for external event network-vif-plugged-60897ca4-9177-413c-b0f0-808dbc7d34dc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.164 254096 DEBUG oslo_concurrency.lockutils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "fef208e1-3706-4d03-8385-12418e9dc230-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.166 254096 DEBUG oslo_concurrency.lockutils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "fef208e1-3706-4d03-8385-12418e9dc230-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.166 254096 DEBUG oslo_concurrency.lockutils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "fef208e1-3706-4d03-8385-12418e9dc230-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.167 254096 DEBUG nova.virt.libvirt.vif [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:40:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1975310078',display_name='tempest-ServerDiskConfigTestJSON-server-1975310078',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1975310078',id=61,image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:40:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6ff6b3c59785407bb06c9d7e3969ea4f',ramdisk_id='',reservation_id='r-b4grt1v4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1055022473',owner_user_name='tempest-ServerDiskConfigTestJSON-1055022473-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:41:11Z,user_data=None,user_id='3c1fd56de7cd4f5c9b1d85ffe8545c90',uuid=fef208e1-3706-4d03-8385-12418e9dc230,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "address": "fa:16:3e:43:f9:27", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60897ca4-91", "ovs_interfaceid": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.167 254096 DEBUG nova.network.os_vif_util [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converting VIF {"id": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "address": "fa:16:3e:43:f9:27", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60897ca4-91", "ovs_interfaceid": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.167 254096 DEBUG nova.network.os_vif_util [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:f9:27,bridge_name='br-int',has_traffic_filtering=True,id=60897ca4-9177-413c-b0f0-808dbc7d34dc,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60897ca4-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.168 254096 DEBUG os_vif [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:f9:27,bridge_name='br-int',has_traffic_filtering=True,id=60897ca4-9177-413c-b0f0-808dbc7d34dc,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60897ca4-91') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.169 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.170 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.170 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.171 254096 DEBUG nova.compute.manager [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.172 254096 DEBUG nova.virt.libvirt.driver [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.172 254096 INFO nova.virt.libvirt.driver [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Creating image(s)
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.194 254096 DEBUG nova.storage.rbd_utils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] rbd image b5c5a442-8e8e-40c5-9634-e36c49e6e41b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.270 254096 DEBUG nova.storage.rbd_utils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] rbd image b5c5a442-8e8e-40c5-9634-e36c49e6e41b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.291 254096 DEBUG nova.storage.rbd_utils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] rbd image b5c5a442-8e8e-40c5-9634-e36c49e6e41b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.294 254096 DEBUG oslo_concurrency.processutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.328 254096 DEBUG nova.network.neutron [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Successfully created port: 9e60e140-ca34-40f4-b867-d7c53f05bca4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.332 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.336 254096 DEBUG nova.compute.manager [req-23cffe03-b3e1-49db-89ca-b32e4d8921db req-c2cf6ca2-1986-49ca-8ebb-6b7b8b5da79f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Received event network-vif-plugged-60897ca4-9177-413c-b0f0-808dbc7d34dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.337 254096 DEBUG oslo_concurrency.lockutils [req-23cffe03-b3e1-49db-89ca-b32e4d8921db req-c2cf6ca2-1986-49ca-8ebb-6b7b8b5da79f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "fef208e1-3706-4d03-8385-12418e9dc230-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.337 254096 DEBUG oslo_concurrency.lockutils [req-23cffe03-b3e1-49db-89ca-b32e4d8921db req-c2cf6ca2-1986-49ca-8ebb-6b7b8b5da79f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fef208e1-3706-4d03-8385-12418e9dc230-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.337 254096 DEBUG oslo_concurrency.lockutils [req-23cffe03-b3e1-49db-89ca-b32e4d8921db req-c2cf6ca2-1986-49ca-8ebb-6b7b8b5da79f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fef208e1-3706-4d03-8385-12418e9dc230-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.338 254096 DEBUG nova.compute.manager [req-23cffe03-b3e1-49db-89ca-b32e4d8921db req-c2cf6ca2-1986-49ca-8ebb-6b7b8b5da79f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Processing event network-vif-plugged-60897ca4-9177-413c-b0f0-808dbc7d34dc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.339 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.339 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60897ca4-91, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.340 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap60897ca4-91, col_values=(('external_ids', {'iface-id': '60897ca4-9177-413c-b0f0-808dbc7d34dc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:43:f9:27', 'vm-uuid': 'fef208e1-3706-4d03-8385-12418e9dc230'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.341 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:13 compute-0 NetworkManager[48891]: <info>  [1764088873.3427] manager: (tap60897ca4-91): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/250)
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.345 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.346 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.347 254096 INFO os_vif [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:f9:27,bridge_name='br-int',has_traffic_filtering=True,id=60897ca4-9177-413c-b0f0-808dbc7d34dc,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60897ca4-91')
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.365 254096 DEBUG oslo_concurrency.processutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.366 254096 DEBUG oslo_concurrency.lockutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.367 254096 DEBUG oslo_concurrency.lockutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.367 254096 DEBUG oslo_concurrency.lockutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.388 254096 DEBUG nova.storage.rbd_utils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] rbd image b5c5a442-8e8e-40c5-9634-e36c49e6e41b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.391 254096 DEBUG oslo_concurrency.processutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 b5c5a442-8e8e-40c5-9634-e36c49e6e41b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.455 254096 DEBUG nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.456 254096 DEBUG nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.456 254096 DEBUG nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] No VIF found with MAC fa:16:3e:43:f9:27, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.456 254096 INFO nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Using config drive
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.477 254096 DEBUG nova.storage.rbd_utils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image fef208e1-3706-4d03-8385-12418e9dc230_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.496 254096 DEBUG nova.objects.instance [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'ec2_ids' on Instance uuid fef208e1-3706-4d03-8385-12418e9dc230 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:41:13 compute-0 nova_compute[254092]: 2025-11-25 16:41:13.531 254096 DEBUG nova.objects.instance [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'keypairs' on Instance uuid fef208e1-3706-4d03-8385-12418e9dc230 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:41:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1675: 321 pgs: 321 active+clean; 213 MiB data, 616 MiB used, 59 GiB / 60 GiB avail; 368 KiB/s rd, 5.7 MiB/s wr, 120 op/s
Nov 25 16:41:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:13.618 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:13.619 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:13.619 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:14 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3593126229' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:41:14 compute-0 nova_compute[254092]: 2025-11-25 16:41:14.194 254096 DEBUG oslo_concurrency.processutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 b5c5a442-8e8e-40c5-9634-e36c49e6e41b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.803s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:14 compute-0 nova_compute[254092]: 2025-11-25 16:41:14.241 254096 DEBUG nova.storage.rbd_utils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] resizing rbd image b5c5a442-8e8e-40c5-9634-e36c49e6e41b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:41:14 compute-0 nova_compute[254092]: 2025-11-25 16:41:14.360 254096 DEBUG nova.objects.instance [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lazy-loading 'migration_context' on Instance uuid b5c5a442-8e8e-40c5-9634-e36c49e6e41b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:41:14 compute-0 nova_compute[254092]: 2025-11-25 16:41:14.381 254096 DEBUG nova.virt.libvirt.driver [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:41:14 compute-0 nova_compute[254092]: 2025-11-25 16:41:14.382 254096 DEBUG nova.virt.libvirt.driver [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Ensure instance console log exists: /var/lib/nova/instances/b5c5a442-8e8e-40c5-9634-e36c49e6e41b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:41:14 compute-0 nova_compute[254092]: 2025-11-25 16:41:14.382 254096 DEBUG oslo_concurrency.lockutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:14 compute-0 nova_compute[254092]: 2025-11-25 16:41:14.383 254096 DEBUG oslo_concurrency.lockutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:14 compute-0 nova_compute[254092]: 2025-11-25 16:41:14.383 254096 DEBUG oslo_concurrency.lockutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:14 compute-0 nova_compute[254092]: 2025-11-25 16:41:14.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:41:14 compute-0 nova_compute[254092]: 2025-11-25 16:41:14.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 16:41:14 compute-0 nova_compute[254092]: 2025-11-25 16:41:14.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 16:41:14 compute-0 nova_compute[254092]: 2025-11-25 16:41:14.526 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 25 16:41:14 compute-0 nova_compute[254092]: 2025-11-25 16:41:14.526 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 25 16:41:14 compute-0 nova_compute[254092]: 2025-11-25 16:41:14.527 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 25 16:41:14 compute-0 nova_compute[254092]: 2025-11-25 16:41:14.527 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-fef208e1-3706-4d03-8385-12418e9dc230" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:41:14 compute-0 nova_compute[254092]: 2025-11-25 16:41:14.527 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-fef208e1-3706-4d03-8385-12418e9dc230" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:41:14 compute-0 nova_compute[254092]: 2025-11-25 16:41:14.527 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 16:41:14 compute-0 nova_compute[254092]: 2025-11-25 16:41:14.527 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid fef208e1-3706-4d03-8385-12418e9dc230 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:41:14 compute-0 nova_compute[254092]: 2025-11-25 16:41:14.683 254096 INFO nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Creating config drive at /var/lib/nova/instances/fef208e1-3706-4d03-8385-12418e9dc230/disk.config
Nov 25 16:41:14 compute-0 nova_compute[254092]: 2025-11-25 16:41:14.688 254096 DEBUG oslo_concurrency.processutils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fef208e1-3706-4d03-8385-12418e9dc230/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpml21mrbe execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:14 compute-0 nova_compute[254092]: 2025-11-25 16:41:14.822 254096 DEBUG oslo_concurrency.processutils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fef208e1-3706-4d03-8385-12418e9dc230/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpml21mrbe" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:14 compute-0 nova_compute[254092]: 2025-11-25 16:41:14.842 254096 DEBUG nova.storage.rbd_utils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image fef208e1-3706-4d03-8385-12418e9dc230_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:14 compute-0 nova_compute[254092]: 2025-11-25 16:41:14.846 254096 DEBUG oslo_concurrency.processutils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fef208e1-3706-4d03-8385-12418e9dc230/disk.config fef208e1-3706-4d03-8385-12418e9dc230_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:15 compute-0 ceph-mon[74985]: pgmap v1675: 321 pgs: 321 active+clean; 213 MiB data, 616 MiB used, 59 GiB / 60 GiB avail; 368 KiB/s rd, 5.7 MiB/s wr, 120 op/s
Nov 25 16:41:15 compute-0 nova_compute[254092]: 2025-11-25 16:41:15.118 254096 DEBUG oslo_concurrency.processutils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fef208e1-3706-4d03-8385-12418e9dc230/disk.config fef208e1-3706-4d03-8385-12418e9dc230_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.271s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:15 compute-0 nova_compute[254092]: 2025-11-25 16:41:15.118 254096 INFO nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Deleting local config drive /var/lib/nova/instances/fef208e1-3706-4d03-8385-12418e9dc230/disk.config because it was imported into RBD.
Nov 25 16:41:15 compute-0 kernel: tap60897ca4-91: entered promiscuous mode
Nov 25 16:41:15 compute-0 NetworkManager[48891]: <info>  [1764088875.1681] manager: (tap60897ca4-91): new Tun device (/org/freedesktop/NetworkManager/Devices/251)
Nov 25 16:41:15 compute-0 nova_compute[254092]: 2025-11-25 16:41:15.168 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:15 compute-0 ovn_controller[153477]: 2025-11-25T16:41:15Z|00575|binding|INFO|Claiming lport 60897ca4-9177-413c-b0f0-808dbc7d34dc for this chassis.
Nov 25 16:41:15 compute-0 ovn_controller[153477]: 2025-11-25T16:41:15Z|00576|binding|INFO|60897ca4-9177-413c-b0f0-808dbc7d34dc: Claiming fa:16:3e:43:f9:27 10.100.0.13
Nov 25 16:41:15 compute-0 ovn_controller[153477]: 2025-11-25T16:41:15Z|00577|binding|INFO|Setting lport 60897ca4-9177-413c-b0f0-808dbc7d34dc ovn-installed in OVS
Nov 25 16:41:15 compute-0 nova_compute[254092]: 2025-11-25 16:41:15.185 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:15 compute-0 systemd-udevd[323182]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:41:15 compute-0 NetworkManager[48891]: <info>  [1764088875.1997] device (tap60897ca4-91): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:41:15 compute-0 NetworkManager[48891]: <info>  [1764088875.2006] device (tap60897ca4-91): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:41:15 compute-0 systemd-machined[216343]: New machine qemu-72-instance-0000003d.
Nov 25 16:41:15 compute-0 systemd[1]: Started Virtual Machine qemu-72-instance-0000003d.
Nov 25 16:41:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:15.523 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:41:15 compute-0 nova_compute[254092]: 2025-11-25 16:41:15.522 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:15.524 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 16:41:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1676: 321 pgs: 321 active+clean; 214 MiB data, 611 MiB used, 59 GiB / 60 GiB avail; 405 KiB/s rd, 8.4 MiB/s wr, 182 op/s
Nov 25 16:41:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:41:15 compute-0 ovn_controller[153477]: 2025-11-25T16:41:15Z|00578|binding|INFO|Setting lport 60897ca4-9177-413c-b0f0-808dbc7d34dc up in Southbound
Nov 25 16:41:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:15.721 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:f9:27 10.100.0.13'], port_security=['fa:16:3e:43:f9:27 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'fef208e1-3706-4d03-8385-12418e9dc230', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ff6b3c59785407bb06c9d7e3969ea4f', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'b774cfb5-cd97-4ca0-9d7e-110986406220', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d2dad0a-51f1-4edf-9261-6f40860e89ef, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=60897ca4-9177-413c-b0f0-808dbc7d34dc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:41:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:15.722 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 60897ca4-9177-413c-b0f0-808dbc7d34dc in datapath 62c0a8be-b828-4765-9cf8-f2477a8ef1d9 bound to our chassis
Nov 25 16:41:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:15.724 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 62c0a8be-b828-4765-9cf8-f2477a8ef1d9
Nov 25 16:41:15 compute-0 nova_compute[254092]: 2025-11-25 16:41:15.731 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for fef208e1-3706-4d03-8385-12418e9dc230 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 16:41:15 compute-0 nova_compute[254092]: 2025-11-25 16:41:15.732 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088875.7313852, fef208e1-3706-4d03-8385-12418e9dc230 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:41:15 compute-0 nova_compute[254092]: 2025-11-25 16:41:15.732 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fef208e1-3706-4d03-8385-12418e9dc230] VM Started (Lifecycle Event)
Nov 25 16:41:15 compute-0 nova_compute[254092]: 2025-11-25 16:41:15.735 254096 DEBUG nova.compute.manager [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:41:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:15.737 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[39927fc1-a7b6-423b-a373-86b5f77c4aa4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:15.738 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap62c0a8be-b1 in ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:41:15 compute-0 nova_compute[254092]: 2025-11-25 16:41:15.738 254096 DEBUG nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:41:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:15.740 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap62c0a8be-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:41:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:15.741 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[67dbe776-3c47-4699-af6b-f51bea8486d3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:15 compute-0 nova_compute[254092]: 2025-11-25 16:41:15.741 254096 INFO nova.virt.libvirt.driver [-] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Instance spawned successfully.
Nov 25 16:41:15 compute-0 nova_compute[254092]: 2025-11-25 16:41:15.742 254096 DEBUG nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:41:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:15.742 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[da5fea1e-1037-4d25-a3d6-1364d0b0d896]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:15.755 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[19a6407e-0785-447a-aa78-bac53e8c54d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:15 compute-0 nova_compute[254092]: 2025-11-25 16:41:15.761 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:41:15 compute-0 nova_compute[254092]: 2025-11-25 16:41:15.771 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:41:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:15.772 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e9e4d2a9-b1cd-4b09-9883-75ef053b0599]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:15 compute-0 nova_compute[254092]: 2025-11-25 16:41:15.775 254096 DEBUG nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:41:15 compute-0 nova_compute[254092]: 2025-11-25 16:41:15.775 254096 DEBUG nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:41:15 compute-0 nova_compute[254092]: 2025-11-25 16:41:15.776 254096 DEBUG nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:41:15 compute-0 nova_compute[254092]: 2025-11-25 16:41:15.776 254096 DEBUG nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:41:15 compute-0 nova_compute[254092]: 2025-11-25 16:41:15.777 254096 DEBUG nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:41:15 compute-0 nova_compute[254092]: 2025-11-25 16:41:15.777 254096 DEBUG nova.virt.libvirt.driver [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:41:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:15.799 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e2d55693-bfc7-4ac9-936e-f10d2ea17ea5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:15 compute-0 nova_compute[254092]: 2025-11-25 16:41:15.802 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fef208e1-3706-4d03-8385-12418e9dc230] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 16:41:15 compute-0 nova_compute[254092]: 2025-11-25 16:41:15.803 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088875.7347782, fef208e1-3706-4d03-8385-12418e9dc230 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:41:15 compute-0 nova_compute[254092]: 2025-11-25 16:41:15.803 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fef208e1-3706-4d03-8385-12418e9dc230] VM Paused (Lifecycle Event)
Nov 25 16:41:15 compute-0 NetworkManager[48891]: <info>  [1764088875.8056] manager: (tap62c0a8be-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/252)
Nov 25 16:41:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:15.804 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4b57baa4-1459-4b0e-903f-f6b7864f8339]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:15 compute-0 nova_compute[254092]: 2025-11-25 16:41:15.833 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:41:15 compute-0 nova_compute[254092]: 2025-11-25 16:41:15.836 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088875.7373233, fef208e1-3706-4d03-8385-12418e9dc230 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:41:15 compute-0 nova_compute[254092]: 2025-11-25 16:41:15.837 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fef208e1-3706-4d03-8385-12418e9dc230] VM Resumed (Lifecycle Event)
Nov 25 16:41:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:15.843 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[b76b4a69-5ca4-4b86-9c78-49173b6cf4ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:15.846 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[293aaf1f-9833-4130-8279-2ec71c030bf4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:15 compute-0 nova_compute[254092]: 2025-11-25 16:41:15.856 254096 DEBUG nova.compute.manager [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:41:15 compute-0 nova_compute[254092]: 2025-11-25 16:41:15.858 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:41:15 compute-0 nova_compute[254092]: 2025-11-25 16:41:15.864 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:41:15 compute-0 NetworkManager[48891]: <info>  [1764088875.8685] device (tap62c0a8be-b0): carrier: link connected
Nov 25 16:41:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:15.873 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ede3c6f4-b644-416c-a90f-fbf26dc5bc0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:15 compute-0 nova_compute[254092]: 2025-11-25 16:41:15.891 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fef208e1-3706-4d03-8385-12418e9dc230] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 16:41:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:15.892 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fd8a4459-476c-40d9-b878-e063845c2ed6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62c0a8be-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:da:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 167], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 533345, 'reachable_time': 22257, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323259, 'error': None, 'target': 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:15.906 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ca954004-a1d7-4b28-b1c0-6ef0ccb536e4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedc:dab2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 533345, 'tstamp': 533345}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323260, 'error': None, 'target': 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:15 compute-0 nova_compute[254092]: 2025-11-25 16:41:15.912 254096 DEBUG oslo_concurrency.lockutils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:15 compute-0 nova_compute[254092]: 2025-11-25 16:41:15.912 254096 DEBUG oslo_concurrency.lockutils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:15 compute-0 nova_compute[254092]: 2025-11-25 16:41:15.913 254096 DEBUG nova.objects.instance [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 16:41:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:15.922 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1b9b6b2e-c651-4cfc-94b0-bc9bae24dad6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62c0a8be-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:da:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 167], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 533345, 'reachable_time': 22257, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 323261, 'error': None, 'target': 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:15 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:15.948 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1df239f6-66e3-48c8-8fe5-413a237f8b98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:15 compute-0 nova_compute[254092]: 2025-11-25 16:41:15.973 254096 DEBUG oslo_concurrency.lockutils [None req-945236ff-e664-48ff-90fb-e9bb1e3e6b00 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.061s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:16.012 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0ea2cae5-9e9f-47e6-beb0-8602ead4ab3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:16.014 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62c0a8be-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:16.014 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:41:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:16.015 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62c0a8be-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:16 compute-0 nova_compute[254092]: 2025-11-25 16:41:16.049 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:16 compute-0 NetworkManager[48891]: <info>  [1764088876.0505] manager: (tap62c0a8be-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/253)
Nov 25 16:41:16 compute-0 kernel: tap62c0a8be-b0: entered promiscuous mode
Nov 25 16:41:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:16.052 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap62c0a8be-b0, col_values=(('external_ids', {'iface-id': '4c0a1e28-b18a-4839-ae5a-4472548ad46f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:16 compute-0 ovn_controller[153477]: 2025-11-25T16:41:16Z|00579|binding|INFO|Releasing lport 4c0a1e28-b18a-4839-ae5a-4472548ad46f from this chassis (sb_readonly=0)
Nov 25 16:41:16 compute-0 nova_compute[254092]: 2025-11-25 16:41:16.068 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:16.069 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/62c0a8be-b828-4765-9cf8-f2477a8ef1d9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/62c0a8be-b828-4765-9cf8-f2477a8ef1d9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:41:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:16.070 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[70b7acdb-62ae-44aa-9efa-23dcb0456f77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:16.071 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:41:16 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:41:16 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:41:16 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-62c0a8be-b828-4765-9cf8-f2477a8ef1d9
Nov 25 16:41:16 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:41:16 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:41:16 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:41:16 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/62c0a8be-b828-4765-9cf8-f2477a8ef1d9.pid.haproxy
Nov 25 16:41:16 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:41:16 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:41:16 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:41:16 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:41:16 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:41:16 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:41:16 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:41:16 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:41:16 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:41:16 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:41:16 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:41:16 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:41:16 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:41:16 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:41:16 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:41:16 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:41:16 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:41:16 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:41:16 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:41:16 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:41:16 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 62c0a8be-b828-4765-9cf8-f2477a8ef1d9
Nov 25 16:41:16 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:41:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:16.071 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'env', 'PROCESS_TAG=haproxy-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/62c0a8be-b828-4765-9cf8-f2477a8ef1d9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:41:16 compute-0 podman[323293]: 2025-11-25 16:41:16.477525886 +0000 UTC m=+0.102016968 container create dfcfe9b9228868575fedb1ac0288483fe3cc359ae4f6ba0b0ab547bf53835727 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:41:16 compute-0 podman[323293]: 2025-11-25 16:41:16.400004574 +0000 UTC m=+0.024495676 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:41:16 compute-0 systemd[1]: Started libpod-conmon-dfcfe9b9228868575fedb1ac0288483fe3cc359ae4f6ba0b0ab547bf53835727.scope.
Nov 25 16:41:16 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:41:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bc6a3717c2bdadab6ffc5e32dfd90bd78792ae46a7514d2c4fc20fc246fb569/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:41:16 compute-0 podman[323293]: 2025-11-25 16:41:16.57936825 +0000 UTC m=+0.203859362 container init dfcfe9b9228868575fedb1ac0288483fe3cc359ae4f6ba0b0ab547bf53835727 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:41:16 compute-0 podman[323293]: 2025-11-25 16:41:16.585889556 +0000 UTC m=+0.210380638 container start dfcfe9b9228868575fedb1ac0288483fe3cc359ae4f6ba0b0ab547bf53835727 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS)
Nov 25 16:41:16 compute-0 nova_compute[254092]: 2025-11-25 16:41:16.599 254096 DEBUG nova.network.neutron [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Successfully updated port: 5136afea-102e-46a1-8fdb-0af970c5af04 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:41:16 compute-0 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[323307]: [NOTICE]   (323311) : New worker (323313) forked
Nov 25 16:41:16 compute-0 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[323307]: [NOTICE]   (323311) : Loading success.
Nov 25 16:41:16 compute-0 nova_compute[254092]: 2025-11-25 16:41:16.614 254096 DEBUG oslo_concurrency.lockutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "refresh_cache-013dc18e-57cd-4733-8e98-7d20e3b5c4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:41:16 compute-0 nova_compute[254092]: 2025-11-25 16:41:16.615 254096 DEBUG oslo_concurrency.lockutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquired lock "refresh_cache-013dc18e-57cd-4733-8e98-7d20e3b5c4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:41:16 compute-0 nova_compute[254092]: 2025-11-25 16:41:16.615 254096 DEBUG nova.network.neutron [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:41:16 compute-0 nova_compute[254092]: 2025-11-25 16:41:16.769 254096 DEBUG nova.compute.manager [req-331854a5-bdca-4f81-bd74-e74835f5833f req-a6cf7e2c-b939-44f6-b6b6-0753a8572508 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Received event network-changed-5136afea-102e-46a1-8fdb-0af970c5af04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:41:16 compute-0 nova_compute[254092]: 2025-11-25 16:41:16.770 254096 DEBUG nova.compute.manager [req-331854a5-bdca-4f81-bd74-e74835f5833f req-a6cf7e2c-b939-44f6-b6b6-0753a8572508 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Refreshing instance network info cache due to event network-changed-5136afea-102e-46a1-8fdb-0af970c5af04. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:41:16 compute-0 nova_compute[254092]: 2025-11-25 16:41:16.770 254096 DEBUG oslo_concurrency.lockutils [req-331854a5-bdca-4f81-bd74-e74835f5833f req-a6cf7e2c-b939-44f6-b6b6-0753a8572508 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-013dc18e-57cd-4733-8e98-7d20e3b5c4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:41:17 compute-0 ceph-mon[74985]: pgmap v1676: 321 pgs: 321 active+clean; 214 MiB data, 611 MiB used, 59 GiB / 60 GiB avail; 405 KiB/s rd, 8.4 MiB/s wr, 182 op/s
Nov 25 16:41:17 compute-0 nova_compute[254092]: 2025-11-25 16:41:17.420 254096 DEBUG nova.network.neutron [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:41:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1677: 321 pgs: 321 active+clean; 226 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 150 KiB/s rd, 7.5 MiB/s wr, 151 op/s
Nov 25 16:41:17 compute-0 nova_compute[254092]: 2025-11-25 16:41:17.626 254096 DEBUG nova.network.neutron [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Successfully created port: 1404e99c-a32c-404a-a7d6-3daccc67c48b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:41:18 compute-0 nova_compute[254092]: 2025-11-25 16:41:18.184 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:18 compute-0 nova_compute[254092]: 2025-11-25 16:41:18.341 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:18 compute-0 ceph-mon[74985]: pgmap v1677: 321 pgs: 321 active+clean; 226 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 150 KiB/s rd, 7.5 MiB/s wr, 151 op/s
Nov 25 16:41:18 compute-0 nova_compute[254092]: 2025-11-25 16:41:18.818 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Updating instance_info_cache with network_info: [{"id": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "address": "fa:16:3e:43:f9:27", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60897ca4-91", "ovs_interfaceid": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:41:18 compute-0 nova_compute[254092]: 2025-11-25 16:41:18.835 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-fef208e1-3706-4d03-8385-12418e9dc230" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:41:18 compute-0 nova_compute[254092]: 2025-11-25 16:41:18.836 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 16:41:19 compute-0 nova_compute[254092]: 2025-11-25 16:41:19.315 254096 DEBUG nova.network.neutron [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Successfully updated port: 9e60e140-ca34-40f4-b867-d7c53f05bca4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:41:19 compute-0 nova_compute[254092]: 2025-11-25 16:41:19.393 254096 DEBUG oslo_concurrency.lockutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "refresh_cache-dce3a591-9fb6-4495-a7fb-867af2de384f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:41:19 compute-0 nova_compute[254092]: 2025-11-25 16:41:19.394 254096 DEBUG oslo_concurrency.lockutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquired lock "refresh_cache-dce3a591-9fb6-4495-a7fb-867af2de384f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:41:19 compute-0 nova_compute[254092]: 2025-11-25 16:41:19.394 254096 DEBUG nova.network.neutron [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:41:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1678: 321 pgs: 321 active+clean; 226 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 91 KiB/s rd, 7.1 MiB/s wr, 142 op/s
Nov 25 16:41:19 compute-0 nova_compute[254092]: 2025-11-25 16:41:19.626 254096 DEBUG nova.network.neutron [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Updating instance_info_cache with network_info: [{"id": "5136afea-102e-46a1-8fdb-0af970c5af04", "address": "fa:16:3e:70:61:e3", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5136afea-10", "ovs_interfaceid": "5136afea-102e-46a1-8fdb-0af970c5af04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:41:19 compute-0 nova_compute[254092]: 2025-11-25 16:41:19.631 254096 DEBUG nova.compute.manager [req-908a1656-816e-42ea-8a0c-8c599e2f651a req-7398eee9-520d-46f3-a254-1ebb609f6de1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Received event network-changed-9e60e140-ca34-40f4-b867-d7c53f05bca4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:41:19 compute-0 nova_compute[254092]: 2025-11-25 16:41:19.632 254096 DEBUG nova.compute.manager [req-908a1656-816e-42ea-8a0c-8c599e2f651a req-7398eee9-520d-46f3-a254-1ebb609f6de1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Refreshing instance network info cache due to event network-changed-9e60e140-ca34-40f4-b867-d7c53f05bca4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:41:19 compute-0 nova_compute[254092]: 2025-11-25 16:41:19.632 254096 DEBUG oslo_concurrency.lockutils [req-908a1656-816e-42ea-8a0c-8c599e2f651a req-7398eee9-520d-46f3-a254-1ebb609f6de1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-dce3a591-9fb6-4495-a7fb-867af2de384f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:41:19 compute-0 nova_compute[254092]: 2025-11-25 16:41:19.786 254096 DEBUG oslo_concurrency.lockutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Releasing lock "refresh_cache-013dc18e-57cd-4733-8e98-7d20e3b5c4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:41:19 compute-0 nova_compute[254092]: 2025-11-25 16:41:19.786 254096 DEBUG nova.compute.manager [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Instance network_info: |[{"id": "5136afea-102e-46a1-8fdb-0af970c5af04", "address": "fa:16:3e:70:61:e3", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5136afea-10", "ovs_interfaceid": "5136afea-102e-46a1-8fdb-0af970c5af04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:41:19 compute-0 nova_compute[254092]: 2025-11-25 16:41:19.786 254096 DEBUG oslo_concurrency.lockutils [req-331854a5-bdca-4f81-bd74-e74835f5833f req-a6cf7e2c-b939-44f6-b6b6-0753a8572508 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-013dc18e-57cd-4733-8e98-7d20e3b5c4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:41:19 compute-0 nova_compute[254092]: 2025-11-25 16:41:19.787 254096 DEBUG nova.network.neutron [req-331854a5-bdca-4f81-bd74-e74835f5833f req-a6cf7e2c-b939-44f6-b6b6-0753a8572508 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Refreshing network info cache for port 5136afea-102e-46a1-8fdb-0af970c5af04 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:41:19 compute-0 nova_compute[254092]: 2025-11-25 16:41:19.790 254096 DEBUG nova.virt.libvirt.driver [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Start _get_guest_xml network_info=[{"id": "5136afea-102e-46a1-8fdb-0af970c5af04", "address": "fa:16:3e:70:61:e3", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5136afea-10", "ovs_interfaceid": "5136afea-102e-46a1-8fdb-0af970c5af04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:41:19 compute-0 nova_compute[254092]: 2025-11-25 16:41:19.794 254096 WARNING nova.virt.libvirt.driver [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:41:19 compute-0 nova_compute[254092]: 2025-11-25 16:41:19.799 254096 DEBUG nova.virt.libvirt.host [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:41:19 compute-0 nova_compute[254092]: 2025-11-25 16:41:19.800 254096 DEBUG nova.virt.libvirt.host [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:41:19 compute-0 nova_compute[254092]: 2025-11-25 16:41:19.802 254096 DEBUG nova.virt.libvirt.host [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:41:19 compute-0 nova_compute[254092]: 2025-11-25 16:41:19.803 254096 DEBUG nova.virt.libvirt.host [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:41:19 compute-0 nova_compute[254092]: 2025-11-25 16:41:19.803 254096 DEBUG nova.virt.libvirt.driver [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:41:19 compute-0 nova_compute[254092]: 2025-11-25 16:41:19.803 254096 DEBUG nova.virt.hardware [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:41:19 compute-0 nova_compute[254092]: 2025-11-25 16:41:19.804 254096 DEBUG nova.virt.hardware [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:41:19 compute-0 nova_compute[254092]: 2025-11-25 16:41:19.804 254096 DEBUG nova.virt.hardware [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:41:19 compute-0 nova_compute[254092]: 2025-11-25 16:41:19.804 254096 DEBUG nova.virt.hardware [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:41:19 compute-0 nova_compute[254092]: 2025-11-25 16:41:19.804 254096 DEBUG nova.virt.hardware [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:41:19 compute-0 nova_compute[254092]: 2025-11-25 16:41:19.804 254096 DEBUG nova.virt.hardware [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:41:19 compute-0 nova_compute[254092]: 2025-11-25 16:41:19.805 254096 DEBUG nova.virt.hardware [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:41:19 compute-0 nova_compute[254092]: 2025-11-25 16:41:19.805 254096 DEBUG nova.virt.hardware [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:41:19 compute-0 nova_compute[254092]: 2025-11-25 16:41:19.805 254096 DEBUG nova.virt.hardware [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:41:19 compute-0 nova_compute[254092]: 2025-11-25 16:41:19.805 254096 DEBUG nova.virt.hardware [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:41:19 compute-0 nova_compute[254092]: 2025-11-25 16:41:19.805 254096 DEBUG nova.virt.hardware [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:41:19 compute-0 nova_compute[254092]: 2025-11-25 16:41:19.808 254096 DEBUG oslo_concurrency.processutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:19 compute-0 nova_compute[254092]: 2025-11-25 16:41:19.841 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:41:20 compute-0 nova_compute[254092]: 2025-11-25 16:41:20.222 254096 DEBUG nova.network.neutron [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:41:20 compute-0 nova_compute[254092]: 2025-11-25 16:41:20.226 254096 DEBUG oslo_concurrency.lockutils [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "fef208e1-3706-4d03-8385-12418e9dc230" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:20 compute-0 nova_compute[254092]: 2025-11-25 16:41:20.226 254096 DEBUG oslo_concurrency.lockutils [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "fef208e1-3706-4d03-8385-12418e9dc230" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:20 compute-0 nova_compute[254092]: 2025-11-25 16:41:20.226 254096 DEBUG oslo_concurrency.lockutils [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "fef208e1-3706-4d03-8385-12418e9dc230-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:20 compute-0 nova_compute[254092]: 2025-11-25 16:41:20.227 254096 DEBUG oslo_concurrency.lockutils [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "fef208e1-3706-4d03-8385-12418e9dc230-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:20 compute-0 nova_compute[254092]: 2025-11-25 16:41:20.227 254096 DEBUG oslo_concurrency.lockutils [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "fef208e1-3706-4d03-8385-12418e9dc230-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:20 compute-0 nova_compute[254092]: 2025-11-25 16:41:20.228 254096 INFO nova.compute.manager [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Terminating instance
Nov 25 16:41:20 compute-0 nova_compute[254092]: 2025-11-25 16:41:20.229 254096 DEBUG nova.compute.manager [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:41:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:41:20 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2009668596' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:41:20 compute-0 nova_compute[254092]: 2025-11-25 16:41:20.278 254096 DEBUG oslo_concurrency.processutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:20 compute-0 nova_compute[254092]: 2025-11-25 16:41:20.310 254096 DEBUG nova.storage.rbd_utils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] rbd image 013dc18e-57cd-4733-8e98-7d20e3b5c4db_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:20 compute-0 nova_compute[254092]: 2025-11-25 16:41:20.315 254096 DEBUG oslo_concurrency.processutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:20 compute-0 kernel: tap60897ca4-91 (unregistering): left promiscuous mode
Nov 25 16:41:20 compute-0 NetworkManager[48891]: <info>  [1764088880.3833] device (tap60897ca4-91): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:41:20 compute-0 ovn_controller[153477]: 2025-11-25T16:41:20Z|00580|binding|INFO|Releasing lport 60897ca4-9177-413c-b0f0-808dbc7d34dc from this chassis (sb_readonly=0)
Nov 25 16:41:20 compute-0 ovn_controller[153477]: 2025-11-25T16:41:20Z|00581|binding|INFO|Setting lport 60897ca4-9177-413c-b0f0-808dbc7d34dc down in Southbound
Nov 25 16:41:20 compute-0 ovn_controller[153477]: 2025-11-25T16:41:20Z|00582|binding|INFO|Removing iface tap60897ca4-91 ovn-installed in OVS
Nov 25 16:41:20 compute-0 nova_compute[254092]: 2025-11-25 16:41:20.395 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:20 compute-0 nova_compute[254092]: 2025-11-25 16:41:20.436 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:20.447 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:f9:27 10.100.0.13'], port_security=['fa:16:3e:43:f9:27 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'fef208e1-3706-4d03-8385-12418e9dc230', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ff6b3c59785407bb06c9d7e3969ea4f', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'b774cfb5-cd97-4ca0-9d7e-110986406220', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d2dad0a-51f1-4edf-9261-6f40860e89ef, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=60897ca4-9177-413c-b0f0-808dbc7d34dc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:41:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:20.449 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 60897ca4-9177-413c-b0f0-808dbc7d34dc in datapath 62c0a8be-b828-4765-9cf8-f2477a8ef1d9 unbound from our chassis
Nov 25 16:41:20 compute-0 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d0000003d.scope: Deactivated successfully.
Nov 25 16:41:20 compute-0 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d0000003d.scope: Consumed 4.998s CPU time.
Nov 25 16:41:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:20.451 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 62c0a8be-b828-4765-9cf8-f2477a8ef1d9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:41:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:20.452 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[02467b22-ddbb-4ae4-bbef-4a41401e8a84]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:20.453 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 namespace which is not needed anymore
Nov 25 16:41:20 compute-0 systemd-machined[216343]: Machine qemu-72-instance-0000003d terminated.
Nov 25 16:41:20 compute-0 kernel: tap60897ca4-91: entered promiscuous mode
Nov 25 16:41:20 compute-0 NetworkManager[48891]: <info>  [1764088880.6765] manager: (tap60897ca4-91): new Tun device (/org/freedesktop/NetworkManager/Devices/254)
Nov 25 16:41:20 compute-0 systemd-udevd[323366]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:41:20 compute-0 nova_compute[254092]: 2025-11-25 16:41:20.679 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:20 compute-0 ovn_controller[153477]: 2025-11-25T16:41:20Z|00583|binding|INFO|Claiming lport 60897ca4-9177-413c-b0f0-808dbc7d34dc for this chassis.
Nov 25 16:41:20 compute-0 ovn_controller[153477]: 2025-11-25T16:41:20Z|00584|binding|INFO|60897ca4-9177-413c-b0f0-808dbc7d34dc: Claiming fa:16:3e:43:f9:27 10.100.0.13
Nov 25 16:41:20 compute-0 kernel: tap60897ca4-91 (unregistering): left promiscuous mode
Nov 25 16:41:20 compute-0 virtnodedevd[254417]: ethtool ioctl error on tap60897ca4-91: No such device
Nov 25 16:41:20 compute-0 ovn_controller[153477]: 2025-11-25T16:41:20Z|00585|binding|INFO|Setting lport 60897ca4-9177-413c-b0f0-808dbc7d34dc ovn-installed in OVS
Nov 25 16:41:20 compute-0 ovn_controller[153477]: 2025-11-25T16:41:20Z|00586|if_status|INFO|Dropped 5 log messages in last 116 seconds (most recently, 116 seconds ago) due to excessive rate
Nov 25 16:41:20 compute-0 ovn_controller[153477]: 2025-11-25T16:41:20Z|00587|if_status|INFO|Not setting lport 60897ca4-9177-413c-b0f0-808dbc7d34dc down as sb is readonly
Nov 25 16:41:20 compute-0 nova_compute[254092]: 2025-11-25 16:41:20.714 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:20 compute-0 virtnodedevd[254417]: ethtool ioctl error on tap60897ca4-91: No such device
Nov 25 16:41:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:41:20 compute-0 virtnodedevd[254417]: ethtool ioctl error on tap60897ca4-91: No such device
Nov 25 16:41:20 compute-0 nova_compute[254092]: 2025-11-25 16:41:20.726 254096 INFO nova.virt.libvirt.driver [-] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Instance destroyed successfully.
Nov 25 16:41:20 compute-0 nova_compute[254092]: 2025-11-25 16:41:20.727 254096 DEBUG nova.objects.instance [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'resources' on Instance uuid fef208e1-3706-4d03-8385-12418e9dc230 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:41:20 compute-0 virtnodedevd[254417]: ethtool ioctl error on tap60897ca4-91: No such device
Nov 25 16:41:20 compute-0 virtnodedevd[254417]: ethtool ioctl error on tap60897ca4-91: No such device
Nov 25 16:41:20 compute-0 nova_compute[254092]: 2025-11-25 16:41:20.737 254096 DEBUG nova.virt.libvirt.vif [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:40:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1975310078',display_name='tempest-ServerDiskConfigTestJSON-server-1975310078',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1975310078',id=61,image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:41:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6ff6b3c59785407bb06c9d7e3969ea4f',ramdisk_id='',reservation_id='r-b4grt1v4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1055022473',owner_user_name='tempest-ServerDiskConfigTestJSON-1055022473-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:41:15Z,user_data=None,user_id='3c1fd56de7cd4f5c9b1d85ffe8545c90',uuid=fef208e1-3706-4d03-8385-12418e9dc230,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "address": "fa:16:3e:43:f9:27", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60897ca4-91", "ovs_interfaceid": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:41:20 compute-0 nova_compute[254092]: 2025-11-25 16:41:20.738 254096 DEBUG nova.network.os_vif_util [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converting VIF {"id": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "address": "fa:16:3e:43:f9:27", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60897ca4-91", "ovs_interfaceid": "60897ca4-9177-413c-b0f0-808dbc7d34dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:41:20 compute-0 nova_compute[254092]: 2025-11-25 16:41:20.739 254096 DEBUG nova.network.os_vif_util [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:f9:27,bridge_name='br-int',has_traffic_filtering=True,id=60897ca4-9177-413c-b0f0-808dbc7d34dc,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60897ca4-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:41:20 compute-0 nova_compute[254092]: 2025-11-25 16:41:20.739 254096 DEBUG os_vif [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:f9:27,bridge_name='br-int',has_traffic_filtering=True,id=60897ca4-9177-413c-b0f0-808dbc7d34dc,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60897ca4-91') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:41:20 compute-0 nova_compute[254092]: 2025-11-25 16:41:20.741 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:20 compute-0 nova_compute[254092]: 2025-11-25 16:41:20.742 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60897ca4-91, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:20 compute-0 virtnodedevd[254417]: ethtool ioctl error on tap60897ca4-91: No such device
Nov 25 16:41:20 compute-0 nova_compute[254092]: 2025-11-25 16:41:20.743 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:20 compute-0 nova_compute[254092]: 2025-11-25 16:41:20.744 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:20 compute-0 nova_compute[254092]: 2025-11-25 16:41:20.746 254096 INFO os_vif [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:f9:27,bridge_name='br-int',has_traffic_filtering=True,id=60897ca4-9177-413c-b0f0-808dbc7d34dc,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60897ca4-91')
Nov 25 16:41:20 compute-0 virtnodedevd[254417]: ethtool ioctl error on tap60897ca4-91: No such device
Nov 25 16:41:20 compute-0 virtnodedevd[254417]: ethtool ioctl error on tap60897ca4-91: No such device
Nov 25 16:41:20 compute-0 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[323307]: [NOTICE]   (323311) : haproxy version is 2.8.14-c23fe91
Nov 25 16:41:20 compute-0 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[323307]: [NOTICE]   (323311) : path to executable is /usr/sbin/haproxy
Nov 25 16:41:20 compute-0 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[323307]: [WARNING]  (323311) : Exiting Master process...
Nov 25 16:41:20 compute-0 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[323307]: [ALERT]    (323311) : Current worker (323313) exited with code 143 (Terminated)
Nov 25 16:41:20 compute-0 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[323307]: [WARNING]  (323311) : All workers exited. Exiting... (0)
Nov 25 16:41:20 compute-0 systemd[1]: libpod-dfcfe9b9228868575fedb1ac0288483fe3cc359ae4f6ba0b0ab547bf53835727.scope: Deactivated successfully.
Nov 25 16:41:20 compute-0 podman[323406]: 2025-11-25 16:41:20.806318055 +0000 UTC m=+0.244969037 container died dfcfe9b9228868575fedb1ac0288483fe3cc359ae4f6ba0b0ab547bf53835727 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 16:41:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:41:20 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2766047327' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:41:20 compute-0 nova_compute[254092]: 2025-11-25 16:41:20.856 254096 DEBUG oslo_concurrency.processutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:20 compute-0 nova_compute[254092]: 2025-11-25 16:41:20.859 254096 DEBUG nova.virt.libvirt.vif [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:41:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1278256596',display_name='tempest-ListServerFiltersTestJSON-instance-1278256596',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1278256596',id=62,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2319f378c7fb448ebe89427d8dfa7e43',ramdisk_id='',reservation_id='r-t7tk7x1h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-675991579',owner_user_name='tempest-ListServerFiltersTestJSON-675991579-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:41:05Z,user_data=None,user_id='e0d077039e0b4d9e8d5663768f40fa48',uuid=013dc18e-57cd-4733-8e98-7d20e3b5c4db,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5136afea-102e-46a1-8fdb-0af970c5af04", "address": "fa:16:3e:70:61:e3", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5136afea-10", "ovs_interfaceid": "5136afea-102e-46a1-8fdb-0af970c5af04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:41:20 compute-0 nova_compute[254092]: 2025-11-25 16:41:20.859 254096 DEBUG nova.network.os_vif_util [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Converting VIF {"id": "5136afea-102e-46a1-8fdb-0af970c5af04", "address": "fa:16:3e:70:61:e3", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5136afea-10", "ovs_interfaceid": "5136afea-102e-46a1-8fdb-0af970c5af04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:41:20 compute-0 nova_compute[254092]: 2025-11-25 16:41:20.861 254096 DEBUG nova.network.os_vif_util [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:61:e3,bridge_name='br-int',has_traffic_filtering=True,id=5136afea-102e-46a1-8fdb-0af970c5af04,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5136afea-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:41:20 compute-0 nova_compute[254092]: 2025-11-25 16:41:20.863 254096 DEBUG nova.objects.instance [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lazy-loading 'pci_devices' on Instance uuid 013dc18e-57cd-4733-8e98-7d20e3b5c4db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:41:20 compute-0 nova_compute[254092]: 2025-11-25 16:41:20.897 254096 DEBUG nova.virt.libvirt.driver [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:41:20 compute-0 nova_compute[254092]:   <uuid>013dc18e-57cd-4733-8e98-7d20e3b5c4db</uuid>
Nov 25 16:41:20 compute-0 nova_compute[254092]:   <name>instance-0000003e</name>
Nov 25 16:41:20 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:41:20 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:41:20 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:41:20 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-1278256596</nova:name>
Nov 25 16:41:20 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:41:19</nova:creationTime>
Nov 25 16:41:20 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:41:20 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:41:20 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:41:20 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:41:20 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:41:20 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:41:20 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:41:20 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:41:20 compute-0 nova_compute[254092]:         <nova:user uuid="e0d077039e0b4d9e8d5663768f40fa48">tempest-ListServerFiltersTestJSON-675991579-project-member</nova:user>
Nov 25 16:41:20 compute-0 nova_compute[254092]:         <nova:project uuid="2319f378c7fb448ebe89427d8dfa7e43">tempest-ListServerFiltersTestJSON-675991579</nova:project>
Nov 25 16:41:20 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:41:20 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:41:20 compute-0 nova_compute[254092]:         <nova:port uuid="5136afea-102e-46a1-8fdb-0af970c5af04">
Nov 25 16:41:20 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:41:20 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:41:20 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:41:20 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:41:20 compute-0 nova_compute[254092]:     <system>
Nov 25 16:41:20 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:41:20 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:41:20 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:41:20 compute-0 nova_compute[254092]:       <entry name="serial">013dc18e-57cd-4733-8e98-7d20e3b5c4db</entry>
Nov 25 16:41:20 compute-0 nova_compute[254092]:       <entry name="uuid">013dc18e-57cd-4733-8e98-7d20e3b5c4db</entry>
Nov 25 16:41:20 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     </system>
Nov 25 16:41:20 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:41:20 compute-0 nova_compute[254092]:   <os>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:   </os>
Nov 25 16:41:20 compute-0 nova_compute[254092]:   <features>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:   </features>
Nov 25 16:41:20 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:41:20 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:41:20 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:41:20 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:41:20 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:41:20 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/013dc18e-57cd-4733-8e98-7d20e3b5c4db_disk">
Nov 25 16:41:20 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:       </source>
Nov 25 16:41:20 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:41:20 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:41:20 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:41:20 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/013dc18e-57cd-4733-8e98-7d20e3b5c4db_disk.config">
Nov 25 16:41:20 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:       </source>
Nov 25 16:41:20 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:41:20 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:41:20 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:41:20 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:70:61:e3"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:       <target dev="tap5136afea-10"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:41:20 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/013dc18e-57cd-4733-8e98-7d20e3b5c4db/console.log" append="off"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     <video>
Nov 25 16:41:20 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     </video>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:41:20 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:41:20 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:41:20 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:41:20 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:41:20 compute-0 nova_compute[254092]: </domain>
Nov 25 16:41:20 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:41:20 compute-0 nova_compute[254092]: 2025-11-25 16:41:20.899 254096 DEBUG nova.compute.manager [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Preparing to wait for external event network-vif-plugged-5136afea-102e-46a1-8fdb-0af970c5af04 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:41:20 compute-0 nova_compute[254092]: 2025-11-25 16:41:20.900 254096 DEBUG oslo_concurrency.lockutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:20 compute-0 nova_compute[254092]: 2025-11-25 16:41:20.900 254096 DEBUG oslo_concurrency.lockutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:20 compute-0 nova_compute[254092]: 2025-11-25 16:41:20.900 254096 DEBUG oslo_concurrency.lockutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:20 compute-0 nova_compute[254092]: 2025-11-25 16:41:20.901 254096 DEBUG nova.virt.libvirt.vif [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:41:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1278256596',display_name='tempest-ListServerFiltersTestJSON-instance-1278256596',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1278256596',id=62,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2319f378c7fb448ebe89427d8dfa7e43',ramdisk_id='',reservation_id='r-t7tk7x1h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-675991579',owner_user_name='tempest-ListServerFiltersTestJSON-675991579-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:41:05Z,user_data=None,user_id='e0d077039e0b4d9e8d5663768f40fa48',uuid=013dc18e-57cd-4733-8e98-7d20e3b5c4db,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5136afea-102e-46a1-8fdb-0af970c5af04", "address": "fa:16:3e:70:61:e3", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5136afea-10", "ovs_interfaceid": "5136afea-102e-46a1-8fdb-0af970c5af04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:41:20 compute-0 nova_compute[254092]: 2025-11-25 16:41:20.902 254096 DEBUG nova.network.os_vif_util [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Converting VIF {"id": "5136afea-102e-46a1-8fdb-0af970c5af04", "address": "fa:16:3e:70:61:e3", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5136afea-10", "ovs_interfaceid": "5136afea-102e-46a1-8fdb-0af970c5af04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:41:20 compute-0 nova_compute[254092]: 2025-11-25 16:41:20.902 254096 DEBUG nova.network.os_vif_util [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:61:e3,bridge_name='br-int',has_traffic_filtering=True,id=5136afea-102e-46a1-8fdb-0af970c5af04,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5136afea-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:41:20 compute-0 nova_compute[254092]: 2025-11-25 16:41:20.903 254096 DEBUG os_vif [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:61:e3,bridge_name='br-int',has_traffic_filtering=True,id=5136afea-102e-46a1-8fdb-0af970c5af04,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5136afea-10') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:41:20 compute-0 nova_compute[254092]: 2025-11-25 16:41:20.903 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:20 compute-0 nova_compute[254092]: 2025-11-25 16:41:20.904 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:20 compute-0 nova_compute[254092]: 2025-11-25 16:41:20.904 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:41:20 compute-0 nova_compute[254092]: 2025-11-25 16:41:20.908 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:20 compute-0 nova_compute[254092]: 2025-11-25 16:41:20.908 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5136afea-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:20 compute-0 nova_compute[254092]: 2025-11-25 16:41:20.909 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5136afea-10, col_values=(('external_ids', {'iface-id': '5136afea-102e-46a1-8fdb-0af970c5af04', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:70:61:e3', 'vm-uuid': '013dc18e-57cd-4733-8e98-7d20e3b5c4db'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:20 compute-0 NetworkManager[48891]: <info>  [1764088880.9124] manager: (tap5136afea-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/255)
Nov 25 16:41:20 compute-0 nova_compute[254092]: 2025-11-25 16:41:20.916 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:20 compute-0 nova_compute[254092]: 2025-11-25 16:41:20.978 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:41:20 compute-0 nova_compute[254092]: 2025-11-25 16:41:20.979 254096 INFO os_vif [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:61:e3,bridge_name='br-int',has_traffic_filtering=True,id=5136afea-102e-46a1-8fdb-0af970c5af04,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5136afea-10')
Nov 25 16:41:21 compute-0 ovn_controller[153477]: 2025-11-25T16:41:21Z|00588|binding|INFO|Releasing lport 60897ca4-9177-413c-b0f0-808dbc7d34dc from this chassis (sb_readonly=0)
Nov 25 16:41:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:21.023 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:f9:27 10.100.0.13'], port_security=['fa:16:3e:43:f9:27 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'fef208e1-3706-4d03-8385-12418e9dc230', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ff6b3c59785407bb06c9d7e3969ea4f', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'b774cfb5-cd97-4ca0-9d7e-110986406220', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d2dad0a-51f1-4edf-9261-6f40860e89ef, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=60897ca4-9177-413c-b0f0-808dbc7d34dc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:41:21 compute-0 nova_compute[254092]: 2025-11-25 16:41:21.052 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:21.186 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:f9:27 10.100.0.13'], port_security=['fa:16:3e:43:f9:27 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'fef208e1-3706-4d03-8385-12418e9dc230', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ff6b3c59785407bb06c9d7e3969ea4f', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'b774cfb5-cd97-4ca0-9d7e-110986406220', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d2dad0a-51f1-4edf-9261-6f40860e89ef, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=60897ca4-9177-413c-b0f0-808dbc7d34dc) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:41:21 compute-0 ceph-mon[74985]: pgmap v1678: 321 pgs: 321 active+clean; 226 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 91 KiB/s rd, 7.1 MiB/s wr, 142 op/s
Nov 25 16:41:21 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2009668596' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:41:21 compute-0 nova_compute[254092]: 2025-11-25 16:41:21.277 254096 DEBUG nova.virt.libvirt.driver [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:41:21 compute-0 nova_compute[254092]: 2025-11-25 16:41:21.277 254096 DEBUG nova.virt.libvirt.driver [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:41:21 compute-0 nova_compute[254092]: 2025-11-25 16:41:21.278 254096 DEBUG nova.virt.libvirt.driver [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] No VIF found with MAC fa:16:3e:70:61:e3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:41:21 compute-0 nova_compute[254092]: 2025-11-25 16:41:21.278 254096 INFO nova.virt.libvirt.driver [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Using config drive
Nov 25 16:41:21 compute-0 nova_compute[254092]: 2025-11-25 16:41:21.470 254096 DEBUG nova.storage.rbd_utils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] rbd image 013dc18e-57cd-4733-8e98-7d20e3b5c4db_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1679: 321 pgs: 321 active+clean; 227 MiB data, 616 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 7.1 MiB/s wr, 212 op/s
Nov 25 16:41:21 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dfcfe9b9228868575fedb1ac0288483fe3cc359ae4f6ba0b0ab547bf53835727-userdata-shm.mount: Deactivated successfully.
Nov 25 16:41:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bc6a3717c2bdadab6ffc5e32dfd90bd78792ae46a7514d2c4fc20fc246fb569-merged.mount: Deactivated successfully.
Nov 25 16:41:21 compute-0 podman[323406]: 2025-11-25 16:41:21.966182172 +0000 UTC m=+1.404833154 container cleanup dfcfe9b9228868575fedb1ac0288483fe3cc359ae4f6ba0b0ab547bf53835727 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 16:41:22 compute-0 podman[323496]: 2025-11-25 16:41:22.313167355 +0000 UTC m=+0.326203671 container remove dfcfe9b9228868575fedb1ac0288483fe3cc359ae4f6ba0b0ab547bf53835727 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 16:41:22 compute-0 systemd[1]: libpod-conmon-dfcfe9b9228868575fedb1ac0288483fe3cc359ae4f6ba0b0ab547bf53835727.scope: Deactivated successfully.
Nov 25 16:41:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:22.319 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d4a3ed69-84e5-4d16-9c00-9a62aebf0c30]: (4, ('Tue Nov 25 04:41:20 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 (dfcfe9b9228868575fedb1ac0288483fe3cc359ae4f6ba0b0ab547bf53835727)\ndfcfe9b9228868575fedb1ac0288483fe3cc359ae4f6ba0b0ab547bf53835727\nTue Nov 25 04:41:21 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 (dfcfe9b9228868575fedb1ac0288483fe3cc359ae4f6ba0b0ab547bf53835727)\ndfcfe9b9228868575fedb1ac0288483fe3cc359ae4f6ba0b0ab547bf53835727\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:22.322 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[609822df-e4dc-482f-bc48-48785b88efa3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:22.323 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62c0a8be-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:22 compute-0 nova_compute[254092]: 2025-11-25 16:41:22.325 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:22 compute-0 kernel: tap62c0a8be-b0: left promiscuous mode
Nov 25 16:41:22 compute-0 nova_compute[254092]: 2025-11-25 16:41:22.345 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:22.348 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d15aed22-8c92-48f8-8109-51b3ff41ba7a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:22.367 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9e1ce592-f093-4dde-97d4-584b04b5ed37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:22.368 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b076c51c-62a0-4dc5-b5aa-fd5453c85d4a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:22.387 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[29b57587-c5e4-4236-b09a-af7c079e047c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 533338, 'reachable_time': 33699, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323515, 'error': None, 'target': 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:22.391 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:41:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:22.391 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[19a718b6-f9bc-4817-9b6b-0bf20ac79b1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:22.392 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 60897ca4-9177-413c-b0f0-808dbc7d34dc in datapath 62c0a8be-b828-4765-9cf8-f2477a8ef1d9 unbound from our chassis
Nov 25 16:41:22 compute-0 systemd[1]: run-netns-ovnmeta\x2d62c0a8be\x2db828\x2d4765\x2d9cf8\x2df2477a8ef1d9.mount: Deactivated successfully.
Nov 25 16:41:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:22.394 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 62c0a8be-b828-4765-9cf8-f2477a8ef1d9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:41:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:22.395 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[988d2694-8122-4ca4-8ea4-fe10ac7e515a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:22.396 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 60897ca4-9177-413c-b0f0-808dbc7d34dc in datapath 62c0a8be-b828-4765-9cf8-f2477a8ef1d9 unbound from our chassis
Nov 25 16:41:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:22.397 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 62c0a8be-b828-4765-9cf8-f2477a8ef1d9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:41:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:22.397 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[26125889-0c33-458f-a3b2-9d8dbfc64b17]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:22 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2766047327' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:41:22 compute-0 ceph-mon[74985]: pgmap v1679: 321 pgs: 321 active+clean; 227 MiB data, 616 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 7.1 MiB/s wr, 212 op/s
Nov 25 16:41:23 compute-0 nova_compute[254092]: 2025-11-25 16:41:23.184 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1680: 321 pgs: 321 active+clean; 227 MiB data, 616 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 155 op/s
Nov 25 16:41:23 compute-0 nova_compute[254092]: 2025-11-25 16:41:23.811 254096 INFO nova.virt.libvirt.driver [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Creating config drive at /var/lib/nova/instances/013dc18e-57cd-4733-8e98-7d20e3b5c4db/disk.config
Nov 25 16:41:23 compute-0 nova_compute[254092]: 2025-11-25 16:41:23.816 254096 DEBUG oslo_concurrency.processutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/013dc18e-57cd-4733-8e98-7d20e3b5c4db/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaqiov4s6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:23 compute-0 nova_compute[254092]: 2025-11-25 16:41:23.854 254096 DEBUG nova.network.neutron [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Successfully updated port: 1404e99c-a32c-404a-a7d6-3daccc67c48b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:41:23 compute-0 nova_compute[254092]: 2025-11-25 16:41:23.877 254096 DEBUG oslo_concurrency.lockutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "refresh_cache-b5c5a442-8e8e-40c5-9634-e36c49e6e41b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:41:23 compute-0 nova_compute[254092]: 2025-11-25 16:41:23.878 254096 DEBUG oslo_concurrency.lockutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquired lock "refresh_cache-b5c5a442-8e8e-40c5-9634-e36c49e6e41b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:41:23 compute-0 nova_compute[254092]: 2025-11-25 16:41:23.878 254096 DEBUG nova.network.neutron [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:41:23 compute-0 nova_compute[254092]: 2025-11-25 16:41:23.904 254096 DEBUG nova.network.neutron [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Updating instance_info_cache with network_info: [{"id": "9e60e140-ca34-40f4-b867-d7c53f05bca4", "address": "fa:16:3e:67:c0:a9", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e60e140-ca", "ovs_interfaceid": "9e60e140-ca34-40f4-b867-d7c53f05bca4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:41:23 compute-0 nova_compute[254092]: 2025-11-25 16:41:23.931 254096 DEBUG oslo_concurrency.lockutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Releasing lock "refresh_cache-dce3a591-9fb6-4495-a7fb-867af2de384f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:41:23 compute-0 nova_compute[254092]: 2025-11-25 16:41:23.931 254096 DEBUG nova.compute.manager [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Instance network_info: |[{"id": "9e60e140-ca34-40f4-b867-d7c53f05bca4", "address": "fa:16:3e:67:c0:a9", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e60e140-ca", "ovs_interfaceid": "9e60e140-ca34-40f4-b867-d7c53f05bca4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:41:23 compute-0 nova_compute[254092]: 2025-11-25 16:41:23.932 254096 DEBUG oslo_concurrency.lockutils [req-908a1656-816e-42ea-8a0c-8c599e2f651a req-7398eee9-520d-46f3-a254-1ebb609f6de1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-dce3a591-9fb6-4495-a7fb-867af2de384f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:41:23 compute-0 nova_compute[254092]: 2025-11-25 16:41:23.932 254096 DEBUG nova.network.neutron [req-908a1656-816e-42ea-8a0c-8c599e2f651a req-7398eee9-520d-46f3-a254-1ebb609f6de1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Refreshing network info cache for port 9e60e140-ca34-40f4-b867-d7c53f05bca4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:41:23 compute-0 nova_compute[254092]: 2025-11-25 16:41:23.935 254096 DEBUG nova.virt.libvirt.driver [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Start _get_guest_xml network_info=[{"id": "9e60e140-ca34-40f4-b867-d7c53f05bca4", "address": "fa:16:3e:67:c0:a9", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e60e140-ca", "ovs_interfaceid": "9e60e140-ca34-40f4-b867-d7c53f05bca4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:36Z,direct_url=<?>,disk_format='qcow2',id=a4aa3708-bb73-4b5a-b3f3-42153358021e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': 'a4aa3708-bb73-4b5a-b3f3-42153358021e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:41:23 compute-0 nova_compute[254092]: 2025-11-25 16:41:23.939 254096 WARNING nova.virt.libvirt.driver [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:41:23 compute-0 nova_compute[254092]: 2025-11-25 16:41:23.945 254096 DEBUG nova.virt.libvirt.host [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:41:23 compute-0 nova_compute[254092]: 2025-11-25 16:41:23.946 254096 DEBUG nova.virt.libvirt.host [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:41:23 compute-0 nova_compute[254092]: 2025-11-25 16:41:23.952 254096 DEBUG nova.virt.libvirt.host [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:41:23 compute-0 nova_compute[254092]: 2025-11-25 16:41:23.952 254096 DEBUG nova.virt.libvirt.host [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:41:23 compute-0 nova_compute[254092]: 2025-11-25 16:41:23.953 254096 DEBUG nova.virt.libvirt.driver [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:41:23 compute-0 nova_compute[254092]: 2025-11-25 16:41:23.953 254096 DEBUG nova.virt.hardware [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:36Z,direct_url=<?>,disk_format='qcow2',id=a4aa3708-bb73-4b5a-b3f3-42153358021e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:41:23 compute-0 nova_compute[254092]: 2025-11-25 16:41:23.954 254096 DEBUG nova.virt.hardware [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:41:23 compute-0 nova_compute[254092]: 2025-11-25 16:41:23.954 254096 DEBUG nova.virt.hardware [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:41:23 compute-0 nova_compute[254092]: 2025-11-25 16:41:23.954 254096 DEBUG nova.virt.hardware [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:41:23 compute-0 nova_compute[254092]: 2025-11-25 16:41:23.955 254096 DEBUG nova.virt.hardware [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:41:23 compute-0 nova_compute[254092]: 2025-11-25 16:41:23.955 254096 DEBUG nova.virt.hardware [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:41:23 compute-0 nova_compute[254092]: 2025-11-25 16:41:23.955 254096 DEBUG nova.virt.hardware [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:41:23 compute-0 nova_compute[254092]: 2025-11-25 16:41:23.955 254096 DEBUG nova.virt.hardware [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:41:23 compute-0 nova_compute[254092]: 2025-11-25 16:41:23.956 254096 DEBUG nova.virt.hardware [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:41:23 compute-0 nova_compute[254092]: 2025-11-25 16:41:23.956 254096 DEBUG nova.virt.hardware [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:41:23 compute-0 nova_compute[254092]: 2025-11-25 16:41:23.956 254096 DEBUG nova.virt.hardware [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:41:23 compute-0 nova_compute[254092]: 2025-11-25 16:41:23.959 254096 DEBUG oslo_concurrency.processutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:23 compute-0 nova_compute[254092]: 2025-11-25 16:41:23.987 254096 DEBUG oslo_concurrency.processutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/013dc18e-57cd-4733-8e98-7d20e3b5c4db/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaqiov4s6" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.009 254096 DEBUG nova.storage.rbd_utils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] rbd image 013dc18e-57cd-4733-8e98-7d20e3b5c4db_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.013 254096 DEBUG oslo_concurrency.processutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/013dc18e-57cd-4733-8e98-7d20e3b5c4db/disk.config 013dc18e-57cd-4733-8e98-7d20e3b5c4db_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.194 254096 DEBUG nova.network.neutron [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.230 254096 INFO nova.virt.libvirt.driver [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Deleting instance files /var/lib/nova/instances/fef208e1-3706-4d03-8385-12418e9dc230_del
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.231 254096 INFO nova.virt.libvirt.driver [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Deletion of /var/lib/nova/instances/fef208e1-3706-4d03-8385-12418e9dc230_del complete
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.249 254096 DEBUG oslo_concurrency.processutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/013dc18e-57cd-4733-8e98-7d20e3b5c4db/disk.config 013dc18e-57cd-4733-8e98-7d20e3b5c4db_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.236s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.250 254096 INFO nova.virt.libvirt.driver [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Deleting local config drive /var/lib/nova/instances/013dc18e-57cd-4733-8e98-7d20e3b5c4db/disk.config because it was imported into RBD.
Nov 25 16:41:24 compute-0 kernel: tap5136afea-10: entered promiscuous mode
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.302 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:24 compute-0 NetworkManager[48891]: <info>  [1764088884.3038] manager: (tap5136afea-10): new Tun device (/org/freedesktop/NetworkManager/Devices/256)
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.305 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:24 compute-0 ovn_controller[153477]: 2025-11-25T16:41:24Z|00589|binding|INFO|Claiming lport 5136afea-102e-46a1-8fdb-0af970c5af04 for this chassis.
Nov 25 16:41:24 compute-0 ovn_controller[153477]: 2025-11-25T16:41:24Z|00590|binding|INFO|5136afea-102e-46a1-8fdb-0af970c5af04: Claiming fa:16:3e:70:61:e3 10.100.0.13
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.319 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:61:e3 10.100.0.13'], port_security=['fa:16:3e:70:61:e3 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '013dc18e-57cd-4733-8e98-7d20e3b5c4db', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2319f378c7fb448ebe89427d8dfa7e43', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ccb207e5-4339-4ba4-8da5-c985ba20f305', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32b6d34d-eb18-415e-af2d-9e007dc0f6c6, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=5136afea-102e-46a1-8fdb-0af970c5af04) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.321 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 5136afea-102e-46a1-8fdb-0af970c5af04 in datapath f00f265b-63fa-48fb-9383-38ff6abf51c1 bound to our chassis
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.322 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f00f265b-63fa-48fb-9383-38ff6abf51c1
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.331 254096 INFO nova.compute.manager [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Took 4.10 seconds to destroy the instance on the hypervisor.
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.331 254096 DEBUG oslo.service.loopingcall [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.332 254096 DEBUG nova.compute.manager [-] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.332 254096 DEBUG nova.network.neutron [-] [instance: fef208e1-3706-4d03-8385-12418e9dc230] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:41:24 compute-0 systemd-udevd[323590]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.336 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[758ff0bb-eb7a-470a-be92-3130dc2b505f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.337 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf00f265b-61 in ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.338 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf00f265b-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.338 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3a6755d9-a128-4360-88bb-9f31940acf17]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:24 compute-0 systemd-machined[216343]: New machine qemu-73-instance-0000003e.
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.345 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d0db630d-31e5-49ea-aa79-a4642d1bd52d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:24 compute-0 NetworkManager[48891]: <info>  [1764088884.3523] device (tap5136afea-10): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:41:24 compute-0 NetworkManager[48891]: <info>  [1764088884.3536] device (tap5136afea-10): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:41:24 compute-0 systemd[1]: Started Virtual Machine qemu-73-instance-0000003e.
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.358 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:24 compute-0 ovn_controller[153477]: 2025-11-25T16:41:24Z|00591|binding|INFO|Setting lport 5136afea-102e-46a1-8fdb-0af970c5af04 ovn-installed in OVS
Nov 25 16:41:24 compute-0 ovn_controller[153477]: 2025-11-25T16:41:24Z|00592|binding|INFO|Setting lport 5136afea-102e-46a1-8fdb-0af970c5af04 up in Southbound
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.361 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.362 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[0e0643ed-8fa0-4fd9-af88-8d9f54ee760d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.387 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ea1174d8-aad6-42d1-82fb-762d4dc5f209]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:41:24 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3054519550' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.418 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f7c3f56d-b1e5-47a0-87d1-55d20b6d8fc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:24 compute-0 systemd-udevd[323594]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.424 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9e81bf8d-95c7-4c12-878a-f704ef779e20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:24 compute-0 NetworkManager[48891]: <info>  [1764088884.4258] manager: (tapf00f265b-60): new Veth device (/org/freedesktop/NetworkManager/Devices/257)
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.427 254096 DEBUG oslo_concurrency.processutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.458 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[cd738bc4-4328-4e8d-8940-74601e482f59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.462 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[31247e0a-0051-4729-8603-3578093eeb39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.466 254096 DEBUG nova.storage.rbd_utils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] rbd image dce3a591-9fb6-4495-a7fb-867af2de384f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.476 254096 DEBUG oslo_concurrency.processutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:24 compute-0 NetworkManager[48891]: <info>  [1764088884.4834] device (tapf00f265b-60): carrier: link connected
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.489 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[dfe8c11a-f4dc-4281-8179-ef705036f61f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.505 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bf27eaa0-0536-40fe-8920-60cd79f6d6ce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf00f265b-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:fc:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534207, 'reachable_time': 15987, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323644, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.507 254096 DEBUG nova.network.neutron [req-331854a5-bdca-4f81-bd74-e74835f5833f req-a6cf7e2c-b939-44f6-b6b6-0753a8572508 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Updated VIF entry in instance network info cache for port 5136afea-102e-46a1-8fdb-0af970c5af04. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.513 254096 DEBUG nova.network.neutron [req-331854a5-bdca-4f81-bd74-e74835f5833f req-a6cf7e2c-b939-44f6-b6b6-0753a8572508 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Updating instance_info_cache with network_info: [{"id": "5136afea-102e-46a1-8fdb-0af970c5af04", "address": "fa:16:3e:70:61:e3", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5136afea-10", "ovs_interfaceid": "5136afea-102e-46a1-8fdb-0af970c5af04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.523 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[feae20a5-451d-42ed-b5df-ff6539cef218]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe27:fc66'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534207, 'tstamp': 534207}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323645, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.524 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.532 254096 DEBUG oslo_concurrency.lockutils [req-331854a5-bdca-4f81-bd74-e74835f5833f req-a6cf7e2c-b939-44f6-b6b6-0753a8572508 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-013dc18e-57cd-4733-8e98-7d20e3b5c4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.541 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[04fc487a-0b2f-4158-aa0e-b813978277f2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf00f265b-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:fc:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534207, 'reachable_time': 15987, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 323646, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.569 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[885e4165-070c-47ae-9ea0-671d466a1794]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:24 compute-0 ceph-mon[74985]: pgmap v1680: 321 pgs: 321 active+clean; 227 MiB data, 616 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 155 op/s
Nov 25 16:41:24 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3054519550' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.625 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2164b1e9-3b63-498b-aa60-e444fb4073da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.626 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf00f265b-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.627 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.627 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf00f265b-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:24 compute-0 kernel: tapf00f265b-60: entered promiscuous mode
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.628 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:24 compute-0 NetworkManager[48891]: <info>  [1764088884.6295] manager: (tapf00f265b-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/258)
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.631 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.631 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf00f265b-60, col_values=(('external_ids', {'iface-id': '57c889f7-e44b-4f52-8e8a-db17b4e1f3b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.632 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:24 compute-0 ovn_controller[153477]: 2025-11-25T16:41:24Z|00593|binding|INFO|Releasing lport 57c889f7-e44b-4f52-8e8a-db17b4e1f3b8 from this chassis (sb_readonly=0)
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.653 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.653 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f00f265b-63fa-48fb-9383-38ff6abf51c1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f00f265b-63fa-48fb-9383-38ff6abf51c1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.654 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[81b38d15-988e-448e-b81e-90094a0d7ece]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.655 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-f00f265b-63fa-48fb-9383-38ff6abf51c1
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/f00f265b-63fa-48fb-9383-38ff6abf51c1.pid.haproxy
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID f00f265b-63fa-48fb-9383-38ff6abf51c1
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:41:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:24.657 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'env', 'PROCESS_TAG=haproxy-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f00f265b-63fa-48fb-9383-38ff6abf51c1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:41:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:41:24 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3750340307' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.931 254096 DEBUG oslo_concurrency.processutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.933 254096 DEBUG nova.virt.libvirt.vif [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:41:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-596690940',display_name='tempest-ListServerFiltersTestJSON-instance-596690940',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-596690940',id=63,image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2319f378c7fb448ebe89427d8dfa7e43',ramdisk_id='',reservation_id='r-pzh0k6o8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-675991579',owner_user_name='tempest-List
ServerFiltersTestJSON-675991579-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:41:08Z,user_data=None,user_id='e0d077039e0b4d9e8d5663768f40fa48',uuid=dce3a591-9fb6-4495-a7fb-867af2de384f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9e60e140-ca34-40f4-b867-d7c53f05bca4", "address": "fa:16:3e:67:c0:a9", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e60e140-ca", "ovs_interfaceid": "9e60e140-ca34-40f4-b867-d7c53f05bca4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.934 254096 DEBUG nova.network.os_vif_util [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Converting VIF {"id": "9e60e140-ca34-40f4-b867-d7c53f05bca4", "address": "fa:16:3e:67:c0:a9", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e60e140-ca", "ovs_interfaceid": "9e60e140-ca34-40f4-b867-d7c53f05bca4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.935 254096 DEBUG nova.network.os_vif_util [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:c0:a9,bridge_name='br-int',has_traffic_filtering=True,id=9e60e140-ca34-40f4-b867-d7c53f05bca4,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e60e140-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.936 254096 DEBUG nova.objects.instance [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lazy-loading 'pci_devices' on Instance uuid dce3a591-9fb6-4495-a7fb-867af2de384f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.958 254096 DEBUG nova.virt.libvirt.driver [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:41:24 compute-0 nova_compute[254092]:   <uuid>dce3a591-9fb6-4495-a7fb-867af2de384f</uuid>
Nov 25 16:41:24 compute-0 nova_compute[254092]:   <name>instance-0000003f</name>
Nov 25 16:41:24 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:41:24 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:41:24 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:41:24 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-596690940</nova:name>
Nov 25 16:41:24 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:41:23</nova:creationTime>
Nov 25 16:41:24 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:41:24 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:41:24 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:41:24 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:41:24 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:41:24 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:41:24 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:41:24 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:41:24 compute-0 nova_compute[254092]:         <nova:user uuid="e0d077039e0b4d9e8d5663768f40fa48">tempest-ListServerFiltersTestJSON-675991579-project-member</nova:user>
Nov 25 16:41:24 compute-0 nova_compute[254092]:         <nova:project uuid="2319f378c7fb448ebe89427d8dfa7e43">tempest-ListServerFiltersTestJSON-675991579</nova:project>
Nov 25 16:41:24 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:41:24 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="a4aa3708-bb73-4b5a-b3f3-42153358021e"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:41:24 compute-0 nova_compute[254092]:         <nova:port uuid="9e60e140-ca34-40f4-b867-d7c53f05bca4">
Nov 25 16:41:24 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:41:24 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:41:24 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:41:24 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:41:24 compute-0 nova_compute[254092]:     <system>
Nov 25 16:41:24 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:41:24 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:41:24 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:41:24 compute-0 nova_compute[254092]:       <entry name="serial">dce3a591-9fb6-4495-a7fb-867af2de384f</entry>
Nov 25 16:41:24 compute-0 nova_compute[254092]:       <entry name="uuid">dce3a591-9fb6-4495-a7fb-867af2de384f</entry>
Nov 25 16:41:24 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     </system>
Nov 25 16:41:24 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:41:24 compute-0 nova_compute[254092]:   <os>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:   </os>
Nov 25 16:41:24 compute-0 nova_compute[254092]:   <features>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:   </features>
Nov 25 16:41:24 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:41:24 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:41:24 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:41:24 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:41:24 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:41:24 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/dce3a591-9fb6-4495-a7fb-867af2de384f_disk">
Nov 25 16:41:24 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:       </source>
Nov 25 16:41:24 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:41:24 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:41:24 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:41:24 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/dce3a591-9fb6-4495-a7fb-867af2de384f_disk.config">
Nov 25 16:41:24 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:       </source>
Nov 25 16:41:24 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:41:24 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:41:24 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:41:24 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:67:c0:a9"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:       <target dev="tap9e60e140-ca"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:41:24 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/dce3a591-9fb6-4495-a7fb-867af2de384f/console.log" append="off"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     <video>
Nov 25 16:41:24 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     </video>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:41:24 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:41:24 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:41:24 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:41:24 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:41:24 compute-0 nova_compute[254092]: </domain>
Nov 25 16:41:24 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.960 254096 DEBUG nova.compute.manager [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Preparing to wait for external event network-vif-plugged-9e60e140-ca34-40f4-b867-d7c53f05bca4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.960 254096 DEBUG oslo_concurrency.lockutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "dce3a591-9fb6-4495-a7fb-867af2de384f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.960 254096 DEBUG oslo_concurrency.lockutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "dce3a591-9fb6-4495-a7fb-867af2de384f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.960 254096 DEBUG oslo_concurrency.lockutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "dce3a591-9fb6-4495-a7fb-867af2de384f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.961 254096 DEBUG nova.virt.libvirt.vif [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:41:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-596690940',display_name='tempest-ListServerFiltersTestJSON-instance-596690940',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-596690940',id=63,image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2319f378c7fb448ebe89427d8dfa7e43',ramdisk_id='',reservation_id='r-pzh0k6o8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-675991579',owner_user_name='te
mpest-ListServerFiltersTestJSON-675991579-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:41:08Z,user_data=None,user_id='e0d077039e0b4d9e8d5663768f40fa48',uuid=dce3a591-9fb6-4495-a7fb-867af2de384f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9e60e140-ca34-40f4-b867-d7c53f05bca4", "address": "fa:16:3e:67:c0:a9", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e60e140-ca", "ovs_interfaceid": "9e60e140-ca34-40f4-b867-d7c53f05bca4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.961 254096 DEBUG nova.network.os_vif_util [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Converting VIF {"id": "9e60e140-ca34-40f4-b867-d7c53f05bca4", "address": "fa:16:3e:67:c0:a9", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e60e140-ca", "ovs_interfaceid": "9e60e140-ca34-40f4-b867-d7c53f05bca4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.962 254096 DEBUG nova.network.os_vif_util [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:c0:a9,bridge_name='br-int',has_traffic_filtering=True,id=9e60e140-ca34-40f4-b867-d7c53f05bca4,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e60e140-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.962 254096 DEBUG os_vif [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:c0:a9,bridge_name='br-int',has_traffic_filtering=True,id=9e60e140-ca34-40f4-b867-d7c53f05bca4,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e60e140-ca') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.963 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.963 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.964 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.966 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.967 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9e60e140-ca, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.967 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9e60e140-ca, col_values=(('external_ids', {'iface-id': '9e60e140-ca34-40f4-b867-d7c53f05bca4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:67:c0:a9', 'vm-uuid': 'dce3a591-9fb6-4495-a7fb-867af2de384f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.968 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:24 compute-0 NetworkManager[48891]: <info>  [1764088884.9696] manager: (tap9e60e140-ca): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/259)
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.971 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.975 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088884.974942, 013dc18e-57cd-4733-8e98-7d20e3b5c4db => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.975 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] VM Started (Lifecycle Event)
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.977 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:24 compute-0 nova_compute[254092]: 2025-11-25 16:41:24.978 254096 INFO os_vif [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:c0:a9,bridge_name='br-int',has_traffic_filtering=True,id=9e60e140-ca34-40f4-b867-d7c53f05bca4,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e60e140-ca')
Nov 25 16:41:25 compute-0 nova_compute[254092]: 2025-11-25 16:41:25.082 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:41:25 compute-0 nova_compute[254092]: 2025-11-25 16:41:25.089 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088884.975989, 013dc18e-57cd-4733-8e98-7d20e3b5c4db => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:41:25 compute-0 nova_compute[254092]: 2025-11-25 16:41:25.090 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] VM Paused (Lifecycle Event)
Nov 25 16:41:25 compute-0 podman[323741]: 2025-11-25 16:41:24.996866433 +0000 UTC m=+0.023013186 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:41:25 compute-0 nova_compute[254092]: 2025-11-25 16:41:25.123 254096 DEBUG nova.virt.libvirt.driver [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:41:25 compute-0 nova_compute[254092]: 2025-11-25 16:41:25.124 254096 DEBUG nova.virt.libvirt.driver [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:41:25 compute-0 nova_compute[254092]: 2025-11-25 16:41:25.124 254096 DEBUG nova.virt.libvirt.driver [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] No VIF found with MAC fa:16:3e:67:c0:a9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:41:25 compute-0 nova_compute[254092]: 2025-11-25 16:41:25.124 254096 INFO nova.virt.libvirt.driver [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Using config drive
Nov 25 16:41:25 compute-0 nova_compute[254092]: 2025-11-25 16:41:25.146 254096 DEBUG nova.storage.rbd_utils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] rbd image dce3a591-9fb6-4495-a7fb-867af2de384f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:25 compute-0 nova_compute[254092]: 2025-11-25 16:41:25.152 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:41:25 compute-0 nova_compute[254092]: 2025-11-25 16:41:25.157 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:41:25 compute-0 podman[323741]: 2025-11-25 16:41:25.172453176 +0000 UTC m=+0.198599899 container create 3def2450d84bf90268deb60c88b8c80f15394b0dbc56ee53278d0ebd061ec192 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 16:41:25 compute-0 nova_compute[254092]: 2025-11-25 16:41:25.198 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:41:25 compute-0 systemd[1]: Started libpod-conmon-3def2450d84bf90268deb60c88b8c80f15394b0dbc56ee53278d0ebd061ec192.scope.
Nov 25 16:41:25 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:41:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b958805e8f49ae0dd91301b40e75db313a46de7a441e4d3c3ccb3b8eb4101d4c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:41:25 compute-0 podman[323741]: 2025-11-25 16:41:25.266974771 +0000 UTC m=+0.293121504 container init 3def2450d84bf90268deb60c88b8c80f15394b0dbc56ee53278d0ebd061ec192 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:41:25 compute-0 podman[323741]: 2025-11-25 16:41:25.272621554 +0000 UTC m=+0.298768287 container start 3def2450d84bf90268deb60c88b8c80f15394b0dbc56ee53278d0ebd061ec192 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:41:25 compute-0 neutron-haproxy-ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1[323775]: [NOTICE]   (323779) : New worker (323781) forked
Nov 25 16:41:25 compute-0 neutron-haproxy-ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1[323775]: [NOTICE]   (323779) : Loading success.
Nov 25 16:41:25 compute-0 nova_compute[254092]: 2025-11-25 16:41:25.379 254096 DEBUG nova.compute.manager [req-04a6a80c-5828-4eb4-b7dc-78846506b49c req-ed314cbe-2c44-4a85-ad57-8b8c74928637 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Received event network-changed-1404e99c-a32c-404a-a7d6-3daccc67c48b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:41:25 compute-0 nova_compute[254092]: 2025-11-25 16:41:25.380 254096 DEBUG nova.compute.manager [req-04a6a80c-5828-4eb4-b7dc-78846506b49c req-ed314cbe-2c44-4a85-ad57-8b8c74928637 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Refreshing instance network info cache due to event network-changed-1404e99c-a32c-404a-a7d6-3daccc67c48b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:41:25 compute-0 nova_compute[254092]: 2025-11-25 16:41:25.380 254096 DEBUG oslo_concurrency.lockutils [req-04a6a80c-5828-4eb4-b7dc-78846506b49c req-ed314cbe-2c44-4a85-ad57-8b8c74928637 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-b5c5a442-8e8e-40c5-9634-e36c49e6e41b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:41:25 compute-0 nova_compute[254092]: 2025-11-25 16:41:25.527 254096 DEBUG nova.compute.manager [req-edaad452-e472-4275-b620-db491d7f176f req-6edf9088-23e7-473d-a292-cba560b75f0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Received event network-vif-plugged-60897ca4-9177-413c-b0f0-808dbc7d34dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:41:25 compute-0 nova_compute[254092]: 2025-11-25 16:41:25.527 254096 DEBUG oslo_concurrency.lockutils [req-edaad452-e472-4275-b620-db491d7f176f req-6edf9088-23e7-473d-a292-cba560b75f0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "fef208e1-3706-4d03-8385-12418e9dc230-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:25 compute-0 nova_compute[254092]: 2025-11-25 16:41:25.528 254096 DEBUG oslo_concurrency.lockutils [req-edaad452-e472-4275-b620-db491d7f176f req-6edf9088-23e7-473d-a292-cba560b75f0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fef208e1-3706-4d03-8385-12418e9dc230-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:25 compute-0 nova_compute[254092]: 2025-11-25 16:41:25.528 254096 DEBUG oslo_concurrency.lockutils [req-edaad452-e472-4275-b620-db491d7f176f req-6edf9088-23e7-473d-a292-cba560b75f0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fef208e1-3706-4d03-8385-12418e9dc230-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:25 compute-0 nova_compute[254092]: 2025-11-25 16:41:25.529 254096 DEBUG nova.compute.manager [req-edaad452-e472-4275-b620-db491d7f176f req-6edf9088-23e7-473d-a292-cba560b75f0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] No waiting events found dispatching network-vif-plugged-60897ca4-9177-413c-b0f0-808dbc7d34dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:41:25 compute-0 nova_compute[254092]: 2025-11-25 16:41:25.530 254096 WARNING nova.compute.manager [req-edaad452-e472-4275-b620-db491d7f176f req-6edf9088-23e7-473d-a292-cba560b75f0d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Received unexpected event network-vif-plugged-60897ca4-9177-413c-b0f0-808dbc7d34dc for instance with vm_state active and task_state deleting.
Nov 25 16:41:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1681: 321 pgs: 321 active+clean; 193 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 188 op/s
Nov 25 16:41:25 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3750340307' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:41:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:41:26 compute-0 nova_compute[254092]: 2025-11-25 16:41:26.611 254096 INFO nova.virt.libvirt.driver [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Creating config drive at /var/lib/nova/instances/dce3a591-9fb6-4495-a7fb-867af2de384f/disk.config
Nov 25 16:41:26 compute-0 nova_compute[254092]: 2025-11-25 16:41:26.616 254096 DEBUG oslo_concurrency.processutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dce3a591-9fb6-4495-a7fb-867af2de384f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqwjiw4ol execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:26 compute-0 ceph-mon[74985]: pgmap v1681: 321 pgs: 321 active+clean; 193 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 188 op/s
Nov 25 16:41:26 compute-0 nova_compute[254092]: 2025-11-25 16:41:26.757 254096 DEBUG oslo_concurrency.processutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dce3a591-9fb6-4495-a7fb-867af2de384f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqwjiw4ol" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:26 compute-0 nova_compute[254092]: 2025-11-25 16:41:26.781 254096 DEBUG nova.storage.rbd_utils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] rbd image dce3a591-9fb6-4495-a7fb-867af2de384f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:26 compute-0 nova_compute[254092]: 2025-11-25 16:41:26.784 254096 DEBUG oslo_concurrency.processutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dce3a591-9fb6-4495-a7fb-867af2de384f/disk.config dce3a591-9fb6-4495-a7fb-867af2de384f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:26 compute-0 nova_compute[254092]: 2025-11-25 16:41:26.820 254096 DEBUG oslo_concurrency.lockutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "a9e83b59-224b-49dd-83f7-d057737f5825" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:26 compute-0 nova_compute[254092]: 2025-11-25 16:41:26.821 254096 DEBUG oslo_concurrency.lockutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "a9e83b59-224b-49dd-83f7-d057737f5825" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:26 compute-0 nova_compute[254092]: 2025-11-25 16:41:26.843 254096 DEBUG nova.compute.manager [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:41:26 compute-0 nova_compute[254092]: 2025-11-25 16:41:26.862 254096 DEBUG nova.network.neutron [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Updating instance_info_cache with network_info: [{"id": "1404e99c-a32c-404a-a7d6-3daccc67c48b", "address": "fa:16:3e:68:dd:b0", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1404e99c-a3", "ovs_interfaceid": "1404e99c-a32c-404a-a7d6-3daccc67c48b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:41:26 compute-0 nova_compute[254092]: 2025-11-25 16:41:26.891 254096 DEBUG oslo_concurrency.lockutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Releasing lock "refresh_cache-b5c5a442-8e8e-40c5-9634-e36c49e6e41b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:41:26 compute-0 nova_compute[254092]: 2025-11-25 16:41:26.891 254096 DEBUG nova.compute.manager [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Instance network_info: |[{"id": "1404e99c-a32c-404a-a7d6-3daccc67c48b", "address": "fa:16:3e:68:dd:b0", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1404e99c-a3", "ovs_interfaceid": "1404e99c-a32c-404a-a7d6-3daccc67c48b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:41:26 compute-0 nova_compute[254092]: 2025-11-25 16:41:26.892 254096 DEBUG oslo_concurrency.lockutils [req-04a6a80c-5828-4eb4-b7dc-78846506b49c req-ed314cbe-2c44-4a85-ad57-8b8c74928637 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-b5c5a442-8e8e-40c5-9634-e36c49e6e41b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:41:26 compute-0 nova_compute[254092]: 2025-11-25 16:41:26.892 254096 DEBUG nova.network.neutron [req-04a6a80c-5828-4eb4-b7dc-78846506b49c req-ed314cbe-2c44-4a85-ad57-8b8c74928637 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Refreshing network info cache for port 1404e99c-a32c-404a-a7d6-3daccc67c48b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:41:26 compute-0 nova_compute[254092]: 2025-11-25 16:41:26.896 254096 DEBUG nova.virt.libvirt.driver [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Start _get_guest_xml network_info=[{"id": "1404e99c-a32c-404a-a7d6-3daccc67c48b", "address": "fa:16:3e:68:dd:b0", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1404e99c-a3", "ovs_interfaceid": "1404e99c-a32c-404a-a7d6-3daccc67c48b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:41:26 compute-0 nova_compute[254092]: 2025-11-25 16:41:26.908 254096 WARNING nova.virt.libvirt.driver [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:41:26 compute-0 nova_compute[254092]: 2025-11-25 16:41:26.915 254096 DEBUG nova.virt.libvirt.host [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:41:26 compute-0 nova_compute[254092]: 2025-11-25 16:41:26.916 254096 DEBUG nova.virt.libvirt.host [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:41:26 compute-0 nova_compute[254092]: 2025-11-25 16:41:26.919 254096 DEBUG nova.virt.libvirt.host [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:41:26 compute-0 nova_compute[254092]: 2025-11-25 16:41:26.920 254096 DEBUG nova.virt.libvirt.host [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:41:26 compute-0 nova_compute[254092]: 2025-11-25 16:41:26.920 254096 DEBUG nova.virt.libvirt.driver [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:41:26 compute-0 nova_compute[254092]: 2025-11-25 16:41:26.920 254096 DEBUG nova.virt.hardware [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='48b3ab46-af13-4c6a-9088-ba98b648a375',id=5,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:41:26 compute-0 nova_compute[254092]: 2025-11-25 16:41:26.921 254096 DEBUG nova.virt.hardware [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:41:26 compute-0 nova_compute[254092]: 2025-11-25 16:41:26.921 254096 DEBUG nova.virt.hardware [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:41:26 compute-0 nova_compute[254092]: 2025-11-25 16:41:26.921 254096 DEBUG nova.virt.hardware [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:41:26 compute-0 nova_compute[254092]: 2025-11-25 16:41:26.921 254096 DEBUG nova.virt.hardware [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:41:26 compute-0 nova_compute[254092]: 2025-11-25 16:41:26.921 254096 DEBUG nova.virt.hardware [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:41:26 compute-0 nova_compute[254092]: 2025-11-25 16:41:26.922 254096 DEBUG nova.virt.hardware [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:41:26 compute-0 nova_compute[254092]: 2025-11-25 16:41:26.922 254096 DEBUG nova.virt.hardware [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:41:26 compute-0 nova_compute[254092]: 2025-11-25 16:41:26.922 254096 DEBUG nova.virt.hardware [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:41:26 compute-0 nova_compute[254092]: 2025-11-25 16:41:26.922 254096 DEBUG nova.virt.hardware [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:41:26 compute-0 nova_compute[254092]: 2025-11-25 16:41:26.922 254096 DEBUG nova.virt.hardware [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:41:26 compute-0 nova_compute[254092]: 2025-11-25 16:41:26.925 254096 DEBUG oslo_concurrency.processutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:26 compute-0 nova_compute[254092]: 2025-11-25 16:41:26.985 254096 DEBUG oslo_concurrency.lockutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:26 compute-0 nova_compute[254092]: 2025-11-25 16:41:26.986 254096 DEBUG oslo_concurrency.lockutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:26 compute-0 nova_compute[254092]: 2025-11-25 16:41:26.988 254096 DEBUG oslo_concurrency.processutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dce3a591-9fb6-4495-a7fb-867af2de384f/disk.config dce3a591-9fb6-4495-a7fb-867af2de384f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.204s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:26 compute-0 nova_compute[254092]: 2025-11-25 16:41:26.988 254096 INFO nova.virt.libvirt.driver [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Deleting local config drive /var/lib/nova/instances/dce3a591-9fb6-4495-a7fb-867af2de384f/disk.config because it was imported into RBD.
Nov 25 16:41:27 compute-0 nova_compute[254092]: 2025-11-25 16:41:27.000 254096 DEBUG nova.virt.hardware [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:41:27 compute-0 nova_compute[254092]: 2025-11-25 16:41:27.001 254096 INFO nova.compute.claims [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:41:27 compute-0 kernel: tap9e60e140-ca: entered promiscuous mode
Nov 25 16:41:27 compute-0 NetworkManager[48891]: <info>  [1764088887.0435] manager: (tap9e60e140-ca): new Tun device (/org/freedesktop/NetworkManager/Devices/260)
Nov 25 16:41:27 compute-0 systemd-udevd[323621]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:41:27 compute-0 ovn_controller[153477]: 2025-11-25T16:41:27Z|00594|binding|INFO|Claiming lport 9e60e140-ca34-40f4-b867-d7c53f05bca4 for this chassis.
Nov 25 16:41:27 compute-0 ovn_controller[153477]: 2025-11-25T16:41:27Z|00595|binding|INFO|9e60e140-ca34-40f4-b867-d7c53f05bca4: Claiming fa:16:3e:67:c0:a9 10.100.0.8
Nov 25 16:41:27 compute-0 nova_compute[254092]: 2025-11-25 16:41:27.050 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:27.058 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:c0:a9 10.100.0.8'], port_security=['fa:16:3e:67:c0:a9 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'dce3a591-9fb6-4495-a7fb-867af2de384f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2319f378c7fb448ebe89427d8dfa7e43', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ccb207e5-4339-4ba4-8da5-c985ba20f305', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32b6d34d-eb18-415e-af2d-9e007dc0f6c6, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=9e60e140-ca34-40f4-b867-d7c53f05bca4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:41:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:27.060 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 9e60e140-ca34-40f4-b867-d7c53f05bca4 in datapath f00f265b-63fa-48fb-9383-38ff6abf51c1 bound to our chassis
Nov 25 16:41:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:27.061 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f00f265b-63fa-48fb-9383-38ff6abf51c1
Nov 25 16:41:27 compute-0 NetworkManager[48891]: <info>  [1764088887.0659] device (tap9e60e140-ca): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:41:27 compute-0 NetworkManager[48891]: <info>  [1764088887.0667] device (tap9e60e140-ca): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:41:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:27.081 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[40565c2c-f8be-4293-b33d-5efe50b891cb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:27 compute-0 ovn_controller[153477]: 2025-11-25T16:41:27Z|00596|binding|INFO|Setting lport 9e60e140-ca34-40f4-b867-d7c53f05bca4 ovn-installed in OVS
Nov 25 16:41:27 compute-0 ovn_controller[153477]: 2025-11-25T16:41:27Z|00597|binding|INFO|Setting lport 9e60e140-ca34-40f4-b867-d7c53f05bca4 up in Southbound
Nov 25 16:41:27 compute-0 nova_compute[254092]: 2025-11-25 16:41:27.091 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:27 compute-0 systemd-machined[216343]: New machine qemu-74-instance-0000003f.
Nov 25 16:41:27 compute-0 systemd[1]: Started Virtual Machine qemu-74-instance-0000003f.
Nov 25 16:41:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:27.121 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[4369f73c-031b-43a4-9420-57e2a80f1714]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:27.127 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[5ec5c7ac-33c0-42ed-8b3e-b0f3f53270bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:27.156 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a0e55dc2-f9a1-4856-aa6a-3d8d5d99c1aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:27.177 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[319bb8e0-3422-4d1b-82c8-6e1c1a74138e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf00f265b-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:fc:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534207, 'reachable_time': 15987, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323872, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:27 compute-0 nova_compute[254092]: 2025-11-25 16:41:27.193 254096 DEBUG nova.network.neutron [-] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:41:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:27.195 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b5adf288-2d99-4824-a6cd-536c14958ee3]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf00f265b-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534218, 'tstamp': 534218}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323875, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf00f265b-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534221, 'tstamp': 534221}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323875, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:27.197 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf00f265b-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:27 compute-0 nova_compute[254092]: 2025-11-25 16:41:27.200 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:27.201 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf00f265b-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:27.201 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:41:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:27.202 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf00f265b-60, col_values=(('external_ids', {'iface-id': '57c889f7-e44b-4f52-8e8a-db17b4e1f3b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:27.202 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:41:27 compute-0 nova_compute[254092]: 2025-11-25 16:41:27.207 254096 INFO nova.compute.manager [-] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Took 2.87 seconds to deallocate network for instance.
Nov 25 16:41:27 compute-0 nova_compute[254092]: 2025-11-25 16:41:27.216 254096 DEBUG oslo_concurrency.processutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:27 compute-0 nova_compute[254092]: 2025-11-25 16:41:27.264 254096 DEBUG oslo_concurrency.lockutils [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:27 compute-0 nova_compute[254092]: 2025-11-25 16:41:27.319 254096 DEBUG nova.network.neutron [req-908a1656-816e-42ea-8a0c-8c599e2f651a req-7398eee9-520d-46f3-a254-1ebb609f6de1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Updated VIF entry in instance network info cache for port 9e60e140-ca34-40f4-b867-d7c53f05bca4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:41:27 compute-0 nova_compute[254092]: 2025-11-25 16:41:27.320 254096 DEBUG nova.network.neutron [req-908a1656-816e-42ea-8a0c-8c599e2f651a req-7398eee9-520d-46f3-a254-1ebb609f6de1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Updating instance_info_cache with network_info: [{"id": "9e60e140-ca34-40f4-b867-d7c53f05bca4", "address": "fa:16:3e:67:c0:a9", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e60e140-ca", "ovs_interfaceid": "9e60e140-ca34-40f4-b867-d7c53f05bca4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:41:27 compute-0 nova_compute[254092]: 2025-11-25 16:41:27.335 254096 DEBUG oslo_concurrency.lockutils [req-908a1656-816e-42ea-8a0c-8c599e2f651a req-7398eee9-520d-46f3-a254-1ebb609f6de1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-dce3a591-9fb6-4495-a7fb-867af2de384f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:41:27 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:41:27 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/208042239' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:41:27 compute-0 nova_compute[254092]: 2025-11-25 16:41:27.470 254096 DEBUG oslo_concurrency.processutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:27 compute-0 nova_compute[254092]: 2025-11-25 16:41:27.493 254096 DEBUG nova.storage.rbd_utils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] rbd image b5c5a442-8e8e-40c5-9634-e36c49e6e41b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:27 compute-0 nova_compute[254092]: 2025-11-25 16:41:27.498 254096 DEBUG oslo_concurrency.processutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1682: 321 pgs: 321 active+clean; 180 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 866 KiB/s wr, 129 op/s
Nov 25 16:41:27 compute-0 nova_compute[254092]: 2025-11-25 16:41:27.600 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088887.5994153, dce3a591-9fb6-4495-a7fb-867af2de384f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:41:27 compute-0 nova_compute[254092]: 2025-11-25 16:41:27.600 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] VM Started (Lifecycle Event)
Nov 25 16:41:27 compute-0 nova_compute[254092]: 2025-11-25 16:41:27.606 254096 DEBUG nova.compute.manager [req-0d07125c-fc0a-4f89-8782-41f7c64e6abb req-89330c76-887f-40f8-bbca-4f7b3d11cf73 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Received event network-vif-deleted-60897ca4-9177-413c-b0f0-808dbc7d34dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:41:27 compute-0 nova_compute[254092]: 2025-11-25 16:41:27.633 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:41:27 compute-0 nova_compute[254092]: 2025-11-25 16:41:27.638 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088887.6009853, dce3a591-9fb6-4495-a7fb-867af2de384f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:41:27 compute-0 nova_compute[254092]: 2025-11-25 16:41:27.638 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] VM Paused (Lifecycle Event)
Nov 25 16:41:27 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/208042239' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:41:27 compute-0 nova_compute[254092]: 2025-11-25 16:41:27.659 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:41:27 compute-0 nova_compute[254092]: 2025-11-25 16:41:27.664 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:41:27 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:41:27 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4229298609' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:41:27 compute-0 nova_compute[254092]: 2025-11-25 16:41:27.685 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:41:27 compute-0 nova_compute[254092]: 2025-11-25 16:41:27.696 254096 DEBUG oslo_concurrency.processutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:27 compute-0 nova_compute[254092]: 2025-11-25 16:41:27.703 254096 DEBUG nova.compute.provider_tree [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:41:27 compute-0 nova_compute[254092]: 2025-11-25 16:41:27.721 254096 DEBUG nova.scheduler.client.report [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:41:27 compute-0 nova_compute[254092]: 2025-11-25 16:41:27.749 254096 DEBUG oslo_concurrency.lockutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:27 compute-0 nova_compute[254092]: 2025-11-25 16:41:27.750 254096 DEBUG nova.compute.manager [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:41:27 compute-0 nova_compute[254092]: 2025-11-25 16:41:27.752 254096 DEBUG oslo_concurrency.lockutils [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.488s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:27 compute-0 nova_compute[254092]: 2025-11-25 16:41:27.830 254096 DEBUG nova.compute.manager [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:41:27 compute-0 nova_compute[254092]: 2025-11-25 16:41:27.830 254096 DEBUG nova.network.neutron [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:41:27 compute-0 nova_compute[254092]: 2025-11-25 16:41:27.862 254096 INFO nova.virt.libvirt.driver [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:41:27 compute-0 nova_compute[254092]: 2025-11-25 16:41:27.914 254096 DEBUG nova.compute.manager [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:41:27 compute-0 nova_compute[254092]: 2025-11-25 16:41:27.934 254096 DEBUG oslo_concurrency.processutils [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:27 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:41:27 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/22168325' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.010 254096 DEBUG oslo_concurrency.processutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.012 254096 DEBUG nova.virt.libvirt.vif [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:41:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1125840801',display_name='tempest-ListServerFiltersTestJSON-instance-1125840801',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1125840801',id=64,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2319f378c7fb448ebe89427d8dfa7e43',ramdisk_id='',reservation_id='r-e1rd6q3l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-675991579',owner_user_name='tempest-L
istServerFiltersTestJSON-675991579-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:41:12Z,user_data=None,user_id='e0d077039e0b4d9e8d5663768f40fa48',uuid=b5c5a442-8e8e-40c5-9634-e36c49e6e41b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1404e99c-a32c-404a-a7d6-3daccc67c48b", "address": "fa:16:3e:68:dd:b0", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1404e99c-a3", "ovs_interfaceid": "1404e99c-a32c-404a-a7d6-3daccc67c48b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.013 254096 DEBUG nova.network.os_vif_util [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Converting VIF {"id": "1404e99c-a32c-404a-a7d6-3daccc67c48b", "address": "fa:16:3e:68:dd:b0", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1404e99c-a3", "ovs_interfaceid": "1404e99c-a32c-404a-a7d6-3daccc67c48b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.014 254096 DEBUG nova.network.os_vif_util [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:dd:b0,bridge_name='br-int',has_traffic_filtering=True,id=1404e99c-a32c-404a-a7d6-3daccc67c48b,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1404e99c-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.015 254096 DEBUG nova.objects.instance [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lazy-loading 'pci_devices' on Instance uuid b5c5a442-8e8e-40c5-9634-e36c49e6e41b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.069 254096 DEBUG nova.virt.libvirt.driver [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:41:28 compute-0 nova_compute[254092]:   <uuid>b5c5a442-8e8e-40c5-9634-e36c49e6e41b</uuid>
Nov 25 16:41:28 compute-0 nova_compute[254092]:   <name>instance-00000040</name>
Nov 25 16:41:28 compute-0 nova_compute[254092]:   <memory>196608</memory>
Nov 25 16:41:28 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:41:28 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:41:28 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-1125840801</nova:name>
Nov 25 16:41:28 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:41:26</nova:creationTime>
Nov 25 16:41:28 compute-0 nova_compute[254092]:       <nova:flavor name="m1.micro">
Nov 25 16:41:28 compute-0 nova_compute[254092]:         <nova:memory>192</nova:memory>
Nov 25 16:41:28 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:41:28 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:41:28 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:41:28 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:41:28 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:41:28 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:41:28 compute-0 nova_compute[254092]:         <nova:user uuid="e0d077039e0b4d9e8d5663768f40fa48">tempest-ListServerFiltersTestJSON-675991579-project-member</nova:user>
Nov 25 16:41:28 compute-0 nova_compute[254092]:         <nova:project uuid="2319f378c7fb448ebe89427d8dfa7e43">tempest-ListServerFiltersTestJSON-675991579</nova:project>
Nov 25 16:41:28 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:41:28 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:41:28 compute-0 nova_compute[254092]:         <nova:port uuid="1404e99c-a32c-404a-a7d6-3daccc67c48b">
Nov 25 16:41:28 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:41:28 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:41:28 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:41:28 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:41:28 compute-0 nova_compute[254092]:     <system>
Nov 25 16:41:28 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:41:28 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:41:28 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:41:28 compute-0 nova_compute[254092]:       <entry name="serial">b5c5a442-8e8e-40c5-9634-e36c49e6e41b</entry>
Nov 25 16:41:28 compute-0 nova_compute[254092]:       <entry name="uuid">b5c5a442-8e8e-40c5-9634-e36c49e6e41b</entry>
Nov 25 16:41:28 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     </system>
Nov 25 16:41:28 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:41:28 compute-0 nova_compute[254092]:   <os>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:   </os>
Nov 25 16:41:28 compute-0 nova_compute[254092]:   <features>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:   </features>
Nov 25 16:41:28 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:41:28 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:41:28 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:41:28 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:41:28 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:41:28 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/b5c5a442-8e8e-40c5-9634-e36c49e6e41b_disk">
Nov 25 16:41:28 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:       </source>
Nov 25 16:41:28 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:41:28 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:41:28 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:41:28 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/b5c5a442-8e8e-40c5-9634-e36c49e6e41b_disk.config">
Nov 25 16:41:28 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:       </source>
Nov 25 16:41:28 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:41:28 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:41:28 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:41:28 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:68:dd:b0"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:       <target dev="tap1404e99c-a3"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:41:28 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/b5c5a442-8e8e-40c5-9634-e36c49e6e41b/console.log" append="off"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     <video>
Nov 25 16:41:28 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     </video>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:41:28 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:41:28 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:41:28 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:41:28 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:41:28 compute-0 nova_compute[254092]: </domain>
Nov 25 16:41:28 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.071 254096 DEBUG nova.compute.manager [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Preparing to wait for external event network-vif-plugged-1404e99c-a32c-404a-a7d6-3daccc67c48b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.072 254096 DEBUG oslo_concurrency.lockutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "b5c5a442-8e8e-40c5-9634-e36c49e6e41b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.072 254096 DEBUG oslo_concurrency.lockutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "b5c5a442-8e8e-40c5-9634-e36c49e6e41b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.072 254096 DEBUG oslo_concurrency.lockutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "b5c5a442-8e8e-40c5-9634-e36c49e6e41b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.073 254096 DEBUG nova.virt.libvirt.vif [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:41:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1125840801',display_name='tempest-ListServerFiltersTestJSON-instance-1125840801',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1125840801',id=64,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2319f378c7fb448ebe89427d8dfa7e43',ramdisk_id='',reservation_id='r-e1rd6q3l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-675991579',owner_user_name=
'tempest-ListServerFiltersTestJSON-675991579-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:41:12Z,user_data=None,user_id='e0d077039e0b4d9e8d5663768f40fa48',uuid=b5c5a442-8e8e-40c5-9634-e36c49e6e41b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1404e99c-a32c-404a-a7d6-3daccc67c48b", "address": "fa:16:3e:68:dd:b0", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1404e99c-a3", "ovs_interfaceid": "1404e99c-a32c-404a-a7d6-3daccc67c48b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.074 254096 DEBUG nova.network.os_vif_util [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Converting VIF {"id": "1404e99c-a32c-404a-a7d6-3daccc67c48b", "address": "fa:16:3e:68:dd:b0", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1404e99c-a3", "ovs_interfaceid": "1404e99c-a32c-404a-a7d6-3daccc67c48b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.074 254096 DEBUG nova.network.os_vif_util [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:dd:b0,bridge_name='br-int',has_traffic_filtering=True,id=1404e99c-a32c-404a-a7d6-3daccc67c48b,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1404e99c-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.075 254096 DEBUG os_vif [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:dd:b0,bridge_name='br-int',has_traffic_filtering=True,id=1404e99c-a32c-404a-a7d6-3daccc67c48b,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1404e99c-a3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.076 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.076 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.077 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.081 254096 DEBUG nova.compute.manager [req-d3b20dbd-a42c-4188-8ea6-1d2f04200be4 req-0fbc7827-0285-4bdd-a3d7-8b10099de357 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Received event network-vif-plugged-60897ca4-9177-413c-b0f0-808dbc7d34dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.081 254096 DEBUG oslo_concurrency.lockutils [req-d3b20dbd-a42c-4188-8ea6-1d2f04200be4 req-0fbc7827-0285-4bdd-a3d7-8b10099de357 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "fef208e1-3706-4d03-8385-12418e9dc230-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.082 254096 DEBUG oslo_concurrency.lockutils [req-d3b20dbd-a42c-4188-8ea6-1d2f04200be4 req-0fbc7827-0285-4bdd-a3d7-8b10099de357 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fef208e1-3706-4d03-8385-12418e9dc230-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.082 254096 DEBUG oslo_concurrency.lockutils [req-d3b20dbd-a42c-4188-8ea6-1d2f04200be4 req-0fbc7827-0285-4bdd-a3d7-8b10099de357 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fef208e1-3706-4d03-8385-12418e9dc230-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.082 254096 DEBUG nova.compute.manager [req-d3b20dbd-a42c-4188-8ea6-1d2f04200be4 req-0fbc7827-0285-4bdd-a3d7-8b10099de357 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] No waiting events found dispatching network-vif-plugged-60897ca4-9177-413c-b0f0-808dbc7d34dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.082 254096 WARNING nova.compute.manager [req-d3b20dbd-a42c-4188-8ea6-1d2f04200be4 req-0fbc7827-0285-4bdd-a3d7-8b10099de357 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Received unexpected event network-vif-plugged-60897ca4-9177-413c-b0f0-808dbc7d34dc for instance with vm_state deleted and task_state None.
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.084 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.085 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1404e99c-a3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.085 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1404e99c-a3, col_values=(('external_ids', {'iface-id': '1404e99c-a32c-404a-a7d6-3daccc67c48b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:68:dd:b0', 'vm-uuid': 'b5c5a442-8e8e-40c5-9634-e36c49e6e41b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.089 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:28 compute-0 NetworkManager[48891]: <info>  [1764088888.0897] manager: (tap1404e99c-a3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/261)
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.092 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.096 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.098 254096 INFO os_vif [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:dd:b0,bridge_name='br-int',has_traffic_filtering=True,id=1404e99c-a32c-404a-a7d6-3daccc67c48b,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1404e99c-a3')
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.186 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:28 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:41:28 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/373184205' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.436 254096 DEBUG nova.compute.manager [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.438 254096 DEBUG nova.virt.libvirt.driver [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.438 254096 INFO nova.virt.libvirt.driver [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Creating image(s)
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.458 254096 DEBUG nova.storage.rbd_utils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image a9e83b59-224b-49dd-83f7-d057737f5825_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.482 254096 DEBUG nova.storage.rbd_utils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image a9e83b59-224b-49dd-83f7-d057737f5825_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.509 254096 DEBUG nova.storage.rbd_utils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image a9e83b59-224b-49dd-83f7-d057737f5825_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.514 254096 DEBUG oslo_concurrency.processutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.542 254096 DEBUG oslo_concurrency.processutils [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.551 254096 DEBUG nova.compute.provider_tree [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.564 254096 DEBUG nova.scheduler.client.report [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.584 254096 DEBUG oslo_concurrency.processutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.585 254096 DEBUG oslo_concurrency.lockutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.586 254096 DEBUG oslo_concurrency.lockutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.586 254096 DEBUG oslo_concurrency.lockutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.606 254096 DEBUG nova.storage.rbd_utils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image a9e83b59-224b-49dd-83f7-d057737f5825_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.610 254096 DEBUG oslo_concurrency.processutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 a9e83b59-224b-49dd-83f7-d057737f5825_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.646 254096 DEBUG oslo_concurrency.lockutils [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.894s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.655 254096 DEBUG nova.virt.libvirt.driver [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.655 254096 DEBUG nova.virt.libvirt.driver [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.656 254096 DEBUG nova.virt.libvirt.driver [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] No VIF found with MAC fa:16:3e:68:dd:b0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:41:28 compute-0 ceph-mon[74985]: pgmap v1682: 321 pgs: 321 active+clean; 180 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 866 KiB/s wr, 129 op/s
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.657 254096 INFO nova.virt.libvirt.driver [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Using config drive
Nov 25 16:41:28 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4229298609' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:41:28 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/22168325' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:41:28 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/373184205' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.677 254096 DEBUG nova.storage.rbd_utils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] rbd image b5c5a442-8e8e-40c5-9634-e36c49e6e41b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.683 254096 INFO nova.scheduler.client.report [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Deleted allocations for instance fef208e1-3706-4d03-8385-12418e9dc230
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.767 254096 DEBUG nova.policy [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3c1fd56de7cd4f5c9b1d85ffe8545c90', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6ff6b3c59785407bb06c9d7e3969ea4f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:41:28 compute-0 nova_compute[254092]: 2025-11-25 16:41:28.837 254096 DEBUG oslo_concurrency.lockutils [None req-cca53708-6de3-4e94-989e-7a3100a9d86d 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "fef208e1-3706-4d03-8385-12418e9dc230" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:29 compute-0 nova_compute[254092]: 2025-11-25 16:41:29.281 254096 INFO nova.virt.libvirt.driver [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Creating config drive at /var/lib/nova/instances/b5c5a442-8e8e-40c5-9634-e36c49e6e41b/disk.config
Nov 25 16:41:29 compute-0 nova_compute[254092]: 2025-11-25 16:41:29.286 254096 DEBUG oslo_concurrency.processutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b5c5a442-8e8e-40c5-9634-e36c49e6e41b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgs7fm4ny execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:29 compute-0 nova_compute[254092]: 2025-11-25 16:41:29.440 254096 DEBUG oslo_concurrency.processutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b5c5a442-8e8e-40c5-9634-e36c49e6e41b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgs7fm4ny" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:29 compute-0 nova_compute[254092]: 2025-11-25 16:41:29.466 254096 DEBUG nova.storage.rbd_utils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] rbd image b5c5a442-8e8e-40c5-9634-e36c49e6e41b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:29 compute-0 nova_compute[254092]: 2025-11-25 16:41:29.470 254096 DEBUG oslo_concurrency.processutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b5c5a442-8e8e-40c5-9634-e36c49e6e41b/disk.config b5c5a442-8e8e-40c5-9634-e36c49e6e41b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1683: 321 pgs: 321 active+clean; 180 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 106 op/s
Nov 25 16:41:29 compute-0 nova_compute[254092]: 2025-11-25 16:41:29.597 254096 DEBUG nova.network.neutron [req-04a6a80c-5828-4eb4-b7dc-78846506b49c req-ed314cbe-2c44-4a85-ad57-8b8c74928637 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Updated VIF entry in instance network info cache for port 1404e99c-a32c-404a-a7d6-3daccc67c48b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:41:29 compute-0 nova_compute[254092]: 2025-11-25 16:41:29.598 254096 DEBUG nova.network.neutron [req-04a6a80c-5828-4eb4-b7dc-78846506b49c req-ed314cbe-2c44-4a85-ad57-8b8c74928637 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Updating instance_info_cache with network_info: [{"id": "1404e99c-a32c-404a-a7d6-3daccc67c48b", "address": "fa:16:3e:68:dd:b0", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1404e99c-a3", "ovs_interfaceid": "1404e99c-a32c-404a-a7d6-3daccc67c48b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:41:29 compute-0 nova_compute[254092]: 2025-11-25 16:41:29.620 254096 DEBUG oslo_concurrency.processutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 a9e83b59-224b-49dd-83f7-d057737f5825_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:29 compute-0 nova_compute[254092]: 2025-11-25 16:41:29.646 254096 DEBUG oslo_concurrency.lockutils [req-04a6a80c-5828-4eb4-b7dc-78846506b49c req-ed314cbe-2c44-4a85-ad57-8b8c74928637 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-b5c5a442-8e8e-40c5-9634-e36c49e6e41b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:41:29 compute-0 nova_compute[254092]: 2025-11-25 16:41:29.771 254096 DEBUG nova.compute.manager [req-a6d22289-2a6d-4e04-bc45-6287a6548663 req-57bc973e-4d0a-4975-b5ca-81f6dcba54da a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Received event network-vif-plugged-9e60e140-ca34-40f4-b867-d7c53f05bca4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:41:29 compute-0 nova_compute[254092]: 2025-11-25 16:41:29.772 254096 DEBUG oslo_concurrency.lockutils [req-a6d22289-2a6d-4e04-bc45-6287a6548663 req-57bc973e-4d0a-4975-b5ca-81f6dcba54da a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "dce3a591-9fb6-4495-a7fb-867af2de384f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:29 compute-0 nova_compute[254092]: 2025-11-25 16:41:29.772 254096 DEBUG oslo_concurrency.lockutils [req-a6d22289-2a6d-4e04-bc45-6287a6548663 req-57bc973e-4d0a-4975-b5ca-81f6dcba54da a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "dce3a591-9fb6-4495-a7fb-867af2de384f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:29 compute-0 nova_compute[254092]: 2025-11-25 16:41:29.772 254096 DEBUG oslo_concurrency.lockutils [req-a6d22289-2a6d-4e04-bc45-6287a6548663 req-57bc973e-4d0a-4975-b5ca-81f6dcba54da a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "dce3a591-9fb6-4495-a7fb-867af2de384f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:29 compute-0 nova_compute[254092]: 2025-11-25 16:41:29.772 254096 DEBUG nova.compute.manager [req-a6d22289-2a6d-4e04-bc45-6287a6548663 req-57bc973e-4d0a-4975-b5ca-81f6dcba54da a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Processing event network-vif-plugged-9e60e140-ca34-40f4-b867-d7c53f05bca4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:41:29 compute-0 nova_compute[254092]: 2025-11-25 16:41:29.773 254096 DEBUG nova.compute.manager [req-a6d22289-2a6d-4e04-bc45-6287a6548663 req-57bc973e-4d0a-4975-b5ca-81f6dcba54da a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Received event network-vif-plugged-9e60e140-ca34-40f4-b867-d7c53f05bca4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:41:29 compute-0 nova_compute[254092]: 2025-11-25 16:41:29.773 254096 DEBUG oslo_concurrency.lockutils [req-a6d22289-2a6d-4e04-bc45-6287a6548663 req-57bc973e-4d0a-4975-b5ca-81f6dcba54da a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "dce3a591-9fb6-4495-a7fb-867af2de384f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:29 compute-0 nova_compute[254092]: 2025-11-25 16:41:29.773 254096 DEBUG oslo_concurrency.lockutils [req-a6d22289-2a6d-4e04-bc45-6287a6548663 req-57bc973e-4d0a-4975-b5ca-81f6dcba54da a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "dce3a591-9fb6-4495-a7fb-867af2de384f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:29 compute-0 nova_compute[254092]: 2025-11-25 16:41:29.773 254096 DEBUG oslo_concurrency.lockutils [req-a6d22289-2a6d-4e04-bc45-6287a6548663 req-57bc973e-4d0a-4975-b5ca-81f6dcba54da a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "dce3a591-9fb6-4495-a7fb-867af2de384f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:29 compute-0 nova_compute[254092]: 2025-11-25 16:41:29.773 254096 DEBUG nova.compute.manager [req-a6d22289-2a6d-4e04-bc45-6287a6548663 req-57bc973e-4d0a-4975-b5ca-81f6dcba54da a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] No waiting events found dispatching network-vif-plugged-9e60e140-ca34-40f4-b867-d7c53f05bca4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:41:29 compute-0 nova_compute[254092]: 2025-11-25 16:41:29.774 254096 WARNING nova.compute.manager [req-a6d22289-2a6d-4e04-bc45-6287a6548663 req-57bc973e-4d0a-4975-b5ca-81f6dcba54da a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Received unexpected event network-vif-plugged-9e60e140-ca34-40f4-b867-d7c53f05bca4 for instance with vm_state building and task_state spawning.
Nov 25 16:41:29 compute-0 nova_compute[254092]: 2025-11-25 16:41:29.774 254096 DEBUG nova.compute.manager [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:41:29 compute-0 nova_compute[254092]: 2025-11-25 16:41:29.783 254096 DEBUG nova.storage.rbd_utils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] resizing rbd image a9e83b59-224b-49dd-83f7-d057737f5825_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:41:29 compute-0 nova_compute[254092]: 2025-11-25 16:41:29.874 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088889.7792208, dce3a591-9fb6-4495-a7fb-867af2de384f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:41:29 compute-0 nova_compute[254092]: 2025-11-25 16:41:29.875 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] VM Resumed (Lifecycle Event)
Nov 25 16:41:29 compute-0 nova_compute[254092]: 2025-11-25 16:41:29.880 254096 DEBUG nova.virt.libvirt.driver [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:41:29 compute-0 nova_compute[254092]: 2025-11-25 16:41:29.881 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "42be369c-5a19-4073-becc-4f28ef579c2c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:29 compute-0 nova_compute[254092]: 2025-11-25 16:41:29.881 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "42be369c-5a19-4073-becc-4f28ef579c2c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:29 compute-0 nova_compute[254092]: 2025-11-25 16:41:29.885 254096 INFO nova.virt.libvirt.driver [-] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Instance spawned successfully.
Nov 25 16:41:29 compute-0 nova_compute[254092]: 2025-11-25 16:41:29.885 254096 DEBUG nova.virt.libvirt.driver [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:41:29 compute-0 nova_compute[254092]: 2025-11-25 16:41:29.917 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:41:29 compute-0 nova_compute[254092]: 2025-11-25 16:41:29.922 254096 DEBUG nova.virt.libvirt.driver [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:41:29 compute-0 nova_compute[254092]: 2025-11-25 16:41:29.922 254096 DEBUG nova.virt.libvirt.driver [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:41:29 compute-0 nova_compute[254092]: 2025-11-25 16:41:29.923 254096 DEBUG nova.virt.libvirt.driver [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:41:29 compute-0 nova_compute[254092]: 2025-11-25 16:41:29.923 254096 DEBUG nova.virt.libvirt.driver [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:41:29 compute-0 nova_compute[254092]: 2025-11-25 16:41:29.923 254096 DEBUG nova.virt.libvirt.driver [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:41:29 compute-0 nova_compute[254092]: 2025-11-25 16:41:29.924 254096 DEBUG nova.virt.libvirt.driver [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:41:29 compute-0 nova_compute[254092]: 2025-11-25 16:41:29.927 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:41:29 compute-0 nova_compute[254092]: 2025-11-25 16:41:29.968 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:41:30 compute-0 nova_compute[254092]: 2025-11-25 16:41:30.053 254096 DEBUG nova.compute.manager [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:41:30 compute-0 nova_compute[254092]: 2025-11-25 16:41:30.155 254096 INFO nova.compute.manager [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Took 21.84 seconds to spawn the instance on the hypervisor.
Nov 25 16:41:30 compute-0 nova_compute[254092]: 2025-11-25 16:41:30.155 254096 DEBUG nova.compute.manager [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:41:30 compute-0 nova_compute[254092]: 2025-11-25 16:41:30.168 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "679f5e87-bac4-4169-bffa-555a53e7321f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:30 compute-0 nova_compute[254092]: 2025-11-25 16:41:30.169 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "679f5e87-bac4-4169-bffa-555a53e7321f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:30 compute-0 nova_compute[254092]: 2025-11-25 16:41:30.211 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:30 compute-0 nova_compute[254092]: 2025-11-25 16:41:30.212 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:30 compute-0 nova_compute[254092]: 2025-11-25 16:41:30.223 254096 DEBUG nova.virt.hardware [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:41:30 compute-0 nova_compute[254092]: 2025-11-25 16:41:30.223 254096 INFO nova.compute.claims [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:41:30 compute-0 nova_compute[254092]: 2025-11-25 16:41:30.288 254096 DEBUG nova.compute.manager [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:41:30 compute-0 nova_compute[254092]: 2025-11-25 16:41:30.310 254096 DEBUG nova.network.neutron [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Successfully created port: 63c5b67d-9c2e-4371-887d-db0d034d9072 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:41:30 compute-0 nova_compute[254092]: 2025-11-25 16:41:30.353 254096 INFO nova.compute.manager [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Took 23.05 seconds to build instance.
Nov 25 16:41:30 compute-0 nova_compute[254092]: 2025-11-25 16:41:30.424 254096 DEBUG oslo_concurrency.lockutils [None req-11ab290d-0cdd-477f-b684-508a61859b45 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "dce3a591-9fb6-4495-a7fb-867af2de384f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 23.198s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:30 compute-0 nova_compute[254092]: 2025-11-25 16:41:30.460 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:30 compute-0 nova_compute[254092]: 2025-11-25 16:41:30.522 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:30 compute-0 nova_compute[254092]: 2025-11-25 16:41:30.595 254096 DEBUG nova.objects.instance [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'migration_context' on Instance uuid a9e83b59-224b-49dd-83f7-d057737f5825 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:41:30 compute-0 nova_compute[254092]: 2025-11-25 16:41:30.618 254096 DEBUG nova.virt.libvirt.driver [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:41:30 compute-0 nova_compute[254092]: 2025-11-25 16:41:30.619 254096 DEBUG nova.virt.libvirt.driver [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Ensure instance console log exists: /var/lib/nova/instances/a9e83b59-224b-49dd-83f7-d057737f5825/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:41:30 compute-0 nova_compute[254092]: 2025-11-25 16:41:30.619 254096 DEBUG oslo_concurrency.lockutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:30 compute-0 nova_compute[254092]: 2025-11-25 16:41:30.619 254096 DEBUG oslo_concurrency.lockutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:30 compute-0 nova_compute[254092]: 2025-11-25 16:41:30.620 254096 DEBUG oslo_concurrency.lockutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:30 compute-0 nova_compute[254092]: 2025-11-25 16:41:30.687 254096 DEBUG oslo_concurrency.processutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b5c5a442-8e8e-40c5-9634-e36c49e6e41b/disk.config b5c5a442-8e8e-40c5-9634-e36c49e6e41b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.217s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:30 compute-0 nova_compute[254092]: 2025-11-25 16:41:30.687 254096 INFO nova.virt.libvirt.driver [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Deleting local config drive /var/lib/nova/instances/b5c5a442-8e8e-40c5-9634-e36c49e6e41b/disk.config because it was imported into RBD.
Nov 25 16:41:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:41:30 compute-0 kernel: tap1404e99c-a3: entered promiscuous mode
Nov 25 16:41:30 compute-0 NetworkManager[48891]: <info>  [1764088890.7474] manager: (tap1404e99c-a3): new Tun device (/org/freedesktop/NetworkManager/Devices/262)
Nov 25 16:41:30 compute-0 ovn_controller[153477]: 2025-11-25T16:41:30Z|00598|binding|INFO|Claiming lport 1404e99c-a32c-404a-a7d6-3daccc67c48b for this chassis.
Nov 25 16:41:30 compute-0 ovn_controller[153477]: 2025-11-25T16:41:30Z|00599|binding|INFO|1404e99c-a32c-404a-a7d6-3daccc67c48b: Claiming fa:16:3e:68:dd:b0 10.100.0.4
Nov 25 16:41:30 compute-0 nova_compute[254092]: 2025-11-25 16:41:30.752 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:30 compute-0 ovn_controller[153477]: 2025-11-25T16:41:30Z|00600|binding|INFO|Setting lport 1404e99c-a32c-404a-a7d6-3daccc67c48b ovn-installed in OVS
Nov 25 16:41:30 compute-0 nova_compute[254092]: 2025-11-25 16:41:30.777 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:30 compute-0 nova_compute[254092]: 2025-11-25 16:41:30.779 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:30 compute-0 systemd-machined[216343]: New machine qemu-75-instance-00000040.
Nov 25 16:41:30 compute-0 systemd-udevd[324266]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:41:30 compute-0 ovn_controller[153477]: 2025-11-25T16:41:30Z|00601|binding|INFO|Setting lport 1404e99c-a32c-404a-a7d6-3daccc67c48b up in Southbound
Nov 25 16:41:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:30.793 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:dd:b0 10.100.0.4'], port_security=['fa:16:3e:68:dd:b0 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'b5c5a442-8e8e-40c5-9634-e36c49e6e41b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2319f378c7fb448ebe89427d8dfa7e43', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ccb207e5-4339-4ba4-8da5-c985ba20f305', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32b6d34d-eb18-415e-af2d-9e007dc0f6c6, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1404e99c-a32c-404a-a7d6-3daccc67c48b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:41:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:30.795 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1404e99c-a32c-404a-a7d6-3daccc67c48b in datapath f00f265b-63fa-48fb-9383-38ff6abf51c1 bound to our chassis
Nov 25 16:41:30 compute-0 systemd[1]: Started Virtual Machine qemu-75-instance-00000040.
Nov 25 16:41:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:30.796 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f00f265b-63fa-48fb-9383-38ff6abf51c1
Nov 25 16:41:30 compute-0 NetworkManager[48891]: <info>  [1764088890.8033] device (tap1404e99c-a3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:41:30 compute-0 NetworkManager[48891]: <info>  [1764088890.8041] device (tap1404e99c-a3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:41:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:30.814 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fdf51d37-67ed-4bf3-a37f-c80ebd2d6d50]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:30 compute-0 ceph-mon[74985]: pgmap v1683: 321 pgs: 321 active+clean; 180 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 106 op/s
Nov 25 16:41:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:30.855 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[5bfbfe89-548f-41f4-ad2b-20da30b5afd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:30.859 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[91032e6a-7723-40a6-baf7-b460d7ca855e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:30.894 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e5c58302-ddc2-4c47-8367-a6421b1f47df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:30.923 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3afb5ff7-6139-490c-93fb-bcf92a5c8587]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf00f265b-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:fc:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534207, 'reachable_time': 15987, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324280, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:30.944 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3a5908fd-68a1-48ff-9585-d56798e26854]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf00f265b-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534218, 'tstamp': 534218}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324281, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf00f265b-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534221, 'tstamp': 534221}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324281, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:30.946 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf00f265b-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:30 compute-0 nova_compute[254092]: 2025-11-25 16:41:30.947 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:30.949 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf00f265b-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:30.949 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:41:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:30.949 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf00f265b-60, col_values=(('external_ids', {'iface-id': '57c889f7-e44b-4f52-8e8a-db17b4e1f3b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:30.950 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:41:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:41:30 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1411224049' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:41:30 compute-0 nova_compute[254092]: 2025-11-25 16:41:30.982 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:30 compute-0 nova_compute[254092]: 2025-11-25 16:41:30.989 254096 DEBUG nova.compute.provider_tree [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:41:31 compute-0 nova_compute[254092]: 2025-11-25 16:41:31.005 254096 DEBUG nova.scheduler.client.report [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:41:31 compute-0 nova_compute[254092]: 2025-11-25 16:41:31.089 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.878s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:31 compute-0 nova_compute[254092]: 2025-11-25 16:41:31.090 254096 DEBUG nova.compute.manager [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:41:31 compute-0 nova_compute[254092]: 2025-11-25 16:41:31.093 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:31 compute-0 nova_compute[254092]: 2025-11-25 16:41:31.102 254096 DEBUG nova.virt.hardware [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:41:31 compute-0 nova_compute[254092]: 2025-11-25 16:41:31.103 254096 INFO nova.compute.claims [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:41:31 compute-0 nova_compute[254092]: 2025-11-25 16:41:31.188 254096 DEBUG nova.compute.manager [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:41:31 compute-0 nova_compute[254092]: 2025-11-25 16:41:31.189 254096 DEBUG nova.network.neutron [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:41:31 compute-0 nova_compute[254092]: 2025-11-25 16:41:31.224 254096 INFO nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:41:31 compute-0 nova_compute[254092]: 2025-11-25 16:41:31.312 254096 DEBUG nova.compute.manager [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:41:31 compute-0 nova_compute[254092]: 2025-11-25 16:41:31.345 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1684: 321 pgs: 321 active+clean; 227 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 146 op/s
Nov 25 16:41:31 compute-0 nova_compute[254092]: 2025-11-25 16:41:31.552 254096 DEBUG nova.compute.manager [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:41:31 compute-0 nova_compute[254092]: 2025-11-25 16:41:31.554 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:41:31 compute-0 nova_compute[254092]: 2025-11-25 16:41:31.554 254096 INFO nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Creating image(s)
Nov 25 16:41:31 compute-0 nova_compute[254092]: 2025-11-25 16:41:31.578 254096 DEBUG nova.storage.rbd_utils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image 42be369c-5a19-4073-becc-4f28ef579c2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:31 compute-0 nova_compute[254092]: 2025-11-25 16:41:31.598 254096 DEBUG nova.storage.rbd_utils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image 42be369c-5a19-4073-becc-4f28ef579c2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:31 compute-0 nova_compute[254092]: 2025-11-25 16:41:31.620 254096 DEBUG nova.storage.rbd_utils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image 42be369c-5a19-4073-becc-4f28ef579c2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:31 compute-0 nova_compute[254092]: 2025-11-25 16:41:31.623 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:31 compute-0 nova_compute[254092]: 2025-11-25 16:41:31.704 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:31 compute-0 nova_compute[254092]: 2025-11-25 16:41:31.705 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:31 compute-0 nova_compute[254092]: 2025-11-25 16:41:31.706 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:31 compute-0 nova_compute[254092]: 2025-11-25 16:41:31.707 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:31 compute-0 nova_compute[254092]: 2025-11-25 16:41:31.730 254096 DEBUG nova.storage.rbd_utils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image 42be369c-5a19-4073-becc-4f28ef579c2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:31 compute-0 nova_compute[254092]: 2025-11-25 16:41:31.735 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 42be369c-5a19-4073-becc-4f28ef579c2c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:31 compute-0 nova_compute[254092]: 2025-11-25 16:41:31.766 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088891.7178912, b5c5a442-8e8e-40c5-9634-e36c49e6e41b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:41:31 compute-0 nova_compute[254092]: 2025-11-25 16:41:31.766 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] VM Started (Lifecycle Event)
Nov 25 16:41:31 compute-0 nova_compute[254092]: 2025-11-25 16:41:31.787 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:41:31 compute-0 nova_compute[254092]: 2025-11-25 16:41:31.791 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088891.71818, b5c5a442-8e8e-40c5-9634-e36c49e6e41b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:41:31 compute-0 nova_compute[254092]: 2025-11-25 16:41:31.791 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] VM Paused (Lifecycle Event)
Nov 25 16:41:31 compute-0 nova_compute[254092]: 2025-11-25 16:41:31.810 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:41:31 compute-0 nova_compute[254092]: 2025-11-25 16:41:31.813 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:41:31 compute-0 nova_compute[254092]: 2025-11-25 16:41:31.830 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:41:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:41:31 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1707549141' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:41:31 compute-0 nova_compute[254092]: 2025-11-25 16:41:31.881 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:31 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1411224049' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:41:31 compute-0 nova_compute[254092]: 2025-11-25 16:41:31.886 254096 DEBUG nova.compute.provider_tree [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:41:31 compute-0 nova_compute[254092]: 2025-11-25 16:41:31.902 254096 DEBUG nova.scheduler.client.report [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:41:31 compute-0 nova_compute[254092]: 2025-11-25 16:41:31.953 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:31 compute-0 nova_compute[254092]: 2025-11-25 16:41:31.954 254096 DEBUG nova.compute.manager [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.144 254096 DEBUG nova.compute.manager [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.146 254096 DEBUG nova.network.neutron [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.189 254096 DEBUG nova.policy [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '03b851e958654346bc1caa437d7b76a5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '20e192e04e814ae28e09f3c1fdbd39d0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.218 254096 INFO nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.285 254096 DEBUG nova.compute.manager [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.382 254096 DEBUG nova.compute.manager [req-490b864b-870e-45f6-98d5-8185e800e9d0 req-d1f984e3-0a56-4ba2-a8ad-1d1a7cb30032 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Received event network-vif-plugged-5136afea-102e-46a1-8fdb-0af970c5af04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.383 254096 DEBUG oslo_concurrency.lockutils [req-490b864b-870e-45f6-98d5-8185e800e9d0 req-d1f984e3-0a56-4ba2-a8ad-1d1a7cb30032 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.383 254096 DEBUG oslo_concurrency.lockutils [req-490b864b-870e-45f6-98d5-8185e800e9d0 req-d1f984e3-0a56-4ba2-a8ad-1d1a7cb30032 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.384 254096 DEBUG oslo_concurrency.lockutils [req-490b864b-870e-45f6-98d5-8185e800e9d0 req-d1f984e3-0a56-4ba2-a8ad-1d1a7cb30032 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.384 254096 DEBUG nova.compute.manager [req-490b864b-870e-45f6-98d5-8185e800e9d0 req-d1f984e3-0a56-4ba2-a8ad-1d1a7cb30032 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Processing event network-vif-plugged-5136afea-102e-46a1-8fdb-0af970c5af04 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.385 254096 DEBUG nova.compute.manager [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Instance event wait completed in 7 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.389 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088892.389475, 013dc18e-57cd-4733-8e98-7d20e3b5c4db => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.390 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] VM Resumed (Lifecycle Event)
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.392 254096 DEBUG nova.virt.libvirt.driver [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.397 254096 INFO nova.virt.libvirt.driver [-] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Instance spawned successfully.
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.397 254096 DEBUG nova.virt.libvirt.driver [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.414 254096 DEBUG nova.virt.libvirt.driver [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.414 254096 DEBUG nova.virt.libvirt.driver [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.415 254096 DEBUG nova.virt.libvirt.driver [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.415 254096 DEBUG nova.virt.libvirt.driver [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.416 254096 DEBUG nova.virt.libvirt.driver [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.416 254096 DEBUG nova.virt.libvirt.driver [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.420 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.423 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.436 254096 DEBUG nova.policy [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '03b851e958654346bc1caa437d7b76a5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '20e192e04e814ae28e09f3c1fdbd39d0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.441 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.487 254096 DEBUG nova.compute.manager [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.490 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.490 254096 INFO nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Creating image(s)
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.509 254096 DEBUG nova.storage.rbd_utils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image 679f5e87-bac4-4169-bffa-555a53e7321f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.529 254096 DEBUG nova.storage.rbd_utils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image 679f5e87-bac4-4169-bffa-555a53e7321f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.551 254096 DEBUG nova.storage.rbd_utils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image 679f5e87-bac4-4169-bffa-555a53e7321f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.556 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.595 254096 DEBUG nova.compute.manager [req-ae7560d7-95c6-4877-b362-567e35d9daa0 req-a9e0d49b-5483-4127-b53b-4e73c2d0d063 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Received event network-vif-plugged-1404e99c-a32c-404a-a7d6-3daccc67c48b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.596 254096 DEBUG oslo_concurrency.lockutils [req-ae7560d7-95c6-4877-b362-567e35d9daa0 req-a9e0d49b-5483-4127-b53b-4e73c2d0d063 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b5c5a442-8e8e-40c5-9634-e36c49e6e41b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.597 254096 DEBUG oslo_concurrency.lockutils [req-ae7560d7-95c6-4877-b362-567e35d9daa0 req-a9e0d49b-5483-4127-b53b-4e73c2d0d063 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b5c5a442-8e8e-40c5-9634-e36c49e6e41b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.597 254096 DEBUG oslo_concurrency.lockutils [req-ae7560d7-95c6-4877-b362-567e35d9daa0 req-a9e0d49b-5483-4127-b53b-4e73c2d0d063 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b5c5a442-8e8e-40c5-9634-e36c49e6e41b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.597 254096 DEBUG nova.compute.manager [req-ae7560d7-95c6-4877-b362-567e35d9daa0 req-a9e0d49b-5483-4127-b53b-4e73c2d0d063 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Processing event network-vif-plugged-1404e99c-a32c-404a-a7d6-3daccc67c48b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.598 254096 DEBUG nova.compute.manager [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.611 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088892.6111143, b5c5a442-8e8e-40c5-9634-e36c49e6e41b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.612 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] VM Resumed (Lifecycle Event)
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.631 254096 DEBUG nova.virt.libvirt.driver [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.637 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.638 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.639 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.641 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.641 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.667 254096 DEBUG nova.storage.rbd_utils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image 679f5e87-bac4-4169-bffa-555a53e7321f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.676 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 679f5e87-bac4-4169-bffa-555a53e7321f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.710 254096 INFO nova.compute.manager [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Took 27.14 seconds to spawn the instance on the hypervisor.
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.711 254096 DEBUG nova.compute.manager [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.716 254096 INFO nova.virt.libvirt.driver [-] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Instance spawned successfully.
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.716 254096 DEBUG nova.virt.libvirt.driver [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.718 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.748 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.754 254096 DEBUG nova.virt.libvirt.driver [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.755 254096 DEBUG nova.virt.libvirt.driver [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.757 254096 DEBUG nova.virt.libvirt.driver [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.758 254096 DEBUG nova.virt.libvirt.driver [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.759 254096 DEBUG nova.virt.libvirt.driver [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.759 254096 DEBUG nova.virt.libvirt.driver [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.814 254096 INFO nova.compute.manager [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Took 28.19 seconds to build instance.
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.872 254096 DEBUG oslo_concurrency.lockutils [None req-9f94d32d-acab-4530-8657-50aa4518288d e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 28.368s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.887 254096 INFO nova.compute.manager [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Took 19.72 seconds to spawn the instance on the hypervisor.
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.888 254096 DEBUG nova.compute.manager [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.946 254096 INFO nova.compute.manager [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Took 21.27 seconds to build instance.
Nov 25 16:41:32 compute-0 ceph-mon[74985]: pgmap v1684: 321 pgs: 321 active+clean; 227 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 146 op/s
Nov 25 16:41:32 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1707549141' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:41:32 compute-0 nova_compute[254092]: 2025-11-25 16:41:32.997 254096 DEBUG oslo_concurrency.lockutils [None req-c428c35d-9e28-490c-a2b9-7f667c6bac7e e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "b5c5a442-8e8e-40c5-9634-e36c49e6e41b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.436s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:33 compute-0 nova_compute[254092]: 2025-11-25 16:41:33.090 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:33 compute-0 nova_compute[254092]: 2025-11-25 16:41:33.167 254096 DEBUG nova.network.neutron [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Successfully updated port: 63c5b67d-9c2e-4371-887d-db0d034d9072 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:41:33 compute-0 nova_compute[254092]: 2025-11-25 16:41:33.187 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:33 compute-0 nova_compute[254092]: 2025-11-25 16:41:33.198 254096 DEBUG oslo_concurrency.lockutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "refresh_cache-a9e83b59-224b-49dd-83f7-d057737f5825" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:41:33 compute-0 nova_compute[254092]: 2025-11-25 16:41:33.199 254096 DEBUG oslo_concurrency.lockutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquired lock "refresh_cache-a9e83b59-224b-49dd-83f7-d057737f5825" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:41:33 compute-0 nova_compute[254092]: 2025-11-25 16:41:33.199 254096 DEBUG nova.network.neutron [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:41:33 compute-0 nova_compute[254092]: 2025-11-25 16:41:33.533 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 42be369c-5a19-4073-becc-4f28ef579c2c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.799s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1685: 321 pgs: 321 active+clean; 227 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 1.8 MiB/s wr, 76 op/s
Nov 25 16:41:33 compute-0 nova_compute[254092]: 2025-11-25 16:41:33.566 254096 DEBUG nova.network.neutron [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:41:33 compute-0 nova_compute[254092]: 2025-11-25 16:41:33.625 254096 DEBUG nova.storage.rbd_utils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] resizing rbd image 42be369c-5a19-4073-becc-4f28ef579c2c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:41:34 compute-0 sudo[324590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:41:34 compute-0 sudo[324590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:41:34 compute-0 sudo[324590]: pam_unix(sudo:session): session closed for user root
Nov 25 16:41:34 compute-0 nova_compute[254092]: 2025-11-25 16:41:34.242 254096 DEBUG nova.network.neutron [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Successfully created port: 43b8a38b-0b5a-4b7d-8043-759fa3697e8a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:41:34 compute-0 sudo[324615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:41:34 compute-0 sudo[324615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:41:34 compute-0 sudo[324615]: pam_unix(sudo:session): session closed for user root
Nov 25 16:41:34 compute-0 sudo[324640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:41:34 compute-0 sudo[324640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:41:34 compute-0 sudo[324640]: pam_unix(sudo:session): session closed for user root
Nov 25 16:41:34 compute-0 sudo[324665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 16:41:34 compute-0 sudo[324665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:41:34 compute-0 ceph-mon[74985]: pgmap v1685: 321 pgs: 321 active+clean; 227 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 1.8 MiB/s wr, 76 op/s
Nov 25 16:41:34 compute-0 sudo[324665]: pam_unix(sudo:session): session closed for user root
Nov 25 16:41:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:41:34 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:41:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 16:41:34 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:41:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 16:41:34 compute-0 nova_compute[254092]: 2025-11-25 16:41:34.986 254096 DEBUG nova.compute.manager [req-0888610f-848f-46d7-b8c1-db93dace122f req-117005c1-071e-4944-b1a1-6ff24b408451 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Received event network-vif-plugged-5136afea-102e-46a1-8fdb-0af970c5af04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:41:34 compute-0 nova_compute[254092]: 2025-11-25 16:41:34.988 254096 DEBUG oslo_concurrency.lockutils [req-0888610f-848f-46d7-b8c1-db93dace122f req-117005c1-071e-4944-b1a1-6ff24b408451 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:34 compute-0 nova_compute[254092]: 2025-11-25 16:41:34.988 254096 DEBUG oslo_concurrency.lockutils [req-0888610f-848f-46d7-b8c1-db93dace122f req-117005c1-071e-4944-b1a1-6ff24b408451 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:34 compute-0 nova_compute[254092]: 2025-11-25 16:41:34.988 254096 DEBUG oslo_concurrency.lockutils [req-0888610f-848f-46d7-b8c1-db93dace122f req-117005c1-071e-4944-b1a1-6ff24b408451 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:34 compute-0 nova_compute[254092]: 2025-11-25 16:41:34.990 254096 DEBUG nova.compute.manager [req-0888610f-848f-46d7-b8c1-db93dace122f req-117005c1-071e-4944-b1a1-6ff24b408451 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] No waiting events found dispatching network-vif-plugged-5136afea-102e-46a1-8fdb-0af970c5af04 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:41:34 compute-0 nova_compute[254092]: 2025-11-25 16:41:34.992 254096 WARNING nova.compute.manager [req-0888610f-848f-46d7-b8c1-db93dace122f req-117005c1-071e-4944-b1a1-6ff24b408451 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Received unexpected event network-vif-plugged-5136afea-102e-46a1-8fdb-0af970c5af04 for instance with vm_state active and task_state None.
Nov 25 16:41:35 compute-0 nova_compute[254092]: 2025-11-25 16:41:35.007 254096 DEBUG nova.network.neutron [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Successfully created port: 01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:41:35 compute-0 nova_compute[254092]: 2025-11-25 16:41:35.061 254096 DEBUG nova.compute.manager [req-08c49c8f-e1a9-4ac2-9bf3-6a3696c59397 req-30dfa98d-c702-499e-841b-648377fef308 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Received event network-changed-63c5b67d-9c2e-4371-887d-db0d034d9072 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:41:35 compute-0 nova_compute[254092]: 2025-11-25 16:41:35.062 254096 DEBUG nova.compute.manager [req-08c49c8f-e1a9-4ac2-9bf3-6a3696c59397 req-30dfa98d-c702-499e-841b-648377fef308 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Refreshing instance network info cache due to event network-changed-63c5b67d-9c2e-4371-887d-db0d034d9072. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:41:35 compute-0 nova_compute[254092]: 2025-11-25 16:41:35.062 254096 DEBUG oslo_concurrency.lockutils [req-08c49c8f-e1a9-4ac2-9bf3-6a3696c59397 req-30dfa98d-c702-499e-841b-648377fef308 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-a9e83b59-224b-49dd-83f7-d057737f5825" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:41:35 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:41:35 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 9c98f7f4-9a93-4f65-8dca-db4754f7ef2d does not exist
Nov 25 16:41:35 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 81725a30-d0bb-46ce-a4ae-bcc50c9e4f14 does not exist
Nov 25 16:41:35 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 43867373-0770-4688-b051-9350420497f0 does not exist
Nov 25 16:41:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 16:41:35 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:41:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 16:41:35 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:41:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:41:35 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:41:35 compute-0 sudo[324721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:41:35 compute-0 sudo[324721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:41:35 compute-0 sudo[324721]: pam_unix(sudo:session): session closed for user root
Nov 25 16:41:35 compute-0 sudo[324746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:41:35 compute-0 sudo[324746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:41:35 compute-0 sudo[324746]: pam_unix(sudo:session): session closed for user root
Nov 25 16:41:35 compute-0 sudo[324771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:41:35 compute-0 sudo[324771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:41:35 compute-0 sudo[324771]: pam_unix(sudo:session): session closed for user root
Nov 25 16:41:35 compute-0 sudo[324796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 16:41:35 compute-0 sudo[324796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:41:35 compute-0 nova_compute[254092]: 2025-11-25 16:41:35.489 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 679f5e87-bac4-4169-bffa-555a53e7321f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.813s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1686: 321 pgs: 321 active+clean; 301 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.6 MiB/s wr, 244 op/s
Nov 25 16:41:35 compute-0 nova_compute[254092]: 2025-11-25 16:41:35.594 254096 DEBUG nova.storage.rbd_utils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] resizing rbd image 679f5e87-bac4-4169-bffa-555a53e7321f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:41:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:41:35 compute-0 podman[324911]: 2025-11-25 16:41:35.680371841 +0000 UTC m=+0.021519234 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:41:35 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:41:35 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:41:35 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:41:35 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:41:35 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:41:35 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:41:35 compute-0 podman[324911]: 2025-11-25 16:41:35.786923632 +0000 UTC m=+0.128071005 container create 823bbb05d66b5240af8786c33ebb00c8334d59d5b8400eb11b6efa456fc10ca3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bardeen, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:41:35 compute-0 nova_compute[254092]: 2025-11-25 16:41:35.802 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088880.72169, fef208e1-3706-4d03-8385-12418e9dc230 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:41:35 compute-0 nova_compute[254092]: 2025-11-25 16:41:35.803 254096 INFO nova.compute.manager [-] [instance: fef208e1-3706-4d03-8385-12418e9dc230] VM Stopped (Lifecycle Event)
Nov 25 16:41:35 compute-0 nova_compute[254092]: 2025-11-25 16:41:35.809 254096 DEBUG nova.objects.instance [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lazy-loading 'migration_context' on Instance uuid 42be369c-5a19-4073-becc-4f28ef579c2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:41:35 compute-0 nova_compute[254092]: 2025-11-25 16:41:35.829 254096 DEBUG nova.compute.manager [None req-881900ba-a799-4837-8656-279dea01dedf - - - - - -] [instance: fef208e1-3706-4d03-8385-12418e9dc230] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:41:35 compute-0 nova_compute[254092]: 2025-11-25 16:41:35.830 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:41:35 compute-0 nova_compute[254092]: 2025-11-25 16:41:35.831 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Ensure instance console log exists: /var/lib/nova/instances/42be369c-5a19-4073-becc-4f28ef579c2c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:41:35 compute-0 nova_compute[254092]: 2025-11-25 16:41:35.831 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:35 compute-0 nova_compute[254092]: 2025-11-25 16:41:35.832 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:35 compute-0 nova_compute[254092]: 2025-11-25 16:41:35.832 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:35 compute-0 nova_compute[254092]: 2025-11-25 16:41:35.921 254096 DEBUG nova.network.neutron [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Updating instance_info_cache with network_info: [{"id": "63c5b67d-9c2e-4371-887d-db0d034d9072", "address": "fa:16:3e:1b:43:17", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63c5b67d-9c", "ovs_interfaceid": "63c5b67d-9c2e-4371-887d-db0d034d9072", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:41:36 compute-0 nova_compute[254092]: 2025-11-25 16:41:36.006 254096 DEBUG oslo_concurrency.lockutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Releasing lock "refresh_cache-a9e83b59-224b-49dd-83f7-d057737f5825" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:41:36 compute-0 nova_compute[254092]: 2025-11-25 16:41:36.006 254096 DEBUG nova.compute.manager [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Instance network_info: |[{"id": "63c5b67d-9c2e-4371-887d-db0d034d9072", "address": "fa:16:3e:1b:43:17", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63c5b67d-9c", "ovs_interfaceid": "63c5b67d-9c2e-4371-887d-db0d034d9072", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:41:36 compute-0 nova_compute[254092]: 2025-11-25 16:41:36.007 254096 DEBUG oslo_concurrency.lockutils [req-08c49c8f-e1a9-4ac2-9bf3-6a3696c59397 req-30dfa98d-c702-499e-841b-648377fef308 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-a9e83b59-224b-49dd-83f7-d057737f5825" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:41:36 compute-0 nova_compute[254092]: 2025-11-25 16:41:36.007 254096 DEBUG nova.network.neutron [req-08c49c8f-e1a9-4ac2-9bf3-6a3696c59397 req-30dfa98d-c702-499e-841b-648377fef308 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Refreshing network info cache for port 63c5b67d-9c2e-4371-887d-db0d034d9072 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:41:36 compute-0 nova_compute[254092]: 2025-11-25 16:41:36.010 254096 DEBUG nova.virt.libvirt.driver [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Start _get_guest_xml network_info=[{"id": "63c5b67d-9c2e-4371-887d-db0d034d9072", "address": "fa:16:3e:1b:43:17", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63c5b67d-9c", "ovs_interfaceid": "63c5b67d-9c2e-4371-887d-db0d034d9072", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:41:36 compute-0 nova_compute[254092]: 2025-11-25 16:41:36.017 254096 WARNING nova.virt.libvirt.driver [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:41:36 compute-0 nova_compute[254092]: 2025-11-25 16:41:36.025 254096 DEBUG nova.virt.libvirt.host [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:41:36 compute-0 nova_compute[254092]: 2025-11-25 16:41:36.026 254096 DEBUG nova.virt.libvirt.host [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:41:36 compute-0 nova_compute[254092]: 2025-11-25 16:41:36.029 254096 DEBUG nova.virt.libvirt.host [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:41:36 compute-0 systemd[1]: Started libpod-conmon-823bbb05d66b5240af8786c33ebb00c8334d59d5b8400eb11b6efa456fc10ca3.scope.
Nov 25 16:41:36 compute-0 nova_compute[254092]: 2025-11-25 16:41:36.030 254096 DEBUG nova.virt.libvirt.host [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:41:36 compute-0 nova_compute[254092]: 2025-11-25 16:41:36.031 254096 DEBUG nova.virt.libvirt.driver [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:41:36 compute-0 nova_compute[254092]: 2025-11-25 16:41:36.031 254096 DEBUG nova.virt.hardware [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:41:36 compute-0 nova_compute[254092]: 2025-11-25 16:41:36.032 254096 DEBUG nova.virt.hardware [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:41:36 compute-0 nova_compute[254092]: 2025-11-25 16:41:36.032 254096 DEBUG nova.virt.hardware [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:41:36 compute-0 nova_compute[254092]: 2025-11-25 16:41:36.032 254096 DEBUG nova.virt.hardware [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:41:36 compute-0 nova_compute[254092]: 2025-11-25 16:41:36.032 254096 DEBUG nova.virt.hardware [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:41:36 compute-0 nova_compute[254092]: 2025-11-25 16:41:36.033 254096 DEBUG nova.virt.hardware [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:41:36 compute-0 nova_compute[254092]: 2025-11-25 16:41:36.033 254096 DEBUG nova.virt.hardware [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:41:36 compute-0 nova_compute[254092]: 2025-11-25 16:41:36.033 254096 DEBUG nova.virt.hardware [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:41:36 compute-0 nova_compute[254092]: 2025-11-25 16:41:36.034 254096 DEBUG nova.virt.hardware [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:41:36 compute-0 nova_compute[254092]: 2025-11-25 16:41:36.034 254096 DEBUG nova.virt.hardware [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:41:36 compute-0 nova_compute[254092]: 2025-11-25 16:41:36.034 254096 DEBUG nova.virt.hardware [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:41:36 compute-0 nova_compute[254092]: 2025-11-25 16:41:36.037 254096 DEBUG oslo_concurrency.processutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:36 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:41:36 compute-0 podman[324911]: 2025-11-25 16:41:36.108434345 +0000 UTC m=+0.449581738 container init 823bbb05d66b5240af8786c33ebb00c8334d59d5b8400eb11b6efa456fc10ca3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bardeen, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Nov 25 16:41:36 compute-0 podman[324911]: 2025-11-25 16:41:36.118896699 +0000 UTC m=+0.460044072 container start 823bbb05d66b5240af8786c33ebb00c8334d59d5b8400eb11b6efa456fc10ca3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bardeen, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 16:41:36 compute-0 festive_bardeen[324944]: 167 167
Nov 25 16:41:36 compute-0 systemd[1]: libpod-823bbb05d66b5240af8786c33ebb00c8334d59d5b8400eb11b6efa456fc10ca3.scope: Deactivated successfully.
Nov 25 16:41:36 compute-0 podman[324911]: 2025-11-25 16:41:36.355279892 +0000 UTC m=+0.696427295 container attach 823bbb05d66b5240af8786c33ebb00c8334d59d5b8400eb11b6efa456fc10ca3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bardeen, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:41:36 compute-0 podman[324911]: 2025-11-25 16:41:36.356586658 +0000 UTC m=+0.697734031 container died 823bbb05d66b5240af8786c33ebb00c8334d59d5b8400eb11b6efa456fc10ca3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bardeen, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:41:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:41:36 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/824987733' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:41:36 compute-0 nova_compute[254092]: 2025-11-25 16:41:36.568 254096 DEBUG oslo_concurrency.processutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:36 compute-0 nova_compute[254092]: 2025-11-25 16:41:36.597 254096 DEBUG nova.storage.rbd_utils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image a9e83b59-224b-49dd-83f7-d057737f5825_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:36 compute-0 nova_compute[254092]: 2025-11-25 16:41:36.601 254096 DEBUG oslo_concurrency.processutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-4da5db6edac3f727996977e48a79968e790527e7c43389387edd47cdc5da7bcc-merged.mount: Deactivated successfully.
Nov 25 16:41:36 compute-0 nova_compute[254092]: 2025-11-25 16:41:36.687 254096 DEBUG nova.objects.instance [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lazy-loading 'migration_context' on Instance uuid 679f5e87-bac4-4169-bffa-555a53e7321f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:41:36 compute-0 nova_compute[254092]: 2025-11-25 16:41:36.718 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:41:36 compute-0 nova_compute[254092]: 2025-11-25 16:41:36.718 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Ensure instance console log exists: /var/lib/nova/instances/679f5e87-bac4-4169-bffa-555a53e7321f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:41:36 compute-0 nova_compute[254092]: 2025-11-25 16:41:36.719 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:36 compute-0 nova_compute[254092]: 2025-11-25 16:41:36.719 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:36 compute-0 nova_compute[254092]: 2025-11-25 16:41:36.720 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:36 compute-0 ceph-mon[74985]: pgmap v1686: 321 pgs: 321 active+clean; 301 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.6 MiB/s wr, 244 op/s
Nov 25 16:41:36 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/824987733' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:41:37 compute-0 podman[324911]: 2025-11-25 16:41:37.037026968 +0000 UTC m=+1.378174341 container remove 823bbb05d66b5240af8786c33ebb00c8334d59d5b8400eb11b6efa456fc10ca3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bardeen, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 16:41:37 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:41:37 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2873100244' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:41:37 compute-0 nova_compute[254092]: 2025-11-25 16:41:37.066 254096 DEBUG oslo_concurrency.processutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:37 compute-0 nova_compute[254092]: 2025-11-25 16:41:37.068 254096 DEBUG nova.virt.libvirt.vif [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:41:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-695586713',display_name='tempest-ServerDiskConfigTestJSON-server-695586713',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-695586713',id=65,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6ff6b3c59785407bb06c9d7e3969ea4f',ramdisk_id='',reservation_id='r-mslb6gvl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1055022473',owner_user_name='tempest-ServerDiskConfigTestJSON-1055022473-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:41:27Z,user_data=None,user_id='3c1fd56de7cd4f5c9b1d85ffe8545c90',uuid=a9e83b59-224b-49dd-83f7-d057737f5825,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "63c5b67d-9c2e-4371-887d-db0d034d9072", "address": "fa:16:3e:1b:43:17", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63c5b67d-9c", "ovs_interfaceid": "63c5b67d-9c2e-4371-887d-db0d034d9072", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:41:37 compute-0 nova_compute[254092]: 2025-11-25 16:41:37.068 254096 DEBUG nova.network.os_vif_util [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converting VIF {"id": "63c5b67d-9c2e-4371-887d-db0d034d9072", "address": "fa:16:3e:1b:43:17", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63c5b67d-9c", "ovs_interfaceid": "63c5b67d-9c2e-4371-887d-db0d034d9072", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:41:37 compute-0 nova_compute[254092]: 2025-11-25 16:41:37.069 254096 DEBUG nova.network.os_vif_util [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:43:17,bridge_name='br-int',has_traffic_filtering=True,id=63c5b67d-9c2e-4371-887d-db0d034d9072,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63c5b67d-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:41:37 compute-0 nova_compute[254092]: 2025-11-25 16:41:37.070 254096 DEBUG nova.objects.instance [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'pci_devices' on Instance uuid a9e83b59-224b-49dd-83f7-d057737f5825 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:41:37 compute-0 nova_compute[254092]: 2025-11-25 16:41:37.088 254096 DEBUG nova.virt.libvirt.driver [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:41:37 compute-0 nova_compute[254092]:   <uuid>a9e83b59-224b-49dd-83f7-d057737f5825</uuid>
Nov 25 16:41:37 compute-0 nova_compute[254092]:   <name>instance-00000041</name>
Nov 25 16:41:37 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:41:37 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:41:37 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:41:37 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-695586713</nova:name>
Nov 25 16:41:37 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:41:36</nova:creationTime>
Nov 25 16:41:37 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:41:37 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:41:37 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:41:37 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:41:37 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:41:37 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:41:37 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:41:37 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:41:37 compute-0 nova_compute[254092]:         <nova:user uuid="3c1fd56de7cd4f5c9b1d85ffe8545c90">tempest-ServerDiskConfigTestJSON-1055022473-project-member</nova:user>
Nov 25 16:41:37 compute-0 nova_compute[254092]:         <nova:project uuid="6ff6b3c59785407bb06c9d7e3969ea4f">tempest-ServerDiskConfigTestJSON-1055022473</nova:project>
Nov 25 16:41:37 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:41:37 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:41:37 compute-0 nova_compute[254092]:         <nova:port uuid="63c5b67d-9c2e-4371-887d-db0d034d9072">
Nov 25 16:41:37 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:41:37 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:41:37 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:41:37 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:41:37 compute-0 nova_compute[254092]:     <system>
Nov 25 16:41:37 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:41:37 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:41:37 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:41:37 compute-0 nova_compute[254092]:       <entry name="serial">a9e83b59-224b-49dd-83f7-d057737f5825</entry>
Nov 25 16:41:37 compute-0 nova_compute[254092]:       <entry name="uuid">a9e83b59-224b-49dd-83f7-d057737f5825</entry>
Nov 25 16:41:37 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     </system>
Nov 25 16:41:37 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:41:37 compute-0 nova_compute[254092]:   <os>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:   </os>
Nov 25 16:41:37 compute-0 nova_compute[254092]:   <features>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:   </features>
Nov 25 16:41:37 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:41:37 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:41:37 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:41:37 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:41:37 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:41:37 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/a9e83b59-224b-49dd-83f7-d057737f5825_disk">
Nov 25 16:41:37 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:       </source>
Nov 25 16:41:37 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:41:37 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:41:37 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:41:37 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/a9e83b59-224b-49dd-83f7-d057737f5825_disk.config">
Nov 25 16:41:37 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:       </source>
Nov 25 16:41:37 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:41:37 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:41:37 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:41:37 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:1b:43:17"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:       <target dev="tap63c5b67d-9c"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:41:37 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/a9e83b59-224b-49dd-83f7-d057737f5825/console.log" append="off"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     <video>
Nov 25 16:41:37 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     </video>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:41:37 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:41:37 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:41:37 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:41:37 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:41:37 compute-0 nova_compute[254092]: </domain>
Nov 25 16:41:37 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:41:37 compute-0 nova_compute[254092]: 2025-11-25 16:41:37.090 254096 DEBUG nova.compute.manager [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Preparing to wait for external event network-vif-plugged-63c5b67d-9c2e-4371-887d-db0d034d9072 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:41:37 compute-0 nova_compute[254092]: 2025-11-25 16:41:37.090 254096 DEBUG oslo_concurrency.lockutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "a9e83b59-224b-49dd-83f7-d057737f5825-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:37 compute-0 nova_compute[254092]: 2025-11-25 16:41:37.090 254096 DEBUG oslo_concurrency.lockutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "a9e83b59-224b-49dd-83f7-d057737f5825-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:37 compute-0 nova_compute[254092]: 2025-11-25 16:41:37.090 254096 DEBUG oslo_concurrency.lockutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "a9e83b59-224b-49dd-83f7-d057737f5825-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:37 compute-0 nova_compute[254092]: 2025-11-25 16:41:37.091 254096 DEBUG nova.virt.libvirt.vif [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:41:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-695586713',display_name='tempest-ServerDiskConfigTestJSON-server-695586713',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-695586713',id=65,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6ff6b3c59785407bb06c9d7e3969ea4f',ramdisk_id='',reservation_id='r-mslb6gvl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1055022473',owner_user_name='tempest-Ser
verDiskConfigTestJSON-1055022473-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:41:27Z,user_data=None,user_id='3c1fd56de7cd4f5c9b1d85ffe8545c90',uuid=a9e83b59-224b-49dd-83f7-d057737f5825,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "63c5b67d-9c2e-4371-887d-db0d034d9072", "address": "fa:16:3e:1b:43:17", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63c5b67d-9c", "ovs_interfaceid": "63c5b67d-9c2e-4371-887d-db0d034d9072", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:41:37 compute-0 nova_compute[254092]: 2025-11-25 16:41:37.092 254096 DEBUG nova.network.os_vif_util [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converting VIF {"id": "63c5b67d-9c2e-4371-887d-db0d034d9072", "address": "fa:16:3e:1b:43:17", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63c5b67d-9c", "ovs_interfaceid": "63c5b67d-9c2e-4371-887d-db0d034d9072", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:41:37 compute-0 nova_compute[254092]: 2025-11-25 16:41:37.092 254096 DEBUG nova.network.os_vif_util [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:43:17,bridge_name='br-int',has_traffic_filtering=True,id=63c5b67d-9c2e-4371-887d-db0d034d9072,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63c5b67d-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:41:37 compute-0 nova_compute[254092]: 2025-11-25 16:41:37.093 254096 DEBUG os_vif [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:43:17,bridge_name='br-int',has_traffic_filtering=True,id=63c5b67d-9c2e-4371-887d-db0d034d9072,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63c5b67d-9c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:41:37 compute-0 nova_compute[254092]: 2025-11-25 16:41:37.093 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:37 compute-0 nova_compute[254092]: 2025-11-25 16:41:37.094 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:37 compute-0 nova_compute[254092]: 2025-11-25 16:41:37.094 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:41:37 compute-0 nova_compute[254092]: 2025-11-25 16:41:37.097 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:37 compute-0 nova_compute[254092]: 2025-11-25 16:41:37.098 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap63c5b67d-9c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:37 compute-0 nova_compute[254092]: 2025-11-25 16:41:37.098 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap63c5b67d-9c, col_values=(('external_ids', {'iface-id': '63c5b67d-9c2e-4371-887d-db0d034d9072', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1b:43:17', 'vm-uuid': 'a9e83b59-224b-49dd-83f7-d057737f5825'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:37 compute-0 NetworkManager[48891]: <info>  [1764088897.1007] manager: (tap63c5b67d-9c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/263)
Nov 25 16:41:37 compute-0 nova_compute[254092]: 2025-11-25 16:41:37.105 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:37 compute-0 systemd[1]: libpod-conmon-823bbb05d66b5240af8786c33ebb00c8334d59d5b8400eb11b6efa456fc10ca3.scope: Deactivated successfully.
Nov 25 16:41:37 compute-0 nova_compute[254092]: 2025-11-25 16:41:37.108 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:37 compute-0 nova_compute[254092]: 2025-11-25 16:41:37.109 254096 INFO os_vif [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:43:17,bridge_name='br-int',has_traffic_filtering=True,id=63c5b67d-9c2e-4371-887d-db0d034d9072,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63c5b67d-9c')
Nov 25 16:41:37 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 16:41:37 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 7659 writes, 34K keys, 7659 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 7659 writes, 7659 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1650 writes, 7633 keys, 1650 commit groups, 1.0 writes per commit group, ingest: 9.79 MB, 0.02 MB/s
                                           Interval WAL: 1650 writes, 1650 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     23.8      1.65              0.14        20    0.082       0      0       0.0       0.0
                                             L6      1/0    9.22 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   3.6     95.3     78.7      1.80              0.39        19    0.095     96K    10K       0.0       0.0
                                            Sum      1/0    9.22 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.6     49.8     52.4      3.44              0.53        39    0.088     96K    10K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.3     70.7     73.8      0.67              0.13        10    0.067     30K   3070       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0     95.3     78.7      1.80              0.39        19    0.095     96K    10K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     24.4      1.60              0.14        19    0.084       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.1      0.05              0.00         1    0.047       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.038, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.18 GB write, 0.06 MB/s write, 0.17 GB read, 0.06 MB/s read, 3.4 seconds
                                           Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563cc83831f0#2 capacity: 304.00 MB usage: 19.82 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000232 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1304,19.08 MB,6.27751%) FilterBlock(40,278.98 KB,0.0896203%) IndexBlock(40,477.27 KB,0.153316%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 25 16:41:37 compute-0 nova_compute[254092]: 2025-11-25 16:41:37.259 254096 DEBUG nova.virt.libvirt.driver [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:41:37 compute-0 nova_compute[254092]: 2025-11-25 16:41:37.261 254096 DEBUG nova.virt.libvirt.driver [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:41:37 compute-0 nova_compute[254092]: 2025-11-25 16:41:37.261 254096 DEBUG nova.virt.libvirt.driver [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] No VIF found with MAC fa:16:3e:1b:43:17, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:41:37 compute-0 nova_compute[254092]: 2025-11-25 16:41:37.261 254096 INFO nova.virt.libvirt.driver [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Using config drive
Nov 25 16:41:37 compute-0 nova_compute[254092]: 2025-11-25 16:41:37.279 254096 DEBUG nova.storage.rbd_utils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image a9e83b59-224b-49dd-83f7-d057737f5825_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:37 compute-0 podman[325049]: 2025-11-25 16:41:37.213066293 +0000 UTC m=+0.023928620 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:41:37 compute-0 podman[325049]: 2025-11-25 16:41:37.348144958 +0000 UTC m=+0.159007265 container create 6e2002ab25c6e3fd466c12021497c9d5a0560c3b901fcd3268546825b59d7eaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_rosalind, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 16:41:37 compute-0 systemd[1]: Started libpod-conmon-6e2002ab25c6e3fd466c12021497c9d5a0560c3b901fcd3268546825b59d7eaa.scope.
Nov 25 16:41:37 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:41:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/742c0325d4eed7e874c7127d07b0d97ac7a0f9d2246ecfca3f89ce3291f1fc93/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:41:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/742c0325d4eed7e874c7127d07b0d97ac7a0f9d2246ecfca3f89ce3291f1fc93/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:41:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/742c0325d4eed7e874c7127d07b0d97ac7a0f9d2246ecfca3f89ce3291f1fc93/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:41:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/742c0325d4eed7e874c7127d07b0d97ac7a0f9d2246ecfca3f89ce3291f1fc93/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:41:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/742c0325d4eed7e874c7127d07b0d97ac7a0f9d2246ecfca3f89ce3291f1fc93/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 16:41:37 compute-0 podman[325049]: 2025-11-25 16:41:37.502382372 +0000 UTC m=+0.313244699 container init 6e2002ab25c6e3fd466c12021497c9d5a0560c3b901fcd3268546825b59d7eaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 16:41:37 compute-0 podman[325049]: 2025-11-25 16:41:37.510309367 +0000 UTC m=+0.321171674 container start 6e2002ab25c6e3fd466c12021497c9d5a0560c3b901fcd3268546825b59d7eaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 16:41:37 compute-0 nova_compute[254092]: 2025-11-25 16:41:37.531 254096 DEBUG nova.network.neutron [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Successfully updated port: 43b8a38b-0b5a-4b7d-8043-759fa3697e8a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:41:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1687: 321 pgs: 321 active+clean; 320 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 5.3 MiB/s wr, 287 op/s
Nov 25 16:41:37 compute-0 nova_compute[254092]: 2025-11-25 16:41:37.574 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "refresh_cache-42be369c-5a19-4073-becc-4f28ef579c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:41:37 compute-0 nova_compute[254092]: 2025-11-25 16:41:37.576 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquired lock "refresh_cache-42be369c-5a19-4073-becc-4f28ef579c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:41:37 compute-0 nova_compute[254092]: 2025-11-25 16:41:37.576 254096 DEBUG nova.network.neutron [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:41:37 compute-0 nova_compute[254092]: 2025-11-25 16:41:37.579 254096 INFO nova.virt.libvirt.driver [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Creating config drive at /var/lib/nova/instances/a9e83b59-224b-49dd-83f7-d057737f5825/disk.config
Nov 25 16:41:37 compute-0 nova_compute[254092]: 2025-11-25 16:41:37.584 254096 DEBUG oslo_concurrency.processutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a9e83b59-224b-49dd-83f7-d057737f5825/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo7zuzvjk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:37 compute-0 podman[325049]: 2025-11-25 16:41:37.609142219 +0000 UTC m=+0.420004526 container attach 6e2002ab25c6e3fd466c12021497c9d5a0560c3b901fcd3268546825b59d7eaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_rosalind, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 16:41:37 compute-0 nova_compute[254092]: 2025-11-25 16:41:37.721 254096 DEBUG oslo_concurrency.processutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a9e83b59-224b-49dd-83f7-d057737f5825/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo7zuzvjk" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:37 compute-0 nova_compute[254092]: 2025-11-25 16:41:37.742 254096 DEBUG nova.storage.rbd_utils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] rbd image a9e83b59-224b-49dd-83f7-d057737f5825_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:37 compute-0 nova_compute[254092]: 2025-11-25 16:41:37.745 254096 DEBUG oslo_concurrency.processutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a9e83b59-224b-49dd-83f7-d057737f5825/disk.config a9e83b59-224b-49dd-83f7-d057737f5825_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:37 compute-0 nova_compute[254092]: 2025-11-25 16:41:37.774 254096 DEBUG nova.network.neutron [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:41:37 compute-0 nova_compute[254092]: 2025-11-25 16:41:37.918 254096 DEBUG nova.compute.manager [req-ccda2a2d-b17a-4285-af63-f79357f755df req-bc77424b-e480-4715-a8e0-a613ddbf1d0a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Received event network-changed-43b8a38b-0b5a-4b7d-8043-759fa3697e8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:41:37 compute-0 nova_compute[254092]: 2025-11-25 16:41:37.919 254096 DEBUG nova.compute.manager [req-ccda2a2d-b17a-4285-af63-f79357f755df req-bc77424b-e480-4715-a8e0-a613ddbf1d0a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Refreshing instance network info cache due to event network-changed-43b8a38b-0b5a-4b7d-8043-759fa3697e8a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:41:37 compute-0 nova_compute[254092]: 2025-11-25 16:41:37.919 254096 DEBUG oslo_concurrency.lockutils [req-ccda2a2d-b17a-4285-af63-f79357f755df req-bc77424b-e480-4715-a8e0-a613ddbf1d0a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-42be369c-5a19-4073-becc-4f28ef579c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:41:37 compute-0 nova_compute[254092]: 2025-11-25 16:41:37.988 254096 DEBUG nova.network.neutron [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Successfully updated port: 01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:41:38 compute-0 nova_compute[254092]: 2025-11-25 16:41:38.057 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "refresh_cache-679f5e87-bac4-4169-bffa-555a53e7321f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:41:38 compute-0 nova_compute[254092]: 2025-11-25 16:41:38.058 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquired lock "refresh_cache-679f5e87-bac4-4169-bffa-555a53e7321f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:41:38 compute-0 nova_compute[254092]: 2025-11-25 16:41:38.058 254096 DEBUG nova.network.neutron [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:41:38 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2873100244' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:41:38 compute-0 nova_compute[254092]: 2025-11-25 16:41:38.171 254096 DEBUG oslo_concurrency.processutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a9e83b59-224b-49dd-83f7-d057737f5825/disk.config a9e83b59-224b-49dd-83f7-d057737f5825_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:38 compute-0 nova_compute[254092]: 2025-11-25 16:41:38.172 254096 INFO nova.virt.libvirt.driver [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Deleting local config drive /var/lib/nova/instances/a9e83b59-224b-49dd-83f7-d057737f5825/disk.config because it was imported into RBD.
Nov 25 16:41:38 compute-0 nova_compute[254092]: 2025-11-25 16:41:38.190 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:38 compute-0 kernel: tap63c5b67d-9c: entered promiscuous mode
Nov 25 16:41:38 compute-0 NetworkManager[48891]: <info>  [1764088898.2143] manager: (tap63c5b67d-9c): new Tun device (/org/freedesktop/NetworkManager/Devices/264)
Nov 25 16:41:38 compute-0 ovn_controller[153477]: 2025-11-25T16:41:38Z|00602|binding|INFO|Claiming lport 63c5b67d-9c2e-4371-887d-db0d034d9072 for this chassis.
Nov 25 16:41:38 compute-0 ovn_controller[153477]: 2025-11-25T16:41:38Z|00603|binding|INFO|63c5b67d-9c2e-4371-887d-db0d034d9072: Claiming fa:16:3e:1b:43:17 10.100.0.7
Nov 25 16:41:38 compute-0 nova_compute[254092]: 2025-11-25 16:41:38.218 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.224 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:43:17 10.100.0.7'], port_security=['fa:16:3e:1b:43:17 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'a9e83b59-224b-49dd-83f7-d057737f5825', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ff6b3c59785407bb06c9d7e3969ea4f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b774cfb5-cd97-4ca0-9d7e-110986406220', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d2dad0a-51f1-4edf-9261-6f40860e89ef, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=63c5b67d-9c2e-4371-887d-db0d034d9072) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.233 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 63c5b67d-9c2e-4371-887d-db0d034d9072 in datapath 62c0a8be-b828-4765-9cf8-f2477a8ef1d9 bound to our chassis
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.234 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 62c0a8be-b828-4765-9cf8-f2477a8ef1d9
Nov 25 16:41:38 compute-0 ovn_controller[153477]: 2025-11-25T16:41:38Z|00604|binding|INFO|Setting lport 63c5b67d-9c2e-4371-887d-db0d034d9072 ovn-installed in OVS
Nov 25 16:41:38 compute-0 ovn_controller[153477]: 2025-11-25T16:41:38Z|00605|binding|INFO|Setting lport 63c5b67d-9c2e-4371-887d-db0d034d9072 up in Southbound
Nov 25 16:41:38 compute-0 nova_compute[254092]: 2025-11-25 16:41:38.246 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.247 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e0b8dcbf-cb09-4fdc-bc97-7111647eeb90]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:38 compute-0 systemd-udevd[325142]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.250 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap62c0a8be-b1 in ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.252 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap62c0a8be-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.252 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0418c8e6-d96d-48b4-8181-43340772b29b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.254 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a498cdb4-9d8c-4009-98ba-dd759393f71f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:38 compute-0 NetworkManager[48891]: <info>  [1764088898.2631] device (tap63c5b67d-9c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:41:38 compute-0 NetworkManager[48891]: <info>  [1764088898.2643] device (tap63c5b67d-9c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:41:38 compute-0 systemd-machined[216343]: New machine qemu-76-instance-00000041.
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.266 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[dcef4dc2-9a61-499e-957c-46d6bd3e339f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:38 compute-0 systemd[1]: Started Virtual Machine qemu-76-instance-00000041.
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.282 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0702c0f1-5ab6-41d0-9d1e-6abbab7ef41d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.313 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[fb0ddf6c-c347-45e9-9039-57a8856f346d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:38 compute-0 NetworkManager[48891]: <info>  [1764088898.3196] manager: (tap62c0a8be-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/265)
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.320 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[77073b75-5a1e-4b50-b49c-c936733c12dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:38 compute-0 nova_compute[254092]: 2025-11-25 16:41:38.331 254096 DEBUG nova.network.neutron [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.359 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ddd7377d-0e7d-455b-970d-a91d68abb08e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.363 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[7505e7df-5a92-4f03-9d69-2cb97117c8b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:38 compute-0 NetworkManager[48891]: <info>  [1764088898.3830] device (tap62c0a8be-b0): carrier: link connected
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.386 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c2dc085d-92a0-4541-a29b-169ce981602f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.413 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e2e29db0-fc7c-4472-8779-1328d9a7c47b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62c0a8be-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:da:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 174], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535597, 'reachable_time': 39029, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325183, 'error': None, 'target': 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.430 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d3840e01-01b1-4a6c-a011-3b0d8330d77e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedc:dab2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535597, 'tstamp': 535597}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325186, 'error': None, 'target': 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.447 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[99f56300-17d1-40c8-8639-e58aaf3905f3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62c0a8be-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:da:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 174], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535597, 'reachable_time': 39029, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 325189, 'error': None, 'target': 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.496 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[86066a18-4d2e-4381-ae24-3c034dc20c0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.546 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[67190690-c937-4ee2-96be-9e3abb1dca4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.547 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62c0a8be-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.547 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.548 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62c0a8be-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:38 compute-0 NetworkManager[48891]: <info>  [1764088898.5847] manager: (tap62c0a8be-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/266)
Nov 25 16:41:38 compute-0 kernel: tap62c0a8be-b0: entered promiscuous mode
Nov 25 16:41:38 compute-0 nova_compute[254092]: 2025-11-25 16:41:38.590 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.595 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap62c0a8be-b0, col_values=(('external_ids', {'iface-id': '4c0a1e28-b18a-4839-ae5a-4472548ad46f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:38 compute-0 nova_compute[254092]: 2025-11-25 16:41:38.597 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:38 compute-0 ovn_controller[153477]: 2025-11-25T16:41:38Z|00606|binding|INFO|Releasing lport 4c0a1e28-b18a-4839-ae5a-4472548ad46f from this chassis (sb_readonly=0)
Nov 25 16:41:38 compute-0 nova_compute[254092]: 2025-11-25 16:41:38.597 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.600 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/62c0a8be-b828-4765-9cf8-f2477a8ef1d9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/62c0a8be-b828-4765-9cf8-f2477a8ef1d9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.600 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[79c64a57-f146-4f22-ad08-74f9899d9c06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.601 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-62c0a8be-b828-4765-9cf8-f2477a8ef1d9
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/62c0a8be-b828-4765-9cf8-f2477a8ef1d9.pid.haproxy
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 62c0a8be-b828-4765-9cf8-f2477a8ef1d9
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:41:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:38.601 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'env', 'PROCESS_TAG=haproxy-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/62c0a8be-b828-4765-9cf8-f2477a8ef1d9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:41:38 compute-0 nova_compute[254092]: 2025-11-25 16:41:38.614 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:38 compute-0 thirsty_rosalind[325084]: --> passed data devices: 0 physical, 3 LVM
Nov 25 16:41:38 compute-0 thirsty_rosalind[325084]: --> relative data size: 1.0
Nov 25 16:41:38 compute-0 thirsty_rosalind[325084]: --> All data devices are unavailable
Nov 25 16:41:38 compute-0 nova_compute[254092]: 2025-11-25 16:41:38.658 254096 DEBUG nova.network.neutron [req-08c49c8f-e1a9-4ac2-9bf3-6a3696c59397 req-30dfa98d-c702-499e-841b-648377fef308 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Updated VIF entry in instance network info cache for port 63c5b67d-9c2e-4371-887d-db0d034d9072. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:41:38 compute-0 nova_compute[254092]: 2025-11-25 16:41:38.659 254096 DEBUG nova.network.neutron [req-08c49c8f-e1a9-4ac2-9bf3-6a3696c59397 req-30dfa98d-c702-499e-841b-648377fef308 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Updating instance_info_cache with network_info: [{"id": "63c5b67d-9c2e-4371-887d-db0d034d9072", "address": "fa:16:3e:1b:43:17", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63c5b67d-9c", "ovs_interfaceid": "63c5b67d-9c2e-4371-887d-db0d034d9072", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:41:38 compute-0 nova_compute[254092]: 2025-11-25 16:41:38.680 254096 DEBUG oslo_concurrency.lockutils [req-08c49c8f-e1a9-4ac2-9bf3-6a3696c59397 req-30dfa98d-c702-499e-841b-648377fef308 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-a9e83b59-224b-49dd-83f7-d057737f5825" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:41:38 compute-0 systemd[1]: libpod-6e2002ab25c6e3fd466c12021497c9d5a0560c3b901fcd3268546825b59d7eaa.scope: Deactivated successfully.
Nov 25 16:41:38 compute-0 systemd[1]: libpod-6e2002ab25c6e3fd466c12021497c9d5a0560c3b901fcd3268546825b59d7eaa.scope: Consumed 1.043s CPU time.
Nov 25 16:41:38 compute-0 podman[325049]: 2025-11-25 16:41:38.68219078 +0000 UTC m=+1.493053087 container died 6e2002ab25c6e3fd466c12021497c9d5a0560c3b901fcd3268546825b59d7eaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 16:41:38 compute-0 nova_compute[254092]: 2025-11-25 16:41:38.680 254096 DEBUG nova.compute.manager [req-08c49c8f-e1a9-4ac2-9bf3-6a3696c59397 req-30dfa98d-c702-499e-841b-648377fef308 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Received event network-vif-plugged-1404e99c-a32c-404a-a7d6-3daccc67c48b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:41:38 compute-0 nova_compute[254092]: 2025-11-25 16:41:38.686 254096 DEBUG oslo_concurrency.lockutils [req-08c49c8f-e1a9-4ac2-9bf3-6a3696c59397 req-30dfa98d-c702-499e-841b-648377fef308 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b5c5a442-8e8e-40c5-9634-e36c49e6e41b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:38 compute-0 nova_compute[254092]: 2025-11-25 16:41:38.686 254096 DEBUG oslo_concurrency.lockutils [req-08c49c8f-e1a9-4ac2-9bf3-6a3696c59397 req-30dfa98d-c702-499e-841b-648377fef308 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b5c5a442-8e8e-40c5-9634-e36c49e6e41b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:38 compute-0 nova_compute[254092]: 2025-11-25 16:41:38.686 254096 DEBUG oslo_concurrency.lockutils [req-08c49c8f-e1a9-4ac2-9bf3-6a3696c59397 req-30dfa98d-c702-499e-841b-648377fef308 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b5c5a442-8e8e-40c5-9634-e36c49e6e41b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:38 compute-0 nova_compute[254092]: 2025-11-25 16:41:38.687 254096 DEBUG nova.compute.manager [req-08c49c8f-e1a9-4ac2-9bf3-6a3696c59397 req-30dfa98d-c702-499e-841b-648377fef308 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] No waiting events found dispatching network-vif-plugged-1404e99c-a32c-404a-a7d6-3daccc67c48b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:41:38 compute-0 nova_compute[254092]: 2025-11-25 16:41:38.687 254096 WARNING nova.compute.manager [req-08c49c8f-e1a9-4ac2-9bf3-6a3696c59397 req-30dfa98d-c702-499e-841b-648377fef308 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Received unexpected event network-vif-plugged-1404e99c-a32c-404a-a7d6-3daccc67c48b for instance with vm_state active and task_state None.
Nov 25 16:41:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-742c0325d4eed7e874c7127d07b0d97ac7a0f9d2246ecfca3f89ce3291f1fc93-merged.mount: Deactivated successfully.
Nov 25 16:41:38 compute-0 podman[325049]: 2025-11-25 16:41:38.857030663 +0000 UTC m=+1.667892970 container remove 6e2002ab25c6e3fd466c12021497c9d5a0560c3b901fcd3268546825b59d7eaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_rosalind, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Nov 25 16:41:38 compute-0 systemd[1]: libpod-conmon-6e2002ab25c6e3fd466c12021497c9d5a0560c3b901fcd3268546825b59d7eaa.scope: Deactivated successfully.
Nov 25 16:41:38 compute-0 sudo[324796]: pam_unix(sudo:session): session closed for user root
Nov 25 16:41:38 compute-0 podman[325231]: 2025-11-25 16:41:38.885446914 +0000 UTC m=+0.173544950 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:41:38 compute-0 podman[325238]: 2025-11-25 16:41:38.899298319 +0000 UTC m=+0.185774870 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 16:41:38 compute-0 podman[325230]: 2025-11-25 16:41:38.935159753 +0000 UTC m=+0.225551690 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 16:41:38 compute-0 sudo[325332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:41:38 compute-0 sudo[325332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:41:38 compute-0 sudo[325332]: pam_unix(sudo:session): session closed for user root
Nov 25 16:41:38 compute-0 nova_compute[254092]: 2025-11-25 16:41:38.969 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088898.9685616, a9e83b59-224b-49dd-83f7-d057737f5825 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:41:38 compute-0 nova_compute[254092]: 2025-11-25 16:41:38.970 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] VM Started (Lifecycle Event)
Nov 25 16:41:38 compute-0 nova_compute[254092]: 2025-11-25 16:41:38.990 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:41:38 compute-0 nova_compute[254092]: 2025-11-25 16:41:38.994 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088898.9694002, a9e83b59-224b-49dd-83f7-d057737f5825 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:41:38 compute-0 nova_compute[254092]: 2025-11-25 16:41:38.994 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] VM Paused (Lifecycle Event)
Nov 25 16:41:39 compute-0 sudo[325370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:41:39 compute-0 sudo[325370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:41:39 compute-0 sudo[325370]: pam_unix(sudo:session): session closed for user root
Nov 25 16:41:39 compute-0 nova_compute[254092]: 2025-11-25 16:41:39.018 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:41:39 compute-0 podman[325364]: 2025-11-25 16:41:39.020836387 +0000 UTC m=+0.059112514 container create fd3d886f03b64ed41474997e2ec7486a377a5f3e50d324379b9453788f1733b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 16:41:39 compute-0 nova_compute[254092]: 2025-11-25 16:41:39.031 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:41:39 compute-0 nova_compute[254092]: 2025-11-25 16:41:39.053 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:41:39 compute-0 systemd[1]: Started libpod-conmon-fd3d886f03b64ed41474997e2ec7486a377a5f3e50d324379b9453788f1733b9.scope.
Nov 25 16:41:39 compute-0 podman[325364]: 2025-11-25 16:41:38.990596517 +0000 UTC m=+0.028872674 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:41:39 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:41:39 compute-0 sudo[325402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:41:39 compute-0 sudo[325402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:41:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/549e2f31ca8fd68d7bd7aa09e114cdc7531e725e9150fc3ab7f41d99c3d361aa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:41:39 compute-0 sudo[325402]: pam_unix(sudo:session): session closed for user root
Nov 25 16:41:39 compute-0 nova_compute[254092]: 2025-11-25 16:41:39.107 254096 DEBUG nova.network.neutron [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Updating instance_info_cache with network_info: [{"id": "43b8a38b-0b5a-4b7d-8043-759fa3697e8a", "address": "fa:16:3e:d3:af:21", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43b8a38b-0b", "ovs_interfaceid": "43b8a38b-0b5a-4b7d-8043-759fa3697e8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:41:39 compute-0 ceph-mon[74985]: pgmap v1687: 321 pgs: 321 active+clean; 320 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 5.3 MiB/s wr, 287 op/s
Nov 25 16:41:39 compute-0 podman[325364]: 2025-11-25 16:41:39.129586818 +0000 UTC m=+0.167862965 container init fd3d886f03b64ed41474997e2ec7486a377a5f3e50d324379b9453788f1733b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 16:41:39 compute-0 podman[325364]: 2025-11-25 16:41:39.137198914 +0000 UTC m=+0.175475031 container start fd3d886f03b64ed41474997e2ec7486a377a5f3e50d324379b9453788f1733b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:41:39 compute-0 nova_compute[254092]: 2025-11-25 16:41:39.137 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Releasing lock "refresh_cache-42be369c-5a19-4073-becc-4f28ef579c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:41:39 compute-0 nova_compute[254092]: 2025-11-25 16:41:39.137 254096 DEBUG nova.compute.manager [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Instance network_info: |[{"id": "43b8a38b-0b5a-4b7d-8043-759fa3697e8a", "address": "fa:16:3e:d3:af:21", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43b8a38b-0b", "ovs_interfaceid": "43b8a38b-0b5a-4b7d-8043-759fa3697e8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:41:39 compute-0 nova_compute[254092]: 2025-11-25 16:41:39.138 254096 DEBUG oslo_concurrency.lockutils [req-ccda2a2d-b17a-4285-af63-f79357f755df req-bc77424b-e480-4715-a8e0-a613ddbf1d0a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-42be369c-5a19-4073-becc-4f28ef579c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:41:39 compute-0 nova_compute[254092]: 2025-11-25 16:41:39.138 254096 DEBUG nova.network.neutron [req-ccda2a2d-b17a-4285-af63-f79357f755df req-bc77424b-e480-4715-a8e0-a613ddbf1d0a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Refreshing network info cache for port 43b8a38b-0b5a-4b7d-8043-759fa3697e8a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:41:39 compute-0 nova_compute[254092]: 2025-11-25 16:41:39.140 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Start _get_guest_xml network_info=[{"id": "43b8a38b-0b5a-4b7d-8043-759fa3697e8a", "address": "fa:16:3e:d3:af:21", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43b8a38b-0b", "ovs_interfaceid": "43b8a38b-0b5a-4b7d-8043-759fa3697e8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:41:39 compute-0 nova_compute[254092]: 2025-11-25 16:41:39.145 254096 WARNING nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:41:39 compute-0 sudo[325432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 16:41:39 compute-0 nova_compute[254092]: 2025-11-25 16:41:39.150 254096 DEBUG nova.virt.libvirt.host [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:41:39 compute-0 sudo[325432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:41:39 compute-0 nova_compute[254092]: 2025-11-25 16:41:39.151 254096 DEBUG nova.virt.libvirt.host [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:41:39 compute-0 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[325427]: [NOTICE]   (325456) : New worker (325460) forked
Nov 25 16:41:39 compute-0 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[325427]: [NOTICE]   (325456) : Loading success.
Nov 25 16:41:39 compute-0 nova_compute[254092]: 2025-11-25 16:41:39.160 254096 DEBUG nova.virt.libvirt.host [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:41:39 compute-0 nova_compute[254092]: 2025-11-25 16:41:39.161 254096 DEBUG nova.virt.libvirt.host [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:41:39 compute-0 nova_compute[254092]: 2025-11-25 16:41:39.162 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:41:39 compute-0 nova_compute[254092]: 2025-11-25 16:41:39.162 254096 DEBUG nova.virt.hardware [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:41:39 compute-0 nova_compute[254092]: 2025-11-25 16:41:39.163 254096 DEBUG nova.virt.hardware [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:41:39 compute-0 nova_compute[254092]: 2025-11-25 16:41:39.163 254096 DEBUG nova.virt.hardware [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:41:39 compute-0 nova_compute[254092]: 2025-11-25 16:41:39.163 254096 DEBUG nova.virt.hardware [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:41:39 compute-0 nova_compute[254092]: 2025-11-25 16:41:39.164 254096 DEBUG nova.virt.hardware [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:41:39 compute-0 nova_compute[254092]: 2025-11-25 16:41:39.164 254096 DEBUG nova.virt.hardware [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:41:39 compute-0 nova_compute[254092]: 2025-11-25 16:41:39.165 254096 DEBUG nova.virt.hardware [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:41:39 compute-0 nova_compute[254092]: 2025-11-25 16:41:39.165 254096 DEBUG nova.virt.hardware [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:41:39 compute-0 nova_compute[254092]: 2025-11-25 16:41:39.165 254096 DEBUG nova.virt.hardware [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:41:39 compute-0 nova_compute[254092]: 2025-11-25 16:41:39.166 254096 DEBUG nova.virt.hardware [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:41:39 compute-0 nova_compute[254092]: 2025-11-25 16:41:39.166 254096 DEBUG nova.virt.hardware [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:41:39 compute-0 nova_compute[254092]: 2025-11-25 16:41:39.169 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:39 compute-0 podman[325523]: 2025-11-25 16:41:39.485687169 +0000 UTC m=+0.042154805 container create f8d3affa46b3c1f713389d9b08d4378f62443a652c8ba8badd66ad1435289cc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_herschel, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 16:41:39 compute-0 systemd[1]: Started libpod-conmon-f8d3affa46b3c1f713389d9b08d4378f62443a652c8ba8badd66ad1435289cc0.scope.
Nov 25 16:41:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1688: 321 pgs: 321 active+clean; 320 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 5.3 MiB/s wr, 284 op/s
Nov 25 16:41:39 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:41:39 compute-0 podman[325523]: 2025-11-25 16:41:39.467881735 +0000 UTC m=+0.024349401 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:41:39 compute-0 podman[325523]: 2025-11-25 16:41:39.572391631 +0000 UTC m=+0.128859297 container init f8d3affa46b3c1f713389d9b08d4378f62443a652c8ba8badd66ad1435289cc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_herschel, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 16:41:39 compute-0 podman[325523]: 2025-11-25 16:41:39.578852336 +0000 UTC m=+0.135319982 container start f8d3affa46b3c1f713389d9b08d4378f62443a652c8ba8badd66ad1435289cc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_herschel, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Nov 25 16:41:39 compute-0 podman[325523]: 2025-11-25 16:41:39.582618058 +0000 UTC m=+0.139085694 container attach f8d3affa46b3c1f713389d9b08d4378f62443a652c8ba8badd66ad1435289cc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_herschel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:41:39 compute-0 systemd[1]: libpod-f8d3affa46b3c1f713389d9b08d4378f62443a652c8ba8badd66ad1435289cc0.scope: Deactivated successfully.
Nov 25 16:41:39 compute-0 jolly_herschel[325537]: 167 167
Nov 25 16:41:39 compute-0 conmon[325537]: conmon f8d3affa46b3c1f71338 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f8d3affa46b3c1f713389d9b08d4378f62443a652c8ba8badd66ad1435289cc0.scope/container/memory.events
Nov 25 16:41:39 compute-0 podman[325523]: 2025-11-25 16:41:39.586081941 +0000 UTC m=+0.142549597 container died f8d3affa46b3c1f713389d9b08d4378f62443a652c8ba8badd66ad1435289cc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_herschel, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:41:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-a29de91016a70ce30ed111802e61732b05a1a219ce343a631d17fe7b502bf26a-merged.mount: Deactivated successfully.
Nov 25 16:41:39 compute-0 podman[325523]: 2025-11-25 16:41:39.622780828 +0000 UTC m=+0.179248464 container remove f8d3affa46b3c1f713389d9b08d4378f62443a652c8ba8badd66ad1435289cc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_herschel, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 16:41:39 compute-0 systemd[1]: libpod-conmon-f8d3affa46b3c1f713389d9b08d4378f62443a652c8ba8badd66ad1435289cc0.scope: Deactivated successfully.
Nov 25 16:41:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:41:39 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/608732426' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:41:39 compute-0 nova_compute[254092]: 2025-11-25 16:41:39.750 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:39 compute-0 nova_compute[254092]: 2025-11-25 16:41:39.779 254096 DEBUG nova.storage.rbd_utils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image 42be369c-5a19-4073-becc-4f28ef579c2c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:39 compute-0 nova_compute[254092]: 2025-11-25 16:41:39.784 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:39 compute-0 podman[325563]: 2025-11-25 16:41:39.805065243 +0000 UTC m=+0.044104728 container create 67652e76246d7a6f4216add59e45adf41106954d2463dc5c7d984ae352c467fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_yalow, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:41:39 compute-0 systemd[1]: Started libpod-conmon-67652e76246d7a6f4216add59e45adf41106954d2463dc5c7d984ae352c467fd.scope.
Nov 25 16:41:39 compute-0 podman[325563]: 2025-11-25 16:41:39.785458281 +0000 UTC m=+0.024497786 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:41:39 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:41:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98cc19b8cedd1dedab6d2f4483603760edf9f2c118b7e9746c268b78c2a3866a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:41:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98cc19b8cedd1dedab6d2f4483603760edf9f2c118b7e9746c268b78c2a3866a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:41:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98cc19b8cedd1dedab6d2f4483603760edf9f2c118b7e9746c268b78c2a3866a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:41:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98cc19b8cedd1dedab6d2f4483603760edf9f2c118b7e9746c268b78c2a3866a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:41:39 compute-0 podman[325563]: 2025-11-25 16:41:39.929219161 +0000 UTC m=+0.168258666 container init 67652e76246d7a6f4216add59e45adf41106954d2463dc5c7d984ae352c467fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_yalow, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:41:39 compute-0 podman[325563]: 2025-11-25 16:41:39.937225658 +0000 UTC m=+0.176265153 container start 67652e76246d7a6f4216add59e45adf41106954d2463dc5c7d984ae352c467fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 16:41:39 compute-0 podman[325563]: 2025-11-25 16:41:39.94135148 +0000 UTC m=+0.180390985 container attach 67652e76246d7a6f4216add59e45adf41106954d2463dc5c7d984ae352c467fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_yalow, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 16:41:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:41:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:41:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:41:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:41:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:41:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:41:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:41:40
Nov 25 16:41:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:41:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:41:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', 'backups', 'volumes', 'images', 'vms', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Nov 25 16:41:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:41:40 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/608732426' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.146 254096 DEBUG nova.network.neutron [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Updating instance_info_cache with network_info: [{"id": "01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb", "address": "fa:16:3e:da:f6:54", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01b0ba79-e7", "ovs_interfaceid": "01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.176 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Releasing lock "refresh_cache-679f5e87-bac4-4169-bffa-555a53e7321f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.177 254096 DEBUG nova.compute.manager [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Instance network_info: |[{"id": "01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb", "address": "fa:16:3e:da:f6:54", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01b0ba79-e7", "ovs_interfaceid": "01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.180 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Start _get_guest_xml network_info=[{"id": "01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb", "address": "fa:16:3e:da:f6:54", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01b0ba79-e7", "ovs_interfaceid": "01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.187 254096 WARNING nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.196 254096 DEBUG nova.virt.libvirt.host [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.197 254096 DEBUG nova.virt.libvirt.host [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.204 254096 DEBUG nova.virt.libvirt.host [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.204 254096 DEBUG nova.virt.libvirt.host [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.205 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.205 254096 DEBUG nova.virt.hardware [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.206 254096 DEBUG nova.virt.hardware [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.206 254096 DEBUG nova.virt.hardware [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.206 254096 DEBUG nova.virt.hardware [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.206 254096 DEBUG nova.virt.hardware [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.206 254096 DEBUG nova.virt.hardware [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.207 254096 DEBUG nova.virt.hardware [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.207 254096 DEBUG nova.virt.hardware [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.207 254096 DEBUG nova.virt.hardware [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.207 254096 DEBUG nova.virt.hardware [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.207 254096 DEBUG nova.virt.hardware [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.211 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:41:40 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/616757905' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.289 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.291 254096 DEBUG nova.virt.libvirt.vif [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:41:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-258940450',display_name='tempest-tempest.common.compute-instance-258940450-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-258940450-1',id=66,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='20e192e04e814ae28e09f3c1fdbd39d0',ramdisk_id='',reservation_id='r-ieom0say',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-193123640',owner_user_name='tempest-MultipleCreateTestJSON-193123640-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:41:31Z,user_data=None,user_id='03b851e958654346bc1caa437d7b76a5',uuid=42be369c-5a19-4073-becc-4f28ef579c2c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "43b8a38b-0b5a-4b7d-8043-759fa3697e8a", "address": "fa:16:3e:d3:af:21", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43b8a38b-0b", "ovs_interfaceid": "43b8a38b-0b5a-4b7d-8043-759fa3697e8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.292 254096 DEBUG nova.network.os_vif_util [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Converting VIF {"id": "43b8a38b-0b5a-4b7d-8043-759fa3697e8a", "address": "fa:16:3e:d3:af:21", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43b8a38b-0b", "ovs_interfaceid": "43b8a38b-0b5a-4b7d-8043-759fa3697e8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.293 254096 DEBUG nova.network.os_vif_util [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:af:21,bridge_name='br-int',has_traffic_filtering=True,id=43b8a38b-0b5a-4b7d-8043-759fa3697e8a,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43b8a38b-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.294 254096 DEBUG nova.objects.instance [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 42be369c-5a19-4073-becc-4f28ef579c2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.317 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:41:40 compute-0 nova_compute[254092]:   <uuid>42be369c-5a19-4073-becc-4f28ef579c2c</uuid>
Nov 25 16:41:40 compute-0 nova_compute[254092]:   <name>instance-00000042</name>
Nov 25 16:41:40 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:41:40 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:41:40 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:41:40 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:       <nova:name>tempest-tempest.common.compute-instance-258940450-1</nova:name>
Nov 25 16:41:40 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:41:39</nova:creationTime>
Nov 25 16:41:40 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:41:40 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:41:40 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:41:40 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:41:40 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:41:40 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:41:40 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:41:40 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:41:40 compute-0 nova_compute[254092]:         <nova:user uuid="03b851e958654346bc1caa437d7b76a5">tempest-MultipleCreateTestJSON-193123640-project-member</nova:user>
Nov 25 16:41:40 compute-0 nova_compute[254092]:         <nova:project uuid="20e192e04e814ae28e09f3c1fdbd39d0">tempest-MultipleCreateTestJSON-193123640</nova:project>
Nov 25 16:41:40 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:41:40 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:41:40 compute-0 nova_compute[254092]:         <nova:port uuid="43b8a38b-0b5a-4b7d-8043-759fa3697e8a">
Nov 25 16:41:40 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:41:40 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:41:40 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:41:40 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:41:40 compute-0 nova_compute[254092]:     <system>
Nov 25 16:41:40 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:41:40 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:41:40 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:41:40 compute-0 nova_compute[254092]:       <entry name="serial">42be369c-5a19-4073-becc-4f28ef579c2c</entry>
Nov 25 16:41:40 compute-0 nova_compute[254092]:       <entry name="uuid">42be369c-5a19-4073-becc-4f28ef579c2c</entry>
Nov 25 16:41:40 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     </system>
Nov 25 16:41:40 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:41:40 compute-0 nova_compute[254092]:   <os>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:   </os>
Nov 25 16:41:40 compute-0 nova_compute[254092]:   <features>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:   </features>
Nov 25 16:41:40 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:41:40 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:41:40 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:41:40 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:41:40 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:41:40 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/42be369c-5a19-4073-becc-4f28ef579c2c_disk">
Nov 25 16:41:40 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:       </source>
Nov 25 16:41:40 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:41:40 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:41:40 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:41:40 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/42be369c-5a19-4073-becc-4f28ef579c2c_disk.config">
Nov 25 16:41:40 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:       </source>
Nov 25 16:41:40 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:41:40 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:41:40 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:41:40 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:d3:af:21"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:       <target dev="tap43b8a38b-0b"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:41:40 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/42be369c-5a19-4073-becc-4f28ef579c2c/console.log" append="off"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     <video>
Nov 25 16:41:40 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     </video>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:41:40 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:41:40 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:41:40 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:41:40 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:41:40 compute-0 nova_compute[254092]: </domain>
Nov 25 16:41:40 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.317 254096 DEBUG nova.compute.manager [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Preparing to wait for external event network-vif-plugged-43b8a38b-0b5a-4b7d-8043-759fa3697e8a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.318 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "42be369c-5a19-4073-becc-4f28ef579c2c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.318 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "42be369c-5a19-4073-becc-4f28ef579c2c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.318 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "42be369c-5a19-4073-becc-4f28ef579c2c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.319 254096 DEBUG nova.virt.libvirt.vif [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:41:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-258940450',display_name='tempest-tempest.common.compute-instance-258940450-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-258940450-1',id=66,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='20e192e04e814ae28e09f3c1fdbd39d0',ramdisk_id='',reservation_id='r-ieom0say',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-193123640',owner_user_name='tempest-MultipleCreateTestJSON-193123640-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:41:31Z,user_data=None,user_id='03b851e958654346bc1caa437d7b76a5',uuid=42be369c-5a19-4073-becc-4f28ef579c2c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "43b8a38b-0b5a-4b7d-8043-759fa3697e8a", "address": "fa:16:3e:d3:af:21", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43b8a38b-0b", "ovs_interfaceid": "43b8a38b-0b5a-4b7d-8043-759fa3697e8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.319 254096 DEBUG nova.network.os_vif_util [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Converting VIF {"id": "43b8a38b-0b5a-4b7d-8043-759fa3697e8a", "address": "fa:16:3e:d3:af:21", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43b8a38b-0b", "ovs_interfaceid": "43b8a38b-0b5a-4b7d-8043-759fa3697e8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.320 254096 DEBUG nova.network.os_vif_util [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:af:21,bridge_name='br-int',has_traffic_filtering=True,id=43b8a38b-0b5a-4b7d-8043-759fa3697e8a,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43b8a38b-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.320 254096 DEBUG os_vif [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:af:21,bridge_name='br-int',has_traffic_filtering=True,id=43b8a38b-0b5a-4b7d-8043-759fa3697e8a,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43b8a38b-0b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.321 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.321 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.322 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.324 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.325 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43b8a38b-0b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.325 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap43b8a38b-0b, col_values=(('external_ids', {'iface-id': '43b8a38b-0b5a-4b7d-8043-759fa3697e8a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d3:af:21', 'vm-uuid': '42be369c-5a19-4073-becc-4f28ef579c2c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:41:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:41:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:41:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:41:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:41:40 compute-0 NetworkManager[48891]: <info>  [1764088900.3311] manager: (tap43b8a38b-0b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/267)
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.331 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.334 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.348 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.349 254096 INFO os_vif [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:af:21,bridge_name='br-int',has_traffic_filtering=True,id=43b8a38b-0b5a-4b7d-8043-759fa3697e8a,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43b8a38b-0b')
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.417 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.418 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.418 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] No VIF found with MAC fa:16:3e:d3:af:21, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.419 254096 INFO nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Using config drive
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.440 254096 DEBUG nova.storage.rbd_utils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image 42be369c-5a19-4073-becc-4f28ef579c2c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:41:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:41:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:41:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:41:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:41:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:41:40 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/734604155' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.688 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.707 254096 DEBUG nova.storage.rbd_utils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image 679f5e87-bac4-4169-bffa-555a53e7321f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.714 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]: {
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:     "0": [
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:         {
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:             "devices": [
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:                 "/dev/loop3"
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:             ],
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:             "lv_name": "ceph_lv0",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:             "lv_size": "21470642176",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:             "name": "ceph_lv0",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:             "tags": {
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:                 "ceph.cluster_name": "ceph",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:                 "ceph.crush_device_class": "",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:                 "ceph.encrypted": "0",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:                 "ceph.osd_id": "0",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:                 "ceph.type": "block",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:                 "ceph.vdo": "0"
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:             },
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:             "type": "block",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:             "vg_name": "ceph_vg0"
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:         }
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:     ],
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:     "1": [
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:         {
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:             "devices": [
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:                 "/dev/loop4"
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:             ],
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:             "lv_name": "ceph_lv1",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:             "lv_size": "21470642176",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:             "name": "ceph_lv1",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:             "tags": {
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:                 "ceph.cluster_name": "ceph",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:                 "ceph.crush_device_class": "",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:                 "ceph.encrypted": "0",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:                 "ceph.osd_id": "1",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:                 "ceph.type": "block",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:                 "ceph.vdo": "0"
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:             },
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:             "type": "block",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:             "vg_name": "ceph_vg1"
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:         }
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:     ],
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:     "2": [
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:         {
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:             "devices": [
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:                 "/dev/loop5"
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:             ],
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:             "lv_name": "ceph_lv2",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:             "lv_size": "21470642176",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:             "name": "ceph_lv2",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:             "tags": {
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:                 "ceph.cluster_name": "ceph",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:                 "ceph.crush_device_class": "",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:                 "ceph.encrypted": "0",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:                 "ceph.osd_id": "2",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:                 "ceph.type": "block",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:                 "ceph.vdo": "0"
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:             },
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:             "type": "block",
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:             "vg_name": "ceph_vg2"
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:         }
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]:     ]
Nov 25 16:41:40 compute-0 relaxed_yalow[325596]: }
Nov 25 16:41:40 compute-0 systemd[1]: libpod-67652e76246d7a6f4216add59e45adf41106954d2463dc5c7d984ae352c467fd.scope: Deactivated successfully.
Nov 25 16:41:40 compute-0 conmon[325596]: conmon 67652e76246d7a6f4216 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-67652e76246d7a6f4216add59e45adf41106954d2463dc5c7d984ae352c467fd.scope/container/memory.events
Nov 25 16:41:40 compute-0 podman[325688]: 2025-11-25 16:41:40.869909942 +0000 UTC m=+0.032569245 container died 67652e76246d7a6f4216add59e45adf41106954d2463dc5c7d984ae352c467fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_yalow, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.915 254096 DEBUG nova.compute.manager [req-be17c287-9c0d-452b-90bc-103cec4ae829 req-086f3cc0-9356-4ba0-942e-9522b046715c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Received event network-changed-01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.915 254096 DEBUG nova.compute.manager [req-be17c287-9c0d-452b-90bc-103cec4ae829 req-086f3cc0-9356-4ba0-942e-9522b046715c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Refreshing instance network info cache due to event network-changed-01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.915 254096 DEBUG oslo_concurrency.lockutils [req-be17c287-9c0d-452b-90bc-103cec4ae829 req-086f3cc0-9356-4ba0-942e-9522b046715c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-679f5e87-bac4-4169-bffa-555a53e7321f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.916 254096 DEBUG oslo_concurrency.lockutils [req-be17c287-9c0d-452b-90bc-103cec4ae829 req-086f3cc0-9356-4ba0-942e-9522b046715c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-679f5e87-bac4-4169-bffa-555a53e7321f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:41:40 compute-0 nova_compute[254092]: 2025-11-25 16:41:40.916 254096 DEBUG nova.network.neutron [req-be17c287-9c0d-452b-90bc-103cec4ae829 req-086f3cc0-9356-4ba0-942e-9522b046715c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Refreshing network info cache for port 01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:41:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-98cc19b8cedd1dedab6d2f4483603760edf9f2c118b7e9746c268b78c2a3866a-merged.mount: Deactivated successfully.
Nov 25 16:41:40 compute-0 podman[325688]: 2025-11-25 16:41:40.964706504 +0000 UTC m=+0.127365777 container remove 67652e76246d7a6f4216add59e45adf41106954d2463dc5c7d984ae352c467fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_yalow, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 16:41:40 compute-0 systemd[1]: libpod-conmon-67652e76246d7a6f4216add59e45adf41106954d2463dc5c7d984ae352c467fd.scope: Deactivated successfully.
Nov 25 16:41:41 compute-0 sudo[325432]: pam_unix(sudo:session): session closed for user root
Nov 25 16:41:41 compute-0 sudo[325721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:41:41 compute-0 sudo[325721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:41:41 compute-0 sudo[325721]: pam_unix(sudo:session): session closed for user root
Nov 25 16:41:41 compute-0 sudo[325746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:41:41 compute-0 sudo[325746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:41:41 compute-0 sudo[325746]: pam_unix(sudo:session): session closed for user root
Nov 25 16:41:41 compute-0 nova_compute[254092]: 2025-11-25 16:41:41.117 254096 INFO nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Creating config drive at /var/lib/nova/instances/42be369c-5a19-4073-becc-4f28ef579c2c/disk.config
Nov 25 16:41:41 compute-0 nova_compute[254092]: 2025-11-25 16:41:41.123 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/42be369c-5a19-4073-becc-4f28ef579c2c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6fsf9bt5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:41 compute-0 ceph-mon[74985]: pgmap v1688: 321 pgs: 321 active+clean; 320 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 5.3 MiB/s wr, 284 op/s
Nov 25 16:41:41 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/616757905' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:41:41 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/734604155' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:41:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:41:41 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1522512571' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:41:41 compute-0 sudo[325771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:41:41 compute-0 sudo[325771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:41:41 compute-0 sudo[325771]: pam_unix(sudo:session): session closed for user root
Nov 25 16:41:41 compute-0 nova_compute[254092]: 2025-11-25 16:41:41.180 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:41 compute-0 nova_compute[254092]: 2025-11-25 16:41:41.182 254096 DEBUG nova.virt.libvirt.vif [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:41:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-258940450',display_name='tempest-tempest.common.compute-instance-258940450-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-258940450-2',id=67,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='20e192e04e814ae28e09f3c1fdbd39d0',ramdisk_id='',reservation_id='r-ieom0say',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-193123640',owner_user_name='tempest-MultipleCreateTestJSON-193123640-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:41:32Z,user_data=None,user_id='03b851e958654346bc1caa437d7b76a5',uuid=679f5e87-bac4-4169-bffa-555a53e7321f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb", "address": "fa:16:3e:da:f6:54", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01b0ba79-e7", "ovs_interfaceid": "01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:41:41 compute-0 nova_compute[254092]: 2025-11-25 16:41:41.183 254096 DEBUG nova.network.os_vif_util [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Converting VIF {"id": "01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb", "address": "fa:16:3e:da:f6:54", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01b0ba79-e7", "ovs_interfaceid": "01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:41:41 compute-0 nova_compute[254092]: 2025-11-25 16:41:41.184 254096 DEBUG nova.network.os_vif_util [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:f6:54,bridge_name='br-int',has_traffic_filtering=True,id=01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01b0ba79-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:41:41 compute-0 nova_compute[254092]: 2025-11-25 16:41:41.185 254096 DEBUG nova.objects.instance [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 679f5e87-bac4-4169-bffa-555a53e7321f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:41:41 compute-0 nova_compute[254092]: 2025-11-25 16:41:41.207 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:41:41 compute-0 nova_compute[254092]:   <uuid>679f5e87-bac4-4169-bffa-555a53e7321f</uuid>
Nov 25 16:41:41 compute-0 nova_compute[254092]:   <name>instance-00000043</name>
Nov 25 16:41:41 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:41:41 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:41:41 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:41:41 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:       <nova:name>tempest-tempest.common.compute-instance-258940450-2</nova:name>
Nov 25 16:41:41 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:41:40</nova:creationTime>
Nov 25 16:41:41 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:41:41 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:41:41 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:41:41 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:41:41 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:41:41 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:41:41 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:41:41 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:41:41 compute-0 nova_compute[254092]:         <nova:user uuid="03b851e958654346bc1caa437d7b76a5">tempest-MultipleCreateTestJSON-193123640-project-member</nova:user>
Nov 25 16:41:41 compute-0 nova_compute[254092]:         <nova:project uuid="20e192e04e814ae28e09f3c1fdbd39d0">tempest-MultipleCreateTestJSON-193123640</nova:project>
Nov 25 16:41:41 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:41:41 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:41:41 compute-0 nova_compute[254092]:         <nova:port uuid="01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb">
Nov 25 16:41:41 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:41:41 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:41:41 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:41:41 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:41:41 compute-0 nova_compute[254092]:     <system>
Nov 25 16:41:41 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:41:41 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:41:41 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:41:41 compute-0 nova_compute[254092]:       <entry name="serial">679f5e87-bac4-4169-bffa-555a53e7321f</entry>
Nov 25 16:41:41 compute-0 nova_compute[254092]:       <entry name="uuid">679f5e87-bac4-4169-bffa-555a53e7321f</entry>
Nov 25 16:41:41 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     </system>
Nov 25 16:41:41 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:41:41 compute-0 nova_compute[254092]:   <os>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:   </os>
Nov 25 16:41:41 compute-0 nova_compute[254092]:   <features>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:   </features>
Nov 25 16:41:41 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:41:41 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:41:41 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:41:41 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:41:41 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:41:41 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/679f5e87-bac4-4169-bffa-555a53e7321f_disk">
Nov 25 16:41:41 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:       </source>
Nov 25 16:41:41 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:41:41 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:41:41 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:41:41 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/679f5e87-bac4-4169-bffa-555a53e7321f_disk.config">
Nov 25 16:41:41 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:       </source>
Nov 25 16:41:41 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:41:41 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:41:41 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:41:41 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:da:f6:54"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:       <target dev="tap01b0ba79-e7"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:41:41 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/679f5e87-bac4-4169-bffa-555a53e7321f/console.log" append="off"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     <video>
Nov 25 16:41:41 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     </video>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:41:41 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:41:41 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:41:41 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:41:41 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:41:41 compute-0 nova_compute[254092]: </domain>
Nov 25 16:41:41 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:41:41 compute-0 nova_compute[254092]: 2025-11-25 16:41:41.208 254096 DEBUG nova.compute.manager [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Preparing to wait for external event network-vif-plugged-01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:41:41 compute-0 nova_compute[254092]: 2025-11-25 16:41:41.209 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "679f5e87-bac4-4169-bffa-555a53e7321f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:41 compute-0 nova_compute[254092]: 2025-11-25 16:41:41.209 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "679f5e87-bac4-4169-bffa-555a53e7321f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:41 compute-0 nova_compute[254092]: 2025-11-25 16:41:41.209 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "679f5e87-bac4-4169-bffa-555a53e7321f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:41 compute-0 nova_compute[254092]: 2025-11-25 16:41:41.210 254096 DEBUG nova.virt.libvirt.vif [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:41:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-258940450',display_name='tempest-tempest.common.compute-instance-258940450-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-258940450-2',id=67,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='20e192e04e814ae28e09f3c1fdbd39d0',ramdisk_id='',reservation_id='r-ieom0say',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-193123640',owner_user_name='tempest-MultipleCreateTestJSON-193123640-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:41:32Z,user_data=None,user_id='03b851e958654346bc1caa437d7b76a5',uuid=679f5e87-bac4-4169-bffa-555a53e7321f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb", "address": "fa:16:3e:da:f6:54", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01b0ba79-e7", "ovs_interfaceid": "01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:41:41 compute-0 nova_compute[254092]: 2025-11-25 16:41:41.210 254096 DEBUG nova.network.os_vif_util [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Converting VIF {"id": "01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb", "address": "fa:16:3e:da:f6:54", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01b0ba79-e7", "ovs_interfaceid": "01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:41:41 compute-0 nova_compute[254092]: 2025-11-25 16:41:41.211 254096 DEBUG nova.network.os_vif_util [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:f6:54,bridge_name='br-int',has_traffic_filtering=True,id=01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01b0ba79-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:41:41 compute-0 nova_compute[254092]: 2025-11-25 16:41:41.211 254096 DEBUG os_vif [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:f6:54,bridge_name='br-int',has_traffic_filtering=True,id=01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01b0ba79-e7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:41:41 compute-0 nova_compute[254092]: 2025-11-25 16:41:41.212 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:41 compute-0 nova_compute[254092]: 2025-11-25 16:41:41.212 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:41 compute-0 nova_compute[254092]: 2025-11-25 16:41:41.212 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:41:41 compute-0 nova_compute[254092]: 2025-11-25 16:41:41.215 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:41 compute-0 nova_compute[254092]: 2025-11-25 16:41:41.215 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap01b0ba79-e7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:41 compute-0 nova_compute[254092]: 2025-11-25 16:41:41.215 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap01b0ba79-e7, col_values=(('external_ids', {'iface-id': '01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:da:f6:54', 'vm-uuid': '679f5e87-bac4-4169-bffa-555a53e7321f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:41 compute-0 nova_compute[254092]: 2025-11-25 16:41:41.217 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:41 compute-0 NetworkManager[48891]: <info>  [1764088901.2180] manager: (tap01b0ba79-e7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/268)
Nov 25 16:41:41 compute-0 nova_compute[254092]: 2025-11-25 16:41:41.220 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:41:41 compute-0 nova_compute[254092]: 2025-11-25 16:41:41.224 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:41 compute-0 nova_compute[254092]: 2025-11-25 16:41:41.225 254096 INFO os_vif [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:f6:54,bridge_name='br-int',has_traffic_filtering=True,id=01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01b0ba79-e7')
Nov 25 16:41:41 compute-0 sudo[325801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 16:41:41 compute-0 sudo[325801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:41:41 compute-0 nova_compute[254092]: 2025-11-25 16:41:41.259 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/42be369c-5a19-4073-becc-4f28ef579c2c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6fsf9bt5" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:41 compute-0 nova_compute[254092]: 2025-11-25 16:41:41.287 254096 DEBUG nova.storage.rbd_utils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image 42be369c-5a19-4073-becc-4f28ef579c2c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:41 compute-0 nova_compute[254092]: 2025-11-25 16:41:41.290 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/42be369c-5a19-4073-becc-4f28ef579c2c/disk.config 42be369c-5a19-4073-becc-4f28ef579c2c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:41 compute-0 nova_compute[254092]: 2025-11-25 16:41:41.329 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:41:41 compute-0 nova_compute[254092]: 2025-11-25 16:41:41.330 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:41:41 compute-0 nova_compute[254092]: 2025-11-25 16:41:41.330 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] No VIF found with MAC fa:16:3e:da:f6:54, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:41:41 compute-0 nova_compute[254092]: 2025-11-25 16:41:41.330 254096 INFO nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Using config drive
Nov 25 16:41:41 compute-0 nova_compute[254092]: 2025-11-25 16:41:41.350 254096 DEBUG nova.storage.rbd_utils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image 679f5e87-bac4-4169-bffa-555a53e7321f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:41 compute-0 nova_compute[254092]: 2025-11-25 16:41:41.486 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/42be369c-5a19-4073-becc-4f28ef579c2c/disk.config 42be369c-5a19-4073-becc-4f28ef579c2c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:41 compute-0 nova_compute[254092]: 2025-11-25 16:41:41.487 254096 INFO nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Deleting local config drive /var/lib/nova/instances/42be369c-5a19-4073-becc-4f28ef579c2c/disk.config because it was imported into RBD.
Nov 25 16:41:41 compute-0 kernel: tap43b8a38b-0b: entered promiscuous mode
Nov 25 16:41:41 compute-0 NetworkManager[48891]: <info>  [1764088901.5459] manager: (tap43b8a38b-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/269)
Nov 25 16:41:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1689: 321 pgs: 321 active+clean; 320 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 5.4 MiB/s wr, 302 op/s
Nov 25 16:41:41 compute-0 nova_compute[254092]: 2025-11-25 16:41:41.562 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:41 compute-0 ovn_controller[153477]: 2025-11-25T16:41:41Z|00607|binding|INFO|Claiming lport 43b8a38b-0b5a-4b7d-8043-759fa3697e8a for this chassis.
Nov 25 16:41:41 compute-0 ovn_controller[153477]: 2025-11-25T16:41:41Z|00608|binding|INFO|43b8a38b-0b5a-4b7d-8043-759fa3697e8a: Claiming fa:16:3e:d3:af:21 10.100.0.10
Nov 25 16:41:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:41.586 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:af:21 10.100.0.10'], port_security=['fa:16:3e:d3:af:21 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '42be369c-5a19-4073-becc-4f28ef579c2c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f66413c8-5cde-4f70-af70-6b7886c1219f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '20e192e04e814ae28e09f3c1fdbd39d0', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd45b4fe4-a7ee-460c-87e5-fc7e0e415a7e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82931fa4-5f59-456f-908b-c16b2ecf859e, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=43b8a38b-0b5a-4b7d-8043-759fa3697e8a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:41:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:41.587 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 43b8a38b-0b5a-4b7d-8043-759fa3697e8a in datapath f66413c8-5cde-4f70-af70-6b7886c1219f bound to our chassis
Nov 25 16:41:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:41.588 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f66413c8-5cde-4f70-af70-6b7886c1219f
Nov 25 16:41:41 compute-0 systemd-udevd[325940]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:41:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:41.599 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[80b048f7-380f-4051-a3f7-f1b3668fa8e2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:41.600 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf66413c8-51 in ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:41:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:41.603 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf66413c8-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:41:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:41.604 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[34ad5900-fd8e-41b0-9c62-5d32410bff19]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:41.604 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[814cacc1-ffd3-4320-973b-ae098f07e202]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:41 compute-0 NetworkManager[48891]: <info>  [1764088901.6074] device (tap43b8a38b-0b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:41:41 compute-0 NetworkManager[48891]: <info>  [1764088901.6084] device (tap43b8a38b-0b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:41:41 compute-0 systemd-machined[216343]: New machine qemu-77-instance-00000042.
Nov 25 16:41:41 compute-0 podman[325925]: 2025-11-25 16:41:41.614286556 +0000 UTC m=+0.061331164 container create 6c8e752c3d11c972782817e36f00b5ec36a03f91dc1a1ddc63037b32d35856c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 16:41:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:41.615 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[823c3ef7-6bd1-4a67-8f31-b72717b2469e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:41 compute-0 systemd[1]: Started Virtual Machine qemu-77-instance-00000042.
Nov 25 16:41:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:41.641 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6e885240-96c8-42b4-8a8b-ef851dda0bf5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:41 compute-0 systemd[1]: Started libpod-conmon-6c8e752c3d11c972782817e36f00b5ec36a03f91dc1a1ddc63037b32d35856c5.scope.
Nov 25 16:41:41 compute-0 ovn_controller[153477]: 2025-11-25T16:41:41Z|00609|binding|INFO|Setting lport 43b8a38b-0b5a-4b7d-8043-759fa3697e8a ovn-installed in OVS
Nov 25 16:41:41 compute-0 ovn_controller[153477]: 2025-11-25T16:41:41Z|00610|binding|INFO|Setting lport 43b8a38b-0b5a-4b7d-8043-759fa3697e8a up in Southbound
Nov 25 16:41:41 compute-0 nova_compute[254092]: 2025-11-25 16:41:41.654 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:41 compute-0 nova_compute[254092]: 2025-11-25 16:41:41.657 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:41 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:41:41 compute-0 podman[325925]: 2025-11-25 16:41:41.583135961 +0000 UTC m=+0.030180579 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:41:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:41.680 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[0d2e6044-390c-4b17-802d-39c09baae9d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:41 compute-0 systemd-udevd[325946]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:41:41 compute-0 NetworkManager[48891]: <info>  [1764088901.6920] manager: (tapf66413c8-50): new Veth device (/org/freedesktop/NetworkManager/Devices/270)
Nov 25 16:41:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:41.689 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[49e462e0-ec91-4fe0-a3e2-6aee87b66e6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:41 compute-0 podman[325925]: 2025-11-25 16:41:41.711185295 +0000 UTC m=+0.158229913 container init 6c8e752c3d11c972782817e36f00b5ec36a03f91dc1a1ddc63037b32d35856c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:41:41 compute-0 podman[325925]: 2025-11-25 16:41:41.724785174 +0000 UTC m=+0.171829772 container start 6c8e752c3d11c972782817e36f00b5ec36a03f91dc1a1ddc63037b32d35856c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_newton, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 16:41:41 compute-0 podman[325925]: 2025-11-25 16:41:41.728303129 +0000 UTC m=+0.175347717 container attach 6c8e752c3d11c972782817e36f00b5ec36a03f91dc1a1ddc63037b32d35856c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:41:41 compute-0 vigilant_newton[325955]: 167 167
Nov 25 16:41:41 compute-0 systemd[1]: libpod-6c8e752c3d11c972782817e36f00b5ec36a03f91dc1a1ddc63037b32d35856c5.scope: Deactivated successfully.
Nov 25 16:41:41 compute-0 podman[325925]: 2025-11-25 16:41:41.732424191 +0000 UTC m=+0.179468789 container died 6c8e752c3d11c972782817e36f00b5ec36a03f91dc1a1ddc63037b32d35856c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:41:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:41.740 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[2f5414a6-480b-48c0-b334-f6f2f6c5b3ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:41.744 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[96b8f2c1-16da-4e9c-ac38-4a45b1defbdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec0f19aa4ce3910aa6a4c3434eebe4245f1cbc1562e5de76bd7584f806c2cb70-merged.mount: Deactivated successfully.
Nov 25 16:41:41 compute-0 NetworkManager[48891]: <info>  [1764088901.7768] device (tapf66413c8-50): carrier: link connected
Nov 25 16:41:41 compute-0 podman[325925]: 2025-11-25 16:41:41.779073576 +0000 UTC m=+0.226118174 container remove 6c8e752c3d11c972782817e36f00b5ec36a03f91dc1a1ddc63037b32d35856c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_newton, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 16:41:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:41.784 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f899a85b-bdf6-4ce6-b545-68c810fa1055]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:41.812 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[50db7d6a-6daa-4910-9530-beb872984019]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf66413c8-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:af:00:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 176], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535936, 'reachable_time': 16135, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326001, 'error': None, 'target': 'ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:41 compute-0 systemd[1]: libpod-conmon-6c8e752c3d11c972782817e36f00b5ec36a03f91dc1a1ddc63037b32d35856c5.scope: Deactivated successfully.
Nov 25 16:41:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:41.842 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e549a84f-b3f3-467e-a77d-bfca89166e11]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feaf:f0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535936, 'tstamp': 535936}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326003, 'error': None, 'target': 'ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:41.869 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[17b1d668-c8f1-4624-baf9-564ead40b257]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf66413c8-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:af:00:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 176], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535936, 'reachable_time': 16135, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 326004, 'error': None, 'target': 'ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:41 compute-0 nova_compute[254092]: 2025-11-25 16:41:41.900 254096 DEBUG nova.network.neutron [req-ccda2a2d-b17a-4285-af63-f79357f755df req-bc77424b-e480-4715-a8e0-a613ddbf1d0a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Updated VIF entry in instance network info cache for port 43b8a38b-0b5a-4b7d-8043-759fa3697e8a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:41:41 compute-0 nova_compute[254092]: 2025-11-25 16:41:41.901 254096 DEBUG nova.network.neutron [req-ccda2a2d-b17a-4285-af63-f79357f755df req-bc77424b-e480-4715-a8e0-a613ddbf1d0a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Updating instance_info_cache with network_info: [{"id": "43b8a38b-0b5a-4b7d-8043-759fa3697e8a", "address": "fa:16:3e:d3:af:21", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43b8a38b-0b", "ovs_interfaceid": "43b8a38b-0b5a-4b7d-8043-759fa3697e8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:41:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:41.919 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8938a777-61cf-4477-8c59-a660d0b01a41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:41 compute-0 nova_compute[254092]: 2025-11-25 16:41:41.923 254096 DEBUG oslo_concurrency.lockutils [req-ccda2a2d-b17a-4285-af63-f79357f755df req-bc77424b-e480-4715-a8e0-a613ddbf1d0a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-42be369c-5a19-4073-becc-4f28ef579c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:41:42 compute-0 nova_compute[254092]: 2025-11-25 16:41:42.022 254096 INFO nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Creating config drive at /var/lib/nova/instances/679f5e87-bac4-4169-bffa-555a53e7321f/disk.config
Nov 25 16:41:42 compute-0 nova_compute[254092]: 2025-11-25 16:41:42.029 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/679f5e87-bac4-4169-bffa-555a53e7321f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp36q8xz_q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:42.022 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9cf36cf2-4b25-4975-83fe-8e4e49df1b0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:42.035 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf66413c8-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:42.038 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:42.039 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf66413c8-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:42 compute-0 NetworkManager[48891]: <info>  [1764088902.0415] manager: (tapf66413c8-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/271)
Nov 25 16:41:42 compute-0 kernel: tapf66413c8-50: entered promiscuous mode
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:42.051 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf66413c8-50, col_values=(('external_ids', {'iface-id': '347c541f-24a8-4230-9881-74343160f6c8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:42 compute-0 ovn_controller[153477]: 2025-11-25T16:41:42Z|00611|binding|INFO|Releasing lport 347c541f-24a8-4230-9881-74343160f6c8 from this chassis (sb_readonly=0)
Nov 25 16:41:42 compute-0 podman[326013]: 2025-11-25 16:41:42.059018831 +0000 UTC m=+0.110861808 container create 232617e21c76680bc5b891308a5bfd2dc0dfb9ccb9afa04111584d0448f778d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:41:42 compute-0 podman[326013]: 2025-11-25 16:41:41.975789584 +0000 UTC m=+0.027632601 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:41:42 compute-0 nova_compute[254092]: 2025-11-25 16:41:42.077 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:42.092 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f66413c8-5cde-4f70-af70-6b7886c1219f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f66413c8-5cde-4f70-af70-6b7886c1219f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:42.098 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f4d5d1b1-12c3-42e3-a1fa-0df60d5f0249]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:42.098 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-f66413c8-5cde-4f70-af70-6b7886c1219f
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/f66413c8-5cde-4f70-af70-6b7886c1219f.pid.haproxy
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID f66413c8-5cde-4f70-af70-6b7886c1219f
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:42.099 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f', 'env', 'PROCESS_TAG=haproxy-f66413c8-5cde-4f70-af70-6b7886c1219f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f66413c8-5cde-4f70-af70-6b7886c1219f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:41:42 compute-0 systemd[1]: Started libpod-conmon-232617e21c76680bc5b891308a5bfd2dc0dfb9ccb9afa04111584d0448f778d1.scope.
Nov 25 16:41:42 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1522512571' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:41:42 compute-0 ceph-mon[74985]: pgmap v1689: 321 pgs: 321 active+clean; 320 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 5.4 MiB/s wr, 302 op/s
Nov 25 16:41:42 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:41:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5ce1b7a48a43784b10ac824df71803582f20bd105763898717981f31fab813b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:41:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5ce1b7a48a43784b10ac824df71803582f20bd105763898717981f31fab813b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:41:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5ce1b7a48a43784b10ac824df71803582f20bd105763898717981f31fab813b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:41:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5ce1b7a48a43784b10ac824df71803582f20bd105763898717981f31fab813b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:41:42 compute-0 nova_compute[254092]: 2025-11-25 16:41:42.185 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/679f5e87-bac4-4169-bffa-555a53e7321f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp36q8xz_q" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:42 compute-0 podman[326013]: 2025-11-25 16:41:42.199582434 +0000 UTC m=+0.251425441 container init 232617e21c76680bc5b891308a5bfd2dc0dfb9ccb9afa04111584d0448f778d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_knuth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Nov 25 16:41:42 compute-0 podman[326013]: 2025-11-25 16:41:42.212518176 +0000 UTC m=+0.264361163 container start 232617e21c76680bc5b891308a5bfd2dc0dfb9ccb9afa04111584d0448f778d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_knuth, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:41:42 compute-0 nova_compute[254092]: 2025-11-25 16:41:42.227 254096 DEBUG nova.storage.rbd_utils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image 679f5e87-bac4-4169-bffa-555a53e7321f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:42 compute-0 podman[326013]: 2025-11-25 16:41:42.241083601 +0000 UTC m=+0.292926608 container attach 232617e21c76680bc5b891308a5bfd2dc0dfb9ccb9afa04111584d0448f778d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:41:42 compute-0 nova_compute[254092]: 2025-11-25 16:41:42.241 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/679f5e87-bac4-4169-bffa-555a53e7321f/disk.config 679f5e87-bac4-4169-bffa-555a53e7321f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:42 compute-0 nova_compute[254092]: 2025-11-25 16:41:42.280 254096 DEBUG oslo_concurrency.lockutils [None req-30011fdc-ab45-40fe-bd9f-eb5262b71275 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:42 compute-0 nova_compute[254092]: 2025-11-25 16:41:42.281 254096 DEBUG oslo_concurrency.lockutils [None req-30011fdc-ab45-40fe-bd9f-eb5262b71275 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:42 compute-0 nova_compute[254092]: 2025-11-25 16:41:42.281 254096 DEBUG nova.compute.manager [None req-30011fdc-ab45-40fe-bd9f-eb5262b71275 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:41:42 compute-0 nova_compute[254092]: 2025-11-25 16:41:42.294 254096 DEBUG nova.compute.manager [None req-30011fdc-ab45-40fe-bd9f-eb5262b71275 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Nov 25 16:41:42 compute-0 nova_compute[254092]: 2025-11-25 16:41:42.296 254096 DEBUG nova.objects.instance [None req-30011fdc-ab45-40fe-bd9f-eb5262b71275 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lazy-loading 'flavor' on Instance uuid 013dc18e-57cd-4733-8e98-7d20e3b5c4db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:41:42 compute-0 nova_compute[254092]: 2025-11-25 16:41:42.356 254096 DEBUG nova.virt.libvirt.driver [None req-30011fdc-ab45-40fe-bd9f-eb5262b71275 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 16:41:42 compute-0 nova_compute[254092]: 2025-11-25 16:41:42.432 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088902.4318604, 42be369c-5a19-4073-becc-4f28ef579c2c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:41:42 compute-0 nova_compute[254092]: 2025-11-25 16:41:42.432 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] VM Started (Lifecycle Event)
Nov 25 16:41:42 compute-0 nova_compute[254092]: 2025-11-25 16:41:42.435 254096 DEBUG oslo_concurrency.processutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/679f5e87-bac4-4169-bffa-555a53e7321f/disk.config 679f5e87-bac4-4169-bffa-555a53e7321f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:42 compute-0 nova_compute[254092]: 2025-11-25 16:41:42.435 254096 INFO nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Deleting local config drive /var/lib/nova/instances/679f5e87-bac4-4169-bffa-555a53e7321f/disk.config because it was imported into RBD.
Nov 25 16:41:42 compute-0 nova_compute[254092]: 2025-11-25 16:41:42.453 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:41:42 compute-0 nova_compute[254092]: 2025-11-25 16:41:42.464 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088902.4319508, 42be369c-5a19-4073-becc-4f28ef579c2c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:41:42 compute-0 nova_compute[254092]: 2025-11-25 16:41:42.465 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] VM Paused (Lifecycle Event)
Nov 25 16:41:42 compute-0 nova_compute[254092]: 2025-11-25 16:41:42.485 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:41:42 compute-0 nova_compute[254092]: 2025-11-25 16:41:42.490 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:41:42 compute-0 kernel: tap01b0ba79-e7: entered promiscuous mode
Nov 25 16:41:42 compute-0 ovn_controller[153477]: 2025-11-25T16:41:42Z|00612|binding|INFO|Claiming lport 01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb for this chassis.
Nov 25 16:41:42 compute-0 systemd-udevd[325969]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:41:42 compute-0 nova_compute[254092]: 2025-11-25 16:41:42.493 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:42 compute-0 ovn_controller[153477]: 2025-11-25T16:41:42Z|00613|binding|INFO|01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb: Claiming fa:16:3e:da:f6:54 10.100.0.4
Nov 25 16:41:42 compute-0 NetworkManager[48891]: <info>  [1764088902.4965] manager: (tap01b0ba79-e7): new Tun device (/org/freedesktop/NetworkManager/Devices/272)
Nov 25 16:41:42 compute-0 NetworkManager[48891]: <info>  [1764088902.5067] device (tap01b0ba79-e7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:41:42 compute-0 NetworkManager[48891]: <info>  [1764088902.5073] device (tap01b0ba79-e7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:42.509 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:f6:54 10.100.0.4'], port_security=['fa:16:3e:da:f6:54 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '679f5e87-bac4-4169-bffa-555a53e7321f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f66413c8-5cde-4f70-af70-6b7886c1219f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '20e192e04e814ae28e09f3c1fdbd39d0', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd45b4fe4-a7ee-460c-87e5-fc7e0e415a7e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82931fa4-5f59-456f-908b-c16b2ecf859e, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:41:42 compute-0 nova_compute[254092]: 2025-11-25 16:41:42.513 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:41:42 compute-0 ovn_controller[153477]: 2025-11-25T16:41:42Z|00614|binding|INFO|Setting lport 01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb ovn-installed in OVS
Nov 25 16:41:42 compute-0 nova_compute[254092]: 2025-11-25 16:41:42.523 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:42 compute-0 ovn_controller[153477]: 2025-11-25T16:41:42Z|00615|binding|INFO|Setting lport 01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb up in Southbound
Nov 25 16:41:42 compute-0 nova_compute[254092]: 2025-11-25 16:41:42.524 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:42 compute-0 systemd-machined[216343]: New machine qemu-78-instance-00000043.
Nov 25 16:41:42 compute-0 systemd[1]: Started Virtual Machine qemu-78-instance-00000043.
Nov 25 16:41:42 compute-0 podman[326150]: 2025-11-25 16:41:42.551600855 +0000 UTC m=+0.078671885 container create a9b638cfe306cd9b174a0a018b0446485c6648195127bbc42fd09bbfe56e5fce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:41:42 compute-0 podman[326150]: 2025-11-25 16:41:42.514967581 +0000 UTC m=+0.042038631 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:41:42 compute-0 systemd[1]: Started libpod-conmon-a9b638cfe306cd9b174a0a018b0446485c6648195127bbc42fd09bbfe56e5fce.scope.
Nov 25 16:41:42 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:41:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b9347ecdd347217bc82bfdd8002e184ed71c674b5a74b2d55a33c2bf2b613a2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:41:42 compute-0 podman[326150]: 2025-11-25 16:41:42.668541438 +0000 UTC m=+0.195612498 container init a9b638cfe306cd9b174a0a018b0446485c6648195127bbc42fd09bbfe56e5fce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 16:41:42 compute-0 podman[326150]: 2025-11-25 16:41:42.67672506 +0000 UTC m=+0.203796090 container start a9b638cfe306cd9b174a0a018b0446485c6648195127bbc42fd09bbfe56e5fce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:41:42 compute-0 neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f[326180]: [NOTICE]   (326185) : New worker (326187) forked
Nov 25 16:41:42 compute-0 neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f[326180]: [NOTICE]   (326185) : Loading success.
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:42.732 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb in datapath f66413c8-5cde-4f70-af70-6b7886c1219f unbound from our chassis
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:42.734 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f66413c8-5cde-4f70-af70-6b7886c1219f
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:42.750 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[211299be-1204-4825-beff-5100701836de]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:42.779 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[653f5974-aadc-48fb-af2f-2e65bb81d869]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:42.783 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ea34ec01-1097-4c9a-8c12-f9a7e5d9f9d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:42.809 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9242c1d6-0fac-4fbe-9812-e56208cbd6d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:42.831 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[feac263e-96bf-4a61-9048-51a6cbb03518]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf66413c8-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:af:00:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 4, 'rx_bytes': 266, 'tx_bytes': 264, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 4, 'rx_bytes': 266, 'tx_bytes': 264, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 176], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535936, 'reachable_time': 16135, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326201, 'error': None, 'target': 'ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:42.847 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b01e0d3d-1345-4240-8707-e872e7f3f892]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf66413c8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535955, 'tstamp': 535955}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326202, 'error': None, 'target': 'ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf66413c8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535960, 'tstamp': 535960}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326202, 'error': None, 'target': 'ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:42.848 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf66413c8-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:42.851 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf66413c8-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:42 compute-0 nova_compute[254092]: 2025-11-25 16:41:42.850 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:42 compute-0 nova_compute[254092]: 2025-11-25 16:41:42.851 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:42.851 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:42.852 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf66413c8-50, col_values=(('external_ids', {'iface-id': '347c541f-24a8-4230-9881-74343160f6c8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:42.852 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:41:42 compute-0 nova_compute[254092]: 2025-11-25 16:41:42.980 254096 DEBUG nova.network.neutron [req-be17c287-9c0d-452b-90bc-103cec4ae829 req-086f3cc0-9356-4ba0-942e-9522b046715c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Updated VIF entry in instance network info cache for port 01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:41:42 compute-0 nova_compute[254092]: 2025-11-25 16:41:42.980 254096 DEBUG nova.network.neutron [req-be17c287-9c0d-452b-90bc-103cec4ae829 req-086f3cc0-9356-4ba0-942e-9522b046715c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Updating instance_info_cache with network_info: [{"id": "01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb", "address": "fa:16:3e:da:f6:54", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01b0ba79-e7", "ovs_interfaceid": "01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.001 254096 DEBUG oslo_concurrency.lockutils [req-be17c287-9c0d-452b-90bc-103cec4ae829 req-086f3cc0-9356-4ba0-942e-9522b046715c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-679f5e87-bac4-4169-bffa-555a53e7321f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.001 254096 DEBUG nova.compute.manager [req-be17c287-9c0d-452b-90bc-103cec4ae829 req-086f3cc0-9356-4ba0-942e-9522b046715c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Received event network-vif-plugged-63c5b67d-9c2e-4371-887d-db0d034d9072 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.002 254096 DEBUG oslo_concurrency.lockutils [req-be17c287-9c0d-452b-90bc-103cec4ae829 req-086f3cc0-9356-4ba0-942e-9522b046715c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a9e83b59-224b-49dd-83f7-d057737f5825-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.002 254096 DEBUG oslo_concurrency.lockutils [req-be17c287-9c0d-452b-90bc-103cec4ae829 req-086f3cc0-9356-4ba0-942e-9522b046715c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a9e83b59-224b-49dd-83f7-d057737f5825-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.002 254096 DEBUG oslo_concurrency.lockutils [req-be17c287-9c0d-452b-90bc-103cec4ae829 req-086f3cc0-9356-4ba0-942e-9522b046715c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a9e83b59-224b-49dd-83f7-d057737f5825-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.003 254096 DEBUG nova.compute.manager [req-be17c287-9c0d-452b-90bc-103cec4ae829 req-086f3cc0-9356-4ba0-942e-9522b046715c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Processing event network-vif-plugged-63c5b67d-9c2e-4371-887d-db0d034d9072 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.003 254096 DEBUG nova.compute.manager [req-be17c287-9c0d-452b-90bc-103cec4ae829 req-086f3cc0-9356-4ba0-942e-9522b046715c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Received event network-vif-plugged-63c5b67d-9c2e-4371-887d-db0d034d9072 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.003 254096 DEBUG oslo_concurrency.lockutils [req-be17c287-9c0d-452b-90bc-103cec4ae829 req-086f3cc0-9356-4ba0-942e-9522b046715c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a9e83b59-224b-49dd-83f7-d057737f5825-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.003 254096 DEBUG oslo_concurrency.lockutils [req-be17c287-9c0d-452b-90bc-103cec4ae829 req-086f3cc0-9356-4ba0-942e-9522b046715c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a9e83b59-224b-49dd-83f7-d057737f5825-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.004 254096 DEBUG oslo_concurrency.lockutils [req-be17c287-9c0d-452b-90bc-103cec4ae829 req-086f3cc0-9356-4ba0-942e-9522b046715c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a9e83b59-224b-49dd-83f7-d057737f5825-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.004 254096 DEBUG nova.compute.manager [req-be17c287-9c0d-452b-90bc-103cec4ae829 req-086f3cc0-9356-4ba0-942e-9522b046715c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] No waiting events found dispatching network-vif-plugged-63c5b67d-9c2e-4371-887d-db0d034d9072 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.004 254096 WARNING nova.compute.manager [req-be17c287-9c0d-452b-90bc-103cec4ae829 req-086f3cc0-9356-4ba0-942e-9522b046715c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Received unexpected event network-vif-plugged-63c5b67d-9c2e-4371-887d-db0d034d9072 for instance with vm_state building and task_state spawning.
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.005 254096 DEBUG nova.compute.manager [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.010 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088903.0105665, a9e83b59-224b-49dd-83f7-d057737f5825 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.011 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] VM Resumed (Lifecycle Event)
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.013 254096 DEBUG nova.virt.libvirt.driver [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.023 254096 INFO nova.virt.libvirt.driver [-] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Instance spawned successfully.
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.024 254096 DEBUG nova.virt.libvirt.driver [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.041 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.046 254096 DEBUG nova.compute.manager [req-5a978fee-9156-48db-974c-a0252b68610a req-8b9e80cd-df5f-49fa-980e-aec9059b9b5b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Received event network-vif-plugged-43b8a38b-0b5a-4b7d-8043-759fa3697e8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.046 254096 DEBUG oslo_concurrency.lockutils [req-5a978fee-9156-48db-974c-a0252b68610a req-8b9e80cd-df5f-49fa-980e-aec9059b9b5b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "42be369c-5a19-4073-becc-4f28ef579c2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.046 254096 DEBUG oslo_concurrency.lockutils [req-5a978fee-9156-48db-974c-a0252b68610a req-8b9e80cd-df5f-49fa-980e-aec9059b9b5b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "42be369c-5a19-4073-becc-4f28ef579c2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.046 254096 DEBUG oslo_concurrency.lockutils [req-5a978fee-9156-48db-974c-a0252b68610a req-8b9e80cd-df5f-49fa-980e-aec9059b9b5b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "42be369c-5a19-4073-becc-4f28ef579c2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.046 254096 DEBUG nova.compute.manager [req-5a978fee-9156-48db-974c-a0252b68610a req-8b9e80cd-df5f-49fa-980e-aec9059b9b5b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Processing event network-vif-plugged-43b8a38b-0b5a-4b7d-8043-759fa3697e8a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.047 254096 DEBUG nova.compute.manager [req-5a978fee-9156-48db-974c-a0252b68610a req-8b9e80cd-df5f-49fa-980e-aec9059b9b5b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Received event network-vif-plugged-43b8a38b-0b5a-4b7d-8043-759fa3697e8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.047 254096 DEBUG oslo_concurrency.lockutils [req-5a978fee-9156-48db-974c-a0252b68610a req-8b9e80cd-df5f-49fa-980e-aec9059b9b5b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "42be369c-5a19-4073-becc-4f28ef579c2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.047 254096 DEBUG oslo_concurrency.lockutils [req-5a978fee-9156-48db-974c-a0252b68610a req-8b9e80cd-df5f-49fa-980e-aec9059b9b5b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "42be369c-5a19-4073-becc-4f28ef579c2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.047 254096 DEBUG oslo_concurrency.lockutils [req-5a978fee-9156-48db-974c-a0252b68610a req-8b9e80cd-df5f-49fa-980e-aec9059b9b5b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "42be369c-5a19-4073-becc-4f28ef579c2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.047 254096 DEBUG nova.compute.manager [req-5a978fee-9156-48db-974c-a0252b68610a req-8b9e80cd-df5f-49fa-980e-aec9059b9b5b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] No waiting events found dispatching network-vif-plugged-43b8a38b-0b5a-4b7d-8043-759fa3697e8a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.048 254096 WARNING nova.compute.manager [req-5a978fee-9156-48db-974c-a0252b68610a req-8b9e80cd-df5f-49fa-980e-aec9059b9b5b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Received unexpected event network-vif-plugged-43b8a38b-0b5a-4b7d-8043-759fa3697e8a for instance with vm_state building and task_state spawning.
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.048 254096 DEBUG nova.compute.manager [req-5a978fee-9156-48db-974c-a0252b68610a req-8b9e80cd-df5f-49fa-980e-aec9059b9b5b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Received event network-vif-plugged-01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.048 254096 DEBUG oslo_concurrency.lockutils [req-5a978fee-9156-48db-974c-a0252b68610a req-8b9e80cd-df5f-49fa-980e-aec9059b9b5b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "679f5e87-bac4-4169-bffa-555a53e7321f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.048 254096 DEBUG oslo_concurrency.lockutils [req-5a978fee-9156-48db-974c-a0252b68610a req-8b9e80cd-df5f-49fa-980e-aec9059b9b5b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "679f5e87-bac4-4169-bffa-555a53e7321f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.048 254096 DEBUG oslo_concurrency.lockutils [req-5a978fee-9156-48db-974c-a0252b68610a req-8b9e80cd-df5f-49fa-980e-aec9059b9b5b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "679f5e87-bac4-4169-bffa-555a53e7321f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.049 254096 DEBUG nova.compute.manager [req-5a978fee-9156-48db-974c-a0252b68610a req-8b9e80cd-df5f-49fa-980e-aec9059b9b5b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Processing event network-vif-plugged-01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.049 254096 DEBUG nova.compute.manager [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.051 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.055 254096 DEBUG nova.virt.libvirt.driver [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.055 254096 DEBUG nova.virt.libvirt.driver [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.056 254096 DEBUG nova.virt.libvirt.driver [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.056 254096 DEBUG nova.virt.libvirt.driver [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.056 254096 DEBUG nova.virt.libvirt.driver [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.057 254096 DEBUG nova.virt.libvirt.driver [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.061 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.074 254096 INFO nova.virt.libvirt.driver [-] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Instance spawned successfully.
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.075 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.081 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.082 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088903.0537705, 42be369c-5a19-4073-becc-4f28ef579c2c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.082 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] VM Resumed (Lifecycle Event)
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.101 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.101 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.102 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.102 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.103 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.103 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.108 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.114 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.149 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.167 254096 INFO nova.compute.manager [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Took 14.73 seconds to spawn the instance on the hypervisor.
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.168 254096 DEBUG nova.compute.manager [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.183 254096 INFO nova.compute.manager [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Took 11.63 seconds to spawn the instance on the hypervisor.
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.183 254096 DEBUG nova.compute.manager [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.197 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.277 254096 INFO nova.compute.manager [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Took 16.38 seconds to build instance.
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.306 254096 DEBUG oslo_concurrency.lockutils [None req-9b9cbd5f-413d-488c-a348-5c52cdb25d25 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "a9e83b59-224b-49dd-83f7-d057737f5825" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.486s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.308 254096 INFO nova.compute.manager [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Took 13.12 seconds to build instance.
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.327 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "42be369c-5a19-4073-becc-4f28ef579c2c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.445s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:43 compute-0 distracted_knuth[326044]: {
Nov 25 16:41:43 compute-0 distracted_knuth[326044]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 16:41:43 compute-0 distracted_knuth[326044]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:41:43 compute-0 distracted_knuth[326044]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 16:41:43 compute-0 distracted_knuth[326044]:         "osd_id": 1,
Nov 25 16:41:43 compute-0 distracted_knuth[326044]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:41:43 compute-0 distracted_knuth[326044]:         "type": "bluestore"
Nov 25 16:41:43 compute-0 distracted_knuth[326044]:     },
Nov 25 16:41:43 compute-0 distracted_knuth[326044]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 16:41:43 compute-0 distracted_knuth[326044]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:41:43 compute-0 distracted_knuth[326044]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 16:41:43 compute-0 distracted_knuth[326044]:         "osd_id": 2,
Nov 25 16:41:43 compute-0 distracted_knuth[326044]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:41:43 compute-0 distracted_knuth[326044]:         "type": "bluestore"
Nov 25 16:41:43 compute-0 distracted_knuth[326044]:     },
Nov 25 16:41:43 compute-0 distracted_knuth[326044]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 16:41:43 compute-0 distracted_knuth[326044]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:41:43 compute-0 distracted_knuth[326044]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 16:41:43 compute-0 distracted_knuth[326044]:         "osd_id": 0,
Nov 25 16:41:43 compute-0 distracted_knuth[326044]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:41:43 compute-0 distracted_knuth[326044]:         "type": "bluestore"
Nov 25 16:41:43 compute-0 distracted_knuth[326044]:     }
Nov 25 16:41:43 compute-0 distracted_knuth[326044]: }
Nov 25 16:41:43 compute-0 systemd[1]: libpod-232617e21c76680bc5b891308a5bfd2dc0dfb9ccb9afa04111584d0448f778d1.scope: Deactivated successfully.
Nov 25 16:41:43 compute-0 systemd[1]: libpod-232617e21c76680bc5b891308a5bfd2dc0dfb9ccb9afa04111584d0448f778d1.scope: Consumed 1.099s CPU time.
Nov 25 16:41:43 compute-0 podman[326013]: 2025-11-25 16:41:43.484956526 +0000 UTC m=+1.536799513 container died 232617e21c76680bc5b891308a5bfd2dc0dfb9ccb9afa04111584d0448f778d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_knuth, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:41:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5ce1b7a48a43784b10ac824df71803582f20bd105763898717981f31fab813b-merged.mount: Deactivated successfully.
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.534 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088903.5335133, 679f5e87-bac4-4169-bffa-555a53e7321f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.534 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] VM Started (Lifecycle Event)
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.536 254096 DEBUG nova.compute.manager [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:41:43 compute-0 podman[326013]: 2025-11-25 16:41:43.546172507 +0000 UTC m=+1.598015484 container remove 232617e21c76680bc5b891308a5bfd2dc0dfb9ccb9afa04111584d0448f778d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.549 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.553 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:41:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1690: 321 pgs: 321 active+clean; 320 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.6 MiB/s wr, 261 op/s
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.560 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.563 254096 INFO nova.virt.libvirt.driver [-] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Instance spawned successfully.
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.563 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.569 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.570 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088903.533881, 679f5e87-bac4-4169-bffa-555a53e7321f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.570 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] VM Paused (Lifecycle Event)
Nov 25 16:41:43 compute-0 systemd[1]: libpod-conmon-232617e21c76680bc5b891308a5bfd2dc0dfb9ccb9afa04111584d0448f778d1.scope: Deactivated successfully.
Nov 25 16:41:43 compute-0 sudo[325801]: pam_unix(sudo:session): session closed for user root
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.584 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.587 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.588 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.589 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.589 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:41:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.590 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.590 254096 DEBUG nova.virt.libvirt.driver [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.597 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088903.5388157, 679f5e87-bac4-4169-bffa-555a53e7321f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.597 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] VM Resumed (Lifecycle Event)
Nov 25 16:41:43 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:41:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:41:43 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:41:43 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 24f50b42-530a-4a50-8873-51fc98660269 does not exist
Nov 25 16:41:43 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 4cade59b-ea21-477c-bd2a-9149784902a6 does not exist
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.620 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.626 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.642 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.648 254096 INFO nova.compute.manager [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Took 11.16 seconds to spawn the instance on the hypervisor.
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.649 254096 DEBUG nova.compute.manager [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:41:43 compute-0 sudo[326285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:41:43 compute-0 sudo[326285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:41:43 compute-0 sudo[326285]: pam_unix(sudo:session): session closed for user root
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.714 254096 INFO nova.compute.manager [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Took 13.28 seconds to build instance.
Nov 25 16:41:43 compute-0 nova_compute[254092]: 2025-11-25 16:41:43.729 254096 DEBUG oslo_concurrency.lockutils [None req-cb2a3eb9-5aa8-45a3-b0d8-517f527ef3ae 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "679f5e87-bac4-4169-bffa-555a53e7321f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.561s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:43 compute-0 sudo[326310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 16:41:43 compute-0 sudo[326310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:41:43 compute-0 sudo[326310]: pam_unix(sudo:session): session closed for user root
Nov 25 16:41:43 compute-0 ovn_controller[153477]: 2025-11-25T16:41:43Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:67:c0:a9 10.100.0.8
Nov 25 16:41:43 compute-0 ovn_controller[153477]: 2025-11-25T16:41:43Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:67:c0:a9 10.100.0.8
Nov 25 16:41:44 compute-0 ceph-mon[74985]: pgmap v1690: 321 pgs: 321 active+clean; 320 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.6 MiB/s wr, 261 op/s
Nov 25 16:41:44 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:41:44 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:41:45 compute-0 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Nov 25 16:41:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1691: 321 pgs: 321 active+clean; 345 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 5.2 MiB/s wr, 401 op/s
Nov 25 16:41:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:41:45 compute-0 nova_compute[254092]: 2025-11-25 16:41:45.986 254096 DEBUG nova.compute.manager [req-94e2311f-1ade-48be-8644-96a2113eb595 req-98787e64-8bc7-4f82-92e2-ebe7c6959f0c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Received event network-vif-plugged-01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:41:45 compute-0 nova_compute[254092]: 2025-11-25 16:41:45.987 254096 DEBUG oslo_concurrency.lockutils [req-94e2311f-1ade-48be-8644-96a2113eb595 req-98787e64-8bc7-4f82-92e2-ebe7c6959f0c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "679f5e87-bac4-4169-bffa-555a53e7321f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:45 compute-0 nova_compute[254092]: 2025-11-25 16:41:45.987 254096 DEBUG oslo_concurrency.lockutils [req-94e2311f-1ade-48be-8644-96a2113eb595 req-98787e64-8bc7-4f82-92e2-ebe7c6959f0c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "679f5e87-bac4-4169-bffa-555a53e7321f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:45 compute-0 nova_compute[254092]: 2025-11-25 16:41:45.987 254096 DEBUG oslo_concurrency.lockutils [req-94e2311f-1ade-48be-8644-96a2113eb595 req-98787e64-8bc7-4f82-92e2-ebe7c6959f0c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "679f5e87-bac4-4169-bffa-555a53e7321f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:45 compute-0 nova_compute[254092]: 2025-11-25 16:41:45.987 254096 DEBUG nova.compute.manager [req-94e2311f-1ade-48be-8644-96a2113eb595 req-98787e64-8bc7-4f82-92e2-ebe7c6959f0c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] No waiting events found dispatching network-vif-plugged-01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:41:45 compute-0 nova_compute[254092]: 2025-11-25 16:41:45.988 254096 WARNING nova.compute.manager [req-94e2311f-1ade-48be-8644-96a2113eb595 req-98787e64-8bc7-4f82-92e2-ebe7c6959f0c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Received unexpected event network-vif-plugged-01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb for instance with vm_state active and task_state None.
Nov 25 16:41:46 compute-0 nova_compute[254092]: 2025-11-25 16:41:46.217 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:46 compute-0 ceph-mon[74985]: pgmap v1691: 321 pgs: 321 active+clean; 345 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 5.2 MiB/s wr, 401 op/s
Nov 25 16:41:47 compute-0 nova_compute[254092]: 2025-11-25 16:41:47.472 254096 DEBUG oslo_concurrency.lockutils [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "42be369c-5a19-4073-becc-4f28ef579c2c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:47 compute-0 nova_compute[254092]: 2025-11-25 16:41:47.472 254096 DEBUG oslo_concurrency.lockutils [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "42be369c-5a19-4073-becc-4f28ef579c2c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:47 compute-0 nova_compute[254092]: 2025-11-25 16:41:47.472 254096 DEBUG oslo_concurrency.lockutils [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "42be369c-5a19-4073-becc-4f28ef579c2c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:47 compute-0 nova_compute[254092]: 2025-11-25 16:41:47.472 254096 DEBUG oslo_concurrency.lockutils [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "42be369c-5a19-4073-becc-4f28ef579c2c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:47 compute-0 nova_compute[254092]: 2025-11-25 16:41:47.473 254096 DEBUG oslo_concurrency.lockutils [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "42be369c-5a19-4073-becc-4f28ef579c2c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:47 compute-0 nova_compute[254092]: 2025-11-25 16:41:47.474 254096 INFO nova.compute.manager [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Terminating instance
Nov 25 16:41:47 compute-0 nova_compute[254092]: 2025-11-25 16:41:47.475 254096 DEBUG nova.compute.manager [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:41:47 compute-0 kernel: tap43b8a38b-0b (unregistering): left promiscuous mode
Nov 25 16:41:47 compute-0 NetworkManager[48891]: <info>  [1764088907.5301] device (tap43b8a38b-0b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:41:47 compute-0 ovn_controller[153477]: 2025-11-25T16:41:47Z|00616|binding|INFO|Releasing lport 43b8a38b-0b5a-4b7d-8043-759fa3697e8a from this chassis (sb_readonly=0)
Nov 25 16:41:47 compute-0 nova_compute[254092]: 2025-11-25 16:41:47.546 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:47 compute-0 ovn_controller[153477]: 2025-11-25T16:41:47Z|00617|binding|INFO|Setting lport 43b8a38b-0b5a-4b7d-8043-759fa3697e8a down in Southbound
Nov 25 16:41:47 compute-0 ovn_controller[153477]: 2025-11-25T16:41:47Z|00618|binding|INFO|Removing iface tap43b8a38b-0b ovn-installed in OVS
Nov 25 16:41:47 compute-0 nova_compute[254092]: 2025-11-25 16:41:47.556 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:47.556 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:af:21 10.100.0.10'], port_security=['fa:16:3e:d3:af:21 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '42be369c-5a19-4073-becc-4f28ef579c2c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f66413c8-5cde-4f70-af70-6b7886c1219f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '20e192e04e814ae28e09f3c1fdbd39d0', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd45b4fe4-a7ee-460c-87e5-fc7e0e415a7e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82931fa4-5f59-456f-908b-c16b2ecf859e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=43b8a38b-0b5a-4b7d-8043-759fa3697e8a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:41:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:47.557 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 43b8a38b-0b5a-4b7d-8043-759fa3697e8a in datapath f66413c8-5cde-4f70-af70-6b7886c1219f unbound from our chassis
Nov 25 16:41:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1692: 321 pgs: 321 active+clean; 362 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 3.5 MiB/s wr, 312 op/s
Nov 25 16:41:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:47.562 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f66413c8-5cde-4f70-af70-6b7886c1219f
Nov 25 16:41:47 compute-0 nova_compute[254092]: 2025-11-25 16:41:47.576 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:47 compute-0 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d00000042.scope: Deactivated successfully.
Nov 25 16:41:47 compute-0 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d00000042.scope: Consumed 4.702s CPU time.
Nov 25 16:41:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:47.602 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5b02b4f4-9141-43aa-b7c0-ec1e71759b2b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:47 compute-0 systemd-machined[216343]: Machine qemu-77-instance-00000042 terminated.
Nov 25 16:41:47 compute-0 ovn_controller[153477]: 2025-11-25T16:41:47Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:70:61:e3 10.100.0.13
Nov 25 16:41:47 compute-0 ovn_controller[153477]: 2025-11-25T16:41:47Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:70:61:e3 10.100.0.13
Nov 25 16:41:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:47.645 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[5bc247d4-1e16-443d-9062-0e5db0024b3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:47.648 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[329d5092-ce08-4bdc-872d-1fe3ee3993bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:47 compute-0 nova_compute[254092]: 2025-11-25 16:41:47.682 254096 DEBUG oslo_concurrency.lockutils [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "679f5e87-bac4-4169-bffa-555a53e7321f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:47 compute-0 nova_compute[254092]: 2025-11-25 16:41:47.684 254096 DEBUG oslo_concurrency.lockutils [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "679f5e87-bac4-4169-bffa-555a53e7321f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:47 compute-0 nova_compute[254092]: 2025-11-25 16:41:47.684 254096 DEBUG oslo_concurrency.lockutils [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "679f5e87-bac4-4169-bffa-555a53e7321f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:47 compute-0 nova_compute[254092]: 2025-11-25 16:41:47.685 254096 DEBUG oslo_concurrency.lockutils [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "679f5e87-bac4-4169-bffa-555a53e7321f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:47 compute-0 nova_compute[254092]: 2025-11-25 16:41:47.685 254096 DEBUG oslo_concurrency.lockutils [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "679f5e87-bac4-4169-bffa-555a53e7321f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:47 compute-0 nova_compute[254092]: 2025-11-25 16:41:47.686 254096 INFO nova.compute.manager [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Terminating instance
Nov 25 16:41:47 compute-0 nova_compute[254092]: 2025-11-25 16:41:47.687 254096 DEBUG nova.compute.manager [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:41:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:47.686 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[dd9fb4a5-c848-49a2-8640-cdca12fc0ead]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:47.713 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0a0367a6-9e8e-4215-991e-593eeda62551]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf66413c8-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:af:00:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 176], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535936, 'reachable_time': 16135, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326350, 'error': None, 'target': 'ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:47 compute-0 nova_compute[254092]: 2025-11-25 16:41:47.718 254096 INFO nova.virt.libvirt.driver [-] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Instance destroyed successfully.
Nov 25 16:41:47 compute-0 nova_compute[254092]: 2025-11-25 16:41:47.718 254096 DEBUG nova.objects.instance [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lazy-loading 'resources' on Instance uuid 42be369c-5a19-4073-becc-4f28ef579c2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:41:47 compute-0 nova_compute[254092]: 2025-11-25 16:41:47.728 254096 DEBUG nova.virt.libvirt.vif [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:41:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-258940450',display_name='tempest-tempest.common.compute-instance-258940450-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-258940450-1',id=66,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:41:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='20e192e04e814ae28e09f3c1fdbd39d0',ramdisk_id='',reservation_id='r-ieom0say',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-193123640',owner_user_name='tempest-MultipleCreateTestJSON-193123640-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:41:43Z,user_data=None,user_id='03b851e958654346bc1caa437d7b76a5',uuid=42be369c-5a19-4073-becc-4f28ef579c2c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "43b8a38b-0b5a-4b7d-8043-759fa3697e8a", "address": "fa:16:3e:d3:af:21", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43b8a38b-0b", "ovs_interfaceid": "43b8a38b-0b5a-4b7d-8043-759fa3697e8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:41:47 compute-0 nova_compute[254092]: 2025-11-25 16:41:47.728 254096 DEBUG nova.network.os_vif_util [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Converting VIF {"id": "43b8a38b-0b5a-4b7d-8043-759fa3697e8a", "address": "fa:16:3e:d3:af:21", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43b8a38b-0b", "ovs_interfaceid": "43b8a38b-0b5a-4b7d-8043-759fa3697e8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:41:47 compute-0 nova_compute[254092]: 2025-11-25 16:41:47.729 254096 DEBUG nova.network.os_vif_util [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:af:21,bridge_name='br-int',has_traffic_filtering=True,id=43b8a38b-0b5a-4b7d-8043-759fa3697e8a,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43b8a38b-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:41:47 compute-0 nova_compute[254092]: 2025-11-25 16:41:47.729 254096 DEBUG os_vif [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:af:21,bridge_name='br-int',has_traffic_filtering=True,id=43b8a38b-0b5a-4b7d-8043-759fa3697e8a,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43b8a38b-0b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:41:47 compute-0 nova_compute[254092]: 2025-11-25 16:41:47.731 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:47 compute-0 nova_compute[254092]: 2025-11-25 16:41:47.732 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43b8a38b-0b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:47 compute-0 nova_compute[254092]: 2025-11-25 16:41:47.733 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:47 compute-0 nova_compute[254092]: 2025-11-25 16:41:47.736 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:47.735 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e3f56533-92db-4c79-8e16-25e821a4ba8a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf66413c8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535955, 'tstamp': 535955}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326360, 'error': None, 'target': 'ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf66413c8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535960, 'tstamp': 535960}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326360, 'error': None, 'target': 'ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:47.737 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf66413c8-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:47 compute-0 nova_compute[254092]: 2025-11-25 16:41:47.738 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:47.740 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf66413c8-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:47.740 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:41:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:47.741 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf66413c8-50, col_values=(('external_ids', {'iface-id': '347c541f-24a8-4230-9881-74343160f6c8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:47.741 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:41:47 compute-0 nova_compute[254092]: 2025-11-25 16:41:47.740 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:47 compute-0 nova_compute[254092]: 2025-11-25 16:41:47.740 254096 INFO os_vif [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:af:21,bridge_name='br-int',has_traffic_filtering=True,id=43b8a38b-0b5a-4b7d-8043-759fa3697e8a,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43b8a38b-0b')
Nov 25 16:41:47 compute-0 kernel: tap01b0ba79-e7 (unregistering): left promiscuous mode
Nov 25 16:41:47 compute-0 NetworkManager[48891]: <info>  [1764088907.7564] device (tap01b0ba79-e7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:41:47 compute-0 ovn_controller[153477]: 2025-11-25T16:41:47Z|00619|binding|INFO|Releasing lport 01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb from this chassis (sb_readonly=0)
Nov 25 16:41:47 compute-0 ovn_controller[153477]: 2025-11-25T16:41:47Z|00620|binding|INFO|Setting lport 01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb down in Southbound
Nov 25 16:41:47 compute-0 ovn_controller[153477]: 2025-11-25T16:41:47Z|00621|binding|INFO|Removing iface tap01b0ba79-e7 ovn-installed in OVS
Nov 25 16:41:47 compute-0 nova_compute[254092]: 2025-11-25 16:41:47.769 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:47.776 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:f6:54 10.100.0.4'], port_security=['fa:16:3e:da:f6:54 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '679f5e87-bac4-4169-bffa-555a53e7321f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f66413c8-5cde-4f70-af70-6b7886c1219f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '20e192e04e814ae28e09f3c1fdbd39d0', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd45b4fe4-a7ee-460c-87e5-fc7e0e415a7e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82931fa4-5f59-456f-908b-c16b2ecf859e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:41:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:47.778 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb in datapath f66413c8-5cde-4f70-af70-6b7886c1219f unbound from our chassis
Nov 25 16:41:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:47.780 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f66413c8-5cde-4f70-af70-6b7886c1219f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:41:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:47.781 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[613646d4-ffd5-4059-9e48-dd9ef6772119]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:47.781 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f namespace which is not needed anymore
Nov 25 16:41:47 compute-0 nova_compute[254092]: 2025-11-25 16:41:47.784 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:47 compute-0 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d00000043.scope: Deactivated successfully.
Nov 25 16:41:47 compute-0 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d00000043.scope: Consumed 4.827s CPU time.
Nov 25 16:41:47 compute-0 systemd-machined[216343]: Machine qemu-78-instance-00000043 terminated.
Nov 25 16:41:47 compute-0 neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f[326180]: [NOTICE]   (326185) : haproxy version is 2.8.14-c23fe91
Nov 25 16:41:47 compute-0 neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f[326180]: [NOTICE]   (326185) : path to executable is /usr/sbin/haproxy
Nov 25 16:41:47 compute-0 neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f[326180]: [WARNING]  (326185) : Exiting Master process...
Nov 25 16:41:47 compute-0 nova_compute[254092]: 2025-11-25 16:41:47.925 254096 INFO nova.virt.libvirt.driver [-] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Instance destroyed successfully.
Nov 25 16:41:47 compute-0 nova_compute[254092]: 2025-11-25 16:41:47.926 254096 DEBUG nova.objects.instance [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lazy-loading 'resources' on Instance uuid 679f5e87-bac4-4169-bffa-555a53e7321f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:41:47 compute-0 neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f[326180]: [ALERT]    (326185) : Current worker (326187) exited with code 143 (Terminated)
Nov 25 16:41:47 compute-0 neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f[326180]: [WARNING]  (326185) : All workers exited. Exiting... (0)
Nov 25 16:41:47 compute-0 systemd[1]: libpod-a9b638cfe306cd9b174a0a018b0446485c6648195127bbc42fd09bbfe56e5fce.scope: Deactivated successfully.
Nov 25 16:41:47 compute-0 nova_compute[254092]: 2025-11-25 16:41:47.939 254096 DEBUG nova.virt.libvirt.vif [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:41:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-258940450',display_name='tempest-tempest.common.compute-instance-258940450-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-258940450-2',id=67,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-11-25T16:41:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='20e192e04e814ae28e09f3c1fdbd39d0',ramdisk_id='',reservation_id='r-ieom0say',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-193123640',owner_user_name='tempest-MultipleCreateTestJSON-193123640-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:41:43Z,user_data=None,user_id='03b851e958654346bc1caa437d7b76a5',uuid=679f5e87-bac4-4169-bffa-555a53e7321f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb", "address": "fa:16:3e:da:f6:54", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01b0ba79-e7", "ovs_interfaceid": "01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:41:47 compute-0 nova_compute[254092]: 2025-11-25 16:41:47.940 254096 DEBUG nova.network.os_vif_util [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Converting VIF {"id": "01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb", "address": "fa:16:3e:da:f6:54", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01b0ba79-e7", "ovs_interfaceid": "01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:41:47 compute-0 nova_compute[254092]: 2025-11-25 16:41:47.941 254096 DEBUG nova.network.os_vif_util [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:f6:54,bridge_name='br-int',has_traffic_filtering=True,id=01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01b0ba79-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:41:47 compute-0 nova_compute[254092]: 2025-11-25 16:41:47.941 254096 DEBUG os_vif [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:f6:54,bridge_name='br-int',has_traffic_filtering=True,id=01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01b0ba79-e7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:41:47 compute-0 nova_compute[254092]: 2025-11-25 16:41:47.945 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:47 compute-0 nova_compute[254092]: 2025-11-25 16:41:47.945 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01b0ba79-e7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:47 compute-0 podman[326399]: 2025-11-25 16:41:47.947607133 +0000 UTC m=+0.071709755 container died a9b638cfe306cd9b174a0a018b0446485c6648195127bbc42fd09bbfe56e5fce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 16:41:47 compute-0 nova_compute[254092]: 2025-11-25 16:41:47.952 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:41:47 compute-0 nova_compute[254092]: 2025-11-25 16:41:47.955 254096 INFO os_vif [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:f6:54,bridge_name='br-int',has_traffic_filtering=True,id=01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01b0ba79-e7')
Nov 25 16:41:47 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a9b638cfe306cd9b174a0a018b0446485c6648195127bbc42fd09bbfe56e5fce-userdata-shm.mount: Deactivated successfully.
Nov 25 16:41:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b9347ecdd347217bc82bfdd8002e184ed71c674b5a74b2d55a33c2bf2b613a2-merged.mount: Deactivated successfully.
Nov 25 16:41:48 compute-0 podman[326399]: 2025-11-25 16:41:48.005005081 +0000 UTC m=+0.129107693 container cleanup a9b638cfe306cd9b174a0a018b0446485c6648195127bbc42fd09bbfe56e5fce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 16:41:48 compute-0 systemd[1]: libpod-conmon-a9b638cfe306cd9b174a0a018b0446485c6648195127bbc42fd09bbfe56e5fce.scope: Deactivated successfully.
Nov 25 16:41:48 compute-0 podman[326460]: 2025-11-25 16:41:48.081858726 +0000 UTC m=+0.051895719 container remove a9b638cfe306cd9b174a0a018b0446485c6648195127bbc42fd09bbfe56e5fce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 16:41:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:48.089 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[42ba9198-91bb-4791-a7b4-1d8b1c7fe0c8]: (4, ('Tue Nov 25 04:41:47 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f (a9b638cfe306cd9b174a0a018b0446485c6648195127bbc42fd09bbfe56e5fce)\na9b638cfe306cd9b174a0a018b0446485c6648195127bbc42fd09bbfe56e5fce\nTue Nov 25 04:41:48 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f (a9b638cfe306cd9b174a0a018b0446485c6648195127bbc42fd09bbfe56e5fce)\na9b638cfe306cd9b174a0a018b0446485c6648195127bbc42fd09bbfe56e5fce\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:48.091 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0698f8e2-2aa4-4df8-b546-05bf9263d48e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:48.092 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf66413c8-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:48 compute-0 nova_compute[254092]: 2025-11-25 16:41:48.094 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:48 compute-0 kernel: tapf66413c8-50: left promiscuous mode
Nov 25 16:41:48 compute-0 nova_compute[254092]: 2025-11-25 16:41:48.111 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:48.116 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[64fc119c-694b-4448-8907-3e255dee13b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:48.133 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[66f366bc-c7c9-4bc2-873c-bdfb8ce8dd92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:48.134 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1c4c93cf-1e4c-4406-9053-7439612f90ab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:48.151 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[18d8ec7f-b718-4bae-906a-c029d14bb664]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535926, 'reachable_time': 23963, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326475, 'error': None, 'target': 'ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:48 compute-0 systemd[1]: run-netns-ovnmeta\x2df66413c8\x2d5cde\x2d4f70\x2daf70\x2d6b7886c1219f.mount: Deactivated successfully.
Nov 25 16:41:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:48.154 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:41:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:48.155 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[054829f6-b9ae-4da6-82e2-571811998bdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:48 compute-0 nova_compute[254092]: 2025-11-25 16:41:48.168 254096 INFO nova.virt.libvirt.driver [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Deleting instance files /var/lib/nova/instances/42be369c-5a19-4073-becc-4f28ef579c2c_del
Nov 25 16:41:48 compute-0 nova_compute[254092]: 2025-11-25 16:41:48.169 254096 INFO nova.virt.libvirt.driver [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Deletion of /var/lib/nova/instances/42be369c-5a19-4073-becc-4f28ef579c2c_del complete
Nov 25 16:41:48 compute-0 nova_compute[254092]: 2025-11-25 16:41:48.194 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:48 compute-0 nova_compute[254092]: 2025-11-25 16:41:48.250 254096 INFO nova.compute.manager [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Took 0.78 seconds to destroy the instance on the hypervisor.
Nov 25 16:41:48 compute-0 nova_compute[254092]: 2025-11-25 16:41:48.250 254096 DEBUG oslo.service.loopingcall [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:41:48 compute-0 nova_compute[254092]: 2025-11-25 16:41:48.251 254096 DEBUG nova.compute.manager [-] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:41:48 compute-0 nova_compute[254092]: 2025-11-25 16:41:48.251 254096 DEBUG nova.network.neutron [-] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:41:48 compute-0 nova_compute[254092]: 2025-11-25 16:41:48.501 254096 DEBUG nova.compute.manager [req-bbec85bb-9e95-4e42-80b1-7849e19e42ab req-c6321c0f-32d9-4787-bfa3-25fad0d240d2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Received event network-vif-unplugged-43b8a38b-0b5a-4b7d-8043-759fa3697e8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:41:48 compute-0 nova_compute[254092]: 2025-11-25 16:41:48.502 254096 DEBUG oslo_concurrency.lockutils [req-bbec85bb-9e95-4e42-80b1-7849e19e42ab req-c6321c0f-32d9-4787-bfa3-25fad0d240d2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "42be369c-5a19-4073-becc-4f28ef579c2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:48 compute-0 nova_compute[254092]: 2025-11-25 16:41:48.502 254096 DEBUG oslo_concurrency.lockutils [req-bbec85bb-9e95-4e42-80b1-7849e19e42ab req-c6321c0f-32d9-4787-bfa3-25fad0d240d2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "42be369c-5a19-4073-becc-4f28ef579c2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:48 compute-0 nova_compute[254092]: 2025-11-25 16:41:48.502 254096 DEBUG oslo_concurrency.lockutils [req-bbec85bb-9e95-4e42-80b1-7849e19e42ab req-c6321c0f-32d9-4787-bfa3-25fad0d240d2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "42be369c-5a19-4073-becc-4f28ef579c2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:48 compute-0 nova_compute[254092]: 2025-11-25 16:41:48.502 254096 DEBUG nova.compute.manager [req-bbec85bb-9e95-4e42-80b1-7849e19e42ab req-c6321c0f-32d9-4787-bfa3-25fad0d240d2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] No waiting events found dispatching network-vif-unplugged-43b8a38b-0b5a-4b7d-8043-759fa3697e8a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:41:48 compute-0 nova_compute[254092]: 2025-11-25 16:41:48.502 254096 DEBUG nova.compute.manager [req-bbec85bb-9e95-4e42-80b1-7849e19e42ab req-c6321c0f-32d9-4787-bfa3-25fad0d240d2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Received event network-vif-unplugged-43b8a38b-0b5a-4b7d-8043-759fa3697e8a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:41:48 compute-0 nova_compute[254092]: 2025-11-25 16:41:48.502 254096 DEBUG nova.compute.manager [req-bbec85bb-9e95-4e42-80b1-7849e19e42ab req-c6321c0f-32d9-4787-bfa3-25fad0d240d2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Received event network-vif-plugged-43b8a38b-0b5a-4b7d-8043-759fa3697e8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:41:48 compute-0 nova_compute[254092]: 2025-11-25 16:41:48.503 254096 DEBUG oslo_concurrency.lockutils [req-bbec85bb-9e95-4e42-80b1-7849e19e42ab req-c6321c0f-32d9-4787-bfa3-25fad0d240d2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "42be369c-5a19-4073-becc-4f28ef579c2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:48 compute-0 nova_compute[254092]: 2025-11-25 16:41:48.503 254096 DEBUG oslo_concurrency.lockutils [req-bbec85bb-9e95-4e42-80b1-7849e19e42ab req-c6321c0f-32d9-4787-bfa3-25fad0d240d2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "42be369c-5a19-4073-becc-4f28ef579c2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:48 compute-0 nova_compute[254092]: 2025-11-25 16:41:48.503 254096 DEBUG oslo_concurrency.lockutils [req-bbec85bb-9e95-4e42-80b1-7849e19e42ab req-c6321c0f-32d9-4787-bfa3-25fad0d240d2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "42be369c-5a19-4073-becc-4f28ef579c2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:48 compute-0 nova_compute[254092]: 2025-11-25 16:41:48.503 254096 DEBUG nova.compute.manager [req-bbec85bb-9e95-4e42-80b1-7849e19e42ab req-c6321c0f-32d9-4787-bfa3-25fad0d240d2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] No waiting events found dispatching network-vif-plugged-43b8a38b-0b5a-4b7d-8043-759fa3697e8a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:41:48 compute-0 nova_compute[254092]: 2025-11-25 16:41:48.503 254096 WARNING nova.compute.manager [req-bbec85bb-9e95-4e42-80b1-7849e19e42ab req-c6321c0f-32d9-4787-bfa3-25fad0d240d2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Received unexpected event network-vif-plugged-43b8a38b-0b5a-4b7d-8043-759fa3697e8a for instance with vm_state active and task_state deleting.
Nov 25 16:41:48 compute-0 ceph-mon[74985]: pgmap v1692: 321 pgs: 321 active+clean; 362 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 3.5 MiB/s wr, 312 op/s
Nov 25 16:41:48 compute-0 nova_compute[254092]: 2025-11-25 16:41:48.716 254096 INFO nova.virt.libvirt.driver [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Deleting instance files /var/lib/nova/instances/679f5e87-bac4-4169-bffa-555a53e7321f_del
Nov 25 16:41:48 compute-0 nova_compute[254092]: 2025-11-25 16:41:48.717 254096 INFO nova.virt.libvirt.driver [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Deletion of /var/lib/nova/instances/679f5e87-bac4-4169-bffa-555a53e7321f_del complete
Nov 25 16:41:48 compute-0 nova_compute[254092]: 2025-11-25 16:41:48.806 254096 INFO nova.compute.manager [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Took 1.12 seconds to destroy the instance on the hypervisor.
Nov 25 16:41:48 compute-0 nova_compute[254092]: 2025-11-25 16:41:48.806 254096 DEBUG oslo.service.loopingcall [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:41:48 compute-0 nova_compute[254092]: 2025-11-25 16:41:48.807 254096 DEBUG nova.compute.manager [-] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:41:48 compute-0 nova_compute[254092]: 2025-11-25 16:41:48.807 254096 DEBUG nova.network.neutron [-] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:41:49 compute-0 nova_compute[254092]: 2025-11-25 16:41:49.012 254096 DEBUG nova.network.neutron [-] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:41:49 compute-0 nova_compute[254092]: 2025-11-25 16:41:49.031 254096 INFO nova.compute.manager [-] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Took 0.78 seconds to deallocate network for instance.
Nov 25 16:41:49 compute-0 nova_compute[254092]: 2025-11-25 16:41:49.079 254096 DEBUG oslo_concurrency.lockutils [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:49 compute-0 nova_compute[254092]: 2025-11-25 16:41:49.079 254096 DEBUG oslo_concurrency.lockutils [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:49 compute-0 nova_compute[254092]: 2025-11-25 16:41:49.128 254096 DEBUG nova.compute.manager [req-204250bd-1497-4235-9a40-e758533d271a req-938236db-821c-435d-907b-095f1eecad69 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Received event network-vif-deleted-43b8a38b-0b5a-4b7d-8043-759fa3697e8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:41:49 compute-0 nova_compute[254092]: 2025-11-25 16:41:49.244 254096 DEBUG oslo_concurrency.processutils [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:49 compute-0 nova_compute[254092]: 2025-11-25 16:41:49.281 254096 DEBUG oslo_concurrency.lockutils [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "a9e83b59-224b-49dd-83f7-d057737f5825" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:49 compute-0 nova_compute[254092]: 2025-11-25 16:41:49.281 254096 DEBUG oslo_concurrency.lockutils [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "a9e83b59-224b-49dd-83f7-d057737f5825" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:49 compute-0 nova_compute[254092]: 2025-11-25 16:41:49.282 254096 DEBUG oslo_concurrency.lockutils [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "a9e83b59-224b-49dd-83f7-d057737f5825-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:49 compute-0 nova_compute[254092]: 2025-11-25 16:41:49.282 254096 DEBUG oslo_concurrency.lockutils [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "a9e83b59-224b-49dd-83f7-d057737f5825-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:49 compute-0 nova_compute[254092]: 2025-11-25 16:41:49.282 254096 DEBUG oslo_concurrency.lockutils [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "a9e83b59-224b-49dd-83f7-d057737f5825-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:49 compute-0 nova_compute[254092]: 2025-11-25 16:41:49.285 254096 INFO nova.compute.manager [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Terminating instance
Nov 25 16:41:49 compute-0 nova_compute[254092]: 2025-11-25 16:41:49.287 254096 DEBUG nova.compute.manager [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:41:49 compute-0 kernel: tap63c5b67d-9c (unregistering): left promiscuous mode
Nov 25 16:41:49 compute-0 NetworkManager[48891]: <info>  [1764088909.3244] device (tap63c5b67d-9c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:41:49 compute-0 ovn_controller[153477]: 2025-11-25T16:41:49Z|00622|binding|INFO|Releasing lport 63c5b67d-9c2e-4371-887d-db0d034d9072 from this chassis (sb_readonly=0)
Nov 25 16:41:49 compute-0 nova_compute[254092]: 2025-11-25 16:41:49.334 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:49 compute-0 ovn_controller[153477]: 2025-11-25T16:41:49Z|00623|binding|INFO|Setting lport 63c5b67d-9c2e-4371-887d-db0d034d9072 down in Southbound
Nov 25 16:41:49 compute-0 ovn_controller[153477]: 2025-11-25T16:41:49Z|00624|binding|INFO|Removing iface tap63c5b67d-9c ovn-installed in OVS
Nov 25 16:41:49 compute-0 nova_compute[254092]: 2025-11-25 16:41:49.339 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:49 compute-0 nova_compute[254092]: 2025-11-25 16:41:49.353 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:49.355 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:43:17 10.100.0.7'], port_security=['fa:16:3e:1b:43:17 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'a9e83b59-224b-49dd-83f7-d057737f5825', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ff6b3c59785407bb06c9d7e3969ea4f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b774cfb5-cd97-4ca0-9d7e-110986406220', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d2dad0a-51f1-4edf-9261-6f40860e89ef, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=63c5b67d-9c2e-4371-887d-db0d034d9072) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:41:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:49.356 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 63c5b67d-9c2e-4371-887d-db0d034d9072 in datapath 62c0a8be-b828-4765-9cf8-f2477a8ef1d9 unbound from our chassis
Nov 25 16:41:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:49.359 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 62c0a8be-b828-4765-9cf8-f2477a8ef1d9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:41:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:49.360 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b8e9eb3b-aac1-4428-b7e2-83072742ae37]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:49.360 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 namespace which is not needed anymore
Nov 25 16:41:49 compute-0 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d00000041.scope: Deactivated successfully.
Nov 25 16:41:49 compute-0 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d00000041.scope: Consumed 6.390s CPU time.
Nov 25 16:41:49 compute-0 systemd-machined[216343]: Machine qemu-76-instance-00000041 terminated.
Nov 25 16:41:49 compute-0 nova_compute[254092]: 2025-11-25 16:41:49.506 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:49 compute-0 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[325427]: [NOTICE]   (325456) : haproxy version is 2.8.14-c23fe91
Nov 25 16:41:49 compute-0 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[325427]: [NOTICE]   (325456) : path to executable is /usr/sbin/haproxy
Nov 25 16:41:49 compute-0 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[325427]: [WARNING]  (325456) : Exiting Master process...
Nov 25 16:41:49 compute-0 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[325427]: [WARNING]  (325456) : Exiting Master process...
Nov 25 16:41:49 compute-0 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[325427]: [ALERT]    (325456) : Current worker (325460) exited with code 143 (Terminated)
Nov 25 16:41:49 compute-0 systemd[1]: libpod-fd3d886f03b64ed41474997e2ec7486a377a5f3e50d324379b9453788f1733b9.scope: Deactivated successfully.
Nov 25 16:41:49 compute-0 neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9[325427]: [WARNING]  (325456) : All workers exited. Exiting... (0)
Nov 25 16:41:49 compute-0 nova_compute[254092]: 2025-11-25 16:41:49.513 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:49 compute-0 nova_compute[254092]: 2025-11-25 16:41:49.518 254096 INFO nova.virt.libvirt.driver [-] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Instance destroyed successfully.
Nov 25 16:41:49 compute-0 nova_compute[254092]: 2025-11-25 16:41:49.519 254096 DEBUG nova.objects.instance [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lazy-loading 'resources' on Instance uuid a9e83b59-224b-49dd-83f7-d057737f5825 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:41:49 compute-0 podman[326519]: 2025-11-25 16:41:49.520936647 +0000 UTC m=+0.068699475 container died fd3d886f03b64ed41474997e2ec7486a377a5f3e50d324379b9453788f1733b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:41:49 compute-0 nova_compute[254092]: 2025-11-25 16:41:49.528 254096 DEBUG nova.network.neutron [-] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:41:49 compute-0 nova_compute[254092]: 2025-11-25 16:41:49.530 254096 DEBUG nova.virt.libvirt.vif [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:41:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-695586713',display_name='tempest-ServerDiskConfigTestJSON-server-695586713',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-695586713',id=65,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:41:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6ff6b3c59785407bb06c9d7e3969ea4f',ramdisk_id='',reservation_id='r-mslb6gvl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1055022473',owner_user_name='tempest-ServerDiskConfigTestJSON-1055022473-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:41:47Z,user_data=None,user_id='3c1fd56de7cd4f5c9b1d85ffe8545c90',uuid=a9e83b59-224b-49dd-83f7-d057737f5825,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "63c5b67d-9c2e-4371-887d-db0d034d9072", "address": "fa:16:3e:1b:43:17", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63c5b67d-9c", "ovs_interfaceid": "63c5b67d-9c2e-4371-887d-db0d034d9072", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:41:49 compute-0 nova_compute[254092]: 2025-11-25 16:41:49.531 254096 DEBUG nova.network.os_vif_util [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converting VIF {"id": "63c5b67d-9c2e-4371-887d-db0d034d9072", "address": "fa:16:3e:1b:43:17", "network": {"id": "62c0a8be-b828-4765-9cf8-f2477a8ef1d9", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-433746422-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ff6b3c59785407bb06c9d7e3969ea4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63c5b67d-9c", "ovs_interfaceid": "63c5b67d-9c2e-4371-887d-db0d034d9072", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:41:49 compute-0 nova_compute[254092]: 2025-11-25 16:41:49.532 254096 DEBUG nova.network.os_vif_util [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:43:17,bridge_name='br-int',has_traffic_filtering=True,id=63c5b67d-9c2e-4371-887d-db0d034d9072,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63c5b67d-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:41:49 compute-0 nova_compute[254092]: 2025-11-25 16:41:49.532 254096 DEBUG os_vif [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:43:17,bridge_name='br-int',has_traffic_filtering=True,id=63c5b67d-9c2e-4371-887d-db0d034d9072,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63c5b67d-9c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:41:49 compute-0 nova_compute[254092]: 2025-11-25 16:41:49.534 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:49 compute-0 nova_compute[254092]: 2025-11-25 16:41:49.535 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap63c5b67d-9c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:49 compute-0 nova_compute[254092]: 2025-11-25 16:41:49.539 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:41:49 compute-0 nova_compute[254092]: 2025-11-25 16:41:49.541 254096 INFO os_vif [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:43:17,bridge_name='br-int',has_traffic_filtering=True,id=63c5b67d-9c2e-4371-887d-db0d034d9072,network=Network(62c0a8be-b828-4765-9cf8-f2477a8ef1d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63c5b67d-9c')
Nov 25 16:41:49 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fd3d886f03b64ed41474997e2ec7486a377a5f3e50d324379b9453788f1733b9-userdata-shm.mount: Deactivated successfully.
Nov 25 16:41:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-549e2f31ca8fd68d7bd7aa09e114cdc7531e725e9150fc3ab7f41d99c3d361aa-merged.mount: Deactivated successfully.
Nov 25 16:41:49 compute-0 podman[326519]: 2025-11-25 16:41:49.560435289 +0000 UTC m=+0.108198117 container cleanup fd3d886f03b64ed41474997e2ec7486a377a5f3e50d324379b9453788f1733b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 16:41:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1693: 321 pgs: 321 active+clean; 362 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.7 MiB/s wr, 236 op/s
Nov 25 16:41:49 compute-0 nova_compute[254092]: 2025-11-25 16:41:49.564 254096 INFO nova.compute.manager [-] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Took 0.76 seconds to deallocate network for instance.
Nov 25 16:41:49 compute-0 systemd[1]: libpod-conmon-fd3d886f03b64ed41474997e2ec7486a377a5f3e50d324379b9453788f1733b9.scope: Deactivated successfully.
Nov 25 16:41:49 compute-0 nova_compute[254092]: 2025-11-25 16:41:49.613 254096 DEBUG oslo_concurrency.lockutils [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:49 compute-0 podman[326574]: 2025-11-25 16:41:49.631597819 +0000 UTC m=+0.048366303 container remove fd3d886f03b64ed41474997e2ec7486a377a5f3e50d324379b9453788f1733b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:41:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:49.637 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[76fa45b1-f255-436c-ac3a-96000b277335]: (4, ('Tue Nov 25 04:41:49 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 (fd3d886f03b64ed41474997e2ec7486a377a5f3e50d324379b9453788f1733b9)\nfd3d886f03b64ed41474997e2ec7486a377a5f3e50d324379b9453788f1733b9\nTue Nov 25 04:41:49 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 (fd3d886f03b64ed41474997e2ec7486a377a5f3e50d324379b9453788f1733b9)\nfd3d886f03b64ed41474997e2ec7486a377a5f3e50d324379b9453788f1733b9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:49.639 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d15554f5-6f90-4e76-9518-f678396a431f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:49.639 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62c0a8be-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:49 compute-0 kernel: tap62c0a8be-b0: left promiscuous mode
Nov 25 16:41:49 compute-0 nova_compute[254092]: 2025-11-25 16:41:49.641 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:49 compute-0 nova_compute[254092]: 2025-11-25 16:41:49.701 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:41:49 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2163470565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:41:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:49.706 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[828af609-3003-435c-845b-0ad134c0aa0a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:49 compute-0 ovn_controller[153477]: 2025-11-25T16:41:49Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:68:dd:b0 10.100.0.4
Nov 25 16:41:49 compute-0 ovn_controller[153477]: 2025-11-25T16:41:49Z|00074|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:68:dd:b0 10.100.0.4
Nov 25 16:41:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:49.718 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3dffe21d-c05c-4e18-b79d-770dac77c415]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:49.719 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[28244552-aac2-409c-9331-96c5cdf619e3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:49 compute-0 nova_compute[254092]: 2025-11-25 16:41:49.732 254096 DEBUG oslo_concurrency.processutils [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:49 compute-0 nova_compute[254092]: 2025-11-25 16:41:49.739 254096 DEBUG nova.compute.provider_tree [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:41:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:49.740 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[28af4fcd-1af9-488a-86fe-b356f5f38fcc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535589, 'reachable_time': 34979, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326594, 'error': None, 'target': 'ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:49.742 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-62c0a8be-b828-4765-9cf8-f2477a8ef1d9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:41:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:49.742 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[3a6c75b4-6a6b-44c6-948b-43e135331690]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:49 compute-0 systemd[1]: run-netns-ovnmeta\x2d62c0a8be\x2db828\x2d4765\x2d9cf8\x2df2477a8ef1d9.mount: Deactivated successfully.
Nov 25 16:41:49 compute-0 nova_compute[254092]: 2025-11-25 16:41:49.759 254096 DEBUG nova.scheduler.client.report [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:41:49 compute-0 nova_compute[254092]: 2025-11-25 16:41:49.780 254096 DEBUG oslo_concurrency.lockutils [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:49 compute-0 nova_compute[254092]: 2025-11-25 16:41:49.783 254096 DEBUG oslo_concurrency.lockutils [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.169s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:49 compute-0 nova_compute[254092]: 2025-11-25 16:41:49.828 254096 INFO nova.scheduler.client.report [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Deleted allocations for instance 42be369c-5a19-4073-becc-4f28ef579c2c
Nov 25 16:41:49 compute-0 nova_compute[254092]: 2025-11-25 16:41:49.895 254096 DEBUG oslo_concurrency.lockutils [None req-d4e5924f-cfef-4aad-b1c6-9a404b057083 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "42be369c-5a19-4073-becc-4f28ef579c2c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.423s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:49 compute-0 nova_compute[254092]: 2025-11-25 16:41:49.907 254096 DEBUG oslo_concurrency.processutils [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:49 compute-0 nova_compute[254092]: 2025-11-25 16:41:49.966 254096 INFO nova.virt.libvirt.driver [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Deleting instance files /var/lib/nova/instances/a9e83b59-224b-49dd-83f7-d057737f5825_del
Nov 25 16:41:49 compute-0 nova_compute[254092]: 2025-11-25 16:41:49.967 254096 INFO nova.virt.libvirt.driver [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Deletion of /var/lib/nova/instances/a9e83b59-224b-49dd-83f7-d057737f5825_del complete
Nov 25 16:41:50 compute-0 nova_compute[254092]: 2025-11-25 16:41:50.011 254096 INFO nova.compute.manager [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Took 0.72 seconds to destroy the instance on the hypervisor.
Nov 25 16:41:50 compute-0 nova_compute[254092]: 2025-11-25 16:41:50.012 254096 DEBUG oslo.service.loopingcall [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:41:50 compute-0 nova_compute[254092]: 2025-11-25 16:41:50.012 254096 DEBUG nova.compute.manager [-] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:41:50 compute-0 nova_compute[254092]: 2025-11-25 16:41:50.012 254096 DEBUG nova.network.neutron [-] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:41:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:41:50 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2727871419' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:41:50 compute-0 nova_compute[254092]: 2025-11-25 16:41:50.333 254096 DEBUG oslo_concurrency.processutils [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:50 compute-0 nova_compute[254092]: 2025-11-25 16:41:50.338 254096 DEBUG nova.compute.provider_tree [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:41:50 compute-0 nova_compute[254092]: 2025-11-25 16:41:50.351 254096 DEBUG nova.scheduler.client.report [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:41:50 compute-0 nova_compute[254092]: 2025-11-25 16:41:50.374 254096 DEBUG oslo_concurrency.lockutils [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:50 compute-0 nova_compute[254092]: 2025-11-25 16:41:50.412 254096 INFO nova.scheduler.client.report [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Deleted allocations for instance 679f5e87-bac4-4169-bffa-555a53e7321f
Nov 25 16:41:50 compute-0 nova_compute[254092]: 2025-11-25 16:41:50.479 254096 DEBUG oslo_concurrency.lockutils [None req-384122a2-4903-4ae0-86bc-97f1f9cfa553 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "679f5e87-bac4-4169-bffa-555a53e7321f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:50 compute-0 nova_compute[254092]: 2025-11-25 16:41:50.583 254096 DEBUG nova.network.neutron [-] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:41:50 compute-0 nova_compute[254092]: 2025-11-25 16:41:50.601 254096 INFO nova.compute.manager [-] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Took 0.59 seconds to deallocate network for instance.
Nov 25 16:41:50 compute-0 nova_compute[254092]: 2025-11-25 16:41:50.607 254096 DEBUG nova.compute.manager [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Received event network-vif-unplugged-01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:41:50 compute-0 nova_compute[254092]: 2025-11-25 16:41:50.607 254096 DEBUG oslo_concurrency.lockutils [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "679f5e87-bac4-4169-bffa-555a53e7321f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:50 compute-0 nova_compute[254092]: 2025-11-25 16:41:50.607 254096 DEBUG oslo_concurrency.lockutils [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "679f5e87-bac4-4169-bffa-555a53e7321f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:50 compute-0 nova_compute[254092]: 2025-11-25 16:41:50.607 254096 DEBUG oslo_concurrency.lockutils [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "679f5e87-bac4-4169-bffa-555a53e7321f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:50 compute-0 nova_compute[254092]: 2025-11-25 16:41:50.608 254096 DEBUG nova.compute.manager [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] No waiting events found dispatching network-vif-unplugged-01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:41:50 compute-0 nova_compute[254092]: 2025-11-25 16:41:50.608 254096 WARNING nova.compute.manager [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Received unexpected event network-vif-unplugged-01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb for instance with vm_state deleted and task_state None.
Nov 25 16:41:50 compute-0 nova_compute[254092]: 2025-11-25 16:41:50.608 254096 DEBUG nova.compute.manager [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Received event network-vif-plugged-01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:41:50 compute-0 nova_compute[254092]: 2025-11-25 16:41:50.608 254096 DEBUG oslo_concurrency.lockutils [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "679f5e87-bac4-4169-bffa-555a53e7321f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:50 compute-0 nova_compute[254092]: 2025-11-25 16:41:50.608 254096 DEBUG oslo_concurrency.lockutils [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "679f5e87-bac4-4169-bffa-555a53e7321f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:50 compute-0 nova_compute[254092]: 2025-11-25 16:41:50.608 254096 DEBUG oslo_concurrency.lockutils [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "679f5e87-bac4-4169-bffa-555a53e7321f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:50 compute-0 nova_compute[254092]: 2025-11-25 16:41:50.609 254096 DEBUG nova.compute.manager [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] No waiting events found dispatching network-vif-plugged-01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:41:50 compute-0 nova_compute[254092]: 2025-11-25 16:41:50.609 254096 WARNING nova.compute.manager [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Received unexpected event network-vif-plugged-01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb for instance with vm_state deleted and task_state None.
Nov 25 16:41:50 compute-0 nova_compute[254092]: 2025-11-25 16:41:50.609 254096 DEBUG nova.compute.manager [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Received event network-vif-unplugged-63c5b67d-9c2e-4371-887d-db0d034d9072 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:41:50 compute-0 nova_compute[254092]: 2025-11-25 16:41:50.609 254096 DEBUG oslo_concurrency.lockutils [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a9e83b59-224b-49dd-83f7-d057737f5825-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:50 compute-0 nova_compute[254092]: 2025-11-25 16:41:50.609 254096 DEBUG oslo_concurrency.lockutils [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a9e83b59-224b-49dd-83f7-d057737f5825-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:50 compute-0 nova_compute[254092]: 2025-11-25 16:41:50.609 254096 DEBUG oslo_concurrency.lockutils [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a9e83b59-224b-49dd-83f7-d057737f5825-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:50 compute-0 nova_compute[254092]: 2025-11-25 16:41:50.609 254096 DEBUG nova.compute.manager [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] No waiting events found dispatching network-vif-unplugged-63c5b67d-9c2e-4371-887d-db0d034d9072 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:41:50 compute-0 nova_compute[254092]: 2025-11-25 16:41:50.610 254096 DEBUG nova.compute.manager [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Received event network-vif-unplugged-63c5b67d-9c2e-4371-887d-db0d034d9072 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:41:50 compute-0 nova_compute[254092]: 2025-11-25 16:41:50.610 254096 DEBUG nova.compute.manager [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Received event network-vif-deleted-01b0ba79-e74b-4cfc-9a21-270d9a4ca0eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:41:50 compute-0 nova_compute[254092]: 2025-11-25 16:41:50.610 254096 DEBUG nova.compute.manager [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Received event network-vif-plugged-63c5b67d-9c2e-4371-887d-db0d034d9072 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:41:50 compute-0 nova_compute[254092]: 2025-11-25 16:41:50.610 254096 DEBUG oslo_concurrency.lockutils [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a9e83b59-224b-49dd-83f7-d057737f5825-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:50 compute-0 nova_compute[254092]: 2025-11-25 16:41:50.610 254096 DEBUG oslo_concurrency.lockutils [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a9e83b59-224b-49dd-83f7-d057737f5825-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:50 compute-0 nova_compute[254092]: 2025-11-25 16:41:50.610 254096 DEBUG oslo_concurrency.lockutils [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a9e83b59-224b-49dd-83f7-d057737f5825-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:50 compute-0 nova_compute[254092]: 2025-11-25 16:41:50.611 254096 DEBUG nova.compute.manager [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] No waiting events found dispatching network-vif-plugged-63c5b67d-9c2e-4371-887d-db0d034d9072 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:41:50 compute-0 nova_compute[254092]: 2025-11-25 16:41:50.611 254096 WARNING nova.compute.manager [req-598a6852-230f-45e0-8f5a-ff86923ca929 req-62afad83-4da0-49a1-817b-ed2e2250f626 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Received unexpected event network-vif-plugged-63c5b67d-9c2e-4371-887d-db0d034d9072 for instance with vm_state active and task_state deleting.
Nov 25 16:41:50 compute-0 nova_compute[254092]: 2025-11-25 16:41:50.657 254096 DEBUG oslo_concurrency.lockutils [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:50 compute-0 nova_compute[254092]: 2025-11-25 16:41:50.657 254096 DEBUG oslo_concurrency.lockutils [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:50 compute-0 ceph-mon[74985]: pgmap v1693: 321 pgs: 321 active+clean; 362 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.7 MiB/s wr, 236 op/s
Nov 25 16:41:50 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2163470565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:41:50 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2727871419' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:41:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:41:50 compute-0 nova_compute[254092]: 2025-11-25 16:41:50.794 254096 DEBUG oslo_concurrency.processutils [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:41:51 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2441204567' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:41:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:41:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:41:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:41:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:41:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002612369867975516 of space, bias 1.0, pg target 0.7837109603926548 quantized to 32 (current 32)
Nov 25 16:41:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:41:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:41:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:41:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:41:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:41:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 16:41:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:41:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 16:41:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:41:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:41:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:41:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 16:41:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:41:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 16:41:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:41:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:41:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:41:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 16:41:51 compute-0 nova_compute[254092]: 2025-11-25 16:41:51.228 254096 DEBUG oslo_concurrency.processutils [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:51 compute-0 nova_compute[254092]: 2025-11-25 16:41:51.236 254096 DEBUG nova.compute.provider_tree [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:41:51 compute-0 nova_compute[254092]: 2025-11-25 16:41:51.257 254096 DEBUG nova.scheduler.client.report [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:41:51 compute-0 nova_compute[254092]: 2025-11-25 16:41:51.287 254096 DEBUG oslo_concurrency.lockutils [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:51 compute-0 nova_compute[254092]: 2025-11-25 16:41:51.322 254096 INFO nova.scheduler.client.report [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Deleted allocations for instance a9e83b59-224b-49dd-83f7-d057737f5825
Nov 25 16:41:51 compute-0 nova_compute[254092]: 2025-11-25 16:41:51.412 254096 DEBUG oslo_concurrency.lockutils [None req-d90f14b4-e137-4917-b3ab-72b1df9ef60b 3c1fd56de7cd4f5c9b1d85ffe8545c90 6ff6b3c59785407bb06c9d7e3969ea4f - - default default] Lock "a9e83b59-224b-49dd-83f7-d057737f5825" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.130s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1694: 321 pgs: 321 active+clean; 279 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 6.4 MiB/s wr, 493 op/s
Nov 25 16:41:51 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2441204567' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:41:52 compute-0 nova_compute[254092]: 2025-11-25 16:41:52.479 254096 DEBUG nova.virt.libvirt.driver [None req-30011fdc-ab45-40fe-bd9f-eb5262b71275 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 16:41:52 compute-0 ceph-mon[74985]: pgmap v1694: 321 pgs: 321 active+clean; 279 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 6.4 MiB/s wr, 493 op/s
Nov 25 16:41:52 compute-0 nova_compute[254092]: 2025-11-25 16:41:52.865 254096 DEBUG nova.compute.manager [req-7b3eeca5-12f5-4590-9c34-2f197e641e67 req-f878ab69-620c-46c0-a16a-4bb3d3f2eb29 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Received event network-vif-deleted-63c5b67d-9c2e-4371-887d-db0d034d9072 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:41:53 compute-0 nova_compute[254092]: 2025-11-25 16:41:53.196 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1695: 321 pgs: 321 active+clean; 279 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 6.4 MiB/s wr, 475 op/s
Nov 25 16:41:53 compute-0 nova_compute[254092]: 2025-11-25 16:41:53.590 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "796c46a8-971c-4b51-96c9-0e7c8682cfa8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:53 compute-0 nova_compute[254092]: 2025-11-25 16:41:53.590 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "796c46a8-971c-4b51-96c9-0e7c8682cfa8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:53 compute-0 nova_compute[254092]: 2025-11-25 16:41:53.611 254096 DEBUG nova.compute.manager [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:41:53 compute-0 nova_compute[254092]: 2025-11-25 16:41:53.631 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "b5e2a584-5835-4c63-84de-6f0446220d35" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:53 compute-0 nova_compute[254092]: 2025-11-25 16:41:53.632 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "b5e2a584-5835-4c63-84de-6f0446220d35" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:53 compute-0 nova_compute[254092]: 2025-11-25 16:41:53.658 254096 DEBUG nova.compute.manager [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:41:53 compute-0 nova_compute[254092]: 2025-11-25 16:41:53.693 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:53 compute-0 nova_compute[254092]: 2025-11-25 16:41:53.694 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:53 compute-0 nova_compute[254092]: 2025-11-25 16:41:53.704 254096 DEBUG nova.virt.hardware [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:41:53 compute-0 nova_compute[254092]: 2025-11-25 16:41:53.705 254096 INFO nova.compute.claims [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:41:53 compute-0 nova_compute[254092]: 2025-11-25 16:41:53.747 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:53 compute-0 nova_compute[254092]: 2025-11-25 16:41:53.891 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:41:54 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2422864745' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:41:54 compute-0 nova_compute[254092]: 2025-11-25 16:41:54.377 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:54 compute-0 nova_compute[254092]: 2025-11-25 16:41:54.384 254096 DEBUG nova.compute.provider_tree [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:41:54 compute-0 nova_compute[254092]: 2025-11-25 16:41:54.401 254096 DEBUG nova.scheduler.client.report [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:41:54 compute-0 nova_compute[254092]: 2025-11-25 16:41:54.423 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:54 compute-0 nova_compute[254092]: 2025-11-25 16:41:54.424 254096 DEBUG nova.compute.manager [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:41:54 compute-0 nova_compute[254092]: 2025-11-25 16:41:54.427 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:54 compute-0 nova_compute[254092]: 2025-11-25 16:41:54.437 254096 DEBUG nova.virt.hardware [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:41:54 compute-0 nova_compute[254092]: 2025-11-25 16:41:54.437 254096 INFO nova.compute.claims [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:41:54 compute-0 nova_compute[254092]: 2025-11-25 16:41:54.500 254096 DEBUG nova.compute.manager [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:41:54 compute-0 nova_compute[254092]: 2025-11-25 16:41:54.500 254096 DEBUG nova.network.neutron [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:41:54 compute-0 nova_compute[254092]: 2025-11-25 16:41:54.537 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:54 compute-0 nova_compute[254092]: 2025-11-25 16:41:54.539 254096 INFO nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:41:54 compute-0 nova_compute[254092]: 2025-11-25 16:41:54.562 254096 DEBUG nova.compute.manager [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:41:54 compute-0 nova_compute[254092]: 2025-11-25 16:41:54.670 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:54 compute-0 nova_compute[254092]: 2025-11-25 16:41:54.709 254096 DEBUG nova.compute.manager [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:41:54 compute-0 nova_compute[254092]: 2025-11-25 16:41:54.711 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:41:54 compute-0 nova_compute[254092]: 2025-11-25 16:41:54.712 254096 INFO nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Creating image(s)
Nov 25 16:41:54 compute-0 ceph-mon[74985]: pgmap v1695: 321 pgs: 321 active+clean; 279 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 6.4 MiB/s wr, 475 op/s
Nov 25 16:41:54 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2422864745' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:41:54 compute-0 nova_compute[254092]: 2025-11-25 16:41:54.734 254096 DEBUG nova.storage.rbd_utils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image 796c46a8-971c-4b51-96c9-0e7c8682cfa8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:54 compute-0 nova_compute[254092]: 2025-11-25 16:41:54.758 254096 DEBUG nova.storage.rbd_utils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image 796c46a8-971c-4b51-96c9-0e7c8682cfa8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:54 compute-0 nova_compute[254092]: 2025-11-25 16:41:54.789 254096 DEBUG nova.storage.rbd_utils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image 796c46a8-971c-4b51-96c9-0e7c8682cfa8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:54 compute-0 nova_compute[254092]: 2025-11-25 16:41:54.795 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:54 compute-0 nova_compute[254092]: 2025-11-25 16:41:54.842 254096 DEBUG nova.policy [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '03b851e958654346bc1caa437d7b76a5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '20e192e04e814ae28e09f3c1fdbd39d0', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:41:54 compute-0 kernel: tap5136afea-10 (unregistering): left promiscuous mode
Nov 25 16:41:54 compute-0 NetworkManager[48891]: <info>  [1764088914.8573] device (tap5136afea-10): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:41:54 compute-0 nova_compute[254092]: 2025-11-25 16:41:54.865 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:54 compute-0 ovn_controller[153477]: 2025-11-25T16:41:54Z|00625|binding|INFO|Releasing lport 5136afea-102e-46a1-8fdb-0af970c5af04 from this chassis (sb_readonly=0)
Nov 25 16:41:54 compute-0 ovn_controller[153477]: 2025-11-25T16:41:54Z|00626|binding|INFO|Setting lport 5136afea-102e-46a1-8fdb-0af970c5af04 down in Southbound
Nov 25 16:41:54 compute-0 ovn_controller[153477]: 2025-11-25T16:41:54Z|00627|binding|INFO|Removing iface tap5136afea-10 ovn-installed in OVS
Nov 25 16:41:54 compute-0 nova_compute[254092]: 2025-11-25 16:41:54.868 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:54.876 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:61:e3 10.100.0.13'], port_security=['fa:16:3e:70:61:e3 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '013dc18e-57cd-4733-8e98-7d20e3b5c4db', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2319f378c7fb448ebe89427d8dfa7e43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ccb207e5-4339-4ba4-8da5-c985ba20f305', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32b6d34d-eb18-415e-af2d-9e007dc0f6c6, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=5136afea-102e-46a1-8fdb-0af970c5af04) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:41:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:54.877 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 5136afea-102e-46a1-8fdb-0af970c5af04 in datapath f00f265b-63fa-48fb-9383-38ff6abf51c1 unbound from our chassis
Nov 25 16:41:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:54.879 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f00f265b-63fa-48fb-9383-38ff6abf51c1
Nov 25 16:41:54 compute-0 nova_compute[254092]: 2025-11-25 16:41:54.886 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:54 compute-0 nova_compute[254092]: 2025-11-25 16:41:54.900 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:54 compute-0 nova_compute[254092]: 2025-11-25 16:41:54.901 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:54 compute-0 nova_compute[254092]: 2025-11-25 16:41:54.902 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:54 compute-0 nova_compute[254092]: 2025-11-25 16:41:54.902 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:54.902 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[85c706d9-0b9a-44ec-9cf9-877b85176e19]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:54 compute-0 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d0000003e.scope: Deactivated successfully.
Nov 25 16:41:54 compute-0 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d0000003e.scope: Consumed 15.027s CPU time.
Nov 25 16:41:54 compute-0 systemd-machined[216343]: Machine qemu-73-instance-0000003e terminated.
Nov 25 16:41:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:54.938 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8fa99468-0e68-4682-a3d9-8d2e6ffc8ae4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:54 compute-0 nova_compute[254092]: 2025-11-25 16:41:54.941 254096 DEBUG nova.storage.rbd_utils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image 796c46a8-971c-4b51-96c9-0e7c8682cfa8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:54.944 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[3720b7d2-d73a-44a7-a29c-fc6c5caa14dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:54 compute-0 nova_compute[254092]: 2025-11-25 16:41:54.958 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 796c46a8-971c-4b51-96c9-0e7c8682cfa8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:54.975 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ef225fec-827d-4beb-955e-14e82e70d1f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:54.999 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b6b3c049-9b40-4e91-857a-fd9c097dc8ae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf00f265b-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:fc:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534207, 'reachable_time': 15987, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326770, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:55.018 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[99a10069-eb53-44dd-adcc-dad173bfbe13]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf00f265b-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534218, 'tstamp': 534218}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326771, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf00f265b-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534221, 'tstamp': 534221}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326771, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:41:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:55.021 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf00f265b-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:55 compute-0 nova_compute[254092]: 2025-11-25 16:41:55.023 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:55 compute-0 nova_compute[254092]: 2025-11-25 16:41:55.027 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:55.029 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf00f265b-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:55.029 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:41:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:55.030 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf00f265b-60, col_values=(('external_ids', {'iface-id': '57c889f7-e44b-4f52-8e8a-db17b4e1f3b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:41:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:41:55.030 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:41:55 compute-0 nova_compute[254092]: 2025-11-25 16:41:55.085 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:55 compute-0 nova_compute[254092]: 2025-11-25 16:41:55.091 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:41:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2614987473' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:41:55 compute-0 nova_compute[254092]: 2025-11-25 16:41:55.164 254096 DEBUG nova.compute.manager [req-8ab50d42-3dfe-4776-80b0-95eaef1e71a0 req-24263492-4a88-43df-8f15-69da581e4484 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Received event network-vif-unplugged-5136afea-102e-46a1-8fdb-0af970c5af04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:41:55 compute-0 nova_compute[254092]: 2025-11-25 16:41:55.165 254096 DEBUG oslo_concurrency.lockutils [req-8ab50d42-3dfe-4776-80b0-95eaef1e71a0 req-24263492-4a88-43df-8f15-69da581e4484 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:55 compute-0 nova_compute[254092]: 2025-11-25 16:41:55.165 254096 DEBUG oslo_concurrency.lockutils [req-8ab50d42-3dfe-4776-80b0-95eaef1e71a0 req-24263492-4a88-43df-8f15-69da581e4484 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:55 compute-0 nova_compute[254092]: 2025-11-25 16:41:55.166 254096 DEBUG oslo_concurrency.lockutils [req-8ab50d42-3dfe-4776-80b0-95eaef1e71a0 req-24263492-4a88-43df-8f15-69da581e4484 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:55 compute-0 nova_compute[254092]: 2025-11-25 16:41:55.167 254096 DEBUG nova.compute.manager [req-8ab50d42-3dfe-4776-80b0-95eaef1e71a0 req-24263492-4a88-43df-8f15-69da581e4484 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] No waiting events found dispatching network-vif-unplugged-5136afea-102e-46a1-8fdb-0af970c5af04 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:41:55 compute-0 nova_compute[254092]: 2025-11-25 16:41:55.169 254096 WARNING nova.compute.manager [req-8ab50d42-3dfe-4776-80b0-95eaef1e71a0 req-24263492-4a88-43df-8f15-69da581e4484 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Received unexpected event network-vif-unplugged-5136afea-102e-46a1-8fdb-0af970c5af04 for instance with vm_state active and task_state powering-off.
Nov 25 16:41:55 compute-0 nova_compute[254092]: 2025-11-25 16:41:55.172 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:55 compute-0 nova_compute[254092]: 2025-11-25 16:41:55.186 254096 DEBUG nova.compute.provider_tree [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:41:55 compute-0 nova_compute[254092]: 2025-11-25 16:41:55.214 254096 DEBUG nova.scheduler.client.report [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:41:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 16:41:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/920058783' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:41:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 16:41:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/920058783' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:41:55 compute-0 nova_compute[254092]: 2025-11-25 16:41:55.264 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:55 compute-0 nova_compute[254092]: 2025-11-25 16:41:55.266 254096 DEBUG nova.compute.manager [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:41:55 compute-0 nova_compute[254092]: 2025-11-25 16:41:55.339 254096 DEBUG nova.compute.manager [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:41:55 compute-0 nova_compute[254092]: 2025-11-25 16:41:55.339 254096 DEBUG nova.network.neutron [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:41:55 compute-0 nova_compute[254092]: 2025-11-25 16:41:55.384 254096 INFO nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:41:55 compute-0 nova_compute[254092]: 2025-11-25 16:41:55.416 254096 DEBUG nova.compute.manager [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:41:55 compute-0 nova_compute[254092]: 2025-11-25 16:41:55.465 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 796c46a8-971c-4b51-96c9-0e7c8682cfa8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:55 compute-0 nova_compute[254092]: 2025-11-25 16:41:55.553 254096 INFO nova.virt.libvirt.driver [None req-30011fdc-ab45-40fe-bd9f-eb5262b71275 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Instance shutdown successfully after 13 seconds.
Nov 25 16:41:55 compute-0 nova_compute[254092]: 2025-11-25 16:41:55.556 254096 DEBUG nova.compute.manager [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:41:55 compute-0 nova_compute[254092]: 2025-11-25 16:41:55.557 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:41:55 compute-0 nova_compute[254092]: 2025-11-25 16:41:55.557 254096 INFO nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Creating image(s)
Nov 25 16:41:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1696: 321 pgs: 321 active+clean; 279 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 6.4 MiB/s wr, 478 op/s
Nov 25 16:41:55 compute-0 nova_compute[254092]: 2025-11-25 16:41:55.577 254096 DEBUG nova.storage.rbd_utils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image b5e2a584-5835-4c63-84de-6f0446220d35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:55 compute-0 nova_compute[254092]: 2025-11-25 16:41:55.598 254096 DEBUG nova.storage.rbd_utils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image b5e2a584-5835-4c63-84de-6f0446220d35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:55 compute-0 nova_compute[254092]: 2025-11-25 16:41:55.619 254096 DEBUG nova.storage.rbd_utils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image b5e2a584-5835-4c63-84de-6f0446220d35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:55 compute-0 nova_compute[254092]: 2025-11-25 16:41:55.623 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:55 compute-0 nova_compute[254092]: 2025-11-25 16:41:55.670 254096 DEBUG nova.storage.rbd_utils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] resizing rbd image 796c46a8-971c-4b51-96c9-0e7c8682cfa8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:41:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:41:55 compute-0 nova_compute[254092]: 2025-11-25 16:41:55.780 254096 DEBUG nova.policy [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '03b851e958654346bc1caa437d7b76a5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '20e192e04e814ae28e09f3c1fdbd39d0', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:41:55 compute-0 nova_compute[254092]: 2025-11-25 16:41:55.784 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:55 compute-0 nova_compute[254092]: 2025-11-25 16:41:55.785 254096 DEBUG nova.network.neutron [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Successfully created port: 9ef3b6ce-50de-444f-b27f-b16a5b2b832a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:41:55 compute-0 nova_compute[254092]: 2025-11-25 16:41:55.789 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:55 compute-0 nova_compute[254092]: 2025-11-25 16:41:55.790 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:55 compute-0 nova_compute[254092]: 2025-11-25 16:41:55.790 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:55 compute-0 nova_compute[254092]: 2025-11-25 16:41:55.844 254096 DEBUG nova.storage.rbd_utils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image b5e2a584-5835-4c63-84de-6f0446220d35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2614987473' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:41:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/920058783' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:41:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/920058783' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:41:55 compute-0 nova_compute[254092]: 2025-11-25 16:41:55.848 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 b5e2a584-5835-4c63-84de-6f0446220d35_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:55 compute-0 nova_compute[254092]: 2025-11-25 16:41:55.890 254096 INFO nova.virt.libvirt.driver [-] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Instance destroyed successfully.
Nov 25 16:41:55 compute-0 nova_compute[254092]: 2025-11-25 16:41:55.891 254096 DEBUG nova.objects.instance [None req-30011fdc-ab45-40fe-bd9f-eb5262b71275 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lazy-loading 'numa_topology' on Instance uuid 013dc18e-57cd-4733-8e98-7d20e3b5c4db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:41:55 compute-0 nova_compute[254092]: 2025-11-25 16:41:55.906 254096 DEBUG nova.compute.manager [None req-30011fdc-ab45-40fe-bd9f-eb5262b71275 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:41:55 compute-0 nova_compute[254092]: 2025-11-25 16:41:55.983 254096 DEBUG nova.objects.instance [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lazy-loading 'migration_context' on Instance uuid 796c46a8-971c-4b51-96c9-0e7c8682cfa8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:41:55 compute-0 nova_compute[254092]: 2025-11-25 16:41:55.998 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:41:55 compute-0 nova_compute[254092]: 2025-11-25 16:41:55.999 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Ensure instance console log exists: /var/lib/nova/instances/796c46a8-971c-4b51-96c9-0e7c8682cfa8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:41:55 compute-0 nova_compute[254092]: 2025-11-25 16:41:55.999 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:56 compute-0 nova_compute[254092]: 2025-11-25 16:41:55.999 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:56 compute-0 nova_compute[254092]: 2025-11-25 16:41:56.000 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:56 compute-0 nova_compute[254092]: 2025-11-25 16:41:56.022 254096 DEBUG oslo_concurrency.lockutils [None req-30011fdc-ab45-40fe-bd9f-eb5262b71275 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:56 compute-0 nova_compute[254092]: 2025-11-25 16:41:56.784 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 b5e2a584-5835-4c63-84de-6f0446220d35_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.936s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:56 compute-0 nova_compute[254092]: 2025-11-25 16:41:56.850 254096 DEBUG nova.storage.rbd_utils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] resizing rbd image b5e2a584-5835-4c63-84de-6f0446220d35_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:41:56 compute-0 ceph-mon[74985]: pgmap v1696: 321 pgs: 321 active+clean; 279 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 6.4 MiB/s wr, 478 op/s
Nov 25 16:41:57 compute-0 nova_compute[254092]: 2025-11-25 16:41:57.256 254096 DEBUG nova.compute.manager [req-7b6335b7-e5a1-4439-a517-fde0c090df60 req-9348785c-0a85-466f-b080-485564c711b8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Received event network-vif-plugged-5136afea-102e-46a1-8fdb-0af970c5af04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:41:57 compute-0 nova_compute[254092]: 2025-11-25 16:41:57.256 254096 DEBUG oslo_concurrency.lockutils [req-7b6335b7-e5a1-4439-a517-fde0c090df60 req-9348785c-0a85-466f-b080-485564c711b8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:57 compute-0 nova_compute[254092]: 2025-11-25 16:41:57.257 254096 DEBUG oslo_concurrency.lockutils [req-7b6335b7-e5a1-4439-a517-fde0c090df60 req-9348785c-0a85-466f-b080-485564c711b8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:57 compute-0 nova_compute[254092]: 2025-11-25 16:41:57.258 254096 DEBUG oslo_concurrency.lockutils [req-7b6335b7-e5a1-4439-a517-fde0c090df60 req-9348785c-0a85-466f-b080-485564c711b8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:57 compute-0 nova_compute[254092]: 2025-11-25 16:41:57.258 254096 DEBUG nova.compute.manager [req-7b6335b7-e5a1-4439-a517-fde0c090df60 req-9348785c-0a85-466f-b080-485564c711b8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] No waiting events found dispatching network-vif-plugged-5136afea-102e-46a1-8fdb-0af970c5af04 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:41:57 compute-0 nova_compute[254092]: 2025-11-25 16:41:57.259 254096 WARNING nova.compute.manager [req-7b6335b7-e5a1-4439-a517-fde0c090df60 req-9348785c-0a85-466f-b080-485564c711b8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Received unexpected event network-vif-plugged-5136afea-102e-46a1-8fdb-0af970c5af04 for instance with vm_state stopped and task_state None.
Nov 25 16:41:57 compute-0 nova_compute[254092]: 2025-11-25 16:41:57.317 254096 DEBUG nova.network.neutron [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Successfully created port: d2de6446-cca8-4827-a039-647fe671bab4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:41:57 compute-0 nova_compute[254092]: 2025-11-25 16:41:57.322 254096 DEBUG nova.objects.instance [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lazy-loading 'migration_context' on Instance uuid b5e2a584-5835-4c63-84de-6f0446220d35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:41:57 compute-0 nova_compute[254092]: 2025-11-25 16:41:57.336 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:41:57 compute-0 nova_compute[254092]: 2025-11-25 16:41:57.336 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Ensure instance console log exists: /var/lib/nova/instances/b5e2a584-5835-4c63-84de-6f0446220d35/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:41:57 compute-0 nova_compute[254092]: 2025-11-25 16:41:57.337 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:41:57 compute-0 nova_compute[254092]: 2025-11-25 16:41:57.337 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:41:57 compute-0 nova_compute[254092]: 2025-11-25 16:41:57.338 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:41:57 compute-0 nova_compute[254092]: 2025-11-25 16:41:57.413 254096 DEBUG nova.network.neutron [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Successfully updated port: 9ef3b6ce-50de-444f-b27f-b16a5b2b832a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:41:57 compute-0 nova_compute[254092]: 2025-11-25 16:41:57.433 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "refresh_cache-796c46a8-971c-4b51-96c9-0e7c8682cfa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:41:57 compute-0 nova_compute[254092]: 2025-11-25 16:41:57.433 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquired lock "refresh_cache-796c46a8-971c-4b51-96c9-0e7c8682cfa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:41:57 compute-0 nova_compute[254092]: 2025-11-25 16:41:57.433 254096 DEBUG nova.network.neutron [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:41:57 compute-0 ovn_controller[153477]: 2025-11-25T16:41:57Z|00628|binding|INFO|Releasing lport 57c889f7-e44b-4f52-8e8a-db17b4e1f3b8 from this chassis (sb_readonly=0)
Nov 25 16:41:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1697: 321 pgs: 321 active+clean; 303 MiB data, 700 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 5.4 MiB/s wr, 341 op/s
Nov 25 16:41:57 compute-0 nova_compute[254092]: 2025-11-25 16:41:57.609 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:57 compute-0 nova_compute[254092]: 2025-11-25 16:41:57.679 254096 DEBUG nova.network.neutron [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:41:58 compute-0 nova_compute[254092]: 2025-11-25 16:41:58.183 254096 DEBUG nova.objects.instance [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lazy-loading 'flavor' on Instance uuid 013dc18e-57cd-4733-8e98-7d20e3b5c4db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:41:58 compute-0 nova_compute[254092]: 2025-11-25 16:41:58.197 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:58 compute-0 nova_compute[254092]: 2025-11-25 16:41:58.201 254096 DEBUG oslo_concurrency.lockutils [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "refresh_cache-013dc18e-57cd-4733-8e98-7d20e3b5c4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:41:58 compute-0 nova_compute[254092]: 2025-11-25 16:41:58.202 254096 DEBUG oslo_concurrency.lockutils [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquired lock "refresh_cache-013dc18e-57cd-4733-8e98-7d20e3b5c4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:41:58 compute-0 nova_compute[254092]: 2025-11-25 16:41:58.202 254096 DEBUG nova.network.neutron [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:41:58 compute-0 nova_compute[254092]: 2025-11-25 16:41:58.202 254096 DEBUG nova.objects.instance [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lazy-loading 'info_cache' on Instance uuid 013dc18e-57cd-4733-8e98-7d20e3b5c4db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:41:58 compute-0 nova_compute[254092]: 2025-11-25 16:41:58.534 254096 DEBUG nova.network.neutron [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Successfully updated port: d2de6446-cca8-4827-a039-647fe671bab4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:41:58 compute-0 nova_compute[254092]: 2025-11-25 16:41:58.560 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "refresh_cache-b5e2a584-5835-4c63-84de-6f0446220d35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:41:58 compute-0 nova_compute[254092]: 2025-11-25 16:41:58.561 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquired lock "refresh_cache-b5e2a584-5835-4c63-84de-6f0446220d35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:41:58 compute-0 nova_compute[254092]: 2025-11-25 16:41:58.561 254096 DEBUG nova.network.neutron [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:41:58 compute-0 nova_compute[254092]: 2025-11-25 16:41:58.805 254096 DEBUG nova.network.neutron [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:41:58 compute-0 ceph-mon[74985]: pgmap v1697: 321 pgs: 321 active+clean; 303 MiB data, 700 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 5.4 MiB/s wr, 341 op/s
Nov 25 16:41:58 compute-0 nova_compute[254092]: 2025-11-25 16:41:58.979 254096 DEBUG nova.network.neutron [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Updating instance_info_cache with network_info: [{"id": "9ef3b6ce-50de-444f-b27f-b16a5b2b832a", "address": "fa:16:3e:b9:ae:2e", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ef3b6ce-50", "ovs_interfaceid": "9ef3b6ce-50de-444f-b27f-b16a5b2b832a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.060 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Releasing lock "refresh_cache-796c46a8-971c-4b51-96c9-0e7c8682cfa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.061 254096 DEBUG nova.compute.manager [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Instance network_info: |[{"id": "9ef3b6ce-50de-444f-b27f-b16a5b2b832a", "address": "fa:16:3e:b9:ae:2e", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ef3b6ce-50", "ovs_interfaceid": "9ef3b6ce-50de-444f-b27f-b16a5b2b832a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.064 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Start _get_guest_xml network_info=[{"id": "9ef3b6ce-50de-444f-b27f-b16a5b2b832a", "address": "fa:16:3e:b9:ae:2e", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ef3b6ce-50", "ovs_interfaceid": "9ef3b6ce-50de-444f-b27f-b16a5b2b832a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.072 254096 WARNING nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.077 254096 DEBUG nova.virt.libvirt.host [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.078 254096 DEBUG nova.virt.libvirt.host [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.083 254096 DEBUG nova.virt.libvirt.host [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.084 254096 DEBUG nova.virt.libvirt.host [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.084 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.085 254096 DEBUG nova.virt.hardware [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.085 254096 DEBUG nova.virt.hardware [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.085 254096 DEBUG nova.virt.hardware [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.086 254096 DEBUG nova.virt.hardware [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.086 254096 DEBUG nova.virt.hardware [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.086 254096 DEBUG nova.virt.hardware [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.086 254096 DEBUG nova.virt.hardware [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.087 254096 DEBUG nova.virt.hardware [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.087 254096 DEBUG nova.virt.hardware [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.087 254096 DEBUG nova.virt.hardware [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.087 254096 DEBUG nova.virt.hardware [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.092 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.356 254096 DEBUG nova.compute.manager [req-67bd1c5c-6176-4036-8cb3-24fc51f1cd6a req-255d81dc-cef1-41fd-9edc-f0ff4a9bffb0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Received event network-changed-9ef3b6ce-50de-444f-b27f-b16a5b2b832a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.357 254096 DEBUG nova.compute.manager [req-67bd1c5c-6176-4036-8cb3-24fc51f1cd6a req-255d81dc-cef1-41fd-9edc-f0ff4a9bffb0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Refreshing instance network info cache due to event network-changed-9ef3b6ce-50de-444f-b27f-b16a5b2b832a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.357 254096 DEBUG oslo_concurrency.lockutils [req-67bd1c5c-6176-4036-8cb3-24fc51f1cd6a req-255d81dc-cef1-41fd-9edc-f0ff4a9bffb0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-796c46a8-971c-4b51-96c9-0e7c8682cfa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.357 254096 DEBUG oslo_concurrency.lockutils [req-67bd1c5c-6176-4036-8cb3-24fc51f1cd6a req-255d81dc-cef1-41fd-9edc-f0ff4a9bffb0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-796c46a8-971c-4b51-96c9-0e7c8682cfa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.357 254096 DEBUG nova.network.neutron [req-67bd1c5c-6176-4036-8cb3-24fc51f1cd6a req-255d81dc-cef1-41fd-9edc-f0ff4a9bffb0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Refreshing network info cache for port 9ef3b6ce-50de-444f-b27f-b16a5b2b832a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.540 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:41:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1698: 321 pgs: 321 active+clean; 303 MiB data, 700 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.3 MiB/s wr, 263 op/s
Nov 25 16:41:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:41:59 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4125572864' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.625 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.646 254096 DEBUG nova.storage.rbd_utils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image 796c46a8-971c-4b51-96c9-0e7c8682cfa8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.650 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.763 254096 DEBUG nova.network.neutron [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Updating instance_info_cache with network_info: [{"id": "d2de6446-cca8-4827-a039-647fe671bab4", "address": "fa:16:3e:a1:2d:7c", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2de6446-cc", "ovs_interfaceid": "d2de6446-cca8-4827-a039-647fe671bab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.886 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Releasing lock "refresh_cache-b5e2a584-5835-4c63-84de-6f0446220d35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.886 254096 DEBUG nova.compute.manager [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Instance network_info: |[{"id": "d2de6446-cca8-4827-a039-647fe671bab4", "address": "fa:16:3e:a1:2d:7c", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2de6446-cc", "ovs_interfaceid": "d2de6446-cca8-4827-a039-647fe671bab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.888 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Start _get_guest_xml network_info=[{"id": "d2de6446-cca8-4827-a039-647fe671bab4", "address": "fa:16:3e:a1:2d:7c", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2de6446-cc", "ovs_interfaceid": "d2de6446-cca8-4827-a039-647fe671bab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.892 254096 WARNING nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.896 254096 DEBUG nova.virt.libvirt.host [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.896 254096 DEBUG nova.virt.libvirt.host [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.899 254096 DEBUG nova.virt.libvirt.host [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.900 254096 DEBUG nova.virt.libvirt.host [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.900 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.901 254096 DEBUG nova.virt.hardware [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.901 254096 DEBUG nova.virt.hardware [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.902 254096 DEBUG nova.virt.hardware [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.902 254096 DEBUG nova.virt.hardware [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.902 254096 DEBUG nova.virt.hardware [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.902 254096 DEBUG nova.virt.hardware [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.902 254096 DEBUG nova.virt.hardware [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.903 254096 DEBUG nova.virt.hardware [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.903 254096 DEBUG nova.virt.hardware [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.903 254096 DEBUG nova.virt.hardware [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.903 254096 DEBUG nova.virt.hardware [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:41:59 compute-0 nova_compute[254092]: 2025-11-25 16:41:59.907 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:41:59 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4125572864' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:42:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:42:00 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/925887909' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.109 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.111 254096 DEBUG nova.virt.libvirt.vif [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:41:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1199053129',display_name='tempest-MultipleCreateTestJSON-server-1199053129-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1199053129-1',id=68,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='20e192e04e814ae28e09f3c1fdbd39d0',ramdisk_id='',reservation_id='r-vw7x2v7w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-193123640',owner_user_name='tempest-MultipleCreateTestJSON-193123640-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:41:54Z,user_data=None,user_id='03b851e958654346bc1caa437d7b76a5',uuid=796c46a8-971c-4b51-96c9-0e7c8682cfa8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9ef3b6ce-50de-444f-b27f-b16a5b2b832a", "address": "fa:16:3e:b9:ae:2e", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ef3b6ce-50", "ovs_interfaceid": "9ef3b6ce-50de-444f-b27f-b16a5b2b832a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.111 254096 DEBUG nova.network.os_vif_util [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Converting VIF {"id": "9ef3b6ce-50de-444f-b27f-b16a5b2b832a", "address": "fa:16:3e:b9:ae:2e", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ef3b6ce-50", "ovs_interfaceid": "9ef3b6ce-50de-444f-b27f-b16a5b2b832a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.112 254096 DEBUG nova.network.os_vif_util [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:ae:2e,bridge_name='br-int',has_traffic_filtering=True,id=9ef3b6ce-50de-444f-b27f-b16a5b2b832a,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ef3b6ce-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.113 254096 DEBUG nova.objects.instance [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 796c46a8-971c-4b51-96c9-0e7c8682cfa8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.124 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:42:00 compute-0 nova_compute[254092]:   <uuid>796c46a8-971c-4b51-96c9-0e7c8682cfa8</uuid>
Nov 25 16:42:00 compute-0 nova_compute[254092]:   <name>instance-00000044</name>
Nov 25 16:42:00 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:42:00 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:42:00 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <nova:name>tempest-MultipleCreateTestJSON-server-1199053129-1</nova:name>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:41:59</nova:creationTime>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:42:00 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:42:00 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:42:00 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:42:00 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:42:00 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:42:00 compute-0 nova_compute[254092]:         <nova:user uuid="03b851e958654346bc1caa437d7b76a5">tempest-MultipleCreateTestJSON-193123640-project-member</nova:user>
Nov 25 16:42:00 compute-0 nova_compute[254092]:         <nova:project uuid="20e192e04e814ae28e09f3c1fdbd39d0">tempest-MultipleCreateTestJSON-193123640</nova:project>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:42:00 compute-0 nova_compute[254092]:         <nova:port uuid="9ef3b6ce-50de-444f-b27f-b16a5b2b832a">
Nov 25 16:42:00 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:42:00 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:42:00 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <system>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <entry name="serial">796c46a8-971c-4b51-96c9-0e7c8682cfa8</entry>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <entry name="uuid">796c46a8-971c-4b51-96c9-0e7c8682cfa8</entry>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     </system>
Nov 25 16:42:00 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:42:00 compute-0 nova_compute[254092]:   <os>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:   </os>
Nov 25 16:42:00 compute-0 nova_compute[254092]:   <features>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:   </features>
Nov 25 16:42:00 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:42:00 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:42:00 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/796c46a8-971c-4b51-96c9-0e7c8682cfa8_disk">
Nov 25 16:42:00 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       </source>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:42:00 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/796c46a8-971c-4b51-96c9-0e7c8682cfa8_disk.config">
Nov 25 16:42:00 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       </source>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:42:00 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:b9:ae:2e"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <target dev="tap9ef3b6ce-50"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/796c46a8-971c-4b51-96c9-0e7c8682cfa8/console.log" append="off"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <video>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     </video>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:42:00 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:42:00 compute-0 nova_compute[254092]: </domain>
Nov 25 16:42:00 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.125 254096 DEBUG nova.compute.manager [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Preparing to wait for external event network-vif-plugged-9ef3b6ce-50de-444f-b27f-b16a5b2b832a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.125 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "796c46a8-971c-4b51-96c9-0e7c8682cfa8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.126 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "796c46a8-971c-4b51-96c9-0e7c8682cfa8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.126 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "796c46a8-971c-4b51-96c9-0e7c8682cfa8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.127 254096 DEBUG nova.virt.libvirt.vif [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:41:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1199053129',display_name='tempest-MultipleCreateTestJSON-server-1199053129-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1199053129-1',id=68,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='20e192e04e814ae28e09f3c1fdbd39d0',ramdisk_id='',reservation_id='r-vw7x2v7w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-193123640',owner_user_name='tempest-MultipleCreateTestJSON-193123640-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:41:54Z,user_data=None,user_id='03b851e958654346bc1caa437d7b76a5',uuid=796c46a8-971c-4b51-96c9-0e7c8682cfa8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9ef3b6ce-50de-444f-b27f-b16a5b2b832a", "address": "fa:16:3e:b9:ae:2e", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ef3b6ce-50", "ovs_interfaceid": "9ef3b6ce-50de-444f-b27f-b16a5b2b832a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.127 254096 DEBUG nova.network.os_vif_util [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Converting VIF {"id": "9ef3b6ce-50de-444f-b27f-b16a5b2b832a", "address": "fa:16:3e:b9:ae:2e", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ef3b6ce-50", "ovs_interfaceid": "9ef3b6ce-50de-444f-b27f-b16a5b2b832a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.127 254096 DEBUG nova.network.os_vif_util [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:ae:2e,bridge_name='br-int',has_traffic_filtering=True,id=9ef3b6ce-50de-444f-b27f-b16a5b2b832a,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ef3b6ce-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.128 254096 DEBUG os_vif [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:ae:2e,bridge_name='br-int',has_traffic_filtering=True,id=9ef3b6ce-50de-444f-b27f-b16a5b2b832a,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ef3b6ce-50') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.128 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.129 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.129 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.131 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.131 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9ef3b6ce-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.132 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9ef3b6ce-50, col_values=(('external_ids', {'iface-id': '9ef3b6ce-50de-444f-b27f-b16a5b2b832a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b9:ae:2e', 'vm-uuid': '796c46a8-971c-4b51-96c9-0e7c8682cfa8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:00 compute-0 NetworkManager[48891]: <info>  [1764088920.1343] manager: (tap9ef3b6ce-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/273)
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.133 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.137 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.145 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.146 254096 INFO os_vif [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:ae:2e,bridge_name='br-int',has_traffic_filtering=True,id=9ef3b6ce-50de-444f-b27f-b16a5b2b832a,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ef3b6ce-50')
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.218 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.218 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.218 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] No VIF found with MAC fa:16:3e:b9:ae:2e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.219 254096 INFO nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Using config drive
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.242 254096 DEBUG nova.storage.rbd_utils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image 796c46a8-971c-4b51-96c9-0e7c8682cfa8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.261 254096 DEBUG nova.network.neutron [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Updating instance_info_cache with network_info: [{"id": "5136afea-102e-46a1-8fdb-0af970c5af04", "address": "fa:16:3e:70:61:e3", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5136afea-10", "ovs_interfaceid": "5136afea-102e-46a1-8fdb-0af970c5af04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.276 254096 DEBUG oslo_concurrency.lockutils [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Releasing lock "refresh_cache-013dc18e-57cd-4733-8e98-7d20e3b5c4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.297 254096 INFO nova.virt.libvirt.driver [-] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Instance destroyed successfully.
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.297 254096 DEBUG nova.objects.instance [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lazy-loading 'numa_topology' on Instance uuid 013dc18e-57cd-4733-8e98-7d20e3b5c4db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.309 254096 DEBUG nova.objects.instance [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lazy-loading 'resources' on Instance uuid 013dc18e-57cd-4733-8e98-7d20e3b5c4db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.323 254096 DEBUG nova.virt.libvirt.vif [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:41:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1278256596',display_name='tempest-ListServerFiltersTestJSON-instance-1278256596',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1278256596',id=62,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:41:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='2319f378c7fb448ebe89427d8dfa7e43',ramdisk_id='',reservation_id='r-t7tk7x1h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-675991579',owner_user_name='tempest-ListServerFiltersTestJSON-675991579-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:41:55Z,user_data=None,user_id='e0d077039e0b4d9e8d5663768f40fa48',uuid=013dc18e-57cd-4733-8e98-7d20e3b5c4db,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "5136afea-102e-46a1-8fdb-0af970c5af04", "address": "fa:16:3e:70:61:e3", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5136afea-10", "ovs_interfaceid": "5136afea-102e-46a1-8fdb-0af970c5af04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.323 254096 DEBUG nova.network.os_vif_util [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Converting VIF {"id": "5136afea-102e-46a1-8fdb-0af970c5af04", "address": "fa:16:3e:70:61:e3", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5136afea-10", "ovs_interfaceid": "5136afea-102e-46a1-8fdb-0af970c5af04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.324 254096 DEBUG nova.network.os_vif_util [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:61:e3,bridge_name='br-int',has_traffic_filtering=True,id=5136afea-102e-46a1-8fdb-0af970c5af04,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5136afea-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.324 254096 DEBUG os_vif [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:61:e3,bridge_name='br-int',has_traffic_filtering=True,id=5136afea-102e-46a1-8fdb-0af970c5af04,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5136afea-10') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.326 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.326 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5136afea-10, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.328 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.330 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.332 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.334 254096 INFO os_vif [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:61:e3,bridge_name='br-int',has_traffic_filtering=True,id=5136afea-102e-46a1-8fdb-0af970c5af04,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5136afea-10')
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.340 254096 DEBUG nova.virt.libvirt.driver [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Start _get_guest_xml network_info=[{"id": "5136afea-102e-46a1-8fdb-0af970c5af04", "address": "fa:16:3e:70:61:e3", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5136afea-10", "ovs_interfaceid": "5136afea-102e-46a1-8fdb-0af970c5af04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.342 254096 WARNING nova.virt.libvirt.driver [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:42:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:42:00 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1238201947' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.368 254096 DEBUG nova.virt.libvirt.host [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.369 254096 DEBUG nova.virt.libvirt.host [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.372 254096 DEBUG nova.virt.libvirt.host [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.372 254096 DEBUG nova.virt.libvirt.host [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.373 254096 DEBUG nova.virt.libvirt.driver [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.373 254096 DEBUG nova.virt.hardware [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.374 254096 DEBUG nova.virt.hardware [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.374 254096 DEBUG nova.virt.hardware [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.375 254096 DEBUG nova.virt.hardware [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.375 254096 DEBUG nova.virt.hardware [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.375 254096 DEBUG nova.virt.hardware [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.375 254096 DEBUG nova.virt.hardware [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.376 254096 DEBUG nova.virt.hardware [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.376 254096 DEBUG nova.virt.hardware [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.376 254096 DEBUG nova.virt.hardware [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.377 254096 DEBUG nova.virt.hardware [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.377 254096 DEBUG nova.objects.instance [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 013dc18e-57cd-4733-8e98-7d20e3b5c4db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.384 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.402 254096 DEBUG nova.storage.rbd_utils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image b5e2a584-5835-4c63-84de-6f0446220d35_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.408 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.438 254096 DEBUG oslo_concurrency.processutils [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:42:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:42:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:42:00 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4207363251' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:42:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:42:00 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2788681208' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.865 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.867 254096 DEBUG nova.virt.libvirt.vif [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:41:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1199053129',display_name='tempest-MultipleCreateTestJSON-server-1199053129-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1199053129-2',id=69,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='20e192e04e814ae28e09f3c1fdbd39d0',ramdisk_id='',reservation_id='r-vw7x2v7w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-193123640',owner_user_name='tempest-MultipleCreateTestJSON-193123640-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:41:55Z,user_data=None,user_id='03b851e958654346bc1caa437d7b76a5',uuid=b5e2a584-5835-4c63-84de-6f0446220d35,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d2de6446-cca8-4827-a039-647fe671bab4", "address": "fa:16:3e:a1:2d:7c", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2de6446-cc", "ovs_interfaceid": "d2de6446-cca8-4827-a039-647fe671bab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.868 254096 DEBUG nova.network.os_vif_util [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Converting VIF {"id": "d2de6446-cca8-4827-a039-647fe671bab4", "address": "fa:16:3e:a1:2d:7c", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2de6446-cc", "ovs_interfaceid": "d2de6446-cca8-4827-a039-647fe671bab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.869 254096 DEBUG nova.network.os_vif_util [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a1:2d:7c,bridge_name='br-int',has_traffic_filtering=True,id=d2de6446-cca8-4827-a039-647fe671bab4,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd2de6446-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.870 254096 DEBUG nova.objects.instance [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lazy-loading 'pci_devices' on Instance uuid b5e2a584-5835-4c63-84de-6f0446220d35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.878 254096 DEBUG oslo_concurrency.processutils [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.914 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:42:00 compute-0 nova_compute[254092]:   <uuid>b5e2a584-5835-4c63-84de-6f0446220d35</uuid>
Nov 25 16:42:00 compute-0 nova_compute[254092]:   <name>instance-00000045</name>
Nov 25 16:42:00 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:42:00 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:42:00 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <nova:name>tempest-MultipleCreateTestJSON-server-1199053129-2</nova:name>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:41:59</nova:creationTime>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:42:00 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:42:00 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:42:00 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:42:00 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:42:00 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:42:00 compute-0 nova_compute[254092]:         <nova:user uuid="03b851e958654346bc1caa437d7b76a5">tempest-MultipleCreateTestJSON-193123640-project-member</nova:user>
Nov 25 16:42:00 compute-0 nova_compute[254092]:         <nova:project uuid="20e192e04e814ae28e09f3c1fdbd39d0">tempest-MultipleCreateTestJSON-193123640</nova:project>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:42:00 compute-0 nova_compute[254092]:         <nova:port uuid="d2de6446-cca8-4827-a039-647fe671bab4">
Nov 25 16:42:00 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:42:00 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:42:00 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <system>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <entry name="serial">b5e2a584-5835-4c63-84de-6f0446220d35</entry>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <entry name="uuid">b5e2a584-5835-4c63-84de-6f0446220d35</entry>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     </system>
Nov 25 16:42:00 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:42:00 compute-0 nova_compute[254092]:   <os>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:   </os>
Nov 25 16:42:00 compute-0 nova_compute[254092]:   <features>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:   </features>
Nov 25 16:42:00 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:42:00 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:42:00 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/b5e2a584-5835-4c63-84de-6f0446220d35_disk">
Nov 25 16:42:00 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       </source>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:42:00 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/b5e2a584-5835-4c63-84de-6f0446220d35_disk.config">
Nov 25 16:42:00 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       </source>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:42:00 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:a1:2d:7c"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <target dev="tapd2de6446-cc"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/b5e2a584-5835-4c63-84de-6f0446220d35/console.log" append="off"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <video>
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     </video>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:42:00 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:42:00 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:42:00 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:42:00 compute-0 nova_compute[254092]: </domain>
Nov 25 16:42:00 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.915 254096 DEBUG nova.compute.manager [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Preparing to wait for external event network-vif-plugged-d2de6446-cca8-4827-a039-647fe671bab4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.916 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "b5e2a584-5835-4c63-84de-6f0446220d35-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.916 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "b5e2a584-5835-4c63-84de-6f0446220d35-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.917 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "b5e2a584-5835-4c63-84de-6f0446220d35-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.918 254096 DEBUG nova.virt.libvirt.vif [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:41:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1199053129',display_name='tempest-MultipleCreateTestJSON-server-1199053129-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1199053129-2',id=69,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='20e192e04e814ae28e09f3c1fdbd39d0',ramdisk_id='',reservation_id='r-vw7x2v7w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-193123640',owner_user_name='tempest-MultipleCreateTestJSON-193123640-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:41:55Z,user_data=None,user_id='03b851e958654346bc1caa437d7b76a5',uuid=b5e2a584-5835-4c63-84de-6f0446220d35,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d2de6446-cca8-4827-a039-647fe671bab4", "address": "fa:16:3e:a1:2d:7c", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2de6446-cc", "ovs_interfaceid": "d2de6446-cca8-4827-a039-647fe671bab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.918 254096 DEBUG nova.network.os_vif_util [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Converting VIF {"id": "d2de6446-cca8-4827-a039-647fe671bab4", "address": "fa:16:3e:a1:2d:7c", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2de6446-cc", "ovs_interfaceid": "d2de6446-cca8-4827-a039-647fe671bab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.919 254096 DEBUG nova.network.os_vif_util [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a1:2d:7c,bridge_name='br-int',has_traffic_filtering=True,id=d2de6446-cca8-4827-a039-647fe671bab4,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd2de6446-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.919 254096 DEBUG os_vif [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:2d:7c,bridge_name='br-int',has_traffic_filtering=True,id=d2de6446-cca8-4827-a039-647fe671bab4,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd2de6446-cc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.920 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.920 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.921 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.926 254096 DEBUG oslo_concurrency.processutils [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.957 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.958 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd2de6446-cc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.958 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd2de6446-cc, col_values=(('external_ids', {'iface-id': 'd2de6446-cca8-4827-a039-647fe671bab4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a1:2d:7c', 'vm-uuid': 'b5e2a584-5835-4c63-84de-6f0446220d35'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:00 compute-0 NetworkManager[48891]: <info>  [1764088920.9790] manager: (tapd2de6446-cc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/274)
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.979 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.982 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.988 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:00 compute-0 nova_compute[254092]: 2025-11-25 16:42:00.989 254096 INFO os_vif [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:2d:7c,bridge_name='br-int',has_traffic_filtering=True,id=d2de6446-cca8-4827-a039-647fe671bab4,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd2de6446-cc')
Nov 25 16:42:01 compute-0 nova_compute[254092]: 2025-11-25 16:42:01.005 254096 INFO nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Creating config drive at /var/lib/nova/instances/796c46a8-971c-4b51-96c9-0e7c8682cfa8/disk.config
Nov 25 16:42:01 compute-0 nova_compute[254092]: 2025-11-25 16:42:01.011 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/796c46a8-971c-4b51-96c9-0e7c8682cfa8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgl4etayo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:42:01 compute-0 nova_compute[254092]: 2025-11-25 16:42:01.050 254096 DEBUG nova.network.neutron [req-67bd1c5c-6176-4036-8cb3-24fc51f1cd6a req-255d81dc-cef1-41fd-9edc-f0ff4a9bffb0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Updated VIF entry in instance network info cache for port 9ef3b6ce-50de-444f-b27f-b16a5b2b832a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:42:01 compute-0 nova_compute[254092]: 2025-11-25 16:42:01.051 254096 DEBUG nova.network.neutron [req-67bd1c5c-6176-4036-8cb3-24fc51f1cd6a req-255d81dc-cef1-41fd-9edc-f0ff4a9bffb0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Updating instance_info_cache with network_info: [{"id": "9ef3b6ce-50de-444f-b27f-b16a5b2b832a", "address": "fa:16:3e:b9:ae:2e", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ef3b6ce-50", "ovs_interfaceid": "9ef3b6ce-50de-444f-b27f-b16a5b2b832a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:42:01 compute-0 nova_compute[254092]: 2025-11-25 16:42:01.067 254096 DEBUG oslo_concurrency.lockutils [req-67bd1c5c-6176-4036-8cb3-24fc51f1cd6a req-255d81dc-cef1-41fd-9edc-f0ff4a9bffb0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-796c46a8-971c-4b51-96c9-0e7c8682cfa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:42:01 compute-0 nova_compute[254092]: 2025-11-25 16:42:01.068 254096 DEBUG nova.compute.manager [req-67bd1c5c-6176-4036-8cb3-24fc51f1cd6a req-255d81dc-cef1-41fd-9edc-f0ff4a9bffb0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Received event network-changed-d2de6446-cca8-4827-a039-647fe671bab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:42:01 compute-0 nova_compute[254092]: 2025-11-25 16:42:01.068 254096 DEBUG nova.compute.manager [req-67bd1c5c-6176-4036-8cb3-24fc51f1cd6a req-255d81dc-cef1-41fd-9edc-f0ff4a9bffb0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Refreshing instance network info cache due to event network-changed-d2de6446-cca8-4827-a039-647fe671bab4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:42:01 compute-0 nova_compute[254092]: 2025-11-25 16:42:01.068 254096 DEBUG oslo_concurrency.lockutils [req-67bd1c5c-6176-4036-8cb3-24fc51f1cd6a req-255d81dc-cef1-41fd-9edc-f0ff4a9bffb0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-b5e2a584-5835-4c63-84de-6f0446220d35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:42:01 compute-0 nova_compute[254092]: 2025-11-25 16:42:01.069 254096 DEBUG oslo_concurrency.lockutils [req-67bd1c5c-6176-4036-8cb3-24fc51f1cd6a req-255d81dc-cef1-41fd-9edc-f0ff4a9bffb0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-b5e2a584-5835-4c63-84de-6f0446220d35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:42:01 compute-0 nova_compute[254092]: 2025-11-25 16:42:01.069 254096 DEBUG nova.network.neutron [req-67bd1c5c-6176-4036-8cb3-24fc51f1cd6a req-255d81dc-cef1-41fd-9edc-f0ff4a9bffb0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Refreshing network info cache for port d2de6446-cca8-4827-a039-647fe671bab4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:42:01 compute-0 nova_compute[254092]: 2025-11-25 16:42:01.157 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/796c46a8-971c-4b51-96c9-0e7c8682cfa8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgl4etayo" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:42:01 compute-0 nova_compute[254092]: 2025-11-25 16:42:01.187 254096 DEBUG nova.storage.rbd_utils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image 796c46a8-971c-4b51-96c9-0e7c8682cfa8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:42:01 compute-0 nova_compute[254092]: 2025-11-25 16:42:01.193 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/796c46a8-971c-4b51-96c9-0e7c8682cfa8/disk.config 796c46a8-971c-4b51-96c9-0e7c8682cfa8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:42:01 compute-0 nova_compute[254092]: 2025-11-25 16:42:01.277 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:42:01 compute-0 nova_compute[254092]: 2025-11-25 16:42:01.278 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:42:01 compute-0 nova_compute[254092]: 2025-11-25 16:42:01.278 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] No VIF found with MAC fa:16:3e:a1:2d:7c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:42:01 compute-0 nova_compute[254092]: 2025-11-25 16:42:01.279 254096 INFO nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Using config drive
Nov 25 16:42:01 compute-0 ceph-mon[74985]: pgmap v1698: 321 pgs: 321 active+clean; 303 MiB data, 700 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.3 MiB/s wr, 263 op/s
Nov 25 16:42:01 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/925887909' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:42:01 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1238201947' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:42:01 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4207363251' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:42:01 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2788681208' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:42:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:42:01 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2542486749' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:42:01 compute-0 nova_compute[254092]: 2025-11-25 16:42:01.429 254096 DEBUG nova.storage.rbd_utils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image b5e2a584-5835-4c63-84de-6f0446220d35_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:42:01 compute-0 nova_compute[254092]: 2025-11-25 16:42:01.441 254096 DEBUG oslo_concurrency.processutils [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:42:01 compute-0 nova_compute[254092]: 2025-11-25 16:42:01.442 254096 DEBUG nova.virt.libvirt.vif [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:41:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1278256596',display_name='tempest-ListServerFiltersTestJSON-instance-1278256596',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1278256596',id=62,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:41:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='2319f378c7fb448ebe89427d8dfa7e43',ramdisk_id='',reservation_id='r-t7tk7x1h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-675991579',owner_user_name='tempest-ListServerFiltersTestJSON-675991579-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:41:55Z,user_data=None,user_id='e0d077039e0b4d9e8d5663768f40fa48',uuid=013dc18e-57cd-4733-8e98-7d20e3b5c4db,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "5136afea-102e-46a1-8fdb-0af970c5af04", "address": "fa:16:3e:70:61:e3", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5136afea-10", "ovs_interfaceid": "5136afea-102e-46a1-8fdb-0af970c5af04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:42:01 compute-0 nova_compute[254092]: 2025-11-25 16:42:01.442 254096 DEBUG nova.network.os_vif_util [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Converting VIF {"id": "5136afea-102e-46a1-8fdb-0af970c5af04", "address": "fa:16:3e:70:61:e3", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5136afea-10", "ovs_interfaceid": "5136afea-102e-46a1-8fdb-0af970c5af04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:42:01 compute-0 nova_compute[254092]: 2025-11-25 16:42:01.443 254096 DEBUG nova.network.os_vif_util [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:61:e3,bridge_name='br-int',has_traffic_filtering=True,id=5136afea-102e-46a1-8fdb-0af970c5af04,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5136afea-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:42:01 compute-0 nova_compute[254092]: 2025-11-25 16:42:01.444 254096 DEBUG nova.objects.instance [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lazy-loading 'pci_devices' on Instance uuid 013dc18e-57cd-4733-8e98-7d20e3b5c4db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:42:01 compute-0 nova_compute[254092]: 2025-11-25 16:42:01.465 254096 DEBUG nova.virt.libvirt.driver [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:42:01 compute-0 nova_compute[254092]:   <uuid>013dc18e-57cd-4733-8e98-7d20e3b5c4db</uuid>
Nov 25 16:42:01 compute-0 nova_compute[254092]:   <name>instance-0000003e</name>
Nov 25 16:42:01 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:42:01 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:42:01 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:42:01 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-1278256596</nova:name>
Nov 25 16:42:01 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:42:00</nova:creationTime>
Nov 25 16:42:01 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:42:01 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:42:01 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:42:01 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:42:01 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:42:01 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:42:01 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:42:01 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:42:01 compute-0 nova_compute[254092]:         <nova:user uuid="e0d077039e0b4d9e8d5663768f40fa48">tempest-ListServerFiltersTestJSON-675991579-project-member</nova:user>
Nov 25 16:42:01 compute-0 nova_compute[254092]:         <nova:project uuid="2319f378c7fb448ebe89427d8dfa7e43">tempest-ListServerFiltersTestJSON-675991579</nova:project>
Nov 25 16:42:01 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:42:01 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:42:01 compute-0 nova_compute[254092]:         <nova:port uuid="5136afea-102e-46a1-8fdb-0af970c5af04">
Nov 25 16:42:01 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:42:01 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:42:01 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:42:01 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:42:01 compute-0 nova_compute[254092]:     <system>
Nov 25 16:42:01 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:42:01 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:42:01 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:42:01 compute-0 nova_compute[254092]:       <entry name="serial">013dc18e-57cd-4733-8e98-7d20e3b5c4db</entry>
Nov 25 16:42:01 compute-0 nova_compute[254092]:       <entry name="uuid">013dc18e-57cd-4733-8e98-7d20e3b5c4db</entry>
Nov 25 16:42:01 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     </system>
Nov 25 16:42:01 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:42:01 compute-0 nova_compute[254092]:   <os>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:   </os>
Nov 25 16:42:01 compute-0 nova_compute[254092]:   <features>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:   </features>
Nov 25 16:42:01 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:42:01 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:42:01 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:42:01 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:42:01 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:42:01 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/013dc18e-57cd-4733-8e98-7d20e3b5c4db_disk">
Nov 25 16:42:01 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:       </source>
Nov 25 16:42:01 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:42:01 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:42:01 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:42:01 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/013dc18e-57cd-4733-8e98-7d20e3b5c4db_disk.config">
Nov 25 16:42:01 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:       </source>
Nov 25 16:42:01 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:42:01 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:42:01 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:42:01 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:70:61:e3"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:       <target dev="tap5136afea-10"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:42:01 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/013dc18e-57cd-4733-8e98-7d20e3b5c4db/console.log" append="off"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     <video>
Nov 25 16:42:01 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     </video>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     <input type="keyboard" bus="usb"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:42:01 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:42:01 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:42:01 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:42:01 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:42:01 compute-0 nova_compute[254092]: </domain>
Nov 25 16:42:01 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:42:01 compute-0 nova_compute[254092]: 2025-11-25 16:42:01.466 254096 DEBUG nova.virt.libvirt.driver [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] skipping disk for instance-0000003e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:42:01 compute-0 nova_compute[254092]: 2025-11-25 16:42:01.467 254096 DEBUG nova.virt.libvirt.driver [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] skipping disk for instance-0000003e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:42:01 compute-0 nova_compute[254092]: 2025-11-25 16:42:01.467 254096 DEBUG nova.virt.libvirt.vif [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:41:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1278256596',display_name='tempest-ListServerFiltersTestJSON-instance-1278256596',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1278256596',id=62,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:41:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='2319f378c7fb448ebe89427d8dfa7e43',ramdisk_id='',reservation_id='r-t7tk7x1h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='vir
tio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-675991579',owner_user_name='tempest-ListServerFiltersTestJSON-675991579-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:41:55Z,user_data=None,user_id='e0d077039e0b4d9e8d5663768f40fa48',uuid=013dc18e-57cd-4733-8e98-7d20e3b5c4db,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "5136afea-102e-46a1-8fdb-0af970c5af04", "address": "fa:16:3e:70:61:e3", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5136afea-10", "ovs_interfaceid": "5136afea-102e-46a1-8fdb-0af970c5af04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:42:01 compute-0 nova_compute[254092]: 2025-11-25 16:42:01.468 254096 DEBUG nova.network.os_vif_util [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Converting VIF {"id": "5136afea-102e-46a1-8fdb-0af970c5af04", "address": "fa:16:3e:70:61:e3", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5136afea-10", "ovs_interfaceid": "5136afea-102e-46a1-8fdb-0af970c5af04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:42:01 compute-0 nova_compute[254092]: 2025-11-25 16:42:01.468 254096 DEBUG nova.network.os_vif_util [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:61:e3,bridge_name='br-int',has_traffic_filtering=True,id=5136afea-102e-46a1-8fdb-0af970c5af04,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5136afea-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:42:01 compute-0 nova_compute[254092]: 2025-11-25 16:42:01.469 254096 DEBUG os_vif [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:61:e3,bridge_name='br-int',has_traffic_filtering=True,id=5136afea-102e-46a1-8fdb-0af970c5af04,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5136afea-10') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:42:01 compute-0 nova_compute[254092]: 2025-11-25 16:42:01.469 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:01 compute-0 nova_compute[254092]: 2025-11-25 16:42:01.470 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:01 compute-0 nova_compute[254092]: 2025-11-25 16:42:01.470 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:42:01 compute-0 nova_compute[254092]: 2025-11-25 16:42:01.472 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:01 compute-0 nova_compute[254092]: 2025-11-25 16:42:01.473 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5136afea-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:01 compute-0 nova_compute[254092]: 2025-11-25 16:42:01.473 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5136afea-10, col_values=(('external_ids', {'iface-id': '5136afea-102e-46a1-8fdb-0af970c5af04', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:70:61:e3', 'vm-uuid': '013dc18e-57cd-4733-8e98-7d20e3b5c4db'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:01 compute-0 nova_compute[254092]: 2025-11-25 16:42:01.474 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:01 compute-0 NetworkManager[48891]: <info>  [1764088921.4756] manager: (tap5136afea-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/275)
Nov 25 16:42:01 compute-0 nova_compute[254092]: 2025-11-25 16:42:01.478 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:42:01 compute-0 nova_compute[254092]: 2025-11-25 16:42:01.485 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:01 compute-0 nova_compute[254092]: 2025-11-25 16:42:01.486 254096 INFO os_vif [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:61:e3,bridge_name='br-int',has_traffic_filtering=True,id=5136afea-102e-46a1-8fdb-0af970c5af04,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5136afea-10')
Nov 25 16:42:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1699: 321 pgs: 321 active+clean; 372 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 7.3 MiB/s wr, 314 op/s
Nov 25 16:42:01 compute-0 NetworkManager[48891]: <info>  [1764088921.6574] manager: (tap5136afea-10): new Tun device (/org/freedesktop/NetworkManager/Devices/276)
Nov 25 16:42:01 compute-0 kernel: tap5136afea-10: entered promiscuous mode
Nov 25 16:42:01 compute-0 nova_compute[254092]: 2025-11-25 16:42:01.665 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:01 compute-0 ovn_controller[153477]: 2025-11-25T16:42:01Z|00629|binding|INFO|Claiming lport 5136afea-102e-46a1-8fdb-0af970c5af04 for this chassis.
Nov 25 16:42:01 compute-0 ovn_controller[153477]: 2025-11-25T16:42:01Z|00630|binding|INFO|5136afea-102e-46a1-8fdb-0af970c5af04: Claiming fa:16:3e:70:61:e3 10.100.0.13
Nov 25 16:42:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:01.682 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:61:e3 10.100.0.13'], port_security=['fa:16:3e:70:61:e3 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '013dc18e-57cd-4733-8e98-7d20e3b5c4db', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2319f378c7fb448ebe89427d8dfa7e43', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'ccb207e5-4339-4ba4-8da5-c985ba20f305', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32b6d34d-eb18-415e-af2d-9e007dc0f6c6, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=5136afea-102e-46a1-8fdb-0af970c5af04) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:42:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:01.683 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 5136afea-102e-46a1-8fdb-0af970c5af04 in datapath f00f265b-63fa-48fb-9383-38ff6abf51c1 bound to our chassis
Nov 25 16:42:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:01.685 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f00f265b-63fa-48fb-9383-38ff6abf51c1
Nov 25 16:42:01 compute-0 systemd-udevd[327322]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:42:01 compute-0 systemd-machined[216343]: New machine qemu-79-instance-0000003e.
Nov 25 16:42:01 compute-0 NetworkManager[48891]: <info>  [1764088921.6994] device (tap5136afea-10): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:42:01 compute-0 NetworkManager[48891]: <info>  [1764088921.7005] device (tap5136afea-10): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:42:01 compute-0 systemd[1]: Started Virtual Machine qemu-79-instance-0000003e.
Nov 25 16:42:01 compute-0 ovn_controller[153477]: 2025-11-25T16:42:01Z|00631|binding|INFO|Setting lport 5136afea-102e-46a1-8fdb-0af970c5af04 ovn-installed in OVS
Nov 25 16:42:01 compute-0 ovn_controller[153477]: 2025-11-25T16:42:01Z|00632|binding|INFO|Setting lport 5136afea-102e-46a1-8fdb-0af970c5af04 up in Southbound
Nov 25 16:42:01 compute-0 nova_compute[254092]: 2025-11-25 16:42:01.707 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:01.707 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c24338a6-8cbe-494e-bbd7-d2053a93ffd0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:01.733 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[cdab62b0-9abe-44c8-b9e9-3ed5de3b8734]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:01.736 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d47a4063-359d-4bd2-ad60-c659874f8981]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:01.761 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[16dcb656-1095-4ea7-8b83-a1219533fabd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:01.780 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[189c5a0d-1112-456b-bb12-a774b53e6a7e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf00f265b-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:fc:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 11, 'rx_bytes': 742, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 11, 'rx_bytes': 742, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534207, 'reachable_time': 15987, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327338, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:01.797 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9dd470bf-7b56-4abd-9394-44396693faad]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf00f265b-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534218, 'tstamp': 534218}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327339, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf00f265b-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534221, 'tstamp': 534221}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327339, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:01.798 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf00f265b-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:01 compute-0 nova_compute[254092]: 2025-11-25 16:42:01.800 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:01 compute-0 nova_compute[254092]: 2025-11-25 16:42:01.807 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:01.807 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf00f265b-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:01.808 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:42:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:01.808 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf00f265b-60, col_values=(('external_ids', {'iface-id': '57c889f7-e44b-4f52-8e8a-db17b4e1f3b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:01.808 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:42:02 compute-0 nova_compute[254092]: 2025-11-25 16:42:02.172 254096 INFO nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Creating config drive at /var/lib/nova/instances/b5e2a584-5835-4c63-84de-6f0446220d35/disk.config
Nov 25 16:42:02 compute-0 nova_compute[254092]: 2025-11-25 16:42:02.176 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b5e2a584-5835-4c63-84de-6f0446220d35/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzjpj3x57 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:42:02 compute-0 nova_compute[254092]: 2025-11-25 16:42:02.310 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b5e2a584-5835-4c63-84de-6f0446220d35/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzjpj3x57" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:42:02 compute-0 nova_compute[254092]: 2025-11-25 16:42:02.470 254096 DEBUG nova.storage.rbd_utils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] rbd image b5e2a584-5835-4c63-84de-6f0446220d35_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:42:02 compute-0 nova_compute[254092]: 2025-11-25 16:42:02.477 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b5e2a584-5835-4c63-84de-6f0446220d35/disk.config b5e2a584-5835-4c63-84de-6f0446220d35_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:42:02 compute-0 nova_compute[254092]: 2025-11-25 16:42:02.512 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for 013dc18e-57cd-4733-8e98-7d20e3b5c4db due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 16:42:02 compute-0 nova_compute[254092]: 2025-11-25 16:42:02.512 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088922.3605864, 013dc18e-57cd-4733-8e98-7d20e3b5c4db => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:42:02 compute-0 nova_compute[254092]: 2025-11-25 16:42:02.513 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] VM Resumed (Lifecycle Event)
Nov 25 16:42:02 compute-0 nova_compute[254092]: 2025-11-25 16:42:02.514 254096 DEBUG nova.compute.manager [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:42:02 compute-0 nova_compute[254092]: 2025-11-25 16:42:02.519 254096 INFO nova.virt.libvirt.driver [-] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Instance rebooted successfully.
Nov 25 16:42:02 compute-0 nova_compute[254092]: 2025-11-25 16:42:02.519 254096 DEBUG nova.compute.manager [None req-e04a04d5-5dea-4bfa-9b16-84ee9a4db632 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:42:02 compute-0 nova_compute[254092]: 2025-11-25 16:42:02.549 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:42:02 compute-0 nova_compute[254092]: 2025-11-25 16:42:02.554 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:42:02 compute-0 nova_compute[254092]: 2025-11-25 16:42:02.571 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] During sync_power_state the instance has a pending task (powering-on). Skip.
Nov 25 16:42:02 compute-0 nova_compute[254092]: 2025-11-25 16:42:02.571 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088922.3607922, 013dc18e-57cd-4733-8e98-7d20e3b5c4db => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:42:02 compute-0 nova_compute[254092]: 2025-11-25 16:42:02.571 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] VM Started (Lifecycle Event)
Nov 25 16:42:02 compute-0 nova_compute[254092]: 2025-11-25 16:42:02.588 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:42:02 compute-0 nova_compute[254092]: 2025-11-25 16:42:02.591 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Synchronizing instance power state after lifecycle event "Started"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:42:02 compute-0 nova_compute[254092]: 2025-11-25 16:42:02.613 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] During sync_power_state the instance has a pending task (powering-on). Skip.
Nov 25 16:42:02 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2542486749' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:42:02 compute-0 ceph-mon[74985]: pgmap v1699: 321 pgs: 321 active+clean; 372 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 7.3 MiB/s wr, 314 op/s
Nov 25 16:42:02 compute-0 nova_compute[254092]: 2025-11-25 16:42:02.715 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088907.7149312, 42be369c-5a19-4073-becc-4f28ef579c2c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:42:02 compute-0 nova_compute[254092]: 2025-11-25 16:42:02.716 254096 INFO nova.compute.manager [-] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] VM Stopped (Lifecycle Event)
Nov 25 16:42:02 compute-0 nova_compute[254092]: 2025-11-25 16:42:02.730 254096 DEBUG nova.compute.manager [None req-35896b72-4f45-48d9-8633-8d97ebadc23f - - - - - -] [instance: 42be369c-5a19-4073-becc-4f28ef579c2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:42:02 compute-0 nova_compute[254092]: 2025-11-25 16:42:02.834 254096 DEBUG nova.compute.manager [req-56557632-fdaa-4e65-8a62-e118150f937a req-edfa747c-50bd-4c3c-977b-4e1603351802 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Received event network-vif-plugged-5136afea-102e-46a1-8fdb-0af970c5af04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:42:02 compute-0 nova_compute[254092]: 2025-11-25 16:42:02.835 254096 DEBUG oslo_concurrency.lockutils [req-56557632-fdaa-4e65-8a62-e118150f937a req-edfa747c-50bd-4c3c-977b-4e1603351802 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:02 compute-0 nova_compute[254092]: 2025-11-25 16:42:02.835 254096 DEBUG oslo_concurrency.lockutils [req-56557632-fdaa-4e65-8a62-e118150f937a req-edfa747c-50bd-4c3c-977b-4e1603351802 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:02 compute-0 nova_compute[254092]: 2025-11-25 16:42:02.835 254096 DEBUG oslo_concurrency.lockutils [req-56557632-fdaa-4e65-8a62-e118150f937a req-edfa747c-50bd-4c3c-977b-4e1603351802 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:02 compute-0 nova_compute[254092]: 2025-11-25 16:42:02.835 254096 DEBUG nova.compute.manager [req-56557632-fdaa-4e65-8a62-e118150f937a req-edfa747c-50bd-4c3c-977b-4e1603351802 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] No waiting events found dispatching network-vif-plugged-5136afea-102e-46a1-8fdb-0af970c5af04 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:42:02 compute-0 nova_compute[254092]: 2025-11-25 16:42:02.836 254096 WARNING nova.compute.manager [req-56557632-fdaa-4e65-8a62-e118150f937a req-edfa747c-50bd-4c3c-977b-4e1603351802 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Received unexpected event network-vif-plugged-5136afea-102e-46a1-8fdb-0af970c5af04 for instance with vm_state active and task_state None.
Nov 25 16:42:02 compute-0 nova_compute[254092]: 2025-11-25 16:42:02.925 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088907.924449, 679f5e87-bac4-4169-bffa-555a53e7321f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:42:02 compute-0 nova_compute[254092]: 2025-11-25 16:42:02.926 254096 INFO nova.compute.manager [-] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] VM Stopped (Lifecycle Event)
Nov 25 16:42:02 compute-0 nova_compute[254092]: 2025-11-25 16:42:02.929 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/796c46a8-971c-4b51-96c9-0e7c8682cfa8/disk.config 796c46a8-971c-4b51-96c9-0e7c8682cfa8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.735s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:42:02 compute-0 nova_compute[254092]: 2025-11-25 16:42:02.929 254096 INFO nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Deleting local config drive /var/lib/nova/instances/796c46a8-971c-4b51-96c9-0e7c8682cfa8/disk.config because it was imported into RBD.
Nov 25 16:42:02 compute-0 nova_compute[254092]: 2025-11-25 16:42:02.948 254096 DEBUG nova.compute.manager [None req-596190c9-3316-4940-8643-0290f953124b - - - - - -] [instance: 679f5e87-bac4-4169-bffa-555a53e7321f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:42:02 compute-0 systemd-udevd[327327]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:42:02 compute-0 NetworkManager[48891]: <info>  [1764088922.9927] manager: (tap9ef3b6ce-50): new Tun device (/org/freedesktop/NetworkManager/Devices/277)
Nov 25 16:42:02 compute-0 kernel: tap9ef3b6ce-50: entered promiscuous mode
Nov 25 16:42:03 compute-0 nova_compute[254092]: 2025-11-25 16:42:03.001 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:03 compute-0 ovn_controller[153477]: 2025-11-25T16:42:03Z|00633|binding|INFO|Claiming lport 9ef3b6ce-50de-444f-b27f-b16a5b2b832a for this chassis.
Nov 25 16:42:03 compute-0 ovn_controller[153477]: 2025-11-25T16:42:03Z|00634|binding|INFO|9ef3b6ce-50de-444f-b27f-b16a5b2b832a: Claiming fa:16:3e:b9:ae:2e 10.100.0.5
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.013 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:ae:2e 10.100.0.5'], port_security=['fa:16:3e:b9:ae:2e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '796c46a8-971c-4b51-96c9-0e7c8682cfa8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f66413c8-5cde-4f70-af70-6b7886c1219f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '20e192e04e814ae28e09f3c1fdbd39d0', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd45b4fe4-a7ee-460c-87e5-fc7e0e415a7e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82931fa4-5f59-456f-908b-c16b2ecf859e, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=9ef3b6ce-50de-444f-b27f-b16a5b2b832a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.014 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 9ef3b6ce-50de-444f-b27f-b16a5b2b832a in datapath f66413c8-5cde-4f70-af70-6b7886c1219f bound to our chassis
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.016 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f66413c8-5cde-4f70-af70-6b7886c1219f
Nov 25 16:42:03 compute-0 NetworkManager[48891]: <info>  [1764088923.0174] device (tap9ef3b6ce-50): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:42:03 compute-0 NetworkManager[48891]: <info>  [1764088923.0183] device (tap9ef3b6ce-50): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:42:03 compute-0 nova_compute[254092]: 2025-11-25 16:42:03.016 254096 DEBUG oslo_concurrency.processutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b5e2a584-5835-4c63-84de-6f0446220d35/disk.config b5e2a584-5835-4c63-84de-6f0446220d35_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:42:03 compute-0 nova_compute[254092]: 2025-11-25 16:42:03.017 254096 INFO nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Deleting local config drive /var/lib/nova/instances/b5e2a584-5835-4c63-84de-6f0446220d35/disk.config because it was imported into RBD.
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.030 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8e1024e5-83be-42ee-9e28-c660e033113c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.032 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf66413c8-51 in ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.033 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf66413c8-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.033 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3d6b5bad-497a-40ba-acd8-18f6f92b5225]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.035 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[190ebc35-8b5a-437c-b0d3-ad1a905fb455]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:03 compute-0 systemd-machined[216343]: New machine qemu-80-instance-00000044.
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.049 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[2a876f02-8414-451a-bef7-f3ee69a51bfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:03 compute-0 systemd[1]: Started Virtual Machine qemu-80-instance-00000044.
Nov 25 16:42:03 compute-0 nova_compute[254092]: 2025-11-25 16:42:03.053 254096 DEBUG nova.network.neutron [req-67bd1c5c-6176-4036-8cb3-24fc51f1cd6a req-255d81dc-cef1-41fd-9edc-f0ff4a9bffb0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Updated VIF entry in instance network info cache for port d2de6446-cca8-4827-a039-647fe671bab4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:42:03 compute-0 nova_compute[254092]: 2025-11-25 16:42:03.054 254096 DEBUG nova.network.neutron [req-67bd1c5c-6176-4036-8cb3-24fc51f1cd6a req-255d81dc-cef1-41fd-9edc-f0ff4a9bffb0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Updating instance_info_cache with network_info: [{"id": "d2de6446-cca8-4827-a039-647fe671bab4", "address": "fa:16:3e:a1:2d:7c", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2de6446-cc", "ovs_interfaceid": "d2de6446-cca8-4827-a039-647fe671bab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:42:03 compute-0 nova_compute[254092]: 2025-11-25 16:42:03.067 254096 DEBUG oslo_concurrency.lockutils [req-67bd1c5c-6176-4036-8cb3-24fc51f1cd6a req-255d81dc-cef1-41fd-9edc-f0ff4a9bffb0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-b5e2a584-5835-4c63-84de-6f0446220d35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.070 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[af9e00b6-b464-4732-993c-c5b8e4fa14ee]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:03 compute-0 kernel: tapd2de6446-cc: entered promiscuous mode
Nov 25 16:42:03 compute-0 NetworkManager[48891]: <info>  [1764088923.0907] manager: (tapd2de6446-cc): new Tun device (/org/freedesktop/NetworkManager/Devices/278)
Nov 25 16:42:03 compute-0 ovn_controller[153477]: 2025-11-25T16:42:03Z|00635|binding|INFO|Claiming lport d2de6446-cca8-4827-a039-647fe671bab4 for this chassis.
Nov 25 16:42:03 compute-0 ovn_controller[153477]: 2025-11-25T16:42:03Z|00636|binding|INFO|d2de6446-cca8-4827-a039-647fe671bab4: Claiming fa:16:3e:a1:2d:7c 10.100.0.4
Nov 25 16:42:03 compute-0 nova_compute[254092]: 2025-11-25 16:42:03.094 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:03 compute-0 ovn_controller[153477]: 2025-11-25T16:42:03Z|00637|binding|INFO|Setting lport 9ef3b6ce-50de-444f-b27f-b16a5b2b832a ovn-installed in OVS
Nov 25 16:42:03 compute-0 ovn_controller[153477]: 2025-11-25T16:42:03Z|00638|binding|INFO|Setting lport 9ef3b6ce-50de-444f-b27f-b16a5b2b832a up in Southbound
Nov 25 16:42:03 compute-0 nova_compute[254092]: 2025-11-25 16:42:03.103 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:03 compute-0 NetworkManager[48891]: <info>  [1764088923.1055] device (tapd2de6446-cc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.102 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a1:2d:7c 10.100.0.4'], port_security=['fa:16:3e:a1:2d:7c 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'b5e2a584-5835-4c63-84de-6f0446220d35', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f66413c8-5cde-4f70-af70-6b7886c1219f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '20e192e04e814ae28e09f3c1fdbd39d0', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd45b4fe4-a7ee-460c-87e5-fc7e0e415a7e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82931fa4-5f59-456f-908b-c16b2ecf859e, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=d2de6446-cca8-4827-a039-647fe671bab4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:42:03 compute-0 NetworkManager[48891]: <info>  [1764088923.1063] device (tapd2de6446-cc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.111 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[653cc620-282c-437e-b4d0-24e47d4248e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:03 compute-0 ovn_controller[153477]: 2025-11-25T16:42:03Z|00639|binding|INFO|Setting lport d2de6446-cca8-4827-a039-647fe671bab4 ovn-installed in OVS
Nov 25 16:42:03 compute-0 ovn_controller[153477]: 2025-11-25T16:42:03Z|00640|binding|INFO|Setting lport d2de6446-cca8-4827-a039-647fe671bab4 up in Southbound
Nov 25 16:42:03 compute-0 nova_compute[254092]: 2025-11-25 16:42:03.119 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:03 compute-0 NetworkManager[48891]: <info>  [1764088923.1218] manager: (tapf66413c8-50): new Veth device (/org/freedesktop/NetworkManager/Devices/279)
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.121 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d33b79db-5d38-482d-8be6-d37be5d42098]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:03 compute-0 nova_compute[254092]: 2025-11-25 16:42:03.133 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.156 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[bdb332ea-4c9d-4171-a95e-5d15548b5e8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:03 compute-0 systemd-machined[216343]: New machine qemu-81-instance-00000045.
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.159 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[0fe4ae9e-2529-4ef0-b1ff-89d135ebb44d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:03 compute-0 systemd[1]: Started Virtual Machine qemu-81-instance-00000045.
Nov 25 16:42:03 compute-0 NetworkManager[48891]: <info>  [1764088923.1883] device (tapf66413c8-50): carrier: link connected
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.194 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[446fbb57-fb5d-4d8e-9112-cbcdc82bd780]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:03 compute-0 nova_compute[254092]: 2025-11-25 16:42:03.200 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.212 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7fbb5b1c-40cd-4b6a-8f8e-a1a8cf54052a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf66413c8-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:af:00:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 185], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538077, 'reachable_time': 24018, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327490, 'error': None, 'target': 'ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.227 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[adb0e6c4-88da-4e12-81cd-df8c46b5ac6c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feaf:f0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 538077, 'tstamp': 538077}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327494, 'error': None, 'target': 'ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.248 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[db10aea9-9b5f-43c2-9a6b-8e2972b894b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf66413c8-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:af:00:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 185], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538077, 'reachable_time': 24018, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 327497, 'error': None, 'target': 'ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.278 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[42fd6aab-9d67-4010-98f7-5f282e571e67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.342 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f3c181a4-40e8-4141-ae80-c12e58e5e447]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.344 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf66413c8-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.344 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.344 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf66413c8-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:03 compute-0 nova_compute[254092]: 2025-11-25 16:42:03.346 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:03 compute-0 kernel: tapf66413c8-50: entered promiscuous mode
Nov 25 16:42:03 compute-0 NetworkManager[48891]: <info>  [1764088923.3468] manager: (tapf66413c8-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/280)
Nov 25 16:42:03 compute-0 nova_compute[254092]: 2025-11-25 16:42:03.348 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.351 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf66413c8-50, col_values=(('external_ids', {'iface-id': '347c541f-24a8-4230-9881-74343160f6c8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:03 compute-0 nova_compute[254092]: 2025-11-25 16:42:03.352 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:03 compute-0 nova_compute[254092]: 2025-11-25 16:42:03.354 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.355 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f66413c8-5cde-4f70-af70-6b7886c1219f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f66413c8-5cde-4f70-af70-6b7886c1219f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:42:03 compute-0 ovn_controller[153477]: 2025-11-25T16:42:03Z|00641|binding|INFO|Releasing lport 347c541f-24a8-4230-9881-74343160f6c8 from this chassis (sb_readonly=0)
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.356 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[45b80b8e-a9a0-475e-ae55-8bcfd2b14527]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.357 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-f66413c8-5cde-4f70-af70-6b7886c1219f
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/f66413c8-5cde-4f70-af70-6b7886c1219f.pid.haproxy
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID f66413c8-5cde-4f70-af70-6b7886c1219f
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.357 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f', 'env', 'PROCESS_TAG=haproxy-f66413c8-5cde-4f70-af70-6b7886c1219f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f66413c8-5cde-4f70-af70-6b7886c1219f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:42:03 compute-0 nova_compute[254092]: 2025-11-25 16:42:03.377 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:03 compute-0 nova_compute[254092]: 2025-11-25 16:42:03.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:42:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1700: 321 pgs: 321 active+clean; 372 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 3.6 MiB/s wr, 57 op/s
Nov 25 16:42:03 compute-0 nova_compute[254092]: 2025-11-25 16:42:03.782 254096 DEBUG nova.compute.manager [req-6b762242-846e-4cef-b13b-de3c9dcfd7d2 req-c0e44eae-f2b1-49c1-960c-285235c02a54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Received event network-vif-plugged-9ef3b6ce-50de-444f-b27f-b16a5b2b832a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:42:03 compute-0 nova_compute[254092]: 2025-11-25 16:42:03.782 254096 DEBUG oslo_concurrency.lockutils [req-6b762242-846e-4cef-b13b-de3c9dcfd7d2 req-c0e44eae-f2b1-49c1-960c-285235c02a54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "796c46a8-971c-4b51-96c9-0e7c8682cfa8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:03 compute-0 nova_compute[254092]: 2025-11-25 16:42:03.784 254096 DEBUG oslo_concurrency.lockutils [req-6b762242-846e-4cef-b13b-de3c9dcfd7d2 req-c0e44eae-f2b1-49c1-960c-285235c02a54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "796c46a8-971c-4b51-96c9-0e7c8682cfa8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:03 compute-0 nova_compute[254092]: 2025-11-25 16:42:03.784 254096 DEBUG oslo_concurrency.lockutils [req-6b762242-846e-4cef-b13b-de3c9dcfd7d2 req-c0e44eae-f2b1-49c1-960c-285235c02a54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "796c46a8-971c-4b51-96c9-0e7c8682cfa8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:03 compute-0 nova_compute[254092]: 2025-11-25 16:42:03.784 254096 DEBUG nova.compute.manager [req-6b762242-846e-4cef-b13b-de3c9dcfd7d2 req-c0e44eae-f2b1-49c1-960c-285235c02a54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Processing event network-vif-plugged-9ef3b6ce-50de-444f-b27f-b16a5b2b832a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:42:03 compute-0 podman[327595]: 2025-11-25 16:42:03.78632142 +0000 UTC m=+0.076350192 container create df9b579937c6ff8f75757c980c85710facb345c383e33bc6eaf9895651a78beb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:42:03 compute-0 nova_compute[254092]: 2025-11-25 16:42:03.799 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088923.7978551, 796c46a8-971c-4b51-96c9-0e7c8682cfa8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:42:03 compute-0 nova_compute[254092]: 2025-11-25 16:42:03.799 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] VM Started (Lifecycle Event)
Nov 25 16:42:03 compute-0 nova_compute[254092]: 2025-11-25 16:42:03.801 254096 DEBUG nova.compute.manager [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:42:03 compute-0 nova_compute[254092]: 2025-11-25 16:42:03.807 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:42:03 compute-0 nova_compute[254092]: 2025-11-25 16:42:03.811 254096 INFO nova.virt.libvirt.driver [-] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Instance spawned successfully.
Nov 25 16:42:03 compute-0 nova_compute[254092]: 2025-11-25 16:42:03.812 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:42:03 compute-0 nova_compute[254092]: 2025-11-25 16:42:03.828 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:42:03 compute-0 systemd[1]: Started libpod-conmon-df9b579937c6ff8f75757c980c85710facb345c383e33bc6eaf9895651a78beb.scope.
Nov 25 16:42:03 compute-0 nova_compute[254092]: 2025-11-25 16:42:03.839 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:42:03 compute-0 nova_compute[254092]: 2025-11-25 16:42:03.839 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:42:03 compute-0 podman[327595]: 2025-11-25 16:42:03.746426858 +0000 UTC m=+0.036455680 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:42:03 compute-0 nova_compute[254092]: 2025-11-25 16:42:03.840 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:42:03 compute-0 nova_compute[254092]: 2025-11-25 16:42:03.842 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:42:03 compute-0 nova_compute[254092]: 2025-11-25 16:42:03.842 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:42:03 compute-0 nova_compute[254092]: 2025-11-25 16:42:03.842 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:42:03 compute-0 nova_compute[254092]: 2025-11-25 16:42:03.847 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:42:03 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:42:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/211f7d06795e09a0a0b405763765b8431f236ef9df850c7ebdf97675234af87d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:42:03 compute-0 podman[327595]: 2025-11-25 16:42:03.876790524 +0000 UTC m=+0.166819316 container init df9b579937c6ff8f75757c980c85710facb345c383e33bc6eaf9895651a78beb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 16:42:03 compute-0 nova_compute[254092]: 2025-11-25 16:42:03.881 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:42:03 compute-0 nova_compute[254092]: 2025-11-25 16:42:03.882 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088923.7979243, 796c46a8-971c-4b51-96c9-0e7c8682cfa8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:42:03 compute-0 nova_compute[254092]: 2025-11-25 16:42:03.882 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] VM Paused (Lifecycle Event)
Nov 25 16:42:03 compute-0 podman[327595]: 2025-11-25 16:42:03.882830379 +0000 UTC m=+0.172859161 container start df9b579937c6ff8f75757c980c85710facb345c383e33bc6eaf9895651a78beb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 16:42:03 compute-0 neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f[327628]: [NOTICE]   (327632) : New worker (327634) forked
Nov 25 16:42:03 compute-0 neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f[327628]: [NOTICE]   (327632) : Loading success.
Nov 25 16:42:03 compute-0 nova_compute[254092]: 2025-11-25 16:42:03.915 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:42:03 compute-0 nova_compute[254092]: 2025-11-25 16:42:03.919 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088923.803791, 796c46a8-971c-4b51-96c9-0e7c8682cfa8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:42:03 compute-0 nova_compute[254092]: 2025-11-25 16:42:03.920 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] VM Resumed (Lifecycle Event)
Nov 25 16:42:03 compute-0 nova_compute[254092]: 2025-11-25 16:42:03.933 254096 INFO nova.compute.manager [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Took 9.22 seconds to spawn the instance on the hypervisor.
Nov 25 16:42:03 compute-0 nova_compute[254092]: 2025-11-25 16:42:03.933 254096 DEBUG nova.compute.manager [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.942 163338 INFO neutron.agent.ovn.metadata.agent [-] Port d2de6446-cca8-4827-a039-647fe671bab4 in datapath f66413c8-5cde-4f70-af70-6b7886c1219f unbound from our chassis
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.944 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f66413c8-5cde-4f70-af70-6b7886c1219f
Nov 25 16:42:03 compute-0 nova_compute[254092]: 2025-11-25 16:42:03.947 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:42:03 compute-0 nova_compute[254092]: 2025-11-25 16:42:03.950 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:42:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:03.965 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e870018b-ff19-453a-902a-6b5577579ead]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:03 compute-0 nova_compute[254092]: 2025-11-25 16:42:03.974 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:42:03 compute-0 nova_compute[254092]: 2025-11-25 16:42:03.975 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088923.8594325, b5e2a584-5835-4c63-84de-6f0446220d35 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:42:03 compute-0 nova_compute[254092]: 2025-11-25 16:42:03.975 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] VM Started (Lifecycle Event)
Nov 25 16:42:04 compute-0 nova_compute[254092]: 2025-11-25 16:42:04.017 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:42:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:04.021 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[cb1fcde9-7784-495c-9e50-8aec0b3a1641]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:04 compute-0 nova_compute[254092]: 2025-11-25 16:42:04.027 254096 INFO nova.compute.manager [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Took 10.36 seconds to build instance.
Nov 25 16:42:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:04.028 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c78f82fe-a4c9-414b-8ebc-e9428b0366ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:04 compute-0 nova_compute[254092]: 2025-11-25 16:42:04.030 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088923.8595078, b5e2a584-5835-4c63-84de-6f0446220d35 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:42:04 compute-0 nova_compute[254092]: 2025-11-25 16:42:04.030 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] VM Paused (Lifecycle Event)
Nov 25 16:42:04 compute-0 nova_compute[254092]: 2025-11-25 16:42:04.050 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:42:04 compute-0 nova_compute[254092]: 2025-11-25 16:42:04.051 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "796c46a8-971c-4b51-96c9-0e7c8682cfa8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.461s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:04 compute-0 nova_compute[254092]: 2025-11-25 16:42:04.054 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:42:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:04.055 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[2207f9cd-151b-4388-9b7d-22c60b48371b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:04.071 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6a189a1c-eedb-4e1a-b9c5-68c0d9ce46fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf66413c8-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:af:00:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 4, 'rx_bytes': 180, 'tx_bytes': 264, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 4, 'rx_bytes': 180, 'tx_bytes': 264, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 185], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538077, 'reachable_time': 24018, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327648, 'error': None, 'target': 'ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:04 compute-0 nova_compute[254092]: 2025-11-25 16:42:04.077 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:42:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:04.090 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b2b85ca7-7535-4746-97de-79a62218b4d1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf66413c8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 538090, 'tstamp': 538090}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327649, 'error': None, 'target': 'ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf66413c8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 538092, 'tstamp': 538092}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327649, 'error': None, 'target': 'ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:04.092 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf66413c8-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:04 compute-0 nova_compute[254092]: 2025-11-25 16:42:04.094 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:04 compute-0 nova_compute[254092]: 2025-11-25 16:42:04.095 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:04.095 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf66413c8-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:04.095 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:42:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:04.096 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf66413c8-50, col_values=(('external_ids', {'iface-id': '347c541f-24a8-4230-9881-74343160f6c8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:04.096 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:42:04 compute-0 nova_compute[254092]: 2025-11-25 16:42:04.518 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088909.5167675, a9e83b59-224b-49dd-83f7-d057737f5825 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:42:04 compute-0 nova_compute[254092]: 2025-11-25 16:42:04.518 254096 INFO nova.compute.manager [-] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] VM Stopped (Lifecycle Event)
Nov 25 16:42:04 compute-0 nova_compute[254092]: 2025-11-25 16:42:04.535 254096 DEBUG nova.compute.manager [None req-e391b162-c827-4594-a752-7bfa7090f033 - - - - - -] [instance: a9e83b59-224b-49dd-83f7-d057737f5825] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:42:04 compute-0 ceph-mon[74985]: pgmap v1700: 321 pgs: 321 active+clean; 372 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 3.6 MiB/s wr, 57 op/s
Nov 25 16:42:04 compute-0 nova_compute[254092]: 2025-11-25 16:42:04.966 254096 DEBUG nova.compute.manager [req-954b338e-ddf8-4d67-b242-f14bcc7c5df8 req-595794af-9386-4bc3-b121-95714b8f59c9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Received event network-vif-plugged-5136afea-102e-46a1-8fdb-0af970c5af04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:42:04 compute-0 nova_compute[254092]: 2025-11-25 16:42:04.967 254096 DEBUG oslo_concurrency.lockutils [req-954b338e-ddf8-4d67-b242-f14bcc7c5df8 req-595794af-9386-4bc3-b121-95714b8f59c9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:04 compute-0 nova_compute[254092]: 2025-11-25 16:42:04.967 254096 DEBUG oslo_concurrency.lockutils [req-954b338e-ddf8-4d67-b242-f14bcc7c5df8 req-595794af-9386-4bc3-b121-95714b8f59c9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:04 compute-0 nova_compute[254092]: 2025-11-25 16:42:04.967 254096 DEBUG oslo_concurrency.lockutils [req-954b338e-ddf8-4d67-b242-f14bcc7c5df8 req-595794af-9386-4bc3-b121-95714b8f59c9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:04 compute-0 nova_compute[254092]: 2025-11-25 16:42:04.968 254096 DEBUG nova.compute.manager [req-954b338e-ddf8-4d67-b242-f14bcc7c5df8 req-595794af-9386-4bc3-b121-95714b8f59c9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] No waiting events found dispatching network-vif-plugged-5136afea-102e-46a1-8fdb-0af970c5af04 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:42:04 compute-0 nova_compute[254092]: 2025-11-25 16:42:04.968 254096 WARNING nova.compute.manager [req-954b338e-ddf8-4d67-b242-f14bcc7c5df8 req-595794af-9386-4bc3-b121-95714b8f59c9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Received unexpected event network-vif-plugged-5136afea-102e-46a1-8fdb-0af970c5af04 for instance with vm_state active and task_state None.
Nov 25 16:42:04 compute-0 nova_compute[254092]: 2025-11-25 16:42:04.968 254096 DEBUG nova.compute.manager [req-954b338e-ddf8-4d67-b242-f14bcc7c5df8 req-595794af-9386-4bc3-b121-95714b8f59c9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Received event network-vif-plugged-d2de6446-cca8-4827-a039-647fe671bab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:42:04 compute-0 nova_compute[254092]: 2025-11-25 16:42:04.968 254096 DEBUG oslo_concurrency.lockutils [req-954b338e-ddf8-4d67-b242-f14bcc7c5df8 req-595794af-9386-4bc3-b121-95714b8f59c9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b5e2a584-5835-4c63-84de-6f0446220d35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:04 compute-0 nova_compute[254092]: 2025-11-25 16:42:04.968 254096 DEBUG oslo_concurrency.lockutils [req-954b338e-ddf8-4d67-b242-f14bcc7c5df8 req-595794af-9386-4bc3-b121-95714b8f59c9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b5e2a584-5835-4c63-84de-6f0446220d35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:04 compute-0 nova_compute[254092]: 2025-11-25 16:42:04.969 254096 DEBUG oslo_concurrency.lockutils [req-954b338e-ddf8-4d67-b242-f14bcc7c5df8 req-595794af-9386-4bc3-b121-95714b8f59c9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b5e2a584-5835-4c63-84de-6f0446220d35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:04 compute-0 nova_compute[254092]: 2025-11-25 16:42:04.969 254096 DEBUG nova.compute.manager [req-954b338e-ddf8-4d67-b242-f14bcc7c5df8 req-595794af-9386-4bc3-b121-95714b8f59c9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Processing event network-vif-plugged-d2de6446-cca8-4827-a039-647fe671bab4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:42:04 compute-0 nova_compute[254092]: 2025-11-25 16:42:04.969 254096 DEBUG nova.compute.manager [req-954b338e-ddf8-4d67-b242-f14bcc7c5df8 req-595794af-9386-4bc3-b121-95714b8f59c9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Received event network-vif-plugged-d2de6446-cca8-4827-a039-647fe671bab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:42:04 compute-0 nova_compute[254092]: 2025-11-25 16:42:04.969 254096 DEBUG oslo_concurrency.lockutils [req-954b338e-ddf8-4d67-b242-f14bcc7c5df8 req-595794af-9386-4bc3-b121-95714b8f59c9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b5e2a584-5835-4c63-84de-6f0446220d35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:04 compute-0 nova_compute[254092]: 2025-11-25 16:42:04.970 254096 DEBUG oslo_concurrency.lockutils [req-954b338e-ddf8-4d67-b242-f14bcc7c5df8 req-595794af-9386-4bc3-b121-95714b8f59c9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b5e2a584-5835-4c63-84de-6f0446220d35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:04 compute-0 nova_compute[254092]: 2025-11-25 16:42:04.970 254096 DEBUG oslo_concurrency.lockutils [req-954b338e-ddf8-4d67-b242-f14bcc7c5df8 req-595794af-9386-4bc3-b121-95714b8f59c9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b5e2a584-5835-4c63-84de-6f0446220d35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:04 compute-0 nova_compute[254092]: 2025-11-25 16:42:04.970 254096 DEBUG nova.compute.manager [req-954b338e-ddf8-4d67-b242-f14bcc7c5df8 req-595794af-9386-4bc3-b121-95714b8f59c9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] No waiting events found dispatching network-vif-plugged-d2de6446-cca8-4827-a039-647fe671bab4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:42:04 compute-0 nova_compute[254092]: 2025-11-25 16:42:04.970 254096 WARNING nova.compute.manager [req-954b338e-ddf8-4d67-b242-f14bcc7c5df8 req-595794af-9386-4bc3-b121-95714b8f59c9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Received unexpected event network-vif-plugged-d2de6446-cca8-4827-a039-647fe671bab4 for instance with vm_state building and task_state spawning.
Nov 25 16:42:04 compute-0 nova_compute[254092]: 2025-11-25 16:42:04.971 254096 DEBUG nova.compute.manager [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:42:04 compute-0 nova_compute[254092]: 2025-11-25 16:42:04.974 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088924.9738872, b5e2a584-5835-4c63-84de-6f0446220d35 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:42:04 compute-0 nova_compute[254092]: 2025-11-25 16:42:04.975 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] VM Resumed (Lifecycle Event)
Nov 25 16:42:04 compute-0 nova_compute[254092]: 2025-11-25 16:42:04.976 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:42:04 compute-0 nova_compute[254092]: 2025-11-25 16:42:04.984 254096 INFO nova.virt.libvirt.driver [-] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Instance spawned successfully.
Nov 25 16:42:04 compute-0 nova_compute[254092]: 2025-11-25 16:42:04.985 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:42:05 compute-0 nova_compute[254092]: 2025-11-25 16:42:05.010 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:42:05 compute-0 nova_compute[254092]: 2025-11-25 16:42:05.016 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:42:05 compute-0 nova_compute[254092]: 2025-11-25 16:42:05.016 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:42:05 compute-0 nova_compute[254092]: 2025-11-25 16:42:05.018 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:42:05 compute-0 nova_compute[254092]: 2025-11-25 16:42:05.020 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:42:05 compute-0 nova_compute[254092]: 2025-11-25 16:42:05.021 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:42:05 compute-0 nova_compute[254092]: 2025-11-25 16:42:05.021 254096 DEBUG nova.virt.libvirt.driver [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:42:05 compute-0 nova_compute[254092]: 2025-11-25 16:42:05.029 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:42:05 compute-0 nova_compute[254092]: 2025-11-25 16:42:05.066 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:42:05 compute-0 nova_compute[254092]: 2025-11-25 16:42:05.101 254096 INFO nova.compute.manager [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Took 9.54 seconds to spawn the instance on the hypervisor.
Nov 25 16:42:05 compute-0 nova_compute[254092]: 2025-11-25 16:42:05.101 254096 DEBUG nova.compute.manager [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:42:05 compute-0 nova_compute[254092]: 2025-11-25 16:42:05.325 254096 INFO nova.compute.manager [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Took 11.60 seconds to build instance.
Nov 25 16:42:05 compute-0 nova_compute[254092]: 2025-11-25 16:42:05.345 254096 DEBUG oslo_concurrency.lockutils [None req-526ab1df-f391-4cc6-9a8b-7c9f8240e5a2 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "b5e2a584-5835-4c63-84de-6f0446220d35" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1701: 321 pgs: 321 active+clean; 372 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 143 op/s
Nov 25 16:42:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:42:06 compute-0 nova_compute[254092]: 2025-11-25 16:42:06.085 254096 DEBUG nova.compute.manager [req-9be63050-2bc7-4242-a77f-5a7e9265efa7 req-475ef079-49d4-49ef-87c4-7d8ac73f5388 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Received event network-vif-plugged-9ef3b6ce-50de-444f-b27f-b16a5b2b832a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:42:06 compute-0 nova_compute[254092]: 2025-11-25 16:42:06.085 254096 DEBUG oslo_concurrency.lockutils [req-9be63050-2bc7-4242-a77f-5a7e9265efa7 req-475ef079-49d4-49ef-87c4-7d8ac73f5388 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "796c46a8-971c-4b51-96c9-0e7c8682cfa8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:06 compute-0 nova_compute[254092]: 2025-11-25 16:42:06.085 254096 DEBUG oslo_concurrency.lockutils [req-9be63050-2bc7-4242-a77f-5a7e9265efa7 req-475ef079-49d4-49ef-87c4-7d8ac73f5388 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "796c46a8-971c-4b51-96c9-0e7c8682cfa8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:06 compute-0 nova_compute[254092]: 2025-11-25 16:42:06.085 254096 DEBUG oslo_concurrency.lockutils [req-9be63050-2bc7-4242-a77f-5a7e9265efa7 req-475ef079-49d4-49ef-87c4-7d8ac73f5388 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "796c46a8-971c-4b51-96c9-0e7c8682cfa8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:06 compute-0 nova_compute[254092]: 2025-11-25 16:42:06.086 254096 DEBUG nova.compute.manager [req-9be63050-2bc7-4242-a77f-5a7e9265efa7 req-475ef079-49d4-49ef-87c4-7d8ac73f5388 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] No waiting events found dispatching network-vif-plugged-9ef3b6ce-50de-444f-b27f-b16a5b2b832a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:42:06 compute-0 nova_compute[254092]: 2025-11-25 16:42:06.086 254096 WARNING nova.compute.manager [req-9be63050-2bc7-4242-a77f-5a7e9265efa7 req-475ef079-49d4-49ef-87c4-7d8ac73f5388 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Received unexpected event network-vif-plugged-9ef3b6ce-50de-444f-b27f-b16a5b2b832a for instance with vm_state active and task_state None.
Nov 25 16:42:06 compute-0 nova_compute[254092]: 2025-11-25 16:42:06.475 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:06 compute-0 nova_compute[254092]: 2025-11-25 16:42:06.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:42:06 compute-0 ceph-mon[74985]: pgmap v1701: 321 pgs: 321 active+clean; 372 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 143 op/s
Nov 25 16:42:07 compute-0 nova_compute[254092]: 2025-11-25 16:42:07.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:42:07 compute-0 nova_compute[254092]: 2025-11-25 16:42:07.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:42:07 compute-0 nova_compute[254092]: 2025-11-25 16:42:07.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:42:07 compute-0 nova_compute[254092]: 2025-11-25 16:42:07.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 16:42:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1702: 321 pgs: 321 active+clean; 372 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.6 MiB/s wr, 206 op/s
Nov 25 16:42:07 compute-0 nova_compute[254092]: 2025-11-25 16:42:07.779 254096 DEBUG oslo_concurrency.lockutils [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "796c46a8-971c-4b51-96c9-0e7c8682cfa8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:07 compute-0 nova_compute[254092]: 2025-11-25 16:42:07.780 254096 DEBUG oslo_concurrency.lockutils [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "796c46a8-971c-4b51-96c9-0e7c8682cfa8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:07 compute-0 nova_compute[254092]: 2025-11-25 16:42:07.780 254096 DEBUG oslo_concurrency.lockutils [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "796c46a8-971c-4b51-96c9-0e7c8682cfa8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:07 compute-0 nova_compute[254092]: 2025-11-25 16:42:07.780 254096 DEBUG oslo_concurrency.lockutils [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "796c46a8-971c-4b51-96c9-0e7c8682cfa8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:07 compute-0 nova_compute[254092]: 2025-11-25 16:42:07.780 254096 DEBUG oslo_concurrency.lockutils [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "796c46a8-971c-4b51-96c9-0e7c8682cfa8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:07 compute-0 nova_compute[254092]: 2025-11-25 16:42:07.781 254096 INFO nova.compute.manager [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Terminating instance
Nov 25 16:42:07 compute-0 nova_compute[254092]: 2025-11-25 16:42:07.782 254096 DEBUG nova.compute.manager [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:42:07 compute-0 kernel: tap9ef3b6ce-50 (unregistering): left promiscuous mode
Nov 25 16:42:07 compute-0 NetworkManager[48891]: <info>  [1764088927.8271] device (tap9ef3b6ce-50): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:42:07 compute-0 nova_compute[254092]: 2025-11-25 16:42:07.836 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:07 compute-0 ovn_controller[153477]: 2025-11-25T16:42:07Z|00642|binding|INFO|Releasing lport 9ef3b6ce-50de-444f-b27f-b16a5b2b832a from this chassis (sb_readonly=0)
Nov 25 16:42:07 compute-0 ovn_controller[153477]: 2025-11-25T16:42:07Z|00643|binding|INFO|Setting lport 9ef3b6ce-50de-444f-b27f-b16a5b2b832a down in Southbound
Nov 25 16:42:07 compute-0 ovn_controller[153477]: 2025-11-25T16:42:07Z|00644|binding|INFO|Removing iface tap9ef3b6ce-50 ovn-installed in OVS
Nov 25 16:42:07 compute-0 nova_compute[254092]: 2025-11-25 16:42:07.838 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:07.842 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:ae:2e 10.100.0.5'], port_security=['fa:16:3e:b9:ae:2e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '796c46a8-971c-4b51-96c9-0e7c8682cfa8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f66413c8-5cde-4f70-af70-6b7886c1219f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '20e192e04e814ae28e09f3c1fdbd39d0', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd45b4fe4-a7ee-460c-87e5-fc7e0e415a7e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82931fa4-5f59-456f-908b-c16b2ecf859e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=9ef3b6ce-50de-444f-b27f-b16a5b2b832a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:42:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:07.843 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 9ef3b6ce-50de-444f-b27f-b16a5b2b832a in datapath f66413c8-5cde-4f70-af70-6b7886c1219f unbound from our chassis
Nov 25 16:42:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:07.844 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f66413c8-5cde-4f70-af70-6b7886c1219f
Nov 25 16:42:07 compute-0 nova_compute[254092]: 2025-11-25 16:42:07.863 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:07.866 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bc136dc9-c0d5-4604-aeb3-a2b630ddf7d5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:07 compute-0 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d00000044.scope: Deactivated successfully.
Nov 25 16:42:07 compute-0 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d00000044.scope: Consumed 4.634s CPU time.
Nov 25 16:42:07 compute-0 systemd-machined[216343]: Machine qemu-80-instance-00000044 terminated.
Nov 25 16:42:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:07.909 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[161f4af2-38e9-4b6f-8ded-6f0080d5960c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:07.913 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[cf3861c5-1b9b-4116-bceb-694c9f0331f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:07.961 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ecec605a-3d10-43b8-a770-6d1bb7683ecd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:07.978 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[73623577-b892-418b-a0a3-50b7d6c156ec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf66413c8-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:af:00:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 185], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538077, 'reachable_time': 24018, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327662, 'error': None, 'target': 'ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:07.996 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ae405a79-fb00-406e-82f9-0de6fdf34e2c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf66413c8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 538090, 'tstamp': 538090}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327663, 'error': None, 'target': 'ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf66413c8-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 538092, 'tstamp': 538092}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327663, 'error': None, 'target': 'ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:07.998 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf66413c8-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:08 compute-0 nova_compute[254092]: 2025-11-25 16:42:08.002 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:08.012 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf66413c8-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:08.013 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:42:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:08.013 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf66413c8-50, col_values=(('external_ids', {'iface-id': '347c541f-24a8-4230-9881-74343160f6c8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:08 compute-0 nova_compute[254092]: 2025-11-25 16:42:08.012 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:08.013 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:42:08 compute-0 nova_compute[254092]: 2025-11-25 16:42:08.021 254096 INFO nova.virt.libvirt.driver [-] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Instance destroyed successfully.
Nov 25 16:42:08 compute-0 nova_compute[254092]: 2025-11-25 16:42:08.023 254096 DEBUG nova.objects.instance [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lazy-loading 'resources' on Instance uuid 796c46a8-971c-4b51-96c9-0e7c8682cfa8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:42:08 compute-0 nova_compute[254092]: 2025-11-25 16:42:08.036 254096 DEBUG nova.virt.libvirt.vif [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:41:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1199053129',display_name='tempest-MultipleCreateTestJSON-server-1199053129-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1199053129-1',id=68,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:42:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='20e192e04e814ae28e09f3c1fdbd39d0',ramdisk_id='',reservation_id='r-vw7x2v7w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-193123640',owner_user_name='tempest-MultipleCreateTestJSON-193123640-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:42:03Z,user_data=None,user_id='03b851e958654346bc1caa437d7b76a5',uuid=796c46a8-971c-4b51-96c9-0e7c8682cfa8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9ef3b6ce-50de-444f-b27f-b16a5b2b832a", "address": "fa:16:3e:b9:ae:2e", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ef3b6ce-50", "ovs_interfaceid": "9ef3b6ce-50de-444f-b27f-b16a5b2b832a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:42:08 compute-0 nova_compute[254092]: 2025-11-25 16:42:08.036 254096 DEBUG nova.network.os_vif_util [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Converting VIF {"id": "9ef3b6ce-50de-444f-b27f-b16a5b2b832a", "address": "fa:16:3e:b9:ae:2e", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ef3b6ce-50", "ovs_interfaceid": "9ef3b6ce-50de-444f-b27f-b16a5b2b832a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:42:08 compute-0 nova_compute[254092]: 2025-11-25 16:42:08.037 254096 DEBUG nova.network.os_vif_util [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:ae:2e,bridge_name='br-int',has_traffic_filtering=True,id=9ef3b6ce-50de-444f-b27f-b16a5b2b832a,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ef3b6ce-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:42:08 compute-0 nova_compute[254092]: 2025-11-25 16:42:08.038 254096 DEBUG os_vif [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:ae:2e,bridge_name='br-int',has_traffic_filtering=True,id=9ef3b6ce-50de-444f-b27f-b16a5b2b832a,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ef3b6ce-50') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:42:08 compute-0 nova_compute[254092]: 2025-11-25 16:42:08.040 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:08 compute-0 nova_compute[254092]: 2025-11-25 16:42:08.040 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ef3b6ce-50, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:08 compute-0 nova_compute[254092]: 2025-11-25 16:42:08.044 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:08 compute-0 nova_compute[254092]: 2025-11-25 16:42:08.046 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:08 compute-0 nova_compute[254092]: 2025-11-25 16:42:08.050 254096 INFO os_vif [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:ae:2e,bridge_name='br-int',has_traffic_filtering=True,id=9ef3b6ce-50de-444f-b27f-b16a5b2b832a,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ef3b6ce-50')
Nov 25 16:42:08 compute-0 nova_compute[254092]: 2025-11-25 16:42:08.089 254096 DEBUG oslo_concurrency.lockutils [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "b5e2a584-5835-4c63-84de-6f0446220d35" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:08 compute-0 nova_compute[254092]: 2025-11-25 16:42:08.090 254096 DEBUG oslo_concurrency.lockutils [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "b5e2a584-5835-4c63-84de-6f0446220d35" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:08 compute-0 nova_compute[254092]: 2025-11-25 16:42:08.090 254096 DEBUG oslo_concurrency.lockutils [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "b5e2a584-5835-4c63-84de-6f0446220d35-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:08 compute-0 nova_compute[254092]: 2025-11-25 16:42:08.090 254096 DEBUG oslo_concurrency.lockutils [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "b5e2a584-5835-4c63-84de-6f0446220d35-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:08 compute-0 nova_compute[254092]: 2025-11-25 16:42:08.090 254096 DEBUG oslo_concurrency.lockutils [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "b5e2a584-5835-4c63-84de-6f0446220d35-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:08 compute-0 nova_compute[254092]: 2025-11-25 16:42:08.092 254096 INFO nova.compute.manager [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Terminating instance
Nov 25 16:42:08 compute-0 nova_compute[254092]: 2025-11-25 16:42:08.093 254096 DEBUG nova.compute.manager [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:42:08 compute-0 kernel: tapd2de6446-cc (unregistering): left promiscuous mode
Nov 25 16:42:08 compute-0 NetworkManager[48891]: <info>  [1764088928.1371] device (tapd2de6446-cc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:42:08 compute-0 nova_compute[254092]: 2025-11-25 16:42:08.143 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:08 compute-0 ovn_controller[153477]: 2025-11-25T16:42:08Z|00645|binding|INFO|Releasing lport d2de6446-cca8-4827-a039-647fe671bab4 from this chassis (sb_readonly=0)
Nov 25 16:42:08 compute-0 ovn_controller[153477]: 2025-11-25T16:42:08Z|00646|binding|INFO|Setting lport d2de6446-cca8-4827-a039-647fe671bab4 down in Southbound
Nov 25 16:42:08 compute-0 ovn_controller[153477]: 2025-11-25T16:42:08Z|00647|binding|INFO|Removing iface tapd2de6446-cc ovn-installed in OVS
Nov 25 16:42:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:08.150 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a1:2d:7c 10.100.0.4'], port_security=['fa:16:3e:a1:2d:7c 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'b5e2a584-5835-4c63-84de-6f0446220d35', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f66413c8-5cde-4f70-af70-6b7886c1219f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '20e192e04e814ae28e09f3c1fdbd39d0', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd45b4fe4-a7ee-460c-87e5-fc7e0e415a7e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82931fa4-5f59-456f-908b-c16b2ecf859e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=d2de6446-cca8-4827-a039-647fe671bab4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:42:08 compute-0 nova_compute[254092]: 2025-11-25 16:42:08.153 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:08.153 163338 INFO neutron.agent.ovn.metadata.agent [-] Port d2de6446-cca8-4827-a039-647fe671bab4 in datapath f66413c8-5cde-4f70-af70-6b7886c1219f unbound from our chassis
Nov 25 16:42:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:08.157 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f66413c8-5cde-4f70-af70-6b7886c1219f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:42:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:08.158 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7123e92b-24e9-47bb-888c-f48d38e116f9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:08.159 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f namespace which is not needed anymore
Nov 25 16:42:08 compute-0 nova_compute[254092]: 2025-11-25 16:42:08.165 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:08 compute-0 nova_compute[254092]: 2025-11-25 16:42:08.202 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:08 compute-0 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d00000045.scope: Deactivated successfully.
Nov 25 16:42:08 compute-0 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d00000045.scope: Consumed 3.832s CPU time.
Nov 25 16:42:08 compute-0 systemd-machined[216343]: Machine qemu-81-instance-00000045 terminated.
Nov 25 16:42:08 compute-0 nova_compute[254092]: 2025-11-25 16:42:08.333 254096 INFO nova.virt.libvirt.driver [-] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Instance destroyed successfully.
Nov 25 16:42:08 compute-0 nova_compute[254092]: 2025-11-25 16:42:08.333 254096 DEBUG nova.objects.instance [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lazy-loading 'resources' on Instance uuid b5e2a584-5835-4c63-84de-6f0446220d35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:42:08 compute-0 nova_compute[254092]: 2025-11-25 16:42:08.348 254096 DEBUG nova.virt.libvirt.vif [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:41:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1199053129',display_name='tempest-MultipleCreateTestJSON-server-1199053129-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1199053129-2',id=69,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-11-25T16:42:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='20e192e04e814ae28e09f3c1fdbd39d0',ramdisk_id='',reservation_id='r-vw7x2v7w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-193123640',owner_user_name='tempest-MultipleCreateTestJSON-193123640-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:42:05Z,user_data=None,user_id='03b851e958654346bc1caa437d7b76a5',uuid=b5e2a584-5835-4c63-84de-6f0446220d35,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d2de6446-cca8-4827-a039-647fe671bab4", "address": "fa:16:3e:a1:2d:7c", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2de6446-cc", "ovs_interfaceid": "d2de6446-cca8-4827-a039-647fe671bab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:42:08 compute-0 nova_compute[254092]: 2025-11-25 16:42:08.348 254096 DEBUG nova.network.os_vif_util [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Converting VIF {"id": "d2de6446-cca8-4827-a039-647fe671bab4", "address": "fa:16:3e:a1:2d:7c", "network": {"id": "f66413c8-5cde-4f70-af70-6b7886c1219f", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-74461390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20e192e04e814ae28e09f3c1fdbd39d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd2de6446-cc", "ovs_interfaceid": "d2de6446-cca8-4827-a039-647fe671bab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:42:08 compute-0 nova_compute[254092]: 2025-11-25 16:42:08.349 254096 DEBUG nova.network.os_vif_util [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a1:2d:7c,bridge_name='br-int',has_traffic_filtering=True,id=d2de6446-cca8-4827-a039-647fe671bab4,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd2de6446-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:42:08 compute-0 nova_compute[254092]: 2025-11-25 16:42:08.349 254096 DEBUG os_vif [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:2d:7c,bridge_name='br-int',has_traffic_filtering=True,id=d2de6446-cca8-4827-a039-647fe671bab4,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd2de6446-cc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:42:08 compute-0 nova_compute[254092]: 2025-11-25 16:42:08.350 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:08 compute-0 nova_compute[254092]: 2025-11-25 16:42:08.351 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd2de6446-cc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:08 compute-0 nova_compute[254092]: 2025-11-25 16:42:08.352 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:08 compute-0 nova_compute[254092]: 2025-11-25 16:42:08.353 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:08 compute-0 nova_compute[254092]: 2025-11-25 16:42:08.355 254096 INFO os_vif [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:2d:7c,bridge_name='br-int',has_traffic_filtering=True,id=d2de6446-cca8-4827-a039-647fe671bab4,network=Network(f66413c8-5cde-4f70-af70-6b7886c1219f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd2de6446-cc')
Nov 25 16:42:08 compute-0 neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f[327628]: [NOTICE]   (327632) : haproxy version is 2.8.14-c23fe91
Nov 25 16:42:08 compute-0 neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f[327628]: [NOTICE]   (327632) : path to executable is /usr/sbin/haproxy
Nov 25 16:42:08 compute-0 neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f[327628]: [WARNING]  (327632) : Exiting Master process...
Nov 25 16:42:08 compute-0 neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f[327628]: [ALERT]    (327632) : Current worker (327634) exited with code 143 (Terminated)
Nov 25 16:42:08 compute-0 neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f[327628]: [WARNING]  (327632) : All workers exited. Exiting... (0)
Nov 25 16:42:08 compute-0 systemd[1]: libpod-df9b579937c6ff8f75757c980c85710facb345c383e33bc6eaf9895651a78beb.scope: Deactivated successfully.
Nov 25 16:42:08 compute-0 conmon[327628]: conmon df9b579937c6ff8f7575 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-df9b579937c6ff8f75757c980c85710facb345c383e33bc6eaf9895651a78beb.scope/container/memory.events
Nov 25 16:42:08 compute-0 podman[327714]: 2025-11-25 16:42:08.395081214 +0000 UTC m=+0.123878212 container died df9b579937c6ff8f75757c980c85710facb345c383e33bc6eaf9895651a78beb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 16:42:08 compute-0 nova_compute[254092]: 2025-11-25 16:42:08.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:42:08 compute-0 nova_compute[254092]: 2025-11-25 16:42:08.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:42:08 compute-0 nova_compute[254092]: 2025-11-25 16:42:08.508 254096 DEBUG nova.compute.manager [req-b37ba82f-05f9-4b72-9960-c4a3150da9f1 req-82299522-67a1-4ff8-b967-42e2a8e8bb3e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Received event network-vif-unplugged-d2de6446-cca8-4827-a039-647fe671bab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:42:08 compute-0 nova_compute[254092]: 2025-11-25 16:42:08.508 254096 DEBUG oslo_concurrency.lockutils [req-b37ba82f-05f9-4b72-9960-c4a3150da9f1 req-82299522-67a1-4ff8-b967-42e2a8e8bb3e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b5e2a584-5835-4c63-84de-6f0446220d35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:08 compute-0 nova_compute[254092]: 2025-11-25 16:42:08.509 254096 DEBUG oslo_concurrency.lockutils [req-b37ba82f-05f9-4b72-9960-c4a3150da9f1 req-82299522-67a1-4ff8-b967-42e2a8e8bb3e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b5e2a584-5835-4c63-84de-6f0446220d35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:08 compute-0 nova_compute[254092]: 2025-11-25 16:42:08.509 254096 DEBUG oslo_concurrency.lockutils [req-b37ba82f-05f9-4b72-9960-c4a3150da9f1 req-82299522-67a1-4ff8-b967-42e2a8e8bb3e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b5e2a584-5835-4c63-84de-6f0446220d35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:08 compute-0 nova_compute[254092]: 2025-11-25 16:42:08.509 254096 DEBUG nova.compute.manager [req-b37ba82f-05f9-4b72-9960-c4a3150da9f1 req-82299522-67a1-4ff8-b967-42e2a8e8bb3e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] No waiting events found dispatching network-vif-unplugged-d2de6446-cca8-4827-a039-647fe671bab4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:42:08 compute-0 nova_compute[254092]: 2025-11-25 16:42:08.509 254096 DEBUG nova.compute.manager [req-b37ba82f-05f9-4b72-9960-c4a3150da9f1 req-82299522-67a1-4ff8-b967-42e2a8e8bb3e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Received event network-vif-unplugged-d2de6446-cca8-4827-a039-647fe671bab4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:42:08 compute-0 nova_compute[254092]: 2025-11-25 16:42:08.517 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:08 compute-0 nova_compute[254092]: 2025-11-25 16:42:08.517 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:08 compute-0 nova_compute[254092]: 2025-11-25 16:42:08.517 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:08 compute-0 nova_compute[254092]: 2025-11-25 16:42:08.517 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 16:42:08 compute-0 nova_compute[254092]: 2025-11-25 16:42:08.517 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:42:08 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-df9b579937c6ff8f75757c980c85710facb345c383e33bc6eaf9895651a78beb-userdata-shm.mount: Deactivated successfully.
Nov 25 16:42:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-211f7d06795e09a0a0b405763765b8431f236ef9df850c7ebdf97675234af87d-merged.mount: Deactivated successfully.
Nov 25 16:42:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:42:08 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1538972660' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:42:08 compute-0 podman[327714]: 2025-11-25 16:42:08.972291413 +0000 UTC m=+0.701088391 container cleanup df9b579937c6ff8f75757c980c85710facb345c383e33bc6eaf9895651a78beb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 16:42:08 compute-0 systemd[1]: libpod-conmon-df9b579937c6ff8f75757c980c85710facb345c383e33bc6eaf9895651a78beb.scope: Deactivated successfully.
Nov 25 16:42:08 compute-0 nova_compute[254092]: 2025-11-25 16:42:08.987 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:42:09 compute-0 ceph-mon[74985]: pgmap v1702: 321 pgs: 321 active+clean; 372 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.6 MiB/s wr, 206 op/s
Nov 25 16:42:09 compute-0 nova_compute[254092]: 2025-11-25 16:42:09.089 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000045 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:42:09 compute-0 nova_compute[254092]: 2025-11-25 16:42:09.089 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000045 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:42:09 compute-0 nova_compute[254092]: 2025-11-25 16:42:09.092 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000044 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:42:09 compute-0 nova_compute[254092]: 2025-11-25 16:42:09.093 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000044 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:42:09 compute-0 nova_compute[254092]: 2025-11-25 16:42:09.096 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000003f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:42:09 compute-0 nova_compute[254092]: 2025-11-25 16:42:09.097 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000003f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:42:09 compute-0 nova_compute[254092]: 2025-11-25 16:42:09.100 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000040 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:42:09 compute-0 nova_compute[254092]: 2025-11-25 16:42:09.101 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000040 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:42:09 compute-0 nova_compute[254092]: 2025-11-25 16:42:09.106 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000003e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:42:09 compute-0 nova_compute[254092]: 2025-11-25 16:42:09.106 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000003e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:42:09 compute-0 podman[327792]: 2025-11-25 16:42:09.200715401 +0000 UTC m=+0.204255233 container remove df9b579937c6ff8f75757c980c85710facb345c383e33bc6eaf9895651a78beb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 16:42:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:09.208 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[efeeee9d-6351-4779-8896-0a3d9cf6b50f]: (4, ('Tue Nov 25 04:42:08 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f (df9b579937c6ff8f75757c980c85710facb345c383e33bc6eaf9895651a78beb)\ndf9b579937c6ff8f75757c980c85710facb345c383e33bc6eaf9895651a78beb\nTue Nov 25 04:42:08 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f (df9b579937c6ff8f75757c980c85710facb345c383e33bc6eaf9895651a78beb)\ndf9b579937c6ff8f75757c980c85710facb345c383e33bc6eaf9895651a78beb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:09.214 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a37cbff5-0b15-4836-9fff-92f3734a8393]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:09.217 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf66413c8-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:09 compute-0 nova_compute[254092]: 2025-11-25 16:42:09.218 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:09 compute-0 kernel: tapf66413c8-50: left promiscuous mode
Nov 25 16:42:09 compute-0 nova_compute[254092]: 2025-11-25 16:42:09.248 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:09.251 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[435d18c1-f89e-4cf4-a2ec-f895c5785aca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:09 compute-0 podman[327793]: 2025-11-25 16:42:09.264883941 +0000 UTC m=+0.251890954 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd)
Nov 25 16:42:09 compute-0 podman[327800]: 2025-11-25 16:42:09.268021056 +0000 UTC m=+0.247777503 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 16:42:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:09.267 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5f7ac778-b387-4eb3-9d78-522dcf40f26f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:09.271 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8e205491-4680-4d03-93f3-e0aee498c16f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:09.288 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5075e03e-bf11-4e2e-8e0e-0be300969d3c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538069, 'reachable_time': 22413, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327866, 'error': None, 'target': 'ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:09 compute-0 systemd[1]: run-netns-ovnmeta\x2df66413c8\x2d5cde\x2d4f70\x2daf70\x2d6b7886c1219f.mount: Deactivated successfully.
Nov 25 16:42:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:09.295 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f66413c8-5cde-4f70-af70-6b7886c1219f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:42:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:09.295 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[a3450458-74ee-427c-a613-3cd82a261b65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:09 compute-0 podman[327801]: 2025-11-25 16:42:09.298765721 +0000 UTC m=+0.268086725 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 25 16:42:09 compute-0 nova_compute[254092]: 2025-11-25 16:42:09.411 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:42:09 compute-0 nova_compute[254092]: 2025-11-25 16:42:09.413 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3441MB free_disk=59.80998611450195GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 16:42:09 compute-0 nova_compute[254092]: 2025-11-25 16:42:09.413 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:09 compute-0 nova_compute[254092]: 2025-11-25 16:42:09.414 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:09 compute-0 nova_compute[254092]: 2025-11-25 16:42:09.502 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 013dc18e-57cd-4733-8e98-7d20e3b5c4db actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:42:09 compute-0 nova_compute[254092]: 2025-11-25 16:42:09.502 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance dce3a591-9fb6-4495-a7fb-867af2de384f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:42:09 compute-0 nova_compute[254092]: 2025-11-25 16:42:09.502 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance b5c5a442-8e8e-40c5-9634-e36c49e6e41b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:42:09 compute-0 nova_compute[254092]: 2025-11-25 16:42:09.502 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 796c46a8-971c-4b51-96c9-0e7c8682cfa8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:42:09 compute-0 nova_compute[254092]: 2025-11-25 16:42:09.502 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance b5e2a584-5835-4c63-84de-6f0446220d35 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:42:09 compute-0 nova_compute[254092]: 2025-11-25 16:42:09.502 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 16:42:09 compute-0 nova_compute[254092]: 2025-11-25 16:42:09.503 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1216MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 16:42:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1703: 321 pgs: 321 active+clean; 372 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.0 MiB/s wr, 203 op/s
Nov 25 16:42:09 compute-0 nova_compute[254092]: 2025-11-25 16:42:09.624 254096 INFO nova.virt.libvirt.driver [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Deleting instance files /var/lib/nova/instances/796c46a8-971c-4b51-96c9-0e7c8682cfa8_del
Nov 25 16:42:09 compute-0 nova_compute[254092]: 2025-11-25 16:42:09.625 254096 INFO nova.virt.libvirt.driver [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Deletion of /var/lib/nova/instances/796c46a8-971c-4b51-96c9-0e7c8682cfa8_del complete
Nov 25 16:42:09 compute-0 nova_compute[254092]: 2025-11-25 16:42:09.693 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:42:09 compute-0 nova_compute[254092]: 2025-11-25 16:42:09.735 254096 INFO nova.virt.libvirt.driver [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Deleting instance files /var/lib/nova/instances/b5e2a584-5835-4c63-84de-6f0446220d35_del
Nov 25 16:42:09 compute-0 nova_compute[254092]: 2025-11-25 16:42:09.736 254096 INFO nova.virt.libvirt.driver [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Deletion of /var/lib/nova/instances/b5e2a584-5835-4c63-84de-6f0446220d35_del complete
Nov 25 16:42:09 compute-0 nova_compute[254092]: 2025-11-25 16:42:09.742 254096 INFO nova.compute.manager [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Took 1.96 seconds to destroy the instance on the hypervisor.
Nov 25 16:42:09 compute-0 nova_compute[254092]: 2025-11-25 16:42:09.742 254096 DEBUG oslo.service.loopingcall [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:42:09 compute-0 nova_compute[254092]: 2025-11-25 16:42:09.742 254096 DEBUG nova.compute.manager [-] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:42:09 compute-0 nova_compute[254092]: 2025-11-25 16:42:09.743 254096 DEBUG nova.network.neutron [-] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:42:09 compute-0 nova_compute[254092]: 2025-11-25 16:42:09.826 254096 INFO nova.compute.manager [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Took 1.73 seconds to destroy the instance on the hypervisor.
Nov 25 16:42:09 compute-0 nova_compute[254092]: 2025-11-25 16:42:09.826 254096 DEBUG oslo.service.loopingcall [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:42:09 compute-0 nova_compute[254092]: 2025-11-25 16:42:09.827 254096 DEBUG nova.compute.manager [-] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:42:09 compute-0 nova_compute[254092]: 2025-11-25 16:42:09.827 254096 DEBUG nova.network.neutron [-] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:42:10 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1538972660' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:42:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:42:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:42:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:42:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:42:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:42:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:42:10 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Nov 25 16:42:10 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:42:10.067550) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 16:42:10 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Nov 25 16:42:10 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088930067598, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 1516, "num_deletes": 254, "total_data_size": 2123862, "memory_usage": 2159280, "flush_reason": "Manual Compaction"}
Nov 25 16:42:10 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Nov 25 16:42:10 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088930088250, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 2089018, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33889, "largest_seqno": 35404, "table_properties": {"data_size": 2082114, "index_size": 3915, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15692, "raw_average_key_size": 20, "raw_value_size": 2067728, "raw_average_value_size": 2702, "num_data_blocks": 174, "num_entries": 765, "num_filter_entries": 765, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764088800, "oldest_key_time": 1764088800, "file_creation_time": 1764088930, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:42:10 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 21165 microseconds, and 5728 cpu microseconds.
Nov 25 16:42:10 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:42:10 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:42:10.088719) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 2089018 bytes OK
Nov 25 16:42:10 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:42:10.088836) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Nov 25 16:42:10 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:42:10.090541) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Nov 25 16:42:10 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:42:10.090561) EVENT_LOG_v1 {"time_micros": 1764088930090554, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 16:42:10 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:42:10.090580) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 16:42:10 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 2117088, prev total WAL file size 2117088, number of live WAL files 2.
Nov 25 16:42:10 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:42:10 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:42:10.092015) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Nov 25 16:42:10 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 16:42:10 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(2040KB)], [74(9439KB)]
Nov 25 16:42:10 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088930092889, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 11755346, "oldest_snapshot_seqno": -1}
Nov 25 16:42:10 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6054 keys, 10095973 bytes, temperature: kUnknown
Nov 25 16:42:10 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088930160393, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 10095973, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10052813, "index_size": 26922, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15173, "raw_key_size": 153357, "raw_average_key_size": 25, "raw_value_size": 9941366, "raw_average_value_size": 1642, "num_data_blocks": 1093, "num_entries": 6054, "num_filter_entries": 6054, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764088930, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:42:10 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:42:10 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:42:10.160677) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 10095973 bytes
Nov 25 16:42:10 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:42:10.163051) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 174.1 rd, 149.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 9.2 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(10.5) write-amplify(4.8) OK, records in: 6577, records dropped: 523 output_compression: NoCompression
Nov 25 16:42:10 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:42:10.163076) EVENT_LOG_v1 {"time_micros": 1764088930163066, "job": 42, "event": "compaction_finished", "compaction_time_micros": 67531, "compaction_time_cpu_micros": 24678, "output_level": 6, "num_output_files": 1, "total_output_size": 10095973, "num_input_records": 6577, "num_output_records": 6054, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 16:42:10 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:42:10 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088930163475, "job": 42, "event": "table_file_deletion", "file_number": 76}
Nov 25 16:42:10 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:42:10 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764088930165044, "job": 42, "event": "table_file_deletion", "file_number": 74}
Nov 25 16:42:10 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:42:10.091920) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:42:10 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:42:10.165121) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:42:10 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:42:10.165127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:42:10 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:42:10.165129) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:42:10 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:42:10.165131) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:42:10 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:42:10.165133) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:42:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:42:10 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2710764261' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:42:10 compute-0 nova_compute[254092]: 2025-11-25 16:42:10.188 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:42:10 compute-0 nova_compute[254092]: 2025-11-25 16:42:10.193 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:42:10 compute-0 nova_compute[254092]: 2025-11-25 16:42:10.206 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:42:10 compute-0 nova_compute[254092]: 2025-11-25 16:42:10.230 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 16:42:10 compute-0 nova_compute[254092]: 2025-11-25 16:42:10.230 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.817s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:42:10 compute-0 nova_compute[254092]: 2025-11-25 16:42:10.950 254096 DEBUG nova.compute.manager [req-5a52f76d-f625-448a-a6ea-6024d05bdc02 req-f7216925-61d1-4137-a268-824dd6d97b3b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Received event network-vif-plugged-d2de6446-cca8-4827-a039-647fe671bab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:42:10 compute-0 nova_compute[254092]: 2025-11-25 16:42:10.951 254096 DEBUG oslo_concurrency.lockutils [req-5a52f76d-f625-448a-a6ea-6024d05bdc02 req-f7216925-61d1-4137-a268-824dd6d97b3b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b5e2a584-5835-4c63-84de-6f0446220d35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:10 compute-0 nova_compute[254092]: 2025-11-25 16:42:10.951 254096 DEBUG oslo_concurrency.lockutils [req-5a52f76d-f625-448a-a6ea-6024d05bdc02 req-f7216925-61d1-4137-a268-824dd6d97b3b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b5e2a584-5835-4c63-84de-6f0446220d35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:10 compute-0 nova_compute[254092]: 2025-11-25 16:42:10.951 254096 DEBUG oslo_concurrency.lockutils [req-5a52f76d-f625-448a-a6ea-6024d05bdc02 req-f7216925-61d1-4137-a268-824dd6d97b3b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b5e2a584-5835-4c63-84de-6f0446220d35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:10 compute-0 nova_compute[254092]: 2025-11-25 16:42:10.951 254096 DEBUG nova.compute.manager [req-5a52f76d-f625-448a-a6ea-6024d05bdc02 req-f7216925-61d1-4137-a268-824dd6d97b3b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] No waiting events found dispatching network-vif-plugged-d2de6446-cca8-4827-a039-647fe671bab4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:42:10 compute-0 nova_compute[254092]: 2025-11-25 16:42:10.951 254096 WARNING nova.compute.manager [req-5a52f76d-f625-448a-a6ea-6024d05bdc02 req-f7216925-61d1-4137-a268-824dd6d97b3b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Received unexpected event network-vif-plugged-d2de6446-cca8-4827-a039-647fe671bab4 for instance with vm_state active and task_state deleting.
Nov 25 16:42:11 compute-0 nova_compute[254092]: 2025-11-25 16:42:11.077 254096 DEBUG nova.compute.manager [req-fd16d982-81fd-42d4-b446-d705f14bf685 req-70fcb56d-77d8-4921-8b02-6ebee0300f33 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Received event network-vif-unplugged-9ef3b6ce-50de-444f-b27f-b16a5b2b832a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:42:11 compute-0 nova_compute[254092]: 2025-11-25 16:42:11.077 254096 DEBUG oslo_concurrency.lockutils [req-fd16d982-81fd-42d4-b446-d705f14bf685 req-70fcb56d-77d8-4921-8b02-6ebee0300f33 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "796c46a8-971c-4b51-96c9-0e7c8682cfa8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:11 compute-0 nova_compute[254092]: 2025-11-25 16:42:11.078 254096 DEBUG oslo_concurrency.lockutils [req-fd16d982-81fd-42d4-b446-d705f14bf685 req-70fcb56d-77d8-4921-8b02-6ebee0300f33 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "796c46a8-971c-4b51-96c9-0e7c8682cfa8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:11 compute-0 nova_compute[254092]: 2025-11-25 16:42:11.078 254096 DEBUG oslo_concurrency.lockutils [req-fd16d982-81fd-42d4-b446-d705f14bf685 req-70fcb56d-77d8-4921-8b02-6ebee0300f33 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "796c46a8-971c-4b51-96c9-0e7c8682cfa8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:11 compute-0 nova_compute[254092]: 2025-11-25 16:42:11.079 254096 DEBUG nova.compute.manager [req-fd16d982-81fd-42d4-b446-d705f14bf685 req-70fcb56d-77d8-4921-8b02-6ebee0300f33 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] No waiting events found dispatching network-vif-unplugged-9ef3b6ce-50de-444f-b27f-b16a5b2b832a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:42:11 compute-0 nova_compute[254092]: 2025-11-25 16:42:11.079 254096 DEBUG nova.compute.manager [req-fd16d982-81fd-42d4-b446-d705f14bf685 req-70fcb56d-77d8-4921-8b02-6ebee0300f33 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Received event network-vif-unplugged-9ef3b6ce-50de-444f-b27f-b16a5b2b832a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:42:11 compute-0 ceph-mon[74985]: pgmap v1703: 321 pgs: 321 active+clean; 372 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.0 MiB/s wr, 203 op/s
Nov 25 16:42:11 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2710764261' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:42:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1704: 321 pgs: 321 active+clean; 279 MiB data, 702 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.0 MiB/s wr, 323 op/s
Nov 25 16:42:12 compute-0 nova_compute[254092]: 2025-11-25 16:42:12.149 254096 DEBUG nova.network.neutron [-] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:42:12 compute-0 nova_compute[254092]: 2025-11-25 16:42:12.150 254096 DEBUG nova.network.neutron [-] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:42:12 compute-0 nova_compute[254092]: 2025-11-25 16:42:12.167 254096 INFO nova.compute.manager [-] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Took 2.34 seconds to deallocate network for instance.
Nov 25 16:42:12 compute-0 nova_compute[254092]: 2025-11-25 16:42:12.181 254096 INFO nova.compute.manager [-] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Took 2.44 seconds to deallocate network for instance.
Nov 25 16:42:12 compute-0 nova_compute[254092]: 2025-11-25 16:42:12.253 254096 DEBUG oslo_concurrency.lockutils [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:12 compute-0 nova_compute[254092]: 2025-11-25 16:42:12.253 254096 DEBUG oslo_concurrency.lockutils [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:12 compute-0 nova_compute[254092]: 2025-11-25 16:42:12.265 254096 DEBUG oslo_concurrency.lockutils [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:12 compute-0 nova_compute[254092]: 2025-11-25 16:42:12.368 254096 DEBUG oslo_concurrency.processutils [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:42:12 compute-0 nova_compute[254092]: 2025-11-25 16:42:12.574 254096 DEBUG oslo_concurrency.lockutils [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "b5c5a442-8e8e-40c5-9634-e36c49e6e41b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:12 compute-0 nova_compute[254092]: 2025-11-25 16:42:12.575 254096 DEBUG oslo_concurrency.lockutils [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "b5c5a442-8e8e-40c5-9634-e36c49e6e41b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:12 compute-0 nova_compute[254092]: 2025-11-25 16:42:12.575 254096 DEBUG oslo_concurrency.lockutils [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "b5c5a442-8e8e-40c5-9634-e36c49e6e41b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:12 compute-0 nova_compute[254092]: 2025-11-25 16:42:12.576 254096 DEBUG oslo_concurrency.lockutils [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "b5c5a442-8e8e-40c5-9634-e36c49e6e41b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:12 compute-0 nova_compute[254092]: 2025-11-25 16:42:12.576 254096 DEBUG oslo_concurrency.lockutils [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "b5c5a442-8e8e-40c5-9634-e36c49e6e41b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:12 compute-0 nova_compute[254092]: 2025-11-25 16:42:12.577 254096 INFO nova.compute.manager [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Terminating instance
Nov 25 16:42:12 compute-0 nova_compute[254092]: 2025-11-25 16:42:12.578 254096 DEBUG nova.compute.manager [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:42:12 compute-0 kernel: tap1404e99c-a3 (unregistering): left promiscuous mode
Nov 25 16:42:12 compute-0 NetworkManager[48891]: <info>  [1764088932.6514] device (tap1404e99c-a3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:42:12 compute-0 nova_compute[254092]: 2025-11-25 16:42:12.660 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:12 compute-0 ovn_controller[153477]: 2025-11-25T16:42:12Z|00648|binding|INFO|Releasing lport 1404e99c-a32c-404a-a7d6-3daccc67c48b from this chassis (sb_readonly=0)
Nov 25 16:42:12 compute-0 ovn_controller[153477]: 2025-11-25T16:42:12Z|00649|binding|INFO|Setting lport 1404e99c-a32c-404a-a7d6-3daccc67c48b down in Southbound
Nov 25 16:42:12 compute-0 ovn_controller[153477]: 2025-11-25T16:42:12Z|00650|binding|INFO|Removing iface tap1404e99c-a3 ovn-installed in OVS
Nov 25 16:42:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.670 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:dd:b0 10.100.0.4'], port_security=['fa:16:3e:68:dd:b0 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'b5c5a442-8e8e-40c5-9634-e36c49e6e41b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2319f378c7fb448ebe89427d8dfa7e43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ccb207e5-4339-4ba4-8da5-c985ba20f305', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32b6d34d-eb18-415e-af2d-9e007dc0f6c6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1404e99c-a32c-404a-a7d6-3daccc67c48b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:42:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.671 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1404e99c-a32c-404a-a7d6-3daccc67c48b in datapath f00f265b-63fa-48fb-9383-38ff6abf51c1 unbound from our chassis
Nov 25 16:42:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.672 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f00f265b-63fa-48fb-9383-38ff6abf51c1
Nov 25 16:42:12 compute-0 nova_compute[254092]: 2025-11-25 16:42:12.684 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.694 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9e2de614-672d-4bb5-8098-a07e263c3afe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:12 compute-0 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d00000040.scope: Deactivated successfully.
Nov 25 16:42:12 compute-0 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d00000040.scope: Consumed 16.952s CPU time.
Nov 25 16:42:12 compute-0 systemd-machined[216343]: Machine qemu-75-instance-00000040 terminated.
Nov 25 16:42:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.730 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[73434ee3-394d-4fbf-9e2b-d0f4c8aeceee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.735 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f58adba7-4104-4bd9-a5d4-89a110248fe5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.760 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[30a2ec2c-6986-4c58-92be-85381a5eaff1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.781 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8e272aef-1695-48ee-b6ca-71bec07e235f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf00f265b-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:fc:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 13, 'rx_bytes': 742, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 13, 'rx_bytes': 742, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534207, 'reachable_time': 15987, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327924, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:12 compute-0 kernel: tap1404e99c-a3: entered promiscuous mode
Nov 25 16:42:12 compute-0 NetworkManager[48891]: <info>  [1764088932.8048] manager: (tap1404e99c-a3): new Tun device (/org/freedesktop/NetworkManager/Devices/281)
Nov 25 16:42:12 compute-0 systemd-udevd[327917]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:42:12 compute-0 kernel: tap1404e99c-a3 (unregistering): left promiscuous mode
Nov 25 16:42:12 compute-0 nova_compute[254092]: 2025-11-25 16:42:12.808 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:12 compute-0 ovn_controller[153477]: 2025-11-25T16:42:12Z|00651|binding|INFO|Claiming lport 1404e99c-a32c-404a-a7d6-3daccc67c48b for this chassis.
Nov 25 16:42:12 compute-0 ovn_controller[153477]: 2025-11-25T16:42:12Z|00652|binding|INFO|1404e99c-a32c-404a-a7d6-3daccc67c48b: Claiming fa:16:3e:68:dd:b0 10.100.0.4
Nov 25 16:42:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.806 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2a36a399-0382-4fd4-bd90-4543ddb85504]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf00f265b-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534218, 'tstamp': 534218}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327925, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf00f265b-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534221, 'tstamp': 534221}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327925, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.815 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf00f265b-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.819 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:dd:b0 10.100.0.4'], port_security=['fa:16:3e:68:dd:b0 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'b5c5a442-8e8e-40c5-9634-e36c49e6e41b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2319f378c7fb448ebe89427d8dfa7e43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ccb207e5-4339-4ba4-8da5-c985ba20f305', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32b6d34d-eb18-415e-af2d-9e007dc0f6c6, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1404e99c-a32c-404a-a7d6-3daccc67c48b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:42:12 compute-0 nova_compute[254092]: 2025-11-25 16:42:12.820 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:12 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:42:12 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3605014799' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:42:12 compute-0 ovn_controller[153477]: 2025-11-25T16:42:12Z|00653|binding|INFO|Releasing lport 1404e99c-a32c-404a-a7d6-3daccc67c48b from this chassis (sb_readonly=0)
Nov 25 16:42:12 compute-0 nova_compute[254092]: 2025-11-25 16:42:12.838 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:12 compute-0 nova_compute[254092]: 2025-11-25 16:42:12.840 254096 INFO nova.virt.libvirt.driver [-] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Instance destroyed successfully.
Nov 25 16:42:12 compute-0 nova_compute[254092]: 2025-11-25 16:42:12.840 254096 DEBUG nova.objects.instance [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lazy-loading 'resources' on Instance uuid b5c5a442-8e8e-40c5-9634-e36c49e6e41b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:42:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.846 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:dd:b0 10.100.0.4'], port_security=['fa:16:3e:68:dd:b0 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'b5c5a442-8e8e-40c5-9634-e36c49e6e41b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2319f378c7fb448ebe89427d8dfa7e43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ccb207e5-4339-4ba4-8da5-c985ba20f305', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32b6d34d-eb18-415e-af2d-9e007dc0f6c6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1404e99c-a32c-404a-a7d6-3daccc67c48b) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:42:12 compute-0 nova_compute[254092]: 2025-11-25 16:42:12.854 254096 DEBUG nova.virt.libvirt.vif [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:41:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1125840801',display_name='tempest-ListServerFiltersTestJSON-instance-1125840801',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1125840801',id=64,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:41:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2319f378c7fb448ebe89427d8dfa7e43',ramdisk_id='',reservation_id='r-e1rd6q3l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model=
'virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-675991579',owner_user_name='tempest-ListServerFiltersTestJSON-675991579-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:41:32Z,user_data=None,user_id='e0d077039e0b4d9e8d5663768f40fa48',uuid=b5c5a442-8e8e-40c5-9634-e36c49e6e41b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1404e99c-a32c-404a-a7d6-3daccc67c48b", "address": "fa:16:3e:68:dd:b0", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1404e99c-a3", "ovs_interfaceid": "1404e99c-a32c-404a-a7d6-3daccc67c48b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:42:12 compute-0 nova_compute[254092]: 2025-11-25 16:42:12.855 254096 DEBUG nova.network.os_vif_util [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Converting VIF {"id": "1404e99c-a32c-404a-a7d6-3daccc67c48b", "address": "fa:16:3e:68:dd:b0", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1404e99c-a3", "ovs_interfaceid": "1404e99c-a32c-404a-a7d6-3daccc67c48b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:42:12 compute-0 nova_compute[254092]: 2025-11-25 16:42:12.856 254096 DEBUG nova.network.os_vif_util [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:dd:b0,bridge_name='br-int',has_traffic_filtering=True,id=1404e99c-a32c-404a-a7d6-3daccc67c48b,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1404e99c-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:42:12 compute-0 nova_compute[254092]: 2025-11-25 16:42:12.856 254096 DEBUG os_vif [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:dd:b0,bridge_name='br-int',has_traffic_filtering=True,id=1404e99c-a32c-404a-a7d6-3daccc67c48b,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1404e99c-a3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:42:12 compute-0 nova_compute[254092]: 2025-11-25 16:42:12.858 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:12 compute-0 nova_compute[254092]: 2025-11-25 16:42:12.858 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1404e99c-a3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:12 compute-0 nova_compute[254092]: 2025-11-25 16:42:12.860 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.860 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf00f265b-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.860 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:42:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.861 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf00f265b-60, col_values=(('external_ids', {'iface-id': '57c889f7-e44b-4f52-8e8a-db17b4e1f3b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.861 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:42:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.862 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1404e99c-a32c-404a-a7d6-3daccc67c48b in datapath f00f265b-63fa-48fb-9383-38ff6abf51c1 unbound from our chassis
Nov 25 16:42:12 compute-0 nova_compute[254092]: 2025-11-25 16:42:12.863 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.864 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f00f265b-63fa-48fb-9383-38ff6abf51c1
Nov 25 16:42:12 compute-0 nova_compute[254092]: 2025-11-25 16:42:12.866 254096 INFO os_vif [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:dd:b0,bridge_name='br-int',has_traffic_filtering=True,id=1404e99c-a32c-404a-a7d6-3daccc67c48b,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1404e99c-a3')
Nov 25 16:42:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.879 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[68c1321b-e474-4110-8a57-c76475333409]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:12 compute-0 nova_compute[254092]: 2025-11-25 16:42:12.901 254096 DEBUG oslo_concurrency.processutils [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:42:12 compute-0 nova_compute[254092]: 2025-11-25 16:42:12.909 254096 DEBUG nova.compute.provider_tree [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:42:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.913 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[76588322-e9d2-4272-9a68-6462a3537e82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.916 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[13917fa3-910c-4e99-867e-58eed2428cb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:12 compute-0 nova_compute[254092]: 2025-11-25 16:42:12.924 254096 DEBUG nova.scheduler.client.report [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:42:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.949 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e9fbedb1-1004-4ba2-bbdb-2d10044e1d6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:12 compute-0 nova_compute[254092]: 2025-11-25 16:42:12.953 254096 DEBUG oslo_concurrency.lockutils [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:12 compute-0 nova_compute[254092]: 2025-11-25 16:42:12.956 254096 DEBUG oslo_concurrency.lockutils [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.966 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ed50f186-144d-4bf0-8335-4dc216fd6c13]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf00f265b-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:fc:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 15, 'rx_bytes': 742, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 15, 'rx_bytes': 742, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534207, 'reachable_time': 15987, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327956, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:12 compute-0 nova_compute[254092]: 2025-11-25 16:42:12.977 254096 INFO nova.scheduler.client.report [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Deleted allocations for instance 796c46a8-971c-4b51-96c9-0e7c8682cfa8
Nov 25 16:42:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.983 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2a66976e-5919-4620-849a-95d8d795dfd1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf00f265b-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534218, 'tstamp': 534218}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327957, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf00f265b-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534221, 'tstamp': 534221}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327957, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.985 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf00f265b-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:12 compute-0 nova_compute[254092]: 2025-11-25 16:42:12.986 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:12 compute-0 nova_compute[254092]: 2025-11-25 16:42:12.987 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.988 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf00f265b-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.988 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:42:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.989 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf00f265b-60, col_values=(('external_ids', {'iface-id': '57c889f7-e44b-4f52-8e8a-db17b4e1f3b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.989 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:42:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.990 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1404e99c-a32c-404a-a7d6-3daccc67c48b in datapath f00f265b-63fa-48fb-9383-38ff6abf51c1 unbound from our chassis
Nov 25 16:42:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:12.991 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f00f265b-63fa-48fb-9383-38ff6abf51c1
Nov 25 16:42:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:13.008 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cb66a936-f215-4cb5-ae5b-4deffbd7bb30]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:13.037 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[673acd5c-90be-4e39-a170-cf29cf46b8e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:13.042 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d408c69f-d998-4e55-aadc-7eeaed8c46c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:13 compute-0 nova_compute[254092]: 2025-11-25 16:42:13.044 254096 DEBUG oslo_concurrency.lockutils [None req-5f9b098f-d06b-40a8-ab62-2dd6d05d78c9 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "796c46a8-971c-4b51-96c9-0e7c8682cfa8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.264s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:13.070 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[bf496676-88ef-4364-bba8-c8f6e0f809bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:13 compute-0 nova_compute[254092]: 2025-11-25 16:42:13.083 254096 DEBUG oslo_concurrency.processutils [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:42:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:13.089 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[991eacea-c11f-42c9-9373-399295fd85b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf00f265b-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:fc:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 17, 'rx_bytes': 742, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 17, 'rx_bytes': 742, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534207, 'reachable_time': 15987, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327963, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:13 compute-0 ceph-mon[74985]: pgmap v1704: 321 pgs: 321 active+clean; 279 MiB data, 702 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.0 MiB/s wr, 323 op/s
Nov 25 16:42:13 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3605014799' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:42:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:13.113 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5fc99efd-61fd-4c3b-856f-479f8dd7ae55]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf00f265b-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534218, 'tstamp': 534218}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327966, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf00f265b-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534221, 'tstamp': 534221}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327966, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:13.116 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf00f265b-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:13.120 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf00f265b-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:13.121 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:42:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:13.122 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf00f265b-60, col_values=(('external_ids', {'iface-id': '57c889f7-e44b-4f52-8e8a-db17b4e1f3b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:13.122 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:42:13 compute-0 nova_compute[254092]: 2025-11-25 16:42:13.131 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:13 compute-0 nova_compute[254092]: 2025-11-25 16:42:13.203 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:13 compute-0 nova_compute[254092]: 2025-11-25 16:42:13.215 254096 DEBUG nova.compute.manager [req-db98f889-b65a-43be-833e-046373b80974 req-551497fb-6ac1-48cf-8ed1-613e5b78c27a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Received event network-vif-plugged-9ef3b6ce-50de-444f-b27f-b16a5b2b832a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:42:13 compute-0 nova_compute[254092]: 2025-11-25 16:42:13.216 254096 DEBUG oslo_concurrency.lockutils [req-db98f889-b65a-43be-833e-046373b80974 req-551497fb-6ac1-48cf-8ed1-613e5b78c27a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "796c46a8-971c-4b51-96c9-0e7c8682cfa8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:13 compute-0 nova_compute[254092]: 2025-11-25 16:42:13.216 254096 DEBUG oslo_concurrency.lockutils [req-db98f889-b65a-43be-833e-046373b80974 req-551497fb-6ac1-48cf-8ed1-613e5b78c27a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "796c46a8-971c-4b51-96c9-0e7c8682cfa8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:13 compute-0 nova_compute[254092]: 2025-11-25 16:42:13.216 254096 DEBUG oslo_concurrency.lockutils [req-db98f889-b65a-43be-833e-046373b80974 req-551497fb-6ac1-48cf-8ed1-613e5b78c27a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "796c46a8-971c-4b51-96c9-0e7c8682cfa8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:13 compute-0 nova_compute[254092]: 2025-11-25 16:42:13.217 254096 DEBUG nova.compute.manager [req-db98f889-b65a-43be-833e-046373b80974 req-551497fb-6ac1-48cf-8ed1-613e5b78c27a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] No waiting events found dispatching network-vif-plugged-9ef3b6ce-50de-444f-b27f-b16a5b2b832a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:42:13 compute-0 nova_compute[254092]: 2025-11-25 16:42:13.218 254096 WARNING nova.compute.manager [req-db98f889-b65a-43be-833e-046373b80974 req-551497fb-6ac1-48cf-8ed1-613e5b78c27a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Received unexpected event network-vif-plugged-9ef3b6ce-50de-444f-b27f-b16a5b2b832a for instance with vm_state deleted and task_state None.
Nov 25 16:42:13 compute-0 nova_compute[254092]: 2025-11-25 16:42:13.219 254096 DEBUG nova.compute.manager [req-db98f889-b65a-43be-833e-046373b80974 req-551497fb-6ac1-48cf-8ed1-613e5b78c27a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Received event network-vif-deleted-9ef3b6ce-50de-444f-b27f-b16a5b2b832a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:42:13 compute-0 nova_compute[254092]: 2025-11-25 16:42:13.220 254096 DEBUG nova.compute.manager [req-db98f889-b65a-43be-833e-046373b80974 req-551497fb-6ac1-48cf-8ed1-613e5b78c27a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Received event network-vif-deleted-d2de6446-cca8-4827-a039-647fe671bab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:42:13 compute-0 nova_compute[254092]: 2025-11-25 16:42:13.220 254096 DEBUG nova.compute.manager [req-db98f889-b65a-43be-833e-046373b80974 req-551497fb-6ac1-48cf-8ed1-613e5b78c27a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Received event network-vif-unplugged-1404e99c-a32c-404a-a7d6-3daccc67c48b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:42:13 compute-0 nova_compute[254092]: 2025-11-25 16:42:13.221 254096 DEBUG oslo_concurrency.lockutils [req-db98f889-b65a-43be-833e-046373b80974 req-551497fb-6ac1-48cf-8ed1-613e5b78c27a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b5c5a442-8e8e-40c5-9634-e36c49e6e41b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:13 compute-0 nova_compute[254092]: 2025-11-25 16:42:13.221 254096 DEBUG oslo_concurrency.lockutils [req-db98f889-b65a-43be-833e-046373b80974 req-551497fb-6ac1-48cf-8ed1-613e5b78c27a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b5c5a442-8e8e-40c5-9634-e36c49e6e41b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:13 compute-0 nova_compute[254092]: 2025-11-25 16:42:13.221 254096 DEBUG oslo_concurrency.lockutils [req-db98f889-b65a-43be-833e-046373b80974 req-551497fb-6ac1-48cf-8ed1-613e5b78c27a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b5c5a442-8e8e-40c5-9634-e36c49e6e41b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:13 compute-0 nova_compute[254092]: 2025-11-25 16:42:13.221 254096 DEBUG nova.compute.manager [req-db98f889-b65a-43be-833e-046373b80974 req-551497fb-6ac1-48cf-8ed1-613e5b78c27a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] No waiting events found dispatching network-vif-unplugged-1404e99c-a32c-404a-a7d6-3daccc67c48b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:42:13 compute-0 nova_compute[254092]: 2025-11-25 16:42:13.222 254096 DEBUG nova.compute.manager [req-db98f889-b65a-43be-833e-046373b80974 req-551497fb-6ac1-48cf-8ed1-613e5b78c27a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Received event network-vif-unplugged-1404e99c-a32c-404a-a7d6-3daccc67c48b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:42:13 compute-0 nova_compute[254092]: 2025-11-25 16:42:13.309 254096 INFO nova.virt.libvirt.driver [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Deleting instance files /var/lib/nova/instances/b5c5a442-8e8e-40c5-9634-e36c49e6e41b_del
Nov 25 16:42:13 compute-0 nova_compute[254092]: 2025-11-25 16:42:13.311 254096 INFO nova.virt.libvirt.driver [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Deletion of /var/lib/nova/instances/b5c5a442-8e8e-40c5-9634-e36c49e6e41b_del complete
Nov 25 16:42:13 compute-0 nova_compute[254092]: 2025-11-25 16:42:13.366 254096 INFO nova.compute.manager [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Took 0.79 seconds to destroy the instance on the hypervisor.
Nov 25 16:42:13 compute-0 nova_compute[254092]: 2025-11-25 16:42:13.368 254096 DEBUG oslo.service.loopingcall [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:42:13 compute-0 nova_compute[254092]: 2025-11-25 16:42:13.369 254096 DEBUG nova.compute.manager [-] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:42:13 compute-0 nova_compute[254092]: 2025-11-25 16:42:13.369 254096 DEBUG nova.network.neutron [-] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:42:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1705: 321 pgs: 321 active+clean; 279 MiB data, 702 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 29 KiB/s wr, 271 op/s
Nov 25 16:42:13 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:42:13 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4174271123' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:42:13 compute-0 nova_compute[254092]: 2025-11-25 16:42:13.603 254096 DEBUG oslo_concurrency.processutils [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:42:13 compute-0 nova_compute[254092]: 2025-11-25 16:42:13.609 254096 DEBUG nova.compute.provider_tree [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:42:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:13.619 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:13.620 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:13.621 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:13 compute-0 nova_compute[254092]: 2025-11-25 16:42:13.624 254096 DEBUG nova.scheduler.client.report [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:42:13 compute-0 nova_compute[254092]: 2025-11-25 16:42:13.646 254096 DEBUG oslo_concurrency.lockutils [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:13 compute-0 nova_compute[254092]: 2025-11-25 16:42:13.672 254096 INFO nova.scheduler.client.report [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Deleted allocations for instance b5e2a584-5835-4c63-84de-6f0446220d35
Nov 25 16:42:13 compute-0 nova_compute[254092]: 2025-11-25 16:42:13.757 254096 DEBUG oslo_concurrency.lockutils [None req-97eb9103-1b4a-4bea-981b-f14cd4fc2787 03b851e958654346bc1caa437d7b76a5 20e192e04e814ae28e09f3c1fdbd39d0 - - default default] Lock "b5e2a584-5835-4c63-84de-6f0446220d35" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:14 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4174271123' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:42:15 compute-0 ceph-mon[74985]: pgmap v1705: 321 pgs: 321 active+clean; 279 MiB data, 702 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 29 KiB/s wr, 271 op/s
Nov 25 16:42:15 compute-0 nova_compute[254092]: 2025-11-25 16:42:15.226 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:42:15 compute-0 nova_compute[254092]: 2025-11-25 16:42:15.226 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:42:15 compute-0 nova_compute[254092]: 2025-11-25 16:42:15.227 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 16:42:15 compute-0 nova_compute[254092]: 2025-11-25 16:42:15.227 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 16:42:15 compute-0 nova_compute[254092]: 2025-11-25 16:42:15.263 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Nov 25 16:42:15 compute-0 nova_compute[254092]: 2025-11-25 16:42:15.267 254096 DEBUG nova.network.neutron [-] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:42:15 compute-0 nova_compute[254092]: 2025-11-25 16:42:15.296 254096 INFO nova.compute.manager [-] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Took 1.93 seconds to deallocate network for instance.
Nov 25 16:42:15 compute-0 nova_compute[254092]: 2025-11-25 16:42:15.333 254096 DEBUG nova.compute.manager [req-515bdd35-675a-4003-9828-524a27aabaaf req-e26ff951-6672-485c-87ff-8a7204747f9b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Received event network-vif-plugged-1404e99c-a32c-404a-a7d6-3daccc67c48b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:42:15 compute-0 nova_compute[254092]: 2025-11-25 16:42:15.333 254096 DEBUG oslo_concurrency.lockutils [req-515bdd35-675a-4003-9828-524a27aabaaf req-e26ff951-6672-485c-87ff-8a7204747f9b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b5c5a442-8e8e-40c5-9634-e36c49e6e41b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:15 compute-0 nova_compute[254092]: 2025-11-25 16:42:15.333 254096 DEBUG oslo_concurrency.lockutils [req-515bdd35-675a-4003-9828-524a27aabaaf req-e26ff951-6672-485c-87ff-8a7204747f9b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b5c5a442-8e8e-40c5-9634-e36c49e6e41b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:15 compute-0 nova_compute[254092]: 2025-11-25 16:42:15.334 254096 DEBUG oslo_concurrency.lockutils [req-515bdd35-675a-4003-9828-524a27aabaaf req-e26ff951-6672-485c-87ff-8a7204747f9b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b5c5a442-8e8e-40c5-9634-e36c49e6e41b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:15 compute-0 nova_compute[254092]: 2025-11-25 16:42:15.334 254096 DEBUG nova.compute.manager [req-515bdd35-675a-4003-9828-524a27aabaaf req-e26ff951-6672-485c-87ff-8a7204747f9b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] No waiting events found dispatching network-vif-plugged-1404e99c-a32c-404a-a7d6-3daccc67c48b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:42:15 compute-0 nova_compute[254092]: 2025-11-25 16:42:15.334 254096 WARNING nova.compute.manager [req-515bdd35-675a-4003-9828-524a27aabaaf req-e26ff951-6672-485c-87ff-8a7204747f9b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Received unexpected event network-vif-plugged-1404e99c-a32c-404a-a7d6-3daccc67c48b for instance with vm_state active and task_state deleting.
Nov 25 16:42:15 compute-0 nova_compute[254092]: 2025-11-25 16:42:15.354 254096 DEBUG oslo_concurrency.lockutils [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:15 compute-0 nova_compute[254092]: 2025-11-25 16:42:15.354 254096 DEBUG oslo_concurrency.lockutils [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:15 compute-0 nova_compute[254092]: 2025-11-25 16:42:15.446 254096 DEBUG oslo_concurrency.processutils [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:42:15 compute-0 nova_compute[254092]: 2025-11-25 16:42:15.547 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-013dc18e-57cd-4733-8e98-7d20e3b5c4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:42:15 compute-0 nova_compute[254092]: 2025-11-25 16:42:15.548 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-013dc18e-57cd-4733-8e98-7d20e3b5c4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:42:15 compute-0 nova_compute[254092]: 2025-11-25 16:42:15.548 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 16:42:15 compute-0 nova_compute[254092]: 2025-11-25 16:42:15.549 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 013dc18e-57cd-4733-8e98-7d20e3b5c4db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:42:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1706: 321 pgs: 321 active+clean; 225 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 43 KiB/s wr, 306 op/s
Nov 25 16:42:15 compute-0 ovn_controller[153477]: 2025-11-25T16:42:15Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:70:61:e3 10.100.0.13
Nov 25 16:42:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:42:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:42:15 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3938954176' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:42:15 compute-0 nova_compute[254092]: 2025-11-25 16:42:15.936 254096 DEBUG oslo_concurrency.processutils [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:42:15 compute-0 nova_compute[254092]: 2025-11-25 16:42:15.942 254096 DEBUG nova.compute.provider_tree [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:42:15 compute-0 nova_compute[254092]: 2025-11-25 16:42:15.958 254096 DEBUG nova.scheduler.client.report [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:42:15 compute-0 nova_compute[254092]: 2025-11-25 16:42:15.979 254096 DEBUG oslo_concurrency.lockutils [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:16 compute-0 nova_compute[254092]: 2025-11-25 16:42:16.001 254096 INFO nova.scheduler.client.report [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Deleted allocations for instance b5c5a442-8e8e-40c5-9634-e36c49e6e41b
Nov 25 16:42:16 compute-0 nova_compute[254092]: 2025-11-25 16:42:16.118 254096 DEBUG oslo_concurrency.lockutils [None req-953b3970-4c96-494f-901c-2364db856be1 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "b5c5a442-8e8e-40c5-9634-e36c49e6e41b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.543s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:16 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3938954176' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:42:16 compute-0 nova_compute[254092]: 2025-11-25 16:42:16.379 254096 DEBUG oslo_concurrency.lockutils [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "dce3a591-9fb6-4495-a7fb-867af2de384f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:16 compute-0 nova_compute[254092]: 2025-11-25 16:42:16.379 254096 DEBUG oslo_concurrency.lockutils [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "dce3a591-9fb6-4495-a7fb-867af2de384f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:16 compute-0 nova_compute[254092]: 2025-11-25 16:42:16.380 254096 DEBUG oslo_concurrency.lockutils [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "dce3a591-9fb6-4495-a7fb-867af2de384f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:16 compute-0 nova_compute[254092]: 2025-11-25 16:42:16.380 254096 DEBUG oslo_concurrency.lockutils [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "dce3a591-9fb6-4495-a7fb-867af2de384f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:16 compute-0 nova_compute[254092]: 2025-11-25 16:42:16.380 254096 DEBUG oslo_concurrency.lockutils [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "dce3a591-9fb6-4495-a7fb-867af2de384f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:16 compute-0 nova_compute[254092]: 2025-11-25 16:42:16.381 254096 INFO nova.compute.manager [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Terminating instance
Nov 25 16:42:16 compute-0 nova_compute[254092]: 2025-11-25 16:42:16.382 254096 DEBUG nova.compute.manager [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:42:16 compute-0 kernel: tap9e60e140-ca (unregistering): left promiscuous mode
Nov 25 16:42:16 compute-0 NetworkManager[48891]: <info>  [1764088936.5008] device (tap9e60e140-ca): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:42:16 compute-0 ovn_controller[153477]: 2025-11-25T16:42:16Z|00654|binding|INFO|Releasing lport 9e60e140-ca34-40f4-b867-d7c53f05bca4 from this chassis (sb_readonly=0)
Nov 25 16:42:16 compute-0 ovn_controller[153477]: 2025-11-25T16:42:16Z|00655|binding|INFO|Setting lport 9e60e140-ca34-40f4-b867-d7c53f05bca4 down in Southbound
Nov 25 16:42:16 compute-0 ovn_controller[153477]: 2025-11-25T16:42:16Z|00656|binding|INFO|Removing iface tap9e60e140-ca ovn-installed in OVS
Nov 25 16:42:16 compute-0 nova_compute[254092]: 2025-11-25 16:42:16.536 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:16 compute-0 nova_compute[254092]: 2025-11-25 16:42:16.556 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:16.569 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:c0:a9 10.100.0.8'], port_security=['fa:16:3e:67:c0:a9 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'dce3a591-9fb6-4495-a7fb-867af2de384f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2319f378c7fb448ebe89427d8dfa7e43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ccb207e5-4339-4ba4-8da5-c985ba20f305', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32b6d34d-eb18-415e-af2d-9e007dc0f6c6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=9e60e140-ca34-40f4-b867-d7c53f05bca4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:42:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:16.570 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 9e60e140-ca34-40f4-b867-d7c53f05bca4 in datapath f00f265b-63fa-48fb-9383-38ff6abf51c1 unbound from our chassis
Nov 25 16:42:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:16.571 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f00f265b-63fa-48fb-9383-38ff6abf51c1
Nov 25 16:42:16 compute-0 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d0000003f.scope: Deactivated successfully.
Nov 25 16:42:16 compute-0 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d0000003f.scope: Consumed 15.427s CPU time.
Nov 25 16:42:16 compute-0 systemd-machined[216343]: Machine qemu-74-instance-0000003f terminated.
Nov 25 16:42:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:16.590 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e356c8dd-8d4d-4def-a47d-c7e7765e8288]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:16 compute-0 nova_compute[254092]: 2025-11-25 16:42:16.603 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:16 compute-0 nova_compute[254092]: 2025-11-25 16:42:16.609 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:16 compute-0 nova_compute[254092]: 2025-11-25 16:42:16.619 254096 INFO nova.virt.libvirt.driver [-] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Instance destroyed successfully.
Nov 25 16:42:16 compute-0 nova_compute[254092]: 2025-11-25 16:42:16.620 254096 DEBUG nova.objects.instance [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lazy-loading 'resources' on Instance uuid dce3a591-9fb6-4495-a7fb-867af2de384f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:42:16 compute-0 nova_compute[254092]: 2025-11-25 16:42:16.632 254096 DEBUG nova.virt.libvirt.vif [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:41:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-596690940',display_name='tempest-ListServerFiltersTestJSON-instance-596690940',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-596690940',id=63,image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:41:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2319f378c7fb448ebe89427d8dfa7e43',ramdisk_id='',reservation_id='r-pzh0k6o8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='vi
rtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-675991579',owner_user_name='tempest-ListServerFiltersTestJSON-675991579-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:41:30Z,user_data=None,user_id='e0d077039e0b4d9e8d5663768f40fa48',uuid=dce3a591-9fb6-4495-a7fb-867af2de384f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9e60e140-ca34-40f4-b867-d7c53f05bca4", "address": "fa:16:3e:67:c0:a9", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e60e140-ca", "ovs_interfaceid": "9e60e140-ca34-40f4-b867-d7c53f05bca4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:42:16 compute-0 nova_compute[254092]: 2025-11-25 16:42:16.633 254096 DEBUG nova.network.os_vif_util [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Converting VIF {"id": "9e60e140-ca34-40f4-b867-d7c53f05bca4", "address": "fa:16:3e:67:c0:a9", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e60e140-ca", "ovs_interfaceid": "9e60e140-ca34-40f4-b867-d7c53f05bca4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:42:16 compute-0 nova_compute[254092]: 2025-11-25 16:42:16.634 254096 DEBUG nova.network.os_vif_util [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:c0:a9,bridge_name='br-int',has_traffic_filtering=True,id=9e60e140-ca34-40f4-b867-d7c53f05bca4,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e60e140-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:42:16 compute-0 nova_compute[254092]: 2025-11-25 16:42:16.635 254096 DEBUG os_vif [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:c0:a9,bridge_name='br-int',has_traffic_filtering=True,id=9e60e140-ca34-40f4-b867-d7c53f05bca4,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e60e140-ca') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:42:16 compute-0 nova_compute[254092]: 2025-11-25 16:42:16.637 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:16 compute-0 nova_compute[254092]: 2025-11-25 16:42:16.638 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9e60e140-ca, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:16 compute-0 nova_compute[254092]: 2025-11-25 16:42:16.640 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:16.640 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[14cdad82-8b24-44a3-9b5c-a28ba432c7bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:16 compute-0 nova_compute[254092]: 2025-11-25 16:42:16.643 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:42:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:16.645 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9c1ecd2c-aa15-4857-b68a-8e560bbab56c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:16 compute-0 nova_compute[254092]: 2025-11-25 16:42:16.646 254096 INFO os_vif [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:c0:a9,bridge_name='br-int',has_traffic_filtering=True,id=9e60e140-ca34-40f4-b867-d7c53f05bca4,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e60e140-ca')
Nov 25 16:42:16 compute-0 nova_compute[254092]: 2025-11-25 16:42:16.675 254096 DEBUG oslo_concurrency.lockutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Acquiring lock "e887a377-e792-462d-8bcd-002a93dac12d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:16 compute-0 nova_compute[254092]: 2025-11-25 16:42:16.675 254096 DEBUG oslo_concurrency.lockutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Lock "e887a377-e792-462d-8bcd-002a93dac12d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:16.683 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8c33b068-ed97-4c0d-8d67-60f1fd8cd1f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:16 compute-0 nova_compute[254092]: 2025-11-25 16:42:16.694 254096 DEBUG nova.compute.manager [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:42:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:16.707 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8c83dead-b4c5-44cc-9535-16dbbbab0deb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf00f265b-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:fc:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 19, 'rx_bytes': 742, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 19, 'rx_bytes': 742, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534207, 'reachable_time': 15987, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328050, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:16.723 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3a96966a-ec0a-4c9d-8858-d70cb376292d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf00f265b-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534218, 'tstamp': 534218}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 328051, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf00f265b-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534221, 'tstamp': 534221}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 328051, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:16.725 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf00f265b-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:16 compute-0 nova_compute[254092]: 2025-11-25 16:42:16.726 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:16 compute-0 nova_compute[254092]: 2025-11-25 16:42:16.728 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:16.728 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf00f265b-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:16.728 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:42:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:16.729 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf00f265b-60, col_values=(('external_ids', {'iface-id': '57c889f7-e44b-4f52-8e8a-db17b4e1f3b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:16.729 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:42:16 compute-0 nova_compute[254092]: 2025-11-25 16:42:16.876 254096 DEBUG oslo_concurrency.lockutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:16 compute-0 nova_compute[254092]: 2025-11-25 16:42:16.876 254096 DEBUG oslo_concurrency.lockutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:16 compute-0 nova_compute[254092]: 2025-11-25 16:42:16.884 254096 DEBUG nova.virt.hardware [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:42:16 compute-0 nova_compute[254092]: 2025-11-25 16:42:16.884 254096 INFO nova.compute.claims [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:42:17 compute-0 nova_compute[254092]: 2025-11-25 16:42:17.071 254096 DEBUG oslo_concurrency.processutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:42:17 compute-0 ceph-mon[74985]: pgmap v1706: 321 pgs: 321 active+clean; 225 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 43 KiB/s wr, 306 op/s
Nov 25 16:42:17 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:42:17 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2183870431' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:42:17 compute-0 nova_compute[254092]: 2025-11-25 16:42:17.535 254096 DEBUG oslo_concurrency.processutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:42:17 compute-0 nova_compute[254092]: 2025-11-25 16:42:17.541 254096 DEBUG nova.compute.provider_tree [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:42:17 compute-0 nova_compute[254092]: 2025-11-25 16:42:17.558 254096 DEBUG nova.scheduler.client.report [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:42:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1707: 321 pgs: 321 active+clean; 200 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 19 KiB/s wr, 242 op/s
Nov 25 16:42:17 compute-0 nova_compute[254092]: 2025-11-25 16:42:17.593 254096 DEBUG nova.compute.manager [req-a2208b2a-88ed-4bff-8a09-83220798a92b req-e70de638-7d55-45dd-8946-c0fcd0f3acad a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Received event network-vif-deleted-1404e99c-a32c-404a-a7d6-3daccc67c48b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:42:17 compute-0 nova_compute[254092]: 2025-11-25 16:42:17.594 254096 DEBUG nova.compute.manager [req-a2208b2a-88ed-4bff-8a09-83220798a92b req-e70de638-7d55-45dd-8946-c0fcd0f3acad a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Received event network-vif-unplugged-9e60e140-ca34-40f4-b867-d7c53f05bca4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:42:17 compute-0 nova_compute[254092]: 2025-11-25 16:42:17.594 254096 DEBUG oslo_concurrency.lockutils [req-a2208b2a-88ed-4bff-8a09-83220798a92b req-e70de638-7d55-45dd-8946-c0fcd0f3acad a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "dce3a591-9fb6-4495-a7fb-867af2de384f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:17 compute-0 nova_compute[254092]: 2025-11-25 16:42:17.594 254096 DEBUG oslo_concurrency.lockutils [req-a2208b2a-88ed-4bff-8a09-83220798a92b req-e70de638-7d55-45dd-8946-c0fcd0f3acad a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "dce3a591-9fb6-4495-a7fb-867af2de384f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:17 compute-0 nova_compute[254092]: 2025-11-25 16:42:17.594 254096 DEBUG oslo_concurrency.lockutils [req-a2208b2a-88ed-4bff-8a09-83220798a92b req-e70de638-7d55-45dd-8946-c0fcd0f3acad a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "dce3a591-9fb6-4495-a7fb-867af2de384f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:17 compute-0 nova_compute[254092]: 2025-11-25 16:42:17.595 254096 DEBUG nova.compute.manager [req-a2208b2a-88ed-4bff-8a09-83220798a92b req-e70de638-7d55-45dd-8946-c0fcd0f3acad a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] No waiting events found dispatching network-vif-unplugged-9e60e140-ca34-40f4-b867-d7c53f05bca4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:42:17 compute-0 nova_compute[254092]: 2025-11-25 16:42:17.596 254096 DEBUG nova.compute.manager [req-a2208b2a-88ed-4bff-8a09-83220798a92b req-e70de638-7d55-45dd-8946-c0fcd0f3acad a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Received event network-vif-unplugged-9e60e140-ca34-40f4-b867-d7c53f05bca4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:42:17 compute-0 nova_compute[254092]: 2025-11-25 16:42:17.600 254096 DEBUG oslo_concurrency.lockutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:17 compute-0 nova_compute[254092]: 2025-11-25 16:42:17.600 254096 DEBUG nova.compute.manager [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:42:17 compute-0 nova_compute[254092]: 2025-11-25 16:42:17.680 254096 DEBUG nova.compute.manager [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:42:17 compute-0 nova_compute[254092]: 2025-11-25 16:42:17.680 254096 DEBUG nova.network.neutron [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:42:17 compute-0 nova_compute[254092]: 2025-11-25 16:42:17.702 254096 INFO nova.virt.libvirt.driver [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:42:17 compute-0 nova_compute[254092]: 2025-11-25 16:42:17.726 254096 DEBUG nova.compute.manager [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:42:17 compute-0 nova_compute[254092]: 2025-11-25 16:42:17.844 254096 DEBUG nova.compute.manager [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:42:17 compute-0 nova_compute[254092]: 2025-11-25 16:42:17.845 254096 DEBUG nova.virt.libvirt.driver [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:42:17 compute-0 nova_compute[254092]: 2025-11-25 16:42:17.845 254096 INFO nova.virt.libvirt.driver [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Creating image(s)
Nov 25 16:42:17 compute-0 nova_compute[254092]: 2025-11-25 16:42:17.866 254096 DEBUG nova.storage.rbd_utils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] rbd image e887a377-e792-462d-8bcd-002a93dac12d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:42:17 compute-0 nova_compute[254092]: 2025-11-25 16:42:17.892 254096 DEBUG nova.storage.rbd_utils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] rbd image e887a377-e792-462d-8bcd-002a93dac12d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:42:17 compute-0 nova_compute[254092]: 2025-11-25 16:42:17.912 254096 DEBUG nova.storage.rbd_utils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] rbd image e887a377-e792-462d-8bcd-002a93dac12d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:42:17 compute-0 nova_compute[254092]: 2025-11-25 16:42:17.916 254096 DEBUG oslo_concurrency.processutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:42:17 compute-0 nova_compute[254092]: 2025-11-25 16:42:17.986 254096 DEBUG oslo_concurrency.processutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:42:17 compute-0 nova_compute[254092]: 2025-11-25 16:42:17.987 254096 DEBUG oslo_concurrency.lockutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:17 compute-0 nova_compute[254092]: 2025-11-25 16:42:17.987 254096 DEBUG oslo_concurrency.lockutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:17 compute-0 nova_compute[254092]: 2025-11-25 16:42:17.988 254096 DEBUG oslo_concurrency.lockutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:18 compute-0 nova_compute[254092]: 2025-11-25 16:42:18.012 254096 DEBUG nova.storage.rbd_utils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] rbd image e887a377-e792-462d-8bcd-002a93dac12d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:42:18 compute-0 nova_compute[254092]: 2025-11-25 16:42:18.016 254096 DEBUG oslo_concurrency.processutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 e887a377-e792-462d-8bcd-002a93dac12d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:42:18 compute-0 nova_compute[254092]: 2025-11-25 16:42:18.046 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Updating instance_info_cache with network_info: [{"id": "5136afea-102e-46a1-8fdb-0af970c5af04", "address": "fa:16:3e:70:61:e3", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5136afea-10", "ovs_interfaceid": "5136afea-102e-46a1-8fdb-0af970c5af04", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:42:18 compute-0 nova_compute[254092]: 2025-11-25 16:42:18.084 254096 DEBUG nova.policy [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e76c1b261c0442caa52f39297ccf296d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ea024a03380a4251a920e126716935de', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:42:18 compute-0 nova_compute[254092]: 2025-11-25 16:42:18.112 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-013dc18e-57cd-4733-8e98-7d20e3b5c4db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:42:18 compute-0 nova_compute[254092]: 2025-11-25 16:42:18.112 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 16:42:18 compute-0 nova_compute[254092]: 2025-11-25 16:42:18.205 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:18 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2183870431' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:42:18 compute-0 ceph-mon[74985]: pgmap v1707: 321 pgs: 321 active+clean; 200 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 19 KiB/s wr, 242 op/s
Nov 25 16:42:18 compute-0 nova_compute[254092]: 2025-11-25 16:42:18.683 254096 INFO nova.virt.libvirt.driver [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Deleting instance files /var/lib/nova/instances/dce3a591-9fb6-4495-a7fb-867af2de384f_del
Nov 25 16:42:18 compute-0 nova_compute[254092]: 2025-11-25 16:42:18.684 254096 INFO nova.virt.libvirt.driver [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Deletion of /var/lib/nova/instances/dce3a591-9fb6-4495-a7fb-867af2de384f_del complete
Nov 25 16:42:18 compute-0 nova_compute[254092]: 2025-11-25 16:42:18.688 254096 DEBUG oslo_concurrency.processutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 e887a377-e792-462d-8bcd-002a93dac12d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.673s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:42:18 compute-0 nova_compute[254092]: 2025-11-25 16:42:18.766 254096 DEBUG nova.storage.rbd_utils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] resizing rbd image e887a377-e792-462d-8bcd-002a93dac12d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:42:18 compute-0 nova_compute[254092]: 2025-11-25 16:42:18.801 254096 INFO nova.compute.manager [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Took 2.42 seconds to destroy the instance on the hypervisor.
Nov 25 16:42:18 compute-0 nova_compute[254092]: 2025-11-25 16:42:18.802 254096 DEBUG oslo.service.loopingcall [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:42:18 compute-0 nova_compute[254092]: 2025-11-25 16:42:18.803 254096 DEBUG nova.compute.manager [-] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:42:18 compute-0 nova_compute[254092]: 2025-11-25 16:42:18.803 254096 DEBUG nova.network.neutron [-] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:42:18 compute-0 nova_compute[254092]: 2025-11-25 16:42:18.868 254096 DEBUG nova.objects.instance [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Lazy-loading 'migration_context' on Instance uuid e887a377-e792-462d-8bcd-002a93dac12d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:42:18 compute-0 nova_compute[254092]: 2025-11-25 16:42:18.883 254096 DEBUG nova.virt.libvirt.driver [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:42:18 compute-0 nova_compute[254092]: 2025-11-25 16:42:18.883 254096 DEBUG nova.virt.libvirt.driver [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Ensure instance console log exists: /var/lib/nova/instances/e887a377-e792-462d-8bcd-002a93dac12d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:42:18 compute-0 nova_compute[254092]: 2025-11-25 16:42:18.884 254096 DEBUG oslo_concurrency.lockutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:18 compute-0 nova_compute[254092]: 2025-11-25 16:42:18.884 254096 DEBUG oslo_concurrency.lockutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:18 compute-0 nova_compute[254092]: 2025-11-25 16:42:18.884 254096 DEBUG oslo_concurrency.lockutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1708: 321 pgs: 321 active+clean; 200 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 18 KiB/s wr, 176 op/s
Nov 25 16:42:19 compute-0 nova_compute[254092]: 2025-11-25 16:42:19.697 254096 DEBUG nova.compute.manager [req-f3d37a07-08a9-4b8b-bcbd-66b873cc4627 req-b14aed8d-d082-45f2-82df-89dda12fa9b4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Received event network-vif-plugged-9e60e140-ca34-40f4-b867-d7c53f05bca4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:42:19 compute-0 nova_compute[254092]: 2025-11-25 16:42:19.698 254096 DEBUG oslo_concurrency.lockutils [req-f3d37a07-08a9-4b8b-bcbd-66b873cc4627 req-b14aed8d-d082-45f2-82df-89dda12fa9b4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "dce3a591-9fb6-4495-a7fb-867af2de384f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:19 compute-0 nova_compute[254092]: 2025-11-25 16:42:19.698 254096 DEBUG oslo_concurrency.lockutils [req-f3d37a07-08a9-4b8b-bcbd-66b873cc4627 req-b14aed8d-d082-45f2-82df-89dda12fa9b4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "dce3a591-9fb6-4495-a7fb-867af2de384f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:19 compute-0 nova_compute[254092]: 2025-11-25 16:42:19.698 254096 DEBUG oslo_concurrency.lockutils [req-f3d37a07-08a9-4b8b-bcbd-66b873cc4627 req-b14aed8d-d082-45f2-82df-89dda12fa9b4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "dce3a591-9fb6-4495-a7fb-867af2de384f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:19 compute-0 nova_compute[254092]: 2025-11-25 16:42:19.699 254096 DEBUG nova.compute.manager [req-f3d37a07-08a9-4b8b-bcbd-66b873cc4627 req-b14aed8d-d082-45f2-82df-89dda12fa9b4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] No waiting events found dispatching network-vif-plugged-9e60e140-ca34-40f4-b867-d7c53f05bca4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:42:19 compute-0 nova_compute[254092]: 2025-11-25 16:42:19.699 254096 WARNING nova.compute.manager [req-f3d37a07-08a9-4b8b-bcbd-66b873cc4627 req-b14aed8d-d082-45f2-82df-89dda12fa9b4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Received unexpected event network-vif-plugged-9e60e140-ca34-40f4-b867-d7c53f05bca4 for instance with vm_state active and task_state deleting.
Nov 25 16:42:19 compute-0 nova_compute[254092]: 2025-11-25 16:42:19.700 254096 DEBUG nova.network.neutron [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Successfully created port: ce73fc27-d707-4321-8e2e-f77bd4b984ad _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:42:19 compute-0 nova_compute[254092]: 2025-11-25 16:42:19.753 254096 DEBUG nova.network.neutron [-] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:42:19 compute-0 nova_compute[254092]: 2025-11-25 16:42:19.768 254096 INFO nova.compute.manager [-] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Took 0.96 seconds to deallocate network for instance.
Nov 25 16:42:19 compute-0 nova_compute[254092]: 2025-11-25 16:42:19.823 254096 DEBUG oslo_concurrency.lockutils [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:19 compute-0 nova_compute[254092]: 2025-11-25 16:42:19.824 254096 DEBUG oslo_concurrency.lockutils [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:19 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:19.863 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:42:19 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:19.864 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 16:42:19 compute-0 nova_compute[254092]: 2025-11-25 16:42:19.919 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:19 compute-0 nova_compute[254092]: 2025-11-25 16:42:19.970 254096 DEBUG oslo_concurrency.processutils [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:42:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:42:20 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1860457252' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:42:20 compute-0 ovn_controller[153477]: 2025-11-25T16:42:20Z|00657|binding|INFO|Releasing lport 57c889f7-e44b-4f52-8e8a-db17b4e1f3b8 from this chassis (sb_readonly=0)
Nov 25 16:42:20 compute-0 nova_compute[254092]: 2025-11-25 16:42:20.430 254096 DEBUG oslo_concurrency.processutils [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:42:20 compute-0 nova_compute[254092]: 2025-11-25 16:42:20.437 254096 DEBUG nova.compute.provider_tree [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:42:20 compute-0 nova_compute[254092]: 2025-11-25 16:42:20.462 254096 DEBUG nova.scheduler.client.report [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:42:20 compute-0 nova_compute[254092]: 2025-11-25 16:42:20.496 254096 DEBUG oslo_concurrency.lockutils [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:20 compute-0 nova_compute[254092]: 2025-11-25 16:42:20.504 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:20 compute-0 nova_compute[254092]: 2025-11-25 16:42:20.522 254096 INFO nova.scheduler.client.report [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Deleted allocations for instance dce3a591-9fb6-4495-a7fb-867af2de384f
Nov 25 16:42:20 compute-0 nova_compute[254092]: 2025-11-25 16:42:20.607 254096 DEBUG oslo_concurrency.lockutils [None req-b9d1a686-8005-49fa-9562-9746ed4d0d86 e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "dce3a591-9fb6-4495-a7fb-867af2de384f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.228s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:20 compute-0 ceph-mon[74985]: pgmap v1708: 321 pgs: 321 active+clean; 200 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 18 KiB/s wr, 176 op/s
Nov 25 16:42:20 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1860457252' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:42:20 compute-0 nova_compute[254092]: 2025-11-25 16:42:20.838 254096 DEBUG nova.network.neutron [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Successfully updated port: ce73fc27-d707-4321-8e2e-f77bd4b984ad _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:42:20 compute-0 nova_compute[254092]: 2025-11-25 16:42:20.853 254096 DEBUG oslo_concurrency.lockutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Acquiring lock "refresh_cache-e887a377-e792-462d-8bcd-002a93dac12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:42:20 compute-0 nova_compute[254092]: 2025-11-25 16:42:20.853 254096 DEBUG oslo_concurrency.lockutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Acquired lock "refresh_cache-e887a377-e792-462d-8bcd-002a93dac12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:42:20 compute-0 nova_compute[254092]: 2025-11-25 16:42:20.853 254096 DEBUG nova.network.neutron [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:42:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:42:21 compute-0 nova_compute[254092]: 2025-11-25 16:42:21.087 254096 DEBUG nova.network.neutron [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:42:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1709: 321 pgs: 321 active+clean; 169 MiB data, 626 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 249 op/s
Nov 25 16:42:21 compute-0 nova_compute[254092]: 2025-11-25 16:42:21.642 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:21 compute-0 nova_compute[254092]: 2025-11-25 16:42:21.910 254096 DEBUG nova.compute.manager [req-435bb7db-ee7e-4d9d-8d11-a5e5b316e33c req-0189f948-a54d-4a0f-9bf9-a6f82223b13c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Received event network-vif-deleted-9e60e140-ca34-40f4-b867-d7c53f05bca4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:42:21 compute-0 nova_compute[254092]: 2025-11-25 16:42:21.911 254096 DEBUG nova.compute.manager [req-435bb7db-ee7e-4d9d-8d11-a5e5b316e33c req-0189f948-a54d-4a0f-9bf9-a6f82223b13c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Received event network-changed-ce73fc27-d707-4321-8e2e-f77bd4b984ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:42:21 compute-0 nova_compute[254092]: 2025-11-25 16:42:21.911 254096 DEBUG nova.compute.manager [req-435bb7db-ee7e-4d9d-8d11-a5e5b316e33c req-0189f948-a54d-4a0f-9bf9-a6f82223b13c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Refreshing instance network info cache due to event network-changed-ce73fc27-d707-4321-8e2e-f77bd4b984ad. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:42:21 compute-0 nova_compute[254092]: 2025-11-25 16:42:21.911 254096 DEBUG oslo_concurrency.lockutils [req-435bb7db-ee7e-4d9d-8d11-a5e5b316e33c req-0189f948-a54d-4a0f-9bf9-a6f82223b13c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-e887a377-e792-462d-8bcd-002a93dac12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.120 254096 DEBUG oslo_concurrency.lockutils [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.121 254096 DEBUG oslo_concurrency.lockutils [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.121 254096 DEBUG oslo_concurrency.lockutils [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.122 254096 DEBUG oslo_concurrency.lockutils [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.122 254096 DEBUG oslo_concurrency.lockutils [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.124 254096 INFO nova.compute.manager [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Terminating instance
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.125 254096 DEBUG nova.compute.manager [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:42:22 compute-0 kernel: tap5136afea-10 (unregistering): left promiscuous mode
Nov 25 16:42:22 compute-0 NetworkManager[48891]: <info>  [1764088942.1670] device (tap5136afea-10): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:42:22 compute-0 ovn_controller[153477]: 2025-11-25T16:42:22Z|00658|binding|INFO|Releasing lport 5136afea-102e-46a1-8fdb-0af970c5af04 from this chassis (sb_readonly=0)
Nov 25 16:42:22 compute-0 ovn_controller[153477]: 2025-11-25T16:42:22Z|00659|binding|INFO|Setting lport 5136afea-102e-46a1-8fdb-0af970c5af04 down in Southbound
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.208 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:22 compute-0 ovn_controller[153477]: 2025-11-25T16:42:22Z|00660|binding|INFO|Removing iface tap5136afea-10 ovn-installed in OVS
Nov 25 16:42:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:22.217 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:61:e3 10.100.0.13'], port_security=['fa:16:3e:70:61:e3 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '013dc18e-57cd-4733-8e98-7d20e3b5c4db', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2319f378c7fb448ebe89427d8dfa7e43', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'ccb207e5-4339-4ba4-8da5-c985ba20f305', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32b6d34d-eb18-415e-af2d-9e007dc0f6c6, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=5136afea-102e-46a1-8fdb-0af970c5af04) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:42:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:22.218 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 5136afea-102e-46a1-8fdb-0af970c5af04 in datapath f00f265b-63fa-48fb-9383-38ff6abf51c1 unbound from our chassis
Nov 25 16:42:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:22.219 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f00f265b-63fa-48fb-9383-38ff6abf51c1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:42:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:22.220 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[332ad6f0-c632-4dda-9de8-691670e3e127]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:22.221 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1 namespace which is not needed anymore
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.225 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:22 compute-0 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d0000003e.scope: Deactivated successfully.
Nov 25 16:42:22 compute-0 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d0000003e.scope: Consumed 14.333s CPU time.
Nov 25 16:42:22 compute-0 systemd-machined[216343]: Machine qemu-79-instance-0000003e terminated.
Nov 25 16:42:22 compute-0 neutron-haproxy-ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1[323775]: [NOTICE]   (323779) : haproxy version is 2.8.14-c23fe91
Nov 25 16:42:22 compute-0 neutron-haproxy-ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1[323775]: [NOTICE]   (323779) : path to executable is /usr/sbin/haproxy
Nov 25 16:42:22 compute-0 neutron-haproxy-ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1[323775]: [WARNING]  (323779) : Exiting Master process...
Nov 25 16:42:22 compute-0 neutron-haproxy-ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1[323775]: [ALERT]    (323779) : Current worker (323781) exited with code 143 (Terminated)
Nov 25 16:42:22 compute-0 neutron-haproxy-ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1[323775]: [WARNING]  (323779) : All workers exited. Exiting... (0)
Nov 25 16:42:22 compute-0 systemd[1]: libpod-3def2450d84bf90268deb60c88b8c80f15394b0dbc56ee53278d0ebd061ec192.scope: Deactivated successfully.
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.345 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:22 compute-0 podman[328287]: 2025-11-25 16:42:22.34725567 +0000 UTC m=+0.044092138 container died 3def2450d84bf90268deb60c88b8c80f15394b0dbc56ee53278d0ebd061ec192 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.350 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.358 254096 INFO nova.virt.libvirt.driver [-] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Instance destroyed successfully.
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.359 254096 DEBUG nova.objects.instance [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lazy-loading 'resources' on Instance uuid 013dc18e-57cd-4733-8e98-7d20e3b5c4db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.374 254096 DEBUG nova.virt.libvirt.vif [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:41:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1278256596',display_name='tempest-ListServerFiltersTestJSON-instance-1278256596',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1278256596',id=62,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:41:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2319f378c7fb448ebe89427d8dfa7e43',ramdisk_id='',reservation_id='r-t7tk7x1h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model=
'virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-675991579',owner_user_name='tempest-ListServerFiltersTestJSON-675991579-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:42:02Z,user_data=None,user_id='e0d077039e0b4d9e8d5663768f40fa48',uuid=013dc18e-57cd-4733-8e98-7d20e3b5c4db,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5136afea-102e-46a1-8fdb-0af970c5af04", "address": "fa:16:3e:70:61:e3", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5136afea-10", "ovs_interfaceid": "5136afea-102e-46a1-8fdb-0af970c5af04", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.375 254096 DEBUG nova.network.os_vif_util [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Converting VIF {"id": "5136afea-102e-46a1-8fdb-0af970c5af04", "address": "fa:16:3e:70:61:e3", "network": {"id": "f00f265b-63fa-48fb-9383-38ff6abf51c1", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1531904617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2319f378c7fb448ebe89427d8dfa7e43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5136afea-10", "ovs_interfaceid": "5136afea-102e-46a1-8fdb-0af970c5af04", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.375 254096 DEBUG nova.network.os_vif_util [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:70:61:e3,bridge_name='br-int',has_traffic_filtering=True,id=5136afea-102e-46a1-8fdb-0af970c5af04,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5136afea-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.376 254096 DEBUG os_vif [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:70:61:e3,bridge_name='br-int',has_traffic_filtering=True,id=5136afea-102e-46a1-8fdb-0af970c5af04,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5136afea-10') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.377 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.378 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5136afea-10, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.379 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.380 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:22 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3def2450d84bf90268deb60c88b8c80f15394b0dbc56ee53278d0ebd061ec192-userdata-shm.mount: Deactivated successfully.
Nov 25 16:42:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-b958805e8f49ae0dd91301b40e75db313a46de7a441e4d3c3ccb3b8eb4101d4c-merged.mount: Deactivated successfully.
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.386 254096 INFO os_vif [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:70:61:e3,bridge_name='br-int',has_traffic_filtering=True,id=5136afea-102e-46a1-8fdb-0af970c5af04,network=Network(f00f265b-63fa-48fb-9383-38ff6abf51c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5136afea-10')
Nov 25 16:42:22 compute-0 podman[328287]: 2025-11-25 16:42:22.395175129 +0000 UTC m=+0.092011617 container cleanup 3def2450d84bf90268deb60c88b8c80f15394b0dbc56ee53278d0ebd061ec192 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 16:42:22 compute-0 systemd[1]: libpod-conmon-3def2450d84bf90268deb60c88b8c80f15394b0dbc56ee53278d0ebd061ec192.scope: Deactivated successfully.
Nov 25 16:42:22 compute-0 podman[328341]: 2025-11-25 16:42:22.473364751 +0000 UTC m=+0.054908531 container remove 3def2450d84bf90268deb60c88b8c80f15394b0dbc56ee53278d0ebd061ec192 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 16:42:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:22.480 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4adae5f9-a8f2-46e4-b923-27268ba7dba3]: (4, ('Tue Nov 25 04:42:22 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1 (3def2450d84bf90268deb60c88b8c80f15394b0dbc56ee53278d0ebd061ec192)\n3def2450d84bf90268deb60c88b8c80f15394b0dbc56ee53278d0ebd061ec192\nTue Nov 25 04:42:22 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1 (3def2450d84bf90268deb60c88b8c80f15394b0dbc56ee53278d0ebd061ec192)\n3def2450d84bf90268deb60c88b8c80f15394b0dbc56ee53278d0ebd061ec192\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:22.484 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3613d2b1-2b55-4514-95fd-ee94012db26c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:22.485 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf00f265b-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:22 compute-0 kernel: tapf00f265b-60: left promiscuous mode
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.488 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.506 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:22.510 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[15d571eb-c339-42fb-b1a2-ca4ac3d0eb0b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:22.528 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4255a10f-c05e-4b6a-85f3-7df4bbbd27f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:22.531 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e5150948-1842-4408-80bd-6a911cd45992]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:22.550 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[628d0486-3713-4d87-8849-3f8d8fa38fbc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534200, 'reachable_time': 24562, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328359, 'error': None, 'target': 'ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:22 compute-0 systemd[1]: run-netns-ovnmeta\x2df00f265b\x2d63fa\x2d48fb\x2d9383\x2d38ff6abf51c1.mount: Deactivated successfully.
Nov 25 16:42:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:22.554 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f00f265b-63fa-48fb-9383-38ff6abf51c1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:42:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:22.554 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[706ed680-578d-4571-b35f-c00091baa083]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:22 compute-0 ceph-mon[74985]: pgmap v1709: 321 pgs: 321 active+clean; 169 MiB data, 626 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 249 op/s
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.764 254096 DEBUG nova.network.neutron [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Updating instance_info_cache with network_info: [{"id": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "address": "fa:16:3e:5e:35:7f", "network": {"id": "203581b6-f356-4499-9dc8-abafe93b350a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-425599925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea024a03380a4251a920e126716935de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce73fc27-d7", "ovs_interfaceid": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.782 254096 DEBUG oslo_concurrency.lockutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Releasing lock "refresh_cache-e887a377-e792-462d-8bcd-002a93dac12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.782 254096 DEBUG nova.compute.manager [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Instance network_info: |[{"id": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "address": "fa:16:3e:5e:35:7f", "network": {"id": "203581b6-f356-4499-9dc8-abafe93b350a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-425599925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea024a03380a4251a920e126716935de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce73fc27-d7", "ovs_interfaceid": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.783 254096 DEBUG oslo_concurrency.lockutils [req-435bb7db-ee7e-4d9d-8d11-a5e5b316e33c req-0189f948-a54d-4a0f-9bf9-a6f82223b13c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-e887a377-e792-462d-8bcd-002a93dac12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.783 254096 DEBUG nova.network.neutron [req-435bb7db-ee7e-4d9d-8d11-a5e5b316e33c req-0189f948-a54d-4a0f-9bf9-a6f82223b13c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Refreshing network info cache for port ce73fc27-d707-4321-8e2e-f77bd4b984ad _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.785 254096 DEBUG nova.virt.libvirt.driver [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Start _get_guest_xml network_info=[{"id": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "address": "fa:16:3e:5e:35:7f", "network": {"id": "203581b6-f356-4499-9dc8-abafe93b350a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-425599925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea024a03380a4251a920e126716935de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce73fc27-d7", "ovs_interfaceid": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.789 254096 WARNING nova.virt.libvirt.driver [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.793 254096 DEBUG nova.virt.libvirt.host [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.794 254096 DEBUG nova.virt.libvirt.host [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.802 254096 DEBUG nova.virt.libvirt.host [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.802 254096 DEBUG nova.virt.libvirt.host [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.803 254096 DEBUG nova.virt.libvirt.driver [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.803 254096 DEBUG nova.virt.hardware [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.803 254096 DEBUG nova.virt.hardware [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.804 254096 DEBUG nova.virt.hardware [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.804 254096 DEBUG nova.virt.hardware [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.804 254096 DEBUG nova.virt.hardware [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.804 254096 DEBUG nova.virt.hardware [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.804 254096 DEBUG nova.virt.hardware [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.805 254096 DEBUG nova.virt.hardware [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.805 254096 DEBUG nova.virt.hardware [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.805 254096 DEBUG nova.virt.hardware [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.805 254096 DEBUG nova.virt.hardware [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.808 254096 DEBUG oslo_concurrency.processutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.856 254096 INFO nova.virt.libvirt.driver [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Deleting instance files /var/lib/nova/instances/013dc18e-57cd-4733-8e98-7d20e3b5c4db_del
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.858 254096 INFO nova.virt.libvirt.driver [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Deletion of /var/lib/nova/instances/013dc18e-57cd-4733-8e98-7d20e3b5c4db_del complete
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.911 254096 INFO nova.compute.manager [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Took 0.79 seconds to destroy the instance on the hypervisor.
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.912 254096 DEBUG oslo.service.loopingcall [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.912 254096 DEBUG nova.compute.manager [-] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:42:22 compute-0 nova_compute[254092]: 2025-11-25 16:42:22.912 254096 DEBUG nova.network.neutron [-] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:42:23 compute-0 nova_compute[254092]: 2025-11-25 16:42:23.015 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088928.0119298, 796c46a8-971c-4b51-96c9-0e7c8682cfa8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:42:23 compute-0 nova_compute[254092]: 2025-11-25 16:42:23.015 254096 INFO nova.compute.manager [-] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] VM Stopped (Lifecycle Event)
Nov 25 16:42:23 compute-0 nova_compute[254092]: 2025-11-25 16:42:23.039 254096 DEBUG nova.compute.manager [None req-5e9c52a4-0d49-4095-a995-f30a67046872 - - - - - -] [instance: 796c46a8-971c-4b51-96c9-0e7c8682cfa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:42:23 compute-0 nova_compute[254092]: 2025-11-25 16:42:23.209 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:42:23 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/323649743' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:42:23 compute-0 nova_compute[254092]: 2025-11-25 16:42:23.288 254096 DEBUG oslo_concurrency.processutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:42:23 compute-0 nova_compute[254092]: 2025-11-25 16:42:23.314 254096 DEBUG nova.storage.rbd_utils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] rbd image e887a377-e792-462d-8bcd-002a93dac12d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:42:23 compute-0 nova_compute[254092]: 2025-11-25 16:42:23.318 254096 DEBUG oslo_concurrency.processutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:42:23 compute-0 nova_compute[254092]: 2025-11-25 16:42:23.348 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088928.329823, b5e2a584-5835-4c63-84de-6f0446220d35 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:42:23 compute-0 nova_compute[254092]: 2025-11-25 16:42:23.349 254096 INFO nova.compute.manager [-] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] VM Stopped (Lifecycle Event)
Nov 25 16:42:23 compute-0 nova_compute[254092]: 2025-11-25 16:42:23.371 254096 DEBUG nova.compute.manager [None req-518c6854-485a-4063-b198-59d9916613fc - - - - - -] [instance: b5e2a584-5835-4c63-84de-6f0446220d35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:42:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1710: 321 pgs: 321 active+clean; 169 MiB data, 626 MiB used, 59 GiB / 60 GiB avail; 583 KiB/s rd, 1.8 MiB/s wr, 129 op/s
Nov 25 16:42:23 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/323649743' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:42:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:42:23 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/929171924' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:42:23 compute-0 nova_compute[254092]: 2025-11-25 16:42:23.748 254096 DEBUG oslo_concurrency.processutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:42:23 compute-0 nova_compute[254092]: 2025-11-25 16:42:23.749 254096 DEBUG nova.virt.libvirt.vif [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:42:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1282971748',display_name='tempest-InstanceActionsTestJSON-server-1282971748',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1282971748',id=70,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ea024a03380a4251a920e126716935de',ramdisk_id='',reservation_id='r-29eo0uj0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-16048106',owner_user_name='tempest-InstanceActionsTestJSON-16048106-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:42:17Z,user_data=None,user_id='e76c1b261c0442caa52f39297ccf296d',uuid=e887a377-e792-462d-8bcd-002a93dac12d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "address": "fa:16:3e:5e:35:7f", "network": {"id": "203581b6-f356-4499-9dc8-abafe93b350a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-425599925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea024a03380a4251a920e126716935de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce73fc27-d7", "ovs_interfaceid": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:42:23 compute-0 nova_compute[254092]: 2025-11-25 16:42:23.750 254096 DEBUG nova.network.os_vif_util [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Converting VIF {"id": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "address": "fa:16:3e:5e:35:7f", "network": {"id": "203581b6-f356-4499-9dc8-abafe93b350a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-425599925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea024a03380a4251a920e126716935de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce73fc27-d7", "ovs_interfaceid": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:42:23 compute-0 nova_compute[254092]: 2025-11-25 16:42:23.750 254096 DEBUG nova.network.os_vif_util [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:35:7f,bridge_name='br-int',has_traffic_filtering=True,id=ce73fc27-d707-4321-8e2e-f77bd4b984ad,network=Network(203581b6-f356-4499-9dc8-abafe93b350a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce73fc27-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:42:23 compute-0 nova_compute[254092]: 2025-11-25 16:42:23.751 254096 DEBUG nova.objects.instance [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Lazy-loading 'pci_devices' on Instance uuid e887a377-e792-462d-8bcd-002a93dac12d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:42:23 compute-0 nova_compute[254092]: 2025-11-25 16:42:23.836 254096 DEBUG nova.virt.libvirt.driver [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:42:23 compute-0 nova_compute[254092]:   <uuid>e887a377-e792-462d-8bcd-002a93dac12d</uuid>
Nov 25 16:42:23 compute-0 nova_compute[254092]:   <name>instance-00000046</name>
Nov 25 16:42:23 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:42:23 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:42:23 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:42:23 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:       <nova:name>tempest-InstanceActionsTestJSON-server-1282971748</nova:name>
Nov 25 16:42:23 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:42:22</nova:creationTime>
Nov 25 16:42:23 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:42:23 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:42:23 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:42:23 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:42:23 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:42:23 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:42:23 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:42:23 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:42:23 compute-0 nova_compute[254092]:         <nova:user uuid="e76c1b261c0442caa52f39297ccf296d">tempest-InstanceActionsTestJSON-16048106-project-member</nova:user>
Nov 25 16:42:23 compute-0 nova_compute[254092]:         <nova:project uuid="ea024a03380a4251a920e126716935de">tempest-InstanceActionsTestJSON-16048106</nova:project>
Nov 25 16:42:23 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:42:23 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:42:23 compute-0 nova_compute[254092]:         <nova:port uuid="ce73fc27-d707-4321-8e2e-f77bd4b984ad">
Nov 25 16:42:23 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:42:23 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:42:23 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:42:23 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:42:23 compute-0 nova_compute[254092]:     <system>
Nov 25 16:42:23 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:42:23 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:42:23 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:42:23 compute-0 nova_compute[254092]:       <entry name="serial">e887a377-e792-462d-8bcd-002a93dac12d</entry>
Nov 25 16:42:23 compute-0 nova_compute[254092]:       <entry name="uuid">e887a377-e792-462d-8bcd-002a93dac12d</entry>
Nov 25 16:42:23 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     </system>
Nov 25 16:42:23 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:42:23 compute-0 nova_compute[254092]:   <os>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:   </os>
Nov 25 16:42:23 compute-0 nova_compute[254092]:   <features>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:   </features>
Nov 25 16:42:23 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:42:23 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:42:23 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:42:23 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:42:23 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:42:23 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/e887a377-e792-462d-8bcd-002a93dac12d_disk">
Nov 25 16:42:23 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:       </source>
Nov 25 16:42:23 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:42:23 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:42:23 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:42:23 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/e887a377-e792-462d-8bcd-002a93dac12d_disk.config">
Nov 25 16:42:23 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:       </source>
Nov 25 16:42:23 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:42:23 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:42:23 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:42:23 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:5e:35:7f"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:       <target dev="tapce73fc27-d7"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:42:23 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/e887a377-e792-462d-8bcd-002a93dac12d/console.log" append="off"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     <video>
Nov 25 16:42:23 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     </video>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:42:23 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:42:23 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:42:23 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:42:23 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:42:23 compute-0 nova_compute[254092]: </domain>
Nov 25 16:42:23 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:42:23 compute-0 nova_compute[254092]: 2025-11-25 16:42:23.837 254096 DEBUG nova.compute.manager [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Preparing to wait for external event network-vif-plugged-ce73fc27-d707-4321-8e2e-f77bd4b984ad prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:42:23 compute-0 nova_compute[254092]: 2025-11-25 16:42:23.837 254096 DEBUG oslo_concurrency.lockutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Acquiring lock "e887a377-e792-462d-8bcd-002a93dac12d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:23 compute-0 nova_compute[254092]: 2025-11-25 16:42:23.838 254096 DEBUG oslo_concurrency.lockutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Lock "e887a377-e792-462d-8bcd-002a93dac12d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:23 compute-0 nova_compute[254092]: 2025-11-25 16:42:23.838 254096 DEBUG oslo_concurrency.lockutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Lock "e887a377-e792-462d-8bcd-002a93dac12d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:23 compute-0 nova_compute[254092]: 2025-11-25 16:42:23.839 254096 DEBUG nova.virt.libvirt.vif [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:42:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1282971748',display_name='tempest-InstanceActionsTestJSON-server-1282971748',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1282971748',id=70,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ea024a03380a4251a920e126716935de',ramdisk_id='',reservation_id='r-29eo0uj0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-16048106',owner_user_name='tempest-InstanceActionsTestJSON-16048106-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:42:17Z,user_data=None,user_id='e76c1b261c0442caa52f39297ccf296d',uuid=e887a377-e792-462d-8bcd-002a93dac12d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "address": "fa:16:3e:5e:35:7f", "network": {"id": "203581b6-f356-4499-9dc8-abafe93b350a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-425599925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea024a03380a4251a920e126716935de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce73fc27-d7", "ovs_interfaceid": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:42:23 compute-0 nova_compute[254092]: 2025-11-25 16:42:23.839 254096 DEBUG nova.network.os_vif_util [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Converting VIF {"id": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "address": "fa:16:3e:5e:35:7f", "network": {"id": "203581b6-f356-4499-9dc8-abafe93b350a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-425599925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea024a03380a4251a920e126716935de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce73fc27-d7", "ovs_interfaceid": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:42:23 compute-0 nova_compute[254092]: 2025-11-25 16:42:23.840 254096 DEBUG nova.network.os_vif_util [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:35:7f,bridge_name='br-int',has_traffic_filtering=True,id=ce73fc27-d707-4321-8e2e-f77bd4b984ad,network=Network(203581b6-f356-4499-9dc8-abafe93b350a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce73fc27-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:42:23 compute-0 nova_compute[254092]: 2025-11-25 16:42:23.840 254096 DEBUG os_vif [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:35:7f,bridge_name='br-int',has_traffic_filtering=True,id=ce73fc27-d707-4321-8e2e-f77bd4b984ad,network=Network(203581b6-f356-4499-9dc8-abafe93b350a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce73fc27-d7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:42:23 compute-0 nova_compute[254092]: 2025-11-25 16:42:23.840 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:23 compute-0 nova_compute[254092]: 2025-11-25 16:42:23.841 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:23 compute-0 nova_compute[254092]: 2025-11-25 16:42:23.841 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:42:23 compute-0 nova_compute[254092]: 2025-11-25 16:42:23.845 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:23 compute-0 nova_compute[254092]: 2025-11-25 16:42:23.846 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce73fc27-d7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:23 compute-0 nova_compute[254092]: 2025-11-25 16:42:23.846 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapce73fc27-d7, col_values=(('external_ids', {'iface-id': 'ce73fc27-d707-4321-8e2e-f77bd4b984ad', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5e:35:7f', 'vm-uuid': 'e887a377-e792-462d-8bcd-002a93dac12d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:23 compute-0 nova_compute[254092]: 2025-11-25 16:42:23.848 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:23 compute-0 NetworkManager[48891]: <info>  [1764088943.8489] manager: (tapce73fc27-d7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/282)
Nov 25 16:42:23 compute-0 nova_compute[254092]: 2025-11-25 16:42:23.851 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:42:23 compute-0 nova_compute[254092]: 2025-11-25 16:42:23.853 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:23 compute-0 nova_compute[254092]: 2025-11-25 16:42:23.854 254096 INFO os_vif [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:35:7f,bridge_name='br-int',has_traffic_filtering=True,id=ce73fc27-d707-4321-8e2e-f77bd4b984ad,network=Network(203581b6-f356-4499-9dc8-abafe93b350a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce73fc27-d7')
Nov 25 16:42:23 compute-0 nova_compute[254092]: 2025-11-25 16:42:23.914 254096 DEBUG nova.virt.libvirt.driver [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:42:23 compute-0 nova_compute[254092]: 2025-11-25 16:42:23.915 254096 DEBUG nova.virt.libvirt.driver [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:42:23 compute-0 nova_compute[254092]: 2025-11-25 16:42:23.915 254096 DEBUG nova.virt.libvirt.driver [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] No VIF found with MAC fa:16:3e:5e:35:7f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:42:23 compute-0 nova_compute[254092]: 2025-11-25 16:42:23.915 254096 INFO nova.virt.libvirt.driver [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Using config drive
Nov 25 16:42:23 compute-0 nova_compute[254092]: 2025-11-25 16:42:23.933 254096 DEBUG nova.storage.rbd_utils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] rbd image e887a377-e792-462d-8bcd-002a93dac12d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:42:24 compute-0 nova_compute[254092]: 2025-11-25 16:42:24.232 254096 DEBUG nova.network.neutron [-] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:42:24 compute-0 nova_compute[254092]: 2025-11-25 16:42:24.250 254096 INFO nova.compute.manager [-] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Took 1.34 seconds to deallocate network for instance.
Nov 25 16:42:24 compute-0 nova_compute[254092]: 2025-11-25 16:42:24.312 254096 DEBUG oslo_concurrency.lockutils [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:24 compute-0 nova_compute[254092]: 2025-11-25 16:42:24.313 254096 DEBUG oslo_concurrency.lockutils [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:24 compute-0 nova_compute[254092]: 2025-11-25 16:42:24.365 254096 DEBUG nova.compute.manager [req-dad1052f-a80c-478f-b3d6-a8bfcd90a4d7 req-4a0bd464-e551-4273-a8aa-5cff078df4e8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Received event network-vif-deleted-5136afea-102e-46a1-8fdb-0af970c5af04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:42:24 compute-0 nova_compute[254092]: 2025-11-25 16:42:24.391 254096 DEBUG oslo_concurrency.processutils [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:42:24 compute-0 nova_compute[254092]: 2025-11-25 16:42:24.479 254096 INFO nova.virt.libvirt.driver [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Creating config drive at /var/lib/nova/instances/e887a377-e792-462d-8bcd-002a93dac12d/disk.config
Nov 25 16:42:24 compute-0 nova_compute[254092]: 2025-11-25 16:42:24.484 254096 DEBUG oslo_concurrency.processutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e887a377-e792-462d-8bcd-002a93dac12d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsjmcps10 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:42:24 compute-0 nova_compute[254092]: 2025-11-25 16:42:24.623 254096 DEBUG oslo_concurrency.processutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e887a377-e792-462d-8bcd-002a93dac12d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsjmcps10" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:42:24 compute-0 nova_compute[254092]: 2025-11-25 16:42:24.648 254096 DEBUG nova.storage.rbd_utils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] rbd image e887a377-e792-462d-8bcd-002a93dac12d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:42:24 compute-0 nova_compute[254092]: 2025-11-25 16:42:24.651 254096 DEBUG oslo_concurrency.processutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e887a377-e792-462d-8bcd-002a93dac12d/disk.config e887a377-e792-462d-8bcd-002a93dac12d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:42:24 compute-0 ceph-mon[74985]: pgmap v1710: 321 pgs: 321 active+clean; 169 MiB data, 626 MiB used, 59 GiB / 60 GiB avail; 583 KiB/s rd, 1.8 MiB/s wr, 129 op/s
Nov 25 16:42:24 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/929171924' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:42:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:42:24 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2309791242' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:42:24 compute-0 nova_compute[254092]: 2025-11-25 16:42:24.851 254096 DEBUG oslo_concurrency.processutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e887a377-e792-462d-8bcd-002a93dac12d/disk.config e887a377-e792-462d-8bcd-002a93dac12d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.199s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:42:24 compute-0 nova_compute[254092]: 2025-11-25 16:42:24.852 254096 INFO nova.virt.libvirt.driver [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Deleting local config drive /var/lib/nova/instances/e887a377-e792-462d-8bcd-002a93dac12d/disk.config because it was imported into RBD.
Nov 25 16:42:24 compute-0 nova_compute[254092]: 2025-11-25 16:42:24.869 254096 DEBUG oslo_concurrency.processutils [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:42:24 compute-0 nova_compute[254092]: 2025-11-25 16:42:24.879 254096 DEBUG nova.compute.provider_tree [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:42:24 compute-0 nova_compute[254092]: 2025-11-25 16:42:24.893 254096 DEBUG nova.scheduler.client.report [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:42:24 compute-0 kernel: tapce73fc27-d7: entered promiscuous mode
Nov 25 16:42:24 compute-0 systemd-udevd[328266]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:42:24 compute-0 NetworkManager[48891]: <info>  [1764088944.9056] manager: (tapce73fc27-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/283)
Nov 25 16:42:24 compute-0 ovn_controller[153477]: 2025-11-25T16:42:24Z|00661|binding|INFO|Claiming lport ce73fc27-d707-4321-8e2e-f77bd4b984ad for this chassis.
Nov 25 16:42:24 compute-0 ovn_controller[153477]: 2025-11-25T16:42:24Z|00662|binding|INFO|ce73fc27-d707-4321-8e2e-f77bd4b984ad: Claiming fa:16:3e:5e:35:7f 10.100.0.6
Nov 25 16:42:24 compute-0 nova_compute[254092]: 2025-11-25 16:42:24.907 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:24 compute-0 nova_compute[254092]: 2025-11-25 16:42:24.910 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:24 compute-0 NetworkManager[48891]: <info>  [1764088944.9180] device (tapce73fc27-d7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:42:24 compute-0 nova_compute[254092]: 2025-11-25 16:42:24.916 254096 DEBUG oslo_concurrency.lockutils [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:24 compute-0 NetworkManager[48891]: <info>  [1764088944.9192] device (tapce73fc27-d7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:42:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:24.924 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:35:7f 10.100.0.6'], port_security=['fa:16:3e:5e:35:7f 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'e887a377-e792-462d-8bcd-002a93dac12d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-203581b6-f356-4499-9dc8-abafe93b350a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea024a03380a4251a920e126716935de', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b59c253f-f6f3-4911-a227-0f0801f548cb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dfd907d8-79ab-4527-a43f-a884a4ea8ecf, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=ce73fc27-d707-4321-8e2e-f77bd4b984ad) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:42:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:24.926 163338 INFO neutron.agent.ovn.metadata.agent [-] Port ce73fc27-d707-4321-8e2e-f77bd4b984ad in datapath 203581b6-f356-4499-9dc8-abafe93b350a bound to our chassis
Nov 25 16:42:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:24.927 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 203581b6-f356-4499-9dc8-abafe93b350a
Nov 25 16:42:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:24.940 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cc8b79aa-b2ba-4bc2-8c52-477476791db8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:24.942 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap203581b6-f1 in ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:42:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:24.944 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap203581b6-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:42:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:24.945 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0e4c1ca5-f169-428f-bf9b-69cf86ec39c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:24.946 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[219e04e3-a456-44d1-8c03-9095f4badc61]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:24 compute-0 systemd-machined[216343]: New machine qemu-82-instance-00000046.
Nov 25 16:42:24 compute-0 nova_compute[254092]: 2025-11-25 16:42:24.951 254096 INFO nova.scheduler.client.report [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Deleted allocations for instance 013dc18e-57cd-4733-8e98-7d20e3b5c4db
Nov 25 16:42:24 compute-0 nova_compute[254092]: 2025-11-25 16:42:24.957 254096 DEBUG nova.network.neutron [req-435bb7db-ee7e-4d9d-8d11-a5e5b316e33c req-0189f948-a54d-4a0f-9bf9-a6f82223b13c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Updated VIF entry in instance network info cache for port ce73fc27-d707-4321-8e2e-f77bd4b984ad. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:42:24 compute-0 nova_compute[254092]: 2025-11-25 16:42:24.958 254096 DEBUG nova.network.neutron [req-435bb7db-ee7e-4d9d-8d11-a5e5b316e33c req-0189f948-a54d-4a0f-9bf9-a6f82223b13c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Updating instance_info_cache with network_info: [{"id": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "address": "fa:16:3e:5e:35:7f", "network": {"id": "203581b6-f356-4499-9dc8-abafe93b350a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-425599925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea024a03380a4251a920e126716935de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce73fc27-d7", "ovs_interfaceid": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:42:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:24.962 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[280911db-9de5-4304-aefd-e10120663705]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:24 compute-0 systemd[1]: Started Virtual Machine qemu-82-instance-00000046.
Nov 25 16:42:24 compute-0 nova_compute[254092]: 2025-11-25 16:42:24.975 254096 DEBUG oslo_concurrency.lockutils [req-435bb7db-ee7e-4d9d-8d11-a5e5b316e33c req-0189f948-a54d-4a0f-9bf9-a6f82223b13c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-e887a377-e792-462d-8bcd-002a93dac12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:42:24 compute-0 nova_compute[254092]: 2025-11-25 16:42:24.989 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:24.994 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5cce7dc7-e301-4a81-9148-dc041aa6edf6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:25 compute-0 nova_compute[254092]: 2025-11-25 16:42:24.999 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:25.024 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[fa6b55cb-a1bc-435a-a524-4df160a9bf59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:25.031 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[29473529-13e1-4177-817e-3a99e4179120]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:25 compute-0 NetworkManager[48891]: <info>  [1764088945.0318] manager: (tap203581b6-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/284)
Nov 25 16:42:25 compute-0 ovn_controller[153477]: 2025-11-25T16:42:25Z|00663|binding|INFO|Setting lport ce73fc27-d707-4321-8e2e-f77bd4b984ad ovn-installed in OVS
Nov 25 16:42:25 compute-0 ovn_controller[153477]: 2025-11-25T16:42:25Z|00664|binding|INFO|Setting lport ce73fc27-d707-4321-8e2e-f77bd4b984ad up in Southbound
Nov 25 16:42:25 compute-0 nova_compute[254092]: 2025-11-25 16:42:25.036 254096 DEBUG oslo_concurrency.lockutils [None req-bbc69bd2-c04b-41b1-939e-fda19cc02b4a e0d077039e0b4d9e8d5663768f40fa48 2319f378c7fb448ebe89427d8dfa7e43 - - default default] Lock "013dc18e-57cd-4733-8e98-7d20e3b5c4db" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.915s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:25 compute-0 nova_compute[254092]: 2025-11-25 16:42:25.037 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:25.066 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[72664c52-faa4-4361-8101-e776296087a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:25.069 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8f3d4c34-b34b-4ab7-89e9-9840669bce8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:25 compute-0 NetworkManager[48891]: <info>  [1764088945.0924] device (tap203581b6-f0): carrier: link connected
Nov 25 16:42:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:25.099 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c3ee52ad-ca4b-4794-b4c7-69098239f78c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:25.118 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b5594fbf-7b6b-4d90-9ccf-81909d54524d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap203581b6-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:fc:5c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 192], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 540268, 'reachable_time': 21983, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328550, 'error': None, 'target': 'ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:25.135 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a8179abe-00cc-457f-a470-b557caa8146a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe63:fc5c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 540268, 'tstamp': 540268}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 328551, 'error': None, 'target': 'ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:25.152 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a7885b97-001d-46f5-8c0a-f837d2aa1bb4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap203581b6-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:fc:5c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 192], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 540268, 'reachable_time': 21983, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 328552, 'error': None, 'target': 'ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:25.185 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c4f5724c-033c-4259-ad1d-de1cb17d16e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:25.252 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f93e4a56-043b-4246-afd6-a46b148e3f80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:25.253 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap203581b6-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:25.253 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:42:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:25.254 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap203581b6-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:25 compute-0 kernel: tap203581b6-f0: entered promiscuous mode
Nov 25 16:42:25 compute-0 nova_compute[254092]: 2025-11-25 16:42:25.256 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:25 compute-0 NetworkManager[48891]: <info>  [1764088945.2580] manager: (tap203581b6-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/285)
Nov 25 16:42:25 compute-0 nova_compute[254092]: 2025-11-25 16:42:25.259 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:25.261 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap203581b6-f0, col_values=(('external_ids', {'iface-id': '43fd9010-c369-4c3b-8331-da7c798cb131'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:25 compute-0 ovn_controller[153477]: 2025-11-25T16:42:25Z|00665|binding|INFO|Releasing lport 43fd9010-c369-4c3b-8331-da7c798cb131 from this chassis (sb_readonly=0)
Nov 25 16:42:25 compute-0 nova_compute[254092]: 2025-11-25 16:42:25.263 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:25 compute-0 nova_compute[254092]: 2025-11-25 16:42:25.264 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:25.265 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/203581b6-f356-4499-9dc8-abafe93b350a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/203581b6-f356-4499-9dc8-abafe93b350a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:42:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:25.266 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a307bfdd-2d7f-42aa-9174-0f80de575f91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:25.267 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:42:25 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:42:25 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:42:25 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-203581b6-f356-4499-9dc8-abafe93b350a
Nov 25 16:42:25 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:42:25 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:42:25 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:42:25 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/203581b6-f356-4499-9dc8-abafe93b350a.pid.haproxy
Nov 25 16:42:25 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:42:25 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:42:25 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:42:25 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:42:25 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:42:25 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:42:25 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:42:25 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:42:25 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:42:25 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:42:25 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:42:25 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:42:25 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:42:25 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:42:25 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:42:25 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:42:25 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:42:25 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:42:25 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:42:25 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:42:25 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 203581b6-f356-4499-9dc8-abafe93b350a
Nov 25 16:42:25 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:42:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:25.267 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a', 'env', 'PROCESS_TAG=haproxy-203581b6-f356-4499-9dc8-abafe93b350a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/203581b6-f356-4499-9dc8-abafe93b350a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:42:25 compute-0 nova_compute[254092]: 2025-11-25 16:42:25.281 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:25 compute-0 nova_compute[254092]: 2025-11-25 16:42:25.329 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088945.328489, e887a377-e792-462d-8bcd-002a93dac12d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:42:25 compute-0 nova_compute[254092]: 2025-11-25 16:42:25.330 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e887a377-e792-462d-8bcd-002a93dac12d] VM Started (Lifecycle Event)
Nov 25 16:42:25 compute-0 nova_compute[254092]: 2025-11-25 16:42:25.353 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:42:25 compute-0 nova_compute[254092]: 2025-11-25 16:42:25.358 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088945.32885, e887a377-e792-462d-8bcd-002a93dac12d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:42:25 compute-0 nova_compute[254092]: 2025-11-25 16:42:25.358 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e887a377-e792-462d-8bcd-002a93dac12d] VM Paused (Lifecycle Event)
Nov 25 16:42:25 compute-0 nova_compute[254092]: 2025-11-25 16:42:25.377 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:42:25 compute-0 nova_compute[254092]: 2025-11-25 16:42:25.381 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:42:25 compute-0 nova_compute[254092]: 2025-11-25 16:42:25.404 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e887a377-e792-462d-8bcd-002a93dac12d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:42:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1711: 321 pgs: 321 active+clean; 120 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 1.8 MiB/s wr, 158 op/s
Nov 25 16:42:25 compute-0 podman[328627]: 2025-11-25 16:42:25.64108867 +0000 UTC m=+0.049365631 container create d600de5a900722cfa83065a4e943ef76490c2cd102b9da7acfc16aec73624ce7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:42:25 compute-0 systemd[1]: Started libpod-conmon-d600de5a900722cfa83065a4e943ef76490c2cd102b9da7acfc16aec73624ce7.scope.
Nov 25 16:42:25 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:42:25 compute-0 podman[328627]: 2025-11-25 16:42:25.614542079 +0000 UTC m=+0.022819060 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:42:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5789ea30890dfa3f5bff74e20beffed1576d518a78c46e7d983852961c8fdd8a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:42:25 compute-0 podman[328627]: 2025-11-25 16:42:25.737884235 +0000 UTC m=+0.146161246 container init d600de5a900722cfa83065a4e943ef76490c2cd102b9da7acfc16aec73624ce7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 16:42:25 compute-0 podman[328627]: 2025-11-25 16:42:25.743393085 +0000 UTC m=+0.151670046 container start d600de5a900722cfa83065a4e943ef76490c2cd102b9da7acfc16aec73624ce7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 25 16:42:25 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2309791242' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:42:25 compute-0 neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a[328642]: [NOTICE]   (328646) : New worker (328648) forked
Nov 25 16:42:25 compute-0 neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a[328642]: [NOTICE]   (328646) : Loading success.
Nov 25 16:42:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:42:26 compute-0 ceph-mon[74985]: pgmap v1711: 321 pgs: 321 active+clean; 120 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 1.8 MiB/s wr, 158 op/s
Nov 25 16:42:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1712: 321 pgs: 321 active+clean; 88 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 392 KiB/s rd, 1.8 MiB/s wr, 127 op/s
Nov 25 16:42:27 compute-0 nova_compute[254092]: 2025-11-25 16:42:27.831 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088932.8272161, b5c5a442-8e8e-40c5-9634-e36c49e6e41b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:42:27 compute-0 nova_compute[254092]: 2025-11-25 16:42:27.832 254096 INFO nova.compute.manager [-] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] VM Stopped (Lifecycle Event)
Nov 25 16:42:27 compute-0 nova_compute[254092]: 2025-11-25 16:42:27.848 254096 DEBUG nova.compute.manager [None req-346bd3e4-1623-4fdb-b819-57a19e383d25 - - - - - -] [instance: b5c5a442-8e8e-40c5-9634-e36c49e6e41b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:42:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:27.866 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:28 compute-0 nova_compute[254092]: 2025-11-25 16:42:28.210 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:28 compute-0 nova_compute[254092]: 2025-11-25 16:42:28.849 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:28 compute-0 ceph-mon[74985]: pgmap v1712: 321 pgs: 321 active+clean; 88 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 392 KiB/s rd, 1.8 MiB/s wr, 127 op/s
Nov 25 16:42:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1713: 321 pgs: 321 active+clean; 88 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 299 KiB/s rd, 1.8 MiB/s wr, 105 op/s
Nov 25 16:42:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:42:31 compute-0 ceph-mon[74985]: pgmap v1713: 321 pgs: 321 active+clean; 88 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 299 KiB/s rd, 1.8 MiB/s wr, 105 op/s
Nov 25 16:42:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1714: 321 pgs: 321 active+clean; 88 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 303 KiB/s rd, 1.8 MiB/s wr, 110 op/s
Nov 25 16:42:31 compute-0 nova_compute[254092]: 2025-11-25 16:42:31.615 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088936.614849, dce3a591-9fb6-4495-a7fb-867af2de384f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:42:31 compute-0 nova_compute[254092]: 2025-11-25 16:42:31.616 254096 INFO nova.compute.manager [-] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] VM Stopped (Lifecycle Event)
Nov 25 16:42:31 compute-0 nova_compute[254092]: 2025-11-25 16:42:31.634 254096 DEBUG nova.compute.manager [None req-786bd18d-c146-497f-91fc-2f74b43b54da - - - - - -] [instance: dce3a591-9fb6-4495-a7fb-867af2de384f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:42:32 compute-0 nova_compute[254092]: 2025-11-25 16:42:32.151 254096 DEBUG nova.compute.manager [req-f3dd0f95-a959-42e1-aad3-f3a8021c58f2 req-96e70ebe-94c6-43cd-9c3b-caf1ad72a24f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Received event network-vif-plugged-ce73fc27-d707-4321-8e2e-f77bd4b984ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:42:32 compute-0 nova_compute[254092]: 2025-11-25 16:42:32.151 254096 DEBUG oslo_concurrency.lockutils [req-f3dd0f95-a959-42e1-aad3-f3a8021c58f2 req-96e70ebe-94c6-43cd-9c3b-caf1ad72a24f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e887a377-e792-462d-8bcd-002a93dac12d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:32 compute-0 nova_compute[254092]: 2025-11-25 16:42:32.151 254096 DEBUG oslo_concurrency.lockutils [req-f3dd0f95-a959-42e1-aad3-f3a8021c58f2 req-96e70ebe-94c6-43cd-9c3b-caf1ad72a24f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e887a377-e792-462d-8bcd-002a93dac12d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:32 compute-0 nova_compute[254092]: 2025-11-25 16:42:32.152 254096 DEBUG oslo_concurrency.lockutils [req-f3dd0f95-a959-42e1-aad3-f3a8021c58f2 req-96e70ebe-94c6-43cd-9c3b-caf1ad72a24f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e887a377-e792-462d-8bcd-002a93dac12d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:32 compute-0 nova_compute[254092]: 2025-11-25 16:42:32.152 254096 DEBUG nova.compute.manager [req-f3dd0f95-a959-42e1-aad3-f3a8021c58f2 req-96e70ebe-94c6-43cd-9c3b-caf1ad72a24f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Processing event network-vif-plugged-ce73fc27-d707-4321-8e2e-f77bd4b984ad _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:42:32 compute-0 nova_compute[254092]: 2025-11-25 16:42:32.153 254096 DEBUG nova.compute.manager [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:42:32 compute-0 nova_compute[254092]: 2025-11-25 16:42:32.157 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088952.1572037, e887a377-e792-462d-8bcd-002a93dac12d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:42:32 compute-0 nova_compute[254092]: 2025-11-25 16:42:32.157 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e887a377-e792-462d-8bcd-002a93dac12d] VM Resumed (Lifecycle Event)
Nov 25 16:42:32 compute-0 nova_compute[254092]: 2025-11-25 16:42:32.159 254096 DEBUG nova.virt.libvirt.driver [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:42:32 compute-0 nova_compute[254092]: 2025-11-25 16:42:32.162 254096 INFO nova.virt.libvirt.driver [-] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Instance spawned successfully.
Nov 25 16:42:32 compute-0 nova_compute[254092]: 2025-11-25 16:42:32.163 254096 DEBUG nova.virt.libvirt.driver [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:42:32 compute-0 nova_compute[254092]: 2025-11-25 16:42:32.262 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:42:32 compute-0 nova_compute[254092]: 2025-11-25 16:42:32.272 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:42:32 compute-0 nova_compute[254092]: 2025-11-25 16:42:32.279 254096 DEBUG nova.virt.libvirt.driver [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:42:32 compute-0 nova_compute[254092]: 2025-11-25 16:42:32.280 254096 DEBUG nova.virt.libvirt.driver [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:42:32 compute-0 nova_compute[254092]: 2025-11-25 16:42:32.280 254096 DEBUG nova.virt.libvirt.driver [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:42:32 compute-0 nova_compute[254092]: 2025-11-25 16:42:32.281 254096 DEBUG nova.virt.libvirt.driver [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:42:32 compute-0 nova_compute[254092]: 2025-11-25 16:42:32.281 254096 DEBUG nova.virt.libvirt.driver [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:42:32 compute-0 nova_compute[254092]: 2025-11-25 16:42:32.282 254096 DEBUG nova.virt.libvirt.driver [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:42:32 compute-0 nova_compute[254092]: 2025-11-25 16:42:32.351 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e887a377-e792-462d-8bcd-002a93dac12d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:42:32 compute-0 ceph-mon[74985]: pgmap v1714: 321 pgs: 321 active+clean; 88 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 303 KiB/s rd, 1.8 MiB/s wr, 110 op/s
Nov 25 16:42:32 compute-0 nova_compute[254092]: 2025-11-25 16:42:32.876 254096 INFO nova.compute.manager [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Took 15.03 seconds to spawn the instance on the hypervisor.
Nov 25 16:42:32 compute-0 nova_compute[254092]: 2025-11-25 16:42:32.876 254096 DEBUG nova.compute.manager [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:42:33 compute-0 nova_compute[254092]: 2025-11-25 16:42:33.033 254096 INFO nova.compute.manager [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Took 16.18 seconds to build instance.
Nov 25 16:42:33 compute-0 nova_compute[254092]: 2025-11-25 16:42:33.213 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:33 compute-0 nova_compute[254092]: 2025-11-25 16:42:33.230 254096 DEBUG oslo_concurrency.lockutils [None req-7c070df7-741e-4063-9f27-772c8bc0aa69 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Lock "e887a377-e792-462d-8bcd-002a93dac12d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1715: 321 pgs: 321 active+clean; 88 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 22 KiB/s wr, 37 op/s
Nov 25 16:42:33 compute-0 nova_compute[254092]: 2025-11-25 16:42:33.851 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:34 compute-0 ceph-mon[74985]: pgmap v1715: 321 pgs: 321 active+clean; 88 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 22 KiB/s wr, 37 op/s
Nov 25 16:42:34 compute-0 nova_compute[254092]: 2025-11-25 16:42:34.272 254096 DEBUG nova.compute.manager [req-fb9af0e3-f6ce-4ca0-a8fc-630aaa933d4b req-17421be2-c9d1-450d-8660-8748ad023f87 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Received event network-vif-plugged-ce73fc27-d707-4321-8e2e-f77bd4b984ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:42:34 compute-0 nova_compute[254092]: 2025-11-25 16:42:34.273 254096 DEBUG oslo_concurrency.lockutils [req-fb9af0e3-f6ce-4ca0-a8fc-630aaa933d4b req-17421be2-c9d1-450d-8660-8748ad023f87 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e887a377-e792-462d-8bcd-002a93dac12d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:34 compute-0 nova_compute[254092]: 2025-11-25 16:42:34.273 254096 DEBUG oslo_concurrency.lockutils [req-fb9af0e3-f6ce-4ca0-a8fc-630aaa933d4b req-17421be2-c9d1-450d-8660-8748ad023f87 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e887a377-e792-462d-8bcd-002a93dac12d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:34 compute-0 nova_compute[254092]: 2025-11-25 16:42:34.274 254096 DEBUG oslo_concurrency.lockutils [req-fb9af0e3-f6ce-4ca0-a8fc-630aaa933d4b req-17421be2-c9d1-450d-8660-8748ad023f87 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e887a377-e792-462d-8bcd-002a93dac12d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:34 compute-0 nova_compute[254092]: 2025-11-25 16:42:34.274 254096 DEBUG nova.compute.manager [req-fb9af0e3-f6ce-4ca0-a8fc-630aaa933d4b req-17421be2-c9d1-450d-8660-8748ad023f87 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] No waiting events found dispatching network-vif-plugged-ce73fc27-d707-4321-8e2e-f77bd4b984ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:42:34 compute-0 nova_compute[254092]: 2025-11-25 16:42:34.274 254096 WARNING nova.compute.manager [req-fb9af0e3-f6ce-4ca0-a8fc-630aaa933d4b req-17421be2-c9d1-450d-8660-8748ad023f87 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Received unexpected event network-vif-plugged-ce73fc27-d707-4321-8e2e-f77bd4b984ad for instance with vm_state active and task_state None.
Nov 25 16:42:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1716: 321 pgs: 321 active+clean; 88 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 22 KiB/s wr, 89 op/s
Nov 25 16:42:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:42:36 compute-0 ceph-mon[74985]: pgmap v1716: 321 pgs: 321 active+clean; 88 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 22 KiB/s wr, 89 op/s
Nov 25 16:42:37 compute-0 nova_compute[254092]: 2025-11-25 16:42:37.357 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088942.3568356, 013dc18e-57cd-4733-8e98-7d20e3b5c4db => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:42:37 compute-0 nova_compute[254092]: 2025-11-25 16:42:37.357 254096 INFO nova.compute.manager [-] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] VM Stopped (Lifecycle Event)
Nov 25 16:42:37 compute-0 nova_compute[254092]: 2025-11-25 16:42:37.386 254096 DEBUG nova.compute.manager [None req-7af2a72e-3ab8-4168-b629-d45248412868 - - - - - -] [instance: 013dc18e-57cd-4733-8e98-7d20e3b5c4db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:42:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1717: 321 pgs: 321 active+clean; 88 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 341 B/s wr, 73 op/s
Nov 25 16:42:37 compute-0 nova_compute[254092]: 2025-11-25 16:42:37.611 254096 DEBUG oslo_concurrency.lockutils [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Acquiring lock "e887a377-e792-462d-8bcd-002a93dac12d" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:37 compute-0 nova_compute[254092]: 2025-11-25 16:42:37.612 254096 DEBUG oslo_concurrency.lockutils [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Lock "e887a377-e792-462d-8bcd-002a93dac12d" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:37 compute-0 nova_compute[254092]: 2025-11-25 16:42:37.612 254096 INFO nova.compute.manager [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Rebooting instance
Nov 25 16:42:37 compute-0 nova_compute[254092]: 2025-11-25 16:42:37.625 254096 DEBUG oslo_concurrency.lockutils [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Acquiring lock "refresh_cache-e887a377-e792-462d-8bcd-002a93dac12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:42:37 compute-0 nova_compute[254092]: 2025-11-25 16:42:37.626 254096 DEBUG oslo_concurrency.lockutils [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Acquired lock "refresh_cache-e887a377-e792-462d-8bcd-002a93dac12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:42:37 compute-0 nova_compute[254092]: 2025-11-25 16:42:37.626 254096 DEBUG nova.network.neutron [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:42:38 compute-0 nova_compute[254092]: 2025-11-25 16:42:38.217 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:38 compute-0 sshd-session[328657]: Connection closed by authenticating user root 171.244.51.45 port 47886 [preauth]
Nov 25 16:42:38 compute-0 nova_compute[254092]: 2025-11-25 16:42:38.853 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:39 compute-0 ceph-mon[74985]: pgmap v1717: 321 pgs: 321 active+clean; 88 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 341 B/s wr, 73 op/s
Nov 25 16:42:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1718: 321 pgs: 321 active+clean; 88 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Nov 25 16:42:39 compute-0 podman[328660]: 2025-11-25 16:42:39.654674849 +0000 UTC m=+0.073779872 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 16:42:39 compute-0 podman[328659]: 2025-11-25 16:42:39.672454081 +0000 UTC m=+0.091983266 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 16:42:39 compute-0 podman[328661]: 2025-11-25 16:42:39.675500014 +0000 UTC m=+0.090789184 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:42:39 compute-0 nova_compute[254092]: 2025-11-25 16:42:39.856 254096 DEBUG nova.network.neutron [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Updating instance_info_cache with network_info: [{"id": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "address": "fa:16:3e:5e:35:7f", "network": {"id": "203581b6-f356-4499-9dc8-abafe93b350a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-425599925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea024a03380a4251a920e126716935de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce73fc27-d7", "ovs_interfaceid": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:42:40 compute-0 nova_compute[254092]: 2025-11-25 16:42:40.016 254096 DEBUG oslo_concurrency.lockutils [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Releasing lock "refresh_cache-e887a377-e792-462d-8bcd-002a93dac12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:42:40 compute-0 nova_compute[254092]: 2025-11-25 16:42:40.017 254096 DEBUG nova.compute.manager [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:42:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:42:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:42:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:42:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:42:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:42:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:42:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:42:40
Nov 25 16:42:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:42:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:42:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'images', 'backups', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data']
Nov 25 16:42:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:42:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:42:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:42:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:42:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:42:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:42:40 compute-0 kernel: tapce73fc27-d7 (unregistering): left promiscuous mode
Nov 25 16:42:40 compute-0 NetworkManager[48891]: <info>  [1764088960.3425] device (tapce73fc27-d7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:42:40 compute-0 nova_compute[254092]: 2025-11-25 16:42:40.391 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:40 compute-0 ovn_controller[153477]: 2025-11-25T16:42:40Z|00666|binding|INFO|Releasing lport ce73fc27-d707-4321-8e2e-f77bd4b984ad from this chassis (sb_readonly=0)
Nov 25 16:42:40 compute-0 ovn_controller[153477]: 2025-11-25T16:42:40Z|00667|binding|INFO|Setting lport ce73fc27-d707-4321-8e2e-f77bd4b984ad down in Southbound
Nov 25 16:42:40 compute-0 ovn_controller[153477]: 2025-11-25T16:42:40Z|00668|binding|INFO|Removing iface tapce73fc27-d7 ovn-installed in OVS
Nov 25 16:42:40 compute-0 nova_compute[254092]: 2025-11-25 16:42:40.394 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:40 compute-0 nova_compute[254092]: 2025-11-25 16:42:40.409 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:40.413 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:35:7f 10.100.0.6'], port_security=['fa:16:3e:5e:35:7f 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'e887a377-e792-462d-8bcd-002a93dac12d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-203581b6-f356-4499-9dc8-abafe93b350a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea024a03380a4251a920e126716935de', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b59c253f-f6f3-4911-a227-0f0801f548cb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dfd907d8-79ab-4527-a43f-a884a4ea8ecf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=ce73fc27-d707-4321-8e2e-f77bd4b984ad) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:42:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:40.414 163338 INFO neutron.agent.ovn.metadata.agent [-] Port ce73fc27-d707-4321-8e2e-f77bd4b984ad in datapath 203581b6-f356-4499-9dc8-abafe93b350a unbound from our chassis
Nov 25 16:42:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:40.415 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 203581b6-f356-4499-9dc8-abafe93b350a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:42:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:40.416 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5ead0fcc-93d1-4668-a5ea-05f98ad65207]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:40.416 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a namespace which is not needed anymore
Nov 25 16:42:40 compute-0 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d00000046.scope: Deactivated successfully.
Nov 25 16:42:40 compute-0 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d00000046.scope: Consumed 8.588s CPU time.
Nov 25 16:42:40 compute-0 systemd-machined[216343]: Machine qemu-82-instance-00000046 terminated.
Nov 25 16:42:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:42:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:42:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:42:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:42:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:42:40 compute-0 neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a[328642]: [NOTICE]   (328646) : haproxy version is 2.8.14-c23fe91
Nov 25 16:42:40 compute-0 neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a[328642]: [NOTICE]   (328646) : path to executable is /usr/sbin/haproxy
Nov 25 16:42:40 compute-0 neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a[328642]: [WARNING]  (328646) : Exiting Master process...
Nov 25 16:42:40 compute-0 neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a[328642]: [ALERT]    (328646) : Current worker (328648) exited with code 143 (Terminated)
Nov 25 16:42:40 compute-0 neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a[328642]: [WARNING]  (328646) : All workers exited. Exiting... (0)
Nov 25 16:42:40 compute-0 systemd[1]: libpod-d600de5a900722cfa83065a4e943ef76490c2cd102b9da7acfc16aec73624ce7.scope: Deactivated successfully.
Nov 25 16:42:40 compute-0 podman[328745]: 2025-11-25 16:42:40.567630548 +0000 UTC m=+0.061962482 container died d600de5a900722cfa83065a4e943ef76490c2cd102b9da7acfc16aec73624ce7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:42:40 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d600de5a900722cfa83065a4e943ef76490c2cd102b9da7acfc16aec73624ce7-userdata-shm.mount: Deactivated successfully.
Nov 25 16:42:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-5789ea30890dfa3f5bff74e20beffed1576d518a78c46e7d983852961c8fdd8a-merged.mount: Deactivated successfully.
Nov 25 16:42:40 compute-0 podman[328745]: 2025-11-25 16:42:40.619205326 +0000 UTC m=+0.113537240 container cleanup d600de5a900722cfa83065a4e943ef76490c2cd102b9da7acfc16aec73624ce7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:42:40 compute-0 nova_compute[254092]: 2025-11-25 16:42:40.625 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:40 compute-0 nova_compute[254092]: 2025-11-25 16:42:40.630 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:40 compute-0 systemd[1]: libpod-conmon-d600de5a900722cfa83065a4e943ef76490c2cd102b9da7acfc16aec73624ce7.scope: Deactivated successfully.
Nov 25 16:42:40 compute-0 nova_compute[254092]: 2025-11-25 16:42:40.640 254096 INFO nova.virt.libvirt.driver [-] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Instance destroyed successfully.
Nov 25 16:42:40 compute-0 nova_compute[254092]: 2025-11-25 16:42:40.641 254096 DEBUG nova.objects.instance [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Lazy-loading 'resources' on Instance uuid e887a377-e792-462d-8bcd-002a93dac12d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:42:40 compute-0 nova_compute[254092]: 2025-11-25 16:42:40.661 254096 DEBUG nova.virt.libvirt.vif [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:42:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1282971748',display_name='tempest-InstanceActionsTestJSON-server-1282971748',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1282971748',id=70,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:42:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ea024a03380a4251a920e126716935de',ramdisk_id='',reservation_id='r-29eo0uj0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',ima
ge_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-16048106',owner_user_name='tempest-InstanceActionsTestJSON-16048106-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:42:40Z,user_data=None,user_id='e76c1b261c0442caa52f39297ccf296d',uuid=e887a377-e792-462d-8bcd-002a93dac12d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "address": "fa:16:3e:5e:35:7f", "network": {"id": "203581b6-f356-4499-9dc8-abafe93b350a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-425599925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea024a03380a4251a920e126716935de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce73fc27-d7", "ovs_interfaceid": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:42:40 compute-0 nova_compute[254092]: 2025-11-25 16:42:40.662 254096 DEBUG nova.network.os_vif_util [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Converting VIF {"id": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "address": "fa:16:3e:5e:35:7f", "network": {"id": "203581b6-f356-4499-9dc8-abafe93b350a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-425599925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea024a03380a4251a920e126716935de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce73fc27-d7", "ovs_interfaceid": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:42:40 compute-0 nova_compute[254092]: 2025-11-25 16:42:40.664 254096 DEBUG nova.network.os_vif_util [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5e:35:7f,bridge_name='br-int',has_traffic_filtering=True,id=ce73fc27-d707-4321-8e2e-f77bd4b984ad,network=Network(203581b6-f356-4499-9dc8-abafe93b350a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce73fc27-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:42:40 compute-0 nova_compute[254092]: 2025-11-25 16:42:40.664 254096 DEBUG os_vif [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5e:35:7f,bridge_name='br-int',has_traffic_filtering=True,id=ce73fc27-d707-4321-8e2e-f77bd4b984ad,network=Network(203581b6-f356-4499-9dc8-abafe93b350a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce73fc27-d7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:42:40 compute-0 nova_compute[254092]: 2025-11-25 16:42:40.666 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:40 compute-0 nova_compute[254092]: 2025-11-25 16:42:40.667 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce73fc27-d7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:40 compute-0 nova_compute[254092]: 2025-11-25 16:42:40.668 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:40 compute-0 nova_compute[254092]: 2025-11-25 16:42:40.671 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:42:40 compute-0 nova_compute[254092]: 2025-11-25 16:42:40.673 254096 INFO os_vif [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5e:35:7f,bridge_name='br-int',has_traffic_filtering=True,id=ce73fc27-d707-4321-8e2e-f77bd4b984ad,network=Network(203581b6-f356-4499-9dc8-abafe93b350a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce73fc27-d7')
Nov 25 16:42:40 compute-0 nova_compute[254092]: 2025-11-25 16:42:40.680 254096 DEBUG nova.virt.libvirt.driver [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Start _get_guest_xml network_info=[{"id": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "address": "fa:16:3e:5e:35:7f", "network": {"id": "203581b6-f356-4499-9dc8-abafe93b350a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-425599925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea024a03380a4251a920e126716935de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce73fc27-d7", "ovs_interfaceid": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:42:40 compute-0 nova_compute[254092]: 2025-11-25 16:42:40.684 254096 WARNING nova.virt.libvirt.driver [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:42:40 compute-0 nova_compute[254092]: 2025-11-25 16:42:40.694 254096 DEBUG nova.virt.libvirt.host [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:42:40 compute-0 nova_compute[254092]: 2025-11-25 16:42:40.695 254096 DEBUG nova.virt.libvirt.host [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:42:40 compute-0 podman[328778]: 2025-11-25 16:42:40.697355907 +0000 UTC m=+0.050968044 container remove d600de5a900722cfa83065a4e943ef76490c2cd102b9da7acfc16aec73624ce7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 25 16:42:40 compute-0 nova_compute[254092]: 2025-11-25 16:42:40.699 254096 DEBUG nova.virt.libvirt.host [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:42:40 compute-0 nova_compute[254092]: 2025-11-25 16:42:40.700 254096 DEBUG nova.virt.libvirt.host [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:42:40 compute-0 nova_compute[254092]: 2025-11-25 16:42:40.700 254096 DEBUG nova.virt.libvirt.driver [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:42:40 compute-0 nova_compute[254092]: 2025-11-25 16:42:40.701 254096 DEBUG nova.virt.hardware [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:42:40 compute-0 nova_compute[254092]: 2025-11-25 16:42:40.701 254096 DEBUG nova.virt.hardware [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:42:40 compute-0 nova_compute[254092]: 2025-11-25 16:42:40.701 254096 DEBUG nova.virt.hardware [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:42:40 compute-0 nova_compute[254092]: 2025-11-25 16:42:40.702 254096 DEBUG nova.virt.hardware [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:42:40 compute-0 nova_compute[254092]: 2025-11-25 16:42:40.702 254096 DEBUG nova.virt.hardware [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:42:40 compute-0 nova_compute[254092]: 2025-11-25 16:42:40.702 254096 DEBUG nova.virt.hardware [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:42:40 compute-0 nova_compute[254092]: 2025-11-25 16:42:40.702 254096 DEBUG nova.virt.hardware [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:42:40 compute-0 nova_compute[254092]: 2025-11-25 16:42:40.702 254096 DEBUG nova.virt.hardware [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:42:40 compute-0 nova_compute[254092]: 2025-11-25 16:42:40.703 254096 DEBUG nova.virt.hardware [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:42:40 compute-0 nova_compute[254092]: 2025-11-25 16:42:40.703 254096 DEBUG nova.virt.hardware [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:42:40 compute-0 nova_compute[254092]: 2025-11-25 16:42:40.703 254096 DEBUG nova.virt.hardware [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:42:40 compute-0 nova_compute[254092]: 2025-11-25 16:42:40.703 254096 DEBUG nova.objects.instance [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Lazy-loading 'vcpu_model' on Instance uuid e887a377-e792-462d-8bcd-002a93dac12d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:42:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:40.704 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6b1525a9-63b2-4e8f-8a03-a8ff4104c247]: (4, ('Tue Nov 25 04:42:40 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a (d600de5a900722cfa83065a4e943ef76490c2cd102b9da7acfc16aec73624ce7)\nd600de5a900722cfa83065a4e943ef76490c2cd102b9da7acfc16aec73624ce7\nTue Nov 25 04:42:40 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a (d600de5a900722cfa83065a4e943ef76490c2cd102b9da7acfc16aec73624ce7)\nd600de5a900722cfa83065a4e943ef76490c2cd102b9da7acfc16aec73624ce7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:40.705 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e5e2abe1-68a3-4d24-bf06-972bfff79118]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:40.706 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap203581b6-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:40 compute-0 nova_compute[254092]: 2025-11-25 16:42:40.708 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:40 compute-0 kernel: tap203581b6-f0: left promiscuous mode
Nov 25 16:42:40 compute-0 nova_compute[254092]: 2025-11-25 16:42:40.726 254096 DEBUG oslo_concurrency.processutils [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:42:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:40.727 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[03678243-95c5-44b7-86fe-a2b2acc1d82b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:40.747 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ce67f42b-03b0-417c-b653-8616306727fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:40.749 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[14cbb22b-f3c4-4c18-b4ab-90b4c7cb118c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:40 compute-0 nova_compute[254092]: 2025-11-25 16:42:40.755 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:40.765 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0d4fc717-4dc0-4779-a70d-741b21417394]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 540260, 'reachable_time': 19161, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328798, 'error': None, 'target': 'ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:40 compute-0 systemd[1]: run-netns-ovnmeta\x2d203581b6\x2df356\x2d4499\x2d9dc8\x2dabafe93b350a.mount: Deactivated successfully.
Nov 25 16:42:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:40.769 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:42:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:40.769 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[6876b98e-aadf-4b53-9e1a-f71eec06029e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:42:41 compute-0 ceph-mon[74985]: pgmap v1718: 321 pgs: 321 active+clean; 88 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Nov 25 16:42:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:42:41 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4038257087' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:42:41 compute-0 nova_compute[254092]: 2025-11-25 16:42:41.170 254096 DEBUG oslo_concurrency.processutils [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:42:41 compute-0 nova_compute[254092]: 2025-11-25 16:42:41.201 254096 DEBUG oslo_concurrency.processutils [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:42:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1719: 321 pgs: 321 active+clean; 88 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Nov 25 16:42:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:42:41 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2649746531' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:42:41 compute-0 nova_compute[254092]: 2025-11-25 16:42:41.635 254096 DEBUG oslo_concurrency.processutils [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:42:41 compute-0 nova_compute[254092]: 2025-11-25 16:42:41.637 254096 DEBUG nova.virt.libvirt.vif [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:42:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1282971748',display_name='tempest-InstanceActionsTestJSON-server-1282971748',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1282971748',id=70,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:42:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ea024a03380a4251a920e126716935de',ramdisk_id='',reservation_id='r-29eo0uj0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-16048106',owner_user_name='tempest-InstanceActionsTestJSON-16048106-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:42:40Z,user_data=None,user_id='e76c1b261c0442caa52f39297ccf296d',uuid=e887a377-e792-462d-8bcd-002a93dac12d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "address": "fa:16:3e:5e:35:7f", "network": {"id": "203581b6-f356-4499-9dc8-abafe93b350a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-425599925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea024a03380a4251a920e126716935de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce73fc27-d7", "ovs_interfaceid": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:42:41 compute-0 nova_compute[254092]: 2025-11-25 16:42:41.637 254096 DEBUG nova.network.os_vif_util [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Converting VIF {"id": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "address": "fa:16:3e:5e:35:7f", "network": {"id": "203581b6-f356-4499-9dc8-abafe93b350a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-425599925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea024a03380a4251a920e126716935de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce73fc27-d7", "ovs_interfaceid": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:42:41 compute-0 nova_compute[254092]: 2025-11-25 16:42:41.638 254096 DEBUG nova.network.os_vif_util [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5e:35:7f,bridge_name='br-int',has_traffic_filtering=True,id=ce73fc27-d707-4321-8e2e-f77bd4b984ad,network=Network(203581b6-f356-4499-9dc8-abafe93b350a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce73fc27-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:42:41 compute-0 nova_compute[254092]: 2025-11-25 16:42:41.639 254096 DEBUG nova.objects.instance [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Lazy-loading 'pci_devices' on Instance uuid e887a377-e792-462d-8bcd-002a93dac12d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:42:41 compute-0 nova_compute[254092]: 2025-11-25 16:42:41.653 254096 DEBUG nova.virt.libvirt.driver [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:42:41 compute-0 nova_compute[254092]:   <uuid>e887a377-e792-462d-8bcd-002a93dac12d</uuid>
Nov 25 16:42:41 compute-0 nova_compute[254092]:   <name>instance-00000046</name>
Nov 25 16:42:41 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:42:41 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:42:41 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:42:41 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:       <nova:name>tempest-InstanceActionsTestJSON-server-1282971748</nova:name>
Nov 25 16:42:41 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:42:40</nova:creationTime>
Nov 25 16:42:41 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:42:41 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:42:41 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:42:41 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:42:41 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:42:41 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:42:41 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:42:41 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:42:41 compute-0 nova_compute[254092]:         <nova:user uuid="e76c1b261c0442caa52f39297ccf296d">tempest-InstanceActionsTestJSON-16048106-project-member</nova:user>
Nov 25 16:42:41 compute-0 nova_compute[254092]:         <nova:project uuid="ea024a03380a4251a920e126716935de">tempest-InstanceActionsTestJSON-16048106</nova:project>
Nov 25 16:42:41 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:42:41 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:42:41 compute-0 nova_compute[254092]:         <nova:port uuid="ce73fc27-d707-4321-8e2e-f77bd4b984ad">
Nov 25 16:42:41 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:42:41 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:42:41 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:42:41 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:42:41 compute-0 nova_compute[254092]:     <system>
Nov 25 16:42:41 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:42:41 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:42:41 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:42:41 compute-0 nova_compute[254092]:       <entry name="serial">e887a377-e792-462d-8bcd-002a93dac12d</entry>
Nov 25 16:42:41 compute-0 nova_compute[254092]:       <entry name="uuid">e887a377-e792-462d-8bcd-002a93dac12d</entry>
Nov 25 16:42:41 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     </system>
Nov 25 16:42:41 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:42:41 compute-0 nova_compute[254092]:   <os>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:   </os>
Nov 25 16:42:41 compute-0 nova_compute[254092]:   <features>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:   </features>
Nov 25 16:42:41 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:42:41 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:42:41 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:42:41 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:42:41 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:42:41 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/e887a377-e792-462d-8bcd-002a93dac12d_disk">
Nov 25 16:42:41 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:       </source>
Nov 25 16:42:41 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:42:41 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:42:41 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:42:41 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/e887a377-e792-462d-8bcd-002a93dac12d_disk.config">
Nov 25 16:42:41 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:       </source>
Nov 25 16:42:41 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:42:41 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:42:41 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:42:41 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:5e:35:7f"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:       <target dev="tapce73fc27-d7"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:42:41 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/e887a377-e792-462d-8bcd-002a93dac12d/console.log" append="off"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     <video>
Nov 25 16:42:41 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     </video>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     <input type="keyboard" bus="usb"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:42:41 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:42:41 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:42:41 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:42:41 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:42:41 compute-0 nova_compute[254092]: </domain>
Nov 25 16:42:41 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:42:41 compute-0 nova_compute[254092]: 2025-11-25 16:42:41.654 254096 DEBUG nova.virt.libvirt.driver [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] skipping disk for instance-00000046 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:42:41 compute-0 nova_compute[254092]: 2025-11-25 16:42:41.655 254096 DEBUG nova.virt.libvirt.driver [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] skipping disk for instance-00000046 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:42:41 compute-0 nova_compute[254092]: 2025-11-25 16:42:41.655 254096 DEBUG nova.virt.libvirt.vif [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:42:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1282971748',display_name='tempest-InstanceActionsTestJSON-server-1282971748',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1282971748',id=70,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:42:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='ea024a03380a4251a920e126716935de',ramdisk_id='',reservation_id='r-29eo0uj0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-16048106',owner_user_name='tempest-InstanceActionsTestJSON-16048106-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:42:40Z,user_data=None,user_id='e76c1b261c0442caa52f39297ccf296d',uuid=e887a377-e792-462d-8bcd-002a93dac12d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "address": "fa:16:3e:5e:35:7f", "network": {"id": "203581b6-f356-4499-9dc8-abafe93b350a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-425599925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea024a03380a4251a920e126716935de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce73fc27-d7", "ovs_interfaceid": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:42:41 compute-0 nova_compute[254092]: 2025-11-25 16:42:41.655 254096 DEBUG nova.network.os_vif_util [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Converting VIF {"id": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "address": "fa:16:3e:5e:35:7f", "network": {"id": "203581b6-f356-4499-9dc8-abafe93b350a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-425599925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea024a03380a4251a920e126716935de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce73fc27-d7", "ovs_interfaceid": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:42:41 compute-0 nova_compute[254092]: 2025-11-25 16:42:41.656 254096 DEBUG nova.network.os_vif_util [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5e:35:7f,bridge_name='br-int',has_traffic_filtering=True,id=ce73fc27-d707-4321-8e2e-f77bd4b984ad,network=Network(203581b6-f356-4499-9dc8-abafe93b350a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce73fc27-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:42:41 compute-0 nova_compute[254092]: 2025-11-25 16:42:41.656 254096 DEBUG os_vif [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5e:35:7f,bridge_name='br-int',has_traffic_filtering=True,id=ce73fc27-d707-4321-8e2e-f77bd4b984ad,network=Network(203581b6-f356-4499-9dc8-abafe93b350a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce73fc27-d7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:42:41 compute-0 nova_compute[254092]: 2025-11-25 16:42:41.657 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:41 compute-0 nova_compute[254092]: 2025-11-25 16:42:41.657 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:41 compute-0 nova_compute[254092]: 2025-11-25 16:42:41.657 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:42:41 compute-0 nova_compute[254092]: 2025-11-25 16:42:41.659 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:41 compute-0 nova_compute[254092]: 2025-11-25 16:42:41.659 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce73fc27-d7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:41 compute-0 nova_compute[254092]: 2025-11-25 16:42:41.660 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapce73fc27-d7, col_values=(('external_ids', {'iface-id': 'ce73fc27-d707-4321-8e2e-f77bd4b984ad', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5e:35:7f', 'vm-uuid': 'e887a377-e792-462d-8bcd-002a93dac12d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:41 compute-0 nova_compute[254092]: 2025-11-25 16:42:41.661 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:41 compute-0 NetworkManager[48891]: <info>  [1764088961.6623] manager: (tapce73fc27-d7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/286)
Nov 25 16:42:41 compute-0 nova_compute[254092]: 2025-11-25 16:42:41.664 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:42:41 compute-0 nova_compute[254092]: 2025-11-25 16:42:41.666 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:41 compute-0 nova_compute[254092]: 2025-11-25 16:42:41.667 254096 INFO os_vif [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5e:35:7f,bridge_name='br-int',has_traffic_filtering=True,id=ce73fc27-d707-4321-8e2e-f77bd4b984ad,network=Network(203581b6-f356-4499-9dc8-abafe93b350a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce73fc27-d7')
Nov 25 16:42:41 compute-0 kernel: tapce73fc27-d7: entered promiscuous mode
Nov 25 16:42:41 compute-0 systemd-udevd[328727]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:42:41 compute-0 NetworkManager[48891]: <info>  [1764088961.7253] manager: (tapce73fc27-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/287)
Nov 25 16:42:41 compute-0 ovn_controller[153477]: 2025-11-25T16:42:41Z|00669|binding|INFO|Claiming lport ce73fc27-d707-4321-8e2e-f77bd4b984ad for this chassis.
Nov 25 16:42:41 compute-0 ovn_controller[153477]: 2025-11-25T16:42:41Z|00670|binding|INFO|ce73fc27-d707-4321-8e2e-f77bd4b984ad: Claiming fa:16:3e:5e:35:7f 10.100.0.6
Nov 25 16:42:41 compute-0 nova_compute[254092]: 2025-11-25 16:42:41.727 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:41 compute-0 NetworkManager[48891]: <info>  [1764088961.7355] device (tapce73fc27-d7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:42:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:41.735 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:35:7f 10.100.0.6'], port_security=['fa:16:3e:5e:35:7f 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'e887a377-e792-462d-8bcd-002a93dac12d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-203581b6-f356-4499-9dc8-abafe93b350a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea024a03380a4251a920e126716935de', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b59c253f-f6f3-4911-a227-0f0801f548cb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dfd907d8-79ab-4527-a43f-a884a4ea8ecf, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=ce73fc27-d707-4321-8e2e-f77bd4b984ad) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:42:41 compute-0 NetworkManager[48891]: <info>  [1764088961.7368] device (tapce73fc27-d7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:42:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:41.737 163338 INFO neutron.agent.ovn.metadata.agent [-] Port ce73fc27-d707-4321-8e2e-f77bd4b984ad in datapath 203581b6-f356-4499-9dc8-abafe93b350a bound to our chassis
Nov 25 16:42:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:41.738 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 203581b6-f356-4499-9dc8-abafe93b350a
Nov 25 16:42:41 compute-0 ovn_controller[153477]: 2025-11-25T16:42:41Z|00671|binding|INFO|Setting lport ce73fc27-d707-4321-8e2e-f77bd4b984ad ovn-installed in OVS
Nov 25 16:42:41 compute-0 ovn_controller[153477]: 2025-11-25T16:42:41Z|00672|binding|INFO|Setting lport ce73fc27-d707-4321-8e2e-f77bd4b984ad up in Southbound
Nov 25 16:42:41 compute-0 nova_compute[254092]: 2025-11-25 16:42:41.747 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:41.748 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d433513d-b9bb-495b-841a-d77039cdd668]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:41.749 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap203581b6-f1 in ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:42:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:41.751 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap203581b6-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:42:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:41.751 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[aceb397a-72cd-42bf-b48a-8c86225dd341]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:41.752 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[96273c90-e33d-439a-b6c6-02ecdce19f63]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:41 compute-0 systemd-machined[216343]: New machine qemu-83-instance-00000046.
Nov 25 16:42:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:41.764 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[fe790576-7757-4169-ab04-1568c98491c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:41 compute-0 systemd[1]: Started Virtual Machine qemu-83-instance-00000046.
Nov 25 16:42:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:41.786 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8d2028d0-1f85-4a85-b9ba-a719a8402120]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:41.813 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[12d98f0a-729d-40cc-827b-057ea1382395]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:41.817 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2a7e364a-5159-41dd-899a-f615b40da74f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:41 compute-0 NetworkManager[48891]: <info>  [1764088961.8181] manager: (tap203581b6-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/288)
Nov 25 16:42:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:41.845 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[74ce8ed8-20d3-455d-a7ca-a3d02cbab975]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:41.847 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[eedb7ddd-06a1-420e-b59f-0049d282b9e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:41 compute-0 NetworkManager[48891]: <info>  [1764088961.8693] device (tap203581b6-f0): carrier: link connected
Nov 25 16:42:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:41.874 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c0846485-8ebc-4b81-b82d-9d58f0e9bccd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:41.890 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6f571aa5-9da4-488b-abe1-395c78d45047]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap203581b6-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:fc:5c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 195], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 541946, 'reachable_time': 36665, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328905, 'error': None, 'target': 'ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:41.904 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a5756fe8-90f3-488d-9ab1-56a423f663c6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe63:fc5c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 541946, 'tstamp': 541946}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 328906, 'error': None, 'target': 'ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:41.922 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f7cf8335-b7ac-47e4-9b11-dda9cb3f88ef]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap203581b6-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:fc:5c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 195], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 541946, 'reachable_time': 36665, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 328907, 'error': None, 'target': 'ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:41.950 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2679b167-e0b4-4fa2-ad64-dc1deee02b6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:42.011 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[20792e6c-b0fe-406b-bab0-7f1a9ff94a23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:42.012 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap203581b6-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:42.012 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:42:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:42.013 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap203581b6-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:42 compute-0 nova_compute[254092]: 2025-11-25 16:42:42.014 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:42 compute-0 NetworkManager[48891]: <info>  [1764088962.0154] manager: (tap203581b6-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/289)
Nov 25 16:42:42 compute-0 kernel: tap203581b6-f0: entered promiscuous mode
Nov 25 16:42:42 compute-0 nova_compute[254092]: 2025-11-25 16:42:42.017 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:42.019 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap203581b6-f0, col_values=(('external_ids', {'iface-id': '43fd9010-c369-4c3b-8331-da7c798cb131'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:42 compute-0 nova_compute[254092]: 2025-11-25 16:42:42.020 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:42 compute-0 ovn_controller[153477]: 2025-11-25T16:42:42Z|00673|binding|INFO|Releasing lport 43fd9010-c369-4c3b-8331-da7c798cb131 from this chassis (sb_readonly=0)
Nov 25 16:42:42 compute-0 nova_compute[254092]: 2025-11-25 16:42:42.021 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:42.022 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/203581b6-f356-4499-9dc8-abafe93b350a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/203581b6-f356-4499-9dc8-abafe93b350a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:42:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:42.023 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7da86e35-9a5e-4cf8-a733-b65f369a07cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:42.024 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:42:42 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:42:42 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:42:42 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-203581b6-f356-4499-9dc8-abafe93b350a
Nov 25 16:42:42 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:42:42 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:42:42 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:42:42 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/203581b6-f356-4499-9dc8-abafe93b350a.pid.haproxy
Nov 25 16:42:42 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:42:42 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:42:42 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:42:42 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:42:42 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:42:42 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:42:42 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:42:42 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:42:42 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:42:42 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:42:42 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:42:42 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:42:42 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:42:42 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:42:42 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:42:42 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:42:42 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:42:42 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:42:42 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:42:42 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:42:42 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 203581b6-f356-4499-9dc8-abafe93b350a
Nov 25 16:42:42 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:42:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:42.025 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a', 'env', 'PROCESS_TAG=haproxy-203581b6-f356-4499-9dc8-abafe93b350a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/203581b6-f356-4499-9dc8-abafe93b350a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:42:42 compute-0 nova_compute[254092]: 2025-11-25 16:42:42.037 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:42 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4038257087' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:42:42 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2649746531' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:42:42 compute-0 nova_compute[254092]: 2025-11-25 16:42:42.150 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for e887a377-e792-462d-8bcd-002a93dac12d due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 16:42:42 compute-0 nova_compute[254092]: 2025-11-25 16:42:42.150 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088962.1497972, e887a377-e792-462d-8bcd-002a93dac12d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:42:42 compute-0 nova_compute[254092]: 2025-11-25 16:42:42.151 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e887a377-e792-462d-8bcd-002a93dac12d] VM Resumed (Lifecycle Event)
Nov 25 16:42:42 compute-0 nova_compute[254092]: 2025-11-25 16:42:42.153 254096 DEBUG nova.compute.manager [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:42:42 compute-0 nova_compute[254092]: 2025-11-25 16:42:42.156 254096 INFO nova.virt.libvirt.driver [-] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Instance rebooted successfully.
Nov 25 16:42:42 compute-0 nova_compute[254092]: 2025-11-25 16:42:42.157 254096 DEBUG nova.compute.manager [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:42:42 compute-0 nova_compute[254092]: 2025-11-25 16:42:42.182 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:42:42 compute-0 nova_compute[254092]: 2025-11-25 16:42:42.186 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:42:42 compute-0 nova_compute[254092]: 2025-11-25 16:42:42.225 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e887a377-e792-462d-8bcd-002a93dac12d] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Nov 25 16:42:42 compute-0 nova_compute[254092]: 2025-11-25 16:42:42.226 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088962.15123, e887a377-e792-462d-8bcd-002a93dac12d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:42:42 compute-0 nova_compute[254092]: 2025-11-25 16:42:42.226 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e887a377-e792-462d-8bcd-002a93dac12d] VM Started (Lifecycle Event)
Nov 25 16:42:42 compute-0 nova_compute[254092]: 2025-11-25 16:42:42.241 254096 DEBUG oslo_concurrency.lockutils [None req-976210b2-e4ab-4b9b-9f68-fcce95208ce0 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Lock "e887a377-e792-462d-8bcd-002a93dac12d" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 4.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:42 compute-0 nova_compute[254092]: 2025-11-25 16:42:42.271 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:42:42 compute-0 nova_compute[254092]: 2025-11-25 16:42:42.273 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:42:42 compute-0 podman[328981]: 2025-11-25 16:42:42.377845128 +0000 UTC m=+0.048219549 container create 47d6e3f54b3217400ffd752eaa0ae86f3ad9d71d62db6a18eb6b08d8bb71e526 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 25 16:42:42 compute-0 systemd[1]: Started libpod-conmon-47d6e3f54b3217400ffd752eaa0ae86f3ad9d71d62db6a18eb6b08d8bb71e526.scope.
Nov 25 16:42:42 compute-0 podman[328981]: 2025-11-25 16:42:42.350442174 +0000 UTC m=+0.020816625 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:42:42 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:42:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16913faf8a5c974f1946e4ed69e9941f406ff9e2fad5af1678ff5c86655f484c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:42:42 compute-0 podman[328981]: 2025-11-25 16:42:42.480689588 +0000 UTC m=+0.151064019 container init 47d6e3f54b3217400ffd752eaa0ae86f3ad9d71d62db6a18eb6b08d8bb71e526 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:42:42 compute-0 podman[328981]: 2025-11-25 16:42:42.487281697 +0000 UTC m=+0.157656138 container start 47d6e3f54b3217400ffd752eaa0ae86f3ad9d71d62db6a18eb6b08d8bb71e526 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 16:42:42 compute-0 neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a[328997]: [NOTICE]   (329001) : New worker (329003) forked
Nov 25 16:42:42 compute-0 neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a[328997]: [NOTICE]   (329001) : Loading success.
Nov 25 16:42:43 compute-0 ceph-mon[74985]: pgmap v1719: 321 pgs: 321 active+clean; 88 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Nov 25 16:42:43 compute-0 nova_compute[254092]: 2025-11-25 16:42:43.216 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1720: 321 pgs: 321 active+clean; 88 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Nov 25 16:42:43 compute-0 sudo[329012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:42:43 compute-0 sudo[329012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:42:43 compute-0 sudo[329012]: pam_unix(sudo:session): session closed for user root
Nov 25 16:42:43 compute-0 sudo[329037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:42:43 compute-0 sudo[329037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:42:43 compute-0 sudo[329037]: pam_unix(sudo:session): session closed for user root
Nov 25 16:42:43 compute-0 sudo[329062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:42:43 compute-0 sudo[329062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:42:43 compute-0 sudo[329062]: pam_unix(sudo:session): session closed for user root
Nov 25 16:42:44 compute-0 sudo[329087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 16:42:44 compute-0 sudo[329087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:42:44 compute-0 nova_compute[254092]: 2025-11-25 16:42:44.356 254096 DEBUG oslo_concurrency.lockutils [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Acquiring lock "e887a377-e792-462d-8bcd-002a93dac12d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:44 compute-0 nova_compute[254092]: 2025-11-25 16:42:44.357 254096 DEBUG oslo_concurrency.lockutils [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Lock "e887a377-e792-462d-8bcd-002a93dac12d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:44 compute-0 nova_compute[254092]: 2025-11-25 16:42:44.357 254096 DEBUG oslo_concurrency.lockutils [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Acquiring lock "e887a377-e792-462d-8bcd-002a93dac12d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:44 compute-0 nova_compute[254092]: 2025-11-25 16:42:44.358 254096 DEBUG oslo_concurrency.lockutils [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Lock "e887a377-e792-462d-8bcd-002a93dac12d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:44 compute-0 nova_compute[254092]: 2025-11-25 16:42:44.358 254096 DEBUG oslo_concurrency.lockutils [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Lock "e887a377-e792-462d-8bcd-002a93dac12d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:44 compute-0 nova_compute[254092]: 2025-11-25 16:42:44.359 254096 INFO nova.compute.manager [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Terminating instance
Nov 25 16:42:44 compute-0 nova_compute[254092]: 2025-11-25 16:42:44.360 254096 DEBUG nova.compute.manager [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:42:44 compute-0 kernel: tapce73fc27-d7 (unregistering): left promiscuous mode
Nov 25 16:42:44 compute-0 NetworkManager[48891]: <info>  [1764088964.4092] device (tapce73fc27-d7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:42:44 compute-0 ovn_controller[153477]: 2025-11-25T16:42:44Z|00674|binding|INFO|Releasing lport ce73fc27-d707-4321-8e2e-f77bd4b984ad from this chassis (sb_readonly=0)
Nov 25 16:42:44 compute-0 ovn_controller[153477]: 2025-11-25T16:42:44Z|00675|binding|INFO|Setting lport ce73fc27-d707-4321-8e2e-f77bd4b984ad down in Southbound
Nov 25 16:42:44 compute-0 ovn_controller[153477]: 2025-11-25T16:42:44Z|00676|binding|INFO|Removing iface tapce73fc27-d7 ovn-installed in OVS
Nov 25 16:42:44 compute-0 nova_compute[254092]: 2025-11-25 16:42:44.420 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:44 compute-0 nova_compute[254092]: 2025-11-25 16:42:44.423 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:44.429 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:35:7f 10.100.0.6'], port_security=['fa:16:3e:5e:35:7f 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'e887a377-e792-462d-8bcd-002a93dac12d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-203581b6-f356-4499-9dc8-abafe93b350a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea024a03380a4251a920e126716935de', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b59c253f-f6f3-4911-a227-0f0801f548cb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dfd907d8-79ab-4527-a43f-a884a4ea8ecf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=ce73fc27-d707-4321-8e2e-f77bd4b984ad) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:42:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:44.430 163338 INFO neutron.agent.ovn.metadata.agent [-] Port ce73fc27-d707-4321-8e2e-f77bd4b984ad in datapath 203581b6-f356-4499-9dc8-abafe93b350a unbound from our chassis
Nov 25 16:42:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:44.431 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 203581b6-f356-4499-9dc8-abafe93b350a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:42:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:44.433 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[588a9af2-7ca9-4d5d-a1f9-f2eccf59e218]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:44.434 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a namespace which is not needed anymore
Nov 25 16:42:44 compute-0 nova_compute[254092]: 2025-11-25 16:42:44.445 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:44 compute-0 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d00000046.scope: Deactivated successfully.
Nov 25 16:42:44 compute-0 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d00000046.scope: Consumed 2.627s CPU time.
Nov 25 16:42:44 compute-0 systemd-machined[216343]: Machine qemu-83-instance-00000046 terminated.
Nov 25 16:42:44 compute-0 sudo[329087]: pam_unix(sudo:session): session closed for user root
Nov 25 16:42:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:42:44 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:42:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 16:42:44 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:42:44 compute-0 neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a[328997]: [NOTICE]   (329001) : haproxy version is 2.8.14-c23fe91
Nov 25 16:42:44 compute-0 neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a[328997]: [NOTICE]   (329001) : path to executable is /usr/sbin/haproxy
Nov 25 16:42:44 compute-0 neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a[328997]: [WARNING]  (329001) : Exiting Master process...
Nov 25 16:42:44 compute-0 neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a[328997]: [ALERT]    (329001) : Current worker (329003) exited with code 143 (Terminated)
Nov 25 16:42:44 compute-0 neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a[328997]: [WARNING]  (329001) : All workers exited. Exiting... (0)
Nov 25 16:42:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 16:42:44 compute-0 systemd[1]: libpod-47d6e3f54b3217400ffd752eaa0ae86f3ad9d71d62db6a18eb6b08d8bb71e526.scope: Deactivated successfully.
Nov 25 16:42:44 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:42:44 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 729ea660-28c2-4fec-bbb4-319f3ca59699 does not exist
Nov 25 16:42:44 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev aaf0e74c-dd12-4261-97cc-c4e5c66d58e3 does not exist
Nov 25 16:42:44 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 0f0d044c-b72c-40eb-b2a2-a01ec5e40948 does not exist
Nov 25 16:42:44 compute-0 podman[329167]: 2025-11-25 16:42:44.575845999 +0000 UTC m=+0.046884453 container died 47d6e3f54b3217400ffd752eaa0ae86f3ad9d71d62db6a18eb6b08d8bb71e526 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 16:42:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 16:42:44 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:42:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 16:42:44 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:42:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:42:44 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:42:44 compute-0 nova_compute[254092]: 2025-11-25 16:42:44.592 254096 INFO nova.virt.libvirt.driver [-] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Instance destroyed successfully.
Nov 25 16:42:44 compute-0 nova_compute[254092]: 2025-11-25 16:42:44.593 254096 DEBUG nova.objects.instance [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Lazy-loading 'resources' on Instance uuid e887a377-e792-462d-8bcd-002a93dac12d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:42:44 compute-0 nova_compute[254092]: 2025-11-25 16:42:44.603 254096 DEBUG nova.virt.libvirt.vif [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:42:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1282971748',display_name='tempest-InstanceActionsTestJSON-server-1282971748',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1282971748',id=70,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:42:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ea024a03380a4251a920e126716935de',ramdisk_id='',reservation_id='r-29eo0uj0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',ima
ge_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-16048106',owner_user_name='tempest-InstanceActionsTestJSON-16048106-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:42:42Z,user_data=None,user_id='e76c1b261c0442caa52f39297ccf296d',uuid=e887a377-e792-462d-8bcd-002a93dac12d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "address": "fa:16:3e:5e:35:7f", "network": {"id": "203581b6-f356-4499-9dc8-abafe93b350a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-425599925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea024a03380a4251a920e126716935de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce73fc27-d7", "ovs_interfaceid": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:42:44 compute-0 nova_compute[254092]: 2025-11-25 16:42:44.604 254096 DEBUG nova.network.os_vif_util [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Converting VIF {"id": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "address": "fa:16:3e:5e:35:7f", "network": {"id": "203581b6-f356-4499-9dc8-abafe93b350a", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-425599925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea024a03380a4251a920e126716935de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce73fc27-d7", "ovs_interfaceid": "ce73fc27-d707-4321-8e2e-f77bd4b984ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:42:44 compute-0 nova_compute[254092]: 2025-11-25 16:42:44.605 254096 DEBUG nova.network.os_vif_util [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5e:35:7f,bridge_name='br-int',has_traffic_filtering=True,id=ce73fc27-d707-4321-8e2e-f77bd4b984ad,network=Network(203581b6-f356-4499-9dc8-abafe93b350a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce73fc27-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:42:44 compute-0 nova_compute[254092]: 2025-11-25 16:42:44.605 254096 DEBUG os_vif [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5e:35:7f,bridge_name='br-int',has_traffic_filtering=True,id=ce73fc27-d707-4321-8e2e-f77bd4b984ad,network=Network(203581b6-f356-4499-9dc8-abafe93b350a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce73fc27-d7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:42:44 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-47d6e3f54b3217400ffd752eaa0ae86f3ad9d71d62db6a18eb6b08d8bb71e526-userdata-shm.mount: Deactivated successfully.
Nov 25 16:42:44 compute-0 nova_compute[254092]: 2025-11-25 16:42:44.607 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:44 compute-0 nova_compute[254092]: 2025-11-25 16:42:44.607 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce73fc27-d7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:44 compute-0 nova_compute[254092]: 2025-11-25 16:42:44.609 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-16913faf8a5c974f1946e4ed69e9941f406ff9e2fad5af1678ff5c86655f484c-merged.mount: Deactivated successfully.
Nov 25 16:42:44 compute-0 nova_compute[254092]: 2025-11-25 16:42:44.610 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:44 compute-0 nova_compute[254092]: 2025-11-25 16:42:44.612 254096 INFO os_vif [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5e:35:7f,bridge_name='br-int',has_traffic_filtering=True,id=ce73fc27-d707-4321-8e2e-f77bd4b984ad,network=Network(203581b6-f356-4499-9dc8-abafe93b350a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce73fc27-d7')
Nov 25 16:42:44 compute-0 podman[329167]: 2025-11-25 16:42:44.620535551 +0000 UTC m=+0.091573995 container cleanup 47d6e3f54b3217400ffd752eaa0ae86f3ad9d71d62db6a18eb6b08d8bb71e526 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:42:44 compute-0 systemd[1]: libpod-conmon-47d6e3f54b3217400ffd752eaa0ae86f3ad9d71d62db6a18eb6b08d8bb71e526.scope: Deactivated successfully.
Nov 25 16:42:44 compute-0 sudo[329195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:42:44 compute-0 sudo[329195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:42:44 compute-0 sudo[329195]: pam_unix(sudo:session): session closed for user root
Nov 25 16:42:44 compute-0 podman[329237]: 2025-11-25 16:42:44.6949574 +0000 UTC m=+0.051714514 container remove 47d6e3f54b3217400ffd752eaa0ae86f3ad9d71d62db6a18eb6b08d8bb71e526 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 16:42:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:44.701 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bb0edbfa-9d02-485d-b025-dcc27001dbfc]: (4, ('Tue Nov 25 04:42:44 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a (47d6e3f54b3217400ffd752eaa0ae86f3ad9d71d62db6a18eb6b08d8bb71e526)\n47d6e3f54b3217400ffd752eaa0ae86f3ad9d71d62db6a18eb6b08d8bb71e526\nTue Nov 25 04:42:44 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a (47d6e3f54b3217400ffd752eaa0ae86f3ad9d71d62db6a18eb6b08d8bb71e526)\n47d6e3f54b3217400ffd752eaa0ae86f3ad9d71d62db6a18eb6b08d8bb71e526\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:44.703 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[07a92950-cdce-4e3c-a08c-ff07d45637e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:44.704 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap203581b6-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:44 compute-0 nova_compute[254092]: 2025-11-25 16:42:44.705 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:44 compute-0 kernel: tap203581b6-f0: left promiscuous mode
Nov 25 16:42:44 compute-0 sudo[329260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:42:44 compute-0 sudo[329260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:42:44 compute-0 sudo[329260]: pam_unix(sudo:session): session closed for user root
Nov 25 16:42:44 compute-0 nova_compute[254092]: 2025-11-25 16:42:44.721 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:44.725 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[05d2d850-c9f6-44fe-b321-1d4a5256e41f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:44.740 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[44cc6d00-9296-4766-8b63-62e67552ecca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:44.741 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ddc250b2-aa9b-4113-96b2-1089baeec61b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:44.756 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ec534d5f-9550-42ba-8531-84f9e0ff1ba9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 541939, 'reachable_time': 32130, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329306, 'error': None, 'target': 'ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:44.759 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-203581b6-f356-4499-9dc8-abafe93b350a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:42:44 compute-0 systemd[1]: run-netns-ovnmeta\x2d203581b6\x2df356\x2d4499\x2d9dc8\x2dabafe93b350a.mount: Deactivated successfully.
Nov 25 16:42:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:44.759 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[4fd5dd9a-561c-413e-ac8a-02cc87590353]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:44 compute-0 sudo[329286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:42:44 compute-0 sudo[329286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:42:44 compute-0 sudo[329286]: pam_unix(sudo:session): session closed for user root
Nov 25 16:42:44 compute-0 sudo[329314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 16:42:44 compute-0 sudo[329314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:42:45 compute-0 nova_compute[254092]: 2025-11-25 16:42:45.021 254096 INFO nova.virt.libvirt.driver [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Deleting instance files /var/lib/nova/instances/e887a377-e792-462d-8bcd-002a93dac12d_del
Nov 25 16:42:45 compute-0 nova_compute[254092]: 2025-11-25 16:42:45.022 254096 INFO nova.virt.libvirt.driver [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Deletion of /var/lib/nova/instances/e887a377-e792-462d-8bcd-002a93dac12d_del complete
Nov 25 16:42:45 compute-0 ceph-mon[74985]: pgmap v1720: 321 pgs: 321 active+clean; 88 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Nov 25 16:42:45 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:42:45 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:42:45 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:42:45 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:42:45 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:42:45 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:42:45 compute-0 nova_compute[254092]: 2025-11-25 16:42:45.068 254096 INFO nova.compute.manager [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Took 0.71 seconds to destroy the instance on the hypervisor.
Nov 25 16:42:45 compute-0 nova_compute[254092]: 2025-11-25 16:42:45.069 254096 DEBUG oslo.service.loopingcall [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:42:45 compute-0 nova_compute[254092]: 2025-11-25 16:42:45.070 254096 DEBUG nova.compute.manager [-] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:42:45 compute-0 nova_compute[254092]: 2025-11-25 16:42:45.070 254096 DEBUG nova.network.neutron [-] [instance: e887a377-e792-462d-8bcd-002a93dac12d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:42:45 compute-0 podman[329381]: 2025-11-25 16:42:45.155902275 +0000 UTC m=+0.040450739 container create 483e26f20b5c8d77ef9872edee237b4a5504e161ac51d64ac196cf12c446f58f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_satoshi, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:42:45 compute-0 systemd[1]: Started libpod-conmon-483e26f20b5c8d77ef9872edee237b4a5504e161ac51d64ac196cf12c446f58f.scope.
Nov 25 16:42:45 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:42:45 compute-0 podman[329381]: 2025-11-25 16:42:45.139254134 +0000 UTC m=+0.023802618 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:42:45 compute-0 podman[329381]: 2025-11-25 16:42:45.238632819 +0000 UTC m=+0.123181303 container init 483e26f20b5c8d77ef9872edee237b4a5504e161ac51d64ac196cf12c446f58f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_satoshi, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:42:45 compute-0 podman[329381]: 2025-11-25 16:42:45.248115697 +0000 UTC m=+0.132664161 container start 483e26f20b5c8d77ef9872edee237b4a5504e161ac51d64ac196cf12c446f58f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_satoshi, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 16:42:45 compute-0 podman[329381]: 2025-11-25 16:42:45.251246262 +0000 UTC m=+0.135794726 container attach 483e26f20b5c8d77ef9872edee237b4a5504e161ac51d64ac196cf12c446f58f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_satoshi, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 25 16:42:45 compute-0 suspicious_satoshi[329398]: 167 167
Nov 25 16:42:45 compute-0 systemd[1]: libpod-483e26f20b5c8d77ef9872edee237b4a5504e161ac51d64ac196cf12c446f58f.scope: Deactivated successfully.
Nov 25 16:42:45 compute-0 podman[329381]: 2025-11-25 16:42:45.254747927 +0000 UTC m=+0.139296411 container died 483e26f20b5c8d77ef9872edee237b4a5504e161ac51d64ac196cf12c446f58f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_satoshi, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Nov 25 16:42:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-b639813311d0af849d826cf090bba73658a9b6757e54569c2a61c66b82337996-merged.mount: Deactivated successfully.
Nov 25 16:42:45 compute-0 podman[329381]: 2025-11-25 16:42:45.292513321 +0000 UTC m=+0.177061785 container remove 483e26f20b5c8d77ef9872edee237b4a5504e161ac51d64ac196cf12c446f58f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_satoshi, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 16:42:45 compute-0 systemd[1]: libpod-conmon-483e26f20b5c8d77ef9872edee237b4a5504e161ac51d64ac196cf12c446f58f.scope: Deactivated successfully.
Nov 25 16:42:45 compute-0 podman[329421]: 2025-11-25 16:42:45.44619034 +0000 UTC m=+0.042993587 container create 62172db7c8151e19d94fd45b8a6c0907c00ea35fc2d733bceef75840f1a6882e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 16:42:45 compute-0 systemd[1]: Started libpod-conmon-62172db7c8151e19d94fd45b8a6c0907c00ea35fc2d733bceef75840f1a6882e.scope.
Nov 25 16:42:45 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:42:45 compute-0 podman[329421]: 2025-11-25 16:42:45.425012506 +0000 UTC m=+0.021815783 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:42:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd87b09160cb8c886907e5bef633b7453f864805e65e10e51897b606fc75ac02/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:42:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd87b09160cb8c886907e5bef633b7453f864805e65e10e51897b606fc75ac02/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:42:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd87b09160cb8c886907e5bef633b7453f864805e65e10e51897b606fc75ac02/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:42:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd87b09160cb8c886907e5bef633b7453f864805e65e10e51897b606fc75ac02/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:42:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd87b09160cb8c886907e5bef633b7453f864805e65e10e51897b606fc75ac02/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 16:42:45 compute-0 podman[329421]: 2025-11-25 16:42:45.536267154 +0000 UTC m=+0.133070431 container init 62172db7c8151e19d94fd45b8a6c0907c00ea35fc2d733bceef75840f1a6882e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 16:42:45 compute-0 podman[329421]: 2025-11-25 16:42:45.545374392 +0000 UTC m=+0.142177639 container start 62172db7c8151e19d94fd45b8a6c0907c00ea35fc2d733bceef75840f1a6882e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_hermann, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:42:45 compute-0 podman[329421]: 2025-11-25 16:42:45.548797534 +0000 UTC m=+0.145600801 container attach 62172db7c8151e19d94fd45b8a6c0907c00ea35fc2d733bceef75840f1a6882e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 16:42:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1721: 321 pgs: 321 active+clean; 47 MiB data, 564 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 853 B/s wr, 144 op/s
Nov 25 16:42:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:42:46 compute-0 nova_compute[254092]: 2025-11-25 16:42:46.141 254096 DEBUG nova.network.neutron [-] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:42:46 compute-0 nova_compute[254092]: 2025-11-25 16:42:46.160 254096 INFO nova.compute.manager [-] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Took 1.09 seconds to deallocate network for instance.
Nov 25 16:42:46 compute-0 nova_compute[254092]: 2025-11-25 16:42:46.210 254096 DEBUG oslo_concurrency.lockutils [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:46 compute-0 nova_compute[254092]: 2025-11-25 16:42:46.211 254096 DEBUG oslo_concurrency.lockutils [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:46 compute-0 nova_compute[254092]: 2025-11-25 16:42:46.263 254096 DEBUG oslo_concurrency.processutils [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:42:46 compute-0 nova_compute[254092]: 2025-11-25 16:42:46.344 254096 DEBUG nova.compute.manager [req-a9de1b94-714b-4ad0-94a5-0989047d30cb req-2b21b84b-df4c-4202-85fa-06f82996f233 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Received event network-vif-deleted-ce73fc27-d707-4321-8e2e-f77bd4b984ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:42:46 compute-0 goofy_hermann[329438]: --> passed data devices: 0 physical, 3 LVM
Nov 25 16:42:46 compute-0 goofy_hermann[329438]: --> relative data size: 1.0
Nov 25 16:42:46 compute-0 goofy_hermann[329438]: --> All data devices are unavailable
Nov 25 16:42:46 compute-0 systemd[1]: libpod-62172db7c8151e19d94fd45b8a6c0907c00ea35fc2d733bceef75840f1a6882e.scope: Deactivated successfully.
Nov 25 16:42:46 compute-0 podman[329421]: 2025-11-25 16:42:46.657065522 +0000 UTC m=+1.253868769 container died 62172db7c8151e19d94fd45b8a6c0907c00ea35fc2d733bceef75840f1a6882e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_hermann, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:42:46 compute-0 systemd[1]: libpod-62172db7c8151e19d94fd45b8a6c0907c00ea35fc2d733bceef75840f1a6882e.scope: Consumed 1.047s CPU time.
Nov 25 16:42:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd87b09160cb8c886907e5bef633b7453f864805e65e10e51897b606fc75ac02-merged.mount: Deactivated successfully.
Nov 25 16:42:46 compute-0 podman[329421]: 2025-11-25 16:42:46.718430655 +0000 UTC m=+1.315233902 container remove 62172db7c8151e19d94fd45b8a6c0907c00ea35fc2d733bceef75840f1a6882e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_hermann, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:42:46 compute-0 systemd[1]: libpod-conmon-62172db7c8151e19d94fd45b8a6c0907c00ea35fc2d733bceef75840f1a6882e.scope: Deactivated successfully.
Nov 25 16:42:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:42:46 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/54234647' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:42:46 compute-0 sudo[329314]: pam_unix(sudo:session): session closed for user root
Nov 25 16:42:46 compute-0 nova_compute[254092]: 2025-11-25 16:42:46.764 254096 DEBUG oslo_concurrency.processutils [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:42:46 compute-0 nova_compute[254092]: 2025-11-25 16:42:46.771 254096 DEBUG nova.compute.provider_tree [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:42:46 compute-0 nova_compute[254092]: 2025-11-25 16:42:46.788 254096 DEBUG nova.scheduler.client.report [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:42:46 compute-0 nova_compute[254092]: 2025-11-25 16:42:46.811 254096 DEBUG oslo_concurrency.lockutils [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:46 compute-0 sudo[329503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:42:46 compute-0 sudo[329503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:42:46 compute-0 sudo[329503]: pam_unix(sudo:session): session closed for user root
Nov 25 16:42:46 compute-0 nova_compute[254092]: 2025-11-25 16:42:46.843 254096 INFO nova.scheduler.client.report [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Deleted allocations for instance e887a377-e792-462d-8bcd-002a93dac12d
Nov 25 16:42:46 compute-0 sudo[329528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:42:46 compute-0 sudo[329528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:42:46 compute-0 sudo[329528]: pam_unix(sudo:session): session closed for user root
Nov 25 16:42:46 compute-0 nova_compute[254092]: 2025-11-25 16:42:46.901 254096 DEBUG oslo_concurrency.lockutils [None req-5b7dd4a9-044e-46df-b635-4305d48fb271 e76c1b261c0442caa52f39297ccf296d ea024a03380a4251a920e126716935de - - default default] Lock "e887a377-e792-462d-8bcd-002a93dac12d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.544s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:46 compute-0 sudo[329553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:42:46 compute-0 sudo[329553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:42:46 compute-0 sudo[329553]: pam_unix(sudo:session): session closed for user root
Nov 25 16:42:46 compute-0 sudo[329578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 16:42:46 compute-0 sudo[329578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:42:47 compute-0 ceph-mon[74985]: pgmap v1721: 321 pgs: 321 active+clean; 47 MiB data, 564 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 853 B/s wr, 144 op/s
Nov 25 16:42:47 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/54234647' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:42:47 compute-0 podman[329643]: 2025-11-25 16:42:47.300806835 +0000 UTC m=+0.039333598 container create bd63a08179e515f2c92ce630f3b1cbaf9d4b39c3b34917bed212ed9131d55f44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 16:42:47 compute-0 systemd[1]: Started libpod-conmon-bd63a08179e515f2c92ce630f3b1cbaf9d4b39c3b34917bed212ed9131d55f44.scope.
Nov 25 16:42:47 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:42:47 compute-0 podman[329643]: 2025-11-25 16:42:47.285283164 +0000 UTC m=+0.023809957 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:42:47 compute-0 podman[329643]: 2025-11-25 16:42:47.39533825 +0000 UTC m=+0.133865033 container init bd63a08179e515f2c92ce630f3b1cbaf9d4b39c3b34917bed212ed9131d55f44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:42:47 compute-0 podman[329643]: 2025-11-25 16:42:47.404011145 +0000 UTC m=+0.142537908 container start bd63a08179e515f2c92ce630f3b1cbaf9d4b39c3b34917bed212ed9131d55f44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 16:42:47 compute-0 podman[329643]: 2025-11-25 16:42:47.407938312 +0000 UTC m=+0.146465075 container attach bd63a08179e515f2c92ce630f3b1cbaf9d4b39c3b34917bed212ed9131d55f44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 16:42:47 compute-0 unruffled_mendeleev[329659]: 167 167
Nov 25 16:42:47 compute-0 systemd[1]: libpod-bd63a08179e515f2c92ce630f3b1cbaf9d4b39c3b34917bed212ed9131d55f44.scope: Deactivated successfully.
Nov 25 16:42:47 compute-0 podman[329643]: 2025-11-25 16:42:47.41336951 +0000 UTC m=+0.151896273 container died bd63a08179e515f2c92ce630f3b1cbaf9d4b39c3b34917bed212ed9131d55f44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:42:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-aedef1d132b3938a6898ce5e3b04c6412c0ce47cc4c6d6bcba01882e08cb1c13-merged.mount: Deactivated successfully.
Nov 25 16:42:47 compute-0 podman[329643]: 2025-11-25 16:42:47.453814147 +0000 UTC m=+0.192340910 container remove bd63a08179e515f2c92ce630f3b1cbaf9d4b39c3b34917bed212ed9131d55f44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mendeleev, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 16:42:47 compute-0 systemd[1]: libpod-conmon-bd63a08179e515f2c92ce630f3b1cbaf9d4b39c3b34917bed212ed9131d55f44.scope: Deactivated successfully.
Nov 25 16:42:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1722: 321 pgs: 321 active+clean; 41 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.2 KiB/s wr, 109 op/s
Nov 25 16:42:47 compute-0 podman[329683]: 2025-11-25 16:42:47.618301779 +0000 UTC m=+0.040334296 container create 23c0ad6b739af1cbbecaa86171c89839ecf6e721a9b5d1b280e0286b338213a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:42:47 compute-0 systemd[1]: Started libpod-conmon-23c0ad6b739af1cbbecaa86171c89839ecf6e721a9b5d1b280e0286b338213a7.scope.
Nov 25 16:42:47 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:42:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e9da96db1a91e20a8ffdd23392bf1ff155f71f656c997debc7118857f6baa27/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:42:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e9da96db1a91e20a8ffdd23392bf1ff155f71f656c997debc7118857f6baa27/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:42:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e9da96db1a91e20a8ffdd23392bf1ff155f71f656c997debc7118857f6baa27/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:42:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e9da96db1a91e20a8ffdd23392bf1ff155f71f656c997debc7118857f6baa27/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:42:47 compute-0 podman[329683]: 2025-11-25 16:42:47.602237903 +0000 UTC m=+0.024270350 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:42:47 compute-0 podman[329683]: 2025-11-25 16:42:47.702462513 +0000 UTC m=+0.124494950 container init 23c0ad6b739af1cbbecaa86171c89839ecf6e721a9b5d1b280e0286b338213a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_driscoll, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:42:47 compute-0 podman[329683]: 2025-11-25 16:42:47.709727899 +0000 UTC m=+0.131760306 container start 23c0ad6b739af1cbbecaa86171c89839ecf6e721a9b5d1b280e0286b338213a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_driscoll, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 16:42:47 compute-0 podman[329683]: 2025-11-25 16:42:47.712309309 +0000 UTC m=+0.134341726 container attach 23c0ad6b739af1cbbecaa86171c89839ecf6e721a9b5d1b280e0286b338213a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_driscoll, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 16:42:47 compute-0 nova_compute[254092]: 2025-11-25 16:42:47.794 254096 DEBUG oslo_concurrency.lockutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Acquiring lock "ed4031e8-a918-4816-b9b8-b1134a086f8b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:47 compute-0 nova_compute[254092]: 2025-11-25 16:42:47.795 254096 DEBUG oslo_concurrency.lockutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Lock "ed4031e8-a918-4816-b9b8-b1134a086f8b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:47 compute-0 nova_compute[254092]: 2025-11-25 16:42:47.811 254096 DEBUG nova.compute.manager [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:42:47 compute-0 nova_compute[254092]: 2025-11-25 16:42:47.880 254096 DEBUG oslo_concurrency.lockutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:47 compute-0 nova_compute[254092]: 2025-11-25 16:42:47.880 254096 DEBUG oslo_concurrency.lockutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:47 compute-0 nova_compute[254092]: 2025-11-25 16:42:47.888 254096 DEBUG nova.virt.hardware [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:42:47 compute-0 nova_compute[254092]: 2025-11-25 16:42:47.889 254096 INFO nova.compute.claims [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:42:48 compute-0 nova_compute[254092]: 2025-11-25 16:42:48.013 254096 DEBUG oslo_concurrency.processutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:42:48 compute-0 nova_compute[254092]: 2025-11-25 16:42:48.218 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:42:48 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/351790797' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:42:48 compute-0 nova_compute[254092]: 2025-11-25 16:42:48.479 254096 DEBUG oslo_concurrency.processutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:42:48 compute-0 nova_compute[254092]: 2025-11-25 16:42:48.486 254096 DEBUG nova.compute.provider_tree [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:42:48 compute-0 musing_driscoll[329700]: {
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:     "0": [
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:         {
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:             "devices": [
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:                 "/dev/loop3"
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:             ],
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:             "lv_name": "ceph_lv0",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:             "lv_size": "21470642176",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:             "name": "ceph_lv0",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:             "tags": {
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:                 "ceph.cluster_name": "ceph",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:                 "ceph.crush_device_class": "",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:                 "ceph.encrypted": "0",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:                 "ceph.osd_id": "0",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:                 "ceph.type": "block",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:                 "ceph.vdo": "0"
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:             },
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:             "type": "block",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:             "vg_name": "ceph_vg0"
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:         }
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:     ],
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:     "1": [
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:         {
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:             "devices": [
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:                 "/dev/loop4"
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:             ],
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:             "lv_name": "ceph_lv1",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:             "lv_size": "21470642176",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:             "name": "ceph_lv1",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:             "tags": {
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:                 "ceph.cluster_name": "ceph",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:                 "ceph.crush_device_class": "",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:                 "ceph.encrypted": "0",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:                 "ceph.osd_id": "1",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:                 "ceph.type": "block",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:                 "ceph.vdo": "0"
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:             },
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:             "type": "block",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:             "vg_name": "ceph_vg1"
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:         }
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:     ],
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:     "2": [
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:         {
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:             "devices": [
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:                 "/dev/loop5"
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:             ],
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:             "lv_name": "ceph_lv2",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:             "lv_size": "21470642176",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:             "name": "ceph_lv2",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:             "tags": {
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:                 "ceph.cluster_name": "ceph",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:                 "ceph.crush_device_class": "",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:                 "ceph.encrypted": "0",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:                 "ceph.osd_id": "2",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:                 "ceph.type": "block",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:                 "ceph.vdo": "0"
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:             },
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:             "type": "block",
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:             "vg_name": "ceph_vg2"
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:         }
Nov 25 16:42:48 compute-0 musing_driscoll[329700]:     ]
Nov 25 16:42:48 compute-0 musing_driscoll[329700]: }
Nov 25 16:42:48 compute-0 nova_compute[254092]: 2025-11-25 16:42:48.507 254096 DEBUG nova.scheduler.client.report [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:42:48 compute-0 systemd[1]: libpod-23c0ad6b739af1cbbecaa86171c89839ecf6e721a9b5d1b280e0286b338213a7.scope: Deactivated successfully.
Nov 25 16:42:48 compute-0 nova_compute[254092]: 2025-11-25 16:42:48.535 254096 DEBUG oslo_concurrency.lockutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:48 compute-0 podman[329683]: 2025-11-25 16:42:48.53653512 +0000 UTC m=+0.958567537 container died 23c0ad6b739af1cbbecaa86171c89839ecf6e721a9b5d1b280e0286b338213a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_driscoll, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:42:48 compute-0 nova_compute[254092]: 2025-11-25 16:42:48.537 254096 DEBUG nova.compute.manager [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:42:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e9da96db1a91e20a8ffdd23392bf1ff155f71f656c997debc7118857f6baa27-merged.mount: Deactivated successfully.
Nov 25 16:42:48 compute-0 nova_compute[254092]: 2025-11-25 16:42:48.578 254096 DEBUG nova.compute.manager [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:42:48 compute-0 nova_compute[254092]: 2025-11-25 16:42:48.578 254096 DEBUG nova.network.neutron [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:42:48 compute-0 podman[329683]: 2025-11-25 16:42:48.595816939 +0000 UTC m=+1.017849356 container remove 23c0ad6b739af1cbbecaa86171c89839ecf6e721a9b5d1b280e0286b338213a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_driscoll, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 16:42:48 compute-0 nova_compute[254092]: 2025-11-25 16:42:48.597 254096 INFO nova.virt.libvirt.driver [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:42:48 compute-0 systemd[1]: libpod-conmon-23c0ad6b739af1cbbecaa86171c89839ecf6e721a9b5d1b280e0286b338213a7.scope: Deactivated successfully.
Nov 25 16:42:48 compute-0 nova_compute[254092]: 2025-11-25 16:42:48.612 254096 DEBUG nova.compute.manager [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:42:48 compute-0 sudo[329578]: pam_unix(sudo:session): session closed for user root
Nov 25 16:42:48 compute-0 nova_compute[254092]: 2025-11-25 16:42:48.709 254096 DEBUG nova.compute.manager [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:42:48 compute-0 nova_compute[254092]: 2025-11-25 16:42:48.711 254096 DEBUG nova.virt.libvirt.driver [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:42:48 compute-0 nova_compute[254092]: 2025-11-25 16:42:48.711 254096 INFO nova.virt.libvirt.driver [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Creating image(s)
Nov 25 16:42:48 compute-0 sudo[329743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:42:48 compute-0 sudo[329743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:42:48 compute-0 sudo[329743]: pam_unix(sudo:session): session closed for user root
Nov 25 16:42:48 compute-0 nova_compute[254092]: 2025-11-25 16:42:48.737 254096 DEBUG nova.storage.rbd_utils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] rbd image ed4031e8-a918-4816-b9b8-b1134a086f8b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:42:48 compute-0 nova_compute[254092]: 2025-11-25 16:42:48.769 254096 DEBUG nova.storage.rbd_utils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] rbd image ed4031e8-a918-4816-b9b8-b1134a086f8b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:42:48 compute-0 sudo[329783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:42:48 compute-0 sudo[329783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:42:48 compute-0 sudo[329783]: pam_unix(sudo:session): session closed for user root
Nov 25 16:42:48 compute-0 nova_compute[254092]: 2025-11-25 16:42:48.796 254096 DEBUG nova.storage.rbd_utils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] rbd image ed4031e8-a918-4816-b9b8-b1134a086f8b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:42:48 compute-0 nova_compute[254092]: 2025-11-25 16:42:48.802 254096 DEBUG oslo_concurrency.processutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:42:48 compute-0 sudo[329845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:42:48 compute-0 sudo[329845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:42:48 compute-0 sudo[329845]: pam_unix(sudo:session): session closed for user root
Nov 25 16:42:48 compute-0 nova_compute[254092]: 2025-11-25 16:42:48.871 254096 DEBUG oslo_concurrency.processutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:42:48 compute-0 nova_compute[254092]: 2025-11-25 16:42:48.872 254096 DEBUG oslo_concurrency.lockutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:48 compute-0 nova_compute[254092]: 2025-11-25 16:42:48.872 254096 DEBUG oslo_concurrency.lockutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:48 compute-0 nova_compute[254092]: 2025-11-25 16:42:48.872 254096 DEBUG oslo_concurrency.lockutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:48 compute-0 nova_compute[254092]: 2025-11-25 16:42:48.892 254096 DEBUG nova.storage.rbd_utils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] rbd image ed4031e8-a918-4816-b9b8-b1134a086f8b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:42:48 compute-0 nova_compute[254092]: 2025-11-25 16:42:48.895 254096 DEBUG oslo_concurrency.processutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 ed4031e8-a918-4816-b9b8-b1134a086f8b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:42:48 compute-0 sudo[329873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 16:42:48 compute-0 sudo[329873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:42:48 compute-0 nova_compute[254092]: 2025-11-25 16:42:48.927 254096 DEBUG nova.policy [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e3c9e536e4984598a1b18e79b453cbde', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '20d786999bc74073bae1fde6aede7fcd', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:42:49 compute-0 ceph-mon[74985]: pgmap v1722: 321 pgs: 321 active+clean; 41 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.2 KiB/s wr, 109 op/s
Nov 25 16:42:49 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/351790797' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:42:49 compute-0 nova_compute[254092]: 2025-11-25 16:42:49.197 254096 DEBUG oslo_concurrency.processutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 ed4031e8-a918-4816-b9b8-b1134a086f8b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.302s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:42:49 compute-0 podman[329980]: 2025-11-25 16:42:49.271797438 +0000 UTC m=+0.049730261 container create 659acc0fca6e5ee658678792327808921270c7ffdd895263a24b2f86403a4907 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:42:49 compute-0 nova_compute[254092]: 2025-11-25 16:42:49.276 254096 DEBUG nova.storage.rbd_utils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] resizing rbd image ed4031e8-a918-4816-b9b8-b1134a086f8b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:42:49 compute-0 systemd[1]: Started libpod-conmon-659acc0fca6e5ee658678792327808921270c7ffdd895263a24b2f86403a4907.scope.
Nov 25 16:42:49 compute-0 podman[329980]: 2025-11-25 16:42:49.247336314 +0000 UTC m=+0.025269157 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:42:49 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:42:49 compute-0 podman[329980]: 2025-11-25 16:42:49.370937277 +0000 UTC m=+0.148870120 container init 659acc0fca6e5ee658678792327808921270c7ffdd895263a24b2f86403a4907 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 16:42:49 compute-0 podman[329980]: 2025-11-25 16:42:49.385311167 +0000 UTC m=+0.163244010 container start 659acc0fca6e5ee658678792327808921270c7ffdd895263a24b2f86403a4907 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_heyrovsky, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 16:42:49 compute-0 podman[329980]: 2025-11-25 16:42:49.390474707 +0000 UTC m=+0.168407560 container attach 659acc0fca6e5ee658678792327808921270c7ffdd895263a24b2f86403a4907 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 16:42:49 compute-0 wizardly_heyrovsky[330043]: 167 167
Nov 25 16:42:49 compute-0 systemd[1]: libpod-659acc0fca6e5ee658678792327808921270c7ffdd895263a24b2f86403a4907.scope: Deactivated successfully.
Nov 25 16:42:49 compute-0 podman[329980]: 2025-11-25 16:42:49.395408061 +0000 UTC m=+0.173340904 container died 659acc0fca6e5ee658678792327808921270c7ffdd895263a24b2f86403a4907 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_heyrovsky, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Nov 25 16:42:49 compute-0 nova_compute[254092]: 2025-11-25 16:42:49.415 254096 DEBUG nova.objects.instance [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Lazy-loading 'migration_context' on Instance uuid ed4031e8-a918-4816-b9b8-b1134a086f8b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:42:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-105daf22d5684f8b0ef198339bd145d8bbff71ad58844a3ace581a7d88e39c49-merged.mount: Deactivated successfully.
Nov 25 16:42:49 compute-0 nova_compute[254092]: 2025-11-25 16:42:49.426 254096 DEBUG nova.virt.libvirt.driver [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:42:49 compute-0 nova_compute[254092]: 2025-11-25 16:42:49.426 254096 DEBUG nova.virt.libvirt.driver [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Ensure instance console log exists: /var/lib/nova/instances/ed4031e8-a918-4816-b9b8-b1134a086f8b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:42:49 compute-0 nova_compute[254092]: 2025-11-25 16:42:49.427 254096 DEBUG oslo_concurrency.lockutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:49 compute-0 nova_compute[254092]: 2025-11-25 16:42:49.427 254096 DEBUG oslo_concurrency.lockutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:49 compute-0 nova_compute[254092]: 2025-11-25 16:42:49.427 254096 DEBUG oslo_concurrency.lockutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:49 compute-0 podman[329980]: 2025-11-25 16:42:49.438162171 +0000 UTC m=+0.216094994 container remove 659acc0fca6e5ee658678792327808921270c7ffdd895263a24b2f86403a4907 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:42:49 compute-0 systemd[1]: libpod-conmon-659acc0fca6e5ee658678792327808921270c7ffdd895263a24b2f86403a4907.scope: Deactivated successfully.
Nov 25 16:42:49 compute-0 podman[330086]: 2025-11-25 16:42:49.581999944 +0000 UTC m=+0.037988223 container create 71a8bc308bea024f257874b67c1520cac8b6a88f5cddadb1f2d1a14bb9dcdc14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:42:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1723: 321 pgs: 321 active+clean; 41 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 97 op/s
Nov 25 16:42:49 compute-0 nova_compute[254092]: 2025-11-25 16:42:49.649 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:49 compute-0 podman[330086]: 2025-11-25 16:42:49.564221371 +0000 UTC m=+0.020209670 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:42:49 compute-0 systemd[1]: Started libpod-conmon-71a8bc308bea024f257874b67c1520cac8b6a88f5cddadb1f2d1a14bb9dcdc14.scope.
Nov 25 16:42:49 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:42:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bfdacafeb1a0852f422c82b42f840a4d043ebb645fcce44e3e9fdba2fbd3046/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:42:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bfdacafeb1a0852f422c82b42f840a4d043ebb645fcce44e3e9fdba2fbd3046/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:42:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bfdacafeb1a0852f422c82b42f840a4d043ebb645fcce44e3e9fdba2fbd3046/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:42:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bfdacafeb1a0852f422c82b42f840a4d043ebb645fcce44e3e9fdba2fbd3046/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:42:49 compute-0 podman[330086]: 2025-11-25 16:42:49.711033074 +0000 UTC m=+0.167021403 container init 71a8bc308bea024f257874b67c1520cac8b6a88f5cddadb1f2d1a14bb9dcdc14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_aryabhata, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:42:49 compute-0 podman[330086]: 2025-11-25 16:42:49.720438709 +0000 UTC m=+0.176426998 container start 71a8bc308bea024f257874b67c1520cac8b6a88f5cddadb1f2d1a14bb9dcdc14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_aryabhata, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 16:42:49 compute-0 podman[330086]: 2025-11-25 16:42:49.724782977 +0000 UTC m=+0.180771306 container attach 71a8bc308bea024f257874b67c1520cac8b6a88f5cddadb1f2d1a14bb9dcdc14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_aryabhata, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 16:42:50 compute-0 jolly_aryabhata[330103]: {
Nov 25 16:42:50 compute-0 jolly_aryabhata[330103]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 16:42:50 compute-0 jolly_aryabhata[330103]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:42:50 compute-0 jolly_aryabhata[330103]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 16:42:50 compute-0 jolly_aryabhata[330103]:         "osd_id": 1,
Nov 25 16:42:50 compute-0 jolly_aryabhata[330103]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:42:50 compute-0 jolly_aryabhata[330103]:         "type": "bluestore"
Nov 25 16:42:50 compute-0 jolly_aryabhata[330103]:     },
Nov 25 16:42:50 compute-0 jolly_aryabhata[330103]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 16:42:50 compute-0 jolly_aryabhata[330103]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:42:50 compute-0 jolly_aryabhata[330103]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 16:42:50 compute-0 jolly_aryabhata[330103]:         "osd_id": 2,
Nov 25 16:42:50 compute-0 jolly_aryabhata[330103]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:42:50 compute-0 jolly_aryabhata[330103]:         "type": "bluestore"
Nov 25 16:42:50 compute-0 jolly_aryabhata[330103]:     },
Nov 25 16:42:50 compute-0 jolly_aryabhata[330103]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 16:42:50 compute-0 jolly_aryabhata[330103]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:42:50 compute-0 jolly_aryabhata[330103]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 16:42:50 compute-0 jolly_aryabhata[330103]:         "osd_id": 0,
Nov 25 16:42:50 compute-0 jolly_aryabhata[330103]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:42:50 compute-0 jolly_aryabhata[330103]:         "type": "bluestore"
Nov 25 16:42:50 compute-0 jolly_aryabhata[330103]:     }
Nov 25 16:42:50 compute-0 jolly_aryabhata[330103]: }
Nov 25 16:42:50 compute-0 systemd[1]: libpod-71a8bc308bea024f257874b67c1520cac8b6a88f5cddadb1f2d1a14bb9dcdc14.scope: Deactivated successfully.
Nov 25 16:42:50 compute-0 podman[330086]: 2025-11-25 16:42:50.709414119 +0000 UTC m=+1.165402398 container died 71a8bc308bea024f257874b67c1520cac8b6a88f5cddadb1f2d1a14bb9dcdc14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:42:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-7bfdacafeb1a0852f422c82b42f840a4d043ebb645fcce44e3e9fdba2fbd3046-merged.mount: Deactivated successfully.
Nov 25 16:42:50 compute-0 nova_compute[254092]: 2025-11-25 16:42:50.768 254096 DEBUG nova.network.neutron [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Successfully created port: b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:42:50 compute-0 podman[330086]: 2025-11-25 16:42:50.768204994 +0000 UTC m=+1.224193313 container remove 71a8bc308bea024f257874b67c1520cac8b6a88f5cddadb1f2d1a14bb9dcdc14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:42:50 compute-0 systemd[1]: libpod-conmon-71a8bc308bea024f257874b67c1520cac8b6a88f5cddadb1f2d1a14bb9dcdc14.scope: Deactivated successfully.
Nov 25 16:42:50 compute-0 sudo[329873]: pam_unix(sudo:session): session closed for user root
Nov 25 16:42:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:42:50 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:42:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:42:50 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:42:50 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 994a639a-9258-4fcd-9713-3b5971ccf6f3 does not exist
Nov 25 16:42:50 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 500d9054-3604-428a-a201-0201c38fdb92 does not exist
Nov 25 16:42:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:42:50 compute-0 sudo[330151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:42:50 compute-0 sudo[330151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:42:50 compute-0 sudo[330151]: pam_unix(sudo:session): session closed for user root
Nov 25 16:42:50 compute-0 sudo[330176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 16:42:50 compute-0 sudo[330176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:42:50 compute-0 sudo[330176]: pam_unix(sudo:session): session closed for user root
Nov 25 16:42:51 compute-0 ceph-mon[74985]: pgmap v1723: 321 pgs: 321 active+clean; 41 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 97 op/s
Nov 25 16:42:51 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:42:51 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:42:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:42:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:42:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:42:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:42:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 16:42:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:42:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:42:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:42:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:42:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:42:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 16:42:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:42:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 16:42:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:42:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:42:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:42:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 16:42:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:42:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 16:42:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:42:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:42:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:42:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 16:42:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1724: 321 pgs: 321 active+clean; 88 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 124 op/s
Nov 25 16:42:52 compute-0 nova_compute[254092]: 2025-11-25 16:42:52.722 254096 DEBUG nova.network.neutron [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Successfully updated port: b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:42:52 compute-0 nova_compute[254092]: 2025-11-25 16:42:52.735 254096 DEBUG oslo_concurrency.lockutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Acquiring lock "refresh_cache-ed4031e8-a918-4816-b9b8-b1134a086f8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:42:52 compute-0 nova_compute[254092]: 2025-11-25 16:42:52.736 254096 DEBUG oslo_concurrency.lockutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Acquired lock "refresh_cache-ed4031e8-a918-4816-b9b8-b1134a086f8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:42:52 compute-0 nova_compute[254092]: 2025-11-25 16:42:52.736 254096 DEBUG nova.network.neutron [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:42:52 compute-0 nova_compute[254092]: 2025-11-25 16:42:52.835 254096 DEBUG nova.compute.manager [req-757d3df4-a236-4a89-a4f0-b95b904165fe req-85a60aec-0114-438d-a597-ef43276cd485 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Received event network-changed-b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:42:52 compute-0 nova_compute[254092]: 2025-11-25 16:42:52.835 254096 DEBUG nova.compute.manager [req-757d3df4-a236-4a89-a4f0-b95b904165fe req-85a60aec-0114-438d-a597-ef43276cd485 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Refreshing instance network info cache due to event network-changed-b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:42:52 compute-0 nova_compute[254092]: 2025-11-25 16:42:52.836 254096 DEBUG oslo_concurrency.lockutils [req-757d3df4-a236-4a89-a4f0-b95b904165fe req-85a60aec-0114-438d-a597-ef43276cd485 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-ed4031e8-a918-4816-b9b8-b1134a086f8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:42:52 compute-0 nova_compute[254092]: 2025-11-25 16:42:52.991 254096 DEBUG nova.network.neutron [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:42:53 compute-0 ceph-mon[74985]: pgmap v1724: 321 pgs: 321 active+clean; 88 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 124 op/s
Nov 25 16:42:53 compute-0 nova_compute[254092]: 2025-11-25 16:42:53.220 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:53 compute-0 nova_compute[254092]: 2025-11-25 16:42:53.481 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1725: 321 pgs: 321 active+clean; 88 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 124 op/s
Nov 25 16:42:53 compute-0 nova_compute[254092]: 2025-11-25 16:42:53.881 254096 DEBUG nova.network.neutron [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Updating instance_info_cache with network_info: [{"id": "b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4", "address": "fa:16:3e:2f:d4:24", "network": {"id": "aa3f25be-8960-4c87-987c-97d65f879d23", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-147105921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20d786999bc74073bae1fde6aede7fcd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0cf5f9a-6e", "ovs_interfaceid": "b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:42:53 compute-0 nova_compute[254092]: 2025-11-25 16:42:53.911 254096 DEBUG oslo_concurrency.lockutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Releasing lock "refresh_cache-ed4031e8-a918-4816-b9b8-b1134a086f8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:42:53 compute-0 nova_compute[254092]: 2025-11-25 16:42:53.912 254096 DEBUG nova.compute.manager [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Instance network_info: |[{"id": "b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4", "address": "fa:16:3e:2f:d4:24", "network": {"id": "aa3f25be-8960-4c87-987c-97d65f879d23", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-147105921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20d786999bc74073bae1fde6aede7fcd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0cf5f9a-6e", "ovs_interfaceid": "b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:42:53 compute-0 nova_compute[254092]: 2025-11-25 16:42:53.912 254096 DEBUG oslo_concurrency.lockutils [req-757d3df4-a236-4a89-a4f0-b95b904165fe req-85a60aec-0114-438d-a597-ef43276cd485 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-ed4031e8-a918-4816-b9b8-b1134a086f8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:42:53 compute-0 nova_compute[254092]: 2025-11-25 16:42:53.912 254096 DEBUG nova.network.neutron [req-757d3df4-a236-4a89-a4f0-b95b904165fe req-85a60aec-0114-438d-a597-ef43276cd485 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Refreshing network info cache for port b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:42:53 compute-0 nova_compute[254092]: 2025-11-25 16:42:53.915 254096 DEBUG nova.virt.libvirt.driver [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Start _get_guest_xml network_info=[{"id": "b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4", "address": "fa:16:3e:2f:d4:24", "network": {"id": "aa3f25be-8960-4c87-987c-97d65f879d23", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-147105921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20d786999bc74073bae1fde6aede7fcd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0cf5f9a-6e", "ovs_interfaceid": "b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:42:53 compute-0 nova_compute[254092]: 2025-11-25 16:42:53.919 254096 WARNING nova.virt.libvirt.driver [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:42:53 compute-0 nova_compute[254092]: 2025-11-25 16:42:53.928 254096 DEBUG nova.virt.libvirt.host [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:42:53 compute-0 nova_compute[254092]: 2025-11-25 16:42:53.929 254096 DEBUG nova.virt.libvirt.host [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:42:53 compute-0 nova_compute[254092]: 2025-11-25 16:42:53.932 254096 DEBUG nova.virt.libvirt.host [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:42:53 compute-0 nova_compute[254092]: 2025-11-25 16:42:53.932 254096 DEBUG nova.virt.libvirt.host [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:42:53 compute-0 nova_compute[254092]: 2025-11-25 16:42:53.933 254096 DEBUG nova.virt.libvirt.driver [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:42:53 compute-0 nova_compute[254092]: 2025-11-25 16:42:53.933 254096 DEBUG nova.virt.hardware [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:42:53 compute-0 nova_compute[254092]: 2025-11-25 16:42:53.933 254096 DEBUG nova.virt.hardware [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:42:53 compute-0 nova_compute[254092]: 2025-11-25 16:42:53.933 254096 DEBUG nova.virt.hardware [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:42:53 compute-0 nova_compute[254092]: 2025-11-25 16:42:53.934 254096 DEBUG nova.virt.hardware [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:42:53 compute-0 nova_compute[254092]: 2025-11-25 16:42:53.934 254096 DEBUG nova.virt.hardware [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:42:53 compute-0 nova_compute[254092]: 2025-11-25 16:42:53.934 254096 DEBUG nova.virt.hardware [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:42:53 compute-0 nova_compute[254092]: 2025-11-25 16:42:53.934 254096 DEBUG nova.virt.hardware [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:42:53 compute-0 nova_compute[254092]: 2025-11-25 16:42:53.935 254096 DEBUG nova.virt.hardware [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:42:53 compute-0 nova_compute[254092]: 2025-11-25 16:42:53.935 254096 DEBUG nova.virt.hardware [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:42:53 compute-0 nova_compute[254092]: 2025-11-25 16:42:53.935 254096 DEBUG nova.virt.hardware [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:42:53 compute-0 nova_compute[254092]: 2025-11-25 16:42:53.935 254096 DEBUG nova.virt.hardware [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:42:53 compute-0 nova_compute[254092]: 2025-11-25 16:42:53.938 254096 DEBUG oslo_concurrency.processutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:42:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:42:54 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/31402073' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:42:54 compute-0 nova_compute[254092]: 2025-11-25 16:42:54.371 254096 DEBUG oslo_concurrency.processutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:42:54 compute-0 nova_compute[254092]: 2025-11-25 16:42:54.394 254096 DEBUG nova.storage.rbd_utils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] rbd image ed4031e8-a918-4816-b9b8-b1134a086f8b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:42:54 compute-0 nova_compute[254092]: 2025-11-25 16:42:54.397 254096 DEBUG oslo_concurrency.processutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:42:54 compute-0 nova_compute[254092]: 2025-11-25 16:42:54.652 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:42:54 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1049341593' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:42:54 compute-0 nova_compute[254092]: 2025-11-25 16:42:54.825 254096 DEBUG oslo_concurrency.processutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:42:54 compute-0 nova_compute[254092]: 2025-11-25 16:42:54.826 254096 DEBUG nova.virt.libvirt.vif [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:42:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-1926067749',display_name='tempest-ServerAddressesNegativeTestJSON-server-1926067749',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-1926067749',id=71,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='20d786999bc74073bae1fde6aede7fcd',ramdisk_id='',reservation_id='r-wet8rv2d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-772969752',owner_us
er_name='tempest-ServerAddressesNegativeTestJSON-772969752-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:42:48Z,user_data=None,user_id='e3c9e536e4984598a1b18e79b453cbde',uuid=ed4031e8-a918-4816-b9b8-b1134a086f8b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4", "address": "fa:16:3e:2f:d4:24", "network": {"id": "aa3f25be-8960-4c87-987c-97d65f879d23", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-147105921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20d786999bc74073bae1fde6aede7fcd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0cf5f9a-6e", "ovs_interfaceid": "b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:42:54 compute-0 nova_compute[254092]: 2025-11-25 16:42:54.827 254096 DEBUG nova.network.os_vif_util [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Converting VIF {"id": "b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4", "address": "fa:16:3e:2f:d4:24", "network": {"id": "aa3f25be-8960-4c87-987c-97d65f879d23", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-147105921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20d786999bc74073bae1fde6aede7fcd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0cf5f9a-6e", "ovs_interfaceid": "b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:42:54 compute-0 nova_compute[254092]: 2025-11-25 16:42:54.827 254096 DEBUG nova.network.os_vif_util [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2f:d4:24,bridge_name='br-int',has_traffic_filtering=True,id=b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4,network=Network(aa3f25be-8960-4c87-987c-97d65f879d23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0cf5f9a-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:42:54 compute-0 nova_compute[254092]: 2025-11-25 16:42:54.829 254096 DEBUG nova.objects.instance [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Lazy-loading 'pci_devices' on Instance uuid ed4031e8-a918-4816-b9b8-b1134a086f8b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:42:54 compute-0 nova_compute[254092]: 2025-11-25 16:42:54.845 254096 DEBUG nova.virt.libvirt.driver [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:42:54 compute-0 nova_compute[254092]:   <uuid>ed4031e8-a918-4816-b9b8-b1134a086f8b</uuid>
Nov 25 16:42:54 compute-0 nova_compute[254092]:   <name>instance-00000047</name>
Nov 25 16:42:54 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:42:54 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:42:54 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:42:54 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:       <nova:name>tempest-ServerAddressesNegativeTestJSON-server-1926067749</nova:name>
Nov 25 16:42:54 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:42:53</nova:creationTime>
Nov 25 16:42:54 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:42:54 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:42:54 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:42:54 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:42:54 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:42:54 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:42:54 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:42:54 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:42:54 compute-0 nova_compute[254092]:         <nova:user uuid="e3c9e536e4984598a1b18e79b453cbde">tempest-ServerAddressesNegativeTestJSON-772969752-project-member</nova:user>
Nov 25 16:42:54 compute-0 nova_compute[254092]:         <nova:project uuid="20d786999bc74073bae1fde6aede7fcd">tempest-ServerAddressesNegativeTestJSON-772969752</nova:project>
Nov 25 16:42:54 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:42:54 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:42:54 compute-0 nova_compute[254092]:         <nova:port uuid="b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4">
Nov 25 16:42:54 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:42:54 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:42:54 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:42:54 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:42:54 compute-0 nova_compute[254092]:     <system>
Nov 25 16:42:54 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:42:54 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:42:54 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:42:54 compute-0 nova_compute[254092]:       <entry name="serial">ed4031e8-a918-4816-b9b8-b1134a086f8b</entry>
Nov 25 16:42:54 compute-0 nova_compute[254092]:       <entry name="uuid">ed4031e8-a918-4816-b9b8-b1134a086f8b</entry>
Nov 25 16:42:54 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     </system>
Nov 25 16:42:54 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:42:54 compute-0 nova_compute[254092]:   <os>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:   </os>
Nov 25 16:42:54 compute-0 nova_compute[254092]:   <features>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:   </features>
Nov 25 16:42:54 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:42:54 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:42:54 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:42:54 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:42:54 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:42:54 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/ed4031e8-a918-4816-b9b8-b1134a086f8b_disk">
Nov 25 16:42:54 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:       </source>
Nov 25 16:42:54 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:42:54 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:42:54 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:42:54 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/ed4031e8-a918-4816-b9b8-b1134a086f8b_disk.config">
Nov 25 16:42:54 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:       </source>
Nov 25 16:42:54 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:42:54 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:42:54 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:42:54 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:2f:d4:24"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:       <target dev="tapb0cf5f9a-6e"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:42:54 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/ed4031e8-a918-4816-b9b8-b1134a086f8b/console.log" append="off"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     <video>
Nov 25 16:42:54 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     </video>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:42:54 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:42:54 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:42:54 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:42:54 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:42:54 compute-0 nova_compute[254092]: </domain>
Nov 25 16:42:54 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:42:54 compute-0 nova_compute[254092]: 2025-11-25 16:42:54.848 254096 DEBUG nova.compute.manager [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Preparing to wait for external event network-vif-plugged-b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:42:54 compute-0 nova_compute[254092]: 2025-11-25 16:42:54.848 254096 DEBUG oslo_concurrency.lockutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Acquiring lock "ed4031e8-a918-4816-b9b8-b1134a086f8b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:54 compute-0 nova_compute[254092]: 2025-11-25 16:42:54.849 254096 DEBUG oslo_concurrency.lockutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Lock "ed4031e8-a918-4816-b9b8-b1134a086f8b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:54 compute-0 nova_compute[254092]: 2025-11-25 16:42:54.849 254096 DEBUG oslo_concurrency.lockutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Lock "ed4031e8-a918-4816-b9b8-b1134a086f8b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:54 compute-0 nova_compute[254092]: 2025-11-25 16:42:54.850 254096 DEBUG nova.virt.libvirt.vif [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:42:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-1926067749',display_name='tempest-ServerAddressesNegativeTestJSON-server-1926067749',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-1926067749',id=71,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='20d786999bc74073bae1fde6aede7fcd',ramdisk_id='',reservation_id='r-wet8rv2d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-772969752
',owner_user_name='tempest-ServerAddressesNegativeTestJSON-772969752-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:42:48Z,user_data=None,user_id='e3c9e536e4984598a1b18e79b453cbde',uuid=ed4031e8-a918-4816-b9b8-b1134a086f8b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4", "address": "fa:16:3e:2f:d4:24", "network": {"id": "aa3f25be-8960-4c87-987c-97d65f879d23", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-147105921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20d786999bc74073bae1fde6aede7fcd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0cf5f9a-6e", "ovs_interfaceid": "b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:42:54 compute-0 nova_compute[254092]: 2025-11-25 16:42:54.851 254096 DEBUG nova.network.os_vif_util [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Converting VIF {"id": "b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4", "address": "fa:16:3e:2f:d4:24", "network": {"id": "aa3f25be-8960-4c87-987c-97d65f879d23", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-147105921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20d786999bc74073bae1fde6aede7fcd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0cf5f9a-6e", "ovs_interfaceid": "b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:42:54 compute-0 nova_compute[254092]: 2025-11-25 16:42:54.852 254096 DEBUG nova.network.os_vif_util [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2f:d4:24,bridge_name='br-int',has_traffic_filtering=True,id=b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4,network=Network(aa3f25be-8960-4c87-987c-97d65f879d23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0cf5f9a-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:42:54 compute-0 nova_compute[254092]: 2025-11-25 16:42:54.852 254096 DEBUG os_vif [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2f:d4:24,bridge_name='br-int',has_traffic_filtering=True,id=b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4,network=Network(aa3f25be-8960-4c87-987c-97d65f879d23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0cf5f9a-6e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:42:54 compute-0 nova_compute[254092]: 2025-11-25 16:42:54.853 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:54 compute-0 nova_compute[254092]: 2025-11-25 16:42:54.854 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:54 compute-0 nova_compute[254092]: 2025-11-25 16:42:54.855 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:42:54 compute-0 nova_compute[254092]: 2025-11-25 16:42:54.860 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:54 compute-0 nova_compute[254092]: 2025-11-25 16:42:54.860 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb0cf5f9a-6e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:54 compute-0 nova_compute[254092]: 2025-11-25 16:42:54.861 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb0cf5f9a-6e, col_values=(('external_ids', {'iface-id': 'b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2f:d4:24', 'vm-uuid': 'ed4031e8-a918-4816-b9b8-b1134a086f8b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:54 compute-0 nova_compute[254092]: 2025-11-25 16:42:54.863 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:54 compute-0 NetworkManager[48891]: <info>  [1764088974.8641] manager: (tapb0cf5f9a-6e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/290)
Nov 25 16:42:54 compute-0 nova_compute[254092]: 2025-11-25 16:42:54.865 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:42:54 compute-0 nova_compute[254092]: 2025-11-25 16:42:54.870 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:54 compute-0 nova_compute[254092]: 2025-11-25 16:42:54.872 254096 INFO os_vif [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2f:d4:24,bridge_name='br-int',has_traffic_filtering=True,id=b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4,network=Network(aa3f25be-8960-4c87-987c-97d65f879d23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0cf5f9a-6e')
Nov 25 16:42:54 compute-0 nova_compute[254092]: 2025-11-25 16:42:54.927 254096 DEBUG nova.virt.libvirt.driver [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:42:54 compute-0 nova_compute[254092]: 2025-11-25 16:42:54.928 254096 DEBUG nova.virt.libvirt.driver [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:42:54 compute-0 nova_compute[254092]: 2025-11-25 16:42:54.929 254096 DEBUG nova.virt.libvirt.driver [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] No VIF found with MAC fa:16:3e:2f:d4:24, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:42:54 compute-0 nova_compute[254092]: 2025-11-25 16:42:54.929 254096 INFO nova.virt.libvirt.driver [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Using config drive
Nov 25 16:42:54 compute-0 nova_compute[254092]: 2025-11-25 16:42:54.950 254096 DEBUG nova.storage.rbd_utils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] rbd image ed4031e8-a918-4816-b9b8-b1134a086f8b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:42:55 compute-0 ceph-mon[74985]: pgmap v1725: 321 pgs: 321 active+clean; 88 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 124 op/s
Nov 25 16:42:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/31402073' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:42:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1049341593' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:42:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 16:42:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3345856000' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:42:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 16:42:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3345856000' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:42:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1726: 321 pgs: 321 active+clean; 88 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 124 op/s
Nov 25 16:42:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:42:56 compute-0 nova_compute[254092]: 2025-11-25 16:42:56.019 254096 INFO nova.virt.libvirt.driver [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Creating config drive at /var/lib/nova/instances/ed4031e8-a918-4816-b9b8-b1134a086f8b/disk.config
Nov 25 16:42:56 compute-0 nova_compute[254092]: 2025-11-25 16:42:56.024 254096 DEBUG oslo_concurrency.processutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ed4031e8-a918-4816-b9b8-b1134a086f8b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzu2zenz_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:42:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/3345856000' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:42:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/3345856000' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:42:56 compute-0 ceph-mon[74985]: pgmap v1726: 321 pgs: 321 active+clean; 88 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 124 op/s
Nov 25 16:42:56 compute-0 nova_compute[254092]: 2025-11-25 16:42:56.160 254096 DEBUG oslo_concurrency.processutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ed4031e8-a918-4816-b9b8-b1134a086f8b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzu2zenz_" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:42:56 compute-0 nova_compute[254092]: 2025-11-25 16:42:56.188 254096 DEBUG nova.storage.rbd_utils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] rbd image ed4031e8-a918-4816-b9b8-b1134a086f8b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:42:56 compute-0 nova_compute[254092]: 2025-11-25 16:42:56.193 254096 DEBUG oslo_concurrency.processutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ed4031e8-a918-4816-b9b8-b1134a086f8b/disk.config ed4031e8-a918-4816-b9b8-b1134a086f8b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:42:56 compute-0 nova_compute[254092]: 2025-11-25 16:42:56.350 254096 DEBUG oslo_concurrency.processutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ed4031e8-a918-4816-b9b8-b1134a086f8b/disk.config ed4031e8-a918-4816-b9b8-b1134a086f8b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:42:56 compute-0 nova_compute[254092]: 2025-11-25 16:42:56.351 254096 INFO nova.virt.libvirt.driver [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Deleting local config drive /var/lib/nova/instances/ed4031e8-a918-4816-b9b8-b1134a086f8b/disk.config because it was imported into RBD.
Nov 25 16:42:56 compute-0 kernel: tapb0cf5f9a-6e: entered promiscuous mode
Nov 25 16:42:56 compute-0 NetworkManager[48891]: <info>  [1764088976.4215] manager: (tapb0cf5f9a-6e): new Tun device (/org/freedesktop/NetworkManager/Devices/291)
Nov 25 16:42:56 compute-0 nova_compute[254092]: 2025-11-25 16:42:56.462 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:56 compute-0 ovn_controller[153477]: 2025-11-25T16:42:56Z|00677|binding|INFO|Claiming lport b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4 for this chassis.
Nov 25 16:42:56 compute-0 ovn_controller[153477]: 2025-11-25T16:42:56Z|00678|binding|INFO|b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4: Claiming fa:16:3e:2f:d4:24 10.100.0.13
Nov 25 16:42:56 compute-0 systemd-udevd[330334]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:42:56 compute-0 nova_compute[254092]: 2025-11-25 16:42:56.464 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.474 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2f:d4:24 10.100.0.13'], port_security=['fa:16:3e:2f:d4:24 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'ed4031e8-a918-4816-b9b8-b1134a086f8b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa3f25be-8960-4c87-987c-97d65f879d23', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '20d786999bc74073bae1fde6aede7fcd', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b4b33984-130a-4842-ad65-63b97381bcf8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6108dd19-2654-4369-9d7f-d2bb4f54095b, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.474 163338 INFO neutron.agent.ovn.metadata.agent [-] Port b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4 in datapath aa3f25be-8960-4c87-987c-97d65f879d23 bound to our chassis
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.475 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa3f25be-8960-4c87-987c-97d65f879d23
Nov 25 16:42:56 compute-0 NetworkManager[48891]: <info>  [1764088976.4787] device (tapb0cf5f9a-6e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:42:56 compute-0 NetworkManager[48891]: <info>  [1764088976.4799] device (tapb0cf5f9a-6e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.486 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[24999901-1243-4aa3-96be-a9242511930f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.487 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaa3f25be-81 in ovnmeta-aa3f25be-8960-4c87-987c-97d65f879d23 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.489 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaa3f25be-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.489 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f166ecc8-6dc7-459a-ac89-d58e1562b1dd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.490 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[636b3cc7-35be-4311-8aff-42c368d5f420]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:56 compute-0 systemd-machined[216343]: New machine qemu-84-instance-00000047.
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.503 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[32ff4d74-ccd0-4ff8-898d-2dddef241c36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.530 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[45491d41-cf66-45a6-bff6-5ec57c013ee1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:56 compute-0 systemd[1]: Started Virtual Machine qemu-84-instance-00000047.
Nov 25 16:42:56 compute-0 nova_compute[254092]: 2025-11-25 16:42:56.536 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:56 compute-0 ovn_controller[153477]: 2025-11-25T16:42:56Z|00679|binding|INFO|Setting lport b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4 ovn-installed in OVS
Nov 25 16:42:56 compute-0 ovn_controller[153477]: 2025-11-25T16:42:56Z|00680|binding|INFO|Setting lport b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4 up in Southbound
Nov 25 16:42:56 compute-0 nova_compute[254092]: 2025-11-25 16:42:56.542 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.558 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[dc565b68-0919-4fe5-b789-484340b90ac4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:56 compute-0 nova_compute[254092]: 2025-11-25 16:42:56.564 254096 DEBUG nova.network.neutron [req-757d3df4-a236-4a89-a4f0-b95b904165fe req-85a60aec-0114-438d-a597-ef43276cd485 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Updated VIF entry in instance network info cache for port b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:42:56 compute-0 nova_compute[254092]: 2025-11-25 16:42:56.565 254096 DEBUG nova.network.neutron [req-757d3df4-a236-4a89-a4f0-b95b904165fe req-85a60aec-0114-438d-a597-ef43276cd485 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Updating instance_info_cache with network_info: [{"id": "b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4", "address": "fa:16:3e:2f:d4:24", "network": {"id": "aa3f25be-8960-4c87-987c-97d65f879d23", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-147105921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20d786999bc74073bae1fde6aede7fcd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0cf5f9a-6e", "ovs_interfaceid": "b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:42:56 compute-0 systemd-udevd[330338]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:42:56 compute-0 NetworkManager[48891]: <info>  [1764088976.5659] manager: (tapaa3f25be-80): new Veth device (/org/freedesktop/NetworkManager/Devices/292)
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.565 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f3fbf455-6c02-40d3-a89a-2da97523edd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:56 compute-0 nova_compute[254092]: 2025-11-25 16:42:56.578 254096 DEBUG oslo_concurrency.lockutils [req-757d3df4-a236-4a89-a4f0-b95b904165fe req-85a60aec-0114-438d-a597-ef43276cd485 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-ed4031e8-a918-4816-b9b8-b1134a086f8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.601 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a4a2f4b9-49ec-4622-a990-e7dcf63f91f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.604 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[136838a3-2d13-4a42-b9fe-e01a12564a47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:56 compute-0 NetworkManager[48891]: <info>  [1764088976.6291] device (tapaa3f25be-80): carrier: link connected
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.637 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[0f852db2-c5da-4ea1-823e-41f81f0d1dec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.658 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[61118e7f-d2d9-4a94-8445-96002efc1c84]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa3f25be-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a0:0e:31'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 198], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543421, 'reachable_time': 25326, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 330370, 'error': None, 'target': 'ovnmeta-aa3f25be-8960-4c87-987c-97d65f879d23', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.675 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[aba46940-681d-4545-a406-c644410d6a11]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea0:e31'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 543421, 'tstamp': 543421}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 330371, 'error': None, 'target': 'ovnmeta-aa3f25be-8960-4c87-987c-97d65f879d23', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.692 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b374673c-e1c9-4329-8328-a9f35cee6bdd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa3f25be-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a0:0e:31'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 198], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543421, 'reachable_time': 25326, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 330372, 'error': None, 'target': 'ovnmeta-aa3f25be-8960-4c87-987c-97d65f879d23', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.727 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3b0d9a7c-97b5-4176-b302-ca93211d093e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.797 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8a808d87-8d27-4508-8b55-70d2f10787ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.799 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa3f25be-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.799 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.799 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa3f25be-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:56 compute-0 NetworkManager[48891]: <info>  [1764088976.8025] manager: (tapaa3f25be-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/293)
Nov 25 16:42:56 compute-0 kernel: tapaa3f25be-80: entered promiscuous mode
Nov 25 16:42:56 compute-0 nova_compute[254092]: 2025-11-25 16:42:56.801 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:56 compute-0 nova_compute[254092]: 2025-11-25 16:42:56.804 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.805 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa3f25be-80, col_values=(('external_ids', {'iface-id': 'a8379971-0cc4-450a-a8ab-bf056efebfda'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:42:56 compute-0 nova_compute[254092]: 2025-11-25 16:42:56.806 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:56 compute-0 ovn_controller[153477]: 2025-11-25T16:42:56Z|00681|binding|INFO|Releasing lport a8379971-0cc4-450a-a8ab-bf056efebfda from this chassis (sb_readonly=0)
Nov 25 16:42:56 compute-0 nova_compute[254092]: 2025-11-25 16:42:56.822 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.823 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aa3f25be-8960-4c87-987c-97d65f879d23.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aa3f25be-8960-4c87-987c-97d65f879d23.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.824 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[321a7e56-b67c-4416-a13d-287a199f8f6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.825 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-aa3f25be-8960-4c87-987c-97d65f879d23
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/aa3f25be-8960-4c87-987c-97d65f879d23.pid.haproxy
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID aa3f25be-8960-4c87-987c-97d65f879d23
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:42:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:42:56.826 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aa3f25be-8960-4c87-987c-97d65f879d23', 'env', 'PROCESS_TAG=haproxy-aa3f25be-8960-4c87-987c-97d65f879d23', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aa3f25be-8960-4c87-987c-97d65f879d23.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:42:57 compute-0 nova_compute[254092]: 2025-11-25 16:42:57.102 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088977.1023457, ed4031e8-a918-4816-b9b8-b1134a086f8b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:42:57 compute-0 nova_compute[254092]: 2025-11-25 16:42:57.104 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] VM Started (Lifecycle Event)
Nov 25 16:42:57 compute-0 nova_compute[254092]: 2025-11-25 16:42:57.120 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:42:57 compute-0 nova_compute[254092]: 2025-11-25 16:42:57.123 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088977.1025941, ed4031e8-a918-4816-b9b8-b1134a086f8b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:42:57 compute-0 nova_compute[254092]: 2025-11-25 16:42:57.123 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] VM Paused (Lifecycle Event)
Nov 25 16:42:57 compute-0 nova_compute[254092]: 2025-11-25 16:42:57.137 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:42:57 compute-0 nova_compute[254092]: 2025-11-25 16:42:57.141 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:42:57 compute-0 nova_compute[254092]: 2025-11-25 16:42:57.155 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:42:57 compute-0 podman[330444]: 2025-11-25 16:42:57.223936657 +0000 UTC m=+0.052797504 container create 9866e0faa37b05b71d920daa8c16581e73b22e15d9b05aa6dec6046c5adb1da2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa3f25be-8960-4c87-987c-97d65f879d23, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:42:57 compute-0 systemd[1]: Started libpod-conmon-9866e0faa37b05b71d920daa8c16581e73b22e15d9b05aa6dec6046c5adb1da2.scope.
Nov 25 16:42:57 compute-0 podman[330444]: 2025-11-25 16:42:57.193262204 +0000 UTC m=+0.022123081 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:42:57 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:42:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f606a2dbbfae9d634065c5f557617efb065ff4a0c40004d3347bac53165b01a7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:42:57 compute-0 podman[330444]: 2025-11-25 16:42:57.311884742 +0000 UTC m=+0.140745619 container init 9866e0faa37b05b71d920daa8c16581e73b22e15d9b05aa6dec6046c5adb1da2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa3f25be-8960-4c87-987c-97d65f879d23, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 16:42:57 compute-0 podman[330444]: 2025-11-25 16:42:57.317255708 +0000 UTC m=+0.146116555 container start 9866e0faa37b05b71d920daa8c16581e73b22e15d9b05aa6dec6046c5adb1da2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa3f25be-8960-4c87-987c-97d65f879d23, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 16:42:57 compute-0 neutron-haproxy-ovnmeta-aa3f25be-8960-4c87-987c-97d65f879d23[330459]: [NOTICE]   (330463) : New worker (330465) forked
Nov 25 16:42:57 compute-0 neutron-haproxy-ovnmeta-aa3f25be-8960-4c87-987c-97d65f879d23[330459]: [NOTICE]   (330463) : Loading success.
Nov 25 16:42:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1727: 321 pgs: 321 active+clean; 88 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 419 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Nov 25 16:42:58 compute-0 nova_compute[254092]: 2025-11-25 16:42:58.141 254096 DEBUG oslo_concurrency.lockutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:58 compute-0 nova_compute[254092]: 2025-11-25 16:42:58.142 254096 DEBUG oslo_concurrency.lockutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:58 compute-0 nova_compute[254092]: 2025-11-25 16:42:58.162 254096 DEBUG nova.compute.manager [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:42:58 compute-0 nova_compute[254092]: 2025-11-25 16:42:58.223 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:42:58 compute-0 nova_compute[254092]: 2025-11-25 16:42:58.236 254096 DEBUG oslo_concurrency.lockutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:58 compute-0 nova_compute[254092]: 2025-11-25 16:42:58.237 254096 DEBUG oslo_concurrency.lockutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:58 compute-0 nova_compute[254092]: 2025-11-25 16:42:58.249 254096 DEBUG nova.virt.hardware [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:42:58 compute-0 nova_compute[254092]: 2025-11-25 16:42:58.249 254096 INFO nova.compute.claims [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:42:58 compute-0 nova_compute[254092]: 2025-11-25 16:42:58.388 254096 DEBUG oslo_concurrency.processutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:42:58 compute-0 ceph-mon[74985]: pgmap v1727: 321 pgs: 321 active+clean; 88 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 419 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Nov 25 16:42:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:42:58 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/293241420' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:42:58 compute-0 nova_compute[254092]: 2025-11-25 16:42:58.806 254096 DEBUG oslo_concurrency.processutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:42:58 compute-0 nova_compute[254092]: 2025-11-25 16:42:58.812 254096 DEBUG nova.compute.provider_tree [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:42:58 compute-0 nova_compute[254092]: 2025-11-25 16:42:58.833 254096 DEBUG nova.scheduler.client.report [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:42:58 compute-0 nova_compute[254092]: 2025-11-25 16:42:58.853 254096 DEBUG oslo_concurrency.lockutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:58 compute-0 nova_compute[254092]: 2025-11-25 16:42:58.854 254096 DEBUG nova.compute.manager [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:42:58 compute-0 nova_compute[254092]: 2025-11-25 16:42:58.914 254096 DEBUG nova.compute.manager [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:42:58 compute-0 nova_compute[254092]: 2025-11-25 16:42:58.915 254096 DEBUG nova.network.neutron [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:42:58 compute-0 nova_compute[254092]: 2025-11-25 16:42:58.942 254096 INFO nova.virt.libvirt.driver [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:42:58 compute-0 nova_compute[254092]: 2025-11-25 16:42:58.964 254096 DEBUG nova.compute.manager [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:42:59 compute-0 nova_compute[254092]: 2025-11-25 16:42:59.088 254096 DEBUG nova.compute.manager [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:42:59 compute-0 nova_compute[254092]: 2025-11-25 16:42:59.089 254096 DEBUG nova.virt.libvirt.driver [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:42:59 compute-0 nova_compute[254092]: 2025-11-25 16:42:59.090 254096 INFO nova.virt.libvirt.driver [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Creating image(s)
Nov 25 16:42:59 compute-0 nova_compute[254092]: 2025-11-25 16:42:59.112 254096 DEBUG nova.storage.rbd_utils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] rbd image 01f96314-1fbe-4eee-a4ed-db7f448a5320_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:42:59 compute-0 nova_compute[254092]: 2025-11-25 16:42:59.145 254096 DEBUG nova.storage.rbd_utils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] rbd image 01f96314-1fbe-4eee-a4ed-db7f448a5320_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:42:59 compute-0 nova_compute[254092]: 2025-11-25 16:42:59.175 254096 DEBUG nova.storage.rbd_utils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] rbd image 01f96314-1fbe-4eee-a4ed-db7f448a5320_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:42:59 compute-0 nova_compute[254092]: 2025-11-25 16:42:59.180 254096 DEBUG oslo_concurrency.processutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:42:59 compute-0 nova_compute[254092]: 2025-11-25 16:42:59.282 254096 DEBUG oslo_concurrency.processutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:42:59 compute-0 nova_compute[254092]: 2025-11-25 16:42:59.283 254096 DEBUG oslo_concurrency.lockutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:59 compute-0 nova_compute[254092]: 2025-11-25 16:42:59.284 254096 DEBUG oslo_concurrency.lockutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:59 compute-0 nova_compute[254092]: 2025-11-25 16:42:59.285 254096 DEBUG oslo_concurrency.lockutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:59 compute-0 nova_compute[254092]: 2025-11-25 16:42:59.312 254096 DEBUG nova.storage.rbd_utils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] rbd image 01f96314-1fbe-4eee-a4ed-db7f448a5320_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:42:59 compute-0 nova_compute[254092]: 2025-11-25 16:42:59.316 254096 DEBUG oslo_concurrency.processutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 01f96314-1fbe-4eee-a4ed-db7f448a5320_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:42:59 compute-0 nova_compute[254092]: 2025-11-25 16:42:59.591 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088964.5906055, e887a377-e792-462d-8bcd-002a93dac12d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:42:59 compute-0 nova_compute[254092]: 2025-11-25 16:42:59.592 254096 INFO nova.compute.manager [-] [instance: e887a377-e792-462d-8bcd-002a93dac12d] VM Stopped (Lifecycle Event)
Nov 25 16:42:59 compute-0 nova_compute[254092]: 2025-11-25 16:42:59.614 254096 DEBUG nova.compute.manager [None req-76090146-6253-4170-9f7b-a62524b6de36 - - - - - -] [instance: e887a377-e792-462d-8bcd-002a93dac12d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:42:59 compute-0 nova_compute[254092]: 2025-11-25 16:42:59.624 254096 DEBUG oslo_concurrency.processutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 01f96314-1fbe-4eee-a4ed-db7f448a5320_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.307s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:42:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1728: 321 pgs: 321 active+clean; 88 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 16:42:59 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/293241420' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:42:59 compute-0 nova_compute[254092]: 2025-11-25 16:42:59.699 254096 DEBUG nova.storage.rbd_utils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] resizing rbd image 01f96314-1fbe-4eee-a4ed-db7f448a5320_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:42:59 compute-0 nova_compute[254092]: 2025-11-25 16:42:59.741 254096 DEBUG nova.policy [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'aea2d8cf3bb54cdbbc72e41805fb1f90', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b4d15f5aabd3491da5314b126a20225a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:42:59 compute-0 nova_compute[254092]: 2025-11-25 16:42:59.801 254096 DEBUG nova.objects.instance [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'migration_context' on Instance uuid 01f96314-1fbe-4eee-a4ed-db7f448a5320 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:42:59 compute-0 nova_compute[254092]: 2025-11-25 16:42:59.816 254096 DEBUG nova.virt.libvirt.driver [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:42:59 compute-0 nova_compute[254092]: 2025-11-25 16:42:59.817 254096 DEBUG nova.virt.libvirt.driver [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Ensure instance console log exists: /var/lib/nova/instances/01f96314-1fbe-4eee-a4ed-db7f448a5320/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:42:59 compute-0 nova_compute[254092]: 2025-11-25 16:42:59.817 254096 DEBUG oslo_concurrency.lockutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:42:59 compute-0 nova_compute[254092]: 2025-11-25 16:42:59.818 254096 DEBUG oslo_concurrency.lockutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:42:59 compute-0 nova_compute[254092]: 2025-11-25 16:42:59.818 254096 DEBUG oslo_concurrency.lockutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:42:59 compute-0 nova_compute[254092]: 2025-11-25 16:42:59.904 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:00 compute-0 nova_compute[254092]: 2025-11-25 16:43:00.450 254096 DEBUG nova.network.neutron [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Successfully created port: 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:43:00 compute-0 ceph-mon[74985]: pgmap v1728: 321 pgs: 321 active+clean; 88 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 16:43:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:43:01 compute-0 nova_compute[254092]: 2025-11-25 16:43:01.055 254096 DEBUG nova.compute.manager [req-f70a9ae5-d71f-43ae-9bb1-5b4c439ae7d6 req-1f1e6d17-5e92-44f1-9854-020214398d2e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Received event network-vif-plugged-b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:43:01 compute-0 nova_compute[254092]: 2025-11-25 16:43:01.055 254096 DEBUG oslo_concurrency.lockutils [req-f70a9ae5-d71f-43ae-9bb1-5b4c439ae7d6 req-1f1e6d17-5e92-44f1-9854-020214398d2e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "ed4031e8-a918-4816-b9b8-b1134a086f8b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:01 compute-0 nova_compute[254092]: 2025-11-25 16:43:01.056 254096 DEBUG oslo_concurrency.lockutils [req-f70a9ae5-d71f-43ae-9bb1-5b4c439ae7d6 req-1f1e6d17-5e92-44f1-9854-020214398d2e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "ed4031e8-a918-4816-b9b8-b1134a086f8b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:01 compute-0 nova_compute[254092]: 2025-11-25 16:43:01.056 254096 DEBUG oslo_concurrency.lockutils [req-f70a9ae5-d71f-43ae-9bb1-5b4c439ae7d6 req-1f1e6d17-5e92-44f1-9854-020214398d2e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "ed4031e8-a918-4816-b9b8-b1134a086f8b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:01 compute-0 nova_compute[254092]: 2025-11-25 16:43:01.056 254096 DEBUG nova.compute.manager [req-f70a9ae5-d71f-43ae-9bb1-5b4c439ae7d6 req-1f1e6d17-5e92-44f1-9854-020214398d2e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Processing event network-vif-plugged-b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:43:01 compute-0 nova_compute[254092]: 2025-11-25 16:43:01.057 254096 DEBUG nova.compute.manager [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:43:01 compute-0 nova_compute[254092]: 2025-11-25 16:43:01.060 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088981.0603907, ed4031e8-a918-4816-b9b8-b1134a086f8b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:43:01 compute-0 nova_compute[254092]: 2025-11-25 16:43:01.060 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] VM Resumed (Lifecycle Event)
Nov 25 16:43:01 compute-0 nova_compute[254092]: 2025-11-25 16:43:01.063 254096 DEBUG nova.virt.libvirt.driver [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:43:01 compute-0 nova_compute[254092]: 2025-11-25 16:43:01.067 254096 INFO nova.virt.libvirt.driver [-] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Instance spawned successfully.
Nov 25 16:43:01 compute-0 nova_compute[254092]: 2025-11-25 16:43:01.067 254096 DEBUG nova.virt.libvirt.driver [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:43:01 compute-0 nova_compute[254092]: 2025-11-25 16:43:01.098 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:43:01 compute-0 nova_compute[254092]: 2025-11-25 16:43:01.102 254096 DEBUG nova.virt.libvirt.driver [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:43:01 compute-0 nova_compute[254092]: 2025-11-25 16:43:01.102 254096 DEBUG nova.virt.libvirt.driver [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:43:01 compute-0 nova_compute[254092]: 2025-11-25 16:43:01.103 254096 DEBUG nova.virt.libvirt.driver [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:43:01 compute-0 nova_compute[254092]: 2025-11-25 16:43:01.103 254096 DEBUG nova.virt.libvirt.driver [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:43:01 compute-0 nova_compute[254092]: 2025-11-25 16:43:01.103 254096 DEBUG nova.virt.libvirt.driver [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:43:01 compute-0 nova_compute[254092]: 2025-11-25 16:43:01.104 254096 DEBUG nova.virt.libvirt.driver [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:43:01 compute-0 nova_compute[254092]: 2025-11-25 16:43:01.108 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:43:01 compute-0 nova_compute[254092]: 2025-11-25 16:43:01.140 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:43:01 compute-0 nova_compute[254092]: 2025-11-25 16:43:01.220 254096 INFO nova.compute.manager [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Took 12.51 seconds to spawn the instance on the hypervisor.
Nov 25 16:43:01 compute-0 nova_compute[254092]: 2025-11-25 16:43:01.220 254096 DEBUG nova.compute.manager [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:43:01 compute-0 nova_compute[254092]: 2025-11-25 16:43:01.296 254096 INFO nova.compute.manager [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Took 13.43 seconds to build instance.
Nov 25 16:43:01 compute-0 nova_compute[254092]: 2025-11-25 16:43:01.313 254096 DEBUG oslo_concurrency.lockutils [None req-56759d6c-9fee-46fc-94e6-9cb8e3131be9 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Lock "ed4031e8-a918-4816-b9b8-b1134a086f8b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.517s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1729: 321 pgs: 321 active+clean; 134 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 3.5 MiB/s wr, 64 op/s
Nov 25 16:43:02 compute-0 nova_compute[254092]: 2025-11-25 16:43:02.255 254096 DEBUG nova.network.neutron [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Successfully updated port: 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:43:02 compute-0 nova_compute[254092]: 2025-11-25 16:43:02.291 254096 DEBUG oslo_concurrency.lockutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquiring lock "refresh_cache-01f96314-1fbe-4eee-a4ed-db7f448a5320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:43:02 compute-0 nova_compute[254092]: 2025-11-25 16:43:02.292 254096 DEBUG oslo_concurrency.lockutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquired lock "refresh_cache-01f96314-1fbe-4eee-a4ed-db7f448a5320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:43:02 compute-0 nova_compute[254092]: 2025-11-25 16:43:02.292 254096 DEBUG nova.network.neutron [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:43:02 compute-0 nova_compute[254092]: 2025-11-25 16:43:02.459 254096 DEBUG nova.network.neutron [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:43:02 compute-0 ceph-mon[74985]: pgmap v1729: 321 pgs: 321 active+clean; 134 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 3.5 MiB/s wr, 64 op/s
Nov 25 16:43:03 compute-0 nova_compute[254092]: 2025-11-25 16:43:03.147 254096 DEBUG nova.network.neutron [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Updating instance_info_cache with network_info: [{"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:43:03 compute-0 nova_compute[254092]: 2025-11-25 16:43:03.152 254096 DEBUG nova.compute.manager [req-f5c2a098-db99-4e9a-aeef-2c5a58837dab req-0b4fc2cf-d764-40c8-99ed-6ba504817cef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Received event network-vif-plugged-b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:43:03 compute-0 nova_compute[254092]: 2025-11-25 16:43:03.153 254096 DEBUG oslo_concurrency.lockutils [req-f5c2a098-db99-4e9a-aeef-2c5a58837dab req-0b4fc2cf-d764-40c8-99ed-6ba504817cef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "ed4031e8-a918-4816-b9b8-b1134a086f8b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:03 compute-0 nova_compute[254092]: 2025-11-25 16:43:03.153 254096 DEBUG oslo_concurrency.lockutils [req-f5c2a098-db99-4e9a-aeef-2c5a58837dab req-0b4fc2cf-d764-40c8-99ed-6ba504817cef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "ed4031e8-a918-4816-b9b8-b1134a086f8b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:03 compute-0 nova_compute[254092]: 2025-11-25 16:43:03.153 254096 DEBUG oslo_concurrency.lockutils [req-f5c2a098-db99-4e9a-aeef-2c5a58837dab req-0b4fc2cf-d764-40c8-99ed-6ba504817cef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "ed4031e8-a918-4816-b9b8-b1134a086f8b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:03 compute-0 nova_compute[254092]: 2025-11-25 16:43:03.154 254096 DEBUG nova.compute.manager [req-f5c2a098-db99-4e9a-aeef-2c5a58837dab req-0b4fc2cf-d764-40c8-99ed-6ba504817cef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] No waiting events found dispatching network-vif-plugged-b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:43:03 compute-0 nova_compute[254092]: 2025-11-25 16:43:03.154 254096 WARNING nova.compute.manager [req-f5c2a098-db99-4e9a-aeef-2c5a58837dab req-0b4fc2cf-d764-40c8-99ed-6ba504817cef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Received unexpected event network-vif-plugged-b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4 for instance with vm_state active and task_state None.
Nov 25 16:43:03 compute-0 nova_compute[254092]: 2025-11-25 16:43:03.154 254096 DEBUG nova.compute.manager [req-f5c2a098-db99-4e9a-aeef-2c5a58837dab req-0b4fc2cf-d764-40c8-99ed-6ba504817cef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received event network-changed-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:43:03 compute-0 nova_compute[254092]: 2025-11-25 16:43:03.154 254096 DEBUG nova.compute.manager [req-f5c2a098-db99-4e9a-aeef-2c5a58837dab req-0b4fc2cf-d764-40c8-99ed-6ba504817cef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Refreshing instance network info cache due to event network-changed-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:43:03 compute-0 nova_compute[254092]: 2025-11-25 16:43:03.155 254096 DEBUG oslo_concurrency.lockutils [req-f5c2a098-db99-4e9a-aeef-2c5a58837dab req-0b4fc2cf-d764-40c8-99ed-6ba504817cef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-01f96314-1fbe-4eee-a4ed-db7f448a5320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:43:03 compute-0 nova_compute[254092]: 2025-11-25 16:43:03.176 254096 DEBUG oslo_concurrency.lockutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Releasing lock "refresh_cache-01f96314-1fbe-4eee-a4ed-db7f448a5320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:43:03 compute-0 nova_compute[254092]: 2025-11-25 16:43:03.177 254096 DEBUG nova.compute.manager [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Instance network_info: |[{"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:43:03 compute-0 nova_compute[254092]: 2025-11-25 16:43:03.177 254096 DEBUG oslo_concurrency.lockutils [req-f5c2a098-db99-4e9a-aeef-2c5a58837dab req-0b4fc2cf-d764-40c8-99ed-6ba504817cef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-01f96314-1fbe-4eee-a4ed-db7f448a5320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:43:03 compute-0 nova_compute[254092]: 2025-11-25 16:43:03.178 254096 DEBUG nova.network.neutron [req-f5c2a098-db99-4e9a-aeef-2c5a58837dab req-0b4fc2cf-d764-40c8-99ed-6ba504817cef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Refreshing network info cache for port 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:43:03 compute-0 nova_compute[254092]: 2025-11-25 16:43:03.181 254096 DEBUG nova.virt.libvirt.driver [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Start _get_guest_xml network_info=[{"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:43:03 compute-0 nova_compute[254092]: 2025-11-25 16:43:03.187 254096 WARNING nova.virt.libvirt.driver [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:43:03 compute-0 nova_compute[254092]: 2025-11-25 16:43:03.191 254096 DEBUG nova.virt.libvirt.host [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:43:03 compute-0 nova_compute[254092]: 2025-11-25 16:43:03.192 254096 DEBUG nova.virt.libvirt.host [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:43:03 compute-0 nova_compute[254092]: 2025-11-25 16:43:03.199 254096 DEBUG nova.virt.libvirt.host [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:43:03 compute-0 nova_compute[254092]: 2025-11-25 16:43:03.199 254096 DEBUG nova.virt.libvirt.host [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:43:03 compute-0 nova_compute[254092]: 2025-11-25 16:43:03.200 254096 DEBUG nova.virt.libvirt.driver [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:43:03 compute-0 nova_compute[254092]: 2025-11-25 16:43:03.200 254096 DEBUG nova.virt.hardware [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:43:03 compute-0 nova_compute[254092]: 2025-11-25 16:43:03.201 254096 DEBUG nova.virt.hardware [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:43:03 compute-0 nova_compute[254092]: 2025-11-25 16:43:03.201 254096 DEBUG nova.virt.hardware [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:43:03 compute-0 nova_compute[254092]: 2025-11-25 16:43:03.201 254096 DEBUG nova.virt.hardware [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:43:03 compute-0 nova_compute[254092]: 2025-11-25 16:43:03.202 254096 DEBUG nova.virt.hardware [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:43:03 compute-0 nova_compute[254092]: 2025-11-25 16:43:03.202 254096 DEBUG nova.virt.hardware [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:43:03 compute-0 nova_compute[254092]: 2025-11-25 16:43:03.202 254096 DEBUG nova.virt.hardware [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:43:03 compute-0 nova_compute[254092]: 2025-11-25 16:43:03.202 254096 DEBUG nova.virt.hardware [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:43:03 compute-0 nova_compute[254092]: 2025-11-25 16:43:03.203 254096 DEBUG nova.virt.hardware [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:43:03 compute-0 nova_compute[254092]: 2025-11-25 16:43:03.203 254096 DEBUG nova.virt.hardware [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:43:03 compute-0 nova_compute[254092]: 2025-11-25 16:43:03.203 254096 DEBUG nova.virt.hardware [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:43:03 compute-0 nova_compute[254092]: 2025-11-25 16:43:03.206 254096 DEBUG oslo_concurrency.processutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:43:03 compute-0 nova_compute[254092]: 2025-11-25 16:43:03.236 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:43:03 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4017691378' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:43:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1730: 321 pgs: 321 active+clean; 134 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Nov 25 16:43:03 compute-0 nova_compute[254092]: 2025-11-25 16:43:03.644 254096 DEBUG oslo_concurrency.processutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:43:03 compute-0 nova_compute[254092]: 2025-11-25 16:43:03.663 254096 DEBUG nova.storage.rbd_utils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] rbd image 01f96314-1fbe-4eee-a4ed-db7f448a5320_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:43:03 compute-0 nova_compute[254092]: 2025-11-25 16:43:03.666 254096 DEBUG oslo_concurrency.processutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:43:03 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4017691378' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:43:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:43:04 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3159350667' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:43:04 compute-0 nova_compute[254092]: 2025-11-25 16:43:04.103 254096 DEBUG oslo_concurrency.processutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:43:04 compute-0 nova_compute[254092]: 2025-11-25 16:43:04.105 254096 DEBUG nova.virt.libvirt.vif [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:42:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1951123052',display_name='tempest-ServerActionsTestJSON-server-1951123052',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1951123052',id=72,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOul8HeAt+UIxCfJfn+60t4YpZV7iSL4LJDORWr6iCR1ov+YI0semWU/ei1rpSzmkTyhB+zsYhvO/JrM/v0aowbnChSU76J6tWISFPQqrNakhVmsB30NDWBI6kYIFzj8+Q==',key_name='tempest-keypair-1452948426',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b4d15f5aabd3491da5314b126a20225a',ramdisk_id='',reservation_id='r-gjhb48uk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1811453217',owner_user_name='tempest-ServerActionsTestJSON-1811453217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:42:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aea2d8cf3bb54cdbbc72e41805fb1f90',uuid=01f96314-1fbe-4eee-a4ed-db7f448a5320,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:43:04 compute-0 nova_compute[254092]: 2025-11-25 16:43:04.105 254096 DEBUG nova.network.os_vif_util [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converting VIF {"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:43:04 compute-0 nova_compute[254092]: 2025-11-25 16:43:04.106 254096 DEBUG nova.network.os_vif_util [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:43:04 compute-0 nova_compute[254092]: 2025-11-25 16:43:04.107 254096 DEBUG nova.objects.instance [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'pci_devices' on Instance uuid 01f96314-1fbe-4eee-a4ed-db7f448a5320 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:43:04 compute-0 nova_compute[254092]: 2025-11-25 16:43:04.126 254096 DEBUG nova.virt.libvirt.driver [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:43:04 compute-0 nova_compute[254092]:   <uuid>01f96314-1fbe-4eee-a4ed-db7f448a5320</uuid>
Nov 25 16:43:04 compute-0 nova_compute[254092]:   <name>instance-00000048</name>
Nov 25 16:43:04 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:43:04 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:43:04 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:43:04 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:       <nova:name>tempest-ServerActionsTestJSON-server-1951123052</nova:name>
Nov 25 16:43:04 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:43:03</nova:creationTime>
Nov 25 16:43:04 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:43:04 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:43:04 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:43:04 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:43:04 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:43:04 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:43:04 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:43:04 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:43:04 compute-0 nova_compute[254092]:         <nova:user uuid="aea2d8cf3bb54cdbbc72e41805fb1f90">tempest-ServerActionsTestJSON-1811453217-project-member</nova:user>
Nov 25 16:43:04 compute-0 nova_compute[254092]:         <nova:project uuid="b4d15f5aabd3491da5314b126a20225a">tempest-ServerActionsTestJSON-1811453217</nova:project>
Nov 25 16:43:04 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:43:04 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:43:04 compute-0 nova_compute[254092]:         <nova:port uuid="4fe8c3a9-70ba-4a51-8bc1-1505a49fe349">
Nov 25 16:43:04 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:43:04 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:43:04 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:43:04 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:43:04 compute-0 nova_compute[254092]:     <system>
Nov 25 16:43:04 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:43:04 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:43:04 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:43:04 compute-0 nova_compute[254092]:       <entry name="serial">01f96314-1fbe-4eee-a4ed-db7f448a5320</entry>
Nov 25 16:43:04 compute-0 nova_compute[254092]:       <entry name="uuid">01f96314-1fbe-4eee-a4ed-db7f448a5320</entry>
Nov 25 16:43:04 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     </system>
Nov 25 16:43:04 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:43:04 compute-0 nova_compute[254092]:   <os>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:   </os>
Nov 25 16:43:04 compute-0 nova_compute[254092]:   <features>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:   </features>
Nov 25 16:43:04 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:43:04 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:43:04 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:43:04 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:43:04 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:43:04 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/01f96314-1fbe-4eee-a4ed-db7f448a5320_disk">
Nov 25 16:43:04 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:       </source>
Nov 25 16:43:04 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:43:04 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:43:04 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:43:04 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/01f96314-1fbe-4eee-a4ed-db7f448a5320_disk.config">
Nov 25 16:43:04 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:       </source>
Nov 25 16:43:04 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:43:04 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:43:04 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:43:04 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:ff:a0:1c"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:       <target dev="tap4fe8c3a9-70"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:43:04 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/01f96314-1fbe-4eee-a4ed-db7f448a5320/console.log" append="off"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     <video>
Nov 25 16:43:04 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     </video>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:43:04 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:43:04 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:43:04 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:43:04 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:43:04 compute-0 nova_compute[254092]: </domain>
Nov 25 16:43:04 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:43:04 compute-0 nova_compute[254092]: 2025-11-25 16:43:04.128 254096 DEBUG nova.compute.manager [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Preparing to wait for external event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:43:04 compute-0 nova_compute[254092]: 2025-11-25 16:43:04.128 254096 DEBUG oslo_concurrency.lockutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:04 compute-0 nova_compute[254092]: 2025-11-25 16:43:04.128 254096 DEBUG oslo_concurrency.lockutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:04 compute-0 nova_compute[254092]: 2025-11-25 16:43:04.128 254096 DEBUG oslo_concurrency.lockutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:04 compute-0 nova_compute[254092]: 2025-11-25 16:43:04.129 254096 DEBUG nova.virt.libvirt.vif [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:42:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1951123052',display_name='tempest-ServerActionsTestJSON-server-1951123052',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1951123052',id=72,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOul8HeAt+UIxCfJfn+60t4YpZV7iSL4LJDORWr6iCR1ov+YI0semWU/ei1rpSzmkTyhB+zsYhvO/JrM/v0aowbnChSU76J6tWISFPQqrNakhVmsB30NDWBI6kYIFzj8+Q==',key_name='tempest-keypair-1452948426',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b4d15f5aabd3491da5314b126a20225a',ramdisk_id='',reservation_id='r-gjhb48uk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1811453217',owner_user_name='tempest-ServerActionsTestJSON-1811453217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:42:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aea2d8cf3bb54cdbbc72e41805fb1f90',uuid=01f96314-1fbe-4eee-a4ed-db7f448a5320,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:43:04 compute-0 nova_compute[254092]: 2025-11-25 16:43:04.129 254096 DEBUG nova.network.os_vif_util [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converting VIF {"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:43:04 compute-0 nova_compute[254092]: 2025-11-25 16:43:04.130 254096 DEBUG nova.network.os_vif_util [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:43:04 compute-0 nova_compute[254092]: 2025-11-25 16:43:04.130 254096 DEBUG os_vif [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:43:04 compute-0 nova_compute[254092]: 2025-11-25 16:43:04.131 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:04 compute-0 nova_compute[254092]: 2025-11-25 16:43:04.131 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:04 compute-0 nova_compute[254092]: 2025-11-25 16:43:04.131 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:43:04 compute-0 nova_compute[254092]: 2025-11-25 16:43:04.134 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:04 compute-0 nova_compute[254092]: 2025-11-25 16:43:04.134 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4fe8c3a9-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:04 compute-0 nova_compute[254092]: 2025-11-25 16:43:04.134 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4fe8c3a9-70, col_values=(('external_ids', {'iface-id': '4fe8c3a9-70ba-4a51-8bc1-1505a49fe349', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ff:a0:1c', 'vm-uuid': '01f96314-1fbe-4eee-a4ed-db7f448a5320'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:04 compute-0 NetworkManager[48891]: <info>  [1764088984.1367] manager: (tap4fe8c3a9-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/294)
Nov 25 16:43:04 compute-0 nova_compute[254092]: 2025-11-25 16:43:04.137 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:43:04 compute-0 nova_compute[254092]: 2025-11-25 16:43:04.142 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:04 compute-0 nova_compute[254092]: 2025-11-25 16:43:04.143 254096 INFO os_vif [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70')
Nov 25 16:43:04 compute-0 nova_compute[254092]: 2025-11-25 16:43:04.211 254096 DEBUG nova.virt.libvirt.driver [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:43:04 compute-0 nova_compute[254092]: 2025-11-25 16:43:04.211 254096 DEBUG nova.virt.libvirt.driver [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:43:04 compute-0 nova_compute[254092]: 2025-11-25 16:43:04.212 254096 DEBUG nova.virt.libvirt.driver [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] No VIF found with MAC fa:16:3e:ff:a0:1c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:43:04 compute-0 nova_compute[254092]: 2025-11-25 16:43:04.212 254096 INFO nova.virt.libvirt.driver [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Using config drive
Nov 25 16:43:04 compute-0 nova_compute[254092]: 2025-11-25 16:43:04.233 254096 DEBUG nova.storage.rbd_utils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] rbd image 01f96314-1fbe-4eee-a4ed-db7f448a5320_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:43:04 compute-0 ceph-mon[74985]: pgmap v1730: 321 pgs: 321 active+clean; 134 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Nov 25 16:43:04 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3159350667' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:43:05 compute-0 nova_compute[254092]: 2025-11-25 16:43:05.170 254096 INFO nova.virt.libvirt.driver [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Creating config drive at /var/lib/nova/instances/01f96314-1fbe-4eee-a4ed-db7f448a5320/disk.config
Nov 25 16:43:05 compute-0 nova_compute[254092]: 2025-11-25 16:43:05.176 254096 DEBUG oslo_concurrency.processutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/01f96314-1fbe-4eee-a4ed-db7f448a5320/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7zmineeo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:43:05 compute-0 nova_compute[254092]: 2025-11-25 16:43:05.285 254096 DEBUG oslo_concurrency.lockutils [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Acquiring lock "ed4031e8-a918-4816-b9b8-b1134a086f8b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:05 compute-0 nova_compute[254092]: 2025-11-25 16:43:05.286 254096 DEBUG oslo_concurrency.lockutils [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Lock "ed4031e8-a918-4816-b9b8-b1134a086f8b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:05 compute-0 nova_compute[254092]: 2025-11-25 16:43:05.286 254096 DEBUG oslo_concurrency.lockutils [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Acquiring lock "ed4031e8-a918-4816-b9b8-b1134a086f8b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:05 compute-0 nova_compute[254092]: 2025-11-25 16:43:05.286 254096 DEBUG oslo_concurrency.lockutils [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Lock "ed4031e8-a918-4816-b9b8-b1134a086f8b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:05 compute-0 nova_compute[254092]: 2025-11-25 16:43:05.286 254096 DEBUG oslo_concurrency.lockutils [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Lock "ed4031e8-a918-4816-b9b8-b1134a086f8b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:05 compute-0 nova_compute[254092]: 2025-11-25 16:43:05.288 254096 INFO nova.compute.manager [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Terminating instance
Nov 25 16:43:05 compute-0 nova_compute[254092]: 2025-11-25 16:43:05.289 254096 DEBUG nova.compute.manager [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:43:05 compute-0 nova_compute[254092]: 2025-11-25 16:43:05.315 254096 DEBUG oslo_concurrency.processutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/01f96314-1fbe-4eee-a4ed-db7f448a5320/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7zmineeo" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:43:05 compute-0 kernel: tapb0cf5f9a-6e (unregistering): left promiscuous mode
Nov 25 16:43:05 compute-0 NetworkManager[48891]: <info>  [1764088985.3350] device (tapb0cf5f9a-6e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:43:05 compute-0 nova_compute[254092]: 2025-11-25 16:43:05.340 254096 DEBUG nova.storage.rbd_utils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] rbd image 01f96314-1fbe-4eee-a4ed-db7f448a5320_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:43:05 compute-0 nova_compute[254092]: 2025-11-25 16:43:05.343 254096 DEBUG oslo_concurrency.processutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/01f96314-1fbe-4eee-a4ed-db7f448a5320/disk.config 01f96314-1fbe-4eee-a4ed-db7f448a5320_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:43:05 compute-0 ovn_controller[153477]: 2025-11-25T16:43:05Z|00682|binding|INFO|Releasing lport b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4 from this chassis (sb_readonly=0)
Nov 25 16:43:05 compute-0 ovn_controller[153477]: 2025-11-25T16:43:05Z|00683|binding|INFO|Setting lport b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4 down in Southbound
Nov 25 16:43:05 compute-0 ovn_controller[153477]: 2025-11-25T16:43:05Z|00684|binding|INFO|Removing iface tapb0cf5f9a-6e ovn-installed in OVS
Nov 25 16:43:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.363 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2f:d4:24 10.100.0.13'], port_security=['fa:16:3e:2f:d4:24 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'ed4031e8-a918-4816-b9b8-b1134a086f8b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa3f25be-8960-4c87-987c-97d65f879d23', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '20d786999bc74073bae1fde6aede7fcd', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b4b33984-130a-4842-ad65-63b97381bcf8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6108dd19-2654-4369-9d7f-d2bb4f54095b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:43:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.364 163338 INFO neutron.agent.ovn.metadata.agent [-] Port b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4 in datapath aa3f25be-8960-4c87-987c-97d65f879d23 unbound from our chassis
Nov 25 16:43:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.365 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aa3f25be-8960-4c87-987c-97d65f879d23, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:43:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.367 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4f82b436-97ac-4bc4-bd37-afba0c15009f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.367 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aa3f25be-8960-4c87-987c-97d65f879d23 namespace which is not needed anymore
Nov 25 16:43:05 compute-0 nova_compute[254092]: 2025-11-25 16:43:05.376 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:05 compute-0 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d00000047.scope: Deactivated successfully.
Nov 25 16:43:05 compute-0 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d00000047.scope: Consumed 4.913s CPU time.
Nov 25 16:43:05 compute-0 systemd-machined[216343]: Machine qemu-84-instance-00000047 terminated.
Nov 25 16:43:05 compute-0 nova_compute[254092]: 2025-11-25 16:43:05.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:43:05 compute-0 neutron-haproxy-ovnmeta-aa3f25be-8960-4c87-987c-97d65f879d23[330459]: [NOTICE]   (330463) : haproxy version is 2.8.14-c23fe91
Nov 25 16:43:05 compute-0 neutron-haproxy-ovnmeta-aa3f25be-8960-4c87-987c-97d65f879d23[330459]: [NOTICE]   (330463) : path to executable is /usr/sbin/haproxy
Nov 25 16:43:05 compute-0 neutron-haproxy-ovnmeta-aa3f25be-8960-4c87-987c-97d65f879d23[330459]: [WARNING]  (330463) : Exiting Master process...
Nov 25 16:43:05 compute-0 neutron-haproxy-ovnmeta-aa3f25be-8960-4c87-987c-97d65f879d23[330459]: [WARNING]  (330463) : Exiting Master process...
Nov 25 16:43:05 compute-0 neutron-haproxy-ovnmeta-aa3f25be-8960-4c87-987c-97d65f879d23[330459]: [ALERT]    (330463) : Current worker (330465) exited with code 143 (Terminated)
Nov 25 16:43:05 compute-0 neutron-haproxy-ovnmeta-aa3f25be-8960-4c87-987c-97d65f879d23[330459]: [WARNING]  (330463) : All workers exited. Exiting... (0)
Nov 25 16:43:05 compute-0 systemd[1]: libpod-9866e0faa37b05b71d920daa8c16581e73b22e15d9b05aa6dec6046c5adb1da2.scope: Deactivated successfully.
Nov 25 16:43:05 compute-0 podman[330810]: 2025-11-25 16:43:05.508323687 +0000 UTC m=+0.053878343 container died 9866e0faa37b05b71d920daa8c16581e73b22e15d9b05aa6dec6046c5adb1da2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa3f25be-8960-4c87-987c-97d65f879d23, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 25 16:43:05 compute-0 nova_compute[254092]: 2025-11-25 16:43:05.526 254096 INFO nova.virt.libvirt.driver [-] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Instance destroyed successfully.
Nov 25 16:43:05 compute-0 nova_compute[254092]: 2025-11-25 16:43:05.527 254096 DEBUG nova.objects.instance [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Lazy-loading 'resources' on Instance uuid ed4031e8-a918-4816-b9b8-b1134a086f8b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:43:05 compute-0 nova_compute[254092]: 2025-11-25 16:43:05.529 254096 DEBUG oslo_concurrency.processutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/01f96314-1fbe-4eee-a4ed-db7f448a5320/disk.config 01f96314-1fbe-4eee-a4ed-db7f448a5320_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:43:05 compute-0 nova_compute[254092]: 2025-11-25 16:43:05.529 254096 INFO nova.virt.libvirt.driver [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Deleting local config drive /var/lib/nova/instances/01f96314-1fbe-4eee-a4ed-db7f448a5320/disk.config because it was imported into RBD.
Nov 25 16:43:05 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9866e0faa37b05b71d920daa8c16581e73b22e15d9b05aa6dec6046c5adb1da2-userdata-shm.mount: Deactivated successfully.
Nov 25 16:43:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-f606a2dbbfae9d634065c5f557617efb065ff4a0c40004d3347bac53165b01a7-merged.mount: Deactivated successfully.
Nov 25 16:43:05 compute-0 nova_compute[254092]: 2025-11-25 16:43:05.546 254096 DEBUG nova.virt.libvirt.vif [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:42:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-1926067749',display_name='tempest-ServerAddressesNegativeTestJSON-server-1926067749',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-1926067749',id=71,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:43:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='20d786999bc74073bae1fde6aede7fcd',ramdisk_id='',reservation_id='r-wet8rv2d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_h
w_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesNegativeTestJSON-772969752',owner_user_name='tempest-ServerAddressesNegativeTestJSON-772969752-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:43:01Z,user_data=None,user_id='e3c9e536e4984598a1b18e79b453cbde',uuid=ed4031e8-a918-4816-b9b8-b1134a086f8b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4", "address": "fa:16:3e:2f:d4:24", "network": {"id": "aa3f25be-8960-4c87-987c-97d65f879d23", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-147105921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20d786999bc74073bae1fde6aede7fcd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0cf5f9a-6e", "ovs_interfaceid": "b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:43:05 compute-0 nova_compute[254092]: 2025-11-25 16:43:05.548 254096 DEBUG nova.network.os_vif_util [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Converting VIF {"id": "b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4", "address": "fa:16:3e:2f:d4:24", "network": {"id": "aa3f25be-8960-4c87-987c-97d65f879d23", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-147105921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "20d786999bc74073bae1fde6aede7fcd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0cf5f9a-6e", "ovs_interfaceid": "b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:43:05 compute-0 nova_compute[254092]: 2025-11-25 16:43:05.549 254096 DEBUG nova.network.os_vif_util [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2f:d4:24,bridge_name='br-int',has_traffic_filtering=True,id=b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4,network=Network(aa3f25be-8960-4c87-987c-97d65f879d23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0cf5f9a-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:43:05 compute-0 nova_compute[254092]: 2025-11-25 16:43:05.549 254096 DEBUG os_vif [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2f:d4:24,bridge_name='br-int',has_traffic_filtering=True,id=b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4,network=Network(aa3f25be-8960-4c87-987c-97d65f879d23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0cf5f9a-6e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:43:05 compute-0 nova_compute[254092]: 2025-11-25 16:43:05.552 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:05 compute-0 nova_compute[254092]: 2025-11-25 16:43:05.553 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb0cf5f9a-6e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:05 compute-0 nova_compute[254092]: 2025-11-25 16:43:05.555 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:05 compute-0 nova_compute[254092]: 2025-11-25 16:43:05.557 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:43:05 compute-0 podman[330810]: 2025-11-25 16:43:05.559119195 +0000 UTC m=+0.104673861 container cleanup 9866e0faa37b05b71d920daa8c16581e73b22e15d9b05aa6dec6046c5adb1da2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa3f25be-8960-4c87-987c-97d65f879d23, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 16:43:05 compute-0 nova_compute[254092]: 2025-11-25 16:43:05.559 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:05 compute-0 nova_compute[254092]: 2025-11-25 16:43:05.561 254096 INFO os_vif [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2f:d4:24,bridge_name='br-int',has_traffic_filtering=True,id=b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4,network=Network(aa3f25be-8960-4c87-987c-97d65f879d23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0cf5f9a-6e')
Nov 25 16:43:05 compute-0 systemd[1]: libpod-conmon-9866e0faa37b05b71d920daa8c16581e73b22e15d9b05aa6dec6046c5adb1da2.scope: Deactivated successfully.
Nov 25 16:43:05 compute-0 kernel: tap4fe8c3a9-70: entered promiscuous mode
Nov 25 16:43:05 compute-0 NetworkManager[48891]: <info>  [1764088985.5971] manager: (tap4fe8c3a9-70): new Tun device (/org/freedesktop/NetworkManager/Devices/295)
Nov 25 16:43:05 compute-0 systemd-udevd[330772]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:43:05 compute-0 ovn_controller[153477]: 2025-11-25T16:43:05Z|00685|binding|INFO|Claiming lport 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 for this chassis.
Nov 25 16:43:05 compute-0 ovn_controller[153477]: 2025-11-25T16:43:05Z|00686|binding|INFO|4fe8c3a9-70ba-4a51-8bc1-1505a49fe349: Claiming fa:16:3e:ff:a0:1c 10.100.0.11
Nov 25 16:43:05 compute-0 nova_compute[254092]: 2025-11-25 16:43:05.602 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:05 compute-0 nova_compute[254092]: 2025-11-25 16:43:05.609 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:05 compute-0 NetworkManager[48891]: <info>  [1764088985.6119] device (tap4fe8c3a9-70): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:43:05 compute-0 NetworkManager[48891]: <info>  [1764088985.6129] device (tap4fe8c3a9-70): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:43:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.616 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:a0:1c 10.100.0.11'], port_security=['fa:16:3e:ff:a0:1c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '01f96314-1fbe-4eee-a4ed-db7f448a5320', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4d15f5aabd3491da5314b126a20225a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f659b7e6-4ca9-46df-a38f-d29c92b67f9f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa1f8c26-9163-4397-beb3-8395c780a12b, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:43:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1731: 321 pgs: 321 active+clean; 134 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 78 op/s
Nov 25 16:43:05 compute-0 systemd-machined[216343]: New machine qemu-85-instance-00000048.
Nov 25 16:43:05 compute-0 podman[330867]: 2025-11-25 16:43:05.6466827 +0000 UTC m=+0.059529306 container remove 9866e0faa37b05b71d920daa8c16581e73b22e15d9b05aa6dec6046c5adb1da2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa3f25be-8960-4c87-987c-97d65f879d23, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 16:43:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.654 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d94a960c-9875-4f79-b312-af39dcf9ff72]: (4, ('Tue Nov 25 04:43:05 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-aa3f25be-8960-4c87-987c-97d65f879d23 (9866e0faa37b05b71d920daa8c16581e73b22e15d9b05aa6dec6046c5adb1da2)\n9866e0faa37b05b71d920daa8c16581e73b22e15d9b05aa6dec6046c5adb1da2\nTue Nov 25 04:43:05 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-aa3f25be-8960-4c87-987c-97d65f879d23 (9866e0faa37b05b71d920daa8c16581e73b22e15d9b05aa6dec6046c5adb1da2)\n9866e0faa37b05b71d920daa8c16581e73b22e15d9b05aa6dec6046c5adb1da2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.655 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[eaae98ac-794a-49fd-ad9e-cc70869c9d55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.656 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa3f25be-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:05 compute-0 nova_compute[254092]: 2025-11-25 16:43:05.659 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:05 compute-0 systemd[1]: Started Virtual Machine qemu-85-instance-00000048.
Nov 25 16:43:05 compute-0 kernel: tapaa3f25be-80: left promiscuous mode
Nov 25 16:43:05 compute-0 nova_compute[254092]: 2025-11-25 16:43:05.687 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:05 compute-0 nova_compute[254092]: 2025-11-25 16:43:05.694 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.692 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[da1e742c-62fa-4066-8559-2bf24eda5135]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:05 compute-0 ovn_controller[153477]: 2025-11-25T16:43:05Z|00687|binding|INFO|Setting lport 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 ovn-installed in OVS
Nov 25 16:43:05 compute-0 ovn_controller[153477]: 2025-11-25T16:43:05Z|00688|binding|INFO|Setting lport 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 up in Southbound
Nov 25 16:43:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.708 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d47df880-a1eb-4948-a86a-51b4dbc24489]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.710 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2835f68b-a7ca-4246-8bba-5c08fb66797f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:05 compute-0 nova_compute[254092]: 2025-11-25 16:43:05.711 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.726 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[371b99b1-4347-44cc-b3c6-36298267ebdd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543414, 'reachable_time': 34653, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 330907, 'error': None, 'target': 'ovnmeta-aa3f25be-8960-4c87-987c-97d65f879d23', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:05 compute-0 systemd[1]: run-netns-ovnmeta\x2daa3f25be\x2d8960\x2d4c87\x2d987c\x2d97d65f879d23.mount: Deactivated successfully.
Nov 25 16:43:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.728 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aa3f25be-8960-4c87-987c-97d65f879d23 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:43:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.728 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[c9d13116-bf33-4626-871f-156b0453e58e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.729 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 in datapath 50ea1716-9d0b-409f-b648-8d2ad5a30525 unbound from our chassis
Nov 25 16:43:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.730 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 50ea1716-9d0b-409f-b648-8d2ad5a30525
Nov 25 16:43:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.742 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bca06b3e-3685-4216-9d8e-25b435efa349]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.743 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap50ea1716-91 in ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:43:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.745 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap50ea1716-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:43:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.745 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[16fd92f0-c505-4003-9f84-11fd69d11084]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.746 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4fe6cec5-ec74-4cd1-8f59-ff492c7905ec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.758 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[b6101ce1-44ee-4af6-a9c9-b0d04e90c21f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.789 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0aa529f2-944d-4bae-9d65-1a562ae1b365]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.829 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[130e064a-652b-4bbc-b77d-c77ff5f14f8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.834 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8801791a-e4b1-46a0-b410-3165a77d871e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:05 compute-0 NetworkManager[48891]: <info>  [1764088985.8363] manager: (tap50ea1716-90): new Veth device (/org/freedesktop/NetworkManager/Devices/296)
Nov 25 16:43:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:43:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.870 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e2fafea6-a74a-4cab-86aa-02a347982c86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.873 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[45e4f5f3-a40f-457e-be2e-31ac9f110d04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:05 compute-0 NetworkManager[48891]: <info>  [1764088985.9024] device (tap50ea1716-90): carrier: link connected
Nov 25 16:43:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.906 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[7fadef4a-6928-484b-b79a-aec87b059537]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:05 compute-0 nova_compute[254092]: 2025-11-25 16:43:05.909 254096 DEBUG nova.network.neutron [req-f5c2a098-db99-4e9a-aeef-2c5a58837dab req-0b4fc2cf-d764-40c8-99ed-6ba504817cef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Updated VIF entry in instance network info cache for port 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:43:05 compute-0 nova_compute[254092]: 2025-11-25 16:43:05.910 254096 DEBUG nova.network.neutron [req-f5c2a098-db99-4e9a-aeef-2c5a58837dab req-0b4fc2cf-d764-40c8-99ed-6ba504817cef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Updating instance_info_cache with network_info: [{"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:43:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.923 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[02280116-6204-4892-96f1-abd3d1ee4c85]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap50ea1716-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:99:3a:54'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 201], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 544349, 'reachable_time': 38505, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 330934, 'error': None, 'target': 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:05 compute-0 nova_compute[254092]: 2025-11-25 16:43:05.925 254096 DEBUG oslo_concurrency.lockutils [req-f5c2a098-db99-4e9a-aeef-2c5a58837dab req-0b4fc2cf-d764-40c8-99ed-6ba504817cef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-01f96314-1fbe-4eee-a4ed-db7f448a5320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:43:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.947 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c753dd24-07e1-48d1-8f8f-012ca1070a10]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe99:3a54'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 544349, 'tstamp': 544349}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 330935, 'error': None, 'target': 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.963 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f6ec3727-c40b-45cb-9c89-739bc9d34a67]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap50ea1716-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:99:3a:54'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 201], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 544349, 'reachable_time': 38505, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 330936, 'error': None, 'target': 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:05 compute-0 nova_compute[254092]: 2025-11-25 16:43:05.992 254096 INFO nova.virt.libvirt.driver [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Deleting instance files /var/lib/nova/instances/ed4031e8-a918-4816-b9b8-b1134a086f8b_del
Nov 25 16:43:05 compute-0 nova_compute[254092]: 2025-11-25 16:43:05.993 254096 INFO nova.virt.libvirt.driver [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Deletion of /var/lib/nova/instances/ed4031e8-a918-4816-b9b8-b1134a086f8b_del complete
Nov 25 16:43:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:05.998 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[267c55f4-cb56-46e7-8c79-f3400ebbcf73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:06.067 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6873eb81-730f-40ad-8390-3ea939d44257]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:06.069 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50ea1716-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:06.069 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:43:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:06.069 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap50ea1716-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:06 compute-0 NetworkManager[48891]: <info>  [1764088986.0727] manager: (tap50ea1716-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/297)
Nov 25 16:43:06 compute-0 kernel: tap50ea1716-90: entered promiscuous mode
Nov 25 16:43:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:06.075 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap50ea1716-90, col_values=(('external_ids', {'iface-id': '1ec3401d-acb9-4d50-bfc2-f0a1127f4745'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:06 compute-0 ovn_controller[153477]: 2025-11-25T16:43:06Z|00689|binding|INFO|Releasing lport 1ec3401d-acb9-4d50-bfc2-f0a1127f4745 from this chassis (sb_readonly=0)
Nov 25 16:43:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:06.095 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/50ea1716-9d0b-409f-b648-8d2ad5a30525.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/50ea1716-9d0b-409f-b648-8d2ad5a30525.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:43:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:06.096 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a5832ad8-c1bf-4fa8-bd23-307e9c6c9edc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:06.097 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:43:06 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:43:06 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:43:06 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-50ea1716-9d0b-409f-b648-8d2ad5a30525
Nov 25 16:43:06 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:43:06 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:43:06 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:43:06 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/50ea1716-9d0b-409f-b648-8d2ad5a30525.pid.haproxy
Nov 25 16:43:06 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:43:06 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:43:06 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:43:06 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:43:06 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:43:06 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:43:06 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:43:06 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:43:06 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:43:06 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:43:06 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:43:06 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:43:06 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:43:06 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:43:06 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:43:06 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:43:06 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:43:06 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:43:06 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:43:06 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:43:06 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 50ea1716-9d0b-409f-b648-8d2ad5a30525
Nov 25 16:43:06 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:43:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:06.098 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'env', 'PROCESS_TAG=haproxy-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/50ea1716-9d0b-409f-b648-8d2ad5a30525.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:43:06 compute-0 nova_compute[254092]: 2025-11-25 16:43:06.119 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:06 compute-0 nova_compute[254092]: 2025-11-25 16:43:06.173 254096 INFO nova.compute.manager [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Took 0.88 seconds to destroy the instance on the hypervisor.
Nov 25 16:43:06 compute-0 nova_compute[254092]: 2025-11-25 16:43:06.174 254096 DEBUG oslo.service.loopingcall [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:43:06 compute-0 nova_compute[254092]: 2025-11-25 16:43:06.174 254096 DEBUG nova.compute.manager [-] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:43:06 compute-0 nova_compute[254092]: 2025-11-25 16:43:06.174 254096 DEBUG nova.network.neutron [-] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:43:06 compute-0 nova_compute[254092]: 2025-11-25 16:43:06.445 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088986.4445972, 01f96314-1fbe-4eee-a4ed-db7f448a5320 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:43:06 compute-0 nova_compute[254092]: 2025-11-25 16:43:06.446 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] VM Started (Lifecycle Event)
Nov 25 16:43:06 compute-0 nova_compute[254092]: 2025-11-25 16:43:06.467 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:43:06 compute-0 nova_compute[254092]: 2025-11-25 16:43:06.471 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088986.4447162, 01f96314-1fbe-4eee-a4ed-db7f448a5320 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:43:06 compute-0 nova_compute[254092]: 2025-11-25 16:43:06.472 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] VM Paused (Lifecycle Event)
Nov 25 16:43:06 compute-0 nova_compute[254092]: 2025-11-25 16:43:06.488 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:43:06 compute-0 nova_compute[254092]: 2025-11-25 16:43:06.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:43:06 compute-0 nova_compute[254092]: 2025-11-25 16:43:06.496 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:43:06 compute-0 podman[331007]: 2025-11-25 16:43:06.503783243 +0000 UTC m=+0.097617520 container create 687daea182501f598729080157745bbb6a8893f57172d27904e3db78d9125e62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 16:43:06 compute-0 nova_compute[254092]: 2025-11-25 16:43:06.514 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:43:06 compute-0 podman[331007]: 2025-11-25 16:43:06.432758676 +0000 UTC m=+0.026592973 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:43:06 compute-0 systemd[1]: Started libpod-conmon-687daea182501f598729080157745bbb6a8893f57172d27904e3db78d9125e62.scope.
Nov 25 16:43:06 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:43:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17c59fe121ea6f90e0e0f73a558d007bdcbf7807f621ad6454cfd7572f9bd582/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:43:06 compute-0 podman[331007]: 2025-11-25 16:43:06.614281591 +0000 UTC m=+0.208115898 container init 687daea182501f598729080157745bbb6a8893f57172d27904e3db78d9125e62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 16:43:06 compute-0 podman[331007]: 2025-11-25 16:43:06.619993366 +0000 UTC m=+0.213827643 container start 687daea182501f598729080157745bbb6a8893f57172d27904e3db78d9125e62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 25 16:43:06 compute-0 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[331023]: [NOTICE]   (331028) : New worker (331030) forked
Nov 25 16:43:06 compute-0 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[331023]: [NOTICE]   (331028) : Loading success.
Nov 25 16:43:06 compute-0 ceph-mon[74985]: pgmap v1731: 321 pgs: 321 active+clean; 134 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 78 op/s
Nov 25 16:43:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1732: 321 pgs: 321 active+clean; 117 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Nov 25 16:43:07 compute-0 nova_compute[254092]: 2025-11-25 16:43:07.909 254096 DEBUG nova.compute.manager [req-374a17ec-2749-447d-a1ae-f0ccad703d8d req-194feb82-d1ab-4714-a035-cdef19f7a790 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Received event network-vif-unplugged-b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:43:07 compute-0 nova_compute[254092]: 2025-11-25 16:43:07.909 254096 DEBUG oslo_concurrency.lockutils [req-374a17ec-2749-447d-a1ae-f0ccad703d8d req-194feb82-d1ab-4714-a035-cdef19f7a790 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "ed4031e8-a918-4816-b9b8-b1134a086f8b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:07 compute-0 nova_compute[254092]: 2025-11-25 16:43:07.910 254096 DEBUG oslo_concurrency.lockutils [req-374a17ec-2749-447d-a1ae-f0ccad703d8d req-194feb82-d1ab-4714-a035-cdef19f7a790 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "ed4031e8-a918-4816-b9b8-b1134a086f8b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:07 compute-0 nova_compute[254092]: 2025-11-25 16:43:07.910 254096 DEBUG oslo_concurrency.lockutils [req-374a17ec-2749-447d-a1ae-f0ccad703d8d req-194feb82-d1ab-4714-a035-cdef19f7a790 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "ed4031e8-a918-4816-b9b8-b1134a086f8b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:07 compute-0 nova_compute[254092]: 2025-11-25 16:43:07.910 254096 DEBUG nova.compute.manager [req-374a17ec-2749-447d-a1ae-f0ccad703d8d req-194feb82-d1ab-4714-a035-cdef19f7a790 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] No waiting events found dispatching network-vif-unplugged-b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:43:07 compute-0 nova_compute[254092]: 2025-11-25 16:43:07.910 254096 DEBUG nova.compute.manager [req-374a17ec-2749-447d-a1ae-f0ccad703d8d req-194feb82-d1ab-4714-a035-cdef19f7a790 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Received event network-vif-unplugged-b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:43:08 compute-0 nova_compute[254092]: 2025-11-25 16:43:08.042 254096 DEBUG nova.network.neutron [-] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:43:08 compute-0 nova_compute[254092]: 2025-11-25 16:43:08.059 254096 INFO nova.compute.manager [-] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Took 1.89 seconds to deallocate network for instance.
Nov 25 16:43:08 compute-0 nova_compute[254092]: 2025-11-25 16:43:08.100 254096 DEBUG oslo_concurrency.lockutils [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:08 compute-0 nova_compute[254092]: 2025-11-25 16:43:08.101 254096 DEBUG oslo_concurrency.lockutils [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:08 compute-0 nova_compute[254092]: 2025-11-25 16:43:08.168 254096 DEBUG oslo_concurrency.processutils [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:43:08 compute-0 nova_compute[254092]: 2025-11-25 16:43:08.269 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:08 compute-0 nova_compute[254092]: 2025-11-25 16:43:08.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:43:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:43:08 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2563667184' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:43:08 compute-0 nova_compute[254092]: 2025-11-25 16:43:08.608 254096 DEBUG oslo_concurrency.processutils [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:43:08 compute-0 nova_compute[254092]: 2025-11-25 16:43:08.616 254096 DEBUG nova.compute.provider_tree [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:43:08 compute-0 nova_compute[254092]: 2025-11-25 16:43:08.639 254096 DEBUG nova.scheduler.client.report [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:43:08 compute-0 nova_compute[254092]: 2025-11-25 16:43:08.674 254096 DEBUG oslo_concurrency.lockutils [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:08 compute-0 nova_compute[254092]: 2025-11-25 16:43:08.727 254096 INFO nova.scheduler.client.report [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Deleted allocations for instance ed4031e8-a918-4816-b9b8-b1134a086f8b
Nov 25 16:43:08 compute-0 nova_compute[254092]: 2025-11-25 16:43:08.793 254096 DEBUG oslo_concurrency.lockutils [None req-0998a5cf-74e9-471c-b08a-fb3f190cd983 e3c9e536e4984598a1b18e79b453cbde 20d786999bc74073bae1fde6aede7fcd - - default default] Lock "ed4031e8-a918-4816-b9b8-b1134a086f8b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.507s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:08 compute-0 ceph-mon[74985]: pgmap v1732: 321 pgs: 321 active+clean; 117 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Nov 25 16:43:08 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2563667184' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:43:09 compute-0 nova_compute[254092]: 2025-11-25 16:43:09.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:43:09 compute-0 nova_compute[254092]: 2025-11-25 16:43:09.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:43:09 compute-0 nova_compute[254092]: 2025-11-25 16:43:09.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 16:43:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1733: 321 pgs: 321 active+clean; 117 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Nov 25 16:43:10 compute-0 nova_compute[254092]: 2025-11-25 16:43:10.016 254096 DEBUG nova.compute.manager [req-5f5f9c46-3d63-4bb6-b701-8910e715a92c req-05f58b9b-97bb-48df-9bbd-a4ba95283111 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Received event network-vif-deleted-b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:43:10 compute-0 nova_compute[254092]: 2025-11-25 16:43:10.016 254096 DEBUG nova.compute.manager [req-5f5f9c46-3d63-4bb6-b701-8910e715a92c req-05f58b9b-97bb-48df-9bbd-a4ba95283111 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Received event network-vif-plugged-b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:43:10 compute-0 nova_compute[254092]: 2025-11-25 16:43:10.017 254096 DEBUG oslo_concurrency.lockutils [req-5f5f9c46-3d63-4bb6-b701-8910e715a92c req-05f58b9b-97bb-48df-9bbd-a4ba95283111 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "ed4031e8-a918-4816-b9b8-b1134a086f8b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:10 compute-0 nova_compute[254092]: 2025-11-25 16:43:10.017 254096 DEBUG oslo_concurrency.lockutils [req-5f5f9c46-3d63-4bb6-b701-8910e715a92c req-05f58b9b-97bb-48df-9bbd-a4ba95283111 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "ed4031e8-a918-4816-b9b8-b1134a086f8b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:10 compute-0 nova_compute[254092]: 2025-11-25 16:43:10.017 254096 DEBUG oslo_concurrency.lockutils [req-5f5f9c46-3d63-4bb6-b701-8910e715a92c req-05f58b9b-97bb-48df-9bbd-a4ba95283111 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "ed4031e8-a918-4816-b9b8-b1134a086f8b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:10 compute-0 nova_compute[254092]: 2025-11-25 16:43:10.017 254096 DEBUG nova.compute.manager [req-5f5f9c46-3d63-4bb6-b701-8910e715a92c req-05f58b9b-97bb-48df-9bbd-a4ba95283111 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] No waiting events found dispatching network-vif-plugged-b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:43:10 compute-0 nova_compute[254092]: 2025-11-25 16:43:10.017 254096 WARNING nova.compute.manager [req-5f5f9c46-3d63-4bb6-b701-8910e715a92c req-05f58b9b-97bb-48df-9bbd-a4ba95283111 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Received unexpected event network-vif-plugged-b0cf5f9a-6e2f-4cae-87e9-f7fc22986eb4 for instance with vm_state deleted and task_state None.
Nov 25 16:43:10 compute-0 nova_compute[254092]: 2025-11-25 16:43:10.018 254096 DEBUG nova.compute.manager [req-5f5f9c46-3d63-4bb6-b701-8910e715a92c req-05f58b9b-97bb-48df-9bbd-a4ba95283111 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:43:10 compute-0 nova_compute[254092]: 2025-11-25 16:43:10.018 254096 DEBUG oslo_concurrency.lockutils [req-5f5f9c46-3d63-4bb6-b701-8910e715a92c req-05f58b9b-97bb-48df-9bbd-a4ba95283111 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:10 compute-0 nova_compute[254092]: 2025-11-25 16:43:10.018 254096 DEBUG oslo_concurrency.lockutils [req-5f5f9c46-3d63-4bb6-b701-8910e715a92c req-05f58b9b-97bb-48df-9bbd-a4ba95283111 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:10 compute-0 nova_compute[254092]: 2025-11-25 16:43:10.018 254096 DEBUG oslo_concurrency.lockutils [req-5f5f9c46-3d63-4bb6-b701-8910e715a92c req-05f58b9b-97bb-48df-9bbd-a4ba95283111 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:10 compute-0 nova_compute[254092]: 2025-11-25 16:43:10.018 254096 DEBUG nova.compute.manager [req-5f5f9c46-3d63-4bb6-b701-8910e715a92c req-05f58b9b-97bb-48df-9bbd-a4ba95283111 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Processing event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:43:10 compute-0 nova_compute[254092]: 2025-11-25 16:43:10.019 254096 DEBUG nova.compute.manager [req-5f5f9c46-3d63-4bb6-b701-8910e715a92c req-05f58b9b-97bb-48df-9bbd-a4ba95283111 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:43:10 compute-0 nova_compute[254092]: 2025-11-25 16:43:10.019 254096 DEBUG oslo_concurrency.lockutils [req-5f5f9c46-3d63-4bb6-b701-8910e715a92c req-05f58b9b-97bb-48df-9bbd-a4ba95283111 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:10 compute-0 nova_compute[254092]: 2025-11-25 16:43:10.019 254096 DEBUG oslo_concurrency.lockutils [req-5f5f9c46-3d63-4bb6-b701-8910e715a92c req-05f58b9b-97bb-48df-9bbd-a4ba95283111 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:10 compute-0 nova_compute[254092]: 2025-11-25 16:43:10.019 254096 DEBUG oslo_concurrency.lockutils [req-5f5f9c46-3d63-4bb6-b701-8910e715a92c req-05f58b9b-97bb-48df-9bbd-a4ba95283111 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:10 compute-0 nova_compute[254092]: 2025-11-25 16:43:10.020 254096 DEBUG nova.compute.manager [req-5f5f9c46-3d63-4bb6-b701-8910e715a92c req-05f58b9b-97bb-48df-9bbd-a4ba95283111 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] No waiting events found dispatching network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:43:10 compute-0 nova_compute[254092]: 2025-11-25 16:43:10.020 254096 WARNING nova.compute.manager [req-5f5f9c46-3d63-4bb6-b701-8910e715a92c req-05f58b9b-97bb-48df-9bbd-a4ba95283111 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received unexpected event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 for instance with vm_state building and task_state spawning.
Nov 25 16:43:10 compute-0 nova_compute[254092]: 2025-11-25 16:43:10.021 254096 DEBUG nova.compute.manager [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:43:10 compute-0 nova_compute[254092]: 2025-11-25 16:43:10.026 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764088990.0253155, 01f96314-1fbe-4eee-a4ed-db7f448a5320 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:43:10 compute-0 nova_compute[254092]: 2025-11-25 16:43:10.026 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] VM Resumed (Lifecycle Event)
Nov 25 16:43:10 compute-0 nova_compute[254092]: 2025-11-25 16:43:10.028 254096 DEBUG nova.virt.libvirt.driver [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:43:10 compute-0 nova_compute[254092]: 2025-11-25 16:43:10.032 254096 INFO nova.virt.libvirt.driver [-] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Instance spawned successfully.
Nov 25 16:43:10 compute-0 nova_compute[254092]: 2025-11-25 16:43:10.032 254096 DEBUG nova.virt.libvirt.driver [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:43:10 compute-0 nova_compute[254092]: 2025-11-25 16:43:10.045 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:43:10 compute-0 nova_compute[254092]: 2025-11-25 16:43:10.052 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:43:10 compute-0 nova_compute[254092]: 2025-11-25 16:43:10.055 254096 DEBUG nova.virt.libvirt.driver [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:43:10 compute-0 nova_compute[254092]: 2025-11-25 16:43:10.055 254096 DEBUG nova.virt.libvirt.driver [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:43:10 compute-0 nova_compute[254092]: 2025-11-25 16:43:10.056 254096 DEBUG nova.virt.libvirt.driver [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:43:10 compute-0 nova_compute[254092]: 2025-11-25 16:43:10.056 254096 DEBUG nova.virt.libvirt.driver [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:43:10 compute-0 nova_compute[254092]: 2025-11-25 16:43:10.057 254096 DEBUG nova.virt.libvirt.driver [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:43:10 compute-0 nova_compute[254092]: 2025-11-25 16:43:10.057 254096 DEBUG nova.virt.libvirt.driver [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:43:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:43:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:43:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:43:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:43:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:43:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:43:10 compute-0 nova_compute[254092]: 2025-11-25 16:43:10.079 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:43:10 compute-0 nova_compute[254092]: 2025-11-25 16:43:10.113 254096 INFO nova.compute.manager [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Took 11.02 seconds to spawn the instance on the hypervisor.
Nov 25 16:43:10 compute-0 nova_compute[254092]: 2025-11-25 16:43:10.113 254096 DEBUG nova.compute.manager [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:43:10 compute-0 nova_compute[254092]: 2025-11-25 16:43:10.175 254096 INFO nova.compute.manager [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Took 11.97 seconds to build instance.
Nov 25 16:43:10 compute-0 nova_compute[254092]: 2025-11-25 16:43:10.187 254096 DEBUG oslo_concurrency.lockutils [None req-a61b47c7-3f9d-4332-8960-bbf48876ef38 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.045s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:10 compute-0 nova_compute[254092]: 2025-11-25 16:43:10.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:43:10 compute-0 nova_compute[254092]: 2025-11-25 16:43:10.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:43:10 compute-0 nova_compute[254092]: 2025-11-25 16:43:10.519 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:10 compute-0 nova_compute[254092]: 2025-11-25 16:43:10.519 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:10 compute-0 nova_compute[254092]: 2025-11-25 16:43:10.520 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:10 compute-0 nova_compute[254092]: 2025-11-25 16:43:10.520 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 16:43:10 compute-0 nova_compute[254092]: 2025-11-25 16:43:10.520 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:43:10 compute-0 nova_compute[254092]: 2025-11-25 16:43:10.557 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:10 compute-0 podman[331063]: 2025-11-25 16:43:10.650659905 +0000 UTC m=+0.054178791 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Nov 25 16:43:10 compute-0 podman[331062]: 2025-11-25 16:43:10.65193893 +0000 UTC m=+0.059463604 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251118)
Nov 25 16:43:10 compute-0 podman[331064]: 2025-11-25 16:43:10.694947277 +0000 UTC m=+0.092788019 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 16:43:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:43:10 compute-0 ceph-mon[74985]: pgmap v1733: 321 pgs: 321 active+clean; 117 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Nov 25 16:43:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:43:10 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1978490605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:43:11 compute-0 nova_compute[254092]: 2025-11-25 16:43:11.005 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:43:11 compute-0 nova_compute[254092]: 2025-11-25 16:43:11.066 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000048 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:43:11 compute-0 nova_compute[254092]: 2025-11-25 16:43:11.066 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000048 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:43:11 compute-0 nova_compute[254092]: 2025-11-25 16:43:11.207 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:43:11 compute-0 nova_compute[254092]: 2025-11-25 16:43:11.209 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4026MB free_disk=59.95417404174805GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 16:43:11 compute-0 nova_compute[254092]: 2025-11-25 16:43:11.209 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:11 compute-0 nova_compute[254092]: 2025-11-25 16:43:11.209 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:11 compute-0 nova_compute[254092]: 2025-11-25 16:43:11.275 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 01f96314-1fbe-4eee-a4ed-db7f448a5320 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:43:11 compute-0 nova_compute[254092]: 2025-11-25 16:43:11.276 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 16:43:11 compute-0 nova_compute[254092]: 2025-11-25 16:43:11.276 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 16:43:11 compute-0 nova_compute[254092]: 2025-11-25 16:43:11.307 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:43:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1734: 321 pgs: 321 active+clean; 88 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 140 op/s
Nov 25 16:43:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:43:11 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1446431314' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:43:11 compute-0 nova_compute[254092]: 2025-11-25 16:43:11.712 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:43:11 compute-0 nova_compute[254092]: 2025-11-25 16:43:11.718 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:43:11 compute-0 nova_compute[254092]: 2025-11-25 16:43:11.746 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:43:11 compute-0 nova_compute[254092]: 2025-11-25 16:43:11.774 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 16:43:11 compute-0 nova_compute[254092]: 2025-11-25 16:43:11.774 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:11 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1978490605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:43:11 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1446431314' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:43:12 compute-0 nova_compute[254092]: 2025-11-25 16:43:12.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:43:12 compute-0 nova_compute[254092]: 2025-11-25 16:43:12.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 16:43:12 compute-0 NetworkManager[48891]: <info>  [1764088992.6452] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/298)
Nov 25 16:43:12 compute-0 nova_compute[254092]: 2025-11-25 16:43:12.642 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:12 compute-0 NetworkManager[48891]: <info>  [1764088992.6469] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/299)
Nov 25 16:43:12 compute-0 nova_compute[254092]: 2025-11-25 16:43:12.708 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:12 compute-0 ovn_controller[153477]: 2025-11-25T16:43:12Z|00690|binding|INFO|Releasing lport 1ec3401d-acb9-4d50-bfc2-f0a1127f4745 from this chassis (sb_readonly=0)
Nov 25 16:43:12 compute-0 nova_compute[254092]: 2025-11-25 16:43:12.720 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:12 compute-0 nova_compute[254092]: 2025-11-25 16:43:12.749 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:12 compute-0 ceph-mon[74985]: pgmap v1734: 321 pgs: 321 active+clean; 88 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 140 op/s
Nov 25 16:43:13 compute-0 nova_compute[254092]: 2025-11-25 16:43:13.270 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:13 compute-0 nova_compute[254092]: 2025-11-25 16:43:13.305 254096 DEBUG nova.compute.manager [req-9e112b7e-2321-496c-bc3e-6ec414c831bb req-1e6644f7-3615-4bf0-9485-d8d512e93a8b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received event network-changed-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:43:13 compute-0 nova_compute[254092]: 2025-11-25 16:43:13.305 254096 DEBUG nova.compute.manager [req-9e112b7e-2321-496c-bc3e-6ec414c831bb req-1e6644f7-3615-4bf0-9485-d8d512e93a8b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Refreshing instance network info cache due to event network-changed-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:43:13 compute-0 nova_compute[254092]: 2025-11-25 16:43:13.305 254096 DEBUG oslo_concurrency.lockutils [req-9e112b7e-2321-496c-bc3e-6ec414c831bb req-1e6644f7-3615-4bf0-9485-d8d512e93a8b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-01f96314-1fbe-4eee-a4ed-db7f448a5320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:43:13 compute-0 nova_compute[254092]: 2025-11-25 16:43:13.305 254096 DEBUG oslo_concurrency.lockutils [req-9e112b7e-2321-496c-bc3e-6ec414c831bb req-1e6644f7-3615-4bf0-9485-d8d512e93a8b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-01f96314-1fbe-4eee-a4ed-db7f448a5320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:43:13 compute-0 nova_compute[254092]: 2025-11-25 16:43:13.306 254096 DEBUG nova.network.neutron [req-9e112b7e-2321-496c-bc3e-6ec414c831bb req-1e6644f7-3615-4bf0-9485-d8d512e93a8b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Refreshing network info cache for port 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:43:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:13.621 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:13.621 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:13.622 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1735: 321 pgs: 321 active+clean; 88 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 16 KiB/s wr, 103 op/s
Nov 25 16:43:14 compute-0 nova_compute[254092]: 2025-11-25 16:43:14.504 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:43:14 compute-0 nova_compute[254092]: 2025-11-25 16:43:14.505 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:43:14 compute-0 nova_compute[254092]: 2025-11-25 16:43:14.505 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 16:43:14 compute-0 nova_compute[254092]: 2025-11-25 16:43:14.526 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 16:43:14 compute-0 ovn_controller[153477]: 2025-11-25T16:43:14Z|00691|binding|INFO|Releasing lport 1ec3401d-acb9-4d50-bfc2-f0a1127f4745 from this chassis (sb_readonly=0)
Nov 25 16:43:14 compute-0 nova_compute[254092]: 2025-11-25 16:43:14.962 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:14 compute-0 ceph-mon[74985]: pgmap v1735: 321 pgs: 321 active+clean; 88 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 16 KiB/s wr, 103 op/s
Nov 25 16:43:15 compute-0 nova_compute[254092]: 2025-11-25 16:43:15.180 254096 DEBUG nova.network.neutron [req-9e112b7e-2321-496c-bc3e-6ec414c831bb req-1e6644f7-3615-4bf0-9485-d8d512e93a8b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Updated VIF entry in instance network info cache for port 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:43:15 compute-0 nova_compute[254092]: 2025-11-25 16:43:15.181 254096 DEBUG nova.network.neutron [req-9e112b7e-2321-496c-bc3e-6ec414c831bb req-1e6644f7-3615-4bf0-9485-d8d512e93a8b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Updating instance_info_cache with network_info: [{"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:43:15 compute-0 nova_compute[254092]: 2025-11-25 16:43:15.198 254096 DEBUG oslo_concurrency.lockutils [req-9e112b7e-2321-496c-bc3e-6ec414c831bb req-1e6644f7-3615-4bf0-9485-d8d512e93a8b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-01f96314-1fbe-4eee-a4ed-db7f448a5320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:43:15 compute-0 nova_compute[254092]: 2025-11-25 16:43:15.487 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "ef028cf3-f8af-4112-9424-8a12fdda7690" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:15 compute-0 nova_compute[254092]: 2025-11-25 16:43:15.487 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "ef028cf3-f8af-4112-9424-8a12fdda7690" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:15 compute-0 nova_compute[254092]: 2025-11-25 16:43:15.516 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:43:15 compute-0 nova_compute[254092]: 2025-11-25 16:43:15.534 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "f5a259d0-4460-4335-aa4a-f874f93a7e93" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:15 compute-0 nova_compute[254092]: 2025-11-25 16:43:15.535 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "f5a259d0-4460-4335-aa4a-f874f93a7e93" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:15 compute-0 nova_compute[254092]: 2025-11-25 16:43:15.564 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:15 compute-0 nova_compute[254092]: 2025-11-25 16:43:15.567 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "69b3cbbb-9713-4f49-9e67-1f33a3ae2642" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:15 compute-0 nova_compute[254092]: 2025-11-25 16:43:15.567 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "69b3cbbb-9713-4f49-9e67-1f33a3ae2642" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:15 compute-0 nova_compute[254092]: 2025-11-25 16:43:15.569 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:43:15 compute-0 nova_compute[254092]: 2025-11-25 16:43:15.595 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:15 compute-0 nova_compute[254092]: 2025-11-25 16:43:15.595 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:15 compute-0 nova_compute[254092]: 2025-11-25 16:43:15.601 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:43:15 compute-0 nova_compute[254092]: 2025-11-25 16:43:15.601 254096 INFO nova.compute.claims [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:43:15 compute-0 nova_compute[254092]: 2025-11-25 16:43:15.642 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:43:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1736: 321 pgs: 321 active+clean; 88 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 16 KiB/s wr, 149 op/s
Nov 25 16:43:15 compute-0 nova_compute[254092]: 2025-11-25 16:43:15.715 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:15 compute-0 nova_compute[254092]: 2025-11-25 16:43:15.736 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:15 compute-0 nova_compute[254092]: 2025-11-25 16:43:15.800 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:43:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:43:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:43:16 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/406646157' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:43:16 compute-0 nova_compute[254092]: 2025-11-25 16:43:16.227 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:43:16 compute-0 nova_compute[254092]: 2025-11-25 16:43:16.232 254096 DEBUG nova.compute.provider_tree [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:43:16 compute-0 nova_compute[254092]: 2025-11-25 16:43:16.245 254096 DEBUG nova.scheduler.client.report [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:43:16 compute-0 nova_compute[254092]: 2025-11-25 16:43:16.267 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:16 compute-0 nova_compute[254092]: 2025-11-25 16:43:16.268 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:43:16 compute-0 nova_compute[254092]: 2025-11-25 16:43:16.270 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:16 compute-0 nova_compute[254092]: 2025-11-25 16:43:16.276 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:43:16 compute-0 nova_compute[254092]: 2025-11-25 16:43:16.276 254096 INFO nova.compute.claims [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:43:16 compute-0 nova_compute[254092]: 2025-11-25 16:43:16.365 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:43:16 compute-0 nova_compute[254092]: 2025-11-25 16:43:16.366 254096 DEBUG nova.network.neutron [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:43:16 compute-0 nova_compute[254092]: 2025-11-25 16:43:16.403 254096 INFO nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:43:16 compute-0 nova_compute[254092]: 2025-11-25 16:43:16.424 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:43:16 compute-0 nova_compute[254092]: 2025-11-25 16:43:16.465 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:43:16 compute-0 nova_compute[254092]: 2025-11-25 16:43:16.504 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:43:16 compute-0 nova_compute[254092]: 2025-11-25 16:43:16.538 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:43:16 compute-0 nova_compute[254092]: 2025-11-25 16:43:16.558 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:43:16 compute-0 nova_compute[254092]: 2025-11-25 16:43:16.560 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:43:16 compute-0 nova_compute[254092]: 2025-11-25 16:43:16.560 254096 INFO nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Creating image(s)
Nov 25 16:43:16 compute-0 nova_compute[254092]: 2025-11-25 16:43:16.587 254096 DEBUG nova.storage.rbd_utils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] rbd image ef028cf3-f8af-4112-9424-8a12fdda7690_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:43:16 compute-0 nova_compute[254092]: 2025-11-25 16:43:16.612 254096 DEBUG nova.storage.rbd_utils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] rbd image ef028cf3-f8af-4112-9424-8a12fdda7690_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:43:16 compute-0 nova_compute[254092]: 2025-11-25 16:43:16.649 254096 DEBUG nova.storage.rbd_utils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] rbd image ef028cf3-f8af-4112-9424-8a12fdda7690_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:43:16 compute-0 nova_compute[254092]: 2025-11-25 16:43:16.653 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:43:16 compute-0 nova_compute[254092]: 2025-11-25 16:43:16.725 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:43:16 compute-0 nova_compute[254092]: 2025-11-25 16:43:16.726 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:16 compute-0 nova_compute[254092]: 2025-11-25 16:43:16.727 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:16 compute-0 nova_compute[254092]: 2025-11-25 16:43:16.727 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:16 compute-0 nova_compute[254092]: 2025-11-25 16:43:16.746 254096 DEBUG nova.storage.rbd_utils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] rbd image ef028cf3-f8af-4112-9424-8a12fdda7690_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:43:16 compute-0 nova_compute[254092]: 2025-11-25 16:43:16.750 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 ef028cf3-f8af-4112-9424-8a12fdda7690_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:43:16 compute-0 nova_compute[254092]: 2025-11-25 16:43:16.886 254096 DEBUG nova.policy [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1a8a49326e3040eea57c8e1a61660f19', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b96c962def8e44a98e659bf2a55a8dcc', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:43:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:43:16 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1685190118' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:43:16 compute-0 nova_compute[254092]: 2025-11-25 16:43:16.982 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:43:16 compute-0 nova_compute[254092]: 2025-11-25 16:43:16.989 254096 DEBUG nova.compute.provider_tree [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:43:17 compute-0 nova_compute[254092]: 2025-11-25 16:43:17.003 254096 DEBUG nova.scheduler.client.report [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:43:17 compute-0 ceph-mon[74985]: pgmap v1736: 321 pgs: 321 active+clean; 88 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 16 KiB/s wr, 149 op/s
Nov 25 16:43:17 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/406646157' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:43:17 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1685190118' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:43:17 compute-0 nova_compute[254092]: 2025-11-25 16:43:17.025 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:17 compute-0 nova_compute[254092]: 2025-11-25 16:43:17.026 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:43:17 compute-0 nova_compute[254092]: 2025-11-25 16:43:17.028 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.292s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:17 compute-0 nova_compute[254092]: 2025-11-25 16:43:17.036 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:43:17 compute-0 nova_compute[254092]: 2025-11-25 16:43:17.037 254096 INFO nova.compute.claims [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:43:17 compute-0 nova_compute[254092]: 2025-11-25 16:43:17.068 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 ef028cf3-f8af-4112-9424-8a12fdda7690_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.318s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:43:17 compute-0 nova_compute[254092]: 2025-11-25 16:43:17.130 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:43:17 compute-0 nova_compute[254092]: 2025-11-25 16:43:17.130 254096 DEBUG nova.network.neutron [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:43:17 compute-0 nova_compute[254092]: 2025-11-25 16:43:17.139 254096 DEBUG nova.storage.rbd_utils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] resizing rbd image ef028cf3-f8af-4112-9424-8a12fdda7690_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:43:17 compute-0 nova_compute[254092]: 2025-11-25 16:43:17.174 254096 INFO nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:43:17 compute-0 nova_compute[254092]: 2025-11-25 16:43:17.201 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:43:17 compute-0 nova_compute[254092]: 2025-11-25 16:43:17.252 254096 DEBUG nova.objects.instance [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lazy-loading 'migration_context' on Instance uuid ef028cf3-f8af-4112-9424-8a12fdda7690 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:43:17 compute-0 nova_compute[254092]: 2025-11-25 16:43:17.270 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:43:17 compute-0 nova_compute[254092]: 2025-11-25 16:43:17.271 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Ensure instance console log exists: /var/lib/nova/instances/ef028cf3-f8af-4112-9424-8a12fdda7690/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:43:17 compute-0 nova_compute[254092]: 2025-11-25 16:43:17.271 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:17 compute-0 nova_compute[254092]: 2025-11-25 16:43:17.272 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:17 compute-0 nova_compute[254092]: 2025-11-25 16:43:17.272 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:17 compute-0 nova_compute[254092]: 2025-11-25 16:43:17.297 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:43:17 compute-0 nova_compute[254092]: 2025-11-25 16:43:17.299 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:43:17 compute-0 nova_compute[254092]: 2025-11-25 16:43:17.299 254096 INFO nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Creating image(s)
Nov 25 16:43:17 compute-0 nova_compute[254092]: 2025-11-25 16:43:17.324 254096 DEBUG nova.storage.rbd_utils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] rbd image f5a259d0-4460-4335-aa4a-f874f93a7e93_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:43:17 compute-0 nova_compute[254092]: 2025-11-25 16:43:17.353 254096 DEBUG nova.storage.rbd_utils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] rbd image f5a259d0-4460-4335-aa4a-f874f93a7e93_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:43:17 compute-0 nova_compute[254092]: 2025-11-25 16:43:17.382 254096 DEBUG nova.storage.rbd_utils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] rbd image f5a259d0-4460-4335-aa4a-f874f93a7e93_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:43:17 compute-0 nova_compute[254092]: 2025-11-25 16:43:17.389 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:43:17 compute-0 nova_compute[254092]: 2025-11-25 16:43:17.445 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:43:17 compute-0 nova_compute[254092]: 2025-11-25 16:43:17.476 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:43:17 compute-0 nova_compute[254092]: 2025-11-25 16:43:17.477 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:17 compute-0 nova_compute[254092]: 2025-11-25 16:43:17.478 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:17 compute-0 nova_compute[254092]: 2025-11-25 16:43:17.478 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:17 compute-0 nova_compute[254092]: 2025-11-25 16:43:17.502 254096 DEBUG nova.storage.rbd_utils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] rbd image f5a259d0-4460-4335-aa4a-f874f93a7e93_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:43:17 compute-0 nova_compute[254092]: 2025-11-25 16:43:17.506 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 f5a259d0-4460-4335-aa4a-f874f93a7e93_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:43:17 compute-0 nova_compute[254092]: 2025-11-25 16:43:17.560 254096 DEBUG nova.network.neutron [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Successfully created port: 146f0586-22f7-43d7-9a96-06459ea85508 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:43:17 compute-0 nova_compute[254092]: 2025-11-25 16:43:17.622 254096 DEBUG nova.policy [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1a8a49326e3040eea57c8e1a61660f19', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b96c962def8e44a98e659bf2a55a8dcc', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:43:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1737: 321 pgs: 321 active+clean; 88 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 16 KiB/s wr, 122 op/s
Nov 25 16:43:17 compute-0 nova_compute[254092]: 2025-11-25 16:43:17.817 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 f5a259d0-4460-4335-aa4a-f874f93a7e93_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.311s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:43:17 compute-0 nova_compute[254092]: 2025-11-25 16:43:17.873 254096 DEBUG nova.storage.rbd_utils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] resizing rbd image f5a259d0-4460-4335-aa4a-f874f93a7e93_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:43:17 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:43:17 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3280267706' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:43:17 compute-0 nova_compute[254092]: 2025-11-25 16:43:17.903 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:43:17 compute-0 nova_compute[254092]: 2025-11-25 16:43:17.908 254096 DEBUG nova.compute.provider_tree [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:43:17 compute-0 nova_compute[254092]: 2025-11-25 16:43:17.921 254096 DEBUG nova.scheduler.client.report [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:43:17 compute-0 nova_compute[254092]: 2025-11-25 16:43:17.961 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.932s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:17 compute-0 nova_compute[254092]: 2025-11-25 16:43:17.962 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:43:17 compute-0 nova_compute[254092]: 2025-11-25 16:43:17.968 254096 DEBUG nova.objects.instance [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lazy-loading 'migration_context' on Instance uuid f5a259d0-4460-4335-aa4a-f874f93a7e93 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:43:17 compute-0 nova_compute[254092]: 2025-11-25 16:43:17.984 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:43:17 compute-0 nova_compute[254092]: 2025-11-25 16:43:17.984 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Ensure instance console log exists: /var/lib/nova/instances/f5a259d0-4460-4335-aa4a-f874f93a7e93/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:43:17 compute-0 nova_compute[254092]: 2025-11-25 16:43:17.985 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:17 compute-0 nova_compute[254092]: 2025-11-25 16:43:17.985 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:17 compute-0 nova_compute[254092]: 2025-11-25 16:43:17.985 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:18 compute-0 nova_compute[254092]: 2025-11-25 16:43:18.007 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:43:18 compute-0 nova_compute[254092]: 2025-11-25 16:43:18.007 254096 DEBUG nova.network.neutron [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:43:18 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3280267706' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:43:18 compute-0 nova_compute[254092]: 2025-11-25 16:43:18.023 254096 INFO nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:43:18 compute-0 nova_compute[254092]: 2025-11-25 16:43:18.038 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:43:18 compute-0 nova_compute[254092]: 2025-11-25 16:43:18.124 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:43:18 compute-0 nova_compute[254092]: 2025-11-25 16:43:18.126 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:43:18 compute-0 nova_compute[254092]: 2025-11-25 16:43:18.126 254096 INFO nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Creating image(s)
Nov 25 16:43:18 compute-0 nova_compute[254092]: 2025-11-25 16:43:18.143 254096 DEBUG nova.storage.rbd_utils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] rbd image 69b3cbbb-9713-4f49-9e67-1f33a3ae2642_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:43:18 compute-0 nova_compute[254092]: 2025-11-25 16:43:18.164 254096 DEBUG nova.storage.rbd_utils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] rbd image 69b3cbbb-9713-4f49-9e67-1f33a3ae2642_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:43:18 compute-0 nova_compute[254092]: 2025-11-25 16:43:18.188 254096 DEBUG nova.storage.rbd_utils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] rbd image 69b3cbbb-9713-4f49-9e67-1f33a3ae2642_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:43:18 compute-0 nova_compute[254092]: 2025-11-25 16:43:18.192 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:43:18 compute-0 nova_compute[254092]: 2025-11-25 16:43:18.264 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:43:18 compute-0 nova_compute[254092]: 2025-11-25 16:43:18.265 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:18 compute-0 nova_compute[254092]: 2025-11-25 16:43:18.265 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:18 compute-0 nova_compute[254092]: 2025-11-25 16:43:18.265 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:18 compute-0 nova_compute[254092]: 2025-11-25 16:43:18.309 254096 DEBUG nova.storage.rbd_utils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] rbd image 69b3cbbb-9713-4f49-9e67-1f33a3ae2642_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:43:18 compute-0 nova_compute[254092]: 2025-11-25 16:43:18.312 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 69b3cbbb-9713-4f49-9e67-1f33a3ae2642_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:43:18 compute-0 nova_compute[254092]: 2025-11-25 16:43:18.337 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:18 compute-0 nova_compute[254092]: 2025-11-25 16:43:18.425 254096 DEBUG nova.policy [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1a8a49326e3040eea57c8e1a61660f19', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b96c962def8e44a98e659bf2a55a8dcc', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:43:18 compute-0 nova_compute[254092]: 2025-11-25 16:43:18.430 254096 DEBUG nova.network.neutron [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Successfully created port: 2a3974bd-02ad-406e-9531-3844e5df4bfa _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:43:18 compute-0 nova_compute[254092]: 2025-11-25 16:43:18.512 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:43:18 compute-0 nova_compute[254092]: 2025-11-25 16:43:18.513 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 16:43:18 compute-0 nova_compute[254092]: 2025-11-25 16:43:18.536 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 16:43:18 compute-0 nova_compute[254092]: 2025-11-25 16:43:18.615 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 69b3cbbb-9713-4f49-9e67-1f33a3ae2642_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.303s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:43:18 compute-0 nova_compute[254092]: 2025-11-25 16:43:18.681 254096 DEBUG nova.storage.rbd_utils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] resizing rbd image 69b3cbbb-9713-4f49-9e67-1f33a3ae2642_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:43:18 compute-0 nova_compute[254092]: 2025-11-25 16:43:18.766 254096 DEBUG nova.objects.instance [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lazy-loading 'migration_context' on Instance uuid 69b3cbbb-9713-4f49-9e67-1f33a3ae2642 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:43:18 compute-0 nova_compute[254092]: 2025-11-25 16:43:18.776 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:43:18 compute-0 nova_compute[254092]: 2025-11-25 16:43:18.777 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Ensure instance console log exists: /var/lib/nova/instances/69b3cbbb-9713-4f49-9e67-1f33a3ae2642/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:43:18 compute-0 nova_compute[254092]: 2025-11-25 16:43:18.777 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:18 compute-0 nova_compute[254092]: 2025-11-25 16:43:18.777 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:18 compute-0 nova_compute[254092]: 2025-11-25 16:43:18.778 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:18 compute-0 nova_compute[254092]: 2025-11-25 16:43:18.791 254096 DEBUG nova.network.neutron [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Successfully updated port: 146f0586-22f7-43d7-9a96-06459ea85508 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:43:18 compute-0 nova_compute[254092]: 2025-11-25 16:43:18.806 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "refresh_cache-ef028cf3-f8af-4112-9424-8a12fdda7690" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:43:18 compute-0 nova_compute[254092]: 2025-11-25 16:43:18.807 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquired lock "refresh_cache-ef028cf3-f8af-4112-9424-8a12fdda7690" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:43:18 compute-0 nova_compute[254092]: 2025-11-25 16:43:18.807 254096 DEBUG nova.network.neutron [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:43:18 compute-0 nova_compute[254092]: 2025-11-25 16:43:18.930 254096 DEBUG nova.compute.manager [req-f3f901ef-bb7e-40c6-a57b-afa743cb478d req-9316f888-27fe-4bd8-a8f0-94c52ee33a2b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Received event network-changed-146f0586-22f7-43d7-9a96-06459ea85508 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:43:18 compute-0 nova_compute[254092]: 2025-11-25 16:43:18.931 254096 DEBUG nova.compute.manager [req-f3f901ef-bb7e-40c6-a57b-afa743cb478d req-9316f888-27fe-4bd8-a8f0-94c52ee33a2b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Refreshing instance network info cache due to event network-changed-146f0586-22f7-43d7-9a96-06459ea85508. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:43:18 compute-0 nova_compute[254092]: 2025-11-25 16:43:18.932 254096 DEBUG oslo_concurrency.lockutils [req-f3f901ef-bb7e-40c6-a57b-afa743cb478d req-9316f888-27fe-4bd8-a8f0-94c52ee33a2b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-ef028cf3-f8af-4112-9424-8a12fdda7690" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:43:18 compute-0 nova_compute[254092]: 2025-11-25 16:43:18.975 254096 DEBUG nova.network.neutron [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:43:19 compute-0 ceph-mon[74985]: pgmap v1737: 321 pgs: 321 active+clean; 88 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 16 KiB/s wr, 122 op/s
Nov 25 16:43:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1738: 321 pgs: 321 active+clean; 88 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 95 op/s
Nov 25 16:43:20 compute-0 nova_compute[254092]: 2025-11-25 16:43:20.046 254096 DEBUG nova.network.neutron [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Successfully created port: a297c9f1-753f-4f96-b8e4-38a42969484d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:43:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:20.219 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:43:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:20.220 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 16:43:20 compute-0 nova_compute[254092]: 2025-11-25 16:43:20.220 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:20 compute-0 nova_compute[254092]: 2025-11-25 16:43:20.231 254096 DEBUG nova.network.neutron [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Updating instance_info_cache with network_info: [{"id": "146f0586-22f7-43d7-9a96-06459ea85508", "address": "fa:16:3e:62:f4:12", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146f0586-22", "ovs_interfaceid": "146f0586-22f7-43d7-9a96-06459ea85508", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:43:20 compute-0 nova_compute[254092]: 2025-11-25 16:43:20.251 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Releasing lock "refresh_cache-ef028cf3-f8af-4112-9424-8a12fdda7690" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:43:20 compute-0 nova_compute[254092]: 2025-11-25 16:43:20.251 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Instance network_info: |[{"id": "146f0586-22f7-43d7-9a96-06459ea85508", "address": "fa:16:3e:62:f4:12", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146f0586-22", "ovs_interfaceid": "146f0586-22f7-43d7-9a96-06459ea85508", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:43:20 compute-0 nova_compute[254092]: 2025-11-25 16:43:20.252 254096 DEBUG oslo_concurrency.lockutils [req-f3f901ef-bb7e-40c6-a57b-afa743cb478d req-9316f888-27fe-4bd8-a8f0-94c52ee33a2b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-ef028cf3-f8af-4112-9424-8a12fdda7690" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:43:20 compute-0 nova_compute[254092]: 2025-11-25 16:43:20.252 254096 DEBUG nova.network.neutron [req-f3f901ef-bb7e-40c6-a57b-afa743cb478d req-9316f888-27fe-4bd8-a8f0-94c52ee33a2b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Refreshing network info cache for port 146f0586-22f7-43d7-9a96-06459ea85508 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:43:20 compute-0 nova_compute[254092]: 2025-11-25 16:43:20.255 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Start _get_guest_xml network_info=[{"id": "146f0586-22f7-43d7-9a96-06459ea85508", "address": "fa:16:3e:62:f4:12", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146f0586-22", "ovs_interfaceid": "146f0586-22f7-43d7-9a96-06459ea85508", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:43:20 compute-0 nova_compute[254092]: 2025-11-25 16:43:20.259 254096 WARNING nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:43:20 compute-0 nova_compute[254092]: 2025-11-25 16:43:20.265 254096 DEBUG nova.virt.libvirt.host [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:43:20 compute-0 nova_compute[254092]: 2025-11-25 16:43:20.265 254096 DEBUG nova.virt.libvirt.host [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:43:20 compute-0 nova_compute[254092]: 2025-11-25 16:43:20.272 254096 DEBUG nova.virt.libvirt.host [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:43:20 compute-0 nova_compute[254092]: 2025-11-25 16:43:20.272 254096 DEBUG nova.virt.libvirt.host [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:43:20 compute-0 nova_compute[254092]: 2025-11-25 16:43:20.273 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:43:20 compute-0 nova_compute[254092]: 2025-11-25 16:43:20.273 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:43:20 compute-0 nova_compute[254092]: 2025-11-25 16:43:20.273 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:43:20 compute-0 nova_compute[254092]: 2025-11-25 16:43:20.274 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:43:20 compute-0 nova_compute[254092]: 2025-11-25 16:43:20.274 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:43:20 compute-0 nova_compute[254092]: 2025-11-25 16:43:20.274 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:43:20 compute-0 nova_compute[254092]: 2025-11-25 16:43:20.275 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:43:20 compute-0 nova_compute[254092]: 2025-11-25 16:43:20.275 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:43:20 compute-0 nova_compute[254092]: 2025-11-25 16:43:20.275 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:43:20 compute-0 nova_compute[254092]: 2025-11-25 16:43:20.275 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:43:20 compute-0 nova_compute[254092]: 2025-11-25 16:43:20.276 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:43:20 compute-0 nova_compute[254092]: 2025-11-25 16:43:20.276 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:43:20 compute-0 nova_compute[254092]: 2025-11-25 16:43:20.278 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:43:20 compute-0 nova_compute[254092]: 2025-11-25 16:43:20.353 254096 DEBUG nova.network.neutron [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Successfully updated port: 2a3974bd-02ad-406e-9531-3844e5df4bfa _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:43:20 compute-0 nova_compute[254092]: 2025-11-25 16:43:20.375 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "refresh_cache-f5a259d0-4460-4335-aa4a-f874f93a7e93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:43:20 compute-0 nova_compute[254092]: 2025-11-25 16:43:20.376 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquired lock "refresh_cache-f5a259d0-4460-4335-aa4a-f874f93a7e93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:43:20 compute-0 nova_compute[254092]: 2025-11-25 16:43:20.376 254096 DEBUG nova.network.neutron [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:43:20 compute-0 nova_compute[254092]: 2025-11-25 16:43:20.525 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764088985.524121, ed4031e8-a918-4816-b9b8-b1134a086f8b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:43:20 compute-0 nova_compute[254092]: 2025-11-25 16:43:20.526 254096 INFO nova.compute.manager [-] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] VM Stopped (Lifecycle Event)
Nov 25 16:43:20 compute-0 nova_compute[254092]: 2025-11-25 16:43:20.566 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:20 compute-0 nova_compute[254092]: 2025-11-25 16:43:20.570 254096 DEBUG nova.compute.manager [req-d85c7eb4-a778-469c-bd4d-77ff5fc3b25f req-629af8b7-0649-45b4-af05-a8d7516a0246 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Received event network-changed-2a3974bd-02ad-406e-9531-3844e5df4bfa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:43:20 compute-0 nova_compute[254092]: 2025-11-25 16:43:20.571 254096 DEBUG nova.compute.manager [req-d85c7eb4-a778-469c-bd4d-77ff5fc3b25f req-629af8b7-0649-45b4-af05-a8d7516a0246 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Refreshing instance network info cache due to event network-changed-2a3974bd-02ad-406e-9531-3844e5df4bfa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:43:20 compute-0 nova_compute[254092]: 2025-11-25 16:43:20.571 254096 DEBUG oslo_concurrency.lockutils [req-d85c7eb4-a778-469c-bd4d-77ff5fc3b25f req-629af8b7-0649-45b4-af05-a8d7516a0246 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-f5a259d0-4460-4335-aa4a-f874f93a7e93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:43:20 compute-0 nova_compute[254092]: 2025-11-25 16:43:20.574 254096 DEBUG nova.compute.manager [None req-b1129082-1b9a-4bd2-8e84-e9fdba443370 - - - - - -] [instance: ed4031e8-a918-4816-b9b8-b1134a086f8b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:43:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:43:20 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1652906314' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:43:20 compute-0 nova_compute[254092]: 2025-11-25 16:43:20.712 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:43:20 compute-0 nova_compute[254092]: 2025-11-25 16:43:20.750 254096 DEBUG nova.storage.rbd_utils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] rbd image ef028cf3-f8af-4112-9424-8a12fdda7690_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:43:20 compute-0 nova_compute[254092]: 2025-11-25 16:43:20.756 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:43:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:43:20 compute-0 nova_compute[254092]: 2025-11-25 16:43:20.883 254096 DEBUG nova.network.neutron [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:43:21 compute-0 ceph-mon[74985]: pgmap v1738: 321 pgs: 321 active+clean; 88 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 95 op/s
Nov 25 16:43:21 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1652906314' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:43:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:43:21 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/647307016' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:43:21 compute-0 nova_compute[254092]: 2025-11-25 16:43:21.261 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:43:21 compute-0 nova_compute[254092]: 2025-11-25 16:43:21.263 254096 DEBUG nova.virt.libvirt.vif [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:43:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-206851324',display_name='tempest-ListServersNegativeTestJSON-server-206851324-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-206851324-1',id=73,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b96c962def8e44a98e659bf2a55a8dcc',ramdisk_id='',reservation_id='r-4m4xi3n8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-999655333',owner_user_name='tempest-ListServersNegativeTestJSON-999655333-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:43:16Z,user_data=None,user_id='1a8a49326e3040eea57c8e1a61660f19',uuid=ef028cf3-f8af-4112-9424-8a12fdda7690,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "146f0586-22f7-43d7-9a96-06459ea85508", "address": "fa:16:3e:62:f4:12", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146f0586-22", "ovs_interfaceid": "146f0586-22f7-43d7-9a96-06459ea85508", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:43:21 compute-0 nova_compute[254092]: 2025-11-25 16:43:21.264 254096 DEBUG nova.network.os_vif_util [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Converting VIF {"id": "146f0586-22f7-43d7-9a96-06459ea85508", "address": "fa:16:3e:62:f4:12", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146f0586-22", "ovs_interfaceid": "146f0586-22f7-43d7-9a96-06459ea85508", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:43:21 compute-0 nova_compute[254092]: 2025-11-25 16:43:21.264 254096 DEBUG nova.network.os_vif_util [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:f4:12,bridge_name='br-int',has_traffic_filtering=True,id=146f0586-22f7-43d7-9a96-06459ea85508,network=Network(126cf01f-b6da-4bbc-847b-2d16936986cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap146f0586-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:43:21 compute-0 nova_compute[254092]: 2025-11-25 16:43:21.265 254096 DEBUG nova.objects.instance [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lazy-loading 'pci_devices' on Instance uuid ef028cf3-f8af-4112-9424-8a12fdda7690 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:43:21 compute-0 nova_compute[254092]: 2025-11-25 16:43:21.278 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:43:21 compute-0 nova_compute[254092]:   <uuid>ef028cf3-f8af-4112-9424-8a12fdda7690</uuid>
Nov 25 16:43:21 compute-0 nova_compute[254092]:   <name>instance-00000049</name>
Nov 25 16:43:21 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:43:21 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:43:21 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:43:21 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:       <nova:name>tempest-ListServersNegativeTestJSON-server-206851324-1</nova:name>
Nov 25 16:43:21 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:43:20</nova:creationTime>
Nov 25 16:43:21 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:43:21 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:43:21 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:43:21 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:43:21 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:43:21 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:43:21 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:43:21 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:43:21 compute-0 nova_compute[254092]:         <nova:user uuid="1a8a49326e3040eea57c8e1a61660f19">tempest-ListServersNegativeTestJSON-999655333-project-member</nova:user>
Nov 25 16:43:21 compute-0 nova_compute[254092]:         <nova:project uuid="b96c962def8e44a98e659bf2a55a8dcc">tempest-ListServersNegativeTestJSON-999655333</nova:project>
Nov 25 16:43:21 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:43:21 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:43:21 compute-0 nova_compute[254092]:         <nova:port uuid="146f0586-22f7-43d7-9a96-06459ea85508">
Nov 25 16:43:21 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:43:21 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:43:21 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:43:21 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:43:21 compute-0 nova_compute[254092]:     <system>
Nov 25 16:43:21 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:43:21 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:43:21 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:43:21 compute-0 nova_compute[254092]:       <entry name="serial">ef028cf3-f8af-4112-9424-8a12fdda7690</entry>
Nov 25 16:43:21 compute-0 nova_compute[254092]:       <entry name="uuid">ef028cf3-f8af-4112-9424-8a12fdda7690</entry>
Nov 25 16:43:21 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     </system>
Nov 25 16:43:21 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:43:21 compute-0 nova_compute[254092]:   <os>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:   </os>
Nov 25 16:43:21 compute-0 nova_compute[254092]:   <features>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:   </features>
Nov 25 16:43:21 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:43:21 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:43:21 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:43:21 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:43:21 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:43:21 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/ef028cf3-f8af-4112-9424-8a12fdda7690_disk">
Nov 25 16:43:21 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:       </source>
Nov 25 16:43:21 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:43:21 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:43:21 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:43:21 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/ef028cf3-f8af-4112-9424-8a12fdda7690_disk.config">
Nov 25 16:43:21 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:       </source>
Nov 25 16:43:21 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:43:21 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:43:21 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:43:21 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:62:f4:12"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:       <target dev="tap146f0586-22"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:43:21 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/ef028cf3-f8af-4112-9424-8a12fdda7690/console.log" append="off"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     <video>
Nov 25 16:43:21 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     </video>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:43:21 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:43:21 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:43:21 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:43:21 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:43:21 compute-0 nova_compute[254092]: </domain>
Nov 25 16:43:21 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:43:21 compute-0 nova_compute[254092]: 2025-11-25 16:43:21.279 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Preparing to wait for external event network-vif-plugged-146f0586-22f7-43d7-9a96-06459ea85508 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:43:21 compute-0 nova_compute[254092]: 2025-11-25 16:43:21.279 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "ef028cf3-f8af-4112-9424-8a12fdda7690-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:21 compute-0 nova_compute[254092]: 2025-11-25 16:43:21.280 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "ef028cf3-f8af-4112-9424-8a12fdda7690-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:21 compute-0 nova_compute[254092]: 2025-11-25 16:43:21.280 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "ef028cf3-f8af-4112-9424-8a12fdda7690-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:21 compute-0 nova_compute[254092]: 2025-11-25 16:43:21.281 254096 DEBUG nova.virt.libvirt.vif [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:43:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-206851324',display_name='tempest-ListServersNegativeTestJSON-server-206851324-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-206851324-1',id=73,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b96c962def8e44a98e659bf2a55a8dcc',ramdisk_id='',reservation_id='r-4m4xi3n8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-999655333',owner_user_name='tempest-ListServersNegativeTestJSON-999655333-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:43:16Z,user_data=None,user_id='1a8a49326e3040eea57c8e1a61660f19',uuid=ef028cf3-f8af-4112-9424-8a12fdda7690,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "146f0586-22f7-43d7-9a96-06459ea85508", "address": "fa:16:3e:62:f4:12", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146f0586-22", "ovs_interfaceid": "146f0586-22f7-43d7-9a96-06459ea85508", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:43:21 compute-0 nova_compute[254092]: 2025-11-25 16:43:21.281 254096 DEBUG nova.network.os_vif_util [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Converting VIF {"id": "146f0586-22f7-43d7-9a96-06459ea85508", "address": "fa:16:3e:62:f4:12", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146f0586-22", "ovs_interfaceid": "146f0586-22f7-43d7-9a96-06459ea85508", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:43:21 compute-0 nova_compute[254092]: 2025-11-25 16:43:21.281 254096 DEBUG nova.network.os_vif_util [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:f4:12,bridge_name='br-int',has_traffic_filtering=True,id=146f0586-22f7-43d7-9a96-06459ea85508,network=Network(126cf01f-b6da-4bbc-847b-2d16936986cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap146f0586-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:43:21 compute-0 nova_compute[254092]: 2025-11-25 16:43:21.282 254096 DEBUG os_vif [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:f4:12,bridge_name='br-int',has_traffic_filtering=True,id=146f0586-22f7-43d7-9a96-06459ea85508,network=Network(126cf01f-b6da-4bbc-847b-2d16936986cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap146f0586-22') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:43:21 compute-0 nova_compute[254092]: 2025-11-25 16:43:21.283 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:21 compute-0 nova_compute[254092]: 2025-11-25 16:43:21.283 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:21 compute-0 nova_compute[254092]: 2025-11-25 16:43:21.283 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:43:21 compute-0 nova_compute[254092]: 2025-11-25 16:43:21.286 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:21 compute-0 nova_compute[254092]: 2025-11-25 16:43:21.286 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap146f0586-22, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:21 compute-0 nova_compute[254092]: 2025-11-25 16:43:21.287 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap146f0586-22, col_values=(('external_ids', {'iface-id': '146f0586-22f7-43d7-9a96-06459ea85508', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:62:f4:12', 'vm-uuid': 'ef028cf3-f8af-4112-9424-8a12fdda7690'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:21 compute-0 nova_compute[254092]: 2025-11-25 16:43:21.288 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:21 compute-0 NetworkManager[48891]: <info>  [1764089001.2892] manager: (tap146f0586-22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/300)
Nov 25 16:43:21 compute-0 nova_compute[254092]: 2025-11-25 16:43:21.290 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:43:21 compute-0 nova_compute[254092]: 2025-11-25 16:43:21.294 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:21 compute-0 nova_compute[254092]: 2025-11-25 16:43:21.295 254096 INFO os_vif [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:f4:12,bridge_name='br-int',has_traffic_filtering=True,id=146f0586-22f7-43d7-9a96-06459ea85508,network=Network(126cf01f-b6da-4bbc-847b-2d16936986cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap146f0586-22')
Nov 25 16:43:21 compute-0 nova_compute[254092]: 2025-11-25 16:43:21.353 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:43:21 compute-0 nova_compute[254092]: 2025-11-25 16:43:21.353 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:43:21 compute-0 nova_compute[254092]: 2025-11-25 16:43:21.353 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] No VIF found with MAC fa:16:3e:62:f4:12, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:43:21 compute-0 nova_compute[254092]: 2025-11-25 16:43:21.354 254096 INFO nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Using config drive
Nov 25 16:43:21 compute-0 nova_compute[254092]: 2025-11-25 16:43:21.376 254096 DEBUG nova.storage.rbd_utils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] rbd image ef028cf3-f8af-4112-9424-8a12fdda7690_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:43:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1739: 321 pgs: 321 active+clean; 227 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 177 op/s
Nov 25 16:43:21 compute-0 nova_compute[254092]: 2025-11-25 16:43:21.872 254096 DEBUG nova.network.neutron [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Successfully updated port: a297c9f1-753f-4f96-b8e4-38a42969484d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:43:22 compute-0 nova_compute[254092]: 2025-11-25 16:43:22.036 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "refresh_cache-69b3cbbb-9713-4f49-9e67-1f33a3ae2642" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:43:22 compute-0 nova_compute[254092]: 2025-11-25 16:43:22.036 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquired lock "refresh_cache-69b3cbbb-9713-4f49-9e67-1f33a3ae2642" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:43:22 compute-0 nova_compute[254092]: 2025-11-25 16:43:22.036 254096 DEBUG nova.network.neutron [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:43:22 compute-0 nova_compute[254092]: 2025-11-25 16:43:22.079 254096 DEBUG nova.network.neutron [req-f3f901ef-bb7e-40c6-a57b-afa743cb478d req-9316f888-27fe-4bd8-a8f0-94c52ee33a2b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Updated VIF entry in instance network info cache for port 146f0586-22f7-43d7-9a96-06459ea85508. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:43:22 compute-0 nova_compute[254092]: 2025-11-25 16:43:22.080 254096 DEBUG nova.network.neutron [req-f3f901ef-bb7e-40c6-a57b-afa743cb478d req-9316f888-27fe-4bd8-a8f0-94c52ee33a2b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Updating instance_info_cache with network_info: [{"id": "146f0586-22f7-43d7-9a96-06459ea85508", "address": "fa:16:3e:62:f4:12", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146f0586-22", "ovs_interfaceid": "146f0586-22f7-43d7-9a96-06459ea85508", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:43:22 compute-0 nova_compute[254092]: 2025-11-25 16:43:22.093 254096 DEBUG oslo_concurrency.lockutils [req-f3f901ef-bb7e-40c6-a57b-afa743cb478d req-9316f888-27fe-4bd8-a8f0-94c52ee33a2b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-ef028cf3-f8af-4112-9424-8a12fdda7690" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:43:22 compute-0 nova_compute[254092]: 2025-11-25 16:43:22.326 254096 INFO nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Creating config drive at /var/lib/nova/instances/ef028cf3-f8af-4112-9424-8a12fdda7690/disk.config
Nov 25 16:43:22 compute-0 nova_compute[254092]: 2025-11-25 16:43:22.331 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ef028cf3-f8af-4112-9424-8a12fdda7690/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp98qiu354 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:43:22 compute-0 nova_compute[254092]: 2025-11-25 16:43:22.365 254096 DEBUG nova.network.neutron [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:43:22 compute-0 nova_compute[254092]: 2025-11-25 16:43:22.447 254096 DEBUG nova.network.neutron [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Updating instance_info_cache with network_info: [{"id": "2a3974bd-02ad-406e-9531-3844e5df4bfa", "address": "fa:16:3e:02:21:9c", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a3974bd-02", "ovs_interfaceid": "2a3974bd-02ad-406e-9531-3844e5df4bfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:43:22 compute-0 nova_compute[254092]: 2025-11-25 16:43:22.470 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ef028cf3-f8af-4112-9424-8a12fdda7690/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp98qiu354" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:43:22 compute-0 nova_compute[254092]: 2025-11-25 16:43:22.796 254096 DEBUG nova.storage.rbd_utils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] rbd image ef028cf3-f8af-4112-9424-8a12fdda7690_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:43:22 compute-0 nova_compute[254092]: 2025-11-25 16:43:22.799 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ef028cf3-f8af-4112-9424-8a12fdda7690/disk.config ef028cf3-f8af-4112-9424-8a12fdda7690_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:43:22 compute-0 nova_compute[254092]: 2025-11-25 16:43:22.851 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Releasing lock "refresh_cache-f5a259d0-4460-4335-aa4a-f874f93a7e93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:43:22 compute-0 nova_compute[254092]: 2025-11-25 16:43:22.852 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Instance network_info: |[{"id": "2a3974bd-02ad-406e-9531-3844e5df4bfa", "address": "fa:16:3e:02:21:9c", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a3974bd-02", "ovs_interfaceid": "2a3974bd-02ad-406e-9531-3844e5df4bfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:43:22 compute-0 nova_compute[254092]: 2025-11-25 16:43:22.853 254096 DEBUG nova.compute.manager [req-bce81339-127a-4560-8029-301e367e6f28 req-9fd14b47-aacf-4cde-b2ca-41d08c05a86f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Received event network-changed-a297c9f1-753f-4f96-b8e4-38a42969484d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:43:22 compute-0 nova_compute[254092]: 2025-11-25 16:43:22.854 254096 DEBUG nova.compute.manager [req-bce81339-127a-4560-8029-301e367e6f28 req-9fd14b47-aacf-4cde-b2ca-41d08c05a86f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Refreshing instance network info cache due to event network-changed-a297c9f1-753f-4f96-b8e4-38a42969484d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:43:22 compute-0 nova_compute[254092]: 2025-11-25 16:43:22.854 254096 DEBUG oslo_concurrency.lockutils [req-bce81339-127a-4560-8029-301e367e6f28 req-9fd14b47-aacf-4cde-b2ca-41d08c05a86f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-69b3cbbb-9713-4f49-9e67-1f33a3ae2642" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:43:22 compute-0 nova_compute[254092]: 2025-11-25 16:43:22.854 254096 DEBUG oslo_concurrency.lockutils [req-d85c7eb4-a778-469c-bd4d-77ff5fc3b25f req-629af8b7-0649-45b4-af05-a8d7516a0246 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-f5a259d0-4460-4335-aa4a-f874f93a7e93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:43:22 compute-0 nova_compute[254092]: 2025-11-25 16:43:22.854 254096 DEBUG nova.network.neutron [req-d85c7eb4-a778-469c-bd4d-77ff5fc3b25f req-629af8b7-0649-45b4-af05-a8d7516a0246 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Refreshing network info cache for port 2a3974bd-02ad-406e-9531-3844e5df4bfa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:43:22 compute-0 nova_compute[254092]: 2025-11-25 16:43:22.857 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Start _get_guest_xml network_info=[{"id": "2a3974bd-02ad-406e-9531-3844e5df4bfa", "address": "fa:16:3e:02:21:9c", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a3974bd-02", "ovs_interfaceid": "2a3974bd-02ad-406e-9531-3844e5df4bfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:43:22 compute-0 nova_compute[254092]: 2025-11-25 16:43:22.861 254096 WARNING nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:43:22 compute-0 nova_compute[254092]: 2025-11-25 16:43:22.865 254096 DEBUG nova.virt.libvirt.host [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:43:22 compute-0 nova_compute[254092]: 2025-11-25 16:43:22.865 254096 DEBUG nova.virt.libvirt.host [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:43:22 compute-0 nova_compute[254092]: 2025-11-25 16:43:22.867 254096 DEBUG nova.virt.libvirt.host [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:43:22 compute-0 nova_compute[254092]: 2025-11-25 16:43:22.867 254096 DEBUG nova.virt.libvirt.host [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:43:22 compute-0 nova_compute[254092]: 2025-11-25 16:43:22.868 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:43:22 compute-0 nova_compute[254092]: 2025-11-25 16:43:22.868 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:43:22 compute-0 nova_compute[254092]: 2025-11-25 16:43:22.868 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:43:22 compute-0 nova_compute[254092]: 2025-11-25 16:43:22.869 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:43:22 compute-0 nova_compute[254092]: 2025-11-25 16:43:22.869 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:43:22 compute-0 nova_compute[254092]: 2025-11-25 16:43:22.869 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:43:22 compute-0 nova_compute[254092]: 2025-11-25 16:43:22.869 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:43:22 compute-0 nova_compute[254092]: 2025-11-25 16:43:22.869 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:43:22 compute-0 nova_compute[254092]: 2025-11-25 16:43:22.869 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:43:22 compute-0 nova_compute[254092]: 2025-11-25 16:43:22.869 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:43:22 compute-0 nova_compute[254092]: 2025-11-25 16:43:22.870 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:43:22 compute-0 nova_compute[254092]: 2025-11-25 16:43:22.870 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:43:22 compute-0 nova_compute[254092]: 2025-11-25 16:43:22.873 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:43:23 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/647307016' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:43:23 compute-0 ceph-mon[74985]: pgmap v1739: 321 pgs: 321 active+clean; 227 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 177 op/s
Nov 25 16:43:23 compute-0 nova_compute[254092]: 2025-11-25 16:43:23.301 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:43:23 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1960710798' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:43:23 compute-0 nova_compute[254092]: 2025-11-25 16:43:23.372 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:43:23 compute-0 nova_compute[254092]: 2025-11-25 16:43:23.404 254096 DEBUG nova.storage.rbd_utils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] rbd image f5a259d0-4460-4335-aa4a-f874f93a7e93_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:43:23 compute-0 nova_compute[254092]: 2025-11-25 16:43:23.410 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:43:23 compute-0 nova_compute[254092]: 2025-11-25 16:43:23.600 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ef028cf3-f8af-4112-9424-8a12fdda7690/disk.config ef028cf3-f8af-4112-9424-8a12fdda7690_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.801s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:43:23 compute-0 nova_compute[254092]: 2025-11-25 16:43:23.601 254096 INFO nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Deleting local config drive /var/lib/nova/instances/ef028cf3-f8af-4112-9424-8a12fdda7690/disk.config because it was imported into RBD.
Nov 25 16:43:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1740: 321 pgs: 321 active+clean; 227 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 5.3 MiB/s wr, 141 op/s
Nov 25 16:43:23 compute-0 NetworkManager[48891]: <info>  [1764089003.6589] manager: (tap146f0586-22): new Tun device (/org/freedesktop/NetworkManager/Devices/301)
Nov 25 16:43:23 compute-0 kernel: tap146f0586-22: entered promiscuous mode
Nov 25 16:43:23 compute-0 ovn_controller[153477]: 2025-11-25T16:43:23Z|00692|binding|INFO|Claiming lport 146f0586-22f7-43d7-9a96-06459ea85508 for this chassis.
Nov 25 16:43:23 compute-0 ovn_controller[153477]: 2025-11-25T16:43:23Z|00693|binding|INFO|146f0586-22f7-43d7-9a96-06459ea85508: Claiming fa:16:3e:62:f4:12 10.100.0.3
Nov 25 16:43:23 compute-0 nova_compute[254092]: 2025-11-25 16:43:23.663 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:23.672 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:f4:12 10.100.0.3'], port_security=['fa:16:3e:62:f4:12 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ef028cf3-f8af-4112-9424-8a12fdda7690', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-126cf01f-b6da-4bbc-847b-2d16936986cf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b96c962def8e44a98e659bf2a55a8dcc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '71017c54-8c41-4e8e-be7c-5ebf5417c30b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff19d899-3f5c-4864-b6a7-e950e287f839, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=146f0586-22f7-43d7-9a96-06459ea85508) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:43:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:23.674 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 146f0586-22f7-43d7-9a96-06459ea85508 in datapath 126cf01f-b6da-4bbc-847b-2d16936986cf bound to our chassis
Nov 25 16:43:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:23.675 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 126cf01f-b6da-4bbc-847b-2d16936986cf
Nov 25 16:43:23 compute-0 ovn_controller[153477]: 2025-11-25T16:43:23Z|00694|binding|INFO|Setting lport 146f0586-22f7-43d7-9a96-06459ea85508 ovn-installed in OVS
Nov 25 16:43:23 compute-0 ovn_controller[153477]: 2025-11-25T16:43:23Z|00695|binding|INFO|Setting lport 146f0586-22f7-43d7-9a96-06459ea85508 up in Southbound
Nov 25 16:43:23 compute-0 nova_compute[254092]: 2025-11-25 16:43:23.684 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:23.689 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[73583b6c-3c2c-45f0-88e3-0b09617611f6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:23.690 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap126cf01f-b1 in ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:43:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:23.692 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap126cf01f-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:43:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:23.693 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[efffe1de-9d6d-4cb5-9239-5c187a7c52aa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:23.694 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6951a5cf-7641-4c50-81fc-19577db58080]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:23 compute-0 systemd-udevd[331928]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:43:23 compute-0 systemd-machined[216343]: New machine qemu-86-instance-00000049.
Nov 25 16:43:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:23.709 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[1e1c568d-e24f-4184-a8c2-e5997c3b37e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:23 compute-0 NetworkManager[48891]: <info>  [1764089003.7140] device (tap146f0586-22): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:43:23 compute-0 NetworkManager[48891]: <info>  [1764089003.7151] device (tap146f0586-22): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:43:23 compute-0 systemd[1]: Started Virtual Machine qemu-86-instance-00000049.
Nov 25 16:43:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:23.737 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1a63ca00-5fbf-4267-bce9-09ac15a5f266]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:23.768 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d79cf7e4-ba66-43ae-81f2-a852d16c9a91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:23 compute-0 NetworkManager[48891]: <info>  [1764089003.7769] manager: (tap126cf01f-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/302)
Nov 25 16:43:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:23.776 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a96471e2-a128-4c82-8dfc-863afa4a7d35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:23 compute-0 systemd-udevd[331932]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:43:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:23.810 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[2becb42e-257d-4a0c-9643-065923b7ecc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:23.814 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[422e9328-7fec-41ba-bf27-d32cf2793568]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:23 compute-0 NetworkManager[48891]: <info>  [1764089003.8439] device (tap126cf01f-b0): carrier: link connected
Nov 25 16:43:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:23.850 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[0ffa82a2-3e7a-4b02-8252-fb105a6da427]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:23.868 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2dfdd176-989d-4e3a-bc78-b8f4b40e634e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap126cf01f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:03:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 203], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 546143, 'reachable_time': 36845, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331963, 'error': None, 'target': 'ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:23.886 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4fb35b7b-bdd4-47d9-bd2a-7530d5bf1945]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb6:3ba'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 546143, 'tstamp': 546143}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331964, 'error': None, 'target': 'ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:43:23 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1971467510' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:43:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:23.909 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2b3105e9-c068-4f39-a38e-0fd948f194e3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap126cf01f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:03:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 203], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 546143, 'reachable_time': 36845, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 331965, 'error': None, 'target': 'ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:23 compute-0 nova_compute[254092]: 2025-11-25 16:43:23.938 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:43:23 compute-0 nova_compute[254092]: 2025-11-25 16:43:23.940 254096 DEBUG nova.virt.libvirt.vif [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:43:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-206851324',display_name='tempest-ListServersNegativeTestJSON-server-206851324-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-206851324-2',id=74,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b96c962def8e44a98e659bf2a55a8dcc',ramdisk_id='',reservation_id='r-4m4xi3n8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-999655333',owner_user_name='tempes
t-ListServersNegativeTestJSON-999655333-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:43:17Z,user_data=None,user_id='1a8a49326e3040eea57c8e1a61660f19',uuid=f5a259d0-4460-4335-aa4a-f874f93a7e93,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2a3974bd-02ad-406e-9531-3844e5df4bfa", "address": "fa:16:3e:02:21:9c", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a3974bd-02", "ovs_interfaceid": "2a3974bd-02ad-406e-9531-3844e5df4bfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:43:23 compute-0 nova_compute[254092]: 2025-11-25 16:43:23.940 254096 DEBUG nova.network.os_vif_util [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Converting VIF {"id": "2a3974bd-02ad-406e-9531-3844e5df4bfa", "address": "fa:16:3e:02:21:9c", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a3974bd-02", "ovs_interfaceid": "2a3974bd-02ad-406e-9531-3844e5df4bfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:43:23 compute-0 nova_compute[254092]: 2025-11-25 16:43:23.941 254096 DEBUG nova.network.os_vif_util [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:21:9c,bridge_name='br-int',has_traffic_filtering=True,id=2a3974bd-02ad-406e-9531-3844e5df4bfa,network=Network(126cf01f-b6da-4bbc-847b-2d16936986cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a3974bd-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:43:23 compute-0 nova_compute[254092]: 2025-11-25 16:43:23.943 254096 DEBUG nova.objects.instance [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lazy-loading 'pci_devices' on Instance uuid f5a259d0-4460-4335-aa4a-f874f93a7e93 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:43:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:23.949 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8bb10ac2-4283-44c4-90f1-7bf553299ba3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:23 compute-0 nova_compute[254092]: 2025-11-25 16:43:23.961 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:43:23 compute-0 nova_compute[254092]:   <uuid>f5a259d0-4460-4335-aa4a-f874f93a7e93</uuid>
Nov 25 16:43:23 compute-0 nova_compute[254092]:   <name>instance-0000004a</name>
Nov 25 16:43:23 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:43:23 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:43:23 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:43:23 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:       <nova:name>tempest-ListServersNegativeTestJSON-server-206851324-2</nova:name>
Nov 25 16:43:23 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:43:22</nova:creationTime>
Nov 25 16:43:23 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:43:23 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:43:23 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:43:23 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:43:23 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:43:23 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:43:23 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:43:23 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:43:23 compute-0 nova_compute[254092]:         <nova:user uuid="1a8a49326e3040eea57c8e1a61660f19">tempest-ListServersNegativeTestJSON-999655333-project-member</nova:user>
Nov 25 16:43:23 compute-0 nova_compute[254092]:         <nova:project uuid="b96c962def8e44a98e659bf2a55a8dcc">tempest-ListServersNegativeTestJSON-999655333</nova:project>
Nov 25 16:43:23 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:43:23 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:43:23 compute-0 nova_compute[254092]:         <nova:port uuid="2a3974bd-02ad-406e-9531-3844e5df4bfa">
Nov 25 16:43:23 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:43:23 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:43:23 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:43:23 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:43:23 compute-0 nova_compute[254092]:     <system>
Nov 25 16:43:23 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:43:23 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:43:23 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:43:23 compute-0 nova_compute[254092]:       <entry name="serial">f5a259d0-4460-4335-aa4a-f874f93a7e93</entry>
Nov 25 16:43:23 compute-0 nova_compute[254092]:       <entry name="uuid">f5a259d0-4460-4335-aa4a-f874f93a7e93</entry>
Nov 25 16:43:23 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     </system>
Nov 25 16:43:23 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:43:23 compute-0 nova_compute[254092]:   <os>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:   </os>
Nov 25 16:43:23 compute-0 nova_compute[254092]:   <features>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:   </features>
Nov 25 16:43:23 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:43:23 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:43:23 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:43:23 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:43:23 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:43:23 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/f5a259d0-4460-4335-aa4a-f874f93a7e93_disk">
Nov 25 16:43:23 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:       </source>
Nov 25 16:43:23 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:43:23 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:43:23 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:43:23 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/f5a259d0-4460-4335-aa4a-f874f93a7e93_disk.config">
Nov 25 16:43:23 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:       </source>
Nov 25 16:43:23 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:43:23 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:43:23 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:43:23 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:02:21:9c"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:       <target dev="tap2a3974bd-02"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:43:23 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/f5a259d0-4460-4335-aa4a-f874f93a7e93/console.log" append="off"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     <video>
Nov 25 16:43:23 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     </video>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:43:23 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:43:23 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:43:23 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:43:23 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:43:23 compute-0 nova_compute[254092]: </domain>
Nov 25 16:43:23 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:43:23 compute-0 nova_compute[254092]: 2025-11-25 16:43:23.967 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Preparing to wait for external event network-vif-plugged-2a3974bd-02ad-406e-9531-3844e5df4bfa prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:43:23 compute-0 nova_compute[254092]: 2025-11-25 16:43:23.967 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "f5a259d0-4460-4335-aa4a-f874f93a7e93-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:23 compute-0 nova_compute[254092]: 2025-11-25 16:43:23.967 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "f5a259d0-4460-4335-aa4a-f874f93a7e93-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:23 compute-0 nova_compute[254092]: 2025-11-25 16:43:23.968 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "f5a259d0-4460-4335-aa4a-f874f93a7e93-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:23 compute-0 nova_compute[254092]: 2025-11-25 16:43:23.969 254096 DEBUG nova.virt.libvirt.vif [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:43:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-206851324',display_name='tempest-ListServersNegativeTestJSON-server-206851324-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-206851324-2',id=74,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b96c962def8e44a98e659bf2a55a8dcc',ramdisk_id='',reservation_id='r-4m4xi3n8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-999655333',owner_user_na
me='tempest-ListServersNegativeTestJSON-999655333-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:43:17Z,user_data=None,user_id='1a8a49326e3040eea57c8e1a61660f19',uuid=f5a259d0-4460-4335-aa4a-f874f93a7e93,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2a3974bd-02ad-406e-9531-3844e5df4bfa", "address": "fa:16:3e:02:21:9c", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a3974bd-02", "ovs_interfaceid": "2a3974bd-02ad-406e-9531-3844e5df4bfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:43:23 compute-0 nova_compute[254092]: 2025-11-25 16:43:23.969 254096 DEBUG nova.network.os_vif_util [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Converting VIF {"id": "2a3974bd-02ad-406e-9531-3844e5df4bfa", "address": "fa:16:3e:02:21:9c", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a3974bd-02", "ovs_interfaceid": "2a3974bd-02ad-406e-9531-3844e5df4bfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:43:23 compute-0 nova_compute[254092]: 2025-11-25 16:43:23.970 254096 DEBUG nova.network.os_vif_util [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:21:9c,bridge_name='br-int',has_traffic_filtering=True,id=2a3974bd-02ad-406e-9531-3844e5df4bfa,network=Network(126cf01f-b6da-4bbc-847b-2d16936986cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a3974bd-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:43:23 compute-0 nova_compute[254092]: 2025-11-25 16:43:23.971 254096 DEBUG os_vif [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:21:9c,bridge_name='br-int',has_traffic_filtering=True,id=2a3974bd-02ad-406e-9531-3844e5df4bfa,network=Network(126cf01f-b6da-4bbc-847b-2d16936986cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a3974bd-02') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:43:23 compute-0 nova_compute[254092]: 2025-11-25 16:43:23.971 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:23 compute-0 nova_compute[254092]: 2025-11-25 16:43:23.972 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:23 compute-0 nova_compute[254092]: 2025-11-25 16:43:23.972 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:43:23 compute-0 nova_compute[254092]: 2025-11-25 16:43:23.977 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:23 compute-0 nova_compute[254092]: 2025-11-25 16:43:23.978 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2a3974bd-02, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:23 compute-0 nova_compute[254092]: 2025-11-25 16:43:23.978 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2a3974bd-02, col_values=(('external_ids', {'iface-id': '2a3974bd-02ad-406e-9531-3844e5df4bfa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:02:21:9c', 'vm-uuid': 'f5a259d0-4460-4335-aa4a-f874f93a7e93'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:23 compute-0 nova_compute[254092]: 2025-11-25 16:43:23.980 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:23 compute-0 NetworkManager[48891]: <info>  [1764089003.9811] manager: (tap2a3974bd-02): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/303)
Nov 25 16:43:23 compute-0 nova_compute[254092]: 2025-11-25 16:43:23.982 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:43:23 compute-0 nova_compute[254092]: 2025-11-25 16:43:23.985 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:23 compute-0 nova_compute[254092]: 2025-11-25 16:43:23.986 254096 INFO os_vif [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:21:9c,bridge_name='br-int',has_traffic_filtering=True,id=2a3974bd-02ad-406e-9531-3844e5df4bfa,network=Network(126cf01f-b6da-4bbc-847b-2d16936986cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a3974bd-02')
Nov 25 16:43:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:24.026 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3d11e085-a483-440d-9b9f-fbd492e952c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:24.028 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap126cf01f-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:24.028 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:43:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:24.028 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap126cf01f-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.030 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:24 compute-0 NetworkManager[48891]: <info>  [1764089004.0318] manager: (tap126cf01f-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/304)
Nov 25 16:43:24 compute-0 kernel: tap126cf01f-b0: entered promiscuous mode
Nov 25 16:43:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:24.038 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap126cf01f-b0, col_values=(('external_ids', {'iface-id': '41886c6c-e968-4c0b-b7f6-75887ad8a7ea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.039 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:24 compute-0 ovn_controller[153477]: 2025-11-25T16:43:24Z|00696|binding|INFO|Releasing lport 41886c6c-e968-4c0b-b7f6-75887ad8a7ea from this chassis (sb_readonly=0)
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.062 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.062 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.063 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] No VIF found with MAC fa:16:3e:02:21:9c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.063 254096 INFO nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Using config drive
Nov 25 16:43:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:24.066 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/126cf01f-b6da-4bbc-847b-2d16936986cf.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/126cf01f-b6da-4bbc-847b-2d16936986cf.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:43:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:24.067 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9c2a817f-91ab-4f88-9b66-ede888906737]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:24.069 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:43:24 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:43:24 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:43:24 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-126cf01f-b6da-4bbc-847b-2d16936986cf
Nov 25 16:43:24 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:43:24 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:43:24 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:43:24 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/126cf01f-b6da-4bbc-847b-2d16936986cf.pid.haproxy
Nov 25 16:43:24 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:43:24 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:43:24 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:43:24 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:43:24 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:43:24 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:43:24 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:43:24 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:43:24 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:43:24 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:43:24 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:43:24 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:43:24 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:43:24 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:43:24 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:43:24 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:43:24 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:43:24 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:43:24 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:43:24 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:43:24 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 126cf01f-b6da-4bbc-847b-2d16936986cf
Nov 25 16:43:24 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:43:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:24.071 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf', 'env', 'PROCESS_TAG=haproxy-126cf01f-b6da-4bbc-847b-2d16936986cf', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/126cf01f-b6da-4bbc-847b-2d16936986cf.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.094 254096 DEBUG nova.storage.rbd_utils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] rbd image f5a259d0-4460-4335-aa4a-f874f93a7e93_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.102 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:24 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1960710798' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:43:24 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1971467510' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.158 254096 DEBUG nova.network.neutron [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Updating instance_info_cache with network_info: [{"id": "a297c9f1-753f-4f96-b8e4-38a42969484d", "address": "fa:16:3e:1e:28:6d", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa297c9f1-75", "ovs_interfaceid": "a297c9f1-753f-4f96-b8e4-38a42969484d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.177 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Releasing lock "refresh_cache-69b3cbbb-9713-4f49-9e67-1f33a3ae2642" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.178 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Instance network_info: |[{"id": "a297c9f1-753f-4f96-b8e4-38a42969484d", "address": "fa:16:3e:1e:28:6d", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa297c9f1-75", "ovs_interfaceid": "a297c9f1-753f-4f96-b8e4-38a42969484d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.179 254096 DEBUG oslo_concurrency.lockutils [req-bce81339-127a-4560-8029-301e367e6f28 req-9fd14b47-aacf-4cde-b2ca-41d08c05a86f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-69b3cbbb-9713-4f49-9e67-1f33a3ae2642" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.179 254096 DEBUG nova.network.neutron [req-bce81339-127a-4560-8029-301e367e6f28 req-9fd14b47-aacf-4cde-b2ca-41d08c05a86f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Refreshing network info cache for port a297c9f1-753f-4f96-b8e4-38a42969484d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.184 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Start _get_guest_xml network_info=[{"id": "a297c9f1-753f-4f96-b8e4-38a42969484d", "address": "fa:16:3e:1e:28:6d", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa297c9f1-75", "ovs_interfaceid": "a297c9f1-753f-4f96-b8e4-38a42969484d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.190 254096 WARNING nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.197 254096 DEBUG nova.virt.libvirt.host [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.198 254096 DEBUG nova.virt.libvirt.host [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.201 254096 DEBUG nova.virt.libvirt.host [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.201 254096 DEBUG nova.virt.libvirt.host [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.202 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.203 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.203 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.203 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.204 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.205 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.205 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.205 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.205 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.206 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.206 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.206 254096 DEBUG nova.virt.hardware [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.209 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:43:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:24.225 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:24 compute-0 podman[332038]: 2025-11-25 16:43:24.435098431 +0000 UTC m=+0.025093442 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.561 254096 DEBUG nova.compute.manager [req-aa41da87-8bae-4ffe-805d-7a9e4519efd9 req-b39c81ee-9d7f-421f-95fd-dd680e982f0c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Received event network-vif-plugged-146f0586-22f7-43d7-9a96-06459ea85508 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.563 254096 DEBUG oslo_concurrency.lockutils [req-aa41da87-8bae-4ffe-805d-7a9e4519efd9 req-b39c81ee-9d7f-421f-95fd-dd680e982f0c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "ef028cf3-f8af-4112-9424-8a12fdda7690-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.564 254096 DEBUG oslo_concurrency.lockutils [req-aa41da87-8bae-4ffe-805d-7a9e4519efd9 req-b39c81ee-9d7f-421f-95fd-dd680e982f0c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "ef028cf3-f8af-4112-9424-8a12fdda7690-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.564 254096 DEBUG oslo_concurrency.lockutils [req-aa41da87-8bae-4ffe-805d-7a9e4519efd9 req-b39c81ee-9d7f-421f-95fd-dd680e982f0c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "ef028cf3-f8af-4112-9424-8a12fdda7690-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.565 254096 DEBUG nova.compute.manager [req-aa41da87-8bae-4ffe-805d-7a9e4519efd9 req-b39c81ee-9d7f-421f-95fd-dd680e982f0c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Processing event network-vif-plugged-146f0586-22f7-43d7-9a96-06459ea85508 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.619 254096 INFO nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Creating config drive at /var/lib/nova/instances/f5a259d0-4460-4335-aa4a-f874f93a7e93/disk.config
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.629 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f5a259d0-4460-4335-aa4a-f874f93a7e93/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgpdmsrhy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:43:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:43:24 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/444421611' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.689 254096 DEBUG nova.network.neutron [req-d85c7eb4-a778-469c-bd4d-77ff5fc3b25f req-629af8b7-0649-45b4-af05-a8d7516a0246 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Updated VIF entry in instance network info cache for port 2a3974bd-02ad-406e-9531-3844e5df4bfa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.690 254096 DEBUG nova.network.neutron [req-d85c7eb4-a778-469c-bd4d-77ff5fc3b25f req-629af8b7-0649-45b4-af05-a8d7516a0246 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Updating instance_info_cache with network_info: [{"id": "2a3974bd-02ad-406e-9531-3844e5df4bfa", "address": "fa:16:3e:02:21:9c", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a3974bd-02", "ovs_interfaceid": "2a3974bd-02ad-406e-9531-3844e5df4bfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.695 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:43:24 compute-0 podman[332038]: 2025-11-25 16:43:24.726011493 +0000 UTC m=+0.316006494 container create b39a7ce3f61e8aca415dd9cd88bf447069eba7b7cb1c5c68f82d9eff0db236da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.729 254096 DEBUG nova.storage.rbd_utils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] rbd image 69b3cbbb-9713-4f49-9e67-1f33a3ae2642_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.738 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.771 254096 DEBUG oslo_concurrency.lockutils [req-d85c7eb4-a778-469c-bd4d-77ff5fc3b25f req-629af8b7-0649-45b4-af05-a8d7516a0246 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-f5a259d0-4460-4335-aa4a-f874f93a7e93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:43:24 compute-0 systemd[1]: Started libpod-conmon-b39a7ce3f61e8aca415dd9cd88bf447069eba7b7cb1c5c68f82d9eff0db236da.scope.
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.797 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f5a259d0-4460-4335-aa4a-f874f93a7e93/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgpdmsrhy" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:43:24 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.830 254096 DEBUG nova.storage.rbd_utils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] rbd image f5a259d0-4460-4335-aa4a-f874f93a7e93_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:43:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/965eee395a059bda45c6a5b38023a9548ac21b0145fb7c81bf8446d934d09752/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.837 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f5a259d0-4460-4335-aa4a-f874f93a7e93/disk.config f5a259d0-4460-4335-aa4a-f874f93a7e93_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.868 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089004.8052185, ef028cf3-f8af-4112-9424-8a12fdda7690 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.869 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] VM Started (Lifecycle Event)
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.872 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:43:24 compute-0 podman[332038]: 2025-11-25 16:43:24.877351919 +0000 UTC m=+0.467346950 container init b39a7ce3f61e8aca415dd9cd88bf447069eba7b7cb1c5c68f82d9eff0db236da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.883 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:43:24 compute-0 podman[332038]: 2025-11-25 16:43:24.886632171 +0000 UTC m=+0.476627172 container start b39a7ce3f61e8aca415dd9cd88bf447069eba7b7cb1c5c68f82d9eff0db236da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS)
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.891 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.907 254096 INFO nova.virt.libvirt.driver [-] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Instance spawned successfully.
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.908 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:43:24 compute-0 neutron-haproxy-ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf[332121]: [NOTICE]   (332155) : New worker (332180) forked
Nov 25 16:43:24 compute-0 neutron-haproxy-ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf[332121]: [NOTICE]   (332155) : Loading success.
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.925 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.930 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.930 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.931 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.931 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.931 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.932 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.961 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.962 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089004.8053908, ef028cf3-f8af-4112-9424-8a12fdda7690 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.962 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] VM Paused (Lifecycle Event)
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.991 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.994 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089004.8753698, ef028cf3-f8af-4112-9424-8a12fdda7690 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:43:24 compute-0 nova_compute[254092]: 2025-11-25 16:43:24.994 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] VM Resumed (Lifecycle Event)
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.004 254096 INFO nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Took 8.44 seconds to spawn the instance on the hypervisor.
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.004 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.016 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.020 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.040 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f5a259d0-4460-4335-aa4a-f874f93a7e93/disk.config f5a259d0-4460-4335-aa4a-f874f93a7e93_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.203s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.041 254096 INFO nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Deleting local config drive /var/lib/nova/instances/f5a259d0-4460-4335-aa4a-f874f93a7e93/disk.config because it was imported into RBD.
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.062 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.099 254096 INFO nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Took 9.53 seconds to build instance.
Nov 25 16:43:25 compute-0 kernel: tap2a3974bd-02: entered promiscuous mode
Nov 25 16:43:25 compute-0 NetworkManager[48891]: <info>  [1764089005.1089] manager: (tap2a3974bd-02): new Tun device (/org/freedesktop/NetworkManager/Devices/305)
Nov 25 16:43:25 compute-0 systemd-udevd[331953]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.113 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:25 compute-0 ovn_controller[153477]: 2025-11-25T16:43:25Z|00697|binding|INFO|Claiming lport 2a3974bd-02ad-406e-9531-3844e5df4bfa for this chassis.
Nov 25 16:43:25 compute-0 ovn_controller[153477]: 2025-11-25T16:43:25Z|00698|binding|INFO|2a3974bd-02ad-406e-9531-3844e5df4bfa: Claiming fa:16:3e:02:21:9c 10.100.0.9
Nov 25 16:43:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:25.120 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:21:9c 10.100.0.9'], port_security=['fa:16:3e:02:21:9c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'f5a259d0-4460-4335-aa4a-f874f93a7e93', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-126cf01f-b6da-4bbc-847b-2d16936986cf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b96c962def8e44a98e659bf2a55a8dcc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '71017c54-8c41-4e8e-be7c-5ebf5417c30b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff19d899-3f5c-4864-b6a7-e950e287f839, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=2a3974bd-02ad-406e-9531-3844e5df4bfa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:43:25 compute-0 ceph-mon[74985]: pgmap v1740: 321 pgs: 321 active+clean; 227 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 5.3 MiB/s wr, 141 op/s
Nov 25 16:43:25 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/444421611' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:43:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:25.123 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 2a3974bd-02ad-406e-9531-3844e5df4bfa in datapath 126cf01f-b6da-4bbc-847b-2d16936986cf bound to our chassis
Nov 25 16:43:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:25.126 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 126cf01f-b6da-4bbc-847b-2d16936986cf
Nov 25 16:43:25 compute-0 NetworkManager[48891]: <info>  [1764089005.1299] device (tap2a3974bd-02): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:43:25 compute-0 ovn_controller[153477]: 2025-11-25T16:43:25Z|00699|binding|INFO|Setting lport 2a3974bd-02ad-406e-9531-3844e5df4bfa ovn-installed in OVS
Nov 25 16:43:25 compute-0 ovn_controller[153477]: 2025-11-25T16:43:25Z|00700|binding|INFO|Setting lport 2a3974bd-02ad-406e-9531-3844e5df4bfa up in Southbound
Nov 25 16:43:25 compute-0 NetworkManager[48891]: <info>  [1764089005.1312] device (tap2a3974bd-02): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.135 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "ef028cf3-f8af-4112-9424-8a12fdda7690" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.136 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:25.154 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9e1f3124-74ce-41d8-9407-14cd9413077a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:25 compute-0 systemd-machined[216343]: New machine qemu-87-instance-0000004a.
Nov 25 16:43:25 compute-0 systemd[1]: Started Virtual Machine qemu-87-instance-0000004a.
Nov 25 16:43:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:25.199 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[abf57aed-7cf9-45d7-bd01-a3bb8bbcc85b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:25.204 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9994441e-71a0-4f8e-9498-87862607895e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:43:25 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2719824996' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:43:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:25.254 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f48f4ac1-f63e-49b6-8bc1-381565a2adbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.266 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.268 254096 DEBUG nova.virt.libvirt.vif [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:43:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-206851324',display_name='tempest-ListServersNegativeTestJSON-server-206851324-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-206851324-3',id=75,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b96c962def8e44a98e659bf2a55a8dcc',ramdisk_id='',reservation_id='r-4m4xi3n8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-999655333',owner_user_name='tempest-ListServersNegativeTestJSON-999655333-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:43:18Z,user_data=None,user_id='1a8a49326e3040eea57c8e1a61660f19',uuid=69b3cbbb-9713-4f49-9e67-1f33a3ae2642,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a297c9f1-753f-4f96-b8e4-38a42969484d", "address": "fa:16:3e:1e:28:6d", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa297c9f1-75", "ovs_interfaceid": "a297c9f1-753f-4f96-b8e4-38a42969484d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.269 254096 DEBUG nova.network.os_vif_util [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Converting VIF {"id": "a297c9f1-753f-4f96-b8e4-38a42969484d", "address": "fa:16:3e:1e:28:6d", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa297c9f1-75", "ovs_interfaceid": "a297c9f1-753f-4f96-b8e4-38a42969484d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.269 254096 DEBUG nova.network.os_vif_util [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:28:6d,bridge_name='br-int',has_traffic_filtering=True,id=a297c9f1-753f-4f96-b8e4-38a42969484d,network=Network(126cf01f-b6da-4bbc-847b-2d16936986cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa297c9f1-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.271 254096 DEBUG nova.objects.instance [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lazy-loading 'pci_devices' on Instance uuid 69b3cbbb-9713-4f49-9e67-1f33a3ae2642 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.283 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:43:25 compute-0 nova_compute[254092]:   <uuid>69b3cbbb-9713-4f49-9e67-1f33a3ae2642</uuid>
Nov 25 16:43:25 compute-0 nova_compute[254092]:   <name>instance-0000004b</name>
Nov 25 16:43:25 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:43:25 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:43:25 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:43:25 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:       <nova:name>tempest-ListServersNegativeTestJSON-server-206851324-3</nova:name>
Nov 25 16:43:25 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:43:24</nova:creationTime>
Nov 25 16:43:25 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:43:25 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:43:25 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:43:25 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:43:25 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:43:25 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:43:25 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:43:25 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:43:25 compute-0 nova_compute[254092]:         <nova:user uuid="1a8a49326e3040eea57c8e1a61660f19">tempest-ListServersNegativeTestJSON-999655333-project-member</nova:user>
Nov 25 16:43:25 compute-0 nova_compute[254092]:         <nova:project uuid="b96c962def8e44a98e659bf2a55a8dcc">tempest-ListServersNegativeTestJSON-999655333</nova:project>
Nov 25 16:43:25 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:43:25 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:43:25 compute-0 nova_compute[254092]:         <nova:port uuid="a297c9f1-753f-4f96-b8e4-38a42969484d">
Nov 25 16:43:25 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:43:25 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:43:25 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:43:25 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:43:25 compute-0 nova_compute[254092]:     <system>
Nov 25 16:43:25 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:43:25 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:43:25 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:43:25 compute-0 nova_compute[254092]:       <entry name="serial">69b3cbbb-9713-4f49-9e67-1f33a3ae2642</entry>
Nov 25 16:43:25 compute-0 nova_compute[254092]:       <entry name="uuid">69b3cbbb-9713-4f49-9e67-1f33a3ae2642</entry>
Nov 25 16:43:25 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     </system>
Nov 25 16:43:25 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:43:25 compute-0 nova_compute[254092]:   <os>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:   </os>
Nov 25 16:43:25 compute-0 nova_compute[254092]:   <features>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:   </features>
Nov 25 16:43:25 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:43:25 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:43:25 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:43:25 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:43:25 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:43:25 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/69b3cbbb-9713-4f49-9e67-1f33a3ae2642_disk">
Nov 25 16:43:25 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:       </source>
Nov 25 16:43:25 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:43:25 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:43:25 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:43:25 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/69b3cbbb-9713-4f49-9e67-1f33a3ae2642_disk.config">
Nov 25 16:43:25 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:       </source>
Nov 25 16:43:25 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:43:25 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:43:25 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:43:25 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:1e:28:6d"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:       <target dev="tapa297c9f1-75"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:43:25 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/69b3cbbb-9713-4f49-9e67-1f33a3ae2642/console.log" append="off"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     <video>
Nov 25 16:43:25 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     </video>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:43:25 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:43:25 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:43:25 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:43:25 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:43:25 compute-0 nova_compute[254092]: </domain>
Nov 25 16:43:25 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.284 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Preparing to wait for external event network-vif-plugged-a297c9f1-753f-4f96-b8e4-38a42969484d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.284 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "69b3cbbb-9713-4f49-9e67-1f33a3ae2642-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.285 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "69b3cbbb-9713-4f49-9e67-1f33a3ae2642-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.285 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "69b3cbbb-9713-4f49-9e67-1f33a3ae2642-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.286 254096 DEBUG nova.virt.libvirt.vif [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:43:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-206851324',display_name='tempest-ListServersNegativeTestJSON-server-206851324-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-206851324-3',id=75,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b96c962def8e44a98e659bf2a55a8dcc',ramdisk_id='',reservation_id='r-4m4xi3n8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-999655333',owner_user_name='tempest-ListServersNegativeTestJSON-999655333-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:43:18Z,user_data=None,user_id='1a8a49326e3040eea57c8e1a61660f19',uuid=69b3cbbb-9713-4f49-9e67-1f33a3ae2642,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a297c9f1-753f-4f96-b8e4-38a42969484d", "address": "fa:16:3e:1e:28:6d", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa297c9f1-75", "ovs_interfaceid": "a297c9f1-753f-4f96-b8e4-38a42969484d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.287 254096 DEBUG nova.network.os_vif_util [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Converting VIF {"id": "a297c9f1-753f-4f96-b8e4-38a42969484d", "address": "fa:16:3e:1e:28:6d", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa297c9f1-75", "ovs_interfaceid": "a297c9f1-753f-4f96-b8e4-38a42969484d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.288 254096 DEBUG nova.network.os_vif_util [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:28:6d,bridge_name='br-int',has_traffic_filtering=True,id=a297c9f1-753f-4f96-b8e4-38a42969484d,network=Network(126cf01f-b6da-4bbc-847b-2d16936986cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa297c9f1-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.288 254096 DEBUG os_vif [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:28:6d,bridge_name='br-int',has_traffic_filtering=True,id=a297c9f1-753f-4f96-b8e4-38a42969484d,network=Network(126cf01f-b6da-4bbc-847b-2d16936986cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa297c9f1-75') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.289 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.289 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.289 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.291 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.291 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa297c9f1-75, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.292 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa297c9f1-75, col_values=(('external_ids', {'iface-id': 'a297c9f1-753f-4f96-b8e4-38a42969484d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1e:28:6d', 'vm-uuid': '69b3cbbb-9713-4f49-9e67-1f33a3ae2642'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:25 compute-0 ovn_controller[153477]: 2025-11-25T16:43:25Z|00076|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ff:a0:1c 10.100.0.11
Nov 25 16:43:25 compute-0 ovn_controller[153477]: 2025-11-25T16:43:25Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ff:a0:1c 10.100.0.11
Nov 25 16:43:25 compute-0 NetworkManager[48891]: <info>  [1764089005.2941] manager: (tapa297c9f1-75): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/306)
Nov 25 16:43:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:25.295 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9d2df270-e5a5-460c-8cc7-256cd7d39935]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap126cf01f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:03:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 5, 'tx_packets': 5, 'rx_bytes': 442, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 5, 'tx_packets': 5, 'rx_bytes': 442, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 203], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 546143, 'reachable_time': 36845, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 5, 'inoctets': 372, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 5, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 372, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 5, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332219, 'error': None, 'target': 'ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.295 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.298 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.299 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.299 254096 INFO os_vif [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:28:6d,bridge_name='br-int',has_traffic_filtering=True,id=a297c9f1-753f-4f96-b8e4-38a42969484d,network=Network(126cf01f-b6da-4bbc-847b-2d16936986cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa297c9f1-75')
Nov 25 16:43:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:25.320 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e54bbc82-d69a-4b7d-bcae-a43beebcf57b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap126cf01f-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 546156, 'tstamp': 546156}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332221, 'error': None, 'target': 'ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap126cf01f-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 546161, 'tstamp': 546161}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332221, 'error': None, 'target': 'ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:25.322 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap126cf01f-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.323 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.327 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:25.328 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap126cf01f-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:25.328 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:43:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:25.329 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap126cf01f-b0, col_values=(('external_ids', {'iface-id': '41886c6c-e968-4c0b-b7f6-75887ad8a7ea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:25.329 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.347 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.347 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.347 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] No VIF found with MAC fa:16:3e:1e:28:6d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.347 254096 INFO nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Using config drive
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.367 254096 DEBUG nova.storage.rbd_utils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] rbd image 69b3cbbb-9713-4f49-9e67-1f33a3ae2642_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.627 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089005.6266854, f5a259d0-4460-4335-aa4a-f874f93a7e93 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.627 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] VM Started (Lifecycle Event)
Nov 25 16:43:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1741: 321 pgs: 321 active+clean; 241 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 6.6 MiB/s wr, 179 op/s
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.651 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.655 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089005.629937, f5a259d0-4460-4335-aa4a-f874f93a7e93 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.655 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] VM Paused (Lifecycle Event)
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.671 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.674 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:43:25 compute-0 nova_compute[254092]: 2025-11-25 16:43:25.687 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:43:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:43:26 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2719824996' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:43:26 compute-0 nova_compute[254092]: 2025-11-25 16:43:26.207 254096 INFO nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Creating config drive at /var/lib/nova/instances/69b3cbbb-9713-4f49-9e67-1f33a3ae2642/disk.config
Nov 25 16:43:26 compute-0 nova_compute[254092]: 2025-11-25 16:43:26.217 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/69b3cbbb-9713-4f49-9e67-1f33a3ae2642/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi_vktite execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:43:26 compute-0 nova_compute[254092]: 2025-11-25 16:43:26.368 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/69b3cbbb-9713-4f49-9e67-1f33a3ae2642/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi_vktite" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:43:26 compute-0 nova_compute[254092]: 2025-11-25 16:43:26.408 254096 DEBUG nova.storage.rbd_utils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] rbd image 69b3cbbb-9713-4f49-9e67-1f33a3ae2642_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:43:26 compute-0 nova_compute[254092]: 2025-11-25 16:43:26.415 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/69b3cbbb-9713-4f49-9e67-1f33a3ae2642/disk.config 69b3cbbb-9713-4f49-9e67-1f33a3ae2642_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:43:26 compute-0 nova_compute[254092]: 2025-11-25 16:43:26.465 254096 DEBUG nova.compute.manager [req-1dcf34dd-86c7-48c7-b7df-0ef0285ba2b5 req-ac2445f9-6faa-4994-8c1b-d0894935fbf0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Received event network-vif-plugged-2a3974bd-02ad-406e-9531-3844e5df4bfa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:43:26 compute-0 nova_compute[254092]: 2025-11-25 16:43:26.466 254096 DEBUG oslo_concurrency.lockutils [req-1dcf34dd-86c7-48c7-b7df-0ef0285ba2b5 req-ac2445f9-6faa-4994-8c1b-d0894935fbf0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f5a259d0-4460-4335-aa4a-f874f93a7e93-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:26 compute-0 nova_compute[254092]: 2025-11-25 16:43:26.467 254096 DEBUG oslo_concurrency.lockutils [req-1dcf34dd-86c7-48c7-b7df-0ef0285ba2b5 req-ac2445f9-6faa-4994-8c1b-d0894935fbf0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f5a259d0-4460-4335-aa4a-f874f93a7e93-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:26 compute-0 nova_compute[254092]: 2025-11-25 16:43:26.467 254096 DEBUG oslo_concurrency.lockutils [req-1dcf34dd-86c7-48c7-b7df-0ef0285ba2b5 req-ac2445f9-6faa-4994-8c1b-d0894935fbf0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f5a259d0-4460-4335-aa4a-f874f93a7e93-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:26 compute-0 nova_compute[254092]: 2025-11-25 16:43:26.467 254096 DEBUG nova.compute.manager [req-1dcf34dd-86c7-48c7-b7df-0ef0285ba2b5 req-ac2445f9-6faa-4994-8c1b-d0894935fbf0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Processing event network-vif-plugged-2a3974bd-02ad-406e-9531-3844e5df4bfa _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:43:26 compute-0 nova_compute[254092]: 2025-11-25 16:43:26.468 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:43:26 compute-0 nova_compute[254092]: 2025-11-25 16:43:26.471 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089006.4703562, f5a259d0-4460-4335-aa4a-f874f93a7e93 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:43:26 compute-0 nova_compute[254092]: 2025-11-25 16:43:26.471 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] VM Resumed (Lifecycle Event)
Nov 25 16:43:26 compute-0 nova_compute[254092]: 2025-11-25 16:43:26.474 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:43:26 compute-0 nova_compute[254092]: 2025-11-25 16:43:26.477 254096 INFO nova.virt.libvirt.driver [-] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Instance spawned successfully.
Nov 25 16:43:26 compute-0 nova_compute[254092]: 2025-11-25 16:43:26.477 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:43:26 compute-0 nova_compute[254092]: 2025-11-25 16:43:26.505 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:43:26 compute-0 nova_compute[254092]: 2025-11-25 16:43:26.513 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:43:26 compute-0 nova_compute[254092]: 2025-11-25 16:43:26.517 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:43:26 compute-0 nova_compute[254092]: 2025-11-25 16:43:26.518 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:43:26 compute-0 nova_compute[254092]: 2025-11-25 16:43:26.518 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:43:26 compute-0 nova_compute[254092]: 2025-11-25 16:43:26.519 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:43:26 compute-0 nova_compute[254092]: 2025-11-25 16:43:26.519 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:43:26 compute-0 nova_compute[254092]: 2025-11-25 16:43:26.519 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:43:26 compute-0 nova_compute[254092]: 2025-11-25 16:43:26.543 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:43:26 compute-0 nova_compute[254092]: 2025-11-25 16:43:26.572 254096 DEBUG oslo_concurrency.processutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/69b3cbbb-9713-4f49-9e67-1f33a3ae2642/disk.config 69b3cbbb-9713-4f49-9e67-1f33a3ae2642_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:43:26 compute-0 nova_compute[254092]: 2025-11-25 16:43:26.572 254096 INFO nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Deleting local config drive /var/lib/nova/instances/69b3cbbb-9713-4f49-9e67-1f33a3ae2642/disk.config because it was imported into RBD.
Nov 25 16:43:26 compute-0 nova_compute[254092]: 2025-11-25 16:43:26.582 254096 INFO nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Took 9.28 seconds to spawn the instance on the hypervisor.
Nov 25 16:43:26 compute-0 nova_compute[254092]: 2025-11-25 16:43:26.583 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:43:26 compute-0 NetworkManager[48891]: <info>  [1764089006.6177] manager: (tapa297c9f1-75): new Tun device (/org/freedesktop/NetworkManager/Devices/307)
Nov 25 16:43:26 compute-0 kernel: tapa297c9f1-75: entered promiscuous mode
Nov 25 16:43:26 compute-0 ovn_controller[153477]: 2025-11-25T16:43:26Z|00701|binding|INFO|Claiming lport a297c9f1-753f-4f96-b8e4-38a42969484d for this chassis.
Nov 25 16:43:26 compute-0 ovn_controller[153477]: 2025-11-25T16:43:26Z|00702|binding|INFO|a297c9f1-753f-4f96-b8e4-38a42969484d: Claiming fa:16:3e:1e:28:6d 10.100.0.8
Nov 25 16:43:26 compute-0 nova_compute[254092]: 2025-11-25 16:43:26.619 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:26.626 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:28:6d 10.100.0.8'], port_security=['fa:16:3e:1e:28:6d 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '69b3cbbb-9713-4f49-9e67-1f33a3ae2642', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-126cf01f-b6da-4bbc-847b-2d16936986cf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b96c962def8e44a98e659bf2a55a8dcc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '71017c54-8c41-4e8e-be7c-5ebf5417c30b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff19d899-3f5c-4864-b6a7-e950e287f839, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=a297c9f1-753f-4f96-b8e4-38a42969484d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:43:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:26.627 163338 INFO neutron.agent.ovn.metadata.agent [-] Port a297c9f1-753f-4f96-b8e4-38a42969484d in datapath 126cf01f-b6da-4bbc-847b-2d16936986cf bound to our chassis
Nov 25 16:43:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:26.629 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 126cf01f-b6da-4bbc-847b-2d16936986cf
Nov 25 16:43:26 compute-0 NetworkManager[48891]: <info>  [1764089006.6326] device (tapa297c9f1-75): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:43:26 compute-0 NetworkManager[48891]: <info>  [1764089006.6335] device (tapa297c9f1-75): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:43:26 compute-0 ovn_controller[153477]: 2025-11-25T16:43:26Z|00703|binding|INFO|Setting lport a297c9f1-753f-4f96-b8e4-38a42969484d ovn-installed in OVS
Nov 25 16:43:26 compute-0 ovn_controller[153477]: 2025-11-25T16:43:26Z|00704|binding|INFO|Setting lport a297c9f1-753f-4f96-b8e4-38a42969484d up in Southbound
Nov 25 16:43:26 compute-0 nova_compute[254092]: 2025-11-25 16:43:26.645 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:26 compute-0 nova_compute[254092]: 2025-11-25 16:43:26.649 254096 INFO nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Took 10.99 seconds to build instance.
Nov 25 16:43:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:26.652 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[121d79d3-5c8f-4735-8faa-40fa47731404]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:26 compute-0 systemd-machined[216343]: New machine qemu-88-instance-0000004b.
Nov 25 16:43:26 compute-0 nova_compute[254092]: 2025-11-25 16:43:26.668 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "f5a259d0-4460-4335-aa4a-f874f93a7e93" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.133s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:26 compute-0 nova_compute[254092]: 2025-11-25 16:43:26.670 254096 DEBUG nova.compute.manager [req-370cca18-7cac-4cf0-9d31-9bb059d05264 req-845e9698-4d74-49c1-9b35-1ff70ebb0222 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Received event network-vif-plugged-146f0586-22f7-43d7-9a96-06459ea85508 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:43:26 compute-0 nova_compute[254092]: 2025-11-25 16:43:26.671 254096 DEBUG oslo_concurrency.lockutils [req-370cca18-7cac-4cf0-9d31-9bb059d05264 req-845e9698-4d74-49c1-9b35-1ff70ebb0222 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "ef028cf3-f8af-4112-9424-8a12fdda7690-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:26 compute-0 nova_compute[254092]: 2025-11-25 16:43:26.671 254096 DEBUG oslo_concurrency.lockutils [req-370cca18-7cac-4cf0-9d31-9bb059d05264 req-845e9698-4d74-49c1-9b35-1ff70ebb0222 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "ef028cf3-f8af-4112-9424-8a12fdda7690-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:26 compute-0 nova_compute[254092]: 2025-11-25 16:43:26.671 254096 DEBUG oslo_concurrency.lockutils [req-370cca18-7cac-4cf0-9d31-9bb059d05264 req-845e9698-4d74-49c1-9b35-1ff70ebb0222 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "ef028cf3-f8af-4112-9424-8a12fdda7690-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:26 compute-0 nova_compute[254092]: 2025-11-25 16:43:26.671 254096 DEBUG nova.compute.manager [req-370cca18-7cac-4cf0-9d31-9bb059d05264 req-845e9698-4d74-49c1-9b35-1ff70ebb0222 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] No waiting events found dispatching network-vif-plugged-146f0586-22f7-43d7-9a96-06459ea85508 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:43:26 compute-0 nova_compute[254092]: 2025-11-25 16:43:26.672 254096 WARNING nova.compute.manager [req-370cca18-7cac-4cf0-9d31-9bb059d05264 req-845e9698-4d74-49c1-9b35-1ff70ebb0222 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Received unexpected event network-vif-plugged-146f0586-22f7-43d7-9a96-06459ea85508 for instance with vm_state active and task_state None.
Nov 25 16:43:26 compute-0 systemd[1]: Started Virtual Machine qemu-88-instance-0000004b.
Nov 25 16:43:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:26.678 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[bf56c87d-58cb-4827-813e-3b28e4e6060c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:26.682 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[00478d77-50f7-46ba-bd50-4ad681ad08ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:26.709 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1db08130-585f-4996-85e1-f05f0095dd1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:26.726 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ba44a919-af9e-4fea-bdef-671c0f4b6940]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap126cf01f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:03:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 203], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 546143, 'reachable_time': 36845, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332346, 'error': None, 'target': 'ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:26.743 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[958b6abd-0145-4af8-92e9-da8171425f63]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap126cf01f-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 546156, 'tstamp': 546156}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332351, 'error': None, 'target': 'ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap126cf01f-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 546161, 'tstamp': 546161}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332351, 'error': None, 'target': 'ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:26.744 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap126cf01f-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:26 compute-0 nova_compute[254092]: 2025-11-25 16:43:26.746 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:26.747 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap126cf01f-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:26.747 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:43:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:26.748 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap126cf01f-b0, col_values=(('external_ids', {'iface-id': '41886c6c-e968-4c0b-b7f6-75887ad8a7ea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:26.748 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:43:27 compute-0 nova_compute[254092]: 2025-11-25 16:43:27.049 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089007.0487735, 69b3cbbb-9713-4f49-9e67-1f33a3ae2642 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:43:27 compute-0 nova_compute[254092]: 2025-11-25 16:43:27.049 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] VM Started (Lifecycle Event)
Nov 25 16:43:27 compute-0 nova_compute[254092]: 2025-11-25 16:43:27.073 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:43:27 compute-0 nova_compute[254092]: 2025-11-25 16:43:27.077 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089007.0537202, 69b3cbbb-9713-4f49-9e67-1f33a3ae2642 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:43:27 compute-0 nova_compute[254092]: 2025-11-25 16:43:27.077 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] VM Paused (Lifecycle Event)
Nov 25 16:43:27 compute-0 nova_compute[254092]: 2025-11-25 16:43:27.092 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:43:27 compute-0 nova_compute[254092]: 2025-11-25 16:43:27.095 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:43:27 compute-0 nova_compute[254092]: 2025-11-25 16:43:27.112 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:43:27 compute-0 nova_compute[254092]: 2025-11-25 16:43:27.117 254096 DEBUG nova.network.neutron [req-bce81339-127a-4560-8029-301e367e6f28 req-9fd14b47-aacf-4cde-b2ca-41d08c05a86f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Updated VIF entry in instance network info cache for port a297c9f1-753f-4f96-b8e4-38a42969484d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:43:27 compute-0 nova_compute[254092]: 2025-11-25 16:43:27.118 254096 DEBUG nova.network.neutron [req-bce81339-127a-4560-8029-301e367e6f28 req-9fd14b47-aacf-4cde-b2ca-41d08c05a86f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Updating instance_info_cache with network_info: [{"id": "a297c9f1-753f-4f96-b8e4-38a42969484d", "address": "fa:16:3e:1e:28:6d", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa297c9f1-75", "ovs_interfaceid": "a297c9f1-753f-4f96-b8e4-38a42969484d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:43:27 compute-0 nova_compute[254092]: 2025-11-25 16:43:27.129 254096 DEBUG oslo_concurrency.lockutils [req-bce81339-127a-4560-8029-301e367e6f28 req-9fd14b47-aacf-4cde-b2ca-41d08c05a86f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-69b3cbbb-9713-4f49-9e67-1f33a3ae2642" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:43:27 compute-0 ceph-mon[74985]: pgmap v1741: 321 pgs: 321 active+clean; 241 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 6.6 MiB/s wr, 179 op/s
Nov 25 16:43:27 compute-0 nova_compute[254092]: 2025-11-25 16:43:27.587 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1742: 321 pgs: 321 active+clean; 248 MiB data, 664 MiB used, 59 GiB / 60 GiB avail; 983 KiB/s rd, 7.4 MiB/s wr, 163 op/s
Nov 25 16:43:28 compute-0 nova_compute[254092]: 2025-11-25 16:43:28.302 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:28 compute-0 ceph-mon[74985]: pgmap v1742: 321 pgs: 321 active+clean; 248 MiB data, 664 MiB used, 59 GiB / 60 GiB avail; 983 KiB/s rd, 7.4 MiB/s wr, 163 op/s
Nov 25 16:43:28 compute-0 nova_compute[254092]: 2025-11-25 16:43:28.571 254096 DEBUG nova.compute.manager [req-abf61367-949e-4159-81ee-31509b6fd220 req-60d08eaa-94ff-49ce-a002-2d6fa6311691 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Received event network-vif-plugged-2a3974bd-02ad-406e-9531-3844e5df4bfa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:43:28 compute-0 nova_compute[254092]: 2025-11-25 16:43:28.572 254096 DEBUG oslo_concurrency.lockutils [req-abf61367-949e-4159-81ee-31509b6fd220 req-60d08eaa-94ff-49ce-a002-2d6fa6311691 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f5a259d0-4460-4335-aa4a-f874f93a7e93-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:28 compute-0 nova_compute[254092]: 2025-11-25 16:43:28.572 254096 DEBUG oslo_concurrency.lockutils [req-abf61367-949e-4159-81ee-31509b6fd220 req-60d08eaa-94ff-49ce-a002-2d6fa6311691 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f5a259d0-4460-4335-aa4a-f874f93a7e93-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:28 compute-0 nova_compute[254092]: 2025-11-25 16:43:28.572 254096 DEBUG oslo_concurrency.lockutils [req-abf61367-949e-4159-81ee-31509b6fd220 req-60d08eaa-94ff-49ce-a002-2d6fa6311691 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f5a259d0-4460-4335-aa4a-f874f93a7e93-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:28 compute-0 nova_compute[254092]: 2025-11-25 16:43:28.572 254096 DEBUG nova.compute.manager [req-abf61367-949e-4159-81ee-31509b6fd220 req-60d08eaa-94ff-49ce-a002-2d6fa6311691 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] No waiting events found dispatching network-vif-plugged-2a3974bd-02ad-406e-9531-3844e5df4bfa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:43:28 compute-0 nova_compute[254092]: 2025-11-25 16:43:28.573 254096 WARNING nova.compute.manager [req-abf61367-949e-4159-81ee-31509b6fd220 req-60d08eaa-94ff-49ce-a002-2d6fa6311691 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Received unexpected event network-vif-plugged-2a3974bd-02ad-406e-9531-3844e5df4bfa for instance with vm_state active and task_state None.
Nov 25 16:43:28 compute-0 nova_compute[254092]: 2025-11-25 16:43:28.573 254096 DEBUG nova.compute.manager [req-abf61367-949e-4159-81ee-31509b6fd220 req-60d08eaa-94ff-49ce-a002-2d6fa6311691 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Received event network-vif-plugged-a297c9f1-753f-4f96-b8e4-38a42969484d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:43:28 compute-0 nova_compute[254092]: 2025-11-25 16:43:28.573 254096 DEBUG oslo_concurrency.lockutils [req-abf61367-949e-4159-81ee-31509b6fd220 req-60d08eaa-94ff-49ce-a002-2d6fa6311691 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "69b3cbbb-9713-4f49-9e67-1f33a3ae2642-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:28 compute-0 nova_compute[254092]: 2025-11-25 16:43:28.573 254096 DEBUG oslo_concurrency.lockutils [req-abf61367-949e-4159-81ee-31509b6fd220 req-60d08eaa-94ff-49ce-a002-2d6fa6311691 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "69b3cbbb-9713-4f49-9e67-1f33a3ae2642-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:28 compute-0 nova_compute[254092]: 2025-11-25 16:43:28.573 254096 DEBUG oslo_concurrency.lockutils [req-abf61367-949e-4159-81ee-31509b6fd220 req-60d08eaa-94ff-49ce-a002-2d6fa6311691 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "69b3cbbb-9713-4f49-9e67-1f33a3ae2642-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:28 compute-0 nova_compute[254092]: 2025-11-25 16:43:28.574 254096 DEBUG nova.compute.manager [req-abf61367-949e-4159-81ee-31509b6fd220 req-60d08eaa-94ff-49ce-a002-2d6fa6311691 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Processing event network-vif-plugged-a297c9f1-753f-4f96-b8e4-38a42969484d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:43:28 compute-0 nova_compute[254092]: 2025-11-25 16:43:28.574 254096 DEBUG nova.compute.manager [req-abf61367-949e-4159-81ee-31509b6fd220 req-60d08eaa-94ff-49ce-a002-2d6fa6311691 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Received event network-vif-plugged-a297c9f1-753f-4f96-b8e4-38a42969484d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:43:28 compute-0 nova_compute[254092]: 2025-11-25 16:43:28.574 254096 DEBUG oslo_concurrency.lockutils [req-abf61367-949e-4159-81ee-31509b6fd220 req-60d08eaa-94ff-49ce-a002-2d6fa6311691 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "69b3cbbb-9713-4f49-9e67-1f33a3ae2642-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:28 compute-0 nova_compute[254092]: 2025-11-25 16:43:28.574 254096 DEBUG oslo_concurrency.lockutils [req-abf61367-949e-4159-81ee-31509b6fd220 req-60d08eaa-94ff-49ce-a002-2d6fa6311691 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "69b3cbbb-9713-4f49-9e67-1f33a3ae2642-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:28 compute-0 nova_compute[254092]: 2025-11-25 16:43:28.574 254096 DEBUG oslo_concurrency.lockutils [req-abf61367-949e-4159-81ee-31509b6fd220 req-60d08eaa-94ff-49ce-a002-2d6fa6311691 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "69b3cbbb-9713-4f49-9e67-1f33a3ae2642-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:28 compute-0 nova_compute[254092]: 2025-11-25 16:43:28.574 254096 DEBUG nova.compute.manager [req-abf61367-949e-4159-81ee-31509b6fd220 req-60d08eaa-94ff-49ce-a002-2d6fa6311691 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] No waiting events found dispatching network-vif-plugged-a297c9f1-753f-4f96-b8e4-38a42969484d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:43:28 compute-0 nova_compute[254092]: 2025-11-25 16:43:28.575 254096 WARNING nova.compute.manager [req-abf61367-949e-4159-81ee-31509b6fd220 req-60d08eaa-94ff-49ce-a002-2d6fa6311691 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Received unexpected event network-vif-plugged-a297c9f1-753f-4f96-b8e4-38a42969484d for instance with vm_state building and task_state spawning.
Nov 25 16:43:28 compute-0 nova_compute[254092]: 2025-11-25 16:43:28.576 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:43:28 compute-0 nova_compute[254092]: 2025-11-25 16:43:28.639 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089008.6331325, 69b3cbbb-9713-4f49-9e67-1f33a3ae2642 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:43:28 compute-0 nova_compute[254092]: 2025-11-25 16:43:28.640 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] VM Resumed (Lifecycle Event)
Nov 25 16:43:28 compute-0 nova_compute[254092]: 2025-11-25 16:43:28.656 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:43:28 compute-0 nova_compute[254092]: 2025-11-25 16:43:28.657 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:43:28 compute-0 nova_compute[254092]: 2025-11-25 16:43:28.660 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:43:28 compute-0 nova_compute[254092]: 2025-11-25 16:43:28.663 254096 INFO nova.virt.libvirt.driver [-] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Instance spawned successfully.
Nov 25 16:43:28 compute-0 nova_compute[254092]: 2025-11-25 16:43:28.663 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:43:28 compute-0 nova_compute[254092]: 2025-11-25 16:43:28.681 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:43:28 compute-0 nova_compute[254092]: 2025-11-25 16:43:28.690 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:43:28 compute-0 nova_compute[254092]: 2025-11-25 16:43:28.690 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:43:28 compute-0 nova_compute[254092]: 2025-11-25 16:43:28.691 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:43:28 compute-0 nova_compute[254092]: 2025-11-25 16:43:28.691 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:43:28 compute-0 nova_compute[254092]: 2025-11-25 16:43:28.691 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:43:28 compute-0 nova_compute[254092]: 2025-11-25 16:43:28.692 254096 DEBUG nova.virt.libvirt.driver [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:43:28 compute-0 nova_compute[254092]: 2025-11-25 16:43:28.762 254096 INFO nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Took 10.64 seconds to spawn the instance on the hypervisor.
Nov 25 16:43:28 compute-0 nova_compute[254092]: 2025-11-25 16:43:28.762 254096 DEBUG nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:43:28 compute-0 nova_compute[254092]: 2025-11-25 16:43:28.820 254096 INFO nova.compute.manager [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Took 13.10 seconds to build instance.
Nov 25 16:43:28 compute-0 nova_compute[254092]: 2025-11-25 16:43:28.837 254096 DEBUG oslo_concurrency.lockutils [None req-efc5564e-395e-4fb5-b05d-95f95cc6c7ee 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "69b3cbbb-9713-4f49-9e67-1f33a3ae2642" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.270s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1743: 321 pgs: 321 active+clean; 248 MiB data, 664 MiB used, 59 GiB / 60 GiB avail; 558 KiB/s rd, 7.4 MiB/s wr, 149 op/s
Nov 25 16:43:30 compute-0 nova_compute[254092]: 2025-11-25 16:43:30.294 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:30 compute-0 ceph-mon[74985]: pgmap v1743: 321 pgs: 321 active+clean; 248 MiB data, 664 MiB used, 59 GiB / 60 GiB avail; 558 KiB/s rd, 7.4 MiB/s wr, 149 op/s
Nov 25 16:43:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:43:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1744: 321 pgs: 321 active+clean; 260 MiB data, 665 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 7.5 MiB/s wr, 362 op/s
Nov 25 16:43:32 compute-0 ceph-mon[74985]: pgmap v1744: 321 pgs: 321 active+clean; 260 MiB data, 665 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 7.5 MiB/s wr, 362 op/s
Nov 25 16:43:33 compute-0 nova_compute[254092]: 2025-11-25 16:43:33.079 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:33 compute-0 nova_compute[254092]: 2025-11-25 16:43:33.304 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1745: 321 pgs: 321 active+clean; 260 MiB data, 665 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 2.2 MiB/s wr, 281 op/s
Nov 25 16:43:34 compute-0 nova_compute[254092]: 2025-11-25 16:43:34.266 254096 DEBUG oslo_concurrency.lockutils [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "ef028cf3-f8af-4112-9424-8a12fdda7690" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:34 compute-0 nova_compute[254092]: 2025-11-25 16:43:34.267 254096 DEBUG oslo_concurrency.lockutils [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "ef028cf3-f8af-4112-9424-8a12fdda7690" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:34 compute-0 nova_compute[254092]: 2025-11-25 16:43:34.267 254096 DEBUG oslo_concurrency.lockutils [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "ef028cf3-f8af-4112-9424-8a12fdda7690-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:34 compute-0 nova_compute[254092]: 2025-11-25 16:43:34.267 254096 DEBUG oslo_concurrency.lockutils [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "ef028cf3-f8af-4112-9424-8a12fdda7690-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:34 compute-0 nova_compute[254092]: 2025-11-25 16:43:34.268 254096 DEBUG oslo_concurrency.lockutils [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "ef028cf3-f8af-4112-9424-8a12fdda7690-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:34 compute-0 nova_compute[254092]: 2025-11-25 16:43:34.269 254096 INFO nova.compute.manager [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Terminating instance
Nov 25 16:43:34 compute-0 nova_compute[254092]: 2025-11-25 16:43:34.270 254096 DEBUG nova.compute.manager [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:43:34 compute-0 kernel: tap146f0586-22 (unregistering): left promiscuous mode
Nov 25 16:43:34 compute-0 NetworkManager[48891]: <info>  [1764089014.3205] device (tap146f0586-22): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:43:34 compute-0 ovn_controller[153477]: 2025-11-25T16:43:34Z|00705|binding|INFO|Releasing lport 146f0586-22f7-43d7-9a96-06459ea85508 from this chassis (sb_readonly=0)
Nov 25 16:43:34 compute-0 ovn_controller[153477]: 2025-11-25T16:43:34Z|00706|binding|INFO|Setting lport 146f0586-22f7-43d7-9a96-06459ea85508 down in Southbound
Nov 25 16:43:34 compute-0 nova_compute[254092]: 2025-11-25 16:43:34.327 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:34 compute-0 ovn_controller[153477]: 2025-11-25T16:43:34Z|00707|binding|INFO|Removing iface tap146f0586-22 ovn-installed in OVS
Nov 25 16:43:34 compute-0 nova_compute[254092]: 2025-11-25 16:43:34.336 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:34.338 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:f4:12 10.100.0.3'], port_security=['fa:16:3e:62:f4:12 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ef028cf3-f8af-4112-9424-8a12fdda7690', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-126cf01f-b6da-4bbc-847b-2d16936986cf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b96c962def8e44a98e659bf2a55a8dcc', 'neutron:revision_number': '4', 'neutron:security_group_ids': '71017c54-8c41-4e8e-be7c-5ebf5417c30b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff19d899-3f5c-4864-b6a7-e950e287f839, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=146f0586-22f7-43d7-9a96-06459ea85508) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:43:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:34.341 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 146f0586-22f7-43d7-9a96-06459ea85508 in datapath 126cf01f-b6da-4bbc-847b-2d16936986cf unbound from our chassis
Nov 25 16:43:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:34.343 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 126cf01f-b6da-4bbc-847b-2d16936986cf
Nov 25 16:43:34 compute-0 nova_compute[254092]: 2025-11-25 16:43:34.346 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:34.377 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d0d0375b-1895-46ba-aa15-2927ffa11d7e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:34 compute-0 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d00000049.scope: Deactivated successfully.
Nov 25 16:43:34 compute-0 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d00000049.scope: Consumed 10.337s CPU time.
Nov 25 16:43:34 compute-0 systemd-machined[216343]: Machine qemu-86-instance-00000049 terminated.
Nov 25 16:43:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:34.409 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[071f5a97-e864-4b16-978c-53d42ab2246d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:34.412 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d3fd888d-dde8-49b1-8a5f-7b1c262eff6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:34.442 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[37e1d7a2-b31d-461b-b658-151f8ed7d1e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:34.465 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e7565498-6d28-470f-af23-68b38298dbfa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap126cf01f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:03:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 203], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 546143, 'reachable_time': 36845, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332407, 'error': None, 'target': 'ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:34.489 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[608d67ea-4a3a-4300-9d21-52386a115ed8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap126cf01f-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 546156, 'tstamp': 546156}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332408, 'error': None, 'target': 'ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap126cf01f-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 546161, 'tstamp': 546161}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332408, 'error': None, 'target': 'ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:34.494 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap126cf01f-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:34 compute-0 nova_compute[254092]: 2025-11-25 16:43:34.496 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:34 compute-0 nova_compute[254092]: 2025-11-25 16:43:34.505 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:34.506 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap126cf01f-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:34.507 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:43:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:34.507 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap126cf01f-b0, col_values=(('external_ids', {'iface-id': '41886c6c-e968-4c0b-b7f6-75887ad8a7ea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:34.508 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:43:34 compute-0 nova_compute[254092]: 2025-11-25 16:43:34.510 254096 INFO nova.virt.libvirt.driver [-] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Instance destroyed successfully.
Nov 25 16:43:34 compute-0 nova_compute[254092]: 2025-11-25 16:43:34.511 254096 DEBUG nova.objects.instance [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lazy-loading 'resources' on Instance uuid ef028cf3-f8af-4112-9424-8a12fdda7690 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:43:34 compute-0 nova_compute[254092]: 2025-11-25 16:43:34.525 254096 DEBUG nova.virt.libvirt.vif [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:43:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-206851324',display_name='tempest-ListServersNegativeTestJSON-server-206851324-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-206851324-1',id=73,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:43:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b96c962def8e44a98e659bf2a55a8dcc',ramdisk_id='',reservation_id='r-4m4xi3n8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model
='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-999655333',owner_user_name='tempest-ListServersNegativeTestJSON-999655333-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:43:25Z,user_data=None,user_id='1a8a49326e3040eea57c8e1a61660f19',uuid=ef028cf3-f8af-4112-9424-8a12fdda7690,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "146f0586-22f7-43d7-9a96-06459ea85508", "address": "fa:16:3e:62:f4:12", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146f0586-22", "ovs_interfaceid": "146f0586-22f7-43d7-9a96-06459ea85508", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:43:34 compute-0 nova_compute[254092]: 2025-11-25 16:43:34.525 254096 DEBUG nova.network.os_vif_util [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Converting VIF {"id": "146f0586-22f7-43d7-9a96-06459ea85508", "address": "fa:16:3e:62:f4:12", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap146f0586-22", "ovs_interfaceid": "146f0586-22f7-43d7-9a96-06459ea85508", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:43:34 compute-0 nova_compute[254092]: 2025-11-25 16:43:34.526 254096 DEBUG nova.network.os_vif_util [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:f4:12,bridge_name='br-int',has_traffic_filtering=True,id=146f0586-22f7-43d7-9a96-06459ea85508,network=Network(126cf01f-b6da-4bbc-847b-2d16936986cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap146f0586-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:43:34 compute-0 nova_compute[254092]: 2025-11-25 16:43:34.527 254096 DEBUG os_vif [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:f4:12,bridge_name='br-int',has_traffic_filtering=True,id=146f0586-22f7-43d7-9a96-06459ea85508,network=Network(126cf01f-b6da-4bbc-847b-2d16936986cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap146f0586-22') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:43:34 compute-0 nova_compute[254092]: 2025-11-25 16:43:34.528 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:34 compute-0 nova_compute[254092]: 2025-11-25 16:43:34.529 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap146f0586-22, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:34 compute-0 nova_compute[254092]: 2025-11-25 16:43:34.531 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:34 compute-0 nova_compute[254092]: 2025-11-25 16:43:34.534 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:43:34 compute-0 nova_compute[254092]: 2025-11-25 16:43:34.535 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:43:34 compute-0 nova_compute[254092]: 2025-11-25 16:43:34.537 254096 INFO os_vif [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:f4:12,bridge_name='br-int',has_traffic_filtering=True,id=146f0586-22f7-43d7-9a96-06459ea85508,network=Network(126cf01f-b6da-4bbc-847b-2d16936986cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap146f0586-22')
Nov 25 16:43:34 compute-0 nova_compute[254092]: 2025-11-25 16:43:34.892 254096 INFO nova.virt.libvirt.driver [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Deleting instance files /var/lib/nova/instances/ef028cf3-f8af-4112-9424-8a12fdda7690_del
Nov 25 16:43:34 compute-0 nova_compute[254092]: 2025-11-25 16:43:34.893 254096 INFO nova.virt.libvirt.driver [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Deletion of /var/lib/nova/instances/ef028cf3-f8af-4112-9424-8a12fdda7690_del complete
Nov 25 16:43:34 compute-0 ceph-mon[74985]: pgmap v1745: 321 pgs: 321 active+clean; 260 MiB data, 665 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 2.2 MiB/s wr, 281 op/s
Nov 25 16:43:34 compute-0 nova_compute[254092]: 2025-11-25 16:43:34.949 254096 INFO nova.compute.manager [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Took 0.68 seconds to destroy the instance on the hypervisor.
Nov 25 16:43:34 compute-0 nova_compute[254092]: 2025-11-25 16:43:34.951 254096 DEBUG oslo.service.loopingcall [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:43:34 compute-0 nova_compute[254092]: 2025-11-25 16:43:34.951 254096 DEBUG nova.compute.manager [-] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:43:34 compute-0 nova_compute[254092]: 2025-11-25 16:43:34.951 254096 DEBUG nova.network.neutron [-] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:43:35 compute-0 nova_compute[254092]: 2025-11-25 16:43:35.650 254096 DEBUG nova.network.neutron [-] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:43:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1746: 321 pgs: 321 active+clean; 229 MiB data, 665 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 2.2 MiB/s wr, 303 op/s
Nov 25 16:43:35 compute-0 nova_compute[254092]: 2025-11-25 16:43:35.674 254096 INFO nova.compute.manager [-] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Took 0.72 seconds to deallocate network for instance.
Nov 25 16:43:35 compute-0 nova_compute[254092]: 2025-11-25 16:43:35.723 254096 DEBUG oslo_concurrency.lockutils [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:35 compute-0 nova_compute[254092]: 2025-11-25 16:43:35.723 254096 DEBUG oslo_concurrency.lockutils [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:35 compute-0 nova_compute[254092]: 2025-11-25 16:43:35.738 254096 DEBUG nova.compute.manager [req-c2014b31-b142-49e2-acc6-3e8ebafbe71f req-ba8b0a06-4b57-4222-b89c-8bc886fdae86 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Received event network-vif-deleted-146f0586-22f7-43d7-9a96-06459ea85508 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:43:35 compute-0 nova_compute[254092]: 2025-11-25 16:43:35.836 254096 DEBUG oslo_concurrency.processutils [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:43:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:43:35 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Nov 25 16:43:35 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:43:35.918096) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 16:43:35 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Nov 25 16:43:35 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089015918197, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 979, "num_deletes": 250, "total_data_size": 1281279, "memory_usage": 1301552, "flush_reason": "Manual Compaction"}
Nov 25 16:43:35 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Nov 25 16:43:36 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089016025083, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 794860, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35405, "largest_seqno": 36383, "table_properties": {"data_size": 791049, "index_size": 1463, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10622, "raw_average_key_size": 20, "raw_value_size": 782686, "raw_average_value_size": 1537, "num_data_blocks": 66, "num_entries": 509, "num_filter_entries": 509, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764088931, "oldest_key_time": 1764088931, "file_creation_time": 1764089015, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:43:36 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 106976 microseconds, and 3160 cpu microseconds.
Nov 25 16:43:36 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:43:36 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:43:36.025124) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 794860 bytes OK
Nov 25 16:43:36 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:43:36.025144) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Nov 25 16:43:36 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:43:36.034470) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Nov 25 16:43:36 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:43:36.034514) EVENT_LOG_v1 {"time_micros": 1764089016034501, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 16:43:36 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:43:36.034544) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 16:43:36 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 1276576, prev total WAL file size 1303112, number of live WAL files 2.
Nov 25 16:43:36 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:43:36 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:43:36.035591) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323537' seq:72057594037927935, type:22 .. '6D6772737461740031353038' seq:0, type:0; will stop at (end)
Nov 25 16:43:36 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 16:43:36 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(776KB)], [77(9859KB)]
Nov 25 16:43:36 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089016035700, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 10890833, "oldest_snapshot_seqno": -1}
Nov 25 16:43:36 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 6087 keys, 8057783 bytes, temperature: kUnknown
Nov 25 16:43:36 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089016138601, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 8057783, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8018197, "index_size": 23284, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15237, "raw_key_size": 154306, "raw_average_key_size": 25, "raw_value_size": 7910021, "raw_average_value_size": 1299, "num_data_blocks": 944, "num_entries": 6087, "num_filter_entries": 6087, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764089016, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:43:36 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:43:36 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:43:36.138879) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 8057783 bytes
Nov 25 16:43:36 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:43:36.151590) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 105.7 rd, 78.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 9.6 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(23.8) write-amplify(10.1) OK, records in: 6563, records dropped: 476 output_compression: NoCompression
Nov 25 16:43:36 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:43:36.151628) EVENT_LOG_v1 {"time_micros": 1764089016151616, "job": 44, "event": "compaction_finished", "compaction_time_micros": 103008, "compaction_time_cpu_micros": 24637, "output_level": 6, "num_output_files": 1, "total_output_size": 8057783, "num_input_records": 6563, "num_output_records": 6087, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 16:43:36 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:43:36 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089016152010, "job": 44, "event": "table_file_deletion", "file_number": 79}
Nov 25 16:43:36 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:43:36 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089016153563, "job": 44, "event": "table_file_deletion", "file_number": 77}
Nov 25 16:43:36 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:43:36.035434) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:43:36 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:43:36.153659) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:43:36 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:43:36.153665) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:43:36 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:43:36.153667) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:43:36 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:43:36.153669) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:43:36 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:43:36.153670) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:43:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:43:36 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1190893710' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:43:36 compute-0 nova_compute[254092]: 2025-11-25 16:43:36.378 254096 DEBUG oslo_concurrency.processutils [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:43:36 compute-0 nova_compute[254092]: 2025-11-25 16:43:36.384 254096 DEBUG nova.compute.provider_tree [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:43:36 compute-0 nova_compute[254092]: 2025-11-25 16:43:36.404 254096 DEBUG nova.scheduler.client.report [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:43:36 compute-0 nova_compute[254092]: 2025-11-25 16:43:36.435 254096 DEBUG oslo_concurrency.lockutils [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:36 compute-0 nova_compute[254092]: 2025-11-25 16:43:36.477 254096 INFO nova.scheduler.client.report [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Deleted allocations for instance ef028cf3-f8af-4112-9424-8a12fdda7690
Nov 25 16:43:36 compute-0 nova_compute[254092]: 2025-11-25 16:43:36.570 254096 DEBUG oslo_concurrency.lockutils [None req-e6ee4eed-3f43-4743-8f64-52629f79c4d1 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "ef028cf3-f8af-4112-9424-8a12fdda7690" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.304s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:36 compute-0 nova_compute[254092]: 2025-11-25 16:43:36.579 254096 DEBUG oslo_concurrency.lockutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Acquiring lock "44f3c94a-060c-4650-bfe7-a214c6a10207" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:36 compute-0 nova_compute[254092]: 2025-11-25 16:43:36.579 254096 DEBUG oslo_concurrency.lockutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Lock "44f3c94a-060c-4650-bfe7-a214c6a10207" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:36 compute-0 nova_compute[254092]: 2025-11-25 16:43:36.613 254096 DEBUG nova.compute.manager [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:43:36 compute-0 nova_compute[254092]: 2025-11-25 16:43:36.685 254096 DEBUG oslo_concurrency.lockutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:36 compute-0 nova_compute[254092]: 2025-11-25 16:43:36.686 254096 DEBUG oslo_concurrency.lockutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:36 compute-0 nova_compute[254092]: 2025-11-25 16:43:36.691 254096 DEBUG nova.virt.hardware [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:43:36 compute-0 nova_compute[254092]: 2025-11-25 16:43:36.691 254096 INFO nova.compute.claims [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:43:36 compute-0 nova_compute[254092]: 2025-11-25 16:43:36.869 254096 DEBUG oslo_concurrency.processutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:43:37 compute-0 ceph-mon[74985]: pgmap v1746: 321 pgs: 321 active+clean; 229 MiB data, 665 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 2.2 MiB/s wr, 303 op/s
Nov 25 16:43:37 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1190893710' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:43:37 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:43:37 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2458475742' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:43:37 compute-0 nova_compute[254092]: 2025-11-25 16:43:37.441 254096 DEBUG oslo_concurrency.processutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:43:37 compute-0 nova_compute[254092]: 2025-11-25 16:43:37.447 254096 DEBUG nova.compute.provider_tree [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:43:37 compute-0 nova_compute[254092]: 2025-11-25 16:43:37.461 254096 DEBUG nova.scheduler.client.report [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:43:37 compute-0 nova_compute[254092]: 2025-11-25 16:43:37.490 254096 DEBUG oslo_concurrency.lockutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:37 compute-0 nova_compute[254092]: 2025-11-25 16:43:37.491 254096 DEBUG nova.compute.manager [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:43:37 compute-0 nova_compute[254092]: 2025-11-25 16:43:37.537 254096 DEBUG nova.compute.manager [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:43:37 compute-0 nova_compute[254092]: 2025-11-25 16:43:37.537 254096 DEBUG nova.network.neutron [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:43:37 compute-0 nova_compute[254092]: 2025-11-25 16:43:37.564 254096 INFO nova.virt.libvirt.driver [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:43:37 compute-0 nova_compute[254092]: 2025-11-25 16:43:37.585 254096 DEBUG nova.compute.manager [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:43:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1747: 321 pgs: 321 active+clean; 214 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 914 KiB/s wr, 269 op/s
Nov 25 16:43:37 compute-0 nova_compute[254092]: 2025-11-25 16:43:37.701 254096 DEBUG nova.compute.manager [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:43:37 compute-0 nova_compute[254092]: 2025-11-25 16:43:37.703 254096 DEBUG nova.virt.libvirt.driver [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:43:37 compute-0 nova_compute[254092]: 2025-11-25 16:43:37.703 254096 INFO nova.virt.libvirt.driver [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Creating image(s)
Nov 25 16:43:37 compute-0 nova_compute[254092]: 2025-11-25 16:43:37.727 254096 DEBUG nova.storage.rbd_utils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] rbd image 44f3c94a-060c-4650-bfe7-a214c6a10207_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:43:37 compute-0 nova_compute[254092]: 2025-11-25 16:43:37.751 254096 DEBUG nova.storage.rbd_utils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] rbd image 44f3c94a-060c-4650-bfe7-a214c6a10207_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:43:37 compute-0 nova_compute[254092]: 2025-11-25 16:43:37.779 254096 DEBUG nova.storage.rbd_utils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] rbd image 44f3c94a-060c-4650-bfe7-a214c6a10207_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:43:37 compute-0 nova_compute[254092]: 2025-11-25 16:43:37.783 254096 DEBUG oslo_concurrency.processutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:43:37 compute-0 nova_compute[254092]: 2025-11-25 16:43:37.854 254096 DEBUG oslo_concurrency.processutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:43:37 compute-0 nova_compute[254092]: 2025-11-25 16:43:37.855 254096 DEBUG oslo_concurrency.lockutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:37 compute-0 nova_compute[254092]: 2025-11-25 16:43:37.856 254096 DEBUG oslo_concurrency.lockutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:37 compute-0 nova_compute[254092]: 2025-11-25 16:43:37.856 254096 DEBUG oslo_concurrency.lockutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:37 compute-0 nova_compute[254092]: 2025-11-25 16:43:37.880 254096 DEBUG nova.storage.rbd_utils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] rbd image 44f3c94a-060c-4650-bfe7-a214c6a10207_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:43:37 compute-0 nova_compute[254092]: 2025-11-25 16:43:37.884 254096 DEBUG oslo_concurrency.processutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 44f3c94a-060c-4650-bfe7-a214c6a10207_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:43:37 compute-0 nova_compute[254092]: 2025-11-25 16:43:37.921 254096 DEBUG nova.policy [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5e0e4ce0eeda4a79ab738e1f8dc0f725', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e0a920505c8240228ed836913ffcdbe4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:43:38 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2458475742' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:43:38 compute-0 nova_compute[254092]: 2025-11-25 16:43:38.306 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:38 compute-0 nova_compute[254092]: 2025-11-25 16:43:38.640 254096 DEBUG oslo_concurrency.processutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 44f3c94a-060c-4650-bfe7-a214c6a10207_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.757s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:43:38 compute-0 nova_compute[254092]: 2025-11-25 16:43:38.711 254096 DEBUG nova.storage.rbd_utils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] resizing rbd image 44f3c94a-060c-4650-bfe7-a214c6a10207_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:43:38 compute-0 nova_compute[254092]: 2025-11-25 16:43:38.857 254096 DEBUG nova.network.neutron [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Successfully created port: b5793671-5020-4692-92f1-65a87bcdf38e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:43:38 compute-0 nova_compute[254092]: 2025-11-25 16:43:38.864 254096 DEBUG nova.objects.instance [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Lazy-loading 'migration_context' on Instance uuid 44f3c94a-060c-4650-bfe7-a214c6a10207 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:43:38 compute-0 nova_compute[254092]: 2025-11-25 16:43:38.876 254096 DEBUG nova.virt.libvirt.driver [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:43:38 compute-0 nova_compute[254092]: 2025-11-25 16:43:38.877 254096 DEBUG nova.virt.libvirt.driver [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Ensure instance console log exists: /var/lib/nova/instances/44f3c94a-060c-4650-bfe7-a214c6a10207/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:43:38 compute-0 nova_compute[254092]: 2025-11-25 16:43:38.877 254096 DEBUG oslo_concurrency.lockutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:38 compute-0 nova_compute[254092]: 2025-11-25 16:43:38.878 254096 DEBUG oslo_concurrency.lockutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:38 compute-0 nova_compute[254092]: 2025-11-25 16:43:38.878 254096 DEBUG oslo_concurrency.lockutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:39 compute-0 ceph-mon[74985]: pgmap v1747: 321 pgs: 321 active+clean; 214 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 914 KiB/s wr, 269 op/s
Nov 25 16:43:39 compute-0 nova_compute[254092]: 2025-11-25 16:43:39.532 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1748: 321 pgs: 321 active+clean; 214 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 122 KiB/s wr, 239 op/s
Nov 25 16:43:39 compute-0 ovn_controller[153477]: 2025-11-25T16:43:39Z|00078|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:02:21:9c 10.100.0.9
Nov 25 16:43:39 compute-0 ovn_controller[153477]: 2025-11-25T16:43:39Z|00079|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:02:21:9c 10.100.0.9
Nov 25 16:43:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:43:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:43:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:43:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:43:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:43:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:43:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:43:40
Nov 25 16:43:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:43:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:43:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', '.mgr', 'images', 'volumes', 'backups', 'default.rgw.log', '.rgw.root', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Nov 25 16:43:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:43:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:43:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:43:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:43:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:43:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:43:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:43:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:43:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:43:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:43:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:43:40 compute-0 ceph-mon[74985]: pgmap v1748: 321 pgs: 321 active+clean; 214 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 122 KiB/s wr, 239 op/s
Nov 25 16:43:40 compute-0 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 16:43:40 compute-0 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.2 total, 600.0 interval
                                           Cumulative writes: 23K writes, 96K keys, 23K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.03 MB/s
                                           Cumulative WAL: 23K writes, 7556 syncs, 3.09 writes per sync, written: 0.09 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8508 writes, 34K keys, 8508 commit groups, 1.0 writes per commit group, ingest: 33.82 MB, 0.06 MB/s
                                           Interval WAL: 8507 writes, 3171 syncs, 2.68 writes per sync, written: 0.03 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 16:43:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:43:41 compute-0 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Nov 25 16:43:41 compute-0 nova_compute[254092]: 2025-11-25 16:43:41.019 254096 DEBUG oslo_concurrency.lockutils [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "f5a259d0-4460-4335-aa4a-f874f93a7e93" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:41 compute-0 nova_compute[254092]: 2025-11-25 16:43:41.020 254096 DEBUG oslo_concurrency.lockutils [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "f5a259d0-4460-4335-aa4a-f874f93a7e93" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:41 compute-0 nova_compute[254092]: 2025-11-25 16:43:41.020 254096 DEBUG oslo_concurrency.lockutils [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "f5a259d0-4460-4335-aa4a-f874f93a7e93-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:41 compute-0 nova_compute[254092]: 2025-11-25 16:43:41.021 254096 DEBUG oslo_concurrency.lockutils [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "f5a259d0-4460-4335-aa4a-f874f93a7e93-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:41 compute-0 nova_compute[254092]: 2025-11-25 16:43:41.021 254096 DEBUG oslo_concurrency.lockutils [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "f5a259d0-4460-4335-aa4a-f874f93a7e93-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:41 compute-0 nova_compute[254092]: 2025-11-25 16:43:41.023 254096 INFO nova.compute.manager [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Terminating instance
Nov 25 16:43:41 compute-0 nova_compute[254092]: 2025-11-25 16:43:41.025 254096 DEBUG nova.compute.manager [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:43:41 compute-0 kernel: tap2a3974bd-02 (unregistering): left promiscuous mode
Nov 25 16:43:41 compute-0 NetworkManager[48891]: <info>  [1764089021.1987] device (tap2a3974bd-02): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:43:41 compute-0 nova_compute[254092]: 2025-11-25 16:43:41.209 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:41 compute-0 ovn_controller[153477]: 2025-11-25T16:43:41Z|00708|binding|INFO|Releasing lport 2a3974bd-02ad-406e-9531-3844e5df4bfa from this chassis (sb_readonly=0)
Nov 25 16:43:41 compute-0 ovn_controller[153477]: 2025-11-25T16:43:41Z|00709|binding|INFO|Setting lport 2a3974bd-02ad-406e-9531-3844e5df4bfa down in Southbound
Nov 25 16:43:41 compute-0 ovn_controller[153477]: 2025-11-25T16:43:41Z|00710|binding|INFO|Removing iface tap2a3974bd-02 ovn-installed in OVS
Nov 25 16:43:41 compute-0 nova_compute[254092]: 2025-11-25 16:43:41.211 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:41 compute-0 nova_compute[254092]: 2025-11-25 16:43:41.226 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:41 compute-0 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d0000004a.scope: Deactivated successfully.
Nov 25 16:43:41 compute-0 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d0000004a.scope: Consumed 13.729s CPU time.
Nov 25 16:43:41 compute-0 systemd-machined[216343]: Machine qemu-87-instance-0000004a terminated.
Nov 25 16:43:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:41.237 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:21:9c 10.100.0.9'], port_security=['fa:16:3e:02:21:9c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'f5a259d0-4460-4335-aa4a-f874f93a7e93', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-126cf01f-b6da-4bbc-847b-2d16936986cf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b96c962def8e44a98e659bf2a55a8dcc', 'neutron:revision_number': '4', 'neutron:security_group_ids': '71017c54-8c41-4e8e-be7c-5ebf5417c30b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff19d899-3f5c-4864-b6a7-e950e287f839, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=2a3974bd-02ad-406e-9531-3844e5df4bfa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:43:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:41.239 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 2a3974bd-02ad-406e-9531-3844e5df4bfa in datapath 126cf01f-b6da-4bbc-847b-2d16936986cf unbound from our chassis
Nov 25 16:43:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:41.240 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 126cf01f-b6da-4bbc-847b-2d16936986cf
Nov 25 16:43:41 compute-0 nova_compute[254092]: 2025-11-25 16:43:41.249 254096 DEBUG oslo_concurrency.lockutils [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "69b3cbbb-9713-4f49-9e67-1f33a3ae2642" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:41 compute-0 nova_compute[254092]: 2025-11-25 16:43:41.249 254096 DEBUG oslo_concurrency.lockutils [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "69b3cbbb-9713-4f49-9e67-1f33a3ae2642" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:41 compute-0 nova_compute[254092]: 2025-11-25 16:43:41.250 254096 DEBUG oslo_concurrency.lockutils [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "69b3cbbb-9713-4f49-9e67-1f33a3ae2642-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:41 compute-0 nova_compute[254092]: 2025-11-25 16:43:41.250 254096 DEBUG oslo_concurrency.lockutils [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "69b3cbbb-9713-4f49-9e67-1f33a3ae2642-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:41 compute-0 nova_compute[254092]: 2025-11-25 16:43:41.250 254096 DEBUG oslo_concurrency.lockutils [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "69b3cbbb-9713-4f49-9e67-1f33a3ae2642-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:41 compute-0 nova_compute[254092]: 2025-11-25 16:43:41.252 254096 INFO nova.compute.manager [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Terminating instance
Nov 25 16:43:41 compute-0 nova_compute[254092]: 2025-11-25 16:43:41.253 254096 DEBUG nova.compute.manager [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:43:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:41.255 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[485ccb36-c684-45c2-9cd7-5f10eb928e28]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:41.286 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[86b25006-5b8e-4a1f-9415-0579ef7f69fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:41.288 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[da8595e1-949d-486a-9fe0-9137ba09eb40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:41.317 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e306e1e2-6cbe-42c6-8b41-a633757c98d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:41 compute-0 podman[332652]: 2025-11-25 16:43:41.322134297 +0000 UTC m=+0.086922749 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 25 16:43:41 compute-0 podman[332653]: 2025-11-25 16:43:41.333166096 +0000 UTC m=+0.097867036 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 25 16:43:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:41.336 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[da4001dd-e9e4-4143-a5db-256c0c8cd53b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap126cf01f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:03:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 11, 'rx_bytes': 532, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 11, 'rx_bytes': 532, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 203], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 546143, 'reachable_time': 36845, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332718, 'error': None, 'target': 'ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:41.351 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a01d4d36-0e7b-4184-805a-59693fc61394]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap126cf01f-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 546156, 'tstamp': 546156}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332722, 'error': None, 'target': 'ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap126cf01f-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 546161, 'tstamp': 546161}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332722, 'error': None, 'target': 'ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:41 compute-0 podman[332654]: 2025-11-25 16:43:41.3528426 +0000 UTC m=+0.116990215 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 25 16:43:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:41.353 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap126cf01f-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:41 compute-0 nova_compute[254092]: 2025-11-25 16:43:41.355 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:41 compute-0 nova_compute[254092]: 2025-11-25 16:43:41.359 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:41.359 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap126cf01f-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:41.360 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:43:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:41.360 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap126cf01f-b0, col_values=(('external_ids', {'iface-id': '41886c6c-e968-4c0b-b7f6-75887ad8a7ea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:41.361 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:43:41 compute-0 nova_compute[254092]: 2025-11-25 16:43:41.468 254096 INFO nova.virt.libvirt.driver [-] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Instance destroyed successfully.
Nov 25 16:43:41 compute-0 nova_compute[254092]: 2025-11-25 16:43:41.468 254096 DEBUG nova.objects.instance [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lazy-loading 'resources' on Instance uuid f5a259d0-4460-4335-aa4a-f874f93a7e93 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:43:41 compute-0 nova_compute[254092]: 2025-11-25 16:43:41.483 254096 DEBUG nova.virt.libvirt.vif [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:43:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-206851324',display_name='tempest-ListServersNegativeTestJSON-server-206851324-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-206851324-2',id=74,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-11-25T16:43:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b96c962def8e44a98e659bf2a55a8dcc',ramdisk_id='',reservation_id='r-4m4xi3n8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model
='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-999655333',owner_user_name='tempest-ListServersNegativeTestJSON-999655333-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:43:26Z,user_data=None,user_id='1a8a49326e3040eea57c8e1a61660f19',uuid=f5a259d0-4460-4335-aa4a-f874f93a7e93,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2a3974bd-02ad-406e-9531-3844e5df4bfa", "address": "fa:16:3e:02:21:9c", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a3974bd-02", "ovs_interfaceid": "2a3974bd-02ad-406e-9531-3844e5df4bfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:43:41 compute-0 nova_compute[254092]: 2025-11-25 16:43:41.483 254096 DEBUG nova.network.os_vif_util [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Converting VIF {"id": "2a3974bd-02ad-406e-9531-3844e5df4bfa", "address": "fa:16:3e:02:21:9c", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a3974bd-02", "ovs_interfaceid": "2a3974bd-02ad-406e-9531-3844e5df4bfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:43:41 compute-0 nova_compute[254092]: 2025-11-25 16:43:41.484 254096 DEBUG nova.network.os_vif_util [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:21:9c,bridge_name='br-int',has_traffic_filtering=True,id=2a3974bd-02ad-406e-9531-3844e5df4bfa,network=Network(126cf01f-b6da-4bbc-847b-2d16936986cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a3974bd-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:43:41 compute-0 nova_compute[254092]: 2025-11-25 16:43:41.484 254096 DEBUG os_vif [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:21:9c,bridge_name='br-int',has_traffic_filtering=True,id=2a3974bd-02ad-406e-9531-3844e5df4bfa,network=Network(126cf01f-b6da-4bbc-847b-2d16936986cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a3974bd-02') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:43:41 compute-0 nova_compute[254092]: 2025-11-25 16:43:41.486 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:41 compute-0 nova_compute[254092]: 2025-11-25 16:43:41.487 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2a3974bd-02, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:41 compute-0 nova_compute[254092]: 2025-11-25 16:43:41.488 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:41 compute-0 nova_compute[254092]: 2025-11-25 16:43:41.490 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:43:41 compute-0 nova_compute[254092]: 2025-11-25 16:43:41.490 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:41 compute-0 nova_compute[254092]: 2025-11-25 16:43:41.492 254096 INFO os_vif [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:21:9c,bridge_name='br-int',has_traffic_filtering=True,id=2a3974bd-02ad-406e-9531-3844e5df4bfa,network=Network(126cf01f-b6da-4bbc-847b-2d16936986cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a3974bd-02')
Nov 25 16:43:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1749: 321 pgs: 321 active+clean; 288 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 4.0 MiB/s wr, 315 op/s
Nov 25 16:43:41 compute-0 kernel: tapa297c9f1-75 (unregistering): left promiscuous mode
Nov 25 16:43:41 compute-0 NetworkManager[48891]: <info>  [1764089021.8008] device (tapa297c9f1-75): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:43:41 compute-0 nova_compute[254092]: 2025-11-25 16:43:41.810 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:41 compute-0 ovn_controller[153477]: 2025-11-25T16:43:41Z|00711|binding|INFO|Releasing lport a297c9f1-753f-4f96-b8e4-38a42969484d from this chassis (sb_readonly=0)
Nov 25 16:43:41 compute-0 ovn_controller[153477]: 2025-11-25T16:43:41Z|00712|binding|INFO|Setting lport a297c9f1-753f-4f96-b8e4-38a42969484d down in Southbound
Nov 25 16:43:41 compute-0 ovn_controller[153477]: 2025-11-25T16:43:41Z|00713|binding|INFO|Removing iface tapa297c9f1-75 ovn-installed in OVS
Nov 25 16:43:41 compute-0 nova_compute[254092]: 2025-11-25 16:43:41.812 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:41.826 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:28:6d 10.100.0.8'], port_security=['fa:16:3e:1e:28:6d 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '69b3cbbb-9713-4f49-9e67-1f33a3ae2642', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-126cf01f-b6da-4bbc-847b-2d16936986cf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b96c962def8e44a98e659bf2a55a8dcc', 'neutron:revision_number': '4', 'neutron:security_group_ids': '71017c54-8c41-4e8e-be7c-5ebf5417c30b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff19d899-3f5c-4864-b6a7-e950e287f839, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=a297c9f1-753f-4f96-b8e4-38a42969484d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:43:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:41.827 163338 INFO neutron.agent.ovn.metadata.agent [-] Port a297c9f1-753f-4f96-b8e4-38a42969484d in datapath 126cf01f-b6da-4bbc-847b-2d16936986cf unbound from our chassis
Nov 25 16:43:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:41.828 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 126cf01f-b6da-4bbc-847b-2d16936986cf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:43:41 compute-0 nova_compute[254092]: 2025-11-25 16:43:41.828 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:41.829 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[86e99a9e-a964-40c9-b562-ef888da73b83]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:41.830 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf namespace which is not needed anymore
Nov 25 16:43:41 compute-0 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d0000004b.scope: Deactivated successfully.
Nov 25 16:43:41 compute-0 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d0000004b.scope: Consumed 11.860s CPU time.
Nov 25 16:43:41 compute-0 systemd-machined[216343]: Machine qemu-88-instance-0000004b terminated.
Nov 25 16:43:41 compute-0 nova_compute[254092]: 2025-11-25 16:43:41.919 254096 DEBUG nova.compute.manager [req-4866b6b0-0e93-48ca-8c1f-51711e7aa634 req-5bceeb27-9521-491a-8fd2-e15b5103d7d5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Received event network-vif-unplugged-2a3974bd-02ad-406e-9531-3844e5df4bfa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:43:41 compute-0 nova_compute[254092]: 2025-11-25 16:43:41.920 254096 DEBUG oslo_concurrency.lockutils [req-4866b6b0-0e93-48ca-8c1f-51711e7aa634 req-5bceeb27-9521-491a-8fd2-e15b5103d7d5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f5a259d0-4460-4335-aa4a-f874f93a7e93-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:41 compute-0 nova_compute[254092]: 2025-11-25 16:43:41.920 254096 DEBUG oslo_concurrency.lockutils [req-4866b6b0-0e93-48ca-8c1f-51711e7aa634 req-5bceeb27-9521-491a-8fd2-e15b5103d7d5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f5a259d0-4460-4335-aa4a-f874f93a7e93-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:41 compute-0 nova_compute[254092]: 2025-11-25 16:43:41.920 254096 DEBUG oslo_concurrency.lockutils [req-4866b6b0-0e93-48ca-8c1f-51711e7aa634 req-5bceeb27-9521-491a-8fd2-e15b5103d7d5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f5a259d0-4460-4335-aa4a-f874f93a7e93-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:41 compute-0 nova_compute[254092]: 2025-11-25 16:43:41.921 254096 DEBUG nova.compute.manager [req-4866b6b0-0e93-48ca-8c1f-51711e7aa634 req-5bceeb27-9521-491a-8fd2-e15b5103d7d5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] No waiting events found dispatching network-vif-unplugged-2a3974bd-02ad-406e-9531-3844e5df4bfa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:43:41 compute-0 nova_compute[254092]: 2025-11-25 16:43:41.921 254096 DEBUG nova.compute.manager [req-4866b6b0-0e93-48ca-8c1f-51711e7aa634 req-5bceeb27-9521-491a-8fd2-e15b5103d7d5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Received event network-vif-unplugged-2a3974bd-02ad-406e-9531-3844e5df4bfa for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:43:41 compute-0 neutron-haproxy-ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf[332121]: [NOTICE]   (332155) : haproxy version is 2.8.14-c23fe91
Nov 25 16:43:41 compute-0 neutron-haproxy-ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf[332121]: [NOTICE]   (332155) : path to executable is /usr/sbin/haproxy
Nov 25 16:43:41 compute-0 neutron-haproxy-ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf[332121]: [WARNING]  (332155) : Exiting Master process...
Nov 25 16:43:41 compute-0 neutron-haproxy-ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf[332121]: [WARNING]  (332155) : Exiting Master process...
Nov 25 16:43:41 compute-0 neutron-haproxy-ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf[332121]: [ALERT]    (332155) : Current worker (332180) exited with code 143 (Terminated)
Nov 25 16:43:41 compute-0 neutron-haproxy-ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf[332121]: [WARNING]  (332155) : All workers exited. Exiting... (0)
Nov 25 16:43:41 compute-0 systemd[1]: libpod-b39a7ce3f61e8aca415dd9cd88bf447069eba7b7cb1c5c68f82d9eff0db236da.scope: Deactivated successfully.
Nov 25 16:43:41 compute-0 podman[332774]: 2025-11-25 16:43:41.993903962 +0000 UTC m=+0.070427652 container died b39a7ce3f61e8aca415dd9cd88bf447069eba7b7cb1c5c68f82d9eff0db236da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:43:42 compute-0 nova_compute[254092]: 2025-11-25 16:43:42.092 254096 INFO nova.virt.libvirt.driver [-] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Instance destroyed successfully.
Nov 25 16:43:42 compute-0 nova_compute[254092]: 2025-11-25 16:43:42.092 254096 DEBUG nova.objects.instance [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lazy-loading 'resources' on Instance uuid 69b3cbbb-9713-4f49-9e67-1f33a3ae2642 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:43:42 compute-0 nova_compute[254092]: 2025-11-25 16:43:42.103 254096 DEBUG nova.virt.libvirt.vif [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:43:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-206851324',display_name='tempest-ListServersNegativeTestJSON-server-206851324-3',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-206851324-3',id=75,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=2,launched_at=2025-11-25T16:43:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b96c962def8e44a98e659bf2a55a8dcc',ramdisk_id='',reservation_id='r-4m4xi3n8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model
='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-999655333',owner_user_name='tempest-ListServersNegativeTestJSON-999655333-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:43:28Z,user_data=None,user_id='1a8a49326e3040eea57c8e1a61660f19',uuid=69b3cbbb-9713-4f49-9e67-1f33a3ae2642,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a297c9f1-753f-4f96-b8e4-38a42969484d", "address": "fa:16:3e:1e:28:6d", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa297c9f1-75", "ovs_interfaceid": "a297c9f1-753f-4f96-b8e4-38a42969484d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:43:42 compute-0 nova_compute[254092]: 2025-11-25 16:43:42.104 254096 DEBUG nova.network.os_vif_util [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Converting VIF {"id": "a297c9f1-753f-4f96-b8e4-38a42969484d", "address": "fa:16:3e:1e:28:6d", "network": {"id": "126cf01f-b6da-4bbc-847b-2d16936986cf", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-944147619-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b96c962def8e44a98e659bf2a55a8dcc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa297c9f1-75", "ovs_interfaceid": "a297c9f1-753f-4f96-b8e4-38a42969484d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:43:42 compute-0 nova_compute[254092]: 2025-11-25 16:43:42.105 254096 DEBUG nova.network.os_vif_util [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:28:6d,bridge_name='br-int',has_traffic_filtering=True,id=a297c9f1-753f-4f96-b8e4-38a42969484d,network=Network(126cf01f-b6da-4bbc-847b-2d16936986cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa297c9f1-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:43:42 compute-0 nova_compute[254092]: 2025-11-25 16:43:42.105 254096 DEBUG os_vif [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:28:6d,bridge_name='br-int',has_traffic_filtering=True,id=a297c9f1-753f-4f96-b8e4-38a42969484d,network=Network(126cf01f-b6da-4bbc-847b-2d16936986cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa297c9f1-75') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:43:42 compute-0 nova_compute[254092]: 2025-11-25 16:43:42.106 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:42 compute-0 nova_compute[254092]: 2025-11-25 16:43:42.107 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa297c9f1-75, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:42 compute-0 nova_compute[254092]: 2025-11-25 16:43:42.108 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:42 compute-0 nova_compute[254092]: 2025-11-25 16:43:42.109 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:42 compute-0 nova_compute[254092]: 2025-11-25 16:43:42.111 254096 INFO os_vif [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:28:6d,bridge_name='br-int',has_traffic_filtering=True,id=a297c9f1-753f-4f96-b8e4-38a42969484d,network=Network(126cf01f-b6da-4bbc-847b-2d16936986cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa297c9f1-75')
Nov 25 16:43:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b39a7ce3f61e8aca415dd9cd88bf447069eba7b7cb1c5c68f82d9eff0db236da-userdata-shm.mount: Deactivated successfully.
Nov 25 16:43:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-965eee395a059bda45c6a5b38023a9548ac21b0145fb7c81bf8446d934d09752-merged.mount: Deactivated successfully.
Nov 25 16:43:42 compute-0 podman[332774]: 2025-11-25 16:43:42.274112414 +0000 UTC m=+0.350636104 container cleanup b39a7ce3f61e8aca415dd9cd88bf447069eba7b7cb1c5c68f82d9eff0db236da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 16:43:42 compute-0 systemd[1]: libpod-conmon-b39a7ce3f61e8aca415dd9cd88bf447069eba7b7cb1c5c68f82d9eff0db236da.scope: Deactivated successfully.
Nov 25 16:43:42 compute-0 nova_compute[254092]: 2025-11-25 16:43:42.281 254096 DEBUG nova.network.neutron [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Successfully updated port: b5793671-5020-4692-92f1-65a87bcdf38e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:43:42 compute-0 nova_compute[254092]: 2025-11-25 16:43:42.296 254096 DEBUG oslo_concurrency.lockutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Acquiring lock "refresh_cache-44f3c94a-060c-4650-bfe7-a214c6a10207" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:43:42 compute-0 nova_compute[254092]: 2025-11-25 16:43:42.296 254096 DEBUG oslo_concurrency.lockutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Acquired lock "refresh_cache-44f3c94a-060c-4650-bfe7-a214c6a10207" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:43:42 compute-0 nova_compute[254092]: 2025-11-25 16:43:42.296 254096 DEBUG nova.network.neutron [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:43:42 compute-0 podman[332834]: 2025-11-25 16:43:42.387375696 +0000 UTC m=+0.090294240 container remove b39a7ce3f61e8aca415dd9cd88bf447069eba7b7cb1c5c68f82d9eff0db236da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 16:43:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:42.393 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[876a1a05-d9c2-4a89-b4e8-8d99dd77aa7d]: (4, ('Tue Nov 25 04:43:41 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf (b39a7ce3f61e8aca415dd9cd88bf447069eba7b7cb1c5c68f82d9eff0db236da)\nb39a7ce3f61e8aca415dd9cd88bf447069eba7b7cb1c5c68f82d9eff0db236da\nTue Nov 25 04:43:42 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf (b39a7ce3f61e8aca415dd9cd88bf447069eba7b7cb1c5c68f82d9eff0db236da)\nb39a7ce3f61e8aca415dd9cd88bf447069eba7b7cb1c5c68f82d9eff0db236da\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:42.395 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[eee44d6b-e85f-4963-aa76-1ecd8ebe546c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:42.396 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap126cf01f-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:42 compute-0 nova_compute[254092]: 2025-11-25 16:43:42.398 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:42 compute-0 kernel: tap126cf01f-b0: left promiscuous mode
Nov 25 16:43:42 compute-0 nova_compute[254092]: 2025-11-25 16:43:42.452 254096 DEBUG nova.network.neutron [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:43:42 compute-0 nova_compute[254092]: 2025-11-25 16:43:42.466 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:42.469 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[da7f4e55-b4ec-4142-b340-1d508122db0e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:42.490 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c8abf357-437d-419a-b3cb-53670a993438]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:42.491 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f152c1ba-251a-411e-811a-38e7f21f2ed9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:42.508 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bf713ecd-424b-4620-8be0-e8c07fc87bee]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 546135, 'reachable_time': 34050, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332849, 'error': None, 'target': 'ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:42.511 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-126cf01f-b6da-4bbc-847b-2d16936986cf deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:43:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:42.511 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[9f90325e-ede1-493a-bac0-2d4bb9478968]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:42 compute-0 systemd[1]: run-netns-ovnmeta\x2d126cf01f\x2db6da\x2d4bbc\x2d847b\x2d2d16936986cf.mount: Deactivated successfully.
Nov 25 16:43:42 compute-0 ceph-mon[74985]: pgmap v1749: 321 pgs: 321 active+clean; 288 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 4.0 MiB/s wr, 315 op/s
Nov 25 16:43:43 compute-0 nova_compute[254092]: 2025-11-25 16:43:43.309 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:43 compute-0 nova_compute[254092]: 2025-11-25 16:43:43.530 254096 INFO nova.virt.libvirt.driver [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Deleting instance files /var/lib/nova/instances/f5a259d0-4460-4335-aa4a-f874f93a7e93_del
Nov 25 16:43:43 compute-0 nova_compute[254092]: 2025-11-25 16:43:43.530 254096 INFO nova.virt.libvirt.driver [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Deletion of /var/lib/nova/instances/f5a259d0-4460-4335-aa4a-f874f93a7e93_del complete
Nov 25 16:43:43 compute-0 nova_compute[254092]: 2025-11-25 16:43:43.583 254096 INFO nova.compute.manager [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Took 2.56 seconds to destroy the instance on the hypervisor.
Nov 25 16:43:43 compute-0 nova_compute[254092]: 2025-11-25 16:43:43.584 254096 DEBUG oslo.service.loopingcall [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:43:43 compute-0 nova_compute[254092]: 2025-11-25 16:43:43.584 254096 DEBUG nova.compute.manager [-] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:43:43 compute-0 nova_compute[254092]: 2025-11-25 16:43:43.584 254096 DEBUG nova.network.neutron [-] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:43:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1750: 321 pgs: 321 active+clean; 288 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 302 KiB/s rd, 3.9 MiB/s wr, 102 op/s
Nov 25 16:43:43 compute-0 nova_compute[254092]: 2025-11-25 16:43:43.658 254096 INFO nova.virt.libvirt.driver [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Deleting instance files /var/lib/nova/instances/69b3cbbb-9713-4f49-9e67-1f33a3ae2642_del
Nov 25 16:43:43 compute-0 nova_compute[254092]: 2025-11-25 16:43:43.659 254096 INFO nova.virt.libvirt.driver [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Deletion of /var/lib/nova/instances/69b3cbbb-9713-4f49-9e67-1f33a3ae2642_del complete
Nov 25 16:43:43 compute-0 nova_compute[254092]: 2025-11-25 16:43:43.720 254096 INFO nova.compute.manager [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Took 2.47 seconds to destroy the instance on the hypervisor.
Nov 25 16:43:43 compute-0 nova_compute[254092]: 2025-11-25 16:43:43.721 254096 DEBUG oslo.service.loopingcall [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:43:43 compute-0 nova_compute[254092]: 2025-11-25 16:43:43.721 254096 DEBUG nova.compute.manager [-] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:43:43 compute-0 nova_compute[254092]: 2025-11-25 16:43:43.722 254096 DEBUG nova.network.neutron [-] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:43:44 compute-0 nova_compute[254092]: 2025-11-25 16:43:44.024 254096 DEBUG nova.compute.manager [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Received event network-changed-b5793671-5020-4692-92f1-65a87bcdf38e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:43:44 compute-0 nova_compute[254092]: 2025-11-25 16:43:44.025 254096 DEBUG nova.compute.manager [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Refreshing instance network info cache due to event network-changed-b5793671-5020-4692-92f1-65a87bcdf38e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:43:44 compute-0 nova_compute[254092]: 2025-11-25 16:43:44.025 254096 DEBUG oslo_concurrency.lockutils [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-44f3c94a-060c-4650-bfe7-a214c6a10207" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:43:44 compute-0 nova_compute[254092]: 2025-11-25 16:43:44.571 254096 DEBUG nova.network.neutron [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Updating instance_info_cache with network_info: [{"id": "b5793671-5020-4692-92f1-65a87bcdf38e", "address": "fa:16:3e:e1:5b:98", "network": {"id": "41b60c60-81b1-400f-ad99-152388e55616", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1280594054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0a920505c8240228ed836913ffcdbe4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5793671-50", "ovs_interfaceid": "b5793671-5020-4692-92f1-65a87bcdf38e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:43:44 compute-0 nova_compute[254092]: 2025-11-25 16:43:44.737 254096 DEBUG oslo_concurrency.lockutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Releasing lock "refresh_cache-44f3c94a-060c-4650-bfe7-a214c6a10207" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:43:44 compute-0 nova_compute[254092]: 2025-11-25 16:43:44.737 254096 DEBUG nova.compute.manager [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Instance network_info: |[{"id": "b5793671-5020-4692-92f1-65a87bcdf38e", "address": "fa:16:3e:e1:5b:98", "network": {"id": "41b60c60-81b1-400f-ad99-152388e55616", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1280594054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0a920505c8240228ed836913ffcdbe4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5793671-50", "ovs_interfaceid": "b5793671-5020-4692-92f1-65a87bcdf38e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:43:44 compute-0 nova_compute[254092]: 2025-11-25 16:43:44.738 254096 DEBUG oslo_concurrency.lockutils [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-44f3c94a-060c-4650-bfe7-a214c6a10207" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:43:44 compute-0 nova_compute[254092]: 2025-11-25 16:43:44.738 254096 DEBUG nova.network.neutron [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Refreshing network info cache for port b5793671-5020-4692-92f1-65a87bcdf38e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:43:44 compute-0 nova_compute[254092]: 2025-11-25 16:43:44.741 254096 DEBUG nova.virt.libvirt.driver [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Start _get_guest_xml network_info=[{"id": "b5793671-5020-4692-92f1-65a87bcdf38e", "address": "fa:16:3e:e1:5b:98", "network": {"id": "41b60c60-81b1-400f-ad99-152388e55616", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1280594054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0a920505c8240228ed836913ffcdbe4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5793671-50", "ovs_interfaceid": "b5793671-5020-4692-92f1-65a87bcdf38e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:43:44 compute-0 nova_compute[254092]: 2025-11-25 16:43:44.749 254096 WARNING nova.virt.libvirt.driver [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:43:44 compute-0 nova_compute[254092]: 2025-11-25 16:43:44.758 254096 DEBUG nova.virt.libvirt.host [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:43:44 compute-0 nova_compute[254092]: 2025-11-25 16:43:44.759 254096 DEBUG nova.virt.libvirt.host [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:43:44 compute-0 nova_compute[254092]: 2025-11-25 16:43:44.765 254096 DEBUG nova.virt.libvirt.host [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:43:44 compute-0 nova_compute[254092]: 2025-11-25 16:43:44.766 254096 DEBUG nova.virt.libvirt.host [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:43:44 compute-0 nova_compute[254092]: 2025-11-25 16:43:44.767 254096 DEBUG nova.virt.libvirt.driver [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:43:44 compute-0 nova_compute[254092]: 2025-11-25 16:43:44.767 254096 DEBUG nova.virt.hardware [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:43:44 compute-0 nova_compute[254092]: 2025-11-25 16:43:44.767 254096 DEBUG nova.virt.hardware [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:43:44 compute-0 nova_compute[254092]: 2025-11-25 16:43:44.768 254096 DEBUG nova.virt.hardware [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:43:44 compute-0 nova_compute[254092]: 2025-11-25 16:43:44.768 254096 DEBUG nova.virt.hardware [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:43:44 compute-0 nova_compute[254092]: 2025-11-25 16:43:44.768 254096 DEBUG nova.virt.hardware [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:43:44 compute-0 nova_compute[254092]: 2025-11-25 16:43:44.768 254096 DEBUG nova.virt.hardware [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:43:44 compute-0 nova_compute[254092]: 2025-11-25 16:43:44.769 254096 DEBUG nova.virt.hardware [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:43:44 compute-0 nova_compute[254092]: 2025-11-25 16:43:44.769 254096 DEBUG nova.virt.hardware [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:43:44 compute-0 nova_compute[254092]: 2025-11-25 16:43:44.769 254096 DEBUG nova.virt.hardware [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:43:44 compute-0 nova_compute[254092]: 2025-11-25 16:43:44.769 254096 DEBUG nova.virt.hardware [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:43:44 compute-0 nova_compute[254092]: 2025-11-25 16:43:44.770 254096 DEBUG nova.virt.hardware [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:43:44 compute-0 nova_compute[254092]: 2025-11-25 16:43:44.776 254096 DEBUG oslo_concurrency.processutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:43:44 compute-0 nova_compute[254092]: 2025-11-25 16:43:44.888 254096 DEBUG nova.network.neutron [-] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:43:44 compute-0 ceph-mon[74985]: pgmap v1750: 321 pgs: 321 active+clean; 288 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 302 KiB/s rd, 3.9 MiB/s wr, 102 op/s
Nov 25 16:43:44 compute-0 nova_compute[254092]: 2025-11-25 16:43:44.943 254096 INFO nova.compute.manager [-] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Took 1.36 seconds to deallocate network for instance.
Nov 25 16:43:44 compute-0 nova_compute[254092]: 2025-11-25 16:43:44.944 254096 DEBUG nova.network.neutron [-] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:43:44 compute-0 nova_compute[254092]: 2025-11-25 16:43:44.998 254096 INFO nova.compute.manager [-] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Took 1.28 seconds to deallocate network for instance.
Nov 25 16:43:45 compute-0 nova_compute[254092]: 2025-11-25 16:43:45.059 254096 DEBUG oslo_concurrency.lockutils [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:45 compute-0 nova_compute[254092]: 2025-11-25 16:43:45.060 254096 DEBUG oslo_concurrency.lockutils [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:45 compute-0 nova_compute[254092]: 2025-11-25 16:43:45.100 254096 DEBUG oslo_concurrency.lockutils [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:45 compute-0 nova_compute[254092]: 2025-11-25 16:43:45.218 254096 DEBUG oslo_concurrency.processutils [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:43:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:43:45 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2397520148' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:43:45 compute-0 nova_compute[254092]: 2025-11-25 16:43:45.302 254096 DEBUG oslo_concurrency.processutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:43:45 compute-0 nova_compute[254092]: 2025-11-25 16:43:45.339 254096 DEBUG nova.storage.rbd_utils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] rbd image 44f3c94a-060c-4650-bfe7-a214c6a10207_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:43:45 compute-0 nova_compute[254092]: 2025-11-25 16:43:45.344 254096 DEBUG oslo_concurrency.processutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:43:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1751: 321 pgs: 321 active+clean; 200 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 380 KiB/s rd, 5.3 MiB/s wr, 161 op/s
Nov 25 16:43:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:43:45 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2773892915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:43:45 compute-0 nova_compute[254092]: 2025-11-25 16:43:45.696 254096 DEBUG oslo_concurrency.processutils [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:43:45 compute-0 nova_compute[254092]: 2025-11-25 16:43:45.704 254096 DEBUG nova.compute.provider_tree [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:43:45 compute-0 nova_compute[254092]: 2025-11-25 16:43:45.725 254096 DEBUG nova.scheduler.client.report [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:43:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:43:45 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1237226620' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:43:45 compute-0 nova_compute[254092]: 2025-11-25 16:43:45.805 254096 DEBUG oslo_concurrency.processutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:43:45 compute-0 nova_compute[254092]: 2025-11-25 16:43:45.806 254096 DEBUG nova.virt.libvirt.vif [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:43:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-816918544',display_name='tempest-ServerMetadataNegativeTestJSON-server-816918544',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-816918544',id=76,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e0a920505c8240228ed836913ffcdbe4',ramdisk_id='',reservation_id='r-0p559jg4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataNegativeTestJSON-1696100389',owner_user_name='tempest-ServerMetadataNegativeTestJSON-1696100389-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:43:37Z,user_data=None,user_id='5e0e4ce0eeda4a79ab738e1f8dc0f725',uuid=44f3c94a-060c-4650-bfe7-a214c6a10207,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b5793671-5020-4692-92f1-65a87bcdf38e", "address": "fa:16:3e:e1:5b:98", "network": {"id": "41b60c60-81b1-400f-ad99-152388e55616", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1280594054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0a920505c8240228ed836913ffcdbe4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5793671-50", "ovs_interfaceid": "b5793671-5020-4692-92f1-65a87bcdf38e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:43:45 compute-0 nova_compute[254092]: 2025-11-25 16:43:45.807 254096 DEBUG nova.network.os_vif_util [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Converting VIF {"id": "b5793671-5020-4692-92f1-65a87bcdf38e", "address": "fa:16:3e:e1:5b:98", "network": {"id": "41b60c60-81b1-400f-ad99-152388e55616", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1280594054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0a920505c8240228ed836913ffcdbe4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5793671-50", "ovs_interfaceid": "b5793671-5020-4692-92f1-65a87bcdf38e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:43:45 compute-0 nova_compute[254092]: 2025-11-25 16:43:45.807 254096 DEBUG nova.network.os_vif_util [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:5b:98,bridge_name='br-int',has_traffic_filtering=True,id=b5793671-5020-4692-92f1-65a87bcdf38e,network=Network(41b60c60-81b1-400f-ad99-152388e55616),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5793671-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:43:45 compute-0 nova_compute[254092]: 2025-11-25 16:43:45.808 254096 DEBUG nova.objects.instance [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 44f3c94a-060c-4650-bfe7-a214c6a10207 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:43:45 compute-0 nova_compute[254092]: 2025-11-25 16:43:45.819 254096 DEBUG nova.virt.libvirt.driver [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:43:45 compute-0 nova_compute[254092]:   <uuid>44f3c94a-060c-4650-bfe7-a214c6a10207</uuid>
Nov 25 16:43:45 compute-0 nova_compute[254092]:   <name>instance-0000004c</name>
Nov 25 16:43:45 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:43:45 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:43:45 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:43:45 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:       <nova:name>tempest-ServerMetadataNegativeTestJSON-server-816918544</nova:name>
Nov 25 16:43:45 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:43:44</nova:creationTime>
Nov 25 16:43:45 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:43:45 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:43:45 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:43:45 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:43:45 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:43:45 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:43:45 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:43:45 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:43:45 compute-0 nova_compute[254092]:         <nova:user uuid="5e0e4ce0eeda4a79ab738e1f8dc0f725">tempest-ServerMetadataNegativeTestJSON-1696100389-project-member</nova:user>
Nov 25 16:43:45 compute-0 nova_compute[254092]:         <nova:project uuid="e0a920505c8240228ed836913ffcdbe4">tempest-ServerMetadataNegativeTestJSON-1696100389</nova:project>
Nov 25 16:43:45 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:43:45 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:43:45 compute-0 nova_compute[254092]:         <nova:port uuid="b5793671-5020-4692-92f1-65a87bcdf38e">
Nov 25 16:43:45 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:43:45 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:43:45 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:43:45 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:43:45 compute-0 nova_compute[254092]:     <system>
Nov 25 16:43:45 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:43:45 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:43:45 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:43:45 compute-0 nova_compute[254092]:       <entry name="serial">44f3c94a-060c-4650-bfe7-a214c6a10207</entry>
Nov 25 16:43:45 compute-0 nova_compute[254092]:       <entry name="uuid">44f3c94a-060c-4650-bfe7-a214c6a10207</entry>
Nov 25 16:43:45 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     </system>
Nov 25 16:43:45 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:43:45 compute-0 nova_compute[254092]:   <os>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:   </os>
Nov 25 16:43:45 compute-0 nova_compute[254092]:   <features>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:   </features>
Nov 25 16:43:45 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:43:45 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:43:45 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:43:45 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:43:45 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:43:45 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/44f3c94a-060c-4650-bfe7-a214c6a10207_disk">
Nov 25 16:43:45 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:       </source>
Nov 25 16:43:45 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:43:45 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:43:45 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:43:45 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/44f3c94a-060c-4650-bfe7-a214c6a10207_disk.config">
Nov 25 16:43:45 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:       </source>
Nov 25 16:43:45 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:43:45 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:43:45 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:43:45 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:e1:5b:98"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:       <target dev="tapb5793671-50"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:43:45 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/44f3c94a-060c-4650-bfe7-a214c6a10207/console.log" append="off"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     <video>
Nov 25 16:43:45 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     </video>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:43:45 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:43:45 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:43:45 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:43:45 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:43:45 compute-0 nova_compute[254092]: </domain>
Nov 25 16:43:45 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:43:45 compute-0 nova_compute[254092]: 2025-11-25 16:43:45.821 254096 DEBUG nova.compute.manager [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Preparing to wait for external event network-vif-plugged-b5793671-5020-4692-92f1-65a87bcdf38e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:43:45 compute-0 nova_compute[254092]: 2025-11-25 16:43:45.821 254096 DEBUG oslo_concurrency.lockutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Acquiring lock "44f3c94a-060c-4650-bfe7-a214c6a10207-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:45 compute-0 nova_compute[254092]: 2025-11-25 16:43:45.821 254096 DEBUG oslo_concurrency.lockutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Lock "44f3c94a-060c-4650-bfe7-a214c6a10207-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:45 compute-0 nova_compute[254092]: 2025-11-25 16:43:45.822 254096 DEBUG oslo_concurrency.lockutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Lock "44f3c94a-060c-4650-bfe7-a214c6a10207-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:45 compute-0 nova_compute[254092]: 2025-11-25 16:43:45.822 254096 DEBUG nova.virt.libvirt.vif [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:43:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-816918544',display_name='tempest-ServerMetadataNegativeTestJSON-server-816918544',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-816918544',id=76,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e0a920505c8240228ed836913ffcdbe4',ramdisk_id='',reservation_id='r-0p559jg4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataNegativeTestJSON-1696100389',owner_user_name='tempest-ServerMetadataNegativeTestJSON-1696100389-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:43:37Z,user_data=None,user_id='5e0e4ce0eeda4a79ab738e1f8dc0f725',uuid=44f3c94a-060c-4650-bfe7-a214c6a10207,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b5793671-5020-4692-92f1-65a87bcdf38e", "address": "fa:16:3e:e1:5b:98", "network": {"id": "41b60c60-81b1-400f-ad99-152388e55616", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1280594054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0a920505c8240228ed836913ffcdbe4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5793671-50", "ovs_interfaceid": "b5793671-5020-4692-92f1-65a87bcdf38e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:43:45 compute-0 nova_compute[254092]: 2025-11-25 16:43:45.823 254096 DEBUG nova.network.os_vif_util [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Converting VIF {"id": "b5793671-5020-4692-92f1-65a87bcdf38e", "address": "fa:16:3e:e1:5b:98", "network": {"id": "41b60c60-81b1-400f-ad99-152388e55616", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1280594054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0a920505c8240228ed836913ffcdbe4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5793671-50", "ovs_interfaceid": "b5793671-5020-4692-92f1-65a87bcdf38e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:43:45 compute-0 nova_compute[254092]: 2025-11-25 16:43:45.823 254096 DEBUG nova.network.os_vif_util [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:5b:98,bridge_name='br-int',has_traffic_filtering=True,id=b5793671-5020-4692-92f1-65a87bcdf38e,network=Network(41b60c60-81b1-400f-ad99-152388e55616),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5793671-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:43:45 compute-0 nova_compute[254092]: 2025-11-25 16:43:45.823 254096 DEBUG os_vif [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:5b:98,bridge_name='br-int',has_traffic_filtering=True,id=b5793671-5020-4692-92f1-65a87bcdf38e,network=Network(41b60c60-81b1-400f-ad99-152388e55616),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5793671-50') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:43:45 compute-0 nova_compute[254092]: 2025-11-25 16:43:45.824 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:45 compute-0 nova_compute[254092]: 2025-11-25 16:43:45.824 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:45 compute-0 nova_compute[254092]: 2025-11-25 16:43:45.825 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:43:45 compute-0 nova_compute[254092]: 2025-11-25 16:43:45.827 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:45 compute-0 nova_compute[254092]: 2025-11-25 16:43:45.827 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5793671-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:45 compute-0 nova_compute[254092]: 2025-11-25 16:43:45.827 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb5793671-50, col_values=(('external_ids', {'iface-id': 'b5793671-5020-4692-92f1-65a87bcdf38e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e1:5b:98', 'vm-uuid': '44f3c94a-060c-4650-bfe7-a214c6a10207'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:45 compute-0 nova_compute[254092]: 2025-11-25 16:43:45.829 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:45 compute-0 NetworkManager[48891]: <info>  [1764089025.8300] manager: (tapb5793671-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/308)
Nov 25 16:43:45 compute-0 nova_compute[254092]: 2025-11-25 16:43:45.831 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:43:45 compute-0 nova_compute[254092]: 2025-11-25 16:43:45.834 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:45 compute-0 nova_compute[254092]: 2025-11-25 16:43:45.835 254096 INFO os_vif [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:5b:98,bridge_name='br-int',has_traffic_filtering=True,id=b5793671-5020-4692-92f1-65a87bcdf38e,network=Network(41b60c60-81b1-400f-ad99-152388e55616),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5793671-50')
Nov 25 16:43:45 compute-0 nova_compute[254092]: 2025-11-25 16:43:45.855 254096 DEBUG oslo_concurrency.lockutils [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:45 compute-0 nova_compute[254092]: 2025-11-25 16:43:45.857 254096 DEBUG oslo_concurrency.lockutils [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:43:45 compute-0 nova_compute[254092]: 2025-11-25 16:43:45.899 254096 DEBUG nova.virt.libvirt.driver [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:43:45 compute-0 nova_compute[254092]: 2025-11-25 16:43:45.899 254096 DEBUG nova.virt.libvirt.driver [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:43:45 compute-0 nova_compute[254092]: 2025-11-25 16:43:45.899 254096 DEBUG nova.virt.libvirt.driver [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] No VIF found with MAC fa:16:3e:e1:5b:98, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:43:45 compute-0 nova_compute[254092]: 2025-11-25 16:43:45.900 254096 INFO nova.virt.libvirt.driver [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Using config drive
Nov 25 16:43:45 compute-0 nova_compute[254092]: 2025-11-25 16:43:45.965 254096 DEBUG nova.storage.rbd_utils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] rbd image 44f3c94a-060c-4650-bfe7-a214c6a10207_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:43:45 compute-0 nova_compute[254092]: 2025-11-25 16:43:45.972 254096 INFO nova.scheduler.client.report [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Deleted allocations for instance f5a259d0-4460-4335-aa4a-f874f93a7e93
Nov 25 16:43:46 compute-0 nova_compute[254092]: 2025-11-25 16:43:46.055 254096 DEBUG oslo_concurrency.processutils [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:43:46 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2397520148' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:43:46 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2773892915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:43:46 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1237226620' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:43:46 compute-0 nova_compute[254092]: 2025-11-25 16:43:46.116 254096 DEBUG oslo_concurrency.lockutils [None req-f3161e04-7f48-4933-ade6-fea6c8fc85f8 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "f5a259d0-4460-4335-aa4a-f874f93a7e93" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.096s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:46 compute-0 nova_compute[254092]: 2025-11-25 16:43:46.309 254096 DEBUG nova.compute.manager [req-95c9df47-c4c4-4bf2-95cf-75aed0f8b948 req-21291017-af0c-4f65-aee2-e5fa11588642 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Received event network-vif-deleted-2a3974bd-02ad-406e-9531-3844e5df4bfa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:43:46 compute-0 nova_compute[254092]: 2025-11-25 16:43:46.310 254096 DEBUG nova.compute.manager [req-95c9df47-c4c4-4bf2-95cf-75aed0f8b948 req-21291017-af0c-4f65-aee2-e5fa11588642 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Received event network-vif-deleted-a297c9f1-753f-4f96-b8e4-38a42969484d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:43:46 compute-0 nova_compute[254092]: 2025-11-25 16:43:46.469 254096 INFO nova.virt.libvirt.driver [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Creating config drive at /var/lib/nova/instances/44f3c94a-060c-4650-bfe7-a214c6a10207/disk.config
Nov 25 16:43:46 compute-0 nova_compute[254092]: 2025-11-25 16:43:46.473 254096 DEBUG oslo_concurrency.processutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/44f3c94a-060c-4650-bfe7-a214c6a10207/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqochm2wc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:43:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:43:46 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1029103014' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:43:46 compute-0 nova_compute[254092]: 2025-11-25 16:43:46.590 254096 DEBUG oslo_concurrency.processutils [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:43:46 compute-0 nova_compute[254092]: 2025-11-25 16:43:46.596 254096 DEBUG nova.compute.provider_tree [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:43:46 compute-0 nova_compute[254092]: 2025-11-25 16:43:46.608 254096 DEBUG oslo_concurrency.processutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/44f3c94a-060c-4650-bfe7-a214c6a10207/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqochm2wc" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:43:46 compute-0 nova_compute[254092]: 2025-11-25 16:43:46.629 254096 DEBUG nova.storage.rbd_utils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] rbd image 44f3c94a-060c-4650-bfe7-a214c6a10207_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:43:46 compute-0 nova_compute[254092]: 2025-11-25 16:43:46.633 254096 DEBUG oslo_concurrency.processutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/44f3c94a-060c-4650-bfe7-a214c6a10207/disk.config 44f3c94a-060c-4650-bfe7-a214c6a10207_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:43:46 compute-0 nova_compute[254092]: 2025-11-25 16:43:46.663 254096 DEBUG nova.scheduler.client.report [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:43:46 compute-0 nova_compute[254092]: 2025-11-25 16:43:46.688 254096 DEBUG oslo_concurrency.lockutils [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.831s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:46 compute-0 nova_compute[254092]: 2025-11-25 16:43:46.733 254096 INFO nova.scheduler.client.report [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Deleted allocations for instance 69b3cbbb-9713-4f49-9e67-1f33a3ae2642
Nov 25 16:43:46 compute-0 nova_compute[254092]: 2025-11-25 16:43:46.735 254096 DEBUG nova.network.neutron [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Updated VIF entry in instance network info cache for port b5793671-5020-4692-92f1-65a87bcdf38e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:43:46 compute-0 nova_compute[254092]: 2025-11-25 16:43:46.736 254096 DEBUG nova.network.neutron [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Updating instance_info_cache with network_info: [{"id": "b5793671-5020-4692-92f1-65a87bcdf38e", "address": "fa:16:3e:e1:5b:98", "network": {"id": "41b60c60-81b1-400f-ad99-152388e55616", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1280594054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0a920505c8240228ed836913ffcdbe4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5793671-50", "ovs_interfaceid": "b5793671-5020-4692-92f1-65a87bcdf38e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:43:46 compute-0 nova_compute[254092]: 2025-11-25 16:43:46.753 254096 DEBUG oslo_concurrency.lockutils [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-44f3c94a-060c-4650-bfe7-a214c6a10207" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:43:46 compute-0 nova_compute[254092]: 2025-11-25 16:43:46.754 254096 DEBUG nova.compute.manager [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Received event network-vif-plugged-2a3974bd-02ad-406e-9531-3844e5df4bfa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:43:46 compute-0 nova_compute[254092]: 2025-11-25 16:43:46.754 254096 DEBUG oslo_concurrency.lockutils [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f5a259d0-4460-4335-aa4a-f874f93a7e93-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:46 compute-0 nova_compute[254092]: 2025-11-25 16:43:46.755 254096 DEBUG oslo_concurrency.lockutils [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f5a259d0-4460-4335-aa4a-f874f93a7e93-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:46 compute-0 nova_compute[254092]: 2025-11-25 16:43:46.755 254096 DEBUG oslo_concurrency.lockutils [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f5a259d0-4460-4335-aa4a-f874f93a7e93-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:46 compute-0 nova_compute[254092]: 2025-11-25 16:43:46.755 254096 DEBUG nova.compute.manager [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] No waiting events found dispatching network-vif-plugged-2a3974bd-02ad-406e-9531-3844e5df4bfa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:43:46 compute-0 nova_compute[254092]: 2025-11-25 16:43:46.755 254096 WARNING nova.compute.manager [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Received unexpected event network-vif-plugged-2a3974bd-02ad-406e-9531-3844e5df4bfa for instance with vm_state active and task_state deleting.
Nov 25 16:43:46 compute-0 nova_compute[254092]: 2025-11-25 16:43:46.756 254096 DEBUG nova.compute.manager [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Received event network-vif-unplugged-a297c9f1-753f-4f96-b8e4-38a42969484d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:43:46 compute-0 nova_compute[254092]: 2025-11-25 16:43:46.756 254096 DEBUG oslo_concurrency.lockutils [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "69b3cbbb-9713-4f49-9e67-1f33a3ae2642-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:46 compute-0 nova_compute[254092]: 2025-11-25 16:43:46.756 254096 DEBUG oslo_concurrency.lockutils [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "69b3cbbb-9713-4f49-9e67-1f33a3ae2642-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:46 compute-0 nova_compute[254092]: 2025-11-25 16:43:46.756 254096 DEBUG oslo_concurrency.lockutils [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "69b3cbbb-9713-4f49-9e67-1f33a3ae2642-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:46 compute-0 nova_compute[254092]: 2025-11-25 16:43:46.757 254096 DEBUG nova.compute.manager [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] No waiting events found dispatching network-vif-unplugged-a297c9f1-753f-4f96-b8e4-38a42969484d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:43:46 compute-0 nova_compute[254092]: 2025-11-25 16:43:46.757 254096 DEBUG nova.compute.manager [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Received event network-vif-unplugged-a297c9f1-753f-4f96-b8e4-38a42969484d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:43:46 compute-0 nova_compute[254092]: 2025-11-25 16:43:46.757 254096 DEBUG nova.compute.manager [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Received event network-vif-plugged-a297c9f1-753f-4f96-b8e4-38a42969484d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:43:46 compute-0 nova_compute[254092]: 2025-11-25 16:43:46.758 254096 DEBUG oslo_concurrency.lockutils [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "69b3cbbb-9713-4f49-9e67-1f33a3ae2642-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:46 compute-0 nova_compute[254092]: 2025-11-25 16:43:46.758 254096 DEBUG oslo_concurrency.lockutils [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "69b3cbbb-9713-4f49-9e67-1f33a3ae2642-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:46 compute-0 nova_compute[254092]: 2025-11-25 16:43:46.758 254096 DEBUG oslo_concurrency.lockutils [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "69b3cbbb-9713-4f49-9e67-1f33a3ae2642-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:46 compute-0 nova_compute[254092]: 2025-11-25 16:43:46.759 254096 DEBUG nova.compute.manager [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] No waiting events found dispatching network-vif-plugged-a297c9f1-753f-4f96-b8e4-38a42969484d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:43:46 compute-0 nova_compute[254092]: 2025-11-25 16:43:46.759 254096 WARNING nova.compute.manager [req-36cb0d62-8948-4587-9387-06f3a9bbc13d req-f7f6bbb9-ba2f-4173-9e68-2b128ea10e54 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Received unexpected event network-vif-plugged-a297c9f1-753f-4f96-b8e4-38a42969484d for instance with vm_state active and task_state deleting.
Nov 25 16:43:46 compute-0 nova_compute[254092]: 2025-11-25 16:43:46.837 254096 DEBUG oslo_concurrency.lockutils [None req-f3a1befb-b055-4e5d-b0fd-6826616fac2f 1a8a49326e3040eea57c8e1a61660f19 b96c962def8e44a98e659bf2a55a8dcc - - default default] Lock "69b3cbbb-9713-4f49-9e67-1f33a3ae2642" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:47 compute-0 nova_compute[254092]: 2025-11-25 16:43:47.014 254096 DEBUG oslo_concurrency.processutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/44f3c94a-060c-4650-bfe7-a214c6a10207/disk.config 44f3c94a-060c-4650-bfe7-a214c6a10207_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.381s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:43:47 compute-0 nova_compute[254092]: 2025-11-25 16:43:47.015 254096 INFO nova.virt.libvirt.driver [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Deleting local config drive /var/lib/nova/instances/44f3c94a-060c-4650-bfe7-a214c6a10207/disk.config because it was imported into RBD.
Nov 25 16:43:47 compute-0 nova_compute[254092]: 2025-11-25 16:43:47.024 254096 DEBUG oslo_concurrency.lockutils [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:47 compute-0 nova_compute[254092]: 2025-11-25 16:43:47.025 254096 DEBUG oslo_concurrency.lockutils [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:47 compute-0 nova_compute[254092]: 2025-11-25 16:43:47.025 254096 INFO nova.compute.manager [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Rebooting instance
Nov 25 16:43:47 compute-0 nova_compute[254092]: 2025-11-25 16:43:47.043 254096 DEBUG oslo_concurrency.lockutils [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquiring lock "refresh_cache-01f96314-1fbe-4eee-a4ed-db7f448a5320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:43:47 compute-0 nova_compute[254092]: 2025-11-25 16:43:47.043 254096 DEBUG oslo_concurrency.lockutils [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquired lock "refresh_cache-01f96314-1fbe-4eee-a4ed-db7f448a5320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:43:47 compute-0 nova_compute[254092]: 2025-11-25 16:43:47.044 254096 DEBUG nova.network.neutron [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:43:47 compute-0 kernel: tapb5793671-50: entered promiscuous mode
Nov 25 16:43:47 compute-0 ovn_controller[153477]: 2025-11-25T16:43:47Z|00714|binding|INFO|Claiming lport b5793671-5020-4692-92f1-65a87bcdf38e for this chassis.
Nov 25 16:43:47 compute-0 ovn_controller[153477]: 2025-11-25T16:43:47Z|00715|binding|INFO|b5793671-5020-4692-92f1-65a87bcdf38e: Claiming fa:16:3e:e1:5b:98 10.100.0.7
Nov 25 16:43:47 compute-0 nova_compute[254092]: 2025-11-25 16:43:47.073 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:47 compute-0 NetworkManager[48891]: <info>  [1764089027.0755] manager: (tapb5793671-50): new Tun device (/org/freedesktop/NetworkManager/Devices/309)
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.084 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:5b:98 10.100.0.7'], port_security=['fa:16:3e:e1:5b:98 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '44f3c94a-060c-4650-bfe7-a214c6a10207', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41b60c60-81b1-400f-ad99-152388e55616', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e0a920505c8240228ed836913ffcdbe4', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fab53028-087c-4e81-a981-98f5af5e037e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=268fa885-b3f3-471a-9d08-3e6f7bd64b52, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=b5793671-5020-4692-92f1-65a87bcdf38e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.087 163338 INFO neutron.agent.ovn.metadata.agent [-] Port b5793671-5020-4692-92f1-65a87bcdf38e in datapath 41b60c60-81b1-400f-ad99-152388e55616 bound to our chassis
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.089 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 41b60c60-81b1-400f-ad99-152388e55616
Nov 25 16:43:47 compute-0 ovn_controller[153477]: 2025-11-25T16:43:47Z|00716|binding|INFO|Setting lport b5793671-5020-4692-92f1-65a87bcdf38e ovn-installed in OVS
Nov 25 16:43:47 compute-0 ovn_controller[153477]: 2025-11-25T16:43:47Z|00717|binding|INFO|Setting lport b5793671-5020-4692-92f1-65a87bcdf38e up in Southbound
Nov 25 16:43:47 compute-0 nova_compute[254092]: 2025-11-25 16:43:47.091 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:47 compute-0 nova_compute[254092]: 2025-11-25 16:43:47.093 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.102 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[aa2a5c41-9162-4b55-87b2-39f6a719888a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.103 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap41b60c60-81 in ovnmeta-41b60c60-81b1-400f-ad99-152388e55616 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.105 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap41b60c60-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.105 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2c4f692d-fe12-4dc6-9c3b-75c2a0c3ef62]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.106 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d74ab641-0d5f-42a8-b885-3d8d85a1f824]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:47 compute-0 systemd-udevd[333033]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:43:47 compute-0 systemd-machined[216343]: New machine qemu-89-instance-0000004c.
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.119 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[a7eb0cec-e9af-455e-a5cf-722f7a11e738]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:47 compute-0 NetworkManager[48891]: <info>  [1764089027.1242] device (tapb5793671-50): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:43:47 compute-0 systemd[1]: Started Virtual Machine qemu-89-instance-0000004c.
Nov 25 16:43:47 compute-0 NetworkManager[48891]: <info>  [1764089027.1264] device (tapb5793671-50): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.148 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[493eced0-fa9c-4cb0-a5f4-38d81c0b3fd6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.179 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1846c71f-1e35-414a-9f81-27ea0ee74749]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.184 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a31824fd-ab93-47a4-9723-f10d42fb5787]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:47 compute-0 NetworkManager[48891]: <info>  [1764089027.1855] manager: (tap41b60c60-80): new Veth device (/org/freedesktop/NetworkManager/Devices/310)
Nov 25 16:43:47 compute-0 ceph-mon[74985]: pgmap v1751: 321 pgs: 321 active+clean; 200 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 380 KiB/s rd, 5.3 MiB/s wr, 161 op/s
Nov 25 16:43:47 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1029103014' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.220 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[13d0a946-b05d-4876-9770-4c315862d12d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.223 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[23bcb1a6-0696-4f24-a0e5-71cfb8b0c465]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:47 compute-0 NetworkManager[48891]: <info>  [1764089027.2586] device (tap41b60c60-80): carrier: link connected
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.265 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[cebbc111-ed1c-4b43-91a2-32766727a56e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.286 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[66a49ce2-f17f-46d7-a8b2-a2d428005b8c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41b60c60-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:21:9a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 210], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548484, 'reachable_time': 16232, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333065, 'error': None, 'target': 'ovnmeta-41b60c60-81b1-400f-ad99-152388e55616', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.302 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2f84f807-83d7-4650-a240-534955b6e425]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe81:219a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 548484, 'tstamp': 548484}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333066, 'error': None, 'target': 'ovnmeta-41b60c60-81b1-400f-ad99-152388e55616', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.323 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[24718d09-ab56-499f-a6f7-d45937264681]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41b60c60-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:21:9a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 3, 'rx_bytes': 90, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 3, 'rx_bytes': 90, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 210], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548484, 'reachable_time': 16232, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 224, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 224, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 333067, 'error': None, 'target': 'ovnmeta-41b60c60-81b1-400f-ad99-152388e55616', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.358 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[dfe0cb6f-a133-4137-9d35-97b9b2cdb86e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.528 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e3c6d02a-8522-461a-87ed-fbd8552ebcd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.529 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41b60c60-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.529 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.530 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41b60c60-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:47 compute-0 nova_compute[254092]: 2025-11-25 16:43:47.531 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:47 compute-0 NetworkManager[48891]: <info>  [1764089027.5323] manager: (tap41b60c60-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/311)
Nov 25 16:43:47 compute-0 kernel: tap41b60c60-80: entered promiscuous mode
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.534 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap41b60c60-80, col_values=(('external_ids', {'iface-id': 'd615f472-f4a0-4cb5-a4b8-8b9aa4b9f756'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:47 compute-0 nova_compute[254092]: 2025-11-25 16:43:47.535 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:47 compute-0 ovn_controller[153477]: 2025-11-25T16:43:47Z|00718|binding|INFO|Releasing lport d615f472-f4a0-4cb5-a4b8-8b9aa4b9f756 from this chassis (sb_readonly=0)
Nov 25 16:43:47 compute-0 nova_compute[254092]: 2025-11-25 16:43:47.552 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.553 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/41b60c60-81b1-400f-ad99-152388e55616.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/41b60c60-81b1-400f-ad99-152388e55616.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.555 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[67111b8a-396a-4b30-b775-cc557d0c1311]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.556 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-41b60c60-81b1-400f-ad99-152388e55616
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/41b60c60-81b1-400f-ad99-152388e55616.pid.haproxy
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 41b60c60-81b1-400f-ad99-152388e55616
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:43:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:47.557 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-41b60c60-81b1-400f-ad99-152388e55616', 'env', 'PROCESS_TAG=haproxy-41b60c60-81b1-400f-ad99-152388e55616', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/41b60c60-81b1-400f-ad99-152388e55616.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:43:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1752: 321 pgs: 321 active+clean; 167 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 378 KiB/s rd, 5.8 MiB/s wr, 160 op/s
Nov 25 16:43:47 compute-0 nova_compute[254092]: 2025-11-25 16:43:47.790 254096 DEBUG nova.compute.manager [req-e8af9de0-f5e1-43f1-8cdc-ea570c0074de req-8fe7d243-c644-46cc-a635-d463e1bc5eb1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Received event network-vif-plugged-b5793671-5020-4692-92f1-65a87bcdf38e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:43:47 compute-0 nova_compute[254092]: 2025-11-25 16:43:47.791 254096 DEBUG oslo_concurrency.lockutils [req-e8af9de0-f5e1-43f1-8cdc-ea570c0074de req-8fe7d243-c644-46cc-a635-d463e1bc5eb1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "44f3c94a-060c-4650-bfe7-a214c6a10207-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:47 compute-0 nova_compute[254092]: 2025-11-25 16:43:47.793 254096 DEBUG oslo_concurrency.lockutils [req-e8af9de0-f5e1-43f1-8cdc-ea570c0074de req-8fe7d243-c644-46cc-a635-d463e1bc5eb1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "44f3c94a-060c-4650-bfe7-a214c6a10207-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:47 compute-0 nova_compute[254092]: 2025-11-25 16:43:47.793 254096 DEBUG oslo_concurrency.lockutils [req-e8af9de0-f5e1-43f1-8cdc-ea570c0074de req-8fe7d243-c644-46cc-a635-d463e1bc5eb1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "44f3c94a-060c-4650-bfe7-a214c6a10207-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:47 compute-0 nova_compute[254092]: 2025-11-25 16:43:47.793 254096 DEBUG nova.compute.manager [req-e8af9de0-f5e1-43f1-8cdc-ea570c0074de req-8fe7d243-c644-46cc-a635-d463e1bc5eb1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Processing event network-vif-plugged-b5793671-5020-4692-92f1-65a87bcdf38e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:43:47 compute-0 nova_compute[254092]: 2025-11-25 16:43:47.894 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089027.893465, 44f3c94a-060c-4650-bfe7-a214c6a10207 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:43:47 compute-0 nova_compute[254092]: 2025-11-25 16:43:47.895 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] VM Started (Lifecycle Event)
Nov 25 16:43:47 compute-0 nova_compute[254092]: 2025-11-25 16:43:47.897 254096 DEBUG nova.compute.manager [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:43:47 compute-0 nova_compute[254092]: 2025-11-25 16:43:47.902 254096 DEBUG nova.virt.libvirt.driver [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:43:47 compute-0 nova_compute[254092]: 2025-11-25 16:43:47.905 254096 INFO nova.virt.libvirt.driver [-] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Instance spawned successfully.
Nov 25 16:43:47 compute-0 nova_compute[254092]: 2025-11-25 16:43:47.906 254096 DEBUG nova.virt.libvirt.driver [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:43:47 compute-0 nova_compute[254092]: 2025-11-25 16:43:47.911 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:43:47 compute-0 nova_compute[254092]: 2025-11-25 16:43:47.914 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:43:47 compute-0 nova_compute[254092]: 2025-11-25 16:43:47.930 254096 DEBUG nova.virt.libvirt.driver [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:43:47 compute-0 nova_compute[254092]: 2025-11-25 16:43:47.931 254096 DEBUG nova.virt.libvirt.driver [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:43:47 compute-0 nova_compute[254092]: 2025-11-25 16:43:47.931 254096 DEBUG nova.virt.libvirt.driver [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:43:47 compute-0 nova_compute[254092]: 2025-11-25 16:43:47.932 254096 DEBUG nova.virt.libvirt.driver [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:43:47 compute-0 nova_compute[254092]: 2025-11-25 16:43:47.932 254096 DEBUG nova.virt.libvirt.driver [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:43:47 compute-0 nova_compute[254092]: 2025-11-25 16:43:47.933 254096 DEBUG nova.virt.libvirt.driver [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:43:47 compute-0 nova_compute[254092]: 2025-11-25 16:43:47.937 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:43:47 compute-0 nova_compute[254092]: 2025-11-25 16:43:47.938 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089027.8937201, 44f3c94a-060c-4650-bfe7-a214c6a10207 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:43:47 compute-0 nova_compute[254092]: 2025-11-25 16:43:47.938 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] VM Paused (Lifecycle Event)
Nov 25 16:43:47 compute-0 nova_compute[254092]: 2025-11-25 16:43:47.972 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:43:47 compute-0 nova_compute[254092]: 2025-11-25 16:43:47.976 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089027.900881, 44f3c94a-060c-4650-bfe7-a214c6a10207 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:43:47 compute-0 nova_compute[254092]: 2025-11-25 16:43:47.977 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] VM Resumed (Lifecycle Event)
Nov 25 16:43:48 compute-0 podman[333141]: 2025-11-25 16:43:47.927112967 +0000 UTC m=+0.042617747 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:43:48 compute-0 nova_compute[254092]: 2025-11-25 16:43:48.036 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:43:48 compute-0 nova_compute[254092]: 2025-11-25 16:43:48.039 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:43:48 compute-0 nova_compute[254092]: 2025-11-25 16:43:48.058 254096 INFO nova.compute.manager [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Took 10.36 seconds to spawn the instance on the hypervisor.
Nov 25 16:43:48 compute-0 nova_compute[254092]: 2025-11-25 16:43:48.058 254096 DEBUG nova.compute.manager [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:43:48 compute-0 nova_compute[254092]: 2025-11-25 16:43:48.059 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:43:48 compute-0 podman[333141]: 2025-11-25 16:43:48.126692431 +0000 UTC m=+0.242197201 container create b12ab01ec6e8b71e2b8f6ae6b06e1ffe441581ec33b5c13822424a64db82f038 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-41b60c60-81b1-400f-ad99-152388e55616, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:43:48 compute-0 nova_compute[254092]: 2025-11-25 16:43:48.158 254096 INFO nova.compute.manager [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Took 11.49 seconds to build instance.
Nov 25 16:43:48 compute-0 systemd[1]: Started libpod-conmon-b12ab01ec6e8b71e2b8f6ae6b06e1ffe441581ec33b5c13822424a64db82f038.scope.
Nov 25 16:43:48 compute-0 nova_compute[254092]: 2025-11-25 16:43:48.175 254096 DEBUG oslo_concurrency.lockutils [None req-3fdcbe52-20a2-4f2f-9698-294b07d7b5ea 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Lock "44f3c94a-060c-4650-bfe7-a214c6a10207" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:48 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:43:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c02ee99f386b93fdd51ea0e33033064775b4969dd78ae64bfa17f9e14b0747ea/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:43:48 compute-0 podman[333141]: 2025-11-25 16:43:48.224810323 +0000 UTC m=+0.340315113 container init b12ab01ec6e8b71e2b8f6ae6b06e1ffe441581ec33b5c13822424a64db82f038 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-41b60c60-81b1-400f-ad99-152388e55616, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 16:43:48 compute-0 ceph-mon[74985]: pgmap v1752: 321 pgs: 321 active+clean; 167 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 378 KiB/s rd, 5.8 MiB/s wr, 160 op/s
Nov 25 16:43:48 compute-0 podman[333141]: 2025-11-25 16:43:48.232104961 +0000 UTC m=+0.347609731 container start b12ab01ec6e8b71e2b8f6ae6b06e1ffe441581ec33b5c13822424a64db82f038 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-41b60c60-81b1-400f-ad99-152388e55616, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:43:48 compute-0 neutron-haproxy-ovnmeta-41b60c60-81b1-400f-ad99-152388e55616[333156]: [NOTICE]   (333160) : New worker (333162) forked
Nov 25 16:43:48 compute-0 neutron-haproxy-ovnmeta-41b60c60-81b1-400f-ad99-152388e55616[333156]: [NOTICE]   (333160) : Loading success.
Nov 25 16:43:48 compute-0 nova_compute[254092]: 2025-11-25 16:43:48.312 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.018 254096 DEBUG nova.network.neutron [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Updating instance_info_cache with network_info: [{"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.036 254096 DEBUG oslo_concurrency.lockutils [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Releasing lock "refresh_cache-01f96314-1fbe-4eee-a4ed-db7f448a5320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.037 254096 DEBUG nova.compute.manager [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:43:49 compute-0 kernel: tap4fe8c3a9-70 (unregistering): left promiscuous mode
Nov 25 16:43:49 compute-0 NetworkManager[48891]: <info>  [1764089029.2393] device (tap4fe8c3a9-70): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.246 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:49 compute-0 ovn_controller[153477]: 2025-11-25T16:43:49Z|00719|binding|INFO|Releasing lport 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 from this chassis (sb_readonly=0)
Nov 25 16:43:49 compute-0 ovn_controller[153477]: 2025-11-25T16:43:49Z|00720|binding|INFO|Setting lport 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 down in Southbound
Nov 25 16:43:49 compute-0 ovn_controller[153477]: 2025-11-25T16:43:49Z|00721|binding|INFO|Removing iface tap4fe8c3a9-70 ovn-installed in OVS
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.248 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:49.254 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:a0:1c 10.100.0.11'], port_security=['fa:16:3e:ff:a0:1c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '01f96314-1fbe-4eee-a4ed-db7f448a5320', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4d15f5aabd3491da5314b126a20225a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f659b7e6-4ca9-46df-a38f-d29c92b67f9f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.187'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa1f8c26-9163-4397-beb3-8395c780a12b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:43:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:49.255 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 in datapath 50ea1716-9d0b-409f-b648-8d2ad5a30525 unbound from our chassis
Nov 25 16:43:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:49.257 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 50ea1716-9d0b-409f-b648-8d2ad5a30525, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:43:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:49.257 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1a067d70-9843-40f0-8059-ed8f69ae6404]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:49.258 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525 namespace which is not needed anymore
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.262 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:49 compute-0 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d00000048.scope: Deactivated successfully.
Nov 25 16:43:49 compute-0 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d00000048.scope: Consumed 15.256s CPU time.
Nov 25 16:43:49 compute-0 systemd-machined[216343]: Machine qemu-85-instance-00000048 terminated.
Nov 25 16:43:49 compute-0 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[331023]: [NOTICE]   (331028) : haproxy version is 2.8.14-c23fe91
Nov 25 16:43:49 compute-0 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[331023]: [NOTICE]   (331028) : path to executable is /usr/sbin/haproxy
Nov 25 16:43:49 compute-0 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[331023]: [WARNING]  (331028) : Exiting Master process...
Nov 25 16:43:49 compute-0 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[331023]: [ALERT]    (331028) : Current worker (331030) exited with code 143 (Terminated)
Nov 25 16:43:49 compute-0 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[331023]: [WARNING]  (331028) : All workers exited. Exiting... (0)
Nov 25 16:43:49 compute-0 systemd[1]: libpod-687daea182501f598729080157745bbb6a8893f57172d27904e3db78d9125e62.scope: Deactivated successfully.
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.404 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:49 compute-0 podman[333192]: 2025-11-25 16:43:49.404390754 +0000 UTC m=+0.049526274 container died 687daea182501f598729080157745bbb6a8893f57172d27904e3db78d9125e62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.409 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.423 254096 INFO nova.virt.libvirt.driver [-] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Instance destroyed successfully.
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.423 254096 DEBUG nova.objects.instance [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'resources' on Instance uuid 01f96314-1fbe-4eee-a4ed-db7f448a5320 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.435 254096 DEBUG nova.virt.libvirt.vif [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:42:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1951123052',display_name='tempest-ServerActionsTestJSON-server-1951123052',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1951123052',id=72,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOul8HeAt+UIxCfJfn+60t4YpZV7iSL4LJDORWr6iCR1ov+YI0semWU/ei1rpSzmkTyhB+zsYhvO/JrM/v0aowbnChSU76J6tWISFPQqrNakhVmsB30NDWBI6kYIFzj8+Q==',key_name='tempest-keypair-1452948426',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:43:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b4d15f5aabd3491da5314b126a20225a',ramdisk_id='',reservation_id='r-gjhb48uk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1811453217',owner_user_name='tempest-ServerActionsTestJSON-1811453217-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:43:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aea2d8cf3bb54cdbbc72e41805fb1f90',uuid=01f96314-1fbe-4eee-a4ed-db7f448a5320,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.436 254096 DEBUG nova.network.os_vif_util [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converting VIF {"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.436 254096 DEBUG nova.network.os_vif_util [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.437 254096 DEBUG os_vif [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:43:49 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-687daea182501f598729080157745bbb6a8893f57172d27904e3db78d9125e62-userdata-shm.mount: Deactivated successfully.
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.439 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.440 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4fe8c3a9-70, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.441 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-17c59fe121ea6f90e0e0f73a558d007bdcbf7807f621ad6454cfd7572f9bd582-merged.mount: Deactivated successfully.
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.443 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.449 254096 INFO os_vif [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70')
Nov 25 16:43:49 compute-0 podman[333192]: 2025-11-25 16:43:49.451464821 +0000 UTC m=+0.096600341 container cleanup 687daea182501f598729080157745bbb6a8893f57172d27904e3db78d9125e62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.458 254096 DEBUG nova.virt.libvirt.driver [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Start _get_guest_xml network_info=[{"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:43:49 compute-0 systemd[1]: libpod-conmon-687daea182501f598729080157745bbb6a8893f57172d27904e3db78d9125e62.scope: Deactivated successfully.
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.462 254096 WARNING nova.virt.libvirt.driver [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.467 254096 DEBUG nova.virt.libvirt.host [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.467 254096 DEBUG nova.virt.libvirt.host [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.470 254096 DEBUG nova.virt.libvirt.host [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.471 254096 DEBUG nova.virt.libvirt.host [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.471 254096 DEBUG nova.virt.libvirt.driver [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.471 254096 DEBUG nova.virt.hardware [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.471 254096 DEBUG nova.virt.hardware [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.472 254096 DEBUG nova.virt.hardware [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.472 254096 DEBUG nova.virt.hardware [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.472 254096 DEBUG nova.virt.hardware [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.472 254096 DEBUG nova.virt.hardware [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.473 254096 DEBUG nova.virt.hardware [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.473 254096 DEBUG nova.virt.hardware [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.473 254096 DEBUG nova.virt.hardware [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.473 254096 DEBUG nova.virt.hardware [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.473 254096 DEBUG nova.virt.hardware [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.473 254096 DEBUG nova.objects.instance [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'vcpu_model' on Instance uuid 01f96314-1fbe-4eee-a4ed-db7f448a5320 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.484 254096 DEBUG oslo_concurrency.processutils [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:43:49 compute-0 podman[333231]: 2025-11-25 16:43:49.520288949 +0000 UTC m=+0.042535186 container remove 687daea182501f598729080157745bbb6a8893f57172d27904e3db78d9125e62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.523 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089014.5038822, ef028cf3-f8af-4112-9424-8a12fdda7690 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.525 254096 INFO nova.compute.manager [-] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] VM Stopped (Lifecycle Event)
Nov 25 16:43:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:49.528 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5eab71d1-f244-417a-8338-d65924e7fee2]: (4, ('Tue Nov 25 04:43:49 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525 (687daea182501f598729080157745bbb6a8893f57172d27904e3db78d9125e62)\n687daea182501f598729080157745bbb6a8893f57172d27904e3db78d9125e62\nTue Nov 25 04:43:49 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525 (687daea182501f598729080157745bbb6a8893f57172d27904e3db78d9125e62)\n687daea182501f598729080157745bbb6a8893f57172d27904e3db78d9125e62\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:49.530 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[35ae4606-831a-4b8c-a011-09bde389de18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:49.532 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50ea1716-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.534 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:49 compute-0 kernel: tap50ea1716-90: left promiscuous mode
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.546 254096 DEBUG nova.compute.manager [None req-0a8a06a5-9d0f-4783-bff6-1fe6e9110e30 - - - - - -] [instance: ef028cf3-f8af-4112-9424-8a12fdda7690] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.549 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:49.553 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[88f85e59-5cd0-4269-9fed-fbdcc3403e40]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:49.569 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f42e547c-1641-4025-8046-08d76d8badb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:49.572 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fc53ddb5-08e8-420d-ae31-e5bd4202e040]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:49.590 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[18bb5d93-f5a6-43f8-a013-d08476d1e729]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 544341, 'reachable_time': 25878, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333246, 'error': None, 'target': 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:49 compute-0 systemd[1]: run-netns-ovnmeta\x2d50ea1716\x2d9d0b\x2d409f\x2db648\x2d8d2ad5a30525.mount: Deactivated successfully.
Nov 25 16:43:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:49.597 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:43:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:49.597 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[b44e8f0e-4678-42f5-bc03-220710b653a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1753: 321 pgs: 321 active+clean; 167 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 376 KiB/s rd, 5.7 MiB/s wr, 155 op/s
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.878 254096 DEBUG nova.compute.manager [req-59763a69-ebda-4bec-b1f9-3cc662a171f9 req-d0891cac-fcd5-4e43-b181-02cec81f23be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Received event network-vif-plugged-b5793671-5020-4692-92f1-65a87bcdf38e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.878 254096 DEBUG oslo_concurrency.lockutils [req-59763a69-ebda-4bec-b1f9-3cc662a171f9 req-d0891cac-fcd5-4e43-b181-02cec81f23be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "44f3c94a-060c-4650-bfe7-a214c6a10207-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.879 254096 DEBUG oslo_concurrency.lockutils [req-59763a69-ebda-4bec-b1f9-3cc662a171f9 req-d0891cac-fcd5-4e43-b181-02cec81f23be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "44f3c94a-060c-4650-bfe7-a214c6a10207-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.879 254096 DEBUG oslo_concurrency.lockutils [req-59763a69-ebda-4bec-b1f9-3cc662a171f9 req-d0891cac-fcd5-4e43-b181-02cec81f23be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "44f3c94a-060c-4650-bfe7-a214c6a10207-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.879 254096 DEBUG nova.compute.manager [req-59763a69-ebda-4bec-b1f9-3cc662a171f9 req-d0891cac-fcd5-4e43-b181-02cec81f23be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] No waiting events found dispatching network-vif-plugged-b5793671-5020-4692-92f1-65a87bcdf38e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.880 254096 WARNING nova.compute.manager [req-59763a69-ebda-4bec-b1f9-3cc662a171f9 req-d0891cac-fcd5-4e43-b181-02cec81f23be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Received unexpected event network-vif-plugged-b5793671-5020-4692-92f1-65a87bcdf38e for instance with vm_state active and task_state None.
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.880 254096 DEBUG nova.compute.manager [req-59763a69-ebda-4bec-b1f9-3cc662a171f9 req-d0891cac-fcd5-4e43-b181-02cec81f23be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received event network-vif-unplugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.880 254096 DEBUG oslo_concurrency.lockutils [req-59763a69-ebda-4bec-b1f9-3cc662a171f9 req-d0891cac-fcd5-4e43-b181-02cec81f23be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.880 254096 DEBUG oslo_concurrency.lockutils [req-59763a69-ebda-4bec-b1f9-3cc662a171f9 req-d0891cac-fcd5-4e43-b181-02cec81f23be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.881 254096 DEBUG oslo_concurrency.lockutils [req-59763a69-ebda-4bec-b1f9-3cc662a171f9 req-d0891cac-fcd5-4e43-b181-02cec81f23be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.881 254096 DEBUG nova.compute.manager [req-59763a69-ebda-4bec-b1f9-3cc662a171f9 req-d0891cac-fcd5-4e43-b181-02cec81f23be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] No waiting events found dispatching network-vif-unplugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.881 254096 WARNING nova.compute.manager [req-59763a69-ebda-4bec-b1f9-3cc662a171f9 req-d0891cac-fcd5-4e43-b181-02cec81f23be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received unexpected event network-vif-unplugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 for instance with vm_state active and task_state reboot_started_hard.
Nov 25 16:43:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:43:49 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1219119069' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.955 254096 DEBUG oslo_concurrency.processutils [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:43:49 compute-0 nova_compute[254092]: 2025-11-25 16:43:49.985 254096 DEBUG oslo_concurrency.processutils [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:43:50 compute-0 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 16:43:50 compute-0 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3001.2 total, 600.0 interval
                                           Cumulative writes: 25K writes, 98K keys, 25K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.03 MB/s
                                           Cumulative WAL: 25K writes, 8494 syncs, 3.01 writes per sync, written: 0.09 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8833 writes, 32K keys, 8833 commit groups, 1.0 writes per commit group, ingest: 32.32 MB, 0.05 MB/s
                                           Interval WAL: 8833 writes, 3452 syncs, 2.56 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 16:43:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:43:50 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/169005987' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:43:50 compute-0 nova_compute[254092]: 2025-11-25 16:43:50.455 254096 DEBUG oslo_concurrency.processutils [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:43:50 compute-0 nova_compute[254092]: 2025-11-25 16:43:50.457 254096 DEBUG nova.virt.libvirt.vif [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:42:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1951123052',display_name='tempest-ServerActionsTestJSON-server-1951123052',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1951123052',id=72,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOul8HeAt+UIxCfJfn+60t4YpZV7iSL4LJDORWr6iCR1ov+YI0semWU/ei1rpSzmkTyhB+zsYhvO/JrM/v0aowbnChSU76J6tWISFPQqrNakhVmsB30NDWBI6kYIFzj8+Q==',key_name='tempest-keypair-1452948426',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:43:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b4d15f5aabd3491da5314b126a20225a',ramdisk_id='',reservation_id='r-gjhb48uk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1811453217',owner_user_name='tempest-ServerActionsTestJSON-1811453217-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:43:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aea2d8cf3bb54cdbbc72e41805fb1f90',uuid=01f96314-1fbe-4eee-a4ed-db7f448a5320,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:43:50 compute-0 nova_compute[254092]: 2025-11-25 16:43:50.457 254096 DEBUG nova.network.os_vif_util [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converting VIF {"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:43:50 compute-0 nova_compute[254092]: 2025-11-25 16:43:50.458 254096 DEBUG nova.network.os_vif_util [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:43:50 compute-0 nova_compute[254092]: 2025-11-25 16:43:50.460 254096 DEBUG nova.objects.instance [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'pci_devices' on Instance uuid 01f96314-1fbe-4eee-a4ed-db7f448a5320 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:43:50 compute-0 nova_compute[254092]: 2025-11-25 16:43:50.480 254096 DEBUG nova.virt.libvirt.driver [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:43:50 compute-0 nova_compute[254092]:   <uuid>01f96314-1fbe-4eee-a4ed-db7f448a5320</uuid>
Nov 25 16:43:50 compute-0 nova_compute[254092]:   <name>instance-00000048</name>
Nov 25 16:43:50 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:43:50 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:43:50 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:43:50 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:       <nova:name>tempest-ServerActionsTestJSON-server-1951123052</nova:name>
Nov 25 16:43:50 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:43:49</nova:creationTime>
Nov 25 16:43:50 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:43:50 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:43:50 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:43:50 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:43:50 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:43:50 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:43:50 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:43:50 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:43:50 compute-0 nova_compute[254092]:         <nova:user uuid="aea2d8cf3bb54cdbbc72e41805fb1f90">tempest-ServerActionsTestJSON-1811453217-project-member</nova:user>
Nov 25 16:43:50 compute-0 nova_compute[254092]:         <nova:project uuid="b4d15f5aabd3491da5314b126a20225a">tempest-ServerActionsTestJSON-1811453217</nova:project>
Nov 25 16:43:50 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:43:50 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:43:50 compute-0 nova_compute[254092]:         <nova:port uuid="4fe8c3a9-70ba-4a51-8bc1-1505a49fe349">
Nov 25 16:43:50 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:43:50 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:43:50 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:43:50 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:43:50 compute-0 nova_compute[254092]:     <system>
Nov 25 16:43:50 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:43:50 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:43:50 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:43:50 compute-0 nova_compute[254092]:       <entry name="serial">01f96314-1fbe-4eee-a4ed-db7f448a5320</entry>
Nov 25 16:43:50 compute-0 nova_compute[254092]:       <entry name="uuid">01f96314-1fbe-4eee-a4ed-db7f448a5320</entry>
Nov 25 16:43:50 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     </system>
Nov 25 16:43:50 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:43:50 compute-0 nova_compute[254092]:   <os>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:   </os>
Nov 25 16:43:50 compute-0 nova_compute[254092]:   <features>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:   </features>
Nov 25 16:43:50 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:43:50 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:43:50 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:43:50 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:43:50 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:43:50 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/01f96314-1fbe-4eee-a4ed-db7f448a5320_disk">
Nov 25 16:43:50 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:       </source>
Nov 25 16:43:50 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:43:50 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:43:50 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:43:50 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/01f96314-1fbe-4eee-a4ed-db7f448a5320_disk.config">
Nov 25 16:43:50 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:       </source>
Nov 25 16:43:50 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:43:50 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:43:50 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:43:50 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:ff:a0:1c"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:       <target dev="tap4fe8c3a9-70"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:43:50 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/01f96314-1fbe-4eee-a4ed-db7f448a5320/console.log" append="off"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     <video>
Nov 25 16:43:50 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     </video>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     <input type="keyboard" bus="usb"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:43:50 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:43:50 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:43:50 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:43:50 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:43:50 compute-0 nova_compute[254092]: </domain>
Nov 25 16:43:50 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:43:50 compute-0 nova_compute[254092]: 2025-11-25 16:43:50.481 254096 DEBUG nova.virt.libvirt.driver [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] skipping disk for instance-00000048 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:43:50 compute-0 nova_compute[254092]: 2025-11-25 16:43:50.481 254096 DEBUG nova.virt.libvirt.driver [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] skipping disk for instance-00000048 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:43:50 compute-0 nova_compute[254092]: 2025-11-25 16:43:50.481 254096 DEBUG nova.virt.libvirt.vif [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:42:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1951123052',display_name='tempest-ServerActionsTestJSON-server-1951123052',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1951123052',id=72,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOul8HeAt+UIxCfJfn+60t4YpZV7iSL4LJDORWr6iCR1ov+YI0semWU/ei1rpSzmkTyhB+zsYhvO/JrM/v0aowbnChSU76J6tWISFPQqrNakhVmsB30NDWBI6kYIFzj8+Q==',key_name='tempest-keypair-1452948426',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:43:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='b4d15f5aabd3491da5314b126a20225a',ramdisk_id='',reservation_id='r-gjhb48uk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1811453217',owner_user_name='tempest-ServerActionsTestJSON-1811453217-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:43:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aea2d8cf3bb54cdbbc72e41805fb1f90',uuid=01f96314-1fbe-4eee-a4ed-db7f448a5320,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:43:50 compute-0 nova_compute[254092]: 2025-11-25 16:43:50.482 254096 DEBUG nova.network.os_vif_util [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converting VIF {"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:43:50 compute-0 nova_compute[254092]: 2025-11-25 16:43:50.482 254096 DEBUG nova.network.os_vif_util [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:43:50 compute-0 nova_compute[254092]: 2025-11-25 16:43:50.482 254096 DEBUG os_vif [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:43:50 compute-0 nova_compute[254092]: 2025-11-25 16:43:50.483 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:50 compute-0 nova_compute[254092]: 2025-11-25 16:43:50.483 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:50 compute-0 nova_compute[254092]: 2025-11-25 16:43:50.484 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:43:50 compute-0 nova_compute[254092]: 2025-11-25 16:43:50.486 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:50 compute-0 nova_compute[254092]: 2025-11-25 16:43:50.486 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4fe8c3a9-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:50 compute-0 nova_compute[254092]: 2025-11-25 16:43:50.486 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4fe8c3a9-70, col_values=(('external_ids', {'iface-id': '4fe8c3a9-70ba-4a51-8bc1-1505a49fe349', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ff:a0:1c', 'vm-uuid': '01f96314-1fbe-4eee-a4ed-db7f448a5320'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:50 compute-0 nova_compute[254092]: 2025-11-25 16:43:50.487 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:50 compute-0 NetworkManager[48891]: <info>  [1764089030.4887] manager: (tap4fe8c3a9-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/312)
Nov 25 16:43:50 compute-0 nova_compute[254092]: 2025-11-25 16:43:50.490 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:43:50 compute-0 nova_compute[254092]: 2025-11-25 16:43:50.494 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:50 compute-0 nova_compute[254092]: 2025-11-25 16:43:50.495 254096 INFO os_vif [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70')
Nov 25 16:43:50 compute-0 kernel: tap4fe8c3a9-70: entered promiscuous mode
Nov 25 16:43:50 compute-0 ovn_controller[153477]: 2025-11-25T16:43:50Z|00722|binding|INFO|Claiming lport 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 for this chassis.
Nov 25 16:43:50 compute-0 ovn_controller[153477]: 2025-11-25T16:43:50Z|00723|binding|INFO|4fe8c3a9-70ba-4a51-8bc1-1505a49fe349: Claiming fa:16:3e:ff:a0:1c 10.100.0.11
Nov 25 16:43:50 compute-0 nova_compute[254092]: 2025-11-25 16:43:50.575 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:50 compute-0 NetworkManager[48891]: <info>  [1764089030.5757] manager: (tap4fe8c3a9-70): new Tun device (/org/freedesktop/NetworkManager/Devices/313)
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.585 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:a0:1c 10.100.0.11'], port_security=['fa:16:3e:ff:a0:1c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '01f96314-1fbe-4eee-a4ed-db7f448a5320', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4d15f5aabd3491da5314b126a20225a', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'f659b7e6-4ca9-46df-a38f-d29c92b67f9f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.187'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa1f8c26-9163-4397-beb3-8395c780a12b, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.586 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 in datapath 50ea1716-9d0b-409f-b648-8d2ad5a30525 bound to our chassis
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.587 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 50ea1716-9d0b-409f-b648-8d2ad5a30525
Nov 25 16:43:50 compute-0 ovn_controller[153477]: 2025-11-25T16:43:50Z|00724|binding|INFO|Setting lport 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 ovn-installed in OVS
Nov 25 16:43:50 compute-0 ovn_controller[153477]: 2025-11-25T16:43:50Z|00725|binding|INFO|Setting lport 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 up in Southbound
Nov 25 16:43:50 compute-0 systemd-udevd[333321]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:43:50 compute-0 nova_compute[254092]: 2025-11-25 16:43:50.599 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.598 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e4904131-8bb6-4230-8a26-90fcd947b878]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.600 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap50ea1716-91 in ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.602 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap50ea1716-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.603 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3cb68b11-444b-4a77-beff-64d95c4530f7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:50 compute-0 nova_compute[254092]: 2025-11-25 16:43:50.604 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.604 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1d46b36c-7011-4184-aed9-d700519ee3b0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:50 compute-0 NetworkManager[48891]: <info>  [1764089030.6110] device (tap4fe8c3a9-70): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:43:50 compute-0 NetworkManager[48891]: <info>  [1764089030.6120] device (tap4fe8c3a9-70): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.616 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[5538d68e-611d-461b-96e4-1b9911b2e972]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:50 compute-0 systemd-machined[216343]: New machine qemu-90-instance-00000048.
Nov 25 16:43:50 compute-0 systemd[1]: Started Virtual Machine qemu-90-instance-00000048.
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.631 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e44a5997-0fa1-46b3-90d9-2cb21795daa8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.665 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[cf145470-f41b-40f5-99d9-2c0fab65148b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:50 compute-0 NetworkManager[48891]: <info>  [1764089030.6741] manager: (tap50ea1716-90): new Veth device (/org/freedesktop/NetworkManager/Devices/314)
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.673 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[96560310-264f-448b-87dc-7a79941251b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.707 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[5dd25ac3-3465-429b-8cf7-4242aea69783]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.710 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[01b46ca3-dfd4-4260-8d08-ed3e8b9afa88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:50 compute-0 ceph-mon[74985]: pgmap v1753: 321 pgs: 321 active+clean; 167 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 376 KiB/s rd, 5.7 MiB/s wr, 155 op/s
Nov 25 16:43:50 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1219119069' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:43:50 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/169005987' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:43:50 compute-0 NetworkManager[48891]: <info>  [1764089030.7389] device (tap50ea1716-90): carrier: link connected
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.742 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8f23f5cf-0f22-4d67-8eaf-20c8663b22c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.759 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1c2a0b17-08f5-4252-8bc0-8129fd820fdc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap50ea1716-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:99:3a:54'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 213], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548832, 'reachable_time': 28935, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333355, 'error': None, 'target': 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.773 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[af56328a-b15f-46a2-98e0-1687ad570228]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe99:3a54'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 548832, 'tstamp': 548832}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333356, 'error': None, 'target': 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.791 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3b64e2b8-1c70-4e5e-a266-492d2f46314e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap50ea1716-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:99:3a:54'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 213], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548832, 'reachable_time': 28935, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 333357, 'error': None, 'target': 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.825 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[efe255be-cb18-44ca-9a44-d724742e1a21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.889 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b5c6be5e-487c-4e2d-94e2-8dfe13c560bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.891 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50ea1716-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.891 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:43:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.892 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap50ea1716-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:50 compute-0 nova_compute[254092]: 2025-11-25 16:43:50.897 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:50 compute-0 NetworkManager[48891]: <info>  [1764089030.8983] manager: (tap50ea1716-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/315)
Nov 25 16:43:50 compute-0 kernel: tap50ea1716-90: entered promiscuous mode
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.901 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap50ea1716-90, col_values=(('external_ids', {'iface-id': '1ec3401d-acb9-4d50-bfc2-f0a1127f4745'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:50 compute-0 ovn_controller[153477]: 2025-11-25T16:43:50Z|00726|binding|INFO|Releasing lport 1ec3401d-acb9-4d50-bfc2-f0a1127f4745 from this chassis (sb_readonly=0)
Nov 25 16:43:50 compute-0 nova_compute[254092]: 2025-11-25 16:43:50.908 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:50 compute-0 nova_compute[254092]: 2025-11-25 16:43:50.919 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.920 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/50ea1716-9d0b-409f-b648-8d2ad5a30525.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/50ea1716-9d0b-409f-b648-8d2ad5a30525.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.921 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1b403d4f-5dda-4ad0-93e5-ce548bbc2fc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.923 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-50ea1716-9d0b-409f-b648-8d2ad5a30525
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/50ea1716-9d0b-409f-b648-8d2ad5a30525.pid.haproxy
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 50ea1716-9d0b-409f-b648-8d2ad5a30525
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:43:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:50.924 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'env', 'PROCESS_TAG=haproxy-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/50ea1716-9d0b-409f-b648-8d2ad5a30525.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:43:51 compute-0 sudo[333400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:43:51 compute-0 sudo[333400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:43:51 compute-0 sudo[333400]: pam_unix(sudo:session): session closed for user root
Nov 25 16:43:51 compute-0 nova_compute[254092]: 2025-11-25 16:43:51.091 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for 01f96314-1fbe-4eee-a4ed-db7f448a5320 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 16:43:51 compute-0 nova_compute[254092]: 2025-11-25 16:43:51.092 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089031.0913043, 01f96314-1fbe-4eee-a4ed-db7f448a5320 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:43:51 compute-0 nova_compute[254092]: 2025-11-25 16:43:51.092 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] VM Resumed (Lifecycle Event)
Nov 25 16:43:51 compute-0 nova_compute[254092]: 2025-11-25 16:43:51.097 254096 DEBUG nova.compute.manager [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:43:51 compute-0 sudo[333433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:43:51 compute-0 sudo[333433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:43:51 compute-0 nova_compute[254092]: 2025-11-25 16:43:51.100 254096 INFO nova.virt.libvirt.driver [-] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Instance rebooted successfully.
Nov 25 16:43:51 compute-0 nova_compute[254092]: 2025-11-25 16:43:51.101 254096 DEBUG nova.compute.manager [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:43:51 compute-0 sudo[333433]: pam_unix(sudo:session): session closed for user root
Nov 25 16:43:51 compute-0 nova_compute[254092]: 2025-11-25 16:43:51.112 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:43:51 compute-0 nova_compute[254092]: 2025-11-25 16:43:51.117 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:43:51 compute-0 nova_compute[254092]: 2025-11-25 16:43:51.139 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Nov 25 16:43:51 compute-0 nova_compute[254092]: 2025-11-25 16:43:51.139 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089031.0942383, 01f96314-1fbe-4eee-a4ed-db7f448a5320 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:43:51 compute-0 nova_compute[254092]: 2025-11-25 16:43:51.139 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] VM Started (Lifecycle Event)
Nov 25 16:43:51 compute-0 nova_compute[254092]: 2025-11-25 16:43:51.157 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:43:51 compute-0 nova_compute[254092]: 2025-11-25 16:43:51.160 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:43:51 compute-0 sudo[333459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:43:51 compute-0 sudo[333459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:43:51 compute-0 sudo[333459]: pam_unix(sudo:session): session closed for user root
Nov 25 16:43:51 compute-0 nova_compute[254092]: 2025-11-25 16:43:51.196 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Nov 25 16:43:51 compute-0 nova_compute[254092]: 2025-11-25 16:43:51.198 254096 DEBUG oslo_concurrency.lockutils [None req-53e526e2-819b-451d-972a-855053003fc8 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 4.173s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:51 compute-0 sudo[333484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 16:43:51 compute-0 sudo[333484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:43:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:43:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:43:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:43:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:43:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001105651636875692 of space, bias 1.0, pg target 0.33169549106270757 quantized to 32 (current 32)
Nov 25 16:43:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:43:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:43:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:43:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:43:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:43:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 16:43:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:43:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 16:43:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:43:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:43:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:43:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 16:43:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:43:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 16:43:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:43:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:43:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:43:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 16:43:51 compute-0 podman[333529]: 2025-11-25 16:43:51.378130041 +0000 UTC m=+0.060193024 container create 9778d0dafb54d301e404c9fc3e64358d5289a7de6a44fb328a55da914816026a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 25 16:43:51 compute-0 systemd[1]: Started libpod-conmon-9778d0dafb54d301e404c9fc3e64358d5289a7de6a44fb328a55da914816026a.scope.
Nov 25 16:43:51 compute-0 podman[333529]: 2025-11-25 16:43:51.343608804 +0000 UTC m=+0.025671817 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:43:51 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:43:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0644dc8bc60d336343ae20e3273c06bdd19306c75a44733d23fab24fe3d7c49b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:43:51 compute-0 podman[333529]: 2025-11-25 16:43:51.479994864 +0000 UTC m=+0.162057847 container init 9778d0dafb54d301e404c9fc3e64358d5289a7de6a44fb328a55da914816026a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 16:43:51 compute-0 podman[333529]: 2025-11-25 16:43:51.485444702 +0000 UTC m=+0.167507685 container start 9778d0dafb54d301e404c9fc3e64358d5289a7de6a44fb328a55da914816026a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 25 16:43:51 compute-0 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[333553]: [NOTICE]   (333559) : New worker (333561) forked
Nov 25 16:43:51 compute-0 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[333553]: [NOTICE]   (333559) : Loading success.
Nov 25 16:43:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1754: 321 pgs: 321 active+clean; 167 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.8 MiB/s wr, 232 op/s
Nov 25 16:43:51 compute-0 sudo[333484]: pam_unix(sudo:session): session closed for user root
Nov 25 16:43:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:43:51 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:43:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 16:43:51 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:43:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 16:43:51 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:43:51 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev dd228caf-a806-47f0-ae1a-6816370d9963 does not exist
Nov 25 16:43:51 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 82fd6c2a-ef87-4c00-a018-7834258fc4c1 does not exist
Nov 25 16:43:51 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 8ee99efa-6dfd-4573-b90d-79d1b13db401 does not exist
Nov 25 16:43:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 16:43:51 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:43:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 16:43:51 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:43:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:43:51 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:43:51 compute-0 nova_compute[254092]: 2025-11-25 16:43:51.946 254096 DEBUG nova.compute.manager [req-83e9ce22-079f-4891-85dc-0c61e2baa8e9 req-334cfb63-6f27-4ac7-8651-527da7e6fa22 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:43:51 compute-0 nova_compute[254092]: 2025-11-25 16:43:51.947 254096 DEBUG oslo_concurrency.lockutils [req-83e9ce22-079f-4891-85dc-0c61e2baa8e9 req-334cfb63-6f27-4ac7-8651-527da7e6fa22 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:51 compute-0 nova_compute[254092]: 2025-11-25 16:43:51.948 254096 DEBUG oslo_concurrency.lockutils [req-83e9ce22-079f-4891-85dc-0c61e2baa8e9 req-334cfb63-6f27-4ac7-8651-527da7e6fa22 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:51 compute-0 nova_compute[254092]: 2025-11-25 16:43:51.948 254096 DEBUG oslo_concurrency.lockutils [req-83e9ce22-079f-4891-85dc-0c61e2baa8e9 req-334cfb63-6f27-4ac7-8651-527da7e6fa22 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:51 compute-0 nova_compute[254092]: 2025-11-25 16:43:51.948 254096 DEBUG nova.compute.manager [req-83e9ce22-079f-4891-85dc-0c61e2baa8e9 req-334cfb63-6f27-4ac7-8651-527da7e6fa22 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] No waiting events found dispatching network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:43:51 compute-0 nova_compute[254092]: 2025-11-25 16:43:51.948 254096 WARNING nova.compute.manager [req-83e9ce22-079f-4891-85dc-0c61e2baa8e9 req-334cfb63-6f27-4ac7-8651-527da7e6fa22 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received unexpected event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 for instance with vm_state active and task_state None.
Nov 25 16:43:51 compute-0 nova_compute[254092]: 2025-11-25 16:43:51.948 254096 DEBUG nova.compute.manager [req-83e9ce22-079f-4891-85dc-0c61e2baa8e9 req-334cfb63-6f27-4ac7-8651-527da7e6fa22 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:43:51 compute-0 nova_compute[254092]: 2025-11-25 16:43:51.948 254096 DEBUG oslo_concurrency.lockutils [req-83e9ce22-079f-4891-85dc-0c61e2baa8e9 req-334cfb63-6f27-4ac7-8651-527da7e6fa22 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:51 compute-0 nova_compute[254092]: 2025-11-25 16:43:51.949 254096 DEBUG oslo_concurrency.lockutils [req-83e9ce22-079f-4891-85dc-0c61e2baa8e9 req-334cfb63-6f27-4ac7-8651-527da7e6fa22 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:51 compute-0 nova_compute[254092]: 2025-11-25 16:43:51.949 254096 DEBUG oslo_concurrency.lockutils [req-83e9ce22-079f-4891-85dc-0c61e2baa8e9 req-334cfb63-6f27-4ac7-8651-527da7e6fa22 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:51 compute-0 nova_compute[254092]: 2025-11-25 16:43:51.949 254096 DEBUG nova.compute.manager [req-83e9ce22-079f-4891-85dc-0c61e2baa8e9 req-334cfb63-6f27-4ac7-8651-527da7e6fa22 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] No waiting events found dispatching network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:43:51 compute-0 nova_compute[254092]: 2025-11-25 16:43:51.949 254096 WARNING nova.compute.manager [req-83e9ce22-079f-4891-85dc-0c61e2baa8e9 req-334cfb63-6f27-4ac7-8651-527da7e6fa22 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received unexpected event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 for instance with vm_state active and task_state None.
Nov 25 16:43:51 compute-0 nova_compute[254092]: 2025-11-25 16:43:51.949 254096 DEBUG nova.compute.manager [req-83e9ce22-079f-4891-85dc-0c61e2baa8e9 req-334cfb63-6f27-4ac7-8651-527da7e6fa22 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:43:51 compute-0 nova_compute[254092]: 2025-11-25 16:43:51.949 254096 DEBUG oslo_concurrency.lockutils [req-83e9ce22-079f-4891-85dc-0c61e2baa8e9 req-334cfb63-6f27-4ac7-8651-527da7e6fa22 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:51 compute-0 nova_compute[254092]: 2025-11-25 16:43:51.950 254096 DEBUG oslo_concurrency.lockutils [req-83e9ce22-079f-4891-85dc-0c61e2baa8e9 req-334cfb63-6f27-4ac7-8651-527da7e6fa22 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:51 compute-0 nova_compute[254092]: 2025-11-25 16:43:51.950 254096 DEBUG oslo_concurrency.lockutils [req-83e9ce22-079f-4891-85dc-0c61e2baa8e9 req-334cfb63-6f27-4ac7-8651-527da7e6fa22 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:51 compute-0 nova_compute[254092]: 2025-11-25 16:43:51.950 254096 DEBUG nova.compute.manager [req-83e9ce22-079f-4891-85dc-0c61e2baa8e9 req-334cfb63-6f27-4ac7-8651-527da7e6fa22 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] No waiting events found dispatching network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:43:51 compute-0 nova_compute[254092]: 2025-11-25 16:43:51.950 254096 WARNING nova.compute.manager [req-83e9ce22-079f-4891-85dc-0c61e2baa8e9 req-334cfb63-6f27-4ac7-8651-527da7e6fa22 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received unexpected event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 for instance with vm_state active and task_state None.
Nov 25 16:43:51 compute-0 nova_compute[254092]: 2025-11-25 16:43:51.951 254096 DEBUG oslo_concurrency.lockutils [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Acquiring lock "44f3c94a-060c-4650-bfe7-a214c6a10207" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:51 compute-0 nova_compute[254092]: 2025-11-25 16:43:51.952 254096 DEBUG oslo_concurrency.lockutils [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Lock "44f3c94a-060c-4650-bfe7-a214c6a10207" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:51 compute-0 nova_compute[254092]: 2025-11-25 16:43:51.952 254096 DEBUG oslo_concurrency.lockutils [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Acquiring lock "44f3c94a-060c-4650-bfe7-a214c6a10207-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:51 compute-0 nova_compute[254092]: 2025-11-25 16:43:51.952 254096 DEBUG oslo_concurrency.lockutils [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Lock "44f3c94a-060c-4650-bfe7-a214c6a10207-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:51 compute-0 nova_compute[254092]: 2025-11-25 16:43:51.952 254096 DEBUG oslo_concurrency.lockutils [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Lock "44f3c94a-060c-4650-bfe7-a214c6a10207-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:51 compute-0 nova_compute[254092]: 2025-11-25 16:43:51.953 254096 INFO nova.compute.manager [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Terminating instance
Nov 25 16:43:51 compute-0 nova_compute[254092]: 2025-11-25 16:43:51.954 254096 DEBUG nova.compute.manager [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:43:51 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:43:51 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:43:51 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:43:51 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:43:51 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:43:51 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:43:51 compute-0 sudo[333587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:43:51 compute-0 sudo[333587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:43:51 compute-0 sudo[333587]: pam_unix(sudo:session): session closed for user root
Nov 25 16:43:52 compute-0 kernel: tapb5793671-50 (unregistering): left promiscuous mode
Nov 25 16:43:52 compute-0 NetworkManager[48891]: <info>  [1764089032.0425] device (tapb5793671-50): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:43:52 compute-0 sudo[333612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:43:52 compute-0 sudo[333612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:43:52 compute-0 ovn_controller[153477]: 2025-11-25T16:43:52Z|00727|binding|INFO|Releasing lport b5793671-5020-4692-92f1-65a87bcdf38e from this chassis (sb_readonly=0)
Nov 25 16:43:52 compute-0 ovn_controller[153477]: 2025-11-25T16:43:52Z|00728|binding|INFO|Setting lport b5793671-5020-4692-92f1-65a87bcdf38e down in Southbound
Nov 25 16:43:52 compute-0 ovn_controller[153477]: 2025-11-25T16:43:52Z|00729|binding|INFO|Removing iface tapb5793671-50 ovn-installed in OVS
Nov 25 16:43:52 compute-0 nova_compute[254092]: 2025-11-25 16:43:52.051 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:52 compute-0 sudo[333612]: pam_unix(sudo:session): session closed for user root
Nov 25 16:43:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:52.069 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:5b:98 10.100.0.7'], port_security=['fa:16:3e:e1:5b:98 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '44f3c94a-060c-4650-bfe7-a214c6a10207', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41b60c60-81b1-400f-ad99-152388e55616', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e0a920505c8240228ed836913ffcdbe4', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fab53028-087c-4e81-a981-98f5af5e037e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=268fa885-b3f3-471a-9d08-3e6f7bd64b52, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=b5793671-5020-4692-92f1-65a87bcdf38e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:43:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:52.071 163338 INFO neutron.agent.ovn.metadata.agent [-] Port b5793671-5020-4692-92f1-65a87bcdf38e in datapath 41b60c60-81b1-400f-ad99-152388e55616 unbound from our chassis
Nov 25 16:43:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:52.072 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 41b60c60-81b1-400f-ad99-152388e55616, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:43:52 compute-0 nova_compute[254092]: 2025-11-25 16:43:52.073 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:52.074 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b2bdc61d-a8bb-452e-8cfe-55f68ae44633]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:52.075 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-41b60c60-81b1-400f-ad99-152388e55616 namespace which is not needed anymore
Nov 25 16:43:52 compute-0 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d0000004c.scope: Deactivated successfully.
Nov 25 16:43:52 compute-0 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d0000004c.scope: Consumed 4.691s CPU time.
Nov 25 16:43:52 compute-0 systemd-machined[216343]: Machine qemu-89-instance-0000004c terminated.
Nov 25 16:43:52 compute-0 sudo[333641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:43:52 compute-0 sudo[333641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:43:52 compute-0 sudo[333641]: pam_unix(sudo:session): session closed for user root
Nov 25 16:43:52 compute-0 sudo[333675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 16:43:52 compute-0 sudo[333675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:43:52 compute-0 NetworkManager[48891]: <info>  [1764089032.1732] manager: (tapb5793671-50): new Tun device (/org/freedesktop/NetworkManager/Devices/316)
Nov 25 16:43:52 compute-0 nova_compute[254092]: 2025-11-25 16:43:52.189 254096 INFO nova.virt.libvirt.driver [-] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Instance destroyed successfully.
Nov 25 16:43:52 compute-0 nova_compute[254092]: 2025-11-25 16:43:52.190 254096 DEBUG nova.objects.instance [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Lazy-loading 'resources' on Instance uuid 44f3c94a-060c-4650-bfe7-a214c6a10207 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:43:52 compute-0 nova_compute[254092]: 2025-11-25 16:43:52.208 254096 DEBUG nova.virt.libvirt.vif [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:43:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-816918544',display_name='tempest-ServerMetadataNegativeTestJSON-server-816918544',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-816918544',id=76,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:43:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e0a920505c8240228ed836913ffcdbe4',ramdisk_id='',reservation_id='r-0p559jg4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_
model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataNegativeTestJSON-1696100389',owner_user_name='tempest-ServerMetadataNegativeTestJSON-1696100389-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:43:48Z,user_data=None,user_id='5e0e4ce0eeda4a79ab738e1f8dc0f725',uuid=44f3c94a-060c-4650-bfe7-a214c6a10207,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b5793671-5020-4692-92f1-65a87bcdf38e", "address": "fa:16:3e:e1:5b:98", "network": {"id": "41b60c60-81b1-400f-ad99-152388e55616", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1280594054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0a920505c8240228ed836913ffcdbe4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5793671-50", "ovs_interfaceid": "b5793671-5020-4692-92f1-65a87bcdf38e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:43:52 compute-0 nova_compute[254092]: 2025-11-25 16:43:52.208 254096 DEBUG nova.network.os_vif_util [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Converting VIF {"id": "b5793671-5020-4692-92f1-65a87bcdf38e", "address": "fa:16:3e:e1:5b:98", "network": {"id": "41b60c60-81b1-400f-ad99-152388e55616", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1280594054-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0a920505c8240228ed836913ffcdbe4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5793671-50", "ovs_interfaceid": "b5793671-5020-4692-92f1-65a87bcdf38e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:43:52 compute-0 nova_compute[254092]: 2025-11-25 16:43:52.209 254096 DEBUG nova.network.os_vif_util [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:5b:98,bridge_name='br-int',has_traffic_filtering=True,id=b5793671-5020-4692-92f1-65a87bcdf38e,network=Network(41b60c60-81b1-400f-ad99-152388e55616),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5793671-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:43:52 compute-0 nova_compute[254092]: 2025-11-25 16:43:52.209 254096 DEBUG os_vif [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:5b:98,bridge_name='br-int',has_traffic_filtering=True,id=b5793671-5020-4692-92f1-65a87bcdf38e,network=Network(41b60c60-81b1-400f-ad99-152388e55616),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5793671-50') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:43:52 compute-0 nova_compute[254092]: 2025-11-25 16:43:52.210 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:52 compute-0 nova_compute[254092]: 2025-11-25 16:43:52.211 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5793671-50, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:52 compute-0 nova_compute[254092]: 2025-11-25 16:43:52.214 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:52 compute-0 nova_compute[254092]: 2025-11-25 16:43:52.218 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:43:52 compute-0 nova_compute[254092]: 2025-11-25 16:43:52.219 254096 INFO os_vif [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:5b:98,bridge_name='br-int',has_traffic_filtering=True,id=b5793671-5020-4692-92f1-65a87bcdf38e,network=Network(41b60c60-81b1-400f-ad99-152388e55616),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5793671-50')
Nov 25 16:43:52 compute-0 neutron-haproxy-ovnmeta-41b60c60-81b1-400f-ad99-152388e55616[333156]: [NOTICE]   (333160) : haproxy version is 2.8.14-c23fe91
Nov 25 16:43:52 compute-0 neutron-haproxy-ovnmeta-41b60c60-81b1-400f-ad99-152388e55616[333156]: [NOTICE]   (333160) : path to executable is /usr/sbin/haproxy
Nov 25 16:43:52 compute-0 neutron-haproxy-ovnmeta-41b60c60-81b1-400f-ad99-152388e55616[333156]: [WARNING]  (333160) : Exiting Master process...
Nov 25 16:43:52 compute-0 neutron-haproxy-ovnmeta-41b60c60-81b1-400f-ad99-152388e55616[333156]: [WARNING]  (333160) : Exiting Master process...
Nov 25 16:43:52 compute-0 neutron-haproxy-ovnmeta-41b60c60-81b1-400f-ad99-152388e55616[333156]: [ALERT]    (333160) : Current worker (333162) exited with code 143 (Terminated)
Nov 25 16:43:52 compute-0 neutron-haproxy-ovnmeta-41b60c60-81b1-400f-ad99-152388e55616[333156]: [WARNING]  (333160) : All workers exited. Exiting... (0)
Nov 25 16:43:52 compute-0 systemd[1]: libpod-b12ab01ec6e8b71e2b8f6ae6b06e1ffe441581ec33b5c13822424a64db82f038.scope: Deactivated successfully.
Nov 25 16:43:52 compute-0 podman[333705]: 2025-11-25 16:43:52.391835033 +0000 UTC m=+0.217819751 container died b12ab01ec6e8b71e2b8f6ae6b06e1ffe441581ec33b5c13822424a64db82f038 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-41b60c60-81b1-400f-ad99-152388e55616, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:43:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-c02ee99f386b93fdd51ea0e33033064775b4969dd78ae64bfa17f9e14b0747ea-merged.mount: Deactivated successfully.
Nov 25 16:43:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b12ab01ec6e8b71e2b8f6ae6b06e1ffe441581ec33b5c13822424a64db82f038-userdata-shm.mount: Deactivated successfully.
Nov 25 16:43:52 compute-0 nova_compute[254092]: 2025-11-25 16:43:52.866 254096 INFO nova.compute.manager [None req-c6203dad-193b-429b-9337-df54fabcd9db aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Get console output
Nov 25 16:43:52 compute-0 nova_compute[254092]: 2025-11-25 16:43:52.881 254096 INFO oslo.privsep.daemon [None req-c6203dad-193b-429b-9337-df54fabcd9db aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpxhc0cb_k/privsep.sock']
Nov 25 16:43:52 compute-0 podman[333705]: 2025-11-25 16:43:52.92586546 +0000 UTC m=+0.751850178 container cleanup b12ab01ec6e8b71e2b8f6ae6b06e1ffe441581ec33b5c13822424a64db82f038 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-41b60c60-81b1-400f-ad99-152388e55616, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 16:43:52 compute-0 systemd[1]: libpod-conmon-b12ab01ec6e8b71e2b8f6ae6b06e1ffe441581ec33b5c13822424a64db82f038.scope: Deactivated successfully.
Nov 25 16:43:53 compute-0 ceph-mon[74985]: pgmap v1754: 321 pgs: 321 active+clean; 167 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.8 MiB/s wr, 232 op/s
Nov 25 16:43:53 compute-0 podman[333791]: 2025-11-25 16:43:53.032358439 +0000 UTC m=+0.080234407 container remove b12ab01ec6e8b71e2b8f6ae6b06e1ffe441581ec33b5c13822424a64db82f038 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-41b60c60-81b1-400f-ad99-152388e55616, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:43:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:53.040 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ba64593d-8232-4449-975b-5e03e6576e73]: (4, ('Tue Nov 25 04:43:52 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-41b60c60-81b1-400f-ad99-152388e55616 (b12ab01ec6e8b71e2b8f6ae6b06e1ffe441581ec33b5c13822424a64db82f038)\nb12ab01ec6e8b71e2b8f6ae6b06e1ffe441581ec33b5c13822424a64db82f038\nTue Nov 25 04:43:52 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-41b60c60-81b1-400f-ad99-152388e55616 (b12ab01ec6e8b71e2b8f6ae6b06e1ffe441581ec33b5c13822424a64db82f038)\nb12ab01ec6e8b71e2b8f6ae6b06e1ffe441581ec33b5c13822424a64db82f038\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:53.042 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[57b0f6f9-eb2d-4380-a321-6497a56d992a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:53.043 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41b60c60-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:43:53 compute-0 kernel: tap41b60c60-80: left promiscuous mode
Nov 25 16:43:53 compute-0 nova_compute[254092]: 2025-11-25 16:43:53.044 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:53 compute-0 nova_compute[254092]: 2025-11-25 16:43:53.064 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:53.069 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e4d5012f-ce6c-4316-9d33-4f93686ad95c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:53.085 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f647b802-ac1d-410a-81ab-24496712afbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:53.090 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ee49206c-6de2-4e30-859b-df6de6f3cdcc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:53.115 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bd9e75fa-ba98-42f8-a7b4-2e7c93aed366]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548476, 'reachable_time': 21228, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333823, 'error': None, 'target': 'ovnmeta-41b60c60-81b1-400f-ad99-152388e55616', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:53 compute-0 systemd[1]: run-netns-ovnmeta\x2d41b60c60\x2d81b1\x2d400f\x2dad99\x2d152388e55616.mount: Deactivated successfully.
Nov 25 16:43:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:53.122 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-41b60c60-81b1-400f-ad99-152388e55616 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:43:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:43:53.122 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[c2a21ee7-d4c3-46f9-8656-ac5d6d5a84b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:43:53 compute-0 podman[333814]: 2025-11-25 16:43:53.147530944 +0000 UTC m=+0.059640099 container create bb7c4c75e632a45836a69cb85a0081cee348cb416b9d1253b377802ee4fc613f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:43:53 compute-0 podman[333814]: 2025-11-25 16:43:53.111893427 +0000 UTC m=+0.024002502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:43:53 compute-0 systemd[1]: Started libpod-conmon-bb7c4c75e632a45836a69cb85a0081cee348cb416b9d1253b377802ee4fc613f.scope.
Nov 25 16:43:53 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:43:53 compute-0 podman[333814]: 2025-11-25 16:43:53.311202144 +0000 UTC m=+0.223311209 container init bb7c4c75e632a45836a69cb85a0081cee348cb416b9d1253b377802ee4fc613f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:43:53 compute-0 nova_compute[254092]: 2025-11-25 16:43:53.316 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:53 compute-0 podman[333814]: 2025-11-25 16:43:53.320550338 +0000 UTC m=+0.232659383 container start bb7c4c75e632a45836a69cb85a0081cee348cb416b9d1253b377802ee4fc613f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bohr, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 16:43:53 compute-0 friendly_bohr[333831]: 167 167
Nov 25 16:43:53 compute-0 systemd[1]: libpod-bb7c4c75e632a45836a69cb85a0081cee348cb416b9d1253b377802ee4fc613f.scope: Deactivated successfully.
Nov 25 16:43:53 compute-0 conmon[333831]: conmon bb7c4c75e632a45836a6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bb7c4c75e632a45836a69cb85a0081cee348cb416b9d1253b377802ee4fc613f.scope/container/memory.events
Nov 25 16:43:53 compute-0 podman[333814]: 2025-11-25 16:43:53.369809204 +0000 UTC m=+0.281918249 container attach bb7c4c75e632a45836a69cb85a0081cee348cb416b9d1253b377802ee4fc613f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:43:53 compute-0 podman[333814]: 2025-11-25 16:43:53.370274527 +0000 UTC m=+0.282383562 container died bb7c4c75e632a45836a69cb85a0081cee348cb416b9d1253b377802ee4fc613f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bohr, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:43:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-ecb915abef550dd25c802353cc02a98f9c0fe4d02ace417380da0cc868af7931-merged.mount: Deactivated successfully.
Nov 25 16:43:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1755: 321 pgs: 321 active+clean; 167 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 156 op/s
Nov 25 16:43:53 compute-0 podman[333814]: 2025-11-25 16:43:53.904707436 +0000 UTC m=+0.816816471 container remove bb7c4c75e632a45836a69cb85a0081cee348cb416b9d1253b377802ee4fc613f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bohr, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:43:54 compute-0 systemd[1]: libpod-conmon-bb7c4c75e632a45836a69cb85a0081cee348cb416b9d1253b377802ee4fc613f.scope: Deactivated successfully.
Nov 25 16:43:54 compute-0 nova_compute[254092]: 2025-11-25 16:43:54.058 254096 INFO oslo.privsep.daemon [None req-c6203dad-193b-429b-9337-df54fabcd9db aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Spawned new privsep daemon via rootwrap
Nov 25 16:43:54 compute-0 nova_compute[254092]: 2025-11-25 16:43:53.907 333852 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 25 16:43:54 compute-0 nova_compute[254092]: 2025-11-25 16:43:53.911 333852 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 25 16:43:54 compute-0 nova_compute[254092]: 2025-11-25 16:43:53.913 333852 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 25 16:43:54 compute-0 nova_compute[254092]: 2025-11-25 16:43:53.913 333852 INFO oslo.privsep.daemon [-] privsep daemon running as pid 333852
Nov 25 16:43:54 compute-0 nova_compute[254092]: 2025-11-25 16:43:54.171 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 16:43:54 compute-0 podman[333860]: 2025-11-25 16:43:54.084717369 +0000 UTC m=+0.024144876 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:43:54 compute-0 podman[333860]: 2025-11-25 16:43:54.337342123 +0000 UTC m=+0.276769600 container create c8dfaed0c362b053d45b652711eb6760af583192d6ca4fa6d3340aece58aef02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_herschel, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:43:54 compute-0 ceph-mon[74985]: pgmap v1755: 321 pgs: 321 active+clean; 167 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 156 op/s
Nov 25 16:43:54 compute-0 systemd[1]: Started libpod-conmon-c8dfaed0c362b053d45b652711eb6760af583192d6ca4fa6d3340aece58aef02.scope.
Nov 25 16:43:54 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:43:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc4f91347716de3d55d3d1fe1293a851856451b7869fe607a189f3b59508cd66/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:43:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc4f91347716de3d55d3d1fe1293a851856451b7869fe607a189f3b59508cd66/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:43:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc4f91347716de3d55d3d1fe1293a851856451b7869fe607a189f3b59508cd66/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:43:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc4f91347716de3d55d3d1fe1293a851856451b7869fe607a189f3b59508cd66/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:43:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc4f91347716de3d55d3d1fe1293a851856451b7869fe607a189f3b59508cd66/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 16:43:54 compute-0 podman[333860]: 2025-11-25 16:43:54.76297264 +0000 UTC m=+0.702400137 container init c8dfaed0c362b053d45b652711eb6760af583192d6ca4fa6d3340aece58aef02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_herschel, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:43:54 compute-0 podman[333860]: 2025-11-25 16:43:54.769694352 +0000 UTC m=+0.709121839 container start c8dfaed0c362b053d45b652711eb6760af583192d6ca4fa6d3340aece58aef02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:43:54 compute-0 podman[333860]: 2025-11-25 16:43:54.851740699 +0000 UTC m=+0.791168186 container attach c8dfaed0c362b053d45b652711eb6760af583192d6ca4fa6d3340aece58aef02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_herschel, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 16:43:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 16:43:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/427274679' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:43:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 16:43:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/427274679' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:43:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/427274679' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:43:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/427274679' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:43:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1756: 321 pgs: 321 active+clean; 128 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.9 MiB/s wr, 225 op/s
Nov 25 16:43:55 compute-0 nova_compute[254092]: 2025-11-25 16:43:55.789 254096 DEBUG nova.compute.manager [req-7d2c7329-b064-4adb-94f4-7cf9eeb8d23b req-e0a53a58-eaeb-4b41-aa58-0661dc137544 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Received event network-vif-unplugged-b5793671-5020-4692-92f1-65a87bcdf38e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:43:55 compute-0 nova_compute[254092]: 2025-11-25 16:43:55.791 254096 DEBUG oslo_concurrency.lockutils [req-7d2c7329-b064-4adb-94f4-7cf9eeb8d23b req-e0a53a58-eaeb-4b41-aa58-0661dc137544 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "44f3c94a-060c-4650-bfe7-a214c6a10207-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:55 compute-0 nova_compute[254092]: 2025-11-25 16:43:55.791 254096 DEBUG oslo_concurrency.lockutils [req-7d2c7329-b064-4adb-94f4-7cf9eeb8d23b req-e0a53a58-eaeb-4b41-aa58-0661dc137544 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "44f3c94a-060c-4650-bfe7-a214c6a10207-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:55 compute-0 nova_compute[254092]: 2025-11-25 16:43:55.791 254096 DEBUG oslo_concurrency.lockutils [req-7d2c7329-b064-4adb-94f4-7cf9eeb8d23b req-e0a53a58-eaeb-4b41-aa58-0661dc137544 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "44f3c94a-060c-4650-bfe7-a214c6a10207-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:55 compute-0 nova_compute[254092]: 2025-11-25 16:43:55.791 254096 DEBUG nova.compute.manager [req-7d2c7329-b064-4adb-94f4-7cf9eeb8d23b req-e0a53a58-eaeb-4b41-aa58-0661dc137544 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] No waiting events found dispatching network-vif-unplugged-b5793671-5020-4692-92f1-65a87bcdf38e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:43:55 compute-0 nova_compute[254092]: 2025-11-25 16:43:55.791 254096 DEBUG nova.compute.manager [req-7d2c7329-b064-4adb-94f4-7cf9eeb8d23b req-e0a53a58-eaeb-4b41-aa58-0661dc137544 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Received event network-vif-unplugged-b5793671-5020-4692-92f1-65a87bcdf38e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:43:55 compute-0 nova_compute[254092]: 2025-11-25 16:43:55.792 254096 DEBUG nova.compute.manager [req-7d2c7329-b064-4adb-94f4-7cf9eeb8d23b req-e0a53a58-eaeb-4b41-aa58-0661dc137544 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Received event network-vif-plugged-b5793671-5020-4692-92f1-65a87bcdf38e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:43:55 compute-0 nova_compute[254092]: 2025-11-25 16:43:55.792 254096 DEBUG oslo_concurrency.lockutils [req-7d2c7329-b064-4adb-94f4-7cf9eeb8d23b req-e0a53a58-eaeb-4b41-aa58-0661dc137544 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "44f3c94a-060c-4650-bfe7-a214c6a10207-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:55 compute-0 nova_compute[254092]: 2025-11-25 16:43:55.792 254096 DEBUG oslo_concurrency.lockutils [req-7d2c7329-b064-4adb-94f4-7cf9eeb8d23b req-e0a53a58-eaeb-4b41-aa58-0661dc137544 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "44f3c94a-060c-4650-bfe7-a214c6a10207-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:55 compute-0 nova_compute[254092]: 2025-11-25 16:43:55.792 254096 DEBUG oslo_concurrency.lockutils [req-7d2c7329-b064-4adb-94f4-7cf9eeb8d23b req-e0a53a58-eaeb-4b41-aa58-0661dc137544 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "44f3c94a-060c-4650-bfe7-a214c6a10207-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:55 compute-0 nova_compute[254092]: 2025-11-25 16:43:55.792 254096 DEBUG nova.compute.manager [req-7d2c7329-b064-4adb-94f4-7cf9eeb8d23b req-e0a53a58-eaeb-4b41-aa58-0661dc137544 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] No waiting events found dispatching network-vif-plugged-b5793671-5020-4692-92f1-65a87bcdf38e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:43:55 compute-0 nova_compute[254092]: 2025-11-25 16:43:55.792 254096 WARNING nova.compute.manager [req-7d2c7329-b064-4adb-94f4-7cf9eeb8d23b req-e0a53a58-eaeb-4b41-aa58-0661dc137544 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Received unexpected event network-vif-plugged-b5793671-5020-4692-92f1-65a87bcdf38e for instance with vm_state active and task_state deleting.
Nov 25 16:43:55 compute-0 sweet_herschel[333877]: --> passed data devices: 0 physical, 3 LVM
Nov 25 16:43:55 compute-0 sweet_herschel[333877]: --> relative data size: 1.0
Nov 25 16:43:55 compute-0 sweet_herschel[333877]: --> All data devices are unavailable
Nov 25 16:43:55 compute-0 systemd[1]: libpod-c8dfaed0c362b053d45b652711eb6760af583192d6ca4fa6d3340aece58aef02.scope: Deactivated successfully.
Nov 25 16:43:55 compute-0 podman[333860]: 2025-11-25 16:43:55.830763929 +0000 UTC m=+1.770191466 container died c8dfaed0c362b053d45b652711eb6760af583192d6ca4fa6d3340aece58aef02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:43:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:43:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc4f91347716de3d55d3d1fe1293a851856451b7869fe607a189f3b59508cd66-merged.mount: Deactivated successfully.
Nov 25 16:43:56 compute-0 podman[333860]: 2025-11-25 16:43:56.287468599 +0000 UTC m=+2.226896066 container remove c8dfaed0c362b053d45b652711eb6760af583192d6ca4fa6d3340aece58aef02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 16:43:56 compute-0 systemd[1]: libpod-conmon-c8dfaed0c362b053d45b652711eb6760af583192d6ca4fa6d3340aece58aef02.scope: Deactivated successfully.
Nov 25 16:43:56 compute-0 sudo[333675]: pam_unix(sudo:session): session closed for user root
Nov 25 16:43:56 compute-0 sudo[333920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:43:56 compute-0 sudo[333920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:43:56 compute-0 sudo[333920]: pam_unix(sudo:session): session closed for user root
Nov 25 16:43:56 compute-0 sudo[333945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:43:56 compute-0 sudo[333945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:43:56 compute-0 sudo[333945]: pam_unix(sudo:session): session closed for user root
Nov 25 16:43:56 compute-0 nova_compute[254092]: 2025-11-25 16:43:56.465 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089021.4648309, f5a259d0-4460-4335-aa4a-f874f93a7e93 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:43:56 compute-0 nova_compute[254092]: 2025-11-25 16:43:56.466 254096 INFO nova.compute.manager [-] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] VM Stopped (Lifecycle Event)
Nov 25 16:43:56 compute-0 nova_compute[254092]: 2025-11-25 16:43:56.486 254096 DEBUG nova.compute.manager [None req-2dc35482-4134-4a50-94e3-cc87e7901877 - - - - - -] [instance: f5a259d0-4460-4335-aa4a-f874f93a7e93] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:43:56 compute-0 sudo[333970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:43:56 compute-0 sudo[333970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:43:56 compute-0 sudo[333970]: pam_unix(sudo:session): session closed for user root
Nov 25 16:43:56 compute-0 sudo[333995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 16:43:56 compute-0 sudo[333995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:43:56 compute-0 ceph-mon[74985]: pgmap v1756: 321 pgs: 321 active+clean; 128 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.9 MiB/s wr, 225 op/s
Nov 25 16:43:56 compute-0 nova_compute[254092]: 2025-11-25 16:43:56.805 254096 INFO nova.virt.libvirt.driver [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Deleting instance files /var/lib/nova/instances/44f3c94a-060c-4650-bfe7-a214c6a10207_del
Nov 25 16:43:56 compute-0 nova_compute[254092]: 2025-11-25 16:43:56.805 254096 INFO nova.virt.libvirt.driver [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Deletion of /var/lib/nova/instances/44f3c94a-060c-4650-bfe7-a214c6a10207_del complete
Nov 25 16:43:56 compute-0 nova_compute[254092]: 2025-11-25 16:43:56.894 254096 INFO nova.compute.manager [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Took 4.94 seconds to destroy the instance on the hypervisor.
Nov 25 16:43:56 compute-0 nova_compute[254092]: 2025-11-25 16:43:56.894 254096 DEBUG oslo.service.loopingcall [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:43:56 compute-0 nova_compute[254092]: 2025-11-25 16:43:56.895 254096 DEBUG nova.compute.manager [-] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:43:56 compute-0 nova_compute[254092]: 2025-11-25 16:43:56.895 254096 DEBUG nova.network.neutron [-] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:43:56 compute-0 podman[334059]: 2025-11-25 16:43:56.921416448 +0000 UTC m=+0.052056203 container create 87001c545f35b5f163ec7b8a9fe7b5643fb801cfe0979bf6defd84b8ae22882e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:43:56 compute-0 systemd[1]: Started libpod-conmon-87001c545f35b5f163ec7b8a9fe7b5643fb801cfe0979bf6defd84b8ae22882e.scope.
Nov 25 16:43:56 compute-0 podman[334059]: 2025-11-25 16:43:56.898411283 +0000 UTC m=+0.029051038 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:43:56 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:43:57 compute-0 podman[334059]: 2025-11-25 16:43:57.01920459 +0000 UTC m=+0.149844335 container init 87001c545f35b5f163ec7b8a9fe7b5643fb801cfe0979bf6defd84b8ae22882e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_feynman, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 16:43:57 compute-0 podman[334059]: 2025-11-25 16:43:57.025706167 +0000 UTC m=+0.156345922 container start 87001c545f35b5f163ec7b8a9fe7b5643fb801cfe0979bf6defd84b8ae22882e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_feynman, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 16:43:57 compute-0 systemd[1]: libpod-87001c545f35b5f163ec7b8a9fe7b5643fb801cfe0979bf6defd84b8ae22882e.scope: Deactivated successfully.
Nov 25 16:43:57 compute-0 amazing_feynman[334074]: 167 167
Nov 25 16:43:57 compute-0 conmon[334074]: conmon 87001c545f35b5f163ec <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-87001c545f35b5f163ec7b8a9fe7b5643fb801cfe0979bf6defd84b8ae22882e.scope/container/memory.events
Nov 25 16:43:57 compute-0 podman[334059]: 2025-11-25 16:43:57.062536856 +0000 UTC m=+0.193176631 container attach 87001c545f35b5f163ec7b8a9fe7b5643fb801cfe0979bf6defd84b8ae22882e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:43:57 compute-0 podman[334059]: 2025-11-25 16:43:57.063017819 +0000 UTC m=+0.193657574 container died 87001c545f35b5f163ec7b8a9fe7b5643fb801cfe0979bf6defd84b8ae22882e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_feynman, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:43:57 compute-0 nova_compute[254092]: 2025-11-25 16:43:57.091 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089022.0897741, 69b3cbbb-9713-4f49-9e67-1f33a3ae2642 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:43:57 compute-0 nova_compute[254092]: 2025-11-25 16:43:57.092 254096 INFO nova.compute.manager [-] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] VM Stopped (Lifecycle Event)
Nov 25 16:43:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8fca0df505a6272f4dfeafd5006482cda961329ae10c5308395bd080f232575-merged.mount: Deactivated successfully.
Nov 25 16:43:57 compute-0 podman[334059]: 2025-11-25 16:43:57.207555241 +0000 UTC m=+0.338194996 container remove 87001c545f35b5f163ec7b8a9fe7b5643fb801cfe0979bf6defd84b8ae22882e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:43:57 compute-0 nova_compute[254092]: 2025-11-25 16:43:57.214 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:57 compute-0 nova_compute[254092]: 2025-11-25 16:43:57.246 254096 DEBUG nova.compute.manager [None req-ef07e316-ff35-48af-947f-f9ed5a65919a - - - - - -] [instance: 69b3cbbb-9713-4f49-9e67-1f33a3ae2642] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:43:57 compute-0 systemd[1]: libpod-conmon-87001c545f35b5f163ec7b8a9fe7b5643fb801cfe0979bf6defd84b8ae22882e.scope: Deactivated successfully.
Nov 25 16:43:57 compute-0 ovn_controller[153477]: 2025-11-25T16:43:57Z|00730|binding|INFO|Releasing lport 1ec3401d-acb9-4d50-bfc2-f0a1127f4745 from this chassis (sb_readonly=0)
Nov 25 16:43:57 compute-0 nova_compute[254092]: 2025-11-25 16:43:57.356 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:57 compute-0 podman[334100]: 2025-11-25 16:43:57.39109137 +0000 UTC m=+0.054518650 container create e3b98c1baa78ff93a1dfc5790271187a11b3ff41f29b9716b332df47a179e01c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 16:43:57 compute-0 podman[334100]: 2025-11-25 16:43:57.362071733 +0000 UTC m=+0.025499033 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:43:57 compute-0 systemd[1]: Started libpod-conmon-e3b98c1baa78ff93a1dfc5790271187a11b3ff41f29b9716b332df47a179e01c.scope.
Nov 25 16:43:57 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:43:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f9808bc7866ec3d03550969c083403b27b3cb6ad3ebd0aa6d7131c64316e63c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:43:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f9808bc7866ec3d03550969c083403b27b3cb6ad3ebd0aa6d7131c64316e63c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:43:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f9808bc7866ec3d03550969c083403b27b3cb6ad3ebd0aa6d7131c64316e63c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:43:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f9808bc7866ec3d03550969c083403b27b3cb6ad3ebd0aa6d7131c64316e63c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:43:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1757: 321 pgs: 321 active+clean; 121 MiB data, 623 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 433 KiB/s wr, 185 op/s
Nov 25 16:43:57 compute-0 podman[334100]: 2025-11-25 16:43:57.796125198 +0000 UTC m=+0.459552508 container init e3b98c1baa78ff93a1dfc5790271187a11b3ff41f29b9716b332df47a179e01c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:43:57 compute-0 podman[334100]: 2025-11-25 16:43:57.804785553 +0000 UTC m=+0.468212833 container start e3b98c1baa78ff93a1dfc5790271187a11b3ff41f29b9716b332df47a179e01c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_chaplygin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 16:43:57 compute-0 podman[334100]: 2025-11-25 16:43:57.972116403 +0000 UTC m=+0.635543703 container attach e3b98c1baa78ff93a1dfc5790271187a11b3ff41f29b9716b332df47a179e01c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:43:58 compute-0 nova_compute[254092]: 2025-11-25 16:43:58.318 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:43:58 compute-0 nova_compute[254092]: 2025-11-25 16:43:58.425 254096 DEBUG nova.network.neutron [-] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:43:58 compute-0 nova_compute[254092]: 2025-11-25 16:43:58.470 254096 INFO nova.compute.manager [-] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Took 1.58 seconds to deallocate network for instance.
Nov 25 16:43:58 compute-0 nova_compute[254092]: 2025-11-25 16:43:58.554 254096 DEBUG oslo_concurrency.lockutils [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:43:58 compute-0 nova_compute[254092]: 2025-11-25 16:43:58.554 254096 DEBUG oslo_concurrency.lockutils [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:43:58 compute-0 nova_compute[254092]: 2025-11-25 16:43:58.580 254096 DEBUG nova.compute.manager [req-4bc1b623-8588-494f-8bb7-7de3bce07d17 req-909a90a9-4f53-4ade-b8bc-68b264ff0e69 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Received event network-vif-deleted-b5793671-5020-4692-92f1-65a87bcdf38e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]: {
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:     "0": [
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:         {
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:             "devices": [
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:                 "/dev/loop3"
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:             ],
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:             "lv_name": "ceph_lv0",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:             "lv_size": "21470642176",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:             "name": "ceph_lv0",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:             "tags": {
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:                 "ceph.cluster_name": "ceph",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:                 "ceph.crush_device_class": "",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:                 "ceph.encrypted": "0",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:                 "ceph.osd_id": "0",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:                 "ceph.type": "block",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:                 "ceph.vdo": "0"
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:             },
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:             "type": "block",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:             "vg_name": "ceph_vg0"
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:         }
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:     ],
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:     "1": [
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:         {
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:             "devices": [
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:                 "/dev/loop4"
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:             ],
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:             "lv_name": "ceph_lv1",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:             "lv_size": "21470642176",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:             "name": "ceph_lv1",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:             "tags": {
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:                 "ceph.cluster_name": "ceph",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:                 "ceph.crush_device_class": "",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:                 "ceph.encrypted": "0",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:                 "ceph.osd_id": "1",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:                 "ceph.type": "block",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:                 "ceph.vdo": "0"
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:             },
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:             "type": "block",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:             "vg_name": "ceph_vg1"
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:         }
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:     ],
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:     "2": [
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:         {
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:             "devices": [
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:                 "/dev/loop5"
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:             ],
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:             "lv_name": "ceph_lv2",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:             "lv_size": "21470642176",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:             "name": "ceph_lv2",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:             "tags": {
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:                 "ceph.cluster_name": "ceph",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:                 "ceph.crush_device_class": "",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:                 "ceph.encrypted": "0",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:                 "ceph.osd_id": "2",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:                 "ceph.type": "block",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:                 "ceph.vdo": "0"
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:             },
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:             "type": "block",
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:             "vg_name": "ceph_vg2"
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:         }
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]:     ]
Nov 25 16:43:58 compute-0 recursing_chaplygin[334116]: }
Nov 25 16:43:58 compute-0 systemd[1]: libpod-e3b98c1baa78ff93a1dfc5790271187a11b3ff41f29b9716b332df47a179e01c.scope: Deactivated successfully.
Nov 25 16:43:58 compute-0 podman[334100]: 2025-11-25 16:43:58.609290149 +0000 UTC m=+1.272717429 container died e3b98c1baa78ff93a1dfc5790271187a11b3ff41f29b9716b332df47a179e01c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_chaplygin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:43:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f9808bc7866ec3d03550969c083403b27b3cb6ad3ebd0aa6d7131c64316e63c-merged.mount: Deactivated successfully.
Nov 25 16:43:59 compute-0 ceph-mon[74985]: pgmap v1757: 321 pgs: 321 active+clean; 121 MiB data, 623 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 433 KiB/s wr, 185 op/s
Nov 25 16:43:59 compute-0 podman[334100]: 2025-11-25 16:43:59.131383963 +0000 UTC m=+1.794811243 container remove e3b98c1baa78ff93a1dfc5790271187a11b3ff41f29b9716b332df47a179e01c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_chaplygin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 16:43:59 compute-0 systemd[1]: libpod-conmon-e3b98c1baa78ff93a1dfc5790271187a11b3ff41f29b9716b332df47a179e01c.scope: Deactivated successfully.
Nov 25 16:43:59 compute-0 nova_compute[254092]: 2025-11-25 16:43:59.151 254096 DEBUG oslo_concurrency.processutils [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:43:59 compute-0 sudo[333995]: pam_unix(sudo:session): session closed for user root
Nov 25 16:43:59 compute-0 sudo[334140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:43:59 compute-0 sudo[334140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:43:59 compute-0 sudo[334140]: pam_unix(sudo:session): session closed for user root
Nov 25 16:43:59 compute-0 sudo[334166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:43:59 compute-0 sudo[334166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:43:59 compute-0 sudo[334166]: pam_unix(sudo:session): session closed for user root
Nov 25 16:43:59 compute-0 sudo[334201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:43:59 compute-0 sudo[334201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:43:59 compute-0 sudo[334201]: pam_unix(sudo:session): session closed for user root
Nov 25 16:43:59 compute-0 sudo[334235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 16:43:59 compute-0 sudo[334235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:43:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:43:59 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3809511181' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:43:59 compute-0 nova_compute[254092]: 2025-11-25 16:43:59.644 254096 DEBUG oslo_concurrency.processutils [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:43:59 compute-0 nova_compute[254092]: 2025-11-25 16:43:59.650 254096 DEBUG nova.compute.provider_tree [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:43:59 compute-0 nova_compute[254092]: 2025-11-25 16:43:59.663 254096 DEBUG nova.scheduler.client.report [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:43:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1758: 321 pgs: 321 active+clean; 121 MiB data, 623 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 13 KiB/s wr, 164 op/s
Nov 25 16:43:59 compute-0 nova_compute[254092]: 2025-11-25 16:43:59.695 254096 DEBUG oslo_concurrency.lockutils [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.141s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:59 compute-0 nova_compute[254092]: 2025-11-25 16:43:59.750 254096 INFO nova.scheduler.client.report [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Deleted allocations for instance 44f3c94a-060c-4650-bfe7-a214c6a10207
Nov 25 16:43:59 compute-0 podman[334302]: 2025-11-25 16:43:59.77927005 +0000 UTC m=+0.072769645 container create dacd60d6c4428a472d6a2572c3641697923315708888540e6c32bb7eb2cf511b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 16:43:59 compute-0 podman[334302]: 2025-11-25 16:43:59.727086774 +0000 UTC m=+0.020586369 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:43:59 compute-0 nova_compute[254092]: 2025-11-25 16:43:59.839 254096 DEBUG oslo_concurrency.lockutils [None req-9337c3a3-5f83-455c-99bb-ad423a2a058d 5e0e4ce0eeda4a79ab738e1f8dc0f725 e0a920505c8240228ed836913ffcdbe4 - - default default] Lock "44f3c94a-060c-4650-bfe7-a214c6a10207" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.888s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:43:59 compute-0 systemd[1]: Started libpod-conmon-dacd60d6c4428a472d6a2572c3641697923315708888540e6c32bb7eb2cf511b.scope.
Nov 25 16:43:59 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:43:59 compute-0 podman[334302]: 2025-11-25 16:43:59.994571601 +0000 UTC m=+0.288071196 container init dacd60d6c4428a472d6a2572c3641697923315708888540e6c32bb7eb2cf511b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lamport, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:44:00 compute-0 podman[334302]: 2025-11-25 16:44:00.001803357 +0000 UTC m=+0.295302932 container start dacd60d6c4428a472d6a2572c3641697923315708888540e6c32bb7eb2cf511b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 16:44:00 compute-0 wonderful_lamport[334318]: 167 167
Nov 25 16:44:00 compute-0 systemd[1]: libpod-dacd60d6c4428a472d6a2572c3641697923315708888540e6c32bb7eb2cf511b.scope: Deactivated successfully.
Nov 25 16:44:00 compute-0 podman[334302]: 2025-11-25 16:44:00.088295854 +0000 UTC m=+0.381795449 container attach dacd60d6c4428a472d6a2572c3641697923315708888540e6c32bb7eb2cf511b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 16:44:00 compute-0 podman[334302]: 2025-11-25 16:44:00.088723795 +0000 UTC m=+0.382223370 container died dacd60d6c4428a472d6a2572c3641697923315708888540e6c32bb7eb2cf511b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lamport, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:44:00 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3809511181' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:44:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-b53dc4531679885d5a0b623aec9717b0a510fe1c545d5a5468fd5fe13eb3fbf1-merged.mount: Deactivated successfully.
Nov 25 16:44:00 compute-0 nova_compute[254092]: 2025-11-25 16:44:00.468 254096 DEBUG oslo_concurrency.lockutils [None req-0e5c5c54-96d3-4b48-905a-46e1b567656f aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:44:00 compute-0 nova_compute[254092]: 2025-11-25 16:44:00.469 254096 DEBUG oslo_concurrency.lockutils [None req-0e5c5c54-96d3-4b48-905a-46e1b567656f aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:44:00 compute-0 nova_compute[254092]: 2025-11-25 16:44:00.469 254096 DEBUG nova.compute.manager [None req-0e5c5c54-96d3-4b48-905a-46e1b567656f aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:44:00 compute-0 nova_compute[254092]: 2025-11-25 16:44:00.474 254096 DEBUG nova.compute.manager [None req-0e5c5c54-96d3-4b48-905a-46e1b567656f aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Nov 25 16:44:00 compute-0 nova_compute[254092]: 2025-11-25 16:44:00.475 254096 DEBUG nova.objects.instance [None req-0e5c5c54-96d3-4b48-905a-46e1b567656f aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'flavor' on Instance uuid 01f96314-1fbe-4eee-a4ed-db7f448a5320 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:44:00 compute-0 nova_compute[254092]: 2025-11-25 16:44:00.493 254096 DEBUG nova.virt.libvirt.driver [None req-0e5c5c54-96d3-4b48-905a-46e1b567656f aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 16:44:00 compute-0 podman[334302]: 2025-11-25 16:44:00.503678233 +0000 UTC m=+0.797177818 container remove dacd60d6c4428a472d6a2572c3641697923315708888540e6c32bb7eb2cf511b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lamport, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:44:00 compute-0 systemd[1]: libpod-conmon-dacd60d6c4428a472d6a2572c3641697923315708888540e6c32bb7eb2cf511b.scope: Deactivated successfully.
Nov 25 16:44:00 compute-0 podman[334343]: 2025-11-25 16:44:00.680629823 +0000 UTC m=+0.025297857 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:44:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:44:00 compute-0 podman[334343]: 2025-11-25 16:44:00.953431965 +0000 UTC m=+0.298099959 container create b4c79e7b4f38ce4cc3ab6c014372286254f2f484cad345248703f6463326a7d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_matsumoto, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 16:44:01 compute-0 systemd[1]: Started libpod-conmon-b4c79e7b4f38ce4cc3ab6c014372286254f2f484cad345248703f6463326a7d2.scope.
Nov 25 16:44:01 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:44:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/715b00a216a324b1ebe33e882b69c803de0216143cf46f3b938d328be35a48e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:44:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/715b00a216a324b1ebe33e882b69c803de0216143cf46f3b938d328be35a48e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:44:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/715b00a216a324b1ebe33e882b69c803de0216143cf46f3b938d328be35a48e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:44:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/715b00a216a324b1ebe33e882b69c803de0216143cf46f3b938d328be35a48e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:44:01 compute-0 podman[334343]: 2025-11-25 16:44:01.132671097 +0000 UTC m=+0.477339111 container init b4c79e7b4f38ce4cc3ab6c014372286254f2f484cad345248703f6463326a7d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:44:01 compute-0 podman[334343]: 2025-11-25 16:44:01.140020786 +0000 UTC m=+0.484688780 container start b4c79e7b4f38ce4cc3ab6c014372286254f2f484cad345248703f6463326a7d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 16:44:01 compute-0 podman[334343]: 2025-11-25 16:44:01.185033068 +0000 UTC m=+0.529701082 container attach b4c79e7b4f38ce4cc3ab6c014372286254f2f484cad345248703f6463326a7d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:44:01 compute-0 ceph-mon[74985]: pgmap v1758: 321 pgs: 321 active+clean; 121 MiB data, 623 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 13 KiB/s wr, 164 op/s
Nov 25 16:44:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1759: 321 pgs: 321 active+clean; 121 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 14 KiB/s wr, 171 op/s
Nov 25 16:44:02 compute-0 heuristic_matsumoto[334360]: {
Nov 25 16:44:02 compute-0 heuristic_matsumoto[334360]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 16:44:02 compute-0 heuristic_matsumoto[334360]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:44:02 compute-0 heuristic_matsumoto[334360]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 16:44:02 compute-0 heuristic_matsumoto[334360]:         "osd_id": 1,
Nov 25 16:44:02 compute-0 heuristic_matsumoto[334360]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:44:02 compute-0 heuristic_matsumoto[334360]:         "type": "bluestore"
Nov 25 16:44:02 compute-0 heuristic_matsumoto[334360]:     },
Nov 25 16:44:02 compute-0 heuristic_matsumoto[334360]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 16:44:02 compute-0 heuristic_matsumoto[334360]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:44:02 compute-0 heuristic_matsumoto[334360]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 16:44:02 compute-0 heuristic_matsumoto[334360]:         "osd_id": 2,
Nov 25 16:44:02 compute-0 heuristic_matsumoto[334360]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:44:02 compute-0 heuristic_matsumoto[334360]:         "type": "bluestore"
Nov 25 16:44:02 compute-0 heuristic_matsumoto[334360]:     },
Nov 25 16:44:02 compute-0 heuristic_matsumoto[334360]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 16:44:02 compute-0 heuristic_matsumoto[334360]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:44:02 compute-0 heuristic_matsumoto[334360]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 16:44:02 compute-0 heuristic_matsumoto[334360]:         "osd_id": 0,
Nov 25 16:44:02 compute-0 heuristic_matsumoto[334360]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:44:02 compute-0 heuristic_matsumoto[334360]:         "type": "bluestore"
Nov 25 16:44:02 compute-0 heuristic_matsumoto[334360]:     }
Nov 25 16:44:02 compute-0 heuristic_matsumoto[334360]: }
Nov 25 16:44:02 compute-0 podman[334343]: 2025-11-25 16:44:02.109097477 +0000 UTC m=+1.453765481 container died b4c79e7b4f38ce4cc3ab6c014372286254f2f484cad345248703f6463326a7d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 16:44:02 compute-0 systemd[1]: libpod-b4c79e7b4f38ce4cc3ab6c014372286254f2f484cad345248703f6463326a7d2.scope: Deactivated successfully.
Nov 25 16:44:02 compute-0 nova_compute[254092]: 2025-11-25 16:44:02.217 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:02 compute-0 rsyslogd[1006]: imjournal from <np0005535469:podman>: begin to drop messages due to rate-limiting
Nov 25 16:44:02 compute-0 ceph-mon[74985]: pgmap v1759: 321 pgs: 321 active+clean; 121 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 14 KiB/s wr, 171 op/s
Nov 25 16:44:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-715b00a216a324b1ebe33e882b69c803de0216143cf46f3b938d328be35a48e7-merged.mount: Deactivated successfully.
Nov 25 16:44:03 compute-0 podman[334343]: 2025-11-25 16:44:03.238018634 +0000 UTC m=+2.582686628 container remove b4c79e7b4f38ce4cc3ab6c014372286254f2f484cad345248703f6463326a7d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_matsumoto, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 16:44:03 compute-0 sudo[334235]: pam_unix(sudo:session): session closed for user root
Nov 25 16:44:03 compute-0 systemd[1]: libpod-conmon-b4c79e7b4f38ce4cc3ab6c014372286254f2f484cad345248703f6463326a7d2.scope: Deactivated successfully.
Nov 25 16:44:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:44:03 compute-0 nova_compute[254092]: 2025-11-25 16:44:03.319 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:03 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:44:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:44:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1760: 321 pgs: 321 active+clean; 121 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 95 op/s
Nov 25 16:44:03 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:44:03 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev a14b36f1-b522-4543-879a-66c7bf823dc9 does not exist
Nov 25 16:44:03 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev f533a1dd-1c78-4902-9abd-dedf4e982dcb does not exist
Nov 25 16:44:03 compute-0 sudo[334405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:44:03 compute-0 sudo[334405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:44:03 compute-0 sudo[334405]: pam_unix(sudo:session): session closed for user root
Nov 25 16:44:03 compute-0 sudo[334430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 16:44:03 compute-0 sudo[334430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:44:03 compute-0 sudo[334430]: pam_unix(sudo:session): session closed for user root
Nov 25 16:44:04 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:44:04 compute-0 ceph-mon[74985]: pgmap v1760: 321 pgs: 321 active+clean; 121 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 95 op/s
Nov 25 16:44:04 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:44:04 compute-0 ovn_controller[153477]: 2025-11-25T16:44:04Z|00080|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ff:a0:1c 10.100.0.11
Nov 25 16:44:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1761: 321 pgs: 321 active+clean; 121 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 12 KiB/s wr, 127 op/s
Nov 25 16:44:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:44:06 compute-0 ovn_controller[153477]: 2025-11-25T16:44:06Z|00731|binding|INFO|Releasing lport 1ec3401d-acb9-4d50-bfc2-f0a1127f4745 from this chassis (sb_readonly=0)
Nov 25 16:44:06 compute-0 nova_compute[254092]: 2025-11-25 16:44:06.550 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:06 compute-0 ceph-mon[74985]: pgmap v1761: 321 pgs: 321 active+clean; 121 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 12 KiB/s wr, 127 op/s
Nov 25 16:44:07 compute-0 nova_compute[254092]: 2025-11-25 16:44:07.188 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089032.1857388, 44f3c94a-060c-4650-bfe7-a214c6a10207 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:44:07 compute-0 nova_compute[254092]: 2025-11-25 16:44:07.188 254096 INFO nova.compute.manager [-] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] VM Stopped (Lifecycle Event)
Nov 25 16:44:07 compute-0 nova_compute[254092]: 2025-11-25 16:44:07.208 254096 DEBUG nova.compute.manager [None req-63ee2f4f-6105-45da-a0ca-0c6c7a008401 - - - - - -] [instance: 44f3c94a-060c-4650-bfe7-a214c6a10207] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:44:07 compute-0 nova_compute[254092]: 2025-11-25 16:44:07.220 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:07 compute-0 nova_compute[254092]: 2025-11-25 16:44:07.542 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:44:07 compute-0 nova_compute[254092]: 2025-11-25 16:44:07.543 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:44:07 compute-0 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 16:44:07 compute-0 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3001.5 total, 600.0 interval
                                           Cumulative writes: 21K writes, 84K keys, 21K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.03 MB/s
                                           Cumulative WAL: 21K writes, 6935 syncs, 3.05 writes per sync, written: 0.08 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6875 writes, 25K keys, 6875 commit groups, 1.0 writes per commit group, ingest: 25.70 MB, 0.04 MB/s
                                           Interval WAL: 6875 writes, 2726 syncs, 2.52 writes per sync, written: 0.03 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 16:44:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1762: 321 pgs: 321 active+clean; 121 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 12 KiB/s wr, 70 op/s
Nov 25 16:44:08 compute-0 nova_compute[254092]: 2025-11-25 16:44:08.321 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:09 compute-0 ceph-mon[74985]: pgmap v1762: 321 pgs: 321 active+clean; 121 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 12 KiB/s wr, 70 op/s
Nov 25 16:44:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1763: 321 pgs: 321 active+clean; 121 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 536 KiB/s rd, 12 KiB/s wr, 51 op/s
Nov 25 16:44:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:44:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:44:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:44:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:44:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:44:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:44:10 compute-0 nova_compute[254092]: 2025-11-25 16:44:10.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:44:10 compute-0 nova_compute[254092]: 2025-11-25 16:44:10.534 254096 DEBUG nova.virt.libvirt.driver [None req-0e5c5c54-96d3-4b48-905a-46e1b567656f aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 16:44:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:44:11 compute-0 ceph-mon[74985]: pgmap v1763: 321 pgs: 321 active+clean; 121 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 536 KiB/s rd, 12 KiB/s wr, 51 op/s
Nov 25 16:44:11 compute-0 nova_compute[254092]: 2025-11-25 16:44:11.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:44:11 compute-0 nova_compute[254092]: 2025-11-25 16:44:11.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:44:11 compute-0 nova_compute[254092]: 2025-11-25 16:44:11.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:44:11 compute-0 nova_compute[254092]: 2025-11-25 16:44:11.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 16:44:11 compute-0 nova_compute[254092]: 2025-11-25 16:44:11.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:44:11 compute-0 nova_compute[254092]: 2025-11-25 16:44:11.538 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:44:11 compute-0 nova_compute[254092]: 2025-11-25 16:44:11.538 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:44:11 compute-0 nova_compute[254092]: 2025-11-25 16:44:11.539 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:44:11 compute-0 nova_compute[254092]: 2025-11-25 16:44:11.539 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 16:44:11 compute-0 nova_compute[254092]: 2025-11-25 16:44:11.539 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:44:11 compute-0 podman[334456]: 2025-11-25 16:44:11.648344493 +0000 UTC m=+0.063541865 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 16:44:11 compute-0 podman[334457]: 2025-11-25 16:44:11.665478428 +0000 UTC m=+0.078611483 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 25 16:44:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1764: 321 pgs: 321 active+clean; 123 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 538 KiB/s rd, 23 KiB/s wr, 53 op/s
Nov 25 16:44:11 compute-0 podman[334458]: 2025-11-25 16:44:11.694602348 +0000 UTC m=+0.103731725 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 16:44:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:44:11 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/854530360' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:44:11 compute-0 nova_compute[254092]: 2025-11-25 16:44:11.975 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:44:12 compute-0 nova_compute[254092]: 2025-11-25 16:44:12.051 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000048 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:44:12 compute-0 nova_compute[254092]: 2025-11-25 16:44:12.051 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000048 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:44:12 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/854530360' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:44:12 compute-0 nova_compute[254092]: 2025-11-25 16:44:12.201 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:44:12 compute-0 nova_compute[254092]: 2025-11-25 16:44:12.202 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3840MB free_disk=59.94270324707031GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 16:44:12 compute-0 nova_compute[254092]: 2025-11-25 16:44:12.202 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:44:12 compute-0 nova_compute[254092]: 2025-11-25 16:44:12.203 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:44:12 compute-0 nova_compute[254092]: 2025-11-25 16:44:12.222 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:12 compute-0 nova_compute[254092]: 2025-11-25 16:44:12.280 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 01f96314-1fbe-4eee-a4ed-db7f448a5320 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:44:12 compute-0 nova_compute[254092]: 2025-11-25 16:44:12.280 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 16:44:12 compute-0 nova_compute[254092]: 2025-11-25 16:44:12.280 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 16:44:12 compute-0 nova_compute[254092]: 2025-11-25 16:44:12.311 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:44:12 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:44:12 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3571805003' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:44:12 compute-0 nova_compute[254092]: 2025-11-25 16:44:12.764 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:44:12 compute-0 nova_compute[254092]: 2025-11-25 16:44:12.769 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:44:12 compute-0 nova_compute[254092]: 2025-11-25 16:44:12.793 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:44:12 compute-0 nova_compute[254092]: 2025-11-25 16:44:12.831 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 16:44:12 compute-0 nova_compute[254092]: 2025-11-25 16:44:12.832 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:44:13 compute-0 ceph-mon[74985]: pgmap v1764: 321 pgs: 321 active+clean; 123 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 538 KiB/s rd, 23 KiB/s wr, 53 op/s
Nov 25 16:44:13 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3571805003' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:44:13 compute-0 kernel: tap4fe8c3a9-70 (unregistering): left promiscuous mode
Nov 25 16:44:13 compute-0 NetworkManager[48891]: <info>  [1764089053.2108] device (tap4fe8c3a9-70): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:44:13 compute-0 ovn_controller[153477]: 2025-11-25T16:44:13Z|00732|binding|INFO|Releasing lport 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 from this chassis (sb_readonly=0)
Nov 25 16:44:13 compute-0 ovn_controller[153477]: 2025-11-25T16:44:13Z|00733|binding|INFO|Setting lport 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 down in Southbound
Nov 25 16:44:13 compute-0 ovn_controller[153477]: 2025-11-25T16:44:13Z|00734|binding|INFO|Removing iface tap4fe8c3a9-70 ovn-installed in OVS
Nov 25 16:44:13 compute-0 nova_compute[254092]: 2025-11-25 16:44:13.219 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:13 compute-0 nova_compute[254092]: 2025-11-25 16:44:13.241 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:13.258 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:a0:1c 10.100.0.11'], port_security=['fa:16:3e:ff:a0:1c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '01f96314-1fbe-4eee-a4ed-db7f448a5320', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4d15f5aabd3491da5314b126a20225a', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'f659b7e6-4ca9-46df-a38f-d29c92b67f9f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.187', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa1f8c26-9163-4397-beb3-8395c780a12b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:44:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:13.259 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 in datapath 50ea1716-9d0b-409f-b648-8d2ad5a30525 unbound from our chassis
Nov 25 16:44:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:13.260 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 50ea1716-9d0b-409f-b648-8d2ad5a30525, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:44:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:13.261 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f99f5cf6-cee7-46f5-8e8b-8109c186d99b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:13.262 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525 namespace which is not needed anymore
Nov 25 16:44:13 compute-0 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d00000048.scope: Deactivated successfully.
Nov 25 16:44:13 compute-0 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d00000048.scope: Consumed 13.271s CPU time.
Nov 25 16:44:13 compute-0 systemd-machined[216343]: Machine qemu-90-instance-00000048 terminated.
Nov 25 16:44:13 compute-0 nova_compute[254092]: 2025-11-25 16:44:13.323 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:13 compute-0 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[333553]: [NOTICE]   (333559) : haproxy version is 2.8.14-c23fe91
Nov 25 16:44:13 compute-0 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[333553]: [NOTICE]   (333559) : path to executable is /usr/sbin/haproxy
Nov 25 16:44:13 compute-0 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[333553]: [WARNING]  (333559) : Exiting Master process...
Nov 25 16:44:13 compute-0 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[333553]: [ALERT]    (333559) : Current worker (333561) exited with code 143 (Terminated)
Nov 25 16:44:13 compute-0 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[333553]: [WARNING]  (333559) : All workers exited. Exiting... (0)
Nov 25 16:44:13 compute-0 systemd[1]: libpod-9778d0dafb54d301e404c9fc3e64358d5289a7de6a44fb328a55da914816026a.scope: Deactivated successfully.
Nov 25 16:44:13 compute-0 podman[334585]: 2025-11-25 16:44:13.415104214 +0000 UTC m=+0.071202902 container died 9778d0dafb54d301e404c9fc3e64358d5289a7de6a44fb328a55da914816026a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:44:13 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9778d0dafb54d301e404c9fc3e64358d5289a7de6a44fb328a55da914816026a-userdata-shm.mount: Deactivated successfully.
Nov 25 16:44:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-0644dc8bc60d336343ae20e3273c06bdd19306c75a44733d23fab24fe3d7c49b-merged.mount: Deactivated successfully.
Nov 25 16:44:13 compute-0 nova_compute[254092]: 2025-11-25 16:44:13.574 254096 INFO nova.virt.libvirt.driver [None req-0e5c5c54-96d3-4b48-905a-46e1b567656f aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Instance shutdown successfully after 13 seconds.
Nov 25 16:44:13 compute-0 nova_compute[254092]: 2025-11-25 16:44:13.579 254096 INFO nova.virt.libvirt.driver [-] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Instance destroyed successfully.
Nov 25 16:44:13 compute-0 nova_compute[254092]: 2025-11-25 16:44:13.580 254096 DEBUG nova.objects.instance [None req-0e5c5c54-96d3-4b48-905a-46e1b567656f aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'numa_topology' on Instance uuid 01f96314-1fbe-4eee-a4ed-db7f448a5320 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:44:13 compute-0 nova_compute[254092]: 2025-11-25 16:44:13.583 254096 DEBUG nova.compute.manager [req-865f1f14-e3b1-4787-8af4-72444fafdf79 req-bac6cc8c-0e0e-4c74-b49b-543115392f55 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received event network-vif-unplugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:44:13 compute-0 nova_compute[254092]: 2025-11-25 16:44:13.583 254096 DEBUG oslo_concurrency.lockutils [req-865f1f14-e3b1-4787-8af4-72444fafdf79 req-bac6cc8c-0e0e-4c74-b49b-543115392f55 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:44:13 compute-0 nova_compute[254092]: 2025-11-25 16:44:13.584 254096 DEBUG oslo_concurrency.lockutils [req-865f1f14-e3b1-4787-8af4-72444fafdf79 req-bac6cc8c-0e0e-4c74-b49b-543115392f55 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:44:13 compute-0 nova_compute[254092]: 2025-11-25 16:44:13.584 254096 DEBUG oslo_concurrency.lockutils [req-865f1f14-e3b1-4787-8af4-72444fafdf79 req-bac6cc8c-0e0e-4c74-b49b-543115392f55 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:44:13 compute-0 nova_compute[254092]: 2025-11-25 16:44:13.584 254096 DEBUG nova.compute.manager [req-865f1f14-e3b1-4787-8af4-72444fafdf79 req-bac6cc8c-0e0e-4c74-b49b-543115392f55 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] No waiting events found dispatching network-vif-unplugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:44:13 compute-0 nova_compute[254092]: 2025-11-25 16:44:13.584 254096 WARNING nova.compute.manager [req-865f1f14-e3b1-4787-8af4-72444fafdf79 req-bac6cc8c-0e0e-4c74-b49b-543115392f55 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received unexpected event network-vif-unplugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 for instance with vm_state active and task_state powering-off.
Nov 25 16:44:13 compute-0 nova_compute[254092]: 2025-11-25 16:44:13.595 254096 DEBUG nova.compute.manager [None req-0e5c5c54-96d3-4b48-905a-46e1b567656f aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:44:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:13.622 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:44:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:13.622 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:44:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:13.623 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:44:13 compute-0 podman[334585]: 2025-11-25 16:44:13.654016926 +0000 UTC m=+0.310115614 container cleanup 9778d0dafb54d301e404c9fc3e64358d5289a7de6a44fb328a55da914816026a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 25 16:44:13 compute-0 systemd[1]: libpod-conmon-9778d0dafb54d301e404c9fc3e64358d5289a7de6a44fb328a55da914816026a.scope: Deactivated successfully.
Nov 25 16:44:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1765: 321 pgs: 321 active+clean; 123 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 533 KiB/s rd, 23 KiB/s wr, 46 op/s
Nov 25 16:44:13 compute-0 nova_compute[254092]: 2025-11-25 16:44:13.724 254096 DEBUG oslo_concurrency.lockutils [None req-0e5c5c54-96d3-4b48-905a-46e1b567656f aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.255s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:44:13 compute-0 podman[334627]: 2025-11-25 16:44:13.757792592 +0000 UTC m=+0.083516497 container remove 9778d0dafb54d301e404c9fc3e64358d5289a7de6a44fb328a55da914816026a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:44:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:13.762 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[29b6c164-065f-4968-9c14-f4113d35ea59]: (4, ('Tue Nov 25 04:44:13 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525 (9778d0dafb54d301e404c9fc3e64358d5289a7de6a44fb328a55da914816026a)\n9778d0dafb54d301e404c9fc3e64358d5289a7de6a44fb328a55da914816026a\nTue Nov 25 04:44:13 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525 (9778d0dafb54d301e404c9fc3e64358d5289a7de6a44fb328a55da914816026a)\n9778d0dafb54d301e404c9fc3e64358d5289a7de6a44fb328a55da914816026a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:13.764 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2b32a3f5-6332-4dc1-9335-7b2420985814]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:13.764 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50ea1716-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:44:13 compute-0 nova_compute[254092]: 2025-11-25 16:44:13.821 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:13 compute-0 kernel: tap50ea1716-90: left promiscuous mode
Nov 25 16:44:13 compute-0 nova_compute[254092]: 2025-11-25 16:44:13.838 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:13.842 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f7b448db-9859-4dbc-a473-e57f22d22373]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:13.857 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ed8e010f-2dbc-481b-b8e8-088623788b7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:13.858 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6e9b2339-4591-443a-a347-c3c6ef306e76]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:13.875 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7b681247-c843-41af-b018-23daf093175b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548824, 'reachable_time': 27117, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334644, 'error': None, 'target': 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:13 compute-0 systemd[1]: run-netns-ovnmeta\x2d50ea1716\x2d9d0b\x2d409f\x2db648\x2d8d2ad5a30525.mount: Deactivated successfully.
Nov 25 16:44:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:13.878 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:44:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:13.878 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[171c2a6d-2443-4d0e-aa0f-40efbb8fbbbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:14 compute-0 ceph-mon[74985]: pgmap v1765: 321 pgs: 321 active+clean; 123 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 533 KiB/s rd, 23 KiB/s wr, 46 op/s
Nov 25 16:44:14 compute-0 nova_compute[254092]: 2025-11-25 16:44:14.626 254096 DEBUG nova.objects.instance [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'flavor' on Instance uuid 01f96314-1fbe-4eee-a4ed-db7f448a5320 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:44:14 compute-0 nova_compute[254092]: 2025-11-25 16:44:14.645 254096 DEBUG oslo_concurrency.lockutils [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquiring lock "refresh_cache-01f96314-1fbe-4eee-a4ed-db7f448a5320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:44:14 compute-0 nova_compute[254092]: 2025-11-25 16:44:14.646 254096 DEBUG oslo_concurrency.lockutils [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquired lock "refresh_cache-01f96314-1fbe-4eee-a4ed-db7f448a5320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:44:14 compute-0 nova_compute[254092]: 2025-11-25 16:44:14.646 254096 DEBUG nova.network.neutron [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:44:14 compute-0 nova_compute[254092]: 2025-11-25 16:44:14.646 254096 DEBUG nova.objects.instance [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'info_cache' on Instance uuid 01f96314-1fbe-4eee-a4ed-db7f448a5320 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:44:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1766: 321 pgs: 321 active+clean; 123 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 534 KiB/s rd, 38 KiB/s wr, 48 op/s
Nov 25 16:44:15 compute-0 nova_compute[254092]: 2025-11-25 16:44:15.712 254096 DEBUG nova.compute.manager [req-bf16c247-989d-467e-b3de-87dd9c9fde10 req-3b2c917a-c81a-4624-b7ab-6ecf8cd3df64 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:44:15 compute-0 nova_compute[254092]: 2025-11-25 16:44:15.713 254096 DEBUG oslo_concurrency.lockutils [req-bf16c247-989d-467e-b3de-87dd9c9fde10 req-3b2c917a-c81a-4624-b7ab-6ecf8cd3df64 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:44:15 compute-0 nova_compute[254092]: 2025-11-25 16:44:15.714 254096 DEBUG oslo_concurrency.lockutils [req-bf16c247-989d-467e-b3de-87dd9c9fde10 req-3b2c917a-c81a-4624-b7ab-6ecf8cd3df64 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:44:15 compute-0 nova_compute[254092]: 2025-11-25 16:44:15.714 254096 DEBUG oslo_concurrency.lockutils [req-bf16c247-989d-467e-b3de-87dd9c9fde10 req-3b2c917a-c81a-4624-b7ab-6ecf8cd3df64 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:44:15 compute-0 nova_compute[254092]: 2025-11-25 16:44:15.714 254096 DEBUG nova.compute.manager [req-bf16c247-989d-467e-b3de-87dd9c9fde10 req-3b2c917a-c81a-4624-b7ab-6ecf8cd3df64 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] No waiting events found dispatching network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:44:15 compute-0 nova_compute[254092]: 2025-11-25 16:44:15.714 254096 WARNING nova.compute.manager [req-bf16c247-989d-467e-b3de-87dd9c9fde10 req-3b2c917a-c81a-4624-b7ab-6ecf8cd3df64 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received unexpected event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 for instance with vm_state stopped and task_state powering-on.
Nov 25 16:44:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:44:16 compute-0 nova_compute[254092]: 2025-11-25 16:44:16.225 254096 DEBUG nova.network.neutron [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Updating instance_info_cache with network_info: [{"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:44:16 compute-0 nova_compute[254092]: 2025-11-25 16:44:16.266 254096 DEBUG oslo_concurrency.lockutils [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Releasing lock "refresh_cache-01f96314-1fbe-4eee-a4ed-db7f448a5320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:44:16 compute-0 nova_compute[254092]: 2025-11-25 16:44:16.292 254096 INFO nova.virt.libvirt.driver [-] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Instance destroyed successfully.
Nov 25 16:44:16 compute-0 nova_compute[254092]: 2025-11-25 16:44:16.293 254096 DEBUG nova.objects.instance [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'numa_topology' on Instance uuid 01f96314-1fbe-4eee-a4ed-db7f448a5320 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:44:16 compute-0 nova_compute[254092]: 2025-11-25 16:44:16.304 254096 DEBUG nova.objects.instance [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'resources' on Instance uuid 01f96314-1fbe-4eee-a4ed-db7f448a5320 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:44:16 compute-0 nova_compute[254092]: 2025-11-25 16:44:16.313 254096 DEBUG nova.virt.libvirt.vif [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:42:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1951123052',display_name='tempest-ServerActionsTestJSON-server-1951123052',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1951123052',id=72,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOul8HeAt+UIxCfJfn+60t4YpZV7iSL4LJDORWr6iCR1ov+YI0semWU/ei1rpSzmkTyhB+zsYhvO/JrM/v0aowbnChSU76J6tWISFPQqrNakhVmsB30NDWBI6kYIFzj8+Q==',key_name='tempest-keypair-1452948426',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:43:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b4d15f5aabd3491da5314b126a20225a',ramdisk_id='',reservation_id='r-gjhb48uk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1811453217',owner_user_name='tempest-ServerActionsTestJSON-1811453217-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:44:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aea2d8cf3bb54cdbbc72e41805fb1f90',uuid=01f96314-1fbe-4eee-a4ed-db7f448a5320,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:44:16 compute-0 nova_compute[254092]: 2025-11-25 16:44:16.313 254096 DEBUG nova.network.os_vif_util [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converting VIF {"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:44:16 compute-0 nova_compute[254092]: 2025-11-25 16:44:16.314 254096 DEBUG nova.network.os_vif_util [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:44:16 compute-0 nova_compute[254092]: 2025-11-25 16:44:16.314 254096 DEBUG os_vif [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:44:16 compute-0 nova_compute[254092]: 2025-11-25 16:44:16.316 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:16 compute-0 nova_compute[254092]: 2025-11-25 16:44:16.316 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4fe8c3a9-70, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:44:16 compute-0 nova_compute[254092]: 2025-11-25 16:44:16.318 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:16 compute-0 nova_compute[254092]: 2025-11-25 16:44:16.319 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:16 compute-0 nova_compute[254092]: 2025-11-25 16:44:16.322 254096 INFO os_vif [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70')
Nov 25 16:44:16 compute-0 nova_compute[254092]: 2025-11-25 16:44:16.328 254096 DEBUG nova.virt.libvirt.driver [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Start _get_guest_xml network_info=[{"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:44:16 compute-0 nova_compute[254092]: 2025-11-25 16:44:16.332 254096 WARNING nova.virt.libvirt.driver [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:44:16 compute-0 nova_compute[254092]: 2025-11-25 16:44:16.336 254096 DEBUG nova.virt.libvirt.host [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:44:16 compute-0 nova_compute[254092]: 2025-11-25 16:44:16.336 254096 DEBUG nova.virt.libvirt.host [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:44:16 compute-0 nova_compute[254092]: 2025-11-25 16:44:16.339 254096 DEBUG nova.virt.libvirt.host [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:44:16 compute-0 nova_compute[254092]: 2025-11-25 16:44:16.340 254096 DEBUG nova.virt.libvirt.host [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:44:16 compute-0 nova_compute[254092]: 2025-11-25 16:44:16.341 254096 DEBUG nova.virt.libvirt.driver [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:44:16 compute-0 nova_compute[254092]: 2025-11-25 16:44:16.341 254096 DEBUG nova.virt.hardware [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:44:16 compute-0 nova_compute[254092]: 2025-11-25 16:44:16.341 254096 DEBUG nova.virt.hardware [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:44:16 compute-0 nova_compute[254092]: 2025-11-25 16:44:16.342 254096 DEBUG nova.virt.hardware [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:44:16 compute-0 nova_compute[254092]: 2025-11-25 16:44:16.342 254096 DEBUG nova.virt.hardware [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:44:16 compute-0 nova_compute[254092]: 2025-11-25 16:44:16.342 254096 DEBUG nova.virt.hardware [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:44:16 compute-0 nova_compute[254092]: 2025-11-25 16:44:16.342 254096 DEBUG nova.virt.hardware [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:44:16 compute-0 nova_compute[254092]: 2025-11-25 16:44:16.342 254096 DEBUG nova.virt.hardware [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:44:16 compute-0 nova_compute[254092]: 2025-11-25 16:44:16.343 254096 DEBUG nova.virt.hardware [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:44:16 compute-0 nova_compute[254092]: 2025-11-25 16:44:16.343 254096 DEBUG nova.virt.hardware [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:44:16 compute-0 nova_compute[254092]: 2025-11-25 16:44:16.343 254096 DEBUG nova.virt.hardware [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:44:16 compute-0 nova_compute[254092]: 2025-11-25 16:44:16.343 254096 DEBUG nova.virt.hardware [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:44:16 compute-0 nova_compute[254092]: 2025-11-25 16:44:16.343 254096 DEBUG nova.objects.instance [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'vcpu_model' on Instance uuid 01f96314-1fbe-4eee-a4ed-db7f448a5320 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:44:16 compute-0 nova_compute[254092]: 2025-11-25 16:44:16.357 254096 DEBUG oslo_concurrency.processutils [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:44:16 compute-0 ceph-mon[74985]: pgmap v1766: 321 pgs: 321 active+clean; 123 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 534 KiB/s rd, 38 KiB/s wr, 48 op/s
Nov 25 16:44:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:44:16 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3591543607' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:44:16 compute-0 nova_compute[254092]: 2025-11-25 16:44:16.833 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:44:16 compute-0 nova_compute[254092]: 2025-11-25 16:44:16.834 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:44:16 compute-0 nova_compute[254092]: 2025-11-25 16:44:16.834 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 16:44:16 compute-0 nova_compute[254092]: 2025-11-25 16:44:16.834 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 16:44:16 compute-0 nova_compute[254092]: 2025-11-25 16:44:16.851 254096 DEBUG oslo_concurrency.processutils [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:44:16 compute-0 nova_compute[254092]: 2025-11-25 16:44:16.894 254096 DEBUG oslo_concurrency.processutils [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:44:16 compute-0 nova_compute[254092]: 2025-11-25 16:44:16.933 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-01f96314-1fbe-4eee-a4ed-db7f448a5320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:44:16 compute-0 nova_compute[254092]: 2025-11-25 16:44:16.933 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-01f96314-1fbe-4eee-a4ed-db7f448a5320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:44:16 compute-0 nova_compute[254092]: 2025-11-25 16:44:16.933 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 16:44:16 compute-0 nova_compute[254092]: 2025-11-25 16:44:16.934 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 01f96314-1fbe-4eee-a4ed-db7f448a5320 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:44:17 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:44:17 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1936328393' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:44:17 compute-0 nova_compute[254092]: 2025-11-25 16:44:17.371 254096 DEBUG oslo_concurrency.processutils [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:44:17 compute-0 nova_compute[254092]: 2025-11-25 16:44:17.372 254096 DEBUG nova.virt.libvirt.vif [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:42:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1951123052',display_name='tempest-ServerActionsTestJSON-server-1951123052',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1951123052',id=72,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOul8HeAt+UIxCfJfn+60t4YpZV7iSL4LJDORWr6iCR1ov+YI0semWU/ei1rpSzmkTyhB+zsYhvO/JrM/v0aowbnChSU76J6tWISFPQqrNakhVmsB30NDWBI6kYIFzj8+Q==',key_name='tempest-keypair-1452948426',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:43:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b4d15f5aabd3491da5314b126a20225a',ramdisk_id='',reservation_id='r-gjhb48uk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1811453217',owner_user_name='tempest-ServerActionsTestJSON-1811453217-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:44:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aea2d8cf3bb54cdbbc72e41805fb1f90',uuid=01f96314-1fbe-4eee-a4ed-db7f448a5320,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:44:17 compute-0 nova_compute[254092]: 2025-11-25 16:44:17.373 254096 DEBUG nova.network.os_vif_util [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converting VIF {"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:44:17 compute-0 nova_compute[254092]: 2025-11-25 16:44:17.373 254096 DEBUG nova.network.os_vif_util [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:44:17 compute-0 nova_compute[254092]: 2025-11-25 16:44:17.375 254096 DEBUG nova.objects.instance [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'pci_devices' on Instance uuid 01f96314-1fbe-4eee-a4ed-db7f448a5320 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:44:17 compute-0 nova_compute[254092]: 2025-11-25 16:44:17.390 254096 DEBUG nova.virt.libvirt.driver [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:44:17 compute-0 nova_compute[254092]:   <uuid>01f96314-1fbe-4eee-a4ed-db7f448a5320</uuid>
Nov 25 16:44:17 compute-0 nova_compute[254092]:   <name>instance-00000048</name>
Nov 25 16:44:17 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:44:17 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:44:17 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:44:17 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:       <nova:name>tempest-ServerActionsTestJSON-server-1951123052</nova:name>
Nov 25 16:44:17 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:44:16</nova:creationTime>
Nov 25 16:44:17 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:44:17 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:44:17 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:44:17 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:44:17 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:44:17 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:44:17 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:44:17 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:44:17 compute-0 nova_compute[254092]:         <nova:user uuid="aea2d8cf3bb54cdbbc72e41805fb1f90">tempest-ServerActionsTestJSON-1811453217-project-member</nova:user>
Nov 25 16:44:17 compute-0 nova_compute[254092]:         <nova:project uuid="b4d15f5aabd3491da5314b126a20225a">tempest-ServerActionsTestJSON-1811453217</nova:project>
Nov 25 16:44:17 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:44:17 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:44:17 compute-0 nova_compute[254092]:         <nova:port uuid="4fe8c3a9-70ba-4a51-8bc1-1505a49fe349">
Nov 25 16:44:17 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:44:17 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:44:17 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:44:17 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:44:17 compute-0 nova_compute[254092]:     <system>
Nov 25 16:44:17 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:44:17 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:44:17 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:44:17 compute-0 nova_compute[254092]:       <entry name="serial">01f96314-1fbe-4eee-a4ed-db7f448a5320</entry>
Nov 25 16:44:17 compute-0 nova_compute[254092]:       <entry name="uuid">01f96314-1fbe-4eee-a4ed-db7f448a5320</entry>
Nov 25 16:44:17 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     </system>
Nov 25 16:44:17 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:44:17 compute-0 nova_compute[254092]:   <os>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:   </os>
Nov 25 16:44:17 compute-0 nova_compute[254092]:   <features>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:   </features>
Nov 25 16:44:17 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:44:17 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:44:17 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:44:17 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:44:17 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:44:17 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/01f96314-1fbe-4eee-a4ed-db7f448a5320_disk">
Nov 25 16:44:17 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:       </source>
Nov 25 16:44:17 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:44:17 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:44:17 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:44:17 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/01f96314-1fbe-4eee-a4ed-db7f448a5320_disk.config">
Nov 25 16:44:17 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:       </source>
Nov 25 16:44:17 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:44:17 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:44:17 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:44:17 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:ff:a0:1c"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:       <target dev="tap4fe8c3a9-70"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:44:17 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/01f96314-1fbe-4eee-a4ed-db7f448a5320/console.log" append="off"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     <video>
Nov 25 16:44:17 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     </video>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     <input type="keyboard" bus="usb"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:44:17 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:44:17 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:44:17 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:44:17 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:44:17 compute-0 nova_compute[254092]: </domain>
Nov 25 16:44:17 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:44:17 compute-0 nova_compute[254092]: 2025-11-25 16:44:17.392 254096 DEBUG nova.virt.libvirt.driver [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] skipping disk for instance-00000048 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:44:17 compute-0 nova_compute[254092]: 2025-11-25 16:44:17.392 254096 DEBUG nova.virt.libvirt.driver [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] skipping disk for instance-00000048 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:44:17 compute-0 nova_compute[254092]: 2025-11-25 16:44:17.393 254096 DEBUG nova.virt.libvirt.vif [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:42:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1951123052',display_name='tempest-ServerActionsTestJSON-server-1951123052',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1951123052',id=72,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOul8HeAt+UIxCfJfn+60t4YpZV7iSL4LJDORWr6iCR1ov+YI0semWU/ei1rpSzmkTyhB+zsYhvO/JrM/v0aowbnChSU76J6tWISFPQqrNakhVmsB30NDWBI6kYIFzj8+Q==',key_name='tempest-keypair-1452948426',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:43:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='b4d15f5aabd3491da5314b126a20225a',ramdisk_id='',reservation_id='r-gjhb48uk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1811453217',owner_user_name='tempest-ServerActionsTestJSON-1811453217-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:44:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aea2d8cf3bb54cdbbc72e41805fb1f90',uuid=01f96314-1fbe-4eee-a4ed-db7f448a5320,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:44:17 compute-0 nova_compute[254092]: 2025-11-25 16:44:17.393 254096 DEBUG nova.network.os_vif_util [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converting VIF {"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:44:17 compute-0 nova_compute[254092]: 2025-11-25 16:44:17.394 254096 DEBUG nova.network.os_vif_util [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:44:17 compute-0 nova_compute[254092]: 2025-11-25 16:44:17.395 254096 DEBUG os_vif [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:44:17 compute-0 nova_compute[254092]: 2025-11-25 16:44:17.395 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:17 compute-0 nova_compute[254092]: 2025-11-25 16:44:17.396 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:44:17 compute-0 nova_compute[254092]: 2025-11-25 16:44:17.396 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:44:17 compute-0 nova_compute[254092]: 2025-11-25 16:44:17.400 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:17 compute-0 nova_compute[254092]: 2025-11-25 16:44:17.400 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4fe8c3a9-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:44:17 compute-0 nova_compute[254092]: 2025-11-25 16:44:17.401 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4fe8c3a9-70, col_values=(('external_ids', {'iface-id': '4fe8c3a9-70ba-4a51-8bc1-1505a49fe349', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ff:a0:1c', 'vm-uuid': '01f96314-1fbe-4eee-a4ed-db7f448a5320'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:44:17 compute-0 NetworkManager[48891]: <info>  [1764089057.4034] manager: (tap4fe8c3a9-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/317)
Nov 25 16:44:17 compute-0 nova_compute[254092]: 2025-11-25 16:44:17.403 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:17 compute-0 nova_compute[254092]: 2025-11-25 16:44:17.405 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:44:17 compute-0 nova_compute[254092]: 2025-11-25 16:44:17.417 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:17 compute-0 nova_compute[254092]: 2025-11-25 16:44:17.419 254096 INFO os_vif [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70')
Nov 25 16:44:17 compute-0 kernel: tap4fe8c3a9-70: entered promiscuous mode
Nov 25 16:44:17 compute-0 NetworkManager[48891]: <info>  [1764089057.4889] manager: (tap4fe8c3a9-70): new Tun device (/org/freedesktop/NetworkManager/Devices/318)
Nov 25 16:44:17 compute-0 ovn_controller[153477]: 2025-11-25T16:44:17Z|00735|binding|INFO|Claiming lport 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 for this chassis.
Nov 25 16:44:17 compute-0 ovn_controller[153477]: 2025-11-25T16:44:17Z|00736|binding|INFO|4fe8c3a9-70ba-4a51-8bc1-1505a49fe349: Claiming fa:16:3e:ff:a0:1c 10.100.0.11
Nov 25 16:44:17 compute-0 nova_compute[254092]: 2025-11-25 16:44:17.489 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:17 compute-0 ovn_controller[153477]: 2025-11-25T16:44:17Z|00737|binding|INFO|Setting lport 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 ovn-installed in OVS
Nov 25 16:44:17 compute-0 nova_compute[254092]: 2025-11-25 16:44:17.508 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:17 compute-0 systemd-udevd[334721]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:44:17 compute-0 systemd-machined[216343]: New machine qemu-91-instance-00000048.
Nov 25 16:44:17 compute-0 NetworkManager[48891]: <info>  [1764089057.5300] device (tap4fe8c3a9-70): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:44:17 compute-0 NetworkManager[48891]: <info>  [1764089057.5313] device (tap4fe8c3a9-70): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:44:17 compute-0 systemd[1]: Started Virtual Machine qemu-91-instance-00000048.
Nov 25 16:44:17 compute-0 ovn_controller[153477]: 2025-11-25T16:44:17Z|00738|binding|INFO|Setting lport 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 up in Southbound
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:17.547 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:a0:1c 10.100.0.11'], port_security=['fa:16:3e:ff:a0:1c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '01f96314-1fbe-4eee-a4ed-db7f448a5320', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4d15f5aabd3491da5314b126a20225a', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'f659b7e6-4ca9-46df-a38f-d29c92b67f9f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.187'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa1f8c26-9163-4397-beb3-8395c780a12b, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:17.548 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 in datapath 50ea1716-9d0b-409f-b648-8d2ad5a30525 bound to our chassis
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:17.549 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 50ea1716-9d0b-409f-b648-8d2ad5a30525
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:17.563 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3dacb8e3-a887-4b68-9600-f0ca8d93d78d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:17.564 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap50ea1716-91 in ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:17.566 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap50ea1716-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:17.566 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[382bcac6-17a6-4164-9b3e-504c007d2478]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:17.567 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[eef8ce6f-939e-4b6f-bd52-f84977dd0afd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:17.580 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[5eb4971f-a18e-4214-8dcc-0f602ab2cda1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:17.593 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b2abf27c-31ce-41e3-95ab-4d8e0a723c5b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:17.627 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d48f96ba-ba44-4360-979d-2d7c17748fcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:17 compute-0 NetworkManager[48891]: <info>  [1764089057.6338] manager: (tap50ea1716-90): new Veth device (/org/freedesktop/NetworkManager/Devices/319)
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:17.634 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4c90682a-f021-4923-9e36-423ba07377e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:17.666 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[0df5dfe6-924f-49a0-9a8e-a1c1660ea5a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:17.669 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8524300b-aef1-4476-a997-f88e753bb8c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1767: 321 pgs: 321 active+clean; 123 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 201 KiB/s rd, 27 KiB/s wr, 16 op/s
Nov 25 16:44:17 compute-0 NetworkManager[48891]: <info>  [1764089057.6894] device (tap50ea1716-90): carrier: link connected
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:17.693 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8eaf61e0-4971-41ab-b607-bb14777463de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:17.711 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fc73edc7-7252-49df-9b5f-72e0ecea3b0c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap50ea1716-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:99:3a:54'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 217], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 551528, 'reachable_time': 30420, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334755, 'error': None, 'target': 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:17.726 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fdc041a0-aeaa-4f78-9e72-19292698a94a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe99:3a54'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 551528, 'tstamp': 551528}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 334756, 'error': None, 'target': 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:17.741 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1658714b-0cf6-4834-9bb0-9ebd2e22422b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap50ea1716-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:99:3a:54'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 217], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 551528, 'reachable_time': 30420, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 334757, 'error': None, 'target': 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:17 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3591543607' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:44:17 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1936328393' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:17.773 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[50df95dc-e611-4ae4-8aff-008981ddda08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:17.837 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e5602766-bad0-4ec4-932d-3a39384b7927]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:17.838 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50ea1716-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:17.839 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:17.839 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap50ea1716-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:44:17 compute-0 nova_compute[254092]: 2025-11-25 16:44:17.840 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:17 compute-0 kernel: tap50ea1716-90: entered promiscuous mode
Nov 25 16:44:17 compute-0 NetworkManager[48891]: <info>  [1764089057.8417] manager: (tap50ea1716-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/320)
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:17.847 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap50ea1716-90, col_values=(('external_ids', {'iface-id': '1ec3401d-acb9-4d50-bfc2-f0a1127f4745'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:44:17 compute-0 nova_compute[254092]: 2025-11-25 16:44:17.849 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:17 compute-0 ovn_controller[153477]: 2025-11-25T16:44:17Z|00739|binding|INFO|Releasing lport 1ec3401d-acb9-4d50-bfc2-f0a1127f4745 from this chassis (sb_readonly=0)
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:17.850 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/50ea1716-9d0b-409f-b648-8d2ad5a30525.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/50ea1716-9d0b-409f-b648-8d2ad5a30525.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:17.851 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a6570efa-d1aa-4689-9e52-7e80a6410038]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:17.852 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-50ea1716-9d0b-409f-b648-8d2ad5a30525
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/50ea1716-9d0b-409f-b648-8d2ad5a30525.pid.haproxy
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 50ea1716-9d0b-409f-b648-8d2ad5a30525
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:44:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:17.853 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'env', 'PROCESS_TAG=haproxy-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/50ea1716-9d0b-409f-b648-8d2ad5a30525.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:44:17 compute-0 nova_compute[254092]: 2025-11-25 16:44:17.863 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:18 compute-0 nova_compute[254092]: 2025-11-25 16:44:18.016 254096 DEBUG nova.compute.manager [req-cde4bed0-14f6-4386-88d3-6cc8581de6b9 req-e524167d-8b32-4b9e-a4dd-dc61a45cc0db a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:44:18 compute-0 nova_compute[254092]: 2025-11-25 16:44:18.016 254096 DEBUG oslo_concurrency.lockutils [req-cde4bed0-14f6-4386-88d3-6cc8581de6b9 req-e524167d-8b32-4b9e-a4dd-dc61a45cc0db a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:44:18 compute-0 nova_compute[254092]: 2025-11-25 16:44:18.017 254096 DEBUG oslo_concurrency.lockutils [req-cde4bed0-14f6-4386-88d3-6cc8581de6b9 req-e524167d-8b32-4b9e-a4dd-dc61a45cc0db a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:44:18 compute-0 nova_compute[254092]: 2025-11-25 16:44:18.017 254096 DEBUG oslo_concurrency.lockutils [req-cde4bed0-14f6-4386-88d3-6cc8581de6b9 req-e524167d-8b32-4b9e-a4dd-dc61a45cc0db a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:44:18 compute-0 nova_compute[254092]: 2025-11-25 16:44:18.017 254096 DEBUG nova.compute.manager [req-cde4bed0-14f6-4386-88d3-6cc8581de6b9 req-e524167d-8b32-4b9e-a4dd-dc61a45cc0db a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] No waiting events found dispatching network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:44:18 compute-0 nova_compute[254092]: 2025-11-25 16:44:18.018 254096 WARNING nova.compute.manager [req-cde4bed0-14f6-4386-88d3-6cc8581de6b9 req-e524167d-8b32-4b9e-a4dd-dc61a45cc0db a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received unexpected event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 for instance with vm_state stopped and task_state powering-on.
Nov 25 16:44:18 compute-0 nova_compute[254092]: 2025-11-25 16:44:18.106 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for 01f96314-1fbe-4eee-a4ed-db7f448a5320 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 16:44:18 compute-0 nova_compute[254092]: 2025-11-25 16:44:18.106 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089058.1057804, 01f96314-1fbe-4eee-a4ed-db7f448a5320 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:44:18 compute-0 nova_compute[254092]: 2025-11-25 16:44:18.107 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] VM Resumed (Lifecycle Event)
Nov 25 16:44:18 compute-0 nova_compute[254092]: 2025-11-25 16:44:18.108 254096 DEBUG nova.compute.manager [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:44:18 compute-0 nova_compute[254092]: 2025-11-25 16:44:18.112 254096 INFO nova.virt.libvirt.driver [-] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Instance rebooted successfully.
Nov 25 16:44:18 compute-0 nova_compute[254092]: 2025-11-25 16:44:18.112 254096 DEBUG nova.compute.manager [None req-0e273163-7da3-4b41-b70c-2f7588519470 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:44:18 compute-0 nova_compute[254092]: 2025-11-25 16:44:18.132 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:44:18 compute-0 nova_compute[254092]: 2025-11-25 16:44:18.135 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:44:18 compute-0 nova_compute[254092]: 2025-11-25 16:44:18.148 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] During sync_power_state the instance has a pending task (powering-on). Skip.
Nov 25 16:44:18 compute-0 nova_compute[254092]: 2025-11-25 16:44:18.149 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089058.1065469, 01f96314-1fbe-4eee-a4ed-db7f448a5320 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:44:18 compute-0 nova_compute[254092]: 2025-11-25 16:44:18.149 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] VM Started (Lifecycle Event)
Nov 25 16:44:18 compute-0 nova_compute[254092]: 2025-11-25 16:44:18.163 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:44:18 compute-0 nova_compute[254092]: 2025-11-25 16:44:18.166 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Synchronizing instance power state after lifecycle event "Started"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:44:18 compute-0 nova_compute[254092]: 2025-11-25 16:44:18.192 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] During sync_power_state the instance has a pending task (powering-on). Skip.
Nov 25 16:44:18 compute-0 podman[334831]: 2025-11-25 16:44:18.170423654 +0000 UTC m=+0.020306852 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:44:18 compute-0 nova_compute[254092]: 2025-11-25 16:44:18.324 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:18 compute-0 podman[334831]: 2025-11-25 16:44:18.356659676 +0000 UTC m=+0.206542854 container create 10d18702ae5afdd836b5d5f261c5cf853f9772136dfef412da1ae7c87051a63b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 25 16:44:18 compute-0 systemd[1]: Started libpod-conmon-10d18702ae5afdd836b5d5f261c5cf853f9772136dfef412da1ae7c87051a63b.scope.
Nov 25 16:44:18 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:44:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5f08ec3ca82ffeedf6f0dbdbd4e85e5261e38826982127dc852dcb2188945f4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:44:18 compute-0 nova_compute[254092]: 2025-11-25 16:44:18.488 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:18 compute-0 podman[334831]: 2025-11-25 16:44:18.510699735 +0000 UTC m=+0.360582913 container init 10d18702ae5afdd836b5d5f261c5cf853f9772136dfef412da1ae7c87051a63b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 16:44:18 compute-0 podman[334831]: 2025-11-25 16:44:18.517083748 +0000 UTC m=+0.366966926 container start 10d18702ae5afdd836b5d5f261c5cf853f9772136dfef412da1ae7c87051a63b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 16:44:18 compute-0 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[334846]: [NOTICE]   (334850) : New worker (334852) forked
Nov 25 16:44:18 compute-0 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[334846]: [NOTICE]   (334850) : Loading success.
Nov 25 16:44:18 compute-0 ceph-mon[74985]: pgmap v1767: 321 pgs: 321 active+clean; 123 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 201 KiB/s rd, 27 KiB/s wr, 16 op/s
Nov 25 16:44:18 compute-0 nova_compute[254092]: 2025-11-25 16:44:18.946 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Updating instance_info_cache with network_info: [{"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:44:18 compute-0 nova_compute[254092]: 2025-11-25 16:44:18.960 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-01f96314-1fbe-4eee-a4ed-db7f448a5320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:44:18 compute-0 nova_compute[254092]: 2025-11-25 16:44:18.961 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 16:44:19 compute-0 nova_compute[254092]: 2025-11-25 16:44:19.479 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:44:19 compute-0 nova_compute[254092]: 2025-11-25 16:44:19.494 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Triggering sync for uuid 01f96314-1fbe-4eee-a4ed-db7f448a5320 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 25 16:44:19 compute-0 nova_compute[254092]: 2025-11-25 16:44:19.495 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:44:19 compute-0 nova_compute[254092]: 2025-11-25 16:44:19.495 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:44:19 compute-0 nova_compute[254092]: 2025-11-25 16:44:19.516 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.021s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:44:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1768: 321 pgs: 321 active+clean; 123 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 2.8 KiB/s rd, 26 KiB/s wr, 3 op/s
Nov 25 16:44:20 compute-0 nova_compute[254092]: 2025-11-25 16:44:20.115 254096 DEBUG nova.compute.manager [req-05f33fcd-6cef-4091-9cbe-362a158de8be req-e32c2d13-8c9e-4e53-9265-b23e6d193bd2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:44:20 compute-0 nova_compute[254092]: 2025-11-25 16:44:20.116 254096 DEBUG oslo_concurrency.lockutils [req-05f33fcd-6cef-4091-9cbe-362a158de8be req-e32c2d13-8c9e-4e53-9265-b23e6d193bd2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:44:20 compute-0 nova_compute[254092]: 2025-11-25 16:44:20.116 254096 DEBUG oslo_concurrency.lockutils [req-05f33fcd-6cef-4091-9cbe-362a158de8be req-e32c2d13-8c9e-4e53-9265-b23e6d193bd2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:44:20 compute-0 nova_compute[254092]: 2025-11-25 16:44:20.116 254096 DEBUG oslo_concurrency.lockutils [req-05f33fcd-6cef-4091-9cbe-362a158de8be req-e32c2d13-8c9e-4e53-9265-b23e6d193bd2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:44:20 compute-0 nova_compute[254092]: 2025-11-25 16:44:20.117 254096 DEBUG nova.compute.manager [req-05f33fcd-6cef-4091-9cbe-362a158de8be req-e32c2d13-8c9e-4e53-9265-b23e6d193bd2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] No waiting events found dispatching network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:44:20 compute-0 nova_compute[254092]: 2025-11-25 16:44:20.117 254096 WARNING nova.compute.manager [req-05f33fcd-6cef-4091-9cbe-362a158de8be req-e32c2d13-8c9e-4e53-9265-b23e6d193bd2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received unexpected event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 for instance with vm_state active and task_state None.
Nov 25 16:44:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:20.448 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:44:20 compute-0 nova_compute[254092]: 2025-11-25 16:44:20.448 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:20.449 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 16:44:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:44:21 compute-0 ceph-mon[74985]: pgmap v1768: 321 pgs: 321 active+clean; 123 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 2.8 KiB/s rd, 26 KiB/s wr, 3 op/s
Nov 25 16:44:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1769: 321 pgs: 321 active+clean; 123 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 74 op/s
Nov 25 16:44:22 compute-0 nova_compute[254092]: 2025-11-25 16:44:22.404 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:23 compute-0 ceph-mon[74985]: pgmap v1769: 321 pgs: 321 active+clean; 123 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 74 op/s
Nov 25 16:44:23 compute-0 nova_compute[254092]: 2025-11-25 16:44:23.327 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1770: 321 pgs: 321 active+clean; 123 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 73 op/s
Nov 25 16:44:24 compute-0 nova_compute[254092]: 2025-11-25 16:44:24.192 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:24 compute-0 ceph-mon[74985]: pgmap v1770: 321 pgs: 321 active+clean; 123 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 73 op/s
Nov 25 16:44:24 compute-0 nova_compute[254092]: 2025-11-25 16:44:24.615 254096 INFO nova.compute.manager [None req-7e40f9af-4327-4f90-9fa7-035777523df1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Pausing
Nov 25 16:44:24 compute-0 nova_compute[254092]: 2025-11-25 16:44:24.616 254096 DEBUG nova.objects.instance [None req-7e40f9af-4327-4f90-9fa7-035777523df1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'flavor' on Instance uuid 01f96314-1fbe-4eee-a4ed-db7f448a5320 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:44:24 compute-0 nova_compute[254092]: 2025-11-25 16:44:24.642 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089064.6418903, 01f96314-1fbe-4eee-a4ed-db7f448a5320 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:44:24 compute-0 nova_compute[254092]: 2025-11-25 16:44:24.642 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] VM Paused (Lifecycle Event)
Nov 25 16:44:24 compute-0 nova_compute[254092]: 2025-11-25 16:44:24.644 254096 DEBUG nova.compute.manager [None req-7e40f9af-4327-4f90-9fa7-035777523df1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:44:24 compute-0 nova_compute[254092]: 2025-11-25 16:44:24.673 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:44:24 compute-0 nova_compute[254092]: 2025-11-25 16:44:24.679 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:44:24 compute-0 nova_compute[254092]: 2025-11-25 16:44:24.697 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] During sync_power_state the instance has a pending task (pausing). Skip.
Nov 25 16:44:25 compute-0 ceph-mgr[75280]: [devicehealth INFO root] Check health
Nov 25 16:44:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:25.451 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:44:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1771: 321 pgs: 321 active+clean; 123 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 73 op/s
Nov 25 16:44:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:44:26 compute-0 nova_compute[254092]: 2025-11-25 16:44:26.106 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:26 compute-0 ceph-mon[74985]: pgmap v1771: 321 pgs: 321 active+clean; 123 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 73 op/s
Nov 25 16:44:27 compute-0 nova_compute[254092]: 2025-11-25 16:44:27.407 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1772: 321 pgs: 321 active+clean; 123 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 71 op/s
Nov 25 16:44:27 compute-0 nova_compute[254092]: 2025-11-25 16:44:27.778 254096 INFO nova.compute.manager [None req-595718f5-0493-424d-ace8-03cd9acad780 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Unpausing
Nov 25 16:44:27 compute-0 nova_compute[254092]: 2025-11-25 16:44:27.779 254096 DEBUG nova.objects.instance [None req-595718f5-0493-424d-ace8-03cd9acad780 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'flavor' on Instance uuid 01f96314-1fbe-4eee-a4ed-db7f448a5320 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:44:27 compute-0 nova_compute[254092]: 2025-11-25 16:44:27.811 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089067.8106759, 01f96314-1fbe-4eee-a4ed-db7f448a5320 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:44:27 compute-0 nova_compute[254092]: 2025-11-25 16:44:27.811 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] VM Resumed (Lifecycle Event)
Nov 25 16:44:27 compute-0 virtqemud[253880]: argument unsupported: QEMU guest agent is not configured
Nov 25 16:44:27 compute-0 nova_compute[254092]: 2025-11-25 16:44:27.815 254096 DEBUG nova.virt.libvirt.guest [None req-595718f5-0493-424d-ace8-03cd9acad780 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Nov 25 16:44:27 compute-0 nova_compute[254092]: 2025-11-25 16:44:27.815 254096 DEBUG nova.compute.manager [None req-595718f5-0493-424d-ace8-03cd9acad780 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:44:27 compute-0 nova_compute[254092]: 2025-11-25 16:44:27.838 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:44:27 compute-0 nova_compute[254092]: 2025-11-25 16:44:27.841 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:44:27 compute-0 nova_compute[254092]: 2025-11-25 16:44:27.856 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] During sync_power_state the instance has a pending task (unpausing). Skip.
Nov 25 16:44:28 compute-0 nova_compute[254092]: 2025-11-25 16:44:28.517 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:28 compute-0 nova_compute[254092]: 2025-11-25 16:44:28.813 254096 DEBUG oslo_concurrency.lockutils [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Acquiring lock "f0783fc0-46a8-4c51-95ca-79db0d6d849d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:44:28 compute-0 nova_compute[254092]: 2025-11-25 16:44:28.814 254096 DEBUG oslo_concurrency.lockutils [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Lock "f0783fc0-46a8-4c51-95ca-79db0d6d849d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:44:28 compute-0 nova_compute[254092]: 2025-11-25 16:44:28.829 254096 DEBUG nova.compute.manager [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:44:28 compute-0 nova_compute[254092]: 2025-11-25 16:44:28.934 254096 DEBUG oslo_concurrency.lockutils [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:44:28 compute-0 nova_compute[254092]: 2025-11-25 16:44:28.935 254096 DEBUG oslo_concurrency.lockutils [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:44:28 compute-0 nova_compute[254092]: 2025-11-25 16:44:28.942 254096 DEBUG nova.virt.hardware [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:44:28 compute-0 nova_compute[254092]: 2025-11-25 16:44:28.943 254096 INFO nova.compute.claims [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:44:29 compute-0 ceph-mon[74985]: pgmap v1772: 321 pgs: 321 active+clean; 123 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 71 op/s
Nov 25 16:44:29 compute-0 nova_compute[254092]: 2025-11-25 16:44:29.079 254096 DEBUG oslo_concurrency.processutils [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:44:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:44:29 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2716223218' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:44:29 compute-0 nova_compute[254092]: 2025-11-25 16:44:29.635 254096 DEBUG oslo_concurrency.processutils [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:44:29 compute-0 nova_compute[254092]: 2025-11-25 16:44:29.640 254096 DEBUG nova.compute.provider_tree [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:44:29 compute-0 nova_compute[254092]: 2025-11-25 16:44:29.656 254096 DEBUG nova.scheduler.client.report [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:44:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1773: 321 pgs: 321 active+clean; 123 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 71 op/s
Nov 25 16:44:29 compute-0 nova_compute[254092]: 2025-11-25 16:44:29.751 254096 DEBUG oslo_concurrency.lockutils [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.816s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:44:29 compute-0 nova_compute[254092]: 2025-11-25 16:44:29.752 254096 DEBUG nova.compute.manager [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:44:29 compute-0 nova_compute[254092]: 2025-11-25 16:44:29.838 254096 DEBUG nova.compute.manager [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:44:29 compute-0 nova_compute[254092]: 2025-11-25 16:44:29.838 254096 DEBUG nova.network.neutron [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:44:29 compute-0 nova_compute[254092]: 2025-11-25 16:44:29.865 254096 INFO nova.virt.libvirt.driver [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:44:29 compute-0 nova_compute[254092]: 2025-11-25 16:44:29.962 254096 DEBUG nova.compute.manager [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:44:29 compute-0 nova_compute[254092]: 2025-11-25 16:44:29.996 254096 DEBUG nova.policy [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4086d59097134dd6a71a9056c34359e5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8b7977bc30d5465b88d98c64d7c70db7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:44:30 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2716223218' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:44:30 compute-0 nova_compute[254092]: 2025-11-25 16:44:30.102 254096 DEBUG nova.compute.manager [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:44:30 compute-0 nova_compute[254092]: 2025-11-25 16:44:30.103 254096 DEBUG nova.virt.libvirt.driver [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:44:30 compute-0 nova_compute[254092]: 2025-11-25 16:44:30.104 254096 INFO nova.virt.libvirt.driver [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Creating image(s)
Nov 25 16:44:30 compute-0 nova_compute[254092]: 2025-11-25 16:44:30.136 254096 DEBUG nova.storage.rbd_utils [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] rbd image f0783fc0-46a8-4c51-95ca-79db0d6d849d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:44:30 compute-0 nova_compute[254092]: 2025-11-25 16:44:30.158 254096 DEBUG nova.storage.rbd_utils [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] rbd image f0783fc0-46a8-4c51-95ca-79db0d6d849d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:44:30 compute-0 nova_compute[254092]: 2025-11-25 16:44:30.178 254096 DEBUG nova.storage.rbd_utils [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] rbd image f0783fc0-46a8-4c51-95ca-79db0d6d849d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:44:30 compute-0 nova_compute[254092]: 2025-11-25 16:44:30.183 254096 DEBUG oslo_concurrency.processutils [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:44:30 compute-0 nova_compute[254092]: 2025-11-25 16:44:30.255 254096 DEBUG oslo_concurrency.processutils [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:44:30 compute-0 nova_compute[254092]: 2025-11-25 16:44:30.257 254096 DEBUG oslo_concurrency.lockutils [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:44:30 compute-0 nova_compute[254092]: 2025-11-25 16:44:30.257 254096 DEBUG oslo_concurrency.lockutils [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:44:30 compute-0 nova_compute[254092]: 2025-11-25 16:44:30.257 254096 DEBUG oslo_concurrency.lockutils [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:44:30 compute-0 nova_compute[254092]: 2025-11-25 16:44:30.276 254096 DEBUG nova.storage.rbd_utils [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] rbd image f0783fc0-46a8-4c51-95ca-79db0d6d849d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:44:30 compute-0 nova_compute[254092]: 2025-11-25 16:44:30.281 254096 DEBUG oslo_concurrency.processutils [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 f0783fc0-46a8-4c51-95ca-79db0d6d849d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:44:30 compute-0 nova_compute[254092]: 2025-11-25 16:44:30.848 254096 DEBUG oslo_concurrency.processutils [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 f0783fc0-46a8-4c51-95ca-79db0d6d849d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:44:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:44:30 compute-0 nova_compute[254092]: 2025-11-25 16:44:30.908 254096 DEBUG nova.storage.rbd_utils [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] resizing rbd image f0783fc0-46a8-4c51-95ca-79db0d6d849d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:44:30 compute-0 nova_compute[254092]: 2025-11-25 16:44:30.996 254096 DEBUG nova.network.neutron [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Successfully created port: 94d30e8d-55a7-40af-be2f-0ce1a8c10b97 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:44:31 compute-0 nova_compute[254092]: 2025-11-25 16:44:31.005 254096 DEBUG nova.objects.instance [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Lazy-loading 'migration_context' on Instance uuid f0783fc0-46a8-4c51-95ca-79db0d6d849d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:44:31 compute-0 nova_compute[254092]: 2025-11-25 16:44:31.016 254096 DEBUG nova.virt.libvirt.driver [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:44:31 compute-0 nova_compute[254092]: 2025-11-25 16:44:31.017 254096 DEBUG nova.virt.libvirt.driver [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Ensure instance console log exists: /var/lib/nova/instances/f0783fc0-46a8-4c51-95ca-79db0d6d849d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:44:31 compute-0 nova_compute[254092]: 2025-11-25 16:44:31.017 254096 DEBUG oslo_concurrency.lockutils [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:44:31 compute-0 nova_compute[254092]: 2025-11-25 16:44:31.017 254096 DEBUG oslo_concurrency.lockutils [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:44:31 compute-0 nova_compute[254092]: 2025-11-25 16:44:31.018 254096 DEBUG oslo_concurrency.lockutils [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:44:31 compute-0 ceph-mon[74985]: pgmap v1773: 321 pgs: 321 active+clean; 123 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 71 op/s
Nov 25 16:44:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1774: 321 pgs: 321 active+clean; 134 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 323 KiB/s wr, 72 op/s
Nov 25 16:44:31 compute-0 nova_compute[254092]: 2025-11-25 16:44:31.863 254096 DEBUG nova.network.neutron [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Successfully updated port: 94d30e8d-55a7-40af-be2f-0ce1a8c10b97 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:44:31 compute-0 nova_compute[254092]: 2025-11-25 16:44:31.910 254096 DEBUG oslo_concurrency.lockutils [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Acquiring lock "refresh_cache-f0783fc0-46a8-4c51-95ca-79db0d6d849d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:44:31 compute-0 nova_compute[254092]: 2025-11-25 16:44:31.911 254096 DEBUG oslo_concurrency.lockutils [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Acquired lock "refresh_cache-f0783fc0-46a8-4c51-95ca-79db0d6d849d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:44:31 compute-0 nova_compute[254092]: 2025-11-25 16:44:31.911 254096 DEBUG nova.network.neutron [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:44:32 compute-0 nova_compute[254092]: 2025-11-25 16:44:32.101 254096 DEBUG nova.network.neutron [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:44:32 compute-0 ceph-mon[74985]: pgmap v1774: 321 pgs: 321 active+clean; 134 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 323 KiB/s wr, 72 op/s
Nov 25 16:44:32 compute-0 nova_compute[254092]: 2025-11-25 16:44:32.295 254096 DEBUG nova.compute.manager [req-3584ac5b-5211-451b-838c-0ec4a1d72975 req-04dba9da-6c01-4a60-9fd6-401dc8c61c8c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Received event network-changed-94d30e8d-55a7-40af-be2f-0ce1a8c10b97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:44:32 compute-0 nova_compute[254092]: 2025-11-25 16:44:32.295 254096 DEBUG nova.compute.manager [req-3584ac5b-5211-451b-838c-0ec4a1d72975 req-04dba9da-6c01-4a60-9fd6-401dc8c61c8c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Refreshing instance network info cache due to event network-changed-94d30e8d-55a7-40af-be2f-0ce1a8c10b97. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:44:32 compute-0 nova_compute[254092]: 2025-11-25 16:44:32.296 254096 DEBUG oslo_concurrency.lockutils [req-3584ac5b-5211-451b-838c-0ec4a1d72975 req-04dba9da-6c01-4a60-9fd6-401dc8c61c8c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-f0783fc0-46a8-4c51-95ca-79db0d6d849d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:44:32 compute-0 nova_compute[254092]: 2025-11-25 16:44:32.410 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:32 compute-0 nova_compute[254092]: 2025-11-25 16:44:32.804 254096 DEBUG oslo_concurrency.lockutils [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Acquiring lock "98410ff5-26ab-4406-8d1b-063d9e114cf8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:44:32 compute-0 nova_compute[254092]: 2025-11-25 16:44:32.804 254096 DEBUG oslo_concurrency.lockutils [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "98410ff5-26ab-4406-8d1b-063d9e114cf8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:44:32 compute-0 nova_compute[254092]: 2025-11-25 16:44:32.833 254096 DEBUG nova.compute.manager [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:44:32 compute-0 nova_compute[254092]: 2025-11-25 16:44:32.902 254096 DEBUG oslo_concurrency.lockutils [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:44:32 compute-0 nova_compute[254092]: 2025-11-25 16:44:32.903 254096 DEBUG oslo_concurrency.lockutils [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:44:32 compute-0 nova_compute[254092]: 2025-11-25 16:44:32.910 254096 DEBUG nova.virt.hardware [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:44:32 compute-0 nova_compute[254092]: 2025-11-25 16:44:32.910 254096 INFO nova.compute.claims [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:44:33 compute-0 nova_compute[254092]: 2025-11-25 16:44:33.053 254096 DEBUG nova.network.neutron [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Updating instance_info_cache with network_info: [{"id": "94d30e8d-55a7-40af-be2f-0ce1a8c10b97", "address": "fa:16:3e:aa:ad:66", "network": {"id": "06ec9e45-7215-4d71-bb7c-e65059e97c36", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-214544278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8b7977bc30d5465b88d98c64d7c70db7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94d30e8d-55", "ovs_interfaceid": "94d30e8d-55a7-40af-be2f-0ce1a8c10b97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:44:33 compute-0 nova_compute[254092]: 2025-11-25 16:44:33.069 254096 DEBUG oslo_concurrency.processutils [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:44:33 compute-0 nova_compute[254092]: 2025-11-25 16:44:33.099 254096 DEBUG oslo_concurrency.lockutils [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Releasing lock "refresh_cache-f0783fc0-46a8-4c51-95ca-79db0d6d849d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:44:33 compute-0 nova_compute[254092]: 2025-11-25 16:44:33.100 254096 DEBUG nova.compute.manager [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Instance network_info: |[{"id": "94d30e8d-55a7-40af-be2f-0ce1a8c10b97", "address": "fa:16:3e:aa:ad:66", "network": {"id": "06ec9e45-7215-4d71-bb7c-e65059e97c36", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-214544278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8b7977bc30d5465b88d98c64d7c70db7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94d30e8d-55", "ovs_interfaceid": "94d30e8d-55a7-40af-be2f-0ce1a8c10b97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:44:33 compute-0 nova_compute[254092]: 2025-11-25 16:44:33.101 254096 DEBUG oslo_concurrency.lockutils [req-3584ac5b-5211-451b-838c-0ec4a1d72975 req-04dba9da-6c01-4a60-9fd6-401dc8c61c8c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-f0783fc0-46a8-4c51-95ca-79db0d6d849d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:44:33 compute-0 nova_compute[254092]: 2025-11-25 16:44:33.101 254096 DEBUG nova.network.neutron [req-3584ac5b-5211-451b-838c-0ec4a1d72975 req-04dba9da-6c01-4a60-9fd6-401dc8c61c8c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Refreshing network info cache for port 94d30e8d-55a7-40af-be2f-0ce1a8c10b97 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:44:33 compute-0 nova_compute[254092]: 2025-11-25 16:44:33.104 254096 DEBUG nova.virt.libvirt.driver [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Start _get_guest_xml network_info=[{"id": "94d30e8d-55a7-40af-be2f-0ce1a8c10b97", "address": "fa:16:3e:aa:ad:66", "network": {"id": "06ec9e45-7215-4d71-bb7c-e65059e97c36", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-214544278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8b7977bc30d5465b88d98c64d7c70db7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94d30e8d-55", "ovs_interfaceid": "94d30e8d-55a7-40af-be2f-0ce1a8c10b97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:44:33 compute-0 nova_compute[254092]: 2025-11-25 16:44:33.111 254096 WARNING nova.virt.libvirt.driver [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:44:33 compute-0 nova_compute[254092]: 2025-11-25 16:44:33.120 254096 DEBUG nova.virt.libvirt.host [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:44:33 compute-0 nova_compute[254092]: 2025-11-25 16:44:33.121 254096 DEBUG nova.virt.libvirt.host [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:44:33 compute-0 nova_compute[254092]: 2025-11-25 16:44:33.124 254096 DEBUG nova.virt.libvirt.host [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:44:33 compute-0 nova_compute[254092]: 2025-11-25 16:44:33.125 254096 DEBUG nova.virt.libvirt.host [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:44:33 compute-0 nova_compute[254092]: 2025-11-25 16:44:33.125 254096 DEBUG nova.virt.libvirt.driver [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:44:33 compute-0 nova_compute[254092]: 2025-11-25 16:44:33.126 254096 DEBUG nova.virt.hardware [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:44:33 compute-0 nova_compute[254092]: 2025-11-25 16:44:33.126 254096 DEBUG nova.virt.hardware [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:44:33 compute-0 nova_compute[254092]: 2025-11-25 16:44:33.126 254096 DEBUG nova.virt.hardware [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:44:33 compute-0 nova_compute[254092]: 2025-11-25 16:44:33.127 254096 DEBUG nova.virt.hardware [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:44:33 compute-0 nova_compute[254092]: 2025-11-25 16:44:33.127 254096 DEBUG nova.virt.hardware [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:44:33 compute-0 nova_compute[254092]: 2025-11-25 16:44:33.127 254096 DEBUG nova.virt.hardware [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:44:33 compute-0 nova_compute[254092]: 2025-11-25 16:44:33.127 254096 DEBUG nova.virt.hardware [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:44:33 compute-0 nova_compute[254092]: 2025-11-25 16:44:33.128 254096 DEBUG nova.virt.hardware [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:44:33 compute-0 nova_compute[254092]: 2025-11-25 16:44:33.129 254096 DEBUG nova.virt.hardware [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:44:33 compute-0 nova_compute[254092]: 2025-11-25 16:44:33.129 254096 DEBUG nova.virt.hardware [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:44:33 compute-0 nova_compute[254092]: 2025-11-25 16:44:33.130 254096 DEBUG nova.virt.hardware [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:44:33 compute-0 nova_compute[254092]: 2025-11-25 16:44:33.134 254096 DEBUG oslo_concurrency.processutils [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:44:33 compute-0 nova_compute[254092]: 2025-11-25 16:44:33.333 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:33 compute-0 ovn_controller[153477]: 2025-11-25T16:44:33Z|00081|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ff:a0:1c 10.100.0.11
Nov 25 16:44:33 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:44:33 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2190681478' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:44:33 compute-0 nova_compute[254092]: 2025-11-25 16:44:33.621 254096 DEBUG oslo_concurrency.processutils [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:44:33 compute-0 nova_compute[254092]: 2025-11-25 16:44:33.626 254096 DEBUG nova.compute.provider_tree [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:44:33 compute-0 nova_compute[254092]: 2025-11-25 16:44:33.643 254096 DEBUG nova.scheduler.client.report [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:44:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1775: 321 pgs: 321 active+clean; 134 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 341 B/s rd, 323 KiB/s wr, 0 op/s
Nov 25 16:44:33 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:44:33 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/122800180' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:44:33 compute-0 nova_compute[254092]: 2025-11-25 16:44:33.840 254096 DEBUG oslo_concurrency.processutils [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.706s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:44:33 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2190681478' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:44:34 compute-0 nova_compute[254092]: 2025-11-25 16:44:34.311 254096 DEBUG nova.storage.rbd_utils [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] rbd image f0783fc0-46a8-4c51-95ca-79db0d6d849d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:44:34 compute-0 nova_compute[254092]: 2025-11-25 16:44:34.315 254096 DEBUG oslo_concurrency.processutils [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:44:34 compute-0 nova_compute[254092]: 2025-11-25 16:44:34.348 254096 DEBUG oslo_concurrency.lockutils [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.445s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:44:34 compute-0 nova_compute[254092]: 2025-11-25 16:44:34.349 254096 DEBUG nova.compute.manager [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:44:34 compute-0 nova_compute[254092]: 2025-11-25 16:44:34.580 254096 DEBUG nova.compute.manager [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:44:34 compute-0 nova_compute[254092]: 2025-11-25 16:44:34.580 254096 DEBUG nova.network.neutron [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:44:34 compute-0 nova_compute[254092]: 2025-11-25 16:44:34.715 254096 INFO nova.virt.libvirt.driver [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:44:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:44:34 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1521869160' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:44:34 compute-0 nova_compute[254092]: 2025-11-25 16:44:34.733 254096 DEBUG oslo_concurrency.processutils [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:44:34 compute-0 nova_compute[254092]: 2025-11-25 16:44:34.734 254096 DEBUG nova.virt.libvirt.vif [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:44:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-1255165271',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-1255165271',id=77,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8b7977bc30d5465b88d98c64d7c70db7',ramdisk_id='',reservation_id='r-dmevmba6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-406281948',owner_user_name='tempest-ServerTagsTestJSON-406281948-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:44:30Z,user_data=None,user_id='4086d59097134dd6a71a9056c34359e5',uuid=f0783fc0-46a8-4c51-95ca-79db0d6d849d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "94d30e8d-55a7-40af-be2f-0ce1a8c10b97", "address": "fa:16:3e:aa:ad:66", "network": {"id": "06ec9e45-7215-4d71-bb7c-e65059e97c36", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-214544278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8b7977bc30d5465b88d98c64d7c70db7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94d30e8d-55", "ovs_interfaceid": "94d30e8d-55a7-40af-be2f-0ce1a8c10b97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:44:34 compute-0 nova_compute[254092]: 2025-11-25 16:44:34.735 254096 DEBUG nova.network.os_vif_util [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Converting VIF {"id": "94d30e8d-55a7-40af-be2f-0ce1a8c10b97", "address": "fa:16:3e:aa:ad:66", "network": {"id": "06ec9e45-7215-4d71-bb7c-e65059e97c36", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-214544278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8b7977bc30d5465b88d98c64d7c70db7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94d30e8d-55", "ovs_interfaceid": "94d30e8d-55a7-40af-be2f-0ce1a8c10b97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:44:34 compute-0 nova_compute[254092]: 2025-11-25 16:44:34.736 254096 DEBUG nova.network.os_vif_util [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:ad:66,bridge_name='br-int',has_traffic_filtering=True,id=94d30e8d-55a7-40af-be2f-0ce1a8c10b97,network=Network(06ec9e45-7215-4d71-bb7c-e65059e97c36),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94d30e8d-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:44:34 compute-0 nova_compute[254092]: 2025-11-25 16:44:34.736 254096 DEBUG nova.objects.instance [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Lazy-loading 'pci_devices' on Instance uuid f0783fc0-46a8-4c51-95ca-79db0d6d849d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:44:34 compute-0 nova_compute[254092]: 2025-11-25 16:44:34.749 254096 DEBUG nova.virt.libvirt.driver [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:44:34 compute-0 nova_compute[254092]:   <uuid>f0783fc0-46a8-4c51-95ca-79db0d6d849d</uuid>
Nov 25 16:44:34 compute-0 nova_compute[254092]:   <name>instance-0000004d</name>
Nov 25 16:44:34 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:44:34 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:44:34 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:44:34 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:       <nova:name>tempest-ServerTagsTestJSON-server-1255165271</nova:name>
Nov 25 16:44:34 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:44:33</nova:creationTime>
Nov 25 16:44:34 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:44:34 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:44:34 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:44:34 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:44:34 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:44:34 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:44:34 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:44:34 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:44:34 compute-0 nova_compute[254092]:         <nova:user uuid="4086d59097134dd6a71a9056c34359e5">tempest-ServerTagsTestJSON-406281948-project-member</nova:user>
Nov 25 16:44:34 compute-0 nova_compute[254092]:         <nova:project uuid="8b7977bc30d5465b88d98c64d7c70db7">tempest-ServerTagsTestJSON-406281948</nova:project>
Nov 25 16:44:34 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:44:34 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:44:34 compute-0 nova_compute[254092]:         <nova:port uuid="94d30e8d-55a7-40af-be2f-0ce1a8c10b97">
Nov 25 16:44:34 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:44:34 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:44:34 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:44:34 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:44:34 compute-0 nova_compute[254092]:     <system>
Nov 25 16:44:34 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:44:34 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:44:34 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:44:34 compute-0 nova_compute[254092]:       <entry name="serial">f0783fc0-46a8-4c51-95ca-79db0d6d849d</entry>
Nov 25 16:44:34 compute-0 nova_compute[254092]:       <entry name="uuid">f0783fc0-46a8-4c51-95ca-79db0d6d849d</entry>
Nov 25 16:44:34 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     </system>
Nov 25 16:44:34 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:44:34 compute-0 nova_compute[254092]:   <os>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:   </os>
Nov 25 16:44:34 compute-0 nova_compute[254092]:   <features>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:   </features>
Nov 25 16:44:34 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:44:34 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:44:34 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:44:34 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:44:34 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:44:34 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/f0783fc0-46a8-4c51-95ca-79db0d6d849d_disk">
Nov 25 16:44:34 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:       </source>
Nov 25 16:44:34 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:44:34 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:44:34 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:44:34 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/f0783fc0-46a8-4c51-95ca-79db0d6d849d_disk.config">
Nov 25 16:44:34 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:       </source>
Nov 25 16:44:34 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:44:34 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:44:34 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:44:34 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:aa:ad:66"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:       <target dev="tap94d30e8d-55"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:44:34 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/f0783fc0-46a8-4c51-95ca-79db0d6d849d/console.log" append="off"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     <video>
Nov 25 16:44:34 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     </video>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:44:34 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:44:34 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:44:34 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:44:34 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:44:34 compute-0 nova_compute[254092]: </domain>
Nov 25 16:44:34 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:44:34 compute-0 nova_compute[254092]: 2025-11-25 16:44:34.750 254096 DEBUG nova.compute.manager [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Preparing to wait for external event network-vif-plugged-94d30e8d-55a7-40af-be2f-0ce1a8c10b97 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:44:34 compute-0 nova_compute[254092]: 2025-11-25 16:44:34.750 254096 DEBUG oslo_concurrency.lockutils [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Acquiring lock "f0783fc0-46a8-4c51-95ca-79db0d6d849d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:44:34 compute-0 nova_compute[254092]: 2025-11-25 16:44:34.751 254096 DEBUG oslo_concurrency.lockutils [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Lock "f0783fc0-46a8-4c51-95ca-79db0d6d849d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:44:34 compute-0 nova_compute[254092]: 2025-11-25 16:44:34.751 254096 DEBUG oslo_concurrency.lockutils [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Lock "f0783fc0-46a8-4c51-95ca-79db0d6d849d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:44:34 compute-0 nova_compute[254092]: 2025-11-25 16:44:34.751 254096 DEBUG nova.virt.libvirt.vif [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:44:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-1255165271',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-1255165271',id=77,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8b7977bc30d5465b88d98c64d7c70db7',ramdisk_id='',reservation_id='r-dmevmba6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-406281948',owner_user_name='tempest-ServerTagsTestJSON-406281948-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:44:30Z,user_data=None,user_id='4086d59097134dd6a71a9056c34359e5',uuid=f0783fc0-46a8-4c51-95ca-79db0d6d849d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "94d30e8d-55a7-40af-be2f-0ce1a8c10b97", "address": "fa:16:3e:aa:ad:66", "network": {"id": "06ec9e45-7215-4d71-bb7c-e65059e97c36", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-214544278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8b7977bc30d5465b88d98c64d7c70db7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94d30e8d-55", "ovs_interfaceid": "94d30e8d-55a7-40af-be2f-0ce1a8c10b97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:44:34 compute-0 nova_compute[254092]: 2025-11-25 16:44:34.752 254096 DEBUG nova.network.os_vif_util [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Converting VIF {"id": "94d30e8d-55a7-40af-be2f-0ce1a8c10b97", "address": "fa:16:3e:aa:ad:66", "network": {"id": "06ec9e45-7215-4d71-bb7c-e65059e97c36", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-214544278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8b7977bc30d5465b88d98c64d7c70db7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94d30e8d-55", "ovs_interfaceid": "94d30e8d-55a7-40af-be2f-0ce1a8c10b97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:44:34 compute-0 nova_compute[254092]: 2025-11-25 16:44:34.752 254096 DEBUG nova.network.os_vif_util [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:ad:66,bridge_name='br-int',has_traffic_filtering=True,id=94d30e8d-55a7-40af-be2f-0ce1a8c10b97,network=Network(06ec9e45-7215-4d71-bb7c-e65059e97c36),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94d30e8d-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:44:34 compute-0 nova_compute[254092]: 2025-11-25 16:44:34.752 254096 DEBUG os_vif [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:ad:66,bridge_name='br-int',has_traffic_filtering=True,id=94d30e8d-55a7-40af-be2f-0ce1a8c10b97,network=Network(06ec9e45-7215-4d71-bb7c-e65059e97c36),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94d30e8d-55') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:44:34 compute-0 nova_compute[254092]: 2025-11-25 16:44:34.753 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:34 compute-0 nova_compute[254092]: 2025-11-25 16:44:34.753 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:44:34 compute-0 nova_compute[254092]: 2025-11-25 16:44:34.754 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:44:34 compute-0 nova_compute[254092]: 2025-11-25 16:44:34.756 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:34 compute-0 nova_compute[254092]: 2025-11-25 16:44:34.756 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap94d30e8d-55, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:44:34 compute-0 nova_compute[254092]: 2025-11-25 16:44:34.757 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap94d30e8d-55, col_values=(('external_ids', {'iface-id': '94d30e8d-55a7-40af-be2f-0ce1a8c10b97', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:aa:ad:66', 'vm-uuid': 'f0783fc0-46a8-4c51-95ca-79db0d6d849d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:44:34 compute-0 NetworkManager[48891]: <info>  [1764089074.7595] manager: (tap94d30e8d-55): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/321)
Nov 25 16:44:34 compute-0 nova_compute[254092]: 2025-11-25 16:44:34.759 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:34 compute-0 nova_compute[254092]: 2025-11-25 16:44:34.763 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:44:34 compute-0 nova_compute[254092]: 2025-11-25 16:44:34.764 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:34 compute-0 nova_compute[254092]: 2025-11-25 16:44:34.766 254096 INFO os_vif [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:ad:66,bridge_name='br-int',has_traffic_filtering=True,id=94d30e8d-55a7-40af-be2f-0ce1a8c10b97,network=Network(06ec9e45-7215-4d71-bb7c-e65059e97c36),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94d30e8d-55')
Nov 25 16:44:34 compute-0 nova_compute[254092]: 2025-11-25 16:44:34.864 254096 DEBUG nova.compute.manager [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:44:34 compute-0 nova_compute[254092]: 2025-11-25 16:44:34.934 254096 DEBUG nova.virt.libvirt.driver [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:44:34 compute-0 nova_compute[254092]: 2025-11-25 16:44:34.935 254096 DEBUG nova.virt.libvirt.driver [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:44:34 compute-0 nova_compute[254092]: 2025-11-25 16:44:34.935 254096 DEBUG nova.virt.libvirt.driver [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] No VIF found with MAC fa:16:3e:aa:ad:66, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:44:34 compute-0 nova_compute[254092]: 2025-11-25 16:44:34.936 254096 INFO nova.virt.libvirt.driver [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Using config drive
Nov 25 16:44:35 compute-0 nova_compute[254092]: 2025-11-25 16:44:35.024 254096 DEBUG nova.storage.rbd_utils [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] rbd image f0783fc0-46a8-4c51-95ca-79db0d6d849d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:44:35 compute-0 nova_compute[254092]: 2025-11-25 16:44:35.029 254096 DEBUG nova.policy [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7bd4800c25cd462b9365649e599d0a0e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd4964e211a6d4699ab499f7cadee8a8d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:44:35 compute-0 ceph-mon[74985]: pgmap v1775: 321 pgs: 321 active+clean; 134 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 341 B/s rd, 323 KiB/s wr, 0 op/s
Nov 25 16:44:35 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/122800180' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:44:35 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1521869160' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:44:35 compute-0 nova_compute[254092]: 2025-11-25 16:44:35.202 254096 DEBUG nova.compute.manager [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:44:35 compute-0 nova_compute[254092]: 2025-11-25 16:44:35.203 254096 DEBUG nova.virt.libvirt.driver [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:44:35 compute-0 nova_compute[254092]: 2025-11-25 16:44:35.203 254096 INFO nova.virt.libvirt.driver [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Creating image(s)
Nov 25 16:44:35 compute-0 nova_compute[254092]: 2025-11-25 16:44:35.279 254096 DEBUG nova.storage.rbd_utils [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] rbd image 98410ff5-26ab-4406-8d1b-063d9e114cf8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:44:35 compute-0 nova_compute[254092]: 2025-11-25 16:44:35.301 254096 DEBUG nova.storage.rbd_utils [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] rbd image 98410ff5-26ab-4406-8d1b-063d9e114cf8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:44:35 compute-0 nova_compute[254092]: 2025-11-25 16:44:35.324 254096 DEBUG nova.storage.rbd_utils [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] rbd image 98410ff5-26ab-4406-8d1b-063d9e114cf8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:44:35 compute-0 nova_compute[254092]: 2025-11-25 16:44:35.328 254096 DEBUG oslo_concurrency.processutils [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:44:35 compute-0 nova_compute[254092]: 2025-11-25 16:44:35.429 254096 DEBUG oslo_concurrency.processutils [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:44:35 compute-0 nova_compute[254092]: 2025-11-25 16:44:35.430 254096 DEBUG oslo_concurrency.lockutils [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:44:35 compute-0 nova_compute[254092]: 2025-11-25 16:44:35.430 254096 DEBUG oslo_concurrency.lockutils [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:44:35 compute-0 nova_compute[254092]: 2025-11-25 16:44:35.431 254096 DEBUG oslo_concurrency.lockutils [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:44:35 compute-0 nova_compute[254092]: 2025-11-25 16:44:35.451 254096 DEBUG nova.storage.rbd_utils [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] rbd image 98410ff5-26ab-4406-8d1b-063d9e114cf8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:44:35 compute-0 nova_compute[254092]: 2025-11-25 16:44:35.455 254096 DEBUG oslo_concurrency.processutils [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 98410ff5-26ab-4406-8d1b-063d9e114cf8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:44:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1776: 321 pgs: 321 active+clean; 169 MiB data, 640 MiB used, 59 GiB / 60 GiB avail; 374 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Nov 25 16:44:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:44:35 compute-0 nova_compute[254092]: 2025-11-25 16:44:35.960 254096 DEBUG nova.network.neutron [req-3584ac5b-5211-451b-838c-0ec4a1d72975 req-04dba9da-6c01-4a60-9fd6-401dc8c61c8c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Updated VIF entry in instance network info cache for port 94d30e8d-55a7-40af-be2f-0ce1a8c10b97. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:44:35 compute-0 nova_compute[254092]: 2025-11-25 16:44:35.961 254096 DEBUG nova.network.neutron [req-3584ac5b-5211-451b-838c-0ec4a1d72975 req-04dba9da-6c01-4a60-9fd6-401dc8c61c8c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Updating instance_info_cache with network_info: [{"id": "94d30e8d-55a7-40af-be2f-0ce1a8c10b97", "address": "fa:16:3e:aa:ad:66", "network": {"id": "06ec9e45-7215-4d71-bb7c-e65059e97c36", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-214544278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8b7977bc30d5465b88d98c64d7c70db7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94d30e8d-55", "ovs_interfaceid": "94d30e8d-55a7-40af-be2f-0ce1a8c10b97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:44:35 compute-0 nova_compute[254092]: 2025-11-25 16:44:35.976 254096 DEBUG oslo_concurrency.lockutils [req-3584ac5b-5211-451b-838c-0ec4a1d72975 req-04dba9da-6c01-4a60-9fd6-401dc8c61c8c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-f0783fc0-46a8-4c51-95ca-79db0d6d849d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:44:36 compute-0 nova_compute[254092]: 2025-11-25 16:44:36.173 254096 INFO nova.virt.libvirt.driver [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Creating config drive at /var/lib/nova/instances/f0783fc0-46a8-4c51-95ca-79db0d6d849d/disk.config
Nov 25 16:44:36 compute-0 nova_compute[254092]: 2025-11-25 16:44:36.178 254096 DEBUG oslo_concurrency.processutils [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f0783fc0-46a8-4c51-95ca-79db0d6d849d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3zwjovyu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:44:36 compute-0 ceph-mon[74985]: pgmap v1776: 321 pgs: 321 active+clean; 169 MiB data, 640 MiB used, 59 GiB / 60 GiB avail; 374 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Nov 25 16:44:36 compute-0 nova_compute[254092]: 2025-11-25 16:44:36.326 254096 DEBUG oslo_concurrency.processutils [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f0783fc0-46a8-4c51-95ca-79db0d6d849d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3zwjovyu" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:44:36 compute-0 nova_compute[254092]: 2025-11-25 16:44:36.418 254096 DEBUG nova.storage.rbd_utils [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] rbd image f0783fc0-46a8-4c51-95ca-79db0d6d849d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:44:36 compute-0 nova_compute[254092]: 2025-11-25 16:44:36.424 254096 DEBUG oslo_concurrency.processutils [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f0783fc0-46a8-4c51-95ca-79db0d6d849d/disk.config f0783fc0-46a8-4c51-95ca-79db0d6d849d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:44:36 compute-0 nova_compute[254092]: 2025-11-25 16:44:36.684 254096 DEBUG oslo_concurrency.processutils [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 98410ff5-26ab-4406-8d1b-063d9e114cf8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.229s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:44:36 compute-0 nova_compute[254092]: 2025-11-25 16:44:36.746 254096 DEBUG nova.network.neutron [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Successfully created port: 41fd5f5b-445b-4eed-adf5-045ddb262021 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:44:36 compute-0 nova_compute[254092]: 2025-11-25 16:44:36.755 254096 DEBUG nova.storage.rbd_utils [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] resizing rbd image 98410ff5-26ab-4406-8d1b-063d9e114cf8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:44:36 compute-0 nova_compute[254092]: 2025-11-25 16:44:36.895 254096 DEBUG oslo_concurrency.processutils [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f0783fc0-46a8-4c51-95ca-79db0d6d849d/disk.config f0783fc0-46a8-4c51-95ca-79db0d6d849d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:44:36 compute-0 nova_compute[254092]: 2025-11-25 16:44:36.895 254096 INFO nova.virt.libvirt.driver [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Deleting local config drive /var/lib/nova/instances/f0783fc0-46a8-4c51-95ca-79db0d6d849d/disk.config because it was imported into RBD.
Nov 25 16:44:36 compute-0 kernel: tap94d30e8d-55: entered promiscuous mode
Nov 25 16:44:36 compute-0 NetworkManager[48891]: <info>  [1764089076.9463] manager: (tap94d30e8d-55): new Tun device (/org/freedesktop/NetworkManager/Devices/322)
Nov 25 16:44:36 compute-0 nova_compute[254092]: 2025-11-25 16:44:36.948 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:36 compute-0 ovn_controller[153477]: 2025-11-25T16:44:36Z|00740|binding|INFO|Claiming lport 94d30e8d-55a7-40af-be2f-0ce1a8c10b97 for this chassis.
Nov 25 16:44:36 compute-0 ovn_controller[153477]: 2025-11-25T16:44:36Z|00741|binding|INFO|94d30e8d-55a7-40af-be2f-0ce1a8c10b97: Claiming fa:16:3e:aa:ad:66 10.100.0.7
Nov 25 16:44:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:36.960 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:ad:66 10.100.0.7'], port_security=['fa:16:3e:aa:ad:66 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'f0783fc0-46a8-4c51-95ca-79db0d6d849d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-06ec9e45-7215-4d71-bb7c-e65059e97c36', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8b7977bc30d5465b88d98c64d7c70db7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e299ae9f-54d0-4e42-b065-cef8e69869a2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7b47f788-f336-4085-985d-b8073abf30d6, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=94d30e8d-55a7-40af-be2f-0ce1a8c10b97) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:44:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:36.963 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 94d30e8d-55a7-40af-be2f-0ce1a8c10b97 in datapath 06ec9e45-7215-4d71-bb7c-e65059e97c36 bound to our chassis
Nov 25 16:44:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:36.965 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 06ec9e45-7215-4d71-bb7c-e65059e97c36
Nov 25 16:44:36 compute-0 ovn_controller[153477]: 2025-11-25T16:44:36Z|00742|binding|INFO|Setting lport 94d30e8d-55a7-40af-be2f-0ce1a8c10b97 ovn-installed in OVS
Nov 25 16:44:36 compute-0 ovn_controller[153477]: 2025-11-25T16:44:36Z|00743|binding|INFO|Setting lport 94d30e8d-55a7-40af-be2f-0ce1a8c10b97 up in Southbound
Nov 25 16:44:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:36.977 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[81c26432-3a46-4a6c-9737-b1f769248c80]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:36.978 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap06ec9e45-71 in ovnmeta-06ec9e45-7215-4d71-bb7c-e65059e97c36 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:44:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:36.980 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap06ec9e45-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:44:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:36.980 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[13ec2969-6870-4477-8a17-b1b07529564f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:36.981 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b9a56500-b0fb-4028-a156-c487803f0113]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:36 compute-0 systemd-udevd[335373]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:44:36 compute-0 systemd-machined[216343]: New machine qemu-92-instance-0000004d.
Nov 25 16:44:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:36.995 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[40c2af35-2293-41b9-bdc1-8bef7531e57f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:37 compute-0 systemd[1]: Started Virtual Machine qemu-92-instance-0000004d.
Nov 25 16:44:37 compute-0 NetworkManager[48891]: <info>  [1764089077.0108] device (tap94d30e8d-55): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:44:37 compute-0 NetworkManager[48891]: <info>  [1764089077.0116] device (tap94d30e8d-55): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:44:37 compute-0 nova_compute[254092]: 2025-11-25 16:44:37.012 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:37 compute-0 nova_compute[254092]: 2025-11-25 16:44:37.022 254096 DEBUG nova.objects.instance [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lazy-loading 'migration_context' on Instance uuid 98410ff5-26ab-4406-8d1b-063d9e114cf8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:44:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:37.024 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cea53f3e-f734-4d0b-be41-86e1f097bf7f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:37 compute-0 nova_compute[254092]: 2025-11-25 16:44:37.036 254096 DEBUG nova.virt.libvirt.driver [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:44:37 compute-0 nova_compute[254092]: 2025-11-25 16:44:37.037 254096 DEBUG nova.virt.libvirt.driver [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Ensure instance console log exists: /var/lib/nova/instances/98410ff5-26ab-4406-8d1b-063d9e114cf8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:44:37 compute-0 nova_compute[254092]: 2025-11-25 16:44:37.037 254096 DEBUG oslo_concurrency.lockutils [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:44:37 compute-0 nova_compute[254092]: 2025-11-25 16:44:37.037 254096 DEBUG oslo_concurrency.lockutils [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:44:37 compute-0 nova_compute[254092]: 2025-11-25 16:44:37.037 254096 DEBUG oslo_concurrency.lockutils [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:44:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:37.056 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[302a0bae-eb4c-43ef-9f7d-30d5d87f8fa2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:37 compute-0 NetworkManager[48891]: <info>  [1764089077.0621] manager: (tap06ec9e45-70): new Veth device (/org/freedesktop/NetworkManager/Devices/323)
Nov 25 16:44:37 compute-0 systemd-udevd[335379]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:44:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:37.062 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0d259539-30b8-4e9c-b66d-7c4790f73c70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:37.095 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d9ccbf18-6ae2-4d3a-a99c-a1200d26465f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:37.099 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a30aff40-b007-4139-87fa-68b85f122338]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:37 compute-0 NetworkManager[48891]: <info>  [1764089077.1288] device (tap06ec9e45-70): carrier: link connected
Nov 25 16:44:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:37.136 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[b3ef8495-d37a-4e35-bd23-9749ce1b2ea0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:37.158 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9f91b5a3-1c7b-40d0-8aa3-a761242111a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap06ec9e45-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:f2:46'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 219], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 553471, 'reachable_time': 28841, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335407, 'error': None, 'target': 'ovnmeta-06ec9e45-7215-4d71-bb7c-e65059e97c36', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:37.181 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[10867522-f196-4713-92ab-45723a61d128]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe64:f246'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 553471, 'tstamp': 553471}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335408, 'error': None, 'target': 'ovnmeta-06ec9e45-7215-4d71-bb7c-e65059e97c36', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:37.198 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[aca90a3d-e519-4334-9d1e-ac6ef6cf7643]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap06ec9e45-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:f2:46'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 219], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 553471, 'reachable_time': 28841, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 335409, 'error': None, 'target': 'ovnmeta-06ec9e45-7215-4d71-bb7c-e65059e97c36', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:37.234 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[136ae67c-4681-4af9-bc78-434dbbea4c8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:37.325 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2bf06735-5119-4c78-ad52-05e4ef3f2fab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:37.327 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap06ec9e45-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:44:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:37.327 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:44:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:37.328 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap06ec9e45-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:44:37 compute-0 nova_compute[254092]: 2025-11-25 16:44:37.331 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:37 compute-0 kernel: tap06ec9e45-70: entered promiscuous mode
Nov 25 16:44:37 compute-0 NetworkManager[48891]: <info>  [1764089077.3323] manager: (tap06ec9e45-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/324)
Nov 25 16:44:37 compute-0 nova_compute[254092]: 2025-11-25 16:44:37.335 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:37.336 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap06ec9e45-70, col_values=(('external_ids', {'iface-id': '10c1befa-03e3-4d22-ad11-5664c5f5c4d6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:44:37 compute-0 nova_compute[254092]: 2025-11-25 16:44:37.338 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:37 compute-0 ovn_controller[153477]: 2025-11-25T16:44:37Z|00744|binding|INFO|Releasing lport 10c1befa-03e3-4d22-ad11-5664c5f5c4d6 from this chassis (sb_readonly=0)
Nov 25 16:44:37 compute-0 nova_compute[254092]: 2025-11-25 16:44:37.341 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:37.341 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/06ec9e45-7215-4d71-bb7c-e65059e97c36.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/06ec9e45-7215-4d71-bb7c-e65059e97c36.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:44:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:37.343 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[81de193f-8c02-4f97-b12b-02095ddffe44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:37.344 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:44:37 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:44:37 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:44:37 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-06ec9e45-7215-4d71-bb7c-e65059e97c36
Nov 25 16:44:37 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:44:37 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:44:37 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:44:37 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/06ec9e45-7215-4d71-bb7c-e65059e97c36.pid.haproxy
Nov 25 16:44:37 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:44:37 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:44:37 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:44:37 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:44:37 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:44:37 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:44:37 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:44:37 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:44:37 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:44:37 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:44:37 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:44:37 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:44:37 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:44:37 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:44:37 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:44:37 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:44:37 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:44:37 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:44:37 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:44:37 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:44:37 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 06ec9e45-7215-4d71-bb7c-e65059e97c36
Nov 25 16:44:37 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:44:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:37.346 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-06ec9e45-7215-4d71-bb7c-e65059e97c36', 'env', 'PROCESS_TAG=haproxy-06ec9e45-7215-4d71-bb7c-e65059e97c36', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/06ec9e45-7215-4d71-bb7c-e65059e97c36.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:44:37 compute-0 nova_compute[254092]: 2025-11-25 16:44:37.363 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:37 compute-0 nova_compute[254092]: 2025-11-25 16:44:37.496 254096 DEBUG nova.network.neutron [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Successfully updated port: 41fd5f5b-445b-4eed-adf5-045ddb262021 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:44:37 compute-0 nova_compute[254092]: 2025-11-25 16:44:37.513 254096 DEBUG oslo_concurrency.lockutils [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Acquiring lock "refresh_cache-98410ff5-26ab-4406-8d1b-063d9e114cf8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:44:37 compute-0 nova_compute[254092]: 2025-11-25 16:44:37.514 254096 DEBUG oslo_concurrency.lockutils [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Acquired lock "refresh_cache-98410ff5-26ab-4406-8d1b-063d9e114cf8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:44:37 compute-0 nova_compute[254092]: 2025-11-25 16:44:37.514 254096 DEBUG nova.network.neutron [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:44:37 compute-0 nova_compute[254092]: 2025-11-25 16:44:37.517 254096 DEBUG nova.compute.manager [req-8ebb8d02-a6e8-4bf7-9d71-6b458a4bcdf9 req-eecb8938-3ec9-4461-b5b7-472b1f4246ee a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Received event network-vif-plugged-94d30e8d-55a7-40af-be2f-0ce1a8c10b97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:44:37 compute-0 nova_compute[254092]: 2025-11-25 16:44:37.518 254096 DEBUG oslo_concurrency.lockutils [req-8ebb8d02-a6e8-4bf7-9d71-6b458a4bcdf9 req-eecb8938-3ec9-4461-b5b7-472b1f4246ee a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f0783fc0-46a8-4c51-95ca-79db0d6d849d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:44:37 compute-0 nova_compute[254092]: 2025-11-25 16:44:37.518 254096 DEBUG oslo_concurrency.lockutils [req-8ebb8d02-a6e8-4bf7-9d71-6b458a4bcdf9 req-eecb8938-3ec9-4461-b5b7-472b1f4246ee a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0783fc0-46a8-4c51-95ca-79db0d6d849d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:44:37 compute-0 nova_compute[254092]: 2025-11-25 16:44:37.518 254096 DEBUG oslo_concurrency.lockutils [req-8ebb8d02-a6e8-4bf7-9d71-6b458a4bcdf9 req-eecb8938-3ec9-4461-b5b7-472b1f4246ee a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0783fc0-46a8-4c51-95ca-79db0d6d849d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:44:37 compute-0 nova_compute[254092]: 2025-11-25 16:44:37.518 254096 DEBUG nova.compute.manager [req-8ebb8d02-a6e8-4bf7-9d71-6b458a4bcdf9 req-eecb8938-3ec9-4461-b5b7-472b1f4246ee a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Processing event network-vif-plugged-94d30e8d-55a7-40af-be2f-0ce1a8c10b97 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:44:37 compute-0 nova_compute[254092]: 2025-11-25 16:44:37.671 254096 DEBUG nova.compute.manager [req-2ac16688-0dcc-4337-8603-7b267dd8dd76 req-f4201910-ce18-4682-9da3-72a34480561f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Received event network-changed-41fd5f5b-445b-4eed-adf5-045ddb262021 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:44:37 compute-0 nova_compute[254092]: 2025-11-25 16:44:37.672 254096 DEBUG nova.compute.manager [req-2ac16688-0dcc-4337-8603-7b267dd8dd76 req-f4201910-ce18-4682-9da3-72a34480561f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Refreshing instance network info cache due to event network-changed-41fd5f5b-445b-4eed-adf5-045ddb262021. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:44:37 compute-0 nova_compute[254092]: 2025-11-25 16:44:37.672 254096 DEBUG oslo_concurrency.lockutils [req-2ac16688-0dcc-4337-8603-7b267dd8dd76 req-f4201910-ce18-4682-9da3-72a34480561f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-98410ff5-26ab-4406-8d1b-063d9e114cf8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:44:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1777: 321 pgs: 321 active+clean; 169 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 551 KiB/s rd, 1.8 MiB/s wr, 71 op/s
Nov 25 16:44:37 compute-0 nova_compute[254092]: 2025-11-25 16:44:37.696 254096 DEBUG nova.network.neutron [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:44:37 compute-0 podman[335469]: 2025-11-25 16:44:37.728983738 +0000 UTC m=+0.025618806 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:44:37 compute-0 nova_compute[254092]: 2025-11-25 16:44:37.889 254096 DEBUG nova.compute.manager [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:44:37 compute-0 nova_compute[254092]: 2025-11-25 16:44:37.890 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089077.8884153, f0783fc0-46a8-4c51-95ca-79db0d6d849d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:44:37 compute-0 nova_compute[254092]: 2025-11-25 16:44:37.891 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] VM Started (Lifecycle Event)
Nov 25 16:44:37 compute-0 nova_compute[254092]: 2025-11-25 16:44:37.897 254096 DEBUG nova.virt.libvirt.driver [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:44:37 compute-0 nova_compute[254092]: 2025-11-25 16:44:37.902 254096 INFO nova.virt.libvirt.driver [-] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Instance spawned successfully.
Nov 25 16:44:37 compute-0 nova_compute[254092]: 2025-11-25 16:44:37.902 254096 DEBUG nova.virt.libvirt.driver [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:44:37 compute-0 podman[335469]: 2025-11-25 16:44:37.912210639 +0000 UTC m=+0.208845687 container create 0ff8520898244ac405f63ad1964255ffd9eb71115b8d3c83b30fa63a03b710fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-06ec9e45-7215-4d71-bb7c-e65059e97c36, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:44:37 compute-0 nova_compute[254092]: 2025-11-25 16:44:37.919 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:44:37 compute-0 nova_compute[254092]: 2025-11-25 16:44:37.928 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:44:37 compute-0 nova_compute[254092]: 2025-11-25 16:44:37.932 254096 DEBUG nova.virt.libvirt.driver [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:44:37 compute-0 nova_compute[254092]: 2025-11-25 16:44:37.932 254096 DEBUG nova.virt.libvirt.driver [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:44:37 compute-0 nova_compute[254092]: 2025-11-25 16:44:37.933 254096 DEBUG nova.virt.libvirt.driver [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:44:37 compute-0 nova_compute[254092]: 2025-11-25 16:44:37.933 254096 DEBUG nova.virt.libvirt.driver [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:44:37 compute-0 nova_compute[254092]: 2025-11-25 16:44:37.933 254096 DEBUG nova.virt.libvirt.driver [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:44:37 compute-0 nova_compute[254092]: 2025-11-25 16:44:37.934 254096 DEBUG nova.virt.libvirt.driver [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:44:37 compute-0 nova_compute[254092]: 2025-11-25 16:44:37.961 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:44:37 compute-0 nova_compute[254092]: 2025-11-25 16:44:37.962 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089077.889907, f0783fc0-46a8-4c51-95ca-79db0d6d849d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:44:37 compute-0 nova_compute[254092]: 2025-11-25 16:44:37.962 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] VM Paused (Lifecycle Event)
Nov 25 16:44:37 compute-0 nova_compute[254092]: 2025-11-25 16:44:37.987 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:44:37 compute-0 nova_compute[254092]: 2025-11-25 16:44:37.993 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089077.8961594, f0783fc0-46a8-4c51-95ca-79db0d6d849d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:44:37 compute-0 nova_compute[254092]: 2025-11-25 16:44:37.993 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] VM Resumed (Lifecycle Event)
Nov 25 16:44:37 compute-0 systemd[1]: Started libpod-conmon-0ff8520898244ac405f63ad1964255ffd9eb71115b8d3c83b30fa63a03b710fa.scope.
Nov 25 16:44:38 compute-0 nova_compute[254092]: 2025-11-25 16:44:38.013 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:44:38 compute-0 nova_compute[254092]: 2025-11-25 16:44:38.017 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:44:38 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:44:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1af42e6464b8cbe52d413948164d2e659141496865c81e8115ef129bbdc7df64/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:44:38 compute-0 nova_compute[254092]: 2025-11-25 16:44:38.041 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:44:38 compute-0 nova_compute[254092]: 2025-11-25 16:44:38.054 254096 INFO nova.compute.manager [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Took 7.95 seconds to spawn the instance on the hypervisor.
Nov 25 16:44:38 compute-0 nova_compute[254092]: 2025-11-25 16:44:38.055 254096 DEBUG nova.compute.manager [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:44:38 compute-0 podman[335469]: 2025-11-25 16:44:38.087882605 +0000 UTC m=+0.384517683 container init 0ff8520898244ac405f63ad1964255ffd9eb71115b8d3c83b30fa63a03b710fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-06ec9e45-7215-4d71-bb7c-e65059e97c36, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 25 16:44:38 compute-0 podman[335469]: 2025-11-25 16:44:38.094194666 +0000 UTC m=+0.390829714 container start 0ff8520898244ac405f63ad1964255ffd9eb71115b8d3c83b30fa63a03b710fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-06ec9e45-7215-4d71-bb7c-e65059e97c36, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true)
Nov 25 16:44:38 compute-0 neutron-haproxy-ovnmeta-06ec9e45-7215-4d71-bb7c-e65059e97c36[335498]: [NOTICE]   (335502) : New worker (335504) forked
Nov 25 16:44:38 compute-0 neutron-haproxy-ovnmeta-06ec9e45-7215-4d71-bb7c-e65059e97c36[335498]: [NOTICE]   (335502) : Loading success.
Nov 25 16:44:38 compute-0 nova_compute[254092]: 2025-11-25 16:44:38.190 254096 INFO nova.compute.manager [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Took 9.29 seconds to build instance.
Nov 25 16:44:38 compute-0 nova_compute[254092]: 2025-11-25 16:44:38.233 254096 DEBUG oslo_concurrency.lockutils [None req-ce532e72-cc1f-436b-821e-42a33c3f9389 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Lock "f0783fc0-46a8-4c51-95ca-79db0d6d849d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.420s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:44:38 compute-0 nova_compute[254092]: 2025-11-25 16:44:38.334 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:38 compute-0 nova_compute[254092]: 2025-11-25 16:44:38.744 254096 DEBUG nova.network.neutron [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Updating instance_info_cache with network_info: [{"id": "41fd5f5b-445b-4eed-adf5-045ddb262021", "address": "fa:16:3e:a2:dc:a2", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41fd5f5b-44", "ovs_interfaceid": "41fd5f5b-445b-4eed-adf5-045ddb262021", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:44:38 compute-0 nova_compute[254092]: 2025-11-25 16:44:38.780 254096 DEBUG oslo_concurrency.lockutils [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Releasing lock "refresh_cache-98410ff5-26ab-4406-8d1b-063d9e114cf8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:44:38 compute-0 nova_compute[254092]: 2025-11-25 16:44:38.781 254096 DEBUG nova.compute.manager [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Instance network_info: |[{"id": "41fd5f5b-445b-4eed-adf5-045ddb262021", "address": "fa:16:3e:a2:dc:a2", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41fd5f5b-44", "ovs_interfaceid": "41fd5f5b-445b-4eed-adf5-045ddb262021", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:44:38 compute-0 nova_compute[254092]: 2025-11-25 16:44:38.782 254096 DEBUG oslo_concurrency.lockutils [req-2ac16688-0dcc-4337-8603-7b267dd8dd76 req-f4201910-ce18-4682-9da3-72a34480561f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-98410ff5-26ab-4406-8d1b-063d9e114cf8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:44:38 compute-0 nova_compute[254092]: 2025-11-25 16:44:38.782 254096 DEBUG nova.network.neutron [req-2ac16688-0dcc-4337-8603-7b267dd8dd76 req-f4201910-ce18-4682-9da3-72a34480561f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Refreshing network info cache for port 41fd5f5b-445b-4eed-adf5-045ddb262021 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:44:38 compute-0 nova_compute[254092]: 2025-11-25 16:44:38.786 254096 DEBUG nova.virt.libvirt.driver [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Start _get_guest_xml network_info=[{"id": "41fd5f5b-445b-4eed-adf5-045ddb262021", "address": "fa:16:3e:a2:dc:a2", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41fd5f5b-44", "ovs_interfaceid": "41fd5f5b-445b-4eed-adf5-045ddb262021", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:44:38 compute-0 nova_compute[254092]: 2025-11-25 16:44:38.790 254096 WARNING nova.virt.libvirt.driver [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:44:38 compute-0 nova_compute[254092]: 2025-11-25 16:44:38.796 254096 DEBUG nova.virt.libvirt.host [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:44:38 compute-0 nova_compute[254092]: 2025-11-25 16:44:38.797 254096 DEBUG nova.virt.libvirt.host [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:44:38 compute-0 nova_compute[254092]: 2025-11-25 16:44:38.804 254096 DEBUG nova.virt.libvirt.host [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:44:38 compute-0 nova_compute[254092]: 2025-11-25 16:44:38.805 254096 DEBUG nova.virt.libvirt.host [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:44:38 compute-0 nova_compute[254092]: 2025-11-25 16:44:38.805 254096 DEBUG nova.virt.libvirt.driver [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:44:38 compute-0 nova_compute[254092]: 2025-11-25 16:44:38.805 254096 DEBUG nova.virt.hardware [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:44:38 compute-0 nova_compute[254092]: 2025-11-25 16:44:38.806 254096 DEBUG nova.virt.hardware [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:44:38 compute-0 nova_compute[254092]: 2025-11-25 16:44:38.807 254096 DEBUG nova.virt.hardware [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:44:38 compute-0 nova_compute[254092]: 2025-11-25 16:44:38.807 254096 DEBUG nova.virt.hardware [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:44:38 compute-0 nova_compute[254092]: 2025-11-25 16:44:38.807 254096 DEBUG nova.virt.hardware [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:44:38 compute-0 nova_compute[254092]: 2025-11-25 16:44:38.808 254096 DEBUG nova.virt.hardware [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:44:38 compute-0 nova_compute[254092]: 2025-11-25 16:44:38.808 254096 DEBUG nova.virt.hardware [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:44:38 compute-0 nova_compute[254092]: 2025-11-25 16:44:38.808 254096 DEBUG nova.virt.hardware [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:44:38 compute-0 nova_compute[254092]: 2025-11-25 16:44:38.809 254096 DEBUG nova.virt.hardware [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:44:38 compute-0 nova_compute[254092]: 2025-11-25 16:44:38.809 254096 DEBUG nova.virt.hardware [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:44:38 compute-0 nova_compute[254092]: 2025-11-25 16:44:38.809 254096 DEBUG nova.virt.hardware [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:44:38 compute-0 nova_compute[254092]: 2025-11-25 16:44:38.812 254096 DEBUG oslo_concurrency.processutils [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:44:38 compute-0 ceph-mon[74985]: pgmap v1777: 321 pgs: 321 active+clean; 169 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 551 KiB/s rd, 1.8 MiB/s wr, 71 op/s
Nov 25 16:44:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:44:39 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1736142780' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:44:39 compute-0 nova_compute[254092]: 2025-11-25 16:44:39.327 254096 DEBUG oslo_concurrency.processutils [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:44:39 compute-0 nova_compute[254092]: 2025-11-25 16:44:39.349 254096 DEBUG nova.storage.rbd_utils [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] rbd image 98410ff5-26ab-4406-8d1b-063d9e114cf8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:44:39 compute-0 nova_compute[254092]: 2025-11-25 16:44:39.353 254096 DEBUG oslo_concurrency.processutils [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:44:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1778: 321 pgs: 321 active+clean; 169 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 551 KiB/s rd, 1.8 MiB/s wr, 71 op/s
Nov 25 16:44:39 compute-0 nova_compute[254092]: 2025-11-25 16:44:39.760 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:44:39 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2708072605' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:44:39 compute-0 nova_compute[254092]: 2025-11-25 16:44:39.799 254096 DEBUG oslo_concurrency.processutils [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:44:39 compute-0 nova_compute[254092]: 2025-11-25 16:44:39.801 254096 DEBUG nova.virt.libvirt.vif [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:44:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-2135129959',display_name='tempest-ServerActionsTestOtherA-server-2135129959',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-2135129959',id=78,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH0CfOXgpdL9TA9v80eVPgWMFMAd3kyDMITWZbq91VqT30SkdY0BSiRtiMf/N/PxHYN1QDKdbRV0yenlOn8E69+KpPA991BPfs7OG9A96fwH3GKazl2NNuFOCSFE4XMmXQ==',key_name='tempest-keypair-1463003804',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d4964e211a6d4699ab499f7cadee8a8d',ramdisk_id='',reservation_id='r-z971v96r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-878981139',owner_user_name='tempest-ServerActionsTestOtherA-878981139-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:44:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7bd4800c25cd462b9365649e599d0a0e',uuid=98410ff5-26ab-4406-8d1b-063d9e114cf8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "41fd5f5b-445b-4eed-adf5-045ddb262021", "address": "fa:16:3e:a2:dc:a2", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41fd5f5b-44", "ovs_interfaceid": "41fd5f5b-445b-4eed-adf5-045ddb262021", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:44:39 compute-0 nova_compute[254092]: 2025-11-25 16:44:39.802 254096 DEBUG nova.network.os_vif_util [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Converting VIF {"id": "41fd5f5b-445b-4eed-adf5-045ddb262021", "address": "fa:16:3e:a2:dc:a2", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41fd5f5b-44", "ovs_interfaceid": "41fd5f5b-445b-4eed-adf5-045ddb262021", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:44:39 compute-0 nova_compute[254092]: 2025-11-25 16:44:39.803 254096 DEBUG nova.network.os_vif_util [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:dc:a2,bridge_name='br-int',has_traffic_filtering=True,id=41fd5f5b-445b-4eed-adf5-045ddb262021,network=Network(290484fa-908f-44de-87e4-4f5bc85c5679),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41fd5f5b-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:44:39 compute-0 nova_compute[254092]: 2025-11-25 16:44:39.804 254096 DEBUG nova.objects.instance [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lazy-loading 'pci_devices' on Instance uuid 98410ff5-26ab-4406-8d1b-063d9e114cf8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:44:39 compute-0 nova_compute[254092]: 2025-11-25 16:44:39.828 254096 DEBUG nova.virt.libvirt.driver [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:44:39 compute-0 nova_compute[254092]:   <uuid>98410ff5-26ab-4406-8d1b-063d9e114cf8</uuid>
Nov 25 16:44:39 compute-0 nova_compute[254092]:   <name>instance-0000004e</name>
Nov 25 16:44:39 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:44:39 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:44:39 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:44:39 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:       <nova:name>tempest-ServerActionsTestOtherA-server-2135129959</nova:name>
Nov 25 16:44:39 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:44:38</nova:creationTime>
Nov 25 16:44:39 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:44:39 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:44:39 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:44:39 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:44:39 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:44:39 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:44:39 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:44:39 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:44:39 compute-0 nova_compute[254092]:         <nova:user uuid="7bd4800c25cd462b9365649e599d0a0e">tempest-ServerActionsTestOtherA-878981139-project-member</nova:user>
Nov 25 16:44:39 compute-0 nova_compute[254092]:         <nova:project uuid="d4964e211a6d4699ab499f7cadee8a8d">tempest-ServerActionsTestOtherA-878981139</nova:project>
Nov 25 16:44:39 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:44:39 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:44:39 compute-0 nova_compute[254092]:         <nova:port uuid="41fd5f5b-445b-4eed-adf5-045ddb262021">
Nov 25 16:44:39 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:44:39 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:44:39 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:44:39 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:44:39 compute-0 nova_compute[254092]:     <system>
Nov 25 16:44:39 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:44:39 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:44:39 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:44:39 compute-0 nova_compute[254092]:       <entry name="serial">98410ff5-26ab-4406-8d1b-063d9e114cf8</entry>
Nov 25 16:44:39 compute-0 nova_compute[254092]:       <entry name="uuid">98410ff5-26ab-4406-8d1b-063d9e114cf8</entry>
Nov 25 16:44:39 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     </system>
Nov 25 16:44:39 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:44:39 compute-0 nova_compute[254092]:   <os>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:   </os>
Nov 25 16:44:39 compute-0 nova_compute[254092]:   <features>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:   </features>
Nov 25 16:44:39 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:44:39 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:44:39 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:44:39 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:44:39 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:44:39 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/98410ff5-26ab-4406-8d1b-063d9e114cf8_disk">
Nov 25 16:44:39 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:       </source>
Nov 25 16:44:39 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:44:39 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:44:39 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:44:39 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/98410ff5-26ab-4406-8d1b-063d9e114cf8_disk.config">
Nov 25 16:44:39 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:       </source>
Nov 25 16:44:39 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:44:39 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:44:39 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:44:39 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:a2:dc:a2"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:       <target dev="tap41fd5f5b-44"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:44:39 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/98410ff5-26ab-4406-8d1b-063d9e114cf8/console.log" append="off"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     <video>
Nov 25 16:44:39 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     </video>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:44:39 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:44:39 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:44:39 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:44:39 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:44:39 compute-0 nova_compute[254092]: </domain>
Nov 25 16:44:39 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:44:39 compute-0 nova_compute[254092]: 2025-11-25 16:44:39.835 254096 DEBUG nova.compute.manager [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Preparing to wait for external event network-vif-plugged-41fd5f5b-445b-4eed-adf5-045ddb262021 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:44:39 compute-0 nova_compute[254092]: 2025-11-25 16:44:39.835 254096 DEBUG oslo_concurrency.lockutils [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Acquiring lock "98410ff5-26ab-4406-8d1b-063d9e114cf8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:44:39 compute-0 nova_compute[254092]: 2025-11-25 16:44:39.836 254096 DEBUG oslo_concurrency.lockutils [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "98410ff5-26ab-4406-8d1b-063d9e114cf8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:44:39 compute-0 nova_compute[254092]: 2025-11-25 16:44:39.836 254096 DEBUG oslo_concurrency.lockutils [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "98410ff5-26ab-4406-8d1b-063d9e114cf8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:44:39 compute-0 nova_compute[254092]: 2025-11-25 16:44:39.837 254096 DEBUG nova.virt.libvirt.vif [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:44:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-2135129959',display_name='tempest-ServerActionsTestOtherA-server-2135129959',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-2135129959',id=78,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH0CfOXgpdL9TA9v80eVPgWMFMAd3kyDMITWZbq91VqT30SkdY0BSiRtiMf/N/PxHYN1QDKdbRV0yenlOn8E69+KpPA991BPfs7OG9A96fwH3GKazl2NNuFOCSFE4XMmXQ==',key_name='tempest-keypair-1463003804',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d4964e211a6d4699ab499f7cadee8a8d',ramdisk_id='',reservation_id='r-z971v96r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-878981139',owner_user_name='tempest-ServerActionsTestOtherA-878981139-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:44:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7bd4800c25cd462b9365649e599d0a0e',uuid=98410ff5-26ab-4406-8d1b-063d9e114cf8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "41fd5f5b-445b-4eed-adf5-045ddb262021", "address": "fa:16:3e:a2:dc:a2", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41fd5f5b-44", "ovs_interfaceid": "41fd5f5b-445b-4eed-adf5-045ddb262021", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:44:39 compute-0 nova_compute[254092]: 2025-11-25 16:44:39.837 254096 DEBUG nova.network.os_vif_util [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Converting VIF {"id": "41fd5f5b-445b-4eed-adf5-045ddb262021", "address": "fa:16:3e:a2:dc:a2", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41fd5f5b-44", "ovs_interfaceid": "41fd5f5b-445b-4eed-adf5-045ddb262021", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:44:39 compute-0 nova_compute[254092]: 2025-11-25 16:44:39.838 254096 DEBUG nova.network.os_vif_util [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a2:dc:a2,bridge_name='br-int',has_traffic_filtering=True,id=41fd5f5b-445b-4eed-adf5-045ddb262021,network=Network(290484fa-908f-44de-87e4-4f5bc85c5679),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41fd5f5b-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:44:39 compute-0 nova_compute[254092]: 2025-11-25 16:44:39.838 254096 DEBUG os_vif [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:dc:a2,bridge_name='br-int',has_traffic_filtering=True,id=41fd5f5b-445b-4eed-adf5-045ddb262021,network=Network(290484fa-908f-44de-87e4-4f5bc85c5679),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41fd5f5b-44') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:44:39 compute-0 nova_compute[254092]: 2025-11-25 16:44:39.839 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:39 compute-0 nova_compute[254092]: 2025-11-25 16:44:39.840 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:44:39 compute-0 nova_compute[254092]: 2025-11-25 16:44:39.840 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:44:39 compute-0 nova_compute[254092]: 2025-11-25 16:44:39.842 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:39 compute-0 nova_compute[254092]: 2025-11-25 16:44:39.843 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41fd5f5b-44, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:44:39 compute-0 nova_compute[254092]: 2025-11-25 16:44:39.843 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap41fd5f5b-44, col_values=(('external_ids', {'iface-id': '41fd5f5b-445b-4eed-adf5-045ddb262021', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a2:dc:a2', 'vm-uuid': '98410ff5-26ab-4406-8d1b-063d9e114cf8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:44:39 compute-0 nova_compute[254092]: 2025-11-25 16:44:39.844 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:39 compute-0 NetworkManager[48891]: <info>  [1764089079.8456] manager: (tap41fd5f5b-44): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/325)
Nov 25 16:44:39 compute-0 nova_compute[254092]: 2025-11-25 16:44:39.850 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:44:39 compute-0 nova_compute[254092]: 2025-11-25 16:44:39.853 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:39 compute-0 nova_compute[254092]: 2025-11-25 16:44:39.854 254096 INFO os_vif [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a2:dc:a2,bridge_name='br-int',has_traffic_filtering=True,id=41fd5f5b-445b-4eed-adf5-045ddb262021,network=Network(290484fa-908f-44de-87e4-4f5bc85c5679),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41fd5f5b-44')
Nov 25 16:44:39 compute-0 nova_compute[254092]: 2025-11-25 16:44:39.951 254096 DEBUG nova.virt.libvirt.driver [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:44:39 compute-0 nova_compute[254092]: 2025-11-25 16:44:39.952 254096 DEBUG nova.virt.libvirt.driver [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:44:39 compute-0 nova_compute[254092]: 2025-11-25 16:44:39.952 254096 DEBUG nova.virt.libvirt.driver [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] No VIF found with MAC fa:16:3e:a2:dc:a2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:44:39 compute-0 nova_compute[254092]: 2025-11-25 16:44:39.953 254096 INFO nova.virt.libvirt.driver [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Using config drive
Nov 25 16:44:39 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1736142780' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:44:39 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2708072605' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:44:40 compute-0 nova_compute[254092]: 2025-11-25 16:44:40.028 254096 DEBUG nova.storage.rbd_utils [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] rbd image 98410ff5-26ab-4406-8d1b-063d9e114cf8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:44:40 compute-0 nova_compute[254092]: 2025-11-25 16:44:40.049 254096 DEBUG nova.compute.manager [req-7f825759-270e-424a-b11a-b33cc898be76 req-4759431e-e3a5-43ce-96fb-88930f2f1fcf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Received event network-vif-plugged-94d30e8d-55a7-40af-be2f-0ce1a8c10b97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:44:40 compute-0 nova_compute[254092]: 2025-11-25 16:44:40.050 254096 DEBUG oslo_concurrency.lockutils [req-7f825759-270e-424a-b11a-b33cc898be76 req-4759431e-e3a5-43ce-96fb-88930f2f1fcf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f0783fc0-46a8-4c51-95ca-79db0d6d849d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:44:40 compute-0 nova_compute[254092]: 2025-11-25 16:44:40.050 254096 DEBUG oslo_concurrency.lockutils [req-7f825759-270e-424a-b11a-b33cc898be76 req-4759431e-e3a5-43ce-96fb-88930f2f1fcf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0783fc0-46a8-4c51-95ca-79db0d6d849d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:44:40 compute-0 nova_compute[254092]: 2025-11-25 16:44:40.051 254096 DEBUG oslo_concurrency.lockutils [req-7f825759-270e-424a-b11a-b33cc898be76 req-4759431e-e3a5-43ce-96fb-88930f2f1fcf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0783fc0-46a8-4c51-95ca-79db0d6d849d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:44:40 compute-0 nova_compute[254092]: 2025-11-25 16:44:40.051 254096 DEBUG nova.compute.manager [req-7f825759-270e-424a-b11a-b33cc898be76 req-4759431e-e3a5-43ce-96fb-88930f2f1fcf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] No waiting events found dispatching network-vif-plugged-94d30e8d-55a7-40af-be2f-0ce1a8c10b97 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:44:40 compute-0 nova_compute[254092]: 2025-11-25 16:44:40.051 254096 WARNING nova.compute.manager [req-7f825759-270e-424a-b11a-b33cc898be76 req-4759431e-e3a5-43ce-96fb-88930f2f1fcf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Received unexpected event network-vif-plugged-94d30e8d-55a7-40af-be2f-0ce1a8c10b97 for instance with vm_state active and task_state None.
Nov 25 16:44:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:44:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:44:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:44:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:44:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:44:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:44:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:44:40
Nov 25 16:44:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:44:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:44:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', 'backups', '.rgw.root', 'vms', 'default.rgw.log', '.mgr', 'default.rgw.control', 'volumes', 'images', 'cephfs.cephfs.data']
Nov 25 16:44:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:44:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:44:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:44:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:44:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:44:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:44:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:44:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:44:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:44:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:44:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:44:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:44:40 compute-0 nova_compute[254092]: 2025-11-25 16:44:40.985 254096 INFO nova.virt.libvirt.driver [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Creating config drive at /var/lib/nova/instances/98410ff5-26ab-4406-8d1b-063d9e114cf8/disk.config
Nov 25 16:44:40 compute-0 nova_compute[254092]: 2025-11-25 16:44:40.995 254096 DEBUG oslo_concurrency.processutils [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/98410ff5-26ab-4406-8d1b-063d9e114cf8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpme2mkk6q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:44:41 compute-0 nova_compute[254092]: 2025-11-25 16:44:41.027 254096 DEBUG nova.network.neutron [req-2ac16688-0dcc-4337-8603-7b267dd8dd76 req-f4201910-ce18-4682-9da3-72a34480561f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Updated VIF entry in instance network info cache for port 41fd5f5b-445b-4eed-adf5-045ddb262021. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:44:41 compute-0 nova_compute[254092]: 2025-11-25 16:44:41.028 254096 DEBUG nova.network.neutron [req-2ac16688-0dcc-4337-8603-7b267dd8dd76 req-f4201910-ce18-4682-9da3-72a34480561f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Updating instance_info_cache with network_info: [{"id": "41fd5f5b-445b-4eed-adf5-045ddb262021", "address": "fa:16:3e:a2:dc:a2", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41fd5f5b-44", "ovs_interfaceid": "41fd5f5b-445b-4eed-adf5-045ddb262021", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:44:41 compute-0 nova_compute[254092]: 2025-11-25 16:44:41.046 254096 DEBUG oslo_concurrency.lockutils [req-2ac16688-0dcc-4337-8603-7b267dd8dd76 req-f4201910-ce18-4682-9da3-72a34480561f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-98410ff5-26ab-4406-8d1b-063d9e114cf8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:44:41 compute-0 nova_compute[254092]: 2025-11-25 16:44:41.135 254096 DEBUG oslo_concurrency.processutils [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/98410ff5-26ab-4406-8d1b-063d9e114cf8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpme2mkk6q" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:44:41 compute-0 ceph-mon[74985]: pgmap v1778: 321 pgs: 321 active+clean; 169 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 551 KiB/s rd, 1.8 MiB/s wr, 71 op/s
Nov 25 16:44:41 compute-0 nova_compute[254092]: 2025-11-25 16:44:41.167 254096 DEBUG nova.storage.rbd_utils [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] rbd image 98410ff5-26ab-4406-8d1b-063d9e114cf8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:44:41 compute-0 nova_compute[254092]: 2025-11-25 16:44:41.171 254096 DEBUG oslo_concurrency.processutils [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/98410ff5-26ab-4406-8d1b-063d9e114cf8/disk.config 98410ff5-26ab-4406-8d1b-063d9e114cf8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:44:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1779: 321 pgs: 321 active+clean; 215 MiB data, 665 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.6 MiB/s wr, 172 op/s
Nov 25 16:44:42 compute-0 ceph-mon[74985]: pgmap v1779: 321 pgs: 321 active+clean; 215 MiB data, 665 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.6 MiB/s wr, 172 op/s
Nov 25 16:44:42 compute-0 nova_compute[254092]: 2025-11-25 16:44:42.629 254096 DEBUG oslo_concurrency.processutils [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/98410ff5-26ab-4406-8d1b-063d9e114cf8/disk.config 98410ff5-26ab-4406-8d1b-063d9e114cf8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:44:42 compute-0 nova_compute[254092]: 2025-11-25 16:44:42.629 254096 INFO nova.virt.libvirt.driver [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Deleting local config drive /var/lib/nova/instances/98410ff5-26ab-4406-8d1b-063d9e114cf8/disk.config because it was imported into RBD.
Nov 25 16:44:42 compute-0 podman[335636]: 2025-11-25 16:44:42.680707736 +0000 UTC m=+0.097088855 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:44:42 compute-0 NetworkManager[48891]: <info>  [1764089082.6851] manager: (tap41fd5f5b-44): new Tun device (/org/freedesktop/NetworkManager/Devices/326)
Nov 25 16:44:42 compute-0 kernel: tap41fd5f5b-44: entered promiscuous mode
Nov 25 16:44:42 compute-0 podman[335638]: 2025-11-25 16:44:42.685286871 +0000 UTC m=+0.095072671 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 25 16:44:42 compute-0 podman[335637]: 2025-11-25 16:44:42.685297671 +0000 UTC m=+0.090450375 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 16:44:42 compute-0 ovn_controller[153477]: 2025-11-25T16:44:42Z|00745|binding|INFO|Claiming lport 41fd5f5b-445b-4eed-adf5-045ddb262021 for this chassis.
Nov 25 16:44:42 compute-0 ovn_controller[153477]: 2025-11-25T16:44:42Z|00746|binding|INFO|41fd5f5b-445b-4eed-adf5-045ddb262021: Claiming fa:16:3e:a2:dc:a2 10.100.0.6
Nov 25 16:44:42 compute-0 nova_compute[254092]: 2025-11-25 16:44:42.691 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:42 compute-0 ovn_controller[153477]: 2025-11-25T16:44:42Z|00747|binding|INFO|Setting lport 41fd5f5b-445b-4eed-adf5-045ddb262021 ovn-installed in OVS
Nov 25 16:44:42 compute-0 nova_compute[254092]: 2025-11-25 16:44:42.709 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:42 compute-0 systemd-machined[216343]: New machine qemu-93-instance-0000004e.
Nov 25 16:44:42 compute-0 systemd[1]: Started Virtual Machine qemu-93-instance-0000004e.
Nov 25 16:44:42 compute-0 nova_compute[254092]: 2025-11-25 16:44:42.749 254096 DEBUG oslo_concurrency.lockutils [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:44:42 compute-0 nova_compute[254092]: 2025-11-25 16:44:42.750 254096 DEBUG oslo_concurrency.lockutils [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:44:42 compute-0 nova_compute[254092]: 2025-11-25 16:44:42.750 254096 INFO nova.compute.manager [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Rebooting instance
Nov 25 16:44:42 compute-0 systemd-udevd[335710]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:44:42 compute-0 NetworkManager[48891]: <info>  [1764089082.7688] device (tap41fd5f5b-44): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:44:42 compute-0 NetworkManager[48891]: <info>  [1764089082.7701] device (tap41fd5f5b-44): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:44:42 compute-0 nova_compute[254092]: 2025-11-25 16:44:42.795 254096 DEBUG oslo_concurrency.lockutils [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquiring lock "refresh_cache-01f96314-1fbe-4eee-a4ed-db7f448a5320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:44:42 compute-0 nova_compute[254092]: 2025-11-25 16:44:42.796 254096 DEBUG oslo_concurrency.lockutils [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquired lock "refresh_cache-01f96314-1fbe-4eee-a4ed-db7f448a5320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:44:42 compute-0 nova_compute[254092]: 2025-11-25 16:44:42.796 254096 DEBUG nova.network.neutron [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:44:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:42.848 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:dc:a2 10.100.0.6'], port_security=['fa:16:3e:a2:dc:a2 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '98410ff5-26ab-4406-8d1b-063d9e114cf8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-290484fa-908f-44de-87e4-4f5bc85c5679', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4964e211a6d4699ab499f7cadee8a8d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c92cd9ca-5dd9-48df-bed9-cecbc09aacca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=041a286f-fdb2-474d-8a57-f5c3168624f1, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=41fd5f5b-445b-4eed-adf5-045ddb262021) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:44:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:42.850 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 41fd5f5b-445b-4eed-adf5-045ddb262021 in datapath 290484fa-908f-44de-87e4-4f5bc85c5679 bound to our chassis
Nov 25 16:44:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:42.852 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 290484fa-908f-44de-87e4-4f5bc85c5679
Nov 25 16:44:42 compute-0 ovn_controller[153477]: 2025-11-25T16:44:42Z|00748|binding|INFO|Setting lport 41fd5f5b-445b-4eed-adf5-045ddb262021 up in Southbound
Nov 25 16:44:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:42.871 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b285b536-f5bf-4145-9eaf-748a08d764d2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:42.873 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap290484fa-91 in ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:44:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:42.875 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap290484fa-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:44:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:42.875 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a64a7c1b-fe51-4157-b61e-c36e9797c7a8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:42.876 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f5426207-e8d1-418b-a925-4899d1beb9a6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:42.893 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[a3d6e83a-2ece-4bba-b4ab-e7db2b5b8944]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:42.912 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[91b14cb4-07da-4778-bf1f-48c462781d46]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:42.943 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[5e37b29c-d646-410a-8a1b-c19999a1e78b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:42.951 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7b19e7a2-e22c-4b81-962a-8b0fb027a799]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:42 compute-0 NetworkManager[48891]: <info>  [1764089082.9523] manager: (tap290484fa-90): new Veth device (/org/freedesktop/NetworkManager/Devices/327)
Nov 25 16:44:42 compute-0 systemd-udevd[335714]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:44:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:42.988 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d5a6a9bf-3a7d-4a71-87f3-cd5634040581]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:42.991 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[5b964382-3657-4f1f-81df-b54e37d99934]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:43 compute-0 NetworkManager[48891]: <info>  [1764089083.0171] device (tap290484fa-90): carrier: link connected
Nov 25 16:44:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:43.023 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c66d22a2-cb12-4d6d-9ede-79be462023b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:43.041 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f4a05620-e1ab-4780-b014-efe90ef67e34]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap290484fa-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:a3:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 221], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554060, 'reachable_time': 32335, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335745, 'error': None, 'target': 'ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:43.063 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[db3a4234-26c8-4751-b67d-5f8dcb818701]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7b:a377'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 554060, 'tstamp': 554060}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335762, 'error': None, 'target': 'ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:43.086 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fa00178b-6348-44a2-87cb-0cc817727879]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap290484fa-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:a3:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 221], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554060, 'reachable_time': 32335, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 335765, 'error': None, 'target': 'ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:43.120 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5bb2f29f-99d5-416d-8363-b58083507146]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:43.187 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3838271e-ff5e-4de4-9b23-13c1a22e0f76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:43.190 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap290484fa-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:44:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:43.191 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:44:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:43.191 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap290484fa-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:44:43 compute-0 kernel: tap290484fa-90: entered promiscuous mode
Nov 25 16:44:43 compute-0 NetworkManager[48891]: <info>  [1764089083.1935] manager: (tap290484fa-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/328)
Nov 25 16:44:43 compute-0 nova_compute[254092]: 2025-11-25 16:44:43.193 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:43.199 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap290484fa-90, col_values=(('external_ids', {'iface-id': 'db192ec3-55c1-4137-aaad-99a175bfa879'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:44:43 compute-0 ovn_controller[153477]: 2025-11-25T16:44:43Z|00749|binding|INFO|Releasing lport db192ec3-55c1-4137-aaad-99a175bfa879 from this chassis (sb_readonly=0)
Nov 25 16:44:43 compute-0 nova_compute[254092]: 2025-11-25 16:44:43.201 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:43.205 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/290484fa-908f-44de-87e4-4f5bc85c5679.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/290484fa-908f-44de-87e4-4f5bc85c5679.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:44:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:43.206 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1c4acd79-44eb-4474-843c-65d5ce47e5fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:43.207 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:44:43 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:44:43 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:44:43 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-290484fa-908f-44de-87e4-4f5bc85c5679
Nov 25 16:44:43 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:44:43 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:44:43 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:44:43 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/290484fa-908f-44de-87e4-4f5bc85c5679.pid.haproxy
Nov 25 16:44:43 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:44:43 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:44:43 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:44:43 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:44:43 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:44:43 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:44:43 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:44:43 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:44:43 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:44:43 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:44:43 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:44:43 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:44:43 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:44:43 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:44:43 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:44:43 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:44:43 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:44:43 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:44:43 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:44:43 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:44:43 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 290484fa-908f-44de-87e4-4f5bc85c5679
Nov 25 16:44:43 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:44:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:43.208 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679', 'env', 'PROCESS_TAG=haproxy-290484fa-908f-44de-87e4-4f5bc85c5679', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/290484fa-908f-44de-87e4-4f5bc85c5679.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:44:43 compute-0 nova_compute[254092]: 2025-11-25 16:44:43.216 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:43 compute-0 nova_compute[254092]: 2025-11-25 16:44:43.248 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089083.2482753, 98410ff5-26ab-4406-8d1b-063d9e114cf8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:44:43 compute-0 nova_compute[254092]: 2025-11-25 16:44:43.249 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] VM Started (Lifecycle Event)
Nov 25 16:44:43 compute-0 nova_compute[254092]: 2025-11-25 16:44:43.268 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:44:43 compute-0 nova_compute[254092]: 2025-11-25 16:44:43.272 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089083.2483757, 98410ff5-26ab-4406-8d1b-063d9e114cf8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:44:43 compute-0 nova_compute[254092]: 2025-11-25 16:44:43.272 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] VM Paused (Lifecycle Event)
Nov 25 16:44:43 compute-0 nova_compute[254092]: 2025-11-25 16:44:43.295 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:44:43 compute-0 nova_compute[254092]: 2025-11-25 16:44:43.298 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:44:43 compute-0 nova_compute[254092]: 2025-11-25 16:44:43.336 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:44:43 compute-0 nova_compute[254092]: 2025-11-25 16:44:43.337 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:43 compute-0 podman[335821]: 2025-11-25 16:44:43.574835233 +0000 UTC m=+0.026761696 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:44:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1780: 321 pgs: 321 active+clean; 215 MiB data, 665 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.3 MiB/s wr, 171 op/s
Nov 25 16:44:43 compute-0 podman[335821]: 2025-11-25 16:44:43.701267123 +0000 UTC m=+0.153193556 container create 78d680061c0bca0af9c8a030a9ac00c58d509344c51f9dea384a2ffc4edbc3db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:44:43 compute-0 systemd[1]: Started libpod-conmon-78d680061c0bca0af9c8a030a9ac00c58d509344c51f9dea384a2ffc4edbc3db.scope.
Nov 25 16:44:43 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:44:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a8c31be6c702279b234bc478f162b1997c18dd87616887343918d4c9ac2c2c7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:44:43 compute-0 podman[335821]: 2025-11-25 16:44:43.866062914 +0000 UTC m=+0.317989347 container init 78d680061c0bca0af9c8a030a9ac00c58d509344c51f9dea384a2ffc4edbc3db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:44:43 compute-0 podman[335821]: 2025-11-25 16:44:43.872249972 +0000 UTC m=+0.324176405 container start 78d680061c0bca0af9c8a030a9ac00c58d509344c51f9dea384a2ffc4edbc3db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:44:43 compute-0 neutron-haproxy-ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679[335837]: [NOTICE]   (335841) : New worker (335843) forked
Nov 25 16:44:43 compute-0 neutron-haproxy-ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679[335837]: [NOTICE]   (335841) : Loading success.
Nov 25 16:44:44 compute-0 ceph-mon[74985]: pgmap v1780: 321 pgs: 321 active+clean; 215 MiB data, 665 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.3 MiB/s wr, 171 op/s
Nov 25 16:44:44 compute-0 nova_compute[254092]: 2025-11-25 16:44:44.882 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:44 compute-0 nova_compute[254092]: 2025-11-25 16:44:44.924 254096 DEBUG oslo_concurrency.lockutils [None req-e508a1e9-94f2-4a8e-aa91-1309b819dc18 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Acquiring lock "f0783fc0-46a8-4c51-95ca-79db0d6d849d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:44:44 compute-0 nova_compute[254092]: 2025-11-25 16:44:44.926 254096 DEBUG oslo_concurrency.lockutils [None req-e508a1e9-94f2-4a8e-aa91-1309b819dc18 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Lock "f0783fc0-46a8-4c51-95ca-79db0d6d849d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:44:44 compute-0 nova_compute[254092]: 2025-11-25 16:44:44.926 254096 DEBUG oslo_concurrency.lockutils [None req-e508a1e9-94f2-4a8e-aa91-1309b819dc18 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Acquiring lock "f0783fc0-46a8-4c51-95ca-79db0d6d849d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:44:44 compute-0 nova_compute[254092]: 2025-11-25 16:44:44.927 254096 DEBUG oslo_concurrency.lockutils [None req-e508a1e9-94f2-4a8e-aa91-1309b819dc18 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Lock "f0783fc0-46a8-4c51-95ca-79db0d6d849d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:44:44 compute-0 nova_compute[254092]: 2025-11-25 16:44:44.927 254096 DEBUG oslo_concurrency.lockutils [None req-e508a1e9-94f2-4a8e-aa91-1309b819dc18 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Lock "f0783fc0-46a8-4c51-95ca-79db0d6d849d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:44:44 compute-0 nova_compute[254092]: 2025-11-25 16:44:44.929 254096 INFO nova.compute.manager [None req-e508a1e9-94f2-4a8e-aa91-1309b819dc18 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Terminating instance
Nov 25 16:44:44 compute-0 nova_compute[254092]: 2025-11-25 16:44:44.931 254096 DEBUG nova.compute.manager [None req-e508a1e9-94f2-4a8e-aa91-1309b819dc18 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:44:45 compute-0 kernel: tap94d30e8d-55 (unregistering): left promiscuous mode
Nov 25 16:44:45 compute-0 NetworkManager[48891]: <info>  [1764089085.0503] device (tap94d30e8d-55): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:44:45 compute-0 ovn_controller[153477]: 2025-11-25T16:44:45Z|00750|binding|INFO|Releasing lport 94d30e8d-55a7-40af-be2f-0ce1a8c10b97 from this chassis (sb_readonly=0)
Nov 25 16:44:45 compute-0 ovn_controller[153477]: 2025-11-25T16:44:45Z|00751|binding|INFO|Setting lport 94d30e8d-55a7-40af-be2f-0ce1a8c10b97 down in Southbound
Nov 25 16:44:45 compute-0 nova_compute[254092]: 2025-11-25 16:44:45.057 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:45 compute-0 ovn_controller[153477]: 2025-11-25T16:44:45Z|00752|binding|INFO|Removing iface tap94d30e8d-55 ovn-installed in OVS
Nov 25 16:44:45 compute-0 nova_compute[254092]: 2025-11-25 16:44:45.059 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:45 compute-0 nova_compute[254092]: 2025-11-25 16:44:45.080 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:45.081 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:ad:66 10.100.0.7'], port_security=['fa:16:3e:aa:ad:66 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'f0783fc0-46a8-4c51-95ca-79db0d6d849d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-06ec9e45-7215-4d71-bb7c-e65059e97c36', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8b7977bc30d5465b88d98c64d7c70db7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e299ae9f-54d0-4e42-b065-cef8e69869a2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7b47f788-f336-4085-985d-b8073abf30d6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=94d30e8d-55a7-40af-be2f-0ce1a8c10b97) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:44:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:45.083 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 94d30e8d-55a7-40af-be2f-0ce1a8c10b97 in datapath 06ec9e45-7215-4d71-bb7c-e65059e97c36 unbound from our chassis
Nov 25 16:44:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:45.085 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 06ec9e45-7215-4d71-bb7c-e65059e97c36, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:44:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:45.086 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3dbb769b-fc3d-4ac6-a391-5a1337460e74]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:45.087 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-06ec9e45-7215-4d71-bb7c-e65059e97c36 namespace which is not needed anymore
Nov 25 16:44:45 compute-0 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d0000004d.scope: Deactivated successfully.
Nov 25 16:44:45 compute-0 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d0000004d.scope: Consumed 7.868s CPU time.
Nov 25 16:44:45 compute-0 systemd-machined[216343]: Machine qemu-92-instance-0000004d terminated.
Nov 25 16:44:45 compute-0 nova_compute[254092]: 2025-11-25 16:44:45.171 254096 INFO nova.virt.libvirt.driver [-] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Instance destroyed successfully.
Nov 25 16:44:45 compute-0 nova_compute[254092]: 2025-11-25 16:44:45.172 254096 DEBUG nova.objects.instance [None req-e508a1e9-94f2-4a8e-aa91-1309b819dc18 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Lazy-loading 'resources' on Instance uuid f0783fc0-46a8-4c51-95ca-79db0d6d849d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:44:45 compute-0 nova_compute[254092]: 2025-11-25 16:44:45.244 254096 DEBUG nova.virt.libvirt.vif [None req-e508a1e9-94f2-4a8e-aa91-1309b819dc18 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:44:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-1255165271',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-1255165271',id=77,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:44:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8b7977bc30d5465b88d98c64d7c70db7',ramdisk_id='',reservation_id='r-dmevmba6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tem
pest-ServerTagsTestJSON-406281948',owner_user_name='tempest-ServerTagsTestJSON-406281948-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:44:38Z,user_data=None,user_id='4086d59097134dd6a71a9056c34359e5',uuid=f0783fc0-46a8-4c51-95ca-79db0d6d849d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "94d30e8d-55a7-40af-be2f-0ce1a8c10b97", "address": "fa:16:3e:aa:ad:66", "network": {"id": "06ec9e45-7215-4d71-bb7c-e65059e97c36", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-214544278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8b7977bc30d5465b88d98c64d7c70db7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94d30e8d-55", "ovs_interfaceid": "94d30e8d-55a7-40af-be2f-0ce1a8c10b97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:44:45 compute-0 nova_compute[254092]: 2025-11-25 16:44:45.244 254096 DEBUG nova.network.os_vif_util [None req-e508a1e9-94f2-4a8e-aa91-1309b819dc18 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Converting VIF {"id": "94d30e8d-55a7-40af-be2f-0ce1a8c10b97", "address": "fa:16:3e:aa:ad:66", "network": {"id": "06ec9e45-7215-4d71-bb7c-e65059e97c36", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-214544278-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8b7977bc30d5465b88d98c64d7c70db7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94d30e8d-55", "ovs_interfaceid": "94d30e8d-55a7-40af-be2f-0ce1a8c10b97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:44:45 compute-0 nova_compute[254092]: 2025-11-25 16:44:45.245 254096 DEBUG nova.network.os_vif_util [None req-e508a1e9-94f2-4a8e-aa91-1309b819dc18 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:ad:66,bridge_name='br-int',has_traffic_filtering=True,id=94d30e8d-55a7-40af-be2f-0ce1a8c10b97,network=Network(06ec9e45-7215-4d71-bb7c-e65059e97c36),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94d30e8d-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:44:45 compute-0 nova_compute[254092]: 2025-11-25 16:44:45.245 254096 DEBUG os_vif [None req-e508a1e9-94f2-4a8e-aa91-1309b819dc18 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:ad:66,bridge_name='br-int',has_traffic_filtering=True,id=94d30e8d-55a7-40af-be2f-0ce1a8c10b97,network=Network(06ec9e45-7215-4d71-bb7c-e65059e97c36),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94d30e8d-55') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:44:45 compute-0 nova_compute[254092]: 2025-11-25 16:44:45.246 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:45 compute-0 nova_compute[254092]: 2025-11-25 16:44:45.247 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap94d30e8d-55, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:44:45 compute-0 nova_compute[254092]: 2025-11-25 16:44:45.250 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:44:45 compute-0 nova_compute[254092]: 2025-11-25 16:44:45.254 254096 INFO os_vif [None req-e508a1e9-94f2-4a8e-aa91-1309b819dc18 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:ad:66,bridge_name='br-int',has_traffic_filtering=True,id=94d30e8d-55a7-40af-be2f-0ce1a8c10b97,network=Network(06ec9e45-7215-4d71-bb7c-e65059e97c36),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94d30e8d-55')
Nov 25 16:44:45 compute-0 neutron-haproxy-ovnmeta-06ec9e45-7215-4d71-bb7c-e65059e97c36[335498]: [NOTICE]   (335502) : haproxy version is 2.8.14-c23fe91
Nov 25 16:44:45 compute-0 neutron-haproxy-ovnmeta-06ec9e45-7215-4d71-bb7c-e65059e97c36[335498]: [NOTICE]   (335502) : path to executable is /usr/sbin/haproxy
Nov 25 16:44:45 compute-0 neutron-haproxy-ovnmeta-06ec9e45-7215-4d71-bb7c-e65059e97c36[335498]: [WARNING]  (335502) : Exiting Master process...
Nov 25 16:44:45 compute-0 neutron-haproxy-ovnmeta-06ec9e45-7215-4d71-bb7c-e65059e97c36[335498]: [WARNING]  (335502) : Exiting Master process...
Nov 25 16:44:45 compute-0 neutron-haproxy-ovnmeta-06ec9e45-7215-4d71-bb7c-e65059e97c36[335498]: [ALERT]    (335502) : Current worker (335504) exited with code 143 (Terminated)
Nov 25 16:44:45 compute-0 neutron-haproxy-ovnmeta-06ec9e45-7215-4d71-bb7c-e65059e97c36[335498]: [WARNING]  (335502) : All workers exited. Exiting... (0)
Nov 25 16:44:45 compute-0 systemd[1]: libpod-0ff8520898244ac405f63ad1964255ffd9eb71115b8d3c83b30fa63a03b710fa.scope: Deactivated successfully.
Nov 25 16:44:45 compute-0 podman[335883]: 2025-11-25 16:44:45.2953583 +0000 UTC m=+0.108058112 container died 0ff8520898244ac405f63ad1964255ffd9eb71115b8d3c83b30fa63a03b710fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-06ec9e45-7215-4d71-bb7c-e65059e97c36, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 16:44:45 compute-0 nova_compute[254092]: 2025-11-25 16:44:45.341 254096 DEBUG nova.network.neutron [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Updating instance_info_cache with network_info: [{"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:44:45 compute-0 nova_compute[254092]: 2025-11-25 16:44:45.477 254096 DEBUG oslo_concurrency.lockutils [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Releasing lock "refresh_cache-01f96314-1fbe-4eee-a4ed-db7f448a5320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:44:45 compute-0 nova_compute[254092]: 2025-11-25 16:44:45.478 254096 DEBUG nova.compute.manager [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:44:45 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0ff8520898244ac405f63ad1964255ffd9eb71115b8d3c83b30fa63a03b710fa-userdata-shm.mount: Deactivated successfully.
Nov 25 16:44:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-1af42e6464b8cbe52d413948164d2e659141496865c81e8115ef129bbdc7df64-merged.mount: Deactivated successfully.
Nov 25 16:44:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1781: 321 pgs: 321 active+clean; 216 MiB data, 665 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.3 MiB/s wr, 181 op/s
Nov 25 16:44:45 compute-0 podman[335883]: 2025-11-25 16:44:45.793278079 +0000 UTC m=+0.605977911 container cleanup 0ff8520898244ac405f63ad1964255ffd9eb71115b8d3c83b30fa63a03b710fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-06ec9e45-7215-4d71-bb7c-e65059e97c36, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:44:45 compute-0 systemd[1]: libpod-conmon-0ff8520898244ac405f63ad1964255ffd9eb71115b8d3c83b30fa63a03b710fa.scope: Deactivated successfully.
Nov 25 16:44:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:44:46 compute-0 nova_compute[254092]: 2025-11-25 16:44:46.290 254096 DEBUG nova.compute.manager [req-821d78f5-dbe6-4946-9754-30ff60763d9a req-a2f8d098-90b4-4e42-9578-a7ced114abe1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Received event network-vif-unplugged-94d30e8d-55a7-40af-be2f-0ce1a8c10b97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:44:46 compute-0 nova_compute[254092]: 2025-11-25 16:44:46.291 254096 DEBUG oslo_concurrency.lockutils [req-821d78f5-dbe6-4946-9754-30ff60763d9a req-a2f8d098-90b4-4e42-9578-a7ced114abe1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f0783fc0-46a8-4c51-95ca-79db0d6d849d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:44:46 compute-0 nova_compute[254092]: 2025-11-25 16:44:46.291 254096 DEBUG oslo_concurrency.lockutils [req-821d78f5-dbe6-4946-9754-30ff60763d9a req-a2f8d098-90b4-4e42-9578-a7ced114abe1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0783fc0-46a8-4c51-95ca-79db0d6d849d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:44:46 compute-0 nova_compute[254092]: 2025-11-25 16:44:46.291 254096 DEBUG oslo_concurrency.lockutils [req-821d78f5-dbe6-4946-9754-30ff60763d9a req-a2f8d098-90b4-4e42-9578-a7ced114abe1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0783fc0-46a8-4c51-95ca-79db0d6d849d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:44:46 compute-0 nova_compute[254092]: 2025-11-25 16:44:46.292 254096 DEBUG nova.compute.manager [req-821d78f5-dbe6-4946-9754-30ff60763d9a req-a2f8d098-90b4-4e42-9578-a7ced114abe1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] No waiting events found dispatching network-vif-unplugged-94d30e8d-55a7-40af-be2f-0ce1a8c10b97 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:44:46 compute-0 nova_compute[254092]: 2025-11-25 16:44:46.292 254096 DEBUG nova.compute.manager [req-821d78f5-dbe6-4946-9754-30ff60763d9a req-a2f8d098-90b4-4e42-9578-a7ced114abe1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Received event network-vif-unplugged-94d30e8d-55a7-40af-be2f-0ce1a8c10b97 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:44:46 compute-0 podman[335933]: 2025-11-25 16:44:46.362275865 +0000 UTC m=+0.539494406 container remove 0ff8520898244ac405f63ad1964255ffd9eb71115b8d3c83b30fa63a03b710fa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-06ec9e45-7215-4d71-bb7c-e65059e97c36, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 16:44:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:46.376 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5e4922ce-7dd4-4182-a4ab-d252f25bcd3a]: (4, ('Tue Nov 25 04:44:45 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-06ec9e45-7215-4d71-bb7c-e65059e97c36 (0ff8520898244ac405f63ad1964255ffd9eb71115b8d3c83b30fa63a03b710fa)\n0ff8520898244ac405f63ad1964255ffd9eb71115b8d3c83b30fa63a03b710fa\nTue Nov 25 04:44:45 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-06ec9e45-7215-4d71-bb7c-e65059e97c36 (0ff8520898244ac405f63ad1964255ffd9eb71115b8d3c83b30fa63a03b710fa)\n0ff8520898244ac405f63ad1964255ffd9eb71115b8d3c83b30fa63a03b710fa\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:46.378 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2a4e3741-39c3-44c4-8063-f04b77665341]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:46.379 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap06ec9e45-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:44:46 compute-0 nova_compute[254092]: 2025-11-25 16:44:46.382 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:46 compute-0 kernel: tap4fe8c3a9-70 (unregistering): left promiscuous mode
Nov 25 16:44:46 compute-0 NetworkManager[48891]: <info>  [1764089086.3877] device (tap4fe8c3a9-70): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:44:46 compute-0 kernel: tap06ec9e45-70: left promiscuous mode
Nov 25 16:44:46 compute-0 nova_compute[254092]: 2025-11-25 16:44:46.406 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:46 compute-0 ovn_controller[153477]: 2025-11-25T16:44:46Z|00753|binding|INFO|Releasing lport 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 from this chassis (sb_readonly=0)
Nov 25 16:44:46 compute-0 ovn_controller[153477]: 2025-11-25T16:44:46Z|00754|binding|INFO|Setting lport 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 down in Southbound
Nov 25 16:44:46 compute-0 nova_compute[254092]: 2025-11-25 16:44:46.412 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:46 compute-0 ovn_controller[153477]: 2025-11-25T16:44:46Z|00755|binding|INFO|Removing iface tap4fe8c3a9-70 ovn-installed in OVS
Nov 25 16:44:46 compute-0 nova_compute[254092]: 2025-11-25 16:44:46.414 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:46.416 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bdb83ae2-66f5-471a-bca6-b6c1844668bd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:46 compute-0 nova_compute[254092]: 2025-11-25 16:44:46.437 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:46 compute-0 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d00000048.scope: Deactivated successfully.
Nov 25 16:44:46 compute-0 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d00000048.scope: Consumed 13.947s CPU time.
Nov 25 16:44:46 compute-0 systemd-machined[216343]: Machine qemu-91-instance-00000048 terminated.
Nov 25 16:44:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:46.449 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[04914e5f-0821-4375-9d90-944b1286ddd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:46.450 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e78cc167-3c33-4990-9079-a8dde85ffb79]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:46.471 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b2caec49-0600-4171-81f0-fea7f13a59ab]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 553464, 'reachable_time': 25900, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335958, 'error': None, 'target': 'ovnmeta-06ec9e45-7215-4d71-bb7c-e65059e97c36', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:46.478 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-06ec9e45-7215-4d71-bb7c-e65059e97c36 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:44:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:46.478 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[23fb6e66-1729-4b9c-8185-392f6003cc0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:46 compute-0 systemd[1]: run-netns-ovnmeta\x2d06ec9e45\x2d7215\x2d4d71\x2dbb7c\x2de65059e97c36.mount: Deactivated successfully.
Nov 25 16:44:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:46.494 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:a0:1c 10.100.0.11'], port_security=['fa:16:3e:ff:a0:1c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '01f96314-1fbe-4eee-a4ed-db7f448a5320', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4d15f5aabd3491da5314b126a20225a', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'f659b7e6-4ca9-46df-a38f-d29c92b67f9f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.187', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa1f8c26-9163-4397-beb3-8395c780a12b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:44:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:46.497 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 in datapath 50ea1716-9d0b-409f-b648-8d2ad5a30525 unbound from our chassis
Nov 25 16:44:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:46.500 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 50ea1716-9d0b-409f-b648-8d2ad5a30525, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:44:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:46.501 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d950fbc7-a421-46dd-bfd8-af714c4e4ee4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:46.503 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525 namespace which is not needed anymore
Nov 25 16:44:46 compute-0 nova_compute[254092]: 2025-11-25 16:44:46.521 254096 INFO nova.virt.libvirt.driver [-] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Instance destroyed successfully.
Nov 25 16:44:46 compute-0 nova_compute[254092]: 2025-11-25 16:44:46.521 254096 DEBUG nova.objects.instance [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'resources' on Instance uuid 01f96314-1fbe-4eee-a4ed-db7f448a5320 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:44:46 compute-0 nova_compute[254092]: 2025-11-25 16:44:46.537 254096 DEBUG nova.virt.libvirt.vif [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:42:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1951123052',display_name='tempest-ServerActionsTestJSON-server-1951123052',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1951123052',id=72,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOul8HeAt+UIxCfJfn+60t4YpZV7iSL4LJDORWr6iCR1ov+YI0semWU/ei1rpSzmkTyhB+zsYhvO/JrM/v0aowbnChSU76J6tWISFPQqrNakhVmsB30NDWBI6kYIFzj8+Q==',key_name='tempest-keypair-1452948426',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:43:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b4d15f5aabd3491da5314b126a20225a',ramdisk_id='',reservation_id='r-gjhb48uk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1811453217',owner_user_name='tempest-ServerActionsTestJSON-1811453217-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:44:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aea2d8cf3bb54cdbbc72e41805fb1f90',uuid=01f96314-1fbe-4eee-a4ed-db7f448a5320,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:44:46 compute-0 nova_compute[254092]: 2025-11-25 16:44:46.538 254096 DEBUG nova.network.os_vif_util [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converting VIF {"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:44:46 compute-0 nova_compute[254092]: 2025-11-25 16:44:46.538 254096 DEBUG nova.network.os_vif_util [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:44:46 compute-0 nova_compute[254092]: 2025-11-25 16:44:46.539 254096 DEBUG os_vif [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:44:46 compute-0 nova_compute[254092]: 2025-11-25 16:44:46.540 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:46 compute-0 nova_compute[254092]: 2025-11-25 16:44:46.540 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4fe8c3a9-70, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:44:46 compute-0 nova_compute[254092]: 2025-11-25 16:44:46.542 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:46 compute-0 nova_compute[254092]: 2025-11-25 16:44:46.545 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:46 compute-0 nova_compute[254092]: 2025-11-25 16:44:46.547 254096 INFO os_vif [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70')
Nov 25 16:44:46 compute-0 nova_compute[254092]: 2025-11-25 16:44:46.554 254096 DEBUG nova.virt.libvirt.driver [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Start _get_guest_xml network_info=[{"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:44:46 compute-0 nova_compute[254092]: 2025-11-25 16:44:46.559 254096 WARNING nova.virt.libvirt.driver [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:44:46 compute-0 nova_compute[254092]: 2025-11-25 16:44:46.565 254096 DEBUG nova.virt.libvirt.host [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:44:46 compute-0 nova_compute[254092]: 2025-11-25 16:44:46.565 254096 DEBUG nova.virt.libvirt.host [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:44:46 compute-0 nova_compute[254092]: 2025-11-25 16:44:46.568 254096 DEBUG nova.virt.libvirt.host [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:44:46 compute-0 nova_compute[254092]: 2025-11-25 16:44:46.568 254096 DEBUG nova.virt.libvirt.host [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:44:46 compute-0 nova_compute[254092]: 2025-11-25 16:44:46.568 254096 DEBUG nova.virt.libvirt.driver [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:44:46 compute-0 nova_compute[254092]: 2025-11-25 16:44:46.568 254096 DEBUG nova.virt.hardware [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:44:46 compute-0 nova_compute[254092]: 2025-11-25 16:44:46.569 254096 DEBUG nova.virt.hardware [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:44:46 compute-0 nova_compute[254092]: 2025-11-25 16:44:46.569 254096 DEBUG nova.virt.hardware [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:44:46 compute-0 nova_compute[254092]: 2025-11-25 16:44:46.569 254096 DEBUG nova.virt.hardware [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:44:46 compute-0 nova_compute[254092]: 2025-11-25 16:44:46.569 254096 DEBUG nova.virt.hardware [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:44:46 compute-0 nova_compute[254092]: 2025-11-25 16:44:46.569 254096 DEBUG nova.virt.hardware [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:44:46 compute-0 nova_compute[254092]: 2025-11-25 16:44:46.570 254096 DEBUG nova.virt.hardware [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:44:46 compute-0 nova_compute[254092]: 2025-11-25 16:44:46.570 254096 DEBUG nova.virt.hardware [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:44:46 compute-0 nova_compute[254092]: 2025-11-25 16:44:46.570 254096 DEBUG nova.virt.hardware [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:44:46 compute-0 nova_compute[254092]: 2025-11-25 16:44:46.570 254096 DEBUG nova.virt.hardware [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:44:46 compute-0 nova_compute[254092]: 2025-11-25 16:44:46.570 254096 DEBUG nova.virt.hardware [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:44:46 compute-0 nova_compute[254092]: 2025-11-25 16:44:46.570 254096 DEBUG nova.objects.instance [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'vcpu_model' on Instance uuid 01f96314-1fbe-4eee-a4ed-db7f448a5320 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:44:46 compute-0 nova_compute[254092]: 2025-11-25 16:44:46.583 254096 DEBUG oslo_concurrency.processutils [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:44:46 compute-0 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[334846]: [NOTICE]   (334850) : haproxy version is 2.8.14-c23fe91
Nov 25 16:44:46 compute-0 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[334846]: [NOTICE]   (334850) : path to executable is /usr/sbin/haproxy
Nov 25 16:44:46 compute-0 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[334846]: [WARNING]  (334850) : Exiting Master process...
Nov 25 16:44:46 compute-0 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[334846]: [ALERT]    (334850) : Current worker (334852) exited with code 143 (Terminated)
Nov 25 16:44:46 compute-0 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[334846]: [WARNING]  (334850) : All workers exited. Exiting... (0)
Nov 25 16:44:46 compute-0 systemd[1]: libpod-10d18702ae5afdd836b5d5f261c5cf853f9772136dfef412da1ae7c87051a63b.scope: Deactivated successfully.
Nov 25 16:44:46 compute-0 podman[335989]: 2025-11-25 16:44:46.760530249 +0000 UTC m=+0.157869043 container died 10d18702ae5afdd836b5d5f261c5cf853f9772136dfef412da1ae7c87051a63b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 16:44:47 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:44:47 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1697321682' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:44:47 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-10d18702ae5afdd836b5d5f261c5cf853f9772136dfef412da1ae7c87051a63b-userdata-shm.mount: Deactivated successfully.
Nov 25 16:44:47 compute-0 nova_compute[254092]: 2025-11-25 16:44:47.058 254096 DEBUG oslo_concurrency.processutils [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:44:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5f08ec3ca82ffeedf6f0dbdbd4e85e5261e38826982127dc852dcb2188945f4-merged.mount: Deactivated successfully.
Nov 25 16:44:47 compute-0 nova_compute[254092]: 2025-11-25 16:44:47.096 254096 DEBUG oslo_concurrency.processutils [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:44:47 compute-0 ceph-mon[74985]: pgmap v1781: 321 pgs: 321 active+clean; 216 MiB data, 665 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.3 MiB/s wr, 181 op/s
Nov 25 16:44:47 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1697321682' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:44:47 compute-0 podman[335989]: 2025-11-25 16:44:47.370680213 +0000 UTC m=+0.768018997 container cleanup 10d18702ae5afdd836b5d5f261c5cf853f9772136dfef412da1ae7c87051a63b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 25 16:44:47 compute-0 systemd[1]: libpod-conmon-10d18702ae5afdd836b5d5f261c5cf853f9772136dfef412da1ae7c87051a63b.scope: Deactivated successfully.
Nov 25 16:44:47 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:44:47 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1386877185' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:44:47 compute-0 nova_compute[254092]: 2025-11-25 16:44:47.567 254096 DEBUG oslo_concurrency.processutils [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:44:47 compute-0 nova_compute[254092]: 2025-11-25 16:44:47.574 254096 DEBUG nova.virt.libvirt.vif [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:42:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1951123052',display_name='tempest-ServerActionsTestJSON-server-1951123052',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1951123052',id=72,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOul8HeAt+UIxCfJfn+60t4YpZV7iSL4LJDORWr6iCR1ov+YI0semWU/ei1rpSzmkTyhB+zsYhvO/JrM/v0aowbnChSU76J6tWISFPQqrNakhVmsB30NDWBI6kYIFzj8+Q==',key_name='tempest-keypair-1452948426',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:43:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b4d15f5aabd3491da5314b126a20225a',ramdisk_id='',reservation_id='r-gjhb48uk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1811453217',owner_user_name='tempest-ServerActionsTestJSON-1811453217-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:44:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aea2d8cf3bb54cdbbc72e41805fb1f90',uuid=01f96314-1fbe-4eee-a4ed-db7f448a5320,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:44:47 compute-0 nova_compute[254092]: 2025-11-25 16:44:47.575 254096 DEBUG nova.network.os_vif_util [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converting VIF {"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:44:47 compute-0 nova_compute[254092]: 2025-11-25 16:44:47.576 254096 DEBUG nova.network.os_vif_util [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:44:47 compute-0 nova_compute[254092]: 2025-11-25 16:44:47.577 254096 DEBUG nova.objects.instance [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'pci_devices' on Instance uuid 01f96314-1fbe-4eee-a4ed-db7f448a5320 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:44:47 compute-0 nova_compute[254092]: 2025-11-25 16:44:47.591 254096 DEBUG nova.virt.libvirt.driver [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:44:47 compute-0 nova_compute[254092]:   <uuid>01f96314-1fbe-4eee-a4ed-db7f448a5320</uuid>
Nov 25 16:44:47 compute-0 nova_compute[254092]:   <name>instance-00000048</name>
Nov 25 16:44:47 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:44:47 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:44:47 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:44:47 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:       <nova:name>tempest-ServerActionsTestJSON-server-1951123052</nova:name>
Nov 25 16:44:47 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:44:46</nova:creationTime>
Nov 25 16:44:47 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:44:47 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:44:47 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:44:47 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:44:47 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:44:47 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:44:47 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:44:47 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:44:47 compute-0 nova_compute[254092]:         <nova:user uuid="aea2d8cf3bb54cdbbc72e41805fb1f90">tempest-ServerActionsTestJSON-1811453217-project-member</nova:user>
Nov 25 16:44:47 compute-0 nova_compute[254092]:         <nova:project uuid="b4d15f5aabd3491da5314b126a20225a">tempest-ServerActionsTestJSON-1811453217</nova:project>
Nov 25 16:44:47 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:44:47 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:44:47 compute-0 nova_compute[254092]:         <nova:port uuid="4fe8c3a9-70ba-4a51-8bc1-1505a49fe349">
Nov 25 16:44:47 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:44:47 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:44:47 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:44:47 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:44:47 compute-0 nova_compute[254092]:     <system>
Nov 25 16:44:47 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:44:47 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:44:47 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:44:47 compute-0 nova_compute[254092]:       <entry name="serial">01f96314-1fbe-4eee-a4ed-db7f448a5320</entry>
Nov 25 16:44:47 compute-0 nova_compute[254092]:       <entry name="uuid">01f96314-1fbe-4eee-a4ed-db7f448a5320</entry>
Nov 25 16:44:47 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     </system>
Nov 25 16:44:47 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:44:47 compute-0 nova_compute[254092]:   <os>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:   </os>
Nov 25 16:44:47 compute-0 nova_compute[254092]:   <features>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:   </features>
Nov 25 16:44:47 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:44:47 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:44:47 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:44:47 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:44:47 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:44:47 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/01f96314-1fbe-4eee-a4ed-db7f448a5320_disk">
Nov 25 16:44:47 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:       </source>
Nov 25 16:44:47 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:44:47 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:44:47 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:44:47 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/01f96314-1fbe-4eee-a4ed-db7f448a5320_disk.config">
Nov 25 16:44:47 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:       </source>
Nov 25 16:44:47 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:44:47 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:44:47 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:44:47 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:ff:a0:1c"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:       <target dev="tap4fe8c3a9-70"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:44:47 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/01f96314-1fbe-4eee-a4ed-db7f448a5320/console.log" append="off"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     <video>
Nov 25 16:44:47 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     </video>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     <input type="keyboard" bus="usb"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:44:47 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:44:47 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:44:47 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:44:47 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:44:47 compute-0 nova_compute[254092]: </domain>
Nov 25 16:44:47 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:44:47 compute-0 nova_compute[254092]: 2025-11-25 16:44:47.592 254096 DEBUG nova.virt.libvirt.driver [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] skipping disk for instance-00000048 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:44:47 compute-0 nova_compute[254092]: 2025-11-25 16:44:47.592 254096 DEBUG nova.virt.libvirt.driver [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] skipping disk for instance-00000048 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:44:47 compute-0 nova_compute[254092]: 2025-11-25 16:44:47.593 254096 DEBUG nova.virt.libvirt.vif [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:42:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1951123052',display_name='tempest-ServerActionsTestJSON-server-1951123052',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1951123052',id=72,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOul8HeAt+UIxCfJfn+60t4YpZV7iSL4LJDORWr6iCR1ov+YI0semWU/ei1rpSzmkTyhB+zsYhvO/JrM/v0aowbnChSU76J6tWISFPQqrNakhVmsB30NDWBI6kYIFzj8+Q==',key_name='tempest-keypair-1452948426',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:43:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='b4d15f5aabd3491da5314b126a20225a',ramdisk_id='',reservation_id='r-gjhb48uk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1811453217',owner_user_name='tempest-ServerActionsTestJSON-1811453217-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:44:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aea2d8cf3bb54cdbbc72e41805fb1f90',uuid=01f96314-1fbe-4eee-a4ed-db7f448a5320,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:44:47 compute-0 nova_compute[254092]: 2025-11-25 16:44:47.593 254096 DEBUG nova.network.os_vif_util [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converting VIF {"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:44:47 compute-0 nova_compute[254092]: 2025-11-25 16:44:47.594 254096 DEBUG nova.network.os_vif_util [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:44:47 compute-0 nova_compute[254092]: 2025-11-25 16:44:47.594 254096 DEBUG os_vif [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:44:47 compute-0 nova_compute[254092]: 2025-11-25 16:44:47.594 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:47 compute-0 nova_compute[254092]: 2025-11-25 16:44:47.595 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:44:47 compute-0 nova_compute[254092]: 2025-11-25 16:44:47.595 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:44:47 compute-0 nova_compute[254092]: 2025-11-25 16:44:47.599 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:47 compute-0 nova_compute[254092]: 2025-11-25 16:44:47.599 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4fe8c3a9-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:44:47 compute-0 nova_compute[254092]: 2025-11-25 16:44:47.599 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4fe8c3a9-70, col_values=(('external_ids', {'iface-id': '4fe8c3a9-70ba-4a51-8bc1-1505a49fe349', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ff:a0:1c', 'vm-uuid': '01f96314-1fbe-4eee-a4ed-db7f448a5320'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:44:47 compute-0 nova_compute[254092]: 2025-11-25 16:44:47.600 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:47 compute-0 NetworkManager[48891]: <info>  [1764089087.6016] manager: (tap4fe8c3a9-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/329)
Nov 25 16:44:47 compute-0 nova_compute[254092]: 2025-11-25 16:44:47.603 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:44:47 compute-0 nova_compute[254092]: 2025-11-25 16:44:47.605 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:47 compute-0 nova_compute[254092]: 2025-11-25 16:44:47.606 254096 INFO os_vif [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70')
Nov 25 16:44:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1782: 321 pgs: 321 active+clean; 216 MiB data, 665 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 125 op/s
Nov 25 16:44:47 compute-0 kernel: tap4fe8c3a9-70: entered promiscuous mode
Nov 25 16:44:47 compute-0 NetworkManager[48891]: <info>  [1764089087.8104] manager: (tap4fe8c3a9-70): new Tun device (/org/freedesktop/NetworkManager/Devices/330)
Nov 25 16:44:47 compute-0 systemd-udevd[335952]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:44:47 compute-0 ovn_controller[153477]: 2025-11-25T16:44:47Z|00756|binding|INFO|Claiming lport 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 for this chassis.
Nov 25 16:44:47 compute-0 ovn_controller[153477]: 2025-11-25T16:44:47Z|00757|binding|INFO|4fe8c3a9-70ba-4a51-8bc1-1505a49fe349: Claiming fa:16:3e:ff:a0:1c 10.100.0.11
Nov 25 16:44:47 compute-0 nova_compute[254092]: 2025-11-25 16:44:47.811 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:47 compute-0 NetworkManager[48891]: <info>  [1764089087.8237] device (tap4fe8c3a9-70): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:44:47 compute-0 NetworkManager[48891]: <info>  [1764089087.8250] device (tap4fe8c3a9-70): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:44:47 compute-0 ovn_controller[153477]: 2025-11-25T16:44:47Z|00758|binding|INFO|Setting lport 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 ovn-installed in OVS
Nov 25 16:44:47 compute-0 nova_compute[254092]: 2025-11-25 16:44:47.829 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:47 compute-0 nova_compute[254092]: 2025-11-25 16:44:47.831 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:47 compute-0 systemd-machined[216343]: New machine qemu-94-instance-00000048.
Nov 25 16:44:47 compute-0 systemd[1]: Started Virtual Machine qemu-94-instance-00000048.
Nov 25 16:44:47 compute-0 ovn_controller[153477]: 2025-11-25T16:44:47Z|00759|binding|INFO|Setting lport 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 up in Southbound
Nov 25 16:44:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:47.924 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:a0:1c 10.100.0.11'], port_security=['fa:16:3e:ff:a0:1c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '01f96314-1fbe-4eee-a4ed-db7f448a5320', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4d15f5aabd3491da5314b126a20225a', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'f659b7e6-4ca9-46df-a38f-d29c92b67f9f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.187'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa1f8c26-9163-4397-beb3-8395c780a12b, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:44:48 compute-0 podman[336080]: 2025-11-25 16:44:48.064346282 +0000 UTC m=+0.666096352 container remove 10d18702ae5afdd836b5d5f261c5cf853f9772136dfef412da1ae7c87051a63b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:48.076 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c9d40c9f-4f8d-4fd5-a3d6-06200479e997]: (4, ('Tue Nov 25 04:44:46 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525 (10d18702ae5afdd836b5d5f261c5cf853f9772136dfef412da1ae7c87051a63b)\n10d18702ae5afdd836b5d5f261c5cf853f9772136dfef412da1ae7c87051a63b\nTue Nov 25 04:44:47 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525 (10d18702ae5afdd836b5d5f261c5cf853f9772136dfef412da1ae7c87051a63b)\n10d18702ae5afdd836b5d5f261c5cf853f9772136dfef412da1ae7c87051a63b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:48.081 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3c23c8dc-939d-4bff-b48b-0728f54c6a96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:48.083 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50ea1716-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:44:48 compute-0 kernel: tap50ea1716-90: left promiscuous mode
Nov 25 16:44:48 compute-0 nova_compute[254092]: 2025-11-25 16:44:48.103 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:48 compute-0 nova_compute[254092]: 2025-11-25 16:44:48.108 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:48 compute-0 nova_compute[254092]: 2025-11-25 16:44:48.108 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:48.112 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f2b81202-f847-4ef0-9438-220fb25b9410]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:48.132 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4bdb48a5-6a16-4f9a-ac01-79f832e4d487]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:48.134 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[21f57190-81f3-4cfe-9bbe-62168298cf11]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:48.157 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8f902108-d998-448d-9d73-25ee4b014178]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 551521, 'reachable_time': 20684, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336120, 'error': None, 'target': 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:48.161 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:48.161 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[63e00231-8b0e-48ba-a186-c53b7ea9b904]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:48 compute-0 systemd[1]: run-netns-ovnmeta\x2d50ea1716\x2d9d0b\x2d409f\x2db648\x2d8d2ad5a30525.mount: Deactivated successfully.
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:48.162 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 in datapath 50ea1716-9d0b-409f-b648-8d2ad5a30525 unbound from our chassis
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:48.165 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 50ea1716-9d0b-409f-b648-8d2ad5a30525
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:48.183 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1742636f-0bf9-4a99-8315-ce12e45d99c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:48.185 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap50ea1716-91 in ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:48.187 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap50ea1716-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:48.187 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3d81c9db-abc1-476b-b868-728efd4f34e7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:48.189 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6841f2d5-c4e6-42df-96c7-663c4cfe3389]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:48.202 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[9f6763ac-3df7-452c-9664-fb16dd8fd721]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:48.220 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d8c497c0-d8c4-437d-9640-170365c42d41]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:48.270 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9f9749f4-9e05-41a6-8f36-797dad55e8c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:48.280 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7a9ca76f-3504-40bb-8a61-fc73df5ddf13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:48 compute-0 NetworkManager[48891]: <info>  [1764089088.2836] manager: (tap50ea1716-90): new Veth device (/org/freedesktop/NetworkManager/Devices/331)
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:48.324 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6da10ec5-2059-43b9-9866-63813771c9d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:48.328 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[796f2e7d-c61e-42dd-ab77-1ec967e75cb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:48 compute-0 nova_compute[254092]: 2025-11-25 16:44:48.338 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:48 compute-0 NetworkManager[48891]: <info>  [1764089088.3555] device (tap50ea1716-90): carrier: link connected
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:48.362 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9cd98066-5a14-43a1-8ac3-dd10ed79a876]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:48.383 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[da0647b7-64dc-45ac-a0ce-7dd807bd444d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap50ea1716-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:99:3a:54'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 225], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554594, 'reachable_time': 15382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336163, 'error': None, 'target': 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:48.402 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[aff4ec2a-ae46-4880-8c35-937411e4314d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe99:3a54'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 554594, 'tstamp': 554594}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336164, 'error': None, 'target': 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:48.417 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[622ab4ac-b934-4ce5-bd0b-a13c84ee0118]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap50ea1716-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:99:3a:54'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 225], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554594, 'reachable_time': 15382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 336165, 'error': None, 'target': 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:48.444 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[eb749cb8-7d9b-4a5e-b429-d4fe844174df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:48 compute-0 nova_compute[254092]: 2025-11-25 16:44:48.472 254096 DEBUG nova.compute.manager [req-5b26998d-a0cb-4ad3-b0b3-01ea4b42a87e req-a10ed94f-3c87-4bd2-a4e7-446e60f4e595 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Received event network-vif-plugged-94d30e8d-55a7-40af-be2f-0ce1a8c10b97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:44:48 compute-0 nova_compute[254092]: 2025-11-25 16:44:48.472 254096 DEBUG oslo_concurrency.lockutils [req-5b26998d-a0cb-4ad3-b0b3-01ea4b42a87e req-a10ed94f-3c87-4bd2-a4e7-446e60f4e595 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f0783fc0-46a8-4c51-95ca-79db0d6d849d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:44:48 compute-0 nova_compute[254092]: 2025-11-25 16:44:48.472 254096 DEBUG oslo_concurrency.lockutils [req-5b26998d-a0cb-4ad3-b0b3-01ea4b42a87e req-a10ed94f-3c87-4bd2-a4e7-446e60f4e595 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0783fc0-46a8-4c51-95ca-79db0d6d849d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:44:48 compute-0 nova_compute[254092]: 2025-11-25 16:44:48.473 254096 DEBUG oslo_concurrency.lockutils [req-5b26998d-a0cb-4ad3-b0b3-01ea4b42a87e req-a10ed94f-3c87-4bd2-a4e7-446e60f4e595 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0783fc0-46a8-4c51-95ca-79db0d6d849d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:44:48 compute-0 nova_compute[254092]: 2025-11-25 16:44:48.473 254096 DEBUG nova.compute.manager [req-5b26998d-a0cb-4ad3-b0b3-01ea4b42a87e req-a10ed94f-3c87-4bd2-a4e7-446e60f4e595 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] No waiting events found dispatching network-vif-plugged-94d30e8d-55a7-40af-be2f-0ce1a8c10b97 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:44:48 compute-0 nova_compute[254092]: 2025-11-25 16:44:48.473 254096 WARNING nova.compute.manager [req-5b26998d-a0cb-4ad3-b0b3-01ea4b42a87e req-a10ed94f-3c87-4bd2-a4e7-446e60f4e595 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Received unexpected event network-vif-plugged-94d30e8d-55a7-40af-be2f-0ce1a8c10b97 for instance with vm_state active and task_state deleting.
Nov 25 16:44:48 compute-0 nova_compute[254092]: 2025-11-25 16:44:48.473 254096 DEBUG nova.compute.manager [req-5b26998d-a0cb-4ad3-b0b3-01ea4b42a87e req-a10ed94f-3c87-4bd2-a4e7-446e60f4e595 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received event network-vif-unplugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:44:48 compute-0 nova_compute[254092]: 2025-11-25 16:44:48.473 254096 DEBUG oslo_concurrency.lockutils [req-5b26998d-a0cb-4ad3-b0b3-01ea4b42a87e req-a10ed94f-3c87-4bd2-a4e7-446e60f4e595 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:44:48 compute-0 nova_compute[254092]: 2025-11-25 16:44:48.473 254096 DEBUG oslo_concurrency.lockutils [req-5b26998d-a0cb-4ad3-b0b3-01ea4b42a87e req-a10ed94f-3c87-4bd2-a4e7-446e60f4e595 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:44:48 compute-0 nova_compute[254092]: 2025-11-25 16:44:48.474 254096 DEBUG oslo_concurrency.lockutils [req-5b26998d-a0cb-4ad3-b0b3-01ea4b42a87e req-a10ed94f-3c87-4bd2-a4e7-446e60f4e595 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:44:48 compute-0 nova_compute[254092]: 2025-11-25 16:44:48.474 254096 DEBUG nova.compute.manager [req-5b26998d-a0cb-4ad3-b0b3-01ea4b42a87e req-a10ed94f-3c87-4bd2-a4e7-446e60f4e595 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] No waiting events found dispatching network-vif-unplugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:44:48 compute-0 nova_compute[254092]: 2025-11-25 16:44:48.474 254096 WARNING nova.compute.manager [req-5b26998d-a0cb-4ad3-b0b3-01ea4b42a87e req-a10ed94f-3c87-4bd2-a4e7-446e60f4e595 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received unexpected event network-vif-unplugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 for instance with vm_state active and task_state reboot_started_hard.
Nov 25 16:44:48 compute-0 nova_compute[254092]: 2025-11-25 16:44:48.474 254096 DEBUG nova.compute.manager [req-5b26998d-a0cb-4ad3-b0b3-01ea4b42a87e req-a10ed94f-3c87-4bd2-a4e7-446e60f4e595 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:44:48 compute-0 nova_compute[254092]: 2025-11-25 16:44:48.474 254096 DEBUG oslo_concurrency.lockutils [req-5b26998d-a0cb-4ad3-b0b3-01ea4b42a87e req-a10ed94f-3c87-4bd2-a4e7-446e60f4e595 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:44:48 compute-0 nova_compute[254092]: 2025-11-25 16:44:48.475 254096 DEBUG oslo_concurrency.lockutils [req-5b26998d-a0cb-4ad3-b0b3-01ea4b42a87e req-a10ed94f-3c87-4bd2-a4e7-446e60f4e595 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:44:48 compute-0 nova_compute[254092]: 2025-11-25 16:44:48.475 254096 DEBUG oslo_concurrency.lockutils [req-5b26998d-a0cb-4ad3-b0b3-01ea4b42a87e req-a10ed94f-3c87-4bd2-a4e7-446e60f4e595 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:44:48 compute-0 nova_compute[254092]: 2025-11-25 16:44:48.475 254096 DEBUG nova.compute.manager [req-5b26998d-a0cb-4ad3-b0b3-01ea4b42a87e req-a10ed94f-3c87-4bd2-a4e7-446e60f4e595 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] No waiting events found dispatching network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:44:48 compute-0 nova_compute[254092]: 2025-11-25 16:44:48.475 254096 WARNING nova.compute.manager [req-5b26998d-a0cb-4ad3-b0b3-01ea4b42a87e req-a10ed94f-3c87-4bd2-a4e7-446e60f4e595 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received unexpected event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 for instance with vm_state active and task_state reboot_started_hard.
Nov 25 16:44:48 compute-0 nova_compute[254092]: 2025-11-25 16:44:48.475 254096 DEBUG nova.compute.manager [req-5b26998d-a0cb-4ad3-b0b3-01ea4b42a87e req-a10ed94f-3c87-4bd2-a4e7-446e60f4e595 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:44:48 compute-0 nova_compute[254092]: 2025-11-25 16:44:48.476 254096 DEBUG oslo_concurrency.lockutils [req-5b26998d-a0cb-4ad3-b0b3-01ea4b42a87e req-a10ed94f-3c87-4bd2-a4e7-446e60f4e595 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:44:48 compute-0 nova_compute[254092]: 2025-11-25 16:44:48.476 254096 DEBUG oslo_concurrency.lockutils [req-5b26998d-a0cb-4ad3-b0b3-01ea4b42a87e req-a10ed94f-3c87-4bd2-a4e7-446e60f4e595 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:44:48 compute-0 nova_compute[254092]: 2025-11-25 16:44:48.476 254096 DEBUG oslo_concurrency.lockutils [req-5b26998d-a0cb-4ad3-b0b3-01ea4b42a87e req-a10ed94f-3c87-4bd2-a4e7-446e60f4e595 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:44:48 compute-0 nova_compute[254092]: 2025-11-25 16:44:48.476 254096 DEBUG nova.compute.manager [req-5b26998d-a0cb-4ad3-b0b3-01ea4b42a87e req-a10ed94f-3c87-4bd2-a4e7-446e60f4e595 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] No waiting events found dispatching network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:44:48 compute-0 nova_compute[254092]: 2025-11-25 16:44:48.476 254096 WARNING nova.compute.manager [req-5b26998d-a0cb-4ad3-b0b3-01ea4b42a87e req-a10ed94f-3c87-4bd2-a4e7-446e60f4e595 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received unexpected event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 for instance with vm_state active and task_state reboot_started_hard.
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:48.508 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1fab1554-c873-468b-9502-1f2c3a79c55a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:48.510 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50ea1716-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:48.510 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:48.511 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap50ea1716-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:44:48 compute-0 nova_compute[254092]: 2025-11-25 16:44:48.514 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:48 compute-0 NetworkManager[48891]: <info>  [1764089088.5152] manager: (tap50ea1716-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/332)
Nov 25 16:44:48 compute-0 kernel: tap50ea1716-90: entered promiscuous mode
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:48.519 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap50ea1716-90, col_values=(('external_ids', {'iface-id': '1ec3401d-acb9-4d50-bfc2-f0a1127f4745'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:44:48 compute-0 nova_compute[254092]: 2025-11-25 16:44:48.518 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:48 compute-0 ovn_controller[153477]: 2025-11-25T16:44:48Z|00760|binding|INFO|Releasing lport 1ec3401d-acb9-4d50-bfc2-f0a1127f4745 from this chassis (sb_readonly=0)
Nov 25 16:44:48 compute-0 nova_compute[254092]: 2025-11-25 16:44:48.520 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:48 compute-0 nova_compute[254092]: 2025-11-25 16:44:48.537 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:48.538 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/50ea1716-9d0b-409f-b648-8d2ad5a30525.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/50ea1716-9d0b-409f-b648-8d2ad5a30525.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:48.540 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[905a58be-ae73-4fa1-aeb4-348277040f18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:48.541 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-50ea1716-9d0b-409f-b648-8d2ad5a30525
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/50ea1716-9d0b-409f-b648-8d2ad5a30525.pid.haproxy
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 50ea1716-9d0b-409f-b648-8d2ad5a30525
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:44:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:44:48.542 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'env', 'PROCESS_TAG=haproxy-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/50ea1716-9d0b-409f-b648-8d2ad5a30525.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:44:48 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1386877185' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:44:48 compute-0 ceph-mon[74985]: pgmap v1782: 321 pgs: 321 active+clean; 216 MiB data, 665 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 125 op/s
Nov 25 16:44:48 compute-0 nova_compute[254092]: 2025-11-25 16:44:48.777 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for 01f96314-1fbe-4eee-a4ed-db7f448a5320 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 16:44:48 compute-0 nova_compute[254092]: 2025-11-25 16:44:48.778 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089088.776217, 01f96314-1fbe-4eee-a4ed-db7f448a5320 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:44:48 compute-0 nova_compute[254092]: 2025-11-25 16:44:48.779 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] VM Resumed (Lifecycle Event)
Nov 25 16:44:48 compute-0 nova_compute[254092]: 2025-11-25 16:44:48.784 254096 DEBUG nova.compute.manager [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:44:48 compute-0 nova_compute[254092]: 2025-11-25 16:44:48.790 254096 INFO nova.virt.libvirt.driver [-] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Instance rebooted successfully.
Nov 25 16:44:48 compute-0 nova_compute[254092]: 2025-11-25 16:44:48.790 254096 DEBUG nova.compute.manager [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:44:48 compute-0 nova_compute[254092]: 2025-11-25 16:44:48.801 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:44:48 compute-0 nova_compute[254092]: 2025-11-25 16:44:48.805 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:44:48 compute-0 nova_compute[254092]: 2025-11-25 16:44:48.821 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Nov 25 16:44:48 compute-0 nova_compute[254092]: 2025-11-25 16:44:48.822 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089088.7764027, 01f96314-1fbe-4eee-a4ed-db7f448a5320 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:44:48 compute-0 nova_compute[254092]: 2025-11-25 16:44:48.822 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] VM Started (Lifecycle Event)
Nov 25 16:44:48 compute-0 nova_compute[254092]: 2025-11-25 16:44:48.853 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:44:48 compute-0 nova_compute[254092]: 2025-11-25 16:44:48.858 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:44:48 compute-0 nova_compute[254092]: 2025-11-25 16:44:48.879 254096 DEBUG oslo_concurrency.lockutils [None req-e6c0d172-7714-4492-b3b8-0e711e96ece1 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 6.130s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:44:48 compute-0 podman[336221]: 2025-11-25 16:44:48.90580339 +0000 UTC m=+0.027729663 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:44:49 compute-0 podman[336221]: 2025-11-25 16:44:49.223603313 +0000 UTC m=+0.345529546 container create 5a5589ecefb61d23eb5a0bc142b221ae387fe582e371e7516c0aebef1cf30f1c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:44:49 compute-0 systemd[1]: Started libpod-conmon-5a5589ecefb61d23eb5a0bc142b221ae387fe582e371e7516c0aebef1cf30f1c.scope.
Nov 25 16:44:49 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:44:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e28edf0ac28fb1100bbac0b1fef16684a1848bb22824963373e71ee5754226dc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:44:49 compute-0 podman[336221]: 2025-11-25 16:44:49.427336969 +0000 UTC m=+0.549263202 container init 5a5589ecefb61d23eb5a0bc142b221ae387fe582e371e7516c0aebef1cf30f1c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 16:44:49 compute-0 podman[336221]: 2025-11-25 16:44:49.435744807 +0000 UTC m=+0.557671050 container start 5a5589ecefb61d23eb5a0bc142b221ae387fe582e371e7516c0aebef1cf30f1c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 25 16:44:49 compute-0 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[336237]: [NOTICE]   (336241) : New worker (336243) forked
Nov 25 16:44:49 compute-0 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[336237]: [NOTICE]   (336241) : Loading success.
Nov 25 16:44:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1783: 321 pgs: 321 active+clean; 216 MiB data, 665 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 113 op/s
Nov 25 16:44:49 compute-0 nova_compute[254092]: 2025-11-25 16:44:49.751 254096 DEBUG nova.compute.manager [req-333f4d3f-451d-4687-b98a-09a7a4a9fcac req-30be99c2-380a-4ce5-b108-95e9add9ab98 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Received event network-vif-plugged-41fd5f5b-445b-4eed-adf5-045ddb262021 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:44:49 compute-0 nova_compute[254092]: 2025-11-25 16:44:49.751 254096 DEBUG oslo_concurrency.lockutils [req-333f4d3f-451d-4687-b98a-09a7a4a9fcac req-30be99c2-380a-4ce5-b108-95e9add9ab98 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "98410ff5-26ab-4406-8d1b-063d9e114cf8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:44:49 compute-0 nova_compute[254092]: 2025-11-25 16:44:49.752 254096 DEBUG oslo_concurrency.lockutils [req-333f4d3f-451d-4687-b98a-09a7a4a9fcac req-30be99c2-380a-4ce5-b108-95e9add9ab98 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "98410ff5-26ab-4406-8d1b-063d9e114cf8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:44:49 compute-0 nova_compute[254092]: 2025-11-25 16:44:49.752 254096 DEBUG oslo_concurrency.lockutils [req-333f4d3f-451d-4687-b98a-09a7a4a9fcac req-30be99c2-380a-4ce5-b108-95e9add9ab98 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "98410ff5-26ab-4406-8d1b-063d9e114cf8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:44:49 compute-0 nova_compute[254092]: 2025-11-25 16:44:49.752 254096 DEBUG nova.compute.manager [req-333f4d3f-451d-4687-b98a-09a7a4a9fcac req-30be99c2-380a-4ce5-b108-95e9add9ab98 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Processing event network-vif-plugged-41fd5f5b-445b-4eed-adf5-045ddb262021 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:44:49 compute-0 nova_compute[254092]: 2025-11-25 16:44:49.753 254096 DEBUG nova.compute.manager [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:44:49 compute-0 nova_compute[254092]: 2025-11-25 16:44:49.764 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089089.7639554, 98410ff5-26ab-4406-8d1b-063d9e114cf8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:44:49 compute-0 nova_compute[254092]: 2025-11-25 16:44:49.765 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] VM Resumed (Lifecycle Event)
Nov 25 16:44:49 compute-0 nova_compute[254092]: 2025-11-25 16:44:49.766 254096 DEBUG nova.virt.libvirt.driver [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:44:49 compute-0 nova_compute[254092]: 2025-11-25 16:44:49.770 254096 INFO nova.virt.libvirt.driver [-] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Instance spawned successfully.
Nov 25 16:44:49 compute-0 nova_compute[254092]: 2025-11-25 16:44:49.770 254096 DEBUG nova.virt.libvirt.driver [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:44:49 compute-0 nova_compute[254092]: 2025-11-25 16:44:49.792 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:44:49 compute-0 nova_compute[254092]: 2025-11-25 16:44:49.800 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:44:49 compute-0 nova_compute[254092]: 2025-11-25 16:44:49.803 254096 DEBUG nova.virt.libvirt.driver [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:44:49 compute-0 nova_compute[254092]: 2025-11-25 16:44:49.804 254096 DEBUG nova.virt.libvirt.driver [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:44:49 compute-0 nova_compute[254092]: 2025-11-25 16:44:49.804 254096 DEBUG nova.virt.libvirt.driver [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:44:49 compute-0 nova_compute[254092]: 2025-11-25 16:44:49.804 254096 DEBUG nova.virt.libvirt.driver [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:44:49 compute-0 nova_compute[254092]: 2025-11-25 16:44:49.804 254096 DEBUG nova.virt.libvirt.driver [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:44:49 compute-0 nova_compute[254092]: 2025-11-25 16:44:49.805 254096 DEBUG nova.virt.libvirt.driver [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:44:49 compute-0 nova_compute[254092]: 2025-11-25 16:44:49.824 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:44:49 compute-0 nova_compute[254092]: 2025-11-25 16:44:49.873 254096 INFO nova.compute.manager [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Took 14.67 seconds to spawn the instance on the hypervisor.
Nov 25 16:44:49 compute-0 nova_compute[254092]: 2025-11-25 16:44:49.873 254096 DEBUG nova.compute.manager [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:44:49 compute-0 nova_compute[254092]: 2025-11-25 16:44:49.939 254096 INFO nova.compute.manager [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Took 17.06 seconds to build instance.
Nov 25 16:44:49 compute-0 nova_compute[254092]: 2025-11-25 16:44:49.975 254096 DEBUG oslo_concurrency.lockutils [None req-bc8071c8-4428-4cda-87b7-7f4b7344cdbb 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "98410ff5-26ab-4406-8d1b-063d9e114cf8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.171s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:44:50 compute-0 nova_compute[254092]: 2025-11-25 16:44:50.743 254096 DEBUG nova.compute.manager [req-07a2f5ea-b753-46d4-9837-ef49c5cb571e req-866dfa0e-ec49-449f-b10e-6e4ba912da9f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:44:50 compute-0 nova_compute[254092]: 2025-11-25 16:44:50.743 254096 DEBUG oslo_concurrency.lockutils [req-07a2f5ea-b753-46d4-9837-ef49c5cb571e req-866dfa0e-ec49-449f-b10e-6e4ba912da9f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:44:50 compute-0 nova_compute[254092]: 2025-11-25 16:44:50.744 254096 DEBUG oslo_concurrency.lockutils [req-07a2f5ea-b753-46d4-9837-ef49c5cb571e req-866dfa0e-ec49-449f-b10e-6e4ba912da9f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:44:50 compute-0 nova_compute[254092]: 2025-11-25 16:44:50.744 254096 DEBUG oslo_concurrency.lockutils [req-07a2f5ea-b753-46d4-9837-ef49c5cb571e req-866dfa0e-ec49-449f-b10e-6e4ba912da9f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:44:50 compute-0 nova_compute[254092]: 2025-11-25 16:44:50.744 254096 DEBUG nova.compute.manager [req-07a2f5ea-b753-46d4-9837-ef49c5cb571e req-866dfa0e-ec49-449f-b10e-6e4ba912da9f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] No waiting events found dispatching network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:44:50 compute-0 nova_compute[254092]: 2025-11-25 16:44:50.744 254096 WARNING nova.compute.manager [req-07a2f5ea-b753-46d4-9837-ef49c5cb571e req-866dfa0e-ec49-449f-b10e-6e4ba912da9f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received unexpected event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 for instance with vm_state active and task_state None.
Nov 25 16:44:50 compute-0 nova_compute[254092]: 2025-11-25 16:44:50.893 254096 INFO nova.virt.libvirt.driver [None req-e508a1e9-94f2-4a8e-aa91-1309b819dc18 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Deleting instance files /var/lib/nova/instances/f0783fc0-46a8-4c51-95ca-79db0d6d849d_del
Nov 25 16:44:50 compute-0 nova_compute[254092]: 2025-11-25 16:44:50.894 254096 INFO nova.virt.libvirt.driver [None req-e508a1e9-94f2-4a8e-aa91-1309b819dc18 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Deletion of /var/lib/nova/instances/f0783fc0-46a8-4c51-95ca-79db0d6d849d_del complete
Nov 25 16:44:50 compute-0 ceph-mon[74985]: pgmap v1783: 321 pgs: 321 active+clean; 216 MiB data, 665 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 113 op/s
Nov 25 16:44:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:44:51 compute-0 nova_compute[254092]: 2025-11-25 16:44:51.011 254096 INFO nova.compute.manager [None req-e508a1e9-94f2-4a8e-aa91-1309b819dc18 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Took 6.08 seconds to destroy the instance on the hypervisor.
Nov 25 16:44:51 compute-0 nova_compute[254092]: 2025-11-25 16:44:51.011 254096 DEBUG oslo.service.loopingcall [None req-e508a1e9-94f2-4a8e-aa91-1309b819dc18 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:44:51 compute-0 nova_compute[254092]: 2025-11-25 16:44:51.012 254096 DEBUG nova.compute.manager [-] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:44:51 compute-0 nova_compute[254092]: 2025-11-25 16:44:51.012 254096 DEBUG nova.network.neutron [-] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:44:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:44:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:44:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:44:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:44:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001458007748909292 of space, bias 1.0, pg target 0.4374023246727876 quantized to 32 (current 32)
Nov 25 16:44:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:44:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:44:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:44:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:44:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:44:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 16:44:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:44:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 16:44:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:44:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:44:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:44:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 16:44:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:44:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 16:44:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:44:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:44:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:44:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 16:44:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1784: 321 pgs: 321 active+clean; 169 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 1.8 MiB/s wr, 218 op/s
Nov 25 16:44:51 compute-0 nova_compute[254092]: 2025-11-25 16:44:51.696 254096 DEBUG nova.network.neutron [-] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:44:51 compute-0 nova_compute[254092]: 2025-11-25 16:44:51.715 254096 INFO nova.compute.manager [-] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Took 0.70 seconds to deallocate network for instance.
Nov 25 16:44:51 compute-0 nova_compute[254092]: 2025-11-25 16:44:51.779 254096 DEBUG oslo_concurrency.lockutils [None req-e508a1e9-94f2-4a8e-aa91-1309b819dc18 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:44:51 compute-0 nova_compute[254092]: 2025-11-25 16:44:51.779 254096 DEBUG oslo_concurrency.lockutils [None req-e508a1e9-94f2-4a8e-aa91-1309b819dc18 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:44:51 compute-0 nova_compute[254092]: 2025-11-25 16:44:51.861 254096 DEBUG nova.compute.manager [req-349a53f3-c07a-41c5-97ad-20f81675b391 req-dea86908-df5d-4c89-8f5d-bc9fbd16c5b4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Received event network-vif-plugged-41fd5f5b-445b-4eed-adf5-045ddb262021 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:44:51 compute-0 nova_compute[254092]: 2025-11-25 16:44:51.862 254096 DEBUG oslo_concurrency.lockutils [req-349a53f3-c07a-41c5-97ad-20f81675b391 req-dea86908-df5d-4c89-8f5d-bc9fbd16c5b4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "98410ff5-26ab-4406-8d1b-063d9e114cf8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:44:51 compute-0 nova_compute[254092]: 2025-11-25 16:44:51.862 254096 DEBUG oslo_concurrency.lockutils [req-349a53f3-c07a-41c5-97ad-20f81675b391 req-dea86908-df5d-4c89-8f5d-bc9fbd16c5b4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "98410ff5-26ab-4406-8d1b-063d9e114cf8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:44:51 compute-0 nova_compute[254092]: 2025-11-25 16:44:51.862 254096 DEBUG oslo_concurrency.lockutils [req-349a53f3-c07a-41c5-97ad-20f81675b391 req-dea86908-df5d-4c89-8f5d-bc9fbd16c5b4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "98410ff5-26ab-4406-8d1b-063d9e114cf8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:44:51 compute-0 nova_compute[254092]: 2025-11-25 16:44:51.862 254096 DEBUG nova.compute.manager [req-349a53f3-c07a-41c5-97ad-20f81675b391 req-dea86908-df5d-4c89-8f5d-bc9fbd16c5b4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] No waiting events found dispatching network-vif-plugged-41fd5f5b-445b-4eed-adf5-045ddb262021 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:44:51 compute-0 nova_compute[254092]: 2025-11-25 16:44:51.862 254096 WARNING nova.compute.manager [req-349a53f3-c07a-41c5-97ad-20f81675b391 req-dea86908-df5d-4c89-8f5d-bc9fbd16c5b4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Received unexpected event network-vif-plugged-41fd5f5b-445b-4eed-adf5-045ddb262021 for instance with vm_state active and task_state None.
Nov 25 16:44:51 compute-0 nova_compute[254092]: 2025-11-25 16:44:51.863 254096 DEBUG nova.compute.manager [req-349a53f3-c07a-41c5-97ad-20f81675b391 req-dea86908-df5d-4c89-8f5d-bc9fbd16c5b4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Received event network-vif-deleted-94d30e8d-55a7-40af-be2f-0ce1a8c10b97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:44:51 compute-0 nova_compute[254092]: 2025-11-25 16:44:51.889 254096 DEBUG oslo_concurrency.processutils [None req-e508a1e9-94f2-4a8e-aa91-1309b819dc18 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:44:52 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:44:52 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1448841234' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:44:52 compute-0 nova_compute[254092]: 2025-11-25 16:44:52.363 254096 DEBUG oslo_concurrency.processutils [None req-e508a1e9-94f2-4a8e-aa91-1309b819dc18 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:44:52 compute-0 nova_compute[254092]: 2025-11-25 16:44:52.369 254096 DEBUG nova.compute.provider_tree [None req-e508a1e9-94f2-4a8e-aa91-1309b819dc18 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:44:52 compute-0 nova_compute[254092]: 2025-11-25 16:44:52.383 254096 DEBUG nova.scheduler.client.report [None req-e508a1e9-94f2-4a8e-aa91-1309b819dc18 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:44:52 compute-0 nova_compute[254092]: 2025-11-25 16:44:52.404 254096 DEBUG oslo_concurrency.lockutils [None req-e508a1e9-94f2-4a8e-aa91-1309b819dc18 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:44:52 compute-0 nova_compute[254092]: 2025-11-25 16:44:52.437 254096 INFO nova.scheduler.client.report [None req-e508a1e9-94f2-4a8e-aa91-1309b819dc18 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Deleted allocations for instance f0783fc0-46a8-4c51-95ca-79db0d6d849d
Nov 25 16:44:52 compute-0 nova_compute[254092]: 2025-11-25 16:44:52.527 254096 DEBUG oslo_concurrency.lockutils [None req-e508a1e9-94f2-4a8e-aa91-1309b819dc18 4086d59097134dd6a71a9056c34359e5 8b7977bc30d5465b88d98c64d7c70db7 - - default default] Lock "f0783fc0-46a8-4c51-95ca-79db0d6d849d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:44:52 compute-0 nova_compute[254092]: 2025-11-25 16:44:52.603 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:52 compute-0 nova_compute[254092]: 2025-11-25 16:44:52.852 254096 DEBUG nova.compute.manager [req-a990f445-1531-4c60-97be-de5aa2a0fc7e req-49aad9e2-5614-4ab5-98e8-51594dd0abfb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Received event network-changed-41fd5f5b-445b-4eed-adf5-045ddb262021 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:44:52 compute-0 nova_compute[254092]: 2025-11-25 16:44:52.852 254096 DEBUG nova.compute.manager [req-a990f445-1531-4c60-97be-de5aa2a0fc7e req-49aad9e2-5614-4ab5-98e8-51594dd0abfb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Refreshing instance network info cache due to event network-changed-41fd5f5b-445b-4eed-adf5-045ddb262021. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:44:52 compute-0 nova_compute[254092]: 2025-11-25 16:44:52.853 254096 DEBUG oslo_concurrency.lockutils [req-a990f445-1531-4c60-97be-de5aa2a0fc7e req-49aad9e2-5614-4ab5-98e8-51594dd0abfb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-98410ff5-26ab-4406-8d1b-063d9e114cf8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:44:52 compute-0 nova_compute[254092]: 2025-11-25 16:44:52.853 254096 DEBUG oslo_concurrency.lockutils [req-a990f445-1531-4c60-97be-de5aa2a0fc7e req-49aad9e2-5614-4ab5-98e8-51594dd0abfb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-98410ff5-26ab-4406-8d1b-063d9e114cf8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:44:52 compute-0 nova_compute[254092]: 2025-11-25 16:44:52.853 254096 DEBUG nova.network.neutron [req-a990f445-1531-4c60-97be-de5aa2a0fc7e req-49aad9e2-5614-4ab5-98e8-51594dd0abfb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Refreshing network info cache for port 41fd5f5b-445b-4eed-adf5-045ddb262021 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:44:52 compute-0 ceph-mon[74985]: pgmap v1784: 321 pgs: 321 active+clean; 169 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 1.8 MiB/s wr, 218 op/s
Nov 25 16:44:52 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1448841234' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:44:53 compute-0 nova_compute[254092]: 2025-11-25 16:44:53.360 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1785: 321 pgs: 321 active+clean; 169 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 28 KiB/s wr, 117 op/s
Nov 25 16:44:55 compute-0 ceph-mon[74985]: pgmap v1785: 321 pgs: 321 active+clean; 169 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 28 KiB/s wr, 117 op/s
Nov 25 16:44:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 16:44:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/796395703' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:44:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 16:44:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/796395703' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:44:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1786: 321 pgs: 321 active+clean; 169 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 28 KiB/s wr, 169 op/s
Nov 25 16:44:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:44:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/796395703' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:44:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/796395703' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:44:56 compute-0 nova_compute[254092]: 2025-11-25 16:44:56.402 254096 DEBUG nova.network.neutron [req-a990f445-1531-4c60-97be-de5aa2a0fc7e req-49aad9e2-5614-4ab5-98e8-51594dd0abfb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Updated VIF entry in instance network info cache for port 41fd5f5b-445b-4eed-adf5-045ddb262021. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:44:56 compute-0 nova_compute[254092]: 2025-11-25 16:44:56.402 254096 DEBUG nova.network.neutron [req-a990f445-1531-4c60-97be-de5aa2a0fc7e req-49aad9e2-5614-4ab5-98e8-51594dd0abfb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Updating instance_info_cache with network_info: [{"id": "41fd5f5b-445b-4eed-adf5-045ddb262021", "address": "fa:16:3e:a2:dc:a2", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41fd5f5b-44", "ovs_interfaceid": "41fd5f5b-445b-4eed-adf5-045ddb262021", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:44:56 compute-0 nova_compute[254092]: 2025-11-25 16:44:56.469 254096 DEBUG oslo_concurrency.lockutils [req-a990f445-1531-4c60-97be-de5aa2a0fc7e req-49aad9e2-5614-4ab5-98e8-51594dd0abfb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-98410ff5-26ab-4406-8d1b-063d9e114cf8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:44:57 compute-0 ceph-mon[74985]: pgmap v1786: 321 pgs: 321 active+clean; 169 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 28 KiB/s wr, 169 op/s
Nov 25 16:44:57 compute-0 nova_compute[254092]: 2025-11-25 16:44:57.608 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1787: 321 pgs: 321 active+clean; 169 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.1 KiB/s wr, 163 op/s
Nov 25 16:44:58 compute-0 ovn_controller[153477]: 2025-11-25T16:44:58Z|00761|binding|INFO|Releasing lport db192ec3-55c1-4137-aaad-99a175bfa879 from this chassis (sb_readonly=0)
Nov 25 16:44:58 compute-0 ovn_controller[153477]: 2025-11-25T16:44:58Z|00762|binding|INFO|Releasing lport 1ec3401d-acb9-4d50-bfc2-f0a1127f4745 from this chassis (sb_readonly=0)
Nov 25 16:44:58 compute-0 nova_compute[254092]: 2025-11-25 16:44:58.161 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:58 compute-0 nova_compute[254092]: 2025-11-25 16:44:58.361 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:44:59 compute-0 ceph-mon[74985]: pgmap v1787: 321 pgs: 321 active+clean; 169 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.1 KiB/s wr, 163 op/s
Nov 25 16:44:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1788: 321 pgs: 321 active+clean; 169 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.2 KiB/s wr, 161 op/s
Nov 25 16:45:00 compute-0 nova_compute[254092]: 2025-11-25 16:45:00.170 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089085.1693516, f0783fc0-46a8-4c51-95ca-79db0d6d849d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:45:00 compute-0 nova_compute[254092]: 2025-11-25 16:45:00.171 254096 INFO nova.compute.manager [-] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] VM Stopped (Lifecycle Event)
Nov 25 16:45:00 compute-0 nova_compute[254092]: 2025-11-25 16:45:00.197 254096 DEBUG nova.compute.manager [None req-18c60211-9d3d-4295-9d9c-fb0734c556fc - - - - - -] [instance: f0783fc0-46a8-4c51-95ca-79db0d6d849d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:45:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:45:01 compute-0 ceph-mon[74985]: pgmap v1788: 321 pgs: 321 active+clean; 169 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.2 KiB/s wr, 161 op/s
Nov 25 16:45:01 compute-0 ovn_controller[153477]: 2025-11-25T16:45:01Z|00082|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ff:a0:1c 10.100.0.11
Nov 25 16:45:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1789: 321 pgs: 321 active+clean; 169 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.8 KiB/s wr, 174 op/s
Nov 25 16:45:02 compute-0 nova_compute[254092]: 2025-11-25 16:45:02.633 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:03 compute-0 ceph-mon[74985]: pgmap v1789: 321 pgs: 321 active+clean; 169 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.8 KiB/s wr, 174 op/s
Nov 25 16:45:03 compute-0 ovn_controller[153477]: 2025-11-25T16:45:03Z|00083|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a2:dc:a2 10.100.0.6
Nov 25 16:45:03 compute-0 ovn_controller[153477]: 2025-11-25T16:45:03Z|00084|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a2:dc:a2 10.100.0.6
Nov 25 16:45:03 compute-0 nova_compute[254092]: 2025-11-25 16:45:03.363 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1790: 321 pgs: 321 active+clean; 169 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 938 B/s wr, 69 op/s
Nov 25 16:45:03 compute-0 sudo[336274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:45:03 compute-0 sudo[336274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:45:03 compute-0 sudo[336274]: pam_unix(sudo:session): session closed for user root
Nov 25 16:45:04 compute-0 sudo[336299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:45:04 compute-0 sudo[336299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:45:04 compute-0 sudo[336299]: pam_unix(sudo:session): session closed for user root
Nov 25 16:45:04 compute-0 sudo[336324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:45:04 compute-0 sudo[336324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:45:04 compute-0 sudo[336324]: pam_unix(sudo:session): session closed for user root
Nov 25 16:45:04 compute-0 sudo[336349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 25 16:45:04 compute-0 sudo[336349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:45:04 compute-0 sudo[336349]: pam_unix(sudo:session): session closed for user root
Nov 25 16:45:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:45:04 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:45:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:45:04 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:45:04 compute-0 sudo[336395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:45:04 compute-0 sudo[336395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:45:04 compute-0 sudo[336395]: pam_unix(sudo:session): session closed for user root
Nov 25 16:45:04 compute-0 sudo[336420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:45:04 compute-0 sudo[336420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:45:04 compute-0 sudo[336420]: pam_unix(sudo:session): session closed for user root
Nov 25 16:45:04 compute-0 sudo[336445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:45:04 compute-0 sudo[336445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:45:04 compute-0 sudo[336445]: pam_unix(sudo:session): session closed for user root
Nov 25 16:45:04 compute-0 sudo[336470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 16:45:04 compute-0 sudo[336470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:45:05 compute-0 ceph-mon[74985]: pgmap v1790: 321 pgs: 321 active+clean; 169 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 938 B/s wr, 69 op/s
Nov 25 16:45:05 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:45:05 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:45:05 compute-0 sudo[336470]: pam_unix(sudo:session): session closed for user root
Nov 25 16:45:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:45:05 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:45:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 16:45:05 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:45:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 16:45:05 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:45:05 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 58554820-5fd1-4912-af29-788c61f5e871 does not exist
Nov 25 16:45:05 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev eea5b9c8-8e55-4299-a9bf-6377979a1a30 does not exist
Nov 25 16:45:05 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev c2187c38-1969-4d74-bc1d-aa08ec986f91 does not exist
Nov 25 16:45:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 16:45:05 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:45:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 16:45:05 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:45:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:45:05 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:45:05 compute-0 sudo[336526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:45:05 compute-0 sudo[336526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:45:05 compute-0 sudo[336526]: pam_unix(sudo:session): session closed for user root
Nov 25 16:45:05 compute-0 sudo[336551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:45:05 compute-0 sudo[336551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:45:05 compute-0 sudo[336551]: pam_unix(sudo:session): session closed for user root
Nov 25 16:45:05 compute-0 sudo[336576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:45:05 compute-0 sudo[336576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:45:05 compute-0 sudo[336576]: pam_unix(sudo:session): session closed for user root
Nov 25 16:45:05 compute-0 sudo[336601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 16:45:05 compute-0 sudo[336601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:45:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1791: 321 pgs: 321 active+clean; 194 MiB data, 663 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.6 MiB/s wr, 143 op/s
Nov 25 16:45:05 compute-0 podman[336667]: 2025-11-25 16:45:05.87931527 +0000 UTC m=+0.045175978 container create bb398516a44212986a601c20903847ded4c8a3b3f9e83bdd99328bb896bf7eec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_germain, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Nov 25 16:45:05 compute-0 systemd[1]: Started libpod-conmon-bb398516a44212986a601c20903847ded4c8a3b3f9e83bdd99328bb896bf7eec.scope.
Nov 25 16:45:05 compute-0 podman[336667]: 2025-11-25 16:45:05.858563477 +0000 UTC m=+0.024424185 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:45:05 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:45:05 compute-0 podman[336667]: 2025-11-25 16:45:05.986092072 +0000 UTC m=+0.151952800 container init bb398516a44212986a601c20903847ded4c8a3b3f9e83bdd99328bb896bf7eec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_germain, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 16:45:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:45:05 compute-0 podman[336667]: 2025-11-25 16:45:05.995841977 +0000 UTC m=+0.161702685 container start bb398516a44212986a601c20903847ded4c8a3b3f9e83bdd99328bb896bf7eec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_germain, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 16:45:05 compute-0 podman[336667]: 2025-11-25 16:45:05.999537108 +0000 UTC m=+0.165397836 container attach bb398516a44212986a601c20903847ded4c8a3b3f9e83bdd99328bb896bf7eec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 16:45:06 compute-0 elegant_germain[336683]: 167 167
Nov 25 16:45:06 compute-0 systemd[1]: libpod-bb398516a44212986a601c20903847ded4c8a3b3f9e83bdd99328bb896bf7eec.scope: Deactivated successfully.
Nov 25 16:45:06 compute-0 podman[336667]: 2025-11-25 16:45:06.003855625 +0000 UTC m=+0.169716343 container died bb398516a44212986a601c20903847ded4c8a3b3f9e83bdd99328bb896bf7eec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 16:45:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e3efe760fe80d72f3c9c76ca367b9193f9c98c508eb1e7aa20bafa8442e84a7-merged.mount: Deactivated successfully.
Nov 25 16:45:06 compute-0 podman[336667]: 2025-11-25 16:45:06.051030717 +0000 UTC m=+0.216891425 container remove bb398516a44212986a601c20903847ded4c8a3b3f9e83bdd99328bb896bf7eec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_germain, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 16:45:06 compute-0 systemd[1]: libpod-conmon-bb398516a44212986a601c20903847ded4c8a3b3f9e83bdd99328bb896bf7eec.scope: Deactivated successfully.
Nov 25 16:45:06 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:45:06 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:45:06 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:45:06 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:45:06 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:45:06 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:45:06 compute-0 podman[336706]: 2025-11-25 16:45:06.247553078 +0000 UTC m=+0.050683458 container create d962cdc3e2cc448ff27ec029ba64962cd4512ae76a1325c18dd0e3c5f7087599 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_vaughan, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:45:06 compute-0 systemd[1]: Started libpod-conmon-d962cdc3e2cc448ff27ec029ba64962cd4512ae76a1325c18dd0e3c5f7087599.scope.
Nov 25 16:45:06 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:45:06 compute-0 podman[336706]: 2025-11-25 16:45:06.228908101 +0000 UTC m=+0.032038501 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:45:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab013554d565ebbc62070928d738bec0705a50aac91d98084c4c3b035d11abb1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:45:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab013554d565ebbc62070928d738bec0705a50aac91d98084c4c3b035d11abb1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:45:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab013554d565ebbc62070928d738bec0705a50aac91d98084c4c3b035d11abb1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:45:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab013554d565ebbc62070928d738bec0705a50aac91d98084c4c3b035d11abb1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:45:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab013554d565ebbc62070928d738bec0705a50aac91d98084c4c3b035d11abb1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 16:45:06 compute-0 podman[336706]: 2025-11-25 16:45:06.339094445 +0000 UTC m=+0.142224845 container init d962cdc3e2cc448ff27ec029ba64962cd4512ae76a1325c18dd0e3c5f7087599 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:45:06 compute-0 podman[336706]: 2025-11-25 16:45:06.347606746 +0000 UTC m=+0.150737126 container start d962cdc3e2cc448ff27ec029ba64962cd4512ae76a1325c18dd0e3c5f7087599 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:45:06 compute-0 podman[336706]: 2025-11-25 16:45:06.350963228 +0000 UTC m=+0.154093608 container attach d962cdc3e2cc448ff27ec029ba64962cd4512ae76a1325c18dd0e3c5f7087599 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:45:07 compute-0 ceph-mon[74985]: pgmap v1791: 321 pgs: 321 active+clean; 194 MiB data, 663 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.6 MiB/s wr, 143 op/s
Nov 25 16:45:07 compute-0 nova_compute[254092]: 2025-11-25 16:45:07.512 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:45:07 compute-0 inspiring_vaughan[336722]: --> passed data devices: 0 physical, 3 LVM
Nov 25 16:45:07 compute-0 inspiring_vaughan[336722]: --> relative data size: 1.0
Nov 25 16:45:07 compute-0 inspiring_vaughan[336722]: --> All data devices are unavailable
Nov 25 16:45:07 compute-0 systemd[1]: libpod-d962cdc3e2cc448ff27ec029ba64962cd4512ae76a1325c18dd0e3c5f7087599.scope: Deactivated successfully.
Nov 25 16:45:07 compute-0 systemd[1]: libpod-d962cdc3e2cc448ff27ec029ba64962cd4512ae76a1325c18dd0e3c5f7087599.scope: Consumed 1.135s CPU time.
Nov 25 16:45:07 compute-0 podman[336706]: 2025-11-25 16:45:07.542139488 +0000 UTC m=+1.345269868 container died d962cdc3e2cc448ff27ec029ba64962cd4512ae76a1325c18dd0e3c5f7087599 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_vaughan, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 16:45:07 compute-0 nova_compute[254092]: 2025-11-25 16:45:07.638 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1792: 321 pgs: 321 active+clean; 202 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 997 KiB/s rd, 2.1 MiB/s wr, 111 op/s
Nov 25 16:45:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab013554d565ebbc62070928d738bec0705a50aac91d98084c4c3b035d11abb1-merged.mount: Deactivated successfully.
Nov 25 16:45:08 compute-0 podman[336706]: 2025-11-25 16:45:08.308407551 +0000 UTC m=+2.111537921 container remove d962cdc3e2cc448ff27ec029ba64962cd4512ae76a1325c18dd0e3c5f7087599 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:45:08 compute-0 sudo[336601]: pam_unix(sudo:session): session closed for user root
Nov 25 16:45:08 compute-0 nova_compute[254092]: 2025-11-25 16:45:08.365 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:08 compute-0 systemd[1]: libpod-conmon-d962cdc3e2cc448ff27ec029ba64962cd4512ae76a1325c18dd0e3c5f7087599.scope: Deactivated successfully.
Nov 25 16:45:08 compute-0 sudo[336765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:45:08 compute-0 sudo[336765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:45:08 compute-0 sudo[336765]: pam_unix(sudo:session): session closed for user root
Nov 25 16:45:08 compute-0 ceph-mon[74985]: pgmap v1792: 321 pgs: 321 active+clean; 202 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 997 KiB/s rd, 2.1 MiB/s wr, 111 op/s
Nov 25 16:45:08 compute-0 sudo[336790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:45:08 compute-0 sudo[336790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:45:08 compute-0 sudo[336790]: pam_unix(sudo:session): session closed for user root
Nov 25 16:45:08 compute-0 nova_compute[254092]: 2025-11-25 16:45:08.498 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:45:08 compute-0 sudo[336815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:45:08 compute-0 sudo[336815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:45:08 compute-0 sudo[336815]: pam_unix(sudo:session): session closed for user root
Nov 25 16:45:08 compute-0 sudo[336840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 16:45:08 compute-0 sudo[336840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:45:08 compute-0 podman[336908]: 2025-11-25 16:45:08.959185747 +0000 UTC m=+0.052412626 container create 074b9ffae7b9a1a317f426874630cb4aa588d9d4da80a5982c8ccc9a28e921a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_napier, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:45:09 compute-0 podman[336908]: 2025-11-25 16:45:08.930412784 +0000 UTC m=+0.023639683 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:45:09 compute-0 systemd[1]: Started libpod-conmon-074b9ffae7b9a1a317f426874630cb4aa588d9d4da80a5982c8ccc9a28e921a4.scope.
Nov 25 16:45:09 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:45:09 compute-0 podman[336908]: 2025-11-25 16:45:09.189076414 +0000 UTC m=+0.282303313 container init 074b9ffae7b9a1a317f426874630cb4aa588d9d4da80a5982c8ccc9a28e921a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_napier, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 16:45:09 compute-0 podman[336908]: 2025-11-25 16:45:09.197657977 +0000 UTC m=+0.290884856 container start 074b9ffae7b9a1a317f426874630cb4aa588d9d4da80a5982c8ccc9a28e921a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 16:45:09 compute-0 podman[336908]: 2025-11-25 16:45:09.201609394 +0000 UTC m=+0.294836303 container attach 074b9ffae7b9a1a317f426874630cb4aa588d9d4da80a5982c8ccc9a28e921a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_napier, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 16:45:09 compute-0 relaxed_napier[336925]: 167 167
Nov 25 16:45:09 compute-0 systemd[1]: libpod-074b9ffae7b9a1a317f426874630cb4aa588d9d4da80a5982c8ccc9a28e921a4.scope: Deactivated successfully.
Nov 25 16:45:09 compute-0 podman[336908]: 2025-11-25 16:45:09.206053235 +0000 UTC m=+0.299280124 container died 074b9ffae7b9a1a317f426874630cb4aa588d9d4da80a5982c8ccc9a28e921a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 16:45:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ea438a7120b40e07e32f411732ff3cd54cad2d10f03e0cd06ecfdeecbe7991d-merged.mount: Deactivated successfully.
Nov 25 16:45:09 compute-0 podman[336908]: 2025-11-25 16:45:09.252362713 +0000 UTC m=+0.345589592 container remove 074b9ffae7b9a1a317f426874630cb4aa588d9d4da80a5982c8ccc9a28e921a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_napier, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 16:45:09 compute-0 systemd[1]: libpod-conmon-074b9ffae7b9a1a317f426874630cb4aa588d9d4da80a5982c8ccc9a28e921a4.scope: Deactivated successfully.
Nov 25 16:45:09 compute-0 nova_compute[254092]: 2025-11-25 16:45:09.414 254096 DEBUG oslo_concurrency.lockutils [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Acquiring lock "504931a2-d324-4142-8698-9090b5cf7a23" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:45:09 compute-0 nova_compute[254092]: 2025-11-25 16:45:09.415 254096 DEBUG oslo_concurrency.lockutils [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Lock "504931a2-d324-4142-8698-9090b5cf7a23" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:45:09 compute-0 nova_compute[254092]: 2025-11-25 16:45:09.436 254096 DEBUG nova.compute.manager [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:45:09 compute-0 podman[336949]: 2025-11-25 16:45:09.447661411 +0000 UTC m=+0.050123773 container create 20a90574b8d0d1672c6b58afb846ec38ea37132d97cd71307c94696f0bab1e98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hopper, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 16:45:09 compute-0 nova_compute[254092]: 2025-11-25 16:45:09.458 254096 DEBUG oslo_concurrency.lockutils [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Acquiring lock "efdd3cf9-3df8-4a1e-9e45-12172f99cbac" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:45:09 compute-0 nova_compute[254092]: 2025-11-25 16:45:09.458 254096 DEBUG oslo_concurrency.lockutils [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Lock "efdd3cf9-3df8-4a1e-9e45-12172f99cbac" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:45:09 compute-0 nova_compute[254092]: 2025-11-25 16:45:09.489 254096 DEBUG nova.compute.manager [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:45:09 compute-0 systemd[1]: Started libpod-conmon-20a90574b8d0d1672c6b58afb846ec38ea37132d97cd71307c94696f0bab1e98.scope.
Nov 25 16:45:09 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:45:09 compute-0 podman[336949]: 2025-11-25 16:45:09.427947285 +0000 UTC m=+0.030409677 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:45:09 compute-0 nova_compute[254092]: 2025-11-25 16:45:09.521 254096 DEBUG oslo_concurrency.lockutils [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:45:09 compute-0 nova_compute[254092]: 2025-11-25 16:45:09.521 254096 DEBUG oslo_concurrency.lockutils [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:45:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c85cb9c55f9c9b6cf5124c3303b6d6843aea8c9267ddffa672390f396b7bc444/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:45:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c85cb9c55f9c9b6cf5124c3303b6d6843aea8c9267ddffa672390f396b7bc444/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:45:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c85cb9c55f9c9b6cf5124c3303b6d6843aea8c9267ddffa672390f396b7bc444/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:45:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c85cb9c55f9c9b6cf5124c3303b6d6843aea8c9267ddffa672390f396b7bc444/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:45:09 compute-0 nova_compute[254092]: 2025-11-25 16:45:09.528 254096 DEBUG nova.virt.hardware [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:45:09 compute-0 nova_compute[254092]: 2025-11-25 16:45:09.529 254096 INFO nova.compute.claims [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:45:09 compute-0 podman[336949]: 2025-11-25 16:45:09.539467096 +0000 UTC m=+0.141929478 container init 20a90574b8d0d1672c6b58afb846ec38ea37132d97cd71307c94696f0bab1e98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:45:09 compute-0 podman[336949]: 2025-11-25 16:45:09.548005188 +0000 UTC m=+0.150467550 container start 20a90574b8d0d1672c6b58afb846ec38ea37132d97cd71307c94696f0bab1e98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hopper, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 16:45:09 compute-0 podman[336949]: 2025-11-25 16:45:09.551311588 +0000 UTC m=+0.153774030 container attach 20a90574b8d0d1672c6b58afb846ec38ea37132d97cd71307c94696f0bab1e98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hopper, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:45:09 compute-0 nova_compute[254092]: 2025-11-25 16:45:09.558 254096 DEBUG oslo_concurrency.lockutils [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:45:09 compute-0 nova_compute[254092]: 2025-11-25 16:45:09.634 254096 DEBUG nova.scheduler.client.report [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Refreshing inventories for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 16:45:09 compute-0 nova_compute[254092]: 2025-11-25 16:45:09.656 254096 DEBUG nova.scheduler.client.report [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Updating ProviderTree inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 16:45:09 compute-0 nova_compute[254092]: 2025-11-25 16:45:09.656 254096 DEBUG nova.compute.provider_tree [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 16:45:09 compute-0 nova_compute[254092]: 2025-11-25 16:45:09.676 254096 DEBUG nova.scheduler.client.report [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Refreshing aggregate associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 16:45:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1793: 321 pgs: 321 active+clean; 202 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 861 KiB/s rd, 2.1 MiB/s wr, 107 op/s
Nov 25 16:45:09 compute-0 nova_compute[254092]: 2025-11-25 16:45:09.713 254096 DEBUG nova.scheduler.client.report [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Refreshing trait associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 16:45:09 compute-0 nova_compute[254092]: 2025-11-25 16:45:09.808 254096 DEBUG oslo_concurrency.processutils [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:45:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:45:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:45:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:45:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:45:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:45:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:45:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:45:10 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1509155111' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:45:10 compute-0 nova_compute[254092]: 2025-11-25 16:45:10.247 254096 DEBUG oslo_concurrency.processutils [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:45:10 compute-0 nova_compute[254092]: 2025-11-25 16:45:10.254 254096 DEBUG nova.compute.provider_tree [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:45:10 compute-0 nova_compute[254092]: 2025-11-25 16:45:10.269 254096 DEBUG nova.scheduler.client.report [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:45:10 compute-0 nova_compute[254092]: 2025-11-25 16:45:10.302 254096 DEBUG oslo_concurrency.lockutils [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:45:10 compute-0 nova_compute[254092]: 2025-11-25 16:45:10.303 254096 DEBUG nova.compute.manager [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:45:10 compute-0 nova_compute[254092]: 2025-11-25 16:45:10.305 254096 DEBUG oslo_concurrency.lockutils [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:45:10 compute-0 nova_compute[254092]: 2025-11-25 16:45:10.312 254096 DEBUG nova.virt.hardware [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:45:10 compute-0 nova_compute[254092]: 2025-11-25 16:45:10.313 254096 INFO nova.compute.claims [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:45:10 compute-0 bold_hopper[336965]: {
Nov 25 16:45:10 compute-0 bold_hopper[336965]:     "0": [
Nov 25 16:45:10 compute-0 bold_hopper[336965]:         {
Nov 25 16:45:10 compute-0 bold_hopper[336965]:             "devices": [
Nov 25 16:45:10 compute-0 bold_hopper[336965]:                 "/dev/loop3"
Nov 25 16:45:10 compute-0 bold_hopper[336965]:             ],
Nov 25 16:45:10 compute-0 bold_hopper[336965]:             "lv_name": "ceph_lv0",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:             "lv_size": "21470642176",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:             "name": "ceph_lv0",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:             "tags": {
Nov 25 16:45:10 compute-0 bold_hopper[336965]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:                 "ceph.cluster_name": "ceph",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:                 "ceph.crush_device_class": "",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:                 "ceph.encrypted": "0",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:                 "ceph.osd_id": "0",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:                 "ceph.type": "block",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:                 "ceph.vdo": "0"
Nov 25 16:45:10 compute-0 bold_hopper[336965]:             },
Nov 25 16:45:10 compute-0 bold_hopper[336965]:             "type": "block",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:             "vg_name": "ceph_vg0"
Nov 25 16:45:10 compute-0 bold_hopper[336965]:         }
Nov 25 16:45:10 compute-0 bold_hopper[336965]:     ],
Nov 25 16:45:10 compute-0 bold_hopper[336965]:     "1": [
Nov 25 16:45:10 compute-0 bold_hopper[336965]:         {
Nov 25 16:45:10 compute-0 bold_hopper[336965]:             "devices": [
Nov 25 16:45:10 compute-0 bold_hopper[336965]:                 "/dev/loop4"
Nov 25 16:45:10 compute-0 bold_hopper[336965]:             ],
Nov 25 16:45:10 compute-0 bold_hopper[336965]:             "lv_name": "ceph_lv1",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:             "lv_size": "21470642176",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:             "name": "ceph_lv1",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:             "tags": {
Nov 25 16:45:10 compute-0 bold_hopper[336965]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:                 "ceph.cluster_name": "ceph",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:                 "ceph.crush_device_class": "",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:                 "ceph.encrypted": "0",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:                 "ceph.osd_id": "1",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:                 "ceph.type": "block",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:                 "ceph.vdo": "0"
Nov 25 16:45:10 compute-0 bold_hopper[336965]:             },
Nov 25 16:45:10 compute-0 bold_hopper[336965]:             "type": "block",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:             "vg_name": "ceph_vg1"
Nov 25 16:45:10 compute-0 bold_hopper[336965]:         }
Nov 25 16:45:10 compute-0 bold_hopper[336965]:     ],
Nov 25 16:45:10 compute-0 bold_hopper[336965]:     "2": [
Nov 25 16:45:10 compute-0 bold_hopper[336965]:         {
Nov 25 16:45:10 compute-0 bold_hopper[336965]:             "devices": [
Nov 25 16:45:10 compute-0 bold_hopper[336965]:                 "/dev/loop5"
Nov 25 16:45:10 compute-0 bold_hopper[336965]:             ],
Nov 25 16:45:10 compute-0 bold_hopper[336965]:             "lv_name": "ceph_lv2",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:             "lv_size": "21470642176",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:             "name": "ceph_lv2",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:             "tags": {
Nov 25 16:45:10 compute-0 bold_hopper[336965]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:                 "ceph.cluster_name": "ceph",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:                 "ceph.crush_device_class": "",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:                 "ceph.encrypted": "0",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:                 "ceph.osd_id": "2",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:                 "ceph.type": "block",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:                 "ceph.vdo": "0"
Nov 25 16:45:10 compute-0 bold_hopper[336965]:             },
Nov 25 16:45:10 compute-0 bold_hopper[336965]:             "type": "block",
Nov 25 16:45:10 compute-0 bold_hopper[336965]:             "vg_name": "ceph_vg2"
Nov 25 16:45:10 compute-0 bold_hopper[336965]:         }
Nov 25 16:45:10 compute-0 bold_hopper[336965]:     ]
Nov 25 16:45:10 compute-0 bold_hopper[336965]: }
Nov 25 16:45:10 compute-0 nova_compute[254092]: 2025-11-25 16:45:10.368 254096 DEBUG nova.compute.manager [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Nov 25 16:45:10 compute-0 nova_compute[254092]: 2025-11-25 16:45:10.382 254096 INFO nova.virt.libvirt.driver [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:45:10 compute-0 systemd[1]: libpod-20a90574b8d0d1672c6b58afb846ec38ea37132d97cd71307c94696f0bab1e98.scope: Deactivated successfully.
Nov 25 16:45:10 compute-0 podman[336949]: 2025-11-25 16:45:10.389931907 +0000 UTC m=+0.992394279 container died 20a90574b8d0d1672c6b58afb846ec38ea37132d97cd71307c94696f0bab1e98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hopper, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:45:10 compute-0 nova_compute[254092]: 2025-11-25 16:45:10.399 254096 DEBUG nova.compute.manager [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:45:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-c85cb9c55f9c9b6cf5124c3303b6d6843aea8c9267ddffa672390f396b7bc444-merged.mount: Deactivated successfully.
Nov 25 16:45:10 compute-0 podman[336949]: 2025-11-25 16:45:10.449301551 +0000 UTC m=+1.051763913 container remove 20a90574b8d0d1672c6b58afb846ec38ea37132d97cd71307c94696f0bab1e98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 16:45:10 compute-0 systemd[1]: libpod-conmon-20a90574b8d0d1672c6b58afb846ec38ea37132d97cd71307c94696f0bab1e98.scope: Deactivated successfully.
Nov 25 16:45:10 compute-0 sudo[336840]: pam_unix(sudo:session): session closed for user root
Nov 25 16:45:10 compute-0 nova_compute[254092]: 2025-11-25 16:45:10.486 254096 DEBUG oslo_concurrency.processutils [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:45:10 compute-0 nova_compute[254092]: 2025-11-25 16:45:10.518 254096 DEBUG nova.compute.manager [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:45:10 compute-0 nova_compute[254092]: 2025-11-25 16:45:10.520 254096 DEBUG nova.virt.libvirt.driver [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:45:10 compute-0 nova_compute[254092]: 2025-11-25 16:45:10.521 254096 INFO nova.virt.libvirt.driver [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Creating image(s)
Nov 25 16:45:10 compute-0 sudo[337007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:45:10 compute-0 nova_compute[254092]: 2025-11-25 16:45:10.543 254096 DEBUG nova.storage.rbd_utils [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] rbd image 504931a2-d324-4142-8698-9090b5cf7a23_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:45:10 compute-0 sudo[337007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:45:10 compute-0 sudo[337007]: pam_unix(sudo:session): session closed for user root
Nov 25 16:45:10 compute-0 nova_compute[254092]: 2025-11-25 16:45:10.573 254096 DEBUG nova.storage.rbd_utils [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] rbd image 504931a2-d324-4142-8698-9090b5cf7a23_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:45:10 compute-0 nova_compute[254092]: 2025-11-25 16:45:10.598 254096 DEBUG nova.storage.rbd_utils [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] rbd image 504931a2-d324-4142-8698-9090b5cf7a23_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:45:10 compute-0 sudo[337058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:45:10 compute-0 nova_compute[254092]: 2025-11-25 16:45:10.604 254096 DEBUG oslo_concurrency.processutils [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:45:10 compute-0 sudo[337058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:45:10 compute-0 sudo[337058]: pam_unix(sudo:session): session closed for user root
Nov 25 16:45:10 compute-0 sudo[337129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:45:10 compute-0 sudo[337129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:45:10 compute-0 sudo[337129]: pam_unix(sudo:session): session closed for user root
Nov 25 16:45:10 compute-0 nova_compute[254092]: 2025-11-25 16:45:10.691 254096 DEBUG oslo_concurrency.processutils [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:45:10 compute-0 nova_compute[254092]: 2025-11-25 16:45:10.693 254096 DEBUG oslo_concurrency.lockutils [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:45:10 compute-0 nova_compute[254092]: 2025-11-25 16:45:10.693 254096 DEBUG oslo_concurrency.lockutils [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:45:10 compute-0 nova_compute[254092]: 2025-11-25 16:45:10.694 254096 DEBUG oslo_concurrency.lockutils [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:45:10 compute-0 sudo[337157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 16:45:10 compute-0 sudo[337157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:45:10 compute-0 nova_compute[254092]: 2025-11-25 16:45:10.743 254096 DEBUG nova.storage.rbd_utils [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] rbd image 504931a2-d324-4142-8698-9090b5cf7a23_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:45:10 compute-0 nova_compute[254092]: 2025-11-25 16:45:10.747 254096 DEBUG oslo_concurrency.processutils [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 504931a2-d324-4142-8698-9090b5cf7a23_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:45:10 compute-0 ceph-mon[74985]: pgmap v1793: 321 pgs: 321 active+clean; 202 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 861 KiB/s rd, 2.1 MiB/s wr, 107 op/s
Nov 25 16:45:10 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1509155111' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:45:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:45:10 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2295818991' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:45:10 compute-0 nova_compute[254092]: 2025-11-25 16:45:10.951 254096 DEBUG oslo_concurrency.processutils [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:45:10 compute-0 nova_compute[254092]: 2025-11-25 16:45:10.958 254096 DEBUG nova.compute.provider_tree [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:45:10 compute-0 nova_compute[254092]: 2025-11-25 16:45:10.975 254096 DEBUG nova.scheduler.client.report [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:45:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:45:11 compute-0 nova_compute[254092]: 2025-11-25 16:45:11.007 254096 DEBUG oslo_concurrency.lockutils [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:45:11 compute-0 nova_compute[254092]: 2025-11-25 16:45:11.008 254096 DEBUG nova.compute.manager [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:45:11 compute-0 podman[337261]: 2025-11-25 16:45:11.065889707 +0000 UTC m=+0.041365525 container create 778af0995c424e33ba4e0532f89cba4ce640c8d192bbbab2f09a0c045bbd1c58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_jennings, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:45:11 compute-0 nova_compute[254092]: 2025-11-25 16:45:11.091 254096 DEBUG nova.compute.manager [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Nov 25 16:45:11 compute-0 nova_compute[254092]: 2025-11-25 16:45:11.095 254096 DEBUG oslo_concurrency.processutils [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 504931a2-d324-4142-8698-9090b5cf7a23_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.348s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:45:11 compute-0 systemd[1]: Started libpod-conmon-778af0995c424e33ba4e0532f89cba4ce640c8d192bbbab2f09a0c045bbd1c58.scope.
Nov 25 16:45:11 compute-0 nova_compute[254092]: 2025-11-25 16:45:11.124 254096 INFO nova.virt.libvirt.driver [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:45:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:45:11 compute-0 podman[337261]: 2025-11-25 16:45:11.049386308 +0000 UTC m=+0.024862156 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:45:11 compute-0 podman[337261]: 2025-11-25 16:45:11.147766302 +0000 UTC m=+0.123242140 container init 778af0995c424e33ba4e0532f89cba4ce640c8d192bbbab2f09a0c045bbd1c58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_jennings, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:45:11 compute-0 podman[337261]: 2025-11-25 16:45:11.155051129 +0000 UTC m=+0.130526947 container start 778af0995c424e33ba4e0532f89cba4ce640c8d192bbbab2f09a0c045bbd1c58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_jennings, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 16:45:11 compute-0 podman[337261]: 2025-11-25 16:45:11.159002287 +0000 UTC m=+0.134478125 container attach 778af0995c424e33ba4e0532f89cba4ce640c8d192bbbab2f09a0c045bbd1c58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 16:45:11 compute-0 strange_jennings[337284]: 167 167
Nov 25 16:45:11 compute-0 nova_compute[254092]: 2025-11-25 16:45:11.159 254096 DEBUG nova.compute.manager [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:45:11 compute-0 systemd[1]: libpod-778af0995c424e33ba4e0532f89cba4ce640c8d192bbbab2f09a0c045bbd1c58.scope: Deactivated successfully.
Nov 25 16:45:11 compute-0 podman[337261]: 2025-11-25 16:45:11.161155416 +0000 UTC m=+0.136631254 container died 778af0995c424e33ba4e0532f89cba4ce640c8d192bbbab2f09a0c045bbd1c58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_jennings, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 16:45:11 compute-0 nova_compute[254092]: 2025-11-25 16:45:11.170 254096 DEBUG nova.storage.rbd_utils [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] resizing rbd image 504931a2-d324-4142-8698-9090b5cf7a23_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:45:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-b528199dd4da89d73fbb65c0cf5a829ef8ee02fa41617de47a4d64f314e5149d-merged.mount: Deactivated successfully.
Nov 25 16:45:11 compute-0 podman[337261]: 2025-11-25 16:45:11.195811097 +0000 UTC m=+0.171286925 container remove 778af0995c424e33ba4e0532f89cba4ce640c8d192bbbab2f09a0c045bbd1c58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 16:45:11 compute-0 systemd[1]: libpod-conmon-778af0995c424e33ba4e0532f89cba4ce640c8d192bbbab2f09a0c045bbd1c58.scope: Deactivated successfully.
Nov 25 16:45:11 compute-0 nova_compute[254092]: 2025-11-25 16:45:11.283 254096 DEBUG nova.compute.manager [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:45:11 compute-0 nova_compute[254092]: 2025-11-25 16:45:11.284 254096 DEBUG nova.virt.libvirt.driver [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:45:11 compute-0 nova_compute[254092]: 2025-11-25 16:45:11.285 254096 INFO nova.virt.libvirt.driver [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] Creating image(s)
Nov 25 16:45:11 compute-0 nova_compute[254092]: 2025-11-25 16:45:11.306 254096 DEBUG nova.storage.rbd_utils [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] rbd image efdd3cf9-3df8-4a1e-9e45-12172f99cbac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:45:11 compute-0 nova_compute[254092]: 2025-11-25 16:45:11.332 254096 DEBUG nova.storage.rbd_utils [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] rbd image efdd3cf9-3df8-4a1e-9e45-12172f99cbac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:45:11 compute-0 nova_compute[254092]: 2025-11-25 16:45:11.361 254096 DEBUG nova.storage.rbd_utils [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] rbd image efdd3cf9-3df8-4a1e-9e45-12172f99cbac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:45:11 compute-0 nova_compute[254092]: 2025-11-25 16:45:11.367 254096 DEBUG oslo_concurrency.processutils [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:45:11 compute-0 podman[337408]: 2025-11-25 16:45:11.38471173 +0000 UTC m=+0.039294829 container create d18b996c0f2a69abca5940bd6ddf3a20a1a2467427c3447206621421110437b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_rubin, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:45:11 compute-0 nova_compute[254092]: 2025-11-25 16:45:11.409 254096 DEBUG nova.objects.instance [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Lazy-loading 'migration_context' on Instance uuid 504931a2-d324-4142-8698-9090b5cf7a23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:45:11 compute-0 nova_compute[254092]: 2025-11-25 16:45:11.431 254096 DEBUG nova.virt.libvirt.driver [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:45:11 compute-0 nova_compute[254092]: 2025-11-25 16:45:11.432 254096 DEBUG nova.virt.libvirt.driver [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Ensure instance console log exists: /var/lib/nova/instances/504931a2-d324-4142-8698-9090b5cf7a23/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:45:11 compute-0 nova_compute[254092]: 2025-11-25 16:45:11.432 254096 DEBUG oslo_concurrency.lockutils [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:45:11 compute-0 nova_compute[254092]: 2025-11-25 16:45:11.432 254096 DEBUG oslo_concurrency.lockutils [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:45:11 compute-0 nova_compute[254092]: 2025-11-25 16:45:11.432 254096 DEBUG oslo_concurrency.lockutils [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:45:11 compute-0 nova_compute[254092]: 2025-11-25 16:45:11.434 254096 DEBUG nova.virt.libvirt.driver [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:45:11 compute-0 systemd[1]: Started libpod-conmon-d18b996c0f2a69abca5940bd6ddf3a20a1a2467427c3447206621421110437b0.scope.
Nov 25 16:45:11 compute-0 nova_compute[254092]: 2025-11-25 16:45:11.440 254096 WARNING nova.virt.libvirt.driver [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:45:11 compute-0 nova_compute[254092]: 2025-11-25 16:45:11.444 254096 DEBUG oslo_concurrency.processutils [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:45:11 compute-0 nova_compute[254092]: 2025-11-25 16:45:11.445 254096 DEBUG oslo_concurrency.lockutils [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:45:11 compute-0 nova_compute[254092]: 2025-11-25 16:45:11.445 254096 DEBUG oslo_concurrency.lockutils [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:45:11 compute-0 nova_compute[254092]: 2025-11-25 16:45:11.446 254096 DEBUG oslo_concurrency.lockutils [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:45:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:45:11 compute-0 podman[337408]: 2025-11-25 16:45:11.369534118 +0000 UTC m=+0.024117237 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:45:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/634fba1622ac26eb5408d53bd0d8c112a7cf7a2cc38a769aa5de89f0468eb31a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:45:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/634fba1622ac26eb5408d53bd0d8c112a7cf7a2cc38a769aa5de89f0468eb31a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:45:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/634fba1622ac26eb5408d53bd0d8c112a7cf7a2cc38a769aa5de89f0468eb31a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:45:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/634fba1622ac26eb5408d53bd0d8c112a7cf7a2cc38a769aa5de89f0468eb31a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:45:11 compute-0 nova_compute[254092]: 2025-11-25 16:45:11.475 254096 DEBUG nova.storage.rbd_utils [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] rbd image efdd3cf9-3df8-4a1e-9e45-12172f99cbac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:45:11 compute-0 podman[337408]: 2025-11-25 16:45:11.482825796 +0000 UTC m=+0.137408915 container init d18b996c0f2a69abca5940bd6ddf3a20a1a2467427c3447206621421110437b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 16:45:11 compute-0 nova_compute[254092]: 2025-11-25 16:45:11.487 254096 DEBUG oslo_concurrency.processutils [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 efdd3cf9-3df8-4a1e-9e45-12172f99cbac_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:45:11 compute-0 podman[337408]: 2025-11-25 16:45:11.491190374 +0000 UTC m=+0.145773473 container start d18b996c0f2a69abca5940bd6ddf3a20a1a2467427c3447206621421110437b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_rubin, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 16:45:11 compute-0 podman[337408]: 2025-11-25 16:45:11.495026499 +0000 UTC m=+0.149609598 container attach d18b996c0f2a69abca5940bd6ddf3a20a1a2467427c3447206621421110437b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:45:11 compute-0 nova_compute[254092]: 2025-11-25 16:45:11.519 254096 DEBUG nova.virt.libvirt.host [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:45:11 compute-0 nova_compute[254092]: 2025-11-25 16:45:11.520 254096 DEBUG nova.virt.libvirt.host [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:45:11 compute-0 nova_compute[254092]: 2025-11-25 16:45:11.524 254096 DEBUG nova.virt.libvirt.host [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:45:11 compute-0 nova_compute[254092]: 2025-11-25 16:45:11.524 254096 DEBUG nova.virt.libvirt.host [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:45:11 compute-0 nova_compute[254092]: 2025-11-25 16:45:11.525 254096 DEBUG nova.virt.libvirt.driver [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:45:11 compute-0 nova_compute[254092]: 2025-11-25 16:45:11.525 254096 DEBUG nova.virt.hardware [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:45:11 compute-0 nova_compute[254092]: 2025-11-25 16:45:11.526 254096 DEBUG nova.virt.hardware [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:45:11 compute-0 nova_compute[254092]: 2025-11-25 16:45:11.526 254096 DEBUG nova.virt.hardware [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:45:11 compute-0 nova_compute[254092]: 2025-11-25 16:45:11.526 254096 DEBUG nova.virt.hardware [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:45:11 compute-0 nova_compute[254092]: 2025-11-25 16:45:11.526 254096 DEBUG nova.virt.hardware [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:45:11 compute-0 nova_compute[254092]: 2025-11-25 16:45:11.527 254096 DEBUG nova.virt.hardware [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:45:11 compute-0 nova_compute[254092]: 2025-11-25 16:45:11.527 254096 DEBUG nova.virt.hardware [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:45:11 compute-0 nova_compute[254092]: 2025-11-25 16:45:11.527 254096 DEBUG nova.virt.hardware [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:45:11 compute-0 nova_compute[254092]: 2025-11-25 16:45:11.527 254096 DEBUG nova.virt.hardware [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:45:11 compute-0 nova_compute[254092]: 2025-11-25 16:45:11.528 254096 DEBUG nova.virt.hardware [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:45:11 compute-0 nova_compute[254092]: 2025-11-25 16:45:11.528 254096 DEBUG nova.virt.hardware [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:45:11 compute-0 nova_compute[254092]: 2025-11-25 16:45:11.532 254096 DEBUG oslo_concurrency.processutils [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:45:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1794: 321 pgs: 321 active+clean; 202 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 861 KiB/s rd, 2.2 MiB/s wr, 108 op/s
Nov 25 16:45:11 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2295818991' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:45:11 compute-0 nova_compute[254092]: 2025-11-25 16:45:11.878 254096 DEBUG oslo_concurrency.processutils [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 efdd3cf9-3df8-4a1e-9e45-12172f99cbac_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.391s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:45:11 compute-0 nova_compute[254092]: 2025-11-25 16:45:11.953 254096 DEBUG nova.storage.rbd_utils [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] resizing rbd image efdd3cf9-3df8-4a1e-9e45-12172f99cbac_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:45:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:45:11 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3789233908' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:45:12 compute-0 nova_compute[254092]: 2025-11-25 16:45:12.006 254096 DEBUG oslo_concurrency.processutils [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:45:12 compute-0 nova_compute[254092]: 2025-11-25 16:45:12.034 254096 DEBUG nova.storage.rbd_utils [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] rbd image 504931a2-d324-4142-8698-9090b5cf7a23_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:45:12 compute-0 nova_compute[254092]: 2025-11-25 16:45:12.039 254096 DEBUG oslo_concurrency.processutils [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:45:12 compute-0 nova_compute[254092]: 2025-11-25 16:45:12.119 254096 DEBUG nova.objects.instance [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Lazy-loading 'migration_context' on Instance uuid efdd3cf9-3df8-4a1e-9e45-12172f99cbac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:45:12 compute-0 nova_compute[254092]: 2025-11-25 16:45:12.131 254096 DEBUG nova.virt.libvirt.driver [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:45:12 compute-0 nova_compute[254092]: 2025-11-25 16:45:12.132 254096 DEBUG nova.virt.libvirt.driver [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] Ensure instance console log exists: /var/lib/nova/instances/efdd3cf9-3df8-4a1e-9e45-12172f99cbac/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:45:12 compute-0 nova_compute[254092]: 2025-11-25 16:45:12.132 254096 DEBUG oslo_concurrency.lockutils [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:45:12 compute-0 nova_compute[254092]: 2025-11-25 16:45:12.132 254096 DEBUG oslo_concurrency.lockutils [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:45:12 compute-0 nova_compute[254092]: 2025-11-25 16:45:12.133 254096 DEBUG oslo_concurrency.lockutils [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:45:12 compute-0 nova_compute[254092]: 2025-11-25 16:45:12.134 254096 DEBUG nova.virt.libvirt.driver [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:45:12 compute-0 nova_compute[254092]: 2025-11-25 16:45:12.139 254096 WARNING nova.virt.libvirt.driver [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:45:12 compute-0 nova_compute[254092]: 2025-11-25 16:45:12.143 254096 DEBUG nova.virt.libvirt.host [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:45:12 compute-0 nova_compute[254092]: 2025-11-25 16:45:12.144 254096 DEBUG nova.virt.libvirt.host [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:45:12 compute-0 nova_compute[254092]: 2025-11-25 16:45:12.147 254096 DEBUG nova.virt.libvirt.host [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:45:12 compute-0 nova_compute[254092]: 2025-11-25 16:45:12.147 254096 DEBUG nova.virt.libvirt.host [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:45:12 compute-0 nova_compute[254092]: 2025-11-25 16:45:12.147 254096 DEBUG nova.virt.libvirt.driver [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:45:12 compute-0 nova_compute[254092]: 2025-11-25 16:45:12.148 254096 DEBUG nova.virt.hardware [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:45:12 compute-0 nova_compute[254092]: 2025-11-25 16:45:12.148 254096 DEBUG nova.virt.hardware [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:45:12 compute-0 nova_compute[254092]: 2025-11-25 16:45:12.148 254096 DEBUG nova.virt.hardware [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:45:12 compute-0 nova_compute[254092]: 2025-11-25 16:45:12.148 254096 DEBUG nova.virt.hardware [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:45:12 compute-0 nova_compute[254092]: 2025-11-25 16:45:12.149 254096 DEBUG nova.virt.hardware [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:45:12 compute-0 nova_compute[254092]: 2025-11-25 16:45:12.149 254096 DEBUG nova.virt.hardware [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:45:12 compute-0 nova_compute[254092]: 2025-11-25 16:45:12.149 254096 DEBUG nova.virt.hardware [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:45:12 compute-0 nova_compute[254092]: 2025-11-25 16:45:12.149 254096 DEBUG nova.virt.hardware [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:45:12 compute-0 nova_compute[254092]: 2025-11-25 16:45:12.149 254096 DEBUG nova.virt.hardware [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:45:12 compute-0 nova_compute[254092]: 2025-11-25 16:45:12.150 254096 DEBUG nova.virt.hardware [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:45:12 compute-0 nova_compute[254092]: 2025-11-25 16:45:12.150 254096 DEBUG nova.virt.hardware [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:45:12 compute-0 nova_compute[254092]: 2025-11-25 16:45:12.153 254096 DEBUG oslo_concurrency.processutils [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:45:12 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:45:12 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3695097079' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:45:12 compute-0 nova_compute[254092]: 2025-11-25 16:45:12.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:45:12 compute-0 nova_compute[254092]: 2025-11-25 16:45:12.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:45:12 compute-0 eager_rubin[337444]: {
Nov 25 16:45:12 compute-0 eager_rubin[337444]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 16:45:12 compute-0 eager_rubin[337444]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:45:12 compute-0 eager_rubin[337444]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 16:45:12 compute-0 eager_rubin[337444]:         "osd_id": 1,
Nov 25 16:45:12 compute-0 eager_rubin[337444]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:45:12 compute-0 eager_rubin[337444]:         "type": "bluestore"
Nov 25 16:45:12 compute-0 eager_rubin[337444]:     },
Nov 25 16:45:12 compute-0 eager_rubin[337444]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 16:45:12 compute-0 eager_rubin[337444]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:45:12 compute-0 eager_rubin[337444]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 16:45:12 compute-0 eager_rubin[337444]:         "osd_id": 2,
Nov 25 16:45:12 compute-0 eager_rubin[337444]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:45:12 compute-0 eager_rubin[337444]:         "type": "bluestore"
Nov 25 16:45:12 compute-0 eager_rubin[337444]:     },
Nov 25 16:45:12 compute-0 eager_rubin[337444]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 16:45:12 compute-0 eager_rubin[337444]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:45:12 compute-0 eager_rubin[337444]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 16:45:12 compute-0 eager_rubin[337444]:         "osd_id": 0,
Nov 25 16:45:12 compute-0 eager_rubin[337444]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:45:12 compute-0 eager_rubin[337444]:         "type": "bluestore"
Nov 25 16:45:12 compute-0 eager_rubin[337444]:     }
Nov 25 16:45:12 compute-0 eager_rubin[337444]: }
Nov 25 16:45:12 compute-0 nova_compute[254092]: 2025-11-25 16:45:12.513 254096 DEBUG oslo_concurrency.processutils [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:45:12 compute-0 nova_compute[254092]: 2025-11-25 16:45:12.515 254096 DEBUG nova.objects.instance [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Lazy-loading 'pci_devices' on Instance uuid 504931a2-d324-4142-8698-9090b5cf7a23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:45:12 compute-0 nova_compute[254092]: 2025-11-25 16:45:12.518 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:45:12 compute-0 nova_compute[254092]: 2025-11-25 16:45:12.518 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:45:12 compute-0 nova_compute[254092]: 2025-11-25 16:45:12.518 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:45:12 compute-0 nova_compute[254092]: 2025-11-25 16:45:12.519 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 16:45:12 compute-0 nova_compute[254092]: 2025-11-25 16:45:12.519 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:45:12 compute-0 nova_compute[254092]: 2025-11-25 16:45:12.551 254096 DEBUG nova.virt.libvirt.driver [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:45:12 compute-0 nova_compute[254092]:   <uuid>504931a2-d324-4142-8698-9090b5cf7a23</uuid>
Nov 25 16:45:12 compute-0 nova_compute[254092]:   <name>instance-0000004f</name>
Nov 25 16:45:12 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:45:12 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:45:12 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:45:12 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:       <nova:name>tempest-ServerShowV247Test-server-1449704858</nova:name>
Nov 25 16:45:12 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:45:11</nova:creationTime>
Nov 25 16:45:12 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:45:12 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:45:12 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:45:12 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:45:12 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:45:12 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:45:12 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:45:12 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:45:12 compute-0 nova_compute[254092]:         <nova:user uuid="225ac79669cf4b6dab40b373facccda7">tempest-ServerShowV247Test-787926520-project-member</nova:user>
Nov 25 16:45:12 compute-0 nova_compute[254092]:         <nova:project uuid="07462d65bafa490d8f9bfcba3972ad42">tempest-ServerShowV247Test-787926520</nova:project>
Nov 25 16:45:12 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:45:12 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:       <nova:ports/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:45:12 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:45:12 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:45:12 compute-0 nova_compute[254092]:     <system>
Nov 25 16:45:12 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:45:12 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:45:12 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:45:12 compute-0 nova_compute[254092]:       <entry name="serial">504931a2-d324-4142-8698-9090b5cf7a23</entry>
Nov 25 16:45:12 compute-0 nova_compute[254092]:       <entry name="uuid">504931a2-d324-4142-8698-9090b5cf7a23</entry>
Nov 25 16:45:12 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     </system>
Nov 25 16:45:12 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:45:12 compute-0 nova_compute[254092]:   <os>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:   </os>
Nov 25 16:45:12 compute-0 nova_compute[254092]:   <features>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:   </features>
Nov 25 16:45:12 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:45:12 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:45:12 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:45:12 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:45:12 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:45:12 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/504931a2-d324-4142-8698-9090b5cf7a23_disk">
Nov 25 16:45:12 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:       </source>
Nov 25 16:45:12 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:45:12 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:45:12 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:45:12 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/504931a2-d324-4142-8698-9090b5cf7a23_disk.config">
Nov 25 16:45:12 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:       </source>
Nov 25 16:45:12 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:45:12 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:45:12 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:45:12 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/504931a2-d324-4142-8698-9090b5cf7a23/console.log" append="off"/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     <video>
Nov 25 16:45:12 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     </video>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:45:12 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:45:12 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:45:12 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:45:12 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:45:12 compute-0 nova_compute[254092]: </domain>
Nov 25 16:45:12 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:45:12 compute-0 systemd[1]: libpod-d18b996c0f2a69abca5940bd6ddf3a20a1a2467427c3447206621421110437b0.scope: Deactivated successfully.
Nov 25 16:45:12 compute-0 systemd[1]: libpod-d18b996c0f2a69abca5940bd6ddf3a20a1a2467427c3447206621421110437b0.scope: Consumed 1.027s CPU time.
Nov 25 16:45:12 compute-0 conmon[337444]: conmon d18b996c0f2a69abca59 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d18b996c0f2a69abca5940bd6ddf3a20a1a2467427c3447206621421110437b0.scope/container/memory.events
Nov 25 16:45:12 compute-0 nova_compute[254092]: 2025-11-25 16:45:12.611 254096 DEBUG nova.virt.libvirt.driver [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:45:12 compute-0 nova_compute[254092]: 2025-11-25 16:45:12.612 254096 DEBUG nova.virt.libvirt.driver [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:45:12 compute-0 nova_compute[254092]: 2025-11-25 16:45:12.612 254096 INFO nova.virt.libvirt.driver [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Using config drive
Nov 25 16:45:12 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:45:12 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1397498626' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:45:12 compute-0 podman[337672]: 2025-11-25 16:45:12.618439628 +0000 UTC m=+0.035353253 container died d18b996c0f2a69abca5940bd6ddf3a20a1a2467427c3447206621421110437b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 16:45:12 compute-0 nova_compute[254092]: 2025-11-25 16:45:12.635 254096 DEBUG nova.storage.rbd_utils [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] rbd image 504931a2-d324-4142-8698-9090b5cf7a23_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:45:12 compute-0 nova_compute[254092]: 2025-11-25 16:45:12.646 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-634fba1622ac26eb5408d53bd0d8c112a7cf7a2cc38a769aa5de89f0468eb31a-merged.mount: Deactivated successfully.
Nov 25 16:45:12 compute-0 nova_compute[254092]: 2025-11-25 16:45:12.652 254096 DEBUG oslo_concurrency.processutils [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:45:12 compute-0 nova_compute[254092]: 2025-11-25 16:45:12.678 254096 DEBUG nova.storage.rbd_utils [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] rbd image efdd3cf9-3df8-4a1e-9e45-12172f99cbac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:45:12 compute-0 podman[337672]: 2025-11-25 16:45:12.679790445 +0000 UTC m=+0.096704070 container remove d18b996c0f2a69abca5940bd6ddf3a20a1a2467427c3447206621421110437b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 16:45:12 compute-0 nova_compute[254092]: 2025-11-25 16:45:12.684 254096 DEBUG oslo_concurrency.processutils [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:45:12 compute-0 systemd[1]: libpod-conmon-d18b996c0f2a69abca5940bd6ddf3a20a1a2467427c3447206621421110437b0.scope: Deactivated successfully.
Nov 25 16:45:12 compute-0 sudo[337157]: pam_unix(sudo:session): session closed for user root
Nov 25 16:45:12 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:45:12 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:45:12 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:45:12 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:45:12 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev ba0c9a5e-33c8-4cd9-8420-f24e82b074cf does not exist
Nov 25 16:45:12 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 7c53b1f2-95ae-4c95-b562-77004ebdfc29 does not exist
Nov 25 16:45:12 compute-0 sudo[337742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:45:12 compute-0 ceph-mon[74985]: pgmap v1794: 321 pgs: 321 active+clean; 202 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 861 KiB/s rd, 2.2 MiB/s wr, 108 op/s
Nov 25 16:45:12 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3789233908' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:45:12 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3695097079' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:45:12 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1397498626' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:45:12 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:45:12 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:45:12 compute-0 sudo[337742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:45:12 compute-0 sudo[337742]: pam_unix(sudo:session): session closed for user root
Nov 25 16:45:12 compute-0 nova_compute[254092]: 2025-11-25 16:45:12.920 254096 INFO nova.virt.libvirt.driver [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Creating config drive at /var/lib/nova/instances/504931a2-d324-4142-8698-9090b5cf7a23/disk.config
Nov 25 16:45:12 compute-0 nova_compute[254092]: 2025-11-25 16:45:12.926 254096 DEBUG oslo_concurrency.processutils [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/504931a2-d324-4142-8698-9090b5cf7a23/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd2fm8mhp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:45:12 compute-0 podman[337785]: 2025-11-25 16:45:12.928305627 +0000 UTC m=+0.076276373 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 25 16:45:12 compute-0 sudo[337804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 16:45:12 compute-0 sudo[337804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:45:12 compute-0 sudo[337804]: pam_unix(sudo:session): session closed for user root
Nov 25 16:45:12 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:45:12 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2545565407' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:45:12 compute-0 podman[337787]: 2025-11-25 16:45:12.966847835 +0000 UTC m=+0.114308167 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible)
Nov 25 16:45:12 compute-0 podman[337767]: 2025-11-25 16:45:12.988864974 +0000 UTC m=+0.136323196 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:45:12 compute-0 nova_compute[254092]: 2025-11-25 16:45:12.991 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:45:13 compute-0 nova_compute[254092]: 2025-11-25 16:45:13.074 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:45:13 compute-0 nova_compute[254092]: 2025-11-25 16:45:13.074 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:45:13 compute-0 nova_compute[254092]: 2025-11-25 16:45:13.078 254096 DEBUG oslo_concurrency.processutils [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/504931a2-d324-4142-8698-9090b5cf7a23/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd2fm8mhp" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:45:13 compute-0 nova_compute[254092]: 2025-11-25 16:45:13.102 254096 DEBUG nova.storage.rbd_utils [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] rbd image 504931a2-d324-4142-8698-9090b5cf7a23_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:45:13 compute-0 nova_compute[254092]: 2025-11-25 16:45:13.105 254096 DEBUG oslo_concurrency.processutils [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/504931a2-d324-4142-8698-9090b5cf7a23/disk.config 504931a2-d324-4142-8698-9090b5cf7a23_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:45:13 compute-0 nova_compute[254092]: 2025-11-25 16:45:13.136 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000048 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:45:13 compute-0 nova_compute[254092]: 2025-11-25 16:45:13.136 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000048 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:45:13 compute-0 nova_compute[254092]: 2025-11-25 16:45:13.141 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000004e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:45:13 compute-0 nova_compute[254092]: 2025-11-25 16:45:13.141 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000004e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:45:13 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:45:13 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2393703152' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:45:13 compute-0 nova_compute[254092]: 2025-11-25 16:45:13.199 254096 DEBUG oslo_concurrency.processutils [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:45:13 compute-0 nova_compute[254092]: 2025-11-25 16:45:13.200 254096 DEBUG nova.objects.instance [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Lazy-loading 'pci_devices' on Instance uuid efdd3cf9-3df8-4a1e-9e45-12172f99cbac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:45:13 compute-0 nova_compute[254092]: 2025-11-25 16:45:13.219 254096 DEBUG nova.virt.libvirt.driver [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:45:13 compute-0 nova_compute[254092]:   <uuid>efdd3cf9-3df8-4a1e-9e45-12172f99cbac</uuid>
Nov 25 16:45:13 compute-0 nova_compute[254092]:   <name>instance-00000050</name>
Nov 25 16:45:13 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:45:13 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:45:13 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:45:13 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:       <nova:name>tempest-ServerShowV247Test-server-790030760</nova:name>
Nov 25 16:45:13 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:45:12</nova:creationTime>
Nov 25 16:45:13 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:45:13 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:45:13 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:45:13 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:45:13 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:45:13 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:45:13 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:45:13 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:45:13 compute-0 nova_compute[254092]:         <nova:user uuid="225ac79669cf4b6dab40b373facccda7">tempest-ServerShowV247Test-787926520-project-member</nova:user>
Nov 25 16:45:13 compute-0 nova_compute[254092]:         <nova:project uuid="07462d65bafa490d8f9bfcba3972ad42">tempest-ServerShowV247Test-787926520</nova:project>
Nov 25 16:45:13 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:45:13 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:       <nova:ports/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:45:13 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:45:13 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:45:13 compute-0 nova_compute[254092]:     <system>
Nov 25 16:45:13 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:45:13 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:45:13 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:45:13 compute-0 nova_compute[254092]:       <entry name="serial">efdd3cf9-3df8-4a1e-9e45-12172f99cbac</entry>
Nov 25 16:45:13 compute-0 nova_compute[254092]:       <entry name="uuid">efdd3cf9-3df8-4a1e-9e45-12172f99cbac</entry>
Nov 25 16:45:13 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     </system>
Nov 25 16:45:13 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:45:13 compute-0 nova_compute[254092]:   <os>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:   </os>
Nov 25 16:45:13 compute-0 nova_compute[254092]:   <features>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:   </features>
Nov 25 16:45:13 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:45:13 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:45:13 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:45:13 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:45:13 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:45:13 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/efdd3cf9-3df8-4a1e-9e45-12172f99cbac_disk">
Nov 25 16:45:13 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:       </source>
Nov 25 16:45:13 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:45:13 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:45:13 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:45:13 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/efdd3cf9-3df8-4a1e-9e45-12172f99cbac_disk.config">
Nov 25 16:45:13 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:       </source>
Nov 25 16:45:13 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:45:13 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:45:13 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:45:13 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/efdd3cf9-3df8-4a1e-9e45-12172f99cbac/console.log" append="off"/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     <video>
Nov 25 16:45:13 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     </video>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:45:13 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:45:13 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:45:13 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:45:13 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:45:13 compute-0 nova_compute[254092]: </domain>
Nov 25 16:45:13 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:45:13 compute-0 nova_compute[254092]: 2025-11-25 16:45:13.272 254096 DEBUG nova.virt.libvirt.driver [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:45:13 compute-0 nova_compute[254092]: 2025-11-25 16:45:13.273 254096 DEBUG nova.virt.libvirt.driver [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:45:13 compute-0 nova_compute[254092]: 2025-11-25 16:45:13.273 254096 INFO nova.virt.libvirt.driver [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] Using config drive
Nov 25 16:45:13 compute-0 nova_compute[254092]: 2025-11-25 16:45:13.293 254096 DEBUG nova.storage.rbd_utils [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] rbd image efdd3cf9-3df8-4a1e-9e45-12172f99cbac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:45:13 compute-0 nova_compute[254092]: 2025-11-25 16:45:13.300 254096 DEBUG oslo_concurrency.processutils [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/504931a2-d324-4142-8698-9090b5cf7a23/disk.config 504931a2-d324-4142-8698-9090b5cf7a23_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:45:13 compute-0 nova_compute[254092]: 2025-11-25 16:45:13.300 254096 INFO nova.virt.libvirt.driver [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Deleting local config drive /var/lib/nova/instances/504931a2-d324-4142-8698-9090b5cf7a23/disk.config because it was imported into RBD.
Nov 25 16:45:13 compute-0 nova_compute[254092]: 2025-11-25 16:45:13.368 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:13 compute-0 systemd-machined[216343]: New machine qemu-95-instance-0000004f.
Nov 25 16:45:13 compute-0 systemd[1]: Started Virtual Machine qemu-95-instance-0000004f.
Nov 25 16:45:13 compute-0 nova_compute[254092]: 2025-11-25 16:45:13.451 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:45:13 compute-0 nova_compute[254092]: 2025-11-25 16:45:13.452 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3619MB free_disk=59.897056579589844GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 16:45:13 compute-0 nova_compute[254092]: 2025-11-25 16:45:13.452 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:45:13 compute-0 nova_compute[254092]: 2025-11-25 16:45:13.453 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:45:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:13.623 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:45:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:13.624 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:45:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:13.625 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:45:13 compute-0 nova_compute[254092]: 2025-11-25 16:45:13.632 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 01f96314-1fbe-4eee-a4ed-db7f448a5320 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:45:13 compute-0 nova_compute[254092]: 2025-11-25 16:45:13.632 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 98410ff5-26ab-4406-8d1b-063d9e114cf8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:45:13 compute-0 nova_compute[254092]: 2025-11-25 16:45:13.632 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 504931a2-d324-4142-8698-9090b5cf7a23 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:45:13 compute-0 nova_compute[254092]: 2025-11-25 16:45:13.632 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance efdd3cf9-3df8-4a1e-9e45-12172f99cbac actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:45:13 compute-0 nova_compute[254092]: 2025-11-25 16:45:13.632 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 16:45:13 compute-0 nova_compute[254092]: 2025-11-25 16:45:13.633 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 16:45:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1795: 321 pgs: 321 active+clean; 202 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 676 KiB/s rd, 2.2 MiB/s wr, 95 op/s
Nov 25 16:45:13 compute-0 nova_compute[254092]: 2025-11-25 16:45:13.716 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:45:13 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2545565407' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:45:13 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2393703152' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:45:13 compute-0 nova_compute[254092]: 2025-11-25 16:45:13.913 254096 INFO nova.virt.libvirt.driver [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] Creating config drive at /var/lib/nova/instances/efdd3cf9-3df8-4a1e-9e45-12172f99cbac/disk.config
Nov 25 16:45:13 compute-0 nova_compute[254092]: 2025-11-25 16:45:13.918 254096 DEBUG oslo_concurrency.processutils [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/efdd3cf9-3df8-4a1e-9e45-12172f99cbac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9ftmieya execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:45:13 compute-0 nova_compute[254092]: 2025-11-25 16:45:13.958 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089113.9574075, 504931a2-d324-4142-8698-9090b5cf7a23 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:45:13 compute-0 nova_compute[254092]: 2025-11-25 16:45:13.958 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] VM Resumed (Lifecycle Event)
Nov 25 16:45:13 compute-0 nova_compute[254092]: 2025-11-25 16:45:13.963 254096 DEBUG nova.compute.manager [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:45:13 compute-0 nova_compute[254092]: 2025-11-25 16:45:13.964 254096 DEBUG nova.virt.libvirt.driver [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:45:13 compute-0 nova_compute[254092]: 2025-11-25 16:45:13.976 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:45:13 compute-0 nova_compute[254092]: 2025-11-25 16:45:13.980 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:45:13 compute-0 nova_compute[254092]: 2025-11-25 16:45:13.990 254096 INFO nova.virt.libvirt.driver [-] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Instance spawned successfully.
Nov 25 16:45:13 compute-0 nova_compute[254092]: 2025-11-25 16:45:13.991 254096 DEBUG nova.virt.libvirt.driver [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:45:13 compute-0 nova_compute[254092]: 2025-11-25 16:45:13.997 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:45:13 compute-0 nova_compute[254092]: 2025-11-25 16:45:13.997 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089113.962381, 504931a2-d324-4142-8698-9090b5cf7a23 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:45:13 compute-0 nova_compute[254092]: 2025-11-25 16:45:13.997 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] VM Started (Lifecycle Event)
Nov 25 16:45:14 compute-0 nova_compute[254092]: 2025-11-25 16:45:14.012 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:45:14 compute-0 nova_compute[254092]: 2025-11-25 16:45:14.015 254096 DEBUG nova.virt.libvirt.driver [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:45:14 compute-0 nova_compute[254092]: 2025-11-25 16:45:14.015 254096 DEBUG nova.virt.libvirt.driver [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:45:14 compute-0 nova_compute[254092]: 2025-11-25 16:45:14.015 254096 DEBUG nova.virt.libvirt.driver [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:45:14 compute-0 nova_compute[254092]: 2025-11-25 16:45:14.016 254096 DEBUG nova.virt.libvirt.driver [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:45:14 compute-0 nova_compute[254092]: 2025-11-25 16:45:14.016 254096 DEBUG nova.virt.libvirt.driver [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:45:14 compute-0 nova_compute[254092]: 2025-11-25 16:45:14.017 254096 DEBUG nova.virt.libvirt.driver [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:45:14 compute-0 nova_compute[254092]: 2025-11-25 16:45:14.023 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:45:14 compute-0 nova_compute[254092]: 2025-11-25 16:45:14.050 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:45:14 compute-0 nova_compute[254092]: 2025-11-25 16:45:14.057 254096 DEBUG oslo_concurrency.processutils [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/efdd3cf9-3df8-4a1e-9e45-12172f99cbac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9ftmieya" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:45:14 compute-0 nova_compute[254092]: 2025-11-25 16:45:14.079 254096 DEBUG nova.storage.rbd_utils [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] rbd image efdd3cf9-3df8-4a1e-9e45-12172f99cbac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:45:14 compute-0 nova_compute[254092]: 2025-11-25 16:45:14.083 254096 DEBUG oslo_concurrency.processutils [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/efdd3cf9-3df8-4a1e-9e45-12172f99cbac/disk.config efdd3cf9-3df8-4a1e-9e45-12172f99cbac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:45:14 compute-0 nova_compute[254092]: 2025-11-25 16:45:14.123 254096 INFO nova.compute.manager [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Took 3.60 seconds to spawn the instance on the hypervisor.
Nov 25 16:45:14 compute-0 nova_compute[254092]: 2025-11-25 16:45:14.124 254096 DEBUG nova.compute.manager [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:45:14 compute-0 nova_compute[254092]: 2025-11-25 16:45:14.179 254096 INFO nova.compute.manager [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Took 4.69 seconds to build instance.
Nov 25 16:45:14 compute-0 nova_compute[254092]: 2025-11-25 16:45:14.196 254096 DEBUG oslo_concurrency.lockutils [None req-c639869d-e9a1-4e48-b263-ed00f5a5aee1 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Lock "504931a2-d324-4142-8698-9090b5cf7a23" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:45:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:45:14 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1122700375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:45:14 compute-0 nova_compute[254092]: 2025-11-25 16:45:14.231 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:45:14 compute-0 nova_compute[254092]: 2025-11-25 16:45:14.237 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:45:14 compute-0 nova_compute[254092]: 2025-11-25 16:45:14.254 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:45:14 compute-0 nova_compute[254092]: 2025-11-25 16:45:14.260 254096 DEBUG oslo_concurrency.processutils [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/efdd3cf9-3df8-4a1e-9e45-12172f99cbac/disk.config efdd3cf9-3df8-4a1e-9e45-12172f99cbac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:45:14 compute-0 nova_compute[254092]: 2025-11-25 16:45:14.261 254096 INFO nova.virt.libvirt.driver [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] Deleting local config drive /var/lib/nova/instances/efdd3cf9-3df8-4a1e-9e45-12172f99cbac/disk.config because it was imported into RBD.
Nov 25 16:45:14 compute-0 nova_compute[254092]: 2025-11-25 16:45:14.280 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 16:45:14 compute-0 nova_compute[254092]: 2025-11-25 16:45:14.281 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.828s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:45:14 compute-0 systemd-machined[216343]: New machine qemu-96-instance-00000050.
Nov 25 16:45:14 compute-0 systemd[1]: Started Virtual Machine qemu-96-instance-00000050.
Nov 25 16:45:14 compute-0 ceph-mon[74985]: pgmap v1795: 321 pgs: 321 active+clean; 202 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 676 KiB/s rd, 2.2 MiB/s wr, 95 op/s
Nov 25 16:45:14 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1122700375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:45:15 compute-0 nova_compute[254092]: 2025-11-25 16:45:15.055 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089115.0548232, efdd3cf9-3df8-4a1e-9e45-12172f99cbac => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:45:15 compute-0 nova_compute[254092]: 2025-11-25 16:45:15.055 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] VM Resumed (Lifecycle Event)
Nov 25 16:45:15 compute-0 nova_compute[254092]: 2025-11-25 16:45:15.058 254096 DEBUG nova.compute.manager [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:45:15 compute-0 nova_compute[254092]: 2025-11-25 16:45:15.058 254096 DEBUG nova.virt.libvirt.driver [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:45:15 compute-0 nova_compute[254092]: 2025-11-25 16:45:15.061 254096 INFO nova.virt.libvirt.driver [-] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] Instance spawned successfully.
Nov 25 16:45:15 compute-0 nova_compute[254092]: 2025-11-25 16:45:15.061 254096 DEBUG nova.virt.libvirt.driver [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:45:15 compute-0 nova_compute[254092]: 2025-11-25 16:45:15.074 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:45:15 compute-0 nova_compute[254092]: 2025-11-25 16:45:15.078 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:45:15 compute-0 nova_compute[254092]: 2025-11-25 16:45:15.082 254096 DEBUG nova.virt.libvirt.driver [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:45:15 compute-0 nova_compute[254092]: 2025-11-25 16:45:15.082 254096 DEBUG nova.virt.libvirt.driver [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:45:15 compute-0 nova_compute[254092]: 2025-11-25 16:45:15.083 254096 DEBUG nova.virt.libvirt.driver [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:45:15 compute-0 nova_compute[254092]: 2025-11-25 16:45:15.083 254096 DEBUG nova.virt.libvirt.driver [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:45:15 compute-0 nova_compute[254092]: 2025-11-25 16:45:15.084 254096 DEBUG nova.virt.libvirt.driver [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:45:15 compute-0 nova_compute[254092]: 2025-11-25 16:45:15.084 254096 DEBUG nova.virt.libvirt.driver [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:45:15 compute-0 nova_compute[254092]: 2025-11-25 16:45:15.105 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:45:15 compute-0 nova_compute[254092]: 2025-11-25 16:45:15.105 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089115.0567498, efdd3cf9-3df8-4a1e-9e45-12172f99cbac => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:45:15 compute-0 nova_compute[254092]: 2025-11-25 16:45:15.105 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] VM Started (Lifecycle Event)
Nov 25 16:45:15 compute-0 nova_compute[254092]: 2025-11-25 16:45:15.130 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:45:15 compute-0 nova_compute[254092]: 2025-11-25 16:45:15.133 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:45:15 compute-0 nova_compute[254092]: 2025-11-25 16:45:15.147 254096 INFO nova.compute.manager [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] Took 3.86 seconds to spawn the instance on the hypervisor.
Nov 25 16:45:15 compute-0 nova_compute[254092]: 2025-11-25 16:45:15.148 254096 DEBUG nova.compute.manager [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:45:15 compute-0 nova_compute[254092]: 2025-11-25 16:45:15.155 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:45:15 compute-0 nova_compute[254092]: 2025-11-25 16:45:15.210 254096 INFO nova.compute.manager [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] Took 5.67 seconds to build instance.
Nov 25 16:45:15 compute-0 nova_compute[254092]: 2025-11-25 16:45:15.240 254096 INFO nova.compute.manager [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Rebuilding instance
Nov 25 16:45:15 compute-0 nova_compute[254092]: 2025-11-25 16:45:15.246 254096 DEBUG oslo_concurrency.lockutils [None req-c18b5f36-4d04-45ff-a1d3-b1aaee98562a 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Lock "efdd3cf9-3df8-4a1e-9e45-12172f99cbac" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.788s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:45:15 compute-0 nova_compute[254092]: 2025-11-25 16:45:15.280 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:45:15 compute-0 nova_compute[254092]: 2025-11-25 16:45:15.280 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:45:15 compute-0 nova_compute[254092]: 2025-11-25 16:45:15.280 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:45:15 compute-0 nova_compute[254092]: 2025-11-25 16:45:15.280 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 16:45:15 compute-0 nova_compute[254092]: 2025-11-25 16:45:15.459 254096 DEBUG nova.objects.instance [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 504931a2-d324-4142-8698-9090b5cf7a23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:45:15 compute-0 nova_compute[254092]: 2025-11-25 16:45:15.476 254096 DEBUG nova.compute.manager [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:45:15 compute-0 nova_compute[254092]: 2025-11-25 16:45:15.544 254096 DEBUG nova.objects.instance [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Lazy-loading 'pci_requests' on Instance uuid 504931a2-d324-4142-8698-9090b5cf7a23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:45:15 compute-0 nova_compute[254092]: 2025-11-25 16:45:15.555 254096 DEBUG nova.objects.instance [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Lazy-loading 'pci_devices' on Instance uuid 504931a2-d324-4142-8698-9090b5cf7a23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:45:15 compute-0 nova_compute[254092]: 2025-11-25 16:45:15.563 254096 DEBUG nova.objects.instance [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Lazy-loading 'resources' on Instance uuid 504931a2-d324-4142-8698-9090b5cf7a23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:45:15 compute-0 nova_compute[254092]: 2025-11-25 16:45:15.572 254096 DEBUG nova.objects.instance [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Lazy-loading 'migration_context' on Instance uuid 504931a2-d324-4142-8698-9090b5cf7a23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:45:15 compute-0 nova_compute[254092]: 2025-11-25 16:45:15.582 254096 DEBUG nova.objects.instance [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 16:45:15 compute-0 nova_compute[254092]: 2025-11-25 16:45:15.584 254096 DEBUG nova.virt.libvirt.driver [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 16:45:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1796: 321 pgs: 321 active+clean; 276 MiB data, 704 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 4.7 MiB/s wr, 192 op/s
Nov 25 16:45:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:45:16 compute-0 ceph-mon[74985]: pgmap v1796: 321 pgs: 321 active+clean; 276 MiB data, 704 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 4.7 MiB/s wr, 192 op/s
Nov 25 16:45:17 compute-0 nova_compute[254092]: 2025-11-25 16:45:17.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:45:17 compute-0 nova_compute[254092]: 2025-11-25 16:45:17.517 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:45:17 compute-0 nova_compute[254092]: 2025-11-25 16:45:17.517 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 16:45:17 compute-0 nova_compute[254092]: 2025-11-25 16:45:17.518 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 16:45:17 compute-0 nova_compute[254092]: 2025-11-25 16:45:17.653 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1797: 321 pgs: 321 active+clean; 295 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.1 MiB/s wr, 158 op/s
Nov 25 16:45:17 compute-0 nova_compute[254092]: 2025-11-25 16:45:17.939 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-01f96314-1fbe-4eee-a4ed-db7f448a5320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:45:17 compute-0 nova_compute[254092]: 2025-11-25 16:45:17.939 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-01f96314-1fbe-4eee-a4ed-db7f448a5320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:45:17 compute-0 nova_compute[254092]: 2025-11-25 16:45:17.939 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 16:45:17 compute-0 nova_compute[254092]: 2025-11-25 16:45:17.940 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 01f96314-1fbe-4eee-a4ed-db7f448a5320 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:45:18 compute-0 nova_compute[254092]: 2025-11-25 16:45:18.370 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:18 compute-0 ceph-mon[74985]: pgmap v1797: 321 pgs: 321 active+clean; 295 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.1 MiB/s wr, 158 op/s
Nov 25 16:45:19 compute-0 nova_compute[254092]: 2025-11-25 16:45:19.503 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Updating instance_info_cache with network_info: [{"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:45:19 compute-0 nova_compute[254092]: 2025-11-25 16:45:19.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-01f96314-1fbe-4eee-a4ed-db7f448a5320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:45:19 compute-0 nova_compute[254092]: 2025-11-25 16:45:19.522 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 16:45:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1798: 321 pgs: 321 active+clean; 295 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 138 op/s
Nov 25 16:45:20 compute-0 nova_compute[254092]: 2025-11-25 16:45:20.520 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:45:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:45:21 compute-0 ceph-mon[74985]: pgmap v1798: 321 pgs: 321 active+clean; 295 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 138 op/s
Nov 25 16:45:21 compute-0 nova_compute[254092]: 2025-11-25 16:45:21.071 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:21.073 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:45:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:21.074 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 16:45:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:21.075 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:45:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1799: 321 pgs: 321 active+clean; 295 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 203 op/s
Nov 25 16:45:22 compute-0 nova_compute[254092]: 2025-11-25 16:45:22.657 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:23 compute-0 ceph-mon[74985]: pgmap v1799: 321 pgs: 321 active+clean; 295 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 203 op/s
Nov 25 16:45:23 compute-0 nova_compute[254092]: 2025-11-25 16:45:23.372 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1800: 321 pgs: 321 active+clean; 295 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 201 op/s
Nov 25 16:45:25 compute-0 ceph-mon[74985]: pgmap v1800: 321 pgs: 321 active+clean; 295 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 201 op/s
Nov 25 16:45:25 compute-0 nova_compute[254092]: 2025-11-25 16:45:25.626 254096 DEBUG nova.virt.libvirt.driver [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 16:45:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1801: 321 pgs: 321 active+clean; 295 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 203 op/s
Nov 25 16:45:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:45:27 compute-0 ceph-mon[74985]: pgmap v1801: 321 pgs: 321 active+clean; 295 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 203 op/s
Nov 25 16:45:27 compute-0 nova_compute[254092]: 2025-11-25 16:45:27.661 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1802: 321 pgs: 321 active+clean; 304 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.6 MiB/s wr, 110 op/s
Nov 25 16:45:28 compute-0 nova_compute[254092]: 2025-11-25 16:45:28.373 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:29 compute-0 ceph-mon[74985]: pgmap v1802: 321 pgs: 321 active+clean; 304 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.6 MiB/s wr, 110 op/s
Nov 25 16:45:29 compute-0 ovn_controller[153477]: 2025-11-25T16:45:29Z|00763|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 25 16:45:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1803: 321 pgs: 321 active+clean; 304 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 629 KiB/s wr, 70 op/s
Nov 25 16:45:30 compute-0 nova_compute[254092]: 2025-11-25 16:45:30.001 254096 DEBUG oslo_concurrency.lockutils [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquiring lock "b487a735-c096-4a8f-b8ba-fc5b6c055f56" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:45:30 compute-0 nova_compute[254092]: 2025-11-25 16:45:30.002 254096 DEBUG oslo_concurrency.lockutils [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "b487a735-c096-4a8f-b8ba-fc5b6c055f56" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:45:30 compute-0 nova_compute[254092]: 2025-11-25 16:45:30.016 254096 DEBUG nova.compute.manager [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:45:30 compute-0 nova_compute[254092]: 2025-11-25 16:45:30.088 254096 DEBUG oslo_concurrency.lockutils [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:45:30 compute-0 nova_compute[254092]: 2025-11-25 16:45:30.088 254096 DEBUG oslo_concurrency.lockutils [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:45:30 compute-0 nova_compute[254092]: 2025-11-25 16:45:30.097 254096 DEBUG nova.virt.hardware [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:45:30 compute-0 nova_compute[254092]: 2025-11-25 16:45:30.098 254096 INFO nova.compute.claims [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:45:30 compute-0 nova_compute[254092]: 2025-11-25 16:45:30.258 254096 DEBUG oslo_concurrency.processutils [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:45:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:45:30 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2824484492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:45:30 compute-0 nova_compute[254092]: 2025-11-25 16:45:30.755 254096 DEBUG oslo_concurrency.processutils [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:45:30 compute-0 nova_compute[254092]: 2025-11-25 16:45:30.762 254096 DEBUG nova.compute.provider_tree [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:45:30 compute-0 nova_compute[254092]: 2025-11-25 16:45:30.776 254096 DEBUG nova.scheduler.client.report [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:45:30 compute-0 nova_compute[254092]: 2025-11-25 16:45:30.800 254096 DEBUG oslo_concurrency.lockutils [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:45:30 compute-0 nova_compute[254092]: 2025-11-25 16:45:30.800 254096 DEBUG nova.compute.manager [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:45:30 compute-0 nova_compute[254092]: 2025-11-25 16:45:30.848 254096 DEBUG nova.compute.manager [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:45:30 compute-0 nova_compute[254092]: 2025-11-25 16:45:30.849 254096 DEBUG nova.network.neutron [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:45:30 compute-0 nova_compute[254092]: 2025-11-25 16:45:30.867 254096 INFO nova.virt.libvirt.driver [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:45:30 compute-0 nova_compute[254092]: 2025-11-25 16:45:30.882 254096 DEBUG nova.compute.manager [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:45:30 compute-0 nova_compute[254092]: 2025-11-25 16:45:30.971 254096 DEBUG nova.compute.manager [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:45:30 compute-0 nova_compute[254092]: 2025-11-25 16:45:30.972 254096 DEBUG nova.virt.libvirt.driver [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:45:30 compute-0 nova_compute[254092]: 2025-11-25 16:45:30.973 254096 INFO nova.virt.libvirt.driver [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Creating image(s)
Nov 25 16:45:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:45:30 compute-0 nova_compute[254092]: 2025-11-25 16:45:30.994 254096 DEBUG nova.storage.rbd_utils [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] rbd image b487a735-c096-4a8f-b8ba-fc5b6c055f56_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:45:31 compute-0 nova_compute[254092]: 2025-11-25 16:45:31.020 254096 DEBUG nova.storage.rbd_utils [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] rbd image b487a735-c096-4a8f-b8ba-fc5b6c055f56_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:45:31 compute-0 nova_compute[254092]: 2025-11-25 16:45:31.043 254096 DEBUG nova.storage.rbd_utils [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] rbd image b487a735-c096-4a8f-b8ba-fc5b6c055f56_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:45:31 compute-0 nova_compute[254092]: 2025-11-25 16:45:31.048 254096 DEBUG oslo_concurrency.processutils [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:45:31 compute-0 nova_compute[254092]: 2025-11-25 16:45:31.119 254096 DEBUG oslo_concurrency.processutils [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:45:31 compute-0 nova_compute[254092]: 2025-11-25 16:45:31.120 254096 DEBUG oslo_concurrency.lockutils [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:45:31 compute-0 nova_compute[254092]: 2025-11-25 16:45:31.120 254096 DEBUG oslo_concurrency.lockutils [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:45:31 compute-0 nova_compute[254092]: 2025-11-25 16:45:31.121 254096 DEBUG oslo_concurrency.lockutils [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:45:31 compute-0 nova_compute[254092]: 2025-11-25 16:45:31.143 254096 DEBUG nova.storage.rbd_utils [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] rbd image b487a735-c096-4a8f-b8ba-fc5b6c055f56_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:45:31 compute-0 ceph-mon[74985]: pgmap v1803: 321 pgs: 321 active+clean; 304 MiB data, 716 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 629 KiB/s wr, 70 op/s
Nov 25 16:45:31 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2824484492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:45:31 compute-0 nova_compute[254092]: 2025-11-25 16:45:31.147 254096 DEBUG oslo_concurrency.processutils [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 b487a735-c096-4a8f-b8ba-fc5b6c055f56_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:45:31 compute-0 nova_compute[254092]: 2025-11-25 16:45:31.176 254096 DEBUG nova.policy [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'aea2d8cf3bb54cdbbc72e41805fb1f90', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b4d15f5aabd3491da5314b126a20225a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:45:31 compute-0 nova_compute[254092]: 2025-11-25 16:45:31.293 254096 DEBUG oslo_concurrency.lockutils [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Acquiring lock "6b676874-6857-4021-9d83-c3673f57cebb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:45:31 compute-0 nova_compute[254092]: 2025-11-25 16:45:31.294 254096 DEBUG oslo_concurrency.lockutils [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "6b676874-6857-4021-9d83-c3673f57cebb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:45:31 compute-0 nova_compute[254092]: 2025-11-25 16:45:31.322 254096 DEBUG nova.compute.manager [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:45:31 compute-0 nova_compute[254092]: 2025-11-25 16:45:31.445 254096 DEBUG oslo_concurrency.lockutils [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:45:31 compute-0 nova_compute[254092]: 2025-11-25 16:45:31.446 254096 DEBUG oslo_concurrency.lockutils [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:45:31 compute-0 nova_compute[254092]: 2025-11-25 16:45:31.451 254096 DEBUG nova.virt.hardware [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:45:31 compute-0 nova_compute[254092]: 2025-11-25 16:45:31.452 254096 INFO nova.compute.claims [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:45:31 compute-0 nova_compute[254092]: 2025-11-25 16:45:31.501 254096 DEBUG oslo_concurrency.processutils [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 b487a735-c096-4a8f-b8ba-fc5b6c055f56_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.355s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:45:31 compute-0 nova_compute[254092]: 2025-11-25 16:45:31.577 254096 DEBUG nova.storage.rbd_utils [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] resizing rbd image b487a735-c096-4a8f-b8ba-fc5b6c055f56_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:45:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1804: 321 pgs: 321 active+clean; 361 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 191 op/s
Nov 25 16:45:31 compute-0 nova_compute[254092]: 2025-11-25 16:45:31.707 254096 DEBUG nova.objects.instance [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'migration_context' on Instance uuid b487a735-c096-4a8f-b8ba-fc5b6c055f56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:45:31 compute-0 nova_compute[254092]: 2025-11-25 16:45:31.723 254096 DEBUG nova.virt.libvirt.driver [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:45:31 compute-0 nova_compute[254092]: 2025-11-25 16:45:31.724 254096 DEBUG nova.virt.libvirt.driver [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Ensure instance console log exists: /var/lib/nova/instances/b487a735-c096-4a8f-b8ba-fc5b6c055f56/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:45:31 compute-0 nova_compute[254092]: 2025-11-25 16:45:31.725 254096 DEBUG oslo_concurrency.lockutils [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:45:31 compute-0 nova_compute[254092]: 2025-11-25 16:45:31.725 254096 DEBUG oslo_concurrency.lockutils [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:45:31 compute-0 nova_compute[254092]: 2025-11-25 16:45:31.725 254096 DEBUG oslo_concurrency.lockutils [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:45:31 compute-0 nova_compute[254092]: 2025-11-25 16:45:31.753 254096 DEBUG oslo_concurrency.processutils [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:45:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:45:32 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/48214446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:45:32 compute-0 nova_compute[254092]: 2025-11-25 16:45:32.221 254096 DEBUG oslo_concurrency.processutils [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:45:32 compute-0 nova_compute[254092]: 2025-11-25 16:45:32.228 254096 DEBUG nova.compute.provider_tree [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:45:32 compute-0 nova_compute[254092]: 2025-11-25 16:45:32.247 254096 DEBUG nova.scheduler.client.report [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:45:32 compute-0 nova_compute[254092]: 2025-11-25 16:45:32.279 254096 DEBUG oslo_concurrency.lockutils [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:45:32 compute-0 nova_compute[254092]: 2025-11-25 16:45:32.280 254096 DEBUG nova.compute.manager [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:45:32 compute-0 nova_compute[254092]: 2025-11-25 16:45:32.336 254096 DEBUG nova.compute.manager [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:45:32 compute-0 nova_compute[254092]: 2025-11-25 16:45:32.336 254096 DEBUG nova.network.neutron [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:45:32 compute-0 nova_compute[254092]: 2025-11-25 16:45:32.356 254096 INFO nova.virt.libvirt.driver [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:45:32 compute-0 nova_compute[254092]: 2025-11-25 16:45:32.381 254096 DEBUG nova.compute.manager [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:45:32 compute-0 nova_compute[254092]: 2025-11-25 16:45:32.474 254096 DEBUG nova.compute.manager [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:45:32 compute-0 nova_compute[254092]: 2025-11-25 16:45:32.476 254096 DEBUG nova.virt.libvirt.driver [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:45:32 compute-0 nova_compute[254092]: 2025-11-25 16:45:32.477 254096 INFO nova.virt.libvirt.driver [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Creating image(s)
Nov 25 16:45:32 compute-0 nova_compute[254092]: 2025-11-25 16:45:32.501 254096 DEBUG nova.storage.rbd_utils [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] rbd image 6b676874-6857-4021-9d83-c3673f57cebb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:45:32 compute-0 nova_compute[254092]: 2025-11-25 16:45:32.530 254096 DEBUG nova.storage.rbd_utils [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] rbd image 6b676874-6857-4021-9d83-c3673f57cebb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:45:32 compute-0 nova_compute[254092]: 2025-11-25 16:45:32.563 254096 DEBUG nova.storage.rbd_utils [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] rbd image 6b676874-6857-4021-9d83-c3673f57cebb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:45:32 compute-0 nova_compute[254092]: 2025-11-25 16:45:32.569 254096 DEBUG oslo_concurrency.processutils [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:45:32 compute-0 nova_compute[254092]: 2025-11-25 16:45:32.601 254096 DEBUG nova.policy [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7bd4800c25cd462b9365649e599d0a0e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd4964e211a6d4699ab499f7cadee8a8d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:45:32 compute-0 nova_compute[254092]: 2025-11-25 16:45:32.604 254096 DEBUG nova.network.neutron [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Successfully created port: 5b25a3f7-0f17-4813-8f18-d5d20a92f4aa _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:45:32 compute-0 nova_compute[254092]: 2025-11-25 16:45:32.637 254096 DEBUG oslo_concurrency.processutils [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:45:32 compute-0 nova_compute[254092]: 2025-11-25 16:45:32.638 254096 DEBUG oslo_concurrency.lockutils [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:45:32 compute-0 nova_compute[254092]: 2025-11-25 16:45:32.638 254096 DEBUG oslo_concurrency.lockutils [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:45:32 compute-0 nova_compute[254092]: 2025-11-25 16:45:32.639 254096 DEBUG oslo_concurrency.lockutils [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:45:32 compute-0 nova_compute[254092]: 2025-11-25 16:45:32.661 254096 DEBUG nova.storage.rbd_utils [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] rbd image 6b676874-6857-4021-9d83-c3673f57cebb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:45:32 compute-0 nova_compute[254092]: 2025-11-25 16:45:32.667 254096 DEBUG oslo_concurrency.processutils [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 6b676874-6857-4021-9d83-c3673f57cebb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:45:32 compute-0 nova_compute[254092]: 2025-11-25 16:45:32.705 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:33 compute-0 nova_compute[254092]: 2025-11-25 16:45:33.030 254096 DEBUG oslo_concurrency.processutils [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 6b676874-6857-4021-9d83-c3673f57cebb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.363s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:45:33 compute-0 nova_compute[254092]: 2025-11-25 16:45:33.093 254096 DEBUG nova.storage.rbd_utils [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] resizing rbd image 6b676874-6857-4021-9d83-c3673f57cebb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:45:33 compute-0 ceph-mon[74985]: pgmap v1804: 321 pgs: 321 active+clean; 361 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 191 op/s
Nov 25 16:45:33 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/48214446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:45:33 compute-0 nova_compute[254092]: 2025-11-25 16:45:33.192 254096 DEBUG nova.objects.instance [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lazy-loading 'migration_context' on Instance uuid 6b676874-6857-4021-9d83-c3673f57cebb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:45:33 compute-0 nova_compute[254092]: 2025-11-25 16:45:33.207 254096 DEBUG nova.virt.libvirt.driver [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:45:33 compute-0 nova_compute[254092]: 2025-11-25 16:45:33.208 254096 DEBUG nova.virt.libvirt.driver [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Ensure instance console log exists: /var/lib/nova/instances/6b676874-6857-4021-9d83-c3673f57cebb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:45:33 compute-0 nova_compute[254092]: 2025-11-25 16:45:33.209 254096 DEBUG oslo_concurrency.lockutils [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:45:33 compute-0 nova_compute[254092]: 2025-11-25 16:45:33.209 254096 DEBUG oslo_concurrency.lockutils [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:45:33 compute-0 nova_compute[254092]: 2025-11-25 16:45:33.209 254096 DEBUG oslo_concurrency.lockutils [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:45:33 compute-0 nova_compute[254092]: 2025-11-25 16:45:33.376 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1805: 321 pgs: 321 active+clean; 361 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 645 KiB/s rd, 4.3 MiB/s wr, 126 op/s
Nov 25 16:45:33 compute-0 nova_compute[254092]: 2025-11-25 16:45:33.943 254096 DEBUG nova.network.neutron [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Successfully created port: 0ebbbca7-8751-4479-a892-216433a26e74 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:45:34 compute-0 nova_compute[254092]: 2025-11-25 16:45:34.048 254096 DEBUG nova.network.neutron [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Successfully updated port: 5b25a3f7-0f17-4813-8f18-d5d20a92f4aa _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:45:34 compute-0 nova_compute[254092]: 2025-11-25 16:45:34.078 254096 DEBUG oslo_concurrency.lockutils [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquiring lock "refresh_cache-b487a735-c096-4a8f-b8ba-fc5b6c055f56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:45:34 compute-0 nova_compute[254092]: 2025-11-25 16:45:34.079 254096 DEBUG oslo_concurrency.lockutils [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquired lock "refresh_cache-b487a735-c096-4a8f-b8ba-fc5b6c055f56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:45:34 compute-0 nova_compute[254092]: 2025-11-25 16:45:34.079 254096 DEBUG nova.network.neutron [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:45:34 compute-0 ceph-mon[74985]: pgmap v1805: 321 pgs: 321 active+clean; 361 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 645 KiB/s rd, 4.3 MiB/s wr, 126 op/s
Nov 25 16:45:34 compute-0 nova_compute[254092]: 2025-11-25 16:45:34.185 254096 DEBUG nova.compute.manager [req-c15c511f-b428-413a-8fdf-050fb96c1898 req-09f4bb16-48b2-4ec7-a6d5-29d228de8fde a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Received event network-changed-5b25a3f7-0f17-4813-8f18-d5d20a92f4aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:45:34 compute-0 nova_compute[254092]: 2025-11-25 16:45:34.185 254096 DEBUG nova.compute.manager [req-c15c511f-b428-413a-8fdf-050fb96c1898 req-09f4bb16-48b2-4ec7-a6d5-29d228de8fde a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Refreshing instance network info cache due to event network-changed-5b25a3f7-0f17-4813-8f18-d5d20a92f4aa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:45:34 compute-0 nova_compute[254092]: 2025-11-25 16:45:34.186 254096 DEBUG oslo_concurrency.lockutils [req-c15c511f-b428-413a-8fdf-050fb96c1898 req-09f4bb16-48b2-4ec7-a6d5-29d228de8fde a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-b487a735-c096-4a8f-b8ba-fc5b6c055f56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:45:34 compute-0 nova_compute[254092]: 2025-11-25 16:45:34.255 254096 DEBUG nova.network.neutron [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:45:34 compute-0 nova_compute[254092]: 2025-11-25 16:45:34.870 254096 DEBUG nova.network.neutron [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Successfully updated port: 0ebbbca7-8751-4479-a892-216433a26e74 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:45:34 compute-0 nova_compute[254092]: 2025-11-25 16:45:34.883 254096 DEBUG oslo_concurrency.lockutils [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Acquiring lock "refresh_cache-6b676874-6857-4021-9d83-c3673f57cebb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:45:34 compute-0 nova_compute[254092]: 2025-11-25 16:45:34.883 254096 DEBUG oslo_concurrency.lockutils [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Acquired lock "refresh_cache-6b676874-6857-4021-9d83-c3673f57cebb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:45:34 compute-0 nova_compute[254092]: 2025-11-25 16:45:34.883 254096 DEBUG nova.network.neutron [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:45:34 compute-0 nova_compute[254092]: 2025-11-25 16:45:34.992 254096 DEBUG nova.compute.manager [req-76ff4640-a799-4a90-9ebc-681b7cf4d8b8 req-138bb899-977f-4fb3-8ac3-f1c0a330f5c4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Received event network-changed-0ebbbca7-8751-4479-a892-216433a26e74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:45:34 compute-0 nova_compute[254092]: 2025-11-25 16:45:34.993 254096 DEBUG nova.compute.manager [req-76ff4640-a799-4a90-9ebc-681b7cf4d8b8 req-138bb899-977f-4fb3-8ac3-f1c0a330f5c4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Refreshing instance network info cache due to event network-changed-0ebbbca7-8751-4479-a892-216433a26e74. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:45:34 compute-0 nova_compute[254092]: 2025-11-25 16:45:34.993 254096 DEBUG oslo_concurrency.lockutils [req-76ff4640-a799-4a90-9ebc-681b7cf4d8b8 req-138bb899-977f-4fb3-8ac3-f1c0a330f5c4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-6b676874-6857-4021-9d83-c3673f57cebb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:45:35 compute-0 nova_compute[254092]: 2025-11-25 16:45:35.076 254096 DEBUG nova.network.neutron [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:45:35 compute-0 nova_compute[254092]: 2025-11-25 16:45:35.663 254096 DEBUG nova.network.neutron [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Updating instance_info_cache with network_info: [{"id": "5b25a3f7-0f17-4813-8f18-d5d20a92f4aa", "address": "fa:16:3e:23:43:9a", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b25a3f7-0f", "ovs_interfaceid": "5b25a3f7-0f17-4813-8f18-d5d20a92f4aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:45:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1806: 321 pgs: 321 active+clean; 416 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 668 KiB/s rd, 6.6 MiB/s wr, 167 op/s
Nov 25 16:45:35 compute-0 nova_compute[254092]: 2025-11-25 16:45:35.721 254096 DEBUG oslo_concurrency.lockutils [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Releasing lock "refresh_cache-b487a735-c096-4a8f-b8ba-fc5b6c055f56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:45:35 compute-0 nova_compute[254092]: 2025-11-25 16:45:35.721 254096 DEBUG nova.compute.manager [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Instance network_info: |[{"id": "5b25a3f7-0f17-4813-8f18-d5d20a92f4aa", "address": "fa:16:3e:23:43:9a", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b25a3f7-0f", "ovs_interfaceid": "5b25a3f7-0f17-4813-8f18-d5d20a92f4aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:45:35 compute-0 nova_compute[254092]: 2025-11-25 16:45:35.722 254096 DEBUG oslo_concurrency.lockutils [req-c15c511f-b428-413a-8fdf-050fb96c1898 req-09f4bb16-48b2-4ec7-a6d5-29d228de8fde a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-b487a735-c096-4a8f-b8ba-fc5b6c055f56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:45:35 compute-0 nova_compute[254092]: 2025-11-25 16:45:35.722 254096 DEBUG nova.network.neutron [req-c15c511f-b428-413a-8fdf-050fb96c1898 req-09f4bb16-48b2-4ec7-a6d5-29d228de8fde a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Refreshing network info cache for port 5b25a3f7-0f17-4813-8f18-d5d20a92f4aa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:45:35 compute-0 nova_compute[254092]: 2025-11-25 16:45:35.725 254096 DEBUG nova.virt.libvirt.driver [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Start _get_guest_xml network_info=[{"id": "5b25a3f7-0f17-4813-8f18-d5d20a92f4aa", "address": "fa:16:3e:23:43:9a", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b25a3f7-0f", "ovs_interfaceid": "5b25a3f7-0f17-4813-8f18-d5d20a92f4aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:45:35 compute-0 nova_compute[254092]: 2025-11-25 16:45:35.729 254096 WARNING nova.virt.libvirt.driver [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:45:35 compute-0 nova_compute[254092]: 2025-11-25 16:45:35.736 254096 DEBUG nova.virt.libvirt.host [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:45:35 compute-0 nova_compute[254092]: 2025-11-25 16:45:35.737 254096 DEBUG nova.virt.libvirt.host [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:45:35 compute-0 nova_compute[254092]: 2025-11-25 16:45:35.744 254096 DEBUG nova.virt.libvirt.host [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:45:35 compute-0 nova_compute[254092]: 2025-11-25 16:45:35.745 254096 DEBUG nova.virt.libvirt.host [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:45:35 compute-0 nova_compute[254092]: 2025-11-25 16:45:35.745 254096 DEBUG nova.virt.libvirt.driver [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:45:35 compute-0 nova_compute[254092]: 2025-11-25 16:45:35.745 254096 DEBUG nova.virt.hardware [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:45:35 compute-0 nova_compute[254092]: 2025-11-25 16:45:35.746 254096 DEBUG nova.virt.hardware [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:45:35 compute-0 nova_compute[254092]: 2025-11-25 16:45:35.746 254096 DEBUG nova.virt.hardware [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:45:35 compute-0 nova_compute[254092]: 2025-11-25 16:45:35.746 254096 DEBUG nova.virt.hardware [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:45:35 compute-0 nova_compute[254092]: 2025-11-25 16:45:35.746 254096 DEBUG nova.virt.hardware [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:45:35 compute-0 nova_compute[254092]: 2025-11-25 16:45:35.746 254096 DEBUG nova.virt.hardware [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:45:35 compute-0 nova_compute[254092]: 2025-11-25 16:45:35.746 254096 DEBUG nova.virt.hardware [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:45:35 compute-0 nova_compute[254092]: 2025-11-25 16:45:35.747 254096 DEBUG nova.virt.hardware [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:45:35 compute-0 nova_compute[254092]: 2025-11-25 16:45:35.747 254096 DEBUG nova.virt.hardware [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:45:35 compute-0 nova_compute[254092]: 2025-11-25 16:45:35.747 254096 DEBUG nova.virt.hardware [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:45:35 compute-0 nova_compute[254092]: 2025-11-25 16:45:35.747 254096 DEBUG nova.virt.hardware [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:45:35 compute-0 nova_compute[254092]: 2025-11-25 16:45:35.750 254096 DEBUG oslo_concurrency.processutils [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:45:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.000 254096 DEBUG nova.network.neutron [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Updating instance_info_cache with network_info: [{"id": "0ebbbca7-8751-4479-a892-216433a26e74", "address": "fa:16:3e:09:27:70", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ebbbca7-87", "ovs_interfaceid": "0ebbbca7-8751-4479-a892-216433a26e74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.016 254096 DEBUG oslo_concurrency.lockutils [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Releasing lock "refresh_cache-6b676874-6857-4021-9d83-c3673f57cebb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.017 254096 DEBUG nova.compute.manager [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Instance network_info: |[{"id": "0ebbbca7-8751-4479-a892-216433a26e74", "address": "fa:16:3e:09:27:70", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ebbbca7-87", "ovs_interfaceid": "0ebbbca7-8751-4479-a892-216433a26e74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.017 254096 DEBUG oslo_concurrency.lockutils [req-76ff4640-a799-4a90-9ebc-681b7cf4d8b8 req-138bb899-977f-4fb3-8ac3-f1c0a330f5c4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-6b676874-6857-4021-9d83-c3673f57cebb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.017 254096 DEBUG nova.network.neutron [req-76ff4640-a799-4a90-9ebc-681b7cf4d8b8 req-138bb899-977f-4fb3-8ac3-f1c0a330f5c4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Refreshing network info cache for port 0ebbbca7-8751-4479-a892-216433a26e74 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.021 254096 DEBUG nova.virt.libvirt.driver [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Start _get_guest_xml network_info=[{"id": "0ebbbca7-8751-4479-a892-216433a26e74", "address": "fa:16:3e:09:27:70", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ebbbca7-87", "ovs_interfaceid": "0ebbbca7-8751-4479-a892-216433a26e74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.025 254096 WARNING nova.virt.libvirt.driver [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.028 254096 DEBUG nova.virt.libvirt.host [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.029 254096 DEBUG nova.virt.libvirt.host [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.031 254096 DEBUG nova.virt.libvirt.host [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.031 254096 DEBUG nova.virt.libvirt.host [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.032 254096 DEBUG nova.virt.libvirt.driver [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.032 254096 DEBUG nova.virt.hardware [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.032 254096 DEBUG nova.virt.hardware [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.032 254096 DEBUG nova.virt.hardware [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.033 254096 DEBUG nova.virt.hardware [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.033 254096 DEBUG nova.virt.hardware [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.033 254096 DEBUG nova.virt.hardware [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.033 254096 DEBUG nova.virt.hardware [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.033 254096 DEBUG nova.virt.hardware [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.034 254096 DEBUG nova.virt.hardware [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.034 254096 DEBUG nova.virt.hardware [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.034 254096 DEBUG nova.virt.hardware [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.037 254096 DEBUG oslo_concurrency.processutils [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:45:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:45:36 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3990225299' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.221 254096 DEBUG oslo_concurrency.processutils [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.245 254096 DEBUG nova.storage.rbd_utils [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] rbd image b487a735-c096-4a8f-b8ba-fc5b6c055f56_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.252 254096 DEBUG oslo_concurrency.processutils [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:45:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:45:36 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2742841881' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.486 254096 DEBUG oslo_concurrency.processutils [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.510 254096 DEBUG nova.storage.rbd_utils [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] rbd image 6b676874-6857-4021-9d83-c3673f57cebb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.514 254096 DEBUG oslo_concurrency.processutils [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:45:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:45:36 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1694013210' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.689 254096 DEBUG oslo_concurrency.processutils [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.691 254096 DEBUG nova.virt.libvirt.vif [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:45:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1676188267',display_name='tempest-tempest.common.compute-instance-1676188267',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1676188267',id=81,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b4d15f5aabd3491da5314b126a20225a',ramdisk_id='',reservation_id='r-puouwpb9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1811453217',owner_user_name='tempest-ServerActions
TestJSON-1811453217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:45:30Z,user_data=None,user_id='aea2d8cf3bb54cdbbc72e41805fb1f90',uuid=b487a735-c096-4a8f-b8ba-fc5b6c055f56,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5b25a3f7-0f17-4813-8f18-d5d20a92f4aa", "address": "fa:16:3e:23:43:9a", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b25a3f7-0f", "ovs_interfaceid": "5b25a3f7-0f17-4813-8f18-d5d20a92f4aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.691 254096 DEBUG nova.network.os_vif_util [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converting VIF {"id": "5b25a3f7-0f17-4813-8f18-d5d20a92f4aa", "address": "fa:16:3e:23:43:9a", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b25a3f7-0f", "ovs_interfaceid": "5b25a3f7-0f17-4813-8f18-d5d20a92f4aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.692 254096 DEBUG nova.network.os_vif_util [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:43:9a,bridge_name='br-int',has_traffic_filtering=True,id=5b25a3f7-0f17-4813-8f18-d5d20a92f4aa,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b25a3f7-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.693 254096 DEBUG nova.objects.instance [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'pci_devices' on Instance uuid b487a735-c096-4a8f-b8ba-fc5b6c055f56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.711 254096 DEBUG nova.virt.libvirt.driver [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:45:36 compute-0 nova_compute[254092]:   <uuid>b487a735-c096-4a8f-b8ba-fc5b6c055f56</uuid>
Nov 25 16:45:36 compute-0 nova_compute[254092]:   <name>instance-00000051</name>
Nov 25 16:45:36 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:45:36 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:45:36 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:45:36 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:       <nova:name>tempest-tempest.common.compute-instance-1676188267</nova:name>
Nov 25 16:45:36 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:45:35</nova:creationTime>
Nov 25 16:45:36 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:45:36 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:45:36 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:45:36 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:45:36 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:45:36 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:45:36 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:45:36 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:45:36 compute-0 nova_compute[254092]:         <nova:user uuid="aea2d8cf3bb54cdbbc72e41805fb1f90">tempest-ServerActionsTestJSON-1811453217-project-member</nova:user>
Nov 25 16:45:36 compute-0 nova_compute[254092]:         <nova:project uuid="b4d15f5aabd3491da5314b126a20225a">tempest-ServerActionsTestJSON-1811453217</nova:project>
Nov 25 16:45:36 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:45:36 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:45:36 compute-0 nova_compute[254092]:         <nova:port uuid="5b25a3f7-0f17-4813-8f18-d5d20a92f4aa">
Nov 25 16:45:36 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:45:36 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:45:36 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:45:36 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:45:36 compute-0 nova_compute[254092]:     <system>
Nov 25 16:45:36 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:45:36 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:45:36 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:45:36 compute-0 nova_compute[254092]:       <entry name="serial">b487a735-c096-4a8f-b8ba-fc5b6c055f56</entry>
Nov 25 16:45:36 compute-0 nova_compute[254092]:       <entry name="uuid">b487a735-c096-4a8f-b8ba-fc5b6c055f56</entry>
Nov 25 16:45:36 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     </system>
Nov 25 16:45:36 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:45:36 compute-0 nova_compute[254092]:   <os>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:   </os>
Nov 25 16:45:36 compute-0 nova_compute[254092]:   <features>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:   </features>
Nov 25 16:45:36 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:45:36 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:45:36 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:45:36 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:45:36 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:45:36 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/b487a735-c096-4a8f-b8ba-fc5b6c055f56_disk">
Nov 25 16:45:36 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:       </source>
Nov 25 16:45:36 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:45:36 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:45:36 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:45:36 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/b487a735-c096-4a8f-b8ba-fc5b6c055f56_disk.config">
Nov 25 16:45:36 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:       </source>
Nov 25 16:45:36 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:45:36 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:45:36 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:45:36 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:23:43:9a"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:       <target dev="tap5b25a3f7-0f"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:45:36 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/b487a735-c096-4a8f-b8ba-fc5b6c055f56/console.log" append="off"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     <video>
Nov 25 16:45:36 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     </video>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:45:36 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:45:36 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:45:36 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:45:36 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:45:36 compute-0 nova_compute[254092]: </domain>
Nov 25 16:45:36 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.712 254096 DEBUG nova.compute.manager [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Preparing to wait for external event network-vif-plugged-5b25a3f7-0f17-4813-8f18-d5d20a92f4aa prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.713 254096 DEBUG oslo_concurrency.lockutils [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquiring lock "b487a735-c096-4a8f-b8ba-fc5b6c055f56-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.713 254096 DEBUG oslo_concurrency.lockutils [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "b487a735-c096-4a8f-b8ba-fc5b6c055f56-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.713 254096 DEBUG oslo_concurrency.lockutils [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "b487a735-c096-4a8f-b8ba-fc5b6c055f56-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.714 254096 DEBUG nova.virt.libvirt.vif [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:45:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1676188267',display_name='tempest-tempest.common.compute-instance-1676188267',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1676188267',id=81,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b4d15f5aabd3491da5314b126a20225a',ramdisk_id='',reservation_id='r-puouwpb9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1811453217',owner_user_name='tempest-ServerActionsTestJSON-1811453217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:45:30Z,user_data=None,user_id='aea2d8cf3bb54cdbbc72e41805fb1f90',uuid=b487a735-c096-4a8f-b8ba-fc5b6c055f56,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5b25a3f7-0f17-4813-8f18-d5d20a92f4aa", "address": "fa:16:3e:23:43:9a", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b25a3f7-0f", "ovs_interfaceid": "5b25a3f7-0f17-4813-8f18-d5d20a92f4aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.714 254096 DEBUG nova.network.os_vif_util [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converting VIF {"id": "5b25a3f7-0f17-4813-8f18-d5d20a92f4aa", "address": "fa:16:3e:23:43:9a", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b25a3f7-0f", "ovs_interfaceid": "5b25a3f7-0f17-4813-8f18-d5d20a92f4aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.714 254096 DEBUG nova.network.os_vif_util [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:43:9a,bridge_name='br-int',has_traffic_filtering=True,id=5b25a3f7-0f17-4813-8f18-d5d20a92f4aa,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b25a3f7-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.715 254096 DEBUG os_vif [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:43:9a,bridge_name='br-int',has_traffic_filtering=True,id=5b25a3f7-0f17-4813-8f18-d5d20a92f4aa,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b25a3f7-0f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.716 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.716 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.716 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.720 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.720 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5b25a3f7-0f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.720 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5b25a3f7-0f, col_values=(('external_ids', {'iface-id': '5b25a3f7-0f17-4813-8f18-d5d20a92f4aa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:23:43:9a', 'vm-uuid': 'b487a735-c096-4a8f-b8ba-fc5b6c055f56'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.722 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:36 compute-0 NetworkManager[48891]: <info>  [1764089136.7234] manager: (tap5b25a3f7-0f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/333)
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.725 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.729 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.730 254096 INFO os_vif [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:43:9a,bridge_name='br-int',has_traffic_filtering=True,id=5b25a3f7-0f17-4813-8f18-d5d20a92f4aa,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b25a3f7-0f')
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.732 254096 DEBUG nova.virt.libvirt.driver [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 16:45:36 compute-0 ceph-mon[74985]: pgmap v1806: 321 pgs: 321 active+clean; 416 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 668 KiB/s rd, 6.6 MiB/s wr, 167 op/s
Nov 25 16:45:36 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3990225299' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:45:36 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2742841881' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:45:36 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1694013210' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.811 254096 DEBUG nova.virt.libvirt.driver [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.811 254096 DEBUG nova.virt.libvirt.driver [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.811 254096 DEBUG nova.virt.libvirt.driver [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] No VIF found with MAC fa:16:3e:23:43:9a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.812 254096 INFO nova.virt.libvirt.driver [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Using config drive
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.834 254096 DEBUG nova.storage.rbd_utils [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] rbd image b487a735-c096-4a8f-b8ba-fc5b6c055f56_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:45:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:45:36 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3793862733' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.992 254096 DEBUG oslo_concurrency.processutils [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.993 254096 DEBUG nova.virt.libvirt.vif [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:45:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-666063642',display_name='tempest-tempest.common.compute-instance-666063642',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-666063642',id=82,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d4964e211a6d4699ab499f7cadee8a8d',ramdisk_id='',reservation_id='r-2qewqols',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-878981139',owner_user_name='tempest-ServerActionsTestOtherA-878981139-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:45:32Z,user_data=None,user_id='7bd4800c25cd462b9365649e599d0a0e',uuid=6b676874-6857-4021-9d83-c3673f57cebb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0ebbbca7-8751-4479-a892-216433a26e74", "address": "fa:16:3e:09:27:70", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ebbbca7-87", "ovs_interfaceid": "0ebbbca7-8751-4479-a892-216433a26e74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.994 254096 DEBUG nova.network.os_vif_util [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Converting VIF {"id": "0ebbbca7-8751-4479-a892-216433a26e74", "address": "fa:16:3e:09:27:70", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ebbbca7-87", "ovs_interfaceid": "0ebbbca7-8751-4479-a892-216433a26e74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.994 254096 DEBUG nova.network.os_vif_util [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:27:70,bridge_name='br-int',has_traffic_filtering=True,id=0ebbbca7-8751-4479-a892-216433a26e74,network=Network(290484fa-908f-44de-87e4-4f5bc85c5679),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ebbbca7-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:45:36 compute-0 nova_compute[254092]: 2025-11-25 16:45:36.995 254096 DEBUG nova.objects.instance [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lazy-loading 'pci_devices' on Instance uuid 6b676874-6857-4021-9d83-c3673f57cebb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.007 254096 DEBUG nova.virt.libvirt.driver [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:45:37 compute-0 nova_compute[254092]:   <uuid>6b676874-6857-4021-9d83-c3673f57cebb</uuid>
Nov 25 16:45:37 compute-0 nova_compute[254092]:   <name>instance-00000052</name>
Nov 25 16:45:37 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:45:37 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:45:37 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:45:37 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:       <nova:name>tempest-tempest.common.compute-instance-666063642</nova:name>
Nov 25 16:45:37 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:45:36</nova:creationTime>
Nov 25 16:45:37 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:45:37 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:45:37 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:45:37 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:45:37 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:45:37 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:45:37 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:45:37 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:45:37 compute-0 nova_compute[254092]:         <nova:user uuid="7bd4800c25cd462b9365649e599d0a0e">tempest-ServerActionsTestOtherA-878981139-project-member</nova:user>
Nov 25 16:45:37 compute-0 nova_compute[254092]:         <nova:project uuid="d4964e211a6d4699ab499f7cadee8a8d">tempest-ServerActionsTestOtherA-878981139</nova:project>
Nov 25 16:45:37 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:45:37 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:45:37 compute-0 nova_compute[254092]:         <nova:port uuid="0ebbbca7-8751-4479-a892-216433a26e74">
Nov 25 16:45:37 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:45:37 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:45:37 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:45:37 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:45:37 compute-0 nova_compute[254092]:     <system>
Nov 25 16:45:37 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:45:37 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:45:37 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:45:37 compute-0 nova_compute[254092]:       <entry name="serial">6b676874-6857-4021-9d83-c3673f57cebb</entry>
Nov 25 16:45:37 compute-0 nova_compute[254092]:       <entry name="uuid">6b676874-6857-4021-9d83-c3673f57cebb</entry>
Nov 25 16:45:37 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     </system>
Nov 25 16:45:37 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:45:37 compute-0 nova_compute[254092]:   <os>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:   </os>
Nov 25 16:45:37 compute-0 nova_compute[254092]:   <features>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:   </features>
Nov 25 16:45:37 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:45:37 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:45:37 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:45:37 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:45:37 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:45:37 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/6b676874-6857-4021-9d83-c3673f57cebb_disk">
Nov 25 16:45:37 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:       </source>
Nov 25 16:45:37 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:45:37 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:45:37 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:45:37 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/6b676874-6857-4021-9d83-c3673f57cebb_disk.config">
Nov 25 16:45:37 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:       </source>
Nov 25 16:45:37 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:45:37 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:45:37 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:45:37 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:09:27:70"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:       <target dev="tap0ebbbca7-87"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:45:37 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/6b676874-6857-4021-9d83-c3673f57cebb/console.log" append="off"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     <video>
Nov 25 16:45:37 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     </video>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:45:37 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:45:37 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:45:37 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:45:37 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:45:37 compute-0 nova_compute[254092]: </domain>
Nov 25 16:45:37 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.008 254096 DEBUG nova.compute.manager [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Preparing to wait for external event network-vif-plugged-0ebbbca7-8751-4479-a892-216433a26e74 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.008 254096 DEBUG oslo_concurrency.lockutils [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Acquiring lock "6b676874-6857-4021-9d83-c3673f57cebb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.009 254096 DEBUG oslo_concurrency.lockutils [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "6b676874-6857-4021-9d83-c3673f57cebb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.009 254096 DEBUG oslo_concurrency.lockutils [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "6b676874-6857-4021-9d83-c3673f57cebb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.009 254096 DEBUG nova.virt.libvirt.vif [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:45:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-666063642',display_name='tempest-tempest.common.compute-instance-666063642',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-666063642',id=82,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d4964e211a6d4699ab499f7cadee8a8d',ramdisk_id='',reservation_id='r-2qewqols',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-878981139',owner_user_name='tempest-ServerActionsTestOtherA-878981139-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:45:32Z,user_data=None,user_id='7bd4800c25cd462b9365649e599d0a0e',uuid=6b676874-6857-4021-9d83-c3673f57cebb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0ebbbca7-8751-4479-a892-216433a26e74", "address": "fa:16:3e:09:27:70", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ebbbca7-87", "ovs_interfaceid": "0ebbbca7-8751-4479-a892-216433a26e74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.010 254096 DEBUG nova.network.os_vif_util [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Converting VIF {"id": "0ebbbca7-8751-4479-a892-216433a26e74", "address": "fa:16:3e:09:27:70", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ebbbca7-87", "ovs_interfaceid": "0ebbbca7-8751-4479-a892-216433a26e74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.010 254096 DEBUG nova.network.os_vif_util [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:27:70,bridge_name='br-int',has_traffic_filtering=True,id=0ebbbca7-8751-4479-a892-216433a26e74,network=Network(290484fa-908f-44de-87e4-4f5bc85c5679),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ebbbca7-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.011 254096 DEBUG os_vif [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:27:70,bridge_name='br-int',has_traffic_filtering=True,id=0ebbbca7-8751-4479-a892-216433a26e74,network=Network(290484fa-908f-44de-87e4-4f5bc85c5679),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ebbbca7-87') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.011 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.011 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.012 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.014 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.014 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0ebbbca7-87, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.015 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0ebbbca7-87, col_values=(('external_ids', {'iface-id': '0ebbbca7-8751-4479-a892-216433a26e74', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:09:27:70', 'vm-uuid': '6b676874-6857-4021-9d83-c3673f57cebb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.016 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:37 compute-0 NetworkManager[48891]: <info>  [1764089137.0170] manager: (tap0ebbbca7-87): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/334)
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.020 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.026 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.027 254096 INFO os_vif [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:27:70,bridge_name='br-int',has_traffic_filtering=True,id=0ebbbca7-8751-4479-a892-216433a26e74,network=Network(290484fa-908f-44de-87e4-4f5bc85c5679),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ebbbca7-87')
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.066 254096 DEBUG nova.virt.libvirt.driver [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.066 254096 DEBUG nova.virt.libvirt.driver [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.066 254096 DEBUG nova.virt.libvirt.driver [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] No VIF found with MAC fa:16:3e:09:27:70, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.067 254096 INFO nova.virt.libvirt.driver [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Using config drive
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.087 254096 DEBUG nova.storage.rbd_utils [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] rbd image 6b676874-6857-4021-9d83-c3673f57cebb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.131 254096 INFO nova.virt.libvirt.driver [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Creating config drive at /var/lib/nova/instances/b487a735-c096-4a8f-b8ba-fc5b6c055f56/disk.config
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.136 254096 DEBUG oslo_concurrency.processutils [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b487a735-c096-4a8f-b8ba-fc5b6c055f56/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcr7jbsym execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.166 254096 DEBUG nova.network.neutron [req-c15c511f-b428-413a-8fdf-050fb96c1898 req-09f4bb16-48b2-4ec7-a6d5-29d228de8fde a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Updated VIF entry in instance network info cache for port 5b25a3f7-0f17-4813-8f18-d5d20a92f4aa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.167 254096 DEBUG nova.network.neutron [req-c15c511f-b428-413a-8fdf-050fb96c1898 req-09f4bb16-48b2-4ec7-a6d5-29d228de8fde a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Updating instance_info_cache with network_info: [{"id": "5b25a3f7-0f17-4813-8f18-d5d20a92f4aa", "address": "fa:16:3e:23:43:9a", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b25a3f7-0f", "ovs_interfaceid": "5b25a3f7-0f17-4813-8f18-d5d20a92f4aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.186 254096 DEBUG oslo_concurrency.lockutils [req-c15c511f-b428-413a-8fdf-050fb96c1898 req-09f4bb16-48b2-4ec7-a6d5-29d228de8fde a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-b487a735-c096-4a8f-b8ba-fc5b6c055f56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.267 254096 DEBUG nova.network.neutron [req-76ff4640-a799-4a90-9ebc-681b7cf4d8b8 req-138bb899-977f-4fb3-8ac3-f1c0a330f5c4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Updated VIF entry in instance network info cache for port 0ebbbca7-8751-4479-a892-216433a26e74. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.268 254096 DEBUG nova.network.neutron [req-76ff4640-a799-4a90-9ebc-681b7cf4d8b8 req-138bb899-977f-4fb3-8ac3-f1c0a330f5c4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Updating instance_info_cache with network_info: [{"id": "0ebbbca7-8751-4479-a892-216433a26e74", "address": "fa:16:3e:09:27:70", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ebbbca7-87", "ovs_interfaceid": "0ebbbca7-8751-4479-a892-216433a26e74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.274 254096 DEBUG oslo_concurrency.processutils [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b487a735-c096-4a8f-b8ba-fc5b6c055f56/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcr7jbsym" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.296 254096 DEBUG nova.storage.rbd_utils [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] rbd image b487a735-c096-4a8f-b8ba-fc5b6c055f56_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.300 254096 DEBUG oslo_concurrency.processutils [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b487a735-c096-4a8f-b8ba-fc5b6c055f56/disk.config b487a735-c096-4a8f-b8ba-fc5b6c055f56_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.329 254096 DEBUG oslo_concurrency.lockutils [req-76ff4640-a799-4a90-9ebc-681b7cf4d8b8 req-138bb899-977f-4fb3-8ac3-f1c0a330f5c4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-6b676874-6857-4021-9d83-c3673f57cebb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.400 254096 INFO nova.virt.libvirt.driver [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Creating config drive at /var/lib/nova/instances/6b676874-6857-4021-9d83-c3673f57cebb/disk.config
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.405 254096 DEBUG oslo_concurrency.processutils [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6b676874-6857-4021-9d83-c3673f57cebb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpctxxsfve execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.453 254096 DEBUG oslo_concurrency.processutils [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b487a735-c096-4a8f-b8ba-fc5b6c055f56/disk.config b487a735-c096-4a8f-b8ba-fc5b6c055f56_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.454 254096 INFO nova.virt.libvirt.driver [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Deleting local config drive /var/lib/nova/instances/b487a735-c096-4a8f-b8ba-fc5b6c055f56/disk.config because it was imported into RBD.
Nov 25 16:45:37 compute-0 NetworkManager[48891]: <info>  [1764089137.5054] manager: (tap5b25a3f7-0f): new Tun device (/org/freedesktop/NetworkManager/Devices/335)
Nov 25 16:45:37 compute-0 kernel: tap5b25a3f7-0f: entered promiscuous mode
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.510 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:37 compute-0 ovn_controller[153477]: 2025-11-25T16:45:37Z|00764|binding|INFO|Claiming lport 5b25a3f7-0f17-4813-8f18-d5d20a92f4aa for this chassis.
Nov 25 16:45:37 compute-0 ovn_controller[153477]: 2025-11-25T16:45:37Z|00765|binding|INFO|5b25a3f7-0f17-4813-8f18-d5d20a92f4aa: Claiming fa:16:3e:23:43:9a 10.100.0.7
Nov 25 16:45:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:37.517 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:43:9a 10.100.0.7'], port_security=['fa:16:3e:23:43:9a 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'b487a735-c096-4a8f-b8ba-fc5b6c055f56', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4d15f5aabd3491da5314b126a20225a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd3b6fd59-89c5-40df-ad88-a6d7ff4d1d92', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa1f8c26-9163-4397-beb3-8395c780a12b, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=5b25a3f7-0f17-4813-8f18-d5d20a92f4aa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:45:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:37.518 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 5b25a3f7-0f17-4813-8f18-d5d20a92f4aa in datapath 50ea1716-9d0b-409f-b648-8d2ad5a30525 bound to our chassis
Nov 25 16:45:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:37.520 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 50ea1716-9d0b-409f-b648-8d2ad5a30525
Nov 25 16:45:37 compute-0 ovn_controller[153477]: 2025-11-25T16:45:37Z|00766|binding|INFO|Setting lport 5b25a3f7-0f17-4813-8f18-d5d20a92f4aa ovn-installed in OVS
Nov 25 16:45:37 compute-0 ovn_controller[153477]: 2025-11-25T16:45:37Z|00767|binding|INFO|Setting lport 5b25a3f7-0f17-4813-8f18-d5d20a92f4aa up in Southbound
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.532 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.538 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:37 compute-0 systemd-machined[216343]: New machine qemu-97-instance-00000051.
Nov 25 16:45:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:37.540 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7ab33f29-3d4e-487c-8b19-d2c95a4e16c4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.543 254096 DEBUG oslo_concurrency.processutils [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6b676874-6857-4021-9d83-c3673f57cebb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpctxxsfve" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:45:37 compute-0 systemd[1]: Started Virtual Machine qemu-97-instance-00000051.
Nov 25 16:45:37 compute-0 systemd-udevd[338717]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:45:37 compute-0 NetworkManager[48891]: <info>  [1764089137.5661] device (tap5b25a3f7-0f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:45:37 compute-0 NetworkManager[48891]: <info>  [1764089137.5667] device (tap5b25a3f7-0f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.580 254096 DEBUG nova.storage.rbd_utils [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] rbd image 6b676874-6857-4021-9d83-c3673f57cebb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:45:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:37.581 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[865eb0c9-f0dd-48ac-96d5-f681fc240718]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:45:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:37.584 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e43c458a-f05d-41b8-9f6b-ed2925b6c500]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.585 254096 DEBUG oslo_concurrency.processutils [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6b676874-6857-4021-9d83-c3673f57cebb/disk.config 6b676874-6857-4021-9d83-c3673f57cebb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:45:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:37.613 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e4119715-0ff6-4c80-b012-0e08e910e4ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:45:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:37.629 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3c526ac0-d1c0-42c6-8957-545cb5c96368]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap50ea1716-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:99:3a:54'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 225], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554594, 'reachable_time': 15382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338748, 'error': None, 'target': 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:45:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:37.648 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[750f54bd-4686-49d9-a8e4-f5a776bbf0e1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap50ea1716-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 554606, 'tstamp': 554606}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338749, 'error': None, 'target': 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap50ea1716-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 554609, 'tstamp': 554609}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338749, 'error': None, 'target': 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:45:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:37.650 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50ea1716-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.652 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.657 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:37.657 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap50ea1716-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:45:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:37.657 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:45:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:37.658 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap50ea1716-90, col_values=(('external_ids', {'iface-id': '1ec3401d-acb9-4d50-bfc2-f0a1127f4745'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:45:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:37.658 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:45:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1807: 321 pgs: 321 active+clean; 453 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 675 KiB/s rd, 7.8 MiB/s wr, 180 op/s
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.750 254096 DEBUG oslo_concurrency.processutils [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6b676874-6857-4021-9d83-c3673f57cebb/disk.config 6b676874-6857-4021-9d83-c3673f57cebb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.751 254096 INFO nova.virt.libvirt.driver [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Deleting local config drive /var/lib/nova/instances/6b676874-6857-4021-9d83-c3673f57cebb/disk.config because it was imported into RBD.
Nov 25 16:45:37 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3793862733' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:45:37 compute-0 kernel: tap0ebbbca7-87: entered promiscuous mode
Nov 25 16:45:37 compute-0 NetworkManager[48891]: <info>  [1764089137.8082] manager: (tap0ebbbca7-87): new Tun device (/org/freedesktop/NetworkManager/Devices/336)
Nov 25 16:45:37 compute-0 NetworkManager[48891]: <info>  [1764089137.8200] device (tap0ebbbca7-87): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:45:37 compute-0 NetworkManager[48891]: <info>  [1764089137.8211] device (tap0ebbbca7-87): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:45:37 compute-0 ovn_controller[153477]: 2025-11-25T16:45:37Z|00768|binding|INFO|Claiming lport 0ebbbca7-8751-4479-a892-216433a26e74 for this chassis.
Nov 25 16:45:37 compute-0 ovn_controller[153477]: 2025-11-25T16:45:37Z|00769|binding|INFO|0ebbbca7-8751-4479-a892-216433a26e74: Claiming fa:16:3e:09:27:70 10.100.0.9
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.855 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:37.863 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:27:70 10.100.0.9'], port_security=['fa:16:3e:09:27:70 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '6b676874-6857-4021-9d83-c3673f57cebb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-290484fa-908f-44de-87e4-4f5bc85c5679', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4964e211a6d4699ab499f7cadee8a8d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '168dc57b-72e8-4bf9-9fa6-0f910875b8fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=041a286f-fdb2-474d-8a57-f5c3168624f1, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=0ebbbca7-8751-4479-a892-216433a26e74) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:45:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:37.864 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 0ebbbca7-8751-4479-a892-216433a26e74 in datapath 290484fa-908f-44de-87e4-4f5bc85c5679 bound to our chassis
Nov 25 16:45:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:37.866 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 290484fa-908f-44de-87e4-4f5bc85c5679
Nov 25 16:45:37 compute-0 ovn_controller[153477]: 2025-11-25T16:45:37Z|00770|binding|INFO|Setting lport 0ebbbca7-8751-4479-a892-216433a26e74 ovn-installed in OVS
Nov 25 16:45:37 compute-0 ovn_controller[153477]: 2025-11-25T16:45:37Z|00771|binding|INFO|Setting lport 0ebbbca7-8751-4479-a892-216433a26e74 up in Southbound
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.873 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:37 compute-0 systemd-machined[216343]: New machine qemu-98-instance-00000052.
Nov 25 16:45:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:37.886 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e02635dd-853a-4c79-aadd-0f4b98661fb5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:45:37 compute-0 systemd[1]: Started Virtual Machine qemu-98-instance-00000052.
Nov 25 16:45:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:37.914 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[2498845b-a817-4a9b-8f41-d6aec1ab8bd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.917 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089137.9170752, b487a735-c096-4a8f-b8ba-fc5b6c055f56 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.917 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] VM Started (Lifecycle Event)
Nov 25 16:45:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:37.918 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a419275b-4b72-4b1f-851d-71be8d84c08b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.932 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.937 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089137.9171972, b487a735-c096-4a8f-b8ba-fc5b6c055f56 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.938 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] VM Paused (Lifecycle Event)
Nov 25 16:45:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:37.942 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[023799aa-6b38-4bc4-b64a-80ea8b96c6df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.952 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.955 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:45:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:37.960 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0afdd6d3-33b6-49e2-84b3-ebfa49121a17]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap290484fa-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:a3:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 221], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554060, 'reachable_time': 32335, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338836, 'error': None, 'target': 'ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.974 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:45:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:37.977 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b847e668-842a-4a45-afc0-a34b2b3253de]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap290484fa-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 554073, 'tstamp': 554073}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338838, 'error': None, 'target': 'ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap290484fa-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 554076, 'tstamp': 554076}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338838, 'error': None, 'target': 'ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:45:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:37.979 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap290484fa-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:45:37 compute-0 nova_compute[254092]: 2025-11-25 16:45:37.981 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:37.982 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap290484fa-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:45:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:37.982 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:45:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:37.982 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap290484fa-90, col_values=(('external_ids', {'iface-id': 'db192ec3-55c1-4137-aaad-99a175bfa879'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:45:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:37.983 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:45:38 compute-0 nova_compute[254092]: 2025-11-25 16:45:38.188 254096 DEBUG nova.compute.manager [req-61d87780-2e0e-41e8-83dd-6aae9c205659 req-373ae2b0-1f52-4651-b9b9-13234b94ac1a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Received event network-vif-plugged-5b25a3f7-0f17-4813-8f18-d5d20a92f4aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:45:38 compute-0 nova_compute[254092]: 2025-11-25 16:45:38.188 254096 DEBUG oslo_concurrency.lockutils [req-61d87780-2e0e-41e8-83dd-6aae9c205659 req-373ae2b0-1f52-4651-b9b9-13234b94ac1a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b487a735-c096-4a8f-b8ba-fc5b6c055f56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:45:38 compute-0 nova_compute[254092]: 2025-11-25 16:45:38.189 254096 DEBUG oslo_concurrency.lockutils [req-61d87780-2e0e-41e8-83dd-6aae9c205659 req-373ae2b0-1f52-4651-b9b9-13234b94ac1a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b487a735-c096-4a8f-b8ba-fc5b6c055f56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:45:38 compute-0 nova_compute[254092]: 2025-11-25 16:45:38.189 254096 DEBUG oslo_concurrency.lockutils [req-61d87780-2e0e-41e8-83dd-6aae9c205659 req-373ae2b0-1f52-4651-b9b9-13234b94ac1a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b487a735-c096-4a8f-b8ba-fc5b6c055f56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:45:38 compute-0 nova_compute[254092]: 2025-11-25 16:45:38.189 254096 DEBUG nova.compute.manager [req-61d87780-2e0e-41e8-83dd-6aae9c205659 req-373ae2b0-1f52-4651-b9b9-13234b94ac1a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Processing event network-vif-plugged-5b25a3f7-0f17-4813-8f18-d5d20a92f4aa _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:45:38 compute-0 nova_compute[254092]: 2025-11-25 16:45:38.190 254096 DEBUG nova.compute.manager [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:45:38 compute-0 nova_compute[254092]: 2025-11-25 16:45:38.194 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089138.1934886, b487a735-c096-4a8f-b8ba-fc5b6c055f56 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:45:38 compute-0 nova_compute[254092]: 2025-11-25 16:45:38.194 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] VM Resumed (Lifecycle Event)
Nov 25 16:45:38 compute-0 nova_compute[254092]: 2025-11-25 16:45:38.196 254096 DEBUG nova.virt.libvirt.driver [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:45:38 compute-0 nova_compute[254092]: 2025-11-25 16:45:38.200 254096 INFO nova.virt.libvirt.driver [-] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Instance spawned successfully.
Nov 25 16:45:38 compute-0 nova_compute[254092]: 2025-11-25 16:45:38.200 254096 DEBUG nova.virt.libvirt.driver [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:45:38 compute-0 nova_compute[254092]: 2025-11-25 16:45:38.225 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:45:38 compute-0 nova_compute[254092]: 2025-11-25 16:45:38.231 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:45:38 compute-0 nova_compute[254092]: 2025-11-25 16:45:38.234 254096 DEBUG nova.virt.libvirt.driver [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:45:38 compute-0 nova_compute[254092]: 2025-11-25 16:45:38.235 254096 DEBUG nova.virt.libvirt.driver [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:45:38 compute-0 nova_compute[254092]: 2025-11-25 16:45:38.236 254096 DEBUG nova.virt.libvirt.driver [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:45:38 compute-0 nova_compute[254092]: 2025-11-25 16:45:38.236 254096 DEBUG nova.virt.libvirt.driver [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:45:38 compute-0 nova_compute[254092]: 2025-11-25 16:45:38.237 254096 DEBUG nova.virt.libvirt.driver [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:45:38 compute-0 nova_compute[254092]: 2025-11-25 16:45:38.237 254096 DEBUG nova.virt.libvirt.driver [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:45:38 compute-0 nova_compute[254092]: 2025-11-25 16:45:38.262 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:45:38 compute-0 nova_compute[254092]: 2025-11-25 16:45:38.263 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089138.2223375, 6b676874-6857-4021-9d83-c3673f57cebb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:45:38 compute-0 nova_compute[254092]: 2025-11-25 16:45:38.263 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] VM Started (Lifecycle Event)
Nov 25 16:45:38 compute-0 nova_compute[254092]: 2025-11-25 16:45:38.301 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:45:38 compute-0 nova_compute[254092]: 2025-11-25 16:45:38.305 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089138.2224526, 6b676874-6857-4021-9d83-c3673f57cebb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:45:38 compute-0 nova_compute[254092]: 2025-11-25 16:45:38.306 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] VM Paused (Lifecycle Event)
Nov 25 16:45:38 compute-0 nova_compute[254092]: 2025-11-25 16:45:38.323 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:45:38 compute-0 nova_compute[254092]: 2025-11-25 16:45:38.326 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:45:38 compute-0 nova_compute[254092]: 2025-11-25 16:45:38.335 254096 INFO nova.compute.manager [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Took 7.36 seconds to spawn the instance on the hypervisor.
Nov 25 16:45:38 compute-0 nova_compute[254092]: 2025-11-25 16:45:38.335 254096 DEBUG nova.compute.manager [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:45:38 compute-0 nova_compute[254092]: 2025-11-25 16:45:38.342 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:45:38 compute-0 nova_compute[254092]: 2025-11-25 16:45:38.378 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:38 compute-0 nova_compute[254092]: 2025-11-25 16:45:38.382 254096 INFO nova.compute.manager [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Took 8.32 seconds to build instance.
Nov 25 16:45:38 compute-0 nova_compute[254092]: 2025-11-25 16:45:38.398 254096 DEBUG oslo_concurrency.lockutils [None req-833b795d-3239-4c82-a08a-31eb3b250aab aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "b487a735-c096-4a8f-b8ba-fc5b6c055f56" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.396s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:45:38 compute-0 ceph-mon[74985]: pgmap v1807: 321 pgs: 321 active+clean; 453 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 675 KiB/s rd, 7.8 MiB/s wr, 180 op/s
Nov 25 16:45:39 compute-0 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d0000004f.scope: Deactivated successfully.
Nov 25 16:45:39 compute-0 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d0000004f.scope: Consumed 13.367s CPU time.
Nov 25 16:45:39 compute-0 systemd-machined[216343]: Machine qemu-95-instance-0000004f terminated.
Nov 25 16:45:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1808: 321 pgs: 321 active+clean; 453 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 671 KiB/s rd, 7.2 MiB/s wr, 176 op/s
Nov 25 16:45:39 compute-0 nova_compute[254092]: 2025-11-25 16:45:39.750 254096 INFO nova.virt.libvirt.driver [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Instance shutdown successfully after 24 seconds.
Nov 25 16:45:39 compute-0 nova_compute[254092]: 2025-11-25 16:45:39.756 254096 INFO nova.virt.libvirt.driver [-] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Instance destroyed successfully.
Nov 25 16:45:39 compute-0 nova_compute[254092]: 2025-11-25 16:45:39.761 254096 INFO nova.virt.libvirt.driver [-] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Instance destroyed successfully.
Nov 25 16:45:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:45:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:45:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:45:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:45:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:45:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:45:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:45:40
Nov 25 16:45:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:45:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:45:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['backups', 'vms', '.rgw.root', 'images', 'default.rgw.control', '.mgr', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log']
Nov 25 16:45:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:45:40 compute-0 nova_compute[254092]: 2025-11-25 16:45:40.146 254096 INFO nova.virt.libvirt.driver [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Deleting instance files /var/lib/nova/instances/504931a2-d324-4142-8698-9090b5cf7a23_del
Nov 25 16:45:40 compute-0 nova_compute[254092]: 2025-11-25 16:45:40.147 254096 INFO nova.virt.libvirt.driver [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Deletion of /var/lib/nova/instances/504931a2-d324-4142-8698-9090b5cf7a23_del complete
Nov 25 16:45:40 compute-0 nova_compute[254092]: 2025-11-25 16:45:40.275 254096 DEBUG nova.virt.libvirt.driver [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:45:40 compute-0 nova_compute[254092]: 2025-11-25 16:45:40.276 254096 INFO nova.virt.libvirt.driver [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Creating image(s)
Nov 25 16:45:40 compute-0 nova_compute[254092]: 2025-11-25 16:45:40.294 254096 DEBUG nova.storage.rbd_utils [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] rbd image 504931a2-d324-4142-8698-9090b5cf7a23_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:45:40 compute-0 nova_compute[254092]: 2025-11-25 16:45:40.313 254096 DEBUG nova.storage.rbd_utils [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] rbd image 504931a2-d324-4142-8698-9090b5cf7a23_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:45:40 compute-0 nova_compute[254092]: 2025-11-25 16:45:40.332 254096 DEBUG nova.storage.rbd_utils [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] rbd image 504931a2-d324-4142-8698-9090b5cf7a23_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:45:40 compute-0 nova_compute[254092]: 2025-11-25 16:45:40.335 254096 DEBUG oslo_concurrency.processutils [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:45:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:45:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:45:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:45:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:45:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:45:40 compute-0 nova_compute[254092]: 2025-11-25 16:45:40.408 254096 DEBUG oslo_concurrency.processutils [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:45:40 compute-0 nova_compute[254092]: 2025-11-25 16:45:40.410 254096 DEBUG oslo_concurrency.lockutils [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Acquiring lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:45:40 compute-0 nova_compute[254092]: 2025-11-25 16:45:40.411 254096 DEBUG oslo_concurrency.lockutils [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:45:40 compute-0 nova_compute[254092]: 2025-11-25 16:45:40.411 254096 DEBUG oslo_concurrency.lockutils [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:45:40 compute-0 nova_compute[254092]: 2025-11-25 16:45:40.437 254096 DEBUG nova.storage.rbd_utils [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] rbd image 504931a2-d324-4142-8698-9090b5cf7a23_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:45:40 compute-0 nova_compute[254092]: 2025-11-25 16:45:40.442 254096 DEBUG oslo_concurrency.processutils [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 504931a2-d324-4142-8698-9090b5cf7a23_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:45:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:45:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:45:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:45:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:45:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:45:40 compute-0 nova_compute[254092]: 2025-11-25 16:45:40.725 254096 DEBUG oslo_concurrency.processutils [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 504931a2-d324-4142-8698-9090b5cf7a23_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.283s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:45:40 compute-0 nova_compute[254092]: 2025-11-25 16:45:40.786 254096 DEBUG nova.storage.rbd_utils [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] resizing rbd image 504931a2-d324-4142-8698-9090b5cf7a23_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:45:40 compute-0 ceph-mon[74985]: pgmap v1808: 321 pgs: 321 active+clean; 453 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 671 KiB/s rd, 7.2 MiB/s wr, 176 op/s
Nov 25 16:45:40 compute-0 nova_compute[254092]: 2025-11-25 16:45:40.878 254096 DEBUG nova.virt.libvirt.driver [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:45:40 compute-0 nova_compute[254092]: 2025-11-25 16:45:40.879 254096 DEBUG nova.virt.libvirt.driver [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Ensure instance console log exists: /var/lib/nova/instances/504931a2-d324-4142-8698-9090b5cf7a23/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:45:40 compute-0 nova_compute[254092]: 2025-11-25 16:45:40.880 254096 DEBUG oslo_concurrency.lockutils [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:45:40 compute-0 nova_compute[254092]: 2025-11-25 16:45:40.880 254096 DEBUG oslo_concurrency.lockutils [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:45:40 compute-0 nova_compute[254092]: 2025-11-25 16:45:40.880 254096 DEBUG oslo_concurrency.lockutils [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:45:40 compute-0 nova_compute[254092]: 2025-11-25 16:45:40.882 254096 DEBUG nova.virt.libvirt.driver [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:36Z,direct_url=<?>,disk_format='qcow2',id=a4aa3708-bb73-4b5a-b3f3-42153358021e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:45:40 compute-0 nova_compute[254092]: 2025-11-25 16:45:40.886 254096 WARNING nova.virt.libvirt.driver [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Nov 25 16:45:40 compute-0 nova_compute[254092]: 2025-11-25 16:45:40.890 254096 DEBUG nova.virt.libvirt.host [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:45:40 compute-0 nova_compute[254092]: 2025-11-25 16:45:40.891 254096 DEBUG nova.virt.libvirt.host [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:45:40 compute-0 nova_compute[254092]: 2025-11-25 16:45:40.893 254096 DEBUG nova.virt.libvirt.host [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:45:40 compute-0 nova_compute[254092]: 2025-11-25 16:45:40.894 254096 DEBUG nova.virt.libvirt.host [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:45:40 compute-0 nova_compute[254092]: 2025-11-25 16:45:40.894 254096 DEBUG nova.virt.libvirt.driver [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:45:40 compute-0 nova_compute[254092]: 2025-11-25 16:45:40.894 254096 DEBUG nova.virt.hardware [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:36Z,direct_url=<?>,disk_format='qcow2',id=a4aa3708-bb73-4b5a-b3f3-42153358021e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:45:40 compute-0 nova_compute[254092]: 2025-11-25 16:45:40.895 254096 DEBUG nova.virt.hardware [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:45:40 compute-0 nova_compute[254092]: 2025-11-25 16:45:40.895 254096 DEBUG nova.virt.hardware [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:45:40 compute-0 nova_compute[254092]: 2025-11-25 16:45:40.895 254096 DEBUG nova.virt.hardware [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:45:40 compute-0 nova_compute[254092]: 2025-11-25 16:45:40.896 254096 DEBUG nova.virt.hardware [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:45:40 compute-0 nova_compute[254092]: 2025-11-25 16:45:40.896 254096 DEBUG nova.virt.hardware [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:45:40 compute-0 nova_compute[254092]: 2025-11-25 16:45:40.896 254096 DEBUG nova.virt.hardware [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:45:40 compute-0 nova_compute[254092]: 2025-11-25 16:45:40.896 254096 DEBUG nova.virt.hardware [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:45:40 compute-0 nova_compute[254092]: 2025-11-25 16:45:40.897 254096 DEBUG nova.virt.hardware [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:45:40 compute-0 nova_compute[254092]: 2025-11-25 16:45:40.897 254096 DEBUG nova.virt.hardware [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:45:40 compute-0 nova_compute[254092]: 2025-11-25 16:45:40.897 254096 DEBUG nova.virt.hardware [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:45:40 compute-0 nova_compute[254092]: 2025-11-25 16:45:40.898 254096 DEBUG nova.objects.instance [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 504931a2-d324-4142-8698-9090b5cf7a23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:45:40 compute-0 nova_compute[254092]: 2025-11-25 16:45:40.910 254096 DEBUG oslo_concurrency.processutils [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:45:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:45:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:45:41 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1316797106' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:45:41 compute-0 nova_compute[254092]: 2025-11-25 16:45:41.351 254096 DEBUG oslo_concurrency.processutils [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:45:41 compute-0 nova_compute[254092]: 2025-11-25 16:45:41.373 254096 DEBUG nova.storage.rbd_utils [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] rbd image 504931a2-d324-4142-8698-9090b5cf7a23_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:45:41 compute-0 nova_compute[254092]: 2025-11-25 16:45:41.377 254096 DEBUG oslo_concurrency.processutils [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:45:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1809: 321 pgs: 321 active+clean; 387 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 7.9 MiB/s wr, 311 op/s
Nov 25 16:45:41 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Nov 25 16:45:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:45:41.803323) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 16:45:41 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Nov 25 16:45:41 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089141803352, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 1354, "num_deletes": 251, "total_data_size": 2002879, "memory_usage": 2035968, "flush_reason": "Manual Compaction"}
Nov 25 16:45:41 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Nov 25 16:45:41 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1316797106' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:45:41 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089141819102, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 1960808, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36384, "largest_seqno": 37737, "table_properties": {"data_size": 1954518, "index_size": 3493, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13804, "raw_average_key_size": 20, "raw_value_size": 1941723, "raw_average_value_size": 2822, "num_data_blocks": 156, "num_entries": 688, "num_filter_entries": 688, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764089015, "oldest_key_time": 1764089015, "file_creation_time": 1764089141, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:45:41 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 15884 microseconds, and 5284 cpu microseconds.
Nov 25 16:45:41 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:45:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:45:41.819204) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 1960808 bytes OK
Nov 25 16:45:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:45:41.819247) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Nov 25 16:45:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:45:41.820876) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Nov 25 16:45:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:45:41.820902) EVENT_LOG_v1 {"time_micros": 1764089141820896, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 16:45:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:45:41.820922) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 16:45:41 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 1996774, prev total WAL file size 1996774, number of live WAL files 2.
Nov 25 16:45:41 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:45:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:45:41.821943) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Nov 25 16:45:41 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 16:45:41 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(1914KB)], [80(7868KB)]
Nov 25 16:45:41 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089141822013, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 10018591, "oldest_snapshot_seqno": -1}
Nov 25 16:45:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:45:41 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/170525336' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:45:41 compute-0 nova_compute[254092]: 2025-11-25 16:45:41.853 254096 DEBUG oslo_concurrency.processutils [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:45:41 compute-0 nova_compute[254092]: 2025-11-25 16:45:41.857 254096 DEBUG nova.virt.libvirt.driver [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:45:41 compute-0 nova_compute[254092]:   <uuid>504931a2-d324-4142-8698-9090b5cf7a23</uuid>
Nov 25 16:45:41 compute-0 nova_compute[254092]:   <name>instance-0000004f</name>
Nov 25 16:45:41 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:45:41 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:45:41 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:45:41 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:       <nova:name>tempest-ServerShowV247Test-server-1449704858</nova:name>
Nov 25 16:45:41 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:45:40</nova:creationTime>
Nov 25 16:45:41 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:45:41 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:45:41 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:45:41 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:45:41 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:45:41 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:45:41 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:45:41 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:45:41 compute-0 nova_compute[254092]:         <nova:user uuid="225ac79669cf4b6dab40b373facccda7">tempest-ServerShowV247Test-787926520-project-member</nova:user>
Nov 25 16:45:41 compute-0 nova_compute[254092]:         <nova:project uuid="07462d65bafa490d8f9bfcba3972ad42">tempest-ServerShowV247Test-787926520</nova:project>
Nov 25 16:45:41 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:45:41 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="a4aa3708-bb73-4b5a-b3f3-42153358021e"/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:       <nova:ports/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:45:41 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:45:41 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:45:41 compute-0 nova_compute[254092]:     <system>
Nov 25 16:45:41 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:45:41 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:45:41 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:45:41 compute-0 nova_compute[254092]:       <entry name="serial">504931a2-d324-4142-8698-9090b5cf7a23</entry>
Nov 25 16:45:41 compute-0 nova_compute[254092]:       <entry name="uuid">504931a2-d324-4142-8698-9090b5cf7a23</entry>
Nov 25 16:45:41 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     </system>
Nov 25 16:45:41 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:45:41 compute-0 nova_compute[254092]:   <os>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:   </os>
Nov 25 16:45:41 compute-0 nova_compute[254092]:   <features>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:   </features>
Nov 25 16:45:41 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:45:41 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:45:41 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:45:41 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:45:41 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:45:41 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/504931a2-d324-4142-8698-9090b5cf7a23_disk">
Nov 25 16:45:41 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:       </source>
Nov 25 16:45:41 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:45:41 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:45:41 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:45:41 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/504931a2-d324-4142-8698-9090b5cf7a23_disk.config">
Nov 25 16:45:41 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:       </source>
Nov 25 16:45:41 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:45:41 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:45:41 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:45:41 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/504931a2-d324-4142-8698-9090b5cf7a23/console.log" append="off"/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     <video>
Nov 25 16:45:41 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     </video>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:45:41 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:45:41 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:45:41 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:45:41 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:45:41 compute-0 nova_compute[254092]: </domain>
Nov 25 16:45:41 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:45:41 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 6261 keys, 8330986 bytes, temperature: kUnknown
Nov 25 16:45:41 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089141874616, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 8330986, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8290184, "index_size": 24052, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15685, "raw_key_size": 158683, "raw_average_key_size": 25, "raw_value_size": 8178875, "raw_average_value_size": 1306, "num_data_blocks": 971, "num_entries": 6261, "num_filter_entries": 6261, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764089141, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:45:41 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:45:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:45:41.874840) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 8330986 bytes
Nov 25 16:45:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:45:41.875815) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 190.2 rd, 158.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 7.7 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(9.4) write-amplify(4.2) OK, records in: 6775, records dropped: 514 output_compression: NoCompression
Nov 25 16:45:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:45:41.875830) EVENT_LOG_v1 {"time_micros": 1764089141875823, "job": 46, "event": "compaction_finished", "compaction_time_micros": 52681, "compaction_time_cpu_micros": 18625, "output_level": 6, "num_output_files": 1, "total_output_size": 8330986, "num_input_records": 6775, "num_output_records": 6261, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 16:45:41 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:45:41 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089141876189, "job": 46, "event": "table_file_deletion", "file_number": 82}
Nov 25 16:45:41 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:45:41 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089141877462, "job": 46, "event": "table_file_deletion", "file_number": 80}
Nov 25 16:45:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:45:41.821807) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:45:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:45:41.877584) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:45:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:45:41.877602) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:45:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:45:41.877606) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:45:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:45:41.877609) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:45:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:45:41.877612) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:45:41 compute-0 nova_compute[254092]: 2025-11-25 16:45:41.902 254096 DEBUG nova.virt.libvirt.driver [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:45:41 compute-0 nova_compute[254092]: 2025-11-25 16:45:41.903 254096 DEBUG nova.virt.libvirt.driver [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:45:41 compute-0 nova_compute[254092]: 2025-11-25 16:45:41.903 254096 INFO nova.virt.libvirt.driver [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Using config drive
Nov 25 16:45:41 compute-0 nova_compute[254092]: 2025-11-25 16:45:41.922 254096 DEBUG nova.storage.rbd_utils [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] rbd image 504931a2-d324-4142-8698-9090b5cf7a23_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:45:41 compute-0 nova_compute[254092]: 2025-11-25 16:45:41.937 254096 DEBUG nova.objects.instance [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 504931a2-d324-4142-8698-9090b5cf7a23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:45:41 compute-0 nova_compute[254092]: 2025-11-25 16:45:41.963 254096 DEBUG nova.objects.instance [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Lazy-loading 'keypairs' on Instance uuid 504931a2-d324-4142-8698-9090b5cf7a23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:45:42 compute-0 nova_compute[254092]: 2025-11-25 16:45:42.018 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:42 compute-0 nova_compute[254092]: 2025-11-25 16:45:42.072 254096 DEBUG nova.compute.manager [req-0e970322-0a07-4f85-80cb-1aaf30fd4e39 req-b3812f50-8716-4cbf-a0da-eb6555924835 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Received event network-vif-plugged-5b25a3f7-0f17-4813-8f18-d5d20a92f4aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:45:42 compute-0 nova_compute[254092]: 2025-11-25 16:45:42.072 254096 DEBUG oslo_concurrency.lockutils [req-0e970322-0a07-4f85-80cb-1aaf30fd4e39 req-b3812f50-8716-4cbf-a0da-eb6555924835 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b487a735-c096-4a8f-b8ba-fc5b6c055f56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:45:42 compute-0 nova_compute[254092]: 2025-11-25 16:45:42.073 254096 DEBUG oslo_concurrency.lockutils [req-0e970322-0a07-4f85-80cb-1aaf30fd4e39 req-b3812f50-8716-4cbf-a0da-eb6555924835 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b487a735-c096-4a8f-b8ba-fc5b6c055f56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:45:42 compute-0 nova_compute[254092]: 2025-11-25 16:45:42.073 254096 DEBUG oslo_concurrency.lockutils [req-0e970322-0a07-4f85-80cb-1aaf30fd4e39 req-b3812f50-8716-4cbf-a0da-eb6555924835 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b487a735-c096-4a8f-b8ba-fc5b6c055f56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:45:42 compute-0 nova_compute[254092]: 2025-11-25 16:45:42.073 254096 DEBUG nova.compute.manager [req-0e970322-0a07-4f85-80cb-1aaf30fd4e39 req-b3812f50-8716-4cbf-a0da-eb6555924835 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] No waiting events found dispatching network-vif-plugged-5b25a3f7-0f17-4813-8f18-d5d20a92f4aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:45:42 compute-0 nova_compute[254092]: 2025-11-25 16:45:42.073 254096 WARNING nova.compute.manager [req-0e970322-0a07-4f85-80cb-1aaf30fd4e39 req-b3812f50-8716-4cbf-a0da-eb6555924835 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Received unexpected event network-vif-plugged-5b25a3f7-0f17-4813-8f18-d5d20a92f4aa for instance with vm_state active and task_state None.
Nov 25 16:45:42 compute-0 nova_compute[254092]: 2025-11-25 16:45:42.074 254096 DEBUG nova.compute.manager [req-0e970322-0a07-4f85-80cb-1aaf30fd4e39 req-b3812f50-8716-4cbf-a0da-eb6555924835 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Received event network-vif-plugged-0ebbbca7-8751-4479-a892-216433a26e74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:45:42 compute-0 nova_compute[254092]: 2025-11-25 16:45:42.074 254096 DEBUG oslo_concurrency.lockutils [req-0e970322-0a07-4f85-80cb-1aaf30fd4e39 req-b3812f50-8716-4cbf-a0da-eb6555924835 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6b676874-6857-4021-9d83-c3673f57cebb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:45:42 compute-0 nova_compute[254092]: 2025-11-25 16:45:42.074 254096 DEBUG oslo_concurrency.lockutils [req-0e970322-0a07-4f85-80cb-1aaf30fd4e39 req-b3812f50-8716-4cbf-a0da-eb6555924835 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b676874-6857-4021-9d83-c3673f57cebb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:45:42 compute-0 nova_compute[254092]: 2025-11-25 16:45:42.075 254096 DEBUG oslo_concurrency.lockutils [req-0e970322-0a07-4f85-80cb-1aaf30fd4e39 req-b3812f50-8716-4cbf-a0da-eb6555924835 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b676874-6857-4021-9d83-c3673f57cebb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:45:42 compute-0 nova_compute[254092]: 2025-11-25 16:45:42.075 254096 DEBUG nova.compute.manager [req-0e970322-0a07-4f85-80cb-1aaf30fd4e39 req-b3812f50-8716-4cbf-a0da-eb6555924835 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Processing event network-vif-plugged-0ebbbca7-8751-4479-a892-216433a26e74 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:45:42 compute-0 nova_compute[254092]: 2025-11-25 16:45:42.075 254096 DEBUG nova.compute.manager [req-0e970322-0a07-4f85-80cb-1aaf30fd4e39 req-b3812f50-8716-4cbf-a0da-eb6555924835 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Received event network-vif-plugged-0ebbbca7-8751-4479-a892-216433a26e74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:45:42 compute-0 nova_compute[254092]: 2025-11-25 16:45:42.075 254096 DEBUG oslo_concurrency.lockutils [req-0e970322-0a07-4f85-80cb-1aaf30fd4e39 req-b3812f50-8716-4cbf-a0da-eb6555924835 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6b676874-6857-4021-9d83-c3673f57cebb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:45:42 compute-0 nova_compute[254092]: 2025-11-25 16:45:42.076 254096 DEBUG oslo_concurrency.lockutils [req-0e970322-0a07-4f85-80cb-1aaf30fd4e39 req-b3812f50-8716-4cbf-a0da-eb6555924835 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b676874-6857-4021-9d83-c3673f57cebb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:45:42 compute-0 nova_compute[254092]: 2025-11-25 16:45:42.076 254096 DEBUG oslo_concurrency.lockutils [req-0e970322-0a07-4f85-80cb-1aaf30fd4e39 req-b3812f50-8716-4cbf-a0da-eb6555924835 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b676874-6857-4021-9d83-c3673f57cebb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:45:42 compute-0 nova_compute[254092]: 2025-11-25 16:45:42.076 254096 DEBUG nova.compute.manager [req-0e970322-0a07-4f85-80cb-1aaf30fd4e39 req-b3812f50-8716-4cbf-a0da-eb6555924835 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] No waiting events found dispatching network-vif-plugged-0ebbbca7-8751-4479-a892-216433a26e74 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:45:42 compute-0 nova_compute[254092]: 2025-11-25 16:45:42.077 254096 WARNING nova.compute.manager [req-0e970322-0a07-4f85-80cb-1aaf30fd4e39 req-b3812f50-8716-4cbf-a0da-eb6555924835 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Received unexpected event network-vif-plugged-0ebbbca7-8751-4479-a892-216433a26e74 for instance with vm_state building and task_state spawning.
Nov 25 16:45:42 compute-0 nova_compute[254092]: 2025-11-25 16:45:42.078 254096 DEBUG nova.compute.manager [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:45:42 compute-0 nova_compute[254092]: 2025-11-25 16:45:42.081 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089142.0811067, 6b676874-6857-4021-9d83-c3673f57cebb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:45:42 compute-0 nova_compute[254092]: 2025-11-25 16:45:42.081 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] VM Resumed (Lifecycle Event)
Nov 25 16:45:42 compute-0 nova_compute[254092]: 2025-11-25 16:45:42.084 254096 DEBUG nova.virt.libvirt.driver [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:45:42 compute-0 nova_compute[254092]: 2025-11-25 16:45:42.094 254096 INFO nova.virt.libvirt.driver [-] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Instance spawned successfully.
Nov 25 16:45:42 compute-0 nova_compute[254092]: 2025-11-25 16:45:42.094 254096 DEBUG nova.virt.libvirt.driver [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:45:42 compute-0 nova_compute[254092]: 2025-11-25 16:45:42.111 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:45:42 compute-0 nova_compute[254092]: 2025-11-25 16:45:42.116 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:45:42 compute-0 nova_compute[254092]: 2025-11-25 16:45:42.121 254096 DEBUG nova.virt.libvirt.driver [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:45:42 compute-0 nova_compute[254092]: 2025-11-25 16:45:42.121 254096 DEBUG nova.virt.libvirt.driver [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:45:42 compute-0 nova_compute[254092]: 2025-11-25 16:45:42.122 254096 DEBUG nova.virt.libvirt.driver [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:45:42 compute-0 nova_compute[254092]: 2025-11-25 16:45:42.123 254096 DEBUG nova.virt.libvirt.driver [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:45:42 compute-0 nova_compute[254092]: 2025-11-25 16:45:42.123 254096 DEBUG nova.virt.libvirt.driver [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:45:42 compute-0 nova_compute[254092]: 2025-11-25 16:45:42.124 254096 DEBUG nova.virt.libvirt.driver [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:45:42 compute-0 nova_compute[254092]: 2025-11-25 16:45:42.154 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:45:42 compute-0 nova_compute[254092]: 2025-11-25 16:45:42.194 254096 INFO nova.compute.manager [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Took 9.72 seconds to spawn the instance on the hypervisor.
Nov 25 16:45:42 compute-0 nova_compute[254092]: 2025-11-25 16:45:42.195 254096 DEBUG nova.compute.manager [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:45:42 compute-0 nova_compute[254092]: 2025-11-25 16:45:42.251 254096 INFO nova.compute.manager [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Took 10.85 seconds to build instance.
Nov 25 16:45:42 compute-0 nova_compute[254092]: 2025-11-25 16:45:42.270 254096 DEBUG oslo_concurrency.lockutils [None req-0c5529da-339b-425d-8c37-6f7a6bc7fa51 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "6b676874-6857-4021-9d83-c3673f57cebb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.976s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:45:42 compute-0 nova_compute[254092]: 2025-11-25 16:45:42.509 254096 INFO nova.virt.libvirt.driver [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Creating config drive at /var/lib/nova/instances/504931a2-d324-4142-8698-9090b5cf7a23/disk.config
Nov 25 16:45:42 compute-0 nova_compute[254092]: 2025-11-25 16:45:42.515 254096 DEBUG oslo_concurrency.processutils [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/504931a2-d324-4142-8698-9090b5cf7a23/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl0arq9pa execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:45:42 compute-0 nova_compute[254092]: 2025-11-25 16:45:42.656 254096 DEBUG oslo_concurrency.processutils [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/504931a2-d324-4142-8698-9090b5cf7a23/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl0arq9pa" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:45:42 compute-0 nova_compute[254092]: 2025-11-25 16:45:42.688 254096 DEBUG nova.storage.rbd_utils [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] rbd image 504931a2-d324-4142-8698-9090b5cf7a23_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:45:42 compute-0 nova_compute[254092]: 2025-11-25 16:45:42.693 254096 DEBUG oslo_concurrency.processutils [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/504931a2-d324-4142-8698-9090b5cf7a23/disk.config 504931a2-d324-4142-8698-9090b5cf7a23_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:45:42 compute-0 ceph-mon[74985]: pgmap v1809: 321 pgs: 321 active+clean; 387 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 7.9 MiB/s wr, 311 op/s
Nov 25 16:45:42 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/170525336' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:45:42 compute-0 nova_compute[254092]: 2025-11-25 16:45:42.851 254096 INFO nova.compute.manager [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Rebuilding instance
Nov 25 16:45:42 compute-0 nova_compute[254092]: 2025-11-25 16:45:42.869 254096 DEBUG oslo_concurrency.processutils [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/504931a2-d324-4142-8698-9090b5cf7a23/disk.config 504931a2-d324-4142-8698-9090b5cf7a23_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:45:42 compute-0 nova_compute[254092]: 2025-11-25 16:45:42.869 254096 INFO nova.virt.libvirt.driver [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Deleting local config drive /var/lib/nova/instances/504931a2-d324-4142-8698-9090b5cf7a23/disk.config because it was imported into RBD.
Nov 25 16:45:42 compute-0 systemd-machined[216343]: New machine qemu-99-instance-0000004f.
Nov 25 16:45:42 compute-0 systemd[1]: Started Virtual Machine qemu-99-instance-0000004f.
Nov 25 16:45:43 compute-0 podman[339198]: 2025-11-25 16:45:43.018667228 +0000 UTC m=+0.063654841 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 16:45:43 compute-0 podman[339220]: 2025-11-25 16:45:43.100864772 +0000 UTC m=+0.059236971 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible)
Nov 25 16:45:43 compute-0 nova_compute[254092]: 2025-11-25 16:45:43.111 254096 DEBUG nova.objects.instance [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'trusted_certs' on Instance uuid b487a735-c096-4a8f-b8ba-fc5b6c055f56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:45:43 compute-0 nova_compute[254092]: 2025-11-25 16:45:43.124 254096 DEBUG nova.compute.manager [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:45:43 compute-0 podman[339221]: 2025-11-25 16:45:43.135614846 +0000 UTC m=+0.092446973 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 16:45:43 compute-0 nova_compute[254092]: 2025-11-25 16:45:43.166 254096 DEBUG nova.objects.instance [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'pci_requests' on Instance uuid b487a735-c096-4a8f-b8ba-fc5b6c055f56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:45:43 compute-0 nova_compute[254092]: 2025-11-25 16:45:43.179 254096 DEBUG nova.objects.instance [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'pci_devices' on Instance uuid b487a735-c096-4a8f-b8ba-fc5b6c055f56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:45:43 compute-0 nova_compute[254092]: 2025-11-25 16:45:43.187 254096 DEBUG nova.objects.instance [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'resources' on Instance uuid b487a735-c096-4a8f-b8ba-fc5b6c055f56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:45:43 compute-0 nova_compute[254092]: 2025-11-25 16:45:43.194 254096 DEBUG nova.objects.instance [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'migration_context' on Instance uuid b487a735-c096-4a8f-b8ba-fc5b6c055f56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:45:43 compute-0 nova_compute[254092]: 2025-11-25 16:45:43.203 254096 DEBUG nova.objects.instance [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 16:45:43 compute-0 nova_compute[254092]: 2025-11-25 16:45:43.206 254096 DEBUG nova.virt.libvirt.driver [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 16:45:43 compute-0 nova_compute[254092]: 2025-11-25 16:45:43.429 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:43 compute-0 nova_compute[254092]: 2025-11-25 16:45:43.569 254096 DEBUG nova.compute.manager [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:45:43 compute-0 nova_compute[254092]: 2025-11-25 16:45:43.570 254096 DEBUG nova.virt.libvirt.driver [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:45:43 compute-0 nova_compute[254092]: 2025-11-25 16:45:43.570 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for 504931a2-d324-4142-8698-9090b5cf7a23 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 16:45:43 compute-0 nova_compute[254092]: 2025-11-25 16:45:43.571 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089143.5697937, 504931a2-d324-4142-8698-9090b5cf7a23 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:45:43 compute-0 nova_compute[254092]: 2025-11-25 16:45:43.571 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] VM Resumed (Lifecycle Event)
Nov 25 16:45:43 compute-0 nova_compute[254092]: 2025-11-25 16:45:43.576 254096 INFO nova.virt.libvirt.driver [-] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Instance spawned successfully.
Nov 25 16:45:43 compute-0 nova_compute[254092]: 2025-11-25 16:45:43.576 254096 DEBUG nova.virt.libvirt.driver [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:45:43 compute-0 nova_compute[254092]: 2025-11-25 16:45:43.589 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:45:43 compute-0 nova_compute[254092]: 2025-11-25 16:45:43.592 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:45:43 compute-0 nova_compute[254092]: 2025-11-25 16:45:43.602 254096 DEBUG nova.virt.libvirt.driver [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:45:43 compute-0 nova_compute[254092]: 2025-11-25 16:45:43.602 254096 DEBUG nova.virt.libvirt.driver [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:45:43 compute-0 nova_compute[254092]: 2025-11-25 16:45:43.603 254096 DEBUG nova.virt.libvirt.driver [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:45:43 compute-0 nova_compute[254092]: 2025-11-25 16:45:43.603 254096 DEBUG nova.virt.libvirt.driver [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:45:43 compute-0 nova_compute[254092]: 2025-11-25 16:45:43.603 254096 DEBUG nova.virt.libvirt.driver [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:45:43 compute-0 nova_compute[254092]: 2025-11-25 16:45:43.604 254096 DEBUG nova.virt.libvirt.driver [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:45:43 compute-0 nova_compute[254092]: 2025-11-25 16:45:43.608 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 16:45:43 compute-0 nova_compute[254092]: 2025-11-25 16:45:43.608 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089143.5729728, 504931a2-d324-4142-8698-9090b5cf7a23 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:45:43 compute-0 nova_compute[254092]: 2025-11-25 16:45:43.608 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] VM Started (Lifecycle Event)
Nov 25 16:45:43 compute-0 nova_compute[254092]: 2025-11-25 16:45:43.630 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:45:43 compute-0 nova_compute[254092]: 2025-11-25 16:45:43.634 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:45:43 compute-0 nova_compute[254092]: 2025-11-25 16:45:43.658 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 16:45:43 compute-0 nova_compute[254092]: 2025-11-25 16:45:43.671 254096 DEBUG nova.compute.manager [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:45:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1810: 321 pgs: 321 active+clean; 387 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.3 MiB/s wr, 191 op/s
Nov 25 16:45:43 compute-0 nova_compute[254092]: 2025-11-25 16:45:43.726 254096 DEBUG oslo_concurrency.lockutils [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:45:43 compute-0 nova_compute[254092]: 2025-11-25 16:45:43.726 254096 DEBUG oslo_concurrency.lockutils [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:45:43 compute-0 nova_compute[254092]: 2025-11-25 16:45:43.727 254096 DEBUG nova.objects.instance [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 16:45:43 compute-0 nova_compute[254092]: 2025-11-25 16:45:43.776 254096 DEBUG oslo_concurrency.lockutils [None req-48925d95-4452-4426-9034-2506dd05b54c 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.050s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:45:44 compute-0 ceph-mon[74985]: pgmap v1810: 321 pgs: 321 active+clean; 387 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.3 MiB/s wr, 191 op/s
Nov 25 16:45:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1811: 321 pgs: 321 active+clean; 421 MiB data, 777 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 5.4 MiB/s wr, 289 op/s
Nov 25 16:45:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:45:46 compute-0 nova_compute[254092]: 2025-11-25 16:45:46.606 254096 DEBUG oslo_concurrency.lockutils [None req-a758d44e-228d-440c-8c18-10da4e4656b0 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Acquiring lock "504931a2-d324-4142-8698-9090b5cf7a23" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:45:46 compute-0 nova_compute[254092]: 2025-11-25 16:45:46.606 254096 DEBUG oslo_concurrency.lockutils [None req-a758d44e-228d-440c-8c18-10da4e4656b0 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Lock "504931a2-d324-4142-8698-9090b5cf7a23" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:45:46 compute-0 nova_compute[254092]: 2025-11-25 16:45:46.607 254096 DEBUG oslo_concurrency.lockutils [None req-a758d44e-228d-440c-8c18-10da4e4656b0 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Acquiring lock "504931a2-d324-4142-8698-9090b5cf7a23-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:45:46 compute-0 nova_compute[254092]: 2025-11-25 16:45:46.607 254096 DEBUG oslo_concurrency.lockutils [None req-a758d44e-228d-440c-8c18-10da4e4656b0 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Lock "504931a2-d324-4142-8698-9090b5cf7a23-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:45:46 compute-0 nova_compute[254092]: 2025-11-25 16:45:46.607 254096 DEBUG oslo_concurrency.lockutils [None req-a758d44e-228d-440c-8c18-10da4e4656b0 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Lock "504931a2-d324-4142-8698-9090b5cf7a23-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:45:46 compute-0 nova_compute[254092]: 2025-11-25 16:45:46.608 254096 INFO nova.compute.manager [None req-a758d44e-228d-440c-8c18-10da4e4656b0 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Terminating instance
Nov 25 16:45:46 compute-0 nova_compute[254092]: 2025-11-25 16:45:46.609 254096 DEBUG oslo_concurrency.lockutils [None req-a758d44e-228d-440c-8c18-10da4e4656b0 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Acquiring lock "refresh_cache-504931a2-d324-4142-8698-9090b5cf7a23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:45:46 compute-0 nova_compute[254092]: 2025-11-25 16:45:46.609 254096 DEBUG oslo_concurrency.lockutils [None req-a758d44e-228d-440c-8c18-10da4e4656b0 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Acquired lock "refresh_cache-504931a2-d324-4142-8698-9090b5cf7a23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:45:46 compute-0 nova_compute[254092]: 2025-11-25 16:45:46.610 254096 DEBUG nova.network.neutron [None req-a758d44e-228d-440c-8c18-10da4e4656b0 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:45:46 compute-0 nova_compute[254092]: 2025-11-25 16:45:46.754 254096 DEBUG nova.network.neutron [None req-a758d44e-228d-440c-8c18-10da4e4656b0 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:45:46 compute-0 ceph-mon[74985]: pgmap v1811: 321 pgs: 321 active+clean; 421 MiB data, 777 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 5.4 MiB/s wr, 289 op/s
Nov 25 16:45:46 compute-0 nova_compute[254092]: 2025-11-25 16:45:46.823 254096 DEBUG oslo_concurrency.lockutils [None req-994f0872-5d20-49df-a56f-de04e95b55e7 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Acquiring lock "6b676874-6857-4021-9d83-c3673f57cebb" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:45:46 compute-0 nova_compute[254092]: 2025-11-25 16:45:46.823 254096 DEBUG oslo_concurrency.lockutils [None req-994f0872-5d20-49df-a56f-de04e95b55e7 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "6b676874-6857-4021-9d83-c3673f57cebb" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:45:46 compute-0 nova_compute[254092]: 2025-11-25 16:45:46.824 254096 DEBUG nova.compute.manager [None req-994f0872-5d20-49df-a56f-de04e95b55e7 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:45:46 compute-0 nova_compute[254092]: 2025-11-25 16:45:46.827 254096 DEBUG nova.compute.manager [None req-994f0872-5d20-49df-a56f-de04e95b55e7 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Nov 25 16:45:46 compute-0 nova_compute[254092]: 2025-11-25 16:45:46.827 254096 DEBUG nova.objects.instance [None req-994f0872-5d20-49df-a56f-de04e95b55e7 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lazy-loading 'flavor' on Instance uuid 6b676874-6857-4021-9d83-c3673f57cebb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:45:46 compute-0 nova_compute[254092]: 2025-11-25 16:45:46.850 254096 DEBUG nova.virt.libvirt.driver [None req-994f0872-5d20-49df-a56f-de04e95b55e7 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 16:45:47 compute-0 nova_compute[254092]: 2025-11-25 16:45:47.004 254096 DEBUG nova.network.neutron [None req-a758d44e-228d-440c-8c18-10da4e4656b0 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:45:47 compute-0 nova_compute[254092]: 2025-11-25 16:45:47.016 254096 DEBUG oslo_concurrency.lockutils [None req-a758d44e-228d-440c-8c18-10da4e4656b0 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Releasing lock "refresh_cache-504931a2-d324-4142-8698-9090b5cf7a23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:45:47 compute-0 nova_compute[254092]: 2025-11-25 16:45:47.016 254096 DEBUG nova.compute.manager [None req-a758d44e-228d-440c-8c18-10da4e4656b0 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:45:47 compute-0 nova_compute[254092]: 2025-11-25 16:45:47.021 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:47 compute-0 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d0000004f.scope: Deactivated successfully.
Nov 25 16:45:47 compute-0 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d0000004f.scope: Consumed 3.978s CPU time.
Nov 25 16:45:47 compute-0 systemd-machined[216343]: Machine qemu-99-instance-0000004f terminated.
Nov 25 16:45:47 compute-0 nova_compute[254092]: 2025-11-25 16:45:47.233 254096 INFO nova.virt.libvirt.driver [-] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Instance destroyed successfully.
Nov 25 16:45:47 compute-0 nova_compute[254092]: 2025-11-25 16:45:47.234 254096 DEBUG nova.objects.instance [None req-a758d44e-228d-440c-8c18-10da4e4656b0 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Lazy-loading 'resources' on Instance uuid 504931a2-d324-4142-8698-9090b5cf7a23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:45:47 compute-0 nova_compute[254092]: 2025-11-25 16:45:47.679 254096 INFO nova.virt.libvirt.driver [None req-a758d44e-228d-440c-8c18-10da4e4656b0 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Deleting instance files /var/lib/nova/instances/504931a2-d324-4142-8698-9090b5cf7a23_del
Nov 25 16:45:47 compute-0 nova_compute[254092]: 2025-11-25 16:45:47.680 254096 INFO nova.virt.libvirt.driver [None req-a758d44e-228d-440c-8c18-10da4e4656b0 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Deletion of /var/lib/nova/instances/504931a2-d324-4142-8698-9090b5cf7a23_del complete
Nov 25 16:45:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1812: 321 pgs: 321 active+clean; 421 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.0 MiB/s wr, 294 op/s
Nov 25 16:45:47 compute-0 nova_compute[254092]: 2025-11-25 16:45:47.723 254096 INFO nova.compute.manager [None req-a758d44e-228d-440c-8c18-10da4e4656b0 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Took 0.71 seconds to destroy the instance on the hypervisor.
Nov 25 16:45:47 compute-0 nova_compute[254092]: 2025-11-25 16:45:47.724 254096 DEBUG oslo.service.loopingcall [None req-a758d44e-228d-440c-8c18-10da4e4656b0 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:45:47 compute-0 nova_compute[254092]: 2025-11-25 16:45:47.724 254096 DEBUG nova.compute.manager [-] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:45:47 compute-0 nova_compute[254092]: 2025-11-25 16:45:47.725 254096 DEBUG nova.network.neutron [-] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:45:47 compute-0 nova_compute[254092]: 2025-11-25 16:45:47.819 254096 DEBUG nova.network.neutron [-] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:45:47 compute-0 nova_compute[254092]: 2025-11-25 16:45:47.831 254096 DEBUG nova.network.neutron [-] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:45:47 compute-0 nova_compute[254092]: 2025-11-25 16:45:47.841 254096 INFO nova.compute.manager [-] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Took 0.12 seconds to deallocate network for instance.
Nov 25 16:45:47 compute-0 nova_compute[254092]: 2025-11-25 16:45:47.907 254096 DEBUG oslo_concurrency.lockutils [None req-a758d44e-228d-440c-8c18-10da4e4656b0 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:45:47 compute-0 nova_compute[254092]: 2025-11-25 16:45:47.908 254096 DEBUG oslo_concurrency.lockutils [None req-a758d44e-228d-440c-8c18-10da4e4656b0 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:45:48 compute-0 nova_compute[254092]: 2025-11-25 16:45:48.059 254096 DEBUG oslo_concurrency.processutils [None req-a758d44e-228d-440c-8c18-10da4e4656b0 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:45:48 compute-0 nova_compute[254092]: 2025-11-25 16:45:48.430 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:45:48 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1879218224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:45:48 compute-0 nova_compute[254092]: 2025-11-25 16:45:48.629 254096 DEBUG oslo_concurrency.processutils [None req-a758d44e-228d-440c-8c18-10da4e4656b0 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:45:48 compute-0 nova_compute[254092]: 2025-11-25 16:45:48.634 254096 DEBUG nova.compute.provider_tree [None req-a758d44e-228d-440c-8c18-10da4e4656b0 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:45:48 compute-0 nova_compute[254092]: 2025-11-25 16:45:48.652 254096 DEBUG nova.scheduler.client.report [None req-a758d44e-228d-440c-8c18-10da4e4656b0 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:45:48 compute-0 nova_compute[254092]: 2025-11-25 16:45:48.684 254096 DEBUG oslo_concurrency.lockutils [None req-a758d44e-228d-440c-8c18-10da4e4656b0 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:45:48 compute-0 nova_compute[254092]: 2025-11-25 16:45:48.714 254096 INFO nova.scheduler.client.report [None req-a758d44e-228d-440c-8c18-10da4e4656b0 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Deleted allocations for instance 504931a2-d324-4142-8698-9090b5cf7a23
Nov 25 16:45:48 compute-0 nova_compute[254092]: 2025-11-25 16:45:48.772 254096 DEBUG oslo_concurrency.lockutils [None req-a758d44e-228d-440c-8c18-10da4e4656b0 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Lock "504931a2-d324-4142-8698-9090b5cf7a23" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.166s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:45:48 compute-0 ceph-mon[74985]: pgmap v1812: 321 pgs: 321 active+clean; 421 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.0 MiB/s wr, 294 op/s
Nov 25 16:45:48 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1879218224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:45:48 compute-0 nova_compute[254092]: 2025-11-25 16:45:48.881 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:49 compute-0 nova_compute[254092]: 2025-11-25 16:45:49.511 254096 DEBUG oslo_concurrency.lockutils [None req-e3028933-f9bc-4542-9edf-0a11168af73e 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Acquiring lock "efdd3cf9-3df8-4a1e-9e45-12172f99cbac" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:45:49 compute-0 nova_compute[254092]: 2025-11-25 16:45:49.512 254096 DEBUG oslo_concurrency.lockutils [None req-e3028933-f9bc-4542-9edf-0a11168af73e 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Lock "efdd3cf9-3df8-4a1e-9e45-12172f99cbac" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:45:49 compute-0 nova_compute[254092]: 2025-11-25 16:45:49.512 254096 DEBUG oslo_concurrency.lockutils [None req-e3028933-f9bc-4542-9edf-0a11168af73e 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Acquiring lock "efdd3cf9-3df8-4a1e-9e45-12172f99cbac-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:45:49 compute-0 nova_compute[254092]: 2025-11-25 16:45:49.512 254096 DEBUG oslo_concurrency.lockutils [None req-e3028933-f9bc-4542-9edf-0a11168af73e 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Lock "efdd3cf9-3df8-4a1e-9e45-12172f99cbac-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:45:49 compute-0 nova_compute[254092]: 2025-11-25 16:45:49.513 254096 DEBUG oslo_concurrency.lockutils [None req-e3028933-f9bc-4542-9edf-0a11168af73e 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Lock "efdd3cf9-3df8-4a1e-9e45-12172f99cbac-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:45:49 compute-0 nova_compute[254092]: 2025-11-25 16:45:49.514 254096 INFO nova.compute.manager [None req-e3028933-f9bc-4542-9edf-0a11168af73e 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] Terminating instance
Nov 25 16:45:49 compute-0 nova_compute[254092]: 2025-11-25 16:45:49.515 254096 DEBUG oslo_concurrency.lockutils [None req-e3028933-f9bc-4542-9edf-0a11168af73e 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Acquiring lock "refresh_cache-efdd3cf9-3df8-4a1e-9e45-12172f99cbac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:45:49 compute-0 nova_compute[254092]: 2025-11-25 16:45:49.515 254096 DEBUG oslo_concurrency.lockutils [None req-e3028933-f9bc-4542-9edf-0a11168af73e 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Acquired lock "refresh_cache-efdd3cf9-3df8-4a1e-9e45-12172f99cbac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:45:49 compute-0 nova_compute[254092]: 2025-11-25 16:45:49.515 254096 DEBUG nova.network.neutron [None req-e3028933-f9bc-4542-9edf-0a11168af73e 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:45:49 compute-0 nova_compute[254092]: 2025-11-25 16:45:49.657 254096 DEBUG nova.network.neutron [None req-e3028933-f9bc-4542-9edf-0a11168af73e 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:45:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1813: 321 pgs: 321 active+clean; 421 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 279 op/s
Nov 25 16:45:49 compute-0 nova_compute[254092]: 2025-11-25 16:45:49.932 254096 DEBUG nova.network.neutron [None req-e3028933-f9bc-4542-9edf-0a11168af73e 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:45:49 compute-0 nova_compute[254092]: 2025-11-25 16:45:49.947 254096 DEBUG oslo_concurrency.lockutils [None req-e3028933-f9bc-4542-9edf-0a11168af73e 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Releasing lock "refresh_cache-efdd3cf9-3df8-4a1e-9e45-12172f99cbac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:45:49 compute-0 nova_compute[254092]: 2025-11-25 16:45:49.948 254096 DEBUG nova.compute.manager [None req-e3028933-f9bc-4542-9edf-0a11168af73e 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:45:50 compute-0 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d00000050.scope: Deactivated successfully.
Nov 25 16:45:50 compute-0 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d00000050.scope: Consumed 14.528s CPU time.
Nov 25 16:45:50 compute-0 systemd-machined[216343]: Machine qemu-96-instance-00000050 terminated.
Nov 25 16:45:50 compute-0 nova_compute[254092]: 2025-11-25 16:45:50.165 254096 INFO nova.virt.libvirt.driver [-] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] Instance destroyed successfully.
Nov 25 16:45:50 compute-0 nova_compute[254092]: 2025-11-25 16:45:50.166 254096 DEBUG nova.objects.instance [None req-e3028933-f9bc-4542-9edf-0a11168af73e 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Lazy-loading 'resources' on Instance uuid efdd3cf9-3df8-4a1e-9e45-12172f99cbac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:45:50 compute-0 nova_compute[254092]: 2025-11-25 16:45:50.744 254096 INFO nova.virt.libvirt.driver [None req-e3028933-f9bc-4542-9edf-0a11168af73e 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] Deleting instance files /var/lib/nova/instances/efdd3cf9-3df8-4a1e-9e45-12172f99cbac_del
Nov 25 16:45:50 compute-0 nova_compute[254092]: 2025-11-25 16:45:50.744 254096 INFO nova.virt.libvirt.driver [None req-e3028933-f9bc-4542-9edf-0a11168af73e 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] Deletion of /var/lib/nova/instances/efdd3cf9-3df8-4a1e-9e45-12172f99cbac_del complete
Nov 25 16:45:50 compute-0 nova_compute[254092]: 2025-11-25 16:45:50.780 254096 INFO nova.compute.manager [None req-e3028933-f9bc-4542-9edf-0a11168af73e 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] Took 0.83 seconds to destroy the instance on the hypervisor.
Nov 25 16:45:50 compute-0 nova_compute[254092]: 2025-11-25 16:45:50.781 254096 DEBUG oslo.service.loopingcall [None req-e3028933-f9bc-4542-9edf-0a11168af73e 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:45:50 compute-0 nova_compute[254092]: 2025-11-25 16:45:50.781 254096 DEBUG nova.compute.manager [-] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:45:50 compute-0 nova_compute[254092]: 2025-11-25 16:45:50.781 254096 DEBUG nova.network.neutron [-] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:45:50 compute-0 ceph-mon[74985]: pgmap v1813: 321 pgs: 321 active+clean; 421 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 279 op/s
Nov 25 16:45:50 compute-0 nova_compute[254092]: 2025-11-25 16:45:50.973 254096 DEBUG nova.network.neutron [-] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:45:50 compute-0 nova_compute[254092]: 2025-11-25 16:45:50.988 254096 DEBUG nova.network.neutron [-] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:45:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:45:50 compute-0 nova_compute[254092]: 2025-11-25 16:45:50.999 254096 INFO nova.compute.manager [-] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] Took 0.22 seconds to deallocate network for instance.
Nov 25 16:45:51 compute-0 nova_compute[254092]: 2025-11-25 16:45:51.046 254096 DEBUG oslo_concurrency.lockutils [None req-e3028933-f9bc-4542-9edf-0a11168af73e 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:45:51 compute-0 nova_compute[254092]: 2025-11-25 16:45:51.046 254096 DEBUG oslo_concurrency.lockutils [None req-e3028933-f9bc-4542-9edf-0a11168af73e 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:45:51 compute-0 nova_compute[254092]: 2025-11-25 16:45:51.161 254096 DEBUG oslo_concurrency.processutils [None req-e3028933-f9bc-4542-9edf-0a11168af73e 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:45:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:45:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:45:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:45:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:45:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033243950234420785 of space, bias 1.0, pg target 0.9973185070326236 quantized to 32 (current 32)
Nov 25 16:45:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:45:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:45:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:45:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:45:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:45:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 16:45:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:45:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 16:45:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:45:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:45:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:45:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 16:45:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:45:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 16:45:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:45:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:45:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:45:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 16:45:51 compute-0 ovn_controller[153477]: 2025-11-25T16:45:51Z|00085|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:23:43:9a 10.100.0.7
Nov 25 16:45:51 compute-0 ovn_controller[153477]: 2025-11-25T16:45:51Z|00086|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:23:43:9a 10.100.0.7
Nov 25 16:45:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:45:51 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2602923099' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:45:51 compute-0 nova_compute[254092]: 2025-11-25 16:45:51.630 254096 DEBUG oslo_concurrency.processutils [None req-e3028933-f9bc-4542-9edf-0a11168af73e 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:45:51 compute-0 nova_compute[254092]: 2025-11-25 16:45:51.635 254096 DEBUG nova.compute.provider_tree [None req-e3028933-f9bc-4542-9edf-0a11168af73e 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:45:51 compute-0 nova_compute[254092]: 2025-11-25 16:45:51.652 254096 DEBUG nova.scheduler.client.report [None req-e3028933-f9bc-4542-9edf-0a11168af73e 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:45:51 compute-0 nova_compute[254092]: 2025-11-25 16:45:51.672 254096 DEBUG oslo_concurrency.lockutils [None req-e3028933-f9bc-4542-9edf-0a11168af73e 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:45:51 compute-0 nova_compute[254092]: 2025-11-25 16:45:51.696 254096 INFO nova.scheduler.client.report [None req-e3028933-f9bc-4542-9edf-0a11168af73e 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Deleted allocations for instance efdd3cf9-3df8-4a1e-9e45-12172f99cbac
Nov 25 16:45:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1814: 321 pgs: 321 active+clean; 371 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 3.0 MiB/s wr, 343 op/s
Nov 25 16:45:51 compute-0 nova_compute[254092]: 2025-11-25 16:45:51.762 254096 DEBUG oslo_concurrency.lockutils [None req-e3028933-f9bc-4542-9edf-0a11168af73e 225ac79669cf4b6dab40b373facccda7 07462d65bafa490d8f9bfcba3972ad42 - - default default] Lock "efdd3cf9-3df8-4a1e-9e45-12172f99cbac" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.250s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:45:51 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2602923099' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:45:52 compute-0 nova_compute[254092]: 2025-11-25 16:45:52.023 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:52 compute-0 nova_compute[254092]: 2025-11-25 16:45:52.690 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:52 compute-0 ceph-mon[74985]: pgmap v1814: 321 pgs: 321 active+clean; 371 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 3.0 MiB/s wr, 343 op/s
Nov 25 16:45:53 compute-0 nova_compute[254092]: 2025-11-25 16:45:53.249 254096 DEBUG nova.virt.libvirt.driver [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 16:45:53 compute-0 nova_compute[254092]: 2025-11-25 16:45:53.433 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1815: 321 pgs: 321 active+clean; 371 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 2.3 MiB/s wr, 207 op/s
Nov 25 16:45:54 compute-0 ceph-mon[74985]: pgmap v1815: 321 pgs: 321 active+clean; 371 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 2.3 MiB/s wr, 207 op/s
Nov 25 16:45:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 16:45:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2882931962' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:45:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 16:45:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2882931962' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:45:55 compute-0 ovn_controller[153477]: 2025-11-25T16:45:55Z|00087|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:09:27:70 10.100.0.9
Nov 25 16:45:55 compute-0 ovn_controller[153477]: 2025-11-25T16:45:55Z|00088|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:09:27:70 10.100.0.9
Nov 25 16:45:55 compute-0 kernel: tap5b25a3f7-0f (unregistering): left promiscuous mode
Nov 25 16:45:55 compute-0 NetworkManager[48891]: <info>  [1764089155.6136] device (tap5b25a3f7-0f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:45:55 compute-0 ovn_controller[153477]: 2025-11-25T16:45:55Z|00772|binding|INFO|Releasing lport 5b25a3f7-0f17-4813-8f18-d5d20a92f4aa from this chassis (sb_readonly=0)
Nov 25 16:45:55 compute-0 ovn_controller[153477]: 2025-11-25T16:45:55Z|00773|binding|INFO|Setting lport 5b25a3f7-0f17-4813-8f18-d5d20a92f4aa down in Southbound
Nov 25 16:45:55 compute-0 nova_compute[254092]: 2025-11-25 16:45:55.666 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:55 compute-0 ovn_controller[153477]: 2025-11-25T16:45:55Z|00774|binding|INFO|Removing iface tap5b25a3f7-0f ovn-installed in OVS
Nov 25 16:45:55 compute-0 nova_compute[254092]: 2025-11-25 16:45:55.669 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:55.675 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:43:9a 10.100.0.7'], port_security=['fa:16:3e:23:43:9a 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'b487a735-c096-4a8f-b8ba-fc5b6c055f56', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4d15f5aabd3491da5314b126a20225a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd3b6fd59-89c5-40df-ad88-a6d7ff4d1d92', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa1f8c26-9163-4397-beb3-8395c780a12b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=5b25a3f7-0f17-4813-8f18-d5d20a92f4aa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:45:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:55.676 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 5b25a3f7-0f17-4813-8f18-d5d20a92f4aa in datapath 50ea1716-9d0b-409f-b648-8d2ad5a30525 unbound from our chassis
Nov 25 16:45:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:55.677 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 50ea1716-9d0b-409f-b648-8d2ad5a30525
Nov 25 16:45:55 compute-0 nova_compute[254092]: 2025-11-25 16:45:55.684 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:55.697 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[34699610-b329-43b1-8174-215e16652122]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:45:55 compute-0 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d00000051.scope: Deactivated successfully.
Nov 25 16:45:55 compute-0 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d00000051.scope: Consumed 13.462s CPU time.
Nov 25 16:45:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1816: 321 pgs: 321 active+clean; 342 MiB data, 760 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.4 MiB/s wr, 279 op/s
Nov 25 16:45:55 compute-0 systemd-machined[216343]: Machine qemu-97-instance-00000051 terminated.
Nov 25 16:45:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:55.739 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[5a805c22-fb65-4b5c-8ca6-deb3ba9f39c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:45:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:55.742 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1dafefb2-fcc0-4d4e-a599-f1c23e9de70a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:45:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:55.771 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[68f303e2-c04b-4882-add7-bb4f019e5819]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:45:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:55.788 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[25c6606f-c4cb-40c2-b07f-5bbc37ecec56]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap50ea1716-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:99:3a:54'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 225], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554594, 'reachable_time': 42897, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339403, 'error': None, 'target': 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:45:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:55.806 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[04cef3c1-71f5-4635-bc2b-2d2f59f5cbe8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap50ea1716-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 554606, 'tstamp': 554606}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339404, 'error': None, 'target': 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap50ea1716-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 554609, 'tstamp': 554609}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339404, 'error': None, 'target': 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:45:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:55.808 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50ea1716-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:45:55 compute-0 nova_compute[254092]: 2025-11-25 16:45:55.810 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:55 compute-0 nova_compute[254092]: 2025-11-25 16:45:55.815 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:55.816 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap50ea1716-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:45:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:55.816 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:45:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:55.816 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap50ea1716-90, col_values=(('external_ids', {'iface-id': '1ec3401d-acb9-4d50-bfc2-f0a1127f4745'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:45:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:55.817 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:45:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/2882931962' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:45:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/2882931962' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:45:55 compute-0 nova_compute[254092]: 2025-11-25 16:45:55.884 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:55 compute-0 nova_compute[254092]: 2025-11-25 16:45:55.890 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:45:56 compute-0 nova_compute[254092]: 2025-11-25 16:45:56.263 254096 INFO nova.virt.libvirt.driver [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Instance shutdown successfully after 13 seconds.
Nov 25 16:45:56 compute-0 nova_compute[254092]: 2025-11-25 16:45:56.267 254096 INFO nova.virt.libvirt.driver [-] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Instance destroyed successfully.
Nov 25 16:45:56 compute-0 nova_compute[254092]: 2025-11-25 16:45:56.271 254096 INFO nova.virt.libvirt.driver [-] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Instance destroyed successfully.
Nov 25 16:45:56 compute-0 nova_compute[254092]: 2025-11-25 16:45:56.272 254096 DEBUG nova.virt.libvirt.vif [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:45:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1676188267',display_name='tempest-ServerActionsTestJSON-server-1627841880',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1676188267',id=81,image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:45:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b4d15f5aabd3491da5314b126a20225a',ramdisk_id='',reservation_id='r-puouwpb9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1811453217',owner_user_name='tempest-ServerActionsTestJSON-1811453217-pr
oject-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:45:42Z,user_data=None,user_id='aea2d8cf3bb54cdbbc72e41805fb1f90',uuid=b487a735-c096-4a8f-b8ba-fc5b6c055f56,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5b25a3f7-0f17-4813-8f18-d5d20a92f4aa", "address": "fa:16:3e:23:43:9a", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b25a3f7-0f", "ovs_interfaceid": "5b25a3f7-0f17-4813-8f18-d5d20a92f4aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:45:56 compute-0 nova_compute[254092]: 2025-11-25 16:45:56.273 254096 DEBUG nova.network.os_vif_util [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converting VIF {"id": "5b25a3f7-0f17-4813-8f18-d5d20a92f4aa", "address": "fa:16:3e:23:43:9a", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b25a3f7-0f", "ovs_interfaceid": "5b25a3f7-0f17-4813-8f18-d5d20a92f4aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:45:56 compute-0 nova_compute[254092]: 2025-11-25 16:45:56.273 254096 DEBUG nova.network.os_vif_util [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:43:9a,bridge_name='br-int',has_traffic_filtering=True,id=5b25a3f7-0f17-4813-8f18-d5d20a92f4aa,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b25a3f7-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:45:56 compute-0 nova_compute[254092]: 2025-11-25 16:45:56.274 254096 DEBUG os_vif [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:43:9a,bridge_name='br-int',has_traffic_filtering=True,id=5b25a3f7-0f17-4813-8f18-d5d20a92f4aa,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b25a3f7-0f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:45:56 compute-0 nova_compute[254092]: 2025-11-25 16:45:56.275 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:56 compute-0 nova_compute[254092]: 2025-11-25 16:45:56.276 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5b25a3f7-0f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:45:56 compute-0 nova_compute[254092]: 2025-11-25 16:45:56.277 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:56 compute-0 nova_compute[254092]: 2025-11-25 16:45:56.278 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:56 compute-0 nova_compute[254092]: 2025-11-25 16:45:56.280 254096 INFO os_vif [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:43:9a,bridge_name='br-int',has_traffic_filtering=True,id=5b25a3f7-0f17-4813-8f18-d5d20a92f4aa,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b25a3f7-0f')
Nov 25 16:45:56 compute-0 nova_compute[254092]: 2025-11-25 16:45:56.522 254096 DEBUG nova.compute.manager [req-12aab874-0ae5-4dae-87ec-b792b13cbab0 req-d5a5459d-71a1-4118-9594-7ce02290b49e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Received event network-vif-unplugged-5b25a3f7-0f17-4813-8f18-d5d20a92f4aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:45:56 compute-0 nova_compute[254092]: 2025-11-25 16:45:56.523 254096 DEBUG oslo_concurrency.lockutils [req-12aab874-0ae5-4dae-87ec-b792b13cbab0 req-d5a5459d-71a1-4118-9594-7ce02290b49e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b487a735-c096-4a8f-b8ba-fc5b6c055f56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:45:56 compute-0 nova_compute[254092]: 2025-11-25 16:45:56.523 254096 DEBUG oslo_concurrency.lockutils [req-12aab874-0ae5-4dae-87ec-b792b13cbab0 req-d5a5459d-71a1-4118-9594-7ce02290b49e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b487a735-c096-4a8f-b8ba-fc5b6c055f56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:45:56 compute-0 nova_compute[254092]: 2025-11-25 16:45:56.523 254096 DEBUG oslo_concurrency.lockutils [req-12aab874-0ae5-4dae-87ec-b792b13cbab0 req-d5a5459d-71a1-4118-9594-7ce02290b49e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b487a735-c096-4a8f-b8ba-fc5b6c055f56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:45:56 compute-0 nova_compute[254092]: 2025-11-25 16:45:56.523 254096 DEBUG nova.compute.manager [req-12aab874-0ae5-4dae-87ec-b792b13cbab0 req-d5a5459d-71a1-4118-9594-7ce02290b49e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] No waiting events found dispatching network-vif-unplugged-5b25a3f7-0f17-4813-8f18-d5d20a92f4aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:45:56 compute-0 nova_compute[254092]: 2025-11-25 16:45:56.523 254096 WARNING nova.compute.manager [req-12aab874-0ae5-4dae-87ec-b792b13cbab0 req-d5a5459d-71a1-4118-9594-7ce02290b49e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Received unexpected event network-vif-unplugged-5b25a3f7-0f17-4813-8f18-d5d20a92f4aa for instance with vm_state active and task_state rebuilding.
Nov 25 16:45:56 compute-0 nova_compute[254092]: 2025-11-25 16:45:56.627 254096 INFO nova.virt.libvirt.driver [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Deleting instance files /var/lib/nova/instances/b487a735-c096-4a8f-b8ba-fc5b6c055f56_del
Nov 25 16:45:56 compute-0 nova_compute[254092]: 2025-11-25 16:45:56.629 254096 INFO nova.virt.libvirt.driver [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Deletion of /var/lib/nova/instances/b487a735-c096-4a8f-b8ba-fc5b6c055f56_del complete
Nov 25 16:45:56 compute-0 nova_compute[254092]: 2025-11-25 16:45:56.780 254096 DEBUG nova.virt.libvirt.driver [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:45:56 compute-0 nova_compute[254092]: 2025-11-25 16:45:56.780 254096 INFO nova.virt.libvirt.driver [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Creating image(s)
Nov 25 16:45:56 compute-0 nova_compute[254092]: 2025-11-25 16:45:56.800 254096 DEBUG nova.storage.rbd_utils [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] rbd image b487a735-c096-4a8f-b8ba-fc5b6c055f56_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:45:56 compute-0 nova_compute[254092]: 2025-11-25 16:45:56.820 254096 DEBUG nova.storage.rbd_utils [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] rbd image b487a735-c096-4a8f-b8ba-fc5b6c055f56_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:45:56 compute-0 nova_compute[254092]: 2025-11-25 16:45:56.842 254096 DEBUG nova.storage.rbd_utils [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] rbd image b487a735-c096-4a8f-b8ba-fc5b6c055f56_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:45:56 compute-0 nova_compute[254092]: 2025-11-25 16:45:56.845 254096 DEBUG oslo_concurrency.processutils [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:45:56 compute-0 ceph-mon[74985]: pgmap v1816: 321 pgs: 321 active+clean; 342 MiB data, 760 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.4 MiB/s wr, 279 op/s
Nov 25 16:45:56 compute-0 nova_compute[254092]: 2025-11-25 16:45:56.895 254096 DEBUG nova.virt.libvirt.driver [None req-994f0872-5d20-49df-a56f-de04e95b55e7 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 16:45:56 compute-0 nova_compute[254092]: 2025-11-25 16:45:56.917 254096 DEBUG oslo_concurrency.processutils [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:45:56 compute-0 nova_compute[254092]: 2025-11-25 16:45:56.918 254096 DEBUG oslo_concurrency.lockutils [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquiring lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:45:56 compute-0 nova_compute[254092]: 2025-11-25 16:45:56.919 254096 DEBUG oslo_concurrency.lockutils [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:45:56 compute-0 nova_compute[254092]: 2025-11-25 16:45:56.919 254096 DEBUG oslo_concurrency.lockutils [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:45:56 compute-0 nova_compute[254092]: 2025-11-25 16:45:56.943 254096 DEBUG nova.storage.rbd_utils [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] rbd image b487a735-c096-4a8f-b8ba-fc5b6c055f56_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:45:56 compute-0 nova_compute[254092]: 2025-11-25 16:45:56.948 254096 DEBUG oslo_concurrency.processutils [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 b487a735-c096-4a8f-b8ba-fc5b6c055f56_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:45:57 compute-0 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Nov 25 16:45:57 compute-0 nova_compute[254092]: 2025-11-25 16:45:57.231 254096 DEBUG oslo_concurrency.processutils [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 b487a735-c096-4a8f-b8ba-fc5b6c055f56_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.282s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:45:57 compute-0 nova_compute[254092]: 2025-11-25 16:45:57.284 254096 DEBUG nova.storage.rbd_utils [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] resizing rbd image b487a735-c096-4a8f-b8ba-fc5b6c055f56_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:45:57 compute-0 nova_compute[254092]: 2025-11-25 16:45:57.387 254096 DEBUG nova.virt.libvirt.driver [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:45:57 compute-0 nova_compute[254092]: 2025-11-25 16:45:57.388 254096 DEBUG nova.virt.libvirt.driver [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Ensure instance console log exists: /var/lib/nova/instances/b487a735-c096-4a8f-b8ba-fc5b6c055f56/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:45:57 compute-0 nova_compute[254092]: 2025-11-25 16:45:57.389 254096 DEBUG oslo_concurrency.lockutils [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:45:57 compute-0 nova_compute[254092]: 2025-11-25 16:45:57.389 254096 DEBUG oslo_concurrency.lockutils [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:45:57 compute-0 nova_compute[254092]: 2025-11-25 16:45:57.390 254096 DEBUG oslo_concurrency.lockutils [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:45:57 compute-0 nova_compute[254092]: 2025-11-25 16:45:57.392 254096 DEBUG nova.virt.libvirt.driver [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Start _get_guest_xml network_info=[{"id": "5b25a3f7-0f17-4813-8f18-d5d20a92f4aa", "address": "fa:16:3e:23:43:9a", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b25a3f7-0f", "ovs_interfaceid": "5b25a3f7-0f17-4813-8f18-d5d20a92f4aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:36Z,direct_url=<?>,disk_format='qcow2',id=a4aa3708-bb73-4b5a-b3f3-42153358021e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:45:57 compute-0 nova_compute[254092]: 2025-11-25 16:45:57.397 254096 WARNING nova.virt.libvirt.driver [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Nov 25 16:45:57 compute-0 nova_compute[254092]: 2025-11-25 16:45:57.403 254096 DEBUG nova.virt.libvirt.host [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:45:57 compute-0 nova_compute[254092]: 2025-11-25 16:45:57.404 254096 DEBUG nova.virt.libvirt.host [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:45:57 compute-0 nova_compute[254092]: 2025-11-25 16:45:57.407 254096 DEBUG nova.virt.libvirt.host [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:45:57 compute-0 nova_compute[254092]: 2025-11-25 16:45:57.408 254096 DEBUG nova.virt.libvirt.host [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:45:57 compute-0 nova_compute[254092]: 2025-11-25 16:45:57.408 254096 DEBUG nova.virt.libvirt.driver [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:45:57 compute-0 nova_compute[254092]: 2025-11-25 16:45:57.408 254096 DEBUG nova.virt.hardware [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:36Z,direct_url=<?>,disk_format='qcow2',id=a4aa3708-bb73-4b5a-b3f3-42153358021e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:45:57 compute-0 nova_compute[254092]: 2025-11-25 16:45:57.409 254096 DEBUG nova.virt.hardware [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:45:57 compute-0 nova_compute[254092]: 2025-11-25 16:45:57.409 254096 DEBUG nova.virt.hardware [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:45:57 compute-0 nova_compute[254092]: 2025-11-25 16:45:57.409 254096 DEBUG nova.virt.hardware [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:45:57 compute-0 nova_compute[254092]: 2025-11-25 16:45:57.410 254096 DEBUG nova.virt.hardware [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:45:57 compute-0 nova_compute[254092]: 2025-11-25 16:45:57.410 254096 DEBUG nova.virt.hardware [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:45:57 compute-0 nova_compute[254092]: 2025-11-25 16:45:57.410 254096 DEBUG nova.virt.hardware [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:45:57 compute-0 nova_compute[254092]: 2025-11-25 16:45:57.411 254096 DEBUG nova.virt.hardware [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:45:57 compute-0 nova_compute[254092]: 2025-11-25 16:45:57.411 254096 DEBUG nova.virt.hardware [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:45:57 compute-0 nova_compute[254092]: 2025-11-25 16:45:57.411 254096 DEBUG nova.virt.hardware [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:45:57 compute-0 nova_compute[254092]: 2025-11-25 16:45:57.411 254096 DEBUG nova.virt.hardware [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:45:57 compute-0 nova_compute[254092]: 2025-11-25 16:45:57.412 254096 DEBUG nova.objects.instance [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'vcpu_model' on Instance uuid b487a735-c096-4a8f-b8ba-fc5b6c055f56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:45:57 compute-0 nova_compute[254092]: 2025-11-25 16:45:57.430 254096 DEBUG oslo_concurrency.processutils [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:45:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1817: 321 pgs: 321 active+clean; 356 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.3 MiB/s wr, 213 op/s
Nov 25 16:45:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:45:57 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/752040518' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:45:57 compute-0 nova_compute[254092]: 2025-11-25 16:45:57.919 254096 DEBUG oslo_concurrency.processutils [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:45:57 compute-0 nova_compute[254092]: 2025-11-25 16:45:57.960 254096 DEBUG nova.storage.rbd_utils [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] rbd image b487a735-c096-4a8f-b8ba-fc5b6c055f56_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:45:57 compute-0 nova_compute[254092]: 2025-11-25 16:45:57.966 254096 DEBUG oslo_concurrency.processutils [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:45:58 compute-0 nova_compute[254092]: 2025-11-25 16:45:58.011 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:45:58 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3714996359' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:45:58 compute-0 nova_compute[254092]: 2025-11-25 16:45:58.436 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:58 compute-0 nova_compute[254092]: 2025-11-25 16:45:58.441 254096 DEBUG oslo_concurrency.processutils [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:45:58 compute-0 nova_compute[254092]: 2025-11-25 16:45:58.443 254096 DEBUG nova.virt.libvirt.vif [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:45:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1676188267',display_name='tempest-ServerActionsTestJSON-server-1627841880',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1676188267',id=81,image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:45:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b4d15f5aabd3491da5314b126a20225a',ramdisk_id='',reservation_id='r-puouwpb9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1811453217',owner_user_nam
e='tempest-ServerActionsTestJSON-1811453217-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:45:56Z,user_data=None,user_id='aea2d8cf3bb54cdbbc72e41805fb1f90',uuid=b487a735-c096-4a8f-b8ba-fc5b6c055f56,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5b25a3f7-0f17-4813-8f18-d5d20a92f4aa", "address": "fa:16:3e:23:43:9a", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b25a3f7-0f", "ovs_interfaceid": "5b25a3f7-0f17-4813-8f18-d5d20a92f4aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:45:58 compute-0 nova_compute[254092]: 2025-11-25 16:45:58.443 254096 DEBUG nova.network.os_vif_util [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converting VIF {"id": "5b25a3f7-0f17-4813-8f18-d5d20a92f4aa", "address": "fa:16:3e:23:43:9a", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b25a3f7-0f", "ovs_interfaceid": "5b25a3f7-0f17-4813-8f18-d5d20a92f4aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:45:58 compute-0 nova_compute[254092]: 2025-11-25 16:45:58.444 254096 DEBUG nova.network.os_vif_util [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:43:9a,bridge_name='br-int',has_traffic_filtering=True,id=5b25a3f7-0f17-4813-8f18-d5d20a92f4aa,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b25a3f7-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:45:58 compute-0 nova_compute[254092]: 2025-11-25 16:45:58.447 254096 DEBUG nova.virt.libvirt.driver [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:45:58 compute-0 nova_compute[254092]:   <uuid>b487a735-c096-4a8f-b8ba-fc5b6c055f56</uuid>
Nov 25 16:45:58 compute-0 nova_compute[254092]:   <name>instance-00000051</name>
Nov 25 16:45:58 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:45:58 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:45:58 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:45:58 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:       <nova:name>tempest-ServerActionsTestJSON-server-1627841880</nova:name>
Nov 25 16:45:58 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:45:57</nova:creationTime>
Nov 25 16:45:58 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:45:58 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:45:58 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:45:58 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:45:58 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:45:58 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:45:58 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:45:58 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:45:58 compute-0 nova_compute[254092]:         <nova:user uuid="aea2d8cf3bb54cdbbc72e41805fb1f90">tempest-ServerActionsTestJSON-1811453217-project-member</nova:user>
Nov 25 16:45:58 compute-0 nova_compute[254092]:         <nova:project uuid="b4d15f5aabd3491da5314b126a20225a">tempest-ServerActionsTestJSON-1811453217</nova:project>
Nov 25 16:45:58 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:45:58 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="a4aa3708-bb73-4b5a-b3f3-42153358021e"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:45:58 compute-0 nova_compute[254092]:         <nova:port uuid="5b25a3f7-0f17-4813-8f18-d5d20a92f4aa">
Nov 25 16:45:58 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:45:58 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:45:58 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:45:58 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:45:58 compute-0 nova_compute[254092]:     <system>
Nov 25 16:45:58 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:45:58 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:45:58 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:45:58 compute-0 nova_compute[254092]:       <entry name="serial">b487a735-c096-4a8f-b8ba-fc5b6c055f56</entry>
Nov 25 16:45:58 compute-0 nova_compute[254092]:       <entry name="uuid">b487a735-c096-4a8f-b8ba-fc5b6c055f56</entry>
Nov 25 16:45:58 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     </system>
Nov 25 16:45:58 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:45:58 compute-0 nova_compute[254092]:   <os>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:   </os>
Nov 25 16:45:58 compute-0 nova_compute[254092]:   <features>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:   </features>
Nov 25 16:45:58 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:45:58 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:45:58 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:45:58 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:45:58 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:45:58 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/b487a735-c096-4a8f-b8ba-fc5b6c055f56_disk">
Nov 25 16:45:58 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:       </source>
Nov 25 16:45:58 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:45:58 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:45:58 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:45:58 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/b487a735-c096-4a8f-b8ba-fc5b6c055f56_disk.config">
Nov 25 16:45:58 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:       </source>
Nov 25 16:45:58 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:45:58 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:45:58 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:45:58 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:23:43:9a"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:       <target dev="tap5b25a3f7-0f"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:45:58 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/b487a735-c096-4a8f-b8ba-fc5b6c055f56/console.log" append="off"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     <video>
Nov 25 16:45:58 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     </video>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:45:58 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:45:58 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:45:58 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:45:58 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:45:58 compute-0 nova_compute[254092]: </domain>
Nov 25 16:45:58 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:45:58 compute-0 nova_compute[254092]: 2025-11-25 16:45:58.448 254096 DEBUG nova.compute.manager [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Preparing to wait for external event network-vif-plugged-5b25a3f7-0f17-4813-8f18-d5d20a92f4aa prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:45:58 compute-0 nova_compute[254092]: 2025-11-25 16:45:58.449 254096 DEBUG oslo_concurrency.lockutils [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquiring lock "b487a735-c096-4a8f-b8ba-fc5b6c055f56-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:45:58 compute-0 nova_compute[254092]: 2025-11-25 16:45:58.449 254096 DEBUG oslo_concurrency.lockutils [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "b487a735-c096-4a8f-b8ba-fc5b6c055f56-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:45:58 compute-0 nova_compute[254092]: 2025-11-25 16:45:58.449 254096 DEBUG oslo_concurrency.lockutils [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "b487a735-c096-4a8f-b8ba-fc5b6c055f56-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:45:58 compute-0 nova_compute[254092]: 2025-11-25 16:45:58.450 254096 DEBUG nova.virt.libvirt.vif [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:45:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1676188267',display_name='tempest-ServerActionsTestJSON-server-1627841880',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1676188267',id=81,image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:45:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b4d15f5aabd3491da5314b126a20225a',ramdisk_id='',reservation_id='r-puouwpb9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1811453217',owner_user_name='tempest-ServerActionsTestJSON-1811453217-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:45:56Z,user_data=None,user_id='aea2d8cf3bb54cdbbc72e41805fb1f90',uuid=b487a735-c096-4a8f-b8ba-fc5b6c055f56,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5b25a3f7-0f17-4813-8f18-d5d20a92f4aa", "address": "fa:16:3e:23:43:9a", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b25a3f7-0f", "ovs_interfaceid": "5b25a3f7-0f17-4813-8f18-d5d20a92f4aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:45:58 compute-0 nova_compute[254092]: 2025-11-25 16:45:58.450 254096 DEBUG nova.network.os_vif_util [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converting VIF {"id": "5b25a3f7-0f17-4813-8f18-d5d20a92f4aa", "address": "fa:16:3e:23:43:9a", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b25a3f7-0f", "ovs_interfaceid": "5b25a3f7-0f17-4813-8f18-d5d20a92f4aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:45:58 compute-0 nova_compute[254092]: 2025-11-25 16:45:58.451 254096 DEBUG nova.network.os_vif_util [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:43:9a,bridge_name='br-int',has_traffic_filtering=True,id=5b25a3f7-0f17-4813-8f18-d5d20a92f4aa,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b25a3f7-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:45:58 compute-0 nova_compute[254092]: 2025-11-25 16:45:58.451 254096 DEBUG os_vif [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:43:9a,bridge_name='br-int',has_traffic_filtering=True,id=5b25a3f7-0f17-4813-8f18-d5d20a92f4aa,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b25a3f7-0f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:45:58 compute-0 nova_compute[254092]: 2025-11-25 16:45:58.451 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:58 compute-0 nova_compute[254092]: 2025-11-25 16:45:58.452 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:45:58 compute-0 nova_compute[254092]: 2025-11-25 16:45:58.452 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:45:58 compute-0 nova_compute[254092]: 2025-11-25 16:45:58.454 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:58 compute-0 nova_compute[254092]: 2025-11-25 16:45:58.455 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5b25a3f7-0f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:45:58 compute-0 nova_compute[254092]: 2025-11-25 16:45:58.455 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5b25a3f7-0f, col_values=(('external_ids', {'iface-id': '5b25a3f7-0f17-4813-8f18-d5d20a92f4aa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:23:43:9a', 'vm-uuid': 'b487a735-c096-4a8f-b8ba-fc5b6c055f56'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:45:58 compute-0 nova_compute[254092]: 2025-11-25 16:45:58.456 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:58 compute-0 NetworkManager[48891]: <info>  [1764089158.4578] manager: (tap5b25a3f7-0f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/337)
Nov 25 16:45:58 compute-0 nova_compute[254092]: 2025-11-25 16:45:58.458 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:45:58 compute-0 nova_compute[254092]: 2025-11-25 16:45:58.462 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:58 compute-0 nova_compute[254092]: 2025-11-25 16:45:58.462 254096 INFO os_vif [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:43:9a,bridge_name='br-int',has_traffic_filtering=True,id=5b25a3f7-0f17-4813-8f18-d5d20a92f4aa,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b25a3f7-0f')
Nov 25 16:45:58 compute-0 nova_compute[254092]: 2025-11-25 16:45:58.515 254096 DEBUG nova.virt.libvirt.driver [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:45:58 compute-0 nova_compute[254092]: 2025-11-25 16:45:58.516 254096 DEBUG nova.virt.libvirt.driver [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:45:58 compute-0 nova_compute[254092]: 2025-11-25 16:45:58.516 254096 DEBUG nova.virt.libvirt.driver [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] No VIF found with MAC fa:16:3e:23:43:9a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:45:58 compute-0 nova_compute[254092]: 2025-11-25 16:45:58.516 254096 INFO nova.virt.libvirt.driver [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Using config drive
Nov 25 16:45:58 compute-0 nova_compute[254092]: 2025-11-25 16:45:58.541 254096 DEBUG nova.storage.rbd_utils [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] rbd image b487a735-c096-4a8f-b8ba-fc5b6c055f56_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:45:58 compute-0 nova_compute[254092]: 2025-11-25 16:45:58.560 254096 DEBUG nova.objects.instance [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'ec2_ids' on Instance uuid b487a735-c096-4a8f-b8ba-fc5b6c055f56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:45:58 compute-0 nova_compute[254092]: 2025-11-25 16:45:58.587 254096 DEBUG nova.objects.instance [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'keypairs' on Instance uuid b487a735-c096-4a8f-b8ba-fc5b6c055f56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:45:58 compute-0 nova_compute[254092]: 2025-11-25 16:45:58.634 254096 DEBUG nova.compute.manager [req-9ad5edaa-d17c-4970-8478-71208605ade3 req-6fe664d0-a8aa-422d-ac7e-693e55719113 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Received event network-vif-plugged-5b25a3f7-0f17-4813-8f18-d5d20a92f4aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:45:58 compute-0 nova_compute[254092]: 2025-11-25 16:45:58.634 254096 DEBUG oslo_concurrency.lockutils [req-9ad5edaa-d17c-4970-8478-71208605ade3 req-6fe664d0-a8aa-422d-ac7e-693e55719113 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b487a735-c096-4a8f-b8ba-fc5b6c055f56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:45:58 compute-0 nova_compute[254092]: 2025-11-25 16:45:58.635 254096 DEBUG oslo_concurrency.lockutils [req-9ad5edaa-d17c-4970-8478-71208605ade3 req-6fe664d0-a8aa-422d-ac7e-693e55719113 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b487a735-c096-4a8f-b8ba-fc5b6c055f56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:45:58 compute-0 nova_compute[254092]: 2025-11-25 16:45:58.635 254096 DEBUG oslo_concurrency.lockutils [req-9ad5edaa-d17c-4970-8478-71208605ade3 req-6fe664d0-a8aa-422d-ac7e-693e55719113 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b487a735-c096-4a8f-b8ba-fc5b6c055f56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:45:58 compute-0 nova_compute[254092]: 2025-11-25 16:45:58.635 254096 DEBUG nova.compute.manager [req-9ad5edaa-d17c-4970-8478-71208605ade3 req-6fe664d0-a8aa-422d-ac7e-693e55719113 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Processing event network-vif-plugged-5b25a3f7-0f17-4813-8f18-d5d20a92f4aa _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:45:58 compute-0 ceph-mon[74985]: pgmap v1817: 321 pgs: 321 active+clean; 356 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.3 MiB/s wr, 213 op/s
Nov 25 16:45:58 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/752040518' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:45:58 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3714996359' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:45:59 compute-0 nova_compute[254092]: 2025-11-25 16:45:59.011 254096 INFO nova.virt.libvirt.driver [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Creating config drive at /var/lib/nova/instances/b487a735-c096-4a8f-b8ba-fc5b6c055f56/disk.config
Nov 25 16:45:59 compute-0 nova_compute[254092]: 2025-11-25 16:45:59.016 254096 DEBUG oslo_concurrency.processutils [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b487a735-c096-4a8f-b8ba-fc5b6c055f56/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpupu3bnyy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:45:59 compute-0 nova_compute[254092]: 2025-11-25 16:45:59.156 254096 DEBUG oslo_concurrency.processutils [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b487a735-c096-4a8f-b8ba-fc5b6c055f56/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpupu3bnyy" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:45:59 compute-0 nova_compute[254092]: 2025-11-25 16:45:59.182 254096 DEBUG nova.storage.rbd_utils [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] rbd image b487a735-c096-4a8f-b8ba-fc5b6c055f56_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:45:59 compute-0 nova_compute[254092]: 2025-11-25 16:45:59.186 254096 DEBUG oslo_concurrency.processutils [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b487a735-c096-4a8f-b8ba-fc5b6c055f56/disk.config b487a735-c096-4a8f-b8ba-fc5b6c055f56_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:45:59 compute-0 kernel: tap0ebbbca7-87 (unregistering): left promiscuous mode
Nov 25 16:45:59 compute-0 NetworkManager[48891]: <info>  [1764089159.2553] device (tap0ebbbca7-87): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:45:59 compute-0 ovn_controller[153477]: 2025-11-25T16:45:59Z|00775|binding|INFO|Releasing lport 0ebbbca7-8751-4479-a892-216433a26e74 from this chassis (sb_readonly=0)
Nov 25 16:45:59 compute-0 ovn_controller[153477]: 2025-11-25T16:45:59Z|00776|binding|INFO|Setting lport 0ebbbca7-8751-4479-a892-216433a26e74 down in Southbound
Nov 25 16:45:59 compute-0 nova_compute[254092]: 2025-11-25 16:45:59.261 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:59 compute-0 ovn_controller[153477]: 2025-11-25T16:45:59Z|00777|binding|INFO|Removing iface tap0ebbbca7-87 ovn-installed in OVS
Nov 25 16:45:59 compute-0 nova_compute[254092]: 2025-11-25 16:45:59.265 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:59.273 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:27:70 10.100.0.9'], port_security=['fa:16:3e:09:27:70 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '6b676874-6857-4021-9d83-c3673f57cebb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-290484fa-908f-44de-87e4-4f5bc85c5679', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4964e211a6d4699ab499f7cadee8a8d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '168dc57b-72e8-4bf9-9fa6-0f910875b8fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=041a286f-fdb2-474d-8a57-f5c3168624f1, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=0ebbbca7-8751-4479-a892-216433a26e74) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:45:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:59.275 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 0ebbbca7-8751-4479-a892-216433a26e74 in datapath 290484fa-908f-44de-87e4-4f5bc85c5679 unbound from our chassis
Nov 25 16:45:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:59.276 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 290484fa-908f-44de-87e4-4f5bc85c5679
Nov 25 16:45:59 compute-0 nova_compute[254092]: 2025-11-25 16:45:59.291 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:59.293 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1c51d8dc-fa7a-41c4-87f8-01bcf9405719]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:45:59 compute-0 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d00000052.scope: Deactivated successfully.
Nov 25 16:45:59 compute-0 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d00000052.scope: Consumed 13.804s CPU time.
Nov 25 16:45:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:59.320 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e4971c24-d51e-4fd1-9767-54205f1956f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:45:59 compute-0 systemd-machined[216343]: Machine qemu-98-instance-00000052 terminated.
Nov 25 16:45:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:59.323 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6b90ff44-52ae-4d4c-83e4-c09b521934aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:45:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:59.353 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[69693baf-6ae4-4b6b-bfde-aa89535e87e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:45:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:59.368 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c3ef2fec-0927-42a3-8660-bb7b1fdd8496]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap290484fa-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:a3:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 221], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554060, 'reachable_time': 26301, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339734, 'error': None, 'target': 'ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:45:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:59.384 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[66952129-b526-4120-aed0-e3e492fd0030]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap290484fa-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 554073, 'tstamp': 554073}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339735, 'error': None, 'target': 'ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap290484fa-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 554076, 'tstamp': 554076}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339735, 'error': None, 'target': 'ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:45:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:59.386 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap290484fa-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:45:59 compute-0 nova_compute[254092]: 2025-11-25 16:45:59.387 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:59 compute-0 nova_compute[254092]: 2025-11-25 16:45:59.394 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:59.395 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap290484fa-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:45:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:59.395 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:45:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:59.396 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap290484fa-90, col_values=(('external_ids', {'iface-id': 'db192ec3-55c1-4137-aaad-99a175bfa879'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:45:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:59.396 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:45:59 compute-0 nova_compute[254092]: 2025-11-25 16:45:59.560 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1818: 321 pgs: 321 active+clean; 356 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 628 KiB/s rd, 4.3 MiB/s wr, 168 op/s
Nov 25 16:45:59 compute-0 nova_compute[254092]: 2025-11-25 16:45:59.911 254096 INFO nova.virt.libvirt.driver [None req-994f0872-5d20-49df-a56f-de04e95b55e7 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Instance shutdown successfully after 13 seconds.
Nov 25 16:45:59 compute-0 nova_compute[254092]: 2025-11-25 16:45:59.916 254096 INFO nova.virt.libvirt.driver [-] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Instance destroyed successfully.
Nov 25 16:45:59 compute-0 nova_compute[254092]: 2025-11-25 16:45:59.917 254096 DEBUG nova.objects.instance [None req-994f0872-5d20-49df-a56f-de04e95b55e7 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lazy-loading 'numa_topology' on Instance uuid 6b676874-6857-4021-9d83-c3673f57cebb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:45:59 compute-0 nova_compute[254092]: 2025-11-25 16:45:59.928 254096 DEBUG nova.compute.manager [None req-994f0872-5d20-49df-a56f-de04e95b55e7 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:45:59 compute-0 nova_compute[254092]: 2025-11-25 16:45:59.930 254096 DEBUG oslo_concurrency.processutils [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b487a735-c096-4a8f-b8ba-fc5b6c055f56/disk.config b487a735-c096-4a8f-b8ba-fc5b6c055f56_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.745s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:45:59 compute-0 nova_compute[254092]: 2025-11-25 16:45:59.931 254096 INFO nova.virt.libvirt.driver [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Deleting local config drive /var/lib/nova/instances/b487a735-c096-4a8f-b8ba-fc5b6c055f56/disk.config because it was imported into RBD.
Nov 25 16:45:59 compute-0 nova_compute[254092]: 2025-11-25 16:45:59.976 254096 DEBUG oslo_concurrency.lockutils [None req-994f0872-5d20-49df-a56f-de04e95b55e7 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "6b676874-6857-4021-9d83-c3673f57cebb" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.152s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:45:59 compute-0 systemd-udevd[339725]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:45:59 compute-0 NetworkManager[48891]: <info>  [1764089159.9815] manager: (tap5b25a3f7-0f): new Tun device (/org/freedesktop/NetworkManager/Devices/338)
Nov 25 16:45:59 compute-0 kernel: tap5b25a3f7-0f: entered promiscuous mode
Nov 25 16:45:59 compute-0 nova_compute[254092]: 2025-11-25 16:45:59.985 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:45:59 compute-0 ovn_controller[153477]: 2025-11-25T16:45:59Z|00778|binding|INFO|Claiming lport 5b25a3f7-0f17-4813-8f18-d5d20a92f4aa for this chassis.
Nov 25 16:45:59 compute-0 ovn_controller[153477]: 2025-11-25T16:45:59Z|00779|binding|INFO|5b25a3f7-0f17-4813-8f18-d5d20a92f4aa: Claiming fa:16:3e:23:43:9a 10.100.0.7
Nov 25 16:45:59 compute-0 NetworkManager[48891]: <info>  [1764089159.9950] device (tap5b25a3f7-0f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:45:59 compute-0 NetworkManager[48891]: <info>  [1764089159.9963] device (tap5b25a3f7-0f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:45:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:59.994 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:43:9a 10.100.0.7'], port_security=['fa:16:3e:23:43:9a 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'b487a735-c096-4a8f-b8ba-fc5b6c055f56', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4d15f5aabd3491da5314b126a20225a', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'd3b6fd59-89c5-40df-ad88-a6d7ff4d1d92', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa1f8c26-9163-4397-beb3-8395c780a12b, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=5b25a3f7-0f17-4813-8f18-d5d20a92f4aa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:45:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:59.995 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 5b25a3f7-0f17-4813-8f18-d5d20a92f4aa in datapath 50ea1716-9d0b-409f-b648-8d2ad5a30525 bound to our chassis
Nov 25 16:45:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:45:59.997 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 50ea1716-9d0b-409f-b648-8d2ad5a30525
Nov 25 16:46:00 compute-0 ovn_controller[153477]: 2025-11-25T16:46:00Z|00780|binding|INFO|Setting lport 5b25a3f7-0f17-4813-8f18-d5d20a92f4aa ovn-installed in OVS
Nov 25 16:46:00 compute-0 ovn_controller[153477]: 2025-11-25T16:46:00Z|00781|binding|INFO|Setting lport 5b25a3f7-0f17-4813-8f18-d5d20a92f4aa up in Southbound
Nov 25 16:46:00 compute-0 nova_compute[254092]: 2025-11-25 16:46:00.007 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:00 compute-0 nova_compute[254092]: 2025-11-25 16:46:00.012 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:00.014 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[91170841-1fe3-4d68-9939-06f2b71d8a90]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:00 compute-0 systemd-machined[216343]: New machine qemu-100-instance-00000051.
Nov 25 16:46:00 compute-0 systemd[1]: Started Virtual Machine qemu-100-instance-00000051.
Nov 25 16:46:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:00.045 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[eb55ecbf-31c7-41a8-9edc-4532ce7ae4f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:00.048 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[76bcc4da-4694-40d6-a5f4-d840e8dd6450]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:00.079 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8a9d20ea-9c7f-40e6-9646-bdd0b9ea977c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:00.097 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4cf6a9d3-f3a8-4ddc-ad43-6be9953be492]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap50ea1716-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:99:3a:54'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 225], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554594, 'reachable_time': 42897, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339782, 'error': None, 'target': 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:00.114 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3174e077-895d-473f-a507-cfb7c5308690]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap50ea1716-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 554606, 'tstamp': 554606}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339783, 'error': None, 'target': 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap50ea1716-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 554609, 'tstamp': 554609}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339783, 'error': None, 'target': 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:00.115 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50ea1716-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:00 compute-0 nova_compute[254092]: 2025-11-25 16:46:00.117 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:00 compute-0 nova_compute[254092]: 2025-11-25 16:46:00.121 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:00.121 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap50ea1716-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:00.122 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:46:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:00.122 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap50ea1716-90, col_values=(('external_ids', {'iface-id': '1ec3401d-acb9-4d50-bfc2-f0a1127f4745'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:00.122 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:46:00 compute-0 nova_compute[254092]: 2025-11-25 16:46:00.657 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for b487a735-c096-4a8f-b8ba-fc5b6c055f56 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 16:46:00 compute-0 nova_compute[254092]: 2025-11-25 16:46:00.657 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089160.6566808, b487a735-c096-4a8f-b8ba-fc5b6c055f56 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:46:00 compute-0 nova_compute[254092]: 2025-11-25 16:46:00.657 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] VM Started (Lifecycle Event)
Nov 25 16:46:00 compute-0 nova_compute[254092]: 2025-11-25 16:46:00.659 254096 DEBUG nova.compute.manager [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:46:00 compute-0 nova_compute[254092]: 2025-11-25 16:46:00.662 254096 DEBUG nova.virt.libvirt.driver [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:46:00 compute-0 nova_compute[254092]: 2025-11-25 16:46:00.665 254096 INFO nova.virt.libvirt.driver [-] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Instance spawned successfully.
Nov 25 16:46:00 compute-0 nova_compute[254092]: 2025-11-25 16:46:00.665 254096 DEBUG nova.virt.libvirt.driver [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:46:00 compute-0 nova_compute[254092]: 2025-11-25 16:46:00.674 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:46:00 compute-0 nova_compute[254092]: 2025-11-25 16:46:00.678 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:46:00 compute-0 nova_compute[254092]: 2025-11-25 16:46:00.682 254096 DEBUG nova.virt.libvirt.driver [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:46:00 compute-0 nova_compute[254092]: 2025-11-25 16:46:00.682 254096 DEBUG nova.virt.libvirt.driver [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:46:00 compute-0 nova_compute[254092]: 2025-11-25 16:46:00.682 254096 DEBUG nova.virt.libvirt.driver [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:46:00 compute-0 nova_compute[254092]: 2025-11-25 16:46:00.683 254096 DEBUG nova.virt.libvirt.driver [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:46:00 compute-0 nova_compute[254092]: 2025-11-25 16:46:00.683 254096 DEBUG nova.virt.libvirt.driver [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:46:00 compute-0 nova_compute[254092]: 2025-11-25 16:46:00.683 254096 DEBUG nova.virt.libvirt.driver [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:46:00 compute-0 nova_compute[254092]: 2025-11-25 16:46:00.700 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 16:46:00 compute-0 nova_compute[254092]: 2025-11-25 16:46:00.700 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089160.6569598, b487a735-c096-4a8f-b8ba-fc5b6c055f56 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:46:00 compute-0 nova_compute[254092]: 2025-11-25 16:46:00.701 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] VM Paused (Lifecycle Event)
Nov 25 16:46:00 compute-0 nova_compute[254092]: 2025-11-25 16:46:00.721 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:46:00 compute-0 nova_compute[254092]: 2025-11-25 16:46:00.723 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089160.6615856, b487a735-c096-4a8f-b8ba-fc5b6c055f56 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:46:00 compute-0 nova_compute[254092]: 2025-11-25 16:46:00.724 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] VM Resumed (Lifecycle Event)
Nov 25 16:46:00 compute-0 nova_compute[254092]: 2025-11-25 16:46:00.737 254096 DEBUG nova.compute.manager [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:46:00 compute-0 nova_compute[254092]: 2025-11-25 16:46:00.738 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:46:00 compute-0 nova_compute[254092]: 2025-11-25 16:46:00.743 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:46:00 compute-0 nova_compute[254092]: 2025-11-25 16:46:00.769 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 16:46:00 compute-0 nova_compute[254092]: 2025-11-25 16:46:00.786 254096 DEBUG oslo_concurrency.lockutils [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:00 compute-0 nova_compute[254092]: 2025-11-25 16:46:00.787 254096 DEBUG oslo_concurrency.lockutils [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:00 compute-0 nova_compute[254092]: 2025-11-25 16:46:00.787 254096 DEBUG nova.objects.instance [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 16:46:00 compute-0 nova_compute[254092]: 2025-11-25 16:46:00.835 254096 DEBUG oslo_concurrency.lockutils [None req-d5f1f403-34d4-4a58-8769-bc7614740256 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.048s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:00 compute-0 ceph-mon[74985]: pgmap v1818: 321 pgs: 321 active+clean; 356 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 628 KiB/s rd, 4.3 MiB/s wr, 168 op/s
Nov 25 16:46:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:46:01 compute-0 nova_compute[254092]: 2025-11-25 16:46:01.482 254096 DEBUG nova.compute.manager [req-88eceb59-57a4-4fed-a432-2ee6c3b557ce req-6c453969-7726-4803-b3d7-61c097a11bc3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Received event network-vif-unplugged-0ebbbca7-8751-4479-a892-216433a26e74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:46:01 compute-0 nova_compute[254092]: 2025-11-25 16:46:01.483 254096 DEBUG oslo_concurrency.lockutils [req-88eceb59-57a4-4fed-a432-2ee6c3b557ce req-6c453969-7726-4803-b3d7-61c097a11bc3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6b676874-6857-4021-9d83-c3673f57cebb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:01 compute-0 nova_compute[254092]: 2025-11-25 16:46:01.483 254096 DEBUG oslo_concurrency.lockutils [req-88eceb59-57a4-4fed-a432-2ee6c3b557ce req-6c453969-7726-4803-b3d7-61c097a11bc3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b676874-6857-4021-9d83-c3673f57cebb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:01 compute-0 nova_compute[254092]: 2025-11-25 16:46:01.483 254096 DEBUG oslo_concurrency.lockutils [req-88eceb59-57a4-4fed-a432-2ee6c3b557ce req-6c453969-7726-4803-b3d7-61c097a11bc3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b676874-6857-4021-9d83-c3673f57cebb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:01 compute-0 nova_compute[254092]: 2025-11-25 16:46:01.483 254096 DEBUG nova.compute.manager [req-88eceb59-57a4-4fed-a432-2ee6c3b557ce req-6c453969-7726-4803-b3d7-61c097a11bc3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] No waiting events found dispatching network-vif-unplugged-0ebbbca7-8751-4479-a892-216433a26e74 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:46:01 compute-0 nova_compute[254092]: 2025-11-25 16:46:01.484 254096 WARNING nova.compute.manager [req-88eceb59-57a4-4fed-a432-2ee6c3b557ce req-6c453969-7726-4803-b3d7-61c097a11bc3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Received unexpected event network-vif-unplugged-0ebbbca7-8751-4479-a892-216433a26e74 for instance with vm_state stopped and task_state None.
Nov 25 16:46:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1819: 321 pgs: 321 active+clean; 328 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 731 KiB/s rd, 6.1 MiB/s wr, 248 op/s
Nov 25 16:46:02 compute-0 nova_compute[254092]: 2025-11-25 16:46:02.031 254096 INFO nova.compute.manager [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Rebuilding instance
Nov 25 16:46:02 compute-0 nova_compute[254092]: 2025-11-25 16:46:02.233 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089147.2319922, 504931a2-d324-4142-8698-9090b5cf7a23 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:46:02 compute-0 nova_compute[254092]: 2025-11-25 16:46:02.234 254096 INFO nova.compute.manager [-] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] VM Stopped (Lifecycle Event)
Nov 25 16:46:02 compute-0 nova_compute[254092]: 2025-11-25 16:46:02.249 254096 DEBUG nova.compute.manager [None req-40559064-7ceb-4416-91bf-7c8cc0f6b0ac - - - - - -] [instance: 504931a2-d324-4142-8698-9090b5cf7a23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:46:02 compute-0 nova_compute[254092]: 2025-11-25 16:46:02.304 254096 DEBUG nova.objects.instance [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lazy-loading 'trusted_certs' on Instance uuid 6b676874-6857-4021-9d83-c3673f57cebb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:46:02 compute-0 nova_compute[254092]: 2025-11-25 16:46:02.322 254096 DEBUG nova.compute.manager [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:46:02 compute-0 nova_compute[254092]: 2025-11-25 16:46:02.368 254096 DEBUG nova.objects.instance [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lazy-loading 'pci_requests' on Instance uuid 6b676874-6857-4021-9d83-c3673f57cebb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:46:02 compute-0 nova_compute[254092]: 2025-11-25 16:46:02.378 254096 DEBUG nova.objects.instance [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lazy-loading 'pci_devices' on Instance uuid 6b676874-6857-4021-9d83-c3673f57cebb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:46:02 compute-0 nova_compute[254092]: 2025-11-25 16:46:02.389 254096 DEBUG nova.objects.instance [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lazy-loading 'resources' on Instance uuid 6b676874-6857-4021-9d83-c3673f57cebb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:46:02 compute-0 nova_compute[254092]: 2025-11-25 16:46:02.400 254096 DEBUG nova.objects.instance [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lazy-loading 'migration_context' on Instance uuid 6b676874-6857-4021-9d83-c3673f57cebb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:46:02 compute-0 nova_compute[254092]: 2025-11-25 16:46:02.411 254096 DEBUG nova.objects.instance [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 16:46:02 compute-0 nova_compute[254092]: 2025-11-25 16:46:02.416 254096 INFO nova.virt.libvirt.driver [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Instance already shutdown.
Nov 25 16:46:02 compute-0 nova_compute[254092]: 2025-11-25 16:46:02.425 254096 INFO nova.virt.libvirt.driver [-] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Instance destroyed successfully.
Nov 25 16:46:02 compute-0 nova_compute[254092]: 2025-11-25 16:46:02.433 254096 INFO nova.virt.libvirt.driver [-] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Instance destroyed successfully.
Nov 25 16:46:02 compute-0 nova_compute[254092]: 2025-11-25 16:46:02.435 254096 DEBUG nova.virt.libvirt.vif [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:45:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-666063642',display_name='tempest-tempest.common.compute-instance-666063642',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-666063642',id=82,image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:45:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='d4964e211a6d4699ab499f7cadee8a8d',ramdisk_id='',reservation_id='r-2qewqols',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-878981139',owner_user_name='tempest-ServerActionsTestOtherA-878981139-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:46:01Z,user_data=None,user_id='7bd4800c25cd462b9365649e599d0a0e',uuid=6b676874-6857-4021-9d83-c3673f57cebb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "0ebbbca7-8751-4479-a892-216433a26e74", "address": "fa:16:3e:09:27:70", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ebbbca7-87", "ovs_interfaceid": "0ebbbca7-8751-4479-a892-216433a26e74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:46:02 compute-0 nova_compute[254092]: 2025-11-25 16:46:02.436 254096 DEBUG nova.network.os_vif_util [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Converting VIF {"id": "0ebbbca7-8751-4479-a892-216433a26e74", "address": "fa:16:3e:09:27:70", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ebbbca7-87", "ovs_interfaceid": "0ebbbca7-8751-4479-a892-216433a26e74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:46:02 compute-0 nova_compute[254092]: 2025-11-25 16:46:02.436 254096 DEBUG nova.network.os_vif_util [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:27:70,bridge_name='br-int',has_traffic_filtering=True,id=0ebbbca7-8751-4479-a892-216433a26e74,network=Network(290484fa-908f-44de-87e4-4f5bc85c5679),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ebbbca7-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:46:02 compute-0 nova_compute[254092]: 2025-11-25 16:46:02.437 254096 DEBUG os_vif [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:27:70,bridge_name='br-int',has_traffic_filtering=True,id=0ebbbca7-8751-4479-a892-216433a26e74,network=Network(290484fa-908f-44de-87e4-4f5bc85c5679),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ebbbca7-87') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:46:02 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 16:46:02 compute-0 nova_compute[254092]: 2025-11-25 16:46:02.440 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:02 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 16:46:02 compute-0 nova_compute[254092]: 2025-11-25 16:46:02.440 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ebbbca7-87, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:02 compute-0 nova_compute[254092]: 2025-11-25 16:46:02.442 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:02 compute-0 nova_compute[254092]: 2025-11-25 16:46:02.443 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:02 compute-0 nova_compute[254092]: 2025-11-25 16:46:02.445 254096 INFO os_vif [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:27:70,bridge_name='br-int',has_traffic_filtering=True,id=0ebbbca7-8751-4479-a892-216433a26e74,network=Network(290484fa-908f-44de-87e4-4f5bc85c5679),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ebbbca7-87')
Nov 25 16:46:02 compute-0 nova_compute[254092]: 2025-11-25 16:46:02.693 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:02 compute-0 nova_compute[254092]: 2025-11-25 16:46:02.932 254096 INFO nova.virt.libvirt.driver [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Deleting instance files /var/lib/nova/instances/6b676874-6857-4021-9d83-c3673f57cebb_del
Nov 25 16:46:02 compute-0 nova_compute[254092]: 2025-11-25 16:46:02.933 254096 INFO nova.virt.libvirt.driver [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Deletion of /var/lib/nova/instances/6b676874-6857-4021-9d83-c3673f57cebb_del complete
Nov 25 16:46:02 compute-0 ceph-mon[74985]: pgmap v1819: 321 pgs: 321 active+clean; 328 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 731 KiB/s rd, 6.1 MiB/s wr, 248 op/s
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.084 254096 DEBUG nova.virt.libvirt.driver [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.085 254096 INFO nova.virt.libvirt.driver [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Creating image(s)
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.106 254096 DEBUG nova.storage.rbd_utils [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] rbd image 6b676874-6857-4021-9d83-c3673f57cebb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.133 254096 DEBUG nova.storage.rbd_utils [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] rbd image 6b676874-6857-4021-9d83-c3673f57cebb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.156 254096 DEBUG nova.storage.rbd_utils [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] rbd image 6b676874-6857-4021-9d83-c3673f57cebb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.161 254096 DEBUG oslo_concurrency.processutils [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.240 254096 DEBUG oslo_concurrency.processutils [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.241 254096 DEBUG oslo_concurrency.lockutils [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Acquiring lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.241 254096 DEBUG oslo_concurrency.lockutils [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.242 254096 DEBUG oslo_concurrency.lockutils [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.266 254096 DEBUG nova.storage.rbd_utils [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] rbd image 6b676874-6857-4021-9d83-c3673f57cebb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.270 254096 DEBUG oslo_concurrency.processutils [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 6b676874-6857-4021-9d83-c3673f57cebb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.304 254096 DEBUG oslo_concurrency.lockutils [None req-f67d43fc-3233-4d6a-8563-382d9bbf7002 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquiring lock "b487a735-c096-4a8f-b8ba-fc5b6c055f56" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.306 254096 DEBUG oslo_concurrency.lockutils [None req-f67d43fc-3233-4d6a-8563-382d9bbf7002 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "b487a735-c096-4a8f-b8ba-fc5b6c055f56" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.306 254096 DEBUG oslo_concurrency.lockutils [None req-f67d43fc-3233-4d6a-8563-382d9bbf7002 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquiring lock "b487a735-c096-4a8f-b8ba-fc5b6c055f56-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.306 254096 DEBUG oslo_concurrency.lockutils [None req-f67d43fc-3233-4d6a-8563-382d9bbf7002 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "b487a735-c096-4a8f-b8ba-fc5b6c055f56-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.307 254096 DEBUG oslo_concurrency.lockutils [None req-f67d43fc-3233-4d6a-8563-382d9bbf7002 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "b487a735-c096-4a8f-b8ba-fc5b6c055f56-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.308 254096 INFO nova.compute.manager [None req-f67d43fc-3233-4d6a-8563-382d9bbf7002 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Terminating instance
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.309 254096 DEBUG nova.compute.manager [None req-f67d43fc-3233-4d6a-8563-382d9bbf7002 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:46:03 compute-0 kernel: tap5b25a3f7-0f (unregistering): left promiscuous mode
Nov 25 16:46:03 compute-0 NetworkManager[48891]: <info>  [1764089163.3472] device (tap5b25a3f7-0f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:46:03 compute-0 ovn_controller[153477]: 2025-11-25T16:46:03Z|00782|binding|INFO|Releasing lport 5b25a3f7-0f17-4813-8f18-d5d20a92f4aa from this chassis (sb_readonly=0)
Nov 25 16:46:03 compute-0 ovn_controller[153477]: 2025-11-25T16:46:03Z|00783|binding|INFO|Setting lport 5b25a3f7-0f17-4813-8f18-d5d20a92f4aa down in Southbound
Nov 25 16:46:03 compute-0 ovn_controller[153477]: 2025-11-25T16:46:03Z|00784|binding|INFO|Removing iface tap5b25a3f7-0f ovn-installed in OVS
Nov 25 16:46:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:03.365 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:43:9a 10.100.0.7'], port_security=['fa:16:3e:23:43:9a 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'b487a735-c096-4a8f-b8ba-fc5b6c055f56', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4d15f5aabd3491da5314b126a20225a', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'd3b6fd59-89c5-40df-ad88-a6d7ff4d1d92', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa1f8c26-9163-4397-beb3-8395c780a12b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=5b25a3f7-0f17-4813-8f18-d5d20a92f4aa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.365 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:03.366 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 5b25a3f7-0f17-4813-8f18-d5d20a92f4aa in datapath 50ea1716-9d0b-409f-b648-8d2ad5a30525 unbound from our chassis
Nov 25 16:46:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:03.368 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 50ea1716-9d0b-409f-b648-8d2ad5a30525
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.378 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:03.388 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[323e3883-6729-45f1-a802-d3164cb3267f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:03 compute-0 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d00000051.scope: Deactivated successfully.
Nov 25 16:46:03 compute-0 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d00000051.scope: Consumed 3.375s CPU time.
Nov 25 16:46:03 compute-0 systemd-machined[216343]: Machine qemu-100-instance-00000051 terminated.
Nov 25 16:46:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:03.420 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[2b47854c-b9f6-4172-a172-ed4630a1f4a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:03.423 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e322ffe0-7882-464b-8469-574c71341951]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.438 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:03.455 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[215d411d-35f4-43fc-bf99-b77067edcf57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:03.474 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a83c2982-5d9d-4362-9795-e17296edea77]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap50ea1716-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:99:3a:54'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 225], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554594, 'reachable_time': 42897, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339956, 'error': None, 'target': 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:03.495 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[333605c7-0e3c-42df-9804-8b8bb8debcf3]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap50ea1716-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 554606, 'tstamp': 554606}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339957, 'error': None, 'target': 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap50ea1716-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 554609, 'tstamp': 554609}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339957, 'error': None, 'target': 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:03.496 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50ea1716-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.498 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.501 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:03.501 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap50ea1716-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:03.501 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:46:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:03.502 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap50ea1716-90, col_values=(('external_ids', {'iface-id': '1ec3401d-acb9-4d50-bfc2-f0a1127f4745'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:03.502 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.543 254096 INFO nova.virt.libvirt.driver [-] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Instance destroyed successfully.
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.543 254096 DEBUG nova.objects.instance [None req-f67d43fc-3233-4d6a-8563-382d9bbf7002 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'resources' on Instance uuid b487a735-c096-4a8f-b8ba-fc5b6c055f56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.554 254096 DEBUG nova.virt.libvirt.vif [None req-f67d43fc-3233-4d6a-8563-382d9bbf7002 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:45:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1676188267',display_name='tempest-ServerActionsTestJSON-server-1627841880',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1676188267',id=81,image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:46:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b4d15f5aabd3491da5314b126a20225a',ramdisk_id='',reservation_id='r-puouwpb9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1811453217',owner_user_name='tempest-ServerActionsTestJSON-1811453217-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:46:00Z,user_data=None,user_id='aea2d8cf3bb54cdbbc72e41805fb1f90',uuid=b487a735-c096-4a8f-b8ba-fc5b6c055f56,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5b25a3f7-0f17-4813-8f18-d5d20a92f4aa", "address": "fa:16:3e:23:43:9a", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b25a3f7-0f", "ovs_interfaceid": "5b25a3f7-0f17-4813-8f18-d5d20a92f4aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.555 254096 DEBUG nova.network.os_vif_util [None req-f67d43fc-3233-4d6a-8563-382d9bbf7002 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converting VIF {"id": "5b25a3f7-0f17-4813-8f18-d5d20a92f4aa", "address": "fa:16:3e:23:43:9a", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b25a3f7-0f", "ovs_interfaceid": "5b25a3f7-0f17-4813-8f18-d5d20a92f4aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.555 254096 DEBUG nova.network.os_vif_util [None req-f67d43fc-3233-4d6a-8563-382d9bbf7002 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:43:9a,bridge_name='br-int',has_traffic_filtering=True,id=5b25a3f7-0f17-4813-8f18-d5d20a92f4aa,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b25a3f7-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.556 254096 DEBUG os_vif [None req-f67d43fc-3233-4d6a-8563-382d9bbf7002 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:43:9a,bridge_name='br-int',has_traffic_filtering=True,id=5b25a3f7-0f17-4813-8f18-d5d20a92f4aa,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b25a3f7-0f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.558 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.558 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5b25a3f7-0f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.560 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.562 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.565 254096 INFO os_vif [None req-f67d43fc-3233-4d6a-8563-382d9bbf7002 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:43:9a,bridge_name='br-int',has_traffic_filtering=True,id=5b25a3f7-0f17-4813-8f18-d5d20a92f4aa,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b25a3f7-0f')
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.643 254096 DEBUG oslo_concurrency.processutils [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 6b676874-6857-4021-9d83-c3673f57cebb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.372s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:46:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1820: 321 pgs: 321 active+clean; 328 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 527 KiB/s rd, 4.9 MiB/s wr, 184 op/s
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.714 254096 DEBUG nova.compute.manager [req-9409d2a3-2d20-43fe-b44c-49a151a53615 req-97966d0d-aa83-4e24-b1b7-1ee6e542d5df a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Received event network-vif-plugged-0ebbbca7-8751-4479-a892-216433a26e74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.715 254096 DEBUG oslo_concurrency.lockutils [req-9409d2a3-2d20-43fe-b44c-49a151a53615 req-97966d0d-aa83-4e24-b1b7-1ee6e542d5df a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6b676874-6857-4021-9d83-c3673f57cebb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.715 254096 DEBUG oslo_concurrency.lockutils [req-9409d2a3-2d20-43fe-b44c-49a151a53615 req-97966d0d-aa83-4e24-b1b7-1ee6e542d5df a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b676874-6857-4021-9d83-c3673f57cebb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.716 254096 DEBUG oslo_concurrency.lockutils [req-9409d2a3-2d20-43fe-b44c-49a151a53615 req-97966d0d-aa83-4e24-b1b7-1ee6e542d5df a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b676874-6857-4021-9d83-c3673f57cebb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.716 254096 DEBUG nova.compute.manager [req-9409d2a3-2d20-43fe-b44c-49a151a53615 req-97966d0d-aa83-4e24-b1b7-1ee6e542d5df a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] No waiting events found dispatching network-vif-plugged-0ebbbca7-8751-4479-a892-216433a26e74 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.716 254096 WARNING nova.compute.manager [req-9409d2a3-2d20-43fe-b44c-49a151a53615 req-97966d0d-aa83-4e24-b1b7-1ee6e542d5df a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Received unexpected event network-vif-plugged-0ebbbca7-8751-4479-a892-216433a26e74 for instance with vm_state stopped and task_state rebuild_spawning.
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.716 254096 DEBUG nova.compute.manager [req-9409d2a3-2d20-43fe-b44c-49a151a53615 req-97966d0d-aa83-4e24-b1b7-1ee6e542d5df a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Received event network-vif-plugged-5b25a3f7-0f17-4813-8f18-d5d20a92f4aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.716 254096 DEBUG oslo_concurrency.lockutils [req-9409d2a3-2d20-43fe-b44c-49a151a53615 req-97966d0d-aa83-4e24-b1b7-1ee6e542d5df a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b487a735-c096-4a8f-b8ba-fc5b6c055f56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.716 254096 DEBUG oslo_concurrency.lockutils [req-9409d2a3-2d20-43fe-b44c-49a151a53615 req-97966d0d-aa83-4e24-b1b7-1ee6e542d5df a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b487a735-c096-4a8f-b8ba-fc5b6c055f56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.717 254096 DEBUG oslo_concurrency.lockutils [req-9409d2a3-2d20-43fe-b44c-49a151a53615 req-97966d0d-aa83-4e24-b1b7-1ee6e542d5df a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b487a735-c096-4a8f-b8ba-fc5b6c055f56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.717 254096 DEBUG nova.compute.manager [req-9409d2a3-2d20-43fe-b44c-49a151a53615 req-97966d0d-aa83-4e24-b1b7-1ee6e542d5df a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] No waiting events found dispatching network-vif-plugged-5b25a3f7-0f17-4813-8f18-d5d20a92f4aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.717 254096 WARNING nova.compute.manager [req-9409d2a3-2d20-43fe-b44c-49a151a53615 req-97966d0d-aa83-4e24-b1b7-1ee6e542d5df a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Received unexpected event network-vif-plugged-5b25a3f7-0f17-4813-8f18-d5d20a92f4aa for instance with vm_state active and task_state deleting.
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.717 254096 DEBUG nova.compute.manager [req-9409d2a3-2d20-43fe-b44c-49a151a53615 req-97966d0d-aa83-4e24-b1b7-1ee6e542d5df a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Received event network-vif-plugged-5b25a3f7-0f17-4813-8f18-d5d20a92f4aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.717 254096 DEBUG oslo_concurrency.lockutils [req-9409d2a3-2d20-43fe-b44c-49a151a53615 req-97966d0d-aa83-4e24-b1b7-1ee6e542d5df a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b487a735-c096-4a8f-b8ba-fc5b6c055f56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.718 254096 DEBUG oslo_concurrency.lockutils [req-9409d2a3-2d20-43fe-b44c-49a151a53615 req-97966d0d-aa83-4e24-b1b7-1ee6e542d5df a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b487a735-c096-4a8f-b8ba-fc5b6c055f56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.718 254096 DEBUG oslo_concurrency.lockutils [req-9409d2a3-2d20-43fe-b44c-49a151a53615 req-97966d0d-aa83-4e24-b1b7-1ee6e542d5df a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b487a735-c096-4a8f-b8ba-fc5b6c055f56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.718 254096 DEBUG nova.compute.manager [req-9409d2a3-2d20-43fe-b44c-49a151a53615 req-97966d0d-aa83-4e24-b1b7-1ee6e542d5df a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] No waiting events found dispatching network-vif-plugged-5b25a3f7-0f17-4813-8f18-d5d20a92f4aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.718 254096 WARNING nova.compute.manager [req-9409d2a3-2d20-43fe-b44c-49a151a53615 req-97966d0d-aa83-4e24-b1b7-1ee6e542d5df a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Received unexpected event network-vif-plugged-5b25a3f7-0f17-4813-8f18-d5d20a92f4aa for instance with vm_state active and task_state deleting.
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.725 254096 DEBUG nova.storage.rbd_utils [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] resizing rbd image 6b676874-6857-4021-9d83-c3673f57cebb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.827 254096 DEBUG nova.virt.libvirt.driver [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.828 254096 DEBUG nova.virt.libvirt.driver [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Ensure instance console log exists: /var/lib/nova/instances/6b676874-6857-4021-9d83-c3673f57cebb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.828 254096 DEBUG oslo_concurrency.lockutils [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.829 254096 DEBUG oslo_concurrency.lockutils [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.829 254096 DEBUG oslo_concurrency.lockutils [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.831 254096 DEBUG nova.virt.libvirt.driver [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Start _get_guest_xml network_info=[{"id": "0ebbbca7-8751-4479-a892-216433a26e74", "address": "fa:16:3e:09:27:70", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ebbbca7-87", "ovs_interfaceid": "0ebbbca7-8751-4479-a892-216433a26e74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:36Z,direct_url=<?>,disk_format='qcow2',id=a4aa3708-bb73-4b5a-b3f3-42153358021e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.836 254096 WARNING nova.virt.libvirt.driver [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.843 254096 DEBUG nova.virt.libvirt.host [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.844 254096 DEBUG nova.virt.libvirt.host [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.846 254096 DEBUG nova.virt.libvirt.host [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.846 254096 DEBUG nova.virt.libvirt.host [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.847 254096 DEBUG nova.virt.libvirt.driver [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.847 254096 DEBUG nova.virt.hardware [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:36Z,direct_url=<?>,disk_format='qcow2',id=a4aa3708-bb73-4b5a-b3f3-42153358021e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.847 254096 DEBUG nova.virt.hardware [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.847 254096 DEBUG nova.virt.hardware [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.848 254096 DEBUG nova.virt.hardware [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.848 254096 DEBUG nova.virt.hardware [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.848 254096 DEBUG nova.virt.hardware [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.848 254096 DEBUG nova.virt.hardware [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.848 254096 DEBUG nova.virt.hardware [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.848 254096 DEBUG nova.virt.hardware [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.848 254096 DEBUG nova.virt.hardware [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.849 254096 DEBUG nova.virt.hardware [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.849 254096 DEBUG nova.objects.instance [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lazy-loading 'vcpu_model' on Instance uuid 6b676874-6857-4021-9d83-c3673f57cebb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.863 254096 DEBUG oslo_concurrency.processutils [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.931 254096 INFO nova.virt.libvirt.driver [None req-f67d43fc-3233-4d6a-8563-382d9bbf7002 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Deleting instance files /var/lib/nova/instances/b487a735-c096-4a8f-b8ba-fc5b6c055f56_del
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.933 254096 INFO nova.virt.libvirt.driver [None req-f67d43fc-3233-4d6a-8563-382d9bbf7002 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Deletion of /var/lib/nova/instances/b487a735-c096-4a8f-b8ba-fc5b6c055f56_del complete
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.984 254096 INFO nova.compute.manager [None req-f67d43fc-3233-4d6a-8563-382d9bbf7002 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Took 0.67 seconds to destroy the instance on the hypervisor.
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.984 254096 DEBUG oslo.service.loopingcall [None req-f67d43fc-3233-4d6a-8563-382d9bbf7002 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.984 254096 DEBUG nova.compute.manager [-] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:46:03 compute-0 nova_compute[254092]: 2025-11-25 16:46:03.985 254096 DEBUG nova.network.neutron [-] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:46:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:46:04 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3898457245' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:46:04 compute-0 nova_compute[254092]: 2025-11-25 16:46:04.296 254096 DEBUG oslo_concurrency.processutils [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:46:04 compute-0 nova_compute[254092]: 2025-11-25 16:46:04.323 254096 DEBUG nova.storage.rbd_utils [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] rbd image 6b676874-6857-4021-9d83-c3673f57cebb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:46:04 compute-0 nova_compute[254092]: 2025-11-25 16:46:04.329 254096 DEBUG oslo_concurrency.processutils [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:46:04 compute-0 nova_compute[254092]: 2025-11-25 16:46:04.580 254096 DEBUG nova.network.neutron [-] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:46:04 compute-0 nova_compute[254092]: 2025-11-25 16:46:04.600 254096 INFO nova.compute.manager [-] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Took 0.62 seconds to deallocate network for instance.
Nov 25 16:46:04 compute-0 nova_compute[254092]: 2025-11-25 16:46:04.657 254096 DEBUG oslo_concurrency.lockutils [None req-f67d43fc-3233-4d6a-8563-382d9bbf7002 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:04 compute-0 nova_compute[254092]: 2025-11-25 16:46:04.657 254096 DEBUG oslo_concurrency.lockutils [None req-f67d43fc-3233-4d6a-8563-382d9bbf7002 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:04 compute-0 nova_compute[254092]: 2025-11-25 16:46:04.749 254096 DEBUG oslo_concurrency.processutils [None req-f67d43fc-3233-4d6a-8563-382d9bbf7002 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:46:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:46:04 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3740749239' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:46:04 compute-0 nova_compute[254092]: 2025-11-25 16:46:04.781 254096 DEBUG oslo_concurrency.processutils [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:46:04 compute-0 nova_compute[254092]: 2025-11-25 16:46:04.783 254096 DEBUG nova.virt.libvirt.vif [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:45:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-666063642',display_name='tempest-tempest.common.compute-instance-666063642',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-666063642',id=82,image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:45:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='d4964e211a6d4699ab499f7cadee8a8d',ramdisk_id='',reservation_id='r-2qewqols',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-878981139',owner_user_name='tempest-Serv
erActionsTestOtherA-878981139-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:46:03Z,user_data=None,user_id='7bd4800c25cd462b9365649e599d0a0e',uuid=6b676874-6857-4021-9d83-c3673f57cebb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "0ebbbca7-8751-4479-a892-216433a26e74", "address": "fa:16:3e:09:27:70", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ebbbca7-87", "ovs_interfaceid": "0ebbbca7-8751-4479-a892-216433a26e74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:46:04 compute-0 nova_compute[254092]: 2025-11-25 16:46:04.783 254096 DEBUG nova.network.os_vif_util [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Converting VIF {"id": "0ebbbca7-8751-4479-a892-216433a26e74", "address": "fa:16:3e:09:27:70", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ebbbca7-87", "ovs_interfaceid": "0ebbbca7-8751-4479-a892-216433a26e74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:46:04 compute-0 nova_compute[254092]: 2025-11-25 16:46:04.784 254096 DEBUG nova.network.os_vif_util [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:27:70,bridge_name='br-int',has_traffic_filtering=True,id=0ebbbca7-8751-4479-a892-216433a26e74,network=Network(290484fa-908f-44de-87e4-4f5bc85c5679),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ebbbca7-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:46:04 compute-0 nova_compute[254092]: 2025-11-25 16:46:04.787 254096 DEBUG nova.virt.libvirt.driver [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:46:04 compute-0 nova_compute[254092]:   <uuid>6b676874-6857-4021-9d83-c3673f57cebb</uuid>
Nov 25 16:46:04 compute-0 nova_compute[254092]:   <name>instance-00000052</name>
Nov 25 16:46:04 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:46:04 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:46:04 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:46:04 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:       <nova:name>tempest-tempest.common.compute-instance-666063642</nova:name>
Nov 25 16:46:04 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:46:03</nova:creationTime>
Nov 25 16:46:04 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:46:04 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:46:04 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:46:04 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:46:04 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:46:04 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:46:04 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:46:04 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:46:04 compute-0 nova_compute[254092]:         <nova:user uuid="7bd4800c25cd462b9365649e599d0a0e">tempest-ServerActionsTestOtherA-878981139-project-member</nova:user>
Nov 25 16:46:04 compute-0 nova_compute[254092]:         <nova:project uuid="d4964e211a6d4699ab499f7cadee8a8d">tempest-ServerActionsTestOtherA-878981139</nova:project>
Nov 25 16:46:04 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:46:04 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="a4aa3708-bb73-4b5a-b3f3-42153358021e"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:46:04 compute-0 nova_compute[254092]:         <nova:port uuid="0ebbbca7-8751-4479-a892-216433a26e74">
Nov 25 16:46:04 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:46:04 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:46:04 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:46:04 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:46:04 compute-0 nova_compute[254092]:     <system>
Nov 25 16:46:04 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:46:04 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:46:04 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:46:04 compute-0 nova_compute[254092]:       <entry name="serial">6b676874-6857-4021-9d83-c3673f57cebb</entry>
Nov 25 16:46:04 compute-0 nova_compute[254092]:       <entry name="uuid">6b676874-6857-4021-9d83-c3673f57cebb</entry>
Nov 25 16:46:04 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     </system>
Nov 25 16:46:04 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:46:04 compute-0 nova_compute[254092]:   <os>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:   </os>
Nov 25 16:46:04 compute-0 nova_compute[254092]:   <features>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:   </features>
Nov 25 16:46:04 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:46:04 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:46:04 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:46:04 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:46:04 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:46:04 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/6b676874-6857-4021-9d83-c3673f57cebb_disk">
Nov 25 16:46:04 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:       </source>
Nov 25 16:46:04 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:46:04 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:46:04 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:46:04 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/6b676874-6857-4021-9d83-c3673f57cebb_disk.config">
Nov 25 16:46:04 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:       </source>
Nov 25 16:46:04 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:46:04 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:46:04 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:46:04 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:09:27:70"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:       <target dev="tap0ebbbca7-87"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:46:04 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/6b676874-6857-4021-9d83-c3673f57cebb/console.log" append="off"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     <video>
Nov 25 16:46:04 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     </video>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:46:04 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:46:04 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:46:04 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:46:04 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:46:04 compute-0 nova_compute[254092]: </domain>
Nov 25 16:46:04 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:46:04 compute-0 nova_compute[254092]: 2025-11-25 16:46:04.787 254096 DEBUG nova.compute.manager [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Preparing to wait for external event network-vif-plugged-0ebbbca7-8751-4479-a892-216433a26e74 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:46:04 compute-0 nova_compute[254092]: 2025-11-25 16:46:04.787 254096 DEBUG oslo_concurrency.lockutils [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Acquiring lock "6b676874-6857-4021-9d83-c3673f57cebb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:04 compute-0 nova_compute[254092]: 2025-11-25 16:46:04.787 254096 DEBUG oslo_concurrency.lockutils [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "6b676874-6857-4021-9d83-c3673f57cebb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:04 compute-0 nova_compute[254092]: 2025-11-25 16:46:04.788 254096 DEBUG oslo_concurrency.lockutils [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "6b676874-6857-4021-9d83-c3673f57cebb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:04 compute-0 nova_compute[254092]: 2025-11-25 16:46:04.788 254096 DEBUG nova.virt.libvirt.vif [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:45:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-666063642',display_name='tempest-tempest.common.compute-instance-666063642',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-666063642',id=82,image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:45:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='d4964e211a6d4699ab499f7cadee8a8d',ramdisk_id='',reservation_id='r-2qewqols',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-878981139',owner_user_name='tempest-Serv
erActionsTestOtherA-878981139-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:46:03Z,user_data=None,user_id='7bd4800c25cd462b9365649e599d0a0e',uuid=6b676874-6857-4021-9d83-c3673f57cebb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "0ebbbca7-8751-4479-a892-216433a26e74", "address": "fa:16:3e:09:27:70", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ebbbca7-87", "ovs_interfaceid": "0ebbbca7-8751-4479-a892-216433a26e74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:46:04 compute-0 nova_compute[254092]: 2025-11-25 16:46:04.788 254096 DEBUG nova.network.os_vif_util [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Converting VIF {"id": "0ebbbca7-8751-4479-a892-216433a26e74", "address": "fa:16:3e:09:27:70", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ebbbca7-87", "ovs_interfaceid": "0ebbbca7-8751-4479-a892-216433a26e74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:46:04 compute-0 nova_compute[254092]: 2025-11-25 16:46:04.789 254096 DEBUG nova.network.os_vif_util [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:27:70,bridge_name='br-int',has_traffic_filtering=True,id=0ebbbca7-8751-4479-a892-216433a26e74,network=Network(290484fa-908f-44de-87e4-4f5bc85c5679),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ebbbca7-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:46:04 compute-0 nova_compute[254092]: 2025-11-25 16:46:04.789 254096 DEBUG os_vif [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:27:70,bridge_name='br-int',has_traffic_filtering=True,id=0ebbbca7-8751-4479-a892-216433a26e74,network=Network(290484fa-908f-44de-87e4-4f5bc85c5679),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ebbbca7-87') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:46:04 compute-0 nova_compute[254092]: 2025-11-25 16:46:04.790 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:04 compute-0 nova_compute[254092]: 2025-11-25 16:46:04.790 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:04 compute-0 nova_compute[254092]: 2025-11-25 16:46:04.791 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:46:04 compute-0 nova_compute[254092]: 2025-11-25 16:46:04.794 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:04 compute-0 nova_compute[254092]: 2025-11-25 16:46:04.794 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0ebbbca7-87, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:04 compute-0 nova_compute[254092]: 2025-11-25 16:46:04.795 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0ebbbca7-87, col_values=(('external_ids', {'iface-id': '0ebbbca7-8751-4479-a892-216433a26e74', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:09:27:70', 'vm-uuid': '6b676874-6857-4021-9d83-c3673f57cebb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:04 compute-0 nova_compute[254092]: 2025-11-25 16:46:04.796 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:04 compute-0 NetworkManager[48891]: <info>  [1764089164.7975] manager: (tap0ebbbca7-87): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/339)
Nov 25 16:46:04 compute-0 nova_compute[254092]: 2025-11-25 16:46:04.799 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:46:04 compute-0 nova_compute[254092]: 2025-11-25 16:46:04.801 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:04 compute-0 nova_compute[254092]: 2025-11-25 16:46:04.801 254096 INFO os_vif [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:27:70,bridge_name='br-int',has_traffic_filtering=True,id=0ebbbca7-8751-4479-a892-216433a26e74,network=Network(290484fa-908f-44de-87e4-4f5bc85c5679),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ebbbca7-87')
Nov 25 16:46:04 compute-0 nova_compute[254092]: 2025-11-25 16:46:04.846 254096 DEBUG nova.virt.libvirt.driver [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:46:04 compute-0 nova_compute[254092]: 2025-11-25 16:46:04.847 254096 DEBUG nova.virt.libvirt.driver [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:46:04 compute-0 nova_compute[254092]: 2025-11-25 16:46:04.847 254096 DEBUG nova.virt.libvirt.driver [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] No VIF found with MAC fa:16:3e:09:27:70, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:46:04 compute-0 nova_compute[254092]: 2025-11-25 16:46:04.847 254096 INFO nova.virt.libvirt.driver [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Using config drive
Nov 25 16:46:04 compute-0 nova_compute[254092]: 2025-11-25 16:46:04.872 254096 DEBUG nova.storage.rbd_utils [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] rbd image 6b676874-6857-4021-9d83-c3673f57cebb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:46:04 compute-0 nova_compute[254092]: 2025-11-25 16:46:04.887 254096 DEBUG nova.objects.instance [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lazy-loading 'ec2_ids' on Instance uuid 6b676874-6857-4021-9d83-c3673f57cebb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:46:04 compute-0 nova_compute[254092]: 2025-11-25 16:46:04.905 254096 DEBUG nova.objects.instance [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lazy-loading 'keypairs' on Instance uuid 6b676874-6857-4021-9d83-c3673f57cebb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:46:04 compute-0 ceph-mon[74985]: pgmap v1820: 321 pgs: 321 active+clean; 328 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 527 KiB/s rd, 4.9 MiB/s wr, 184 op/s
Nov 25 16:46:04 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3898457245' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:46:04 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3740749239' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:46:05 compute-0 nova_compute[254092]: 2025-11-25 16:46:05.165 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089150.1642962, efdd3cf9-3df8-4a1e-9e45-12172f99cbac => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:46:05 compute-0 nova_compute[254092]: 2025-11-25 16:46:05.165 254096 INFO nova.compute.manager [-] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] VM Stopped (Lifecycle Event)
Nov 25 16:46:05 compute-0 nova_compute[254092]: 2025-11-25 16:46:05.182 254096 DEBUG nova.compute.manager [None req-a6616167-f2bd-4a3b-9e2a-7e37b8bc4b63 - - - - - -] [instance: efdd3cf9-3df8-4a1e-9e45-12172f99cbac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:46:05 compute-0 nova_compute[254092]: 2025-11-25 16:46:05.192 254096 INFO nova.virt.libvirt.driver [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Creating config drive at /var/lib/nova/instances/6b676874-6857-4021-9d83-c3673f57cebb/disk.config
Nov 25 16:46:05 compute-0 nova_compute[254092]: 2025-11-25 16:46:05.198 254096 DEBUG oslo_concurrency.processutils [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6b676874-6857-4021-9d83-c3673f57cebb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4k11b2h3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:46:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:46:05 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3702109657' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:46:05 compute-0 nova_compute[254092]: 2025-11-25 16:46:05.236 254096 DEBUG oslo_concurrency.processutils [None req-f67d43fc-3233-4d6a-8563-382d9bbf7002 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:46:05 compute-0 nova_compute[254092]: 2025-11-25 16:46:05.242 254096 DEBUG nova.compute.provider_tree [None req-f67d43fc-3233-4d6a-8563-382d9bbf7002 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:46:05 compute-0 nova_compute[254092]: 2025-11-25 16:46:05.255 254096 DEBUG nova.scheduler.client.report [None req-f67d43fc-3233-4d6a-8563-382d9bbf7002 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:46:05 compute-0 nova_compute[254092]: 2025-11-25 16:46:05.279 254096 DEBUG oslo_concurrency.lockutils [None req-f67d43fc-3233-4d6a-8563-382d9bbf7002 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:05 compute-0 nova_compute[254092]: 2025-11-25 16:46:05.321 254096 INFO nova.scheduler.client.report [None req-f67d43fc-3233-4d6a-8563-382d9bbf7002 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Deleted allocations for instance b487a735-c096-4a8f-b8ba-fc5b6c055f56
Nov 25 16:46:05 compute-0 nova_compute[254092]: 2025-11-25 16:46:05.335 254096 DEBUG oslo_concurrency.processutils [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6b676874-6857-4021-9d83-c3673f57cebb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4k11b2h3" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:46:05 compute-0 nova_compute[254092]: 2025-11-25 16:46:05.363 254096 DEBUG nova.storage.rbd_utils [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] rbd image 6b676874-6857-4021-9d83-c3673f57cebb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:46:05 compute-0 nova_compute[254092]: 2025-11-25 16:46:05.369 254096 DEBUG oslo_concurrency.processutils [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6b676874-6857-4021-9d83-c3673f57cebb/disk.config 6b676874-6857-4021-9d83-c3673f57cebb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:46:05 compute-0 nova_compute[254092]: 2025-11-25 16:46:05.458 254096 DEBUG oslo_concurrency.lockutils [None req-f67d43fc-3233-4d6a-8563-382d9bbf7002 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "b487a735-c096-4a8f-b8ba-fc5b6c055f56" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.152s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:05 compute-0 nova_compute[254092]: 2025-11-25 16:46:05.535 254096 DEBUG oslo_concurrency.processutils [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6b676874-6857-4021-9d83-c3673f57cebb/disk.config 6b676874-6857-4021-9d83-c3673f57cebb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:46:05 compute-0 nova_compute[254092]: 2025-11-25 16:46:05.536 254096 INFO nova.virt.libvirt.driver [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Deleting local config drive /var/lib/nova/instances/6b676874-6857-4021-9d83-c3673f57cebb/disk.config because it was imported into RBD.
Nov 25 16:46:05 compute-0 kernel: tap0ebbbca7-87: entered promiscuous mode
Nov 25 16:46:05 compute-0 systemd-udevd[339943]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:46:05 compute-0 NetworkManager[48891]: <info>  [1764089165.6045] manager: (tap0ebbbca7-87): new Tun device (/org/freedesktop/NetworkManager/Devices/340)
Nov 25 16:46:05 compute-0 ovn_controller[153477]: 2025-11-25T16:46:05Z|00785|binding|INFO|Claiming lport 0ebbbca7-8751-4479-a892-216433a26e74 for this chassis.
Nov 25 16:46:05 compute-0 ovn_controller[153477]: 2025-11-25T16:46:05Z|00786|binding|INFO|0ebbbca7-8751-4479-a892-216433a26e74: Claiming fa:16:3e:09:27:70 10.100.0.9
Nov 25 16:46:05 compute-0 nova_compute[254092]: 2025-11-25 16:46:05.607 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:05.617 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:27:70 10.100.0.9'], port_security=['fa:16:3e:09:27:70 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '6b676874-6857-4021-9d83-c3673f57cebb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-290484fa-908f-44de-87e4-4f5bc85c5679', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4964e211a6d4699ab499f7cadee8a8d', 'neutron:revision_number': '5', 'neutron:security_group_ids': '168dc57b-72e8-4bf9-9fa6-0f910875b8fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=041a286f-fdb2-474d-8a57-f5c3168624f1, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=0ebbbca7-8751-4479-a892-216433a26e74) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:46:05 compute-0 NetworkManager[48891]: <info>  [1764089165.6196] device (tap0ebbbca7-87): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:46:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:05.619 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 0ebbbca7-8751-4479-a892-216433a26e74 in datapath 290484fa-908f-44de-87e4-4f5bc85c5679 bound to our chassis
Nov 25 16:46:05 compute-0 NetworkManager[48891]: <info>  [1764089165.6208] device (tap0ebbbca7-87): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:46:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:05.621 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 290484fa-908f-44de-87e4-4f5bc85c5679
Nov 25 16:46:05 compute-0 ovn_controller[153477]: 2025-11-25T16:46:05Z|00787|binding|INFO|Setting lport 0ebbbca7-8751-4479-a892-216433a26e74 ovn-installed in OVS
Nov 25 16:46:05 compute-0 ovn_controller[153477]: 2025-11-25T16:46:05Z|00788|binding|INFO|Setting lport 0ebbbca7-8751-4479-a892-216433a26e74 up in Southbound
Nov 25 16:46:05 compute-0 nova_compute[254092]: 2025-11-25 16:46:05.630 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:05 compute-0 nova_compute[254092]: 2025-11-25 16:46:05.631 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:05.642 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5ef3fccf-109b-4c7b-bf8a-03c25a06d336]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:05 compute-0 systemd-machined[216343]: New machine qemu-101-instance-00000052.
Nov 25 16:46:05 compute-0 systemd[1]: Started Virtual Machine qemu-101-instance-00000052.
Nov 25 16:46:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:05.682 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[549fc90c-bc82-4ed1-934b-69eb5c9d3d4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:05.686 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e12f1437-2ce1-476c-a108-0499f1664425]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1821: 321 pgs: 321 active+clean; 269 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 6.2 MiB/s wr, 331 op/s
Nov 25 16:46:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:05.725 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d6f6a04a-60c2-4379-b14c-0446a5266d39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:05.752 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[615dda88-7a74-41da-999e-d770ca475074]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap290484fa-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:a3:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 221], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554060, 'reachable_time': 26301, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340228, 'error': None, 'target': 'ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:05.773 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d57f27d2-84cb-479b-a6d3-6fa084046ecd]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap290484fa-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 554073, 'tstamp': 554073}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340231, 'error': None, 'target': 'ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap290484fa-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 554076, 'tstamp': 554076}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340231, 'error': None, 'target': 'ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:05.776 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap290484fa-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:05 compute-0 nova_compute[254092]: 2025-11-25 16:46:05.778 254096 DEBUG nova.compute.manager [req-ab7128b8-c7f1-4bab-9dd5-24e4f1df7b3b req-283dffda-a36e-43b4-9f07-5baeb56a04cd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Received event network-vif-deleted-5b25a3f7-0f17-4813-8f18-d5d20a92f4aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:46:05 compute-0 nova_compute[254092]: 2025-11-25 16:46:05.779 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:05.780 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap290484fa-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:05.780 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:46:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:05.781 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap290484fa-90, col_values=(('external_ids', {'iface-id': 'db192ec3-55c1-4137-aaad-99a175bfa879'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:05.781 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:46:05 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3702109657' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:46:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:46:06 compute-0 nova_compute[254092]: 2025-11-25 16:46:06.439 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for 6b676874-6857-4021-9d83-c3673f57cebb due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 16:46:06 compute-0 nova_compute[254092]: 2025-11-25 16:46:06.440 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089166.4384952, 6b676874-6857-4021-9d83-c3673f57cebb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:46:06 compute-0 nova_compute[254092]: 2025-11-25 16:46:06.440 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] VM Started (Lifecycle Event)
Nov 25 16:46:06 compute-0 nova_compute[254092]: 2025-11-25 16:46:06.460 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:46:06 compute-0 nova_compute[254092]: 2025-11-25 16:46:06.467 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089166.4401214, 6b676874-6857-4021-9d83-c3673f57cebb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:46:06 compute-0 nova_compute[254092]: 2025-11-25 16:46:06.467 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] VM Paused (Lifecycle Event)
Nov 25 16:46:06 compute-0 nova_compute[254092]: 2025-11-25 16:46:06.488 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:46:06 compute-0 nova_compute[254092]: 2025-11-25 16:46:06.493 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:46:06 compute-0 nova_compute[254092]: 2025-11-25 16:46:06.515 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 16:46:06 compute-0 ceph-mon[74985]: pgmap v1821: 321 pgs: 321 active+clean; 269 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 6.2 MiB/s wr, 331 op/s
Nov 25 16:46:07 compute-0 ovn_controller[153477]: 2025-11-25T16:46:07Z|00789|binding|INFO|Releasing lport db192ec3-55c1-4137-aaad-99a175bfa879 from this chassis (sb_readonly=0)
Nov 25 16:46:07 compute-0 ovn_controller[153477]: 2025-11-25T16:46:07Z|00790|binding|INFO|Releasing lport 1ec3401d-acb9-4d50-bfc2-f0a1127f4745 from this chassis (sb_readonly=0)
Nov 25 16:46:07 compute-0 nova_compute[254092]: 2025-11-25 16:46:07.375 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:07 compute-0 nova_compute[254092]: 2025-11-25 16:46:07.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:46:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1822: 321 pgs: 321 active+clean; 248 MiB data, 722 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.6 MiB/s wr, 322 op/s
Nov 25 16:46:08 compute-0 nova_compute[254092]: 2025-11-25 16:46:08.440 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:08 compute-0 nova_compute[254092]: 2025-11-25 16:46:08.463 254096 DEBUG oslo_concurrency.lockutils [None req-626a8327-3918-4f76-ad90-88c372a7dbf2 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:08 compute-0 nova_compute[254092]: 2025-11-25 16:46:08.463 254096 DEBUG oslo_concurrency.lockutils [None req-626a8327-3918-4f76-ad90-88c372a7dbf2 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:08 compute-0 nova_compute[254092]: 2025-11-25 16:46:08.464 254096 DEBUG nova.compute.manager [None req-626a8327-3918-4f76-ad90-88c372a7dbf2 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:46:08 compute-0 nova_compute[254092]: 2025-11-25 16:46:08.468 254096 DEBUG nova.compute.manager [None req-626a8327-3918-4f76-ad90-88c372a7dbf2 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Nov 25 16:46:08 compute-0 nova_compute[254092]: 2025-11-25 16:46:08.469 254096 DEBUG nova.objects.instance [None req-626a8327-3918-4f76-ad90-88c372a7dbf2 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'flavor' on Instance uuid 01f96314-1fbe-4eee-a4ed-db7f448a5320 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:46:08 compute-0 nova_compute[254092]: 2025-11-25 16:46:08.491 254096 DEBUG nova.virt.libvirt.driver [None req-626a8327-3918-4f76-ad90-88c372a7dbf2 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 16:46:08 compute-0 nova_compute[254092]: 2025-11-25 16:46:08.514 254096 DEBUG nova.compute.manager [req-4b9f9417-8a54-4660-b065-205d0e8584b4 req-2da7e77f-f9d1-435c-a9c8-819dfe85e54d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Received event network-vif-plugged-0ebbbca7-8751-4479-a892-216433a26e74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:46:08 compute-0 nova_compute[254092]: 2025-11-25 16:46:08.515 254096 DEBUG oslo_concurrency.lockutils [req-4b9f9417-8a54-4660-b065-205d0e8584b4 req-2da7e77f-f9d1-435c-a9c8-819dfe85e54d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6b676874-6857-4021-9d83-c3673f57cebb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:08 compute-0 nova_compute[254092]: 2025-11-25 16:46:08.515 254096 DEBUG oslo_concurrency.lockutils [req-4b9f9417-8a54-4660-b065-205d0e8584b4 req-2da7e77f-f9d1-435c-a9c8-819dfe85e54d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b676874-6857-4021-9d83-c3673f57cebb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:08 compute-0 nova_compute[254092]: 2025-11-25 16:46:08.515 254096 DEBUG oslo_concurrency.lockutils [req-4b9f9417-8a54-4660-b065-205d0e8584b4 req-2da7e77f-f9d1-435c-a9c8-819dfe85e54d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b676874-6857-4021-9d83-c3673f57cebb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:08 compute-0 nova_compute[254092]: 2025-11-25 16:46:08.516 254096 DEBUG nova.compute.manager [req-4b9f9417-8a54-4660-b065-205d0e8584b4 req-2da7e77f-f9d1-435c-a9c8-819dfe85e54d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Processing event network-vif-plugged-0ebbbca7-8751-4479-a892-216433a26e74 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:46:08 compute-0 nova_compute[254092]: 2025-11-25 16:46:08.516 254096 DEBUG nova.compute.manager [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:46:08 compute-0 nova_compute[254092]: 2025-11-25 16:46:08.520 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089168.519768, 6b676874-6857-4021-9d83-c3673f57cebb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:46:08 compute-0 nova_compute[254092]: 2025-11-25 16:46:08.520 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] VM Resumed (Lifecycle Event)
Nov 25 16:46:08 compute-0 nova_compute[254092]: 2025-11-25 16:46:08.521 254096 DEBUG nova.virt.libvirt.driver [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:46:08 compute-0 nova_compute[254092]: 2025-11-25 16:46:08.524 254096 INFO nova.virt.libvirt.driver [-] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Instance spawned successfully.
Nov 25 16:46:08 compute-0 nova_compute[254092]: 2025-11-25 16:46:08.524 254096 DEBUG nova.virt.libvirt.driver [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:46:08 compute-0 nova_compute[254092]: 2025-11-25 16:46:08.537 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:46:08 compute-0 nova_compute[254092]: 2025-11-25 16:46:08.544 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:46:08 compute-0 nova_compute[254092]: 2025-11-25 16:46:08.548 254096 DEBUG nova.virt.libvirt.driver [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:46:08 compute-0 nova_compute[254092]: 2025-11-25 16:46:08.548 254096 DEBUG nova.virt.libvirt.driver [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:46:08 compute-0 nova_compute[254092]: 2025-11-25 16:46:08.549 254096 DEBUG nova.virt.libvirt.driver [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:46:08 compute-0 nova_compute[254092]: 2025-11-25 16:46:08.549 254096 DEBUG nova.virt.libvirt.driver [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:46:08 compute-0 nova_compute[254092]: 2025-11-25 16:46:08.549 254096 DEBUG nova.virt.libvirt.driver [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:46:08 compute-0 nova_compute[254092]: 2025-11-25 16:46:08.549 254096 DEBUG nova.virt.libvirt.driver [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:46:08 compute-0 nova_compute[254092]: 2025-11-25 16:46:08.573 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 16:46:08 compute-0 nova_compute[254092]: 2025-11-25 16:46:08.604 254096 DEBUG nova.compute.manager [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:46:08 compute-0 nova_compute[254092]: 2025-11-25 16:46:08.646 254096 INFO nova.compute.manager [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] bringing vm to original state: 'stopped'
Nov 25 16:46:08 compute-0 nova_compute[254092]: 2025-11-25 16:46:08.705 254096 DEBUG oslo_concurrency.lockutils [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Acquiring lock "6b676874-6857-4021-9d83-c3673f57cebb" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:08 compute-0 nova_compute[254092]: 2025-11-25 16:46:08.705 254096 DEBUG oslo_concurrency.lockutils [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "6b676874-6857-4021-9d83-c3673f57cebb" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:08 compute-0 nova_compute[254092]: 2025-11-25 16:46:08.705 254096 DEBUG nova.compute.manager [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:46:08 compute-0 nova_compute[254092]: 2025-11-25 16:46:08.711 254096 DEBUG nova.compute.manager [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Nov 25 16:46:08 compute-0 kernel: tap0ebbbca7-87 (unregistering): left promiscuous mode
Nov 25 16:46:08 compute-0 NetworkManager[48891]: <info>  [1764089168.7492] device (tap0ebbbca7-87): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:46:08 compute-0 ovn_controller[153477]: 2025-11-25T16:46:08Z|00791|binding|INFO|Releasing lport 0ebbbca7-8751-4479-a892-216433a26e74 from this chassis (sb_readonly=0)
Nov 25 16:46:08 compute-0 ovn_controller[153477]: 2025-11-25T16:46:08Z|00792|binding|INFO|Setting lport 0ebbbca7-8751-4479-a892-216433a26e74 down in Southbound
Nov 25 16:46:08 compute-0 nova_compute[254092]: 2025-11-25 16:46:08.756 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:08 compute-0 ovn_controller[153477]: 2025-11-25T16:46:08Z|00793|binding|INFO|Removing iface tap0ebbbca7-87 ovn-installed in OVS
Nov 25 16:46:08 compute-0 nova_compute[254092]: 2025-11-25 16:46:08.760 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:08.769 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:27:70 10.100.0.9'], port_security=['fa:16:3e:09:27:70 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '6b676874-6857-4021-9d83-c3673f57cebb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-290484fa-908f-44de-87e4-4f5bc85c5679', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4964e211a6d4699ab499f7cadee8a8d', 'neutron:revision_number': '6', 'neutron:security_group_ids': '168dc57b-72e8-4bf9-9fa6-0f910875b8fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=041a286f-fdb2-474d-8a57-f5c3168624f1, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=0ebbbca7-8751-4479-a892-216433a26e74) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:46:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:08.770 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 0ebbbca7-8751-4479-a892-216433a26e74 in datapath 290484fa-908f-44de-87e4-4f5bc85c5679 unbound from our chassis
Nov 25 16:46:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:08.771 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 290484fa-908f-44de-87e4-4f5bc85c5679
Nov 25 16:46:08 compute-0 nova_compute[254092]: 2025-11-25 16:46:08.774 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:08.789 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e2c4cf68-9ce2-4c02-822f-7ec1d918de47]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:08 compute-0 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d00000052.scope: Deactivated successfully.
Nov 25 16:46:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:08.832 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c3c8878e-99d4-44d5-8cbb-b273b819ed55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:08 compute-0 systemd-machined[216343]: Machine qemu-101-instance-00000052 terminated.
Nov 25 16:46:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:08.839 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d934a5fb-20b1-4dcb-8690-b7f0ab4f3298]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:08.874 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[4ad23586-e427-4f49-aad8-e34bda9969d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:08.897 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6d20ca51-943d-4f61-b2e7-6b54978d86ee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap290484fa-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:a3:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 221], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554060, 'reachable_time': 26301, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340285, 'error': None, 'target': 'ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:08.923 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8842fa15-e38b-4443-b2ed-216ffb6271a8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap290484fa-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 554073, 'tstamp': 554073}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340286, 'error': None, 'target': 'ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap290484fa-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 554076, 'tstamp': 554076}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340286, 'error': None, 'target': 'ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:08.925 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap290484fa-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:08 compute-0 nova_compute[254092]: 2025-11-25 16:46:08.973 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:08 compute-0 nova_compute[254092]: 2025-11-25 16:46:08.978 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:08.978 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap290484fa-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:08.979 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:46:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:08.979 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap290484fa-90, col_values=(('external_ids', {'iface-id': 'db192ec3-55c1-4137-aaad-99a175bfa879'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:08.979 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:46:08 compute-0 ceph-mon[74985]: pgmap v1822: 321 pgs: 321 active+clean; 248 MiB data, 722 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.6 MiB/s wr, 322 op/s
Nov 25 16:46:08 compute-0 nova_compute[254092]: 2025-11-25 16:46:08.988 254096 INFO nova.virt.libvirt.driver [-] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Instance destroyed successfully.
Nov 25 16:46:08 compute-0 nova_compute[254092]: 2025-11-25 16:46:08.988 254096 DEBUG nova.compute.manager [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:46:09 compute-0 nova_compute[254092]: 2025-11-25 16:46:09.053 254096 DEBUG oslo_concurrency.lockutils [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "6b676874-6857-4021-9d83-c3673f57cebb" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 0.348s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:09 compute-0 nova_compute[254092]: 2025-11-25 16:46:09.083 254096 DEBUG oslo_concurrency.lockutils [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:09 compute-0 nova_compute[254092]: 2025-11-25 16:46:09.084 254096 DEBUG oslo_concurrency.lockutils [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:09 compute-0 nova_compute[254092]: 2025-11-25 16:46:09.084 254096 DEBUG nova.objects.instance [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 16:46:09 compute-0 nova_compute[254092]: 2025-11-25 16:46:09.137 254096 DEBUG oslo_concurrency.lockutils [None req-855711c2-8df3-4f24-b7c0-a29016b53156 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.054s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:09 compute-0 nova_compute[254092]: 2025-11-25 16:46:09.296 254096 DEBUG oslo_concurrency.lockutils [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "8b20d119-17cb-4742-9223-90e5020f93a7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:09 compute-0 nova_compute[254092]: 2025-11-25 16:46:09.297 254096 DEBUG oslo_concurrency.lockutils [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "8b20d119-17cb-4742-9223-90e5020f93a7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:09 compute-0 nova_compute[254092]: 2025-11-25 16:46:09.311 254096 DEBUG nova.compute.manager [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:46:09 compute-0 nova_compute[254092]: 2025-11-25 16:46:09.366 254096 DEBUG oslo_concurrency.lockutils [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:09 compute-0 nova_compute[254092]: 2025-11-25 16:46:09.367 254096 DEBUG oslo_concurrency.lockutils [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:09 compute-0 nova_compute[254092]: 2025-11-25 16:46:09.375 254096 DEBUG nova.virt.hardware [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:46:09 compute-0 nova_compute[254092]: 2025-11-25 16:46:09.375 254096 INFO nova.compute.claims [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:46:09 compute-0 nova_compute[254092]: 2025-11-25 16:46:09.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:46:09 compute-0 nova_compute[254092]: 2025-11-25 16:46:09.513 254096 DEBUG oslo_concurrency.processutils [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:46:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1823: 321 pgs: 321 active+clean; 248 MiB data, 722 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.6 MiB/s wr, 289 op/s
Nov 25 16:46:09 compute-0 nova_compute[254092]: 2025-11-25 16:46:09.798 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:46:09 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/583306994' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:46:09 compute-0 nova_compute[254092]: 2025-11-25 16:46:09.962 254096 DEBUG oslo_concurrency.processutils [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:46:09 compute-0 nova_compute[254092]: 2025-11-25 16:46:09.969 254096 DEBUG nova.compute.provider_tree [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:46:09 compute-0 nova_compute[254092]: 2025-11-25 16:46:09.984 254096 DEBUG nova.scheduler.client.report [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:46:09 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/583306994' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:46:10 compute-0 nova_compute[254092]: 2025-11-25 16:46:10.007 254096 DEBUG oslo_concurrency.lockutils [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:10 compute-0 nova_compute[254092]: 2025-11-25 16:46:10.009 254096 DEBUG nova.compute.manager [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:46:10 compute-0 nova_compute[254092]: 2025-11-25 16:46:10.049 254096 DEBUG nova.compute.manager [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:46:10 compute-0 nova_compute[254092]: 2025-11-25 16:46:10.049 254096 DEBUG nova.network.neutron [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:46:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:46:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:46:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:46:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:46:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:46:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:46:10 compute-0 nova_compute[254092]: 2025-11-25 16:46:10.062 254096 INFO nova.virt.libvirt.driver [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:46:10 compute-0 nova_compute[254092]: 2025-11-25 16:46:10.075 254096 DEBUG nova.compute.manager [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:46:10 compute-0 nova_compute[254092]: 2025-11-25 16:46:10.150 254096 DEBUG nova.compute.manager [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:46:10 compute-0 nova_compute[254092]: 2025-11-25 16:46:10.151 254096 DEBUG nova.virt.libvirt.driver [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:46:10 compute-0 nova_compute[254092]: 2025-11-25 16:46:10.151 254096 INFO nova.virt.libvirt.driver [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Creating image(s)
Nov 25 16:46:10 compute-0 nova_compute[254092]: 2025-11-25 16:46:10.175 254096 DEBUG nova.storage.rbd_utils [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 8b20d119-17cb-4742-9223-90e5020f93a7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:46:10 compute-0 nova_compute[254092]: 2025-11-25 16:46:10.206 254096 DEBUG nova.storage.rbd_utils [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 8b20d119-17cb-4742-9223-90e5020f93a7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:46:10 compute-0 nova_compute[254092]: 2025-11-25 16:46:10.230 254096 DEBUG nova.storage.rbd_utils [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 8b20d119-17cb-4742-9223-90e5020f93a7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:46:10 compute-0 nova_compute[254092]: 2025-11-25 16:46:10.236 254096 DEBUG oslo_concurrency.processutils [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:46:10 compute-0 nova_compute[254092]: 2025-11-25 16:46:10.312 254096 DEBUG oslo_concurrency.processutils [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:46:10 compute-0 nova_compute[254092]: 2025-11-25 16:46:10.313 254096 DEBUG oslo_concurrency.lockutils [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:10 compute-0 nova_compute[254092]: 2025-11-25 16:46:10.314 254096 DEBUG oslo_concurrency.lockutils [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:10 compute-0 nova_compute[254092]: 2025-11-25 16:46:10.314 254096 DEBUG oslo_concurrency.lockutils [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:10 compute-0 nova_compute[254092]: 2025-11-25 16:46:10.341 254096 DEBUG nova.storage.rbd_utils [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 8b20d119-17cb-4742-9223-90e5020f93a7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:46:10 compute-0 nova_compute[254092]: 2025-11-25 16:46:10.346 254096 DEBUG oslo_concurrency.processutils [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 8b20d119-17cb-4742-9223-90e5020f93a7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:46:10 compute-0 nova_compute[254092]: 2025-11-25 16:46:10.701 254096 DEBUG oslo_concurrency.processutils [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 8b20d119-17cb-4742-9223-90e5020f93a7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.355s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:46:10 compute-0 kernel: tap4fe8c3a9-70 (unregistering): left promiscuous mode
Nov 25 16:46:10 compute-0 NetworkManager[48891]: <info>  [1764089170.7450] device (tap4fe8c3a9-70): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:46:10 compute-0 ovn_controller[153477]: 2025-11-25T16:46:10Z|00794|binding|INFO|Releasing lport 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 from this chassis (sb_readonly=0)
Nov 25 16:46:10 compute-0 ovn_controller[153477]: 2025-11-25T16:46:10Z|00795|binding|INFO|Setting lport 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 down in Southbound
Nov 25 16:46:10 compute-0 ovn_controller[153477]: 2025-11-25T16:46:10Z|00796|binding|INFO|Removing iface tap4fe8c3a9-70 ovn-installed in OVS
Nov 25 16:46:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:10.762 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:a0:1c 10.100.0.11'], port_security=['fa:16:3e:ff:a0:1c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '01f96314-1fbe-4eee-a4ed-db7f448a5320', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4d15f5aabd3491da5314b126a20225a', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'f659b7e6-4ca9-46df-a38f-d29c92b67f9f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.187', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa1f8c26-9163-4397-beb3-8395c780a12b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:46:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:10.763 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 in datapath 50ea1716-9d0b-409f-b648-8d2ad5a30525 unbound from our chassis
Nov 25 16:46:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:10.764 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 50ea1716-9d0b-409f-b648-8d2ad5a30525, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:46:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:10.766 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e13a56b9-cad7-4d98-8a03-0acf5ac98493]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:10.766 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525 namespace which is not needed anymore
Nov 25 16:46:10 compute-0 nova_compute[254092]: 2025-11-25 16:46:10.776 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:10 compute-0 nova_compute[254092]: 2025-11-25 16:46:10.785 254096 DEBUG nova.storage.rbd_utils [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] resizing rbd image 8b20d119-17cb-4742-9223-90e5020f93a7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:46:10 compute-0 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d00000048.scope: Deactivated successfully.
Nov 25 16:46:10 compute-0 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d00000048.scope: Consumed 15.849s CPU time.
Nov 25 16:46:10 compute-0 systemd-machined[216343]: Machine qemu-94-instance-00000048 terminated.
Nov 25 16:46:10 compute-0 nova_compute[254092]: 2025-11-25 16:46:10.883 254096 DEBUG nova.objects.instance [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lazy-loading 'migration_context' on Instance uuid 8b20d119-17cb-4742-9223-90e5020f93a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:46:10 compute-0 nova_compute[254092]: 2025-11-25 16:46:10.895 254096 DEBUG nova.virt.libvirt.driver [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:46:10 compute-0 nova_compute[254092]: 2025-11-25 16:46:10.896 254096 DEBUG nova.virt.libvirt.driver [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Ensure instance console log exists: /var/lib/nova/instances/8b20d119-17cb-4742-9223-90e5020f93a7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:46:10 compute-0 nova_compute[254092]: 2025-11-25 16:46:10.896 254096 DEBUG oslo_concurrency.lockutils [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:10 compute-0 nova_compute[254092]: 2025-11-25 16:46:10.897 254096 DEBUG oslo_concurrency.lockutils [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:10 compute-0 nova_compute[254092]: 2025-11-25 16:46:10.897 254096 DEBUG oslo_concurrency.lockutils [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:10 compute-0 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[336237]: [NOTICE]   (336241) : haproxy version is 2.8.14-c23fe91
Nov 25 16:46:10 compute-0 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[336237]: [NOTICE]   (336241) : path to executable is /usr/sbin/haproxy
Nov 25 16:46:10 compute-0 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[336237]: [WARNING]  (336241) : Exiting Master process...
Nov 25 16:46:10 compute-0 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[336237]: [WARNING]  (336241) : Exiting Master process...
Nov 25 16:46:10 compute-0 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[336237]: [ALERT]    (336241) : Current worker (336243) exited with code 143 (Terminated)
Nov 25 16:46:10 compute-0 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[336237]: [WARNING]  (336241) : All workers exited. Exiting... (0)
Nov 25 16:46:10 compute-0 systemd[1]: libpod-5a5589ecefb61d23eb5a0bc142b221ae387fe582e371e7516c0aebef1cf30f1c.scope: Deactivated successfully.
Nov 25 16:46:10 compute-0 conmon[336237]: conmon 5a5589ecefb61d23eb5a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5a5589ecefb61d23eb5a0bc142b221ae387fe582e371e7516c0aebef1cf30f1c.scope/container/memory.events
Nov 25 16:46:10 compute-0 podman[340498]: 2025-11-25 16:46:10.919973851 +0000 UTC m=+0.045094867 container died 5a5589ecefb61d23eb5a0bc142b221ae387fe582e371e7516c0aebef1cf30f1c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:46:10 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5a5589ecefb61d23eb5a0bc142b221ae387fe582e371e7516c0aebef1cf30f1c-userdata-shm.mount: Deactivated successfully.
Nov 25 16:46:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-e28edf0ac28fb1100bbac0b1fef16684a1848bb22824963373e71ee5754226dc-merged.mount: Deactivated successfully.
Nov 25 16:46:10 compute-0 podman[340498]: 2025-11-25 16:46:10.968485029 +0000 UTC m=+0.093606045 container cleanup 5a5589ecefb61d23eb5a0bc142b221ae387fe582e371e7516c0aebef1cf30f1c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:46:10 compute-0 systemd[1]: libpod-conmon-5a5589ecefb61d23eb5a0bc142b221ae387fe582e371e7516c0aebef1cf30f1c.scope: Deactivated successfully.
Nov 25 16:46:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:46:11 compute-0 ceph-mon[74985]: pgmap v1823: 321 pgs: 321 active+clean; 248 MiB data, 722 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.6 MiB/s wr, 289 op/s
Nov 25 16:46:11 compute-0 podman[340544]: 2025-11-25 16:46:11.039100148 +0000 UTC m=+0.048121129 container remove 5a5589ecefb61d23eb5a0bc142b221ae387fe582e371e7516c0aebef1cf30f1c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 16:46:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:11.044 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ef3957b7-91ee-4671-8477-5109a7c851dd]: (4, ('Tue Nov 25 04:46:10 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525 (5a5589ecefb61d23eb5a0bc142b221ae387fe582e371e7516c0aebef1cf30f1c)\n5a5589ecefb61d23eb5a0bc142b221ae387fe582e371e7516c0aebef1cf30f1c\nTue Nov 25 04:46:10 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525 (5a5589ecefb61d23eb5a0bc142b221ae387fe582e371e7516c0aebef1cf30f1c)\n5a5589ecefb61d23eb5a0bc142b221ae387fe582e371e7516c0aebef1cf30f1c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:11.046 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4473e46a-caa5-4de4-826f-88e4343701fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:11.047 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50ea1716-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:11 compute-0 kernel: tap50ea1716-90: left promiscuous mode
Nov 25 16:46:11 compute-0 nova_compute[254092]: 2025-11-25 16:46:11.049 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:11 compute-0 nova_compute[254092]: 2025-11-25 16:46:11.069 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:11.072 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[53175236-ce3b-4e79-89cb-b40b5883928b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:11.083 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0e2191d6-486d-4f11-b7ff-d5fe21b68afb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:11.084 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[51723a5a-e14f-404f-80ee-265423604577]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:11.098 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[52d4768c-b32c-4f5a-ade1-e04c4bd506e4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554585, 'reachable_time': 27560, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340572, 'error': None, 'target': 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:11 compute-0 nova_compute[254092]: 2025-11-25 16:46:11.099 254096 DEBUG nova.policy [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7a137feb008c49e092cbb94106526835', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '56790f54314e4087abb5da8030b17666', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:46:11 compute-0 systemd[1]: run-netns-ovnmeta\x2d50ea1716\x2d9d0b\x2d409f\x2db648\x2d8d2ad5a30525.mount: Deactivated successfully.
Nov 25 16:46:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:11.102 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:46:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:11.102 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[deb70529-3657-45c5-903f-b9e65408996d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:11 compute-0 nova_compute[254092]: 2025-11-25 16:46:11.508 254096 INFO nova.virt.libvirt.driver [None req-626a8327-3918-4f76-ad90-88c372a7dbf2 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Instance shutdown successfully after 3 seconds.
Nov 25 16:46:11 compute-0 nova_compute[254092]: 2025-11-25 16:46:11.514 254096 INFO nova.virt.libvirt.driver [-] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Instance destroyed successfully.
Nov 25 16:46:11 compute-0 nova_compute[254092]: 2025-11-25 16:46:11.515 254096 DEBUG nova.objects.instance [None req-626a8327-3918-4f76-ad90-88c372a7dbf2 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'numa_topology' on Instance uuid 01f96314-1fbe-4eee-a4ed-db7f448a5320 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:46:11 compute-0 nova_compute[254092]: 2025-11-25 16:46:11.524 254096 DEBUG nova.compute.manager [None req-626a8327-3918-4f76-ad90-88c372a7dbf2 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:46:11 compute-0 nova_compute[254092]: 2025-11-25 16:46:11.563 254096 DEBUG oslo_concurrency.lockutils [None req-626a8327-3918-4f76-ad90-88c372a7dbf2 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1824: 321 pgs: 321 active+clean; 275 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 5.1 MiB/s wr, 302 op/s
Nov 25 16:46:12 compute-0 nova_compute[254092]: 2025-11-25 16:46:12.120 254096 DEBUG nova.compute.manager [req-5614e2e0-cd68-41de-853c-897576426adc req-981a721e-6787-494a-a8cd-ec03253a1b1f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Received event network-vif-plugged-0ebbbca7-8751-4479-a892-216433a26e74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:46:12 compute-0 nova_compute[254092]: 2025-11-25 16:46:12.120 254096 DEBUG oslo_concurrency.lockutils [req-5614e2e0-cd68-41de-853c-897576426adc req-981a721e-6787-494a-a8cd-ec03253a1b1f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6b676874-6857-4021-9d83-c3673f57cebb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:12 compute-0 nova_compute[254092]: 2025-11-25 16:46:12.121 254096 DEBUG oslo_concurrency.lockutils [req-5614e2e0-cd68-41de-853c-897576426adc req-981a721e-6787-494a-a8cd-ec03253a1b1f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b676874-6857-4021-9d83-c3673f57cebb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:12 compute-0 nova_compute[254092]: 2025-11-25 16:46:12.121 254096 DEBUG oslo_concurrency.lockutils [req-5614e2e0-cd68-41de-853c-897576426adc req-981a721e-6787-494a-a8cd-ec03253a1b1f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b676874-6857-4021-9d83-c3673f57cebb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:12 compute-0 nova_compute[254092]: 2025-11-25 16:46:12.121 254096 DEBUG nova.compute.manager [req-5614e2e0-cd68-41de-853c-897576426adc req-981a721e-6787-494a-a8cd-ec03253a1b1f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] No waiting events found dispatching network-vif-plugged-0ebbbca7-8751-4479-a892-216433a26e74 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:46:12 compute-0 nova_compute[254092]: 2025-11-25 16:46:12.121 254096 WARNING nova.compute.manager [req-5614e2e0-cd68-41de-853c-897576426adc req-981a721e-6787-494a-a8cd-ec03253a1b1f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Received unexpected event network-vif-plugged-0ebbbca7-8751-4479-a892-216433a26e74 for instance with vm_state stopped and task_state None.
Nov 25 16:46:12 compute-0 nova_compute[254092]: 2025-11-25 16:46:12.122 254096 DEBUG nova.compute.manager [req-5614e2e0-cd68-41de-853c-897576426adc req-981a721e-6787-494a-a8cd-ec03253a1b1f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Received event network-vif-unplugged-0ebbbca7-8751-4479-a892-216433a26e74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:46:12 compute-0 nova_compute[254092]: 2025-11-25 16:46:12.122 254096 DEBUG oslo_concurrency.lockutils [req-5614e2e0-cd68-41de-853c-897576426adc req-981a721e-6787-494a-a8cd-ec03253a1b1f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6b676874-6857-4021-9d83-c3673f57cebb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:12 compute-0 nova_compute[254092]: 2025-11-25 16:46:12.122 254096 DEBUG oslo_concurrency.lockutils [req-5614e2e0-cd68-41de-853c-897576426adc req-981a721e-6787-494a-a8cd-ec03253a1b1f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b676874-6857-4021-9d83-c3673f57cebb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:12 compute-0 nova_compute[254092]: 2025-11-25 16:46:12.123 254096 DEBUG oslo_concurrency.lockutils [req-5614e2e0-cd68-41de-853c-897576426adc req-981a721e-6787-494a-a8cd-ec03253a1b1f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b676874-6857-4021-9d83-c3673f57cebb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:12 compute-0 nova_compute[254092]: 2025-11-25 16:46:12.123 254096 DEBUG nova.compute.manager [req-5614e2e0-cd68-41de-853c-897576426adc req-981a721e-6787-494a-a8cd-ec03253a1b1f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] No waiting events found dispatching network-vif-unplugged-0ebbbca7-8751-4479-a892-216433a26e74 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:46:12 compute-0 nova_compute[254092]: 2025-11-25 16:46:12.123 254096 WARNING nova.compute.manager [req-5614e2e0-cd68-41de-853c-897576426adc req-981a721e-6787-494a-a8cd-ec03253a1b1f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Received unexpected event network-vif-unplugged-0ebbbca7-8751-4479-a892-216433a26e74 for instance with vm_state stopped and task_state None.
Nov 25 16:46:12 compute-0 nova_compute[254092]: 2025-11-25 16:46:12.123 254096 DEBUG nova.compute.manager [req-5614e2e0-cd68-41de-853c-897576426adc req-981a721e-6787-494a-a8cd-ec03253a1b1f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Received event network-vif-plugged-0ebbbca7-8751-4479-a892-216433a26e74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:46:12 compute-0 nova_compute[254092]: 2025-11-25 16:46:12.123 254096 DEBUG oslo_concurrency.lockutils [req-5614e2e0-cd68-41de-853c-897576426adc req-981a721e-6787-494a-a8cd-ec03253a1b1f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6b676874-6857-4021-9d83-c3673f57cebb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:12 compute-0 nova_compute[254092]: 2025-11-25 16:46:12.124 254096 DEBUG oslo_concurrency.lockutils [req-5614e2e0-cd68-41de-853c-897576426adc req-981a721e-6787-494a-a8cd-ec03253a1b1f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b676874-6857-4021-9d83-c3673f57cebb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:12 compute-0 nova_compute[254092]: 2025-11-25 16:46:12.124 254096 DEBUG oslo_concurrency.lockutils [req-5614e2e0-cd68-41de-853c-897576426adc req-981a721e-6787-494a-a8cd-ec03253a1b1f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b676874-6857-4021-9d83-c3673f57cebb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:12 compute-0 nova_compute[254092]: 2025-11-25 16:46:12.124 254096 DEBUG nova.compute.manager [req-5614e2e0-cd68-41de-853c-897576426adc req-981a721e-6787-494a-a8cd-ec03253a1b1f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] No waiting events found dispatching network-vif-plugged-0ebbbca7-8751-4479-a892-216433a26e74 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:46:12 compute-0 nova_compute[254092]: 2025-11-25 16:46:12.124 254096 WARNING nova.compute.manager [req-5614e2e0-cd68-41de-853c-897576426adc req-981a721e-6787-494a-a8cd-ec03253a1b1f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Received unexpected event network-vif-plugged-0ebbbca7-8751-4479-a892-216433a26e74 for instance with vm_state stopped and task_state None.
Nov 25 16:46:13 compute-0 ceph-mon[74985]: pgmap v1824: 321 pgs: 321 active+clean; 275 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 5.1 MiB/s wr, 302 op/s
Nov 25 16:46:13 compute-0 sudo[340573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:46:13 compute-0 sudo[340573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:46:13 compute-0 sudo[340573]: pam_unix(sudo:session): session closed for user root
Nov 25 16:46:13 compute-0 sudo[340598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:46:13 compute-0 sudo[340598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:46:13 compute-0 sudo[340598]: pam_unix(sudo:session): session closed for user root
Nov 25 16:46:13 compute-0 podman[340622]: 2025-11-25 16:46:13.198699746 +0000 UTC m=+0.059754665 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118)
Nov 25 16:46:13 compute-0 sudo[340635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:46:13 compute-0 sudo[340635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:46:13 compute-0 sudo[340635]: pam_unix(sudo:session): session closed for user root
Nov 25 16:46:13 compute-0 podman[340623]: 2025-11-25 16:46:13.229488422 +0000 UTC m=+0.087126818 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS)
Nov 25 16:46:13 compute-0 sudo[340690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 16:46:13 compute-0 sudo[340690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:46:13 compute-0 podman[340683]: 2025-11-25 16:46:13.300896162 +0000 UTC m=+0.080101597 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 16:46:13 compute-0 nova_compute[254092]: 2025-11-25 16:46:13.442 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:13 compute-0 nova_compute[254092]: 2025-11-25 16:46:13.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:46:13 compute-0 nova_compute[254092]: 2025-11-25 16:46:13.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:46:13 compute-0 nova_compute[254092]: 2025-11-25 16:46:13.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:46:13 compute-0 nova_compute[254092]: 2025-11-25 16:46:13.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 16:46:13 compute-0 nova_compute[254092]: 2025-11-25 16:46:13.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:46:13 compute-0 nova_compute[254092]: 2025-11-25 16:46:13.515 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:13 compute-0 nova_compute[254092]: 2025-11-25 16:46:13.515 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:13 compute-0 nova_compute[254092]: 2025-11-25 16:46:13.515 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:13 compute-0 nova_compute[254092]: 2025-11-25 16:46:13.516 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 16:46:13 compute-0 nova_compute[254092]: 2025-11-25 16:46:13.516 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:46:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:13.623 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:13.625 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:13.627 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1825: 321 pgs: 321 active+clean; 275 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.2 MiB/s wr, 223 op/s
Nov 25 16:46:13 compute-0 sudo[340690]: pam_unix(sudo:session): session closed for user root
Nov 25 16:46:13 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:46:13 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:46:13 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 16:46:13 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:46:13 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 16:46:13 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:46:13 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev e5237423-668b-4c56-9e1e-f79e42924a6b does not exist
Nov 25 16:46:13 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 064cd9a6-8c20-4b48-acca-7fa3599e471d does not exist
Nov 25 16:46:13 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev d69545ed-06d4-41e3-b586-0867533a2eba does not exist
Nov 25 16:46:13 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 16:46:13 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:46:13 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 16:46:13 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:46:13 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:46:13 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:46:13 compute-0 sudo[340786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:46:13 compute-0 sudo[340786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:46:13 compute-0 sudo[340786]: pam_unix(sudo:session): session closed for user root
Nov 25 16:46:13 compute-0 sudo[340811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:46:13 compute-0 sudo[340811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:46:13 compute-0 sudo[340811]: pam_unix(sudo:session): session closed for user root
Nov 25 16:46:13 compute-0 sudo[340836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:46:13 compute-0 sudo[340836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:46:13 compute-0 sudo[340836]: pam_unix(sudo:session): session closed for user root
Nov 25 16:46:13 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:46:13 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/296010038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:46:14 compute-0 nova_compute[254092]: 2025-11-25 16:46:14.000 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:46:14 compute-0 sudo[340861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 16:46:14 compute-0 sudo[340861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:46:14 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:46:14 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:46:14 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:46:14 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:46:14 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:46:14 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:46:14 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/296010038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:46:14 compute-0 nova_compute[254092]: 2025-11-25 16:46:14.083 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:46:14 compute-0 nova_compute[254092]: 2025-11-25 16:46:14.084 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:46:14 compute-0 nova_compute[254092]: 2025-11-25 16:46:14.088 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000048 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:46:14 compute-0 nova_compute[254092]: 2025-11-25 16:46:14.088 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000048 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:46:14 compute-0 nova_compute[254092]: 2025-11-25 16:46:14.092 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000004e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:46:14 compute-0 nova_compute[254092]: 2025-11-25 16:46:14.092 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000004e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:46:14 compute-0 nova_compute[254092]: 2025-11-25 16:46:14.270 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:46:14 compute-0 nova_compute[254092]: 2025-11-25 16:46:14.272 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3756MB free_disk=59.85932540893555GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 16:46:14 compute-0 nova_compute[254092]: 2025-11-25 16:46:14.272 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:14 compute-0 nova_compute[254092]: 2025-11-25 16:46:14.272 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:14 compute-0 nova_compute[254092]: 2025-11-25 16:46:14.337 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 01f96314-1fbe-4eee-a4ed-db7f448a5320 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:46:14 compute-0 nova_compute[254092]: 2025-11-25 16:46:14.337 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 98410ff5-26ab-4406-8d1b-063d9e114cf8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:46:14 compute-0 nova_compute[254092]: 2025-11-25 16:46:14.337 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 6b676874-6857-4021-9d83-c3673f57cebb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:46:14 compute-0 nova_compute[254092]: 2025-11-25 16:46:14.337 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 8b20d119-17cb-4742-9223-90e5020f93a7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:46:14 compute-0 nova_compute[254092]: 2025-11-25 16:46:14.338 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 16:46:14 compute-0 nova_compute[254092]: 2025-11-25 16:46:14.338 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 16:46:14 compute-0 podman[340929]: 2025-11-25 16:46:14.346616141 +0000 UTC m=+0.037007207 container create dcc0ea7d33677e65e9d1910fd6d9a2086f3d03fb7c115c10f691c85ba53248ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_albattani, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:46:14 compute-0 systemd[1]: Started libpod-conmon-dcc0ea7d33677e65e9d1910fd6d9a2086f3d03fb7c115c10f691c85ba53248ab.scope.
Nov 25 16:46:14 compute-0 nova_compute[254092]: 2025-11-25 16:46:14.406 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:46:14 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:46:14 compute-0 podman[340929]: 2025-11-25 16:46:14.332389244 +0000 UTC m=+0.022780330 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:46:14 compute-0 podman[340929]: 2025-11-25 16:46:14.430384537 +0000 UTC m=+0.120775633 container init dcc0ea7d33677e65e9d1910fd6d9a2086f3d03fb7c115c10f691c85ba53248ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_albattani, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:46:14 compute-0 podman[340929]: 2025-11-25 16:46:14.43750767 +0000 UTC m=+0.127898736 container start dcc0ea7d33677e65e9d1910fd6d9a2086f3d03fb7c115c10f691c85ba53248ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_albattani, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 16:46:14 compute-0 podman[340929]: 2025-11-25 16:46:14.44121053 +0000 UTC m=+0.131601596 container attach dcc0ea7d33677e65e9d1910fd6d9a2086f3d03fb7c115c10f691c85ba53248ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_albattani, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 16:46:14 compute-0 eloquent_albattani[340946]: 167 167
Nov 25 16:46:14 compute-0 systemd[1]: libpod-dcc0ea7d33677e65e9d1910fd6d9a2086f3d03fb7c115c10f691c85ba53248ab.scope: Deactivated successfully.
Nov 25 16:46:14 compute-0 podman[340929]: 2025-11-25 16:46:14.443898544 +0000 UTC m=+0.134289630 container died dcc0ea7d33677e65e9d1910fd6d9a2086f3d03fb7c115c10f691c85ba53248ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_albattani, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:46:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-3bc864873b7b8fcc1b4e172e7de30d68803c2bd4b501c8fa1966d9057b4b531f-merged.mount: Deactivated successfully.
Nov 25 16:46:14 compute-0 podman[340929]: 2025-11-25 16:46:14.487511859 +0000 UTC m=+0.177902935 container remove dcc0ea7d33677e65e9d1910fd6d9a2086f3d03fb7c115c10f691c85ba53248ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_albattani, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:46:14 compute-0 systemd[1]: libpod-conmon-dcc0ea7d33677e65e9d1910fd6d9a2086f3d03fb7c115c10f691c85ba53248ab.scope: Deactivated successfully.
Nov 25 16:46:14 compute-0 podman[340988]: 2025-11-25 16:46:14.683179827 +0000 UTC m=+0.059199000 container create 432c2c93dfb4000d80f985ab913d2fa59218dfe8b8e6ec8be767244f094d5f5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_einstein, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Nov 25 16:46:14 compute-0 systemd[1]: Started libpod-conmon-432c2c93dfb4000d80f985ab913d2fa59218dfe8b8e6ec8be767244f094d5f5e.scope.
Nov 25 16:46:14 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:46:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce4ae85e5b372db8b856f0f783ede7bd7b3dd6790f068b67ebf332eb7223f2cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:46:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce4ae85e5b372db8b856f0f783ede7bd7b3dd6790f068b67ebf332eb7223f2cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:46:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce4ae85e5b372db8b856f0f783ede7bd7b3dd6790f068b67ebf332eb7223f2cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:46:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce4ae85e5b372db8b856f0f783ede7bd7b3dd6790f068b67ebf332eb7223f2cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:46:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce4ae85e5b372db8b856f0f783ede7bd7b3dd6790f068b67ebf332eb7223f2cb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 16:46:14 compute-0 podman[340988]: 2025-11-25 16:46:14.662022551 +0000 UTC m=+0.038041744 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:46:14 compute-0 podman[340988]: 2025-11-25 16:46:14.768251358 +0000 UTC m=+0.144270561 container init 432c2c93dfb4000d80f985ab913d2fa59218dfe8b8e6ec8be767244f094d5f5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_einstein, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:46:14 compute-0 podman[340988]: 2025-11-25 16:46:14.776261036 +0000 UTC m=+0.152280209 container start 432c2c93dfb4000d80f985ab913d2fa59218dfe8b8e6ec8be767244f094d5f5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_einstein, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:46:14 compute-0 podman[340988]: 2025-11-25 16:46:14.778887217 +0000 UTC m=+0.154906480 container attach 432c2c93dfb4000d80f985ab913d2fa59218dfe8b8e6ec8be767244f094d5f5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_einstein, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:46:14 compute-0 nova_compute[254092]: 2025-11-25 16:46:14.801 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:46:14 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2668563585' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:46:14 compute-0 nova_compute[254092]: 2025-11-25 16:46:14.892 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:46:14 compute-0 nova_compute[254092]: 2025-11-25 16:46:14.899 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:46:14 compute-0 nova_compute[254092]: 2025-11-25 16:46:14.912 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:46:14 compute-0 nova_compute[254092]: 2025-11-25 16:46:14.929 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 16:46:14 compute-0 nova_compute[254092]: 2025-11-25 16:46:14.929 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:15 compute-0 ceph-mon[74985]: pgmap v1825: 321 pgs: 321 active+clean; 275 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.2 MiB/s wr, 223 op/s
Nov 25 16:46:15 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2668563585' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:46:15 compute-0 ovn_controller[153477]: 2025-11-25T16:46:15Z|00797|binding|INFO|Releasing lport db192ec3-55c1-4137-aaad-99a175bfa879 from this chassis (sb_readonly=0)
Nov 25 16:46:15 compute-0 nova_compute[254092]: 2025-11-25 16:46:15.166 254096 DEBUG nova.network.neutron [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Successfully created port: 419102e4-bcb4-496b-b45c-fba9f5525746 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:46:15 compute-0 nova_compute[254092]: 2025-11-25 16:46:15.175 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1826: 321 pgs: 321 active+clean; 295 MiB data, 722 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 248 op/s
Nov 25 16:46:15 compute-0 clever_einstein[341005]: --> passed data devices: 0 physical, 3 LVM
Nov 25 16:46:15 compute-0 clever_einstein[341005]: --> relative data size: 1.0
Nov 25 16:46:15 compute-0 clever_einstein[341005]: --> All data devices are unavailable
Nov 25 16:46:15 compute-0 systemd[1]: libpod-432c2c93dfb4000d80f985ab913d2fa59218dfe8b8e6ec8be767244f094d5f5e.scope: Deactivated successfully.
Nov 25 16:46:15 compute-0 systemd[1]: libpod-432c2c93dfb4000d80f985ab913d2fa59218dfe8b8e6ec8be767244f094d5f5e.scope: Consumed 1.067s CPU time.
Nov 25 16:46:15 compute-0 podman[340988]: 2025-11-25 16:46:15.925961049 +0000 UTC m=+1.301980622 container died 432c2c93dfb4000d80f985ab913d2fa59218dfe8b8e6ec8be767244f094d5f5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_einstein, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:46:15 compute-0 nova_compute[254092]: 2025-11-25 16:46:15.930 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:46:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce4ae85e5b372db8b856f0f783ede7bd7b3dd6790f068b67ebf332eb7223f2cb-merged.mount: Deactivated successfully.
Nov 25 16:46:15 compute-0 podman[340988]: 2025-11-25 16:46:15.999267452 +0000 UTC m=+1.375286625 container remove 432c2c93dfb4000d80f985ab913d2fa59218dfe8b8e6ec8be767244f094d5f5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:46:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:46:16 compute-0 systemd[1]: libpod-conmon-432c2c93dfb4000d80f985ab913d2fa59218dfe8b8e6ec8be767244f094d5f5e.scope: Deactivated successfully.
Nov 25 16:46:16 compute-0 sudo[340861]: pam_unix(sudo:session): session closed for user root
Nov 25 16:46:16 compute-0 sudo[341047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:46:16 compute-0 sudo[341047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:46:16 compute-0 sudo[341047]: pam_unix(sudo:session): session closed for user root
Nov 25 16:46:16 compute-0 sudo[341072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:46:16 compute-0 sudo[341072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:46:16 compute-0 nova_compute[254092]: 2025-11-25 16:46:16.147 254096 DEBUG nova.compute.manager [req-ae5670ab-86f6-4079-873e-8742d770e05a req-a22575a1-3ec5-4f1c-a950-462c4c314309 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received event network-vif-unplugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:46:16 compute-0 nova_compute[254092]: 2025-11-25 16:46:16.148 254096 DEBUG oslo_concurrency.lockutils [req-ae5670ab-86f6-4079-873e-8742d770e05a req-a22575a1-3ec5-4f1c-a950-462c4c314309 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:16 compute-0 nova_compute[254092]: 2025-11-25 16:46:16.148 254096 DEBUG oslo_concurrency.lockutils [req-ae5670ab-86f6-4079-873e-8742d770e05a req-a22575a1-3ec5-4f1c-a950-462c4c314309 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:16 compute-0 nova_compute[254092]: 2025-11-25 16:46:16.148 254096 DEBUG oslo_concurrency.lockutils [req-ae5670ab-86f6-4079-873e-8742d770e05a req-a22575a1-3ec5-4f1c-a950-462c4c314309 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:16 compute-0 nova_compute[254092]: 2025-11-25 16:46:16.148 254096 DEBUG nova.compute.manager [req-ae5670ab-86f6-4079-873e-8742d770e05a req-a22575a1-3ec5-4f1c-a950-462c4c314309 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] No waiting events found dispatching network-vif-unplugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:46:16 compute-0 nova_compute[254092]: 2025-11-25 16:46:16.148 254096 WARNING nova.compute.manager [req-ae5670ab-86f6-4079-873e-8742d770e05a req-a22575a1-3ec5-4f1c-a950-462c4c314309 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received unexpected event network-vif-unplugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 for instance with vm_state stopped and task_state None.
Nov 25 16:46:16 compute-0 nova_compute[254092]: 2025-11-25 16:46:16.149 254096 DEBUG nova.compute.manager [req-ae5670ab-86f6-4079-873e-8742d770e05a req-a22575a1-3ec5-4f1c-a950-462c4c314309 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:46:16 compute-0 nova_compute[254092]: 2025-11-25 16:46:16.149 254096 DEBUG oslo_concurrency.lockutils [req-ae5670ab-86f6-4079-873e-8742d770e05a req-a22575a1-3ec5-4f1c-a950-462c4c314309 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:16 compute-0 sudo[341072]: pam_unix(sudo:session): session closed for user root
Nov 25 16:46:16 compute-0 nova_compute[254092]: 2025-11-25 16:46:16.149 254096 DEBUG oslo_concurrency.lockutils [req-ae5670ab-86f6-4079-873e-8742d770e05a req-a22575a1-3ec5-4f1c-a950-462c4c314309 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:16 compute-0 nova_compute[254092]: 2025-11-25 16:46:16.149 254096 DEBUG oslo_concurrency.lockutils [req-ae5670ab-86f6-4079-873e-8742d770e05a req-a22575a1-3ec5-4f1c-a950-462c4c314309 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:16 compute-0 nova_compute[254092]: 2025-11-25 16:46:16.149 254096 DEBUG nova.compute.manager [req-ae5670ab-86f6-4079-873e-8742d770e05a req-a22575a1-3ec5-4f1c-a950-462c4c314309 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] No waiting events found dispatching network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:46:16 compute-0 nova_compute[254092]: 2025-11-25 16:46:16.150 254096 WARNING nova.compute.manager [req-ae5670ab-86f6-4079-873e-8742d770e05a req-a22575a1-3ec5-4f1c-a950-462c4c314309 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received unexpected event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 for instance with vm_state stopped and task_state None.
Nov 25 16:46:16 compute-0 sudo[341097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:46:16 compute-0 sudo[341097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:46:16 compute-0 sudo[341097]: pam_unix(sudo:session): session closed for user root
Nov 25 16:46:16 compute-0 sudo[341122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 16:46:16 compute-0 sudo[341122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:46:16 compute-0 nova_compute[254092]: 2025-11-25 16:46:16.268 254096 DEBUG oslo_concurrency.lockutils [None req-9b568295-e7ab-4a72-ad83-33ac6a1dd319 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Acquiring lock "6b676874-6857-4021-9d83-c3673f57cebb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:16 compute-0 nova_compute[254092]: 2025-11-25 16:46:16.269 254096 DEBUG oslo_concurrency.lockutils [None req-9b568295-e7ab-4a72-ad83-33ac6a1dd319 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "6b676874-6857-4021-9d83-c3673f57cebb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:16 compute-0 nova_compute[254092]: 2025-11-25 16:46:16.269 254096 DEBUG oslo_concurrency.lockutils [None req-9b568295-e7ab-4a72-ad83-33ac6a1dd319 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Acquiring lock "6b676874-6857-4021-9d83-c3673f57cebb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:16 compute-0 nova_compute[254092]: 2025-11-25 16:46:16.269 254096 DEBUG oslo_concurrency.lockutils [None req-9b568295-e7ab-4a72-ad83-33ac6a1dd319 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "6b676874-6857-4021-9d83-c3673f57cebb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:16 compute-0 nova_compute[254092]: 2025-11-25 16:46:16.269 254096 DEBUG oslo_concurrency.lockutils [None req-9b568295-e7ab-4a72-ad83-33ac6a1dd319 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "6b676874-6857-4021-9d83-c3673f57cebb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:16 compute-0 nova_compute[254092]: 2025-11-25 16:46:16.270 254096 INFO nova.compute.manager [None req-9b568295-e7ab-4a72-ad83-33ac6a1dd319 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Terminating instance
Nov 25 16:46:16 compute-0 nova_compute[254092]: 2025-11-25 16:46:16.271 254096 DEBUG nova.compute.manager [None req-9b568295-e7ab-4a72-ad83-33ac6a1dd319 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:46:16 compute-0 nova_compute[254092]: 2025-11-25 16:46:16.278 254096 INFO nova.virt.libvirt.driver [-] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Instance destroyed successfully.
Nov 25 16:46:16 compute-0 nova_compute[254092]: 2025-11-25 16:46:16.279 254096 DEBUG nova.objects.instance [None req-9b568295-e7ab-4a72-ad83-33ac6a1dd319 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lazy-loading 'resources' on Instance uuid 6b676874-6857-4021-9d83-c3673f57cebb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:46:16 compute-0 nova_compute[254092]: 2025-11-25 16:46:16.291 254096 DEBUG nova.virt.libvirt.vif [None req-9b568295-e7ab-4a72-ad83-33ac6a1dd319 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:45:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-666063642',display_name='tempest-tempest.common.compute-instance-666063642',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-666063642',id=82,image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:46:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d4964e211a6d4699ab499f7cadee8a8d',ramdisk_id='',reservation_id='r-2qewqols',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_
model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-878981139',owner_user_name='tempest-ServerActionsTestOtherA-878981139-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:46:09Z,user_data=None,user_id='7bd4800c25cd462b9365649e599d0a0e',uuid=6b676874-6857-4021-9d83-c3673f57cebb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "0ebbbca7-8751-4479-a892-216433a26e74", "address": "fa:16:3e:09:27:70", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ebbbca7-87", "ovs_interfaceid": "0ebbbca7-8751-4479-a892-216433a26e74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:46:16 compute-0 nova_compute[254092]: 2025-11-25 16:46:16.291 254096 DEBUG nova.network.os_vif_util [None req-9b568295-e7ab-4a72-ad83-33ac6a1dd319 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Converting VIF {"id": "0ebbbca7-8751-4479-a892-216433a26e74", "address": "fa:16:3e:09:27:70", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0ebbbca7-87", "ovs_interfaceid": "0ebbbca7-8751-4479-a892-216433a26e74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:46:16 compute-0 nova_compute[254092]: 2025-11-25 16:46:16.292 254096 DEBUG nova.network.os_vif_util [None req-9b568295-e7ab-4a72-ad83-33ac6a1dd319 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:27:70,bridge_name='br-int',has_traffic_filtering=True,id=0ebbbca7-8751-4479-a892-216433a26e74,network=Network(290484fa-908f-44de-87e4-4f5bc85c5679),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ebbbca7-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:46:16 compute-0 nova_compute[254092]: 2025-11-25 16:46:16.292 254096 DEBUG os_vif [None req-9b568295-e7ab-4a72-ad83-33ac6a1dd319 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:27:70,bridge_name='br-int',has_traffic_filtering=True,id=0ebbbca7-8751-4479-a892-216433a26e74,network=Network(290484fa-908f-44de-87e4-4f5bc85c5679),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ebbbca7-87') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:46:16 compute-0 nova_compute[254092]: 2025-11-25 16:46:16.294 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:16 compute-0 nova_compute[254092]: 2025-11-25 16:46:16.294 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0ebbbca7-87, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:16 compute-0 nova_compute[254092]: 2025-11-25 16:46:16.297 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:46:16 compute-0 nova_compute[254092]: 2025-11-25 16:46:16.303 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:16 compute-0 nova_compute[254092]: 2025-11-25 16:46:16.306 254096 INFO os_vif [None req-9b568295-e7ab-4a72-ad83-33ac6a1dd319 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:27:70,bridge_name='br-int',has_traffic_filtering=True,id=0ebbbca7-8751-4479-a892-216433a26e74,network=Network(290484fa-908f-44de-87e4-4f5bc85c5679),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0ebbbca7-87')
Nov 25 16:46:16 compute-0 nova_compute[254092]: 2025-11-25 16:46:16.535 254096 DEBUG nova.objects.instance [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'flavor' on Instance uuid 01f96314-1fbe-4eee-a4ed-db7f448a5320 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:46:16 compute-0 nova_compute[254092]: 2025-11-25 16:46:16.558 254096 DEBUG oslo_concurrency.lockutils [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquiring lock "refresh_cache-01f96314-1fbe-4eee-a4ed-db7f448a5320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:46:16 compute-0 nova_compute[254092]: 2025-11-25 16:46:16.559 254096 DEBUG oslo_concurrency.lockutils [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquired lock "refresh_cache-01f96314-1fbe-4eee-a4ed-db7f448a5320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:46:16 compute-0 nova_compute[254092]: 2025-11-25 16:46:16.559 254096 DEBUG nova.network.neutron [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:46:16 compute-0 nova_compute[254092]: 2025-11-25 16:46:16.560 254096 DEBUG nova.objects.instance [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'info_cache' on Instance uuid 01f96314-1fbe-4eee-a4ed-db7f448a5320 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:46:16 compute-0 podman[341206]: 2025-11-25 16:46:16.615077066 +0000 UTC m=+0.045963400 container create 2be3bc3b0c07824a3e0a4691ca50fe840839b6f7fd88143c1d21e018fe68ee0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kapitsa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:46:16 compute-0 systemd[1]: Started libpod-conmon-2be3bc3b0c07824a3e0a4691ca50fe840839b6f7fd88143c1d21e018fe68ee0b.scope.
Nov 25 16:46:16 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:46:16 compute-0 podman[341206]: 2025-11-25 16:46:16.593828659 +0000 UTC m=+0.024714983 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:46:16 compute-0 podman[341206]: 2025-11-25 16:46:16.692391297 +0000 UTC m=+0.123277641 container init 2be3bc3b0c07824a3e0a4691ca50fe840839b6f7fd88143c1d21e018fe68ee0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kapitsa, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 16:46:16 compute-0 podman[341206]: 2025-11-25 16:46:16.700694013 +0000 UTC m=+0.131580337 container start 2be3bc3b0c07824a3e0a4691ca50fe840839b6f7fd88143c1d21e018fe68ee0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 16:46:16 compute-0 vibrant_kapitsa[341223]: 167 167
Nov 25 16:46:16 compute-0 systemd[1]: libpod-2be3bc3b0c07824a3e0a4691ca50fe840839b6f7fd88143c1d21e018fe68ee0b.scope: Deactivated successfully.
Nov 25 16:46:16 compute-0 podman[341206]: 2025-11-25 16:46:16.706981453 +0000 UTC m=+0.137867797 container attach 2be3bc3b0c07824a3e0a4691ca50fe840839b6f7fd88143c1d21e018fe68ee0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kapitsa, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:46:16 compute-0 podman[341206]: 2025-11-25 16:46:16.707547048 +0000 UTC m=+0.138433372 container died 2be3bc3b0c07824a3e0a4691ca50fe840839b6f7fd88143c1d21e018fe68ee0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:46:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1614285f114a4085c405c7b300baf05f1fa59d96c26ccfb44d66f0de3e2a045-merged.mount: Deactivated successfully.
Nov 25 16:46:16 compute-0 nova_compute[254092]: 2025-11-25 16:46:16.736 254096 INFO nova.virt.libvirt.driver [None req-9b568295-e7ab-4a72-ad83-33ac6a1dd319 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Deleting instance files /var/lib/nova/instances/6b676874-6857-4021-9d83-c3673f57cebb_del
Nov 25 16:46:16 compute-0 nova_compute[254092]: 2025-11-25 16:46:16.737 254096 INFO nova.virt.libvirt.driver [None req-9b568295-e7ab-4a72-ad83-33ac6a1dd319 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Deletion of /var/lib/nova/instances/6b676874-6857-4021-9d83-c3673f57cebb_del complete
Nov 25 16:46:16 compute-0 podman[341206]: 2025-11-25 16:46:16.747095204 +0000 UTC m=+0.177981528 container remove 2be3bc3b0c07824a3e0a4691ca50fe840839b6f7fd88143c1d21e018fe68ee0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kapitsa, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:46:16 compute-0 systemd[1]: libpod-conmon-2be3bc3b0c07824a3e0a4691ca50fe840839b6f7fd88143c1d21e018fe68ee0b.scope: Deactivated successfully.
Nov 25 16:46:16 compute-0 nova_compute[254092]: 2025-11-25 16:46:16.795 254096 INFO nova.compute.manager [None req-9b568295-e7ab-4a72-ad83-33ac6a1dd319 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Took 0.52 seconds to destroy the instance on the hypervisor.
Nov 25 16:46:16 compute-0 nova_compute[254092]: 2025-11-25 16:46:16.796 254096 DEBUG oslo.service.loopingcall [None req-9b568295-e7ab-4a72-ad83-33ac6a1dd319 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:46:16 compute-0 nova_compute[254092]: 2025-11-25 16:46:16.796 254096 DEBUG nova.compute.manager [-] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:46:16 compute-0 nova_compute[254092]: 2025-11-25 16:46:16.796 254096 DEBUG nova.network.neutron [-] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:46:16 compute-0 podman[341247]: 2025-11-25 16:46:16.927165637 +0000 UTC m=+0.041867229 container create 0b3fa32f67049bcc1a82d098ac6261be525abc63fed83988a536744cbfbfffbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 16:46:16 compute-0 systemd[1]: Started libpod-conmon-0b3fa32f67049bcc1a82d098ac6261be525abc63fed83988a536744cbfbfffbc.scope.
Nov 25 16:46:16 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:46:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4689e56054553d621f23b1702ed6e0cf25c30147d868bd2efdce1472c63dacc5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:46:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4689e56054553d621f23b1702ed6e0cf25c30147d868bd2efdce1472c63dacc5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:46:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4689e56054553d621f23b1702ed6e0cf25c30147d868bd2efdce1472c63dacc5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:46:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4689e56054553d621f23b1702ed6e0cf25c30147d868bd2efdce1472c63dacc5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:46:17 compute-0 podman[341247]: 2025-11-25 16:46:16.907417681 +0000 UTC m=+0.022119293 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:46:17 compute-0 podman[341247]: 2025-11-25 16:46:17.016899566 +0000 UTC m=+0.131601168 container init 0b3fa32f67049bcc1a82d098ac6261be525abc63fed83988a536744cbfbfffbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lichterman, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 16:46:17 compute-0 podman[341247]: 2025-11-25 16:46:17.022850947 +0000 UTC m=+0.137552529 container start 0b3fa32f67049bcc1a82d098ac6261be525abc63fed83988a536744cbfbfffbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 16:46:17 compute-0 podman[341247]: 2025-11-25 16:46:17.026879106 +0000 UTC m=+0.141580718 container attach 0b3fa32f67049bcc1a82d098ac6261be525abc63fed83988a536744cbfbfffbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:46:17 compute-0 ceph-mon[74985]: pgmap v1826: 321 pgs: 321 active+clean; 295 MiB data, 722 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 248 op/s
Nov 25 16:46:17 compute-0 nova_compute[254092]: 2025-11-25 16:46:17.325 254096 DEBUG nova.network.neutron [-] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:46:17 compute-0 nova_compute[254092]: 2025-11-25 16:46:17.344 254096 INFO nova.compute.manager [-] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Took 0.55 seconds to deallocate network for instance.
Nov 25 16:46:17 compute-0 nova_compute[254092]: 2025-11-25 16:46:17.351 254096 DEBUG nova.network.neutron [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Successfully updated port: 419102e4-bcb4-496b-b45c-fba9f5525746 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:46:17 compute-0 nova_compute[254092]: 2025-11-25 16:46:17.371 254096 DEBUG oslo_concurrency.lockutils [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "refresh_cache-8b20d119-17cb-4742-9223-90e5020f93a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:46:17 compute-0 nova_compute[254092]: 2025-11-25 16:46:17.371 254096 DEBUG oslo_concurrency.lockutils [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquired lock "refresh_cache-8b20d119-17cb-4742-9223-90e5020f93a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:46:17 compute-0 nova_compute[254092]: 2025-11-25 16:46:17.371 254096 DEBUG nova.network.neutron [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:46:17 compute-0 nova_compute[254092]: 2025-11-25 16:46:17.392 254096 DEBUG oslo_concurrency.lockutils [None req-9b568295-e7ab-4a72-ad83-33ac6a1dd319 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:17 compute-0 nova_compute[254092]: 2025-11-25 16:46:17.392 254096 DEBUG oslo_concurrency.lockutils [None req-9b568295-e7ab-4a72-ad83-33ac6a1dd319 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:17 compute-0 nova_compute[254092]: 2025-11-25 16:46:17.485 254096 DEBUG oslo_concurrency.processutils [None req-9b568295-e7ab-4a72-ad83-33ac6a1dd319 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:46:17 compute-0 nova_compute[254092]: 2025-11-25 16:46:17.514 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:46:17 compute-0 nova_compute[254092]: 2025-11-25 16:46:17.515 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 16:46:17 compute-0 nova_compute[254092]: 2025-11-25 16:46:17.574 254096 DEBUG nova.compute.manager [req-8101d03a-bf96-42b9-992c-91c60c72a4e9 req-f936f1bd-b70b-4dee-838b-3c53df722e39 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Received event network-changed-419102e4-bcb4-496b-b45c-fba9f5525746 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:46:17 compute-0 nova_compute[254092]: 2025-11-25 16:46:17.575 254096 DEBUG nova.compute.manager [req-8101d03a-bf96-42b9-992c-91c60c72a4e9 req-f936f1bd-b70b-4dee-838b-3c53df722e39 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Refreshing instance network info cache due to event network-changed-419102e4-bcb4-496b-b45c-fba9f5525746. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:46:17 compute-0 nova_compute[254092]: 2025-11-25 16:46:17.575 254096 DEBUG oslo_concurrency.lockutils [req-8101d03a-bf96-42b9-992c-91c60c72a4e9 req-f936f1bd-b70b-4dee-838b-3c53df722e39 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-8b20d119-17cb-4742-9223-90e5020f93a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:46:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1827: 321 pgs: 321 active+clean; 295 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 2.2 MiB/s wr, 101 op/s
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]: {
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:     "0": [
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:         {
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:             "devices": [
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:                 "/dev/loop3"
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:             ],
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:             "lv_name": "ceph_lv0",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:             "lv_size": "21470642176",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:             "name": "ceph_lv0",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:             "tags": {
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:                 "ceph.cluster_name": "ceph",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:                 "ceph.crush_device_class": "",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:                 "ceph.encrypted": "0",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:                 "ceph.osd_id": "0",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:                 "ceph.type": "block",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:                 "ceph.vdo": "0"
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:             },
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:             "type": "block",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:             "vg_name": "ceph_vg0"
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:         }
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:     ],
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:     "1": [
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:         {
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:             "devices": [
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:                 "/dev/loop4"
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:             ],
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:             "lv_name": "ceph_lv1",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:             "lv_size": "21470642176",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:             "name": "ceph_lv1",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:             "tags": {
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:                 "ceph.cluster_name": "ceph",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:                 "ceph.crush_device_class": "",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:                 "ceph.encrypted": "0",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:                 "ceph.osd_id": "1",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:                 "ceph.type": "block",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:                 "ceph.vdo": "0"
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:             },
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:             "type": "block",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:             "vg_name": "ceph_vg1"
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:         }
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:     ],
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:     "2": [
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:         {
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:             "devices": [
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:                 "/dev/loop5"
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:             ],
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:             "lv_name": "ceph_lv2",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:             "lv_size": "21470642176",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:             "name": "ceph_lv2",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:             "tags": {
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:                 "ceph.cluster_name": "ceph",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:                 "ceph.crush_device_class": "",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:                 "ceph.encrypted": "0",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:                 "ceph.osd_id": "2",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:                 "ceph.type": "block",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:                 "ceph.vdo": "0"
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:             },
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:             "type": "block",
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:             "vg_name": "ceph_vg2"
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:         }
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]:     ]
Nov 25 16:46:17 compute-0 eloquent_lichterman[341264]: }
Nov 25 16:46:17 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:46:17 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1336404924' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:46:17 compute-0 systemd[1]: libpod-0b3fa32f67049bcc1a82d098ac6261be525abc63fed83988a536744cbfbfffbc.scope: Deactivated successfully.
Nov 25 16:46:17 compute-0 podman[341247]: 2025-11-25 16:46:17.926861444 +0000 UTC m=+1.041563016 container died 0b3fa32f67049bcc1a82d098ac6261be525abc63fed83988a536744cbfbfffbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lichterman, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 16:46:17 compute-0 nova_compute[254092]: 2025-11-25 16:46:17.940 254096 DEBUG oslo_concurrency.processutils [None req-9b568295-e7ab-4a72-ad83-33ac6a1dd319 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:46:17 compute-0 nova_compute[254092]: 2025-11-25 16:46:17.947 254096 DEBUG nova.compute.provider_tree [None req-9b568295-e7ab-4a72-ad83-33ac6a1dd319 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:46:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-4689e56054553d621f23b1702ed6e0cf25c30147d868bd2efdce1472c63dacc5-merged.mount: Deactivated successfully.
Nov 25 16:46:17 compute-0 nova_compute[254092]: 2025-11-25 16:46:17.964 254096 DEBUG nova.scheduler.client.report [None req-9b568295-e7ab-4a72-ad83-33ac6a1dd319 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:46:17 compute-0 podman[341247]: 2025-11-25 16:46:17.980825941 +0000 UTC m=+1.095527523 container remove 0b3fa32f67049bcc1a82d098ac6261be525abc63fed83988a536744cbfbfffbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 16:46:17 compute-0 systemd[1]: libpod-conmon-0b3fa32f67049bcc1a82d098ac6261be525abc63fed83988a536744cbfbfffbc.scope: Deactivated successfully.
Nov 25 16:46:17 compute-0 nova_compute[254092]: 2025-11-25 16:46:17.995 254096 DEBUG oslo_concurrency.lockutils [None req-9b568295-e7ab-4a72-ad83-33ac6a1dd319 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:18 compute-0 sudo[341122]: pam_unix(sudo:session): session closed for user root
Nov 25 16:46:18 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1336404924' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:46:18 compute-0 nova_compute[254092]: 2025-11-25 16:46:18.044 254096 INFO nova.scheduler.client.report [None req-9b568295-e7ab-4a72-ad83-33ac6a1dd319 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Deleted allocations for instance 6b676874-6857-4021-9d83-c3673f57cebb
Nov 25 16:46:18 compute-0 sudo[341309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:46:18 compute-0 nova_compute[254092]: 2025-11-25 16:46:18.107 254096 DEBUG nova.network.neutron [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:46:18 compute-0 sudo[341309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:46:18 compute-0 sudo[341309]: pam_unix(sudo:session): session closed for user root
Nov 25 16:46:18 compute-0 nova_compute[254092]: 2025-11-25 16:46:18.113 254096 DEBUG oslo_concurrency.lockutils [None req-9b568295-e7ab-4a72-ad83-33ac6a1dd319 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "6b676874-6857-4021-9d83-c3673f57cebb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.844s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:18 compute-0 sudo[341334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:46:18 compute-0 sudo[341334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:46:18 compute-0 sudo[341334]: pam_unix(sudo:session): session closed for user root
Nov 25 16:46:18 compute-0 nova_compute[254092]: 2025-11-25 16:46:18.219 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-98410ff5-26ab-4406-8d1b-063d9e114cf8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:46:18 compute-0 nova_compute[254092]: 2025-11-25 16:46:18.220 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-98410ff5-26ab-4406-8d1b-063d9e114cf8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:46:18 compute-0 nova_compute[254092]: 2025-11-25 16:46:18.220 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 16:46:18 compute-0 sudo[341359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:46:18 compute-0 sudo[341359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:46:18 compute-0 sudo[341359]: pam_unix(sudo:session): session closed for user root
Nov 25 16:46:18 compute-0 sudo[341384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 16:46:18 compute-0 sudo[341384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:46:18 compute-0 nova_compute[254092]: 2025-11-25 16:46:18.381 254096 DEBUG nova.compute.manager [req-282db85c-2063-4f3a-ba96-a39c79f73822 req-b341578f-2a19-436e-ad78-426dcfc625c3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Received event network-vif-deleted-0ebbbca7-8751-4479-a892-216433a26e74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:46:18 compute-0 nova_compute[254092]: 2025-11-25 16:46:18.444 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:18 compute-0 nova_compute[254092]: 2025-11-25 16:46:18.541 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089163.5393865, b487a735-c096-4a8f-b8ba-fc5b6c055f56 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:46:18 compute-0 nova_compute[254092]: 2025-11-25 16:46:18.541 254096 INFO nova.compute.manager [-] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] VM Stopped (Lifecycle Event)
Nov 25 16:46:18 compute-0 nova_compute[254092]: 2025-11-25 16:46:18.561 254096 DEBUG nova.compute.manager [None req-0f8f119d-1ef1-4cc5-bda6-9ca9f201e355 - - - - - -] [instance: b487a735-c096-4a8f-b8ba-fc5b6c055f56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:46:18 compute-0 podman[341449]: 2025-11-25 16:46:18.591592018 +0000 UTC m=+0.036749419 container create 3d7ff081a0a5c65d6d6cca67efcb785439c7ba404f865e0bdfc96a38c2ee5cc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 16:46:18 compute-0 systemd[1]: Started libpod-conmon-3d7ff081a0a5c65d6d6cca67efcb785439c7ba404f865e0bdfc96a38c2ee5cc0.scope.
Nov 25 16:46:18 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:46:18 compute-0 podman[341449]: 2025-11-25 16:46:18.671472549 +0000 UTC m=+0.116629950 container init 3d7ff081a0a5c65d6d6cca67efcb785439c7ba404f865e0bdfc96a38c2ee5cc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_greider, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Nov 25 16:46:18 compute-0 podman[341449]: 2025-11-25 16:46:18.577101945 +0000 UTC m=+0.022259366 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:46:18 compute-0 podman[341449]: 2025-11-25 16:46:18.67887179 +0000 UTC m=+0.124029191 container start 3d7ff081a0a5c65d6d6cca67efcb785439c7ba404f865e0bdfc96a38c2ee5cc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 16:46:18 compute-0 podman[341449]: 2025-11-25 16:46:18.681392518 +0000 UTC m=+0.126549919 container attach 3d7ff081a0a5c65d6d6cca67efcb785439c7ba404f865e0bdfc96a38c2ee5cc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_greider, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:46:18 compute-0 epic_greider[341465]: 167 167
Nov 25 16:46:18 compute-0 systemd[1]: libpod-3d7ff081a0a5c65d6d6cca67efcb785439c7ba404f865e0bdfc96a38c2ee5cc0.scope: Deactivated successfully.
Nov 25 16:46:18 compute-0 podman[341449]: 2025-11-25 16:46:18.684722929 +0000 UTC m=+0.129880340 container died 3d7ff081a0a5c65d6d6cca67efcb785439c7ba404f865e0bdfc96a38c2ee5cc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_greider, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 16:46:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-d88cf9cd2a50991f40ed092e2179ef116f6b65812cc111db726f5c8467267a18-merged.mount: Deactivated successfully.
Nov 25 16:46:18 compute-0 podman[341449]: 2025-11-25 16:46:18.718734663 +0000 UTC m=+0.163892064 container remove 3d7ff081a0a5c65d6d6cca67efcb785439c7ba404f865e0bdfc96a38c2ee5cc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:46:18 compute-0 systemd[1]: libpod-conmon-3d7ff081a0a5c65d6d6cca67efcb785439c7ba404f865e0bdfc96a38c2ee5cc0.scope: Deactivated successfully.
Nov 25 16:46:18 compute-0 podman[341487]: 2025-11-25 16:46:18.892402872 +0000 UTC m=+0.045123417 container create 38bcb4172465ab066712ac45962af3ad29f8aafafac5b619f39bea1dbd629e9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_lamport, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 16:46:18 compute-0 systemd[1]: Started libpod-conmon-38bcb4172465ab066712ac45962af3ad29f8aafafac5b619f39bea1dbd629e9c.scope.
Nov 25 16:46:18 compute-0 podman[341487]: 2025-11-25 16:46:18.869901081 +0000 UTC m=+0.022621646 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:46:18 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:46:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/321f6703b24b06f4948070c71c7890837983376452b955bc5f9f29a8aa6293c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:46:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/321f6703b24b06f4948070c71c7890837983376452b955bc5f9f29a8aa6293c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:46:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/321f6703b24b06f4948070c71c7890837983376452b955bc5f9f29a8aa6293c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:46:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/321f6703b24b06f4948070c71c7890837983376452b955bc5f9f29a8aa6293c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:46:18 compute-0 podman[341487]: 2025-11-25 16:46:18.994003374 +0000 UTC m=+0.146723959 container init 38bcb4172465ab066712ac45962af3ad29f8aafafac5b619f39bea1dbd629e9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_lamport, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:46:19 compute-0 podman[341487]: 2025-11-25 16:46:19.001074386 +0000 UTC m=+0.153794931 container start 38bcb4172465ab066712ac45962af3ad29f8aafafac5b619f39bea1dbd629e9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_lamport, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:46:19 compute-0 podman[341487]: 2025-11-25 16:46:19.005430255 +0000 UTC m=+0.158150810 container attach 38bcb4172465ab066712ac45962af3ad29f8aafafac5b619f39bea1dbd629e9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:46:19 compute-0 ceph-mon[74985]: pgmap v1827: 321 pgs: 321 active+clean; 295 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 2.2 MiB/s wr, 101 op/s
Nov 25 16:46:19 compute-0 nova_compute[254092]: 2025-11-25 16:46:19.209 254096 DEBUG nova.network.neutron [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Updating instance_info_cache with network_info: [{"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:46:19 compute-0 nova_compute[254092]: 2025-11-25 16:46:19.238 254096 DEBUG oslo_concurrency.lockutils [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Releasing lock "refresh_cache-01f96314-1fbe-4eee-a4ed-db7f448a5320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:46:19 compute-0 nova_compute[254092]: 2025-11-25 16:46:19.262 254096 INFO nova.virt.libvirt.driver [-] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Instance destroyed successfully.
Nov 25 16:46:19 compute-0 nova_compute[254092]: 2025-11-25 16:46:19.262 254096 DEBUG nova.objects.instance [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'numa_topology' on Instance uuid 01f96314-1fbe-4eee-a4ed-db7f448a5320 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:46:19 compute-0 nova_compute[254092]: 2025-11-25 16:46:19.275 254096 DEBUG nova.objects.instance [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'resources' on Instance uuid 01f96314-1fbe-4eee-a4ed-db7f448a5320 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:46:19 compute-0 nova_compute[254092]: 2025-11-25 16:46:19.283 254096 DEBUG nova.virt.libvirt.vif [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:42:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1951123052',display_name='tempest-ServerActionsTestJSON-server-1951123052',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1951123052',id=72,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOul8HeAt+UIxCfJfn+60t4YpZV7iSL4LJDORWr6iCR1ov+YI0semWU/ei1rpSzmkTyhB+zsYhvO/JrM/v0aowbnChSU76J6tWISFPQqrNakhVmsB30NDWBI6kYIFzj8+Q==',key_name='tempest-keypair-1452948426',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:43:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b4d15f5aabd3491da5314b126a20225a',ramdisk_id='',reservation_id='r-gjhb48uk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1811453217',owner_user_name='tempest-ServerActionsTestJSON-1811453217-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:46:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aea2d8cf3bb54cdbbc72e41805fb1f90',uuid=01f96314-1fbe-4eee-a4ed-db7f448a5320,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:46:19 compute-0 nova_compute[254092]: 2025-11-25 16:46:19.284 254096 DEBUG nova.network.os_vif_util [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converting VIF {"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:46:19 compute-0 nova_compute[254092]: 2025-11-25 16:46:19.284 254096 DEBUG nova.network.os_vif_util [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:46:19 compute-0 nova_compute[254092]: 2025-11-25 16:46:19.285 254096 DEBUG os_vif [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:46:19 compute-0 nova_compute[254092]: 2025-11-25 16:46:19.287 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:19 compute-0 nova_compute[254092]: 2025-11-25 16:46:19.288 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4fe8c3a9-70, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:19 compute-0 nova_compute[254092]: 2025-11-25 16:46:19.291 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:19 compute-0 nova_compute[254092]: 2025-11-25 16:46:19.293 254096 INFO os_vif [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70')
Nov 25 16:46:19 compute-0 nova_compute[254092]: 2025-11-25 16:46:19.299 254096 DEBUG nova.virt.libvirt.driver [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Start _get_guest_xml network_info=[{"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:46:19 compute-0 nova_compute[254092]: 2025-11-25 16:46:19.304 254096 WARNING nova.virt.libvirt.driver [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:46:19 compute-0 nova_compute[254092]: 2025-11-25 16:46:19.308 254096 DEBUG nova.virt.libvirt.host [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:46:19 compute-0 nova_compute[254092]: 2025-11-25 16:46:19.309 254096 DEBUG nova.virt.libvirt.host [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:46:19 compute-0 nova_compute[254092]: 2025-11-25 16:46:19.312 254096 DEBUG nova.virt.libvirt.host [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:46:19 compute-0 nova_compute[254092]: 2025-11-25 16:46:19.313 254096 DEBUG nova.virt.libvirt.host [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:46:19 compute-0 nova_compute[254092]: 2025-11-25 16:46:19.313 254096 DEBUG nova.virt.libvirt.driver [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:46:19 compute-0 nova_compute[254092]: 2025-11-25 16:46:19.313 254096 DEBUG nova.virt.hardware [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:46:19 compute-0 nova_compute[254092]: 2025-11-25 16:46:19.314 254096 DEBUG nova.virt.hardware [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:46:19 compute-0 nova_compute[254092]: 2025-11-25 16:46:19.314 254096 DEBUG nova.virt.hardware [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:46:19 compute-0 nova_compute[254092]: 2025-11-25 16:46:19.315 254096 DEBUG nova.virt.hardware [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:46:19 compute-0 nova_compute[254092]: 2025-11-25 16:46:19.315 254096 DEBUG nova.virt.hardware [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:46:19 compute-0 nova_compute[254092]: 2025-11-25 16:46:19.315 254096 DEBUG nova.virt.hardware [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:46:19 compute-0 nova_compute[254092]: 2025-11-25 16:46:19.315 254096 DEBUG nova.virt.hardware [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:46:19 compute-0 nova_compute[254092]: 2025-11-25 16:46:19.316 254096 DEBUG nova.virt.hardware [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:46:19 compute-0 nova_compute[254092]: 2025-11-25 16:46:19.316 254096 DEBUG nova.virt.hardware [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:46:19 compute-0 nova_compute[254092]: 2025-11-25 16:46:19.316 254096 DEBUG nova.virt.hardware [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:46:19 compute-0 nova_compute[254092]: 2025-11-25 16:46:19.316 254096 DEBUG nova.virt.hardware [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:46:19 compute-0 nova_compute[254092]: 2025-11-25 16:46:19.317 254096 DEBUG nova.objects.instance [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'vcpu_model' on Instance uuid 01f96314-1fbe-4eee-a4ed-db7f448a5320 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:46:19 compute-0 nova_compute[254092]: 2025-11-25 16:46:19.328 254096 DEBUG oslo_concurrency.processutils [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:46:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1828: 321 pgs: 321 active+clean; 295 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Nov 25 16:46:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:46:19 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/182826064' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:46:19 compute-0 nova_compute[254092]: 2025-11-25 16:46:19.806 254096 DEBUG oslo_concurrency.processutils [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:46:19 compute-0 nova_compute[254092]: 2025-11-25 16:46:19.846 254096 DEBUG oslo_concurrency.processutils [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:46:19 compute-0 eager_lamport[341503]: {
Nov 25 16:46:19 compute-0 eager_lamport[341503]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 16:46:19 compute-0 eager_lamport[341503]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:46:19 compute-0 eager_lamport[341503]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 16:46:19 compute-0 eager_lamport[341503]:         "osd_id": 1,
Nov 25 16:46:19 compute-0 eager_lamport[341503]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:46:19 compute-0 eager_lamport[341503]:         "type": "bluestore"
Nov 25 16:46:19 compute-0 eager_lamport[341503]:     },
Nov 25 16:46:19 compute-0 eager_lamport[341503]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 16:46:19 compute-0 eager_lamport[341503]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:46:19 compute-0 eager_lamport[341503]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 16:46:19 compute-0 eager_lamport[341503]:         "osd_id": 2,
Nov 25 16:46:19 compute-0 eager_lamport[341503]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:46:19 compute-0 eager_lamport[341503]:         "type": "bluestore"
Nov 25 16:46:19 compute-0 eager_lamport[341503]:     },
Nov 25 16:46:19 compute-0 eager_lamport[341503]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 16:46:19 compute-0 eager_lamport[341503]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:46:19 compute-0 eager_lamport[341503]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 16:46:19 compute-0 eager_lamport[341503]:         "osd_id": 0,
Nov 25 16:46:19 compute-0 eager_lamport[341503]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:46:19 compute-0 eager_lamport[341503]:         "type": "bluestore"
Nov 25 16:46:19 compute-0 eager_lamport[341503]:     }
Nov 25 16:46:19 compute-0 eager_lamport[341503]: }
Nov 25 16:46:20 compute-0 systemd[1]: libpod-38bcb4172465ab066712ac45962af3ad29f8aafafac5b619f39bea1dbd629e9c.scope: Deactivated successfully.
Nov 25 16:46:20 compute-0 systemd[1]: libpod-38bcb4172465ab066712ac45962af3ad29f8aafafac5b619f39bea1dbd629e9c.scope: Consumed 1.001s CPU time.
Nov 25 16:46:20 compute-0 podman[341487]: 2025-11-25 16:46:20.003529868 +0000 UTC m=+1.156250413 container died 38bcb4172465ab066712ac45962af3ad29f8aafafac5b619f39bea1dbd629e9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_lamport, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.005 254096 DEBUG nova.network.neutron [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Updating instance_info_cache with network_info: [{"id": "419102e4-bcb4-496b-b45c-fba9f5525746", "address": "fa:16:3e:08:0f:a2", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap419102e4-bc", "ovs_interfaceid": "419102e4-bcb4-496b-b45c-fba9f5525746", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.027 254096 DEBUG oslo_concurrency.lockutils [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Releasing lock "refresh_cache-8b20d119-17cb-4742-9223-90e5020f93a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.028 254096 DEBUG nova.compute.manager [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Instance network_info: |[{"id": "419102e4-bcb4-496b-b45c-fba9f5525746", "address": "fa:16:3e:08:0f:a2", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap419102e4-bc", "ovs_interfaceid": "419102e4-bcb4-496b-b45c-fba9f5525746", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.028 254096 DEBUG oslo_concurrency.lockutils [req-8101d03a-bf96-42b9-992c-91c60c72a4e9 req-f936f1bd-b70b-4dee-838b-3c53df722e39 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-8b20d119-17cb-4742-9223-90e5020f93a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.029 254096 DEBUG nova.network.neutron [req-8101d03a-bf96-42b9-992c-91c60c72a4e9 req-f936f1bd-b70b-4dee-838b-3c53df722e39 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Refreshing network info cache for port 419102e4-bcb4-496b-b45c-fba9f5525746 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.032 254096 DEBUG nova.virt.libvirt.driver [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Start _get_guest_xml network_info=[{"id": "419102e4-bcb4-496b-b45c-fba9f5525746", "address": "fa:16:3e:08:0f:a2", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap419102e4-bc", "ovs_interfaceid": "419102e4-bcb4-496b-b45c-fba9f5525746", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:46:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-321f6703b24b06f4948070c71c7890837983376452b955bc5f9f29a8aa6293c5-merged.mount: Deactivated successfully.
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.046 254096 WARNING nova.virt.libvirt.driver [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.055 254096 DEBUG nova.virt.libvirt.host [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.056 254096 DEBUG nova.virt.libvirt.host [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.059 254096 DEBUG nova.virt.libvirt.host [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.060 254096 DEBUG nova.virt.libvirt.host [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:46:20 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/182826064' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.060 254096 DEBUG nova.virt.libvirt.driver [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.060 254096 DEBUG nova.virt.hardware [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.061 254096 DEBUG nova.virt.hardware [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.061 254096 DEBUG nova.virt.hardware [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.061 254096 DEBUG nova.virt.hardware [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.061 254096 DEBUG nova.virt.hardware [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.061 254096 DEBUG nova.virt.hardware [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.062 254096 DEBUG nova.virt.hardware [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.062 254096 DEBUG nova.virt.hardware [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.062 254096 DEBUG nova.virt.hardware [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.062 254096 DEBUG nova.virt.hardware [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.063 254096 DEBUG nova.virt.hardware [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.067 254096 DEBUG oslo_concurrency.processutils [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:46:20 compute-0 podman[341487]: 2025-11-25 16:46:20.068432311 +0000 UTC m=+1.221152856 container remove 38bcb4172465ab066712ac45962af3ad29f8aafafac5b619f39bea1dbd629e9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:46:20 compute-0 systemd[1]: libpod-conmon-38bcb4172465ab066712ac45962af3ad29f8aafafac5b619f39bea1dbd629e9c.scope: Deactivated successfully.
Nov 25 16:46:20 compute-0 sudo[341384]: pam_unix(sudo:session): session closed for user root
Nov 25 16:46:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:46:20 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:46:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.116 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Updating instance_info_cache with network_info: [{"id": "41fd5f5b-445b-4eed-adf5-045ddb262021", "address": "fa:16:3e:a2:dc:a2", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41fd5f5b-44", "ovs_interfaceid": "41fd5f5b-445b-4eed-adf5-045ddb262021", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:46:20 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:46:20 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev f561c12f-327e-447c-9458-94f8f11265ba does not exist
Nov 25 16:46:20 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev f2b1c58b-0cd1-4be3-9c46-e082505c1e22 does not exist
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.129 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-98410ff5-26ab-4406-8d1b-063d9e114cf8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.130 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 16:46:20 compute-0 sudo[341609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:46:20 compute-0 sudo[341609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:46:20 compute-0 sudo[341609]: pam_unix(sudo:session): session closed for user root
Nov 25 16:46:20 compute-0 sudo[341634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 16:46:20 compute-0 sudo[341634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:46:20 compute-0 sudo[341634]: pam_unix(sudo:session): session closed for user root
Nov 25 16:46:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:46:20 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2104325184' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.286 254096 DEBUG oslo_concurrency.processutils [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.287 254096 DEBUG nova.virt.libvirt.vif [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:42:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1951123052',display_name='tempest-ServerActionsTestJSON-server-1951123052',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1951123052',id=72,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOul8HeAt+UIxCfJfn+60t4YpZV7iSL4LJDORWr6iCR1ov+YI0semWU/ei1rpSzmkTyhB+zsYhvO/JrM/v0aowbnChSU76J6tWISFPQqrNakhVmsB30NDWBI6kYIFzj8+Q==',key_name='tempest-keypair-1452948426',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:43:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b4d15f5aabd3491da5314b126a20225a',ramdisk_id='',reservation_id='r-gjhb48uk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1811453217',owner_user_name='tempest-ServerActionsTestJSON-1811453217-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:46:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aea2d8cf3bb54cdbbc72e41805fb1f90',uuid=01f96314-1fbe-4eee-a4ed-db7f448a5320,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.287 254096 DEBUG nova.network.os_vif_util [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converting VIF {"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.288 254096 DEBUG nova.network.os_vif_util [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.289 254096 DEBUG nova.objects.instance [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'pci_devices' on Instance uuid 01f96314-1fbe-4eee-a4ed-db7f448a5320 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.306 254096 DEBUG nova.virt.libvirt.driver [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:46:20 compute-0 nova_compute[254092]:   <uuid>01f96314-1fbe-4eee-a4ed-db7f448a5320</uuid>
Nov 25 16:46:20 compute-0 nova_compute[254092]:   <name>instance-00000048</name>
Nov 25 16:46:20 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:46:20 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:46:20 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:46:20 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:       <nova:name>tempest-ServerActionsTestJSON-server-1951123052</nova:name>
Nov 25 16:46:20 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:46:19</nova:creationTime>
Nov 25 16:46:20 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:46:20 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:46:20 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:46:20 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:46:20 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:46:20 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:46:20 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:46:20 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:46:20 compute-0 nova_compute[254092]:         <nova:user uuid="aea2d8cf3bb54cdbbc72e41805fb1f90">tempest-ServerActionsTestJSON-1811453217-project-member</nova:user>
Nov 25 16:46:20 compute-0 nova_compute[254092]:         <nova:project uuid="b4d15f5aabd3491da5314b126a20225a">tempest-ServerActionsTestJSON-1811453217</nova:project>
Nov 25 16:46:20 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:46:20 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:46:20 compute-0 nova_compute[254092]:         <nova:port uuid="4fe8c3a9-70ba-4a51-8bc1-1505a49fe349">
Nov 25 16:46:20 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:46:20 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:46:20 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:46:20 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:46:20 compute-0 nova_compute[254092]:     <system>
Nov 25 16:46:20 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:46:20 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:46:20 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:46:20 compute-0 nova_compute[254092]:       <entry name="serial">01f96314-1fbe-4eee-a4ed-db7f448a5320</entry>
Nov 25 16:46:20 compute-0 nova_compute[254092]:       <entry name="uuid">01f96314-1fbe-4eee-a4ed-db7f448a5320</entry>
Nov 25 16:46:20 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     </system>
Nov 25 16:46:20 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:46:20 compute-0 nova_compute[254092]:   <os>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:   </os>
Nov 25 16:46:20 compute-0 nova_compute[254092]:   <features>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:   </features>
Nov 25 16:46:20 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:46:20 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:46:20 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:46:20 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:46:20 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:46:20 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/01f96314-1fbe-4eee-a4ed-db7f448a5320_disk">
Nov 25 16:46:20 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:       </source>
Nov 25 16:46:20 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:46:20 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:46:20 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:46:20 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/01f96314-1fbe-4eee-a4ed-db7f448a5320_disk.config">
Nov 25 16:46:20 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:       </source>
Nov 25 16:46:20 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:46:20 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:46:20 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:46:20 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:ff:a0:1c"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:       <target dev="tap4fe8c3a9-70"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:46:20 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/01f96314-1fbe-4eee-a4ed-db7f448a5320/console.log" append="off"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     <video>
Nov 25 16:46:20 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     </video>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     <input type="keyboard" bus="usb"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:46:20 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:46:20 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:46:20 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:46:20 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:46:20 compute-0 nova_compute[254092]: </domain>
Nov 25 16:46:20 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.306 254096 DEBUG nova.virt.libvirt.driver [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] skipping disk for instance-00000048 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.307 254096 DEBUG nova.virt.libvirt.driver [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] skipping disk for instance-00000048 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.307 254096 DEBUG nova.virt.libvirt.vif [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:42:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1951123052',display_name='tempest-ServerActionsTestJSON-server-1951123052',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1951123052',id=72,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOul8HeAt+UIxCfJfn+60t4YpZV7iSL4LJDORWr6iCR1ov+YI0semWU/ei1rpSzmkTyhB+zsYhvO/JrM/v0aowbnChSU76J6tWISFPQqrNakhVmsB30NDWBI6kYIFzj8+Q==',key_name='tempest-keypair-1452948426',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:43:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='b4d15f5aabd3491da5314b126a20225a',ramdisk_id='',reservation_id='r-gjhb48uk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1811453217',owner_user_name='tempest-ServerActionsTestJSON-1811453217-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:46:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aea2d8cf3bb54cdbbc72e41805fb1f90',uuid=01f96314-1fbe-4eee-a4ed-db7f448a5320,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.308 254096 DEBUG nova.network.os_vif_util [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converting VIF {"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.308 254096 DEBUG nova.network.os_vif_util [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.308 254096 DEBUG os_vif [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.309 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.309 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.310 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.320 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.320 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4fe8c3a9-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.321 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4fe8c3a9-70, col_values=(('external_ids', {'iface-id': '4fe8c3a9-70ba-4a51-8bc1-1505a49fe349', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ff:a0:1c', 'vm-uuid': '01f96314-1fbe-4eee-a4ed-db7f448a5320'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:20 compute-0 NetworkManager[48891]: <info>  [1764089180.3233] manager: (tap4fe8c3a9-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/341)
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.322 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.325 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.329 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.329 254096 INFO os_vif [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70')
Nov 25 16:46:20 compute-0 kernel: tap4fe8c3a9-70: entered promiscuous mode
Nov 25 16:46:20 compute-0 NetworkManager[48891]: <info>  [1764089180.3901] manager: (tap4fe8c3a9-70): new Tun device (/org/freedesktop/NetworkManager/Devices/342)
Nov 25 16:46:20 compute-0 ovn_controller[153477]: 2025-11-25T16:46:20Z|00798|binding|INFO|Claiming lport 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 for this chassis.
Nov 25 16:46:20 compute-0 ovn_controller[153477]: 2025-11-25T16:46:20Z|00799|binding|INFO|4fe8c3a9-70ba-4a51-8bc1-1505a49fe349: Claiming fa:16:3e:ff:a0:1c 10.100.0.11
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.393 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:20.405 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:a0:1c 10.100.0.11'], port_security=['fa:16:3e:ff:a0:1c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '01f96314-1fbe-4eee-a4ed-db7f448a5320', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4d15f5aabd3491da5314b126a20225a', 'neutron:revision_number': '11', 'neutron:security_group_ids': 'f659b7e6-4ca9-46df-a38f-d29c92b67f9f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.187'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa1f8c26-9163-4397-beb3-8395c780a12b, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:20.406 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 in datapath 50ea1716-9d0b-409f-b648-8d2ad5a30525 bound to our chassis
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:20.408 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 50ea1716-9d0b-409f-b648-8d2ad5a30525
Nov 25 16:46:20 compute-0 ovn_controller[153477]: 2025-11-25T16:46:20Z|00800|binding|INFO|Setting lport 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 ovn-installed in OVS
Nov 25 16:46:20 compute-0 ovn_controller[153477]: 2025-11-25T16:46:20Z|00801|binding|INFO|Setting lport 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 up in Southbound
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.415 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.422 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:20 compute-0 systemd-udevd[341692]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:20.425 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[92aec0e6-c6ca-46df-9c25-37ef79ccf29d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:20.425 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap50ea1716-91 in ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:20.428 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap50ea1716-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:20.428 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7c0a1de4-e7fa-474a-9d89-33de6f973adc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:20.429 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9aaac3d4-37ee-4426-a999-37bab72498d6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:20 compute-0 NetworkManager[48891]: <info>  [1764089180.4443] device (tap4fe8c3a9-70): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:46:20 compute-0 NetworkManager[48891]: <info>  [1764089180.4458] device (tap4fe8c3a9-70): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:20.445 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[e73f79fe-e22a-4b8f-b094-7e9a0b64cb55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:20 compute-0 systemd-machined[216343]: New machine qemu-102-instance-00000048.
Nov 25 16:46:20 compute-0 systemd[1]: Started Virtual Machine qemu-102-instance-00000048.
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:20.460 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[20a07dcc-5444-4d12-9ae5-3684582b6b05]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:20.490 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[76f36d29-b18f-4ba6-9980-6b38cf56aff7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:20 compute-0 systemd-udevd[341697]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:46:20 compute-0 NetworkManager[48891]: <info>  [1764089180.4980] manager: (tap50ea1716-90): new Veth device (/org/freedesktop/NetworkManager/Devices/343)
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:20.494 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9f179684-cdae-422e-85be-2b50639bc929]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:46:20 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2227485882' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:20.535 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[de59ec31-fd7e-4e9f-8915-ded1916e6eb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:20.540 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6c6f1799-7c79-421f-98c9-c367188d4a5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.551 254096 DEBUG oslo_concurrency.processutils [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.570 254096 DEBUG nova.storage.rbd_utils [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 8b20d119-17cb-4742-9223-90e5020f93a7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:46:20 compute-0 NetworkManager[48891]: <info>  [1764089180.5717] device (tap50ea1716-90): carrier: link connected
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.577 254096 DEBUG oslo_concurrency.processutils [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:20.577 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[5ca980e4-15ee-4d9d-b1c3-1de4912abc8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:20.596 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[99bcb11f-117e-45d9-aa4e-c1d9a2c63fb3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap50ea1716-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:99:3a:54'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 236], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563816, 'reachable_time': 42556, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341747, 'error': None, 'target': 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:20.623 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3cbc85db-04e7-4a24-b938-572b86718a8a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe99:3a54'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563816, 'tstamp': 563816}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341749, 'error': None, 'target': 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:20.639 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e51f5206-2efd-4357-912a-7ea8b04e99a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap50ea1716-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:99:3a:54'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 236], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563816, 'reachable_time': 42556, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 341750, 'error': None, 'target': 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:20.672 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d356ffc9-1946-45a2-8937-11696e874be5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:20.744 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b8c40701-eea5-4952-892c-f60bf840976e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:20.745 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50ea1716-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:20.745 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:20.746 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap50ea1716-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:20 compute-0 NetworkManager[48891]: <info>  [1764089180.7481] manager: (tap50ea1716-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/344)
Nov 25 16:46:20 compute-0 kernel: tap50ea1716-90: entered promiscuous mode
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:20.749 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap50ea1716-90, col_values=(('external_ids', {'iface-id': '1ec3401d-acb9-4d50-bfc2-f0a1127f4745'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.749 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:20.752 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/50ea1716-9d0b-409f-b648-8d2ad5a30525.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/50ea1716-9d0b-409f-b648-8d2ad5a30525.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:20.752 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c02c0848-fbcc-4211-8dba-4647825f1036]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:20.753 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-50ea1716-9d0b-409f-b648-8d2ad5a30525
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/50ea1716-9d0b-409f-b648-8d2ad5a30525.pid.haproxy
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 50ea1716-9d0b-409f-b648-8d2ad5a30525
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:46:20 compute-0 ovn_controller[153477]: 2025-11-25T16:46:20Z|00802|binding|INFO|Releasing lport 1ec3401d-acb9-4d50-bfc2-f0a1127f4745 from this chassis (sb_readonly=0)
Nov 25 16:46:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:20.754 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'env', 'PROCESS_TAG=haproxy-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/50ea1716-9d0b-409f-b648-8d2ad5a30525.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:46:20 compute-0 nova_compute[254092]: 2025-11-25 16:46:20.770 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:46:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:46:21 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2654757494' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.046 254096 DEBUG oslo_concurrency.processutils [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.048 254096 DEBUG nova.virt.libvirt.vif [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:46:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-1354265431',display_name='tempest-₡-1354265431',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--1354265431',id=83,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56790f54314e4087abb5da8030b17666',ramdisk_id='',reservation_id='r-wvl75eev',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1089675869',owner_user_name='tempest-ServersTestJSON-1089675869-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:46:10Z,user_data=None,user_id='7a137feb008c49e092cbb94106526835',uuid=8b20d119-17cb-4742-9223-90e5020f93a7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "419102e4-bcb4-496b-b45c-fba9f5525746", "address": "fa:16:3e:08:0f:a2", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap419102e4-bc", "ovs_interfaceid": "419102e4-bcb4-496b-b45c-fba9f5525746", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.048 254096 DEBUG nova.network.os_vif_util [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converting VIF {"id": "419102e4-bcb4-496b-b45c-fba9f5525746", "address": "fa:16:3e:08:0f:a2", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap419102e4-bc", "ovs_interfaceid": "419102e4-bcb4-496b-b45c-fba9f5525746", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.049 254096 DEBUG nova.network.os_vif_util [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:0f:a2,bridge_name='br-int',has_traffic_filtering=True,id=419102e4-bcb4-496b-b45c-fba9f5525746,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap419102e4-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.050 254096 DEBUG nova.objects.instance [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8b20d119-17cb-4742-9223-90e5020f93a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.063 254096 DEBUG nova.virt.libvirt.driver [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:46:21 compute-0 nova_compute[254092]:   <uuid>8b20d119-17cb-4742-9223-90e5020f93a7</uuid>
Nov 25 16:46:21 compute-0 nova_compute[254092]:   <name>instance-00000053</name>
Nov 25 16:46:21 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:46:21 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:46:21 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:46:21 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:       <nova:name>tempest-₡-1354265431</nova:name>
Nov 25 16:46:21 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:46:20</nova:creationTime>
Nov 25 16:46:21 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:46:21 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:46:21 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:46:21 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:46:21 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:46:21 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:46:21 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:46:21 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:46:21 compute-0 nova_compute[254092]:         <nova:user uuid="7a137feb008c49e092cbb94106526835">tempest-ServersTestJSON-1089675869-project-member</nova:user>
Nov 25 16:46:21 compute-0 nova_compute[254092]:         <nova:project uuid="56790f54314e4087abb5da8030b17666">tempest-ServersTestJSON-1089675869</nova:project>
Nov 25 16:46:21 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:46:21 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:46:21 compute-0 nova_compute[254092]:         <nova:port uuid="419102e4-bcb4-496b-b45c-fba9f5525746">
Nov 25 16:46:21 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:46:21 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:46:21 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:46:21 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:46:21 compute-0 nova_compute[254092]:     <system>
Nov 25 16:46:21 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:46:21 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:46:21 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:46:21 compute-0 nova_compute[254092]:       <entry name="serial">8b20d119-17cb-4742-9223-90e5020f93a7</entry>
Nov 25 16:46:21 compute-0 nova_compute[254092]:       <entry name="uuid">8b20d119-17cb-4742-9223-90e5020f93a7</entry>
Nov 25 16:46:21 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     </system>
Nov 25 16:46:21 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:46:21 compute-0 nova_compute[254092]:   <os>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:   </os>
Nov 25 16:46:21 compute-0 nova_compute[254092]:   <features>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:   </features>
Nov 25 16:46:21 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:46:21 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:46:21 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:46:21 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:46:21 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:46:21 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/8b20d119-17cb-4742-9223-90e5020f93a7_disk">
Nov 25 16:46:21 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:       </source>
Nov 25 16:46:21 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:46:21 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:46:21 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:46:21 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/8b20d119-17cb-4742-9223-90e5020f93a7_disk.config">
Nov 25 16:46:21 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:       </source>
Nov 25 16:46:21 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:46:21 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:46:21 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:46:21 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:08:0f:a2"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:       <target dev="tap419102e4-bc"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:46:21 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/8b20d119-17cb-4742-9223-90e5020f93a7/console.log" append="off"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     <video>
Nov 25 16:46:21 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     </video>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:46:21 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:46:21 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:46:21 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:46:21 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:46:21 compute-0 nova_compute[254092]: </domain>
Nov 25 16:46:21 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.065 254096 DEBUG nova.compute.manager [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Preparing to wait for external event network-vif-plugged-419102e4-bcb4-496b-b45c-fba9f5525746 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.065 254096 DEBUG oslo_concurrency.lockutils [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "8b20d119-17cb-4742-9223-90e5020f93a7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.065 254096 DEBUG oslo_concurrency.lockutils [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "8b20d119-17cb-4742-9223-90e5020f93a7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.066 254096 DEBUG oslo_concurrency.lockutils [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "8b20d119-17cb-4742-9223-90e5020f93a7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.066 254096 DEBUG nova.virt.libvirt.vif [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:46:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-1354265431',display_name='tempest-₡-1354265431',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--1354265431',id=83,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56790f54314e4087abb5da8030b17666',ramdisk_id='',reservation_id='r-wvl75eev',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1089675869',owner_user_name='tempest-ServersTestJSON-1089675869-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:46:10Z,user_data=None,user_id='7a137feb008c49e092cbb94106526835',uuid=8b20d119-17cb-4742-9223-90e5020f93a7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "419102e4-bcb4-496b-b45c-fba9f5525746", "address": "fa:16:3e:08:0f:a2", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap419102e4-bc", "ovs_interfaceid": "419102e4-bcb4-496b-b45c-fba9f5525746", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.066 254096 DEBUG nova.network.os_vif_util [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converting VIF {"id": "419102e4-bcb4-496b-b45c-fba9f5525746", "address": "fa:16:3e:08:0f:a2", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap419102e4-bc", "ovs_interfaceid": "419102e4-bcb4-496b-b45c-fba9f5525746", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.067 254096 DEBUG nova.network.os_vif_util [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:0f:a2,bridge_name='br-int',has_traffic_filtering=True,id=419102e4-bcb4-496b-b45c-fba9f5525746,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap419102e4-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.067 254096 DEBUG os_vif [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:0f:a2,bridge_name='br-int',has_traffic_filtering=True,id=419102e4-bcb4-496b-b45c-fba9f5525746,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap419102e4-bc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.068 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.068 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.068 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.071 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.071 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap419102e4-bc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.072 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap419102e4-bc, col_values=(('external_ids', {'iface-id': '419102e4-bcb4-496b-b45c-fba9f5525746', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:08:0f:a2', 'vm-uuid': '8b20d119-17cb-4742-9223-90e5020f93a7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:21 compute-0 NetworkManager[48891]: <info>  [1764089181.0739] manager: (tap419102e4-bc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/345)
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.073 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.077 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.079 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.080 254096 INFO os_vif [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:0f:a2,bridge_name='br-int',has_traffic_filtering=True,id=419102e4-bcb4-496b-b45c-fba9f5525746,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap419102e4-bc')
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.106 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:46:21 compute-0 ceph-mon[74985]: pgmap v1828: 321 pgs: 321 active+clean; 295 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Nov 25 16:46:21 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:46:21 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:46:21 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2104325184' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:46:21 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2227485882' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:46:21 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2654757494' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.124 254096 DEBUG nova.virt.libvirt.driver [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.124 254096 DEBUG nova.virt.libvirt.driver [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.125 254096 DEBUG nova.virt.libvirt.driver [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] No VIF found with MAC fa:16:3e:08:0f:a2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.125 254096 INFO nova.virt.libvirt.driver [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Using config drive
Nov 25 16:46:21 compute-0 podman[341803]: 2025-11-25 16:46:21.125609651 +0000 UTC m=+0.040370019 container create ed28dda384e1f820916b08f867ea9cdc430125be845d9f0d6d1de7695fc691ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.153 254096 DEBUG nova.storage.rbd_utils [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 8b20d119-17cb-4742-9223-90e5020f93a7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:46:21 compute-0 systemd[1]: Started libpod-conmon-ed28dda384e1f820916b08f867ea9cdc430125be845d9f0d6d1de7695fc691ab.scope.
Nov 25 16:46:21 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:46:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88d10bf89bd6257734a4b32f3f12e85a3c8ca3a700feb7a137e3b1e15f2c8e91/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:46:21 compute-0 podman[341803]: 2025-11-25 16:46:21.19585475 +0000 UTC m=+0.110615148 container init ed28dda384e1f820916b08f867ea9cdc430125be845d9f0d6d1de7695fc691ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 16:46:21 compute-0 podman[341803]: 2025-11-25 16:46:21.201080822 +0000 UTC m=+0.115841200 container start ed28dda384e1f820916b08f867ea9cdc430125be845d9f0d6d1de7695fc691ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 25 16:46:21 compute-0 podman[341803]: 2025-11-25 16:46:21.106288336 +0000 UTC m=+0.021048734 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:46:21 compute-0 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[341837]: [NOTICE]   (341841) : New worker (341843) forked
Nov 25 16:46:21 compute-0 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[341837]: [NOTICE]   (341841) : Loading success.
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.494 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for 01f96314-1fbe-4eee-a4ed-db7f448a5320 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.496 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089181.494146, 01f96314-1fbe-4eee-a4ed-db7f448a5320 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.496 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] VM Resumed (Lifecycle Event)
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.498 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:21.498 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:46:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:21.499 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.500 254096 DEBUG nova.compute.manager [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.504 254096 INFO nova.virt.libvirt.driver [-] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Instance rebooted successfully.
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.505 254096 DEBUG nova.compute.manager [None req-814bc6ff-e5d7-4f91-8413-4bd17fac5a15 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.519 254096 INFO nova.virt.libvirt.driver [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Creating config drive at /var/lib/nova/instances/8b20d119-17cb-4742-9223-90e5020f93a7/disk.config
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.525 254096 DEBUG oslo_concurrency.processutils [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8b20d119-17cb-4742-9223-90e5020f93a7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8c3bmwi4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.562 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.571 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.590 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089181.4999049, 01f96314-1fbe-4eee-a4ed-db7f448a5320 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.590 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] VM Started (Lifecycle Event)
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.612 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.617 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.669 254096 DEBUG oslo_concurrency.processutils [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8b20d119-17cb-4742-9223-90e5020f93a7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8c3bmwi4" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.699 254096 DEBUG nova.storage.rbd_utils [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 8b20d119-17cb-4742-9223-90e5020f93a7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.705 254096 DEBUG oslo_concurrency.processutils [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8b20d119-17cb-4742-9223-90e5020f93a7/disk.config 8b20d119-17cb-4742-9223-90e5020f93a7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:46:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1829: 321 pgs: 321 active+clean; 248 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.741 254096 DEBUG nova.compute.manager [req-1be4dbcd-4e36-4724-a599-2170c46ddcf4 req-26e4217e-035a-4252-b21e-ae6fa8806e16 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.741 254096 DEBUG oslo_concurrency.lockutils [req-1be4dbcd-4e36-4724-a599-2170c46ddcf4 req-26e4217e-035a-4252-b21e-ae6fa8806e16 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.742 254096 DEBUG oslo_concurrency.lockutils [req-1be4dbcd-4e36-4724-a599-2170c46ddcf4 req-26e4217e-035a-4252-b21e-ae6fa8806e16 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.742 254096 DEBUG oslo_concurrency.lockutils [req-1be4dbcd-4e36-4724-a599-2170c46ddcf4 req-26e4217e-035a-4252-b21e-ae6fa8806e16 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.742 254096 DEBUG nova.compute.manager [req-1be4dbcd-4e36-4724-a599-2170c46ddcf4 req-26e4217e-035a-4252-b21e-ae6fa8806e16 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] No waiting events found dispatching network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.742 254096 WARNING nova.compute.manager [req-1be4dbcd-4e36-4724-a599-2170c46ddcf4 req-26e4217e-035a-4252-b21e-ae6fa8806e16 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received unexpected event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 for instance with vm_state active and task_state None.
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.847 254096 DEBUG oslo_concurrency.processutils [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8b20d119-17cb-4742-9223-90e5020f93a7/disk.config 8b20d119-17cb-4742-9223-90e5020f93a7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:46:21 compute-0 nova_compute[254092]: 2025-11-25 16:46:21.848 254096 INFO nova.virt.libvirt.driver [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Deleting local config drive /var/lib/nova/instances/8b20d119-17cb-4742-9223-90e5020f93a7/disk.config because it was imported into RBD.
Nov 25 16:46:21 compute-0 kernel: tap419102e4-bc: entered promiscuous mode
Nov 25 16:46:21 compute-0 NetworkManager[48891]: <info>  [1764089181.9296] manager: (tap419102e4-bc): new Tun device (/org/freedesktop/NetworkManager/Devices/346)
Nov 25 16:46:21 compute-0 systemd-udevd[341715]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:46:21 compute-0 ovn_controller[153477]: 2025-11-25T16:46:21Z|00803|binding|INFO|Claiming lport 419102e4-bcb4-496b-b45c-fba9f5525746 for this chassis.
Nov 25 16:46:21 compute-0 ovn_controller[153477]: 2025-11-25T16:46:21Z|00804|binding|INFO|419102e4-bcb4-496b-b45c-fba9f5525746: Claiming fa:16:3e:08:0f:a2 10.100.0.7
Nov 25 16:46:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:21.937 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:0f:a2 10.100.0.7'], port_security=['fa:16:3e:08:0f:a2 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '8b20d119-17cb-4742-9223-90e5020f93a7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1431a6bc-93c8-4db5-a148-b2950f02c941', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56790f54314e4087abb5da8030b17666', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6f3e9934-51eb-4bf8-9114-93abd745996a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f25b0746-2c82-4a0a-9e71-0116018fc740, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=419102e4-bcb4-496b-b45c-fba9f5525746) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:46:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:21.939 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 419102e4-bcb4-496b-b45c-fba9f5525746 in datapath 1431a6bc-93c8-4db5-a148-b2950f02c941 bound to our chassis
Nov 25 16:46:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:21.940 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1431a6bc-93c8-4db5-a148-b2950f02c941
Nov 25 16:46:21 compute-0 ovn_controller[153477]: 2025-11-25T16:46:21Z|00805|binding|INFO|Setting lport 419102e4-bcb4-496b-b45c-fba9f5525746 ovn-installed in OVS
Nov 25 16:46:21 compute-0 ovn_controller[153477]: 2025-11-25T16:46:21Z|00806|binding|INFO|Setting lport 419102e4-bcb4-496b-b45c-fba9f5525746 up in Southbound
Nov 25 16:46:21 compute-0 NetworkManager[48891]: <info>  [1764089181.9581] device (tap419102e4-bc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:46:21 compute-0 NetworkManager[48891]: <info>  [1764089181.9596] device (tap419102e4-bc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:46:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:21.956 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3160a514-77a1-45ba-a20e-9b91dc825a04]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:21.960 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1431a6bc-91 in ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:46:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:21.964 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1431a6bc-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:46:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:21.965 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a8cac132-6c08-4f45-965b-9ef0976c0552]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:21.966 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[358282ba-b7ce-4877-9e43-68b3e96813ce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:21.988 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[9ea6739c-1208-479e-9bfd-0d2bccb5fbbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:22 compute-0 systemd-machined[216343]: New machine qemu-103-instance-00000053.
Nov 25 16:46:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:22.008 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a29838c5-ef25-460a-a746-d808c750ec31]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:22 compute-0 systemd[1]: Started Virtual Machine qemu-103-instance-00000053.
Nov 25 16:46:22 compute-0 nova_compute[254092]: 2025-11-25 16:46:22.050 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:22.056 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f6e2af00-d37c-452f-8b6a-56599ebe712e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:22 compute-0 NetworkManager[48891]: <info>  [1764089182.0642] manager: (tap1431a6bc-90): new Veth device (/org/freedesktop/NetworkManager/Devices/347)
Nov 25 16:46:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:22.062 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bc2dcdb4-e745-47f6-9e97-bceabfa5b338]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:22.104 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[09c657ad-830d-427a-a516-2f3fce59b121]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:22.108 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[100bccbf-a075-4ac7-997b-9cef2d1cf266]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:22 compute-0 NetworkManager[48891]: <info>  [1764089182.1348] device (tap1431a6bc-90): carrier: link connected
Nov 25 16:46:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:22.140 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9404913f-36ba-4018-8d2e-52fad98b39fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:22.157 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d85c64af-4032-420e-baa3-fe85db79b9f3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1431a6bc-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:f8:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 238], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563972, 'reachable_time': 23815, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341966, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:22.175 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ed52a16f-6a64-4eae-a06e-86ab9511b811]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee0:f894'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563972, 'tstamp': 563972}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341967, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:22.193 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6ebe48eb-d6f0-41c5-beaa-af0a746a5828]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1431a6bc-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:f8:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 238], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563972, 'reachable_time': 23815, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 341968, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:22.223 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[762b8bba-6ad8-4b59-a593-37f4407a326e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:22 compute-0 nova_compute[254092]: 2025-11-25 16:46:22.273 254096 DEBUG nova.network.neutron [req-8101d03a-bf96-42b9-992c-91c60c72a4e9 req-f936f1bd-b70b-4dee-838b-3c53df722e39 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Updated VIF entry in instance network info cache for port 419102e4-bcb4-496b-b45c-fba9f5525746. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:46:22 compute-0 nova_compute[254092]: 2025-11-25 16:46:22.273 254096 DEBUG nova.network.neutron [req-8101d03a-bf96-42b9-992c-91c60c72a4e9 req-f936f1bd-b70b-4dee-838b-3c53df722e39 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Updating instance_info_cache with network_info: [{"id": "419102e4-bcb4-496b-b45c-fba9f5525746", "address": "fa:16:3e:08:0f:a2", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap419102e4-bc", "ovs_interfaceid": "419102e4-bcb4-496b-b45c-fba9f5525746", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:46:22 compute-0 nova_compute[254092]: 2025-11-25 16:46:22.288 254096 DEBUG oslo_concurrency.lockutils [req-8101d03a-bf96-42b9-992c-91c60c72a4e9 req-f936f1bd-b70b-4dee-838b-3c53df722e39 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-8b20d119-17cb-4742-9223-90e5020f93a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:46:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:22.291 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7bf9cd53-73b7-4575-9724-7269933c0826]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:22.292 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1431a6bc-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:22.293 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:46:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:22.293 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1431a6bc-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:22 compute-0 nova_compute[254092]: 2025-11-25 16:46:22.342 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:22 compute-0 NetworkManager[48891]: <info>  [1764089182.3449] manager: (tap1431a6bc-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/348)
Nov 25 16:46:22 compute-0 kernel: tap1431a6bc-90: entered promiscuous mode
Nov 25 16:46:22 compute-0 nova_compute[254092]: 2025-11-25 16:46:22.345 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:22.347 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1431a6bc-90, col_values=(('external_ids', {'iface-id': '2b164d7c-ac7d-4965-a9d1-81827e4e9936'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:22 compute-0 nova_compute[254092]: 2025-11-25 16:46:22.348 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:22 compute-0 ovn_controller[153477]: 2025-11-25T16:46:22Z|00807|binding|INFO|Releasing lport 2b164d7c-ac7d-4965-a9d1-81827e4e9936 from this chassis (sb_readonly=0)
Nov 25 16:46:22 compute-0 nova_compute[254092]: 2025-11-25 16:46:22.368 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:22.369 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1431a6bc-93c8-4db5-a148-b2950f02c941.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1431a6bc-93c8-4db5-a148-b2950f02c941.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:46:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:22.370 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[dfaee24c-5c38-498e-9d8c-52702d46b5e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:22.371 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:46:22 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:46:22 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:46:22 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-1431a6bc-93c8-4db5-a148-b2950f02c941
Nov 25 16:46:22 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:46:22 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:46:22 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:46:22 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/1431a6bc-93c8-4db5-a148-b2950f02c941.pid.haproxy
Nov 25 16:46:22 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:46:22 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:46:22 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:46:22 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:46:22 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:46:22 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:46:22 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:46:22 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:46:22 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:46:22 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:46:22 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:46:22 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:46:22 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:46:22 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:46:22 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:46:22 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:46:22 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:46:22 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:46:22 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:46:22 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:46:22 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 1431a6bc-93c8-4db5-a148-b2950f02c941
Nov 25 16:46:22 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:46:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:22.372 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'env', 'PROCESS_TAG=haproxy-1431a6bc-93c8-4db5-a148-b2950f02c941', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1431a6bc-93c8-4db5-a148-b2950f02c941.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:46:22 compute-0 nova_compute[254092]: 2025-11-25 16:46:22.511 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089182.5109217, 8b20d119-17cb-4742-9223-90e5020f93a7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:46:22 compute-0 nova_compute[254092]: 2025-11-25 16:46:22.511 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] VM Started (Lifecycle Event)
Nov 25 16:46:22 compute-0 nova_compute[254092]: 2025-11-25 16:46:22.527 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:46:22 compute-0 nova_compute[254092]: 2025-11-25 16:46:22.532 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089182.511022, 8b20d119-17cb-4742-9223-90e5020f93a7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:46:22 compute-0 nova_compute[254092]: 2025-11-25 16:46:22.532 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] VM Paused (Lifecycle Event)
Nov 25 16:46:22 compute-0 nova_compute[254092]: 2025-11-25 16:46:22.548 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:46:22 compute-0 nova_compute[254092]: 2025-11-25 16:46:22.552 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:46:22 compute-0 nova_compute[254092]: 2025-11-25 16:46:22.569 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:46:22 compute-0 podman[342042]: 2025-11-25 16:46:22.815594766 +0000 UTC m=+0.060694260 container create 5d48e72cc714355b780f1842bac39a14f5f60877cf82c32894ad74cf2c592815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:46:22 compute-0 systemd[1]: Started libpod-conmon-5d48e72cc714355b780f1842bac39a14f5f60877cf82c32894ad74cf2c592815.scope.
Nov 25 16:46:22 compute-0 podman[342042]: 2025-11-25 16:46:22.783900605 +0000 UTC m=+0.029000109 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:46:22 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:46:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1774b9fa250e4e770fd47cef979c1f2ebcd875e71157ccd0e21761837005a179/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:46:22 compute-0 podman[342042]: 2025-11-25 16:46:22.919331465 +0000 UTC m=+0.164430979 container init 5d48e72cc714355b780f1842bac39a14f5f60877cf82c32894ad74cf2c592815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:46:22 compute-0 podman[342042]: 2025-11-25 16:46:22.927574939 +0000 UTC m=+0.172674423 container start 5d48e72cc714355b780f1842bac39a14f5f60877cf82c32894ad74cf2c592815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:46:22 compute-0 neutron-haproxy-ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941[342057]: [NOTICE]   (342061) : New worker (342063) forked
Nov 25 16:46:22 compute-0 neutron-haproxy-ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941[342057]: [NOTICE]   (342061) : Loading success.
Nov 25 16:46:23 compute-0 ceph-mon[74985]: pgmap v1829: 321 pgs: 321 active+clean; 248 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Nov 25 16:46:23 compute-0 rsyslogd[1006]: imjournal: 5793 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 25 16:46:23 compute-0 nova_compute[254092]: 2025-11-25 16:46:23.446 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1830: 321 pgs: 321 active+clean; 248 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 349 KiB/s wr, 53 op/s
Nov 25 16:46:23 compute-0 nova_compute[254092]: 2025-11-25 16:46:23.810 254096 DEBUG nova.compute.manager [req-7d3c7c03-28be-4851-9ecb-8b7aff317527 req-20930de5-acf0-4734-8aea-bf0a0716a947 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:46:23 compute-0 nova_compute[254092]: 2025-11-25 16:46:23.811 254096 DEBUG oslo_concurrency.lockutils [req-7d3c7c03-28be-4851-9ecb-8b7aff317527 req-20930de5-acf0-4734-8aea-bf0a0716a947 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:23 compute-0 nova_compute[254092]: 2025-11-25 16:46:23.811 254096 DEBUG oslo_concurrency.lockutils [req-7d3c7c03-28be-4851-9ecb-8b7aff317527 req-20930de5-acf0-4734-8aea-bf0a0716a947 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:23 compute-0 nova_compute[254092]: 2025-11-25 16:46:23.811 254096 DEBUG oslo_concurrency.lockutils [req-7d3c7c03-28be-4851-9ecb-8b7aff317527 req-20930de5-acf0-4734-8aea-bf0a0716a947 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:23 compute-0 nova_compute[254092]: 2025-11-25 16:46:23.811 254096 DEBUG nova.compute.manager [req-7d3c7c03-28be-4851-9ecb-8b7aff317527 req-20930de5-acf0-4734-8aea-bf0a0716a947 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] No waiting events found dispatching network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:46:23 compute-0 nova_compute[254092]: 2025-11-25 16:46:23.811 254096 WARNING nova.compute.manager [req-7d3c7c03-28be-4851-9ecb-8b7aff317527 req-20930de5-acf0-4734-8aea-bf0a0716a947 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received unexpected event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 for instance with vm_state active and task_state None.
Nov 25 16:46:23 compute-0 nova_compute[254092]: 2025-11-25 16:46:23.812 254096 DEBUG nova.compute.manager [req-7d3c7c03-28be-4851-9ecb-8b7aff317527 req-20930de5-acf0-4734-8aea-bf0a0716a947 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Received event network-vif-plugged-419102e4-bcb4-496b-b45c-fba9f5525746 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:46:23 compute-0 nova_compute[254092]: 2025-11-25 16:46:23.812 254096 DEBUG oslo_concurrency.lockutils [req-7d3c7c03-28be-4851-9ecb-8b7aff317527 req-20930de5-acf0-4734-8aea-bf0a0716a947 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8b20d119-17cb-4742-9223-90e5020f93a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:23 compute-0 nova_compute[254092]: 2025-11-25 16:46:23.812 254096 DEBUG oslo_concurrency.lockutils [req-7d3c7c03-28be-4851-9ecb-8b7aff317527 req-20930de5-acf0-4734-8aea-bf0a0716a947 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8b20d119-17cb-4742-9223-90e5020f93a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:23 compute-0 nova_compute[254092]: 2025-11-25 16:46:23.812 254096 DEBUG oslo_concurrency.lockutils [req-7d3c7c03-28be-4851-9ecb-8b7aff317527 req-20930de5-acf0-4734-8aea-bf0a0716a947 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8b20d119-17cb-4742-9223-90e5020f93a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:23 compute-0 nova_compute[254092]: 2025-11-25 16:46:23.813 254096 DEBUG nova.compute.manager [req-7d3c7c03-28be-4851-9ecb-8b7aff317527 req-20930de5-acf0-4734-8aea-bf0a0716a947 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Processing event network-vif-plugged-419102e4-bcb4-496b-b45c-fba9f5525746 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:46:23 compute-0 nova_compute[254092]: 2025-11-25 16:46:23.813 254096 DEBUG nova.compute.manager [req-7d3c7c03-28be-4851-9ecb-8b7aff317527 req-20930de5-acf0-4734-8aea-bf0a0716a947 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Received event network-vif-plugged-419102e4-bcb4-496b-b45c-fba9f5525746 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:46:23 compute-0 nova_compute[254092]: 2025-11-25 16:46:23.813 254096 DEBUG oslo_concurrency.lockutils [req-7d3c7c03-28be-4851-9ecb-8b7aff317527 req-20930de5-acf0-4734-8aea-bf0a0716a947 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8b20d119-17cb-4742-9223-90e5020f93a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:23 compute-0 nova_compute[254092]: 2025-11-25 16:46:23.813 254096 DEBUG oslo_concurrency.lockutils [req-7d3c7c03-28be-4851-9ecb-8b7aff317527 req-20930de5-acf0-4734-8aea-bf0a0716a947 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8b20d119-17cb-4742-9223-90e5020f93a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:23 compute-0 nova_compute[254092]: 2025-11-25 16:46:23.813 254096 DEBUG oslo_concurrency.lockutils [req-7d3c7c03-28be-4851-9ecb-8b7aff317527 req-20930de5-acf0-4734-8aea-bf0a0716a947 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8b20d119-17cb-4742-9223-90e5020f93a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:23 compute-0 nova_compute[254092]: 2025-11-25 16:46:23.814 254096 DEBUG nova.compute.manager [req-7d3c7c03-28be-4851-9ecb-8b7aff317527 req-20930de5-acf0-4734-8aea-bf0a0716a947 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] No waiting events found dispatching network-vif-plugged-419102e4-bcb4-496b-b45c-fba9f5525746 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:46:23 compute-0 nova_compute[254092]: 2025-11-25 16:46:23.814 254096 WARNING nova.compute.manager [req-7d3c7c03-28be-4851-9ecb-8b7aff317527 req-20930de5-acf0-4734-8aea-bf0a0716a947 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Received unexpected event network-vif-plugged-419102e4-bcb4-496b-b45c-fba9f5525746 for instance with vm_state building and task_state spawning.
Nov 25 16:46:23 compute-0 nova_compute[254092]: 2025-11-25 16:46:23.814 254096 DEBUG nova.compute.manager [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:46:23 compute-0 nova_compute[254092]: 2025-11-25 16:46:23.819 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089183.8196986, 8b20d119-17cb-4742-9223-90e5020f93a7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:46:23 compute-0 nova_compute[254092]: 2025-11-25 16:46:23.820 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] VM Resumed (Lifecycle Event)
Nov 25 16:46:23 compute-0 nova_compute[254092]: 2025-11-25 16:46:23.823 254096 DEBUG nova.virt.libvirt.driver [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:46:23 compute-0 nova_compute[254092]: 2025-11-25 16:46:23.826 254096 INFO nova.virt.libvirt.driver [-] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Instance spawned successfully.
Nov 25 16:46:23 compute-0 nova_compute[254092]: 2025-11-25 16:46:23.826 254096 DEBUG nova.virt.libvirt.driver [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:46:23 compute-0 nova_compute[254092]: 2025-11-25 16:46:23.839 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:46:23 compute-0 nova_compute[254092]: 2025-11-25 16:46:23.846 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:46:23 compute-0 nova_compute[254092]: 2025-11-25 16:46:23.850 254096 DEBUG nova.virt.libvirt.driver [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:46:23 compute-0 nova_compute[254092]: 2025-11-25 16:46:23.851 254096 DEBUG nova.virt.libvirt.driver [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:46:23 compute-0 nova_compute[254092]: 2025-11-25 16:46:23.851 254096 DEBUG nova.virt.libvirt.driver [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:46:23 compute-0 nova_compute[254092]: 2025-11-25 16:46:23.852 254096 DEBUG nova.virt.libvirt.driver [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:46:23 compute-0 nova_compute[254092]: 2025-11-25 16:46:23.852 254096 DEBUG nova.virt.libvirt.driver [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:46:23 compute-0 nova_compute[254092]: 2025-11-25 16:46:23.853 254096 DEBUG nova.virt.libvirt.driver [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:46:23 compute-0 nova_compute[254092]: 2025-11-25 16:46:23.878 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:46:23 compute-0 nova_compute[254092]: 2025-11-25 16:46:23.906 254096 INFO nova.compute.manager [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Took 13.76 seconds to spawn the instance on the hypervisor.
Nov 25 16:46:23 compute-0 nova_compute[254092]: 2025-11-25 16:46:23.906 254096 DEBUG nova.compute.manager [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:46:23 compute-0 nova_compute[254092]: 2025-11-25 16:46:23.968 254096 INFO nova.compute.manager [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Took 14.62 seconds to build instance.
Nov 25 16:46:23 compute-0 nova_compute[254092]: 2025-11-25 16:46:23.986 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089168.9848976, 6b676874-6857-4021-9d83-c3673f57cebb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:46:23 compute-0 nova_compute[254092]: 2025-11-25 16:46:23.986 254096 INFO nova.compute.manager [-] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] VM Stopped (Lifecycle Event)
Nov 25 16:46:24 compute-0 nova_compute[254092]: 2025-11-25 16:46:24.018 254096 DEBUG nova.compute.manager [None req-a40a15e2-1c7a-4689-8da1-09eb58de538a - - - - - -] [instance: 6b676874-6857-4021-9d83-c3673f57cebb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:46:24 compute-0 nova_compute[254092]: 2025-11-25 16:46:24.099 254096 DEBUG oslo_concurrency.lockutils [None req-69f63ef2-46c4-41b8-a0d2-e447899efbe4 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "8b20d119-17cb-4742-9223-90e5020f93a7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:24.502 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:25 compute-0 ceph-mon[74985]: pgmap v1830: 321 pgs: 321 active+clean; 248 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 349 KiB/s wr, 53 op/s
Nov 25 16:46:25 compute-0 nova_compute[254092]: 2025-11-25 16:46:25.512 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1831: 321 pgs: 321 active+clean; 248 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 350 KiB/s wr, 124 op/s
Nov 25 16:46:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:46:26 compute-0 nova_compute[254092]: 2025-11-25 16:46:26.074 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:26 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Nov 25 16:46:26 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:46:26.142612) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 16:46:26 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Nov 25 16:46:26 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089186142708, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 654, "num_deletes": 257, "total_data_size": 678734, "memory_usage": 690824, "flush_reason": "Manual Compaction"}
Nov 25 16:46:26 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Nov 25 16:46:26 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089186150338, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 671929, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37738, "largest_seqno": 38391, "table_properties": {"data_size": 668513, "index_size": 1260, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8073, "raw_average_key_size": 18, "raw_value_size": 661502, "raw_average_value_size": 1556, "num_data_blocks": 55, "num_entries": 425, "num_filter_entries": 425, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764089142, "oldest_key_time": 1764089142, "file_creation_time": 1764089186, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:46:26 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 7735 microseconds, and 2541 cpu microseconds.
Nov 25 16:46:26 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:46:26 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:46:26.150378) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 671929 bytes OK
Nov 25 16:46:26 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:46:26.150395) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Nov 25 16:46:26 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:46:26.161695) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Nov 25 16:46:26 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:46:26.161737) EVENT_LOG_v1 {"time_micros": 1764089186161728, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 16:46:26 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:46:26.161760) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 16:46:26 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 675195, prev total WAL file size 675645, number of live WAL files 2.
Nov 25 16:46:26 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:46:26 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:46:26.162286) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323535' seq:72057594037927935, type:22 .. '6C6F676D0031353038' seq:0, type:0; will stop at (end)
Nov 25 16:46:26 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 16:46:26 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(656KB)], [83(8135KB)]
Nov 25 16:46:26 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089186162351, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 9002915, "oldest_snapshot_seqno": -1}
Nov 25 16:46:26 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 6160 keys, 8881270 bytes, temperature: kUnknown
Nov 25 16:46:26 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089186229750, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 8881270, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8840074, "index_size": 24702, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15429, "raw_key_size": 157578, "raw_average_key_size": 25, "raw_value_size": 8729546, "raw_average_value_size": 1417, "num_data_blocks": 996, "num_entries": 6160, "num_filter_entries": 6160, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764089186, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:46:26 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:46:26 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:46:26.230017) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 8881270 bytes
Nov 25 16:46:26 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:46:26.233563) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 133.4 rd, 131.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 7.9 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(26.6) write-amplify(13.2) OK, records in: 6686, records dropped: 526 output_compression: NoCompression
Nov 25 16:46:26 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:46:26.233588) EVENT_LOG_v1 {"time_micros": 1764089186233578, "job": 48, "event": "compaction_finished", "compaction_time_micros": 67498, "compaction_time_cpu_micros": 20585, "output_level": 6, "num_output_files": 1, "total_output_size": 8881270, "num_input_records": 6686, "num_output_records": 6160, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 16:46:26 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:46:26 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089186233867, "job": 48, "event": "table_file_deletion", "file_number": 85}
Nov 25 16:46:26 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:46:26 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089186235262, "job": 48, "event": "table_file_deletion", "file_number": 83}
Nov 25 16:46:26 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:46:26.162159) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:46:26 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:46:26.235354) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:46:26 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:46:26.235359) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:46:26 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:46:26.235361) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:46:26 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:46:26.235362) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:46:26 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:46:26.235364) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:46:26 compute-0 nova_compute[254092]: 2025-11-25 16:46:26.387 254096 DEBUG nova.objects.instance [None req-92cefdaf-c9ce-4055-8bc3-3bc13c332018 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'pci_devices' on Instance uuid 01f96314-1fbe-4eee-a4ed-db7f448a5320 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:46:26 compute-0 nova_compute[254092]: 2025-11-25 16:46:26.420 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089186.4197242, 01f96314-1fbe-4eee-a4ed-db7f448a5320 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:46:26 compute-0 nova_compute[254092]: 2025-11-25 16:46:26.421 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] VM Paused (Lifecycle Event)
Nov 25 16:46:26 compute-0 nova_compute[254092]: 2025-11-25 16:46:26.442 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:46:26 compute-0 nova_compute[254092]: 2025-11-25 16:46:26.448 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:46:26 compute-0 nova_compute[254092]: 2025-11-25 16:46:26.463 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] During sync_power_state the instance has a pending task (suspending). Skip.
Nov 25 16:46:26 compute-0 kernel: tap4fe8c3a9-70 (unregistering): left promiscuous mode
Nov 25 16:46:26 compute-0 NetworkManager[48891]: <info>  [1764089186.9663] device (tap4fe8c3a9-70): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:46:26 compute-0 ovn_controller[153477]: 2025-11-25T16:46:26Z|00808|binding|INFO|Releasing lport 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 from this chassis (sb_readonly=0)
Nov 25 16:46:26 compute-0 ovn_controller[153477]: 2025-11-25T16:46:26Z|00809|binding|INFO|Setting lport 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 down in Southbound
Nov 25 16:46:26 compute-0 ovn_controller[153477]: 2025-11-25T16:46:26Z|00810|binding|INFO|Removing iface tap4fe8c3a9-70 ovn-installed in OVS
Nov 25 16:46:26 compute-0 nova_compute[254092]: 2025-11-25 16:46:26.966 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:26 compute-0 nova_compute[254092]: 2025-11-25 16:46:26.968 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:26.975 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:a0:1c 10.100.0.11'], port_security=['fa:16:3e:ff:a0:1c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '01f96314-1fbe-4eee-a4ed-db7f448a5320', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4d15f5aabd3491da5314b126a20225a', 'neutron:revision_number': '12', 'neutron:security_group_ids': 'f659b7e6-4ca9-46df-a38f-d29c92b67f9f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.187', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa1f8c26-9163-4397-beb3-8395c780a12b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:46:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:26.976 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 in datapath 50ea1716-9d0b-409f-b648-8d2ad5a30525 unbound from our chassis
Nov 25 16:46:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:26.977 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 50ea1716-9d0b-409f-b648-8d2ad5a30525, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:46:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:26.978 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a7bcb617-2363-4e3d-bfc3-ec64102c962d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:26.979 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525 namespace which is not needed anymore
Nov 25 16:46:26 compute-0 nova_compute[254092]: 2025-11-25 16:46:26.993 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:27 compute-0 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d00000048.scope: Deactivated successfully.
Nov 25 16:46:27 compute-0 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d00000048.scope: Consumed 6.224s CPU time.
Nov 25 16:46:27 compute-0 systemd-machined[216343]: Machine qemu-102-instance-00000048 terminated.
Nov 25 16:46:27 compute-0 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[341837]: [NOTICE]   (341841) : haproxy version is 2.8.14-c23fe91
Nov 25 16:46:27 compute-0 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[341837]: [NOTICE]   (341841) : path to executable is /usr/sbin/haproxy
Nov 25 16:46:27 compute-0 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[341837]: [WARNING]  (341841) : Exiting Master process...
Nov 25 16:46:27 compute-0 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[341837]: [WARNING]  (341841) : Exiting Master process...
Nov 25 16:46:27 compute-0 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[341837]: [ALERT]    (341841) : Current worker (341843) exited with code 143 (Terminated)
Nov 25 16:46:27 compute-0 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[341837]: [WARNING]  (341841) : All workers exited. Exiting... (0)
Nov 25 16:46:27 compute-0 systemd[1]: libpod-ed28dda384e1f820916b08f867ea9cdc430125be845d9f0d6d1de7695fc691ab.scope: Deactivated successfully.
Nov 25 16:46:27 compute-0 podman[342099]: 2025-11-25 16:46:27.124004028 +0000 UTC m=+0.046401462 container died ed28dda384e1f820916b08f867ea9cdc430125be845d9f0d6d1de7695fc691ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 16:46:27 compute-0 nova_compute[254092]: 2025-11-25 16:46:27.157 254096 DEBUG nova.compute.manager [None req-92cefdaf-c9ce-4055-8bc3-3bc13c332018 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:46:27 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ed28dda384e1f820916b08f867ea9cdc430125be845d9f0d6d1de7695fc691ab-userdata-shm.mount: Deactivated successfully.
Nov 25 16:46:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-88d10bf89bd6257734a4b32f3f12e85a3c8ca3a700feb7a137e3b1e15f2c8e91-merged.mount: Deactivated successfully.
Nov 25 16:46:27 compute-0 ceph-mon[74985]: pgmap v1831: 321 pgs: 321 active+clean; 248 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 350 KiB/s wr, 124 op/s
Nov 25 16:46:27 compute-0 podman[342099]: 2025-11-25 16:46:27.183216077 +0000 UTC m=+0.105613511 container cleanup ed28dda384e1f820916b08f867ea9cdc430125be845d9f0d6d1de7695fc691ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 25 16:46:27 compute-0 systemd[1]: libpod-conmon-ed28dda384e1f820916b08f867ea9cdc430125be845d9f0d6d1de7695fc691ab.scope: Deactivated successfully.
Nov 25 16:46:27 compute-0 podman[342139]: 2025-11-25 16:46:27.247766871 +0000 UTC m=+0.041167370 container remove ed28dda384e1f820916b08f867ea9cdc430125be845d9f0d6d1de7695fc691ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 25 16:46:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:27.256 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5d9e32c2-4ab1-4fb4-882c-f98e1f9b0f3a]: (4, ('Tue Nov 25 04:46:27 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525 (ed28dda384e1f820916b08f867ea9cdc430125be845d9f0d6d1de7695fc691ab)\ned28dda384e1f820916b08f867ea9cdc430125be845d9f0d6d1de7695fc691ab\nTue Nov 25 04:46:27 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525 (ed28dda384e1f820916b08f867ea9cdc430125be845d9f0d6d1de7695fc691ab)\ned28dda384e1f820916b08f867ea9cdc430125be845d9f0d6d1de7695fc691ab\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:27.257 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[865efda7-ed9f-4697-8dda-a1488fd2ccd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:27.258 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50ea1716-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:27 compute-0 nova_compute[254092]: 2025-11-25 16:46:27.260 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:27 compute-0 kernel: tap50ea1716-90: left promiscuous mode
Nov 25 16:46:27 compute-0 nova_compute[254092]: 2025-11-25 16:46:27.280 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:27.285 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2b27732e-b85c-4bea-ae99-b008f9aed439]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:27.304 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5a79bb9d-1ea3-4926-ab77-052af997d160]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:27.306 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a63ae47b-85b7-41d5-822d-b528e3dded23]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:27.322 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c979bd2c-ff9a-4d17-8ec0-b9b120300070]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563807, 'reachable_time': 15350, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342157, 'error': None, 'target': 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:27.326 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:46:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:27.327 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[6395d366-6365-405b-93cf-ec98906bd8ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:27 compute-0 systemd[1]: run-netns-ovnmeta\x2d50ea1716\x2d9d0b\x2d409f\x2db648\x2d8d2ad5a30525.mount: Deactivated successfully.
Nov 25 16:46:27 compute-0 nova_compute[254092]: 2025-11-25 16:46:27.384 254096 DEBUG oslo_concurrency.lockutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Acquiring lock "7bf5d985-1a7f-41b6-8002-b801999e99f5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:27 compute-0 nova_compute[254092]: 2025-11-25 16:46:27.385 254096 DEBUG oslo_concurrency.lockutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "7bf5d985-1a7f-41b6-8002-b801999e99f5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:27 compute-0 nova_compute[254092]: 2025-11-25 16:46:27.402 254096 DEBUG nova.compute.manager [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:46:27 compute-0 nova_compute[254092]: 2025-11-25 16:46:27.505 254096 DEBUG oslo_concurrency.lockutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:27 compute-0 nova_compute[254092]: 2025-11-25 16:46:27.505 254096 DEBUG oslo_concurrency.lockutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:27 compute-0 nova_compute[254092]: 2025-11-25 16:46:27.510 254096 DEBUG nova.virt.hardware [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:46:27 compute-0 nova_compute[254092]: 2025-11-25 16:46:27.511 254096 INFO nova.compute.claims [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:46:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1832: 321 pgs: 321 active+clean; 249 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 14 KiB/s wr, 166 op/s
Nov 25 16:46:27 compute-0 nova_compute[254092]: 2025-11-25 16:46:27.898 254096 DEBUG oslo_concurrency.processutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:46:28 compute-0 ceph-mon[74985]: pgmap v1832: 321 pgs: 321 active+clean; 249 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 14 KiB/s wr, 166 op/s
Nov 25 16:46:28 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:46:28 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2854033698' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:46:28 compute-0 nova_compute[254092]: 2025-11-25 16:46:28.393 254096 DEBUG oslo_concurrency.processutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:46:28 compute-0 nova_compute[254092]: 2025-11-25 16:46:28.400 254096 DEBUG nova.compute.provider_tree [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:46:28 compute-0 nova_compute[254092]: 2025-11-25 16:46:28.415 254096 DEBUG nova.scheduler.client.report [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:46:28 compute-0 nova_compute[254092]: 2025-11-25 16:46:28.437 254096 DEBUG oslo_concurrency.lockutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.931s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:28 compute-0 nova_compute[254092]: 2025-11-25 16:46:28.437 254096 DEBUG nova.compute.manager [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:46:28 compute-0 nova_compute[254092]: 2025-11-25 16:46:28.448 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:28 compute-0 nova_compute[254092]: 2025-11-25 16:46:28.473 254096 DEBUG nova.compute.manager [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:46:28 compute-0 nova_compute[254092]: 2025-11-25 16:46:28.474 254096 DEBUG nova.network.neutron [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:46:28 compute-0 nova_compute[254092]: 2025-11-25 16:46:28.489 254096 INFO nova.virt.libvirt.driver [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:46:28 compute-0 nova_compute[254092]: 2025-11-25 16:46:28.504 254096 DEBUG nova.compute.manager [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:46:28 compute-0 nova_compute[254092]: 2025-11-25 16:46:28.572 254096 DEBUG nova.compute.manager [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:46:28 compute-0 nova_compute[254092]: 2025-11-25 16:46:28.574 254096 DEBUG nova.virt.libvirt.driver [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:46:28 compute-0 nova_compute[254092]: 2025-11-25 16:46:28.574 254096 INFO nova.virt.libvirt.driver [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Creating image(s)
Nov 25 16:46:28 compute-0 nova_compute[254092]: 2025-11-25 16:46:28.606 254096 DEBUG nova.storage.rbd_utils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] rbd image 7bf5d985-1a7f-41b6-8002-b801999e99f5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:46:28 compute-0 nova_compute[254092]: 2025-11-25 16:46:28.639 254096 DEBUG nova.storage.rbd_utils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] rbd image 7bf5d985-1a7f-41b6-8002-b801999e99f5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:46:28 compute-0 nova_compute[254092]: 2025-11-25 16:46:28.670 254096 DEBUG nova.storage.rbd_utils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] rbd image 7bf5d985-1a7f-41b6-8002-b801999e99f5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:46:28 compute-0 nova_compute[254092]: 2025-11-25 16:46:28.676 254096 DEBUG oslo_concurrency.processutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:46:28 compute-0 nova_compute[254092]: 2025-11-25 16:46:28.757 254096 DEBUG oslo_concurrency.processutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:46:28 compute-0 nova_compute[254092]: 2025-11-25 16:46:28.763 254096 DEBUG nova.policy [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7bd4800c25cd462b9365649e599d0a0e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd4964e211a6d4699ab499f7cadee8a8d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:46:28 compute-0 nova_compute[254092]: 2025-11-25 16:46:28.768 254096 DEBUG oslo_concurrency.lockutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:28 compute-0 nova_compute[254092]: 2025-11-25 16:46:28.769 254096 DEBUG oslo_concurrency.lockutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:29 compute-0 nova_compute[254092]: 2025-11-25 16:46:29.366 254096 DEBUG oslo_concurrency.lockutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:29 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2854033698' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:46:29 compute-0 nova_compute[254092]: 2025-11-25 16:46:29.388 254096 DEBUG nova.storage.rbd_utils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] rbd image 7bf5d985-1a7f-41b6-8002-b801999e99f5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:46:29 compute-0 nova_compute[254092]: 2025-11-25 16:46:29.391 254096 DEBUG oslo_concurrency.processutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 7bf5d985-1a7f-41b6-8002-b801999e99f5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:46:29 compute-0 nova_compute[254092]: 2025-11-25 16:46:29.427 254096 DEBUG nova.compute.manager [req-5dd94eb8-ee32-4c44-bd20-b5f664554cb8 req-6603a6b1-a180-4e6d-b82a-4964086932c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received event network-vif-unplugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:46:29 compute-0 nova_compute[254092]: 2025-11-25 16:46:29.428 254096 DEBUG oslo_concurrency.lockutils [req-5dd94eb8-ee32-4c44-bd20-b5f664554cb8 req-6603a6b1-a180-4e6d-b82a-4964086932c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:29 compute-0 nova_compute[254092]: 2025-11-25 16:46:29.428 254096 DEBUG oslo_concurrency.lockutils [req-5dd94eb8-ee32-4c44-bd20-b5f664554cb8 req-6603a6b1-a180-4e6d-b82a-4964086932c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:29 compute-0 nova_compute[254092]: 2025-11-25 16:46:29.428 254096 DEBUG oslo_concurrency.lockutils [req-5dd94eb8-ee32-4c44-bd20-b5f664554cb8 req-6603a6b1-a180-4e6d-b82a-4964086932c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:29 compute-0 nova_compute[254092]: 2025-11-25 16:46:29.428 254096 DEBUG nova.compute.manager [req-5dd94eb8-ee32-4c44-bd20-b5f664554cb8 req-6603a6b1-a180-4e6d-b82a-4964086932c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] No waiting events found dispatching network-vif-unplugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:46:29 compute-0 nova_compute[254092]: 2025-11-25 16:46:29.428 254096 WARNING nova.compute.manager [req-5dd94eb8-ee32-4c44-bd20-b5f664554cb8 req-6603a6b1-a180-4e6d-b82a-4964086932c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received unexpected event network-vif-unplugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 for instance with vm_state suspended and task_state resuming.
Nov 25 16:46:29 compute-0 nova_compute[254092]: 2025-11-25 16:46:29.466 254096 INFO nova.compute.manager [None req-6f2e4756-121a-47bc-9526-9f604c4368c2 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Resuming
Nov 25 16:46:29 compute-0 nova_compute[254092]: 2025-11-25 16:46:29.467 254096 DEBUG nova.objects.instance [None req-6f2e4756-121a-47bc-9526-9f604c4368c2 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'flavor' on Instance uuid 01f96314-1fbe-4eee-a4ed-db7f448a5320 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:46:29 compute-0 nova_compute[254092]: 2025-11-25 16:46:29.503 254096 DEBUG oslo_concurrency.lockutils [None req-6f2e4756-121a-47bc-9526-9f604c4368c2 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquiring lock "refresh_cache-01f96314-1fbe-4eee-a4ed-db7f448a5320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:46:29 compute-0 nova_compute[254092]: 2025-11-25 16:46:29.504 254096 DEBUG oslo_concurrency.lockutils [None req-6f2e4756-121a-47bc-9526-9f604c4368c2 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquired lock "refresh_cache-01f96314-1fbe-4eee-a4ed-db7f448a5320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:46:29 compute-0 nova_compute[254092]: 2025-11-25 16:46:29.504 254096 DEBUG nova.network.neutron [None req-6f2e4756-121a-47bc-9526-9f604c4368c2 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:46:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1833: 321 pgs: 321 active+clean; 249 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 14 KiB/s wr, 166 op/s
Nov 25 16:46:30 compute-0 nova_compute[254092]: 2025-11-25 16:46:30.326 254096 DEBUG oslo_concurrency.processutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 7bf5d985-1a7f-41b6-8002-b801999e99f5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.935s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:46:30 compute-0 nova_compute[254092]: 2025-11-25 16:46:30.381 254096 DEBUG nova.storage.rbd_utils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] resizing rbd image 7bf5d985-1a7f-41b6-8002-b801999e99f5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:46:30 compute-0 ceph-mon[74985]: pgmap v1833: 321 pgs: 321 active+clean; 249 MiB data, 718 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 14 KiB/s wr, 166 op/s
Nov 25 16:46:30 compute-0 nova_compute[254092]: 2025-11-25 16:46:30.512 254096 DEBUG nova.network.neutron [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Successfully created port: 522caf03-4901-44aa-ba29-8d9b37be1158 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:46:30 compute-0 nova_compute[254092]: 2025-11-25 16:46:30.714 254096 DEBUG nova.objects.instance [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lazy-loading 'migration_context' on Instance uuid 7bf5d985-1a7f-41b6-8002-b801999e99f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:46:30 compute-0 nova_compute[254092]: 2025-11-25 16:46:30.731 254096 DEBUG nova.virt.libvirt.driver [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:46:30 compute-0 nova_compute[254092]: 2025-11-25 16:46:30.731 254096 DEBUG nova.virt.libvirt.driver [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Ensure instance console log exists: /var/lib/nova/instances/7bf5d985-1a7f-41b6-8002-b801999e99f5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:46:30 compute-0 nova_compute[254092]: 2025-11-25 16:46:30.732 254096 DEBUG oslo_concurrency.lockutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:30 compute-0 nova_compute[254092]: 2025-11-25 16:46:30.732 254096 DEBUG oslo_concurrency.lockutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:30 compute-0 nova_compute[254092]: 2025-11-25 16:46:30.732 254096 DEBUG oslo_concurrency.lockutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:31 compute-0 nova_compute[254092]: 2025-11-25 16:46:31.076 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:46:31 compute-0 nova_compute[254092]: 2025-11-25 16:46:31.458 254096 DEBUG nova.network.neutron [None req-6f2e4756-121a-47bc-9526-9f604c4368c2 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Updating instance_info_cache with network_info: [{"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:46:31 compute-0 nova_compute[254092]: 2025-11-25 16:46:31.472 254096 DEBUG oslo_concurrency.lockutils [None req-6f2e4756-121a-47bc-9526-9f604c4368c2 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Releasing lock "refresh_cache-01f96314-1fbe-4eee-a4ed-db7f448a5320" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:46:31 compute-0 nova_compute[254092]: 2025-11-25 16:46:31.478 254096 DEBUG nova.virt.libvirt.vif [None req-6f2e4756-121a-47bc-9526-9f604c4368c2 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:42:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1951123052',display_name='tempest-ServerActionsTestJSON-server-1951123052',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1951123052',id=72,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOul8HeAt+UIxCfJfn+60t4YpZV7iSL4LJDORWr6iCR1ov+YI0semWU/ei1rpSzmkTyhB+zsYhvO/JrM/v0aowbnChSU76J6tWISFPQqrNakhVmsB30NDWBI6kYIFzj8+Q==',key_name='tempest-keypair-1452948426',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:43:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b4d15f5aabd3491da5314b126a20225a',ramdisk_id='',reservation_id='r-gjhb48uk',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1811453217',owner_user_name='tempest-ServerActionsTestJSON-1811453217-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:46:27Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aea2d8cf3bb54cdbbc72e41805fb1f90',uuid=01f96314-1fbe-4eee-a4ed-db7f448a5320,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:46:31 compute-0 nova_compute[254092]: 2025-11-25 16:46:31.479 254096 DEBUG nova.network.os_vif_util [None req-6f2e4756-121a-47bc-9526-9f604c4368c2 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converting VIF {"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:46:31 compute-0 nova_compute[254092]: 2025-11-25 16:46:31.479 254096 DEBUG nova.network.os_vif_util [None req-6f2e4756-121a-47bc-9526-9f604c4368c2 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:46:31 compute-0 nova_compute[254092]: 2025-11-25 16:46:31.480 254096 DEBUG os_vif [None req-6f2e4756-121a-47bc-9526-9f604c4368c2 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:46:31 compute-0 nova_compute[254092]: 2025-11-25 16:46:31.480 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:31 compute-0 nova_compute[254092]: 2025-11-25 16:46:31.481 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:31 compute-0 nova_compute[254092]: 2025-11-25 16:46:31.481 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:46:31 compute-0 nova_compute[254092]: 2025-11-25 16:46:31.483 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:31 compute-0 nova_compute[254092]: 2025-11-25 16:46:31.484 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4fe8c3a9-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:31 compute-0 nova_compute[254092]: 2025-11-25 16:46:31.484 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4fe8c3a9-70, col_values=(('external_ids', {'iface-id': '4fe8c3a9-70ba-4a51-8bc1-1505a49fe349', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ff:a0:1c', 'vm-uuid': '01f96314-1fbe-4eee-a4ed-db7f448a5320'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:31 compute-0 nova_compute[254092]: 2025-11-25 16:46:31.484 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:46:31 compute-0 nova_compute[254092]: 2025-11-25 16:46:31.485 254096 INFO os_vif [None req-6f2e4756-121a-47bc-9526-9f604c4368c2 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70')
Nov 25 16:46:31 compute-0 nova_compute[254092]: 2025-11-25 16:46:31.504 254096 DEBUG nova.objects.instance [None req-6f2e4756-121a-47bc-9526-9f604c4368c2 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'numa_topology' on Instance uuid 01f96314-1fbe-4eee-a4ed-db7f448a5320 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:46:31 compute-0 kernel: tap4fe8c3a9-70: entered promiscuous mode
Nov 25 16:46:31 compute-0 NetworkManager[48891]: <info>  [1764089191.5687] manager: (tap4fe8c3a9-70): new Tun device (/org/freedesktop/NetworkManager/Devices/349)
Nov 25 16:46:31 compute-0 ovn_controller[153477]: 2025-11-25T16:46:31Z|00811|binding|INFO|Claiming lport 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 for this chassis.
Nov 25 16:46:31 compute-0 ovn_controller[153477]: 2025-11-25T16:46:31Z|00812|binding|INFO|4fe8c3a9-70ba-4a51-8bc1-1505a49fe349: Claiming fa:16:3e:ff:a0:1c 10.100.0.11
Nov 25 16:46:31 compute-0 nova_compute[254092]: 2025-11-25 16:46:31.621 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.639 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:a0:1c 10.100.0.11'], port_security=['fa:16:3e:ff:a0:1c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '01f96314-1fbe-4eee-a4ed-db7f448a5320', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4d15f5aabd3491da5314b126a20225a', 'neutron:revision_number': '13', 'neutron:security_group_ids': 'f659b7e6-4ca9-46df-a38f-d29c92b67f9f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.187'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa1f8c26-9163-4397-beb3-8395c780a12b, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:46:31 compute-0 ovn_controller[153477]: 2025-11-25T16:46:31Z|00813|binding|INFO|Setting lport 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 up in Southbound
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.640 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 in datapath 50ea1716-9d0b-409f-b648-8d2ad5a30525 bound to our chassis
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.641 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 50ea1716-9d0b-409f-b648-8d2ad5a30525
Nov 25 16:46:31 compute-0 ovn_controller[153477]: 2025-11-25T16:46:31Z|00814|binding|INFO|Setting lport 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 ovn-installed in OVS
Nov 25 16:46:31 compute-0 nova_compute[254092]: 2025-11-25 16:46:31.643 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:31 compute-0 systemd-udevd[342359]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:46:31 compute-0 systemd-machined[216343]: New machine qemu-104-instance-00000048.
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.657 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[568f1961-0ddf-40bb-8d8b-f2b665bc2ddd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.659 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap50ea1716-91 in ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:46:31 compute-0 NetworkManager[48891]: <info>  [1764089191.6603] device (tap4fe8c3a9-70): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:46:31 compute-0 NetworkManager[48891]: <info>  [1764089191.6612] device (tap4fe8c3a9-70): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.661 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap50ea1716-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:46:31 compute-0 systemd[1]: Started Virtual Machine qemu-104-instance-00000048.
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.662 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d0fd37c1-34f0-4490-8857-5aded4859058]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.670 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[958b9ef4-4b61-4c7c-8503-25267a02cf40]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.681 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[954432f1-289a-46be-9c01-c1ec336d2cec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.696 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ea1cf3ae-c724-4837-835d-d558b15c586b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.722 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[934d6a2f-e41a-463e-b8c7-7466c08a6ed8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1834: 321 pgs: 321 active+clean; 289 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.6 MiB/s wr, 196 op/s
Nov 25 16:46:31 compute-0 NetworkManager[48891]: <info>  [1764089191.7434] manager: (tap50ea1716-90): new Veth device (/org/freedesktop/NetworkManager/Devices/350)
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.742 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[858e044f-7795-4713-b88f-66aebef74622]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.779 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[5982a5a6-d936-4512-9f17-a63399bca561]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.782 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[4e04b53a-2fa3-4c0b-9770-efe793ce240d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:31 compute-0 nova_compute[254092]: 2025-11-25 16:46:31.798 254096 DEBUG nova.compute.manager [req-2455feca-5e04-4a88-ae16-81d5494f503d req-28d9e2a7-1948-4427-9b3d-af03252c6188 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:46:31 compute-0 nova_compute[254092]: 2025-11-25 16:46:31.798 254096 DEBUG oslo_concurrency.lockutils [req-2455feca-5e04-4a88-ae16-81d5494f503d req-28d9e2a7-1948-4427-9b3d-af03252c6188 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:31 compute-0 nova_compute[254092]: 2025-11-25 16:46:31.799 254096 DEBUG oslo_concurrency.lockutils [req-2455feca-5e04-4a88-ae16-81d5494f503d req-28d9e2a7-1948-4427-9b3d-af03252c6188 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:31 compute-0 nova_compute[254092]: 2025-11-25 16:46:31.799 254096 DEBUG oslo_concurrency.lockutils [req-2455feca-5e04-4a88-ae16-81d5494f503d req-28d9e2a7-1948-4427-9b3d-af03252c6188 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:31 compute-0 nova_compute[254092]: 2025-11-25 16:46:31.799 254096 DEBUG nova.compute.manager [req-2455feca-5e04-4a88-ae16-81d5494f503d req-28d9e2a7-1948-4427-9b3d-af03252c6188 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] No waiting events found dispatching network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:46:31 compute-0 nova_compute[254092]: 2025-11-25 16:46:31.799 254096 WARNING nova.compute.manager [req-2455feca-5e04-4a88-ae16-81d5494f503d req-28d9e2a7-1948-4427-9b3d-af03252c6188 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received unexpected event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 for instance with vm_state suspended and task_state resuming.
Nov 25 16:46:31 compute-0 NetworkManager[48891]: <info>  [1764089191.8043] device (tap50ea1716-90): carrier: link connected
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.811 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[be004aca-4b85-4074-80a6-f4da78ab514d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.829 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cb9dcd2f-b2b3-4a96-8e91-fb4903bae458]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap50ea1716-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:99:3a:54'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 241], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 564939, 'reachable_time': 31744, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342393, 'error': None, 'target': 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.845 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e8e2cb6c-b3ae-4a0f-bfe2-6dc37d340b39]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe99:3a54'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 564939, 'tstamp': 564939}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342394, 'error': None, 'target': 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.863 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2155ced8-7807-404d-929c-d90de76e2477]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap50ea1716-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:99:3a:54'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 180, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 180, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 241], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 564939, 'reachable_time': 31744, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 342395, 'error': None, 'target': 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.898 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c77ceec7-2208-406d-be78-6bd12908ca06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.960 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c8c3d218-289a-422e-b8f4-bfea4df48a94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.962 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50ea1716-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.962 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.963 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap50ea1716-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:31 compute-0 kernel: tap50ea1716-90: entered promiscuous mode
Nov 25 16:46:31 compute-0 NetworkManager[48891]: <info>  [1764089191.9668] manager: (tap50ea1716-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/351)
Nov 25 16:46:31 compute-0 nova_compute[254092]: 2025-11-25 16:46:31.965 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:31 compute-0 nova_compute[254092]: 2025-11-25 16:46:31.967 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.970 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap50ea1716-90, col_values=(('external_ids', {'iface-id': '1ec3401d-acb9-4d50-bfc2-f0a1127f4745'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:31 compute-0 ovn_controller[153477]: 2025-11-25T16:46:31Z|00815|binding|INFO|Releasing lport 1ec3401d-acb9-4d50-bfc2-f0a1127f4745 from this chassis (sb_readonly=0)
Nov 25 16:46:31 compute-0 nova_compute[254092]: 2025-11-25 16:46:31.972 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:31 compute-0 nova_compute[254092]: 2025-11-25 16:46:31.973 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.975 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/50ea1716-9d0b-409f-b648-8d2ad5a30525.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/50ea1716-9d0b-409f-b648-8d2ad5a30525.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.979 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4a716e08-0e5c-4f63-bb79-f26148dd3665]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.980 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-50ea1716-9d0b-409f-b648-8d2ad5a30525
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/50ea1716-9d0b-409f-b648-8d2ad5a30525.pid.haproxy
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 50ea1716-9d0b-409f-b648-8d2ad5a30525
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:46:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:31.982 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'env', 'PROCESS_TAG=haproxy-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/50ea1716-9d0b-409f-b648-8d2ad5a30525.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:46:31 compute-0 nova_compute[254092]: 2025-11-25 16:46:31.987 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:32 compute-0 nova_compute[254092]: 2025-11-25 16:46:32.217 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:32 compute-0 podman[342445]: 2025-11-25 16:46:32.345046201 +0000 UTC m=+0.023850289 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:46:32 compute-0 nova_compute[254092]: 2025-11-25 16:46:32.477 254096 DEBUG nova.network.neutron [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Successfully updated port: 522caf03-4901-44aa-ba29-8d9b37be1158 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:46:32 compute-0 nova_compute[254092]: 2025-11-25 16:46:32.497 254096 DEBUG oslo_concurrency.lockutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Acquiring lock "refresh_cache-7bf5d985-1a7f-41b6-8002-b801999e99f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:46:32 compute-0 nova_compute[254092]: 2025-11-25 16:46:32.497 254096 DEBUG oslo_concurrency.lockutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Acquired lock "refresh_cache-7bf5d985-1a7f-41b6-8002-b801999e99f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:46:32 compute-0 nova_compute[254092]: 2025-11-25 16:46:32.497 254096 DEBUG nova.network.neutron [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:46:32 compute-0 podman[342445]: 2025-11-25 16:46:32.511963757 +0000 UTC m=+0.190767815 container create 0291418efc19617d6cf2bae37cd5a3eadce9477f1cb5e8e728832017d5e432ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:46:32 compute-0 systemd[1]: Started libpod-conmon-0291418efc19617d6cf2bae37cd5a3eadce9477f1cb5e8e728832017d5e432ff.scope.
Nov 25 16:46:32 compute-0 nova_compute[254092]: 2025-11-25 16:46:32.585 254096 DEBUG nova.compute.manager [req-e8454537-53d0-4c3c-b96a-e12e43471dc3 req-78065d80-6ba7-4c99-95a8-cf68c28630b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Received event network-changed-522caf03-4901-44aa-ba29-8d9b37be1158 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:46:32 compute-0 nova_compute[254092]: 2025-11-25 16:46:32.585 254096 DEBUG nova.compute.manager [req-e8454537-53d0-4c3c-b96a-e12e43471dc3 req-78065d80-6ba7-4c99-95a8-cf68c28630b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Refreshing instance network info cache due to event network-changed-522caf03-4901-44aa-ba29-8d9b37be1158. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:46:32 compute-0 nova_compute[254092]: 2025-11-25 16:46:32.585 254096 DEBUG oslo_concurrency.lockutils [req-e8454537-53d0-4c3c-b96a-e12e43471dc3 req-78065d80-6ba7-4c99-95a8-cf68c28630b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-7bf5d985-1a7f-41b6-8002-b801999e99f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:46:32 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:46:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cdd06ef15be99e9f8eefd5df555fdd7f637aaafdee50048abe0ba92363351e7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:46:32 compute-0 podman[342445]: 2025-11-25 16:46:32.613237939 +0000 UTC m=+0.292042017 container init 0291418efc19617d6cf2bae37cd5a3eadce9477f1cb5e8e728832017d5e432ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:46:32 compute-0 podman[342445]: 2025-11-25 16:46:32.621297078 +0000 UTC m=+0.300101136 container start 0291418efc19617d6cf2bae37cd5a3eadce9477f1cb5e8e728832017d5e432ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 16:46:32 compute-0 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[342480]: [NOTICE]   (342488) : New worker (342490) forked
Nov 25 16:46:32 compute-0 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[342480]: [NOTICE]   (342488) : Loading success.
Nov 25 16:46:32 compute-0 nova_compute[254092]: 2025-11-25 16:46:32.658 254096 DEBUG oslo_concurrency.lockutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "076182c5-e049-4b78-b5a0-64489e37776b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:32 compute-0 nova_compute[254092]: 2025-11-25 16:46:32.659 254096 DEBUG oslo_concurrency.lockutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "076182c5-e049-4b78-b5a0-64489e37776b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:32 compute-0 nova_compute[254092]: 2025-11-25 16:46:32.678 254096 DEBUG nova.compute.manager [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:46:32 compute-0 nova_compute[254092]: 2025-11-25 16:46:32.682 254096 DEBUG nova.network.neutron [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:46:32 compute-0 nova_compute[254092]: 2025-11-25 16:46:32.708 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for 01f96314-1fbe-4eee-a4ed-db7f448a5320 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 16:46:32 compute-0 nova_compute[254092]: 2025-11-25 16:46:32.709 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089192.7084296, 01f96314-1fbe-4eee-a4ed-db7f448a5320 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:46:32 compute-0 nova_compute[254092]: 2025-11-25 16:46:32.709 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] VM Started (Lifecycle Event)
Nov 25 16:46:32 compute-0 nova_compute[254092]: 2025-11-25 16:46:32.729 254096 DEBUG nova.compute.manager [None req-6f2e4756-121a-47bc-9526-9f604c4368c2 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:46:32 compute-0 nova_compute[254092]: 2025-11-25 16:46:32.729 254096 DEBUG nova.objects.instance [None req-6f2e4756-121a-47bc-9526-9f604c4368c2 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'pci_devices' on Instance uuid 01f96314-1fbe-4eee-a4ed-db7f448a5320 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:46:32 compute-0 nova_compute[254092]: 2025-11-25 16:46:32.734 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:46:32 compute-0 nova_compute[254092]: 2025-11-25 16:46:32.737 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:46:32 compute-0 nova_compute[254092]: 2025-11-25 16:46:32.753 254096 INFO nova.virt.libvirt.driver [-] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Instance running successfully.
Nov 25 16:46:32 compute-0 virtqemud[253880]: argument unsupported: QEMU guest agent is not configured
Nov 25 16:46:32 compute-0 nova_compute[254092]: 2025-11-25 16:46:32.758 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] During sync_power_state the instance has a pending task (resuming). Skip.
Nov 25 16:46:32 compute-0 nova_compute[254092]: 2025-11-25 16:46:32.758 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089192.713489, 01f96314-1fbe-4eee-a4ed-db7f448a5320 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:46:32 compute-0 nova_compute[254092]: 2025-11-25 16:46:32.758 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] VM Resumed (Lifecycle Event)
Nov 25 16:46:32 compute-0 nova_compute[254092]: 2025-11-25 16:46:32.761 254096 DEBUG oslo_concurrency.lockutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:32 compute-0 nova_compute[254092]: 2025-11-25 16:46:32.761 254096 DEBUG oslo_concurrency.lockutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:32 compute-0 nova_compute[254092]: 2025-11-25 16:46:32.763 254096 DEBUG nova.virt.libvirt.guest [None req-6f2e4756-121a-47bc-9526-9f604c4368c2 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Nov 25 16:46:32 compute-0 nova_compute[254092]: 2025-11-25 16:46:32.763 254096 DEBUG nova.compute.manager [None req-6f2e4756-121a-47bc-9526-9f604c4368c2 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:46:32 compute-0 nova_compute[254092]: 2025-11-25 16:46:32.774 254096 DEBUG nova.virt.hardware [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:46:32 compute-0 nova_compute[254092]: 2025-11-25 16:46:32.774 254096 INFO nova.compute.claims [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:46:32 compute-0 nova_compute[254092]: 2025-11-25 16:46:32.778 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:46:32 compute-0 nova_compute[254092]: 2025-11-25 16:46:32.783 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:46:32 compute-0 nova_compute[254092]: 2025-11-25 16:46:32.812 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] During sync_power_state the instance has a pending task (resuming). Skip.
Nov 25 16:46:32 compute-0 ceph-mon[74985]: pgmap v1834: 321 pgs: 321 active+clean; 289 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.6 MiB/s wr, 196 op/s
Nov 25 16:46:32 compute-0 nova_compute[254092]: 2025-11-25 16:46:32.978 254096 DEBUG oslo_concurrency.processutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:46:33 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:46:33 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1587360265' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:46:33 compute-0 nova_compute[254092]: 2025-11-25 16:46:33.440 254096 DEBUG oslo_concurrency.processutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:46:33 compute-0 nova_compute[254092]: 2025-11-25 16:46:33.448 254096 DEBUG nova.compute.provider_tree [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:46:33 compute-0 nova_compute[254092]: 2025-11-25 16:46:33.456 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:33 compute-0 nova_compute[254092]: 2025-11-25 16:46:33.475 254096 DEBUG nova.scheduler.client.report [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:46:33 compute-0 nova_compute[254092]: 2025-11-25 16:46:33.496 254096 DEBUG oslo_concurrency.lockutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:33 compute-0 nova_compute[254092]: 2025-11-25 16:46:33.497 254096 DEBUG nova.compute.manager [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:46:33 compute-0 nova_compute[254092]: 2025-11-25 16:46:33.542 254096 DEBUG nova.compute.manager [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:46:33 compute-0 nova_compute[254092]: 2025-11-25 16:46:33.543 254096 DEBUG nova.network.neutron [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:46:33 compute-0 nova_compute[254092]: 2025-11-25 16:46:33.560 254096 INFO nova.virt.libvirt.driver [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:46:33 compute-0 nova_compute[254092]: 2025-11-25 16:46:33.575 254096 DEBUG nova.compute.manager [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:46:33 compute-0 nova_compute[254092]: 2025-11-25 16:46:33.602 254096 DEBUG nova.network.neutron [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Updating instance_info_cache with network_info: [{"id": "522caf03-4901-44aa-ba29-8d9b37be1158", "address": "fa:16:3e:fd:24:70", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap522caf03-49", "ovs_interfaceid": "522caf03-4901-44aa-ba29-8d9b37be1158", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:46:33 compute-0 nova_compute[254092]: 2025-11-25 16:46:33.620 254096 DEBUG oslo_concurrency.lockutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Releasing lock "refresh_cache-7bf5d985-1a7f-41b6-8002-b801999e99f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:46:33 compute-0 nova_compute[254092]: 2025-11-25 16:46:33.621 254096 DEBUG nova.compute.manager [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Instance network_info: |[{"id": "522caf03-4901-44aa-ba29-8d9b37be1158", "address": "fa:16:3e:fd:24:70", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap522caf03-49", "ovs_interfaceid": "522caf03-4901-44aa-ba29-8d9b37be1158", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:46:33 compute-0 nova_compute[254092]: 2025-11-25 16:46:33.621 254096 DEBUG oslo_concurrency.lockutils [req-e8454537-53d0-4c3c-b96a-e12e43471dc3 req-78065d80-6ba7-4c99-95a8-cf68c28630b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-7bf5d985-1a7f-41b6-8002-b801999e99f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:46:33 compute-0 nova_compute[254092]: 2025-11-25 16:46:33.621 254096 DEBUG nova.network.neutron [req-e8454537-53d0-4c3c-b96a-e12e43471dc3 req-78065d80-6ba7-4c99-95a8-cf68c28630b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Refreshing network info cache for port 522caf03-4901-44aa-ba29-8d9b37be1158 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:46:33 compute-0 nova_compute[254092]: 2025-11-25 16:46:33.624 254096 DEBUG nova.virt.libvirt.driver [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Start _get_guest_xml network_info=[{"id": "522caf03-4901-44aa-ba29-8d9b37be1158", "address": "fa:16:3e:fd:24:70", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap522caf03-49", "ovs_interfaceid": "522caf03-4901-44aa-ba29-8d9b37be1158", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:46:33 compute-0 nova_compute[254092]: 2025-11-25 16:46:33.628 254096 WARNING nova.virt.libvirt.driver [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:46:33 compute-0 nova_compute[254092]: 2025-11-25 16:46:33.636 254096 DEBUG nova.virt.libvirt.host [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:46:33 compute-0 nova_compute[254092]: 2025-11-25 16:46:33.636 254096 DEBUG nova.virt.libvirt.host [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:46:33 compute-0 nova_compute[254092]: 2025-11-25 16:46:33.644 254096 DEBUG nova.virt.libvirt.host [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:46:33 compute-0 nova_compute[254092]: 2025-11-25 16:46:33.645 254096 DEBUG nova.virt.libvirt.host [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:46:33 compute-0 nova_compute[254092]: 2025-11-25 16:46:33.645 254096 DEBUG nova.virt.libvirt.driver [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:46:33 compute-0 nova_compute[254092]: 2025-11-25 16:46:33.645 254096 DEBUG nova.virt.hardware [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:46:33 compute-0 nova_compute[254092]: 2025-11-25 16:46:33.646 254096 DEBUG nova.virt.hardware [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:46:33 compute-0 nova_compute[254092]: 2025-11-25 16:46:33.646 254096 DEBUG nova.virt.hardware [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:46:33 compute-0 nova_compute[254092]: 2025-11-25 16:46:33.646 254096 DEBUG nova.virt.hardware [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:46:33 compute-0 nova_compute[254092]: 2025-11-25 16:46:33.646 254096 DEBUG nova.virt.hardware [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:46:33 compute-0 nova_compute[254092]: 2025-11-25 16:46:33.646 254096 DEBUG nova.virt.hardware [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:46:33 compute-0 nova_compute[254092]: 2025-11-25 16:46:33.647 254096 DEBUG nova.virt.hardware [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:46:33 compute-0 nova_compute[254092]: 2025-11-25 16:46:33.647 254096 DEBUG nova.virt.hardware [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:46:33 compute-0 nova_compute[254092]: 2025-11-25 16:46:33.647 254096 DEBUG nova.virt.hardware [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:46:33 compute-0 nova_compute[254092]: 2025-11-25 16:46:33.647 254096 DEBUG nova.virt.hardware [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:46:33 compute-0 nova_compute[254092]: 2025-11-25 16:46:33.647 254096 DEBUG nova.virt.hardware [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:46:33 compute-0 nova_compute[254092]: 2025-11-25 16:46:33.650 254096 DEBUG oslo_concurrency.processutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:46:33 compute-0 nova_compute[254092]: 2025-11-25 16:46:33.695 254096 DEBUG nova.compute.manager [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:46:33 compute-0 nova_compute[254092]: 2025-11-25 16:46:33.697 254096 DEBUG nova.virt.libvirt.driver [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:46:33 compute-0 nova_compute[254092]: 2025-11-25 16:46:33.697 254096 INFO nova.virt.libvirt.driver [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Creating image(s)
Nov 25 16:46:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1835: 321 pgs: 321 active+clean; 289 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.6 MiB/s wr, 168 op/s
Nov 25 16:46:33 compute-0 nova_compute[254092]: 2025-11-25 16:46:33.729 254096 DEBUG nova.storage.rbd_utils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 076182c5-e049-4b78-b5a0-64489e37776b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:46:33 compute-0 nova_compute[254092]: 2025-11-25 16:46:33.758 254096 DEBUG nova.storage.rbd_utils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 076182c5-e049-4b78-b5a0-64489e37776b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:46:33 compute-0 nova_compute[254092]: 2025-11-25 16:46:33.786 254096 DEBUG nova.storage.rbd_utils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 076182c5-e049-4b78-b5a0-64489e37776b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:46:33 compute-0 nova_compute[254092]: 2025-11-25 16:46:33.790 254096 DEBUG oslo_concurrency.processutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:46:33 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1587360265' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:46:33 compute-0 nova_compute[254092]: 2025-11-25 16:46:33.865 254096 DEBUG oslo_concurrency.processutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:46:33 compute-0 nova_compute[254092]: 2025-11-25 16:46:33.866 254096 DEBUG oslo_concurrency.lockutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:33 compute-0 nova_compute[254092]: 2025-11-25 16:46:33.866 254096 DEBUG oslo_concurrency.lockutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:33 compute-0 nova_compute[254092]: 2025-11-25 16:46:33.867 254096 DEBUG oslo_concurrency.lockutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:33 compute-0 nova_compute[254092]: 2025-11-25 16:46:33.887 254096 DEBUG nova.storage.rbd_utils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 076182c5-e049-4b78-b5a0-64489e37776b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:46:33 compute-0 nova_compute[254092]: 2025-11-25 16:46:33.890 254096 DEBUG oslo_concurrency.processutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 076182c5-e049-4b78-b5a0-64489e37776b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:46:34 compute-0 nova_compute[254092]: 2025-11-25 16:46:34.003 254096 DEBUG nova.policy [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7a137feb008c49e092cbb94106526835', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '56790f54314e4087abb5da8030b17666', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:46:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:46:34 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2625772133' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:46:34 compute-0 nova_compute[254092]: 2025-11-25 16:46:34.118 254096 DEBUG oslo_concurrency.processutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:46:34 compute-0 nova_compute[254092]: 2025-11-25 16:46:34.158 254096 DEBUG nova.storage.rbd_utils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] rbd image 7bf5d985-1a7f-41b6-8002-b801999e99f5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:46:34 compute-0 nova_compute[254092]: 2025-11-25 16:46:34.164 254096 DEBUG oslo_concurrency.processutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:46:34 compute-0 nova_compute[254092]: 2025-11-25 16:46:34.234 254096 DEBUG oslo_concurrency.processutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 076182c5-e049-4b78-b5a0-64489e37776b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.344s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:46:34 compute-0 nova_compute[254092]: 2025-11-25 16:46:34.286 254096 DEBUG nova.storage.rbd_utils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] resizing rbd image 076182c5-e049-4b78-b5a0-64489e37776b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:46:34 compute-0 nova_compute[254092]: 2025-11-25 16:46:34.368 254096 DEBUG nova.objects.instance [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lazy-loading 'migration_context' on Instance uuid 076182c5-e049-4b78-b5a0-64489e37776b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:46:34 compute-0 nova_compute[254092]: 2025-11-25 16:46:34.380 254096 DEBUG nova.virt.libvirt.driver [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:46:34 compute-0 nova_compute[254092]: 2025-11-25 16:46:34.381 254096 DEBUG nova.virt.libvirt.driver [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Ensure instance console log exists: /var/lib/nova/instances/076182c5-e049-4b78-b5a0-64489e37776b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:46:34 compute-0 nova_compute[254092]: 2025-11-25 16:46:34.381 254096 DEBUG oslo_concurrency.lockutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:34 compute-0 nova_compute[254092]: 2025-11-25 16:46:34.382 254096 DEBUG oslo_concurrency.lockutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:34 compute-0 nova_compute[254092]: 2025-11-25 16:46:34.382 254096 DEBUG oslo_concurrency.lockutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:46:34 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/972452691' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:46:34 compute-0 nova_compute[254092]: 2025-11-25 16:46:34.634 254096 DEBUG oslo_concurrency.processutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:46:34 compute-0 nova_compute[254092]: 2025-11-25 16:46:34.636 254096 DEBUG nova.virt.libvirt.vif [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:46:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1585503050',display_name='tempest-ServerActionsTestOtherA-server-1585503050',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1585503050',id=84,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d4964e211a6d4699ab499f7cadee8a8d',ramdisk_id='',reservation_id='r-2d3os2i1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-878981139',owner_user_name='tempest-ServerActionsTestOtherA-878981139-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:46:28Z,user_data=None,user_id='7bd4800c25cd462b9365649e599d0a0e',uuid=7bf5d985-1a7f-41b6-8002-b801999e99f5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "522caf03-4901-44aa-ba29-8d9b37be1158", "address": "fa:16:3e:fd:24:70", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap522caf03-49", "ovs_interfaceid": "522caf03-4901-44aa-ba29-8d9b37be1158", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:46:34 compute-0 nova_compute[254092]: 2025-11-25 16:46:34.636 254096 DEBUG nova.network.os_vif_util [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Converting VIF {"id": "522caf03-4901-44aa-ba29-8d9b37be1158", "address": "fa:16:3e:fd:24:70", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap522caf03-49", "ovs_interfaceid": "522caf03-4901-44aa-ba29-8d9b37be1158", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:46:34 compute-0 nova_compute[254092]: 2025-11-25 16:46:34.638 254096 DEBUG nova.network.os_vif_util [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fd:24:70,bridge_name='br-int',has_traffic_filtering=True,id=522caf03-4901-44aa-ba29-8d9b37be1158,network=Network(290484fa-908f-44de-87e4-4f5bc85c5679),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap522caf03-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:46:34 compute-0 nova_compute[254092]: 2025-11-25 16:46:34.640 254096 DEBUG nova.objects.instance [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lazy-loading 'pci_devices' on Instance uuid 7bf5d985-1a7f-41b6-8002-b801999e99f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:46:34 compute-0 nova_compute[254092]: 2025-11-25 16:46:34.656 254096 DEBUG nova.virt.libvirt.driver [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:46:34 compute-0 nova_compute[254092]:   <uuid>7bf5d985-1a7f-41b6-8002-b801999e99f5</uuid>
Nov 25 16:46:34 compute-0 nova_compute[254092]:   <name>instance-00000054</name>
Nov 25 16:46:34 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:46:34 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:46:34 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:46:34 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:       <nova:name>tempest-ServerActionsTestOtherA-server-1585503050</nova:name>
Nov 25 16:46:34 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:46:33</nova:creationTime>
Nov 25 16:46:34 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:46:34 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:46:34 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:46:34 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:46:34 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:46:34 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:46:34 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:46:34 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:46:34 compute-0 nova_compute[254092]:         <nova:user uuid="7bd4800c25cd462b9365649e599d0a0e">tempest-ServerActionsTestOtherA-878981139-project-member</nova:user>
Nov 25 16:46:34 compute-0 nova_compute[254092]:         <nova:project uuid="d4964e211a6d4699ab499f7cadee8a8d">tempest-ServerActionsTestOtherA-878981139</nova:project>
Nov 25 16:46:34 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:46:34 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:46:34 compute-0 nova_compute[254092]:         <nova:port uuid="522caf03-4901-44aa-ba29-8d9b37be1158">
Nov 25 16:46:34 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:46:34 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:46:34 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:46:34 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:46:34 compute-0 nova_compute[254092]:     <system>
Nov 25 16:46:34 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:46:34 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:46:34 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:46:34 compute-0 nova_compute[254092]:       <entry name="serial">7bf5d985-1a7f-41b6-8002-b801999e99f5</entry>
Nov 25 16:46:34 compute-0 nova_compute[254092]:       <entry name="uuid">7bf5d985-1a7f-41b6-8002-b801999e99f5</entry>
Nov 25 16:46:34 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     </system>
Nov 25 16:46:34 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:46:34 compute-0 nova_compute[254092]:   <os>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:   </os>
Nov 25 16:46:34 compute-0 nova_compute[254092]:   <features>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:   </features>
Nov 25 16:46:34 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:46:34 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:46:34 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:46:34 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:46:34 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:46:34 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/7bf5d985-1a7f-41b6-8002-b801999e99f5_disk">
Nov 25 16:46:34 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:       </source>
Nov 25 16:46:34 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:46:34 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:46:34 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:46:34 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/7bf5d985-1a7f-41b6-8002-b801999e99f5_disk.config">
Nov 25 16:46:34 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:       </source>
Nov 25 16:46:34 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:46:34 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:46:34 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:46:34 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:fd:24:70"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:       <target dev="tap522caf03-49"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:46:34 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/7bf5d985-1a7f-41b6-8002-b801999e99f5/console.log" append="off"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     <video>
Nov 25 16:46:34 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     </video>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:46:34 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:46:34 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:46:34 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:46:34 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:46:34 compute-0 nova_compute[254092]: </domain>
Nov 25 16:46:34 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:46:34 compute-0 nova_compute[254092]: 2025-11-25 16:46:34.659 254096 DEBUG nova.compute.manager [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Preparing to wait for external event network-vif-plugged-522caf03-4901-44aa-ba29-8d9b37be1158 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:46:34 compute-0 nova_compute[254092]: 2025-11-25 16:46:34.659 254096 DEBUG oslo_concurrency.lockutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Acquiring lock "7bf5d985-1a7f-41b6-8002-b801999e99f5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:34 compute-0 nova_compute[254092]: 2025-11-25 16:46:34.660 254096 DEBUG oslo_concurrency.lockutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "7bf5d985-1a7f-41b6-8002-b801999e99f5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:34 compute-0 nova_compute[254092]: 2025-11-25 16:46:34.660 254096 DEBUG oslo_concurrency.lockutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "7bf5d985-1a7f-41b6-8002-b801999e99f5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:34 compute-0 nova_compute[254092]: 2025-11-25 16:46:34.662 254096 DEBUG nova.virt.libvirt.vif [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:46:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1585503050',display_name='tempest-ServerActionsTestOtherA-server-1585503050',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1585503050',id=84,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d4964e211a6d4699ab499f7cadee8a8d',ramdisk_id='',reservation_id='r-2d3os2i1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-878981139',owner_user_name='tempest-ServerActionsTestOtherA-878981139-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:46:28Z,user_data=None,user_id='7bd4800c25cd462b9365649e599d0a0e',uuid=7bf5d985-1a7f-41b6-8002-b801999e99f5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "522caf03-4901-44aa-ba29-8d9b37be1158", "address": "fa:16:3e:fd:24:70", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap522caf03-49", "ovs_interfaceid": "522caf03-4901-44aa-ba29-8d9b37be1158", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:46:34 compute-0 nova_compute[254092]: 2025-11-25 16:46:34.663 254096 DEBUG nova.network.os_vif_util [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Converting VIF {"id": "522caf03-4901-44aa-ba29-8d9b37be1158", "address": "fa:16:3e:fd:24:70", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap522caf03-49", "ovs_interfaceid": "522caf03-4901-44aa-ba29-8d9b37be1158", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:46:34 compute-0 nova_compute[254092]: 2025-11-25 16:46:34.664 254096 DEBUG nova.network.os_vif_util [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fd:24:70,bridge_name='br-int',has_traffic_filtering=True,id=522caf03-4901-44aa-ba29-8d9b37be1158,network=Network(290484fa-908f-44de-87e4-4f5bc85c5679),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap522caf03-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:46:34 compute-0 nova_compute[254092]: 2025-11-25 16:46:34.665 254096 DEBUG os_vif [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fd:24:70,bridge_name='br-int',has_traffic_filtering=True,id=522caf03-4901-44aa-ba29-8d9b37be1158,network=Network(290484fa-908f-44de-87e4-4f5bc85c5679),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap522caf03-49') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:46:34 compute-0 nova_compute[254092]: 2025-11-25 16:46:34.667 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:34 compute-0 nova_compute[254092]: 2025-11-25 16:46:34.668 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:34 compute-0 nova_compute[254092]: 2025-11-25 16:46:34.669 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:46:34 compute-0 nova_compute[254092]: 2025-11-25 16:46:34.676 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:34 compute-0 nova_compute[254092]: 2025-11-25 16:46:34.677 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap522caf03-49, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:34 compute-0 nova_compute[254092]: 2025-11-25 16:46:34.679 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap522caf03-49, col_values=(('external_ids', {'iface-id': '522caf03-4901-44aa-ba29-8d9b37be1158', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fd:24:70', 'vm-uuid': '7bf5d985-1a7f-41b6-8002-b801999e99f5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:34 compute-0 nova_compute[254092]: 2025-11-25 16:46:34.682 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:34 compute-0 NetworkManager[48891]: <info>  [1764089194.6832] manager: (tap522caf03-49): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/352)
Nov 25 16:46:34 compute-0 nova_compute[254092]: 2025-11-25 16:46:34.686 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:46:34 compute-0 nova_compute[254092]: 2025-11-25 16:46:34.689 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:34 compute-0 nova_compute[254092]: 2025-11-25 16:46:34.692 254096 INFO os_vif [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fd:24:70,bridge_name='br-int',has_traffic_filtering=True,id=522caf03-4901-44aa-ba29-8d9b37be1158,network=Network(290484fa-908f-44de-87e4-4f5bc85c5679),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap522caf03-49')
Nov 25 16:46:34 compute-0 nova_compute[254092]: 2025-11-25 16:46:34.754 254096 DEBUG nova.virt.libvirt.driver [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:46:34 compute-0 nova_compute[254092]: 2025-11-25 16:46:34.755 254096 DEBUG nova.virt.libvirt.driver [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:46:34 compute-0 nova_compute[254092]: 2025-11-25 16:46:34.755 254096 DEBUG nova.virt.libvirt.driver [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] No VIF found with MAC fa:16:3e:fd:24:70, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:46:34 compute-0 nova_compute[254092]: 2025-11-25 16:46:34.756 254096 INFO nova.virt.libvirt.driver [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Using config drive
Nov 25 16:46:34 compute-0 nova_compute[254092]: 2025-11-25 16:46:34.777 254096 DEBUG nova.storage.rbd_utils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] rbd image 7bf5d985-1a7f-41b6-8002-b801999e99f5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:46:34 compute-0 nova_compute[254092]: 2025-11-25 16:46:34.827 254096 DEBUG nova.network.neutron [req-e8454537-53d0-4c3c-b96a-e12e43471dc3 req-78065d80-6ba7-4c99-95a8-cf68c28630b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Updated VIF entry in instance network info cache for port 522caf03-4901-44aa-ba29-8d9b37be1158. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:46:34 compute-0 nova_compute[254092]: 2025-11-25 16:46:34.828 254096 DEBUG nova.network.neutron [req-e8454537-53d0-4c3c-b96a-e12e43471dc3 req-78065d80-6ba7-4c99-95a8-cf68c28630b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Updating instance_info_cache with network_info: [{"id": "522caf03-4901-44aa-ba29-8d9b37be1158", "address": "fa:16:3e:fd:24:70", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap522caf03-49", "ovs_interfaceid": "522caf03-4901-44aa-ba29-8d9b37be1158", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:46:34 compute-0 nova_compute[254092]: 2025-11-25 16:46:34.843 254096 DEBUG oslo_concurrency.lockutils [req-e8454537-53d0-4c3c-b96a-e12e43471dc3 req-78065d80-6ba7-4c99-95a8-cf68c28630b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-7bf5d985-1a7f-41b6-8002-b801999e99f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:46:34 compute-0 ceph-mon[74985]: pgmap v1835: 321 pgs: 321 active+clean; 289 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.6 MiB/s wr, 168 op/s
Nov 25 16:46:34 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2625772133' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:46:34 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/972452691' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:46:35 compute-0 nova_compute[254092]: 2025-11-25 16:46:35.027 254096 DEBUG nova.network.neutron [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Successfully created port: 00bc7e19-5b5c-41aa-a9e9-4de7574a63eb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:46:35 compute-0 nova_compute[254092]: 2025-11-25 16:46:35.280 254096 INFO nova.virt.libvirt.driver [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Creating config drive at /var/lib/nova/instances/7bf5d985-1a7f-41b6-8002-b801999e99f5/disk.config
Nov 25 16:46:35 compute-0 nova_compute[254092]: 2025-11-25 16:46:35.286 254096 DEBUG oslo_concurrency.processutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7bf5d985-1a7f-41b6-8002-b801999e99f5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwes829p_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:46:35 compute-0 nova_compute[254092]: 2025-11-25 16:46:35.425 254096 DEBUG oslo_concurrency.processutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7bf5d985-1a7f-41b6-8002-b801999e99f5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwes829p_" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:46:35 compute-0 nova_compute[254092]: 2025-11-25 16:46:35.451 254096 DEBUG nova.storage.rbd_utils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] rbd image 7bf5d985-1a7f-41b6-8002-b801999e99f5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:46:35 compute-0 nova_compute[254092]: 2025-11-25 16:46:35.455 254096 DEBUG oslo_concurrency.processutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7bf5d985-1a7f-41b6-8002-b801999e99f5/disk.config 7bf5d985-1a7f-41b6-8002-b801999e99f5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:46:35 compute-0 nova_compute[254092]: 2025-11-25 16:46:35.504 254096 DEBUG oslo_concurrency.lockutils [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:35 compute-0 nova_compute[254092]: 2025-11-25 16:46:35.505 254096 DEBUG oslo_concurrency.lockutils [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:35 compute-0 nova_compute[254092]: 2025-11-25 16:46:35.505 254096 DEBUG oslo_concurrency.lockutils [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:35 compute-0 nova_compute[254092]: 2025-11-25 16:46:35.505 254096 DEBUG oslo_concurrency.lockutils [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:35 compute-0 nova_compute[254092]: 2025-11-25 16:46:35.506 254096 DEBUG oslo_concurrency.lockutils [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:35 compute-0 nova_compute[254092]: 2025-11-25 16:46:35.508 254096 INFO nova.compute.manager [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Terminating instance
Nov 25 16:46:35 compute-0 nova_compute[254092]: 2025-11-25 16:46:35.509 254096 DEBUG nova.compute.manager [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:46:35 compute-0 kernel: tap4fe8c3a9-70 (unregistering): left promiscuous mode
Nov 25 16:46:35 compute-0 NetworkManager[48891]: <info>  [1764089195.5565] device (tap4fe8c3a9-70): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:46:35 compute-0 ovn_controller[153477]: 2025-11-25T16:46:35Z|00816|binding|INFO|Releasing lport 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 from this chassis (sb_readonly=0)
Nov 25 16:46:35 compute-0 ovn_controller[153477]: 2025-11-25T16:46:35Z|00817|binding|INFO|Setting lport 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 down in Southbound
Nov 25 16:46:35 compute-0 ovn_controller[153477]: 2025-11-25T16:46:35Z|00818|binding|INFO|Removing iface tap4fe8c3a9-70 ovn-installed in OVS
Nov 25 16:46:35 compute-0 nova_compute[254092]: 2025-11-25 16:46:35.570 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:35.577 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:a0:1c 10.100.0.11'], port_security=['fa:16:3e:ff:a0:1c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '01f96314-1fbe-4eee-a4ed-db7f448a5320', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4d15f5aabd3491da5314b126a20225a', 'neutron:revision_number': '13', 'neutron:security_group_ids': 'f659b7e6-4ca9-46df-a38f-d29c92b67f9f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.187'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aa1f8c26-9163-4397-beb3-8395c780a12b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:46:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:35.579 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 in datapath 50ea1716-9d0b-409f-b648-8d2ad5a30525 unbound from our chassis
Nov 25 16:46:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:35.582 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 50ea1716-9d0b-409f-b648-8d2ad5a30525, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:46:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:35.583 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a1dbd85d-1686-4de3-926a-9c3f49b6b0d9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:35.585 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525 namespace which is not needed anymore
Nov 25 16:46:35 compute-0 nova_compute[254092]: 2025-11-25 16:46:35.588 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:35 compute-0 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d00000048.scope: Deactivated successfully.
Nov 25 16:46:35 compute-0 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d00000048.scope: Consumed 3.519s CPU time.
Nov 25 16:46:35 compute-0 systemd-machined[216343]: Machine qemu-104-instance-00000048 terminated.
Nov 25 16:46:35 compute-0 nova_compute[254092]: 2025-11-25 16:46:35.649 254096 DEBUG oslo_concurrency.processutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7bf5d985-1a7f-41b6-8002-b801999e99f5/disk.config 7bf5d985-1a7f-41b6-8002-b801999e99f5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:46:35 compute-0 nova_compute[254092]: 2025-11-25 16:46:35.650 254096 INFO nova.virt.libvirt.driver [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Deleting local config drive /var/lib/nova/instances/7bf5d985-1a7f-41b6-8002-b801999e99f5/disk.config because it was imported into RBD.
Nov 25 16:46:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1836: 321 pgs: 321 active+clean; 327 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.3 MiB/s wr, 189 op/s
Nov 25 16:46:35 compute-0 systemd-udevd[342815]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:46:35 compute-0 kernel: tap522caf03-49: entered promiscuous mode
Nov 25 16:46:35 compute-0 NetworkManager[48891]: <info>  [1764089195.7350] manager: (tap522caf03-49): new Tun device (/org/freedesktop/NetworkManager/Devices/353)
Nov 25 16:46:35 compute-0 ovn_controller[153477]: 2025-11-25T16:46:35Z|00819|binding|INFO|Claiming lport 522caf03-4901-44aa-ba29-8d9b37be1158 for this chassis.
Nov 25 16:46:35 compute-0 ovn_controller[153477]: 2025-11-25T16:46:35Z|00820|binding|INFO|522caf03-4901-44aa-ba29-8d9b37be1158: Claiming fa:16:3e:fd:24:70 10.100.0.3
Nov 25 16:46:35 compute-0 nova_compute[254092]: 2025-11-25 16:46:35.737 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:35.746 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fd:24:70 10.100.0.3'], port_security=['fa:16:3e:fd:24:70 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '7bf5d985-1a7f-41b6-8002-b801999e99f5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-290484fa-908f-44de-87e4-4f5bc85c5679', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4964e211a6d4699ab499f7cadee8a8d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '168dc57b-72e8-4bf9-9fa6-0f910875b8fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=041a286f-fdb2-474d-8a57-f5c3168624f1, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=522caf03-4901-44aa-ba29-8d9b37be1158) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:46:35 compute-0 NetworkManager[48891]: <info>  [1764089195.7562] device (tap522caf03-49): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:46:35 compute-0 NetworkManager[48891]: <info>  [1764089195.7577] device (tap522caf03-49): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:46:35 compute-0 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[342480]: [NOTICE]   (342488) : haproxy version is 2.8.14-c23fe91
Nov 25 16:46:35 compute-0 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[342480]: [NOTICE]   (342488) : path to executable is /usr/sbin/haproxy
Nov 25 16:46:35 compute-0 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[342480]: [WARNING]  (342488) : Exiting Master process...
Nov 25 16:46:35 compute-0 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[342480]: [ALERT]    (342488) : Current worker (342490) exited with code 143 (Terminated)
Nov 25 16:46:35 compute-0 neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525[342480]: [WARNING]  (342488) : All workers exited. Exiting... (0)
Nov 25 16:46:35 compute-0 ovn_controller[153477]: 2025-11-25T16:46:35Z|00821|binding|INFO|Setting lport 522caf03-4901-44aa-ba29-8d9b37be1158 ovn-installed in OVS
Nov 25 16:46:35 compute-0 ovn_controller[153477]: 2025-11-25T16:46:35Z|00822|binding|INFO|Setting lport 522caf03-4901-44aa-ba29-8d9b37be1158 up in Southbound
Nov 25 16:46:35 compute-0 nova_compute[254092]: 2025-11-25 16:46:35.767 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:35 compute-0 systemd[1]: libpod-0291418efc19617d6cf2bae37cd5a3eadce9477f1cb5e8e728832017d5e432ff.scope: Deactivated successfully.
Nov 25 16:46:35 compute-0 podman[342838]: 2025-11-25 16:46:35.774285221 +0000 UTC m=+0.065220854 container died 0291418efc19617d6cf2bae37cd5a3eadce9477f1cb5e8e728832017d5e432ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 16:46:35 compute-0 nova_compute[254092]: 2025-11-25 16:46:35.774 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:35 compute-0 nova_compute[254092]: 2025-11-25 16:46:35.792 254096 INFO nova.virt.libvirt.driver [-] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Instance destroyed successfully.
Nov 25 16:46:35 compute-0 nova_compute[254092]: 2025-11-25 16:46:35.793 254096 DEBUG nova.objects.instance [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lazy-loading 'resources' on Instance uuid 01f96314-1fbe-4eee-a4ed-db7f448a5320 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:46:35 compute-0 systemd-machined[216343]: New machine qemu-105-instance-00000054.
Nov 25 16:46:35 compute-0 nova_compute[254092]: 2025-11-25 16:46:35.804 254096 DEBUG nova.virt.libvirt.vif [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:42:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1951123052',display_name='tempest-ServerActionsTestJSON-server-1951123052',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1951123052',id=72,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOul8HeAt+UIxCfJfn+60t4YpZV7iSL4LJDORWr6iCR1ov+YI0semWU/ei1rpSzmkTyhB+zsYhvO/JrM/v0aowbnChSU76J6tWISFPQqrNakhVmsB30NDWBI6kYIFzj8+Q==',key_name='tempest-keypair-1452948426',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:43:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b4d15f5aabd3491da5314b126a20225a',ramdisk_id='',reservation_id='r-gjhb48uk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1811453217',owner_user_name='tempest-ServerActionsTestJSON-1811453217-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:46:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aea2d8cf3bb54cdbbc72e41805fb1f90',uuid=01f96314-1fbe-4eee-a4ed-db7f448a5320,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:46:35 compute-0 nova_compute[254092]: 2025-11-25 16:46:35.804 254096 DEBUG nova.network.os_vif_util [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converting VIF {"id": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "address": "fa:16:3e:ff:a0:1c", "network": {"id": "50ea1716-9d0b-409f-b648-8d2ad5a30525", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-990943012-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d15f5aabd3491da5314b126a20225a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fe8c3a9-70", "ovs_interfaceid": "4fe8c3a9-70ba-4a51-8bc1-1505a49fe349", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:46:35 compute-0 nova_compute[254092]: 2025-11-25 16:46:35.805 254096 DEBUG nova.network.os_vif_util [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:46:35 compute-0 nova_compute[254092]: 2025-11-25 16:46:35.806 254096 DEBUG os_vif [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:46:35 compute-0 nova_compute[254092]: 2025-11-25 16:46:35.808 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:35 compute-0 nova_compute[254092]: 2025-11-25 16:46:35.808 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4fe8c3a9-70, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:35 compute-0 systemd[1]: Started Virtual Machine qemu-105-instance-00000054.
Nov 25 16:46:35 compute-0 nova_compute[254092]: 2025-11-25 16:46:35.811 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:46:35 compute-0 nova_compute[254092]: 2025-11-25 16:46:35.812 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:35 compute-0 nova_compute[254092]: 2025-11-25 16:46:35.814 254096 INFO os_vif [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:a0:1c,bridge_name='br-int',has_traffic_filtering=True,id=4fe8c3a9-70ba-4a51-8bc1-1505a49fe349,network=Network(50ea1716-9d0b-409f-b648-8d2ad5a30525),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fe8c3a9-70')
Nov 25 16:46:35 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0291418efc19617d6cf2bae37cd5a3eadce9477f1cb5e8e728832017d5e432ff-userdata-shm.mount: Deactivated successfully.
Nov 25 16:46:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-3cdd06ef15be99e9f8eefd5df555fdd7f637aaafdee50048abe0ba92363351e7-merged.mount: Deactivated successfully.
Nov 25 16:46:35 compute-0 podman[342838]: 2025-11-25 16:46:35.841726054 +0000 UTC m=+0.132661687 container cleanup 0291418efc19617d6cf2bae37cd5a3eadce9477f1cb5e8e728832017d5e432ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:46:35 compute-0 systemd[1]: libpod-conmon-0291418efc19617d6cf2bae37cd5a3eadce9477f1cb5e8e728832017d5e432ff.scope: Deactivated successfully.
Nov 25 16:46:35 compute-0 podman[342904]: 2025-11-25 16:46:35.954548089 +0000 UTC m=+0.084704702 container remove 0291418efc19617d6cf2bae37cd5a3eadce9477f1cb5e8e728832017d5e432ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 16:46:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:35.961 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9b330f57-a06d-4e09-b4ce-4783bc2021a5]: (4, ('Tue Nov 25 04:46:35 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525 (0291418efc19617d6cf2bae37cd5a3eadce9477f1cb5e8e728832017d5e432ff)\n0291418efc19617d6cf2bae37cd5a3eadce9477f1cb5e8e728832017d5e432ff\nTue Nov 25 04:46:35 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525 (0291418efc19617d6cf2bae37cd5a3eadce9477f1cb5e8e728832017d5e432ff)\n0291418efc19617d6cf2bae37cd5a3eadce9477f1cb5e8e728832017d5e432ff\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:35.963 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[380846b1-f40c-48cd-8786-3a008c0117c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:35.965 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50ea1716-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:35 compute-0 nova_compute[254092]: 2025-11-25 16:46:35.967 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:35 compute-0 kernel: tap50ea1716-90: left promiscuous mode
Nov 25 16:46:35 compute-0 nova_compute[254092]: 2025-11-25 16:46:35.985 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:35.989 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0e9f52ef-aa1e-4298-bfee-bc8ea25f8f2b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:36.004 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6c95f51c-3b5f-4a69-bb00-3dbb4a37f5d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:36.005 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1ec5ad17-5abb-45ef-a81e-fd2af2f49315]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:36.021 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[45433fe3-d820-421e-95ef-1a84c41e567f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 564930, 'reachable_time': 32712, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342927, 'error': None, 'target': 'ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:36 compute-0 systemd[1]: run-netns-ovnmeta\x2d50ea1716\x2d9d0b\x2d409f\x2db648\x2d8d2ad5a30525.mount: Deactivated successfully.
Nov 25 16:46:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:36.027 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-50ea1716-9d0b-409f-b648-8d2ad5a30525 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:46:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:36.027 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[73391ff5-d6dc-440d-b390-9aba910c3221]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:36.028 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 522caf03-4901-44aa-ba29-8d9b37be1158 in datapath 290484fa-908f-44de-87e4-4f5bc85c5679 unbound from our chassis
Nov 25 16:46:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:36.029 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 290484fa-908f-44de-87e4-4f5bc85c5679
Nov 25 16:46:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:36.055 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4c110132-193c-4f3a-a26e-5d1a0a53b6cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:36.094 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[24f3ef63-7c60-453d-b325-0ebbb0ccba3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:36.098 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[66eb7d95-fa69-4212-b5f8-712a78c431ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:46:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:36.129 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9cd715b8-d575-4ba0-a433-ea6f93e37ea7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:36.148 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9694f9f8-eb70-4fb6-b02d-8dd68434a712]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap290484fa-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:a3:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 221], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554060, 'reachable_time': 26301, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342934, 'error': None, 'target': 'ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:36.165 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[768a9934-d47b-48f4-aca4-b2b8ec2b0781]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap290484fa-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 554073, 'tstamp': 554073}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342935, 'error': None, 'target': 'ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap290484fa-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 554076, 'tstamp': 554076}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342935, 'error': None, 'target': 'ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:36.168 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap290484fa-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:36 compute-0 nova_compute[254092]: 2025-11-25 16:46:36.168 254096 DEBUG nova.compute.manager [req-35f2ff8c-a7d4-404b-be24-85ab1bd671cf req-e2882fb4-451d-483a-bc07-3ed2f1bb5789 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Received event network-vif-plugged-522caf03-4901-44aa-ba29-8d9b37be1158 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:46:36 compute-0 nova_compute[254092]: 2025-11-25 16:46:36.169 254096 DEBUG oslo_concurrency.lockutils [req-35f2ff8c-a7d4-404b-be24-85ab1bd671cf req-e2882fb4-451d-483a-bc07-3ed2f1bb5789 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7bf5d985-1a7f-41b6-8002-b801999e99f5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:36 compute-0 nova_compute[254092]: 2025-11-25 16:46:36.171 254096 DEBUG oslo_concurrency.lockutils [req-35f2ff8c-a7d4-404b-be24-85ab1bd671cf req-e2882fb4-451d-483a-bc07-3ed2f1bb5789 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7bf5d985-1a7f-41b6-8002-b801999e99f5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:36 compute-0 nova_compute[254092]: 2025-11-25 16:46:36.171 254096 DEBUG oslo_concurrency.lockutils [req-35f2ff8c-a7d4-404b-be24-85ab1bd671cf req-e2882fb4-451d-483a-bc07-3ed2f1bb5789 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7bf5d985-1a7f-41b6-8002-b801999e99f5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:36 compute-0 nova_compute[254092]: 2025-11-25 16:46:36.171 254096 DEBUG nova.compute.manager [req-35f2ff8c-a7d4-404b-be24-85ab1bd671cf req-e2882fb4-451d-483a-bc07-3ed2f1bb5789 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Processing event network-vif-plugged-522caf03-4901-44aa-ba29-8d9b37be1158 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:46:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:36.172 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap290484fa-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:36 compute-0 nova_compute[254092]: 2025-11-25 16:46:36.172 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:36.172 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:46:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:36.173 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap290484fa-90, col_values=(('external_ids', {'iface-id': 'db192ec3-55c1-4137-aaad-99a175bfa879'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:36.173 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:46:36 compute-0 nova_compute[254092]: 2025-11-25 16:46:36.229 254096 DEBUG nova.compute.manager [req-06b8454c-5dd1-4380-89b5-6470bb8ac0f3 req-b094601b-f3c3-4400-bcaf-13595d5d4ea4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:46:36 compute-0 nova_compute[254092]: 2025-11-25 16:46:36.229 254096 DEBUG oslo_concurrency.lockutils [req-06b8454c-5dd1-4380-89b5-6470bb8ac0f3 req-b094601b-f3c3-4400-bcaf-13595d5d4ea4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:36 compute-0 nova_compute[254092]: 2025-11-25 16:46:36.229 254096 DEBUG oslo_concurrency.lockutils [req-06b8454c-5dd1-4380-89b5-6470bb8ac0f3 req-b094601b-f3c3-4400-bcaf-13595d5d4ea4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:36 compute-0 nova_compute[254092]: 2025-11-25 16:46:36.230 254096 DEBUG oslo_concurrency.lockutils [req-06b8454c-5dd1-4380-89b5-6470bb8ac0f3 req-b094601b-f3c3-4400-bcaf-13595d5d4ea4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:36 compute-0 nova_compute[254092]: 2025-11-25 16:46:36.230 254096 DEBUG nova.compute.manager [req-06b8454c-5dd1-4380-89b5-6470bb8ac0f3 req-b094601b-f3c3-4400-bcaf-13595d5d4ea4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] No waiting events found dispatching network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:46:36 compute-0 nova_compute[254092]: 2025-11-25 16:46:36.230 254096 WARNING nova.compute.manager [req-06b8454c-5dd1-4380-89b5-6470bb8ac0f3 req-b094601b-f3c3-4400-bcaf-13595d5d4ea4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received unexpected event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 for instance with vm_state active and task_state deleting.
Nov 25 16:46:36 compute-0 nova_compute[254092]: 2025-11-25 16:46:36.289 254096 INFO nova.virt.libvirt.driver [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Deleting instance files /var/lib/nova/instances/01f96314-1fbe-4eee-a4ed-db7f448a5320_del
Nov 25 16:46:36 compute-0 nova_compute[254092]: 2025-11-25 16:46:36.291 254096 INFO nova.virt.libvirt.driver [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Deletion of /var/lib/nova/instances/01f96314-1fbe-4eee-a4ed-db7f448a5320_del complete
Nov 25 16:46:36 compute-0 nova_compute[254092]: 2025-11-25 16:46:36.340 254096 INFO nova.compute.manager [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Took 0.83 seconds to destroy the instance on the hypervisor.
Nov 25 16:46:36 compute-0 nova_compute[254092]: 2025-11-25 16:46:36.341 254096 DEBUG oslo.service.loopingcall [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:46:36 compute-0 nova_compute[254092]: 2025-11-25 16:46:36.341 254096 DEBUG nova.compute.manager [-] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:46:36 compute-0 nova_compute[254092]: 2025-11-25 16:46:36.341 254096 DEBUG nova.network.neutron [-] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:46:36 compute-0 nova_compute[254092]: 2025-11-25 16:46:36.370 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089196.3703105, 7bf5d985-1a7f-41b6-8002-b801999e99f5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:46:36 compute-0 nova_compute[254092]: 2025-11-25 16:46:36.371 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] VM Started (Lifecycle Event)
Nov 25 16:46:36 compute-0 nova_compute[254092]: 2025-11-25 16:46:36.373 254096 DEBUG nova.compute.manager [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:46:36 compute-0 nova_compute[254092]: 2025-11-25 16:46:36.376 254096 DEBUG nova.virt.libvirt.driver [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:46:36 compute-0 nova_compute[254092]: 2025-11-25 16:46:36.379 254096 INFO nova.virt.libvirt.driver [-] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Instance spawned successfully.
Nov 25 16:46:36 compute-0 nova_compute[254092]: 2025-11-25 16:46:36.379 254096 DEBUG nova.virt.libvirt.driver [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:46:36 compute-0 nova_compute[254092]: 2025-11-25 16:46:36.394 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:46:36 compute-0 nova_compute[254092]: 2025-11-25 16:46:36.401 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:46:36 compute-0 nova_compute[254092]: 2025-11-25 16:46:36.404 254096 DEBUG nova.virt.libvirt.driver [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:46:36 compute-0 nova_compute[254092]: 2025-11-25 16:46:36.404 254096 DEBUG nova.virt.libvirt.driver [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:46:36 compute-0 nova_compute[254092]: 2025-11-25 16:46:36.405 254096 DEBUG nova.virt.libvirt.driver [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:46:36 compute-0 nova_compute[254092]: 2025-11-25 16:46:36.405 254096 DEBUG nova.virt.libvirt.driver [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:46:36 compute-0 nova_compute[254092]: 2025-11-25 16:46:36.406 254096 DEBUG nova.virt.libvirt.driver [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:46:36 compute-0 nova_compute[254092]: 2025-11-25 16:46:36.406 254096 DEBUG nova.virt.libvirt.driver [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:46:36 compute-0 nova_compute[254092]: 2025-11-25 16:46:36.430 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:46:36 compute-0 nova_compute[254092]: 2025-11-25 16:46:36.431 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089196.3704488, 7bf5d985-1a7f-41b6-8002-b801999e99f5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:46:36 compute-0 nova_compute[254092]: 2025-11-25 16:46:36.431 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] VM Paused (Lifecycle Event)
Nov 25 16:46:36 compute-0 nova_compute[254092]: 2025-11-25 16:46:36.457 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:46:36 compute-0 nova_compute[254092]: 2025-11-25 16:46:36.461 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089196.3756955, 7bf5d985-1a7f-41b6-8002-b801999e99f5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:46:36 compute-0 nova_compute[254092]: 2025-11-25 16:46:36.462 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] VM Resumed (Lifecycle Event)
Nov 25 16:46:36 compute-0 nova_compute[254092]: 2025-11-25 16:46:36.466 254096 INFO nova.compute.manager [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Took 7.89 seconds to spawn the instance on the hypervisor.
Nov 25 16:46:36 compute-0 nova_compute[254092]: 2025-11-25 16:46:36.466 254096 DEBUG nova.compute.manager [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:46:36 compute-0 nova_compute[254092]: 2025-11-25 16:46:36.475 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:46:36 compute-0 nova_compute[254092]: 2025-11-25 16:46:36.479 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:46:36 compute-0 nova_compute[254092]: 2025-11-25 16:46:36.511 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:46:36 compute-0 nova_compute[254092]: 2025-11-25 16:46:36.546 254096 INFO nova.compute.manager [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Took 9.08 seconds to build instance.
Nov 25 16:46:36 compute-0 nova_compute[254092]: 2025-11-25 16:46:36.558 254096 DEBUG oslo_concurrency.lockutils [None req-9ee09d20-5d77-4fbd-849c-b5cef93e73d6 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "7bf5d985-1a7f-41b6-8002-b801999e99f5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.173s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:36 compute-0 ovn_controller[153477]: 2025-11-25T16:46:36Z|00089|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:08:0f:a2 10.100.0.7
Nov 25 16:46:36 compute-0 ovn_controller[153477]: 2025-11-25T16:46:36Z|00090|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:08:0f:a2 10.100.0.7
Nov 25 16:46:36 compute-0 ceph-mon[74985]: pgmap v1836: 321 pgs: 321 active+clean; 327 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.3 MiB/s wr, 189 op/s
Nov 25 16:46:37 compute-0 nova_compute[254092]: 2025-11-25 16:46:37.126 254096 DEBUG nova.network.neutron [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Successfully updated port: 00bc7e19-5b5c-41aa-a9e9-4de7574a63eb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:46:37 compute-0 nova_compute[254092]: 2025-11-25 16:46:37.142 254096 DEBUG oslo_concurrency.lockutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "refresh_cache-076182c5-e049-4b78-b5a0-64489e37776b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:46:37 compute-0 nova_compute[254092]: 2025-11-25 16:46:37.142 254096 DEBUG oslo_concurrency.lockutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquired lock "refresh_cache-076182c5-e049-4b78-b5a0-64489e37776b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:46:37 compute-0 nova_compute[254092]: 2025-11-25 16:46:37.142 254096 DEBUG nova.network.neutron [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:46:37 compute-0 nova_compute[254092]: 2025-11-25 16:46:37.445 254096 DEBUG nova.network.neutron [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:46:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1837: 321 pgs: 321 active+clean; 344 MiB data, 760 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.0 MiB/s wr, 136 op/s
Nov 25 16:46:38 compute-0 nova_compute[254092]: 2025-11-25 16:46:38.467 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:38 compute-0 nova_compute[254092]: 2025-11-25 16:46:38.800 254096 DEBUG nova.compute.manager [req-7aab12a5-e21d-48f1-9859-73fd19b1bfb4 req-f9ec39dd-1499-4c04-8616-fb83e55347ef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Received event network-vif-plugged-522caf03-4901-44aa-ba29-8d9b37be1158 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:46:38 compute-0 nova_compute[254092]: 2025-11-25 16:46:38.800 254096 DEBUG oslo_concurrency.lockutils [req-7aab12a5-e21d-48f1-9859-73fd19b1bfb4 req-f9ec39dd-1499-4c04-8616-fb83e55347ef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7bf5d985-1a7f-41b6-8002-b801999e99f5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:38 compute-0 nova_compute[254092]: 2025-11-25 16:46:38.801 254096 DEBUG oslo_concurrency.lockutils [req-7aab12a5-e21d-48f1-9859-73fd19b1bfb4 req-f9ec39dd-1499-4c04-8616-fb83e55347ef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7bf5d985-1a7f-41b6-8002-b801999e99f5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:38 compute-0 nova_compute[254092]: 2025-11-25 16:46:38.801 254096 DEBUG oslo_concurrency.lockutils [req-7aab12a5-e21d-48f1-9859-73fd19b1bfb4 req-f9ec39dd-1499-4c04-8616-fb83e55347ef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7bf5d985-1a7f-41b6-8002-b801999e99f5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:38 compute-0 nova_compute[254092]: 2025-11-25 16:46:38.801 254096 DEBUG nova.compute.manager [req-7aab12a5-e21d-48f1-9859-73fd19b1bfb4 req-f9ec39dd-1499-4c04-8616-fb83e55347ef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] No waiting events found dispatching network-vif-plugged-522caf03-4901-44aa-ba29-8d9b37be1158 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:46:38 compute-0 nova_compute[254092]: 2025-11-25 16:46:38.801 254096 WARNING nova.compute.manager [req-7aab12a5-e21d-48f1-9859-73fd19b1bfb4 req-f9ec39dd-1499-4c04-8616-fb83e55347ef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Received unexpected event network-vif-plugged-522caf03-4901-44aa-ba29-8d9b37be1158 for instance with vm_state active and task_state None.
Nov 25 16:46:38 compute-0 nova_compute[254092]: 2025-11-25 16:46:38.802 254096 DEBUG nova.compute.manager [req-7aab12a5-e21d-48f1-9859-73fd19b1bfb4 req-f9ec39dd-1499-4c04-8616-fb83e55347ef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Received event network-changed-00bc7e19-5b5c-41aa-a9e9-4de7574a63eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:46:38 compute-0 nova_compute[254092]: 2025-11-25 16:46:38.802 254096 DEBUG nova.compute.manager [req-7aab12a5-e21d-48f1-9859-73fd19b1bfb4 req-f9ec39dd-1499-4c04-8616-fb83e55347ef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Refreshing instance network info cache due to event network-changed-00bc7e19-5b5c-41aa-a9e9-4de7574a63eb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:46:38 compute-0 nova_compute[254092]: 2025-11-25 16:46:38.802 254096 DEBUG oslo_concurrency.lockutils [req-7aab12a5-e21d-48f1-9859-73fd19b1bfb4 req-f9ec39dd-1499-4c04-8616-fb83e55347ef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-076182c5-e049-4b78-b5a0-64489e37776b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:46:38 compute-0 ceph-mon[74985]: pgmap v1837: 321 pgs: 321 active+clean; 344 MiB data, 760 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.0 MiB/s wr, 136 op/s
Nov 25 16:46:38 compute-0 nova_compute[254092]: 2025-11-25 16:46:38.871 254096 DEBUG nova.compute.manager [req-8fa98d38-6c5d-4c93-b791-ebccbd8f4305 req-39889f0f-6308-46ff-8cd2-76d4fc49a6a5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:46:38 compute-0 nova_compute[254092]: 2025-11-25 16:46:38.872 254096 DEBUG oslo_concurrency.lockutils [req-8fa98d38-6c5d-4c93-b791-ebccbd8f4305 req-39889f0f-6308-46ff-8cd2-76d4fc49a6a5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:38 compute-0 nova_compute[254092]: 2025-11-25 16:46:38.872 254096 DEBUG oslo_concurrency.lockutils [req-8fa98d38-6c5d-4c93-b791-ebccbd8f4305 req-39889f0f-6308-46ff-8cd2-76d4fc49a6a5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:38 compute-0 nova_compute[254092]: 2025-11-25 16:46:38.873 254096 DEBUG oslo_concurrency.lockutils [req-8fa98d38-6c5d-4c93-b791-ebccbd8f4305 req-39889f0f-6308-46ff-8cd2-76d4fc49a6a5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:38 compute-0 nova_compute[254092]: 2025-11-25 16:46:38.873 254096 DEBUG nova.compute.manager [req-8fa98d38-6c5d-4c93-b791-ebccbd8f4305 req-39889f0f-6308-46ff-8cd2-76d4fc49a6a5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] No waiting events found dispatching network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:46:38 compute-0 nova_compute[254092]: 2025-11-25 16:46:38.873 254096 WARNING nova.compute.manager [req-8fa98d38-6c5d-4c93-b791-ebccbd8f4305 req-39889f0f-6308-46ff-8cd2-76d4fc49a6a5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received unexpected event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 for instance with vm_state active and task_state deleting.
Nov 25 16:46:38 compute-0 nova_compute[254092]: 2025-11-25 16:46:38.873 254096 DEBUG nova.compute.manager [req-8fa98d38-6c5d-4c93-b791-ebccbd8f4305 req-39889f0f-6308-46ff-8cd2-76d4fc49a6a5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received event network-vif-unplugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:46:38 compute-0 nova_compute[254092]: 2025-11-25 16:46:38.874 254096 DEBUG oslo_concurrency.lockutils [req-8fa98d38-6c5d-4c93-b791-ebccbd8f4305 req-39889f0f-6308-46ff-8cd2-76d4fc49a6a5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:38 compute-0 nova_compute[254092]: 2025-11-25 16:46:38.874 254096 DEBUG oslo_concurrency.lockutils [req-8fa98d38-6c5d-4c93-b791-ebccbd8f4305 req-39889f0f-6308-46ff-8cd2-76d4fc49a6a5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:38 compute-0 nova_compute[254092]: 2025-11-25 16:46:38.874 254096 DEBUG oslo_concurrency.lockutils [req-8fa98d38-6c5d-4c93-b791-ebccbd8f4305 req-39889f0f-6308-46ff-8cd2-76d4fc49a6a5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:38 compute-0 nova_compute[254092]: 2025-11-25 16:46:38.875 254096 DEBUG nova.compute.manager [req-8fa98d38-6c5d-4c93-b791-ebccbd8f4305 req-39889f0f-6308-46ff-8cd2-76d4fc49a6a5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] No waiting events found dispatching network-vif-unplugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:46:38 compute-0 nova_compute[254092]: 2025-11-25 16:46:38.875 254096 DEBUG nova.compute.manager [req-8fa98d38-6c5d-4c93-b791-ebccbd8f4305 req-39889f0f-6308-46ff-8cd2-76d4fc49a6a5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received event network-vif-unplugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:46:38 compute-0 nova_compute[254092]: 2025-11-25 16:46:38.875 254096 DEBUG nova.compute.manager [req-8fa98d38-6c5d-4c93-b791-ebccbd8f4305 req-39889f0f-6308-46ff-8cd2-76d4fc49a6a5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:46:38 compute-0 nova_compute[254092]: 2025-11-25 16:46:38.875 254096 DEBUG oslo_concurrency.lockutils [req-8fa98d38-6c5d-4c93-b791-ebccbd8f4305 req-39889f0f-6308-46ff-8cd2-76d4fc49a6a5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:38 compute-0 nova_compute[254092]: 2025-11-25 16:46:38.876 254096 DEBUG oslo_concurrency.lockutils [req-8fa98d38-6c5d-4c93-b791-ebccbd8f4305 req-39889f0f-6308-46ff-8cd2-76d4fc49a6a5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:38 compute-0 nova_compute[254092]: 2025-11-25 16:46:38.876 254096 DEBUG oslo_concurrency.lockutils [req-8fa98d38-6c5d-4c93-b791-ebccbd8f4305 req-39889f0f-6308-46ff-8cd2-76d4fc49a6a5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:38 compute-0 nova_compute[254092]: 2025-11-25 16:46:38.876 254096 DEBUG nova.compute.manager [req-8fa98d38-6c5d-4c93-b791-ebccbd8f4305 req-39889f0f-6308-46ff-8cd2-76d4fc49a6a5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] No waiting events found dispatching network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:46:38 compute-0 nova_compute[254092]: 2025-11-25 16:46:38.876 254096 WARNING nova.compute.manager [req-8fa98d38-6c5d-4c93-b791-ebccbd8f4305 req-39889f0f-6308-46ff-8cd2-76d4fc49a6a5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received unexpected event network-vif-plugged-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 for instance with vm_state active and task_state deleting.
Nov 25 16:46:39 compute-0 nova_compute[254092]: 2025-11-25 16:46:39.068 254096 DEBUG nova.network.neutron [-] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:46:39 compute-0 nova_compute[254092]: 2025-11-25 16:46:39.090 254096 INFO nova.compute.manager [-] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Took 2.75 seconds to deallocate network for instance.
Nov 25 16:46:39 compute-0 nova_compute[254092]: 2025-11-25 16:46:39.131 254096 DEBUG oslo_concurrency.lockutils [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:39 compute-0 nova_compute[254092]: 2025-11-25 16:46:39.131 254096 DEBUG oslo_concurrency.lockutils [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:39 compute-0 nova_compute[254092]: 2025-11-25 16:46:39.234 254096 DEBUG nova.network.neutron [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Updating instance_info_cache with network_info: [{"id": "00bc7e19-5b5c-41aa-a9e9-4de7574a63eb", "address": "fa:16:3e:35:b3:37", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00bc7e19-5b", "ovs_interfaceid": "00bc7e19-5b5c-41aa-a9e9-4de7574a63eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:46:39 compute-0 nova_compute[254092]: 2025-11-25 16:46:39.248 254096 DEBUG oslo_concurrency.lockutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Releasing lock "refresh_cache-076182c5-e049-4b78-b5a0-64489e37776b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:46:39 compute-0 nova_compute[254092]: 2025-11-25 16:46:39.248 254096 DEBUG nova.compute.manager [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Instance network_info: |[{"id": "00bc7e19-5b5c-41aa-a9e9-4de7574a63eb", "address": "fa:16:3e:35:b3:37", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00bc7e19-5b", "ovs_interfaceid": "00bc7e19-5b5c-41aa-a9e9-4de7574a63eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:46:39 compute-0 nova_compute[254092]: 2025-11-25 16:46:39.249 254096 DEBUG oslo_concurrency.lockutils [req-7aab12a5-e21d-48f1-9859-73fd19b1bfb4 req-f9ec39dd-1499-4c04-8616-fb83e55347ef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-076182c5-e049-4b78-b5a0-64489e37776b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:46:39 compute-0 nova_compute[254092]: 2025-11-25 16:46:39.249 254096 DEBUG nova.network.neutron [req-7aab12a5-e21d-48f1-9859-73fd19b1bfb4 req-f9ec39dd-1499-4c04-8616-fb83e55347ef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Refreshing network info cache for port 00bc7e19-5b5c-41aa-a9e9-4de7574a63eb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:46:39 compute-0 nova_compute[254092]: 2025-11-25 16:46:39.255 254096 DEBUG nova.virt.libvirt.driver [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Start _get_guest_xml network_info=[{"id": "00bc7e19-5b5c-41aa-a9e9-4de7574a63eb", "address": "fa:16:3e:35:b3:37", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00bc7e19-5b", "ovs_interfaceid": "00bc7e19-5b5c-41aa-a9e9-4de7574a63eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:46:39 compute-0 nova_compute[254092]: 2025-11-25 16:46:39.264 254096 WARNING nova.virt.libvirt.driver [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:46:39 compute-0 nova_compute[254092]: 2025-11-25 16:46:39.273 254096 DEBUG nova.virt.libvirt.host [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:46:39 compute-0 nova_compute[254092]: 2025-11-25 16:46:39.274 254096 DEBUG nova.virt.libvirt.host [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:46:39 compute-0 nova_compute[254092]: 2025-11-25 16:46:39.276 254096 DEBUG oslo_concurrency.processutils [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:46:39 compute-0 nova_compute[254092]: 2025-11-25 16:46:39.316 254096 DEBUG nova.virt.libvirt.host [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:46:39 compute-0 nova_compute[254092]: 2025-11-25 16:46:39.317 254096 DEBUG nova.virt.libvirt.host [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:46:39 compute-0 nova_compute[254092]: 2025-11-25 16:46:39.318 254096 DEBUG nova.virt.libvirt.driver [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:46:39 compute-0 nova_compute[254092]: 2025-11-25 16:46:39.319 254096 DEBUG nova.virt.hardware [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:46:39 compute-0 nova_compute[254092]: 2025-11-25 16:46:39.320 254096 DEBUG nova.virt.hardware [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:46:39 compute-0 nova_compute[254092]: 2025-11-25 16:46:39.320 254096 DEBUG nova.virt.hardware [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:46:39 compute-0 nova_compute[254092]: 2025-11-25 16:46:39.321 254096 DEBUG nova.virt.hardware [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:46:39 compute-0 nova_compute[254092]: 2025-11-25 16:46:39.321 254096 DEBUG nova.virt.hardware [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:46:39 compute-0 nova_compute[254092]: 2025-11-25 16:46:39.321 254096 DEBUG nova.virt.hardware [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:46:39 compute-0 nova_compute[254092]: 2025-11-25 16:46:39.322 254096 DEBUG nova.virt.hardware [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:46:39 compute-0 nova_compute[254092]: 2025-11-25 16:46:39.322 254096 DEBUG nova.virt.hardware [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:46:39 compute-0 nova_compute[254092]: 2025-11-25 16:46:39.323 254096 DEBUG nova.virt.hardware [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:46:39 compute-0 nova_compute[254092]: 2025-11-25 16:46:39.323 254096 DEBUG nova.virt.hardware [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:46:39 compute-0 nova_compute[254092]: 2025-11-25 16:46:39.324 254096 DEBUG nova.virt.hardware [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:46:39 compute-0 nova_compute[254092]: 2025-11-25 16:46:39.330 254096 DEBUG oslo_concurrency.processutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:46:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:46:39 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4246613540' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:46:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1838: 321 pgs: 321 active+clean; 344 MiB data, 760 MiB used, 59 GiB / 60 GiB avail; 189 KiB/s rd, 3.9 MiB/s wr, 69 op/s
Nov 25 16:46:39 compute-0 nova_compute[254092]: 2025-11-25 16:46:39.740 254096 DEBUG oslo_concurrency.processutils [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:46:39 compute-0 nova_compute[254092]: 2025-11-25 16:46:39.746 254096 DEBUG nova.compute.provider_tree [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:46:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:46:39 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/614024467' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:46:39 compute-0 nova_compute[254092]: 2025-11-25 16:46:39.762 254096 DEBUG nova.scheduler.client.report [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:46:39 compute-0 nova_compute[254092]: 2025-11-25 16:46:39.773 254096 DEBUG oslo_concurrency.processutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:46:39 compute-0 nova_compute[254092]: 2025-11-25 16:46:39.796 254096 DEBUG nova.storage.rbd_utils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 076182c5-e049-4b78-b5a0-64489e37776b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:46:39 compute-0 nova_compute[254092]: 2025-11-25 16:46:39.800 254096 DEBUG oslo_concurrency.processutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:46:39 compute-0 nova_compute[254092]: 2025-11-25 16:46:39.833 254096 DEBUG oslo_concurrency.lockutils [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:39 compute-0 nova_compute[254092]: 2025-11-25 16:46:39.836 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:39 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4246613540' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:46:39 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/614024467' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:46:39 compute-0 nova_compute[254092]: 2025-11-25 16:46:39.890 254096 INFO nova.scheduler.client.report [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Deleted allocations for instance 01f96314-1fbe-4eee-a4ed-db7f448a5320
Nov 25 16:46:39 compute-0 nova_compute[254092]: 2025-11-25 16:46:39.939 254096 DEBUG oslo_concurrency.lockutils [None req-0f6c0b9d-747a-45a2-9da2-1606dd191b37 aea2d8cf3bb54cdbbc72e41805fb1f90 b4d15f5aabd3491da5314b126a20225a - - default default] Lock "01f96314-1fbe-4eee-a4ed-db7f448a5320" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.434s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:46:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:46:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:46:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:46:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:46:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:46:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:46:40
Nov 25 16:46:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:46:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:46:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', 'images', '.mgr', 'vms', 'backups', 'volumes', 'cephfs.cephfs.meta']
Nov 25 16:46:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:46:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:46:40 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/520203387' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:46:40 compute-0 nova_compute[254092]: 2025-11-25 16:46:40.242 254096 DEBUG oslo_concurrency.processutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:46:40 compute-0 nova_compute[254092]: 2025-11-25 16:46:40.243 254096 DEBUG nova.virt.libvirt.vif [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:46:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-765323280',display_name='tempest-ServersTestJSON-server-765323280',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-765323280',id=85,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56790f54314e4087abb5da8030b17666',ramdisk_id='',reservation_id='r-s5gihthd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1089675869',owner_user_name='tempest-ServersTestJSON-1089675869-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:46:33Z,user_data=None,user_id='7a137feb008c49e092cbb94106526835',uuid=076182c5-e049-4b78-b5a0-64489e37776b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "00bc7e19-5b5c-41aa-a9e9-4de7574a63eb", "address": "fa:16:3e:35:b3:37", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00bc7e19-5b", "ovs_interfaceid": "00bc7e19-5b5c-41aa-a9e9-4de7574a63eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:46:40 compute-0 nova_compute[254092]: 2025-11-25 16:46:40.244 254096 DEBUG nova.network.os_vif_util [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converting VIF {"id": "00bc7e19-5b5c-41aa-a9e9-4de7574a63eb", "address": "fa:16:3e:35:b3:37", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00bc7e19-5b", "ovs_interfaceid": "00bc7e19-5b5c-41aa-a9e9-4de7574a63eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:46:40 compute-0 nova_compute[254092]: 2025-11-25 16:46:40.244 254096 DEBUG nova.network.os_vif_util [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:35:b3:37,bridge_name='br-int',has_traffic_filtering=True,id=00bc7e19-5b5c-41aa-a9e9-4de7574a63eb,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00bc7e19-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:46:40 compute-0 nova_compute[254092]: 2025-11-25 16:46:40.245 254096 DEBUG nova.objects.instance [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lazy-loading 'pci_devices' on Instance uuid 076182c5-e049-4b78-b5a0-64489e37776b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:46:40 compute-0 nova_compute[254092]: 2025-11-25 16:46:40.257 254096 DEBUG nova.virt.libvirt.driver [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:46:40 compute-0 nova_compute[254092]:   <uuid>076182c5-e049-4b78-b5a0-64489e37776b</uuid>
Nov 25 16:46:40 compute-0 nova_compute[254092]:   <name>instance-00000055</name>
Nov 25 16:46:40 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:46:40 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:46:40 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:46:40 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:       <nova:name>tempest-ServersTestJSON-server-765323280</nova:name>
Nov 25 16:46:40 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:46:39</nova:creationTime>
Nov 25 16:46:40 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:46:40 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:46:40 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:46:40 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:46:40 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:46:40 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:46:40 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:46:40 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:46:40 compute-0 nova_compute[254092]:         <nova:user uuid="7a137feb008c49e092cbb94106526835">tempest-ServersTestJSON-1089675869-project-member</nova:user>
Nov 25 16:46:40 compute-0 nova_compute[254092]:         <nova:project uuid="56790f54314e4087abb5da8030b17666">tempest-ServersTestJSON-1089675869</nova:project>
Nov 25 16:46:40 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:46:40 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:46:40 compute-0 nova_compute[254092]:         <nova:port uuid="00bc7e19-5b5c-41aa-a9e9-4de7574a63eb">
Nov 25 16:46:40 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:46:40 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:46:40 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:46:40 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:46:40 compute-0 nova_compute[254092]:     <system>
Nov 25 16:46:40 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:46:40 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:46:40 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:46:40 compute-0 nova_compute[254092]:       <entry name="serial">076182c5-e049-4b78-b5a0-64489e37776b</entry>
Nov 25 16:46:40 compute-0 nova_compute[254092]:       <entry name="uuid">076182c5-e049-4b78-b5a0-64489e37776b</entry>
Nov 25 16:46:40 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     </system>
Nov 25 16:46:40 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:46:40 compute-0 nova_compute[254092]:   <os>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:   </os>
Nov 25 16:46:40 compute-0 nova_compute[254092]:   <features>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:   </features>
Nov 25 16:46:40 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:46:40 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:46:40 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:46:40 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:46:40 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:46:40 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/076182c5-e049-4b78-b5a0-64489e37776b_disk">
Nov 25 16:46:40 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:       </source>
Nov 25 16:46:40 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:46:40 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:46:40 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:46:40 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/076182c5-e049-4b78-b5a0-64489e37776b_disk.config">
Nov 25 16:46:40 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:       </source>
Nov 25 16:46:40 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:46:40 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:46:40 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:46:40 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:35:b3:37"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:       <target dev="tap00bc7e19-5b"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:46:40 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/076182c5-e049-4b78-b5a0-64489e37776b/console.log" append="off"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     <video>
Nov 25 16:46:40 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     </video>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:46:40 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:46:40 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:46:40 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:46:40 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:46:40 compute-0 nova_compute[254092]: </domain>
Nov 25 16:46:40 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:46:40 compute-0 nova_compute[254092]: 2025-11-25 16:46:40.258 254096 DEBUG nova.compute.manager [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Preparing to wait for external event network-vif-plugged-00bc7e19-5b5c-41aa-a9e9-4de7574a63eb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:46:40 compute-0 nova_compute[254092]: 2025-11-25 16:46:40.258 254096 DEBUG oslo_concurrency.lockutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "076182c5-e049-4b78-b5a0-64489e37776b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:40 compute-0 nova_compute[254092]: 2025-11-25 16:46:40.258 254096 DEBUG oslo_concurrency.lockutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "076182c5-e049-4b78-b5a0-64489e37776b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:40 compute-0 nova_compute[254092]: 2025-11-25 16:46:40.258 254096 DEBUG oslo_concurrency.lockutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "076182c5-e049-4b78-b5a0-64489e37776b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:40 compute-0 nova_compute[254092]: 2025-11-25 16:46:40.259 254096 DEBUG nova.virt.libvirt.vif [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:46:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-765323280',display_name='tempest-ServersTestJSON-server-765323280',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-765323280',id=85,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56790f54314e4087abb5da8030b17666',ramdisk_id='',reservation_id='r-s5gihthd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1089675869',owner_user_name='tempest-ServersTestJSON-1089675869-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:46:33Z,user_data=None,user_id='7a137feb008c49e092cbb94106526835',uuid=076182c5-e049-4b78-b5a0-64489e37776b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "00bc7e19-5b5c-41aa-a9e9-4de7574a63eb", "address": "fa:16:3e:35:b3:37", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00bc7e19-5b", "ovs_interfaceid": "00bc7e19-5b5c-41aa-a9e9-4de7574a63eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:46:40 compute-0 nova_compute[254092]: 2025-11-25 16:46:40.259 254096 DEBUG nova.network.os_vif_util [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converting VIF {"id": "00bc7e19-5b5c-41aa-a9e9-4de7574a63eb", "address": "fa:16:3e:35:b3:37", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00bc7e19-5b", "ovs_interfaceid": "00bc7e19-5b5c-41aa-a9e9-4de7574a63eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:46:40 compute-0 nova_compute[254092]: 2025-11-25 16:46:40.260 254096 DEBUG nova.network.os_vif_util [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:35:b3:37,bridge_name='br-int',has_traffic_filtering=True,id=00bc7e19-5b5c-41aa-a9e9-4de7574a63eb,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00bc7e19-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:46:40 compute-0 nova_compute[254092]: 2025-11-25 16:46:40.260 254096 DEBUG os_vif [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:35:b3:37,bridge_name='br-int',has_traffic_filtering=True,id=00bc7e19-5b5c-41aa-a9e9-4de7574a63eb,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00bc7e19-5b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:46:40 compute-0 nova_compute[254092]: 2025-11-25 16:46:40.261 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:40 compute-0 nova_compute[254092]: 2025-11-25 16:46:40.261 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:40 compute-0 nova_compute[254092]: 2025-11-25 16:46:40.261 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:46:40 compute-0 nova_compute[254092]: 2025-11-25 16:46:40.264 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:40 compute-0 nova_compute[254092]: 2025-11-25 16:46:40.264 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00bc7e19-5b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:40 compute-0 nova_compute[254092]: 2025-11-25 16:46:40.265 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap00bc7e19-5b, col_values=(('external_ids', {'iface-id': '00bc7e19-5b5c-41aa-a9e9-4de7574a63eb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:35:b3:37', 'vm-uuid': '076182c5-e049-4b78-b5a0-64489e37776b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:40 compute-0 nova_compute[254092]: 2025-11-25 16:46:40.266 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:40 compute-0 NetworkManager[48891]: <info>  [1764089200.2672] manager: (tap00bc7e19-5b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/354)
Nov 25 16:46:40 compute-0 nova_compute[254092]: 2025-11-25 16:46:40.269 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:46:40 compute-0 nova_compute[254092]: 2025-11-25 16:46:40.271 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:40 compute-0 nova_compute[254092]: 2025-11-25 16:46:40.271 254096 INFO os_vif [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:35:b3:37,bridge_name='br-int',has_traffic_filtering=True,id=00bc7e19-5b5c-41aa-a9e9-4de7574a63eb,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00bc7e19-5b')
Nov 25 16:46:40 compute-0 nova_compute[254092]: 2025-11-25 16:46:40.316 254096 DEBUG nova.virt.libvirt.driver [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:46:40 compute-0 nova_compute[254092]: 2025-11-25 16:46:40.316 254096 DEBUG nova.virt.libvirt.driver [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:46:40 compute-0 nova_compute[254092]: 2025-11-25 16:46:40.316 254096 DEBUG nova.virt.libvirt.driver [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] No VIF found with MAC fa:16:3e:35:b3:37, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:46:40 compute-0 nova_compute[254092]: 2025-11-25 16:46:40.317 254096 INFO nova.virt.libvirt.driver [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Using config drive
Nov 25 16:46:40 compute-0 nova_compute[254092]: 2025-11-25 16:46:40.334 254096 DEBUG nova.storage.rbd_utils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 076182c5-e049-4b78-b5a0-64489e37776b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:46:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:46:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:46:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:46:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:46:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:46:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:46:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:46:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:46:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:46:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:46:40 compute-0 ceph-mon[74985]: pgmap v1838: 321 pgs: 321 active+clean; 344 MiB data, 760 MiB used, 59 GiB / 60 GiB avail; 189 KiB/s rd, 3.9 MiB/s wr, 69 op/s
Nov 25 16:46:40 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/520203387' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:46:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:46:41 compute-0 nova_compute[254092]: 2025-11-25 16:46:41.175 254096 INFO nova.virt.libvirt.driver [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Creating config drive at /var/lib/nova/instances/076182c5-e049-4b78-b5a0-64489e37776b/disk.config
Nov 25 16:46:41 compute-0 nova_compute[254092]: 2025-11-25 16:46:41.180 254096 DEBUG oslo_concurrency.processutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/076182c5-e049-4b78-b5a0-64489e37776b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3z5q14y3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:46:41 compute-0 nova_compute[254092]: 2025-11-25 16:46:41.319 254096 DEBUG oslo_concurrency.processutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/076182c5-e049-4b78-b5a0-64489e37776b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3z5q14y3" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:46:41 compute-0 nova_compute[254092]: 2025-11-25 16:46:41.342 254096 DEBUG nova.storage.rbd_utils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 076182c5-e049-4b78-b5a0-64489e37776b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:46:41 compute-0 nova_compute[254092]: 2025-11-25 16:46:41.346 254096 DEBUG oslo_concurrency.processutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/076182c5-e049-4b78-b5a0-64489e37776b/disk.config 076182c5-e049-4b78-b5a0-64489e37776b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:46:41 compute-0 nova_compute[254092]: 2025-11-25 16:46:41.460 254096 DEBUG nova.compute.manager [req-44e5f1b6-e852-4752-812e-ccb837bedbce req-0e4a32b1-53d0-4c45-9b3b-0a1a10f53227 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Received event network-vif-deleted-4fe8c3a9-70ba-4a51-8bc1-1505a49fe349 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:46:41 compute-0 nova_compute[254092]: 2025-11-25 16:46:41.461 254096 DEBUG nova.compute.manager [req-44e5f1b6-e852-4752-812e-ccb837bedbce req-0e4a32b1-53d0-4c45-9b3b-0a1a10f53227 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Received event network-changed-522caf03-4901-44aa-ba29-8d9b37be1158 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:46:41 compute-0 nova_compute[254092]: 2025-11-25 16:46:41.461 254096 DEBUG nova.compute.manager [req-44e5f1b6-e852-4752-812e-ccb837bedbce req-0e4a32b1-53d0-4c45-9b3b-0a1a10f53227 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Refreshing instance network info cache due to event network-changed-522caf03-4901-44aa-ba29-8d9b37be1158. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:46:41 compute-0 nova_compute[254092]: 2025-11-25 16:46:41.462 254096 DEBUG oslo_concurrency.lockutils [req-44e5f1b6-e852-4752-812e-ccb837bedbce req-0e4a32b1-53d0-4c45-9b3b-0a1a10f53227 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-7bf5d985-1a7f-41b6-8002-b801999e99f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:46:41 compute-0 nova_compute[254092]: 2025-11-25 16:46:41.462 254096 DEBUG oslo_concurrency.lockutils [req-44e5f1b6-e852-4752-812e-ccb837bedbce req-0e4a32b1-53d0-4c45-9b3b-0a1a10f53227 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-7bf5d985-1a7f-41b6-8002-b801999e99f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:46:41 compute-0 nova_compute[254092]: 2025-11-25 16:46:41.462 254096 DEBUG nova.network.neutron [req-44e5f1b6-e852-4752-812e-ccb837bedbce req-0e4a32b1-53d0-4c45-9b3b-0a1a10f53227 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Refreshing network info cache for port 522caf03-4901-44aa-ba29-8d9b37be1158 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:46:41 compute-0 nova_compute[254092]: 2025-11-25 16:46:41.500 254096 DEBUG oslo_concurrency.processutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/076182c5-e049-4b78-b5a0-64489e37776b/disk.config 076182c5-e049-4b78-b5a0-64489e37776b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:46:41 compute-0 nova_compute[254092]: 2025-11-25 16:46:41.500 254096 INFO nova.virt.libvirt.driver [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Deleting local config drive /var/lib/nova/instances/076182c5-e049-4b78-b5a0-64489e37776b/disk.config because it was imported into RBD.
Nov 25 16:46:41 compute-0 nova_compute[254092]: 2025-11-25 16:46:41.522 254096 DEBUG nova.network.neutron [req-7aab12a5-e21d-48f1-9859-73fd19b1bfb4 req-f9ec39dd-1499-4c04-8616-fb83e55347ef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Updated VIF entry in instance network info cache for port 00bc7e19-5b5c-41aa-a9e9-4de7574a63eb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:46:41 compute-0 nova_compute[254092]: 2025-11-25 16:46:41.523 254096 DEBUG nova.network.neutron [req-7aab12a5-e21d-48f1-9859-73fd19b1bfb4 req-f9ec39dd-1499-4c04-8616-fb83e55347ef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Updating instance_info_cache with network_info: [{"id": "00bc7e19-5b5c-41aa-a9e9-4de7574a63eb", "address": "fa:16:3e:35:b3:37", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00bc7e19-5b", "ovs_interfaceid": "00bc7e19-5b5c-41aa-a9e9-4de7574a63eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:46:41 compute-0 NetworkManager[48891]: <info>  [1764089201.5369] manager: (tap00bc7e19-5b): new Tun device (/org/freedesktop/NetworkManager/Devices/355)
Nov 25 16:46:41 compute-0 kernel: tap00bc7e19-5b: entered promiscuous mode
Nov 25 16:46:41 compute-0 nova_compute[254092]: 2025-11-25 16:46:41.538 254096 DEBUG oslo_concurrency.lockutils [req-7aab12a5-e21d-48f1-9859-73fd19b1bfb4 req-f9ec39dd-1499-4c04-8616-fb83e55347ef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-076182c5-e049-4b78-b5a0-64489e37776b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:46:41 compute-0 ovn_controller[153477]: 2025-11-25T16:46:41Z|00823|binding|INFO|Claiming lport 00bc7e19-5b5c-41aa-a9e9-4de7574a63eb for this chassis.
Nov 25 16:46:41 compute-0 ovn_controller[153477]: 2025-11-25T16:46:41Z|00824|binding|INFO|00bc7e19-5b5c-41aa-a9e9-4de7574a63eb: Claiming fa:16:3e:35:b3:37 10.100.0.13
Nov 25 16:46:41 compute-0 nova_compute[254092]: 2025-11-25 16:46:41.541 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.547 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:35:b3:37 10.100.0.13'], port_security=['fa:16:3e:35:b3:37 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '076182c5-e049-4b78-b5a0-64489e37776b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1431a6bc-93c8-4db5-a148-b2950f02c941', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56790f54314e4087abb5da8030b17666', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6f3e9934-51eb-4bf8-9114-93abd745996a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f25b0746-2c82-4a0a-9e71-0116018fc740, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=00bc7e19-5b5c-41aa-a9e9-4de7574a63eb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:46:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.548 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 00bc7e19-5b5c-41aa-a9e9-4de7574a63eb in datapath 1431a6bc-93c8-4db5-a148-b2950f02c941 bound to our chassis
Nov 25 16:46:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.550 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1431a6bc-93c8-4db5-a148-b2950f02c941
Nov 25 16:46:41 compute-0 ovn_controller[153477]: 2025-11-25T16:46:41Z|00825|binding|INFO|Setting lport 00bc7e19-5b5c-41aa-a9e9-4de7574a63eb up in Southbound
Nov 25 16:46:41 compute-0 ovn_controller[153477]: 2025-11-25T16:46:41Z|00826|binding|INFO|Setting lport 00bc7e19-5b5c-41aa-a9e9-4de7574a63eb ovn-installed in OVS
Nov 25 16:46:41 compute-0 nova_compute[254092]: 2025-11-25 16:46:41.564 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:41 compute-0 nova_compute[254092]: 2025-11-25 16:46:41.565 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:41 compute-0 nova_compute[254092]: 2025-11-25 16:46:41.567 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.566 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f29db172-a54c-4bf1-9c24-f3ce8958b9b5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:41 compute-0 systemd-machined[216343]: New machine qemu-106-instance-00000055.
Nov 25 16:46:41 compute-0 systemd[1]: Started Virtual Machine qemu-106-instance-00000055.
Nov 25 16:46:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.593 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[de71df63-33da-4f5b-9231-1f74c7fe5de5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.600 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1bfa304f-c2e8-4c10-92ba-7deee08db6cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:41 compute-0 systemd-udevd[343139]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:46:41 compute-0 NetworkManager[48891]: <info>  [1764089201.6157] device (tap00bc7e19-5b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:46:41 compute-0 NetworkManager[48891]: <info>  [1764089201.6167] device (tap00bc7e19-5b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:46:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.627 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[0251278b-e15a-48ab-8e34-4059912ddc2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.643 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1192ac8e-b76e-4170-95d9-d136036c5b44]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1431a6bc-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:f8:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 238], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563972, 'reachable_time': 23815, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343149, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.657 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bdba07ce-da4a-4680-8e2a-fd6ed17e533a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1431a6bc-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563984, 'tstamp': 563984}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343150, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1431a6bc-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563987, 'tstamp': 563987}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343150, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.659 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1431a6bc-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:41 compute-0 nova_compute[254092]: 2025-11-25 16:46:41.660 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:41 compute-0 nova_compute[254092]: 2025-11-25 16:46:41.661 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.662 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1431a6bc-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.662 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:46:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.662 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1431a6bc-90, col_values=(('external_ids', {'iface-id': '2b164d7c-ac7d-4965-a9d1-81827e4e9936'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.663 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:46:41 compute-0 nova_compute[254092]: 2025-11-25 16:46:41.674 254096 DEBUG oslo_concurrency.lockutils [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Acquiring lock "7bf5d985-1a7f-41b6-8002-b801999e99f5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:41 compute-0 nova_compute[254092]: 2025-11-25 16:46:41.675 254096 DEBUG oslo_concurrency.lockutils [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "7bf5d985-1a7f-41b6-8002-b801999e99f5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:41 compute-0 nova_compute[254092]: 2025-11-25 16:46:41.675 254096 DEBUG oslo_concurrency.lockutils [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Acquiring lock "7bf5d985-1a7f-41b6-8002-b801999e99f5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:41 compute-0 nova_compute[254092]: 2025-11-25 16:46:41.675 254096 DEBUG oslo_concurrency.lockutils [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "7bf5d985-1a7f-41b6-8002-b801999e99f5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:41 compute-0 nova_compute[254092]: 2025-11-25 16:46:41.675 254096 DEBUG oslo_concurrency.lockutils [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "7bf5d985-1a7f-41b6-8002-b801999e99f5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:41 compute-0 nova_compute[254092]: 2025-11-25 16:46:41.676 254096 INFO nova.compute.manager [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Terminating instance
Nov 25 16:46:41 compute-0 nova_compute[254092]: 2025-11-25 16:46:41.677 254096 DEBUG nova.compute.manager [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:46:41 compute-0 kernel: tap522caf03-49 (unregistering): left promiscuous mode
Nov 25 16:46:41 compute-0 NetworkManager[48891]: <info>  [1764089201.7257] device (tap522caf03-49): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:46:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1839: 321 pgs: 321 active+clean; 293 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.7 MiB/s wr, 228 op/s
Nov 25 16:46:41 compute-0 nova_compute[254092]: 2025-11-25 16:46:41.774 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:41 compute-0 ovn_controller[153477]: 2025-11-25T16:46:41Z|00827|binding|INFO|Releasing lport 522caf03-4901-44aa-ba29-8d9b37be1158 from this chassis (sb_readonly=0)
Nov 25 16:46:41 compute-0 ovn_controller[153477]: 2025-11-25T16:46:41Z|00828|binding|INFO|Setting lport 522caf03-4901-44aa-ba29-8d9b37be1158 down in Southbound
Nov 25 16:46:41 compute-0 ovn_controller[153477]: 2025-11-25T16:46:41Z|00829|binding|INFO|Removing iface tap522caf03-49 ovn-installed in OVS
Nov 25 16:46:41 compute-0 nova_compute[254092]: 2025-11-25 16:46:41.777 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.786 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fd:24:70 10.100.0.3'], port_security=['fa:16:3e:fd:24:70 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '7bf5d985-1a7f-41b6-8002-b801999e99f5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-290484fa-908f-44de-87e4-4f5bc85c5679', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4964e211a6d4699ab499f7cadee8a8d', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=041a286f-fdb2-474d-8a57-f5c3168624f1, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=522caf03-4901-44aa-ba29-8d9b37be1158) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:46:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.788 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 522caf03-4901-44aa-ba29-8d9b37be1158 in datapath 290484fa-908f-44de-87e4-4f5bc85c5679 unbound from our chassis
Nov 25 16:46:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.790 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 290484fa-908f-44de-87e4-4f5bc85c5679
Nov 25 16:46:41 compute-0 nova_compute[254092]: 2025-11-25 16:46:41.791 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.805 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c606ce5e-8757-464a-a65a-9b5cd25f14c2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:41 compute-0 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d00000054.scope: Deactivated successfully.
Nov 25 16:46:41 compute-0 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d00000054.scope: Consumed 5.884s CPU time.
Nov 25 16:46:41 compute-0 systemd-machined[216343]: Machine qemu-105-instance-00000054 terminated.
Nov 25 16:46:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.834 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8ccc4bb0-56ab-4586-bb38-73297933ee2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.839 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[189e38e5-e1d1-4756-bf46-3128ce3a5bdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.867 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8332a522-6bcb-45ba-bce1-4d816d8f5130]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:41 compute-0 NetworkManager[48891]: <info>  [1764089201.8923] manager: (tap522caf03-49): new Tun device (/org/freedesktop/NetworkManager/Devices/356)
Nov 25 16:46:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.892 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ddcb314d-d2db-48db-9708-8b0dc2c37857]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap290484fa-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:a3:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 221], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554060, 'reachable_time': 26301, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343201, 'error': None, 'target': 'ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.908 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c2f60afe-35fd-4cc9-a313-bae463f073b7]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap290484fa-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 554073, 'tstamp': 554073}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343210, 'error': None, 'target': 'ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap290484fa-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 554076, 'tstamp': 554076}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343210, 'error': None, 'target': 'ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:41 compute-0 nova_compute[254092]: 2025-11-25 16:46:41.909 254096 INFO nova.virt.libvirt.driver [-] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Instance destroyed successfully.
Nov 25 16:46:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.910 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap290484fa-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:41 compute-0 nova_compute[254092]: 2025-11-25 16:46:41.910 254096 DEBUG nova.objects.instance [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lazy-loading 'resources' on Instance uuid 7bf5d985-1a7f-41b6-8002-b801999e99f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:46:41 compute-0 nova_compute[254092]: 2025-11-25 16:46:41.912 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:41 compute-0 nova_compute[254092]: 2025-11-25 16:46:41.916 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.917 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap290484fa-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.917 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:46:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.917 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap290484fa-90, col_values=(('external_ids', {'iface-id': 'db192ec3-55c1-4137-aaad-99a175bfa879'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:41.917 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:46:41 compute-0 nova_compute[254092]: 2025-11-25 16:46:41.921 254096 DEBUG nova.virt.libvirt.vif [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:46:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1585503050',display_name='tempest-ServerActionsTestOtherA-server-1585503050',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1585503050',id=84,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:46:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d4964e211a6d4699ab499f7cadee8a8d',ramdisk_id='',reservation_id='r-2d3os2i1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-878981139',owner_user_name='tempest-ServerActionsTestOtherA-878981139-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:46:36Z,user_data=None,user_id='7bd4800c25cd462b9365649e599d0a0e',uuid=7bf5d985-1a7f-41b6-8002-b801999e99f5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "522caf03-4901-44aa-ba29-8d9b37be1158", "address": "fa:16:3e:fd:24:70", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap522caf03-49", "ovs_interfaceid": "522caf03-4901-44aa-ba29-8d9b37be1158", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:46:41 compute-0 nova_compute[254092]: 2025-11-25 16:46:41.921 254096 DEBUG nova.network.os_vif_util [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Converting VIF {"id": "522caf03-4901-44aa-ba29-8d9b37be1158", "address": "fa:16:3e:fd:24:70", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap522caf03-49", "ovs_interfaceid": "522caf03-4901-44aa-ba29-8d9b37be1158", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:46:41 compute-0 nova_compute[254092]: 2025-11-25 16:46:41.922 254096 DEBUG nova.network.os_vif_util [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fd:24:70,bridge_name='br-int',has_traffic_filtering=True,id=522caf03-4901-44aa-ba29-8d9b37be1158,network=Network(290484fa-908f-44de-87e4-4f5bc85c5679),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap522caf03-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:46:41 compute-0 nova_compute[254092]: 2025-11-25 16:46:41.922 254096 DEBUG os_vif [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fd:24:70,bridge_name='br-int',has_traffic_filtering=True,id=522caf03-4901-44aa-ba29-8d9b37be1158,network=Network(290484fa-908f-44de-87e4-4f5bc85c5679),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap522caf03-49') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:46:41 compute-0 nova_compute[254092]: 2025-11-25 16:46:41.924 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:41 compute-0 nova_compute[254092]: 2025-11-25 16:46:41.924 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap522caf03-49, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:41 compute-0 nova_compute[254092]: 2025-11-25 16:46:41.927 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:41 compute-0 nova_compute[254092]: 2025-11-25 16:46:41.930 254096 INFO os_vif [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fd:24:70,bridge_name='br-int',has_traffic_filtering=True,id=522caf03-4901-44aa-ba29-8d9b37be1158,network=Network(290484fa-908f-44de-87e4-4f5bc85c5679),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap522caf03-49')
Nov 25 16:46:41 compute-0 nova_compute[254092]: 2025-11-25 16:46:41.964 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089201.9639485, 076182c5-e049-4b78-b5a0-64489e37776b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:46:41 compute-0 nova_compute[254092]: 2025-11-25 16:46:41.965 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] VM Started (Lifecycle Event)
Nov 25 16:46:41 compute-0 nova_compute[254092]: 2025-11-25 16:46:41.982 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:46:41 compute-0 nova_compute[254092]: 2025-11-25 16:46:41.986 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089201.9644976, 076182c5-e049-4b78-b5a0-64489e37776b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:46:41 compute-0 nova_compute[254092]: 2025-11-25 16:46:41.986 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] VM Paused (Lifecycle Event)
Nov 25 16:46:42 compute-0 nova_compute[254092]: 2025-11-25 16:46:42.000 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:46:42 compute-0 nova_compute[254092]: 2025-11-25 16:46:42.004 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:46:42 compute-0 nova_compute[254092]: 2025-11-25 16:46:42.019 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:46:42 compute-0 nova_compute[254092]: 2025-11-25 16:46:42.302 254096 INFO nova.virt.libvirt.driver [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Deleting instance files /var/lib/nova/instances/7bf5d985-1a7f-41b6-8002-b801999e99f5_del
Nov 25 16:46:42 compute-0 nova_compute[254092]: 2025-11-25 16:46:42.303 254096 INFO nova.virt.libvirt.driver [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Deletion of /var/lib/nova/instances/7bf5d985-1a7f-41b6-8002-b801999e99f5_del complete
Nov 25 16:46:42 compute-0 nova_compute[254092]: 2025-11-25 16:46:42.344 254096 INFO nova.compute.manager [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Took 0.67 seconds to destroy the instance on the hypervisor.
Nov 25 16:46:42 compute-0 nova_compute[254092]: 2025-11-25 16:46:42.344 254096 DEBUG oslo.service.loopingcall [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:46:42 compute-0 nova_compute[254092]: 2025-11-25 16:46:42.344 254096 DEBUG nova.compute.manager [-] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:46:42 compute-0 nova_compute[254092]: 2025-11-25 16:46:42.345 254096 DEBUG nova.network.neutron [-] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:46:42 compute-0 ceph-mon[74985]: pgmap v1839: 321 pgs: 321 active+clean; 293 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.7 MiB/s wr, 228 op/s
Nov 25 16:46:43 compute-0 nova_compute[254092]: 2025-11-25 16:46:43.469 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:43 compute-0 nova_compute[254092]: 2025-11-25 16:46:43.615 254096 DEBUG nova.compute.manager [req-2d650b4e-ade9-4848-9d65-ab2f218f644c req-9c7b59d9-db6c-435c-b7ad-d99b03777330 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Received event network-vif-plugged-00bc7e19-5b5c-41aa-a9e9-4de7574a63eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:46:43 compute-0 nova_compute[254092]: 2025-11-25 16:46:43.616 254096 DEBUG oslo_concurrency.lockutils [req-2d650b4e-ade9-4848-9d65-ab2f218f644c req-9c7b59d9-db6c-435c-b7ad-d99b03777330 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "076182c5-e049-4b78-b5a0-64489e37776b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:43 compute-0 nova_compute[254092]: 2025-11-25 16:46:43.617 254096 DEBUG oslo_concurrency.lockutils [req-2d650b4e-ade9-4848-9d65-ab2f218f644c req-9c7b59d9-db6c-435c-b7ad-d99b03777330 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "076182c5-e049-4b78-b5a0-64489e37776b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:43 compute-0 nova_compute[254092]: 2025-11-25 16:46:43.617 254096 DEBUG oslo_concurrency.lockutils [req-2d650b4e-ade9-4848-9d65-ab2f218f644c req-9c7b59d9-db6c-435c-b7ad-d99b03777330 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "076182c5-e049-4b78-b5a0-64489e37776b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:43 compute-0 nova_compute[254092]: 2025-11-25 16:46:43.617 254096 DEBUG nova.compute.manager [req-2d650b4e-ade9-4848-9d65-ab2f218f644c req-9c7b59d9-db6c-435c-b7ad-d99b03777330 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Processing event network-vif-plugged-00bc7e19-5b5c-41aa-a9e9-4de7574a63eb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:46:43 compute-0 nova_compute[254092]: 2025-11-25 16:46:43.618 254096 DEBUG nova.compute.manager [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:46:43 compute-0 nova_compute[254092]: 2025-11-25 16:46:43.622 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089203.6219947, 076182c5-e049-4b78-b5a0-64489e37776b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:46:43 compute-0 nova_compute[254092]: 2025-11-25 16:46:43.622 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] VM Resumed (Lifecycle Event)
Nov 25 16:46:43 compute-0 nova_compute[254092]: 2025-11-25 16:46:43.627 254096 DEBUG nova.virt.libvirt.driver [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:46:43 compute-0 nova_compute[254092]: 2025-11-25 16:46:43.631 254096 INFO nova.virt.libvirt.driver [-] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Instance spawned successfully.
Nov 25 16:46:43 compute-0 nova_compute[254092]: 2025-11-25 16:46:43.631 254096 DEBUG nova.virt.libvirt.driver [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:46:43 compute-0 podman[343238]: 2025-11-25 16:46:43.649367278 +0000 UTC m=+0.059213721 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 16:46:43 compute-0 nova_compute[254092]: 2025-11-25 16:46:43.652 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:46:43 compute-0 nova_compute[254092]: 2025-11-25 16:46:43.660 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:46:43 compute-0 podman[343237]: 2025-11-25 16:46:43.661796365 +0000 UTC m=+0.072952644 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 16:46:43 compute-0 nova_compute[254092]: 2025-11-25 16:46:43.664 254096 DEBUG nova.virt.libvirt.driver [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:46:43 compute-0 nova_compute[254092]: 2025-11-25 16:46:43.664 254096 DEBUG nova.virt.libvirt.driver [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:46:43 compute-0 nova_compute[254092]: 2025-11-25 16:46:43.665 254096 DEBUG nova.virt.libvirt.driver [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:46:43 compute-0 nova_compute[254092]: 2025-11-25 16:46:43.665 254096 DEBUG nova.virt.libvirt.driver [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:46:43 compute-0 nova_compute[254092]: 2025-11-25 16:46:43.666 254096 DEBUG nova.virt.libvirt.driver [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:46:43 compute-0 nova_compute[254092]: 2025-11-25 16:46:43.666 254096 DEBUG nova.virt.libvirt.driver [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:46:43 compute-0 podman[343239]: 2025-11-25 16:46:43.694692359 +0000 UTC m=+0.100274756 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Nov 25 16:46:43 compute-0 nova_compute[254092]: 2025-11-25 16:46:43.705 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:46:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1840: 321 pgs: 321 active+clean; 293 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.1 MiB/s wr, 198 op/s
Nov 25 16:46:43 compute-0 nova_compute[254092]: 2025-11-25 16:46:43.740 254096 INFO nova.compute.manager [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Took 10.04 seconds to spawn the instance on the hypervisor.
Nov 25 16:46:43 compute-0 nova_compute[254092]: 2025-11-25 16:46:43.741 254096 DEBUG nova.compute.manager [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:46:43 compute-0 nova_compute[254092]: 2025-11-25 16:46:43.824 254096 INFO nova.compute.manager [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Took 11.08 seconds to build instance.
Nov 25 16:46:43 compute-0 nova_compute[254092]: 2025-11-25 16:46:43.841 254096 DEBUG oslo_concurrency.lockutils [None req-6096a97e-70cb-4162-8a43-e96ed6d644ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "076182c5-e049-4b78-b5a0-64489e37776b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.182s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:44 compute-0 nova_compute[254092]: 2025-11-25 16:46:44.106 254096 DEBUG nova.network.neutron [req-44e5f1b6-e852-4752-812e-ccb837bedbce req-0e4a32b1-53d0-4c45-9b3b-0a1a10f53227 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Updated VIF entry in instance network info cache for port 522caf03-4901-44aa-ba29-8d9b37be1158. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:46:44 compute-0 nova_compute[254092]: 2025-11-25 16:46:44.108 254096 DEBUG nova.network.neutron [req-44e5f1b6-e852-4752-812e-ccb837bedbce req-0e4a32b1-53d0-4c45-9b3b-0a1a10f53227 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Updating instance_info_cache with network_info: [{"id": "522caf03-4901-44aa-ba29-8d9b37be1158", "address": "fa:16:3e:fd:24:70", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap522caf03-49", "ovs_interfaceid": "522caf03-4901-44aa-ba29-8d9b37be1158", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:46:44 compute-0 nova_compute[254092]: 2025-11-25 16:46:44.131 254096 DEBUG oslo_concurrency.lockutils [req-44e5f1b6-e852-4752-812e-ccb837bedbce req-0e4a32b1-53d0-4c45-9b3b-0a1a10f53227 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-7bf5d985-1a7f-41b6-8002-b801999e99f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:46:44 compute-0 nova_compute[254092]: 2025-11-25 16:46:44.280 254096 DEBUG nova.network.neutron [-] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:46:44 compute-0 nova_compute[254092]: 2025-11-25 16:46:44.309 254096 INFO nova.compute.manager [-] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Took 1.96 seconds to deallocate network for instance.
Nov 25 16:46:44 compute-0 nova_compute[254092]: 2025-11-25 16:46:44.373 254096 DEBUG oslo_concurrency.lockutils [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:44 compute-0 nova_compute[254092]: 2025-11-25 16:46:44.374 254096 DEBUG oslo_concurrency.lockutils [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:44 compute-0 nova_compute[254092]: 2025-11-25 16:46:44.476 254096 DEBUG oslo_concurrency.processutils [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:46:44 compute-0 ceph-mon[74985]: pgmap v1840: 321 pgs: 321 active+clean; 293 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.1 MiB/s wr, 198 op/s
Nov 25 16:46:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:46:44 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2980439527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:46:44 compute-0 nova_compute[254092]: 2025-11-25 16:46:44.951 254096 DEBUG oslo_concurrency.processutils [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:46:44 compute-0 nova_compute[254092]: 2025-11-25 16:46:44.957 254096 DEBUG nova.compute.provider_tree [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:46:44 compute-0 nova_compute[254092]: 2025-11-25 16:46:44.970 254096 DEBUG nova.scheduler.client.report [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:46:44 compute-0 nova_compute[254092]: 2025-11-25 16:46:44.993 254096 DEBUG oslo_concurrency.lockutils [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:45 compute-0 nova_compute[254092]: 2025-11-25 16:46:45.017 254096 INFO nova.scheduler.client.report [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Deleted allocations for instance 7bf5d985-1a7f-41b6-8002-b801999e99f5
Nov 25 16:46:45 compute-0 nova_compute[254092]: 2025-11-25 16:46:45.068 254096 DEBUG oslo_concurrency.lockutils [None req-d1a1527f-b4cb-42f8-af1d-2ed1e0507993 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "7bf5d985-1a7f-41b6-8002-b801999e99f5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.393s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1841: 321 pgs: 321 active+clean; 256 MiB data, 721 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.1 MiB/s wr, 266 op/s
Nov 25 16:46:45 compute-0 nova_compute[254092]: 2025-11-25 16:46:45.743 254096 DEBUG nova.compute.manager [req-71212160-4b07-4617-8cfc-7212e64236c2 req-03c48b49-06a3-44fd-9677-63a136635e2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Received event network-vif-plugged-00bc7e19-5b5c-41aa-a9e9-4de7574a63eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:46:45 compute-0 nova_compute[254092]: 2025-11-25 16:46:45.744 254096 DEBUG oslo_concurrency.lockutils [req-71212160-4b07-4617-8cfc-7212e64236c2 req-03c48b49-06a3-44fd-9677-63a136635e2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "076182c5-e049-4b78-b5a0-64489e37776b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:45 compute-0 nova_compute[254092]: 2025-11-25 16:46:45.745 254096 DEBUG oslo_concurrency.lockutils [req-71212160-4b07-4617-8cfc-7212e64236c2 req-03c48b49-06a3-44fd-9677-63a136635e2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "076182c5-e049-4b78-b5a0-64489e37776b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:45 compute-0 nova_compute[254092]: 2025-11-25 16:46:45.745 254096 DEBUG oslo_concurrency.lockutils [req-71212160-4b07-4617-8cfc-7212e64236c2 req-03c48b49-06a3-44fd-9677-63a136635e2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "076182c5-e049-4b78-b5a0-64489e37776b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:45 compute-0 nova_compute[254092]: 2025-11-25 16:46:45.745 254096 DEBUG nova.compute.manager [req-71212160-4b07-4617-8cfc-7212e64236c2 req-03c48b49-06a3-44fd-9677-63a136635e2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] No waiting events found dispatching network-vif-plugged-00bc7e19-5b5c-41aa-a9e9-4de7574a63eb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:46:45 compute-0 nova_compute[254092]: 2025-11-25 16:46:45.746 254096 WARNING nova.compute.manager [req-71212160-4b07-4617-8cfc-7212e64236c2 req-03c48b49-06a3-44fd-9677-63a136635e2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Received unexpected event network-vif-plugged-00bc7e19-5b5c-41aa-a9e9-4de7574a63eb for instance with vm_state active and task_state None.
Nov 25 16:46:45 compute-0 nova_compute[254092]: 2025-11-25 16:46:45.746 254096 DEBUG nova.compute.manager [req-71212160-4b07-4617-8cfc-7212e64236c2 req-03c48b49-06a3-44fd-9677-63a136635e2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Received event network-vif-deleted-522caf03-4901-44aa-ba29-8d9b37be1158 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:46:45 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2980439527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:46:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:46:46 compute-0 nova_compute[254092]: 2025-11-25 16:46:46.123 254096 DEBUG oslo_concurrency.lockutils [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Acquiring lock "98410ff5-26ab-4406-8d1b-063d9e114cf8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:46 compute-0 nova_compute[254092]: 2025-11-25 16:46:46.124 254096 DEBUG oslo_concurrency.lockutils [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "98410ff5-26ab-4406-8d1b-063d9e114cf8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:46 compute-0 nova_compute[254092]: 2025-11-25 16:46:46.125 254096 DEBUG oslo_concurrency.lockutils [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Acquiring lock "98410ff5-26ab-4406-8d1b-063d9e114cf8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:46 compute-0 nova_compute[254092]: 2025-11-25 16:46:46.125 254096 DEBUG oslo_concurrency.lockutils [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "98410ff5-26ab-4406-8d1b-063d9e114cf8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:46 compute-0 nova_compute[254092]: 2025-11-25 16:46:46.125 254096 DEBUG oslo_concurrency.lockutils [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "98410ff5-26ab-4406-8d1b-063d9e114cf8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:46 compute-0 nova_compute[254092]: 2025-11-25 16:46:46.127 254096 INFO nova.compute.manager [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Terminating instance
Nov 25 16:46:46 compute-0 nova_compute[254092]: 2025-11-25 16:46:46.128 254096 DEBUG nova.compute.manager [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:46:46 compute-0 kernel: tap41fd5f5b-44 (unregistering): left promiscuous mode
Nov 25 16:46:46 compute-0 NetworkManager[48891]: <info>  [1764089206.1715] device (tap41fd5f5b-44): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:46:46 compute-0 ovn_controller[153477]: 2025-11-25T16:46:46Z|00830|binding|INFO|Releasing lport 41fd5f5b-445b-4eed-adf5-045ddb262021 from this chassis (sb_readonly=0)
Nov 25 16:46:46 compute-0 ovn_controller[153477]: 2025-11-25T16:46:46Z|00831|binding|INFO|Setting lport 41fd5f5b-445b-4eed-adf5-045ddb262021 down in Southbound
Nov 25 16:46:46 compute-0 ovn_controller[153477]: 2025-11-25T16:46:46Z|00832|binding|INFO|Removing iface tap41fd5f5b-44 ovn-installed in OVS
Nov 25 16:46:46 compute-0 nova_compute[254092]: 2025-11-25 16:46:46.189 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.193 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:dc:a2 10.100.0.6'], port_security=['fa:16:3e:a2:dc:a2 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '98410ff5-26ab-4406-8d1b-063d9e114cf8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-290484fa-908f-44de-87e4-4f5bc85c5679', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4964e211a6d4699ab499f7cadee8a8d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c92cd9ca-5dd9-48df-bed9-cecbc09aacca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.200'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=041a286f-fdb2-474d-8a57-f5c3168624f1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=41fd5f5b-445b-4eed-adf5-045ddb262021) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:46:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.194 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 41fd5f5b-445b-4eed-adf5-045ddb262021 in datapath 290484fa-908f-44de-87e4-4f5bc85c5679 unbound from our chassis
Nov 25 16:46:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.196 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 290484fa-908f-44de-87e4-4f5bc85c5679, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:46:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.198 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[13d7d6f1-4811-4d73-96ec-1d08d4483028]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.198 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679 namespace which is not needed anymore
Nov 25 16:46:46 compute-0 nova_compute[254092]: 2025-11-25 16:46:46.225 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:46 compute-0 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d0000004e.scope: Deactivated successfully.
Nov 25 16:46:46 compute-0 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d0000004e.scope: Consumed 17.199s CPU time.
Nov 25 16:46:46 compute-0 systemd-machined[216343]: Machine qemu-93-instance-0000004e terminated.
Nov 25 16:46:46 compute-0 neutron-haproxy-ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679[335837]: [NOTICE]   (335841) : haproxy version is 2.8.14-c23fe91
Nov 25 16:46:46 compute-0 neutron-haproxy-ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679[335837]: [NOTICE]   (335841) : path to executable is /usr/sbin/haproxy
Nov 25 16:46:46 compute-0 neutron-haproxy-ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679[335837]: [WARNING]  (335841) : Exiting Master process...
Nov 25 16:46:46 compute-0 neutron-haproxy-ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679[335837]: [ALERT]    (335841) : Current worker (335843) exited with code 143 (Terminated)
Nov 25 16:46:46 compute-0 neutron-haproxy-ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679[335837]: [WARNING]  (335841) : All workers exited. Exiting... (0)
Nov 25 16:46:46 compute-0 systemd[1]: libpod-78d680061c0bca0af9c8a030a9ac00c58d509344c51f9dea384a2ffc4edbc3db.scope: Deactivated successfully.
Nov 25 16:46:46 compute-0 podman[343344]: 2025-11-25 16:46:46.368420978 +0000 UTC m=+0.053269629 container died 78d680061c0bca0af9c8a030a9ac00c58d509344c51f9dea384a2ffc4edbc3db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 25 16:46:46 compute-0 nova_compute[254092]: 2025-11-25 16:46:46.377 254096 INFO nova.virt.libvirt.driver [-] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Instance destroyed successfully.
Nov 25 16:46:46 compute-0 nova_compute[254092]: 2025-11-25 16:46:46.377 254096 DEBUG nova.objects.instance [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lazy-loading 'resources' on Instance uuid 98410ff5-26ab-4406-8d1b-063d9e114cf8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:46:46 compute-0 nova_compute[254092]: 2025-11-25 16:46:46.390 254096 DEBUG nova.virt.libvirt.vif [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:44:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-2135129959',display_name='tempest-ServerActionsTestOtherA-server-2135129959',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-2135129959',id=78,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH0CfOXgpdL9TA9v80eVPgWMFMAd3kyDMITWZbq91VqT30SkdY0BSiRtiMf/N/PxHYN1QDKdbRV0yenlOn8E69+KpPA991BPfs7OG9A96fwH3GKazl2NNuFOCSFE4XMmXQ==',key_name='tempest-keypair-1463003804',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:44:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d4964e211a6d4699ab499f7cadee8a8d',ramdisk_id='',reservation_id='r-z971v96r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-878981139',owner_user_name='tempest-ServerActionsTestOtherA-878981139-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:44:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7bd4800c25cd462b9365649e599d0a0e',uuid=98410ff5-26ab-4406-8d1b-063d9e114cf8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "41fd5f5b-445b-4eed-adf5-045ddb262021", "address": "fa:16:3e:a2:dc:a2", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41fd5f5b-44", "ovs_interfaceid": "41fd5f5b-445b-4eed-adf5-045ddb262021", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:46:46 compute-0 nova_compute[254092]: 2025-11-25 16:46:46.390 254096 DEBUG nova.network.os_vif_util [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Converting VIF {"id": "41fd5f5b-445b-4eed-adf5-045ddb262021", "address": "fa:16:3e:a2:dc:a2", "network": {"id": "290484fa-908f-44de-87e4-4f5bc85c5679", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1196667091-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4964e211a6d4699ab499f7cadee8a8d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41fd5f5b-44", "ovs_interfaceid": "41fd5f5b-445b-4eed-adf5-045ddb262021", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:46:46 compute-0 nova_compute[254092]: 2025-11-25 16:46:46.393 254096 DEBUG nova.network.os_vif_util [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a2:dc:a2,bridge_name='br-int',has_traffic_filtering=True,id=41fd5f5b-445b-4eed-adf5-045ddb262021,network=Network(290484fa-908f-44de-87e4-4f5bc85c5679),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41fd5f5b-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:46:46 compute-0 nova_compute[254092]: 2025-11-25 16:46:46.394 254096 DEBUG os_vif [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a2:dc:a2,bridge_name='br-int',has_traffic_filtering=True,id=41fd5f5b-445b-4eed-adf5-045ddb262021,network=Network(290484fa-908f-44de-87e4-4f5bc85c5679),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41fd5f5b-44') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:46:46 compute-0 nova_compute[254092]: 2025-11-25 16:46:46.396 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:46 compute-0 nova_compute[254092]: 2025-11-25 16:46:46.397 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41fd5f5b-44, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:46 compute-0 nova_compute[254092]: 2025-11-25 16:46:46.398 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:46 compute-0 nova_compute[254092]: 2025-11-25 16:46:46.401 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:46:46 compute-0 nova_compute[254092]: 2025-11-25 16:46:46.404 254096 INFO os_vif [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a2:dc:a2,bridge_name='br-int',has_traffic_filtering=True,id=41fd5f5b-445b-4eed-adf5-045ddb262021,network=Network(290484fa-908f-44de-87e4-4f5bc85c5679),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41fd5f5b-44')
Nov 25 16:46:46 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-78d680061c0bca0af9c8a030a9ac00c58d509344c51f9dea384a2ffc4edbc3db-userdata-shm.mount: Deactivated successfully.
Nov 25 16:46:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a8c31be6c702279b234bc478f162b1997c18dd87616887343918d4c9ac2c2c7-merged.mount: Deactivated successfully.
Nov 25 16:46:46 compute-0 podman[343344]: 2025-11-25 16:46:46.430907806 +0000 UTC m=+0.115756457 container cleanup 78d680061c0bca0af9c8a030a9ac00c58d509344c51f9dea384a2ffc4edbc3db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 16:46:46 compute-0 systemd[1]: libpod-conmon-78d680061c0bca0af9c8a030a9ac00c58d509344c51f9dea384a2ffc4edbc3db.scope: Deactivated successfully.
Nov 25 16:46:46 compute-0 podman[343396]: 2025-11-25 16:46:46.500026484 +0000 UTC m=+0.045104027 container remove 78d680061c0bca0af9c8a030a9ac00c58d509344c51f9dea384a2ffc4edbc3db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 16:46:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.510 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f1f633cf-528e-4016-9c03-a6c5ea591aa6]: (4, ('Tue Nov 25 04:46:46 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679 (78d680061c0bca0af9c8a030a9ac00c58d509344c51f9dea384a2ffc4edbc3db)\n78d680061c0bca0af9c8a030a9ac00c58d509344c51f9dea384a2ffc4edbc3db\nTue Nov 25 04:46:46 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679 (78d680061c0bca0af9c8a030a9ac00c58d509344c51f9dea384a2ffc4edbc3db)\n78d680061c0bca0af9c8a030a9ac00c58d509344c51f9dea384a2ffc4edbc3db\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.512 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[be44eac9-1bd7-4586-9b87-f396febd439c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.513 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap290484fa-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:46 compute-0 nova_compute[254092]: 2025-11-25 16:46:46.547 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:46 compute-0 kernel: tap290484fa-90: left promiscuous mode
Nov 25 16:46:46 compute-0 nova_compute[254092]: 2025-11-25 16:46:46.571 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.574 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cff601ca-66d6-420d-8153-6e6178efd6d7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.589 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[edb46e56-f793-4b14-8bbe-56be2efc4b01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.591 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0ade95c7-72db-432f-8be3-edad491ede04]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.611 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ab258389-0773-447b-9ea1-654115d9ebbd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554052, 'reachable_time': 34703, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343413, 'error': None, 'target': 'ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:46 compute-0 systemd[1]: run-netns-ovnmeta\x2d290484fa\x2d908f\x2d44de\x2d87e4\x2d4f5bc85c5679.mount: Deactivated successfully.
Nov 25 16:46:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.613 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-290484fa-908f-44de-87e4-4f5bc85c5679 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:46:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.614 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[af1fcfb4-4443-4da6-a99f-8fba94eb16a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:46 compute-0 nova_compute[254092]: 2025-11-25 16:46:46.738 254096 DEBUG oslo_concurrency.lockutils [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "076182c5-e049-4b78-b5a0-64489e37776b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:46 compute-0 nova_compute[254092]: 2025-11-25 16:46:46.740 254096 DEBUG oslo_concurrency.lockutils [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "076182c5-e049-4b78-b5a0-64489e37776b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:46 compute-0 nova_compute[254092]: 2025-11-25 16:46:46.740 254096 DEBUG oslo_concurrency.lockutils [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "076182c5-e049-4b78-b5a0-64489e37776b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:46 compute-0 nova_compute[254092]: 2025-11-25 16:46:46.740 254096 DEBUG oslo_concurrency.lockutils [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "076182c5-e049-4b78-b5a0-64489e37776b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:46 compute-0 nova_compute[254092]: 2025-11-25 16:46:46.741 254096 DEBUG oslo_concurrency.lockutils [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "076182c5-e049-4b78-b5a0-64489e37776b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:46 compute-0 nova_compute[254092]: 2025-11-25 16:46:46.743 254096 INFO nova.compute.manager [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Terminating instance
Nov 25 16:46:46 compute-0 nova_compute[254092]: 2025-11-25 16:46:46.744 254096 DEBUG nova.compute.manager [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:46:46 compute-0 kernel: tap00bc7e19-5b (unregistering): left promiscuous mode
Nov 25 16:46:46 compute-0 NetworkManager[48891]: <info>  [1764089206.7902] device (tap00bc7e19-5b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:46:46 compute-0 ovn_controller[153477]: 2025-11-25T16:46:46Z|00833|binding|INFO|Releasing lport 00bc7e19-5b5c-41aa-a9e9-4de7574a63eb from this chassis (sb_readonly=0)
Nov 25 16:46:46 compute-0 ovn_controller[153477]: 2025-11-25T16:46:46Z|00834|binding|INFO|Setting lport 00bc7e19-5b5c-41aa-a9e9-4de7574a63eb down in Southbound
Nov 25 16:46:46 compute-0 nova_compute[254092]: 2025-11-25 16:46:46.795 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:46 compute-0 ovn_controller[153477]: 2025-11-25T16:46:46Z|00835|binding|INFO|Removing iface tap00bc7e19-5b ovn-installed in OVS
Nov 25 16:46:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.803 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:35:b3:37 10.100.0.13'], port_security=['fa:16:3e:35:b3:37 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '076182c5-e049-4b78-b5a0-64489e37776b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1431a6bc-93c8-4db5-a148-b2950f02c941', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56790f54314e4087abb5da8030b17666', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6f3e9934-51eb-4bf8-9114-93abd745996a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f25b0746-2c82-4a0a-9e71-0116018fc740, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=00bc7e19-5b5c-41aa-a9e9-4de7574a63eb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:46:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.804 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 00bc7e19-5b5c-41aa-a9e9-4de7574a63eb in datapath 1431a6bc-93c8-4db5-a148-b2950f02c941 unbound from our chassis
Nov 25 16:46:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.806 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1431a6bc-93c8-4db5-a148-b2950f02c941
Nov 25 16:46:46 compute-0 nova_compute[254092]: 2025-11-25 16:46:46.820 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.829 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6bc332c3-edb4-402b-8663-20721d66b367]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:46 compute-0 nova_compute[254092]: 2025-11-25 16:46:46.838 254096 INFO nova.virt.libvirt.driver [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Deleting instance files /var/lib/nova/instances/98410ff5-26ab-4406-8d1b-063d9e114cf8_del
Nov 25 16:46:46 compute-0 nova_compute[254092]: 2025-11-25 16:46:46.839 254096 INFO nova.virt.libvirt.driver [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Deletion of /var/lib/nova/instances/98410ff5-26ab-4406-8d1b-063d9e114cf8_del complete
Nov 25 16:46:46 compute-0 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d00000055.scope: Deactivated successfully.
Nov 25 16:46:46 compute-0 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d00000055.scope: Consumed 3.531s CPU time.
Nov 25 16:46:46 compute-0 systemd-machined[216343]: Machine qemu-106-instance-00000055 terminated.
Nov 25 16:46:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.872 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[4152dab6-8b6f-460a-b4ca-4bf0f484aecf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.875 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8d6b4c9b-544f-4887-87e0-9694de42928e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:46 compute-0 nova_compute[254092]: 2025-11-25 16:46:46.886 254096 INFO nova.compute.manager [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Took 0.76 seconds to destroy the instance on the hypervisor.
Nov 25 16:46:46 compute-0 nova_compute[254092]: 2025-11-25 16:46:46.887 254096 DEBUG oslo.service.loopingcall [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:46:46 compute-0 nova_compute[254092]: 2025-11-25 16:46:46.887 254096 DEBUG nova.compute.manager [-] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:46:46 compute-0 nova_compute[254092]: 2025-11-25 16:46:46.887 254096 DEBUG nova.network.neutron [-] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:46:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.913 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[5e83322b-0cbe-489e-b5e8-15eb254066fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:46 compute-0 ceph-mon[74985]: pgmap v1841: 321 pgs: 321 active+clean; 256 MiB data, 721 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.1 MiB/s wr, 266 op/s
Nov 25 16:46:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.935 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9473de71-1336-4c01-bfac-ecba3bcc858b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1431a6bc-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:f8:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 238], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563972, 'reachable_time': 23815, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343424, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.955 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4309d918-0bac-4491-8446-6b0d54c408b8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1431a6bc-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563984, 'tstamp': 563984}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343425, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1431a6bc-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563987, 'tstamp': 563987}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343425, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:46:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.956 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1431a6bc-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:46 compute-0 nova_compute[254092]: 2025-11-25 16:46:46.959 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:46 compute-0 nova_compute[254092]: 2025-11-25 16:46:46.965 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.965 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1431a6bc-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.966 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:46:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.966 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1431a6bc-90, col_values=(('external_ids', {'iface-id': '2b164d7c-ac7d-4965-a9d1-81827e4e9936'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:46:46.966 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:46:46 compute-0 nova_compute[254092]: 2025-11-25 16:46:46.982 254096 INFO nova.virt.libvirt.driver [-] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Instance destroyed successfully.
Nov 25 16:46:46 compute-0 nova_compute[254092]: 2025-11-25 16:46:46.982 254096 DEBUG nova.objects.instance [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lazy-loading 'resources' on Instance uuid 076182c5-e049-4b78-b5a0-64489e37776b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:46:46 compute-0 nova_compute[254092]: 2025-11-25 16:46:46.995 254096 DEBUG nova.virt.libvirt.vif [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:46:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-765323280',display_name='tempest-ServersTestJSON-server-765323280',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-765323280',id=85,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:46:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='56790f54314e4087abb5da8030b17666',ramdisk_id='',reservation_id='r-s5gihthd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1089675869',owner_user_name='tempest-ServersTestJSON-1089675869-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:46:43Z,user_data=None,user_id='7a137feb008c49e092cbb94106526835',uuid=076182c5-e049-4b78-b5a0-64489e37776b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "00bc7e19-5b5c-41aa-a9e9-4de7574a63eb", "address": "fa:16:3e:35:b3:37", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00bc7e19-5b", "ovs_interfaceid": "00bc7e19-5b5c-41aa-a9e9-4de7574a63eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:46:46 compute-0 nova_compute[254092]: 2025-11-25 16:46:46.995 254096 DEBUG nova.network.os_vif_util [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converting VIF {"id": "00bc7e19-5b5c-41aa-a9e9-4de7574a63eb", "address": "fa:16:3e:35:b3:37", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00bc7e19-5b", "ovs_interfaceid": "00bc7e19-5b5c-41aa-a9e9-4de7574a63eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:46:46 compute-0 nova_compute[254092]: 2025-11-25 16:46:46.996 254096 DEBUG nova.network.os_vif_util [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:35:b3:37,bridge_name='br-int',has_traffic_filtering=True,id=00bc7e19-5b5c-41aa-a9e9-4de7574a63eb,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00bc7e19-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:46:46 compute-0 nova_compute[254092]: 2025-11-25 16:46:46.996 254096 DEBUG os_vif [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:35:b3:37,bridge_name='br-int',has_traffic_filtering=True,id=00bc7e19-5b5c-41aa-a9e9-4de7574a63eb,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00bc7e19-5b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:46:46 compute-0 nova_compute[254092]: 2025-11-25 16:46:46.998 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:46 compute-0 nova_compute[254092]: 2025-11-25 16:46:46.998 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00bc7e19-5b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:46:47 compute-0 nova_compute[254092]: 2025-11-25 16:46:47.000 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:47 compute-0 nova_compute[254092]: 2025-11-25 16:46:47.002 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:47 compute-0 nova_compute[254092]: 2025-11-25 16:46:47.006 254096 INFO os_vif [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:35:b3:37,bridge_name='br-int',has_traffic_filtering=True,id=00bc7e19-5b5c-41aa-a9e9-4de7574a63eb,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00bc7e19-5b')
Nov 25 16:46:47 compute-0 nova_compute[254092]: 2025-11-25 16:46:47.451 254096 INFO nova.virt.libvirt.driver [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Deleting instance files /var/lib/nova/instances/076182c5-e049-4b78-b5a0-64489e37776b_del
Nov 25 16:46:47 compute-0 nova_compute[254092]: 2025-11-25 16:46:47.453 254096 INFO nova.virt.libvirt.driver [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Deletion of /var/lib/nova/instances/076182c5-e049-4b78-b5a0-64489e37776b_del complete
Nov 25 16:46:47 compute-0 nova_compute[254092]: 2025-11-25 16:46:47.531 254096 INFO nova.compute.manager [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Took 0.79 seconds to destroy the instance on the hypervisor.
Nov 25 16:46:47 compute-0 nova_compute[254092]: 2025-11-25 16:46:47.532 254096 DEBUG oslo.service.loopingcall [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:46:47 compute-0 nova_compute[254092]: 2025-11-25 16:46:47.533 254096 DEBUG nova.compute.manager [-] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:46:47 compute-0 nova_compute[254092]: 2025-11-25 16:46:47.533 254096 DEBUG nova.network.neutron [-] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:46:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1842: 321 pgs: 321 active+clean; 246 MiB data, 717 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.4 MiB/s wr, 277 op/s
Nov 25 16:46:48 compute-0 nova_compute[254092]: 2025-11-25 16:46:48.339 254096 DEBUG nova.network.neutron [-] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:46:48 compute-0 nova_compute[254092]: 2025-11-25 16:46:48.373 254096 INFO nova.compute.manager [-] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Took 1.49 seconds to deallocate network for instance.
Nov 25 16:46:48 compute-0 nova_compute[254092]: 2025-11-25 16:46:48.383 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:48 compute-0 nova_compute[254092]: 2025-11-25 16:46:48.423 254096 DEBUG oslo_concurrency.lockutils [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:48 compute-0 nova_compute[254092]: 2025-11-25 16:46:48.423 254096 DEBUG oslo_concurrency.lockutils [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:48 compute-0 nova_compute[254092]: 2025-11-25 16:46:48.429 254096 DEBUG nova.compute.manager [req-672e817b-fd05-41ab-ae23-317556fdfa52 req-9d04f5e7-c2d6-48da-903a-efe0f5863b34 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Received event network-vif-deleted-41fd5f5b-445b-4eed-adf5-045ddb262021 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:46:48 compute-0 nova_compute[254092]: 2025-11-25 16:46:48.471 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:48 compute-0 nova_compute[254092]: 2025-11-25 16:46:48.511 254096 DEBUG oslo_concurrency.processutils [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:46:48 compute-0 ceph-mon[74985]: pgmap v1842: 321 pgs: 321 active+clean; 246 MiB data, 717 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.4 MiB/s wr, 277 op/s
Nov 25 16:46:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:46:48 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2914153769' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:46:48 compute-0 nova_compute[254092]: 2025-11-25 16:46:48.950 254096 DEBUG oslo_concurrency.processutils [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:46:48 compute-0 nova_compute[254092]: 2025-11-25 16:46:48.956 254096 DEBUG nova.compute.provider_tree [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:46:48 compute-0 nova_compute[254092]: 2025-11-25 16:46:48.972 254096 DEBUG nova.scheduler.client.report [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:46:48 compute-0 nova_compute[254092]: 2025-11-25 16:46:48.993 254096 DEBUG oslo_concurrency.lockutils [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.570s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:49 compute-0 nova_compute[254092]: 2025-11-25 16:46:49.023 254096 INFO nova.scheduler.client.report [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Deleted allocations for instance 98410ff5-26ab-4406-8d1b-063d9e114cf8
Nov 25 16:46:49 compute-0 nova_compute[254092]: 2025-11-25 16:46:49.087 254096 DEBUG oslo_concurrency.lockutils [None req-40b94535-600f-4a31-aaa1-bae1a3669728 7bd4800c25cd462b9365649e599d0a0e d4964e211a6d4699ab499f7cadee8a8d - - default default] Lock "98410ff5-26ab-4406-8d1b-063d9e114cf8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.963s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:49 compute-0 nova_compute[254092]: 2025-11-25 16:46:49.627 254096 DEBUG nova.network.neutron [-] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:46:49 compute-0 nova_compute[254092]: 2025-11-25 16:46:49.647 254096 INFO nova.compute.manager [-] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Took 2.11 seconds to deallocate network for instance.
Nov 25 16:46:49 compute-0 nova_compute[254092]: 2025-11-25 16:46:49.703 254096 DEBUG oslo_concurrency.lockutils [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:49 compute-0 nova_compute[254092]: 2025-11-25 16:46:49.703 254096 DEBUG oslo_concurrency.lockutils [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1843: 321 pgs: 321 active+clean; 246 MiB data, 717 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 1.8 MiB/s wr, 259 op/s
Nov 25 16:46:49 compute-0 nova_compute[254092]: 2025-11-25 16:46:49.757 254096 DEBUG oslo_concurrency.processutils [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:46:49 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2914153769' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:46:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:46:50 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3426369930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:46:50 compute-0 nova_compute[254092]: 2025-11-25 16:46:50.214 254096 DEBUG oslo_concurrency.processutils [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:46:50 compute-0 nova_compute[254092]: 2025-11-25 16:46:50.221 254096 DEBUG nova.compute.provider_tree [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:46:50 compute-0 nova_compute[254092]: 2025-11-25 16:46:50.235 254096 DEBUG nova.scheduler.client.report [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:46:50 compute-0 nova_compute[254092]: 2025-11-25 16:46:50.255 254096 DEBUG oslo_concurrency.lockutils [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.552s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:50 compute-0 nova_compute[254092]: 2025-11-25 16:46:50.309 254096 INFO nova.scheduler.client.report [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Deleted allocations for instance 076182c5-e049-4b78-b5a0-64489e37776b
Nov 25 16:46:50 compute-0 nova_compute[254092]: 2025-11-25 16:46:50.372 254096 DEBUG oslo_concurrency.lockutils [None req-9943b505-9401-4c92-9370-bcfd789fa946 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "076182c5-e049-4b78-b5a0-64489e37776b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:50 compute-0 ovn_controller[153477]: 2025-11-25T16:46:50Z|00836|binding|INFO|Releasing lport 2b164d7c-ac7d-4965-a9d1-81827e4e9936 from this chassis (sb_readonly=0)
Nov 25 16:46:50 compute-0 nova_compute[254092]: 2025-11-25 16:46:50.460 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:50 compute-0 nova_compute[254092]: 2025-11-25 16:46:50.574 254096 DEBUG nova.compute.manager [req-efea3501-4b11-493b-9311-06cf736a6075 req-7b3e4679-8541-422c-aa23-9155cb7c15e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Received event network-vif-deleted-00bc7e19-5b5c-41aa-a9e9-4de7574a63eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:46:50 compute-0 ovn_controller[153477]: 2025-11-25T16:46:50Z|00837|binding|INFO|Releasing lport 2b164d7c-ac7d-4965-a9d1-81827e4e9936 from this chassis (sb_readonly=0)
Nov 25 16:46:50 compute-0 nova_compute[254092]: 2025-11-25 16:46:50.721 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:50 compute-0 nova_compute[254092]: 2025-11-25 16:46:50.785 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089195.7837284, 01f96314-1fbe-4eee-a4ed-db7f448a5320 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:46:50 compute-0 nova_compute[254092]: 2025-11-25 16:46:50.786 254096 INFO nova.compute.manager [-] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] VM Stopped (Lifecycle Event)
Nov 25 16:46:50 compute-0 nova_compute[254092]: 2025-11-25 16:46:50.808 254096 DEBUG nova.compute.manager [None req-eb5e3e5f-3082-4750-bef3-558a8e564c2d - - - - - -] [instance: 01f96314-1fbe-4eee-a4ed-db7f448a5320] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:46:50 compute-0 ceph-mon[74985]: pgmap v1843: 321 pgs: 321 active+clean; 246 MiB data, 717 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 1.8 MiB/s wr, 259 op/s
Nov 25 16:46:50 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3426369930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:46:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:46:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:46:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:46:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:46:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:46:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0018666416373640685 of space, bias 1.0, pg target 0.5599924912092206 quantized to 32 (current 32)
Nov 25 16:46:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:46:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:46:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:46:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:46:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:46:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 16:46:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:46:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 16:46:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:46:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:46:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:46:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 16:46:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:46:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 16:46:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:46:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:46:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:46:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 16:46:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1844: 321 pgs: 321 active+clean; 121 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 1.8 MiB/s wr, 313 op/s
Nov 25 16:46:52 compute-0 nova_compute[254092]: 2025-11-25 16:46:52.000 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:52 compute-0 ceph-mon[74985]: pgmap v1844: 321 pgs: 321 active+clean; 121 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 1.8 MiB/s wr, 313 op/s
Nov 25 16:46:53 compute-0 nova_compute[254092]: 2025-11-25 16:46:53.473 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1845: 321 pgs: 321 active+clean; 121 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 28 KiB/s wr, 154 op/s
Nov 25 16:46:54 compute-0 nova_compute[254092]: 2025-11-25 16:46:54.265 254096 DEBUG oslo_concurrency.lockutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:54 compute-0 nova_compute[254092]: 2025-11-25 16:46:54.265 254096 DEBUG oslo_concurrency.lockutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:54 compute-0 nova_compute[254092]: 2025-11-25 16:46:54.283 254096 DEBUG nova.compute.manager [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:46:54 compute-0 nova_compute[254092]: 2025-11-25 16:46:54.344 254096 DEBUG oslo_concurrency.lockutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:54 compute-0 nova_compute[254092]: 2025-11-25 16:46:54.345 254096 DEBUG oslo_concurrency.lockutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:54 compute-0 nova_compute[254092]: 2025-11-25 16:46:54.351 254096 DEBUG nova.virt.hardware [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:46:54 compute-0 nova_compute[254092]: 2025-11-25 16:46:54.351 254096 INFO nova.compute.claims [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:46:54 compute-0 nova_compute[254092]: 2025-11-25 16:46:54.462 254096 DEBUG oslo_concurrency.processutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:46:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:46:54 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4081260075' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:46:54 compute-0 nova_compute[254092]: 2025-11-25 16:46:54.938 254096 DEBUG oslo_concurrency.processutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:46:54 compute-0 nova_compute[254092]: 2025-11-25 16:46:54.946 254096 DEBUG nova.compute.provider_tree [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:46:54 compute-0 ceph-mon[74985]: pgmap v1845: 321 pgs: 321 active+clean; 121 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 28 KiB/s wr, 154 op/s
Nov 25 16:46:54 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4081260075' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:46:54 compute-0 nova_compute[254092]: 2025-11-25 16:46:54.962 254096 DEBUG nova.scheduler.client.report [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:46:54 compute-0 nova_compute[254092]: 2025-11-25 16:46:54.990 254096 DEBUG oslo_concurrency.lockutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:54 compute-0 nova_compute[254092]: 2025-11-25 16:46:54.990 254096 DEBUG nova.compute.manager [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:46:55 compute-0 nova_compute[254092]: 2025-11-25 16:46:55.035 254096 DEBUG nova.compute.manager [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:46:55 compute-0 nova_compute[254092]: 2025-11-25 16:46:55.035 254096 DEBUG nova.network.neutron [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:46:55 compute-0 nova_compute[254092]: 2025-11-25 16:46:55.066 254096 INFO nova.virt.libvirt.driver [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:46:55 compute-0 nova_compute[254092]: 2025-11-25 16:46:55.083 254096 DEBUG nova.compute.manager [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:46:55 compute-0 nova_compute[254092]: 2025-11-25 16:46:55.186 254096 DEBUG nova.compute.manager [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:46:55 compute-0 nova_compute[254092]: 2025-11-25 16:46:55.188 254096 DEBUG nova.virt.libvirt.driver [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:46:55 compute-0 nova_compute[254092]: 2025-11-25 16:46:55.188 254096 INFO nova.virt.libvirt.driver [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Creating image(s)
Nov 25 16:46:55 compute-0 nova_compute[254092]: 2025-11-25 16:46:55.207 254096 DEBUG nova.storage.rbd_utils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:46:55 compute-0 nova_compute[254092]: 2025-11-25 16:46:55.229 254096 DEBUG nova.storage.rbd_utils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:46:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 16:46:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2187319379' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:46:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 16:46:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2187319379' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:46:55 compute-0 nova_compute[254092]: 2025-11-25 16:46:55.261 254096 DEBUG nova.storage.rbd_utils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:46:55 compute-0 nova_compute[254092]: 2025-11-25 16:46:55.266 254096 DEBUG oslo_concurrency.processutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:46:55 compute-0 nova_compute[254092]: 2025-11-25 16:46:55.346 254096 DEBUG oslo_concurrency.processutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:46:55 compute-0 nova_compute[254092]: 2025-11-25 16:46:55.347 254096 DEBUG oslo_concurrency.lockutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:55 compute-0 nova_compute[254092]: 2025-11-25 16:46:55.348 254096 DEBUG oslo_concurrency.lockutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:55 compute-0 nova_compute[254092]: 2025-11-25 16:46:55.348 254096 DEBUG oslo_concurrency.lockutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:55 compute-0 nova_compute[254092]: 2025-11-25 16:46:55.375 254096 DEBUG nova.storage.rbd_utils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:46:55 compute-0 nova_compute[254092]: 2025-11-25 16:46:55.379 254096 DEBUG oslo_concurrency.processutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:46:55 compute-0 nova_compute[254092]: 2025-11-25 16:46:55.677 254096 DEBUG oslo_concurrency.processutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.298s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:46:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1846: 321 pgs: 321 active+clean; 121 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 28 KiB/s wr, 154 op/s
Nov 25 16:46:55 compute-0 nova_compute[254092]: 2025-11-25 16:46:55.744 254096 DEBUG nova.storage.rbd_utils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] resizing rbd image d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:46:55 compute-0 nova_compute[254092]: 2025-11-25 16:46:55.853 254096 DEBUG nova.objects.instance [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lazy-loading 'migration_context' on Instance uuid d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:46:55 compute-0 nova_compute[254092]: 2025-11-25 16:46:55.864 254096 DEBUG nova.virt.libvirt.driver [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:46:55 compute-0 nova_compute[254092]: 2025-11-25 16:46:55.865 254096 DEBUG nova.virt.libvirt.driver [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Ensure instance console log exists: /var/lib/nova/instances/d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:46:55 compute-0 nova_compute[254092]: 2025-11-25 16:46:55.865 254096 DEBUG oslo_concurrency.lockutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:55 compute-0 nova_compute[254092]: 2025-11-25 16:46:55.866 254096 DEBUG oslo_concurrency.lockutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:55 compute-0 nova_compute[254092]: 2025-11-25 16:46:55.866 254096 DEBUG oslo_concurrency.lockutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:55 compute-0 nova_compute[254092]: 2025-11-25 16:46:55.956 254096 DEBUG nova.policy [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7a137feb008c49e092cbb94106526835', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '56790f54314e4087abb5da8030b17666', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:46:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/2187319379' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:46:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/2187319379' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:46:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:46:56 compute-0 nova_compute[254092]: 2025-11-25 16:46:56.909 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089201.906971, 7bf5d985-1a7f-41b6-8002-b801999e99f5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:46:56 compute-0 nova_compute[254092]: 2025-11-25 16:46:56.909 254096 INFO nova.compute.manager [-] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] VM Stopped (Lifecycle Event)
Nov 25 16:46:56 compute-0 nova_compute[254092]: 2025-11-25 16:46:56.928 254096 DEBUG nova.compute.manager [None req-0b4b3b4f-f469-4c72-a6ee-dd0a9cd180a8 - - - - - -] [instance: 7bf5d985-1a7f-41b6-8002-b801999e99f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:46:56 compute-0 ceph-mon[74985]: pgmap v1846: 321 pgs: 321 active+clean; 121 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 28 KiB/s wr, 154 op/s
Nov 25 16:46:57 compute-0 nova_compute[254092]: 2025-11-25 16:46:57.003 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:57 compute-0 nova_compute[254092]: 2025-11-25 16:46:57.351 254096 DEBUG oslo_concurrency.lockutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Acquiring lock "b4cc1fd8-a1ed-40f8-8373-33b1d1260300" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:57 compute-0 nova_compute[254092]: 2025-11-25 16:46:57.351 254096 DEBUG oslo_concurrency.lockutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Lock "b4cc1fd8-a1ed-40f8-8373-33b1d1260300" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:57 compute-0 nova_compute[254092]: 2025-11-25 16:46:57.371 254096 DEBUG nova.compute.manager [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:46:57 compute-0 nova_compute[254092]: 2025-11-25 16:46:57.440 254096 DEBUG oslo_concurrency.lockutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:57 compute-0 nova_compute[254092]: 2025-11-25 16:46:57.441 254096 DEBUG oslo_concurrency.lockutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:57 compute-0 nova_compute[254092]: 2025-11-25 16:46:57.452 254096 DEBUG nova.virt.hardware [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:46:57 compute-0 nova_compute[254092]: 2025-11-25 16:46:57.452 254096 INFO nova.compute.claims [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:46:57 compute-0 nova_compute[254092]: 2025-11-25 16:46:57.598 254096 DEBUG nova.network.neutron [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Successfully created port: f0f27b65-aab4-4ab1-ade1-b58eb7124f88 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:46:57 compute-0 nova_compute[254092]: 2025-11-25 16:46:57.617 254096 DEBUG oslo_concurrency.processutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:46:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1847: 321 pgs: 321 active+clean; 131 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 378 KiB/s rd, 512 KiB/s wr, 98 op/s
Nov 25 16:46:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:46:58 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3614283455' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:46:58 compute-0 nova_compute[254092]: 2025-11-25 16:46:58.097 254096 DEBUG oslo_concurrency.processutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:46:58 compute-0 nova_compute[254092]: 2025-11-25 16:46:58.107 254096 DEBUG nova.compute.provider_tree [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:46:58 compute-0 nova_compute[254092]: 2025-11-25 16:46:58.122 254096 DEBUG nova.scheduler.client.report [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:46:58 compute-0 nova_compute[254092]: 2025-11-25 16:46:58.145 254096 DEBUG oslo_concurrency.lockutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:58 compute-0 nova_compute[254092]: 2025-11-25 16:46:58.147 254096 DEBUG nova.compute.manager [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:46:58 compute-0 nova_compute[254092]: 2025-11-25 16:46:58.189 254096 DEBUG nova.compute.manager [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Nov 25 16:46:58 compute-0 nova_compute[254092]: 2025-11-25 16:46:58.213 254096 INFO nova.virt.libvirt.driver [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:46:58 compute-0 nova_compute[254092]: 2025-11-25 16:46:58.235 254096 DEBUG nova.compute.manager [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:46:58 compute-0 nova_compute[254092]: 2025-11-25 16:46:58.337 254096 DEBUG nova.compute.manager [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:46:58 compute-0 nova_compute[254092]: 2025-11-25 16:46:58.339 254096 DEBUG nova.virt.libvirt.driver [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:46:58 compute-0 nova_compute[254092]: 2025-11-25 16:46:58.340 254096 INFO nova.virt.libvirt.driver [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Creating image(s)
Nov 25 16:46:58 compute-0 nova_compute[254092]: 2025-11-25 16:46:58.372 254096 DEBUG nova.storage.rbd_utils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] rbd image b4cc1fd8-a1ed-40f8-8373-33b1d1260300_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:46:58 compute-0 nova_compute[254092]: 2025-11-25 16:46:58.408 254096 DEBUG nova.storage.rbd_utils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] rbd image b4cc1fd8-a1ed-40f8-8373-33b1d1260300_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:46:58 compute-0 nova_compute[254092]: 2025-11-25 16:46:58.436 254096 DEBUG nova.storage.rbd_utils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] rbd image b4cc1fd8-a1ed-40f8-8373-33b1d1260300_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:46:58 compute-0 nova_compute[254092]: 2025-11-25 16:46:58.441 254096 DEBUG oslo_concurrency.processutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:46:58 compute-0 nova_compute[254092]: 2025-11-25 16:46:58.483 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:46:58 compute-0 nova_compute[254092]: 2025-11-25 16:46:58.526 254096 DEBUG oslo_concurrency.processutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:46:58 compute-0 nova_compute[254092]: 2025-11-25 16:46:58.527 254096 DEBUG oslo_concurrency.lockutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:58 compute-0 nova_compute[254092]: 2025-11-25 16:46:58.528 254096 DEBUG oslo_concurrency.lockutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:58 compute-0 nova_compute[254092]: 2025-11-25 16:46:58.528 254096 DEBUG oslo_concurrency.lockutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:58 compute-0 nova_compute[254092]: 2025-11-25 16:46:58.552 254096 DEBUG nova.storage.rbd_utils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] rbd image b4cc1fd8-a1ed-40f8-8373-33b1d1260300_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:46:58 compute-0 nova_compute[254092]: 2025-11-25 16:46:58.557 254096 DEBUG oslo_concurrency.processutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 b4cc1fd8-a1ed-40f8-8373-33b1d1260300_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:46:58 compute-0 nova_compute[254092]: 2025-11-25 16:46:58.631 254096 DEBUG nova.network.neutron [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Successfully updated port: f0f27b65-aab4-4ab1-ade1-b58eb7124f88 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:46:58 compute-0 nova_compute[254092]: 2025-11-25 16:46:58.648 254096 DEBUG oslo_concurrency.lockutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "refresh_cache-d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:46:58 compute-0 nova_compute[254092]: 2025-11-25 16:46:58.649 254096 DEBUG oslo_concurrency.lockutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquired lock "refresh_cache-d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:46:58 compute-0 nova_compute[254092]: 2025-11-25 16:46:58.649 254096 DEBUG nova.network.neutron [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:46:58 compute-0 nova_compute[254092]: 2025-11-25 16:46:58.819 254096 DEBUG nova.compute.manager [req-af08f3c0-d9bf-4763-a19d-f4e2d6079466 req-3d17462b-e33c-40be-a38c-88410dc16583 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Received event network-changed-f0f27b65-aab4-4ab1-ade1-b58eb7124f88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:46:58 compute-0 nova_compute[254092]: 2025-11-25 16:46:58.820 254096 DEBUG nova.compute.manager [req-af08f3c0-d9bf-4763-a19d-f4e2d6079466 req-3d17462b-e33c-40be-a38c-88410dc16583 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Refreshing instance network info cache due to event network-changed-f0f27b65-aab4-4ab1-ade1-b58eb7124f88. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:46:58 compute-0 nova_compute[254092]: 2025-11-25 16:46:58.820 254096 DEBUG oslo_concurrency.lockutils [req-af08f3c0-d9bf-4763-a19d-f4e2d6079466 req-3d17462b-e33c-40be-a38c-88410dc16583 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:46:58 compute-0 nova_compute[254092]: 2025-11-25 16:46:58.872 254096 DEBUG oslo_concurrency.processutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 b4cc1fd8-a1ed-40f8-8373-33b1d1260300_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.315s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:46:58 compute-0 nova_compute[254092]: 2025-11-25 16:46:58.922 254096 DEBUG nova.storage.rbd_utils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] resizing rbd image b4cc1fd8-a1ed-40f8-8373-33b1d1260300_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:46:58 compute-0 nova_compute[254092]: 2025-11-25 16:46:58.960 254096 DEBUG nova.network.neutron [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:46:58 compute-0 ceph-mon[74985]: pgmap v1847: 321 pgs: 321 active+clean; 131 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 378 KiB/s rd, 512 KiB/s wr, 98 op/s
Nov 25 16:46:58 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3614283455' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:46:59 compute-0 nova_compute[254092]: 2025-11-25 16:46:59.007 254096 DEBUG nova.objects.instance [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Lazy-loading 'migration_context' on Instance uuid b4cc1fd8-a1ed-40f8-8373-33b1d1260300 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:46:59 compute-0 nova_compute[254092]: 2025-11-25 16:46:59.022 254096 DEBUG nova.virt.libvirt.driver [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:46:59 compute-0 nova_compute[254092]: 2025-11-25 16:46:59.022 254096 DEBUG nova.virt.libvirt.driver [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Ensure instance console log exists: /var/lib/nova/instances/b4cc1fd8-a1ed-40f8-8373-33b1d1260300/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:46:59 compute-0 nova_compute[254092]: 2025-11-25 16:46:59.023 254096 DEBUG oslo_concurrency.lockutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:46:59 compute-0 nova_compute[254092]: 2025-11-25 16:46:59.023 254096 DEBUG oslo_concurrency.lockutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:46:59 compute-0 nova_compute[254092]: 2025-11-25 16:46:59.024 254096 DEBUG oslo_concurrency.lockutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:46:59 compute-0 nova_compute[254092]: 2025-11-25 16:46:59.025 254096 DEBUG nova.virt.libvirt.driver [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:46:59 compute-0 nova_compute[254092]: 2025-11-25 16:46:59.030 254096 WARNING nova.virt.libvirt.driver [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:46:59 compute-0 nova_compute[254092]: 2025-11-25 16:46:59.034 254096 DEBUG nova.virt.libvirt.host [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:46:59 compute-0 nova_compute[254092]: 2025-11-25 16:46:59.034 254096 DEBUG nova.virt.libvirt.host [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:46:59 compute-0 nova_compute[254092]: 2025-11-25 16:46:59.037 254096 DEBUG nova.virt.libvirt.host [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:46:59 compute-0 nova_compute[254092]: 2025-11-25 16:46:59.037 254096 DEBUG nova.virt.libvirt.host [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:46:59 compute-0 nova_compute[254092]: 2025-11-25 16:46:59.037 254096 DEBUG nova.virt.libvirt.driver [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:46:59 compute-0 nova_compute[254092]: 2025-11-25 16:46:59.038 254096 DEBUG nova.virt.hardware [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:46:59 compute-0 nova_compute[254092]: 2025-11-25 16:46:59.038 254096 DEBUG nova.virt.hardware [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:46:59 compute-0 nova_compute[254092]: 2025-11-25 16:46:59.038 254096 DEBUG nova.virt.hardware [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:46:59 compute-0 nova_compute[254092]: 2025-11-25 16:46:59.038 254096 DEBUG nova.virt.hardware [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:46:59 compute-0 nova_compute[254092]: 2025-11-25 16:46:59.038 254096 DEBUG nova.virt.hardware [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:46:59 compute-0 nova_compute[254092]: 2025-11-25 16:46:59.039 254096 DEBUG nova.virt.hardware [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:46:59 compute-0 nova_compute[254092]: 2025-11-25 16:46:59.039 254096 DEBUG nova.virt.hardware [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:46:59 compute-0 nova_compute[254092]: 2025-11-25 16:46:59.039 254096 DEBUG nova.virt.hardware [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:46:59 compute-0 nova_compute[254092]: 2025-11-25 16:46:59.039 254096 DEBUG nova.virt.hardware [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:46:59 compute-0 nova_compute[254092]: 2025-11-25 16:46:59.039 254096 DEBUG nova.virt.hardware [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:46:59 compute-0 nova_compute[254092]: 2025-11-25 16:46:59.039 254096 DEBUG nova.virt.hardware [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:46:59 compute-0 nova_compute[254092]: 2025-11-25 16:46:59.042 254096 DEBUG oslo_concurrency.processutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:46:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:46:59 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2400692947' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:46:59 compute-0 nova_compute[254092]: 2025-11-25 16:46:59.461 254096 DEBUG oslo_concurrency.processutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:46:59 compute-0 nova_compute[254092]: 2025-11-25 16:46:59.479 254096 DEBUG nova.storage.rbd_utils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] rbd image b4cc1fd8-a1ed-40f8-8373-33b1d1260300_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:46:59 compute-0 nova_compute[254092]: 2025-11-25 16:46:59.483 254096 DEBUG oslo_concurrency.processutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:46:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1848: 321 pgs: 321 active+clean; 131 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 511 KiB/s wr, 66 op/s
Nov 25 16:46:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:46:59 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3006274789' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:46:59 compute-0 nova_compute[254092]: 2025-11-25 16:46:59.915 254096 DEBUG oslo_concurrency.processutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:46:59 compute-0 nova_compute[254092]: 2025-11-25 16:46:59.916 254096 DEBUG nova.objects.instance [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Lazy-loading 'pci_devices' on Instance uuid b4cc1fd8-a1ed-40f8-8373-33b1d1260300 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:46:59 compute-0 nova_compute[254092]: 2025-11-25 16:46:59.956 254096 DEBUG nova.virt.libvirt.driver [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:46:59 compute-0 nova_compute[254092]:   <uuid>b4cc1fd8-a1ed-40f8-8373-33b1d1260300</uuid>
Nov 25 16:46:59 compute-0 nova_compute[254092]:   <name>instance-00000057</name>
Nov 25 16:46:59 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:46:59 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:46:59 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:46:59 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:       <nova:name>tempest-ServersAaction247Test-server-681512672</nova:name>
Nov 25 16:46:59 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:46:59</nova:creationTime>
Nov 25 16:46:59 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:46:59 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:46:59 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:46:59 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:46:59 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:46:59 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:46:59 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:46:59 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:46:59 compute-0 nova_compute[254092]:         <nova:user uuid="845a69c3091245f2a563f43567bf4a2f">tempest-ServersAaction247Test-680551157-project-member</nova:user>
Nov 25 16:46:59 compute-0 nova_compute[254092]:         <nova:project uuid="f23a436fcc3d46efba4e231d5103a5d5">tempest-ServersAaction247Test-680551157</nova:project>
Nov 25 16:46:59 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:46:59 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:       <nova:ports/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:46:59 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:46:59 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:46:59 compute-0 nova_compute[254092]:     <system>
Nov 25 16:46:59 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:46:59 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:46:59 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:46:59 compute-0 nova_compute[254092]:       <entry name="serial">b4cc1fd8-a1ed-40f8-8373-33b1d1260300</entry>
Nov 25 16:46:59 compute-0 nova_compute[254092]:       <entry name="uuid">b4cc1fd8-a1ed-40f8-8373-33b1d1260300</entry>
Nov 25 16:46:59 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     </system>
Nov 25 16:46:59 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:46:59 compute-0 nova_compute[254092]:   <os>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:   </os>
Nov 25 16:46:59 compute-0 nova_compute[254092]:   <features>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:   </features>
Nov 25 16:46:59 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:46:59 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:46:59 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:46:59 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:46:59 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:46:59 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/b4cc1fd8-a1ed-40f8-8373-33b1d1260300_disk">
Nov 25 16:46:59 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:       </source>
Nov 25 16:46:59 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:46:59 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:46:59 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:46:59 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/b4cc1fd8-a1ed-40f8-8373-33b1d1260300_disk.config">
Nov 25 16:46:59 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:       </source>
Nov 25 16:46:59 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:46:59 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:46:59 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:46:59 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/b4cc1fd8-a1ed-40f8-8373-33b1d1260300/console.log" append="off"/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     <video>
Nov 25 16:46:59 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     </video>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:46:59 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:46:59 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:46:59 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:46:59 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:46:59 compute-0 nova_compute[254092]: </domain>
Nov 25 16:46:59 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:46:59 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2400692947' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:46:59 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3006274789' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:47:00 compute-0 nova_compute[254092]: 2025-11-25 16:47:00.004 254096 DEBUG nova.virt.libvirt.driver [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:47:00 compute-0 nova_compute[254092]: 2025-11-25 16:47:00.005 254096 DEBUG nova.virt.libvirt.driver [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:47:00 compute-0 nova_compute[254092]: 2025-11-25 16:47:00.005 254096 INFO nova.virt.libvirt.driver [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Using config drive
Nov 25 16:47:00 compute-0 nova_compute[254092]: 2025-11-25 16:47:00.025 254096 DEBUG nova.storage.rbd_utils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] rbd image b4cc1fd8-a1ed-40f8-8373-33b1d1260300_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:47:00 compute-0 nova_compute[254092]: 2025-11-25 16:47:00.986 254096 INFO nova.virt.libvirt.driver [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Creating config drive at /var/lib/nova/instances/b4cc1fd8-a1ed-40f8-8373-33b1d1260300/disk.config
Nov 25 16:47:00 compute-0 nova_compute[254092]: 2025-11-25 16:47:00.990 254096 DEBUG oslo_concurrency.processutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b4cc1fd8-a1ed-40f8-8373-33b1d1260300/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpke9spmyg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:00 compute-0 ceph-mon[74985]: pgmap v1848: 321 pgs: 321 active+clean; 131 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 511 KiB/s wr, 66 op/s
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.049 254096 DEBUG nova.network.neutron [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Updating instance_info_cache with network_info: [{"id": "f0f27b65-aab4-4ab1-ade1-b58eb7124f88", "address": "fa:16:3e:1f:ac:77", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f27b65-aa", "ovs_interfaceid": "f0f27b65-aab4-4ab1-ade1-b58eb7124f88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.076 254096 DEBUG oslo_concurrency.lockutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Releasing lock "refresh_cache-d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.076 254096 DEBUG nova.compute.manager [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Instance network_info: |[{"id": "f0f27b65-aab4-4ab1-ade1-b58eb7124f88", "address": "fa:16:3e:1f:ac:77", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f27b65-aa", "ovs_interfaceid": "f0f27b65-aab4-4ab1-ade1-b58eb7124f88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.077 254096 DEBUG oslo_concurrency.lockutils [req-af08f3c0-d9bf-4763-a19d-f4e2d6079466 req-3d17462b-e33c-40be-a38c-88410dc16583 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.077 254096 DEBUG nova.network.neutron [req-af08f3c0-d9bf-4763-a19d-f4e2d6079466 req-3d17462b-e33c-40be-a38c-88410dc16583 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Refreshing network info cache for port f0f27b65-aab4-4ab1-ade1-b58eb7124f88 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.080 254096 DEBUG nova.virt.libvirt.driver [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Start _get_guest_xml network_info=[{"id": "f0f27b65-aab4-4ab1-ade1-b58eb7124f88", "address": "fa:16:3e:1f:ac:77", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f27b65-aa", "ovs_interfaceid": "f0f27b65-aab4-4ab1-ade1-b58eb7124f88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.084 254096 WARNING nova.virt.libvirt.driver [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.092 254096 DEBUG nova.virt.libvirt.host [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.093 254096 DEBUG nova.virt.libvirt.host [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.096 254096 DEBUG nova.virt.libvirt.host [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.096 254096 DEBUG nova.virt.libvirt.host [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.096 254096 DEBUG nova.virt.libvirt.driver [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.097 254096 DEBUG nova.virt.hardware [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.097 254096 DEBUG nova.virt.hardware [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.097 254096 DEBUG nova.virt.hardware [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.097 254096 DEBUG nova.virt.hardware [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.098 254096 DEBUG nova.virt.hardware [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.098 254096 DEBUG nova.virt.hardware [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.098 254096 DEBUG nova.virt.hardware [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.098 254096 DEBUG nova.virt.hardware [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.098 254096 DEBUG nova.virt.hardware [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.098 254096 DEBUG nova.virt.hardware [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.099 254096 DEBUG nova.virt.hardware [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.102 254096 DEBUG oslo_concurrency.processutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.141 254096 DEBUG oslo_concurrency.processutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b4cc1fd8-a1ed-40f8-8373-33b1d1260300/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpke9spmyg" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.165 254096 DEBUG nova.storage.rbd_utils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] rbd image b4cc1fd8-a1ed-40f8-8373-33b1d1260300_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.169 254096 DEBUG oslo_concurrency.processutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b4cc1fd8-a1ed-40f8-8373-33b1d1260300/disk.config b4cc1fd8-a1ed-40f8-8373-33b1d1260300_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.339 254096 DEBUG oslo_concurrency.processutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b4cc1fd8-a1ed-40f8-8373-33b1d1260300/disk.config b4cc1fd8-a1ed-40f8-8373-33b1d1260300_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.340 254096 INFO nova.virt.libvirt.driver [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Deleting local config drive /var/lib/nova/instances/b4cc1fd8-a1ed-40f8-8373-33b1d1260300/disk.config because it was imported into RBD.
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.375 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089206.3740659, 98410ff5-26ab-4406-8d1b-063d9e114cf8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.376 254096 INFO nova.compute.manager [-] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] VM Stopped (Lifecycle Event)
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.392 254096 DEBUG nova.compute.manager [None req-0e7548c5-1325-441d-b384-034dabbf5c63 - - - - - -] [instance: 98410ff5-26ab-4406-8d1b-063d9e114cf8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:47:01 compute-0 systemd-machined[216343]: New machine qemu-107-instance-00000057.
Nov 25 16:47:01 compute-0 systemd[1]: Started Virtual Machine qemu-107-instance-00000057.
Nov 25 16:47:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:47:01 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3544329129' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.531 254096 DEBUG oslo_concurrency.processutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.552 254096 DEBUG nova.storage.rbd_utils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.555 254096 DEBUG oslo_concurrency.processutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1849: 321 pgs: 321 active+clean; 213 MiB data, 692 MiB used, 59 GiB / 60 GiB avail; 72 KiB/s rd, 3.5 MiB/s wr, 108 op/s
Nov 25 16:47:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:47:01 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1623298628' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.962 254096 DEBUG oslo_concurrency.processutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.964 254096 DEBUG nova.virt.libvirt.vif [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:46:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-262344867',display_name='tempest-ServersTestJSON-server-262344867',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-262344867',id=86,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKQJGP+WBaWzhABP2DN5HUriBkruJv6UMUXVsLc1/QAALeyb1ZBY8xYIWQD6vo5KB897SaecCjpvmyzj4CBX1phIjZTvTDr6O/Dm/ASyK8mTfIM6RZCRLmfnDkwJw8Gnzw==',key_name='tempest-key-735549316',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56790f54314e4087abb5da8030b17666',ramdisk_id='',reservation_id='r-6ok0603s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1089675869',owner_user_name='tempest-ServersTestJSON-1089675869-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:46:55Z,user_data=None,user_id='7a137feb008c49e092cbb94106526835',uuid=d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f0f27b65-aab4-4ab1-ade1-b58eb7124f88", "address": "fa:16:3e:1f:ac:77", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f27b65-aa", "ovs_interfaceid": "f0f27b65-aab4-4ab1-ade1-b58eb7124f88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.964 254096 DEBUG nova.network.os_vif_util [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converting VIF {"id": "f0f27b65-aab4-4ab1-ade1-b58eb7124f88", "address": "fa:16:3e:1f:ac:77", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f27b65-aa", "ovs_interfaceid": "f0f27b65-aab4-4ab1-ade1-b58eb7124f88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.965 254096 DEBUG nova.network.os_vif_util [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:ac:77,bridge_name='br-int',has_traffic_filtering=True,id=f0f27b65-aab4-4ab1-ade1-b58eb7124f88,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0f27b65-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.967 254096 DEBUG nova.objects.instance [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lazy-loading 'pci_devices' on Instance uuid d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.978 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089206.9777875, 076182c5-e049-4b78-b5a0-64489e37776b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.978 254096 INFO nova.compute.manager [-] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] VM Stopped (Lifecycle Event)
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.990 254096 DEBUG nova.virt.libvirt.driver [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:47:01 compute-0 nova_compute[254092]:   <uuid>d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb</uuid>
Nov 25 16:47:01 compute-0 nova_compute[254092]:   <name>instance-00000056</name>
Nov 25 16:47:01 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:47:01 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:47:01 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:47:01 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:       <nova:name>tempest-ServersTestJSON-server-262344867</nova:name>
Nov 25 16:47:01 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:47:01</nova:creationTime>
Nov 25 16:47:01 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:47:01 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:47:01 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:47:01 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:47:01 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:47:01 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:47:01 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:47:01 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:47:01 compute-0 nova_compute[254092]:         <nova:user uuid="7a137feb008c49e092cbb94106526835">tempest-ServersTestJSON-1089675869-project-member</nova:user>
Nov 25 16:47:01 compute-0 nova_compute[254092]:         <nova:project uuid="56790f54314e4087abb5da8030b17666">tempest-ServersTestJSON-1089675869</nova:project>
Nov 25 16:47:01 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:47:01 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:47:01 compute-0 nova_compute[254092]:         <nova:port uuid="f0f27b65-aab4-4ab1-ade1-b58eb7124f88">
Nov 25 16:47:01 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:47:01 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:47:01 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:47:01 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:47:01 compute-0 nova_compute[254092]:     <system>
Nov 25 16:47:01 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:47:01 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:47:01 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:47:01 compute-0 nova_compute[254092]:       <entry name="serial">d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb</entry>
Nov 25 16:47:01 compute-0 nova_compute[254092]:       <entry name="uuid">d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb</entry>
Nov 25 16:47:01 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     </system>
Nov 25 16:47:01 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:47:01 compute-0 nova_compute[254092]:   <os>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:   </os>
Nov 25 16:47:01 compute-0 nova_compute[254092]:   <features>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:   </features>
Nov 25 16:47:01 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:47:01 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:47:01 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:47:01 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:47:01 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:47:01 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb_disk">
Nov 25 16:47:01 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:       </source>
Nov 25 16:47:01 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:47:01 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:47:01 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:47:01 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb_disk.config">
Nov 25 16:47:01 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:       </source>
Nov 25 16:47:01 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:47:01 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:47:01 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:47:01 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:1f:ac:77"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:       <target dev="tapf0f27b65-aa"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:47:01 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb/console.log" append="off"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     <video>
Nov 25 16:47:01 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     </video>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:47:01 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:47:01 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:47:01 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:47:01 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:47:01 compute-0 nova_compute[254092]: </domain>
Nov 25 16:47:01 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.996 254096 DEBUG nova.compute.manager [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Preparing to wait for external event network-vif-plugged-f0f27b65-aab4-4ab1-ade1-b58eb7124f88 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.996 254096 DEBUG oslo_concurrency.lockutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.997 254096 DEBUG oslo_concurrency.lockutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.997 254096 DEBUG oslo_concurrency.lockutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.998 254096 DEBUG nova.virt.libvirt.vif [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:46:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-262344867',display_name='tempest-ServersTestJSON-server-262344867',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-262344867',id=86,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKQJGP+WBaWzhABP2DN5HUriBkruJv6UMUXVsLc1/QAALeyb1ZBY8xYIWQD6vo5KB897SaecCjpvmyzj4CBX1phIjZTvTDr6O/Dm/ASyK8mTfIM6RZCRLmfnDkwJw8Gnzw==',key_name='tempest-key-735549316',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56790f54314e4087abb5da8030b17666',ramdisk_id='',reservation_id='r-6ok0603s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1089675869',owner_user_name='tempest-ServersTestJSON-1089675869-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:46:55Z,user_data=None,user_id='7a137feb008c49e092cbb94106526835',uuid=d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f0f27b65-aab4-4ab1-ade1-b58eb7124f88", "address": "fa:16:3e:1f:ac:77", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f27b65-aa", "ovs_interfaceid": "f0f27b65-aab4-4ab1-ade1-b58eb7124f88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.998 254096 DEBUG nova.network.os_vif_util [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converting VIF {"id": "f0f27b65-aab4-4ab1-ade1-b58eb7124f88", "address": "fa:16:3e:1f:ac:77", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f27b65-aa", "ovs_interfaceid": "f0f27b65-aab4-4ab1-ade1-b58eb7124f88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.999 254096 DEBUG nova.network.os_vif_util [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:ac:77,bridge_name='br-int',has_traffic_filtering=True,id=f0f27b65-aab4-4ab1-ade1-b58eb7124f88,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0f27b65-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:47:01 compute-0 nova_compute[254092]: 2025-11-25 16:47:01.999 254096 DEBUG os_vif [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:ac:77,bridge_name='br-int',has_traffic_filtering=True,id=f0f27b65-aab4-4ab1-ade1-b58eb7124f88,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0f27b65-aa') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:47:02 compute-0 nova_compute[254092]: 2025-11-25 16:47:02.000 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:02 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3544329129' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:47:02 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1623298628' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:47:02 compute-0 nova_compute[254092]: 2025-11-25 16:47:02.001 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:02 compute-0 nova_compute[254092]: 2025-11-25 16:47:02.001 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:47:02 compute-0 nova_compute[254092]: 2025-11-25 16:47:02.002 254096 DEBUG nova.compute.manager [None req-00bcd083-1173-4d98-91c8-5756a6265137 - - - - - -] [instance: 076182c5-e049-4b78-b5a0-64489e37776b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:47:02 compute-0 nova_compute[254092]: 2025-11-25 16:47:02.004 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:02 compute-0 nova_compute[254092]: 2025-11-25 16:47:02.005 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf0f27b65-aa, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:02 compute-0 nova_compute[254092]: 2025-11-25 16:47:02.005 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf0f27b65-aa, col_values=(('external_ids', {'iface-id': 'f0f27b65-aab4-4ab1-ade1-b58eb7124f88', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1f:ac:77', 'vm-uuid': 'd60cbcdd-ce8c-49cc-a4ef-f2a324be18bb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:02 compute-0 nova_compute[254092]: 2025-11-25 16:47:02.006 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:02 compute-0 NetworkManager[48891]: <info>  [1764089222.0078] manager: (tapf0f27b65-aa): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/357)
Nov 25 16:47:02 compute-0 nova_compute[254092]: 2025-11-25 16:47:02.007 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:47:02 compute-0 nova_compute[254092]: 2025-11-25 16:47:02.011 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:02 compute-0 nova_compute[254092]: 2025-11-25 16:47:02.012 254096 INFO os_vif [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:ac:77,bridge_name='br-int',has_traffic_filtering=True,id=f0f27b65-aab4-4ab1-ade1-b58eb7124f88,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0f27b65-aa')
Nov 25 16:47:02 compute-0 nova_compute[254092]: 2025-11-25 16:47:02.064 254096 DEBUG nova.virt.libvirt.driver [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:47:02 compute-0 nova_compute[254092]: 2025-11-25 16:47:02.064 254096 DEBUG nova.virt.libvirt.driver [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:47:02 compute-0 nova_compute[254092]: 2025-11-25 16:47:02.064 254096 DEBUG nova.virt.libvirt.driver [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] No VIF found with MAC fa:16:3e:1f:ac:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:47:02 compute-0 nova_compute[254092]: 2025-11-25 16:47:02.065 254096 INFO nova.virt.libvirt.driver [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Using config drive
Nov 25 16:47:02 compute-0 nova_compute[254092]: 2025-11-25 16:47:02.086 254096 DEBUG nova.storage.rbd_utils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:47:02 compute-0 nova_compute[254092]: 2025-11-25 16:47:02.272 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089222.2716823, b4cc1fd8-a1ed-40f8-8373-33b1d1260300 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:47:02 compute-0 nova_compute[254092]: 2025-11-25 16:47:02.273 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] VM Resumed (Lifecycle Event)
Nov 25 16:47:02 compute-0 nova_compute[254092]: 2025-11-25 16:47:02.275 254096 DEBUG nova.compute.manager [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:47:02 compute-0 nova_compute[254092]: 2025-11-25 16:47:02.276 254096 DEBUG nova.virt.libvirt.driver [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:47:02 compute-0 nova_compute[254092]: 2025-11-25 16:47:02.280 254096 INFO nova.virt.libvirt.driver [-] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Instance spawned successfully.
Nov 25 16:47:02 compute-0 nova_compute[254092]: 2025-11-25 16:47:02.280 254096 DEBUG nova.virt.libvirt.driver [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:47:02 compute-0 nova_compute[254092]: 2025-11-25 16:47:02.295 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:47:02 compute-0 nova_compute[254092]: 2025-11-25 16:47:02.300 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:47:02 compute-0 nova_compute[254092]: 2025-11-25 16:47:02.304 254096 DEBUG nova.virt.libvirt.driver [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:47:02 compute-0 nova_compute[254092]: 2025-11-25 16:47:02.305 254096 DEBUG nova.virt.libvirt.driver [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:47:02 compute-0 nova_compute[254092]: 2025-11-25 16:47:02.305 254096 DEBUG nova.virt.libvirt.driver [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:47:02 compute-0 nova_compute[254092]: 2025-11-25 16:47:02.305 254096 DEBUG nova.virt.libvirt.driver [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:47:02 compute-0 nova_compute[254092]: 2025-11-25 16:47:02.306 254096 DEBUG nova.virt.libvirt.driver [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:47:02 compute-0 nova_compute[254092]: 2025-11-25 16:47:02.306 254096 DEBUG nova.virt.libvirt.driver [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:47:02 compute-0 nova_compute[254092]: 2025-11-25 16:47:02.338 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:47:02 compute-0 nova_compute[254092]: 2025-11-25 16:47:02.339 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089222.274979, b4cc1fd8-a1ed-40f8-8373-33b1d1260300 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:47:02 compute-0 nova_compute[254092]: 2025-11-25 16:47:02.339 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] VM Started (Lifecycle Event)
Nov 25 16:47:02 compute-0 nova_compute[254092]: 2025-11-25 16:47:02.368 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:47:02 compute-0 nova_compute[254092]: 2025-11-25 16:47:02.374 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:47:02 compute-0 nova_compute[254092]: 2025-11-25 16:47:02.383 254096 INFO nova.compute.manager [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Took 4.05 seconds to spawn the instance on the hypervisor.
Nov 25 16:47:02 compute-0 nova_compute[254092]: 2025-11-25 16:47:02.383 254096 DEBUG nova.compute.manager [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:47:02 compute-0 nova_compute[254092]: 2025-11-25 16:47:02.391 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:47:02 compute-0 nova_compute[254092]: 2025-11-25 16:47:02.438 254096 INFO nova.compute.manager [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Took 5.02 seconds to build instance.
Nov 25 16:47:02 compute-0 nova_compute[254092]: 2025-11-25 16:47:02.453 254096 DEBUG oslo_concurrency.lockutils [None req-2e0367b9-c267-4c9b-a113-dd33c9381390 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Lock "b4cc1fd8-a1ed-40f8-8373-33b1d1260300" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:02 compute-0 nova_compute[254092]: 2025-11-25 16:47:02.777 254096 INFO nova.virt.libvirt.driver [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Creating config drive at /var/lib/nova/instances/d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb/disk.config
Nov 25 16:47:02 compute-0 nova_compute[254092]: 2025-11-25 16:47:02.783 254096 DEBUG oslo_concurrency.processutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2h2tgisx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:02 compute-0 nova_compute[254092]: 2025-11-25 16:47:02.924 254096 DEBUG oslo_concurrency.processutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2h2tgisx" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:02 compute-0 nova_compute[254092]: 2025-11-25 16:47:02.964 254096 DEBUG nova.storage.rbd_utils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:47:02 compute-0 nova_compute[254092]: 2025-11-25 16:47:02.968 254096 DEBUG oslo_concurrency.processutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb/disk.config d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:03 compute-0 ceph-mon[74985]: pgmap v1849: 321 pgs: 321 active+clean; 213 MiB data, 692 MiB used, 59 GiB / 60 GiB avail; 72 KiB/s rd, 3.5 MiB/s wr, 108 op/s
Nov 25 16:47:03 compute-0 nova_compute[254092]: 2025-11-25 16:47:03.118 254096 DEBUG oslo_concurrency.processutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb/disk.config d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:03 compute-0 nova_compute[254092]: 2025-11-25 16:47:03.119 254096 INFO nova.virt.libvirt.driver [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Deleting local config drive /var/lib/nova/instances/d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb/disk.config because it was imported into RBD.
Nov 25 16:47:03 compute-0 kernel: tapf0f27b65-aa: entered promiscuous mode
Nov 25 16:47:03 compute-0 systemd-udevd[344136]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:47:03 compute-0 NetworkManager[48891]: <info>  [1764089223.1946] manager: (tapf0f27b65-aa): new Tun device (/org/freedesktop/NetworkManager/Devices/358)
Nov 25 16:47:03 compute-0 ovn_controller[153477]: 2025-11-25T16:47:03Z|00838|binding|INFO|Claiming lport f0f27b65-aab4-4ab1-ade1-b58eb7124f88 for this chassis.
Nov 25 16:47:03 compute-0 ovn_controller[153477]: 2025-11-25T16:47:03Z|00839|binding|INFO|f0f27b65-aab4-4ab1-ade1-b58eb7124f88: Claiming fa:16:3e:1f:ac:77 10.100.0.4
Nov 25 16:47:03 compute-0 nova_compute[254092]: 2025-11-25 16:47:03.197 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:03 compute-0 nova_compute[254092]: 2025-11-25 16:47:03.202 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:03 compute-0 NetworkManager[48891]: <info>  [1764089223.2102] device (tapf0f27b65-aa): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:47:03 compute-0 NetworkManager[48891]: <info>  [1764089223.2134] device (tapf0f27b65-aa): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:47:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:03.215 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:ac:77 10.100.0.4'], port_security=['fa:16:3e:1f:ac:77 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'd60cbcdd-ce8c-49cc-a4ef-f2a324be18bb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1431a6bc-93c8-4db5-a148-b2950f02c941', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56790f54314e4087abb5da8030b17666', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6f3e9934-51eb-4bf8-9114-93abd745996a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f25b0746-2c82-4a0a-9e71-0116018fc740, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=f0f27b65-aab4-4ab1-ade1-b58eb7124f88) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:47:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:03.217 163338 INFO neutron.agent.ovn.metadata.agent [-] Port f0f27b65-aab4-4ab1-ade1-b58eb7124f88 in datapath 1431a6bc-93c8-4db5-a148-b2950f02c941 bound to our chassis
Nov 25 16:47:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:03.218 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1431a6bc-93c8-4db5-a148-b2950f02c941
Nov 25 16:47:03 compute-0 ovn_controller[153477]: 2025-11-25T16:47:03Z|00840|binding|INFO|Setting lport f0f27b65-aab4-4ab1-ade1-b58eb7124f88 ovn-installed in OVS
Nov 25 16:47:03 compute-0 ovn_controller[153477]: 2025-11-25T16:47:03Z|00841|binding|INFO|Setting lport f0f27b65-aab4-4ab1-ade1-b58eb7124f88 up in Southbound
Nov 25 16:47:03 compute-0 nova_compute[254092]: 2025-11-25 16:47:03.222 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:03 compute-0 nova_compute[254092]: 2025-11-25 16:47:03.224 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:03.238 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[dabb8021-efc2-4926-a1c0-420571667ae8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:03 compute-0 systemd-machined[216343]: New machine qemu-108-instance-00000056.
Nov 25 16:47:03 compute-0 systemd[1]: Started Virtual Machine qemu-108-instance-00000056.
Nov 25 16:47:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:03.280 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[46bd69b1-f2d7-4266-bfdd-58b6c6fd8f5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:03.284 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[28fb8eb9-d52a-4ae2-8587-14719fdcb2d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:03.322 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9db90a2f-0233-4dd4-ad27-b25cf57cae3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:03.343 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d31af113-ede8-4fdd-ab62-4513857c1fc2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1431a6bc-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:f8:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 238], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563972, 'reachable_time': 23815, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344204, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:03.360 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[15042de2-a0b3-4c37-98be-b9e011511841]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1431a6bc-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563984, 'tstamp': 563984}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344206, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1431a6bc-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563987, 'tstamp': 563987}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344206, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:03.361 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1431a6bc-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:03 compute-0 nova_compute[254092]: 2025-11-25 16:47:03.363 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:03.365 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1431a6bc-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:03.365 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:47:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:03.366 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1431a6bc-90, col_values=(('external_ids', {'iface-id': '2b164d7c-ac7d-4965-a9d1-81827e4e9936'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:03.366 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:47:03 compute-0 nova_compute[254092]: 2025-11-25 16:47:03.478 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:03 compute-0 nova_compute[254092]: 2025-11-25 16:47:03.654 254096 DEBUG nova.network.neutron [req-af08f3c0-d9bf-4763-a19d-f4e2d6079466 req-3d17462b-e33c-40be-a38c-88410dc16583 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Updated VIF entry in instance network info cache for port f0f27b65-aab4-4ab1-ade1-b58eb7124f88. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:47:03 compute-0 nova_compute[254092]: 2025-11-25 16:47:03.654 254096 DEBUG nova.network.neutron [req-af08f3c0-d9bf-4763-a19d-f4e2d6079466 req-3d17462b-e33c-40be-a38c-88410dc16583 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Updating instance_info_cache with network_info: [{"id": "f0f27b65-aab4-4ab1-ade1-b58eb7124f88", "address": "fa:16:3e:1f:ac:77", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f27b65-aa", "ovs_interfaceid": "f0f27b65-aab4-4ab1-ade1-b58eb7124f88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:47:03 compute-0 nova_compute[254092]: 2025-11-25 16:47:03.669 254096 DEBUG oslo_concurrency.lockutils [req-af08f3c0-d9bf-4763-a19d-f4e2d6079466 req-3d17462b-e33c-40be-a38c-88410dc16583 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:47:03 compute-0 nova_compute[254092]: 2025-11-25 16:47:03.676 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089223.6759393, d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:47:03 compute-0 nova_compute[254092]: 2025-11-25 16:47:03.676 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] VM Started (Lifecycle Event)
Nov 25 16:47:03 compute-0 nova_compute[254092]: 2025-11-25 16:47:03.694 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:47:03 compute-0 nova_compute[254092]: 2025-11-25 16:47:03.698 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089223.6760862, d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:47:03 compute-0 nova_compute[254092]: 2025-11-25 16:47:03.698 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] VM Paused (Lifecycle Event)
Nov 25 16:47:03 compute-0 nova_compute[254092]: 2025-11-25 16:47:03.713 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:47:03 compute-0 nova_compute[254092]: 2025-11-25 16:47:03.716 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:47:03 compute-0 nova_compute[254092]: 2025-11-25 16:47:03.731 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:47:03 compute-0 nova_compute[254092]: 2025-11-25 16:47:03.737 254096 DEBUG nova.compute.manager [req-190f3478-bf52-4a0d-8933-2d46ba1cbeb0 req-25521466-ede8-4a12-b19f-26735092dc6c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Received event network-vif-plugged-f0f27b65-aab4-4ab1-ade1-b58eb7124f88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:47:03 compute-0 nova_compute[254092]: 2025-11-25 16:47:03.738 254096 DEBUG oslo_concurrency.lockutils [req-190f3478-bf52-4a0d-8933-2d46ba1cbeb0 req-25521466-ede8-4a12-b19f-26735092dc6c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:03 compute-0 nova_compute[254092]: 2025-11-25 16:47:03.738 254096 DEBUG oslo_concurrency.lockutils [req-190f3478-bf52-4a0d-8933-2d46ba1cbeb0 req-25521466-ede8-4a12-b19f-26735092dc6c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:03 compute-0 nova_compute[254092]: 2025-11-25 16:47:03.738 254096 DEBUG oslo_concurrency.lockutils [req-190f3478-bf52-4a0d-8933-2d46ba1cbeb0 req-25521466-ede8-4a12-b19f-26735092dc6c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:03 compute-0 nova_compute[254092]: 2025-11-25 16:47:03.738 254096 DEBUG nova.compute.manager [req-190f3478-bf52-4a0d-8933-2d46ba1cbeb0 req-25521466-ede8-4a12-b19f-26735092dc6c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Processing event network-vif-plugged-f0f27b65-aab4-4ab1-ade1-b58eb7124f88 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:47:03 compute-0 nova_compute[254092]: 2025-11-25 16:47:03.739 254096 DEBUG nova.compute.manager [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:47:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1850: 321 pgs: 321 active+clean; 213 MiB data, 692 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Nov 25 16:47:03 compute-0 nova_compute[254092]: 2025-11-25 16:47:03.743 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089223.7432911, d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:47:03 compute-0 nova_compute[254092]: 2025-11-25 16:47:03.744 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] VM Resumed (Lifecycle Event)
Nov 25 16:47:03 compute-0 nova_compute[254092]: 2025-11-25 16:47:03.747 254096 DEBUG nova.virt.libvirt.driver [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:47:03 compute-0 nova_compute[254092]: 2025-11-25 16:47:03.751 254096 INFO nova.virt.libvirt.driver [-] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Instance spawned successfully.
Nov 25 16:47:03 compute-0 nova_compute[254092]: 2025-11-25 16:47:03.752 254096 DEBUG nova.virt.libvirt.driver [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:47:03 compute-0 nova_compute[254092]: 2025-11-25 16:47:03.762 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:47:03 compute-0 nova_compute[254092]: 2025-11-25 16:47:03.767 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:47:03 compute-0 nova_compute[254092]: 2025-11-25 16:47:03.776 254096 DEBUG nova.virt.libvirt.driver [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:47:03 compute-0 nova_compute[254092]: 2025-11-25 16:47:03.776 254096 DEBUG nova.virt.libvirt.driver [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:47:03 compute-0 nova_compute[254092]: 2025-11-25 16:47:03.776 254096 DEBUG nova.virt.libvirt.driver [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:47:03 compute-0 nova_compute[254092]: 2025-11-25 16:47:03.777 254096 DEBUG nova.virt.libvirt.driver [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:47:03 compute-0 nova_compute[254092]: 2025-11-25 16:47:03.777 254096 DEBUG nova.virt.libvirt.driver [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:47:03 compute-0 nova_compute[254092]: 2025-11-25 16:47:03.777 254096 DEBUG nova.virt.libvirt.driver [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:47:03 compute-0 nova_compute[254092]: 2025-11-25 16:47:03.782 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:47:03 compute-0 nova_compute[254092]: 2025-11-25 16:47:03.828 254096 INFO nova.compute.manager [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Took 8.64 seconds to spawn the instance on the hypervisor.
Nov 25 16:47:03 compute-0 nova_compute[254092]: 2025-11-25 16:47:03.829 254096 DEBUG nova.compute.manager [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:47:03 compute-0 nova_compute[254092]: 2025-11-25 16:47:03.885 254096 INFO nova.compute.manager [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Took 9.55 seconds to build instance.
Nov 25 16:47:03 compute-0 nova_compute[254092]: 2025-11-25 16:47:03.903 254096 DEBUG oslo_concurrency.lockutils [None req-ced97602-426a-479c-8ed9-a36bc5eee7c1 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:04 compute-0 nova_compute[254092]: 2025-11-25 16:47:04.216 254096 DEBUG nova.compute.manager [None req-ca0cb80f-bb25-4bca-8ddd-0bb8ce7f2af2 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:47:04 compute-0 nova_compute[254092]: 2025-11-25 16:47:04.254 254096 INFO nova.compute.manager [None req-ca0cb80f-bb25-4bca-8ddd-0bb8ce7f2af2 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] instance snapshotting
Nov 25 16:47:04 compute-0 nova_compute[254092]: 2025-11-25 16:47:04.256 254096 DEBUG nova.objects.instance [None req-ca0cb80f-bb25-4bca-8ddd-0bb8ce7f2af2 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Lazy-loading 'flavor' on Instance uuid b4cc1fd8-a1ed-40f8-8373-33b1d1260300 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:47:04 compute-0 nova_compute[254092]: 2025-11-25 16:47:04.424 254096 DEBUG oslo_concurrency.lockutils [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Acquiring lock "b4cc1fd8-a1ed-40f8-8373-33b1d1260300" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:04 compute-0 nova_compute[254092]: 2025-11-25 16:47:04.424 254096 DEBUG oslo_concurrency.lockutils [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Lock "b4cc1fd8-a1ed-40f8-8373-33b1d1260300" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:04 compute-0 nova_compute[254092]: 2025-11-25 16:47:04.424 254096 DEBUG oslo_concurrency.lockutils [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Acquiring lock "b4cc1fd8-a1ed-40f8-8373-33b1d1260300-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:04 compute-0 nova_compute[254092]: 2025-11-25 16:47:04.424 254096 DEBUG oslo_concurrency.lockutils [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Lock "b4cc1fd8-a1ed-40f8-8373-33b1d1260300-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:04 compute-0 nova_compute[254092]: 2025-11-25 16:47:04.425 254096 DEBUG oslo_concurrency.lockutils [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Lock "b4cc1fd8-a1ed-40f8-8373-33b1d1260300-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:04 compute-0 nova_compute[254092]: 2025-11-25 16:47:04.426 254096 INFO nova.compute.manager [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Terminating instance
Nov 25 16:47:04 compute-0 nova_compute[254092]: 2025-11-25 16:47:04.427 254096 DEBUG oslo_concurrency.lockutils [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Acquiring lock "refresh_cache-b4cc1fd8-a1ed-40f8-8373-33b1d1260300" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:47:04 compute-0 nova_compute[254092]: 2025-11-25 16:47:04.427 254096 DEBUG oslo_concurrency.lockutils [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Acquired lock "refresh_cache-b4cc1fd8-a1ed-40f8-8373-33b1d1260300" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:47:04 compute-0 nova_compute[254092]: 2025-11-25 16:47:04.427 254096 DEBUG nova.network.neutron [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:47:04 compute-0 nova_compute[254092]: 2025-11-25 16:47:04.574 254096 INFO nova.virt.libvirt.driver [None req-ca0cb80f-bb25-4bca-8ddd-0bb8ce7f2af2 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Beginning live snapshot process
Nov 25 16:47:04 compute-0 nova_compute[254092]: 2025-11-25 16:47:04.629 254096 DEBUG nova.compute.manager [None req-ca0cb80f-bb25-4bca-8ddd-0bb8ce7f2af2 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Instance disappeared during snapshot _snapshot_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:4390
Nov 25 16:47:04 compute-0 nova_compute[254092]: 2025-11-25 16:47:04.637 254096 DEBUG nova.network.neutron [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:47:04 compute-0 nova_compute[254092]: 2025-11-25 16:47:04.970 254096 DEBUG nova.network.neutron [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:47:04 compute-0 nova_compute[254092]: 2025-11-25 16:47:04.984 254096 DEBUG oslo_concurrency.lockutils [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Releasing lock "refresh_cache-b4cc1fd8-a1ed-40f8-8373-33b1d1260300" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:47:04 compute-0 nova_compute[254092]: 2025-11-25 16:47:04.985 254096 DEBUG nova.compute.manager [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:47:05 compute-0 ceph-mon[74985]: pgmap v1850: 321 pgs: 321 active+clean; 213 MiB data, 692 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Nov 25 16:47:05 compute-0 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d00000057.scope: Deactivated successfully.
Nov 25 16:47:05 compute-0 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d00000057.scope: Consumed 3.530s CPU time.
Nov 25 16:47:05 compute-0 systemd-machined[216343]: Machine qemu-107-instance-00000057 terminated.
Nov 25 16:47:05 compute-0 nova_compute[254092]: 2025-11-25 16:47:05.156 254096 DEBUG nova.compute.manager [None req-ca0cb80f-bb25-4bca-8ddd-0bb8ce7f2af2 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Found 0 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Nov 25 16:47:05 compute-0 nova_compute[254092]: 2025-11-25 16:47:05.207 254096 INFO nova.virt.libvirt.driver [-] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Instance destroyed successfully.
Nov 25 16:47:05 compute-0 nova_compute[254092]: 2025-11-25 16:47:05.207 254096 DEBUG nova.objects.instance [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Lazy-loading 'resources' on Instance uuid b4cc1fd8-a1ed-40f8-8373-33b1d1260300 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:47:05 compute-0 nova_compute[254092]: 2025-11-25 16:47:05.536 254096 INFO nova.virt.libvirt.driver [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Deleting instance files /var/lib/nova/instances/b4cc1fd8-a1ed-40f8-8373-33b1d1260300_del
Nov 25 16:47:05 compute-0 nova_compute[254092]: 2025-11-25 16:47:05.537 254096 INFO nova.virt.libvirt.driver [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Deletion of /var/lib/nova/instances/b4cc1fd8-a1ed-40f8-8373-33b1d1260300_del complete
Nov 25 16:47:05 compute-0 nova_compute[254092]: 2025-11-25 16:47:05.584 254096 INFO nova.compute.manager [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Took 0.60 seconds to destroy the instance on the hypervisor.
Nov 25 16:47:05 compute-0 nova_compute[254092]: 2025-11-25 16:47:05.585 254096 DEBUG oslo.service.loopingcall [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:47:05 compute-0 nova_compute[254092]: 2025-11-25 16:47:05.585 254096 DEBUG nova.compute.manager [-] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:47:05 compute-0 nova_compute[254092]: 2025-11-25 16:47:05.585 254096 DEBUG nova.network.neutron [-] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:47:05 compute-0 nova_compute[254092]: 2025-11-25 16:47:05.672 254096 DEBUG oslo_concurrency.lockutils [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:05 compute-0 nova_compute[254092]: 2025-11-25 16:47:05.673 254096 DEBUG oslo_concurrency.lockutils [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:05 compute-0 nova_compute[254092]: 2025-11-25 16:47:05.674 254096 DEBUG oslo_concurrency.lockutils [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:05 compute-0 nova_compute[254092]: 2025-11-25 16:47:05.674 254096 DEBUG oslo_concurrency.lockutils [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:05 compute-0 nova_compute[254092]: 2025-11-25 16:47:05.676 254096 DEBUG oslo_concurrency.lockutils [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:05 compute-0 nova_compute[254092]: 2025-11-25 16:47:05.678 254096 INFO nova.compute.manager [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Terminating instance
Nov 25 16:47:05 compute-0 nova_compute[254092]: 2025-11-25 16:47:05.679 254096 DEBUG nova.compute.manager [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:47:05 compute-0 kernel: tapf0f27b65-aa (unregistering): left promiscuous mode
Nov 25 16:47:05 compute-0 NetworkManager[48891]: <info>  [1764089225.7222] device (tapf0f27b65-aa): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:47:05 compute-0 nova_compute[254092]: 2025-11-25 16:47:05.726 254096 DEBUG nova.network.neutron [-] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:47:05 compute-0 ovn_controller[153477]: 2025-11-25T16:47:05Z|00842|binding|INFO|Releasing lport f0f27b65-aab4-4ab1-ade1-b58eb7124f88 from this chassis (sb_readonly=0)
Nov 25 16:47:05 compute-0 ovn_controller[153477]: 2025-11-25T16:47:05Z|00843|binding|INFO|Setting lport f0f27b65-aab4-4ab1-ade1-b58eb7124f88 down in Southbound
Nov 25 16:47:05 compute-0 nova_compute[254092]: 2025-11-25 16:47:05.732 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:05 compute-0 ovn_controller[153477]: 2025-11-25T16:47:05Z|00844|binding|INFO|Removing iface tapf0f27b65-aa ovn-installed in OVS
Nov 25 16:47:05 compute-0 nova_compute[254092]: 2025-11-25 16:47:05.737 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1851: 321 pgs: 321 active+clean; 213 MiB data, 692 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.6 MiB/s wr, 139 op/s
Nov 25 16:47:05 compute-0 nova_compute[254092]: 2025-11-25 16:47:05.740 254096 DEBUG nova.network.neutron [-] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:47:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:05.742 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:ac:77 10.100.0.4'], port_security=['fa:16:3e:1f:ac:77 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'd60cbcdd-ce8c-49cc-a4ef-f2a324be18bb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1431a6bc-93c8-4db5-a148-b2950f02c941', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56790f54314e4087abb5da8030b17666', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6f3e9934-51eb-4bf8-9114-93abd745996a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f25b0746-2c82-4a0a-9e71-0116018fc740, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=f0f27b65-aab4-4ab1-ade1-b58eb7124f88) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:47:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:05.743 163338 INFO neutron.agent.ovn.metadata.agent [-] Port f0f27b65-aab4-4ab1-ade1-b58eb7124f88 in datapath 1431a6bc-93c8-4db5-a148-b2950f02c941 unbound from our chassis
Nov 25 16:47:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:05.744 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1431a6bc-93c8-4db5-a148-b2950f02c941
Nov 25 16:47:05 compute-0 nova_compute[254092]: 2025-11-25 16:47:05.754 254096 INFO nova.compute.manager [-] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Took 0.17 seconds to deallocate network for instance.
Nov 25 16:47:05 compute-0 nova_compute[254092]: 2025-11-25 16:47:05.760 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:05.764 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[749baa4b-a3f2-4708-a8e0-cde450cb5093]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:05 compute-0 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d00000056.scope: Deactivated successfully.
Nov 25 16:47:05 compute-0 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d00000056.scope: Consumed 2.428s CPU time.
Nov 25 16:47:05 compute-0 systemd-machined[216343]: Machine qemu-108-instance-00000056 terminated.
Nov 25 16:47:05 compute-0 nova_compute[254092]: 2025-11-25 16:47:05.803 254096 DEBUG oslo_concurrency.lockutils [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:05.803 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9899f574-975f-4772-95ed-b5af2c8cb9d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:05 compute-0 nova_compute[254092]: 2025-11-25 16:47:05.804 254096 DEBUG oslo_concurrency.lockutils [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:05.806 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[0ec27c61-ba3f-431b-b889-41a25bdc3a95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:05 compute-0 nova_compute[254092]: 2025-11-25 16:47:05.833 254096 DEBUG nova.compute.manager [req-2f8cecf1-8b0b-477a-9279-880071f1b171 req-ec1a6cdb-ffc6-446b-b087-2b1d3b3afed6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Received event network-vif-plugged-f0f27b65-aab4-4ab1-ade1-b58eb7124f88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:47:05 compute-0 nova_compute[254092]: 2025-11-25 16:47:05.834 254096 DEBUG oslo_concurrency.lockutils [req-2f8cecf1-8b0b-477a-9279-880071f1b171 req-ec1a6cdb-ffc6-446b-b087-2b1d3b3afed6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:05 compute-0 nova_compute[254092]: 2025-11-25 16:47:05.834 254096 DEBUG oslo_concurrency.lockutils [req-2f8cecf1-8b0b-477a-9279-880071f1b171 req-ec1a6cdb-ffc6-446b-b087-2b1d3b3afed6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:05 compute-0 nova_compute[254092]: 2025-11-25 16:47:05.834 254096 DEBUG oslo_concurrency.lockutils [req-2f8cecf1-8b0b-477a-9279-880071f1b171 req-ec1a6cdb-ffc6-446b-b087-2b1d3b3afed6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:05 compute-0 nova_compute[254092]: 2025-11-25 16:47:05.835 254096 DEBUG nova.compute.manager [req-2f8cecf1-8b0b-477a-9279-880071f1b171 req-ec1a6cdb-ffc6-446b-b087-2b1d3b3afed6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] No waiting events found dispatching network-vif-plugged-f0f27b65-aab4-4ab1-ade1-b58eb7124f88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:47:05 compute-0 nova_compute[254092]: 2025-11-25 16:47:05.835 254096 WARNING nova.compute.manager [req-2f8cecf1-8b0b-477a-9279-880071f1b171 req-ec1a6cdb-ffc6-446b-b087-2b1d3b3afed6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Received unexpected event network-vif-plugged-f0f27b65-aab4-4ab1-ade1-b58eb7124f88 for instance with vm_state active and task_state deleting.
Nov 25 16:47:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:05.838 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[3e387705-90c3-4cb7-9d26-5490a4bc1707]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:05.860 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4244b066-760e-4535-8191-c7813515f6d3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1431a6bc-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:f8:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 238], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563972, 'reachable_time': 23815, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344282, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:05.883 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bd129e8b-d74a-49dd-95eb-75bb8f3fad7c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1431a6bc-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563984, 'tstamp': 563984}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344283, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1431a6bc-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563987, 'tstamp': 563987}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344283, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:05.886 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1431a6bc-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:05 compute-0 nova_compute[254092]: 2025-11-25 16:47:05.891 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:05 compute-0 nova_compute[254092]: 2025-11-25 16:47:05.896 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:05.897 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1431a6bc-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:05.898 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:47:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:05.898 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1431a6bc-90, col_values=(('external_ids', {'iface-id': '2b164d7c-ac7d-4965-a9d1-81827e4e9936'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:05 compute-0 nova_compute[254092]: 2025-11-25 16:47:05.898 254096 DEBUG oslo_concurrency.processutils [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:05.899 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:47:05 compute-0 nova_compute[254092]: 2025-11-25 16:47:05.937 254096 INFO nova.virt.libvirt.driver [-] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Instance destroyed successfully.
Nov 25 16:47:05 compute-0 nova_compute[254092]: 2025-11-25 16:47:05.938 254096 DEBUG nova.objects.instance [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lazy-loading 'resources' on Instance uuid d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:47:05 compute-0 nova_compute[254092]: 2025-11-25 16:47:05.949 254096 DEBUG nova.virt.libvirt.vif [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:46:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-262344867',display_name='tempest-ServersTestJSON-server-262344867',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-262344867',id=86,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKQJGP+WBaWzhABP2DN5HUriBkruJv6UMUXVsLc1/QAALeyb1ZBY8xYIWQD6vo5KB897SaecCjpvmyzj4CBX1phIjZTvTDr6O/Dm/ASyK8mTfIM6RZCRLmfnDkwJw8Gnzw==',key_name='tempest-key-735549316',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:47:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='56790f54314e4087abb5da8030b17666',ramdisk_id='',reservation_id='r-6ok0603s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1089675869',owner_user_name='tempest-ServersTestJSON-1089675869-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:47:03Z,user_data=None,user_id='7a137feb008c49e092cbb94106526835',uuid=d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f0f27b65-aab4-4ab1-ade1-b58eb7124f88", "address": "fa:16:3e:1f:ac:77", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f27b65-aa", "ovs_interfaceid": "f0f27b65-aab4-4ab1-ade1-b58eb7124f88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:47:05 compute-0 nova_compute[254092]: 2025-11-25 16:47:05.950 254096 DEBUG nova.network.os_vif_util [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converting VIF {"id": "f0f27b65-aab4-4ab1-ade1-b58eb7124f88", "address": "fa:16:3e:1f:ac:77", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0f27b65-aa", "ovs_interfaceid": "f0f27b65-aab4-4ab1-ade1-b58eb7124f88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:47:05 compute-0 nova_compute[254092]: 2025-11-25 16:47:05.951 254096 DEBUG nova.network.os_vif_util [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:ac:77,bridge_name='br-int',has_traffic_filtering=True,id=f0f27b65-aab4-4ab1-ade1-b58eb7124f88,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0f27b65-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:47:05 compute-0 nova_compute[254092]: 2025-11-25 16:47:05.951 254096 DEBUG os_vif [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:ac:77,bridge_name='br-int',has_traffic_filtering=True,id=f0f27b65-aab4-4ab1-ade1-b58eb7124f88,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0f27b65-aa') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:47:05 compute-0 nova_compute[254092]: 2025-11-25 16:47:05.953 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:05 compute-0 nova_compute[254092]: 2025-11-25 16:47:05.954 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf0f27b65-aa, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:05 compute-0 nova_compute[254092]: 2025-11-25 16:47:05.955 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:05 compute-0 nova_compute[254092]: 2025-11-25 16:47:05.957 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:05 compute-0 nova_compute[254092]: 2025-11-25 16:47:05.958 254096 INFO os_vif [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:ac:77,bridge_name='br-int',has_traffic_filtering=True,id=f0f27b65-aab4-4ab1-ade1-b58eb7124f88,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0f27b65-aa')
Nov 25 16:47:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:47:06 compute-0 nova_compute[254092]: 2025-11-25 16:47:06.315 254096 INFO nova.virt.libvirt.driver [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Deleting instance files /var/lib/nova/instances/d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb_del
Nov 25 16:47:06 compute-0 nova_compute[254092]: 2025-11-25 16:47:06.315 254096 INFO nova.virt.libvirt.driver [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Deletion of /var/lib/nova/instances/d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb_del complete
Nov 25 16:47:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:47:06 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3455516645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:47:06 compute-0 nova_compute[254092]: 2025-11-25 16:47:06.334 254096 DEBUG oslo_concurrency.processutils [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:06 compute-0 nova_compute[254092]: 2025-11-25 16:47:06.339 254096 DEBUG nova.compute.provider_tree [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:47:06 compute-0 nova_compute[254092]: 2025-11-25 16:47:06.354 254096 DEBUG nova.scheduler.client.report [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:47:06 compute-0 nova_compute[254092]: 2025-11-25 16:47:06.361 254096 INFO nova.compute.manager [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Took 0.68 seconds to destroy the instance on the hypervisor.
Nov 25 16:47:06 compute-0 nova_compute[254092]: 2025-11-25 16:47:06.361 254096 DEBUG oslo.service.loopingcall [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:47:06 compute-0 nova_compute[254092]: 2025-11-25 16:47:06.361 254096 DEBUG nova.compute.manager [-] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:47:06 compute-0 nova_compute[254092]: 2025-11-25 16:47:06.362 254096 DEBUG nova.network.neutron [-] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:47:06 compute-0 nova_compute[254092]: 2025-11-25 16:47:06.371 254096 DEBUG oslo_concurrency.lockutils [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:06 compute-0 nova_compute[254092]: 2025-11-25 16:47:06.422 254096 INFO nova.scheduler.client.report [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Deleted allocations for instance b4cc1fd8-a1ed-40f8-8373-33b1d1260300
Nov 25 16:47:06 compute-0 nova_compute[254092]: 2025-11-25 16:47:06.473 254096 DEBUG oslo_concurrency.lockutils [None req-90b4dfe9-1ba0-4e90-a36b-4c05899ceeb6 845a69c3091245f2a563f43567bf4a2f f23a436fcc3d46efba4e231d5103a5d5 - - default default] Lock "b4cc1fd8-a1ed-40f8-8373-33b1d1260300" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:07 compute-0 ceph-mon[74985]: pgmap v1851: 321 pgs: 321 active+clean; 213 MiB data, 692 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.6 MiB/s wr, 139 op/s
Nov 25 16:47:07 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3455516645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:47:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1852: 321 pgs: 321 active+clean; 194 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 221 op/s
Nov 25 16:47:07 compute-0 nova_compute[254092]: 2025-11-25 16:47:07.909 254096 DEBUG nova.compute.manager [req-18fa4220-d2ab-48b2-83a9-0199da6dd067 req-7df48f94-ccd9-4644-9136-d7ab2f0d0518 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Received event network-vif-unplugged-f0f27b65-aab4-4ab1-ade1-b58eb7124f88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:47:07 compute-0 nova_compute[254092]: 2025-11-25 16:47:07.910 254096 DEBUG oslo_concurrency.lockutils [req-18fa4220-d2ab-48b2-83a9-0199da6dd067 req-7df48f94-ccd9-4644-9136-d7ab2f0d0518 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:07 compute-0 nova_compute[254092]: 2025-11-25 16:47:07.910 254096 DEBUG oslo_concurrency.lockutils [req-18fa4220-d2ab-48b2-83a9-0199da6dd067 req-7df48f94-ccd9-4644-9136-d7ab2f0d0518 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:07 compute-0 nova_compute[254092]: 2025-11-25 16:47:07.910 254096 DEBUG oslo_concurrency.lockutils [req-18fa4220-d2ab-48b2-83a9-0199da6dd067 req-7df48f94-ccd9-4644-9136-d7ab2f0d0518 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:07 compute-0 nova_compute[254092]: 2025-11-25 16:47:07.911 254096 DEBUG nova.compute.manager [req-18fa4220-d2ab-48b2-83a9-0199da6dd067 req-7df48f94-ccd9-4644-9136-d7ab2f0d0518 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] No waiting events found dispatching network-vif-unplugged-f0f27b65-aab4-4ab1-ade1-b58eb7124f88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:47:07 compute-0 nova_compute[254092]: 2025-11-25 16:47:07.911 254096 DEBUG nova.compute.manager [req-18fa4220-d2ab-48b2-83a9-0199da6dd067 req-7df48f94-ccd9-4644-9136-d7ab2f0d0518 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Received event network-vif-unplugged-f0f27b65-aab4-4ab1-ade1-b58eb7124f88 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:47:07 compute-0 nova_compute[254092]: 2025-11-25 16:47:07.911 254096 DEBUG nova.compute.manager [req-18fa4220-d2ab-48b2-83a9-0199da6dd067 req-7df48f94-ccd9-4644-9136-d7ab2f0d0518 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Received event network-vif-plugged-f0f27b65-aab4-4ab1-ade1-b58eb7124f88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:47:07 compute-0 nova_compute[254092]: 2025-11-25 16:47:07.911 254096 DEBUG oslo_concurrency.lockutils [req-18fa4220-d2ab-48b2-83a9-0199da6dd067 req-7df48f94-ccd9-4644-9136-d7ab2f0d0518 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:07 compute-0 nova_compute[254092]: 2025-11-25 16:47:07.912 254096 DEBUG oslo_concurrency.lockutils [req-18fa4220-d2ab-48b2-83a9-0199da6dd067 req-7df48f94-ccd9-4644-9136-d7ab2f0d0518 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:07 compute-0 nova_compute[254092]: 2025-11-25 16:47:07.912 254096 DEBUG oslo_concurrency.lockutils [req-18fa4220-d2ab-48b2-83a9-0199da6dd067 req-7df48f94-ccd9-4644-9136-d7ab2f0d0518 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:07 compute-0 nova_compute[254092]: 2025-11-25 16:47:07.912 254096 DEBUG nova.compute.manager [req-18fa4220-d2ab-48b2-83a9-0199da6dd067 req-7df48f94-ccd9-4644-9136-d7ab2f0d0518 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] No waiting events found dispatching network-vif-plugged-f0f27b65-aab4-4ab1-ade1-b58eb7124f88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:47:07 compute-0 nova_compute[254092]: 2025-11-25 16:47:07.912 254096 WARNING nova.compute.manager [req-18fa4220-d2ab-48b2-83a9-0199da6dd067 req-7df48f94-ccd9-4644-9136-d7ab2f0d0518 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Received unexpected event network-vif-plugged-f0f27b65-aab4-4ab1-ade1-b58eb7124f88 for instance with vm_state active and task_state deleting.
Nov 25 16:47:08 compute-0 nova_compute[254092]: 2025-11-25 16:47:08.428 254096 DEBUG nova.network.neutron [-] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:47:08 compute-0 nova_compute[254092]: 2025-11-25 16:47:08.453 254096 INFO nova.compute.manager [-] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Took 2.09 seconds to deallocate network for instance.
Nov 25 16:47:08 compute-0 nova_compute[254092]: 2025-11-25 16:47:08.479 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:08 compute-0 nova_compute[254092]: 2025-11-25 16:47:08.503 254096 DEBUG oslo_concurrency.lockutils [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:08 compute-0 nova_compute[254092]: 2025-11-25 16:47:08.503 254096 DEBUG oslo_concurrency.lockutils [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:08 compute-0 nova_compute[254092]: 2025-11-25 16:47:08.569 254096 DEBUG oslo_concurrency.processutils [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:47:09 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2197991953' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:47:09 compute-0 nova_compute[254092]: 2025-11-25 16:47:09.021 254096 DEBUG oslo_concurrency.processutils [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:09 compute-0 nova_compute[254092]: 2025-11-25 16:47:09.028 254096 DEBUG nova.compute.provider_tree [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:47:09 compute-0 ceph-mon[74985]: pgmap v1852: 321 pgs: 321 active+clean; 194 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 221 op/s
Nov 25 16:47:09 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2197991953' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:47:09 compute-0 nova_compute[254092]: 2025-11-25 16:47:09.046 254096 DEBUG nova.scheduler.client.report [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:47:09 compute-0 nova_compute[254092]: 2025-11-25 16:47:09.072 254096 DEBUG oslo_concurrency.lockutils [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.569s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:09 compute-0 nova_compute[254092]: 2025-11-25 16:47:09.101 254096 INFO nova.scheduler.client.report [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Deleted allocations for instance d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb
Nov 25 16:47:09 compute-0 nova_compute[254092]: 2025-11-25 16:47:09.166 254096 DEBUG oslo_concurrency.lockutils [None req-8134d527-fb20-4def-a5dc-1a818be52ff2 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.493s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:09 compute-0 nova_compute[254092]: 2025-11-25 16:47:09.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:47:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1853: 321 pgs: 321 active+clean; 194 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.1 MiB/s wr, 208 op/s
Nov 25 16:47:09 compute-0 nova_compute[254092]: 2025-11-25 16:47:09.987 254096 DEBUG nova.compute.manager [req-b65a6aa9-89f1-4edd-863f-b2c70de65693 req-2a3eead5-b526-4dd9-b93c-fe4d2ed49d62 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Received event network-vif-deleted-f0f27b65-aab4-4ab1-ade1-b58eb7124f88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:47:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:47:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:47:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:47:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:47:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:47:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:47:10 compute-0 nova_compute[254092]: 2025-11-25 16:47:10.956 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:11 compute-0 ceph-mon[74985]: pgmap v1853: 321 pgs: 321 active+clean; 194 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.1 MiB/s wr, 208 op/s
Nov 25 16:47:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:47:11 compute-0 nova_compute[254092]: 2025-11-25 16:47:11.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:47:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1854: 321 pgs: 321 active+clean; 121 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.1 MiB/s wr, 242 op/s
Nov 25 16:47:12 compute-0 nova_compute[254092]: 2025-11-25 16:47:12.178 254096 DEBUG oslo_concurrency.lockutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "6affd696-c15d-4401-8512-2aabbf55fd4e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:12 compute-0 nova_compute[254092]: 2025-11-25 16:47:12.178 254096 DEBUG oslo_concurrency.lockutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "6affd696-c15d-4401-8512-2aabbf55fd4e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:12 compute-0 nova_compute[254092]: 2025-11-25 16:47:12.211 254096 DEBUG nova.compute.manager [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:47:12 compute-0 nova_compute[254092]: 2025-11-25 16:47:12.292 254096 DEBUG oslo_concurrency.lockutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:12 compute-0 nova_compute[254092]: 2025-11-25 16:47:12.293 254096 DEBUG oslo_concurrency.lockutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:12 compute-0 nova_compute[254092]: 2025-11-25 16:47:12.298 254096 DEBUG nova.virt.hardware [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:47:12 compute-0 nova_compute[254092]: 2025-11-25 16:47:12.298 254096 INFO nova.compute.claims [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:47:12 compute-0 nova_compute[254092]: 2025-11-25 16:47:12.422 254096 DEBUG oslo_concurrency.processutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:12 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:47:12 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3242272465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:47:12 compute-0 nova_compute[254092]: 2025-11-25 16:47:12.860 254096 DEBUG oslo_concurrency.processutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:12 compute-0 nova_compute[254092]: 2025-11-25 16:47:12.866 254096 DEBUG nova.compute.provider_tree [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:47:12 compute-0 nova_compute[254092]: 2025-11-25 16:47:12.887 254096 DEBUG nova.scheduler.client.report [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:47:12 compute-0 nova_compute[254092]: 2025-11-25 16:47:12.913 254096 DEBUG oslo_concurrency.lockutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:12 compute-0 nova_compute[254092]: 2025-11-25 16:47:12.913 254096 DEBUG nova.compute.manager [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:47:12 compute-0 nova_compute[254092]: 2025-11-25 16:47:12.975 254096 DEBUG nova.compute.manager [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:47:12 compute-0 nova_compute[254092]: 2025-11-25 16:47:12.975 254096 DEBUG nova.network.neutron [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:47:12 compute-0 nova_compute[254092]: 2025-11-25 16:47:12.998 254096 INFO nova.virt.libvirt.driver [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:47:13 compute-0 nova_compute[254092]: 2025-11-25 16:47:13.014 254096 DEBUG nova.compute.manager [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:47:13 compute-0 ceph-mon[74985]: pgmap v1854: 321 pgs: 321 active+clean; 121 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.1 MiB/s wr, 242 op/s
Nov 25 16:47:13 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3242272465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:47:13 compute-0 nova_compute[254092]: 2025-11-25 16:47:13.106 254096 DEBUG nova.compute.manager [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:47:13 compute-0 nova_compute[254092]: 2025-11-25 16:47:13.107 254096 DEBUG nova.virt.libvirt.driver [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:47:13 compute-0 nova_compute[254092]: 2025-11-25 16:47:13.108 254096 INFO nova.virt.libvirt.driver [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Creating image(s)
Nov 25 16:47:13 compute-0 nova_compute[254092]: 2025-11-25 16:47:13.130 254096 DEBUG nova.storage.rbd_utils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 6affd696-c15d-4401-8512-2aabbf55fd4e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:47:13 compute-0 nova_compute[254092]: 2025-11-25 16:47:13.159 254096 DEBUG nova.storage.rbd_utils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 6affd696-c15d-4401-8512-2aabbf55fd4e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:47:13 compute-0 nova_compute[254092]: 2025-11-25 16:47:13.186 254096 DEBUG nova.storage.rbd_utils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 6affd696-c15d-4401-8512-2aabbf55fd4e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:47:13 compute-0 nova_compute[254092]: 2025-11-25 16:47:13.191 254096 DEBUG oslo_concurrency.processutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:13 compute-0 nova_compute[254092]: 2025-11-25 16:47:13.293 254096 DEBUG nova.policy [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7a137feb008c49e092cbb94106526835', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '56790f54314e4087abb5da8030b17666', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:47:13 compute-0 nova_compute[254092]: 2025-11-25 16:47:13.298 254096 DEBUG oslo_concurrency.processutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:13 compute-0 nova_compute[254092]: 2025-11-25 16:47:13.299 254096 DEBUG oslo_concurrency.lockutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:13 compute-0 nova_compute[254092]: 2025-11-25 16:47:13.300 254096 DEBUG oslo_concurrency.lockutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:13 compute-0 nova_compute[254092]: 2025-11-25 16:47:13.300 254096 DEBUG oslo_concurrency.lockutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:13 compute-0 nova_compute[254092]: 2025-11-25 16:47:13.326 254096 DEBUG nova.storage.rbd_utils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 6affd696-c15d-4401-8512-2aabbf55fd4e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:47:13 compute-0 nova_compute[254092]: 2025-11-25 16:47:13.331 254096 DEBUG oslo_concurrency.processutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 6affd696-c15d-4401-8512-2aabbf55fd4e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:13 compute-0 nova_compute[254092]: 2025-11-25 16:47:13.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:47:13 compute-0 nova_compute[254092]: 2025-11-25 16:47:13.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:47:13 compute-0 nova_compute[254092]: 2025-11-25 16:47:13.523 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:13.624 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:13.624 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:13.625 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:13 compute-0 nova_compute[254092]: 2025-11-25 16:47:13.683 254096 DEBUG oslo_concurrency.processutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 6affd696-c15d-4401-8512-2aabbf55fd4e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.352s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:13 compute-0 nova_compute[254092]: 2025-11-25 16:47:13.738 254096 DEBUG nova.storage.rbd_utils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] resizing rbd image 6affd696-c15d-4401-8512-2aabbf55fd4e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:47:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1855: 321 pgs: 321 active+clean; 121 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 30 KiB/s wr, 200 op/s
Nov 25 16:47:13 compute-0 nova_compute[254092]: 2025-11-25 16:47:13.816 254096 DEBUG nova.objects.instance [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lazy-loading 'migration_context' on Instance uuid 6affd696-c15d-4401-8512-2aabbf55fd4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:47:13 compute-0 nova_compute[254092]: 2025-11-25 16:47:13.823 254096 DEBUG oslo_concurrency.lockutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "73301044-3bad-4401-9e30-f009d417f662" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:13 compute-0 nova_compute[254092]: 2025-11-25 16:47:13.824 254096 DEBUG oslo_concurrency.lockutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:13 compute-0 nova_compute[254092]: 2025-11-25 16:47:13.831 254096 DEBUG nova.virt.libvirt.driver [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:47:13 compute-0 nova_compute[254092]: 2025-11-25 16:47:13.832 254096 DEBUG nova.virt.libvirt.driver [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Ensure instance console log exists: /var/lib/nova/instances/6affd696-c15d-4401-8512-2aabbf55fd4e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:47:13 compute-0 nova_compute[254092]: 2025-11-25 16:47:13.832 254096 DEBUG oslo_concurrency.lockutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:13 compute-0 nova_compute[254092]: 2025-11-25 16:47:13.832 254096 DEBUG oslo_concurrency.lockutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:13 compute-0 nova_compute[254092]: 2025-11-25 16:47:13.833 254096 DEBUG oslo_concurrency.lockutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:13 compute-0 nova_compute[254092]: 2025-11-25 16:47:13.858 254096 DEBUG nova.compute.manager [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:47:13 compute-0 nova_compute[254092]: 2025-11-25 16:47:13.927 254096 DEBUG oslo_concurrency.lockutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:13 compute-0 nova_compute[254092]: 2025-11-25 16:47:13.928 254096 DEBUG oslo_concurrency.lockutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:13 compute-0 nova_compute[254092]: 2025-11-25 16:47:13.936 254096 DEBUG nova.virt.hardware [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:47:13 compute-0 nova_compute[254092]: 2025-11-25 16:47:13.937 254096 INFO nova.compute.claims [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:47:14 compute-0 nova_compute[254092]: 2025-11-25 16:47:14.116 254096 DEBUG oslo_concurrency.processutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:14 compute-0 nova_compute[254092]: 2025-11-25 16:47:14.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:47:14 compute-0 nova_compute[254092]: 2025-11-25 16:47:14.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 16:47:14 compute-0 nova_compute[254092]: 2025-11-25 16:47:14.567 254096 DEBUG nova.network.neutron [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Successfully created port: be2a1b3b-f8a0-4a67-9582-54b753171490 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:47:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:47:14 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/999882169' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:47:14 compute-0 nova_compute[254092]: 2025-11-25 16:47:14.593 254096 DEBUG oslo_concurrency.processutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:14 compute-0 nova_compute[254092]: 2025-11-25 16:47:14.600 254096 DEBUG nova.compute.provider_tree [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:47:14 compute-0 nova_compute[254092]: 2025-11-25 16:47:14.613 254096 DEBUG nova.scheduler.client.report [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:47:14 compute-0 nova_compute[254092]: 2025-11-25 16:47:14.634 254096 DEBUG oslo_concurrency.lockutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:14 compute-0 nova_compute[254092]: 2025-11-25 16:47:14.634 254096 DEBUG nova.compute.manager [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:47:14 compute-0 podman[344570]: 2025-11-25 16:47:14.658250278 +0000 UTC m=+0.062848080 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:47:14 compute-0 podman[344567]: 2025-11-25 16:47:14.663829959 +0000 UTC m=+0.068196543 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:47:14 compute-0 nova_compute[254092]: 2025-11-25 16:47:14.684 254096 DEBUG nova.compute.manager [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:47:14 compute-0 nova_compute[254092]: 2025-11-25 16:47:14.684 254096 DEBUG nova.network.neutron [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:47:14 compute-0 nova_compute[254092]: 2025-11-25 16:47:14.703 254096 INFO nova.virt.libvirt.driver [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:47:14 compute-0 podman[344571]: 2025-11-25 16:47:14.703843517 +0000 UTC m=+0.101072948 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 25 16:47:14 compute-0 nova_compute[254092]: 2025-11-25 16:47:14.718 254096 DEBUG nova.compute.manager [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:47:14 compute-0 nova_compute[254092]: 2025-11-25 16:47:14.794 254096 DEBUG nova.compute.manager [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:47:14 compute-0 nova_compute[254092]: 2025-11-25 16:47:14.795 254096 DEBUG nova.virt.libvirt.driver [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:47:14 compute-0 nova_compute[254092]: 2025-11-25 16:47:14.796 254096 INFO nova.virt.libvirt.driver [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Creating image(s)
Nov 25 16:47:14 compute-0 nova_compute[254092]: 2025-11-25 16:47:14.818 254096 DEBUG nova.storage.rbd_utils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image 73301044-3bad-4401-9e30-f009d417f662_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:47:14 compute-0 nova_compute[254092]: 2025-11-25 16:47:14.845 254096 DEBUG nova.storage.rbd_utils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image 73301044-3bad-4401-9e30-f009d417f662_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:47:14 compute-0 nova_compute[254092]: 2025-11-25 16:47:14.870 254096 DEBUG nova.storage.rbd_utils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image 73301044-3bad-4401-9e30-f009d417f662_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:47:14 compute-0 nova_compute[254092]: 2025-11-25 16:47:14.874 254096 DEBUG oslo_concurrency.processutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:14 compute-0 nova_compute[254092]: 2025-11-25 16:47:14.907 254096 DEBUG nova.policy [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '23f6db77558a477bbd8b8b46cb4107d1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fbf763b31dad40d6b0d7285dc017dd89', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:47:14 compute-0 nova_compute[254092]: 2025-11-25 16:47:14.942 254096 DEBUG oslo_concurrency.processutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:14 compute-0 nova_compute[254092]: 2025-11-25 16:47:14.943 254096 DEBUG oslo_concurrency.lockutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:14 compute-0 nova_compute[254092]: 2025-11-25 16:47:14.944 254096 DEBUG oslo_concurrency.lockutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:14 compute-0 nova_compute[254092]: 2025-11-25 16:47:14.944 254096 DEBUG oslo_concurrency.lockutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:14 compute-0 nova_compute[254092]: 2025-11-25 16:47:14.969 254096 DEBUG nova.storage.rbd_utils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image 73301044-3bad-4401-9e30-f009d417f662_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:47:14 compute-0 nova_compute[254092]: 2025-11-25 16:47:14.972 254096 DEBUG oslo_concurrency.processutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 73301044-3bad-4401-9e30-f009d417f662_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:15 compute-0 ceph-mon[74985]: pgmap v1855: 321 pgs: 321 active+clean; 121 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 30 KiB/s wr, 200 op/s
Nov 25 16:47:15 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/999882169' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:47:15 compute-0 nova_compute[254092]: 2025-11-25 16:47:15.272 254096 DEBUG oslo_concurrency.processutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 73301044-3bad-4401-9e30-f009d417f662_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.299s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:15 compute-0 nova_compute[254092]: 2025-11-25 16:47:15.325 254096 DEBUG nova.storage.rbd_utils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] resizing rbd image 73301044-3bad-4401-9e30-f009d417f662_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:47:15 compute-0 nova_compute[254092]: 2025-11-25 16:47:15.392 254096 DEBUG nova.objects.instance [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lazy-loading 'migration_context' on Instance uuid 73301044-3bad-4401-9e30-f009d417f662 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:47:15 compute-0 nova_compute[254092]: 2025-11-25 16:47:15.403 254096 DEBUG nova.virt.libvirt.driver [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:47:15 compute-0 nova_compute[254092]: 2025-11-25 16:47:15.404 254096 DEBUG nova.virt.libvirt.driver [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Ensure instance console log exists: /var/lib/nova/instances/73301044-3bad-4401-9e30-f009d417f662/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:47:15 compute-0 nova_compute[254092]: 2025-11-25 16:47:15.418 254096 DEBUG oslo_concurrency.lockutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:15 compute-0 nova_compute[254092]: 2025-11-25 16:47:15.418 254096 DEBUG oslo_concurrency.lockutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:15 compute-0 nova_compute[254092]: 2025-11-25 16:47:15.418 254096 DEBUG oslo_concurrency.lockutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:15 compute-0 nova_compute[254092]: 2025-11-25 16:47:15.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:47:15 compute-0 nova_compute[254092]: 2025-11-25 16:47:15.518 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:15 compute-0 nova_compute[254092]: 2025-11-25 16:47:15.518 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:15 compute-0 nova_compute[254092]: 2025-11-25 16:47:15.518 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:15 compute-0 nova_compute[254092]: 2025-11-25 16:47:15.518 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 16:47:15 compute-0 nova_compute[254092]: 2025-11-25 16:47:15.519 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1856: 321 pgs: 321 active+clean; 157 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 983 KiB/s wr, 227 op/s
Nov 25 16:47:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:47:15 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3619001051' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:47:15 compute-0 nova_compute[254092]: 2025-11-25 16:47:15.957 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:15 compute-0 nova_compute[254092]: 2025-11-25 16:47:15.963 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:16 compute-0 nova_compute[254092]: 2025-11-25 16:47:16.041 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000053 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:47:16 compute-0 nova_compute[254092]: 2025-11-25 16:47:16.042 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000053 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:47:16 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3619001051' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:47:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:47:16 compute-0 nova_compute[254092]: 2025-11-25 16:47:16.245 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:47:16 compute-0 nova_compute[254092]: 2025-11-25 16:47:16.246 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3758MB free_disk=59.942779541015625GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 16:47:16 compute-0 nova_compute[254092]: 2025-11-25 16:47:16.247 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:16 compute-0 nova_compute[254092]: 2025-11-25 16:47:16.247 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:16 compute-0 nova_compute[254092]: 2025-11-25 16:47:16.296 254096 DEBUG nova.network.neutron [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Successfully updated port: be2a1b3b-f8a0-4a67-9582-54b753171490 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:47:16 compute-0 nova_compute[254092]: 2025-11-25 16:47:16.313 254096 DEBUG oslo_concurrency.lockutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "refresh_cache-6affd696-c15d-4401-8512-2aabbf55fd4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:47:16 compute-0 nova_compute[254092]: 2025-11-25 16:47:16.314 254096 DEBUG oslo_concurrency.lockutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquired lock "refresh_cache-6affd696-c15d-4401-8512-2aabbf55fd4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:47:16 compute-0 nova_compute[254092]: 2025-11-25 16:47:16.314 254096 DEBUG nova.network.neutron [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:47:16 compute-0 nova_compute[254092]: 2025-11-25 16:47:16.321 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 8b20d119-17cb-4742-9223-90e5020f93a7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:47:16 compute-0 nova_compute[254092]: 2025-11-25 16:47:16.321 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 6affd696-c15d-4401-8512-2aabbf55fd4e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:47:16 compute-0 nova_compute[254092]: 2025-11-25 16:47:16.321 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 73301044-3bad-4401-9e30-f009d417f662 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:47:16 compute-0 nova_compute[254092]: 2025-11-25 16:47:16.321 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 16:47:16 compute-0 nova_compute[254092]: 2025-11-25 16:47:16.321 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 16:47:16 compute-0 nova_compute[254092]: 2025-11-25 16:47:16.387 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:16 compute-0 nova_compute[254092]: 2025-11-25 16:47:16.428 254096 DEBUG nova.network.neutron [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Successfully created port: 792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:47:16 compute-0 nova_compute[254092]: 2025-11-25 16:47:16.502 254096 DEBUG nova.network.neutron [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:47:16 compute-0 nova_compute[254092]: 2025-11-25 16:47:16.609 254096 DEBUG nova.compute.manager [req-58f8d9a9-3e2e-48ef-b91a-588b568d91de req-ba66ec21-38ea-49f7-b864-5641fc28ea9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Received event network-changed-be2a1b3b-f8a0-4a67-9582-54b753171490 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:47:16 compute-0 nova_compute[254092]: 2025-11-25 16:47:16.609 254096 DEBUG nova.compute.manager [req-58f8d9a9-3e2e-48ef-b91a-588b568d91de req-ba66ec21-38ea-49f7-b864-5641fc28ea9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Refreshing instance network info cache due to event network-changed-be2a1b3b-f8a0-4a67-9582-54b753171490. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:47:16 compute-0 nova_compute[254092]: 2025-11-25 16:47:16.610 254096 DEBUG oslo_concurrency.lockutils [req-58f8d9a9-3e2e-48ef-b91a-588b568d91de req-ba66ec21-38ea-49f7-b864-5641fc28ea9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-6affd696-c15d-4401-8512-2aabbf55fd4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:47:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:47:16 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3046386077' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:47:16 compute-0 nova_compute[254092]: 2025-11-25 16:47:16.807 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:16 compute-0 nova_compute[254092]: 2025-11-25 16:47:16.814 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:47:16 compute-0 nova_compute[254092]: 2025-11-25 16:47:16.826 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:47:16 compute-0 nova_compute[254092]: 2025-11-25 16:47:16.844 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 16:47:16 compute-0 nova_compute[254092]: 2025-11-25 16:47:16.844 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:17 compute-0 ceph-mon[74985]: pgmap v1856: 321 pgs: 321 active+clean; 157 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 983 KiB/s wr, 227 op/s
Nov 25 16:47:17 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3046386077' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:47:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1857: 321 pgs: 321 active+clean; 189 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.5 MiB/s wr, 155 op/s
Nov 25 16:47:17 compute-0 nova_compute[254092]: 2025-11-25 16:47:17.845 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:47:17 compute-0 nova_compute[254092]: 2025-11-25 16:47:17.846 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 16:47:17 compute-0 nova_compute[254092]: 2025-11-25 16:47:17.875 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 16:47:17 compute-0 nova_compute[254092]: 2025-11-25 16:47:17.875 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:47:18 compute-0 nova_compute[254092]: 2025-11-25 16:47:18.118 254096 DEBUG nova.network.neutron [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Updating instance_info_cache with network_info: [{"id": "be2a1b3b-f8a0-4a67-9582-54b753171490", "address": "fa:16:3e:8a:e5:4f", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe2a1b3b-f8", "ovs_interfaceid": "be2a1b3b-f8a0-4a67-9582-54b753171490", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:47:18 compute-0 nova_compute[254092]: 2025-11-25 16:47:18.136 254096 DEBUG oslo_concurrency.lockutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Releasing lock "refresh_cache-6affd696-c15d-4401-8512-2aabbf55fd4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:47:18 compute-0 nova_compute[254092]: 2025-11-25 16:47:18.137 254096 DEBUG nova.compute.manager [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Instance network_info: |[{"id": "be2a1b3b-f8a0-4a67-9582-54b753171490", "address": "fa:16:3e:8a:e5:4f", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe2a1b3b-f8", "ovs_interfaceid": "be2a1b3b-f8a0-4a67-9582-54b753171490", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:47:18 compute-0 nova_compute[254092]: 2025-11-25 16:47:18.137 254096 DEBUG oslo_concurrency.lockutils [req-58f8d9a9-3e2e-48ef-b91a-588b568d91de req-ba66ec21-38ea-49f7-b864-5641fc28ea9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-6affd696-c15d-4401-8512-2aabbf55fd4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:47:18 compute-0 nova_compute[254092]: 2025-11-25 16:47:18.138 254096 DEBUG nova.network.neutron [req-58f8d9a9-3e2e-48ef-b91a-588b568d91de req-ba66ec21-38ea-49f7-b864-5641fc28ea9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Refreshing network info cache for port be2a1b3b-f8a0-4a67-9582-54b753171490 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:47:18 compute-0 nova_compute[254092]: 2025-11-25 16:47:18.140 254096 DEBUG nova.virt.libvirt.driver [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Start _get_guest_xml network_info=[{"id": "be2a1b3b-f8a0-4a67-9582-54b753171490", "address": "fa:16:3e:8a:e5:4f", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe2a1b3b-f8", "ovs_interfaceid": "be2a1b3b-f8a0-4a67-9582-54b753171490", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:47:18 compute-0 nova_compute[254092]: 2025-11-25 16:47:18.145 254096 WARNING nova.virt.libvirt.driver [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:47:18 compute-0 nova_compute[254092]: 2025-11-25 16:47:18.155 254096 DEBUG nova.virt.libvirt.host [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:47:18 compute-0 nova_compute[254092]: 2025-11-25 16:47:18.156 254096 DEBUG nova.virt.libvirt.host [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:47:18 compute-0 nova_compute[254092]: 2025-11-25 16:47:18.160 254096 DEBUG nova.virt.libvirt.host [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:47:18 compute-0 nova_compute[254092]: 2025-11-25 16:47:18.161 254096 DEBUG nova.virt.libvirt.host [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:47:18 compute-0 nova_compute[254092]: 2025-11-25 16:47:18.161 254096 DEBUG nova.virt.libvirt.driver [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:47:18 compute-0 nova_compute[254092]: 2025-11-25 16:47:18.162 254096 DEBUG nova.virt.hardware [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:47:18 compute-0 nova_compute[254092]: 2025-11-25 16:47:18.162 254096 DEBUG nova.virt.hardware [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:47:18 compute-0 nova_compute[254092]: 2025-11-25 16:47:18.162 254096 DEBUG nova.virt.hardware [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:47:18 compute-0 nova_compute[254092]: 2025-11-25 16:47:18.163 254096 DEBUG nova.virt.hardware [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:47:18 compute-0 nova_compute[254092]: 2025-11-25 16:47:18.163 254096 DEBUG nova.virt.hardware [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:47:18 compute-0 nova_compute[254092]: 2025-11-25 16:47:18.163 254096 DEBUG nova.virt.hardware [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:47:18 compute-0 nova_compute[254092]: 2025-11-25 16:47:18.163 254096 DEBUG nova.virt.hardware [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:47:18 compute-0 nova_compute[254092]: 2025-11-25 16:47:18.164 254096 DEBUG nova.virt.hardware [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:47:18 compute-0 nova_compute[254092]: 2025-11-25 16:47:18.164 254096 DEBUG nova.virt.hardware [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:47:18 compute-0 nova_compute[254092]: 2025-11-25 16:47:18.164 254096 DEBUG nova.virt.hardware [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:47:18 compute-0 nova_compute[254092]: 2025-11-25 16:47:18.164 254096 DEBUG nova.virt.hardware [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:47:18 compute-0 nova_compute[254092]: 2025-11-25 16:47:18.167 254096 DEBUG oslo_concurrency.processutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:18 compute-0 nova_compute[254092]: 2025-11-25 16:47:18.323 254096 DEBUG oslo_concurrency.lockutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "2122cb4e-4525-451f-a46f-184e4a72cb34" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:18 compute-0 nova_compute[254092]: 2025-11-25 16:47:18.323 254096 DEBUG oslo_concurrency.lockutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "2122cb4e-4525-451f-a46f-184e4a72cb34" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:18 compute-0 nova_compute[254092]: 2025-11-25 16:47:18.336 254096 DEBUG nova.compute.manager [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:47:18 compute-0 nova_compute[254092]: 2025-11-25 16:47:18.403 254096 DEBUG oslo_concurrency.lockutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:18 compute-0 nova_compute[254092]: 2025-11-25 16:47:18.404 254096 DEBUG oslo_concurrency.lockutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:18 compute-0 nova_compute[254092]: 2025-11-25 16:47:18.411 254096 DEBUG nova.virt.hardware [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:47:18 compute-0 nova_compute[254092]: 2025-11-25 16:47:18.411 254096 INFO nova.compute.claims [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:47:18 compute-0 nova_compute[254092]: 2025-11-25 16:47:18.525 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:18 compute-0 nova_compute[254092]: 2025-11-25 16:47:18.604 254096 DEBUG oslo_concurrency.processutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:18 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:47:18 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2207592377' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:47:18 compute-0 nova_compute[254092]: 2025-11-25 16:47:18.652 254096 DEBUG oslo_concurrency.processutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:18 compute-0 nova_compute[254092]: 2025-11-25 16:47:18.675 254096 DEBUG nova.storage.rbd_utils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 6affd696-c15d-4401-8512-2aabbf55fd4e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:47:18 compute-0 nova_compute[254092]: 2025-11-25 16:47:18.679 254096 DEBUG oslo_concurrency.processutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:18 compute-0 nova_compute[254092]: 2025-11-25 16:47:18.796 254096 DEBUG nova.network.neutron [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Successfully updated port: 792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:47:18 compute-0 nova_compute[254092]: 2025-11-25 16:47:18.809 254096 DEBUG oslo_concurrency.lockutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "refresh_cache-73301044-3bad-4401-9e30-f009d417f662" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:47:18 compute-0 nova_compute[254092]: 2025-11-25 16:47:18.809 254096 DEBUG oslo_concurrency.lockutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquired lock "refresh_cache-73301044-3bad-4401-9e30-f009d417f662" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:47:18 compute-0 nova_compute[254092]: 2025-11-25 16:47:18.809 254096 DEBUG nova.network.neutron [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:47:18 compute-0 nova_compute[254092]: 2025-11-25 16:47:18.890 254096 DEBUG nova.compute.manager [req-6afd6702-094d-4254-819b-0d01af562745 req-19e579ad-ce4f-4116-aa42-ed3124dced45 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Received event network-changed-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:47:18 compute-0 nova_compute[254092]: 2025-11-25 16:47:18.891 254096 DEBUG nova.compute.manager [req-6afd6702-094d-4254-819b-0d01af562745 req-19e579ad-ce4f-4116-aa42-ed3124dced45 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Refreshing instance network info cache due to event network-changed-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:47:18 compute-0 nova_compute[254092]: 2025-11-25 16:47:18.891 254096 DEBUG oslo_concurrency.lockutils [req-6afd6702-094d-4254-819b-0d01af562745 req-19e579ad-ce4f-4116-aa42-ed3124dced45 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-73301044-3bad-4401-9e30-f009d417f662" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:47:18 compute-0 sshd-session[344840]: Connection closed by authenticating user root 171.244.51.45 port 56022 [preauth]
Nov 25 16:47:18 compute-0 nova_compute[254092]: 2025-11-25 16:47:18.996 254096 DEBUG nova.network.neutron [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:47:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:47:19 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1820097639' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.029 254096 DEBUG oslo_concurrency.processutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.035 254096 DEBUG nova.compute.provider_tree [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.049 254096 DEBUG nova.scheduler.client.report [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.068 254096 DEBUG oslo_concurrency.lockutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.069 254096 DEBUG nova.compute.manager [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:47:19 compute-0 ceph-mon[74985]: pgmap v1857: 321 pgs: 321 active+clean; 189 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.5 MiB/s wr, 155 op/s
Nov 25 16:47:19 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2207592377' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:47:19 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1820097639' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:47:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:47:19 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4123912769' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.110 254096 DEBUG oslo_concurrency.processutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.111 254096 DEBUG nova.virt.libvirt.vif [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:47:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-694487906',display_name='tempest-ServersTestJSON-server-694487906',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-694487906',id=88,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56790f54314e4087abb5da8030b17666',ramdisk_id='',reservation_id='r-nafun8zg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1089675869',owner_user_name='tempest-ServersTestJSON-1089675869-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:47:13Z,user_data=None,user_id='7a137feb008c49e092cbb94106526835',uuid=6affd696-c15d-4401-8512-2aabbf55fd4e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "be2a1b3b-f8a0-4a67-9582-54b753171490", "address": "fa:16:3e:8a:e5:4f", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe2a1b3b-f8", "ovs_interfaceid": "be2a1b3b-f8a0-4a67-9582-54b753171490", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.111 254096 DEBUG nova.network.os_vif_util [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converting VIF {"id": "be2a1b3b-f8a0-4a67-9582-54b753171490", "address": "fa:16:3e:8a:e5:4f", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe2a1b3b-f8", "ovs_interfaceid": "be2a1b3b-f8a0-4a67-9582-54b753171490", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.112 254096 DEBUG nova.network.os_vif_util [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8a:e5:4f,bridge_name='br-int',has_traffic_filtering=True,id=be2a1b3b-f8a0-4a67-9582-54b753171490,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe2a1b3b-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.113 254096 DEBUG nova.objects.instance [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6affd696-c15d-4401-8512-2aabbf55fd4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.129 254096 DEBUG nova.compute.manager [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.129 254096 DEBUG nova.network.neutron [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.132 254096 DEBUG nova.virt.libvirt.driver [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:47:19 compute-0 nova_compute[254092]:   <uuid>6affd696-c15d-4401-8512-2aabbf55fd4e</uuid>
Nov 25 16:47:19 compute-0 nova_compute[254092]:   <name>instance-00000058</name>
Nov 25 16:47:19 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:47:19 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:47:19 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:47:19 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:       <nova:name>tempest-ServersTestJSON-server-694487906</nova:name>
Nov 25 16:47:19 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:47:18</nova:creationTime>
Nov 25 16:47:19 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:47:19 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:47:19 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:47:19 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:47:19 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:47:19 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:47:19 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:47:19 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:47:19 compute-0 nova_compute[254092]:         <nova:user uuid="7a137feb008c49e092cbb94106526835">tempest-ServersTestJSON-1089675869-project-member</nova:user>
Nov 25 16:47:19 compute-0 nova_compute[254092]:         <nova:project uuid="56790f54314e4087abb5da8030b17666">tempest-ServersTestJSON-1089675869</nova:project>
Nov 25 16:47:19 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:47:19 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:47:19 compute-0 nova_compute[254092]:         <nova:port uuid="be2a1b3b-f8a0-4a67-9582-54b753171490">
Nov 25 16:47:19 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:47:19 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:47:19 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:47:19 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:47:19 compute-0 nova_compute[254092]:     <system>
Nov 25 16:47:19 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:47:19 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:47:19 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:47:19 compute-0 nova_compute[254092]:       <entry name="serial">6affd696-c15d-4401-8512-2aabbf55fd4e</entry>
Nov 25 16:47:19 compute-0 nova_compute[254092]:       <entry name="uuid">6affd696-c15d-4401-8512-2aabbf55fd4e</entry>
Nov 25 16:47:19 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     </system>
Nov 25 16:47:19 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:47:19 compute-0 nova_compute[254092]:   <os>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:   </os>
Nov 25 16:47:19 compute-0 nova_compute[254092]:   <features>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:   </features>
Nov 25 16:47:19 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:47:19 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:47:19 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:47:19 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:47:19 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:47:19 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/6affd696-c15d-4401-8512-2aabbf55fd4e_disk">
Nov 25 16:47:19 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:       </source>
Nov 25 16:47:19 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:47:19 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:47:19 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:47:19 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/6affd696-c15d-4401-8512-2aabbf55fd4e_disk.config">
Nov 25 16:47:19 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:       </source>
Nov 25 16:47:19 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:47:19 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:47:19 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:47:19 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:8a:e5:4f"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:       <target dev="tapbe2a1b3b-f8"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:47:19 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/6affd696-c15d-4401-8512-2aabbf55fd4e/console.log" append="off"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     <video>
Nov 25 16:47:19 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     </video>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:47:19 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:47:19 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:47:19 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:47:19 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:47:19 compute-0 nova_compute[254092]: </domain>
Nov 25 16:47:19 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.134 254096 DEBUG nova.compute.manager [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Preparing to wait for external event network-vif-plugged-be2a1b3b-f8a0-4a67-9582-54b753171490 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.134 254096 DEBUG oslo_concurrency.lockutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "6affd696-c15d-4401-8512-2aabbf55fd4e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.134 254096 DEBUG oslo_concurrency.lockutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "6affd696-c15d-4401-8512-2aabbf55fd4e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.135 254096 DEBUG oslo_concurrency.lockutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "6affd696-c15d-4401-8512-2aabbf55fd4e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.135 254096 DEBUG nova.virt.libvirt.vif [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:47:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-694487906',display_name='tempest-ServersTestJSON-server-694487906',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-694487906',id=88,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56790f54314e4087abb5da8030b17666',ramdisk_id='',reservation_id='r-nafun8zg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1089675869',owner_user_name='tempest-ServersTestJSON-1089675869-project-memb
er'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:47:13Z,user_data=None,user_id='7a137feb008c49e092cbb94106526835',uuid=6affd696-c15d-4401-8512-2aabbf55fd4e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "be2a1b3b-f8a0-4a67-9582-54b753171490", "address": "fa:16:3e:8a:e5:4f", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe2a1b3b-f8", "ovs_interfaceid": "be2a1b3b-f8a0-4a67-9582-54b753171490", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.135 254096 DEBUG nova.network.os_vif_util [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converting VIF {"id": "be2a1b3b-f8a0-4a67-9582-54b753171490", "address": "fa:16:3e:8a:e5:4f", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe2a1b3b-f8", "ovs_interfaceid": "be2a1b3b-f8a0-4a67-9582-54b753171490", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.136 254096 DEBUG nova.network.os_vif_util [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8a:e5:4f,bridge_name='br-int',has_traffic_filtering=True,id=be2a1b3b-f8a0-4a67-9582-54b753171490,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe2a1b3b-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.136 254096 DEBUG os_vif [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8a:e5:4f,bridge_name='br-int',has_traffic_filtering=True,id=be2a1b3b-f8a0-4a67-9582-54b753171490,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe2a1b3b-f8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.139 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.140 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.140 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.144 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.144 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbe2a1b3b-f8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.145 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbe2a1b3b-f8, col_values=(('external_ids', {'iface-id': 'be2a1b3b-f8a0-4a67-9582-54b753171490', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8a:e5:4f', 'vm-uuid': '6affd696-c15d-4401-8512-2aabbf55fd4e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.147 254096 INFO nova.virt.libvirt.driver [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:47:19 compute-0 NetworkManager[48891]: <info>  [1764089239.1480] manager: (tapbe2a1b3b-f8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/359)
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.150 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.156 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.157 254096 INFO os_vif [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8a:e5:4f,bridge_name='br-int',has_traffic_filtering=True,id=be2a1b3b-f8a0-4a67-9582-54b753171490,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe2a1b3b-f8')
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.175 254096 DEBUG nova.compute.manager [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.243 254096 DEBUG nova.virt.libvirt.driver [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.244 254096 DEBUG nova.virt.libvirt.driver [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.244 254096 DEBUG nova.virt.libvirt.driver [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] No VIF found with MAC fa:16:3e:8a:e5:4f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.244 254096 INFO nova.virt.libvirt.driver [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Using config drive
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.262 254096 DEBUG nova.storage.rbd_utils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 6affd696-c15d-4401-8512-2aabbf55fd4e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.302 254096 DEBUG nova.compute.manager [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.304 254096 DEBUG nova.virt.libvirt.driver [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.304 254096 INFO nova.virt.libvirt.driver [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Creating image(s)
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.323 254096 DEBUG nova.storage.rbd_utils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 2122cb4e-4525-451f-a46f-184e4a72cb34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.347 254096 DEBUG nova.storage.rbd_utils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 2122cb4e-4525-451f-a46f-184e4a72cb34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.370 254096 DEBUG nova.storage.rbd_utils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 2122cb4e-4525-451f-a46f-184e4a72cb34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.374 254096 DEBUG oslo_concurrency.processutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.440 254096 DEBUG nova.policy [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3b30410358c448a693a9dc40ee6aafb0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.443 254096 DEBUG oslo_concurrency.processutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.444 254096 DEBUG oslo_concurrency.lockutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.444 254096 DEBUG oslo_concurrency.lockutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.444 254096 DEBUG oslo_concurrency.lockutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.468 254096 DEBUG nova.storage.rbd_utils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 2122cb4e-4525-451f-a46f-184e4a72cb34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.472 254096 DEBUG oslo_concurrency.processutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 2122cb4e-4525-451f-a46f-184e4a72cb34_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.520 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:47:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1858: 321 pgs: 321 active+clean; 189 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 2.4 MiB/s wr, 74 op/s
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.783 254096 DEBUG oslo_concurrency.processutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 2122cb4e-4525-451f-a46f-184e4a72cb34_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.311s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.846 254096 DEBUG nova.storage.rbd_utils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] resizing rbd image 2122cb4e-4525-451f-a46f-184e4a72cb34_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.881 254096 INFO nova.virt.libvirt.driver [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Creating config drive at /var/lib/nova/instances/6affd696-c15d-4401-8512-2aabbf55fd4e/disk.config
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.887 254096 DEBUG oslo_concurrency.processutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6affd696-c15d-4401-8512-2aabbf55fd4e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfacujh8t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.961 254096 DEBUG nova.objects.instance [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'migration_context' on Instance uuid 2122cb4e-4525-451f-a46f-184e4a72cb34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.965 254096 DEBUG nova.network.neutron [req-58f8d9a9-3e2e-48ef-b91a-588b568d91de req-ba66ec21-38ea-49f7-b864-5641fc28ea9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Updated VIF entry in instance network info cache for port be2a1b3b-f8a0-4a67-9582-54b753171490. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.965 254096 DEBUG nova.network.neutron [req-58f8d9a9-3e2e-48ef-b91a-588b568d91de req-ba66ec21-38ea-49f7-b864-5641fc28ea9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Updating instance_info_cache with network_info: [{"id": "be2a1b3b-f8a0-4a67-9582-54b753171490", "address": "fa:16:3e:8a:e5:4f", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe2a1b3b-f8", "ovs_interfaceid": "be2a1b3b-f8a0-4a67-9582-54b753171490", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.978 254096 DEBUG nova.virt.libvirt.driver [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.978 254096 DEBUG nova.virt.libvirt.driver [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Ensure instance console log exists: /var/lib/nova/instances/2122cb4e-4525-451f-a46f-184e4a72cb34/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.978 254096 DEBUG oslo_concurrency.lockutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.979 254096 DEBUG oslo_concurrency.lockutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.979 254096 DEBUG oslo_concurrency.lockutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:19 compute-0 nova_compute[254092]: 2025-11-25 16:47:19.995 254096 DEBUG oslo_concurrency.lockutils [req-58f8d9a9-3e2e-48ef-b91a-588b568d91de req-ba66ec21-38ea-49f7-b864-5641fc28ea9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-6affd696-c15d-4401-8512-2aabbf55fd4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:47:20 compute-0 nova_compute[254092]: 2025-11-25 16:47:20.025 254096 DEBUG oslo_concurrency.processutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6affd696-c15d-4401-8512-2aabbf55fd4e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfacujh8t" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:20 compute-0 nova_compute[254092]: 2025-11-25 16:47:20.046 254096 DEBUG nova.storage.rbd_utils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 6affd696-c15d-4401-8512-2aabbf55fd4e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:47:20 compute-0 nova_compute[254092]: 2025-11-25 16:47:20.052 254096 DEBUG oslo_concurrency.processutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6affd696-c15d-4401-8512-2aabbf55fd4e/disk.config 6affd696-c15d-4401-8512-2aabbf55fd4e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:20 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4123912769' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:47:20 compute-0 nova_compute[254092]: 2025-11-25 16:47:20.170 254096 DEBUG nova.network.neutron [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Updating instance_info_cache with network_info: [{"id": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "address": "fa:16:3e:c4:5c:49", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792a5867-7e", "ovs_interfaceid": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:47:20 compute-0 nova_compute[254092]: 2025-11-25 16:47:20.183 254096 DEBUG oslo_concurrency.processutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6affd696-c15d-4401-8512-2aabbf55fd4e/disk.config 6affd696-c15d-4401-8512-2aabbf55fd4e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:20 compute-0 nova_compute[254092]: 2025-11-25 16:47:20.184 254096 INFO nova.virt.libvirt.driver [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Deleting local config drive /var/lib/nova/instances/6affd696-c15d-4401-8512-2aabbf55fd4e/disk.config because it was imported into RBD.
Nov 25 16:47:20 compute-0 nova_compute[254092]: 2025-11-25 16:47:20.185 254096 DEBUG oslo_concurrency.lockutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Releasing lock "refresh_cache-73301044-3bad-4401-9e30-f009d417f662" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:47:20 compute-0 nova_compute[254092]: 2025-11-25 16:47:20.186 254096 DEBUG nova.compute.manager [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Instance network_info: |[{"id": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "address": "fa:16:3e:c4:5c:49", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792a5867-7e", "ovs_interfaceid": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:47:20 compute-0 nova_compute[254092]: 2025-11-25 16:47:20.186 254096 DEBUG oslo_concurrency.lockutils [req-6afd6702-094d-4254-819b-0d01af562745 req-19e579ad-ce4f-4116-aa42-ed3124dced45 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-73301044-3bad-4401-9e30-f009d417f662" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:47:20 compute-0 nova_compute[254092]: 2025-11-25 16:47:20.186 254096 DEBUG nova.network.neutron [req-6afd6702-094d-4254-819b-0d01af562745 req-19e579ad-ce4f-4116-aa42-ed3124dced45 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Refreshing network info cache for port 792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:47:20 compute-0 nova_compute[254092]: 2025-11-25 16:47:20.189 254096 DEBUG nova.virt.libvirt.driver [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Start _get_guest_xml network_info=[{"id": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "address": "fa:16:3e:c4:5c:49", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792a5867-7e", "ovs_interfaceid": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:47:20 compute-0 nova_compute[254092]: 2025-11-25 16:47:20.193 254096 WARNING nova.virt.libvirt.driver [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:47:20 compute-0 nova_compute[254092]: 2025-11-25 16:47:20.204 254096 DEBUG nova.virt.libvirt.host [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:47:20 compute-0 nova_compute[254092]: 2025-11-25 16:47:20.204 254096 DEBUG nova.virt.libvirt.host [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:47:20 compute-0 nova_compute[254092]: 2025-11-25 16:47:20.205 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089225.204876, b4cc1fd8-a1ed-40f8-8373-33b1d1260300 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:47:20 compute-0 nova_compute[254092]: 2025-11-25 16:47:20.205 254096 INFO nova.compute.manager [-] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] VM Stopped (Lifecycle Event)
Nov 25 16:47:20 compute-0 nova_compute[254092]: 2025-11-25 16:47:20.212 254096 DEBUG nova.virt.libvirt.host [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:47:20 compute-0 nova_compute[254092]: 2025-11-25 16:47:20.212 254096 DEBUG nova.virt.libvirt.host [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:47:20 compute-0 nova_compute[254092]: 2025-11-25 16:47:20.213 254096 DEBUG nova.virt.libvirt.driver [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:47:20 compute-0 nova_compute[254092]: 2025-11-25 16:47:20.213 254096 DEBUG nova.virt.hardware [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:47:20 compute-0 nova_compute[254092]: 2025-11-25 16:47:20.213 254096 DEBUG nova.virt.hardware [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:47:20 compute-0 nova_compute[254092]: 2025-11-25 16:47:20.213 254096 DEBUG nova.virt.hardware [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:47:20 compute-0 nova_compute[254092]: 2025-11-25 16:47:20.214 254096 DEBUG nova.virt.hardware [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:47:20 compute-0 nova_compute[254092]: 2025-11-25 16:47:20.214 254096 DEBUG nova.virt.hardware [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:47:20 compute-0 nova_compute[254092]: 2025-11-25 16:47:20.214 254096 DEBUG nova.virt.hardware [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:47:20 compute-0 nova_compute[254092]: 2025-11-25 16:47:20.214 254096 DEBUG nova.virt.hardware [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:47:20 compute-0 nova_compute[254092]: 2025-11-25 16:47:20.214 254096 DEBUG nova.virt.hardware [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:47:20 compute-0 nova_compute[254092]: 2025-11-25 16:47:20.214 254096 DEBUG nova.virt.hardware [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:47:20 compute-0 nova_compute[254092]: 2025-11-25 16:47:20.215 254096 DEBUG nova.virt.hardware [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:47:20 compute-0 nova_compute[254092]: 2025-11-25 16:47:20.215 254096 DEBUG nova.virt.hardware [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:47:20 compute-0 nova_compute[254092]: 2025-11-25 16:47:20.218 254096 DEBUG oslo_concurrency.processutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:20 compute-0 kernel: tapbe2a1b3b-f8: entered promiscuous mode
Nov 25 16:47:20 compute-0 NetworkManager[48891]: <info>  [1764089240.2463] manager: (tapbe2a1b3b-f8): new Tun device (/org/freedesktop/NetworkManager/Devices/360)
Nov 25 16:47:20 compute-0 nova_compute[254092]: 2025-11-25 16:47:20.254 254096 DEBUG nova.compute.manager [None req-bc86e9ea-46bc-4882-9bcc-a56f9801d154 - - - - - -] [instance: b4cc1fd8-a1ed-40f8-8373-33b1d1260300] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:47:20 compute-0 systemd-udevd[345163]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:47:20 compute-0 ovn_controller[153477]: 2025-11-25T16:47:20Z|00845|binding|INFO|Claiming lport be2a1b3b-f8a0-4a67-9582-54b753171490 for this chassis.
Nov 25 16:47:20 compute-0 ovn_controller[153477]: 2025-11-25T16:47:20Z|00846|binding|INFO|be2a1b3b-f8a0-4a67-9582-54b753171490: Claiming fa:16:3e:8a:e5:4f 10.100.0.14
Nov 25 16:47:20 compute-0 NetworkManager[48891]: <info>  [1764089240.2920] device (tapbe2a1b3b-f8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:47:20 compute-0 NetworkManager[48891]: <info>  [1764089240.2929] device (tapbe2a1b3b-f8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:47:20 compute-0 nova_compute[254092]: 2025-11-25 16:47:20.292 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:20.300 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8a:e5:4f 10.100.0.14'], port_security=['fa:16:3e:8a:e5:4f 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '6affd696-c15d-4401-8512-2aabbf55fd4e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1431a6bc-93c8-4db5-a148-b2950f02c941', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56790f54314e4087abb5da8030b17666', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6f3e9934-51eb-4bf8-9114-93abd745996a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f25b0746-2c82-4a0a-9e71-0116018fc740, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=be2a1b3b-f8a0-4a67-9582-54b753171490) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:47:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:20.304 163338 INFO neutron.agent.ovn.metadata.agent [-] Port be2a1b3b-f8a0-4a67-9582-54b753171490 in datapath 1431a6bc-93c8-4db5-a148-b2950f02c941 bound to our chassis
Nov 25 16:47:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:20.310 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1431a6bc-93c8-4db5-a148-b2950f02c941
Nov 25 16:47:20 compute-0 ovn_controller[153477]: 2025-11-25T16:47:20Z|00847|binding|INFO|Setting lport be2a1b3b-f8a0-4a67-9582-54b753171490 ovn-installed in OVS
Nov 25 16:47:20 compute-0 ovn_controller[153477]: 2025-11-25T16:47:20Z|00848|binding|INFO|Setting lport be2a1b3b-f8a0-4a67-9582-54b753171490 up in Southbound
Nov 25 16:47:20 compute-0 nova_compute[254092]: 2025-11-25 16:47:20.315 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:20 compute-0 systemd-machined[216343]: New machine qemu-109-instance-00000058.
Nov 25 16:47:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:20.328 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7d34f18f-6405-45bc-9ff9-040c3b4dcb48]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:20 compute-0 systemd[1]: Started Virtual Machine qemu-109-instance-00000058.
Nov 25 16:47:20 compute-0 sudo[345165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:47:20 compute-0 sudo[345165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:47:20 compute-0 sudo[345165]: pam_unix(sudo:session): session closed for user root
Nov 25 16:47:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:20.365 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[24699707-7cc9-49c8-9f2c-a020c236450a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:20.368 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c86fc115-c4d4-49da-9f15-529c03ed4e37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:20 compute-0 sudo[345202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:47:20 compute-0 sudo[345202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:47:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:20.419 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[dc7c8638-a8cc-4c56-9685-ed03075425cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:20 compute-0 sudo[345202]: pam_unix(sudo:session): session closed for user root
Nov 25 16:47:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:20.445 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[26e76eae-2b74-453d-89f5-bcf6dfa30131]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1431a6bc-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:f8:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 238], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563972, 'reachable_time': 23815, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345249, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:20.465 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c2573eef-04ed-4bd2-8ccd-5ee8aaf585ed]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1431a6bc-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563984, 'tstamp': 563984}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345267, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1431a6bc-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563987, 'tstamp': 563987}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345267, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:20.466 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1431a6bc-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:20 compute-0 nova_compute[254092]: 2025-11-25 16:47:20.468 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:20 compute-0 nova_compute[254092]: 2025-11-25 16:47:20.469 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:20.469 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1431a6bc-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:20.470 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:47:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:20.470 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1431a6bc-90, col_values=(('external_ids', {'iface-id': '2b164d7c-ac7d-4965-a9d1-81827e4e9936'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:20.470 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:47:20 compute-0 sudo[345250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:47:20 compute-0 sudo[345250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:47:20 compute-0 sudo[345250]: pam_unix(sudo:session): session closed for user root
Nov 25 16:47:20 compute-0 sudo[345276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 16:47:20 compute-0 sudo[345276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:47:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:47:20 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4262003797' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:47:20 compute-0 nova_compute[254092]: 2025-11-25 16:47:20.698 254096 DEBUG oslo_concurrency.processutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:20 compute-0 nova_compute[254092]: 2025-11-25 16:47:20.722 254096 DEBUG nova.storage.rbd_utils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image 73301044-3bad-4401-9e30-f009d417f662_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:47:20 compute-0 nova_compute[254092]: 2025-11-25 16:47:20.726 254096 DEBUG oslo_concurrency.processutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:20 compute-0 nova_compute[254092]: 2025-11-25 16:47:20.899 254096 DEBUG nova.network.neutron [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Successfully created port: 54bd7c02-9f22-4656-9514-7219e656dbef _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:47:20 compute-0 nova_compute[254092]: 2025-11-25 16:47:20.935 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089225.9170492, d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:47:20 compute-0 nova_compute[254092]: 2025-11-25 16:47:20.935 254096 INFO nova.compute.manager [-] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] VM Stopped (Lifecycle Event)
Nov 25 16:47:20 compute-0 nova_compute[254092]: 2025-11-25 16:47:20.959 254096 DEBUG nova.compute.manager [None req-fa44ff10-ccab-4a6d-b0b0-00582f701b25 - - - - - -] [instance: d60cbcdd-ce8c-49cc-a4ef-f2a324be18bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:47:21 compute-0 sudo[345276]: pam_unix(sudo:session): session closed for user root
Nov 25 16:47:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 16:47:21 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 16:47:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:47:21 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:47:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 16:47:21 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:47:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 16:47:21 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:47:21 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 0bbf03ff-0b99-4ebf-b745-64f9fd3a9c04 does not exist
Nov 25 16:47:21 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev e0da7f1a-e5ce-4d44-881a-ac0ae9fd0b1a does not exist
Nov 25 16:47:21 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 860e9e3e-fccd-4b67-b670-0b215a4ae660 does not exist
Nov 25 16:47:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 16:47:21 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:47:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 16:47:21 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:47:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:47:21 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:47:21 compute-0 nova_compute[254092]: 2025-11-25 16:47:21.083 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089241.0832179, 6affd696-c15d-4401-8512-2aabbf55fd4e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:47:21 compute-0 nova_compute[254092]: 2025-11-25 16:47:21.084 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] VM Started (Lifecycle Event)
Nov 25 16:47:21 compute-0 ceph-mon[74985]: pgmap v1858: 321 pgs: 321 active+clean; 189 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 2.4 MiB/s wr, 74 op/s
Nov 25 16:47:21 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4262003797' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:47:21 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 16:47:21 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:47:21 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:47:21 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:47:21 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:47:21 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:47:21 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:47:21 compute-0 nova_compute[254092]: 2025-11-25 16:47:21.101 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:47:21 compute-0 nova_compute[254092]: 2025-11-25 16:47:21.107 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089241.084172, 6affd696-c15d-4401-8512-2aabbf55fd4e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:47:21 compute-0 nova_compute[254092]: 2025-11-25 16:47:21.107 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] VM Paused (Lifecycle Event)
Nov 25 16:47:21 compute-0 nova_compute[254092]: 2025-11-25 16:47:21.121 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:47:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:47:21 compute-0 nova_compute[254092]: 2025-11-25 16:47:21.127 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:47:21 compute-0 sudo[345412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:47:21 compute-0 sudo[345412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:47:21 compute-0 sudo[345412]: pam_unix(sudo:session): session closed for user root
Nov 25 16:47:21 compute-0 nova_compute[254092]: 2025-11-25 16:47:21.148 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:47:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:47:21 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1748233305' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:47:21 compute-0 nova_compute[254092]: 2025-11-25 16:47:21.169 254096 DEBUG oslo_concurrency.processutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:21 compute-0 nova_compute[254092]: 2025-11-25 16:47:21.170 254096 DEBUG nova.virt.libvirt.vif [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:47:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-932750089',display_name='tempest-ServerActionsTestOtherB-server-932750089',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-932750089',id=89,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBnTONYhZI5heiiXfLNfrX/OXgVvYUSn4j15WmEIpbpXRla6y35HHtq1Z0p1o7qCtHccmkb3EmOHzY/7y/6dkmUi3UFUSG+S7Mj9iMzU9G5dSAHiNyUuZOtLj/6FJpmuqA==',key_name='tempest-keypair-557412706',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fbf763b31dad40d6b0d7285dc017dd89',ramdisk_id='',reservation_id='r-e7rwgh7e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1019246920',owner_user_name='tempest-ServerActionsTestOtherB-1019246920-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:47:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='23f6db77558a477bbd8b8b46cb4107d1',uuid=73301044-3bad-4401-9e30-f009d417f662,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "address": "fa:16:3e:c4:5c:49", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792a5867-7e", "ovs_interfaceid": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:47:21 compute-0 nova_compute[254092]: 2025-11-25 16:47:21.171 254096 DEBUG nova.network.os_vif_util [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Converting VIF {"id": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "address": "fa:16:3e:c4:5c:49", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792a5867-7e", "ovs_interfaceid": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:47:21 compute-0 nova_compute[254092]: 2025-11-25 16:47:21.171 254096 DEBUG nova.network.os_vif_util [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:5c:49,bridge_name='br-int',has_traffic_filtering=True,id=792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap792a5867-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:47:21 compute-0 nova_compute[254092]: 2025-11-25 16:47:21.173 254096 DEBUG nova.objects.instance [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lazy-loading 'pci_devices' on Instance uuid 73301044-3bad-4401-9e30-f009d417f662 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:47:21 compute-0 nova_compute[254092]: 2025-11-25 16:47:21.190 254096 DEBUG nova.virt.libvirt.driver [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:47:21 compute-0 nova_compute[254092]:   <uuid>73301044-3bad-4401-9e30-f009d417f662</uuid>
Nov 25 16:47:21 compute-0 nova_compute[254092]:   <name>instance-00000059</name>
Nov 25 16:47:21 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:47:21 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:47:21 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:47:21 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:       <nova:name>tempest-ServerActionsTestOtherB-server-932750089</nova:name>
Nov 25 16:47:21 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:47:20</nova:creationTime>
Nov 25 16:47:21 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:47:21 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:47:21 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:47:21 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:47:21 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:47:21 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:47:21 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:47:21 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:47:21 compute-0 nova_compute[254092]:         <nova:user uuid="23f6db77558a477bbd8b8b46cb4107d1">tempest-ServerActionsTestOtherB-1019246920-project-member</nova:user>
Nov 25 16:47:21 compute-0 nova_compute[254092]:         <nova:project uuid="fbf763b31dad40d6b0d7285dc017dd89">tempest-ServerActionsTestOtherB-1019246920</nova:project>
Nov 25 16:47:21 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:47:21 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:47:21 compute-0 nova_compute[254092]:         <nova:port uuid="792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3">
Nov 25 16:47:21 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:47:21 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:47:21 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:47:21 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:47:21 compute-0 nova_compute[254092]:     <system>
Nov 25 16:47:21 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:47:21 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:47:21 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:47:21 compute-0 nova_compute[254092]:       <entry name="serial">73301044-3bad-4401-9e30-f009d417f662</entry>
Nov 25 16:47:21 compute-0 nova_compute[254092]:       <entry name="uuid">73301044-3bad-4401-9e30-f009d417f662</entry>
Nov 25 16:47:21 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     </system>
Nov 25 16:47:21 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:47:21 compute-0 nova_compute[254092]:   <os>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:   </os>
Nov 25 16:47:21 compute-0 nova_compute[254092]:   <features>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:   </features>
Nov 25 16:47:21 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:47:21 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:47:21 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:47:21 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:47:21 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:47:21 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/73301044-3bad-4401-9e30-f009d417f662_disk">
Nov 25 16:47:21 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:       </source>
Nov 25 16:47:21 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:47:21 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:47:21 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:47:21 compute-0 sudo[345437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:47:21 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/73301044-3bad-4401-9e30-f009d417f662_disk.config">
Nov 25 16:47:21 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:       </source>
Nov 25 16:47:21 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:47:21 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:47:21 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:47:21 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:c4:5c:49"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:       <target dev="tap792a5867-7e"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:47:21 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/73301044-3bad-4401-9e30-f009d417f662/console.log" append="off"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     <video>
Nov 25 16:47:21 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     </video>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:47:21 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:21 compute-0 sudo[345437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:47:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:47:21 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:47:21 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:47:21 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:47:21 compute-0 nova_compute[254092]: </domain>
Nov 25 16:47:21 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:47:21 compute-0 nova_compute[254092]: 2025-11-25 16:47:21.190 254096 DEBUG nova.compute.manager [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Preparing to wait for external event network-vif-plugged-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:47:21 compute-0 nova_compute[254092]: 2025-11-25 16:47:21.190 254096 DEBUG oslo_concurrency.lockutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "73301044-3bad-4401-9e30-f009d417f662-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:21 compute-0 nova_compute[254092]: 2025-11-25 16:47:21.191 254096 DEBUG oslo_concurrency.lockutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:21 compute-0 nova_compute[254092]: 2025-11-25 16:47:21.191 254096 DEBUG oslo_concurrency.lockutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:21 compute-0 nova_compute[254092]: 2025-11-25 16:47:21.192 254096 DEBUG nova.virt.libvirt.vif [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:47:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-932750089',display_name='tempest-ServerActionsTestOtherB-server-932750089',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-932750089',id=89,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBnTONYhZI5heiiXfLNfrX/OXgVvYUSn4j15WmEIpbpXRla6y35HHtq1Z0p1o7qCtHccmkb3EmOHzY/7y/6dkmUi3UFUSG+S7Mj9iMzU9G5dSAHiNyUuZOtLj/6FJpmuqA==',key_name='tempest-keypair-557412706',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fbf763b31dad40d6b0d7285dc017dd89',ramdisk_id='',reservation_id='r-e7rwgh7e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1019246920',owner_user_name='tempest-ServerActionsTestOtherB-1019246920-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:47:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='23f6db77558a477bbd8b8b46cb4107d1',uuid=73301044-3bad-4401-9e30-f009d417f662,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "address": "fa:16:3e:c4:5c:49", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792a5867-7e", "ovs_interfaceid": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:47:21 compute-0 nova_compute[254092]: 2025-11-25 16:47:21.192 254096 DEBUG nova.network.os_vif_util [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Converting VIF {"id": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "address": "fa:16:3e:c4:5c:49", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792a5867-7e", "ovs_interfaceid": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:47:21 compute-0 nova_compute[254092]: 2025-11-25 16:47:21.193 254096 DEBUG nova.network.os_vif_util [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:5c:49,bridge_name='br-int',has_traffic_filtering=True,id=792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap792a5867-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:47:21 compute-0 nova_compute[254092]: 2025-11-25 16:47:21.193 254096 DEBUG os_vif [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:5c:49,bridge_name='br-int',has_traffic_filtering=True,id=792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap792a5867-7e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:47:21 compute-0 nova_compute[254092]: 2025-11-25 16:47:21.194 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:21 compute-0 nova_compute[254092]: 2025-11-25 16:47:21.195 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:21 compute-0 nova_compute[254092]: 2025-11-25 16:47:21.195 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:47:21 compute-0 sudo[345437]: pam_unix(sudo:session): session closed for user root
Nov 25 16:47:21 compute-0 nova_compute[254092]: 2025-11-25 16:47:21.197 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:21 compute-0 nova_compute[254092]: 2025-11-25 16:47:21.198 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap792a5867-7e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:21 compute-0 nova_compute[254092]: 2025-11-25 16:47:21.198 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap792a5867-7e, col_values=(('external_ids', {'iface-id': '792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c4:5c:49', 'vm-uuid': '73301044-3bad-4401-9e30-f009d417f662'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:21 compute-0 nova_compute[254092]: 2025-11-25 16:47:21.200 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:21 compute-0 NetworkManager[48891]: <info>  [1764089241.2008] manager: (tap792a5867-7e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/361)
Nov 25 16:47:21 compute-0 nova_compute[254092]: 2025-11-25 16:47:21.202 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:47:21 compute-0 nova_compute[254092]: 2025-11-25 16:47:21.205 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:21 compute-0 nova_compute[254092]: 2025-11-25 16:47:21.206 254096 INFO os_vif [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:5c:49,bridge_name='br-int',has_traffic_filtering=True,id=792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap792a5867-7e')
Nov 25 16:47:21 compute-0 sudo[345465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:47:21 compute-0 sudo[345465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:47:21 compute-0 sudo[345465]: pam_unix(sudo:session): session closed for user root
Nov 25 16:47:21 compute-0 nova_compute[254092]: 2025-11-25 16:47:21.257 254096 DEBUG nova.virt.libvirt.driver [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:47:21 compute-0 nova_compute[254092]: 2025-11-25 16:47:21.258 254096 DEBUG nova.virt.libvirt.driver [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:47:21 compute-0 nova_compute[254092]: 2025-11-25 16:47:21.258 254096 DEBUG nova.virt.libvirt.driver [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] No VIF found with MAC fa:16:3e:c4:5c:49, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:47:21 compute-0 nova_compute[254092]: 2025-11-25 16:47:21.258 254096 INFO nova.virt.libvirt.driver [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Using config drive
Nov 25 16:47:21 compute-0 nova_compute[254092]: 2025-11-25 16:47:21.278 254096 DEBUG nova.storage.rbd_utils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image 73301044-3bad-4401-9e30-f009d417f662_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:47:21 compute-0 sudo[345491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 16:47:21 compute-0 sudo[345491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:47:21 compute-0 nova_compute[254092]: 2025-11-25 16:47:21.416 254096 DEBUG nova.network.neutron [req-6afd6702-094d-4254-819b-0d01af562745 req-19e579ad-ce4f-4116-aa42-ed3124dced45 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Updated VIF entry in instance network info cache for port 792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:47:21 compute-0 nova_compute[254092]: 2025-11-25 16:47:21.417 254096 DEBUG nova.network.neutron [req-6afd6702-094d-4254-819b-0d01af562745 req-19e579ad-ce4f-4116-aa42-ed3124dced45 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Updating instance_info_cache with network_info: [{"id": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "address": "fa:16:3e:c4:5c:49", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792a5867-7e", "ovs_interfaceid": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:47:21 compute-0 nova_compute[254092]: 2025-11-25 16:47:21.434 254096 DEBUG oslo_concurrency.lockutils [req-6afd6702-094d-4254-819b-0d01af562745 req-19e579ad-ce4f-4116-aa42-ed3124dced45 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-73301044-3bad-4401-9e30-f009d417f662" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:47:21 compute-0 nova_compute[254092]: 2025-11-25 16:47:21.490 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:47:21 compute-0 podman[345575]: 2025-11-25 16:47:21.609962561 +0000 UTC m=+0.039148115 container create 434993ca0d8cc7ad6f53f9b72ede0336290f5f31a22da3390bbaac00856b9fd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:47:21 compute-0 systemd[1]: Started libpod-conmon-434993ca0d8cc7ad6f53f9b72ede0336290f5f31a22da3390bbaac00856b9fd2.scope.
Nov 25 16:47:21 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:47:21 compute-0 podman[345575]: 2025-11-25 16:47:21.594136232 +0000 UTC m=+0.023321766 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:47:21 compute-0 podman[345575]: 2025-11-25 16:47:21.707584634 +0000 UTC m=+0.136770178 container init 434993ca0d8cc7ad6f53f9b72ede0336290f5f31a22da3390bbaac00856b9fd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 16:47:21 compute-0 podman[345575]: 2025-11-25 16:47:21.714531604 +0000 UTC m=+0.143717118 container start 434993ca0d8cc7ad6f53f9b72ede0336290f5f31a22da3390bbaac00856b9fd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_newton, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:47:21 compute-0 podman[345575]: 2025-11-25 16:47:21.717493384 +0000 UTC m=+0.146678918 container attach 434993ca0d8cc7ad6f53f9b72ede0336290f5f31a22da3390bbaac00856b9fd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_newton, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:47:21 compute-0 lucid_newton[345591]: 167 167
Nov 25 16:47:21 compute-0 systemd[1]: libpod-434993ca0d8cc7ad6f53f9b72ede0336290f5f31a22da3390bbaac00856b9fd2.scope: Deactivated successfully.
Nov 25 16:47:21 compute-0 conmon[345591]: conmon 434993ca0d8cc7ad6f53 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-434993ca0d8cc7ad6f53f9b72ede0336290f5f31a22da3390bbaac00856b9fd2.scope/container/memory.events
Nov 25 16:47:21 compute-0 podman[345575]: 2025-11-25 16:47:21.722345656 +0000 UTC m=+0.151531170 container died 434993ca0d8cc7ad6f53f9b72ede0336290f5f31a22da3390bbaac00856b9fd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_newton, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 16:47:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ce4e6ccf90b7f6e6f7ba9f616b62da07e24d39576900036bf1a70f6201016f8-merged.mount: Deactivated successfully.
Nov 25 16:47:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1859: 321 pgs: 321 active+clean; 259 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 76 KiB/s rd, 5.3 MiB/s wr, 118 op/s
Nov 25 16:47:21 compute-0 podman[345575]: 2025-11-25 16:47:21.766322361 +0000 UTC m=+0.195507905 container remove 434993ca0d8cc7ad6f53f9b72ede0336290f5f31a22da3390bbaac00856b9fd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True)
Nov 25 16:47:21 compute-0 systemd[1]: libpod-conmon-434993ca0d8cc7ad6f53f9b72ede0336290f5f31a22da3390bbaac00856b9fd2.scope: Deactivated successfully.
Nov 25 16:47:21 compute-0 podman[345616]: 2025-11-25 16:47:21.953857787 +0000 UTC m=+0.034362825 container create 3cb71bfa33cb47b098c7af4dcd5be87236f8d8aa1739e04c461385630fc48b5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mcnulty, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 16:47:21 compute-0 systemd[1]: Started libpod-conmon-3cb71bfa33cb47b098c7af4dcd5be87236f8d8aa1739e04c461385630fc48b5a.scope.
Nov 25 16:47:22 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:47:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e308916b6a3e1423a54d470daca3e132e12ce7a681f83703158623cd9aba73b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:47:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e308916b6a3e1423a54d470daca3e132e12ce7a681f83703158623cd9aba73b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:47:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e308916b6a3e1423a54d470daca3e132e12ce7a681f83703158623cd9aba73b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:47:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e308916b6a3e1423a54d470daca3e132e12ce7a681f83703158623cd9aba73b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:47:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e308916b6a3e1423a54d470daca3e132e12ce7a681f83703158623cd9aba73b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 16:47:22 compute-0 podman[345616]: 2025-11-25 16:47:22.028196068 +0000 UTC m=+0.108701126 container init 3cb71bfa33cb47b098c7af4dcd5be87236f8d8aa1739e04c461385630fc48b5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 16:47:22 compute-0 podman[345616]: 2025-11-25 16:47:21.939914378 +0000 UTC m=+0.020419436 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:47:22 compute-0 podman[345616]: 2025-11-25 16:47:22.039457174 +0000 UTC m=+0.119962222 container start 3cb71bfa33cb47b098c7af4dcd5be87236f8d8aa1739e04c461385630fc48b5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:47:22 compute-0 podman[345616]: 2025-11-25 16:47:22.042833505 +0000 UTC m=+0.123338573 container attach 3cb71bfa33cb47b098c7af4dcd5be87236f8d8aa1739e04c461385630fc48b5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:47:22 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1748233305' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:47:22 compute-0 nova_compute[254092]: 2025-11-25 16:47:22.183 254096 INFO nova.virt.libvirt.driver [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Creating config drive at /var/lib/nova/instances/73301044-3bad-4401-9e30-f009d417f662/disk.config
Nov 25 16:47:22 compute-0 nova_compute[254092]: 2025-11-25 16:47:22.190 254096 DEBUG oslo_concurrency.processutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/73301044-3bad-4401-9e30-f009d417f662/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp85sipxzv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:22 compute-0 nova_compute[254092]: 2025-11-25 16:47:22.329 254096 DEBUG oslo_concurrency.processutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/73301044-3bad-4401-9e30-f009d417f662/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp85sipxzv" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:22 compute-0 nova_compute[254092]: 2025-11-25 16:47:22.353 254096 DEBUG nova.storage.rbd_utils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image 73301044-3bad-4401-9e30-f009d417f662_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:47:22 compute-0 nova_compute[254092]: 2025-11-25 16:47:22.358 254096 DEBUG oslo_concurrency.processutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/73301044-3bad-4401-9e30-f009d417f662/disk.config 73301044-3bad-4401-9e30-f009d417f662_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:22 compute-0 nova_compute[254092]: 2025-11-25 16:47:22.509 254096 DEBUG nova.network.neutron [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Successfully updated port: 54bd7c02-9f22-4656-9514-7219e656dbef _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:47:22 compute-0 nova_compute[254092]: 2025-11-25 16:47:22.521 254096 DEBUG oslo_concurrency.processutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/73301044-3bad-4401-9e30-f009d417f662/disk.config 73301044-3bad-4401-9e30-f009d417f662_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:22 compute-0 nova_compute[254092]: 2025-11-25 16:47:22.522 254096 INFO nova.virt.libvirt.driver [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Deleting local config drive /var/lib/nova/instances/73301044-3bad-4401-9e30-f009d417f662/disk.config because it was imported into RBD.
Nov 25 16:47:22 compute-0 nova_compute[254092]: 2025-11-25 16:47:22.535 254096 DEBUG oslo_concurrency.lockutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "refresh_cache-2122cb4e-4525-451f-a46f-184e4a72cb34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:47:22 compute-0 nova_compute[254092]: 2025-11-25 16:47:22.535 254096 DEBUG oslo_concurrency.lockutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquired lock "refresh_cache-2122cb4e-4525-451f-a46f-184e4a72cb34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:47:22 compute-0 nova_compute[254092]: 2025-11-25 16:47:22.536 254096 DEBUG nova.network.neutron [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:47:22 compute-0 kernel: tap792a5867-7e: entered promiscuous mode
Nov 25 16:47:22 compute-0 NetworkManager[48891]: <info>  [1764089242.5721] manager: (tap792a5867-7e): new Tun device (/org/freedesktop/NetworkManager/Devices/362)
Nov 25 16:47:22 compute-0 ovn_controller[153477]: 2025-11-25T16:47:22Z|00849|binding|INFO|Claiming lport 792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 for this chassis.
Nov 25 16:47:22 compute-0 ovn_controller[153477]: 2025-11-25T16:47:22Z|00850|binding|INFO|792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3: Claiming fa:16:3e:c4:5c:49 10.100.0.8
Nov 25 16:47:22 compute-0 systemd-udevd[345168]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:47:22 compute-0 nova_compute[254092]: 2025-11-25 16:47:22.577 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.587 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:5c:49 10.100.0.8'], port_security=['fa:16:3e:c4:5c:49 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '73301044-3bad-4401-9e30-f009d417f662', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fbf763b31dad40d6b0d7285dc017dd89', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a96e0467-f375-4847-9156-007c15ede1a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2d4322c-9d6a-4a56-a005-6db2d2591dbd, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.588 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 in datapath 34b8c77e-8369-4eab-a81e-0825e5fa2919 bound to our chassis
Nov 25 16:47:22 compute-0 NetworkManager[48891]: <info>  [1764089242.5906] device (tap792a5867-7e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.590 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 34b8c77e-8369-4eab-a81e-0825e5fa2919
Nov 25 16:47:22 compute-0 NetworkManager[48891]: <info>  [1764089242.5914] device (tap792a5867-7e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:47:22 compute-0 nova_compute[254092]: 2025-11-25 16:47:22.605 254096 DEBUG nova.compute.manager [req-ef52c829-749b-440d-bf11-ecee6d555595 req-6b238303-57c7-4c45-9b5d-135ffb5021e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Received event network-changed-54bd7c02-9f22-4656-9514-7219e656dbef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:47:22 compute-0 nova_compute[254092]: 2025-11-25 16:47:22.606 254096 DEBUG nova.compute.manager [req-ef52c829-749b-440d-bf11-ecee6d555595 req-6b238303-57c7-4c45-9b5d-135ffb5021e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Refreshing instance network info cache due to event network-changed-54bd7c02-9f22-4656-9514-7219e656dbef. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.605 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8797a822-430e-4070-bab2-d15970e4d11e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:22 compute-0 nova_compute[254092]: 2025-11-25 16:47:22.606 254096 DEBUG oslo_concurrency.lockutils [req-ef52c829-749b-440d-bf11-ecee6d555595 req-6b238303-57c7-4c45-9b5d-135ffb5021e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-2122cb4e-4525-451f-a46f-184e4a72cb34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.606 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap34b8c77e-81 in ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.608 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap34b8c77e-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.608 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c2ee576e-55ad-4a6e-b78e-f73dee203d27]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.609 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[160705a4-23cc-40e9-83af-e2e8e5ba3b22]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.622 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[b5d7aa58-f4e6-42f8-9309-5a97598ccaa1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:22 compute-0 systemd-machined[216343]: New machine qemu-110-instance-00000059.
Nov 25 16:47:22 compute-0 systemd[1]: Started Virtual Machine qemu-110-instance-00000059.
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.644 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cc914a2e-9409-4587-931a-3ffabc269bec]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:22 compute-0 nova_compute[254092]: 2025-11-25 16:47:22.660 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:22 compute-0 ovn_controller[153477]: 2025-11-25T16:47:22Z|00851|binding|INFO|Setting lport 792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 ovn-installed in OVS
Nov 25 16:47:22 compute-0 ovn_controller[153477]: 2025-11-25T16:47:22Z|00852|binding|INFO|Setting lport 792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 up in Southbound
Nov 25 16:47:22 compute-0 nova_compute[254092]: 2025-11-25 16:47:22.667 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.673 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[4e84aa51-f890-448e-aa94-05591210fbb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.680 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[717c4cd0-d11f-4b36-a51e-6c65539b486d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:22 compute-0 NetworkManager[48891]: <info>  [1764089242.6810] manager: (tap34b8c77e-80): new Veth device (/org/freedesktop/NetworkManager/Devices/363)
Nov 25 16:47:22 compute-0 nova_compute[254092]: 2025-11-25 16:47:22.692 254096 DEBUG nova.network.neutron [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.720 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[cf53c866-4285-45ec-a27c-4c1cb7c233c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.722 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[0a513186-ff5c-4592-96de-8bbcc4231be9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:22 compute-0 NetworkManager[48891]: <info>  [1764089242.7459] device (tap34b8c77e-80): carrier: link connected
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.751 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e6ebcce5-862b-4b78-b318-aad85c29cd26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.769 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[84f90e7a-3d2f-4d53-b0da-71acfc6c300c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap34b8c77e-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:49:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 252], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570033, 'reachable_time': 42096, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345728, 'error': None, 'target': 'ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.781 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f9733fdd-47d0-48ca-b605-fc0129aa2142]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe21:4984'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570033, 'tstamp': 570033}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345731, 'error': None, 'target': 'ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.796 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6ba0a104-51b6-4b2d-911c-17af34d1f4a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap34b8c77e-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:49:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 252], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570033, 'reachable_time': 42096, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 345733, 'error': None, 'target': 'ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.826 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e9e27182-a91a-4093-a3c6-a798abb9f425]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.878 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b2c5f388-f546-44b9-823f-b6a196505fd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.879 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34b8c77e-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.880 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.880 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap34b8c77e-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:22 compute-0 nova_compute[254092]: 2025-11-25 16:47:22.882 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:22 compute-0 NetworkManager[48891]: <info>  [1764089242.8827] manager: (tap34b8c77e-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/364)
Nov 25 16:47:22 compute-0 kernel: tap34b8c77e-80: entered promiscuous mode
Nov 25 16:47:22 compute-0 nova_compute[254092]: 2025-11-25 16:47:22.885 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.887 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap34b8c77e-80, col_values=(('external_ids', {'iface-id': '031633f4-be6f-41a4-9c41-8e90000f1a4f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:22 compute-0 nova_compute[254092]: 2025-11-25 16:47:22.888 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:22 compute-0 ovn_controller[153477]: 2025-11-25T16:47:22Z|00853|binding|INFO|Releasing lport 031633f4-be6f-41a4-9c41-8e90000f1a4f from this chassis (sb_readonly=0)
Nov 25 16:47:22 compute-0 nova_compute[254092]: 2025-11-25 16:47:22.889 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.890 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/34b8c77e-8369-4eab-a81e-0825e5fa2919.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/34b8c77e-8369-4eab-a81e-0825e5fa2919.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.891 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4abc29f4-f714-4c06-a1de-f5636eb5557c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.892 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-34b8c77e-8369-4eab-a81e-0825e5fa2919
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/34b8c77e-8369-4eab-a81e-0825e5fa2919.pid.haproxy
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 34b8c77e-8369-4eab-a81e-0825e5fa2919
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:47:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:22.892 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'env', 'PROCESS_TAG=haproxy-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/34b8c77e-8369-4eab-a81e-0825e5fa2919.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:47:22 compute-0 nova_compute[254092]: 2025-11-25 16:47:22.905 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:23 compute-0 objective_mcnulty[345632]: --> passed data devices: 0 physical, 3 LVM
Nov 25 16:47:23 compute-0 objective_mcnulty[345632]: --> relative data size: 1.0
Nov 25 16:47:23 compute-0 objective_mcnulty[345632]: --> All data devices are unavailable
Nov 25 16:47:23 compute-0 systemd[1]: libpod-3cb71bfa33cb47b098c7af4dcd5be87236f8d8aa1739e04c461385630fc48b5a.scope: Deactivated successfully.
Nov 25 16:47:23 compute-0 conmon[345632]: conmon 3cb71bfa33cb47b098c7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3cb71bfa33cb47b098c7af4dcd5be87236f8d8aa1739e04c461385630fc48b5a.scope/container/memory.events
Nov 25 16:47:23 compute-0 ceph-mon[74985]: pgmap v1859: 321 pgs: 321 active+clean; 259 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 76 KiB/s rd, 5.3 MiB/s wr, 118 op/s
Nov 25 16:47:23 compute-0 podman[345759]: 2025-11-25 16:47:23.123435041 +0000 UTC m=+0.028390202 container died 3cb71bfa33cb47b098c7af4dcd5be87236f8d8aa1739e04c461385630fc48b5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mcnulty, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 16:47:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e308916b6a3e1423a54d470daca3e132e12ce7a681f83703158623cd9aba73b-merged.mount: Deactivated successfully.
Nov 25 16:47:23 compute-0 podman[345759]: 2025-11-25 16:47:23.205279885 +0000 UTC m=+0.110235026 container remove 3cb71bfa33cb47b098c7af4dcd5be87236f8d8aa1739e04c461385630fc48b5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mcnulty, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 16:47:23 compute-0 systemd[1]: libpod-conmon-3cb71bfa33cb47b098c7af4dcd5be87236f8d8aa1739e04c461385630fc48b5a.scope: Deactivated successfully.
Nov 25 16:47:23 compute-0 sudo[345491]: pam_unix(sudo:session): session closed for user root
Nov 25 16:47:23 compute-0 podman[345792]: 2025-11-25 16:47:23.273857638 +0000 UTC m=+0.054819780 container create 85f477ca333eee044d3513003d2a878b429ca319089d88a5495353550b3a5e5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 16:47:23 compute-0 systemd[1]: Started libpod-conmon-85f477ca333eee044d3513003d2a878b429ca319089d88a5495353550b3a5e5a.scope.
Nov 25 16:47:23 compute-0 sudo[345803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:47:23 compute-0 sudo[345803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:47:23 compute-0 sudo[345803]: pam_unix(sudo:session): session closed for user root
Nov 25 16:47:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:47:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94e7daa1a78ba94c079dcbea794e084a9276b64e1bfa6e1a2b7fa4bec0c3a08d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:47:23 compute-0 podman[345792]: 2025-11-25 16:47:23.244515331 +0000 UTC m=+0.025477503 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:47:23 compute-0 podman[345792]: 2025-11-25 16:47:23.350323777 +0000 UTC m=+0.131285949 container init 85f477ca333eee044d3513003d2a878b429ca319089d88a5495353550b3a5e5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:47:23 compute-0 podman[345792]: 2025-11-25 16:47:23.355727264 +0000 UTC m=+0.136689406 container start 85f477ca333eee044d3513003d2a878b429ca319089d88a5495353550b3a5e5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:47:23 compute-0 sudo[345834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:47:23 compute-0 sudo[345834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:47:23 compute-0 sudo[345834]: pam_unix(sudo:session): session closed for user root
Nov 25 16:47:23 compute-0 neutron-haproxy-ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919[345830]: [NOTICE]   (345858) : New worker (345862) forked
Nov 25 16:47:23 compute-0 neutron-haproxy-ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919[345830]: [NOTICE]   (345858) : Loading success.
Nov 25 16:47:23 compute-0 sudo[345870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:47:23 compute-0 sudo[345870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:47:23 compute-0 sudo[345870]: pam_unix(sudo:session): session closed for user root
Nov 25 16:47:23 compute-0 sudo[345896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 16:47:23 compute-0 sudo[345896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:47:23 compute-0 nova_compute[254092]: 2025-11-25 16:47:23.553 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1860: 321 pgs: 321 active+clean; 259 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 5.3 MiB/s wr, 85 op/s
Nov 25 16:47:23 compute-0 podman[345961]: 2025-11-25 16:47:23.813922345 +0000 UTC m=+0.040067170 container create c3fc14e2d53e5e0e7f739f16830a2345de5eca3dfb394836a5a89df13ab70c81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_yalow, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 16:47:23 compute-0 systemd[1]: Started libpod-conmon-c3fc14e2d53e5e0e7f739f16830a2345de5eca3dfb394836a5a89df13ab70c81.scope.
Nov 25 16:47:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:47:23 compute-0 podman[345961]: 2025-11-25 16:47:23.798380453 +0000 UTC m=+0.024525298 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:47:23 compute-0 podman[345961]: 2025-11-25 16:47:23.89802744 +0000 UTC m=+0.124172275 container init c3fc14e2d53e5e0e7f739f16830a2345de5eca3dfb394836a5a89df13ab70c81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_yalow, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:47:23 compute-0 podman[345961]: 2025-11-25 16:47:23.908131475 +0000 UTC m=+0.134276300 container start c3fc14e2d53e5e0e7f739f16830a2345de5eca3dfb394836a5a89df13ab70c81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 16:47:23 compute-0 agitated_yalow[345978]: 167 167
Nov 25 16:47:23 compute-0 systemd[1]: libpod-c3fc14e2d53e5e0e7f739f16830a2345de5eca3dfb394836a5a89df13ab70c81.scope: Deactivated successfully.
Nov 25 16:47:23 compute-0 podman[345961]: 2025-11-25 16:47:23.916522453 +0000 UTC m=+0.142667358 container attach c3fc14e2d53e5e0e7f739f16830a2345de5eca3dfb394836a5a89df13ab70c81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_yalow, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 16:47:23 compute-0 podman[345961]: 2025-11-25 16:47:23.91715117 +0000 UTC m=+0.143295995 container died c3fc14e2d53e5e0e7f739f16830a2345de5eca3dfb394836a5a89df13ab70c81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_yalow, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:47:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-6595cea6809860e4aeeddf78fe2a091798ccaa4a3f3d4fe303ce6f60a8a46a11-merged.mount: Deactivated successfully.
Nov 25 16:47:23 compute-0 podman[345961]: 2025-11-25 16:47:23.961848925 +0000 UTC m=+0.187993750 container remove c3fc14e2d53e5e0e7f739f16830a2345de5eca3dfb394836a5a89df13ab70c81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_yalow, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:47:23 compute-0 systemd[1]: libpod-conmon-c3fc14e2d53e5e0e7f739f16830a2345de5eca3dfb394836a5a89df13ab70c81.scope: Deactivated successfully.
Nov 25 16:47:24 compute-0 nova_compute[254092]: 2025-11-25 16:47:24.024 254096 DEBUG nova.network.neutron [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Updating instance_info_cache with network_info: [{"id": "54bd7c02-9f22-4656-9514-7219e656dbef", "address": "fa:16:3e:42:5b:d2", "network": {"id": "be38e015-3930-495b-9582-fe9707042e20", "bridge": "br-int", "label": "tempest-network-smoke--1777742074", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bd7c02-9f", "ovs_interfaceid": "54bd7c02-9f22-4656-9514-7219e656dbef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:47:24 compute-0 nova_compute[254092]: 2025-11-25 16:47:24.042 254096 DEBUG oslo_concurrency.lockutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Releasing lock "refresh_cache-2122cb4e-4525-451f-a46f-184e4a72cb34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:47:24 compute-0 nova_compute[254092]: 2025-11-25 16:47:24.043 254096 DEBUG nova.compute.manager [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Instance network_info: |[{"id": "54bd7c02-9f22-4656-9514-7219e656dbef", "address": "fa:16:3e:42:5b:d2", "network": {"id": "be38e015-3930-495b-9582-fe9707042e20", "bridge": "br-int", "label": "tempest-network-smoke--1777742074", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bd7c02-9f", "ovs_interfaceid": "54bd7c02-9f22-4656-9514-7219e656dbef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:47:24 compute-0 nova_compute[254092]: 2025-11-25 16:47:24.043 254096 DEBUG oslo_concurrency.lockutils [req-ef52c829-749b-440d-bf11-ecee6d555595 req-6b238303-57c7-4c45-9b5d-135ffb5021e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-2122cb4e-4525-451f-a46f-184e4a72cb34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:47:24 compute-0 nova_compute[254092]: 2025-11-25 16:47:24.043 254096 DEBUG nova.network.neutron [req-ef52c829-749b-440d-bf11-ecee6d555595 req-6b238303-57c7-4c45-9b5d-135ffb5021e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Refreshing network info cache for port 54bd7c02-9f22-4656-9514-7219e656dbef _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:47:24 compute-0 nova_compute[254092]: 2025-11-25 16:47:24.046 254096 DEBUG nova.virt.libvirt.driver [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Start _get_guest_xml network_info=[{"id": "54bd7c02-9f22-4656-9514-7219e656dbef", "address": "fa:16:3e:42:5b:d2", "network": {"id": "be38e015-3930-495b-9582-fe9707042e20", "bridge": "br-int", "label": "tempest-network-smoke--1777742074", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bd7c02-9f", "ovs_interfaceid": "54bd7c02-9f22-4656-9514-7219e656dbef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:47:24 compute-0 nova_compute[254092]: 2025-11-25 16:47:24.051 254096 WARNING nova.virt.libvirt.driver [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:47:24 compute-0 nova_compute[254092]: 2025-11-25 16:47:24.057 254096 DEBUG nova.virt.libvirt.host [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:47:24 compute-0 nova_compute[254092]: 2025-11-25 16:47:24.058 254096 DEBUG nova.virt.libvirt.host [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:47:24 compute-0 nova_compute[254092]: 2025-11-25 16:47:24.066 254096 DEBUG nova.virt.libvirt.host [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:47:24 compute-0 nova_compute[254092]: 2025-11-25 16:47:24.067 254096 DEBUG nova.virt.libvirt.host [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:47:24 compute-0 nova_compute[254092]: 2025-11-25 16:47:24.067 254096 DEBUG nova.virt.libvirt.driver [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:47:24 compute-0 nova_compute[254092]: 2025-11-25 16:47:24.067 254096 DEBUG nova.virt.hardware [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:47:24 compute-0 nova_compute[254092]: 2025-11-25 16:47:24.068 254096 DEBUG nova.virt.hardware [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:47:24 compute-0 nova_compute[254092]: 2025-11-25 16:47:24.068 254096 DEBUG nova.virt.hardware [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:47:24 compute-0 nova_compute[254092]: 2025-11-25 16:47:24.068 254096 DEBUG nova.virt.hardware [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:47:24 compute-0 nova_compute[254092]: 2025-11-25 16:47:24.069 254096 DEBUG nova.virt.hardware [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:47:24 compute-0 nova_compute[254092]: 2025-11-25 16:47:24.069 254096 DEBUG nova.virt.hardware [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:47:24 compute-0 nova_compute[254092]: 2025-11-25 16:47:24.069 254096 DEBUG nova.virt.hardware [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:47:24 compute-0 nova_compute[254092]: 2025-11-25 16:47:24.070 254096 DEBUG nova.virt.hardware [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:47:24 compute-0 nova_compute[254092]: 2025-11-25 16:47:24.070 254096 DEBUG nova.virt.hardware [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:47:24 compute-0 nova_compute[254092]: 2025-11-25 16:47:24.070 254096 DEBUG nova.virt.hardware [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:47:24 compute-0 nova_compute[254092]: 2025-11-25 16:47:24.070 254096 DEBUG nova.virt.hardware [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:47:24 compute-0 nova_compute[254092]: 2025-11-25 16:47:24.073 254096 DEBUG oslo_concurrency.processutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:24 compute-0 podman[346003]: 2025-11-25 16:47:24.15010814 +0000 UTC m=+0.042686120 container create 7a8bdfed5f9f752b6f960cdc9ebbd56114cb6ae02410b980db4d50d424723c2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:47:24 compute-0 systemd[1]: Started libpod-conmon-7a8bdfed5f9f752b6f960cdc9ebbd56114cb6ae02410b980db4d50d424723c2f.scope.
Nov 25 16:47:24 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:47:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/128736b7127285a8f7589c72f62da8d1621bafd1f0b548cba7b27df4fe26b140/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:47:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/128736b7127285a8f7589c72f62da8d1621bafd1f0b548cba7b27df4fe26b140/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:47:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/128736b7127285a8f7589c72f62da8d1621bafd1f0b548cba7b27df4fe26b140/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:47:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/128736b7127285a8f7589c72f62da8d1621bafd1f0b548cba7b27df4fe26b140/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:47:24 compute-0 podman[346003]: 2025-11-25 16:47:24.133253673 +0000 UTC m=+0.025831643 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:47:24 compute-0 podman[346003]: 2025-11-25 16:47:24.23731602 +0000 UTC m=+0.129894010 container init 7a8bdfed5f9f752b6f960cdc9ebbd56114cb6ae02410b980db4d50d424723c2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_wilson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 16:47:24 compute-0 podman[346003]: 2025-11-25 16:47:24.245912214 +0000 UTC m=+0.138490184 container start 7a8bdfed5f9f752b6f960cdc9ebbd56114cb6ae02410b980db4d50d424723c2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_wilson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:47:24 compute-0 podman[346003]: 2025-11-25 16:47:24.24906096 +0000 UTC m=+0.141638950 container attach 7a8bdfed5f9f752b6f960cdc9ebbd56114cb6ae02410b980db4d50d424723c2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 16:47:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:47:24 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2901929645' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:47:24 compute-0 nova_compute[254092]: 2025-11-25 16:47:24.612 254096 DEBUG oslo_concurrency.processutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:24 compute-0 nova_compute[254092]: 2025-11-25 16:47:24.640 254096 DEBUG nova.storage.rbd_utils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 2122cb4e-4525-451f-a46f-184e4a72cb34_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:47:24 compute-0 nova_compute[254092]: 2025-11-25 16:47:24.646 254096 DEBUG oslo_concurrency.processutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:24 compute-0 nova_compute[254092]: 2025-11-25 16:47:24.789 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089244.788521, 73301044-3bad-4401-9e30-f009d417f662 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:47:24 compute-0 nova_compute[254092]: 2025-11-25 16:47:24.789 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] VM Started (Lifecycle Event)
Nov 25 16:47:24 compute-0 nova_compute[254092]: 2025-11-25 16:47:24.809 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:47:24 compute-0 nova_compute[254092]: 2025-11-25 16:47:24.815 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089244.7915783, 73301044-3bad-4401-9e30-f009d417f662 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:47:24 compute-0 nova_compute[254092]: 2025-11-25 16:47:24.815 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] VM Paused (Lifecycle Event)
Nov 25 16:47:24 compute-0 nova_compute[254092]: 2025-11-25 16:47:24.837 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:47:24 compute-0 nova_compute[254092]: 2025-11-25 16:47:24.840 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:47:24 compute-0 nova_compute[254092]: 2025-11-25 16:47:24.859 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:47:25 compute-0 nova_compute[254092]: 2025-11-25 16:47:25.073 254096 DEBUG oslo_concurrency.lockutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Acquiring lock "1ccf7cd6-cf8d-400e-820e-940108160fa8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:25 compute-0 nova_compute[254092]: 2025-11-25 16:47:25.073 254096 DEBUG oslo_concurrency.lockutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Lock "1ccf7cd6-cf8d-400e-820e-940108160fa8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:25 compute-0 nova_compute[254092]: 2025-11-25 16:47:25.087 254096 DEBUG nova.compute.manager [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:47:25 compute-0 ceph-mon[74985]: pgmap v1860: 321 pgs: 321 active+clean; 259 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 5.3 MiB/s wr, 85 op/s
Nov 25 16:47:25 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2901929645' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:47:25 compute-0 musing_wilson[346020]: {
Nov 25 16:47:25 compute-0 musing_wilson[346020]:     "0": [
Nov 25 16:47:25 compute-0 musing_wilson[346020]:         {
Nov 25 16:47:25 compute-0 musing_wilson[346020]:             "devices": [
Nov 25 16:47:25 compute-0 musing_wilson[346020]:                 "/dev/loop3"
Nov 25 16:47:25 compute-0 musing_wilson[346020]:             ],
Nov 25 16:47:25 compute-0 musing_wilson[346020]:             "lv_name": "ceph_lv0",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:             "lv_size": "21470642176",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:             "name": "ceph_lv0",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:             "tags": {
Nov 25 16:47:25 compute-0 musing_wilson[346020]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:                 "ceph.cluster_name": "ceph",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:                 "ceph.crush_device_class": "",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:                 "ceph.encrypted": "0",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:                 "ceph.osd_id": "0",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:                 "ceph.type": "block",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:                 "ceph.vdo": "0"
Nov 25 16:47:25 compute-0 musing_wilson[346020]:             },
Nov 25 16:47:25 compute-0 musing_wilson[346020]:             "type": "block",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:             "vg_name": "ceph_vg0"
Nov 25 16:47:25 compute-0 musing_wilson[346020]:         }
Nov 25 16:47:25 compute-0 musing_wilson[346020]:     ],
Nov 25 16:47:25 compute-0 musing_wilson[346020]:     "1": [
Nov 25 16:47:25 compute-0 musing_wilson[346020]:         {
Nov 25 16:47:25 compute-0 musing_wilson[346020]:             "devices": [
Nov 25 16:47:25 compute-0 musing_wilson[346020]:                 "/dev/loop4"
Nov 25 16:47:25 compute-0 musing_wilson[346020]:             ],
Nov 25 16:47:25 compute-0 musing_wilson[346020]:             "lv_name": "ceph_lv1",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:             "lv_size": "21470642176",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:             "name": "ceph_lv1",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:             "tags": {
Nov 25 16:47:25 compute-0 musing_wilson[346020]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:                 "ceph.cluster_name": "ceph",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:                 "ceph.crush_device_class": "",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:                 "ceph.encrypted": "0",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:                 "ceph.osd_id": "1",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:                 "ceph.type": "block",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:                 "ceph.vdo": "0"
Nov 25 16:47:25 compute-0 musing_wilson[346020]:             },
Nov 25 16:47:25 compute-0 musing_wilson[346020]:             "type": "block",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:             "vg_name": "ceph_vg1"
Nov 25 16:47:25 compute-0 musing_wilson[346020]:         }
Nov 25 16:47:25 compute-0 musing_wilson[346020]:     ],
Nov 25 16:47:25 compute-0 musing_wilson[346020]:     "2": [
Nov 25 16:47:25 compute-0 musing_wilson[346020]:         {
Nov 25 16:47:25 compute-0 musing_wilson[346020]:             "devices": [
Nov 25 16:47:25 compute-0 musing_wilson[346020]:                 "/dev/loop5"
Nov 25 16:47:25 compute-0 musing_wilson[346020]:             ],
Nov 25 16:47:25 compute-0 musing_wilson[346020]:             "lv_name": "ceph_lv2",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:             "lv_size": "21470642176",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:             "name": "ceph_lv2",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:             "tags": {
Nov 25 16:47:25 compute-0 musing_wilson[346020]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:                 "ceph.cluster_name": "ceph",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:                 "ceph.crush_device_class": "",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:                 "ceph.encrypted": "0",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:                 "ceph.osd_id": "2",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:                 "ceph.type": "block",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:                 "ceph.vdo": "0"
Nov 25 16:47:25 compute-0 musing_wilson[346020]:             },
Nov 25 16:47:25 compute-0 musing_wilson[346020]:             "type": "block",
Nov 25 16:47:25 compute-0 musing_wilson[346020]:             "vg_name": "ceph_vg2"
Nov 25 16:47:25 compute-0 musing_wilson[346020]:         }
Nov 25 16:47:25 compute-0 musing_wilson[346020]:     ]
Nov 25 16:47:25 compute-0 musing_wilson[346020]: }
Nov 25 16:47:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:47:25 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4289262250' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:47:25 compute-0 nova_compute[254092]: 2025-11-25 16:47:25.159 254096 DEBUG oslo_concurrency.processutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:25 compute-0 nova_compute[254092]: 2025-11-25 16:47:25.160 254096 DEBUG nova.virt.libvirt.vif [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:47:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1399723727',display_name='tempest-TestNetworkAdvancedServerOps-server-1399723727',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1399723727',id=90,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOnJ2dj3tQavSsgt3v0xzD62McsGR8kH7FVN3Mskcpal4JOU2s80ZUbXF/gFef079w4ZACdh3Ov4E4/XDFuKoso7mgUy6/r/VedNuEZjiR2unDQEIrd20/t0Y7CqF7ga+A==',key_name='tempest-TestNetworkAdvancedServerOps-714044061',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f92d76b523b84ec9b645382bdd3192c9',ramdisk_id='',reservation_id='r-yk420f19',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1110744031',owner_user_name='tempest-TestNetworkAdvancedServerOps-1110744031-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:47:19Z,user_data=None,user_id='3b30410358c448a693a9dc40ee6aafb0',uuid=2122cb4e-4525-451f-a46f-184e4a72cb34,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "54bd7c02-9f22-4656-9514-7219e656dbef", "address": "fa:16:3e:42:5b:d2", "network": {"id": "be38e015-3930-495b-9582-fe9707042e20", "bridge": "br-int", "label": "tempest-network-smoke--1777742074", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bd7c02-9f", "ovs_interfaceid": "54bd7c02-9f22-4656-9514-7219e656dbef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:47:25 compute-0 nova_compute[254092]: 2025-11-25 16:47:25.161 254096 DEBUG nova.network.os_vif_util [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converting VIF {"id": "54bd7c02-9f22-4656-9514-7219e656dbef", "address": "fa:16:3e:42:5b:d2", "network": {"id": "be38e015-3930-495b-9582-fe9707042e20", "bridge": "br-int", "label": "tempest-network-smoke--1777742074", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bd7c02-9f", "ovs_interfaceid": "54bd7c02-9f22-4656-9514-7219e656dbef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:47:25 compute-0 nova_compute[254092]: 2025-11-25 16:47:25.161 254096 DEBUG nova.network.os_vif_util [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:42:5b:d2,bridge_name='br-int',has_traffic_filtering=True,id=54bd7c02-9f22-4656-9514-7219e656dbef,network=Network(be38e015-3930-495b-9582-fe9707042e20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54bd7c02-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:47:25 compute-0 nova_compute[254092]: 2025-11-25 16:47:25.162 254096 DEBUG nova.objects.instance [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2122cb4e-4525-451f-a46f-184e4a72cb34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:47:25 compute-0 systemd[1]: libpod-7a8bdfed5f9f752b6f960cdc9ebbd56114cb6ae02410b980db4d50d424723c2f.scope: Deactivated successfully.
Nov 25 16:47:25 compute-0 nova_compute[254092]: 2025-11-25 16:47:25.176 254096 DEBUG nova.virt.libvirt.driver [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:47:25 compute-0 nova_compute[254092]:   <uuid>2122cb4e-4525-451f-a46f-184e4a72cb34</uuid>
Nov 25 16:47:25 compute-0 nova_compute[254092]:   <name>instance-0000005a</name>
Nov 25 16:47:25 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:47:25 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:47:25 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:47:25 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1399723727</nova:name>
Nov 25 16:47:25 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:47:24</nova:creationTime>
Nov 25 16:47:25 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:47:25 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:47:25 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:47:25 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:47:25 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:47:25 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:47:25 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:47:25 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:47:25 compute-0 nova_compute[254092]:         <nova:user uuid="3b30410358c448a693a9dc40ee6aafb0">tempest-TestNetworkAdvancedServerOps-1110744031-project-member</nova:user>
Nov 25 16:47:25 compute-0 nova_compute[254092]:         <nova:project uuid="f92d76b523b84ec9b645382bdd3192c9">tempest-TestNetworkAdvancedServerOps-1110744031</nova:project>
Nov 25 16:47:25 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:47:25 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:47:25 compute-0 nova_compute[254092]:         <nova:port uuid="54bd7c02-9f22-4656-9514-7219e656dbef">
Nov 25 16:47:25 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:47:25 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:47:25 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:47:25 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:47:25 compute-0 nova_compute[254092]:     <system>
Nov 25 16:47:25 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:47:25 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:47:25 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:47:25 compute-0 nova_compute[254092]:       <entry name="serial">2122cb4e-4525-451f-a46f-184e4a72cb34</entry>
Nov 25 16:47:25 compute-0 nova_compute[254092]:       <entry name="uuid">2122cb4e-4525-451f-a46f-184e4a72cb34</entry>
Nov 25 16:47:25 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     </system>
Nov 25 16:47:25 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:47:25 compute-0 nova_compute[254092]:   <os>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:   </os>
Nov 25 16:47:25 compute-0 nova_compute[254092]:   <features>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:   </features>
Nov 25 16:47:25 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:47:25 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:47:25 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:47:25 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:47:25 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:47:25 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/2122cb4e-4525-451f-a46f-184e4a72cb34_disk">
Nov 25 16:47:25 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:       </source>
Nov 25 16:47:25 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:47:25 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:47:25 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:47:25 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/2122cb4e-4525-451f-a46f-184e4a72cb34_disk.config">
Nov 25 16:47:25 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:       </source>
Nov 25 16:47:25 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:47:25 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:47:25 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:47:25 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:42:5b:d2"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:       <target dev="tap54bd7c02-9f"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:47:25 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/2122cb4e-4525-451f-a46f-184e4a72cb34/console.log" append="off"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     <video>
Nov 25 16:47:25 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     </video>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:47:25 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:47:25 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:47:25 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:47:25 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:47:25 compute-0 nova_compute[254092]: </domain>
Nov 25 16:47:25 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:47:25 compute-0 nova_compute[254092]: 2025-11-25 16:47:25.176 254096 DEBUG nova.compute.manager [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Preparing to wait for external event network-vif-plugged-54bd7c02-9f22-4656-9514-7219e656dbef prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:47:25 compute-0 nova_compute[254092]: 2025-11-25 16:47:25.176 254096 DEBUG oslo_concurrency.lockutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "2122cb4e-4525-451f-a46f-184e4a72cb34-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:25 compute-0 nova_compute[254092]: 2025-11-25 16:47:25.177 254096 DEBUG oslo_concurrency.lockutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "2122cb4e-4525-451f-a46f-184e4a72cb34-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:25 compute-0 nova_compute[254092]: 2025-11-25 16:47:25.177 254096 DEBUG oslo_concurrency.lockutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "2122cb4e-4525-451f-a46f-184e4a72cb34-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:25 compute-0 nova_compute[254092]: 2025-11-25 16:47:25.177 254096 DEBUG nova.virt.libvirt.vif [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:47:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1399723727',display_name='tempest-TestNetworkAdvancedServerOps-server-1399723727',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1399723727',id=90,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOnJ2dj3tQavSsgt3v0xzD62McsGR8kH7FVN3Mskcpal4JOU2s80ZUbXF/gFef079w4ZACdh3Ov4E4/XDFuKoso7mgUy6/r/VedNuEZjiR2unDQEIrd20/t0Y7CqF7ga+A==',key_name='tempest-TestNetworkAdvancedServerOps-714044061',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f92d76b523b84ec9b645382bdd3192c9',ramdisk_id='',reservation_id='r-yk420f19',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1110744031',owner_user_name='tempest-TestNetworkAdvancedServerOps-1110744031-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:47:19Z,user_data=None,user_id='3b30410358c448a693a9dc40ee6aafb0',uuid=2122cb4e-4525-451f-a46f-184e4a72cb34,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "54bd7c02-9f22-4656-9514-7219e656dbef", "address": "fa:16:3e:42:5b:d2", "network": {"id": "be38e015-3930-495b-9582-fe9707042e20", "bridge": "br-int", "label": "tempest-network-smoke--1777742074", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bd7c02-9f", "ovs_interfaceid": "54bd7c02-9f22-4656-9514-7219e656dbef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:47:25 compute-0 nova_compute[254092]: 2025-11-25 16:47:25.178 254096 DEBUG nova.network.os_vif_util [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converting VIF {"id": "54bd7c02-9f22-4656-9514-7219e656dbef", "address": "fa:16:3e:42:5b:d2", "network": {"id": "be38e015-3930-495b-9582-fe9707042e20", "bridge": "br-int", "label": "tempest-network-smoke--1777742074", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bd7c02-9f", "ovs_interfaceid": "54bd7c02-9f22-4656-9514-7219e656dbef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:47:25 compute-0 nova_compute[254092]: 2025-11-25 16:47:25.178 254096 DEBUG nova.network.os_vif_util [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:42:5b:d2,bridge_name='br-int',has_traffic_filtering=True,id=54bd7c02-9f22-4656-9514-7219e656dbef,network=Network(be38e015-3930-495b-9582-fe9707042e20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54bd7c02-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:47:25 compute-0 nova_compute[254092]: 2025-11-25 16:47:25.178 254096 DEBUG os_vif [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:5b:d2,bridge_name='br-int',has_traffic_filtering=True,id=54bd7c02-9f22-4656-9514-7219e656dbef,network=Network(be38e015-3930-495b-9582-fe9707042e20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54bd7c02-9f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:47:25 compute-0 nova_compute[254092]: 2025-11-25 16:47:25.179 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:25 compute-0 nova_compute[254092]: 2025-11-25 16:47:25.179 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:25 compute-0 nova_compute[254092]: 2025-11-25 16:47:25.179 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:47:25 compute-0 nova_compute[254092]: 2025-11-25 16:47:25.182 254096 DEBUG oslo_concurrency.lockutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:25 compute-0 nova_compute[254092]: 2025-11-25 16:47:25.182 254096 DEBUG oslo_concurrency.lockutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:25 compute-0 nova_compute[254092]: 2025-11-25 16:47:25.184 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:25 compute-0 nova_compute[254092]: 2025-11-25 16:47:25.184 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap54bd7c02-9f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:25 compute-0 nova_compute[254092]: 2025-11-25 16:47:25.184 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap54bd7c02-9f, col_values=(('external_ids', {'iface-id': '54bd7c02-9f22-4656-9514-7219e656dbef', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:42:5b:d2', 'vm-uuid': '2122cb4e-4525-451f-a46f-184e4a72cb34'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:25 compute-0 nova_compute[254092]: 2025-11-25 16:47:25.186 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:25 compute-0 NetworkManager[48891]: <info>  [1764089245.1877] manager: (tap54bd7c02-9f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/365)
Nov 25 16:47:25 compute-0 nova_compute[254092]: 2025-11-25 16:47:25.189 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:47:25 compute-0 nova_compute[254092]: 2025-11-25 16:47:25.191 254096 DEBUG nova.virt.hardware [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:47:25 compute-0 nova_compute[254092]: 2025-11-25 16:47:25.191 254096 INFO nova.compute.claims [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:47:25 compute-0 nova_compute[254092]: 2025-11-25 16:47:25.196 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:25 compute-0 nova_compute[254092]: 2025-11-25 16:47:25.198 254096 INFO os_vif [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:5b:d2,bridge_name='br-int',has_traffic_filtering=True,id=54bd7c02-9f22-4656-9514-7219e656dbef,network=Network(be38e015-3930-495b-9582-fe9707042e20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54bd7c02-9f')
Nov 25 16:47:25 compute-0 podman[346133]: 2025-11-25 16:47:25.218664239 +0000 UTC m=+0.032358001 container died 7a8bdfed5f9f752b6f960cdc9ebbd56114cb6ae02410b980db4d50d424723c2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 16:47:25 compute-0 nova_compute[254092]: 2025-11-25 16:47:25.272 254096 DEBUG nova.virt.libvirt.driver [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:47:25 compute-0 nova_compute[254092]: 2025-11-25 16:47:25.273 254096 DEBUG nova.virt.libvirt.driver [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:47:25 compute-0 nova_compute[254092]: 2025-11-25 16:47:25.274 254096 DEBUG nova.virt.libvirt.driver [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] No VIF found with MAC fa:16:3e:42:5b:d2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:47:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-128736b7127285a8f7589c72f62da8d1621bafd1f0b548cba7b27df4fe26b140-merged.mount: Deactivated successfully.
Nov 25 16:47:25 compute-0 nova_compute[254092]: 2025-11-25 16:47:25.275 254096 INFO nova.virt.libvirt.driver [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Using config drive
Nov 25 16:47:25 compute-0 nova_compute[254092]: 2025-11-25 16:47:25.295 254096 DEBUG nova.storage.rbd_utils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 2122cb4e-4525-451f-a46f-184e4a72cb34_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:47:25 compute-0 podman[346133]: 2025-11-25 16:47:25.319072137 +0000 UTC m=+0.132765879 container remove 7a8bdfed5f9f752b6f960cdc9ebbd56114cb6ae02410b980db4d50d424723c2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:47:25 compute-0 systemd[1]: libpod-conmon-7a8bdfed5f9f752b6f960cdc9ebbd56114cb6ae02410b980db4d50d424723c2f.scope: Deactivated successfully.
Nov 25 16:47:25 compute-0 sudo[345896]: pam_unix(sudo:session): session closed for user root
Nov 25 16:47:25 compute-0 nova_compute[254092]: 2025-11-25 16:47:25.450 254096 DEBUG oslo_concurrency.processutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:25 compute-0 sudo[346168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:47:25 compute-0 sudo[346168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:47:25 compute-0 sudo[346168]: pam_unix(sudo:session): session closed for user root
Nov 25 16:47:25 compute-0 sudo[346194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:47:25 compute-0 sudo[346194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:47:25 compute-0 sudo[346194]: pam_unix(sudo:session): session closed for user root
Nov 25 16:47:25 compute-0 nova_compute[254092]: 2025-11-25 16:47:25.575 254096 DEBUG nova.network.neutron [req-ef52c829-749b-440d-bf11-ecee6d555595 req-6b238303-57c7-4c45-9b5d-135ffb5021e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Updated VIF entry in instance network info cache for port 54bd7c02-9f22-4656-9514-7219e656dbef. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:47:25 compute-0 nova_compute[254092]: 2025-11-25 16:47:25.575 254096 DEBUG nova.network.neutron [req-ef52c829-749b-440d-bf11-ecee6d555595 req-6b238303-57c7-4c45-9b5d-135ffb5021e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Updating instance_info_cache with network_info: [{"id": "54bd7c02-9f22-4656-9514-7219e656dbef", "address": "fa:16:3e:42:5b:d2", "network": {"id": "be38e015-3930-495b-9582-fe9707042e20", "bridge": "br-int", "label": "tempest-network-smoke--1777742074", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bd7c02-9f", "ovs_interfaceid": "54bd7c02-9f22-4656-9514-7219e656dbef", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:47:25 compute-0 nova_compute[254092]: 2025-11-25 16:47:25.589 254096 DEBUG oslo_concurrency.lockutils [req-ef52c829-749b-440d-bf11-ecee6d555595 req-6b238303-57c7-4c45-9b5d-135ffb5021e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-2122cb4e-4525-451f-a46f-184e4a72cb34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:47:25 compute-0 sudo[346219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:47:25 compute-0 sudo[346219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:47:25 compute-0 sudo[346219]: pam_unix(sudo:session): session closed for user root
Nov 25 16:47:25 compute-0 sudo[346263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 16:47:25 compute-0 sudo[346263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:47:25 compute-0 nova_compute[254092]: 2025-11-25 16:47:25.677 254096 INFO nova.virt.libvirt.driver [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Creating config drive at /var/lib/nova/instances/2122cb4e-4525-451f-a46f-184e4a72cb34/disk.config
Nov 25 16:47:25 compute-0 nova_compute[254092]: 2025-11-25 16:47:25.682 254096 DEBUG oslo_concurrency.processutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2122cb4e-4525-451f-a46f-184e4a72cb34/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkwuure9l execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1861: 321 pgs: 321 active+clean; 260 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 5.4 MiB/s wr, 98 op/s
Nov 25 16:47:25 compute-0 nova_compute[254092]: 2025-11-25 16:47:25.818 254096 DEBUG oslo_concurrency.processutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2122cb4e-4525-451f-a46f-184e4a72cb34/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkwuure9l" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:25 compute-0 nova_compute[254092]: 2025-11-25 16:47:25.849 254096 DEBUG nova.storage.rbd_utils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 2122cb4e-4525-451f-a46f-184e4a72cb34_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:47:25 compute-0 nova_compute[254092]: 2025-11-25 16:47:25.852 254096 DEBUG oslo_concurrency.processutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2122cb4e-4525-451f-a46f-184e4a72cb34/disk.config 2122cb4e-4525-451f-a46f-184e4a72cb34_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:47:25 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2422928314' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:47:25 compute-0 nova_compute[254092]: 2025-11-25 16:47:25.954 254096 DEBUG oslo_concurrency.processutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:25 compute-0 nova_compute[254092]: 2025-11-25 16:47:25.964 254096 DEBUG nova.compute.provider_tree [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:47:25 compute-0 nova_compute[254092]: 2025-11-25 16:47:25.980 254096 DEBUG nova.scheduler.client.report [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:47:26 compute-0 nova_compute[254092]: 2025-11-25 16:47:26.015 254096 DEBUG oslo_concurrency.lockutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:26 compute-0 nova_compute[254092]: 2025-11-25 16:47:26.017 254096 DEBUG nova.compute.manager [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:47:26 compute-0 podman[346366]: 2025-11-25 16:47:26.027622772 +0000 UTC m=+0.041326043 container create 21f6748474a7612dc29822a0175aaa0aa1b3d36a2e558a99f727f88a1e64ed25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 16:47:26 compute-0 systemd[1]: Started libpod-conmon-21f6748474a7612dc29822a0175aaa0aa1b3d36a2e558a99f727f88a1e64ed25.scope.
Nov 25 16:47:26 compute-0 nova_compute[254092]: 2025-11-25 16:47:26.078 254096 DEBUG oslo_concurrency.processutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2122cb4e-4525-451f-a46f-184e4a72cb34/disk.config 2122cb4e-4525-451f-a46f-184e4a72cb34_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.226s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:26 compute-0 nova_compute[254092]: 2025-11-25 16:47:26.078 254096 INFO nova.virt.libvirt.driver [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Deleting local config drive /var/lib/nova/instances/2122cb4e-4525-451f-a46f-184e4a72cb34/disk.config because it was imported into RBD.
Nov 25 16:47:26 compute-0 nova_compute[254092]: 2025-11-25 16:47:26.090 254096 DEBUG nova.compute.manager [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:47:26 compute-0 nova_compute[254092]: 2025-11-25 16:47:26.090 254096 DEBUG nova.network.neutron [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:47:26 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:47:26 compute-0 podman[346366]: 2025-11-25 16:47:26.008313798 +0000 UTC m=+0.022017089 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:47:26 compute-0 nova_compute[254092]: 2025-11-25 16:47:26.109 254096 INFO nova.virt.libvirt.driver [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:47:26 compute-0 podman[346366]: 2025-11-25 16:47:26.120409054 +0000 UTC m=+0.134112345 container init 21f6748474a7612dc29822a0175aaa0aa1b3d36a2e558a99f727f88a1e64ed25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_maxwell, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 16:47:26 compute-0 nova_compute[254092]: 2025-11-25 16:47:26.124 254096 DEBUG nova.compute.manager [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:47:26 compute-0 podman[346366]: 2025-11-25 16:47:26.12907717 +0000 UTC m=+0.142780441 container start 21f6748474a7612dc29822a0175aaa0aa1b3d36a2e558a99f727f88a1e64ed25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_maxwell, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:47:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:47:26 compute-0 kernel: tap54bd7c02-9f: entered promiscuous mode
Nov 25 16:47:26 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4289262250' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:47:26 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2422928314' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:47:26 compute-0 agitated_maxwell[346385]: 167 167
Nov 25 16:47:26 compute-0 NetworkManager[48891]: <info>  [1764089246.1375] manager: (tap54bd7c02-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/366)
Nov 25 16:47:26 compute-0 systemd[1]: libpod-21f6748474a7612dc29822a0175aaa0aa1b3d36a2e558a99f727f88a1e64ed25.scope: Deactivated successfully.
Nov 25 16:47:26 compute-0 ovn_controller[153477]: 2025-11-25T16:47:26Z|00854|binding|INFO|Claiming lport 54bd7c02-9f22-4656-9514-7219e656dbef for this chassis.
Nov 25 16:47:26 compute-0 ovn_controller[153477]: 2025-11-25T16:47:26Z|00855|binding|INFO|54bd7c02-9f22-4656-9514-7219e656dbef: Claiming fa:16:3e:42:5b:d2 10.100.0.4
Nov 25 16:47:26 compute-0 conmon[346385]: conmon 21f6748474a7612dc298 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-21f6748474a7612dc29822a0175aaa0aa1b3d36a2e558a99f727f88a1e64ed25.scope/container/memory.events
Nov 25 16:47:26 compute-0 podman[346366]: 2025-11-25 16:47:26.14086264 +0000 UTC m=+0.154565931 container attach 21f6748474a7612dc29822a0175aaa0aa1b3d36a2e558a99f727f88a1e64ed25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 16:47:26 compute-0 nova_compute[254092]: 2025-11-25 16:47:26.140 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:26 compute-0 systemd-udevd[346106]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:47:26 compute-0 podman[346366]: 2025-11-25 16:47:26.143219624 +0000 UTC m=+0.156922895 container died 21f6748474a7612dc29822a0175aaa0aa1b3d36a2e558a99f727f88a1e64ed25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_maxwell, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 16:47:26 compute-0 nova_compute[254092]: 2025-11-25 16:47:26.144 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.155 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:5b:d2 10.100.0.4'], port_security=['fa:16:3e:42:5b:d2 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '2122cb4e-4525-451f-a46f-184e4a72cb34', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-be38e015-3930-495b-9582-fe9707042e20', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e8e904eb-f3d0-4bff-8be5-5af69a444c2e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f101b358-01b6-416d-bcc6-f10ed8ec5155, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=54bd7c02-9f22-4656-9514-7219e656dbef) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.157 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 54bd7c02-9f22-4656-9514-7219e656dbef in datapath be38e015-3930-495b-9582-fe9707042e20 bound to our chassis
Nov 25 16:47:26 compute-0 NetworkManager[48891]: <info>  [1764089246.1594] device (tap54bd7c02-9f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.158 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network be38e015-3930-495b-9582-fe9707042e20
Nov 25 16:47:26 compute-0 NetworkManager[48891]: <info>  [1764089246.1605] device (tap54bd7c02-9f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.173 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[992340d0-65f3-4edb-a8aa-af487d5ab6cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.175 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbe38e015-31 in ovnmeta-be38e015-3930-495b-9582-fe9707042e20 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.177 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbe38e015-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.177 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e3a78026-90a0-452a-9659-5c7cb8b39bb1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.178 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[92788db7-5d1e-4377-beec-b9aa3e8ccb74]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:26 compute-0 systemd-machined[216343]: New machine qemu-111-instance-0000005a.
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.193 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[32c56783-ae13-4a78-a4e5-e6d339090ba3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:26 compute-0 systemd[1]: Started Virtual Machine qemu-111-instance-0000005a.
Nov 25 16:47:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-37b1209381e3ac8c8557da9bf91430733833f7ff8bf38c7f70ee5bbf004d326c-merged.mount: Deactivated successfully.
Nov 25 16:47:26 compute-0 nova_compute[254092]: 2025-11-25 16:47:26.212 254096 DEBUG nova.compute.manager [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:47:26 compute-0 nova_compute[254092]: 2025-11-25 16:47:26.216 254096 DEBUG nova.virt.libvirt.driver [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:47:26 compute-0 nova_compute[254092]: 2025-11-25 16:47:26.217 254096 INFO nova.virt.libvirt.driver [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Creating image(s)
Nov 25 16:47:26 compute-0 podman[346366]: 2025-11-25 16:47:26.224928034 +0000 UTC m=+0.238631305 container remove 21f6748474a7612dc29822a0175aaa0aa1b3d36a2e558a99f727f88a1e64ed25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_maxwell, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.230 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[446fc2ea-5cd1-4a69-bed1-e2da5543a734]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:26 compute-0 ovn_controller[153477]: 2025-11-25T16:47:26Z|00856|binding|INFO|Setting lport 54bd7c02-9f22-4656-9514-7219e656dbef ovn-installed in OVS
Nov 25 16:47:26 compute-0 ovn_controller[153477]: 2025-11-25T16:47:26Z|00857|binding|INFO|Setting lport 54bd7c02-9f22-4656-9514-7219e656dbef up in Southbound
Nov 25 16:47:26 compute-0 systemd[1]: libpod-conmon-21f6748474a7612dc29822a0175aaa0aa1b3d36a2e558a99f727f88a1e64ed25.scope: Deactivated successfully.
Nov 25 16:47:26 compute-0 nova_compute[254092]: 2025-11-25 16:47:26.248 254096 DEBUG nova.storage.rbd_utils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] rbd image 1ccf7cd6-cf8d-400e-820e-940108160fa8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.270 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[69213224-4e57-411f-abd3-d61c55c123a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:26 compute-0 nova_compute[254092]: 2025-11-25 16:47:26.274 254096 DEBUG nova.storage.rbd_utils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] rbd image 1ccf7cd6-cf8d-400e-820e-940108160fa8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.276 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f44d6dfa-903e-40bb-b5b2-3ff384c50171]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:26 compute-0 NetworkManager[48891]: <info>  [1764089246.2789] manager: (tapbe38e015-30): new Veth device (/org/freedesktop/NetworkManager/Devices/367)
Nov 25 16:47:26 compute-0 nova_compute[254092]: 2025-11-25 16:47:26.315 254096 DEBUG nova.storage.rbd_utils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] rbd image 1ccf7cd6-cf8d-400e-820e-940108160fa8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.316 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c8d31527-97ab-4f96-927f-b7f9ad1aa5cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.319 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c4375c12-2b76-4f1a-9fea-8baea2b6f9da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:26 compute-0 nova_compute[254092]: 2025-11-25 16:47:26.320 254096 DEBUG oslo_concurrency.processutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:26 compute-0 NetworkManager[48891]: <info>  [1764089246.3458] device (tapbe38e015-30): carrier: link connected
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.354 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[0ad0c409-744c-4e11-9c06-deefd7fac36f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:26 compute-0 nova_compute[254092]: 2025-11-25 16:47:26.359 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:26 compute-0 nova_compute[254092]: 2025-11-25 16:47:26.364 254096 DEBUG nova.policy [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3ccd27eb10a8431bbd43519a883a3970', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5a4c85f6be5040518f229e3e2c1c39ae', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.374 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[78829372-9e93-4e9e-ab76-f4c300de815f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbe38e015-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:2b:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 254], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570393, 'reachable_time': 21224, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346503, 'error': None, 'target': 'ovnmeta-be38e015-3930-495b-9582-fe9707042e20', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.394 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[40475270-4512-4ca1-9b10-f9298919f369]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefe:2bd3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570393, 'tstamp': 570393}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346509, 'error': None, 'target': 'ovnmeta-be38e015-3930-495b-9582-fe9707042e20', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:26 compute-0 nova_compute[254092]: 2025-11-25 16:47:26.401 254096 DEBUG oslo_concurrency.processutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:26 compute-0 nova_compute[254092]: 2025-11-25 16:47:26.401 254096 DEBUG oslo_concurrency.lockutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:26 compute-0 nova_compute[254092]: 2025-11-25 16:47:26.402 254096 DEBUG oslo_concurrency.lockutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:26 compute-0 nova_compute[254092]: 2025-11-25 16:47:26.402 254096 DEBUG oslo_concurrency.lockutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.410 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3fe66e6a-2b70-4df7-bc04-688bd97fb900]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbe38e015-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:2b:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 254], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570393, 'reachable_time': 21224, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 346517, 'error': None, 'target': 'ovnmeta-be38e015-3930-495b-9582-fe9707042e20', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:26 compute-0 nova_compute[254092]: 2025-11-25 16:47:26.432 254096 DEBUG nova.storage.rbd_utils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] rbd image 1ccf7cd6-cf8d-400e-820e-940108160fa8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:47:26 compute-0 nova_compute[254092]: 2025-11-25 16:47:26.439 254096 DEBUG oslo_concurrency.processutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 1ccf7cd6-cf8d-400e-820e-940108160fa8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.446 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9c6ba086-6272-4f66-8777-5a2bddde333a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:26 compute-0 podman[346508]: 2025-11-25 16:47:26.456462746 +0000 UTC m=+0.057582686 container create 9b94884a48884ba3ad7bcab27485cde25350fcf0a8dd6bfd50f7e1841455040e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 16:47:26 compute-0 systemd[1]: Started libpod-conmon-9b94884a48884ba3ad7bcab27485cde25350fcf0a8dd6bfd50f7e1841455040e.scope.
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.513 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a77229c7-9b24-4311-b14d-2a5614c597ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.515 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbe38e015-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.516 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.516 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbe38e015-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:26 compute-0 kernel: tapbe38e015-30: entered promiscuous mode
Nov 25 16:47:26 compute-0 NetworkManager[48891]: <info>  [1764089246.5191] manager: (tapbe38e015-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/368)
Nov 25 16:47:26 compute-0 nova_compute[254092]: 2025-11-25 16:47:26.520 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.522 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbe38e015-30, col_values=(('external_ids', {'iface-id': 'b7f55d8f-5f76-4e3e-96b9-d43d90f8b6f9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:26 compute-0 podman[346508]: 2025-11-25 16:47:26.432886846 +0000 UTC m=+0.034006796 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:47:26 compute-0 ovn_controller[153477]: 2025-11-25T16:47:26Z|00858|binding|INFO|Releasing lport b7f55d8f-5f76-4e3e-96b9-d43d90f8b6f9 from this chassis (sb_readonly=0)
Nov 25 16:47:26 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:47:26 compute-0 nova_compute[254092]: 2025-11-25 16:47:26.542 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:26 compute-0 nova_compute[254092]: 2025-11-25 16:47:26.543 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b2dba9a18e1cd731ed88bd2ee80e444da559a81f462b2fe58690e935bd9c1c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:47:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b2dba9a18e1cd731ed88bd2ee80e444da559a81f462b2fe58690e935bd9c1c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:47:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b2dba9a18e1cd731ed88bd2ee80e444da559a81f462b2fe58690e935bd9c1c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:47:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b2dba9a18e1cd731ed88bd2ee80e444da559a81f462b2fe58690e935bd9c1c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.544 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/be38e015-3930-495b-9582-fe9707042e20.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/be38e015-3930-495b-9582-fe9707042e20.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.560 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[231f71f0-b11c-4603-b3cf-07eb398d7929]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.562 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-be38e015-3930-495b-9582-fe9707042e20
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/be38e015-3930-495b-9582-fe9707042e20.pid.haproxy
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID be38e015-3930-495b-9582-fe9707042e20
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:47:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:26.562 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-be38e015-3930-495b-9582-fe9707042e20', 'env', 'PROCESS_TAG=haproxy-be38e015-3930-495b-9582-fe9707042e20', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/be38e015-3930-495b-9582-fe9707042e20.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:47:26 compute-0 podman[346508]: 2025-11-25 16:47:26.573581379 +0000 UTC m=+0.174701349 container init 9b94884a48884ba3ad7bcab27485cde25350fcf0a8dd6bfd50f7e1841455040e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 16:47:26 compute-0 podman[346508]: 2025-11-25 16:47:26.581893675 +0000 UTC m=+0.183013625 container start 9b94884a48884ba3ad7bcab27485cde25350fcf0a8dd6bfd50f7e1841455040e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:47:26 compute-0 podman[346508]: 2025-11-25 16:47:26.631934995 +0000 UTC m=+0.233054935 container attach 9b94884a48884ba3ad7bcab27485cde25350fcf0a8dd6bfd50f7e1841455040e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 16:47:26 compute-0 nova_compute[254092]: 2025-11-25 16:47:26.733 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089246.7334042, 2122cb4e-4525-451f-a46f-184e4a72cb34 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:47:26 compute-0 nova_compute[254092]: 2025-11-25 16:47:26.734 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] VM Started (Lifecycle Event)
Nov 25 16:47:26 compute-0 nova_compute[254092]: 2025-11-25 16:47:26.753 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:47:26 compute-0 nova_compute[254092]: 2025-11-25 16:47:26.757 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089246.7389889, 2122cb4e-4525-451f-a46f-184e4a72cb34 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:47:26 compute-0 nova_compute[254092]: 2025-11-25 16:47:26.757 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] VM Paused (Lifecycle Event)
Nov 25 16:47:26 compute-0 nova_compute[254092]: 2025-11-25 16:47:26.773 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:47:26 compute-0 nova_compute[254092]: 2025-11-25 16:47:26.779 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:47:26 compute-0 nova_compute[254092]: 2025-11-25 16:47:26.796 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:47:27 compute-0 podman[346644]: 2025-11-25 16:47:26.914801512 +0000 UTC m=+0.019473361 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:47:27 compute-0 nova_compute[254092]: 2025-11-25 16:47:27.112 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:27.112 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:47:27 compute-0 nova_compute[254092]: 2025-11-25 16:47:27.137 254096 DEBUG nova.compute.manager [req-e009b841-ea5d-43c3-aaea-98c2b36fe1d0 req-931fbb8e-7ea5-4c7c-a556-bee2a07a9119 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Received event network-vif-plugged-be2a1b3b-f8a0-4a67-9582-54b753171490 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:47:27 compute-0 nova_compute[254092]: 2025-11-25 16:47:27.137 254096 DEBUG oslo_concurrency.lockutils [req-e009b841-ea5d-43c3-aaea-98c2b36fe1d0 req-931fbb8e-7ea5-4c7c-a556-bee2a07a9119 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6affd696-c15d-4401-8512-2aabbf55fd4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:27 compute-0 nova_compute[254092]: 2025-11-25 16:47:27.138 254096 DEBUG oslo_concurrency.lockutils [req-e009b841-ea5d-43c3-aaea-98c2b36fe1d0 req-931fbb8e-7ea5-4c7c-a556-bee2a07a9119 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6affd696-c15d-4401-8512-2aabbf55fd4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:27 compute-0 nova_compute[254092]: 2025-11-25 16:47:27.138 254096 DEBUG oslo_concurrency.lockutils [req-e009b841-ea5d-43c3-aaea-98c2b36fe1d0 req-931fbb8e-7ea5-4c7c-a556-bee2a07a9119 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6affd696-c15d-4401-8512-2aabbf55fd4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:27 compute-0 nova_compute[254092]: 2025-11-25 16:47:27.138 254096 DEBUG nova.compute.manager [req-e009b841-ea5d-43c3-aaea-98c2b36fe1d0 req-931fbb8e-7ea5-4c7c-a556-bee2a07a9119 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Processing event network-vif-plugged-be2a1b3b-f8a0-4a67-9582-54b753171490 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:47:27 compute-0 nova_compute[254092]: 2025-11-25 16:47:27.139 254096 DEBUG nova.compute.manager [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:47:27 compute-0 nova_compute[254092]: 2025-11-25 16:47:27.143 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089247.143349, 6affd696-c15d-4401-8512-2aabbf55fd4e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:47:27 compute-0 nova_compute[254092]: 2025-11-25 16:47:27.143 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] VM Resumed (Lifecycle Event)
Nov 25 16:47:27 compute-0 nova_compute[254092]: 2025-11-25 16:47:27.145 254096 DEBUG nova.virt.libvirt.driver [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:47:27 compute-0 nova_compute[254092]: 2025-11-25 16:47:27.152 254096 INFO nova.virt.libvirt.driver [-] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Instance spawned successfully.
Nov 25 16:47:27 compute-0 nova_compute[254092]: 2025-11-25 16:47:27.153 254096 DEBUG nova.virt.libvirt.driver [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:47:27 compute-0 nova_compute[254092]: 2025-11-25 16:47:27.169 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:47:27 compute-0 nova_compute[254092]: 2025-11-25 16:47:27.172 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:47:27 compute-0 nova_compute[254092]: 2025-11-25 16:47:27.193 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:47:27 compute-0 nova_compute[254092]: 2025-11-25 16:47:27.201 254096 DEBUG nova.virt.libvirt.driver [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:47:27 compute-0 nova_compute[254092]: 2025-11-25 16:47:27.201 254096 DEBUG nova.virt.libvirt.driver [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:47:27 compute-0 nova_compute[254092]: 2025-11-25 16:47:27.202 254096 DEBUG nova.virt.libvirt.driver [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:47:27 compute-0 nova_compute[254092]: 2025-11-25 16:47:27.202 254096 DEBUG nova.virt.libvirt.driver [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:47:27 compute-0 nova_compute[254092]: 2025-11-25 16:47:27.203 254096 DEBUG nova.virt.libvirt.driver [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:47:27 compute-0 nova_compute[254092]: 2025-11-25 16:47:27.203 254096 DEBUG nova.virt.libvirt.driver [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:47:27 compute-0 ceph-mon[74985]: pgmap v1861: 321 pgs: 321 active+clean; 260 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 5.4 MiB/s wr, 98 op/s
Nov 25 16:47:27 compute-0 nova_compute[254092]: 2025-11-25 16:47:27.271 254096 DEBUG nova.network.neutron [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Successfully created port: 2fd7f15a-e429-4b39-86da-980a7fbc785f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:47:27 compute-0 podman[346644]: 2025-11-25 16:47:27.288856847 +0000 UTC m=+0.393528676 container create 92c1615b2d8271bd8bf16bfb0e92d9db99affd2dba8a75ee8cb03900200ad2b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-be38e015-3930-495b-9582-fe9707042e20, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:47:27 compute-0 nova_compute[254092]: 2025-11-25 16:47:27.308 254096 INFO nova.compute.manager [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Took 14.20 seconds to spawn the instance on the hypervisor.
Nov 25 16:47:27 compute-0 nova_compute[254092]: 2025-11-25 16:47:27.309 254096 DEBUG nova.compute.manager [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:47:27 compute-0 systemd[1]: Started libpod-conmon-92c1615b2d8271bd8bf16bfb0e92d9db99affd2dba8a75ee8cb03900200ad2b7.scope.
Nov 25 16:47:27 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:47:27 compute-0 nova_compute[254092]: 2025-11-25 16:47:27.385 254096 INFO nova.compute.manager [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Took 15.11 seconds to build instance.
Nov 25 16:47:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/838921d87160c6df4cb470730be2019fa42a93dba0b126d7b101a3d59de1d523/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:47:27 compute-0 nova_compute[254092]: 2025-11-25 16:47:27.404 254096 DEBUG oslo_concurrency.lockutils [None req-9daae9a3-0000-46e5-a856-d52fc2c4ff18 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "6affd696-c15d-4401-8512-2aabbf55fd4e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.226s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:27 compute-0 podman[346644]: 2025-11-25 16:47:27.41335408 +0000 UTC m=+0.518025939 container init 92c1615b2d8271bd8bf16bfb0e92d9db99affd2dba8a75ee8cb03900200ad2b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-be38e015-3930-495b-9582-fe9707042e20, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 16:47:27 compute-0 nova_compute[254092]: 2025-11-25 16:47:27.415 254096 DEBUG oslo_concurrency.processutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 1ccf7cd6-cf8d-400e-820e-940108160fa8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.976s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:27 compute-0 podman[346644]: 2025-11-25 16:47:27.421559423 +0000 UTC m=+0.526231252 container start 92c1615b2d8271bd8bf16bfb0e92d9db99affd2dba8a75ee8cb03900200ad2b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-be38e015-3930-495b-9582-fe9707042e20, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 16:47:27 compute-0 neutron-haproxy-ovnmeta-be38e015-3930-495b-9582-fe9707042e20[346664]: [NOTICE]   (346681) : New worker (346697) forked
Nov 25 16:47:27 compute-0 neutron-haproxy-ovnmeta-be38e015-3930-495b-9582-fe9707042e20[346664]: [NOTICE]   (346681) : Loading success.
Nov 25 16:47:27 compute-0 nova_compute[254092]: 2025-11-25 16:47:27.481 254096 DEBUG nova.storage.rbd_utils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] resizing rbd image 1ccf7cd6-cf8d-400e-820e-940108160fa8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:47:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:27.486 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 16:47:27 compute-0 nova_compute[254092]: 2025-11-25 16:47:27.606 254096 DEBUG nova.objects.instance [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Lazy-loading 'migration_context' on Instance uuid 1ccf7cd6-cf8d-400e-820e-940108160fa8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:47:27 compute-0 exciting_bardeen[346572]: {
Nov 25 16:47:27 compute-0 exciting_bardeen[346572]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 16:47:27 compute-0 exciting_bardeen[346572]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:47:27 compute-0 exciting_bardeen[346572]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 16:47:27 compute-0 exciting_bardeen[346572]:         "osd_id": 1,
Nov 25 16:47:27 compute-0 exciting_bardeen[346572]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:47:27 compute-0 exciting_bardeen[346572]:         "type": "bluestore"
Nov 25 16:47:27 compute-0 exciting_bardeen[346572]:     },
Nov 25 16:47:27 compute-0 exciting_bardeen[346572]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 16:47:27 compute-0 exciting_bardeen[346572]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:47:27 compute-0 exciting_bardeen[346572]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 16:47:27 compute-0 exciting_bardeen[346572]:         "osd_id": 2,
Nov 25 16:47:27 compute-0 exciting_bardeen[346572]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:47:27 compute-0 exciting_bardeen[346572]:         "type": "bluestore"
Nov 25 16:47:27 compute-0 exciting_bardeen[346572]:     },
Nov 25 16:47:27 compute-0 exciting_bardeen[346572]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 16:47:27 compute-0 exciting_bardeen[346572]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:47:27 compute-0 exciting_bardeen[346572]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 16:47:27 compute-0 exciting_bardeen[346572]:         "osd_id": 0,
Nov 25 16:47:27 compute-0 exciting_bardeen[346572]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:47:27 compute-0 exciting_bardeen[346572]:         "type": "bluestore"
Nov 25 16:47:27 compute-0 exciting_bardeen[346572]:     }
Nov 25 16:47:27 compute-0 exciting_bardeen[346572]: }
Nov 25 16:47:27 compute-0 nova_compute[254092]: 2025-11-25 16:47:27.619 254096 DEBUG nova.virt.libvirt.driver [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:47:27 compute-0 nova_compute[254092]: 2025-11-25 16:47:27.620 254096 DEBUG nova.virt.libvirt.driver [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Ensure instance console log exists: /var/lib/nova/instances/1ccf7cd6-cf8d-400e-820e-940108160fa8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:47:27 compute-0 nova_compute[254092]: 2025-11-25 16:47:27.621 254096 DEBUG oslo_concurrency.lockutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:27 compute-0 nova_compute[254092]: 2025-11-25 16:47:27.622 254096 DEBUG oslo_concurrency.lockutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:27 compute-0 nova_compute[254092]: 2025-11-25 16:47:27.622 254096 DEBUG oslo_concurrency.lockutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:27 compute-0 systemd[1]: libpod-9b94884a48884ba3ad7bcab27485cde25350fcf0a8dd6bfd50f7e1841455040e.scope: Deactivated successfully.
Nov 25 16:47:27 compute-0 systemd[1]: libpod-9b94884a48884ba3ad7bcab27485cde25350fcf0a8dd6bfd50f7e1841455040e.scope: Consumed 1.016s CPU time.
Nov 25 16:47:27 compute-0 podman[346774]: 2025-11-25 16:47:27.692579648 +0000 UTC m=+0.031056265 container died 9b94884a48884ba3ad7bcab27485cde25350fcf0a8dd6bfd50f7e1841455040e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:47:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b2dba9a18e1cd731ed88bd2ee80e444da559a81f462b2fe58690e935bd9c1c9-merged.mount: Deactivated successfully.
Nov 25 16:47:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1862: 321 pgs: 321 active+clean; 260 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 4.4 MiB/s wr, 75 op/s
Nov 25 16:47:27 compute-0 podman[346774]: 2025-11-25 16:47:27.789252445 +0000 UTC m=+0.127729032 container remove 9b94884a48884ba3ad7bcab27485cde25350fcf0a8dd6bfd50f7e1841455040e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:47:27 compute-0 systemd[1]: libpod-conmon-9b94884a48884ba3ad7bcab27485cde25350fcf0a8dd6bfd50f7e1841455040e.scope: Deactivated successfully.
Nov 25 16:47:27 compute-0 sudo[346263]: pam_unix(sudo:session): session closed for user root
Nov 25 16:47:27 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:47:27 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:47:27 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:47:27 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:47:27 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 8d33e3f0-5479-4331-be94-8630fb77f48c does not exist
Nov 25 16:47:27 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev cd0fef15-99e5-4948-a9b9-bef8def05f2f does not exist
Nov 25 16:47:27 compute-0 sudo[346790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:47:27 compute-0 sudo[346790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:47:27 compute-0 sudo[346790]: pam_unix(sudo:session): session closed for user root
Nov 25 16:47:27 compute-0 sudo[346815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 16:47:27 compute-0 sudo[346815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:47:27 compute-0 sudo[346815]: pam_unix(sudo:session): session closed for user root
Nov 25 16:47:28 compute-0 nova_compute[254092]: 2025-11-25 16:47:28.033 254096 DEBUG nova.network.neutron [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Successfully updated port: 2fd7f15a-e429-4b39-86da-980a7fbc785f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:47:28 compute-0 nova_compute[254092]: 2025-11-25 16:47:28.050 254096 DEBUG oslo_concurrency.lockutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Acquiring lock "refresh_cache-1ccf7cd6-cf8d-400e-820e-940108160fa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:47:28 compute-0 nova_compute[254092]: 2025-11-25 16:47:28.051 254096 DEBUG oslo_concurrency.lockutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Acquired lock "refresh_cache-1ccf7cd6-cf8d-400e-820e-940108160fa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:47:28 compute-0 nova_compute[254092]: 2025-11-25 16:47:28.051 254096 DEBUG nova.network.neutron [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:47:28 compute-0 nova_compute[254092]: 2025-11-25 16:47:28.123 254096 DEBUG nova.compute.manager [req-0c50f906-aef0-40a2-9d89-dc4b9d5d9f60 req-370ae1e2-e8ed-4573-895c-a5e2ac340398 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Received event network-changed-2fd7f15a-e429-4b39-86da-980a7fbc785f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:47:28 compute-0 nova_compute[254092]: 2025-11-25 16:47:28.123 254096 DEBUG nova.compute.manager [req-0c50f906-aef0-40a2-9d89-dc4b9d5d9f60 req-370ae1e2-e8ed-4573-895c-a5e2ac340398 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Refreshing instance network info cache due to event network-changed-2fd7f15a-e429-4b39-86da-980a7fbc785f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:47:28 compute-0 nova_compute[254092]: 2025-11-25 16:47:28.123 254096 DEBUG oslo_concurrency.lockutils [req-0c50f906-aef0-40a2-9d89-dc4b9d5d9f60 req-370ae1e2-e8ed-4573-895c-a5e2ac340398 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-1ccf7cd6-cf8d-400e-820e-940108160fa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:47:28 compute-0 nova_compute[254092]: 2025-11-25 16:47:28.182 254096 DEBUG nova.network.neutron [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:47:28 compute-0 nova_compute[254092]: 2025-11-25 16:47:28.588 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:28 compute-0 ceph-mon[74985]: pgmap v1862: 321 pgs: 321 active+clean; 260 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 4.4 MiB/s wr, 75 op/s
Nov 25 16:47:28 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:47:28 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:47:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1863: 321 pgs: 321 active+clean; 260 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 2.9 MiB/s wr, 61 op/s
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.769 254096 DEBUG nova.network.neutron [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Updating instance_info_cache with network_info: [{"id": "2fd7f15a-e429-4b39-86da-980a7fbc785f", "address": "fa:16:3e:14:46:44", "network": {"id": "edeacdb8-47e8-4402-a14f-718b48aff73b", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-625792220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a4c85f6be5040518f229e3e2c1c39ae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fd7f15a-e4", "ovs_interfaceid": "2fd7f15a-e429-4b39-86da-980a7fbc785f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.790 254096 DEBUG oslo_concurrency.lockutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Releasing lock "refresh_cache-1ccf7cd6-cf8d-400e-820e-940108160fa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.790 254096 DEBUG nova.compute.manager [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Instance network_info: |[{"id": "2fd7f15a-e429-4b39-86da-980a7fbc785f", "address": "fa:16:3e:14:46:44", "network": {"id": "edeacdb8-47e8-4402-a14f-718b48aff73b", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-625792220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a4c85f6be5040518f229e3e2c1c39ae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fd7f15a-e4", "ovs_interfaceid": "2fd7f15a-e429-4b39-86da-980a7fbc785f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.790 254096 DEBUG oslo_concurrency.lockutils [req-0c50f906-aef0-40a2-9d89-dc4b9d5d9f60 req-370ae1e2-e8ed-4573-895c-a5e2ac340398 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-1ccf7cd6-cf8d-400e-820e-940108160fa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.791 254096 DEBUG nova.network.neutron [req-0c50f906-aef0-40a2-9d89-dc4b9d5d9f60 req-370ae1e2-e8ed-4573-895c-a5e2ac340398 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Refreshing network info cache for port 2fd7f15a-e429-4b39-86da-980a7fbc785f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.795 254096 DEBUG nova.virt.libvirt.driver [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Start _get_guest_xml network_info=[{"id": "2fd7f15a-e429-4b39-86da-980a7fbc785f", "address": "fa:16:3e:14:46:44", "network": {"id": "edeacdb8-47e8-4402-a14f-718b48aff73b", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-625792220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a4c85f6be5040518f229e3e2c1c39ae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fd7f15a-e4", "ovs_interfaceid": "2fd7f15a-e429-4b39-86da-980a7fbc785f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.800 254096 WARNING nova.virt.libvirt.driver [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.807 254096 DEBUG nova.virt.libvirt.host [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.807 254096 DEBUG nova.virt.libvirt.host [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.814 254096 DEBUG nova.virt.libvirt.host [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.815 254096 DEBUG nova.virt.libvirt.host [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.815 254096 DEBUG nova.virt.libvirt.driver [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.815 254096 DEBUG nova.virt.hardware [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.816 254096 DEBUG nova.virt.hardware [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.816 254096 DEBUG nova.virt.hardware [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.816 254096 DEBUG nova.virt.hardware [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.817 254096 DEBUG nova.virt.hardware [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.817 254096 DEBUG nova.virt.hardware [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.817 254096 DEBUG nova.virt.hardware [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.817 254096 DEBUG nova.virt.hardware [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.817 254096 DEBUG nova.virt.hardware [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.818 254096 DEBUG nova.virt.hardware [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.818 254096 DEBUG nova.virt.hardware [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.821 254096 DEBUG oslo_concurrency.processutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.908 254096 DEBUG nova.compute.manager [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Received event network-vif-plugged-be2a1b3b-f8a0-4a67-9582-54b753171490 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.908 254096 DEBUG oslo_concurrency.lockutils [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6affd696-c15d-4401-8512-2aabbf55fd4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.909 254096 DEBUG oslo_concurrency.lockutils [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6affd696-c15d-4401-8512-2aabbf55fd4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.909 254096 DEBUG oslo_concurrency.lockutils [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6affd696-c15d-4401-8512-2aabbf55fd4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.909 254096 DEBUG nova.compute.manager [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] No waiting events found dispatching network-vif-plugged-be2a1b3b-f8a0-4a67-9582-54b753171490 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.909 254096 WARNING nova.compute.manager [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Received unexpected event network-vif-plugged-be2a1b3b-f8a0-4a67-9582-54b753171490 for instance with vm_state active and task_state None.
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.910 254096 DEBUG nova.compute.manager [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Received event network-vif-plugged-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.910 254096 DEBUG oslo_concurrency.lockutils [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "73301044-3bad-4401-9e30-f009d417f662-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.910 254096 DEBUG oslo_concurrency.lockutils [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.910 254096 DEBUG oslo_concurrency.lockutils [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.910 254096 DEBUG nova.compute.manager [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Processing event network-vif-plugged-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.910 254096 DEBUG nova.compute.manager [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Received event network-vif-plugged-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.911 254096 DEBUG oslo_concurrency.lockutils [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "73301044-3bad-4401-9e30-f009d417f662-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.911 254096 DEBUG oslo_concurrency.lockutils [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.911 254096 DEBUG oslo_concurrency.lockutils [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.911 254096 DEBUG nova.compute.manager [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] No waiting events found dispatching network-vif-plugged-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.911 254096 WARNING nova.compute.manager [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Received unexpected event network-vif-plugged-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 for instance with vm_state building and task_state spawning.
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.911 254096 DEBUG nova.compute.manager [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Received event network-vif-plugged-54bd7c02-9f22-4656-9514-7219e656dbef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.911 254096 DEBUG oslo_concurrency.lockutils [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2122cb4e-4525-451f-a46f-184e4a72cb34-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.912 254096 DEBUG oslo_concurrency.lockutils [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2122cb4e-4525-451f-a46f-184e4a72cb34-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.912 254096 DEBUG oslo_concurrency.lockutils [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2122cb4e-4525-451f-a46f-184e4a72cb34-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.912 254096 DEBUG nova.compute.manager [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Processing event network-vif-plugged-54bd7c02-9f22-4656-9514-7219e656dbef _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.912 254096 DEBUG nova.compute.manager [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Received event network-vif-plugged-54bd7c02-9f22-4656-9514-7219e656dbef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.912 254096 DEBUG oslo_concurrency.lockutils [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2122cb4e-4525-451f-a46f-184e4a72cb34-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.912 254096 DEBUG oslo_concurrency.lockutils [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2122cb4e-4525-451f-a46f-184e4a72cb34-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.913 254096 DEBUG oslo_concurrency.lockutils [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2122cb4e-4525-451f-a46f-184e4a72cb34-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.913 254096 DEBUG nova.compute.manager [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] No waiting events found dispatching network-vif-plugged-54bd7c02-9f22-4656-9514-7219e656dbef pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.913 254096 WARNING nova.compute.manager [req-6672e6d2-d965-4a41-ac8c-a2c955c1a812 req-861ef15e-13f0-4a65-98e0-4d767e70c3ba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Received unexpected event network-vif-plugged-54bd7c02-9f22-4656-9514-7219e656dbef for instance with vm_state building and task_state spawning.
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.913 254096 DEBUG nova.compute.manager [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.914 254096 DEBUG nova.compute.manager [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.929 254096 DEBUG nova.virt.libvirt.driver [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.957 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089249.9483347, 73301044-3bad-4401-9e30-f009d417f662 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.957 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] VM Resumed (Lifecycle Event)
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.959 254096 DEBUG nova.virt.libvirt.driver [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.963 254096 INFO nova.virt.libvirt.driver [-] [instance: 73301044-3bad-4401-9e30-f009d417f662] Instance spawned successfully.
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.963 254096 DEBUG nova.virt.libvirt.driver [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.968 254096 INFO nova.virt.libvirt.driver [-] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Instance spawned successfully.
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.968 254096 DEBUG nova.virt.libvirt.driver [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.981 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:47:29 compute-0 nova_compute[254092]: 2025-11-25 16:47:29.989 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.011 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.011 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089249.9484584, 2122cb4e-4525-451f-a46f-184e4a72cb34 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.016 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] VM Resumed (Lifecycle Event)
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.020 254096 DEBUG nova.virt.libvirt.driver [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.021 254096 DEBUG nova.virt.libvirt.driver [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.021 254096 DEBUG nova.virt.libvirt.driver [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.022 254096 DEBUG nova.virt.libvirt.driver [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.022 254096 DEBUG nova.virt.libvirt.driver [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.023 254096 DEBUG nova.virt.libvirt.driver [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.027 254096 DEBUG nova.virt.libvirt.driver [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.027 254096 DEBUG nova.virt.libvirt.driver [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.027 254096 DEBUG nova.virt.libvirt.driver [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.028 254096 DEBUG nova.virt.libvirt.driver [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.028 254096 DEBUG nova.virt.libvirt.driver [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.029 254096 DEBUG nova.virt.libvirt.driver [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.033 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.037 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.071 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.187 254096 INFO nova.compute.manager [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Took 10.88 seconds to spawn the instance on the hypervisor.
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.187 254096 DEBUG nova.compute.manager [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.188 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.232 254096 INFO nova.compute.manager [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Took 15.44 seconds to spawn the instance on the hypervisor.
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.233 254096 DEBUG nova.compute.manager [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:47:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:47:30 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4019731349' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.307 254096 DEBUG oslo_concurrency.processutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.331 254096 DEBUG nova.storage.rbd_utils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] rbd image 1ccf7cd6-cf8d-400e-820e-940108160fa8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.335 254096 DEBUG oslo_concurrency.processutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.371 254096 INFO nova.compute.manager [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Took 11.99 seconds to build instance.
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.378 254096 INFO nova.compute.manager [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Took 16.47 seconds to build instance.
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.499 254096 DEBUG oslo_concurrency.lockutils [None req-605f4dd6-30d6-43ed-a60c-4b25f2266f41 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "2122cb4e-4525-451f-a46f-184e4a72cb34" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.176s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.518 254096 DEBUG oslo_concurrency.lockutils [None req-05097e95-b6b8-4fbe-98a9-93a7d07552c0 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:47:30 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2854414798' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.858 254096 DEBUG oslo_concurrency.processutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.860 254096 DEBUG nova.virt.libvirt.vif [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:47:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-349447593',display_name='tempest-ServerPasswordTestJSON-server-349447593',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-349447593',id=91,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5a4c85f6be5040518f229e3e2c1c39ae',ramdisk_id='',reservation_id='r-gsqeuqfs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-1941740454',owner_user_name='tempest-ServerPasswordTestJSON-1941740454-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:47:26Z,user_data=None,user_id='3ccd27eb10a8431bbd43519a883a3970',uuid=1ccf7cd6-cf8d-400e-820e-940108160fa8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2fd7f15a-e429-4b39-86da-980a7fbc785f", "address": "fa:16:3e:14:46:44", "network": {"id": "edeacdb8-47e8-4402-a14f-718b48aff73b", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-625792220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a4c85f6be5040518f229e3e2c1c39ae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fd7f15a-e4", "ovs_interfaceid": "2fd7f15a-e429-4b39-86da-980a7fbc785f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.860 254096 DEBUG nova.network.os_vif_util [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Converting VIF {"id": "2fd7f15a-e429-4b39-86da-980a7fbc785f", "address": "fa:16:3e:14:46:44", "network": {"id": "edeacdb8-47e8-4402-a14f-718b48aff73b", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-625792220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a4c85f6be5040518f229e3e2c1c39ae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fd7f15a-e4", "ovs_interfaceid": "2fd7f15a-e429-4b39-86da-980a7fbc785f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.861 254096 DEBUG nova.network.os_vif_util [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:46:44,bridge_name='br-int',has_traffic_filtering=True,id=2fd7f15a-e429-4b39-86da-980a7fbc785f,network=Network(edeacdb8-47e8-4402-a14f-718b48aff73b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fd7f15a-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.863 254096 DEBUG nova.objects.instance [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Lazy-loading 'pci_devices' on Instance uuid 1ccf7cd6-cf8d-400e-820e-940108160fa8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.879 254096 DEBUG nova.virt.libvirt.driver [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:47:30 compute-0 nova_compute[254092]:   <uuid>1ccf7cd6-cf8d-400e-820e-940108160fa8</uuid>
Nov 25 16:47:30 compute-0 nova_compute[254092]:   <name>instance-0000005b</name>
Nov 25 16:47:30 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:47:30 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:47:30 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:47:30 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:       <nova:name>tempest-ServerPasswordTestJSON-server-349447593</nova:name>
Nov 25 16:47:30 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:47:29</nova:creationTime>
Nov 25 16:47:30 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:47:30 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:47:30 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:47:30 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:47:30 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:47:30 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:47:30 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:47:30 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:47:30 compute-0 nova_compute[254092]:         <nova:user uuid="3ccd27eb10a8431bbd43519a883a3970">tempest-ServerPasswordTestJSON-1941740454-project-member</nova:user>
Nov 25 16:47:30 compute-0 nova_compute[254092]:         <nova:project uuid="5a4c85f6be5040518f229e3e2c1c39ae">tempest-ServerPasswordTestJSON-1941740454</nova:project>
Nov 25 16:47:30 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:47:30 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:47:30 compute-0 nova_compute[254092]:         <nova:port uuid="2fd7f15a-e429-4b39-86da-980a7fbc785f">
Nov 25 16:47:30 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:47:30 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:47:30 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:47:30 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:47:30 compute-0 nova_compute[254092]:     <system>
Nov 25 16:47:30 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:47:30 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:47:30 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:47:30 compute-0 nova_compute[254092]:       <entry name="serial">1ccf7cd6-cf8d-400e-820e-940108160fa8</entry>
Nov 25 16:47:30 compute-0 nova_compute[254092]:       <entry name="uuid">1ccf7cd6-cf8d-400e-820e-940108160fa8</entry>
Nov 25 16:47:30 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     </system>
Nov 25 16:47:30 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:47:30 compute-0 nova_compute[254092]:   <os>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:   </os>
Nov 25 16:47:30 compute-0 nova_compute[254092]:   <features>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:   </features>
Nov 25 16:47:30 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:47:30 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:47:30 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:47:30 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:47:30 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:47:30 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/1ccf7cd6-cf8d-400e-820e-940108160fa8_disk">
Nov 25 16:47:30 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:       </source>
Nov 25 16:47:30 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:47:30 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:47:30 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:47:30 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/1ccf7cd6-cf8d-400e-820e-940108160fa8_disk.config">
Nov 25 16:47:30 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:       </source>
Nov 25 16:47:30 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:47:30 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:47:30 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:47:30 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:14:46:44"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:       <target dev="tap2fd7f15a-e4"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:47:30 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/1ccf7cd6-cf8d-400e-820e-940108160fa8/console.log" append="off"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     <video>
Nov 25 16:47:30 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     </video>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:47:30 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:47:30 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:47:30 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:47:30 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:47:30 compute-0 nova_compute[254092]: </domain>
Nov 25 16:47:30 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.881 254096 DEBUG nova.compute.manager [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Preparing to wait for external event network-vif-plugged-2fd7f15a-e429-4b39-86da-980a7fbc785f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.881 254096 DEBUG oslo_concurrency.lockutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Acquiring lock "1ccf7cd6-cf8d-400e-820e-940108160fa8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.882 254096 DEBUG oslo_concurrency.lockutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Lock "1ccf7cd6-cf8d-400e-820e-940108160fa8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.882 254096 DEBUG oslo_concurrency.lockutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Lock "1ccf7cd6-cf8d-400e-820e-940108160fa8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.883 254096 DEBUG nova.virt.libvirt.vif [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:47:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-349447593',display_name='tempest-ServerPasswordTestJSON-server-349447593',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-349447593',id=91,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5a4c85f6be5040518f229e3e2c1c39ae',ramdisk_id='',reservation_id='r-gsqeuqfs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-1941740454',owner_user_name='tempest-ServerPasswordTestJSON-1941740454-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:47:26Z,user_data=None,user_id='3ccd27eb10a8431bbd43519a883a3970',uuid=1ccf7cd6-cf8d-400e-820e-940108160fa8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2fd7f15a-e429-4b39-86da-980a7fbc785f", "address": "fa:16:3e:14:46:44", "network": {"id": "edeacdb8-47e8-4402-a14f-718b48aff73b", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-625792220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a4c85f6be5040518f229e3e2c1c39ae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fd7f15a-e4", "ovs_interfaceid": "2fd7f15a-e429-4b39-86da-980a7fbc785f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.883 254096 DEBUG nova.network.os_vif_util [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Converting VIF {"id": "2fd7f15a-e429-4b39-86da-980a7fbc785f", "address": "fa:16:3e:14:46:44", "network": {"id": "edeacdb8-47e8-4402-a14f-718b48aff73b", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-625792220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a4c85f6be5040518f229e3e2c1c39ae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fd7f15a-e4", "ovs_interfaceid": "2fd7f15a-e429-4b39-86da-980a7fbc785f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.884 254096 DEBUG nova.network.os_vif_util [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:46:44,bridge_name='br-int',has_traffic_filtering=True,id=2fd7f15a-e429-4b39-86da-980a7fbc785f,network=Network(edeacdb8-47e8-4402-a14f-718b48aff73b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fd7f15a-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.884 254096 DEBUG os_vif [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:46:44,bridge_name='br-int',has_traffic_filtering=True,id=2fd7f15a-e429-4b39-86da-980a7fbc785f,network=Network(edeacdb8-47e8-4402-a14f-718b48aff73b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fd7f15a-e4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.885 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.885 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.886 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.889 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.890 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2fd7f15a-e4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.890 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2fd7f15a-e4, col_values=(('external_ids', {'iface-id': '2fd7f15a-e429-4b39-86da-980a7fbc785f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:14:46:44', 'vm-uuid': '1ccf7cd6-cf8d-400e-820e-940108160fa8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.893 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:30 compute-0 NetworkManager[48891]: <info>  [1764089250.8937] manager: (tap2fd7f15a-e4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/369)
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.894 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.903 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.905 254096 INFO os_vif [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:46:44,bridge_name='br-int',has_traffic_filtering=True,id=2fd7f15a-e429-4b39-86da-980a7fbc785f,network=Network(edeacdb8-47e8-4402-a14f-718b48aff73b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fd7f15a-e4')
Nov 25 16:47:30 compute-0 ceph-mon[74985]: pgmap v1863: 321 pgs: 321 active+clean; 260 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 2.9 MiB/s wr, 61 op/s
Nov 25 16:47:30 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4019731349' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:47:30 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2854414798' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.953 254096 DEBUG nova.virt.libvirt.driver [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.954 254096 DEBUG nova.virt.libvirt.driver [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.954 254096 DEBUG nova.virt.libvirt.driver [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] No VIF found with MAC fa:16:3e:14:46:44, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.955 254096 INFO nova.virt.libvirt.driver [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Using config drive
Nov 25 16:47:30 compute-0 nova_compute[254092]: 2025-11-25 16:47:30.979 254096 DEBUG nova.storage.rbd_utils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] rbd image 1ccf7cd6-cf8d-400e-820e-940108160fa8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:47:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:47:31 compute-0 nova_compute[254092]: 2025-11-25 16:47:31.398 254096 DEBUG oslo_concurrency.lockutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "4117640c-3ae9-4568-9034-7a7612ac43fe" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:31 compute-0 nova_compute[254092]: 2025-11-25 16:47:31.399 254096 DEBUG oslo_concurrency.lockutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "4117640c-3ae9-4568-9034-7a7612ac43fe" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:31 compute-0 nova_compute[254092]: 2025-11-25 16:47:31.416 254096 DEBUG nova.compute.manager [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:47:31 compute-0 nova_compute[254092]: 2025-11-25 16:47:31.453 254096 INFO nova.virt.libvirt.driver [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Creating config drive at /var/lib/nova/instances/1ccf7cd6-cf8d-400e-820e-940108160fa8/disk.config
Nov 25 16:47:31 compute-0 nova_compute[254092]: 2025-11-25 16:47:31.457 254096 DEBUG oslo_concurrency.processutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1ccf7cd6-cf8d-400e-820e-940108160fa8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpawx_bbfz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:31 compute-0 nova_compute[254092]: 2025-11-25 16:47:31.511 254096 DEBUG oslo_concurrency.lockutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:31 compute-0 nova_compute[254092]: 2025-11-25 16:47:31.512 254096 DEBUG oslo_concurrency.lockutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:31 compute-0 nova_compute[254092]: 2025-11-25 16:47:31.518 254096 DEBUG nova.virt.hardware [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:47:31 compute-0 nova_compute[254092]: 2025-11-25 16:47:31.518 254096 INFO nova.compute.claims [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:47:31 compute-0 nova_compute[254092]: 2025-11-25 16:47:31.592 254096 DEBUG oslo_concurrency.processutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1ccf7cd6-cf8d-400e-820e-940108160fa8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpawx_bbfz" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:31 compute-0 nova_compute[254092]: 2025-11-25 16:47:31.616 254096 DEBUG nova.storage.rbd_utils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] rbd image 1ccf7cd6-cf8d-400e-820e-940108160fa8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:47:31 compute-0 nova_compute[254092]: 2025-11-25 16:47:31.619 254096 DEBUG oslo_concurrency.processutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1ccf7cd6-cf8d-400e-820e-940108160fa8/disk.config 1ccf7cd6-cf8d-400e-820e-940108160fa8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1864: 321 pgs: 321 active+clean; 306 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.7 MiB/s wr, 192 op/s
Nov 25 16:47:31 compute-0 nova_compute[254092]: 2025-11-25 16:47:31.755 254096 DEBUG oslo_concurrency.processutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:31 compute-0 nova_compute[254092]: 2025-11-25 16:47:31.792 254096 DEBUG oslo_concurrency.processutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1ccf7cd6-cf8d-400e-820e-940108160fa8/disk.config 1ccf7cd6-cf8d-400e-820e-940108160fa8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:31 compute-0 nova_compute[254092]: 2025-11-25 16:47:31.793 254096 INFO nova.virt.libvirt.driver [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Deleting local config drive /var/lib/nova/instances/1ccf7cd6-cf8d-400e-820e-940108160fa8/disk.config because it was imported into RBD.
Nov 25 16:47:31 compute-0 NetworkManager[48891]: <info>  [1764089251.8537] manager: (tap2fd7f15a-e4): new Tun device (/org/freedesktop/NetworkManager/Devices/370)
Nov 25 16:47:31 compute-0 kernel: tap2fd7f15a-e4: entered promiscuous mode
Nov 25 16:47:31 compute-0 nova_compute[254092]: 2025-11-25 16:47:31.856 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:31 compute-0 ovn_controller[153477]: 2025-11-25T16:47:31Z|00859|binding|INFO|Claiming lport 2fd7f15a-e429-4b39-86da-980a7fbc785f for this chassis.
Nov 25 16:47:31 compute-0 ovn_controller[153477]: 2025-11-25T16:47:31Z|00860|binding|INFO|2fd7f15a-e429-4b39-86da-980a7fbc785f: Claiming fa:16:3e:14:46:44 10.100.0.5
Nov 25 16:47:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:31.875 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:46:44 10.100.0.5'], port_security=['fa:16:3e:14:46:44 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '1ccf7cd6-cf8d-400e-820e-940108160fa8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-edeacdb8-47e8-4402-a14f-718b48aff73b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5a4c85f6be5040518f229e3e2c1c39ae', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7fa0399d-57fa-4e1b-a9bf-664935612bc3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=503adfaf-62d4-482e-ab7f-70af0baad006, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=2fd7f15a-e429-4b39-86da-980a7fbc785f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:47:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:31.876 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 2fd7f15a-e429-4b39-86da-980a7fbc785f in datapath edeacdb8-47e8-4402-a14f-718b48aff73b bound to our chassis
Nov 25 16:47:31 compute-0 nova_compute[254092]: 2025-11-25 16:47:31.882 254096 DEBUG nova.network.neutron [req-0c50f906-aef0-40a2-9d89-dc4b9d5d9f60 req-370ae1e2-e8ed-4573-895c-a5e2ac340398 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Updated VIF entry in instance network info cache for port 2fd7f15a-e429-4b39-86da-980a7fbc785f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:47:31 compute-0 nova_compute[254092]: 2025-11-25 16:47:31.883 254096 DEBUG nova.network.neutron [req-0c50f906-aef0-40a2-9d89-dc4b9d5d9f60 req-370ae1e2-e8ed-4573-895c-a5e2ac340398 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Updating instance_info_cache with network_info: [{"id": "2fd7f15a-e429-4b39-86da-980a7fbc785f", "address": "fa:16:3e:14:46:44", "network": {"id": "edeacdb8-47e8-4402-a14f-718b48aff73b", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-625792220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a4c85f6be5040518f229e3e2c1c39ae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fd7f15a-e4", "ovs_interfaceid": "2fd7f15a-e429-4b39-86da-980a7fbc785f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:47:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:31.888 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network edeacdb8-47e8-4402-a14f-718b48aff73b
Nov 25 16:47:31 compute-0 systemd-udevd[346985]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:47:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:31.900 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a11560a6-66eb-49c2-bd20-77bad6e5470a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:31.901 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapedeacdb8-41 in ovnmeta-edeacdb8-47e8-4402-a14f-718b48aff73b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:47:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:31.904 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapedeacdb8-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:47:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:31.904 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5498a44d-40af-4ab5-a8f7-e44166d1d9f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:31.907 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9b8f2cb0-3575-4f28-ac4c-d9f87ebe982c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:31 compute-0 systemd-machined[216343]: New machine qemu-112-instance-0000005b.
Nov 25 16:47:31 compute-0 NetworkManager[48891]: <info>  [1764089251.9112] device (tap2fd7f15a-e4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:47:31 compute-0 NetworkManager[48891]: <info>  [1764089251.9123] device (tap2fd7f15a-e4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:47:31 compute-0 systemd[1]: Started Virtual Machine qemu-112-instance-0000005b.
Nov 25 16:47:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:31.926 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[eb60a2d5-121a-4e89-b1c8-80ce771ff753]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:31 compute-0 nova_compute[254092]: 2025-11-25 16:47:31.935 254096 DEBUG oslo_concurrency.lockutils [req-0c50f906-aef0-40a2-9d89-dc4b9d5d9f60 req-370ae1e2-e8ed-4573-895c-a5e2ac340398 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-1ccf7cd6-cf8d-400e-820e-940108160fa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:47:31 compute-0 nova_compute[254092]: 2025-11-25 16:47:31.956 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:31.958 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1bfa3f50-0a53-4e62-9566-ea30f453c780]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:31 compute-0 ovn_controller[153477]: 2025-11-25T16:47:31Z|00861|binding|INFO|Setting lport 2fd7f15a-e429-4b39-86da-980a7fbc785f ovn-installed in OVS
Nov 25 16:47:31 compute-0 ovn_controller[153477]: 2025-11-25T16:47:31Z|00862|binding|INFO|Setting lport 2fd7f15a-e429-4b39-86da-980a7fbc785f up in Southbound
Nov 25 16:47:31 compute-0 nova_compute[254092]: 2025-11-25 16:47:31.962 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:31.989 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6e592fc0-6bfb-4fb8-977e-225b545be6ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:31 compute-0 NetworkManager[48891]: <info>  [1764089251.9964] manager: (tapedeacdb8-40): new Veth device (/org/freedesktop/NetworkManager/Devices/371)
Nov 25 16:47:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:31.995 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[04499e67-2ebd-4579-8f6a-310a3acc28c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:32.033 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[94ef8cbb-5510-4142-b499-5f3a85b1f9f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:32.036 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[2a67c467-1840-43cf-b271-66a8b603e574]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:32 compute-0 NetworkManager[48891]: <info>  [1764089252.0601] device (tapedeacdb8-40): carrier: link connected
Nov 25 16:47:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:32.067 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[7fc148e9-a6ef-4cf6-a51f-26398b703caf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:32.084 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[395a1941-ab27-4c51-b255-123920c81658]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapedeacdb8-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:f1:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 256], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570965, 'reachable_time': 21580, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347028, 'error': None, 'target': 'ovnmeta-edeacdb8-47e8-4402-a14f-718b48aff73b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:32.104 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3c72cdac-09e3-44ac-9c0a-f0f5569fad3a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed6:f13c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570965, 'tstamp': 570965}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347029, 'error': None, 'target': 'ovnmeta-edeacdb8-47e8-4402-a14f-718b48aff73b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.112 254096 DEBUG nova.compute.manager [req-84b26229-4786-4602-a48c-ce57bd6f7d90 req-5f64c8b2-a505-4ec2-9f5f-8a07401f00ac a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Received event network-vif-plugged-2fd7f15a-e429-4b39-86da-980a7fbc785f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.113 254096 DEBUG oslo_concurrency.lockutils [req-84b26229-4786-4602-a48c-ce57bd6f7d90 req-5f64c8b2-a505-4ec2-9f5f-8a07401f00ac a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1ccf7cd6-cf8d-400e-820e-940108160fa8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.113 254096 DEBUG oslo_concurrency.lockutils [req-84b26229-4786-4602-a48c-ce57bd6f7d90 req-5f64c8b2-a505-4ec2-9f5f-8a07401f00ac a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ccf7cd6-cf8d-400e-820e-940108160fa8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.114 254096 DEBUG oslo_concurrency.lockutils [req-84b26229-4786-4602-a48c-ce57bd6f7d90 req-5f64c8b2-a505-4ec2-9f5f-8a07401f00ac a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ccf7cd6-cf8d-400e-820e-940108160fa8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.114 254096 DEBUG nova.compute.manager [req-84b26229-4786-4602-a48c-ce57bd6f7d90 req-5f64c8b2-a505-4ec2-9f5f-8a07401f00ac a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Processing event network-vif-plugged-2fd7f15a-e429-4b39-86da-980a7fbc785f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:47:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:32.121 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[22d30857-804e-49a8-ab26-0f29f3682073]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapedeacdb8-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:f1:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 256], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570965, 'reachable_time': 21580, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 347030, 'error': None, 'target': 'ovnmeta-edeacdb8-47e8-4402-a14f-718b48aff73b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:32.154 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5173667a-788e-47cb-8626-87d5ca9358ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:32.223 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[909d66cc-3851-4c59-8890-c2237c4ec5c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:32.224 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapedeacdb8-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:32.224 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:47:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:32.225 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapedeacdb8-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.226 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:32 compute-0 NetworkManager[48891]: <info>  [1764089252.2271] manager: (tapedeacdb8-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/372)
Nov 25 16:47:32 compute-0 kernel: tapedeacdb8-40: entered promiscuous mode
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.228 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:32.232 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapedeacdb8-40, col_values=(('external_ids', {'iface-id': '0eaeba0c-c3dd-4eda-8ea5-d01e9ea08159'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:32 compute-0 ovn_controller[153477]: 2025-11-25T16:47:32Z|00863|binding|INFO|Releasing lport 0eaeba0c-c3dd-4eda-8ea5-d01e9ea08159 from this chassis (sb_readonly=0)
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.234 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.249 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:32.250 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/edeacdb8-47e8-4402-a14f-718b48aff73b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/edeacdb8-47e8-4402-a14f-718b48aff73b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.250 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:32.251 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cc2e714d-1743-4f37-80cc-e97ea4caa6fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:32.252 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:47:32 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:47:32 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:47:32 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-edeacdb8-47e8-4402-a14f-718b48aff73b
Nov 25 16:47:32 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:47:32 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:47:32 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:47:32 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/edeacdb8-47e8-4402-a14f-718b48aff73b.pid.haproxy
Nov 25 16:47:32 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:47:32 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:47:32 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:47:32 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:47:32 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:47:32 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:47:32 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:47:32 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:47:32 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:47:32 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:47:32 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:47:32 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:47:32 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:47:32 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:47:32 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:47:32 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:47:32 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:47:32 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:47:32 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:47:32 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:47:32 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID edeacdb8-47e8-4402-a14f-718b48aff73b
Nov 25 16:47:32 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:47:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:32.252 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-edeacdb8-47e8-4402-a14f-718b48aff73b', 'env', 'PROCESS_TAG=haproxy-edeacdb8-47e8-4402-a14f-718b48aff73b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/edeacdb8-47e8-4402-a14f-718b48aff73b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:47:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:47:32 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/163402489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.285 254096 DEBUG oslo_concurrency.processutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.296 254096 DEBUG nova.compute.provider_tree [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.316 254096 DEBUG nova.scheduler.client.report [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.338 254096 DEBUG oslo_concurrency.lockutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.827s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.339 254096 DEBUG nova.compute.manager [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.383 254096 DEBUG nova.compute.manager [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.384 254096 DEBUG nova.network.neutron [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.399 254096 INFO nova.virt.libvirt.driver [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.414 254096 DEBUG nova.compute.manager [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:47:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:32.488 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.506 254096 DEBUG nova.compute.manager [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.507 254096 DEBUG nova.virt.libvirt.driver [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.507 254096 INFO nova.virt.libvirt.driver [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Creating image(s)
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.532 254096 DEBUG nova.storage.rbd_utils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 4117640c-3ae9-4568-9034-7a7612ac43fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.589 254096 DEBUG nova.storage.rbd_utils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 4117640c-3ae9-4568-9034-7a7612ac43fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.619 254096 DEBUG nova.storage.rbd_utils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 4117640c-3ae9-4568-9034-7a7612ac43fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.623 254096 DEBUG oslo_concurrency.processutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.661 254096 DEBUG nova.policy [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7a137feb008c49e092cbb94106526835', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '56790f54314e4087abb5da8030b17666', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.694 254096 DEBUG nova.compute.manager [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.695 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089252.6936932, 1ccf7cd6-cf8d-400e-820e-940108160fa8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.695 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] VM Started (Lifecycle Event)
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.700 254096 DEBUG nova.virt.libvirt.driver [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:47:32 compute-0 podman[347161]: 2025-11-25 16:47:32.712406673 +0000 UTC m=+0.052069166 container create cf9bdd6e9a8f4005be0aa12b0751187b1d210fa60ec22c395984988112b66fe8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-edeacdb8-47e8-4402-a14f-718b48aff73b, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.712 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.713 254096 DEBUG oslo_concurrency.processutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.717 254096 DEBUG oslo_concurrency.lockutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.717 254096 DEBUG oslo_concurrency.lockutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.718 254096 DEBUG oslo_concurrency.lockutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.746 254096 DEBUG nova.storage.rbd_utils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 4117640c-3ae9-4568-9034-7a7612ac43fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.757 254096 DEBUG oslo_concurrency.processutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 4117640c-3ae9-4568-9034-7a7612ac43fe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:32 compute-0 systemd[1]: Started libpod-conmon-cf9bdd6e9a8f4005be0aa12b0751187b1d210fa60ec22c395984988112b66fe8.scope.
Nov 25 16:47:32 compute-0 podman[347161]: 2025-11-25 16:47:32.685146112 +0000 UTC m=+0.024808625 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:47:32 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:47:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3843f8da4445da577fb4ed2371003e380b705a4c191dcb2bd09c10a7e13d22ea/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.801 254096 INFO nova.virt.libvirt.driver [-] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Instance spawned successfully.
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.801 254096 DEBUG nova.virt.libvirt.driver [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.805 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:47:32 compute-0 podman[347161]: 2025-11-25 16:47:32.819522594 +0000 UTC m=+0.159185097 container init cf9bdd6e9a8f4005be0aa12b0751187b1d210fa60ec22c395984988112b66fe8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-edeacdb8-47e8-4402-a14f-718b48aff73b, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:47:32 compute-0 podman[347161]: 2025-11-25 16:47:32.826740609 +0000 UTC m=+0.166403102 container start cf9bdd6e9a8f4005be0aa12b0751187b1d210fa60ec22c395984988112b66fe8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-edeacdb8-47e8-4402-a14f-718b48aff73b, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.831 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.831 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089252.6951797, 1ccf7cd6-cf8d-400e-820e-940108160fa8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.831 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] VM Paused (Lifecycle Event)
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.839 254096 DEBUG nova.virt.libvirt.driver [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.840 254096 DEBUG nova.virt.libvirt.driver [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.840 254096 DEBUG nova.virt.libvirt.driver [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.840 254096 DEBUG nova.virt.libvirt.driver [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.841 254096 DEBUG nova.virt.libvirt.driver [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.841 254096 DEBUG nova.virt.libvirt.driver [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:47:32 compute-0 neutron-haproxy-ovnmeta-edeacdb8-47e8-4402-a14f-718b48aff73b[347194]: [NOTICE]   (347202) : New worker (347218) forked
Nov 25 16:47:32 compute-0 neutron-haproxy-ovnmeta-edeacdb8-47e8-4402-a14f-718b48aff73b[347194]: [NOTICE]   (347202) : Loading success.
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.865 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.871 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089252.699573, 1ccf7cd6-cf8d-400e-820e-940108160fa8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.872 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] VM Resumed (Lifecycle Event)
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.893 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.900 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.905 254096 INFO nova.compute.manager [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Took 6.69 seconds to spawn the instance on the hypervisor.
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.905 254096 DEBUG nova.compute.manager [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.923 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:47:32 compute-0 ceph-mon[74985]: pgmap v1864: 321 pgs: 321 active+clean; 306 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.7 MiB/s wr, 192 op/s
Nov 25 16:47:32 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/163402489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:47:32 compute-0 nova_compute[254092]: 2025-11-25 16:47:32.988 254096 INFO nova.compute.manager [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Took 7.84 seconds to build instance.
Nov 25 16:47:33 compute-0 nova_compute[254092]: 2025-11-25 16:47:33.015 254096 DEBUG oslo_concurrency.lockutils [None req-40effba9-7539-41d3-b593-51f051e0c756 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Lock "1ccf7cd6-cf8d-400e-820e-940108160fa8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.942s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:33 compute-0 nova_compute[254092]: 2025-11-25 16:47:33.169 254096 DEBUG oslo_concurrency.processutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 4117640c-3ae9-4568-9034-7a7612ac43fe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:33 compute-0 nova_compute[254092]: 2025-11-25 16:47:33.241 254096 DEBUG nova.storage.rbd_utils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] resizing rbd image 4117640c-3ae9-4568-9034-7a7612ac43fe_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:47:33 compute-0 nova_compute[254092]: 2025-11-25 16:47:33.374 254096 DEBUG nova.objects.instance [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lazy-loading 'migration_context' on Instance uuid 4117640c-3ae9-4568-9034-7a7612ac43fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:47:33 compute-0 nova_compute[254092]: 2025-11-25 16:47:33.384 254096 DEBUG nova.virt.libvirt.driver [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:47:33 compute-0 nova_compute[254092]: 2025-11-25 16:47:33.385 254096 DEBUG nova.virt.libvirt.driver [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Ensure instance console log exists: /var/lib/nova/instances/4117640c-3ae9-4568-9034-7a7612ac43fe/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:47:33 compute-0 nova_compute[254092]: 2025-11-25 16:47:33.385 254096 DEBUG oslo_concurrency.lockutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:33 compute-0 nova_compute[254092]: 2025-11-25 16:47:33.386 254096 DEBUG oslo_concurrency.lockutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:33 compute-0 nova_compute[254092]: 2025-11-25 16:47:33.386 254096 DEBUG oslo_concurrency.lockutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:33 compute-0 nova_compute[254092]: 2025-11-25 16:47:33.557 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1865: 321 pgs: 321 active+clean; 306 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 148 op/s
Nov 25 16:47:34 compute-0 nova_compute[254092]: 2025-11-25 16:47:34.536 254096 DEBUG nova.network.neutron [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Successfully created port: c9e57355-8fcc-40ec-ada3-03c6d0147098 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:47:34 compute-0 nova_compute[254092]: 2025-11-25 16:47:34.607 254096 DEBUG nova.compute.manager [req-441001f8-be9f-405d-b3af-5412f07ce568 req-3c520b33-84d8-4dc6-96ea-c4bf5713f59a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Received event network-vif-plugged-2fd7f15a-e429-4b39-86da-980a7fbc785f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:47:34 compute-0 nova_compute[254092]: 2025-11-25 16:47:34.608 254096 DEBUG oslo_concurrency.lockutils [req-441001f8-be9f-405d-b3af-5412f07ce568 req-3c520b33-84d8-4dc6-96ea-c4bf5713f59a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1ccf7cd6-cf8d-400e-820e-940108160fa8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:34 compute-0 nova_compute[254092]: 2025-11-25 16:47:34.608 254096 DEBUG oslo_concurrency.lockutils [req-441001f8-be9f-405d-b3af-5412f07ce568 req-3c520b33-84d8-4dc6-96ea-c4bf5713f59a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ccf7cd6-cf8d-400e-820e-940108160fa8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:34 compute-0 nova_compute[254092]: 2025-11-25 16:47:34.608 254096 DEBUG oslo_concurrency.lockutils [req-441001f8-be9f-405d-b3af-5412f07ce568 req-3c520b33-84d8-4dc6-96ea-c4bf5713f59a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ccf7cd6-cf8d-400e-820e-940108160fa8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:34 compute-0 nova_compute[254092]: 2025-11-25 16:47:34.608 254096 DEBUG nova.compute.manager [req-441001f8-be9f-405d-b3af-5412f07ce568 req-3c520b33-84d8-4dc6-96ea-c4bf5713f59a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] No waiting events found dispatching network-vif-plugged-2fd7f15a-e429-4b39-86da-980a7fbc785f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:47:34 compute-0 nova_compute[254092]: 2025-11-25 16:47:34.609 254096 WARNING nova.compute.manager [req-441001f8-be9f-405d-b3af-5412f07ce568 req-3c520b33-84d8-4dc6-96ea-c4bf5713f59a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Received unexpected event network-vif-plugged-2fd7f15a-e429-4b39-86da-980a7fbc785f for instance with vm_state active and task_state None.
Nov 25 16:47:34 compute-0 nova_compute[254092]: 2025-11-25 16:47:34.662 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:34 compute-0 NetworkManager[48891]: <info>  [1764089254.6640] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/373)
Nov 25 16:47:34 compute-0 NetworkManager[48891]: <info>  [1764089254.6651] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/374)
Nov 25 16:47:34 compute-0 ovn_controller[153477]: 2025-11-25T16:47:34Z|00864|binding|INFO|Releasing lport 0eaeba0c-c3dd-4eda-8ea5-d01e9ea08159 from this chassis (sb_readonly=0)
Nov 25 16:47:34 compute-0 ovn_controller[153477]: 2025-11-25T16:47:34Z|00865|binding|INFO|Releasing lport b7f55d8f-5f76-4e3e-96b9-d43d90f8b6f9 from this chassis (sb_readonly=0)
Nov 25 16:47:34 compute-0 ovn_controller[153477]: 2025-11-25T16:47:34Z|00866|binding|INFO|Releasing lport 031633f4-be6f-41a4-9c41-8e90000f1a4f from this chassis (sb_readonly=0)
Nov 25 16:47:34 compute-0 ovn_controller[153477]: 2025-11-25T16:47:34Z|00867|binding|INFO|Releasing lport 2b164d7c-ac7d-4965-a9d1-81827e4e9936 from this chassis (sb_readonly=0)
Nov 25 16:47:34 compute-0 nova_compute[254092]: 2025-11-25 16:47:34.705 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:34 compute-0 ovn_controller[153477]: 2025-11-25T16:47:34Z|00868|binding|INFO|Releasing lport 0eaeba0c-c3dd-4eda-8ea5-d01e9ea08159 from this chassis (sb_readonly=0)
Nov 25 16:47:34 compute-0 ovn_controller[153477]: 2025-11-25T16:47:34Z|00869|binding|INFO|Releasing lport b7f55d8f-5f76-4e3e-96b9-d43d90f8b6f9 from this chassis (sb_readonly=0)
Nov 25 16:47:34 compute-0 ovn_controller[153477]: 2025-11-25T16:47:34Z|00870|binding|INFO|Releasing lport 031633f4-be6f-41a4-9c41-8e90000f1a4f from this chassis (sb_readonly=0)
Nov 25 16:47:34 compute-0 ovn_controller[153477]: 2025-11-25T16:47:34Z|00871|binding|INFO|Releasing lport 2b164d7c-ac7d-4965-a9d1-81827e4e9936 from this chassis (sb_readonly=0)
Nov 25 16:47:34 compute-0 nova_compute[254092]: 2025-11-25 16:47:34.734 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:34 compute-0 ceph-mon[74985]: pgmap v1865: 321 pgs: 321 active+clean; 306 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 148 op/s
Nov 25 16:47:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1866: 321 pgs: 321 active+clean; 342 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 3.2 MiB/s wr, 314 op/s
Nov 25 16:47:35 compute-0 nova_compute[254092]: 2025-11-25 16:47:35.892 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:47:36 compute-0 nova_compute[254092]: 2025-11-25 16:47:36.158 254096 DEBUG oslo_concurrency.lockutils [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Acquiring lock "1ccf7cd6-cf8d-400e-820e-940108160fa8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:36 compute-0 nova_compute[254092]: 2025-11-25 16:47:36.159 254096 DEBUG oslo_concurrency.lockutils [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Lock "1ccf7cd6-cf8d-400e-820e-940108160fa8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:36 compute-0 nova_compute[254092]: 2025-11-25 16:47:36.159 254096 DEBUG oslo_concurrency.lockutils [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Acquiring lock "1ccf7cd6-cf8d-400e-820e-940108160fa8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:36 compute-0 nova_compute[254092]: 2025-11-25 16:47:36.159 254096 DEBUG oslo_concurrency.lockutils [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Lock "1ccf7cd6-cf8d-400e-820e-940108160fa8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:36 compute-0 nova_compute[254092]: 2025-11-25 16:47:36.159 254096 DEBUG oslo_concurrency.lockutils [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Lock "1ccf7cd6-cf8d-400e-820e-940108160fa8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:36 compute-0 nova_compute[254092]: 2025-11-25 16:47:36.160 254096 INFO nova.compute.manager [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Terminating instance
Nov 25 16:47:36 compute-0 nova_compute[254092]: 2025-11-25 16:47:36.161 254096 DEBUG nova.compute.manager [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:47:36 compute-0 kernel: tap2fd7f15a-e4 (unregistering): left promiscuous mode
Nov 25 16:47:36 compute-0 NetworkManager[48891]: <info>  [1764089256.2077] device (tap2fd7f15a-e4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:47:36 compute-0 nova_compute[254092]: 2025-11-25 16:47:36.217 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:36 compute-0 ovn_controller[153477]: 2025-11-25T16:47:36Z|00872|binding|INFO|Releasing lport 2fd7f15a-e429-4b39-86da-980a7fbc785f from this chassis (sb_readonly=0)
Nov 25 16:47:36 compute-0 ovn_controller[153477]: 2025-11-25T16:47:36Z|00873|binding|INFO|Setting lport 2fd7f15a-e429-4b39-86da-980a7fbc785f down in Southbound
Nov 25 16:47:36 compute-0 ovn_controller[153477]: 2025-11-25T16:47:36Z|00874|binding|INFO|Removing iface tap2fd7f15a-e4 ovn-installed in OVS
Nov 25 16:47:36 compute-0 nova_compute[254092]: 2025-11-25 16:47:36.220 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:36.226 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:46:44 10.100.0.5'], port_security=['fa:16:3e:14:46:44 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '1ccf7cd6-cf8d-400e-820e-940108160fa8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-edeacdb8-47e8-4402-a14f-718b48aff73b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5a4c85f6be5040518f229e3e2c1c39ae', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7fa0399d-57fa-4e1b-a9bf-664935612bc3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=503adfaf-62d4-482e-ab7f-70af0baad006, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=2fd7f15a-e429-4b39-86da-980a7fbc785f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:47:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:36.227 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 2fd7f15a-e429-4b39-86da-980a7fbc785f in datapath edeacdb8-47e8-4402-a14f-718b48aff73b unbound from our chassis
Nov 25 16:47:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:36.229 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network edeacdb8-47e8-4402-a14f-718b48aff73b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:47:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:36.230 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7146a193-0319-456c-8f4f-b650aa486b0c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:36.230 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-edeacdb8-47e8-4402-a14f-718b48aff73b namespace which is not needed anymore
Nov 25 16:47:36 compute-0 nova_compute[254092]: 2025-11-25 16:47:36.237 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:36 compute-0 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d0000005b.scope: Deactivated successfully.
Nov 25 16:47:36 compute-0 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d0000005b.scope: Consumed 4.114s CPU time.
Nov 25 16:47:36 compute-0 systemd-machined[216343]: Machine qemu-112-instance-0000005b terminated.
Nov 25 16:47:36 compute-0 neutron-haproxy-ovnmeta-edeacdb8-47e8-4402-a14f-718b48aff73b[347194]: [NOTICE]   (347202) : haproxy version is 2.8.14-c23fe91
Nov 25 16:47:36 compute-0 neutron-haproxy-ovnmeta-edeacdb8-47e8-4402-a14f-718b48aff73b[347194]: [NOTICE]   (347202) : path to executable is /usr/sbin/haproxy
Nov 25 16:47:36 compute-0 neutron-haproxy-ovnmeta-edeacdb8-47e8-4402-a14f-718b48aff73b[347194]: [WARNING]  (347202) : Exiting Master process...
Nov 25 16:47:36 compute-0 neutron-haproxy-ovnmeta-edeacdb8-47e8-4402-a14f-718b48aff73b[347194]: [ALERT]    (347202) : Current worker (347218) exited with code 143 (Terminated)
Nov 25 16:47:36 compute-0 neutron-haproxy-ovnmeta-edeacdb8-47e8-4402-a14f-718b48aff73b[347194]: [WARNING]  (347202) : All workers exited. Exiting... (0)
Nov 25 16:47:36 compute-0 systemd[1]: libpod-cf9bdd6e9a8f4005be0aa12b0751187b1d210fa60ec22c395984988112b66fe8.scope: Deactivated successfully.
Nov 25 16:47:36 compute-0 podman[347325]: 2025-11-25 16:47:36.367246643 +0000 UTC m=+0.046028921 container died cf9bdd6e9a8f4005be0aa12b0751187b1d210fa60ec22c395984988112b66fe8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-edeacdb8-47e8-4402-a14f-718b48aff73b, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:47:36 compute-0 nova_compute[254092]: 2025-11-25 16:47:36.382 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:36 compute-0 nova_compute[254092]: 2025-11-25 16:47:36.388 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:36 compute-0 nova_compute[254092]: 2025-11-25 16:47:36.395 254096 INFO nova.virt.libvirt.driver [-] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Instance destroyed successfully.
Nov 25 16:47:36 compute-0 nova_compute[254092]: 2025-11-25 16:47:36.396 254096 DEBUG nova.objects.instance [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Lazy-loading 'resources' on Instance uuid 1ccf7cd6-cf8d-400e-820e-940108160fa8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:47:36 compute-0 nova_compute[254092]: 2025-11-25 16:47:36.408 254096 DEBUG nova.virt.libvirt.vif [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:47:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-349447593',display_name='tempest-ServerPasswordTestJSON-server-349447593',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-349447593',id=91,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:47:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5a4c85f6be5040518f229e3e2c1c39ae',ramdisk_id='',reservation_id='r-gsqeuqfs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-ServerPasswordTestJSON-1941740454',owner_user_name='tempest-ServerPasswordTestJSON-1941740454-project-member',password_0='',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:47:35Z,user_data=None,user_id='3ccd27eb10a8431bbd43519a883a3970',uuid=1ccf7cd6-cf8d-400e-820e-940108160fa8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2fd7f15a-e429-4b39-86da-980a7fbc785f", "address": "fa:16:3e:14:46:44", "network": {"id": "edeacdb8-47e8-4402-a14f-718b48aff73b", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-625792220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a4c85f6be5040518f229e3e2c1c39ae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fd7f15a-e4", "ovs_interfaceid": "2fd7f15a-e429-4b39-86da-980a7fbc785f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:47:36 compute-0 nova_compute[254092]: 2025-11-25 16:47:36.409 254096 DEBUG nova.network.os_vif_util [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Converting VIF {"id": "2fd7f15a-e429-4b39-86da-980a7fbc785f", "address": "fa:16:3e:14:46:44", "network": {"id": "edeacdb8-47e8-4402-a14f-718b48aff73b", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-625792220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a4c85f6be5040518f229e3e2c1c39ae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fd7f15a-e4", "ovs_interfaceid": "2fd7f15a-e429-4b39-86da-980a7fbc785f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:47:36 compute-0 nova_compute[254092]: 2025-11-25 16:47:36.410 254096 DEBUG nova.network.os_vif_util [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:46:44,bridge_name='br-int',has_traffic_filtering=True,id=2fd7f15a-e429-4b39-86da-980a7fbc785f,network=Network(edeacdb8-47e8-4402-a14f-718b48aff73b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fd7f15a-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:47:36 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cf9bdd6e9a8f4005be0aa12b0751187b1d210fa60ec22c395984988112b66fe8-userdata-shm.mount: Deactivated successfully.
Nov 25 16:47:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-3843f8da4445da577fb4ed2371003e380b705a4c191dcb2bd09c10a7e13d22ea-merged.mount: Deactivated successfully.
Nov 25 16:47:36 compute-0 nova_compute[254092]: 2025-11-25 16:47:36.411 254096 DEBUG os_vif [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:46:44,bridge_name='br-int',has_traffic_filtering=True,id=2fd7f15a-e429-4b39-86da-980a7fbc785f,network=Network(edeacdb8-47e8-4402-a14f-718b48aff73b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fd7f15a-e4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:47:36 compute-0 nova_compute[254092]: 2025-11-25 16:47:36.424 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:36 compute-0 nova_compute[254092]: 2025-11-25 16:47:36.425 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2fd7f15a-e4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:36 compute-0 podman[347325]: 2025-11-25 16:47:36.426034991 +0000 UTC m=+0.104817249 container cleanup cf9bdd6e9a8f4005be0aa12b0751187b1d210fa60ec22c395984988112b66fe8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-edeacdb8-47e8-4402-a14f-718b48aff73b, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 16:47:36 compute-0 nova_compute[254092]: 2025-11-25 16:47:36.429 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:36 compute-0 nova_compute[254092]: 2025-11-25 16:47:36.431 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:47:36 compute-0 nova_compute[254092]: 2025-11-25 16:47:36.435 254096 INFO os_vif [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:46:44,bridge_name='br-int',has_traffic_filtering=True,id=2fd7f15a-e429-4b39-86da-980a7fbc785f,network=Network(edeacdb8-47e8-4402-a14f-718b48aff73b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fd7f15a-e4')
Nov 25 16:47:36 compute-0 systemd[1]: libpod-conmon-cf9bdd6e9a8f4005be0aa12b0751187b1d210fa60ec22c395984988112b66fe8.scope: Deactivated successfully.
Nov 25 16:47:36 compute-0 podman[347362]: 2025-11-25 16:47:36.51506271 +0000 UTC m=+0.053164345 container remove cf9bdd6e9a8f4005be0aa12b0751187b1d210fa60ec22c395984988112b66fe8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-edeacdb8-47e8-4402-a14f-718b48aff73b, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:47:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:36.534 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[10ca454d-4049-46c8-9555-fa008212397c]: (4, ('Tue Nov 25 04:47:36 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-edeacdb8-47e8-4402-a14f-718b48aff73b (cf9bdd6e9a8f4005be0aa12b0751187b1d210fa60ec22c395984988112b66fe8)\ncf9bdd6e9a8f4005be0aa12b0751187b1d210fa60ec22c395984988112b66fe8\nTue Nov 25 04:47:36 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-edeacdb8-47e8-4402-a14f-718b48aff73b (cf9bdd6e9a8f4005be0aa12b0751187b1d210fa60ec22c395984988112b66fe8)\ncf9bdd6e9a8f4005be0aa12b0751187b1d210fa60ec22c395984988112b66fe8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:36.536 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3e7a291d-84b7-4337-a416-9e2b4ace8db1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:36.538 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapedeacdb8-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:36 compute-0 nova_compute[254092]: 2025-11-25 16:47:36.541 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:36 compute-0 kernel: tapedeacdb8-40: left promiscuous mode
Nov 25 16:47:36 compute-0 nova_compute[254092]: 2025-11-25 16:47:36.563 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:36.565 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[011b0dea-0b64-4c14-a71f-ea9ebfc9fe01]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:36 compute-0 nova_compute[254092]: 2025-11-25 16:47:36.567 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:36.579 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d8999fbf-0d2a-43e8-9046-5317abb4c31a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:36.583 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[88318d1d-b002-4d0e-90f5-c8236bd5bc47]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:36.598 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fe09285d-2a00-45e7-b8c9-cd3f8ac872ae]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570957, 'reachable_time': 33070, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347394, 'error': None, 'target': 'ovnmeta-edeacdb8-47e8-4402-a14f-718b48aff73b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:36 compute-0 systemd[1]: run-netns-ovnmeta\x2dedeacdb8\x2d47e8\x2d4402\x2da14f\x2d718b48aff73b.mount: Deactivated successfully.
Nov 25 16:47:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:36.600 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-edeacdb8-47e8-4402-a14f-718b48aff73b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:47:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:36.600 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[d5c7b4fe-38d3-4c7b-9498-41c86b2d1e44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:36 compute-0 nova_compute[254092]: 2025-11-25 16:47:36.680 254096 DEBUG nova.compute.manager [req-df5040dd-fca2-4126-a551-02b6b8fd0640 req-20ab0828-a97f-4a98-83b6-e8a98273f736 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Received event network-changed-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:47:36 compute-0 nova_compute[254092]: 2025-11-25 16:47:36.680 254096 DEBUG nova.compute.manager [req-df5040dd-fca2-4126-a551-02b6b8fd0640 req-20ab0828-a97f-4a98-83b6-e8a98273f736 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Refreshing instance network info cache due to event network-changed-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:47:36 compute-0 nova_compute[254092]: 2025-11-25 16:47:36.681 254096 DEBUG oslo_concurrency.lockutils [req-df5040dd-fca2-4126-a551-02b6b8fd0640 req-20ab0828-a97f-4a98-83b6-e8a98273f736 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-73301044-3bad-4401-9e30-f009d417f662" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:47:36 compute-0 nova_compute[254092]: 2025-11-25 16:47:36.681 254096 DEBUG oslo_concurrency.lockutils [req-df5040dd-fca2-4126-a551-02b6b8fd0640 req-20ab0828-a97f-4a98-83b6-e8a98273f736 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-73301044-3bad-4401-9e30-f009d417f662" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:47:36 compute-0 nova_compute[254092]: 2025-11-25 16:47:36.682 254096 DEBUG nova.network.neutron [req-df5040dd-fca2-4126-a551-02b6b8fd0640 req-20ab0828-a97f-4a98-83b6-e8a98273f736 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Refreshing network info cache for port 792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:47:36 compute-0 nova_compute[254092]: 2025-11-25 16:47:36.868 254096 INFO nova.virt.libvirt.driver [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Deleting instance files /var/lib/nova/instances/1ccf7cd6-cf8d-400e-820e-940108160fa8_del
Nov 25 16:47:36 compute-0 nova_compute[254092]: 2025-11-25 16:47:36.870 254096 INFO nova.virt.libvirt.driver [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Deletion of /var/lib/nova/instances/1ccf7cd6-cf8d-400e-820e-940108160fa8_del complete
Nov 25 16:47:36 compute-0 nova_compute[254092]: 2025-11-25 16:47:36.910 254096 INFO nova.compute.manager [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Took 0.75 seconds to destroy the instance on the hypervisor.
Nov 25 16:47:36 compute-0 nova_compute[254092]: 2025-11-25 16:47:36.911 254096 DEBUG oslo.service.loopingcall [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:47:36 compute-0 nova_compute[254092]: 2025-11-25 16:47:36.911 254096 DEBUG nova.compute.manager [-] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:47:36 compute-0 nova_compute[254092]: 2025-11-25 16:47:36.912 254096 DEBUG nova.network.neutron [-] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:47:36 compute-0 ceph-mon[74985]: pgmap v1866: 321 pgs: 321 active+clean; 342 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 3.2 MiB/s wr, 314 op/s
Nov 25 16:47:37 compute-0 nova_compute[254092]: 2025-11-25 16:47:37.395 254096 DEBUG nova.network.neutron [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Successfully updated port: c9e57355-8fcc-40ec-ada3-03c6d0147098 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:47:37 compute-0 nova_compute[254092]: 2025-11-25 16:47:37.411 254096 DEBUG oslo_concurrency.lockutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "refresh_cache-4117640c-3ae9-4568-9034-7a7612ac43fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:47:37 compute-0 nova_compute[254092]: 2025-11-25 16:47:37.412 254096 DEBUG oslo_concurrency.lockutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquired lock "refresh_cache-4117640c-3ae9-4568-9034-7a7612ac43fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:47:37 compute-0 nova_compute[254092]: 2025-11-25 16:47:37.412 254096 DEBUG nova.network.neutron [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:47:37 compute-0 nova_compute[254092]: 2025-11-25 16:47:37.689 254096 DEBUG nova.network.neutron [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:47:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1867: 321 pgs: 321 active+clean; 353 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 3.6 MiB/s wr, 332 op/s
Nov 25 16:47:38 compute-0 nova_compute[254092]: 2025-11-25 16:47:38.406 254096 DEBUG nova.network.neutron [-] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:47:38 compute-0 nova_compute[254092]: 2025-11-25 16:47:38.426 254096 INFO nova.compute.manager [-] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Took 1.51 seconds to deallocate network for instance.
Nov 25 16:47:38 compute-0 nova_compute[254092]: 2025-11-25 16:47:38.479 254096 DEBUG oslo_concurrency.lockutils [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:38 compute-0 nova_compute[254092]: 2025-11-25 16:47:38.480 254096 DEBUG oslo_concurrency.lockutils [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:38 compute-0 nova_compute[254092]: 2025-11-25 16:47:38.559 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:38 compute-0 nova_compute[254092]: 2025-11-25 16:47:38.598 254096 DEBUG oslo_concurrency.processutils [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:38 compute-0 ceph-mon[74985]: pgmap v1867: 321 pgs: 321 active+clean; 353 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 3.6 MiB/s wr, 332 op/s
Nov 25 16:47:39 compute-0 nova_compute[254092]: 2025-11-25 16:47:39.014 254096 DEBUG nova.compute.manager [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Received event network-vif-unplugged-2fd7f15a-e429-4b39-86da-980a7fbc785f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:47:39 compute-0 nova_compute[254092]: 2025-11-25 16:47:39.015 254096 DEBUG oslo_concurrency.lockutils [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1ccf7cd6-cf8d-400e-820e-940108160fa8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:39 compute-0 nova_compute[254092]: 2025-11-25 16:47:39.015 254096 DEBUG oslo_concurrency.lockutils [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ccf7cd6-cf8d-400e-820e-940108160fa8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:39 compute-0 nova_compute[254092]: 2025-11-25 16:47:39.016 254096 DEBUG oslo_concurrency.lockutils [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ccf7cd6-cf8d-400e-820e-940108160fa8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:39 compute-0 nova_compute[254092]: 2025-11-25 16:47:39.016 254096 DEBUG nova.compute.manager [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] No waiting events found dispatching network-vif-unplugged-2fd7f15a-e429-4b39-86da-980a7fbc785f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:47:39 compute-0 nova_compute[254092]: 2025-11-25 16:47:39.016 254096 WARNING nova.compute.manager [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Received unexpected event network-vif-unplugged-2fd7f15a-e429-4b39-86da-980a7fbc785f for instance with vm_state deleted and task_state None.
Nov 25 16:47:39 compute-0 nova_compute[254092]: 2025-11-25 16:47:39.017 254096 DEBUG nova.compute.manager [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Received event network-changed-c9e57355-8fcc-40ec-ada3-03c6d0147098 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:47:39 compute-0 nova_compute[254092]: 2025-11-25 16:47:39.017 254096 DEBUG nova.compute.manager [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Refreshing instance network info cache due to event network-changed-c9e57355-8fcc-40ec-ada3-03c6d0147098. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:47:39 compute-0 nova_compute[254092]: 2025-11-25 16:47:39.018 254096 DEBUG oslo_concurrency.lockutils [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-4117640c-3ae9-4568-9034-7a7612ac43fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:47:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:47:39 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1408766462' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:47:39 compute-0 nova_compute[254092]: 2025-11-25 16:47:39.038 254096 DEBUG oslo_concurrency.processutils [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:39 compute-0 nova_compute[254092]: 2025-11-25 16:47:39.045 254096 DEBUG nova.compute.provider_tree [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:47:39 compute-0 nova_compute[254092]: 2025-11-25 16:47:39.062 254096 DEBUG nova.scheduler.client.report [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:47:39 compute-0 nova_compute[254092]: 2025-11-25 16:47:39.085 254096 DEBUG oslo_concurrency.lockutils [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:39 compute-0 nova_compute[254092]: 2025-11-25 16:47:39.105 254096 DEBUG nova.network.neutron [req-df5040dd-fca2-4126-a551-02b6b8fd0640 req-20ab0828-a97f-4a98-83b6-e8a98273f736 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Updated VIF entry in instance network info cache for port 792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:47:39 compute-0 nova_compute[254092]: 2025-11-25 16:47:39.106 254096 DEBUG nova.network.neutron [req-df5040dd-fca2-4126-a551-02b6b8fd0640 req-20ab0828-a97f-4a98-83b6-e8a98273f736 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Updating instance_info_cache with network_info: [{"id": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "address": "fa:16:3e:c4:5c:49", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792a5867-7e", "ovs_interfaceid": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:47:39 compute-0 nova_compute[254092]: 2025-11-25 16:47:39.109 254096 INFO nova.scheduler.client.report [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Deleted allocations for instance 1ccf7cd6-cf8d-400e-820e-940108160fa8
Nov 25 16:47:39 compute-0 nova_compute[254092]: 2025-11-25 16:47:39.122 254096 DEBUG oslo_concurrency.lockutils [req-df5040dd-fca2-4126-a551-02b6b8fd0640 req-20ab0828-a97f-4a98-83b6-e8a98273f736 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-73301044-3bad-4401-9e30-f009d417f662" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:47:39 compute-0 nova_compute[254092]: 2025-11-25 16:47:39.169 254096 DEBUG oslo_concurrency.lockutils [None req-9accb07f-6351-4503-87fe-b32adb075c17 3ccd27eb10a8431bbd43519a883a3970 5a4c85f6be5040518f229e3e2c1c39ae - - default default] Lock "1ccf7cd6-cf8d-400e-820e-940108160fa8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.011s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:39 compute-0 nova_compute[254092]: 2025-11-25 16:47:39.223 254096 DEBUG nova.network.neutron [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Updating instance_info_cache with network_info: [{"id": "c9e57355-8fcc-40ec-ada3-03c6d0147098", "address": "fa:16:3e:f3:93:e6", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9e57355-8f", "ovs_interfaceid": "c9e57355-8fcc-40ec-ada3-03c6d0147098", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:47:39 compute-0 nova_compute[254092]: 2025-11-25 16:47:39.249 254096 DEBUG oslo_concurrency.lockutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Releasing lock "refresh_cache-4117640c-3ae9-4568-9034-7a7612ac43fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:47:39 compute-0 nova_compute[254092]: 2025-11-25 16:47:39.251 254096 DEBUG nova.compute.manager [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Instance network_info: |[{"id": "c9e57355-8fcc-40ec-ada3-03c6d0147098", "address": "fa:16:3e:f3:93:e6", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9e57355-8f", "ovs_interfaceid": "c9e57355-8fcc-40ec-ada3-03c6d0147098", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:47:39 compute-0 nova_compute[254092]: 2025-11-25 16:47:39.252 254096 DEBUG oslo_concurrency.lockutils [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-4117640c-3ae9-4568-9034-7a7612ac43fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:47:39 compute-0 nova_compute[254092]: 2025-11-25 16:47:39.253 254096 DEBUG nova.network.neutron [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Refreshing network info cache for port c9e57355-8fcc-40ec-ada3-03c6d0147098 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:47:39 compute-0 nova_compute[254092]: 2025-11-25 16:47:39.257 254096 DEBUG nova.virt.libvirt.driver [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Start _get_guest_xml network_info=[{"id": "c9e57355-8fcc-40ec-ada3-03c6d0147098", "address": "fa:16:3e:f3:93:e6", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9e57355-8f", "ovs_interfaceid": "c9e57355-8fcc-40ec-ada3-03c6d0147098", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:47:39 compute-0 nova_compute[254092]: 2025-11-25 16:47:39.262 254096 WARNING nova.virt.libvirt.driver [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:47:39 compute-0 nova_compute[254092]: 2025-11-25 16:47:39.267 254096 DEBUG nova.virt.libvirt.host [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:47:39 compute-0 nova_compute[254092]: 2025-11-25 16:47:39.267 254096 DEBUG nova.virt.libvirt.host [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:47:39 compute-0 nova_compute[254092]: 2025-11-25 16:47:39.279 254096 DEBUG nova.virt.libvirt.host [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:47:39 compute-0 nova_compute[254092]: 2025-11-25 16:47:39.280 254096 DEBUG nova.virt.libvirt.host [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:47:39 compute-0 nova_compute[254092]: 2025-11-25 16:47:39.280 254096 DEBUG nova.virt.libvirt.driver [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:47:39 compute-0 nova_compute[254092]: 2025-11-25 16:47:39.281 254096 DEBUG nova.virt.hardware [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:47:39 compute-0 nova_compute[254092]: 2025-11-25 16:47:39.282 254096 DEBUG nova.virt.hardware [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:47:39 compute-0 nova_compute[254092]: 2025-11-25 16:47:39.282 254096 DEBUG nova.virt.hardware [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:47:39 compute-0 nova_compute[254092]: 2025-11-25 16:47:39.282 254096 DEBUG nova.virt.hardware [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:47:39 compute-0 nova_compute[254092]: 2025-11-25 16:47:39.283 254096 DEBUG nova.virt.hardware [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:47:39 compute-0 nova_compute[254092]: 2025-11-25 16:47:39.284 254096 DEBUG nova.virt.hardware [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:47:39 compute-0 nova_compute[254092]: 2025-11-25 16:47:39.284 254096 DEBUG nova.virt.hardware [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:47:39 compute-0 nova_compute[254092]: 2025-11-25 16:47:39.285 254096 DEBUG nova.virt.hardware [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:47:39 compute-0 nova_compute[254092]: 2025-11-25 16:47:39.286 254096 DEBUG nova.virt.hardware [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:47:39 compute-0 nova_compute[254092]: 2025-11-25 16:47:39.286 254096 DEBUG nova.virt.hardware [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:47:39 compute-0 nova_compute[254092]: 2025-11-25 16:47:39.287 254096 DEBUG nova.virt.hardware [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:47:39 compute-0 nova_compute[254092]: 2025-11-25 16:47:39.291 254096 DEBUG oslo_concurrency.processutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1868: 321 pgs: 321 active+clean; 353 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 3.6 MiB/s wr, 329 op/s
Nov 25 16:47:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:47:39 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3669675625' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:47:39 compute-0 nova_compute[254092]: 2025-11-25 16:47:39.850 254096 DEBUG oslo_concurrency.processutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:39 compute-0 nova_compute[254092]: 2025-11-25 16:47:39.878 254096 DEBUG nova.storage.rbd_utils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 4117640c-3ae9-4568-9034-7a7612ac43fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:47:39 compute-0 nova_compute[254092]: 2025-11-25 16:47:39.881 254096 DEBUG oslo_concurrency.processutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:39 compute-0 ovn_controller[153477]: 2025-11-25T16:47:39Z|00091|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8a:e5:4f 10.100.0.14
Nov 25 16:47:39 compute-0 ovn_controller[153477]: 2025-11-25T16:47:39Z|00092|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8a:e5:4f 10.100.0.14
Nov 25 16:47:40 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1408766462' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:47:40 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3669675625' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:47:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:47:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:47:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:47:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:47:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:47:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:47:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:47:40
Nov 25 16:47:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:47:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:47:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.control', 'images', 'backups', 'vms', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes']
Nov 25 16:47:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:47:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:47:40 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2664227085' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:47:40 compute-0 nova_compute[254092]: 2025-11-25 16:47:40.322 254096 DEBUG oslo_concurrency.processutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:40 compute-0 nova_compute[254092]: 2025-11-25 16:47:40.325 254096 DEBUG nova.virt.libvirt.vif [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:47:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-694487906',display_name='tempest-ServersTestJSON-server-694487906',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-694487906',id=92,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56790f54314e4087abb5da8030b17666',ramdisk_id='',reservation_id='r-lutj4rar',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1089675869',owner_user_name='tempest-ServersTestJSON-1089675869-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:47:32Z,user_data=None,user_id='7a137feb008c49e092cbb94106526835',uuid=4117640c-3ae9-4568-9034-7a7612ac43fe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c9e57355-8fcc-40ec-ada3-03c6d0147098", "address": "fa:16:3e:f3:93:e6", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9e57355-8f", "ovs_interfaceid": "c9e57355-8fcc-40ec-ada3-03c6d0147098", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:47:40 compute-0 nova_compute[254092]: 2025-11-25 16:47:40.325 254096 DEBUG nova.network.os_vif_util [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converting VIF {"id": "c9e57355-8fcc-40ec-ada3-03c6d0147098", "address": "fa:16:3e:f3:93:e6", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9e57355-8f", "ovs_interfaceid": "c9e57355-8fcc-40ec-ada3-03c6d0147098", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:47:40 compute-0 nova_compute[254092]: 2025-11-25 16:47:40.326 254096 DEBUG nova.network.os_vif_util [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:93:e6,bridge_name='br-int',has_traffic_filtering=True,id=c9e57355-8fcc-40ec-ada3-03c6d0147098,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9e57355-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:47:40 compute-0 nova_compute[254092]: 2025-11-25 16:47:40.327 254096 DEBUG nova.objects.instance [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4117640c-3ae9-4568-9034-7a7612ac43fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:47:40 compute-0 nova_compute[254092]: 2025-11-25 16:47:40.340 254096 DEBUG nova.virt.libvirt.driver [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:47:40 compute-0 nova_compute[254092]:   <uuid>4117640c-3ae9-4568-9034-7a7612ac43fe</uuid>
Nov 25 16:47:40 compute-0 nova_compute[254092]:   <name>instance-0000005c</name>
Nov 25 16:47:40 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:47:40 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:47:40 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:47:40 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:       <nova:name>tempest-ServersTestJSON-server-694487906</nova:name>
Nov 25 16:47:40 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:47:39</nova:creationTime>
Nov 25 16:47:40 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:47:40 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:47:40 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:47:40 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:47:40 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:47:40 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:47:40 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:47:40 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:47:40 compute-0 nova_compute[254092]:         <nova:user uuid="7a137feb008c49e092cbb94106526835">tempest-ServersTestJSON-1089675869-project-member</nova:user>
Nov 25 16:47:40 compute-0 nova_compute[254092]:         <nova:project uuid="56790f54314e4087abb5da8030b17666">tempest-ServersTestJSON-1089675869</nova:project>
Nov 25 16:47:40 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:47:40 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:47:40 compute-0 nova_compute[254092]:         <nova:port uuid="c9e57355-8fcc-40ec-ada3-03c6d0147098">
Nov 25 16:47:40 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:47:40 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:47:40 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:47:40 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:47:40 compute-0 nova_compute[254092]:     <system>
Nov 25 16:47:40 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:47:40 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:47:40 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:47:40 compute-0 nova_compute[254092]:       <entry name="serial">4117640c-3ae9-4568-9034-7a7612ac43fe</entry>
Nov 25 16:47:40 compute-0 nova_compute[254092]:       <entry name="uuid">4117640c-3ae9-4568-9034-7a7612ac43fe</entry>
Nov 25 16:47:40 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     </system>
Nov 25 16:47:40 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:47:40 compute-0 nova_compute[254092]:   <os>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:   </os>
Nov 25 16:47:40 compute-0 nova_compute[254092]:   <features>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:   </features>
Nov 25 16:47:40 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:47:40 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:47:40 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:47:40 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:47:40 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:47:40 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/4117640c-3ae9-4568-9034-7a7612ac43fe_disk">
Nov 25 16:47:40 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:       </source>
Nov 25 16:47:40 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:47:40 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:47:40 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:47:40 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/4117640c-3ae9-4568-9034-7a7612ac43fe_disk.config">
Nov 25 16:47:40 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:       </source>
Nov 25 16:47:40 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:47:40 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:47:40 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:47:40 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:f3:93:e6"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:       <target dev="tapc9e57355-8f"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:47:40 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/4117640c-3ae9-4568-9034-7a7612ac43fe/console.log" append="off"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     <video>
Nov 25 16:47:40 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     </video>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:47:40 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:47:40 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:47:40 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:47:40 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:47:40 compute-0 nova_compute[254092]: </domain>
Nov 25 16:47:40 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:47:40 compute-0 nova_compute[254092]: 2025-11-25 16:47:40.340 254096 DEBUG nova.compute.manager [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Preparing to wait for external event network-vif-plugged-c9e57355-8fcc-40ec-ada3-03c6d0147098 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:47:40 compute-0 nova_compute[254092]: 2025-11-25 16:47:40.341 254096 DEBUG oslo_concurrency.lockutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "4117640c-3ae9-4568-9034-7a7612ac43fe-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:40 compute-0 nova_compute[254092]: 2025-11-25 16:47:40.341 254096 DEBUG oslo_concurrency.lockutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "4117640c-3ae9-4568-9034-7a7612ac43fe-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:40 compute-0 nova_compute[254092]: 2025-11-25 16:47:40.341 254096 DEBUG oslo_concurrency.lockutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "4117640c-3ae9-4568-9034-7a7612ac43fe-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:40 compute-0 nova_compute[254092]: 2025-11-25 16:47:40.342 254096 DEBUG nova.virt.libvirt.vif [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:47:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-694487906',display_name='tempest-ServersTestJSON-server-694487906',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-694487906',id=92,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56790f54314e4087abb5da8030b17666',ramdisk_id='',reservation_id='r-lutj4rar',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1089675869',owner_user_name='tempest-ServersTestJSON-1089675869-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:47:32Z,user_data=None,user_id='7a137feb008c49e092cbb94106526835',uuid=4117640c-3ae9-4568-9034-7a7612ac43fe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c9e57355-8fcc-40ec-ada3-03c6d0147098", "address": "fa:16:3e:f3:93:e6", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9e57355-8f", "ovs_interfaceid": "c9e57355-8fcc-40ec-ada3-03c6d0147098", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:47:40 compute-0 nova_compute[254092]: 2025-11-25 16:47:40.342 254096 DEBUG nova.network.os_vif_util [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converting VIF {"id": "c9e57355-8fcc-40ec-ada3-03c6d0147098", "address": "fa:16:3e:f3:93:e6", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9e57355-8f", "ovs_interfaceid": "c9e57355-8fcc-40ec-ada3-03c6d0147098", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:47:40 compute-0 nova_compute[254092]: 2025-11-25 16:47:40.342 254096 DEBUG nova.network.os_vif_util [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:93:e6,bridge_name='br-int',has_traffic_filtering=True,id=c9e57355-8fcc-40ec-ada3-03c6d0147098,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9e57355-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:47:40 compute-0 nova_compute[254092]: 2025-11-25 16:47:40.343 254096 DEBUG os_vif [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:93:e6,bridge_name='br-int',has_traffic_filtering=True,id=c9e57355-8fcc-40ec-ada3-03c6d0147098,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9e57355-8f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:47:40 compute-0 nova_compute[254092]: 2025-11-25 16:47:40.343 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:40 compute-0 nova_compute[254092]: 2025-11-25 16:47:40.344 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:40 compute-0 nova_compute[254092]: 2025-11-25 16:47:40.344 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:47:40 compute-0 nova_compute[254092]: 2025-11-25 16:47:40.346 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:40 compute-0 nova_compute[254092]: 2025-11-25 16:47:40.346 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc9e57355-8f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:40 compute-0 nova_compute[254092]: 2025-11-25 16:47:40.347 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc9e57355-8f, col_values=(('external_ids', {'iface-id': 'c9e57355-8fcc-40ec-ada3-03c6d0147098', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f3:93:e6', 'vm-uuid': '4117640c-3ae9-4568-9034-7a7612ac43fe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:40 compute-0 NetworkManager[48891]: <info>  [1764089260.3488] manager: (tapc9e57355-8f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/375)
Nov 25 16:47:40 compute-0 nova_compute[254092]: 2025-11-25 16:47:40.349 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:47:40 compute-0 nova_compute[254092]: 2025-11-25 16:47:40.355 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:40 compute-0 nova_compute[254092]: 2025-11-25 16:47:40.355 254096 INFO os_vif [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:93:e6,bridge_name='br-int',has_traffic_filtering=True,id=c9e57355-8fcc-40ec-ada3-03c6d0147098,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9e57355-8f')
Nov 25 16:47:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:47:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:47:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:47:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:47:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:47:40 compute-0 nova_compute[254092]: 2025-11-25 16:47:40.406 254096 DEBUG nova.virt.libvirt.driver [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:47:40 compute-0 nova_compute[254092]: 2025-11-25 16:47:40.407 254096 DEBUG nova.virt.libvirt.driver [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:47:40 compute-0 nova_compute[254092]: 2025-11-25 16:47:40.407 254096 DEBUG nova.virt.libvirt.driver [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] No VIF found with MAC fa:16:3e:f3:93:e6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:47:40 compute-0 nova_compute[254092]: 2025-11-25 16:47:40.408 254096 INFO nova.virt.libvirt.driver [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Using config drive
Nov 25 16:47:40 compute-0 nova_compute[254092]: 2025-11-25 16:47:40.434 254096 DEBUG nova.storage.rbd_utils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 4117640c-3ae9-4568-9034-7a7612ac43fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:47:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:47:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:47:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:47:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:47:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:47:41 compute-0 ceph-mon[74985]: pgmap v1868: 321 pgs: 321 active+clean; 353 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 3.6 MiB/s wr, 329 op/s
Nov 25 16:47:41 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2664227085' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:47:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:47:41 compute-0 nova_compute[254092]: 2025-11-25 16:47:41.259 254096 INFO nova.virt.libvirt.driver [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Creating config drive at /var/lib/nova/instances/4117640c-3ae9-4568-9034-7a7612ac43fe/disk.config
Nov 25 16:47:41 compute-0 nova_compute[254092]: 2025-11-25 16:47:41.264 254096 DEBUG oslo_concurrency.processutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4117640c-3ae9-4568-9034-7a7612ac43fe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnc27hjsb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:41 compute-0 nova_compute[254092]: 2025-11-25 16:47:41.401 254096 DEBUG oslo_concurrency.processutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4117640c-3ae9-4568-9034-7a7612ac43fe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnc27hjsb" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:41 compute-0 nova_compute[254092]: 2025-11-25 16:47:41.423 254096 DEBUG nova.storage.rbd_utils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 4117640c-3ae9-4568-9034-7a7612ac43fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:47:41 compute-0 nova_compute[254092]: 2025-11-25 16:47:41.427 254096 DEBUG oslo_concurrency.processutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4117640c-3ae9-4568-9034-7a7612ac43fe/disk.config 4117640c-3ae9-4568-9034-7a7612ac43fe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1869: 321 pgs: 321 active+clean; 333 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 5.6 MiB/s wr, 400 op/s
Nov 25 16:47:41 compute-0 nova_compute[254092]: 2025-11-25 16:47:41.822 254096 DEBUG oslo_concurrency.processutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4117640c-3ae9-4568-9034-7a7612ac43fe/disk.config 4117640c-3ae9-4568-9034-7a7612ac43fe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.394s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:41 compute-0 nova_compute[254092]: 2025-11-25 16:47:41.826 254096 INFO nova.virt.libvirt.driver [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Deleting local config drive /var/lib/nova/instances/4117640c-3ae9-4568-9034-7a7612ac43fe/disk.config because it was imported into RBD.
Nov 25 16:47:41 compute-0 kernel: tapc9e57355-8f: entered promiscuous mode
Nov 25 16:47:41 compute-0 NetworkManager[48891]: <info>  [1764089261.8961] manager: (tapc9e57355-8f): new Tun device (/org/freedesktop/NetworkManager/Devices/376)
Nov 25 16:47:41 compute-0 nova_compute[254092]: 2025-11-25 16:47:41.896 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:41 compute-0 ovn_controller[153477]: 2025-11-25T16:47:41Z|00875|binding|INFO|Claiming lport c9e57355-8fcc-40ec-ada3-03c6d0147098 for this chassis.
Nov 25 16:47:41 compute-0 ovn_controller[153477]: 2025-11-25T16:47:41Z|00876|binding|INFO|c9e57355-8fcc-40ec-ada3-03c6d0147098: Claiming fa:16:3e:f3:93:e6 10.100.0.5
Nov 25 16:47:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:41.904 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:93:e6 10.100.0.5'], port_security=['fa:16:3e:f3:93:e6 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '4117640c-3ae9-4568-9034-7a7612ac43fe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1431a6bc-93c8-4db5-a148-b2950f02c941', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56790f54314e4087abb5da8030b17666', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6f3e9934-51eb-4bf8-9114-93abd745996a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f25b0746-2c82-4a0a-9e71-0116018fc740, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=c9e57355-8fcc-40ec-ada3-03c6d0147098) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:47:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:41.906 163338 INFO neutron.agent.ovn.metadata.agent [-] Port c9e57355-8fcc-40ec-ada3-03c6d0147098 in datapath 1431a6bc-93c8-4db5-a148-b2950f02c941 bound to our chassis
Nov 25 16:47:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:41.909 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1431a6bc-93c8-4db5-a148-b2950f02c941
Nov 25 16:47:41 compute-0 ovn_controller[153477]: 2025-11-25T16:47:41Z|00877|binding|INFO|Setting lport c9e57355-8fcc-40ec-ada3-03c6d0147098 ovn-installed in OVS
Nov 25 16:47:41 compute-0 ovn_controller[153477]: 2025-11-25T16:47:41Z|00878|binding|INFO|Setting lport c9e57355-8fcc-40ec-ada3-03c6d0147098 up in Southbound
Nov 25 16:47:41 compute-0 nova_compute[254092]: 2025-11-25 16:47:41.921 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:41 compute-0 systemd-udevd[347558]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:47:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:41.939 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7b025d7a-7293-479d-931a-007668d43e1a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:41 compute-0 systemd-machined[216343]: New machine qemu-113-instance-0000005c.
Nov 25 16:47:41 compute-0 NetworkManager[48891]: <info>  [1764089261.9512] device (tapc9e57355-8f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:47:41 compute-0 NetworkManager[48891]: <info>  [1764089261.9520] device (tapc9e57355-8f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:47:41 compute-0 systemd[1]: Started Virtual Machine qemu-113-instance-0000005c.
Nov 25 16:47:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:41.982 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ac32e335-a5eb-4ef5-b54f-08035a9f77c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:41.986 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e1478573-3d38-456e-98d2-910399c02149]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:42.022 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[41876e0d-1e25-4342-8f97-82577830dcea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:42.046 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[aae22ef5-42a1-40e6-aaa2-e960fcca1624]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1431a6bc-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:f8:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 238], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563972, 'reachable_time': 23815, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347571, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:42.068 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ff0323d4-c3ef-4ac8-b9a4-5b02fdf2b248]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1431a6bc-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563984, 'tstamp': 563984}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347573, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1431a6bc-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563987, 'tstamp': 563987}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347573, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:42.070 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1431a6bc-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:42 compute-0 nova_compute[254092]: 2025-11-25 16:47:42.072 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:42 compute-0 nova_compute[254092]: 2025-11-25 16:47:42.073 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:42.074 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1431a6bc-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:42.074 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:47:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:42.074 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1431a6bc-90, col_values=(('external_ids', {'iface-id': '2b164d7c-ac7d-4965-a9d1-81827e4e9936'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:42.074 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:47:42 compute-0 nova_compute[254092]: 2025-11-25 16:47:42.306 254096 DEBUG nova.network.neutron [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Updated VIF entry in instance network info cache for port c9e57355-8fcc-40ec-ada3-03c6d0147098. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:47:42 compute-0 nova_compute[254092]: 2025-11-25 16:47:42.307 254096 DEBUG nova.network.neutron [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Updating instance_info_cache with network_info: [{"id": "c9e57355-8fcc-40ec-ada3-03c6d0147098", "address": "fa:16:3e:f3:93:e6", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9e57355-8f", "ovs_interfaceid": "c9e57355-8fcc-40ec-ada3-03c6d0147098", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:47:42 compute-0 nova_compute[254092]: 2025-11-25 16:47:42.324 254096 DEBUG oslo_concurrency.lockutils [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-4117640c-3ae9-4568-9034-7a7612ac43fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:47:42 compute-0 nova_compute[254092]: 2025-11-25 16:47:42.325 254096 DEBUG nova.compute.manager [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Received event network-vif-plugged-2fd7f15a-e429-4b39-86da-980a7fbc785f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:47:42 compute-0 nova_compute[254092]: 2025-11-25 16:47:42.325 254096 DEBUG oslo_concurrency.lockutils [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1ccf7cd6-cf8d-400e-820e-940108160fa8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:42 compute-0 nova_compute[254092]: 2025-11-25 16:47:42.326 254096 DEBUG oslo_concurrency.lockutils [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ccf7cd6-cf8d-400e-820e-940108160fa8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:42 compute-0 nova_compute[254092]: 2025-11-25 16:47:42.326 254096 DEBUG oslo_concurrency.lockutils [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ccf7cd6-cf8d-400e-820e-940108160fa8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:42 compute-0 nova_compute[254092]: 2025-11-25 16:47:42.326 254096 DEBUG nova.compute.manager [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] No waiting events found dispatching network-vif-plugged-2fd7f15a-e429-4b39-86da-980a7fbc785f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:47:42 compute-0 nova_compute[254092]: 2025-11-25 16:47:42.327 254096 WARNING nova.compute.manager [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Received unexpected event network-vif-plugged-2fd7f15a-e429-4b39-86da-980a7fbc785f for instance with vm_state deleted and task_state None.
Nov 25 16:47:42 compute-0 nova_compute[254092]: 2025-11-25 16:47:42.327 254096 DEBUG nova.compute.manager [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Received event network-vif-deleted-2fd7f15a-e429-4b39-86da-980a7fbc785f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:47:42 compute-0 nova_compute[254092]: 2025-11-25 16:47:42.327 254096 DEBUG nova.compute.manager [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Received event network-changed-54bd7c02-9f22-4656-9514-7219e656dbef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:47:42 compute-0 nova_compute[254092]: 2025-11-25 16:47:42.327 254096 DEBUG nova.compute.manager [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Refreshing instance network info cache due to event network-changed-54bd7c02-9f22-4656-9514-7219e656dbef. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:47:42 compute-0 nova_compute[254092]: 2025-11-25 16:47:42.327 254096 DEBUG oslo_concurrency.lockutils [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-2122cb4e-4525-451f-a46f-184e4a72cb34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:47:42 compute-0 nova_compute[254092]: 2025-11-25 16:47:42.328 254096 DEBUG oslo_concurrency.lockutils [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-2122cb4e-4525-451f-a46f-184e4a72cb34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:47:42 compute-0 nova_compute[254092]: 2025-11-25 16:47:42.328 254096 DEBUG nova.network.neutron [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Refreshing network info cache for port 54bd7c02-9f22-4656-9514-7219e656dbef _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:47:42 compute-0 nova_compute[254092]: 2025-11-25 16:47:42.408 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089262.4076002, 4117640c-3ae9-4568-9034-7a7612ac43fe => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:47:42 compute-0 nova_compute[254092]: 2025-11-25 16:47:42.409 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] VM Started (Lifecycle Event)
Nov 25 16:47:42 compute-0 nova_compute[254092]: 2025-11-25 16:47:42.424 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:47:42 compute-0 nova_compute[254092]: 2025-11-25 16:47:42.427 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089262.408412, 4117640c-3ae9-4568-9034-7a7612ac43fe => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:47:42 compute-0 nova_compute[254092]: 2025-11-25 16:47:42.427 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] VM Paused (Lifecycle Event)
Nov 25 16:47:42 compute-0 nova_compute[254092]: 2025-11-25 16:47:42.446 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:47:42 compute-0 nova_compute[254092]: 2025-11-25 16:47:42.449 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:47:42 compute-0 nova_compute[254092]: 2025-11-25 16:47:42.467 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:47:42 compute-0 nova_compute[254092]: 2025-11-25 16:47:42.700 254096 DEBUG nova.compute.manager [req-8fa180c7-e4fb-44a1-bf9a-697d2bbf1fc2 req-d6c3aaac-6124-4961-a960-7b7ebe56ef39 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Received event network-vif-plugged-c9e57355-8fcc-40ec-ada3-03c6d0147098 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:47:42 compute-0 nova_compute[254092]: 2025-11-25 16:47:42.700 254096 DEBUG oslo_concurrency.lockutils [req-8fa180c7-e4fb-44a1-bf9a-697d2bbf1fc2 req-d6c3aaac-6124-4961-a960-7b7ebe56ef39 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "4117640c-3ae9-4568-9034-7a7612ac43fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:42 compute-0 nova_compute[254092]: 2025-11-25 16:47:42.701 254096 DEBUG oslo_concurrency.lockutils [req-8fa180c7-e4fb-44a1-bf9a-697d2bbf1fc2 req-d6c3aaac-6124-4961-a960-7b7ebe56ef39 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4117640c-3ae9-4568-9034-7a7612ac43fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:42 compute-0 nova_compute[254092]: 2025-11-25 16:47:42.701 254096 DEBUG oslo_concurrency.lockutils [req-8fa180c7-e4fb-44a1-bf9a-697d2bbf1fc2 req-d6c3aaac-6124-4961-a960-7b7ebe56ef39 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4117640c-3ae9-4568-9034-7a7612ac43fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:42 compute-0 nova_compute[254092]: 2025-11-25 16:47:42.701 254096 DEBUG nova.compute.manager [req-8fa180c7-e4fb-44a1-bf9a-697d2bbf1fc2 req-d6c3aaac-6124-4961-a960-7b7ebe56ef39 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Processing event network-vif-plugged-c9e57355-8fcc-40ec-ada3-03c6d0147098 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:47:42 compute-0 nova_compute[254092]: 2025-11-25 16:47:42.701 254096 DEBUG nova.compute.manager [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:47:42 compute-0 nova_compute[254092]: 2025-11-25 16:47:42.705 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089262.7048876, 4117640c-3ae9-4568-9034-7a7612ac43fe => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:47:42 compute-0 nova_compute[254092]: 2025-11-25 16:47:42.705 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] VM Resumed (Lifecycle Event)
Nov 25 16:47:42 compute-0 nova_compute[254092]: 2025-11-25 16:47:42.706 254096 DEBUG nova.virt.libvirt.driver [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:47:42 compute-0 nova_compute[254092]: 2025-11-25 16:47:42.712 254096 INFO nova.virt.libvirt.driver [-] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Instance spawned successfully.
Nov 25 16:47:42 compute-0 nova_compute[254092]: 2025-11-25 16:47:42.712 254096 DEBUG nova.virt.libvirt.driver [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:47:42 compute-0 nova_compute[254092]: 2025-11-25 16:47:42.726 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:47:42 compute-0 nova_compute[254092]: 2025-11-25 16:47:42.734 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:47:42 compute-0 nova_compute[254092]: 2025-11-25 16:47:42.739 254096 DEBUG nova.virt.libvirt.driver [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:47:42 compute-0 nova_compute[254092]: 2025-11-25 16:47:42.740 254096 DEBUG nova.virt.libvirt.driver [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:47:42 compute-0 nova_compute[254092]: 2025-11-25 16:47:42.740 254096 DEBUG nova.virt.libvirt.driver [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:47:42 compute-0 nova_compute[254092]: 2025-11-25 16:47:42.741 254096 DEBUG nova.virt.libvirt.driver [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:47:42 compute-0 nova_compute[254092]: 2025-11-25 16:47:42.741 254096 DEBUG nova.virt.libvirt.driver [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:47:42 compute-0 nova_compute[254092]: 2025-11-25 16:47:42.742 254096 DEBUG nova.virt.libvirt.driver [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:47:42 compute-0 nova_compute[254092]: 2025-11-25 16:47:42.770 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:47:42 compute-0 nova_compute[254092]: 2025-11-25 16:47:42.820 254096 INFO nova.compute.manager [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Took 10.31 seconds to spawn the instance on the hypervisor.
Nov 25 16:47:42 compute-0 nova_compute[254092]: 2025-11-25 16:47:42.821 254096 DEBUG nova.compute.manager [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:47:42 compute-0 nova_compute[254092]: 2025-11-25 16:47:42.883 254096 INFO nova.compute.manager [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Took 11.40 seconds to build instance.
Nov 25 16:47:42 compute-0 nova_compute[254092]: 2025-11-25 16:47:42.896 254096 DEBUG oslo_concurrency.lockutils [None req-29611f86-3ec2-4f7a-b58f-b411a6a419d3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "4117640c-3ae9-4568-9034-7a7612ac43fe" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.497s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:43 compute-0 ceph-mon[74985]: pgmap v1869: 321 pgs: 321 active+clean; 333 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 5.6 MiB/s wr, 400 op/s
Nov 25 16:47:43 compute-0 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Nov 25 16:47:43 compute-0 ovn_controller[153477]: 2025-11-25T16:47:43Z|00093|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c4:5c:49 10.100.0.8
Nov 25 16:47:43 compute-0 ovn_controller[153477]: 2025-11-25T16:47:43Z|00094|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c4:5c:49 10.100.0.8
Nov 25 16:47:43 compute-0 ovn_controller[153477]: 2025-11-25T16:47:43Z|00095|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:42:5b:d2 10.100.0.4
Nov 25 16:47:43 compute-0 ovn_controller[153477]: 2025-11-25T16:47:43Z|00096|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:42:5b:d2 10.100.0.4
Nov 25 16:47:43 compute-0 nova_compute[254092]: 2025-11-25 16:47:43.561 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1870: 321 pgs: 321 active+clean; 333 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 3.8 MiB/s wr, 269 op/s
Nov 25 16:47:44 compute-0 nova_compute[254092]: 2025-11-25 16:47:44.274 254096 DEBUG nova.network.neutron [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Updated VIF entry in instance network info cache for port 54bd7c02-9f22-4656-9514-7219e656dbef. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:47:44 compute-0 nova_compute[254092]: 2025-11-25 16:47:44.275 254096 DEBUG nova.network.neutron [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Updating instance_info_cache with network_info: [{"id": "54bd7c02-9f22-4656-9514-7219e656dbef", "address": "fa:16:3e:42:5b:d2", "network": {"id": "be38e015-3930-495b-9582-fe9707042e20", "bridge": "br-int", "label": "tempest-network-smoke--1777742074", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bd7c02-9f", "ovs_interfaceid": "54bd7c02-9f22-4656-9514-7219e656dbef", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:47:44 compute-0 nova_compute[254092]: 2025-11-25 16:47:44.500 254096 DEBUG oslo_concurrency.lockutils [req-f24ac897-fab5-4e7c-900d-59f8fd4c8f60 req-be6234d3-77b3-4174-b513-7b772f6701e6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-2122cb4e-4525-451f-a46f-184e4a72cb34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:47:44 compute-0 ceph-mon[74985]: pgmap v1870: 321 pgs: 321 active+clean; 333 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 3.8 MiB/s wr, 269 op/s
Nov 25 16:47:44 compute-0 nova_compute[254092]: 2025-11-25 16:47:44.801 254096 DEBUG nova.compute.manager [req-c49436e6-6123-4112-b9b0-e0f8dd5fc54a req-b292cd4f-c9be-4c3c-b50b-2157f9b2e4f6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Received event network-vif-plugged-c9e57355-8fcc-40ec-ada3-03c6d0147098 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:47:44 compute-0 nova_compute[254092]: 2025-11-25 16:47:44.801 254096 DEBUG oslo_concurrency.lockutils [req-c49436e6-6123-4112-b9b0-e0f8dd5fc54a req-b292cd4f-c9be-4c3c-b50b-2157f9b2e4f6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "4117640c-3ae9-4568-9034-7a7612ac43fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:44 compute-0 nova_compute[254092]: 2025-11-25 16:47:44.801 254096 DEBUG oslo_concurrency.lockutils [req-c49436e6-6123-4112-b9b0-e0f8dd5fc54a req-b292cd4f-c9be-4c3c-b50b-2157f9b2e4f6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4117640c-3ae9-4568-9034-7a7612ac43fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:44 compute-0 nova_compute[254092]: 2025-11-25 16:47:44.802 254096 DEBUG oslo_concurrency.lockutils [req-c49436e6-6123-4112-b9b0-e0f8dd5fc54a req-b292cd4f-c9be-4c3c-b50b-2157f9b2e4f6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4117640c-3ae9-4568-9034-7a7612ac43fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:44 compute-0 nova_compute[254092]: 2025-11-25 16:47:44.802 254096 DEBUG nova.compute.manager [req-c49436e6-6123-4112-b9b0-e0f8dd5fc54a req-b292cd4f-c9be-4c3c-b50b-2157f9b2e4f6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] No waiting events found dispatching network-vif-plugged-c9e57355-8fcc-40ec-ada3-03c6d0147098 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:47:44 compute-0 nova_compute[254092]: 2025-11-25 16:47:44.802 254096 WARNING nova.compute.manager [req-c49436e6-6123-4112-b9b0-e0f8dd5fc54a req-b292cd4f-c9be-4c3c-b50b-2157f9b2e4f6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Received unexpected event network-vif-plugged-c9e57355-8fcc-40ec-ada3-03c6d0147098 for instance with vm_state active and task_state None.
Nov 25 16:47:45 compute-0 nova_compute[254092]: 2025-11-25 16:47:45.350 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:45 compute-0 podman[347617]: 2025-11-25 16:47:45.651453293 +0000 UTC m=+0.058011027 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 16:47:45 compute-0 podman[347616]: 2025-11-25 16:47:45.672999939 +0000 UTC m=+0.081935217 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 16:47:45 compute-0 podman[347618]: 2025-11-25 16:47:45.67999904 +0000 UTC m=+0.082889364 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_controller)
Nov 25 16:47:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1871: 321 pgs: 321 active+clean; 388 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 7.2 MiB/s rd, 6.6 MiB/s wr, 441 op/s
Nov 25 16:47:46 compute-0 ovn_controller[153477]: 2025-11-25T16:47:46Z|00879|binding|INFO|Releasing lport b7f55d8f-5f76-4e3e-96b9-d43d90f8b6f9 from this chassis (sb_readonly=0)
Nov 25 16:47:46 compute-0 ovn_controller[153477]: 2025-11-25T16:47:46Z|00880|binding|INFO|Releasing lport 031633f4-be6f-41a4-9c41-8e90000f1a4f from this chassis (sb_readonly=0)
Nov 25 16:47:46 compute-0 ovn_controller[153477]: 2025-11-25T16:47:46Z|00881|binding|INFO|Releasing lport 2b164d7c-ac7d-4965-a9d1-81827e4e9936 from this chassis (sb_readonly=0)
Nov 25 16:47:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:47:46 compute-0 nova_compute[254092]: 2025-11-25 16:47:46.203 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:46 compute-0 nova_compute[254092]: 2025-11-25 16:47:46.392 254096 DEBUG oslo_concurrency.lockutils [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "4117640c-3ae9-4568-9034-7a7612ac43fe" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:46 compute-0 nova_compute[254092]: 2025-11-25 16:47:46.392 254096 DEBUG oslo_concurrency.lockutils [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "4117640c-3ae9-4568-9034-7a7612ac43fe" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:46 compute-0 nova_compute[254092]: 2025-11-25 16:47:46.393 254096 DEBUG oslo_concurrency.lockutils [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "4117640c-3ae9-4568-9034-7a7612ac43fe-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:46 compute-0 nova_compute[254092]: 2025-11-25 16:47:46.393 254096 DEBUG oslo_concurrency.lockutils [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "4117640c-3ae9-4568-9034-7a7612ac43fe-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:46 compute-0 nova_compute[254092]: 2025-11-25 16:47:46.394 254096 DEBUG oslo_concurrency.lockutils [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "4117640c-3ae9-4568-9034-7a7612ac43fe-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:46 compute-0 nova_compute[254092]: 2025-11-25 16:47:46.395 254096 INFO nova.compute.manager [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Terminating instance
Nov 25 16:47:46 compute-0 nova_compute[254092]: 2025-11-25 16:47:46.396 254096 DEBUG nova.compute.manager [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:47:46 compute-0 kernel: tapc9e57355-8f (unregistering): left promiscuous mode
Nov 25 16:47:46 compute-0 NetworkManager[48891]: <info>  [1764089266.4394] device (tapc9e57355-8f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:47:46 compute-0 nova_compute[254092]: 2025-11-25 16:47:46.442 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:46 compute-0 ovn_controller[153477]: 2025-11-25T16:47:46Z|00882|binding|INFO|Releasing lport c9e57355-8fcc-40ec-ada3-03c6d0147098 from this chassis (sb_readonly=0)
Nov 25 16:47:46 compute-0 ovn_controller[153477]: 2025-11-25T16:47:46Z|00883|binding|INFO|Setting lport c9e57355-8fcc-40ec-ada3-03c6d0147098 down in Southbound
Nov 25 16:47:46 compute-0 ovn_controller[153477]: 2025-11-25T16:47:46Z|00884|binding|INFO|Removing iface tapc9e57355-8f ovn-installed in OVS
Nov 25 16:47:46 compute-0 nova_compute[254092]: 2025-11-25 16:47:46.444 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:46.452 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:93:e6 10.100.0.5'], port_security=['fa:16:3e:f3:93:e6 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '4117640c-3ae9-4568-9034-7a7612ac43fe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1431a6bc-93c8-4db5-a148-b2950f02c941', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56790f54314e4087abb5da8030b17666', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6f3e9934-51eb-4bf8-9114-93abd745996a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f25b0746-2c82-4a0a-9e71-0116018fc740, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=c9e57355-8fcc-40ec-ada3-03c6d0147098) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:47:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:46.453 163338 INFO neutron.agent.ovn.metadata.agent [-] Port c9e57355-8fcc-40ec-ada3-03c6d0147098 in datapath 1431a6bc-93c8-4db5-a148-b2950f02c941 unbound from our chassis
Nov 25 16:47:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:46.454 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1431a6bc-93c8-4db5-a148-b2950f02c941
Nov 25 16:47:46 compute-0 nova_compute[254092]: 2025-11-25 16:47:46.466 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:46.472 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[05f7b275-97a0-4a2a-a667-fddb06a22732]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:46.499 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[30120b8e-3ac1-4d69-b23f-843bc7d8acee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:46.501 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[82969566-53fd-4c30-a4b7-f9a51ff37403]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:46 compute-0 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d0000005c.scope: Deactivated successfully.
Nov 25 16:47:46 compute-0 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d0000005c.scope: Consumed 4.057s CPU time.
Nov 25 16:47:46 compute-0 systemd-machined[216343]: Machine qemu-113-instance-0000005c terminated.
Nov 25 16:47:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:46.528 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e3593018-3b4c-4f91-8356-5657cfcc2c85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:46.546 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c3ff98ab-8ba6-4465-96e2-9b40d419751d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1431a6bc-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:f8:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 17, 'rx_bytes': 658, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 17, 'rx_bytes': 658, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 238], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563972, 'reachable_time': 23815, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347691, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:46.562 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f52d542e-168a-4eff-8820-a2e1b081139d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1431a6bc-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563984, 'tstamp': 563984}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347692, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1431a6bc-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563987, 'tstamp': 563987}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347692, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:46.564 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1431a6bc-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:46 compute-0 nova_compute[254092]: 2025-11-25 16:47:46.565 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:46 compute-0 nova_compute[254092]: 2025-11-25 16:47:46.569 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:46.570 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1431a6bc-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:46.570 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:47:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:46.570 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1431a6bc-90, col_values=(('external_ids', {'iface-id': '2b164d7c-ac7d-4965-a9d1-81827e4e9936'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:46.571 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:47:46 compute-0 nova_compute[254092]: 2025-11-25 16:47:46.634 254096 INFO nova.virt.libvirt.driver [-] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Instance destroyed successfully.
Nov 25 16:47:46 compute-0 nova_compute[254092]: 2025-11-25 16:47:46.635 254096 DEBUG nova.objects.instance [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lazy-loading 'resources' on Instance uuid 4117640c-3ae9-4568-9034-7a7612ac43fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:47:46 compute-0 nova_compute[254092]: 2025-11-25 16:47:46.645 254096 DEBUG nova.virt.libvirt.vif [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:47:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-694487906',display_name='tempest-ServersTestJSON-server-694487906',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-694487906',id=92,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:47:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='56790f54314e4087abb5da8030b17666',ramdisk_id='',reservation_id='r-lutj4rar',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1089675869',owner_user_name='tempest-ServersTestJSON-1089675869-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:47:42Z,user_data=None,user_id='7a137feb008c49e092cbb94106526835',uuid=4117640c-3ae9-4568-9034-7a7612ac43fe,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c9e57355-8fcc-40ec-ada3-03c6d0147098", "address": "fa:16:3e:f3:93:e6", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9e57355-8f", "ovs_interfaceid": "c9e57355-8fcc-40ec-ada3-03c6d0147098", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:47:46 compute-0 nova_compute[254092]: 2025-11-25 16:47:46.646 254096 DEBUG nova.network.os_vif_util [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converting VIF {"id": "c9e57355-8fcc-40ec-ada3-03c6d0147098", "address": "fa:16:3e:f3:93:e6", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9e57355-8f", "ovs_interfaceid": "c9e57355-8fcc-40ec-ada3-03c6d0147098", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:47:46 compute-0 nova_compute[254092]: 2025-11-25 16:47:46.646 254096 DEBUG nova.network.os_vif_util [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:93:e6,bridge_name='br-int',has_traffic_filtering=True,id=c9e57355-8fcc-40ec-ada3-03c6d0147098,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9e57355-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:47:46 compute-0 nova_compute[254092]: 2025-11-25 16:47:46.647 254096 DEBUG os_vif [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:93:e6,bridge_name='br-int',has_traffic_filtering=True,id=c9e57355-8fcc-40ec-ada3-03c6d0147098,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9e57355-8f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:47:46 compute-0 nova_compute[254092]: 2025-11-25 16:47:46.648 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:46 compute-0 nova_compute[254092]: 2025-11-25 16:47:46.648 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc9e57355-8f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:46 compute-0 nova_compute[254092]: 2025-11-25 16:47:46.650 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:46 compute-0 nova_compute[254092]: 2025-11-25 16:47:46.651 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:47:46 compute-0 nova_compute[254092]: 2025-11-25 16:47:46.654 254096 INFO os_vif [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:93:e6,bridge_name='br-int',has_traffic_filtering=True,id=c9e57355-8fcc-40ec-ada3-03c6d0147098,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9e57355-8f')
Nov 25 16:47:46 compute-0 ceph-mon[74985]: pgmap v1871: 321 pgs: 321 active+clean; 388 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 7.2 MiB/s rd, 6.6 MiB/s wr, 441 op/s
Nov 25 16:47:47 compute-0 nova_compute[254092]: 2025-11-25 16:47:47.046 254096 INFO nova.virt.libvirt.driver [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Deleting instance files /var/lib/nova/instances/4117640c-3ae9-4568-9034-7a7612ac43fe_del
Nov 25 16:47:47 compute-0 nova_compute[254092]: 2025-11-25 16:47:47.046 254096 INFO nova.virt.libvirt.driver [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Deletion of /var/lib/nova/instances/4117640c-3ae9-4568-9034-7a7612ac43fe_del complete
Nov 25 16:47:47 compute-0 nova_compute[254092]: 2025-11-25 16:47:47.097 254096 INFO nova.compute.manager [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Took 0.70 seconds to destroy the instance on the hypervisor.
Nov 25 16:47:47 compute-0 nova_compute[254092]: 2025-11-25 16:47:47.098 254096 DEBUG oslo.service.loopingcall [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:47:47 compute-0 nova_compute[254092]: 2025-11-25 16:47:47.098 254096 DEBUG nova.compute.manager [-] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:47:47 compute-0 nova_compute[254092]: 2025-11-25 16:47:47.098 254096 DEBUG nova.network.neutron [-] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:47:47 compute-0 nova_compute[254092]: 2025-11-25 16:47:47.135 254096 DEBUG nova.compute.manager [req-77f88447-2cca-4abc-82f5-900d629d28a9 req-49b45174-20c2-41e6-959a-dcc27ac3d2e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Received event network-vif-unplugged-c9e57355-8fcc-40ec-ada3-03c6d0147098 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:47:47 compute-0 nova_compute[254092]: 2025-11-25 16:47:47.135 254096 DEBUG oslo_concurrency.lockutils [req-77f88447-2cca-4abc-82f5-900d629d28a9 req-49b45174-20c2-41e6-959a-dcc27ac3d2e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "4117640c-3ae9-4568-9034-7a7612ac43fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:47 compute-0 nova_compute[254092]: 2025-11-25 16:47:47.136 254096 DEBUG oslo_concurrency.lockutils [req-77f88447-2cca-4abc-82f5-900d629d28a9 req-49b45174-20c2-41e6-959a-dcc27ac3d2e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4117640c-3ae9-4568-9034-7a7612ac43fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:47 compute-0 nova_compute[254092]: 2025-11-25 16:47:47.136 254096 DEBUG oslo_concurrency.lockutils [req-77f88447-2cca-4abc-82f5-900d629d28a9 req-49b45174-20c2-41e6-959a-dcc27ac3d2e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4117640c-3ae9-4568-9034-7a7612ac43fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:47 compute-0 nova_compute[254092]: 2025-11-25 16:47:47.136 254096 DEBUG nova.compute.manager [req-77f88447-2cca-4abc-82f5-900d629d28a9 req-49b45174-20c2-41e6-959a-dcc27ac3d2e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] No waiting events found dispatching network-vif-unplugged-c9e57355-8fcc-40ec-ada3-03c6d0147098 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:47:47 compute-0 nova_compute[254092]: 2025-11-25 16:47:47.136 254096 DEBUG nova.compute.manager [req-77f88447-2cca-4abc-82f5-900d629d28a9 req-49b45174-20c2-41e6-959a-dcc27ac3d2e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Received event network-vif-unplugged-c9e57355-8fcc-40ec-ada3-03c6d0147098 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:47:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1872: 321 pgs: 321 active+clean; 405 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 6.8 MiB/s wr, 324 op/s
Nov 25 16:47:48 compute-0 nova_compute[254092]: 2025-11-25 16:47:48.214 254096 DEBUG nova.network.neutron [-] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:47:48 compute-0 nova_compute[254092]: 2025-11-25 16:47:48.229 254096 INFO nova.compute.manager [-] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Took 1.13 seconds to deallocate network for instance.
Nov 25 16:47:48 compute-0 nova_compute[254092]: 2025-11-25 16:47:48.267 254096 DEBUG oslo_concurrency.lockutils [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:48 compute-0 nova_compute[254092]: 2025-11-25 16:47:48.268 254096 DEBUG oslo_concurrency.lockutils [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:48 compute-0 nova_compute[254092]: 2025-11-25 16:47:48.357 254096 DEBUG oslo_concurrency.processutils [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:48 compute-0 nova_compute[254092]: 2025-11-25 16:47:48.563 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:47:48 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2903473240' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:47:48 compute-0 ceph-mon[74985]: pgmap v1872: 321 pgs: 321 active+clean; 405 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 6.8 MiB/s wr, 324 op/s
Nov 25 16:47:48 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2903473240' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:47:48 compute-0 nova_compute[254092]: 2025-11-25 16:47:48.815 254096 DEBUG oslo_concurrency.processutils [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:48 compute-0 nova_compute[254092]: 2025-11-25 16:47:48.821 254096 DEBUG nova.compute.provider_tree [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:47:48 compute-0 nova_compute[254092]: 2025-11-25 16:47:48.837 254096 DEBUG nova.scheduler.client.report [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:47:48 compute-0 nova_compute[254092]: 2025-11-25 16:47:48.861 254096 DEBUG oslo_concurrency.lockutils [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:48 compute-0 nova_compute[254092]: 2025-11-25 16:47:48.884 254096 INFO nova.scheduler.client.report [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Deleted allocations for instance 4117640c-3ae9-4568-9034-7a7612ac43fe
Nov 25 16:47:48 compute-0 nova_compute[254092]: 2025-11-25 16:47:48.948 254096 DEBUG oslo_concurrency.lockutils [None req-f5430e8c-bba3-4bc5-8bde-7ab826b8423c 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "4117640c-3ae9-4568-9034-7a7612ac43fe" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:49 compute-0 nova_compute[254092]: 2025-11-25 16:47:49.307 254096 DEBUG nova.compute.manager [req-7aad156f-06f2-4a2c-8343-30b522b1dac3 req-5944580f-1bdc-4259-a49f-0009e26b881f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Received event network-vif-plugged-c9e57355-8fcc-40ec-ada3-03c6d0147098 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:47:49 compute-0 nova_compute[254092]: 2025-11-25 16:47:49.307 254096 DEBUG oslo_concurrency.lockutils [req-7aad156f-06f2-4a2c-8343-30b522b1dac3 req-5944580f-1bdc-4259-a49f-0009e26b881f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "4117640c-3ae9-4568-9034-7a7612ac43fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:49 compute-0 nova_compute[254092]: 2025-11-25 16:47:49.308 254096 DEBUG oslo_concurrency.lockutils [req-7aad156f-06f2-4a2c-8343-30b522b1dac3 req-5944580f-1bdc-4259-a49f-0009e26b881f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4117640c-3ae9-4568-9034-7a7612ac43fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:49 compute-0 nova_compute[254092]: 2025-11-25 16:47:49.308 254096 DEBUG oslo_concurrency.lockutils [req-7aad156f-06f2-4a2c-8343-30b522b1dac3 req-5944580f-1bdc-4259-a49f-0009e26b881f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4117640c-3ae9-4568-9034-7a7612ac43fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:49 compute-0 nova_compute[254092]: 2025-11-25 16:47:49.308 254096 DEBUG nova.compute.manager [req-7aad156f-06f2-4a2c-8343-30b522b1dac3 req-5944580f-1bdc-4259-a49f-0009e26b881f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] No waiting events found dispatching network-vif-plugged-c9e57355-8fcc-40ec-ada3-03c6d0147098 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:47:49 compute-0 nova_compute[254092]: 2025-11-25 16:47:49.308 254096 WARNING nova.compute.manager [req-7aad156f-06f2-4a2c-8343-30b522b1dac3 req-5944580f-1bdc-4259-a49f-0009e26b881f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Received unexpected event network-vif-plugged-c9e57355-8fcc-40ec-ada3-03c6d0147098 for instance with vm_state deleted and task_state None.
Nov 25 16:47:49 compute-0 nova_compute[254092]: 2025-11-25 16:47:49.308 254096 DEBUG nova.compute.manager [req-7aad156f-06f2-4a2c-8343-30b522b1dac3 req-5944580f-1bdc-4259-a49f-0009e26b881f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Received event network-vif-deleted-c9e57355-8fcc-40ec-ada3-03c6d0147098 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:47:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1873: 321 pgs: 321 active+clean; 405 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 6.4 MiB/s wr, 292 op/s
Nov 25 16:47:49 compute-0 nova_compute[254092]: 2025-11-25 16:47:49.777 254096 INFO nova.compute.manager [None req-17d889af-6922-4595-b1ee-7af392aa5130 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Get console output
Nov 25 16:47:49 compute-0 nova_compute[254092]: 2025-11-25 16:47:49.781 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 16:47:49 compute-0 nova_compute[254092]: 2025-11-25 16:47:49.815 254096 DEBUG oslo_concurrency.lockutils [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "6affd696-c15d-4401-8512-2aabbf55fd4e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:49 compute-0 nova_compute[254092]: 2025-11-25 16:47:49.816 254096 DEBUG oslo_concurrency.lockutils [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "6affd696-c15d-4401-8512-2aabbf55fd4e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:49 compute-0 nova_compute[254092]: 2025-11-25 16:47:49.816 254096 DEBUG oslo_concurrency.lockutils [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "6affd696-c15d-4401-8512-2aabbf55fd4e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:49 compute-0 nova_compute[254092]: 2025-11-25 16:47:49.816 254096 DEBUG oslo_concurrency.lockutils [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "6affd696-c15d-4401-8512-2aabbf55fd4e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:49 compute-0 nova_compute[254092]: 2025-11-25 16:47:49.816 254096 DEBUG oslo_concurrency.lockutils [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "6affd696-c15d-4401-8512-2aabbf55fd4e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:49 compute-0 nova_compute[254092]: 2025-11-25 16:47:49.817 254096 INFO nova.compute.manager [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Terminating instance
Nov 25 16:47:49 compute-0 nova_compute[254092]: 2025-11-25 16:47:49.818 254096 DEBUG nova.compute.manager [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:47:49 compute-0 kernel: tapbe2a1b3b-f8 (unregistering): left promiscuous mode
Nov 25 16:47:49 compute-0 NetworkManager[48891]: <info>  [1764089269.8688] device (tapbe2a1b3b-f8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:47:49 compute-0 ovn_controller[153477]: 2025-11-25T16:47:49Z|00885|binding|INFO|Releasing lport be2a1b3b-f8a0-4a67-9582-54b753171490 from this chassis (sb_readonly=0)
Nov 25 16:47:49 compute-0 ovn_controller[153477]: 2025-11-25T16:47:49Z|00886|binding|INFO|Setting lport be2a1b3b-f8a0-4a67-9582-54b753171490 down in Southbound
Nov 25 16:47:49 compute-0 nova_compute[254092]: 2025-11-25 16:47:49.877 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:49 compute-0 ovn_controller[153477]: 2025-11-25T16:47:49Z|00887|binding|INFO|Removing iface tapbe2a1b3b-f8 ovn-installed in OVS
Nov 25 16:47:49 compute-0 nova_compute[254092]: 2025-11-25 16:47:49.879 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:49.884 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8a:e5:4f 10.100.0.14'], port_security=['fa:16:3e:8a:e5:4f 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '6affd696-c15d-4401-8512-2aabbf55fd4e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1431a6bc-93c8-4db5-a148-b2950f02c941', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56790f54314e4087abb5da8030b17666', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6f3e9934-51eb-4bf8-9114-93abd745996a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f25b0746-2c82-4a0a-9e71-0116018fc740, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=be2a1b3b-f8a0-4a67-9582-54b753171490) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:47:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:49.885 163338 INFO neutron.agent.ovn.metadata.agent [-] Port be2a1b3b-f8a0-4a67-9582-54b753171490 in datapath 1431a6bc-93c8-4db5-a148-b2950f02c941 unbound from our chassis
Nov 25 16:47:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:49.887 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1431a6bc-93c8-4db5-a148-b2950f02c941
Nov 25 16:47:49 compute-0 nova_compute[254092]: 2025-11-25 16:47:49.897 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:49.903 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d09a81ec-6661-40b8-8efd-81d90cea1e96]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:49 compute-0 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d00000058.scope: Deactivated successfully.
Nov 25 16:47:49 compute-0 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d00000058.scope: Consumed 13.932s CPU time.
Nov 25 16:47:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:49.930 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f7b81061-87b2-478d-b412-5177c2b221af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:49.932 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f4ceccc3-8b15-4c53-a503-e241f212d2b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:49 compute-0 systemd-machined[216343]: Machine qemu-109-instance-00000058 terminated.
Nov 25 16:47:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:49.960 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[7da502fd-a298-40d3-98b0-c7d84412ebcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:49.975 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a25dcfea-8762-40dc-8889-e76898aefe5f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1431a6bc-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:f8:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 19, 'rx_bytes': 700, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 19, 'rx_bytes': 700, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 238], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563972, 'reachable_time': 23815, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347757, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:49.991 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b4dac64d-092d-458f-9fd3-72db059aa456]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1431a6bc-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563984, 'tstamp': 563984}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347758, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1431a6bc-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563987, 'tstamp': 563987}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347758, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:49.993 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1431a6bc-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:49 compute-0 nova_compute[254092]: 2025-11-25 16:47:49.994 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:49 compute-0 nova_compute[254092]: 2025-11-25 16:47:49.998 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:49.999 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1431a6bc-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:49.999 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:47:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:49.999 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1431a6bc-90, col_values=(('external_ids', {'iface-id': '2b164d7c-ac7d-4965-a9d1-81827e4e9936'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:50.000 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:47:50 compute-0 nova_compute[254092]: 2025-11-25 16:47:50.053 254096 INFO nova.virt.libvirt.driver [-] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Instance destroyed successfully.
Nov 25 16:47:50 compute-0 nova_compute[254092]: 2025-11-25 16:47:50.054 254096 DEBUG nova.objects.instance [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lazy-loading 'resources' on Instance uuid 6affd696-c15d-4401-8512-2aabbf55fd4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:47:50 compute-0 nova_compute[254092]: 2025-11-25 16:47:50.070 254096 DEBUG nova.virt.libvirt.vif [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:47:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-694487906',display_name='tempest-ServersTestJSON-server-694487906',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-694487906',id=88,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:47:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='56790f54314e4087abb5da8030b17666',ramdisk_id='',reservation_id='r-nafun8zg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_r
am='0',owner_project_name='tempest-ServersTestJSON-1089675869',owner_user_name='tempest-ServersTestJSON-1089675869-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:47:27Z,user_data=None,user_id='7a137feb008c49e092cbb94106526835',uuid=6affd696-c15d-4401-8512-2aabbf55fd4e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "be2a1b3b-f8a0-4a67-9582-54b753171490", "address": "fa:16:3e:8a:e5:4f", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe2a1b3b-f8", "ovs_interfaceid": "be2a1b3b-f8a0-4a67-9582-54b753171490", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:47:50 compute-0 nova_compute[254092]: 2025-11-25 16:47:50.071 254096 DEBUG nova.network.os_vif_util [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converting VIF {"id": "be2a1b3b-f8a0-4a67-9582-54b753171490", "address": "fa:16:3e:8a:e5:4f", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe2a1b3b-f8", "ovs_interfaceid": "be2a1b3b-f8a0-4a67-9582-54b753171490", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:47:50 compute-0 nova_compute[254092]: 2025-11-25 16:47:50.071 254096 DEBUG nova.network.os_vif_util [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8a:e5:4f,bridge_name='br-int',has_traffic_filtering=True,id=be2a1b3b-f8a0-4a67-9582-54b753171490,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe2a1b3b-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:47:50 compute-0 nova_compute[254092]: 2025-11-25 16:47:50.071 254096 DEBUG os_vif [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8a:e5:4f,bridge_name='br-int',has_traffic_filtering=True,id=be2a1b3b-f8a0-4a67-9582-54b753171490,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe2a1b3b-f8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:47:50 compute-0 nova_compute[254092]: 2025-11-25 16:47:50.073 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:50 compute-0 nova_compute[254092]: 2025-11-25 16:47:50.073 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbe2a1b3b-f8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:50 compute-0 nova_compute[254092]: 2025-11-25 16:47:50.074 254096 INFO nova.compute.manager [None req-d6266c33-3e7d-44cc-8a05-e4d64e09237a 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Pausing
Nov 25 16:47:50 compute-0 nova_compute[254092]: 2025-11-25 16:47:50.075 254096 DEBUG nova.objects.instance [None req-d6266c33-3e7d-44cc-8a05-e4d64e09237a 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'flavor' on Instance uuid 2122cb4e-4525-451f-a46f-184e4a72cb34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:47:50 compute-0 nova_compute[254092]: 2025-11-25 16:47:50.076 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:50 compute-0 nova_compute[254092]: 2025-11-25 16:47:50.077 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:47:50 compute-0 nova_compute[254092]: 2025-11-25 16:47:50.079 254096 INFO os_vif [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8a:e5:4f,bridge_name='br-int',has_traffic_filtering=True,id=be2a1b3b-f8a0-4a67-9582-54b753171490,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe2a1b3b-f8')
Nov 25 16:47:50 compute-0 nova_compute[254092]: 2025-11-25 16:47:50.107 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089270.1067798, 2122cb4e-4525-451f-a46f-184e4a72cb34 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:47:50 compute-0 nova_compute[254092]: 2025-11-25 16:47:50.107 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] VM Paused (Lifecycle Event)
Nov 25 16:47:50 compute-0 nova_compute[254092]: 2025-11-25 16:47:50.109 254096 DEBUG nova.compute.manager [None req-d6266c33-3e7d-44cc-8a05-e4d64e09237a 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:47:50 compute-0 nova_compute[254092]: 2025-11-25 16:47:50.133 254096 DEBUG nova.compute.manager [req-d0aaa0ec-ab16-415c-b30e-56bd7354d56f req-7ce7d440-fd18-4815-95b4-72959cb8c850 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Received event network-vif-unplugged-be2a1b3b-f8a0-4a67-9582-54b753171490 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:47:50 compute-0 nova_compute[254092]: 2025-11-25 16:47:50.134 254096 DEBUG oslo_concurrency.lockutils [req-d0aaa0ec-ab16-415c-b30e-56bd7354d56f req-7ce7d440-fd18-4815-95b4-72959cb8c850 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6affd696-c15d-4401-8512-2aabbf55fd4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:50 compute-0 nova_compute[254092]: 2025-11-25 16:47:50.134 254096 DEBUG oslo_concurrency.lockutils [req-d0aaa0ec-ab16-415c-b30e-56bd7354d56f req-7ce7d440-fd18-4815-95b4-72959cb8c850 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6affd696-c15d-4401-8512-2aabbf55fd4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:50 compute-0 nova_compute[254092]: 2025-11-25 16:47:50.135 254096 DEBUG oslo_concurrency.lockutils [req-d0aaa0ec-ab16-415c-b30e-56bd7354d56f req-7ce7d440-fd18-4815-95b4-72959cb8c850 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6affd696-c15d-4401-8512-2aabbf55fd4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:50 compute-0 nova_compute[254092]: 2025-11-25 16:47:50.135 254096 DEBUG nova.compute.manager [req-d0aaa0ec-ab16-415c-b30e-56bd7354d56f req-7ce7d440-fd18-4815-95b4-72959cb8c850 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] No waiting events found dispatching network-vif-unplugged-be2a1b3b-f8a0-4a67-9582-54b753171490 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:47:50 compute-0 nova_compute[254092]: 2025-11-25 16:47:50.135 254096 DEBUG nova.compute.manager [req-d0aaa0ec-ab16-415c-b30e-56bd7354d56f req-7ce7d440-fd18-4815-95b4-72959cb8c850 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Received event network-vif-unplugged-be2a1b3b-f8a0-4a67-9582-54b753171490 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:47:50 compute-0 nova_compute[254092]: 2025-11-25 16:47:50.140 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:47:50 compute-0 nova_compute[254092]: 2025-11-25 16:47:50.145 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:47:50 compute-0 nova_compute[254092]: 2025-11-25 16:47:50.173 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] During sync_power_state the instance has a pending task (pausing). Skip.
Nov 25 16:47:50 compute-0 nova_compute[254092]: 2025-11-25 16:47:50.400 254096 INFO nova.virt.libvirt.driver [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Deleting instance files /var/lib/nova/instances/6affd696-c15d-4401-8512-2aabbf55fd4e_del
Nov 25 16:47:50 compute-0 nova_compute[254092]: 2025-11-25 16:47:50.401 254096 INFO nova.virt.libvirt.driver [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Deletion of /var/lib/nova/instances/6affd696-c15d-4401-8512-2aabbf55fd4e_del complete
Nov 25 16:47:50 compute-0 nova_compute[254092]: 2025-11-25 16:47:50.463 254096 INFO nova.compute.manager [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Took 0.64 seconds to destroy the instance on the hypervisor.
Nov 25 16:47:50 compute-0 nova_compute[254092]: 2025-11-25 16:47:50.463 254096 DEBUG oslo.service.loopingcall [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:47:50 compute-0 nova_compute[254092]: 2025-11-25 16:47:50.464 254096 DEBUG nova.compute.manager [-] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:47:50 compute-0 nova_compute[254092]: 2025-11-25 16:47:50.464 254096 DEBUG nova.network.neutron [-] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:47:50 compute-0 ceph-mon[74985]: pgmap v1873: 321 pgs: 321 active+clean; 405 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 6.4 MiB/s wr, 292 op/s
Nov 25 16:47:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:47:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:47:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:47:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:47:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:47:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003376475813147099 of space, bias 1.0, pg target 1.0129427439441296 quantized to 32 (current 32)
Nov 25 16:47:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:47:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:47:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:47:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:47:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:47:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.1992057139048968 quantized to 32 (current 32)
Nov 25 16:47:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:47:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Nov 25 16:47:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:47:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:47:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:47:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Nov 25 16:47:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:47:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Nov 25 16:47:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:47:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:47:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:47:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Nov 25 16:47:51 compute-0 nova_compute[254092]: 2025-11-25 16:47:51.393 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089256.392674, 1ccf7cd6-cf8d-400e-820e-940108160fa8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:47:51 compute-0 nova_compute[254092]: 2025-11-25 16:47:51.393 254096 INFO nova.compute.manager [-] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] VM Stopped (Lifecycle Event)
Nov 25 16:47:51 compute-0 nova_compute[254092]: 2025-11-25 16:47:51.420 254096 DEBUG nova.compute.manager [None req-2a6b9779-a31a-4c9b-a6d4-e4f44e66c97d - - - - - -] [instance: 1ccf7cd6-cf8d-400e-820e-940108160fa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:47:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1874: 321 pgs: 321 active+clean; 330 MiB data, 820 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 6.4 MiB/s wr, 322 op/s
Nov 25 16:47:51 compute-0 nova_compute[254092]: 2025-11-25 16:47:51.808 254096 DEBUG nova.network.neutron [-] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:47:51 compute-0 nova_compute[254092]: 2025-11-25 16:47:51.821 254096 INFO nova.compute.manager [-] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Took 1.36 seconds to deallocate network for instance.
Nov 25 16:47:51 compute-0 nova_compute[254092]: 2025-11-25 16:47:51.876 254096 DEBUG oslo_concurrency.lockutils [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:51 compute-0 nova_compute[254092]: 2025-11-25 16:47:51.876 254096 DEBUG oslo_concurrency.lockutils [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:51 compute-0 nova_compute[254092]: 2025-11-25 16:47:51.927 254096 DEBUG nova.compute.manager [req-c3788c05-5ef7-445a-9174-ba3376190625 req-a65030dc-3b73-42b1-8c75-2820a77ff0a3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Received event network-vif-deleted-be2a1b3b-f8a0-4a67-9582-54b753171490 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:47:51 compute-0 nova_compute[254092]: 2025-11-25 16:47:51.978 254096 DEBUG oslo_concurrency.processutils [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:52 compute-0 nova_compute[254092]: 2025-11-25 16:47:52.220 254096 DEBUG nova.compute.manager [req-bc3dbf61-80b8-4f9d-9770-80c164b3ac49 req-4dacf5fe-d504-4b95-bbbb-8a2142484efb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Received event network-vif-plugged-be2a1b3b-f8a0-4a67-9582-54b753171490 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:47:52 compute-0 nova_compute[254092]: 2025-11-25 16:47:52.220 254096 DEBUG oslo_concurrency.lockutils [req-bc3dbf61-80b8-4f9d-9770-80c164b3ac49 req-4dacf5fe-d504-4b95-bbbb-8a2142484efb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6affd696-c15d-4401-8512-2aabbf55fd4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:52 compute-0 nova_compute[254092]: 2025-11-25 16:47:52.220 254096 DEBUG oslo_concurrency.lockutils [req-bc3dbf61-80b8-4f9d-9770-80c164b3ac49 req-4dacf5fe-d504-4b95-bbbb-8a2142484efb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6affd696-c15d-4401-8512-2aabbf55fd4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:52 compute-0 nova_compute[254092]: 2025-11-25 16:47:52.220 254096 DEBUG oslo_concurrency.lockutils [req-bc3dbf61-80b8-4f9d-9770-80c164b3ac49 req-4dacf5fe-d504-4b95-bbbb-8a2142484efb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6affd696-c15d-4401-8512-2aabbf55fd4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:52 compute-0 nova_compute[254092]: 2025-11-25 16:47:52.221 254096 DEBUG nova.compute.manager [req-bc3dbf61-80b8-4f9d-9770-80c164b3ac49 req-4dacf5fe-d504-4b95-bbbb-8a2142484efb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] No waiting events found dispatching network-vif-plugged-be2a1b3b-f8a0-4a67-9582-54b753171490 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:47:52 compute-0 nova_compute[254092]: 2025-11-25 16:47:52.221 254096 WARNING nova.compute.manager [req-bc3dbf61-80b8-4f9d-9770-80c164b3ac49 req-4dacf5fe-d504-4b95-bbbb-8a2142484efb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Received unexpected event network-vif-plugged-be2a1b3b-f8a0-4a67-9582-54b753171490 for instance with vm_state deleted and task_state None.
Nov 25 16:47:52 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:47:52 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3301542853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:47:52 compute-0 nova_compute[254092]: 2025-11-25 16:47:52.418 254096 DEBUG oslo_concurrency.processutils [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:52 compute-0 nova_compute[254092]: 2025-11-25 16:47:52.424 254096 DEBUG nova.compute.provider_tree [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:47:52 compute-0 nova_compute[254092]: 2025-11-25 16:47:52.445 254096 DEBUG nova.scheduler.client.report [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:47:52 compute-0 nova_compute[254092]: 2025-11-25 16:47:52.464 254096 DEBUG oslo_concurrency.lockutils [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:52 compute-0 nova_compute[254092]: 2025-11-25 16:47:52.493 254096 INFO nova.compute.manager [None req-8a9fe63d-526c-4d66-9358-b46f7e2afee0 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Get console output
Nov 25 16:47:52 compute-0 nova_compute[254092]: 2025-11-25 16:47:52.495 254096 INFO nova.scheduler.client.report [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Deleted allocations for instance 6affd696-c15d-4401-8512-2aabbf55fd4e
Nov 25 16:47:52 compute-0 nova_compute[254092]: 2025-11-25 16:47:52.500 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 16:47:52 compute-0 nova_compute[254092]: 2025-11-25 16:47:52.561 254096 DEBUG oslo_concurrency.lockutils [None req-a9aec74b-abc3-41c6-af9f-0ea979ba0204 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "6affd696-c15d-4401-8512-2aabbf55fd4e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:52 compute-0 nova_compute[254092]: 2025-11-25 16:47:52.627 254096 INFO nova.compute.manager [None req-1d3143a2-bed4-4d71-8b03-5cc17e9af370 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Unpausing
Nov 25 16:47:52 compute-0 nova_compute[254092]: 2025-11-25 16:47:52.630 254096 DEBUG nova.objects.instance [None req-1d3143a2-bed4-4d71-8b03-5cc17e9af370 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'flavor' on Instance uuid 2122cb4e-4525-451f-a46f-184e4a72cb34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:47:52 compute-0 nova_compute[254092]: 2025-11-25 16:47:52.652 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089272.651494, 2122cb4e-4525-451f-a46f-184e4a72cb34 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:47:52 compute-0 nova_compute[254092]: 2025-11-25 16:47:52.652 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] VM Resumed (Lifecycle Event)
Nov 25 16:47:52 compute-0 virtqemud[253880]: argument unsupported: QEMU guest agent is not configured
Nov 25 16:47:52 compute-0 nova_compute[254092]: 2025-11-25 16:47:52.657 254096 DEBUG nova.virt.libvirt.guest [None req-1d3143a2-bed4-4d71-8b03-5cc17e9af370 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Nov 25 16:47:52 compute-0 nova_compute[254092]: 2025-11-25 16:47:52.657 254096 DEBUG nova.compute.manager [None req-1d3143a2-bed4-4d71-8b03-5cc17e9af370 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:47:52 compute-0 nova_compute[254092]: 2025-11-25 16:47:52.680 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:47:52 compute-0 nova_compute[254092]: 2025-11-25 16:47:52.683 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:47:52 compute-0 nova_compute[254092]: 2025-11-25 16:47:52.715 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] During sync_power_state the instance has a pending task (unpausing). Skip.
Nov 25 16:47:52 compute-0 ceph-mon[74985]: pgmap v1874: 321 pgs: 321 active+clean; 330 MiB data, 820 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 6.4 MiB/s wr, 322 op/s
Nov 25 16:47:52 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3301542853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:47:53 compute-0 nova_compute[254092]: 2025-11-25 16:47:53.566 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:53 compute-0 nova_compute[254092]: 2025-11-25 16:47:53.634 254096 DEBUG nova.compute.manager [None req-35bac9f9-52a6-4367-8121-a5a0bae53863 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:47:53 compute-0 nova_compute[254092]: 2025-11-25 16:47:53.664 254096 INFO nova.compute.manager [None req-35bac9f9-52a6-4367-8121-a5a0bae53863 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] instance snapshotting
Nov 25 16:47:53 compute-0 nova_compute[254092]: 2025-11-25 16:47:53.665 254096 DEBUG nova.objects.instance [None req-35bac9f9-52a6-4367-8121-a5a0bae53863 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lazy-loading 'flavor' on Instance uuid 73301044-3bad-4401-9e30-f009d417f662 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:47:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1875: 321 pgs: 321 active+clean; 330 MiB data, 820 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.4 MiB/s wr, 251 op/s
Nov 25 16:47:53 compute-0 nova_compute[254092]: 2025-11-25 16:47:53.853 254096 INFO nova.virt.libvirt.driver [None req-35bac9f9-52a6-4367-8121-a5a0bae53863 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Beginning live snapshot process
Nov 25 16:47:53 compute-0 nova_compute[254092]: 2025-11-25 16:47:53.975 254096 DEBUG nova.virt.libvirt.imagebackend [None req-35bac9f9-52a6-4367-8121-a5a0bae53863 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] No parent info for 8b512c8e-2281-41de-a668-eb983e174ba0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 25 16:47:54 compute-0 nova_compute[254092]: 2025-11-25 16:47:54.189 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:54 compute-0 nova_compute[254092]: 2025-11-25 16:47:54.249 254096 DEBUG nova.storage.rbd_utils [None req-35bac9f9-52a6-4367-8121-a5a0bae53863 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] creating snapshot(38b8841a0f7c4e55aa84127e6a3fd187) on rbd image(73301044-3bad-4401-9e30-f009d417f662_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 16:47:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e228 do_prune osdmap full prune enabled
Nov 25 16:47:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e229 e229: 3 total, 3 up, 3 in
Nov 25 16:47:54 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e229: 3 total, 3 up, 3 in
Nov 25 16:47:54 compute-0 ceph-mon[74985]: pgmap v1875: 321 pgs: 321 active+clean; 330 MiB data, 820 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.4 MiB/s wr, 251 op/s
Nov 25 16:47:54 compute-0 nova_compute[254092]: 2025-11-25 16:47:54.909 254096 DEBUG nova.storage.rbd_utils [None req-35bac9f9-52a6-4367-8121-a5a0bae53863 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] cloning vms/73301044-3bad-4401-9e30-f009d417f662_disk@38b8841a0f7c4e55aa84127e6a3fd187 to images/658944a7-ebd4-4546-999c-02701f55081a clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 16:47:55 compute-0 nova_compute[254092]: 2025-11-25 16:47:55.001 254096 DEBUG nova.storage.rbd_utils [None req-35bac9f9-52a6-4367-8121-a5a0bae53863 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] flattening images/658944a7-ebd4-4546-999c-02701f55081a flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 16:47:55 compute-0 nova_compute[254092]: 2025-11-25 16:47:55.037 254096 INFO nova.compute.manager [None req-d8429df0-7aee-483f-81c6-d4c96fcf2272 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Get console output
Nov 25 16:47:55 compute-0 nova_compute[254092]: 2025-11-25 16:47:55.041 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 16:47:55 compute-0 nova_compute[254092]: 2025-11-25 16:47:55.074 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 16:47:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2743653227' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:47:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 16:47:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2743653227' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:47:55 compute-0 nova_compute[254092]: 2025-11-25 16:47:55.501 254096 DEBUG nova.storage.rbd_utils [None req-35bac9f9-52a6-4367-8121-a5a0bae53863 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] removing snapshot(38b8841a0f7c4e55aa84127e6a3fd187) on rbd image(73301044-3bad-4401-9e30-f009d417f662_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 16:47:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1877: 321 pgs: 321 active+clean; 283 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 136 op/s
Nov 25 16:47:55 compute-0 nova_compute[254092]: 2025-11-25 16:47:55.836 254096 DEBUG oslo_concurrency.lockutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "7c679c82-4594-4519-a291-de41650ba66b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:55 compute-0 nova_compute[254092]: 2025-11-25 16:47:55.837 254096 DEBUG oslo_concurrency.lockutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "7c679c82-4594-4519-a291-de41650ba66b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:55 compute-0 nova_compute[254092]: 2025-11-25 16:47:55.859 254096 DEBUG nova.compute.manager [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:47:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e229 do_prune osdmap full prune enabled
Nov 25 16:47:55 compute-0 ceph-mon[74985]: osdmap e229: 3 total, 3 up, 3 in
Nov 25 16:47:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/2743653227' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:47:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/2743653227' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:47:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e230 e230: 3 total, 3 up, 3 in
Nov 25 16:47:55 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e230: 3 total, 3 up, 3 in
Nov 25 16:47:55 compute-0 nova_compute[254092]: 2025-11-25 16:47:55.915 254096 DEBUG nova.storage.rbd_utils [None req-35bac9f9-52a6-4367-8121-a5a0bae53863 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] creating snapshot(snap) on rbd image(658944a7-ebd4-4546-999c-02701f55081a) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 16:47:55 compute-0 nova_compute[254092]: 2025-11-25 16:47:55.971 254096 DEBUG oslo_concurrency.lockutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:55 compute-0 nova_compute[254092]: 2025-11-25 16:47:55.972 254096 DEBUG oslo_concurrency.lockutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:55 compute-0 nova_compute[254092]: 2025-11-25 16:47:55.981 254096 DEBUG nova.virt.hardware [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:47:55 compute-0 nova_compute[254092]: 2025-11-25 16:47:55.982 254096 INFO nova.compute.claims [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:47:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:47:56 compute-0 nova_compute[254092]: 2025-11-25 16:47:56.174 254096 DEBUG nova.compute.manager [req-bfaae8fe-bf1b-44f2-8ad1-76a046648dde req-3275c282-b27c-4501-8a17-f28a73759403 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Received event network-changed-54bd7c02-9f22-4656-9514-7219e656dbef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:47:56 compute-0 nova_compute[254092]: 2025-11-25 16:47:56.175 254096 DEBUG nova.compute.manager [req-bfaae8fe-bf1b-44f2-8ad1-76a046648dde req-3275c282-b27c-4501-8a17-f28a73759403 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Refreshing instance network info cache due to event network-changed-54bd7c02-9f22-4656-9514-7219e656dbef. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:47:56 compute-0 nova_compute[254092]: 2025-11-25 16:47:56.175 254096 DEBUG oslo_concurrency.lockutils [req-bfaae8fe-bf1b-44f2-8ad1-76a046648dde req-3275c282-b27c-4501-8a17-f28a73759403 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-2122cb4e-4525-451f-a46f-184e4a72cb34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:47:56 compute-0 nova_compute[254092]: 2025-11-25 16:47:56.176 254096 DEBUG oslo_concurrency.lockutils [req-bfaae8fe-bf1b-44f2-8ad1-76a046648dde req-3275c282-b27c-4501-8a17-f28a73759403 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-2122cb4e-4525-451f-a46f-184e4a72cb34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:47:56 compute-0 nova_compute[254092]: 2025-11-25 16:47:56.176 254096 DEBUG nova.network.neutron [req-bfaae8fe-bf1b-44f2-8ad1-76a046648dde req-3275c282-b27c-4501-8a17-f28a73759403 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Refreshing network info cache for port 54bd7c02-9f22-4656-9514-7219e656dbef _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:47:56 compute-0 nova_compute[254092]: 2025-11-25 16:47:56.185 254096 DEBUG oslo_concurrency.processutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:56 compute-0 nova_compute[254092]: 2025-11-25 16:47:56.279 254096 DEBUG oslo_concurrency.lockutils [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "2122cb4e-4525-451f-a46f-184e4a72cb34" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:56 compute-0 nova_compute[254092]: 2025-11-25 16:47:56.280 254096 DEBUG oslo_concurrency.lockutils [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "2122cb4e-4525-451f-a46f-184e4a72cb34" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:56 compute-0 nova_compute[254092]: 2025-11-25 16:47:56.280 254096 DEBUG oslo_concurrency.lockutils [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "2122cb4e-4525-451f-a46f-184e4a72cb34-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:56 compute-0 nova_compute[254092]: 2025-11-25 16:47:56.281 254096 DEBUG oslo_concurrency.lockutils [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "2122cb4e-4525-451f-a46f-184e4a72cb34-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:56 compute-0 nova_compute[254092]: 2025-11-25 16:47:56.281 254096 DEBUG oslo_concurrency.lockutils [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "2122cb4e-4525-451f-a46f-184e4a72cb34-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:56 compute-0 nova_compute[254092]: 2025-11-25 16:47:56.283 254096 INFO nova.compute.manager [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Terminating instance
Nov 25 16:47:56 compute-0 nova_compute[254092]: 2025-11-25 16:47:56.284 254096 DEBUG nova.compute.manager [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:47:56 compute-0 kernel: tap54bd7c02-9f (unregistering): left promiscuous mode
Nov 25 16:47:56 compute-0 NetworkManager[48891]: <info>  [1764089276.3435] device (tap54bd7c02-9f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:47:56 compute-0 ovn_controller[153477]: 2025-11-25T16:47:56Z|00888|binding|INFO|Releasing lport 54bd7c02-9f22-4656-9514-7219e656dbef from this chassis (sb_readonly=0)
Nov 25 16:47:56 compute-0 ovn_controller[153477]: 2025-11-25T16:47:56Z|00889|binding|INFO|Setting lport 54bd7c02-9f22-4656-9514-7219e656dbef down in Southbound
Nov 25 16:47:56 compute-0 ovn_controller[153477]: 2025-11-25T16:47:56Z|00890|binding|INFO|Removing iface tap54bd7c02-9f ovn-installed in OVS
Nov 25 16:47:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:56.366 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:5b:d2 10.100.0.4'], port_security=['fa:16:3e:42:5b:d2 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '2122cb4e-4525-451f-a46f-184e4a72cb34', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-be38e015-3930-495b-9582-fe9707042e20', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e8e904eb-f3d0-4bff-8be5-5af69a444c2e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f101b358-01b6-416d-bcc6-f10ed8ec5155, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=54bd7c02-9f22-4656-9514-7219e656dbef) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:47:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:56.367 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 54bd7c02-9f22-4656-9514-7219e656dbef in datapath be38e015-3930-495b-9582-fe9707042e20 unbound from our chassis
Nov 25 16:47:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:56.368 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network be38e015-3930-495b-9582-fe9707042e20, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:47:56 compute-0 nova_compute[254092]: 2025-11-25 16:47:56.360 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:56.369 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d9976314-d5e9-4252-8a90-7a99e978c8d6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:56.370 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-be38e015-3930-495b-9582-fe9707042e20 namespace which is not needed anymore
Nov 25 16:47:56 compute-0 nova_compute[254092]: 2025-11-25 16:47:56.384 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:56 compute-0 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d0000005a.scope: Deactivated successfully.
Nov 25 16:47:56 compute-0 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d0000005a.scope: Consumed 13.252s CPU time.
Nov 25 16:47:56 compute-0 systemd-machined[216343]: Machine qemu-111-instance-0000005a terminated.
Nov 25 16:47:56 compute-0 nova_compute[254092]: 2025-11-25 16:47:56.506 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:56 compute-0 nova_compute[254092]: 2025-11-25 16:47:56.511 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:56 compute-0 nova_compute[254092]: 2025-11-25 16:47:56.520 254096 INFO nova.virt.libvirt.driver [-] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Instance destroyed successfully.
Nov 25 16:47:56 compute-0 nova_compute[254092]: 2025-11-25 16:47:56.520 254096 DEBUG nova.objects.instance [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'resources' on Instance uuid 2122cb4e-4525-451f-a46f-184e4a72cb34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:47:56 compute-0 nova_compute[254092]: 2025-11-25 16:47:56.533 254096 DEBUG nova.virt.libvirt.vif [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:47:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1399723727',display_name='tempest-TestNetworkAdvancedServerOps-server-1399723727',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1399723727',id=90,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOnJ2dj3tQavSsgt3v0xzD62McsGR8kH7FVN3Mskcpal4JOU2s80ZUbXF/gFef079w4ZACdh3Ov4E4/XDFuKoso7mgUy6/r/VedNuEZjiR2unDQEIrd20/t0Y7CqF7ga+A==',key_name='tempest-TestNetworkAdvancedServerOps-714044061',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:47:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f92d76b523b84ec9b645382bdd3192c9',ramdisk_id='',reservation_id='r-yk420f19',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1110744031',owner_user_name='tempest-TestNetworkAdvancedServerOps-1110744031-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:47:52Z,user_data=None,user_id='3b30410358c448a693a9dc40ee6aafb0',uuid=2122cb4e-4525-451f-a46f-184e4a72cb34,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "54bd7c02-9f22-4656-9514-7219e656dbef", "address": "fa:16:3e:42:5b:d2", "network": {"id": "be38e015-3930-495b-9582-fe9707042e20", "bridge": "br-int", "label": "tempest-network-smoke--1777742074", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bd7c02-9f", "ovs_interfaceid": "54bd7c02-9f22-4656-9514-7219e656dbef", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:47:56 compute-0 nova_compute[254092]: 2025-11-25 16:47:56.534 254096 DEBUG nova.network.os_vif_util [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converting VIF {"id": "54bd7c02-9f22-4656-9514-7219e656dbef", "address": "fa:16:3e:42:5b:d2", "network": {"id": "be38e015-3930-495b-9582-fe9707042e20", "bridge": "br-int", "label": "tempest-network-smoke--1777742074", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bd7c02-9f", "ovs_interfaceid": "54bd7c02-9f22-4656-9514-7219e656dbef", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:47:56 compute-0 nova_compute[254092]: 2025-11-25 16:47:56.534 254096 DEBUG nova.network.os_vif_util [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:42:5b:d2,bridge_name='br-int',has_traffic_filtering=True,id=54bd7c02-9f22-4656-9514-7219e656dbef,network=Network(be38e015-3930-495b-9582-fe9707042e20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54bd7c02-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:47:56 compute-0 nova_compute[254092]: 2025-11-25 16:47:56.535 254096 DEBUG os_vif [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:42:5b:d2,bridge_name='br-int',has_traffic_filtering=True,id=54bd7c02-9f22-4656-9514-7219e656dbef,network=Network(be38e015-3930-495b-9582-fe9707042e20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54bd7c02-9f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:47:56 compute-0 nova_compute[254092]: 2025-11-25 16:47:56.536 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:56 compute-0 nova_compute[254092]: 2025-11-25 16:47:56.537 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap54bd7c02-9f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:56 compute-0 nova_compute[254092]: 2025-11-25 16:47:56.538 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:56 compute-0 nova_compute[254092]: 2025-11-25 16:47:56.541 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:47:56 compute-0 nova_compute[254092]: 2025-11-25 16:47:56.543 254096 INFO os_vif [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:42:5b:d2,bridge_name='br-int',has_traffic_filtering=True,id=54bd7c02-9f22-4656-9514-7219e656dbef,network=Network(be38e015-3930-495b-9582-fe9707042e20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54bd7c02-9f')
Nov 25 16:47:56 compute-0 neutron-haproxy-ovnmeta-be38e015-3930-495b-9582-fe9707042e20[346664]: [NOTICE]   (346681) : haproxy version is 2.8.14-c23fe91
Nov 25 16:47:56 compute-0 neutron-haproxy-ovnmeta-be38e015-3930-495b-9582-fe9707042e20[346664]: [NOTICE]   (346681) : path to executable is /usr/sbin/haproxy
Nov 25 16:47:56 compute-0 neutron-haproxy-ovnmeta-be38e015-3930-495b-9582-fe9707042e20[346664]: [ALERT]    (346681) : Current worker (346697) exited with code 143 (Terminated)
Nov 25 16:47:56 compute-0 neutron-haproxy-ovnmeta-be38e015-3930-495b-9582-fe9707042e20[346664]: [WARNING]  (346681) : All workers exited. Exiting... (0)
Nov 25 16:47:56 compute-0 systemd[1]: libpod-92c1615b2d8271bd8bf16bfb0e92d9db99affd2dba8a75ee8cb03900200ad2b7.scope: Deactivated successfully.
Nov 25 16:47:56 compute-0 podman[347997]: 2025-11-25 16:47:56.556104259 +0000 UTC m=+0.070745533 container died 92c1615b2d8271bd8bf16bfb0e92d9db99affd2dba8a75ee8cb03900200ad2b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-be38e015-3930-495b-9582-fe9707042e20, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 16:47:56 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-92c1615b2d8271bd8bf16bfb0e92d9db99affd2dba8a75ee8cb03900200ad2b7-userdata-shm.mount: Deactivated successfully.
Nov 25 16:47:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-838921d87160c6df4cb470730be2019fa42a93dba0b126d7b101a3d59de1d523-merged.mount: Deactivated successfully.
Nov 25 16:47:56 compute-0 podman[347997]: 2025-11-25 16:47:56.627456329 +0000 UTC m=+0.142097573 container cleanup 92c1615b2d8271bd8bf16bfb0e92d9db99affd2dba8a75ee8cb03900200ad2b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-be38e015-3930-495b-9582-fe9707042e20, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 16:47:56 compute-0 systemd[1]: libpod-conmon-92c1615b2d8271bd8bf16bfb0e92d9db99affd2dba8a75ee8cb03900200ad2b7.scope: Deactivated successfully.
Nov 25 16:47:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:47:56 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2038232724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:47:56 compute-0 nova_compute[254092]: 2025-11-25 16:47:56.714 254096 DEBUG oslo_concurrency.processutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:56 compute-0 nova_compute[254092]: 2025-11-25 16:47:56.720 254096 DEBUG nova.compute.provider_tree [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:47:56 compute-0 nova_compute[254092]: 2025-11-25 16:47:56.732 254096 DEBUG nova.scheduler.client.report [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:47:56 compute-0 podman[348055]: 2025-11-25 16:47:56.745283431 +0000 UTC m=+0.092099934 container remove 92c1615b2d8271bd8bf16bfb0e92d9db99affd2dba8a75ee8cb03900200ad2b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-be38e015-3930-495b-9582-fe9707042e20, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 16:47:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:56.752 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f73a904d-7840-4691-b417-dcf501ce6872]: (4, ('Tue Nov 25 04:47:56 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-be38e015-3930-495b-9582-fe9707042e20 (92c1615b2d8271bd8bf16bfb0e92d9db99affd2dba8a75ee8cb03900200ad2b7)\n92c1615b2d8271bd8bf16bfb0e92d9db99affd2dba8a75ee8cb03900200ad2b7\nTue Nov 25 04:47:56 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-be38e015-3930-495b-9582-fe9707042e20 (92c1615b2d8271bd8bf16bfb0e92d9db99affd2dba8a75ee8cb03900200ad2b7)\n92c1615b2d8271bd8bf16bfb0e92d9db99affd2dba8a75ee8cb03900200ad2b7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:56.754 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a3158259-3f07-4dd0-8483-459d797cb5e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:56.755 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbe38e015-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:47:56 compute-0 nova_compute[254092]: 2025-11-25 16:47:56.757 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:56 compute-0 kernel: tapbe38e015-30: left promiscuous mode
Nov 25 16:47:56 compute-0 nova_compute[254092]: 2025-11-25 16:47:56.762 254096 DEBUG oslo_concurrency.lockutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:56 compute-0 nova_compute[254092]: 2025-11-25 16:47:56.763 254096 DEBUG nova.compute.manager [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:47:56 compute-0 nova_compute[254092]: 2025-11-25 16:47:56.781 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:56.784 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e8fed1e3-2782-49dd-a8b4-1cbb925134fc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:56.798 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[79dadec0-3305-4370-b3f9-f091e09f4d9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:56.801 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[513976f3-ed37-418d-a18f-921227326563]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:56 compute-0 nova_compute[254092]: 2025-11-25 16:47:56.820 254096 DEBUG nova.compute.manager [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:47:56 compute-0 nova_compute[254092]: 2025-11-25 16:47:56.820 254096 DEBUG nova.network.neutron [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:47:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:56.825 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a8b4799a-7b36-4fd5-a10b-3c93c93e4195]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570385, 'reachable_time': 15437, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348072, 'error': None, 'target': 'ovnmeta-be38e015-3930-495b-9582-fe9707042e20', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:56 compute-0 systemd[1]: run-netns-ovnmeta\x2dbe38e015\x2d3930\x2d495b\x2d9582\x2dfe9707042e20.mount: Deactivated successfully.
Nov 25 16:47:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:56.830 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-be38e015-3930-495b-9582-fe9707042e20 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:47:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:47:56.831 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[9549fae0-775d-4030-9d87-060699653f30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:47:56 compute-0 nova_compute[254092]: 2025-11-25 16:47:56.842 254096 INFO nova.virt.libvirt.driver [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:47:56 compute-0 nova_compute[254092]: 2025-11-25 16:47:56.861 254096 DEBUG nova.compute.manager [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:47:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e230 do_prune osdmap full prune enabled
Nov 25 16:47:56 compute-0 ceph-mon[74985]: pgmap v1877: 321 pgs: 321 active+clean; 283 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 136 op/s
Nov 25 16:47:56 compute-0 ceph-mon[74985]: osdmap e230: 3 total, 3 up, 3 in
Nov 25 16:47:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2038232724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:47:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e231 e231: 3 total, 3 up, 3 in
Nov 25 16:47:56 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e231: 3 total, 3 up, 3 in
Nov 25 16:47:56 compute-0 nova_compute[254092]: 2025-11-25 16:47:56.933 254096 DEBUG nova.compute.manager [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:47:56 compute-0 nova_compute[254092]: 2025-11-25 16:47:56.934 254096 DEBUG nova.virt.libvirt.driver [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:47:56 compute-0 nova_compute[254092]: 2025-11-25 16:47:56.935 254096 INFO nova.virt.libvirt.driver [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Creating image(s)
Nov 25 16:47:56 compute-0 nova_compute[254092]: 2025-11-25 16:47:56.958 254096 DEBUG nova.storage.rbd_utils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 7c679c82-4594-4519-a291-de41650ba66b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:47:56 compute-0 nova_compute[254092]: 2025-11-25 16:47:56.981 254096 DEBUG nova.storage.rbd_utils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 7c679c82-4594-4519-a291-de41650ba66b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:47:57 compute-0 nova_compute[254092]: 2025-11-25 16:47:57.002 254096 DEBUG nova.storage.rbd_utils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 7c679c82-4594-4519-a291-de41650ba66b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:47:57 compute-0 nova_compute[254092]: 2025-11-25 16:47:57.007 254096 DEBUG oslo_concurrency.processutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:57 compute-0 nova_compute[254092]: 2025-11-25 16:47:57.081 254096 DEBUG oslo_concurrency.processutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:57 compute-0 nova_compute[254092]: 2025-11-25 16:47:57.082 254096 DEBUG oslo_concurrency.lockutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:57 compute-0 nova_compute[254092]: 2025-11-25 16:47:57.083 254096 DEBUG oslo_concurrency.lockutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:57 compute-0 nova_compute[254092]: 2025-11-25 16:47:57.083 254096 DEBUG oslo_concurrency.lockutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:57 compute-0 nova_compute[254092]: 2025-11-25 16:47:57.102 254096 DEBUG nova.storage.rbd_utils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 7c679c82-4594-4519-a291-de41650ba66b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:47:57 compute-0 nova_compute[254092]: 2025-11-25 16:47:57.106 254096 DEBUG oslo_concurrency.processutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 7c679c82-4594-4519-a291-de41650ba66b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:57 compute-0 nova_compute[254092]: 2025-11-25 16:47:57.146 254096 DEBUG nova.policy [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7a137feb008c49e092cbb94106526835', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '56790f54314e4087abb5da8030b17666', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:47:57 compute-0 nova_compute[254092]: 2025-11-25 16:47:57.193 254096 INFO nova.virt.libvirt.driver [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Deleting instance files /var/lib/nova/instances/2122cb4e-4525-451f-a46f-184e4a72cb34_del
Nov 25 16:47:57 compute-0 nova_compute[254092]: 2025-11-25 16:47:57.193 254096 INFO nova.virt.libvirt.driver [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Deletion of /var/lib/nova/instances/2122cb4e-4525-451f-a46f-184e4a72cb34_del complete
Nov 25 16:47:57 compute-0 nova_compute[254092]: 2025-11-25 16:47:57.249 254096 INFO nova.compute.manager [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Took 0.96 seconds to destroy the instance on the hypervisor.
Nov 25 16:47:57 compute-0 nova_compute[254092]: 2025-11-25 16:47:57.249 254096 DEBUG oslo.service.loopingcall [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:47:57 compute-0 nova_compute[254092]: 2025-11-25 16:47:57.250 254096 DEBUG nova.compute.manager [-] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:47:57 compute-0 nova_compute[254092]: 2025-11-25 16:47:57.250 254096 DEBUG nova.network.neutron [-] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:47:57 compute-0 nova_compute[254092]: 2025-11-25 16:47:57.347 254096 DEBUG nova.compute.manager [req-25a3772f-c4d0-488c-9a9b-12688dadf763 req-3a2c7517-1441-4abc-a051-a9b58b783c8f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Received event network-vif-unplugged-54bd7c02-9f22-4656-9514-7219e656dbef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:47:57 compute-0 nova_compute[254092]: 2025-11-25 16:47:57.347 254096 DEBUG oslo_concurrency.lockutils [req-25a3772f-c4d0-488c-9a9b-12688dadf763 req-3a2c7517-1441-4abc-a051-a9b58b783c8f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2122cb4e-4525-451f-a46f-184e4a72cb34-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:57 compute-0 nova_compute[254092]: 2025-11-25 16:47:57.347 254096 DEBUG oslo_concurrency.lockutils [req-25a3772f-c4d0-488c-9a9b-12688dadf763 req-3a2c7517-1441-4abc-a051-a9b58b783c8f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2122cb4e-4525-451f-a46f-184e4a72cb34-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:57 compute-0 nova_compute[254092]: 2025-11-25 16:47:57.347 254096 DEBUG oslo_concurrency.lockutils [req-25a3772f-c4d0-488c-9a9b-12688dadf763 req-3a2c7517-1441-4abc-a051-a9b58b783c8f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2122cb4e-4525-451f-a46f-184e4a72cb34-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:57 compute-0 nova_compute[254092]: 2025-11-25 16:47:57.348 254096 DEBUG nova.compute.manager [req-25a3772f-c4d0-488c-9a9b-12688dadf763 req-3a2c7517-1441-4abc-a051-a9b58b783c8f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] No waiting events found dispatching network-vif-unplugged-54bd7c02-9f22-4656-9514-7219e656dbef pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:47:57 compute-0 nova_compute[254092]: 2025-11-25 16:47:57.348 254096 DEBUG nova.compute.manager [req-25a3772f-c4d0-488c-9a9b-12688dadf763 req-3a2c7517-1441-4abc-a051-a9b58b783c8f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Received event network-vif-unplugged-54bd7c02-9f22-4656-9514-7219e656dbef for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:47:57 compute-0 nova_compute[254092]: 2025-11-25 16:47:57.417 254096 DEBUG oslo_concurrency.processutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 7c679c82-4594-4519-a291-de41650ba66b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.311s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:57 compute-0 nova_compute[254092]: 2025-11-25 16:47:57.472 254096 DEBUG nova.storage.rbd_utils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] resizing rbd image 7c679c82-4594-4519-a291-de41650ba66b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:47:57 compute-0 nova_compute[254092]: 2025-11-25 16:47:57.554 254096 DEBUG nova.objects.instance [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lazy-loading 'migration_context' on Instance uuid 7c679c82-4594-4519-a291-de41650ba66b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:47:57 compute-0 nova_compute[254092]: 2025-11-25 16:47:57.569 254096 DEBUG nova.virt.libvirt.driver [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:47:57 compute-0 nova_compute[254092]: 2025-11-25 16:47:57.570 254096 DEBUG nova.virt.libvirt.driver [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Ensure instance console log exists: /var/lib/nova/instances/7c679c82-4594-4519-a291-de41650ba66b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:47:57 compute-0 nova_compute[254092]: 2025-11-25 16:47:57.570 254096 DEBUG oslo_concurrency.lockutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:57 compute-0 nova_compute[254092]: 2025-11-25 16:47:57.570 254096 DEBUG oslo_concurrency.lockutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:57 compute-0 nova_compute[254092]: 2025-11-25 16:47:57.570 254096 DEBUG oslo_concurrency.lockutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1880: 321 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 313 active+clean; 323 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 4.5 MiB/s wr, 132 op/s
Nov 25 16:47:57 compute-0 ceph-mon[74985]: osdmap e231: 3 total, 3 up, 3 in
Nov 25 16:47:58 compute-0 nova_compute[254092]: 2025-11-25 16:47:58.062 254096 DEBUG nova.network.neutron [req-bfaae8fe-bf1b-44f2-8ad1-76a046648dde req-3275c282-b27c-4501-8a17-f28a73759403 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Updated VIF entry in instance network info cache for port 54bd7c02-9f22-4656-9514-7219e656dbef. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:47:58 compute-0 nova_compute[254092]: 2025-11-25 16:47:58.063 254096 DEBUG nova.network.neutron [req-bfaae8fe-bf1b-44f2-8ad1-76a046648dde req-3275c282-b27c-4501-8a17-f28a73759403 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Updating instance_info_cache with network_info: [{"id": "54bd7c02-9f22-4656-9514-7219e656dbef", "address": "fa:16:3e:42:5b:d2", "network": {"id": "be38e015-3930-495b-9582-fe9707042e20", "bridge": "br-int", "label": "tempest-network-smoke--1777742074", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bd7c02-9f", "ovs_interfaceid": "54bd7c02-9f22-4656-9514-7219e656dbef", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:47:58 compute-0 nova_compute[254092]: 2025-11-25 16:47:58.079 254096 DEBUG oslo_concurrency.lockutils [req-bfaae8fe-bf1b-44f2-8ad1-76a046648dde req-3275c282-b27c-4501-8a17-f28a73759403 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-2122cb4e-4525-451f-a46f-184e4a72cb34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:47:58 compute-0 nova_compute[254092]: 2025-11-25 16:47:58.367 254096 INFO nova.virt.libvirt.driver [None req-35bac9f9-52a6-4367-8121-a5a0bae53863 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Snapshot image upload complete
Nov 25 16:47:58 compute-0 nova_compute[254092]: 2025-11-25 16:47:58.368 254096 INFO nova.compute.manager [None req-35bac9f9-52a6-4367-8121-a5a0bae53863 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Took 4.69 seconds to snapshot the instance on the hypervisor.
Nov 25 16:47:58 compute-0 nova_compute[254092]: 2025-11-25 16:47:58.532 254096 DEBUG nova.network.neutron [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Successfully created port: 59b2ac13-fc64-408f-bbb6-977c064ac64a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:47:58 compute-0 nova_compute[254092]: 2025-11-25 16:47:58.605 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:58 compute-0 nova_compute[254092]: 2025-11-25 16:47:58.650 254096 DEBUG nova.compute.manager [None req-35bac9f9-52a6-4367-8121-a5a0bae53863 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Found 1 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Nov 25 16:47:58 compute-0 ceph-mon[74985]: pgmap v1880: 321 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 313 active+clean; 323 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 4.5 MiB/s wr, 132 op/s
Nov 25 16:47:59 compute-0 nova_compute[254092]: 2025-11-25 16:47:59.105 254096 DEBUG nova.network.neutron [-] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:47:59 compute-0 nova_compute[254092]: 2025-11-25 16:47:59.144 254096 DEBUG nova.compute.manager [req-37349d2e-4e6d-4885-a9c2-16e35ba92271 req-5689aefe-8a91-4cec-8602-05aad1121f39 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Received event network-vif-deleted-54bd7c02-9f22-4656-9514-7219e656dbef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:47:59 compute-0 nova_compute[254092]: 2025-11-25 16:47:59.145 254096 INFO nova.compute.manager [req-37349d2e-4e6d-4885-a9c2-16e35ba92271 req-5689aefe-8a91-4cec-8602-05aad1121f39 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Neutron deleted interface 54bd7c02-9f22-4656-9514-7219e656dbef; detaching it from the instance and deleting it from the info cache
Nov 25 16:47:59 compute-0 nova_compute[254092]: 2025-11-25 16:47:59.145 254096 DEBUG nova.network.neutron [req-37349d2e-4e6d-4885-a9c2-16e35ba92271 req-5689aefe-8a91-4cec-8602-05aad1121f39 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:47:59 compute-0 nova_compute[254092]: 2025-11-25 16:47:59.206 254096 INFO nova.compute.manager [-] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Took 1.96 seconds to deallocate network for instance.
Nov 25 16:47:59 compute-0 nova_compute[254092]: 2025-11-25 16:47:59.219 254096 DEBUG nova.compute.manager [req-37349d2e-4e6d-4885-a9c2-16e35ba92271 req-5689aefe-8a91-4cec-8602-05aad1121f39 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Detach interface failed, port_id=54bd7c02-9f22-4656-9514-7219e656dbef, reason: Instance 2122cb4e-4525-451f-a46f-184e4a72cb34 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 25 16:47:59 compute-0 nova_compute[254092]: 2025-11-25 16:47:59.265 254096 DEBUG oslo_concurrency.lockutils [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:59 compute-0 nova_compute[254092]: 2025-11-25 16:47:59.265 254096 DEBUG oslo_concurrency.lockutils [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:59 compute-0 nova_compute[254092]: 2025-11-25 16:47:59.288 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:47:59 compute-0 nova_compute[254092]: 2025-11-25 16:47:59.387 254096 DEBUG oslo_concurrency.processutils [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:47:59 compute-0 nova_compute[254092]: 2025-11-25 16:47:59.431 254096 DEBUG nova.compute.manager [req-7fdee1bb-ec84-483e-b4a7-45ccf77d6ba2 req-61d73bf6-97c4-47b2-9f4c-0d08bf7c2609 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Received event network-vif-plugged-54bd7c02-9f22-4656-9514-7219e656dbef external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:47:59 compute-0 nova_compute[254092]: 2025-11-25 16:47:59.431 254096 DEBUG oslo_concurrency.lockutils [req-7fdee1bb-ec84-483e-b4a7-45ccf77d6ba2 req-61d73bf6-97c4-47b2-9f4c-0d08bf7c2609 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2122cb4e-4525-451f-a46f-184e4a72cb34-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:47:59 compute-0 nova_compute[254092]: 2025-11-25 16:47:59.432 254096 DEBUG oslo_concurrency.lockutils [req-7fdee1bb-ec84-483e-b4a7-45ccf77d6ba2 req-61d73bf6-97c4-47b2-9f4c-0d08bf7c2609 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2122cb4e-4525-451f-a46f-184e4a72cb34-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:47:59 compute-0 nova_compute[254092]: 2025-11-25 16:47:59.432 254096 DEBUG oslo_concurrency.lockutils [req-7fdee1bb-ec84-483e-b4a7-45ccf77d6ba2 req-61d73bf6-97c4-47b2-9f4c-0d08bf7c2609 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2122cb4e-4525-451f-a46f-184e4a72cb34-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:59 compute-0 nova_compute[254092]: 2025-11-25 16:47:59.432 254096 DEBUG nova.compute.manager [req-7fdee1bb-ec84-483e-b4a7-45ccf77d6ba2 req-61d73bf6-97c4-47b2-9f4c-0d08bf7c2609 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] No waiting events found dispatching network-vif-plugged-54bd7c02-9f22-4656-9514-7219e656dbef pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:47:59 compute-0 nova_compute[254092]: 2025-11-25 16:47:59.433 254096 WARNING nova.compute.manager [req-7fdee1bb-ec84-483e-b4a7-45ccf77d6ba2 req-61d73bf6-97c4-47b2-9f4c-0d08bf7c2609 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Received unexpected event network-vif-plugged-54bd7c02-9f22-4656-9514-7219e656dbef for instance with vm_state deleted and task_state None.
Nov 25 16:47:59 compute-0 nova_compute[254092]: 2025-11-25 16:47:59.571 254096 DEBUG nova.compute.manager [None req-5440eb23-590d-4d24-90c5-9dfc08ad7edb 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:47:59 compute-0 nova_compute[254092]: 2025-11-25 16:47:59.626 254096 INFO nova.compute.manager [None req-5440eb23-590d-4d24-90c5-9dfc08ad7edb 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] instance snapshotting
Nov 25 16:47:59 compute-0 nova_compute[254092]: 2025-11-25 16:47:59.628 254096 DEBUG nova.objects.instance [None req-5440eb23-590d-4d24-90c5-9dfc08ad7edb 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lazy-loading 'flavor' on Instance uuid 73301044-3bad-4401-9e30-f009d417f662 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:47:59 compute-0 nova_compute[254092]: 2025-11-25 16:47:59.668 254096 DEBUG nova.network.neutron [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Successfully updated port: 59b2ac13-fc64-408f-bbb6-977c064ac64a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:47:59 compute-0 nova_compute[254092]: 2025-11-25 16:47:59.685 254096 DEBUG oslo_concurrency.lockutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "refresh_cache-7c679c82-4594-4519-a291-de41650ba66b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:47:59 compute-0 nova_compute[254092]: 2025-11-25 16:47:59.686 254096 DEBUG oslo_concurrency.lockutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquired lock "refresh_cache-7c679c82-4594-4519-a291-de41650ba66b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:47:59 compute-0 nova_compute[254092]: 2025-11-25 16:47:59.686 254096 DEBUG nova.network.neutron [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:47:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1881: 321 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 313 active+clean; 323 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 4.5 MiB/s wr, 132 op/s
Nov 25 16:47:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:47:59 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2958365030' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:47:59 compute-0 nova_compute[254092]: 2025-11-25 16:47:59.833 254096 DEBUG oslo_concurrency.processutils [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:47:59 compute-0 nova_compute[254092]: 2025-11-25 16:47:59.838 254096 DEBUG nova.compute.provider_tree [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:47:59 compute-0 nova_compute[254092]: 2025-11-25 16:47:59.852 254096 DEBUG nova.scheduler.client.report [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:47:59 compute-0 nova_compute[254092]: 2025-11-25 16:47:59.874 254096 DEBUG oslo_concurrency.lockutils [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:47:59 compute-0 nova_compute[254092]: 2025-11-25 16:47:59.878 254096 INFO nova.virt.libvirt.driver [None req-5440eb23-590d-4d24-90c5-9dfc08ad7edb 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Beginning live snapshot process
Nov 25 16:47:59 compute-0 nova_compute[254092]: 2025-11-25 16:47:59.897 254096 INFO nova.scheduler.client.report [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Deleted allocations for instance 2122cb4e-4525-451f-a46f-184e4a72cb34
Nov 25 16:47:59 compute-0 nova_compute[254092]: 2025-11-25 16:47:59.960 254096 DEBUG nova.network.neutron [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:47:59 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2958365030' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:48:00 compute-0 nova_compute[254092]: 2025-11-25 16:48:00.053 254096 DEBUG oslo_concurrency.lockutils [None req-069f14ac-ea1a-4053-976b-de6603845c97 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "2122cb4e-4525-451f-a46f-184e4a72cb34" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.773s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:00 compute-0 nova_compute[254092]: 2025-11-25 16:48:00.060 254096 DEBUG nova.virt.libvirt.imagebackend [None req-5440eb23-590d-4d24-90c5-9dfc08ad7edb 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] No parent info for 8b512c8e-2281-41de-a668-eb983e174ba0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 25 16:48:00 compute-0 nova_compute[254092]: 2025-11-25 16:48:00.244 254096 DEBUG nova.storage.rbd_utils [None req-5440eb23-590d-4d24-90c5-9dfc08ad7edb 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] creating snapshot(a9099c36e23a49a0929de2f63a1a0978) on rbd image(73301044-3bad-4401-9e30-f009d417f662_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 16:48:00 compute-0 nova_compute[254092]: 2025-11-25 16:48:00.616 254096 DEBUG nova.network.neutron [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Updating instance_info_cache with network_info: [{"id": "59b2ac13-fc64-408f-bbb6-977c064ac64a", "address": "fa:16:3e:d2:fa:3e", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59b2ac13-fc", "ovs_interfaceid": "59b2ac13-fc64-408f-bbb6-977c064ac64a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:48:00 compute-0 nova_compute[254092]: 2025-11-25 16:48:00.633 254096 DEBUG oslo_concurrency.lockutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Releasing lock "refresh_cache-7c679c82-4594-4519-a291-de41650ba66b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:48:00 compute-0 nova_compute[254092]: 2025-11-25 16:48:00.633 254096 DEBUG nova.compute.manager [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Instance network_info: |[{"id": "59b2ac13-fc64-408f-bbb6-977c064ac64a", "address": "fa:16:3e:d2:fa:3e", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59b2ac13-fc", "ovs_interfaceid": "59b2ac13-fc64-408f-bbb6-977c064ac64a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:48:00 compute-0 nova_compute[254092]: 2025-11-25 16:48:00.635 254096 DEBUG nova.virt.libvirt.driver [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Start _get_guest_xml network_info=[{"id": "59b2ac13-fc64-408f-bbb6-977c064ac64a", "address": "fa:16:3e:d2:fa:3e", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59b2ac13-fc", "ovs_interfaceid": "59b2ac13-fc64-408f-bbb6-977c064ac64a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:48:00 compute-0 nova_compute[254092]: 2025-11-25 16:48:00.638 254096 WARNING nova.virt.libvirt.driver [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:48:00 compute-0 nova_compute[254092]: 2025-11-25 16:48:00.644 254096 DEBUG nova.virt.libvirt.host [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:48:00 compute-0 nova_compute[254092]: 2025-11-25 16:48:00.645 254096 DEBUG nova.virt.libvirt.host [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:48:00 compute-0 nova_compute[254092]: 2025-11-25 16:48:00.648 254096 DEBUG nova.virt.libvirt.host [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:48:00 compute-0 nova_compute[254092]: 2025-11-25 16:48:00.649 254096 DEBUG nova.virt.libvirt.host [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:48:00 compute-0 nova_compute[254092]: 2025-11-25 16:48:00.650 254096 DEBUG nova.virt.libvirt.driver [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:48:00 compute-0 nova_compute[254092]: 2025-11-25 16:48:00.650 254096 DEBUG nova.virt.hardware [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:48:00 compute-0 nova_compute[254092]: 2025-11-25 16:48:00.650 254096 DEBUG nova.virt.hardware [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:48:00 compute-0 nova_compute[254092]: 2025-11-25 16:48:00.651 254096 DEBUG nova.virt.hardware [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:48:00 compute-0 nova_compute[254092]: 2025-11-25 16:48:00.651 254096 DEBUG nova.virt.hardware [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:48:00 compute-0 nova_compute[254092]: 2025-11-25 16:48:00.651 254096 DEBUG nova.virt.hardware [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:48:00 compute-0 nova_compute[254092]: 2025-11-25 16:48:00.651 254096 DEBUG nova.virt.hardware [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:48:00 compute-0 nova_compute[254092]: 2025-11-25 16:48:00.652 254096 DEBUG nova.virt.hardware [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:48:00 compute-0 nova_compute[254092]: 2025-11-25 16:48:00.652 254096 DEBUG nova.virt.hardware [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:48:00 compute-0 nova_compute[254092]: 2025-11-25 16:48:00.652 254096 DEBUG nova.virt.hardware [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:48:00 compute-0 nova_compute[254092]: 2025-11-25 16:48:00.652 254096 DEBUG nova.virt.hardware [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:48:00 compute-0 nova_compute[254092]: 2025-11-25 16:48:00.653 254096 DEBUG nova.virt.hardware [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:48:00 compute-0 nova_compute[254092]: 2025-11-25 16:48:00.655 254096 DEBUG oslo_concurrency.processutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e231 do_prune osdmap full prune enabled
Nov 25 16:48:01 compute-0 ceph-mon[74985]: pgmap v1881: 321 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 313 active+clean; 323 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 4.5 MiB/s wr, 132 op/s
Nov 25 16:48:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e232 e232: 3 total, 3 up, 3 in
Nov 25 16:48:01 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e232: 3 total, 3 up, 3 in
Nov 25 16:48:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:48:01 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/382977419' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:48:01 compute-0 nova_compute[254092]: 2025-11-25 16:48:01.126 254096 DEBUG oslo_concurrency.processutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:48:01 compute-0 nova_compute[254092]: 2025-11-25 16:48:01.146 254096 DEBUG nova.storage.rbd_utils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 7c679c82-4594-4519-a291-de41650ba66b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:01 compute-0 nova_compute[254092]: 2025-11-25 16:48:01.150 254096 DEBUG oslo_concurrency.processutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:01 compute-0 nova_compute[254092]: 2025-11-25 16:48:01.197 254096 DEBUG nova.storage.rbd_utils [None req-5440eb23-590d-4d24-90c5-9dfc08ad7edb 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] cloning vms/73301044-3bad-4401-9e30-f009d417f662_disk@a9099c36e23a49a0929de2f63a1a0978 to images/0e96adb9-b508-4139-a36d-294a9f197de1 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 16:48:01 compute-0 nova_compute[254092]: 2025-11-25 16:48:01.226 254096 DEBUG nova.compute.manager [req-89e750fc-7ed9-40d1-80f3-d01b7629d8c1 req-6d340f56-767c-4d95-8781-b665d86c45c6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Received event network-changed-59b2ac13-fc64-408f-bbb6-977c064ac64a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:48:01 compute-0 nova_compute[254092]: 2025-11-25 16:48:01.226 254096 DEBUG nova.compute.manager [req-89e750fc-7ed9-40d1-80f3-d01b7629d8c1 req-6d340f56-767c-4d95-8781-b665d86c45c6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Refreshing instance network info cache due to event network-changed-59b2ac13-fc64-408f-bbb6-977c064ac64a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:48:01 compute-0 nova_compute[254092]: 2025-11-25 16:48:01.227 254096 DEBUG oslo_concurrency.lockutils [req-89e750fc-7ed9-40d1-80f3-d01b7629d8c1 req-6d340f56-767c-4d95-8781-b665d86c45c6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-7c679c82-4594-4519-a291-de41650ba66b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:48:01 compute-0 nova_compute[254092]: 2025-11-25 16:48:01.227 254096 DEBUG oslo_concurrency.lockutils [req-89e750fc-7ed9-40d1-80f3-d01b7629d8c1 req-6d340f56-767c-4d95-8781-b665d86c45c6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-7c679c82-4594-4519-a291-de41650ba66b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:48:01 compute-0 nova_compute[254092]: 2025-11-25 16:48:01.227 254096 DEBUG nova.network.neutron [req-89e750fc-7ed9-40d1-80f3-d01b7629d8c1 req-6d340f56-767c-4d95-8781-b665d86c45c6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Refreshing network info cache for port 59b2ac13-fc64-408f-bbb6-977c064ac64a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:48:01 compute-0 nova_compute[254092]: 2025-11-25 16:48:01.288 254096 DEBUG nova.storage.rbd_utils [None req-5440eb23-590d-4d24-90c5-9dfc08ad7edb 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] flattening images/0e96adb9-b508-4139-a36d-294a9f197de1 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 16:48:01 compute-0 nova_compute[254092]: 2025-11-25 16:48:01.541 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:48:01 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/222757036' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:48:01 compute-0 nova_compute[254092]: 2025-11-25 16:48:01.629 254096 DEBUG oslo_concurrency.processutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:01 compute-0 nova_compute[254092]: 2025-11-25 16:48:01.631 254096 DEBUG nova.virt.libvirt.vif [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:47:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-824118161',display_name='tempest-ServersTestJSON-server-824118161',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-824118161',id=93,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56790f54314e4087abb5da8030b17666',ramdisk_id='',reservation_id='r-razyaibd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1089675869',owner_user_name='tempest-ServersTestJSON-1089675869-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:47:56Z,user_data=None,user_id='7a137feb008c49e092cbb94106526835',uuid=7c679c82-4594-4519-a291-de41650ba66b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "59b2ac13-fc64-408f-bbb6-977c064ac64a", "address": "fa:16:3e:d2:fa:3e", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59b2ac13-fc", "ovs_interfaceid": "59b2ac13-fc64-408f-bbb6-977c064ac64a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:48:01 compute-0 nova_compute[254092]: 2025-11-25 16:48:01.631 254096 DEBUG nova.network.os_vif_util [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converting VIF {"id": "59b2ac13-fc64-408f-bbb6-977c064ac64a", "address": "fa:16:3e:d2:fa:3e", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59b2ac13-fc", "ovs_interfaceid": "59b2ac13-fc64-408f-bbb6-977c064ac64a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:48:01 compute-0 nova_compute[254092]: 2025-11-25 16:48:01.632 254096 DEBUG nova.network.os_vif_util [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:fa:3e,bridge_name='br-int',has_traffic_filtering=True,id=59b2ac13-fc64-408f-bbb6-977c064ac64a,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59b2ac13-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:48:01 compute-0 nova_compute[254092]: 2025-11-25 16:48:01.633 254096 DEBUG nova.objects.instance [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7c679c82-4594-4519-a291-de41650ba66b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:48:01 compute-0 nova_compute[254092]: 2025-11-25 16:48:01.633 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089266.6316898, 4117640c-3ae9-4568-9034-7a7612ac43fe => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:48:01 compute-0 nova_compute[254092]: 2025-11-25 16:48:01.634 254096 INFO nova.compute.manager [-] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] VM Stopped (Lifecycle Event)
Nov 25 16:48:01 compute-0 nova_compute[254092]: 2025-11-25 16:48:01.663 254096 DEBUG nova.virt.libvirt.driver [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:48:01 compute-0 nova_compute[254092]:   <uuid>7c679c82-4594-4519-a291-de41650ba66b</uuid>
Nov 25 16:48:01 compute-0 nova_compute[254092]:   <name>instance-0000005d</name>
Nov 25 16:48:01 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:48:01 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:48:01 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:48:01 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:       <nova:name>tempest-ServersTestJSON-server-824118161</nova:name>
Nov 25 16:48:01 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:48:00</nova:creationTime>
Nov 25 16:48:01 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:48:01 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:48:01 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:48:01 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:48:01 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:48:01 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:48:01 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:48:01 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:48:01 compute-0 nova_compute[254092]:         <nova:user uuid="7a137feb008c49e092cbb94106526835">tempest-ServersTestJSON-1089675869-project-member</nova:user>
Nov 25 16:48:01 compute-0 nova_compute[254092]:         <nova:project uuid="56790f54314e4087abb5da8030b17666">tempest-ServersTestJSON-1089675869</nova:project>
Nov 25 16:48:01 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:48:01 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:48:01 compute-0 nova_compute[254092]:         <nova:port uuid="59b2ac13-fc64-408f-bbb6-977c064ac64a">
Nov 25 16:48:01 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:48:01 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:48:01 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:48:01 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:48:01 compute-0 nova_compute[254092]:     <system>
Nov 25 16:48:01 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:48:01 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:48:01 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:48:01 compute-0 nova_compute[254092]:       <entry name="serial">7c679c82-4594-4519-a291-de41650ba66b</entry>
Nov 25 16:48:01 compute-0 nova_compute[254092]:       <entry name="uuid">7c679c82-4594-4519-a291-de41650ba66b</entry>
Nov 25 16:48:01 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     </system>
Nov 25 16:48:01 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:48:01 compute-0 nova_compute[254092]:   <os>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:   </os>
Nov 25 16:48:01 compute-0 nova_compute[254092]:   <features>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:   </features>
Nov 25 16:48:01 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:48:01 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:48:01 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:48:01 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:48:01 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:48:01 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/7c679c82-4594-4519-a291-de41650ba66b_disk">
Nov 25 16:48:01 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:       </source>
Nov 25 16:48:01 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:48:01 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:48:01 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:48:01 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/7c679c82-4594-4519-a291-de41650ba66b_disk.config">
Nov 25 16:48:01 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:       </source>
Nov 25 16:48:01 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:48:01 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:48:01 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:48:01 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:d2:fa:3e"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:       <target dev="tap59b2ac13-fc"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:48:01 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/7c679c82-4594-4519-a291-de41650ba66b/console.log" append="off"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     <video>
Nov 25 16:48:01 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     </video>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:48:01 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:48:01 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:48:01 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:48:01 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:48:01 compute-0 nova_compute[254092]: </domain>
Nov 25 16:48:01 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:48:01 compute-0 nova_compute[254092]: 2025-11-25 16:48:01.664 254096 DEBUG nova.compute.manager [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Preparing to wait for external event network-vif-plugged-59b2ac13-fc64-408f-bbb6-977c064ac64a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:48:01 compute-0 nova_compute[254092]: 2025-11-25 16:48:01.664 254096 DEBUG oslo_concurrency.lockutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "7c679c82-4594-4519-a291-de41650ba66b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:01 compute-0 nova_compute[254092]: 2025-11-25 16:48:01.664 254096 DEBUG oslo_concurrency.lockutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "7c679c82-4594-4519-a291-de41650ba66b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:01 compute-0 nova_compute[254092]: 2025-11-25 16:48:01.664 254096 DEBUG oslo_concurrency.lockutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "7c679c82-4594-4519-a291-de41650ba66b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:01 compute-0 nova_compute[254092]: 2025-11-25 16:48:01.665 254096 DEBUG nova.virt.libvirt.vif [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:47:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-824118161',display_name='tempest-ServersTestJSON-server-824118161',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-824118161',id=93,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56790f54314e4087abb5da8030b17666',ramdisk_id='',reservation_id='r-razyaibd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1089675869',owner_user_name='tempest-ServersTestJSON-1089675869-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:47:56Z,user_data=None,user_id='7a137feb008c49e092cbb94106526835',uuid=7c679c82-4594-4519-a291-de41650ba66b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "59b2ac13-fc64-408f-bbb6-977c064ac64a", "address": "fa:16:3e:d2:fa:3e", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59b2ac13-fc", "ovs_interfaceid": "59b2ac13-fc64-408f-bbb6-977c064ac64a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:48:01 compute-0 nova_compute[254092]: 2025-11-25 16:48:01.665 254096 DEBUG nova.network.os_vif_util [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converting VIF {"id": "59b2ac13-fc64-408f-bbb6-977c064ac64a", "address": "fa:16:3e:d2:fa:3e", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59b2ac13-fc", "ovs_interfaceid": "59b2ac13-fc64-408f-bbb6-977c064ac64a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:48:01 compute-0 nova_compute[254092]: 2025-11-25 16:48:01.665 254096 DEBUG nova.network.os_vif_util [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:fa:3e,bridge_name='br-int',has_traffic_filtering=True,id=59b2ac13-fc64-408f-bbb6-977c064ac64a,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59b2ac13-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:48:01 compute-0 nova_compute[254092]: 2025-11-25 16:48:01.666 254096 DEBUG os_vif [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:fa:3e,bridge_name='br-int',has_traffic_filtering=True,id=59b2ac13-fc64-408f-bbb6-977c064ac64a,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59b2ac13-fc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:48:01 compute-0 nova_compute[254092]: 2025-11-25 16:48:01.667 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:01 compute-0 nova_compute[254092]: 2025-11-25 16:48:01.667 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:48:01 compute-0 nova_compute[254092]: 2025-11-25 16:48:01.667 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:48:01 compute-0 nova_compute[254092]: 2025-11-25 16:48:01.668 254096 DEBUG nova.compute.manager [None req-10132a64-2d84-4845-a5fb-540d8d2668aa - - - - - -] [instance: 4117640c-3ae9-4568-9034-7a7612ac43fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:48:01 compute-0 nova_compute[254092]: 2025-11-25 16:48:01.670 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:01 compute-0 nova_compute[254092]: 2025-11-25 16:48:01.670 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap59b2ac13-fc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:48:01 compute-0 nova_compute[254092]: 2025-11-25 16:48:01.670 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap59b2ac13-fc, col_values=(('external_ids', {'iface-id': '59b2ac13-fc64-408f-bbb6-977c064ac64a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d2:fa:3e', 'vm-uuid': '7c679c82-4594-4519-a291-de41650ba66b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:48:01 compute-0 nova_compute[254092]: 2025-11-25 16:48:01.672 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:01 compute-0 NetworkManager[48891]: <info>  [1764089281.6728] manager: (tap59b2ac13-fc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/377)
Nov 25 16:48:01 compute-0 nova_compute[254092]: 2025-11-25 16:48:01.674 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:48:01 compute-0 nova_compute[254092]: 2025-11-25 16:48:01.678 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:01 compute-0 nova_compute[254092]: 2025-11-25 16:48:01.679 254096 INFO os_vif [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:fa:3e,bridge_name='br-int',has_traffic_filtering=True,id=59b2ac13-fc64-408f-bbb6-977c064ac64a,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59b2ac13-fc')
Nov 25 16:48:01 compute-0 nova_compute[254092]: 2025-11-25 16:48:01.750 254096 DEBUG nova.virt.libvirt.driver [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:48:01 compute-0 nova_compute[254092]: 2025-11-25 16:48:01.751 254096 DEBUG nova.virt.libvirt.driver [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:48:01 compute-0 nova_compute[254092]: 2025-11-25 16:48:01.751 254096 DEBUG nova.virt.libvirt.driver [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] No VIF found with MAC fa:16:3e:d2:fa:3e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:48:01 compute-0 nova_compute[254092]: 2025-11-25 16:48:01.751 254096 INFO nova.virt.libvirt.driver [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Using config drive
Nov 25 16:48:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1883: 321 pgs: 321 active+clean; 325 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 11 MiB/s wr, 278 op/s
Nov 25 16:48:01 compute-0 nova_compute[254092]: 2025-11-25 16:48:01.829 254096 DEBUG nova.storage.rbd_utils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 7c679c82-4594-4519-a291-de41650ba66b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:01 compute-0 nova_compute[254092]: 2025-11-25 16:48:01.920 254096 DEBUG nova.storage.rbd_utils [None req-5440eb23-590d-4d24-90c5-9dfc08ad7edb 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] removing snapshot(a9099c36e23a49a0929de2f63a1a0978) on rbd image(73301044-3bad-4401-9e30-f009d417f662_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 16:48:02 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e232 do_prune osdmap full prune enabled
Nov 25 16:48:02 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e233 e233: 3 total, 3 up, 3 in
Nov 25 16:48:02 compute-0 ceph-mon[74985]: osdmap e232: 3 total, 3 up, 3 in
Nov 25 16:48:02 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/382977419' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:48:02 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/222757036' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:48:02 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e233: 3 total, 3 up, 3 in
Nov 25 16:48:02 compute-0 nova_compute[254092]: 2025-11-25 16:48:02.090 254096 DEBUG nova.storage.rbd_utils [None req-5440eb23-590d-4d24-90c5-9dfc08ad7edb 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] creating snapshot(snap) on rbd image(0e96adb9-b508-4139-a36d-294a9f197de1) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 16:48:02 compute-0 nova_compute[254092]: 2025-11-25 16:48:02.137 254096 DEBUG oslo_concurrency.lockutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquiring lock "435ae693-6844-49ae-977b-ec3aa89cfe70" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:02 compute-0 nova_compute[254092]: 2025-11-25 16:48:02.138 254096 DEBUG oslo_concurrency.lockutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "435ae693-6844-49ae-977b-ec3aa89cfe70" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:02 compute-0 nova_compute[254092]: 2025-11-25 16:48:02.153 254096 DEBUG nova.compute.manager [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:48:02 compute-0 nova_compute[254092]: 2025-11-25 16:48:02.219 254096 DEBUG oslo_concurrency.lockutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:02 compute-0 nova_compute[254092]: 2025-11-25 16:48:02.220 254096 DEBUG oslo_concurrency.lockutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:02 compute-0 nova_compute[254092]: 2025-11-25 16:48:02.226 254096 DEBUG nova.virt.hardware [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:48:02 compute-0 nova_compute[254092]: 2025-11-25 16:48:02.226 254096 INFO nova.compute.claims [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:48:02 compute-0 nova_compute[254092]: 2025-11-25 16:48:02.369 254096 DEBUG oslo_concurrency.processutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:02 compute-0 nova_compute[254092]: 2025-11-25 16:48:02.413 254096 INFO nova.virt.libvirt.driver [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Creating config drive at /var/lib/nova/instances/7c679c82-4594-4519-a291-de41650ba66b/disk.config
Nov 25 16:48:02 compute-0 nova_compute[254092]: 2025-11-25 16:48:02.421 254096 DEBUG oslo_concurrency.processutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7c679c82-4594-4519-a291-de41650ba66b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9b8jsqum execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:02 compute-0 nova_compute[254092]: 2025-11-25 16:48:02.562 254096 DEBUG oslo_concurrency.processutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7c679c82-4594-4519-a291-de41650ba66b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9b8jsqum" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:02 compute-0 nova_compute[254092]: 2025-11-25 16:48:02.600 254096 DEBUG nova.storage.rbd_utils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 7c679c82-4594-4519-a291-de41650ba66b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:02 compute-0 nova_compute[254092]: 2025-11-25 16:48:02.605 254096 DEBUG oslo_concurrency.processutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7c679c82-4594-4519-a291-de41650ba66b/disk.config 7c679c82-4594-4519-a291-de41650ba66b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:02 compute-0 nova_compute[254092]: 2025-11-25 16:48:02.767 254096 DEBUG oslo_concurrency.processutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7c679c82-4594-4519-a291-de41650ba66b/disk.config 7c679c82-4594-4519-a291-de41650ba66b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:02 compute-0 nova_compute[254092]: 2025-11-25 16:48:02.768 254096 INFO nova.virt.libvirt.driver [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Deleting local config drive /var/lib/nova/instances/7c679c82-4594-4519-a291-de41650ba66b/disk.config because it was imported into RBD.
Nov 25 16:48:02 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:48:02 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4275781388' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:48:02 compute-0 kernel: tap59b2ac13-fc: entered promiscuous mode
Nov 25 16:48:02 compute-0 NetworkManager[48891]: <info>  [1764089282.8141] manager: (tap59b2ac13-fc): new Tun device (/org/freedesktop/NetworkManager/Devices/378)
Nov 25 16:48:02 compute-0 ovn_controller[153477]: 2025-11-25T16:48:02Z|00891|binding|INFO|Claiming lport 59b2ac13-fc64-408f-bbb6-977c064ac64a for this chassis.
Nov 25 16:48:02 compute-0 ovn_controller[153477]: 2025-11-25T16:48:02Z|00892|binding|INFO|59b2ac13-fc64-408f-bbb6-977c064ac64a: Claiming fa:16:3e:d2:fa:3e 10.100.0.10
Nov 25 16:48:02 compute-0 nova_compute[254092]: 2025-11-25 16:48:02.814 254096 DEBUG oslo_concurrency.processutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:02 compute-0 nova_compute[254092]: 2025-11-25 16:48:02.816 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:02.821 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:fa:3e 10.100.0.10'], port_security=['fa:16:3e:d2:fa:3e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '7c679c82-4594-4519-a291-de41650ba66b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1431a6bc-93c8-4db5-a148-b2950f02c941', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56790f54314e4087abb5da8030b17666', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6f3e9934-51eb-4bf8-9114-93abd745996a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f25b0746-2c82-4a0a-9e71-0116018fc740, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=59b2ac13-fc64-408f-bbb6-977c064ac64a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:48:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:02.823 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 59b2ac13-fc64-408f-bbb6-977c064ac64a in datapath 1431a6bc-93c8-4db5-a148-b2950f02c941 bound to our chassis
Nov 25 16:48:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:02.824 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1431a6bc-93c8-4db5-a148-b2950f02c941
Nov 25 16:48:02 compute-0 nova_compute[254092]: 2025-11-25 16:48:02.830 254096 DEBUG nova.compute.provider_tree [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:48:02 compute-0 systemd-udevd[348561]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:48:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:02.839 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9c17569a-7762-4a75-a41f-d48f98e0c62b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:02 compute-0 nova_compute[254092]: 2025-11-25 16:48:02.841 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:02 compute-0 systemd-machined[216343]: New machine qemu-114-instance-0000005d.
Nov 25 16:48:02 compute-0 ovn_controller[153477]: 2025-11-25T16:48:02Z|00893|binding|INFO|Setting lport 59b2ac13-fc64-408f-bbb6-977c064ac64a ovn-installed in OVS
Nov 25 16:48:02 compute-0 ovn_controller[153477]: 2025-11-25T16:48:02Z|00894|binding|INFO|Setting lport 59b2ac13-fc64-408f-bbb6-977c064ac64a up in Southbound
Nov 25 16:48:02 compute-0 nova_compute[254092]: 2025-11-25 16:48:02.846 254096 DEBUG nova.scheduler.client.report [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:48:02 compute-0 nova_compute[254092]: 2025-11-25 16:48:02.850 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:02 compute-0 NetworkManager[48891]: <info>  [1764089282.8554] device (tap59b2ac13-fc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:48:02 compute-0 NetworkManager[48891]: <info>  [1764089282.8563] device (tap59b2ac13-fc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:48:02 compute-0 systemd[1]: Started Virtual Machine qemu-114-instance-0000005d.
Nov 25 16:48:02 compute-0 nova_compute[254092]: 2025-11-25 16:48:02.869 254096 DEBUG oslo_concurrency.lockutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:02 compute-0 nova_compute[254092]: 2025-11-25 16:48:02.869 254096 DEBUG nova.compute.manager [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:48:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:02.870 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d8197771-6024-4323-89c5-0804912f1cc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:02.873 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[696eed09-0054-4577-b9ef-e7ab0cf25fd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:02.903 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f8635db5-5d98-4aaf-a517-4313de15d999]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:02 compute-0 nova_compute[254092]: 2025-11-25 16:48:02.925 254096 DEBUG nova.compute.manager [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:48:02 compute-0 nova_compute[254092]: 2025-11-25 16:48:02.926 254096 DEBUG nova.network.neutron [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:48:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:02.925 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a0ea6fb4-8af2-4512-bba5-3457dc875625]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1431a6bc-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:f8:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 21, 'rx_bytes': 700, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 21, 'rx_bytes': 700, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 238], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563972, 'reachable_time': 23815, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348574, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:02 compute-0 nova_compute[254092]: 2025-11-25 16:48:02.941 254096 INFO nova.virt.libvirt.driver [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:48:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:02.952 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bc009b8b-cf69-4b5f-a4d2-7a19daa57d42]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1431a6bc-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563984, 'tstamp': 563984}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348576, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1431a6bc-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563987, 'tstamp': 563987}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348576, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:02.955 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1431a6bc-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:48:02 compute-0 nova_compute[254092]: 2025-11-25 16:48:02.958 254096 DEBUG nova.compute.manager [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:48:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:02.959 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1431a6bc-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:48:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:02.960 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:48:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:02.961 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1431a6bc-90, col_values=(('external_ids', {'iface-id': '2b164d7c-ac7d-4965-a9d1-81827e4e9936'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:48:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:02.962 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:48:02 compute-0 nova_compute[254092]: 2025-11-25 16:48:02.963 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.058 254096 DEBUG nova.compute.manager [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.060 254096 DEBUG nova.virt.libvirt.driver [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.061 254096 INFO nova.virt.libvirt.driver [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Creating image(s)
Nov 25 16:48:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e233 do_prune osdmap full prune enabled
Nov 25 16:48:03 compute-0 ceph-mon[74985]: pgmap v1883: 321 pgs: 321 active+clean; 325 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 11 MiB/s wr, 278 op/s
Nov 25 16:48:03 compute-0 ceph-mon[74985]: osdmap e233: 3 total, 3 up, 3 in
Nov 25 16:48:03 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4275781388' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:48:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e234 e234: 3 total, 3 up, 3 in
Nov 25 16:48:03 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e234: 3 total, 3 up, 3 in
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.109 254096 DEBUG nova.storage.rbd_utils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 435ae693-6844-49ae-977b-ec3aa89cfe70_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.142 254096 DEBUG nova.storage.rbd_utils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 435ae693-6844-49ae-977b-ec3aa89cfe70_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.169 254096 DEBUG nova.storage.rbd_utils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 435ae693-6844-49ae-977b-ec3aa89cfe70_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.173 254096 DEBUG oslo_concurrency.processutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.210 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089283.1891844, 7c679c82-4594-4519-a291-de41650ba66b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.211 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7c679c82-4594-4519-a291-de41650ba66b] VM Started (Lifecycle Event)
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.219 254096 DEBUG nova.network.neutron [req-89e750fc-7ed9-40d1-80f3-d01b7629d8c1 req-6d340f56-767c-4d95-8781-b665d86c45c6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Updated VIF entry in instance network info cache for port 59b2ac13-fc64-408f-bbb6-977c064ac64a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.220 254096 DEBUG nova.network.neutron [req-89e750fc-7ed9-40d1-80f3-d01b7629d8c1 req-6d340f56-767c-4d95-8781-b665d86c45c6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Updating instance_info_cache with network_info: [{"id": "59b2ac13-fc64-408f-bbb6-977c064ac64a", "address": "fa:16:3e:d2:fa:3e", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59b2ac13-fc", "ovs_interfaceid": "59b2ac13-fc64-408f-bbb6-977c064ac64a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.233 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.237 254096 DEBUG oslo_concurrency.lockutils [req-89e750fc-7ed9-40d1-80f3-d01b7629d8c1 req-6d340f56-767c-4d95-8781-b665d86c45c6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-7c679c82-4594-4519-a291-de41650ba66b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.239 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089283.1896975, 7c679c82-4594-4519-a291-de41650ba66b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.239 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7c679c82-4594-4519-a291-de41650ba66b] VM Paused (Lifecycle Event)
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.261 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.265 254096 DEBUG oslo_concurrency.processutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.266 254096 DEBUG oslo_concurrency.lockutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.266 254096 DEBUG oslo_concurrency.lockutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.266 254096 DEBUG oslo_concurrency.lockutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.287 254096 DEBUG nova.storage.rbd_utils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 435ae693-6844-49ae-977b-ec3aa89cfe70_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.290 254096 DEBUG oslo_concurrency.processutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 435ae693-6844-49ae-977b-ec3aa89cfe70_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.326 254096 DEBUG nova.policy [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '013868ddd96f43a49458a4615ab1f41b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '544c4f84ca494482aea8e55248fe4c62', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.330 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.338 254096 DEBUG nova.compute.manager [req-9eb1e17e-be99-4432-8e01-6ec0417601a7 req-c309c89a-8ae0-46a4-9658-8952a452532b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Received event network-vif-plugged-59b2ac13-fc64-408f-bbb6-977c064ac64a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.339 254096 DEBUG oslo_concurrency.lockutils [req-9eb1e17e-be99-4432-8e01-6ec0417601a7 req-c309c89a-8ae0-46a4-9658-8952a452532b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7c679c82-4594-4519-a291-de41650ba66b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.339 254096 DEBUG oslo_concurrency.lockutils [req-9eb1e17e-be99-4432-8e01-6ec0417601a7 req-c309c89a-8ae0-46a4-9658-8952a452532b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7c679c82-4594-4519-a291-de41650ba66b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.339 254096 DEBUG oslo_concurrency.lockutils [req-9eb1e17e-be99-4432-8e01-6ec0417601a7 req-c309c89a-8ae0-46a4-9658-8952a452532b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7c679c82-4594-4519-a291-de41650ba66b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.340 254096 DEBUG nova.compute.manager [req-9eb1e17e-be99-4432-8e01-6ec0417601a7 req-c309c89a-8ae0-46a4-9658-8952a452532b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Processing event network-vif-plugged-59b2ac13-fc64-408f-bbb6-977c064ac64a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.341 254096 DEBUG nova.compute.manager [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.345 254096 DEBUG nova.virt.libvirt.driver [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.348 254096 INFO nova.virt.libvirt.driver [-] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Instance spawned successfully.
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.349 254096 DEBUG nova.virt.libvirt.driver [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.353 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7c679c82-4594-4519-a291-de41650ba66b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.353 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089283.3434644, 7c679c82-4594-4519-a291-de41650ba66b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.354 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7c679c82-4594-4519-a291-de41650ba66b] VM Resumed (Lifecycle Event)
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.377 254096 DEBUG nova.virt.libvirt.driver [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.378 254096 DEBUG nova.virt.libvirt.driver [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.379 254096 DEBUG nova.virt.libvirt.driver [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.379 254096 DEBUG nova.virt.libvirt.driver [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.380 254096 DEBUG nova.virt.libvirt.driver [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.380 254096 DEBUG nova.virt.libvirt.driver [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.388 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.394 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.457 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7c679c82-4594-4519-a291-de41650ba66b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.488 254096 INFO nova.compute.manager [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Took 6.55 seconds to spawn the instance on the hypervisor.
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.489 254096 DEBUG nova.compute.manager [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.560 254096 INFO nova.compute.manager [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Took 7.61 seconds to build instance.
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.580 254096 DEBUG oslo_concurrency.lockutils [None req-56a0e49b-5ef2-4e96-90b4-37b854fa9198 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "7c679c82-4594-4519-a291-de41650ba66b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.625 254096 DEBUG oslo_concurrency.processutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 435ae693-6844-49ae-977b-ec3aa89cfe70_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.335s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.626 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.683 254096 DEBUG nova.storage.rbd_utils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] resizing rbd image 435ae693-6844-49ae-977b-ec3aa89cfe70_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:48:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1886: 321 pgs: 321 active+clean; 325 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 6.8 MiB/s wr, 213 op/s
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.772 254096 DEBUG nova.objects.instance [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lazy-loading 'migration_context' on Instance uuid 435ae693-6844-49ae-977b-ec3aa89cfe70 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.785 254096 DEBUG nova.virt.libvirt.driver [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.786 254096 DEBUG nova.virt.libvirt.driver [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Ensure instance console log exists: /var/lib/nova/instances/435ae693-6844-49ae-977b-ec3aa89cfe70/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.786 254096 DEBUG oslo_concurrency.lockutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.786 254096 DEBUG oslo_concurrency.lockutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:03 compute-0 nova_compute[254092]: 2025-11-25 16:48:03.787 254096 DEBUG oslo_concurrency.lockutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:04 compute-0 ceph-mon[74985]: osdmap e234: 3 total, 3 up, 3 in
Nov 25 16:48:04 compute-0 ovn_controller[153477]: 2025-11-25T16:48:04Z|00895|binding|INFO|Releasing lport 031633f4-be6f-41a4-9c41-8e90000f1a4f from this chassis (sb_readonly=0)
Nov 25 16:48:04 compute-0 ovn_controller[153477]: 2025-11-25T16:48:04Z|00896|binding|INFO|Releasing lport 2b164d7c-ac7d-4965-a9d1-81827e4e9936 from this chassis (sb_readonly=0)
Nov 25 16:48:04 compute-0 nova_compute[254092]: 2025-11-25 16:48:04.148 254096 DEBUG nova.network.neutron [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Successfully created port: 431770e1-476d-40b3-8477-419b69aa4fe9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:48:04 compute-0 nova_compute[254092]: 2025-11-25 16:48:04.219 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:04 compute-0 nova_compute[254092]: 2025-11-25 16:48:04.707 254096 INFO nova.virt.libvirt.driver [None req-5440eb23-590d-4d24-90c5-9dfc08ad7edb 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Snapshot image upload complete
Nov 25 16:48:04 compute-0 nova_compute[254092]: 2025-11-25 16:48:04.707 254096 INFO nova.compute.manager [None req-5440eb23-590d-4d24-90c5-9dfc08ad7edb 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Took 5.06 seconds to snapshot the instance on the hypervisor.
Nov 25 16:48:04 compute-0 nova_compute[254092]: 2025-11-25 16:48:04.981 254096 DEBUG nova.compute.manager [None req-5440eb23-590d-4d24-90c5-9dfc08ad7edb 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Found 2 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Nov 25 16:48:05 compute-0 nova_compute[254092]: 2025-11-25 16:48:05.049 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089270.048559, 6affd696-c15d-4401-8512-2aabbf55fd4e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:48:05 compute-0 nova_compute[254092]: 2025-11-25 16:48:05.049 254096 INFO nova.compute.manager [-] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] VM Stopped (Lifecycle Event)
Nov 25 16:48:05 compute-0 nova_compute[254092]: 2025-11-25 16:48:05.082 254096 DEBUG nova.compute.manager [None req-3d6e68b1-083b-42ad-b640-3b95337ec761 - - - - - -] [instance: 6affd696-c15d-4401-8512-2aabbf55fd4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:48:05 compute-0 ceph-mon[74985]: pgmap v1886: 321 pgs: 321 active+clean; 325 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 6.8 MiB/s wr, 213 op/s
Nov 25 16:48:05 compute-0 nova_compute[254092]: 2025-11-25 16:48:05.601 254096 DEBUG nova.compute.manager [req-cbcace99-549a-42d8-a8fd-e9d8c988d2e9 req-49634207-af0c-4875-aa37-7557eedafa9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Received event network-vif-plugged-59b2ac13-fc64-408f-bbb6-977c064ac64a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:48:05 compute-0 nova_compute[254092]: 2025-11-25 16:48:05.601 254096 DEBUG oslo_concurrency.lockutils [req-cbcace99-549a-42d8-a8fd-e9d8c988d2e9 req-49634207-af0c-4875-aa37-7557eedafa9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7c679c82-4594-4519-a291-de41650ba66b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:05 compute-0 nova_compute[254092]: 2025-11-25 16:48:05.602 254096 DEBUG oslo_concurrency.lockutils [req-cbcace99-549a-42d8-a8fd-e9d8c988d2e9 req-49634207-af0c-4875-aa37-7557eedafa9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7c679c82-4594-4519-a291-de41650ba66b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:05 compute-0 nova_compute[254092]: 2025-11-25 16:48:05.602 254096 DEBUG oslo_concurrency.lockutils [req-cbcace99-549a-42d8-a8fd-e9d8c988d2e9 req-49634207-af0c-4875-aa37-7557eedafa9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7c679c82-4594-4519-a291-de41650ba66b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:05 compute-0 nova_compute[254092]: 2025-11-25 16:48:05.603 254096 DEBUG nova.compute.manager [req-cbcace99-549a-42d8-a8fd-e9d8c988d2e9 req-49634207-af0c-4875-aa37-7557eedafa9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] No waiting events found dispatching network-vif-plugged-59b2ac13-fc64-408f-bbb6-977c064ac64a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:48:05 compute-0 nova_compute[254092]: 2025-11-25 16:48:05.603 254096 WARNING nova.compute.manager [req-cbcace99-549a-42d8-a8fd-e9d8c988d2e9 req-49634207-af0c-4875-aa37-7557eedafa9c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Received unexpected event network-vif-plugged-59b2ac13-fc64-408f-bbb6-977c064ac64a for instance with vm_state active and task_state None.
Nov 25 16:48:05 compute-0 nova_compute[254092]: 2025-11-25 16:48:05.653 254096 DEBUG nova.network.neutron [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Successfully updated port: 431770e1-476d-40b3-8477-419b69aa4fe9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:48:05 compute-0 nova_compute[254092]: 2025-11-25 16:48:05.670 254096 DEBUG oslo_concurrency.lockutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquiring lock "refresh_cache-435ae693-6844-49ae-977b-ec3aa89cfe70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:48:05 compute-0 nova_compute[254092]: 2025-11-25 16:48:05.670 254096 DEBUG oslo_concurrency.lockutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquired lock "refresh_cache-435ae693-6844-49ae-977b-ec3aa89cfe70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:48:05 compute-0 nova_compute[254092]: 2025-11-25 16:48:05.670 254096 DEBUG nova.network.neutron [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:48:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1887: 321 pgs: 321 active+clean; 396 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 13 MiB/s wr, 468 op/s
Nov 25 16:48:05 compute-0 nova_compute[254092]: 2025-11-25 16:48:05.845 254096 DEBUG nova.network.neutron [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:48:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:48:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e234 do_prune osdmap full prune enabled
Nov 25 16:48:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e235 e235: 3 total, 3 up, 3 in
Nov 25 16:48:06 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e235: 3 total, 3 up, 3 in
Nov 25 16:48:06 compute-0 nova_compute[254092]: 2025-11-25 16:48:06.373 254096 DEBUG nova.compute.manager [None req-e374cc0e-9934-45f8-a330-184e0d2babf5 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:48:06 compute-0 nova_compute[254092]: 2025-11-25 16:48:06.411 254096 INFO nova.compute.manager [None req-e374cc0e-9934-45f8-a330-184e0d2babf5 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] instance snapshotting
Nov 25 16:48:06 compute-0 nova_compute[254092]: 2025-11-25 16:48:06.413 254096 DEBUG nova.objects.instance [None req-e374cc0e-9934-45f8-a330-184e0d2babf5 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lazy-loading 'flavor' on Instance uuid 73301044-3bad-4401-9e30-f009d417f662 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:48:06 compute-0 nova_compute[254092]: 2025-11-25 16:48:06.655 254096 DEBUG nova.network.neutron [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Updating instance_info_cache with network_info: [{"id": "431770e1-476d-40b3-8477-419b69aa4fe9", "address": "fa:16:3e:e9:e7:de", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap431770e1-47", "ovs_interfaceid": "431770e1-476d-40b3-8477-419b69aa4fe9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:48:06 compute-0 nova_compute[254092]: 2025-11-25 16:48:06.670 254096 DEBUG oslo_concurrency.lockutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Releasing lock "refresh_cache-435ae693-6844-49ae-977b-ec3aa89cfe70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:48:06 compute-0 nova_compute[254092]: 2025-11-25 16:48:06.670 254096 DEBUG nova.compute.manager [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Instance network_info: |[{"id": "431770e1-476d-40b3-8477-419b69aa4fe9", "address": "fa:16:3e:e9:e7:de", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap431770e1-47", "ovs_interfaceid": "431770e1-476d-40b3-8477-419b69aa4fe9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:48:06 compute-0 nova_compute[254092]: 2025-11-25 16:48:06.672 254096 DEBUG nova.virt.libvirt.driver [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Start _get_guest_xml network_info=[{"id": "431770e1-476d-40b3-8477-419b69aa4fe9", "address": "fa:16:3e:e9:e7:de", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap431770e1-47", "ovs_interfaceid": "431770e1-476d-40b3-8477-419b69aa4fe9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:48:06 compute-0 nova_compute[254092]: 2025-11-25 16:48:06.673 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:06 compute-0 nova_compute[254092]: 2025-11-25 16:48:06.679 254096 WARNING nova.virt.libvirt.driver [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:48:06 compute-0 nova_compute[254092]: 2025-11-25 16:48:06.681 254096 INFO nova.virt.libvirt.driver [None req-e374cc0e-9934-45f8-a330-184e0d2babf5 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Beginning live snapshot process
Nov 25 16:48:06 compute-0 nova_compute[254092]: 2025-11-25 16:48:06.684 254096 DEBUG nova.virt.libvirt.host [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:48:06 compute-0 nova_compute[254092]: 2025-11-25 16:48:06.684 254096 DEBUG nova.virt.libvirt.host [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:48:06 compute-0 nova_compute[254092]: 2025-11-25 16:48:06.687 254096 DEBUG nova.virt.libvirt.host [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:48:06 compute-0 nova_compute[254092]: 2025-11-25 16:48:06.687 254096 DEBUG nova.virt.libvirt.host [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:48:06 compute-0 nova_compute[254092]: 2025-11-25 16:48:06.688 254096 DEBUG nova.virt.libvirt.driver [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:48:06 compute-0 nova_compute[254092]: 2025-11-25 16:48:06.688 254096 DEBUG nova.virt.hardware [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:48:06 compute-0 nova_compute[254092]: 2025-11-25 16:48:06.688 254096 DEBUG nova.virt.hardware [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:48:06 compute-0 nova_compute[254092]: 2025-11-25 16:48:06.689 254096 DEBUG nova.virt.hardware [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:48:06 compute-0 nova_compute[254092]: 2025-11-25 16:48:06.689 254096 DEBUG nova.virt.hardware [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:48:06 compute-0 nova_compute[254092]: 2025-11-25 16:48:06.689 254096 DEBUG nova.virt.hardware [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:48:06 compute-0 nova_compute[254092]: 2025-11-25 16:48:06.689 254096 DEBUG nova.virt.hardware [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:48:06 compute-0 nova_compute[254092]: 2025-11-25 16:48:06.690 254096 DEBUG nova.virt.hardware [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:48:06 compute-0 nova_compute[254092]: 2025-11-25 16:48:06.690 254096 DEBUG nova.virt.hardware [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:48:06 compute-0 nova_compute[254092]: 2025-11-25 16:48:06.690 254096 DEBUG nova.virt.hardware [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:48:06 compute-0 nova_compute[254092]: 2025-11-25 16:48:06.691 254096 DEBUG nova.virt.hardware [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:48:06 compute-0 nova_compute[254092]: 2025-11-25 16:48:06.691 254096 DEBUG nova.virt.hardware [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:48:06 compute-0 nova_compute[254092]: 2025-11-25 16:48:06.693 254096 DEBUG oslo_concurrency.processutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:06 compute-0 nova_compute[254092]: 2025-11-25 16:48:06.829 254096 DEBUG nova.virt.libvirt.imagebackend [None req-e374cc0e-9934-45f8-a330-184e0d2babf5 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] No parent info for 8b512c8e-2281-41de-a668-eb983e174ba0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 25 16:48:06 compute-0 nova_compute[254092]: 2025-11-25 16:48:06.973 254096 DEBUG oslo_concurrency.lockutils [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "7c679c82-4594-4519-a291-de41650ba66b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:06 compute-0 nova_compute[254092]: 2025-11-25 16:48:06.974 254096 DEBUG oslo_concurrency.lockutils [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "7c679c82-4594-4519-a291-de41650ba66b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:06 compute-0 nova_compute[254092]: 2025-11-25 16:48:06.974 254096 DEBUG oslo_concurrency.lockutils [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "7c679c82-4594-4519-a291-de41650ba66b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:06 compute-0 nova_compute[254092]: 2025-11-25 16:48:06.974 254096 DEBUG oslo_concurrency.lockutils [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "7c679c82-4594-4519-a291-de41650ba66b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:06 compute-0 nova_compute[254092]: 2025-11-25 16:48:06.975 254096 DEBUG oslo_concurrency.lockutils [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "7c679c82-4594-4519-a291-de41650ba66b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:06 compute-0 nova_compute[254092]: 2025-11-25 16:48:06.976 254096 INFO nova.compute.manager [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Terminating instance
Nov 25 16:48:06 compute-0 nova_compute[254092]: 2025-11-25 16:48:06.977 254096 DEBUG nova.compute.manager [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:48:07 compute-0 kernel: tap59b2ac13-fc (unregistering): left promiscuous mode
Nov 25 16:48:07 compute-0 NetworkManager[48891]: <info>  [1764089287.0130] device (tap59b2ac13-fc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:48:07 compute-0 ovn_controller[153477]: 2025-11-25T16:48:07Z|00897|binding|INFO|Releasing lport 59b2ac13-fc64-408f-bbb6-977c064ac64a from this chassis (sb_readonly=0)
Nov 25 16:48:07 compute-0 ovn_controller[153477]: 2025-11-25T16:48:07Z|00898|binding|INFO|Setting lport 59b2ac13-fc64-408f-bbb6-977c064ac64a down in Southbound
Nov 25 16:48:07 compute-0 ovn_controller[153477]: 2025-11-25T16:48:07Z|00899|binding|INFO|Removing iface tap59b2ac13-fc ovn-installed in OVS
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.018 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:07.033 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:fa:3e 10.100.0.10'], port_security=['fa:16:3e:d2:fa:3e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '7c679c82-4594-4519-a291-de41650ba66b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1431a6bc-93c8-4db5-a148-b2950f02c941', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56790f54314e4087abb5da8030b17666', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6f3e9934-51eb-4bf8-9114-93abd745996a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f25b0746-2c82-4a0a-9e71-0116018fc740, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=59b2ac13-fc64-408f-bbb6-977c064ac64a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:48:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:07.034 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 59b2ac13-fc64-408f-bbb6-977c064ac64a in datapath 1431a6bc-93c8-4db5-a148-b2950f02c941 unbound from our chassis
Nov 25 16:48:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:07.035 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1431a6bc-93c8-4db5-a148-b2950f02c941
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.043 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:07.054 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fcc64461-802a-400c-91e3-3b3d5bb182fc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.056 254096 DEBUG nova.storage.rbd_utils [None req-e374cc0e-9934-45f8-a330-184e0d2babf5 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] creating snapshot(1a827f2145e345a2a2591ad42a9e04ef) on rbd image(73301044-3bad-4401-9e30-f009d417f662_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 16:48:07 compute-0 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d0000005d.scope: Deactivated successfully.
Nov 25 16:48:07 compute-0 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d0000005d.scope: Consumed 4.004s CPU time.
Nov 25 16:48:07 compute-0 systemd-machined[216343]: Machine qemu-114-instance-0000005d terminated.
Nov 25 16:48:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:07.084 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6795109a-9594-406f-9b2e-45ede786313b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:07.087 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[7a77af6a-d907-41e7-9cab-fc250b6d4fdb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:07.116 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[32ef197c-e61e-4c45-b0ed-a12a69dd82d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:48:07 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/667458367' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:48:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:07.136 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8688971a-c27d-4d7b-8469-25cf3a5d2bbb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1431a6bc-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:f8:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 23, 'rx_bytes': 700, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 23, 'rx_bytes': 700, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 238], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563972, 'reachable_time': 23815, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348868, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e235 do_prune osdmap full prune enabled
Nov 25 16:48:07 compute-0 ceph-mon[74985]: pgmap v1887: 321 pgs: 321 active+clean; 396 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 13 MiB/s wr, 468 op/s
Nov 25 16:48:07 compute-0 ceph-mon[74985]: osdmap e235: 3 total, 3 up, 3 in
Nov 25 16:48:07 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/667458367' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.148 254096 DEBUG oslo_concurrency.processutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:07.151 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3900905e-f38f-4e44-aaeb-fb44babd6463]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1431a6bc-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563984, 'tstamp': 563984}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348871, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1431a6bc-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563987, 'tstamp': 563987}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348871, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:07.153 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1431a6bc-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:48:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e236 e236: 3 total, 3 up, 3 in
Nov 25 16:48:07 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e236: 3 total, 3 up, 3 in
Nov 25 16:48:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:07.176 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1431a6bc-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:48:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:07.177 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:48:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:07.179 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1431a6bc-90, col_values=(('external_ids', {'iface-id': '2b164d7c-ac7d-4965-a9d1-81827e4e9936'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:48:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:07.179 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.200 254096 DEBUG nova.storage.rbd_utils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 435ae693-6844-49ae-977b-ec3aa89cfe70_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.206 254096 DEBUG oslo_concurrency.processutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.239 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.246 254096 INFO nova.virt.libvirt.driver [-] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Instance destroyed successfully.
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.246 254096 DEBUG nova.objects.instance [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lazy-loading 'resources' on Instance uuid 7c679c82-4594-4519-a291-de41650ba66b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.264 254096 DEBUG nova.storage.rbd_utils [None req-e374cc0e-9934-45f8-a330-184e0d2babf5 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] cloning vms/73301044-3bad-4401-9e30-f009d417f662_disk@1a827f2145e345a2a2591ad42a9e04ef to images/ca6a619e-78ad-49e2-956b-e212b3350627 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.293 254096 DEBUG nova.virt.libvirt.vif [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:202:202,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:47:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-824118161',display_name='tempest-ServersTestJSON-server-824118161',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-824118161',id=93,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:48:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='56790f54314e4087abb5da8030b17666',ramdisk_id='',reservation_id='r-razyaibd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1
',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1089675869',owner_user_name='tempest-ServersTestJSON-1089675869-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:48:05Z,user_data=None,user_id='7a137feb008c49e092cbb94106526835',uuid=7c679c82-4594-4519-a291-de41650ba66b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "59b2ac13-fc64-408f-bbb6-977c064ac64a", "address": "fa:16:3e:d2:fa:3e", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59b2ac13-fc", "ovs_interfaceid": "59b2ac13-fc64-408f-bbb6-977c064ac64a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.293 254096 DEBUG nova.network.os_vif_util [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converting VIF {"id": "59b2ac13-fc64-408f-bbb6-977c064ac64a", "address": "fa:16:3e:d2:fa:3e", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59b2ac13-fc", "ovs_interfaceid": "59b2ac13-fc64-408f-bbb6-977c064ac64a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.294 254096 DEBUG nova.network.os_vif_util [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:fa:3e,bridge_name='br-int',has_traffic_filtering=True,id=59b2ac13-fc64-408f-bbb6-977c064ac64a,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59b2ac13-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.294 254096 DEBUG os_vif [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:fa:3e,bridge_name='br-int',has_traffic_filtering=True,id=59b2ac13-fc64-408f-bbb6-977c064ac64a,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59b2ac13-fc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.296 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.296 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap59b2ac13-fc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.298 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.300 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.303 254096 INFO os_vif [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:fa:3e,bridge_name='br-int',has_traffic_filtering=True,id=59b2ac13-fc64-408f-bbb6-977c064ac64a,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59b2ac13-fc')
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.377 254096 DEBUG nova.storage.rbd_utils [None req-e374cc0e-9934-45f8-a330-184e0d2babf5 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] flattening images/ca6a619e-78ad-49e2-956b-e212b3350627 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 16:48:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:48:07 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/492836529' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.676 254096 DEBUG nova.compute.manager [req-e0d98e6d-a246-4b84-9505-919feded0f28 req-672da5db-7290-44f9-9660-d135aa98c65e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Received event network-changed-431770e1-476d-40b3-8477-419b69aa4fe9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.677 254096 DEBUG nova.compute.manager [req-e0d98e6d-a246-4b84-9505-919feded0f28 req-672da5db-7290-44f9-9660-d135aa98c65e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Refreshing instance network info cache due to event network-changed-431770e1-476d-40b3-8477-419b69aa4fe9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.677 254096 DEBUG oslo_concurrency.lockutils [req-e0d98e6d-a246-4b84-9505-919feded0f28 req-672da5db-7290-44f9-9660-d135aa98c65e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-435ae693-6844-49ae-977b-ec3aa89cfe70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.677 254096 DEBUG oslo_concurrency.lockutils [req-e0d98e6d-a246-4b84-9505-919feded0f28 req-672da5db-7290-44f9-9660-d135aa98c65e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-435ae693-6844-49ae-977b-ec3aa89cfe70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.677 254096 DEBUG nova.network.neutron [req-e0d98e6d-a246-4b84-9505-919feded0f28 req-672da5db-7290-44f9-9660-d135aa98c65e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Refreshing network info cache for port 431770e1-476d-40b3-8477-419b69aa4fe9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.679 254096 DEBUG oslo_concurrency.processutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.680 254096 DEBUG nova.virt.libvirt.vif [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:48:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-328897245',display_name='tempest-ServerRescueTestJSON-server-328897245',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-328897245',id=94,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='544c4f84ca494482aea8e55248fe4c62',ramdisk_id='',reservation_id='r-d30awnf0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-143951276',owner_user_name='tempest-ServerRescueTestJSON-143951276
-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:48:02Z,user_data=None,user_id='013868ddd96f43a49458a4615ab1f41b',uuid=435ae693-6844-49ae-977b-ec3aa89cfe70,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "431770e1-476d-40b3-8477-419b69aa4fe9", "address": "fa:16:3e:e9:e7:de", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap431770e1-47", "ovs_interfaceid": "431770e1-476d-40b3-8477-419b69aa4fe9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.680 254096 DEBUG nova.network.os_vif_util [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Converting VIF {"id": "431770e1-476d-40b3-8477-419b69aa4fe9", "address": "fa:16:3e:e9:e7:de", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap431770e1-47", "ovs_interfaceid": "431770e1-476d-40b3-8477-419b69aa4fe9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.681 254096 DEBUG nova.network.os_vif_util [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:e7:de,bridge_name='br-int',has_traffic_filtering=True,id=431770e1-476d-40b3-8477-419b69aa4fe9,network=Network(7e7afc96-cd20-4255-ae42-f12adadff80d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap431770e1-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.682 254096 DEBUG nova.objects.instance [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lazy-loading 'pci_devices' on Instance uuid 435ae693-6844-49ae-977b-ec3aa89cfe70 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.703 254096 DEBUG nova.virt.libvirt.driver [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:48:07 compute-0 nova_compute[254092]:   <uuid>435ae693-6844-49ae-977b-ec3aa89cfe70</uuid>
Nov 25 16:48:07 compute-0 nova_compute[254092]:   <name>instance-0000005e</name>
Nov 25 16:48:07 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:48:07 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:48:07 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:48:07 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:       <nova:name>tempest-ServerRescueTestJSON-server-328897245</nova:name>
Nov 25 16:48:07 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:48:06</nova:creationTime>
Nov 25 16:48:07 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:48:07 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:48:07 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:48:07 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:48:07 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:48:07 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:48:07 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:48:07 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:48:07 compute-0 nova_compute[254092]:         <nova:user uuid="013868ddd96f43a49458a4615ab1f41b">tempest-ServerRescueTestJSON-143951276-project-member</nova:user>
Nov 25 16:48:07 compute-0 nova_compute[254092]:         <nova:project uuid="544c4f84ca494482aea8e55248fe4c62">tempest-ServerRescueTestJSON-143951276</nova:project>
Nov 25 16:48:07 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:48:07 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:48:07 compute-0 nova_compute[254092]:         <nova:port uuid="431770e1-476d-40b3-8477-419b69aa4fe9">
Nov 25 16:48:07 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:48:07 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:48:07 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:48:07 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:48:07 compute-0 nova_compute[254092]:     <system>
Nov 25 16:48:07 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:48:07 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:48:07 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:48:07 compute-0 nova_compute[254092]:       <entry name="serial">435ae693-6844-49ae-977b-ec3aa89cfe70</entry>
Nov 25 16:48:07 compute-0 nova_compute[254092]:       <entry name="uuid">435ae693-6844-49ae-977b-ec3aa89cfe70</entry>
Nov 25 16:48:07 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     </system>
Nov 25 16:48:07 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:48:07 compute-0 nova_compute[254092]:   <os>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:   </os>
Nov 25 16:48:07 compute-0 nova_compute[254092]:   <features>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:   </features>
Nov 25 16:48:07 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:48:07 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:48:07 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:48:07 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:48:07 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:48:07 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/435ae693-6844-49ae-977b-ec3aa89cfe70_disk">
Nov 25 16:48:07 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:       </source>
Nov 25 16:48:07 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:48:07 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:48:07 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:48:07 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/435ae693-6844-49ae-977b-ec3aa89cfe70_disk.config">
Nov 25 16:48:07 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:       </source>
Nov 25 16:48:07 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:48:07 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:48:07 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:48:07 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:e9:e7:de"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:       <target dev="tap431770e1-47"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:48:07 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/435ae693-6844-49ae-977b-ec3aa89cfe70/console.log" append="off"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     <video>
Nov 25 16:48:07 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     </video>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:48:07 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:48:07 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:48:07 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:48:07 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:48:07 compute-0 nova_compute[254092]: </domain>
Nov 25 16:48:07 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.705 254096 DEBUG nova.compute.manager [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Preparing to wait for external event network-vif-plugged-431770e1-476d-40b3-8477-419b69aa4fe9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.705 254096 DEBUG oslo_concurrency.lockutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquiring lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.706 254096 DEBUG oslo_concurrency.lockutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.707 254096 DEBUG oslo_concurrency.lockutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.708 254096 DEBUG nova.virt.libvirt.vif [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:48:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-328897245',display_name='tempest-ServerRescueTestJSON-server-328897245',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-328897245',id=94,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='544c4f84ca494482aea8e55248fe4c62',ramdisk_id='',reservation_id='r-d30awnf0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-143951276',owner_user_name='tempest-ServerRescueTestJSON-143951276-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:48:02Z,user_data=None,user_id='013868ddd96f43a49458a4615ab1f41b',uuid=435ae693-6844-49ae-977b-ec3aa89cfe70,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "431770e1-476d-40b3-8477-419b69aa4fe9", "address": "fa:16:3e:e9:e7:de", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap431770e1-47", "ovs_interfaceid": "431770e1-476d-40b3-8477-419b69aa4fe9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.709 254096 DEBUG nova.network.os_vif_util [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Converting VIF {"id": "431770e1-476d-40b3-8477-419b69aa4fe9", "address": "fa:16:3e:e9:e7:de", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap431770e1-47", "ovs_interfaceid": "431770e1-476d-40b3-8477-419b69aa4fe9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.710 254096 DEBUG nova.network.os_vif_util [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:e7:de,bridge_name='br-int',has_traffic_filtering=True,id=431770e1-476d-40b3-8477-419b69aa4fe9,network=Network(7e7afc96-cd20-4255-ae42-f12adadff80d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap431770e1-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.711 254096 DEBUG os_vif [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:e7:de,bridge_name='br-int',has_traffic_filtering=True,id=431770e1-476d-40b3-8477-419b69aa4fe9,network=Network(7e7afc96-cd20-4255-ae42-f12adadff80d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap431770e1-47') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.712 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.712 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.713 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.721 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.721 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap431770e1-47, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.722 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap431770e1-47, col_values=(('external_ids', {'iface-id': '431770e1-476d-40b3-8477-419b69aa4fe9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e9:e7:de', 'vm-uuid': '435ae693-6844-49ae-977b-ec3aa89cfe70'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.723 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:07 compute-0 NetworkManager[48891]: <info>  [1764089287.7250] manager: (tap431770e1-47): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/379)
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.725 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.730 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.731 254096 INFO os_vif [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:e7:de,bridge_name='br-int',has_traffic_filtering=True,id=431770e1-476d-40b3-8477-419b69aa4fe9,network=Network(7e7afc96-cd20-4255-ae42-f12adadff80d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap431770e1-47')
Nov 25 16:48:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1890: 321 pgs: 321 active+clean; 451 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 12 MiB/s wr, 387 op/s
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.820 254096 DEBUG nova.virt.libvirt.driver [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.821 254096 DEBUG nova.virt.libvirt.driver [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.821 254096 DEBUG nova.virt.libvirt.driver [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] No VIF found with MAC fa:16:3e:e9:e7:de, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.821 254096 INFO nova.virt.libvirt.driver [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Using config drive
Nov 25 16:48:07 compute-0 nova_compute[254092]: 2025-11-25 16:48:07.845 254096 DEBUG nova.storage.rbd_utils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 435ae693-6844-49ae-977b-ec3aa89cfe70_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:08 compute-0 ceph-mon[74985]: osdmap e236: 3 total, 3 up, 3 in
Nov 25 16:48:08 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/492836529' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:48:08 compute-0 nova_compute[254092]: 2025-11-25 16:48:08.610 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:08 compute-0 nova_compute[254092]: 2025-11-25 16:48:08.880 254096 INFO nova.virt.libvirt.driver [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Creating config drive at /var/lib/nova/instances/435ae693-6844-49ae-977b-ec3aa89cfe70/disk.config
Nov 25 16:48:08 compute-0 nova_compute[254092]: 2025-11-25 16:48:08.885 254096 DEBUG oslo_concurrency.processutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/435ae693-6844-49ae-977b-ec3aa89cfe70/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4l6iibha execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:08 compute-0 nova_compute[254092]: 2025-11-25 16:48:08.939 254096 DEBUG nova.storage.rbd_utils [None req-e374cc0e-9934-45f8-a330-184e0d2babf5 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] removing snapshot(1a827f2145e345a2a2591ad42a9e04ef) on rbd image(73301044-3bad-4401-9e30-f009d417f662_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.038 254096 DEBUG oslo_concurrency.processutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/435ae693-6844-49ae-977b-ec3aa89cfe70/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4l6iibha" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.056 254096 DEBUG nova.storage.rbd_utils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 435ae693-6844-49ae-977b-ec3aa89cfe70_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.059 254096 DEBUG oslo_concurrency.processutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/435ae693-6844-49ae-977b-ec3aa89cfe70/disk.config 435ae693-6844-49ae-977b-ec3aa89cfe70_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.091 254096 INFO nova.virt.libvirt.driver [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Deleting instance files /var/lib/nova/instances/7c679c82-4594-4519-a291-de41650ba66b_del
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.093 254096 INFO nova.virt.libvirt.driver [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Deletion of /var/lib/nova/instances/7c679c82-4594-4519-a291-de41650ba66b_del complete
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.150 254096 INFO nova.compute.manager [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Took 2.17 seconds to destroy the instance on the hypervisor.
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.150 254096 DEBUG oslo.service.loopingcall [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.151 254096 DEBUG nova.compute.manager [-] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.151 254096 DEBUG nova.network.neutron [-] [instance: 7c679c82-4594-4519-a291-de41650ba66b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.193 254096 DEBUG oslo_concurrency.processutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/435ae693-6844-49ae-977b-ec3aa89cfe70/disk.config 435ae693-6844-49ae-977b-ec3aa89cfe70_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.194 254096 INFO nova.virt.libvirt.driver [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Deleting local config drive /var/lib/nova/instances/435ae693-6844-49ae-977b-ec3aa89cfe70/disk.config because it was imported into RBD.
Nov 25 16:48:09 compute-0 kernel: tap431770e1-47: entered promiscuous mode
Nov 25 16:48:09 compute-0 systemd-udevd[348842]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:48:09 compute-0 NetworkManager[48891]: <info>  [1764089289.2429] manager: (tap431770e1-47): new Tun device (/org/freedesktop/NetworkManager/Devices/380)
Nov 25 16:48:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e236 do_prune osdmap full prune enabled
Nov 25 16:48:09 compute-0 ovn_controller[153477]: 2025-11-25T16:48:09Z|00900|binding|INFO|Claiming lport 431770e1-476d-40b3-8477-419b69aa4fe9 for this chassis.
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.243 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:09 compute-0 ovn_controller[153477]: 2025-11-25T16:48:09Z|00901|binding|INFO|431770e1-476d-40b3-8477-419b69aa4fe9: Claiming fa:16:3e:e9:e7:de 10.100.0.8
Nov 25 16:48:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e237 e237: 3 total, 3 up, 3 in
Nov 25 16:48:09 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e237: 3 total, 3 up, 3 in
Nov 25 16:48:09 compute-0 NetworkManager[48891]: <info>  [1764089289.2533] device (tap431770e1-47): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:48:09 compute-0 ceph-mon[74985]: pgmap v1890: 321 pgs: 321 active+clean; 451 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 12 MiB/s wr, 387 op/s
Nov 25 16:48:09 compute-0 NetworkManager[48891]: <info>  [1764089289.2542] device (tap431770e1-47): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:48:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:09.253 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e9:e7:de 10.100.0.8'], port_security=['fa:16:3e:e9:e7:de 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '435ae693-6844-49ae-977b-ec3aa89cfe70', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e7afc96-cd20-4255-ae42-f12adadff80d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '544c4f84ca494482aea8e55248fe4c62', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ea3857ca-f518-4e95-9458-20b7becb63fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1ffd0c38-5078-40a5-ba7b-d1f91d316b3e, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=431770e1-476d-40b3-8477-419b69aa4fe9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:48:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:09.254 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 431770e1-476d-40b3-8477-419b69aa4fe9 in datapath 7e7afc96-cd20-4255-ae42-f12adadff80d bound to our chassis
Nov 25 16:48:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:09.255 163338 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7e7afc96-cd20-4255-ae42-f12adadff80d or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 25 16:48:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:09.256 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[938360a9-3858-4f3c-9927-09b52999f173]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.264 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:09 compute-0 ovn_controller[153477]: 2025-11-25T16:48:09Z|00902|binding|INFO|Setting lport 431770e1-476d-40b3-8477-419b69aa4fe9 ovn-installed in OVS
Nov 25 16:48:09 compute-0 ovn_controller[153477]: 2025-11-25T16:48:09Z|00903|binding|INFO|Setting lport 431770e1-476d-40b3-8477-419b69aa4fe9 up in Southbound
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.268 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:09 compute-0 systemd-machined[216343]: New machine qemu-115-instance-0000005e.
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.278 254096 DEBUG nova.storage.rbd_utils [None req-e374cc0e-9934-45f8-a330-184e0d2babf5 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] creating snapshot(snap) on rbd image(ca6a619e-78ad-49e2-956b-e212b3350627) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 16:48:09 compute-0 systemd[1]: Started Virtual Machine qemu-115-instance-0000005e.
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.631 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089289.6311123, 435ae693-6844-49ae-977b-ec3aa89cfe70 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.631 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] VM Started (Lifecycle Event)
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.649 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.653 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089289.6312323, 435ae693-6844-49ae-977b-ec3aa89cfe70 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.653 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] VM Paused (Lifecycle Event)
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.668 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.671 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.685 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:48:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1892: 321 pgs: 321 active+clean; 451 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 11 MiB/s wr, 368 op/s
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.788 254096 DEBUG nova.compute.manager [req-4dc6e0d2-8cf0-4438-8843-dd14ea0df9d1 req-63ae2bbc-7979-42a8-9724-40c4697d4fdc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Received event network-vif-plugged-59b2ac13-fc64-408f-bbb6-977c064ac64a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.788 254096 DEBUG oslo_concurrency.lockutils [req-4dc6e0d2-8cf0-4438-8843-dd14ea0df9d1 req-63ae2bbc-7979-42a8-9724-40c4697d4fdc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7c679c82-4594-4519-a291-de41650ba66b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.788 254096 DEBUG oslo_concurrency.lockutils [req-4dc6e0d2-8cf0-4438-8843-dd14ea0df9d1 req-63ae2bbc-7979-42a8-9724-40c4697d4fdc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7c679c82-4594-4519-a291-de41650ba66b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.788 254096 DEBUG oslo_concurrency.lockutils [req-4dc6e0d2-8cf0-4438-8843-dd14ea0df9d1 req-63ae2bbc-7979-42a8-9724-40c4697d4fdc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7c679c82-4594-4519-a291-de41650ba66b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.789 254096 DEBUG nova.compute.manager [req-4dc6e0d2-8cf0-4438-8843-dd14ea0df9d1 req-63ae2bbc-7979-42a8-9724-40c4697d4fdc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] No waiting events found dispatching network-vif-plugged-59b2ac13-fc64-408f-bbb6-977c064ac64a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.790 254096 WARNING nova.compute.manager [req-4dc6e0d2-8cf0-4438-8843-dd14ea0df9d1 req-63ae2bbc-7979-42a8-9724-40c4697d4fdc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Received unexpected event network-vif-plugged-59b2ac13-fc64-408f-bbb6-977c064ac64a for instance with vm_state active and task_state deleting.
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.790 254096 DEBUG nova.compute.manager [req-4dc6e0d2-8cf0-4438-8843-dd14ea0df9d1 req-63ae2bbc-7979-42a8-9724-40c4697d4fdc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Received event network-vif-plugged-431770e1-476d-40b3-8477-419b69aa4fe9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.790 254096 DEBUG oslo_concurrency.lockutils [req-4dc6e0d2-8cf0-4438-8843-dd14ea0df9d1 req-63ae2bbc-7979-42a8-9724-40c4697d4fdc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.790 254096 DEBUG oslo_concurrency.lockutils [req-4dc6e0d2-8cf0-4438-8843-dd14ea0df9d1 req-63ae2bbc-7979-42a8-9724-40c4697d4fdc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.790 254096 DEBUG oslo_concurrency.lockutils [req-4dc6e0d2-8cf0-4438-8843-dd14ea0df9d1 req-63ae2bbc-7979-42a8-9724-40c4697d4fdc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.791 254096 DEBUG nova.compute.manager [req-4dc6e0d2-8cf0-4438-8843-dd14ea0df9d1 req-63ae2bbc-7979-42a8-9724-40c4697d4fdc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Processing event network-vif-plugged-431770e1-476d-40b3-8477-419b69aa4fe9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.791 254096 DEBUG nova.compute.manager [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.794 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089289.7939894, 435ae693-6844-49ae-977b-ec3aa89cfe70 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.794 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] VM Resumed (Lifecycle Event)
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.796 254096 DEBUG nova.virt.libvirt.driver [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.799 254096 INFO nova.virt.libvirt.driver [-] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Instance spawned successfully.
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.799 254096 DEBUG nova.virt.libvirt.driver [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.815 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.820 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.823 254096 DEBUG nova.virt.libvirt.driver [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.823 254096 DEBUG nova.virt.libvirt.driver [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.824 254096 DEBUG nova.virt.libvirt.driver [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.824 254096 DEBUG nova.virt.libvirt.driver [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.824 254096 DEBUG nova.virt.libvirt.driver [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.825 254096 DEBUG nova.virt.libvirt.driver [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.843 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.848 254096 DEBUG nova.network.neutron [req-e0d98e6d-a246-4b84-9505-919feded0f28 req-672da5db-7290-44f9-9660-d135aa98c65e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Updated VIF entry in instance network info cache for port 431770e1-476d-40b3-8477-419b69aa4fe9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.848 254096 DEBUG nova.network.neutron [req-e0d98e6d-a246-4b84-9505-919feded0f28 req-672da5db-7290-44f9-9660-d135aa98c65e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Updating instance_info_cache with network_info: [{"id": "431770e1-476d-40b3-8477-419b69aa4fe9", "address": "fa:16:3e:e9:e7:de", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap431770e1-47", "ovs_interfaceid": "431770e1-476d-40b3-8477-419b69aa4fe9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.863 254096 DEBUG oslo_concurrency.lockutils [req-e0d98e6d-a246-4b84-9505-919feded0f28 req-672da5db-7290-44f9-9660-d135aa98c65e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-435ae693-6844-49ae-977b-ec3aa89cfe70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.864 254096 DEBUG nova.compute.manager [req-e0d98e6d-a246-4b84-9505-919feded0f28 req-672da5db-7290-44f9-9660-d135aa98c65e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Received event network-vif-unplugged-59b2ac13-fc64-408f-bbb6-977c064ac64a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.864 254096 DEBUG oslo_concurrency.lockutils [req-e0d98e6d-a246-4b84-9505-919feded0f28 req-672da5db-7290-44f9-9660-d135aa98c65e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7c679c82-4594-4519-a291-de41650ba66b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.864 254096 DEBUG oslo_concurrency.lockutils [req-e0d98e6d-a246-4b84-9505-919feded0f28 req-672da5db-7290-44f9-9660-d135aa98c65e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7c679c82-4594-4519-a291-de41650ba66b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.864 254096 DEBUG oslo_concurrency.lockutils [req-e0d98e6d-a246-4b84-9505-919feded0f28 req-672da5db-7290-44f9-9660-d135aa98c65e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7c679c82-4594-4519-a291-de41650ba66b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.865 254096 DEBUG nova.compute.manager [req-e0d98e6d-a246-4b84-9505-919feded0f28 req-672da5db-7290-44f9-9660-d135aa98c65e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] No waiting events found dispatching network-vif-unplugged-59b2ac13-fc64-408f-bbb6-977c064ac64a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.865 254096 DEBUG nova.compute.manager [req-e0d98e6d-a246-4b84-9505-919feded0f28 req-672da5db-7290-44f9-9660-d135aa98c65e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Received event network-vif-unplugged-59b2ac13-fc64-408f-bbb6-977c064ac64a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.873 254096 INFO nova.compute.manager [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Took 6.81 seconds to spawn the instance on the hypervisor.
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.873 254096 DEBUG nova.compute.manager [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.937 254096 INFO nova.compute.manager [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Took 7.75 seconds to build instance.
Nov 25 16:48:09 compute-0 nova_compute[254092]: 2025-11-25 16:48:09.958 254096 DEBUG oslo_concurrency.lockutils [None req-1a281a66-7406-4b0f-b648-4eded14acebb 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "435ae693-6844-49ae-977b-ec3aa89cfe70" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:48:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:48:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:48:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:48:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:48:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:48:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e237 do_prune osdmap full prune enabled
Nov 25 16:48:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e238 e238: 3 total, 3 up, 3 in
Nov 25 16:48:10 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e238: 3 total, 3 up, 3 in
Nov 25 16:48:10 compute-0 ceph-mon[74985]: osdmap e237: 3 total, 3 up, 3 in
Nov 25 16:48:10 compute-0 ceph-mon[74985]: pgmap v1892: 321 pgs: 321 active+clean; 451 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 11 MiB/s wr, 368 op/s
Nov 25 16:48:10 compute-0 nova_compute[254092]: 2025-11-25 16:48:10.847 254096 DEBUG nova.network.neutron [-] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:48:10 compute-0 nova_compute[254092]: 2025-11-25 16:48:10.868 254096 INFO nova.compute.manager [-] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Took 1.72 seconds to deallocate network for instance.
Nov 25 16:48:10 compute-0 nova_compute[254092]: 2025-11-25 16:48:10.940 254096 DEBUG oslo_concurrency.lockutils [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:10 compute-0 nova_compute[254092]: 2025-11-25 16:48:10.941 254096 DEBUG oslo_concurrency.lockutils [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:11 compute-0 nova_compute[254092]: 2025-11-25 16:48:11.063 254096 INFO nova.compute.manager [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Rescuing
Nov 25 16:48:11 compute-0 nova_compute[254092]: 2025-11-25 16:48:11.064 254096 DEBUG oslo_concurrency.lockutils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquiring lock "refresh_cache-435ae693-6844-49ae-977b-ec3aa89cfe70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:48:11 compute-0 nova_compute[254092]: 2025-11-25 16:48:11.064 254096 DEBUG oslo_concurrency.lockutils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquired lock "refresh_cache-435ae693-6844-49ae-977b-ec3aa89cfe70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:48:11 compute-0 nova_compute[254092]: 2025-11-25 16:48:11.064 254096 DEBUG nova.network.neutron [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:48:11 compute-0 nova_compute[254092]: 2025-11-25 16:48:11.068 254096 DEBUG oslo_concurrency.processutils [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:48:11 compute-0 ceph-mon[74985]: osdmap e238: 3 total, 3 up, 3 in
Nov 25 16:48:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:48:11 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/844377541' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:48:11 compute-0 nova_compute[254092]: 2025-11-25 16:48:11.517 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089276.5155525, 2122cb4e-4525-451f-a46f-184e4a72cb34 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:48:11 compute-0 nova_compute[254092]: 2025-11-25 16:48:11.518 254096 INFO nova.compute.manager [-] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] VM Stopped (Lifecycle Event)
Nov 25 16:48:11 compute-0 nova_compute[254092]: 2025-11-25 16:48:11.524 254096 DEBUG oslo_concurrency.processutils [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:11 compute-0 nova_compute[254092]: 2025-11-25 16:48:11.529 254096 DEBUG nova.compute.provider_tree [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:48:11 compute-0 nova_compute[254092]: 2025-11-25 16:48:11.542 254096 INFO nova.virt.libvirt.driver [None req-e374cc0e-9934-45f8-a330-184e0d2babf5 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Snapshot image upload complete
Nov 25 16:48:11 compute-0 nova_compute[254092]: 2025-11-25 16:48:11.543 254096 INFO nova.compute.manager [None req-e374cc0e-9934-45f8-a330-184e0d2babf5 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Took 5.11 seconds to snapshot the instance on the hypervisor.
Nov 25 16:48:11 compute-0 nova_compute[254092]: 2025-11-25 16:48:11.553 254096 DEBUG nova.compute.manager [None req-654a6030-2233-4f95-bfa9-e7986b9973d8 - - - - - -] [instance: 2122cb4e-4525-451f-a46f-184e4a72cb34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:48:11 compute-0 nova_compute[254092]: 2025-11-25 16:48:11.563 254096 DEBUG nova.scheduler.client.report [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:48:11 compute-0 nova_compute[254092]: 2025-11-25 16:48:11.592 254096 DEBUG oslo_concurrency.lockutils [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:11 compute-0 nova_compute[254092]: 2025-11-25 16:48:11.647 254096 INFO nova.scheduler.client.report [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Deleted allocations for instance 7c679c82-4594-4519-a291-de41650ba66b
Nov 25 16:48:11 compute-0 nova_compute[254092]: 2025-11-25 16:48:11.709 254096 DEBUG oslo_concurrency.lockutils [None req-239706f1-ad61-4553-96dc-e40c1c5a365a 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "7c679c82-4594-4519-a291-de41650ba66b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1894: 321 pgs: 321 active+clean; 484 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 8.3 MiB/s wr, 310 op/s
Nov 25 16:48:11 compute-0 nova_compute[254092]: 2025-11-25 16:48:11.834 254096 DEBUG nova.compute.manager [None req-e374cc0e-9934-45f8-a330-184e0d2babf5 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Found 3 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Nov 25 16:48:11 compute-0 nova_compute[254092]: 2025-11-25 16:48:11.835 254096 DEBUG nova.compute.manager [None req-e374cc0e-9934-45f8-a330-184e0d2babf5 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Rotating out 1 backups _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4458
Nov 25 16:48:11 compute-0 nova_compute[254092]: 2025-11-25 16:48:11.835 254096 DEBUG nova.compute.manager [None req-e374cc0e-9934-45f8-a330-184e0d2babf5 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Deleting image 658944a7-ebd4-4546-999c-02701f55081a _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4463
Nov 25 16:48:11 compute-0 nova_compute[254092]: 2025-11-25 16:48:11.902 254096 DEBUG nova.compute.manager [req-b5b5ded9-70ab-4244-8061-b70b4f74ab40 req-7d525fd2-3c49-4636-b90f-4feb8944fbde a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Received event network-vif-plugged-431770e1-476d-40b3-8477-419b69aa4fe9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:48:11 compute-0 nova_compute[254092]: 2025-11-25 16:48:11.903 254096 DEBUG oslo_concurrency.lockutils [req-b5b5ded9-70ab-4244-8061-b70b4f74ab40 req-7d525fd2-3c49-4636-b90f-4feb8944fbde a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:11 compute-0 nova_compute[254092]: 2025-11-25 16:48:11.903 254096 DEBUG oslo_concurrency.lockutils [req-b5b5ded9-70ab-4244-8061-b70b4f74ab40 req-7d525fd2-3c49-4636-b90f-4feb8944fbde a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:11 compute-0 nova_compute[254092]: 2025-11-25 16:48:11.903 254096 DEBUG oslo_concurrency.lockutils [req-b5b5ded9-70ab-4244-8061-b70b4f74ab40 req-7d525fd2-3c49-4636-b90f-4feb8944fbde a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:11 compute-0 nova_compute[254092]: 2025-11-25 16:48:11.904 254096 DEBUG nova.compute.manager [req-b5b5ded9-70ab-4244-8061-b70b4f74ab40 req-7d525fd2-3c49-4636-b90f-4feb8944fbde a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] No waiting events found dispatching network-vif-plugged-431770e1-476d-40b3-8477-419b69aa4fe9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:48:11 compute-0 nova_compute[254092]: 2025-11-25 16:48:11.904 254096 WARNING nova.compute.manager [req-b5b5ded9-70ab-4244-8061-b70b4f74ab40 req-7d525fd2-3c49-4636-b90f-4feb8944fbde a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Received unexpected event network-vif-plugged-431770e1-476d-40b3-8477-419b69aa4fe9 for instance with vm_state active and task_state rescuing.
Nov 25 16:48:11 compute-0 nova_compute[254092]: 2025-11-25 16:48:11.904 254096 DEBUG nova.compute.manager [req-b5b5ded9-70ab-4244-8061-b70b4f74ab40 req-7d525fd2-3c49-4636-b90f-4feb8944fbde a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Received event network-vif-deleted-59b2ac13-fc64-408f-bbb6-977c064ac64a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:48:12 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e238 do_prune osdmap full prune enabled
Nov 25 16:48:12 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e239 e239: 3 total, 3 up, 3 in
Nov 25 16:48:12 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/844377541' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:48:12 compute-0 ceph-mon[74985]: pgmap v1894: 321 pgs: 321 active+clean; 484 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 8.3 MiB/s wr, 310 op/s
Nov 25 16:48:12 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e239: 3 total, 3 up, 3 in
Nov 25 16:48:12 compute-0 nova_compute[254092]: 2025-11-25 16:48:12.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:48:12 compute-0 nova_compute[254092]: 2025-11-25 16:48:12.724 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:12 compute-0 nova_compute[254092]: 2025-11-25 16:48:12.828 254096 DEBUG nova.network.neutron [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Updating instance_info_cache with network_info: [{"id": "431770e1-476d-40b3-8477-419b69aa4fe9", "address": "fa:16:3e:e9:e7:de", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap431770e1-47", "ovs_interfaceid": "431770e1-476d-40b3-8477-419b69aa4fe9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:48:12 compute-0 nova_compute[254092]: 2025-11-25 16:48:12.845 254096 DEBUG oslo_concurrency.lockutils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Releasing lock "refresh_cache-435ae693-6844-49ae-977b-ec3aa89cfe70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:48:13 compute-0 nova_compute[254092]: 2025-11-25 16:48:13.057 254096 DEBUG nova.virt.libvirt.driver [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 16:48:13 compute-0 nova_compute[254092]: 2025-11-25 16:48:13.181 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:13 compute-0 ceph-mon[74985]: osdmap e239: 3 total, 3 up, 3 in
Nov 25 16:48:13 compute-0 nova_compute[254092]: 2025-11-25 16:48:13.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:48:13 compute-0 nova_compute[254092]: 2025-11-25 16:48:13.612 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:13.625 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:13.625 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:13.626 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1896: 321 pgs: 321 active+clean; 484 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 9.8 MiB/s rd, 7.8 MiB/s wr, 291 op/s
Nov 25 16:48:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e239 do_prune osdmap full prune enabled
Nov 25 16:48:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e240 e240: 3 total, 3 up, 3 in
Nov 25 16:48:14 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e240: 3 total, 3 up, 3 in
Nov 25 16:48:14 compute-0 ceph-mon[74985]: pgmap v1896: 321 pgs: 321 active+clean; 484 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 9.8 MiB/s rd, 7.8 MiB/s wr, 291 op/s
Nov 25 16:48:14 compute-0 nova_compute[254092]: 2025-11-25 16:48:14.511 254096 DEBUG oslo_concurrency.lockutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "4677de7c-6625-4c98-a065-214341d8bfea" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:14 compute-0 nova_compute[254092]: 2025-11-25 16:48:14.512 254096 DEBUG oslo_concurrency.lockutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "4677de7c-6625-4c98-a065-214341d8bfea" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:14 compute-0 nova_compute[254092]: 2025-11-25 16:48:14.735 254096 DEBUG nova.compute.manager [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:48:14 compute-0 nova_compute[254092]: 2025-11-25 16:48:14.802 254096 DEBUG oslo_concurrency.lockutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:14 compute-0 nova_compute[254092]: 2025-11-25 16:48:14.803 254096 DEBUG oslo_concurrency.lockutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:14 compute-0 nova_compute[254092]: 2025-11-25 16:48:14.809 254096 DEBUG nova.virt.hardware [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:48:14 compute-0 nova_compute[254092]: 2025-11-25 16:48:14.809 254096 INFO nova.compute.claims [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:48:14 compute-0 nova_compute[254092]: 2025-11-25 16:48:14.984 254096 DEBUG oslo_concurrency.processutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e240 do_prune osdmap full prune enabled
Nov 25 16:48:15 compute-0 ceph-mon[74985]: osdmap e240: 3 total, 3 up, 3 in
Nov 25 16:48:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e241 e241: 3 total, 3 up, 3 in
Nov 25 16:48:15 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e241: 3 total, 3 up, 3 in
Nov 25 16:48:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:48:15 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3994193304' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:48:15 compute-0 nova_compute[254092]: 2025-11-25 16:48:15.461 254096 DEBUG oslo_concurrency.processutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:15 compute-0 nova_compute[254092]: 2025-11-25 16:48:15.467 254096 DEBUG nova.compute.provider_tree [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:48:15 compute-0 nova_compute[254092]: 2025-11-25 16:48:15.481 254096 DEBUG nova.scheduler.client.report [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:48:15 compute-0 nova_compute[254092]: 2025-11-25 16:48:15.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:48:15 compute-0 nova_compute[254092]: 2025-11-25 16:48:15.508 254096 DEBUG oslo_concurrency.lockutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:15 compute-0 nova_compute[254092]: 2025-11-25 16:48:15.509 254096 DEBUG nova.compute.manager [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:48:15 compute-0 nova_compute[254092]: 2025-11-25 16:48:15.552 254096 DEBUG nova.compute.manager [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:48:15 compute-0 nova_compute[254092]: 2025-11-25 16:48:15.552 254096 DEBUG nova.network.neutron [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:48:15 compute-0 nova_compute[254092]: 2025-11-25 16:48:15.567 254096 INFO nova.virt.libvirt.driver [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:48:15 compute-0 nova_compute[254092]: 2025-11-25 16:48:15.579 254096 DEBUG nova.compute.manager [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:48:15 compute-0 nova_compute[254092]: 2025-11-25 16:48:15.677 254096 DEBUG nova.compute.manager [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:48:15 compute-0 nova_compute[254092]: 2025-11-25 16:48:15.679 254096 DEBUG nova.virt.libvirt.driver [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:48:15 compute-0 nova_compute[254092]: 2025-11-25 16:48:15.679 254096 INFO nova.virt.libvirt.driver [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Creating image(s)
Nov 25 16:48:15 compute-0 nova_compute[254092]: 2025-11-25 16:48:15.700 254096 DEBUG nova.storage.rbd_utils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 4677de7c-6625-4c98-a065-214341d8bfea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:15 compute-0 nova_compute[254092]: 2025-11-25 16:48:15.720 254096 DEBUG nova.storage.rbd_utils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 4677de7c-6625-4c98-a065-214341d8bfea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:15 compute-0 nova_compute[254092]: 2025-11-25 16:48:15.738 254096 DEBUG nova.storage.rbd_utils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 4677de7c-6625-4c98-a065-214341d8bfea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:15 compute-0 nova_compute[254092]: 2025-11-25 16:48:15.744 254096 DEBUG oslo_concurrency.processutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1899: 321 pgs: 321 active+clean; 412 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 4.9 MiB/s wr, 351 op/s
Nov 25 16:48:15 compute-0 nova_compute[254092]: 2025-11-25 16:48:15.775 254096 DEBUG nova.policy [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7a137feb008c49e092cbb94106526835', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '56790f54314e4087abb5da8030b17666', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:48:15 compute-0 nova_compute[254092]: 2025-11-25 16:48:15.813 254096 DEBUG oslo_concurrency.processutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:15 compute-0 nova_compute[254092]: 2025-11-25 16:48:15.813 254096 DEBUG oslo_concurrency.lockutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:15 compute-0 nova_compute[254092]: 2025-11-25 16:48:15.814 254096 DEBUG oslo_concurrency.lockutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:15 compute-0 nova_compute[254092]: 2025-11-25 16:48:15.814 254096 DEBUG oslo_concurrency.lockutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:15 compute-0 nova_compute[254092]: 2025-11-25 16:48:15.833 254096 DEBUG nova.storage.rbd_utils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 4677de7c-6625-4c98-a065-214341d8bfea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:15 compute-0 nova_compute[254092]: 2025-11-25 16:48:15.836 254096 DEBUG oslo_concurrency.processutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 4677de7c-6625-4c98-a065-214341d8bfea_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:48:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e241 do_prune osdmap full prune enabled
Nov 25 16:48:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e242 e242: 3 total, 3 up, 3 in
Nov 25 16:48:16 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e242: 3 total, 3 up, 3 in
Nov 25 16:48:16 compute-0 ceph-mon[74985]: osdmap e241: 3 total, 3 up, 3 in
Nov 25 16:48:16 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3994193304' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:48:16 compute-0 ceph-mon[74985]: pgmap v1899: 321 pgs: 321 active+clean; 412 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 4.9 MiB/s wr, 351 op/s
Nov 25 16:48:16 compute-0 nova_compute[254092]: 2025-11-25 16:48:16.471 254096 DEBUG oslo_concurrency.processutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 4677de7c-6625-4c98-a065-214341d8bfea_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:16 compute-0 nova_compute[254092]: 2025-11-25 16:48:16.520 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:48:16 compute-0 nova_compute[254092]: 2025-11-25 16:48:16.520 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 16:48:16 compute-0 nova_compute[254092]: 2025-11-25 16:48:16.568 254096 DEBUG nova.storage.rbd_utils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] resizing rbd image 4677de7c-6625-4c98-a065-214341d8bfea_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:48:16 compute-0 podman[349329]: 2025-11-25 16:48:16.660157451 +0000 UTC m=+0.071332310 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 16:48:16 compute-0 podman[349337]: 2025-11-25 16:48:16.681056238 +0000 UTC m=+0.089975676 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 25 16:48:16 compute-0 nova_compute[254092]: 2025-11-25 16:48:16.682 254096 DEBUG nova.network.neutron [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Successfully created port: cce0c8fe-e83f-4422-aeb3-8b1e6bafa462 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:48:16 compute-0 nova_compute[254092]: 2025-11-25 16:48:16.746 254096 DEBUG nova.objects.instance [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lazy-loading 'migration_context' on Instance uuid 4677de7c-6625-4c98-a065-214341d8bfea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:48:16 compute-0 podman[349346]: 2025-11-25 16:48:16.756474178 +0000 UTC m=+0.152268359 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:48:16 compute-0 nova_compute[254092]: 2025-11-25 16:48:16.764 254096 DEBUG nova.virt.libvirt.driver [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:48:16 compute-0 nova_compute[254092]: 2025-11-25 16:48:16.764 254096 DEBUG nova.virt.libvirt.driver [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Ensure instance console log exists: /var/lib/nova/instances/4677de7c-6625-4c98-a065-214341d8bfea/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:48:16 compute-0 nova_compute[254092]: 2025-11-25 16:48:16.765 254096 DEBUG oslo_concurrency.lockutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:16 compute-0 nova_compute[254092]: 2025-11-25 16:48:16.765 254096 DEBUG oslo_concurrency.lockutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:16 compute-0 nova_compute[254092]: 2025-11-25 16:48:16.765 254096 DEBUG oslo_concurrency.lockutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:17 compute-0 nova_compute[254092]: 2025-11-25 16:48:17.214 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:17 compute-0 ceph-mon[74985]: osdmap e242: 3 total, 3 up, 3 in
Nov 25 16:48:17 compute-0 nova_compute[254092]: 2025-11-25 16:48:17.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:48:17 compute-0 nova_compute[254092]: 2025-11-25 16:48:17.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:17 compute-0 nova_compute[254092]: 2025-11-25 16:48:17.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:17 compute-0 nova_compute[254092]: 2025-11-25 16:48:17.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:17 compute-0 nova_compute[254092]: 2025-11-25 16:48:17.522 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 16:48:17 compute-0 nova_compute[254092]: 2025-11-25 16:48:17.523 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:17 compute-0 nova_compute[254092]: 2025-11-25 16:48:17.728 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1901: 321 pgs: 321 active+clean; 290 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 12 KiB/s wr, 213 op/s
Nov 25 16:48:17 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:48:17 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3792957288' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:48:17 compute-0 nova_compute[254092]: 2025-11-25 16:48:17.975 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:18 compute-0 nova_compute[254092]: 2025-11-25 16:48:18.066 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000053 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:48:18 compute-0 nova_compute[254092]: 2025-11-25 16:48:18.067 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000053 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:48:18 compute-0 nova_compute[254092]: 2025-11-25 16:48:18.072 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000005e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:48:18 compute-0 nova_compute[254092]: 2025-11-25 16:48:18.073 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000005e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:48:18 compute-0 nova_compute[254092]: 2025-11-25 16:48:18.081 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:48:18 compute-0 nova_compute[254092]: 2025-11-25 16:48:18.082 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:48:18 compute-0 nova_compute[254092]: 2025-11-25 16:48:18.085 254096 DEBUG nova.network.neutron [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Successfully updated port: cce0c8fe-e83f-4422-aeb3-8b1e6bafa462 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:48:18 compute-0 nova_compute[254092]: 2025-11-25 16:48:18.103 254096 DEBUG oslo_concurrency.lockutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "refresh_cache-4677de7c-6625-4c98-a065-214341d8bfea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:48:18 compute-0 nova_compute[254092]: 2025-11-25 16:48:18.104 254096 DEBUG oslo_concurrency.lockutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquired lock "refresh_cache-4677de7c-6625-4c98-a065-214341d8bfea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:48:18 compute-0 nova_compute[254092]: 2025-11-25 16:48:18.104 254096 DEBUG nova.network.neutron [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:48:18 compute-0 nova_compute[254092]: 2025-11-25 16:48:18.269 254096 DEBUG nova.network.neutron [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:48:18 compute-0 nova_compute[254092]: 2025-11-25 16:48:18.302 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:48:18 compute-0 nova_compute[254092]: 2025-11-25 16:48:18.303 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3255MB free_disk=59.87629318237305GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 16:48:18 compute-0 nova_compute[254092]: 2025-11-25 16:48:18.303 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:18 compute-0 nova_compute[254092]: 2025-11-25 16:48:18.304 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:18 compute-0 nova_compute[254092]: 2025-11-25 16:48:18.365 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 8b20d119-17cb-4742-9223-90e5020f93a7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:48:18 compute-0 nova_compute[254092]: 2025-11-25 16:48:18.365 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 73301044-3bad-4401-9e30-f009d417f662 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:48:18 compute-0 nova_compute[254092]: 2025-11-25 16:48:18.366 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 435ae693-6844-49ae-977b-ec3aa89cfe70 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:48:18 compute-0 nova_compute[254092]: 2025-11-25 16:48:18.366 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 4677de7c-6625-4c98-a065-214341d8bfea actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:48:18 compute-0 nova_compute[254092]: 2025-11-25 16:48:18.368 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 16:48:18 compute-0 nova_compute[254092]: 2025-11-25 16:48:18.368 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 16:48:18 compute-0 nova_compute[254092]: 2025-11-25 16:48:18.440 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:18 compute-0 ceph-mon[74985]: pgmap v1901: 321 pgs: 321 active+clean; 290 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 12 KiB/s wr, 213 op/s
Nov 25 16:48:18 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3792957288' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:48:18 compute-0 nova_compute[254092]: 2025-11-25 16:48:18.614 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:18 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:48:18 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3567587403' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:48:18 compute-0 nova_compute[254092]: 2025-11-25 16:48:18.865 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:18 compute-0 nova_compute[254092]: 2025-11-25 16:48:18.871 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:48:18 compute-0 nova_compute[254092]: 2025-11-25 16:48:18.891 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:48:18 compute-0 nova_compute[254092]: 2025-11-25 16:48:18.908 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 16:48:18 compute-0 nova_compute[254092]: 2025-11-25 16:48:18.908 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:19 compute-0 nova_compute[254092]: 2025-11-25 16:48:19.128 254096 DEBUG nova.compute.manager [req-fe02c397-3e72-4f63-936f-d76b1394c89c req-73a76b97-fe16-4b2e-9e15-1354a7d5ccc5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Received event network-changed-cce0c8fe-e83f-4422-aeb3-8b1e6bafa462 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:48:19 compute-0 nova_compute[254092]: 2025-11-25 16:48:19.129 254096 DEBUG nova.compute.manager [req-fe02c397-3e72-4f63-936f-d76b1394c89c req-73a76b97-fe16-4b2e-9e15-1354a7d5ccc5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Refreshing instance network info cache due to event network-changed-cce0c8fe-e83f-4422-aeb3-8b1e6bafa462. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:48:19 compute-0 nova_compute[254092]: 2025-11-25 16:48:19.130 254096 DEBUG oslo_concurrency.lockutils [req-fe02c397-3e72-4f63-936f-d76b1394c89c req-73a76b97-fe16-4b2e-9e15-1354a7d5ccc5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-4677de7c-6625-4c98-a065-214341d8bfea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:48:19 compute-0 nova_compute[254092]: 2025-11-25 16:48:19.283 254096 DEBUG nova.network.neutron [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Updating instance_info_cache with network_info: [{"id": "cce0c8fe-e83f-4422-aeb3-8b1e6bafa462", "address": "fa:16:3e:87:86:24", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcce0c8fe-e8", "ovs_interfaceid": "cce0c8fe-e83f-4422-aeb3-8b1e6bafa462", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:48:19 compute-0 nova_compute[254092]: 2025-11-25 16:48:19.298 254096 DEBUG oslo_concurrency.lockutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Releasing lock "refresh_cache-4677de7c-6625-4c98-a065-214341d8bfea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:48:19 compute-0 nova_compute[254092]: 2025-11-25 16:48:19.298 254096 DEBUG nova.compute.manager [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Instance network_info: |[{"id": "cce0c8fe-e83f-4422-aeb3-8b1e6bafa462", "address": "fa:16:3e:87:86:24", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcce0c8fe-e8", "ovs_interfaceid": "cce0c8fe-e83f-4422-aeb3-8b1e6bafa462", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:48:19 compute-0 nova_compute[254092]: 2025-11-25 16:48:19.299 254096 DEBUG oslo_concurrency.lockutils [req-fe02c397-3e72-4f63-936f-d76b1394c89c req-73a76b97-fe16-4b2e-9e15-1354a7d5ccc5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-4677de7c-6625-4c98-a065-214341d8bfea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:48:19 compute-0 nova_compute[254092]: 2025-11-25 16:48:19.299 254096 DEBUG nova.network.neutron [req-fe02c397-3e72-4f63-936f-d76b1394c89c req-73a76b97-fe16-4b2e-9e15-1354a7d5ccc5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Refreshing network info cache for port cce0c8fe-e83f-4422-aeb3-8b1e6bafa462 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:48:19 compute-0 nova_compute[254092]: 2025-11-25 16:48:19.304 254096 DEBUG nova.virt.libvirt.driver [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Start _get_guest_xml network_info=[{"id": "cce0c8fe-e83f-4422-aeb3-8b1e6bafa462", "address": "fa:16:3e:87:86:24", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcce0c8fe-e8", "ovs_interfaceid": "cce0c8fe-e83f-4422-aeb3-8b1e6bafa462", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:48:19 compute-0 nova_compute[254092]: 2025-11-25 16:48:19.352 254096 WARNING nova.virt.libvirt.driver [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:48:19 compute-0 nova_compute[254092]: 2025-11-25 16:48:19.357 254096 DEBUG nova.virt.libvirt.host [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:48:19 compute-0 nova_compute[254092]: 2025-11-25 16:48:19.357 254096 DEBUG nova.virt.libvirt.host [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:48:19 compute-0 nova_compute[254092]: 2025-11-25 16:48:19.360 254096 DEBUG nova.virt.libvirt.host [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:48:19 compute-0 nova_compute[254092]: 2025-11-25 16:48:19.360 254096 DEBUG nova.virt.libvirt.host [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:48:19 compute-0 nova_compute[254092]: 2025-11-25 16:48:19.361 254096 DEBUG nova.virt.libvirt.driver [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:48:19 compute-0 nova_compute[254092]: 2025-11-25 16:48:19.361 254096 DEBUG nova.virt.hardware [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:48:19 compute-0 nova_compute[254092]: 2025-11-25 16:48:19.361 254096 DEBUG nova.virt.hardware [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:48:19 compute-0 nova_compute[254092]: 2025-11-25 16:48:19.361 254096 DEBUG nova.virt.hardware [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:48:19 compute-0 nova_compute[254092]: 2025-11-25 16:48:19.362 254096 DEBUG nova.virt.hardware [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:48:19 compute-0 nova_compute[254092]: 2025-11-25 16:48:19.362 254096 DEBUG nova.virt.hardware [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:48:19 compute-0 nova_compute[254092]: 2025-11-25 16:48:19.362 254096 DEBUG nova.virt.hardware [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:48:19 compute-0 nova_compute[254092]: 2025-11-25 16:48:19.362 254096 DEBUG nova.virt.hardware [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:48:19 compute-0 nova_compute[254092]: 2025-11-25 16:48:19.362 254096 DEBUG nova.virt.hardware [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:48:19 compute-0 nova_compute[254092]: 2025-11-25 16:48:19.363 254096 DEBUG nova.virt.hardware [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:48:19 compute-0 nova_compute[254092]: 2025-11-25 16:48:19.363 254096 DEBUG nova.virt.hardware [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:48:19 compute-0 nova_compute[254092]: 2025-11-25 16:48:19.363 254096 DEBUG nova.virt.hardware [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:48:19 compute-0 nova_compute[254092]: 2025-11-25 16:48:19.366 254096 DEBUG oslo_concurrency.processutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:19 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3567587403' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:48:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1902: 321 pgs: 321 active+clean; 290 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 11 KiB/s wr, 195 op/s
Nov 25 16:48:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:48:19 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3923795873' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:48:19 compute-0 nova_compute[254092]: 2025-11-25 16:48:19.804 254096 DEBUG oslo_concurrency.processutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:19 compute-0 nova_compute[254092]: 2025-11-25 16:48:19.827 254096 DEBUG nova.storage.rbd_utils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 4677de7c-6625-4c98-a065-214341d8bfea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:19 compute-0 nova_compute[254092]: 2025-11-25 16:48:19.831 254096 DEBUG oslo_concurrency.processutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:19 compute-0 nova_compute[254092]: 2025-11-25 16:48:19.909 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:48:19 compute-0 nova_compute[254092]: 2025-11-25 16:48:19.910 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 16:48:19 compute-0 nova_compute[254092]: 2025-11-25 16:48:19.911 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 16:48:19 compute-0 nova_compute[254092]: 2025-11-25 16:48:19.930 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 25 16:48:20 compute-0 nova_compute[254092]: 2025-11-25 16:48:20.114 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-8b20d119-17cb-4742-9223-90e5020f93a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:48:20 compute-0 nova_compute[254092]: 2025-11-25 16:48:20.114 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-8b20d119-17cb-4742-9223-90e5020f93a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:48:20 compute-0 nova_compute[254092]: 2025-11-25 16:48:20.114 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 16:48:20 compute-0 nova_compute[254092]: 2025-11-25 16:48:20.114 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8b20d119-17cb-4742-9223-90e5020f93a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:48:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:48:20 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1417193328' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:48:20 compute-0 nova_compute[254092]: 2025-11-25 16:48:20.293 254096 DEBUG oslo_concurrency.processutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:20 compute-0 nova_compute[254092]: 2025-11-25 16:48:20.294 254096 DEBUG nova.virt.libvirt.vif [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:48:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1206046660',display_name='tempest-ServersTestJSON-server-1206046660',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1206046660',id=95,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56790f54314e4087abb5da8030b17666',ramdisk_id='',reservation_id='r-g8d6ljpe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1089675869',owner_user_name='tempest-ServersTestJSON-1089675869-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:48:15Z,user_data=None,user_id='7a137feb008c49e092cbb94106526835',uuid=4677de7c-6625-4c98-a065-214341d8bfea,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cce0c8fe-e83f-4422-aeb3-8b1e6bafa462", "address": "fa:16:3e:87:86:24", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcce0c8fe-e8", "ovs_interfaceid": "cce0c8fe-e83f-4422-aeb3-8b1e6bafa462", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:48:20 compute-0 nova_compute[254092]: 2025-11-25 16:48:20.295 254096 DEBUG nova.network.os_vif_util [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converting VIF {"id": "cce0c8fe-e83f-4422-aeb3-8b1e6bafa462", "address": "fa:16:3e:87:86:24", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcce0c8fe-e8", "ovs_interfaceid": "cce0c8fe-e83f-4422-aeb3-8b1e6bafa462", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:48:20 compute-0 nova_compute[254092]: 2025-11-25 16:48:20.295 254096 DEBUG nova.network.os_vif_util [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:86:24,bridge_name='br-int',has_traffic_filtering=True,id=cce0c8fe-e83f-4422-aeb3-8b1e6bafa462,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcce0c8fe-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:48:20 compute-0 nova_compute[254092]: 2025-11-25 16:48:20.297 254096 DEBUG nova.objects.instance [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4677de7c-6625-4c98-a065-214341d8bfea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:48:20 compute-0 nova_compute[254092]: 2025-11-25 16:48:20.309 254096 DEBUG nova.virt.libvirt.driver [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:48:20 compute-0 nova_compute[254092]:   <uuid>4677de7c-6625-4c98-a065-214341d8bfea</uuid>
Nov 25 16:48:20 compute-0 nova_compute[254092]:   <name>instance-0000005f</name>
Nov 25 16:48:20 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:48:20 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:48:20 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:48:20 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:       <nova:name>tempest-ServersTestJSON-server-1206046660</nova:name>
Nov 25 16:48:20 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:48:19</nova:creationTime>
Nov 25 16:48:20 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:48:20 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:48:20 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:48:20 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:48:20 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:48:20 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:48:20 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:48:20 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:48:20 compute-0 nova_compute[254092]:         <nova:user uuid="7a137feb008c49e092cbb94106526835">tempest-ServersTestJSON-1089675869-project-member</nova:user>
Nov 25 16:48:20 compute-0 nova_compute[254092]:         <nova:project uuid="56790f54314e4087abb5da8030b17666">tempest-ServersTestJSON-1089675869</nova:project>
Nov 25 16:48:20 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:48:20 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:48:20 compute-0 nova_compute[254092]:         <nova:port uuid="cce0c8fe-e83f-4422-aeb3-8b1e6bafa462">
Nov 25 16:48:20 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:48:20 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:48:20 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:48:20 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:48:20 compute-0 nova_compute[254092]:     <system>
Nov 25 16:48:20 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:48:20 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:48:20 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:48:20 compute-0 nova_compute[254092]:       <entry name="serial">4677de7c-6625-4c98-a065-214341d8bfea</entry>
Nov 25 16:48:20 compute-0 nova_compute[254092]:       <entry name="uuid">4677de7c-6625-4c98-a065-214341d8bfea</entry>
Nov 25 16:48:20 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     </system>
Nov 25 16:48:20 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:48:20 compute-0 nova_compute[254092]:   <os>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:   </os>
Nov 25 16:48:20 compute-0 nova_compute[254092]:   <features>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:   </features>
Nov 25 16:48:20 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:48:20 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:48:20 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:48:20 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:48:20 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:48:20 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/4677de7c-6625-4c98-a065-214341d8bfea_disk">
Nov 25 16:48:20 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:       </source>
Nov 25 16:48:20 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:48:20 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:48:20 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:48:20 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/4677de7c-6625-4c98-a065-214341d8bfea_disk.config">
Nov 25 16:48:20 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:       </source>
Nov 25 16:48:20 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:48:20 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:48:20 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:48:20 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:87:86:24"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:       <target dev="tapcce0c8fe-e8"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:48:20 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/4677de7c-6625-4c98-a065-214341d8bfea/console.log" append="off"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     <video>
Nov 25 16:48:20 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     </video>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:48:20 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:48:20 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:48:20 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:48:20 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:48:20 compute-0 nova_compute[254092]: </domain>
Nov 25 16:48:20 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:48:20 compute-0 nova_compute[254092]: 2025-11-25 16:48:20.310 254096 DEBUG nova.compute.manager [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Preparing to wait for external event network-vif-plugged-cce0c8fe-e83f-4422-aeb3-8b1e6bafa462 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:48:20 compute-0 nova_compute[254092]: 2025-11-25 16:48:20.311 254096 DEBUG oslo_concurrency.lockutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "4677de7c-6625-4c98-a065-214341d8bfea-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:20 compute-0 nova_compute[254092]: 2025-11-25 16:48:20.311 254096 DEBUG oslo_concurrency.lockutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "4677de7c-6625-4c98-a065-214341d8bfea-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:20 compute-0 nova_compute[254092]: 2025-11-25 16:48:20.311 254096 DEBUG oslo_concurrency.lockutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "4677de7c-6625-4c98-a065-214341d8bfea-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:20 compute-0 nova_compute[254092]: 2025-11-25 16:48:20.312 254096 DEBUG nova.virt.libvirt.vif [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:48:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1206046660',display_name='tempest-ServersTestJSON-server-1206046660',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1206046660',id=95,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56790f54314e4087abb5da8030b17666',ramdisk_id='',reservation_id='r-g8d6ljpe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1089675869',owner_user_name='tempest-ServersTestJSON-1089675869-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:48:15Z,user_data=None,user_id='7a137feb008c49e092cbb94106526835',uuid=4677de7c-6625-4c98-a065-214341d8bfea,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cce0c8fe-e83f-4422-aeb3-8b1e6bafa462", "address": "fa:16:3e:87:86:24", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcce0c8fe-e8", "ovs_interfaceid": "cce0c8fe-e83f-4422-aeb3-8b1e6bafa462", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:48:20 compute-0 nova_compute[254092]: 2025-11-25 16:48:20.314 254096 DEBUG nova.network.os_vif_util [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converting VIF {"id": "cce0c8fe-e83f-4422-aeb3-8b1e6bafa462", "address": "fa:16:3e:87:86:24", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcce0c8fe-e8", "ovs_interfaceid": "cce0c8fe-e83f-4422-aeb3-8b1e6bafa462", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:48:20 compute-0 nova_compute[254092]: 2025-11-25 16:48:20.315 254096 DEBUG nova.network.os_vif_util [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:86:24,bridge_name='br-int',has_traffic_filtering=True,id=cce0c8fe-e83f-4422-aeb3-8b1e6bafa462,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcce0c8fe-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:48:20 compute-0 nova_compute[254092]: 2025-11-25 16:48:20.315 254096 DEBUG os_vif [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:86:24,bridge_name='br-int',has_traffic_filtering=True,id=cce0c8fe-e83f-4422-aeb3-8b1e6bafa462,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcce0c8fe-e8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:48:20 compute-0 nova_compute[254092]: 2025-11-25 16:48:20.316 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:20 compute-0 nova_compute[254092]: 2025-11-25 16:48:20.316 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:48:20 compute-0 nova_compute[254092]: 2025-11-25 16:48:20.317 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:48:20 compute-0 nova_compute[254092]: 2025-11-25 16:48:20.320 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:20 compute-0 nova_compute[254092]: 2025-11-25 16:48:20.320 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcce0c8fe-e8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:48:20 compute-0 nova_compute[254092]: 2025-11-25 16:48:20.321 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcce0c8fe-e8, col_values=(('external_ids', {'iface-id': 'cce0c8fe-e83f-4422-aeb3-8b1e6bafa462', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:87:86:24', 'vm-uuid': '4677de7c-6625-4c98-a065-214341d8bfea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:48:20 compute-0 NetworkManager[48891]: <info>  [1764089300.3719] manager: (tapcce0c8fe-e8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/381)
Nov 25 16:48:20 compute-0 nova_compute[254092]: 2025-11-25 16:48:20.374 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:20 compute-0 nova_compute[254092]: 2025-11-25 16:48:20.376 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:48:20 compute-0 nova_compute[254092]: 2025-11-25 16:48:20.377 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:20 compute-0 nova_compute[254092]: 2025-11-25 16:48:20.380 254096 INFO os_vif [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:86:24,bridge_name='br-int',has_traffic_filtering=True,id=cce0c8fe-e83f-4422-aeb3-8b1e6bafa462,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcce0c8fe-e8')
Nov 25 16:48:20 compute-0 nova_compute[254092]: 2025-11-25 16:48:20.426 254096 DEBUG nova.virt.libvirt.driver [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:48:20 compute-0 nova_compute[254092]: 2025-11-25 16:48:20.427 254096 DEBUG nova.virt.libvirt.driver [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:48:20 compute-0 nova_compute[254092]: 2025-11-25 16:48:20.427 254096 DEBUG nova.virt.libvirt.driver [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] No VIF found with MAC fa:16:3e:87:86:24, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:48:20 compute-0 nova_compute[254092]: 2025-11-25 16:48:20.428 254096 INFO nova.virt.libvirt.driver [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Using config drive
Nov 25 16:48:20 compute-0 ceph-mon[74985]: pgmap v1902: 321 pgs: 321 active+clean; 290 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 11 KiB/s wr, 195 op/s
Nov 25 16:48:20 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3923795873' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:48:20 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1417193328' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:48:20 compute-0 nova_compute[254092]: 2025-11-25 16:48:20.461 254096 DEBUG nova.storage.rbd_utils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 4677de7c-6625-4c98-a065-214341d8bfea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:48:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e242 do_prune osdmap full prune enabled
Nov 25 16:48:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e243 e243: 3 total, 3 up, 3 in
Nov 25 16:48:21 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e243: 3 total, 3 up, 3 in
Nov 25 16:48:21 compute-0 nova_compute[254092]: 2025-11-25 16:48:21.382 254096 INFO nova.virt.libvirt.driver [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Creating config drive at /var/lib/nova/instances/4677de7c-6625-4c98-a065-214341d8bfea/disk.config
Nov 25 16:48:21 compute-0 nova_compute[254092]: 2025-11-25 16:48:21.387 254096 DEBUG oslo_concurrency.processutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4677de7c-6625-4c98-a065-214341d8bfea/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_vmfqscv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:21 compute-0 nova_compute[254092]: 2025-11-25 16:48:21.482 254096 DEBUG oslo_concurrency.lockutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "2e848add-8417-4307-8b01-f0d1c1a76cea" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:21 compute-0 nova_compute[254092]: 2025-11-25 16:48:21.483 254096 DEBUG oslo_concurrency.lockutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:21 compute-0 nova_compute[254092]: 2025-11-25 16:48:21.502 254096 DEBUG nova.compute.manager [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:48:21 compute-0 nova_compute[254092]: 2025-11-25 16:48:21.549 254096 DEBUG oslo_concurrency.processutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4677de7c-6625-4c98-a065-214341d8bfea/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_vmfqscv" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:21 compute-0 nova_compute[254092]: 2025-11-25 16:48:21.583 254096 DEBUG nova.storage.rbd_utils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] rbd image 4677de7c-6625-4c98-a065-214341d8bfea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:21 compute-0 nova_compute[254092]: 2025-11-25 16:48:21.588 254096 DEBUG oslo_concurrency.processutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4677de7c-6625-4c98-a065-214341d8bfea/disk.config 4677de7c-6625-4c98-a065-214341d8bfea_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:21 compute-0 nova_compute[254092]: 2025-11-25 16:48:21.643 254096 DEBUG oslo_concurrency.lockutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:21 compute-0 nova_compute[254092]: 2025-11-25 16:48:21.644 254096 DEBUG oslo_concurrency.lockutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:21 compute-0 nova_compute[254092]: 2025-11-25 16:48:21.651 254096 DEBUG nova.virt.hardware [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:48:21 compute-0 nova_compute[254092]: 2025-11-25 16:48:21.652 254096 INFO nova.compute.claims [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:48:21 compute-0 nova_compute[254092]: 2025-11-25 16:48:21.738 254096 DEBUG oslo_concurrency.processutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4677de7c-6625-4c98-a065-214341d8bfea/disk.config 4677de7c-6625-4c98-a065-214341d8bfea_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:21 compute-0 nova_compute[254092]: 2025-11-25 16:48:21.739 254096 INFO nova.virt.libvirt.driver [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Deleting local config drive /var/lib/nova/instances/4677de7c-6625-4c98-a065-214341d8bfea/disk.config because it was imported into RBD.
Nov 25 16:48:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1904: 321 pgs: 321 active+clean; 293 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 96 KiB/s rd, 3.3 MiB/s wr, 131 op/s
Nov 25 16:48:21 compute-0 nova_compute[254092]: 2025-11-25 16:48:21.790 254096 DEBUG oslo_concurrency.processutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:21 compute-0 kernel: tapcce0c8fe-e8: entered promiscuous mode
Nov 25 16:48:21 compute-0 NetworkManager[48891]: <info>  [1764089301.8150] manager: (tapcce0c8fe-e8): new Tun device (/org/freedesktop/NetworkManager/Devices/382)
Nov 25 16:48:21 compute-0 ovn_controller[153477]: 2025-11-25T16:48:21Z|00904|binding|INFO|Claiming lport cce0c8fe-e83f-4422-aeb3-8b1e6bafa462 for this chassis.
Nov 25 16:48:21 compute-0 ovn_controller[153477]: 2025-11-25T16:48:21Z|00905|binding|INFO|cce0c8fe-e83f-4422-aeb3-8b1e6bafa462: Claiming fa:16:3e:87:86:24 10.100.0.12
Nov 25 16:48:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:21.825 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:86:24 10.100.0.12'], port_security=['fa:16:3e:87:86:24 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '4677de7c-6625-4c98-a065-214341d8bfea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1431a6bc-93c8-4db5-a148-b2950f02c941', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56790f54314e4087abb5da8030b17666', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6f3e9934-51eb-4bf8-9114-93abd745996a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f25b0746-2c82-4a0a-9e71-0116018fc740, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=cce0c8fe-e83f-4422-aeb3-8b1e6bafa462) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:48:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:21.827 163338 INFO neutron.agent.ovn.metadata.agent [-] Port cce0c8fe-e83f-4422-aeb3-8b1e6bafa462 in datapath 1431a6bc-93c8-4db5-a148-b2950f02c941 bound to our chassis
Nov 25 16:48:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:21.831 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1431a6bc-93c8-4db5-a148-b2950f02c941
Nov 25 16:48:21 compute-0 ovn_controller[153477]: 2025-11-25T16:48:21Z|00906|binding|INFO|Setting lport cce0c8fe-e83f-4422-aeb3-8b1e6bafa462 ovn-installed in OVS
Nov 25 16:48:21 compute-0 ovn_controller[153477]: 2025-11-25T16:48:21Z|00907|binding|INFO|Setting lport cce0c8fe-e83f-4422-aeb3-8b1e6bafa462 up in Southbound
Nov 25 16:48:21 compute-0 nova_compute[254092]: 2025-11-25 16:48:21.841 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:21 compute-0 systemd-udevd[349611]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:48:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:21.854 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[feb6c0f0-f0c4-4e4d-9759-5922fedcf653]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:21 compute-0 NetworkManager[48891]: <info>  [1764089301.8643] device (tapcce0c8fe-e8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:48:21 compute-0 systemd-machined[216343]: New machine qemu-116-instance-0000005f.
Nov 25 16:48:21 compute-0 NetworkManager[48891]: <info>  [1764089301.8657] device (tapcce0c8fe-e8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:48:21 compute-0 systemd[1]: Started Virtual Machine qemu-116-instance-0000005f.
Nov 25 16:48:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:21.891 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ac1c34d9-5445-48c1-8193-1e69706d3590]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:21.894 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[4563ac5c-c8ef-4ef4-9b46-368d027c7aec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:21.931 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[eef048af-126f-4f71-b265-94a34e7958b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:21.955 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4956e56e-e924-49f0-a415-cad306ac5f24]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1431a6bc-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:f8:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 25, 'rx_bytes': 700, 'tx_bytes': 1194, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 25, 'rx_bytes': 700, 'tx_bytes': 1194, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 238], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563972, 'reachable_time': 23815, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 349634, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:21.980 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[835cfa5e-210b-4c9b-accb-e90066b1cfc2]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1431a6bc-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563984, 'tstamp': 563984}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349645, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1431a6bc-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563987, 'tstamp': 563987}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349645, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:21.982 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1431a6bc-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:48:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:21.985 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1431a6bc-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:48:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:21.985 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:48:21 compute-0 nova_compute[254092]: 2025-11-25 16:48:21.985 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:21.985 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1431a6bc-90, col_values=(('external_ids', {'iface-id': '2b164d7c-ac7d-4965-a9d1-81827e4e9936'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:48:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:21.986 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:48:22 compute-0 nova_compute[254092]: 2025-11-25 16:48:22.085 254096 DEBUG nova.network.neutron [req-fe02c397-3e72-4f63-936f-d76b1394c89c req-73a76b97-fe16-4b2e-9e15-1354a7d5ccc5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Updated VIF entry in instance network info cache for port cce0c8fe-e83f-4422-aeb3-8b1e6bafa462. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:48:22 compute-0 nova_compute[254092]: 2025-11-25 16:48:22.086 254096 DEBUG nova.network.neutron [req-fe02c397-3e72-4f63-936f-d76b1394c89c req-73a76b97-fe16-4b2e-9e15-1354a7d5ccc5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Updating instance_info_cache with network_info: [{"id": "cce0c8fe-e83f-4422-aeb3-8b1e6bafa462", "address": "fa:16:3e:87:86:24", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcce0c8fe-e8", "ovs_interfaceid": "cce0c8fe-e83f-4422-aeb3-8b1e6bafa462", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:48:22 compute-0 nova_compute[254092]: 2025-11-25 16:48:22.107 254096 DEBUG oslo_concurrency.lockutils [req-fe02c397-3e72-4f63-936f-d76b1394c89c req-73a76b97-fe16-4b2e-9e15-1354a7d5ccc5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-4677de7c-6625-4c98-a065-214341d8bfea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:48:22 compute-0 ceph-mon[74985]: osdmap e243: 3 total, 3 up, 3 in
Nov 25 16:48:22 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:48:22 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2396387784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:48:22 compute-0 nova_compute[254092]: 2025-11-25 16:48:22.241 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089287.2068388, 7c679c82-4594-4519-a291-de41650ba66b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:48:22 compute-0 nova_compute[254092]: 2025-11-25 16:48:22.242 254096 INFO nova.compute.manager [-] [instance: 7c679c82-4594-4519-a291-de41650ba66b] VM Stopped (Lifecycle Event)
Nov 25 16:48:22 compute-0 nova_compute[254092]: 2025-11-25 16:48:22.248 254096 DEBUG oslo_concurrency.processutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:22 compute-0 nova_compute[254092]: 2025-11-25 16:48:22.254 254096 DEBUG nova.compute.provider_tree [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:48:22 compute-0 nova_compute[254092]: 2025-11-25 16:48:22.257 254096 DEBUG nova.compute.manager [None req-15bd64d7-1578-488c-abe3-8f1c470026e8 - - - - - -] [instance: 7c679c82-4594-4519-a291-de41650ba66b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:48:22 compute-0 nova_compute[254092]: 2025-11-25 16:48:22.266 254096 DEBUG nova.scheduler.client.report [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:48:22 compute-0 nova_compute[254092]: 2025-11-25 16:48:22.283 254096 DEBUG oslo_concurrency.lockutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:22 compute-0 nova_compute[254092]: 2025-11-25 16:48:22.283 254096 DEBUG nova.compute.manager [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:48:22 compute-0 nova_compute[254092]: 2025-11-25 16:48:22.312 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089302.3115609, 4677de7c-6625-4c98-a065-214341d8bfea => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:48:22 compute-0 nova_compute[254092]: 2025-11-25 16:48:22.312 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] VM Started (Lifecycle Event)
Nov 25 16:48:22 compute-0 nova_compute[254092]: 2025-11-25 16:48:22.334 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:48:22 compute-0 nova_compute[254092]: 2025-11-25 16:48:22.337 254096 DEBUG nova.compute.manager [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:48:22 compute-0 nova_compute[254092]: 2025-11-25 16:48:22.337 254096 DEBUG nova.network.neutron [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:48:22 compute-0 nova_compute[254092]: 2025-11-25 16:48:22.344 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089302.311709, 4677de7c-6625-4c98-a065-214341d8bfea => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:48:22 compute-0 nova_compute[254092]: 2025-11-25 16:48:22.344 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] VM Paused (Lifecycle Event)
Nov 25 16:48:22 compute-0 nova_compute[254092]: 2025-11-25 16:48:22.366 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:48:22 compute-0 nova_compute[254092]: 2025-11-25 16:48:22.367 254096 INFO nova.virt.libvirt.driver [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:48:22 compute-0 nova_compute[254092]: 2025-11-25 16:48:22.371 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:48:22 compute-0 nova_compute[254092]: 2025-11-25 16:48:22.387 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:48:22 compute-0 nova_compute[254092]: 2025-11-25 16:48:22.387 254096 DEBUG nova.compute.manager [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:48:22 compute-0 nova_compute[254092]: 2025-11-25 16:48:22.479 254096 DEBUG nova.compute.manager [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:48:22 compute-0 nova_compute[254092]: 2025-11-25 16:48:22.481 254096 DEBUG nova.virt.libvirt.driver [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:48:22 compute-0 nova_compute[254092]: 2025-11-25 16:48:22.481 254096 INFO nova.virt.libvirt.driver [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Creating image(s)
Nov 25 16:48:22 compute-0 nova_compute[254092]: 2025-11-25 16:48:22.502 254096 DEBUG nova.storage.rbd_utils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 2e848add-8417-4307-8b01-f0d1c1a76cea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:22 compute-0 nova_compute[254092]: 2025-11-25 16:48:22.524 254096 DEBUG nova.storage.rbd_utils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 2e848add-8417-4307-8b01-f0d1c1a76cea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:22 compute-0 nova_compute[254092]: 2025-11-25 16:48:22.544 254096 DEBUG nova.storage.rbd_utils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 2e848add-8417-4307-8b01-f0d1c1a76cea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:22 compute-0 nova_compute[254092]: 2025-11-25 16:48:22.548 254096 DEBUG oslo_concurrency.processutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:22 compute-0 nova_compute[254092]: 2025-11-25 16:48:22.600 254096 DEBUG nova.policy [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3b30410358c448a693a9dc40ee6aafb0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:48:22 compute-0 nova_compute[254092]: 2025-11-25 16:48:22.603 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Updating instance_info_cache with network_info: [{"id": "419102e4-bcb4-496b-b45c-fba9f5525746", "address": "fa:16:3e:08:0f:a2", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap419102e4-bc", "ovs_interfaceid": "419102e4-bcb4-496b-b45c-fba9f5525746", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:48:22 compute-0 nova_compute[254092]: 2025-11-25 16:48:22.622 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-8b20d119-17cb-4742-9223-90e5020f93a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:48:22 compute-0 nova_compute[254092]: 2025-11-25 16:48:22.623 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 16:48:22 compute-0 nova_compute[254092]: 2025-11-25 16:48:22.623 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:48:22 compute-0 nova_compute[254092]: 2025-11-25 16:48:22.640 254096 DEBUG oslo_concurrency.processutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:22 compute-0 nova_compute[254092]: 2025-11-25 16:48:22.641 254096 DEBUG oslo_concurrency.lockutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:22 compute-0 nova_compute[254092]: 2025-11-25 16:48:22.641 254096 DEBUG oslo_concurrency.lockutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:22 compute-0 nova_compute[254092]: 2025-11-25 16:48:22.641 254096 DEBUG oslo_concurrency.lockutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:22 compute-0 nova_compute[254092]: 2025-11-25 16:48:22.664 254096 DEBUG nova.storage.rbd_utils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 2e848add-8417-4307-8b01-f0d1c1a76cea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:22 compute-0 nova_compute[254092]: 2025-11-25 16:48:22.669 254096 DEBUG oslo_concurrency.processutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 2e848add-8417-4307-8b01-f0d1c1a76cea_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:22 compute-0 nova_compute[254092]: 2025-11-25 16:48:22.969 254096 DEBUG oslo_concurrency.processutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 2e848add-8417-4307-8b01-f0d1c1a76cea_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.300s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:23 compute-0 nova_compute[254092]: 2025-11-25 16:48:23.053 254096 DEBUG nova.storage.rbd_utils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] resizing rbd image 2e848add-8417-4307-8b01-f0d1c1a76cea_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:48:23 compute-0 nova_compute[254092]: 2025-11-25 16:48:23.113 254096 DEBUG nova.virt.libvirt.driver [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 16:48:23 compute-0 nova_compute[254092]: 2025-11-25 16:48:23.191 254096 DEBUG nova.objects.instance [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'migration_context' on Instance uuid 2e848add-8417-4307-8b01-f0d1c1a76cea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:48:23 compute-0 ceph-mon[74985]: pgmap v1904: 321 pgs: 321 active+clean; 293 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 96 KiB/s rd, 3.3 MiB/s wr, 131 op/s
Nov 25 16:48:23 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2396387784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:48:23 compute-0 nova_compute[254092]: 2025-11-25 16:48:23.225 254096 DEBUG nova.virt.libvirt.driver [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:48:23 compute-0 nova_compute[254092]: 2025-11-25 16:48:23.226 254096 DEBUG nova.virt.libvirt.driver [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Ensure instance console log exists: /var/lib/nova/instances/2e848add-8417-4307-8b01-f0d1c1a76cea/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:48:23 compute-0 nova_compute[254092]: 2025-11-25 16:48:23.226 254096 DEBUG oslo_concurrency.lockutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:23 compute-0 nova_compute[254092]: 2025-11-25 16:48:23.226 254096 DEBUG oslo_concurrency.lockutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:23 compute-0 nova_compute[254092]: 2025-11-25 16:48:23.227 254096 DEBUG oslo_concurrency.lockutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:23 compute-0 nova_compute[254092]: 2025-11-25 16:48:23.441 254096 DEBUG nova.compute.manager [req-3f7c9b26-79ce-4da5-88a8-1ca9834b3b15 req-666cacab-a26f-44d7-83d3-ace22f179d88 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Received event network-vif-plugged-cce0c8fe-e83f-4422-aeb3-8b1e6bafa462 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:48:23 compute-0 nova_compute[254092]: 2025-11-25 16:48:23.442 254096 DEBUG oslo_concurrency.lockutils [req-3f7c9b26-79ce-4da5-88a8-1ca9834b3b15 req-666cacab-a26f-44d7-83d3-ace22f179d88 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "4677de7c-6625-4c98-a065-214341d8bfea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:23 compute-0 nova_compute[254092]: 2025-11-25 16:48:23.442 254096 DEBUG oslo_concurrency.lockutils [req-3f7c9b26-79ce-4da5-88a8-1ca9834b3b15 req-666cacab-a26f-44d7-83d3-ace22f179d88 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4677de7c-6625-4c98-a065-214341d8bfea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:23 compute-0 nova_compute[254092]: 2025-11-25 16:48:23.443 254096 DEBUG oslo_concurrency.lockutils [req-3f7c9b26-79ce-4da5-88a8-1ca9834b3b15 req-666cacab-a26f-44d7-83d3-ace22f179d88 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4677de7c-6625-4c98-a065-214341d8bfea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:23 compute-0 nova_compute[254092]: 2025-11-25 16:48:23.443 254096 DEBUG nova.compute.manager [req-3f7c9b26-79ce-4da5-88a8-1ca9834b3b15 req-666cacab-a26f-44d7-83d3-ace22f179d88 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Processing event network-vif-plugged-cce0c8fe-e83f-4422-aeb3-8b1e6bafa462 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:48:23 compute-0 nova_compute[254092]: 2025-11-25 16:48:23.446 254096 DEBUG nova.compute.manager [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:48:23 compute-0 nova_compute[254092]: 2025-11-25 16:48:23.449 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089303.4490504, 4677de7c-6625-4c98-a065-214341d8bfea => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:48:23 compute-0 nova_compute[254092]: 2025-11-25 16:48:23.449 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] VM Resumed (Lifecycle Event)
Nov 25 16:48:23 compute-0 nova_compute[254092]: 2025-11-25 16:48:23.451 254096 DEBUG nova.virt.libvirt.driver [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:48:23 compute-0 nova_compute[254092]: 2025-11-25 16:48:23.455 254096 INFO nova.virt.libvirt.driver [-] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Instance spawned successfully.
Nov 25 16:48:23 compute-0 nova_compute[254092]: 2025-11-25 16:48:23.455 254096 DEBUG nova.virt.libvirt.driver [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:48:23 compute-0 nova_compute[254092]: 2025-11-25 16:48:23.480 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:48:23 compute-0 nova_compute[254092]: 2025-11-25 16:48:23.487 254096 DEBUG nova.virt.libvirt.driver [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:48:23 compute-0 nova_compute[254092]: 2025-11-25 16:48:23.487 254096 DEBUG nova.virt.libvirt.driver [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:48:23 compute-0 nova_compute[254092]: 2025-11-25 16:48:23.488 254096 DEBUG nova.virt.libvirt.driver [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:48:23 compute-0 nova_compute[254092]: 2025-11-25 16:48:23.488 254096 DEBUG nova.virt.libvirt.driver [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:48:23 compute-0 nova_compute[254092]: 2025-11-25 16:48:23.489 254096 DEBUG nova.virt.libvirt.driver [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:48:23 compute-0 nova_compute[254092]: 2025-11-25 16:48:23.489 254096 DEBUG nova.virt.libvirt.driver [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:48:23 compute-0 nova_compute[254092]: 2025-11-25 16:48:23.493 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:48:23 compute-0 nova_compute[254092]: 2025-11-25 16:48:23.534 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:48:23 compute-0 nova_compute[254092]: 2025-11-25 16:48:23.603 254096 INFO nova.compute.manager [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Took 7.92 seconds to spawn the instance on the hypervisor.
Nov 25 16:48:23 compute-0 nova_compute[254092]: 2025-11-25 16:48:23.603 254096 DEBUG nova.compute.manager [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:48:23 compute-0 nova_compute[254092]: 2025-11-25 16:48:23.616 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1905: 321 pgs: 321 active+clean; 293 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 78 KiB/s rd, 2.7 MiB/s wr, 106 op/s
Nov 25 16:48:23 compute-0 nova_compute[254092]: 2025-11-25 16:48:23.867 254096 INFO nova.compute.manager [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Took 9.08 seconds to build instance.
Nov 25 16:48:23 compute-0 nova_compute[254092]: 2025-11-25 16:48:23.900 254096 DEBUG oslo_concurrency.lockutils [None req-5d79e9ef-8160-4d73-8858-aceea8705915 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "4677de7c-6625-4c98-a065-214341d8bfea" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.388s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:24 compute-0 nova_compute[254092]: 2025-11-25 16:48:24.204 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:48:24 compute-0 ceph-mon[74985]: pgmap v1905: 321 pgs: 321 active+clean; 293 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 78 KiB/s rd, 2.7 MiB/s wr, 106 op/s
Nov 25 16:48:24 compute-0 nova_compute[254092]: 2025-11-25 16:48:24.284 254096 DEBUG nova.network.neutron [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Successfully created port: 5a3f34de-d3de-439b-ac8f-baabc77892b4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:48:24 compute-0 nova_compute[254092]: 2025-11-25 16:48:24.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:48:24 compute-0 nova_compute[254092]: 2025-11-25 16:48:24.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 16:48:24 compute-0 nova_compute[254092]: 2025-11-25 16:48:24.517 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:48:24 compute-0 nova_compute[254092]: 2025-11-25 16:48:24.782 254096 DEBUG oslo_concurrency.lockutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "e0098976-026f-43d8-b686-b2658f9aded9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:24 compute-0 nova_compute[254092]: 2025-11-25 16:48:24.783 254096 DEBUG oslo_concurrency.lockutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "e0098976-026f-43d8-b686-b2658f9aded9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:24 compute-0 nova_compute[254092]: 2025-11-25 16:48:24.800 254096 DEBUG nova.compute.manager [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:48:24 compute-0 nova_compute[254092]: 2025-11-25 16:48:24.853 254096 DEBUG oslo_concurrency.lockutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:24 compute-0 nova_compute[254092]: 2025-11-25 16:48:24.854 254096 DEBUG oslo_concurrency.lockutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:24 compute-0 nova_compute[254092]: 2025-11-25 16:48:24.860 254096 DEBUG nova.virt.hardware [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:48:24 compute-0 nova_compute[254092]: 2025-11-25 16:48:24.860 254096 INFO nova.compute.claims [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:48:25 compute-0 nova_compute[254092]: 2025-11-25 16:48:25.060 254096 DEBUG oslo_concurrency.processutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:25 compute-0 nova_compute[254092]: 2025-11-25 16:48:25.372 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:25 compute-0 nova_compute[254092]: 2025-11-25 16:48:25.519 254096 DEBUG nova.compute.manager [req-af42a9b5-5e0c-4137-843b-02c20a4a20c8 req-1480e710-aca1-4187-a91b-f611504de4ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Received event network-vif-plugged-cce0c8fe-e83f-4422-aeb3-8b1e6bafa462 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:48:25 compute-0 nova_compute[254092]: 2025-11-25 16:48:25.519 254096 DEBUG oslo_concurrency.lockutils [req-af42a9b5-5e0c-4137-843b-02c20a4a20c8 req-1480e710-aca1-4187-a91b-f611504de4ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "4677de7c-6625-4c98-a065-214341d8bfea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:25 compute-0 nova_compute[254092]: 2025-11-25 16:48:25.520 254096 DEBUG oslo_concurrency.lockutils [req-af42a9b5-5e0c-4137-843b-02c20a4a20c8 req-1480e710-aca1-4187-a91b-f611504de4ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4677de7c-6625-4c98-a065-214341d8bfea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:25 compute-0 nova_compute[254092]: 2025-11-25 16:48:25.520 254096 DEBUG oslo_concurrency.lockutils [req-af42a9b5-5e0c-4137-843b-02c20a4a20c8 req-1480e710-aca1-4187-a91b-f611504de4ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4677de7c-6625-4c98-a065-214341d8bfea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:25 compute-0 nova_compute[254092]: 2025-11-25 16:48:25.520 254096 DEBUG nova.compute.manager [req-af42a9b5-5e0c-4137-843b-02c20a4a20c8 req-1480e710-aca1-4187-a91b-f611504de4ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] No waiting events found dispatching network-vif-plugged-cce0c8fe-e83f-4422-aeb3-8b1e6bafa462 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:48:25 compute-0 nova_compute[254092]: 2025-11-25 16:48:25.520 254096 WARNING nova.compute.manager [req-af42a9b5-5e0c-4137-843b-02c20a4a20c8 req-1480e710-aca1-4187-a91b-f611504de4ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Received unexpected event network-vif-plugged-cce0c8fe-e83f-4422-aeb3-8b1e6bafa462 for instance with vm_state active and task_state None.
Nov 25 16:48:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:48:25 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2799670337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:48:25 compute-0 kernel: tap431770e1-47 (unregistering): left promiscuous mode
Nov 25 16:48:25 compute-0 NetworkManager[48891]: <info>  [1764089305.5532] device (tap431770e1-47): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:48:25 compute-0 ovn_controller[153477]: 2025-11-25T16:48:25Z|00908|binding|INFO|Releasing lport 431770e1-476d-40b3-8477-419b69aa4fe9 from this chassis (sb_readonly=0)
Nov 25 16:48:25 compute-0 ovn_controller[153477]: 2025-11-25T16:48:25Z|00909|binding|INFO|Setting lport 431770e1-476d-40b3-8477-419b69aa4fe9 down in Southbound
Nov 25 16:48:25 compute-0 ovn_controller[153477]: 2025-11-25T16:48:25Z|00910|binding|INFO|Removing iface tap431770e1-47 ovn-installed in OVS
Nov 25 16:48:25 compute-0 nova_compute[254092]: 2025-11-25 16:48:25.560 254096 DEBUG oslo_concurrency.processutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:25 compute-0 nova_compute[254092]: 2025-11-25 16:48:25.562 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:25 compute-0 nova_compute[254092]: 2025-11-25 16:48:25.572 254096 DEBUG nova.compute.provider_tree [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:48:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:25.571 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e9:e7:de 10.100.0.8'], port_security=['fa:16:3e:e9:e7:de 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '435ae693-6844-49ae-977b-ec3aa89cfe70', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e7afc96-cd20-4255-ae42-f12adadff80d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '544c4f84ca494482aea8e55248fe4c62', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ea3857ca-f518-4e95-9458-20b7becb63fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1ffd0c38-5078-40a5-ba7b-d1f91d316b3e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=431770e1-476d-40b3-8477-419b69aa4fe9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:48:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:25.572 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 431770e1-476d-40b3-8477-419b69aa4fe9 in datapath 7e7afc96-cd20-4255-ae42-f12adadff80d unbound from our chassis
Nov 25 16:48:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:25.574 163338 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7e7afc96-cd20-4255-ae42-f12adadff80d or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 25 16:48:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:25.575 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1262ab7f-d941-4488-8b15-5c8841a4fecd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:25 compute-0 nova_compute[254092]: 2025-11-25 16:48:25.581 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:25 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2799670337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:48:25 compute-0 nova_compute[254092]: 2025-11-25 16:48:25.594 254096 DEBUG nova.scheduler.client.report [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:48:25 compute-0 nova_compute[254092]: 2025-11-25 16:48:25.620 254096 DEBUG oslo_concurrency.lockutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:25 compute-0 nova_compute[254092]: 2025-11-25 16:48:25.621 254096 DEBUG nova.compute.manager [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:48:25 compute-0 systemd[1]: machine-qemu\x2d115\x2dinstance\x2d0000005e.scope: Deactivated successfully.
Nov 25 16:48:25 compute-0 systemd[1]: machine-qemu\x2d115\x2dinstance\x2d0000005e.scope: Consumed 13.219s CPU time.
Nov 25 16:48:25 compute-0 systemd-machined[216343]: Machine qemu-115-instance-0000005e terminated.
Nov 25 16:48:25 compute-0 nova_compute[254092]: 2025-11-25 16:48:25.706 254096 DEBUG nova.compute.manager [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:48:25 compute-0 nova_compute[254092]: 2025-11-25 16:48:25.707 254096 DEBUG nova.network.neutron [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:48:25 compute-0 nova_compute[254092]: 2025-11-25 16:48:25.722 254096 INFO nova.virt.libvirt.driver [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:48:25 compute-0 nova_compute[254092]: 2025-11-25 16:48:25.742 254096 DEBUG nova.compute.manager [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:48:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1906: 321 pgs: 321 active+clean; 360 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 6.7 MiB/s wr, 203 op/s
Nov 25 16:48:25 compute-0 nova_compute[254092]: 2025-11-25 16:48:25.830 254096 DEBUG nova.compute.manager [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:48:25 compute-0 nova_compute[254092]: 2025-11-25 16:48:25.832 254096 DEBUG nova.virt.libvirt.driver [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:48:25 compute-0 nova_compute[254092]: 2025-11-25 16:48:25.832 254096 INFO nova.virt.libvirt.driver [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Creating image(s)
Nov 25 16:48:25 compute-0 nova_compute[254092]: 2025-11-25 16:48:25.860 254096 DEBUG nova.storage.rbd_utils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image e0098976-026f-43d8-b686-b2658f9aded9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:25 compute-0 nova_compute[254092]: 2025-11-25 16:48:25.890 254096 DEBUG nova.storage.rbd_utils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image e0098976-026f-43d8-b686-b2658f9aded9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:25 compute-0 nova_compute[254092]: 2025-11-25 16:48:25.920 254096 DEBUG nova.storage.rbd_utils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image e0098976-026f-43d8-b686-b2658f9aded9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:25 compute-0 nova_compute[254092]: 2025-11-25 16:48:25.924 254096 DEBUG oslo_concurrency.processutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:26 compute-0 nova_compute[254092]: 2025-11-25 16:48:26.016 254096 DEBUG oslo_concurrency.processutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:26 compute-0 nova_compute[254092]: 2025-11-25 16:48:26.018 254096 DEBUG oslo_concurrency.lockutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:26 compute-0 nova_compute[254092]: 2025-11-25 16:48:26.018 254096 DEBUG oslo_concurrency.lockutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:26 compute-0 nova_compute[254092]: 2025-11-25 16:48:26.019 254096 DEBUG oslo_concurrency.lockutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:26 compute-0 nova_compute[254092]: 2025-11-25 16:48:26.050 254096 DEBUG nova.storage.rbd_utils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image e0098976-026f-43d8-b686-b2658f9aded9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:26 compute-0 nova_compute[254092]: 2025-11-25 16:48:26.054 254096 DEBUG oslo_concurrency.processutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 e0098976-026f-43d8-b686-b2658f9aded9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:26 compute-0 nova_compute[254092]: 2025-11-25 16:48:26.128 254096 INFO nova.virt.libvirt.driver [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Instance shutdown successfully after 13 seconds.
Nov 25 16:48:26 compute-0 nova_compute[254092]: 2025-11-25 16:48:26.139 254096 INFO nova.virt.libvirt.driver [-] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Instance destroyed successfully.
Nov 25 16:48:26 compute-0 nova_compute[254092]: 2025-11-25 16:48:26.139 254096 DEBUG nova.objects.instance [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lazy-loading 'numa_topology' on Instance uuid 435ae693-6844-49ae-977b-ec3aa89cfe70 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:48:26 compute-0 nova_compute[254092]: 2025-11-25 16:48:26.161 254096 INFO nova.virt.libvirt.driver [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Attempting rescue
Nov 25 16:48:26 compute-0 nova_compute[254092]: 2025-11-25 16:48:26.162 254096 DEBUG nova.virt.libvirt.driver [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Nov 25 16:48:26 compute-0 nova_compute[254092]: 2025-11-25 16:48:26.169 254096 DEBUG nova.virt.libvirt.driver [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Nov 25 16:48:26 compute-0 nova_compute[254092]: 2025-11-25 16:48:26.170 254096 INFO nova.virt.libvirt.driver [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Creating image(s)
Nov 25 16:48:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:48:26 compute-0 nova_compute[254092]: 2025-11-25 16:48:26.201 254096 DEBUG nova.storage.rbd_utils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 435ae693-6844-49ae-977b-ec3aa89cfe70_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:26 compute-0 nova_compute[254092]: 2025-11-25 16:48:26.207 254096 DEBUG nova.objects.instance [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 435ae693-6844-49ae-977b-ec3aa89cfe70 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:48:26 compute-0 nova_compute[254092]: 2025-11-25 16:48:26.255 254096 DEBUG nova.storage.rbd_utils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 435ae693-6844-49ae-977b-ec3aa89cfe70_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:26 compute-0 nova_compute[254092]: 2025-11-25 16:48:26.283 254096 DEBUG nova.storage.rbd_utils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 435ae693-6844-49ae-977b-ec3aa89cfe70_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:26 compute-0 nova_compute[254092]: 2025-11-25 16:48:26.288 254096 DEBUG oslo_concurrency.processutils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:26 compute-0 nova_compute[254092]: 2025-11-25 16:48:26.361 254096 DEBUG nova.policy [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '23f6db77558a477bbd8b8b46cb4107d1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fbf763b31dad40d6b0d7285dc017dd89', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:48:26 compute-0 nova_compute[254092]: 2025-11-25 16:48:26.400 254096 DEBUG oslo_concurrency.processutils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.112s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:26 compute-0 nova_compute[254092]: 2025-11-25 16:48:26.401 254096 DEBUG oslo_concurrency.lockutils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:26 compute-0 nova_compute[254092]: 2025-11-25 16:48:26.402 254096 DEBUG oslo_concurrency.lockutils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:26 compute-0 nova_compute[254092]: 2025-11-25 16:48:26.402 254096 DEBUG oslo_concurrency.lockutils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:26 compute-0 nova_compute[254092]: 2025-11-25 16:48:26.435 254096 DEBUG nova.storage.rbd_utils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 435ae693-6844-49ae-977b-ec3aa89cfe70_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:26 compute-0 nova_compute[254092]: 2025-11-25 16:48:26.444 254096 DEBUG oslo_concurrency.processutils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 435ae693-6844-49ae-977b-ec3aa89cfe70_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:26 compute-0 ceph-mon[74985]: pgmap v1906: 321 pgs: 321 active+clean; 360 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 6.7 MiB/s wr, 203 op/s
Nov 25 16:48:26 compute-0 nova_compute[254092]: 2025-11-25 16:48:26.762 254096 DEBUG oslo_concurrency.processutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 e0098976-026f-43d8-b686-b2658f9aded9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.708s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:26 compute-0 nova_compute[254092]: 2025-11-25 16:48:26.884 254096 DEBUG nova.storage.rbd_utils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] resizing rbd image e0098976-026f-43d8-b686-b2658f9aded9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:48:27 compute-0 nova_compute[254092]: 2025-11-25 16:48:27.037 254096 DEBUG nova.network.neutron [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Successfully updated port: 5a3f34de-d3de-439b-ac8f-baabc77892b4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:48:27 compute-0 nova_compute[254092]: 2025-11-25 16:48:27.040 254096 DEBUG nova.compute.manager [req-683817de-c59a-463b-b9e8-d8792146adc3 req-5a8a2f2c-6926-4385-8551-d64eeb519ab0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Received event network-vif-unplugged-431770e1-476d-40b3-8477-419b69aa4fe9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:48:27 compute-0 nova_compute[254092]: 2025-11-25 16:48:27.040 254096 DEBUG oslo_concurrency.lockutils [req-683817de-c59a-463b-b9e8-d8792146adc3 req-5a8a2f2c-6926-4385-8551-d64eeb519ab0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:27 compute-0 nova_compute[254092]: 2025-11-25 16:48:27.041 254096 DEBUG oslo_concurrency.lockutils [req-683817de-c59a-463b-b9e8-d8792146adc3 req-5a8a2f2c-6926-4385-8551-d64eeb519ab0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:27 compute-0 nova_compute[254092]: 2025-11-25 16:48:27.041 254096 DEBUG oslo_concurrency.lockutils [req-683817de-c59a-463b-b9e8-d8792146adc3 req-5a8a2f2c-6926-4385-8551-d64eeb519ab0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:27 compute-0 nova_compute[254092]: 2025-11-25 16:48:27.041 254096 DEBUG nova.compute.manager [req-683817de-c59a-463b-b9e8-d8792146adc3 req-5a8a2f2c-6926-4385-8551-d64eeb519ab0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] No waiting events found dispatching network-vif-unplugged-431770e1-476d-40b3-8477-419b69aa4fe9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:48:27 compute-0 nova_compute[254092]: 2025-11-25 16:48:27.041 254096 WARNING nova.compute.manager [req-683817de-c59a-463b-b9e8-d8792146adc3 req-5a8a2f2c-6926-4385-8551-d64eeb519ab0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Received unexpected event network-vif-unplugged-431770e1-476d-40b3-8477-419b69aa4fe9 for instance with vm_state active and task_state rescuing.
Nov 25 16:48:27 compute-0 nova_compute[254092]: 2025-11-25 16:48:27.124 254096 DEBUG oslo_concurrency.lockutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "refresh_cache-2e848add-8417-4307-8b01-f0d1c1a76cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:48:27 compute-0 nova_compute[254092]: 2025-11-25 16:48:27.124 254096 DEBUG oslo_concurrency.lockutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquired lock "refresh_cache-2e848add-8417-4307-8b01-f0d1c1a76cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:48:27 compute-0 nova_compute[254092]: 2025-11-25 16:48:27.124 254096 DEBUG nova.network.neutron [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:48:27 compute-0 nova_compute[254092]: 2025-11-25 16:48:27.182 254096 DEBUG oslo_concurrency.processutils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 435ae693-6844-49ae-977b-ec3aa89cfe70_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.738s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:27 compute-0 nova_compute[254092]: 2025-11-25 16:48:27.183 254096 DEBUG nova.objects.instance [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lazy-loading 'migration_context' on Instance uuid 435ae693-6844-49ae-977b-ec3aa89cfe70 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:48:27 compute-0 nova_compute[254092]: 2025-11-25 16:48:27.193 254096 DEBUG nova.virt.libvirt.driver [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:48:27 compute-0 nova_compute[254092]: 2025-11-25 16:48:27.194 254096 DEBUG nova.virt.libvirt.driver [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Start _get_guest_xml network_info=[{"id": "431770e1-476d-40b3-8477-419b69aa4fe9", "address": "fa:16:3e:e9:e7:de", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-2031272697-network", "vif_mac": "fa:16:3e:e9:e7:de"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap431770e1-47", "ovs_interfaceid": "431770e1-476d-40b3-8477-419b69aa4fe9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:48:27 compute-0 nova_compute[254092]: 2025-11-25 16:48:27.195 254096 DEBUG nova.objects.instance [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lazy-loading 'resources' on Instance uuid 435ae693-6844-49ae-977b-ec3aa89cfe70 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:48:27 compute-0 nova_compute[254092]: 2025-11-25 16:48:27.235 254096 DEBUG nova.objects.instance [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lazy-loading 'migration_context' on Instance uuid e0098976-026f-43d8-b686-b2658f9aded9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:48:27 compute-0 nova_compute[254092]: 2025-11-25 16:48:27.239 254096 WARNING nova.virt.libvirt.driver [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:48:27 compute-0 nova_compute[254092]: 2025-11-25 16:48:27.244 254096 DEBUG nova.virt.libvirt.host [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:48:27 compute-0 nova_compute[254092]: 2025-11-25 16:48:27.244 254096 DEBUG nova.virt.libvirt.host [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:48:27 compute-0 nova_compute[254092]: 2025-11-25 16:48:27.245 254096 DEBUG nova.virt.libvirt.driver [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:48:27 compute-0 nova_compute[254092]: 2025-11-25 16:48:27.246 254096 DEBUG nova.virt.libvirt.driver [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Ensure instance console log exists: /var/lib/nova/instances/e0098976-026f-43d8-b686-b2658f9aded9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:48:27 compute-0 nova_compute[254092]: 2025-11-25 16:48:27.246 254096 DEBUG oslo_concurrency.lockutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:27 compute-0 nova_compute[254092]: 2025-11-25 16:48:27.247 254096 DEBUG oslo_concurrency.lockutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:27 compute-0 nova_compute[254092]: 2025-11-25 16:48:27.247 254096 DEBUG oslo_concurrency.lockutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:27 compute-0 nova_compute[254092]: 2025-11-25 16:48:27.251 254096 DEBUG nova.virt.libvirt.host [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:48:27 compute-0 nova_compute[254092]: 2025-11-25 16:48:27.252 254096 DEBUG nova.virt.libvirt.host [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:48:27 compute-0 nova_compute[254092]: 2025-11-25 16:48:27.252 254096 DEBUG nova.virt.libvirt.driver [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:48:27 compute-0 nova_compute[254092]: 2025-11-25 16:48:27.253 254096 DEBUG nova.virt.hardware [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:48:27 compute-0 nova_compute[254092]: 2025-11-25 16:48:27.253 254096 DEBUG nova.virt.hardware [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:48:27 compute-0 nova_compute[254092]: 2025-11-25 16:48:27.254 254096 DEBUG nova.virt.hardware [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:48:27 compute-0 nova_compute[254092]: 2025-11-25 16:48:27.254 254096 DEBUG nova.virt.hardware [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:48:27 compute-0 nova_compute[254092]: 2025-11-25 16:48:27.254 254096 DEBUG nova.virt.hardware [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:48:27 compute-0 nova_compute[254092]: 2025-11-25 16:48:27.254 254096 DEBUG nova.virt.hardware [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:48:27 compute-0 nova_compute[254092]: 2025-11-25 16:48:27.255 254096 DEBUG nova.virt.hardware [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:48:27 compute-0 nova_compute[254092]: 2025-11-25 16:48:27.255 254096 DEBUG nova.virt.hardware [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:48:27 compute-0 nova_compute[254092]: 2025-11-25 16:48:27.255 254096 DEBUG nova.virt.hardware [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:48:27 compute-0 nova_compute[254092]: 2025-11-25 16:48:27.255 254096 DEBUG nova.virt.hardware [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:48:27 compute-0 nova_compute[254092]: 2025-11-25 16:48:27.256 254096 DEBUG nova.virt.hardware [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:48:27 compute-0 nova_compute[254092]: 2025-11-25 16:48:27.256 254096 DEBUG nova.objects.instance [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 435ae693-6844-49ae-977b-ec3aa89cfe70 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:48:27 compute-0 nova_compute[254092]: 2025-11-25 16:48:27.269 254096 DEBUG oslo_concurrency.processutils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:27 compute-0 nova_compute[254092]: 2025-11-25 16:48:27.430 254096 DEBUG nova.network.neutron [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:48:27 compute-0 nova_compute[254092]: 2025-11-25 16:48:27.531 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:48:27 compute-0 nova_compute[254092]: 2025-11-25 16:48:27.531 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 16:48:27 compute-0 nova_compute[254092]: 2025-11-25 16:48:27.550 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 16:48:27 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:48:27 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1781331037' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:48:27 compute-0 nova_compute[254092]: 2025-11-25 16:48:27.743 254096 DEBUG oslo_concurrency.processutils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:27 compute-0 nova_compute[254092]: 2025-11-25 16:48:27.745 254096 DEBUG oslo_concurrency.processutils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:27 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1781331037' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:48:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1907: 321 pgs: 321 active+clean; 372 MiB data, 817 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.9 MiB/s wr, 249 op/s
Nov 25 16:48:28 compute-0 sudo[350197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:48:28 compute-0 sudo[350197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:48:28 compute-0 sudo[350197]: pam_unix(sudo:session): session closed for user root
Nov 25 16:48:28 compute-0 sudo[350222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:48:28 compute-0 sudo[350222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:48:28 compute-0 sudo[350222]: pam_unix(sudo:session): session closed for user root
Nov 25 16:48:28 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:48:28 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3272110234' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:48:28 compute-0 sudo[350247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:48:28 compute-0 sudo[350247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:48:28 compute-0 sudo[350247]: pam_unix(sudo:session): session closed for user root
Nov 25 16:48:28 compute-0 nova_compute[254092]: 2025-11-25 16:48:28.228 254096 DEBUG oslo_concurrency.processutils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:28 compute-0 nova_compute[254092]: 2025-11-25 16:48:28.231 254096 DEBUG oslo_concurrency.processutils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:28 compute-0 sudo[350274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 25 16:48:28 compute-0 sudo[350274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:48:28 compute-0 nova_compute[254092]: 2025-11-25 16:48:28.618 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:28 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:48:28 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3220709705' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:48:28 compute-0 nova_compute[254092]: 2025-11-25 16:48:28.760 254096 DEBUG oslo_concurrency.processutils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:28 compute-0 nova_compute[254092]: 2025-11-25 16:48:28.762 254096 DEBUG nova.virt.libvirt.vif [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:48:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-328897245',display_name='tempest-ServerRescueTestJSON-server-328897245',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-328897245',id=94,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:48:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='544c4f84ca494482aea8e55248fe4c62',ramdisk_id='',reservation_id='r-d30awnf0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-143951276',owner_user_name='tempest-ServerRescueTestJSON-143951276-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:48:09Z,user_data=None,user_id='013868ddd96f43a49458a4615ab1f41b',uuid=435ae693-6844-49ae-977b-ec3aa89cfe70,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "431770e1-476d-40b3-8477-419b69aa4fe9", "address": "fa:16:3e:e9:e7:de", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-2031272697-network", "vif_mac": "fa:16:3e:e9:e7:de"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap431770e1-47", "ovs_interfaceid": "431770e1-476d-40b3-8477-419b69aa4fe9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:48:28 compute-0 nova_compute[254092]: 2025-11-25 16:48:28.762 254096 DEBUG nova.network.os_vif_util [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Converting VIF {"id": "431770e1-476d-40b3-8477-419b69aa4fe9", "address": "fa:16:3e:e9:e7:de", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-2031272697-network", "vif_mac": "fa:16:3e:e9:e7:de"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap431770e1-47", "ovs_interfaceid": "431770e1-476d-40b3-8477-419b69aa4fe9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:48:28 compute-0 nova_compute[254092]: 2025-11-25 16:48:28.763 254096 DEBUG nova.network.os_vif_util [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e9:e7:de,bridge_name='br-int',has_traffic_filtering=True,id=431770e1-476d-40b3-8477-419b69aa4fe9,network=Network(7e7afc96-cd20-4255-ae42-f12adadff80d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap431770e1-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:48:28 compute-0 nova_compute[254092]: 2025-11-25 16:48:28.765 254096 DEBUG nova.objects.instance [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lazy-loading 'pci_devices' on Instance uuid 435ae693-6844-49ae-977b-ec3aa89cfe70 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:48:28 compute-0 ceph-mon[74985]: pgmap v1907: 321 pgs: 321 active+clean; 372 MiB data, 817 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.9 MiB/s wr, 249 op/s
Nov 25 16:48:28 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3272110234' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:48:28 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3220709705' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:48:28 compute-0 nova_compute[254092]: 2025-11-25 16:48:28.778 254096 DEBUG nova.virt.libvirt.driver [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:48:28 compute-0 nova_compute[254092]:   <uuid>435ae693-6844-49ae-977b-ec3aa89cfe70</uuid>
Nov 25 16:48:28 compute-0 nova_compute[254092]:   <name>instance-0000005e</name>
Nov 25 16:48:28 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:48:28 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:48:28 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:48:28 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:       <nova:name>tempest-ServerRescueTestJSON-server-328897245</nova:name>
Nov 25 16:48:28 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:48:27</nova:creationTime>
Nov 25 16:48:28 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:48:28 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:48:28 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:48:28 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:48:28 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:48:28 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:48:28 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:48:28 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:48:28 compute-0 nova_compute[254092]:         <nova:user uuid="013868ddd96f43a49458a4615ab1f41b">tempest-ServerRescueTestJSON-143951276-project-member</nova:user>
Nov 25 16:48:28 compute-0 nova_compute[254092]:         <nova:project uuid="544c4f84ca494482aea8e55248fe4c62">tempest-ServerRescueTestJSON-143951276</nova:project>
Nov 25 16:48:28 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:48:28 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:48:28 compute-0 nova_compute[254092]:         <nova:port uuid="431770e1-476d-40b3-8477-419b69aa4fe9">
Nov 25 16:48:28 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:48:28 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:48:28 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:48:28 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:48:28 compute-0 nova_compute[254092]:     <system>
Nov 25 16:48:28 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:48:28 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:48:28 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:48:28 compute-0 nova_compute[254092]:       <entry name="serial">435ae693-6844-49ae-977b-ec3aa89cfe70</entry>
Nov 25 16:48:28 compute-0 nova_compute[254092]:       <entry name="uuid">435ae693-6844-49ae-977b-ec3aa89cfe70</entry>
Nov 25 16:48:28 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     </system>
Nov 25 16:48:28 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:48:28 compute-0 nova_compute[254092]:   <os>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:   </os>
Nov 25 16:48:28 compute-0 nova_compute[254092]:   <features>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:   </features>
Nov 25 16:48:28 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:48:28 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:48:28 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:48:28 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:48:28 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:48:28 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/435ae693-6844-49ae-977b-ec3aa89cfe70_disk.rescue">
Nov 25 16:48:28 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:       </source>
Nov 25 16:48:28 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:48:28 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:48:28 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:48:28 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/435ae693-6844-49ae-977b-ec3aa89cfe70_disk">
Nov 25 16:48:28 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:       </source>
Nov 25 16:48:28 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:48:28 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:48:28 compute-0 nova_compute[254092]:       <target dev="vdb" bus="virtio"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:48:28 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/435ae693-6844-49ae-977b-ec3aa89cfe70_disk.config.rescue">
Nov 25 16:48:28 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:       </source>
Nov 25 16:48:28 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:48:28 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:48:28 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:48:28 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:e9:e7:de"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:       <target dev="tap431770e1-47"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:48:28 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/435ae693-6844-49ae-977b-ec3aa89cfe70/console.log" append="off"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     <video>
Nov 25 16:48:28 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     </video>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:48:28 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:48:28 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:48:28 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:48:28 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:48:28 compute-0 nova_compute[254092]: </domain>
Nov 25 16:48:28 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:48:28 compute-0 nova_compute[254092]: 2025-11-25 16:48:28.794 254096 INFO nova.virt.libvirt.driver [-] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Instance destroyed successfully.
Nov 25 16:48:28 compute-0 podman[350390]: 2025-11-25 16:48:28.807756663 +0000 UTC m=+0.078952456 container exec 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 16:48:28 compute-0 nova_compute[254092]: 2025-11-25 16:48:28.846 254096 DEBUG nova.virt.libvirt.driver [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:48:28 compute-0 nova_compute[254092]: 2025-11-25 16:48:28.847 254096 DEBUG nova.virt.libvirt.driver [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:48:28 compute-0 nova_compute[254092]: 2025-11-25 16:48:28.847 254096 DEBUG nova.virt.libvirt.driver [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:48:28 compute-0 nova_compute[254092]: 2025-11-25 16:48:28.847 254096 DEBUG nova.virt.libvirt.driver [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] No VIF found with MAC fa:16:3e:e9:e7:de, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:48:28 compute-0 nova_compute[254092]: 2025-11-25 16:48:28.848 254096 INFO nova.virt.libvirt.driver [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Using config drive
Nov 25 16:48:28 compute-0 nova_compute[254092]: 2025-11-25 16:48:28.870 254096 DEBUG nova.storage.rbd_utils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 435ae693-6844-49ae-977b-ec3aa89cfe70_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:28 compute-0 nova_compute[254092]: 2025-11-25 16:48:28.890 254096 DEBUG nova.network.neutron [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Successfully created port: 591e580e-30bb-4c0d-b1fb-96d45eca5626 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:48:28 compute-0 nova_compute[254092]: 2025-11-25 16:48:28.910 254096 DEBUG nova.objects.instance [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 435ae693-6844-49ae-977b-ec3aa89cfe70 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:48:28 compute-0 nova_compute[254092]: 2025-11-25 16:48:28.936 254096 DEBUG nova.objects.instance [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lazy-loading 'keypairs' on Instance uuid 435ae693-6844-49ae-977b-ec3aa89cfe70 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:48:28 compute-0 podman[350390]: 2025-11-25 16:48:28.938221398 +0000 UTC m=+0.209417201 container exec_died 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:48:29 compute-0 sudo[350274]: pam_unix(sudo:session): session closed for user root
Nov 25 16:48:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:48:29 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:48:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:48:29 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:48:29 compute-0 sudo[350563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:48:29 compute-0 sudo[350563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:48:29 compute-0 sudo[350563]: pam_unix(sudo:session): session closed for user root
Nov 25 16:48:29 compute-0 sudo[350588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:48:29 compute-0 sudo[350588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:48:29 compute-0 sudo[350588]: pam_unix(sudo:session): session closed for user root
Nov 25 16:48:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1908: 321 pgs: 321 active+clean; 372 MiB data, 817 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.9 MiB/s wr, 249 op/s
Nov 25 16:48:29 compute-0 sudo[350613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:48:29 compute-0 sudo[350613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:48:29 compute-0 sudo[350613]: pam_unix(sudo:session): session closed for user root
Nov 25 16:48:29 compute-0 nova_compute[254092]: 2025-11-25 16:48:29.844 254096 DEBUG nova.network.neutron [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Updating instance_info_cache with network_info: [{"id": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "address": "fa:16:3e:9d:43:a6", "network": {"id": "aa283c2c-b597-4970-842d-f5f2b621b5f0", "bridge": "br-int", "label": "tempest-network-smoke--488175601", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a3f34de-d3", "ovs_interfaceid": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:48:29 compute-0 nova_compute[254092]: 2025-11-25 16:48:29.859 254096 DEBUG oslo_concurrency.lockutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Releasing lock "refresh_cache-2e848add-8417-4307-8b01-f0d1c1a76cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:48:29 compute-0 nova_compute[254092]: 2025-11-25 16:48:29.860 254096 DEBUG nova.compute.manager [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Instance network_info: |[{"id": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "address": "fa:16:3e:9d:43:a6", "network": {"id": "aa283c2c-b597-4970-842d-f5f2b621b5f0", "bridge": "br-int", "label": "tempest-network-smoke--488175601", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a3f34de-d3", "ovs_interfaceid": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:48:29 compute-0 nova_compute[254092]: 2025-11-25 16:48:29.863 254096 DEBUG nova.virt.libvirt.driver [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Start _get_guest_xml network_info=[{"id": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "address": "fa:16:3e:9d:43:a6", "network": {"id": "aa283c2c-b597-4970-842d-f5f2b621b5f0", "bridge": "br-int", "label": "tempest-network-smoke--488175601", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a3f34de-d3", "ovs_interfaceid": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:48:29 compute-0 nova_compute[254092]: 2025-11-25 16:48:29.870 254096 WARNING nova.virt.libvirt.driver [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:48:29 compute-0 nova_compute[254092]: 2025-11-25 16:48:29.876 254096 DEBUG nova.virt.libvirt.host [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:48:29 compute-0 nova_compute[254092]: 2025-11-25 16:48:29.877 254096 DEBUG nova.virt.libvirt.host [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:48:29 compute-0 nova_compute[254092]: 2025-11-25 16:48:29.879 254096 DEBUG nova.virt.libvirt.host [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:48:29 compute-0 nova_compute[254092]: 2025-11-25 16:48:29.880 254096 DEBUG nova.virt.libvirt.host [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:48:29 compute-0 nova_compute[254092]: 2025-11-25 16:48:29.880 254096 DEBUG nova.virt.libvirt.driver [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:48:29 compute-0 nova_compute[254092]: 2025-11-25 16:48:29.881 254096 DEBUG nova.virt.hardware [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:48:29 compute-0 nova_compute[254092]: 2025-11-25 16:48:29.881 254096 DEBUG nova.virt.hardware [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:48:29 compute-0 nova_compute[254092]: 2025-11-25 16:48:29.881 254096 DEBUG nova.virt.hardware [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:48:29 compute-0 nova_compute[254092]: 2025-11-25 16:48:29.882 254096 DEBUG nova.virt.hardware [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:48:29 compute-0 nova_compute[254092]: 2025-11-25 16:48:29.882 254096 DEBUG nova.virt.hardware [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:48:29 compute-0 nova_compute[254092]: 2025-11-25 16:48:29.882 254096 DEBUG nova.virt.hardware [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:48:29 compute-0 nova_compute[254092]: 2025-11-25 16:48:29.883 254096 DEBUG nova.virt.hardware [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:48:29 compute-0 nova_compute[254092]: 2025-11-25 16:48:29.883 254096 DEBUG nova.virt.hardware [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:48:29 compute-0 nova_compute[254092]: 2025-11-25 16:48:29.883 254096 DEBUG nova.virt.hardware [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:48:29 compute-0 nova_compute[254092]: 2025-11-25 16:48:29.883 254096 DEBUG nova.virt.hardware [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:48:29 compute-0 nova_compute[254092]: 2025-11-25 16:48:29.884 254096 DEBUG nova.virt.hardware [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:48:29 compute-0 nova_compute[254092]: 2025-11-25 16:48:29.887 254096 DEBUG oslo_concurrency.processutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:29 compute-0 sudo[350638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 16:48:29 compute-0 sudo[350638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:48:29 compute-0 nova_compute[254092]: 2025-11-25 16:48:29.946 254096 DEBUG nova.compute.manager [req-129fc6ac-38a7-4b9a-a1d4-0b9e180af5c5 req-1736f64b-9883-4695-a181-cb9e52e8d151 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Received event network-changed-5a3f34de-d3de-439b-ac8f-baabc77892b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:48:29 compute-0 nova_compute[254092]: 2025-11-25 16:48:29.947 254096 DEBUG nova.compute.manager [req-129fc6ac-38a7-4b9a-a1d4-0b9e180af5c5 req-1736f64b-9883-4695-a181-cb9e52e8d151 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Refreshing instance network info cache due to event network-changed-5a3f34de-d3de-439b-ac8f-baabc77892b4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:48:29 compute-0 nova_compute[254092]: 2025-11-25 16:48:29.947 254096 DEBUG oslo_concurrency.lockutils [req-129fc6ac-38a7-4b9a-a1d4-0b9e180af5c5 req-1736f64b-9883-4695-a181-cb9e52e8d151 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-2e848add-8417-4307-8b01-f0d1c1a76cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:48:29 compute-0 nova_compute[254092]: 2025-11-25 16:48:29.948 254096 DEBUG oslo_concurrency.lockutils [req-129fc6ac-38a7-4b9a-a1d4-0b9e180af5c5 req-1736f64b-9883-4695-a181-cb9e52e8d151 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-2e848add-8417-4307-8b01-f0d1c1a76cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:48:29 compute-0 nova_compute[254092]: 2025-11-25 16:48:29.948 254096 DEBUG nova.network.neutron [req-129fc6ac-38a7-4b9a-a1d4-0b9e180af5c5 req-1736f64b-9883-4695-a181-cb9e52e8d151 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Refreshing network info cache for port 5a3f34de-d3de-439b-ac8f-baabc77892b4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:48:30 compute-0 nova_compute[254092]: 2025-11-25 16:48:30.016 254096 INFO nova.virt.libvirt.driver [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Creating config drive at /var/lib/nova/instances/435ae693-6844-49ae-977b-ec3aa89cfe70/disk.config.rescue
Nov 25 16:48:30 compute-0 nova_compute[254092]: 2025-11-25 16:48:30.023 254096 DEBUG oslo_concurrency.processutils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/435ae693-6844-49ae-977b-ec3aa89cfe70/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl1lyutho execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:30 compute-0 nova_compute[254092]: 2025-11-25 16:48:30.170 254096 DEBUG oslo_concurrency.processutils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/435ae693-6844-49ae-977b-ec3aa89cfe70/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl1lyutho" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:30 compute-0 nova_compute[254092]: 2025-11-25 16:48:30.193 254096 DEBUG nova.storage.rbd_utils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 435ae693-6844-49ae-977b-ec3aa89cfe70_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:30 compute-0 nova_compute[254092]: 2025-11-25 16:48:30.197 254096 DEBUG oslo_concurrency.processutils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/435ae693-6844-49ae-977b-ec3aa89cfe70/disk.config.rescue 435ae693-6844-49ae-977b-ec3aa89cfe70_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:48:30 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3040587605' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:48:30 compute-0 nova_compute[254092]: 2025-11-25 16:48:30.370 254096 DEBUG oslo_concurrency.processutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:30 compute-0 nova_compute[254092]: 2025-11-25 16:48:30.403 254096 DEBUG nova.storage.rbd_utils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 2e848add-8417-4307-8b01-f0d1c1a76cea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:30 compute-0 nova_compute[254092]: 2025-11-25 16:48:30.409 254096 DEBUG oslo_concurrency.processutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:30 compute-0 sudo[350638]: pam_unix(sudo:session): session closed for user root
Nov 25 16:48:30 compute-0 nova_compute[254092]: 2025-11-25 16:48:30.461 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:48:30 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:48:30 compute-0 nova_compute[254092]: 2025-11-25 16:48:30.468 254096 DEBUG oslo_concurrency.processutils [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/435ae693-6844-49ae-977b-ec3aa89cfe70/disk.config.rescue 435ae693-6844-49ae-977b-ec3aa89cfe70_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.270s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:30 compute-0 nova_compute[254092]: 2025-11-25 16:48:30.472 254096 INFO nova.virt.libvirt.driver [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Deleting local config drive /var/lib/nova/instances/435ae693-6844-49ae-977b-ec3aa89cfe70/disk.config.rescue because it was imported into RBD.
Nov 25 16:48:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 16:48:30 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:48:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 16:48:30 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:48:30 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 36a2a155-5a2e-446c-96fa-4112f914e17e does not exist
Nov 25 16:48:30 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev a22f5ec4-2a5b-4ad6-ae60-7032494a86eb does not exist
Nov 25 16:48:30 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 12d5a424-8f7a-47e9-9f86-1d9a5891d430 does not exist
Nov 25 16:48:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 16:48:30 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:48:30 compute-0 nova_compute[254092]: 2025-11-25 16:48:30.489 254096 DEBUG oslo_concurrency.lockutils [None req-41fc4c48-23b5-4c48-be75-00eff3fc1b54 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "4677de7c-6625-4c98-a065-214341d8bfea" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:30 compute-0 nova_compute[254092]: 2025-11-25 16:48:30.489 254096 DEBUG oslo_concurrency.lockutils [None req-41fc4c48-23b5-4c48-be75-00eff3fc1b54 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "4677de7c-6625-4c98-a065-214341d8bfea" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:30 compute-0 nova_compute[254092]: 2025-11-25 16:48:30.489 254096 DEBUG nova.compute.manager [None req-41fc4c48-23b5-4c48-be75-00eff3fc1b54 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:48:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 16:48:30 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:48:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:48:30 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:48:30 compute-0 nova_compute[254092]: 2025-11-25 16:48:30.501 254096 DEBUG nova.compute.manager [None req-41fc4c48-23b5-4c48-be75-00eff3fc1b54 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Nov 25 16:48:30 compute-0 nova_compute[254092]: 2025-11-25 16:48:30.503 254096 DEBUG nova.objects.instance [None req-41fc4c48-23b5-4c48-be75-00eff3fc1b54 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lazy-loading 'flavor' on Instance uuid 4677de7c-6625-4c98-a065-214341d8bfea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:48:30 compute-0 kernel: tap431770e1-47: entered promiscuous mode
Nov 25 16:48:30 compute-0 NetworkManager[48891]: <info>  [1764089310.5443] manager: (tap431770e1-47): new Tun device (/org/freedesktop/NetworkManager/Devices/383)
Nov 25 16:48:30 compute-0 ovn_controller[153477]: 2025-11-25T16:48:30Z|00911|binding|INFO|Claiming lport 431770e1-476d-40b3-8477-419b69aa4fe9 for this chassis.
Nov 25 16:48:30 compute-0 ovn_controller[153477]: 2025-11-25T16:48:30Z|00912|binding|INFO|431770e1-476d-40b3-8477-419b69aa4fe9: Claiming fa:16:3e:e9:e7:de 10.100.0.8
Nov 25 16:48:30 compute-0 nova_compute[254092]: 2025-11-25 16:48:30.547 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:30.555 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e9:e7:de 10.100.0.8'], port_security=['fa:16:3e:e9:e7:de 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '435ae693-6844-49ae-977b-ec3aa89cfe70', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e7afc96-cd20-4255-ae42-f12adadff80d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '544c4f84ca494482aea8e55248fe4c62', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'ea3857ca-f518-4e95-9458-20b7becb63fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1ffd0c38-5078-40a5-ba7b-d1f91d316b3e, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=431770e1-476d-40b3-8477-419b69aa4fe9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:48:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:30.557 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 431770e1-476d-40b3-8477-419b69aa4fe9 in datapath 7e7afc96-cd20-4255-ae42-f12adadff80d bound to our chassis
Nov 25 16:48:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:30.558 163338 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7e7afc96-cd20-4255-ae42-f12adadff80d or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 25 16:48:30 compute-0 sudo[350776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:48:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:30.559 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3eb816db-9476-44b7-bbaf-ba69d9fcc7f9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:30 compute-0 nova_compute[254092]: 2025-11-25 16:48:30.560 254096 DEBUG nova.virt.libvirt.driver [None req-41fc4c48-23b5-4c48-be75-00eff3fc1b54 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 16:48:30 compute-0 sudo[350776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:48:30 compute-0 sudo[350776]: pam_unix(sudo:session): session closed for user root
Nov 25 16:48:30 compute-0 nova_compute[254092]: 2025-11-25 16:48:30.570 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:30 compute-0 nova_compute[254092]: 2025-11-25 16:48:30.573 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:30 compute-0 ovn_controller[153477]: 2025-11-25T16:48:30Z|00913|binding|INFO|Setting lport 431770e1-476d-40b3-8477-419b69aa4fe9 ovn-installed in OVS
Nov 25 16:48:30 compute-0 ovn_controller[153477]: 2025-11-25T16:48:30Z|00914|binding|INFO|Setting lport 431770e1-476d-40b3-8477-419b69aa4fe9 up in Southbound
Nov 25 16:48:30 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:48:30 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:48:30 compute-0 ceph-mon[74985]: pgmap v1908: 321 pgs: 321 active+clean; 372 MiB data, 817 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.9 MiB/s wr, 249 op/s
Nov 25 16:48:30 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3040587605' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:48:30 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:48:30 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:48:30 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:48:30 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:48:30 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:48:30 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:48:30 compute-0 systemd-udevd[350817]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:48:30 compute-0 systemd-machined[216343]: New machine qemu-117-instance-0000005e.
Nov 25 16:48:30 compute-0 systemd[1]: Started Virtual Machine qemu-117-instance-0000005e.
Nov 25 16:48:30 compute-0 NetworkManager[48891]: <info>  [1764089310.6027] device (tap431770e1-47): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:48:30 compute-0 NetworkManager[48891]: <info>  [1764089310.6038] device (tap431770e1-47): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:48:30 compute-0 sudo[350812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:48:30 compute-0 sudo[350812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:48:30 compute-0 sudo[350812]: pam_unix(sudo:session): session closed for user root
Nov 25 16:48:30 compute-0 sudo[350862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:48:30 compute-0 sudo[350862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:48:30 compute-0 sudo[350862]: pam_unix(sudo:session): session closed for user root
Nov 25 16:48:30 compute-0 sudo[350890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 16:48:30 compute-0 sudo[350890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:48:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:48:30 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/529209879' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:48:30 compute-0 nova_compute[254092]: 2025-11-25 16:48:30.925 254096 DEBUG oslo_concurrency.processutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:30 compute-0 nova_compute[254092]: 2025-11-25 16:48:30.926 254096 DEBUG nova.virt.libvirt.vif [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:48:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-721299492',display_name='tempest-TestNetworkAdvancedServerOps-server-721299492',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-721299492',id=96,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIuS2h4G62fbKa1D8fQiWbH65PFkkRVLBed4wrkEeUlM++S4qN/mZDJoxB0We0lR2SolGZ26Txk6Ir9O+1WqdMaVC9PS7NmiU/+hEPFN6YieX+/K6w93NwRm1fHYEK0fbg==',key_name='tempest-TestNetworkAdvancedServerOps-1463569288',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f92d76b523b84ec9b645382bdd3192c9',ramdisk_id='',reservation_id='r-oidm56b0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1110744031',owner_user_name='tempest-TestNetworkAdvancedServerOps-1110744031-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:48:22Z,user_data=None,user_id='3b30410358c448a693a9dc40ee6aafb0',uuid=2e848add-8417-4307-8b01-f0d1c1a76cea,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "address": "fa:16:3e:9d:43:a6", "network": {"id": "aa283c2c-b597-4970-842d-f5f2b621b5f0", "bridge": "br-int", "label": "tempest-network-smoke--488175601", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a3f34de-d3", "ovs_interfaceid": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:48:30 compute-0 nova_compute[254092]: 2025-11-25 16:48:30.927 254096 DEBUG nova.network.os_vif_util [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converting VIF {"id": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "address": "fa:16:3e:9d:43:a6", "network": {"id": "aa283c2c-b597-4970-842d-f5f2b621b5f0", "bridge": "br-int", "label": "tempest-network-smoke--488175601", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a3f34de-d3", "ovs_interfaceid": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:48:30 compute-0 nova_compute[254092]: 2025-11-25 16:48:30.927 254096 DEBUG nova.network.os_vif_util [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:43:a6,bridge_name='br-int',has_traffic_filtering=True,id=5a3f34de-d3de-439b-ac8f-baabc77892b4,network=Network(aa283c2c-b597-4970-842d-f5f2b621b5f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a3f34de-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:48:30 compute-0 nova_compute[254092]: 2025-11-25 16:48:30.929 254096 DEBUG nova.objects.instance [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2e848add-8417-4307-8b01-f0d1c1a76cea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:48:30 compute-0 nova_compute[254092]: 2025-11-25 16:48:30.952 254096 DEBUG nova.virt.libvirt.driver [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:48:30 compute-0 nova_compute[254092]:   <uuid>2e848add-8417-4307-8b01-f0d1c1a76cea</uuid>
Nov 25 16:48:30 compute-0 nova_compute[254092]:   <name>instance-00000060</name>
Nov 25 16:48:30 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:48:30 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:48:30 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:48:30 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-721299492</nova:name>
Nov 25 16:48:30 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:48:29</nova:creationTime>
Nov 25 16:48:30 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:48:30 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:48:30 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:48:30 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:48:30 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:48:30 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:48:30 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:48:30 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:48:30 compute-0 nova_compute[254092]:         <nova:user uuid="3b30410358c448a693a9dc40ee6aafb0">tempest-TestNetworkAdvancedServerOps-1110744031-project-member</nova:user>
Nov 25 16:48:30 compute-0 nova_compute[254092]:         <nova:project uuid="f92d76b523b84ec9b645382bdd3192c9">tempest-TestNetworkAdvancedServerOps-1110744031</nova:project>
Nov 25 16:48:30 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:48:30 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:48:30 compute-0 nova_compute[254092]:         <nova:port uuid="5a3f34de-d3de-439b-ac8f-baabc77892b4">
Nov 25 16:48:30 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:48:30 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:48:30 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:48:30 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:48:30 compute-0 nova_compute[254092]:     <system>
Nov 25 16:48:30 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:48:30 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:48:30 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:48:30 compute-0 nova_compute[254092]:       <entry name="serial">2e848add-8417-4307-8b01-f0d1c1a76cea</entry>
Nov 25 16:48:30 compute-0 nova_compute[254092]:       <entry name="uuid">2e848add-8417-4307-8b01-f0d1c1a76cea</entry>
Nov 25 16:48:30 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     </system>
Nov 25 16:48:30 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:48:30 compute-0 nova_compute[254092]:   <os>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:   </os>
Nov 25 16:48:30 compute-0 nova_compute[254092]:   <features>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:   </features>
Nov 25 16:48:30 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:48:30 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:48:30 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:48:30 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:48:30 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:48:30 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/2e848add-8417-4307-8b01-f0d1c1a76cea_disk">
Nov 25 16:48:30 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:       </source>
Nov 25 16:48:30 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:48:30 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:48:30 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:48:30 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/2e848add-8417-4307-8b01-f0d1c1a76cea_disk.config">
Nov 25 16:48:30 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:       </source>
Nov 25 16:48:30 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:48:30 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:48:30 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:48:30 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:9d:43:a6"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:       <target dev="tap5a3f34de-d3"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:48:30 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/2e848add-8417-4307-8b01-f0d1c1a76cea/console.log" append="off"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     <video>
Nov 25 16:48:30 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     </video>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:48:30 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:48:30 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:48:30 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:48:30 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:48:30 compute-0 nova_compute[254092]: </domain>
Nov 25 16:48:30 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:48:30 compute-0 nova_compute[254092]: 2025-11-25 16:48:30.953 254096 DEBUG nova.compute.manager [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Preparing to wait for external event network-vif-plugged-5a3f34de-d3de-439b-ac8f-baabc77892b4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:48:30 compute-0 nova_compute[254092]: 2025-11-25 16:48:30.954 254096 DEBUG oslo_concurrency.lockutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:30 compute-0 nova_compute[254092]: 2025-11-25 16:48:30.954 254096 DEBUG oslo_concurrency.lockutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:30 compute-0 nova_compute[254092]: 2025-11-25 16:48:30.954 254096 DEBUG oslo_concurrency.lockutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:30 compute-0 nova_compute[254092]: 2025-11-25 16:48:30.955 254096 DEBUG nova.virt.libvirt.vif [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:48:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-721299492',display_name='tempest-TestNetworkAdvancedServerOps-server-721299492',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-721299492',id=96,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIuS2h4G62fbKa1D8fQiWbH65PFkkRVLBed4wrkEeUlM++S4qN/mZDJoxB0We0lR2SolGZ26Txk6Ir9O+1WqdMaVC9PS7NmiU/+hEPFN6YieX+/K6w93NwRm1fHYEK0fbg==',key_name='tempest-TestNetworkAdvancedServerOps-1463569288',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f92d76b523b84ec9b645382bdd3192c9',ramdisk_id='',reservation_id='r-oidm56b0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1110744031',owner_user_name='tempest-TestNetworkAdvancedServerOps-1110744031-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:48:22Z,user_data=None,user_id='3b30410358c448a693a9dc40ee6aafb0',uuid=2e848add-8417-4307-8b01-f0d1c1a76cea,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "address": "fa:16:3e:9d:43:a6", "network": {"id": "aa283c2c-b597-4970-842d-f5f2b621b5f0", "bridge": "br-int", "label": "tempest-network-smoke--488175601", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a3f34de-d3", "ovs_interfaceid": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:48:30 compute-0 nova_compute[254092]: 2025-11-25 16:48:30.955 254096 DEBUG nova.network.os_vif_util [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converting VIF {"id": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "address": "fa:16:3e:9d:43:a6", "network": {"id": "aa283c2c-b597-4970-842d-f5f2b621b5f0", "bridge": "br-int", "label": "tempest-network-smoke--488175601", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a3f34de-d3", "ovs_interfaceid": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:48:30 compute-0 nova_compute[254092]: 2025-11-25 16:48:30.956 254096 DEBUG nova.network.os_vif_util [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:43:a6,bridge_name='br-int',has_traffic_filtering=True,id=5a3f34de-d3de-439b-ac8f-baabc77892b4,network=Network(aa283c2c-b597-4970-842d-f5f2b621b5f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a3f34de-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:48:30 compute-0 nova_compute[254092]: 2025-11-25 16:48:30.956 254096 DEBUG os_vif [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:43:a6,bridge_name='br-int',has_traffic_filtering=True,id=5a3f34de-d3de-439b-ac8f-baabc77892b4,network=Network(aa283c2c-b597-4970-842d-f5f2b621b5f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a3f34de-d3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:48:30 compute-0 nova_compute[254092]: 2025-11-25 16:48:30.956 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:30 compute-0 nova_compute[254092]: 2025-11-25 16:48:30.957 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:48:30 compute-0 nova_compute[254092]: 2025-11-25 16:48:30.957 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:48:30 compute-0 nova_compute[254092]: 2025-11-25 16:48:30.969 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:30 compute-0 nova_compute[254092]: 2025-11-25 16:48:30.970 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5a3f34de-d3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:48:30 compute-0 nova_compute[254092]: 2025-11-25 16:48:30.970 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5a3f34de-d3, col_values=(('external_ids', {'iface-id': '5a3f34de-d3de-439b-ac8f-baabc77892b4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9d:43:a6', 'vm-uuid': '2e848add-8417-4307-8b01-f0d1c1a76cea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:48:30 compute-0 nova_compute[254092]: 2025-11-25 16:48:30.973 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:30 compute-0 NetworkManager[48891]: <info>  [1764089310.9743] manager: (tap5a3f34de-d3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/384)
Nov 25 16:48:30 compute-0 nova_compute[254092]: 2025-11-25 16:48:30.977 254096 DEBUG nova.network.neutron [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Successfully updated port: 591e580e-30bb-4c0d-b1fb-96d45eca5626 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:48:30 compute-0 nova_compute[254092]: 2025-11-25 16:48:30.979 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:48:30 compute-0 nova_compute[254092]: 2025-11-25 16:48:30.983 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:30 compute-0 nova_compute[254092]: 2025-11-25 16:48:30.984 254096 INFO os_vif [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:43:a6,bridge_name='br-int',has_traffic_filtering=True,id=5a3f34de-d3de-439b-ac8f-baabc77892b4,network=Network(aa283c2c-b597-4970-842d-f5f2b621b5f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a3f34de-d3')
Nov 25 16:48:30 compute-0 nova_compute[254092]: 2025-11-25 16:48:30.993 254096 DEBUG oslo_concurrency.lockutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "refresh_cache-e0098976-026f-43d8-b686-b2658f9aded9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:48:30 compute-0 nova_compute[254092]: 2025-11-25 16:48:30.993 254096 DEBUG oslo_concurrency.lockutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquired lock "refresh_cache-e0098976-026f-43d8-b686-b2658f9aded9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:48:30 compute-0 nova_compute[254092]: 2025-11-25 16:48:30.993 254096 DEBUG nova.network.neutron [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:48:31 compute-0 nova_compute[254092]: 2025-11-25 16:48:31.030 254096 DEBUG nova.virt.libvirt.driver [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:48:31 compute-0 nova_compute[254092]: 2025-11-25 16:48:31.030 254096 DEBUG nova.virt.libvirt.driver [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:48:31 compute-0 nova_compute[254092]: 2025-11-25 16:48:31.030 254096 DEBUG nova.virt.libvirt.driver [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] No VIF found with MAC fa:16:3e:9d:43:a6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:48:31 compute-0 nova_compute[254092]: 2025-11-25 16:48:31.031 254096 INFO nova.virt.libvirt.driver [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Using config drive
Nov 25 16:48:31 compute-0 nova_compute[254092]: 2025-11-25 16:48:31.061 254096 DEBUG nova.storage.rbd_utils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 2e848add-8417-4307-8b01-f0d1c1a76cea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:48:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e243 do_prune osdmap full prune enabled
Nov 25 16:48:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e244 e244: 3 total, 3 up, 3 in
Nov 25 16:48:31 compute-0 podman[351010]: 2025-11-25 16:48:31.185592632 +0000 UTC m=+0.072063220 container create adebbbd7f64cf2eadc871b658dd57d4fcd5f7ccc00cd3c6d6b0edb88a0528ebd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_stonebraker, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Nov 25 16:48:31 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e244: 3 total, 3 up, 3 in
Nov 25 16:48:31 compute-0 podman[351010]: 2025-11-25 16:48:31.139949751 +0000 UTC m=+0.026420359 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:48:31 compute-0 systemd[1]: Started libpod-conmon-adebbbd7f64cf2eadc871b658dd57d4fcd5f7ccc00cd3c6d6b0edb88a0528ebd.scope.
Nov 25 16:48:31 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:48:31 compute-0 nova_compute[254092]: 2025-11-25 16:48:31.335 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for 435ae693-6844-49ae-977b-ec3aa89cfe70 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 16:48:31 compute-0 nova_compute[254092]: 2025-11-25 16:48:31.336 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089311.3346312, 435ae693-6844-49ae-977b-ec3aa89cfe70 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:48:31 compute-0 nova_compute[254092]: 2025-11-25 16:48:31.337 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] VM Resumed (Lifecycle Event)
Nov 25 16:48:31 compute-0 nova_compute[254092]: 2025-11-25 16:48:31.344 254096 DEBUG nova.compute.manager [None req-046f827d-c184-45d8-82c3-36f2a2fa1011 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:48:31 compute-0 podman[351010]: 2025-11-25 16:48:31.358047458 +0000 UTC m=+0.244518146 container init adebbbd7f64cf2eadc871b658dd57d4fcd5f7ccc00cd3c6d6b0edb88a0528ebd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:48:31 compute-0 nova_compute[254092]: 2025-11-25 16:48:31.368 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:48:31 compute-0 podman[351010]: 2025-11-25 16:48:31.373397115 +0000 UTC m=+0.259867723 container start adebbbd7f64cf2eadc871b658dd57d4fcd5f7ccc00cd3c6d6b0edb88a0528ebd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_stonebraker, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:48:31 compute-0 nova_compute[254092]: 2025-11-25 16:48:31.375 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:48:31 compute-0 confident_stonebraker[351049]: 167 167
Nov 25 16:48:31 compute-0 systemd[1]: libpod-adebbbd7f64cf2eadc871b658dd57d4fcd5f7ccc00cd3c6d6b0edb88a0528ebd.scope: Deactivated successfully.
Nov 25 16:48:31 compute-0 podman[351010]: 2025-11-25 16:48:31.388428803 +0000 UTC m=+0.274899401 container attach adebbbd7f64cf2eadc871b658dd57d4fcd5f7ccc00cd3c6d6b0edb88a0528ebd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_stonebraker, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 16:48:31 compute-0 podman[351010]: 2025-11-25 16:48:31.389102891 +0000 UTC m=+0.275573499 container died adebbbd7f64cf2eadc871b658dd57d4fcd5f7ccc00cd3c6d6b0edb88a0528ebd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_stonebraker, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:48:31 compute-0 nova_compute[254092]: 2025-11-25 16:48:31.523 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] During sync_power_state the instance has a pending task (rescuing). Skip.
Nov 25 16:48:31 compute-0 nova_compute[254092]: 2025-11-25 16:48:31.527 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089311.3391182, 435ae693-6844-49ae-977b-ec3aa89cfe70 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:48:31 compute-0 nova_compute[254092]: 2025-11-25 16:48:31.528 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] VM Started (Lifecycle Event)
Nov 25 16:48:31 compute-0 nova_compute[254092]: 2025-11-25 16:48:31.550 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:48:31 compute-0 nova_compute[254092]: 2025-11-25 16:48:31.554 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:48:31 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/529209879' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:48:31 compute-0 ceph-mon[74985]: osdmap e244: 3 total, 3 up, 3 in
Nov 25 16:48:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-7faedfd7ad0420e5d337465ce11ea6845304f4dd701442214f59c00405d8ecf6-merged.mount: Deactivated successfully.
Nov 25 16:48:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1910: 321 pgs: 321 active+clean; 465 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 9.0 MiB/s wr, 254 op/s
Nov 25 16:48:31 compute-0 podman[351010]: 2025-11-25 16:48:31.989596811 +0000 UTC m=+0.876067399 container remove adebbbd7f64cf2eadc871b658dd57d4fcd5f7ccc00cd3c6d6b0edb88a0528ebd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:48:32 compute-0 systemd[1]: libpod-conmon-adebbbd7f64cf2eadc871b658dd57d4fcd5f7ccc00cd3c6d6b0edb88a0528ebd.scope: Deactivated successfully.
Nov 25 16:48:32 compute-0 nova_compute[254092]: 2025-11-25 16:48:32.095 254096 DEBUG nova.network.neutron [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:48:32 compute-0 podman[351075]: 2025-11-25 16:48:32.21775857 +0000 UTC m=+0.050648857 container create 7294cae0001f0061e41b08b25a52e344adc63a8fb54e869bc09c12b2f94db1bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_noyce, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 16:48:32 compute-0 systemd[1]: Started libpod-conmon-7294cae0001f0061e41b08b25a52e344adc63a8fb54e869bc09c12b2f94db1bc.scope.
Nov 25 16:48:32 compute-0 podman[351075]: 2025-11-25 16:48:32.199221626 +0000 UTC m=+0.032111973 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:48:32 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:48:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95f63ef649d22a08e33972faf63178452d764ea882f7bc75e9e449a1e8f48dfb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:48:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95f63ef649d22a08e33972faf63178452d764ea882f7bc75e9e449a1e8f48dfb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:48:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95f63ef649d22a08e33972faf63178452d764ea882f7bc75e9e449a1e8f48dfb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:48:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95f63ef649d22a08e33972faf63178452d764ea882f7bc75e9e449a1e8f48dfb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:48:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95f63ef649d22a08e33972faf63178452d764ea882f7bc75e9e449a1e8f48dfb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 16:48:32 compute-0 podman[351075]: 2025-11-25 16:48:32.33550811 +0000 UTC m=+0.168398407 container init 7294cae0001f0061e41b08b25a52e344adc63a8fb54e869bc09c12b2f94db1bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_noyce, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:48:32 compute-0 podman[351075]: 2025-11-25 16:48:32.343695583 +0000 UTC m=+0.176585880 container start 7294cae0001f0061e41b08b25a52e344adc63a8fb54e869bc09c12b2f94db1bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 16:48:32 compute-0 podman[351075]: 2025-11-25 16:48:32.346620552 +0000 UTC m=+0.179510859 container attach 7294cae0001f0061e41b08b25a52e344adc63a8fb54e869bc09c12b2f94db1bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_noyce, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 16:48:32 compute-0 ceph-mon[74985]: pgmap v1910: 321 pgs: 321 active+clean; 465 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 9.0 MiB/s wr, 254 op/s
Nov 25 16:48:32 compute-0 nova_compute[254092]: 2025-11-25 16:48:32.741 254096 INFO nova.virt.libvirt.driver [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Creating config drive at /var/lib/nova/instances/2e848add-8417-4307-8b01-f0d1c1a76cea/disk.config
Nov 25 16:48:32 compute-0 nova_compute[254092]: 2025-11-25 16:48:32.746 254096 DEBUG oslo_concurrency.processutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2e848add-8417-4307-8b01-f0d1c1a76cea/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphbghwnam execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:32 compute-0 nova_compute[254092]: 2025-11-25 16:48:32.889 254096 DEBUG oslo_concurrency.processutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2e848add-8417-4307-8b01-f0d1c1a76cea/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphbghwnam" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:32 compute-0 nova_compute[254092]: 2025-11-25 16:48:32.911 254096 DEBUG nova.storage.rbd_utils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 2e848add-8417-4307-8b01-f0d1c1a76cea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:32 compute-0 nova_compute[254092]: 2025-11-25 16:48:32.914 254096 DEBUG oslo_concurrency.processutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2e848add-8417-4307-8b01-f0d1c1a76cea/disk.config 2e848add-8417-4307-8b01-f0d1c1a76cea_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:32 compute-0 nova_compute[254092]: 2025-11-25 16:48:32.986 254096 DEBUG nova.network.neutron [req-129fc6ac-38a7-4b9a-a1d4-0b9e180af5c5 req-1736f64b-9883-4695-a181-cb9e52e8d151 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Updated VIF entry in instance network info cache for port 5a3f34de-d3de-439b-ac8f-baabc77892b4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:48:32 compute-0 nova_compute[254092]: 2025-11-25 16:48:32.987 254096 DEBUG nova.network.neutron [req-129fc6ac-38a7-4b9a-a1d4-0b9e180af5c5 req-1736f64b-9883-4695-a181-cb9e52e8d151 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Updating instance_info_cache with network_info: [{"id": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "address": "fa:16:3e:9d:43:a6", "network": {"id": "aa283c2c-b597-4970-842d-f5f2b621b5f0", "bridge": "br-int", "label": "tempest-network-smoke--488175601", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a3f34de-d3", "ovs_interfaceid": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:48:33 compute-0 nova_compute[254092]: 2025-11-25 16:48:33.000 254096 DEBUG oslo_concurrency.lockutils [req-129fc6ac-38a7-4b9a-a1d4-0b9e180af5c5 req-1736f64b-9883-4695-a181-cb9e52e8d151 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-2e848add-8417-4307-8b01-f0d1c1a76cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:48:33 compute-0 nova_compute[254092]: 2025-11-25 16:48:33.001 254096 DEBUG nova.compute.manager [req-129fc6ac-38a7-4b9a-a1d4-0b9e180af5c5 req-1736f64b-9883-4695-a181-cb9e52e8d151 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Received event network-vif-plugged-431770e1-476d-40b3-8477-419b69aa4fe9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:48:33 compute-0 nova_compute[254092]: 2025-11-25 16:48:33.001 254096 DEBUG oslo_concurrency.lockutils [req-129fc6ac-38a7-4b9a-a1d4-0b9e180af5c5 req-1736f64b-9883-4695-a181-cb9e52e8d151 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:33 compute-0 nova_compute[254092]: 2025-11-25 16:48:33.001 254096 DEBUG oslo_concurrency.lockutils [req-129fc6ac-38a7-4b9a-a1d4-0b9e180af5c5 req-1736f64b-9883-4695-a181-cb9e52e8d151 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:33 compute-0 nova_compute[254092]: 2025-11-25 16:48:33.002 254096 DEBUG oslo_concurrency.lockutils [req-129fc6ac-38a7-4b9a-a1d4-0b9e180af5c5 req-1736f64b-9883-4695-a181-cb9e52e8d151 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:33 compute-0 nova_compute[254092]: 2025-11-25 16:48:33.002 254096 DEBUG nova.compute.manager [req-129fc6ac-38a7-4b9a-a1d4-0b9e180af5c5 req-1736f64b-9883-4695-a181-cb9e52e8d151 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] No waiting events found dispatching network-vif-plugged-431770e1-476d-40b3-8477-419b69aa4fe9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:48:33 compute-0 nova_compute[254092]: 2025-11-25 16:48:33.002 254096 WARNING nova.compute.manager [req-129fc6ac-38a7-4b9a-a1d4-0b9e180af5c5 req-1736f64b-9883-4695-a181-cb9e52e8d151 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Received unexpected event network-vif-plugged-431770e1-476d-40b3-8477-419b69aa4fe9 for instance with vm_state active and task_state rescuing.
Nov 25 16:48:33 compute-0 nova_compute[254092]: 2025-11-25 16:48:33.104 254096 DEBUG oslo_concurrency.processutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2e848add-8417-4307-8b01-f0d1c1a76cea/disk.config 2e848add-8417-4307-8b01-f0d1c1a76cea_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.190s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:33 compute-0 nova_compute[254092]: 2025-11-25 16:48:33.105 254096 INFO nova.virt.libvirt.driver [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Deleting local config drive /var/lib/nova/instances/2e848add-8417-4307-8b01-f0d1c1a76cea/disk.config because it was imported into RBD.
Nov 25 16:48:33 compute-0 kernel: tap5a3f34de-d3: entered promiscuous mode
Nov 25 16:48:33 compute-0 NetworkManager[48891]: <info>  [1764089313.1697] manager: (tap5a3f34de-d3): new Tun device (/org/freedesktop/NetworkManager/Devices/385)
Nov 25 16:48:33 compute-0 systemd-udevd[350849]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:48:33 compute-0 ovn_controller[153477]: 2025-11-25T16:48:33Z|00915|binding|INFO|Claiming lport 5a3f34de-d3de-439b-ac8f-baabc77892b4 for this chassis.
Nov 25 16:48:33 compute-0 ovn_controller[153477]: 2025-11-25T16:48:33Z|00916|binding|INFO|5a3f34de-d3de-439b-ac8f-baabc77892b4: Claiming fa:16:3e:9d:43:a6 10.100.0.9
Nov 25 16:48:33 compute-0 nova_compute[254092]: 2025-11-25 16:48:33.173 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.183 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:43:a6 10.100.0.9'], port_security=['fa:16:3e:9d:43:a6 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2e848add-8417-4307-8b01-f0d1c1a76cea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa283c2c-b597-4970-842d-f5f2b621b5f0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'aca38536-94ed-4de4-8f0e-6e316a6d451c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=30962290-f1d7-4a88-8e7f-94881d981a74, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=5a3f34de-d3de-439b-ac8f-baabc77892b4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.186 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 5a3f34de-d3de-439b-ac8f-baabc77892b4 in datapath aa283c2c-b597-4970-842d-f5f2b621b5f0 bound to our chassis
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.188 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa283c2c-b597-4970-842d-f5f2b621b5f0
Nov 25 16:48:33 compute-0 NetworkManager[48891]: <info>  [1764089313.1992] device (tap5a3f34de-d3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:48:33 compute-0 NetworkManager[48891]: <info>  [1764089313.1999] device (tap5a3f34de-d3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:48:33 compute-0 nova_compute[254092]: 2025-11-25 16:48:33.204 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:33 compute-0 ovn_controller[153477]: 2025-11-25T16:48:33Z|00917|binding|INFO|Setting lport 5a3f34de-d3de-439b-ac8f-baabc77892b4 ovn-installed in OVS
Nov 25 16:48:33 compute-0 ovn_controller[153477]: 2025-11-25T16:48:33Z|00918|binding|INFO|Setting lport 5a3f34de-d3de-439b-ac8f-baabc77892b4 up in Southbound
Nov 25 16:48:33 compute-0 nova_compute[254092]: 2025-11-25 16:48:33.207 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.207 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7a6eb014-3651-48b7-80c2-57f0b0d02378]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.209 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaa283c2c-b1 in ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.212 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaa283c2c-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.212 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0f219ff6-f059-4cf7-8eb3-015674adff84]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.216 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7d1f34cf-c59e-4b48-b742-aac6b4190341]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:33 compute-0 systemd-machined[216343]: New machine qemu-118-instance-00000060.
Nov 25 16:48:33 compute-0 systemd[1]: Started Virtual Machine qemu-118-instance-00000060.
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.235 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[f4364bba-f1d4-4c2e-81d1-18d20989c9a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.256 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b43a7e7c-411d-40d0-a91f-da6e3a4450bb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.300 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[422697a4-1fef-4171-a1b4-ccdc8d4d69de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:33 compute-0 NetworkManager[48891]: <info>  [1764089313.3103] manager: (tapaa283c2c-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/386)
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.309 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[90d43bbb-22d5-48c7-904b-9d7f0d2b85a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.348 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[cde833cb-8133-43fd-ba4b-1a12e4d43826]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.354 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9ece9534-13b1-4c1e-86a6-c0a2beaad894]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:33 compute-0 NetworkManager[48891]: <info>  [1764089313.3847] device (tapaa283c2c-b0): carrier: link connected
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.389 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[285b28ea-ed5b-4401-b6cc-d1c7fcb06d48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.411 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b6bdabc9-ad56-4c2f-a43d-4a5cae0307d1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa283c2c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:7d:28'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 269], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 577097, 'reachable_time': 23598, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351203, 'error': None, 'target': 'ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.426 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c6cb9de9-f274-4750-91e0-5356cbbe7c55]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8b:7d28'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 577097, 'tstamp': 577097}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351204, 'error': None, 'target': 'ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.453 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f36ae46f-b528-446c-bb57-817e30210d15]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa283c2c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:7d:28'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 269], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 577097, 'reachable_time': 23598, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 351205, 'error': None, 'target': 'ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:33 compute-0 practical_noyce[351091]: --> passed data devices: 0 physical, 3 LVM
Nov 25 16:48:33 compute-0 practical_noyce[351091]: --> relative data size: 1.0
Nov 25 16:48:33 compute-0 practical_noyce[351091]: --> All data devices are unavailable
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.506 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[353f6775-6764-44d8-b196-902b3b8fec95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:33 compute-0 systemd[1]: libpod-7294cae0001f0061e41b08b25a52e344adc63a8fb54e869bc09c12b2f94db1bc.scope: Deactivated successfully.
Nov 25 16:48:33 compute-0 systemd[1]: libpod-7294cae0001f0061e41b08b25a52e344adc63a8fb54e869bc09c12b2f94db1bc.scope: Consumed 1.056s CPU time.
Nov 25 16:48:33 compute-0 podman[351075]: 2025-11-25 16:48:33.529407145 +0000 UTC m=+1.362297452 container died 7294cae0001f0061e41b08b25a52e344adc63a8fb54e869bc09c12b2f94db1bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:48:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-95f63ef649d22a08e33972faf63178452d764ea882f7bc75e9e449a1e8f48dfb-merged.mount: Deactivated successfully.
Nov 25 16:48:33 compute-0 podman[351075]: 2025-11-25 16:48:33.600731543 +0000 UTC m=+1.433621830 container remove 7294cae0001f0061e41b08b25a52e344adc63a8fb54e869bc09c12b2f94db1bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.605 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c15df015-b420-4e25-a4ef-0e381ea6685f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.607 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa283c2c-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.608 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.608 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa283c2c-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:48:33 compute-0 kernel: tapaa283c2c-b0: entered promiscuous mode
Nov 25 16:48:33 compute-0 NetworkManager[48891]: <info>  [1764089313.6477] manager: (tapaa283c2c-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/387)
Nov 25 16:48:33 compute-0 nova_compute[254092]: 2025-11-25 16:48:33.648 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:33 compute-0 systemd[1]: libpod-conmon-7294cae0001f0061e41b08b25a52e344adc63a8fb54e869bc09c12b2f94db1bc.scope: Deactivated successfully.
Nov 25 16:48:33 compute-0 nova_compute[254092]: 2025-11-25 16:48:33.652 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.654 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa283c2c-b0, col_values=(('external_ids', {'iface-id': '82c4ad4d-388e-4238-98b3-8d58946e7829'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:48:33 compute-0 nova_compute[254092]: 2025-11-25 16:48:33.656 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:33 compute-0 ovn_controller[153477]: 2025-11-25T16:48:33Z|00919|binding|INFO|Releasing lport 82c4ad4d-388e-4238-98b3-8d58946e7829 from this chassis (sb_readonly=0)
Nov 25 16:48:33 compute-0 sudo[350890]: pam_unix(sudo:session): session closed for user root
Nov 25 16:48:33 compute-0 nova_compute[254092]: 2025-11-25 16:48:33.680 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.681 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aa283c2c-b597-4970-842d-f5f2b621b5f0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aa283c2c-b597-4970-842d-f5f2b621b5f0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.682 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7028ce90-2ee8-4848-9dc0-159d2f5720ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.687 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-aa283c2c-b597-4970-842d-f5f2b621b5f0
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/aa283c2c-b597-4970-842d-f5f2b621b5f0.pid.haproxy
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID aa283c2c-b597-4970-842d-f5f2b621b5f0
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:48:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:33.687 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0', 'env', 'PROCESS_TAG=haproxy-aa283c2c-b597-4970-842d-f5f2b621b5f0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aa283c2c-b597-4970-842d-f5f2b621b5f0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:48:33 compute-0 sudo[351227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:48:33 compute-0 sudo[351227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:48:33 compute-0 sudo[351227]: pam_unix(sudo:session): session closed for user root
Nov 25 16:48:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1911: 321 pgs: 321 active+clean; 465 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 9.0 MiB/s wr, 254 op/s
Nov 25 16:48:33 compute-0 sudo[351271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:48:33 compute-0 sudo[351271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:48:33 compute-0 sudo[351271]: pam_unix(sudo:session): session closed for user root
Nov 25 16:48:33 compute-0 sudo[351317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:48:33 compute-0 sudo[351317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:48:33 compute-0 sudo[351317]: pam_unix(sudo:session): session closed for user root
Nov 25 16:48:33 compute-0 nova_compute[254092]: 2025-11-25 16:48:33.955 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089313.9552457, 2e848add-8417-4307-8b01-f0d1c1a76cea => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:48:33 compute-0 nova_compute[254092]: 2025-11-25 16:48:33.957 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] VM Started (Lifecycle Event)
Nov 25 16:48:33 compute-0 nova_compute[254092]: 2025-11-25 16:48:33.975 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:48:33 compute-0 nova_compute[254092]: 2025-11-25 16:48:33.980 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089313.956693, 2e848add-8417-4307-8b01-f0d1c1a76cea => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:48:33 compute-0 nova_compute[254092]: 2025-11-25 16:48:33.981 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] VM Paused (Lifecycle Event)
Nov 25 16:48:33 compute-0 sudo[351348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 16:48:33 compute-0 sudo[351348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:48:34 compute-0 nova_compute[254092]: 2025-11-25 16:48:33.999 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:48:34 compute-0 nova_compute[254092]: 2025-11-25 16:48:34.004 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:48:34 compute-0 nova_compute[254092]: 2025-11-25 16:48:34.022 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:48:34 compute-0 nova_compute[254092]: 2025-11-25 16:48:34.096 254096 DEBUG nova.compute.manager [req-bcdf027f-efc9-45b9-9899-120ba175bfb6 req-bb762fc4-8648-4c7f-b2bc-9cc9d3408840 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Received event network-vif-plugged-431770e1-476d-40b3-8477-419b69aa4fe9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:48:34 compute-0 nova_compute[254092]: 2025-11-25 16:48:34.096 254096 DEBUG oslo_concurrency.lockutils [req-bcdf027f-efc9-45b9-9899-120ba175bfb6 req-bb762fc4-8648-4c7f-b2bc-9cc9d3408840 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:34 compute-0 nova_compute[254092]: 2025-11-25 16:48:34.097 254096 DEBUG oslo_concurrency.lockutils [req-bcdf027f-efc9-45b9-9899-120ba175bfb6 req-bb762fc4-8648-4c7f-b2bc-9cc9d3408840 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:34 compute-0 nova_compute[254092]: 2025-11-25 16:48:34.097 254096 DEBUG oslo_concurrency.lockutils [req-bcdf027f-efc9-45b9-9899-120ba175bfb6 req-bb762fc4-8648-4c7f-b2bc-9cc9d3408840 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:34 compute-0 nova_compute[254092]: 2025-11-25 16:48:34.097 254096 DEBUG nova.compute.manager [req-bcdf027f-efc9-45b9-9899-120ba175bfb6 req-bb762fc4-8648-4c7f-b2bc-9cc9d3408840 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] No waiting events found dispatching network-vif-plugged-431770e1-476d-40b3-8477-419b69aa4fe9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:48:34 compute-0 nova_compute[254092]: 2025-11-25 16:48:34.097 254096 WARNING nova.compute.manager [req-bcdf027f-efc9-45b9-9899-120ba175bfb6 req-bb762fc4-8648-4c7f-b2bc-9cc9d3408840 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Received unexpected event network-vif-plugged-431770e1-476d-40b3-8477-419b69aa4fe9 for instance with vm_state rescued and task_state None.
Nov 25 16:48:34 compute-0 nova_compute[254092]: 2025-11-25 16:48:34.097 254096 DEBUG nova.compute.manager [req-bcdf027f-efc9-45b9-9899-120ba175bfb6 req-bb762fc4-8648-4c7f-b2bc-9cc9d3408840 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Received event network-changed-591e580e-30bb-4c0d-b1fb-96d45eca5626 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:48:34 compute-0 nova_compute[254092]: 2025-11-25 16:48:34.098 254096 DEBUG nova.compute.manager [req-bcdf027f-efc9-45b9-9899-120ba175bfb6 req-bb762fc4-8648-4c7f-b2bc-9cc9d3408840 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Refreshing instance network info cache due to event network-changed-591e580e-30bb-4c0d-b1fb-96d45eca5626. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:48:34 compute-0 nova_compute[254092]: 2025-11-25 16:48:34.098 254096 DEBUG oslo_concurrency.lockutils [req-bcdf027f-efc9-45b9-9899-120ba175bfb6 req-bb762fc4-8648-4c7f-b2bc-9cc9d3408840 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-e0098976-026f-43d8-b686-b2658f9aded9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:48:34 compute-0 podman[351396]: 2025-11-25 16:48:34.18680279 +0000 UTC m=+0.066108568 container create c66a1eef8323cd955df0d1a7a05d911b32b83de679881b527bd383cf32f85431 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:48:34 compute-0 systemd[1]: Started libpod-conmon-c66a1eef8323cd955df0d1a7a05d911b32b83de679881b527bd383cf32f85431.scope.
Nov 25 16:48:34 compute-0 podman[351396]: 2025-11-25 16:48:34.153349961 +0000 UTC m=+0.032655769 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:48:34 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:48:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f65b33bb2cefdb9521aad3458f01f2c8320441c773aab270729c09b9835d7b2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:48:34 compute-0 podman[351396]: 2025-11-25 16:48:34.301886437 +0000 UTC m=+0.181192215 container init c66a1eef8323cd955df0d1a7a05d911b32b83de679881b527bd383cf32f85431 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:48:34 compute-0 podman[351396]: 2025-11-25 16:48:34.307406337 +0000 UTC m=+0.186712105 container start c66a1eef8323cd955df0d1a7a05d911b32b83de679881b527bd383cf32f85431 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 25 16:48:34 compute-0 neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0[351428]: [NOTICE]   (351439) : New worker (351441) forked
Nov 25 16:48:34 compute-0 neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0[351428]: [NOTICE]   (351439) : Loading success.
Nov 25 16:48:34 compute-0 podman[351464]: 2025-11-25 16:48:34.488879829 +0000 UTC m=+0.047605325 container create eefc322e46cab0f2292fbaf863c014acc7f3d236e0d2e6bed549a7a0b23355fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_almeida, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:48:34 compute-0 systemd[1]: Started libpod-conmon-eefc322e46cab0f2292fbaf863c014acc7f3d236e0d2e6bed549a7a0b23355fd.scope.
Nov 25 16:48:34 compute-0 podman[351464]: 2025-11-25 16:48:34.466629984 +0000 UTC m=+0.025355500 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:48:34 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:48:34 compute-0 podman[351464]: 2025-11-25 16:48:34.587137979 +0000 UTC m=+0.145863485 container init eefc322e46cab0f2292fbaf863c014acc7f3d236e0d2e6bed549a7a0b23355fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:48:34 compute-0 podman[351464]: 2025-11-25 16:48:34.595751673 +0000 UTC m=+0.154477169 container start eefc322e46cab0f2292fbaf863c014acc7f3d236e0d2e6bed549a7a0b23355fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 16:48:34 compute-0 podman[351464]: 2025-11-25 16:48:34.599626498 +0000 UTC m=+0.158352024 container attach eefc322e46cab0f2292fbaf863c014acc7f3d236e0d2e6bed549a7a0b23355fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_almeida, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 16:48:34 compute-0 xenodochial_almeida[351481]: 167 167
Nov 25 16:48:34 compute-0 podman[351464]: 2025-11-25 16:48:34.604348717 +0000 UTC m=+0.163074213 container died eefc322e46cab0f2292fbaf863c014acc7f3d236e0d2e6bed549a7a0b23355fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_almeida, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 16:48:34 compute-0 systemd[1]: libpod-eefc322e46cab0f2292fbaf863c014acc7f3d236e0d2e6bed549a7a0b23355fd.scope: Deactivated successfully.
Nov 25 16:48:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6932a81371c4f706c700a68e6e80b00ab29666650c1835507375f8f9bbd8ac4-merged.mount: Deactivated successfully.
Nov 25 16:48:34 compute-0 podman[351464]: 2025-11-25 16:48:34.647680484 +0000 UTC m=+0.206405980 container remove eefc322e46cab0f2292fbaf863c014acc7f3d236e0d2e6bed549a7a0b23355fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 16:48:34 compute-0 systemd[1]: libpod-conmon-eefc322e46cab0f2292fbaf863c014acc7f3d236e0d2e6bed549a7a0b23355fd.scope: Deactivated successfully.
Nov 25 16:48:34 compute-0 ceph-mon[74985]: pgmap v1911: 321 pgs: 321 active+clean; 465 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 9.0 MiB/s wr, 254 op/s
Nov 25 16:48:34 compute-0 podman[351505]: 2025-11-25 16:48:34.85873002 +0000 UTC m=+0.043454883 container create 16af17d5d38ab1ac0adf04e1ea51fd28e2764cbf143dd55cba757f576035c7f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 16:48:34 compute-0 systemd[1]: Started libpod-conmon-16af17d5d38ab1ac0adf04e1ea51fd28e2764cbf143dd55cba757f576035c7f6.scope.
Nov 25 16:48:34 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:48:34 compute-0 podman[351505]: 2025-11-25 16:48:34.843360111 +0000 UTC m=+0.028084994 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:48:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26806fbc60db822eba5ecde18f3c721d5eaf6921d38dbc81e84342711b087b19/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:48:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26806fbc60db822eba5ecde18f3c721d5eaf6921d38dbc81e84342711b087b19/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:48:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26806fbc60db822eba5ecde18f3c721d5eaf6921d38dbc81e84342711b087b19/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:48:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26806fbc60db822eba5ecde18f3c721d5eaf6921d38dbc81e84342711b087b19/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:48:34 compute-0 podman[351505]: 2025-11-25 16:48:34.962187441 +0000 UTC m=+0.146912334 container init 16af17d5d38ab1ac0adf04e1ea51fd28e2764cbf143dd55cba757f576035c7f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_vaughan, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:48:34 compute-0 podman[351505]: 2025-11-25 16:48:34.968448911 +0000 UTC m=+0.153173774 container start 16af17d5d38ab1ac0adf04e1ea51fd28e2764cbf143dd55cba757f576035c7f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:48:34 compute-0 podman[351505]: 2025-11-25 16:48:34.972977574 +0000 UTC m=+0.157702457 container attach 16af17d5d38ab1ac0adf04e1ea51fd28e2764cbf143dd55cba757f576035c7f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_vaughan, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 16:48:35 compute-0 nova_compute[254092]: 2025-11-25 16:48:35.126 254096 DEBUG nova.network.neutron [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Updating instance_info_cache with network_info: [{"id": "591e580e-30bb-4c0d-b1fb-96d45eca5626", "address": "fa:16:3e:7d:85:c3", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap591e580e-30", "ovs_interfaceid": "591e580e-30bb-4c0d-b1fb-96d45eca5626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:48:35 compute-0 nova_compute[254092]: 2025-11-25 16:48:35.151 254096 DEBUG oslo_concurrency.lockutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Releasing lock "refresh_cache-e0098976-026f-43d8-b686-b2658f9aded9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:48:35 compute-0 nova_compute[254092]: 2025-11-25 16:48:35.152 254096 DEBUG nova.compute.manager [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Instance network_info: |[{"id": "591e580e-30bb-4c0d-b1fb-96d45eca5626", "address": "fa:16:3e:7d:85:c3", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap591e580e-30", "ovs_interfaceid": "591e580e-30bb-4c0d-b1fb-96d45eca5626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:48:35 compute-0 nova_compute[254092]: 2025-11-25 16:48:35.153 254096 DEBUG oslo_concurrency.lockutils [req-bcdf027f-efc9-45b9-9899-120ba175bfb6 req-bb762fc4-8648-4c7f-b2bc-9cc9d3408840 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-e0098976-026f-43d8-b686-b2658f9aded9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:48:35 compute-0 nova_compute[254092]: 2025-11-25 16:48:35.153 254096 DEBUG nova.network.neutron [req-bcdf027f-efc9-45b9-9899-120ba175bfb6 req-bb762fc4-8648-4c7f-b2bc-9cc9d3408840 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Refreshing network info cache for port 591e580e-30bb-4c0d-b1fb-96d45eca5626 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:48:35 compute-0 nova_compute[254092]: 2025-11-25 16:48:35.156 254096 DEBUG nova.virt.libvirt.driver [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Start _get_guest_xml network_info=[{"id": "591e580e-30bb-4c0d-b1fb-96d45eca5626", "address": "fa:16:3e:7d:85:c3", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap591e580e-30", "ovs_interfaceid": "591e580e-30bb-4c0d-b1fb-96d45eca5626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:48:35 compute-0 nova_compute[254092]: 2025-11-25 16:48:35.163 254096 WARNING nova.virt.libvirt.driver [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:48:35 compute-0 nova_compute[254092]: 2025-11-25 16:48:35.173 254096 DEBUG nova.virt.libvirt.host [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:48:35 compute-0 nova_compute[254092]: 2025-11-25 16:48:35.175 254096 DEBUG nova.virt.libvirt.host [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:48:35 compute-0 nova_compute[254092]: 2025-11-25 16:48:35.180 254096 DEBUG nova.virt.libvirt.host [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:48:35 compute-0 nova_compute[254092]: 2025-11-25 16:48:35.181 254096 DEBUG nova.virt.libvirt.host [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:48:35 compute-0 nova_compute[254092]: 2025-11-25 16:48:35.182 254096 DEBUG nova.virt.libvirt.driver [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:48:35 compute-0 nova_compute[254092]: 2025-11-25 16:48:35.182 254096 DEBUG nova.virt.hardware [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:48:35 compute-0 nova_compute[254092]: 2025-11-25 16:48:35.183 254096 DEBUG nova.virt.hardware [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:48:35 compute-0 nova_compute[254092]: 2025-11-25 16:48:35.183 254096 DEBUG nova.virt.hardware [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:48:35 compute-0 nova_compute[254092]: 2025-11-25 16:48:35.184 254096 DEBUG nova.virt.hardware [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:48:35 compute-0 nova_compute[254092]: 2025-11-25 16:48:35.184 254096 DEBUG nova.virt.hardware [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:48:35 compute-0 nova_compute[254092]: 2025-11-25 16:48:35.185 254096 DEBUG nova.virt.hardware [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:48:35 compute-0 nova_compute[254092]: 2025-11-25 16:48:35.185 254096 DEBUG nova.virt.hardware [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:48:35 compute-0 nova_compute[254092]: 2025-11-25 16:48:35.186 254096 DEBUG nova.virt.hardware [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:48:35 compute-0 nova_compute[254092]: 2025-11-25 16:48:35.186 254096 DEBUG nova.virt.hardware [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:48:35 compute-0 nova_compute[254092]: 2025-11-25 16:48:35.186 254096 DEBUG nova.virt.hardware [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:48:35 compute-0 nova_compute[254092]: 2025-11-25 16:48:35.187 254096 DEBUG nova.virt.hardware [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:48:35 compute-0 nova_compute[254092]: 2025-11-25 16:48:35.192 254096 DEBUG oslo_concurrency.processutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:48:35 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2911482340' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:48:35 compute-0 nova_compute[254092]: 2025-11-25 16:48:35.713 254096 DEBUG oslo_concurrency.processutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:35 compute-0 nova_compute[254092]: 2025-11-25 16:48:35.745 254096 DEBUG nova.storage.rbd_utils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image e0098976-026f-43d8-b686-b2658f9aded9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]: {
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:     "0": [
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:         {
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:             "devices": [
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:                 "/dev/loop3"
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:             ],
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:             "lv_name": "ceph_lv0",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:             "lv_size": "21470642176",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:             "name": "ceph_lv0",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:             "tags": {
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:                 "ceph.cluster_name": "ceph",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:                 "ceph.crush_device_class": "",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:                 "ceph.encrypted": "0",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:                 "ceph.osd_id": "0",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:                 "ceph.type": "block",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:                 "ceph.vdo": "0"
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:             },
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:             "type": "block",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:             "vg_name": "ceph_vg0"
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:         }
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:     ],
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:     "1": [
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:         {
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:             "devices": [
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:                 "/dev/loop4"
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:             ],
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:             "lv_name": "ceph_lv1",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:             "lv_size": "21470642176",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:             "name": "ceph_lv1",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:             "tags": {
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:                 "ceph.cluster_name": "ceph",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:                 "ceph.crush_device_class": "",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:                 "ceph.encrypted": "0",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:                 "ceph.osd_id": "1",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:                 "ceph.type": "block",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:                 "ceph.vdo": "0"
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:             },
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:             "type": "block",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:             "vg_name": "ceph_vg1"
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:         }
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:     ],
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:     "2": [
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:         {
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:             "devices": [
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:                 "/dev/loop5"
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:             ],
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:             "lv_name": "ceph_lv2",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:             "lv_size": "21470642176",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:             "name": "ceph_lv2",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:             "tags": {
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:                 "ceph.cluster_name": "ceph",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:                 "ceph.crush_device_class": "",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:                 "ceph.encrypted": "0",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:                 "ceph.osd_id": "2",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:                 "ceph.type": "block",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:                 "ceph.vdo": "0"
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:             },
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:             "type": "block",
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:             "vg_name": "ceph_vg2"
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:         }
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]:     ]
Nov 25 16:48:35 compute-0 nostalgic_vaughan[351522]: }
Nov 25 16:48:35 compute-0 nova_compute[254092]: 2025-11-25 16:48:35.751 254096 DEBUG oslo_concurrency.processutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1912: 321 pgs: 321 active+clean; 465 MiB data, 878 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.9 MiB/s wr, 174 op/s
Nov 25 16:48:35 compute-0 systemd[1]: libpod-16af17d5d38ab1ac0adf04e1ea51fd28e2764cbf143dd55cba757f576035c7f6.scope: Deactivated successfully.
Nov 25 16:48:35 compute-0 conmon[351522]: conmon 16af17d5d38ab1ac0adf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-16af17d5d38ab1ac0adf04e1ea51fd28e2764cbf143dd55cba757f576035c7f6.scope/container/memory.events
Nov 25 16:48:35 compute-0 podman[351572]: 2025-11-25 16:48:35.832485461 +0000 UTC m=+0.026986564 container died 16af17d5d38ab1ac0adf04e1ea51fd28e2764cbf143dd55cba757f576035c7f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_vaughan, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Nov 25 16:48:35 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2911482340' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:48:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-26806fbc60db822eba5ecde18f3c721d5eaf6921d38dbc81e84342711b087b19-merged.mount: Deactivated successfully.
Nov 25 16:48:35 compute-0 podman[351572]: 2025-11-25 16:48:35.928228844 +0000 UTC m=+0.122729947 container remove 16af17d5d38ab1ac0adf04e1ea51fd28e2764cbf143dd55cba757f576035c7f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_vaughan, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Nov 25 16:48:35 compute-0 systemd[1]: libpod-conmon-16af17d5d38ab1ac0adf04e1ea51fd28e2764cbf143dd55cba757f576035c7f6.scope: Deactivated successfully.
Nov 25 16:48:35 compute-0 sudo[351348]: pam_unix(sudo:session): session closed for user root
Nov 25 16:48:35 compute-0 nova_compute[254092]: 2025-11-25 16:48:35.976 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:36 compute-0 sudo[351607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:48:36 compute-0 sudo[351607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:48:36 compute-0 sudo[351607]: pam_unix(sudo:session): session closed for user root
Nov 25 16:48:36 compute-0 sudo[351632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:48:36 compute-0 sudo[351632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:48:36 compute-0 sudo[351632]: pam_unix(sudo:session): session closed for user root
Nov 25 16:48:36 compute-0 sudo[351657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:48:36 compute-0 sudo[351657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:48:36 compute-0 sudo[351657]: pam_unix(sudo:session): session closed for user root
Nov 25 16:48:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:48:36 compute-0 sudo[351682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 16:48:36 compute-0 sudo[351682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:48:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:48:36 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1614159682' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:48:36 compute-0 nova_compute[254092]: 2025-11-25 16:48:36.394 254096 DEBUG oslo_concurrency.processutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.642s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:36 compute-0 nova_compute[254092]: 2025-11-25 16:48:36.395 254096 DEBUG nova.virt.libvirt.vif [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:48:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-738000202',display_name='tempest-ServerActionsTestOtherB-server-738000202',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-738000202',id=97,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fbf763b31dad40d6b0d7285dc017dd89',ramdisk_id='',reservation_id='r-cz0mxxg2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1019246920',owner_user_name='tempest-ServerActionsTestOtherB-1019246920-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:48:25Z,user_data=None,user_id='23f6db77558a477bbd8b8b46cb4107d1',uuid=e0098976-026f-43d8-b686-b2658f9aded9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "591e580e-30bb-4c0d-b1fb-96d45eca5626", "address": "fa:16:3e:7d:85:c3", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap591e580e-30", "ovs_interfaceid": "591e580e-30bb-4c0d-b1fb-96d45eca5626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:48:36 compute-0 nova_compute[254092]: 2025-11-25 16:48:36.396 254096 DEBUG nova.network.os_vif_util [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Converting VIF {"id": "591e580e-30bb-4c0d-b1fb-96d45eca5626", "address": "fa:16:3e:7d:85:c3", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap591e580e-30", "ovs_interfaceid": "591e580e-30bb-4c0d-b1fb-96d45eca5626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:48:36 compute-0 nova_compute[254092]: 2025-11-25 16:48:36.397 254096 DEBUG nova.network.os_vif_util [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:85:c3,bridge_name='br-int',has_traffic_filtering=True,id=591e580e-30bb-4c0d-b1fb-96d45eca5626,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap591e580e-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:48:36 compute-0 nova_compute[254092]: 2025-11-25 16:48:36.398 254096 DEBUG nova.objects.instance [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lazy-loading 'pci_devices' on Instance uuid e0098976-026f-43d8-b686-b2658f9aded9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:48:36 compute-0 nova_compute[254092]: 2025-11-25 16:48:36.424 254096 DEBUG nova.virt.libvirt.driver [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:48:36 compute-0 nova_compute[254092]:   <uuid>e0098976-026f-43d8-b686-b2658f9aded9</uuid>
Nov 25 16:48:36 compute-0 nova_compute[254092]:   <name>instance-00000061</name>
Nov 25 16:48:36 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:48:36 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:48:36 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:48:36 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:       <nova:name>tempest-ServerActionsTestOtherB-server-738000202</nova:name>
Nov 25 16:48:36 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:48:35</nova:creationTime>
Nov 25 16:48:36 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:48:36 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:48:36 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:48:36 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:48:36 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:48:36 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:48:36 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:48:36 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:48:36 compute-0 nova_compute[254092]:         <nova:user uuid="23f6db77558a477bbd8b8b46cb4107d1">tempest-ServerActionsTestOtherB-1019246920-project-member</nova:user>
Nov 25 16:48:36 compute-0 nova_compute[254092]:         <nova:project uuid="fbf763b31dad40d6b0d7285dc017dd89">tempest-ServerActionsTestOtherB-1019246920</nova:project>
Nov 25 16:48:36 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:48:36 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:48:36 compute-0 nova_compute[254092]:         <nova:port uuid="591e580e-30bb-4c0d-b1fb-96d45eca5626">
Nov 25 16:48:36 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:48:36 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:48:36 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:48:36 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:48:36 compute-0 nova_compute[254092]:     <system>
Nov 25 16:48:36 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:48:36 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:48:36 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:48:36 compute-0 nova_compute[254092]:       <entry name="serial">e0098976-026f-43d8-b686-b2658f9aded9</entry>
Nov 25 16:48:36 compute-0 nova_compute[254092]:       <entry name="uuid">e0098976-026f-43d8-b686-b2658f9aded9</entry>
Nov 25 16:48:36 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     </system>
Nov 25 16:48:36 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:48:36 compute-0 nova_compute[254092]:   <os>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:   </os>
Nov 25 16:48:36 compute-0 nova_compute[254092]:   <features>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:   </features>
Nov 25 16:48:36 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:48:36 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:48:36 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:48:36 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:48:36 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:48:36 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/e0098976-026f-43d8-b686-b2658f9aded9_disk">
Nov 25 16:48:36 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:       </source>
Nov 25 16:48:36 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:48:36 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:48:36 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:48:36 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/e0098976-026f-43d8-b686-b2658f9aded9_disk.config">
Nov 25 16:48:36 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:       </source>
Nov 25 16:48:36 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:48:36 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:48:36 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:48:36 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:7d:85:c3"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:       <target dev="tap591e580e-30"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:48:36 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/e0098976-026f-43d8-b686-b2658f9aded9/console.log" append="off"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     <video>
Nov 25 16:48:36 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     </video>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:48:36 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:48:36 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:48:36 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:48:36 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:48:36 compute-0 nova_compute[254092]: </domain>
Nov 25 16:48:36 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:48:36 compute-0 nova_compute[254092]: 2025-11-25 16:48:36.425 254096 DEBUG nova.compute.manager [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Preparing to wait for external event network-vif-plugged-591e580e-30bb-4c0d-b1fb-96d45eca5626 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:48:36 compute-0 nova_compute[254092]: 2025-11-25 16:48:36.425 254096 DEBUG oslo_concurrency.lockutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "e0098976-026f-43d8-b686-b2658f9aded9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:36 compute-0 nova_compute[254092]: 2025-11-25 16:48:36.426 254096 DEBUG oslo_concurrency.lockutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "e0098976-026f-43d8-b686-b2658f9aded9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:36 compute-0 nova_compute[254092]: 2025-11-25 16:48:36.426 254096 DEBUG oslo_concurrency.lockutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "e0098976-026f-43d8-b686-b2658f9aded9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:36 compute-0 nova_compute[254092]: 2025-11-25 16:48:36.427 254096 DEBUG nova.virt.libvirt.vif [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:48:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-738000202',display_name='tempest-ServerActionsTestOtherB-server-738000202',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-738000202',id=97,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fbf763b31dad40d6b0d7285dc017dd89',ramdisk_id='',reservation_id='r-cz0mxxg2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1019246920',owner_user_name='tempest-ServerActionsTestOtherB-1019246920-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:48:25Z,user_data=None,user_id='23f6db77558a477bbd8b8b46cb4107d1',uuid=e0098976-026f-43d8-b686-b2658f9aded9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "591e580e-30bb-4c0d-b1fb-96d45eca5626", "address": "fa:16:3e:7d:85:c3", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap591e580e-30", "ovs_interfaceid": "591e580e-30bb-4c0d-b1fb-96d45eca5626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:48:36 compute-0 nova_compute[254092]: 2025-11-25 16:48:36.427 254096 DEBUG nova.network.os_vif_util [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Converting VIF {"id": "591e580e-30bb-4c0d-b1fb-96d45eca5626", "address": "fa:16:3e:7d:85:c3", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap591e580e-30", "ovs_interfaceid": "591e580e-30bb-4c0d-b1fb-96d45eca5626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:48:36 compute-0 nova_compute[254092]: 2025-11-25 16:48:36.428 254096 DEBUG nova.network.os_vif_util [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:85:c3,bridge_name='br-int',has_traffic_filtering=True,id=591e580e-30bb-4c0d-b1fb-96d45eca5626,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap591e580e-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:48:36 compute-0 nova_compute[254092]: 2025-11-25 16:48:36.428 254096 DEBUG os_vif [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:85:c3,bridge_name='br-int',has_traffic_filtering=True,id=591e580e-30bb-4c0d-b1fb-96d45eca5626,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap591e580e-30') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:48:36 compute-0 nova_compute[254092]: 2025-11-25 16:48:36.429 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:36 compute-0 nova_compute[254092]: 2025-11-25 16:48:36.429 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:48:36 compute-0 nova_compute[254092]: 2025-11-25 16:48:36.430 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:48:36 compute-0 nova_compute[254092]: 2025-11-25 16:48:36.433 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:36 compute-0 nova_compute[254092]: 2025-11-25 16:48:36.433 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap591e580e-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:48:36 compute-0 nova_compute[254092]: 2025-11-25 16:48:36.433 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap591e580e-30, col_values=(('external_ids', {'iface-id': '591e580e-30bb-4c0d-b1fb-96d45eca5626', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7d:85:c3', 'vm-uuid': 'e0098976-026f-43d8-b686-b2658f9aded9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:48:36 compute-0 nova_compute[254092]: 2025-11-25 16:48:36.435 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:36 compute-0 NetworkManager[48891]: <info>  [1764089316.4364] manager: (tap591e580e-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/388)
Nov 25 16:48:36 compute-0 nova_compute[254092]: 2025-11-25 16:48:36.438 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:48:36 compute-0 nova_compute[254092]: 2025-11-25 16:48:36.447 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:36 compute-0 nova_compute[254092]: 2025-11-25 16:48:36.449 254096 INFO os_vif [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:85:c3,bridge_name='br-int',has_traffic_filtering=True,id=591e580e-30bb-4c0d-b1fb-96d45eca5626,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap591e580e-30')
Nov 25 16:48:36 compute-0 nova_compute[254092]: 2025-11-25 16:48:36.581 254096 DEBUG nova.virt.libvirt.driver [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:48:36 compute-0 nova_compute[254092]: 2025-11-25 16:48:36.582 254096 DEBUG nova.virt.libvirt.driver [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:48:36 compute-0 nova_compute[254092]: 2025-11-25 16:48:36.582 254096 DEBUG nova.virt.libvirt.driver [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] No VIF found with MAC fa:16:3e:7d:85:c3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:48:36 compute-0 nova_compute[254092]: 2025-11-25 16:48:36.582 254096 INFO nova.virt.libvirt.driver [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Using config drive
Nov 25 16:48:36 compute-0 nova_compute[254092]: 2025-11-25 16:48:36.625 254096 DEBUG nova.storage.rbd_utils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image e0098976-026f-43d8-b686-b2658f9aded9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:36 compute-0 podman[351766]: 2025-11-25 16:48:36.684253648 +0000 UTC m=+0.067759712 container create 98d82bddb5f33e4e4501e69511007e3b3679fbedcf0fd7495a10fdb958770ee6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_shannon, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:48:36 compute-0 systemd[1]: Started libpod-conmon-98d82bddb5f33e4e4501e69511007e3b3679fbedcf0fd7495a10fdb958770ee6.scope.
Nov 25 16:48:36 compute-0 podman[351766]: 2025-11-25 16:48:36.656962907 +0000 UTC m=+0.040468961 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:48:36 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:48:36 compute-0 podman[351766]: 2025-11-25 16:48:36.918092803 +0000 UTC m=+0.301598887 container init 98d82bddb5f33e4e4501e69511007e3b3679fbedcf0fd7495a10fdb958770ee6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_shannon, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:48:36 compute-0 podman[351766]: 2025-11-25 16:48:36.927543819 +0000 UTC m=+0.311049883 container start 98d82bddb5f33e4e4501e69511007e3b3679fbedcf0fd7495a10fdb958770ee6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 16:48:36 compute-0 determined_shannon[351786]: 167 167
Nov 25 16:48:36 compute-0 systemd[1]: libpod-98d82bddb5f33e4e4501e69511007e3b3679fbedcf0fd7495a10fdb958770ee6.scope: Deactivated successfully.
Nov 25 16:48:36 compute-0 ceph-mon[74985]: pgmap v1912: 321 pgs: 321 active+clean; 465 MiB data, 878 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.9 MiB/s wr, 174 op/s
Nov 25 16:48:36 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1614159682' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:48:37 compute-0 podman[351766]: 2025-11-25 16:48:37.053923524 +0000 UTC m=+0.437429608 container attach 98d82bddb5f33e4e4501e69511007e3b3679fbedcf0fd7495a10fdb958770ee6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:48:37 compute-0 podman[351766]: 2025-11-25 16:48:37.054412488 +0000 UTC m=+0.437918572 container died 98d82bddb5f33e4e4501e69511007e3b3679fbedcf0fd7495a10fdb958770ee6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_shannon, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:48:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-90d49041b59eec2a9039dd14b5e8a99a464a2baeab226cef2a02078f18820fd6-merged.mount: Deactivated successfully.
Nov 25 16:48:37 compute-0 podman[351766]: 2025-11-25 16:48:37.142889652 +0000 UTC m=+0.526395716 container remove 98d82bddb5f33e4e4501e69511007e3b3679fbedcf0fd7495a10fdb958770ee6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:48:37 compute-0 systemd[1]: libpod-conmon-98d82bddb5f33e4e4501e69511007e3b3679fbedcf0fd7495a10fdb958770ee6.scope: Deactivated successfully.
Nov 25 16:48:37 compute-0 podman[351809]: 2025-11-25 16:48:37.345704124 +0000 UTC m=+0.043397081 container create 7c4fa8ab5f0fbfa47ff1d7a34612d0ae5f53b29305cdcaad9a58cc52d15cb6c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 16:48:37 compute-0 systemd[1]: Started libpod-conmon-7c4fa8ab5f0fbfa47ff1d7a34612d0ae5f53b29305cdcaad9a58cc52d15cb6c1.scope.
Nov 25 16:48:37 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:48:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd4c91c0fb18d290bcbf8e5f9e551abb4f8f509ac8870db19ac4f7827b2b3445/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:48:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd4c91c0fb18d290bcbf8e5f9e551abb4f8f509ac8870db19ac4f7827b2b3445/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:48:37 compute-0 podman[351809]: 2025-11-25 16:48:37.327992942 +0000 UTC m=+0.025685919 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:48:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd4c91c0fb18d290bcbf8e5f9e551abb4f8f509ac8870db19ac4f7827b2b3445/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:48:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd4c91c0fb18d290bcbf8e5f9e551abb4f8f509ac8870db19ac4f7827b2b3445/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:48:37 compute-0 podman[351809]: 2025-11-25 16:48:37.437691483 +0000 UTC m=+0.135384450 container init 7c4fa8ab5f0fbfa47ff1d7a34612d0ae5f53b29305cdcaad9a58cc52d15cb6c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_payne, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:48:37 compute-0 podman[351809]: 2025-11-25 16:48:37.44418059 +0000 UTC m=+0.141873547 container start 7c4fa8ab5f0fbfa47ff1d7a34612d0ae5f53b29305cdcaad9a58cc52d15cb6c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_payne, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 16:48:37 compute-0 podman[351809]: 2025-11-25 16:48:37.44787617 +0000 UTC m=+0.145569147 container attach 7c4fa8ab5f0fbfa47ff1d7a34612d0ae5f53b29305cdcaad9a58cc52d15cb6c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:48:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1913: 321 pgs: 321 active+clean; 465 MiB data, 878 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.3 MiB/s wr, 157 op/s
Nov 25 16:48:38 compute-0 nova_compute[254092]: 2025-11-25 16:48:38.081 254096 INFO nova.virt.libvirt.driver [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Creating config drive at /var/lib/nova/instances/e0098976-026f-43d8-b686-b2658f9aded9/disk.config
Nov 25 16:48:38 compute-0 ovn_controller[153477]: 2025-11-25T16:48:38Z|00097|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:87:86:24 10.100.0.12
Nov 25 16:48:38 compute-0 ovn_controller[153477]: 2025-11-25T16:48:38Z|00098|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:87:86:24 10.100.0.12
Nov 25 16:48:38 compute-0 nova_compute[254092]: 2025-11-25 16:48:38.088 254096 DEBUG oslo_concurrency.processutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e0098976-026f-43d8-b686-b2658f9aded9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuyhx0q1t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:38 compute-0 nova_compute[254092]: 2025-11-25 16:48:38.230 254096 DEBUG oslo_concurrency.processutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e0098976-026f-43d8-b686-b2658f9aded9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuyhx0q1t" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:38 compute-0 nova_compute[254092]: 2025-11-25 16:48:38.259 254096 DEBUG nova.storage.rbd_utils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image e0098976-026f-43d8-b686-b2658f9aded9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:38 compute-0 nova_compute[254092]: 2025-11-25 16:48:38.263 254096 DEBUG oslo_concurrency.processutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e0098976-026f-43d8-b686-b2658f9aded9/disk.config e0098976-026f-43d8-b686-b2658f9aded9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:38 compute-0 nova_compute[254092]: 2025-11-25 16:48:38.369 254096 DEBUG nova.compute.manager [req-d20d668f-5eec-4fd0-bd71-0dff6da9e099 req-55d978c4-9ecc-4212-8e2c-36fe0563f52a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Received event network-vif-plugged-431770e1-476d-40b3-8477-419b69aa4fe9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:48:38 compute-0 nova_compute[254092]: 2025-11-25 16:48:38.370 254096 DEBUG oslo_concurrency.lockutils [req-d20d668f-5eec-4fd0-bd71-0dff6da9e099 req-55d978c4-9ecc-4212-8e2c-36fe0563f52a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:38 compute-0 nova_compute[254092]: 2025-11-25 16:48:38.371 254096 DEBUG oslo_concurrency.lockutils [req-d20d668f-5eec-4fd0-bd71-0dff6da9e099 req-55d978c4-9ecc-4212-8e2c-36fe0563f52a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:38 compute-0 nova_compute[254092]: 2025-11-25 16:48:38.371 254096 DEBUG oslo_concurrency.lockutils [req-d20d668f-5eec-4fd0-bd71-0dff6da9e099 req-55d978c4-9ecc-4212-8e2c-36fe0563f52a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:38 compute-0 nova_compute[254092]: 2025-11-25 16:48:38.371 254096 DEBUG nova.compute.manager [req-d20d668f-5eec-4fd0-bd71-0dff6da9e099 req-55d978c4-9ecc-4212-8e2c-36fe0563f52a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] No waiting events found dispatching network-vif-plugged-431770e1-476d-40b3-8477-419b69aa4fe9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:48:38 compute-0 nova_compute[254092]: 2025-11-25 16:48:38.372 254096 WARNING nova.compute.manager [req-d20d668f-5eec-4fd0-bd71-0dff6da9e099 req-55d978c4-9ecc-4212-8e2c-36fe0563f52a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Received unexpected event network-vif-plugged-431770e1-476d-40b3-8477-419b69aa4fe9 for instance with vm_state rescued and task_state None.
Nov 25 16:48:38 compute-0 nova_compute[254092]: 2025-11-25 16:48:38.372 254096 DEBUG nova.compute.manager [req-d20d668f-5eec-4fd0-bd71-0dff6da9e099 req-55d978c4-9ecc-4212-8e2c-36fe0563f52a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Received event network-vif-plugged-5a3f34de-d3de-439b-ac8f-baabc77892b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:48:38 compute-0 nova_compute[254092]: 2025-11-25 16:48:38.372 254096 DEBUG oslo_concurrency.lockutils [req-d20d668f-5eec-4fd0-bd71-0dff6da9e099 req-55d978c4-9ecc-4212-8e2c-36fe0563f52a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:38 compute-0 nova_compute[254092]: 2025-11-25 16:48:38.373 254096 DEBUG oslo_concurrency.lockutils [req-d20d668f-5eec-4fd0-bd71-0dff6da9e099 req-55d978c4-9ecc-4212-8e2c-36fe0563f52a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:38 compute-0 nova_compute[254092]: 2025-11-25 16:48:38.373 254096 DEBUG oslo_concurrency.lockutils [req-d20d668f-5eec-4fd0-bd71-0dff6da9e099 req-55d978c4-9ecc-4212-8e2c-36fe0563f52a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:38 compute-0 nova_compute[254092]: 2025-11-25 16:48:38.373 254096 DEBUG nova.compute.manager [req-d20d668f-5eec-4fd0-bd71-0dff6da9e099 req-55d978c4-9ecc-4212-8e2c-36fe0563f52a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Processing event network-vif-plugged-5a3f34de-d3de-439b-ac8f-baabc77892b4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:48:38 compute-0 nova_compute[254092]: 2025-11-25 16:48:38.374 254096 DEBUG nova.compute.manager [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:48:38 compute-0 nova_compute[254092]: 2025-11-25 16:48:38.393 254096 DEBUG nova.virt.libvirt.driver [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:48:38 compute-0 nova_compute[254092]: 2025-11-25 16:48:38.398 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089318.3918734, 2e848add-8417-4307-8b01-f0d1c1a76cea => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:48:38 compute-0 nova_compute[254092]: 2025-11-25 16:48:38.399 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] VM Resumed (Lifecycle Event)
Nov 25 16:48:38 compute-0 nova_compute[254092]: 2025-11-25 16:48:38.410 254096 INFO nova.virt.libvirt.driver [-] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Instance spawned successfully.
Nov 25 16:48:38 compute-0 nova_compute[254092]: 2025-11-25 16:48:38.410 254096 DEBUG nova.virt.libvirt.driver [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:48:38 compute-0 nova_compute[254092]: 2025-11-25 16:48:38.428 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:48:38 compute-0 nova_compute[254092]: 2025-11-25 16:48:38.435 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:48:38 compute-0 nova_compute[254092]: 2025-11-25 16:48:38.438 254096 DEBUG nova.virt.libvirt.driver [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:48:38 compute-0 nova_compute[254092]: 2025-11-25 16:48:38.439 254096 DEBUG nova.virt.libvirt.driver [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:48:38 compute-0 nova_compute[254092]: 2025-11-25 16:48:38.439 254096 DEBUG nova.virt.libvirt.driver [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:48:38 compute-0 nova_compute[254092]: 2025-11-25 16:48:38.440 254096 DEBUG nova.virt.libvirt.driver [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:48:38 compute-0 nova_compute[254092]: 2025-11-25 16:48:38.440 254096 DEBUG nova.virt.libvirt.driver [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:48:38 compute-0 nova_compute[254092]: 2025-11-25 16:48:38.441 254096 DEBUG nova.virt.libvirt.driver [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:48:38 compute-0 nova_compute[254092]: 2025-11-25 16:48:38.445 254096 DEBUG oslo_concurrency.processutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e0098976-026f-43d8-b686-b2658f9aded9/disk.config e0098976-026f-43d8-b686-b2658f9aded9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:38 compute-0 nova_compute[254092]: 2025-11-25 16:48:38.445 254096 INFO nova.virt.libvirt.driver [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Deleting local config drive /var/lib/nova/instances/e0098976-026f-43d8-b686-b2658f9aded9/disk.config because it was imported into RBD.
Nov 25 16:48:38 compute-0 crazy_payne[351826]: {
Nov 25 16:48:38 compute-0 crazy_payne[351826]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 16:48:38 compute-0 crazy_payne[351826]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:48:38 compute-0 crazy_payne[351826]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 16:48:38 compute-0 crazy_payne[351826]:         "osd_id": 1,
Nov 25 16:48:38 compute-0 crazy_payne[351826]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:48:38 compute-0 crazy_payne[351826]:         "type": "bluestore"
Nov 25 16:48:38 compute-0 crazy_payne[351826]:     },
Nov 25 16:48:38 compute-0 crazy_payne[351826]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 16:48:38 compute-0 crazy_payne[351826]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:48:38 compute-0 crazy_payne[351826]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 16:48:38 compute-0 crazy_payne[351826]:         "osd_id": 2,
Nov 25 16:48:38 compute-0 crazy_payne[351826]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:48:38 compute-0 crazy_payne[351826]:         "type": "bluestore"
Nov 25 16:48:38 compute-0 crazy_payne[351826]:     },
Nov 25 16:48:38 compute-0 crazy_payne[351826]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 16:48:38 compute-0 crazy_payne[351826]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:48:38 compute-0 crazy_payne[351826]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 16:48:38 compute-0 crazy_payne[351826]:         "osd_id": 0,
Nov 25 16:48:38 compute-0 crazy_payne[351826]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:48:38 compute-0 crazy_payne[351826]:         "type": "bluestore"
Nov 25 16:48:38 compute-0 crazy_payne[351826]:     }
Nov 25 16:48:38 compute-0 crazy_payne[351826]: }
Nov 25 16:48:38 compute-0 nova_compute[254092]: 2025-11-25 16:48:38.470 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:48:38 compute-0 systemd[1]: libpod-7c4fa8ab5f0fbfa47ff1d7a34612d0ae5f53b29305cdcaad9a58cc52d15cb6c1.scope: Deactivated successfully.
Nov 25 16:48:38 compute-0 systemd[1]: libpod-7c4fa8ab5f0fbfa47ff1d7a34612d0ae5f53b29305cdcaad9a58cc52d15cb6c1.scope: Consumed 1.023s CPU time.
Nov 25 16:48:38 compute-0 conmon[351826]: conmon 7c4fa8ab5f0fbfa47ff1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7c4fa8ab5f0fbfa47ff1d7a34612d0ae5f53b29305cdcaad9a58cc52d15cb6c1.scope/container/memory.events
Nov 25 16:48:38 compute-0 podman[351809]: 2025-11-25 16:48:38.491176591 +0000 UTC m=+1.188869548 container died 7c4fa8ab5f0fbfa47ff1d7a34612d0ae5f53b29305cdcaad9a58cc52d15cb6c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_payne, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:48:38 compute-0 nova_compute[254092]: 2025-11-25 16:48:38.494 254096 INFO nova.compute.manager [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Took 16.01 seconds to spawn the instance on the hypervisor.
Nov 25 16:48:38 compute-0 nova_compute[254092]: 2025-11-25 16:48:38.495 254096 DEBUG nova.compute.manager [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:48:38 compute-0 kernel: tap591e580e-30: entered promiscuous mode
Nov 25 16:48:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd4c91c0fb18d290bcbf8e5f9e551abb4f8f509ac8870db19ac4f7827b2b3445-merged.mount: Deactivated successfully.
Nov 25 16:48:38 compute-0 NetworkManager[48891]: <info>  [1764089318.5313] manager: (tap591e580e-30): new Tun device (/org/freedesktop/NetworkManager/Devices/389)
Nov 25 16:48:38 compute-0 podman[351809]: 2025-11-25 16:48:38.558654056 +0000 UTC m=+1.256347013 container remove 7c4fa8ab5f0fbfa47ff1d7a34612d0ae5f53b29305cdcaad9a58cc52d15cb6c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_payne, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:48:38 compute-0 ovn_controller[153477]: 2025-11-25T16:48:38Z|00920|binding|INFO|Claiming lport 591e580e-30bb-4c0d-b1fb-96d45eca5626 for this chassis.
Nov 25 16:48:38 compute-0 ovn_controller[153477]: 2025-11-25T16:48:38Z|00921|binding|INFO|591e580e-30bb-4c0d-b1fb-96d45eca5626: Claiming fa:16:3e:7d:85:c3 10.100.0.14
Nov 25 16:48:38 compute-0 nova_compute[254092]: 2025-11-25 16:48:38.590 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:38.595 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:85:c3 10.100.0.14'], port_security=['fa:16:3e:7d:85:c3 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'e0098976-026f-43d8-b686-b2658f9aded9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fbf763b31dad40d6b0d7285dc017dd89', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ae4b77d0-13f0-4078-80f7-8ecf8872e648', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2d4322c-9d6a-4a56-a005-6db2d2591dbd, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=591e580e-30bb-4c0d-b1fb-96d45eca5626) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:48:38 compute-0 nova_compute[254092]: 2025-11-25 16:48:38.595 254096 INFO nova.compute.manager [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Took 16.97 seconds to build instance.
Nov 25 16:48:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:38.596 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 591e580e-30bb-4c0d-b1fb-96d45eca5626 in datapath 34b8c77e-8369-4eab-a81e-0825e5fa2919 bound to our chassis
Nov 25 16:48:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:38.598 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 34b8c77e-8369-4eab-a81e-0825e5fa2919
Nov 25 16:48:38 compute-0 ovn_controller[153477]: 2025-11-25T16:48:38Z|00922|binding|INFO|Setting lport 591e580e-30bb-4c0d-b1fb-96d45eca5626 ovn-installed in OVS
Nov 25 16:48:38 compute-0 ovn_controller[153477]: 2025-11-25T16:48:38Z|00923|binding|INFO|Setting lport 591e580e-30bb-4c0d-b1fb-96d45eca5626 up in Southbound
Nov 25 16:48:38 compute-0 nova_compute[254092]: 2025-11-25 16:48:38.609 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:38 compute-0 nova_compute[254092]: 2025-11-25 16:48:38.612 254096 DEBUG oslo_concurrency.lockutils [None req-49fde752-a2c6-431f-a8c8-57902096b24e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.129s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:38 compute-0 nova_compute[254092]: 2025-11-25 16:48:38.613 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:38 compute-0 systemd[1]: libpod-conmon-7c4fa8ab5f0fbfa47ff1d7a34612d0ae5f53b29305cdcaad9a58cc52d15cb6c1.scope: Deactivated successfully.
Nov 25 16:48:38 compute-0 sudo[351682]: pam_unix(sudo:session): session closed for user root
Nov 25 16:48:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:38.627 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b7ac1649-cab5-48a4-bf4c-722e1a5fba92]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:38 compute-0 systemd-udevd[351925]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:48:38 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:48:38 compute-0 systemd-machined[216343]: New machine qemu-119-instance-00000061.
Nov 25 16:48:38 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:48:38 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:48:38 compute-0 systemd[1]: Started Virtual Machine qemu-119-instance-00000061.
Nov 25 16:48:38 compute-0 nova_compute[254092]: 2025-11-25 16:48:38.649 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:38 compute-0 NetworkManager[48891]: <info>  [1764089318.6567] device (tap591e580e-30): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:48:38 compute-0 NetworkManager[48891]: <info>  [1764089318.6575] device (tap591e580e-30): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:48:38 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:48:38 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 36cba9bb-6e07-41b3-a1da-fc25d2fb2bfb does not exist
Nov 25 16:48:38 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev c72f1883-c574-4dd4-b3d8-6abf7906f1ef does not exist
Nov 25 16:48:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:38.675 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[41c46145-29fc-420f-af97-0a51da3d0101]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:38.678 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[eca482ed-428b-443b-95e0-f59cd239cee1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:38.720 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[bf6e6a57-cb7f-44db-820f-b7f172d3af9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:38 compute-0 sudo[351930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:48:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:38.741 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9232a739-a63a-42a7-b860-01a535b6b076]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap34b8c77e-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:49:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 252], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570033, 'reachable_time': 42096, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351960, 'error': None, 'target': 'ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:38 compute-0 sudo[351930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:48:38 compute-0 sudo[351930]: pam_unix(sudo:session): session closed for user root
Nov 25 16:48:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:38.766 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7c787ac4-7c49-4544-b4fd-62a0ee480ac0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap34b8c77e-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570043, 'tstamp': 570043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351964, 'error': None, 'target': 'ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap34b8c77e-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570046, 'tstamp': 570046}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351964, 'error': None, 'target': 'ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:38.768 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34b8c77e-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:48:38 compute-0 nova_compute[254092]: 2025-11-25 16:48:38.770 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:38 compute-0 nova_compute[254092]: 2025-11-25 16:48:38.774 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:38.777 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap34b8c77e-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:48:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:38.778 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:48:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:38.778 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap34b8c77e-80, col_values=(('external_ids', {'iface-id': '031633f4-be6f-41a4-9c41-8e90000f1a4f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:48:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:38.779 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:48:38 compute-0 sudo[351965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 16:48:38 compute-0 sudo[351965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:48:38 compute-0 sudo[351965]: pam_unix(sudo:session): session closed for user root
Nov 25 16:48:38 compute-0 ceph-mon[74985]: pgmap v1913: 321 pgs: 321 active+clean; 465 MiB data, 878 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.3 MiB/s wr, 157 op/s
Nov 25 16:48:38 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:48:38 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:48:39 compute-0 nova_compute[254092]: 2025-11-25 16:48:39.163 254096 DEBUG nova.network.neutron [req-bcdf027f-efc9-45b9-9899-120ba175bfb6 req-bb762fc4-8648-4c7f-b2bc-9cc9d3408840 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Updated VIF entry in instance network info cache for port 591e580e-30bb-4c0d-b1fb-96d45eca5626. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:48:39 compute-0 nova_compute[254092]: 2025-11-25 16:48:39.164 254096 DEBUG nova.network.neutron [req-bcdf027f-efc9-45b9-9899-120ba175bfb6 req-bb762fc4-8648-4c7f-b2bc-9cc9d3408840 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Updating instance_info_cache with network_info: [{"id": "591e580e-30bb-4c0d-b1fb-96d45eca5626", "address": "fa:16:3e:7d:85:c3", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap591e580e-30", "ovs_interfaceid": "591e580e-30bb-4c0d-b1fb-96d45eca5626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:48:39 compute-0 nova_compute[254092]: 2025-11-25 16:48:39.183 254096 DEBUG oslo_concurrency.lockutils [req-bcdf027f-efc9-45b9-9899-120ba175bfb6 req-bb762fc4-8648-4c7f-b2bc-9cc9d3408840 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-e0098976-026f-43d8-b686-b2658f9aded9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:48:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:39.394 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:48:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:39.395 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 16:48:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:39.396 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:48:39 compute-0 nova_compute[254092]: 2025-11-25 16:48:39.396 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:39 compute-0 nova_compute[254092]: 2025-11-25 16:48:39.413 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089319.4134295, e0098976-026f-43d8-b686-b2658f9aded9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:48:39 compute-0 nova_compute[254092]: 2025-11-25 16:48:39.414 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e0098976-026f-43d8-b686-b2658f9aded9] VM Started (Lifecycle Event)
Nov 25 16:48:39 compute-0 nova_compute[254092]: 2025-11-25 16:48:39.445 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:48:39 compute-0 nova_compute[254092]: 2025-11-25 16:48:39.450 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089319.4158354, e0098976-026f-43d8-b686-b2658f9aded9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:48:39 compute-0 nova_compute[254092]: 2025-11-25 16:48:39.450 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e0098976-026f-43d8-b686-b2658f9aded9] VM Paused (Lifecycle Event)
Nov 25 16:48:39 compute-0 nova_compute[254092]: 2025-11-25 16:48:39.486 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:48:39 compute-0 nova_compute[254092]: 2025-11-25 16:48:39.490 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:48:39 compute-0 nova_compute[254092]: 2025-11-25 16:48:39.508 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e0098976-026f-43d8-b686-b2658f9aded9] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:48:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1914: 321 pgs: 321 active+clean; 465 MiB data, 878 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.3 MiB/s wr, 157 op/s
Nov 25 16:48:39 compute-0 nova_compute[254092]: 2025-11-25 16:48:39.983 254096 DEBUG oslo_concurrency.lockutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquiring lock "6b74b880-45f6-4f10-b09f-2696629a42e9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:39 compute-0 nova_compute[254092]: 2025-11-25 16:48:39.983 254096 DEBUG oslo_concurrency.lockutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:39 compute-0 nova_compute[254092]: 2025-11-25 16:48:39.997 254096 DEBUG nova.compute.manager [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:48:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:48:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:48:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:48:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:48:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:48:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.069 254096 DEBUG oslo_concurrency.lockutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.069 254096 DEBUG oslo_concurrency.lockutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.076 254096 DEBUG nova.virt.hardware [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.076 254096 INFO nova.compute.claims [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:48:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:48:40
Nov 25 16:48:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:48:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:48:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['.mgr', 'images', 'cephfs.cephfs.meta', 'backups', 'vms', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'default.rgw.meta']
Nov 25 16:48:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.289 254096 DEBUG oslo_concurrency.processutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:48:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:48:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:48:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:48:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.445 254096 DEBUG nova.compute.manager [req-9da8d380-ff09-4a86-be25-18c335cab752 req-33add365-74ae-4ba4-82fa-fc3f3d693813 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Received event network-vif-plugged-5a3f34de-d3de-439b-ac8f-baabc77892b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.446 254096 DEBUG oslo_concurrency.lockutils [req-9da8d380-ff09-4a86-be25-18c335cab752 req-33add365-74ae-4ba4-82fa-fc3f3d693813 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.447 254096 DEBUG oslo_concurrency.lockutils [req-9da8d380-ff09-4a86-be25-18c335cab752 req-33add365-74ae-4ba4-82fa-fc3f3d693813 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.447 254096 DEBUG oslo_concurrency.lockutils [req-9da8d380-ff09-4a86-be25-18c335cab752 req-33add365-74ae-4ba4-82fa-fc3f3d693813 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.448 254096 DEBUG nova.compute.manager [req-9da8d380-ff09-4a86-be25-18c335cab752 req-33add365-74ae-4ba4-82fa-fc3f3d693813 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] No waiting events found dispatching network-vif-plugged-5a3f34de-d3de-439b-ac8f-baabc77892b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.448 254096 WARNING nova.compute.manager [req-9da8d380-ff09-4a86-be25-18c335cab752 req-33add365-74ae-4ba4-82fa-fc3f3d693813 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Received unexpected event network-vif-plugged-5a3f34de-d3de-439b-ac8f-baabc77892b4 for instance with vm_state active and task_state None.
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.449 254096 DEBUG nova.compute.manager [req-9da8d380-ff09-4a86-be25-18c335cab752 req-33add365-74ae-4ba4-82fa-fc3f3d693813 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Received event network-vif-plugged-591e580e-30bb-4c0d-b1fb-96d45eca5626 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.449 254096 DEBUG oslo_concurrency.lockutils [req-9da8d380-ff09-4a86-be25-18c335cab752 req-33add365-74ae-4ba4-82fa-fc3f3d693813 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e0098976-026f-43d8-b686-b2658f9aded9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.450 254096 DEBUG oslo_concurrency.lockutils [req-9da8d380-ff09-4a86-be25-18c335cab752 req-33add365-74ae-4ba4-82fa-fc3f3d693813 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e0098976-026f-43d8-b686-b2658f9aded9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.450 254096 DEBUG oslo_concurrency.lockutils [req-9da8d380-ff09-4a86-be25-18c335cab752 req-33add365-74ae-4ba4-82fa-fc3f3d693813 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e0098976-026f-43d8-b686-b2658f9aded9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.451 254096 DEBUG nova.compute.manager [req-9da8d380-ff09-4a86-be25-18c335cab752 req-33add365-74ae-4ba4-82fa-fc3f3d693813 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Processing event network-vif-plugged-591e580e-30bb-4c0d-b1fb-96d45eca5626 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.451 254096 DEBUG nova.compute.manager [req-9da8d380-ff09-4a86-be25-18c335cab752 req-33add365-74ae-4ba4-82fa-fc3f3d693813 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Received event network-vif-plugged-591e580e-30bb-4c0d-b1fb-96d45eca5626 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.452 254096 DEBUG oslo_concurrency.lockutils [req-9da8d380-ff09-4a86-be25-18c335cab752 req-33add365-74ae-4ba4-82fa-fc3f3d693813 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e0098976-026f-43d8-b686-b2658f9aded9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.452 254096 DEBUG oslo_concurrency.lockutils [req-9da8d380-ff09-4a86-be25-18c335cab752 req-33add365-74ae-4ba4-82fa-fc3f3d693813 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e0098976-026f-43d8-b686-b2658f9aded9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.453 254096 DEBUG oslo_concurrency.lockutils [req-9da8d380-ff09-4a86-be25-18c335cab752 req-33add365-74ae-4ba4-82fa-fc3f3d693813 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e0098976-026f-43d8-b686-b2658f9aded9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.453 254096 DEBUG nova.compute.manager [req-9da8d380-ff09-4a86-be25-18c335cab752 req-33add365-74ae-4ba4-82fa-fc3f3d693813 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] No waiting events found dispatching network-vif-plugged-591e580e-30bb-4c0d-b1fb-96d45eca5626 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.454 254096 WARNING nova.compute.manager [req-9da8d380-ff09-4a86-be25-18c335cab752 req-33add365-74ae-4ba4-82fa-fc3f3d693813 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Received unexpected event network-vif-plugged-591e580e-30bb-4c0d-b1fb-96d45eca5626 for instance with vm_state building and task_state spawning.
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.456 254096 DEBUG nova.compute.manager [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.465 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089320.4644217, e0098976-026f-43d8-b686-b2658f9aded9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.466 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e0098976-026f-43d8-b686-b2658f9aded9] VM Resumed (Lifecycle Event)
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.470 254096 DEBUG nova.virt.libvirt.driver [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.477 254096 INFO nova.virt.libvirt.driver [-] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Instance spawned successfully.
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.478 254096 DEBUG nova.virt.libvirt.driver [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.497 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.506 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.510 254096 DEBUG nova.virt.libvirt.driver [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.511 254096 DEBUG nova.virt.libvirt.driver [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.511 254096 DEBUG nova.virt.libvirt.driver [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.512 254096 DEBUG nova.virt.libvirt.driver [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.512 254096 DEBUG nova.virt.libvirt.driver [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.513 254096 DEBUG nova.virt.libvirt.driver [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:48:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:48:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:48:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:48:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:48:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.534 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e0098976-026f-43d8-b686-b2658f9aded9] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.562 254096 INFO nova.compute.manager [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Took 14.73 seconds to spawn the instance on the hypervisor.
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.563 254096 DEBUG nova.compute.manager [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.621 254096 INFO nova.compute.manager [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Took 15.78 seconds to build instance.
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.668 254096 DEBUG nova.virt.libvirt.driver [None req-41fc4c48-23b5-4c48-be75-00eff3fc1b54 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.682 254096 DEBUG oslo_concurrency.lockutils [None req-2acae539-1ebb-435f-ba13-bbaf6d0daa13 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "e0098976-026f-43d8-b686-b2658f9aded9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.899s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:48:40 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/580077054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.845 254096 DEBUG oslo_concurrency.processutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.853 254096 DEBUG nova.compute.provider_tree [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.868 254096 DEBUG nova.scheduler.client.report [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.892 254096 DEBUG oslo_concurrency.lockutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.893 254096 DEBUG nova.compute.manager [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.937 254096 DEBUG nova.compute.manager [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.938 254096 DEBUG nova.network.neutron [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.961 254096 INFO nova.virt.libvirt.driver [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:48:40 compute-0 nova_compute[254092]: 2025-11-25 16:48:40.984 254096 DEBUG nova.compute.manager [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:48:41 compute-0 ceph-mon[74985]: pgmap v1914: 321 pgs: 321 active+clean; 465 MiB data, 878 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.3 MiB/s wr, 157 op/s
Nov 25 16:48:41 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/580077054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:48:41 compute-0 nova_compute[254092]: 2025-11-25 16:48:41.068 254096 DEBUG nova.compute.manager [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:48:41 compute-0 nova_compute[254092]: 2025-11-25 16:48:41.070 254096 DEBUG nova.virt.libvirt.driver [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:48:41 compute-0 nova_compute[254092]: 2025-11-25 16:48:41.070 254096 INFO nova.virt.libvirt.driver [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Creating image(s)
Nov 25 16:48:41 compute-0 nova_compute[254092]: 2025-11-25 16:48:41.105 254096 DEBUG nova.storage.rbd_utils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 6b74b880-45f6-4f10-b09f-2696629a42e9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:41 compute-0 nova_compute[254092]: 2025-11-25 16:48:41.143 254096 DEBUG nova.storage.rbd_utils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 6b74b880-45f6-4f10-b09f-2696629a42e9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:41 compute-0 nova_compute[254092]: 2025-11-25 16:48:41.171 254096 DEBUG nova.storage.rbd_utils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 6b74b880-45f6-4f10-b09f-2696629a42e9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:41 compute-0 nova_compute[254092]: 2025-11-25 16:48:41.176 254096 DEBUG oslo_concurrency.processutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:48:41 compute-0 nova_compute[254092]: 2025-11-25 16:48:41.228 254096 DEBUG nova.policy [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '013868ddd96f43a49458a4615ab1f41b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '544c4f84ca494482aea8e55248fe4c62', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:48:41 compute-0 nova_compute[254092]: 2025-11-25 16:48:41.278 254096 DEBUG oslo_concurrency.processutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:41 compute-0 nova_compute[254092]: 2025-11-25 16:48:41.279 254096 DEBUG oslo_concurrency.lockutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:41 compute-0 nova_compute[254092]: 2025-11-25 16:48:41.280 254096 DEBUG oslo_concurrency.lockutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:41 compute-0 nova_compute[254092]: 2025-11-25 16:48:41.280 254096 DEBUG oslo_concurrency.lockutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:41 compute-0 nova_compute[254092]: 2025-11-25 16:48:41.308 254096 DEBUG nova.storage.rbd_utils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 6b74b880-45f6-4f10-b09f-2696629a42e9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:41 compute-0 nova_compute[254092]: 2025-11-25 16:48:41.313 254096 DEBUG oslo_concurrency.processutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 6b74b880-45f6-4f10-b09f-2696629a42e9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:41 compute-0 nova_compute[254092]: 2025-11-25 16:48:41.436 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:41 compute-0 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Nov 25 16:48:41 compute-0 nova_compute[254092]: 2025-11-25 16:48:41.643 254096 DEBUG oslo_concurrency.processutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 6b74b880-45f6-4f10-b09f-2696629a42e9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.330s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:41 compute-0 nova_compute[254092]: 2025-11-25 16:48:41.713 254096 DEBUG nova.storage.rbd_utils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] resizing rbd image 6b74b880-45f6-4f10-b09f-2696629a42e9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:48:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1915: 321 pgs: 321 active+clean; 498 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 2.5 MiB/s wr, 241 op/s
Nov 25 16:48:41 compute-0 nova_compute[254092]: 2025-11-25 16:48:41.815 254096 DEBUG nova.objects.instance [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lazy-loading 'migration_context' on Instance uuid 6b74b880-45f6-4f10-b09f-2696629a42e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:48:41 compute-0 nova_compute[254092]: 2025-11-25 16:48:41.829 254096 DEBUG nova.virt.libvirt.driver [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:48:41 compute-0 nova_compute[254092]: 2025-11-25 16:48:41.829 254096 DEBUG nova.virt.libvirt.driver [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Ensure instance console log exists: /var/lib/nova/instances/6b74b880-45f6-4f10-b09f-2696629a42e9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:48:41 compute-0 nova_compute[254092]: 2025-11-25 16:48:41.830 254096 DEBUG oslo_concurrency.lockutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:41 compute-0 nova_compute[254092]: 2025-11-25 16:48:41.830 254096 DEBUG oslo_concurrency.lockutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:41 compute-0 nova_compute[254092]: 2025-11-25 16:48:41.830 254096 DEBUG oslo_concurrency.lockutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:42 compute-0 nova_compute[254092]: 2025-11-25 16:48:42.252 254096 INFO nova.compute.manager [None req-928dc9f7-2545-4a7a-82ad-01f56046a4b4 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Get console output
Nov 25 16:48:42 compute-0 nova_compute[254092]: 2025-11-25 16:48:42.260 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 16:48:42 compute-0 nova_compute[254092]: 2025-11-25 16:48:42.672 254096 DEBUG nova.network.neutron [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Successfully created port: acb7d65c-0259-4a39-94f8-d7f64637a340 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:48:43 compute-0 kernel: tapcce0c8fe-e8 (unregistering): left promiscuous mode
Nov 25 16:48:43 compute-0 ceph-mon[74985]: pgmap v1915: 321 pgs: 321 active+clean; 498 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 2.5 MiB/s wr, 241 op/s
Nov 25 16:48:43 compute-0 NetworkManager[48891]: <info>  [1764089323.0133] device (tapcce0c8fe-e8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:48:43 compute-0 ovn_controller[153477]: 2025-11-25T16:48:43Z|00924|binding|INFO|Releasing lport cce0c8fe-e83f-4422-aeb3-8b1e6bafa462 from this chassis (sb_readonly=0)
Nov 25 16:48:43 compute-0 ovn_controller[153477]: 2025-11-25T16:48:43Z|00925|binding|INFO|Setting lport cce0c8fe-e83f-4422-aeb3-8b1e6bafa462 down in Southbound
Nov 25 16:48:43 compute-0 nova_compute[254092]: 2025-11-25 16:48:43.028 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:43 compute-0 ovn_controller[153477]: 2025-11-25T16:48:43Z|00926|binding|INFO|Removing iface tapcce0c8fe-e8 ovn-installed in OVS
Nov 25 16:48:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:43.043 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:86:24 10.100.0.12'], port_security=['fa:16:3e:87:86:24 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '4677de7c-6625-4c98-a065-214341d8bfea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1431a6bc-93c8-4db5-a148-b2950f02c941', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56790f54314e4087abb5da8030b17666', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6f3e9934-51eb-4bf8-9114-93abd745996a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f25b0746-2c82-4a0a-9e71-0116018fc740, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=cce0c8fe-e83f-4422-aeb3-8b1e6bafa462) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:48:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:43.044 163338 INFO neutron.agent.ovn.metadata.agent [-] Port cce0c8fe-e83f-4422-aeb3-8b1e6bafa462 in datapath 1431a6bc-93c8-4db5-a148-b2950f02c941 unbound from our chassis
Nov 25 16:48:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:43.046 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1431a6bc-93c8-4db5-a148-b2950f02c941
Nov 25 16:48:43 compute-0 nova_compute[254092]: 2025-11-25 16:48:43.064 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:43.075 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a0ede233-fa98-45a5-9fe6-35275301765b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:43.116 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[adaaa9ee-062d-4f7b-b105-c944cdd27c21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:43 compute-0 systemd[1]: machine-qemu\x2d116\x2dinstance\x2d0000005f.scope: Deactivated successfully.
Nov 25 16:48:43 compute-0 systemd[1]: machine-qemu\x2d116\x2dinstance\x2d0000005f.scope: Consumed 15.025s CPU time.
Nov 25 16:48:43 compute-0 systemd-machined[216343]: Machine qemu-116-instance-0000005f terminated.
Nov 25 16:48:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:43.126 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[fe4f530e-0917-4e7f-98ce-b907e9f33695]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:43.161 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e3c78730-c67c-4ccd-a517-715c263ac65a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:43.182 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[86f5cd6f-fc65-411d-b8b0-ae8cf22d640a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1431a6bc-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:f8:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 27, 'rx_bytes': 700, 'tx_bytes': 1278, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 27, 'rx_bytes': 700, 'tx_bytes': 1278, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 238], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563972, 'reachable_time': 23815, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352231, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:43.205 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a700c62f-51d4-460b-931e-21be55262838]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1431a6bc-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563984, 'tstamp': 563984}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352232, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1431a6bc-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563987, 'tstamp': 563987}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352232, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:43.207 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1431a6bc-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:48:43 compute-0 nova_compute[254092]: 2025-11-25 16:48:43.209 254096 DEBUG nova.compute.manager [req-88e08c57-9300-434d-bce2-25702abdd5e9 req-460770a4-dc33-462c-9803-d5d9910cd901 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Received event network-changed-5a3f34de-d3de-439b-ac8f-baabc77892b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:48:43 compute-0 nova_compute[254092]: 2025-11-25 16:48:43.210 254096 DEBUG nova.compute.manager [req-88e08c57-9300-434d-bce2-25702abdd5e9 req-460770a4-dc33-462c-9803-d5d9910cd901 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Refreshing instance network info cache due to event network-changed-5a3f34de-d3de-439b-ac8f-baabc77892b4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:48:43 compute-0 nova_compute[254092]: 2025-11-25 16:48:43.210 254096 DEBUG oslo_concurrency.lockutils [req-88e08c57-9300-434d-bce2-25702abdd5e9 req-460770a4-dc33-462c-9803-d5d9910cd901 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-2e848add-8417-4307-8b01-f0d1c1a76cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:48:43 compute-0 nova_compute[254092]: 2025-11-25 16:48:43.210 254096 DEBUG oslo_concurrency.lockutils [req-88e08c57-9300-434d-bce2-25702abdd5e9 req-460770a4-dc33-462c-9803-d5d9910cd901 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-2e848add-8417-4307-8b01-f0d1c1a76cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:48:43 compute-0 nova_compute[254092]: 2025-11-25 16:48:43.210 254096 DEBUG nova.network.neutron [req-88e08c57-9300-434d-bce2-25702abdd5e9 req-460770a4-dc33-462c-9803-d5d9910cd901 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Refreshing network info cache for port 5a3f34de-d3de-439b-ac8f-baabc77892b4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:48:43 compute-0 nova_compute[254092]: 2025-11-25 16:48:43.212 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:43 compute-0 nova_compute[254092]: 2025-11-25 16:48:43.214 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:43.214 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1431a6bc-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:48:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:43.214 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:48:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:43.215 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1431a6bc-90, col_values=(('external_ids', {'iface-id': '2b164d7c-ac7d-4965-a9d1-81827e4e9936'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:48:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:43.215 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:48:43 compute-0 nova_compute[254092]: 2025-11-25 16:48:43.231 254096 DEBUG nova.compute.manager [req-12e716cf-da93-4580-a833-9c9ea5002cdf req-03a2d2bf-96b6-4a9b-b282-ce087e93c798 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Received event network-vif-unplugged-cce0c8fe-e83f-4422-aeb3-8b1e6bafa462 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:48:43 compute-0 nova_compute[254092]: 2025-11-25 16:48:43.231 254096 DEBUG oslo_concurrency.lockutils [req-12e716cf-da93-4580-a833-9c9ea5002cdf req-03a2d2bf-96b6-4a9b-b282-ce087e93c798 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "4677de7c-6625-4c98-a065-214341d8bfea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:43 compute-0 nova_compute[254092]: 2025-11-25 16:48:43.232 254096 DEBUG oslo_concurrency.lockutils [req-12e716cf-da93-4580-a833-9c9ea5002cdf req-03a2d2bf-96b6-4a9b-b282-ce087e93c798 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4677de7c-6625-4c98-a065-214341d8bfea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:43 compute-0 nova_compute[254092]: 2025-11-25 16:48:43.232 254096 DEBUG oslo_concurrency.lockutils [req-12e716cf-da93-4580-a833-9c9ea5002cdf req-03a2d2bf-96b6-4a9b-b282-ce087e93c798 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4677de7c-6625-4c98-a065-214341d8bfea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:43 compute-0 nova_compute[254092]: 2025-11-25 16:48:43.232 254096 DEBUG nova.compute.manager [req-12e716cf-da93-4580-a833-9c9ea5002cdf req-03a2d2bf-96b6-4a9b-b282-ce087e93c798 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] No waiting events found dispatching network-vif-unplugged-cce0c8fe-e83f-4422-aeb3-8b1e6bafa462 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:48:43 compute-0 nova_compute[254092]: 2025-11-25 16:48:43.232 254096 WARNING nova.compute.manager [req-12e716cf-da93-4580-a833-9c9ea5002cdf req-03a2d2bf-96b6-4a9b-b282-ce087e93c798 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Received unexpected event network-vif-unplugged-cce0c8fe-e83f-4422-aeb3-8b1e6bafa462 for instance with vm_state active and task_state powering-off.
Nov 25 16:48:43 compute-0 nova_compute[254092]: 2025-11-25 16:48:43.340 254096 INFO nova.compute.manager [None req-0bf81e2e-b183-4c87-98ac-310975a44648 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Get console output
Nov 25 16:48:43 compute-0 nova_compute[254092]: 2025-11-25 16:48:43.345 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 16:48:43 compute-0 nova_compute[254092]: 2025-11-25 16:48:43.651 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:43 compute-0 nova_compute[254092]: 2025-11-25 16:48:43.750 254096 INFO nova.virt.libvirt.driver [None req-41fc4c48-23b5-4c48-be75-00eff3fc1b54 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Instance shutdown successfully after 13 seconds.
Nov 25 16:48:43 compute-0 nova_compute[254092]: 2025-11-25 16:48:43.756 254096 INFO nova.virt.libvirt.driver [-] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Instance destroyed successfully.
Nov 25 16:48:43 compute-0 nova_compute[254092]: 2025-11-25 16:48:43.756 254096 DEBUG nova.objects.instance [None req-41fc4c48-23b5-4c48-be75-00eff3fc1b54 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lazy-loading 'numa_topology' on Instance uuid 4677de7c-6625-4c98-a065-214341d8bfea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:48:43 compute-0 nova_compute[254092]: 2025-11-25 16:48:43.764 254096 DEBUG nova.compute.manager [None req-41fc4c48-23b5-4c48-be75-00eff3fc1b54 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:48:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1916: 321 pgs: 321 active+clean; 498 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.2 MiB/s wr, 213 op/s
Nov 25 16:48:43 compute-0 nova_compute[254092]: 2025-11-25 16:48:43.803 254096 DEBUG oslo_concurrency.lockutils [None req-41fc4c48-23b5-4c48-be75-00eff3fc1b54 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "4677de7c-6625-4c98-a065-214341d8bfea" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.314s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:44 compute-0 nova_compute[254092]: 2025-11-25 16:48:44.100 254096 DEBUG nova.network.neutron [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Successfully updated port: acb7d65c-0259-4a39-94f8-d7f64637a340 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:48:44 compute-0 nova_compute[254092]: 2025-11-25 16:48:44.115 254096 DEBUG oslo_concurrency.lockutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquiring lock "refresh_cache-6b74b880-45f6-4f10-b09f-2696629a42e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:48:44 compute-0 nova_compute[254092]: 2025-11-25 16:48:44.115 254096 DEBUG oslo_concurrency.lockutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquired lock "refresh_cache-6b74b880-45f6-4f10-b09f-2696629a42e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:48:44 compute-0 nova_compute[254092]: 2025-11-25 16:48:44.115 254096 DEBUG nova.network.neutron [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:48:44 compute-0 nova_compute[254092]: 2025-11-25 16:48:44.281 254096 DEBUG nova.network.neutron [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:48:44 compute-0 nova_compute[254092]: 2025-11-25 16:48:44.430 254096 INFO nova.compute.manager [None req-fdd6ad2a-b038-4244-a98d-0f753ea0b6f9 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Get console output
Nov 25 16:48:44 compute-0 nova_compute[254092]: 2025-11-25 16:48:44.438 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 16:48:44 compute-0 nova_compute[254092]: 2025-11-25 16:48:44.641 254096 DEBUG nova.network.neutron [req-88e08c57-9300-434d-bce2-25702abdd5e9 req-460770a4-dc33-462c-9803-d5d9910cd901 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Updated VIF entry in instance network info cache for port 5a3f34de-d3de-439b-ac8f-baabc77892b4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:48:44 compute-0 nova_compute[254092]: 2025-11-25 16:48:44.642 254096 DEBUG nova.network.neutron [req-88e08c57-9300-434d-bce2-25702abdd5e9 req-460770a4-dc33-462c-9803-d5d9910cd901 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Updating instance_info_cache with network_info: [{"id": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "address": "fa:16:3e:9d:43:a6", "network": {"id": "aa283c2c-b597-4970-842d-f5f2b621b5f0", "bridge": "br-int", "label": "tempest-network-smoke--488175601", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a3f34de-d3", "ovs_interfaceid": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:48:44 compute-0 nova_compute[254092]: 2025-11-25 16:48:44.664 254096 DEBUG oslo_concurrency.lockutils [req-88e08c57-9300-434d-bce2-25702abdd5e9 req-460770a4-dc33-462c-9803-d5d9910cd901 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-2e848add-8417-4307-8b01-f0d1c1a76cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:48:45 compute-0 ceph-mon[74985]: pgmap v1916: 321 pgs: 321 active+clean; 498 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.2 MiB/s wr, 213 op/s
Nov 25 16:48:45 compute-0 nova_compute[254092]: 2025-11-25 16:48:45.255 254096 DEBUG nova.network.neutron [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Updating instance_info_cache with network_info: [{"id": "acb7d65c-0259-4a39-94f8-d7f64637a340", "address": "fa:16:3e:b2:d7:96", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacb7d65c-02", "ovs_interfaceid": "acb7d65c-0259-4a39-94f8-d7f64637a340", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:48:45 compute-0 nova_compute[254092]: 2025-11-25 16:48:45.271 254096 DEBUG oslo_concurrency.lockutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Releasing lock "refresh_cache-6b74b880-45f6-4f10-b09f-2696629a42e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:48:45 compute-0 nova_compute[254092]: 2025-11-25 16:48:45.272 254096 DEBUG nova.compute.manager [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Instance network_info: |[{"id": "acb7d65c-0259-4a39-94f8-d7f64637a340", "address": "fa:16:3e:b2:d7:96", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacb7d65c-02", "ovs_interfaceid": "acb7d65c-0259-4a39-94f8-d7f64637a340", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:48:45 compute-0 nova_compute[254092]: 2025-11-25 16:48:45.274 254096 DEBUG nova.virt.libvirt.driver [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Start _get_guest_xml network_info=[{"id": "acb7d65c-0259-4a39-94f8-d7f64637a340", "address": "fa:16:3e:b2:d7:96", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacb7d65c-02", "ovs_interfaceid": "acb7d65c-0259-4a39-94f8-d7f64637a340", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:48:45 compute-0 nova_compute[254092]: 2025-11-25 16:48:45.278 254096 WARNING nova.virt.libvirt.driver [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:48:45 compute-0 nova_compute[254092]: 2025-11-25 16:48:45.284 254096 DEBUG nova.virt.libvirt.host [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:48:45 compute-0 nova_compute[254092]: 2025-11-25 16:48:45.284 254096 DEBUG nova.virt.libvirt.host [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:48:45 compute-0 nova_compute[254092]: 2025-11-25 16:48:45.287 254096 DEBUG nova.virt.libvirt.host [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:48:45 compute-0 nova_compute[254092]: 2025-11-25 16:48:45.288 254096 DEBUG nova.virt.libvirt.host [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:48:45 compute-0 nova_compute[254092]: 2025-11-25 16:48:45.288 254096 DEBUG nova.virt.libvirt.driver [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:48:45 compute-0 nova_compute[254092]: 2025-11-25 16:48:45.289 254096 DEBUG nova.virt.hardware [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:48:45 compute-0 nova_compute[254092]: 2025-11-25 16:48:45.289 254096 DEBUG nova.virt.hardware [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:48:45 compute-0 nova_compute[254092]: 2025-11-25 16:48:45.289 254096 DEBUG nova.virt.hardware [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:48:45 compute-0 nova_compute[254092]: 2025-11-25 16:48:45.290 254096 DEBUG nova.virt.hardware [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:48:45 compute-0 nova_compute[254092]: 2025-11-25 16:48:45.290 254096 DEBUG nova.virt.hardware [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:48:45 compute-0 nova_compute[254092]: 2025-11-25 16:48:45.290 254096 DEBUG nova.virt.hardware [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:48:45 compute-0 nova_compute[254092]: 2025-11-25 16:48:45.291 254096 DEBUG nova.virt.hardware [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:48:45 compute-0 nova_compute[254092]: 2025-11-25 16:48:45.291 254096 DEBUG nova.virt.hardware [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:48:45 compute-0 nova_compute[254092]: 2025-11-25 16:48:45.292 254096 DEBUG nova.virt.hardware [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:48:45 compute-0 nova_compute[254092]: 2025-11-25 16:48:45.292 254096 DEBUG nova.virt.hardware [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:48:45 compute-0 nova_compute[254092]: 2025-11-25 16:48:45.292 254096 DEBUG nova.virt.hardware [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:48:45 compute-0 nova_compute[254092]: 2025-11-25 16:48:45.297 254096 DEBUG oslo_concurrency.processutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:45 compute-0 nova_compute[254092]: 2025-11-25 16:48:45.552 254096 DEBUG nova.compute.manager [req-92ac1f97-3008-479b-bae3-ca1245d67114 req-88fe1cca-acbc-47ea-a1b2-7a75d0e477ea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received event network-changed-acb7d65c-0259-4a39-94f8-d7f64637a340 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:48:45 compute-0 nova_compute[254092]: 2025-11-25 16:48:45.554 254096 DEBUG nova.compute.manager [req-92ac1f97-3008-479b-bae3-ca1245d67114 req-88fe1cca-acbc-47ea-a1b2-7a75d0e477ea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Refreshing instance network info cache due to event network-changed-acb7d65c-0259-4a39-94f8-d7f64637a340. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:48:45 compute-0 nova_compute[254092]: 2025-11-25 16:48:45.554 254096 DEBUG oslo_concurrency.lockutils [req-92ac1f97-3008-479b-bae3-ca1245d67114 req-88fe1cca-acbc-47ea-a1b2-7a75d0e477ea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-6b74b880-45f6-4f10-b09f-2696629a42e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:48:45 compute-0 nova_compute[254092]: 2025-11-25 16:48:45.554 254096 DEBUG oslo_concurrency.lockutils [req-92ac1f97-3008-479b-bae3-ca1245d67114 req-88fe1cca-acbc-47ea-a1b2-7a75d0e477ea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-6b74b880-45f6-4f10-b09f-2696629a42e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:48:45 compute-0 nova_compute[254092]: 2025-11-25 16:48:45.555 254096 DEBUG nova.network.neutron [req-92ac1f97-3008-479b-bae3-ca1245d67114 req-88fe1cca-acbc-47ea-a1b2-7a75d0e477ea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Refreshing network info cache for port acb7d65c-0259-4a39-94f8-d7f64637a340 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:48:45 compute-0 nova_compute[254092]: 2025-11-25 16:48:45.627 254096 DEBUG nova.compute.manager [req-ff95a20f-7188-47a3-afa9-a18361d228ed req-6076c0af-87da-49a5-9dd6-24a41e55e305 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Received event network-vif-plugged-cce0c8fe-e83f-4422-aeb3-8b1e6bafa462 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:48:45 compute-0 nova_compute[254092]: 2025-11-25 16:48:45.628 254096 DEBUG oslo_concurrency.lockutils [req-ff95a20f-7188-47a3-afa9-a18361d228ed req-6076c0af-87da-49a5-9dd6-24a41e55e305 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "4677de7c-6625-4c98-a065-214341d8bfea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:45 compute-0 nova_compute[254092]: 2025-11-25 16:48:45.629 254096 DEBUG oslo_concurrency.lockutils [req-ff95a20f-7188-47a3-afa9-a18361d228ed req-6076c0af-87da-49a5-9dd6-24a41e55e305 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4677de7c-6625-4c98-a065-214341d8bfea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:45 compute-0 nova_compute[254092]: 2025-11-25 16:48:45.630 254096 DEBUG oslo_concurrency.lockutils [req-ff95a20f-7188-47a3-afa9-a18361d228ed req-6076c0af-87da-49a5-9dd6-24a41e55e305 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4677de7c-6625-4c98-a065-214341d8bfea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:45 compute-0 nova_compute[254092]: 2025-11-25 16:48:45.630 254096 DEBUG nova.compute.manager [req-ff95a20f-7188-47a3-afa9-a18361d228ed req-6076c0af-87da-49a5-9dd6-24a41e55e305 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] No waiting events found dispatching network-vif-plugged-cce0c8fe-e83f-4422-aeb3-8b1e6bafa462 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:48:45 compute-0 nova_compute[254092]: 2025-11-25 16:48:45.631 254096 WARNING nova.compute.manager [req-ff95a20f-7188-47a3-afa9-a18361d228ed req-6076c0af-87da-49a5-9dd6-24a41e55e305 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Received unexpected event network-vif-plugged-cce0c8fe-e83f-4422-aeb3-8b1e6bafa462 for instance with vm_state stopped and task_state None.
Nov 25 16:48:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1917: 321 pgs: 321 active+clean; 529 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 3.3 MiB/s wr, 304 op/s
Nov 25 16:48:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:48:45 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1825936751' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:48:45 compute-0 nova_compute[254092]: 2025-11-25 16:48:45.830 254096 DEBUG oslo_concurrency.processutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:45 compute-0 nova_compute[254092]: 2025-11-25 16:48:45.862 254096 DEBUG nova.storage.rbd_utils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 6b74b880-45f6-4f10-b09f-2696629a42e9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:45 compute-0 nova_compute[254092]: 2025-11-25 16:48:45.869 254096 DEBUG oslo_concurrency.processutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:46 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1825936751' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:48:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:48:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:48:46 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3927801959' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.333 254096 DEBUG oslo_concurrency.processutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.336 254096 DEBUG nova.virt.libvirt.vif [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:48:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1318564784',display_name='tempest-ServerRescueTestJSON-server-1318564784',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1318564784',id=98,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='544c4f84ca494482aea8e55248fe4c62',ramdisk_id='',reservation_id='r-07wxgfqh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-143951276',owner_user_name='tempest-ServerRescueTestJSON-143951276-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:48:41Z,user_data=None,user_id='013868ddd96f43a49458a4615ab1f41b',uuid=6b74b880-45f6-4f10-b09f-2696629a42e9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "acb7d65c-0259-4a39-94f8-d7f64637a340", "address": "fa:16:3e:b2:d7:96", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacb7d65c-02", "ovs_interfaceid": "acb7d65c-0259-4a39-94f8-d7f64637a340", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.337 254096 DEBUG nova.network.os_vif_util [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Converting VIF {"id": "acb7d65c-0259-4a39-94f8-d7f64637a340", "address": "fa:16:3e:b2:d7:96", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacb7d65c-02", "ovs_interfaceid": "acb7d65c-0259-4a39-94f8-d7f64637a340", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.338 254096 DEBUG nova.network.os_vif_util [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b2:d7:96,bridge_name='br-int',has_traffic_filtering=True,id=acb7d65c-0259-4a39-94f8-d7f64637a340,network=Network(7e7afc96-cd20-4255-ae42-f12adadff80d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacb7d65c-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.341 254096 DEBUG nova.objects.instance [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6b74b880-45f6-4f10-b09f-2696629a42e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.354 254096 DEBUG nova.virt.libvirt.driver [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:48:46 compute-0 nova_compute[254092]:   <uuid>6b74b880-45f6-4f10-b09f-2696629a42e9</uuid>
Nov 25 16:48:46 compute-0 nova_compute[254092]:   <name>instance-00000062</name>
Nov 25 16:48:46 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:48:46 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:48:46 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:48:46 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:       <nova:name>tempest-ServerRescueTestJSON-server-1318564784</nova:name>
Nov 25 16:48:46 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:48:45</nova:creationTime>
Nov 25 16:48:46 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:48:46 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:48:46 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:48:46 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:48:46 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:48:46 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:48:46 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:48:46 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:48:46 compute-0 nova_compute[254092]:         <nova:user uuid="013868ddd96f43a49458a4615ab1f41b">tempest-ServerRescueTestJSON-143951276-project-member</nova:user>
Nov 25 16:48:46 compute-0 nova_compute[254092]:         <nova:project uuid="544c4f84ca494482aea8e55248fe4c62">tempest-ServerRescueTestJSON-143951276</nova:project>
Nov 25 16:48:46 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:48:46 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:48:46 compute-0 nova_compute[254092]:         <nova:port uuid="acb7d65c-0259-4a39-94f8-d7f64637a340">
Nov 25 16:48:46 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.2" ipVersion="4"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:48:46 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:48:46 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:48:46 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:48:46 compute-0 nova_compute[254092]:     <system>
Nov 25 16:48:46 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:48:46 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:48:46 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:48:46 compute-0 nova_compute[254092]:       <entry name="serial">6b74b880-45f6-4f10-b09f-2696629a42e9</entry>
Nov 25 16:48:46 compute-0 nova_compute[254092]:       <entry name="uuid">6b74b880-45f6-4f10-b09f-2696629a42e9</entry>
Nov 25 16:48:46 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     </system>
Nov 25 16:48:46 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:48:46 compute-0 nova_compute[254092]:   <os>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:   </os>
Nov 25 16:48:46 compute-0 nova_compute[254092]:   <features>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:   </features>
Nov 25 16:48:46 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:48:46 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:48:46 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:48:46 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:48:46 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:48:46 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/6b74b880-45f6-4f10-b09f-2696629a42e9_disk">
Nov 25 16:48:46 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:       </source>
Nov 25 16:48:46 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:48:46 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:48:46 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:48:46 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/6b74b880-45f6-4f10-b09f-2696629a42e9_disk.config">
Nov 25 16:48:46 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:       </source>
Nov 25 16:48:46 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:48:46 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:48:46 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:48:46 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:b2:d7:96"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:       <target dev="tapacb7d65c-02"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:48:46 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/6b74b880-45f6-4f10-b09f-2696629a42e9/console.log" append="off"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     <video>
Nov 25 16:48:46 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     </video>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:48:46 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:48:46 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:48:46 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:48:46 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:48:46 compute-0 nova_compute[254092]: </domain>
Nov 25 16:48:46 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.361 254096 DEBUG nova.compute.manager [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Preparing to wait for external event network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.361 254096 DEBUG oslo_concurrency.lockutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquiring lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.362 254096 DEBUG oslo_concurrency.lockutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.362 254096 DEBUG oslo_concurrency.lockutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.363 254096 DEBUG nova.virt.libvirt.vif [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:48:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1318564784',display_name='tempest-ServerRescueTestJSON-server-1318564784',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1318564784',id=98,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='544c4f84ca494482aea8e55248fe4c62',ramdisk_id='',reservation_id='r-07wxgfqh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-143951276',owner_user_name='tempest-ServerRescueTestJSON-143951276-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:48:41Z,user_data=None,user_id='013868ddd96f43a49458a4615ab1f41b',uuid=6b74b880-45f6-4f10-b09f-2696629a42e9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "acb7d65c-0259-4a39-94f8-d7f64637a340", "address": "fa:16:3e:b2:d7:96", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacb7d65c-02", "ovs_interfaceid": "acb7d65c-0259-4a39-94f8-d7f64637a340", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.364 254096 DEBUG nova.network.os_vif_util [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Converting VIF {"id": "acb7d65c-0259-4a39-94f8-d7f64637a340", "address": "fa:16:3e:b2:d7:96", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacb7d65c-02", "ovs_interfaceid": "acb7d65c-0259-4a39-94f8-d7f64637a340", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.365 254096 DEBUG nova.network.os_vif_util [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b2:d7:96,bridge_name='br-int',has_traffic_filtering=True,id=acb7d65c-0259-4a39-94f8-d7f64637a340,network=Network(7e7afc96-cd20-4255-ae42-f12adadff80d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacb7d65c-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.366 254096 DEBUG os_vif [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:d7:96,bridge_name='br-int',has_traffic_filtering=True,id=acb7d65c-0259-4a39-94f8-d7f64637a340,network=Network(7e7afc96-cd20-4255-ae42-f12adadff80d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacb7d65c-02') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.367 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.368 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.368 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.375 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.376 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapacb7d65c-02, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.377 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapacb7d65c-02, col_values=(('external_ids', {'iface-id': 'acb7d65c-0259-4a39-94f8-d7f64637a340', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b2:d7:96', 'vm-uuid': '6b74b880-45f6-4f10-b09f-2696629a42e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.378 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:46 compute-0 NetworkManager[48891]: <info>  [1764089326.3811] manager: (tapacb7d65c-02): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/390)
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.384 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.390 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.393 254096 INFO os_vif [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:d7:96,bridge_name='br-int',has_traffic_filtering=True,id=acb7d65c-0259-4a39-94f8-d7f64637a340,network=Network(7e7afc96-cd20-4255-ae42-f12adadff80d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacb7d65c-02')
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.449 254096 DEBUG nova.virt.libvirt.driver [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.450 254096 DEBUG nova.virt.libvirt.driver [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.450 254096 DEBUG nova.virt.libvirt.driver [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] No VIF found with MAC fa:16:3e:b2:d7:96, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.451 254096 INFO nova.virt.libvirt.driver [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Using config drive
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.483 254096 DEBUG nova.storage.rbd_utils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 6b74b880-45f6-4f10-b09f-2696629a42e9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.870 254096 DEBUG oslo_concurrency.lockutils [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "4677de7c-6625-4c98-a065-214341d8bfea" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.872 254096 DEBUG oslo_concurrency.lockutils [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "4677de7c-6625-4c98-a065-214341d8bfea" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.872 254096 DEBUG oslo_concurrency.lockutils [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "4677de7c-6625-4c98-a065-214341d8bfea-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.873 254096 DEBUG oslo_concurrency.lockutils [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "4677de7c-6625-4c98-a065-214341d8bfea-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.873 254096 DEBUG oslo_concurrency.lockutils [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "4677de7c-6625-4c98-a065-214341d8bfea-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.876 254096 INFO nova.compute.manager [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Terminating instance
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.878 254096 DEBUG nova.compute.manager [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.891 254096 INFO nova.virt.libvirt.driver [-] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Instance destroyed successfully.
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.892 254096 DEBUG nova.objects.instance [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lazy-loading 'resources' on Instance uuid 4677de7c-6625-4c98-a065-214341d8bfea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.903 254096 DEBUG nova.virt.libvirt.vif [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:48:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1206046660',display_name='tempest-Íñstáñcé-1734518290',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1206046660',id=95,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:48:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='56790f54314e4087abb5da8030b17666',ramdisk_id='',reservation_id='r-g8d6ljpe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1089675869',owner_user_name='tempest-ServersTestJSON-1089675869-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:48:44Z,user_data=None,user_id='7a137feb008c49e092cbb94106526835',uuid=4677de7c-6625-4c98-a065-214341d8bfea,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "cce0c8fe-e83f-4422-aeb3-8b1e6bafa462", "address": "fa:16:3e:87:86:24", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcce0c8fe-e8", "ovs_interfaceid": "cce0c8fe-e83f-4422-aeb3-8b1e6bafa462", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.904 254096 DEBUG nova.network.os_vif_util [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converting VIF {"id": "cce0c8fe-e83f-4422-aeb3-8b1e6bafa462", "address": "fa:16:3e:87:86:24", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcce0c8fe-e8", "ovs_interfaceid": "cce0c8fe-e83f-4422-aeb3-8b1e6bafa462", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.906 254096 DEBUG nova.network.os_vif_util [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:86:24,bridge_name='br-int',has_traffic_filtering=True,id=cce0c8fe-e83f-4422-aeb3-8b1e6bafa462,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcce0c8fe-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.906 254096 DEBUG os_vif [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:86:24,bridge_name='br-int',has_traffic_filtering=True,id=cce0c8fe-e83f-4422-aeb3-8b1e6bafa462,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcce0c8fe-e8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.910 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.911 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcce0c8fe-e8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.917 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.923 254096 INFO nova.virt.libvirt.driver [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Creating config drive at /var/lib/nova/instances/6b74b880-45f6-4f10-b09f-2696629a42e9/disk.config
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.932 254096 DEBUG oslo_concurrency.processutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6b74b880-45f6-4f10-b09f-2696629a42e9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp46_uoyrr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.989 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.994 254096 DEBUG nova.network.neutron [req-92ac1f97-3008-479b-bae3-ca1245d67114 req-88fe1cca-acbc-47ea-a1b2-7a75d0e477ea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Updated VIF entry in instance network info cache for port acb7d65c-0259-4a39-94f8-d7f64637a340. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:48:46 compute-0 nova_compute[254092]: 2025-11-25 16:48:46.995 254096 DEBUG nova.network.neutron [req-92ac1f97-3008-479b-bae3-ca1245d67114 req-88fe1cca-acbc-47ea-a1b2-7a75d0e477ea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Updating instance_info_cache with network_info: [{"id": "acb7d65c-0259-4a39-94f8-d7f64637a340", "address": "fa:16:3e:b2:d7:96", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacb7d65c-02", "ovs_interfaceid": "acb7d65c-0259-4a39-94f8-d7f64637a340", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:48:47 compute-0 nova_compute[254092]: 2025-11-25 16:48:47.002 254096 INFO os_vif [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:86:24,bridge_name='br-int',has_traffic_filtering=True,id=cce0c8fe-e83f-4422-aeb3-8b1e6bafa462,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcce0c8fe-e8')
Nov 25 16:48:47 compute-0 nova_compute[254092]: 2025-11-25 16:48:47.030 254096 DEBUG oslo_concurrency.lockutils [req-92ac1f97-3008-479b-bae3-ca1245d67114 req-88fe1cca-acbc-47ea-a1b2-7a75d0e477ea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-6b74b880-45f6-4f10-b09f-2696629a42e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:48:47 compute-0 ceph-mon[74985]: pgmap v1917: 321 pgs: 321 active+clean; 529 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 3.3 MiB/s wr, 304 op/s
Nov 25 16:48:47 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3927801959' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:48:47 compute-0 nova_compute[254092]: 2025-11-25 16:48:47.107 254096 DEBUG oslo_concurrency.processutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6b74b880-45f6-4f10-b09f-2696629a42e9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp46_uoyrr" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:47 compute-0 nova_compute[254092]: 2025-11-25 16:48:47.149 254096 DEBUG nova.storage.rbd_utils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 6b74b880-45f6-4f10-b09f-2696629a42e9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:47 compute-0 nova_compute[254092]: 2025-11-25 16:48:47.156 254096 DEBUG oslo_concurrency.processutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6b74b880-45f6-4f10-b09f-2696629a42e9/disk.config 6b74b880-45f6-4f10-b09f-2696629a42e9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:47 compute-0 nova_compute[254092]: 2025-11-25 16:48:47.627 254096 DEBUG oslo_concurrency.processutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6b74b880-45f6-4f10-b09f-2696629a42e9/disk.config 6b74b880-45f6-4f10-b09f-2696629a42e9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:47 compute-0 nova_compute[254092]: 2025-11-25 16:48:47.628 254096 INFO nova.virt.libvirt.driver [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Deleting local config drive /var/lib/nova/instances/6b74b880-45f6-4f10-b09f-2696629a42e9/disk.config because it was imported into RBD.
Nov 25 16:48:47 compute-0 kernel: tapacb7d65c-02: entered promiscuous mode
Nov 25 16:48:47 compute-0 NetworkManager[48891]: <info>  [1764089327.6948] manager: (tapacb7d65c-02): new Tun device (/org/freedesktop/NetworkManager/Devices/391)
Nov 25 16:48:47 compute-0 ovn_controller[153477]: 2025-11-25T16:48:47Z|00927|binding|INFO|Claiming lport acb7d65c-0259-4a39-94f8-d7f64637a340 for this chassis.
Nov 25 16:48:47 compute-0 ovn_controller[153477]: 2025-11-25T16:48:47Z|00928|binding|INFO|acb7d65c-0259-4a39-94f8-d7f64637a340: Claiming fa:16:3e:b2:d7:96 10.100.0.2
Nov 25 16:48:47 compute-0 nova_compute[254092]: 2025-11-25 16:48:47.698 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:47.715 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:d7:96 10.100.0.2'], port_security=['fa:16:3e:b2:d7:96 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': '6b74b880-45f6-4f10-b09f-2696629a42e9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e7afc96-cd20-4255-ae42-f12adadff80d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '544c4f84ca494482aea8e55248fe4c62', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ea3857ca-f518-4e95-9458-20b7becb63fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1ffd0c38-5078-40a5-ba7b-d1f91d316b3e, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=acb7d65c-0259-4a39-94f8-d7f64637a340) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:48:47 compute-0 podman[352391]: 2025-11-25 16:48:47.716228743 +0000 UTC m=+0.118869121 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 25 16:48:47 compute-0 podman[352392]: 2025-11-25 16:48:47.712822781 +0000 UTC m=+0.116973879 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Nov 25 16:48:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:47.717 163338 INFO neutron.agent.ovn.metadata.agent [-] Port acb7d65c-0259-4a39-94f8-d7f64637a340 in datapath 7e7afc96-cd20-4255-ae42-f12adadff80d bound to our chassis
Nov 25 16:48:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:47.718 163338 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7e7afc96-cd20-4255-ae42-f12adadff80d or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 25 16:48:47 compute-0 nova_compute[254092]: 2025-11-25 16:48:47.722 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:47 compute-0 ovn_controller[153477]: 2025-11-25T16:48:47Z|00929|binding|INFO|Setting lport acb7d65c-0259-4a39-94f8-d7f64637a340 ovn-installed in OVS
Nov 25 16:48:47 compute-0 ovn_controller[153477]: 2025-11-25T16:48:47Z|00930|binding|INFO|Setting lport acb7d65c-0259-4a39-94f8-d7f64637a340 up in Southbound
Nov 25 16:48:47 compute-0 nova_compute[254092]: 2025-11-25 16:48:47.728 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:47.726 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[102de5a2-0da6-47ce-9fb2-c91d9fa3c52b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:47 compute-0 systemd-udevd[352462]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:48:47 compute-0 systemd-machined[216343]: New machine qemu-120-instance-00000062.
Nov 25 16:48:47 compute-0 podman[352393]: 2025-11-25 16:48:47.755943713 +0000 UTC m=+0.160497613 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:48:47 compute-0 NetworkManager[48891]: <info>  [1764089327.7576] device (tapacb7d65c-02): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:48:47 compute-0 systemd[1]: Started Virtual Machine qemu-120-instance-00000062.
Nov 25 16:48:47 compute-0 NetworkManager[48891]: <info>  [1764089327.7590] device (tapacb7d65c-02): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:48:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1918: 321 pgs: 321 active+clean; 544 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 3.9 MiB/s wr, 276 op/s
Nov 25 16:48:48 compute-0 nova_compute[254092]: 2025-11-25 16:48:48.348 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089328.3473387, 6b74b880-45f6-4f10-b09f-2696629a42e9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:48:48 compute-0 nova_compute[254092]: 2025-11-25 16:48:48.348 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] VM Started (Lifecycle Event)
Nov 25 16:48:48 compute-0 nova_compute[254092]: 2025-11-25 16:48:48.365 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:48:48 compute-0 nova_compute[254092]: 2025-11-25 16:48:48.371 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089328.3475723, 6b74b880-45f6-4f10-b09f-2696629a42e9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:48:48 compute-0 nova_compute[254092]: 2025-11-25 16:48:48.371 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] VM Paused (Lifecycle Event)
Nov 25 16:48:48 compute-0 nova_compute[254092]: 2025-11-25 16:48:48.388 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:48:48 compute-0 nova_compute[254092]: 2025-11-25 16:48:48.398 254096 INFO nova.virt.libvirt.driver [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Deleting instance files /var/lib/nova/instances/4677de7c-6625-4c98-a065-214341d8bfea_del
Nov 25 16:48:48 compute-0 nova_compute[254092]: 2025-11-25 16:48:48.400 254096 INFO nova.virt.libvirt.driver [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Deletion of /var/lib/nova/instances/4677de7c-6625-4c98-a065-214341d8bfea_del complete
Nov 25 16:48:48 compute-0 nova_compute[254092]: 2025-11-25 16:48:48.408 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:48:48 compute-0 nova_compute[254092]: 2025-11-25 16:48:48.429 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:48:48 compute-0 nova_compute[254092]: 2025-11-25 16:48:48.490 254096 INFO nova.compute.manager [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Took 1.61 seconds to destroy the instance on the hypervisor.
Nov 25 16:48:48 compute-0 nova_compute[254092]: 2025-11-25 16:48:48.492 254096 DEBUG oslo.service.loopingcall [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:48:48 compute-0 nova_compute[254092]: 2025-11-25 16:48:48.492 254096 DEBUG nova.compute.manager [-] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:48:48 compute-0 nova_compute[254092]: 2025-11-25 16:48:48.492 254096 DEBUG nova.network.neutron [-] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:48:48 compute-0 nova_compute[254092]: 2025-11-25 16:48:48.654 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:49 compute-0 ceph-mon[74985]: pgmap v1918: 321 pgs: 321 active+clean; 544 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 3.9 MiB/s wr, 276 op/s
Nov 25 16:48:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1919: 321 pgs: 321 active+clean; 544 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 3.9 MiB/s wr, 242 op/s
Nov 25 16:48:50 compute-0 nova_compute[254092]: 2025-11-25 16:48:50.193 254096 DEBUG nova.compute.manager [req-09c76b68-8614-412a-aece-19450577e0f8 req-a9de82c6-3ef9-4422-912f-a723ad9244ea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received event network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:48:50 compute-0 nova_compute[254092]: 2025-11-25 16:48:50.195 254096 DEBUG oslo_concurrency.lockutils [req-09c76b68-8614-412a-aece-19450577e0f8 req-a9de82c6-3ef9-4422-912f-a723ad9244ea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:50 compute-0 nova_compute[254092]: 2025-11-25 16:48:50.196 254096 DEBUG oslo_concurrency.lockutils [req-09c76b68-8614-412a-aece-19450577e0f8 req-a9de82c6-3ef9-4422-912f-a723ad9244ea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:50 compute-0 nova_compute[254092]: 2025-11-25 16:48:50.197 254096 DEBUG oslo_concurrency.lockutils [req-09c76b68-8614-412a-aece-19450577e0f8 req-a9de82c6-3ef9-4422-912f-a723ad9244ea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:50 compute-0 nova_compute[254092]: 2025-11-25 16:48:50.197 254096 DEBUG nova.compute.manager [req-09c76b68-8614-412a-aece-19450577e0f8 req-a9de82c6-3ef9-4422-912f-a723ad9244ea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Processing event network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:48:50 compute-0 nova_compute[254092]: 2025-11-25 16:48:50.198 254096 DEBUG nova.compute.manager [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:48:50 compute-0 nova_compute[254092]: 2025-11-25 16:48:50.213 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089330.212602, 6b74b880-45f6-4f10-b09f-2696629a42e9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:48:50 compute-0 nova_compute[254092]: 2025-11-25 16:48:50.214 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] VM Resumed (Lifecycle Event)
Nov 25 16:48:50 compute-0 nova_compute[254092]: 2025-11-25 16:48:50.215 254096 DEBUG nova.virt.libvirt.driver [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:48:50 compute-0 ceph-mon[74985]: pgmap v1919: 321 pgs: 321 active+clean; 544 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 3.9 MiB/s wr, 242 op/s
Nov 25 16:48:50 compute-0 nova_compute[254092]: 2025-11-25 16:48:50.229 254096 INFO nova.virt.libvirt.driver [-] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Instance spawned successfully.
Nov 25 16:48:50 compute-0 nova_compute[254092]: 2025-11-25 16:48:50.230 254096 DEBUG nova.virt.libvirt.driver [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:48:50 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Nov 25 16:48:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:48:50.236170) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 16:48:50 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Nov 25 16:48:50 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089330236243, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 1791, "num_deletes": 258, "total_data_size": 2546075, "memory_usage": 2597120, "flush_reason": "Manual Compaction"}
Nov 25 16:48:50 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Nov 25 16:48:50 compute-0 nova_compute[254092]: 2025-11-25 16:48:50.241 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:48:50 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089330254009, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 2494121, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38392, "largest_seqno": 40182, "table_properties": {"data_size": 2485958, "index_size": 4849, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 18152, "raw_average_key_size": 20, "raw_value_size": 2469143, "raw_average_value_size": 2825, "num_data_blocks": 214, "num_entries": 874, "num_filter_entries": 874, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764089186, "oldest_key_time": 1764089186, "file_creation_time": 1764089330, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:48:50 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 17881 microseconds, and 6956 cpu microseconds.
Nov 25 16:48:50 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:48:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:48:50.254064) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 2494121 bytes OK
Nov 25 16:48:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:48:50.254089) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Nov 25 16:48:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:48:50.255826) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Nov 25 16:48:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:48:50.255842) EVENT_LOG_v1 {"time_micros": 1764089330255837, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 16:48:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:48:50.255864) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 16:48:50 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 2538200, prev total WAL file size 2538200, number of live WAL files 2.
Nov 25 16:48:50 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:48:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:48:50.256812) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Nov 25 16:48:50 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 16:48:50 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(2435KB)], [86(8673KB)]
Nov 25 16:48:50 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089330256946, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 11375391, "oldest_snapshot_seqno": -1}
Nov 25 16:48:50 compute-0 nova_compute[254092]: 2025-11-25 16:48:50.261 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:48:50 compute-0 nova_compute[254092]: 2025-11-25 16:48:50.266 254096 DEBUG nova.virt.libvirt.driver [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:48:50 compute-0 nova_compute[254092]: 2025-11-25 16:48:50.267 254096 DEBUG nova.virt.libvirt.driver [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:48:50 compute-0 nova_compute[254092]: 2025-11-25 16:48:50.267 254096 DEBUG nova.virt.libvirt.driver [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:48:50 compute-0 nova_compute[254092]: 2025-11-25 16:48:50.268 254096 DEBUG nova.virt.libvirt.driver [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:48:50 compute-0 nova_compute[254092]: 2025-11-25 16:48:50.269 254096 DEBUG nova.virt.libvirt.driver [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:48:50 compute-0 nova_compute[254092]: 2025-11-25 16:48:50.269 254096 DEBUG nova.virt.libvirt.driver [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:48:50 compute-0 nova_compute[254092]: 2025-11-25 16:48:50.302 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:48:50 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 6504 keys, 9725769 bytes, temperature: kUnknown
Nov 25 16:48:50 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089330341098, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 9725769, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9681409, "index_size": 26970, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16325, "raw_key_size": 165699, "raw_average_key_size": 25, "raw_value_size": 9564101, "raw_average_value_size": 1470, "num_data_blocks": 1085, "num_entries": 6504, "num_filter_entries": 6504, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764089330, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:48:50 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:48:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:48:50.341376) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 9725769 bytes
Nov 25 16:48:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:48:50.343113) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 135.1 rd, 115.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 8.5 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(8.5) write-amplify(3.9) OK, records in: 7034, records dropped: 530 output_compression: NoCompression
Nov 25 16:48:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:48:50.343143) EVENT_LOG_v1 {"time_micros": 1764089330343131, "job": 50, "event": "compaction_finished", "compaction_time_micros": 84220, "compaction_time_cpu_micros": 46733, "output_level": 6, "num_output_files": 1, "total_output_size": 9725769, "num_input_records": 7034, "num_output_records": 6504, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 16:48:50 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:48:50 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089330343823, "job": 50, "event": "table_file_deletion", "file_number": 88}
Nov 25 16:48:50 compute-0 nova_compute[254092]: 2025-11-25 16:48:50.344 254096 INFO nova.compute.manager [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Took 9.28 seconds to spawn the instance on the hypervisor.
Nov 25 16:48:50 compute-0 nova_compute[254092]: 2025-11-25 16:48:50.345 254096 DEBUG nova.compute.manager [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:48:50 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:48:50 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089330346490, "job": 50, "event": "table_file_deletion", "file_number": 86}
Nov 25 16:48:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:48:50.256623) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:48:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:48:50.346607) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:48:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:48:50.346614) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:48:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:48:50.346616) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:48:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:48:50.346617) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:48:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:48:50.346618) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:48:50 compute-0 nova_compute[254092]: 2025-11-25 16:48:50.409 254096 INFO nova.compute.manager [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Took 10.37 seconds to build instance.
Nov 25 16:48:50 compute-0 nova_compute[254092]: 2025-11-25 16:48:50.424 254096 DEBUG oslo_concurrency.lockutils [None req-cadf643d-1a43-4be2-9ad4-1e0cfbe7c639 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.441s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:50 compute-0 nova_compute[254092]: 2025-11-25 16:48:50.580 254096 DEBUG nova.network.neutron [-] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:48:50 compute-0 nova_compute[254092]: 2025-11-25 16:48:50.607 254096 INFO nova.compute.manager [-] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Took 2.11 seconds to deallocate network for instance.
Nov 25 16:48:50 compute-0 nova_compute[254092]: 2025-11-25 16:48:50.672 254096 DEBUG oslo_concurrency.lockutils [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:50 compute-0 nova_compute[254092]: 2025-11-25 16:48:50.672 254096 DEBUG oslo_concurrency.lockutils [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:50 compute-0 nova_compute[254092]: 2025-11-25 16:48:50.775 254096 DEBUG oslo_concurrency.lockutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "9b189bbf-2581-4656-83da-12707f48dccc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:50 compute-0 nova_compute[254092]: 2025-11-25 16:48:50.775 254096 DEBUG oslo_concurrency.lockutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "9b189bbf-2581-4656-83da-12707f48dccc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:50 compute-0 nova_compute[254092]: 2025-11-25 16:48:50.792 254096 DEBUG nova.compute.manager [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:48:50 compute-0 nova_compute[254092]: 2025-11-25 16:48:50.853 254096 DEBUG oslo_concurrency.lockutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:50 compute-0 nova_compute[254092]: 2025-11-25 16:48:50.919 254096 DEBUG oslo_concurrency.processutils [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:48:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:48:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:48:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:48:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:48:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004427185078465846 of space, bias 1.0, pg target 1.3281555235397537 quantized to 32 (current 32)
Nov 25 16:48:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:48:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:48:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:48:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:48:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:48:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.1992057139048968 quantized to 32 (current 32)
Nov 25 16:48:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:48:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Nov 25 16:48:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:48:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:48:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:48:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Nov 25 16:48:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:48:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Nov 25 16:48:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:48:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:48:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:48:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Nov 25 16:48:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:48:51 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1791875219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:48:51 compute-0 nova_compute[254092]: 2025-11-25 16:48:51.494 254096 DEBUG oslo_concurrency.processutils [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:51 compute-0 nova_compute[254092]: 2025-11-25 16:48:51.501 254096 DEBUG nova.compute.provider_tree [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:48:51 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1791875219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:48:51 compute-0 nova_compute[254092]: 2025-11-25 16:48:51.518 254096 DEBUG nova.scheduler.client.report [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:48:51 compute-0 nova_compute[254092]: 2025-11-25 16:48:51.551 254096 DEBUG oslo_concurrency.lockutils [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.878s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:51 compute-0 nova_compute[254092]: 2025-11-25 16:48:51.556 254096 DEBUG oslo_concurrency.lockutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:51 compute-0 nova_compute[254092]: 2025-11-25 16:48:51.564 254096 DEBUG nova.virt.hardware [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:48:51 compute-0 nova_compute[254092]: 2025-11-25 16:48:51.564 254096 INFO nova.compute.claims [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:48:51 compute-0 nova_compute[254092]: 2025-11-25 16:48:51.582 254096 INFO nova.scheduler.client.report [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Deleted allocations for instance 4677de7c-6625-4c98-a065-214341d8bfea
Nov 25 16:48:51 compute-0 nova_compute[254092]: 2025-11-25 16:48:51.711 254096 DEBUG oslo_concurrency.lockutils [None req-bb34ebdb-50ca-42c8-8e96-c6c9ea49e4e3 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "4677de7c-6625-4c98-a065-214341d8bfea" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1920: 321 pgs: 321 active+clean; 467 MiB data, 895 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 4.0 MiB/s wr, 328 op/s
Nov 25 16:48:51 compute-0 nova_compute[254092]: 2025-11-25 16:48:51.885 254096 DEBUG oslo_concurrency.processutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:51 compute-0 nova_compute[254092]: 2025-11-25 16:48:51.930 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:51 compute-0 nova_compute[254092]: 2025-11-25 16:48:51.953 254096 INFO nova.compute.manager [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Rescuing
Nov 25 16:48:51 compute-0 nova_compute[254092]: 2025-11-25 16:48:51.954 254096 DEBUG oslo_concurrency.lockutils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquiring lock "refresh_cache-6b74b880-45f6-4f10-b09f-2696629a42e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:48:51 compute-0 nova_compute[254092]: 2025-11-25 16:48:51.954 254096 DEBUG oslo_concurrency.lockutils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquired lock "refresh_cache-6b74b880-45f6-4f10-b09f-2696629a42e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:48:51 compute-0 nova_compute[254092]: 2025-11-25 16:48:51.954 254096 DEBUG nova.network.neutron [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:48:52 compute-0 nova_compute[254092]: 2025-11-25 16:48:52.264 254096 DEBUG nova.compute.manager [req-63f7edd0-114e-42b9-bf36-575b3a1fdc1f req-9cc6f497-5268-4f37-9d4d-9b2f301e843a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received event network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:48:52 compute-0 nova_compute[254092]: 2025-11-25 16:48:52.265 254096 DEBUG oslo_concurrency.lockutils [req-63f7edd0-114e-42b9-bf36-575b3a1fdc1f req-9cc6f497-5268-4f37-9d4d-9b2f301e843a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:52 compute-0 nova_compute[254092]: 2025-11-25 16:48:52.265 254096 DEBUG oslo_concurrency.lockutils [req-63f7edd0-114e-42b9-bf36-575b3a1fdc1f req-9cc6f497-5268-4f37-9d4d-9b2f301e843a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:52 compute-0 nova_compute[254092]: 2025-11-25 16:48:52.265 254096 DEBUG oslo_concurrency.lockutils [req-63f7edd0-114e-42b9-bf36-575b3a1fdc1f req-9cc6f497-5268-4f37-9d4d-9b2f301e843a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:52 compute-0 nova_compute[254092]: 2025-11-25 16:48:52.265 254096 DEBUG nova.compute.manager [req-63f7edd0-114e-42b9-bf36-575b3a1fdc1f req-9cc6f497-5268-4f37-9d4d-9b2f301e843a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] No waiting events found dispatching network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:48:52 compute-0 nova_compute[254092]: 2025-11-25 16:48:52.266 254096 WARNING nova.compute.manager [req-63f7edd0-114e-42b9-bf36-575b3a1fdc1f req-9cc6f497-5268-4f37-9d4d-9b2f301e843a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received unexpected event network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 for instance with vm_state active and task_state rescuing.
Nov 25 16:48:52 compute-0 nova_compute[254092]: 2025-11-25 16:48:52.266 254096 DEBUG nova.compute.manager [req-63f7edd0-114e-42b9-bf36-575b3a1fdc1f req-9cc6f497-5268-4f37-9d4d-9b2f301e843a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Received event network-vif-deleted-cce0c8fe-e83f-4422-aeb3-8b1e6bafa462 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:48:52 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:48:52 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/357231945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:48:52 compute-0 nova_compute[254092]: 2025-11-25 16:48:52.489 254096 DEBUG oslo_concurrency.processutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:52 compute-0 nova_compute[254092]: 2025-11-25 16:48:52.497 254096 DEBUG nova.compute.provider_tree [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:48:52 compute-0 nova_compute[254092]: 2025-11-25 16:48:52.512 254096 DEBUG nova.scheduler.client.report [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:48:52 compute-0 ceph-mon[74985]: pgmap v1920: 321 pgs: 321 active+clean; 467 MiB data, 895 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 4.0 MiB/s wr, 328 op/s
Nov 25 16:48:52 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/357231945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:48:52 compute-0 nova_compute[254092]: 2025-11-25 16:48:52.530 254096 DEBUG oslo_concurrency.lockutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.974s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:52 compute-0 nova_compute[254092]: 2025-11-25 16:48:52.531 254096 DEBUG nova.compute.manager [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:48:52 compute-0 nova_compute[254092]: 2025-11-25 16:48:52.578 254096 DEBUG nova.compute.manager [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:48:52 compute-0 nova_compute[254092]: 2025-11-25 16:48:52.578 254096 DEBUG nova.network.neutron [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:48:52 compute-0 nova_compute[254092]: 2025-11-25 16:48:52.600 254096 INFO nova.virt.libvirt.driver [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:48:52 compute-0 nova_compute[254092]: 2025-11-25 16:48:52.620 254096 DEBUG nova.compute.manager [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:48:52 compute-0 nova_compute[254092]: 2025-11-25 16:48:52.717 254096 DEBUG nova.compute.manager [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:48:52 compute-0 nova_compute[254092]: 2025-11-25 16:48:52.719 254096 DEBUG nova.virt.libvirt.driver [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:48:52 compute-0 nova_compute[254092]: 2025-11-25 16:48:52.720 254096 INFO nova.virt.libvirt.driver [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Creating image(s)
Nov 25 16:48:52 compute-0 nova_compute[254092]: 2025-11-25 16:48:52.748 254096 DEBUG nova.storage.rbd_utils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image 9b189bbf-2581-4656-83da-12707f48dccc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:52 compute-0 nova_compute[254092]: 2025-11-25 16:48:52.781 254096 DEBUG nova.storage.rbd_utils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image 9b189bbf-2581-4656-83da-12707f48dccc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:52 compute-0 nova_compute[254092]: 2025-11-25 16:48:52.830 254096 DEBUG nova.storage.rbd_utils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image 9b189bbf-2581-4656-83da-12707f48dccc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:52 compute-0 nova_compute[254092]: 2025-11-25 16:48:52.849 254096 DEBUG oslo_concurrency.processutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:52 compute-0 nova_compute[254092]: 2025-11-25 16:48:52.915 254096 DEBUG nova.policy [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '23f6db77558a477bbd8b8b46cb4107d1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fbf763b31dad40d6b0d7285dc017dd89', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:48:52 compute-0 nova_compute[254092]: 2025-11-25 16:48:52.974 254096 DEBUG oslo_concurrency.processutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:52 compute-0 nova_compute[254092]: 2025-11-25 16:48:52.976 254096 DEBUG oslo_concurrency.lockutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:52 compute-0 nova_compute[254092]: 2025-11-25 16:48:52.977 254096 DEBUG oslo_concurrency.lockutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:52 compute-0 nova_compute[254092]: 2025-11-25 16:48:52.977 254096 DEBUG oslo_concurrency.lockutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:53 compute-0 nova_compute[254092]: 2025-11-25 16:48:53.017 254096 DEBUG nova.storage.rbd_utils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image 9b189bbf-2581-4656-83da-12707f48dccc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:53 compute-0 nova_compute[254092]: 2025-11-25 16:48:53.031 254096 DEBUG oslo_concurrency.processutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 9b189bbf-2581-4656-83da-12707f48dccc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:53 compute-0 nova_compute[254092]: 2025-11-25 16:48:53.124 254096 DEBUG oslo_concurrency.lockutils [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "8b20d119-17cb-4742-9223-90e5020f93a7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:53 compute-0 nova_compute[254092]: 2025-11-25 16:48:53.124 254096 DEBUG oslo_concurrency.lockutils [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "8b20d119-17cb-4742-9223-90e5020f93a7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:53 compute-0 nova_compute[254092]: 2025-11-25 16:48:53.125 254096 DEBUG oslo_concurrency.lockutils [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "8b20d119-17cb-4742-9223-90e5020f93a7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:53 compute-0 nova_compute[254092]: 2025-11-25 16:48:53.125 254096 DEBUG oslo_concurrency.lockutils [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "8b20d119-17cb-4742-9223-90e5020f93a7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:53 compute-0 nova_compute[254092]: 2025-11-25 16:48:53.126 254096 DEBUG oslo_concurrency.lockutils [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "8b20d119-17cb-4742-9223-90e5020f93a7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:53 compute-0 nova_compute[254092]: 2025-11-25 16:48:53.127 254096 INFO nova.compute.manager [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Terminating instance
Nov 25 16:48:53 compute-0 nova_compute[254092]: 2025-11-25 16:48:53.128 254096 DEBUG nova.compute.manager [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:48:53 compute-0 kernel: tap419102e4-bc (unregistering): left promiscuous mode
Nov 25 16:48:53 compute-0 NetworkManager[48891]: <info>  [1764089333.2151] device (tap419102e4-bc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:48:53 compute-0 ovn_controller[153477]: 2025-11-25T16:48:53Z|00931|binding|INFO|Releasing lport 419102e4-bcb4-496b-b45c-fba9f5525746 from this chassis (sb_readonly=0)
Nov 25 16:48:53 compute-0 nova_compute[254092]: 2025-11-25 16:48:53.227 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:53 compute-0 ovn_controller[153477]: 2025-11-25T16:48:53Z|00932|binding|INFO|Setting lport 419102e4-bcb4-496b-b45c-fba9f5525746 down in Southbound
Nov 25 16:48:53 compute-0 ovn_controller[153477]: 2025-11-25T16:48:53Z|00933|binding|INFO|Removing iface tap419102e4-bc ovn-installed in OVS
Nov 25 16:48:53 compute-0 nova_compute[254092]: 2025-11-25 16:48:53.234 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:53.239 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:0f:a2 10.100.0.7'], port_security=['fa:16:3e:08:0f:a2 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '8b20d119-17cb-4742-9223-90e5020f93a7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1431a6bc-93c8-4db5-a148-b2950f02c941', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56790f54314e4087abb5da8030b17666', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6f3e9934-51eb-4bf8-9114-93abd745996a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f25b0746-2c82-4a0a-9e71-0116018fc740, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=419102e4-bcb4-496b-b45c-fba9f5525746) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:48:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:53.240 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 419102e4-bcb4-496b-b45c-fba9f5525746 in datapath 1431a6bc-93c8-4db5-a148-b2950f02c941 unbound from our chassis
Nov 25 16:48:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:53.242 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1431a6bc-93c8-4db5-a148-b2950f02c941, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:48:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:53.244 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[88c90851-0864-456f-8e39-43348ab1578c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:53.245 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941 namespace which is not needed anymore
Nov 25 16:48:53 compute-0 nova_compute[254092]: 2025-11-25 16:48:53.258 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:53 compute-0 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d00000053.scope: Deactivated successfully.
Nov 25 16:48:53 compute-0 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d00000053.scope: Consumed 19.289s CPU time.
Nov 25 16:48:53 compute-0 systemd-machined[216343]: Machine qemu-103-instance-00000053 terminated.
Nov 25 16:48:53 compute-0 nova_compute[254092]: 2025-11-25 16:48:53.357 254096 DEBUG nova.network.neutron [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Updating instance_info_cache with network_info: [{"id": "acb7d65c-0259-4a39-94f8-d7f64637a340", "address": "fa:16:3e:b2:d7:96", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacb7d65c-02", "ovs_interfaceid": "acb7d65c-0259-4a39-94f8-d7f64637a340", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:48:53 compute-0 nova_compute[254092]: 2025-11-25 16:48:53.359 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:53 compute-0 nova_compute[254092]: 2025-11-25 16:48:53.367 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:53 compute-0 nova_compute[254092]: 2025-11-25 16:48:53.383 254096 DEBUG oslo_concurrency.lockutils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Releasing lock "refresh_cache-6b74b880-45f6-4f10-b09f-2696629a42e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:48:53 compute-0 nova_compute[254092]: 2025-11-25 16:48:53.404 254096 INFO nova.virt.libvirt.driver [-] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Instance destroyed successfully.
Nov 25 16:48:53 compute-0 nova_compute[254092]: 2025-11-25 16:48:53.405 254096 DEBUG nova.objects.instance [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lazy-loading 'resources' on Instance uuid 8b20d119-17cb-4742-9223-90e5020f93a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:48:53 compute-0 nova_compute[254092]: 2025-11-25 16:48:53.421 254096 DEBUG nova.compute.manager [req-bae7d2f1-f39b-4d1d-8f02-03f4d308944f req-6768ed21-f62c-4c7a-a6c2-a34a25936c68 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Received event network-vif-unplugged-419102e4-bcb4-496b-b45c-fba9f5525746 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:48:53 compute-0 nova_compute[254092]: 2025-11-25 16:48:53.421 254096 DEBUG oslo_concurrency.lockutils [req-bae7d2f1-f39b-4d1d-8f02-03f4d308944f req-6768ed21-f62c-4c7a-a6c2-a34a25936c68 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8b20d119-17cb-4742-9223-90e5020f93a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:53 compute-0 nova_compute[254092]: 2025-11-25 16:48:53.422 254096 DEBUG oslo_concurrency.lockutils [req-bae7d2f1-f39b-4d1d-8f02-03f4d308944f req-6768ed21-f62c-4c7a-a6c2-a34a25936c68 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8b20d119-17cb-4742-9223-90e5020f93a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:53 compute-0 nova_compute[254092]: 2025-11-25 16:48:53.422 254096 DEBUG oslo_concurrency.lockutils [req-bae7d2f1-f39b-4d1d-8f02-03f4d308944f req-6768ed21-f62c-4c7a-a6c2-a34a25936c68 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8b20d119-17cb-4742-9223-90e5020f93a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:53 compute-0 nova_compute[254092]: 2025-11-25 16:48:53.422 254096 DEBUG nova.compute.manager [req-bae7d2f1-f39b-4d1d-8f02-03f4d308944f req-6768ed21-f62c-4c7a-a6c2-a34a25936c68 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] No waiting events found dispatching network-vif-unplugged-419102e4-bcb4-496b-b45c-fba9f5525746 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:48:53 compute-0 nova_compute[254092]: 2025-11-25 16:48:53.422 254096 DEBUG nova.compute.manager [req-bae7d2f1-f39b-4d1d-8f02-03f4d308944f req-6768ed21-f62c-4c7a-a6c2-a34a25936c68 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Received event network-vif-unplugged-419102e4-bcb4-496b-b45c-fba9f5525746 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:48:53 compute-0 nova_compute[254092]: 2025-11-25 16:48:53.424 254096 DEBUG nova.virt.libvirt.vif [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:46:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-₡-1354265431',display_name='tempest-₡-1354265431',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--1354265431',id=83,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:46:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='56790f54314e4087abb5da8030b17666',ramdisk_id='',reservation_id='r-wvl75eev',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1089675869',owner_user_name='tempest-ServersTestJSON-1089675869-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:46:23Z,user_data=None,user_id='7a137feb008c49e092cbb94106526835',uuid=8b20d119-17cb-4742-9223-90e5020f93a7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "419102e4-bcb4-496b-b45c-fba9f5525746", "address": "fa:16:3e:08:0f:a2", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap419102e4-bc", "ovs_interfaceid": "419102e4-bcb4-496b-b45c-fba9f5525746", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:48:53 compute-0 nova_compute[254092]: 2025-11-25 16:48:53.425 254096 DEBUG nova.network.os_vif_util [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converting VIF {"id": "419102e4-bcb4-496b-b45c-fba9f5525746", "address": "fa:16:3e:08:0f:a2", "network": {"id": "1431a6bc-93c8-4db5-a148-b2950f02c941", "bridge": "br-int", "label": "tempest-ServersTestJSON-1993502552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56790f54314e4087abb5da8030b17666", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap419102e4-bc", "ovs_interfaceid": "419102e4-bcb4-496b-b45c-fba9f5525746", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:48:53 compute-0 nova_compute[254092]: 2025-11-25 16:48:53.425 254096 DEBUG nova.network.os_vif_util [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:08:0f:a2,bridge_name='br-int',has_traffic_filtering=True,id=419102e4-bcb4-496b-b45c-fba9f5525746,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap419102e4-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:48:53 compute-0 nova_compute[254092]: 2025-11-25 16:48:53.426 254096 DEBUG os_vif [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:08:0f:a2,bridge_name='br-int',has_traffic_filtering=True,id=419102e4-bcb4-496b-b45c-fba9f5525746,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap419102e4-bc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:48:53 compute-0 nova_compute[254092]: 2025-11-25 16:48:53.429 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:53 compute-0 nova_compute[254092]: 2025-11-25 16:48:53.429 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap419102e4-bc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:48:53 compute-0 nova_compute[254092]: 2025-11-25 16:48:53.431 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:53 compute-0 nova_compute[254092]: 2025-11-25 16:48:53.434 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:53 compute-0 nova_compute[254092]: 2025-11-25 16:48:53.441 254096 INFO os_vif [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:08:0f:a2,bridge_name='br-int',has_traffic_filtering=True,id=419102e4-bcb4-496b-b45c-fba9f5525746,network=Network(1431a6bc-93c8-4db5-a148-b2950f02c941),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap419102e4-bc')
Nov 25 16:48:53 compute-0 neutron-haproxy-ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941[342057]: [NOTICE]   (342061) : haproxy version is 2.8.14-c23fe91
Nov 25 16:48:53 compute-0 neutron-haproxy-ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941[342057]: [NOTICE]   (342061) : path to executable is /usr/sbin/haproxy
Nov 25 16:48:53 compute-0 neutron-haproxy-ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941[342057]: [WARNING]  (342061) : Exiting Master process...
Nov 25 16:48:53 compute-0 neutron-haproxy-ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941[342057]: [ALERT]    (342061) : Current worker (342063) exited with code 143 (Terminated)
Nov 25 16:48:53 compute-0 neutron-haproxy-ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941[342057]: [WARNING]  (342061) : All workers exited. Exiting... (0)
Nov 25 16:48:53 compute-0 systemd[1]: libpod-5d48e72cc714355b780f1842bac39a14f5f60877cf82c32894ad74cf2c592815.scope: Deactivated successfully.
Nov 25 16:48:53 compute-0 podman[352682]: 2025-11-25 16:48:53.512486738 +0000 UTC m=+0.066222261 container died 5d48e72cc714355b780f1842bac39a14f5f60877cf82c32894ad74cf2c592815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:48:53 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5d48e72cc714355b780f1842bac39a14f5f60877cf82c32894ad74cf2c592815-userdata-shm.mount: Deactivated successfully.
Nov 25 16:48:53 compute-0 nova_compute[254092]: 2025-11-25 16:48:53.555 254096 DEBUG oslo_concurrency.processutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 9b189bbf-2581-4656-83da-12707f48dccc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-1774b9fa250e4e770fd47cef979c1f2ebcd875e71157ccd0e21761837005a179-merged.mount: Deactivated successfully.
Nov 25 16:48:53 compute-0 podman[352682]: 2025-11-25 16:48:53.58137059 +0000 UTC m=+0.135106093 container cleanup 5d48e72cc714355b780f1842bac39a14f5f60877cf82c32894ad74cf2c592815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:48:53 compute-0 systemd[1]: libpod-conmon-5d48e72cc714355b780f1842bac39a14f5f60877cf82c32894ad74cf2c592815.scope: Deactivated successfully.
Nov 25 16:48:53 compute-0 nova_compute[254092]: 2025-11-25 16:48:53.668 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:53 compute-0 nova_compute[254092]: 2025-11-25 16:48:53.684 254096 DEBUG nova.storage.rbd_utils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] resizing rbd image 9b189bbf-2581-4656-83da-12707f48dccc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:48:53 compute-0 podman[352742]: 2025-11-25 16:48:53.71349612 +0000 UTC m=+0.096425301 container remove 5d48e72cc714355b780f1842bac39a14f5f60877cf82c32894ad74cf2c592815 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 16:48:53 compute-0 nova_compute[254092]: 2025-11-25 16:48:53.729 254096 DEBUG nova.network.neutron [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Successfully created port: e7e60738-4c0d-46ae-a9b6-1477573be82f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:48:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:53.729 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[953fe9ab-af21-4469-93c6-3d691222ef27]: (4, ('Tue Nov 25 04:48:53 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941 (5d48e72cc714355b780f1842bac39a14f5f60877cf82c32894ad74cf2c592815)\n5d48e72cc714355b780f1842bac39a14f5f60877cf82c32894ad74cf2c592815\nTue Nov 25 04:48:53 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941 (5d48e72cc714355b780f1842bac39a14f5f60877cf82c32894ad74cf2c592815)\n5d48e72cc714355b780f1842bac39a14f5f60877cf82c32894ad74cf2c592815\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:53.732 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[db59737d-6311-4c79-9cf2-0595c575b70e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:53 compute-0 nova_compute[254092]: 2025-11-25 16:48:53.733 254096 DEBUG nova.virt.libvirt.driver [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 16:48:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:53.734 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1431a6bc-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:48:53 compute-0 nova_compute[254092]: 2025-11-25 16:48:53.737 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:53 compute-0 kernel: tap1431a6bc-90: left promiscuous mode
Nov 25 16:48:53 compute-0 nova_compute[254092]: 2025-11-25 16:48:53.755 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:53.767 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c1b91476-b86d-44a6-a911-49603c839748]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:53.785 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b677bc25-d2f7-4542-a885-c8383223f867]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:53.787 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[87c5c771-2cf6-4979-a385-3cac5086c510]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1921: 321 pgs: 321 active+clean; 467 MiB data, 895 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 199 op/s
Nov 25 16:48:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:53.810 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[272f2f83-19db-4e60-95f5-83937c01215d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563963, 'reachable_time': 32097, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352796, 'error': None, 'target': 'ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:53 compute-0 systemd[1]: run-netns-ovnmeta\x2d1431a6bc\x2d93c8\x2d4db5\x2da148\x2db2950f02c941.mount: Deactivated successfully.
Nov 25 16:48:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:53.818 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1431a6bc-93c8-4db5-a148-b2950f02c941 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:48:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:53.818 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[39f8aff9-22e1-4775-bd2b-228c99818e69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:53 compute-0 nova_compute[254092]: 2025-11-25 16:48:53.887 254096 DEBUG nova.objects.instance [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lazy-loading 'migration_context' on Instance uuid 9b189bbf-2581-4656-83da-12707f48dccc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:48:53 compute-0 nova_compute[254092]: 2025-11-25 16:48:53.903 254096 DEBUG nova.virt.libvirt.driver [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:48:53 compute-0 nova_compute[254092]: 2025-11-25 16:48:53.904 254096 DEBUG nova.virt.libvirt.driver [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Ensure instance console log exists: /var/lib/nova/instances/9b189bbf-2581-4656-83da-12707f48dccc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:48:53 compute-0 nova_compute[254092]: 2025-11-25 16:48:53.904 254096 DEBUG oslo_concurrency.lockutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:53 compute-0 nova_compute[254092]: 2025-11-25 16:48:53.904 254096 DEBUG oslo_concurrency.lockutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:53 compute-0 nova_compute[254092]: 2025-11-25 16:48:53.905 254096 DEBUG oslo_concurrency.lockutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:54 compute-0 nova_compute[254092]: 2025-11-25 16:48:54.052 254096 INFO nova.virt.libvirt.driver [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Deleting instance files /var/lib/nova/instances/8b20d119-17cb-4742-9223-90e5020f93a7_del
Nov 25 16:48:54 compute-0 nova_compute[254092]: 2025-11-25 16:48:54.053 254096 INFO nova.virt.libvirt.driver [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Deletion of /var/lib/nova/instances/8b20d119-17cb-4742-9223-90e5020f93a7_del complete
Nov 25 16:48:54 compute-0 nova_compute[254092]: 2025-11-25 16:48:54.123 254096 INFO nova.compute.manager [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Took 0.99 seconds to destroy the instance on the hypervisor.
Nov 25 16:48:54 compute-0 nova_compute[254092]: 2025-11-25 16:48:54.124 254096 DEBUG oslo.service.loopingcall [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:48:54 compute-0 nova_compute[254092]: 2025-11-25 16:48:54.124 254096 DEBUG nova.compute.manager [-] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:48:54 compute-0 nova_compute[254092]: 2025-11-25 16:48:54.124 254096 DEBUG nova.network.neutron [-] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:48:54 compute-0 ceph-mon[74985]: pgmap v1921: 321 pgs: 321 active+clean; 467 MiB data, 895 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 199 op/s
Nov 25 16:48:55 compute-0 ovn_controller[153477]: 2025-11-25T16:48:55Z|00099|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9d:43:a6 10.100.0.9
Nov 25 16:48:55 compute-0 ovn_controller[153477]: 2025-11-25T16:48:55Z|00100|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9d:43:a6 10.100.0.9
Nov 25 16:48:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 16:48:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/870670633' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:48:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 16:48:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/870670633' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:48:55 compute-0 nova_compute[254092]: 2025-11-25 16:48:55.487 254096 DEBUG nova.compute.manager [req-8533489e-35e2-4c13-a895-13c7755c36af req-bb2c631b-a734-481c-ade0-386ef500ffba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Received event network-vif-plugged-419102e4-bcb4-496b-b45c-fba9f5525746 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:48:55 compute-0 nova_compute[254092]: 2025-11-25 16:48:55.488 254096 DEBUG oslo_concurrency.lockutils [req-8533489e-35e2-4c13-a895-13c7755c36af req-bb2c631b-a734-481c-ade0-386ef500ffba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8b20d119-17cb-4742-9223-90e5020f93a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:55 compute-0 nova_compute[254092]: 2025-11-25 16:48:55.488 254096 DEBUG oslo_concurrency.lockutils [req-8533489e-35e2-4c13-a895-13c7755c36af req-bb2c631b-a734-481c-ade0-386ef500ffba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8b20d119-17cb-4742-9223-90e5020f93a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:55 compute-0 nova_compute[254092]: 2025-11-25 16:48:55.488 254096 DEBUG oslo_concurrency.lockutils [req-8533489e-35e2-4c13-a895-13c7755c36af req-bb2c631b-a734-481c-ade0-386ef500ffba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8b20d119-17cb-4742-9223-90e5020f93a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:55 compute-0 nova_compute[254092]: 2025-11-25 16:48:55.488 254096 DEBUG nova.compute.manager [req-8533489e-35e2-4c13-a895-13c7755c36af req-bb2c631b-a734-481c-ade0-386ef500ffba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] No waiting events found dispatching network-vif-plugged-419102e4-bcb4-496b-b45c-fba9f5525746 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:48:55 compute-0 nova_compute[254092]: 2025-11-25 16:48:55.488 254096 WARNING nova.compute.manager [req-8533489e-35e2-4c13-a895-13c7755c36af req-bb2c631b-a734-481c-ade0-386ef500ffba a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Received unexpected event network-vif-plugged-419102e4-bcb4-496b-b45c-fba9f5525746 for instance with vm_state active and task_state deleting.
Nov 25 16:48:55 compute-0 ovn_controller[153477]: 2025-11-25T16:48:55Z|00101|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7d:85:c3 10.100.0.14
Nov 25 16:48:55 compute-0 ovn_controller[153477]: 2025-11-25T16:48:55Z|00102|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7d:85:c3 10.100.0.14
Nov 25 16:48:55 compute-0 nova_compute[254092]: 2025-11-25 16:48:55.559 254096 DEBUG nova.network.neutron [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Successfully updated port: e7e60738-4c0d-46ae-a9b6-1477573be82f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:48:55 compute-0 nova_compute[254092]: 2025-11-25 16:48:55.571 254096 DEBUG oslo_concurrency.lockutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "refresh_cache-9b189bbf-2581-4656-83da-12707f48dccc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:48:55 compute-0 nova_compute[254092]: 2025-11-25 16:48:55.572 254096 DEBUG oslo_concurrency.lockutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquired lock "refresh_cache-9b189bbf-2581-4656-83da-12707f48dccc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:48:55 compute-0 nova_compute[254092]: 2025-11-25 16:48:55.572 254096 DEBUG nova.network.neutron [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:48:55 compute-0 nova_compute[254092]: 2025-11-25 16:48:55.610 254096 DEBUG nova.network.neutron [-] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:48:55 compute-0 nova_compute[254092]: 2025-11-25 16:48:55.635 254096 INFO nova.compute.manager [-] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Took 1.51 seconds to deallocate network for instance.
Nov 25 16:48:55 compute-0 nova_compute[254092]: 2025-11-25 16:48:55.653 254096 DEBUG nova.compute.manager [req-98c09282-a89c-426c-9d7b-67a2b708eece req-3ab71203-b8c5-4d3e-9dc9-3d7105c4593e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Received event network-changed-e7e60738-4c0d-46ae-a9b6-1477573be82f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:48:55 compute-0 nova_compute[254092]: 2025-11-25 16:48:55.654 254096 DEBUG nova.compute.manager [req-98c09282-a89c-426c-9d7b-67a2b708eece req-3ab71203-b8c5-4d3e-9dc9-3d7105c4593e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Refreshing instance network info cache due to event network-changed-e7e60738-4c0d-46ae-a9b6-1477573be82f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:48:55 compute-0 nova_compute[254092]: 2025-11-25 16:48:55.654 254096 DEBUG oslo_concurrency.lockutils [req-98c09282-a89c-426c-9d7b-67a2b708eece req-3ab71203-b8c5-4d3e-9dc9-3d7105c4593e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-9b189bbf-2581-4656-83da-12707f48dccc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:48:55 compute-0 nova_compute[254092]: 2025-11-25 16:48:55.679 254096 DEBUG oslo_concurrency.lockutils [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:55 compute-0 nova_compute[254092]: 2025-11-25 16:48:55.679 254096 DEBUG oslo_concurrency.lockutils [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:55 compute-0 nova_compute[254092]: 2025-11-25 16:48:55.772 254096 DEBUG nova.network.neutron [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:48:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1922: 321 pgs: 321 active+clean; 491 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 6.5 MiB/s wr, 352 op/s
Nov 25 16:48:55 compute-0 nova_compute[254092]: 2025-11-25 16:48:55.846 254096 DEBUG oslo_concurrency.processutils [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/870670633' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:48:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/870670633' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:48:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:48:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:48:56 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1216732842' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:48:56 compute-0 nova_compute[254092]: 2025-11-25 16:48:56.326 254096 DEBUG oslo_concurrency.processutils [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:56 compute-0 nova_compute[254092]: 2025-11-25 16:48:56.333 254096 DEBUG nova.compute.provider_tree [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:48:56 compute-0 nova_compute[254092]: 2025-11-25 16:48:56.348 254096 DEBUG nova.scheduler.client.report [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:48:56 compute-0 nova_compute[254092]: 2025-11-25 16:48:56.366 254096 DEBUG oslo_concurrency.lockutils [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:56 compute-0 nova_compute[254092]: 2025-11-25 16:48:56.409 254096 INFO nova.scheduler.client.report [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Deleted allocations for instance 8b20d119-17cb-4742-9223-90e5020f93a7
Nov 25 16:48:56 compute-0 nova_compute[254092]: 2025-11-25 16:48:56.496 254096 DEBUG oslo_concurrency.lockutils [None req-367fead2-d627-4548-bc3f-cc7d34d217ac 7a137feb008c49e092cbb94106526835 56790f54314e4087abb5da8030b17666 - - default default] Lock "8b20d119-17cb-4742-9223-90e5020f93a7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.372s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:56 compute-0 ceph-mon[74985]: pgmap v1922: 321 pgs: 321 active+clean; 491 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 6.5 MiB/s wr, 352 op/s
Nov 25 16:48:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1216732842' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:48:57 compute-0 nova_compute[254092]: 2025-11-25 16:48:57.319 254096 DEBUG nova.network.neutron [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Updating instance_info_cache with network_info: [{"id": "e7e60738-4c0d-46ae-a9b6-1477573be82f", "address": "fa:16:3e:f4:8d:f7", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7e60738-4c", "ovs_interfaceid": "e7e60738-4c0d-46ae-a9b6-1477573be82f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:48:57 compute-0 nova_compute[254092]: 2025-11-25 16:48:57.335 254096 DEBUG oslo_concurrency.lockutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Releasing lock "refresh_cache-9b189bbf-2581-4656-83da-12707f48dccc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:48:57 compute-0 nova_compute[254092]: 2025-11-25 16:48:57.336 254096 DEBUG nova.compute.manager [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Instance network_info: |[{"id": "e7e60738-4c0d-46ae-a9b6-1477573be82f", "address": "fa:16:3e:f4:8d:f7", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7e60738-4c", "ovs_interfaceid": "e7e60738-4c0d-46ae-a9b6-1477573be82f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:48:57 compute-0 nova_compute[254092]: 2025-11-25 16:48:57.337 254096 DEBUG oslo_concurrency.lockutils [req-98c09282-a89c-426c-9d7b-67a2b708eece req-3ab71203-b8c5-4d3e-9dc9-3d7105c4593e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-9b189bbf-2581-4656-83da-12707f48dccc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:48:57 compute-0 nova_compute[254092]: 2025-11-25 16:48:57.337 254096 DEBUG nova.network.neutron [req-98c09282-a89c-426c-9d7b-67a2b708eece req-3ab71203-b8c5-4d3e-9dc9-3d7105c4593e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Refreshing network info cache for port e7e60738-4c0d-46ae-a9b6-1477573be82f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:48:57 compute-0 nova_compute[254092]: 2025-11-25 16:48:57.341 254096 DEBUG nova.virt.libvirt.driver [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Start _get_guest_xml network_info=[{"id": "e7e60738-4c0d-46ae-a9b6-1477573be82f", "address": "fa:16:3e:f4:8d:f7", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7e60738-4c", "ovs_interfaceid": "e7e60738-4c0d-46ae-a9b6-1477573be82f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:48:57 compute-0 nova_compute[254092]: 2025-11-25 16:48:57.346 254096 WARNING nova.virt.libvirt.driver [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:48:57 compute-0 nova_compute[254092]: 2025-11-25 16:48:57.351 254096 DEBUG nova.virt.libvirt.host [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:48:57 compute-0 nova_compute[254092]: 2025-11-25 16:48:57.351 254096 DEBUG nova.virt.libvirt.host [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:48:57 compute-0 nova_compute[254092]: 2025-11-25 16:48:57.357 254096 DEBUG nova.virt.libvirt.host [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:48:57 compute-0 nova_compute[254092]: 2025-11-25 16:48:57.357 254096 DEBUG nova.virt.libvirt.host [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:48:57 compute-0 nova_compute[254092]: 2025-11-25 16:48:57.358 254096 DEBUG nova.virt.libvirt.driver [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:48:57 compute-0 nova_compute[254092]: 2025-11-25 16:48:57.358 254096 DEBUG nova.virt.hardware [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:48:57 compute-0 nova_compute[254092]: 2025-11-25 16:48:57.359 254096 DEBUG nova.virt.hardware [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:48:57 compute-0 nova_compute[254092]: 2025-11-25 16:48:57.359 254096 DEBUG nova.virt.hardware [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:48:57 compute-0 nova_compute[254092]: 2025-11-25 16:48:57.359 254096 DEBUG nova.virt.hardware [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:48:57 compute-0 nova_compute[254092]: 2025-11-25 16:48:57.359 254096 DEBUG nova.virt.hardware [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:48:57 compute-0 nova_compute[254092]: 2025-11-25 16:48:57.360 254096 DEBUG nova.virt.hardware [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:48:57 compute-0 nova_compute[254092]: 2025-11-25 16:48:57.360 254096 DEBUG nova.virt.hardware [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:48:57 compute-0 nova_compute[254092]: 2025-11-25 16:48:57.360 254096 DEBUG nova.virt.hardware [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:48:57 compute-0 nova_compute[254092]: 2025-11-25 16:48:57.360 254096 DEBUG nova.virt.hardware [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:48:57 compute-0 nova_compute[254092]: 2025-11-25 16:48:57.361 254096 DEBUG nova.virt.hardware [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:48:57 compute-0 nova_compute[254092]: 2025-11-25 16:48:57.361 254096 DEBUG nova.virt.hardware [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:48:57 compute-0 nova_compute[254092]: 2025-11-25 16:48:57.365 254096 DEBUG oslo_concurrency.processutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:57 compute-0 nova_compute[254092]: 2025-11-25 16:48:57.721 254096 DEBUG nova.compute.manager [req-b7884fa5-2768-4534-a684-6950f234c848 req-a1ac26cd-9ce8-44e5-9e96-333fc89aa09e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Received event network-vif-deleted-419102e4-bcb4-496b-b45c-fba9f5525746 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:48:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1923: 321 pgs: 321 active+clean; 489 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 6.7 MiB/s wr, 326 op/s
Nov 25 16:48:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:48:57 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3030378047' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:48:57 compute-0 nova_compute[254092]: 2025-11-25 16:48:57.866 254096 DEBUG oslo_concurrency.processutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:57 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3030378047' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:48:57 compute-0 nova_compute[254092]: 2025-11-25 16:48:57.923 254096 DEBUG nova.storage.rbd_utils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image 9b189bbf-2581-4656-83da-12707f48dccc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:57 compute-0 nova_compute[254092]: 2025-11-25 16:48:57.928 254096 DEBUG oslo_concurrency.processutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:58 compute-0 nova_compute[254092]: 2025-11-25 16:48:58.269 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089323.2686913, 4677de7c-6625-4c98-a065-214341d8bfea => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:48:58 compute-0 nova_compute[254092]: 2025-11-25 16:48:58.270 254096 INFO nova.compute.manager [-] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] VM Stopped (Lifecycle Event)
Nov 25 16:48:58 compute-0 nova_compute[254092]: 2025-11-25 16:48:58.292 254096 DEBUG nova.compute.manager [None req-656ea098-bdf1-4d6a-978f-720237bd02b4 - - - - - -] [instance: 4677de7c-6625-4c98-a065-214341d8bfea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:48:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:48:58 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/266950693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:48:58 compute-0 nova_compute[254092]: 2025-11-25 16:48:58.421 254096 DEBUG oslo_concurrency.processutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:58 compute-0 nova_compute[254092]: 2025-11-25 16:48:58.423 254096 DEBUG nova.virt.libvirt.vif [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:48:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1282242960',display_name='tempest-ServerActionsTestOtherB-server-1282242960',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1282242960',id=99,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fbf763b31dad40d6b0d7285dc017dd89',ramdisk_id='',reservation_id='r-ao13046e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1019246920',owner_user_name='tempest-ServerActionsTestOtherB-1019246920-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:48:52Z,user_data=None,user_id='23f6db77558a477bbd8b8b46cb4107d1',uuid=9b189bbf-2581-4656-83da-12707f48dccc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e7e60738-4c0d-46ae-a9b6-1477573be82f", "address": "fa:16:3e:f4:8d:f7", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7e60738-4c", "ovs_interfaceid": "e7e60738-4c0d-46ae-a9b6-1477573be82f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:48:58 compute-0 nova_compute[254092]: 2025-11-25 16:48:58.424 254096 DEBUG nova.network.os_vif_util [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Converting VIF {"id": "e7e60738-4c0d-46ae-a9b6-1477573be82f", "address": "fa:16:3e:f4:8d:f7", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7e60738-4c", "ovs_interfaceid": "e7e60738-4c0d-46ae-a9b6-1477573be82f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:48:58 compute-0 nova_compute[254092]: 2025-11-25 16:48:58.425 254096 DEBUG nova.network.os_vif_util [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:8d:f7,bridge_name='br-int',has_traffic_filtering=True,id=e7e60738-4c0d-46ae-a9b6-1477573be82f,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7e60738-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:48:58 compute-0 nova_compute[254092]: 2025-11-25 16:48:58.427 254096 DEBUG nova.objects.instance [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9b189bbf-2581-4656-83da-12707f48dccc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:48:58 compute-0 nova_compute[254092]: 2025-11-25 16:48:58.432 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:58 compute-0 nova_compute[254092]: 2025-11-25 16:48:58.445 254096 DEBUG nova.virt.libvirt.driver [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:48:58 compute-0 nova_compute[254092]:   <uuid>9b189bbf-2581-4656-83da-12707f48dccc</uuid>
Nov 25 16:48:58 compute-0 nova_compute[254092]:   <name>instance-00000063</name>
Nov 25 16:48:58 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:48:58 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:48:58 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:48:58 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:       <nova:name>tempest-ServerActionsTestOtherB-server-1282242960</nova:name>
Nov 25 16:48:58 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:48:57</nova:creationTime>
Nov 25 16:48:58 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:48:58 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:48:58 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:48:58 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:48:58 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:48:58 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:48:58 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:48:58 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:48:58 compute-0 nova_compute[254092]:         <nova:user uuid="23f6db77558a477bbd8b8b46cb4107d1">tempest-ServerActionsTestOtherB-1019246920-project-member</nova:user>
Nov 25 16:48:58 compute-0 nova_compute[254092]:         <nova:project uuid="fbf763b31dad40d6b0d7285dc017dd89">tempest-ServerActionsTestOtherB-1019246920</nova:project>
Nov 25 16:48:58 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:48:58 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:48:58 compute-0 nova_compute[254092]:         <nova:port uuid="e7e60738-4c0d-46ae-a9b6-1477573be82f">
Nov 25 16:48:58 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:48:58 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:48:58 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:48:58 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:48:58 compute-0 nova_compute[254092]:     <system>
Nov 25 16:48:58 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:48:58 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:48:58 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:48:58 compute-0 nova_compute[254092]:       <entry name="serial">9b189bbf-2581-4656-83da-12707f48dccc</entry>
Nov 25 16:48:58 compute-0 nova_compute[254092]:       <entry name="uuid">9b189bbf-2581-4656-83da-12707f48dccc</entry>
Nov 25 16:48:58 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     </system>
Nov 25 16:48:58 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:48:58 compute-0 nova_compute[254092]:   <os>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:   </os>
Nov 25 16:48:58 compute-0 nova_compute[254092]:   <features>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:   </features>
Nov 25 16:48:58 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:48:58 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:48:58 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:48:58 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:48:58 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:48:58 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/9b189bbf-2581-4656-83da-12707f48dccc_disk">
Nov 25 16:48:58 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:       </source>
Nov 25 16:48:58 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:48:58 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:48:58 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:48:58 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/9b189bbf-2581-4656-83da-12707f48dccc_disk.config">
Nov 25 16:48:58 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:       </source>
Nov 25 16:48:58 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:48:58 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:48:58 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:48:58 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:f4:8d:f7"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:       <target dev="tape7e60738-4c"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:48:58 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/9b189bbf-2581-4656-83da-12707f48dccc/console.log" append="off"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     <video>
Nov 25 16:48:58 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     </video>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:48:58 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:48:58 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:48:58 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:48:58 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:48:58 compute-0 nova_compute[254092]: </domain>
Nov 25 16:48:58 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:48:58 compute-0 nova_compute[254092]: 2025-11-25 16:48:58.447 254096 DEBUG nova.compute.manager [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Preparing to wait for external event network-vif-plugged-e7e60738-4c0d-46ae-a9b6-1477573be82f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:48:58 compute-0 nova_compute[254092]: 2025-11-25 16:48:58.447 254096 DEBUG oslo_concurrency.lockutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "9b189bbf-2581-4656-83da-12707f48dccc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:58 compute-0 nova_compute[254092]: 2025-11-25 16:48:58.448 254096 DEBUG oslo_concurrency.lockutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "9b189bbf-2581-4656-83da-12707f48dccc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:58 compute-0 nova_compute[254092]: 2025-11-25 16:48:58.448 254096 DEBUG oslo_concurrency.lockutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "9b189bbf-2581-4656-83da-12707f48dccc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:58 compute-0 nova_compute[254092]: 2025-11-25 16:48:58.449 254096 DEBUG nova.virt.libvirt.vif [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:48:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1282242960',display_name='tempest-ServerActionsTestOtherB-server-1282242960',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1282242960',id=99,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fbf763b31dad40d6b0d7285dc017dd89',ramdisk_id='',reservation_id='r-ao13046e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1019246920',owner_user_name='tempest-ServerActionsTestOtherB-1019246920-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:48:52Z,user_data=None,user_id='23f6db77558a477bbd8b8b46cb4107d1',uuid=9b189bbf-2581-4656-83da-12707f48dccc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e7e60738-4c0d-46ae-a9b6-1477573be82f", "address": "fa:16:3e:f4:8d:f7", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7e60738-4c", "ovs_interfaceid": "e7e60738-4c0d-46ae-a9b6-1477573be82f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:48:58 compute-0 nova_compute[254092]: 2025-11-25 16:48:58.449 254096 DEBUG nova.network.os_vif_util [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Converting VIF {"id": "e7e60738-4c0d-46ae-a9b6-1477573be82f", "address": "fa:16:3e:f4:8d:f7", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7e60738-4c", "ovs_interfaceid": "e7e60738-4c0d-46ae-a9b6-1477573be82f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:48:58 compute-0 nova_compute[254092]: 2025-11-25 16:48:58.450 254096 DEBUG nova.network.os_vif_util [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:8d:f7,bridge_name='br-int',has_traffic_filtering=True,id=e7e60738-4c0d-46ae-a9b6-1477573be82f,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7e60738-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:48:58 compute-0 nova_compute[254092]: 2025-11-25 16:48:58.450 254096 DEBUG os_vif [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:8d:f7,bridge_name='br-int',has_traffic_filtering=True,id=e7e60738-4c0d-46ae-a9b6-1477573be82f,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7e60738-4c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:48:58 compute-0 nova_compute[254092]: 2025-11-25 16:48:58.451 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:58 compute-0 nova_compute[254092]: 2025-11-25 16:48:58.451 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:48:58 compute-0 nova_compute[254092]: 2025-11-25 16:48:58.452 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:48:58 compute-0 nova_compute[254092]: 2025-11-25 16:48:58.454 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:58 compute-0 nova_compute[254092]: 2025-11-25 16:48:58.455 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape7e60738-4c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:48:58 compute-0 nova_compute[254092]: 2025-11-25 16:48:58.455 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape7e60738-4c, col_values=(('external_ids', {'iface-id': 'e7e60738-4c0d-46ae-a9b6-1477573be82f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f4:8d:f7', 'vm-uuid': '9b189bbf-2581-4656-83da-12707f48dccc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:48:58 compute-0 NetworkManager[48891]: <info>  [1764089338.4576] manager: (tape7e60738-4c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/392)
Nov 25 16:48:58 compute-0 nova_compute[254092]: 2025-11-25 16:48:58.460 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:48:58 compute-0 nova_compute[254092]: 2025-11-25 16:48:58.464 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:58 compute-0 nova_compute[254092]: 2025-11-25 16:48:58.466 254096 INFO os_vif [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:8d:f7,bridge_name='br-int',has_traffic_filtering=True,id=e7e60738-4c0d-46ae-a9b6-1477573be82f,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7e60738-4c')
Nov 25 16:48:58 compute-0 nova_compute[254092]: 2025-11-25 16:48:58.524 254096 DEBUG nova.virt.libvirt.driver [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:48:58 compute-0 nova_compute[254092]: 2025-11-25 16:48:58.525 254096 DEBUG nova.virt.libvirt.driver [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:48:58 compute-0 nova_compute[254092]: 2025-11-25 16:48:58.525 254096 DEBUG nova.virt.libvirt.driver [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] No VIF found with MAC fa:16:3e:f4:8d:f7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:48:58 compute-0 nova_compute[254092]: 2025-11-25 16:48:58.526 254096 INFO nova.virt.libvirt.driver [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Using config drive
Nov 25 16:48:58 compute-0 nova_compute[254092]: 2025-11-25 16:48:58.553 254096 DEBUG nova.storage.rbd_utils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image 9b189bbf-2581-4656-83da-12707f48dccc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:58 compute-0 nova_compute[254092]: 2025-11-25 16:48:58.659 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:58 compute-0 ceph-mon[74985]: pgmap v1923: 321 pgs: 321 active+clean; 489 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 6.7 MiB/s wr, 326 op/s
Nov 25 16:48:58 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/266950693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:48:59 compute-0 nova_compute[254092]: 2025-11-25 16:48:59.003 254096 INFO nova.virt.libvirt.driver [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Creating config drive at /var/lib/nova/instances/9b189bbf-2581-4656-83da-12707f48dccc/disk.config
Nov 25 16:48:59 compute-0 nova_compute[254092]: 2025-11-25 16:48:59.008 254096 DEBUG oslo_concurrency.processutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9b189bbf-2581-4656-83da-12707f48dccc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6au5smbz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:59 compute-0 nova_compute[254092]: 2025-11-25 16:48:59.167 254096 DEBUG oslo_concurrency.processutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9b189bbf-2581-4656-83da-12707f48dccc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6au5smbz" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:59 compute-0 nova_compute[254092]: 2025-11-25 16:48:59.195 254096 DEBUG nova.storage.rbd_utils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image 9b189bbf-2581-4656-83da-12707f48dccc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:48:59 compute-0 nova_compute[254092]: 2025-11-25 16:48:59.200 254096 DEBUG oslo_concurrency.processutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9b189bbf-2581-4656-83da-12707f48dccc/disk.config 9b189bbf-2581-4656-83da-12707f48dccc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:48:59 compute-0 nova_compute[254092]: 2025-11-25 16:48:59.248 254096 DEBUG nova.network.neutron [req-98c09282-a89c-426c-9d7b-67a2b708eece req-3ab71203-b8c5-4d3e-9dc9-3d7105c4593e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Updated VIF entry in instance network info cache for port e7e60738-4c0d-46ae-a9b6-1477573be82f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:48:59 compute-0 nova_compute[254092]: 2025-11-25 16:48:59.250 254096 DEBUG nova.network.neutron [req-98c09282-a89c-426c-9d7b-67a2b708eece req-3ab71203-b8c5-4d3e-9dc9-3d7105c4593e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Updating instance_info_cache with network_info: [{"id": "e7e60738-4c0d-46ae-a9b6-1477573be82f", "address": "fa:16:3e:f4:8d:f7", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7e60738-4c", "ovs_interfaceid": "e7e60738-4c0d-46ae-a9b6-1477573be82f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:48:59 compute-0 nova_compute[254092]: 2025-11-25 16:48:59.263 254096 DEBUG oslo_concurrency.lockutils [req-98c09282-a89c-426c-9d7b-67a2b708eece req-3ab71203-b8c5-4d3e-9dc9-3d7105c4593e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-9b189bbf-2581-4656-83da-12707f48dccc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:48:59 compute-0 nova_compute[254092]: 2025-11-25 16:48:59.394 254096 DEBUG oslo_concurrency.processutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9b189bbf-2581-4656-83da-12707f48dccc/disk.config 9b189bbf-2581-4656-83da-12707f48dccc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:48:59 compute-0 nova_compute[254092]: 2025-11-25 16:48:59.396 254096 INFO nova.virt.libvirt.driver [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Deleting local config drive /var/lib/nova/instances/9b189bbf-2581-4656-83da-12707f48dccc/disk.config because it was imported into RBD.
Nov 25 16:48:59 compute-0 kernel: tape7e60738-4c: entered promiscuous mode
Nov 25 16:48:59 compute-0 NetworkManager[48891]: <info>  [1764089339.4423] manager: (tape7e60738-4c): new Tun device (/org/freedesktop/NetworkManager/Devices/393)
Nov 25 16:48:59 compute-0 ovn_controller[153477]: 2025-11-25T16:48:59Z|00934|binding|INFO|Claiming lport e7e60738-4c0d-46ae-a9b6-1477573be82f for this chassis.
Nov 25 16:48:59 compute-0 ovn_controller[153477]: 2025-11-25T16:48:59Z|00935|binding|INFO|e7e60738-4c0d-46ae-a9b6-1477573be82f: Claiming fa:16:3e:f4:8d:f7 10.100.0.10
Nov 25 16:48:59 compute-0 nova_compute[254092]: 2025-11-25 16:48:59.448 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:59.453 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:8d:f7 10.100.0.10'], port_security=['fa:16:3e:f4:8d:f7 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '9b189bbf-2581-4656-83da-12707f48dccc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fbf763b31dad40d6b0d7285dc017dd89', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ae4b77d0-13f0-4078-80f7-8ecf8872e648', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2d4322c-9d6a-4a56-a005-6db2d2591dbd, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=e7e60738-4c0d-46ae-a9b6-1477573be82f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:48:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:59.454 163338 INFO neutron.agent.ovn.metadata.agent [-] Port e7e60738-4c0d-46ae-a9b6-1477573be82f in datapath 34b8c77e-8369-4eab-a81e-0825e5fa2919 bound to our chassis
Nov 25 16:48:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:59.456 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 34b8c77e-8369-4eab-a81e-0825e5fa2919
Nov 25 16:48:59 compute-0 ovn_controller[153477]: 2025-11-25T16:48:59Z|00936|binding|INFO|Setting lport e7e60738-4c0d-46ae-a9b6-1477573be82f ovn-installed in OVS
Nov 25 16:48:59 compute-0 ovn_controller[153477]: 2025-11-25T16:48:59Z|00937|binding|INFO|Setting lport e7e60738-4c0d-46ae-a9b6-1477573be82f up in Southbound
Nov 25 16:48:59 compute-0 nova_compute[254092]: 2025-11-25 16:48:59.472 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:59.475 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f10007b0-c016-469a-90e9-bc60c2a2758c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:59 compute-0 systemd-udevd[352974]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:48:59 compute-0 systemd-machined[216343]: New machine qemu-121-instance-00000063.
Nov 25 16:48:59 compute-0 NetworkManager[48891]: <info>  [1764089339.4962] device (tape7e60738-4c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:48:59 compute-0 NetworkManager[48891]: <info>  [1764089339.4970] device (tape7e60738-4c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:48:59 compute-0 systemd[1]: Started Virtual Machine qemu-121-instance-00000063.
Nov 25 16:48:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:59.508 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e1e3c497-76a8-4f2c-a45d-0f0db1454a16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:59.512 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[204c05c4-51ac-4910-bdf6-fee952e6705e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:59.536 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[da2a4c4b-516c-4c34-88b4-ad10c44a6989]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:59.553 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c6015a2a-f7b5-48e1-a57f-a295a437bec0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap34b8c77e-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:49:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 252], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570033, 'reachable_time': 42096, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352984, 'error': None, 'target': 'ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:59.571 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5de19d5d-4cea-4d2c-a4fe-fd25e15fa083]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap34b8c77e-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570043, 'tstamp': 570043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352987, 'error': None, 'target': 'ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap34b8c77e-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570046, 'tstamp': 570046}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352987, 'error': None, 'target': 'ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:48:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:59.572 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34b8c77e-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:48:59 compute-0 nova_compute[254092]: 2025-11-25 16:48:59.574 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:48:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:59.575 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap34b8c77e-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:48:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:59.576 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:48:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:59.576 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap34b8c77e-80, col_values=(('external_ids', {'iface-id': '031633f4-be6f-41a4-9c41-8e90000f1a4f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:48:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:48:59.576 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:48:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1924: 321 pgs: 321 active+clean; 489 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 6.0 MiB/s wr, 303 op/s
Nov 25 16:48:59 compute-0 nova_compute[254092]: 2025-11-25 16:48:59.950 254096 DEBUG nova.compute.manager [req-f801d3d7-e0a3-4e5f-b497-8a24d98fe5ce req-cd98b7e8-d7b7-4e7a-9d84-feabe9d66968 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Received event network-vif-plugged-e7e60738-4c0d-46ae-a9b6-1477573be82f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:48:59 compute-0 nova_compute[254092]: 2025-11-25 16:48:59.951 254096 DEBUG oslo_concurrency.lockutils [req-f801d3d7-e0a3-4e5f-b497-8a24d98fe5ce req-cd98b7e8-d7b7-4e7a-9d84-feabe9d66968 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "9b189bbf-2581-4656-83da-12707f48dccc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:48:59 compute-0 nova_compute[254092]: 2025-11-25 16:48:59.951 254096 DEBUG oslo_concurrency.lockutils [req-f801d3d7-e0a3-4e5f-b497-8a24d98fe5ce req-cd98b7e8-d7b7-4e7a-9d84-feabe9d66968 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9b189bbf-2581-4656-83da-12707f48dccc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:48:59 compute-0 nova_compute[254092]: 2025-11-25 16:48:59.952 254096 DEBUG oslo_concurrency.lockutils [req-f801d3d7-e0a3-4e5f-b497-8a24d98fe5ce req-cd98b7e8-d7b7-4e7a-9d84-feabe9d66968 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9b189bbf-2581-4656-83da-12707f48dccc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:48:59 compute-0 nova_compute[254092]: 2025-11-25 16:48:59.952 254096 DEBUG nova.compute.manager [req-f801d3d7-e0a3-4e5f-b497-8a24d98fe5ce req-cd98b7e8-d7b7-4e7a-9d84-feabe9d66968 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Processing event network-vif-plugged-e7e60738-4c0d-46ae-a9b6-1477573be82f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:49:00 compute-0 nova_compute[254092]: 2025-11-25 16:49:00.016 254096 INFO nova.compute.manager [None req-8dd18f63-bd90-48a8-abad-43109304b6de 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Get console output
Nov 25 16:49:00 compute-0 nova_compute[254092]: 2025-11-25 16:49:00.023 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 16:49:00 compute-0 nova_compute[254092]: 2025-11-25 16:49:00.055 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089340.0546904, 9b189bbf-2581-4656-83da-12707f48dccc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:49:00 compute-0 nova_compute[254092]: 2025-11-25 16:49:00.056 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] VM Started (Lifecycle Event)
Nov 25 16:49:00 compute-0 nova_compute[254092]: 2025-11-25 16:49:00.058 254096 DEBUG nova.compute.manager [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:49:00 compute-0 nova_compute[254092]: 2025-11-25 16:49:00.081 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:49:00 compute-0 nova_compute[254092]: 2025-11-25 16:49:00.083 254096 DEBUG nova.virt.libvirt.driver [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:49:00 compute-0 nova_compute[254092]: 2025-11-25 16:49:00.086 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:49:00 compute-0 nova_compute[254092]: 2025-11-25 16:49:00.093 254096 INFO nova.virt.libvirt.driver [-] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Instance spawned successfully.
Nov 25 16:49:00 compute-0 nova_compute[254092]: 2025-11-25 16:49:00.094 254096 DEBUG nova.virt.libvirt.driver [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:49:00 compute-0 nova_compute[254092]: 2025-11-25 16:49:00.120 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:49:00 compute-0 nova_compute[254092]: 2025-11-25 16:49:00.121 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089340.0559328, 9b189bbf-2581-4656-83da-12707f48dccc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:49:00 compute-0 nova_compute[254092]: 2025-11-25 16:49:00.121 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] VM Paused (Lifecycle Event)
Nov 25 16:49:00 compute-0 nova_compute[254092]: 2025-11-25 16:49:00.128 254096 DEBUG nova.virt.libvirt.driver [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:49:00 compute-0 nova_compute[254092]: 2025-11-25 16:49:00.128 254096 DEBUG nova.virt.libvirt.driver [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:49:00 compute-0 nova_compute[254092]: 2025-11-25 16:49:00.128 254096 DEBUG nova.virt.libvirt.driver [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:49:00 compute-0 nova_compute[254092]: 2025-11-25 16:49:00.129 254096 DEBUG nova.virt.libvirt.driver [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:49:00 compute-0 nova_compute[254092]: 2025-11-25 16:49:00.129 254096 DEBUG nova.virt.libvirt.driver [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:49:00 compute-0 nova_compute[254092]: 2025-11-25 16:49:00.129 254096 DEBUG nova.virt.libvirt.driver [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:49:00 compute-0 nova_compute[254092]: 2025-11-25 16:49:00.136 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:49:00 compute-0 nova_compute[254092]: 2025-11-25 16:49:00.139 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089340.0819197, 9b189bbf-2581-4656-83da-12707f48dccc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:49:00 compute-0 nova_compute[254092]: 2025-11-25 16:49:00.139 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] VM Resumed (Lifecycle Event)
Nov 25 16:49:00 compute-0 nova_compute[254092]: 2025-11-25 16:49:00.165 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:49:00 compute-0 nova_compute[254092]: 2025-11-25 16:49:00.169 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:49:00 compute-0 nova_compute[254092]: 2025-11-25 16:49:00.192 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:49:00 compute-0 nova_compute[254092]: 2025-11-25 16:49:00.230 254096 INFO nova.compute.manager [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Took 7.51 seconds to spawn the instance on the hypervisor.
Nov 25 16:49:00 compute-0 nova_compute[254092]: 2025-11-25 16:49:00.231 254096 DEBUG nova.compute.manager [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:49:00 compute-0 nova_compute[254092]: 2025-11-25 16:49:00.309 254096 INFO nova.compute.manager [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Took 9.48 seconds to build instance.
Nov 25 16:49:00 compute-0 nova_compute[254092]: 2025-11-25 16:49:00.326 254096 DEBUG oslo_concurrency.lockutils [None req-1b26fb75-d87d-4cab-93d1-6e39e157503b 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "9b189bbf-2581-4656-83da-12707f48dccc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.550s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:00 compute-0 nova_compute[254092]: 2025-11-25 16:49:00.589 254096 DEBUG oslo_concurrency.lockutils [None req-a6bf2ab6-32b7-47ae-9b4e-696207ff6578 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "2e848add-8417-4307-8b01-f0d1c1a76cea" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:00 compute-0 nova_compute[254092]: 2025-11-25 16:49:00.589 254096 DEBUG oslo_concurrency.lockutils [None req-a6bf2ab6-32b7-47ae-9b4e-696207ff6578 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:00 compute-0 nova_compute[254092]: 2025-11-25 16:49:00.589 254096 INFO nova.compute.manager [None req-a6bf2ab6-32b7-47ae-9b4e-696207ff6578 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Rebooting instance
Nov 25 16:49:00 compute-0 nova_compute[254092]: 2025-11-25 16:49:00.602 254096 DEBUG oslo_concurrency.lockutils [None req-a6bf2ab6-32b7-47ae-9b4e-696207ff6578 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "refresh_cache-2e848add-8417-4307-8b01-f0d1c1a76cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:49:00 compute-0 nova_compute[254092]: 2025-11-25 16:49:00.602 254096 DEBUG oslo_concurrency.lockutils [None req-a6bf2ab6-32b7-47ae-9b4e-696207ff6578 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquired lock "refresh_cache-2e848add-8417-4307-8b01-f0d1c1a76cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:49:00 compute-0 nova_compute[254092]: 2025-11-25 16:49:00.602 254096 DEBUG nova.network.neutron [None req-a6bf2ab6-32b7-47ae-9b4e-696207ff6578 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:49:00 compute-0 ceph-mon[74985]: pgmap v1924: 321 pgs: 321 active+clean; 489 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 6.0 MiB/s wr, 303 op/s
Nov 25 16:49:01 compute-0 anacron[126663]: Job `cron.monthly' started
Nov 25 16:49:01 compute-0 anacron[126663]: Job `cron.monthly' terminated
Nov 25 16:49:01 compute-0 anacron[126663]: Normal exit (3 jobs run)
Nov 25 16:49:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:49:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1925: 321 pgs: 321 active+clean; 500 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 6.1 MiB/s wr, 340 op/s
Nov 25 16:49:02 compute-0 nova_compute[254092]: 2025-11-25 16:49:02.181 254096 DEBUG nova.compute.manager [req-492ebfd6-6a63-4a9a-88a3-75deacebb384 req-c51acec6-a3a0-4d2b-8124-c8c515e9d4bf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Received event network-vif-plugged-e7e60738-4c0d-46ae-a9b6-1477573be82f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:49:02 compute-0 nova_compute[254092]: 2025-11-25 16:49:02.181 254096 DEBUG oslo_concurrency.lockutils [req-492ebfd6-6a63-4a9a-88a3-75deacebb384 req-c51acec6-a3a0-4d2b-8124-c8c515e9d4bf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "9b189bbf-2581-4656-83da-12707f48dccc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:02 compute-0 nova_compute[254092]: 2025-11-25 16:49:02.182 254096 DEBUG oslo_concurrency.lockutils [req-492ebfd6-6a63-4a9a-88a3-75deacebb384 req-c51acec6-a3a0-4d2b-8124-c8c515e9d4bf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9b189bbf-2581-4656-83da-12707f48dccc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:02 compute-0 nova_compute[254092]: 2025-11-25 16:49:02.182 254096 DEBUG oslo_concurrency.lockutils [req-492ebfd6-6a63-4a9a-88a3-75deacebb384 req-c51acec6-a3a0-4d2b-8124-c8c515e9d4bf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9b189bbf-2581-4656-83da-12707f48dccc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:02 compute-0 nova_compute[254092]: 2025-11-25 16:49:02.182 254096 DEBUG nova.compute.manager [req-492ebfd6-6a63-4a9a-88a3-75deacebb384 req-c51acec6-a3a0-4d2b-8124-c8c515e9d4bf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] No waiting events found dispatching network-vif-plugged-e7e60738-4c0d-46ae-a9b6-1477573be82f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:49:02 compute-0 nova_compute[254092]: 2025-11-25 16:49:02.182 254096 WARNING nova.compute.manager [req-492ebfd6-6a63-4a9a-88a3-75deacebb384 req-c51acec6-a3a0-4d2b-8124-c8c515e9d4bf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Received unexpected event network-vif-plugged-e7e60738-4c0d-46ae-a9b6-1477573be82f for instance with vm_state active and task_state None.
Nov 25 16:49:02 compute-0 nova_compute[254092]: 2025-11-25 16:49:02.502 254096 INFO nova.compute.manager [None req-1d5dcb97-81d5-4a32-92d8-10a29cd5ec87 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Pausing
Nov 25 16:49:02 compute-0 nova_compute[254092]: 2025-11-25 16:49:02.503 254096 DEBUG nova.objects.instance [None req-1d5dcb97-81d5-4a32-92d8-10a29cd5ec87 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lazy-loading 'flavor' on Instance uuid 9b189bbf-2581-4656-83da-12707f48dccc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:49:02 compute-0 nova_compute[254092]: 2025-11-25 16:49:02.526 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089342.526359, 9b189bbf-2581-4656-83da-12707f48dccc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:49:02 compute-0 nova_compute[254092]: 2025-11-25 16:49:02.526 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] VM Paused (Lifecycle Event)
Nov 25 16:49:02 compute-0 nova_compute[254092]: 2025-11-25 16:49:02.528 254096 DEBUG nova.compute.manager [None req-1d5dcb97-81d5-4a32-92d8-10a29cd5ec87 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:49:02 compute-0 nova_compute[254092]: 2025-11-25 16:49:02.549 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:49:02 compute-0 nova_compute[254092]: 2025-11-25 16:49:02.552 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:49:02 compute-0 nova_compute[254092]: 2025-11-25 16:49:02.574 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] During sync_power_state the instance has a pending task (pausing). Skip.
Nov 25 16:49:02 compute-0 ceph-mon[74985]: pgmap v1925: 321 pgs: 321 active+clean; 500 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 6.1 MiB/s wr, 340 op/s
Nov 25 16:49:03 compute-0 ovn_controller[153477]: 2025-11-25T16:49:03Z|00938|binding|INFO|Releasing lport 82c4ad4d-388e-4238-98b3-8d58946e7829 from this chassis (sb_readonly=0)
Nov 25 16:49:03 compute-0 ovn_controller[153477]: 2025-11-25T16:49:03Z|00939|binding|INFO|Releasing lport 031633f4-be6f-41a4-9c41-8e90000f1a4f from this chassis (sb_readonly=0)
Nov 25 16:49:03 compute-0 nova_compute[254092]: 2025-11-25 16:49:03.381 254096 DEBUG nova.network.neutron [None req-a6bf2ab6-32b7-47ae-9b4e-696207ff6578 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Updating instance_info_cache with network_info: [{"id": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "address": "fa:16:3e:9d:43:a6", "network": {"id": "aa283c2c-b597-4970-842d-f5f2b621b5f0", "bridge": "br-int", "label": "tempest-network-smoke--488175601", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a3f34de-d3", "ovs_interfaceid": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:49:03 compute-0 nova_compute[254092]: 2025-11-25 16:49:03.394 254096 DEBUG oslo_concurrency.lockutils [None req-a6bf2ab6-32b7-47ae-9b4e-696207ff6578 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Releasing lock "refresh_cache-2e848add-8417-4307-8b01-f0d1c1a76cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:49:03 compute-0 nova_compute[254092]: 2025-11-25 16:49:03.395 254096 DEBUG nova.compute.manager [None req-a6bf2ab6-32b7-47ae-9b4e-696207ff6578 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:49:03 compute-0 nova_compute[254092]: 2025-11-25 16:49:03.425 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:03 compute-0 nova_compute[254092]: 2025-11-25 16:49:03.456 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:03 compute-0 nova_compute[254092]: 2025-11-25 16:49:03.662 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1926: 321 pgs: 321 active+clean; 500 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.1 MiB/s wr, 255 op/s
Nov 25 16:49:03 compute-0 nova_compute[254092]: 2025-11-25 16:49:03.806 254096 DEBUG nova.virt.libvirt.driver [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 16:49:04 compute-0 ceph-mon[74985]: pgmap v1926: 321 pgs: 321 active+clean; 500 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.1 MiB/s wr, 255 op/s
Nov 25 16:49:05 compute-0 kernel: tap5a3f34de-d3 (unregistering): left promiscuous mode
Nov 25 16:49:05 compute-0 NetworkManager[48891]: <info>  [1764089345.7940] device (tap5a3f34de-d3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:49:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1927: 321 pgs: 321 active+clean; 517 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 7.4 MiB/s wr, 336 op/s
Nov 25 16:49:05 compute-0 nova_compute[254092]: 2025-11-25 16:49:05.802 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:05 compute-0 ovn_controller[153477]: 2025-11-25T16:49:05Z|00940|binding|INFO|Releasing lport 5a3f34de-d3de-439b-ac8f-baabc77892b4 from this chassis (sb_readonly=0)
Nov 25 16:49:05 compute-0 ovn_controller[153477]: 2025-11-25T16:49:05Z|00941|binding|INFO|Setting lport 5a3f34de-d3de-439b-ac8f-baabc77892b4 down in Southbound
Nov 25 16:49:05 compute-0 ovn_controller[153477]: 2025-11-25T16:49:05Z|00942|binding|INFO|Removing iface tap5a3f34de-d3 ovn-installed in OVS
Nov 25 16:49:05 compute-0 nova_compute[254092]: 2025-11-25 16:49:05.805 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:05.815 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:43:a6 10.100.0.9'], port_security=['fa:16:3e:9d:43:a6 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2e848add-8417-4307-8b01-f0d1c1a76cea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa283c2c-b597-4970-842d-f5f2b621b5f0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'aca38536-94ed-4de4-8f0e-6e316a6d451c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.180'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=30962290-f1d7-4a88-8e7f-94881d981a74, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=5a3f34de-d3de-439b-ac8f-baabc77892b4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:49:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:05.816 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 5a3f34de-d3de-439b-ac8f-baabc77892b4 in datapath aa283c2c-b597-4970-842d-f5f2b621b5f0 unbound from our chassis
Nov 25 16:49:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:05.817 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aa283c2c-b597-4970-842d-f5f2b621b5f0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:49:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:05.818 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a04b7832-016e-4096-8130-5af89298e006]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:05.818 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0 namespace which is not needed anymore
Nov 25 16:49:05 compute-0 nova_compute[254092]: 2025-11-25 16:49:05.823 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:05 compute-0 systemd[1]: machine-qemu\x2d118\x2dinstance\x2d00000060.scope: Deactivated successfully.
Nov 25 16:49:05 compute-0 systemd[1]: machine-qemu\x2d118\x2dinstance\x2d00000060.scope: Consumed 16.991s CPU time.
Nov 25 16:49:05 compute-0 systemd-machined[216343]: Machine qemu-118-instance-00000060 terminated.
Nov 25 16:49:05 compute-0 nova_compute[254092]: 2025-11-25 16:49:05.956 254096 DEBUG oslo_concurrency.lockutils [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "9b189bbf-2581-4656-83da-12707f48dccc" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:05 compute-0 nova_compute[254092]: 2025-11-25 16:49:05.957 254096 DEBUG oslo_concurrency.lockutils [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "9b189bbf-2581-4656-83da-12707f48dccc" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:05 compute-0 nova_compute[254092]: 2025-11-25 16:49:05.957 254096 INFO nova.compute.manager [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Shelving
Nov 25 16:49:05 compute-0 neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0[351428]: [NOTICE]   (351439) : haproxy version is 2.8.14-c23fe91
Nov 25 16:49:05 compute-0 neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0[351428]: [NOTICE]   (351439) : path to executable is /usr/sbin/haproxy
Nov 25 16:49:05 compute-0 neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0[351428]: [WARNING]  (351439) : Exiting Master process...
Nov 25 16:49:05 compute-0 neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0[351428]: [ALERT]    (351439) : Current worker (351441) exited with code 143 (Terminated)
Nov 25 16:49:05 compute-0 neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0[351428]: [WARNING]  (351439) : All workers exited. Exiting... (0)
Nov 25 16:49:05 compute-0 systemd[1]: libpod-c66a1eef8323cd955df0d1a7a05d911b32b83de679881b527bd383cf32f85431.scope: Deactivated successfully.
Nov 25 16:49:05 compute-0 conmon[351428]: conmon c66a1eef8323cd955df0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c66a1eef8323cd955df0d1a7a05d911b32b83de679881b527bd383cf32f85431.scope/container/memory.events
Nov 25 16:49:05 compute-0 podman[353057]: 2025-11-25 16:49:05.971960486 +0000 UTC m=+0.049498986 container died c66a1eef8323cd955df0d1a7a05d911b32b83de679881b527bd383cf32f85431 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 25 16:49:06 compute-0 kernel: tape7e60738-4c (unregistering): left promiscuous mode
Nov 25 16:49:06 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c66a1eef8323cd955df0d1a7a05d911b32b83de679881b527bd383cf32f85431-userdata-shm.mount: Deactivated successfully.
Nov 25 16:49:06 compute-0 NetworkManager[48891]: <info>  [1764089346.0377] device (tape7e60738-4c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:49:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f65b33bb2cefdb9521aad3458f01f2c8320441c773aab270729c09b9835d7b2-merged.mount: Deactivated successfully.
Nov 25 16:49:06 compute-0 podman[353057]: 2025-11-25 16:49:06.053912523 +0000 UTC m=+0.131450993 container cleanup c66a1eef8323cd955df0d1a7a05d911b32b83de679881b527bd383cf32f85431 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 16:49:06 compute-0 ovn_controller[153477]: 2025-11-25T16:49:06Z|00943|binding|INFO|Releasing lport e7e60738-4c0d-46ae-a9b6-1477573be82f from this chassis (sb_readonly=0)
Nov 25 16:49:06 compute-0 nova_compute[254092]: 2025-11-25 16:49:06.055 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:06 compute-0 ovn_controller[153477]: 2025-11-25T16:49:06Z|00944|binding|INFO|Setting lport e7e60738-4c0d-46ae-a9b6-1477573be82f down in Southbound
Nov 25 16:49:06 compute-0 ovn_controller[153477]: 2025-11-25T16:49:06Z|00945|binding|INFO|Removing iface tape7e60738-4c ovn-installed in OVS
Nov 25 16:49:06 compute-0 nova_compute[254092]: 2025-11-25 16:49:06.060 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.068 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:8d:f7 10.100.0.10'], port_security=['fa:16:3e:f4:8d:f7 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '9b189bbf-2581-4656-83da-12707f48dccc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fbf763b31dad40d6b0d7285dc017dd89', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ae4b77d0-13f0-4078-80f7-8ecf8872e648', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2d4322c-9d6a-4a56-a005-6db2d2591dbd, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=e7e60738-4c0d-46ae-a9b6-1477573be82f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:49:06 compute-0 systemd[1]: libpod-conmon-c66a1eef8323cd955df0d1a7a05d911b32b83de679881b527bd383cf32f85431.scope: Deactivated successfully.
Nov 25 16:49:06 compute-0 systemd[1]: machine-qemu\x2d121\x2dinstance\x2d00000063.scope: Deactivated successfully.
Nov 25 16:49:06 compute-0 systemd[1]: machine-qemu\x2d121\x2dinstance\x2d00000063.scope: Consumed 3.013s CPU time.
Nov 25 16:49:06 compute-0 nova_compute[254092]: 2025-11-25 16:49:06.089 254096 DEBUG nova.compute.manager [req-c54e40e8-6fda-4a76-af52-158e217ef27f req-f1bc8072-8e4a-4b66-bbb0-b3b8b4c0c669 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Received event network-vif-unplugged-5a3f34de-d3de-439b-ac8f-baabc77892b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:49:06 compute-0 nova_compute[254092]: 2025-11-25 16:49:06.090 254096 DEBUG oslo_concurrency.lockutils [req-c54e40e8-6fda-4a76-af52-158e217ef27f req-f1bc8072-8e4a-4b66-bbb0-b3b8b4c0c669 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:06 compute-0 nova_compute[254092]: 2025-11-25 16:49:06.090 254096 DEBUG oslo_concurrency.lockutils [req-c54e40e8-6fda-4a76-af52-158e217ef27f req-f1bc8072-8e4a-4b66-bbb0-b3b8b4c0c669 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:06 compute-0 systemd-machined[216343]: Machine qemu-121-instance-00000063 terminated.
Nov 25 16:49:06 compute-0 nova_compute[254092]: 2025-11-25 16:49:06.091 254096 DEBUG oslo_concurrency.lockutils [req-c54e40e8-6fda-4a76-af52-158e217ef27f req-f1bc8072-8e4a-4b66-bbb0-b3b8b4c0c669 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:06 compute-0 nova_compute[254092]: 2025-11-25 16:49:06.091 254096 DEBUG nova.compute.manager [req-c54e40e8-6fda-4a76-af52-158e217ef27f req-f1bc8072-8e4a-4b66-bbb0-b3b8b4c0c669 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] No waiting events found dispatching network-vif-unplugged-5a3f34de-d3de-439b-ac8f-baabc77892b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:49:06 compute-0 nova_compute[254092]: 2025-11-25 16:49:06.091 254096 WARNING nova.compute.manager [req-c54e40e8-6fda-4a76-af52-158e217ef27f req-f1bc8072-8e4a-4b66-bbb0-b3b8b4c0c669 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Received unexpected event network-vif-unplugged-5a3f34de-d3de-439b-ac8f-baabc77892b4 for instance with vm_state active and task_state reboot_started.
Nov 25 16:49:06 compute-0 nova_compute[254092]: 2025-11-25 16:49:06.092 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:06 compute-0 podman[353101]: 2025-11-25 16:49:06.132738426 +0000 UTC m=+0.047116452 container remove c66a1eef8323cd955df0d1a7a05d911b32b83de679881b527bd383cf32f85431 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.139 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[908dd548-6edf-4b8b-b782-8074be0486b3]: (4, ('Tue Nov 25 04:49:05 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0 (c66a1eef8323cd955df0d1a7a05d911b32b83de679881b527bd383cf32f85431)\nc66a1eef8323cd955df0d1a7a05d911b32b83de679881b527bd383cf32f85431\nTue Nov 25 04:49:06 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0 (c66a1eef8323cd955df0d1a7a05d911b32b83de679881b527bd383cf32f85431)\nc66a1eef8323cd955df0d1a7a05d911b32b83de679881b527bd383cf32f85431\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.141 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[38dd46c2-94e9-4f4f-b1fc-c2cbcc944cae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.142 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa283c2c-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:49:06 compute-0 nova_compute[254092]: 2025-11-25 16:49:06.144 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:06 compute-0 kernel: tapaa283c2c-b0: left promiscuous mode
Nov 25 16:49:06 compute-0 nova_compute[254092]: 2025-11-25 16:49:06.162 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:06 compute-0 nova_compute[254092]: 2025-11-25 16:49:06.172 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.176 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[52292048-6ae0-405f-b6a5-9a6745d1ef28]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.191 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[099b889f-c811-499f-bf64-8a1eaeb175e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.193 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3cebd893-1b5d-41bd-bfa9-d7963f1735a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.209 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1db8266b-66c3-46f5-a1b0-385547f8b555]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 577088, 'reachable_time': 41301, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353125, 'error': None, 'target': 'ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:06 compute-0 systemd[1]: run-netns-ovnmeta\x2daa283c2c\x2db597\x2d4970\x2d842d\x2df5f2b621b5f0.mount: Deactivated successfully.
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.216 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.216 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[c39e737c-7511-47ab-b61e-9431b832e89d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.220 163338 INFO neutron.agent.ovn.metadata.agent [-] Port e7e60738-4c0d-46ae-a9b6-1477573be82f in datapath 34b8c77e-8369-4eab-a81e-0825e5fa2919 unbound from our chassis
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.224 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 34b8c77e-8369-4eab-a81e-0825e5fa2919
Nov 25 16:49:06 compute-0 nova_compute[254092]: 2025-11-25 16:49:06.221 254096 INFO nova.virt.libvirt.driver [-] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Instance destroyed successfully.
Nov 25 16:49:06 compute-0 nova_compute[254092]: 2025-11-25 16:49:06.221 254096 DEBUG nova.objects.instance [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lazy-loading 'numa_topology' on Instance uuid 9b189bbf-2581-4656-83da-12707f48dccc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.245 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ad52e2b7-9486-44d9-8bed-0274a73d4012]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.276 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8a2b91ee-c226-4019-a25b-e22bbb55c319]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.280 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ba055905-299c-48f6-ab56-4db6e35a3ec8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.310 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[658c1efa-8273-4bab-b281-6326aaf0baa8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.331 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[abf15020-966b-45bf-bbfe-15337430124a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap34b8c77e-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:49:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 252], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570033, 'reachable_time': 42096, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353138, 'error': None, 'target': 'ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.348 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ae8fd1fc-56bb-4690-970f-ee84ae85ba2b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap34b8c77e-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570043, 'tstamp': 570043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353139, 'error': None, 'target': 'ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap34b8c77e-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570046, 'tstamp': 570046}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353139, 'error': None, 'target': 'ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.350 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34b8c77e-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:49:06 compute-0 nova_compute[254092]: 2025-11-25 16:49:06.352 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:06 compute-0 nova_compute[254092]: 2025-11-25 16:49:06.359 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.360 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap34b8c77e-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.361 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.361 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap34b8c77e-80, col_values=(('external_ids', {'iface-id': '031633f4-be6f-41a4-9c41-8e90000f1a4f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.362 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:49:06 compute-0 nova_compute[254092]: 2025-11-25 16:49:06.541 254096 INFO nova.virt.libvirt.driver [None req-a6bf2ab6-32b7-47ae-9b4e-696207ff6578 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Instance shutdown successfully.
Nov 25 16:49:06 compute-0 kernel: tap5a3f34de-d3: entered promiscuous mode
Nov 25 16:49:06 compute-0 systemd-udevd[353036]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:49:06 compute-0 NetworkManager[48891]: <info>  [1764089346.6041] manager: (tap5a3f34de-d3): new Tun device (/org/freedesktop/NetworkManager/Devices/394)
Nov 25 16:49:06 compute-0 ovn_controller[153477]: 2025-11-25T16:49:06Z|00946|binding|INFO|Claiming lport 5a3f34de-d3de-439b-ac8f-baabc77892b4 for this chassis.
Nov 25 16:49:06 compute-0 ovn_controller[153477]: 2025-11-25T16:49:06Z|00947|binding|INFO|5a3f34de-d3de-439b-ac8f-baabc77892b4: Claiming fa:16:3e:9d:43:a6 10.100.0.9
Nov 25 16:49:06 compute-0 nova_compute[254092]: 2025-11-25 16:49:06.610 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.615 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:43:a6 10.100.0.9'], port_security=['fa:16:3e:9d:43:a6 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2e848add-8417-4307-8b01-f0d1c1a76cea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa283c2c-b597-4970-842d-f5f2b621b5f0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'aca38536-94ed-4de4-8f0e-6e316a6d451c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.180'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=30962290-f1d7-4a88-8e7f-94881d981a74, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=5a3f34de-d3de-439b-ac8f-baabc77892b4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.616 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 5a3f34de-d3de-439b-ac8f-baabc77892b4 in datapath aa283c2c-b597-4970-842d-f5f2b621b5f0 bound to our chassis
Nov 25 16:49:06 compute-0 NetworkManager[48891]: <info>  [1764089346.6172] device (tap5a3f34de-d3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.618 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa283c2c-b597-4970-842d-f5f2b621b5f0
Nov 25 16:49:06 compute-0 NetworkManager[48891]: <info>  [1764089346.6188] device (tap5a3f34de-d3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:49:06 compute-0 ovn_controller[153477]: 2025-11-25T16:49:06Z|00948|binding|INFO|Setting lport 5a3f34de-d3de-439b-ac8f-baabc77892b4 ovn-installed in OVS
Nov 25 16:49:06 compute-0 ovn_controller[153477]: 2025-11-25T16:49:06Z|00949|binding|INFO|Setting lport 5a3f34de-d3de-439b-ac8f-baabc77892b4 up in Southbound
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.630 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[669f9e97-076d-48ac-b47d-0284b86d3431]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:06 compute-0 nova_compute[254092]: 2025-11-25 16:49:06.632 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.631 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaa283c2c-b1 in ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:49:06 compute-0 nova_compute[254092]: 2025-11-25 16:49:06.635 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.635 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaa283c2c-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.636 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[514cce67-2d5a-4010-8c47-83045e49760b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:06 compute-0 systemd-machined[216343]: New machine qemu-122-instance-00000060.
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.638 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[62ff4d35-d1eb-4095-8399-a457d24f46c8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.650 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[2480afcb-6563-4fba-b68f-34d36632e7d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:06 compute-0 systemd[1]: Started Virtual Machine qemu-122-instance-00000060.
Nov 25 16:49:06 compute-0 nova_compute[254092]: 2025-11-25 16:49:06.652 254096 INFO nova.virt.libvirt.driver [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Beginning cold snapshot process
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.673 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[dd0a22af-ebb4-4e77-947d-fe959eeb1f2a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.700 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a28c156f-4da0-4d51-a07a-bdb8168f28e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.705 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[86ce96e7-d68b-4475-8def-5cfe88d396ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:06 compute-0 NetworkManager[48891]: <info>  [1764089346.7069] manager: (tapaa283c2c-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/395)
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.737 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[29477321-c854-4dd9-b463-85306684ad05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.740 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[dcdcbc88-87ab-44f0-82bf-0250adaa6501]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:06 compute-0 NetworkManager[48891]: <info>  [1764089346.7635] device (tapaa283c2c-b0): carrier: link connected
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.769 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c4969bd4-2362-4fc4-9d09-3b6d28c5ada4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.787 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9b859f69-e3f0-4679-bf68-764bce827b42]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa283c2c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:7d:28'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 278], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 580435, 'reachable_time': 42442, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353222, 'error': None, 'target': 'ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:06 compute-0 nova_compute[254092]: 2025-11-25 16:49:06.796 254096 DEBUG nova.virt.libvirt.imagebackend [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] No parent info for 8b512c8e-2281-41de-a668-eb983e174ba0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.803 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[94c17118-2f8c-47b5-b9bc-db4fd6f24a70]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8b:7d28'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 580435, 'tstamp': 580435}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353226, 'error': None, 'target': 'ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.821 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2ab8e76a-8868-4524-96ee-4d9d8c3ad104]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa283c2c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:7d:28'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 278], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 580435, 'reachable_time': 42442, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 353227, 'error': None, 'target': 'ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.849 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c34f56bc-7d9d-4c11-b830-0312468f37aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.916 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9fe4919e-b0c4-4116-b756-f87cefa5c359]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.918 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa283c2c-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.918 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.918 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa283c2c-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:49:06 compute-0 NetworkManager[48891]: <info>  [1764089346.9237] manager: (tapaa283c2c-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/396)
Nov 25 16:49:06 compute-0 nova_compute[254092]: 2025-11-25 16:49:06.923 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:06 compute-0 kernel: tapaa283c2c-b0: entered promiscuous mode
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.929 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa283c2c-b0, col_values=(('external_ids', {'iface-id': '82c4ad4d-388e-4238-98b3-8d58946e7829'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:49:06 compute-0 ceph-mon[74985]: pgmap v1927: 321 pgs: 321 active+clean; 517 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 7.4 MiB/s wr, 336 op/s
Nov 25 16:49:06 compute-0 ovn_controller[153477]: 2025-11-25T16:49:06Z|00950|binding|INFO|Releasing lport 82c4ad4d-388e-4238-98b3-8d58946e7829 from this chassis (sb_readonly=0)
Nov 25 16:49:06 compute-0 nova_compute[254092]: 2025-11-25 16:49:06.931 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:06 compute-0 nova_compute[254092]: 2025-11-25 16:49:06.950 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:06 compute-0 nova_compute[254092]: 2025-11-25 16:49:06.956 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.957 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aa283c2c-b597-4970-842d-f5f2b621b5f0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aa283c2c-b597-4970-842d-f5f2b621b5f0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.958 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[391de876-7fc1-43c8-848a-efc6a8084776]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.959 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-aa283c2c-b597-4970-842d-f5f2b621b5f0
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/aa283c2c-b597-4970-842d-f5f2b621b5f0.pid.haproxy
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID aa283c2c-b597-4970-842d-f5f2b621b5f0
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:49:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:06.959 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0', 'env', 'PROCESS_TAG=haproxy-aa283c2c-b597-4970-842d-f5f2b621b5f0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aa283c2c-b597-4970-842d-f5f2b621b5f0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:49:07 compute-0 nova_compute[254092]: 2025-11-25 16:49:07.102 254096 DEBUG nova.storage.rbd_utils [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] creating snapshot(9c5fa938ca29422babcbba9eafcabdc9) on rbd image(9b189bbf-2581-4656-83da-12707f48dccc_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 16:49:07 compute-0 nova_compute[254092]: 2025-11-25 16:49:07.151 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for 2e848add-8417-4307-8b01-f0d1c1a76cea due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 16:49:07 compute-0 nova_compute[254092]: 2025-11-25 16:49:07.153 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089347.1076672, 2e848add-8417-4307-8b01-f0d1c1a76cea => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:49:07 compute-0 nova_compute[254092]: 2025-11-25 16:49:07.153 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] VM Resumed (Lifecycle Event)
Nov 25 16:49:07 compute-0 nova_compute[254092]: 2025-11-25 16:49:07.158 254096 INFO nova.virt.libvirt.driver [-] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Instance running successfully.
Nov 25 16:49:07 compute-0 nova_compute[254092]: 2025-11-25 16:49:07.158 254096 INFO nova.virt.libvirt.driver [None req-a6bf2ab6-32b7-47ae-9b4e-696207ff6578 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Instance soft rebooted successfully.
Nov 25 16:49:07 compute-0 nova_compute[254092]: 2025-11-25 16:49:07.158 254096 DEBUG nova.compute.manager [None req-a6bf2ab6-32b7-47ae-9b4e-696207ff6578 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:49:07 compute-0 nova_compute[254092]: 2025-11-25 16:49:07.175 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:49:07 compute-0 nova_compute[254092]: 2025-11-25 16:49:07.178 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:49:07 compute-0 nova_compute[254092]: 2025-11-25 16:49:07.194 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] During sync_power_state the instance has a pending task (reboot_started). Skip.
Nov 25 16:49:07 compute-0 nova_compute[254092]: 2025-11-25 16:49:07.195 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089347.1096506, 2e848add-8417-4307-8b01-f0d1c1a76cea => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:49:07 compute-0 nova_compute[254092]: 2025-11-25 16:49:07.195 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] VM Started (Lifecycle Event)
Nov 25 16:49:07 compute-0 nova_compute[254092]: 2025-11-25 16:49:07.218 254096 DEBUG oslo_concurrency.lockutils [None req-a6bf2ab6-32b7-47ae-9b4e-696207ff6578 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 6.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:07 compute-0 nova_compute[254092]: 2025-11-25 16:49:07.221 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:49:07 compute-0 nova_compute[254092]: 2025-11-25 16:49:07.223 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: reboot_started, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:49:07 compute-0 podman[353324]: 2025-11-25 16:49:07.349918342 +0000 UTC m=+0.048793436 container create ef5beb7152db5e2353d62b2763beb955386b61f207badd868dba0e3739a238e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 16:49:07 compute-0 systemd[1]: Started libpod-conmon-ef5beb7152db5e2353d62b2763beb955386b61f207badd868dba0e3739a238e9.scope.
Nov 25 16:49:07 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:49:07 compute-0 podman[353324]: 2025-11-25 16:49:07.325393186 +0000 UTC m=+0.024268300 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:49:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0b2008feb8517d6fb5f144bab4ee20658f8fd3f00332a77040521f977c12c63/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:49:07 compute-0 podman[353324]: 2025-11-25 16:49:07.442696444 +0000 UTC m=+0.141571568 container init ef5beb7152db5e2353d62b2763beb955386b61f207badd868dba0e3739a238e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 16:49:07 compute-0 podman[353324]: 2025-11-25 16:49:07.44844532 +0000 UTC m=+0.147320414 container start ef5beb7152db5e2353d62b2763beb955386b61f207badd868dba0e3739a238e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 25 16:49:07 compute-0 neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0[353339]: [NOTICE]   (353345) : New worker (353347) forked
Nov 25 16:49:07 compute-0 neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0[353339]: [NOTICE]   (353345) : Loading success.
Nov 25 16:49:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1928: 321 pgs: 321 active+clean; 522 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.5 MiB/s wr, 211 op/s
Nov 25 16:49:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e244 do_prune osdmap full prune enabled
Nov 25 16:49:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e245 e245: 3 total, 3 up, 3 in
Nov 25 16:49:07 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e245: 3 total, 3 up, 3 in
Nov 25 16:49:08 compute-0 nova_compute[254092]: 2025-11-25 16:49:08.011 254096 DEBUG nova.storage.rbd_utils [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] cloning vms/9b189bbf-2581-4656-83da-12707f48dccc_disk@9c5fa938ca29422babcbba9eafcabdc9 to images/e2fc087f-e2ce-46f4-acb0-4d7a135a78a9 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 16:49:08 compute-0 nova_compute[254092]: 2025-11-25 16:49:08.091 254096 DEBUG nova.storage.rbd_utils [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] flattening images/e2fc087f-e2ce-46f4-acb0-4d7a135a78a9 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 16:49:08 compute-0 nova_compute[254092]: 2025-11-25 16:49:08.230 254096 DEBUG nova.compute.manager [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Received event network-vif-plugged-5a3f34de-d3de-439b-ac8f-baabc77892b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:49:08 compute-0 nova_compute[254092]: 2025-11-25 16:49:08.231 254096 DEBUG oslo_concurrency.lockutils [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:08 compute-0 nova_compute[254092]: 2025-11-25 16:49:08.231 254096 DEBUG oslo_concurrency.lockutils [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:08 compute-0 nova_compute[254092]: 2025-11-25 16:49:08.231 254096 DEBUG oslo_concurrency.lockutils [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:08 compute-0 nova_compute[254092]: 2025-11-25 16:49:08.232 254096 DEBUG nova.compute.manager [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] No waiting events found dispatching network-vif-plugged-5a3f34de-d3de-439b-ac8f-baabc77892b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:49:08 compute-0 nova_compute[254092]: 2025-11-25 16:49:08.232 254096 WARNING nova.compute.manager [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Received unexpected event network-vif-plugged-5a3f34de-d3de-439b-ac8f-baabc77892b4 for instance with vm_state active and task_state None.
Nov 25 16:49:08 compute-0 nova_compute[254092]: 2025-11-25 16:49:08.233 254096 DEBUG nova.compute.manager [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Received event network-vif-unplugged-e7e60738-4c0d-46ae-a9b6-1477573be82f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:49:08 compute-0 nova_compute[254092]: 2025-11-25 16:49:08.233 254096 DEBUG oslo_concurrency.lockutils [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "9b189bbf-2581-4656-83da-12707f48dccc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:08 compute-0 nova_compute[254092]: 2025-11-25 16:49:08.233 254096 DEBUG oslo_concurrency.lockutils [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9b189bbf-2581-4656-83da-12707f48dccc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:08 compute-0 nova_compute[254092]: 2025-11-25 16:49:08.234 254096 DEBUG oslo_concurrency.lockutils [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9b189bbf-2581-4656-83da-12707f48dccc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:08 compute-0 nova_compute[254092]: 2025-11-25 16:49:08.234 254096 DEBUG nova.compute.manager [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] No waiting events found dispatching network-vif-unplugged-e7e60738-4c0d-46ae-a9b6-1477573be82f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:49:08 compute-0 nova_compute[254092]: 2025-11-25 16:49:08.234 254096 WARNING nova.compute.manager [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Received unexpected event network-vif-unplugged-e7e60738-4c0d-46ae-a9b6-1477573be82f for instance with vm_state paused and task_state shelving_image_uploading.
Nov 25 16:49:08 compute-0 nova_compute[254092]: 2025-11-25 16:49:08.234 254096 DEBUG nova.compute.manager [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Received event network-vif-plugged-e7e60738-4c0d-46ae-a9b6-1477573be82f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:49:08 compute-0 nova_compute[254092]: 2025-11-25 16:49:08.235 254096 DEBUG oslo_concurrency.lockutils [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "9b189bbf-2581-4656-83da-12707f48dccc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:08 compute-0 nova_compute[254092]: 2025-11-25 16:49:08.235 254096 DEBUG oslo_concurrency.lockutils [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9b189bbf-2581-4656-83da-12707f48dccc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:08 compute-0 nova_compute[254092]: 2025-11-25 16:49:08.235 254096 DEBUG oslo_concurrency.lockutils [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9b189bbf-2581-4656-83da-12707f48dccc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:08 compute-0 nova_compute[254092]: 2025-11-25 16:49:08.236 254096 DEBUG nova.compute.manager [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] No waiting events found dispatching network-vif-plugged-e7e60738-4c0d-46ae-a9b6-1477573be82f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:49:08 compute-0 nova_compute[254092]: 2025-11-25 16:49:08.236 254096 WARNING nova.compute.manager [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Received unexpected event network-vif-plugged-e7e60738-4c0d-46ae-a9b6-1477573be82f for instance with vm_state paused and task_state shelving_image_uploading.
Nov 25 16:49:08 compute-0 nova_compute[254092]: 2025-11-25 16:49:08.236 254096 DEBUG nova.compute.manager [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Received event network-vif-plugged-5a3f34de-d3de-439b-ac8f-baabc77892b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:49:08 compute-0 nova_compute[254092]: 2025-11-25 16:49:08.237 254096 DEBUG oslo_concurrency.lockutils [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:08 compute-0 nova_compute[254092]: 2025-11-25 16:49:08.237 254096 DEBUG oslo_concurrency.lockutils [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:08 compute-0 nova_compute[254092]: 2025-11-25 16:49:08.237 254096 DEBUG oslo_concurrency.lockutils [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:08 compute-0 nova_compute[254092]: 2025-11-25 16:49:08.237 254096 DEBUG nova.compute.manager [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] No waiting events found dispatching network-vif-plugged-5a3f34de-d3de-439b-ac8f-baabc77892b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:49:08 compute-0 nova_compute[254092]: 2025-11-25 16:49:08.238 254096 WARNING nova.compute.manager [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Received unexpected event network-vif-plugged-5a3f34de-d3de-439b-ac8f-baabc77892b4 for instance with vm_state active and task_state None.
Nov 25 16:49:08 compute-0 nova_compute[254092]: 2025-11-25 16:49:08.238 254096 DEBUG nova.compute.manager [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Received event network-vif-plugged-5a3f34de-d3de-439b-ac8f-baabc77892b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:49:08 compute-0 nova_compute[254092]: 2025-11-25 16:49:08.238 254096 DEBUG oslo_concurrency.lockutils [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:08 compute-0 nova_compute[254092]: 2025-11-25 16:49:08.239 254096 DEBUG oslo_concurrency.lockutils [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:08 compute-0 nova_compute[254092]: 2025-11-25 16:49:08.239 254096 DEBUG oslo_concurrency.lockutils [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:08 compute-0 nova_compute[254092]: 2025-11-25 16:49:08.239 254096 DEBUG nova.compute.manager [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] No waiting events found dispatching network-vif-plugged-5a3f34de-d3de-439b-ac8f-baabc77892b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:49:08 compute-0 nova_compute[254092]: 2025-11-25 16:49:08.239 254096 WARNING nova.compute.manager [req-eae5673a-450f-4d6d-9d6b-49af4609d8fd req-3dbf1c33-8c60-40f1-a67c-bdc5da095fbf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Received unexpected event network-vif-plugged-5a3f34de-d3de-439b-ac8f-baabc77892b4 for instance with vm_state active and task_state None.
Nov 25 16:49:08 compute-0 nova_compute[254092]: 2025-11-25 16:49:08.291 254096 DEBUG nova.storage.rbd_utils [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] removing snapshot(9c5fa938ca29422babcbba9eafcabdc9) on rbd image(9b189bbf-2581-4656-83da-12707f48dccc_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 16:49:08 compute-0 nova_compute[254092]: 2025-11-25 16:49:08.368 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089333.3668997, 8b20d119-17cb-4742-9223-90e5020f93a7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:49:08 compute-0 nova_compute[254092]: 2025-11-25 16:49:08.369 254096 INFO nova.compute.manager [-] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] VM Stopped (Lifecycle Event)
Nov 25 16:49:08 compute-0 nova_compute[254092]: 2025-11-25 16:49:08.387 254096 DEBUG nova.compute.manager [None req-422fe19d-dec5-4646-a6fe-f9f60d422d60 - - - - - -] [instance: 8b20d119-17cb-4742-9223-90e5020f93a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:49:08 compute-0 nova_compute[254092]: 2025-11-25 16:49:08.457 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:08 compute-0 nova_compute[254092]: 2025-11-25 16:49:08.664 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e245 do_prune osdmap full prune enabled
Nov 25 16:49:08 compute-0 ceph-mon[74985]: pgmap v1928: 321 pgs: 321 active+clean; 522 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.5 MiB/s wr, 211 op/s
Nov 25 16:49:08 compute-0 ceph-mon[74985]: osdmap e245: 3 total, 3 up, 3 in
Nov 25 16:49:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e246 e246: 3 total, 3 up, 3 in
Nov 25 16:49:08 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e246: 3 total, 3 up, 3 in
Nov 25 16:49:08 compute-0 nova_compute[254092]: 2025-11-25 16:49:08.996 254096 DEBUG nova.storage.rbd_utils [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] creating snapshot(snap) on rbd image(e2fc087f-e2ce-46f4-acb0-4d7a135a78a9) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 16:49:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1931: 321 pgs: 321 active+clean; 522 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.1 MiB/s wr, 163 op/s
Nov 25 16:49:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e246 do_prune osdmap full prune enabled
Nov 25 16:49:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e247 e247: 3 total, 3 up, 3 in
Nov 25 16:49:09 compute-0 ceph-mon[74985]: osdmap e246: 3 total, 3 up, 3 in
Nov 25 16:49:09 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e247: 3 total, 3 up, 3 in
Nov 25 16:49:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:49:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:49:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:49:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:49:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:49:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:49:10 compute-0 nova_compute[254092]: 2025-11-25 16:49:10.348 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:10 compute-0 ceph-mon[74985]: pgmap v1931: 321 pgs: 321 active+clean; 522 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.1 MiB/s wr, 163 op/s
Nov 25 16:49:10 compute-0 ceph-mon[74985]: osdmap e247: 3 total, 3 up, 3 in
Nov 25 16:49:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:49:11 compute-0 nova_compute[254092]: 2025-11-25 16:49:11.207 254096 INFO nova.virt.libvirt.driver [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Snapshot image upload complete
Nov 25 16:49:11 compute-0 nova_compute[254092]: 2025-11-25 16:49:11.208 254096 DEBUG nova.compute.manager [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:49:11 compute-0 nova_compute[254092]: 2025-11-25 16:49:11.276 254096 INFO nova.compute.manager [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Shelve offloading
Nov 25 16:49:11 compute-0 nova_compute[254092]: 2025-11-25 16:49:11.284 254096 INFO nova.virt.libvirt.driver [-] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Instance destroyed successfully.
Nov 25 16:49:11 compute-0 nova_compute[254092]: 2025-11-25 16:49:11.284 254096 DEBUG nova.compute.manager [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:49:11 compute-0 nova_compute[254092]: 2025-11-25 16:49:11.286 254096 DEBUG oslo_concurrency.lockutils [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "refresh_cache-9b189bbf-2581-4656-83da-12707f48dccc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:49:11 compute-0 nova_compute[254092]: 2025-11-25 16:49:11.286 254096 DEBUG oslo_concurrency.lockutils [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquired lock "refresh_cache-9b189bbf-2581-4656-83da-12707f48dccc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:49:11 compute-0 nova_compute[254092]: 2025-11-25 16:49:11.286 254096 DEBUG nova.network.neutron [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:49:11 compute-0 nova_compute[254092]: 2025-11-25 16:49:11.515 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:49:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1933: 321 pgs: 321 active+clean; 579 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 5.1 MiB/s wr, 332 op/s
Nov 25 16:49:12 compute-0 nova_compute[254092]: 2025-11-25 16:49:12.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:49:12 compute-0 ceph-mon[74985]: pgmap v1933: 321 pgs: 321 active+clean; 579 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 5.1 MiB/s wr, 332 op/s
Nov 25 16:49:13 compute-0 nova_compute[254092]: 2025-11-25 16:49:13.368 254096 DEBUG nova.network.neutron [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Updating instance_info_cache with network_info: [{"id": "e7e60738-4c0d-46ae-a9b6-1477573be82f", "address": "fa:16:3e:f4:8d:f7", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7e60738-4c", "ovs_interfaceid": "e7e60738-4c0d-46ae-a9b6-1477573be82f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:49:13 compute-0 nova_compute[254092]: 2025-11-25 16:49:13.388 254096 DEBUG oslo_concurrency.lockutils [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Releasing lock "refresh_cache-9b189bbf-2581-4656-83da-12707f48dccc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:49:13 compute-0 nova_compute[254092]: 2025-11-25 16:49:13.459 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:13.625 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:13.626 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:13.627 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:13 compute-0 nova_compute[254092]: 2025-11-25 16:49:13.666 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1934: 321 pgs: 321 active+clean; 579 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 3.7 MiB/s wr, 278 op/s
Nov 25 16:49:14 compute-0 nova_compute[254092]: 2025-11-25 16:49:14.849 254096 DEBUG nova.virt.libvirt.driver [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 16:49:14 compute-0 ceph-mon[74985]: pgmap v1934: 321 pgs: 321 active+clean; 579 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 3.7 MiB/s wr, 278 op/s
Nov 25 16:49:15 compute-0 nova_compute[254092]: 2025-11-25 16:49:15.129 254096 INFO nova.virt.libvirt.driver [-] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Instance destroyed successfully.
Nov 25 16:49:15 compute-0 nova_compute[254092]: 2025-11-25 16:49:15.130 254096 DEBUG nova.objects.instance [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lazy-loading 'resources' on Instance uuid 9b189bbf-2581-4656-83da-12707f48dccc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:49:15 compute-0 nova_compute[254092]: 2025-11-25 16:49:15.141 254096 DEBUG nova.virt.libvirt.vif [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:48:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1282242960',display_name='tempest-ServerActionsTestOtherB-server-1282242960',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1282242960',id=99,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:49:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='fbf763b31dad40d6b0d7285dc017dd89',ramdisk_id='',reservation_id='r-ao13046e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',ima
ge_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1019246920',owner_user_name='tempest-ServerActionsTestOtherB-1019246920-project-member',shelved_at='2025-11-25T16:49:11.208042',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='e2fc087f-e2ce-46f4-acb0-4d7a135a78a9'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:49:06Z,user_data=None,user_id='23f6db77558a477bbd8b8b46cb4107d1',uuid=9b189bbf-2581-4656-83da-12707f48dccc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "e7e60738-4c0d-46ae-a9b6-1477573be82f", "address": "fa:16:3e:f4:8d:f7", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7e60738-4c", "ovs_interfaceid": "e7e60738-4c0d-46ae-a9b6-1477573be82f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:49:15 compute-0 nova_compute[254092]: 2025-11-25 16:49:15.141 254096 DEBUG nova.network.os_vif_util [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Converting VIF {"id": "e7e60738-4c0d-46ae-a9b6-1477573be82f", "address": "fa:16:3e:f4:8d:f7", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7e60738-4c", "ovs_interfaceid": "e7e60738-4c0d-46ae-a9b6-1477573be82f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:49:15 compute-0 nova_compute[254092]: 2025-11-25 16:49:15.142 254096 DEBUG nova.network.os_vif_util [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:8d:f7,bridge_name='br-int',has_traffic_filtering=True,id=e7e60738-4c0d-46ae-a9b6-1477573be82f,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7e60738-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:49:15 compute-0 nova_compute[254092]: 2025-11-25 16:49:15.143 254096 DEBUG os_vif [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:8d:f7,bridge_name='br-int',has_traffic_filtering=True,id=e7e60738-4c0d-46ae-a9b6-1477573be82f,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7e60738-4c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:49:15 compute-0 nova_compute[254092]: 2025-11-25 16:49:15.145 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:15 compute-0 nova_compute[254092]: 2025-11-25 16:49:15.145 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape7e60738-4c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:49:15 compute-0 nova_compute[254092]: 2025-11-25 16:49:15.147 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:15 compute-0 nova_compute[254092]: 2025-11-25 16:49:15.149 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:15 compute-0 nova_compute[254092]: 2025-11-25 16:49:15.152 254096 INFO os_vif [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:8d:f7,bridge_name='br-int',has_traffic_filtering=True,id=e7e60738-4c0d-46ae-a9b6-1477573be82f,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7e60738-4c')
Nov 25 16:49:15 compute-0 nova_compute[254092]: 2025-11-25 16:49:15.228 254096 DEBUG nova.compute.manager [req-c1101ae0-29d1-4471-8a43-ba44482a8220 req-938073f3-24ac-4d3a-bb6f-70f0b676fbe2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Received event network-changed-e7e60738-4c0d-46ae-a9b6-1477573be82f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:49:15 compute-0 nova_compute[254092]: 2025-11-25 16:49:15.229 254096 DEBUG nova.compute.manager [req-c1101ae0-29d1-4471-8a43-ba44482a8220 req-938073f3-24ac-4d3a-bb6f-70f0b676fbe2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Refreshing instance network info cache due to event network-changed-e7e60738-4c0d-46ae-a9b6-1477573be82f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:49:15 compute-0 nova_compute[254092]: 2025-11-25 16:49:15.229 254096 DEBUG oslo_concurrency.lockutils [req-c1101ae0-29d1-4471-8a43-ba44482a8220 req-938073f3-24ac-4d3a-bb6f-70f0b676fbe2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-9b189bbf-2581-4656-83da-12707f48dccc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:49:15 compute-0 nova_compute[254092]: 2025-11-25 16:49:15.229 254096 DEBUG oslo_concurrency.lockutils [req-c1101ae0-29d1-4471-8a43-ba44482a8220 req-938073f3-24ac-4d3a-bb6f-70f0b676fbe2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-9b189bbf-2581-4656-83da-12707f48dccc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:49:15 compute-0 nova_compute[254092]: 2025-11-25 16:49:15.230 254096 DEBUG nova.network.neutron [req-c1101ae0-29d1-4471-8a43-ba44482a8220 req-938073f3-24ac-4d3a-bb6f-70f0b676fbe2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Refreshing network info cache for port e7e60738-4c0d-46ae-a9b6-1477573be82f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:49:15 compute-0 nova_compute[254092]: 2025-11-25 16:49:15.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:49:15 compute-0 nova_compute[254092]: 2025-11-25 16:49:15.544 254096 INFO nova.virt.libvirt.driver [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Deleting instance files /var/lib/nova/instances/9b189bbf-2581-4656-83da-12707f48dccc_del
Nov 25 16:49:15 compute-0 nova_compute[254092]: 2025-11-25 16:49:15.545 254096 INFO nova.virt.libvirt.driver [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Deletion of /var/lib/nova/instances/9b189bbf-2581-4656-83da-12707f48dccc_del complete
Nov 25 16:49:15 compute-0 nova_compute[254092]: 2025-11-25 16:49:15.760 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:15 compute-0 nova_compute[254092]: 2025-11-25 16:49:15.776 254096 INFO nova.scheduler.client.report [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Deleted allocations for instance 9b189bbf-2581-4656-83da-12707f48dccc
Nov 25 16:49:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1935: 321 pgs: 321 active+clean; 579 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 2.9 MiB/s wr, 232 op/s
Nov 25 16:49:15 compute-0 nova_compute[254092]: 2025-11-25 16:49:15.821 254096 DEBUG oslo_concurrency.lockutils [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:15 compute-0 nova_compute[254092]: 2025-11-25 16:49:15.822 254096 DEBUG oslo_concurrency.lockutils [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:16 compute-0 nova_compute[254092]: 2025-11-25 16:49:16.107 254096 DEBUG oslo_concurrency.processutils [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:49:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:49:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e247 do_prune osdmap full prune enabled
Nov 25 16:49:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e248 e248: 3 total, 3 up, 3 in
Nov 25 16:49:16 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e248: 3 total, 3 up, 3 in
Nov 25 16:49:16 compute-0 nova_compute[254092]: 2025-11-25 16:49:16.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:49:16 compute-0 nova_compute[254092]: 2025-11-25 16:49:16.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 16:49:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:49:16 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/869621011' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:49:16 compute-0 nova_compute[254092]: 2025-11-25 16:49:16.543 254096 DEBUG oslo_concurrency.processutils [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:49:16 compute-0 nova_compute[254092]: 2025-11-25 16:49:16.549 254096 DEBUG nova.compute.provider_tree [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:49:16 compute-0 nova_compute[254092]: 2025-11-25 16:49:16.563 254096 DEBUG nova.scheduler.client.report [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:49:16 compute-0 nova_compute[254092]: 2025-11-25 16:49:16.581 254096 DEBUG oslo_concurrency.lockutils [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:16 compute-0 nova_compute[254092]: 2025-11-25 16:49:16.610 254096 DEBUG nova.network.neutron [req-c1101ae0-29d1-4471-8a43-ba44482a8220 req-938073f3-24ac-4d3a-bb6f-70f0b676fbe2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Updated VIF entry in instance network info cache for port e7e60738-4c0d-46ae-a9b6-1477573be82f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:49:16 compute-0 nova_compute[254092]: 2025-11-25 16:49:16.612 254096 DEBUG nova.network.neutron [req-c1101ae0-29d1-4471-8a43-ba44482a8220 req-938073f3-24ac-4d3a-bb6f-70f0b676fbe2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Updating instance_info_cache with network_info: [{"id": "e7e60738-4c0d-46ae-a9b6-1477573be82f", "address": "fa:16:3e:f4:8d:f7", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": null, "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tape7e60738-4c", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:49:16 compute-0 nova_compute[254092]: 2025-11-25 16:49:16.630 254096 DEBUG oslo_concurrency.lockutils [None req-6fabdec0-7a4e-4453-b0a0-9a0f006b697c 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "9b189bbf-2581-4656-83da-12707f48dccc" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 10.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:16 compute-0 nova_compute[254092]: 2025-11-25 16:49:16.637 254096 DEBUG oslo_concurrency.lockutils [req-c1101ae0-29d1-4471-8a43-ba44482a8220 req-938073f3-24ac-4d3a-bb6f-70f0b676fbe2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-9b189bbf-2581-4656-83da-12707f48dccc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:49:17 compute-0 kernel: tapacb7d65c-02 (unregistering): left promiscuous mode
Nov 25 16:49:17 compute-0 ovn_controller[153477]: 2025-11-25T16:49:17Z|00951|binding|INFO|Releasing lport acb7d65c-0259-4a39-94f8-d7f64637a340 from this chassis (sb_readonly=0)
Nov 25 16:49:17 compute-0 ovn_controller[153477]: 2025-11-25T16:49:17Z|00952|binding|INFO|Setting lport acb7d65c-0259-4a39-94f8-d7f64637a340 down in Southbound
Nov 25 16:49:17 compute-0 nova_compute[254092]: 2025-11-25 16:49:17.188 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:17 compute-0 NetworkManager[48891]: <info>  [1764089357.1895] device (tapacb7d65c-02): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:49:17 compute-0 ovn_controller[153477]: 2025-11-25T16:49:17Z|00953|binding|INFO|Removing iface tapacb7d65c-02 ovn-installed in OVS
Nov 25 16:49:17 compute-0 nova_compute[254092]: 2025-11-25 16:49:17.190 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:17.202 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:d7:96 10.100.0.2'], port_security=['fa:16:3e:b2:d7:96 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': '6b74b880-45f6-4f10-b09f-2696629a42e9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e7afc96-cd20-4255-ae42-f12adadff80d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '544c4f84ca494482aea8e55248fe4c62', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ea3857ca-f518-4e95-9458-20b7becb63fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1ffd0c38-5078-40a5-ba7b-d1f91d316b3e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=acb7d65c-0259-4a39-94f8-d7f64637a340) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:49:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:17.204 163338 INFO neutron.agent.ovn.metadata.agent [-] Port acb7d65c-0259-4a39-94f8-d7f64637a340 in datapath 7e7afc96-cd20-4255-ae42-f12adadff80d unbound from our chassis
Nov 25 16:49:17 compute-0 ceph-mon[74985]: pgmap v1935: 321 pgs: 321 active+clean; 579 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 2.9 MiB/s wr, 232 op/s
Nov 25 16:49:17 compute-0 ceph-mon[74985]: osdmap e248: 3 total, 3 up, 3 in
Nov 25 16:49:17 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/869621011' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:49:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:17.205 163338 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7e7afc96-cd20-4255-ae42-f12adadff80d or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 25 16:49:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:17.207 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7f9f8006-d86a-4b40-9471-f1cf8de558de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:17 compute-0 nova_compute[254092]: 2025-11-25 16:49:17.218 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:17 compute-0 systemd[1]: machine-qemu\x2d120\x2dinstance\x2d00000062.scope: Deactivated successfully.
Nov 25 16:49:17 compute-0 systemd[1]: machine-qemu\x2d120\x2dinstance\x2d00000062.scope: Consumed 16.476s CPU time.
Nov 25 16:49:17 compute-0 systemd-machined[216343]: Machine qemu-120-instance-00000062 terminated.
Nov 25 16:49:17 compute-0 nova_compute[254092]: 2025-11-25 16:49:17.410 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:17 compute-0 nova_compute[254092]: 2025-11-25 16:49:17.414 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:17 compute-0 nova_compute[254092]: 2025-11-25 16:49:17.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:49:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1937: 321 pgs: 321 active+clean; 574 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.8 MiB/s wr, 253 op/s
Nov 25 16:49:17 compute-0 nova_compute[254092]: 2025-11-25 16:49:17.864 254096 INFO nova.virt.libvirt.driver [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Instance shutdown successfully after 24 seconds.
Nov 25 16:49:17 compute-0 nova_compute[254092]: 2025-11-25 16:49:17.869 254096 INFO nova.virt.libvirt.driver [-] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Instance destroyed successfully.
Nov 25 16:49:17 compute-0 nova_compute[254092]: 2025-11-25 16:49:17.870 254096 DEBUG nova.objects.instance [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lazy-loading 'numa_topology' on Instance uuid 6b74b880-45f6-4f10-b09f-2696629a42e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:49:17 compute-0 nova_compute[254092]: 2025-11-25 16:49:17.889 254096 INFO nova.virt.libvirt.driver [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Attempting rescue
Nov 25 16:49:17 compute-0 nova_compute[254092]: 2025-11-25 16:49:17.890 254096 DEBUG nova.virt.libvirt.driver [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Nov 25 16:49:17 compute-0 nova_compute[254092]: 2025-11-25 16:49:17.896 254096 DEBUG nova.virt.libvirt.driver [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Nov 25 16:49:17 compute-0 nova_compute[254092]: 2025-11-25 16:49:17.896 254096 INFO nova.virt.libvirt.driver [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Creating image(s)
Nov 25 16:49:17 compute-0 nova_compute[254092]: 2025-11-25 16:49:17.923 254096 DEBUG nova.storage.rbd_utils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 6b74b880-45f6-4f10-b09f-2696629a42e9_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:49:17 compute-0 nova_compute[254092]: 2025-11-25 16:49:17.927 254096 DEBUG nova.objects.instance [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 6b74b880-45f6-4f10-b09f-2696629a42e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:49:17 compute-0 nova_compute[254092]: 2025-11-25 16:49:17.959 254096 DEBUG nova.storage.rbd_utils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 6b74b880-45f6-4f10-b09f-2696629a42e9_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:49:17 compute-0 nova_compute[254092]: 2025-11-25 16:49:17.983 254096 DEBUG nova.storage.rbd_utils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 6b74b880-45f6-4f10-b09f-2696629a42e9_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:49:17 compute-0 nova_compute[254092]: 2025-11-25 16:49:17.986 254096 DEBUG oslo_concurrency.processutils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:49:18 compute-0 nova_compute[254092]: 2025-11-25 16:49:18.060 254096 DEBUG oslo_concurrency.processutils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:49:18 compute-0 nova_compute[254092]: 2025-11-25 16:49:18.061 254096 DEBUG oslo_concurrency.lockutils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:18 compute-0 nova_compute[254092]: 2025-11-25 16:49:18.062 254096 DEBUG oslo_concurrency.lockutils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:18 compute-0 nova_compute[254092]: 2025-11-25 16:49:18.062 254096 DEBUG oslo_concurrency.lockutils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:18 compute-0 nova_compute[254092]: 2025-11-25 16:49:18.086 254096 DEBUG nova.storage.rbd_utils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 6b74b880-45f6-4f10-b09f-2696629a42e9_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:49:18 compute-0 nova_compute[254092]: 2025-11-25 16:49:18.089 254096 DEBUG oslo_concurrency.processutils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 6b74b880-45f6-4f10-b09f-2696629a42e9_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:49:18 compute-0 nova_compute[254092]: 2025-11-25 16:49:18.346 254096 DEBUG oslo_concurrency.processutils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 6b74b880-45f6-4f10-b09f-2696629a42e9_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.257s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:49:18 compute-0 nova_compute[254092]: 2025-11-25 16:49:18.348 254096 DEBUG nova.objects.instance [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lazy-loading 'migration_context' on Instance uuid 6b74b880-45f6-4f10-b09f-2696629a42e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:49:18 compute-0 nova_compute[254092]: 2025-11-25 16:49:18.359 254096 DEBUG nova.virt.libvirt.driver [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:49:18 compute-0 nova_compute[254092]: 2025-11-25 16:49:18.360 254096 DEBUG nova.virt.libvirt.driver [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Start _get_guest_xml network_info=[{"id": "acb7d65c-0259-4a39-94f8-d7f64637a340", "address": "fa:16:3e:b2:d7:96", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-2031272697-network", "vif_mac": "fa:16:3e:b2:d7:96"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacb7d65c-02", "ovs_interfaceid": "acb7d65c-0259-4a39-94f8-d7f64637a340", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:49:18 compute-0 nova_compute[254092]: 2025-11-25 16:49:18.360 254096 DEBUG nova.objects.instance [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lazy-loading 'resources' on Instance uuid 6b74b880-45f6-4f10-b09f-2696629a42e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:49:18 compute-0 nova_compute[254092]: 2025-11-25 16:49:18.376 254096 WARNING nova.virt.libvirt.driver [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:49:18 compute-0 nova_compute[254092]: 2025-11-25 16:49:18.381 254096 DEBUG nova.virt.libvirt.host [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:49:18 compute-0 nova_compute[254092]: 2025-11-25 16:49:18.382 254096 DEBUG nova.virt.libvirt.host [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:49:18 compute-0 nova_compute[254092]: 2025-11-25 16:49:18.385 254096 DEBUG nova.virt.libvirt.host [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:49:18 compute-0 nova_compute[254092]: 2025-11-25 16:49:18.386 254096 DEBUG nova.virt.libvirt.host [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:49:18 compute-0 nova_compute[254092]: 2025-11-25 16:49:18.386 254096 DEBUG nova.virt.libvirt.driver [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:49:18 compute-0 nova_compute[254092]: 2025-11-25 16:49:18.386 254096 DEBUG nova.virt.hardware [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:49:18 compute-0 nova_compute[254092]: 2025-11-25 16:49:18.387 254096 DEBUG nova.virt.hardware [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:49:18 compute-0 nova_compute[254092]: 2025-11-25 16:49:18.387 254096 DEBUG nova.virt.hardware [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:49:18 compute-0 nova_compute[254092]: 2025-11-25 16:49:18.388 254096 DEBUG nova.virt.hardware [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:49:18 compute-0 nova_compute[254092]: 2025-11-25 16:49:18.388 254096 DEBUG nova.virt.hardware [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:49:18 compute-0 nova_compute[254092]: 2025-11-25 16:49:18.388 254096 DEBUG nova.virt.hardware [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:49:18 compute-0 nova_compute[254092]: 2025-11-25 16:49:18.388 254096 DEBUG nova.virt.hardware [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:49:18 compute-0 nova_compute[254092]: 2025-11-25 16:49:18.388 254096 DEBUG nova.virt.hardware [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:49:18 compute-0 nova_compute[254092]: 2025-11-25 16:49:18.389 254096 DEBUG nova.virt.hardware [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:49:18 compute-0 nova_compute[254092]: 2025-11-25 16:49:18.389 254096 DEBUG nova.virt.hardware [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:49:18 compute-0 nova_compute[254092]: 2025-11-25 16:49:18.389 254096 DEBUG nova.virt.hardware [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:49:18 compute-0 nova_compute[254092]: 2025-11-25 16:49:18.389 254096 DEBUG nova.objects.instance [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 6b74b880-45f6-4f10-b09f-2696629a42e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:49:18 compute-0 nova_compute[254092]: 2025-11-25 16:49:18.405 254096 DEBUG oslo_concurrency.processutils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:49:18 compute-0 podman[353621]: 2025-11-25 16:49:18.664547239 +0000 UTC m=+0.068548053 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 16:49:18 compute-0 podman[353611]: 2025-11-25 16:49:18.668319632 +0000 UTC m=+0.068820641 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 16:49:18 compute-0 nova_compute[254092]: 2025-11-25 16:49:18.670 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:18 compute-0 podman[353622]: 2025-11-25 16:49:18.71056307 +0000 UTC m=+0.113312140 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 16:49:18 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:49:18 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2796629910' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:49:18 compute-0 nova_compute[254092]: 2025-11-25 16:49:18.899 254096 DEBUG oslo_concurrency.processutils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:49:18 compute-0 nova_compute[254092]: 2025-11-25 16:49:18.900 254096 DEBUG oslo_concurrency.processutils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:49:19 compute-0 ceph-mon[74985]: pgmap v1937: 321 pgs: 321 active+clean; 574 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.8 MiB/s wr, 253 op/s
Nov 25 16:49:19 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2796629910' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:49:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:49:19 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3349042337' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:49:19 compute-0 nova_compute[254092]: 2025-11-25 16:49:19.398 254096 DEBUG oslo_concurrency.processutils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:49:19 compute-0 nova_compute[254092]: 2025-11-25 16:49:19.399 254096 DEBUG oslo_concurrency.processutils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:49:19 compute-0 nova_compute[254092]: 2025-11-25 16:49:19.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:49:19 compute-0 nova_compute[254092]: 2025-11-25 16:49:19.520 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:19 compute-0 nova_compute[254092]: 2025-11-25 16:49:19.520 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:19 compute-0 nova_compute[254092]: 2025-11-25 16:49:19.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:19 compute-0 nova_compute[254092]: 2025-11-25 16:49:19.521 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 16:49:19 compute-0 nova_compute[254092]: 2025-11-25 16:49:19.521 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:49:19 compute-0 nova_compute[254092]: 2025-11-25 16:49:19.665 254096 DEBUG oslo_concurrency.lockutils [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "73301044-3bad-4401-9e30-f009d417f662" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:19 compute-0 nova_compute[254092]: 2025-11-25 16:49:19.666 254096 DEBUG oslo_concurrency.lockutils [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:19 compute-0 nova_compute[254092]: 2025-11-25 16:49:19.666 254096 INFO nova.compute.manager [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Shelving
Nov 25 16:49:19 compute-0 nova_compute[254092]: 2025-11-25 16:49:19.683 254096 DEBUG nova.virt.libvirt.driver [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 16:49:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1938: 321 pgs: 321 active+clean; 574 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 2.3 MiB/s wr, 206 op/s
Nov 25 16:49:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:49:19 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/954827956' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:49:19 compute-0 nova_compute[254092]: 2025-11-25 16:49:19.890 254096 DEBUG oslo_concurrency.processutils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:49:19 compute-0 nova_compute[254092]: 2025-11-25 16:49:19.891 254096 DEBUG nova.virt.libvirt.vif [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:48:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1318564784',display_name='tempest-ServerRescueTestJSON-server-1318564784',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1318564784',id=98,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:48:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='544c4f84ca494482aea8e55248fe4c62',ramdisk_id='',reservation_id='r-07wxgfqh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_d
isk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-143951276',owner_user_name='tempest-ServerRescueTestJSON-143951276-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:48:50Z,user_data=None,user_id='013868ddd96f43a49458a4615ab1f41b',uuid=6b74b880-45f6-4f10-b09f-2696629a42e9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "acb7d65c-0259-4a39-94f8-d7f64637a340", "address": "fa:16:3e:b2:d7:96", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-2031272697-network", "vif_mac": "fa:16:3e:b2:d7:96"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacb7d65c-02", "ovs_interfaceid": "acb7d65c-0259-4a39-94f8-d7f64637a340", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:49:19 compute-0 nova_compute[254092]: 2025-11-25 16:49:19.892 254096 DEBUG nova.network.os_vif_util [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Converting VIF {"id": "acb7d65c-0259-4a39-94f8-d7f64637a340", "address": "fa:16:3e:b2:d7:96", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-2031272697-network", "vif_mac": "fa:16:3e:b2:d7:96"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacb7d65c-02", "ovs_interfaceid": "acb7d65c-0259-4a39-94f8-d7f64637a340", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:49:19 compute-0 nova_compute[254092]: 2025-11-25 16:49:19.892 254096 DEBUG nova.network.os_vif_util [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b2:d7:96,bridge_name='br-int',has_traffic_filtering=True,id=acb7d65c-0259-4a39-94f8-d7f64637a340,network=Network(7e7afc96-cd20-4255-ae42-f12adadff80d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacb7d65c-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:49:19 compute-0 nova_compute[254092]: 2025-11-25 16:49:19.893 254096 DEBUG nova.objects.instance [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6b74b880-45f6-4f10-b09f-2696629a42e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:49:19 compute-0 nova_compute[254092]: 2025-11-25 16:49:19.911 254096 DEBUG nova.virt.libvirt.driver [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:49:19 compute-0 nova_compute[254092]:   <uuid>6b74b880-45f6-4f10-b09f-2696629a42e9</uuid>
Nov 25 16:49:19 compute-0 nova_compute[254092]:   <name>instance-00000062</name>
Nov 25 16:49:19 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:49:19 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:49:19 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:49:19 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:       <nova:name>tempest-ServerRescueTestJSON-server-1318564784</nova:name>
Nov 25 16:49:19 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:49:18</nova:creationTime>
Nov 25 16:49:19 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:49:19 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:49:19 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:49:19 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:49:19 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:49:19 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:49:19 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:49:19 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:49:19 compute-0 nova_compute[254092]:         <nova:user uuid="013868ddd96f43a49458a4615ab1f41b">tempest-ServerRescueTestJSON-143951276-project-member</nova:user>
Nov 25 16:49:19 compute-0 nova_compute[254092]:         <nova:project uuid="544c4f84ca494482aea8e55248fe4c62">tempest-ServerRescueTestJSON-143951276</nova:project>
Nov 25 16:49:19 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:49:19 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:49:19 compute-0 nova_compute[254092]:         <nova:port uuid="acb7d65c-0259-4a39-94f8-d7f64637a340">
Nov 25 16:49:19 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.2" ipVersion="4"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:49:19 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:49:19 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:49:19 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:49:19 compute-0 nova_compute[254092]:     <system>
Nov 25 16:49:19 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:49:19 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:49:19 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:49:19 compute-0 nova_compute[254092]:       <entry name="serial">6b74b880-45f6-4f10-b09f-2696629a42e9</entry>
Nov 25 16:49:19 compute-0 nova_compute[254092]:       <entry name="uuid">6b74b880-45f6-4f10-b09f-2696629a42e9</entry>
Nov 25 16:49:19 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     </system>
Nov 25 16:49:19 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:49:19 compute-0 nova_compute[254092]:   <os>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:   </os>
Nov 25 16:49:19 compute-0 nova_compute[254092]:   <features>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:   </features>
Nov 25 16:49:19 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:49:19 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:49:19 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:49:19 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:49:19 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:49:19 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/6b74b880-45f6-4f10-b09f-2696629a42e9_disk.rescue">
Nov 25 16:49:19 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:       </source>
Nov 25 16:49:19 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:49:19 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:49:19 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:49:19 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/6b74b880-45f6-4f10-b09f-2696629a42e9_disk">
Nov 25 16:49:19 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:       </source>
Nov 25 16:49:19 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:49:19 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:49:19 compute-0 nova_compute[254092]:       <target dev="vdb" bus="virtio"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:49:19 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/6b74b880-45f6-4f10-b09f-2696629a42e9_disk.config.rescue">
Nov 25 16:49:19 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:       </source>
Nov 25 16:49:19 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:49:19 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:49:19 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:49:19 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:b2:d7:96"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:       <target dev="tapacb7d65c-02"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:49:19 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/6b74b880-45f6-4f10-b09f-2696629a42e9/console.log" append="off"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     <video>
Nov 25 16:49:19 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     </video>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:49:19 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:49:19 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:49:19 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:49:19 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:49:19 compute-0 nova_compute[254092]: </domain>
Nov 25 16:49:19 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:49:19 compute-0 nova_compute[254092]: 2025-11-25 16:49:19.921 254096 INFO nova.virt.libvirt.driver [-] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Instance destroyed successfully.
Nov 25 16:49:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:49:19 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3326653674' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:49:19 compute-0 nova_compute[254092]: 2025-11-25 16:49:19.966 254096 DEBUG nova.virt.libvirt.driver [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:49:19 compute-0 nova_compute[254092]: 2025-11-25 16:49:19.966 254096 DEBUG nova.virt.libvirt.driver [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:49:19 compute-0 nova_compute[254092]: 2025-11-25 16:49:19.967 254096 DEBUG nova.virt.libvirt.driver [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:49:19 compute-0 nova_compute[254092]: 2025-11-25 16:49:19.967 254096 DEBUG nova.virt.libvirt.driver [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] No VIF found with MAC fa:16:3e:b2:d7:96, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:49:19 compute-0 nova_compute[254092]: 2025-11-25 16:49:19.967 254096 INFO nova.virt.libvirt.driver [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Using config drive
Nov 25 16:49:19 compute-0 nova_compute[254092]: 2025-11-25 16:49:19.991 254096 DEBUG nova.storage.rbd_utils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 6b74b880-45f6-4f10-b09f-2696629a42e9_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:49:19 compute-0 nova_compute[254092]: 2025-11-25 16:49:19.996 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:49:20 compute-0 nova_compute[254092]: 2025-11-25 16:49:20.010 254096 DEBUG nova.objects.instance [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 6b74b880-45f6-4f10-b09f-2696629a42e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:49:20 compute-0 nova_compute[254092]: 2025-11-25 16:49:20.041 254096 DEBUG nova.objects.instance [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lazy-loading 'keypairs' on Instance uuid 6b74b880-45f6-4f10-b09f-2696629a42e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:49:20 compute-0 nova_compute[254092]: 2025-11-25 16:49:20.091 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000062 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:49:20 compute-0 nova_compute[254092]: 2025-11-25 16:49:20.091 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000062 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:49:20 compute-0 nova_compute[254092]: 2025-11-25 16:49:20.091 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000062 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:49:20 compute-0 nova_compute[254092]: 2025-11-25 16:49:20.095 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:49:20 compute-0 nova_compute[254092]: 2025-11-25 16:49:20.095 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:49:20 compute-0 nova_compute[254092]: 2025-11-25 16:49:20.100 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000005e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:49:20 compute-0 nova_compute[254092]: 2025-11-25 16:49:20.100 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000005e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:49:20 compute-0 nova_compute[254092]: 2025-11-25 16:49:20.100 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000005e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:49:20 compute-0 nova_compute[254092]: 2025-11-25 16:49:20.103 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:49:20 compute-0 nova_compute[254092]: 2025-11-25 16:49:20.104 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:49:20 compute-0 nova_compute[254092]: 2025-11-25 16:49:20.106 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000061 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:49:20 compute-0 nova_compute[254092]: 2025-11-25 16:49:20.107 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000061 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:49:20 compute-0 nova_compute[254092]: 2025-11-25 16:49:20.149 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:20 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3349042337' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:49:20 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/954827956' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:49:20 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3326653674' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:49:20 compute-0 ovn_controller[153477]: 2025-11-25T16:49:20Z|00103|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9d:43:a6 10.100.0.9
Nov 25 16:49:20 compute-0 nova_compute[254092]: 2025-11-25 16:49:20.336 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:49:20 compute-0 nova_compute[254092]: 2025-11-25 16:49:20.337 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2968MB free_disk=59.71924591064453GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 16:49:20 compute-0 nova_compute[254092]: 2025-11-25 16:49:20.338 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:20 compute-0 nova_compute[254092]: 2025-11-25 16:49:20.338 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:20 compute-0 nova_compute[254092]: 2025-11-25 16:49:20.404 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 73301044-3bad-4401-9e30-f009d417f662 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:49:20 compute-0 nova_compute[254092]: 2025-11-25 16:49:20.404 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 435ae693-6844-49ae-977b-ec3aa89cfe70 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:49:20 compute-0 nova_compute[254092]: 2025-11-25 16:49:20.405 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 2e848add-8417-4307-8b01-f0d1c1a76cea actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:49:20 compute-0 nova_compute[254092]: 2025-11-25 16:49:20.406 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance e0098976-026f-43d8-b686-b2658f9aded9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:49:20 compute-0 nova_compute[254092]: 2025-11-25 16:49:20.407 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 6b74b880-45f6-4f10-b09f-2696629a42e9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:49:20 compute-0 nova_compute[254092]: 2025-11-25 16:49:20.407 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 16:49:20 compute-0 nova_compute[254092]: 2025-11-25 16:49:20.408 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 16:49:20 compute-0 nova_compute[254092]: 2025-11-25 16:49:20.490 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:49:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:49:20 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2227010931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:49:20 compute-0 nova_compute[254092]: 2025-11-25 16:49:20.949 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:49:20 compute-0 nova_compute[254092]: 2025-11-25 16:49:20.954 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:49:20 compute-0 nova_compute[254092]: 2025-11-25 16:49:20.968 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:49:20 compute-0 nova_compute[254092]: 2025-11-25 16:49:20.989 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 16:49:20 compute-0 nova_compute[254092]: 2025-11-25 16:49:20.990 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:21 compute-0 nova_compute[254092]: 2025-11-25 16:49:21.085 254096 INFO nova.virt.libvirt.driver [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Creating config drive at /var/lib/nova/instances/6b74b880-45f6-4f10-b09f-2696629a42e9/disk.config.rescue
Nov 25 16:49:21 compute-0 nova_compute[254092]: 2025-11-25 16:49:21.091 254096 DEBUG oslo_concurrency.processutils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6b74b880-45f6-4f10-b09f-2696629a42e9/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_s7xgsjx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:49:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:49:21 compute-0 nova_compute[254092]: 2025-11-25 16:49:21.212 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089346.2106142, 9b189bbf-2581-4656-83da-12707f48dccc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:49:21 compute-0 nova_compute[254092]: 2025-11-25 16:49:21.212 254096 INFO nova.compute.manager [-] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] VM Stopped (Lifecycle Event)
Nov 25 16:49:21 compute-0 ceph-mon[74985]: pgmap v1938: 321 pgs: 321 active+clean; 574 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 2.3 MiB/s wr, 206 op/s
Nov 25 16:49:21 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2227010931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:49:21 compute-0 nova_compute[254092]: 2025-11-25 16:49:21.230 254096 DEBUG nova.compute.manager [None req-1ac10283-0f07-4e4b-a1fb-7fbea56a3f32 - - - - - -] [instance: 9b189bbf-2581-4656-83da-12707f48dccc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:49:21 compute-0 nova_compute[254092]: 2025-11-25 16:49:21.234 254096 DEBUG oslo_concurrency.processutils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6b74b880-45f6-4f10-b09f-2696629a42e9/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_s7xgsjx" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:49:21 compute-0 nova_compute[254092]: 2025-11-25 16:49:21.258 254096 DEBUG nova.storage.rbd_utils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] rbd image 6b74b880-45f6-4f10-b09f-2696629a42e9_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:49:21 compute-0 nova_compute[254092]: 2025-11-25 16:49:21.261 254096 DEBUG oslo_concurrency.processutils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6b74b880-45f6-4f10-b09f-2696629a42e9/disk.config.rescue 6b74b880-45f6-4f10-b09f-2696629a42e9_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:49:21 compute-0 nova_compute[254092]: 2025-11-25 16:49:21.298 254096 DEBUG nova.compute.manager [req-143a97af-e45b-425e-b468-cfc1bed70004 req-402048fe-31f7-42fb-a28c-5f33d6f9e47d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received event network-vif-unplugged-acb7d65c-0259-4a39-94f8-d7f64637a340 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:49:21 compute-0 nova_compute[254092]: 2025-11-25 16:49:21.299 254096 DEBUG oslo_concurrency.lockutils [req-143a97af-e45b-425e-b468-cfc1bed70004 req-402048fe-31f7-42fb-a28c-5f33d6f9e47d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:21 compute-0 nova_compute[254092]: 2025-11-25 16:49:21.299 254096 DEBUG oslo_concurrency.lockutils [req-143a97af-e45b-425e-b468-cfc1bed70004 req-402048fe-31f7-42fb-a28c-5f33d6f9e47d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:21 compute-0 nova_compute[254092]: 2025-11-25 16:49:21.299 254096 DEBUG oslo_concurrency.lockutils [req-143a97af-e45b-425e-b468-cfc1bed70004 req-402048fe-31f7-42fb-a28c-5f33d6f9e47d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:21 compute-0 nova_compute[254092]: 2025-11-25 16:49:21.299 254096 DEBUG nova.compute.manager [req-143a97af-e45b-425e-b468-cfc1bed70004 req-402048fe-31f7-42fb-a28c-5f33d6f9e47d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] No waiting events found dispatching network-vif-unplugged-acb7d65c-0259-4a39-94f8-d7f64637a340 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:49:21 compute-0 nova_compute[254092]: 2025-11-25 16:49:21.299 254096 WARNING nova.compute.manager [req-143a97af-e45b-425e-b468-cfc1bed70004 req-402048fe-31f7-42fb-a28c-5f33d6f9e47d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received unexpected event network-vif-unplugged-acb7d65c-0259-4a39-94f8-d7f64637a340 for instance with vm_state active and task_state rescuing.
Nov 25 16:49:21 compute-0 nova_compute[254092]: 2025-11-25 16:49:21.423 254096 DEBUG oslo_concurrency.processutils [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6b74b880-45f6-4f10-b09f-2696629a42e9/disk.config.rescue 6b74b880-45f6-4f10-b09f-2696629a42e9_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:49:21 compute-0 nova_compute[254092]: 2025-11-25 16:49:21.424 254096 INFO nova.virt.libvirt.driver [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Deleting local config drive /var/lib/nova/instances/6b74b880-45f6-4f10-b09f-2696629a42e9/disk.config.rescue because it was imported into RBD.
Nov 25 16:49:21 compute-0 kernel: tapacb7d65c-02: entered promiscuous mode
Nov 25 16:49:21 compute-0 NetworkManager[48891]: <info>  [1764089361.4926] manager: (tapacb7d65c-02): new Tun device (/org/freedesktop/NetworkManager/Devices/397)
Nov 25 16:49:21 compute-0 ovn_controller[153477]: 2025-11-25T16:49:21Z|00954|binding|INFO|Claiming lport acb7d65c-0259-4a39-94f8-d7f64637a340 for this chassis.
Nov 25 16:49:21 compute-0 ovn_controller[153477]: 2025-11-25T16:49:21Z|00955|binding|INFO|acb7d65c-0259-4a39-94f8-d7f64637a340: Claiming fa:16:3e:b2:d7:96 10.100.0.2
Nov 25 16:49:21 compute-0 nova_compute[254092]: 2025-11-25 16:49:21.528 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:21.532 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:d7:96 10.100.0.2'], port_security=['fa:16:3e:b2:d7:96 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': '6b74b880-45f6-4f10-b09f-2696629a42e9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e7afc96-cd20-4255-ae42-f12adadff80d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '544c4f84ca494482aea8e55248fe4c62', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'ea3857ca-f518-4e95-9458-20b7becb63fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1ffd0c38-5078-40a5-ba7b-d1f91d316b3e, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=acb7d65c-0259-4a39-94f8-d7f64637a340) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:49:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:21.533 163338 INFO neutron.agent.ovn.metadata.agent [-] Port acb7d65c-0259-4a39-94f8-d7f64637a340 in datapath 7e7afc96-cd20-4255-ae42-f12adadff80d bound to our chassis
Nov 25 16:49:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:21.534 163338 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7e7afc96-cd20-4255-ae42-f12adadff80d or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 25 16:49:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:21.535 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[eff8398c-2fe6-433a-a5cb-a388e7c1b727]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:21 compute-0 ovn_controller[153477]: 2025-11-25T16:49:21Z|00956|binding|INFO|Setting lport acb7d65c-0259-4a39-94f8-d7f64637a340 ovn-installed in OVS
Nov 25 16:49:21 compute-0 ovn_controller[153477]: 2025-11-25T16:49:21Z|00957|binding|INFO|Setting lport acb7d65c-0259-4a39-94f8-d7f64637a340 up in Southbound
Nov 25 16:49:21 compute-0 nova_compute[254092]: 2025-11-25 16:49:21.548 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:21 compute-0 systemd-machined[216343]: New machine qemu-123-instance-00000062.
Nov 25 16:49:21 compute-0 systemd-udevd[353845]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:49:21 compute-0 systemd[1]: Started Virtual Machine qemu-123-instance-00000062.
Nov 25 16:49:21 compute-0 NetworkManager[48891]: <info>  [1764089361.5725] device (tapacb7d65c-02): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:49:21 compute-0 NetworkManager[48891]: <info>  [1764089361.5736] device (tapacb7d65c-02): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:49:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1939: 321 pgs: 321 active+clean; 579 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 674 KiB/s rd, 2.2 MiB/s wr, 121 op/s
Nov 25 16:49:21 compute-0 kernel: tap792a5867-7e (unregistering): left promiscuous mode
Nov 25 16:49:21 compute-0 NetworkManager[48891]: <info>  [1764089361.9435] device (tap792a5867-7e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:49:21 compute-0 nova_compute[254092]: 2025-11-25 16:49:21.950 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:21 compute-0 ovn_controller[153477]: 2025-11-25T16:49:21Z|00958|binding|INFO|Releasing lport 792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 from this chassis (sb_readonly=0)
Nov 25 16:49:21 compute-0 ovn_controller[153477]: 2025-11-25T16:49:21Z|00959|binding|INFO|Setting lport 792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 down in Southbound
Nov 25 16:49:21 compute-0 ovn_controller[153477]: 2025-11-25T16:49:21Z|00960|binding|INFO|Removing iface tap792a5867-7e ovn-installed in OVS
Nov 25 16:49:21 compute-0 nova_compute[254092]: 2025-11-25 16:49:21.955 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:21.960 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:5c:49 10.100.0.8'], port_security=['fa:16:3e:c4:5c:49 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '73301044-3bad-4401-9e30-f009d417f662', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fbf763b31dad40d6b0d7285dc017dd89', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a96e0467-f375-4847-9156-007c15ede1a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.195'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2d4322c-9d6a-4a56-a005-6db2d2591dbd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:49:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:21.962 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 in datapath 34b8c77e-8369-4eab-a81e-0825e5fa2919 unbound from our chassis
Nov 25 16:49:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:21.964 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 34b8c77e-8369-4eab-a81e-0825e5fa2919
Nov 25 16:49:21 compute-0 nova_compute[254092]: 2025-11-25 16:49:21.970 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:21.984 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[57d0d611-d6a5-402e-8434-293fd583f386]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:21 compute-0 nova_compute[254092]: 2025-11-25 16:49:21.988 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:49:21 compute-0 nova_compute[254092]: 2025-11-25 16:49:21.989 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:49:21 compute-0 nova_compute[254092]: 2025-11-25 16:49:21.989 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 16:49:21 compute-0 nova_compute[254092]: 2025-11-25 16:49:21.992 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for 6b74b880-45f6-4f10-b09f-2696629a42e9 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 16:49:21 compute-0 nova_compute[254092]: 2025-11-25 16:49:21.992 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089361.992157, 6b74b880-45f6-4f10-b09f-2696629a42e9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:49:21 compute-0 nova_compute[254092]: 2025-11-25 16:49:21.993 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] VM Resumed (Lifecycle Event)
Nov 25 16:49:21 compute-0 nova_compute[254092]: 2025-11-25 16:49:21.996 254096 DEBUG nova.compute.manager [None req-608c3a82-eb77-433a-bf7d-8da68458eb6a 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:49:21 compute-0 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d00000059.scope: Deactivated successfully.
Nov 25 16:49:22 compute-0 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d00000059.scope: Consumed 18.793s CPU time.
Nov 25 16:49:22 compute-0 systemd-machined[216343]: Machine qemu-110-instance-00000059 terminated.
Nov 25 16:49:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:22.015 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[31b60150-2274-4a3c-a787-0992dd3ef747]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:22.018 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c99faf4d-1865-42fe-882f-b50e67f2a233]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:22.046 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[2373f5b0-1ab5-4bb2-a657-481b75b09ddb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:22 compute-0 nova_compute[254092]: 2025-11-25 16:49:22.051 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:49:22 compute-0 nova_compute[254092]: 2025-11-25 16:49:22.055 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-73301044-3bad-4401-9e30-f009d417f662" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:49:22 compute-0 nova_compute[254092]: 2025-11-25 16:49:22.055 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-73301044-3bad-4401-9e30-f009d417f662" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:49:22 compute-0 nova_compute[254092]: 2025-11-25 16:49:22.056 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 16:49:22 compute-0 nova_compute[254092]: 2025-11-25 16:49:22.057 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:49:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:22.068 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[54e297e8-e832-4662-b495-e2ae154e30ab]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap34b8c77e-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:49:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 252], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570033, 'reachable_time': 42096, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353922, 'error': None, 'target': 'ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:22 compute-0 nova_compute[254092]: 2025-11-25 16:49:22.080 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] During sync_power_state the instance has a pending task (rescuing). Skip.
Nov 25 16:49:22 compute-0 nova_compute[254092]: 2025-11-25 16:49:22.081 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089361.994401, 6b74b880-45f6-4f10-b09f-2696629a42e9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:49:22 compute-0 nova_compute[254092]: 2025-11-25 16:49:22.081 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] VM Started (Lifecycle Event)
Nov 25 16:49:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:22.088 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[392d1cc6-5b26-4f4b-93f1-ffd0edbf7598]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap34b8c77e-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570043, 'tstamp': 570043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353923, 'error': None, 'target': 'ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap34b8c77e-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570046, 'tstamp': 570046}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353923, 'error': None, 'target': 'ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:22.090 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34b8c77e-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:49:22 compute-0 nova_compute[254092]: 2025-11-25 16:49:22.091 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:22 compute-0 nova_compute[254092]: 2025-11-25 16:49:22.096 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:22.096 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap34b8c77e-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:49:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:22.097 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:49:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:22.097 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap34b8c77e-80, col_values=(('external_ids', {'iface-id': '031633f4-be6f-41a4-9c41-8e90000f1a4f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:49:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:22.097 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:49:22 compute-0 nova_compute[254092]: 2025-11-25 16:49:22.099 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:49:22 compute-0 nova_compute[254092]: 2025-11-25 16:49:22.102 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:49:22 compute-0 nova_compute[254092]: 2025-11-25 16:49:22.193 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:22 compute-0 nova_compute[254092]: 2025-11-25 16:49:22.199 254096 DEBUG nova.compute.manager [req-6bedd139-b5f7-4739-8e54-31b6cbca12d5 req-682bc090-0fc4-4ab1-8f07-28a273493313 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Received event network-vif-unplugged-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:49:22 compute-0 nova_compute[254092]: 2025-11-25 16:49:22.199 254096 DEBUG oslo_concurrency.lockutils [req-6bedd139-b5f7-4739-8e54-31b6cbca12d5 req-682bc090-0fc4-4ab1-8f07-28a273493313 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "73301044-3bad-4401-9e30-f009d417f662-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:22 compute-0 nova_compute[254092]: 2025-11-25 16:49:22.200 254096 DEBUG oslo_concurrency.lockutils [req-6bedd139-b5f7-4739-8e54-31b6cbca12d5 req-682bc090-0fc4-4ab1-8f07-28a273493313 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:22 compute-0 nova_compute[254092]: 2025-11-25 16:49:22.200 254096 DEBUG oslo_concurrency.lockutils [req-6bedd139-b5f7-4739-8e54-31b6cbca12d5 req-682bc090-0fc4-4ab1-8f07-28a273493313 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:22 compute-0 nova_compute[254092]: 2025-11-25 16:49:22.200 254096 DEBUG nova.compute.manager [req-6bedd139-b5f7-4739-8e54-31b6cbca12d5 req-682bc090-0fc4-4ab1-8f07-28a273493313 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] No waiting events found dispatching network-vif-unplugged-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:49:22 compute-0 nova_compute[254092]: 2025-11-25 16:49:22.200 254096 WARNING nova.compute.manager [req-6bedd139-b5f7-4739-8e54-31b6cbca12d5 req-682bc090-0fc4-4ab1-8f07-28a273493313 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Received unexpected event network-vif-unplugged-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 for instance with vm_state active and task_state shelving.
Nov 25 16:49:22 compute-0 nova_compute[254092]: 2025-11-25 16:49:22.701 254096 INFO nova.virt.libvirt.driver [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Instance shutdown successfully after 3 seconds.
Nov 25 16:49:22 compute-0 nova_compute[254092]: 2025-11-25 16:49:22.707 254096 INFO nova.virt.libvirt.driver [-] [instance: 73301044-3bad-4401-9e30-f009d417f662] Instance destroyed successfully.
Nov 25 16:49:22 compute-0 nova_compute[254092]: 2025-11-25 16:49:22.707 254096 DEBUG nova.objects.instance [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lazy-loading 'numa_topology' on Instance uuid 73301044-3bad-4401-9e30-f009d417f662 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:49:23 compute-0 nova_compute[254092]: 2025-11-25 16:49:23.009 254096 INFO nova.virt.libvirt.driver [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Beginning cold snapshot process
Nov 25 16:49:23 compute-0 nova_compute[254092]: 2025-11-25 16:49:23.162 254096 DEBUG nova.virt.libvirt.imagebackend [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] No parent info for 8b512c8e-2281-41de-a668-eb983e174ba0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 25 16:49:23 compute-0 ceph-mon[74985]: pgmap v1939: 321 pgs: 321 active+clean; 579 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 674 KiB/s rd, 2.2 MiB/s wr, 121 op/s
Nov 25 16:49:23 compute-0 nova_compute[254092]: 2025-11-25 16:49:23.507 254096 DEBUG nova.compute.manager [req-17c3e016-2609-496b-b0f4-ba4bf2eb60e2 req-7dba0895-bea4-4637-9bf8-96be311e949d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received event network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:49:23 compute-0 nova_compute[254092]: 2025-11-25 16:49:23.507 254096 DEBUG oslo_concurrency.lockutils [req-17c3e016-2609-496b-b0f4-ba4bf2eb60e2 req-7dba0895-bea4-4637-9bf8-96be311e949d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:23 compute-0 nova_compute[254092]: 2025-11-25 16:49:23.507 254096 DEBUG oslo_concurrency.lockutils [req-17c3e016-2609-496b-b0f4-ba4bf2eb60e2 req-7dba0895-bea4-4637-9bf8-96be311e949d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:23 compute-0 nova_compute[254092]: 2025-11-25 16:49:23.508 254096 DEBUG oslo_concurrency.lockutils [req-17c3e016-2609-496b-b0f4-ba4bf2eb60e2 req-7dba0895-bea4-4637-9bf8-96be311e949d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:23 compute-0 nova_compute[254092]: 2025-11-25 16:49:23.508 254096 DEBUG nova.compute.manager [req-17c3e016-2609-496b-b0f4-ba4bf2eb60e2 req-7dba0895-bea4-4637-9bf8-96be311e949d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] No waiting events found dispatching network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:49:23 compute-0 nova_compute[254092]: 2025-11-25 16:49:23.508 254096 WARNING nova.compute.manager [req-17c3e016-2609-496b-b0f4-ba4bf2eb60e2 req-7dba0895-bea4-4637-9bf8-96be311e949d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received unexpected event network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 for instance with vm_state rescued and task_state None.
Nov 25 16:49:23 compute-0 nova_compute[254092]: 2025-11-25 16:49:23.508 254096 DEBUG nova.compute.manager [req-17c3e016-2609-496b-b0f4-ba4bf2eb60e2 req-7dba0895-bea4-4637-9bf8-96be311e949d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received event network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:49:23 compute-0 nova_compute[254092]: 2025-11-25 16:49:23.508 254096 DEBUG oslo_concurrency.lockutils [req-17c3e016-2609-496b-b0f4-ba4bf2eb60e2 req-7dba0895-bea4-4637-9bf8-96be311e949d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:23 compute-0 nova_compute[254092]: 2025-11-25 16:49:23.509 254096 DEBUG oslo_concurrency.lockutils [req-17c3e016-2609-496b-b0f4-ba4bf2eb60e2 req-7dba0895-bea4-4637-9bf8-96be311e949d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:23 compute-0 nova_compute[254092]: 2025-11-25 16:49:23.509 254096 DEBUG oslo_concurrency.lockutils [req-17c3e016-2609-496b-b0f4-ba4bf2eb60e2 req-7dba0895-bea4-4637-9bf8-96be311e949d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:23 compute-0 nova_compute[254092]: 2025-11-25 16:49:23.509 254096 DEBUG nova.compute.manager [req-17c3e016-2609-496b-b0f4-ba4bf2eb60e2 req-7dba0895-bea4-4637-9bf8-96be311e949d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] No waiting events found dispatching network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:49:23 compute-0 nova_compute[254092]: 2025-11-25 16:49:23.509 254096 WARNING nova.compute.manager [req-17c3e016-2609-496b-b0f4-ba4bf2eb60e2 req-7dba0895-bea4-4637-9bf8-96be311e949d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received unexpected event network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 for instance with vm_state rescued and task_state None.
Nov 25 16:49:23 compute-0 nova_compute[254092]: 2025-11-25 16:49:23.509 254096 DEBUG nova.compute.manager [req-17c3e016-2609-496b-b0f4-ba4bf2eb60e2 req-7dba0895-bea4-4637-9bf8-96be311e949d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received event network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:49:23 compute-0 nova_compute[254092]: 2025-11-25 16:49:23.510 254096 DEBUG oslo_concurrency.lockutils [req-17c3e016-2609-496b-b0f4-ba4bf2eb60e2 req-7dba0895-bea4-4637-9bf8-96be311e949d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:23 compute-0 nova_compute[254092]: 2025-11-25 16:49:23.510 254096 DEBUG oslo_concurrency.lockutils [req-17c3e016-2609-496b-b0f4-ba4bf2eb60e2 req-7dba0895-bea4-4637-9bf8-96be311e949d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:23 compute-0 nova_compute[254092]: 2025-11-25 16:49:23.510 254096 DEBUG oslo_concurrency.lockutils [req-17c3e016-2609-496b-b0f4-ba4bf2eb60e2 req-7dba0895-bea4-4637-9bf8-96be311e949d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:23 compute-0 nova_compute[254092]: 2025-11-25 16:49:23.510 254096 DEBUG nova.compute.manager [req-17c3e016-2609-496b-b0f4-ba4bf2eb60e2 req-7dba0895-bea4-4637-9bf8-96be311e949d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] No waiting events found dispatching network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:49:23 compute-0 nova_compute[254092]: 2025-11-25 16:49:23.510 254096 WARNING nova.compute.manager [req-17c3e016-2609-496b-b0f4-ba4bf2eb60e2 req-7dba0895-bea4-4637-9bf8-96be311e949d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received unexpected event network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 for instance with vm_state rescued and task_state None.
Nov 25 16:49:23 compute-0 nova_compute[254092]: 2025-11-25 16:49:23.592 254096 DEBUG nova.storage.rbd_utils [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] creating snapshot(6afd30d4955a4854892e4c2ef0b91fab) on rbd image(73301044-3bad-4401-9e30-f009d417f662_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 16:49:23 compute-0 nova_compute[254092]: 2025-11-25 16:49:23.730 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1940: 321 pgs: 321 active+clean; 579 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 674 KiB/s rd, 2.2 MiB/s wr, 121 op/s
Nov 25 16:49:24 compute-0 nova_compute[254092]: 2025-11-25 16:49:24.217 254096 INFO nova.compute.manager [None req-78b2daa5-1607-4c08-abae-a6e07b00db26 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Unrescuing
Nov 25 16:49:24 compute-0 nova_compute[254092]: 2025-11-25 16:49:24.217 254096 DEBUG oslo_concurrency.lockutils [None req-78b2daa5-1607-4c08-abae-a6e07b00db26 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquiring lock "refresh_cache-6b74b880-45f6-4f10-b09f-2696629a42e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:49:24 compute-0 nova_compute[254092]: 2025-11-25 16:49:24.218 254096 DEBUG oslo_concurrency.lockutils [None req-78b2daa5-1607-4c08-abae-a6e07b00db26 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquired lock "refresh_cache-6b74b880-45f6-4f10-b09f-2696629a42e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:49:24 compute-0 nova_compute[254092]: 2025-11-25 16:49:24.218 254096 DEBUG nova.network.neutron [None req-78b2daa5-1607-4c08-abae-a6e07b00db26 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:49:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e248 do_prune osdmap full prune enabled
Nov 25 16:49:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e249 e249: 3 total, 3 up, 3 in
Nov 25 16:49:24 compute-0 ceph-mon[74985]: pgmap v1940: 321 pgs: 321 active+clean; 579 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 674 KiB/s rd, 2.2 MiB/s wr, 121 op/s
Nov 25 16:49:24 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e249: 3 total, 3 up, 3 in
Nov 25 16:49:24 compute-0 nova_compute[254092]: 2025-11-25 16:49:24.291 254096 DEBUG nova.compute.manager [req-45c628fa-1cde-4afb-92db-5135e48b048b req-5f887457-f6a2-4dae-a8fb-2652f480fec5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Received event network-vif-plugged-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:49:24 compute-0 nova_compute[254092]: 2025-11-25 16:49:24.291 254096 DEBUG oslo_concurrency.lockutils [req-45c628fa-1cde-4afb-92db-5135e48b048b req-5f887457-f6a2-4dae-a8fb-2652f480fec5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "73301044-3bad-4401-9e30-f009d417f662-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:24 compute-0 nova_compute[254092]: 2025-11-25 16:49:24.292 254096 DEBUG oslo_concurrency.lockutils [req-45c628fa-1cde-4afb-92db-5135e48b048b req-5f887457-f6a2-4dae-a8fb-2652f480fec5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:24 compute-0 nova_compute[254092]: 2025-11-25 16:49:24.292 254096 DEBUG oslo_concurrency.lockutils [req-45c628fa-1cde-4afb-92db-5135e48b048b req-5f887457-f6a2-4dae-a8fb-2652f480fec5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:24 compute-0 nova_compute[254092]: 2025-11-25 16:49:24.292 254096 DEBUG nova.compute.manager [req-45c628fa-1cde-4afb-92db-5135e48b048b req-5f887457-f6a2-4dae-a8fb-2652f480fec5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] No waiting events found dispatching network-vif-plugged-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:49:24 compute-0 nova_compute[254092]: 2025-11-25 16:49:24.292 254096 WARNING nova.compute.manager [req-45c628fa-1cde-4afb-92db-5135e48b048b req-5f887457-f6a2-4dae-a8fb-2652f480fec5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Received unexpected event network-vif-plugged-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 for instance with vm_state active and task_state shelving_image_uploading.
Nov 25 16:49:24 compute-0 nova_compute[254092]: 2025-11-25 16:49:24.313 254096 DEBUG nova.storage.rbd_utils [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] cloning vms/73301044-3bad-4401-9e30-f009d417f662_disk@6afd30d4955a4854892e4c2ef0b91fab to images/3855d8b5-0ce2-4690-ac71-e43d7c3e5764 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 16:49:24 compute-0 nova_compute[254092]: 2025-11-25 16:49:24.428 254096 DEBUG nova.storage.rbd_utils [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] flattening images/3855d8b5-0ce2-4690-ac71-e43d7c3e5764 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 16:49:24 compute-0 nova_compute[254092]: 2025-11-25 16:49:24.710 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] Updating instance_info_cache with network_info: [{"id": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "address": "fa:16:3e:c4:5c:49", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792a5867-7e", "ovs_interfaceid": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:49:24 compute-0 nova_compute[254092]: 2025-11-25 16:49:24.752 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-73301044-3bad-4401-9e30-f009d417f662" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:49:24 compute-0 nova_compute[254092]: 2025-11-25 16:49:24.754 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 16:49:24 compute-0 nova_compute[254092]: 2025-11-25 16:49:24.759 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:49:24 compute-0 nova_compute[254092]: 2025-11-25 16:49:24.832 254096 DEBUG nova.storage.rbd_utils [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] removing snapshot(6afd30d4955a4854892e4c2ef0b91fab) on rbd image(73301044-3bad-4401-9e30-f009d417f662_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 16:49:25 compute-0 nova_compute[254092]: 2025-11-25 16:49:25.152 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e249 do_prune osdmap full prune enabled
Nov 25 16:49:25 compute-0 ceph-mon[74985]: osdmap e249: 3 total, 3 up, 3 in
Nov 25 16:49:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e250 e250: 3 total, 3 up, 3 in
Nov 25 16:49:25 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e250: 3 total, 3 up, 3 in
Nov 25 16:49:25 compute-0 nova_compute[254092]: 2025-11-25 16:49:25.322 254096 DEBUG nova.storage.rbd_utils [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] creating snapshot(snap) on rbd image(3855d8b5-0ce2-4690-ac71-e43d7c3e5764) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 16:49:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1943: 321 pgs: 321 active+clean; 640 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 7.1 MiB/s wr, 236 op/s
Nov 25 16:49:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:49:26 compute-0 nova_compute[254092]: 2025-11-25 16:49:26.260 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:49:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e250 do_prune osdmap full prune enabled
Nov 25 16:49:26 compute-0 ceph-mon[74985]: osdmap e250: 3 total, 3 up, 3 in
Nov 25 16:49:26 compute-0 ceph-mon[74985]: pgmap v1943: 321 pgs: 321 active+clean; 640 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 7.1 MiB/s wr, 236 op/s
Nov 25 16:49:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e251 e251: 3 total, 3 up, 3 in
Nov 25 16:49:26 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e251: 3 total, 3 up, 3 in
Nov 25 16:49:26 compute-0 nova_compute[254092]: 2025-11-25 16:49:26.372 254096 INFO nova.compute.manager [None req-bf000a48-4c8d-4ec7-9285-6d1f000d0061 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Get console output
Nov 25 16:49:26 compute-0 nova_compute[254092]: 2025-11-25 16:49:26.379 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 16:49:26 compute-0 nova_compute[254092]: 2025-11-25 16:49:26.387 254096 DEBUG nova.network.neutron [None req-78b2daa5-1607-4c08-abae-a6e07b00db26 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Updating instance_info_cache with network_info: [{"id": "acb7d65c-0259-4a39-94f8-d7f64637a340", "address": "fa:16:3e:b2:d7:96", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacb7d65c-02", "ovs_interfaceid": "acb7d65c-0259-4a39-94f8-d7f64637a340", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:49:26 compute-0 nova_compute[254092]: 2025-11-25 16:49:26.401 254096 DEBUG oslo_concurrency.lockutils [None req-78b2daa5-1607-4c08-abae-a6e07b00db26 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Releasing lock "refresh_cache-6b74b880-45f6-4f10-b09f-2696629a42e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:49:26 compute-0 nova_compute[254092]: 2025-11-25 16:49:26.402 254096 DEBUG nova.objects.instance [None req-78b2daa5-1607-4c08-abae-a6e07b00db26 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lazy-loading 'flavor' on Instance uuid 6b74b880-45f6-4f10-b09f-2696629a42e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:49:26 compute-0 kernel: tapacb7d65c-02 (unregistering): left promiscuous mode
Nov 25 16:49:26 compute-0 NetworkManager[48891]: <info>  [1764089366.4768] device (tapacb7d65c-02): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:49:26 compute-0 ovn_controller[153477]: 2025-11-25T16:49:26Z|00961|binding|INFO|Releasing lport acb7d65c-0259-4a39-94f8-d7f64637a340 from this chassis (sb_readonly=0)
Nov 25 16:49:26 compute-0 ovn_controller[153477]: 2025-11-25T16:49:26Z|00962|binding|INFO|Setting lport acb7d65c-0259-4a39-94f8-d7f64637a340 down in Southbound
Nov 25 16:49:26 compute-0 ovn_controller[153477]: 2025-11-25T16:49:26Z|00963|binding|INFO|Removing iface tapacb7d65c-02 ovn-installed in OVS
Nov 25 16:49:26 compute-0 nova_compute[254092]: 2025-11-25 16:49:26.495 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:26 compute-0 systemd[1]: machine-qemu\x2d123\x2dinstance\x2d00000062.scope: Deactivated successfully.
Nov 25 16:49:26 compute-0 nova_compute[254092]: 2025-11-25 16:49:26.521 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:26 compute-0 systemd[1]: machine-qemu\x2d123\x2dinstance\x2d00000062.scope: Consumed 4.839s CPU time.
Nov 25 16:49:26 compute-0 systemd-machined[216343]: Machine qemu-123-instance-00000062 terminated.
Nov 25 16:49:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:26.566 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:d7:96 10.100.0.2'], port_security=['fa:16:3e:b2:d7:96 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': '6b74b880-45f6-4f10-b09f-2696629a42e9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e7afc96-cd20-4255-ae42-f12adadff80d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '544c4f84ca494482aea8e55248fe4c62', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'ea3857ca-f518-4e95-9458-20b7becb63fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1ffd0c38-5078-40a5-ba7b-d1f91d316b3e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=acb7d65c-0259-4a39-94f8-d7f64637a340) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:49:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:26.569 163338 INFO neutron.agent.ovn.metadata.agent [-] Port acb7d65c-0259-4a39-94f8-d7f64637a340 in datapath 7e7afc96-cd20-4255-ae42-f12adadff80d unbound from our chassis
Nov 25 16:49:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:26.570 163338 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7e7afc96-cd20-4255-ae42-f12adadff80d or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 25 16:49:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:26.572 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1469518d-3ee8-44e0-a4ee-6e53be32dd82]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:26 compute-0 nova_compute[254092]: 2025-11-25 16:49:26.666 254096 INFO nova.virt.libvirt.driver [-] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Instance destroyed successfully.
Nov 25 16:49:26 compute-0 nova_compute[254092]: 2025-11-25 16:49:26.666 254096 DEBUG nova.objects.instance [None req-78b2daa5-1607-4c08-abae-a6e07b00db26 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lazy-loading 'numa_topology' on Instance uuid 6b74b880-45f6-4f10-b09f-2696629a42e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:49:26 compute-0 kernel: tapacb7d65c-02: entered promiscuous mode
Nov 25 16:49:26 compute-0 systemd-udevd[354081]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:49:26 compute-0 NetworkManager[48891]: <info>  [1764089366.7790] manager: (tapacb7d65c-02): new Tun device (/org/freedesktop/NetworkManager/Devices/398)
Nov 25 16:49:26 compute-0 ovn_controller[153477]: 2025-11-25T16:49:26Z|00964|binding|INFO|Claiming lport acb7d65c-0259-4a39-94f8-d7f64637a340 for this chassis.
Nov 25 16:49:26 compute-0 ovn_controller[153477]: 2025-11-25T16:49:26Z|00965|binding|INFO|acb7d65c-0259-4a39-94f8-d7f64637a340: Claiming fa:16:3e:b2:d7:96 10.100.0.2
Nov 25 16:49:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:26.790 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:d7:96 10.100.0.2'], port_security=['fa:16:3e:b2:d7:96 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': '6b74b880-45f6-4f10-b09f-2696629a42e9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e7afc96-cd20-4255-ae42-f12adadff80d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '544c4f84ca494482aea8e55248fe4c62', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'ea3857ca-f518-4e95-9458-20b7becb63fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1ffd0c38-5078-40a5-ba7b-d1f91d316b3e, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=acb7d65c-0259-4a39-94f8-d7f64637a340) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:49:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:26.791 163338 INFO neutron.agent.ovn.metadata.agent [-] Port acb7d65c-0259-4a39-94f8-d7f64637a340 in datapath 7e7afc96-cd20-4255-ae42-f12adadff80d bound to our chassis
Nov 25 16:49:26 compute-0 NetworkManager[48891]: <info>  [1764089366.7940] device (tapacb7d65c-02): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:49:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:26.794 163338 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7e7afc96-cd20-4255-ae42-f12adadff80d or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 25 16:49:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:26.795 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[819b359c-941b-4f5b-961c-30950c5b768b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:26 compute-0 NetworkManager[48891]: <info>  [1764089366.7963] device (tapacb7d65c-02): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:49:26 compute-0 nova_compute[254092]: 2025-11-25 16:49:26.810 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:26 compute-0 ovn_controller[153477]: 2025-11-25T16:49:26Z|00966|binding|INFO|Setting lport acb7d65c-0259-4a39-94f8-d7f64637a340 ovn-installed in OVS
Nov 25 16:49:26 compute-0 ovn_controller[153477]: 2025-11-25T16:49:26Z|00967|binding|INFO|Setting lport acb7d65c-0259-4a39-94f8-d7f64637a340 up in Southbound
Nov 25 16:49:26 compute-0 nova_compute[254092]: 2025-11-25 16:49:26.817 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:26 compute-0 systemd-machined[216343]: New machine qemu-124-instance-00000062.
Nov 25 16:49:26 compute-0 nova_compute[254092]: 2025-11-25 16:49:26.826 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:26 compute-0 systemd[1]: Started Virtual Machine qemu-124-instance-00000062.
Nov 25 16:49:27 compute-0 nova_compute[254092]: 2025-11-25 16:49:27.227 254096 DEBUG nova.compute.manager [req-be35f8c9-6413-40e0-84e2-2e24f1f536fd req-080023c2-03e6-4c40-8ebc-3d4be9c651e2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received event network-vif-unplugged-acb7d65c-0259-4a39-94f8-d7f64637a340 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:49:27 compute-0 nova_compute[254092]: 2025-11-25 16:49:27.228 254096 DEBUG oslo_concurrency.lockutils [req-be35f8c9-6413-40e0-84e2-2e24f1f536fd req-080023c2-03e6-4c40-8ebc-3d4be9c651e2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:27 compute-0 nova_compute[254092]: 2025-11-25 16:49:27.228 254096 DEBUG oslo_concurrency.lockutils [req-be35f8c9-6413-40e0-84e2-2e24f1f536fd req-080023c2-03e6-4c40-8ebc-3d4be9c651e2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:27 compute-0 nova_compute[254092]: 2025-11-25 16:49:27.229 254096 DEBUG oslo_concurrency.lockutils [req-be35f8c9-6413-40e0-84e2-2e24f1f536fd req-080023c2-03e6-4c40-8ebc-3d4be9c651e2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:27 compute-0 nova_compute[254092]: 2025-11-25 16:49:27.229 254096 DEBUG nova.compute.manager [req-be35f8c9-6413-40e0-84e2-2e24f1f536fd req-080023c2-03e6-4c40-8ebc-3d4be9c651e2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] No waiting events found dispatching network-vif-unplugged-acb7d65c-0259-4a39-94f8-d7f64637a340 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:49:27 compute-0 nova_compute[254092]: 2025-11-25 16:49:27.229 254096 WARNING nova.compute.manager [req-be35f8c9-6413-40e0-84e2-2e24f1f536fd req-080023c2-03e6-4c40-8ebc-3d4be9c651e2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received unexpected event network-vif-unplugged-acb7d65c-0259-4a39-94f8-d7f64637a340 for instance with vm_state rescued and task_state unrescuing.
Nov 25 16:49:27 compute-0 ceph-mon[74985]: osdmap e251: 3 total, 3 up, 3 in
Nov 25 16:49:27 compute-0 nova_compute[254092]: 2025-11-25 16:49:27.369 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for 6b74b880-45f6-4f10-b09f-2696629a42e9 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 16:49:27 compute-0 nova_compute[254092]: 2025-11-25 16:49:27.370 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089367.3682847, 6b74b880-45f6-4f10-b09f-2696629a42e9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:49:27 compute-0 nova_compute[254092]: 2025-11-25 16:49:27.370 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] VM Resumed (Lifecycle Event)
Nov 25 16:49:27 compute-0 nova_compute[254092]: 2025-11-25 16:49:27.393 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:49:27 compute-0 nova_compute[254092]: 2025-11-25 16:49:27.398 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:49:27 compute-0 nova_compute[254092]: 2025-11-25 16:49:27.422 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] During sync_power_state the instance has a pending task (unrescuing). Skip.
Nov 25 16:49:27 compute-0 nova_compute[254092]: 2025-11-25 16:49:27.422 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089367.3733737, 6b74b880-45f6-4f10-b09f-2696629a42e9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:49:27 compute-0 nova_compute[254092]: 2025-11-25 16:49:27.422 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] VM Started (Lifecycle Event)
Nov 25 16:49:27 compute-0 nova_compute[254092]: 2025-11-25 16:49:27.443 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:49:27 compute-0 nova_compute[254092]: 2025-11-25 16:49:27.453 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:49:27 compute-0 nova_compute[254092]: 2025-11-25 16:49:27.477 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] During sync_power_state the instance has a pending task (unrescuing). Skip.
Nov 25 16:49:27 compute-0 nova_compute[254092]: 2025-11-25 16:49:27.740 254096 INFO nova.virt.libvirt.driver [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Snapshot image upload complete
Nov 25 16:49:27 compute-0 nova_compute[254092]: 2025-11-25 16:49:27.741 254096 DEBUG nova.compute.manager [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:49:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1945: 321 pgs: 321 active+clean; 660 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 7.8 MiB/s wr, 294 op/s
Nov 25 16:49:27 compute-0 nova_compute[254092]: 2025-11-25 16:49:27.816 254096 DEBUG nova.compute.manager [None req-78b2daa5-1607-4c08-abae-a6e07b00db26 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:49:27 compute-0 nova_compute[254092]: 2025-11-25 16:49:27.938 254096 INFO nova.compute.manager [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Shelve offloading
Nov 25 16:49:27 compute-0 nova_compute[254092]: 2025-11-25 16:49:27.946 254096 INFO nova.virt.libvirt.driver [-] [instance: 73301044-3bad-4401-9e30-f009d417f662] Instance destroyed successfully.
Nov 25 16:49:27 compute-0 nova_compute[254092]: 2025-11-25 16:49:27.946 254096 DEBUG nova.compute.manager [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:49:27 compute-0 nova_compute[254092]: 2025-11-25 16:49:27.949 254096 DEBUG oslo_concurrency.lockutils [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "refresh_cache-73301044-3bad-4401-9e30-f009d417f662" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:49:27 compute-0 nova_compute[254092]: 2025-11-25 16:49:27.949 254096 DEBUG oslo_concurrency.lockutils [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquired lock "refresh_cache-73301044-3bad-4401-9e30-f009d417f662" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:49:27 compute-0 nova_compute[254092]: 2025-11-25 16:49:27.949 254096 DEBUG nova.network.neutron [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:49:28 compute-0 ceph-mon[74985]: pgmap v1945: 321 pgs: 321 active+clean; 660 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 7.8 MiB/s wr, 294 op/s
Nov 25 16:49:28 compute-0 nova_compute[254092]: 2025-11-25 16:49:28.361 254096 DEBUG oslo_concurrency.lockutils [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "2e848add-8417-4307-8b01-f0d1c1a76cea" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:28 compute-0 nova_compute[254092]: 2025-11-25 16:49:28.362 254096 DEBUG oslo_concurrency.lockutils [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:28 compute-0 nova_compute[254092]: 2025-11-25 16:49:28.362 254096 DEBUG oslo_concurrency.lockutils [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:28 compute-0 nova_compute[254092]: 2025-11-25 16:49:28.362 254096 DEBUG oslo_concurrency.lockutils [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:28 compute-0 nova_compute[254092]: 2025-11-25 16:49:28.363 254096 DEBUG oslo_concurrency.lockutils [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:28 compute-0 nova_compute[254092]: 2025-11-25 16:49:28.364 254096 INFO nova.compute.manager [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Terminating instance
Nov 25 16:49:28 compute-0 nova_compute[254092]: 2025-11-25 16:49:28.365 254096 DEBUG nova.compute.manager [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:49:28 compute-0 kernel: tap5a3f34de-d3 (unregistering): left promiscuous mode
Nov 25 16:49:28 compute-0 NetworkManager[48891]: <info>  [1764089368.4163] device (tap5a3f34de-d3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:49:28 compute-0 ovn_controller[153477]: 2025-11-25T16:49:28Z|00968|binding|INFO|Releasing lport 5a3f34de-d3de-439b-ac8f-baabc77892b4 from this chassis (sb_readonly=0)
Nov 25 16:49:28 compute-0 ovn_controller[153477]: 2025-11-25T16:49:28Z|00969|binding|INFO|Setting lport 5a3f34de-d3de-439b-ac8f-baabc77892b4 down in Southbound
Nov 25 16:49:28 compute-0 nova_compute[254092]: 2025-11-25 16:49:28.428 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:28 compute-0 ovn_controller[153477]: 2025-11-25T16:49:28Z|00970|binding|INFO|Removing iface tap5a3f34de-d3 ovn-installed in OVS
Nov 25 16:49:28 compute-0 nova_compute[254092]: 2025-11-25 16:49:28.431 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:28.435 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:43:a6 10.100.0.9'], port_security=['fa:16:3e:9d:43:a6 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2e848add-8417-4307-8b01-f0d1c1a76cea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa283c2c-b597-4970-842d-f5f2b621b5f0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'aca38536-94ed-4de4-8f0e-6e316a6d451c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=30962290-f1d7-4a88-8e7f-94881d981a74, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=5a3f34de-d3de-439b-ac8f-baabc77892b4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:49:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:28.437 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 5a3f34de-d3de-439b-ac8f-baabc77892b4 in datapath aa283c2c-b597-4970-842d-f5f2b621b5f0 unbound from our chassis
Nov 25 16:49:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:28.448 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aa283c2c-b597-4970-842d-f5f2b621b5f0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:49:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:28.450 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3f2ea185-d7ac-4355-959c-ce88493ba9ac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:28.451 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0 namespace which is not needed anymore
Nov 25 16:49:28 compute-0 nova_compute[254092]: 2025-11-25 16:49:28.453 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:28 compute-0 systemd[1]: machine-qemu\x2d122\x2dinstance\x2d00000060.scope: Deactivated successfully.
Nov 25 16:49:28 compute-0 systemd[1]: machine-qemu\x2d122\x2dinstance\x2d00000060.scope: Consumed 12.921s CPU time.
Nov 25 16:49:28 compute-0 systemd-machined[216343]: Machine qemu-122-instance-00000060 terminated.
Nov 25 16:49:28 compute-0 nova_compute[254092]: 2025-11-25 16:49:28.591 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:28 compute-0 nova_compute[254092]: 2025-11-25 16:49:28.601 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:28 compute-0 nova_compute[254092]: 2025-11-25 16:49:28.603 254096 INFO nova.virt.libvirt.driver [-] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Instance destroyed successfully.
Nov 25 16:49:28 compute-0 nova_compute[254092]: 2025-11-25 16:49:28.603 254096 DEBUG nova.objects.instance [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'resources' on Instance uuid 2e848add-8417-4307-8b01-f0d1c1a76cea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:49:28 compute-0 neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0[353339]: [NOTICE]   (353345) : haproxy version is 2.8.14-c23fe91
Nov 25 16:49:28 compute-0 neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0[353339]: [NOTICE]   (353345) : path to executable is /usr/sbin/haproxy
Nov 25 16:49:28 compute-0 neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0[353339]: [WARNING]  (353345) : Exiting Master process...
Nov 25 16:49:28 compute-0 neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0[353339]: [ALERT]    (353345) : Current worker (353347) exited with code 143 (Terminated)
Nov 25 16:49:28 compute-0 neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0[353339]: [WARNING]  (353345) : All workers exited. Exiting... (0)
Nov 25 16:49:28 compute-0 systemd[1]: libpod-ef5beb7152db5e2353d62b2763beb955386b61f207badd868dba0e3739a238e9.scope: Deactivated successfully.
Nov 25 16:49:28 compute-0 podman[354208]: 2025-11-25 16:49:28.631729619 +0000 UTC m=+0.073791916 container died ef5beb7152db5e2353d62b2763beb955386b61f207badd868dba0e3739a238e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 16:49:28 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ef5beb7152db5e2353d62b2763beb955386b61f207badd868dba0e3739a238e9-userdata-shm.mount: Deactivated successfully.
Nov 25 16:49:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0b2008feb8517d6fb5f144bab4ee20658f8fd3f00332a77040521f977c12c63-merged.mount: Deactivated successfully.
Nov 25 16:49:28 compute-0 podman[354208]: 2025-11-25 16:49:28.684340179 +0000 UTC m=+0.126402446 container cleanup ef5beb7152db5e2353d62b2763beb955386b61f207badd868dba0e3739a238e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:49:28 compute-0 systemd[1]: libpod-conmon-ef5beb7152db5e2353d62b2763beb955386b61f207badd868dba0e3739a238e9.scope: Deactivated successfully.
Nov 25 16:49:28 compute-0 nova_compute[254092]: 2025-11-25 16:49:28.730 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:28 compute-0 podman[354248]: 2025-11-25 16:49:28.760780806 +0000 UTC m=+0.049809085 container remove ef5beb7152db5e2353d62b2763beb955386b61f207badd868dba0e3739a238e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 16:49:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:28.768 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ae80180c-0adc-49e2-8f83-5b2936a32961]: (4, ('Tue Nov 25 04:49:28 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0 (ef5beb7152db5e2353d62b2763beb955386b61f207badd868dba0e3739a238e9)\nef5beb7152db5e2353d62b2763beb955386b61f207badd868dba0e3739a238e9\nTue Nov 25 04:49:28 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0 (ef5beb7152db5e2353d62b2763beb955386b61f207badd868dba0e3739a238e9)\nef5beb7152db5e2353d62b2763beb955386b61f207badd868dba0e3739a238e9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:28 compute-0 nova_compute[254092]: 2025-11-25 16:49:28.768 254096 DEBUG nova.virt.libvirt.vif [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:48:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-721299492',display_name='tempest-TestNetworkAdvancedServerOps-server-721299492',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-721299492',id=96,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIuS2h4G62fbKa1D8fQiWbH65PFkkRVLBed4wrkEeUlM++S4qN/mZDJoxB0We0lR2SolGZ26Txk6Ir9O+1WqdMaVC9PS7NmiU/+hEPFN6YieX+/K6w93NwRm1fHYEK0fbg==',key_name='tempest-TestNetworkAdvancedServerOps-1463569288',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:48:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f92d76b523b84ec9b645382bdd3192c9',ramdisk_id='',reservation_id='r-oidm56b0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1110744031',owner_user_name='tempest-TestNetworkAdvancedServerOps-1110744031-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:49:07Z,user_data=None,user_id='3b30410358c448a693a9dc40ee6aafb0',uuid=2e848add-8417-4307-8b01-f0d1c1a76cea,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "address": "fa:16:3e:9d:43:a6", "network": {"id": "aa283c2c-b597-4970-842d-f5f2b621b5f0", "bridge": "br-int", "label": "tempest-network-smoke--488175601", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a3f34de-d3", "ovs_interfaceid": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:49:28 compute-0 nova_compute[254092]: 2025-11-25 16:49:28.769 254096 DEBUG nova.network.os_vif_util [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converting VIF {"id": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "address": "fa:16:3e:9d:43:a6", "network": {"id": "aa283c2c-b597-4970-842d-f5f2b621b5f0", "bridge": "br-int", "label": "tempest-network-smoke--488175601", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a3f34de-d3", "ovs_interfaceid": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:49:28 compute-0 nova_compute[254092]: 2025-11-25 16:49:28.769 254096 DEBUG nova.network.os_vif_util [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9d:43:a6,bridge_name='br-int',has_traffic_filtering=True,id=5a3f34de-d3de-439b-ac8f-baabc77892b4,network=Network(aa283c2c-b597-4970-842d-f5f2b621b5f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a3f34de-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:49:28 compute-0 nova_compute[254092]: 2025-11-25 16:49:28.770 254096 DEBUG os_vif [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:43:a6,bridge_name='br-int',has_traffic_filtering=True,id=5a3f34de-d3de-439b-ac8f-baabc77892b4,network=Network(aa283c2c-b597-4970-842d-f5f2b621b5f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a3f34de-d3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:49:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:28.770 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[92171fbe-66f3-4b5a-acda-df25dece9b72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:28.771 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa283c2c-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:49:28 compute-0 nova_compute[254092]: 2025-11-25 16:49:28.772 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:28 compute-0 nova_compute[254092]: 2025-11-25 16:49:28.772 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5a3f34de-d3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:49:28 compute-0 kernel: tapaa283c2c-b0: left promiscuous mode
Nov 25 16:49:28 compute-0 nova_compute[254092]: 2025-11-25 16:49:28.778 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:49:28 compute-0 nova_compute[254092]: 2025-11-25 16:49:28.797 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:28.800 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[aa515e8c-c7e4-4cb6-9ac7-547bad3146b7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:28 compute-0 nova_compute[254092]: 2025-11-25 16:49:28.802 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:28 compute-0 nova_compute[254092]: 2025-11-25 16:49:28.804 254096 INFO os_vif [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:43:a6,bridge_name='br-int',has_traffic_filtering=True,id=5a3f34de-d3de-439b-ac8f-baabc77892b4,network=Network(aa283c2c-b597-4970-842d-f5f2b621b5f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a3f34de-d3')
Nov 25 16:49:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:28.818 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[233e6c89-0b3c-4d1c-aa4d-59962aacb5e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:28.820 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[89795bc5-89bf-45cb-b02e-e315432f154f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:28.835 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5a04c50a-19bb-4651-bd85-2a631b27ffc1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 580428, 'reachable_time': 20487, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354283, 'error': None, 'target': 'ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:28 compute-0 systemd[1]: run-netns-ovnmeta\x2daa283c2c\x2db597\x2d4970\x2d842d\x2df5f2b621b5f0.mount: Deactivated successfully.
Nov 25 16:49:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:28.841 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aa283c2c-b597-4970-842d-f5f2b621b5f0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:49:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:28.841 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[1fb6faca-16b0-4df8-ad91-49d3ac4bd0b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:29 compute-0 nova_compute[254092]: 2025-11-25 16:49:29.153 254096 INFO nova.virt.libvirt.driver [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Deleting instance files /var/lib/nova/instances/2e848add-8417-4307-8b01-f0d1c1a76cea_del
Nov 25 16:49:29 compute-0 nova_compute[254092]: 2025-11-25 16:49:29.153 254096 INFO nova.virt.libvirt.driver [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Deletion of /var/lib/nova/instances/2e848add-8417-4307-8b01-f0d1c1a76cea_del complete
Nov 25 16:49:29 compute-0 nova_compute[254092]: 2025-11-25 16:49:29.223 254096 INFO nova.compute.manager [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Took 0.86 seconds to destroy the instance on the hypervisor.
Nov 25 16:49:29 compute-0 nova_compute[254092]: 2025-11-25 16:49:29.224 254096 DEBUG oslo.service.loopingcall [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:49:29 compute-0 nova_compute[254092]: 2025-11-25 16:49:29.224 254096 DEBUG nova.compute.manager [-] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:49:29 compute-0 nova_compute[254092]: 2025-11-25 16:49:29.224 254096 DEBUG nova.network.neutron [-] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:49:29 compute-0 nova_compute[254092]: 2025-11-25 16:49:29.354 254096 DEBUG nova.compute.manager [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received event network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:49:29 compute-0 nova_compute[254092]: 2025-11-25 16:49:29.355 254096 DEBUG oslo_concurrency.lockutils [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:29 compute-0 nova_compute[254092]: 2025-11-25 16:49:29.355 254096 DEBUG oslo_concurrency.lockutils [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:29 compute-0 nova_compute[254092]: 2025-11-25 16:49:29.355 254096 DEBUG oslo_concurrency.lockutils [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:29 compute-0 nova_compute[254092]: 2025-11-25 16:49:29.356 254096 DEBUG nova.compute.manager [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] No waiting events found dispatching network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:49:29 compute-0 nova_compute[254092]: 2025-11-25 16:49:29.356 254096 WARNING nova.compute.manager [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received unexpected event network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 for instance with vm_state active and task_state None.
Nov 25 16:49:29 compute-0 nova_compute[254092]: 2025-11-25 16:49:29.356 254096 DEBUG nova.compute.manager [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received event network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:49:29 compute-0 nova_compute[254092]: 2025-11-25 16:49:29.356 254096 DEBUG oslo_concurrency.lockutils [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:29 compute-0 nova_compute[254092]: 2025-11-25 16:49:29.356 254096 DEBUG oslo_concurrency.lockutils [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:29 compute-0 nova_compute[254092]: 2025-11-25 16:49:29.356 254096 DEBUG oslo_concurrency.lockutils [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:29 compute-0 nova_compute[254092]: 2025-11-25 16:49:29.357 254096 DEBUG nova.compute.manager [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] No waiting events found dispatching network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:49:29 compute-0 nova_compute[254092]: 2025-11-25 16:49:29.357 254096 WARNING nova.compute.manager [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received unexpected event network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 for instance with vm_state active and task_state None.
Nov 25 16:49:29 compute-0 nova_compute[254092]: 2025-11-25 16:49:29.357 254096 DEBUG nova.compute.manager [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Received event network-changed-5a3f34de-d3de-439b-ac8f-baabc77892b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:49:29 compute-0 nova_compute[254092]: 2025-11-25 16:49:29.357 254096 DEBUG nova.compute.manager [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Refreshing instance network info cache due to event network-changed-5a3f34de-d3de-439b-ac8f-baabc77892b4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:49:29 compute-0 nova_compute[254092]: 2025-11-25 16:49:29.357 254096 DEBUG oslo_concurrency.lockutils [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-2e848add-8417-4307-8b01-f0d1c1a76cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:49:29 compute-0 nova_compute[254092]: 2025-11-25 16:49:29.357 254096 DEBUG oslo_concurrency.lockutils [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-2e848add-8417-4307-8b01-f0d1c1a76cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:49:29 compute-0 nova_compute[254092]: 2025-11-25 16:49:29.358 254096 DEBUG nova.network.neutron [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Refreshing network info cache for port 5a3f34de-d3de-439b-ac8f-baabc77892b4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:49:29 compute-0 nova_compute[254092]: 2025-11-25 16:49:29.536 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:29 compute-0 nova_compute[254092]: 2025-11-25 16:49:29.749 254096 DEBUG oslo_concurrency.lockutils [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquiring lock "6b74b880-45f6-4f10-b09f-2696629a42e9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:29 compute-0 nova_compute[254092]: 2025-11-25 16:49:29.750 254096 DEBUG oslo_concurrency.lockutils [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:29 compute-0 nova_compute[254092]: 2025-11-25 16:49:29.750 254096 DEBUG oslo_concurrency.lockutils [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquiring lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:29 compute-0 nova_compute[254092]: 2025-11-25 16:49:29.750 254096 DEBUG oslo_concurrency.lockutils [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:29 compute-0 nova_compute[254092]: 2025-11-25 16:49:29.750 254096 DEBUG oslo_concurrency.lockutils [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:29 compute-0 nova_compute[254092]: 2025-11-25 16:49:29.751 254096 INFO nova.compute.manager [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Terminating instance
Nov 25 16:49:29 compute-0 nova_compute[254092]: 2025-11-25 16:49:29.752 254096 DEBUG nova.compute.manager [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:49:29 compute-0 kernel: tapacb7d65c-02 (unregistering): left promiscuous mode
Nov 25 16:49:29 compute-0 NetworkManager[48891]: <info>  [1764089369.7950] device (tapacb7d65c-02): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:49:29 compute-0 ovn_controller[153477]: 2025-11-25T16:49:29Z|00971|binding|INFO|Releasing lport acb7d65c-0259-4a39-94f8-d7f64637a340 from this chassis (sb_readonly=0)
Nov 25 16:49:29 compute-0 ovn_controller[153477]: 2025-11-25T16:49:29Z|00972|binding|INFO|Setting lport acb7d65c-0259-4a39-94f8-d7f64637a340 down in Southbound
Nov 25 16:49:29 compute-0 ovn_controller[153477]: 2025-11-25T16:49:29Z|00973|binding|INFO|Removing iface tapacb7d65c-02 ovn-installed in OVS
Nov 25 16:49:29 compute-0 nova_compute[254092]: 2025-11-25 16:49:29.805 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:29 compute-0 nova_compute[254092]: 2025-11-25 16:49:29.806 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1946: 321 pgs: 321 active+clean; 660 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 7.8 MiB/s wr, 294 op/s
Nov 25 16:49:29 compute-0 nova_compute[254092]: 2025-11-25 16:49:29.823 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:29 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:29.826 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:d7:96 10.100.0.2'], port_security=['fa:16:3e:b2:d7:96 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': '6b74b880-45f6-4f10-b09f-2696629a42e9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e7afc96-cd20-4255-ae42-f12adadff80d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '544c4f84ca494482aea8e55248fe4c62', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'ea3857ca-f518-4e95-9458-20b7becb63fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1ffd0c38-5078-40a5-ba7b-d1f91d316b3e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=acb7d65c-0259-4a39-94f8-d7f64637a340) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:49:29 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:29.828 163338 INFO neutron.agent.ovn.metadata.agent [-] Port acb7d65c-0259-4a39-94f8-d7f64637a340 in datapath 7e7afc96-cd20-4255-ae42-f12adadff80d unbound from our chassis
Nov 25 16:49:29 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:29.829 163338 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7e7afc96-cd20-4255-ae42-f12adadff80d or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 25 16:49:29 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:29.830 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0ab2e61f-1fd5-43e8-8442-28932bfb88c8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:29 compute-0 systemd[1]: machine-qemu\x2d124\x2dinstance\x2d00000062.scope: Deactivated successfully.
Nov 25 16:49:29 compute-0 systemd[1]: machine-qemu\x2d124\x2dinstance\x2d00000062.scope: Consumed 2.948s CPU time.
Nov 25 16:49:29 compute-0 systemd-machined[216343]: Machine qemu-124-instance-00000062 terminated.
Nov 25 16:49:29 compute-0 nova_compute[254092]: 2025-11-25 16:49:29.948 254096 DEBUG nova.network.neutron [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Updating instance_info_cache with network_info: [{"id": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "address": "fa:16:3e:c4:5c:49", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792a5867-7e", "ovs_interfaceid": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:49:29 compute-0 nova_compute[254092]: 2025-11-25 16:49:29.972 254096 DEBUG oslo_concurrency.lockutils [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Releasing lock "refresh_cache-73301044-3bad-4401-9e30-f009d417f662" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:49:29 compute-0 nova_compute[254092]: 2025-11-25 16:49:29.994 254096 INFO nova.virt.libvirt.driver [-] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Instance destroyed successfully.
Nov 25 16:49:29 compute-0 nova_compute[254092]: 2025-11-25 16:49:29.995 254096 DEBUG nova.objects.instance [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lazy-loading 'resources' on Instance uuid 6b74b880-45f6-4f10-b09f-2696629a42e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:49:30 compute-0 nova_compute[254092]: 2025-11-25 16:49:30.008 254096 DEBUG nova.virt.libvirt.vif [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:48:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1318564784',display_name='tempest-ServerRescueTestJSON-server-1318564784',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1318564784',id=98,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:49:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='544c4f84ca494482aea8e55248fe4c62',ramdisk_id='',reservation_id='r-07wxgfqh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-143951276',owner_user_name='tempest-ServerRescueTestJSON-143951276-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:49:27Z,user_data=None,user_id='013868ddd96f43a49458a4615ab1f41b',uuid=6b74b880-45f6-4f10-b09f-2696629a42e9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "acb7d65c-0259-4a39-94f8-d7f64637a340", "address": "fa:16:3e:b2:d7:96", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacb7d65c-02", "ovs_interfaceid": "acb7d65c-0259-4a39-94f8-d7f64637a340", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:49:30 compute-0 nova_compute[254092]: 2025-11-25 16:49:30.009 254096 DEBUG nova.network.os_vif_util [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Converting VIF {"id": "acb7d65c-0259-4a39-94f8-d7f64637a340", "address": "fa:16:3e:b2:d7:96", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacb7d65c-02", "ovs_interfaceid": "acb7d65c-0259-4a39-94f8-d7f64637a340", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:49:30 compute-0 nova_compute[254092]: 2025-11-25 16:49:30.010 254096 DEBUG nova.network.os_vif_util [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b2:d7:96,bridge_name='br-int',has_traffic_filtering=True,id=acb7d65c-0259-4a39-94f8-d7f64637a340,network=Network(7e7afc96-cd20-4255-ae42-f12adadff80d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacb7d65c-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:49:30 compute-0 nova_compute[254092]: 2025-11-25 16:49:30.010 254096 DEBUG os_vif [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b2:d7:96,bridge_name='br-int',has_traffic_filtering=True,id=acb7d65c-0259-4a39-94f8-d7f64637a340,network=Network(7e7afc96-cd20-4255-ae42-f12adadff80d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacb7d65c-02') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:49:30 compute-0 nova_compute[254092]: 2025-11-25 16:49:30.011 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:30 compute-0 nova_compute[254092]: 2025-11-25 16:49:30.012 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapacb7d65c-02, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:49:30 compute-0 nova_compute[254092]: 2025-11-25 16:49:30.013 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:30 compute-0 nova_compute[254092]: 2025-11-25 16:49:30.015 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:49:30 compute-0 nova_compute[254092]: 2025-11-25 16:49:30.018 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:30 compute-0 nova_compute[254092]: 2025-11-25 16:49:30.020 254096 INFO os_vif [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b2:d7:96,bridge_name='br-int',has_traffic_filtering=True,id=acb7d65c-0259-4a39-94f8-d7f64637a340,network=Network(7e7afc96-cd20-4255-ae42-f12adadff80d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacb7d65c-02')
Nov 25 16:49:30 compute-0 nova_compute[254092]: 2025-11-25 16:49:30.307 254096 INFO nova.virt.libvirt.driver [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Deleting instance files /var/lib/nova/instances/6b74b880-45f6-4f10-b09f-2696629a42e9_del
Nov 25 16:49:30 compute-0 nova_compute[254092]: 2025-11-25 16:49:30.308 254096 INFO nova.virt.libvirt.driver [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Deletion of /var/lib/nova/instances/6b74b880-45f6-4f10-b09f-2696629a42e9_del complete
Nov 25 16:49:30 compute-0 nova_compute[254092]: 2025-11-25 16:49:30.354 254096 INFO nova.compute.manager [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Took 0.60 seconds to destroy the instance on the hypervisor.
Nov 25 16:49:30 compute-0 nova_compute[254092]: 2025-11-25 16:49:30.355 254096 DEBUG oslo.service.loopingcall [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:49:30 compute-0 nova_compute[254092]: 2025-11-25 16:49:30.355 254096 DEBUG nova.compute.manager [-] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:49:30 compute-0 nova_compute[254092]: 2025-11-25 16:49:30.355 254096 DEBUG nova.network.neutron [-] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:49:30 compute-0 nova_compute[254092]: 2025-11-25 16:49:30.665 254096 DEBUG nova.network.neutron [-] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:49:30 compute-0 nova_compute[254092]: 2025-11-25 16:49:30.688 254096 INFO nova.compute.manager [-] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Took 1.46 seconds to deallocate network for instance.
Nov 25 16:49:30 compute-0 nova_compute[254092]: 2025-11-25 16:49:30.731 254096 DEBUG oslo_concurrency.lockutils [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:30 compute-0 nova_compute[254092]: 2025-11-25 16:49:30.731 254096 DEBUG oslo_concurrency.lockutils [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:30 compute-0 ceph-mon[74985]: pgmap v1946: 321 pgs: 321 active+clean; 660 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 7.8 MiB/s wr, 294 op/s
Nov 25 16:49:30 compute-0 nova_compute[254092]: 2025-11-25 16:49:30.934 254096 DEBUG oslo_concurrency.processutils [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:49:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:49:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:49:31 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1732144333' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.402 254096 DEBUG oslo_concurrency.processutils [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.410 254096 DEBUG nova.compute.provider_tree [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.423 254096 DEBUG nova.scheduler.client.report [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.447 254096 DEBUG oslo_concurrency.lockutils [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.451 254096 DEBUG nova.network.neutron [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Updated VIF entry in instance network info cache for port 5a3f34de-d3de-439b-ac8f-baabc77892b4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.451 254096 DEBUG nova.network.neutron [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Updating instance_info_cache with network_info: [{"id": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "address": "fa:16:3e:9d:43:a6", "network": {"id": "aa283c2c-b597-4970-842d-f5f2b621b5f0", "bridge": "br-int", "label": "tempest-network-smoke--488175601", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a3f34de-d3", "ovs_interfaceid": "5a3f34de-d3de-439b-ac8f-baabc77892b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.476 254096 DEBUG oslo_concurrency.lockutils [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-2e848add-8417-4307-8b01-f0d1c1a76cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.477 254096 DEBUG nova.compute.manager [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received event network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.478 254096 DEBUG oslo_concurrency.lockutils [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.478 254096 DEBUG oslo_concurrency.lockutils [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.478 254096 DEBUG oslo_concurrency.lockutils [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.479 254096 DEBUG nova.compute.manager [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] No waiting events found dispatching network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.479 254096 WARNING nova.compute.manager [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received unexpected event network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 for instance with vm_state active and task_state None.
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.479 254096 DEBUG nova.compute.manager [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Received event network-vif-unplugged-5a3f34de-d3de-439b-ac8f-baabc77892b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.480 254096 DEBUG oslo_concurrency.lockutils [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.480 254096 DEBUG oslo_concurrency.lockutils [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.481 254096 DEBUG oslo_concurrency.lockutils [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.481 254096 DEBUG nova.compute.manager [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] No waiting events found dispatching network-vif-unplugged-5a3f34de-d3de-439b-ac8f-baabc77892b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.481 254096 DEBUG nova.compute.manager [req-a4824abb-0774-4b7e-b393-c6e3b4c71836 req-6601dba4-1a23-4830-b6e8-207fc29d0048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Received event network-vif-unplugged-5a3f34de-d3de-439b-ac8f-baabc77892b4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.483 254096 INFO nova.scheduler.client.report [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Deleted allocations for instance 2e848add-8417-4307-8b01-f0d1c1a76cea
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.556 254096 DEBUG oslo_concurrency.lockutils [None req-58ef3582-0a28-4ada-b01a-d95acc6429fe 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.194s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.647 254096 DEBUG nova.network.neutron [-] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.668 254096 INFO nova.compute.manager [-] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Took 1.31 seconds to deallocate network for instance.
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.689 254096 INFO nova.virt.libvirt.driver [-] [instance: 73301044-3bad-4401-9e30-f009d417f662] Instance destroyed successfully.
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.690 254096 DEBUG nova.objects.instance [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lazy-loading 'resources' on Instance uuid 73301044-3bad-4401-9e30-f009d417f662 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.713 254096 DEBUG nova.virt.libvirt.vif [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:47:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-932750089',display_name='tempest-ServerActionsTestOtherB-server-932750089',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-932750089',id=89,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBnTONYhZI5heiiXfLNfrX/OXgVvYUSn4j15WmEIpbpXRla6y35HHtq1Z0p1o7qCtHccmkb3EmOHzY/7y/6dkmUi3UFUSG+S7Mj9iMzU9G5dSAHiNyUuZOtLj/6FJpmuqA==',key_name='tempest-keypair-557412706',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:47:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='fbf763b31dad40d6b0d7285dc017dd89',ramdisk_id='',reservation_id='r-e7rwgh7e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1019246920',owner_user_name='tempest-ServerActionsTestOtherB-1019246920-project-member',shelved_at='2025-11-25T16:49:27.741538',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='3855d8b5-0ce2-4690-ac71-e43d7c3e5764'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:49:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='23f6db77558a477bbd8b8b46cb4107d1',uuid=73301044-3bad-4401-9e30-f009d417f662,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "address": "fa:16:3e:c4:5c:49", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792a5867-7e", "ovs_interfaceid": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.714 254096 DEBUG nova.network.os_vif_util [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Converting VIF {"id": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "address": "fa:16:3e:c4:5c:49", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792a5867-7e", "ovs_interfaceid": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.715 254096 DEBUG nova.network.os_vif_util [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:5c:49,bridge_name='br-int',has_traffic_filtering=True,id=792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap792a5867-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.715 254096 DEBUG os_vif [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:5c:49,bridge_name='br-int',has_traffic_filtering=True,id=792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap792a5867-7e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.717 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.718 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap792a5867-7e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.721 254096 DEBUG oslo_concurrency.lockutils [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.721 254096 DEBUG oslo_concurrency.lockutils [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.722 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.724 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.727 254096 INFO os_vif [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:5c:49,bridge_name='br-int',has_traffic_filtering=True,id=792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap792a5867-7e')
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.801 254096 DEBUG nova.compute.manager [req-9bbeff02-4bea-4443-882a-bcc337f437f2 req-b8c5b3dd-3d44-46ce-b0b1-583ef15e1a74 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Received event network-vif-plugged-5a3f34de-d3de-439b-ac8f-baabc77892b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.802 254096 DEBUG oslo_concurrency.lockutils [req-9bbeff02-4bea-4443-882a-bcc337f437f2 req-b8c5b3dd-3d44-46ce-b0b1-583ef15e1a74 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.802 254096 DEBUG oslo_concurrency.lockutils [req-9bbeff02-4bea-4443-882a-bcc337f437f2 req-b8c5b3dd-3d44-46ce-b0b1-583ef15e1a74 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.802 254096 DEBUG oslo_concurrency.lockutils [req-9bbeff02-4bea-4443-882a-bcc337f437f2 req-b8c5b3dd-3d44-46ce-b0b1-583ef15e1a74 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2e848add-8417-4307-8b01-f0d1c1a76cea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.802 254096 DEBUG nova.compute.manager [req-9bbeff02-4bea-4443-882a-bcc337f437f2 req-b8c5b3dd-3d44-46ce-b0b1-583ef15e1a74 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] No waiting events found dispatching network-vif-plugged-5a3f34de-d3de-439b-ac8f-baabc77892b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.803 254096 WARNING nova.compute.manager [req-9bbeff02-4bea-4443-882a-bcc337f437f2 req-b8c5b3dd-3d44-46ce-b0b1-583ef15e1a74 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Received unexpected event network-vif-plugged-5a3f34de-d3de-439b-ac8f-baabc77892b4 for instance with vm_state deleted and task_state None.
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.803 254096 DEBUG nova.compute.manager [req-9bbeff02-4bea-4443-882a-bcc337f437f2 req-b8c5b3dd-3d44-46ce-b0b1-583ef15e1a74 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received event network-vif-unplugged-acb7d65c-0259-4a39-94f8-d7f64637a340 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.804 254096 DEBUG oslo_concurrency.lockutils [req-9bbeff02-4bea-4443-882a-bcc337f437f2 req-b8c5b3dd-3d44-46ce-b0b1-583ef15e1a74 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.804 254096 DEBUG oslo_concurrency.lockutils [req-9bbeff02-4bea-4443-882a-bcc337f437f2 req-b8c5b3dd-3d44-46ce-b0b1-583ef15e1a74 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.804 254096 DEBUG oslo_concurrency.lockutils [req-9bbeff02-4bea-4443-882a-bcc337f437f2 req-b8c5b3dd-3d44-46ce-b0b1-583ef15e1a74 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.804 254096 DEBUG nova.compute.manager [req-9bbeff02-4bea-4443-882a-bcc337f437f2 req-b8c5b3dd-3d44-46ce-b0b1-583ef15e1a74 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] No waiting events found dispatching network-vif-unplugged-acb7d65c-0259-4a39-94f8-d7f64637a340 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.805 254096 WARNING nova.compute.manager [req-9bbeff02-4bea-4443-882a-bcc337f437f2 req-b8c5b3dd-3d44-46ce-b0b1-583ef15e1a74 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received unexpected event network-vif-unplugged-acb7d65c-0259-4a39-94f8-d7f64637a340 for instance with vm_state deleted and task_state None.
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.805 254096 DEBUG nova.compute.manager [req-9bbeff02-4bea-4443-882a-bcc337f437f2 req-b8c5b3dd-3d44-46ce-b0b1-583ef15e1a74 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Received event network-vif-deleted-5a3f34de-d3de-439b-ac8f-baabc77892b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.805 254096 INFO nova.compute.manager [req-9bbeff02-4bea-4443-882a-bcc337f437f2 req-b8c5b3dd-3d44-46ce-b0b1-583ef15e1a74 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Neutron deleted interface 5a3f34de-d3de-439b-ac8f-baabc77892b4; detaching it from the instance and deleting it from the info cache
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.806 254096 DEBUG nova.network.neutron [req-9bbeff02-4bea-4443-882a-bcc337f437f2 req-b8c5b3dd-3d44-46ce-b0b1-583ef15e1a74 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Nov 25 16:49:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1947: 321 pgs: 321 active+clean; 487 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 6.2 MiB/s wr, 488 op/s
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.809 254096 DEBUG nova.compute.manager [req-9bbeff02-4bea-4443-882a-bcc337f437f2 req-b8c5b3dd-3d44-46ce-b0b1-583ef15e1a74 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Detach interface failed, port_id=5a3f34de-d3de-439b-ac8f-baabc77892b4, reason: Instance 2e848add-8417-4307-8b01-f0d1c1a76cea could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.809 254096 DEBUG nova.compute.manager [req-9bbeff02-4bea-4443-882a-bcc337f437f2 req-b8c5b3dd-3d44-46ce-b0b1-583ef15e1a74 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received event network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.809 254096 DEBUG oslo_concurrency.lockutils [req-9bbeff02-4bea-4443-882a-bcc337f437f2 req-b8c5b3dd-3d44-46ce-b0b1-583ef15e1a74 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.809 254096 DEBUG oslo_concurrency.lockutils [req-9bbeff02-4bea-4443-882a-bcc337f437f2 req-b8c5b3dd-3d44-46ce-b0b1-583ef15e1a74 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.810 254096 DEBUG oslo_concurrency.lockutils [req-9bbeff02-4bea-4443-882a-bcc337f437f2 req-b8c5b3dd-3d44-46ce-b0b1-583ef15e1a74 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.810 254096 DEBUG nova.compute.manager [req-9bbeff02-4bea-4443-882a-bcc337f437f2 req-b8c5b3dd-3d44-46ce-b0b1-583ef15e1a74 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] No waiting events found dispatching network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.810 254096 WARNING nova.compute.manager [req-9bbeff02-4bea-4443-882a-bcc337f437f2 req-b8c5b3dd-3d44-46ce-b0b1-583ef15e1a74 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received unexpected event network-vif-plugged-acb7d65c-0259-4a39-94f8-d7f64637a340 for instance with vm_state deleted and task_state None.
Nov 25 16:49:31 compute-0 nova_compute[254092]: 2025-11-25 16:49:31.853 254096 DEBUG oslo_concurrency.processutils [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:49:31 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1732144333' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:49:32 compute-0 nova_compute[254092]: 2025-11-25 16:49:32.051 254096 INFO nova.virt.libvirt.driver [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Deleting instance files /var/lib/nova/instances/73301044-3bad-4401-9e30-f009d417f662_del
Nov 25 16:49:32 compute-0 nova_compute[254092]: 2025-11-25 16:49:32.052 254096 INFO nova.virt.libvirt.driver [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Deletion of /var/lib/nova/instances/73301044-3bad-4401-9e30-f009d417f662_del complete
Nov 25 16:49:32 compute-0 nova_compute[254092]: 2025-11-25 16:49:32.151 254096 INFO nova.scheduler.client.report [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Deleted allocations for instance 73301044-3bad-4401-9e30-f009d417f662
Nov 25 16:49:32 compute-0 nova_compute[254092]: 2025-11-25 16:49:32.202 254096 DEBUG oslo_concurrency.lockutils [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:49:32 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/168623913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:49:32 compute-0 nova_compute[254092]: 2025-11-25 16:49:32.312 254096 DEBUG oslo_concurrency.processutils [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:49:32 compute-0 nova_compute[254092]: 2025-11-25 16:49:32.318 254096 DEBUG nova.compute.provider_tree [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:49:32 compute-0 nova_compute[254092]: 2025-11-25 16:49:32.333 254096 DEBUG nova.scheduler.client.report [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:49:32 compute-0 nova_compute[254092]: 2025-11-25 16:49:32.350 254096 DEBUG oslo_concurrency.lockutils [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:32 compute-0 nova_compute[254092]: 2025-11-25 16:49:32.352 254096 DEBUG oslo_concurrency.lockutils [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:32 compute-0 nova_compute[254092]: 2025-11-25 16:49:32.377 254096 INFO nova.scheduler.client.report [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Deleted allocations for instance 6b74b880-45f6-4f10-b09f-2696629a42e9
Nov 25 16:49:32 compute-0 nova_compute[254092]: 2025-11-25 16:49:32.434 254096 DEBUG oslo_concurrency.processutils [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:49:32 compute-0 nova_compute[254092]: 2025-11-25 16:49:32.476 254096 DEBUG oslo_concurrency.lockutils [None req-5543b0f8-ba7b-4daf-b1b0-f54fc2dd2831 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "6b74b880-45f6-4f10-b09f-2696629a42e9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:32 compute-0 nova_compute[254092]: 2025-11-25 16:49:32.715 254096 DEBUG oslo_concurrency.lockutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Acquiring lock "6f5465d3-64cd-46fb-af8f-3b29aef5123d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:32 compute-0 nova_compute[254092]: 2025-11-25 16:49:32.716 254096 DEBUG oslo_concurrency.lockutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Lock "6f5465d3-64cd-46fb-af8f-3b29aef5123d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:32 compute-0 nova_compute[254092]: 2025-11-25 16:49:32.729 254096 DEBUG nova.compute.manager [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:49:32 compute-0 nova_compute[254092]: 2025-11-25 16:49:32.800 254096 DEBUG oslo_concurrency.lockutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:32 compute-0 ceph-mon[74985]: pgmap v1947: 321 pgs: 321 active+clean; 487 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 6.2 MiB/s wr, 488 op/s
Nov 25 16:49:32 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/168623913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:49:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:49:32 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2711582819' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:49:32 compute-0 nova_compute[254092]: 2025-11-25 16:49:32.914 254096 DEBUG oslo_concurrency.processutils [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:49:32 compute-0 nova_compute[254092]: 2025-11-25 16:49:32.920 254096 DEBUG nova.compute.provider_tree [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:49:32 compute-0 nova_compute[254092]: 2025-11-25 16:49:32.933 254096 DEBUG nova.scheduler.client.report [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:49:32 compute-0 nova_compute[254092]: 2025-11-25 16:49:32.949 254096 DEBUG oslo_concurrency.lockutils [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:32 compute-0 nova_compute[254092]: 2025-11-25 16:49:32.952 254096 DEBUG oslo_concurrency.lockutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.152s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:32 compute-0 nova_compute[254092]: 2025-11-25 16:49:32.963 254096 DEBUG nova.virt.hardware [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:49:32 compute-0 nova_compute[254092]: 2025-11-25 16:49:32.963 254096 INFO nova.compute.claims [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:49:32 compute-0 nova_compute[254092]: 2025-11-25 16:49:32.983 254096 DEBUG oslo_concurrency.lockutils [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquiring lock "435ae693-6844-49ae-977b-ec3aa89cfe70" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:32 compute-0 nova_compute[254092]: 2025-11-25 16:49:32.983 254096 DEBUG oslo_concurrency.lockutils [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "435ae693-6844-49ae-977b-ec3aa89cfe70" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:32 compute-0 nova_compute[254092]: 2025-11-25 16:49:32.983 254096 DEBUG oslo_concurrency.lockutils [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquiring lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:32 compute-0 nova_compute[254092]: 2025-11-25 16:49:32.984 254096 DEBUG oslo_concurrency.lockutils [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:32 compute-0 nova_compute[254092]: 2025-11-25 16:49:32.984 254096 DEBUG oslo_concurrency.lockutils [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:32 compute-0 nova_compute[254092]: 2025-11-25 16:49:32.985 254096 INFO nova.compute.manager [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Terminating instance
Nov 25 16:49:32 compute-0 nova_compute[254092]: 2025-11-25 16:49:32.986 254096 DEBUG nova.compute.manager [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:49:33 compute-0 nova_compute[254092]: 2025-11-25 16:49:33.019 254096 DEBUG oslo_concurrency.lockutils [None req-9a1425ec-f3d0-453a-b0c3-13c651f80757 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 13.354s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:33 compute-0 kernel: tap431770e1-47 (unregistering): left promiscuous mode
Nov 25 16:49:33 compute-0 NetworkManager[48891]: <info>  [1764089373.0401] device (tap431770e1-47): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:49:33 compute-0 ovn_controller[153477]: 2025-11-25T16:49:33Z|00974|binding|INFO|Releasing lport 431770e1-476d-40b3-8477-419b69aa4fe9 from this chassis (sb_readonly=0)
Nov 25 16:49:33 compute-0 ovn_controller[153477]: 2025-11-25T16:49:33Z|00975|binding|INFO|Setting lport 431770e1-476d-40b3-8477-419b69aa4fe9 down in Southbound
Nov 25 16:49:33 compute-0 ovn_controller[153477]: 2025-11-25T16:49:33Z|00976|binding|INFO|Removing iface tap431770e1-47 ovn-installed in OVS
Nov 25 16:49:33 compute-0 nova_compute[254092]: 2025-11-25 16:49:33.099 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:33.105 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e9:e7:de 10.100.0.8'], port_security=['fa:16:3e:e9:e7:de 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '435ae693-6844-49ae-977b-ec3aa89cfe70', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e7afc96-cd20-4255-ae42-f12adadff80d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '544c4f84ca494482aea8e55248fe4c62', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'ea3857ca-f518-4e95-9458-20b7becb63fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1ffd0c38-5078-40a5-ba7b-d1f91d316b3e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=431770e1-476d-40b3-8477-419b69aa4fe9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:49:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:33.107 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 431770e1-476d-40b3-8477-419b69aa4fe9 in datapath 7e7afc96-cd20-4255-ae42-f12adadff80d unbound from our chassis
Nov 25 16:49:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:33.108 163338 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7e7afc96-cd20-4255-ae42-f12adadff80d or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 25 16:49:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:33.109 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e96aaa9d-5ea6-422f-8948-84b09357f29c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:33 compute-0 nova_compute[254092]: 2025-11-25 16:49:33.117 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:33 compute-0 nova_compute[254092]: 2025-11-25 16:49:33.129 254096 DEBUG oslo_concurrency.processutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:49:33 compute-0 systemd[1]: machine-qemu\x2d117\x2dinstance\x2d0000005e.scope: Deactivated successfully.
Nov 25 16:49:33 compute-0 systemd[1]: machine-qemu\x2d117\x2dinstance\x2d0000005e.scope: Consumed 17.719s CPU time.
Nov 25 16:49:33 compute-0 systemd-machined[216343]: Machine qemu-117-instance-0000005e terminated.
Nov 25 16:49:33 compute-0 nova_compute[254092]: 2025-11-25 16:49:33.224 254096 INFO nova.virt.libvirt.driver [-] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Instance destroyed successfully.
Nov 25 16:49:33 compute-0 nova_compute[254092]: 2025-11-25 16:49:33.225 254096 DEBUG nova.objects.instance [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lazy-loading 'resources' on Instance uuid 435ae693-6844-49ae-977b-ec3aa89cfe70 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:49:33 compute-0 nova_compute[254092]: 2025-11-25 16:49:33.238 254096 DEBUG nova.virt.libvirt.vif [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:48:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-328897245',display_name='tempest-ServerRescueTestJSON-server-328897245',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-328897245',id=94,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:48:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='544c4f84ca494482aea8e55248fe4c62',ramdisk_id='',reservation_id='r-d30awnf0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-143951276',owner_user_name='tempest-ServerRescueTestJSON-143951276-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:48:31Z,user_data=None,user_id='013868ddd96f43a49458a4615ab1f41b',uuid=435ae693-6844-49ae-977b-ec3aa89cfe70,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "431770e1-476d-40b3-8477-419b69aa4fe9", "address": "fa:16:3e:e9:e7:de", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap431770e1-47", "ovs_interfaceid": "431770e1-476d-40b3-8477-419b69aa4fe9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:49:33 compute-0 nova_compute[254092]: 2025-11-25 16:49:33.238 254096 DEBUG nova.network.os_vif_util [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Converting VIF {"id": "431770e1-476d-40b3-8477-419b69aa4fe9", "address": "fa:16:3e:e9:e7:de", "network": {"id": "7e7afc96-cd20-4255-ae42-f12adadff80d", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2031272697-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "544c4f84ca494482aea8e55248fe4c62", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap431770e1-47", "ovs_interfaceid": "431770e1-476d-40b3-8477-419b69aa4fe9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:49:33 compute-0 nova_compute[254092]: 2025-11-25 16:49:33.239 254096 DEBUG nova.network.os_vif_util [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e9:e7:de,bridge_name='br-int',has_traffic_filtering=True,id=431770e1-476d-40b3-8477-419b69aa4fe9,network=Network(7e7afc96-cd20-4255-ae42-f12adadff80d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap431770e1-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:49:33 compute-0 nova_compute[254092]: 2025-11-25 16:49:33.240 254096 DEBUG os_vif [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e9:e7:de,bridge_name='br-int',has_traffic_filtering=True,id=431770e1-476d-40b3-8477-419b69aa4fe9,network=Network(7e7afc96-cd20-4255-ae42-f12adadff80d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap431770e1-47') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:49:33 compute-0 nova_compute[254092]: 2025-11-25 16:49:33.242 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:33 compute-0 nova_compute[254092]: 2025-11-25 16:49:33.242 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap431770e1-47, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:49:33 compute-0 nova_compute[254092]: 2025-11-25 16:49:33.245 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:33 compute-0 nova_compute[254092]: 2025-11-25 16:49:33.246 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:33 compute-0 nova_compute[254092]: 2025-11-25 16:49:33.248 254096 INFO os_vif [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e9:e7:de,bridge_name='br-int',has_traffic_filtering=True,id=431770e1-476d-40b3-8477-419b69aa4fe9,network=Network(7e7afc96-cd20-4255-ae42-f12adadff80d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap431770e1-47')
Nov 25 16:49:33 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:49:33 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1142638453' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:49:33 compute-0 nova_compute[254092]: 2025-11-25 16:49:33.615 254096 DEBUG oslo_concurrency.processutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:49:33 compute-0 nova_compute[254092]: 2025-11-25 16:49:33.620 254096 DEBUG nova.compute.provider_tree [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:49:33 compute-0 nova_compute[254092]: 2025-11-25 16:49:33.633 254096 DEBUG nova.scheduler.client.report [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:49:33 compute-0 nova_compute[254092]: 2025-11-25 16:49:33.655 254096 DEBUG oslo_concurrency.lockutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:33 compute-0 nova_compute[254092]: 2025-11-25 16:49:33.656 254096 DEBUG nova.compute.manager [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:49:33 compute-0 nova_compute[254092]: 2025-11-25 16:49:33.704 254096 DEBUG nova.compute.manager [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:49:33 compute-0 nova_compute[254092]: 2025-11-25 16:49:33.705 254096 DEBUG nova.network.neutron [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:49:33 compute-0 nova_compute[254092]: 2025-11-25 16:49:33.728 254096 INFO nova.virt.libvirt.driver [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:49:33 compute-0 nova_compute[254092]: 2025-11-25 16:49:33.732 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:33 compute-0 nova_compute[254092]: 2025-11-25 16:49:33.742 254096 DEBUG nova.compute.manager [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:49:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1948: 321 pgs: 321 active+clean; 487 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 1.4 MiB/s wr, 310 op/s
Nov 25 16:49:33 compute-0 nova_compute[254092]: 2025-11-25 16:49:33.830 254096 DEBUG nova.compute.manager [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:49:33 compute-0 nova_compute[254092]: 2025-11-25 16:49:33.832 254096 DEBUG nova.virt.libvirt.driver [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:49:33 compute-0 nova_compute[254092]: 2025-11-25 16:49:33.832 254096 INFO nova.virt.libvirt.driver [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Creating image(s)
Nov 25 16:49:33 compute-0 nova_compute[254092]: 2025-11-25 16:49:33.854 254096 DEBUG nova.storage.rbd_utils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] rbd image 6f5465d3-64cd-46fb-af8f-3b29aef5123d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:49:33 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2711582819' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:49:33 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1142638453' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:49:33 compute-0 nova_compute[254092]: 2025-11-25 16:49:33.886 254096 DEBUG nova.storage.rbd_utils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] rbd image 6f5465d3-64cd-46fb-af8f-3b29aef5123d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:49:33 compute-0 nova_compute[254092]: 2025-11-25 16:49:33.918 254096 DEBUG nova.storage.rbd_utils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] rbd image 6f5465d3-64cd-46fb-af8f-3b29aef5123d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:49:33 compute-0 nova_compute[254092]: 2025-11-25 16:49:33.923 254096 DEBUG oslo_concurrency.processutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:49:33 compute-0 nova_compute[254092]: 2025-11-25 16:49:33.985 254096 INFO nova.virt.libvirt.driver [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Deleting instance files /var/lib/nova/instances/435ae693-6844-49ae-977b-ec3aa89cfe70_del
Nov 25 16:49:33 compute-0 nova_compute[254092]: 2025-11-25 16:49:33.986 254096 INFO nova.virt.libvirt.driver [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Deletion of /var/lib/nova/instances/435ae693-6844-49ae-977b-ec3aa89cfe70_del complete
Nov 25 16:49:34 compute-0 nova_compute[254092]: 2025-11-25 16:49:34.015 254096 DEBUG oslo_concurrency.processutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:49:34 compute-0 nova_compute[254092]: 2025-11-25 16:49:34.017 254096 DEBUG oslo_concurrency.lockutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:34 compute-0 nova_compute[254092]: 2025-11-25 16:49:34.017 254096 DEBUG oslo_concurrency.lockutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:34 compute-0 nova_compute[254092]: 2025-11-25 16:49:34.018 254096 DEBUG oslo_concurrency.lockutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:34 compute-0 nova_compute[254092]: 2025-11-25 16:49:34.043 254096 DEBUG nova.storage.rbd_utils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] rbd image 6f5465d3-64cd-46fb-af8f-3b29aef5123d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:49:34 compute-0 nova_compute[254092]: 2025-11-25 16:49:34.048 254096 DEBUG oslo_concurrency.processutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 6f5465d3-64cd-46fb-af8f-3b29aef5123d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:49:34 compute-0 nova_compute[254092]: 2025-11-25 16:49:34.094 254096 DEBUG nova.policy [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5aefdc701af340eba9e8201f5065511e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b6ce5017d19f45bcb3b13bf55faa9493', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:49:34 compute-0 nova_compute[254092]: 2025-11-25 16:49:34.101 254096 DEBUG nova.compute.manager [req-2d0b01f4-aa87-4307-82e0-306a5624f61c req-4ae4d0e8-d337-4ec2-bb9d-58be3deddfde a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Received event network-vif-deleted-acb7d65c-0259-4a39-94f8-d7f64637a340 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:49:34 compute-0 nova_compute[254092]: 2025-11-25 16:49:34.101 254096 DEBUG nova.compute.manager [req-2d0b01f4-aa87-4307-82e0-306a5624f61c req-4ae4d0e8-d337-4ec2-bb9d-58be3deddfde a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Received event network-vif-unplugged-431770e1-476d-40b3-8477-419b69aa4fe9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:49:34 compute-0 nova_compute[254092]: 2025-11-25 16:49:34.101 254096 DEBUG oslo_concurrency.lockutils [req-2d0b01f4-aa87-4307-82e0-306a5624f61c req-4ae4d0e8-d337-4ec2-bb9d-58be3deddfde a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:34 compute-0 nova_compute[254092]: 2025-11-25 16:49:34.102 254096 DEBUG oslo_concurrency.lockutils [req-2d0b01f4-aa87-4307-82e0-306a5624f61c req-4ae4d0e8-d337-4ec2-bb9d-58be3deddfde a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:34 compute-0 nova_compute[254092]: 2025-11-25 16:49:34.102 254096 DEBUG oslo_concurrency.lockutils [req-2d0b01f4-aa87-4307-82e0-306a5624f61c req-4ae4d0e8-d337-4ec2-bb9d-58be3deddfde a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:34 compute-0 nova_compute[254092]: 2025-11-25 16:49:34.102 254096 DEBUG nova.compute.manager [req-2d0b01f4-aa87-4307-82e0-306a5624f61c req-4ae4d0e8-d337-4ec2-bb9d-58be3deddfde a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] No waiting events found dispatching network-vif-unplugged-431770e1-476d-40b3-8477-419b69aa4fe9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:49:34 compute-0 nova_compute[254092]: 2025-11-25 16:49:34.103 254096 DEBUG nova.compute.manager [req-2d0b01f4-aa87-4307-82e0-306a5624f61c req-4ae4d0e8-d337-4ec2-bb9d-58be3deddfde a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Received event network-vif-unplugged-431770e1-476d-40b3-8477-419b69aa4fe9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:49:34 compute-0 nova_compute[254092]: 2025-11-25 16:49:34.109 254096 INFO nova.compute.manager [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Took 1.12 seconds to destroy the instance on the hypervisor.
Nov 25 16:49:34 compute-0 nova_compute[254092]: 2025-11-25 16:49:34.109 254096 DEBUG oslo.service.loopingcall [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:49:34 compute-0 nova_compute[254092]: 2025-11-25 16:49:34.110 254096 DEBUG nova.compute.manager [-] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:49:34 compute-0 nova_compute[254092]: 2025-11-25 16:49:34.110 254096 DEBUG nova.network.neutron [-] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:49:34 compute-0 nova_compute[254092]: 2025-11-25 16:49:34.351 254096 DEBUG oslo_concurrency.processutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 6f5465d3-64cd-46fb-af8f-3b29aef5123d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.303s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:49:34 compute-0 nova_compute[254092]: 2025-11-25 16:49:34.416 254096 DEBUG nova.storage.rbd_utils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] resizing rbd image 6f5465d3-64cd-46fb-af8f-3b29aef5123d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:49:34 compute-0 nova_compute[254092]: 2025-11-25 16:49:34.505 254096 DEBUG nova.objects.instance [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Lazy-loading 'migration_context' on Instance uuid 6f5465d3-64cd-46fb-af8f-3b29aef5123d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:49:34 compute-0 nova_compute[254092]: 2025-11-25 16:49:34.516 254096 DEBUG nova.virt.libvirt.driver [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:49:34 compute-0 nova_compute[254092]: 2025-11-25 16:49:34.517 254096 DEBUG nova.virt.libvirt.driver [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Ensure instance console log exists: /var/lib/nova/instances/6f5465d3-64cd-46fb-af8f-3b29aef5123d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:49:34 compute-0 nova_compute[254092]: 2025-11-25 16:49:34.517 254096 DEBUG oslo_concurrency.lockutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:34 compute-0 nova_compute[254092]: 2025-11-25 16:49:34.517 254096 DEBUG oslo_concurrency.lockutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:34 compute-0 nova_compute[254092]: 2025-11-25 16:49:34.518 254096 DEBUG oslo_concurrency.lockutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:34 compute-0 ovn_controller[153477]: 2025-11-25T16:49:34Z|00977|binding|INFO|Releasing lport 031633f4-be6f-41a4-9c41-8e90000f1a4f from this chassis (sb_readonly=0)
Nov 25 16:49:34 compute-0 ceph-mon[74985]: pgmap v1948: 321 pgs: 321 active+clean; 487 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 1.4 MiB/s wr, 310 op/s
Nov 25 16:49:34 compute-0 nova_compute[254092]: 2025-11-25 16:49:34.960 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:35 compute-0 nova_compute[254092]: 2025-11-25 16:49:35.531 254096 DEBUG nova.network.neutron [-] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:49:35 compute-0 nova_compute[254092]: 2025-11-25 16:49:35.543 254096 INFO nova.compute.manager [-] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Took 1.43 seconds to deallocate network for instance.
Nov 25 16:49:35 compute-0 nova_compute[254092]: 2025-11-25 16:49:35.576 254096 DEBUG oslo_concurrency.lockutils [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:35 compute-0 nova_compute[254092]: 2025-11-25 16:49:35.576 254096 DEBUG oslo_concurrency.lockutils [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:35 compute-0 nova_compute[254092]: 2025-11-25 16:49:35.644 254096 DEBUG oslo_concurrency.processutils [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:49:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1949: 321 pgs: 321 active+clean; 346 MiB data, 848 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 2.3 MiB/s wr, 356 op/s
Nov 25 16:49:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:49:36 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/476113371' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:49:36 compute-0 nova_compute[254092]: 2025-11-25 16:49:36.109 254096 DEBUG oslo_concurrency.processutils [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:49:36 compute-0 nova_compute[254092]: 2025-11-25 16:49:36.121 254096 DEBUG nova.compute.provider_tree [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:49:36 compute-0 nova_compute[254092]: 2025-11-25 16:49:36.139 254096 DEBUG nova.scheduler.client.report [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:49:36 compute-0 nova_compute[254092]: 2025-11-25 16:49:36.163 254096 DEBUG oslo_concurrency.lockutils [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:36 compute-0 nova_compute[254092]: 2025-11-25 16:49:36.194 254096 DEBUG nova.compute.manager [req-c55bac39-0822-41f3-9d5d-73d8dcceb5d4 req-72d3cb09-9acb-415e-a95d-c21d2bcd7412 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Received event network-vif-plugged-431770e1-476d-40b3-8477-419b69aa4fe9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:49:36 compute-0 nova_compute[254092]: 2025-11-25 16:49:36.194 254096 DEBUG oslo_concurrency.lockutils [req-c55bac39-0822-41f3-9d5d-73d8dcceb5d4 req-72d3cb09-9acb-415e-a95d-c21d2bcd7412 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:36 compute-0 nova_compute[254092]: 2025-11-25 16:49:36.195 254096 DEBUG oslo_concurrency.lockutils [req-c55bac39-0822-41f3-9d5d-73d8dcceb5d4 req-72d3cb09-9acb-415e-a95d-c21d2bcd7412 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:36 compute-0 nova_compute[254092]: 2025-11-25 16:49:36.195 254096 DEBUG oslo_concurrency.lockutils [req-c55bac39-0822-41f3-9d5d-73d8dcceb5d4 req-72d3cb09-9acb-415e-a95d-c21d2bcd7412 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "435ae693-6844-49ae-977b-ec3aa89cfe70-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:36 compute-0 nova_compute[254092]: 2025-11-25 16:49:36.195 254096 DEBUG nova.compute.manager [req-c55bac39-0822-41f3-9d5d-73d8dcceb5d4 req-72d3cb09-9acb-415e-a95d-c21d2bcd7412 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] No waiting events found dispatching network-vif-plugged-431770e1-476d-40b3-8477-419b69aa4fe9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:49:36 compute-0 nova_compute[254092]: 2025-11-25 16:49:36.195 254096 WARNING nova.compute.manager [req-c55bac39-0822-41f3-9d5d-73d8dcceb5d4 req-72d3cb09-9acb-415e-a95d-c21d2bcd7412 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Received unexpected event network-vif-plugged-431770e1-476d-40b3-8477-419b69aa4fe9 for instance with vm_state deleted and task_state None.
Nov 25 16:49:36 compute-0 nova_compute[254092]: 2025-11-25 16:49:36.196 254096 DEBUG nova.compute.manager [req-c55bac39-0822-41f3-9d5d-73d8dcceb5d4 req-72d3cb09-9acb-415e-a95d-c21d2bcd7412 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Received event network-vif-deleted-431770e1-476d-40b3-8477-419b69aa4fe9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:49:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:49:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e251 do_prune osdmap full prune enabled
Nov 25 16:49:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e252 e252: 3 total, 3 up, 3 in
Nov 25 16:49:36 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e252: 3 total, 3 up, 3 in
Nov 25 16:49:36 compute-0 nova_compute[254092]: 2025-11-25 16:49:36.212 254096 INFO nova.scheduler.client.report [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Deleted allocations for instance 435ae693-6844-49ae-977b-ec3aa89cfe70
Nov 25 16:49:36 compute-0 nova_compute[254092]: 2025-11-25 16:49:36.273 254096 DEBUG nova.network.neutron [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Successfully created port: 9fefcfde-9e55-4ed2-8521-ee26704af28c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:49:36 compute-0 nova_compute[254092]: 2025-11-25 16:49:36.303 254096 DEBUG oslo_concurrency.lockutils [None req-60b8a186-0987-4a8d-a3a5-7d70f4d0b6f2 013868ddd96f43a49458a4615ab1f41b 544c4f84ca494482aea8e55248fe4c62 - - default default] Lock "435ae693-6844-49ae-977b-ec3aa89cfe70" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.320s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:36 compute-0 ceph-mon[74985]: pgmap v1949: 321 pgs: 321 active+clean; 346 MiB data, 848 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 2.3 MiB/s wr, 356 op/s
Nov 25 16:49:36 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/476113371' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:49:36 compute-0 ceph-mon[74985]: osdmap e252: 3 total, 3 up, 3 in
Nov 25 16:49:37 compute-0 nova_compute[254092]: 2025-11-25 16:49:37.204 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089362.2036006, 73301044-3bad-4401-9e30-f009d417f662 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:49:37 compute-0 nova_compute[254092]: 2025-11-25 16:49:37.205 254096 INFO nova.compute.manager [-] [instance: 73301044-3bad-4401-9e30-f009d417f662] VM Stopped (Lifecycle Event)
Nov 25 16:49:37 compute-0 nova_compute[254092]: 2025-11-25 16:49:37.229 254096 DEBUG nova.compute.manager [None req-8992db9e-1535-44aa-9f94-440498fe4e51 - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:49:37 compute-0 nova_compute[254092]: 2025-11-25 16:49:37.416 254096 DEBUG nova.network.neutron [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Successfully updated port: 9fefcfde-9e55-4ed2-8521-ee26704af28c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:49:37 compute-0 nova_compute[254092]: 2025-11-25 16:49:37.429 254096 DEBUG oslo_concurrency.lockutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Acquiring lock "refresh_cache-6f5465d3-64cd-46fb-af8f-3b29aef5123d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:49:37 compute-0 nova_compute[254092]: 2025-11-25 16:49:37.429 254096 DEBUG oslo_concurrency.lockutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Acquired lock "refresh_cache-6f5465d3-64cd-46fb-af8f-3b29aef5123d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:49:37 compute-0 nova_compute[254092]: 2025-11-25 16:49:37.429 254096 DEBUG nova.network.neutron [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:49:37 compute-0 nova_compute[254092]: 2025-11-25 16:49:37.658 254096 DEBUG nova.network.neutron [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:49:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1951: 321 pgs: 321 active+clean; 292 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.1 MiB/s wr, 331 op/s
Nov 25 16:49:38 compute-0 nova_compute[254092]: 2025-11-25 16:49:38.245 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:38 compute-0 nova_compute[254092]: 2025-11-25 16:49:38.289 254096 DEBUG oslo_concurrency.lockutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "73301044-3bad-4401-9e30-f009d417f662" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:38 compute-0 nova_compute[254092]: 2025-11-25 16:49:38.289 254096 DEBUG oslo_concurrency.lockutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:38 compute-0 nova_compute[254092]: 2025-11-25 16:49:38.289 254096 INFO nova.compute.manager [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Unshelving
Nov 25 16:49:38 compute-0 nova_compute[254092]: 2025-11-25 16:49:38.296 254096 DEBUG nova.compute.manager [req-be3f1f2a-4d7a-42b6-83b3-fd1aa0e0de07 req-9ef4cf1e-0c2c-435e-8d31-c9f849756cc2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Received event network-changed-9fefcfde-9e55-4ed2-8521-ee26704af28c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:49:38 compute-0 nova_compute[254092]: 2025-11-25 16:49:38.297 254096 DEBUG nova.compute.manager [req-be3f1f2a-4d7a-42b6-83b3-fd1aa0e0de07 req-9ef4cf1e-0c2c-435e-8d31-c9f849756cc2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Refreshing instance network info cache due to event network-changed-9fefcfde-9e55-4ed2-8521-ee26704af28c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:49:38 compute-0 nova_compute[254092]: 2025-11-25 16:49:38.297 254096 DEBUG oslo_concurrency.lockutils [req-be3f1f2a-4d7a-42b6-83b3-fd1aa0e0de07 req-9ef4cf1e-0c2c-435e-8d31-c9f849756cc2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-6f5465d3-64cd-46fb-af8f-3b29aef5123d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:49:38 compute-0 nova_compute[254092]: 2025-11-25 16:49:38.361 254096 DEBUG oslo_concurrency.lockutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:38 compute-0 nova_compute[254092]: 2025-11-25 16:49:38.362 254096 DEBUG oslo_concurrency.lockutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:38 compute-0 nova_compute[254092]: 2025-11-25 16:49:38.366 254096 DEBUG nova.objects.instance [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lazy-loading 'pci_requests' on Instance uuid 73301044-3bad-4401-9e30-f009d417f662 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:49:38 compute-0 nova_compute[254092]: 2025-11-25 16:49:38.378 254096 DEBUG nova.objects.instance [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lazy-loading 'numa_topology' on Instance uuid 73301044-3bad-4401-9e30-f009d417f662 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:49:38 compute-0 nova_compute[254092]: 2025-11-25 16:49:38.385 254096 DEBUG nova.virt.hardware [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:49:38 compute-0 nova_compute[254092]: 2025-11-25 16:49:38.385 254096 INFO nova.compute.claims [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:49:38 compute-0 nova_compute[254092]: 2025-11-25 16:49:38.537 254096 DEBUG oslo_concurrency.processutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:49:38 compute-0 nova_compute[254092]: 2025-11-25 16:49:38.733 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:38 compute-0 nova_compute[254092]: 2025-11-25 16:49:38.856 254096 DEBUG nova.network.neutron [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Updating instance_info_cache with network_info: [{"id": "9fefcfde-9e55-4ed2-8521-ee26704af28c", "address": "fa:16:3e:ab:07:67", "network": {"id": "e3082221-dfbe-4119-bc6f-940f05f1b99c", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-653591229-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6ce5017d19f45bcb3b13bf55faa9493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fefcfde-9e", "ovs_interfaceid": "9fefcfde-9e55-4ed2-8521-ee26704af28c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:49:38 compute-0 nova_compute[254092]: 2025-11-25 16:49:38.890 254096 DEBUG oslo_concurrency.lockutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Releasing lock "refresh_cache-6f5465d3-64cd-46fb-af8f-3b29aef5123d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:49:38 compute-0 nova_compute[254092]: 2025-11-25 16:49:38.891 254096 DEBUG nova.compute.manager [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Instance network_info: |[{"id": "9fefcfde-9e55-4ed2-8521-ee26704af28c", "address": "fa:16:3e:ab:07:67", "network": {"id": "e3082221-dfbe-4119-bc6f-940f05f1b99c", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-653591229-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6ce5017d19f45bcb3b13bf55faa9493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fefcfde-9e", "ovs_interfaceid": "9fefcfde-9e55-4ed2-8521-ee26704af28c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:49:38 compute-0 nova_compute[254092]: 2025-11-25 16:49:38.892 254096 DEBUG oslo_concurrency.lockutils [req-be3f1f2a-4d7a-42b6-83b3-fd1aa0e0de07 req-9ef4cf1e-0c2c-435e-8d31-c9f849756cc2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-6f5465d3-64cd-46fb-af8f-3b29aef5123d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:49:38 compute-0 nova_compute[254092]: 2025-11-25 16:49:38.892 254096 DEBUG nova.network.neutron [req-be3f1f2a-4d7a-42b6-83b3-fd1aa0e0de07 req-9ef4cf1e-0c2c-435e-8d31-c9f849756cc2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Refreshing network info cache for port 9fefcfde-9e55-4ed2-8521-ee26704af28c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:49:38 compute-0 nova_compute[254092]: 2025-11-25 16:49:38.896 254096 DEBUG nova.virt.libvirt.driver [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Start _get_guest_xml network_info=[{"id": "9fefcfde-9e55-4ed2-8521-ee26704af28c", "address": "fa:16:3e:ab:07:67", "network": {"id": "e3082221-dfbe-4119-bc6f-940f05f1b99c", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-653591229-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6ce5017d19f45bcb3b13bf55faa9493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fefcfde-9e", "ovs_interfaceid": "9fefcfde-9e55-4ed2-8521-ee26704af28c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:49:38 compute-0 nova_compute[254092]: 2025-11-25 16:49:38.902 254096 WARNING nova.virt.libvirt.driver [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:49:38 compute-0 nova_compute[254092]: 2025-11-25 16:49:38.910 254096 DEBUG nova.virt.libvirt.host [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:49:38 compute-0 nova_compute[254092]: 2025-11-25 16:49:38.911 254096 DEBUG nova.virt.libvirt.host [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:49:38 compute-0 nova_compute[254092]: 2025-11-25 16:49:38.918 254096 DEBUG nova.virt.libvirt.host [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:49:38 compute-0 nova_compute[254092]: 2025-11-25 16:49:38.920 254096 DEBUG nova.virt.libvirt.host [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:49:38 compute-0 nova_compute[254092]: 2025-11-25 16:49:38.920 254096 DEBUG nova.virt.libvirt.driver [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:49:38 compute-0 nova_compute[254092]: 2025-11-25 16:49:38.921 254096 DEBUG nova.virt.hardware [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:49:38 compute-0 nova_compute[254092]: 2025-11-25 16:49:38.923 254096 DEBUG nova.virt.hardware [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:49:38 compute-0 nova_compute[254092]: 2025-11-25 16:49:38.923 254096 DEBUG nova.virt.hardware [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:49:38 compute-0 sudo[354680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:49:38 compute-0 nova_compute[254092]: 2025-11-25 16:49:38.926 254096 DEBUG nova.virt.hardware [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:49:38 compute-0 nova_compute[254092]: 2025-11-25 16:49:38.927 254096 DEBUG nova.virt.hardware [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:49:38 compute-0 nova_compute[254092]: 2025-11-25 16:49:38.927 254096 DEBUG nova.virt.hardware [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:49:38 compute-0 nova_compute[254092]: 2025-11-25 16:49:38.928 254096 DEBUG nova.virt.hardware [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:49:38 compute-0 sudo[354680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:49:38 compute-0 nova_compute[254092]: 2025-11-25 16:49:38.928 254096 DEBUG nova.virt.hardware [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:49:38 compute-0 nova_compute[254092]: 2025-11-25 16:49:38.929 254096 DEBUG nova.virt.hardware [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:49:38 compute-0 nova_compute[254092]: 2025-11-25 16:49:38.930 254096 DEBUG nova.virt.hardware [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:49:38 compute-0 nova_compute[254092]: 2025-11-25 16:49:38.930 254096 DEBUG nova.virt.hardware [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:49:38 compute-0 sudo[354680]: pam_unix(sudo:session): session closed for user root
Nov 25 16:49:38 compute-0 nova_compute[254092]: 2025-11-25 16:49:38.936 254096 DEBUG oslo_concurrency.processutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:49:38 compute-0 ceph-mon[74985]: pgmap v1951: 321 pgs: 321 active+clean; 292 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.1 MiB/s wr, 331 op/s
Nov 25 16:49:38 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:49:38 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3532709748' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:49:38 compute-0 sudo[354705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:49:38 compute-0 sudo[354705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:49:38 compute-0 nova_compute[254092]: 2025-11-25 16:49:38.996 254096 DEBUG oslo_concurrency.processutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:49:38 compute-0 sudo[354705]: pam_unix(sudo:session): session closed for user root
Nov 25 16:49:39 compute-0 nova_compute[254092]: 2025-11-25 16:49:39.004 254096 DEBUG nova.compute.provider_tree [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:49:39 compute-0 nova_compute[254092]: 2025-11-25 16:49:39.025 254096 DEBUG nova.scheduler.client.report [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:49:39 compute-0 nova_compute[254092]: 2025-11-25 16:49:39.048 254096 DEBUG oslo_concurrency.lockutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:39 compute-0 sudo[354733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:49:39 compute-0 sudo[354733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:49:39 compute-0 sudo[354733]: pam_unix(sudo:session): session closed for user root
Nov 25 16:49:39 compute-0 sudo[354760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 16:49:39 compute-0 sudo[354760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:49:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:49:39 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1858223094' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:49:39 compute-0 nova_compute[254092]: 2025-11-25 16:49:39.397 254096 DEBUG oslo_concurrency.processutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:49:39 compute-0 nova_compute[254092]: 2025-11-25 16:49:39.425 254096 DEBUG nova.storage.rbd_utils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] rbd image 6f5465d3-64cd-46fb-af8f-3b29aef5123d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:49:39 compute-0 nova_compute[254092]: 2025-11-25 16:49:39.431 254096 DEBUG oslo_concurrency.processutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:49:39 compute-0 nova_compute[254092]: 2025-11-25 16:49:39.487 254096 INFO nova.network.neutron [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Updating port 792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Nov 25 16:49:39 compute-0 sudo[354760]: pam_unix(sudo:session): session closed for user root
Nov 25 16:49:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:49:39 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:49:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 16:49:39 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:49:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 16:49:39 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:49:39 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev d7eaa06a-174b-45e5-878a-d99ca07c3208 does not exist
Nov 25 16:49:39 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 5375b7fb-238a-4959-b9ee-72f5004738fd does not exist
Nov 25 16:49:39 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 8115eac4-eb4f-4556-9fdc-df210229f695 does not exist
Nov 25 16:49:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 16:49:39 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:49:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 16:49:39 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:49:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:49:39 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:49:39 compute-0 sudo[354875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:49:39 compute-0 sudo[354875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:49:39 compute-0 sudo[354875]: pam_unix(sudo:session): session closed for user root
Nov 25 16:49:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1952: 321 pgs: 321 active+clean; 292 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.1 MiB/s wr, 331 op/s
Nov 25 16:49:39 compute-0 sudo[354900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:49:39 compute-0 sudo[354900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:49:39 compute-0 sudo[354900]: pam_unix(sudo:session): session closed for user root
Nov 25 16:49:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:49:39 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/710776305' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:49:39 compute-0 nova_compute[254092]: 2025-11-25 16:49:39.895 254096 DEBUG oslo_concurrency.processutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:49:39 compute-0 nova_compute[254092]: 2025-11-25 16:49:39.897 254096 DEBUG nova.virt.libvirt.vif [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:49:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-2033799726',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-2033799726',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-2033799726',id=100,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b6ce5017d19f45bcb3b13bf55faa9493',ramdisk_id='',reservation_id='r-ey2dhuzc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-207470
3932',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-2074703932-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:49:33Z,user_data=None,user_id='5aefdc701af340eba9e8201f5065511e',uuid=6f5465d3-64cd-46fb-af8f-3b29aef5123d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9fefcfde-9e55-4ed2-8521-ee26704af28c", "address": "fa:16:3e:ab:07:67", "network": {"id": "e3082221-dfbe-4119-bc6f-940f05f1b99c", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-653591229-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6ce5017d19f45bcb3b13bf55faa9493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fefcfde-9e", "ovs_interfaceid": "9fefcfde-9e55-4ed2-8521-ee26704af28c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:49:39 compute-0 nova_compute[254092]: 2025-11-25 16:49:39.898 254096 DEBUG nova.network.os_vif_util [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Converting VIF {"id": "9fefcfde-9e55-4ed2-8521-ee26704af28c", "address": "fa:16:3e:ab:07:67", "network": {"id": "e3082221-dfbe-4119-bc6f-940f05f1b99c", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-653591229-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6ce5017d19f45bcb3b13bf55faa9493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fefcfde-9e", "ovs_interfaceid": "9fefcfde-9e55-4ed2-8521-ee26704af28c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:49:39 compute-0 nova_compute[254092]: 2025-11-25 16:49:39.899 254096 DEBUG nova.network.os_vif_util [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:07:67,bridge_name='br-int',has_traffic_filtering=True,id=9fefcfde-9e55-4ed2-8521-ee26704af28c,network=Network(e3082221-dfbe-4119-bc6f-940f05f1b99c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fefcfde-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:49:39 compute-0 nova_compute[254092]: 2025-11-25 16:49:39.900 254096 DEBUG nova.objects.instance [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6f5465d3-64cd-46fb-af8f-3b29aef5123d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:49:39 compute-0 nova_compute[254092]: 2025-11-25 16:49:39.911 254096 DEBUG nova.virt.libvirt.driver [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:49:39 compute-0 nova_compute[254092]:   <uuid>6f5465d3-64cd-46fb-af8f-3b29aef5123d</uuid>
Nov 25 16:49:39 compute-0 nova_compute[254092]:   <name>instance-00000064</name>
Nov 25 16:49:39 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:49:39 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:49:39 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:49:39 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:       <nova:name>tempest-ServersNegativeTestMultiTenantJSON-server-2033799726</nova:name>
Nov 25 16:49:39 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:49:38</nova:creationTime>
Nov 25 16:49:39 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:49:39 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:49:39 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:49:39 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:49:39 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:49:39 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:49:39 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:49:39 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:49:39 compute-0 nova_compute[254092]:         <nova:user uuid="5aefdc701af340eba9e8201f5065511e">tempest-ServersNegativeTestMultiTenantJSON-2074703932-project-member</nova:user>
Nov 25 16:49:39 compute-0 nova_compute[254092]:         <nova:project uuid="b6ce5017d19f45bcb3b13bf55faa9493">tempest-ServersNegativeTestMultiTenantJSON-2074703932</nova:project>
Nov 25 16:49:39 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:49:39 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:49:39 compute-0 nova_compute[254092]:         <nova:port uuid="9fefcfde-9e55-4ed2-8521-ee26704af28c">
Nov 25 16:49:39 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:49:39 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:49:39 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:49:39 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:49:39 compute-0 nova_compute[254092]:     <system>
Nov 25 16:49:39 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:49:39 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:49:39 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:49:39 compute-0 nova_compute[254092]:       <entry name="serial">6f5465d3-64cd-46fb-af8f-3b29aef5123d</entry>
Nov 25 16:49:39 compute-0 nova_compute[254092]:       <entry name="uuid">6f5465d3-64cd-46fb-af8f-3b29aef5123d</entry>
Nov 25 16:49:39 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     </system>
Nov 25 16:49:39 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:49:39 compute-0 nova_compute[254092]:   <os>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:   </os>
Nov 25 16:49:39 compute-0 nova_compute[254092]:   <features>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:   </features>
Nov 25 16:49:39 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:49:39 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:49:39 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:49:39 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:49:39 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:49:39 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/6f5465d3-64cd-46fb-af8f-3b29aef5123d_disk">
Nov 25 16:49:39 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:       </source>
Nov 25 16:49:39 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:49:39 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:49:39 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:49:39 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/6f5465d3-64cd-46fb-af8f-3b29aef5123d_disk.config">
Nov 25 16:49:39 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:       </source>
Nov 25 16:49:39 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:49:39 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:49:39 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:49:39 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:ab:07:67"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:       <target dev="tap9fefcfde-9e"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:49:39 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/6f5465d3-64cd-46fb-af8f-3b29aef5123d/console.log" append="off"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     <video>
Nov 25 16:49:39 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     </video>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:49:39 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:49:39 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:49:39 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:49:39 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:49:39 compute-0 nova_compute[254092]: </domain>
Nov 25 16:49:39 compute-0 sudo[354925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:49:39 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:49:39 compute-0 nova_compute[254092]: 2025-11-25 16:49:39.912 254096 DEBUG nova.compute.manager [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Preparing to wait for external event network-vif-plugged-9fefcfde-9e55-4ed2-8521-ee26704af28c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:49:39 compute-0 nova_compute[254092]: 2025-11-25 16:49:39.912 254096 DEBUG oslo_concurrency.lockutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Acquiring lock "6f5465d3-64cd-46fb-af8f-3b29aef5123d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:39 compute-0 nova_compute[254092]: 2025-11-25 16:49:39.913 254096 DEBUG oslo_concurrency.lockutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Lock "6f5465d3-64cd-46fb-af8f-3b29aef5123d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:39 compute-0 nova_compute[254092]: 2025-11-25 16:49:39.913 254096 DEBUG oslo_concurrency.lockutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Lock "6f5465d3-64cd-46fb-af8f-3b29aef5123d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:39 compute-0 nova_compute[254092]: 2025-11-25 16:49:39.914 254096 DEBUG nova.virt.libvirt.vif [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:49:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-2033799726',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-2033799726',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-2033799726',id=100,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b6ce5017d19f45bcb3b13bf55faa9493',ramdisk_id='',reservation_id='r-ey2dhuzc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-2074703932',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-2074703932-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:49:33Z,user_data=None,user_id='5aefdc701af340eba9e8201f5065511e',uuid=6f5465d3-64cd-46fb-af8f-3b29aef5123d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9fefcfde-9e55-4ed2-8521-ee26704af28c", "address": "fa:16:3e:ab:07:67", "network": {"id": "e3082221-dfbe-4119-bc6f-940f05f1b99c", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-653591229-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6ce5017d19f45bcb3b13bf55faa9493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fefcfde-9e", "ovs_interfaceid": "9fefcfde-9e55-4ed2-8521-ee26704af28c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:49:39 compute-0 nova_compute[254092]: 2025-11-25 16:49:39.914 254096 DEBUG nova.network.os_vif_util [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Converting VIF {"id": "9fefcfde-9e55-4ed2-8521-ee26704af28c", "address": "fa:16:3e:ab:07:67", "network": {"id": "e3082221-dfbe-4119-bc6f-940f05f1b99c", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-653591229-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6ce5017d19f45bcb3b13bf55faa9493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fefcfde-9e", "ovs_interfaceid": "9fefcfde-9e55-4ed2-8521-ee26704af28c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:49:39 compute-0 nova_compute[254092]: 2025-11-25 16:49:39.915 254096 DEBUG nova.network.os_vif_util [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:07:67,bridge_name='br-int',has_traffic_filtering=True,id=9fefcfde-9e55-4ed2-8521-ee26704af28c,network=Network(e3082221-dfbe-4119-bc6f-940f05f1b99c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fefcfde-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:49:39 compute-0 nova_compute[254092]: 2025-11-25 16:49:39.915 254096 DEBUG os_vif [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:07:67,bridge_name='br-int',has_traffic_filtering=True,id=9fefcfde-9e55-4ed2-8521-ee26704af28c,network=Network(e3082221-dfbe-4119-bc6f-940f05f1b99c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fefcfde-9e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:49:39 compute-0 nova_compute[254092]: 2025-11-25 16:49:39.915 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:39 compute-0 nova_compute[254092]: 2025-11-25 16:49:39.916 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:49:39 compute-0 nova_compute[254092]: 2025-11-25 16:49:39.916 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:49:39 compute-0 sudo[354925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:49:39 compute-0 nova_compute[254092]: 2025-11-25 16:49:39.919 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:39 compute-0 nova_compute[254092]: 2025-11-25 16:49:39.920 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9fefcfde-9e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:49:39 compute-0 nova_compute[254092]: 2025-11-25 16:49:39.920 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9fefcfde-9e, col_values=(('external_ids', {'iface-id': '9fefcfde-9e55-4ed2-8521-ee26704af28c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ab:07:67', 'vm-uuid': '6f5465d3-64cd-46fb-af8f-3b29aef5123d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:49:39 compute-0 nova_compute[254092]: 2025-11-25 16:49:39.922 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:39 compute-0 sudo[354925]: pam_unix(sudo:session): session closed for user root
Nov 25 16:49:39 compute-0 NetworkManager[48891]: <info>  [1764089379.9237] manager: (tap9fefcfde-9e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/399)
Nov 25 16:49:39 compute-0 nova_compute[254092]: 2025-11-25 16:49:39.926 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:49:39 compute-0 nova_compute[254092]: 2025-11-25 16:49:39.931 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:39 compute-0 nova_compute[254092]: 2025-11-25 16:49:39.932 254096 INFO os_vif [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:07:67,bridge_name='br-int',has_traffic_filtering=True,id=9fefcfde-9e55-4ed2-8521-ee26704af28c,network=Network(e3082221-dfbe-4119-bc6f-940f05f1b99c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fefcfde-9e')
Nov 25 16:49:39 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3532709748' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:49:39 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1858223094' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:49:39 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:49:39 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:49:39 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:49:39 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:49:39 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:49:39 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:49:39 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/710776305' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:49:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:39.974 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:49:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:39.975 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 16:49:39 compute-0 nova_compute[254092]: 2025-11-25 16:49:39.975 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:39 compute-0 nova_compute[254092]: 2025-11-25 16:49:39.991 254096 DEBUG nova.virt.libvirt.driver [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:49:39 compute-0 nova_compute[254092]: 2025-11-25 16:49:39.992 254096 DEBUG nova.virt.libvirt.driver [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:49:39 compute-0 nova_compute[254092]: 2025-11-25 16:49:39.992 254096 DEBUG nova.virt.libvirt.driver [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] No VIF found with MAC fa:16:3e:ab:07:67, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:49:39 compute-0 nova_compute[254092]: 2025-11-25 16:49:39.993 254096 INFO nova.virt.libvirt.driver [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Using config drive
Nov 25 16:49:39 compute-0 sudo[354954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 16:49:39 compute-0 sudo[354954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:49:40 compute-0 nova_compute[254092]: 2025-11-25 16:49:40.015 254096 DEBUG nova.storage.rbd_utils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] rbd image 6f5465d3-64cd-46fb-af8f-3b29aef5123d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:49:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:49:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:49:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:49:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:49:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:49:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:49:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:49:40
Nov 25 16:49:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:49:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:49:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.meta', 'vms', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', 'backups', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'images', '.rgw.root']
Nov 25 16:49:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:49:40 compute-0 podman[355038]: 2025-11-25 16:49:40.351546337 +0000 UTC m=+0.049431534 container create b181c1198f75f26cd4a83ca42d6141b414bf61b2ff50fec6fbf3c0f2d51fede1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 16:49:40 compute-0 systemd[1]: Started libpod-conmon-b181c1198f75f26cd4a83ca42d6141b414bf61b2ff50fec6fbf3c0f2d51fede1.scope.
Nov 25 16:49:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:49:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:49:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:49:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:49:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:49:40 compute-0 podman[355038]: 2025-11-25 16:49:40.328724846 +0000 UTC m=+0.026609853 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:49:40 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:49:40 compute-0 podman[355038]: 2025-11-25 16:49:40.467919169 +0000 UTC m=+0.165804156 container init b181c1198f75f26cd4a83ca42d6141b414bf61b2ff50fec6fbf3c0f2d51fede1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jang, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:49:40 compute-0 podman[355038]: 2025-11-25 16:49:40.476380239 +0000 UTC m=+0.174265226 container start b181c1198f75f26cd4a83ca42d6141b414bf61b2ff50fec6fbf3c0f2d51fede1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:49:40 compute-0 podman[355038]: 2025-11-25 16:49:40.479177215 +0000 UTC m=+0.177062192 container attach b181c1198f75f26cd4a83ca42d6141b414bf61b2ff50fec6fbf3c0f2d51fede1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jang, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:49:40 compute-0 systemd[1]: libpod-b181c1198f75f26cd4a83ca42d6141b414bf61b2ff50fec6fbf3c0f2d51fede1.scope: Deactivated successfully.
Nov 25 16:49:40 compute-0 kind_jang[355054]: 167 167
Nov 25 16:49:40 compute-0 conmon[355054]: conmon b181c1198f75f26cd4a8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b181c1198f75f26cd4a83ca42d6141b414bf61b2ff50fec6fbf3c0f2d51fede1.scope/container/memory.events
Nov 25 16:49:40 compute-0 podman[355059]: 2025-11-25 16:49:40.532580556 +0000 UTC m=+0.028486244 container died b181c1198f75f26cd4a83ca42d6141b414bf61b2ff50fec6fbf3c0f2d51fede1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:49:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:49:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:49:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:49:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:49:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:49:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-240233f781cae97ed622517824bd205b119899a8ef95fe8e4f74da7328f38683-merged.mount: Deactivated successfully.
Nov 25 16:49:40 compute-0 podman[355059]: 2025-11-25 16:49:40.570299652 +0000 UTC m=+0.066205330 container remove b181c1198f75f26cd4a83ca42d6141b414bf61b2ff50fec6fbf3c0f2d51fede1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 16:49:40 compute-0 systemd[1]: libpod-conmon-b181c1198f75f26cd4a83ca42d6141b414bf61b2ff50fec6fbf3c0f2d51fede1.scope: Deactivated successfully.
Nov 25 16:49:40 compute-0 podman[355081]: 2025-11-25 16:49:40.769100175 +0000 UTC m=+0.045415766 container create 02bfa2ce2779534d577a2df3c3b18d826962276baa19e4db1a14dea4701bfe9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hypatia, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:49:40 compute-0 systemd[1]: Started libpod-conmon-02bfa2ce2779534d577a2df3c3b18d826962276baa19e4db1a14dea4701bfe9f.scope.
Nov 25 16:49:40 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:49:40 compute-0 podman[355081]: 2025-11-25 16:49:40.751765123 +0000 UTC m=+0.028080734 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:49:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdbef3dcdc3040a09f035f3a6ca6953cb5b3704c2d8e185ff2ccc8d531d52075/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:49:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdbef3dcdc3040a09f035f3a6ca6953cb5b3704c2d8e185ff2ccc8d531d52075/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:49:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdbef3dcdc3040a09f035f3a6ca6953cb5b3704c2d8e185ff2ccc8d531d52075/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:49:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdbef3dcdc3040a09f035f3a6ca6953cb5b3704c2d8e185ff2ccc8d531d52075/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:49:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdbef3dcdc3040a09f035f3a6ca6953cb5b3704c2d8e185ff2ccc8d531d52075/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 16:49:40 compute-0 podman[355081]: 2025-11-25 16:49:40.869327437 +0000 UTC m=+0.145643048 container init 02bfa2ce2779534d577a2df3c3b18d826962276baa19e4db1a14dea4701bfe9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hypatia, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:49:40 compute-0 podman[355081]: 2025-11-25 16:49:40.880507671 +0000 UTC m=+0.156823262 container start 02bfa2ce2779534d577a2df3c3b18d826962276baa19e4db1a14dea4701bfe9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:49:40 compute-0 podman[355081]: 2025-11-25 16:49:40.885267211 +0000 UTC m=+0.161582842 container attach 02bfa2ce2779534d577a2df3c3b18d826962276baa19e4db1a14dea4701bfe9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hypatia, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:49:40 compute-0 ceph-mon[74985]: pgmap v1952: 321 pgs: 321 active+clean; 292 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.1 MiB/s wr, 331 op/s
Nov 25 16:49:41 compute-0 nova_compute[254092]: 2025-11-25 16:49:41.064 254096 INFO nova.virt.libvirt.driver [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Creating config drive at /var/lib/nova/instances/6f5465d3-64cd-46fb-af8f-3b29aef5123d/disk.config
Nov 25 16:49:41 compute-0 nova_compute[254092]: 2025-11-25 16:49:41.074 254096 DEBUG oslo_concurrency.processutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6f5465d3-64cd-46fb-af8f-3b29aef5123d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbytd9blv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:49:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:49:41 compute-0 nova_compute[254092]: 2025-11-25 16:49:41.239 254096 DEBUG oslo_concurrency.processutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6f5465d3-64cd-46fb-af8f-3b29aef5123d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbytd9blv" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:49:41 compute-0 nova_compute[254092]: 2025-11-25 16:49:41.281 254096 DEBUG nova.storage.rbd_utils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] rbd image 6f5465d3-64cd-46fb-af8f-3b29aef5123d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:49:41 compute-0 nova_compute[254092]: 2025-11-25 16:49:41.288 254096 DEBUG oslo_concurrency.processutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6f5465d3-64cd-46fb-af8f-3b29aef5123d/disk.config 6f5465d3-64cd-46fb-af8f-3b29aef5123d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:49:41 compute-0 nova_compute[254092]: 2025-11-25 16:49:41.467 254096 DEBUG oslo_concurrency.processutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6f5465d3-64cd-46fb-af8f-3b29aef5123d/disk.config 6f5465d3-64cd-46fb-af8f-3b29aef5123d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:49:41 compute-0 nova_compute[254092]: 2025-11-25 16:49:41.469 254096 INFO nova.virt.libvirt.driver [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Deleting local config drive /var/lib/nova/instances/6f5465d3-64cd-46fb-af8f-3b29aef5123d/disk.config because it was imported into RBD.
Nov 25 16:49:41 compute-0 nova_compute[254092]: 2025-11-25 16:49:41.480 254096 DEBUG oslo_concurrency.lockutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "refresh_cache-73301044-3bad-4401-9e30-f009d417f662" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:49:41 compute-0 nova_compute[254092]: 2025-11-25 16:49:41.481 254096 DEBUG oslo_concurrency.lockutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquired lock "refresh_cache-73301044-3bad-4401-9e30-f009d417f662" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:49:41 compute-0 nova_compute[254092]: 2025-11-25 16:49:41.481 254096 DEBUG nova.network.neutron [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:49:41 compute-0 kernel: tap9fefcfde-9e: entered promiscuous mode
Nov 25 16:49:41 compute-0 NetworkManager[48891]: <info>  [1764089381.5273] manager: (tap9fefcfde-9e): new Tun device (/org/freedesktop/NetworkManager/Devices/400)
Nov 25 16:49:41 compute-0 nova_compute[254092]: 2025-11-25 16:49:41.558 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:41 compute-0 ovn_controller[153477]: 2025-11-25T16:49:41Z|00978|binding|INFO|Claiming lport 9fefcfde-9e55-4ed2-8521-ee26704af28c for this chassis.
Nov 25 16:49:41 compute-0 ovn_controller[153477]: 2025-11-25T16:49:41Z|00979|binding|INFO|9fefcfde-9e55-4ed2-8521-ee26704af28c: Claiming fa:16:3e:ab:07:67 10.100.0.7
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.572 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:07:67 10.100.0.7'], port_security=['fa:16:3e:ab:07:67 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '6f5465d3-64cd-46fb-af8f-3b29aef5123d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e3082221-dfbe-4119-bc6f-940f05f1b99c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b6ce5017d19f45bcb3b13bf55faa9493', 'neutron:revision_number': '2', 'neutron:security_group_ids': '25d23027-5b7a-4134-9db5-2818d483fb18', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=65852f31-5e85-4656-9a53-3d977e20f573, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=9fefcfde-9e55-4ed2-8521-ee26704af28c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.573 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 9fefcfde-9e55-4ed2-8521-ee26704af28c in datapath e3082221-dfbe-4119-bc6f-940f05f1b99c bound to our chassis
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.575 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e3082221-dfbe-4119-bc6f-940f05f1b99c
Nov 25 16:49:41 compute-0 ovn_controller[153477]: 2025-11-25T16:49:41Z|00980|binding|INFO|Setting lport 9fefcfde-9e55-4ed2-8521-ee26704af28c ovn-installed in OVS
Nov 25 16:49:41 compute-0 ovn_controller[153477]: 2025-11-25T16:49:41Z|00981|binding|INFO|Setting lport 9fefcfde-9e55-4ed2-8521-ee26704af28c up in Southbound
Nov 25 16:49:41 compute-0 nova_compute[254092]: 2025-11-25 16:49:41.580 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:41 compute-0 systemd-udevd[355152]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.591 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[523db5e3-4893-4f66-8fe5-c1b874fc9757]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.592 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape3082221-d1 in ovnmeta-e3082221-dfbe-4119-bc6f-940f05f1b99c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.597 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape3082221-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.597 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4116c432-f6a3-4456-bce4-c026ef69081c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.598 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cb906349-2f75-4b29-b446-b9345095d474]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:41 compute-0 NetworkManager[48891]: <info>  [1764089381.6104] device (tap9fefcfde-9e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:49:41 compute-0 NetworkManager[48891]: <info>  [1764089381.6114] device (tap9fefcfde-9e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:49:41 compute-0 systemd-machined[216343]: New machine qemu-125-instance-00000064.
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.611 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[3fd62473-578e-4887-8e38-783113b0cb1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:41 compute-0 systemd[1]: Started Virtual Machine qemu-125-instance-00000064.
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.634 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5d1a9bb8-d573-4248-acb4-7ef2d6b20030]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.676 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[98375983-0a8f-49d7-a67c-53dc7dcbfce4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.684 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bb64c1b4-d96f-44f4-be5f-64195145f487]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:41 compute-0 NetworkManager[48891]: <info>  [1764089381.6861] manager: (tape3082221-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/401)
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.723 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[7b5c048a-3028-4e7d-be9d-7574c0fa6cd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.729 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[fd5c6db5-69e8-4ff5-88de-4ed64664bcdb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:41 compute-0 NetworkManager[48891]: <info>  [1764089381.7574] device (tape3082221-d0): carrier: link connected
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.769 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[309a8f14-a3d1-4082-819d-2e35f10fc8a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.791 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6a0f4748-b972-4997-843d-b07222225f77]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape3082221-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:57:f3:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 288], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 583934, 'reachable_time': 16757, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355199, 'error': None, 'target': 'ovnmeta-e3082221-dfbe-4119-bc6f-940f05f1b99c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1953: 321 pgs: 321 active+clean; 292 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 95 KiB/s rd, 2.1 MiB/s wr, 140 op/s
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.813 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8692047a-c59c-48d0-99ee-980826675e97]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe57:f3c7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 583934, 'tstamp': 583934}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355201, 'error': None, 'target': 'ovnmeta-e3082221-dfbe-4119-bc6f-940f05f1b99c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.835 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c996469f-e842-4919-9f10-534142fb7481]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape3082221-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:57:f3:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 288], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 583934, 'reachable_time': 16757, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 355204, 'error': None, 'target': 'ovnmeta-e3082221-dfbe-4119-bc6f-940f05f1b99c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:41 compute-0 nova_compute[254092]: 2025-11-25 16:49:41.864 254096 DEBUG nova.compute.manager [req-5d119492-51af-478d-8f83-dbc7eca58a20 req-69a208ce-f214-4bd6-b4b3-d50595aacd9e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Received event network-changed-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:49:41 compute-0 nova_compute[254092]: 2025-11-25 16:49:41.864 254096 DEBUG nova.compute.manager [req-5d119492-51af-478d-8f83-dbc7eca58a20 req-69a208ce-f214-4bd6-b4b3-d50595aacd9e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Refreshing instance network info cache due to event network-changed-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:49:41 compute-0 nova_compute[254092]: 2025-11-25 16:49:41.865 254096 DEBUG oslo_concurrency.lockutils [req-5d119492-51af-478d-8f83-dbc7eca58a20 req-69a208ce-f214-4bd6-b4b3-d50595aacd9e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-73301044-3bad-4401-9e30-f009d417f662" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.876 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0342ae91-e180-40ba-b908-f4d9e4bc7423]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:41 compute-0 nova_compute[254092]: 2025-11-25 16:49:41.887 254096 DEBUG nova.network.neutron [req-be3f1f2a-4d7a-42b6-83b3-fd1aa0e0de07 req-9ef4cf1e-0c2c-435e-8d31-c9f849756cc2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Updated VIF entry in instance network info cache for port 9fefcfde-9e55-4ed2-8521-ee26704af28c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:49:41 compute-0 nova_compute[254092]: 2025-11-25 16:49:41.887 254096 DEBUG nova.network.neutron [req-be3f1f2a-4d7a-42b6-83b3-fd1aa0e0de07 req-9ef4cf1e-0c2c-435e-8d31-c9f849756cc2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Updating instance_info_cache with network_info: [{"id": "9fefcfde-9e55-4ed2-8521-ee26704af28c", "address": "fa:16:3e:ab:07:67", "network": {"id": "e3082221-dfbe-4119-bc6f-940f05f1b99c", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-653591229-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6ce5017d19f45bcb3b13bf55faa9493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fefcfde-9e", "ovs_interfaceid": "9fefcfde-9e55-4ed2-8521-ee26704af28c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:49:41 compute-0 nova_compute[254092]: 2025-11-25 16:49:41.898 254096 DEBUG oslo_concurrency.lockutils [req-be3f1f2a-4d7a-42b6-83b3-fd1aa0e0de07 req-9ef4cf1e-0c2c-435e-8d31-c9f849756cc2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-6f5465d3-64cd-46fb-af8f-3b29aef5123d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.952 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[029f02e5-178e-4319-9876-776a322ad663]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.954 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape3082221-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.954 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.954 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape3082221-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:49:41 compute-0 NetworkManager[48891]: <info>  [1764089381.9576] manager: (tape3082221-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/402)
Nov 25 16:49:41 compute-0 kernel: tape3082221-d0: entered promiscuous mode
Nov 25 16:49:41 compute-0 nova_compute[254092]: 2025-11-25 16:49:41.958 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.960 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape3082221-d0, col_values=(('external_ids', {'iface-id': '00283bd6-1ec2-4a8e-b502-76396999cb36'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:49:41 compute-0 ovn_controller[153477]: 2025-11-25T16:49:41Z|00982|binding|INFO|Releasing lport 00283bd6-1ec2-4a8e-b502-76396999cb36 from this chassis (sb_readonly=0)
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.980 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e3082221-dfbe-4119-bc6f-940f05f1b99c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e3082221-dfbe-4119-bc6f-940f05f1b99c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:49:41 compute-0 nova_compute[254092]: 2025-11-25 16:49:41.979 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.983 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4e2d1575-a535-4ffa-9296-2ddb6d996317]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.984 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-e3082221-dfbe-4119-bc6f-940f05f1b99c
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/e3082221-dfbe-4119-bc6f-940f05f1b99c.pid.haproxy
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID e3082221-dfbe-4119-bc6f-940f05f1b99c
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:49:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:41.986 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e3082221-dfbe-4119-bc6f-940f05f1b99c', 'env', 'PROCESS_TAG=haproxy-e3082221-dfbe-4119-bc6f-940f05f1b99c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e3082221-dfbe-4119-bc6f-940f05f1b99c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:49:42 compute-0 upbeat_hypatia[355097]: --> passed data devices: 0 physical, 3 LVM
Nov 25 16:49:42 compute-0 upbeat_hypatia[355097]: --> relative data size: 1.0
Nov 25 16:49:42 compute-0 upbeat_hypatia[355097]: --> All data devices are unavailable
Nov 25 16:49:42 compute-0 systemd[1]: libpod-02bfa2ce2779534d577a2df3c3b18d826962276baa19e4db1a14dea4701bfe9f.scope: Deactivated successfully.
Nov 25 16:49:42 compute-0 systemd[1]: libpod-02bfa2ce2779534d577a2df3c3b18d826962276baa19e4db1a14dea4701bfe9f.scope: Consumed 1.060s CPU time.
Nov 25 16:49:42 compute-0 podman[355081]: 2025-11-25 16:49:42.059705656 +0000 UTC m=+1.336021247 container died 02bfa2ce2779534d577a2df3c3b18d826962276baa19e4db1a14dea4701bfe9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 16:49:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-cdbef3dcdc3040a09f035f3a6ca6953cb5b3704c2d8e185ff2ccc8d531d52075-merged.mount: Deactivated successfully.
Nov 25 16:49:42 compute-0 podman[355081]: 2025-11-25 16:49:42.124733283 +0000 UTC m=+1.401048874 container remove 02bfa2ce2779534d577a2df3c3b18d826962276baa19e4db1a14dea4701bfe9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:49:42 compute-0 sudo[354954]: pam_unix(sudo:session): session closed for user root
Nov 25 16:49:42 compute-0 systemd[1]: libpod-conmon-02bfa2ce2779534d577a2df3c3b18d826962276baa19e4db1a14dea4701bfe9f.scope: Deactivated successfully.
Nov 25 16:49:42 compute-0 sudo[355247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:49:42 compute-0 sudo[355247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:49:42 compute-0 sudo[355247]: pam_unix(sudo:session): session closed for user root
Nov 25 16:49:42 compute-0 sudo[355299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:49:42 compute-0 sudo[355299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:49:42 compute-0 sudo[355299]: pam_unix(sudo:session): session closed for user root
Nov 25 16:49:42 compute-0 nova_compute[254092]: 2025-11-25 16:49:42.341 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089382.341018, 6f5465d3-64cd-46fb-af8f-3b29aef5123d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:49:42 compute-0 nova_compute[254092]: 2025-11-25 16:49:42.342 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] VM Started (Lifecycle Event)
Nov 25 16:49:42 compute-0 nova_compute[254092]: 2025-11-25 16:49:42.359 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:49:42 compute-0 sudo[355345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:49:42 compute-0 sudo[355345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:49:42 compute-0 sudo[355345]: pam_unix(sudo:session): session closed for user root
Nov 25 16:49:42 compute-0 nova_compute[254092]: 2025-11-25 16:49:42.368 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089382.3436253, 6f5465d3-64cd-46fb-af8f-3b29aef5123d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:49:42 compute-0 nova_compute[254092]: 2025-11-25 16:49:42.368 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] VM Paused (Lifecycle Event)
Nov 25 16:49:42 compute-0 nova_compute[254092]: 2025-11-25 16:49:42.386 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:49:42 compute-0 nova_compute[254092]: 2025-11-25 16:49:42.390 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:49:42 compute-0 podman[355354]: 2025-11-25 16:49:42.391603166 +0000 UTC m=+0.058416549 container create 2cae034830035d544a8e13e3bebd17c5599b08145fe3351b631b41795b0182f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e3082221-dfbe-4119-bc6f-940f05f1b99c, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 16:49:42 compute-0 nova_compute[254092]: 2025-11-25 16:49:42.407 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:49:42 compute-0 systemd[1]: Started libpod-conmon-2cae034830035d544a8e13e3bebd17c5599b08145fe3351b631b41795b0182f2.scope.
Nov 25 16:49:42 compute-0 sudo[355387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 16:49:42 compute-0 sudo[355387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:49:42 compute-0 podman[355354]: 2025-11-25 16:49:42.362934147 +0000 UTC m=+0.029747560 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:49:42 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:49:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19bb944fbef87fb94ba68f50ced07af8fad100a0cb07b8d51f22d49a8cd9a98b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:49:42 compute-0 podman[355354]: 2025-11-25 16:49:42.482201718 +0000 UTC m=+0.149015171 container init 2cae034830035d544a8e13e3bebd17c5599b08145fe3351b631b41795b0182f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e3082221-dfbe-4119-bc6f-940f05f1b99c, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 16:49:42 compute-0 podman[355354]: 2025-11-25 16:49:42.492313353 +0000 UTC m=+0.159126766 container start 2cae034830035d544a8e13e3bebd17c5599b08145fe3351b631b41795b0182f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e3082221-dfbe-4119-bc6f-940f05f1b99c, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 16:49:42 compute-0 neutron-haproxy-ovnmeta-e3082221-dfbe-4119-bc6f-940f05f1b99c[355413]: [NOTICE]   (355418) : New worker (355420) forked
Nov 25 16:49:42 compute-0 neutron-haproxy-ovnmeta-e3082221-dfbe-4119-bc6f-940f05f1b99c[355413]: [NOTICE]   (355418) : Loading success.
Nov 25 16:49:42 compute-0 podman[355469]: 2025-11-25 16:49:42.812312339 +0000 UTC m=+0.042717332 container create 080d282721c5aad56ca0cfa6261b6b8a13de370cfdcac5012c0c27c98b4769bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:49:42 compute-0 ovn_controller[153477]: 2025-11-25T16:49:42Z|00983|binding|INFO|Releasing lport 031633f4-be6f-41a4-9c41-8e90000f1a4f from this chassis (sb_readonly=0)
Nov 25 16:49:42 compute-0 ovn_controller[153477]: 2025-11-25T16:49:42Z|00984|binding|INFO|Releasing lport 00283bd6-1ec2-4a8e-b502-76396999cb36 from this chassis (sb_readonly=0)
Nov 25 16:49:42 compute-0 systemd[1]: Started libpod-conmon-080d282721c5aad56ca0cfa6261b6b8a13de370cfdcac5012c0c27c98b4769bc.scope.
Nov 25 16:49:42 compute-0 podman[355469]: 2025-11-25 16:49:42.794061622 +0000 UTC m=+0.024466645 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:49:42 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:49:42 compute-0 ceph-mon[74985]: pgmap v1953: 321 pgs: 321 active+clean; 292 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 95 KiB/s rd, 2.1 MiB/s wr, 140 op/s
Nov 25 16:49:42 compute-0 podman[355469]: 2025-11-25 16:49:42.959085327 +0000 UTC m=+0.189490360 container init 080d282721c5aad56ca0cfa6261b6b8a13de370cfdcac5012c0c27c98b4769bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 16:49:42 compute-0 nova_compute[254092]: 2025-11-25 16:49:42.959 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:42 compute-0 podman[355469]: 2025-11-25 16:49:42.974588529 +0000 UTC m=+0.204993532 container start 080d282721c5aad56ca0cfa6261b6b8a13de370cfdcac5012c0c27c98b4769bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_kare, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 16:49:42 compute-0 podman[355469]: 2025-11-25 16:49:42.977654442 +0000 UTC m=+0.208059465 container attach 080d282721c5aad56ca0cfa6261b6b8a13de370cfdcac5012c0c27c98b4769bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 16:49:42 compute-0 charming_kare[355485]: 167 167
Nov 25 16:49:42 compute-0 systemd[1]: libpod-080d282721c5aad56ca0cfa6261b6b8a13de370cfdcac5012c0c27c98b4769bc.scope: Deactivated successfully.
Nov 25 16:49:42 compute-0 podman[355469]: 2025-11-25 16:49:42.983872661 +0000 UTC m=+0.214277694 container died 080d282721c5aad56ca0cfa6261b6b8a13de370cfdcac5012c0c27c98b4769bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_kare, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 16:49:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8225f822df926bf9ccf7cb5ef2f5397d4ae2c2c19543fa44c592a739fb5bd16-merged.mount: Deactivated successfully.
Nov 25 16:49:43 compute-0 podman[355469]: 2025-11-25 16:49:43.019819888 +0000 UTC m=+0.250224891 container remove 080d282721c5aad56ca0cfa6261b6b8a13de370cfdcac5012c0c27c98b4769bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_kare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:49:43 compute-0 systemd[1]: libpod-conmon-080d282721c5aad56ca0cfa6261b6b8a13de370cfdcac5012c0c27c98b4769bc.scope: Deactivated successfully.
Nov 25 16:49:43 compute-0 podman[355509]: 2025-11-25 16:49:43.208872455 +0000 UTC m=+0.046339430 container create 2330e2c35caad05ed5e080c8c6e99da6d32c697eef27d74958fc13c8877dd9d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_tu, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 16:49:43 compute-0 systemd[1]: Started libpod-conmon-2330e2c35caad05ed5e080c8c6e99da6d32c697eef27d74958fc13c8877dd9d3.scope.
Nov 25 16:49:43 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:49:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7adfde1379f334b80999379a2f2948d3422c33866737aa24b0323535c52d182e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:49:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7adfde1379f334b80999379a2f2948d3422c33866737aa24b0323535c52d182e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:49:43 compute-0 podman[355509]: 2025-11-25 16:49:43.18809091 +0000 UTC m=+0.025557645 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:49:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7adfde1379f334b80999379a2f2948d3422c33866737aa24b0323535c52d182e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:49:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7adfde1379f334b80999379a2f2948d3422c33866737aa24b0323535c52d182e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:49:43 compute-0 podman[355509]: 2025-11-25 16:49:43.296870097 +0000 UTC m=+0.134336842 container init 2330e2c35caad05ed5e080c8c6e99da6d32c697eef27d74958fc13c8877dd9d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_tu, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:49:43 compute-0 podman[355509]: 2025-11-25 16:49:43.304524485 +0000 UTC m=+0.141991210 container start 2330e2c35caad05ed5e080c8c6e99da6d32c697eef27d74958fc13c8877dd9d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_tu, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 16:49:43 compute-0 podman[355509]: 2025-11-25 16:49:43.30729591 +0000 UTC m=+0.144762635 container attach 2330e2c35caad05ed5e080c8c6e99da6d32c697eef27d74958fc13c8877dd9d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_tu, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:49:43 compute-0 nova_compute[254092]: 2025-11-25 16:49:43.598 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089368.5971568, 2e848add-8417-4307-8b01-f0d1c1a76cea => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:49:43 compute-0 nova_compute[254092]: 2025-11-25 16:49:43.599 254096 INFO nova.compute.manager [-] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] VM Stopped (Lifecycle Event)
Nov 25 16:49:43 compute-0 nova_compute[254092]: 2025-11-25 16:49:43.627 254096 DEBUG nova.compute.manager [None req-55078cc3-f84b-4eb5-a926-7339d9dd4e9c - - - - - -] [instance: 2e848add-8417-4307-8b01-f0d1c1a76cea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:49:43 compute-0 nova_compute[254092]: 2025-11-25 16:49:43.735 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:43 compute-0 nova_compute[254092]: 2025-11-25 16:49:43.799 254096 DEBUG nova.network.neutron [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Updating instance_info_cache with network_info: [{"id": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "address": "fa:16:3e:c4:5c:49", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792a5867-7e", "ovs_interfaceid": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:49:43 compute-0 nova_compute[254092]: 2025-11-25 16:49:43.811 254096 DEBUG oslo_concurrency.lockutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Releasing lock "refresh_cache-73301044-3bad-4401-9e30-f009d417f662" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:49:43 compute-0 nova_compute[254092]: 2025-11-25 16:49:43.812 254096 DEBUG nova.virt.libvirt.driver [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:49:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1954: 321 pgs: 321 active+clean; 292 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 95 KiB/s rd, 2.1 MiB/s wr, 140 op/s
Nov 25 16:49:43 compute-0 nova_compute[254092]: 2025-11-25 16:49:43.814 254096 INFO nova.virt.libvirt.driver [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Creating image(s)
Nov 25 16:49:43 compute-0 nova_compute[254092]: 2025-11-25 16:49:43.836 254096 DEBUG nova.storage.rbd_utils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image 73301044-3bad-4401-9e30-f009d417f662_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:49:43 compute-0 nova_compute[254092]: 2025-11-25 16:49:43.840 254096 DEBUG nova.objects.instance [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 73301044-3bad-4401-9e30-f009d417f662 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:49:43 compute-0 nova_compute[254092]: 2025-11-25 16:49:43.842 254096 DEBUG oslo_concurrency.lockutils [req-5d119492-51af-478d-8f83-dbc7eca58a20 req-69a208ce-f214-4bd6-b4b3-d50595aacd9e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-73301044-3bad-4401-9e30-f009d417f662" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:49:43 compute-0 nova_compute[254092]: 2025-11-25 16:49:43.842 254096 DEBUG nova.network.neutron [req-5d119492-51af-478d-8f83-dbc7eca58a20 req-69a208ce-f214-4bd6-b4b3-d50595aacd9e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Refreshing network info cache for port 792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:49:43 compute-0 nova_compute[254092]: 2025-11-25 16:49:43.873 254096 DEBUG nova.storage.rbd_utils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image 73301044-3bad-4401-9e30-f009d417f662_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:49:43 compute-0 nova_compute[254092]: 2025-11-25 16:49:43.898 254096 DEBUG nova.storage.rbd_utils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image 73301044-3bad-4401-9e30-f009d417f662_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:49:43 compute-0 nova_compute[254092]: 2025-11-25 16:49:43.902 254096 DEBUG oslo_concurrency.lockutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "ac65f45ff36d3fc6c00b94e1164f55245d2a4ddb" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:43 compute-0 nova_compute[254092]: 2025-11-25 16:49:43.903 254096 DEBUG oslo_concurrency.lockutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "ac65f45ff36d3fc6c00b94e1164f55245d2a4ddb" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:43 compute-0 nova_compute[254092]: 2025-11-25 16:49:43.949 254096 DEBUG nova.compute.manager [req-5d1c0f03-2127-4c52-9c4b-c3da2c6afdc8 req-d71031c8-c926-4f4a-9e43-9cb47d43eae1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Received event network-vif-plugged-9fefcfde-9e55-4ed2-8521-ee26704af28c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:49:43 compute-0 nova_compute[254092]: 2025-11-25 16:49:43.950 254096 DEBUG oslo_concurrency.lockutils [req-5d1c0f03-2127-4c52-9c4b-c3da2c6afdc8 req-d71031c8-c926-4f4a-9e43-9cb47d43eae1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6f5465d3-64cd-46fb-af8f-3b29aef5123d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:43 compute-0 nova_compute[254092]: 2025-11-25 16:49:43.950 254096 DEBUG oslo_concurrency.lockutils [req-5d1c0f03-2127-4c52-9c4b-c3da2c6afdc8 req-d71031c8-c926-4f4a-9e43-9cb47d43eae1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6f5465d3-64cd-46fb-af8f-3b29aef5123d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:43 compute-0 nova_compute[254092]: 2025-11-25 16:49:43.950 254096 DEBUG oslo_concurrency.lockutils [req-5d1c0f03-2127-4c52-9c4b-c3da2c6afdc8 req-d71031c8-c926-4f4a-9e43-9cb47d43eae1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6f5465d3-64cd-46fb-af8f-3b29aef5123d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:43 compute-0 nova_compute[254092]: 2025-11-25 16:49:43.951 254096 DEBUG nova.compute.manager [req-5d1c0f03-2127-4c52-9c4b-c3da2c6afdc8 req-d71031c8-c926-4f4a-9e43-9cb47d43eae1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Processing event network-vif-plugged-9fefcfde-9e55-4ed2-8521-ee26704af28c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:49:43 compute-0 nova_compute[254092]: 2025-11-25 16:49:43.951 254096 DEBUG nova.compute.manager [req-5d1c0f03-2127-4c52-9c4b-c3da2c6afdc8 req-d71031c8-c926-4f4a-9e43-9cb47d43eae1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Received event network-vif-plugged-9fefcfde-9e55-4ed2-8521-ee26704af28c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:49:43 compute-0 nova_compute[254092]: 2025-11-25 16:49:43.951 254096 DEBUG oslo_concurrency.lockutils [req-5d1c0f03-2127-4c52-9c4b-c3da2c6afdc8 req-d71031c8-c926-4f4a-9e43-9cb47d43eae1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6f5465d3-64cd-46fb-af8f-3b29aef5123d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:43 compute-0 nova_compute[254092]: 2025-11-25 16:49:43.951 254096 DEBUG oslo_concurrency.lockutils [req-5d1c0f03-2127-4c52-9c4b-c3da2c6afdc8 req-d71031c8-c926-4f4a-9e43-9cb47d43eae1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6f5465d3-64cd-46fb-af8f-3b29aef5123d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:43 compute-0 nova_compute[254092]: 2025-11-25 16:49:43.951 254096 DEBUG oslo_concurrency.lockutils [req-5d1c0f03-2127-4c52-9c4b-c3da2c6afdc8 req-d71031c8-c926-4f4a-9e43-9cb47d43eae1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6f5465d3-64cd-46fb-af8f-3b29aef5123d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:43 compute-0 nova_compute[254092]: 2025-11-25 16:49:43.952 254096 DEBUG nova.compute.manager [req-5d1c0f03-2127-4c52-9c4b-c3da2c6afdc8 req-d71031c8-c926-4f4a-9e43-9cb47d43eae1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] No waiting events found dispatching network-vif-plugged-9fefcfde-9e55-4ed2-8521-ee26704af28c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:49:43 compute-0 nova_compute[254092]: 2025-11-25 16:49:43.952 254096 WARNING nova.compute.manager [req-5d1c0f03-2127-4c52-9c4b-c3da2c6afdc8 req-d71031c8-c926-4f4a-9e43-9cb47d43eae1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Received unexpected event network-vif-plugged-9fefcfde-9e55-4ed2-8521-ee26704af28c for instance with vm_state building and task_state spawning.
Nov 25 16:49:43 compute-0 nova_compute[254092]: 2025-11-25 16:49:43.952 254096 DEBUG nova.compute.manager [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:49:43 compute-0 nova_compute[254092]: 2025-11-25 16:49:43.957 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089383.9569314, 6f5465d3-64cd-46fb-af8f-3b29aef5123d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:49:43 compute-0 nova_compute[254092]: 2025-11-25 16:49:43.957 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] VM Resumed (Lifecycle Event)
Nov 25 16:49:43 compute-0 nova_compute[254092]: 2025-11-25 16:49:43.959 254096 DEBUG nova.virt.libvirt.driver [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:49:43 compute-0 nova_compute[254092]: 2025-11-25 16:49:43.974 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:49:43 compute-0 nova_compute[254092]: 2025-11-25 16:49:43.980 254096 INFO nova.virt.libvirt.driver [-] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Instance spawned successfully.
Nov 25 16:49:43 compute-0 nova_compute[254092]: 2025-11-25 16:49:43.981 254096 DEBUG nova.virt.libvirt.driver [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:49:43 compute-0 nova_compute[254092]: 2025-11-25 16:49:43.984 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:49:44 compute-0 nova_compute[254092]: 2025-11-25 16:49:44.005 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:49:44 compute-0 nova_compute[254092]: 2025-11-25 16:49:44.007 254096 DEBUG nova.virt.libvirt.driver [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:49:44 compute-0 nova_compute[254092]: 2025-11-25 16:49:44.007 254096 DEBUG nova.virt.libvirt.driver [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:49:44 compute-0 nova_compute[254092]: 2025-11-25 16:49:44.008 254096 DEBUG nova.virt.libvirt.driver [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:49:44 compute-0 nova_compute[254092]: 2025-11-25 16:49:44.008 254096 DEBUG nova.virt.libvirt.driver [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:49:44 compute-0 nova_compute[254092]: 2025-11-25 16:49:44.008 254096 DEBUG nova.virt.libvirt.driver [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:49:44 compute-0 nova_compute[254092]: 2025-11-25 16:49:44.008 254096 DEBUG nova.virt.libvirt.driver [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:49:44 compute-0 nova_compute[254092]: 2025-11-25 16:49:44.063 254096 INFO nova.compute.manager [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Took 10.23 seconds to spawn the instance on the hypervisor.
Nov 25 16:49:44 compute-0 nova_compute[254092]: 2025-11-25 16:49:44.063 254096 DEBUG nova.compute.manager [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:49:44 compute-0 nova_compute[254092]: 2025-11-25 16:49:44.113 254096 INFO nova.compute.manager [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Took 11.34 seconds to build instance.
Nov 25 16:49:44 compute-0 nova_compute[254092]: 2025-11-25 16:49:44.125 254096 DEBUG oslo_concurrency.lockutils [None req-549fc9cf-5ecb-493f-b6ce-6dd80f6568a8 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Lock "6f5465d3-64cd-46fb-af8f-3b29aef5123d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.410s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:44 compute-0 eager_tu[355525]: {
Nov 25 16:49:44 compute-0 eager_tu[355525]:     "0": [
Nov 25 16:49:44 compute-0 eager_tu[355525]:         {
Nov 25 16:49:44 compute-0 eager_tu[355525]:             "devices": [
Nov 25 16:49:44 compute-0 eager_tu[355525]:                 "/dev/loop3"
Nov 25 16:49:44 compute-0 eager_tu[355525]:             ],
Nov 25 16:49:44 compute-0 eager_tu[355525]:             "lv_name": "ceph_lv0",
Nov 25 16:49:44 compute-0 eager_tu[355525]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:49:44 compute-0 eager_tu[355525]:             "lv_size": "21470642176",
Nov 25 16:49:44 compute-0 eager_tu[355525]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:49:44 compute-0 eager_tu[355525]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:49:44 compute-0 eager_tu[355525]:             "name": "ceph_lv0",
Nov 25 16:49:44 compute-0 eager_tu[355525]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:49:44 compute-0 eager_tu[355525]:             "tags": {
Nov 25 16:49:44 compute-0 eager_tu[355525]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:49:44 compute-0 eager_tu[355525]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:49:44 compute-0 eager_tu[355525]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:49:44 compute-0 eager_tu[355525]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:49:44 compute-0 eager_tu[355525]:                 "ceph.cluster_name": "ceph",
Nov 25 16:49:44 compute-0 eager_tu[355525]:                 "ceph.crush_device_class": "",
Nov 25 16:49:44 compute-0 eager_tu[355525]:                 "ceph.encrypted": "0",
Nov 25 16:49:44 compute-0 eager_tu[355525]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:49:44 compute-0 eager_tu[355525]:                 "ceph.osd_id": "0",
Nov 25 16:49:44 compute-0 eager_tu[355525]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:49:44 compute-0 eager_tu[355525]:                 "ceph.type": "block",
Nov 25 16:49:44 compute-0 eager_tu[355525]:                 "ceph.vdo": "0"
Nov 25 16:49:44 compute-0 eager_tu[355525]:             },
Nov 25 16:49:44 compute-0 eager_tu[355525]:             "type": "block",
Nov 25 16:49:44 compute-0 eager_tu[355525]:             "vg_name": "ceph_vg0"
Nov 25 16:49:44 compute-0 eager_tu[355525]:         }
Nov 25 16:49:44 compute-0 eager_tu[355525]:     ],
Nov 25 16:49:44 compute-0 eager_tu[355525]:     "1": [
Nov 25 16:49:44 compute-0 eager_tu[355525]:         {
Nov 25 16:49:44 compute-0 eager_tu[355525]:             "devices": [
Nov 25 16:49:44 compute-0 eager_tu[355525]:                 "/dev/loop4"
Nov 25 16:49:44 compute-0 eager_tu[355525]:             ],
Nov 25 16:49:44 compute-0 eager_tu[355525]:             "lv_name": "ceph_lv1",
Nov 25 16:49:44 compute-0 eager_tu[355525]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:49:44 compute-0 eager_tu[355525]:             "lv_size": "21470642176",
Nov 25 16:49:44 compute-0 eager_tu[355525]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:49:44 compute-0 eager_tu[355525]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:49:44 compute-0 eager_tu[355525]:             "name": "ceph_lv1",
Nov 25 16:49:44 compute-0 eager_tu[355525]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:49:44 compute-0 eager_tu[355525]:             "tags": {
Nov 25 16:49:44 compute-0 eager_tu[355525]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:49:44 compute-0 eager_tu[355525]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:49:44 compute-0 eager_tu[355525]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:49:44 compute-0 eager_tu[355525]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:49:44 compute-0 eager_tu[355525]:                 "ceph.cluster_name": "ceph",
Nov 25 16:49:44 compute-0 eager_tu[355525]:                 "ceph.crush_device_class": "",
Nov 25 16:49:44 compute-0 eager_tu[355525]:                 "ceph.encrypted": "0",
Nov 25 16:49:44 compute-0 eager_tu[355525]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:49:44 compute-0 eager_tu[355525]:                 "ceph.osd_id": "1",
Nov 25 16:49:44 compute-0 eager_tu[355525]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:49:44 compute-0 eager_tu[355525]:                 "ceph.type": "block",
Nov 25 16:49:44 compute-0 eager_tu[355525]:                 "ceph.vdo": "0"
Nov 25 16:49:44 compute-0 eager_tu[355525]:             },
Nov 25 16:49:44 compute-0 eager_tu[355525]:             "type": "block",
Nov 25 16:49:44 compute-0 eager_tu[355525]:             "vg_name": "ceph_vg1"
Nov 25 16:49:44 compute-0 eager_tu[355525]:         }
Nov 25 16:49:44 compute-0 eager_tu[355525]:     ],
Nov 25 16:49:44 compute-0 eager_tu[355525]:     "2": [
Nov 25 16:49:44 compute-0 eager_tu[355525]:         {
Nov 25 16:49:44 compute-0 eager_tu[355525]:             "devices": [
Nov 25 16:49:44 compute-0 eager_tu[355525]:                 "/dev/loop5"
Nov 25 16:49:44 compute-0 eager_tu[355525]:             ],
Nov 25 16:49:44 compute-0 eager_tu[355525]:             "lv_name": "ceph_lv2",
Nov 25 16:49:44 compute-0 eager_tu[355525]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:49:44 compute-0 eager_tu[355525]:             "lv_size": "21470642176",
Nov 25 16:49:44 compute-0 eager_tu[355525]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:49:44 compute-0 eager_tu[355525]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:49:44 compute-0 eager_tu[355525]:             "name": "ceph_lv2",
Nov 25 16:49:44 compute-0 eager_tu[355525]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:49:44 compute-0 eager_tu[355525]:             "tags": {
Nov 25 16:49:44 compute-0 eager_tu[355525]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:49:44 compute-0 eager_tu[355525]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:49:44 compute-0 eager_tu[355525]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:49:44 compute-0 eager_tu[355525]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:49:44 compute-0 eager_tu[355525]:                 "ceph.cluster_name": "ceph",
Nov 25 16:49:44 compute-0 eager_tu[355525]:                 "ceph.crush_device_class": "",
Nov 25 16:49:44 compute-0 eager_tu[355525]:                 "ceph.encrypted": "0",
Nov 25 16:49:44 compute-0 eager_tu[355525]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:49:44 compute-0 eager_tu[355525]:                 "ceph.osd_id": "2",
Nov 25 16:49:44 compute-0 eager_tu[355525]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:49:44 compute-0 eager_tu[355525]:                 "ceph.type": "block",
Nov 25 16:49:44 compute-0 eager_tu[355525]:                 "ceph.vdo": "0"
Nov 25 16:49:44 compute-0 eager_tu[355525]:             },
Nov 25 16:49:44 compute-0 eager_tu[355525]:             "type": "block",
Nov 25 16:49:44 compute-0 eager_tu[355525]:             "vg_name": "ceph_vg2"
Nov 25 16:49:44 compute-0 eager_tu[355525]:         }
Nov 25 16:49:44 compute-0 eager_tu[355525]:     ]
Nov 25 16:49:44 compute-0 eager_tu[355525]: }
Nov 25 16:49:44 compute-0 nova_compute[254092]: 2025-11-25 16:49:44.151 254096 DEBUG nova.virt.libvirt.imagebackend [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Image locations are: [{'url': 'rbd://d82baeae-c742-50a4-b8f6-b5257c8a2c92/images/3855d8b5-0ce2-4690-ac71-e43d7c3e5764/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://d82baeae-c742-50a4-b8f6-b5257c8a2c92/images/3855d8b5-0ce2-4690-ac71-e43d7c3e5764/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 25 16:49:44 compute-0 systemd[1]: libpod-2330e2c35caad05ed5e080c8c6e99da6d32c697eef27d74958fc13c8877dd9d3.scope: Deactivated successfully.
Nov 25 16:49:44 compute-0 podman[355509]: 2025-11-25 16:49:44.155920262 +0000 UTC m=+0.993386987 container died 2330e2c35caad05ed5e080c8c6e99da6d32c697eef27d74958fc13c8877dd9d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 16:49:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-7adfde1379f334b80999379a2f2948d3422c33866737aa24b0323535c52d182e-merged.mount: Deactivated successfully.
Nov 25 16:49:44 compute-0 podman[355509]: 2025-11-25 16:49:44.210731341 +0000 UTC m=+1.048198066 container remove 2330e2c35caad05ed5e080c8c6e99da6d32c697eef27d74958fc13c8877dd9d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_tu, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 16:49:44 compute-0 systemd[1]: libpod-conmon-2330e2c35caad05ed5e080c8c6e99da6d32c697eef27d74958fc13c8877dd9d3.scope: Deactivated successfully.
Nov 25 16:49:44 compute-0 nova_compute[254092]: 2025-11-25 16:49:44.234 254096 DEBUG nova.virt.libvirt.imagebackend [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Selected location: {'url': 'rbd://d82baeae-c742-50a4-b8f6-b5257c8a2c92/images/3855d8b5-0ce2-4690-ac71-e43d7c3e5764/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Nov 25 16:49:44 compute-0 nova_compute[254092]: 2025-11-25 16:49:44.235 254096 DEBUG nova.storage.rbd_utils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] cloning images/3855d8b5-0ce2-4690-ac71-e43d7c3e5764@snap to None/73301044-3bad-4401-9e30-f009d417f662_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 16:49:44 compute-0 sudo[355387]: pam_unix(sudo:session): session closed for user root
Nov 25 16:49:44 compute-0 sudo[355646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:49:44 compute-0 sudo[355646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:49:44 compute-0 sudo[355646]: pam_unix(sudo:session): session closed for user root
Nov 25 16:49:44 compute-0 nova_compute[254092]: 2025-11-25 16:49:44.377 254096 DEBUG oslo_concurrency.lockutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "ac65f45ff36d3fc6c00b94e1164f55245d2a4ddb" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.474s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:44 compute-0 sudo[355691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:49:44 compute-0 sudo[355691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:49:44 compute-0 sudo[355691]: pam_unix(sudo:session): session closed for user root
Nov 25 16:49:44 compute-0 sudo[355734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:49:44 compute-0 sudo[355734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:49:44 compute-0 sudo[355734]: pam_unix(sudo:session): session closed for user root
Nov 25 16:49:44 compute-0 sudo[355779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 16:49:44 compute-0 sudo[355779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:49:44 compute-0 nova_compute[254092]: 2025-11-25 16:49:44.528 254096 DEBUG nova.objects.instance [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lazy-loading 'migration_context' on Instance uuid 73301044-3bad-4401-9e30-f009d417f662 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:49:44 compute-0 nova_compute[254092]: 2025-11-25 16:49:44.581 254096 DEBUG nova.storage.rbd_utils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] flattening vms/73301044-3bad-4401-9e30-f009d417f662_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 16:49:44 compute-0 nova_compute[254092]: 2025-11-25 16:49:44.895 254096 DEBUG nova.virt.libvirt.driver [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Image rbd:vms/73301044-3bad-4401-9e30-f009d417f662_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Nov 25 16:49:44 compute-0 nova_compute[254092]: 2025-11-25 16:49:44.896 254096 DEBUG nova.virt.libvirt.driver [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:49:44 compute-0 nova_compute[254092]: 2025-11-25 16:49:44.896 254096 DEBUG nova.virt.libvirt.driver [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Ensure instance console log exists: /var/lib/nova/instances/73301044-3bad-4401-9e30-f009d417f662/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:49:44 compute-0 nova_compute[254092]: 2025-11-25 16:49:44.896 254096 DEBUG oslo_concurrency.lockutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:44 compute-0 nova_compute[254092]: 2025-11-25 16:49:44.897 254096 DEBUG oslo_concurrency.lockutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:44 compute-0 nova_compute[254092]: 2025-11-25 16:49:44.897 254096 DEBUG oslo_concurrency.lockutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:44 compute-0 nova_compute[254092]: 2025-11-25 16:49:44.899 254096 DEBUG nova.virt.libvirt.driver [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Start _get_guest_xml network_info=[{"id": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "address": "fa:16:3e:c4:5c:49", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792a5867-7e", "ovs_interfaceid": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-25T16:49:19Z,direct_url=<?>,disk_format='raw',id=3855d8b5-0ce2-4690-ac71-e43d7c3e5764,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-932750089-shelved',owner='fbf763b31dad40d6b0d7285dc017dd89',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-25T16:49:27Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:49:44 compute-0 podman[355898]: 2025-11-25 16:49:44.903362643 +0000 UTC m=+0.054935903 container create 08623383f988bc9288cde85d258a7175502f14608f3c6a825a641a19e82e5583 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:49:44 compute-0 nova_compute[254092]: 2025-11-25 16:49:44.903 254096 WARNING nova.virt.libvirt.driver [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:49:44 compute-0 nova_compute[254092]: 2025-11-25 16:49:44.914 254096 DEBUG nova.virt.libvirt.host [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:49:44 compute-0 nova_compute[254092]: 2025-11-25 16:49:44.915 254096 DEBUG nova.virt.libvirt.host [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:49:44 compute-0 nova_compute[254092]: 2025-11-25 16:49:44.919 254096 DEBUG nova.virt.libvirt.host [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:49:44 compute-0 nova_compute[254092]: 2025-11-25 16:49:44.920 254096 DEBUG nova.virt.libvirt.host [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:49:44 compute-0 nova_compute[254092]: 2025-11-25 16:49:44.920 254096 DEBUG nova.virt.libvirt.driver [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:49:44 compute-0 nova_compute[254092]: 2025-11-25 16:49:44.920 254096 DEBUG nova.virt.hardware [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-25T16:49:19Z,direct_url=<?>,disk_format='raw',id=3855d8b5-0ce2-4690-ac71-e43d7c3e5764,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-932750089-shelved',owner='fbf763b31dad40d6b0d7285dc017dd89',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-25T16:49:27Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:49:44 compute-0 nova_compute[254092]: 2025-11-25 16:49:44.921 254096 DEBUG nova.virt.hardware [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:49:44 compute-0 nova_compute[254092]: 2025-11-25 16:49:44.921 254096 DEBUG nova.virt.hardware [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:49:44 compute-0 nova_compute[254092]: 2025-11-25 16:49:44.921 254096 DEBUG nova.virt.hardware [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:49:44 compute-0 nova_compute[254092]: 2025-11-25 16:49:44.921 254096 DEBUG nova.virt.hardware [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:49:44 compute-0 nova_compute[254092]: 2025-11-25 16:49:44.921 254096 DEBUG nova.virt.hardware [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:49:44 compute-0 nova_compute[254092]: 2025-11-25 16:49:44.921 254096 DEBUG nova.virt.hardware [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:49:44 compute-0 nova_compute[254092]: 2025-11-25 16:49:44.922 254096 DEBUG nova.virt.hardware [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:49:44 compute-0 nova_compute[254092]: 2025-11-25 16:49:44.922 254096 DEBUG nova.virt.hardware [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:49:44 compute-0 nova_compute[254092]: 2025-11-25 16:49:44.922 254096 DEBUG nova.virt.hardware [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:49:44 compute-0 nova_compute[254092]: 2025-11-25 16:49:44.922 254096 DEBUG nova.virt.hardware [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:49:44 compute-0 nova_compute[254092]: 2025-11-25 16:49:44.922 254096 DEBUG nova.objects.instance [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 73301044-3bad-4401-9e30-f009d417f662 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:49:44 compute-0 nova_compute[254092]: 2025-11-25 16:49:44.923 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:44 compute-0 nova_compute[254092]: 2025-11-25 16:49:44.935 254096 DEBUG oslo_concurrency.processutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:49:44 compute-0 systemd[1]: Started libpod-conmon-08623383f988bc9288cde85d258a7175502f14608f3c6a825a641a19e82e5583.scope.
Nov 25 16:49:44 compute-0 ceph-mon[74985]: pgmap v1954: 321 pgs: 321 active+clean; 292 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 95 KiB/s rd, 2.1 MiB/s wr, 140 op/s
Nov 25 16:49:44 compute-0 podman[355898]: 2025-11-25 16:49:44.877964923 +0000 UTC m=+0.029538203 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:49:44 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:49:44 compute-0 nova_compute[254092]: 2025-11-25 16:49:44.992 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089369.99103, 6b74b880-45f6-4f10-b09f-2696629a42e9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:49:44 compute-0 nova_compute[254092]: 2025-11-25 16:49:44.994 254096 INFO nova.compute.manager [-] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] VM Stopped (Lifecycle Event)
Nov 25 16:49:44 compute-0 podman[355898]: 2025-11-25 16:49:44.996615428 +0000 UTC m=+0.148188708 container init 08623383f988bc9288cde85d258a7175502f14608f3c6a825a641a19e82e5583 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_knuth, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:49:45 compute-0 podman[355898]: 2025-11-25 16:49:45.005986822 +0000 UTC m=+0.157560082 container start 08623383f988bc9288cde85d258a7175502f14608f3c6a825a641a19e82e5583 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_knuth, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 16:49:45 compute-0 podman[355898]: 2025-11-25 16:49:45.010550077 +0000 UTC m=+0.162123347 container attach 08623383f988bc9288cde85d258a7175502f14608f3c6a825a641a19e82e5583 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_knuth, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:49:45 compute-0 jolly_knuth[355914]: 167 167
Nov 25 16:49:45 compute-0 systemd[1]: libpod-08623383f988bc9288cde85d258a7175502f14608f3c6a825a641a19e82e5583.scope: Deactivated successfully.
Nov 25 16:49:45 compute-0 podman[355898]: 2025-11-25 16:49:45.015674545 +0000 UTC m=+0.167247805 container died 08623383f988bc9288cde85d258a7175502f14608f3c6a825a641a19e82e5583 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_knuth, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 16:49:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a8b85034cc7da8c625c9755656ef539a478399e18f377315b9587f4509e0e4b-merged.mount: Deactivated successfully.
Nov 25 16:49:45 compute-0 podman[355898]: 2025-11-25 16:49:45.058336935 +0000 UTC m=+0.209910195 container remove 08623383f988bc9288cde85d258a7175502f14608f3c6a825a641a19e82e5583 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:49:45 compute-0 systemd[1]: libpod-conmon-08623383f988bc9288cde85d258a7175502f14608f3c6a825a641a19e82e5583.scope: Deactivated successfully.
Nov 25 16:49:45 compute-0 nova_compute[254092]: 2025-11-25 16:49:45.082 254096 DEBUG nova.compute.manager [None req-095c2373-d5af-4036-82a8-c7930bec053c - - - - - -] [instance: 6b74b880-45f6-4f10-b09f-2696629a42e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:49:45 compute-0 podman[355958]: 2025-11-25 16:49:45.277447269 +0000 UTC m=+0.057117843 container create 93f4f7f584aa258346ca4b991e9af974e118e84dd8e81e38475bf101efff7e11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_driscoll, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:49:45 compute-0 systemd[1]: Started libpod-conmon-93f4f7f584aa258346ca4b991e9af974e118e84dd8e81e38475bf101efff7e11.scope.
Nov 25 16:49:45 compute-0 nova_compute[254092]: 2025-11-25 16:49:45.328 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:45 compute-0 podman[355958]: 2025-11-25 16:49:45.254434214 +0000 UTC m=+0.034104818 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:49:45 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:49:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b51d6be098799bd1cf24384d86d3213d57182c30306ddd47b19f5d4e7489397/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:49:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b51d6be098799bd1cf24384d86d3213d57182c30306ddd47b19f5d4e7489397/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:49:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b51d6be098799bd1cf24384d86d3213d57182c30306ddd47b19f5d4e7489397/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:49:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b51d6be098799bd1cf24384d86d3213d57182c30306ddd47b19f5d4e7489397/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:49:45 compute-0 podman[355958]: 2025-11-25 16:49:45.378415863 +0000 UTC m=+0.158086467 container init 93f4f7f584aa258346ca4b991e9af974e118e84dd8e81e38475bf101efff7e11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:49:45 compute-0 podman[355958]: 2025-11-25 16:49:45.389851774 +0000 UTC m=+0.169522348 container start 93f4f7f584aa258346ca4b991e9af974e118e84dd8e81e38475bf101efff7e11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:49:45 compute-0 podman[355958]: 2025-11-25 16:49:45.394760147 +0000 UTC m=+0.174430751 container attach 93f4f7f584aa258346ca4b991e9af974e118e84dd8e81e38475bf101efff7e11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_driscoll, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 16:49:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:49:45 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2774064259' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:49:45 compute-0 nova_compute[254092]: 2025-11-25 16:49:45.457 254096 DEBUG oslo_concurrency.processutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:49:45 compute-0 nova_compute[254092]: 2025-11-25 16:49:45.478 254096 DEBUG nova.storage.rbd_utils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image 73301044-3bad-4401-9e30-f009d417f662_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:49:45 compute-0 nova_compute[254092]: 2025-11-25 16:49:45.482 254096 DEBUG oslo_concurrency.processutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:49:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1955: 321 pgs: 321 active+clean; 351 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 4.5 MiB/s wr, 133 op/s
Nov 25 16:49:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:49:45 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/376739314' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:49:45 compute-0 nova_compute[254092]: 2025-11-25 16:49:45.910 254096 DEBUG oslo_concurrency.processutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:49:45 compute-0 nova_compute[254092]: 2025-11-25 16:49:45.913 254096 DEBUG nova.virt.libvirt.vif [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:47:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-932750089',display_name='tempest-ServerActionsTestOtherB-server-932750089',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-932750089',id=89,image_ref='3855d8b5-0ce2-4690-ac71-e43d7c3e5764',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-keypair-557412706',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:47:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='fbf763b31dad40d6b0d7285dc017dd89',ramdisk_id='',reservation_id='r-e7rwgh7e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_
hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1019246920',owner_user_name='tempest-ServerActionsTestOtherB-1019246920-project-member',shelved_at='2025-11-25T16:49:27.741538',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='3855d8b5-0ce2-4690-ac71-e43d7c3e5764'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:49:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='23f6db77558a477bbd8b8b46cb4107d1',uuid=73301044-3bad-4401-9e30-f009d417f662,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "address": "fa:16:3e:c4:5c:49", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792a5867-7e", "ovs_interfaceid": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:49:45 compute-0 nova_compute[254092]: 2025-11-25 16:49:45.913 254096 DEBUG nova.network.os_vif_util [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Converting VIF {"id": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "address": "fa:16:3e:c4:5c:49", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792a5867-7e", "ovs_interfaceid": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:49:45 compute-0 nova_compute[254092]: 2025-11-25 16:49:45.914 254096 DEBUG nova.network.os_vif_util [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:5c:49,bridge_name='br-int',has_traffic_filtering=True,id=792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap792a5867-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:49:45 compute-0 nova_compute[254092]: 2025-11-25 16:49:45.915 254096 DEBUG nova.objects.instance [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lazy-loading 'pci_devices' on Instance uuid 73301044-3bad-4401-9e30-f009d417f662 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:49:45 compute-0 nova_compute[254092]: 2025-11-25 16:49:45.927 254096 DEBUG nova.virt.libvirt.driver [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:49:45 compute-0 nova_compute[254092]:   <uuid>73301044-3bad-4401-9e30-f009d417f662</uuid>
Nov 25 16:49:45 compute-0 nova_compute[254092]:   <name>instance-00000059</name>
Nov 25 16:49:45 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:49:45 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:49:45 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:49:45 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:       <nova:name>tempest-ServerActionsTestOtherB-server-932750089</nova:name>
Nov 25 16:49:45 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:49:44</nova:creationTime>
Nov 25 16:49:45 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:49:45 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:49:45 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:49:45 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:49:45 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:49:45 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:49:45 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:49:45 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:49:45 compute-0 nova_compute[254092]:         <nova:user uuid="23f6db77558a477bbd8b8b46cb4107d1">tempest-ServerActionsTestOtherB-1019246920-project-member</nova:user>
Nov 25 16:49:45 compute-0 nova_compute[254092]:         <nova:project uuid="fbf763b31dad40d6b0d7285dc017dd89">tempest-ServerActionsTestOtherB-1019246920</nova:project>
Nov 25 16:49:45 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:49:45 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="3855d8b5-0ce2-4690-ac71-e43d7c3e5764"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:49:45 compute-0 nova_compute[254092]:         <nova:port uuid="792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3">
Nov 25 16:49:45 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:49:45 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:49:45 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:49:45 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:49:45 compute-0 nova_compute[254092]:     <system>
Nov 25 16:49:45 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:49:45 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:49:45 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:49:45 compute-0 nova_compute[254092]:       <entry name="serial">73301044-3bad-4401-9e30-f009d417f662</entry>
Nov 25 16:49:45 compute-0 nova_compute[254092]:       <entry name="uuid">73301044-3bad-4401-9e30-f009d417f662</entry>
Nov 25 16:49:45 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     </system>
Nov 25 16:49:45 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:49:45 compute-0 nova_compute[254092]:   <os>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:   </os>
Nov 25 16:49:45 compute-0 nova_compute[254092]:   <features>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:   </features>
Nov 25 16:49:45 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:49:45 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:49:45 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:49:45 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:49:45 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:49:45 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/73301044-3bad-4401-9e30-f009d417f662_disk">
Nov 25 16:49:45 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:       </source>
Nov 25 16:49:45 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:49:45 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:49:45 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:49:45 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/73301044-3bad-4401-9e30-f009d417f662_disk.config">
Nov 25 16:49:45 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:       </source>
Nov 25 16:49:45 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:49:45 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:49:45 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:49:45 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:c4:5c:49"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:       <target dev="tap792a5867-7e"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:49:45 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/73301044-3bad-4401-9e30-f009d417f662/console.log" append="off"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     <video>
Nov 25 16:49:45 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     </video>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     <input type="keyboard" bus="usb"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:49:45 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:49:45 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:49:45 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:49:45 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:49:45 compute-0 nova_compute[254092]: </domain>
Nov 25 16:49:45 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:49:45 compute-0 nova_compute[254092]: 2025-11-25 16:49:45.929 254096 DEBUG nova.compute.manager [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Preparing to wait for external event network-vif-plugged-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:49:45 compute-0 nova_compute[254092]: 2025-11-25 16:49:45.929 254096 DEBUG oslo_concurrency.lockutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "73301044-3bad-4401-9e30-f009d417f662-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:45 compute-0 nova_compute[254092]: 2025-11-25 16:49:45.929 254096 DEBUG oslo_concurrency.lockutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:45 compute-0 nova_compute[254092]: 2025-11-25 16:49:45.930 254096 DEBUG oslo_concurrency.lockutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:45 compute-0 nova_compute[254092]: 2025-11-25 16:49:45.930 254096 DEBUG nova.virt.libvirt.vif [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:47:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-932750089',display_name='tempest-ServerActionsTestOtherB-server-932750089',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-932750089',id=89,image_ref='3855d8b5-0ce2-4690-ac71-e43d7c3e5764',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-keypair-557412706',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:47:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='fbf763b31dad40d6b0d7285dc017dd89',ramdisk_id='',reservation_id='r-e7rwgh7e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1019246920',owner_user_name='tempest-ServerActionsTestOtherB-1019246920-project-member',shelved_at='2025-11-25T16:49:27.741538',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='3855d8b5-0ce2-4690-ac71-e43d7c3e5764'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:49:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='23f6db77558a477bbd8b8b46cb4107d1',uuid=73301044-3bad-4401-9e30-f009d417f662,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "address": "fa:16:3e:c4:5c:49", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792a5867-7e", "ovs_interfaceid": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:49:45 compute-0 nova_compute[254092]: 2025-11-25 16:49:45.931 254096 DEBUG nova.network.os_vif_util [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Converting VIF {"id": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "address": "fa:16:3e:c4:5c:49", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792a5867-7e", "ovs_interfaceid": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:49:45 compute-0 nova_compute[254092]: 2025-11-25 16:49:45.931 254096 DEBUG nova.network.os_vif_util [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:5c:49,bridge_name='br-int',has_traffic_filtering=True,id=792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap792a5867-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:49:45 compute-0 nova_compute[254092]: 2025-11-25 16:49:45.932 254096 DEBUG os_vif [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:5c:49,bridge_name='br-int',has_traffic_filtering=True,id=792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap792a5867-7e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:49:45 compute-0 nova_compute[254092]: 2025-11-25 16:49:45.932 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:45 compute-0 nova_compute[254092]: 2025-11-25 16:49:45.933 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:49:45 compute-0 nova_compute[254092]: 2025-11-25 16:49:45.933 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:49:45 compute-0 nova_compute[254092]: 2025-11-25 16:49:45.937 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:45 compute-0 nova_compute[254092]: 2025-11-25 16:49:45.937 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap792a5867-7e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:49:45 compute-0 nova_compute[254092]: 2025-11-25 16:49:45.938 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap792a5867-7e, col_values=(('external_ids', {'iface-id': '792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c4:5c:49', 'vm-uuid': '73301044-3bad-4401-9e30-f009d417f662'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:49:45 compute-0 nova_compute[254092]: 2025-11-25 16:49:45.939 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:45 compute-0 NetworkManager[48891]: <info>  [1764089385.9407] manager: (tap792a5867-7e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/403)
Nov 25 16:49:45 compute-0 nova_compute[254092]: 2025-11-25 16:49:45.942 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:49:45 compute-0 nova_compute[254092]: 2025-11-25 16:49:45.949 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:45 compute-0 nova_compute[254092]: 2025-11-25 16:49:45.950 254096 INFO os_vif [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:5c:49,bridge_name='br-int',has_traffic_filtering=True,id=792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap792a5867-7e')
Nov 25 16:49:45 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2774064259' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:49:45 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/376739314' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:49:46 compute-0 nova_compute[254092]: 2025-11-25 16:49:45.999 254096 DEBUG nova.virt.libvirt.driver [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:49:46 compute-0 nova_compute[254092]: 2025-11-25 16:49:46.000 254096 DEBUG nova.virt.libvirt.driver [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:49:46 compute-0 nova_compute[254092]: 2025-11-25 16:49:46.000 254096 DEBUG nova.virt.libvirt.driver [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] No VIF found with MAC fa:16:3e:c4:5c:49, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:49:46 compute-0 nova_compute[254092]: 2025-11-25 16:49:46.001 254096 INFO nova.virt.libvirt.driver [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Using config drive
Nov 25 16:49:46 compute-0 nova_compute[254092]: 2025-11-25 16:49:46.023 254096 DEBUG nova.storage.rbd_utils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image 73301044-3bad-4401-9e30-f009d417f662_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:49:46 compute-0 nova_compute[254092]: 2025-11-25 16:49:46.047 254096 DEBUG nova.objects.instance [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 73301044-3bad-4401-9e30-f009d417f662 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:49:46 compute-0 nova_compute[254092]: 2025-11-25 16:49:46.098 254096 DEBUG nova.objects.instance [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lazy-loading 'keypairs' on Instance uuid 73301044-3bad-4401-9e30-f009d417f662 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:49:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:49:46 compute-0 goofy_driscoll[355973]: {
Nov 25 16:49:46 compute-0 goofy_driscoll[355973]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 16:49:46 compute-0 goofy_driscoll[355973]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:49:46 compute-0 goofy_driscoll[355973]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 16:49:46 compute-0 goofy_driscoll[355973]:         "osd_id": 1,
Nov 25 16:49:46 compute-0 goofy_driscoll[355973]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:49:46 compute-0 goofy_driscoll[355973]:         "type": "bluestore"
Nov 25 16:49:46 compute-0 goofy_driscoll[355973]:     },
Nov 25 16:49:46 compute-0 goofy_driscoll[355973]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 16:49:46 compute-0 goofy_driscoll[355973]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:49:46 compute-0 goofy_driscoll[355973]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 16:49:46 compute-0 goofy_driscoll[355973]:         "osd_id": 2,
Nov 25 16:49:46 compute-0 goofy_driscoll[355973]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:49:46 compute-0 goofy_driscoll[355973]:         "type": "bluestore"
Nov 25 16:49:46 compute-0 goofy_driscoll[355973]:     },
Nov 25 16:49:46 compute-0 goofy_driscoll[355973]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 16:49:46 compute-0 goofy_driscoll[355973]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:49:46 compute-0 goofy_driscoll[355973]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 16:49:46 compute-0 goofy_driscoll[355973]:         "osd_id": 0,
Nov 25 16:49:46 compute-0 goofy_driscoll[355973]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:49:46 compute-0 goofy_driscoll[355973]:         "type": "bluestore"
Nov 25 16:49:46 compute-0 goofy_driscoll[355973]:     }
Nov 25 16:49:46 compute-0 goofy_driscoll[355973]: }
Nov 25 16:49:46 compute-0 systemd[1]: libpod-93f4f7f584aa258346ca4b991e9af974e118e84dd8e81e38475bf101efff7e11.scope: Deactivated successfully.
Nov 25 16:49:46 compute-0 systemd[1]: libpod-93f4f7f584aa258346ca4b991e9af974e118e84dd8e81e38475bf101efff7e11.scope: Consumed 1.007s CPU time.
Nov 25 16:49:46 compute-0 podman[355958]: 2025-11-25 16:49:46.409782351 +0000 UTC m=+1.189452955 container died 93f4f7f584aa258346ca4b991e9af974e118e84dd8e81e38475bf101efff7e11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_driscoll, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:49:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b51d6be098799bd1cf24384d86d3213d57182c30306ddd47b19f5d4e7489397-merged.mount: Deactivated successfully.
Nov 25 16:49:46 compute-0 nova_compute[254092]: 2025-11-25 16:49:46.468 254096 INFO nova.virt.libvirt.driver [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Creating config drive at /var/lib/nova/instances/73301044-3bad-4401-9e30-f009d417f662/disk.config
Nov 25 16:49:46 compute-0 podman[355958]: 2025-11-25 16:49:46.472511275 +0000 UTC m=+1.252181849 container remove 93f4f7f584aa258346ca4b991e9af974e118e84dd8e81e38475bf101efff7e11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_driscoll, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 16:49:46 compute-0 systemd[1]: libpod-conmon-93f4f7f584aa258346ca4b991e9af974e118e84dd8e81e38475bf101efff7e11.scope: Deactivated successfully.
Nov 25 16:49:46 compute-0 nova_compute[254092]: 2025-11-25 16:49:46.480 254096 DEBUG oslo_concurrency.processutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/73301044-3bad-4401-9e30-f009d417f662/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkxj3y20r execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:49:46 compute-0 sudo[355779]: pam_unix(sudo:session): session closed for user root
Nov 25 16:49:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:49:46 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:49:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:49:46 compute-0 nova_compute[254092]: 2025-11-25 16:49:46.516 254096 DEBUG nova.network.neutron [req-5d119492-51af-478d-8f83-dbc7eca58a20 req-69a208ce-f214-4bd6-b4b3-d50595aacd9e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Updated VIF entry in instance network info cache for port 792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:49:46 compute-0 nova_compute[254092]: 2025-11-25 16:49:46.517 254096 DEBUG nova.network.neutron [req-5d119492-51af-478d-8f83-dbc7eca58a20 req-69a208ce-f214-4bd6-b4b3-d50595aacd9e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Updating instance_info_cache with network_info: [{"id": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "address": "fa:16:3e:c4:5c:49", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792a5867-7e", "ovs_interfaceid": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:49:46 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:49:46 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 955e7524-0de7-4467-b8ef-ac5be120e29d does not exist
Nov 25 16:49:46 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 17b4b376-758c-4c31-8589-fda4a3e6a081 does not exist
Nov 25 16:49:46 compute-0 nova_compute[254092]: 2025-11-25 16:49:46.543 254096 DEBUG oslo_concurrency.lockutils [req-5d119492-51af-478d-8f83-dbc7eca58a20 req-69a208ce-f214-4bd6-b4b3-d50595aacd9e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-73301044-3bad-4401-9e30-f009d417f662" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:49:46 compute-0 sudo[356084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:49:46 compute-0 sudo[356084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:49:46 compute-0 sudo[356084]: pam_unix(sudo:session): session closed for user root
Nov 25 16:49:46 compute-0 nova_compute[254092]: 2025-11-25 16:49:46.623 254096 DEBUG oslo_concurrency.processutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/73301044-3bad-4401-9e30-f009d417f662/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkxj3y20r" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:49:46 compute-0 sudo[356109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 16:49:46 compute-0 sudo[356109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:49:46 compute-0 sudo[356109]: pam_unix(sudo:session): session closed for user root
Nov 25 16:49:46 compute-0 nova_compute[254092]: 2025-11-25 16:49:46.656 254096 DEBUG nova.storage.rbd_utils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] rbd image 73301044-3bad-4401-9e30-f009d417f662_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:49:46 compute-0 nova_compute[254092]: 2025-11-25 16:49:46.660 254096 DEBUG oslo_concurrency.processutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/73301044-3bad-4401-9e30-f009d417f662/disk.config 73301044-3bad-4401-9e30-f009d417f662_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:49:46 compute-0 nova_compute[254092]: 2025-11-25 16:49:46.799 254096 DEBUG oslo_concurrency.processutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/73301044-3bad-4401-9e30-f009d417f662/disk.config 73301044-3bad-4401-9e30-f009d417f662_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:49:46 compute-0 nova_compute[254092]: 2025-11-25 16:49:46.800 254096 INFO nova.virt.libvirt.driver [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Deleting local config drive /var/lib/nova/instances/73301044-3bad-4401-9e30-f009d417f662/disk.config because it was imported into RBD.
Nov 25 16:49:46 compute-0 NetworkManager[48891]: <info>  [1764089386.8496] manager: (tap792a5867-7e): new Tun device (/org/freedesktop/NetworkManager/Devices/404)
Nov 25 16:49:46 compute-0 kernel: tap792a5867-7e: entered promiscuous mode
Nov 25 16:49:46 compute-0 ovn_controller[153477]: 2025-11-25T16:49:46Z|00985|binding|INFO|Claiming lport 792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 for this chassis.
Nov 25 16:49:46 compute-0 ovn_controller[153477]: 2025-11-25T16:49:46Z|00986|binding|INFO|792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3: Claiming fa:16:3e:c4:5c:49 10.100.0.8
Nov 25 16:49:46 compute-0 nova_compute[254092]: 2025-11-25 16:49:46.852 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:46.860 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:5c:49 10.100.0.8'], port_security=['fa:16:3e:c4:5c:49 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '73301044-3bad-4401-9e30-f009d417f662', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fbf763b31dad40d6b0d7285dc017dd89', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'a96e0467-f375-4847-9156-007c15ede1a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.195'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2d4322c-9d6a-4a56-a005-6db2d2591dbd, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:49:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:46.862 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 in datapath 34b8c77e-8369-4eab-a81e-0825e5fa2919 bound to our chassis
Nov 25 16:49:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:46.865 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 34b8c77e-8369-4eab-a81e-0825e5fa2919
Nov 25 16:49:46 compute-0 systemd-udevd[356181]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:49:46 compute-0 ovn_controller[153477]: 2025-11-25T16:49:46Z|00987|binding|INFO|Setting lport 792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 ovn-installed in OVS
Nov 25 16:49:46 compute-0 ovn_controller[153477]: 2025-11-25T16:49:46Z|00988|binding|INFO|Setting lport 792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 up in Southbound
Nov 25 16:49:46 compute-0 nova_compute[254092]: 2025-11-25 16:49:46.878 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:46 compute-0 nova_compute[254092]: 2025-11-25 16:49:46.882 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:46.887 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c631a6c3-a12f-459f-9318-e45e40de64ae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:46 compute-0 NetworkManager[48891]: <info>  [1764089386.8890] device (tap792a5867-7e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:49:46 compute-0 NetworkManager[48891]: <info>  [1764089386.8948] device (tap792a5867-7e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:49:46 compute-0 systemd-machined[216343]: New machine qemu-126-instance-00000059.
Nov 25 16:49:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:46.918 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[83d1fc75-b1a0-4ddf-990c-51196b42fc71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:46 compute-0 systemd[1]: Started Virtual Machine qemu-126-instance-00000059.
Nov 25 16:49:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:46.920 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c6514bc2-b38b-4b37-b169-11325f0687fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:46.945 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c13d4fe4-959f-449d-accc-331e41303632]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:46 compute-0 ceph-mon[74985]: pgmap v1955: 321 pgs: 321 active+clean; 351 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 4.5 MiB/s wr, 133 op/s
Nov 25 16:49:46 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:49:46 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:49:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:46.963 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b3c87dde-c483-4bf9-b119-e8e4d55663f6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap34b8c77e-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:49:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 252], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570033, 'reachable_time': 42096, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356191, 'error': None, 'target': 'ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:47.009 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a455f8fc-5793-4710-9476-5a5644a1f56c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap34b8c77e-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570043, 'tstamp': 570043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356195, 'error': None, 'target': 'ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap34b8c77e-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570046, 'tstamp': 570046}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356195, 'error': None, 'target': 'ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:47.011 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34b8c77e-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:49:47 compute-0 nova_compute[254092]: 2025-11-25 16:49:47.013 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:47 compute-0 nova_compute[254092]: 2025-11-25 16:49:47.014 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:47.015 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap34b8c77e-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:49:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:47.015 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:49:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:47.016 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap34b8c77e-80, col_values=(('external_ids', {'iface-id': '031633f4-be6f-41a4-9c41-8e90000f1a4f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:49:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:47.016 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:49:47 compute-0 nova_compute[254092]: 2025-11-25 16:49:47.428 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089387.4271107, 73301044-3bad-4401-9e30-f009d417f662 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:49:47 compute-0 nova_compute[254092]: 2025-11-25 16:49:47.429 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] VM Started (Lifecycle Event)
Nov 25 16:49:47 compute-0 nova_compute[254092]: 2025-11-25 16:49:47.456 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:49:47 compute-0 nova_compute[254092]: 2025-11-25 16:49:47.462 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089387.427582, 73301044-3bad-4401-9e30-f009d417f662 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:49:47 compute-0 nova_compute[254092]: 2025-11-25 16:49:47.463 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] VM Paused (Lifecycle Event)
Nov 25 16:49:47 compute-0 nova_compute[254092]: 2025-11-25 16:49:47.481 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:49:47 compute-0 nova_compute[254092]: 2025-11-25 16:49:47.486 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:49:47 compute-0 nova_compute[254092]: 2025-11-25 16:49:47.510 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:49:47 compute-0 nova_compute[254092]: 2025-11-25 16:49:47.551 254096 DEBUG oslo_concurrency.lockutils [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Acquiring lock "6f5465d3-64cd-46fb-af8f-3b29aef5123d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:47 compute-0 nova_compute[254092]: 2025-11-25 16:49:47.552 254096 DEBUG oslo_concurrency.lockutils [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Lock "6f5465d3-64cd-46fb-af8f-3b29aef5123d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:47 compute-0 nova_compute[254092]: 2025-11-25 16:49:47.553 254096 DEBUG oslo_concurrency.lockutils [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Acquiring lock "6f5465d3-64cd-46fb-af8f-3b29aef5123d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:47 compute-0 nova_compute[254092]: 2025-11-25 16:49:47.553 254096 DEBUG oslo_concurrency.lockutils [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Lock "6f5465d3-64cd-46fb-af8f-3b29aef5123d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:47 compute-0 nova_compute[254092]: 2025-11-25 16:49:47.554 254096 DEBUG oslo_concurrency.lockutils [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Lock "6f5465d3-64cd-46fb-af8f-3b29aef5123d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:47 compute-0 nova_compute[254092]: 2025-11-25 16:49:47.555 254096 INFO nova.compute.manager [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Terminating instance
Nov 25 16:49:47 compute-0 nova_compute[254092]: 2025-11-25 16:49:47.556 254096 DEBUG nova.compute.manager [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:49:47 compute-0 kernel: tap9fefcfde-9e (unregistering): left promiscuous mode
Nov 25 16:49:47 compute-0 NetworkManager[48891]: <info>  [1764089387.5967] device (tap9fefcfde-9e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:49:47 compute-0 ovn_controller[153477]: 2025-11-25T16:49:47Z|00989|binding|INFO|Releasing lport 9fefcfde-9e55-4ed2-8521-ee26704af28c from this chassis (sb_readonly=0)
Nov 25 16:49:47 compute-0 nova_compute[254092]: 2025-11-25 16:49:47.605 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:47 compute-0 ovn_controller[153477]: 2025-11-25T16:49:47Z|00990|binding|INFO|Setting lport 9fefcfde-9e55-4ed2-8521-ee26704af28c down in Southbound
Nov 25 16:49:47 compute-0 ovn_controller[153477]: 2025-11-25T16:49:47Z|00991|binding|INFO|Removing iface tap9fefcfde-9e ovn-installed in OVS
Nov 25 16:49:47 compute-0 nova_compute[254092]: 2025-11-25 16:49:47.608 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:47.612 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:07:67 10.100.0.7'], port_security=['fa:16:3e:ab:07:67 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '6f5465d3-64cd-46fb-af8f-3b29aef5123d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e3082221-dfbe-4119-bc6f-940f05f1b99c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b6ce5017d19f45bcb3b13bf55faa9493', 'neutron:revision_number': '4', 'neutron:security_group_ids': '25d23027-5b7a-4134-9db5-2818d483fb18', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=65852f31-5e85-4656-9a53-3d977e20f573, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=9fefcfde-9e55-4ed2-8521-ee26704af28c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:49:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:47.613 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 9fefcfde-9e55-4ed2-8521-ee26704af28c in datapath e3082221-dfbe-4119-bc6f-940f05f1b99c unbound from our chassis
Nov 25 16:49:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:47.614 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e3082221-dfbe-4119-bc6f-940f05f1b99c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:49:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:47.615 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c78639d6-970a-4360-bb8b-7155d06ddfce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:47.616 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e3082221-dfbe-4119-bc6f-940f05f1b99c namespace which is not needed anymore
Nov 25 16:49:47 compute-0 nova_compute[254092]: 2025-11-25 16:49:47.625 254096 DEBUG nova.compute.manager [req-e2a6ecd1-df40-44e8-9227-bac49ee4d6e5 req-149b17a3-c2c6-408b-99d5-ae4941322b90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Received event network-vif-plugged-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:49:47 compute-0 nova_compute[254092]: 2025-11-25 16:49:47.625 254096 DEBUG oslo_concurrency.lockutils [req-e2a6ecd1-df40-44e8-9227-bac49ee4d6e5 req-149b17a3-c2c6-408b-99d5-ae4941322b90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "73301044-3bad-4401-9e30-f009d417f662-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:47 compute-0 nova_compute[254092]: 2025-11-25 16:49:47.626 254096 DEBUG oslo_concurrency.lockutils [req-e2a6ecd1-df40-44e8-9227-bac49ee4d6e5 req-149b17a3-c2c6-408b-99d5-ae4941322b90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:47 compute-0 nova_compute[254092]: 2025-11-25 16:49:47.626 254096 DEBUG oslo_concurrency.lockutils [req-e2a6ecd1-df40-44e8-9227-bac49ee4d6e5 req-149b17a3-c2c6-408b-99d5-ae4941322b90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:47 compute-0 nova_compute[254092]: 2025-11-25 16:49:47.626 254096 DEBUG nova.compute.manager [req-e2a6ecd1-df40-44e8-9227-bac49ee4d6e5 req-149b17a3-c2c6-408b-99d5-ae4941322b90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Processing event network-vif-plugged-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:49:47 compute-0 nova_compute[254092]: 2025-11-25 16:49:47.627 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:47 compute-0 nova_compute[254092]: 2025-11-25 16:49:47.627 254096 DEBUG nova.compute.manager [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:49:47 compute-0 nova_compute[254092]: 2025-11-25 16:49:47.631 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089387.6315713, 73301044-3bad-4401-9e30-f009d417f662 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:49:47 compute-0 nova_compute[254092]: 2025-11-25 16:49:47.632 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] VM Resumed (Lifecycle Event)
Nov 25 16:49:47 compute-0 nova_compute[254092]: 2025-11-25 16:49:47.633 254096 DEBUG nova.virt.libvirt.driver [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:49:47 compute-0 nova_compute[254092]: 2025-11-25 16:49:47.636 254096 INFO nova.virt.libvirt.driver [-] [instance: 73301044-3bad-4401-9e30-f009d417f662] Instance spawned successfully.
Nov 25 16:49:47 compute-0 nova_compute[254092]: 2025-11-25 16:49:47.647 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:49:47 compute-0 nova_compute[254092]: 2025-11-25 16:49:47.651 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:49:47 compute-0 systemd[1]: machine-qemu\x2d125\x2dinstance\x2d00000064.scope: Deactivated successfully.
Nov 25 16:49:47 compute-0 systemd[1]: machine-qemu\x2d125\x2dinstance\x2d00000064.scope: Consumed 4.307s CPU time.
Nov 25 16:49:47 compute-0 systemd-machined[216343]: Machine qemu-125-instance-00000064 terminated.
Nov 25 16:49:47 compute-0 nova_compute[254092]: 2025-11-25 16:49:47.673 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:49:47 compute-0 neutron-haproxy-ovnmeta-e3082221-dfbe-4119-bc6f-940f05f1b99c[355413]: [NOTICE]   (355418) : haproxy version is 2.8.14-c23fe91
Nov 25 16:49:47 compute-0 neutron-haproxy-ovnmeta-e3082221-dfbe-4119-bc6f-940f05f1b99c[355413]: [NOTICE]   (355418) : path to executable is /usr/sbin/haproxy
Nov 25 16:49:47 compute-0 neutron-haproxy-ovnmeta-e3082221-dfbe-4119-bc6f-940f05f1b99c[355413]: [WARNING]  (355418) : Exiting Master process...
Nov 25 16:49:47 compute-0 neutron-haproxy-ovnmeta-e3082221-dfbe-4119-bc6f-940f05f1b99c[355413]: [ALERT]    (355418) : Current worker (355420) exited with code 143 (Terminated)
Nov 25 16:49:47 compute-0 neutron-haproxy-ovnmeta-e3082221-dfbe-4119-bc6f-940f05f1b99c[355413]: [WARNING]  (355418) : All workers exited. Exiting... (0)
Nov 25 16:49:47 compute-0 systemd[1]: libpod-2cae034830035d544a8e13e3bebd17c5599b08145fe3351b631b41795b0182f2.scope: Deactivated successfully.
Nov 25 16:49:47 compute-0 podman[356261]: 2025-11-25 16:49:47.769249434 +0000 UTC m=+0.051722716 container died 2cae034830035d544a8e13e3bebd17c5599b08145fe3351b631b41795b0182f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e3082221-dfbe-4119-bc6f-940f05f1b99c, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:49:47 compute-0 nova_compute[254092]: 2025-11-25 16:49:47.797 254096 INFO nova.virt.libvirt.driver [-] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Instance destroyed successfully.
Nov 25 16:49:47 compute-0 nova_compute[254092]: 2025-11-25 16:49:47.799 254096 DEBUG nova.objects.instance [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Lazy-loading 'resources' on Instance uuid 6f5465d3-64cd-46fb-af8f-3b29aef5123d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:49:47 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2cae034830035d544a8e13e3bebd17c5599b08145fe3351b631b41795b0182f2-userdata-shm.mount: Deactivated successfully.
Nov 25 16:49:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-19bb944fbef87fb94ba68f50ced07af8fad100a0cb07b8d51f22d49a8cd9a98b-merged.mount: Deactivated successfully.
Nov 25 16:49:47 compute-0 podman[356261]: 2025-11-25 16:49:47.808999535 +0000 UTC m=+0.091472807 container cleanup 2cae034830035d544a8e13e3bebd17c5599b08145fe3351b631b41795b0182f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e3082221-dfbe-4119-bc6f-940f05f1b99c, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 16:49:47 compute-0 nova_compute[254092]: 2025-11-25 16:49:47.814 254096 DEBUG nova.virt.libvirt.vif [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:49:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-2033799726',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-2033799726',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-2033799726',id=100,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:49:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b6ce5017d19f45bcb3b13bf55faa9493',ramdisk_id='',reservation_id='r-ey2dhuzc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-2074703932',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-2074703932-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:49:44Z,user_data=None,user_id='5aefdc701af340eba9e8201f5065511e',uuid=6f5465d3-64cd-46fb-af8f-3b29aef5123d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9fefcfde-9e55-4ed2-8521-ee26704af28c", "address": "fa:16:3e:ab:07:67", "network": {"id": "e3082221-dfbe-4119-bc6f-940f05f1b99c", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-653591229-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6ce5017d19f45bcb3b13bf55faa9493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fefcfde-9e", "ovs_interfaceid": "9fefcfde-9e55-4ed2-8521-ee26704af28c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:49:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1956: 321 pgs: 321 active+clean; 372 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 4.0 MiB/s wr, 142 op/s
Nov 25 16:49:47 compute-0 nova_compute[254092]: 2025-11-25 16:49:47.816 254096 DEBUG nova.network.os_vif_util [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Converting VIF {"id": "9fefcfde-9e55-4ed2-8521-ee26704af28c", "address": "fa:16:3e:ab:07:67", "network": {"id": "e3082221-dfbe-4119-bc6f-940f05f1b99c", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-653591229-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6ce5017d19f45bcb3b13bf55faa9493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fefcfde-9e", "ovs_interfaceid": "9fefcfde-9e55-4ed2-8521-ee26704af28c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:49:47 compute-0 nova_compute[254092]: 2025-11-25 16:49:47.818 254096 DEBUG nova.network.os_vif_util [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:07:67,bridge_name='br-int',has_traffic_filtering=True,id=9fefcfde-9e55-4ed2-8521-ee26704af28c,network=Network(e3082221-dfbe-4119-bc6f-940f05f1b99c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fefcfde-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:49:47 compute-0 nova_compute[254092]: 2025-11-25 16:49:47.818 254096 DEBUG os_vif [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:07:67,bridge_name='br-int',has_traffic_filtering=True,id=9fefcfde-9e55-4ed2-8521-ee26704af28c,network=Network(e3082221-dfbe-4119-bc6f-940f05f1b99c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fefcfde-9e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:49:47 compute-0 nova_compute[254092]: 2025-11-25 16:49:47.832 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:47 compute-0 nova_compute[254092]: 2025-11-25 16:49:47.833 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9fefcfde-9e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:49:47 compute-0 nova_compute[254092]: 2025-11-25 16:49:47.836 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:47 compute-0 nova_compute[254092]: 2025-11-25 16:49:47.838 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:47 compute-0 nova_compute[254092]: 2025-11-25 16:49:47.840 254096 INFO os_vif [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:07:67,bridge_name='br-int',has_traffic_filtering=True,id=9fefcfde-9e55-4ed2-8521-ee26704af28c,network=Network(e3082221-dfbe-4119-bc6f-940f05f1b99c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fefcfde-9e')
Nov 25 16:49:47 compute-0 systemd[1]: libpod-conmon-2cae034830035d544a8e13e3bebd17c5599b08145fe3351b631b41795b0182f2.scope: Deactivated successfully.
Nov 25 16:49:47 compute-0 podman[356301]: 2025-11-25 16:49:47.906882515 +0000 UTC m=+0.053275289 container remove 2cae034830035d544a8e13e3bebd17c5599b08145fe3351b631b41795b0182f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e3082221-dfbe-4119-bc6f-940f05f1b99c, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 25 16:49:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:47.913 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ae89306a-dea0-48c5-b349-9ad9f25abcba]: (4, ('Tue Nov 25 04:49:47 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e3082221-dfbe-4119-bc6f-940f05f1b99c (2cae034830035d544a8e13e3bebd17c5599b08145fe3351b631b41795b0182f2)\n2cae034830035d544a8e13e3bebd17c5599b08145fe3351b631b41795b0182f2\nTue Nov 25 04:49:47 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e3082221-dfbe-4119-bc6f-940f05f1b99c (2cae034830035d544a8e13e3bebd17c5599b08145fe3351b631b41795b0182f2)\n2cae034830035d544a8e13e3bebd17c5599b08145fe3351b631b41795b0182f2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:47.915 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[14c8e8bb-0471-4585-b96c-e3add2821fff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:47.917 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape3082221-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:49:47 compute-0 nova_compute[254092]: 2025-11-25 16:49:47.919 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:47 compute-0 kernel: tape3082221-d0: left promiscuous mode
Nov 25 16:49:47 compute-0 nova_compute[254092]: 2025-11-25 16:49:47.938 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:47 compute-0 nova_compute[254092]: 2025-11-25 16:49:47.941 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:47.943 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9d765e31-e03a-463b-8597-611fe807247e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:47.957 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[88523aa4-da2e-4053-aaf4-bf60c78623c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:47.958 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[06709875-1403-49f0-a0f6-0bc75d816858]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:47.976 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[819854e3-2ae3-4a67-9d4b-76e8bf7b21a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 583925, 'reachable_time': 26434, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356334, 'error': None, 'target': 'ovnmeta-e3082221-dfbe-4119-bc6f-940f05f1b99c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:47 compute-0 systemd[1]: run-netns-ovnmeta\x2de3082221\x2ddfbe\x2d4119\x2dbc6f\x2d940f05f1b99c.mount: Deactivated successfully.
Nov 25 16:49:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:47.982 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e3082221-dfbe-4119-bc6f-940f05f1b99c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:49:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:47.982 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[53561cb7-4df5-4e8a-9172-53c7ec9cb760]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:48 compute-0 nova_compute[254092]: 2025-11-25 16:49:48.218 254096 DEBUG nova.compute.manager [req-4cd4f86f-f0b7-4f39-8e75-d53dfe913a76 req-c58bee8f-0d24-4ab9-9329-af057e8a77d3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Received event network-vif-unplugged-9fefcfde-9e55-4ed2-8521-ee26704af28c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:49:48 compute-0 nova_compute[254092]: 2025-11-25 16:49:48.219 254096 DEBUG oslo_concurrency.lockutils [req-4cd4f86f-f0b7-4f39-8e75-d53dfe913a76 req-c58bee8f-0d24-4ab9-9329-af057e8a77d3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6f5465d3-64cd-46fb-af8f-3b29aef5123d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:48 compute-0 nova_compute[254092]: 2025-11-25 16:49:48.219 254096 DEBUG oslo_concurrency.lockutils [req-4cd4f86f-f0b7-4f39-8e75-d53dfe913a76 req-c58bee8f-0d24-4ab9-9329-af057e8a77d3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6f5465d3-64cd-46fb-af8f-3b29aef5123d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:48 compute-0 nova_compute[254092]: 2025-11-25 16:49:48.219 254096 DEBUG oslo_concurrency.lockutils [req-4cd4f86f-f0b7-4f39-8e75-d53dfe913a76 req-c58bee8f-0d24-4ab9-9329-af057e8a77d3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6f5465d3-64cd-46fb-af8f-3b29aef5123d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:48 compute-0 nova_compute[254092]: 2025-11-25 16:49:48.219 254096 DEBUG nova.compute.manager [req-4cd4f86f-f0b7-4f39-8e75-d53dfe913a76 req-c58bee8f-0d24-4ab9-9329-af057e8a77d3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] No waiting events found dispatching network-vif-unplugged-9fefcfde-9e55-4ed2-8521-ee26704af28c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:49:48 compute-0 nova_compute[254092]: 2025-11-25 16:49:48.219 254096 DEBUG nova.compute.manager [req-4cd4f86f-f0b7-4f39-8e75-d53dfe913a76 req-c58bee8f-0d24-4ab9-9329-af057e8a77d3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Received event network-vif-unplugged-9fefcfde-9e55-4ed2-8521-ee26704af28c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:49:48 compute-0 nova_compute[254092]: 2025-11-25 16:49:48.223 254096 INFO nova.virt.libvirt.driver [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Deleting instance files /var/lib/nova/instances/6f5465d3-64cd-46fb-af8f-3b29aef5123d_del
Nov 25 16:49:48 compute-0 nova_compute[254092]: 2025-11-25 16:49:48.224 254096 INFO nova.virt.libvirt.driver [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Deletion of /var/lib/nova/instances/6f5465d3-64cd-46fb-af8f-3b29aef5123d_del complete
Nov 25 16:49:48 compute-0 nova_compute[254092]: 2025-11-25 16:49:48.226 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089373.221139, 435ae693-6844-49ae-977b-ec3aa89cfe70 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:49:48 compute-0 nova_compute[254092]: 2025-11-25 16:49:48.226 254096 INFO nova.compute.manager [-] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] VM Stopped (Lifecycle Event)
Nov 25 16:49:48 compute-0 nova_compute[254092]: 2025-11-25 16:49:48.254 254096 DEBUG nova.compute.manager [None req-380c019b-0bc3-42c4-8e02-82b241fdb8bb - - - - - -] [instance: 435ae693-6844-49ae-977b-ec3aa89cfe70] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:49:48 compute-0 nova_compute[254092]: 2025-11-25 16:49:48.289 254096 INFO nova.compute.manager [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Took 0.73 seconds to destroy the instance on the hypervisor.
Nov 25 16:49:48 compute-0 nova_compute[254092]: 2025-11-25 16:49:48.290 254096 DEBUG oslo.service.loopingcall [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:49:48 compute-0 nova_compute[254092]: 2025-11-25 16:49:48.293 254096 DEBUG nova.compute.manager [-] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:49:48 compute-0 nova_compute[254092]: 2025-11-25 16:49:48.293 254096 DEBUG nova.network.neutron [-] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:49:48 compute-0 nova_compute[254092]: 2025-11-25 16:49:48.737 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:48.977 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:49:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e252 do_prune osdmap full prune enabled
Nov 25 16:49:48 compute-0 ceph-mon[74985]: pgmap v1956: 321 pgs: 321 active+clean; 372 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 4.0 MiB/s wr, 142 op/s
Nov 25 16:49:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e253 e253: 3 total, 3 up, 3 in
Nov 25 16:49:49 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e253: 3 total, 3 up, 3 in
Nov 25 16:49:49 compute-0 nova_compute[254092]: 2025-11-25 16:49:49.618 254096 DEBUG nova.compute.manager [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:49:49 compute-0 podman[356337]: 2025-11-25 16:49:49.663750548 +0000 UTC m=+0.064295708 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, 
tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 16:49:49 compute-0 podman[356336]: 2025-11-25 16:49:49.683166676 +0000 UTC m=+0.093202965 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 16:49:49 compute-0 podman[356338]: 2025-11-25 16:49:49.697712601 +0000 UTC m=+0.102300182 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 16:49:49 compute-0 nova_compute[254092]: 2025-11-25 16:49:49.703 254096 DEBUG oslo_concurrency.lockutils [None req-540f723a-498c-4622-ae02-0567efb9bfe2 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 11.414s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:49 compute-0 nova_compute[254092]: 2025-11-25 16:49:49.739 254096 DEBUG nova.compute.manager [req-e68e00ee-9b30-4acd-a78a-b26650b57182 req-0556f5a1-b90d-4dcf-9ba7-6d674f065b14 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Received event network-vif-plugged-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:49:49 compute-0 nova_compute[254092]: 2025-11-25 16:49:49.739 254096 DEBUG oslo_concurrency.lockutils [req-e68e00ee-9b30-4acd-a78a-b26650b57182 req-0556f5a1-b90d-4dcf-9ba7-6d674f065b14 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "73301044-3bad-4401-9e30-f009d417f662-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:49 compute-0 nova_compute[254092]: 2025-11-25 16:49:49.740 254096 DEBUG oslo_concurrency.lockutils [req-e68e00ee-9b30-4acd-a78a-b26650b57182 req-0556f5a1-b90d-4dcf-9ba7-6d674f065b14 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:49 compute-0 nova_compute[254092]: 2025-11-25 16:49:49.740 254096 DEBUG oslo_concurrency.lockutils [req-e68e00ee-9b30-4acd-a78a-b26650b57182 req-0556f5a1-b90d-4dcf-9ba7-6d674f065b14 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:49 compute-0 nova_compute[254092]: 2025-11-25 16:49:49.740 254096 DEBUG nova.compute.manager [req-e68e00ee-9b30-4acd-a78a-b26650b57182 req-0556f5a1-b90d-4dcf-9ba7-6d674f065b14 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] No waiting events found dispatching network-vif-plugged-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:49:49 compute-0 nova_compute[254092]: 2025-11-25 16:49:49.740 254096 WARNING nova.compute.manager [req-e68e00ee-9b30-4acd-a78a-b26650b57182 req-0556f5a1-b90d-4dcf-9ba7-6d674f065b14 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Received unexpected event network-vif-plugged-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 for instance with vm_state active and task_state None.
Nov 25 16:49:49 compute-0 nova_compute[254092]: 2025-11-25 16:49:49.763 254096 DEBUG nova.network.neutron [-] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:49:49 compute-0 nova_compute[254092]: 2025-11-25 16:49:49.798 254096 INFO nova.compute.manager [-] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Took 1.50 seconds to deallocate network for instance.
Nov 25 16:49:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1958: 321 pgs: 321 active+clean; 372 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 6.5 MiB/s rd, 4.7 MiB/s wr, 165 op/s
Nov 25 16:49:49 compute-0 nova_compute[254092]: 2025-11-25 16:49:49.860 254096 DEBUG oslo_concurrency.lockutils [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:49 compute-0 nova_compute[254092]: 2025-11-25 16:49:49.861 254096 DEBUG oslo_concurrency.lockutils [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:49 compute-0 nova_compute[254092]: 2025-11-25 16:49:49.982 254096 DEBUG oslo_concurrency.processutils [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:49:50 compute-0 ceph-mon[74985]: osdmap e253: 3 total, 3 up, 3 in
Nov 25 16:49:50 compute-0 nova_compute[254092]: 2025-11-25 16:49:50.372 254096 DEBUG nova.compute.manager [req-fc13c561-85a6-4a78-b98d-902d9499e559 req-3822c373-e2fb-4b26-9f00-6e03925322ad a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Received event network-vif-plugged-9fefcfde-9e55-4ed2-8521-ee26704af28c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:49:50 compute-0 nova_compute[254092]: 2025-11-25 16:49:50.373 254096 DEBUG oslo_concurrency.lockutils [req-fc13c561-85a6-4a78-b98d-902d9499e559 req-3822c373-e2fb-4b26-9f00-6e03925322ad a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6f5465d3-64cd-46fb-af8f-3b29aef5123d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:50 compute-0 nova_compute[254092]: 2025-11-25 16:49:50.374 254096 DEBUG oslo_concurrency.lockutils [req-fc13c561-85a6-4a78-b98d-902d9499e559 req-3822c373-e2fb-4b26-9f00-6e03925322ad a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6f5465d3-64cd-46fb-af8f-3b29aef5123d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:50 compute-0 nova_compute[254092]: 2025-11-25 16:49:50.375 254096 DEBUG oslo_concurrency.lockutils [req-fc13c561-85a6-4a78-b98d-902d9499e559 req-3822c373-e2fb-4b26-9f00-6e03925322ad a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6f5465d3-64cd-46fb-af8f-3b29aef5123d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:50 compute-0 nova_compute[254092]: 2025-11-25 16:49:50.375 254096 DEBUG nova.compute.manager [req-fc13c561-85a6-4a78-b98d-902d9499e559 req-3822c373-e2fb-4b26-9f00-6e03925322ad a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] No waiting events found dispatching network-vif-plugged-9fefcfde-9e55-4ed2-8521-ee26704af28c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:49:50 compute-0 nova_compute[254092]: 2025-11-25 16:49:50.376 254096 WARNING nova.compute.manager [req-fc13c561-85a6-4a78-b98d-902d9499e559 req-3822c373-e2fb-4b26-9f00-6e03925322ad a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Received unexpected event network-vif-plugged-9fefcfde-9e55-4ed2-8521-ee26704af28c for instance with vm_state deleted and task_state None.
Nov 25 16:49:50 compute-0 nova_compute[254092]: 2025-11-25 16:49:50.376 254096 DEBUG nova.compute.manager [req-fc13c561-85a6-4a78-b98d-902d9499e559 req-3822c373-e2fb-4b26-9f00-6e03925322ad a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Received event network-vif-deleted-9fefcfde-9e55-4ed2-8521-ee26704af28c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:49:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:49:50 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2041399877' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:49:50 compute-0 nova_compute[254092]: 2025-11-25 16:49:50.455 254096 DEBUG oslo_concurrency.processutils [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:49:50 compute-0 nova_compute[254092]: 2025-11-25 16:49:50.462 254096 DEBUG nova.compute.provider_tree [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:49:50 compute-0 nova_compute[254092]: 2025-11-25 16:49:50.483 254096 DEBUG nova.scheduler.client.report [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:49:50 compute-0 nova_compute[254092]: 2025-11-25 16:49:50.514 254096 DEBUG oslo_concurrency.lockutils [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:50 compute-0 nova_compute[254092]: 2025-11-25 16:49:50.549 254096 INFO nova.scheduler.client.report [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Deleted allocations for instance 6f5465d3-64cd-46fb-af8f-3b29aef5123d
Nov 25 16:49:50 compute-0 nova_compute[254092]: 2025-11-25 16:49:50.630 254096 DEBUG oslo_concurrency.lockutils [None req-21de41ac-08f0-4b62-a10a-aeb905ce8004 5aefdc701af340eba9e8201f5065511e b6ce5017d19f45bcb3b13bf55faa9493 - - default default] Lock "6f5465d3-64cd-46fb-af8f-3b29aef5123d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:51 compute-0 ceph-mon[74985]: pgmap v1958: 321 pgs: 321 active+clean; 372 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 6.5 MiB/s rd, 4.7 MiB/s wr, 165 op/s
Nov 25 16:49:51 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2041399877' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:49:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:49:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:49:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:49:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:49:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:49:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001864924688252914 of space, bias 1.0, pg target 0.5594774064758742 quantized to 32 (current 32)
Nov 25 16:49:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:49:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:49:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:49:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:49:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:49:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.001770365305723774 of space, bias 1.0, pg target 0.5311095917171322 quantized to 32 (current 32)
Nov 25 16:49:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:49:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 16:49:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:49:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:49:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:49:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 16:49:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:49:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 16:49:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:49:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:49:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:49:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 16:49:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1959: 321 pgs: 321 active+clean; 246 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 9.4 MiB/s rd, 4.7 MiB/s wr, 329 op/s
Nov 25 16:49:52 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e253 do_prune osdmap full prune enabled
Nov 25 16:49:52 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e254 e254: 3 total, 3 up, 3 in
Nov 25 16:49:52 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e254: 3 total, 3 up, 3 in
Nov 25 16:49:52 compute-0 nova_compute[254092]: 2025-11-25 16:49:52.596 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:52 compute-0 nova_compute[254092]: 2025-11-25 16:49:52.900 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:53 compute-0 ceph-mon[74985]: pgmap v1959: 321 pgs: 321 active+clean; 246 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 9.4 MiB/s rd, 4.7 MiB/s wr, 329 op/s
Nov 25 16:49:53 compute-0 ceph-mon[74985]: osdmap e254: 3 total, 3 up, 3 in
Nov 25 16:49:53 compute-0 nova_compute[254092]: 2025-11-25 16:49:53.738 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1961: 321 pgs: 321 active+clean; 246 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 1.5 MiB/s wr, 304 op/s
Nov 25 16:49:54 compute-0 nova_compute[254092]: 2025-11-25 16:49:54.034 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:54 compute-0 nova_compute[254092]: 2025-11-25 16:49:54.618 254096 DEBUG oslo_concurrency.lockutils [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "e0098976-026f-43d8-b686-b2658f9aded9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:54 compute-0 nova_compute[254092]: 2025-11-25 16:49:54.619 254096 DEBUG oslo_concurrency.lockutils [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "e0098976-026f-43d8-b686-b2658f9aded9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:54 compute-0 nova_compute[254092]: 2025-11-25 16:49:54.619 254096 DEBUG oslo_concurrency.lockutils [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "e0098976-026f-43d8-b686-b2658f9aded9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:54 compute-0 nova_compute[254092]: 2025-11-25 16:49:54.619 254096 DEBUG oslo_concurrency.lockutils [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "e0098976-026f-43d8-b686-b2658f9aded9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:54 compute-0 nova_compute[254092]: 2025-11-25 16:49:54.619 254096 DEBUG oslo_concurrency.lockutils [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "e0098976-026f-43d8-b686-b2658f9aded9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:54 compute-0 nova_compute[254092]: 2025-11-25 16:49:54.620 254096 INFO nova.compute.manager [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Terminating instance
Nov 25 16:49:54 compute-0 nova_compute[254092]: 2025-11-25 16:49:54.621 254096 DEBUG nova.compute.manager [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:49:54 compute-0 kernel: tap591e580e-30 (unregistering): left promiscuous mode
Nov 25 16:49:54 compute-0 NetworkManager[48891]: <info>  [1764089394.6721] device (tap591e580e-30): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:49:54 compute-0 ovn_controller[153477]: 2025-11-25T16:49:54Z|00992|binding|INFO|Releasing lport 591e580e-30bb-4c0d-b1fb-96d45eca5626 from this chassis (sb_readonly=0)
Nov 25 16:49:54 compute-0 ovn_controller[153477]: 2025-11-25T16:49:54Z|00993|binding|INFO|Setting lport 591e580e-30bb-4c0d-b1fb-96d45eca5626 down in Southbound
Nov 25 16:49:54 compute-0 nova_compute[254092]: 2025-11-25 16:49:54.680 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:54 compute-0 ovn_controller[153477]: 2025-11-25T16:49:54Z|00994|binding|INFO|Removing iface tap591e580e-30 ovn-installed in OVS
Nov 25 16:49:54 compute-0 nova_compute[254092]: 2025-11-25 16:49:54.684 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:54.693 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:85:c3 10.100.0.14'], port_security=['fa:16:3e:7d:85:c3 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'e0098976-026f-43d8-b686-b2658f9aded9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fbf763b31dad40d6b0d7285dc017dd89', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ae4b77d0-13f0-4078-80f7-8ecf8872e648', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2d4322c-9d6a-4a56-a005-6db2d2591dbd, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=591e580e-30bb-4c0d-b1fb-96d45eca5626) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:49:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:54.694 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 591e580e-30bb-4c0d-b1fb-96d45eca5626 in datapath 34b8c77e-8369-4eab-a81e-0825e5fa2919 unbound from our chassis
Nov 25 16:49:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:54.695 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 34b8c77e-8369-4eab-a81e-0825e5fa2919
Nov 25 16:49:54 compute-0 nova_compute[254092]: 2025-11-25 16:49:54.708 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:54.715 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5ecc9c52-f9d0-49c5-ae2f-3cb9adc729e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:54.745 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[daf3174b-b7e4-4339-ba55-3b54e99a530a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:54 compute-0 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d00000061.scope: Deactivated successfully.
Nov 25 16:49:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:54.749 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[cf01dbae-8a24-468e-b45e-c803499c0635]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:54 compute-0 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d00000061.scope: Consumed 17.624s CPU time.
Nov 25 16:49:54 compute-0 systemd-machined[216343]: Machine qemu-119-instance-00000061 terminated.
Nov 25 16:49:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:54.778 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[5f95a154-3137-41fe-83f3-cc2a0a2cac98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:54.798 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4ef065ac-1a49-4b0f-bd40-2882f63708ee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap34b8c77e-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:49:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 252], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570033, 'reachable_time': 42096, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356433, 'error': None, 'target': 'ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:54.816 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[91a36401-97fd-41cb-8dd4-3fc89a661022]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap34b8c77e-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570043, 'tstamp': 570043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356434, 'error': None, 'target': 'ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap34b8c77e-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570046, 'tstamp': 570046}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356434, 'error': None, 'target': 'ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:54.817 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34b8c77e-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:49:54 compute-0 nova_compute[254092]: 2025-11-25 16:49:54.819 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:54 compute-0 nova_compute[254092]: 2025-11-25 16:49:54.823 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:54.823 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap34b8c77e-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:49:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:54.823 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:49:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:54.824 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap34b8c77e-80, col_values=(('external_ids', {'iface-id': '031633f4-be6f-41a4-9c41-8e90000f1a4f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:49:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:54.824 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:49:54 compute-0 nova_compute[254092]: 2025-11-25 16:49:54.858 254096 INFO nova.virt.libvirt.driver [-] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Instance destroyed successfully.
Nov 25 16:49:54 compute-0 nova_compute[254092]: 2025-11-25 16:49:54.859 254096 DEBUG nova.objects.instance [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lazy-loading 'resources' on Instance uuid e0098976-026f-43d8-b686-b2658f9aded9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:49:54 compute-0 nova_compute[254092]: 2025-11-25 16:49:54.874 254096 DEBUG nova.virt.libvirt.vif [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:48:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-738000202',display_name='tempest-ServerActionsTestOtherB-server-738000202',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-738000202',id=97,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:48:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fbf763b31dad40d6b0d7285dc017dd89',ramdisk_id='',reservation_id='r-cz0mxxg2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1019246920',owner_user_name='tempest-ServerActionsTestOtherB-1019246920-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:48:40Z,user_data=None,user_id='23f6db77558a477bbd8b8b46cb4107d1',uuid=e0098976-026f-43d8-b686-b2658f9aded9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "591e580e-30bb-4c0d-b1fb-96d45eca5626", "address": "fa:16:3e:7d:85:c3", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap591e580e-30", "ovs_interfaceid": "591e580e-30bb-4c0d-b1fb-96d45eca5626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:49:54 compute-0 nova_compute[254092]: 2025-11-25 16:49:54.875 254096 DEBUG nova.network.os_vif_util [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Converting VIF {"id": "591e580e-30bb-4c0d-b1fb-96d45eca5626", "address": "fa:16:3e:7d:85:c3", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap591e580e-30", "ovs_interfaceid": "591e580e-30bb-4c0d-b1fb-96d45eca5626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:49:54 compute-0 nova_compute[254092]: 2025-11-25 16:49:54.876 254096 DEBUG nova.network.os_vif_util [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:85:c3,bridge_name='br-int',has_traffic_filtering=True,id=591e580e-30bb-4c0d-b1fb-96d45eca5626,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap591e580e-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:49:54 compute-0 nova_compute[254092]: 2025-11-25 16:49:54.877 254096 DEBUG os_vif [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:85:c3,bridge_name='br-int',has_traffic_filtering=True,id=591e580e-30bb-4c0d-b1fb-96d45eca5626,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap591e580e-30') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:49:54 compute-0 nova_compute[254092]: 2025-11-25 16:49:54.878 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:54 compute-0 nova_compute[254092]: 2025-11-25 16:49:54.879 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap591e580e-30, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:49:54 compute-0 nova_compute[254092]: 2025-11-25 16:49:54.882 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:54 compute-0 nova_compute[254092]: 2025-11-25 16:49:54.884 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:49:54 compute-0 nova_compute[254092]: 2025-11-25 16:49:54.886 254096 INFO os_vif [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:85:c3,bridge_name='br-int',has_traffic_filtering=True,id=591e580e-30bb-4c0d-b1fb-96d45eca5626,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap591e580e-30')
Nov 25 16:49:55 compute-0 ceph-mon[74985]: pgmap v1961: 321 pgs: 321 active+clean; 246 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 1.5 MiB/s wr, 304 op/s
Nov 25 16:49:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 16:49:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2543115138' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:49:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 16:49:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2543115138' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:49:55 compute-0 nova_compute[254092]: 2025-11-25 16:49:55.267 254096 INFO nova.virt.libvirt.driver [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Deleting instance files /var/lib/nova/instances/e0098976-026f-43d8-b686-b2658f9aded9_del
Nov 25 16:49:55 compute-0 nova_compute[254092]: 2025-11-25 16:49:55.268 254096 INFO nova.virt.libvirt.driver [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Deletion of /var/lib/nova/instances/e0098976-026f-43d8-b686-b2658f9aded9_del complete
Nov 25 16:49:55 compute-0 nova_compute[254092]: 2025-11-25 16:49:55.538 254096 INFO nova.compute.manager [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Took 0.92 seconds to destroy the instance on the hypervisor.
Nov 25 16:49:55 compute-0 nova_compute[254092]: 2025-11-25 16:49:55.539 254096 DEBUG oslo.service.loopingcall [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:49:55 compute-0 nova_compute[254092]: 2025-11-25 16:49:55.540 254096 DEBUG nova.compute.manager [-] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:49:55 compute-0 nova_compute[254092]: 2025-11-25 16:49:55.540 254096 DEBUG nova.network.neutron [-] [instance: e0098976-026f-43d8-b686-b2658f9aded9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:49:55 compute-0 nova_compute[254092]: 2025-11-25 16:49:55.644 254096 DEBUG oslo_concurrency.lockutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "1ae1094f-81aa-490c-80ca-4eba95f46cac" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:55 compute-0 nova_compute[254092]: 2025-11-25 16:49:55.645 254096 DEBUG oslo_concurrency.lockutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "1ae1094f-81aa-490c-80ca-4eba95f46cac" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:55 compute-0 nova_compute[254092]: 2025-11-25 16:49:55.671 254096 DEBUG nova.compute.manager [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:49:55 compute-0 nova_compute[254092]: 2025-11-25 16:49:55.758 254096 DEBUG oslo_concurrency.lockutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:55 compute-0 nova_compute[254092]: 2025-11-25 16:49:55.759 254096 DEBUG oslo_concurrency.lockutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:55 compute-0 nova_compute[254092]: 2025-11-25 16:49:55.775 254096 DEBUG nova.virt.hardware [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:49:55 compute-0 nova_compute[254092]: 2025-11-25 16:49:55.775 254096 INFO nova.compute.claims [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:49:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1962: 321 pgs: 321 active+clean; 150 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 28 KiB/s wr, 269 op/s
Nov 25 16:49:55 compute-0 nova_compute[254092]: 2025-11-25 16:49:55.964 254096 DEBUG oslo_concurrency.processutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:49:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/2543115138' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:49:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/2543115138' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:49:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:49:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e254 do_prune osdmap full prune enabled
Nov 25 16:49:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e255 e255: 3 total, 3 up, 3 in
Nov 25 16:49:56 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e255: 3 total, 3 up, 3 in
Nov 25 16:49:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:49:56 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2803006270' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:49:56 compute-0 nova_compute[254092]: 2025-11-25 16:49:56.395 254096 DEBUG oslo_concurrency.processutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:49:56 compute-0 nova_compute[254092]: 2025-11-25 16:49:56.401 254096 DEBUG nova.compute.provider_tree [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:49:56 compute-0 nova_compute[254092]: 2025-11-25 16:49:56.416 254096 DEBUG nova.scheduler.client.report [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:49:56 compute-0 nova_compute[254092]: 2025-11-25 16:49:56.432 254096 DEBUG oslo_concurrency.lockutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:56 compute-0 nova_compute[254092]: 2025-11-25 16:49:56.432 254096 DEBUG nova.compute.manager [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:49:56 compute-0 nova_compute[254092]: 2025-11-25 16:49:56.471 254096 DEBUG nova.compute.manager [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:49:56 compute-0 nova_compute[254092]: 2025-11-25 16:49:56.472 254096 DEBUG nova.network.neutron [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:49:56 compute-0 nova_compute[254092]: 2025-11-25 16:49:56.490 254096 INFO nova.virt.libvirt.driver [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:49:56 compute-0 nova_compute[254092]: 2025-11-25 16:49:56.505 254096 DEBUG nova.compute.manager [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:49:56 compute-0 ovn_controller[153477]: 2025-11-25T16:49:56Z|00995|binding|INFO|Releasing lport 031633f4-be6f-41a4-9c41-8e90000f1a4f from this chassis (sb_readonly=0)
Nov 25 16:49:56 compute-0 nova_compute[254092]: 2025-11-25 16:49:56.600 254096 DEBUG nova.compute.manager [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:49:56 compute-0 nova_compute[254092]: 2025-11-25 16:49:56.601 254096 DEBUG nova.virt.libvirt.driver [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:49:56 compute-0 nova_compute[254092]: 2025-11-25 16:49:56.602 254096 INFO nova.virt.libvirt.driver [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Creating image(s)
Nov 25 16:49:56 compute-0 nova_compute[254092]: 2025-11-25 16:49:56.638 254096 DEBUG nova.storage.rbd_utils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 1ae1094f-81aa-490c-80ca-4eba95f46cac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:49:56 compute-0 nova_compute[254092]: 2025-11-25 16:49:56.669 254096 DEBUG nova.storage.rbd_utils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 1ae1094f-81aa-490c-80ca-4eba95f46cac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:49:56 compute-0 nova_compute[254092]: 2025-11-25 16:49:56.699 254096 DEBUG nova.storage.rbd_utils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 1ae1094f-81aa-490c-80ca-4eba95f46cac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:49:56 compute-0 nova_compute[254092]: 2025-11-25 16:49:56.704 254096 DEBUG oslo_concurrency.processutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:49:56 compute-0 nova_compute[254092]: 2025-11-25 16:49:56.748 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:56 compute-0 nova_compute[254092]: 2025-11-25 16:49:56.770 254096 DEBUG nova.network.neutron [-] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:49:56 compute-0 nova_compute[254092]: 2025-11-25 16:49:56.789 254096 INFO nova.compute.manager [-] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Took 1.25 seconds to deallocate network for instance.
Nov 25 16:49:56 compute-0 nova_compute[254092]: 2025-11-25 16:49:56.795 254096 DEBUG oslo_concurrency.processutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:49:56 compute-0 nova_compute[254092]: 2025-11-25 16:49:56.799 254096 DEBUG oslo_concurrency.lockutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:56 compute-0 nova_compute[254092]: 2025-11-25 16:49:56.799 254096 DEBUG oslo_concurrency.lockutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:56 compute-0 nova_compute[254092]: 2025-11-25 16:49:56.800 254096 DEBUG oslo_concurrency.lockutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:56 compute-0 nova_compute[254092]: 2025-11-25 16:49:56.834 254096 DEBUG nova.storage.rbd_utils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 1ae1094f-81aa-490c-80ca-4eba95f46cac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:49:56 compute-0 nova_compute[254092]: 2025-11-25 16:49:56.840 254096 DEBUG oslo_concurrency.processutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 1ae1094f-81aa-490c-80ca-4eba95f46cac_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:49:56 compute-0 nova_compute[254092]: 2025-11-25 16:49:56.904 254096 DEBUG nova.policy [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3b30410358c448a693a9dc40ee6aafb0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:49:56 compute-0 nova_compute[254092]: 2025-11-25 16:49:56.911 254096 DEBUG oslo_concurrency.lockutils [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:56 compute-0 nova_compute[254092]: 2025-11-25 16:49:56.911 254096 DEBUG oslo_concurrency.lockutils [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:56 compute-0 nova_compute[254092]: 2025-11-25 16:49:56.914 254096 DEBUG nova.compute.manager [req-1720b1c4-a30e-4d9a-a4f8-56e6928a8c2f req-4b1a7d78-bdc3-4ad5-8ca3-9aec2c1d902c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Received event network-vif-deleted-591e580e-30bb-4c0d-b1fb-96d45eca5626 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:49:57 compute-0 nova_compute[254092]: 2025-11-25 16:49:57.020 254096 DEBUG oslo_concurrency.processutils [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:49:57 compute-0 nova_compute[254092]: 2025-11-25 16:49:57.130 254096 DEBUG oslo_concurrency.processutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 1ae1094f-81aa-490c-80ca-4eba95f46cac_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.290s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:49:57 compute-0 nova_compute[254092]: 2025-11-25 16:49:57.191 254096 DEBUG nova.storage.rbd_utils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] resizing rbd image 1ae1094f-81aa-490c-80ca-4eba95f46cac_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:49:57 compute-0 ceph-mon[74985]: pgmap v1962: 321 pgs: 321 active+clean; 150 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 28 KiB/s wr, 269 op/s
Nov 25 16:49:57 compute-0 ceph-mon[74985]: osdmap e255: 3 total, 3 up, 3 in
Nov 25 16:49:57 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2803006270' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:49:57 compute-0 nova_compute[254092]: 2025-11-25 16:49:57.275 254096 DEBUG nova.objects.instance [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'migration_context' on Instance uuid 1ae1094f-81aa-490c-80ca-4eba95f46cac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:49:57 compute-0 nova_compute[254092]: 2025-11-25 16:49:57.293 254096 DEBUG nova.virt.libvirt.driver [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:49:57 compute-0 nova_compute[254092]: 2025-11-25 16:49:57.294 254096 DEBUG nova.virt.libvirt.driver [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Ensure instance console log exists: /var/lib/nova/instances/1ae1094f-81aa-490c-80ca-4eba95f46cac/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:49:57 compute-0 nova_compute[254092]: 2025-11-25 16:49:57.294 254096 DEBUG oslo_concurrency.lockutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:57 compute-0 nova_compute[254092]: 2025-11-25 16:49:57.294 254096 DEBUG oslo_concurrency.lockutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:57 compute-0 nova_compute[254092]: 2025-11-25 16:49:57.294 254096 DEBUG oslo_concurrency.lockutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:49:57 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4139327757' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:49:57 compute-0 nova_compute[254092]: 2025-11-25 16:49:57.512 254096 DEBUG oslo_concurrency.processutils [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:49:57 compute-0 nova_compute[254092]: 2025-11-25 16:49:57.520 254096 DEBUG nova.compute.provider_tree [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:49:57 compute-0 nova_compute[254092]: 2025-11-25 16:49:57.533 254096 DEBUG nova.scheduler.client.report [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:49:57 compute-0 nova_compute[254092]: 2025-11-25 16:49:57.555 254096 DEBUG oslo_concurrency.lockutils [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:57 compute-0 nova_compute[254092]: 2025-11-25 16:49:57.594 254096 INFO nova.scheduler.client.report [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Deleted allocations for instance e0098976-026f-43d8-b686-b2658f9aded9
Nov 25 16:49:57 compute-0 nova_compute[254092]: 2025-11-25 16:49:57.686 254096 DEBUG oslo_concurrency.lockutils [None req-15064c90-786d-4698-a67a-64ec5f35439f 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "e0098976-026f-43d8-b686-b2658f9aded9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.068s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1964: 321 pgs: 321 active+clean; 121 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 29 KiB/s wr, 277 op/s
Nov 25 16:49:58 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4139327757' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:49:58 compute-0 nova_compute[254092]: 2025-11-25 16:49:58.677 254096 DEBUG oslo_concurrency.lockutils [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "73301044-3bad-4401-9e30-f009d417f662" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:58 compute-0 nova_compute[254092]: 2025-11-25 16:49:58.678 254096 DEBUG oslo_concurrency.lockutils [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:58 compute-0 nova_compute[254092]: 2025-11-25 16:49:58.678 254096 DEBUG oslo_concurrency.lockutils [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "73301044-3bad-4401-9e30-f009d417f662-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:58 compute-0 nova_compute[254092]: 2025-11-25 16:49:58.678 254096 DEBUG oslo_concurrency.lockutils [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:58 compute-0 nova_compute[254092]: 2025-11-25 16:49:58.679 254096 DEBUG oslo_concurrency.lockutils [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:58 compute-0 nova_compute[254092]: 2025-11-25 16:49:58.679 254096 INFO nova.compute.manager [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Terminating instance
Nov 25 16:49:58 compute-0 nova_compute[254092]: 2025-11-25 16:49:58.680 254096 DEBUG nova.compute.manager [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:49:58 compute-0 kernel: tap792a5867-7e (unregistering): left promiscuous mode
Nov 25 16:49:58 compute-0 nova_compute[254092]: 2025-11-25 16:49:58.723 254096 DEBUG nova.network.neutron [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Successfully created port: 1d6ef4a2-8289-4c88-b3f3-481435a4dab0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:49:58 compute-0 NetworkManager[48891]: <info>  [1764089398.7252] device (tap792a5867-7e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:49:58 compute-0 ovn_controller[153477]: 2025-11-25T16:49:58Z|00996|binding|INFO|Releasing lport 792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 from this chassis (sb_readonly=0)
Nov 25 16:49:58 compute-0 ovn_controller[153477]: 2025-11-25T16:49:58Z|00997|binding|INFO|Setting lport 792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 down in Southbound
Nov 25 16:49:58 compute-0 nova_compute[254092]: 2025-11-25 16:49:58.734 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:58 compute-0 ovn_controller[153477]: 2025-11-25T16:49:58Z|00998|binding|INFO|Removing iface tap792a5867-7e ovn-installed in OVS
Nov 25 16:49:58 compute-0 nova_compute[254092]: 2025-11-25 16:49:58.736 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:58 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:58.741 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:5c:49 10.100.0.8'], port_security=['fa:16:3e:c4:5c:49 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '73301044-3bad-4401-9e30-f009d417f662', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fbf763b31dad40d6b0d7285dc017dd89', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'a96e0467-f375-4847-9156-007c15ede1a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.195', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2d4322c-9d6a-4a56-a005-6db2d2591dbd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:49:58 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:58.742 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 in datapath 34b8c77e-8369-4eab-a81e-0825e5fa2919 unbound from our chassis
Nov 25 16:49:58 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:58.744 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 34b8c77e-8369-4eab-a81e-0825e5fa2919, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:49:58 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:58.745 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fbf9bdbc-6b0b-4e0b-bc5e-a9cec7112ab7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:58 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:58.746 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919 namespace which is not needed anymore
Nov 25 16:49:58 compute-0 nova_compute[254092]: 2025-11-25 16:49:58.758 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:58 compute-0 systemd[1]: machine-qemu\x2d126\x2dinstance\x2d00000059.scope: Deactivated successfully.
Nov 25 16:49:58 compute-0 systemd[1]: machine-qemu\x2d126\x2dinstance\x2d00000059.scope: Consumed 11.929s CPU time.
Nov 25 16:49:58 compute-0 systemd-machined[216343]: Machine qemu-126-instance-00000059 terminated.
Nov 25 16:49:58 compute-0 neutron-haproxy-ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919[345830]: [NOTICE]   (345858) : haproxy version is 2.8.14-c23fe91
Nov 25 16:49:58 compute-0 neutron-haproxy-ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919[345830]: [NOTICE]   (345858) : path to executable is /usr/sbin/haproxy
Nov 25 16:49:58 compute-0 neutron-haproxy-ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919[345830]: [WARNING]  (345858) : Exiting Master process...
Nov 25 16:49:58 compute-0 neutron-haproxy-ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919[345830]: [ALERT]    (345858) : Current worker (345862) exited with code 143 (Terminated)
Nov 25 16:49:58 compute-0 neutron-haproxy-ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919[345830]: [WARNING]  (345858) : All workers exited. Exiting... (0)
Nov 25 16:49:58 compute-0 systemd[1]: libpod-85f477ca333eee044d3513003d2a878b429ca319089d88a5495353550b3a5e5a.scope: Deactivated successfully.
Nov 25 16:49:58 compute-0 podman[356701]: 2025-11-25 16:49:58.880608657 +0000 UTC m=+0.043188805 container died 85f477ca333eee044d3513003d2a878b429ca319089d88a5495353550b3a5e5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:49:58 compute-0 nova_compute[254092]: 2025-11-25 16:49:58.900 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:58 compute-0 nova_compute[254092]: 2025-11-25 16:49:58.906 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:58 compute-0 nova_compute[254092]: 2025-11-25 16:49:58.916 254096 INFO nova.virt.libvirt.driver [-] [instance: 73301044-3bad-4401-9e30-f009d417f662] Instance destroyed successfully.
Nov 25 16:49:58 compute-0 nova_compute[254092]: 2025-11-25 16:49:58.916 254096 DEBUG nova.objects.instance [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lazy-loading 'resources' on Instance uuid 73301044-3bad-4401-9e30-f009d417f662 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:49:58 compute-0 nova_compute[254092]: 2025-11-25 16:49:58.933 254096 DEBUG nova.virt.libvirt.vif [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:47:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-932750089',display_name='tempest-ServerActionsTestOtherB-server-932750089',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-932750089',id=89,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBnTONYhZI5heiiXfLNfrX/OXgVvYUSn4j15WmEIpbpXRla6y35HHtq1Z0p1o7qCtHccmkb3EmOHzY/7y/6dkmUi3UFUSG+S7Mj9iMzU9G5dSAHiNyUuZOtLj/6FJpmuqA==',key_name='tempest-keypair-557412706',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:49:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fbf763b31dad40d6b0d7285dc017dd89',ramdisk_id='',reservation_id='r-e7rwgh7e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1019246920',owner_user_name='tempest-ServerActionsTestOtherB-1019246920-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:49:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='23f6db77558a477bbd8b8b46cb4107d1',uuid=73301044-3bad-4401-9e30-f009d417f662,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "address": "fa:16:3e:c4:5c:49", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792a5867-7e", "ovs_interfaceid": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:49:58 compute-0 nova_compute[254092]: 2025-11-25 16:49:58.933 254096 DEBUG nova.network.os_vif_util [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Converting VIF {"id": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "address": "fa:16:3e:c4:5c:49", "network": {"id": "34b8c77e-8369-4eab-a81e-0825e5fa2919", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1802076949-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbf763b31dad40d6b0d7285dc017dd89", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792a5867-7e", "ovs_interfaceid": "792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:49:58 compute-0 nova_compute[254092]: 2025-11-25 16:49:58.934 254096 DEBUG nova.network.os_vif_util [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:5c:49,bridge_name='br-int',has_traffic_filtering=True,id=792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap792a5867-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:49:58 compute-0 nova_compute[254092]: 2025-11-25 16:49:58.934 254096 DEBUG os_vif [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:5c:49,bridge_name='br-int',has_traffic_filtering=True,id=792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap792a5867-7e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:49:58 compute-0 nova_compute[254092]: 2025-11-25 16:49:58.936 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:58 compute-0 nova_compute[254092]: 2025-11-25 16:49:58.936 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap792a5867-7e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:49:58 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-85f477ca333eee044d3513003d2a878b429ca319089d88a5495353550b3a5e5a-userdata-shm.mount: Deactivated successfully.
Nov 25 16:49:58 compute-0 nova_compute[254092]: 2025-11-25 16:49:58.941 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-94e7daa1a78ba94c079dcbea794e084a9276b64e1bfa6e1a2b7fa4bec0c3a08d-merged.mount: Deactivated successfully.
Nov 25 16:49:58 compute-0 nova_compute[254092]: 2025-11-25 16:49:58.943 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:49:58 compute-0 nova_compute[254092]: 2025-11-25 16:49:58.945 254096 INFO os_vif [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:5c:49,bridge_name='br-int',has_traffic_filtering=True,id=792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3,network=Network(34b8c77e-8369-4eab-a81e-0825e5fa2919),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap792a5867-7e')
Nov 25 16:49:58 compute-0 podman[356701]: 2025-11-25 16:49:58.951975276 +0000 UTC m=+0.114555414 container cleanup 85f477ca333eee044d3513003d2a878b429ca319089d88a5495353550b3a5e5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:49:58 compute-0 systemd[1]: libpod-conmon-85f477ca333eee044d3513003d2a878b429ca319089d88a5495353550b3a5e5a.scope: Deactivated successfully.
Nov 25 16:49:59 compute-0 podman[356751]: 2025-11-25 16:49:59.016425688 +0000 UTC m=+0.042969249 container remove 85f477ca333eee044d3513003d2a878b429ca319089d88a5495353550b3a5e5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 16:49:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:59.022 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[85068c00-16cb-410c-a68a-3cc3774c1bd5]: (4, ('Tue Nov 25 04:49:58 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919 (85f477ca333eee044d3513003d2a878b429ca319089d88a5495353550b3a5e5a)\n85f477ca333eee044d3513003d2a878b429ca319089d88a5495353550b3a5e5a\nTue Nov 25 04:49:58 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919 (85f477ca333eee044d3513003d2a878b429ca319089d88a5495353550b3a5e5a)\n85f477ca333eee044d3513003d2a878b429ca319089d88a5495353550b3a5e5a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:59.024 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[eac1db0f-54ee-4065-b31a-835aa30f4dfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:59.024 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34b8c77e-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:49:59 compute-0 nova_compute[254092]: 2025-11-25 16:49:59.026 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:59 compute-0 kernel: tap34b8c77e-80: left promiscuous mode
Nov 25 16:49:59 compute-0 nova_compute[254092]: 2025-11-25 16:49:59.033 254096 DEBUG nova.compute.manager [req-5cf0a51c-e7e2-4328-9994-7e18ecdbca21 req-40c9098c-70b4-4dcb-b875-75124c0465d2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Received event network-vif-unplugged-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:49:59 compute-0 nova_compute[254092]: 2025-11-25 16:49:59.033 254096 DEBUG oslo_concurrency.lockutils [req-5cf0a51c-e7e2-4328-9994-7e18ecdbca21 req-40c9098c-70b4-4dcb-b875-75124c0465d2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "73301044-3bad-4401-9e30-f009d417f662-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:49:59 compute-0 nova_compute[254092]: 2025-11-25 16:49:59.033 254096 DEBUG oslo_concurrency.lockutils [req-5cf0a51c-e7e2-4328-9994-7e18ecdbca21 req-40c9098c-70b4-4dcb-b875-75124c0465d2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:49:59 compute-0 nova_compute[254092]: 2025-11-25 16:49:59.034 254096 DEBUG oslo_concurrency.lockutils [req-5cf0a51c-e7e2-4328-9994-7e18ecdbca21 req-40c9098c-70b4-4dcb-b875-75124c0465d2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:49:59 compute-0 nova_compute[254092]: 2025-11-25 16:49:59.034 254096 DEBUG nova.compute.manager [req-5cf0a51c-e7e2-4328-9994-7e18ecdbca21 req-40c9098c-70b4-4dcb-b875-75124c0465d2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] No waiting events found dispatching network-vif-unplugged-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:49:59 compute-0 nova_compute[254092]: 2025-11-25 16:49:59.034 254096 DEBUG nova.compute.manager [req-5cf0a51c-e7e2-4328-9994-7e18ecdbca21 req-40c9098c-70b4-4dcb-b875-75124c0465d2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Received event network-vif-unplugged-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:49:59 compute-0 nova_compute[254092]: 2025-11-25 16:49:59.042 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:49:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:59.045 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c6166128-596a-4dfd-9b88-91c0629df42a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:59.061 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9ec47f9f-3f58-46cb-9b04-9c17ea3c8598]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:59.062 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d1fb03e1-edfc-4f59-a2db-01939dc9e0c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:59.076 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[85451b47-6371-415a-830f-37550e7eb757]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570025, 'reachable_time': 31964, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356769, 'error': None, 'target': 'ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:59.078 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-34b8c77e-8369-4eab-a81e-0825e5fa2919 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:49:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:49:59.078 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[45b6122a-df29-4dad-9edc-c7e42bd758b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:49:59 compute-0 systemd[1]: run-netns-ovnmeta\x2d34b8c77e\x2d8369\x2d4eab\x2da81e\x2d0825e5fa2919.mount: Deactivated successfully.
Nov 25 16:49:59 compute-0 ceph-mon[74985]: pgmap v1964: 321 pgs: 321 active+clean; 121 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 29 KiB/s wr, 277 op/s
Nov 25 16:49:59 compute-0 nova_compute[254092]: 2025-11-25 16:49:59.277 254096 INFO nova.virt.libvirt.driver [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Deleting instance files /var/lib/nova/instances/73301044-3bad-4401-9e30-f009d417f662_del
Nov 25 16:49:59 compute-0 nova_compute[254092]: 2025-11-25 16:49:59.278 254096 INFO nova.virt.libvirt.driver [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Deletion of /var/lib/nova/instances/73301044-3bad-4401-9e30-f009d417f662_del complete
Nov 25 16:49:59 compute-0 nova_compute[254092]: 2025-11-25 16:49:59.335 254096 INFO nova.compute.manager [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Took 0.65 seconds to destroy the instance on the hypervisor.
Nov 25 16:49:59 compute-0 nova_compute[254092]: 2025-11-25 16:49:59.335 254096 DEBUG oslo.service.loopingcall [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:49:59 compute-0 nova_compute[254092]: 2025-11-25 16:49:59.336 254096 DEBUG nova.compute.manager [-] [instance: 73301044-3bad-4401-9e30-f009d417f662] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:49:59 compute-0 nova_compute[254092]: 2025-11-25 16:49:59.336 254096 DEBUG nova.network.neutron [-] [instance: 73301044-3bad-4401-9e30-f009d417f662] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:49:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1965: 321 pgs: 321 active+clean; 121 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 3.5 KiB/s wr, 72 op/s
Nov 25 16:50:00 compute-0 nova_compute[254092]: 2025-11-25 16:50:00.635 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:01 compute-0 nova_compute[254092]: 2025-11-25 16:50:01.118 254096 DEBUG nova.compute.manager [req-6fc8e34b-365a-427d-a463-22f577e1e100 req-8147b721-7fa2-4da2-b877-b05b74d6e58f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Received event network-vif-plugged-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:50:01 compute-0 nova_compute[254092]: 2025-11-25 16:50:01.119 254096 DEBUG oslo_concurrency.lockutils [req-6fc8e34b-365a-427d-a463-22f577e1e100 req-8147b721-7fa2-4da2-b877-b05b74d6e58f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "73301044-3bad-4401-9e30-f009d417f662-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:50:01 compute-0 nova_compute[254092]: 2025-11-25 16:50:01.119 254096 DEBUG oslo_concurrency.lockutils [req-6fc8e34b-365a-427d-a463-22f577e1e100 req-8147b721-7fa2-4da2-b877-b05b74d6e58f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:50:01 compute-0 nova_compute[254092]: 2025-11-25 16:50:01.119 254096 DEBUG oslo_concurrency.lockutils [req-6fc8e34b-365a-427d-a463-22f577e1e100 req-8147b721-7fa2-4da2-b877-b05b74d6e58f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:50:01 compute-0 nova_compute[254092]: 2025-11-25 16:50:01.120 254096 DEBUG nova.compute.manager [req-6fc8e34b-365a-427d-a463-22f577e1e100 req-8147b721-7fa2-4da2-b877-b05b74d6e58f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] No waiting events found dispatching network-vif-plugged-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:50:01 compute-0 nova_compute[254092]: 2025-11-25 16:50:01.120 254096 WARNING nova.compute.manager [req-6fc8e34b-365a-427d-a463-22f577e1e100 req-8147b721-7fa2-4da2-b877-b05b74d6e58f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Received unexpected event network-vif-plugged-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 for instance with vm_state active and task_state deleting.
Nov 25 16:50:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:50:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e255 do_prune osdmap full prune enabled
Nov 25 16:50:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e256 e256: 3 total, 3 up, 3 in
Nov 25 16:50:01 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e256: 3 total, 3 up, 3 in
Nov 25 16:50:01 compute-0 ceph-mon[74985]: pgmap v1965: 321 pgs: 321 active+clean; 121 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 3.5 KiB/s wr, 72 op/s
Nov 25 16:50:01 compute-0 ceph-mon[74985]: osdmap e256: 3 total, 3 up, 3 in
Nov 25 16:50:01 compute-0 nova_compute[254092]: 2025-11-25 16:50:01.568 254096 DEBUG nova.network.neutron [-] [instance: 73301044-3bad-4401-9e30-f009d417f662] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:50:01 compute-0 nova_compute[254092]: 2025-11-25 16:50:01.597 254096 INFO nova.compute.manager [-] [instance: 73301044-3bad-4401-9e30-f009d417f662] Took 2.26 seconds to deallocate network for instance.
Nov 25 16:50:01 compute-0 nova_compute[254092]: 2025-11-25 16:50:01.611 254096 DEBUG nova.network.neutron [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Successfully updated port: 1d6ef4a2-8289-4c88-b3f3-481435a4dab0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:50:01 compute-0 nova_compute[254092]: 2025-11-25 16:50:01.623 254096 DEBUG oslo_concurrency.lockutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "refresh_cache-1ae1094f-81aa-490c-80ca-4eba95f46cac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:50:01 compute-0 nova_compute[254092]: 2025-11-25 16:50:01.623 254096 DEBUG oslo_concurrency.lockutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquired lock "refresh_cache-1ae1094f-81aa-490c-80ca-4eba95f46cac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:50:01 compute-0 nova_compute[254092]: 2025-11-25 16:50:01.623 254096 DEBUG nova.network.neutron [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:50:01 compute-0 nova_compute[254092]: 2025-11-25 16:50:01.676 254096 DEBUG oslo_concurrency.lockutils [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:50:01 compute-0 nova_compute[254092]: 2025-11-25 16:50:01.677 254096 DEBUG oslo_concurrency.lockutils [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:50:01 compute-0 nova_compute[254092]: 2025-11-25 16:50:01.736 254096 DEBUG oslo_concurrency.processutils [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:50:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1967: 321 pgs: 321 active+clean; 88 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 110 KiB/s rd, 2.7 MiB/s wr, 157 op/s
Nov 25 16:50:01 compute-0 nova_compute[254092]: 2025-11-25 16:50:01.883 254096 DEBUG nova.compute.manager [req-353bbaea-bb40-49aa-8207-5d7db89643c4 req-64f59ce9-cbc3-4e75-9730-5668d190cd48 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73301044-3bad-4401-9e30-f009d417f662] Received event network-vif-deleted-792a5867-7e12-4ac8-be7a-5ee1aa6ec8a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:50:01 compute-0 nova_compute[254092]: 2025-11-25 16:50:01.898 254096 DEBUG nova.network.neutron [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:50:02 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:50:02 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1737371638' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:50:02 compute-0 nova_compute[254092]: 2025-11-25 16:50:02.183 254096 DEBUG oslo_concurrency.processutils [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:50:02 compute-0 nova_compute[254092]: 2025-11-25 16:50:02.189 254096 DEBUG nova.compute.provider_tree [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:50:02 compute-0 nova_compute[254092]: 2025-11-25 16:50:02.203 254096 DEBUG nova.scheduler.client.report [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:50:02 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1737371638' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:50:02 compute-0 nova_compute[254092]: 2025-11-25 16:50:02.245 254096 DEBUG oslo_concurrency.lockutils [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.568s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:50:02 compute-0 nova_compute[254092]: 2025-11-25 16:50:02.291 254096 INFO nova.scheduler.client.report [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Deleted allocations for instance 73301044-3bad-4401-9e30-f009d417f662
Nov 25 16:50:02 compute-0 nova_compute[254092]: 2025-11-25 16:50:02.361 254096 DEBUG oslo_concurrency.lockutils [None req-0f586a2a-6f6e-4f01-9e7d-05679b20d432 23f6db77558a477bbd8b8b46cb4107d1 fbf763b31dad40d6b0d7285dc017dd89 - - default default] Lock "73301044-3bad-4401-9e30-f009d417f662" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:50:02 compute-0 nova_compute[254092]: 2025-11-25 16:50:02.793 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089387.791985, 6f5465d3-64cd-46fb-af8f-3b29aef5123d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:50:02 compute-0 nova_compute[254092]: 2025-11-25 16:50:02.794 254096 INFO nova.compute.manager [-] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] VM Stopped (Lifecycle Event)
Nov 25 16:50:02 compute-0 nova_compute[254092]: 2025-11-25 16:50:02.813 254096 DEBUG nova.compute.manager [None req-825b68e2-0e59-4ca3-ac76-64f5ab19d7b1 - - - - - -] [instance: 6f5465d3-64cd-46fb-af8f-3b29aef5123d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:50:03 compute-0 nova_compute[254092]: 2025-11-25 16:50:03.210 254096 DEBUG nova.network.neutron [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Updating instance_info_cache with network_info: [{"id": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "address": "fa:16:3e:f8:6f:25", "network": {"id": "9840ff40-ec43-46f9-ab52-3d9495f203ee", "bridge": "br-int", "label": "tempest-network-smoke--1436664854", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d6ef4a2-82", "ovs_interfaceid": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:50:03 compute-0 nova_compute[254092]: 2025-11-25 16:50:03.226 254096 DEBUG oslo_concurrency.lockutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Releasing lock "refresh_cache-1ae1094f-81aa-490c-80ca-4eba95f46cac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:50:03 compute-0 nova_compute[254092]: 2025-11-25 16:50:03.226 254096 DEBUG nova.compute.manager [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Instance network_info: |[{"id": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "address": "fa:16:3e:f8:6f:25", "network": {"id": "9840ff40-ec43-46f9-ab52-3d9495f203ee", "bridge": "br-int", "label": "tempest-network-smoke--1436664854", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d6ef4a2-82", "ovs_interfaceid": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:50:03 compute-0 nova_compute[254092]: 2025-11-25 16:50:03.229 254096 DEBUG nova.virt.libvirt.driver [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Start _get_guest_xml network_info=[{"id": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "address": "fa:16:3e:f8:6f:25", "network": {"id": "9840ff40-ec43-46f9-ab52-3d9495f203ee", "bridge": "br-int", "label": "tempest-network-smoke--1436664854", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d6ef4a2-82", "ovs_interfaceid": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:50:03 compute-0 nova_compute[254092]: 2025-11-25 16:50:03.233 254096 WARNING nova.virt.libvirt.driver [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:50:03 compute-0 nova_compute[254092]: 2025-11-25 16:50:03.243 254096 DEBUG nova.virt.libvirt.host [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:50:03 compute-0 ceph-mon[74985]: pgmap v1967: 321 pgs: 321 active+clean; 88 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 110 KiB/s rd, 2.7 MiB/s wr, 157 op/s
Nov 25 16:50:03 compute-0 nova_compute[254092]: 2025-11-25 16:50:03.244 254096 DEBUG nova.virt.libvirt.host [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:50:03 compute-0 nova_compute[254092]: 2025-11-25 16:50:03.247 254096 DEBUG nova.virt.libvirt.host [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:50:03 compute-0 nova_compute[254092]: 2025-11-25 16:50:03.248 254096 DEBUG nova.virt.libvirt.host [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:50:03 compute-0 nova_compute[254092]: 2025-11-25 16:50:03.248 254096 DEBUG nova.virt.libvirt.driver [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:50:03 compute-0 nova_compute[254092]: 2025-11-25 16:50:03.249 254096 DEBUG nova.virt.hardware [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:50:03 compute-0 nova_compute[254092]: 2025-11-25 16:50:03.250 254096 DEBUG nova.virt.hardware [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:50:03 compute-0 nova_compute[254092]: 2025-11-25 16:50:03.250 254096 DEBUG nova.virt.hardware [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:50:03 compute-0 nova_compute[254092]: 2025-11-25 16:50:03.251 254096 DEBUG nova.virt.hardware [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:50:03 compute-0 nova_compute[254092]: 2025-11-25 16:50:03.251 254096 DEBUG nova.virt.hardware [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:50:03 compute-0 nova_compute[254092]: 2025-11-25 16:50:03.251 254096 DEBUG nova.virt.hardware [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:50:03 compute-0 nova_compute[254092]: 2025-11-25 16:50:03.252 254096 DEBUG nova.virt.hardware [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:50:03 compute-0 nova_compute[254092]: 2025-11-25 16:50:03.252 254096 DEBUG nova.virt.hardware [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:50:03 compute-0 nova_compute[254092]: 2025-11-25 16:50:03.253 254096 DEBUG nova.virt.hardware [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:50:03 compute-0 nova_compute[254092]: 2025-11-25 16:50:03.253 254096 DEBUG nova.virt.hardware [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:50:03 compute-0 nova_compute[254092]: 2025-11-25 16:50:03.253 254096 DEBUG nova.virt.hardware [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:50:03 compute-0 nova_compute[254092]: 2025-11-25 16:50:03.259 254096 DEBUG oslo_concurrency.processutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:50:03 compute-0 nova_compute[254092]: 2025-11-25 16:50:03.366 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:03 compute-0 nova_compute[254092]: 2025-11-25 16:50:03.651 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:50:03 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1191750307' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:50:03 compute-0 nova_compute[254092]: 2025-11-25 16:50:03.722 254096 DEBUG oslo_concurrency.processutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:50:03 compute-0 nova_compute[254092]: 2025-11-25 16:50:03.743 254096 DEBUG nova.storage.rbd_utils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 1ae1094f-81aa-490c-80ca-4eba95f46cac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:50:03 compute-0 nova_compute[254092]: 2025-11-25 16:50:03.748 254096 DEBUG oslo_concurrency.processutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:50:03 compute-0 nova_compute[254092]: 2025-11-25 16:50:03.781 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1968: 321 pgs: 321 active+clean; 88 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 2.7 MiB/s wr, 92 op/s
Nov 25 16:50:03 compute-0 nova_compute[254092]: 2025-11-25 16:50:03.938 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:03 compute-0 nova_compute[254092]: 2025-11-25 16:50:03.998 254096 DEBUG nova.compute.manager [req-07f68c74-11ab-400e-8267-fa46a66d4c99 req-084884f3-daff-4942-a66f-77c219e36031 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Received event network-changed-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:50:03 compute-0 nova_compute[254092]: 2025-11-25 16:50:03.998 254096 DEBUG nova.compute.manager [req-07f68c74-11ab-400e-8267-fa46a66d4c99 req-084884f3-daff-4942-a66f-77c219e36031 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Refreshing instance network info cache due to event network-changed-1d6ef4a2-8289-4c88-b3f3-481435a4dab0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:50:03 compute-0 nova_compute[254092]: 2025-11-25 16:50:03.999 254096 DEBUG oslo_concurrency.lockutils [req-07f68c74-11ab-400e-8267-fa46a66d4c99 req-084884f3-daff-4942-a66f-77c219e36031 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-1ae1094f-81aa-490c-80ca-4eba95f46cac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:50:03 compute-0 nova_compute[254092]: 2025-11-25 16:50:03.999 254096 DEBUG oslo_concurrency.lockutils [req-07f68c74-11ab-400e-8267-fa46a66d4c99 req-084884f3-daff-4942-a66f-77c219e36031 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-1ae1094f-81aa-490c-80ca-4eba95f46cac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:50:03 compute-0 nova_compute[254092]: 2025-11-25 16:50:03.999 254096 DEBUG nova.network.neutron [req-07f68c74-11ab-400e-8267-fa46a66d4c99 req-084884f3-daff-4942-a66f-77c219e36031 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Refreshing network info cache for port 1d6ef4a2-8289-4c88-b3f3-481435a4dab0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:50:04 compute-0 nova_compute[254092]: 2025-11-25 16:50:04.084 254096 DEBUG oslo_concurrency.lockutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Acquiring lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:50:04 compute-0 nova_compute[254092]: 2025-11-25 16:50:04.084 254096 DEBUG oslo_concurrency.lockutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:50:04 compute-0 nova_compute[254092]: 2025-11-25 16:50:04.098 254096 DEBUG nova.compute.manager [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:50:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:50:04 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2970103562' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:50:04 compute-0 nova_compute[254092]: 2025-11-25 16:50:04.199 254096 DEBUG oslo_concurrency.processutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:50:04 compute-0 nova_compute[254092]: 2025-11-25 16:50:04.201 254096 DEBUG nova.virt.libvirt.vif [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:49:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-922142806',display_name='tempest-TestNetworkAdvancedServerOps-server-922142806',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-922142806',id=101,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI2eMKgXVt1DObzXc6EkgpEhBSu1NoXCvHqtISF2XNwwoVZuFONUWxBCFWb+wYX4SXZYHjLF21mp8VWxhl1OIfekGEl9PKCz3VMmlg2iYEbhoV2Ykmti7n7/j83zn1kCwQ==',key_name='tempest-TestNetworkAdvancedServerOps-21666879',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f92d76b523b84ec9b645382bdd3192c9',ramdisk_id='',reservation_id='r-79pfuvch',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1110744031',owner_user_name='tempest-TestNetworkAdvancedServerOps-1110744031-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:49:56Z,user_data=None,user_id='3b30410358c448a693a9dc40ee6aafb0',uuid=1ae1094f-81aa-490c-80ca-4eba95f46cac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "address": "fa:16:3e:f8:6f:25", "network": {"id": "9840ff40-ec43-46f9-ab52-3d9495f203ee", "bridge": "br-int", "label": "tempest-network-smoke--1436664854", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d6ef4a2-82", "ovs_interfaceid": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:50:04 compute-0 nova_compute[254092]: 2025-11-25 16:50:04.201 254096 DEBUG nova.network.os_vif_util [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converting VIF {"id": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "address": "fa:16:3e:f8:6f:25", "network": {"id": "9840ff40-ec43-46f9-ab52-3d9495f203ee", "bridge": "br-int", "label": "tempest-network-smoke--1436664854", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d6ef4a2-82", "ovs_interfaceid": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:50:04 compute-0 nova_compute[254092]: 2025-11-25 16:50:04.202 254096 DEBUG nova.network.os_vif_util [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:6f:25,bridge_name='br-int',has_traffic_filtering=True,id=1d6ef4a2-8289-4c88-b3f3-481435a4dab0,network=Network(9840ff40-ec43-46f9-ab52-3d9495f203ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d6ef4a2-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:50:04 compute-0 nova_compute[254092]: 2025-11-25 16:50:04.203 254096 DEBUG nova.objects.instance [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1ae1094f-81aa-490c-80ca-4eba95f46cac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:50:04 compute-0 nova_compute[254092]: 2025-11-25 16:50:04.213 254096 DEBUG oslo_concurrency.lockutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:50:04 compute-0 nova_compute[254092]: 2025-11-25 16:50:04.213 254096 DEBUG oslo_concurrency.lockutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:50:04 compute-0 nova_compute[254092]: 2025-11-25 16:50:04.217 254096 DEBUG nova.virt.libvirt.driver [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:50:04 compute-0 nova_compute[254092]:   <uuid>1ae1094f-81aa-490c-80ca-4eba95f46cac</uuid>
Nov 25 16:50:04 compute-0 nova_compute[254092]:   <name>instance-00000065</name>
Nov 25 16:50:04 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:50:04 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:50:04 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:50:04 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-922142806</nova:name>
Nov 25 16:50:04 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:50:03</nova:creationTime>
Nov 25 16:50:04 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:50:04 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:50:04 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:50:04 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:50:04 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:50:04 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:50:04 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:50:04 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:50:04 compute-0 nova_compute[254092]:         <nova:user uuid="3b30410358c448a693a9dc40ee6aafb0">tempest-TestNetworkAdvancedServerOps-1110744031-project-member</nova:user>
Nov 25 16:50:04 compute-0 nova_compute[254092]:         <nova:project uuid="f92d76b523b84ec9b645382bdd3192c9">tempest-TestNetworkAdvancedServerOps-1110744031</nova:project>
Nov 25 16:50:04 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:50:04 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:50:04 compute-0 nova_compute[254092]:         <nova:port uuid="1d6ef4a2-8289-4c88-b3f3-481435a4dab0">
Nov 25 16:50:04 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:50:04 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:50:04 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:50:04 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:50:04 compute-0 nova_compute[254092]:     <system>
Nov 25 16:50:04 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:50:04 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:50:04 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:50:04 compute-0 nova_compute[254092]:       <entry name="serial">1ae1094f-81aa-490c-80ca-4eba95f46cac</entry>
Nov 25 16:50:04 compute-0 nova_compute[254092]:       <entry name="uuid">1ae1094f-81aa-490c-80ca-4eba95f46cac</entry>
Nov 25 16:50:04 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     </system>
Nov 25 16:50:04 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:50:04 compute-0 nova_compute[254092]:   <os>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:   </os>
Nov 25 16:50:04 compute-0 nova_compute[254092]:   <features>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:   </features>
Nov 25 16:50:04 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:50:04 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:50:04 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:50:04 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:50:04 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:50:04 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/1ae1094f-81aa-490c-80ca-4eba95f46cac_disk">
Nov 25 16:50:04 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:       </source>
Nov 25 16:50:04 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:50:04 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:50:04 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:50:04 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/1ae1094f-81aa-490c-80ca-4eba95f46cac_disk.config">
Nov 25 16:50:04 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:       </source>
Nov 25 16:50:04 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:50:04 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:50:04 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:50:04 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:f8:6f:25"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:       <target dev="tap1d6ef4a2-82"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:50:04 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/1ae1094f-81aa-490c-80ca-4eba95f46cac/console.log" append="off"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     <video>
Nov 25 16:50:04 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     </video>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:50:04 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:50:04 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:50:04 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:50:04 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:50:04 compute-0 nova_compute[254092]: </domain>
Nov 25 16:50:04 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:50:04 compute-0 nova_compute[254092]: 2025-11-25 16:50:04.218 254096 DEBUG nova.compute.manager [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Preparing to wait for external event network-vif-plugged-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:50:04 compute-0 nova_compute[254092]: 2025-11-25 16:50:04.218 254096 DEBUG oslo_concurrency.lockutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:50:04 compute-0 nova_compute[254092]: 2025-11-25 16:50:04.218 254096 DEBUG oslo_concurrency.lockutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:50:04 compute-0 nova_compute[254092]: 2025-11-25 16:50:04.219 254096 DEBUG oslo_concurrency.lockutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:50:04 compute-0 nova_compute[254092]: 2025-11-25 16:50:04.219 254096 DEBUG nova.virt.libvirt.vif [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:49:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-922142806',display_name='tempest-TestNetworkAdvancedServerOps-server-922142806',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-922142806',id=101,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI2eMKgXVt1DObzXc6EkgpEhBSu1NoXCvHqtISF2XNwwoVZuFONUWxBCFWb+wYX4SXZYHjLF21mp8VWxhl1OIfekGEl9PKCz3VMmlg2iYEbhoV2Ykmti7n7/j83zn1kCwQ==',key_name='tempest-TestNetworkAdvancedServerOps-21666879',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f92d76b523b84ec9b645382bdd3192c9',ramdisk_id='',reservation_id='r-79pfuvch',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1110744031',owner_user_name='tempest-TestNetworkAdvancedServerOps-1110744031-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:49:56Z,user_data=None,user_id='3b30410358c448a693a9dc40ee6aafb0',uuid=1ae1094f-81aa-490c-80ca-4eba95f46cac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "address": "fa:16:3e:f8:6f:25", "network": {"id": "9840ff40-ec43-46f9-ab52-3d9495f203ee", "bridge": "br-int", "label": "tempest-network-smoke--1436664854", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d6ef4a2-82", "ovs_interfaceid": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:50:04 compute-0 nova_compute[254092]: 2025-11-25 16:50:04.220 254096 DEBUG nova.network.os_vif_util [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converting VIF {"id": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "address": "fa:16:3e:f8:6f:25", "network": {"id": "9840ff40-ec43-46f9-ab52-3d9495f203ee", "bridge": "br-int", "label": "tempest-network-smoke--1436664854", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d6ef4a2-82", "ovs_interfaceid": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:50:04 compute-0 nova_compute[254092]: 2025-11-25 16:50:04.220 254096 DEBUG nova.network.os_vif_util [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:6f:25,bridge_name='br-int',has_traffic_filtering=True,id=1d6ef4a2-8289-4c88-b3f3-481435a4dab0,network=Network(9840ff40-ec43-46f9-ab52-3d9495f203ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d6ef4a2-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:50:04 compute-0 nova_compute[254092]: 2025-11-25 16:50:04.221 254096 DEBUG os_vif [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:6f:25,bridge_name='br-int',has_traffic_filtering=True,id=1d6ef4a2-8289-4c88-b3f3-481435a4dab0,network=Network(9840ff40-ec43-46f9-ab52-3d9495f203ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d6ef4a2-82') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:50:04 compute-0 nova_compute[254092]: 2025-11-25 16:50:04.222 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:04 compute-0 nova_compute[254092]: 2025-11-25 16:50:04.222 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:50:04 compute-0 nova_compute[254092]: 2025-11-25 16:50:04.222 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:50:04 compute-0 nova_compute[254092]: 2025-11-25 16:50:04.226 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:04 compute-0 nova_compute[254092]: 2025-11-25 16:50:04.226 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1d6ef4a2-82, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:50:04 compute-0 nova_compute[254092]: 2025-11-25 16:50:04.227 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1d6ef4a2-82, col_values=(('external_ids', {'iface-id': '1d6ef4a2-8289-4c88-b3f3-481435a4dab0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f8:6f:25', 'vm-uuid': '1ae1094f-81aa-490c-80ca-4eba95f46cac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:50:04 compute-0 nova_compute[254092]: 2025-11-25 16:50:04.228 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:04 compute-0 NetworkManager[48891]: <info>  [1764089404.2292] manager: (tap1d6ef4a2-82): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/405)
Nov 25 16:50:04 compute-0 nova_compute[254092]: 2025-11-25 16:50:04.232 254096 DEBUG nova.virt.hardware [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:50:04 compute-0 nova_compute[254092]: 2025-11-25 16:50:04.232 254096 INFO nova.compute.claims [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:50:04 compute-0 nova_compute[254092]: 2025-11-25 16:50:04.235 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:04 compute-0 nova_compute[254092]: 2025-11-25 16:50:04.236 254096 INFO os_vif [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:6f:25,bridge_name='br-int',has_traffic_filtering=True,id=1d6ef4a2-8289-4c88-b3f3-481435a4dab0,network=Network(9840ff40-ec43-46f9-ab52-3d9495f203ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d6ef4a2-82')
Nov 25 16:50:04 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1191750307' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:50:04 compute-0 ceph-mon[74985]: pgmap v1968: 321 pgs: 321 active+clean; 88 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 2.7 MiB/s wr, 92 op/s
Nov 25 16:50:04 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2970103562' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:50:04 compute-0 nova_compute[254092]: 2025-11-25 16:50:04.293 254096 DEBUG nova.virt.libvirt.driver [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:50:04 compute-0 nova_compute[254092]: 2025-11-25 16:50:04.294 254096 DEBUG nova.virt.libvirt.driver [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:50:04 compute-0 nova_compute[254092]: 2025-11-25 16:50:04.294 254096 DEBUG nova.virt.libvirt.driver [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] No VIF found with MAC fa:16:3e:f8:6f:25, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:50:04 compute-0 nova_compute[254092]: 2025-11-25 16:50:04.295 254096 INFO nova.virt.libvirt.driver [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Using config drive
Nov 25 16:50:04 compute-0 nova_compute[254092]: 2025-11-25 16:50:04.315 254096 DEBUG nova.storage.rbd_utils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 1ae1094f-81aa-490c-80ca-4eba95f46cac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:50:04 compute-0 nova_compute[254092]: 2025-11-25 16:50:04.372 254096 DEBUG oslo_concurrency.processutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:50:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:50:04 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/17999023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:50:04 compute-0 nova_compute[254092]: 2025-11-25 16:50:04.814 254096 DEBUG oslo_concurrency.processutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:50:04 compute-0 nova_compute[254092]: 2025-11-25 16:50:04.820 254096 DEBUG nova.compute.provider_tree [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:50:04 compute-0 nova_compute[254092]: 2025-11-25 16:50:04.833 254096 DEBUG nova.scheduler.client.report [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:50:04 compute-0 nova_compute[254092]: 2025-11-25 16:50:04.852 254096 DEBUG oslo_concurrency.lockutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:50:04 compute-0 nova_compute[254092]: 2025-11-25 16:50:04.852 254096 DEBUG nova.compute.manager [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:50:04 compute-0 nova_compute[254092]: 2025-11-25 16:50:04.891 254096 DEBUG nova.compute.manager [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:50:04 compute-0 nova_compute[254092]: 2025-11-25 16:50:04.891 254096 DEBUG nova.network.neutron [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:50:04 compute-0 nova_compute[254092]: 2025-11-25 16:50:04.918 254096 INFO nova.virt.libvirt.driver [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:50:04 compute-0 nova_compute[254092]: 2025-11-25 16:50:04.940 254096 DEBUG nova.compute.manager [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:50:05 compute-0 nova_compute[254092]: 2025-11-25 16:50:05.054 254096 DEBUG nova.compute.manager [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:50:05 compute-0 nova_compute[254092]: 2025-11-25 16:50:05.056 254096 DEBUG nova.virt.libvirt.driver [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:50:05 compute-0 nova_compute[254092]: 2025-11-25 16:50:05.056 254096 INFO nova.virt.libvirt.driver [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Creating image(s)
Nov 25 16:50:05 compute-0 nova_compute[254092]: 2025-11-25 16:50:05.075 254096 DEBUG nova.storage.rbd_utils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] rbd image 8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:50:05 compute-0 nova_compute[254092]: 2025-11-25 16:50:05.096 254096 DEBUG nova.storage.rbd_utils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] rbd image 8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:50:05 compute-0 nova_compute[254092]: 2025-11-25 16:50:05.116 254096 DEBUG nova.storage.rbd_utils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] rbd image 8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:50:05 compute-0 nova_compute[254092]: 2025-11-25 16:50:05.118 254096 DEBUG oslo_concurrency.processutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:50:05 compute-0 nova_compute[254092]: 2025-11-25 16:50:05.192 254096 DEBUG oslo_concurrency.processutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:50:05 compute-0 nova_compute[254092]: 2025-11-25 16:50:05.193 254096 DEBUG oslo_concurrency.lockutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:50:05 compute-0 nova_compute[254092]: 2025-11-25 16:50:05.193 254096 DEBUG oslo_concurrency.lockutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:50:05 compute-0 nova_compute[254092]: 2025-11-25 16:50:05.194 254096 DEBUG oslo_concurrency.lockutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:50:05 compute-0 nova_compute[254092]: 2025-11-25 16:50:05.213 254096 DEBUG nova.storage.rbd_utils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] rbd image 8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:50:05 compute-0 nova_compute[254092]: 2025-11-25 16:50:05.217 254096 DEBUG oslo_concurrency.processutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:50:05 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/17999023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:50:05 compute-0 nova_compute[254092]: 2025-11-25 16:50:05.263 254096 DEBUG nova.policy [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '734253d3f2e84904968d9db3044df1c8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '423cb78fb5f54c46b9867a6f07d0cf95', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:50:05 compute-0 nova_compute[254092]: 2025-11-25 16:50:05.464 254096 DEBUG oslo_concurrency.processutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.247s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:50:05 compute-0 nova_compute[254092]: 2025-11-25 16:50:05.529 254096 DEBUG nova.storage.rbd_utils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] resizing rbd image 8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:50:05 compute-0 nova_compute[254092]: 2025-11-25 16:50:05.564 254096 INFO nova.virt.libvirt.driver [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Creating config drive at /var/lib/nova/instances/1ae1094f-81aa-490c-80ca-4eba95f46cac/disk.config
Nov 25 16:50:05 compute-0 nova_compute[254092]: 2025-11-25 16:50:05.569 254096 DEBUG oslo_concurrency.processutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1ae1094f-81aa-490c-80ca-4eba95f46cac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8prsh3i4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:50:05 compute-0 nova_compute[254092]: 2025-11-25 16:50:05.647 254096 DEBUG nova.objects.instance [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lazy-loading 'migration_context' on Instance uuid 8e8f0fb8-4b3c-40dd-9317-94bedc736376 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:50:05 compute-0 nova_compute[254092]: 2025-11-25 16:50:05.667 254096 DEBUG nova.virt.libvirt.driver [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:50:05 compute-0 nova_compute[254092]: 2025-11-25 16:50:05.668 254096 DEBUG nova.virt.libvirt.driver [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Ensure instance console log exists: /var/lib/nova/instances/8e8f0fb8-4b3c-40dd-9317-94bedc736376/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:50:05 compute-0 nova_compute[254092]: 2025-11-25 16:50:05.668 254096 DEBUG oslo_concurrency.lockutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:50:05 compute-0 nova_compute[254092]: 2025-11-25 16:50:05.669 254096 DEBUG oslo_concurrency.lockutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:50:05 compute-0 nova_compute[254092]: 2025-11-25 16:50:05.669 254096 DEBUG oslo_concurrency.lockutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:50:05 compute-0 nova_compute[254092]: 2025-11-25 16:50:05.708 254096 DEBUG oslo_concurrency.processutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1ae1094f-81aa-490c-80ca-4eba95f46cac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8prsh3i4" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:50:05 compute-0 nova_compute[254092]: 2025-11-25 16:50:05.729 254096 DEBUG nova.storage.rbd_utils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 1ae1094f-81aa-490c-80ca-4eba95f46cac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:50:05 compute-0 nova_compute[254092]: 2025-11-25 16:50:05.732 254096 DEBUG oslo_concurrency.processutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1ae1094f-81aa-490c-80ca-4eba95f46cac/disk.config 1ae1094f-81aa-490c-80ca-4eba95f46cac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:50:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1969: 321 pgs: 321 active+clean; 88 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 2.2 MiB/s wr, 70 op/s
Nov 25 16:50:05 compute-0 nova_compute[254092]: 2025-11-25 16:50:05.879 254096 DEBUG oslo_concurrency.processutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1ae1094f-81aa-490c-80ca-4eba95f46cac/disk.config 1ae1094f-81aa-490c-80ca-4eba95f46cac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:50:05 compute-0 nova_compute[254092]: 2025-11-25 16:50:05.880 254096 INFO nova.virt.libvirt.driver [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Deleting local config drive /var/lib/nova/instances/1ae1094f-81aa-490c-80ca-4eba95f46cac/disk.config because it was imported into RBD.
Nov 25 16:50:05 compute-0 kernel: tap1d6ef4a2-82: entered promiscuous mode
Nov 25 16:50:05 compute-0 ovn_controller[153477]: 2025-11-25T16:50:05Z|00999|binding|INFO|Claiming lport 1d6ef4a2-8289-4c88-b3f3-481435a4dab0 for this chassis.
Nov 25 16:50:05 compute-0 ovn_controller[153477]: 2025-11-25T16:50:05Z|01000|binding|INFO|1d6ef4a2-8289-4c88-b3f3-481435a4dab0: Claiming fa:16:3e:f8:6f:25 10.100.0.14
Nov 25 16:50:05 compute-0 nova_compute[254092]: 2025-11-25 16:50:05.924 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:05 compute-0 NetworkManager[48891]: <info>  [1764089405.9253] manager: (tap1d6ef4a2-82): new Tun device (/org/freedesktop/NetworkManager/Devices/406)
Nov 25 16:50:05 compute-0 nova_compute[254092]: 2025-11-25 16:50:05.927 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:05 compute-0 systemd-udevd[357114]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:50:05 compute-0 NetworkManager[48891]: <info>  [1764089405.9639] device (tap1d6ef4a2-82): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:50:05 compute-0 NetworkManager[48891]: <info>  [1764089405.9649] device (tap1d6ef4a2-82): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:50:05 compute-0 systemd-machined[216343]: New machine qemu-127-instance-00000065.
Nov 25 16:50:06 compute-0 nova_compute[254092]: 2025-11-25 16:50:06.002 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:06 compute-0 systemd[1]: Started Virtual Machine qemu-127-instance-00000065.
Nov 25 16:50:06 compute-0 ovn_controller[153477]: 2025-11-25T16:50:06Z|01001|binding|INFO|Setting lport 1d6ef4a2-8289-4c88-b3f3-481435a4dab0 ovn-installed in OVS
Nov 25 16:50:06 compute-0 nova_compute[254092]: 2025-11-25 16:50:06.008 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:06 compute-0 ovn_controller[153477]: 2025-11-25T16:50:06Z|01002|binding|INFO|Setting lport 1d6ef4a2-8289-4c88-b3f3-481435a4dab0 up in Southbound
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.066 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:6f:25 10.100.0.14'], port_security=['fa:16:3e:f8:6f:25 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '1ae1094f-81aa-490c-80ca-4eba95f46cac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9840ff40-ec43-46f9-ab52-3d9495f203ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '35dbfd3c-26a0-4489-b51a-15505dded8ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7c9afbfe-cd77-4ff5-bdb4-0ad03d001fca, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1d6ef4a2-8289-4c88-b3f3-481435a4dab0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.067 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1d6ef4a2-8289-4c88-b3f3-481435a4dab0 in datapath 9840ff40-ec43-46f9-ab52-3d9495f203ee bound to our chassis
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.068 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9840ff40-ec43-46f9-ab52-3d9495f203ee
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.082 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4e6c42e2-ad03-417d-92d0-2534f3639dd8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.083 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9840ff40-e1 in ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.085 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9840ff40-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.085 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fd48acf2-119c-4b30-9433-49e1cbfccf17]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.085 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7db2a5e7-9359-466e-9189-1f38e2335c72]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.096 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[089d0317-57c5-41df-91be-9b220759b0a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.111 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ef81785e-a5e2-44fd-bd88-db8c3764bdfb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.139 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[fc2cdf75-56e0-482e-87b9-716fd7ab65e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:06 compute-0 NetworkManager[48891]: <info>  [1764089406.1467] manager: (tap9840ff40-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/407)
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.145 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fef9b6d9-6833-4a10-ac76-8ddc2d854fd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.174 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a20f1d0b-9c9a-4907-9bf9-47b38bee5f12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.177 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[4ff4e8f1-953a-4686-bf16-fc4d6d897c51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:06 compute-0 NetworkManager[48891]: <info>  [1764089406.1993] device (tap9840ff40-e0): carrier: link connected
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.205 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e701594a-482f-42c4-9306-fa66c4d1d4dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.221 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6fd9ff3e-6abf-477c-9394-8a633777500a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9840ff40-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:49:4d:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 294], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586379, 'reachable_time': 16720, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357150, 'error': None, 'target': 'ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.235 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[07aa20f8-52d8-4297-ae5b-ccaeea4fea74]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe49:4dad'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586379, 'tstamp': 586379}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357151, 'error': None, 'target': 'ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.255 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[23751494-8258-4012-bf17-c3048e494e61]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9840ff40-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:49:4d:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 294], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586379, 'reachable_time': 16720, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 357160, 'error': None, 'target': 'ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:06 compute-0 ceph-mon[74985]: pgmap v1969: 321 pgs: 321 active+clean; 88 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 2.2 MiB/s wr, 70 op/s
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.291 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a37f8486-f5b6-4c1d-8e30-b376ff2f709c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.352 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2ec1702e-da6c-4a1d-a27e-5ca927379b76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.353 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9840ff40-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.353 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.354 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9840ff40-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:50:06 compute-0 nova_compute[254092]: 2025-11-25 16:50:06.355 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:06 compute-0 NetworkManager[48891]: <info>  [1764089406.3562] manager: (tap9840ff40-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/408)
Nov 25 16:50:06 compute-0 kernel: tap9840ff40-e0: entered promiscuous mode
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.359 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9840ff40-e0, col_values=(('external_ids', {'iface-id': '217facd0-6092-44c8-9430-efb8d36c211a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:50:06 compute-0 nova_compute[254092]: 2025-11-25 16:50:06.360 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:06 compute-0 ovn_controller[153477]: 2025-11-25T16:50:06Z|01003|binding|INFO|Releasing lport 217facd0-6092-44c8-9430-efb8d36c211a from this chassis (sb_readonly=0)
Nov 25 16:50:06 compute-0 nova_compute[254092]: 2025-11-25 16:50:06.375 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.376 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9840ff40-ec43-46f9-ab52-3d9495f203ee.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9840ff40-ec43-46f9-ab52-3d9495f203ee.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.377 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6ce535d1-becb-4809-95a7-a8d1a6738634]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.377 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-9840ff40-ec43-46f9-ab52-3d9495f203ee
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/9840ff40-ec43-46f9-ab52-3d9495f203ee.pid.haproxy
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 9840ff40-ec43-46f9-ab52-3d9495f203ee
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:50:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:06.378 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee', 'env', 'PROCESS_TAG=haproxy-9840ff40-ec43-46f9-ab52-3d9495f203ee', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9840ff40-ec43-46f9-ab52-3d9495f203ee.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:50:06 compute-0 nova_compute[254092]: 2025-11-25 16:50:06.399 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089406.3993363, 1ae1094f-81aa-490c-80ca-4eba95f46cac => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:50:06 compute-0 nova_compute[254092]: 2025-11-25 16:50:06.400 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] VM Started (Lifecycle Event)
Nov 25 16:50:06 compute-0 nova_compute[254092]: 2025-11-25 16:50:06.418 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:50:06 compute-0 nova_compute[254092]: 2025-11-25 16:50:06.422 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089406.3995037, 1ae1094f-81aa-490c-80ca-4eba95f46cac => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:50:06 compute-0 nova_compute[254092]: 2025-11-25 16:50:06.422 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] VM Paused (Lifecycle Event)
Nov 25 16:50:06 compute-0 nova_compute[254092]: 2025-11-25 16:50:06.437 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:50:06 compute-0 nova_compute[254092]: 2025-11-25 16:50:06.439 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:50:06 compute-0 nova_compute[254092]: 2025-11-25 16:50:06.455 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:50:06 compute-0 nova_compute[254092]: 2025-11-25 16:50:06.728 254096 DEBUG nova.compute.manager [req-a0530870-2c73-456b-b193-add3d8eebc31 req-3a902fce-4e32-4358-ad6c-985d391d89be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Received event network-vif-plugged-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:50:06 compute-0 nova_compute[254092]: 2025-11-25 16:50:06.728 254096 DEBUG oslo_concurrency.lockutils [req-a0530870-2c73-456b-b193-add3d8eebc31 req-3a902fce-4e32-4358-ad6c-985d391d89be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:50:06 compute-0 nova_compute[254092]: 2025-11-25 16:50:06.729 254096 DEBUG oslo_concurrency.lockutils [req-a0530870-2c73-456b-b193-add3d8eebc31 req-3a902fce-4e32-4358-ad6c-985d391d89be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:50:06 compute-0 nova_compute[254092]: 2025-11-25 16:50:06.729 254096 DEBUG oslo_concurrency.lockutils [req-a0530870-2c73-456b-b193-add3d8eebc31 req-3a902fce-4e32-4358-ad6c-985d391d89be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:50:06 compute-0 nova_compute[254092]: 2025-11-25 16:50:06.729 254096 DEBUG nova.compute.manager [req-a0530870-2c73-456b-b193-add3d8eebc31 req-3a902fce-4e32-4358-ad6c-985d391d89be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Processing event network-vif-plugged-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:50:06 compute-0 nova_compute[254092]: 2025-11-25 16:50:06.730 254096 DEBUG nova.compute.manager [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:50:06 compute-0 nova_compute[254092]: 2025-11-25 16:50:06.733 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089406.7331688, 1ae1094f-81aa-490c-80ca-4eba95f46cac => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:50:06 compute-0 nova_compute[254092]: 2025-11-25 16:50:06.733 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] VM Resumed (Lifecycle Event)
Nov 25 16:50:06 compute-0 nova_compute[254092]: 2025-11-25 16:50:06.735 254096 DEBUG nova.virt.libvirt.driver [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:50:06 compute-0 nova_compute[254092]: 2025-11-25 16:50:06.738 254096 INFO nova.virt.libvirt.driver [-] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Instance spawned successfully.
Nov 25 16:50:06 compute-0 nova_compute[254092]: 2025-11-25 16:50:06.738 254096 DEBUG nova.virt.libvirt.driver [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:50:06 compute-0 podman[357226]: 2025-11-25 16:50:06.741495428 +0000 UTC m=+0.051343746 container create 4121544d55e451f24382ea828d27b26f1fa767b9a2b279dc116a02babfd4dc14 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:50:06 compute-0 nova_compute[254092]: 2025-11-25 16:50:06.751 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:50:06 compute-0 nova_compute[254092]: 2025-11-25 16:50:06.754 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:50:06 compute-0 nova_compute[254092]: 2025-11-25 16:50:06.764 254096 DEBUG nova.virt.libvirt.driver [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:50:06 compute-0 nova_compute[254092]: 2025-11-25 16:50:06.764 254096 DEBUG nova.virt.libvirt.driver [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:50:06 compute-0 nova_compute[254092]: 2025-11-25 16:50:06.765 254096 DEBUG nova.virt.libvirt.driver [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:50:06 compute-0 nova_compute[254092]: 2025-11-25 16:50:06.765 254096 DEBUG nova.virt.libvirt.driver [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:50:06 compute-0 nova_compute[254092]: 2025-11-25 16:50:06.765 254096 DEBUG nova.virt.libvirt.driver [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:50:06 compute-0 nova_compute[254092]: 2025-11-25 16:50:06.766 254096 DEBUG nova.virt.libvirt.driver [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:50:06 compute-0 nova_compute[254092]: 2025-11-25 16:50:06.774 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:50:06 compute-0 systemd[1]: Started libpod-conmon-4121544d55e451f24382ea828d27b26f1fa767b9a2b279dc116a02babfd4dc14.scope.
Nov 25 16:50:06 compute-0 podman[357226]: 2025-11-25 16:50:06.715246905 +0000 UTC m=+0.025095233 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:50:06 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:50:06 compute-0 nova_compute[254092]: 2025-11-25 16:50:06.816 254096 INFO nova.compute.manager [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Took 10.22 seconds to spawn the instance on the hypervisor.
Nov 25 16:50:06 compute-0 nova_compute[254092]: 2025-11-25 16:50:06.817 254096 DEBUG nova.compute.manager [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:50:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfb50cdc7de6082c53fe78a6bea78f3cc609faf5a93f7a225931635c4a0532c2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:50:06 compute-0 podman[357226]: 2025-11-25 16:50:06.834855515 +0000 UTC m=+0.144703843 container init 4121544d55e451f24382ea828d27b26f1fa767b9a2b279dc116a02babfd4dc14 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:50:06 compute-0 podman[357226]: 2025-11-25 16:50:06.840679493 +0000 UTC m=+0.150527801 container start 4121544d55e451f24382ea828d27b26f1fa767b9a2b279dc116a02babfd4dc14 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:50:06 compute-0 neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee[357241]: [NOTICE]   (357245) : New worker (357247) forked
Nov 25 16:50:06 compute-0 neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee[357241]: [NOTICE]   (357245) : Loading success.
Nov 25 16:50:06 compute-0 nova_compute[254092]: 2025-11-25 16:50:06.899 254096 INFO nova.compute.manager [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Took 11.18 seconds to build instance.
Nov 25 16:50:06 compute-0 nova_compute[254092]: 2025-11-25 16:50:06.907 254096 DEBUG nova.network.neutron [req-07f68c74-11ab-400e-8267-fa46a66d4c99 req-084884f3-daff-4942-a66f-77c219e36031 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Updated VIF entry in instance network info cache for port 1d6ef4a2-8289-4c88-b3f3-481435a4dab0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:50:06 compute-0 nova_compute[254092]: 2025-11-25 16:50:06.907 254096 DEBUG nova.network.neutron [req-07f68c74-11ab-400e-8267-fa46a66d4c99 req-084884f3-daff-4942-a66f-77c219e36031 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Updating instance_info_cache with network_info: [{"id": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "address": "fa:16:3e:f8:6f:25", "network": {"id": "9840ff40-ec43-46f9-ab52-3d9495f203ee", "bridge": "br-int", "label": "tempest-network-smoke--1436664854", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d6ef4a2-82", "ovs_interfaceid": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:50:06 compute-0 nova_compute[254092]: 2025-11-25 16:50:06.917 254096 DEBUG oslo_concurrency.lockutils [None req-694bc63f-6c68-40a0-ae2a-19746df11636 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "1ae1094f-81aa-490c-80ca-4eba95f46cac" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.272s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:50:06 compute-0 nova_compute[254092]: 2025-11-25 16:50:06.919 254096 DEBUG oslo_concurrency.lockutils [req-07f68c74-11ab-400e-8267-fa46a66d4c99 req-084884f3-daff-4942-a66f-77c219e36031 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-1ae1094f-81aa-490c-80ca-4eba95f46cac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:50:07 compute-0 nova_compute[254092]: 2025-11-25 16:50:07.011 254096 DEBUG nova.network.neutron [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Successfully created port: d6e67173-6a72-4200-9963-90668ed663e4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:50:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1970: 321 pgs: 321 active+clean; 103 MiB data, 704 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 2.9 MiB/s wr, 86 op/s
Nov 25 16:50:08 compute-0 nova_compute[254092]: 2025-11-25 16:50:08.763 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:08 compute-0 nova_compute[254092]: 2025-11-25 16:50:08.849 254096 DEBUG nova.compute.manager [req-15aeb377-aaee-47d2-990f-da62eaa6d053 req-250c23d2-28ad-49b9-a803-8426b7db8c73 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Received event network-vif-plugged-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:50:08 compute-0 nova_compute[254092]: 2025-11-25 16:50:08.850 254096 DEBUG oslo_concurrency.lockutils [req-15aeb377-aaee-47d2-990f-da62eaa6d053 req-250c23d2-28ad-49b9-a803-8426b7db8c73 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:50:08 compute-0 nova_compute[254092]: 2025-11-25 16:50:08.850 254096 DEBUG oslo_concurrency.lockutils [req-15aeb377-aaee-47d2-990f-da62eaa6d053 req-250c23d2-28ad-49b9-a803-8426b7db8c73 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:50:08 compute-0 nova_compute[254092]: 2025-11-25 16:50:08.850 254096 DEBUG oslo_concurrency.lockutils [req-15aeb377-aaee-47d2-990f-da62eaa6d053 req-250c23d2-28ad-49b9-a803-8426b7db8c73 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:50:08 compute-0 nova_compute[254092]: 2025-11-25 16:50:08.851 254096 DEBUG nova.compute.manager [req-15aeb377-aaee-47d2-990f-da62eaa6d053 req-250c23d2-28ad-49b9-a803-8426b7db8c73 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] No waiting events found dispatching network-vif-plugged-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:50:08 compute-0 nova_compute[254092]: 2025-11-25 16:50:08.851 254096 WARNING nova.compute.manager [req-15aeb377-aaee-47d2-990f-da62eaa6d053 req-250c23d2-28ad-49b9-a803-8426b7db8c73 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Received unexpected event network-vif-plugged-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 for instance with vm_state active and task_state None.
Nov 25 16:50:08 compute-0 ceph-mon[74985]: pgmap v1970: 321 pgs: 321 active+clean; 103 MiB data, 704 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 2.9 MiB/s wr, 86 op/s
Nov 25 16:50:09 compute-0 nova_compute[254092]: 2025-11-25 16:50:09.120 254096 DEBUG nova.network.neutron [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Successfully updated port: d6e67173-6a72-4200-9963-90668ed663e4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:50:09 compute-0 nova_compute[254092]: 2025-11-25 16:50:09.139 254096 DEBUG oslo_concurrency.lockutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Acquiring lock "refresh_cache-8e8f0fb8-4b3c-40dd-9317-94bedc736376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:50:09 compute-0 nova_compute[254092]: 2025-11-25 16:50:09.140 254096 DEBUG oslo_concurrency.lockutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Acquired lock "refresh_cache-8e8f0fb8-4b3c-40dd-9317-94bedc736376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:50:09 compute-0 nova_compute[254092]: 2025-11-25 16:50:09.140 254096 DEBUG nova.network.neutron [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:50:09 compute-0 nova_compute[254092]: 2025-11-25 16:50:09.229 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1971: 321 pgs: 321 active+clean; 103 MiB data, 704 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 2.9 MiB/s wr, 86 op/s
Nov 25 16:50:09 compute-0 nova_compute[254092]: 2025-11-25 16:50:09.842 254096 DEBUG nova.network.neutron [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:50:09 compute-0 nova_compute[254092]: 2025-11-25 16:50:09.857 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089394.8559422, e0098976-026f-43d8-b686-b2658f9aded9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:50:09 compute-0 nova_compute[254092]: 2025-11-25 16:50:09.857 254096 INFO nova.compute.manager [-] [instance: e0098976-026f-43d8-b686-b2658f9aded9] VM Stopped (Lifecycle Event)
Nov 25 16:50:09 compute-0 nova_compute[254092]: 2025-11-25 16:50:09.875 254096 DEBUG nova.compute.manager [None req-ee93c43a-f676-4987-a4ea-7ab2ef0eaaad - - - - - -] [instance: e0098976-026f-43d8-b686-b2658f9aded9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:50:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:50:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:50:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:50:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:50:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:50:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:50:10 compute-0 ceph-mon[74985]: pgmap v1971: 321 pgs: 321 active+clean; 103 MiB data, 704 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 2.9 MiB/s wr, 86 op/s
Nov 25 16:50:10 compute-0 nova_compute[254092]: 2025-11-25 16:50:10.942 254096 DEBUG nova.compute.manager [req-63fb04e5-fcf1-4647-bf25-615ed5e7a4bf req-b1ad7552-77c9-4c0d-b160-ccabc51f3e62 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Received event network-changed-d6e67173-6a72-4200-9963-90668ed663e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:50:10 compute-0 nova_compute[254092]: 2025-11-25 16:50:10.943 254096 DEBUG nova.compute.manager [req-63fb04e5-fcf1-4647-bf25-615ed5e7a4bf req-b1ad7552-77c9-4c0d-b160-ccabc51f3e62 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Refreshing instance network info cache due to event network-changed-d6e67173-6a72-4200-9963-90668ed663e4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:50:10 compute-0 nova_compute[254092]: 2025-11-25 16:50:10.943 254096 DEBUG oslo_concurrency.lockutils [req-63fb04e5-fcf1-4647-bf25-615ed5e7a4bf req-b1ad7552-77c9-4c0d-b160-ccabc51f3e62 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-8e8f0fb8-4b3c-40dd-9317-94bedc736376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:50:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:50:11 compute-0 nova_compute[254092]: 2025-11-25 16:50:11.548 254096 DEBUG nova.network.neutron [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Updating instance_info_cache with network_info: [{"id": "d6e67173-6a72-4200-9963-90668ed663e4", "address": "fa:16:3e:fa:b3:74", "network": {"id": "637a7f19-dc07-431e-90b4-682b65b3ddc7", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1593327310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "423cb78fb5f54c46b9867a6f07d0cf95", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6e67173-6a", "ovs_interfaceid": "d6e67173-6a72-4200-9963-90668ed663e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:50:11 compute-0 nova_compute[254092]: 2025-11-25 16:50:11.582 254096 DEBUG oslo_concurrency.lockutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Releasing lock "refresh_cache-8e8f0fb8-4b3c-40dd-9317-94bedc736376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:50:11 compute-0 nova_compute[254092]: 2025-11-25 16:50:11.583 254096 DEBUG nova.compute.manager [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Instance network_info: |[{"id": "d6e67173-6a72-4200-9963-90668ed663e4", "address": "fa:16:3e:fa:b3:74", "network": {"id": "637a7f19-dc07-431e-90b4-682b65b3ddc7", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1593327310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "423cb78fb5f54c46b9867a6f07d0cf95", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6e67173-6a", "ovs_interfaceid": "d6e67173-6a72-4200-9963-90668ed663e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:50:11 compute-0 nova_compute[254092]: 2025-11-25 16:50:11.583 254096 DEBUG oslo_concurrency.lockutils [req-63fb04e5-fcf1-4647-bf25-615ed5e7a4bf req-b1ad7552-77c9-4c0d-b160-ccabc51f3e62 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-8e8f0fb8-4b3c-40dd-9317-94bedc736376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:50:11 compute-0 nova_compute[254092]: 2025-11-25 16:50:11.584 254096 DEBUG nova.network.neutron [req-63fb04e5-fcf1-4647-bf25-615ed5e7a4bf req-b1ad7552-77c9-4c0d-b160-ccabc51f3e62 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Refreshing network info cache for port d6e67173-6a72-4200-9963-90668ed663e4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:50:11 compute-0 nova_compute[254092]: 2025-11-25 16:50:11.588 254096 DEBUG nova.virt.libvirt.driver [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Start _get_guest_xml network_info=[{"id": "d6e67173-6a72-4200-9963-90668ed663e4", "address": "fa:16:3e:fa:b3:74", "network": {"id": "637a7f19-dc07-431e-90b4-682b65b3ddc7", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1593327310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "423cb78fb5f54c46b9867a6f07d0cf95", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6e67173-6a", "ovs_interfaceid": "d6e67173-6a72-4200-9963-90668ed663e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:50:11 compute-0 nova_compute[254092]: 2025-11-25 16:50:11.592 254096 WARNING nova.virt.libvirt.driver [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:50:11 compute-0 nova_compute[254092]: 2025-11-25 16:50:11.597 254096 DEBUG nova.virt.libvirt.host [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:50:11 compute-0 nova_compute[254092]: 2025-11-25 16:50:11.598 254096 DEBUG nova.virt.libvirt.host [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:50:11 compute-0 nova_compute[254092]: 2025-11-25 16:50:11.608 254096 DEBUG nova.virt.libvirt.host [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:50:11 compute-0 nova_compute[254092]: 2025-11-25 16:50:11.608 254096 DEBUG nova.virt.libvirt.host [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:50:11 compute-0 nova_compute[254092]: 2025-11-25 16:50:11.609 254096 DEBUG nova.virt.libvirt.driver [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:50:11 compute-0 nova_compute[254092]: 2025-11-25 16:50:11.609 254096 DEBUG nova.virt.hardware [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:50:11 compute-0 nova_compute[254092]: 2025-11-25 16:50:11.610 254096 DEBUG nova.virt.hardware [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:50:11 compute-0 nova_compute[254092]: 2025-11-25 16:50:11.610 254096 DEBUG nova.virt.hardware [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:50:11 compute-0 nova_compute[254092]: 2025-11-25 16:50:11.610 254096 DEBUG nova.virt.hardware [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:50:11 compute-0 nova_compute[254092]: 2025-11-25 16:50:11.610 254096 DEBUG nova.virt.hardware [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:50:11 compute-0 nova_compute[254092]: 2025-11-25 16:50:11.610 254096 DEBUG nova.virt.hardware [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:50:11 compute-0 nova_compute[254092]: 2025-11-25 16:50:11.610 254096 DEBUG nova.virt.hardware [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:50:11 compute-0 nova_compute[254092]: 2025-11-25 16:50:11.611 254096 DEBUG nova.virt.hardware [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:50:11 compute-0 nova_compute[254092]: 2025-11-25 16:50:11.611 254096 DEBUG nova.virt.hardware [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:50:11 compute-0 nova_compute[254092]: 2025-11-25 16:50:11.611 254096 DEBUG nova.virt.hardware [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:50:11 compute-0 nova_compute[254092]: 2025-11-25 16:50:11.611 254096 DEBUG nova.virt.hardware [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:50:11 compute-0 nova_compute[254092]: 2025-11-25 16:50:11.614 254096 DEBUG oslo_concurrency.processutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:50:11 compute-0 NetworkManager[48891]: <info>  [1764089411.7021] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/409)
Nov 25 16:50:11 compute-0 nova_compute[254092]: 2025-11-25 16:50:11.701 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:11 compute-0 NetworkManager[48891]: <info>  [1764089411.7030] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/410)
Nov 25 16:50:11 compute-0 nova_compute[254092]: 2025-11-25 16:50:11.806 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:11 compute-0 ovn_controller[153477]: 2025-11-25T16:50:11Z|01004|binding|INFO|Releasing lport 217facd0-6092-44c8-9430-efb8d36c211a from this chassis (sb_readonly=0)
Nov 25 16:50:11 compute-0 nova_compute[254092]: 2025-11-25 16:50:11.816 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1972: 321 pgs: 321 active+clean; 134 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 114 op/s
Nov 25 16:50:12 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:50:12 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2102446208' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:50:12 compute-0 nova_compute[254092]: 2025-11-25 16:50:12.077 254096 DEBUG oslo_concurrency.processutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:50:12 compute-0 nova_compute[254092]: 2025-11-25 16:50:12.110 254096 DEBUG nova.storage.rbd_utils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] rbd image 8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:50:12 compute-0 nova_compute[254092]: 2025-11-25 16:50:12.117 254096 DEBUG oslo_concurrency.processutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:50:12 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:50:12 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4075876687' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:50:12 compute-0 nova_compute[254092]: 2025-11-25 16:50:12.576 254096 DEBUG oslo_concurrency.processutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:50:12 compute-0 nova_compute[254092]: 2025-11-25 16:50:12.579 254096 DEBUG nova.virt.libvirt.vif [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:50:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-1379098021',display_name='tempest-ServerRescueTestJSONUnderV235-server-1379098021',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-1379098021',id=102,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='423cb78fb5f54c46b9867a6f07d0cf95',ramdisk_id='',reservation_id='r-njsnxmz3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-1568478678',owner_user_name='tempest-ServerRescueTestJSONUnderV235-1568478678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:50:04Z,user_data=None,user_id='734253d3f2e84904968d9db3044df1c8',uuid=8e8f0fb8-4b3c-40dd-9317-94bedc736376,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d6e67173-6a72-4200-9963-90668ed663e4", "address": "fa:16:3e:fa:b3:74", "network": {"id": "637a7f19-dc07-431e-90b4-682b65b3ddc7", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1593327310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "423cb78fb5f54c46b9867a6f07d0cf95", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6e67173-6a", "ovs_interfaceid": "d6e67173-6a72-4200-9963-90668ed663e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:50:12 compute-0 nova_compute[254092]: 2025-11-25 16:50:12.580 254096 DEBUG nova.network.os_vif_util [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Converting VIF {"id": "d6e67173-6a72-4200-9963-90668ed663e4", "address": "fa:16:3e:fa:b3:74", "network": {"id": "637a7f19-dc07-431e-90b4-682b65b3ddc7", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1593327310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "423cb78fb5f54c46b9867a6f07d0cf95", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6e67173-6a", "ovs_interfaceid": "d6e67173-6a72-4200-9963-90668ed663e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:50:12 compute-0 nova_compute[254092]: 2025-11-25 16:50:12.581 254096 DEBUG nova.network.os_vif_util [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fa:b3:74,bridge_name='br-int',has_traffic_filtering=True,id=d6e67173-6a72-4200-9963-90668ed663e4,network=Network(637a7f19-dc07-431e-90b4-682b65b3ddc7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6e67173-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:50:12 compute-0 nova_compute[254092]: 2025-11-25 16:50:12.583 254096 DEBUG nova.objects.instance [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8e8f0fb8-4b3c-40dd-9317-94bedc736376 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:50:12 compute-0 nova_compute[254092]: 2025-11-25 16:50:12.596 254096 DEBUG nova.virt.libvirt.driver [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:50:12 compute-0 nova_compute[254092]:   <uuid>8e8f0fb8-4b3c-40dd-9317-94bedc736376</uuid>
Nov 25 16:50:12 compute-0 nova_compute[254092]:   <name>instance-00000066</name>
Nov 25 16:50:12 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:50:12 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:50:12 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:50:12 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:       <nova:name>tempest-ServerRescueTestJSONUnderV235-server-1379098021</nova:name>
Nov 25 16:50:12 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:50:11</nova:creationTime>
Nov 25 16:50:12 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:50:12 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:50:12 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:50:12 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:50:12 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:50:12 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:50:12 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:50:12 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:50:12 compute-0 nova_compute[254092]:         <nova:user uuid="734253d3f2e84904968d9db3044df1c8">tempest-ServerRescueTestJSONUnderV235-1568478678-project-member</nova:user>
Nov 25 16:50:12 compute-0 nova_compute[254092]:         <nova:project uuid="423cb78fb5f54c46b9867a6f07d0cf95">tempest-ServerRescueTestJSONUnderV235-1568478678</nova:project>
Nov 25 16:50:12 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:50:12 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:50:12 compute-0 nova_compute[254092]:         <nova:port uuid="d6e67173-6a72-4200-9963-90668ed663e4">
Nov 25 16:50:12 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:50:12 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:50:12 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:50:12 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:50:12 compute-0 nova_compute[254092]:     <system>
Nov 25 16:50:12 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:50:12 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:50:12 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:50:12 compute-0 nova_compute[254092]:       <entry name="serial">8e8f0fb8-4b3c-40dd-9317-94bedc736376</entry>
Nov 25 16:50:12 compute-0 nova_compute[254092]:       <entry name="uuid">8e8f0fb8-4b3c-40dd-9317-94bedc736376</entry>
Nov 25 16:50:12 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     </system>
Nov 25 16:50:12 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:50:12 compute-0 nova_compute[254092]:   <os>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:   </os>
Nov 25 16:50:12 compute-0 nova_compute[254092]:   <features>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:   </features>
Nov 25 16:50:12 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:50:12 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:50:12 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:50:12 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:50:12 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:50:12 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk">
Nov 25 16:50:12 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:       </source>
Nov 25 16:50:12 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:50:12 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:50:12 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:50:12 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk.config">
Nov 25 16:50:12 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:       </source>
Nov 25 16:50:12 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:50:12 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:50:12 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:50:12 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:fa:b3:74"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:       <target dev="tapd6e67173-6a"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:50:12 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/8e8f0fb8-4b3c-40dd-9317-94bedc736376/console.log" append="off"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     <video>
Nov 25 16:50:12 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     </video>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:50:12 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:50:12 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:50:12 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:50:12 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:50:12 compute-0 nova_compute[254092]: </domain>
Nov 25 16:50:12 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:50:12 compute-0 nova_compute[254092]: 2025-11-25 16:50:12.602 254096 DEBUG nova.compute.manager [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Preparing to wait for external event network-vif-plugged-d6e67173-6a72-4200-9963-90668ed663e4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:50:12 compute-0 nova_compute[254092]: 2025-11-25 16:50:12.602 254096 DEBUG oslo_concurrency.lockutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Acquiring lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:50:12 compute-0 nova_compute[254092]: 2025-11-25 16:50:12.602 254096 DEBUG oslo_concurrency.lockutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:50:12 compute-0 nova_compute[254092]: 2025-11-25 16:50:12.603 254096 DEBUG oslo_concurrency.lockutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:50:12 compute-0 nova_compute[254092]: 2025-11-25 16:50:12.603 254096 DEBUG nova.virt.libvirt.vif [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:50:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-1379098021',display_name='tempest-ServerRescueTestJSONUnderV235-server-1379098021',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-1379098021',id=102,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='423cb78fb5f54c46b9867a6f07d0cf95',ramdisk_id='',reservation_id='r-njsnxmz3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-1568478678',owner_user_name='tempest-ServerRescueTestJSONUnderV235-1568478678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:50:04Z,user_data=None,user_id='734253d3f2e84904968d9db3044df1c8',uuid=8e8f0fb8-4b3c-40dd-9317-94bedc736376,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d6e67173-6a72-4200-9963-90668ed663e4", "address": "fa:16:3e:fa:b3:74", "network": {"id": "637a7f19-dc07-431e-90b4-682b65b3ddc7", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1593327310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "423cb78fb5f54c46b9867a6f07d0cf95", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6e67173-6a", "ovs_interfaceid": "d6e67173-6a72-4200-9963-90668ed663e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:50:12 compute-0 nova_compute[254092]: 2025-11-25 16:50:12.604 254096 DEBUG nova.network.os_vif_util [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Converting VIF {"id": "d6e67173-6a72-4200-9963-90668ed663e4", "address": "fa:16:3e:fa:b3:74", "network": {"id": "637a7f19-dc07-431e-90b4-682b65b3ddc7", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1593327310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "423cb78fb5f54c46b9867a6f07d0cf95", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6e67173-6a", "ovs_interfaceid": "d6e67173-6a72-4200-9963-90668ed663e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:50:12 compute-0 nova_compute[254092]: 2025-11-25 16:50:12.605 254096 DEBUG nova.network.os_vif_util [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fa:b3:74,bridge_name='br-int',has_traffic_filtering=True,id=d6e67173-6a72-4200-9963-90668ed663e4,network=Network(637a7f19-dc07-431e-90b4-682b65b3ddc7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6e67173-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:50:12 compute-0 nova_compute[254092]: 2025-11-25 16:50:12.605 254096 DEBUG os_vif [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:b3:74,bridge_name='br-int',has_traffic_filtering=True,id=d6e67173-6a72-4200-9963-90668ed663e4,network=Network(637a7f19-dc07-431e-90b4-682b65b3ddc7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6e67173-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:50:12 compute-0 nova_compute[254092]: 2025-11-25 16:50:12.606 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:12 compute-0 nova_compute[254092]: 2025-11-25 16:50:12.606 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:50:12 compute-0 nova_compute[254092]: 2025-11-25 16:50:12.607 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:50:12 compute-0 nova_compute[254092]: 2025-11-25 16:50:12.612 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:12 compute-0 nova_compute[254092]: 2025-11-25 16:50:12.613 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd6e67173-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:50:12 compute-0 nova_compute[254092]: 2025-11-25 16:50:12.613 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd6e67173-6a, col_values=(('external_ids', {'iface-id': 'd6e67173-6a72-4200-9963-90668ed663e4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fa:b3:74', 'vm-uuid': '8e8f0fb8-4b3c-40dd-9317-94bedc736376'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:50:12 compute-0 NetworkManager[48891]: <info>  [1764089412.6158] manager: (tapd6e67173-6a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/411)
Nov 25 16:50:12 compute-0 nova_compute[254092]: 2025-11-25 16:50:12.617 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:50:12 compute-0 nova_compute[254092]: 2025-11-25 16:50:12.621 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:12 compute-0 nova_compute[254092]: 2025-11-25 16:50:12.623 254096 INFO os_vif [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:b3:74,bridge_name='br-int',has_traffic_filtering=True,id=d6e67173-6a72-4200-9963-90668ed663e4,network=Network(637a7f19-dc07-431e-90b4-682b65b3ddc7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6e67173-6a')
Nov 25 16:50:12 compute-0 nova_compute[254092]: 2025-11-25 16:50:12.665 254096 DEBUG nova.virt.libvirt.driver [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:50:12 compute-0 nova_compute[254092]: 2025-11-25 16:50:12.666 254096 DEBUG nova.virt.libvirt.driver [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:50:12 compute-0 nova_compute[254092]: 2025-11-25 16:50:12.666 254096 DEBUG nova.virt.libvirt.driver [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] No VIF found with MAC fa:16:3e:fa:b3:74, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:50:12 compute-0 nova_compute[254092]: 2025-11-25 16:50:12.667 254096 INFO nova.virt.libvirt.driver [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Using config drive
Nov 25 16:50:12 compute-0 nova_compute[254092]: 2025-11-25 16:50:12.689 254096 DEBUG nova.storage.rbd_utils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] rbd image 8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:50:12 compute-0 ceph-mon[74985]: pgmap v1972: 321 pgs: 321 active+clean; 134 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 114 op/s
Nov 25 16:50:12 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2102446208' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:50:12 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4075876687' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:50:13 compute-0 nova_compute[254092]: 2025-11-25 16:50:13.075 254096 DEBUG nova.compute.manager [req-366bf844-b9c8-4cdb-be42-4fd08fe964f1 req-e77b868b-72ad-4eff-8173-c129841ac937 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Received event network-changed-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:50:13 compute-0 nova_compute[254092]: 2025-11-25 16:50:13.075 254096 DEBUG nova.compute.manager [req-366bf844-b9c8-4cdb-be42-4fd08fe964f1 req-e77b868b-72ad-4eff-8173-c129841ac937 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Refreshing instance network info cache due to event network-changed-1d6ef4a2-8289-4c88-b3f3-481435a4dab0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:50:13 compute-0 nova_compute[254092]: 2025-11-25 16:50:13.076 254096 DEBUG oslo_concurrency.lockutils [req-366bf844-b9c8-4cdb-be42-4fd08fe964f1 req-e77b868b-72ad-4eff-8173-c129841ac937 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-1ae1094f-81aa-490c-80ca-4eba95f46cac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:50:13 compute-0 nova_compute[254092]: 2025-11-25 16:50:13.076 254096 DEBUG oslo_concurrency.lockutils [req-366bf844-b9c8-4cdb-be42-4fd08fe964f1 req-e77b868b-72ad-4eff-8173-c129841ac937 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-1ae1094f-81aa-490c-80ca-4eba95f46cac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:50:13 compute-0 nova_compute[254092]: 2025-11-25 16:50:13.076 254096 DEBUG nova.network.neutron [req-366bf844-b9c8-4cdb-be42-4fd08fe964f1 req-e77b868b-72ad-4eff-8173-c129841ac937 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Refreshing network info cache for port 1d6ef4a2-8289-4c88-b3f3-481435a4dab0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:50:13 compute-0 nova_compute[254092]: 2025-11-25 16:50:13.296 254096 INFO nova.virt.libvirt.driver [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Creating config drive at /var/lib/nova/instances/8e8f0fb8-4b3c-40dd-9317-94bedc736376/disk.config
Nov 25 16:50:13 compute-0 nova_compute[254092]: 2025-11-25 16:50:13.300 254096 DEBUG oslo_concurrency.processutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8e8f0fb8-4b3c-40dd-9317-94bedc736376/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp24vw2lix execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:50:13 compute-0 nova_compute[254092]: 2025-11-25 16:50:13.436 254096 DEBUG oslo_concurrency.processutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8e8f0fb8-4b3c-40dd-9317-94bedc736376/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp24vw2lix" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:50:13 compute-0 nova_compute[254092]: 2025-11-25 16:50:13.464 254096 DEBUG nova.storage.rbd_utils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] rbd image 8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:50:13 compute-0 nova_compute[254092]: 2025-11-25 16:50:13.468 254096 DEBUG oslo_concurrency.processutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8e8f0fb8-4b3c-40dd-9317-94bedc736376/disk.config 8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:50:13 compute-0 nova_compute[254092]: 2025-11-25 16:50:13.507 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:50:13 compute-0 nova_compute[254092]: 2025-11-25 16:50:13.574 254096 DEBUG nova.network.neutron [req-63fb04e5-fcf1-4647-bf25-615ed5e7a4bf req-b1ad7552-77c9-4c0d-b160-ccabc51f3e62 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Updated VIF entry in instance network info cache for port d6e67173-6a72-4200-9963-90668ed663e4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:50:13 compute-0 nova_compute[254092]: 2025-11-25 16:50:13.576 254096 DEBUG nova.network.neutron [req-63fb04e5-fcf1-4647-bf25-615ed5e7a4bf req-b1ad7552-77c9-4c0d-b160-ccabc51f3e62 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Updating instance_info_cache with network_info: [{"id": "d6e67173-6a72-4200-9963-90668ed663e4", "address": "fa:16:3e:fa:b3:74", "network": {"id": "637a7f19-dc07-431e-90b4-682b65b3ddc7", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1593327310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "423cb78fb5f54c46b9867a6f07d0cf95", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6e67173-6a", "ovs_interfaceid": "d6e67173-6a72-4200-9963-90668ed663e4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:50:13 compute-0 nova_compute[254092]: 2025-11-25 16:50:13.595 254096 DEBUG oslo_concurrency.lockutils [req-63fb04e5-fcf1-4647-bf25-615ed5e7a4bf req-b1ad7552-77c9-4c0d-b160-ccabc51f3e62 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-8e8f0fb8-4b3c-40dd-9317-94bedc736376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:50:13 compute-0 nova_compute[254092]: 2025-11-25 16:50:13.627 254096 DEBUG oslo_concurrency.processutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8e8f0fb8-4b3c-40dd-9317-94bedc736376/disk.config 8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:50:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:13.627 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:50:13 compute-0 nova_compute[254092]: 2025-11-25 16:50:13.628 254096 INFO nova.virt.libvirt.driver [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Deleting local config drive /var/lib/nova/instances/8e8f0fb8-4b3c-40dd-9317-94bedc736376/disk.config because it was imported into RBD.
Nov 25 16:50:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:13.629 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:50:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:13.630 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:50:13 compute-0 kernel: tapd6e67173-6a: entered promiscuous mode
Nov 25 16:50:13 compute-0 NetworkManager[48891]: <info>  [1764089413.6830] manager: (tapd6e67173-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/412)
Nov 25 16:50:13 compute-0 ovn_controller[153477]: 2025-11-25T16:50:13Z|01005|binding|INFO|Claiming lport d6e67173-6a72-4200-9963-90668ed663e4 for this chassis.
Nov 25 16:50:13 compute-0 ovn_controller[153477]: 2025-11-25T16:50:13Z|01006|binding|INFO|d6e67173-6a72-4200-9963-90668ed663e4: Claiming fa:16:3e:fa:b3:74 10.100.0.13
Nov 25 16:50:13 compute-0 nova_compute[254092]: 2025-11-25 16:50:13.685 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:13.692 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fa:b3:74 10.100.0.13'], port_security=['fa:16:3e:fa:b3:74 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '8e8f0fb8-4b3c-40dd-9317-94bedc736376', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-637a7f19-dc07-431e-90b4-682b65b3ddc7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '423cb78fb5f54c46b9867a6f07d0cf95', 'neutron:revision_number': '2', 'neutron:security_group_ids': '05d0216e-41f1-476a-b1bc-7b1c5e9ab51c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b730a5ca-bd1d-4c9c-8894-44f85dc34188, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=d6e67173-6a72-4200-9963-90668ed663e4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:50:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:13.694 163338 INFO neutron.agent.ovn.metadata.agent [-] Port d6e67173-6a72-4200-9963-90668ed663e4 in datapath 637a7f19-dc07-431e-90b4-682b65b3ddc7 bound to our chassis
Nov 25 16:50:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:13.695 163338 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 637a7f19-dc07-431e-90b4-682b65b3ddc7 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 25 16:50:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:13.696 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e3d848db-0bc6-4281-9d4b-ed8d211413aa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:13 compute-0 ovn_controller[153477]: 2025-11-25T16:50:13Z|01007|binding|INFO|Setting lport d6e67173-6a72-4200-9963-90668ed663e4 ovn-installed in OVS
Nov 25 16:50:13 compute-0 ovn_controller[153477]: 2025-11-25T16:50:13Z|01008|binding|INFO|Setting lport d6e67173-6a72-4200-9963-90668ed663e4 up in Southbound
Nov 25 16:50:13 compute-0 nova_compute[254092]: 2025-11-25 16:50:13.706 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:13 compute-0 nova_compute[254092]: 2025-11-25 16:50:13.710 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:13 compute-0 systemd-udevd[357391]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:50:13 compute-0 systemd-machined[216343]: New machine qemu-128-instance-00000066.
Nov 25 16:50:13 compute-0 NetworkManager[48891]: <info>  [1764089413.7455] device (tapd6e67173-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:50:13 compute-0 NetworkManager[48891]: <info>  [1764089413.7467] device (tapd6e67173-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:50:13 compute-0 systemd[1]: Started Virtual Machine qemu-128-instance-00000066.
Nov 25 16:50:13 compute-0 nova_compute[254092]: 2025-11-25 16:50:13.764 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1973: 321 pgs: 321 active+clean; 134 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 16:50:13 compute-0 nova_compute[254092]: 2025-11-25 16:50:13.915 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089398.9143293, 73301044-3bad-4401-9e30-f009d417f662 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:50:13 compute-0 nova_compute[254092]: 2025-11-25 16:50:13.916 254096 INFO nova.compute.manager [-] [instance: 73301044-3bad-4401-9e30-f009d417f662] VM Stopped (Lifecycle Event)
Nov 25 16:50:13 compute-0 nova_compute[254092]: 2025-11-25 16:50:13.934 254096 DEBUG nova.compute.manager [None req-af2c918e-6be4-4018-a420-efec636f530c - - - - - -] [instance: 73301044-3bad-4401-9e30-f009d417f662] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:50:14 compute-0 nova_compute[254092]: 2025-11-25 16:50:14.141 254096 DEBUG nova.compute.manager [req-4e0de390-4ae7-4847-ac85-8dc52f79ce23 req-f8a5a89b-7c17-421a-8708-0ee41f8ed277 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Received event network-vif-plugged-d6e67173-6a72-4200-9963-90668ed663e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:50:14 compute-0 nova_compute[254092]: 2025-11-25 16:50:14.141 254096 DEBUG oslo_concurrency.lockutils [req-4e0de390-4ae7-4847-ac85-8dc52f79ce23 req-f8a5a89b-7c17-421a-8708-0ee41f8ed277 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:50:14 compute-0 nova_compute[254092]: 2025-11-25 16:50:14.142 254096 DEBUG oslo_concurrency.lockutils [req-4e0de390-4ae7-4847-ac85-8dc52f79ce23 req-f8a5a89b-7c17-421a-8708-0ee41f8ed277 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:50:14 compute-0 nova_compute[254092]: 2025-11-25 16:50:14.142 254096 DEBUG oslo_concurrency.lockutils [req-4e0de390-4ae7-4847-ac85-8dc52f79ce23 req-f8a5a89b-7c17-421a-8708-0ee41f8ed277 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:50:14 compute-0 nova_compute[254092]: 2025-11-25 16:50:14.142 254096 DEBUG nova.compute.manager [req-4e0de390-4ae7-4847-ac85-8dc52f79ce23 req-f8a5a89b-7c17-421a-8708-0ee41f8ed277 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Processing event network-vif-plugged-d6e67173-6a72-4200-9963-90668ed663e4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:50:14 compute-0 nova_compute[254092]: 2025-11-25 16:50:14.312 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089414.3121905, 8e8f0fb8-4b3c-40dd-9317-94bedc736376 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:50:14 compute-0 nova_compute[254092]: 2025-11-25 16:50:14.313 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] VM Started (Lifecycle Event)
Nov 25 16:50:14 compute-0 nova_compute[254092]: 2025-11-25 16:50:14.315 254096 DEBUG nova.compute.manager [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:50:14 compute-0 nova_compute[254092]: 2025-11-25 16:50:14.318 254096 DEBUG nova.virt.libvirt.driver [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:50:14 compute-0 nova_compute[254092]: 2025-11-25 16:50:14.325 254096 INFO nova.virt.libvirt.driver [-] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Instance spawned successfully.
Nov 25 16:50:14 compute-0 nova_compute[254092]: 2025-11-25 16:50:14.326 254096 DEBUG nova.virt.libvirt.driver [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:50:14 compute-0 nova_compute[254092]: 2025-11-25 16:50:14.330 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:50:14 compute-0 nova_compute[254092]: 2025-11-25 16:50:14.334 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:50:14 compute-0 nova_compute[254092]: 2025-11-25 16:50:14.354 254096 DEBUG nova.virt.libvirt.driver [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:50:14 compute-0 nova_compute[254092]: 2025-11-25 16:50:14.355 254096 DEBUG nova.virt.libvirt.driver [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:50:14 compute-0 nova_compute[254092]: 2025-11-25 16:50:14.356 254096 DEBUG nova.virt.libvirt.driver [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:50:14 compute-0 nova_compute[254092]: 2025-11-25 16:50:14.357 254096 DEBUG nova.virt.libvirt.driver [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:50:14 compute-0 nova_compute[254092]: 2025-11-25 16:50:14.358 254096 DEBUG nova.virt.libvirt.driver [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:50:14 compute-0 nova_compute[254092]: 2025-11-25 16:50:14.358 254096 DEBUG nova.virt.libvirt.driver [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:50:14 compute-0 nova_compute[254092]: 2025-11-25 16:50:14.366 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:50:14 compute-0 nova_compute[254092]: 2025-11-25 16:50:14.367 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089414.3146741, 8e8f0fb8-4b3c-40dd-9317-94bedc736376 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:50:14 compute-0 nova_compute[254092]: 2025-11-25 16:50:14.367 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] VM Paused (Lifecycle Event)
Nov 25 16:50:14 compute-0 nova_compute[254092]: 2025-11-25 16:50:14.396 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:50:14 compute-0 nova_compute[254092]: 2025-11-25 16:50:14.402 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089414.3175724, 8e8f0fb8-4b3c-40dd-9317-94bedc736376 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:50:14 compute-0 nova_compute[254092]: 2025-11-25 16:50:14.403 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] VM Resumed (Lifecycle Event)
Nov 25 16:50:14 compute-0 nova_compute[254092]: 2025-11-25 16:50:14.421 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:50:14 compute-0 nova_compute[254092]: 2025-11-25 16:50:14.425 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:50:14 compute-0 nova_compute[254092]: 2025-11-25 16:50:14.430 254096 INFO nova.compute.manager [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Took 9.38 seconds to spawn the instance on the hypervisor.
Nov 25 16:50:14 compute-0 nova_compute[254092]: 2025-11-25 16:50:14.431 254096 DEBUG nova.compute.manager [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:50:14 compute-0 nova_compute[254092]: 2025-11-25 16:50:14.456 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:50:14 compute-0 nova_compute[254092]: 2025-11-25 16:50:14.495 254096 INFO nova.compute.manager [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Took 10.31 seconds to build instance.
Nov 25 16:50:14 compute-0 nova_compute[254092]: 2025-11-25 16:50:14.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:50:14 compute-0 nova_compute[254092]: 2025-11-25 16:50:14.529 254096 DEBUG oslo_concurrency.lockutils [None req-88bedb22-df26-4195-9f33-9b0a3ae60114 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.445s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:50:14 compute-0 nova_compute[254092]: 2025-11-25 16:50:14.741 254096 DEBUG nova.network.neutron [req-366bf844-b9c8-4cdb-be42-4fd08fe964f1 req-e77b868b-72ad-4eff-8173-c129841ac937 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Updated VIF entry in instance network info cache for port 1d6ef4a2-8289-4c88-b3f3-481435a4dab0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:50:14 compute-0 nova_compute[254092]: 2025-11-25 16:50:14.741 254096 DEBUG nova.network.neutron [req-366bf844-b9c8-4cdb-be42-4fd08fe964f1 req-e77b868b-72ad-4eff-8173-c129841ac937 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Updating instance_info_cache with network_info: [{"id": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "address": "fa:16:3e:f8:6f:25", "network": {"id": "9840ff40-ec43-46f9-ab52-3d9495f203ee", "bridge": "br-int", "label": "tempest-network-smoke--1436664854", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d6ef4a2-82", "ovs_interfaceid": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:50:14 compute-0 nova_compute[254092]: 2025-11-25 16:50:14.755 254096 DEBUG oslo_concurrency.lockutils [req-366bf844-b9c8-4cdb-be42-4fd08fe964f1 req-e77b868b-72ad-4eff-8173-c129841ac937 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-1ae1094f-81aa-490c-80ca-4eba95f46cac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:50:14 compute-0 ceph-mon[74985]: pgmap v1973: 321 pgs: 321 active+clean; 134 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 16:50:15 compute-0 nova_compute[254092]: 2025-11-25 16:50:15.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:50:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1974: 321 pgs: 321 active+clean; 134 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 123 op/s
Nov 25 16:50:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:50:16 compute-0 nova_compute[254092]: 2025-11-25 16:50:16.234 254096 DEBUG nova.compute.manager [req-3f988ff8-1d0d-4125-b24e-76cd7f44b9ad req-c4c8e9bb-e9eb-4c75-9ca4-930c8cfa61cd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Received event network-vif-plugged-d6e67173-6a72-4200-9963-90668ed663e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:50:16 compute-0 nova_compute[254092]: 2025-11-25 16:50:16.235 254096 DEBUG oslo_concurrency.lockutils [req-3f988ff8-1d0d-4125-b24e-76cd7f44b9ad req-c4c8e9bb-e9eb-4c75-9ca4-930c8cfa61cd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:50:16 compute-0 nova_compute[254092]: 2025-11-25 16:50:16.235 254096 DEBUG oslo_concurrency.lockutils [req-3f988ff8-1d0d-4125-b24e-76cd7f44b9ad req-c4c8e9bb-e9eb-4c75-9ca4-930c8cfa61cd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:50:16 compute-0 nova_compute[254092]: 2025-11-25 16:50:16.235 254096 DEBUG oslo_concurrency.lockutils [req-3f988ff8-1d0d-4125-b24e-76cd7f44b9ad req-c4c8e9bb-e9eb-4c75-9ca4-930c8cfa61cd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:50:16 compute-0 nova_compute[254092]: 2025-11-25 16:50:16.235 254096 DEBUG nova.compute.manager [req-3f988ff8-1d0d-4125-b24e-76cd7f44b9ad req-c4c8e9bb-e9eb-4c75-9ca4-930c8cfa61cd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] No waiting events found dispatching network-vif-plugged-d6e67173-6a72-4200-9963-90668ed663e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:50:16 compute-0 nova_compute[254092]: 2025-11-25 16:50:16.236 254096 WARNING nova.compute.manager [req-3f988ff8-1d0d-4125-b24e-76cd7f44b9ad req-c4c8e9bb-e9eb-4c75-9ca4-930c8cfa61cd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Received unexpected event network-vif-plugged-d6e67173-6a72-4200-9963-90668ed663e4 for instance with vm_state active and task_state None.
Nov 25 16:50:16 compute-0 nova_compute[254092]: 2025-11-25 16:50:16.602 254096 INFO nova.compute.manager [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Rescuing
Nov 25 16:50:16 compute-0 nova_compute[254092]: 2025-11-25 16:50:16.602 254096 DEBUG oslo_concurrency.lockutils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Acquiring lock "refresh_cache-8e8f0fb8-4b3c-40dd-9317-94bedc736376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:50:16 compute-0 nova_compute[254092]: 2025-11-25 16:50:16.603 254096 DEBUG oslo_concurrency.lockutils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Acquired lock "refresh_cache-8e8f0fb8-4b3c-40dd-9317-94bedc736376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:50:16 compute-0 nova_compute[254092]: 2025-11-25 16:50:16.603 254096 DEBUG nova.network.neutron [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:50:16 compute-0 ceph-mon[74985]: pgmap v1974: 321 pgs: 321 active+clean; 134 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 123 op/s
Nov 25 16:50:17 compute-0 nova_compute[254092]: 2025-11-25 16:50:17.617 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:17 compute-0 nova_compute[254092]: 2025-11-25 16:50:17.705 254096 DEBUG nova.network.neutron [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Updating instance_info_cache with network_info: [{"id": "d6e67173-6a72-4200-9963-90668ed663e4", "address": "fa:16:3e:fa:b3:74", "network": {"id": "637a7f19-dc07-431e-90b4-682b65b3ddc7", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1593327310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "423cb78fb5f54c46b9867a6f07d0cf95", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6e67173-6a", "ovs_interfaceid": "d6e67173-6a72-4200-9963-90668ed663e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:50:17 compute-0 nova_compute[254092]: 2025-11-25 16:50:17.723 254096 DEBUG oslo_concurrency.lockutils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Releasing lock "refresh_cache-8e8f0fb8-4b3c-40dd-9317-94bedc736376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:50:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1975: 321 pgs: 321 active+clean; 134 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.8 MiB/s wr, 148 op/s
Nov 25 16:50:18 compute-0 nova_compute[254092]: 2025-11-25 16:50:18.072 254096 DEBUG nova.virt.libvirt.driver [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 16:50:18 compute-0 nova_compute[254092]: 2025-11-25 16:50:18.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:50:18 compute-0 nova_compute[254092]: 2025-11-25 16:50:18.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:50:18 compute-0 nova_compute[254092]: 2025-11-25 16:50:18.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 16:50:18 compute-0 nova_compute[254092]: 2025-11-25 16:50:18.767 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:18 compute-0 ovn_controller[153477]: 2025-11-25T16:50:18Z|00104|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f8:6f:25 10.100.0.14
Nov 25 16:50:18 compute-0 ovn_controller[153477]: 2025-11-25T16:50:18Z|00105|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f8:6f:25 10.100.0.14
Nov 25 16:50:18 compute-0 ceph-mon[74985]: pgmap v1975: 321 pgs: 321 active+clean; 134 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.8 MiB/s wr, 148 op/s
Nov 25 16:50:19 compute-0 nova_compute[254092]: 2025-11-25 16:50:19.072 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1976: 321 pgs: 321 active+clean; 134 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.2 MiB/s wr, 133 op/s
Nov 25 16:50:20 compute-0 nova_compute[254092]: 2025-11-25 16:50:20.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:50:20 compute-0 nova_compute[254092]: 2025-11-25 16:50:20.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:50:20 compute-0 nova_compute[254092]: 2025-11-25 16:50:20.520 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:50:20 compute-0 nova_compute[254092]: 2025-11-25 16:50:20.520 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:50:20 compute-0 nova_compute[254092]: 2025-11-25 16:50:20.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:50:20 compute-0 nova_compute[254092]: 2025-11-25 16:50:20.521 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 16:50:20 compute-0 nova_compute[254092]: 2025-11-25 16:50:20.521 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:50:20 compute-0 podman[357446]: 2025-11-25 16:50:20.64174961 +0000 UTC m=+0.056968260 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 16:50:20 compute-0 podman[357445]: 2025-11-25 16:50:20.662006361 +0000 UTC m=+0.076698885 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 16:50:20 compute-0 podman[357447]: 2025-11-25 16:50:20.681148611 +0000 UTC m=+0.094031627 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Nov 25 16:50:20 compute-0 ceph-mon[74985]: pgmap v1976: 321 pgs: 321 active+clean; 134 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.2 MiB/s wr, 133 op/s
Nov 25 16:50:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:50:20 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2310197540' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:50:20 compute-0 nova_compute[254092]: 2025-11-25 16:50:20.990 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:50:21 compute-0 nova_compute[254092]: 2025-11-25 16:50:21.071 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000066 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:50:21 compute-0 nova_compute[254092]: 2025-11-25 16:50:21.072 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000066 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:50:21 compute-0 nova_compute[254092]: 2025-11-25 16:50:21.074 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000065 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:50:21 compute-0 nova_compute[254092]: 2025-11-25 16:50:21.074 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000065 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:50:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:50:21 compute-0 nova_compute[254092]: 2025-11-25 16:50:21.241 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:50:21 compute-0 nova_compute[254092]: 2025-11-25 16:50:21.243 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3460MB free_disk=59.94648361206055GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 16:50:21 compute-0 nova_compute[254092]: 2025-11-25 16:50:21.243 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:50:21 compute-0 nova_compute[254092]: 2025-11-25 16:50:21.244 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:50:21 compute-0 nova_compute[254092]: 2025-11-25 16:50:21.333 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 1ae1094f-81aa-490c-80ca-4eba95f46cac actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:50:21 compute-0 nova_compute[254092]: 2025-11-25 16:50:21.334 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 8e8f0fb8-4b3c-40dd-9317-94bedc736376 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:50:21 compute-0 nova_compute[254092]: 2025-11-25 16:50:21.334 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 16:50:21 compute-0 nova_compute[254092]: 2025-11-25 16:50:21.334 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 16:50:21 compute-0 nova_compute[254092]: 2025-11-25 16:50:21.352 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing inventories for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 16:50:21 compute-0 nova_compute[254092]: 2025-11-25 16:50:21.393 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating ProviderTree inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 16:50:21 compute-0 nova_compute[254092]: 2025-11-25 16:50:21.394 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 16:50:21 compute-0 nova_compute[254092]: 2025-11-25 16:50:21.411 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing aggregate associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 16:50:21 compute-0 nova_compute[254092]: 2025-11-25 16:50:21.435 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing trait associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 16:50:21 compute-0 nova_compute[254092]: 2025-11-25 16:50:21.531 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:50:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1977: 321 pgs: 321 active+clean; 167 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.3 MiB/s wr, 222 op/s
Nov 25 16:50:21 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2310197540' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:50:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:50:21 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1215157792' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:50:21 compute-0 nova_compute[254092]: 2025-11-25 16:50:21.982 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:50:21 compute-0 nova_compute[254092]: 2025-11-25 16:50:21.987 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:50:22 compute-0 nova_compute[254092]: 2025-11-25 16:50:22.001 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:50:22 compute-0 nova_compute[254092]: 2025-11-25 16:50:22.020 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 16:50:22 compute-0 nova_compute[254092]: 2025-11-25 16:50:22.021 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.777s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:50:22 compute-0 nova_compute[254092]: 2025-11-25 16:50:22.090 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:22 compute-0 nova_compute[254092]: 2025-11-25 16:50:22.658 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:22 compute-0 ceph-mon[74985]: pgmap v1977: 321 pgs: 321 active+clean; 167 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.3 MiB/s wr, 222 op/s
Nov 25 16:50:22 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1215157792' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:50:23 compute-0 nova_compute[254092]: 2025-11-25 16:50:23.016 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:50:23 compute-0 nova_compute[254092]: 2025-11-25 16:50:23.017 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:50:23 compute-0 nova_compute[254092]: 2025-11-25 16:50:23.017 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 16:50:23 compute-0 nova_compute[254092]: 2025-11-25 16:50:23.052 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 16:50:23 compute-0 nova_compute[254092]: 2025-11-25 16:50:23.771 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1978: 321 pgs: 321 active+clean; 167 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Nov 25 16:50:24 compute-0 ceph-mon[74985]: pgmap v1978: 321 pgs: 321 active+clean; 167 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Nov 25 16:50:25 compute-0 nova_compute[254092]: 2025-11-25 16:50:25.667 254096 INFO nova.compute.manager [None req-f8855c07-08ce-4e4f-98da-d3d5c7b6ff36 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Get console output
Nov 25 16:50:25 compute-0 nova_compute[254092]: 2025-11-25 16:50:25.674 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 16:50:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1979: 321 pgs: 321 active+clean; 167 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Nov 25 16:50:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:50:26 compute-0 nova_compute[254092]: 2025-11-25 16:50:26.738 254096 DEBUG oslo_concurrency.lockutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Acquiring lock "7e50c80e-03fd-47ec-854f-f4e5d45c1c82" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:50:26 compute-0 nova_compute[254092]: 2025-11-25 16:50:26.739 254096 DEBUG oslo_concurrency.lockutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Lock "7e50c80e-03fd-47ec-854f-f4e5d45c1c82" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:50:26 compute-0 nova_compute[254092]: 2025-11-25 16:50:26.747 254096 INFO nova.compute.manager [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Rebuilding instance
Nov 25 16:50:26 compute-0 nova_compute[254092]: 2025-11-25 16:50:26.763 254096 DEBUG nova.compute.manager [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:50:26 compute-0 nova_compute[254092]: 2025-11-25 16:50:26.876 254096 DEBUG oslo_concurrency.lockutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:50:26 compute-0 nova_compute[254092]: 2025-11-25 16:50:26.876 254096 DEBUG oslo_concurrency.lockutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:50:26 compute-0 nova_compute[254092]: 2025-11-25 16:50:26.881 254096 DEBUG nova.virt.hardware [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:50:26 compute-0 nova_compute[254092]: 2025-11-25 16:50:26.882 254096 INFO nova.compute.claims [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:50:26 compute-0 ceph-mon[74985]: pgmap v1979: 321 pgs: 321 active+clean; 167 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Nov 25 16:50:27 compute-0 nova_compute[254092]: 2025-11-25 16:50:27.015 254096 DEBUG oslo_concurrency.processutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:50:27 compute-0 nova_compute[254092]: 2025-11-25 16:50:27.363 254096 DEBUG nova.objects.instance [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 1ae1094f-81aa-490c-80ca-4eba95f46cac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:50:27 compute-0 nova_compute[254092]: 2025-11-25 16:50:27.382 254096 DEBUG nova.compute.manager [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:50:27 compute-0 nova_compute[254092]: 2025-11-25 16:50:27.433 254096 DEBUG nova.objects.instance [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'pci_requests' on Instance uuid 1ae1094f-81aa-490c-80ca-4eba95f46cac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:50:27 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:50:27 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2796676940' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:50:27 compute-0 nova_compute[254092]: 2025-11-25 16:50:27.444 254096 DEBUG nova.objects.instance [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1ae1094f-81aa-490c-80ca-4eba95f46cac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:50:27 compute-0 nova_compute[254092]: 2025-11-25 16:50:27.454 254096 DEBUG nova.objects.instance [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'resources' on Instance uuid 1ae1094f-81aa-490c-80ca-4eba95f46cac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:50:27 compute-0 nova_compute[254092]: 2025-11-25 16:50:27.460 254096 DEBUG oslo_concurrency.processutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:50:27 compute-0 nova_compute[254092]: 2025-11-25 16:50:27.463 254096 DEBUG nova.objects.instance [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'migration_context' on Instance uuid 1ae1094f-81aa-490c-80ca-4eba95f46cac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:50:27 compute-0 nova_compute[254092]: 2025-11-25 16:50:27.467 254096 DEBUG nova.compute.provider_tree [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:50:27 compute-0 nova_compute[254092]: 2025-11-25 16:50:27.479 254096 DEBUG nova.scheduler.client.report [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:50:27 compute-0 nova_compute[254092]: 2025-11-25 16:50:27.484 254096 DEBUG nova.objects.instance [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 16:50:27 compute-0 nova_compute[254092]: 2025-11-25 16:50:27.488 254096 DEBUG nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 16:50:27 compute-0 nova_compute[254092]: 2025-11-25 16:50:27.501 254096 DEBUG oslo_concurrency.lockutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:50:27 compute-0 nova_compute[254092]: 2025-11-25 16:50:27.502 254096 DEBUG nova.compute.manager [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:50:27 compute-0 nova_compute[254092]: 2025-11-25 16:50:27.550 254096 DEBUG nova.compute.manager [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:50:27 compute-0 nova_compute[254092]: 2025-11-25 16:50:27.550 254096 DEBUG nova.network.neutron [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:50:27 compute-0 nova_compute[254092]: 2025-11-25 16:50:27.579 254096 INFO nova.virt.libvirt.driver [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:50:27 compute-0 nova_compute[254092]: 2025-11-25 16:50:27.595 254096 DEBUG nova.compute.manager [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:50:27 compute-0 nova_compute[254092]: 2025-11-25 16:50:27.660 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:27 compute-0 nova_compute[254092]: 2025-11-25 16:50:27.696 254096 DEBUG nova.compute.manager [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:50:27 compute-0 nova_compute[254092]: 2025-11-25 16:50:27.698 254096 DEBUG nova.virt.libvirt.driver [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:50:27 compute-0 nova_compute[254092]: 2025-11-25 16:50:27.698 254096 INFO nova.virt.libvirt.driver [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Creating image(s)
Nov 25 16:50:27 compute-0 nova_compute[254092]: 2025-11-25 16:50:27.721 254096 DEBUG nova.storage.rbd_utils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] rbd image 7e50c80e-03fd-47ec-854f-f4e5d45c1c82_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:50:27 compute-0 nova_compute[254092]: 2025-11-25 16:50:27.743 254096 DEBUG nova.storage.rbd_utils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] rbd image 7e50c80e-03fd-47ec-854f-f4e5d45c1c82_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:50:27 compute-0 nova_compute[254092]: 2025-11-25 16:50:27.766 254096 DEBUG nova.storage.rbd_utils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] rbd image 7e50c80e-03fd-47ec-854f-f4e5d45c1c82_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:50:27 compute-0 nova_compute[254092]: 2025-11-25 16:50:27.772 254096 DEBUG oslo_concurrency.processutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:50:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1980: 321 pgs: 321 active+clean; 167 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.4 MiB/s wr, 117 op/s
Nov 25 16:50:27 compute-0 nova_compute[254092]: 2025-11-25 16:50:27.839 254096 DEBUG oslo_concurrency.processutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:50:27 compute-0 nova_compute[254092]: 2025-11-25 16:50:27.841 254096 DEBUG oslo_concurrency.lockutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:50:27 compute-0 nova_compute[254092]: 2025-11-25 16:50:27.841 254096 DEBUG oslo_concurrency.lockutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:50:27 compute-0 nova_compute[254092]: 2025-11-25 16:50:27.842 254096 DEBUG oslo_concurrency.lockutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:50:27 compute-0 nova_compute[254092]: 2025-11-25 16:50:27.865 254096 DEBUG nova.storage.rbd_utils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] rbd image 7e50c80e-03fd-47ec-854f-f4e5d45c1c82_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:50:27 compute-0 nova_compute[254092]: 2025-11-25 16:50:27.869 254096 DEBUG oslo_concurrency.processutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 7e50c80e-03fd-47ec-854f-f4e5d45c1c82_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:50:27 compute-0 nova_compute[254092]: 2025-11-25 16:50:27.937 254096 DEBUG nova.policy [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '91f64879f99b40f69cdf49bceea9af2b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b5ef0cd3e375456d9e1b561f7929fc4f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:50:27 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2796676940' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:50:28 compute-0 nova_compute[254092]: 2025-11-25 16:50:28.114 254096 DEBUG nova.virt.libvirt.driver [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 16:50:28 compute-0 nova_compute[254092]: 2025-11-25 16:50:28.199 254096 DEBUG oslo_concurrency.processutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 7e50c80e-03fd-47ec-854f-f4e5d45c1c82_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.330s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:50:28 compute-0 nova_compute[254092]: 2025-11-25 16:50:28.275 254096 DEBUG nova.storage.rbd_utils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] resizing rbd image 7e50c80e-03fd-47ec-854f-f4e5d45c1c82_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:50:28 compute-0 nova_compute[254092]: 2025-11-25 16:50:28.570 254096 DEBUG nova.objects.instance [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Lazy-loading 'migration_context' on Instance uuid 7e50c80e-03fd-47ec-854f-f4e5d45c1c82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:50:28 compute-0 nova_compute[254092]: 2025-11-25 16:50:28.589 254096 DEBUG nova.network.neutron [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Successfully created port: 0127bd66-2e67-465a-8205-164198287c55 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:50:28 compute-0 nova_compute[254092]: 2025-11-25 16:50:28.592 254096 DEBUG nova.virt.libvirt.driver [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:50:28 compute-0 nova_compute[254092]: 2025-11-25 16:50:28.593 254096 DEBUG nova.virt.libvirt.driver [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Ensure instance console log exists: /var/lib/nova/instances/7e50c80e-03fd-47ec-854f-f4e5d45c1c82/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:50:28 compute-0 nova_compute[254092]: 2025-11-25 16:50:28.593 254096 DEBUG oslo_concurrency.lockutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:50:28 compute-0 nova_compute[254092]: 2025-11-25 16:50:28.593 254096 DEBUG oslo_concurrency.lockutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:50:28 compute-0 nova_compute[254092]: 2025-11-25 16:50:28.593 254096 DEBUG oslo_concurrency.lockutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:50:28 compute-0 nova_compute[254092]: 2025-11-25 16:50:28.772 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:28 compute-0 ceph-mon[74985]: pgmap v1980: 321 pgs: 321 active+clean; 167 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.4 MiB/s wr, 117 op/s
Nov 25 16:50:29 compute-0 nova_compute[254092]: 2025-11-25 16:50:29.504 254096 DEBUG nova.network.neutron [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Successfully updated port: 0127bd66-2e67-465a-8205-164198287c55 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:50:29 compute-0 nova_compute[254092]: 2025-11-25 16:50:29.520 254096 DEBUG oslo_concurrency.lockutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Acquiring lock "refresh_cache-7e50c80e-03fd-47ec-854f-f4e5d45c1c82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:50:29 compute-0 nova_compute[254092]: 2025-11-25 16:50:29.520 254096 DEBUG oslo_concurrency.lockutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Acquired lock "refresh_cache-7e50c80e-03fd-47ec-854f-f4e5d45c1c82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:50:29 compute-0 nova_compute[254092]: 2025-11-25 16:50:29.520 254096 DEBUG nova.network.neutron [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:50:29 compute-0 nova_compute[254092]: 2025-11-25 16:50:29.773 254096 DEBUG nova.network.neutron [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:50:29 compute-0 kernel: tap1d6ef4a2-82 (unregistering): left promiscuous mode
Nov 25 16:50:29 compute-0 NetworkManager[48891]: <info>  [1764089429.8027] device (tap1d6ef4a2-82): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:50:29 compute-0 nova_compute[254092]: 2025-11-25 16:50:29.811 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:29 compute-0 ovn_controller[153477]: 2025-11-25T16:50:29Z|01009|binding|INFO|Releasing lport 1d6ef4a2-8289-4c88-b3f3-481435a4dab0 from this chassis (sb_readonly=0)
Nov 25 16:50:29 compute-0 ovn_controller[153477]: 2025-11-25T16:50:29Z|01010|binding|INFO|Setting lport 1d6ef4a2-8289-4c88-b3f3-481435a4dab0 down in Southbound
Nov 25 16:50:29 compute-0 ovn_controller[153477]: 2025-11-25T16:50:29Z|01011|binding|INFO|Removing iface tap1d6ef4a2-82 ovn-installed in OVS
Nov 25 16:50:29 compute-0 nova_compute[254092]: 2025-11-25 16:50:29.816 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:29 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:29.819 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:6f:25 10.100.0.14'], port_security=['fa:16:3e:f8:6f:25 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '1ae1094f-81aa-490c-80ca-4eba95f46cac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9840ff40-ec43-46f9-ab52-3d9495f203ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '35dbfd3c-26a0-4489-b51a-15505dded8ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.199'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7c9afbfe-cd77-4ff5-bdb4-0ad03d001fca, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1d6ef4a2-8289-4c88-b3f3-481435a4dab0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:50:29 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:29.821 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1d6ef4a2-8289-4c88-b3f3-481435a4dab0 in datapath 9840ff40-ec43-46f9-ab52-3d9495f203ee unbound from our chassis
Nov 25 16:50:29 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:29.822 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9840ff40-ec43-46f9-ab52-3d9495f203ee, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:50:29 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:29.824 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[eab52cb8-0a0e-4176-b120-aebd470236e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:29 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:29.824 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee namespace which is not needed anymore
Nov 25 16:50:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1981: 321 pgs: 321 active+clean; 167 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.4 MiB/s wr, 92 op/s
Nov 25 16:50:29 compute-0 nova_compute[254092]: 2025-11-25 16:50:29.839 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:29 compute-0 systemd[1]: machine-qemu\x2d127\x2dinstance\x2d00000065.scope: Deactivated successfully.
Nov 25 16:50:29 compute-0 systemd[1]: machine-qemu\x2d127\x2dinstance\x2d00000065.scope: Consumed 12.896s CPU time.
Nov 25 16:50:29 compute-0 systemd-machined[216343]: Machine qemu-127-instance-00000065 terminated.
Nov 25 16:50:29 compute-0 neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee[357241]: [NOTICE]   (357245) : haproxy version is 2.8.14-c23fe91
Nov 25 16:50:29 compute-0 neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee[357241]: [NOTICE]   (357245) : path to executable is /usr/sbin/haproxy
Nov 25 16:50:29 compute-0 neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee[357241]: [WARNING]  (357245) : Exiting Master process...
Nov 25 16:50:29 compute-0 neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee[357241]: [WARNING]  (357245) : Exiting Master process...
Nov 25 16:50:29 compute-0 neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee[357241]: [ALERT]    (357245) : Current worker (357247) exited with code 143 (Terminated)
Nov 25 16:50:29 compute-0 neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee[357241]: [WARNING]  (357245) : All workers exited. Exiting... (0)
Nov 25 16:50:29 compute-0 systemd[1]: libpod-4121544d55e451f24382ea828d27b26f1fa767b9a2b279dc116a02babfd4dc14.scope: Deactivated successfully.
Nov 25 16:50:29 compute-0 podman[357759]: 2025-11-25 16:50:29.995592742 +0000 UTC m=+0.054329128 container died 4121544d55e451f24382ea828d27b26f1fa767b9a2b279dc116a02babfd4dc14 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 16:50:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4121544d55e451f24382ea828d27b26f1fa767b9a2b279dc116a02babfd4dc14-userdata-shm.mount: Deactivated successfully.
Nov 25 16:50:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-cfb50cdc7de6082c53fe78a6bea78f3cc609faf5a93f7a225931635c4a0532c2-merged.mount: Deactivated successfully.
Nov 25 16:50:30 compute-0 kernel: tap1d6ef4a2-82: entered promiscuous mode
Nov 25 16:50:30 compute-0 kernel: tap1d6ef4a2-82 (unregistering): left promiscuous mode
Nov 25 16:50:30 compute-0 ovn_controller[153477]: 2025-11-25T16:50:30Z|01012|binding|INFO|Claiming lport 1d6ef4a2-8289-4c88-b3f3-481435a4dab0 for this chassis.
Nov 25 16:50:30 compute-0 ovn_controller[153477]: 2025-11-25T16:50:30Z|01013|binding|INFO|1d6ef4a2-8289-4c88-b3f3-481435a4dab0: Claiming fa:16:3e:f8:6f:25 10.100.0.14
Nov 25 16:50:30 compute-0 nova_compute[254092]: 2025-11-25 16:50:30.047 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:30 compute-0 podman[357759]: 2025-11-25 16:50:30.051963324 +0000 UTC m=+0.110699710 container cleanup 4121544d55e451f24382ea828d27b26f1fa767b9a2b279dc116a02babfd4dc14 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 25 16:50:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:30.058 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:6f:25 10.100.0.14'], port_security=['fa:16:3e:f8:6f:25 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '1ae1094f-81aa-490c-80ca-4eba95f46cac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9840ff40-ec43-46f9-ab52-3d9495f203ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'neutron:revision_number': '5', 'neutron:security_group_ids': '35dbfd3c-26a0-4489-b51a-15505dded8ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.199'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7c9afbfe-cd77-4ff5-bdb4-0ad03d001fca, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1d6ef4a2-8289-4c88-b3f3-481435a4dab0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:50:30 compute-0 ovn_controller[153477]: 2025-11-25T16:50:30Z|01014|binding|INFO|Releasing lport 1d6ef4a2-8289-4c88-b3f3-481435a4dab0 from this chassis (sb_readonly=0)
Nov 25 16:50:30 compute-0 systemd[1]: libpod-conmon-4121544d55e451f24382ea828d27b26f1fa767b9a2b279dc116a02babfd4dc14.scope: Deactivated successfully.
Nov 25 16:50:30 compute-0 nova_compute[254092]: 2025-11-25 16:50:30.071 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:30.078 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:6f:25 10.100.0.14'], port_security=['fa:16:3e:f8:6f:25 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '1ae1094f-81aa-490c-80ca-4eba95f46cac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9840ff40-ec43-46f9-ab52-3d9495f203ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'neutron:revision_number': '5', 'neutron:security_group_ids': '35dbfd3c-26a0-4489-b51a-15505dded8ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.199'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7c9afbfe-cd77-4ff5-bdb4-0ad03d001fca, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1d6ef4a2-8289-4c88-b3f3-481435a4dab0) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:50:30 compute-0 podman[357796]: 2025-11-25 16:50:30.129424779 +0000 UTC m=+0.049105935 container remove 4121544d55e451f24382ea828d27b26f1fa767b9a2b279dc116a02babfd4dc14 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 16:50:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:30.135 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0186c583-335f-44ca-b4c3-2b61e66c4487]: (4, ('Tue Nov 25 04:50:29 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee (4121544d55e451f24382ea828d27b26f1fa767b9a2b279dc116a02babfd4dc14)\n4121544d55e451f24382ea828d27b26f1fa767b9a2b279dc116a02babfd4dc14\nTue Nov 25 04:50:30 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee (4121544d55e451f24382ea828d27b26f1fa767b9a2b279dc116a02babfd4dc14)\n4121544d55e451f24382ea828d27b26f1fa767b9a2b279dc116a02babfd4dc14\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:30.137 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d55e1b56-789c-4d56-826c-7ac0db176700]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:30.138 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9840ff40-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:50:30 compute-0 kernel: tap9840ff40-e0: left promiscuous mode
Nov 25 16:50:30 compute-0 nova_compute[254092]: 2025-11-25 16:50:30.140 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:30 compute-0 nova_compute[254092]: 2025-11-25 16:50:30.159 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:30.161 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7238efd1-1ee8-4447-adc7-310da9094d6f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:30.175 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ddabd908-e55e-43f8-a525-6414d4e20bbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:30.177 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[da1543ee-11e7-463f-9ee0-338f683afc82]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:30.194 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[04a62cb1-88af-4134-b6a5-bedb7a814804]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586372, 'reachable_time': 22213, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357815, 'error': None, 'target': 'ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:30.199 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:50:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:30.199 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[fc51324b-2052-470e-aa78-fee352494bfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:30 compute-0 systemd[1]: run-netns-ovnmeta\x2d9840ff40\x2dec43\x2d46f9\x2dab52\x2d3d9495f203ee.mount: Deactivated successfully.
Nov 25 16:50:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:30.200 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1d6ef4a2-8289-4c88-b3f3-481435a4dab0 in datapath 9840ff40-ec43-46f9-ab52-3d9495f203ee unbound from our chassis
Nov 25 16:50:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:30.201 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9840ff40-ec43-46f9-ab52-3d9495f203ee, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:50:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:30.202 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[51e6d0ad-4304-4fd5-8b28-ad8770ebe522]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:30.203 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1d6ef4a2-8289-4c88-b3f3-481435a4dab0 in datapath 9840ff40-ec43-46f9-ab52-3d9495f203ee unbound from our chassis
Nov 25 16:50:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:30.204 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9840ff40-ec43-46f9-ab52-3d9495f203ee, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:50:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:30.205 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[80065cf7-6cca-41e6-b7d0-133822e43831]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:30 compute-0 nova_compute[254092]: 2025-11-25 16:50:30.228 254096 DEBUG nova.compute.manager [req-ef9f9d62-8a87-4b27-a35d-29c460589115 req-0ddad090-8b7d-48e0-bfe6-71ace86a0b19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Received event network-changed-0127bd66-2e67-465a-8205-164198287c55 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:50:30 compute-0 nova_compute[254092]: 2025-11-25 16:50:30.229 254096 DEBUG nova.compute.manager [req-ef9f9d62-8a87-4b27-a35d-29c460589115 req-0ddad090-8b7d-48e0-bfe6-71ace86a0b19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Refreshing instance network info cache due to event network-changed-0127bd66-2e67-465a-8205-164198287c55. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:50:30 compute-0 nova_compute[254092]: 2025-11-25 16:50:30.229 254096 DEBUG oslo_concurrency.lockutils [req-ef9f9d62-8a87-4b27-a35d-29c460589115 req-0ddad090-8b7d-48e0-bfe6-71ace86a0b19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-7e50c80e-03fd-47ec-854f-f4e5d45c1c82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:50:30 compute-0 kernel: tapd6e67173-6a (unregistering): left promiscuous mode
Nov 25 16:50:30 compute-0 NetworkManager[48891]: <info>  [1764089430.4503] device (tapd6e67173-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:50:30 compute-0 nova_compute[254092]: 2025-11-25 16:50:30.453 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:30 compute-0 nova_compute[254092]: 2025-11-25 16:50:30.463 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:30 compute-0 ovn_controller[153477]: 2025-11-25T16:50:30Z|01015|binding|INFO|Releasing lport d6e67173-6a72-4200-9963-90668ed663e4 from this chassis (sb_readonly=0)
Nov 25 16:50:30 compute-0 ovn_controller[153477]: 2025-11-25T16:50:30Z|01016|binding|INFO|Setting lport d6e67173-6a72-4200-9963-90668ed663e4 down in Southbound
Nov 25 16:50:30 compute-0 ovn_controller[153477]: 2025-11-25T16:50:30Z|01017|binding|INFO|Removing iface tapd6e67173-6a ovn-installed in OVS
Nov 25 16:50:30 compute-0 nova_compute[254092]: 2025-11-25 16:50:30.467 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:30.474 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fa:b3:74 10.100.0.13'], port_security=['fa:16:3e:fa:b3:74 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '8e8f0fb8-4b3c-40dd-9317-94bedc736376', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-637a7f19-dc07-431e-90b4-682b65b3ddc7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '423cb78fb5f54c46b9867a6f07d0cf95', 'neutron:revision_number': '4', 'neutron:security_group_ids': '05d0216e-41f1-476a-b1bc-7b1c5e9ab51c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b730a5ca-bd1d-4c9c-8894-44f85dc34188, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=d6e67173-6a72-4200-9963-90668ed663e4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:50:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:30.476 163338 INFO neutron.agent.ovn.metadata.agent [-] Port d6e67173-6a72-4200-9963-90668ed663e4 in datapath 637a7f19-dc07-431e-90b4-682b65b3ddc7 unbound from our chassis
Nov 25 16:50:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:30.477 163338 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 637a7f19-dc07-431e-90b4-682b65b3ddc7 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 25 16:50:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:30.478 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[92f5edd6-8f94-4629-9398-0ea4efea81f2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:30 compute-0 nova_compute[254092]: 2025-11-25 16:50:30.483 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:30 compute-0 nova_compute[254092]: 2025-11-25 16:50:30.519 254096 INFO nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Instance shutdown successfully after 3 seconds.
Nov 25 16:50:30 compute-0 systemd[1]: machine-qemu\x2d128\x2dinstance\x2d00000066.scope: Deactivated successfully.
Nov 25 16:50:30 compute-0 systemd[1]: machine-qemu\x2d128\x2dinstance\x2d00000066.scope: Consumed 13.778s CPU time.
Nov 25 16:50:30 compute-0 systemd-machined[216343]: Machine qemu-128-instance-00000066 terminated.
Nov 25 16:50:30 compute-0 nova_compute[254092]: 2025-11-25 16:50:30.529 254096 INFO nova.virt.libvirt.driver [-] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Instance destroyed successfully.
Nov 25 16:50:30 compute-0 nova_compute[254092]: 2025-11-25 16:50:30.536 254096 INFO nova.virt.libvirt.driver [-] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Instance destroyed successfully.
Nov 25 16:50:30 compute-0 nova_compute[254092]: 2025-11-25 16:50:30.537 254096 DEBUG nova.virt.libvirt.vif [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:49:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-922142806',display_name='tempest-TestNetworkAdvancedServerOps-server-922142806',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-922142806',id=101,image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI2eMKgXVt1DObzXc6EkgpEhBSu1NoXCvHqtISF2XNwwoVZuFONUWxBCFWb+wYX4SXZYHjLF21mp8VWxhl1OIfekGEl9PKCz3VMmlg2iYEbhoV2Ykmti7n7/j83zn1kCwQ==',key_name='tempest-TestNetworkAdvancedServerOps-21666879',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:50:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f92d76b523b84ec9b645382bdd3192c9',ramdisk_id='',reservation_id='r-79pfuvch',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1110744031',owner_user_name='tempest-TestNetworkAdvancedServerOps-1110744031-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:50:26Z,user_data=None,user_id='3b30410358c448a693a9dc40ee6aafb0',uuid=1ae1094f-81aa-490c-80ca-4eba95f46cac,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "address": "fa:16:3e:f8:6f:25", "network": {"id": "9840ff40-ec43-46f9-ab52-3d9495f203ee", "bridge": "br-int", "label": "tempest-network-smoke--1436664854", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 
4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d6ef4a2-82", "ovs_interfaceid": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:50:30 compute-0 nova_compute[254092]: 2025-11-25 16:50:30.538 254096 DEBUG nova.network.os_vif_util [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converting VIF {"id": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "address": "fa:16:3e:f8:6f:25", "network": {"id": "9840ff40-ec43-46f9-ab52-3d9495f203ee", "bridge": "br-int", "label": "tempest-network-smoke--1436664854", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d6ef4a2-82", "ovs_interfaceid": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:50:30 compute-0 nova_compute[254092]: 2025-11-25 16:50:30.539 254096 DEBUG nova.network.os_vif_util [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f8:6f:25,bridge_name='br-int',has_traffic_filtering=True,id=1d6ef4a2-8289-4c88-b3f3-481435a4dab0,network=Network(9840ff40-ec43-46f9-ab52-3d9495f203ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d6ef4a2-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:50:30 compute-0 nova_compute[254092]: 2025-11-25 16:50:30.540 254096 DEBUG os_vif [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:6f:25,bridge_name='br-int',has_traffic_filtering=True,id=1d6ef4a2-8289-4c88-b3f3-481435a4dab0,network=Network(9840ff40-ec43-46f9-ab52-3d9495f203ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d6ef4a2-82') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:50:30 compute-0 nova_compute[254092]: 2025-11-25 16:50:30.544 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:30 compute-0 nova_compute[254092]: 2025-11-25 16:50:30.545 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1d6ef4a2-82, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:50:30 compute-0 nova_compute[254092]: 2025-11-25 16:50:30.547 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:30 compute-0 nova_compute[254092]: 2025-11-25 16:50:30.550 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:50:30 compute-0 nova_compute[254092]: 2025-11-25 16:50:30.552 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:30 compute-0 nova_compute[254092]: 2025-11-25 16:50:30.556 254096 INFO os_vif [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:6f:25,bridge_name='br-int',has_traffic_filtering=True,id=1d6ef4a2-8289-4c88-b3f3-481435a4dab0,network=Network(9840ff40-ec43-46f9-ab52-3d9495f203ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d6ef4a2-82')
Nov 25 16:50:30 compute-0 NetworkManager[48891]: <info>  [1764089430.6851] manager: (tapd6e67173-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/413)
Nov 25 16:50:30 compute-0 nova_compute[254092]: 2025-11-25 16:50:30.905 254096 INFO nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Deleting instance files /var/lib/nova/instances/1ae1094f-81aa-490c-80ca-4eba95f46cac_del
Nov 25 16:50:30 compute-0 nova_compute[254092]: 2025-11-25 16:50:30.908 254096 INFO nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Deletion of /var/lib/nova/instances/1ae1094f-81aa-490c-80ca-4eba95f46cac_del complete
Nov 25 16:50:30 compute-0 nova_compute[254092]: 2025-11-25 16:50:30.929 254096 DEBUG nova.compute.manager [req-81d09cc1-102d-49d1-9d83-83944d7e177b req-27524d2a-f8c7-4c4d-a3d3-a751c40097ce a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Received event network-vif-unplugged-d6e67173-6a72-4200-9963-90668ed663e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:50:30 compute-0 nova_compute[254092]: 2025-11-25 16:50:30.929 254096 DEBUG oslo_concurrency.lockutils [req-81d09cc1-102d-49d1-9d83-83944d7e177b req-27524d2a-f8c7-4c4d-a3d3-a751c40097ce a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:50:30 compute-0 nova_compute[254092]: 2025-11-25 16:50:30.930 254096 DEBUG oslo_concurrency.lockutils [req-81d09cc1-102d-49d1-9d83-83944d7e177b req-27524d2a-f8c7-4c4d-a3d3-a751c40097ce a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:50:30 compute-0 nova_compute[254092]: 2025-11-25 16:50:30.930 254096 DEBUG oslo_concurrency.lockutils [req-81d09cc1-102d-49d1-9d83-83944d7e177b req-27524d2a-f8c7-4c4d-a3d3-a751c40097ce a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:50:30 compute-0 nova_compute[254092]: 2025-11-25 16:50:30.930 254096 DEBUG nova.compute.manager [req-81d09cc1-102d-49d1-9d83-83944d7e177b req-27524d2a-f8c7-4c4d-a3d3-a751c40097ce a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] No waiting events found dispatching network-vif-unplugged-d6e67173-6a72-4200-9963-90668ed663e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:50:30 compute-0 nova_compute[254092]: 2025-11-25 16:50:30.931 254096 WARNING nova.compute.manager [req-81d09cc1-102d-49d1-9d83-83944d7e177b req-27524d2a-f8c7-4c4d-a3d3-a751c40097ce a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Received unexpected event network-vif-unplugged-d6e67173-6a72-4200-9963-90668ed663e4 for instance with vm_state active and task_state rescuing.
Nov 25 16:50:30 compute-0 nova_compute[254092]: 2025-11-25 16:50:30.963 254096 DEBUG nova.network.neutron [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Updating instance_info_cache with network_info: [{"id": "0127bd66-2e67-465a-8205-164198287c55", "address": "fa:16:3e:9a:60:a7", "network": {"id": "4ca03ef0-bdbb-4378-ace4-e94a9e273068", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1402853303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5ef0cd3e375456d9e1b561f7929fc4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0127bd66-2e", "ovs_interfaceid": "0127bd66-2e67-465a-8205-164198287c55", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:50:30 compute-0 ceph-mon[74985]: pgmap v1981: 321 pgs: 321 active+clean; 167 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.4 MiB/s wr, 92 op/s
Nov 25 16:50:30 compute-0 nova_compute[254092]: 2025-11-25 16:50:30.990 254096 DEBUG oslo_concurrency.lockutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Releasing lock "refresh_cache-7e50c80e-03fd-47ec-854f-f4e5d45c1c82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:50:30 compute-0 nova_compute[254092]: 2025-11-25 16:50:30.991 254096 DEBUG nova.compute.manager [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Instance network_info: |[{"id": "0127bd66-2e67-465a-8205-164198287c55", "address": "fa:16:3e:9a:60:a7", "network": {"id": "4ca03ef0-bdbb-4378-ace4-e94a9e273068", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1402853303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5ef0cd3e375456d9e1b561f7929fc4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0127bd66-2e", "ovs_interfaceid": "0127bd66-2e67-465a-8205-164198287c55", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:50:30 compute-0 nova_compute[254092]: 2025-11-25 16:50:30.991 254096 DEBUG oslo_concurrency.lockutils [req-ef9f9d62-8a87-4b27-a35d-29c460589115 req-0ddad090-8b7d-48e0-bfe6-71ace86a0b19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-7e50c80e-03fd-47ec-854f-f4e5d45c1c82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:50:30 compute-0 nova_compute[254092]: 2025-11-25 16:50:30.992 254096 DEBUG nova.network.neutron [req-ef9f9d62-8a87-4b27-a35d-29c460589115 req-0ddad090-8b7d-48e0-bfe6-71ace86a0b19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Refreshing network info cache for port 0127bd66-2e67-465a-8205-164198287c55 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:50:30 compute-0 nova_compute[254092]: 2025-11-25 16:50:30.996 254096 DEBUG nova.virt.libvirt.driver [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Start _get_guest_xml network_info=[{"id": "0127bd66-2e67-465a-8205-164198287c55", "address": "fa:16:3e:9a:60:a7", "network": {"id": "4ca03ef0-bdbb-4378-ace4-e94a9e273068", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1402853303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5ef0cd3e375456d9e1b561f7929fc4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0127bd66-2e", "ovs_interfaceid": "0127bd66-2e67-465a-8205-164198287c55", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.003 254096 WARNING nova.virt.libvirt.driver [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.021 254096 DEBUG nova.virt.libvirt.host [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.022 254096 DEBUG nova.virt.libvirt.host [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.031 254096 DEBUG nova.virt.libvirt.host [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.032 254096 DEBUG nova.virt.libvirt.host [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.032 254096 DEBUG nova.virt.libvirt.driver [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.032 254096 DEBUG nova.virt.hardware [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.033 254096 DEBUG nova.virt.hardware [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.033 254096 DEBUG nova.virt.hardware [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.034 254096 DEBUG nova.virt.hardware [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.034 254096 DEBUG nova.virt.hardware [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.034 254096 DEBUG nova.virt.hardware [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.034 254096 DEBUG nova.virt.hardware [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.035 254096 DEBUG nova.virt.hardware [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.035 254096 DEBUG nova.virt.hardware [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.035 254096 DEBUG nova.virt.hardware [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.036 254096 DEBUG nova.virt.hardware [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.039 254096 DEBUG oslo_concurrency.processutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.089 254096 DEBUG nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.090 254096 INFO nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Creating image(s)
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.113 254096 DEBUG nova.storage.rbd_utils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 1ae1094f-81aa-490c-80ca-4eba95f46cac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.136 254096 DEBUG nova.storage.rbd_utils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 1ae1094f-81aa-490c-80ca-4eba95f46cac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.161 254096 DEBUG nova.storage.rbd_utils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 1ae1094f-81aa-490c-80ca-4eba95f46cac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.167 254096 DEBUG oslo_concurrency.processutils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.205 254096 INFO nova.virt.libvirt.driver [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Instance shutdown successfully after 13 seconds.
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.213 254096 INFO nova.virt.libvirt.driver [-] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Instance destroyed successfully.
Nov 25 16:50:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.216 254096 DEBUG nova.objects.instance [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lazy-loading 'numa_topology' on Instance uuid 8e8f0fb8-4b3c-40dd-9317-94bedc736376 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.236 254096 INFO nova.virt.libvirt.driver [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Attempting rescue
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.237 254096 DEBUG nova.virt.libvirt.driver [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.242 254096 DEBUG nova.virt.libvirt.driver [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.243 254096 INFO nova.virt.libvirt.driver [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Creating image(s)
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.263 254096 DEBUG nova.storage.rbd_utils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] rbd image 8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.268 254096 DEBUG nova.objects.instance [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 8e8f0fb8-4b3c-40dd-9317-94bedc736376 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.269 254096 DEBUG oslo_concurrency.processutils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.270 254096 DEBUG oslo_concurrency.lockutils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.270 254096 DEBUG oslo_concurrency.lockutils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.271 254096 DEBUG oslo_concurrency.lockutils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "161d3f571cb73ba41fcf2bb0ea94bcb3953595f6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.290 254096 DEBUG nova.storage.rbd_utils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 1ae1094f-81aa-490c-80ca-4eba95f46cac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.294 254096 DEBUG oslo_concurrency.processutils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 1ae1094f-81aa-490c-80ca-4eba95f46cac_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.355 254096 DEBUG nova.storage.rbd_utils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] rbd image 8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.385 254096 DEBUG nova.storage.rbd_utils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] rbd image 8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.390 254096 DEBUG oslo_concurrency.processutils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.484 254096 DEBUG oslo_concurrency.processutils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.487 254096 DEBUG oslo_concurrency.lockutils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.490 254096 DEBUG oslo_concurrency.lockutils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.490 254096 DEBUG oslo_concurrency.lockutils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.516 254096 DEBUG nova.storage.rbd_utils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] rbd image 8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.519 254096 DEBUG oslo_concurrency.processutils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:50:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:50:31 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2906346555' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.553 254096 DEBUG oslo_concurrency.processutils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6 1ae1094f-81aa-490c-80ca-4eba95f46cac_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.258s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.554 254096 DEBUG oslo_concurrency.processutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.599 254096 DEBUG nova.storage.rbd_utils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] rbd image 7e50c80e-03fd-47ec-854f-f4e5d45c1c82_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.603 254096 DEBUG oslo_concurrency.processutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:50:31 compute-0 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.685 254096 DEBUG nova.storage.rbd_utils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] resizing rbd image 1ae1094f-81aa-490c-80ca-4eba95f46cac_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.791 254096 DEBUG oslo_concurrency.processutils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.792 254096 DEBUG nova.objects.instance [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lazy-loading 'migration_context' on Instance uuid 8e8f0fb8-4b3c-40dd-9317-94bedc736376 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.819 254096 DEBUG nova.virt.libvirt.driver [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.820 254096 DEBUG nova.virt.libvirt.driver [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Start _get_guest_xml network_info=[{"id": "d6e67173-6a72-4200-9963-90668ed663e4", "address": "fa:16:3e:fa:b3:74", "network": {"id": "637a7f19-dc07-431e-90b4-682b65b3ddc7", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1593327310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-1593327310-network", "vif_mac": "fa:16:3e:fa:b3:74"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "423cb78fb5f54c46b9867a6f07d0cf95", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6e67173-6a", "ovs_interfaceid": "d6e67173-6a72-4200-9963-90668ed663e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.820 254096 DEBUG nova.objects.instance [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lazy-loading 'resources' on Instance uuid 8e8f0fb8-4b3c-40dd-9317-94bedc736376 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.829 254096 DEBUG nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.830 254096 DEBUG nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Ensure instance console log exists: /var/lib/nova/instances/1ae1094f-81aa-490c-80ca-4eba95f46cac/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.831 254096 DEBUG oslo_concurrency.lockutils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.831 254096 DEBUG oslo_concurrency.lockutils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.831 254096 DEBUG oslo_concurrency.lockutils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.833 254096 DEBUG nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Start _get_guest_xml network_info=[{"id": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "address": "fa:16:3e:f8:6f:25", "network": {"id": "9840ff40-ec43-46f9-ab52-3d9495f203ee", "bridge": "br-int", "label": "tempest-network-smoke--1436664854", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d6ef4a2-82", "ovs_interfaceid": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:36Z,direct_url=<?>,disk_format='qcow2',id=a4aa3708-bb73-4b5a-b3f3-42153358021e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:50:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1982: 321 pgs: 321 active+clean; 219 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 6.1 MiB/s wr, 198 op/s
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.840 254096 WARNING nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.842 254096 WARNING nova.virt.libvirt.driver [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.845 254096 DEBUG nova.virt.libvirt.host [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.845 254096 DEBUG nova.virt.libvirt.host [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.846 254096 DEBUG nova.virt.libvirt.host [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.846 254096 DEBUG nova.virt.libvirt.host [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.849 254096 DEBUG nova.virt.libvirt.host [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.849 254096 DEBUG nova.virt.libvirt.host [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.850 254096 DEBUG nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.850 254096 DEBUG nova.virt.hardware [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:36Z,direct_url=<?>,disk_format='qcow2',id=a4aa3708-bb73-4b5a-b3f3-42153358021e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.851 254096 DEBUG nova.virt.hardware [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.851 254096 DEBUG nova.virt.hardware [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.851 254096 DEBUG nova.virt.hardware [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.851 254096 DEBUG nova.virt.hardware [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.852 254096 DEBUG nova.virt.hardware [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.852 254096 DEBUG nova.virt.hardware [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.852 254096 DEBUG nova.virt.hardware [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.852 254096 DEBUG nova.virt.hardware [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.853 254096 DEBUG nova.virt.hardware [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.853 254096 DEBUG nova.virt.hardware [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.853 254096 DEBUG nova.objects.instance [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 1ae1094f-81aa-490c-80ca-4eba95f46cac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.854 254096 DEBUG nova.virt.libvirt.host [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.854 254096 DEBUG nova.virt.libvirt.host [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.855 254096 DEBUG nova.virt.libvirt.driver [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.855 254096 DEBUG nova.virt.hardware [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.855 254096 DEBUG nova.virt.hardware [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.855 254096 DEBUG nova.virt.hardware [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.856 254096 DEBUG nova.virt.hardware [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.856 254096 DEBUG nova.virt.hardware [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.856 254096 DEBUG nova.virt.hardware [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.857 254096 DEBUG nova.virt.hardware [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.857 254096 DEBUG nova.virt.hardware [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.857 254096 DEBUG nova.virt.hardware [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.857 254096 DEBUG nova.virt.hardware [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.857 254096 DEBUG nova.virt.hardware [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.858 254096 DEBUG nova.objects.instance [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 8e8f0fb8-4b3c-40dd-9317-94bedc736376 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.874 254096 DEBUG oslo_concurrency.processutils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:50:31 compute-0 nova_compute[254092]: 2025-11-25 16:50:31.919 254096 DEBUG oslo_concurrency.processutils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:50:31 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2906346555' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:50:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:50:32 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1559605262' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.109 254096 DEBUG oslo_concurrency.processutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.111 254096 DEBUG nova.virt.libvirt.vif [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:50:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1145656902',display_name='tempest-ServerAddressesTestJSON-server-1145656902',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1145656902',id=103,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b5ef0cd3e375456d9e1b561f7929fc4f',ramdisk_id='',reservation_id='r-s9s0yj68',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1476412269',owner_user_name='tempest-ServerAddressesTestJSON-1476412269-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:50:27Z,user_data=None,user_id='91f64879f99b40f69cdf49bceea9af2b',uuid=7e50c80e-03fd-47ec-854f-f4e5d45c1c82,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0127bd66-2e67-465a-8205-164198287c55", "address": "fa:16:3e:9a:60:a7", "network": {"id": "4ca03ef0-bdbb-4378-ace4-e94a9e273068", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1402853303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5ef0cd3e375456d9e1b561f7929fc4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0127bd66-2e", "ovs_interfaceid": "0127bd66-2e67-465a-8205-164198287c55", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.111 254096 DEBUG nova.network.os_vif_util [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Converting VIF {"id": "0127bd66-2e67-465a-8205-164198287c55", "address": "fa:16:3e:9a:60:a7", "network": {"id": "4ca03ef0-bdbb-4378-ace4-e94a9e273068", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1402853303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5ef0cd3e375456d9e1b561f7929fc4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0127bd66-2e", "ovs_interfaceid": "0127bd66-2e67-465a-8205-164198287c55", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.112 254096 DEBUG nova.network.os_vif_util [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9a:60:a7,bridge_name='br-int',has_traffic_filtering=True,id=0127bd66-2e67-465a-8205-164198287c55,network=Network(4ca03ef0-bdbb-4378-ace4-e94a9e273068),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0127bd66-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.113 254096 DEBUG nova.objects.instance [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Lazy-loading 'pci_devices' on Instance uuid 7e50c80e-03fd-47ec-854f-f4e5d45c1c82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.137 254096 DEBUG nova.virt.libvirt.driver [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:50:32 compute-0 nova_compute[254092]:   <uuid>7e50c80e-03fd-47ec-854f-f4e5d45c1c82</uuid>
Nov 25 16:50:32 compute-0 nova_compute[254092]:   <name>instance-00000067</name>
Nov 25 16:50:32 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:50:32 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:50:32 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <nova:name>tempest-ServerAddressesTestJSON-server-1145656902</nova:name>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:50:31</nova:creationTime>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:50:32 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:50:32 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:50:32 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:50:32 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:50:32 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:50:32 compute-0 nova_compute[254092]:         <nova:user uuid="91f64879f99b40f69cdf49bceea9af2b">tempest-ServerAddressesTestJSON-1476412269-project-member</nova:user>
Nov 25 16:50:32 compute-0 nova_compute[254092]:         <nova:project uuid="b5ef0cd3e375456d9e1b561f7929fc4f">tempest-ServerAddressesTestJSON-1476412269</nova:project>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:50:32 compute-0 nova_compute[254092]:         <nova:port uuid="0127bd66-2e67-465a-8205-164198287c55">
Nov 25 16:50:32 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:50:32 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:50:32 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <system>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <entry name="serial">7e50c80e-03fd-47ec-854f-f4e5d45c1c82</entry>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <entry name="uuid">7e50c80e-03fd-47ec-854f-f4e5d45c1c82</entry>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     </system>
Nov 25 16:50:32 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:50:32 compute-0 nova_compute[254092]:   <os>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:   </os>
Nov 25 16:50:32 compute-0 nova_compute[254092]:   <features>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:   </features>
Nov 25 16:50:32 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:50:32 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:50:32 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/7e50c80e-03fd-47ec-854f-f4e5d45c1c82_disk">
Nov 25 16:50:32 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       </source>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:50:32 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/7e50c80e-03fd-47ec-854f-f4e5d45c1c82_disk.config">
Nov 25 16:50:32 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       </source>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:50:32 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:9a:60:a7"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <target dev="tap0127bd66-2e"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/7e50c80e-03fd-47ec-854f-f4e5d45c1c82/console.log" append="off"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <video>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     </video>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:50:32 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:50:32 compute-0 nova_compute[254092]: </domain>
Nov 25 16:50:32 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.138 254096 DEBUG nova.compute.manager [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Preparing to wait for external event network-vif-plugged-0127bd66-2e67-465a-8205-164198287c55 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.143 254096 DEBUG oslo_concurrency.lockutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Acquiring lock "7e50c80e-03fd-47ec-854f-f4e5d45c1c82-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.143 254096 DEBUG oslo_concurrency.lockutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Lock "7e50c80e-03fd-47ec-854f-f4e5d45c1c82-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.143 254096 DEBUG oslo_concurrency.lockutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Lock "7e50c80e-03fd-47ec-854f-f4e5d45c1c82-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.144 254096 DEBUG nova.virt.libvirt.vif [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:50:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1145656902',display_name='tempest-ServerAddressesTestJSON-server-1145656902',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1145656902',id=103,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b5ef0cd3e375456d9e1b561f7929fc4f',ramdisk_id='',reservation_id='r-s9s0yj68',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1476412269',owner_user_name='tempest-ServerAddressesTestJSON-1476412269-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:50:27Z,user_data=None,user_id='91f64879f99b40f69cdf49bceea9af2b',uuid=7e50c80e-03fd-47ec-854f-f4e5d45c1c82,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0127bd66-2e67-465a-8205-164198287c55", "address": "fa:16:3e:9a:60:a7", "network": {"id": "4ca03ef0-bdbb-4378-ace4-e94a9e273068", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1402853303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5ef0cd3e375456d9e1b561f7929fc4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0127bd66-2e", "ovs_interfaceid": "0127bd66-2e67-465a-8205-164198287c55", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.144 254096 DEBUG nova.network.os_vif_util [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Converting VIF {"id": "0127bd66-2e67-465a-8205-164198287c55", "address": "fa:16:3e:9a:60:a7", "network": {"id": "4ca03ef0-bdbb-4378-ace4-e94a9e273068", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1402853303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5ef0cd3e375456d9e1b561f7929fc4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0127bd66-2e", "ovs_interfaceid": "0127bd66-2e67-465a-8205-164198287c55", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.145 254096 DEBUG nova.network.os_vif_util [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9a:60:a7,bridge_name='br-int',has_traffic_filtering=True,id=0127bd66-2e67-465a-8205-164198287c55,network=Network(4ca03ef0-bdbb-4378-ace4-e94a9e273068),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0127bd66-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.146 254096 DEBUG os_vif [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9a:60:a7,bridge_name='br-int',has_traffic_filtering=True,id=0127bd66-2e67-465a-8205-164198287c55,network=Network(4ca03ef0-bdbb-4378-ace4-e94a9e273068),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0127bd66-2e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.146 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.147 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.147 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.150 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.151 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0127bd66-2e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.151 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0127bd66-2e, col_values=(('external_ids', {'iface-id': '0127bd66-2e67-465a-8205-164198287c55', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9a:60:a7', 'vm-uuid': '7e50c80e-03fd-47ec-854f-f4e5d45c1c82'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.153 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:32 compute-0 NetworkManager[48891]: <info>  [1764089432.1543] manager: (tap0127bd66-2e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/414)
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.155 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.161 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.161 254096 INFO os_vif [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9a:60:a7,bridge_name='br-int',has_traffic_filtering=True,id=0127bd66-2e67-465a-8205-164198287c55,network=Network(4ca03ef0-bdbb-4378-ace4-e94a9e273068),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0127bd66-2e')
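The plug sequence above (AddPortCommand, then DbSetCommand on the Interface row) can be sketched in plain Python. This is an assumed simplification, not the real os-vif code: `NIC_NAME_LEN`, `vif_devname`, and `interface_external_ids` are illustrative names, but the truncation rule and the `external_ids` keys mirror what the log lines show (`vif_name='tap0127bd66-2e'` and the `col_values` in the DbSetCommand entry).

```python
# Hypothetical sketch of how the tap device name and the OVS Interface
# external_ids in the transaction above are derived. Names here are
# illustrative; only the values are taken from the log lines.
NIC_NAME_LEN = 14  # kernel ifname limit leaves 14 usable characters here

def vif_devname(vif_id: str, prefix: str = "tap") -> str:
    """Prefix the Neutron port UUID and truncate to fit the ifname limit."""
    return (prefix + vif_id)[:NIC_NAME_LEN]

def interface_external_ids(vif_id: str, mac: str, instance_uuid: str) -> dict:
    # These keys mirror the col_values written by the DbSetCommand log line,
    # and are what lets the OVN controller bind the port to this VM.
    return {
        "iface-id": vif_id,
        "iface-status": "active",
        "attached-mac": mac,
        "vm-uuid": instance_uuid,
    }

print(vif_devname("0127bd66-2e67-465a-8205-164198287c55"))  # tap0127bd66-2e
```

Note that the AddBridgeCommand earlier in the transaction used `may_exist=True`, which is why the log reports "Transaction caused no change": br-int already existed.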
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.222 254096 DEBUG nova.virt.libvirt.driver [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.222 254096 DEBUG nova.virt.libvirt.driver [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.223 254096 DEBUG nova.virt.libvirt.driver [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] No VIF found with MAC fa:16:3e:9a:60:a7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.224 254096 INFO nova.virt.libvirt.driver [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Using config drive
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.245 254096 DEBUG nova.storage.rbd_utils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] rbd image 7e50c80e-03fd-47ec-854f-f4e5d45c1c82_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:50:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:50:32 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3966878006' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:50:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:50:32 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1766728261' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.370 254096 DEBUG nova.compute.manager [req-fd725501-6b4c-4f7b-b533-60f88e0c547c req-1bba0e1b-4093-4ca7-bf84-e0ea27ad6a58 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Received event network-vif-unplugged-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.371 254096 DEBUG oslo_concurrency.lockutils [req-fd725501-6b4c-4f7b-b533-60f88e0c547c req-1bba0e1b-4093-4ca7-bf84-e0ea27ad6a58 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.371 254096 DEBUG oslo_concurrency.lockutils [req-fd725501-6b4c-4f7b-b533-60f88e0c547c req-1bba0e1b-4093-4ca7-bf84-e0ea27ad6a58 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.372 254096 DEBUG oslo_concurrency.lockutils [req-fd725501-6b4c-4f7b-b533-60f88e0c547c req-1bba0e1b-4093-4ca7-bf84-e0ea27ad6a58 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.372 254096 DEBUG nova.compute.manager [req-fd725501-6b4c-4f7b-b533-60f88e0c547c req-1bba0e1b-4093-4ca7-bf84-e0ea27ad6a58 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] No waiting events found dispatching network-vif-unplugged-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.372 254096 WARNING nova.compute.manager [req-fd725501-6b4c-4f7b-b533-60f88e0c547c req-1bba0e1b-4093-4ca7-bf84-e0ea27ad6a58 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Received unexpected event network-vif-unplugged-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 for instance with vm_state active and task_state rebuild_spawning.
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.373 254096 DEBUG nova.compute.manager [req-fd725501-6b4c-4f7b-b533-60f88e0c547c req-1bba0e1b-4093-4ca7-bf84-e0ea27ad6a58 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Received event network-vif-plugged-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.373 254096 DEBUG oslo_concurrency.lockutils [req-fd725501-6b4c-4f7b-b533-60f88e0c547c req-1bba0e1b-4093-4ca7-bf84-e0ea27ad6a58 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.373 254096 DEBUG oslo_concurrency.lockutils [req-fd725501-6b4c-4f7b-b533-60f88e0c547c req-1bba0e1b-4093-4ca7-bf84-e0ea27ad6a58 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.373 254096 DEBUG oslo_concurrency.lockutils [req-fd725501-6b4c-4f7b-b533-60f88e0c547c req-1bba0e1b-4093-4ca7-bf84-e0ea27ad6a58 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.374 254096 DEBUG nova.compute.manager [req-fd725501-6b4c-4f7b-b533-60f88e0c547c req-1bba0e1b-4093-4ca7-bf84-e0ea27ad6a58 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] No waiting events found dispatching network-vif-plugged-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.374 254096 WARNING nova.compute.manager [req-fd725501-6b4c-4f7b-b533-60f88e0c547c req-1bba0e1b-4093-4ca7-bf84-e0ea27ad6a58 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Received unexpected event network-vif-plugged-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 for instance with vm_state active and task_state rebuild_spawning.
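The acquire/pop/release pattern in the oslo_concurrency lines above, and the "No waiting events found ... Received unexpected event" pair that follows it, can be sketched as below. This is an assumed simplification of nova's `InstanceEvents` (class and method names borrowed from the log, internals invented for illustration): events are popped under a per-instance lock, and an event with no registered waiter is treated as unexpected rather than fatal.

```python
# Minimal sketch, assuming a dict of waiters guarded by a lock; the real
# nova.compute.manager.InstanceEvents is more involved (per-instance
# locks, eventlet primitives), but the control flow in the log is this:
import threading

class InstanceEvents:
    def __init__(self):
        self._lock = threading.Lock()
        self._waiters = {}  # (instance_uuid, event_name) -> callback

    def prepare(self, instance_uuid, event_name, callback):
        """Register a waiter before triggering the external operation."""
        with self._lock:
            self._waiters[(instance_uuid, event_name)] = callback

    def pop_instance_event(self, instance_uuid, event_name):
        # Mirrors: Acquiring lock "<uuid>-events" ... acquired ... released
        with self._lock:
            return self._waiters.pop((instance_uuid, event_name), None)

events = InstanceEvents()
cb = events.pop_instance_event("1ae1094f", "network-vif-unplugged-1d6ef4a2")
if cb is None:
    # Corresponds to the WARNING: the instance is mid-rebuild and never
    # registered a waiter for this event, so it is logged and dropped.
    print("No waiting events found; event is unexpected")
```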
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.375 254096 DEBUG oslo_concurrency.processutils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.394 254096 DEBUG nova.storage.rbd_utils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 1ae1094f-81aa-490c-80ca-4eba95f46cac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.398 254096 DEBUG oslo_concurrency.processutils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.433 254096 DEBUG oslo_concurrency.processutils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.435 254096 DEBUG oslo_concurrency.processutils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:50:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:50:32 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/643128134' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.840 254096 DEBUG oslo_concurrency.processutils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
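The `Running cmd (subprocess): ...` / `CMD "..." returned: 0 in 0.500s` pairs above come from oslo.concurrency's process wrapper. A rough stand-in for that run-and-time behaviour, demonstrated with a harmless command instead of `ceph mon dump` (this `execute` is a sketch, not the real `processutils.execute` signature):

```python
# Assumed simplification of the execute() pattern in the log: run a
# subprocess, time it, and emit the same 'returned: N in T' summary.
import subprocess
import time

def execute(*cmd):
    start = time.monotonic()
    proc = subprocess.run(cmd, capture_output=True, text=True)
    elapsed = time.monotonic() - start
    # Matches the shape of the oslo_concurrency.processutils log line.
    print('CMD "%s" returned: %d in %.3fs'
          % (" ".join(cmd), proc.returncode, elapsed))
    return proc.stdout

out = execute("echo", "hello")
```

In the log each `ceph mon dump` call also shows up on the other side, as a `handle_command mon_command(...)` dispatch in the ceph-mon audit channel from `entity='client.openstack'`.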
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.843 254096 DEBUG nova.virt.libvirt.vif [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:49:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-922142806',display_name='tempest-TestNetworkAdvancedServerOps-server-922142806',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-922142806',id=101,image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI2eMKgXVt1DObzXc6EkgpEhBSu1NoXCvHqtISF2XNwwoVZuFONUWxBCFWb+wYX4SXZYHjLF21mp8VWxhl1OIfekGEl9PKCz3VMmlg2iYEbhoV2Ykmti7n7/j83zn1kCwQ==',key_name='tempest-TestNetworkAdvancedServerOps-21666879',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:50:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f92d76b523b84ec9b645382bdd3192c9',ramdisk_id='',reservation_id='r-79pfuvch',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1110744031',owner_user_name='tempest-TestNetworkAdvancedServerOps-1110744031-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:50:31Z,user_data=None,user_id='3b30410358c448a693a9dc40ee6aafb0',uuid=1ae1094f-81aa-490c-80ca-4eba95f46cac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "address": "fa:16:3e:f8:6f:25", "network": {"id": "9840ff40-ec43-46f9-ab52-3d9495f203ee", "bridge": "br-int", "label": "tempest-network-smoke--1436664854", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": 
"floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d6ef4a2-82", "ovs_interfaceid": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.843 254096 DEBUG nova.network.os_vif_util [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converting VIF {"id": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "address": "fa:16:3e:f8:6f:25", "network": {"id": "9840ff40-ec43-46f9-ab52-3d9495f203ee", "bridge": "br-int", "label": "tempest-network-smoke--1436664854", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d6ef4a2-82", "ovs_interfaceid": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.844 254096 DEBUG nova.network.os_vif_util [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f8:6f:25,bridge_name='br-int',has_traffic_filtering=True,id=1d6ef4a2-8289-4c88-b3f3-481435a4dab0,network=Network(9840ff40-ec43-46f9-ab52-3d9495f203ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d6ef4a2-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.847 254096 DEBUG nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:50:32 compute-0 nova_compute[254092]:   <uuid>1ae1094f-81aa-490c-80ca-4eba95f46cac</uuid>
Nov 25 16:50:32 compute-0 nova_compute[254092]:   <name>instance-00000065</name>
Nov 25 16:50:32 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:50:32 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:50:32 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:50:32 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:50:32 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-922142806</nova:name>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:50:31</nova:creationTime>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:50:32 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:50:32 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:50:32 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:50:32 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:50:32 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:50:32 compute-0 nova_compute[254092]:         <nova:user uuid="3b30410358c448a693a9dc40ee6aafb0">tempest-TestNetworkAdvancedServerOps-1110744031-project-member</nova:user>
Nov 25 16:50:32 compute-0 nova_compute[254092]:         <nova:project uuid="f92d76b523b84ec9b645382bdd3192c9">tempest-TestNetworkAdvancedServerOps-1110744031</nova:project>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="a4aa3708-bb73-4b5a-b3f3-42153358021e"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:50:32 compute-0 nova_compute[254092]:         <nova:port uuid="1d6ef4a2-8289-4c88-b3f3-481435a4dab0">
Nov 25 16:50:32 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:50:32 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:50:32 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <system>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <entry name="serial">1ae1094f-81aa-490c-80ca-4eba95f46cac</entry>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <entry name="uuid">1ae1094f-81aa-490c-80ca-4eba95f46cac</entry>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     </system>
Nov 25 16:50:32 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:50:32 compute-0 nova_compute[254092]:   <os>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:   </os>
Nov 25 16:50:32 compute-0 nova_compute[254092]:   <features>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:   </features>
Nov 25 16:50:32 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:50:32 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:50:32 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/1ae1094f-81aa-490c-80ca-4eba95f46cac_disk">
Nov 25 16:50:32 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       </source>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:50:32 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/1ae1094f-81aa-490c-80ca-4eba95f46cac_disk.config">
Nov 25 16:50:32 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       </source>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:50:32 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:f8:6f:25"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <target dev="tap1d6ef4a2-82"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/1ae1094f-81aa-490c-80ca-4eba95f46cac/console.log" append="off"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <video>
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     </video>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:50:32 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:50:32 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:50:32 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:50:32 compute-0 nova_compute[254092]: </domain>
Nov 25 16:50:32 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
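The domain XML emitted by `_get_guest_xml` above can be inspected with the standard library. The snippet below parses an abridged copy of that XML (trimmed to the two disks; the UUID and RBD volume names are taken verbatim from the log) and shows how the RBD-backed root disk and the config-drive CDROM map to guest devices:

```python
# Parse a trimmed copy of the guest XML from the log and list each
# disk's guest device and its RBD source volume.
import xml.etree.ElementTree as ET

domain_xml = """
<domain type="kvm">
  <uuid>1ae1094f-81aa-490c-80ca-4eba95f46cac</uuid>
  <devices>
    <disk type="network" device="disk">
      <source protocol="rbd" name="vms/1ae1094f-81aa-490c-80ca-4eba95f46cac_disk"/>
      <target dev="vda" bus="virtio"/>
    </disk>
    <disk type="network" device="cdrom">
      <source protocol="rbd" name="vms/1ae1094f-81aa-490c-80ca-4eba95f46cac_disk.config"/>
      <target dev="sda" bus="sata"/>
    </disk>
  </devices>
</domain>
"""

root = ET.fromstring(domain_xml)
for disk in root.findall("./devices/disk"):
    src = disk.find("source").get("name")
    dev = disk.find("target").get("dev")
    print(dev, "->", src)
```

This also explains the two earlier "No BDM found with device name vda/sda" debug lines: the driver iterated exactly these target devices while building disk metadata. Note the `_disk.config` CDROM is the config drive announced by "Using config drive"; at the time of the `rbd_utils` line it did not exist yet and was created as part of the rebuild.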
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.860 254096 DEBUG nova.virt.libvirt.vif [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:49:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-922142806',display_name='tempest-TestNetworkAdvancedServerOps-server-922142806',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-922142806',id=101,image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI2eMKgXVt1DObzXc6EkgpEhBSu1NoXCvHqtISF2XNwwoVZuFONUWxBCFWb+wYX4SXZYHjLF21mp8VWxhl1OIfekGEl9PKCz3VMmlg2iYEbhoV2Ykmti7n7/j83zn1kCwQ==',key_name='tempest-TestNetworkAdvancedServerOps-21666879',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:50:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f92d76b523b84ec9b645382bdd3192c9',ramdisk_id='',reservation_id='r-79pfuvch',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1110744031',owner_user_name='tempest-TestNetworkAdvancedServerOps-1110744031-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:50:31Z,user_data=None,user_id='3b30410358c448a693a9dc40ee6aafb0',uuid=1ae1094f-81aa-490c-80ca-4eba95f46cac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "address": "fa:16:3e:f8:6f:25", "network": {"id": "9840ff40-ec43-46f9-ab52-3d9495f203ee", "bridge": "br-int", "label": "tempest-network-smoke--1436664854", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": 
"floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d6ef4a2-82", "ovs_interfaceid": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.860 254096 DEBUG nova.network.os_vif_util [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converting VIF {"id": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "address": "fa:16:3e:f8:6f:25", "network": {"id": "9840ff40-ec43-46f9-ab52-3d9495f203ee", "bridge": "br-int", "label": "tempest-network-smoke--1436664854", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d6ef4a2-82", "ovs_interfaceid": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.861 254096 DEBUG nova.network.os_vif_util [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f8:6f:25,bridge_name='br-int',has_traffic_filtering=True,id=1d6ef4a2-8289-4c88-b3f3-481435a4dab0,network=Network(9840ff40-ec43-46f9-ab52-3d9495f203ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d6ef4a2-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.861 254096 DEBUG os_vif [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:6f:25,bridge_name='br-int',has_traffic_filtering=True,id=1d6ef4a2-8289-4c88-b3f3-481435a4dab0,network=Network(9840ff40-ec43-46f9-ab52-3d9495f203ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d6ef4a2-82') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.862 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.862 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.863 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.866 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.867 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1d6ef4a2-82, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.867 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1d6ef4a2-82, col_values=(('external_ids', {'iface-id': '1d6ef4a2-8289-4c88-b3f3-481435a4dab0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f8:6f:25', 'vm-uuid': '1ae1094f-81aa-490c-80ca-4eba95f46cac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.868 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:32 compute-0 NetworkManager[48891]: <info>  [1764089432.8694] manager: (tap1d6ef4a2-82): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/415)
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.871 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.880 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.881 254096 INFO os_vif [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:6f:25,bridge_name='br-int',has_traffic_filtering=True,id=1d6ef4a2-8289-4c88-b3f3-481435a4dab0,network=Network(9840ff40-ec43-46f9-ab52-3d9495f203ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d6ef4a2-82')
Nov 25 16:50:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:50:32 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2050487547' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.906 254096 DEBUG oslo_concurrency.processutils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.907 254096 DEBUG oslo_concurrency.processutils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.975 254096 DEBUG nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.976 254096 DEBUG nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.976 254096 DEBUG nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] No VIF found with MAC fa:16:3e:f8:6f:25, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:50:32 compute-0 nova_compute[254092]: 2025-11-25 16:50:32.977 254096 INFO nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Using config drive
Nov 25 16:50:32 compute-0 ceph-mon[74985]: pgmap v1982: 321 pgs: 321 active+clean; 219 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 6.1 MiB/s wr, 198 op/s
Nov 25 16:50:32 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1559605262' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:50:32 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3966878006' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:50:32 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1766728261' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:50:32 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/643128134' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:50:32 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2050487547' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:50:33 compute-0 nova_compute[254092]: 2025-11-25 16:50:33.001 254096 DEBUG nova.storage.rbd_utils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 1ae1094f-81aa-490c-80ca-4eba95f46cac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:50:33 compute-0 nova_compute[254092]: 2025-11-25 16:50:33.025 254096 DEBUG nova.objects.instance [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 1ae1094f-81aa-490c-80ca-4eba95f46cac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:50:33 compute-0 nova_compute[254092]: 2025-11-25 16:50:33.051 254096 DEBUG nova.objects.instance [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'keypairs' on Instance uuid 1ae1094f-81aa-490c-80ca-4eba95f46cac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:50:33 compute-0 nova_compute[254092]: 2025-11-25 16:50:33.092 254096 INFO nova.virt.libvirt.driver [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Creating config drive at /var/lib/nova/instances/7e50c80e-03fd-47ec-854f-f4e5d45c1c82/disk.config
Nov 25 16:50:33 compute-0 nova_compute[254092]: 2025-11-25 16:50:33.097 254096 DEBUG oslo_concurrency.processutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7e50c80e-03fd-47ec-854f-f4e5d45c1c82/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz7ilik5h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:50:33 compute-0 nova_compute[254092]: 2025-11-25 16:50:33.129 254096 DEBUG nova.compute.manager [req-a4a9f7e1-3ff3-436c-b711-a2ea3be70587 req-78716b5e-91c9-44cf-a4b5-950b35a315d5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Received event network-vif-plugged-d6e67173-6a72-4200-9963-90668ed663e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:50:33 compute-0 nova_compute[254092]: 2025-11-25 16:50:33.129 254096 DEBUG oslo_concurrency.lockutils [req-a4a9f7e1-3ff3-436c-b711-a2ea3be70587 req-78716b5e-91c9-44cf-a4b5-950b35a315d5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:50:33 compute-0 nova_compute[254092]: 2025-11-25 16:50:33.130 254096 DEBUG oslo_concurrency.lockutils [req-a4a9f7e1-3ff3-436c-b711-a2ea3be70587 req-78716b5e-91c9-44cf-a4b5-950b35a315d5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:50:33 compute-0 nova_compute[254092]: 2025-11-25 16:50:33.130 254096 DEBUG oslo_concurrency.lockutils [req-a4a9f7e1-3ff3-436c-b711-a2ea3be70587 req-78716b5e-91c9-44cf-a4b5-950b35a315d5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:50:33 compute-0 nova_compute[254092]: 2025-11-25 16:50:33.130 254096 DEBUG nova.compute.manager [req-a4a9f7e1-3ff3-436c-b711-a2ea3be70587 req-78716b5e-91c9-44cf-a4b5-950b35a315d5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] No waiting events found dispatching network-vif-plugged-d6e67173-6a72-4200-9963-90668ed663e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:50:33 compute-0 nova_compute[254092]: 2025-11-25 16:50:33.131 254096 WARNING nova.compute.manager [req-a4a9f7e1-3ff3-436c-b711-a2ea3be70587 req-78716b5e-91c9-44cf-a4b5-950b35a315d5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Received unexpected event network-vif-plugged-d6e67173-6a72-4200-9963-90668ed663e4 for instance with vm_state active and task_state rescuing.
Nov 25 16:50:33 compute-0 nova_compute[254092]: 2025-11-25 16:50:33.234 254096 DEBUG oslo_concurrency.processutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7e50c80e-03fd-47ec-854f-f4e5d45c1c82/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz7ilik5h" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:50:33 compute-0 nova_compute[254092]: 2025-11-25 16:50:33.257 254096 DEBUG nova.storage.rbd_utils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] rbd image 7e50c80e-03fd-47ec-854f-f4e5d45c1c82_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:50:33 compute-0 nova_compute[254092]: 2025-11-25 16:50:33.260 254096 DEBUG oslo_concurrency.processutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7e50c80e-03fd-47ec-854f-f4e5d45c1c82/disk.config 7e50c80e-03fd-47ec-854f-f4e5d45c1c82_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:50:33 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:50:33 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1531098191' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:50:33 compute-0 nova_compute[254092]: 2025-11-25 16:50:33.363 254096 DEBUG oslo_concurrency.processutils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:50:33 compute-0 nova_compute[254092]: 2025-11-25 16:50:33.365 254096 DEBUG nova.virt.libvirt.vif [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:50:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-1379098021',display_name='tempest-ServerRescueTestJSONUnderV235-server-1379098021',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-1379098021',id=102,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:50:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='423cb78fb5f54c46b9867a6f07d0cf95',ramdisk_id='',reservation_id='r-njsnxmz3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vi
f_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-1568478678',owner_user_name='tempest-ServerRescueTestJSONUnderV235-1568478678-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:50:14Z,user_data=None,user_id='734253d3f2e84904968d9db3044df1c8',uuid=8e8f0fb8-4b3c-40dd-9317-94bedc736376,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d6e67173-6a72-4200-9963-90668ed663e4", "address": "fa:16:3e:fa:b3:74", "network": {"id": "637a7f19-dc07-431e-90b4-682b65b3ddc7", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1593327310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-1593327310-network", "vif_mac": "fa:16:3e:fa:b3:74"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "423cb78fb5f54c46b9867a6f07d0cf95", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6e67173-6a", "ovs_interfaceid": "d6e67173-6a72-4200-9963-90668ed663e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:50:33 compute-0 nova_compute[254092]: 2025-11-25 16:50:33.366 254096 DEBUG nova.network.os_vif_util [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Converting VIF {"id": "d6e67173-6a72-4200-9963-90668ed663e4", "address": "fa:16:3e:fa:b3:74", "network": {"id": "637a7f19-dc07-431e-90b4-682b65b3ddc7", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1593327310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-1593327310-network", "vif_mac": "fa:16:3e:fa:b3:74"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "423cb78fb5f54c46b9867a6f07d0cf95", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6e67173-6a", "ovs_interfaceid": "d6e67173-6a72-4200-9963-90668ed663e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:50:33 compute-0 nova_compute[254092]: 2025-11-25 16:50:33.367 254096 DEBUG nova.network.os_vif_util [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fa:b3:74,bridge_name='br-int',has_traffic_filtering=True,id=d6e67173-6a72-4200-9963-90668ed663e4,network=Network(637a7f19-dc07-431e-90b4-682b65b3ddc7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6e67173-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:50:33 compute-0 nova_compute[254092]: 2025-11-25 16:50:33.368 254096 DEBUG nova.objects.instance [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8e8f0fb8-4b3c-40dd-9317-94bedc736376 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:50:33 compute-0 nova_compute[254092]: 2025-11-25 16:50:33.379 254096 DEBUG nova.virt.libvirt.driver [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:50:33 compute-0 nova_compute[254092]:   <uuid>8e8f0fb8-4b3c-40dd-9317-94bedc736376</uuid>
Nov 25 16:50:33 compute-0 nova_compute[254092]:   <name>instance-00000066</name>
Nov 25 16:50:33 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:50:33 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:50:33 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:50:33 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:       <nova:name>tempest-ServerRescueTestJSONUnderV235-server-1379098021</nova:name>
Nov 25 16:50:33 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:50:31</nova:creationTime>
Nov 25 16:50:33 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:50:33 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:50:33 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:50:33 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:50:33 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:50:33 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:50:33 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:50:33 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:50:33 compute-0 nova_compute[254092]:         <nova:user uuid="734253d3f2e84904968d9db3044df1c8">tempest-ServerRescueTestJSONUnderV235-1568478678-project-member</nova:user>
Nov 25 16:50:33 compute-0 nova_compute[254092]:         <nova:project uuid="423cb78fb5f54c46b9867a6f07d0cf95">tempest-ServerRescueTestJSONUnderV235-1568478678</nova:project>
Nov 25 16:50:33 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:50:33 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:50:33 compute-0 nova_compute[254092]:         <nova:port uuid="d6e67173-6a72-4200-9963-90668ed663e4">
Nov 25 16:50:33 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:50:33 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:50:33 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:50:33 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:50:33 compute-0 nova_compute[254092]:     <system>
Nov 25 16:50:33 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:50:33 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:50:33 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:50:33 compute-0 nova_compute[254092]:       <entry name="serial">8e8f0fb8-4b3c-40dd-9317-94bedc736376</entry>
Nov 25 16:50:33 compute-0 nova_compute[254092]:       <entry name="uuid">8e8f0fb8-4b3c-40dd-9317-94bedc736376</entry>
Nov 25 16:50:33 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     </system>
Nov 25 16:50:33 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:50:33 compute-0 nova_compute[254092]:   <os>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:   </os>
Nov 25 16:50:33 compute-0 nova_compute[254092]:   <features>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:   </features>
Nov 25 16:50:33 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:50:33 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:50:33 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:50:33 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:50:33 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:50:33 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk.rescue">
Nov 25 16:50:33 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:       </source>
Nov 25 16:50:33 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:50:33 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:50:33 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:50:33 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk">
Nov 25 16:50:33 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:       </source>
Nov 25 16:50:33 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:50:33 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:50:33 compute-0 nova_compute[254092]:       <target dev="vdb" bus="virtio"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:50:33 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk.config.rescue">
Nov 25 16:50:33 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:       </source>
Nov 25 16:50:33 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:50:33 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:50:33 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:50:33 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:fa:b3:74"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:       <target dev="tapd6e67173-6a"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:50:33 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/8e8f0fb8-4b3c-40dd-9317-94bedc736376/console.log" append="off"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     <video>
Nov 25 16:50:33 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     </video>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:50:33 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:50:33 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:50:33 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:50:33 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:50:33 compute-0 nova_compute[254092]: </domain>
Nov 25 16:50:33 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:50:33 compute-0 nova_compute[254092]: 2025-11-25 16:50:33.394 254096 INFO nova.virt.libvirt.driver [-] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Instance destroyed successfully.
Nov 25 16:50:33 compute-0 nova_compute[254092]: 2025-11-25 16:50:33.414 254096 DEBUG oslo_concurrency.processutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7e50c80e-03fd-47ec-854f-f4e5d45c1c82/disk.config 7e50c80e-03fd-47ec-854f-f4e5d45c1c82_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:50:33 compute-0 nova_compute[254092]: 2025-11-25 16:50:33.414 254096 INFO nova.virt.libvirt.driver [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Deleting local config drive /var/lib/nova/instances/7e50c80e-03fd-47ec-854f-f4e5d45c1c82/disk.config because it was imported into RBD.
Nov 25 16:50:33 compute-0 nova_compute[254092]: 2025-11-25 16:50:33.457 254096 DEBUG nova.virt.libvirt.driver [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:50:33 compute-0 nova_compute[254092]: 2025-11-25 16:50:33.457 254096 DEBUG nova.virt.libvirt.driver [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:50:33 compute-0 nova_compute[254092]: 2025-11-25 16:50:33.457 254096 DEBUG nova.virt.libvirt.driver [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:50:33 compute-0 nova_compute[254092]: 2025-11-25 16:50:33.458 254096 DEBUG nova.virt.libvirt.driver [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] No VIF found with MAC fa:16:3e:fa:b3:74, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:50:33 compute-0 nova_compute[254092]: 2025-11-25 16:50:33.459 254096 INFO nova.virt.libvirt.driver [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Using config drive
Nov 25 16:50:33 compute-0 NetworkManager[48891]: <info>  [1764089433.4796] manager: (tap0127bd66-2e): new Tun device (/org/freedesktop/NetworkManager/Devices/416)
Nov 25 16:50:33 compute-0 kernel: tap0127bd66-2e: entered promiscuous mode
Nov 25 16:50:33 compute-0 nova_compute[254092]: 2025-11-25 16:50:33.487 254096 DEBUG nova.storage.rbd_utils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] rbd image 8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:50:33 compute-0 ovn_controller[153477]: 2025-11-25T16:50:33Z|01018|binding|INFO|Claiming lport 0127bd66-2e67-465a-8205-164198287c55 for this chassis.
Nov 25 16:50:33 compute-0 ovn_controller[153477]: 2025-11-25T16:50:33Z|01019|binding|INFO|0127bd66-2e67-465a-8205-164198287c55: Claiming fa:16:3e:9a:60:a7 10.100.0.13
Nov 25 16:50:33 compute-0 nova_compute[254092]: 2025-11-25 16:50:33.499 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.501 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:60:a7 10.100.0.13'], port_security=['fa:16:3e:9a:60:a7 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '7e50c80e-03fd-47ec-854f-f4e5d45c1c82', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ca03ef0-bdbb-4378-ace4-e94a9e273068', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b5ef0cd3e375456d9e1b561f7929fc4f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3ebd3670-30be-4da5-91ef-ad015b6cc911', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c3a36a72-9f53-4efe-835b-b2c470e0b4b1, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=0127bd66-2e67-465a-8205-164198287c55) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.503 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 0127bd66-2e67-465a-8205-164198287c55 in datapath 4ca03ef0-bdbb-4378-ace4-e94a9e273068 bound to our chassis
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.505 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4ca03ef0-bdbb-4378-ace4-e94a9e273068
Nov 25 16:50:33 compute-0 systemd-udevd[358421]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:50:33 compute-0 nova_compute[254092]: 2025-11-25 16:50:33.517 254096 DEBUG nova.objects.instance [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 8e8f0fb8-4b3c-40dd-9317-94bedc736376 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.518 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4f17fedd-ba1d-4f79-a699-07c916a4a97f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:33 compute-0 ovn_controller[153477]: 2025-11-25T16:50:33Z|01020|binding|INFO|Setting lport 0127bd66-2e67-465a-8205-164198287c55 ovn-installed in OVS
Nov 25 16:50:33 compute-0 ovn_controller[153477]: 2025-11-25T16:50:33Z|01021|binding|INFO|Setting lport 0127bd66-2e67-465a-8205-164198287c55 up in Southbound
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.519 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4ca03ef0-b1 in ovnmeta-4ca03ef0-bdbb-4378-ace4-e94a9e273068 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:50:33 compute-0 systemd-machined[216343]: New machine qemu-129-instance-00000067.
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.521 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4ca03ef0-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.521 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0f00af28-fe89-4a70-a8b3-657e9a8056a9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.522 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[92d21f32-eab8-4ea9-94ee-a02f10026ea5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:33 compute-0 nova_compute[254092]: 2025-11-25 16:50:33.523 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:33 compute-0 nova_compute[254092]: 2025-11-25 16:50:33.526 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:33 compute-0 systemd[1]: Started Virtual Machine qemu-129-instance-00000067.
Nov 25 16:50:33 compute-0 NetworkManager[48891]: <info>  [1764089433.5338] device (tap0127bd66-2e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:50:33 compute-0 NetworkManager[48891]: <info>  [1764089433.5344] device (tap0127bd66-2e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.541 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[bd720e7f-ab48-42b5-8b68-98ff0b50f776]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:33 compute-0 nova_compute[254092]: 2025-11-25 16:50:33.557 254096 DEBUG nova.network.neutron [req-ef9f9d62-8a87-4b27-a35d-29c460589115 req-0ddad090-8b7d-48e0-bfe6-71ace86a0b19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Updated VIF entry in instance network info cache for port 0127bd66-2e67-465a-8205-164198287c55. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:50:33 compute-0 nova_compute[254092]: 2025-11-25 16:50:33.558 254096 DEBUG nova.network.neutron [req-ef9f9d62-8a87-4b27-a35d-29c460589115 req-0ddad090-8b7d-48e0-bfe6-71ace86a0b19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Updating instance_info_cache with network_info: [{"id": "0127bd66-2e67-465a-8205-164198287c55", "address": "fa:16:3e:9a:60:a7", "network": {"id": "4ca03ef0-bdbb-4378-ace4-e94a9e273068", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1402853303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5ef0cd3e375456d9e1b561f7929fc4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0127bd66-2e", "ovs_interfaceid": "0127bd66-2e67-465a-8205-164198287c55", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:50:33 compute-0 nova_compute[254092]: 2025-11-25 16:50:33.561 254096 DEBUG nova.objects.instance [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lazy-loading 'keypairs' on Instance uuid 8e8f0fb8-4b3c-40dd-9317-94bedc736376 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.561 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1d88cffb-183f-42e6-9b1e-b9ddd1c4a740]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:33 compute-0 nova_compute[254092]: 2025-11-25 16:50:33.585 254096 DEBUG oslo_concurrency.lockutils [req-ef9f9d62-8a87-4b27-a35d-29c460589115 req-0ddad090-8b7d-48e0-bfe6-71ace86a0b19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-7e50c80e-03fd-47ec-854f-f4e5d45c1c82" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.595 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d5f69083-0b80-410e-b2b3-f0c3d759eb5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.602 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[550b68c3-9da2-4ed2-bc5a-2e93fb1a5ef6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:33 compute-0 NetworkManager[48891]: <info>  [1764089433.6032] manager: (tap4ca03ef0-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/417)
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.637 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1bc9b403-cda7-4a8d-bd64-ae3664ea361e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.641 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[558b759a-ae50-48c9-80b6-b80bf9aa9ec3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:33 compute-0 nova_compute[254092]: 2025-11-25 16:50:33.656 254096 INFO nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Creating config drive at /var/lib/nova/instances/1ae1094f-81aa-490c-80ca-4eba95f46cac/disk.config
Nov 25 16:50:33 compute-0 nova_compute[254092]: 2025-11-25 16:50:33.663 254096 DEBUG oslo_concurrency.processutils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1ae1094f-81aa-490c-80ca-4eba95f46cac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgswrpu96 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:50:33 compute-0 NetworkManager[48891]: <info>  [1764089433.6715] device (tap4ca03ef0-b0): carrier: link connected
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.679 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6726719e-9137-497f-a704-1e64ee245fa2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.703 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f112e4e5-2a0e-45c4-8758-2ed84fe061d3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4ca03ef0-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:e8:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 299], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589126, 'reachable_time': 15131, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358461, 'error': None, 'target': 'ovnmeta-4ca03ef0-bdbb-4378-ace4-e94a9e273068', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.725 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c1551cff-93a8-434f-abaf-3f9c790905d8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe63:e8ad'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 589126, 'tstamp': 589126}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358463, 'error': None, 'target': 'ovnmeta-4ca03ef0-bdbb-4378-ace4-e94a9e273068', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.745 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e24932d0-0148-4b2f-b21b-4563259e513a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4ca03ef0-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:e8:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 299], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589126, 'reachable_time': 15131, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 358466, 'error': None, 'target': 'ovnmeta-4ca03ef0-bdbb-4378-ace4-e94a9e273068', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:33 compute-0 nova_compute[254092]: 2025-11-25 16:50:33.773 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.780 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[411d49a1-79d1-4f87-a25e-7c249e898006]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:33 compute-0 nova_compute[254092]: 2025-11-25 16:50:33.823 254096 DEBUG oslo_concurrency.processutils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1ae1094f-81aa-490c-80ca-4eba95f46cac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgswrpu96" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:50:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1983: 321 pgs: 321 active+clean; 219 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 353 KiB/s rd, 4.0 MiB/s wr, 109 op/s
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.854 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[53504e03-8dbc-4ad8-844c-461bdf019e74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.855 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ca03ef0-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.855 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.856 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4ca03ef0-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:50:33 compute-0 NetworkManager[48891]: <info>  [1764089433.8587] manager: (tap4ca03ef0-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/418)
Nov 25 16:50:33 compute-0 nova_compute[254092]: 2025-11-25 16:50:33.863 254096 DEBUG nova.storage.rbd_utils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 1ae1094f-81aa-490c-80ca-4eba95f46cac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:50:33 compute-0 kernel: tap4ca03ef0-b0: entered promiscuous mode
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.869 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4ca03ef0-b0, col_values=(('external_ids', {'iface-id': '2a505731-bbaa-45d2-aebd-3f227fca1251'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:50:33 compute-0 ovn_controller[153477]: 2025-11-25T16:50:33Z|01022|binding|INFO|Releasing lport 2a505731-bbaa-45d2-aebd-3f227fca1251 from this chassis (sb_readonly=0)
Nov 25 16:50:33 compute-0 nova_compute[254092]: 2025-11-25 16:50:33.874 254096 DEBUG oslo_concurrency.processutils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1ae1094f-81aa-490c-80ca-4eba95f46cac/disk.config 1ae1094f-81aa-490c-80ca-4eba95f46cac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.912 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4ca03ef0-bdbb-4378-ace4-e94a9e273068.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4ca03ef0-bdbb-4378-ace4-e94a9e273068.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.914 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b75a1980-c690-445b-ac1d-0799acc00c2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.915 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-4ca03ef0-bdbb-4378-ace4-e94a9e273068
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/4ca03ef0-bdbb-4378-ace4-e94a9e273068.pid.haproxy
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 4ca03ef0-bdbb-4378-ace4-e94a9e273068
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:50:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:33.917 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4ca03ef0-bdbb-4378-ace4-e94a9e273068', 'env', 'PROCESS_TAG=haproxy-4ca03ef0-bdbb-4378-ace4-e94a9e273068', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4ca03ef0-bdbb-4378-ace4-e94a9e273068.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:50:33 compute-0 nova_compute[254092]: 2025-11-25 16:50:33.933 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:33 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1531098191' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.023 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089434.0229478, 7e50c80e-03fd-47ec-854f-f4e5d45c1c82 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.024 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] VM Started (Lifecycle Event)
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.044 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.049 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089434.0271955, 7e50c80e-03fd-47ec-854f-f4e5d45c1c82 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.049 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] VM Paused (Lifecycle Event)
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.065 254096 DEBUG oslo_concurrency.processutils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1ae1094f-81aa-490c-80ca-4eba95f46cac/disk.config 1ae1094f-81aa-490c-80ca-4eba95f46cac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.192s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.067 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.067 254096 INFO nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Deleting local config drive /var/lib/nova/instances/1ae1094f-81aa-490c-80ca-4eba95f46cac/disk.config because it was imported into RBD.
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.071 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.089 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:50:34 compute-0 NetworkManager[48891]: <info>  [1764089434.1440] manager: (tap1d6ef4a2-82): new Tun device (/org/freedesktop/NetworkManager/Devices/419)
Nov 25 16:50:34 compute-0 systemd-udevd[358449]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:50:34 compute-0 kernel: tap1d6ef4a2-82: entered promiscuous mode
Nov 25 16:50:34 compute-0 ovn_controller[153477]: 2025-11-25T16:50:34Z|01023|binding|INFO|Claiming lport 1d6ef4a2-8289-4c88-b3f3-481435a4dab0 for this chassis.
Nov 25 16:50:34 compute-0 ovn_controller[153477]: 2025-11-25T16:50:34Z|01024|binding|INFO|1d6ef4a2-8289-4c88-b3f3-481435a4dab0: Claiming fa:16:3e:f8:6f:25 10.100.0.14
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.152 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.162 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:6f:25 10.100.0.14'], port_security=['fa:16:3e:f8:6f:25 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '1ae1094f-81aa-490c-80ca-4eba95f46cac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9840ff40-ec43-46f9-ab52-3d9495f203ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'neutron:revision_number': '5', 'neutron:security_group_ids': '35dbfd3c-26a0-4489-b51a-15505dded8ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.199'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7c9afbfe-cd77-4ff5-bdb4-0ad03d001fca, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1d6ef4a2-8289-4c88-b3f3-481435a4dab0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:50:34 compute-0 NetworkManager[48891]: <info>  [1764089434.1676] device (tap1d6ef4a2-82): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:50:34 compute-0 NetworkManager[48891]: <info>  [1764089434.1690] device (tap1d6ef4a2-82): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:50:34 compute-0 ovn_controller[153477]: 2025-11-25T16:50:34Z|01025|binding|INFO|Setting lport 1d6ef4a2-8289-4c88-b3f3-481435a4dab0 ovn-installed in OVS
Nov 25 16:50:34 compute-0 ovn_controller[153477]: 2025-11-25T16:50:34Z|01026|binding|INFO|Setting lport 1d6ef4a2-8289-4c88-b3f3-481435a4dab0 up in Southbound
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.194 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:34 compute-0 systemd-machined[216343]: New machine qemu-130-instance-00000065.
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.212 254096 INFO nova.virt.libvirt.driver [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Creating config drive at /var/lib/nova/instances/8e8f0fb8-4b3c-40dd-9317-94bedc736376/disk.config.rescue
Nov 25 16:50:34 compute-0 systemd[1]: Started Virtual Machine qemu-130-instance-00000065.
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.217 254096 DEBUG oslo_concurrency.processutils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8e8f0fb8-4b3c-40dd-9317-94bedc736376/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjp3658uz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:50:34 compute-0 podman[358610]: 2025-11-25 16:50:34.378342564 +0000 UTC m=+0.050643147 container create 5580f83731e121c173a5bd7d37725351df51e1903bda799336164e155d43aca6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4ca03ef0-bdbb-4378-ace4-e94a9e273068, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.383 254096 DEBUG oslo_concurrency.processutils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8e8f0fb8-4b3c-40dd-9317-94bedc736376/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjp3658uz" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:50:34 compute-0 systemd[1]: Started libpod-conmon-5580f83731e121c173a5bd7d37725351df51e1903bda799336164e155d43aca6.scope.
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.422 254096 DEBUG nova.storage.rbd_utils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] rbd image 8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.427 254096 DEBUG oslo_concurrency.processutils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8e8f0fb8-4b3c-40dd-9317-94bedc736376/disk.config.rescue 8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:50:34 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:50:34 compute-0 podman[358610]: 2025-11-25 16:50:34.352918853 +0000 UTC m=+0.025219456 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:50:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5cc33fcdae98ce1a9441422b4975d67e1938a46ed9b71c93f2ea2242b9f8bf5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:50:34 compute-0 podman[358610]: 2025-11-25 16:50:34.464875415 +0000 UTC m=+0.137176018 container init 5580f83731e121c173a5bd7d37725351df51e1903bda799336164e155d43aca6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4ca03ef0-bdbb-4378-ace4-e94a9e273068, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 16:50:34 compute-0 podman[358610]: 2025-11-25 16:50:34.475768011 +0000 UTC m=+0.148068614 container start 5580f83731e121c173a5bd7d37725351df51e1903bda799336164e155d43aca6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4ca03ef0-bdbb-4378-ace4-e94a9e273068, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:50:34 compute-0 neutron-haproxy-ovnmeta-4ca03ef0-bdbb-4378-ace4-e94a9e273068[358673]: [NOTICE]   (358691) : New worker (358693) forked
Nov 25 16:50:34 compute-0 neutron-haproxy-ovnmeta-4ca03ef0-bdbb-4378-ace4-e94a9e273068[358673]: [NOTICE]   (358691) : Loading success.
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.557 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for 1ae1094f-81aa-490c-80ca-4eba95f46cac due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.558 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089434.5569906, 1ae1094f-81aa-490c-80ca-4eba95f46cac => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.558 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] VM Resumed (Lifecycle Event)
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.561 254096 DEBUG nova.compute.manager [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.561 254096 DEBUG nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.566 254096 INFO nova.virt.libvirt.driver [-] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Instance spawned successfully.
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.567 254096 DEBUG nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.584 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.584 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1d6ef4a2-8289-4c88-b3f3-481435a4dab0 in datapath 9840ff40-ec43-46f9-ab52-3d9495f203ee unbound from our chassis
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.586 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9840ff40-ec43-46f9-ab52-3d9495f203ee
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.590 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.592 254096 DEBUG oslo_concurrency.processutils [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8e8f0fb8-4b3c-40dd-9317-94bedc736376/disk.config.rescue 8e8f0fb8-4b3c-40dd-9317-94bedc736376_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.594 254096 DEBUG nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.595 254096 DEBUG nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.595 254096 DEBUG nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.595 254096 DEBUG nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.596 254096 DEBUG nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.596 254096 DEBUG nova.virt.libvirt.driver [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.597 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7aed5562-b3ee-45d5-bd8c-ecd91f4f1f41]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.597 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9840ff40-e1 in ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.601 254096 INFO nova.virt.libvirt.driver [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Deleting local config drive /var/lib/nova/instances/8e8f0fb8-4b3c-40dd-9317-94bedc736376/disk.config.rescue because it was imported into RBD.
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.602 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9840ff40-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.602 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bd932ce5-a904-4935-955c-00ec28f26d43]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.604 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d17d71d7-4eb6-4c38-bfea-6d41a74c260a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.617 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[b1c01c15-1f64-4189-88c7-878b964680da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.634 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8136dd30-7f21-4ba8-8ba1-819660c73218]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.637 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.638 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089434.5606065, 1ae1094f-81aa-490c-80ca-4eba95f46cac => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.638 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] VM Started (Lifecycle Event)
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.658 254096 DEBUG nova.compute.manager [req-6479bdf2-bdeb-474b-8269-e1225bc01f8a req-b634ac65-f5f1-4dfd-b5db-2a27b60a38ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Received event network-vif-plugged-0127bd66-2e67-465a-8205-164198287c55 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.658 254096 DEBUG oslo_concurrency.lockutils [req-6479bdf2-bdeb-474b-8269-e1225bc01f8a req-b634ac65-f5f1-4dfd-b5db-2a27b60a38ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7e50c80e-03fd-47ec-854f-f4e5d45c1c82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.659 254096 DEBUG oslo_concurrency.lockutils [req-6479bdf2-bdeb-474b-8269-e1225bc01f8a req-b634ac65-f5f1-4dfd-b5db-2a27b60a38ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7e50c80e-03fd-47ec-854f-f4e5d45c1c82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.659 254096 DEBUG oslo_concurrency.lockutils [req-6479bdf2-bdeb-474b-8269-e1225bc01f8a req-b634ac65-f5f1-4dfd-b5db-2a27b60a38ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7e50c80e-03fd-47ec-854f-f4e5d45c1c82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.659 254096 DEBUG nova.compute.manager [req-6479bdf2-bdeb-474b-8269-e1225bc01f8a req-b634ac65-f5f1-4dfd-b5db-2a27b60a38ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Processing event network-vif-plugged-0127bd66-2e67-465a-8205-164198287c55 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.659 254096 DEBUG nova.compute.manager [req-6479bdf2-bdeb-474b-8269-e1225bc01f8a req-b634ac65-f5f1-4dfd-b5db-2a27b60a38ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Received event network-vif-plugged-0127bd66-2e67-465a-8205-164198287c55 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.659 254096 DEBUG oslo_concurrency.lockutils [req-6479bdf2-bdeb-474b-8269-e1225bc01f8a req-b634ac65-f5f1-4dfd-b5db-2a27b60a38ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7e50c80e-03fd-47ec-854f-f4e5d45c1c82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.660 254096 DEBUG oslo_concurrency.lockutils [req-6479bdf2-bdeb-474b-8269-e1225bc01f8a req-b634ac65-f5f1-4dfd-b5db-2a27b60a38ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7e50c80e-03fd-47ec-854f-f4e5d45c1c82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.660 254096 DEBUG oslo_concurrency.lockutils [req-6479bdf2-bdeb-474b-8269-e1225bc01f8a req-b634ac65-f5f1-4dfd-b5db-2a27b60a38ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7e50c80e-03fd-47ec-854f-f4e5d45c1c82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.660 254096 DEBUG nova.compute.manager [req-6479bdf2-bdeb-474b-8269-e1225bc01f8a req-b634ac65-f5f1-4dfd-b5db-2a27b60a38ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] No waiting events found dispatching network-vif-plugged-0127bd66-2e67-465a-8205-164198287c55 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.660 254096 WARNING nova.compute.manager [req-6479bdf2-bdeb-474b-8269-e1225bc01f8a req-b634ac65-f5f1-4dfd-b5db-2a27b60a38ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Received unexpected event network-vif-plugged-0127bd66-2e67-465a-8205-164198287c55 for instance with vm_state building and task_state spawning.
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.660 254096 DEBUG nova.compute.manager [req-6479bdf2-bdeb-474b-8269-e1225bc01f8a req-b634ac65-f5f1-4dfd-b5db-2a27b60a38ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Received event network-vif-plugged-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.660 254096 DEBUG oslo_concurrency.lockutils [req-6479bdf2-bdeb-474b-8269-e1225bc01f8a req-b634ac65-f5f1-4dfd-b5db-2a27b60a38ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.660 254096 DEBUG oslo_concurrency.lockutils [req-6479bdf2-bdeb-474b-8269-e1225bc01f8a req-b634ac65-f5f1-4dfd-b5db-2a27b60a38ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.661 254096 DEBUG oslo_concurrency.lockutils [req-6479bdf2-bdeb-474b-8269-e1225bc01f8a req-b634ac65-f5f1-4dfd-b5db-2a27b60a38ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.661 254096 DEBUG nova.compute.manager [req-6479bdf2-bdeb-474b-8269-e1225bc01f8a req-b634ac65-f5f1-4dfd-b5db-2a27b60a38ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] No waiting events found dispatching network-vif-plugged-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.661 254096 WARNING nova.compute.manager [req-6479bdf2-bdeb-474b-8269-e1225bc01f8a req-b634ac65-f5f1-4dfd-b5db-2a27b60a38ca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Received unexpected event network-vif-plugged-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 for instance with vm_state active and task_state rebuild_spawning.
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.661 254096 DEBUG nova.compute.manager [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.663 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.668 254096 DEBUG nova.virt.libvirt.driver [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.668 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.676 254096 DEBUG nova.compute.manager [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.682 254096 INFO nova.virt.libvirt.driver [-] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Instance spawned successfully.
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.682 254096 DEBUG nova.virt.libvirt.driver [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:50:34 compute-0 kernel: tapd6e67173-6a: entered promiscuous mode
Nov 25 16:50:34 compute-0 NetworkManager[48891]: <info>  [1764089434.6849] manager: (tapd6e67173-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/420)
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.685 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.685 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089434.6664329, 7e50c80e-03fd-47ec-854f-f4e5d45c1c82 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.685 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] VM Resumed (Lifecycle Event)
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.686 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:34 compute-0 ovn_controller[153477]: 2025-11-25T16:50:34Z|01027|binding|INFO|Claiming lport d6e67173-6a72-4200-9963-90668ed663e4 for this chassis.
Nov 25 16:50:34 compute-0 ovn_controller[153477]: 2025-11-25T16:50:34Z|01028|binding|INFO|d6e67173-6a72-4200-9963-90668ed663e4: Claiming fa:16:3e:fa:b3:74 10.100.0.13
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.690 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.689 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a1006805-c235-41c3-82ef-b708771709d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.698 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fa:b3:74 10.100.0.13'], port_security=['fa:16:3e:fa:b3:74 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '8e8f0fb8-4b3c-40dd-9317-94bedc736376', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-637a7f19-dc07-431e-90b4-682b65b3ddc7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '423cb78fb5f54c46b9867a6f07d0cf95', 'neutron:revision_number': '5', 'neutron:security_group_ids': '05d0216e-41f1-476a-b1bc-7b1c5e9ab51c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b730a5ca-bd1d-4c9c-8894-44f85dc34188, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=d6e67173-6a72-4200-9963-90668ed663e4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:50:34 compute-0 NetworkManager[48891]: <info>  [1764089434.7020] manager: (tap9840ff40-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/421)
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.701 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b59b2d65-1172-4f08-935b-3b5ea9ff906f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:34 compute-0 NetworkManager[48891]: <info>  [1764089434.7107] device (tapd6e67173-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:50:34 compute-0 NetworkManager[48891]: <info>  [1764089434.7124] device (tapd6e67173-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.714 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.715 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:34 compute-0 ovn_controller[153477]: 2025-11-25T16:50:34Z|01029|binding|INFO|Setting lport d6e67173-6a72-4200-9963-90668ed663e4 ovn-installed in OVS
Nov 25 16:50:34 compute-0 ovn_controller[153477]: 2025-11-25T16:50:34Z|01030|binding|INFO|Setting lport d6e67173-6a72-4200-9963-90668ed663e4 up in Southbound
Nov 25 16:50:34 compute-0 systemd-udevd[358454]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.718 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.720 254096 DEBUG nova.virt.libvirt.driver [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.721 254096 DEBUG nova.virt.libvirt.driver [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.721 254096 DEBUG nova.virt.libvirt.driver [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.722 254096 DEBUG nova.virt.libvirt.driver [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.722 254096 DEBUG nova.virt.libvirt.driver [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.723 254096 DEBUG nova.virt.libvirt.driver [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.730 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:50:34 compute-0 systemd-machined[216343]: New machine qemu-131-instance-00000066.
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.753 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[968776fc-2c67-49d0-9f06-00c5da73e170]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.756 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[611c0c40-f715-4c19-b0da-57929ae5dc66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:34 compute-0 systemd[1]: Started Virtual Machine qemu-131-instance-00000066.
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.774 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.775 254096 DEBUG oslo_concurrency.lockutils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.776 254096 DEBUG oslo_concurrency.lockutils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.776 254096 DEBUG nova.objects.instance [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 25 16:50:34 compute-0 NetworkManager[48891]: <info>  [1764089434.7863] device (tap9840ff40-e0): carrier: link connected
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.791 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a2425454-5ea4-4460-a8da-069b8524632c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.811 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[55985556-6445-4b54-9b48-154144385409]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9840ff40-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:49:4d:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 302], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589237, 'reachable_time': 36550, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358754, 'error': None, 'target': 'ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.828 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[027a166a-c7cf-4f71-8850-52086ffe7ace]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe49:4dad'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 589237, 'tstamp': 589237}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358756, 'error': None, 'target': 'ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.831 254096 INFO nova.compute.manager [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Took 7.13 seconds to spawn the instance on the hypervisor.
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.831 254096 DEBUG nova.compute.manager [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.846 254096 DEBUG oslo_concurrency.lockutils [None req-cfc21dc1-981e-4626-8cfc-b479aeb0d48e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.071s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.857 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[93f9adad-79ac-4b57-9bf5-c6024d0d4bf9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9840ff40-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:49:4d:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 302], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589237, 'reachable_time': 36550, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 358757, 'error': None, 'target': 'ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.886 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[270356fb-fde0-4a16-b33d-ee8c0c0dacca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.894 254096 INFO nova.compute.manager [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Took 8.05 seconds to build instance.
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.908 254096 DEBUG oslo_concurrency.lockutils [None req-a5f934d3-03d0-4403-bbc6-31e9b7d60baa 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Lock "7e50c80e-03fd-47ec-854f-f4e5d45c1c82" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.169s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.964 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[425b20a8-679a-4ec2-aecb-bab779e965f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.966 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9840ff40-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.966 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.967 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9840ff40-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.971 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:34 compute-0 kernel: tap9840ff40-e0: entered promiscuous mode
Nov 25 16:50:34 compute-0 NetworkManager[48891]: <info>  [1764089434.9726] manager: (tap9840ff40-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/422)
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.974 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.976 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9840ff40-e0, col_values=(('external_ids', {'iface-id': '217facd0-6092-44c8-9430-efb8d36c211a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.977 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:34 compute-0 ovn_controller[153477]: 2025-11-25T16:50:34Z|01031|binding|INFO|Releasing lport 217facd0-6092-44c8-9430-efb8d36c211a from this chassis (sb_readonly=0)
Nov 25 16:50:34 compute-0 nova_compute[254092]: 2025-11-25 16:50:34.979 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.979 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9840ff40-ec43-46f9-ab52-3d9495f203ee.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9840ff40-ec43-46f9-ab52-3d9495f203ee.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.984 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b9edb053-b95e-4606-b2e9-c865171556a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.986 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-9840ff40-ec43-46f9-ab52-3d9495f203ee
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/9840ff40-ec43-46f9-ab52-3d9495f203ee.pid.haproxy
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 9840ff40-ec43-46f9-ab52-3d9495f203ee
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:50:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:34.986 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee', 'env', 'PROCESS_TAG=haproxy-9840ff40-ec43-46f9-ab52-3d9495f203ee', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9840ff40-ec43-46f9-ab52-3d9495f203ee.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:50:35 compute-0 ceph-mon[74985]: pgmap v1983: 321 pgs: 321 active+clean; 219 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 353 KiB/s rd, 4.0 MiB/s wr, 109 op/s
Nov 25 16:50:35 compute-0 nova_compute[254092]: 2025-11-25 16:50:35.003 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:35 compute-0 nova_compute[254092]: 2025-11-25 16:50:35.214 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for 8e8f0fb8-4b3c-40dd-9317-94bedc736376 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 16:50:35 compute-0 nova_compute[254092]: 2025-11-25 16:50:35.214 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089435.2140608, 8e8f0fb8-4b3c-40dd-9317-94bedc736376 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:50:35 compute-0 nova_compute[254092]: 2025-11-25 16:50:35.214 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] VM Resumed (Lifecycle Event)
Nov 25 16:50:35 compute-0 nova_compute[254092]: 2025-11-25 16:50:35.221 254096 DEBUG nova.compute.manager [None req-3a47f469-5a8d-4cf8-a333-3654f5d2f094 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:50:35 compute-0 nova_compute[254092]: 2025-11-25 16:50:35.242 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:50:35 compute-0 nova_compute[254092]: 2025-11-25 16:50:35.245 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:50:35 compute-0 nova_compute[254092]: 2025-11-25 16:50:35.274 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] During sync_power_state the instance has a pending task (rescuing). Skip.
Nov 25 16:50:35 compute-0 nova_compute[254092]: 2025-11-25 16:50:35.275 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089435.2191, 8e8f0fb8-4b3c-40dd-9317-94bedc736376 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:50:35 compute-0 nova_compute[254092]: 2025-11-25 16:50:35.275 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] VM Started (Lifecycle Event)
Nov 25 16:50:35 compute-0 nova_compute[254092]: 2025-11-25 16:50:35.290 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:50:35 compute-0 nova_compute[254092]: 2025-11-25 16:50:35.293 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:50:35 compute-0 nova_compute[254092]: 2025-11-25 16:50:35.409 254096 DEBUG nova.compute.manager [req-19be607c-daf1-4885-8419-d60485712f73 req-6fabb24c-bc6a-472f-a23f-150f7f4eeaa2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Received event network-vif-plugged-d6e67173-6a72-4200-9963-90668ed663e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:50:35 compute-0 nova_compute[254092]: 2025-11-25 16:50:35.410 254096 DEBUG oslo_concurrency.lockutils [req-19be607c-daf1-4885-8419-d60485712f73 req-6fabb24c-bc6a-472f-a23f-150f7f4eeaa2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:50:35 compute-0 nova_compute[254092]: 2025-11-25 16:50:35.410 254096 DEBUG oslo_concurrency.lockutils [req-19be607c-daf1-4885-8419-d60485712f73 req-6fabb24c-bc6a-472f-a23f-150f7f4eeaa2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:50:35 compute-0 nova_compute[254092]: 2025-11-25 16:50:35.410 254096 DEBUG oslo_concurrency.lockutils [req-19be607c-daf1-4885-8419-d60485712f73 req-6fabb24c-bc6a-472f-a23f-150f7f4eeaa2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:50:35 compute-0 nova_compute[254092]: 2025-11-25 16:50:35.410 254096 DEBUG nova.compute.manager [req-19be607c-daf1-4885-8419-d60485712f73 req-6fabb24c-bc6a-472f-a23f-150f7f4eeaa2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] No waiting events found dispatching network-vif-plugged-d6e67173-6a72-4200-9963-90668ed663e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:50:35 compute-0 nova_compute[254092]: 2025-11-25 16:50:35.410 254096 WARNING nova.compute.manager [req-19be607c-daf1-4885-8419-d60485712f73 req-6fabb24c-bc6a-472f-a23f-150f7f4eeaa2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Received unexpected event network-vif-plugged-d6e67173-6a72-4200-9963-90668ed663e4 for instance with vm_state rescued and task_state None.
Nov 25 16:50:35 compute-0 podman[358849]: 2025-11-25 16:50:35.428986475 +0000 UTC m=+0.054855481 container create 98bda93a59f4bdc946f2e8c02a91666e2677f3d8e7ac0430d78976f6f174ce1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 25 16:50:35 compute-0 systemd[1]: Started libpod-conmon-98bda93a59f4bdc946f2e8c02a91666e2677f3d8e7ac0430d78976f6f174ce1e.scope.
Nov 25 16:50:35 compute-0 podman[358849]: 2025-11-25 16:50:35.400937863 +0000 UTC m=+0.026806889 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:50:35 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:50:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ad744a85431f20015a7676bfb11197180bce1ae809f0c271dcc2b50469a0e66/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:50:35 compute-0 podman[358849]: 2025-11-25 16:50:35.514041307 +0000 UTC m=+0.139910333 container init 98bda93a59f4bdc946f2e8c02a91666e2677f3d8e7ac0430d78976f6f174ce1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS)
Nov 25 16:50:35 compute-0 podman[358849]: 2025-11-25 16:50:35.520733839 +0000 UTC m=+0.146602845 container start 98bda93a59f4bdc946f2e8c02a91666e2677f3d8e7ac0430d78976f6f174ce1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 25 16:50:35 compute-0 neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee[358864]: [NOTICE]   (358868) : New worker (358870) forked
Nov 25 16:50:35 compute-0 neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee[358864]: [NOTICE]   (358868) : Loading success.
Nov 25 16:50:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:35.626 163338 INFO neutron.agent.ovn.metadata.agent [-] Port d6e67173-6a72-4200-9963-90668ed663e4 in datapath 637a7f19-dc07-431e-90b4-682b65b3ddc7 unbound from our chassis
Nov 25 16:50:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:35.627 163338 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 637a7f19-dc07-431e-90b4-682b65b3ddc7 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 25 16:50:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:35.629 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7118599f-2d6a-4c70-8f50-d3b634ef99c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1984: 321 pgs: 321 active+clean; 237 MiB data, 817 MiB used, 59 GiB / 60 GiB avail; 394 KiB/s rd, 6.5 MiB/s wr, 168 op/s
Nov 25 16:50:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:50:37 compute-0 ceph-mon[74985]: pgmap v1984: 321 pgs: 321 active+clean; 237 MiB data, 817 MiB used, 59 GiB / 60 GiB avail; 394 KiB/s rd, 6.5 MiB/s wr, 168 op/s
Nov 25 16:50:37 compute-0 nova_compute[254092]: 2025-11-25 16:50:37.152 254096 DEBUG nova.compute.manager [req-fb701361-8452-4743-91ce-4b5e8841d446 req-2f4c46da-6b75-4dfa-848f-12c888e0394f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Received event network-vif-plugged-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:50:37 compute-0 nova_compute[254092]: 2025-11-25 16:50:37.152 254096 DEBUG oslo_concurrency.lockutils [req-fb701361-8452-4743-91ce-4b5e8841d446 req-2f4c46da-6b75-4dfa-848f-12c888e0394f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:50:37 compute-0 nova_compute[254092]: 2025-11-25 16:50:37.152 254096 DEBUG oslo_concurrency.lockutils [req-fb701361-8452-4743-91ce-4b5e8841d446 req-2f4c46da-6b75-4dfa-848f-12c888e0394f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:50:37 compute-0 nova_compute[254092]: 2025-11-25 16:50:37.153 254096 DEBUG oslo_concurrency.lockutils [req-fb701361-8452-4743-91ce-4b5e8841d446 req-2f4c46da-6b75-4dfa-848f-12c888e0394f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:50:37 compute-0 nova_compute[254092]: 2025-11-25 16:50:37.153 254096 DEBUG nova.compute.manager [req-fb701361-8452-4743-91ce-4b5e8841d446 req-2f4c46da-6b75-4dfa-848f-12c888e0394f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] No waiting events found dispatching network-vif-plugged-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:50:37 compute-0 nova_compute[254092]: 2025-11-25 16:50:37.153 254096 WARNING nova.compute.manager [req-fb701361-8452-4743-91ce-4b5e8841d446 req-2f4c46da-6b75-4dfa-848f-12c888e0394f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Received unexpected event network-vif-plugged-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 for instance with vm_state active and task_state None.
Nov 25 16:50:37 compute-0 nova_compute[254092]: 2025-11-25 16:50:37.528 254096 DEBUG nova.compute.manager [req-f8e54e92-3786-4cbc-8a66-639b741a7251 req-3a57ce52-5db8-496b-b3ef-2dfd9f444e32 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Received event network-vif-plugged-d6e67173-6a72-4200-9963-90668ed663e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:50:37 compute-0 nova_compute[254092]: 2025-11-25 16:50:37.528 254096 DEBUG oslo_concurrency.lockutils [req-f8e54e92-3786-4cbc-8a66-639b741a7251 req-3a57ce52-5db8-496b-b3ef-2dfd9f444e32 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:50:37 compute-0 nova_compute[254092]: 2025-11-25 16:50:37.529 254096 DEBUG oslo_concurrency.lockutils [req-f8e54e92-3786-4cbc-8a66-639b741a7251 req-3a57ce52-5db8-496b-b3ef-2dfd9f444e32 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:50:37 compute-0 nova_compute[254092]: 2025-11-25 16:50:37.529 254096 DEBUG oslo_concurrency.lockutils [req-f8e54e92-3786-4cbc-8a66-639b741a7251 req-3a57ce52-5db8-496b-b3ef-2dfd9f444e32 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:50:37 compute-0 nova_compute[254092]: 2025-11-25 16:50:37.529 254096 DEBUG nova.compute.manager [req-f8e54e92-3786-4cbc-8a66-639b741a7251 req-3a57ce52-5db8-496b-b3ef-2dfd9f444e32 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] No waiting events found dispatching network-vif-plugged-d6e67173-6a72-4200-9963-90668ed663e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:50:37 compute-0 nova_compute[254092]: 2025-11-25 16:50:37.529 254096 WARNING nova.compute.manager [req-f8e54e92-3786-4cbc-8a66-639b741a7251 req-3a57ce52-5db8-496b-b3ef-2dfd9f444e32 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Received unexpected event network-vif-plugged-d6e67173-6a72-4200-9963-90668ed663e4 for instance with vm_state rescued and task_state None.
Nov 25 16:50:37 compute-0 nova_compute[254092]: 2025-11-25 16:50:37.664 254096 DEBUG oslo_concurrency.lockutils [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Acquiring lock "7e50c80e-03fd-47ec-854f-f4e5d45c1c82" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:50:37 compute-0 nova_compute[254092]: 2025-11-25 16:50:37.665 254096 DEBUG oslo_concurrency.lockutils [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Lock "7e50c80e-03fd-47ec-854f-f4e5d45c1c82" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:50:37 compute-0 nova_compute[254092]: 2025-11-25 16:50:37.665 254096 DEBUG oslo_concurrency.lockutils [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Acquiring lock "7e50c80e-03fd-47ec-854f-f4e5d45c1c82-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:50:37 compute-0 nova_compute[254092]: 2025-11-25 16:50:37.666 254096 DEBUG oslo_concurrency.lockutils [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Lock "7e50c80e-03fd-47ec-854f-f4e5d45c1c82-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:50:37 compute-0 nova_compute[254092]: 2025-11-25 16:50:37.666 254096 DEBUG oslo_concurrency.lockutils [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Lock "7e50c80e-03fd-47ec-854f-f4e5d45c1c82-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:50:37 compute-0 nova_compute[254092]: 2025-11-25 16:50:37.667 254096 INFO nova.compute.manager [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Terminating instance
Nov 25 16:50:37 compute-0 nova_compute[254092]: 2025-11-25 16:50:37.668 254096 DEBUG nova.compute.manager [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:50:37 compute-0 kernel: tap0127bd66-2e (unregistering): left promiscuous mode
Nov 25 16:50:37 compute-0 NetworkManager[48891]: <info>  [1764089437.7140] device (tap0127bd66-2e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:50:37 compute-0 ovn_controller[153477]: 2025-11-25T16:50:37Z|01032|binding|INFO|Releasing lport 0127bd66-2e67-465a-8205-164198287c55 from this chassis (sb_readonly=0)
Nov 25 16:50:37 compute-0 ovn_controller[153477]: 2025-11-25T16:50:37Z|01033|binding|INFO|Setting lport 0127bd66-2e67-465a-8205-164198287c55 down in Southbound
Nov 25 16:50:37 compute-0 ovn_controller[153477]: 2025-11-25T16:50:37Z|01034|binding|INFO|Removing iface tap0127bd66-2e ovn-installed in OVS
Nov 25 16:50:37 compute-0 nova_compute[254092]: 2025-11-25 16:50:37.724 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:37 compute-0 nova_compute[254092]: 2025-11-25 16:50:37.726 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:37.732 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:60:a7 10.100.0.13'], port_security=['fa:16:3e:9a:60:a7 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '7e50c80e-03fd-47ec-854f-f4e5d45c1c82', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ca03ef0-bdbb-4378-ace4-e94a9e273068', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b5ef0cd3e375456d9e1b561f7929fc4f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3ebd3670-30be-4da5-91ef-ad015b6cc911', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c3a36a72-9f53-4efe-835b-b2c470e0b4b1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=0127bd66-2e67-465a-8205-164198287c55) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:50:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:37.734 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 0127bd66-2e67-465a-8205-164198287c55 in datapath 4ca03ef0-bdbb-4378-ace4-e94a9e273068 unbound from our chassis
Nov 25 16:50:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:37.735 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4ca03ef0-bdbb-4378-ace4-e94a9e273068, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:50:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:37.736 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ff4ff995-4193-43dc-8c25-0e0c6eebd461]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:37.737 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4ca03ef0-bdbb-4378-ace4-e94a9e273068 namespace which is not needed anymore
Nov 25 16:50:37 compute-0 nova_compute[254092]: 2025-11-25 16:50:37.748 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:37 compute-0 systemd[1]: machine-qemu\x2d129\x2dinstance\x2d00000067.scope: Deactivated successfully.
Nov 25 16:50:37 compute-0 systemd[1]: machine-qemu\x2d129\x2dinstance\x2d00000067.scope: Consumed 3.396s CPU time.
Nov 25 16:50:37 compute-0 systemd-machined[216343]: Machine qemu-129-instance-00000067 terminated.
Nov 25 16:50:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1985: 321 pgs: 321 active+clean; 260 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 7.5 MiB/s wr, 239 op/s
Nov 25 16:50:37 compute-0 nova_compute[254092]: 2025-11-25 16:50:37.868 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:37 compute-0 neutron-haproxy-ovnmeta-4ca03ef0-bdbb-4378-ace4-e94a9e273068[358673]: [NOTICE]   (358691) : haproxy version is 2.8.14-c23fe91
Nov 25 16:50:37 compute-0 neutron-haproxy-ovnmeta-4ca03ef0-bdbb-4378-ace4-e94a9e273068[358673]: [NOTICE]   (358691) : path to executable is /usr/sbin/haproxy
Nov 25 16:50:37 compute-0 neutron-haproxy-ovnmeta-4ca03ef0-bdbb-4378-ace4-e94a9e273068[358673]: [WARNING]  (358691) : Exiting Master process...
Nov 25 16:50:37 compute-0 neutron-haproxy-ovnmeta-4ca03ef0-bdbb-4378-ace4-e94a9e273068[358673]: [ALERT]    (358691) : Current worker (358693) exited with code 143 (Terminated)
Nov 25 16:50:37 compute-0 neutron-haproxy-ovnmeta-4ca03ef0-bdbb-4378-ace4-e94a9e273068[358673]: [WARNING]  (358691) : All workers exited. Exiting... (0)
Nov 25 16:50:37 compute-0 systemd[1]: libpod-5580f83731e121c173a5bd7d37725351df51e1903bda799336164e155d43aca6.scope: Deactivated successfully.
Nov 25 16:50:37 compute-0 podman[358901]: 2025-11-25 16:50:37.886986232 +0000 UTC m=+0.051854240 container died 5580f83731e121c173a5bd7d37725351df51e1903bda799336164e155d43aca6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4ca03ef0-bdbb-4378-ace4-e94a9e273068, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 16:50:37 compute-0 nova_compute[254092]: 2025-11-25 16:50:37.907 254096 INFO nova.virt.libvirt.driver [-] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Instance destroyed successfully.
Nov 25 16:50:37 compute-0 nova_compute[254092]: 2025-11-25 16:50:37.908 254096 DEBUG nova.objects.instance [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Lazy-loading 'resources' on Instance uuid 7e50c80e-03fd-47ec-854f-f4e5d45c1c82 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:50:37 compute-0 nova_compute[254092]: 2025-11-25 16:50:37.918 254096 DEBUG nova.virt.libvirt.vif [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:50:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1145656902',display_name='tempest-ServerAddressesTestJSON-server-1145656902',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1145656902',id=103,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:50:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b5ef0cd3e375456d9e1b561f7929fc4f',ramdisk_id='',reservation_id='r-s9s0yj68',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-1476412269',owner_user_name='tempest-ServerAddressesTestJSON-1476412269-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:50:34Z,user_data=None,user_id='91f64879f99b40f69cdf49bceea9af2b',uuid=7e50c80e-03fd-47ec-854f-f4e5d45c1c82,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0127bd66-2e67-465a-8205-164198287c55", "address": "fa:16:3e:9a:60:a7", "network": {"id": "4ca03ef0-bdbb-4378-ace4-e94a9e273068", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1402853303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5ef0cd3e375456d9e1b561f7929fc4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0127bd66-2e", "ovs_interfaceid": "0127bd66-2e67-465a-8205-164198287c55", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:50:37 compute-0 nova_compute[254092]: 2025-11-25 16:50:37.918 254096 DEBUG nova.network.os_vif_util [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Converting VIF {"id": "0127bd66-2e67-465a-8205-164198287c55", "address": "fa:16:3e:9a:60:a7", "network": {"id": "4ca03ef0-bdbb-4378-ace4-e94a9e273068", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1402853303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5ef0cd3e375456d9e1b561f7929fc4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0127bd66-2e", "ovs_interfaceid": "0127bd66-2e67-465a-8205-164198287c55", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:50:37 compute-0 nova_compute[254092]: 2025-11-25 16:50:37.919 254096 DEBUG nova.network.os_vif_util [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9a:60:a7,bridge_name='br-int',has_traffic_filtering=True,id=0127bd66-2e67-465a-8205-164198287c55,network=Network(4ca03ef0-bdbb-4378-ace4-e94a9e273068),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0127bd66-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:50:37 compute-0 nova_compute[254092]: 2025-11-25 16:50:37.920 254096 DEBUG os_vif [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9a:60:a7,bridge_name='br-int',has_traffic_filtering=True,id=0127bd66-2e67-465a-8205-164198287c55,network=Network(4ca03ef0-bdbb-4378-ace4-e94a9e273068),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0127bd66-2e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:50:37 compute-0 nova_compute[254092]: 2025-11-25 16:50:37.921 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:37 compute-0 nova_compute[254092]: 2025-11-25 16:50:37.922 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0127bd66-2e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:50:37 compute-0 nova_compute[254092]: 2025-11-25 16:50:37.923 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:37 compute-0 nova_compute[254092]: 2025-11-25 16:50:37.925 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:37 compute-0 nova_compute[254092]: 2025-11-25 16:50:37.926 254096 INFO os_vif [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9a:60:a7,bridge_name='br-int',has_traffic_filtering=True,id=0127bd66-2e67-465a-8205-164198287c55,network=Network(4ca03ef0-bdbb-4378-ace4-e94a9e273068),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0127bd66-2e')
Nov 25 16:50:37 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5580f83731e121c173a5bd7d37725351df51e1903bda799336164e155d43aca6-userdata-shm.mount: Deactivated successfully.
Nov 25 16:50:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5cc33fcdae98ce1a9441422b4975d67e1938a46ed9b71c93f2ea2242b9f8bf5-merged.mount: Deactivated successfully.
Nov 25 16:50:37 compute-0 podman[358901]: 2025-11-25 16:50:37.942232983 +0000 UTC m=+0.107100991 container cleanup 5580f83731e121c173a5bd7d37725351df51e1903bda799336164e155d43aca6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4ca03ef0-bdbb-4378-ace4-e94a9e273068, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 25 16:50:37 compute-0 systemd[1]: libpod-conmon-5580f83731e121c173a5bd7d37725351df51e1903bda799336164e155d43aca6.scope: Deactivated successfully.
Nov 25 16:50:38 compute-0 podman[358956]: 2025-11-25 16:50:38.007108087 +0000 UTC m=+0.042866217 container remove 5580f83731e121c173a5bd7d37725351df51e1903bda799336164e155d43aca6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4ca03ef0-bdbb-4378-ace4-e94a9e273068, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 16:50:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:38.013 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[679d8805-b3e8-4f70-9d88-47d48c455f41]: (4, ('Tue Nov 25 04:50:37 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4ca03ef0-bdbb-4378-ace4-e94a9e273068 (5580f83731e121c173a5bd7d37725351df51e1903bda799336164e155d43aca6)\n5580f83731e121c173a5bd7d37725351df51e1903bda799336164e155d43aca6\nTue Nov 25 04:50:37 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4ca03ef0-bdbb-4378-ace4-e94a9e273068 (5580f83731e121c173a5bd7d37725351df51e1903bda799336164e155d43aca6)\n5580f83731e121c173a5bd7d37725351df51e1903bda799336164e155d43aca6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:38.015 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[54f23304-81bc-4067-811c-ca1cb9e75ea8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:38.016 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ca03ef0-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:50:38 compute-0 nova_compute[254092]: 2025-11-25 16:50:38.017 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:38 compute-0 kernel: tap4ca03ef0-b0: left promiscuous mode
Nov 25 16:50:38 compute-0 nova_compute[254092]: 2025-11-25 16:50:38.034 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:38 compute-0 nova_compute[254092]: 2025-11-25 16:50:38.035 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:38.037 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c84c001e-8e18-4e6a-9bda-b6de3201c011]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:38.051 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[71b3adf1-1354-4e1c-a38f-2f6c85f950d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:38.052 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[70f93300-81d2-4cfb-b7d1-ca913ff54dbc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:38.073 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4f745431-2ae6-4f09-8113-08073df1af82]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589118, 'reachable_time': 24290, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358974, 'error': None, 'target': 'ovnmeta-4ca03ef0-bdbb-4378-ace4-e94a9e273068', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:38 compute-0 systemd[1]: run-netns-ovnmeta\x2d4ca03ef0\x2dbdbb\x2d4378\x2dace4\x2de94a9e273068.mount: Deactivated successfully.
Nov 25 16:50:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:38.078 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4ca03ef0-bdbb-4378-ace4-e94a9e273068 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:50:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:38.078 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[797fdee9-36c4-4345-a466-d51e072f0f51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:38 compute-0 nova_compute[254092]: 2025-11-25 16:50:38.273 254096 INFO nova.virt.libvirt.driver [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Deleting instance files /var/lib/nova/instances/7e50c80e-03fd-47ec-854f-f4e5d45c1c82_del
Nov 25 16:50:38 compute-0 nova_compute[254092]: 2025-11-25 16:50:38.273 254096 INFO nova.virt.libvirt.driver [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Deletion of /var/lib/nova/instances/7e50c80e-03fd-47ec-854f-f4e5d45c1c82_del complete
Nov 25 16:50:38 compute-0 nova_compute[254092]: 2025-11-25 16:50:38.333 254096 INFO nova.compute.manager [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Took 0.66 seconds to destroy the instance on the hypervisor.
Nov 25 16:50:38 compute-0 nova_compute[254092]: 2025-11-25 16:50:38.333 254096 DEBUG oslo.service.loopingcall [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:50:38 compute-0 nova_compute[254092]: 2025-11-25 16:50:38.334 254096 DEBUG nova.compute.manager [-] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:50:38 compute-0 nova_compute[254092]: 2025-11-25 16:50:38.334 254096 DEBUG nova.network.neutron [-] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:50:38 compute-0 nova_compute[254092]: 2025-11-25 16:50:38.775 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:38 compute-0 nova_compute[254092]: 2025-11-25 16:50:38.864 254096 DEBUG nova.network.neutron [-] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:50:38 compute-0 nova_compute[254092]: 2025-11-25 16:50:38.888 254096 INFO nova.compute.manager [-] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Took 0.55 seconds to deallocate network for instance.
Nov 25 16:50:38 compute-0 nova_compute[254092]: 2025-11-25 16:50:38.929 254096 DEBUG oslo_concurrency.lockutils [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:50:38 compute-0 nova_compute[254092]: 2025-11-25 16:50:38.930 254096 DEBUG oslo_concurrency.lockutils [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:50:39 compute-0 nova_compute[254092]: 2025-11-25 16:50:39.001 254096 DEBUG oslo_concurrency.processutils [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:50:39 compute-0 ceph-mon[74985]: pgmap v1985: 321 pgs: 321 active+clean; 260 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 7.5 MiB/s wr, 239 op/s
Nov 25 16:50:39 compute-0 nova_compute[254092]: 2025-11-25 16:50:39.268 254096 DEBUG nova.compute.manager [req-6e7ab323-4721-4fb3-a6c8-c174d6f5c6c3 req-9471509e-0fe2-4556-b3e0-f45e3db81510 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Received event network-vif-unplugged-0127bd66-2e67-465a-8205-164198287c55 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:50:39 compute-0 nova_compute[254092]: 2025-11-25 16:50:39.269 254096 DEBUG oslo_concurrency.lockutils [req-6e7ab323-4721-4fb3-a6c8-c174d6f5c6c3 req-9471509e-0fe2-4556-b3e0-f45e3db81510 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7e50c80e-03fd-47ec-854f-f4e5d45c1c82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:50:39 compute-0 nova_compute[254092]: 2025-11-25 16:50:39.269 254096 DEBUG oslo_concurrency.lockutils [req-6e7ab323-4721-4fb3-a6c8-c174d6f5c6c3 req-9471509e-0fe2-4556-b3e0-f45e3db81510 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7e50c80e-03fd-47ec-854f-f4e5d45c1c82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:50:39 compute-0 nova_compute[254092]: 2025-11-25 16:50:39.269 254096 DEBUG oslo_concurrency.lockutils [req-6e7ab323-4721-4fb3-a6c8-c174d6f5c6c3 req-9471509e-0fe2-4556-b3e0-f45e3db81510 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7e50c80e-03fd-47ec-854f-f4e5d45c1c82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:50:39 compute-0 nova_compute[254092]: 2025-11-25 16:50:39.269 254096 DEBUG nova.compute.manager [req-6e7ab323-4721-4fb3-a6c8-c174d6f5c6c3 req-9471509e-0fe2-4556-b3e0-f45e3db81510 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] No waiting events found dispatching network-vif-unplugged-0127bd66-2e67-465a-8205-164198287c55 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:50:39 compute-0 nova_compute[254092]: 2025-11-25 16:50:39.270 254096 WARNING nova.compute.manager [req-6e7ab323-4721-4fb3-a6c8-c174d6f5c6c3 req-9471509e-0fe2-4556-b3e0-f45e3db81510 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Received unexpected event network-vif-unplugged-0127bd66-2e67-465a-8205-164198287c55 for instance with vm_state deleted and task_state None.
Nov 25 16:50:39 compute-0 nova_compute[254092]: 2025-11-25 16:50:39.270 254096 DEBUG nova.compute.manager [req-6e7ab323-4721-4fb3-a6c8-c174d6f5c6c3 req-9471509e-0fe2-4556-b3e0-f45e3db81510 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Received event network-vif-plugged-0127bd66-2e67-465a-8205-164198287c55 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:50:39 compute-0 nova_compute[254092]: 2025-11-25 16:50:39.270 254096 DEBUG oslo_concurrency.lockutils [req-6e7ab323-4721-4fb3-a6c8-c174d6f5c6c3 req-9471509e-0fe2-4556-b3e0-f45e3db81510 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7e50c80e-03fd-47ec-854f-f4e5d45c1c82-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:50:39 compute-0 nova_compute[254092]: 2025-11-25 16:50:39.271 254096 DEBUG oslo_concurrency.lockutils [req-6e7ab323-4721-4fb3-a6c8-c174d6f5c6c3 req-9471509e-0fe2-4556-b3e0-f45e3db81510 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7e50c80e-03fd-47ec-854f-f4e5d45c1c82-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:50:39 compute-0 nova_compute[254092]: 2025-11-25 16:50:39.271 254096 DEBUG oslo_concurrency.lockutils [req-6e7ab323-4721-4fb3-a6c8-c174d6f5c6c3 req-9471509e-0fe2-4556-b3e0-f45e3db81510 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7e50c80e-03fd-47ec-854f-f4e5d45c1c82-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:50:39 compute-0 nova_compute[254092]: 2025-11-25 16:50:39.271 254096 DEBUG nova.compute.manager [req-6e7ab323-4721-4fb3-a6c8-c174d6f5c6c3 req-9471509e-0fe2-4556-b3e0-f45e3db81510 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] No waiting events found dispatching network-vif-plugged-0127bd66-2e67-465a-8205-164198287c55 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:50:39 compute-0 nova_compute[254092]: 2025-11-25 16:50:39.271 254096 WARNING nova.compute.manager [req-6e7ab323-4721-4fb3-a6c8-c174d6f5c6c3 req-9471509e-0fe2-4556-b3e0-f45e3db81510 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Received unexpected event network-vif-plugged-0127bd66-2e67-465a-8205-164198287c55 for instance with vm_state deleted and task_state None.
Nov 25 16:50:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:50:39 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2121778999' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:50:39 compute-0 nova_compute[254092]: 2025-11-25 16:50:39.444 254096 DEBUG oslo_concurrency.processutils [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:50:39 compute-0 nova_compute[254092]: 2025-11-25 16:50:39.451 254096 DEBUG nova.compute.provider_tree [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:50:39 compute-0 nova_compute[254092]: 2025-11-25 16:50:39.474 254096 DEBUG nova.scheduler.client.report [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:50:39 compute-0 nova_compute[254092]: 2025-11-25 16:50:39.502 254096 DEBUG oslo_concurrency.lockutils [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:50:39 compute-0 nova_compute[254092]: 2025-11-25 16:50:39.542 254096 INFO nova.scheduler.client.report [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Deleted allocations for instance 7e50c80e-03fd-47ec-854f-f4e5d45c1c82
Nov 25 16:50:39 compute-0 nova_compute[254092]: 2025-11-25 16:50:39.608 254096 DEBUG oslo_concurrency.lockutils [None req-c93f8e5f-f87e-4485-bd7b-9c2ec74bf027 91f64879f99b40f69cdf49bceea9af2b b5ef0cd3e375456d9e1b561f7929fc4f - - default default] Lock "7e50c80e-03fd-47ec-854f-f4e5d45c1c82" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.943s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:50:39 compute-0 nova_compute[254092]: 2025-11-25 16:50:39.699 254096 DEBUG nova.compute.manager [req-5f8e02dd-af29-4ae4-bcb7-24d0d6b25789 req-f755a63a-a787-47f7-a34c-ee8f38eef33f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Received event network-vif-deleted-0127bd66-2e67-465a-8205-164198287c55 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:50:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1986: 321 pgs: 321 active+clean; 260 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 7.3 MiB/s wr, 236 op/s
Nov 25 16:50:40 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2121778999' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:50:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:50:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:50:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:50:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:50:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:50:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:50:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:50:40
Nov 25 16:50:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:50:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:50:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['vms', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'images', 'default.rgw.control', 'default.rgw.meta', 'volumes', '.mgr']
Nov 25 16:50:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:50:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:50:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:50:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:50:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:50:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:50:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:50:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:50:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:50:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:50:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:50:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:40.725 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:50:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:40.726 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 16:50:40 compute-0 nova_compute[254092]: 2025-11-25 16:50:40.727 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:41 compute-0 ceph-mon[74985]: pgmap v1986: 321 pgs: 321 active+clean; 260 MiB data, 828 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 7.3 MiB/s wr, 236 op/s
Nov 25 16:50:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:50:41 compute-0 nova_compute[254092]: 2025-11-25 16:50:41.373 254096 DEBUG nova.compute.manager [req-e74feffd-1833-4b89-99ba-6b41ba2f409e req-c5fb02a4-6f72-46e5-b75e-d3763e732ae6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Received event network-changed-d6e67173-6a72-4200-9963-90668ed663e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:50:41 compute-0 nova_compute[254092]: 2025-11-25 16:50:41.374 254096 DEBUG nova.compute.manager [req-e74feffd-1833-4b89-99ba-6b41ba2f409e req-c5fb02a4-6f72-46e5-b75e-d3763e732ae6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Refreshing instance network info cache due to event network-changed-d6e67173-6a72-4200-9963-90668ed663e4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:50:41 compute-0 nova_compute[254092]: 2025-11-25 16:50:41.374 254096 DEBUG oslo_concurrency.lockutils [req-e74feffd-1833-4b89-99ba-6b41ba2f409e req-c5fb02a4-6f72-46e5-b75e-d3763e732ae6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-8e8f0fb8-4b3c-40dd-9317-94bedc736376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:50:41 compute-0 nova_compute[254092]: 2025-11-25 16:50:41.374 254096 DEBUG oslo_concurrency.lockutils [req-e74feffd-1833-4b89-99ba-6b41ba2f409e req-c5fb02a4-6f72-46e5-b75e-d3763e732ae6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-8e8f0fb8-4b3c-40dd-9317-94bedc736376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:50:41 compute-0 nova_compute[254092]: 2025-11-25 16:50:41.375 254096 DEBUG nova.network.neutron [req-e74feffd-1833-4b89-99ba-6b41ba2f409e req-c5fb02a4-6f72-46e5-b75e-d3763e732ae6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Refreshing network info cache for port d6e67173-6a72-4200-9963-90668ed663e4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:50:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1987: 321 pgs: 321 active+clean; 214 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 7.3 MiB/s wr, 415 op/s
Nov 25 16:50:42 compute-0 ovn_controller[153477]: 2025-11-25T16:50:42Z|01035|binding|INFO|Releasing lport 217facd0-6092-44c8-9430-efb8d36c211a from this chassis (sb_readonly=0)
Nov 25 16:50:42 compute-0 nova_compute[254092]: 2025-11-25 16:50:42.740 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:42 compute-0 nova_compute[254092]: 2025-11-25 16:50:42.923 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:43 compute-0 nova_compute[254092]: 2025-11-25 16:50:43.031 254096 DEBUG nova.network.neutron [req-e74feffd-1833-4b89-99ba-6b41ba2f409e req-c5fb02a4-6f72-46e5-b75e-d3763e732ae6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Updated VIF entry in instance network info cache for port d6e67173-6a72-4200-9963-90668ed663e4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:50:43 compute-0 nova_compute[254092]: 2025-11-25 16:50:43.032 254096 DEBUG nova.network.neutron [req-e74feffd-1833-4b89-99ba-6b41ba2f409e req-c5fb02a4-6f72-46e5-b75e-d3763e732ae6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Updating instance_info_cache with network_info: [{"id": "d6e67173-6a72-4200-9963-90668ed663e4", "address": "fa:16:3e:fa:b3:74", "network": {"id": "637a7f19-dc07-431e-90b4-682b65b3ddc7", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1593327310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "423cb78fb5f54c46b9867a6f07d0cf95", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6e67173-6a", "ovs_interfaceid": "d6e67173-6a72-4200-9963-90668ed663e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:50:43 compute-0 ceph-mon[74985]: pgmap v1987: 321 pgs: 321 active+clean; 214 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 7.3 MiB/s wr, 415 op/s
Nov 25 16:50:43 compute-0 nova_compute[254092]: 2025-11-25 16:50:43.044 254096 DEBUG oslo_concurrency.lockutils [req-e74feffd-1833-4b89-99ba-6b41ba2f409e req-c5fb02a4-6f72-46e5-b75e-d3763e732ae6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-8e8f0fb8-4b3c-40dd-9317-94bedc736376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:50:43 compute-0 nova_compute[254092]: 2025-11-25 16:50:43.045 254096 DEBUG nova.compute.manager [req-e74feffd-1833-4b89-99ba-6b41ba2f409e req-c5fb02a4-6f72-46e5-b75e-d3763e732ae6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Received event network-changed-d6e67173-6a72-4200-9963-90668ed663e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:50:43 compute-0 nova_compute[254092]: 2025-11-25 16:50:43.045 254096 DEBUG nova.compute.manager [req-e74feffd-1833-4b89-99ba-6b41ba2f409e req-c5fb02a4-6f72-46e5-b75e-d3763e732ae6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Refreshing instance network info cache due to event network-changed-d6e67173-6a72-4200-9963-90668ed663e4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:50:43 compute-0 nova_compute[254092]: 2025-11-25 16:50:43.046 254096 DEBUG oslo_concurrency.lockutils [req-e74feffd-1833-4b89-99ba-6b41ba2f409e req-c5fb02a4-6f72-46e5-b75e-d3763e732ae6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-8e8f0fb8-4b3c-40dd-9317-94bedc736376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:50:43 compute-0 nova_compute[254092]: 2025-11-25 16:50:43.046 254096 DEBUG oslo_concurrency.lockutils [req-e74feffd-1833-4b89-99ba-6b41ba2f409e req-c5fb02a4-6f72-46e5-b75e-d3763e732ae6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-8e8f0fb8-4b3c-40dd-9317-94bedc736376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:50:43 compute-0 nova_compute[254092]: 2025-11-25 16:50:43.046 254096 DEBUG nova.network.neutron [req-e74feffd-1833-4b89-99ba-6b41ba2f409e req-c5fb02a4-6f72-46e5-b75e-d3763e732ae6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Refreshing network info cache for port d6e67173-6a72-4200-9963-90668ed663e4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:50:43 compute-0 nova_compute[254092]: 2025-11-25 16:50:43.777 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1988: 321 pgs: 321 active+clean; 214 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.6 MiB/s wr, 309 op/s
Nov 25 16:50:45 compute-0 ceph-mon[74985]: pgmap v1988: 321 pgs: 321 active+clean; 214 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.6 MiB/s wr, 309 op/s
Nov 25 16:50:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1989: 321 pgs: 321 active+clean; 214 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.6 MiB/s wr, 309 op/s
Nov 25 16:50:45 compute-0 nova_compute[254092]: 2025-11-25 16:50:45.873 254096 DEBUG nova.network.neutron [req-e74feffd-1833-4b89-99ba-6b41ba2f409e req-c5fb02a4-6f72-46e5-b75e-d3763e732ae6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Updated VIF entry in instance network info cache for port d6e67173-6a72-4200-9963-90668ed663e4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:50:45 compute-0 nova_compute[254092]: 2025-11-25 16:50:45.874 254096 DEBUG nova.network.neutron [req-e74feffd-1833-4b89-99ba-6b41ba2f409e req-c5fb02a4-6f72-46e5-b75e-d3763e732ae6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Updating instance_info_cache with network_info: [{"id": "d6e67173-6a72-4200-9963-90668ed663e4", "address": "fa:16:3e:fa:b3:74", "network": {"id": "637a7f19-dc07-431e-90b4-682b65b3ddc7", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1593327310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "423cb78fb5f54c46b9867a6f07d0cf95", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6e67173-6a", "ovs_interfaceid": "d6e67173-6a72-4200-9963-90668ed663e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:50:45 compute-0 nova_compute[254092]: 2025-11-25 16:50:45.888 254096 DEBUG oslo_concurrency.lockutils [req-e74feffd-1833-4b89-99ba-6b41ba2f409e req-c5fb02a4-6f72-46e5-b75e-d3763e732ae6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-8e8f0fb8-4b3c-40dd-9317-94bedc736376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:50:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:50:46 compute-0 nova_compute[254092]: 2025-11-25 16:50:46.502 254096 DEBUG nova.compute.manager [req-6533c769-b52b-4736-b8ce-913e69e38c93 req-4304c77e-4117-4635-97fc-4f1b3ca43a41 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Received event network-changed-d6e67173-6a72-4200-9963-90668ed663e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:50:46 compute-0 nova_compute[254092]: 2025-11-25 16:50:46.503 254096 DEBUG nova.compute.manager [req-6533c769-b52b-4736-b8ce-913e69e38c93 req-4304c77e-4117-4635-97fc-4f1b3ca43a41 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Refreshing instance network info cache due to event network-changed-d6e67173-6a72-4200-9963-90668ed663e4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:50:46 compute-0 nova_compute[254092]: 2025-11-25 16:50:46.503 254096 DEBUG oslo_concurrency.lockutils [req-6533c769-b52b-4736-b8ce-913e69e38c93 req-4304c77e-4117-4635-97fc-4f1b3ca43a41 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-8e8f0fb8-4b3c-40dd-9317-94bedc736376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:50:46 compute-0 nova_compute[254092]: 2025-11-25 16:50:46.504 254096 DEBUG oslo_concurrency.lockutils [req-6533c769-b52b-4736-b8ce-913e69e38c93 req-4304c77e-4117-4635-97fc-4f1b3ca43a41 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-8e8f0fb8-4b3c-40dd-9317-94bedc736376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:50:46 compute-0 nova_compute[254092]: 2025-11-25 16:50:46.504 254096 DEBUG nova.network.neutron [req-6533c769-b52b-4736-b8ce-913e69e38c93 req-4304c77e-4117-4635-97fc-4f1b3ca43a41 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Refreshing network info cache for port d6e67173-6a72-4200-9963-90668ed663e4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:50:46 compute-0 sudo[358998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:50:46 compute-0 sudo[358998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:50:46 compute-0 sudo[358998]: pam_unix(sudo:session): session closed for user root
Nov 25 16:50:46 compute-0 sudo[359023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:50:46 compute-0 sudo[359023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:50:46 compute-0 sudo[359023]: pam_unix(sudo:session): session closed for user root
Nov 25 16:50:46 compute-0 sudo[359048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:50:46 compute-0 sudo[359048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:50:46 compute-0 sudo[359048]: pam_unix(sudo:session): session closed for user root
Nov 25 16:50:46 compute-0 sudo[359073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 16:50:46 compute-0 sudo[359073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:50:47 compute-0 ceph-mon[74985]: pgmap v1989: 321 pgs: 321 active+clean; 214 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.6 MiB/s wr, 309 op/s
Nov 25 16:50:47 compute-0 sudo[359073]: pam_unix(sudo:session): session closed for user root
Nov 25 16:50:47 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:50:47 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:50:47 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 16:50:47 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:50:47 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 16:50:47 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:50:47 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 721711fc-174e-4807-937f-5c530133b518 does not exist
Nov 25 16:50:47 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 404e2cdc-65ee-4c36-a0f4-568cd9ad848d does not exist
Nov 25 16:50:47 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 815330fb-ca71-4cc0-a171-a5e7e865e457 does not exist
Nov 25 16:50:47 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 16:50:47 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:50:47 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 16:50:47 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:50:47 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:50:47 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:50:47 compute-0 sudo[359129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:50:47 compute-0 sudo[359129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:50:47 compute-0 sudo[359129]: pam_unix(sudo:session): session closed for user root
Nov 25 16:50:47 compute-0 sudo[359154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:50:47 compute-0 sudo[359154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:50:47 compute-0 sudo[359154]: pam_unix(sudo:session): session closed for user root
Nov 25 16:50:47 compute-0 sudo[359179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:50:47 compute-0 sudo[359179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:50:47 compute-0 sudo[359179]: pam_unix(sudo:session): session closed for user root
Nov 25 16:50:47 compute-0 sudo[359204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 16:50:47 compute-0 sudo[359204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:50:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:47.728 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:50:47 compute-0 ovn_controller[153477]: 2025-11-25T16:50:47Z|00106|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f8:6f:25 10.100.0.14
Nov 25 16:50:47 compute-0 ovn_controller[153477]: 2025-11-25T16:50:47Z|00107|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f8:6f:25 10.100.0.14
Nov 25 16:50:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1990: 321 pgs: 321 active+clean; 214 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1021 KiB/s wr, 251 op/s
Nov 25 16:50:47 compute-0 nova_compute[254092]: 2025-11-25 16:50:47.928 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:48 compute-0 podman[359267]: 2025-11-25 16:50:48.01453647 +0000 UTC m=+0.038355414 container create 8b2ea2198252a572994dc878b28cc9c49528e8bf07cfa16662e016364fd463e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:50:48 compute-0 systemd[1]: Started libpod-conmon-8b2ea2198252a572994dc878b28cc9c49528e8bf07cfa16662e016364fd463e1.scope.
Nov 25 16:50:48 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:50:48 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:50:48 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:50:48 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:50:48 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:50:48 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:50:48 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:50:48 compute-0 podman[359267]: 2025-11-25 16:50:48.087169784 +0000 UTC m=+0.110988758 container init 8b2ea2198252a572994dc878b28cc9c49528e8bf07cfa16662e016364fd463e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:50:48 compute-0 podman[359267]: 2025-11-25 16:50:47.999847801 +0000 UTC m=+0.023666765 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:50:48 compute-0 podman[359267]: 2025-11-25 16:50:48.096350763 +0000 UTC m=+0.120169707 container start 8b2ea2198252a572994dc878b28cc9c49528e8bf07cfa16662e016364fd463e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_proskuriakova, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 16:50:48 compute-0 podman[359267]: 2025-11-25 16:50:48.099502319 +0000 UTC m=+0.123321283 container attach 8b2ea2198252a572994dc878b28cc9c49528e8bf07cfa16662e016364fd463e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:50:48 compute-0 fervent_proskuriakova[359284]: 167 167
Nov 25 16:50:48 compute-0 systemd[1]: libpod-8b2ea2198252a572994dc878b28cc9c49528e8bf07cfa16662e016364fd463e1.scope: Deactivated successfully.
Nov 25 16:50:48 compute-0 podman[359267]: 2025-11-25 16:50:48.105044529 +0000 UTC m=+0.128863493 container died 8b2ea2198252a572994dc878b28cc9c49528e8bf07cfa16662e016364fd463e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:50:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-09dbd5f94d0dc87e8310e86655ceae2c153a3fe0c9ced25ff6ff1b2b897eb1ad-merged.mount: Deactivated successfully.
Nov 25 16:50:48 compute-0 podman[359267]: 2025-11-25 16:50:48.139352401 +0000 UTC m=+0.163171345 container remove 8b2ea2198252a572994dc878b28cc9c49528e8bf07cfa16662e016364fd463e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_proskuriakova, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:50:48 compute-0 systemd[1]: libpod-conmon-8b2ea2198252a572994dc878b28cc9c49528e8bf07cfa16662e016364fd463e1.scope: Deactivated successfully.
Nov 25 16:50:48 compute-0 podman[359309]: 2025-11-25 16:50:48.296287886 +0000 UTC m=+0.036659437 container create fa45695d8a0ae9742e8775d1641f0de1e73a98cfb73a018fe60a7b9ea289bf4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 16:50:48 compute-0 systemd[1]: Started libpod-conmon-fa45695d8a0ae9742e8775d1641f0de1e73a98cfb73a018fe60a7b9ea289bf4a.scope.
Nov 25 16:50:48 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:50:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be073adca4ae98052d88d0a09464c02a838495a3f05bfa0ec101992611111896/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:50:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be073adca4ae98052d88d0a09464c02a838495a3f05bfa0ec101992611111896/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:50:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be073adca4ae98052d88d0a09464c02a838495a3f05bfa0ec101992611111896/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:50:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be073adca4ae98052d88d0a09464c02a838495a3f05bfa0ec101992611111896/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:50:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be073adca4ae98052d88d0a09464c02a838495a3f05bfa0ec101992611111896/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 16:50:48 compute-0 podman[359309]: 2025-11-25 16:50:48.366379532 +0000 UTC m=+0.106751113 container init fa45695d8a0ae9742e8775d1641f0de1e73a98cfb73a018fe60a7b9ea289bf4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mayer, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 16:50:48 compute-0 podman[359309]: 2025-11-25 16:50:48.374962335 +0000 UTC m=+0.115333886 container start fa45695d8a0ae9742e8775d1641f0de1e73a98cfb73a018fe60a7b9ea289bf4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mayer, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:50:48 compute-0 podman[359309]: 2025-11-25 16:50:48.280717754 +0000 UTC m=+0.021089325 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:50:48 compute-0 podman[359309]: 2025-11-25 16:50:48.378077619 +0000 UTC m=+0.118449180 container attach fa45695d8a0ae9742e8775d1641f0de1e73a98cfb73a018fe60a7b9ea289bf4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 16:50:48 compute-0 nova_compute[254092]: 2025-11-25 16:50:48.532 254096 DEBUG nova.network.neutron [req-6533c769-b52b-4736-b8ce-913e69e38c93 req-4304c77e-4117-4635-97fc-4f1b3ca43a41 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Updated VIF entry in instance network info cache for port d6e67173-6a72-4200-9963-90668ed663e4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:50:48 compute-0 nova_compute[254092]: 2025-11-25 16:50:48.533 254096 DEBUG nova.network.neutron [req-6533c769-b52b-4736-b8ce-913e69e38c93 req-4304c77e-4117-4635-97fc-4f1b3ca43a41 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Updating instance_info_cache with network_info: [{"id": "d6e67173-6a72-4200-9963-90668ed663e4", "address": "fa:16:3e:fa:b3:74", "network": {"id": "637a7f19-dc07-431e-90b4-682b65b3ddc7", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1593327310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "423cb78fb5f54c46b9867a6f07d0cf95", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6e67173-6a", "ovs_interfaceid": "d6e67173-6a72-4200-9963-90668ed663e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:50:48 compute-0 nova_compute[254092]: 2025-11-25 16:50:48.554 254096 DEBUG oslo_concurrency.lockutils [req-6533c769-b52b-4736-b8ce-913e69e38c93 req-4304c77e-4117-4635-97fc-4f1b3ca43a41 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-8e8f0fb8-4b3c-40dd-9317-94bedc736376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:50:48 compute-0 nova_compute[254092]: 2025-11-25 16:50:48.779 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:49 compute-0 ceph-mon[74985]: pgmap v1990: 321 pgs: 321 active+clean; 214 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1021 KiB/s wr, 251 op/s
Nov 25 16:50:49 compute-0 cool_mayer[359325]: --> passed data devices: 0 physical, 3 LVM
Nov 25 16:50:49 compute-0 cool_mayer[359325]: --> relative data size: 1.0
Nov 25 16:50:49 compute-0 cool_mayer[359325]: --> All data devices are unavailable
Nov 25 16:50:49 compute-0 systemd[1]: libpod-fa45695d8a0ae9742e8775d1641f0de1e73a98cfb73a018fe60a7b9ea289bf4a.scope: Deactivated successfully.
Nov 25 16:50:49 compute-0 podman[359309]: 2025-11-25 16:50:49.383174382 +0000 UTC m=+1.123545953 container died fa45695d8a0ae9742e8775d1641f0de1e73a98cfb73a018fe60a7b9ea289bf4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mayer, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 16:50:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-be073adca4ae98052d88d0a09464c02a838495a3f05bfa0ec101992611111896-merged.mount: Deactivated successfully.
Nov 25 16:50:49 compute-0 podman[359309]: 2025-11-25 16:50:49.434508788 +0000 UTC m=+1.174880339 container remove fa45695d8a0ae9742e8775d1641f0de1e73a98cfb73a018fe60a7b9ea289bf4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:50:49 compute-0 systemd[1]: libpod-conmon-fa45695d8a0ae9742e8775d1641f0de1e73a98cfb73a018fe60a7b9ea289bf4a.scope: Deactivated successfully.
Nov 25 16:50:49 compute-0 sudo[359204]: pam_unix(sudo:session): session closed for user root
Nov 25 16:50:49 compute-0 sudo[359369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:50:49 compute-0 sudo[359369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:50:49 compute-0 sudo[359369]: pam_unix(sudo:session): session closed for user root
Nov 25 16:50:49 compute-0 sudo[359394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:50:49 compute-0 sudo[359394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:50:49 compute-0 sudo[359394]: pam_unix(sudo:session): session closed for user root
Nov 25 16:50:49 compute-0 sudo[359419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:50:49 compute-0 sudo[359419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:50:49 compute-0 sudo[359419]: pam_unix(sudo:session): session closed for user root
Nov 25 16:50:49 compute-0 sudo[359444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 16:50:49 compute-0 sudo[359444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:50:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1991: 321 pgs: 321 active+clean; 214 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 1.2 KiB/s wr, 179 op/s
Nov 25 16:50:50 compute-0 podman[359509]: 2025-11-25 16:50:50.005044522 +0000 UTC m=+0.043256206 container create 5fb031646afbb8f93f25b43f2c878496b79a095bcb514e3625792fa62ae8ec0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_brown, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 16:50:50 compute-0 systemd[1]: Started libpod-conmon-5fb031646afbb8f93f25b43f2c878496b79a095bcb514e3625792fa62ae8ec0b.scope.
Nov 25 16:50:50 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:50:50 compute-0 podman[359509]: 2025-11-25 16:50:49.981533343 +0000 UTC m=+0.019745047 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:50:50 compute-0 podman[359509]: 2025-11-25 16:50:50.080611165 +0000 UTC m=+0.118822929 container init 5fb031646afbb8f93f25b43f2c878496b79a095bcb514e3625792fa62ae8ec0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_brown, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 16:50:50 compute-0 podman[359509]: 2025-11-25 16:50:50.087557565 +0000 UTC m=+0.125769249 container start 5fb031646afbb8f93f25b43f2c878496b79a095bcb514e3625792fa62ae8ec0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_brown, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 16:50:50 compute-0 podman[359509]: 2025-11-25 16:50:50.090805253 +0000 UTC m=+0.129017027 container attach 5fb031646afbb8f93f25b43f2c878496b79a095bcb514e3625792fa62ae8ec0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_brown, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 16:50:50 compute-0 hungry_brown[359525]: 167 167
Nov 25 16:50:50 compute-0 systemd[1]: libpod-5fb031646afbb8f93f25b43f2c878496b79a095bcb514e3625792fa62ae8ec0b.scope: Deactivated successfully.
Nov 25 16:50:50 compute-0 podman[359509]: 2025-11-25 16:50:50.093785064 +0000 UTC m=+0.131996748 container died 5fb031646afbb8f93f25b43f2c878496b79a095bcb514e3625792fa62ae8ec0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_brown, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:50:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e681784ef391bc1351498510fea6beec1fa39ec86e897cffcd84887bb034711-merged.mount: Deactivated successfully.
Nov 25 16:50:50 compute-0 podman[359509]: 2025-11-25 16:50:50.133451191 +0000 UTC m=+0.171662875 container remove 5fb031646afbb8f93f25b43f2c878496b79a095bcb514e3625792fa62ae8ec0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_brown, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:50:50 compute-0 systemd[1]: libpod-conmon-5fb031646afbb8f93f25b43f2c878496b79a095bcb514e3625792fa62ae8ec0b.scope: Deactivated successfully.
Nov 25 16:50:50 compute-0 podman[359549]: 2025-11-25 16:50:50.305725864 +0000 UTC m=+0.039135035 container create 0b7e094ba996affe7a0d869318d01823ed86539f2df6dc5fa2f7bc1bf9d9367c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mclean, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:50:50 compute-0 systemd[1]: Started libpod-conmon-0b7e094ba996affe7a0d869318d01823ed86539f2df6dc5fa2f7bc1bf9d9367c.scope.
Nov 25 16:50:50 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:50:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0036ef23fe6324b7ae16f2bb0ca37f6c46d1ba3a13435e90a94848d4555e94eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:50:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0036ef23fe6324b7ae16f2bb0ca37f6c46d1ba3a13435e90a94848d4555e94eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:50:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0036ef23fe6324b7ae16f2bb0ca37f6c46d1ba3a13435e90a94848d4555e94eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:50:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0036ef23fe6324b7ae16f2bb0ca37f6c46d1ba3a13435e90a94848d4555e94eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:50:50 compute-0 podman[359549]: 2025-11-25 16:50:50.36777519 +0000 UTC m=+0.101184381 container init 0b7e094ba996affe7a0d869318d01823ed86539f2df6dc5fa2f7bc1bf9d9367c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mclean, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:50:50 compute-0 podman[359549]: 2025-11-25 16:50:50.383524718 +0000 UTC m=+0.116933899 container start 0b7e094ba996affe7a0d869318d01823ed86539f2df6dc5fa2f7bc1bf9d9367c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 16:50:50 compute-0 podman[359549]: 2025-11-25 16:50:50.290227022 +0000 UTC m=+0.023636213 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:50:50 compute-0 podman[359549]: 2025-11-25 16:50:50.387162867 +0000 UTC m=+0.120572038 container attach 0b7e094ba996affe7a0d869318d01823ed86539f2df6dc5fa2f7bc1bf9d9367c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 16:50:51 compute-0 ceph-mon[74985]: pgmap v1991: 321 pgs: 321 active+clean; 214 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 1.2 KiB/s wr, 179 op/s
Nov 25 16:50:51 compute-0 nova_compute[254092]: 2025-11-25 16:50:51.076 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:51 compute-0 amazing_mclean[359566]: {
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:     "0": [
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:         {
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:             "devices": [
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:                 "/dev/loop3"
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:             ],
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:             "lv_name": "ceph_lv0",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:             "lv_size": "21470642176",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:             "name": "ceph_lv0",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:             "tags": {
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:                 "ceph.cluster_name": "ceph",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:                 "ceph.crush_device_class": "",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:                 "ceph.encrypted": "0",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:                 "ceph.osd_id": "0",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:                 "ceph.type": "block",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:                 "ceph.vdo": "0"
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:             },
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:             "type": "block",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:             "vg_name": "ceph_vg0"
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:         }
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:     ],
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:     "1": [
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:         {
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:             "devices": [
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:                 "/dev/loop4"
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:             ],
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:             "lv_name": "ceph_lv1",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:             "lv_size": "21470642176",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:             "name": "ceph_lv1",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:             "tags": {
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:                 "ceph.cluster_name": "ceph",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:                 "ceph.crush_device_class": "",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:                 "ceph.encrypted": "0",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:                 "ceph.osd_id": "1",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:                 "ceph.type": "block",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:                 "ceph.vdo": "0"
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:             },
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:             "type": "block",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:             "vg_name": "ceph_vg1"
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:         }
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:     ],
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:     "2": [
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:         {
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:             "devices": [
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:                 "/dev/loop5"
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:             ],
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:             "lv_name": "ceph_lv2",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:             "lv_size": "21470642176",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:             "name": "ceph_lv2",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:             "tags": {
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:                 "ceph.cluster_name": "ceph",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:                 "ceph.crush_device_class": "",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:                 "ceph.encrypted": "0",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:                 "ceph.osd_id": "2",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:                 "ceph.type": "block",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:                 "ceph.vdo": "0"
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:             },
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:             "type": "block",
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:             "vg_name": "ceph_vg2"
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:         }
Nov 25 16:50:51 compute-0 amazing_mclean[359566]:     ]
Nov 25 16:50:51 compute-0 amazing_mclean[359566]: }
Nov 25 16:50:51 compute-0 systemd[1]: libpod-0b7e094ba996affe7a0d869318d01823ed86539f2df6dc5fa2f7bc1bf9d9367c.scope: Deactivated successfully.
Nov 25 16:50:51 compute-0 podman[359549]: 2025-11-25 16:50:51.17001596 +0000 UTC m=+0.903425131 container died 0b7e094ba996affe7a0d869318d01823ed86539f2df6dc5fa2f7bc1bf9d9367c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Nov 25 16:50:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-0036ef23fe6324b7ae16f2bb0ca37f6c46d1ba3a13435e90a94848d4555e94eb-merged.mount: Deactivated successfully.
Nov 25 16:50:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:50:51 compute-0 podman[359549]: 2025-11-25 16:50:51.228767467 +0000 UTC m=+0.962176638 container remove 0b7e094ba996affe7a0d869318d01823ed86539f2df6dc5fa2f7bc1bf9d9367c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mclean, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:50:51 compute-0 systemd[1]: libpod-conmon-0b7e094ba996affe7a0d869318d01823ed86539f2df6dc5fa2f7bc1bf9d9367c.scope: Deactivated successfully.
Nov 25 16:50:51 compute-0 sudo[359444]: pam_unix(sudo:session): session closed for user root
Nov 25 16:50:51 compute-0 podman[359578]: 2025-11-25 16:50:51.270736487 +0000 UTC m=+0.066358293 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Nov 25 16:50:51 compute-0 podman[359575]: 2025-11-25 16:50:51.275832236 +0000 UTC m=+0.073510418 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:50:51 compute-0 podman[359580]: 2025-11-25 16:50:51.3057572 +0000 UTC m=+0.102190419 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:50:51 compute-0 sudo[359646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:50:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:50:51 compute-0 sudo[359646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:50:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:50:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:50:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:50:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0014552733484730091 of space, bias 1.0, pg target 0.4365820045419027 quantized to 32 (current 32)
Nov 25 16:50:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:50:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:50:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:50:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:50:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:50:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 16:50:51 compute-0 sudo[359646]: pam_unix(sudo:session): session closed for user root
Nov 25 16:50:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:50:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 16:50:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:50:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:50:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:50:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 16:50:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:50:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 16:50:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:50:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:50:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:50:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 16:50:51 compute-0 sudo[359677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:50:51 compute-0 sudo[359677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:50:51 compute-0 sudo[359677]: pam_unix(sudo:session): session closed for user root
Nov 25 16:50:51 compute-0 sudo[359702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:50:51 compute-0 sudo[359702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:50:51 compute-0 sudo[359702]: pam_unix(sudo:session): session closed for user root
Nov 25 16:50:51 compute-0 sudo[359727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 16:50:51 compute-0 sudo[359727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:50:51 compute-0 podman[359792]: 2025-11-25 16:50:51.784027617 +0000 UTC m=+0.035515177 container create 75c434d134eda183cdfae9642f91fd7481819a1aea482f6514c76b7db400a452 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 16:50:51 compute-0 systemd[1]: Started libpod-conmon-75c434d134eda183cdfae9642f91fd7481819a1aea482f6514c76b7db400a452.scope.
Nov 25 16:50:51 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:50:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1992: 321 pgs: 321 active+clean; 246 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 2.1 MiB/s wr, 303 op/s
Nov 25 16:50:51 compute-0 podman[359792]: 2025-11-25 16:50:51.854720328 +0000 UTC m=+0.106207908 container init 75c434d134eda183cdfae9642f91fd7481819a1aea482f6514c76b7db400a452 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mclean, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:50:51 compute-0 podman[359792]: 2025-11-25 16:50:51.861598374 +0000 UTC m=+0.113085934 container start 75c434d134eda183cdfae9642f91fd7481819a1aea482f6514c76b7db400a452 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 16:50:51 compute-0 podman[359792]: 2025-11-25 16:50:51.768491324 +0000 UTC m=+0.019978904 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:50:51 compute-0 podman[359792]: 2025-11-25 16:50:51.864809251 +0000 UTC m=+0.116296831 container attach 75c434d134eda183cdfae9642f91fd7481819a1aea482f6514c76b7db400a452 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mclean, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 16:50:51 compute-0 compassionate_mclean[359808]: 167 167
Nov 25 16:50:51 compute-0 systemd[1]: libpod-75c434d134eda183cdfae9642f91fd7481819a1aea482f6514c76b7db400a452.scope: Deactivated successfully.
Nov 25 16:50:51 compute-0 conmon[359808]: conmon 75c434d134eda183cdfa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-75c434d134eda183cdfae9642f91fd7481819a1aea482f6514c76b7db400a452.scope/container/memory.events
Nov 25 16:50:51 compute-0 podman[359792]: 2025-11-25 16:50:51.867457683 +0000 UTC m=+0.118945233 container died 75c434d134eda183cdfae9642f91fd7481819a1aea482f6514c76b7db400a452 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 16:50:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9f656753524d7bf23fb7bb5736800c824fe8613d6aa610aef4035c50da5cdbb-merged.mount: Deactivated successfully.
Nov 25 16:50:51 compute-0 podman[359792]: 2025-11-25 16:50:51.898730973 +0000 UTC m=+0.150218533 container remove 75c434d134eda183cdfae9642f91fd7481819a1aea482f6514c76b7db400a452 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 16:50:51 compute-0 systemd[1]: libpod-conmon-75c434d134eda183cdfae9642f91fd7481819a1aea482f6514c76b7db400a452.scope: Deactivated successfully.
Nov 25 16:50:52 compute-0 podman[359833]: 2025-11-25 16:50:52.059755599 +0000 UTC m=+0.036837051 container create 9e12f7839130e5fd48f3f92407c2a1398ed54a79b58f58dd60ae9f23aad7873e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lalande, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:50:52 compute-0 systemd[1]: Started libpod-conmon-9e12f7839130e5fd48f3f92407c2a1398ed54a79b58f58dd60ae9f23aad7873e.scope.
Nov 25 16:50:52 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:50:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c972315921c1095090ad227842f409cf359871daf6dbc4ab65cd19ede86c8ec8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:50:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c972315921c1095090ad227842f409cf359871daf6dbc4ab65cd19ede86c8ec8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:50:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c972315921c1095090ad227842f409cf359871daf6dbc4ab65cd19ede86c8ec8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:50:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c972315921c1095090ad227842f409cf359871daf6dbc4ab65cd19ede86c8ec8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:50:52 compute-0 podman[359833]: 2025-11-25 16:50:52.134915272 +0000 UTC m=+0.111996744 container init 9e12f7839130e5fd48f3f92407c2a1398ed54a79b58f58dd60ae9f23aad7873e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lalande, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:50:52 compute-0 podman[359833]: 2025-11-25 16:50:52.044459483 +0000 UTC m=+0.021540935 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:50:52 compute-0 podman[359833]: 2025-11-25 16:50:52.144082801 +0000 UTC m=+0.121164253 container start 9e12f7839130e5fd48f3f92407c2a1398ed54a79b58f58dd60ae9f23aad7873e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 16:50:52 compute-0 podman[359833]: 2025-11-25 16:50:52.147328249 +0000 UTC m=+0.124409731 container attach 9e12f7839130e5fd48f3f92407c2a1398ed54a79b58f58dd60ae9f23aad7873e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 16:50:52 compute-0 nova_compute[254092]: 2025-11-25 16:50:52.903 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089437.9012377, 7e50c80e-03fd-47ec-854f-f4e5d45c1c82 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:50:52 compute-0 nova_compute[254092]: 2025-11-25 16:50:52.905 254096 INFO nova.compute.manager [-] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] VM Stopped (Lifecycle Event)
Nov 25 16:50:52 compute-0 nova_compute[254092]: 2025-11-25 16:50:52.926 254096 DEBUG nova.compute.manager [None req-777fcc67-0921-4979-b407-d29dd0ddeeda - - - - - -] [instance: 7e50c80e-03fd-47ec-854f-f4e5d45c1c82] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:50:52 compute-0 nova_compute[254092]: 2025-11-25 16:50:52.930 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:53 compute-0 ceph-mon[74985]: pgmap v1992: 321 pgs: 321 active+clean; 246 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 2.1 MiB/s wr, 303 op/s
Nov 25 16:50:53 compute-0 vigilant_lalande[359850]: {
Nov 25 16:50:53 compute-0 vigilant_lalande[359850]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 16:50:53 compute-0 vigilant_lalande[359850]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:50:53 compute-0 vigilant_lalande[359850]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 16:50:53 compute-0 vigilant_lalande[359850]:         "osd_id": 1,
Nov 25 16:50:53 compute-0 vigilant_lalande[359850]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:50:53 compute-0 vigilant_lalande[359850]:         "type": "bluestore"
Nov 25 16:50:53 compute-0 vigilant_lalande[359850]:     },
Nov 25 16:50:53 compute-0 vigilant_lalande[359850]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 16:50:53 compute-0 vigilant_lalande[359850]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:50:53 compute-0 vigilant_lalande[359850]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 16:50:53 compute-0 vigilant_lalande[359850]:         "osd_id": 2,
Nov 25 16:50:53 compute-0 vigilant_lalande[359850]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:50:53 compute-0 vigilant_lalande[359850]:         "type": "bluestore"
Nov 25 16:50:53 compute-0 vigilant_lalande[359850]:     },
Nov 25 16:50:53 compute-0 vigilant_lalande[359850]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 16:50:53 compute-0 vigilant_lalande[359850]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:50:53 compute-0 vigilant_lalande[359850]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 16:50:53 compute-0 vigilant_lalande[359850]:         "osd_id": 0,
Nov 25 16:50:53 compute-0 vigilant_lalande[359850]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:50:53 compute-0 vigilant_lalande[359850]:         "type": "bluestore"
Nov 25 16:50:53 compute-0 vigilant_lalande[359850]:     }
Nov 25 16:50:53 compute-0 vigilant_lalande[359850]: }
Nov 25 16:50:53 compute-0 systemd[1]: libpod-9e12f7839130e5fd48f3f92407c2a1398ed54a79b58f58dd60ae9f23aad7873e.scope: Deactivated successfully.
Nov 25 16:50:53 compute-0 podman[359833]: 2025-11-25 16:50:53.124139594 +0000 UTC m=+1.101221046 container died 9e12f7839130e5fd48f3f92407c2a1398ed54a79b58f58dd60ae9f23aad7873e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 16:50:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-c972315921c1095090ad227842f409cf359871daf6dbc4ab65cd19ede86c8ec8-merged.mount: Deactivated successfully.
Nov 25 16:50:53 compute-0 podman[359833]: 2025-11-25 16:50:53.174223346 +0000 UTC m=+1.151304798 container remove 9e12f7839130e5fd48f3f92407c2a1398ed54a79b58f58dd60ae9f23aad7873e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lalande, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:50:53 compute-0 systemd[1]: libpod-conmon-9e12f7839130e5fd48f3f92407c2a1398ed54a79b58f58dd60ae9f23aad7873e.scope: Deactivated successfully.
Nov 25 16:50:53 compute-0 sudo[359727]: pam_unix(sudo:session): session closed for user root
Nov 25 16:50:53 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:50:53 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:50:53 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:50:53 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:50:53 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 94ef5dfb-78a6-439f-a4ab-2d2a28fc0953 does not exist
Nov 25 16:50:53 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 62c071c4-feae-4b32-bc57-7722c6e5e6d5 does not exist
Nov 25 16:50:53 compute-0 sudo[359896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:50:53 compute-0 sudo[359896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:50:53 compute-0 sudo[359896]: pam_unix(sudo:session): session closed for user root
Nov 25 16:50:53 compute-0 sudo[359921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 16:50:53 compute-0 sudo[359921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:50:53 compute-0 sudo[359921]: pam_unix(sudo:session): session closed for user root
Nov 25 16:50:53 compute-0 nova_compute[254092]: 2025-11-25 16:50:53.474 254096 DEBUG nova.compute.manager [req-bf1e5382-bac7-4681-a099-6451a7f6aa1a req-a2739744-868d-4ad9-838d-318229794b79 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Received event network-changed-d6e67173-6a72-4200-9963-90668ed663e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:50:53 compute-0 nova_compute[254092]: 2025-11-25 16:50:53.475 254096 DEBUG nova.compute.manager [req-bf1e5382-bac7-4681-a099-6451a7f6aa1a req-a2739744-868d-4ad9-838d-318229794b79 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Refreshing instance network info cache due to event network-changed-d6e67173-6a72-4200-9963-90668ed663e4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:50:53 compute-0 nova_compute[254092]: 2025-11-25 16:50:53.475 254096 DEBUG oslo_concurrency.lockutils [req-bf1e5382-bac7-4681-a099-6451a7f6aa1a req-a2739744-868d-4ad9-838d-318229794b79 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-8e8f0fb8-4b3c-40dd-9317-94bedc736376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:50:53 compute-0 nova_compute[254092]: 2025-11-25 16:50:53.475 254096 DEBUG oslo_concurrency.lockutils [req-bf1e5382-bac7-4681-a099-6451a7f6aa1a req-a2739744-868d-4ad9-838d-318229794b79 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-8e8f0fb8-4b3c-40dd-9317-94bedc736376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:50:53 compute-0 nova_compute[254092]: 2025-11-25 16:50:53.475 254096 DEBUG nova.network.neutron [req-bf1e5382-bac7-4681-a099-6451a7f6aa1a req-a2739744-868d-4ad9-838d-318229794b79 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Refreshing network info cache for port d6e67173-6a72-4200-9963-90668ed663e4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:50:53 compute-0 nova_compute[254092]: 2025-11-25 16:50:53.783 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1993: 321 pgs: 321 active+clean; 246 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 936 KiB/s rd, 2.1 MiB/s wr, 124 op/s
Nov 25 16:50:54 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:50:54 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:50:54 compute-0 nova_compute[254092]: 2025-11-25 16:50:54.774 254096 INFO nova.compute.manager [None req-cfc15dc1-eb5e-45e8-953e-11fc16cb86e3 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Get console output
Nov 25 16:50:54 compute-0 nova_compute[254092]: 2025-11-25 16:50:54.780 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 16:50:55 compute-0 ceph-mon[74985]: pgmap v1993: 321 pgs: 321 active+clean; 246 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 936 KiB/s rd, 2.1 MiB/s wr, 124 op/s
Nov 25 16:50:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 16:50:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3634020956' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:50:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 16:50:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3634020956' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:50:55 compute-0 nova_compute[254092]: 2025-11-25 16:50:55.323 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1994: 321 pgs: 321 active+clean; 248 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 937 KiB/s rd, 2.2 MiB/s wr, 126 op/s
Nov 25 16:50:55 compute-0 nova_compute[254092]: 2025-11-25 16:50:55.889 254096 DEBUG oslo_concurrency.lockutils [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "1ae1094f-81aa-490c-80ca-4eba95f46cac" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:50:55 compute-0 nova_compute[254092]: 2025-11-25 16:50:55.889 254096 DEBUG oslo_concurrency.lockutils [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "1ae1094f-81aa-490c-80ca-4eba95f46cac" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:50:55 compute-0 nova_compute[254092]: 2025-11-25 16:50:55.889 254096 DEBUG oslo_concurrency.lockutils [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:50:55 compute-0 nova_compute[254092]: 2025-11-25 16:50:55.890 254096 DEBUG oslo_concurrency.lockutils [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:50:55 compute-0 nova_compute[254092]: 2025-11-25 16:50:55.890 254096 DEBUG oslo_concurrency.lockutils [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:50:55 compute-0 nova_compute[254092]: 2025-11-25 16:50:55.891 254096 INFO nova.compute.manager [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Terminating instance
Nov 25 16:50:55 compute-0 nova_compute[254092]: 2025-11-25 16:50:55.892 254096 DEBUG nova.compute.manager [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:50:55 compute-0 kernel: tap1d6ef4a2-82 (unregistering): left promiscuous mode
Nov 25 16:50:55 compute-0 nova_compute[254092]: 2025-11-25 16:50:55.946 254096 DEBUG nova.network.neutron [req-bf1e5382-bac7-4681-a099-6451a7f6aa1a req-a2739744-868d-4ad9-838d-318229794b79 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Updated VIF entry in instance network info cache for port d6e67173-6a72-4200-9963-90668ed663e4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:50:55 compute-0 nova_compute[254092]: 2025-11-25 16:50:55.947 254096 DEBUG nova.network.neutron [req-bf1e5382-bac7-4681-a099-6451a7f6aa1a req-a2739744-868d-4ad9-838d-318229794b79 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Updating instance_info_cache with network_info: [{"id": "d6e67173-6a72-4200-9963-90668ed663e4", "address": "fa:16:3e:fa:b3:74", "network": {"id": "637a7f19-dc07-431e-90b4-682b65b3ddc7", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1593327310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "423cb78fb5f54c46b9867a6f07d0cf95", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6e67173-6a", "ovs_interfaceid": "d6e67173-6a72-4200-9963-90668ed663e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:50:55 compute-0 NetworkManager[48891]: <info>  [1764089455.9480] device (tap1d6ef4a2-82): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:50:55 compute-0 nova_compute[254092]: 2025-11-25 16:50:55.959 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:55 compute-0 ovn_controller[153477]: 2025-11-25T16:50:55Z|01036|binding|INFO|Releasing lport 1d6ef4a2-8289-4c88-b3f3-481435a4dab0 from this chassis (sb_readonly=0)
Nov 25 16:50:55 compute-0 ovn_controller[153477]: 2025-11-25T16:50:55Z|01037|binding|INFO|Setting lport 1d6ef4a2-8289-4c88-b3f3-481435a4dab0 down in Southbound
Nov 25 16:50:55 compute-0 ovn_controller[153477]: 2025-11-25T16:50:55Z|01038|binding|INFO|Removing iface tap1d6ef4a2-82 ovn-installed in OVS
Nov 25 16:50:55 compute-0 nova_compute[254092]: 2025-11-25 16:50:55.961 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:55.966 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:6f:25 10.100.0.14'], port_security=['fa:16:3e:f8:6f:25 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '1ae1094f-81aa-490c-80ca-4eba95f46cac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9840ff40-ec43-46f9-ab52-3d9495f203ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'neutron:revision_number': '6', 'neutron:security_group_ids': '35dbfd3c-26a0-4489-b51a-15505dded8ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7c9afbfe-cd77-4ff5-bdb4-0ad03d001fca, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1d6ef4a2-8289-4c88-b3f3-481435a4dab0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:50:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:55.967 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1d6ef4a2-8289-4c88-b3f3-481435a4dab0 in datapath 9840ff40-ec43-46f9-ab52-3d9495f203ee unbound from our chassis
Nov 25 16:50:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:55.968 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9840ff40-ec43-46f9-ab52-3d9495f203ee, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:50:55 compute-0 nova_compute[254092]: 2025-11-25 16:50:55.968 254096 DEBUG oslo_concurrency.lockutils [req-bf1e5382-bac7-4681-a099-6451a7f6aa1a req-a2739744-868d-4ad9-838d-318229794b79 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-8e8f0fb8-4b3c-40dd-9317-94bedc736376" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:50:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:55.969 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0010904a-76e9-4e6b-8de5-0695164be027]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:55.969 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee namespace which is not needed anymore
Nov 25 16:50:55 compute-0 nova_compute[254092]: 2025-11-25 16:50:55.986 254096 DEBUG nova.compute.manager [req-be596777-ba66-41a7-b6b3-47aebfd9794d req-be8177e7-c998-4406-96b8-96ed3549d0d1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Received event network-changed-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:50:55 compute-0 nova_compute[254092]: 2025-11-25 16:50:55.986 254096 DEBUG nova.compute.manager [req-be596777-ba66-41a7-b6b3-47aebfd9794d req-be8177e7-c998-4406-96b8-96ed3549d0d1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Refreshing instance network info cache due to event network-changed-1d6ef4a2-8289-4c88-b3f3-481435a4dab0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:50:55 compute-0 nova_compute[254092]: 2025-11-25 16:50:55.987 254096 DEBUG oslo_concurrency.lockutils [req-be596777-ba66-41a7-b6b3-47aebfd9794d req-be8177e7-c998-4406-96b8-96ed3549d0d1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-1ae1094f-81aa-490c-80ca-4eba95f46cac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:50:55 compute-0 nova_compute[254092]: 2025-11-25 16:50:55.987 254096 DEBUG oslo_concurrency.lockutils [req-be596777-ba66-41a7-b6b3-47aebfd9794d req-be8177e7-c998-4406-96b8-96ed3549d0d1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-1ae1094f-81aa-490c-80ca-4eba95f46cac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:50:55 compute-0 nova_compute[254092]: 2025-11-25 16:50:55.987 254096 DEBUG nova.network.neutron [req-be596777-ba66-41a7-b6b3-47aebfd9794d req-be8177e7-c998-4406-96b8-96ed3549d0d1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Refreshing network info cache for port 1d6ef4a2-8289-4c88-b3f3-481435a4dab0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:50:55 compute-0 nova_compute[254092]: 2025-11-25 16:50:55.991 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:56 compute-0 systemd[1]: machine-qemu\x2d130\x2dinstance\x2d00000065.scope: Deactivated successfully.
Nov 25 16:50:56 compute-0 systemd[1]: machine-qemu\x2d130\x2dinstance\x2d00000065.scope: Consumed 13.280s CPU time.
Nov 25 16:50:56 compute-0 systemd-machined[216343]: Machine qemu-130-instance-00000065 terminated.
Nov 25 16:50:56 compute-0 neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee[358864]: [NOTICE]   (358868) : haproxy version is 2.8.14-c23fe91
Nov 25 16:50:56 compute-0 neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee[358864]: [NOTICE]   (358868) : path to executable is /usr/sbin/haproxy
Nov 25 16:50:56 compute-0 neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee[358864]: [WARNING]  (358868) : Exiting Master process...
Nov 25 16:50:56 compute-0 neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee[358864]: [WARNING]  (358868) : Exiting Master process...
Nov 25 16:50:56 compute-0 neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee[358864]: [ALERT]    (358868) : Current worker (358870) exited with code 143 (Terminated)
Nov 25 16:50:56 compute-0 neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee[358864]: [WARNING]  (358868) : All workers exited. Exiting... (0)
Nov 25 16:50:56 compute-0 systemd[1]: libpod-98bda93a59f4bdc946f2e8c02a91666e2677f3d8e7ac0430d78976f6f174ce1e.scope: Deactivated successfully.
Nov 25 16:50:56 compute-0 podman[359969]: 2025-11-25 16:50:56.100508957 +0000 UTC m=+0.042512326 container died 98bda93a59f4bdc946f2e8c02a91666e2677f3d8e7ac0430d78976f6f174ce1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:50:56 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-98bda93a59f4bdc946f2e8c02a91666e2677f3d8e7ac0430d78976f6f174ce1e-userdata-shm.mount: Deactivated successfully.
Nov 25 16:50:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ad744a85431f20015a7676bfb11197180bce1ae809f0c271dcc2b50469a0e66-merged.mount: Deactivated successfully.
Nov 25 16:50:56 compute-0 nova_compute[254092]: 2025-11-25 16:50:56.131 254096 INFO nova.virt.libvirt.driver [-] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Instance destroyed successfully.
Nov 25 16:50:56 compute-0 nova_compute[254092]: 2025-11-25 16:50:56.131 254096 DEBUG nova.objects.instance [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'resources' on Instance uuid 1ae1094f-81aa-490c-80ca-4eba95f46cac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:50:56 compute-0 podman[359969]: 2025-11-25 16:50:56.139549638 +0000 UTC m=+0.081552977 container cleanup 98bda93a59f4bdc946f2e8c02a91666e2677f3d8e7ac0430d78976f6f174ce1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 16:50:56 compute-0 systemd[1]: libpod-conmon-98bda93a59f4bdc946f2e8c02a91666e2677f3d8e7ac0430d78976f6f174ce1e.scope: Deactivated successfully.
Nov 25 16:50:56 compute-0 nova_compute[254092]: 2025-11-25 16:50:56.152 254096 DEBUG nova.virt.libvirt.vif [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:49:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-922142806',display_name='tempest-TestNetworkAdvancedServerOps-server-922142806',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-922142806',id=101,image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI2eMKgXVt1DObzXc6EkgpEhBSu1NoXCvHqtISF2XNwwoVZuFONUWxBCFWb+wYX4SXZYHjLF21mp8VWxhl1OIfekGEl9PKCz3VMmlg2iYEbhoV2Ykmti7n7/j83zn1kCwQ==',key_name='tempest-TestNetworkAdvancedServerOps-21666879',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:50:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f92d76b523b84ec9b645382bdd3192c9',ramdisk_id='',reservation_id='r-79pfuvch',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='a4aa3708-bb73-4b5a-b3f3-42153358021e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1110744031',owner_user_name='tempest-TestNetworkAdvancedServerOps-1110744031-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:50:34Z,user_data=None,user_id='3b30410358c448a693a9dc40ee6aafb0',uuid=1ae1094f-81aa-490c-80ca-4eba95f46cac,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "address": "fa:16:3e:f8:6f:25", "network": {"id": "9840ff40-ec43-46f9-ab52-3d9495f203ee", "bridge": "br-int", "label": "tempest-network-smoke--1436664854", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d6ef4a2-82", "ovs_interfaceid": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:50:56 compute-0 nova_compute[254092]: 2025-11-25 16:50:56.153 254096 DEBUG nova.network.os_vif_util [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converting VIF {"id": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "address": "fa:16:3e:f8:6f:25", "network": {"id": "9840ff40-ec43-46f9-ab52-3d9495f203ee", "bridge": "br-int", "label": "tempest-network-smoke--1436664854", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d6ef4a2-82", "ovs_interfaceid": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:50:56 compute-0 nova_compute[254092]: 2025-11-25 16:50:56.154 254096 DEBUG nova.network.os_vif_util [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f8:6f:25,bridge_name='br-int',has_traffic_filtering=True,id=1d6ef4a2-8289-4c88-b3f3-481435a4dab0,network=Network(9840ff40-ec43-46f9-ab52-3d9495f203ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d6ef4a2-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:50:56 compute-0 nova_compute[254092]: 2025-11-25 16:50:56.154 254096 DEBUG os_vif [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:6f:25,bridge_name='br-int',has_traffic_filtering=True,id=1d6ef4a2-8289-4c88-b3f3-481435a4dab0,network=Network(9840ff40-ec43-46f9-ab52-3d9495f203ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d6ef4a2-82') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:50:56 compute-0 nova_compute[254092]: 2025-11-25 16:50:56.156 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:56 compute-0 nova_compute[254092]: 2025-11-25 16:50:56.157 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1d6ef4a2-82, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:50:56 compute-0 nova_compute[254092]: 2025-11-25 16:50:56.158 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:56 compute-0 nova_compute[254092]: 2025-11-25 16:50:56.159 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:56 compute-0 nova_compute[254092]: 2025-11-25 16:50:56.161 254096 INFO os_vif [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:6f:25,bridge_name='br-int',has_traffic_filtering=True,id=1d6ef4a2-8289-4c88-b3f3-481435a4dab0,network=Network(9840ff40-ec43-46f9-ab52-3d9495f203ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d6ef4a2-82')
Nov 25 16:50:56 compute-0 podman[360008]: 2025-11-25 16:50:56.196714312 +0000 UTC m=+0.037154541 container remove 98bda93a59f4bdc946f2e8c02a91666e2677f3d8e7ac0430d78976f6f174ce1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 16:50:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:56.203 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2e9ec949-78ac-4e85-9b7a-69dc1356fc0d]: (4, ('Tue Nov 25 04:50:56 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee (98bda93a59f4bdc946f2e8c02a91666e2677f3d8e7ac0430d78976f6f174ce1e)\n98bda93a59f4bdc946f2e8c02a91666e2677f3d8e7ac0430d78976f6f174ce1e\nTue Nov 25 04:50:56 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee (98bda93a59f4bdc946f2e8c02a91666e2677f3d8e7ac0430d78976f6f174ce1e)\n98bda93a59f4bdc946f2e8c02a91666e2677f3d8e7ac0430d78976f6f174ce1e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:56.205 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[89b9e397-39a1-479b-8e77-66d6002cc2b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:56.206 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9840ff40-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:50:56 compute-0 nova_compute[254092]: 2025-11-25 16:50:56.207 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:56 compute-0 kernel: tap9840ff40-e0: left promiscuous mode
Nov 25 16:50:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:50:56 compute-0 nova_compute[254092]: 2025-11-25 16:50:56.227 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:56.230 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[611c6990-b575-4eee-b85a-ab51d9b12bf2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/3634020956' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:50:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/3634020956' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:50:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:56.250 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2bc97cff-720c-44bf-a637-b451dec0d5f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:56.252 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[40069065-5614-4aa4-907f-d582c55106f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:56.273 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9cb365f9-e54b-4bbc-8b63-4cffe1e9d721]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589226, 'reachable_time': 27767, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360042, 'error': None, 'target': 'ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:56.275 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9840ff40-ec43-46f9-ab52-3d9495f203ee deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:50:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:56.275 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[150d4f9a-8d28-427d-a2b9-d5f3acf1c33c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:56 compute-0 systemd[1]: run-netns-ovnmeta\x2d9840ff40\x2dec43\x2d46f9\x2dab52\x2d3d9495f203ee.mount: Deactivated successfully.
Nov 25 16:50:56 compute-0 nova_compute[254092]: 2025-11-25 16:50:56.538 254096 INFO nova.virt.libvirt.driver [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Deleting instance files /var/lib/nova/instances/1ae1094f-81aa-490c-80ca-4eba95f46cac_del
Nov 25 16:50:56 compute-0 nova_compute[254092]: 2025-11-25 16:50:56.539 254096 INFO nova.virt.libvirt.driver [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Deletion of /var/lib/nova/instances/1ae1094f-81aa-490c-80ca-4eba95f46cac_del complete
Nov 25 16:50:56 compute-0 nova_compute[254092]: 2025-11-25 16:50:56.599 254096 INFO nova.compute.manager [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Took 0.71 seconds to destroy the instance on the hypervisor.
Nov 25 16:50:56 compute-0 nova_compute[254092]: 2025-11-25 16:50:56.599 254096 DEBUG oslo.service.loopingcall [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:50:56 compute-0 nova_compute[254092]: 2025-11-25 16:50:56.600 254096 DEBUG nova.compute.manager [-] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:50:56 compute-0 nova_compute[254092]: 2025-11-25 16:50:56.600 254096 DEBUG nova.network.neutron [-] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:50:57 compute-0 ceph-mon[74985]: pgmap v1994: 321 pgs: 321 active+clean; 248 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 937 KiB/s rd, 2.2 MiB/s wr, 126 op/s
Nov 25 16:50:57 compute-0 nova_compute[254092]: 2025-11-25 16:50:57.583 254096 DEBUG nova.network.neutron [req-be596777-ba66-41a7-b6b3-47aebfd9794d req-be8177e7-c998-4406-96b8-96ed3549d0d1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Updated VIF entry in instance network info cache for port 1d6ef4a2-8289-4c88-b3f3-481435a4dab0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:50:57 compute-0 nova_compute[254092]: 2025-11-25 16:50:57.584 254096 DEBUG nova.network.neutron [req-be596777-ba66-41a7-b6b3-47aebfd9794d req-be8177e7-c998-4406-96b8-96ed3549d0d1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Updating instance_info_cache with network_info: [{"id": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "address": "fa:16:3e:f8:6f:25", "network": {"id": "9840ff40-ec43-46f9-ab52-3d9495f203ee", "bridge": "br-int", "label": "tempest-network-smoke--1436664854", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d6ef4a2-82", "ovs_interfaceid": "1d6ef4a2-8289-4c88-b3f3-481435a4dab0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:50:57 compute-0 nova_compute[254092]: 2025-11-25 16:50:57.605 254096 DEBUG oslo_concurrency.lockutils [req-be596777-ba66-41a7-b6b3-47aebfd9794d req-be8177e7-c998-4406-96b8-96ed3549d0d1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-1ae1094f-81aa-490c-80ca-4eba95f46cac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:50:57 compute-0 nova_compute[254092]: 2025-11-25 16:50:57.640 254096 DEBUG nova.network.neutron [-] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:50:57 compute-0 nova_compute[254092]: 2025-11-25 16:50:57.659 254096 INFO nova.compute.manager [-] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Took 1.06 seconds to deallocate network for instance.
Nov 25 16:50:57 compute-0 nova_compute[254092]: 2025-11-25 16:50:57.704 254096 DEBUG oslo_concurrency.lockutils [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:50:57 compute-0 nova_compute[254092]: 2025-11-25 16:50:57.704 254096 DEBUG oslo_concurrency.lockutils [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:50:57 compute-0 nova_compute[254092]: 2025-11-25 16:50:57.742 254096 DEBUG nova.compute.manager [req-1796bac1-a391-4aa1-961f-1c355039a02d req-0c991cbc-ea7f-4ce1-8f18-42e8a26cc244 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Received event network-vif-deleted-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:50:57 compute-0 nova_compute[254092]: 2025-11-25 16:50:57.789 254096 DEBUG oslo_concurrency.processutils [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:50:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1995: 321 pgs: 321 active+clean; 248 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 937 KiB/s rd, 2.2 MiB/s wr, 126 op/s
Nov 25 16:50:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:50:58 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2838783938' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:50:58 compute-0 nova_compute[254092]: 2025-11-25 16:50:58.256 254096 DEBUG oslo_concurrency.processutils [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:50:58 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2838783938' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:50:58 compute-0 nova_compute[254092]: 2025-11-25 16:50:58.265 254096 DEBUG nova.compute.provider_tree [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:50:58 compute-0 nova_compute[254092]: 2025-11-25 16:50:58.280 254096 DEBUG nova.scheduler.client.report [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:50:58 compute-0 nova_compute[254092]: 2025-11-25 16:50:58.306 254096 DEBUG oslo_concurrency.lockutils [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:50:58 compute-0 nova_compute[254092]: 2025-11-25 16:50:58.336 254096 INFO nova.scheduler.client.report [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Deleted allocations for instance 1ae1094f-81aa-490c-80ca-4eba95f46cac
Nov 25 16:50:58 compute-0 nova_compute[254092]: 2025-11-25 16:50:58.415 254096 DEBUG oslo_concurrency.lockutils [None req-c0024e52-f793-43e5-9754-2367dfff2e5f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "1ae1094f-81aa-490c-80ca-4eba95f46cac" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.526s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:50:58 compute-0 nova_compute[254092]: 2025-11-25 16:50:58.512 254096 DEBUG nova.compute.manager [req-ce86d88e-2b4f-40d3-b261-4451ffdaf348 req-46ae0a2e-a434-4a7e-9a78-dfae8214bd12 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Received event network-vif-unplugged-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:50:58 compute-0 nova_compute[254092]: 2025-11-25 16:50:58.512 254096 DEBUG oslo_concurrency.lockutils [req-ce86d88e-2b4f-40d3-b261-4451ffdaf348 req-46ae0a2e-a434-4a7e-9a78-dfae8214bd12 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:50:58 compute-0 nova_compute[254092]: 2025-11-25 16:50:58.513 254096 DEBUG oslo_concurrency.lockutils [req-ce86d88e-2b4f-40d3-b261-4451ffdaf348 req-46ae0a2e-a434-4a7e-9a78-dfae8214bd12 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:50:58 compute-0 nova_compute[254092]: 2025-11-25 16:50:58.513 254096 DEBUG oslo_concurrency.lockutils [req-ce86d88e-2b4f-40d3-b261-4451ffdaf348 req-46ae0a2e-a434-4a7e-9a78-dfae8214bd12 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:50:58 compute-0 nova_compute[254092]: 2025-11-25 16:50:58.513 254096 DEBUG nova.compute.manager [req-ce86d88e-2b4f-40d3-b261-4451ffdaf348 req-46ae0a2e-a434-4a7e-9a78-dfae8214bd12 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] No waiting events found dispatching network-vif-unplugged-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:50:58 compute-0 nova_compute[254092]: 2025-11-25 16:50:58.514 254096 WARNING nova.compute.manager [req-ce86d88e-2b4f-40d3-b261-4451ffdaf348 req-46ae0a2e-a434-4a7e-9a78-dfae8214bd12 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Received unexpected event network-vif-unplugged-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 for instance with vm_state deleted and task_state None.
Nov 25 16:50:58 compute-0 nova_compute[254092]: 2025-11-25 16:50:58.514 254096 DEBUG nova.compute.manager [req-ce86d88e-2b4f-40d3-b261-4451ffdaf348 req-46ae0a2e-a434-4a7e-9a78-dfae8214bd12 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Received event network-vif-plugged-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:50:58 compute-0 nova_compute[254092]: 2025-11-25 16:50:58.514 254096 DEBUG oslo_concurrency.lockutils [req-ce86d88e-2b4f-40d3-b261-4451ffdaf348 req-46ae0a2e-a434-4a7e-9a78-dfae8214bd12 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:50:58 compute-0 nova_compute[254092]: 2025-11-25 16:50:58.514 254096 DEBUG oslo_concurrency.lockutils [req-ce86d88e-2b4f-40d3-b261-4451ffdaf348 req-46ae0a2e-a434-4a7e-9a78-dfae8214bd12 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:50:58 compute-0 nova_compute[254092]: 2025-11-25 16:50:58.514 254096 DEBUG oslo_concurrency.lockutils [req-ce86d88e-2b4f-40d3-b261-4451ffdaf348 req-46ae0a2e-a434-4a7e-9a78-dfae8214bd12 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ae1094f-81aa-490c-80ca-4eba95f46cac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:50:58 compute-0 nova_compute[254092]: 2025-11-25 16:50:58.515 254096 DEBUG nova.compute.manager [req-ce86d88e-2b4f-40d3-b261-4451ffdaf348 req-46ae0a2e-a434-4a7e-9a78-dfae8214bd12 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] No waiting events found dispatching network-vif-plugged-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:50:58 compute-0 nova_compute[254092]: 2025-11-25 16:50:58.515 254096 WARNING nova.compute.manager [req-ce86d88e-2b4f-40d3-b261-4451ffdaf348 req-46ae0a2e-a434-4a7e-9a78-dfae8214bd12 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Received unexpected event network-vif-plugged-1d6ef4a2-8289-4c88-b3f3-481435a4dab0 for instance with vm_state deleted and task_state None.
Nov 25 16:50:58 compute-0 nova_compute[254092]: 2025-11-25 16:50:58.787 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:58 compute-0 nova_compute[254092]: 2025-11-25 16:50:58.896 254096 DEBUG oslo_concurrency.lockutils [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Acquiring lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:50:58 compute-0 nova_compute[254092]: 2025-11-25 16:50:58.897 254096 DEBUG oslo_concurrency.lockutils [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:50:58 compute-0 nova_compute[254092]: 2025-11-25 16:50:58.897 254096 DEBUG oslo_concurrency.lockutils [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Acquiring lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:50:58 compute-0 nova_compute[254092]: 2025-11-25 16:50:58.898 254096 DEBUG oslo_concurrency.lockutils [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:50:58 compute-0 nova_compute[254092]: 2025-11-25 16:50:58.898 254096 DEBUG oslo_concurrency.lockutils [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:50:58 compute-0 nova_compute[254092]: 2025-11-25 16:50:58.900 254096 INFO nova.compute.manager [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Terminating instance
Nov 25 16:50:58 compute-0 nova_compute[254092]: 2025-11-25 16:50:58.901 254096 DEBUG nova.compute.manager [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:50:58 compute-0 kernel: tapd6e67173-6a (unregistering): left promiscuous mode
Nov 25 16:50:58 compute-0 NetworkManager[48891]: <info>  [1764089458.9717] device (tapd6e67173-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:50:59 compute-0 nova_compute[254092]: 2025-11-25 16:50:59.015 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:59 compute-0 ovn_controller[153477]: 2025-11-25T16:50:59Z|01039|binding|INFO|Releasing lport d6e67173-6a72-4200-9963-90668ed663e4 from this chassis (sb_readonly=0)
Nov 25 16:50:59 compute-0 ovn_controller[153477]: 2025-11-25T16:50:59Z|01040|binding|INFO|Setting lport d6e67173-6a72-4200-9963-90668ed663e4 down in Southbound
Nov 25 16:50:59 compute-0 ovn_controller[153477]: 2025-11-25T16:50:59Z|01041|binding|INFO|Removing iface tapd6e67173-6a ovn-installed in OVS
Nov 25 16:50:59 compute-0 nova_compute[254092]: 2025-11-25 16:50:59.020 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:59.024 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fa:b3:74 10.100.0.13'], port_security=['fa:16:3e:fa:b3:74 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '8e8f0fb8-4b3c-40dd-9317-94bedc736376', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-637a7f19-dc07-431e-90b4-682b65b3ddc7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '423cb78fb5f54c46b9867a6f07d0cf95', 'neutron:revision_number': '8', 'neutron:security_group_ids': '05d0216e-41f1-476a-b1bc-7b1c5e9ab51c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b730a5ca-bd1d-4c9c-8894-44f85dc34188, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=d6e67173-6a72-4200-9963-90668ed663e4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:50:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:59.027 163338 INFO neutron.agent.ovn.metadata.agent [-] Port d6e67173-6a72-4200-9963-90668ed663e4 in datapath 637a7f19-dc07-431e-90b4-682b65b3ddc7 unbound from our chassis
Nov 25 16:50:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:59.028 163338 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 637a7f19-dc07-431e-90b4-682b65b3ddc7 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 25 16:50:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:50:59.029 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[54caa6dd-ae03-4e80-9942-df648d55d63e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:50:59 compute-0 nova_compute[254092]: 2025-11-25 16:50:59.040 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:59 compute-0 systemd[1]: machine-qemu\x2d131\x2dinstance\x2d00000066.scope: Deactivated successfully.
Nov 25 16:50:59 compute-0 systemd[1]: machine-qemu\x2d131\x2dinstance\x2d00000066.scope: Consumed 12.990s CPU time.
Nov 25 16:50:59 compute-0 systemd-machined[216343]: Machine qemu-131-instance-00000066 terminated.
Nov 25 16:50:59 compute-0 nova_compute[254092]: 2025-11-25 16:50:59.140 254096 INFO nova.virt.libvirt.driver [-] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Instance destroyed successfully.
Nov 25 16:50:59 compute-0 nova_compute[254092]: 2025-11-25 16:50:59.141 254096 DEBUG nova.objects.instance [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lazy-loading 'resources' on Instance uuid 8e8f0fb8-4b3c-40dd-9317-94bedc736376 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:50:59 compute-0 nova_compute[254092]: 2025-11-25 16:50:59.155 254096 DEBUG nova.virt.libvirt.vif [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:50:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-1379098021',display_name='tempest-ServerRescueTestJSONUnderV235-server-1379098021',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-1379098021',id=102,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:50:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='423cb78fb5f54c46b9867a6f07d0cf95',ramdisk_id='',reservation_id='r-njsnxmz3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif
_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-1568478678',owner_user_name='tempest-ServerRescueTestJSONUnderV235-1568478678-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:50:35Z,user_data=None,user_id='734253d3f2e84904968d9db3044df1c8',uuid=8e8f0fb8-4b3c-40dd-9317-94bedc736376,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "d6e67173-6a72-4200-9963-90668ed663e4", "address": "fa:16:3e:fa:b3:74", "network": {"id": "637a7f19-dc07-431e-90b4-682b65b3ddc7", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1593327310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "423cb78fb5f54c46b9867a6f07d0cf95", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6e67173-6a", "ovs_interfaceid": "d6e67173-6a72-4200-9963-90668ed663e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:50:59 compute-0 nova_compute[254092]: 2025-11-25 16:50:59.156 254096 DEBUG nova.network.os_vif_util [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Converting VIF {"id": "d6e67173-6a72-4200-9963-90668ed663e4", "address": "fa:16:3e:fa:b3:74", "network": {"id": "637a7f19-dc07-431e-90b4-682b65b3ddc7", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1593327310-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "423cb78fb5f54c46b9867a6f07d0cf95", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6e67173-6a", "ovs_interfaceid": "d6e67173-6a72-4200-9963-90668ed663e4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:50:59 compute-0 nova_compute[254092]: 2025-11-25 16:50:59.157 254096 DEBUG nova.network.os_vif_util [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fa:b3:74,bridge_name='br-int',has_traffic_filtering=True,id=d6e67173-6a72-4200-9963-90668ed663e4,network=Network(637a7f19-dc07-431e-90b4-682b65b3ddc7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6e67173-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:50:59 compute-0 nova_compute[254092]: 2025-11-25 16:50:59.157 254096 DEBUG os_vif [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fa:b3:74,bridge_name='br-int',has_traffic_filtering=True,id=d6e67173-6a72-4200-9963-90668ed663e4,network=Network(637a7f19-dc07-431e-90b4-682b65b3ddc7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6e67173-6a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:50:59 compute-0 nova_compute[254092]: 2025-11-25 16:50:59.159 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:59 compute-0 nova_compute[254092]: 2025-11-25 16:50:59.159 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd6e67173-6a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:50:59 compute-0 nova_compute[254092]: 2025-11-25 16:50:59.161 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:59 compute-0 nova_compute[254092]: 2025-11-25 16:50:59.163 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:50:59 compute-0 nova_compute[254092]: 2025-11-25 16:50:59.166 254096 INFO os_vif [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fa:b3:74,bridge_name='br-int',has_traffic_filtering=True,id=d6e67173-6a72-4200-9963-90668ed663e4,network=Network(637a7f19-dc07-431e-90b4-682b65b3ddc7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6e67173-6a')
Nov 25 16:50:59 compute-0 ceph-mon[74985]: pgmap v1995: 321 pgs: 321 active+clean; 248 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 937 KiB/s rd, 2.2 MiB/s wr, 126 op/s
Nov 25 16:50:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1996: 321 pgs: 321 active+clean; 248 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 934 KiB/s rd, 2.2 MiB/s wr, 125 op/s
Nov 25 16:50:59 compute-0 nova_compute[254092]: 2025-11-25 16:50:59.864 254096 INFO nova.virt.libvirt.driver [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Deleting instance files /var/lib/nova/instances/8e8f0fb8-4b3c-40dd-9317-94bedc736376_del
Nov 25 16:50:59 compute-0 nova_compute[254092]: 2025-11-25 16:50:59.865 254096 INFO nova.virt.libvirt.driver [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Deletion of /var/lib/nova/instances/8e8f0fb8-4b3c-40dd-9317-94bedc736376_del complete
Nov 25 16:50:59 compute-0 nova_compute[254092]: 2025-11-25 16:50:59.946 254096 INFO nova.compute.manager [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Took 1.04 seconds to destroy the instance on the hypervisor.
Nov 25 16:50:59 compute-0 nova_compute[254092]: 2025-11-25 16:50:59.946 254096 DEBUG oslo.service.loopingcall [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:50:59 compute-0 nova_compute[254092]: 2025-11-25 16:50:59.946 254096 DEBUG nova.compute.manager [-] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:50:59 compute-0 nova_compute[254092]: 2025-11-25 16:50:59.946 254096 DEBUG nova.network.neutron [-] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:51:00 compute-0 nova_compute[254092]: 2025-11-25 16:51:00.228 254096 DEBUG nova.compute.manager [req-382e5ba7-994d-48e0-879d-85aecd2e09f8 req-68949872-b573-4f8b-88c9-6a32e07ed39d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Received event network-vif-unplugged-d6e67173-6a72-4200-9963-90668ed663e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:51:00 compute-0 nova_compute[254092]: 2025-11-25 16:51:00.229 254096 DEBUG oslo_concurrency.lockutils [req-382e5ba7-994d-48e0-879d-85aecd2e09f8 req-68949872-b573-4f8b-88c9-6a32e07ed39d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:51:00 compute-0 nova_compute[254092]: 2025-11-25 16:51:00.229 254096 DEBUG oslo_concurrency.lockutils [req-382e5ba7-994d-48e0-879d-85aecd2e09f8 req-68949872-b573-4f8b-88c9-6a32e07ed39d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:51:00 compute-0 nova_compute[254092]: 2025-11-25 16:51:00.229 254096 DEBUG oslo_concurrency.lockutils [req-382e5ba7-994d-48e0-879d-85aecd2e09f8 req-68949872-b573-4f8b-88c9-6a32e07ed39d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:51:00 compute-0 nova_compute[254092]: 2025-11-25 16:51:00.229 254096 DEBUG nova.compute.manager [req-382e5ba7-994d-48e0-879d-85aecd2e09f8 req-68949872-b573-4f8b-88c9-6a32e07ed39d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] No waiting events found dispatching network-vif-unplugged-d6e67173-6a72-4200-9963-90668ed663e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:51:00 compute-0 nova_compute[254092]: 2025-11-25 16:51:00.230 254096 DEBUG nova.compute.manager [req-382e5ba7-994d-48e0-879d-85aecd2e09f8 req-68949872-b573-4f8b-88c9-6a32e07ed39d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Received event network-vif-unplugged-d6e67173-6a72-4200-9963-90668ed663e4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:51:00 compute-0 ceph-mon[74985]: pgmap v1996: 321 pgs: 321 active+clean; 248 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 934 KiB/s rd, 2.2 MiB/s wr, 125 op/s
Nov 25 16:51:01 compute-0 nova_compute[254092]: 2025-11-25 16:51:01.032 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:51:01 compute-0 nova_compute[254092]: 2025-11-25 16:51:01.257 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:01 compute-0 nova_compute[254092]: 2025-11-25 16:51:01.281 254096 DEBUG nova.network.neutron [-] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:51:01 compute-0 nova_compute[254092]: 2025-11-25 16:51:01.304 254096 INFO nova.compute.manager [-] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Took 1.36 seconds to deallocate network for instance.
Nov 25 16:51:01 compute-0 nova_compute[254092]: 2025-11-25 16:51:01.366 254096 DEBUG oslo_concurrency.lockutils [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:51:01 compute-0 nova_compute[254092]: 2025-11-25 16:51:01.367 254096 DEBUG oslo_concurrency.lockutils [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:51:01 compute-0 nova_compute[254092]: 2025-11-25 16:51:01.426 254096 DEBUG oslo_concurrency.processutils [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:51:01 compute-0 nova_compute[254092]: 2025-11-25 16:51:01.470 254096 DEBUG nova.compute.manager [req-01e9e385-d50c-4886-99fa-b455db9714df req-159b171a-c14d-4380-a4e3-315e5b25d418 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Received event network-vif-deleted-d6e67173-6a72-4200-9963-90668ed663e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:51:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1997: 321 pgs: 321 active+clean; 41 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 991 KiB/s rd, 2.2 MiB/s wr, 206 op/s
Nov 25 16:51:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:51:01 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1199419544' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:51:01 compute-0 nova_compute[254092]: 2025-11-25 16:51:01.866 254096 DEBUG oslo_concurrency.processutils [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:51:01 compute-0 nova_compute[254092]: 2025-11-25 16:51:01.871 254096 DEBUG nova.compute.provider_tree [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:51:01 compute-0 nova_compute[254092]: 2025-11-25 16:51:01.891 254096 DEBUG nova.scheduler.client.report [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:51:01 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1199419544' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:51:01 compute-0 nova_compute[254092]: 2025-11-25 16:51:01.911 254096 DEBUG oslo_concurrency.lockutils [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.544s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:51:01 compute-0 nova_compute[254092]: 2025-11-25 16:51:01.961 254096 INFO nova.scheduler.client.report [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Deleted allocations for instance 8e8f0fb8-4b3c-40dd-9317-94bedc736376
Nov 25 16:51:02 compute-0 nova_compute[254092]: 2025-11-25 16:51:02.037 254096 DEBUG oslo_concurrency.lockutils [None req-26016a85-80ca-4f0b-ab1b-38c01321cb35 734253d3f2e84904968d9db3044df1c8 423cb78fb5f54c46b9867a6f07d0cf95 - - default default] Lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.141s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:51:02 compute-0 nova_compute[254092]: 2025-11-25 16:51:02.379 254096 DEBUG nova.compute.manager [req-d7e5db57-1826-4efd-bb41-9ac04f4fc1e0 req-d8759939-b1b1-4697-b460-1aabc2682b2a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Received event network-vif-plugged-d6e67173-6a72-4200-9963-90668ed663e4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:51:02 compute-0 nova_compute[254092]: 2025-11-25 16:51:02.379 254096 DEBUG oslo_concurrency.lockutils [req-d7e5db57-1826-4efd-bb41-9ac04f4fc1e0 req-d8759939-b1b1-4697-b460-1aabc2682b2a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:51:02 compute-0 nova_compute[254092]: 2025-11-25 16:51:02.379 254096 DEBUG oslo_concurrency.lockutils [req-d7e5db57-1826-4efd-bb41-9ac04f4fc1e0 req-d8759939-b1b1-4697-b460-1aabc2682b2a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:51:02 compute-0 nova_compute[254092]: 2025-11-25 16:51:02.380 254096 DEBUG oslo_concurrency.lockutils [req-d7e5db57-1826-4efd-bb41-9ac04f4fc1e0 req-d8759939-b1b1-4697-b460-1aabc2682b2a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8e8f0fb8-4b3c-40dd-9317-94bedc736376-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:51:02 compute-0 nova_compute[254092]: 2025-11-25 16:51:02.380 254096 DEBUG nova.compute.manager [req-d7e5db57-1826-4efd-bb41-9ac04f4fc1e0 req-d8759939-b1b1-4697-b460-1aabc2682b2a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] No waiting events found dispatching network-vif-plugged-d6e67173-6a72-4200-9963-90668ed663e4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:51:02 compute-0 nova_compute[254092]: 2025-11-25 16:51:02.380 254096 WARNING nova.compute.manager [req-d7e5db57-1826-4efd-bb41-9ac04f4fc1e0 req-d8759939-b1b1-4697-b460-1aabc2682b2a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Received unexpected event network-vif-plugged-d6e67173-6a72-4200-9963-90668ed663e4 for instance with vm_state deleted and task_state None.
Nov 25 16:51:02 compute-0 ceph-mon[74985]: pgmap v1997: 321 pgs: 321 active+clean; 41 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 991 KiB/s rd, 2.2 MiB/s wr, 206 op/s
Nov 25 16:51:03 compute-0 nova_compute[254092]: 2025-11-25 16:51:03.788 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1998: 321 pgs: 321 active+clean; 41 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 25 KiB/s wr, 83 op/s
Nov 25 16:51:04 compute-0 nova_compute[254092]: 2025-11-25 16:51:04.162 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:04 compute-0 ceph-mon[74985]: pgmap v1998: 321 pgs: 321 active+clean; 41 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 25 KiB/s wr, 83 op/s
Nov 25 16:51:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v1999: 321 pgs: 321 active+clean; 41 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 25 KiB/s wr, 83 op/s
Nov 25 16:51:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:51:06 compute-0 ceph-mon[74985]: pgmap v1999: 321 pgs: 321 active+clean; 41 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 25 KiB/s wr, 83 op/s
Nov 25 16:51:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2000: 321 pgs: 321 active+clean; 41 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 3.8 KiB/s wr, 81 op/s
Nov 25 16:51:08 compute-0 nova_compute[254092]: 2025-11-25 16:51:08.790 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:08 compute-0 ceph-mon[74985]: pgmap v2000: 321 pgs: 321 active+clean; 41 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 3.8 KiB/s wr, 81 op/s
Nov 25 16:51:09 compute-0 nova_compute[254092]: 2025-11-25 16:51:09.164 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2001: 321 pgs: 321 active+clean; 41 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 3.5 KiB/s wr, 81 op/s
Nov 25 16:51:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:51:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:51:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:51:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:51:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:51:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:51:10 compute-0 ceph-mon[74985]: pgmap v2001: 321 pgs: 321 active+clean; 41 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 3.5 KiB/s wr, 81 op/s
Nov 25 16:51:11 compute-0 nova_compute[254092]: 2025-11-25 16:51:11.126 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089456.125063, 1ae1094f-81aa-490c-80ca-4eba95f46cac => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:51:11 compute-0 nova_compute[254092]: 2025-11-25 16:51:11.127 254096 INFO nova.compute.manager [-] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] VM Stopped (Lifecycle Event)
Nov 25 16:51:11 compute-0 nova_compute[254092]: 2025-11-25 16:51:11.144 254096 DEBUG nova.compute.manager [None req-fd2afcbb-5fd2-4393-a3bf-3001c8b9254f - - - - - -] [instance: 1ae1094f-81aa-490c-80ca-4eba95f46cac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:51:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:51:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2002: 321 pgs: 321 active+clean; 41 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 3.5 KiB/s wr, 81 op/s
Nov 25 16:51:12 compute-0 ceph-mon[74985]: pgmap v2002: 321 pgs: 321 active+clean; 41 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 3.5 KiB/s wr, 81 op/s
Nov 25 16:51:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:13.627 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:51:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:13.628 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:51:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:13.628 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:51:13 compute-0 nova_compute[254092]: 2025-11-25 16:51:13.792 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2003: 321 pgs: 321 active+clean; 41 MiB data, 719 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:51:14 compute-0 nova_compute[254092]: 2025-11-25 16:51:14.139 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089459.1373086, 8e8f0fb8-4b3c-40dd-9317-94bedc736376 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:51:14 compute-0 nova_compute[254092]: 2025-11-25 16:51:14.139 254096 INFO nova.compute.manager [-] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] VM Stopped (Lifecycle Event)
Nov 25 16:51:14 compute-0 nova_compute[254092]: 2025-11-25 16:51:14.155 254096 DEBUG nova.compute.manager [None req-3f205dbb-e7c0-46b4-9fff-5e27584ff0be - - - - - -] [instance: 8e8f0fb8-4b3c-40dd-9317-94bedc736376] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:51:14 compute-0 nova_compute[254092]: 2025-11-25 16:51:14.165 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:14 compute-0 nova_compute[254092]: 2025-11-25 16:51:14.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:51:14 compute-0 ceph-mon[74985]: pgmap v2003: 321 pgs: 321 active+clean; 41 MiB data, 719 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:51:15 compute-0 nova_compute[254092]: 2025-11-25 16:51:15.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:51:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2004: 321 pgs: 321 active+clean; 41 MiB data, 719 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:51:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:51:16 compute-0 nova_compute[254092]: 2025-11-25 16:51:16.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:51:16 compute-0 ceph-mon[74985]: pgmap v2004: 321 pgs: 321 active+clean; 41 MiB data, 719 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:51:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2005: 321 pgs: 321 active+clean; 41 MiB data, 719 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:51:18 compute-0 nova_compute[254092]: 2025-11-25 16:51:18.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:51:18 compute-0 nova_compute[254092]: 2025-11-25 16:51:18.795 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:18 compute-0 ceph-mon[74985]: pgmap v2005: 321 pgs: 321 active+clean; 41 MiB data, 719 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:51:19 compute-0 nova_compute[254092]: 2025-11-25 16:51:19.167 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:19 compute-0 nova_compute[254092]: 2025-11-25 16:51:19.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:51:19 compute-0 nova_compute[254092]: 2025-11-25 16:51:19.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 16:51:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2006: 321 pgs: 321 active+clean; 41 MiB data, 719 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:51:20 compute-0 nova_compute[254092]: 2025-11-25 16:51:20.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:51:20 compute-0 ceph-mon[74985]: pgmap v2006: 321 pgs: 321 active+clean; 41 MiB data, 719 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:51:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:51:21 compute-0 nova_compute[254092]: 2025-11-25 16:51:21.397 254096 DEBUG oslo_concurrency.lockutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Acquiring lock "2fa32ddb-072c-480c-9df3-a207412beb72" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:51:21 compute-0 nova_compute[254092]: 2025-11-25 16:51:21.397 254096 DEBUG oslo_concurrency.lockutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Lock "2fa32ddb-072c-480c-9df3-a207412beb72" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:51:21 compute-0 nova_compute[254092]: 2025-11-25 16:51:21.416 254096 DEBUG nova.compute.manager [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:51:21 compute-0 nova_compute[254092]: 2025-11-25 16:51:21.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:51:21 compute-0 nova_compute[254092]: 2025-11-25 16:51:21.498 254096 DEBUG oslo_concurrency.lockutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:51:21 compute-0 nova_compute[254092]: 2025-11-25 16:51:21.499 254096 DEBUG oslo_concurrency.lockutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:51:21 compute-0 nova_compute[254092]: 2025-11-25 16:51:21.507 254096 DEBUG nova.virt.hardware [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:51:21 compute-0 nova_compute[254092]: 2025-11-25 16:51:21.507 254096 INFO nova.compute.claims [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:51:21 compute-0 nova_compute[254092]: 2025-11-25 16:51:21.536 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:51:21 compute-0 nova_compute[254092]: 2025-11-25 16:51:21.620 254096 DEBUG oslo_concurrency.processutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:51:21 compute-0 podman[360128]: 2025-11-25 16:51:21.650630246 +0000 UTC m=+0.064186525 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 25 16:51:21 compute-0 podman[360127]: 2025-11-25 16:51:21.711386747 +0000 UTC m=+0.125880172 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:51:21 compute-0 podman[360129]: 2025-11-25 16:51:21.728562824 +0000 UTC m=+0.140020946 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118)
Nov 25 16:51:21 compute-0 nova_compute[254092]: 2025-11-25 16:51:21.833 254096 DEBUG oslo_concurrency.lockutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "7368c721-3e2a-4635-b2d8-5703d20438d3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:51:21 compute-0 nova_compute[254092]: 2025-11-25 16:51:21.833 254096 DEBUG oslo_concurrency.lockutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:51:21 compute-0 nova_compute[254092]: 2025-11-25 16:51:21.855 254096 DEBUG nova.compute.manager [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:51:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2007: 321 pgs: 321 active+clean; 41 MiB data, 719 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:51:21 compute-0 nova_compute[254092]: 2025-11-25 16:51:21.912 254096 DEBUG oslo_concurrency.lockutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:51:22 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:51:22 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2314132498' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:51:22 compute-0 nova_compute[254092]: 2025-11-25 16:51:22.113 254096 DEBUG oslo_concurrency.processutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:51:22 compute-0 nova_compute[254092]: 2025-11-25 16:51:22.121 254096 DEBUG nova.compute.provider_tree [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:51:22 compute-0 nova_compute[254092]: 2025-11-25 16:51:22.143 254096 DEBUG nova.scheduler.client.report [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:51:22 compute-0 nova_compute[254092]: 2025-11-25 16:51:22.163 254096 DEBUG oslo_concurrency.lockutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:51:22 compute-0 nova_compute[254092]: 2025-11-25 16:51:22.165 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:51:22 compute-0 nova_compute[254092]: 2025-11-25 16:51:22.165 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:51:22 compute-0 nova_compute[254092]: 2025-11-25 16:51:22.166 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 16:51:22 compute-0 nova_compute[254092]: 2025-11-25 16:51:22.166 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:51:22 compute-0 nova_compute[254092]: 2025-11-25 16:51:22.198 254096 DEBUG oslo_concurrency.lockutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.286s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:51:22 compute-0 nova_compute[254092]: 2025-11-25 16:51:22.208 254096 DEBUG nova.virt.hardware [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:51:22 compute-0 nova_compute[254092]: 2025-11-25 16:51:22.209 254096 INFO nova.compute.claims [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:51:22 compute-0 nova_compute[254092]: 2025-11-25 16:51:22.269 254096 DEBUG oslo_concurrency.lockutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Acquiring lock "26a03ea9-69ed-410a-b248-693f9abf1db2" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:51:22 compute-0 nova_compute[254092]: 2025-11-25 16:51:22.270 254096 DEBUG oslo_concurrency.lockutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Lock "26a03ea9-69ed-410a-b248-693f9abf1db2" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:51:22 compute-0 nova_compute[254092]: 2025-11-25 16:51:22.280 254096 DEBUG oslo_concurrency.lockutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Lock "26a03ea9-69ed-410a-b248-693f9abf1db2" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.010s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:51:22 compute-0 nova_compute[254092]: 2025-11-25 16:51:22.280 254096 DEBUG nova.compute.manager [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:51:22 compute-0 nova_compute[254092]: 2025-11-25 16:51:22.329 254096 DEBUG nova.compute.manager [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:51:22 compute-0 nova_compute[254092]: 2025-11-25 16:51:22.329 254096 DEBUG nova.network.neutron [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:51:22 compute-0 nova_compute[254092]: 2025-11-25 16:51:22.346 254096 INFO nova.virt.libvirt.driver [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:51:22 compute-0 nova_compute[254092]: 2025-11-25 16:51:22.361 254096 DEBUG nova.compute.manager [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:51:22 compute-0 nova_compute[254092]: 2025-11-25 16:51:22.372 254096 DEBUG oslo_concurrency.processutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:51:22 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:51:22 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1226227465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:51:22 compute-0 nova_compute[254092]: 2025-11-25 16:51:22.596 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:51:22 compute-0 nova_compute[254092]: 2025-11-25 16:51:22.599 254096 DEBUG nova.compute.manager [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:51:22 compute-0 nova_compute[254092]: 2025-11-25 16:51:22.601 254096 DEBUG nova.virt.libvirt.driver [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:51:22 compute-0 nova_compute[254092]: 2025-11-25 16:51:22.601 254096 INFO nova.virt.libvirt.driver [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Creating image(s)
Nov 25 16:51:22 compute-0 nova_compute[254092]: 2025-11-25 16:51:22.624 254096 DEBUG nova.storage.rbd_utils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] rbd image 2fa32ddb-072c-480c-9df3-a207412beb72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:51:22 compute-0 nova_compute[254092]: 2025-11-25 16:51:22.646 254096 DEBUG nova.storage.rbd_utils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] rbd image 2fa32ddb-072c-480c-9df3-a207412beb72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:51:22 compute-0 nova_compute[254092]: 2025-11-25 16:51:22.668 254096 DEBUG nova.storage.rbd_utils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] rbd image 2fa32ddb-072c-480c-9df3-a207412beb72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:51:22 compute-0 nova_compute[254092]: 2025-11-25 16:51:22.672 254096 DEBUG oslo_concurrency.processutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:51:22 compute-0 nova_compute[254092]: 2025-11-25 16:51:22.747 254096 DEBUG oslo_concurrency.processutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:51:22 compute-0 nova_compute[254092]: 2025-11-25 16:51:22.748 254096 DEBUG oslo_concurrency.lockutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:51:22 compute-0 nova_compute[254092]: 2025-11-25 16:51:22.748 254096 DEBUG oslo_concurrency.lockutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:51:22 compute-0 nova_compute[254092]: 2025-11-25 16:51:22.749 254096 DEBUG oslo_concurrency.lockutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:51:22 compute-0 nova_compute[254092]: 2025-11-25 16:51:22.769 254096 DEBUG nova.storage.rbd_utils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] rbd image 2fa32ddb-072c-480c-9df3-a207412beb72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:51:22 compute-0 nova_compute[254092]: 2025-11-25 16:51:22.772 254096 DEBUG oslo_concurrency.processutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 2fa32ddb-072c-480c-9df3-a207412beb72_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:51:22 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:51:22 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/253117060' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:51:22 compute-0 nova_compute[254092]: 2025-11-25 16:51:22.805 254096 DEBUG oslo_concurrency.processutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:51:22 compute-0 nova_compute[254092]: 2025-11-25 16:51:22.811 254096 DEBUG nova.compute.provider_tree [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:51:22 compute-0 nova_compute[254092]: 2025-11-25 16:51:22.828 254096 DEBUG nova.scheduler.client.report [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:51:22 compute-0 nova_compute[254092]: 2025-11-25 16:51:22.850 254096 DEBUG oslo_concurrency.lockutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:51:22 compute-0 nova_compute[254092]: 2025-11-25 16:51:22.851 254096 DEBUG nova.compute.manager [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:51:22 compute-0 nova_compute[254092]: 2025-11-25 16:51:22.857 254096 DEBUG nova.policy [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b117cf5d3a76422aacd4d3a62d7b2f0e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'faf3ae8544684cac802ef962ea89ba52', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:51:22 compute-0 nova_compute[254092]: 2025-11-25 16:51:22.913 254096 DEBUG nova.compute.manager [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:51:22 compute-0 nova_compute[254092]: 2025-11-25 16:51:22.914 254096 DEBUG nova.network.neutron [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:51:22 compute-0 nova_compute[254092]: 2025-11-25 16:51:22.935 254096 INFO nova.virt.libvirt.driver [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:51:22 compute-0 nova_compute[254092]: 2025-11-25 16:51:22.951 254096 DEBUG nova.compute.manager [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:51:22 compute-0 nova_compute[254092]: 2025-11-25 16:51:22.986 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:51:22 compute-0 nova_compute[254092]: 2025-11-25 16:51:22.988 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3845MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 16:51:22 compute-0 nova_compute[254092]: 2025-11-25 16:51:22.988 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:51:22 compute-0 nova_compute[254092]: 2025-11-25 16:51:22.988 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:51:22 compute-0 ceph-mon[74985]: pgmap v2007: 321 pgs: 321 active+clean; 41 MiB data, 719 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:51:22 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2314132498' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:51:22 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1226227465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:51:22 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/253117060' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:51:23 compute-0 nova_compute[254092]: 2025-11-25 16:51:23.022 254096 DEBUG oslo_concurrency.processutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 2fa32ddb-072c-480c-9df3-a207412beb72_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.250s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:51:23 compute-0 nova_compute[254092]: 2025-11-25 16:51:23.054 254096 DEBUG nova.compute.manager [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:51:23 compute-0 nova_compute[254092]: 2025-11-25 16:51:23.055 254096 DEBUG nova.virt.libvirt.driver [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:51:23 compute-0 nova_compute[254092]: 2025-11-25 16:51:23.055 254096 INFO nova.virt.libvirt.driver [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Creating image(s)
Nov 25 16:51:23 compute-0 nova_compute[254092]: 2025-11-25 16:51:23.075 254096 DEBUG nova.storage.rbd_utils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 7368c721-3e2a-4635-b2d8-5703d20438d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:51:23 compute-0 nova_compute[254092]: 2025-11-25 16:51:23.095 254096 DEBUG nova.storage.rbd_utils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 7368c721-3e2a-4635-b2d8-5703d20438d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:51:23 compute-0 nova_compute[254092]: 2025-11-25 16:51:23.114 254096 DEBUG nova.storage.rbd_utils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 7368c721-3e2a-4635-b2d8-5703d20438d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:51:23 compute-0 nova_compute[254092]: 2025-11-25 16:51:23.119 254096 DEBUG oslo_concurrency.processutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:51:23 compute-0 nova_compute[254092]: 2025-11-25 16:51:23.170 254096 DEBUG nova.policy [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3b30410358c448a693a9dc40ee6aafb0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:51:23 compute-0 nova_compute[254092]: 2025-11-25 16:51:23.218 254096 DEBUG oslo_concurrency.processutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:51:23 compute-0 nova_compute[254092]: 2025-11-25 16:51:23.220 254096 DEBUG oslo_concurrency.lockutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:51:23 compute-0 nova_compute[254092]: 2025-11-25 16:51:23.220 254096 DEBUG oslo_concurrency.lockutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:51:23 compute-0 nova_compute[254092]: 2025-11-25 16:51:23.221 254096 DEBUG oslo_concurrency.lockutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:51:23 compute-0 nova_compute[254092]: 2025-11-25 16:51:23.243 254096 DEBUG nova.storage.rbd_utils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 7368c721-3e2a-4635-b2d8-5703d20438d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:51:23 compute-0 nova_compute[254092]: 2025-11-25 16:51:23.247 254096 DEBUG oslo_concurrency.processutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 7368c721-3e2a-4635-b2d8-5703d20438d3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:51:23 compute-0 nova_compute[254092]: 2025-11-25 16:51:23.290 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 2fa32ddb-072c-480c-9df3-a207412beb72 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:51:23 compute-0 nova_compute[254092]: 2025-11-25 16:51:23.291 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 7368c721-3e2a-4635-b2d8-5703d20438d3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:51:23 compute-0 nova_compute[254092]: 2025-11-25 16:51:23.291 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 16:51:23 compute-0 nova_compute[254092]: 2025-11-25 16:51:23.291 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 16:51:23 compute-0 nova_compute[254092]: 2025-11-25 16:51:23.302 254096 DEBUG nova.storage.rbd_utils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] resizing rbd image 2fa32ddb-072c-480c-9df3-a207412beb72_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:51:23 compute-0 nova_compute[254092]: 2025-11-25 16:51:23.433 254096 DEBUG nova.objects.instance [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Lazy-loading 'migration_context' on Instance uuid 2fa32ddb-072c-480c-9df3-a207412beb72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:51:23 compute-0 nova_compute[254092]: 2025-11-25 16:51:23.437 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:51:23 compute-0 nova_compute[254092]: 2025-11-25 16:51:23.504 254096 DEBUG nova.virt.libvirt.driver [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:51:23 compute-0 nova_compute[254092]: 2025-11-25 16:51:23.505 254096 DEBUG nova.virt.libvirt.driver [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Ensure instance console log exists: /var/lib/nova/instances/2fa32ddb-072c-480c-9df3-a207412beb72/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:51:23 compute-0 nova_compute[254092]: 2025-11-25 16:51:23.505 254096 DEBUG oslo_concurrency.lockutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:51:23 compute-0 nova_compute[254092]: 2025-11-25 16:51:23.506 254096 DEBUG oslo_concurrency.lockutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:51:23 compute-0 nova_compute[254092]: 2025-11-25 16:51:23.506 254096 DEBUG oslo_concurrency.lockutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:51:23 compute-0 nova_compute[254092]: 2025-11-25 16:51:23.523 254096 DEBUG oslo_concurrency.processutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 7368c721-3e2a-4635-b2d8-5703d20438d3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.276s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:51:23 compute-0 nova_compute[254092]: 2025-11-25 16:51:23.593 254096 DEBUG nova.storage.rbd_utils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] resizing rbd image 7368c721-3e2a-4635-b2d8-5703d20438d3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:51:23 compute-0 nova_compute[254092]: 2025-11-25 16:51:23.632 254096 DEBUG nova.network.neutron [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Successfully created port: 774bb0a0-1853-424e-a6c2-1ee07d3bbf61 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:51:23 compute-0 nova_compute[254092]: 2025-11-25 16:51:23.695 254096 DEBUG nova.objects.instance [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'migration_context' on Instance uuid 7368c721-3e2a-4635-b2d8-5703d20438d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:51:23 compute-0 nova_compute[254092]: 2025-11-25 16:51:23.708 254096 DEBUG nova.virt.libvirt.driver [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:51:23 compute-0 nova_compute[254092]: 2025-11-25 16:51:23.709 254096 DEBUG nova.virt.libvirt.driver [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Ensure instance console log exists: /var/lib/nova/instances/7368c721-3e2a-4635-b2d8-5703d20438d3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:51:23 compute-0 nova_compute[254092]: 2025-11-25 16:51:23.710 254096 DEBUG oslo_concurrency.lockutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:51:23 compute-0 nova_compute[254092]: 2025-11-25 16:51:23.710 254096 DEBUG oslo_concurrency.lockutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:51:23 compute-0 nova_compute[254092]: 2025-11-25 16:51:23.710 254096 DEBUG oslo_concurrency.lockutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:51:23 compute-0 nova_compute[254092]: 2025-11-25 16:51:23.798 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2008: 321 pgs: 321 active+clean; 41 MiB data, 719 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:51:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:51:23 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3402197744' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:51:23 compute-0 nova_compute[254092]: 2025-11-25 16:51:23.903 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:51:23 compute-0 nova_compute[254092]: 2025-11-25 16:51:23.910 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:51:23 compute-0 nova_compute[254092]: 2025-11-25 16:51:23.927 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:51:23 compute-0 nova_compute[254092]: 2025-11-25 16:51:23.951 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 16:51:23 compute-0 nova_compute[254092]: 2025-11-25 16:51:23.952 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.964s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:51:24 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3402197744' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:51:24 compute-0 nova_compute[254092]: 2025-11-25 16:51:24.203 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:24 compute-0 nova_compute[254092]: 2025-11-25 16:51:24.468 254096 DEBUG nova.network.neutron [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Successfully created port: 0cdb5ab1-8463-4494-a522-360862f2152e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:51:24 compute-0 nova_compute[254092]: 2025-11-25 16:51:24.816 254096 DEBUG nova.network.neutron [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Successfully updated port: 774bb0a0-1853-424e-a6c2-1ee07d3bbf61 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:51:24 compute-0 nova_compute[254092]: 2025-11-25 16:51:24.835 254096 DEBUG oslo_concurrency.lockutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Acquiring lock "refresh_cache-2fa32ddb-072c-480c-9df3-a207412beb72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:51:24 compute-0 nova_compute[254092]: 2025-11-25 16:51:24.835 254096 DEBUG oslo_concurrency.lockutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Acquired lock "refresh_cache-2fa32ddb-072c-480c-9df3-a207412beb72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:51:24 compute-0 nova_compute[254092]: 2025-11-25 16:51:24.835 254096 DEBUG nova.network.neutron [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:51:24 compute-0 nova_compute[254092]: 2025-11-25 16:51:24.949 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:51:24 compute-0 nova_compute[254092]: 2025-11-25 16:51:24.949 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:51:24 compute-0 nova_compute[254092]: 2025-11-25 16:51:24.963 254096 DEBUG nova.compute.manager [req-2488bc59-f23a-4096-8375-40b7ee0138bc req-a22e49f4-1581-4512-a880-5a96942bcf3f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Received event network-changed-774bb0a0-1853-424e-a6c2-1ee07d3bbf61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:51:24 compute-0 nova_compute[254092]: 2025-11-25 16:51:24.965 254096 DEBUG nova.compute.manager [req-2488bc59-f23a-4096-8375-40b7ee0138bc req-a22e49f4-1581-4512-a880-5a96942bcf3f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Refreshing instance network info cache due to event network-changed-774bb0a0-1853-424e-a6c2-1ee07d3bbf61. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:51:24 compute-0 nova_compute[254092]: 2025-11-25 16:51:24.965 254096 DEBUG oslo_concurrency.lockutils [req-2488bc59-f23a-4096-8375-40b7ee0138bc req-a22e49f4-1581-4512-a880-5a96942bcf3f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-2fa32ddb-072c-480c-9df3-a207412beb72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:51:24 compute-0 nova_compute[254092]: 2025-11-25 16:51:24.970 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:51:24 compute-0 nova_compute[254092]: 2025-11-25 16:51:24.970 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 16:51:24 compute-0 nova_compute[254092]: 2025-11-25 16:51:24.970 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 16:51:24 compute-0 nova_compute[254092]: 2025-11-25 16:51:24.994 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 25 16:51:24 compute-0 nova_compute[254092]: 2025-11-25 16:51:24.994 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 25 16:51:24 compute-0 nova_compute[254092]: 2025-11-25 16:51:24.995 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 16:51:25 compute-0 ceph-mon[74985]: pgmap v2008: 321 pgs: 321 active+clean; 41 MiB data, 719 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:51:25 compute-0 nova_compute[254092]: 2025-11-25 16:51:25.025 254096 DEBUG nova.network.neutron [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:51:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2009: 321 pgs: 321 active+clean; 107 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 2.8 MiB/s wr, 28 op/s
Nov 25 16:51:25 compute-0 nova_compute[254092]: 2025-11-25 16:51:25.878 254096 DEBUG nova.network.neutron [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Updating instance_info_cache with network_info: [{"id": "774bb0a0-1853-424e-a6c2-1ee07d3bbf61", "address": "fa:16:3e:8c:33:c8", "network": {"id": "8973f8f9-6cab-4292-a0e3-cbd494454b03", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1258354502-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faf3ae8544684cac802ef962ea89ba52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap774bb0a0-18", "ovs_interfaceid": "774bb0a0-1853-424e-a6c2-1ee07d3bbf61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:51:25 compute-0 nova_compute[254092]: 2025-11-25 16:51:25.893 254096 DEBUG oslo_concurrency.lockutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Releasing lock "refresh_cache-2fa32ddb-072c-480c-9df3-a207412beb72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:51:25 compute-0 nova_compute[254092]: 2025-11-25 16:51:25.894 254096 DEBUG nova.compute.manager [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Instance network_info: |[{"id": "774bb0a0-1853-424e-a6c2-1ee07d3bbf61", "address": "fa:16:3e:8c:33:c8", "network": {"id": "8973f8f9-6cab-4292-a0e3-cbd494454b03", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1258354502-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faf3ae8544684cac802ef962ea89ba52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap774bb0a0-18", "ovs_interfaceid": "774bb0a0-1853-424e-a6c2-1ee07d3bbf61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:51:25 compute-0 nova_compute[254092]: 2025-11-25 16:51:25.894 254096 DEBUG oslo_concurrency.lockutils [req-2488bc59-f23a-4096-8375-40b7ee0138bc req-a22e49f4-1581-4512-a880-5a96942bcf3f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-2fa32ddb-072c-480c-9df3-a207412beb72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:51:25 compute-0 nova_compute[254092]: 2025-11-25 16:51:25.894 254096 DEBUG nova.network.neutron [req-2488bc59-f23a-4096-8375-40b7ee0138bc req-a22e49f4-1581-4512-a880-5a96942bcf3f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Refreshing network info cache for port 774bb0a0-1853-424e-a6c2-1ee07d3bbf61 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:51:25 compute-0 nova_compute[254092]: 2025-11-25 16:51:25.898 254096 DEBUG nova.virt.libvirt.driver [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Start _get_guest_xml network_info=[{"id": "774bb0a0-1853-424e-a6c2-1ee07d3bbf61", "address": "fa:16:3e:8c:33:c8", "network": {"id": "8973f8f9-6cab-4292-a0e3-cbd494454b03", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1258354502-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faf3ae8544684cac802ef962ea89ba52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap774bb0a0-18", "ovs_interfaceid": "774bb0a0-1853-424e-a6c2-1ee07d3bbf61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:51:25 compute-0 nova_compute[254092]: 2025-11-25 16:51:25.902 254096 WARNING nova.virt.libvirt.driver [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:51:25 compute-0 nova_compute[254092]: 2025-11-25 16:51:25.910 254096 DEBUG nova.virt.libvirt.host [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:51:25 compute-0 nova_compute[254092]: 2025-11-25 16:51:25.910 254096 DEBUG nova.virt.libvirt.host [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:51:25 compute-0 nova_compute[254092]: 2025-11-25 16:51:25.914 254096 DEBUG nova.virt.libvirt.host [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:51:25 compute-0 nova_compute[254092]: 2025-11-25 16:51:25.915 254096 DEBUG nova.virt.libvirt.host [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:51:25 compute-0 nova_compute[254092]: 2025-11-25 16:51:25.915 254096 DEBUG nova.virt.libvirt.driver [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:51:25 compute-0 nova_compute[254092]: 2025-11-25 16:51:25.916 254096 DEBUG nova.virt.hardware [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:51:25 compute-0 nova_compute[254092]: 2025-11-25 16:51:25.916 254096 DEBUG nova.virt.hardware [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:51:25 compute-0 nova_compute[254092]: 2025-11-25 16:51:25.916 254096 DEBUG nova.virt.hardware [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:51:25 compute-0 nova_compute[254092]: 2025-11-25 16:51:25.917 254096 DEBUG nova.virt.hardware [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:51:25 compute-0 nova_compute[254092]: 2025-11-25 16:51:25.917 254096 DEBUG nova.virt.hardware [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:51:25 compute-0 nova_compute[254092]: 2025-11-25 16:51:25.917 254096 DEBUG nova.virt.hardware [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:51:25 compute-0 nova_compute[254092]: 2025-11-25 16:51:25.917 254096 DEBUG nova.virt.hardware [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:51:25 compute-0 nova_compute[254092]: 2025-11-25 16:51:25.918 254096 DEBUG nova.virt.hardware [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:51:25 compute-0 nova_compute[254092]: 2025-11-25 16:51:25.918 254096 DEBUG nova.virt.hardware [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:51:25 compute-0 nova_compute[254092]: 2025-11-25 16:51:25.918 254096 DEBUG nova.virt.hardware [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:51:25 compute-0 nova_compute[254092]: 2025-11-25 16:51:25.918 254096 DEBUG nova.virt.hardware [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:51:25 compute-0 nova_compute[254092]: 2025-11-25 16:51:25.922 254096 DEBUG oslo_concurrency.processutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:51:26 compute-0 nova_compute[254092]: 2025-11-25 16:51:26.013 254096 DEBUG nova.network.neutron [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Successfully updated port: 0cdb5ab1-8463-4494-a522-360862f2152e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:51:26 compute-0 nova_compute[254092]: 2025-11-25 16:51:26.032 254096 DEBUG oslo_concurrency.lockutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "refresh_cache-7368c721-3e2a-4635-b2d8-5703d20438d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:51:26 compute-0 nova_compute[254092]: 2025-11-25 16:51:26.032 254096 DEBUG oslo_concurrency.lockutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquired lock "refresh_cache-7368c721-3e2a-4635-b2d8-5703d20438d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:51:26 compute-0 nova_compute[254092]: 2025-11-25 16:51:26.032 254096 DEBUG nova.network.neutron [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:51:26 compute-0 nova_compute[254092]: 2025-11-25 16:51:26.225 254096 DEBUG nova.network.neutron [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:51:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:51:26 compute-0 nova_compute[254092]: 2025-11-25 16:51:26.280 254096 DEBUG nova.compute.manager [req-8e3ef4ba-2ab5-4867-98a0-e836c74ea555 req-5f0fe509-3148-4f8a-8819-cbd7cdc8f1d6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Received event network-changed-0cdb5ab1-8463-4494-a522-360862f2152e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:51:26 compute-0 nova_compute[254092]: 2025-11-25 16:51:26.281 254096 DEBUG nova.compute.manager [req-8e3ef4ba-2ab5-4867-98a0-e836c74ea555 req-5f0fe509-3148-4f8a-8819-cbd7cdc8f1d6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Refreshing instance network info cache due to event network-changed-0cdb5ab1-8463-4494-a522-360862f2152e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:51:26 compute-0 nova_compute[254092]: 2025-11-25 16:51:26.281 254096 DEBUG oslo_concurrency.lockutils [req-8e3ef4ba-2ab5-4867-98a0-e836c74ea555 req-5f0fe509-3148-4f8a-8819-cbd7cdc8f1d6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-7368c721-3e2a-4635-b2d8-5703d20438d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:51:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:51:26 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2986318622' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:51:26 compute-0 nova_compute[254092]: 2025-11-25 16:51:26.360 254096 DEBUG oslo_concurrency.processutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:51:26 compute-0 nova_compute[254092]: 2025-11-25 16:51:26.381 254096 DEBUG nova.storage.rbd_utils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] rbd image 2fa32ddb-072c-480c-9df3-a207412beb72_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:51:26 compute-0 nova_compute[254092]: 2025-11-25 16:51:26.385 254096 DEBUG oslo_concurrency.processutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:51:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:51:26 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/313407795' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:51:26 compute-0 nova_compute[254092]: 2025-11-25 16:51:26.838 254096 DEBUG oslo_concurrency.processutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:51:26 compute-0 nova_compute[254092]: 2025-11-25 16:51:26.840 254096 DEBUG nova.virt.libvirt.vif [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:51:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-12066405',display_name='tempest-ServerGroupTestJSON-server-12066405',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-12066405',id=104,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='faf3ae8544684cac802ef962ea89ba52',ramdisk_id='',reservation_id='r-yx0g6j90',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-671381879',owner_user_name='tempest-ServerGroupTestJSON-671381879-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:51:22Z,user_data=None,user_id='b117cf5d3a76422aacd4d3a62d7b2f0e',uuid=2fa32ddb-072c-480c-9df3-a207412beb72,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "774bb0a0-1853-424e-a6c2-1ee07d3bbf61", "address": "fa:16:3e:8c:33:c8", "network": {"id": "8973f8f9-6cab-4292-a0e3-cbd494454b03", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1258354502-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faf3ae8544684cac802ef962ea89ba52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap774bb0a0-18", "ovs_interfaceid": "774bb0a0-1853-424e-a6c2-1ee07d3bbf61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:51:26 compute-0 nova_compute[254092]: 2025-11-25 16:51:26.840 254096 DEBUG nova.network.os_vif_util [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Converting VIF {"id": "774bb0a0-1853-424e-a6c2-1ee07d3bbf61", "address": "fa:16:3e:8c:33:c8", "network": {"id": "8973f8f9-6cab-4292-a0e3-cbd494454b03", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1258354502-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faf3ae8544684cac802ef962ea89ba52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap774bb0a0-18", "ovs_interfaceid": "774bb0a0-1853-424e-a6c2-1ee07d3bbf61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:51:26 compute-0 nova_compute[254092]: 2025-11-25 16:51:26.841 254096 DEBUG nova.network.os_vif_util [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:33:c8,bridge_name='br-int',has_traffic_filtering=True,id=774bb0a0-1853-424e-a6c2-1ee07d3bbf61,network=Network(8973f8f9-6cab-4292-a0e3-cbd494454b03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap774bb0a0-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:51:26 compute-0 nova_compute[254092]: 2025-11-25 16:51:26.842 254096 DEBUG nova.objects.instance [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2fa32ddb-072c-480c-9df3-a207412beb72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:51:26 compute-0 nova_compute[254092]: 2025-11-25 16:51:26.855 254096 DEBUG nova.virt.libvirt.driver [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:51:26 compute-0 nova_compute[254092]:   <uuid>2fa32ddb-072c-480c-9df3-a207412beb72</uuid>
Nov 25 16:51:26 compute-0 nova_compute[254092]:   <name>instance-00000068</name>
Nov 25 16:51:26 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:51:26 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:51:26 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:51:26 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:       <nova:name>tempest-ServerGroupTestJSON-server-12066405</nova:name>
Nov 25 16:51:26 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:51:25</nova:creationTime>
Nov 25 16:51:26 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:51:26 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:51:26 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:51:26 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:51:26 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:51:26 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:51:26 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:51:26 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:51:26 compute-0 nova_compute[254092]:         <nova:user uuid="b117cf5d3a76422aacd4d3a62d7b2f0e">tempest-ServerGroupTestJSON-671381879-project-member</nova:user>
Nov 25 16:51:26 compute-0 nova_compute[254092]:         <nova:project uuid="faf3ae8544684cac802ef962ea89ba52">tempest-ServerGroupTestJSON-671381879</nova:project>
Nov 25 16:51:26 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:51:26 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:51:26 compute-0 nova_compute[254092]:         <nova:port uuid="774bb0a0-1853-424e-a6c2-1ee07d3bbf61">
Nov 25 16:51:26 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:51:26 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:51:26 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:51:26 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:51:26 compute-0 nova_compute[254092]:     <system>
Nov 25 16:51:26 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:51:26 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:51:26 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:51:26 compute-0 nova_compute[254092]:       <entry name="serial">2fa32ddb-072c-480c-9df3-a207412beb72</entry>
Nov 25 16:51:26 compute-0 nova_compute[254092]:       <entry name="uuid">2fa32ddb-072c-480c-9df3-a207412beb72</entry>
Nov 25 16:51:26 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     </system>
Nov 25 16:51:26 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:51:26 compute-0 nova_compute[254092]:   <os>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:   </os>
Nov 25 16:51:26 compute-0 nova_compute[254092]:   <features>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:   </features>
Nov 25 16:51:26 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:51:26 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:51:26 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:51:26 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:51:26 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:51:26 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/2fa32ddb-072c-480c-9df3-a207412beb72_disk">
Nov 25 16:51:26 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:       </source>
Nov 25 16:51:26 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:51:26 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:51:26 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:51:26 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/2fa32ddb-072c-480c-9df3-a207412beb72_disk.config">
Nov 25 16:51:26 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:       </source>
Nov 25 16:51:26 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:51:26 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:51:26 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:51:26 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:8c:33:c8"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:       <target dev="tap774bb0a0-18"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:51:26 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/2fa32ddb-072c-480c-9df3-a207412beb72/console.log" append="off"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     <video>
Nov 25 16:51:26 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     </video>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:51:26 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:51:26 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:51:26 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:51:26 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:51:26 compute-0 nova_compute[254092]: </domain>
Nov 25 16:51:26 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:51:26 compute-0 nova_compute[254092]: 2025-11-25 16:51:26.856 254096 DEBUG nova.compute.manager [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Preparing to wait for external event network-vif-plugged-774bb0a0-1853-424e-a6c2-1ee07d3bbf61 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:51:26 compute-0 nova_compute[254092]: 2025-11-25 16:51:26.856 254096 DEBUG oslo_concurrency.lockutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Acquiring lock "2fa32ddb-072c-480c-9df3-a207412beb72-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:51:26 compute-0 nova_compute[254092]: 2025-11-25 16:51:26.856 254096 DEBUG oslo_concurrency.lockutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Lock "2fa32ddb-072c-480c-9df3-a207412beb72-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:51:26 compute-0 nova_compute[254092]: 2025-11-25 16:51:26.856 254096 DEBUG oslo_concurrency.lockutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Lock "2fa32ddb-072c-480c-9df3-a207412beb72-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:51:26 compute-0 nova_compute[254092]: 2025-11-25 16:51:26.857 254096 DEBUG nova.virt.libvirt.vif [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:51:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-12066405',display_name='tempest-ServerGroupTestJSON-server-12066405',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-12066405',id=104,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='faf3ae8544684cac802ef962ea89ba52',ramdisk_id='',reservation_id='r-yx0g6j90',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-671381879',owner_user_name='tempest-ServerGroupTestJSON-671381879-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:51:22Z,user_data=None,user_id='b117cf5d3a76422aacd4d3a62d7b2f0e',uuid=2fa32ddb-072c-480c-9df3-a207412beb72,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "774bb0a0-1853-424e-a6c2-1ee07d3bbf61", "address": "fa:16:3e:8c:33:c8", "network": {"id": "8973f8f9-6cab-4292-a0e3-cbd494454b03", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1258354502-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faf3ae8544684cac802ef962ea89ba52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap774bb0a0-18", "ovs_interfaceid": "774bb0a0-1853-424e-a6c2-1ee07d3bbf61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:51:26 compute-0 nova_compute[254092]: 2025-11-25 16:51:26.857 254096 DEBUG nova.network.os_vif_util [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Converting VIF {"id": "774bb0a0-1853-424e-a6c2-1ee07d3bbf61", "address": "fa:16:3e:8c:33:c8", "network": {"id": "8973f8f9-6cab-4292-a0e3-cbd494454b03", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1258354502-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faf3ae8544684cac802ef962ea89ba52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap774bb0a0-18", "ovs_interfaceid": "774bb0a0-1853-424e-a6c2-1ee07d3bbf61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:51:26 compute-0 nova_compute[254092]: 2025-11-25 16:51:26.858 254096 DEBUG nova.network.os_vif_util [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:33:c8,bridge_name='br-int',has_traffic_filtering=True,id=774bb0a0-1853-424e-a6c2-1ee07d3bbf61,network=Network(8973f8f9-6cab-4292-a0e3-cbd494454b03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap774bb0a0-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:51:26 compute-0 nova_compute[254092]: 2025-11-25 16:51:26.858 254096 DEBUG os_vif [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:33:c8,bridge_name='br-int',has_traffic_filtering=True,id=774bb0a0-1853-424e-a6c2-1ee07d3bbf61,network=Network(8973f8f9-6cab-4292-a0e3-cbd494454b03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap774bb0a0-18') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:51:26 compute-0 nova_compute[254092]: 2025-11-25 16:51:26.859 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:26 compute-0 nova_compute[254092]: 2025-11-25 16:51:26.859 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:51:26 compute-0 nova_compute[254092]: 2025-11-25 16:51:26.860 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:51:26 compute-0 nova_compute[254092]: 2025-11-25 16:51:26.861 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:26 compute-0 nova_compute[254092]: 2025-11-25 16:51:26.862 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap774bb0a0-18, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:51:26 compute-0 nova_compute[254092]: 2025-11-25 16:51:26.862 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap774bb0a0-18, col_values=(('external_ids', {'iface-id': '774bb0a0-1853-424e-a6c2-1ee07d3bbf61', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8c:33:c8', 'vm-uuid': '2fa32ddb-072c-480c-9df3-a207412beb72'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:51:26 compute-0 nova_compute[254092]: 2025-11-25 16:51:26.863 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:26 compute-0 NetworkManager[48891]: <info>  [1764089486.8648] manager: (tap774bb0a0-18): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/423)
Nov 25 16:51:26 compute-0 nova_compute[254092]: 2025-11-25 16:51:26.866 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:51:26 compute-0 nova_compute[254092]: 2025-11-25 16:51:26.868 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:26 compute-0 nova_compute[254092]: 2025-11-25 16:51:26.869 254096 INFO os_vif [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:33:c8,bridge_name='br-int',has_traffic_filtering=True,id=774bb0a0-1853-424e-a6c2-1ee07d3bbf61,network=Network(8973f8f9-6cab-4292-a0e3-cbd494454b03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap774bb0a0-18')
Nov 25 16:51:26 compute-0 nova_compute[254092]: 2025-11-25 16:51:26.902 254096 DEBUG nova.virt.libvirt.driver [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:51:26 compute-0 nova_compute[254092]: 2025-11-25 16:51:26.903 254096 DEBUG nova.virt.libvirt.driver [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:51:26 compute-0 nova_compute[254092]: 2025-11-25 16:51:26.903 254096 DEBUG nova.virt.libvirt.driver [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] No VIF found with MAC fa:16:3e:8c:33:c8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:51:26 compute-0 nova_compute[254092]: 2025-11-25 16:51:26.904 254096 INFO nova.virt.libvirt.driver [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Using config drive
Nov 25 16:51:26 compute-0 nova_compute[254092]: 2025-11-25 16:51:26.921 254096 DEBUG nova.storage.rbd_utils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] rbd image 2fa32ddb-072c-480c-9df3-a207412beb72_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:51:27 compute-0 ceph-mon[74985]: pgmap v2009: 321 pgs: 321 active+clean; 107 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 2.8 MiB/s wr, 28 op/s
Nov 25 16:51:27 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2986318622' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:51:27 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/313407795' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:51:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2010: 321 pgs: 321 active+clean; 134 MiB data, 760 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Nov 25 16:51:28 compute-0 nova_compute[254092]: 2025-11-25 16:51:28.172 254096 INFO nova.virt.libvirt.driver [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Creating config drive at /var/lib/nova/instances/2fa32ddb-072c-480c-9df3-a207412beb72/disk.config
Nov 25 16:51:28 compute-0 nova_compute[254092]: 2025-11-25 16:51:28.177 254096 DEBUG oslo_concurrency.processutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2fa32ddb-072c-480c-9df3-a207412beb72/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfgkecmlu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:51:28 compute-0 nova_compute[254092]: 2025-11-25 16:51:28.316 254096 DEBUG oslo_concurrency.processutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2fa32ddb-072c-480c-9df3-a207412beb72/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfgkecmlu" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:51:28 compute-0 nova_compute[254092]: 2025-11-25 16:51:28.344 254096 DEBUG nova.storage.rbd_utils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] rbd image 2fa32ddb-072c-480c-9df3-a207412beb72_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:51:28 compute-0 nova_compute[254092]: 2025-11-25 16:51:28.348 254096 DEBUG oslo_concurrency.processutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2fa32ddb-072c-480c-9df3-a207412beb72/disk.config 2fa32ddb-072c-480c-9df3-a207412beb72_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:51:28 compute-0 nova_compute[254092]: 2025-11-25 16:51:28.511 254096 DEBUG oslo_concurrency.processutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2fa32ddb-072c-480c-9df3-a207412beb72/disk.config 2fa32ddb-072c-480c-9df3-a207412beb72_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:51:28 compute-0 nova_compute[254092]: 2025-11-25 16:51:28.513 254096 INFO nova.virt.libvirt.driver [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Deleting local config drive /var/lib/nova/instances/2fa32ddb-072c-480c-9df3-a207412beb72/disk.config because it was imported into RBD.
Nov 25 16:51:28 compute-0 kernel: tap774bb0a0-18: entered promiscuous mode
Nov 25 16:51:28 compute-0 ovn_controller[153477]: 2025-11-25T16:51:28Z|01042|binding|INFO|Claiming lport 774bb0a0-1853-424e-a6c2-1ee07d3bbf61 for this chassis.
Nov 25 16:51:28 compute-0 ovn_controller[153477]: 2025-11-25T16:51:28Z|01043|binding|INFO|774bb0a0-1853-424e-a6c2-1ee07d3bbf61: Claiming fa:16:3e:8c:33:c8 10.100.0.6
Nov 25 16:51:28 compute-0 NetworkManager[48891]: <info>  [1764089488.5623] manager: (tap774bb0a0-18): new Tun device (/org/freedesktop/NetworkManager/Devices/424)
Nov 25 16:51:28 compute-0 nova_compute[254092]: 2025-11-25 16:51:28.562 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:28 compute-0 nova_compute[254092]: 2025-11-25 16:51:28.565 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:28 compute-0 nova_compute[254092]: 2025-11-25 16:51:28.569 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.576 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:33:c8 10.100.0.6'], port_security=['fa:16:3e:8c:33:c8 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2fa32ddb-072c-480c-9df3-a207412beb72', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8973f8f9-6cab-4292-a0e3-cbd494454b03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'faf3ae8544684cac802ef962ea89ba52', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2d73d11b-858b-4b1c-bcef-f58415708764', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2880425b-36a3-47bb-868a-17d2ff8251e1, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=774bb0a0-1853-424e-a6c2-1ee07d3bbf61) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.577 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 774bb0a0-1853-424e-a6c2-1ee07d3bbf61 in datapath 8973f8f9-6cab-4292-a0e3-cbd494454b03 bound to our chassis
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.578 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8973f8f9-6cab-4292-a0e3-cbd494454b03
Nov 25 16:51:28 compute-0 systemd-udevd[360750]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.591 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[97b833f0-51c4-4446-bf3f-1004c3f14a09]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.592 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8973f8f9-61 in ovnmeta-8973f8f9-6cab-4292-a0e3-cbd494454b03 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.594 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8973f8f9-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.594 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2f974fa5-182e-4c5d-93bd-85696364c86c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:28 compute-0 systemd-machined[216343]: New machine qemu-132-instance-00000068.
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.595 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[71ccc7b9-0a4f-4ba4-a38a-fce51c46f081]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:28 compute-0 NetworkManager[48891]: <info>  [1764089488.6041] device (tap774bb0a0-18): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:51:28 compute-0 NetworkManager[48891]: <info>  [1764089488.6050] device (tap774bb0a0-18): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.607 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[e31b6275-542f-4253-85f2-a3b2479bffa0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:28 compute-0 systemd[1]: Started Virtual Machine qemu-132-instance-00000068.
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.632 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[520a989d-db69-4920-b4c2-8131c133d2f8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:28 compute-0 nova_compute[254092]: 2025-11-25 16:51:28.639 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:28 compute-0 ovn_controller[153477]: 2025-11-25T16:51:28Z|01044|binding|INFO|Setting lport 774bb0a0-1853-424e-a6c2-1ee07d3bbf61 ovn-installed in OVS
Nov 25 16:51:28 compute-0 ovn_controller[153477]: 2025-11-25T16:51:28Z|01045|binding|INFO|Setting lport 774bb0a0-1853-424e-a6c2-1ee07d3bbf61 up in Southbound
Nov 25 16:51:28 compute-0 nova_compute[254092]: 2025-11-25 16:51:28.646 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.661 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[3d278178-2fc8-45c6-b5cf-57100268b820]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:28 compute-0 systemd-udevd[360753]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:51:28 compute-0 NetworkManager[48891]: <info>  [1764089488.6672] manager: (tap8973f8f9-60): new Veth device (/org/freedesktop/NetworkManager/Devices/425)
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.666 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b5ad9791-7946-43b8-8d37-01986a2be7fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.696 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d2e64d10-1430-4f60-b0a2-1166c1f1ef2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.699 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[5ef2558f-f0f1-43ce-b4e6-1cb15d588659]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:28 compute-0 NetworkManager[48891]: <info>  [1764089488.7200] device (tap8973f8f9-60): carrier: link connected
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.724 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[de4ba498-c44a-4de7-9403-abbda315537b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.742 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c38bd5c8-a2d5-4706-b4ab-057585ba8b9f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8973f8f9-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cb:4c:7d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 307], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 594631, 'reachable_time': 18626, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360782, 'error': None, 'target': 'ovnmeta-8973f8f9-6cab-4292-a0e3-cbd494454b03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.758 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[797d90b0-040d-45ca-b441-146e40b2a585]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecb:4c7d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 594631, 'tstamp': 594631}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360783, 'error': None, 'target': 'ovnmeta-8973f8f9-6cab-4292-a0e3-cbd494454b03', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.776 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[580a3660-ba22-40ef-ae1f-a3eed5e50ded]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8973f8f9-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cb:4c:7d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 307], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 594631, 'reachable_time': 18626, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 360784, 'error': None, 'target': 'ovnmeta-8973f8f9-6cab-4292-a0e3-cbd494454b03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:28 compute-0 nova_compute[254092]: 2025-11-25 16:51:28.799 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.809 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0feff510-6b57-43ae-8740-30e1aeef2ef9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.875 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[eba7513f-dce6-4276-96a0-9a82a2449952]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.876 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8973f8f9-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.877 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.877 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8973f8f9-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:51:28 compute-0 NetworkManager[48891]: <info>  [1764089488.8798] manager: (tap8973f8f9-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/426)
Nov 25 16:51:28 compute-0 kernel: tap8973f8f9-60: entered promiscuous mode
Nov 25 16:51:28 compute-0 nova_compute[254092]: 2025-11-25 16:51:28.879 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.882 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8973f8f9-60, col_values=(('external_ids', {'iface-id': 'd96728a6-6965-48bf-820a-d4fbc1efd7cb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:51:28 compute-0 nova_compute[254092]: 2025-11-25 16:51:28.883 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:28 compute-0 ovn_controller[153477]: 2025-11-25T16:51:28Z|01046|binding|INFO|Releasing lport d96728a6-6965-48bf-820a-d4fbc1efd7cb from this chassis (sb_readonly=0)
Nov 25 16:51:28 compute-0 nova_compute[254092]: 2025-11-25 16:51:28.899 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.900 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8973f8f9-6cab-4292-a0e3-cbd494454b03.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8973f8f9-6cab-4292-a0e3-cbd494454b03.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.900 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b3bf9c8d-9926-40ed-964a-8947ddd89f9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.901 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-8973f8f9-6cab-4292-a0e3-cbd494454b03
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/8973f8f9-6cab-4292-a0e3-cbd494454b03.pid.haproxy
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 8973f8f9-6cab-4292-a0e3-cbd494454b03
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:51:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:28.903 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8973f8f9-6cab-4292-a0e3-cbd494454b03', 'env', 'PROCESS_TAG=haproxy-8973f8f9-6cab-4292-a0e3-cbd494454b03', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8973f8f9-6cab-4292-a0e3-cbd494454b03.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:51:28 compute-0 nova_compute[254092]: 2025-11-25 16:51:28.993 254096 DEBUG nova.compute.manager [req-6cb3221c-6e42-489c-a22d-dadd471c96e6 req-3590c14a-f776-436c-a225-752abab11fe4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Received event network-vif-plugged-774bb0a0-1853-424e-a6c2-1ee07d3bbf61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:51:28 compute-0 nova_compute[254092]: 2025-11-25 16:51:28.993 254096 DEBUG oslo_concurrency.lockutils [req-6cb3221c-6e42-489c-a22d-dadd471c96e6 req-3590c14a-f776-436c-a225-752abab11fe4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2fa32ddb-072c-480c-9df3-a207412beb72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:51:28 compute-0 nova_compute[254092]: 2025-11-25 16:51:28.994 254096 DEBUG oslo_concurrency.lockutils [req-6cb3221c-6e42-489c-a22d-dadd471c96e6 req-3590c14a-f776-436c-a225-752abab11fe4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2fa32ddb-072c-480c-9df3-a207412beb72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:51:28 compute-0 nova_compute[254092]: 2025-11-25 16:51:28.994 254096 DEBUG oslo_concurrency.lockutils [req-6cb3221c-6e42-489c-a22d-dadd471c96e6 req-3590c14a-f776-436c-a225-752abab11fe4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2fa32ddb-072c-480c-9df3-a207412beb72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:51:28 compute-0 nova_compute[254092]: 2025-11-25 16:51:28.994 254096 DEBUG nova.compute.manager [req-6cb3221c-6e42-489c-a22d-dadd471c96e6 req-3590c14a-f776-436c-a225-752abab11fe4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Processing event network-vif-plugged-774bb0a0-1853-424e-a6c2-1ee07d3bbf61 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.048 254096 DEBUG nova.network.neutron [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Updating instance_info_cache with network_info: [{"id": "0cdb5ab1-8463-4494-a522-360862f2152e", "address": "fa:16:3e:ba:99:c7", "network": {"id": "1b70d379-8b3d-4361-b11d-cafbb578194a", "bridge": "br-int", "label": "tempest-network-smoke--1591904689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cdb5ab1-84", "ovs_interfaceid": "0cdb5ab1-8463-4494-a522-360862f2152e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.064 254096 DEBUG oslo_concurrency.lockutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Releasing lock "refresh_cache-7368c721-3e2a-4635-b2d8-5703d20438d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.065 254096 DEBUG nova.compute.manager [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Instance network_info: |[{"id": "0cdb5ab1-8463-4494-a522-360862f2152e", "address": "fa:16:3e:ba:99:c7", "network": {"id": "1b70d379-8b3d-4361-b11d-cafbb578194a", "bridge": "br-int", "label": "tempest-network-smoke--1591904689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cdb5ab1-84", "ovs_interfaceid": "0cdb5ab1-8463-4494-a522-360862f2152e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.065 254096 DEBUG oslo_concurrency.lockutils [req-8e3ef4ba-2ab5-4867-98a0-e836c74ea555 req-5f0fe509-3148-4f8a-8819-cbd7cdc8f1d6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-7368c721-3e2a-4635-b2d8-5703d20438d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.065 254096 DEBUG nova.network.neutron [req-8e3ef4ba-2ab5-4867-98a0-e836c74ea555 req-5f0fe509-3148-4f8a-8819-cbd7cdc8f1d6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Refreshing network info cache for port 0cdb5ab1-8463-4494-a522-360862f2152e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.068 254096 DEBUG nova.virt.libvirt.driver [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Start _get_guest_xml network_info=[{"id": "0cdb5ab1-8463-4494-a522-360862f2152e", "address": "fa:16:3e:ba:99:c7", "network": {"id": "1b70d379-8b3d-4361-b11d-cafbb578194a", "bridge": "br-int", "label": "tempest-network-smoke--1591904689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cdb5ab1-84", "ovs_interfaceid": "0cdb5ab1-8463-4494-a522-360862f2152e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.073 254096 WARNING nova.virt.libvirt.driver [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.083 254096 DEBUG nova.virt.libvirt.host [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.084 254096 DEBUG nova.virt.libvirt.host [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.093 254096 DEBUG nova.virt.libvirt.host [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.094 254096 DEBUG nova.virt.libvirt.host [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.094 254096 DEBUG nova.virt.libvirt.driver [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.094 254096 DEBUG nova.virt.hardware [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.095 254096 DEBUG nova.virt.hardware [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.095 254096 DEBUG nova.virt.hardware [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.096 254096 DEBUG nova.virt.hardware [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.096 254096 DEBUG nova.virt.hardware [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.097 254096 DEBUG nova.virt.hardware [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.097 254096 DEBUG nova.virt.hardware [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.098 254096 DEBUG nova.virt.hardware [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.098 254096 DEBUG nova.virt.hardware [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.098 254096 DEBUG nova.virt.hardware [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.098 254096 DEBUG nova.virt.hardware [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.102 254096 DEBUG oslo_concurrency.processutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:51:29 compute-0 ceph-mon[74985]: pgmap v2010: 321 pgs: 321 active+clean; 134 MiB data, 760 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Nov 25 16:51:29 compute-0 podman[360827]: 2025-11-25 16:51:29.292594768 +0000 UTC m=+0.048888920 container create ecb37f10c68905cc3a06cbc5af238271a866bd233e3bed8d147e6c2fcadd921b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8973f8f9-6cab-4292-a0e3-cbd494454b03, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.300 254096 DEBUG nova.network.neutron [req-2488bc59-f23a-4096-8375-40b7ee0138bc req-a22e49f4-1581-4512-a880-5a96942bcf3f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Updated VIF entry in instance network info cache for port 774bb0a0-1853-424e-a6c2-1ee07d3bbf61. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.301 254096 DEBUG nova.network.neutron [req-2488bc59-f23a-4096-8375-40b7ee0138bc req-a22e49f4-1581-4512-a880-5a96942bcf3f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Updating instance_info_cache with network_info: [{"id": "774bb0a0-1853-424e-a6c2-1ee07d3bbf61", "address": "fa:16:3e:8c:33:c8", "network": {"id": "8973f8f9-6cab-4292-a0e3-cbd494454b03", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1258354502-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faf3ae8544684cac802ef962ea89ba52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap774bb0a0-18", "ovs_interfaceid": "774bb0a0-1853-424e-a6c2-1ee07d3bbf61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.314 254096 DEBUG oslo_concurrency.lockutils [req-2488bc59-f23a-4096-8375-40b7ee0138bc req-a22e49f4-1581-4512-a880-5a96942bcf3f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-2fa32ddb-072c-480c-9df3-a207412beb72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:51:29 compute-0 systemd[1]: Started libpod-conmon-ecb37f10c68905cc3a06cbc5af238271a866bd233e3bed8d147e6c2fcadd921b.scope.
Nov 25 16:51:29 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:51:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f60641c11c202cd98160d15300f1c97d553f0de6fcab634db88f9b72731431e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:51:29 compute-0 podman[360827]: 2025-11-25 16:51:29.269051327 +0000 UTC m=+0.025345489 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:51:29 compute-0 podman[360827]: 2025-11-25 16:51:29.372219391 +0000 UTC m=+0.128513553 container init ecb37f10c68905cc3a06cbc5af238271a866bd233e3bed8d147e6c2fcadd921b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8973f8f9-6cab-4292-a0e3-cbd494454b03, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 16:51:29 compute-0 podman[360827]: 2025-11-25 16:51:29.377956017 +0000 UTC m=+0.134250179 container start ecb37f10c68905cc3a06cbc5af238271a866bd233e3bed8d147e6c2fcadd921b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8973f8f9-6cab-4292-a0e3-cbd494454b03, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 25 16:51:29 compute-0 neutron-haproxy-ovnmeta-8973f8f9-6cab-4292-a0e3-cbd494454b03[360858]: [NOTICE]   (360889) : New worker (360893) forked
Nov 25 16:51:29 compute-0 neutron-haproxy-ovnmeta-8973f8f9-6cab-4292-a0e3-cbd494454b03[360858]: [NOTICE]   (360889) : Loading success.
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.489 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089489.4886696, 2fa32ddb-072c-480c-9df3-a207412beb72 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.489 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] VM Started (Lifecycle Event)
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.491 254096 DEBUG nova.compute.manager [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.498 254096 DEBUG nova.virt.libvirt.driver [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.501 254096 INFO nova.virt.libvirt.driver [-] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Instance spawned successfully.
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.502 254096 DEBUG nova.virt.libvirt.driver [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.522 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.527 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.537 254096 DEBUG nova.virt.libvirt.driver [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.537 254096 DEBUG nova.virt.libvirt.driver [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.538 254096 DEBUG nova.virt.libvirt.driver [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.538 254096 DEBUG nova.virt.libvirt.driver [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.539 254096 DEBUG nova.virt.libvirt.driver [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.539 254096 DEBUG nova.virt.libvirt.driver [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.545 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.546 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089489.4895108, 2fa32ddb-072c-480c-9df3-a207412beb72 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.546 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] VM Paused (Lifecycle Event)
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.574 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.579 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089489.498728, 2fa32ddb-072c-480c-9df3-a207412beb72 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.579 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] VM Resumed (Lifecycle Event)
Nov 25 16:51:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:51:29 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1221567832' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.597 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.601 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.606 254096 INFO nova.compute.manager [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Took 7.01 seconds to spawn the instance on the hypervisor.
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.607 254096 DEBUG nova.compute.manager [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.616 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.617 254096 DEBUG oslo_concurrency.processutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.637 254096 DEBUG nova.storage.rbd_utils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 7368c721-3e2a-4635-b2d8-5703d20438d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.641 254096 DEBUG oslo_concurrency.processutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.683 254096 INFO nova.compute.manager [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Took 8.21 seconds to build instance.
Nov 25 16:51:29 compute-0 nova_compute[254092]: 2025-11-25 16:51:29.704 254096 DEBUG oslo_concurrency.lockutils [None req-b6313e4e-addd-4bb4-8dcb-d69fe7211c43 b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Lock "2fa32ddb-072c-480c-9df3-a207412beb72" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.307s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:51:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2011: 321 pgs: 321 active+clean; 134 MiB data, 760 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Nov 25 16:51:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:51:30 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/528829560' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:51:30 compute-0 nova_compute[254092]: 2025-11-25 16:51:30.114 254096 DEBUG oslo_concurrency.processutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:51:30 compute-0 nova_compute[254092]: 2025-11-25 16:51:30.115 254096 DEBUG nova.virt.libvirt.vif [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:51:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1076158717',display_name='tempest-TestNetworkAdvancedServerOps-server-1076158717',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1076158717',id=105,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDWxuW33AB4qPGJ0diTXxVYa0pjcMkAB+FGOJimEwNbypH8IBF9aIPs3ix1DTpwsI1XlgcWOzbZRzhhJIctw3iK4wzXVfa+Ic+4kx7lWRJ84QudBAyK5H0oeGkPARcnCCA==',key_name='tempest-TestNetworkAdvancedServerOps-383328129',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f92d76b523b84ec9b645382bdd3192c9',ramdisk_id='',reservation_id='r-y40qfw1a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1110744031',owner_user_name='tempest-TestNetworkAdvancedServerOps-1110744031-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:51:22Z,user_data=None,user_id='3b30410358c448a693a9dc40ee6aafb0',uuid=7368c721-3e2a-4635-b2d8-5703d20438d3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0cdb5ab1-8463-4494-a522-360862f2152e", "address": "fa:16:3e:ba:99:c7", "network": {"id": "1b70d379-8b3d-4361-b11d-cafbb578194a", "bridge": "br-int", "label": "tempest-network-smoke--1591904689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cdb5ab1-84", "ovs_interfaceid": "0cdb5ab1-8463-4494-a522-360862f2152e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:51:30 compute-0 nova_compute[254092]: 2025-11-25 16:51:30.116 254096 DEBUG nova.network.os_vif_util [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converting VIF {"id": "0cdb5ab1-8463-4494-a522-360862f2152e", "address": "fa:16:3e:ba:99:c7", "network": {"id": "1b70d379-8b3d-4361-b11d-cafbb578194a", "bridge": "br-int", "label": "tempest-network-smoke--1591904689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cdb5ab1-84", "ovs_interfaceid": "0cdb5ab1-8463-4494-a522-360862f2152e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:51:30 compute-0 nova_compute[254092]: 2025-11-25 16:51:30.117 254096 DEBUG nova.network.os_vif_util [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:99:c7,bridge_name='br-int',has_traffic_filtering=True,id=0cdb5ab1-8463-4494-a522-360862f2152e,network=Network(1b70d379-8b3d-4361-b11d-cafbb578194a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0cdb5ab1-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:51:30 compute-0 nova_compute[254092]: 2025-11-25 16:51:30.118 254096 DEBUG nova.objects.instance [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7368c721-3e2a-4635-b2d8-5703d20438d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:51:30 compute-0 nova_compute[254092]: 2025-11-25 16:51:30.130 254096 DEBUG nova.virt.libvirt.driver [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:51:30 compute-0 nova_compute[254092]:   <uuid>7368c721-3e2a-4635-b2d8-5703d20438d3</uuid>
Nov 25 16:51:30 compute-0 nova_compute[254092]:   <name>instance-00000069</name>
Nov 25 16:51:30 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:51:30 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:51:30 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:51:30 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1076158717</nova:name>
Nov 25 16:51:30 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:51:29</nova:creationTime>
Nov 25 16:51:30 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:51:30 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:51:30 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:51:30 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:51:30 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:51:30 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:51:30 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:51:30 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:51:30 compute-0 nova_compute[254092]:         <nova:user uuid="3b30410358c448a693a9dc40ee6aafb0">tempest-TestNetworkAdvancedServerOps-1110744031-project-member</nova:user>
Nov 25 16:51:30 compute-0 nova_compute[254092]:         <nova:project uuid="f92d76b523b84ec9b645382bdd3192c9">tempest-TestNetworkAdvancedServerOps-1110744031</nova:project>
Nov 25 16:51:30 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:51:30 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:51:30 compute-0 nova_compute[254092]:         <nova:port uuid="0cdb5ab1-8463-4494-a522-360862f2152e">
Nov 25 16:51:30 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:51:30 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:51:30 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:51:30 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:51:30 compute-0 nova_compute[254092]:     <system>
Nov 25 16:51:30 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:51:30 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:51:30 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:51:30 compute-0 nova_compute[254092]:       <entry name="serial">7368c721-3e2a-4635-b2d8-5703d20438d3</entry>
Nov 25 16:51:30 compute-0 nova_compute[254092]:       <entry name="uuid">7368c721-3e2a-4635-b2d8-5703d20438d3</entry>
Nov 25 16:51:30 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     </system>
Nov 25 16:51:30 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:51:30 compute-0 nova_compute[254092]:   <os>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:   </os>
Nov 25 16:51:30 compute-0 nova_compute[254092]:   <features>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:   </features>
Nov 25 16:51:30 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:51:30 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:51:30 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:51:30 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:51:30 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:51:30 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/7368c721-3e2a-4635-b2d8-5703d20438d3_disk">
Nov 25 16:51:30 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:       </source>
Nov 25 16:51:30 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:51:30 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:51:30 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:51:30 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/7368c721-3e2a-4635-b2d8-5703d20438d3_disk.config">
Nov 25 16:51:30 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:       </source>
Nov 25 16:51:30 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:51:30 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:51:30 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:51:30 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:ba:99:c7"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:       <target dev="tap0cdb5ab1-84"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:51:30 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/7368c721-3e2a-4635-b2d8-5703d20438d3/console.log" append="off"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     <video>
Nov 25 16:51:30 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     </video>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:51:30 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:51:30 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:51:30 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:51:30 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:51:30 compute-0 nova_compute[254092]: </domain>
Nov 25 16:51:30 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:51:30 compute-0 nova_compute[254092]: 2025-11-25 16:51:30.135 254096 DEBUG nova.compute.manager [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Preparing to wait for external event network-vif-plugged-0cdb5ab1-8463-4494-a522-360862f2152e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:51:30 compute-0 nova_compute[254092]: 2025-11-25 16:51:30.136 254096 DEBUG oslo_concurrency.lockutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:51:30 compute-0 nova_compute[254092]: 2025-11-25 16:51:30.136 254096 DEBUG oslo_concurrency.lockutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:51:30 compute-0 nova_compute[254092]: 2025-11-25 16:51:30.136 254096 DEBUG oslo_concurrency.lockutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:51:30 compute-0 nova_compute[254092]: 2025-11-25 16:51:30.137 254096 DEBUG nova.virt.libvirt.vif [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:51:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1076158717',display_name='tempest-TestNetworkAdvancedServerOps-server-1076158717',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1076158717',id=105,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDWxuW33AB4qPGJ0diTXxVYa0pjcMkAB+FGOJimEwNbypH8IBF9aIPs3ix1DTpwsI1XlgcWOzbZRzhhJIctw3iK4wzXVfa+Ic+4kx7lWRJ84QudBAyK5H0oeGkPARcnCCA==',key_name='tempest-TestNetworkAdvancedServerOps-383328129',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f92d76b523b84ec9b645382bdd3192c9',ramdisk_id='',reservation_id='r-y40qfw1a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1110744031',owner_user_name='tempest-TestNetworkAdvancedServerOps-1110744031-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:51:22Z,user_data=None,user_id='3b30410358c448a693a9dc40ee6aafb0',uuid=7368c721-3e2a-4635-b2d8-5703d20438d3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0cdb5ab1-8463-4494-a522-360862f2152e", "address": "fa:16:3e:ba:99:c7", "network": {"id": "1b70d379-8b3d-4361-b11d-cafbb578194a", "bridge": "br-int", "label": "tempest-network-smoke--1591904689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cdb5ab1-84", "ovs_interfaceid": "0cdb5ab1-8463-4494-a522-360862f2152e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:51:30 compute-0 nova_compute[254092]: 2025-11-25 16:51:30.138 254096 DEBUG nova.network.os_vif_util [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converting VIF {"id": "0cdb5ab1-8463-4494-a522-360862f2152e", "address": "fa:16:3e:ba:99:c7", "network": {"id": "1b70d379-8b3d-4361-b11d-cafbb578194a", "bridge": "br-int", "label": "tempest-network-smoke--1591904689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cdb5ab1-84", "ovs_interfaceid": "0cdb5ab1-8463-4494-a522-360862f2152e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:51:30 compute-0 nova_compute[254092]: 2025-11-25 16:51:30.139 254096 DEBUG nova.network.os_vif_util [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:99:c7,bridge_name='br-int',has_traffic_filtering=True,id=0cdb5ab1-8463-4494-a522-360862f2152e,network=Network(1b70d379-8b3d-4361-b11d-cafbb578194a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0cdb5ab1-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:51:30 compute-0 nova_compute[254092]: 2025-11-25 16:51:30.139 254096 DEBUG os_vif [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:99:c7,bridge_name='br-int',has_traffic_filtering=True,id=0cdb5ab1-8463-4494-a522-360862f2152e,network=Network(1b70d379-8b3d-4361-b11d-cafbb578194a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0cdb5ab1-84') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:51:30 compute-0 nova_compute[254092]: 2025-11-25 16:51:30.140 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:30 compute-0 nova_compute[254092]: 2025-11-25 16:51:30.140 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:51:30 compute-0 nova_compute[254092]: 2025-11-25 16:51:30.141 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:51:30 compute-0 nova_compute[254092]: 2025-11-25 16:51:30.144 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:30 compute-0 nova_compute[254092]: 2025-11-25 16:51:30.145 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0cdb5ab1-84, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:51:30 compute-0 nova_compute[254092]: 2025-11-25 16:51:30.145 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0cdb5ab1-84, col_values=(('external_ids', {'iface-id': '0cdb5ab1-8463-4494-a522-360862f2152e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ba:99:c7', 'vm-uuid': '7368c721-3e2a-4635-b2d8-5703d20438d3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:51:30 compute-0 nova_compute[254092]: 2025-11-25 16:51:30.146 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:30 compute-0 NetworkManager[48891]: <info>  [1764089490.1479] manager: (tap0cdb5ab1-84): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/427)
Nov 25 16:51:30 compute-0 nova_compute[254092]: 2025-11-25 16:51:30.149 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:51:30 compute-0 nova_compute[254092]: 2025-11-25 16:51:30.155 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:30 compute-0 nova_compute[254092]: 2025-11-25 16:51:30.156 254096 INFO os_vif [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:99:c7,bridge_name='br-int',has_traffic_filtering=True,id=0cdb5ab1-8463-4494-a522-360862f2152e,network=Network(1b70d379-8b3d-4361-b11d-cafbb578194a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0cdb5ab1-84')
Nov 25 16:51:30 compute-0 nova_compute[254092]: 2025-11-25 16:51:30.200 254096 DEBUG nova.virt.libvirt.driver [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:51:30 compute-0 nova_compute[254092]: 2025-11-25 16:51:30.201 254096 DEBUG nova.virt.libvirt.driver [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:51:30 compute-0 nova_compute[254092]: 2025-11-25 16:51:30.201 254096 DEBUG nova.virt.libvirt.driver [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] No VIF found with MAC fa:16:3e:ba:99:c7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:51:30 compute-0 nova_compute[254092]: 2025-11-25 16:51:30.202 254096 INFO nova.virt.libvirt.driver [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Using config drive
Nov 25 16:51:30 compute-0 nova_compute[254092]: 2025-11-25 16:51:30.224 254096 DEBUG nova.storage.rbd_utils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 7368c721-3e2a-4635-b2d8-5703d20438d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:51:30 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1221567832' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:51:30 compute-0 ceph-mon[74985]: pgmap v2011: 321 pgs: 321 active+clean; 134 MiB data, 760 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Nov 25 16:51:30 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/528829560' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:51:30 compute-0 nova_compute[254092]: 2025-11-25 16:51:30.685 254096 INFO nova.virt.libvirt.driver [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Creating config drive at /var/lib/nova/instances/7368c721-3e2a-4635-b2d8-5703d20438d3/disk.config
Nov 25 16:51:30 compute-0 nova_compute[254092]: 2025-11-25 16:51:30.690 254096 DEBUG oslo_concurrency.processutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7368c721-3e2a-4635-b2d8-5703d20438d3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4raipwxe execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:51:30 compute-0 nova_compute[254092]: 2025-11-25 16:51:30.831 254096 DEBUG oslo_concurrency.processutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7368c721-3e2a-4635-b2d8-5703d20438d3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4raipwxe" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:51:30 compute-0 nova_compute[254092]: 2025-11-25 16:51:30.858 254096 DEBUG nova.storage.rbd_utils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image 7368c721-3e2a-4635-b2d8-5703d20438d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:51:30 compute-0 nova_compute[254092]: 2025-11-25 16:51:30.862 254096 DEBUG oslo_concurrency.processutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7368c721-3e2a-4635-b2d8-5703d20438d3/disk.config 7368c721-3e2a-4635-b2d8-5703d20438d3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:51:31 compute-0 nova_compute[254092]: 2025-11-25 16:51:31.012 254096 DEBUG oslo_concurrency.processutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7368c721-3e2a-4635-b2d8-5703d20438d3/disk.config 7368c721-3e2a-4635-b2d8-5703d20438d3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:51:31 compute-0 nova_compute[254092]: 2025-11-25 16:51:31.014 254096 INFO nova.virt.libvirt.driver [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Deleting local config drive /var/lib/nova/instances/7368c721-3e2a-4635-b2d8-5703d20438d3/disk.config because it was imported into RBD.
Nov 25 16:51:31 compute-0 nova_compute[254092]: 2025-11-25 16:51:31.050 254096 DEBUG nova.network.neutron [req-8e3ef4ba-2ab5-4867-98a0-e836c74ea555 req-5f0fe509-3148-4f8a-8819-cbd7cdc8f1d6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Updated VIF entry in instance network info cache for port 0cdb5ab1-8463-4494-a522-360862f2152e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:51:31 compute-0 nova_compute[254092]: 2025-11-25 16:51:31.051 254096 DEBUG nova.network.neutron [req-8e3ef4ba-2ab5-4867-98a0-e836c74ea555 req-5f0fe509-3148-4f8a-8819-cbd7cdc8f1d6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Updating instance_info_cache with network_info: [{"id": "0cdb5ab1-8463-4494-a522-360862f2152e", "address": "fa:16:3e:ba:99:c7", "network": {"id": "1b70d379-8b3d-4361-b11d-cafbb578194a", "bridge": "br-int", "label": "tempest-network-smoke--1591904689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cdb5ab1-84", "ovs_interfaceid": "0cdb5ab1-8463-4494-a522-360862f2152e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:51:31 compute-0 nova_compute[254092]: 2025-11-25 16:51:31.063 254096 DEBUG oslo_concurrency.lockutils [req-8e3ef4ba-2ab5-4867-98a0-e836c74ea555 req-5f0fe509-3148-4f8a-8819-cbd7cdc8f1d6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-7368c721-3e2a-4635-b2d8-5703d20438d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:51:31 compute-0 NetworkManager[48891]: <info>  [1764089491.0668] manager: (tap0cdb5ab1-84): new Tun device (/org/freedesktop/NetworkManager/Devices/428)
Nov 25 16:51:31 compute-0 kernel: tap0cdb5ab1-84: entered promiscuous mode
Nov 25 16:51:31 compute-0 systemd-udevd[360779]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:51:31 compute-0 nova_compute[254092]: 2025-11-25 16:51:31.072 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:31 compute-0 ovn_controller[153477]: 2025-11-25T16:51:31Z|01047|binding|INFO|Claiming lport 0cdb5ab1-8463-4494-a522-360862f2152e for this chassis.
Nov 25 16:51:31 compute-0 ovn_controller[153477]: 2025-11-25T16:51:31Z|01048|binding|INFO|0cdb5ab1-8463-4494-a522-360862f2152e: Claiming fa:16:3e:ba:99:c7 10.100.0.4
Nov 25 16:51:31 compute-0 nova_compute[254092]: 2025-11-25 16:51:31.081 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:31 compute-0 NetworkManager[48891]: <info>  [1764089491.0918] device (tap0cdb5ab1-84): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:51:31 compute-0 NetworkManager[48891]: <info>  [1764089491.0928] device (tap0cdb5ab1-84): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.096 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:99:c7 10.100.0.4'], port_security=['fa:16:3e:ba:99:c7 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '7368c721-3e2a-4635-b2d8-5703d20438d3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1b70d379-8b3d-4361-b11d-cafbb578194a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c7519bf8-93c3-4492-8176-0f77cad9e544', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=86dc4e3d-c4e3-499d-a349-cb0874096a1e, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=0cdb5ab1-8463-4494-a522-360862f2152e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.098 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 0cdb5ab1-8463-4494-a522-360862f2152e in datapath 1b70d379-8b3d-4361-b11d-cafbb578194a bound to our chassis
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.099 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1b70d379-8b3d-4361-b11d-cafbb578194a
Nov 25 16:51:31 compute-0 systemd-machined[216343]: New machine qemu-133-instance-00000069.
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.113 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cec25b83-3dd1-44b7-845b-78aa422ec6ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.114 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1b70d379-81 in ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.116 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1b70d379-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.116 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d3540943-64e0-4f61-980a-009fbd8ffaf3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.117 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f2dc3001-9d90-43b6-aa54-fbc6ff2af064]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.129 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[84b5fb9f-a7ae-47f1-b7fd-688aec412474]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:31 compute-0 systemd[1]: Started Virtual Machine qemu-133-instance-00000069.
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.156 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[dcf83075-18c4-4d35-b250-aed32dc0f69a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:31 compute-0 ovn_controller[153477]: 2025-11-25T16:51:31Z|01049|binding|INFO|Setting lport 0cdb5ab1-8463-4494-a522-360862f2152e ovn-installed in OVS
Nov 25 16:51:31 compute-0 ovn_controller[153477]: 2025-11-25T16:51:31Z|01050|binding|INFO|Setting lport 0cdb5ab1-8463-4494-a522-360862f2152e up in Southbound
Nov 25 16:51:31 compute-0 nova_compute[254092]: 2025-11-25 16:51:31.171 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.187 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9a42014d-1708-41c2-ad04-cdb76b479887]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.195 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b5492ec7-57a8-4e60-9c01-88fc50936f5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:31 compute-0 NetworkManager[48891]: <info>  [1764089491.1978] manager: (tap1b70d379-80): new Veth device (/org/freedesktop/NetworkManager/Devices/429)
Nov 25 16:51:31 compute-0 nova_compute[254092]: 2025-11-25 16:51:31.203 254096 DEBUG nova.compute.manager [req-619979fe-602e-4d69-bee1-9503d7e5af44 req-d6b08f97-fd23-49ee-9194-5c8fd6909118 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Received event network-vif-plugged-774bb0a0-1853-424e-a6c2-1ee07d3bbf61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:51:31 compute-0 nova_compute[254092]: 2025-11-25 16:51:31.204 254096 DEBUG oslo_concurrency.lockutils [req-619979fe-602e-4d69-bee1-9503d7e5af44 req-d6b08f97-fd23-49ee-9194-5c8fd6909118 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2fa32ddb-072c-480c-9df3-a207412beb72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:51:31 compute-0 nova_compute[254092]: 2025-11-25 16:51:31.204 254096 DEBUG oslo_concurrency.lockutils [req-619979fe-602e-4d69-bee1-9503d7e5af44 req-d6b08f97-fd23-49ee-9194-5c8fd6909118 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2fa32ddb-072c-480c-9df3-a207412beb72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:51:31 compute-0 nova_compute[254092]: 2025-11-25 16:51:31.204 254096 DEBUG oslo_concurrency.lockutils [req-619979fe-602e-4d69-bee1-9503d7e5af44 req-d6b08f97-fd23-49ee-9194-5c8fd6909118 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2fa32ddb-072c-480c-9df3-a207412beb72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:51:31 compute-0 nova_compute[254092]: 2025-11-25 16:51:31.204 254096 DEBUG nova.compute.manager [req-619979fe-602e-4d69-bee1-9503d7e5af44 req-d6b08f97-fd23-49ee-9194-5c8fd6909118 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] No waiting events found dispatching network-vif-plugged-774bb0a0-1853-424e-a6c2-1ee07d3bbf61 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:51:31 compute-0 nova_compute[254092]: 2025-11-25 16:51:31.205 254096 WARNING nova.compute.manager [req-619979fe-602e-4d69-bee1-9503d7e5af44 req-d6b08f97-fd23-49ee-9194-5c8fd6909118 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Received unexpected event network-vif-plugged-774bb0a0-1853-424e-a6c2-1ee07d3bbf61 for instance with vm_state active and task_state None.
Nov 25 16:51:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.231 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[cee49d73-21e5-4a09-b4af-17174dd666a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.233 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[bd645849-0b27-4c37-9569-0752a162ff34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:31 compute-0 NetworkManager[48891]: <info>  [1764089491.2607] device (tap1b70d379-80): carrier: link connected
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.266 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e41d86c5-3f3b-4553-9ada-37da916a268c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.281 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e99cdc06-d932-4b14-92bc-74be3d3ea128]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1b70d379-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:39:6a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 309], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 594885, 'reachable_time': 21348, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361040, 'error': None, 'target': 'ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.298 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[81970e31-5b30-4049-aded-45993af89708]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe01:396a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 594885, 'tstamp': 594885}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361041, 'error': None, 'target': 'ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.314 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[73b5b64e-575f-48c8-b1b4-a13dcb689ead]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1b70d379-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:39:6a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 309], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 594885, 'reachable_time': 21348, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 361042, 'error': None, 'target': 'ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.345 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a941f801-3b1d-4eed-9aab-d419ebd296cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.403 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9b8f086e-5690-4521-805c-ffb7ed967b43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.405 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1b70d379-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.405 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.405 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1b70d379-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:51:31 compute-0 kernel: tap1b70d379-80: entered promiscuous mode
Nov 25 16:51:31 compute-0 NetworkManager[48891]: <info>  [1764089491.4077] manager: (tap1b70d379-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/430)
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.409 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1b70d379-80, col_values=(('external_ids', {'iface-id': '43f83cca-eded-4f81-a561-02d17bd21a2b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:51:31 compute-0 ovn_controller[153477]: 2025-11-25T16:51:31Z|01051|binding|INFO|Releasing lport 43f83cca-eded-4f81-a561-02d17bd21a2b from this chassis (sb_readonly=0)
Nov 25 16:51:31 compute-0 nova_compute[254092]: 2025-11-25 16:51:31.407 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.425 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1b70d379-8b3d-4361-b11d-cafbb578194a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1b70d379-8b3d-4361-b11d-cafbb578194a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:51:31 compute-0 nova_compute[254092]: 2025-11-25 16:51:31.424 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.425 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[44a20f46-1d65-43c3-a701-419c9efe9583]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.426 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-1b70d379-8b3d-4361-b11d-cafbb578194a
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/1b70d379-8b3d-4361-b11d-cafbb578194a.pid.haproxy
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 1b70d379-8b3d-4361-b11d-cafbb578194a
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.426 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a', 'env', 'PROCESS_TAG=haproxy-1b70d379-8b3d-4361-b11d-cafbb578194a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1b70d379-8b3d-4361-b11d-cafbb578194a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:51:31 compute-0 podman[361079]: 2025-11-25 16:51:31.764327517 +0000 UTC m=+0.048961121 container create a355d4c2c30505019533570d5d53ee7daf45cc65000df70d3f2a0aea7df7d12e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 25 16:51:31 compute-0 systemd[1]: Started libpod-conmon-a355d4c2c30505019533570d5d53ee7daf45cc65000df70d3f2a0aea7df7d12e.scope.
Nov 25 16:51:31 compute-0 podman[361079]: 2025-11-25 16:51:31.738873625 +0000 UTC m=+0.023507259 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:51:31 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:51:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2012: 321 pgs: 321 active+clean; 134 MiB data, 760 MiB used, 59 GiB / 60 GiB avail; 310 KiB/s rd, 3.6 MiB/s wr, 75 op/s
Nov 25 16:51:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d8c25a22fe3e9484bc39161e604e238df356a2c17ccd6f67b57960b5feb4d5f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:51:31 compute-0 nova_compute[254092]: 2025-11-25 16:51:31.874 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089491.8740003, 7368c721-3e2a-4635-b2d8-5703d20438d3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:51:31 compute-0 nova_compute[254092]: 2025-11-25 16:51:31.875 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] VM Started (Lifecycle Event)
Nov 25 16:51:31 compute-0 podman[361079]: 2025-11-25 16:51:31.877958435 +0000 UTC m=+0.162592059 container init a355d4c2c30505019533570d5d53ee7daf45cc65000df70d3f2a0aea7df7d12e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 16:51:31 compute-0 podman[361079]: 2025-11-25 16:51:31.884264497 +0000 UTC m=+0.168898101 container start a355d4c2c30505019533570d5d53ee7daf45cc65000df70d3f2a0aea7df7d12e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 16:51:31 compute-0 neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a[361127]: [NOTICE]   (361132) : New worker (361134) forked
Nov 25 16:51:31 compute-0 neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a[361127]: [NOTICE]   (361132) : Loading success.
Nov 25 16:51:31 compute-0 nova_compute[254092]: 2025-11-25 16:51:31.905 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:51:31 compute-0 nova_compute[254092]: 2025-11-25 16:51:31.907 254096 DEBUG oslo_concurrency.lockutils [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Acquiring lock "2fa32ddb-072c-480c-9df3-a207412beb72" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:51:31 compute-0 nova_compute[254092]: 2025-11-25 16:51:31.907 254096 DEBUG oslo_concurrency.lockutils [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Lock "2fa32ddb-072c-480c-9df3-a207412beb72" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:51:31 compute-0 nova_compute[254092]: 2025-11-25 16:51:31.908 254096 DEBUG oslo_concurrency.lockutils [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Acquiring lock "2fa32ddb-072c-480c-9df3-a207412beb72-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:51:31 compute-0 nova_compute[254092]: 2025-11-25 16:51:31.908 254096 DEBUG oslo_concurrency.lockutils [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Lock "2fa32ddb-072c-480c-9df3-a207412beb72-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:51:31 compute-0 nova_compute[254092]: 2025-11-25 16:51:31.908 254096 DEBUG oslo_concurrency.lockutils [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Lock "2fa32ddb-072c-480c-9df3-a207412beb72-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:51:31 compute-0 nova_compute[254092]: 2025-11-25 16:51:31.910 254096 INFO nova.compute.manager [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Terminating instance
Nov 25 16:51:31 compute-0 nova_compute[254092]: 2025-11-25 16:51:31.911 254096 DEBUG nova.compute.manager [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:51:31 compute-0 nova_compute[254092]: 2025-11-25 16:51:31.915 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089491.8741636, 7368c721-3e2a-4635-b2d8-5703d20438d3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:51:31 compute-0 nova_compute[254092]: 2025-11-25 16:51:31.916 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] VM Paused (Lifecycle Event)
Nov 25 16:51:31 compute-0 nova_compute[254092]: 2025-11-25 16:51:31.932 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:51:31 compute-0 nova_compute[254092]: 2025-11-25 16:51:31.937 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:51:31 compute-0 kernel: tap774bb0a0-18 (unregistering): left promiscuous mode
Nov 25 16:51:31 compute-0 NetworkManager[48891]: <info>  [1764089491.9486] device (tap774bb0a0-18): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:51:31 compute-0 ovn_controller[153477]: 2025-11-25T16:51:31Z|01052|binding|INFO|Releasing lport 774bb0a0-1853-424e-a6c2-1ee07d3bbf61 from this chassis (sb_readonly=0)
Nov 25 16:51:31 compute-0 nova_compute[254092]: 2025-11-25 16:51:31.955 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:51:31 compute-0 nova_compute[254092]: 2025-11-25 16:51:31.955 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:31 compute-0 ovn_controller[153477]: 2025-11-25T16:51:31Z|01053|binding|INFO|Setting lport 774bb0a0-1853-424e-a6c2-1ee07d3bbf61 down in Southbound
Nov 25 16:51:31 compute-0 ovn_controller[153477]: 2025-11-25T16:51:31Z|01054|binding|INFO|Removing iface tap774bb0a0-18 ovn-installed in OVS
Nov 25 16:51:31 compute-0 nova_compute[254092]: 2025-11-25 16:51:31.957 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.962 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:33:c8 10.100.0.6'], port_security=['fa:16:3e:8c:33:c8 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2fa32ddb-072c-480c-9df3-a207412beb72', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8973f8f9-6cab-4292-a0e3-cbd494454b03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'faf3ae8544684cac802ef962ea89ba52', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2d73d11b-858b-4b1c-bcef-f58415708764', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2880425b-36a3-47bb-868a-17d2ff8251e1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=774bb0a0-1853-424e-a6c2-1ee07d3bbf61) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.964 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 774bb0a0-1853-424e-a6c2-1ee07d3bbf61 in datapath 8973f8f9-6cab-4292-a0e3-cbd494454b03 unbound from our chassis
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.965 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8973f8f9-6cab-4292-a0e3-cbd494454b03, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.966 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5f83e106-ca9c-4f80-ba4a-128b2b0f2189]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:31.966 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8973f8f9-6cab-4292-a0e3-cbd494454b03 namespace which is not needed anymore
Nov 25 16:51:31 compute-0 nova_compute[254092]: 2025-11-25 16:51:31.972 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:31 compute-0 systemd[1]: machine-qemu\x2d132\x2dinstance\x2d00000068.scope: Deactivated successfully.
Nov 25 16:51:31 compute-0 systemd[1]: machine-qemu\x2d132\x2dinstance\x2d00000068.scope: Consumed 3.361s CPU time.
Nov 25 16:51:32 compute-0 systemd-machined[216343]: Machine qemu-132-instance-00000068 terminated.
Nov 25 16:51:32 compute-0 neutron-haproxy-ovnmeta-8973f8f9-6cab-4292-a0e3-cbd494454b03[360858]: [NOTICE]   (360889) : haproxy version is 2.8.14-c23fe91
Nov 25 16:51:32 compute-0 neutron-haproxy-ovnmeta-8973f8f9-6cab-4292-a0e3-cbd494454b03[360858]: [NOTICE]   (360889) : path to executable is /usr/sbin/haproxy
Nov 25 16:51:32 compute-0 neutron-haproxy-ovnmeta-8973f8f9-6cab-4292-a0e3-cbd494454b03[360858]: [WARNING]  (360889) : Exiting Master process...
Nov 25 16:51:32 compute-0 neutron-haproxy-ovnmeta-8973f8f9-6cab-4292-a0e3-cbd494454b03[360858]: [ALERT]    (360889) : Current worker (360893) exited with code 143 (Terminated)
Nov 25 16:51:32 compute-0 neutron-haproxy-ovnmeta-8973f8f9-6cab-4292-a0e3-cbd494454b03[360858]: [WARNING]  (360889) : All workers exited. Exiting... (0)
Nov 25 16:51:32 compute-0 systemd[1]: libpod-ecb37f10c68905cc3a06cbc5af238271a866bd233e3bed8d147e6c2fcadd921b.scope: Deactivated successfully.
Nov 25 16:51:32 compute-0 podman[361164]: 2025-11-25 16:51:32.087469668 +0000 UTC m=+0.038994630 container died ecb37f10c68905cc3a06cbc5af238271a866bd233e3bed8d147e6c2fcadd921b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8973f8f9-6cab-4292-a0e3-cbd494454b03, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 16:51:32 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ecb37f10c68905cc3a06cbc5af238271a866bd233e3bed8d147e6c2fcadd921b-userdata-shm.mount: Deactivated successfully.
Nov 25 16:51:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f60641c11c202cd98160d15300f1c97d553f0de6fcab634db88f9b72731431e-merged.mount: Deactivated successfully.
Nov 25 16:51:32 compute-0 podman[361164]: 2025-11-25 16:51:32.120014953 +0000 UTC m=+0.071539915 container cleanup ecb37f10c68905cc3a06cbc5af238271a866bd233e3bed8d147e6c2fcadd921b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8973f8f9-6cab-4292-a0e3-cbd494454b03, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 16:51:32 compute-0 systemd[1]: libpod-conmon-ecb37f10c68905cc3a06cbc5af238271a866bd233e3bed8d147e6c2fcadd921b.scope: Deactivated successfully.
Nov 25 16:51:32 compute-0 nova_compute[254092]: 2025-11-25 16:51:32.145 254096 INFO nova.virt.libvirt.driver [-] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Instance destroyed successfully.
Nov 25 16:51:32 compute-0 nova_compute[254092]: 2025-11-25 16:51:32.146 254096 DEBUG nova.objects.instance [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Lazy-loading 'resources' on Instance uuid 2fa32ddb-072c-480c-9df3-a207412beb72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:51:32 compute-0 nova_compute[254092]: 2025-11-25 16:51:32.162 254096 DEBUG nova.virt.libvirt.vif [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:51:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-12066405',display_name='tempest-ServerGroupTestJSON-server-12066405',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-12066405',id=104,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:51:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='faf3ae8544684cac802ef962ea89ba52',ramdisk_id='',reservation_id='r-yx0g6j90',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerGroupTestJSON-671381879',owner_user_name='tempest-ServerGroupTestJSON-671381879-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:51:29Z,user_data=None,user_id='b117cf5d3a76422aacd4d3a62d7b2f0e',uuid=2fa32ddb-072c-480c-9df3-a207412beb72,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "774bb0a0-1853-424e-a6c2-1ee07d3bbf61", "address": "fa:16:3e:8c:33:c8", "network": {"id": "8973f8f9-6cab-4292-a0e3-cbd494454b03", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1258354502-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faf3ae8544684cac802ef962ea89ba52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap774bb0a0-18", "ovs_interfaceid": "774bb0a0-1853-424e-a6c2-1ee07d3bbf61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:51:32 compute-0 nova_compute[254092]: 2025-11-25 16:51:32.163 254096 DEBUG nova.network.os_vif_util [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Converting VIF {"id": "774bb0a0-1853-424e-a6c2-1ee07d3bbf61", "address": "fa:16:3e:8c:33:c8", "network": {"id": "8973f8f9-6cab-4292-a0e3-cbd494454b03", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1258354502-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "faf3ae8544684cac802ef962ea89ba52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap774bb0a0-18", "ovs_interfaceid": "774bb0a0-1853-424e-a6c2-1ee07d3bbf61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:51:32 compute-0 nova_compute[254092]: 2025-11-25 16:51:32.164 254096 DEBUG nova.network.os_vif_util [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:33:c8,bridge_name='br-int',has_traffic_filtering=True,id=774bb0a0-1853-424e-a6c2-1ee07d3bbf61,network=Network(8973f8f9-6cab-4292-a0e3-cbd494454b03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap774bb0a0-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:51:32 compute-0 nova_compute[254092]: 2025-11-25 16:51:32.164 254096 DEBUG os_vif [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:33:c8,bridge_name='br-int',has_traffic_filtering=True,id=774bb0a0-1853-424e-a6c2-1ee07d3bbf61,network=Network(8973f8f9-6cab-4292-a0e3-cbd494454b03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap774bb0a0-18') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:51:32 compute-0 nova_compute[254092]: 2025-11-25 16:51:32.166 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:32 compute-0 nova_compute[254092]: 2025-11-25 16:51:32.166 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap774bb0a0-18, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:51:32 compute-0 nova_compute[254092]: 2025-11-25 16:51:32.167 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:32 compute-0 nova_compute[254092]: 2025-11-25 16:51:32.168 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:32 compute-0 nova_compute[254092]: 2025-11-25 16:51:32.171 254096 INFO os_vif [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:33:c8,bridge_name='br-int',has_traffic_filtering=True,id=774bb0a0-1853-424e-a6c2-1ee07d3bbf61,network=Network(8973f8f9-6cab-4292-a0e3-cbd494454b03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap774bb0a0-18')
Nov 25 16:51:32 compute-0 podman[361195]: 2025-11-25 16:51:32.183724035 +0000 UTC m=+0.041026307 container remove ecb37f10c68905cc3a06cbc5af238271a866bd233e3bed8d147e6c2fcadd921b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8973f8f9-6cab-4292-a0e3-cbd494454b03, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:51:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:32.189 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0e5ed6cf-f39d-4d03-b4c9-7749df260c57]: (4, ('Tue Nov 25 04:51:32 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8973f8f9-6cab-4292-a0e3-cbd494454b03 (ecb37f10c68905cc3a06cbc5af238271a866bd233e3bed8d147e6c2fcadd921b)\necb37f10c68905cc3a06cbc5af238271a866bd233e3bed8d147e6c2fcadd921b\nTue Nov 25 04:51:32 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8973f8f9-6cab-4292-a0e3-cbd494454b03 (ecb37f10c68905cc3a06cbc5af238271a866bd233e3bed8d147e6c2fcadd921b)\necb37f10c68905cc3a06cbc5af238271a866bd233e3bed8d147e6c2fcadd921b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:32.191 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e61989a3-cfc0-4b3c-9ff4-74851aae98bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:32.192 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8973f8f9-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:51:32 compute-0 kernel: tap8973f8f9-60: left promiscuous mode
Nov 25 16:51:32 compute-0 nova_compute[254092]: 2025-11-25 16:51:32.196 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:32 compute-0 nova_compute[254092]: 2025-11-25 16:51:32.215 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:32.217 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[66cbf923-5f45-4a57-9de7-351f3c6beccc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:32.230 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[199513eb-8e5a-4ebf-affd-aaa5f3a86f11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:32.231 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d88ea609-61fb-43af-bb73-6e09ed6a0e8d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:32.245 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0dd80050-d448-4ebb-8d31-f4ce206365ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 594624, 'reachable_time': 30721, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361237, 'error': None, 'target': 'ovnmeta-8973f8f9-6cab-4292-a0e3-cbd494454b03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:32.247 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8973f8f9-6cab-4292-a0e3-cbd494454b03 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:51:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:32.247 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[09730bce-a073-40a2-ae88-496a82c27340]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:32 compute-0 systemd[1]: run-netns-ovnmeta\x2d8973f8f9\x2d6cab\x2d4292\x2da0e3\x2dcbd494454b03.mount: Deactivated successfully.
Nov 25 16:51:32 compute-0 nova_compute[254092]: 2025-11-25 16:51:32.484 254096 INFO nova.virt.libvirt.driver [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Deleting instance files /var/lib/nova/instances/2fa32ddb-072c-480c-9df3-a207412beb72_del
Nov 25 16:51:32 compute-0 nova_compute[254092]: 2025-11-25 16:51:32.486 254096 INFO nova.virt.libvirt.driver [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Deletion of /var/lib/nova/instances/2fa32ddb-072c-480c-9df3-a207412beb72_del complete
Nov 25 16:51:32 compute-0 nova_compute[254092]: 2025-11-25 16:51:32.552 254096 INFO nova.compute.manager [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Took 0.64 seconds to destroy the instance on the hypervisor.
Nov 25 16:51:32 compute-0 nova_compute[254092]: 2025-11-25 16:51:32.554 254096 DEBUG oslo.service.loopingcall [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:51:32 compute-0 nova_compute[254092]: 2025-11-25 16:51:32.555 254096 DEBUG nova.compute.manager [-] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:51:32 compute-0 nova_compute[254092]: 2025-11-25 16:51:32.556 254096 DEBUG nova.network.neutron [-] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:51:32 compute-0 ceph-mon[74985]: pgmap v2012: 321 pgs: 321 active+clean; 134 MiB data, 760 MiB used, 59 GiB / 60 GiB avail; 310 KiB/s rd, 3.6 MiB/s wr, 75 op/s
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.109 254096 DEBUG nova.network.neutron [-] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.127 254096 INFO nova.compute.manager [-] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Took 0.57 seconds to deallocate network for instance.
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.177 254096 DEBUG oslo_concurrency.lockutils [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.177 254096 DEBUG oslo_concurrency.lockutils [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.236 254096 DEBUG oslo_concurrency.processutils [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.383 254096 DEBUG nova.compute.manager [req-ef142dbf-6c4d-44f1-904a-36043db68779 req-23f6a108-a80d-4bc5-9df7-fc46d7aa7fc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Received event network-vif-plugged-0cdb5ab1-8463-4494-a522-360862f2152e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.384 254096 DEBUG oslo_concurrency.lockutils [req-ef142dbf-6c4d-44f1-904a-36043db68779 req-23f6a108-a80d-4bc5-9df7-fc46d7aa7fc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.384 254096 DEBUG oslo_concurrency.lockutils [req-ef142dbf-6c4d-44f1-904a-36043db68779 req-23f6a108-a80d-4bc5-9df7-fc46d7aa7fc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.385 254096 DEBUG oslo_concurrency.lockutils [req-ef142dbf-6c4d-44f1-904a-36043db68779 req-23f6a108-a80d-4bc5-9df7-fc46d7aa7fc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.385 254096 DEBUG nova.compute.manager [req-ef142dbf-6c4d-44f1-904a-36043db68779 req-23f6a108-a80d-4bc5-9df7-fc46d7aa7fc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Processing event network-vif-plugged-0cdb5ab1-8463-4494-a522-360862f2152e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.385 254096 DEBUG nova.compute.manager [req-ef142dbf-6c4d-44f1-904a-36043db68779 req-23f6a108-a80d-4bc5-9df7-fc46d7aa7fc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Received event network-vif-plugged-0cdb5ab1-8463-4494-a522-360862f2152e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.386 254096 DEBUG oslo_concurrency.lockutils [req-ef142dbf-6c4d-44f1-904a-36043db68779 req-23f6a108-a80d-4bc5-9df7-fc46d7aa7fc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.386 254096 DEBUG oslo_concurrency.lockutils [req-ef142dbf-6c4d-44f1-904a-36043db68779 req-23f6a108-a80d-4bc5-9df7-fc46d7aa7fc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.386 254096 DEBUG oslo_concurrency.lockutils [req-ef142dbf-6c4d-44f1-904a-36043db68779 req-23f6a108-a80d-4bc5-9df7-fc46d7aa7fc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.386 254096 DEBUG nova.compute.manager [req-ef142dbf-6c4d-44f1-904a-36043db68779 req-23f6a108-a80d-4bc5-9df7-fc46d7aa7fc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] No waiting events found dispatching network-vif-plugged-0cdb5ab1-8463-4494-a522-360862f2152e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.387 254096 WARNING nova.compute.manager [req-ef142dbf-6c4d-44f1-904a-36043db68779 req-23f6a108-a80d-4bc5-9df7-fc46d7aa7fc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Received unexpected event network-vif-plugged-0cdb5ab1-8463-4494-a522-360862f2152e for instance with vm_state building and task_state spawning.
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.387 254096 DEBUG nova.compute.manager [req-ef142dbf-6c4d-44f1-904a-36043db68779 req-23f6a108-a80d-4bc5-9df7-fc46d7aa7fc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Received event network-vif-unplugged-774bb0a0-1853-424e-a6c2-1ee07d3bbf61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.387 254096 DEBUG oslo_concurrency.lockutils [req-ef142dbf-6c4d-44f1-904a-36043db68779 req-23f6a108-a80d-4bc5-9df7-fc46d7aa7fc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2fa32ddb-072c-480c-9df3-a207412beb72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.387 254096 DEBUG oslo_concurrency.lockutils [req-ef142dbf-6c4d-44f1-904a-36043db68779 req-23f6a108-a80d-4bc5-9df7-fc46d7aa7fc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2fa32ddb-072c-480c-9df3-a207412beb72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.388 254096 DEBUG oslo_concurrency.lockutils [req-ef142dbf-6c4d-44f1-904a-36043db68779 req-23f6a108-a80d-4bc5-9df7-fc46d7aa7fc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2fa32ddb-072c-480c-9df3-a207412beb72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.388 254096 DEBUG nova.compute.manager [req-ef142dbf-6c4d-44f1-904a-36043db68779 req-23f6a108-a80d-4bc5-9df7-fc46d7aa7fc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] No waiting events found dispatching network-vif-unplugged-774bb0a0-1853-424e-a6c2-1ee07d3bbf61 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.388 254096 WARNING nova.compute.manager [req-ef142dbf-6c4d-44f1-904a-36043db68779 req-23f6a108-a80d-4bc5-9df7-fc46d7aa7fc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Received unexpected event network-vif-unplugged-774bb0a0-1853-424e-a6c2-1ee07d3bbf61 for instance with vm_state deleted and task_state None.
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.389 254096 DEBUG nova.compute.manager [req-ef142dbf-6c4d-44f1-904a-36043db68779 req-23f6a108-a80d-4bc5-9df7-fc46d7aa7fc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Received event network-vif-plugged-774bb0a0-1853-424e-a6c2-1ee07d3bbf61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.389 254096 DEBUG oslo_concurrency.lockutils [req-ef142dbf-6c4d-44f1-904a-36043db68779 req-23f6a108-a80d-4bc5-9df7-fc46d7aa7fc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2fa32ddb-072c-480c-9df3-a207412beb72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.389 254096 DEBUG oslo_concurrency.lockutils [req-ef142dbf-6c4d-44f1-904a-36043db68779 req-23f6a108-a80d-4bc5-9df7-fc46d7aa7fc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2fa32ddb-072c-480c-9df3-a207412beb72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.389 254096 DEBUG oslo_concurrency.lockutils [req-ef142dbf-6c4d-44f1-904a-36043db68779 req-23f6a108-a80d-4bc5-9df7-fc46d7aa7fc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2fa32ddb-072c-480c-9df3-a207412beb72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.389 254096 DEBUG nova.compute.manager [req-ef142dbf-6c4d-44f1-904a-36043db68779 req-23f6a108-a80d-4bc5-9df7-fc46d7aa7fc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] No waiting events found dispatching network-vif-plugged-774bb0a0-1853-424e-a6c2-1ee07d3bbf61 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.390 254096 WARNING nova.compute.manager [req-ef142dbf-6c4d-44f1-904a-36043db68779 req-23f6a108-a80d-4bc5-9df7-fc46d7aa7fc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Received unexpected event network-vif-plugged-774bb0a0-1853-424e-a6c2-1ee07d3bbf61 for instance with vm_state deleted and task_state None.
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.390 254096 DEBUG nova.compute.manager [req-ef142dbf-6c4d-44f1-904a-36043db68779 req-23f6a108-a80d-4bc5-9df7-fc46d7aa7fc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Received event network-vif-deleted-774bb0a0-1853-424e-a6c2-1ee07d3bbf61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.391 254096 DEBUG nova.compute.manager [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.395 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089493.3953347, 7368c721-3e2a-4635-b2d8-5703d20438d3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.396 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] VM Resumed (Lifecycle Event)
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.398 254096 DEBUG nova.virt.libvirt.driver [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.402 254096 INFO nova.virt.libvirt.driver [-] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Instance spawned successfully.
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.402 254096 DEBUG nova.virt.libvirt.driver [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.443 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.451 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.455 254096 DEBUG nova.virt.libvirt.driver [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.456 254096 DEBUG nova.virt.libvirt.driver [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.456 254096 DEBUG nova.virt.libvirt.driver [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.456 254096 DEBUG nova.virt.libvirt.driver [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.457 254096 DEBUG nova.virt.libvirt.driver [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.457 254096 DEBUG nova.virt.libvirt.driver [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.486 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.531 254096 INFO nova.compute.manager [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Took 10.48 seconds to spawn the instance on the hypervisor.
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.531 254096 DEBUG nova.compute.manager [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.611 254096 INFO nova.compute.manager [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Took 11.71 seconds to build instance.
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.626 254096 DEBUG oslo_concurrency.lockutils [None req-687fbb0b-d486-4a31-8bde-a052e8ee15d9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:51:33 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:51:33 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1186333535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.734 254096 DEBUG oslo_concurrency.processutils [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.740 254096 DEBUG nova.compute.provider_tree [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.762 254096 DEBUG nova.scheduler.client.report [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.797 254096 DEBUG oslo_concurrency.lockutils [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.802 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.838 254096 INFO nova.scheduler.client.report [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Deleted allocations for instance 2fa32ddb-072c-480c-9df3-a207412beb72
Nov 25 16:51:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2013: 321 pgs: 321 active+clean; 134 MiB data, 760 MiB used, 59 GiB / 60 GiB avail; 310 KiB/s rd, 3.6 MiB/s wr, 75 op/s
Nov 25 16:51:33 compute-0 nova_compute[254092]: 2025-11-25 16:51:33.888 254096 DEBUG oslo_concurrency.lockutils [None req-7088309a-cbcc-438c-8de6-ee8e1261a62d b117cf5d3a76422aacd4d3a62d7b2f0e faf3ae8544684cac802ef962ea89ba52 - - default default] Lock "2fa32ddb-072c-480c-9df3-a207412beb72" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.981s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:51:33 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1186333535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:51:34 compute-0 ceph-mon[74985]: pgmap v2013: 321 pgs: 321 active+clean; 134 MiB data, 760 MiB used, 59 GiB / 60 GiB avail; 310 KiB/s rd, 3.6 MiB/s wr, 75 op/s
Nov 25 16:51:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2014: 321 pgs: 321 active+clean; 105 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.6 MiB/s wr, 201 op/s
Nov 25 16:51:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:51:36 compute-0 ceph-mon[74985]: pgmap v2014: 321 pgs: 321 active+clean; 105 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.6 MiB/s wr, 201 op/s
Nov 25 16:51:37 compute-0 nova_compute[254092]: 2025-11-25 16:51:37.169 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:37 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 16:51:37 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.1 total, 600.0 interval
                                           Cumulative writes: 9193 writes, 41K keys, 9193 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 9193 writes, 9193 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1534 writes, 6924 keys, 1534 commit groups, 1.0 writes per commit group, ingest: 9.27 MB, 0.02 MB/s
                                           Interval WAL: 1534 writes, 1534 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     25.8      1.82              0.16        25    0.073       0      0       0.0       0.0
                                             L6      1/0    9.28 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   3.9    102.2     84.9      2.17              0.53        24    0.091    130K    13K       0.0       0.0
                                            Sum      1/0    9.28 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   4.9     55.6     58.0      3.99              0.69        49    0.081    130K    13K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   6.6     92.9     93.0      0.54              0.16        10    0.054     33K   2569       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0    102.2     84.9      2.17              0.53        24    0.091    130K    13K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     26.4      1.77              0.16        24    0.074       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.1      0.05              0.00         1    0.047       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.046, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.23 GB write, 0.06 MB/s write, 0.22 GB read, 0.06 MB/s read, 4.0 seconds
                                           Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563cc83831f0#2 capacity: 304.00 MB usage: 26.53 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000537 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1755,25.55 MB,8.40473%) FilterBlock(50,367.80 KB,0.11815%) IndexBlock(50,631.80 KB,0.202957%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 25 16:51:37 compute-0 ovn_controller[153477]: 2025-11-25T16:51:37Z|01055|binding|INFO|Releasing lport 43f83cca-eded-4f81-a561-02d17bd21a2b from this chassis (sb_readonly=0)
Nov 25 16:51:37 compute-0 NetworkManager[48891]: <info>  [1764089497.6682] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/431)
Nov 25 16:51:37 compute-0 NetworkManager[48891]: <info>  [1764089497.6687] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/432)
Nov 25 16:51:37 compute-0 nova_compute[254092]: 2025-11-25 16:51:37.674 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:37 compute-0 ovn_controller[153477]: 2025-11-25T16:51:37Z|01056|binding|INFO|Releasing lport 43f83cca-eded-4f81-a561-02d17bd21a2b from this chassis (sb_readonly=0)
Nov 25 16:51:37 compute-0 nova_compute[254092]: 2025-11-25 16:51:37.706 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:37 compute-0 nova_compute[254092]: 2025-11-25 16:51:37.714 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:37 compute-0 nova_compute[254092]: 2025-11-25 16:51:37.736 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2015: 321 pgs: 321 active+clean; 88 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 746 KiB/s wr, 199 op/s
Nov 25 16:51:37 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e256 do_prune osdmap full prune enabled
Nov 25 16:51:37 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e257 e257: 3 total, 3 up, 3 in
Nov 25 16:51:37 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e257: 3 total, 3 up, 3 in
Nov 25 16:51:38 compute-0 nova_compute[254092]: 2025-11-25 16:51:38.436 254096 DEBUG nova.compute.manager [req-0e3cfe30-0106-4b17-af29-b47eb16af88a req-017735f0-b828-449b-aa0f-b0152a392d21 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Received event network-changed-0cdb5ab1-8463-4494-a522-360862f2152e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:51:38 compute-0 nova_compute[254092]: 2025-11-25 16:51:38.437 254096 DEBUG nova.compute.manager [req-0e3cfe30-0106-4b17-af29-b47eb16af88a req-017735f0-b828-449b-aa0f-b0152a392d21 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Refreshing instance network info cache due to event network-changed-0cdb5ab1-8463-4494-a522-360862f2152e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:51:38 compute-0 nova_compute[254092]: 2025-11-25 16:51:38.437 254096 DEBUG oslo_concurrency.lockutils [req-0e3cfe30-0106-4b17-af29-b47eb16af88a req-017735f0-b828-449b-aa0f-b0152a392d21 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-7368c721-3e2a-4635-b2d8-5703d20438d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:51:38 compute-0 nova_compute[254092]: 2025-11-25 16:51:38.437 254096 DEBUG oslo_concurrency.lockutils [req-0e3cfe30-0106-4b17-af29-b47eb16af88a req-017735f0-b828-449b-aa0f-b0152a392d21 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-7368c721-3e2a-4635-b2d8-5703d20438d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:51:38 compute-0 nova_compute[254092]: 2025-11-25 16:51:38.437 254096 DEBUG nova.network.neutron [req-0e3cfe30-0106-4b17-af29-b47eb16af88a req-017735f0-b828-449b-aa0f-b0152a392d21 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Refreshing network info cache for port 0cdb5ab1-8463-4494-a522-360862f2152e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:51:38 compute-0 ovn_controller[153477]: 2025-11-25T16:51:38Z|01057|binding|INFO|Releasing lport 43f83cca-eded-4f81-a561-02d17bd21a2b from this chassis (sb_readonly=0)
Nov 25 16:51:38 compute-0 nova_compute[254092]: 2025-11-25 16:51:38.635 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:38 compute-0 nova_compute[254092]: 2025-11-25 16:51:38.802 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:38 compute-0 ceph-mon[74985]: pgmap v2015: 321 pgs: 321 active+clean; 88 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 746 KiB/s wr, 199 op/s
Nov 25 16:51:38 compute-0 ceph-mon[74985]: osdmap e257: 3 total, 3 up, 3 in
Nov 25 16:51:38 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e257 do_prune osdmap full prune enabled
Nov 25 16:51:38 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e258 e258: 3 total, 3 up, 3 in
Nov 25 16:51:38 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e258: 3 total, 3 up, 3 in
Nov 25 16:51:39 compute-0 nova_compute[254092]: 2025-11-25 16:51:39.756 254096 DEBUG nova.network.neutron [req-0e3cfe30-0106-4b17-af29-b47eb16af88a req-017735f0-b828-449b-aa0f-b0152a392d21 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Updated VIF entry in instance network info cache for port 0cdb5ab1-8463-4494-a522-360862f2152e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:51:39 compute-0 nova_compute[254092]: 2025-11-25 16:51:39.757 254096 DEBUG nova.network.neutron [req-0e3cfe30-0106-4b17-af29-b47eb16af88a req-017735f0-b828-449b-aa0f-b0152a392d21 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Updating instance_info_cache with network_info: [{"id": "0cdb5ab1-8463-4494-a522-360862f2152e", "address": "fa:16:3e:ba:99:c7", "network": {"id": "1b70d379-8b3d-4361-b11d-cafbb578194a", "bridge": "br-int", "label": "tempest-network-smoke--1591904689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cdb5ab1-84", "ovs_interfaceid": "0cdb5ab1-8463-4494-a522-360862f2152e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:51:39 compute-0 nova_compute[254092]: 2025-11-25 16:51:39.774 254096 DEBUG oslo_concurrency.lockutils [req-0e3cfe30-0106-4b17-af29-b47eb16af88a req-017735f0-b828-449b-aa0f-b0152a392d21 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-7368c721-3e2a-4635-b2d8-5703d20438d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:51:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2018: 321 pgs: 321 active+clean; 88 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 20 KiB/s wr, 228 op/s
Nov 25 16:51:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e258 do_prune osdmap full prune enabled
Nov 25 16:51:39 compute-0 ceph-mon[74985]: osdmap e258: 3 total, 3 up, 3 in
Nov 25 16:51:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e259 e259: 3 total, 3 up, 3 in
Nov 25 16:51:39 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e259: 3 total, 3 up, 3 in
Nov 25 16:51:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:51:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:51:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:51:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:51:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:51:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:51:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:51:40
Nov 25 16:51:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:51:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:51:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', '.rgw.root', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', 'volumes', '.mgr', 'backups']
Nov 25 16:51:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:51:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:51:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:51:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:51:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:51:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:51:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:51:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:51:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:51:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:51:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:51:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e259 do_prune osdmap full prune enabled
Nov 25 16:51:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e260 e260: 3 total, 3 up, 3 in
Nov 25 16:51:40 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e260: 3 total, 3 up, 3 in
Nov 25 16:51:40 compute-0 ceph-mon[74985]: pgmap v2018: 321 pgs: 321 active+clean; 88 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 20 KiB/s wr, 228 op/s
Nov 25 16:51:40 compute-0 ceph-mon[74985]: osdmap e259: 3 total, 3 up, 3 in
Nov 25 16:51:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:51:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2021: 321 pgs: 321 active+clean; 88 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 94 KiB/s rd, 16 KiB/s wr, 135 op/s
Nov 25 16:51:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e260 do_prune osdmap full prune enabled
Nov 25 16:51:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e261 e261: 3 total, 3 up, 3 in
Nov 25 16:51:41 compute-0 ceph-mon[74985]: osdmap e260: 3 total, 3 up, 3 in
Nov 25 16:51:42 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e261: 3 total, 3 up, 3 in
Nov 25 16:51:42 compute-0 nova_compute[254092]: 2025-11-25 16:51:42.172 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:42 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e261 do_prune osdmap full prune enabled
Nov 25 16:51:42 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e262 e262: 3 total, 3 up, 3 in
Nov 25 16:51:43 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e262: 3 total, 3 up, 3 in
Nov 25 16:51:43 compute-0 ceph-mon[74985]: pgmap v2021: 321 pgs: 321 active+clean; 88 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 94 KiB/s rd, 16 KiB/s wr, 135 op/s
Nov 25 16:51:43 compute-0 ceph-mon[74985]: osdmap e261: 3 total, 3 up, 3 in
Nov 25 16:51:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:43.047 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:51:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:43.048 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 16:51:43 compute-0 nova_compute[254092]: 2025-11-25 16:51:43.049 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:43 compute-0 nova_compute[254092]: 2025-11-25 16:51:43.805 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2024: 321 pgs: 321 active+clean; 88 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 94 KiB/s rd, 16 KiB/s wr, 135 op/s
Nov 25 16:51:44 compute-0 ceph-mon[74985]: osdmap e262: 3 total, 3 up, 3 in
Nov 25 16:51:45 compute-0 ceph-mon[74985]: pgmap v2024: 321 pgs: 321 active+clean; 88 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 94 KiB/s rd, 16 KiB/s wr, 135 op/s
Nov 25 16:51:45 compute-0 nova_compute[254092]: 2025-11-25 16:51:45.425 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:45 compute-0 ovn_controller[153477]: 2025-11-25T16:51:45Z|00108|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ba:99:c7 10.100.0.4
Nov 25 16:51:45 compute-0 ovn_controller[153477]: 2025-11-25T16:51:45Z|00109|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ba:99:c7 10.100.0.4
Nov 25 16:51:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2025: 321 pgs: 321 active+clean; 104 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 315 KiB/s rd, 3.7 MiB/s wr, 195 op/s
Nov 25 16:51:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:51:47 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e262 do_prune osdmap full prune enabled
Nov 25 16:51:47 compute-0 ceph-mon[74985]: pgmap v2025: 321 pgs: 321 active+clean; 104 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 315 KiB/s rd, 3.7 MiB/s wr, 195 op/s
Nov 25 16:51:47 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e263 e263: 3 total, 3 up, 3 in
Nov 25 16:51:47 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e263: 3 total, 3 up, 3 in
Nov 25 16:51:47 compute-0 nova_compute[254092]: 2025-11-25 16:51:47.145 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089492.1432257, 2fa32ddb-072c-480c-9df3-a207412beb72 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:51:47 compute-0 nova_compute[254092]: 2025-11-25 16:51:47.146 254096 INFO nova.compute.manager [-] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] VM Stopped (Lifecycle Event)
Nov 25 16:51:47 compute-0 nova_compute[254092]: 2025-11-25 16:51:47.169 254096 DEBUG nova.compute.manager [None req-4a657906-8202-4fb9-a7ed-65914e5ff376 - - - - - -] [instance: 2fa32ddb-072c-480c-9df3-a207412beb72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:51:47 compute-0 nova_compute[254092]: 2025-11-25 16:51:47.175 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:47 compute-0 nova_compute[254092]: 2025-11-25 16:51:47.856 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2027: 321 pgs: 321 active+clean; 109 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 338 KiB/s rd, 4.1 MiB/s wr, 146 op/s
Nov 25 16:51:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e263 do_prune osdmap full prune enabled
Nov 25 16:51:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e264 e264: 3 total, 3 up, 3 in
Nov 25 16:51:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:48.050 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:51:48 compute-0 ceph-mon[74985]: osdmap e263: 3 total, 3 up, 3 in
Nov 25 16:51:48 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e264: 3 total, 3 up, 3 in
Nov 25 16:51:48 compute-0 nova_compute[254092]: 2025-11-25 16:51:48.807 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e264 do_prune osdmap full prune enabled
Nov 25 16:51:49 compute-0 ceph-mon[74985]: pgmap v2027: 321 pgs: 321 active+clean; 109 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 338 KiB/s rd, 4.1 MiB/s wr, 146 op/s
Nov 25 16:51:49 compute-0 ceph-mon[74985]: osdmap e264: 3 total, 3 up, 3 in
Nov 25 16:51:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e265 e265: 3 total, 3 up, 3 in
Nov 25 16:51:49 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e265: 3 total, 3 up, 3 in
Nov 25 16:51:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2030: 321 pgs: 321 active+clean; 109 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 338 KiB/s rd, 4.1 MiB/s wr, 146 op/s
Nov 25 16:51:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e265 do_prune osdmap full prune enabled
Nov 25 16:51:50 compute-0 ceph-mon[74985]: osdmap e265: 3 total, 3 up, 3 in
Nov 25 16:51:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e266 e266: 3 total, 3 up, 3 in
Nov 25 16:51:50 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e266: 3 total, 3 up, 3 in
Nov 25 16:51:51 compute-0 nova_compute[254092]: 2025-11-25 16:51:51.168 254096 DEBUG oslo_concurrency.lockutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Acquiring lock "afaa5e41-729a-48cb-bfc0-54a38b0dc96f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:51:51 compute-0 nova_compute[254092]: 2025-11-25 16:51:51.169 254096 DEBUG oslo_concurrency.lockutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Lock "afaa5e41-729a-48cb-bfc0-54a38b0dc96f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:51:51 compute-0 ceph-mon[74985]: pgmap v2030: 321 pgs: 321 active+clean; 109 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 338 KiB/s rd, 4.1 MiB/s wr, 146 op/s
Nov 25 16:51:51 compute-0 ceph-mon[74985]: osdmap e266: 3 total, 3 up, 3 in
Nov 25 16:51:51 compute-0 nova_compute[254092]: 2025-11-25 16:51:51.189 254096 DEBUG nova.compute.manager [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:51:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:51:51 compute-0 nova_compute[254092]: 2025-11-25 16:51:51.265 254096 DEBUG oslo_concurrency.lockutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:51:51 compute-0 nova_compute[254092]: 2025-11-25 16:51:51.265 254096 DEBUG oslo_concurrency.lockutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:51:51 compute-0 nova_compute[254092]: 2025-11-25 16:51:51.273 254096 DEBUG nova.virt.hardware [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:51:51 compute-0 nova_compute[254092]: 2025-11-25 16:51:51.273 254096 INFO nova.compute.claims [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:51:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:51:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:51:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:51:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:51:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007562842881096569 of space, bias 1.0, pg target 0.22688528643289707 quantized to 32 (current 32)
Nov 25 16:51:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:51:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:51:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:51:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:51:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:51:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006671937064530833 of space, bias 1.0, pg target 0.20015811193592498 quantized to 32 (current 32)
Nov 25 16:51:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:51:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 16:51:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:51:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:51:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:51:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 16:51:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:51:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 16:51:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:51:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:51:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:51:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 16:51:51 compute-0 nova_compute[254092]: 2025-11-25 16:51:51.377 254096 DEBUG oslo_concurrency.processutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:51:51 compute-0 nova_compute[254092]: 2025-11-25 16:51:51.732 254096 INFO nova.compute.manager [None req-b5f82a09-7244-45a2-a96e-061cbbd2f585 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Get console output
Nov 25 16:51:51 compute-0 nova_compute[254092]: 2025-11-25 16:51:51.738 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 16:51:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:51:51 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1230878483' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:51:51 compute-0 nova_compute[254092]: 2025-11-25 16:51:51.833 254096 DEBUG oslo_concurrency.processutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:51:51 compute-0 nova_compute[254092]: 2025-11-25 16:51:51.841 254096 DEBUG nova.compute.provider_tree [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:51:51 compute-0 nova_compute[254092]: 2025-11-25 16:51:51.851 254096 DEBUG nova.scheduler.client.report [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:51:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2032: 321 pgs: 321 active+clean; 121 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 296 KiB/s rd, 241 KiB/s wr, 243 op/s
Nov 25 16:51:51 compute-0 nova_compute[254092]: 2025-11-25 16:51:51.882 254096 DEBUG oslo_concurrency.lockutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:51:51 compute-0 nova_compute[254092]: 2025-11-25 16:51:51.883 254096 DEBUG nova.compute.manager [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:51:51 compute-0 nova_compute[254092]: 2025-11-25 16:51:51.936 254096 DEBUG nova.compute.manager [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:51:51 compute-0 nova_compute[254092]: 2025-11-25 16:51:51.936 254096 DEBUG nova.network.neutron [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:51:51 compute-0 nova_compute[254092]: 2025-11-25 16:51:51.952 254096 INFO nova.virt.libvirt.driver [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:51:51 compute-0 nova_compute[254092]: 2025-11-25 16:51:51.968 254096 DEBUG nova.compute.manager [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:51:52 compute-0 nova_compute[254092]: 2025-11-25 16:51:52.049 254096 DEBUG nova.compute.manager [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:51:52 compute-0 nova_compute[254092]: 2025-11-25 16:51:52.050 254096 DEBUG nova.virt.libvirt.driver [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:51:52 compute-0 nova_compute[254092]: 2025-11-25 16:51:52.051 254096 INFO nova.virt.libvirt.driver [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Creating image(s)
Nov 25 16:51:52 compute-0 nova_compute[254092]: 2025-11-25 16:51:52.071 254096 DEBUG nova.storage.rbd_utils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] rbd image afaa5e41-729a-48cb-bfc0-54a38b0dc96f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:51:52 compute-0 nova_compute[254092]: 2025-11-25 16:51:52.099 254096 DEBUG nova.storage.rbd_utils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] rbd image afaa5e41-729a-48cb-bfc0-54a38b0dc96f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:51:52 compute-0 nova_compute[254092]: 2025-11-25 16:51:52.129 254096 DEBUG nova.storage.rbd_utils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] rbd image afaa5e41-729a-48cb-bfc0-54a38b0dc96f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:51:52 compute-0 nova_compute[254092]: 2025-11-25 16:51:52.134 254096 DEBUG oslo_concurrency.processutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:51:52 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e266 do_prune osdmap full prune enabled
Nov 25 16:51:52 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e267 e267: 3 total, 3 up, 3 in
Nov 25 16:51:52 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1230878483' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:51:52 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e267: 3 total, 3 up, 3 in
Nov 25 16:51:52 compute-0 nova_compute[254092]: 2025-11-25 16:51:52.185 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:52 compute-0 nova_compute[254092]: 2025-11-25 16:51:52.209 254096 DEBUG oslo_concurrency.lockutils [None req-d319e599-df8a-4509-9e4d-82e5b4db7ef9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "7368c721-3e2a-4635-b2d8-5703d20438d3" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:51:52 compute-0 nova_compute[254092]: 2025-11-25 16:51:52.210 254096 DEBUG oslo_concurrency.lockutils [None req-d319e599-df8a-4509-9e4d-82e5b4db7ef9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:51:52 compute-0 nova_compute[254092]: 2025-11-25 16:51:52.211 254096 DEBUG nova.compute.manager [None req-d319e599-df8a-4509-9e4d-82e5b4db7ef9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:51:52 compute-0 nova_compute[254092]: 2025-11-25 16:51:52.217 254096 DEBUG nova.compute.manager [None req-d319e599-df8a-4509-9e4d-82e5b4db7ef9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Nov 25 16:51:52 compute-0 nova_compute[254092]: 2025-11-25 16:51:52.219 254096 DEBUG nova.objects.instance [None req-d319e599-df8a-4509-9e4d-82e5b4db7ef9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'flavor' on Instance uuid 7368c721-3e2a-4635-b2d8-5703d20438d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:51:52 compute-0 nova_compute[254092]: 2025-11-25 16:51:52.228 254096 DEBUG nova.policy [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '406c96278eea4ca9ac09c960f9240fd6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd04ee87178c14bcc860cdca885ea5685', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:51:52 compute-0 nova_compute[254092]: 2025-11-25 16:51:52.242 254096 DEBUG nova.virt.libvirt.driver [None req-d319e599-df8a-4509-9e4d-82e5b4db7ef9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 16:51:52 compute-0 nova_compute[254092]: 2025-11-25 16:51:52.244 254096 DEBUG oslo_concurrency.processutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:51:52 compute-0 nova_compute[254092]: 2025-11-25 16:51:52.244 254096 DEBUG oslo_concurrency.lockutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:51:52 compute-0 nova_compute[254092]: 2025-11-25 16:51:52.245 254096 DEBUG oslo_concurrency.lockutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:51:52 compute-0 nova_compute[254092]: 2025-11-25 16:51:52.245 254096 DEBUG oslo_concurrency.lockutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:51:52 compute-0 nova_compute[254092]: 2025-11-25 16:51:52.274 254096 DEBUG nova.storage.rbd_utils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] rbd image afaa5e41-729a-48cb-bfc0-54a38b0dc96f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:51:52 compute-0 nova_compute[254092]: 2025-11-25 16:51:52.279 254096 DEBUG oslo_concurrency.processutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 afaa5e41-729a-48cb-bfc0-54a38b0dc96f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:51:52 compute-0 nova_compute[254092]: 2025-11-25 16:51:52.612 254096 DEBUG oslo_concurrency.processutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 afaa5e41-729a-48cb-bfc0-54a38b0dc96f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:51:52 compute-0 podman[361379]: 2025-11-25 16:51:52.646403881 +0000 UTC m=+0.066306182 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd)
Nov 25 16:51:52 compute-0 podman[361380]: 2025-11-25 16:51:52.66589157 +0000 UTC m=+0.085801122 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 16:51:52 compute-0 nova_compute[254092]: 2025-11-25 16:51:52.684 254096 DEBUG nova.storage.rbd_utils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] resizing rbd image afaa5e41-729a-48cb-bfc0-54a38b0dc96f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:51:52 compute-0 podman[361381]: 2025-11-25 16:51:52.69051696 +0000 UTC m=+0.100924324 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2)
Nov 25 16:51:52 compute-0 nova_compute[254092]: 2025-11-25 16:51:52.773 254096 DEBUG nova.objects.instance [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Lazy-loading 'migration_context' on Instance uuid afaa5e41-729a-48cb-bfc0-54a38b0dc96f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:51:52 compute-0 nova_compute[254092]: 2025-11-25 16:51:52.783 254096 DEBUG nova.network.neutron [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Successfully created port: 08e9db98-366d-49ea-aa38-b2d4e8a80e80 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:51:52 compute-0 nova_compute[254092]: 2025-11-25 16:51:52.787 254096 DEBUG nova.virt.libvirt.driver [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:51:52 compute-0 nova_compute[254092]: 2025-11-25 16:51:52.787 254096 DEBUG nova.virt.libvirt.driver [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Ensure instance console log exists: /var/lib/nova/instances/afaa5e41-729a-48cb-bfc0-54a38b0dc96f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:51:52 compute-0 nova_compute[254092]: 2025-11-25 16:51:52.788 254096 DEBUG oslo_concurrency.lockutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:51:52 compute-0 nova_compute[254092]: 2025-11-25 16:51:52.788 254096 DEBUG oslo_concurrency.lockutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:51:52 compute-0 nova_compute[254092]: 2025-11-25 16:51:52.788 254096 DEBUG oslo_concurrency.lockutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:51:53 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e267 do_prune osdmap full prune enabled
Nov 25 16:51:53 compute-0 ceph-mon[74985]: pgmap v2032: 321 pgs: 321 active+clean; 121 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 296 KiB/s rd, 241 KiB/s wr, 243 op/s
Nov 25 16:51:53 compute-0 ceph-mon[74985]: osdmap e267: 3 total, 3 up, 3 in
Nov 25 16:51:53 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e268 e268: 3 total, 3 up, 3 in
Nov 25 16:51:53 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e268: 3 total, 3 up, 3 in
Nov 25 16:51:53 compute-0 sudo[361510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:51:53 compute-0 sudo[361510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:51:53 compute-0 sudo[361510]: pam_unix(sudo:session): session closed for user root
Nov 25 16:51:53 compute-0 sudo[361535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:51:53 compute-0 sudo[361535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:51:53 compute-0 sudo[361535]: pam_unix(sudo:session): session closed for user root
Nov 25 16:51:53 compute-0 nova_compute[254092]: 2025-11-25 16:51:53.509 254096 DEBUG nova.network.neutron [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Successfully updated port: 08e9db98-366d-49ea-aa38-b2d4e8a80e80 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:51:53 compute-0 nova_compute[254092]: 2025-11-25 16:51:53.533 254096 DEBUG oslo_concurrency.lockutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Acquiring lock "refresh_cache-afaa5e41-729a-48cb-bfc0-54a38b0dc96f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:51:53 compute-0 nova_compute[254092]: 2025-11-25 16:51:53.534 254096 DEBUG oslo_concurrency.lockutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Acquired lock "refresh_cache-afaa5e41-729a-48cb-bfc0-54a38b0dc96f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:51:53 compute-0 nova_compute[254092]: 2025-11-25 16:51:53.534 254096 DEBUG nova.network.neutron [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:51:53 compute-0 sudo[361560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:51:53 compute-0 sudo[361560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:51:53 compute-0 sudo[361560]: pam_unix(sudo:session): session closed for user root
Nov 25 16:51:53 compute-0 sudo[361585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 16:51:53 compute-0 sudo[361585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:51:53 compute-0 nova_compute[254092]: 2025-11-25 16:51:53.710 254096 DEBUG nova.compute.manager [req-eab90b69-d839-400f-9ef7-d674e1a10ada req-00811098-b520-4f0a-bf46-7bc9db6ad20a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Received event network-changed-08e9db98-366d-49ea-aa38-b2d4e8a80e80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:51:53 compute-0 nova_compute[254092]: 2025-11-25 16:51:53.710 254096 DEBUG nova.compute.manager [req-eab90b69-d839-400f-9ef7-d674e1a10ada req-00811098-b520-4f0a-bf46-7bc9db6ad20a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Refreshing instance network info cache due to event network-changed-08e9db98-366d-49ea-aa38-b2d4e8a80e80. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:51:53 compute-0 nova_compute[254092]: 2025-11-25 16:51:53.711 254096 DEBUG oslo_concurrency.lockutils [req-eab90b69-d839-400f-9ef7-d674e1a10ada req-00811098-b520-4f0a-bf46-7bc9db6ad20a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-afaa5e41-729a-48cb-bfc0-54a38b0dc96f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:51:53 compute-0 nova_compute[254092]: 2025-11-25 16:51:53.766 254096 DEBUG nova.network.neutron [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:51:53 compute-0 nova_compute[254092]: 2025-11-25 16:51:53.809 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2035: 321 pgs: 321 active+clean; 121 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 299 KiB/s rd, 243 KiB/s wr, 246 op/s
Nov 25 16:51:54 compute-0 sudo[361585]: pam_unix(sudo:session): session closed for user root
Nov 25 16:51:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:51:54 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:51:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 16:51:54 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:51:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 16:51:54 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:51:54 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 4caa3808-6e2f-45e5-a543-39ef0497e8fb does not exist
Nov 25 16:51:54 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev fe97f82c-dbac-4001-a1a6-60421bc4ca8f does not exist
Nov 25 16:51:54 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev dfbc2fdd-dfe4-4351-8762-830c4f9863b5 does not exist
Nov 25 16:51:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 16:51:54 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:51:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 16:51:54 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:51:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:51:54 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:51:54 compute-0 sudo[361641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:51:54 compute-0 sudo[361641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:51:54 compute-0 sudo[361641]: pam_unix(sudo:session): session closed for user root
Nov 25 16:51:54 compute-0 ceph-mon[74985]: osdmap e268: 3 total, 3 up, 3 in
Nov 25 16:51:54 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:51:54 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:51:54 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:51:54 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:51:54 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:51:54 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:51:54 compute-0 sudo[361666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:51:54 compute-0 sudo[361666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:51:54 compute-0 sudo[361666]: pam_unix(sudo:session): session closed for user root
Nov 25 16:51:54 compute-0 sudo[361691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:51:54 compute-0 sudo[361691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:51:54 compute-0 sudo[361691]: pam_unix(sudo:session): session closed for user root
Nov 25 16:51:54 compute-0 sudo[361716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 16:51:54 compute-0 sudo[361716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:51:54 compute-0 kernel: tap0cdb5ab1-84 (unregistering): left promiscuous mode
Nov 25 16:51:54 compute-0 NetworkManager[48891]: <info>  [1764089514.5623] device (tap0cdb5ab1-84): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:51:54 compute-0 nova_compute[254092]: 2025-11-25 16:51:54.571 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:54 compute-0 ovn_controller[153477]: 2025-11-25T16:51:54Z|01058|binding|INFO|Releasing lport 0cdb5ab1-8463-4494-a522-360862f2152e from this chassis (sb_readonly=0)
Nov 25 16:51:54 compute-0 ovn_controller[153477]: 2025-11-25T16:51:54Z|01059|binding|INFO|Setting lport 0cdb5ab1-8463-4494-a522-360862f2152e down in Southbound
Nov 25 16:51:54 compute-0 ovn_controller[153477]: 2025-11-25T16:51:54Z|01060|binding|INFO|Removing iface tap0cdb5ab1-84 ovn-installed in OVS
Nov 25 16:51:54 compute-0 nova_compute[254092]: 2025-11-25 16:51:54.573 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:54.578 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:99:c7 10.100.0.4'], port_security=['fa:16:3e:ba:99:c7 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '7368c721-3e2a-4635-b2d8-5703d20438d3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1b70d379-8b3d-4361-b11d-cafbb578194a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c7519bf8-93c3-4492-8176-0f77cad9e544', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.192'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=86dc4e3d-c4e3-499d-a349-cb0874096a1e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=0cdb5ab1-8463-4494-a522-360862f2152e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:51:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:54.579 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 0cdb5ab1-8463-4494-a522-360862f2152e in datapath 1b70d379-8b3d-4361-b11d-cafbb578194a unbound from our chassis
Nov 25 16:51:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:54.580 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1b70d379-8b3d-4361-b11d-cafbb578194a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:51:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:54.581 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cb942ee5-ac13-4582-a01f-164abb8085bc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:54.581 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a namespace which is not needed anymore
Nov 25 16:51:54 compute-0 nova_compute[254092]: 2025-11-25 16:51:54.592 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:54 compute-0 podman[361780]: 2025-11-25 16:51:54.620923259 +0000 UTC m=+0.066266442 container create e0486124b7ed5ce0743f902ca3be6b98e6bb93c0709387e62836fdabadf1a865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bohr, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 16:51:54 compute-0 systemd[1]: machine-qemu\x2d133\x2dinstance\x2d00000069.scope: Deactivated successfully.
Nov 25 16:51:54 compute-0 systemd[1]: machine-qemu\x2d133\x2dinstance\x2d00000069.scope: Consumed 13.694s CPU time.
Nov 25 16:51:54 compute-0 systemd[1]: Started libpod-conmon-e0486124b7ed5ce0743f902ca3be6b98e6bb93c0709387e62836fdabadf1a865.scope.
Nov 25 16:51:54 compute-0 systemd-machined[216343]: Machine qemu-133-instance-00000069 terminated.
Nov 25 16:51:54 compute-0 podman[361780]: 2025-11-25 16:51:54.581480087 +0000 UTC m=+0.026823290 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:51:54 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:51:54 compute-0 neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a[361127]: [NOTICE]   (361132) : haproxy version is 2.8.14-c23fe91
Nov 25 16:51:54 compute-0 neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a[361127]: [NOTICE]   (361132) : path to executable is /usr/sbin/haproxy
Nov 25 16:51:54 compute-0 neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a[361127]: [WARNING]  (361132) : Exiting Master process...
Nov 25 16:51:54 compute-0 podman[361780]: 2025-11-25 16:51:54.701723685 +0000 UTC m=+0.147066898 container init e0486124b7ed5ce0743f902ca3be6b98e6bb93c0709387e62836fdabadf1a865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 16:51:54 compute-0 neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a[361127]: [ALERT]    (361132) : Current worker (361134) exited with code 143 (Terminated)
Nov 25 16:51:54 compute-0 neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a[361127]: [WARNING]  (361132) : All workers exited. Exiting... (0)
Nov 25 16:51:54 compute-0 systemd[1]: libpod-a355d4c2c30505019533570d5d53ee7daf45cc65000df70d3f2a0aea7df7d12e.scope: Deactivated successfully.
Nov 25 16:51:54 compute-0 podman[361780]: 2025-11-25 16:51:54.708110848 +0000 UTC m=+0.153454031 container start e0486124b7ed5ce0743f902ca3be6b98e6bb93c0709387e62836fdabadf1a865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bohr, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:51:54 compute-0 podman[361780]: 2025-11-25 16:51:54.711903582 +0000 UTC m=+0.157246755 container attach e0486124b7ed5ce0743f902ca3be6b98e6bb93c0709387e62836fdabadf1a865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bohr, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Nov 25 16:51:54 compute-0 inspiring_bohr[361823]: 167 167
Nov 25 16:51:54 compute-0 systemd[1]: libpod-e0486124b7ed5ce0743f902ca3be6b98e6bb93c0709387e62836fdabadf1a865.scope: Deactivated successfully.
Nov 25 16:51:54 compute-0 podman[361780]: 2025-11-25 16:51:54.713869014 +0000 UTC m=+0.159212197 container died e0486124b7ed5ce0743f902ca3be6b98e6bb93c0709387e62836fdabadf1a865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bohr, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Nov 25 16:51:54 compute-0 podman[361821]: 2025-11-25 16:51:54.713882035 +0000 UTC m=+0.047447801 container died a355d4c2c30505019533570d5d53ee7daf45cc65000df70d3f2a0aea7df7d12e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 16:51:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-02b6a7f7b9d2bdfecb85bba395c3ea10c90c41ec28e482c4e1ff05faa81e0e9d-merged.mount: Deactivated successfully.
Nov 25 16:51:54 compute-0 podman[361780]: 2025-11-25 16:51:54.758463846 +0000 UTC m=+0.203807029 container remove e0486124b7ed5ce0743f902ca3be6b98e6bb93c0709387e62836fdabadf1a865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 16:51:54 compute-0 systemd[1]: libpod-conmon-e0486124b7ed5ce0743f902ca3be6b98e6bb93c0709387e62836fdabadf1a865.scope: Deactivated successfully.
Nov 25 16:51:54 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a355d4c2c30505019533570d5d53ee7daf45cc65000df70d3f2a0aea7df7d12e-userdata-shm.mount: Deactivated successfully.
Nov 25 16:51:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d8c25a22fe3e9484bc39161e604e238df356a2c17ccd6f67b57960b5feb4d5f-merged.mount: Deactivated successfully.
Nov 25 16:51:54 compute-0 podman[361821]: 2025-11-25 16:51:54.790324093 +0000 UTC m=+0.123889829 container cleanup a355d4c2c30505019533570d5d53ee7daf45cc65000df70d3f2a0aea7df7d12e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:51:54 compute-0 systemd[1]: libpod-conmon-a355d4c2c30505019533570d5d53ee7daf45cc65000df70d3f2a0aea7df7d12e.scope: Deactivated successfully.
Nov 25 16:51:54 compute-0 podman[361867]: 2025-11-25 16:51:54.852630146 +0000 UTC m=+0.042035474 container remove a355d4c2c30505019533570d5d53ee7daf45cc65000df70d3f2a0aea7df7d12e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:51:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:54.859 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6e853653-9e6f-4056-84ca-e4037b208a4c]: (4, ('Tue Nov 25 04:51:54 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a (a355d4c2c30505019533570d5d53ee7daf45cc65000df70d3f2a0aea7df7d12e)\na355d4c2c30505019533570d5d53ee7daf45cc65000df70d3f2a0aea7df7d12e\nTue Nov 25 04:51:54 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a (a355d4c2c30505019533570d5d53ee7daf45cc65000df70d3f2a0aea7df7d12e)\na355d4c2c30505019533570d5d53ee7daf45cc65000df70d3f2a0aea7df7d12e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:54.862 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[af58ed18-4864-4ed2-9fe2-9e0324a1c290]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:54.863 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1b70d379-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:51:54 compute-0 nova_compute[254092]: 2025-11-25 16:51:54.909 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:54 compute-0 kernel: tap1b70d379-80: left promiscuous mode
Nov 25 16:51:54 compute-0 nova_compute[254092]: 2025-11-25 16:51:54.928 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:54 compute-0 nova_compute[254092]: 2025-11-25 16:51:54.931 254096 DEBUG nova.compute.manager [req-2aac8255-6b98-4cdd-9625-fa1ddc9fe0fd req-4b00cd00-e8aa-4bb7-9bae-a5d88ecdafe6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Received event network-vif-unplugged-0cdb5ab1-8463-4494-a522-360862f2152e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:51:54 compute-0 nova_compute[254092]: 2025-11-25 16:51:54.931 254096 DEBUG oslo_concurrency.lockutils [req-2aac8255-6b98-4cdd-9625-fa1ddc9fe0fd req-4b00cd00-e8aa-4bb7-9bae-a5d88ecdafe6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:51:54 compute-0 nova_compute[254092]: 2025-11-25 16:51:54.931 254096 DEBUG oslo_concurrency.lockutils [req-2aac8255-6b98-4cdd-9625-fa1ddc9fe0fd req-4b00cd00-e8aa-4bb7-9bae-a5d88ecdafe6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:51:54 compute-0 nova_compute[254092]: 2025-11-25 16:51:54.931 254096 DEBUG oslo_concurrency.lockutils [req-2aac8255-6b98-4cdd-9625-fa1ddc9fe0fd req-4b00cd00-e8aa-4bb7-9bae-a5d88ecdafe6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:51:54 compute-0 nova_compute[254092]: 2025-11-25 16:51:54.932 254096 DEBUG nova.compute.manager [req-2aac8255-6b98-4cdd-9625-fa1ddc9fe0fd req-4b00cd00-e8aa-4bb7-9bae-a5d88ecdafe6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] No waiting events found dispatching network-vif-unplugged-0cdb5ab1-8463-4494-a522-360862f2152e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:51:54 compute-0 nova_compute[254092]: 2025-11-25 16:51:54.932 254096 WARNING nova.compute.manager [req-2aac8255-6b98-4cdd-9625-fa1ddc9fe0fd req-4b00cd00-e8aa-4bb7-9bae-a5d88ecdafe6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Received unexpected event network-vif-unplugged-0cdb5ab1-8463-4494-a522-360862f2152e for instance with vm_state active and task_state powering-off.
Nov 25 16:51:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:54.935 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b49009f8-017f-4849-b7fc-999e6cf2a89c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:54.946 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[817dac9b-ab43-4e90-b484-0d645be644a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:54.947 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5a09dd5b-5fa5-4bcf-a598-2fc19547372a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:54 compute-0 podman[361893]: 2025-11-25 16:51:54.950812803 +0000 UTC m=+0.041624152 container create 734e22fa3fd22aca63297d457847e0bd59aea4ddfd50af54a7e1716159b412c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_edison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:51:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:54.965 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4efad0ec-b7ca-4ad8-a214-13d2c6d72c72]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 594877, 'reachable_time': 29942, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361912, 'error': None, 'target': 'ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:54.967 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:51:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:54.967 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[c6187ffa-9678-49da-8745-2f1ad970f171]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:54 compute-0 systemd[1]: Started libpod-conmon-734e22fa3fd22aca63297d457847e0bd59aea4ddfd50af54a7e1716159b412c7.scope.
Nov 25 16:51:55 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:51:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a86b61c96dab11e13bb6be75b1c860f8275a44dd028e21669878bdb9a2c5711/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:51:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a86b61c96dab11e13bb6be75b1c860f8275a44dd028e21669878bdb9a2c5711/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:51:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a86b61c96dab11e13bb6be75b1c860f8275a44dd028e21669878bdb9a2c5711/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:51:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a86b61c96dab11e13bb6be75b1c860f8275a44dd028e21669878bdb9a2c5711/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:51:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a86b61c96dab11e13bb6be75b1c860f8275a44dd028e21669878bdb9a2c5711/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 16:51:55 compute-0 podman[361893]: 2025-11-25 16:51:54.930908553 +0000 UTC m=+0.021719902 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:51:55 compute-0 podman[361893]: 2025-11-25 16:51:55.029894692 +0000 UTC m=+0.120706051 container init 734e22fa3fd22aca63297d457847e0bd59aea4ddfd50af54a7e1716159b412c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_edison, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 16:51:55 compute-0 podman[361893]: 2025-11-25 16:51:55.039307149 +0000 UTC m=+0.130118518 container start 734e22fa3fd22aca63297d457847e0bd59aea4ddfd50af54a7e1716159b412c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 16:51:55 compute-0 podman[361893]: 2025-11-25 16:51:55.042900326 +0000 UTC m=+0.133711675 container attach 734e22fa3fd22aca63297d457847e0bd59aea4ddfd50af54a7e1716159b412c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_edison, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:51:55 compute-0 nova_compute[254092]: 2025-11-25 16:51:55.184 254096 DEBUG nova.network.neutron [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Updating instance_info_cache with network_info: [{"id": "08e9db98-366d-49ea-aa38-b2d4e8a80e80", "address": "fa:16:3e:e2:98:e4", "network": {"id": "461d8b90-d4fc-454e-911d-7fee7be073c4", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1141829243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d04ee87178c14bcc860cdca885ea5685", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08e9db98-36", "ovs_interfaceid": "08e9db98-366d-49ea-aa38-b2d4e8a80e80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:51:55 compute-0 nova_compute[254092]: 2025-11-25 16:51:55.201 254096 DEBUG oslo_concurrency.lockutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Releasing lock "refresh_cache-afaa5e41-729a-48cb-bfc0-54a38b0dc96f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:51:55 compute-0 nova_compute[254092]: 2025-11-25 16:51:55.201 254096 DEBUG nova.compute.manager [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Instance network_info: |[{"id": "08e9db98-366d-49ea-aa38-b2d4e8a80e80", "address": "fa:16:3e:e2:98:e4", "network": {"id": "461d8b90-d4fc-454e-911d-7fee7be073c4", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1141829243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d04ee87178c14bcc860cdca885ea5685", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08e9db98-36", "ovs_interfaceid": "08e9db98-366d-49ea-aa38-b2d4e8a80e80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:51:55 compute-0 nova_compute[254092]: 2025-11-25 16:51:55.201 254096 DEBUG oslo_concurrency.lockutils [req-eab90b69-d839-400f-9ef7-d674e1a10ada req-00811098-b520-4f0a-bf46-7bc9db6ad20a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-afaa5e41-729a-48cb-bfc0-54a38b0dc96f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:51:55 compute-0 nova_compute[254092]: 2025-11-25 16:51:55.201 254096 DEBUG nova.network.neutron [req-eab90b69-d839-400f-9ef7-d674e1a10ada req-00811098-b520-4f0a-bf46-7bc9db6ad20a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Refreshing network info cache for port 08e9db98-366d-49ea-aa38-b2d4e8a80e80 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:51:55 compute-0 ceph-mon[74985]: pgmap v2035: 321 pgs: 321 active+clean; 121 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 299 KiB/s rd, 243 KiB/s wr, 246 op/s
Nov 25 16:51:55 compute-0 nova_compute[254092]: 2025-11-25 16:51:55.204 254096 DEBUG nova.virt.libvirt.driver [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Start _get_guest_xml network_info=[{"id": "08e9db98-366d-49ea-aa38-b2d4e8a80e80", "address": "fa:16:3e:e2:98:e4", "network": {"id": "461d8b90-d4fc-454e-911d-7fee7be073c4", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1141829243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d04ee87178c14bcc860cdca885ea5685", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08e9db98-36", "ovs_interfaceid": "08e9db98-366d-49ea-aa38-b2d4e8a80e80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:51:55 compute-0 nova_compute[254092]: 2025-11-25 16:51:55.209 254096 WARNING nova.virt.libvirt.driver [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:51:55 compute-0 nova_compute[254092]: 2025-11-25 16:51:55.219 254096 DEBUG nova.virt.libvirt.host [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:51:55 compute-0 nova_compute[254092]: 2025-11-25 16:51:55.220 254096 DEBUG nova.virt.libvirt.host [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:51:55 compute-0 nova_compute[254092]: 2025-11-25 16:51:55.222 254096 DEBUG nova.virt.libvirt.host [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:51:55 compute-0 nova_compute[254092]: 2025-11-25 16:51:55.223 254096 DEBUG nova.virt.libvirt.host [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:51:55 compute-0 nova_compute[254092]: 2025-11-25 16:51:55.224 254096 DEBUG nova.virt.libvirt.driver [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:51:55 compute-0 nova_compute[254092]: 2025-11-25 16:51:55.224 254096 DEBUG nova.virt.hardware [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:51:55 compute-0 nova_compute[254092]: 2025-11-25 16:51:55.224 254096 DEBUG nova.virt.hardware [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:51:55 compute-0 nova_compute[254092]: 2025-11-25 16:51:55.224 254096 DEBUG nova.virt.hardware [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:51:55 compute-0 nova_compute[254092]: 2025-11-25 16:51:55.225 254096 DEBUG nova.virt.hardware [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:51:55 compute-0 nova_compute[254092]: 2025-11-25 16:51:55.225 254096 DEBUG nova.virt.hardware [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:51:55 compute-0 nova_compute[254092]: 2025-11-25 16:51:55.225 254096 DEBUG nova.virt.hardware [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:51:55 compute-0 nova_compute[254092]: 2025-11-25 16:51:55.225 254096 DEBUG nova.virt.hardware [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:51:55 compute-0 nova_compute[254092]: 2025-11-25 16:51:55.225 254096 DEBUG nova.virt.hardware [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:51:55 compute-0 nova_compute[254092]: 2025-11-25 16:51:55.225 254096 DEBUG nova.virt.hardware [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:51:55 compute-0 nova_compute[254092]: 2025-11-25 16:51:55.226 254096 DEBUG nova.virt.hardware [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:51:55 compute-0 nova_compute[254092]: 2025-11-25 16:51:55.226 254096 DEBUG nova.virt.hardware [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:51:55 compute-0 nova_compute[254092]: 2025-11-25 16:51:55.228 254096 DEBUG oslo_concurrency.processutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:51:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 16:51:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1875539692' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:51:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 16:51:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1875539692' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:51:55 compute-0 nova_compute[254092]: 2025-11-25 16:51:55.352 254096 INFO nova.virt.libvirt.driver [None req-d319e599-df8a-4509-9e4d-82e5b4db7ef9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Instance shutdown successfully after 3 seconds.
Nov 25 16:51:55 compute-0 nova_compute[254092]: 2025-11-25 16:51:55.358 254096 INFO nova.virt.libvirt.driver [-] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Instance destroyed successfully.
Nov 25 16:51:55 compute-0 nova_compute[254092]: 2025-11-25 16:51:55.359 254096 DEBUG nova.objects.instance [None req-d319e599-df8a-4509-9e4d-82e5b4db7ef9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'numa_topology' on Instance uuid 7368c721-3e2a-4635-b2d8-5703d20438d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:51:55 compute-0 nova_compute[254092]: 2025-11-25 16:51:55.372 254096 DEBUG nova.compute.manager [None req-d319e599-df8a-4509-9e4d-82e5b4db7ef9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:51:55 compute-0 nova_compute[254092]: 2025-11-25 16:51:55.414 254096 DEBUG oslo_concurrency.lockutils [None req-d319e599-df8a-4509-9e4d-82e5b4db7ef9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.204s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:51:55 compute-0 systemd[1]: run-netns-ovnmeta\x2d1b70d379\x2d8b3d\x2d4361\x2db11d\x2dcafbb578194a.mount: Deactivated successfully.
Nov 25 16:51:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:51:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1296492758' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:51:55 compute-0 nova_compute[254092]: 2025-11-25 16:51:55.701 254096 DEBUG oslo_concurrency.processutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:51:55 compute-0 nova_compute[254092]: 2025-11-25 16:51:55.721 254096 DEBUG nova.storage.rbd_utils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] rbd image afaa5e41-729a-48cb-bfc0-54a38b0dc96f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:51:55 compute-0 nova_compute[254092]: 2025-11-25 16:51:55.725 254096 DEBUG oslo_concurrency.processutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:51:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2036: 321 pgs: 321 active+clean; 153 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 273 KiB/s rd, 2.3 MiB/s wr, 254 op/s
Nov 25 16:51:56 compute-0 admiring_edison[361916]: --> passed data devices: 0 physical, 3 LVM
Nov 25 16:51:56 compute-0 admiring_edison[361916]: --> relative data size: 1.0
Nov 25 16:51:56 compute-0 admiring_edison[361916]: --> All data devices are unavailable
Nov 25 16:51:56 compute-0 systemd[1]: libpod-734e22fa3fd22aca63297d457847e0bd59aea4ddfd50af54a7e1716159b412c7.scope: Deactivated successfully.
Nov 25 16:51:56 compute-0 podman[361893]: 2025-11-25 16:51:56.067605733 +0000 UTC m=+1.158417082 container died 734e22fa3fd22aca63297d457847e0bd59aea4ddfd50af54a7e1716159b412c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_edison, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 16:51:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a86b61c96dab11e13bb6be75b1c860f8275a44dd028e21669878bdb9a2c5711-merged.mount: Deactivated successfully.
Nov 25 16:51:56 compute-0 podman[361893]: 2025-11-25 16:51:56.12604455 +0000 UTC m=+1.216855909 container remove 734e22fa3fd22aca63297d457847e0bd59aea4ddfd50af54a7e1716159b412c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:51:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:51:56 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1083064880' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:51:56 compute-0 systemd[1]: libpod-conmon-734e22fa3fd22aca63297d457847e0bd59aea4ddfd50af54a7e1716159b412c7.scope: Deactivated successfully.
Nov 25 16:51:56 compute-0 sudo[361716]: pam_unix(sudo:session): session closed for user root
Nov 25 16:51:56 compute-0 nova_compute[254092]: 2025-11-25 16:51:56.154 254096 DEBUG oslo_concurrency.processutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:51:56 compute-0 nova_compute[254092]: 2025-11-25 16:51:56.158 254096 DEBUG nova.virt.libvirt.vif [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:51:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-1469913116',display_name='tempest-ServerMetadataTestJSON-server-1469913116',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-1469913116',id=106,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d04ee87178c14bcc860cdca885ea5685',ramdisk_id='',reservation_id='r-0xb1sqh5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-246494039',owner_user_name='tempest-ServerMetadataTestJSON-246494039-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:51:52Z,user_data=None,user_id='406c96278eea4ca9ac09c960f9240fd6',uuid=afaa5e41-729a-48cb-bfc0-54a38b0dc96f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "08e9db98-366d-49ea-aa38-b2d4e8a80e80", "address": "fa:16:3e:e2:98:e4", "network": {"id": "461d8b90-d4fc-454e-911d-7fee7be073c4", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1141829243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d04ee87178c14bcc860cdca885ea5685", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08e9db98-36", "ovs_interfaceid": "08e9db98-366d-49ea-aa38-b2d4e8a80e80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:51:56 compute-0 nova_compute[254092]: 2025-11-25 16:51:56.159 254096 DEBUG nova.network.os_vif_util [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Converting VIF {"id": "08e9db98-366d-49ea-aa38-b2d4e8a80e80", "address": "fa:16:3e:e2:98:e4", "network": {"id": "461d8b90-d4fc-454e-911d-7fee7be073c4", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1141829243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d04ee87178c14bcc860cdca885ea5685", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08e9db98-36", "ovs_interfaceid": "08e9db98-366d-49ea-aa38-b2d4e8a80e80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:51:56 compute-0 nova_compute[254092]: 2025-11-25 16:51:56.160 254096 DEBUG nova.network.os_vif_util [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:98:e4,bridge_name='br-int',has_traffic_filtering=True,id=08e9db98-366d-49ea-aa38-b2d4e8a80e80,network=Network(461d8b90-d4fc-454e-911d-7fee7be073c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08e9db98-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:51:56 compute-0 nova_compute[254092]: 2025-11-25 16:51:56.162 254096 DEBUG nova.objects.instance [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Lazy-loading 'pci_devices' on Instance uuid afaa5e41-729a-48cb-bfc0-54a38b0dc96f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:51:56 compute-0 nova_compute[254092]: 2025-11-25 16:51:56.175 254096 DEBUG nova.virt.libvirt.driver [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:51:56 compute-0 nova_compute[254092]:   <uuid>afaa5e41-729a-48cb-bfc0-54a38b0dc96f</uuid>
Nov 25 16:51:56 compute-0 nova_compute[254092]:   <name>instance-0000006a</name>
Nov 25 16:51:56 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:51:56 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:51:56 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:51:56 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:       <nova:name>tempest-ServerMetadataTestJSON-server-1469913116</nova:name>
Nov 25 16:51:56 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:51:55</nova:creationTime>
Nov 25 16:51:56 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:51:56 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:51:56 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:51:56 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:51:56 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:51:56 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:51:56 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:51:56 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:51:56 compute-0 nova_compute[254092]:         <nova:user uuid="406c96278eea4ca9ac09c960f9240fd6">tempest-ServerMetadataTestJSON-246494039-project-member</nova:user>
Nov 25 16:51:56 compute-0 nova_compute[254092]:         <nova:project uuid="d04ee87178c14bcc860cdca885ea5685">tempest-ServerMetadataTestJSON-246494039</nova:project>
Nov 25 16:51:56 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:51:56 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:51:56 compute-0 nova_compute[254092]:         <nova:port uuid="08e9db98-366d-49ea-aa38-b2d4e8a80e80">
Nov 25 16:51:56 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:51:56 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:51:56 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:51:56 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:51:56 compute-0 nova_compute[254092]:     <system>
Nov 25 16:51:56 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:51:56 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:51:56 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:51:56 compute-0 nova_compute[254092]:       <entry name="serial">afaa5e41-729a-48cb-bfc0-54a38b0dc96f</entry>
Nov 25 16:51:56 compute-0 nova_compute[254092]:       <entry name="uuid">afaa5e41-729a-48cb-bfc0-54a38b0dc96f</entry>
Nov 25 16:51:56 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     </system>
Nov 25 16:51:56 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:51:56 compute-0 nova_compute[254092]:   <os>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:   </os>
Nov 25 16:51:56 compute-0 nova_compute[254092]:   <features>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:   </features>
Nov 25 16:51:56 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:51:56 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:51:56 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:51:56 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:51:56 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:51:56 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/afaa5e41-729a-48cb-bfc0-54a38b0dc96f_disk">
Nov 25 16:51:56 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:       </source>
Nov 25 16:51:56 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:51:56 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:51:56 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:51:56 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/afaa5e41-729a-48cb-bfc0-54a38b0dc96f_disk.config">
Nov 25 16:51:56 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:       </source>
Nov 25 16:51:56 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:51:56 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:51:56 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:51:56 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:e2:98:e4"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:       <target dev="tap08e9db98-36"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:51:56 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/afaa5e41-729a-48cb-bfc0-54a38b0dc96f/console.log" append="off"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     <video>
Nov 25 16:51:56 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     </video>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:51:56 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:51:56 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:51:56 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:51:56 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:51:56 compute-0 nova_compute[254092]: </domain>
Nov 25 16:51:56 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:51:56 compute-0 nova_compute[254092]: 2025-11-25 16:51:56.177 254096 DEBUG nova.compute.manager [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Preparing to wait for external event network-vif-plugged-08e9db98-366d-49ea-aa38-b2d4e8a80e80 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:51:56 compute-0 nova_compute[254092]: 2025-11-25 16:51:56.177 254096 DEBUG oslo_concurrency.lockutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Acquiring lock "afaa5e41-729a-48cb-bfc0-54a38b0dc96f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:51:56 compute-0 nova_compute[254092]: 2025-11-25 16:51:56.177 254096 DEBUG oslo_concurrency.lockutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Lock "afaa5e41-729a-48cb-bfc0-54a38b0dc96f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:51:56 compute-0 nova_compute[254092]: 2025-11-25 16:51:56.178 254096 DEBUG oslo_concurrency.lockutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Lock "afaa5e41-729a-48cb-bfc0-54a38b0dc96f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:51:56 compute-0 nova_compute[254092]: 2025-11-25 16:51:56.178 254096 DEBUG nova.virt.libvirt.vif [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:51:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-1469913116',display_name='tempest-ServerMetadataTestJSON-server-1469913116',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-1469913116',id=106,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d04ee87178c14bcc860cdca885ea5685',ramdisk_id='',reservation_id='r-0xb1sqh5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-246494039',owner_user_name='tempest-ServerMetadataTestJSON-246494039-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:51:52Z,user_data=None,user_id='406c96278eea4ca9ac09c960f9240fd6',uuid=afaa5e41-729a-48cb-bfc0-54a38b0dc96f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "08e9db98-366d-49ea-aa38-b2d4e8a80e80", "address": "fa:16:3e:e2:98:e4", "network": {"id": "461d8b90-d4fc-454e-911d-7fee7be073c4", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1141829243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d04ee87178c14bcc860cdca885ea5685", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08e9db98-36", "ovs_interfaceid": "08e9db98-366d-49ea-aa38-b2d4e8a80e80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:51:56 compute-0 nova_compute[254092]: 2025-11-25 16:51:56.179 254096 DEBUG nova.network.os_vif_util [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Converting VIF {"id": "08e9db98-366d-49ea-aa38-b2d4e8a80e80", "address": "fa:16:3e:e2:98:e4", "network": {"id": "461d8b90-d4fc-454e-911d-7fee7be073c4", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1141829243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d04ee87178c14bcc860cdca885ea5685", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08e9db98-36", "ovs_interfaceid": "08e9db98-366d-49ea-aa38-b2d4e8a80e80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:51:56 compute-0 nova_compute[254092]: 2025-11-25 16:51:56.179 254096 DEBUG nova.network.os_vif_util [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:98:e4,bridge_name='br-int',has_traffic_filtering=True,id=08e9db98-366d-49ea-aa38-b2d4e8a80e80,network=Network(461d8b90-d4fc-454e-911d-7fee7be073c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08e9db98-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:51:56 compute-0 nova_compute[254092]: 2025-11-25 16:51:56.179 254096 DEBUG os_vif [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:98:e4,bridge_name='br-int',has_traffic_filtering=True,id=08e9db98-366d-49ea-aa38-b2d4e8a80e80,network=Network(461d8b90-d4fc-454e-911d-7fee7be073c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08e9db98-36') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:51:56 compute-0 nova_compute[254092]: 2025-11-25 16:51:56.180 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:56 compute-0 nova_compute[254092]: 2025-11-25 16:51:56.180 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:51:56 compute-0 nova_compute[254092]: 2025-11-25 16:51:56.181 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:51:56 compute-0 nova_compute[254092]: 2025-11-25 16:51:56.183 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:56 compute-0 nova_compute[254092]: 2025-11-25 16:51:56.184 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08e9db98-36, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:51:56 compute-0 nova_compute[254092]: 2025-11-25 16:51:56.184 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap08e9db98-36, col_values=(('external_ids', {'iface-id': '08e9db98-366d-49ea-aa38-b2d4e8a80e80', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e2:98:e4', 'vm-uuid': 'afaa5e41-729a-48cb-bfc0-54a38b0dc96f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:51:56 compute-0 nova_compute[254092]: 2025-11-25 16:51:56.186 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:56 compute-0 NetworkManager[48891]: <info>  [1764089516.1871] manager: (tap08e9db98-36): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/433)
Nov 25 16:51:56 compute-0 nova_compute[254092]: 2025-11-25 16:51:56.188 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:51:56 compute-0 nova_compute[254092]: 2025-11-25 16:51:56.195 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:56 compute-0 nova_compute[254092]: 2025-11-25 16:51:56.196 254096 INFO os_vif [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:98:e4,bridge_name='br-int',has_traffic_filtering=True,id=08e9db98-366d-49ea-aa38-b2d4e8a80e80,network=Network(461d8b90-d4fc-454e-911d-7fee7be073c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08e9db98-36')
Nov 25 16:51:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/1875539692' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:51:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/1875539692' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:51:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1296492758' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:51:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1083064880' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:51:56 compute-0 sudo[362021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:51:56 compute-0 sudo[362021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:51:56 compute-0 sudo[362021]: pam_unix(sudo:session): session closed for user root
Nov 25 16:51:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:51:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e268 do_prune osdmap full prune enabled
Nov 25 16:51:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e269 e269: 3 total, 3 up, 3 in
Nov 25 16:51:56 compute-0 nova_compute[254092]: 2025-11-25 16:51:56.246 254096 DEBUG nova.virt.libvirt.driver [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:51:56 compute-0 nova_compute[254092]: 2025-11-25 16:51:56.247 254096 DEBUG nova.virt.libvirt.driver [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:51:56 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e269: 3 total, 3 up, 3 in
Nov 25 16:51:56 compute-0 nova_compute[254092]: 2025-11-25 16:51:56.247 254096 DEBUG nova.virt.libvirt.driver [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] No VIF found with MAC fa:16:3e:e2:98:e4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:51:56 compute-0 nova_compute[254092]: 2025-11-25 16:51:56.248 254096 INFO nova.virt.libvirt.driver [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Using config drive
Nov 25 16:51:56 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Nov 25 16:51:56 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:51:56.259738) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 16:51:56 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Nov 25 16:51:56 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089516259771, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 2427, "num_deletes": 510, "total_data_size": 3207647, "memory_usage": 3264272, "flush_reason": "Manual Compaction"}
Nov 25 16:51:56 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Nov 25 16:51:56 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089516277249, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 2014992, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40183, "largest_seqno": 42609, "table_properties": {"data_size": 2006806, "index_size": 4172, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2949, "raw_key_size": 23243, "raw_average_key_size": 19, "raw_value_size": 1986766, "raw_average_value_size": 1690, "num_data_blocks": 187, "num_entries": 1175, "num_filter_entries": 1175, "num_deletions": 510, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764089331, "oldest_key_time": 1764089331, "file_creation_time": 1764089516, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:51:56 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 17553 microseconds, and 5208 cpu microseconds.
Nov 25 16:51:56 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:51:56 compute-0 sudo[362050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:51:56 compute-0 sudo[362050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:51:56 compute-0 sudo[362050]: pam_unix(sudo:session): session closed for user root
Nov 25 16:51:56 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:51:56.277290) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 2014992 bytes OK
Nov 25 16:51:56 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:51:56.277312) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Nov 25 16:51:56 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:51:56.294412) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Nov 25 16:51:56 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:51:56.294492) EVENT_LOG_v1 {"time_micros": 1764089516294482, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 16:51:56 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:51:56.294513) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 16:51:56 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 3196352, prev total WAL file size 3196352, number of live WAL files 2.
Nov 25 16:51:56 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:51:56 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:51:56.295508) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353037' seq:72057594037927935, type:22 .. '6D6772737461740031373539' seq:0, type:0; will stop at (end)
Nov 25 16:51:56 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 16:51:56 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(1967KB)], [89(9497KB)]
Nov 25 16:51:56 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089516295559, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 11740761, "oldest_snapshot_seqno": -1}
Nov 25 16:51:56 compute-0 nova_compute[254092]: 2025-11-25 16:51:56.299 254096 DEBUG nova.storage.rbd_utils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] rbd image afaa5e41-729a-48cb-bfc0-54a38b0dc96f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:51:56 compute-0 sudo[362091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:51:56 compute-0 sudo[362091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:51:56 compute-0 sudo[362091]: pam_unix(sudo:session): session closed for user root
Nov 25 16:51:56 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 6736 keys, 8868334 bytes, temperature: kUnknown
Nov 25 16:51:56 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089516353296, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 8868334, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8824295, "index_size": 26071, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16901, "raw_key_size": 172814, "raw_average_key_size": 25, "raw_value_size": 8704575, "raw_average_value_size": 1292, "num_data_blocks": 1032, "num_entries": 6736, "num_filter_entries": 6736, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764089516, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:51:56 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:51:56 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:51:56.353572) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 8868334 bytes
Nov 25 16:51:56 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:51:56.355080) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 203.0 rd, 153.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 9.3 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(10.2) write-amplify(4.4) OK, records in: 7679, records dropped: 943 output_compression: NoCompression
Nov 25 16:51:56 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:51:56.355095) EVENT_LOG_v1 {"time_micros": 1764089516355088, "job": 52, "event": "compaction_finished", "compaction_time_micros": 57832, "compaction_time_cpu_micros": 21501, "output_level": 6, "num_output_files": 1, "total_output_size": 8868334, "num_input_records": 7679, "num_output_records": 6736, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 16:51:56 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:51:56 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089516355539, "job": 52, "event": "table_file_deletion", "file_number": 91}
Nov 25 16:51:56 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:51:56 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089516357141, "job": 52, "event": "table_file_deletion", "file_number": 89}
Nov 25 16:51:56 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:51:56.295422) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:51:56 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:51:56.357167) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:51:56 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:51:56.357170) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:51:56 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:51:56.357172) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:51:56 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:51:56.357173) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:51:56 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:51:56.357175) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:51:56 compute-0 sudo[362118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 16:51:56 compute-0 sudo[362118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:51:56 compute-0 podman[362184]: 2025-11-25 16:51:56.733898439 +0000 UTC m=+0.044267483 container create e7ae4e51219071c02314acfd3e9d1683fe11d1b71307161de4792f6534b7e841 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:51:56 compute-0 systemd[1]: Started libpod-conmon-e7ae4e51219071c02314acfd3e9d1683fe11d1b71307161de4792f6534b7e841.scope.
Nov 25 16:51:56 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:51:56 compute-0 podman[362184]: 2025-11-25 16:51:56.715346965 +0000 UTC m=+0.025716009 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:51:56 compute-0 podman[362184]: 2025-11-25 16:51:56.815231709 +0000 UTC m=+0.125600743 container init e7ae4e51219071c02314acfd3e9d1683fe11d1b71307161de4792f6534b7e841 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mahavira, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:51:56 compute-0 podman[362184]: 2025-11-25 16:51:56.821929422 +0000 UTC m=+0.132298426 container start e7ae4e51219071c02314acfd3e9d1683fe11d1b71307161de4792f6534b7e841 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mahavira, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 16:51:56 compute-0 podman[362184]: 2025-11-25 16:51:56.824991005 +0000 UTC m=+0.135360059 container attach e7ae4e51219071c02314acfd3e9d1683fe11d1b71307161de4792f6534b7e841 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:51:56 compute-0 unruffled_mahavira[362201]: 167 167
Nov 25 16:51:56 compute-0 systemd[1]: libpod-e7ae4e51219071c02314acfd3e9d1683fe11d1b71307161de4792f6534b7e841.scope: Deactivated successfully.
Nov 25 16:51:56 compute-0 podman[362184]: 2025-11-25 16:51:56.82887242 +0000 UTC m=+0.139241504 container died e7ae4e51219071c02314acfd3e9d1683fe11d1b71307161de4792f6534b7e841 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mahavira, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:51:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-b758801e73ad4e7fe87bf89ad950a25762421ad666eb3b30180835238b991a5c-merged.mount: Deactivated successfully.
Nov 25 16:51:56 compute-0 podman[362184]: 2025-11-25 16:51:56.87998216 +0000 UTC m=+0.190351214 container remove e7ae4e51219071c02314acfd3e9d1683fe11d1b71307161de4792f6534b7e841 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Nov 25 16:51:56 compute-0 systemd[1]: libpod-conmon-e7ae4e51219071c02314acfd3e9d1683fe11d1b71307161de4792f6534b7e841.scope: Deactivated successfully.
Nov 25 16:51:57 compute-0 nova_compute[254092]: 2025-11-25 16:51:57.062 254096 INFO nova.virt.libvirt.driver [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Creating config drive at /var/lib/nova/instances/afaa5e41-729a-48cb-bfc0-54a38b0dc96f/disk.config
Nov 25 16:51:57 compute-0 podman[362226]: 2025-11-25 16:51:57.068751029 +0000 UTC m=+0.047915533 container create 82f495d2745214d17c2343157eb4f471bc392fd10eff2954e317a98f29e02ee3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_kapitsa, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 16:51:57 compute-0 nova_compute[254092]: 2025-11-25 16:51:57.073 254096 DEBUG oslo_concurrency.processutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/afaa5e41-729a-48cb-bfc0-54a38b0dc96f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps1jccc7b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:51:57 compute-0 systemd[1]: Started libpod-conmon-82f495d2745214d17c2343157eb4f471bc392fd10eff2954e317a98f29e02ee3.scope.
Nov 25 16:51:57 compute-0 nova_compute[254092]: 2025-11-25 16:51:57.127 254096 DEBUG nova.compute.manager [req-ac9349de-7d2e-48e3-af04-de73d86332a7 req-85be4bf9-9214-4731-b1a8-210a0ea59194 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Received event network-vif-plugged-0cdb5ab1-8463-4494-a522-360862f2152e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:51:57 compute-0 nova_compute[254092]: 2025-11-25 16:51:57.127 254096 DEBUG oslo_concurrency.lockutils [req-ac9349de-7d2e-48e3-af04-de73d86332a7 req-85be4bf9-9214-4731-b1a8-210a0ea59194 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:51:57 compute-0 nova_compute[254092]: 2025-11-25 16:51:57.128 254096 DEBUG oslo_concurrency.lockutils [req-ac9349de-7d2e-48e3-af04-de73d86332a7 req-85be4bf9-9214-4731-b1a8-210a0ea59194 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:51:57 compute-0 nova_compute[254092]: 2025-11-25 16:51:57.128 254096 DEBUG oslo_concurrency.lockutils [req-ac9349de-7d2e-48e3-af04-de73d86332a7 req-85be4bf9-9214-4731-b1a8-210a0ea59194 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:51:57 compute-0 nova_compute[254092]: 2025-11-25 16:51:57.128 254096 DEBUG nova.compute.manager [req-ac9349de-7d2e-48e3-af04-de73d86332a7 req-85be4bf9-9214-4731-b1a8-210a0ea59194 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] No waiting events found dispatching network-vif-plugged-0cdb5ab1-8463-4494-a522-360862f2152e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:51:57 compute-0 nova_compute[254092]: 2025-11-25 16:51:57.128 254096 WARNING nova.compute.manager [req-ac9349de-7d2e-48e3-af04-de73d86332a7 req-85be4bf9-9214-4731-b1a8-210a0ea59194 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Received unexpected event network-vif-plugged-0cdb5ab1-8463-4494-a522-360862f2152e for instance with vm_state stopped and task_state None.
Nov 25 16:51:57 compute-0 podman[362226]: 2025-11-25 16:51:57.047495481 +0000 UTC m=+0.026660045 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:51:57 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:51:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf41299698ed50cf8536f09d4d7461e0bda39dcdb87fc33e90439ea37c80335d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:51:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf41299698ed50cf8536f09d4d7461e0bda39dcdb87fc33e90439ea37c80335d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:51:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf41299698ed50cf8536f09d4d7461e0bda39dcdb87fc33e90439ea37c80335d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:51:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf41299698ed50cf8536f09d4d7461e0bda39dcdb87fc33e90439ea37c80335d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:51:57 compute-0 podman[362226]: 2025-11-25 16:51:57.174431671 +0000 UTC m=+0.153596205 container init 82f495d2745214d17c2343157eb4f471bc392fd10eff2954e317a98f29e02ee3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:51:57 compute-0 podman[362226]: 2025-11-25 16:51:57.187012593 +0000 UTC m=+0.166177107 container start 82f495d2745214d17c2343157eb4f471bc392fd10eff2954e317a98f29e02ee3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_kapitsa, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:51:57 compute-0 podman[362226]: 2025-11-25 16:51:57.19024925 +0000 UTC m=+0.169413794 container attach 82f495d2745214d17c2343157eb4f471bc392fd10eff2954e317a98f29e02ee3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_kapitsa, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 16:51:57 compute-0 nova_compute[254092]: 2025-11-25 16:51:57.223 254096 DEBUG oslo_concurrency.processutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/afaa5e41-729a-48cb-bfc0-54a38b0dc96f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps1jccc7b" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:51:57 compute-0 ceph-mon[74985]: pgmap v2036: 321 pgs: 321 active+clean; 153 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 273 KiB/s rd, 2.3 MiB/s wr, 254 op/s
Nov 25 16:51:57 compute-0 ceph-mon[74985]: osdmap e269: 3 total, 3 up, 3 in
Nov 25 16:51:57 compute-0 nova_compute[254092]: 2025-11-25 16:51:57.261 254096 DEBUG nova.storage.rbd_utils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] rbd image afaa5e41-729a-48cb-bfc0-54a38b0dc96f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:51:57 compute-0 nova_compute[254092]: 2025-11-25 16:51:57.267 254096 DEBUG oslo_concurrency.processutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/afaa5e41-729a-48cb-bfc0-54a38b0dc96f/disk.config afaa5e41-729a-48cb-bfc0-54a38b0dc96f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:51:57 compute-0 nova_compute[254092]: 2025-11-25 16:51:57.438 254096 DEBUG nova.network.neutron [req-eab90b69-d839-400f-9ef7-d674e1a10ada req-00811098-b520-4f0a-bf46-7bc9db6ad20a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Updated VIF entry in instance network info cache for port 08e9db98-366d-49ea-aa38-b2d4e8a80e80. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:51:57 compute-0 nova_compute[254092]: 2025-11-25 16:51:57.440 254096 DEBUG nova.network.neutron [req-eab90b69-d839-400f-9ef7-d674e1a10ada req-00811098-b520-4f0a-bf46-7bc9db6ad20a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Updating instance_info_cache with network_info: [{"id": "08e9db98-366d-49ea-aa38-b2d4e8a80e80", "address": "fa:16:3e:e2:98:e4", "network": {"id": "461d8b90-d4fc-454e-911d-7fee7be073c4", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1141829243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d04ee87178c14bcc860cdca885ea5685", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08e9db98-36", "ovs_interfaceid": "08e9db98-366d-49ea-aa38-b2d4e8a80e80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:51:57 compute-0 nova_compute[254092]: 2025-11-25 16:51:57.447 254096 DEBUG oslo_concurrency.processutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/afaa5e41-729a-48cb-bfc0-54a38b0dc96f/disk.config afaa5e41-729a-48cb-bfc0-54a38b0dc96f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:51:57 compute-0 nova_compute[254092]: 2025-11-25 16:51:57.448 254096 INFO nova.virt.libvirt.driver [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Deleting local config drive /var/lib/nova/instances/afaa5e41-729a-48cb-bfc0-54a38b0dc96f/disk.config because it was imported into RBD.
Nov 25 16:51:57 compute-0 nova_compute[254092]: 2025-11-25 16:51:57.457 254096 DEBUG oslo_concurrency.lockutils [req-eab90b69-d839-400f-9ef7-d674e1a10ada req-00811098-b520-4f0a-bf46-7bc9db6ad20a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-afaa5e41-729a-48cb-bfc0-54a38b0dc96f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:51:57 compute-0 NetworkManager[48891]: <info>  [1764089517.5091] manager: (tap08e9db98-36): new Tun device (/org/freedesktop/NetworkManager/Devices/434)
Nov 25 16:51:57 compute-0 systemd-udevd[361798]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:51:57 compute-0 kernel: tap08e9db98-36: entered promiscuous mode
Nov 25 16:51:57 compute-0 nova_compute[254092]: 2025-11-25 16:51:57.517 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:57 compute-0 ovn_controller[153477]: 2025-11-25T16:51:57Z|01061|binding|INFO|Claiming lport 08e9db98-366d-49ea-aa38-b2d4e8a80e80 for this chassis.
Nov 25 16:51:57 compute-0 ovn_controller[153477]: 2025-11-25T16:51:57Z|01062|binding|INFO|08e9db98-366d-49ea-aa38-b2d4e8a80e80: Claiming fa:16:3e:e2:98:e4 10.100.0.12
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.528 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:98:e4 10.100.0.12'], port_security=['fa:16:3e:e2:98:e4 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'afaa5e41-729a-48cb-bfc0-54a38b0dc96f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-461d8b90-d4fc-454e-911d-7fee7be073c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd04ee87178c14bcc860cdca885ea5685', 'neutron:revision_number': '2', 'neutron:security_group_ids': '94eded8c-472b-4a0e-a390-113a914b266f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=957a0d26-cdd2-49bf-b411-89b73b8c3e75, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=08e9db98-366d-49ea-aa38-b2d4e8a80e80) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:51:57 compute-0 NetworkManager[48891]: <info>  [1764089517.5308] device (tap08e9db98-36): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:51:57 compute-0 NetworkManager[48891]: <info>  [1764089517.5321] device (tap08e9db98-36): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.532 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 08e9db98-366d-49ea-aa38-b2d4e8a80e80 in datapath 461d8b90-d4fc-454e-911d-7fee7be073c4 bound to our chassis
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.534 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 461d8b90-d4fc-454e-911d-7fee7be073c4
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.552 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f191d6f4-ce66-4562-bdd7-701e4975244d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.553 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap461d8b90-d1 in ovnmeta-461d8b90-d4fc-454e-911d-7fee7be073c4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.555 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap461d8b90-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.555 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c0299ef4-52dd-40c1-bc35-6adec128a2b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.556 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b3dbf404-dc92-47b9-904d-d72d5a8090ca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:57 compute-0 systemd-machined[216343]: New machine qemu-134-instance-0000006a.
Nov 25 16:51:57 compute-0 ovn_controller[153477]: 2025-11-25T16:51:57Z|01063|binding|INFO|Setting lport 08e9db98-366d-49ea-aa38-b2d4e8a80e80 ovn-installed in OVS
Nov 25 16:51:57 compute-0 ovn_controller[153477]: 2025-11-25T16:51:57Z|01064|binding|INFO|Setting lport 08e9db98-366d-49ea-aa38-b2d4e8a80e80 up in Southbound
Nov 25 16:51:57 compute-0 nova_compute[254092]: 2025-11-25 16:51:57.567 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:57 compute-0 nova_compute[254092]: 2025-11-25 16:51:57.575 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:57 compute-0 systemd[1]: Started Virtual Machine qemu-134-instance-0000006a.
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.580 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[8847d402-12de-4541-9f34-99b45a757fe1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.608 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[882fa0a7-8b3e-41dc-9641-45327cf919f8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.641 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[108c06fb-daa5-4540-b7d9-0daa1fcd5156]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.648 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5600c4e2-e07c-4d1a-8efc-3f18608ee0b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:57 compute-0 NetworkManager[48891]: <info>  [1764089517.6488] manager: (tap461d8b90-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/435)
Nov 25 16:51:57 compute-0 systemd-udevd[362319]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.681 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1ed77900-5062-4282-b954-e768b0bf3a33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.686 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a5bd6ee6-c4d5-4a23-bb04-3c8fb3d939fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:57 compute-0 NetworkManager[48891]: <info>  [1764089517.7165] device (tap461d8b90-d0): carrier: link connected
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.722 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8be1ae47-719e-4be3-821e-50e44a5d0439]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.741 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[66012451-3946-4456-8727-006be3e2cf31]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap461d8b90-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1b:e2:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 313], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597530, 'reachable_time': 22263, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362341, 'error': None, 'target': 'ovnmeta-461d8b90-d4fc-454e-911d-7fee7be073c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.759 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[12d2ff25-8197-4f62-8088-5ec7dc4f73a5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1b:e2a8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 597530, 'tstamp': 597530}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362342, 'error': None, 'target': 'ovnmeta-461d8b90-d4fc-454e-911d-7fee7be073c4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.779 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[99162a79-6734-400f-8cef-97c64c1cb91e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap461d8b90-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1b:e2:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 313], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597530, 'reachable_time': 22263, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 362343, 'error': None, 'target': 'ovnmeta-461d8b90-d4fc-454e-911d-7fee7be073c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.814 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fb7a1998-7812-49d8-8aef-1e6d7101d0b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.875 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c95cc443-d126-40f4-8b85-6f0a9c8ccdf8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.876 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap461d8b90-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.877 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.877 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap461d8b90-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:51:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2038: 321 pgs: 321 active+clean; 167 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 98 KiB/s rd, 3.6 MiB/s wr, 147 op/s
Nov 25 16:51:57 compute-0 NetworkManager[48891]: <info>  [1764089517.8798] manager: (tap461d8b90-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/436)
Nov 25 16:51:57 compute-0 nova_compute[254092]: 2025-11-25 16:51:57.881 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:57 compute-0 kernel: tap461d8b90-d0: entered promiscuous mode
Nov 25 16:51:57 compute-0 nova_compute[254092]: 2025-11-25 16:51:57.884 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.889 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap461d8b90-d0, col_values=(('external_ids', {'iface-id': '6b5b1d51-4b75-47f2-a227-9f92a6bb6041'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:51:57 compute-0 ovn_controller[153477]: 2025-11-25T16:51:57Z|01065|binding|INFO|Releasing lport 6b5b1d51-4b75-47f2-a227-9f92a6bb6041 from this chassis (sb_readonly=0)
Nov 25 16:51:57 compute-0 nova_compute[254092]: 2025-11-25 16:51:57.892 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:57 compute-0 nova_compute[254092]: 2025-11-25 16:51:57.909 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.911 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/461d8b90-d4fc-454e-911d-7fee7be073c4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/461d8b90-d4fc-454e-911d-7fee7be073c4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.912 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5f245930-71ec-4ddc-9edf-26ef6dea15ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.912 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-461d8b90-d4fc-454e-911d-7fee7be073c4
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/461d8b90-d4fc-454e-911d-7fee7be073c4.pid.haproxy
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 461d8b90-d4fc-454e-911d-7fee7be073c4
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:51:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:51:57.913 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-461d8b90-d4fc-454e-911d-7fee7be073c4', 'env', 'PROCESS_TAG=haproxy-461d8b90-d4fc-454e-911d-7fee7be073c4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/461d8b90-d4fc-454e-911d-7fee7be073c4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:51:57 compute-0 nova_compute[254092]: 2025-11-25 16:51:57.925 254096 DEBUG nova.compute.manager [req-8e31c836-46e1-4152-a23f-37a800fe3c78 req-305f5a38-6183-44cc-b29a-c72fd34aee19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Received event network-vif-plugged-08e9db98-366d-49ea-aa38-b2d4e8a80e80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:51:57 compute-0 nova_compute[254092]: 2025-11-25 16:51:57.926 254096 DEBUG oslo_concurrency.lockutils [req-8e31c836-46e1-4152-a23f-37a800fe3c78 req-305f5a38-6183-44cc-b29a-c72fd34aee19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "afaa5e41-729a-48cb-bfc0-54a38b0dc96f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:51:57 compute-0 nova_compute[254092]: 2025-11-25 16:51:57.926 254096 DEBUG oslo_concurrency.lockutils [req-8e31c836-46e1-4152-a23f-37a800fe3c78 req-305f5a38-6183-44cc-b29a-c72fd34aee19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "afaa5e41-729a-48cb-bfc0-54a38b0dc96f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:51:57 compute-0 nova_compute[254092]: 2025-11-25 16:51:57.926 254096 DEBUG oslo_concurrency.lockutils [req-8e31c836-46e1-4152-a23f-37a800fe3c78 req-305f5a38-6183-44cc-b29a-c72fd34aee19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "afaa5e41-729a-48cb-bfc0-54a38b0dc96f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:51:57 compute-0 nova_compute[254092]: 2025-11-25 16:51:57.926 254096 DEBUG nova.compute.manager [req-8e31c836-46e1-4152-a23f-37a800fe3c78 req-305f5a38-6183-44cc-b29a-c72fd34aee19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Processing event network-vif-plugged-08e9db98-366d-49ea-aa38-b2d4e8a80e80 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]: {
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:     "0": [
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:         {
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:             "devices": [
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:                 "/dev/loop3"
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:             ],
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:             "lv_name": "ceph_lv0",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:             "lv_size": "21470642176",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:             "name": "ceph_lv0",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:             "tags": {
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:                 "ceph.cluster_name": "ceph",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:                 "ceph.crush_device_class": "",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:                 "ceph.encrypted": "0",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:                 "ceph.osd_id": "0",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:                 "ceph.type": "block",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:                 "ceph.vdo": "0"
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:             },
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:             "type": "block",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:             "vg_name": "ceph_vg0"
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:         }
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:     ],
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:     "1": [
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:         {
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:             "devices": [
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:                 "/dev/loop4"
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:             ],
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:             "lv_name": "ceph_lv1",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:             "lv_size": "21470642176",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:             "name": "ceph_lv1",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:             "tags": {
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:                 "ceph.cluster_name": "ceph",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:                 "ceph.crush_device_class": "",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:                 "ceph.encrypted": "0",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:                 "ceph.osd_id": "1",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:                 "ceph.type": "block",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:                 "ceph.vdo": "0"
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:             },
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:             "type": "block",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:             "vg_name": "ceph_vg1"
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:         }
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:     ],
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:     "2": [
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:         {
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:             "devices": [
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:                 "/dev/loop5"
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:             ],
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:             "lv_name": "ceph_lv2",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:             "lv_size": "21470642176",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:             "name": "ceph_lv2",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:             "tags": {
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:                 "ceph.cluster_name": "ceph",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:                 "ceph.crush_device_class": "",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:                 "ceph.encrypted": "0",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:                 "ceph.osd_id": "2",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:                 "ceph.type": "block",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:                 "ceph.vdo": "0"
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:             },
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:             "type": "block",
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:             "vg_name": "ceph_vg2"
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:         }
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]:     ]
Nov 25 16:51:57 compute-0 intelligent_kapitsa[362244]: }
Nov 25 16:51:58 compute-0 nova_compute[254092]: 2025-11-25 16:51:58.001 254096 INFO nova.compute.manager [None req-fb96f65d-8461-4e36-9e2c-acc401e25086 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Get console output
Nov 25 16:51:58 compute-0 systemd[1]: libpod-82f495d2745214d17c2343157eb4f471bc392fd10eff2954e317a98f29e02ee3.scope: Deactivated successfully.
Nov 25 16:51:58 compute-0 podman[362226]: 2025-11-25 16:51:58.016587636 +0000 UTC m=+0.995752150 container died 82f495d2745214d17c2343157eb4f471bc392fd10eff2954e317a98f29e02ee3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 16:51:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf41299698ed50cf8536f09d4d7461e0bda39dcdb87fc33e90439ea37c80335d-merged.mount: Deactivated successfully.
Nov 25 16:51:58 compute-0 podman[362226]: 2025-11-25 16:51:58.077397989 +0000 UTC m=+1.056562503 container remove 82f495d2745214d17c2343157eb4f471bc392fd10eff2954e317a98f29e02ee3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_kapitsa, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:51:58 compute-0 systemd[1]: libpod-conmon-82f495d2745214d17c2343157eb4f471bc392fd10eff2954e317a98f29e02ee3.scope: Deactivated successfully.
Nov 25 16:51:58 compute-0 sudo[362118]: pam_unix(sudo:session): session closed for user root
Nov 25 16:51:58 compute-0 sudo[362400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:51:58 compute-0 sudo[362400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:51:58 compute-0 sudo[362400]: pam_unix(sudo:session): session closed for user root
Nov 25 16:51:58 compute-0 sudo[362449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:51:58 compute-0 sudo[362449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:51:58 compute-0 sudo[362449]: pam_unix(sudo:session): session closed for user root
Nov 25 16:51:58 compute-0 nova_compute[254092]: 2025-11-25 16:51:58.332 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089518.3221192, afaa5e41-729a-48cb-bfc0-54a38b0dc96f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:51:58 compute-0 nova_compute[254092]: 2025-11-25 16:51:58.334 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] VM Started (Lifecycle Event)
Nov 25 16:51:58 compute-0 nova_compute[254092]: 2025-11-25 16:51:58.336 254096 DEBUG nova.compute.manager [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:51:58 compute-0 nova_compute[254092]: 2025-11-25 16:51:58.345 254096 DEBUG nova.virt.libvirt.driver [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:51:58 compute-0 podman[362485]: 2025-11-25 16:51:58.349626737 +0000 UTC m=+0.050951156 container create b3d144886e542deab80cc1976c1abdb8c4310fb17b7fa96d127e809551324c74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-461d8b90-d4fc-454e-911d-7fee7be073c4, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 25 16:51:58 compute-0 nova_compute[254092]: 2025-11-25 16:51:58.349 254096 INFO nova.virt.libvirt.driver [-] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Instance spawned successfully.
Nov 25 16:51:58 compute-0 nova_compute[254092]: 2025-11-25 16:51:58.350 254096 DEBUG nova.virt.libvirt.driver [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:51:58 compute-0 sudo[362491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:51:58 compute-0 nova_compute[254092]: 2025-11-25 16:51:58.361 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:51:58 compute-0 nova_compute[254092]: 2025-11-25 16:51:58.362 254096 DEBUG nova.objects.instance [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'flavor' on Instance uuid 7368c721-3e2a-4635-b2d8-5703d20438d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:51:58 compute-0 sudo[362491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:51:58 compute-0 sudo[362491]: pam_unix(sudo:session): session closed for user root
Nov 25 16:51:58 compute-0 nova_compute[254092]: 2025-11-25 16:51:58.369 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:51:58 compute-0 nova_compute[254092]: 2025-11-25 16:51:58.373 254096 DEBUG nova.virt.libvirt.driver [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:51:58 compute-0 nova_compute[254092]: 2025-11-25 16:51:58.373 254096 DEBUG nova.virt.libvirt.driver [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:51:58 compute-0 nova_compute[254092]: 2025-11-25 16:51:58.374 254096 DEBUG nova.virt.libvirt.driver [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:51:58 compute-0 nova_compute[254092]: 2025-11-25 16:51:58.374 254096 DEBUG nova.virt.libvirt.driver [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:51:58 compute-0 nova_compute[254092]: 2025-11-25 16:51:58.375 254096 DEBUG nova.virt.libvirt.driver [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:51:58 compute-0 nova_compute[254092]: 2025-11-25 16:51:58.375 254096 DEBUG nova.virt.libvirt.driver [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:51:58 compute-0 nova_compute[254092]: 2025-11-25 16:51:58.384 254096 DEBUG oslo_concurrency.lockutils [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "refresh_cache-7368c721-3e2a-4635-b2d8-5703d20438d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:51:58 compute-0 nova_compute[254092]: 2025-11-25 16:51:58.384 254096 DEBUG oslo_concurrency.lockutils [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquired lock "refresh_cache-7368c721-3e2a-4635-b2d8-5703d20438d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:51:58 compute-0 nova_compute[254092]: 2025-11-25 16:51:58.384 254096 DEBUG nova.network.neutron [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:51:58 compute-0 nova_compute[254092]: 2025-11-25 16:51:58.384 254096 DEBUG nova.objects.instance [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'info_cache' on Instance uuid 7368c721-3e2a-4635-b2d8-5703d20438d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:51:58 compute-0 nova_compute[254092]: 2025-11-25 16:51:58.406 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:51:58 compute-0 nova_compute[254092]: 2025-11-25 16:51:58.407 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089518.3224535, afaa5e41-729a-48cb-bfc0-54a38b0dc96f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:51:58 compute-0 nova_compute[254092]: 2025-11-25 16:51:58.407 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] VM Paused (Lifecycle Event)
Nov 25 16:51:58 compute-0 systemd[1]: Started libpod-conmon-b3d144886e542deab80cc1976c1abdb8c4310fb17b7fa96d127e809551324c74.scope.
Nov 25 16:51:58 compute-0 podman[362485]: 2025-11-25 16:51:58.323836926 +0000 UTC m=+0.025161365 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:51:58 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:51:58 compute-0 sudo[362521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 16:51:58 compute-0 nova_compute[254092]: 2025-11-25 16:51:58.437 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:51:58 compute-0 sudo[362521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:51:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c6aa58a7c081ad9b5f642b8ac1385a57de4dfb79d58b6ff8d6feed85edc246c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:51:58 compute-0 nova_compute[254092]: 2025-11-25 16:51:58.442 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089518.3439662, afaa5e41-729a-48cb-bfc0-54a38b0dc96f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:51:58 compute-0 nova_compute[254092]: 2025-11-25 16:51:58.442 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] VM Resumed (Lifecycle Event)
Nov 25 16:51:58 compute-0 podman[362485]: 2025-11-25 16:51:58.455170285 +0000 UTC m=+0.156494704 container init b3d144886e542deab80cc1976c1abdb8c4310fb17b7fa96d127e809551324c74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-461d8b90-d4fc-454e-911d-7fee7be073c4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:51:58 compute-0 nova_compute[254092]: 2025-11-25 16:51:58.460 254096 INFO nova.compute.manager [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Took 6.41 seconds to spawn the instance on the hypervisor.
Nov 25 16:51:58 compute-0 nova_compute[254092]: 2025-11-25 16:51:58.460 254096 DEBUG nova.compute.manager [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:51:58 compute-0 podman[362485]: 2025-11-25 16:51:58.461954579 +0000 UTC m=+0.163278998 container start b3d144886e542deab80cc1976c1abdb8c4310fb17b7fa96d127e809551324c74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-461d8b90-d4fc-454e-911d-7fee7be073c4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 16:51:58 compute-0 nova_compute[254092]: 2025-11-25 16:51:58.461 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:51:58 compute-0 nova_compute[254092]: 2025-11-25 16:51:58.471 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:51:58 compute-0 neutron-haproxy-ovnmeta-461d8b90-d4fc-454e-911d-7fee7be073c4[362546]: [NOTICE]   (362554) : New worker (362556) forked
Nov 25 16:51:58 compute-0 neutron-haproxy-ovnmeta-461d8b90-d4fc-454e-911d-7fee7be073c4[362546]: [NOTICE]   (362554) : Loading success.
Nov 25 16:51:58 compute-0 nova_compute[254092]: 2025-11-25 16:51:58.504 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:51:58 compute-0 nova_compute[254092]: 2025-11-25 16:51:58.523 254096 INFO nova.compute.manager [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Took 7.29 seconds to build instance.
Nov 25 16:51:58 compute-0 nova_compute[254092]: 2025-11-25 16:51:58.540 254096 DEBUG oslo_concurrency.lockutils [None req-a17faf0f-d52b-440d-a267-2059ead76711 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Lock "afaa5e41-729a-48cb-bfc0-54a38b0dc96f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.371s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:51:58 compute-0 podman[362605]: 2025-11-25 16:51:58.782129271 +0000 UTC m=+0.043910605 container create 39a75da667b564f949d593f7dfb7580da679a8d654d9cbd8acd7a52023eff02b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hellman, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 16:51:58 compute-0 nova_compute[254092]: 2025-11-25 16:51:58.813 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:51:58 compute-0 systemd[1]: Started libpod-conmon-39a75da667b564f949d593f7dfb7580da679a8d654d9cbd8acd7a52023eff02b.scope.
Nov 25 16:51:58 compute-0 podman[362605]: 2025-11-25 16:51:58.762918249 +0000 UTC m=+0.024699573 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:51:58 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:51:58 compute-0 podman[362605]: 2025-11-25 16:51:58.904315471 +0000 UTC m=+0.166096825 container init 39a75da667b564f949d593f7dfb7580da679a8d654d9cbd8acd7a52023eff02b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:51:58 compute-0 podman[362605]: 2025-11-25 16:51:58.915350261 +0000 UTC m=+0.177131585 container start 39a75da667b564f949d593f7dfb7580da679a8d654d9cbd8acd7a52023eff02b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hellman, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:51:58 compute-0 podman[362605]: 2025-11-25 16:51:58.919409901 +0000 UTC m=+0.181191295 container attach 39a75da667b564f949d593f7dfb7580da679a8d654d9cbd8acd7a52023eff02b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hellman, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 16:51:58 compute-0 sad_hellman[362622]: 167 167
Nov 25 16:51:58 compute-0 systemd[1]: libpod-39a75da667b564f949d593f7dfb7580da679a8d654d9cbd8acd7a52023eff02b.scope: Deactivated successfully.
Nov 25 16:51:58 compute-0 podman[362605]: 2025-11-25 16:51:58.924574391 +0000 UTC m=+0.186355735 container died 39a75da667b564f949d593f7dfb7580da679a8d654d9cbd8acd7a52023eff02b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:51:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-cfa958b78d903d4f52217048ec5002aad5f94001f4df740fa78acf0490c58f88-merged.mount: Deactivated successfully.
Nov 25 16:51:58 compute-0 podman[362605]: 2025-11-25 16:51:58.972499224 +0000 UTC m=+0.234280548 container remove 39a75da667b564f949d593f7dfb7580da679a8d654d9cbd8acd7a52023eff02b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hellman, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 16:51:58 compute-0 systemd[1]: libpod-conmon-39a75da667b564f949d593f7dfb7580da679a8d654d9cbd8acd7a52023eff02b.scope: Deactivated successfully.
Nov 25 16:51:59 compute-0 podman[362649]: 2025-11-25 16:51:59.140204461 +0000 UTC m=+0.040910552 container create a07f937a9bb3e0f2a3154ad7390caa8c2b3f5d51dcd014229b04cdf345a3311c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_carver, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 16:51:59 compute-0 systemd[1]: Started libpod-conmon-a07f937a9bb3e0f2a3154ad7390caa8c2b3f5d51dcd014229b04cdf345a3311c.scope.
Nov 25 16:51:59 compute-0 podman[362649]: 2025-11-25 16:51:59.122473639 +0000 UTC m=+0.023179760 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:51:59 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:51:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cd324e66b5ff5043b211457a810a64047c3b3fc3c62c74f2f91942ea8c4e8c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:51:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cd324e66b5ff5043b211457a810a64047c3b3fc3c62c74f2f91942ea8c4e8c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:51:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cd324e66b5ff5043b211457a810a64047c3b3fc3c62c74f2f91942ea8c4e8c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:51:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cd324e66b5ff5043b211457a810a64047c3b3fc3c62c74f2f91942ea8c4e8c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:51:59 compute-0 podman[362649]: 2025-11-25 16:51:59.250358735 +0000 UTC m=+0.151064866 container init a07f937a9bb3e0f2a3154ad7390caa8c2b3f5d51dcd014229b04cdf345a3311c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_carver, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 16:51:59 compute-0 podman[362649]: 2025-11-25 16:51:59.258207908 +0000 UTC m=+0.158914039 container start a07f937a9bb3e0f2a3154ad7390caa8c2b3f5d51dcd014229b04cdf345a3311c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_carver, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:51:59 compute-0 ceph-mon[74985]: pgmap v2038: 321 pgs: 321 active+clean; 167 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 98 KiB/s rd, 3.6 MiB/s wr, 147 op/s
Nov 25 16:51:59 compute-0 podman[362649]: 2025-11-25 16:51:59.262811493 +0000 UTC m=+0.163517664 container attach a07f937a9bb3e0f2a3154ad7390caa8c2b3f5d51dcd014229b04cdf345a3311c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_carver, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:51:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2039: 321 pgs: 321 active+clean; 167 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 77 KiB/s rd, 2.8 MiB/s wr, 115 op/s
Nov 25 16:52:00 compute-0 nova_compute[254092]: 2025-11-25 16:52:00.209 254096 DEBUG nova.compute.manager [req-942ace91-d52e-4ffb-a5af-08569329298a req-19bc59b9-4620-4714-a1ff-41c373820fcf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Received event network-vif-plugged-08e9db98-366d-49ea-aa38-b2d4e8a80e80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:52:00 compute-0 nova_compute[254092]: 2025-11-25 16:52:00.211 254096 DEBUG oslo_concurrency.lockutils [req-942ace91-d52e-4ffb-a5af-08569329298a req-19bc59b9-4620-4714-a1ff-41c373820fcf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "afaa5e41-729a-48cb-bfc0-54a38b0dc96f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:52:00 compute-0 nova_compute[254092]: 2025-11-25 16:52:00.211 254096 DEBUG oslo_concurrency.lockutils [req-942ace91-d52e-4ffb-a5af-08569329298a req-19bc59b9-4620-4714-a1ff-41c373820fcf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "afaa5e41-729a-48cb-bfc0-54a38b0dc96f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:52:00 compute-0 nova_compute[254092]: 2025-11-25 16:52:00.211 254096 DEBUG oslo_concurrency.lockutils [req-942ace91-d52e-4ffb-a5af-08569329298a req-19bc59b9-4620-4714-a1ff-41c373820fcf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "afaa5e41-729a-48cb-bfc0-54a38b0dc96f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:52:00 compute-0 nova_compute[254092]: 2025-11-25 16:52:00.212 254096 DEBUG nova.compute.manager [req-942ace91-d52e-4ffb-a5af-08569329298a req-19bc59b9-4620-4714-a1ff-41c373820fcf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] No waiting events found dispatching network-vif-plugged-08e9db98-366d-49ea-aa38-b2d4e8a80e80 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:52:00 compute-0 nova_compute[254092]: 2025-11-25 16:52:00.212 254096 WARNING nova.compute.manager [req-942ace91-d52e-4ffb-a5af-08569329298a req-19bc59b9-4620-4714-a1ff-41c373820fcf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Received unexpected event network-vif-plugged-08e9db98-366d-49ea-aa38-b2d4e8a80e80 for instance with vm_state active and task_state None.
Nov 25 16:52:00 compute-0 crazy_carver[362666]: {
Nov 25 16:52:00 compute-0 crazy_carver[362666]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 16:52:00 compute-0 crazy_carver[362666]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:52:00 compute-0 crazy_carver[362666]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 16:52:00 compute-0 crazy_carver[362666]:         "osd_id": 1,
Nov 25 16:52:00 compute-0 crazy_carver[362666]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:52:00 compute-0 crazy_carver[362666]:         "type": "bluestore"
Nov 25 16:52:00 compute-0 crazy_carver[362666]:     },
Nov 25 16:52:00 compute-0 crazy_carver[362666]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 16:52:00 compute-0 crazy_carver[362666]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:52:00 compute-0 crazy_carver[362666]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 16:52:00 compute-0 crazy_carver[362666]:         "osd_id": 2,
Nov 25 16:52:00 compute-0 crazy_carver[362666]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:52:00 compute-0 crazy_carver[362666]:         "type": "bluestore"
Nov 25 16:52:00 compute-0 crazy_carver[362666]:     },
Nov 25 16:52:00 compute-0 crazy_carver[362666]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 16:52:00 compute-0 crazy_carver[362666]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:52:00 compute-0 crazy_carver[362666]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 16:52:00 compute-0 crazy_carver[362666]:         "osd_id": 0,
Nov 25 16:52:00 compute-0 crazy_carver[362666]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:52:00 compute-0 crazy_carver[362666]:         "type": "bluestore"
Nov 25 16:52:00 compute-0 crazy_carver[362666]:     }
Nov 25 16:52:00 compute-0 crazy_carver[362666]: }
Nov 25 16:52:00 compute-0 systemd[1]: libpod-a07f937a9bb3e0f2a3154ad7390caa8c2b3f5d51dcd014229b04cdf345a3311c.scope: Deactivated successfully.
Nov 25 16:52:00 compute-0 podman[362649]: 2025-11-25 16:52:00.258758898 +0000 UTC m=+1.159464999 container died a07f937a9bb3e0f2a3154ad7390caa8c2b3f5d51dcd014229b04cdf345a3311c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_carver, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:52:00 compute-0 systemd[1]: libpod-a07f937a9bb3e0f2a3154ad7390caa8c2b3f5d51dcd014229b04cdf345a3311c.scope: Consumed 1.004s CPU time.
Nov 25 16:52:00 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #93. Immutable memtables: 0.
Nov 25 16:52:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:00.269204) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 16:52:00 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 93
Nov 25 16:52:00 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089520269274, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 285, "num_deletes": 251, "total_data_size": 69887, "memory_usage": 75368, "flush_reason": "Manual Compaction"}
Nov 25 16:52:00 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #94: started
Nov 25 16:52:00 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089520271397, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 94, "file_size": 69520, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42610, "largest_seqno": 42894, "table_properties": {"data_size": 67592, "index_size": 156, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4952, "raw_average_key_size": 18, "raw_value_size": 63852, "raw_average_value_size": 236, "num_data_blocks": 7, "num_entries": 270, "num_filter_entries": 270, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764089517, "oldest_key_time": 1764089517, "file_creation_time": 1764089520, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 94, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:52:00 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 2220 microseconds, and 1170 cpu microseconds.
Nov 25 16:52:00 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:52:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:00.271435) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #94: 69520 bytes OK
Nov 25 16:52:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:00.271449) [db/memtable_list.cc:519] [default] Level-0 commit table #94 started
Nov 25 16:52:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:00.272427) [db/memtable_list.cc:722] [default] Level-0 commit table #94: memtable #1 done
Nov 25 16:52:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:00.272441) EVENT_LOG_v1 {"time_micros": 1764089520272436, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 16:52:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:00.272455) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 16:52:00 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 67761, prev total WAL file size 67761, number of live WAL files 2.
Nov 25 16:52:00 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000090.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:52:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:00.272754) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Nov 25 16:52:00 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 16:52:00 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [94(67KB)], [92(8660KB)]
Nov 25 16:52:00 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089520272881, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [94], "files_L6": [92], "score": -1, "input_data_size": 8937854, "oldest_snapshot_seqno": -1}
Nov 25 16:52:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-6cd324e66b5ff5043b211457a810a64047c3b3fc3c62c74f2f91942ea8c4e8c1-merged.mount: Deactivated successfully.
Nov 25 16:52:00 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #95: 6497 keys, 7286138 bytes, temperature: kUnknown
Nov 25 16:52:00 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089520303822, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 95, "file_size": 7286138, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7245173, "index_size": 23639, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16261, "raw_key_size": 168547, "raw_average_key_size": 25, "raw_value_size": 7131038, "raw_average_value_size": 1097, "num_data_blocks": 921, "num_entries": 6497, "num_filter_entries": 6497, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764089520, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:52:00 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:52:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:00.304157) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 7286138 bytes
Nov 25 16:52:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:00.305873) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 287.4 rd, 234.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 8.5 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(233.4) write-amplify(104.8) OK, records in: 7006, records dropped: 509 output_compression: NoCompression
Nov 25 16:52:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:00.305890) EVENT_LOG_v1 {"time_micros": 1764089520305882, "job": 54, "event": "compaction_finished", "compaction_time_micros": 31096, "compaction_time_cpu_micros": 16152, "output_level": 6, "num_output_files": 1, "total_output_size": 7286138, "num_input_records": 7006, "num_output_records": 6497, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 16:52:00 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000094.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:52:00 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089520306331, "job": 54, "event": "table_file_deletion", "file_number": 94}
Nov 25 16:52:00 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:52:00 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089520307782, "job": 54, "event": "table_file_deletion", "file_number": 92}
Nov 25 16:52:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:00.272712) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:52:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:00.307878) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:52:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:00.307882) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:52:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:00.307883) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:52:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:00.307885) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:52:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:00.307886) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:52:00 compute-0 podman[362649]: 2025-11-25 16:52:00.322815239 +0000 UTC m=+1.223521340 container remove a07f937a9bb3e0f2a3154ad7390caa8c2b3f5d51dcd014229b04cdf345a3311c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_carver, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 16:52:00 compute-0 systemd[1]: libpod-conmon-a07f937a9bb3e0f2a3154ad7390caa8c2b3f5d51dcd014229b04cdf345a3311c.scope: Deactivated successfully.
Nov 25 16:52:00 compute-0 sudo[362521]: pam_unix(sudo:session): session closed for user root
Nov 25 16:52:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:52:00 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:52:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:52:00 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:52:00 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 1271be50-61d3-4c31-9c30-1eee1d9c14d0 does not exist
Nov 25 16:52:00 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev c9d3a61f-dac4-4bb1-9e26-c617601b2e76 does not exist
Nov 25 16:52:00 compute-0 sudo[362711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:52:00 compute-0 sudo[362711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:52:00 compute-0 sudo[362711]: pam_unix(sudo:session): session closed for user root
Nov 25 16:52:00 compute-0 sudo[362736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 16:52:00 compute-0 sudo[362736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:52:00 compute-0 sudo[362736]: pam_unix(sudo:session): session closed for user root
Nov 25 16:52:00 compute-0 nova_compute[254092]: 2025-11-25 16:52:00.705 254096 DEBUG nova.network.neutron [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Updating instance_info_cache with network_info: [{"id": "0cdb5ab1-8463-4494-a522-360862f2152e", "address": "fa:16:3e:ba:99:c7", "network": {"id": "1b70d379-8b3d-4361-b11d-cafbb578194a", "bridge": "br-int", "label": "tempest-network-smoke--1591904689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cdb5ab1-84", "ovs_interfaceid": "0cdb5ab1-8463-4494-a522-360862f2152e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:52:00 compute-0 nova_compute[254092]: 2025-11-25 16:52:00.733 254096 DEBUG oslo_concurrency.lockutils [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Releasing lock "refresh_cache-7368c721-3e2a-4635-b2d8-5703d20438d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:52:00 compute-0 nova_compute[254092]: 2025-11-25 16:52:00.756 254096 INFO nova.virt.libvirt.driver [-] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Instance destroyed successfully.
Nov 25 16:52:00 compute-0 nova_compute[254092]: 2025-11-25 16:52:00.757 254096 DEBUG nova.objects.instance [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'numa_topology' on Instance uuid 7368c721-3e2a-4635-b2d8-5703d20438d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:52:00 compute-0 nova_compute[254092]: 2025-11-25 16:52:00.810 254096 DEBUG nova.objects.instance [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'resources' on Instance uuid 7368c721-3e2a-4635-b2d8-5703d20438d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:52:00 compute-0 nova_compute[254092]: 2025-11-25 16:52:00.829 254096 DEBUG nova.virt.libvirt.vif [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:51:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1076158717',display_name='tempest-TestNetworkAdvancedServerOps-server-1076158717',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1076158717',id=105,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDWxuW33AB4qPGJ0diTXxVYa0pjcMkAB+FGOJimEwNbypH8IBF9aIPs3ix1DTpwsI1XlgcWOzbZRzhhJIctw3iK4wzXVfa+Ic+4kx7lWRJ84QudBAyK5H0oeGkPARcnCCA==',key_name='tempest-TestNetworkAdvancedServerOps-383328129',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:51:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='f92d76b523b84ec9b645382bdd3192c9',ramdisk_id='',reservation_id='r-y40qfw1a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1110744031',owner_user_name='tempest-TestNetworkAdvancedServerOps-1110744031-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:51:55Z,user_data=None,user_id='3b30410358c448a693a9dc40ee6aafb0',uuid=7368c721-3e2a-4635-b2d8-5703d20438d3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "0cdb5ab1-8463-4494-a522-360862f2152e", "address": "fa:16:3e:ba:99:c7", "network": {"id": "1b70d379-8b3d-4361-b11d-cafbb578194a", "bridge": "br-int", "label": "tempest-network-smoke--1591904689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cdb5ab1-84", "ovs_interfaceid": "0cdb5ab1-8463-4494-a522-360862f2152e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:52:00 compute-0 nova_compute[254092]: 2025-11-25 16:52:00.830 254096 DEBUG nova.network.os_vif_util [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converting VIF {"id": "0cdb5ab1-8463-4494-a522-360862f2152e", "address": "fa:16:3e:ba:99:c7", "network": {"id": "1b70d379-8b3d-4361-b11d-cafbb578194a", "bridge": "br-int", "label": "tempest-network-smoke--1591904689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cdb5ab1-84", "ovs_interfaceid": "0cdb5ab1-8463-4494-a522-360862f2152e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:52:00 compute-0 nova_compute[254092]: 2025-11-25 16:52:00.831 254096 DEBUG nova.network.os_vif_util [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:99:c7,bridge_name='br-int',has_traffic_filtering=True,id=0cdb5ab1-8463-4494-a522-360862f2152e,network=Network(1b70d379-8b3d-4361-b11d-cafbb578194a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0cdb5ab1-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:52:00 compute-0 nova_compute[254092]: 2025-11-25 16:52:00.831 254096 DEBUG os_vif [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:99:c7,bridge_name='br-int',has_traffic_filtering=True,id=0cdb5ab1-8463-4494-a522-360862f2152e,network=Network(1b70d379-8b3d-4361-b11d-cafbb578194a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0cdb5ab1-84') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:52:00 compute-0 nova_compute[254092]: 2025-11-25 16:52:00.833 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:00 compute-0 nova_compute[254092]: 2025-11-25 16:52:00.833 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0cdb5ab1-84, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:52:00 compute-0 nova_compute[254092]: 2025-11-25 16:52:00.834 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:00 compute-0 nova_compute[254092]: 2025-11-25 16:52:00.835 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:00 compute-0 nova_compute[254092]: 2025-11-25 16:52:00.837 254096 INFO os_vif [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:99:c7,bridge_name='br-int',has_traffic_filtering=True,id=0cdb5ab1-8463-4494-a522-360862f2152e,network=Network(1b70d379-8b3d-4361-b11d-cafbb578194a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0cdb5ab1-84')
Nov 25 16:52:00 compute-0 nova_compute[254092]: 2025-11-25 16:52:00.843 254096 DEBUG nova.virt.libvirt.driver [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Start _get_guest_xml network_info=[{"id": "0cdb5ab1-8463-4494-a522-360862f2152e", "address": "fa:16:3e:ba:99:c7", "network": {"id": "1b70d379-8b3d-4361-b11d-cafbb578194a", "bridge": "br-int", "label": "tempest-network-smoke--1591904689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cdb5ab1-84", "ovs_interfaceid": "0cdb5ab1-8463-4494-a522-360862f2152e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:52:00 compute-0 nova_compute[254092]: 2025-11-25 16:52:00.847 254096 WARNING nova.virt.libvirt.driver [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:52:00 compute-0 nova_compute[254092]: 2025-11-25 16:52:00.854 254096 DEBUG nova.virt.libvirt.host [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:52:00 compute-0 nova_compute[254092]: 2025-11-25 16:52:00.855 254096 DEBUG nova.virt.libvirt.host [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:52:00 compute-0 nova_compute[254092]: 2025-11-25 16:52:00.858 254096 DEBUG nova.virt.libvirt.host [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:52:00 compute-0 nova_compute[254092]: 2025-11-25 16:52:00.858 254096 DEBUG nova.virt.libvirt.host [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:52:00 compute-0 nova_compute[254092]: 2025-11-25 16:52:00.859 254096 DEBUG nova.virt.libvirt.driver [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:52:00 compute-0 nova_compute[254092]: 2025-11-25 16:52:00.859 254096 DEBUG nova.virt.hardware [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:52:00 compute-0 nova_compute[254092]: 2025-11-25 16:52:00.859 254096 DEBUG nova.virt.hardware [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:52:00 compute-0 nova_compute[254092]: 2025-11-25 16:52:00.859 254096 DEBUG nova.virt.hardware [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:52:00 compute-0 nova_compute[254092]: 2025-11-25 16:52:00.860 254096 DEBUG nova.virt.hardware [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:52:00 compute-0 nova_compute[254092]: 2025-11-25 16:52:00.860 254096 DEBUG nova.virt.hardware [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:52:00 compute-0 nova_compute[254092]: 2025-11-25 16:52:00.860 254096 DEBUG nova.virt.hardware [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:52:00 compute-0 nova_compute[254092]: 2025-11-25 16:52:00.860 254096 DEBUG nova.virt.hardware [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:52:00 compute-0 nova_compute[254092]: 2025-11-25 16:52:00.860 254096 DEBUG nova.virt.hardware [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:52:00 compute-0 nova_compute[254092]: 2025-11-25 16:52:00.861 254096 DEBUG nova.virt.hardware [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:52:00 compute-0 nova_compute[254092]: 2025-11-25 16:52:00.861 254096 DEBUG nova.virt.hardware [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:52:00 compute-0 nova_compute[254092]: 2025-11-25 16:52:00.861 254096 DEBUG nova.virt.hardware [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:52:00 compute-0 nova_compute[254092]: 2025-11-25 16:52:00.861 254096 DEBUG nova.objects.instance [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 7368c721-3e2a-4635-b2d8-5703d20438d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:52:00 compute-0 nova_compute[254092]: 2025-11-25 16:52:00.873 254096 DEBUG oslo_concurrency.processutils [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:52:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:52:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e269 do_prune osdmap full prune enabled
Nov 25 16:52:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e270 e270: 3 total, 3 up, 3 in
Nov 25 16:52:01 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e270: 3 total, 3 up, 3 in
Nov 25 16:52:01 compute-0 ceph-mon[74985]: pgmap v2039: 321 pgs: 321 active+clean; 167 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 77 KiB/s rd, 2.8 MiB/s wr, 115 op/s
Nov 25 16:52:01 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:52:01 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:52:01 compute-0 ceph-mon[74985]: osdmap e270: 3 total, 3 up, 3 in
Nov 25 16:52:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:52:01 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3020784841' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:52:01 compute-0 nova_compute[254092]: 2025-11-25 16:52:01.308 254096 DEBUG oslo_concurrency.processutils [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:52:01 compute-0 nova_compute[254092]: 2025-11-25 16:52:01.342 254096 DEBUG oslo_concurrency.processutils [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:52:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:52:01 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/405001762' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:52:01 compute-0 nova_compute[254092]: 2025-11-25 16:52:01.782 254096 DEBUG oslo_concurrency.processutils [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:52:01 compute-0 nova_compute[254092]: 2025-11-25 16:52:01.783 254096 DEBUG nova.virt.libvirt.vif [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:51:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1076158717',display_name='tempest-TestNetworkAdvancedServerOps-server-1076158717',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1076158717',id=105,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDWxuW33AB4qPGJ0diTXxVYa0pjcMkAB+FGOJimEwNbypH8IBF9aIPs3ix1DTpwsI1XlgcWOzbZRzhhJIctw3iK4wzXVfa+Ic+4kx7lWRJ84QudBAyK5H0oeGkPARcnCCA==',key_name='tempest-TestNetworkAdvancedServerOps-383328129',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:51:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='f92d76b523b84ec9b645382bdd3192c9',ramdisk_id='',reservation_id='r-y40qfw1a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1110744031',owner_user_name='tempest-TestNetworkAdvancedServerOps-1110744031-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:51:55Z,user_data=None,user_id='3b30410358c448a693a9dc40ee6aafb0',uuid=7368c721-3e2a-4635-b2d8-5703d20438d3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "0cdb5ab1-8463-4494-a522-360862f2152e", "address": "fa:16:3e:ba:99:c7", "network": {"id": "1b70d379-8b3d-4361-b11d-cafbb578194a", "bridge": "br-int", "label": "tempest-network-smoke--1591904689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cdb5ab1-84", "ovs_interfaceid": "0cdb5ab1-8463-4494-a522-360862f2152e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:52:01 compute-0 nova_compute[254092]: 2025-11-25 16:52:01.784 254096 DEBUG nova.network.os_vif_util [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converting VIF {"id": "0cdb5ab1-8463-4494-a522-360862f2152e", "address": "fa:16:3e:ba:99:c7", "network": {"id": "1b70d379-8b3d-4361-b11d-cafbb578194a", "bridge": "br-int", "label": "tempest-network-smoke--1591904689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cdb5ab1-84", "ovs_interfaceid": "0cdb5ab1-8463-4494-a522-360862f2152e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:52:01 compute-0 nova_compute[254092]: 2025-11-25 16:52:01.785 254096 DEBUG nova.network.os_vif_util [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:99:c7,bridge_name='br-int',has_traffic_filtering=True,id=0cdb5ab1-8463-4494-a522-360862f2152e,network=Network(1b70d379-8b3d-4361-b11d-cafbb578194a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0cdb5ab1-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:52:01 compute-0 nova_compute[254092]: 2025-11-25 16:52:01.786 254096 DEBUG nova.objects.instance [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7368c721-3e2a-4635-b2d8-5703d20438d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:52:01 compute-0 nova_compute[254092]: 2025-11-25 16:52:01.804 254096 DEBUG nova.virt.libvirt.driver [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:52:01 compute-0 nova_compute[254092]:   <uuid>7368c721-3e2a-4635-b2d8-5703d20438d3</uuid>
Nov 25 16:52:01 compute-0 nova_compute[254092]:   <name>instance-00000069</name>
Nov 25 16:52:01 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:52:01 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:52:01 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:52:01 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1076158717</nova:name>
Nov 25 16:52:01 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:52:00</nova:creationTime>
Nov 25 16:52:01 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:52:01 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:52:01 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:52:01 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:52:01 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:52:01 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:52:01 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:52:01 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:52:01 compute-0 nova_compute[254092]:         <nova:user uuid="3b30410358c448a693a9dc40ee6aafb0">tempest-TestNetworkAdvancedServerOps-1110744031-project-member</nova:user>
Nov 25 16:52:01 compute-0 nova_compute[254092]:         <nova:project uuid="f92d76b523b84ec9b645382bdd3192c9">tempest-TestNetworkAdvancedServerOps-1110744031</nova:project>
Nov 25 16:52:01 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:52:01 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:52:01 compute-0 nova_compute[254092]:         <nova:port uuid="0cdb5ab1-8463-4494-a522-360862f2152e">
Nov 25 16:52:01 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:52:01 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:52:01 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:52:01 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:52:01 compute-0 nova_compute[254092]:     <system>
Nov 25 16:52:01 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:52:01 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:52:01 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:52:01 compute-0 nova_compute[254092]:       <entry name="serial">7368c721-3e2a-4635-b2d8-5703d20438d3</entry>
Nov 25 16:52:01 compute-0 nova_compute[254092]:       <entry name="uuid">7368c721-3e2a-4635-b2d8-5703d20438d3</entry>
Nov 25 16:52:01 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     </system>
Nov 25 16:52:01 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:52:01 compute-0 nova_compute[254092]:   <os>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:   </os>
Nov 25 16:52:01 compute-0 nova_compute[254092]:   <features>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:   </features>
Nov 25 16:52:01 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:52:01 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:52:01 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:52:01 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:52:01 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:52:01 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/7368c721-3e2a-4635-b2d8-5703d20438d3_disk">
Nov 25 16:52:01 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:       </source>
Nov 25 16:52:01 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:52:01 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:52:01 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:52:01 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/7368c721-3e2a-4635-b2d8-5703d20438d3_disk.config">
Nov 25 16:52:01 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:       </source>
Nov 25 16:52:01 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:52:01 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:52:01 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:52:01 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:ba:99:c7"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:       <target dev="tap0cdb5ab1-84"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:52:01 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/7368c721-3e2a-4635-b2d8-5703d20438d3/console.log" append="off"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     <video>
Nov 25 16:52:01 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     </video>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     <input type="keyboard" bus="usb"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:52:01 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:52:01 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:52:01 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:52:01 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:52:01 compute-0 nova_compute[254092]: </domain>
Nov 25 16:52:01 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:52:01 compute-0 nova_compute[254092]: 2025-11-25 16:52:01.809 254096 DEBUG nova.virt.libvirt.driver [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:52:01 compute-0 nova_compute[254092]: 2025-11-25 16:52:01.810 254096 DEBUG nova.virt.libvirt.driver [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:52:01 compute-0 nova_compute[254092]: 2025-11-25 16:52:01.810 254096 DEBUG nova.virt.libvirt.vif [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:51:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1076158717',display_name='tempest-TestNetworkAdvancedServerOps-server-1076158717',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1076158717',id=105,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDWxuW33AB4qPGJ0diTXxVYa0pjcMkAB+FGOJimEwNbypH8IBF9aIPs3ix1DTpwsI1XlgcWOzbZRzhhJIctw3iK4wzXVfa+Ic+4kx7lWRJ84QudBAyK5H0oeGkPARcnCCA==',key_name='tempest-TestNetworkAdvancedServerOps-383328129',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:51:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='f92d76b523b84ec9b645382bdd3192c9',ramdisk_id='',reservation_id='r-y40qfw1a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1110744031',owner_user_name='tempest-TestNetworkAdvancedServerOps-1110744031-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:51:55Z,user_data=None,user_id='3b30410358c448a693a9dc40ee6aafb0',uuid=7368c721-3e2a-4635-b2d8-5703d20438d3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "0cdb5ab1-8463-4494-a522-360862f2152e", "address": "fa:16:3e:ba:99:c7", "network": {"id": "1b70d379-8b3d-4361-b11d-cafbb578194a", "bridge": "br-int", "label": "tempest-network-smoke--1591904689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cdb5ab1-84", "ovs_interfaceid": "0cdb5ab1-8463-4494-a522-360862f2152e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:52:01 compute-0 nova_compute[254092]: 2025-11-25 16:52:01.811 254096 DEBUG nova.network.os_vif_util [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converting VIF {"id": "0cdb5ab1-8463-4494-a522-360862f2152e", "address": "fa:16:3e:ba:99:c7", "network": {"id": "1b70d379-8b3d-4361-b11d-cafbb578194a", "bridge": "br-int", "label": "tempest-network-smoke--1591904689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cdb5ab1-84", "ovs_interfaceid": "0cdb5ab1-8463-4494-a522-360862f2152e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:52:01 compute-0 nova_compute[254092]: 2025-11-25 16:52:01.811 254096 DEBUG nova.network.os_vif_util [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:99:c7,bridge_name='br-int',has_traffic_filtering=True,id=0cdb5ab1-8463-4494-a522-360862f2152e,network=Network(1b70d379-8b3d-4361-b11d-cafbb578194a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0cdb5ab1-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:52:01 compute-0 nova_compute[254092]: 2025-11-25 16:52:01.812 254096 DEBUG os_vif [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:99:c7,bridge_name='br-int',has_traffic_filtering=True,id=0cdb5ab1-8463-4494-a522-360862f2152e,network=Network(1b70d379-8b3d-4361-b11d-cafbb578194a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0cdb5ab1-84') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:52:01 compute-0 nova_compute[254092]: 2025-11-25 16:52:01.812 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:01 compute-0 nova_compute[254092]: 2025-11-25 16:52:01.813 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:52:01 compute-0 nova_compute[254092]: 2025-11-25 16:52:01.813 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:52:01 compute-0 nova_compute[254092]: 2025-11-25 16:52:01.815 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:01 compute-0 nova_compute[254092]: 2025-11-25 16:52:01.815 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0cdb5ab1-84, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:52:01 compute-0 nova_compute[254092]: 2025-11-25 16:52:01.816 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0cdb5ab1-84, col_values=(('external_ids', {'iface-id': '0cdb5ab1-8463-4494-a522-360862f2152e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ba:99:c7', 'vm-uuid': '7368c721-3e2a-4635-b2d8-5703d20438d3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:52:01 compute-0 NetworkManager[48891]: <info>  [1764089521.8180] manager: (tap0cdb5ab1-84): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/437)
Nov 25 16:52:01 compute-0 nova_compute[254092]: 2025-11-25 16:52:01.818 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:52:01 compute-0 nova_compute[254092]: 2025-11-25 16:52:01.824 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:01 compute-0 nova_compute[254092]: 2025-11-25 16:52:01.825 254096 INFO os_vif [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:99:c7,bridge_name='br-int',has_traffic_filtering=True,id=0cdb5ab1-8463-4494-a522-360862f2152e,network=Network(1b70d379-8b3d-4361-b11d-cafbb578194a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0cdb5ab1-84')
Nov 25 16:52:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2041: 321 pgs: 321 active+clean; 167 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 221 op/s
Nov 25 16:52:01 compute-0 kernel: tap0cdb5ab1-84: entered promiscuous mode
Nov 25 16:52:01 compute-0 NetworkManager[48891]: <info>  [1764089521.8947] manager: (tap0cdb5ab1-84): new Tun device (/org/freedesktop/NetworkManager/Devices/438)
Nov 25 16:52:01 compute-0 nova_compute[254092]: 2025-11-25 16:52:01.895 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:01 compute-0 ovn_controller[153477]: 2025-11-25T16:52:01Z|01066|binding|INFO|Claiming lport 0cdb5ab1-8463-4494-a522-360862f2152e for this chassis.
Nov 25 16:52:01 compute-0 ovn_controller[153477]: 2025-11-25T16:52:01Z|01067|binding|INFO|0cdb5ab1-8463-4494-a522-360862f2152e: Claiming fa:16:3e:ba:99:c7 10.100.0.4
Nov 25 16:52:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:01.903 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:99:c7 10.100.0.4'], port_security=['fa:16:3e:ba:99:c7 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '7368c721-3e2a-4635-b2d8-5703d20438d3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1b70d379-8b3d-4361-b11d-cafbb578194a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'c7519bf8-93c3-4492-8176-0f77cad9e544', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.192'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=86dc4e3d-c4e3-499d-a349-cb0874096a1e, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=0cdb5ab1-8463-4494-a522-360862f2152e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:52:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:01.906 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 0cdb5ab1-8463-4494-a522-360862f2152e in datapath 1b70d379-8b3d-4361-b11d-cafbb578194a bound to our chassis
Nov 25 16:52:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:01.908 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1b70d379-8b3d-4361-b11d-cafbb578194a
Nov 25 16:52:01 compute-0 nova_compute[254092]: 2025-11-25 16:52:01.913 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:01 compute-0 ovn_controller[153477]: 2025-11-25T16:52:01Z|01068|binding|INFO|Setting lport 0cdb5ab1-8463-4494-a522-360862f2152e ovn-installed in OVS
Nov 25 16:52:01 compute-0 ovn_controller[153477]: 2025-11-25T16:52:01Z|01069|binding|INFO|Setting lport 0cdb5ab1-8463-4494-a522-360862f2152e up in Southbound
Nov 25 16:52:01 compute-0 nova_compute[254092]: 2025-11-25 16:52:01.916 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:01.922 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b28517c3-2461-4677-bdc6-145f4225c6d8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:01.922 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1b70d379-81 in ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:52:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:01.924 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1b70d379-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:52:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:01.924 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4a067847-de20-4c93-bf4c-cea79ba5c650]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:01.928 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8f7c1658-cc3f-407b-b704-0cd8697b1894]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:01 compute-0 systemd-machined[216343]: New machine qemu-135-instance-00000069.
Nov 25 16:52:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:01.940 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[c6c3f76d-5bdf-4511-9523-305d742d6e72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:01 compute-0 systemd[1]: Started Virtual Machine qemu-135-instance-00000069.
Nov 25 16:52:01 compute-0 systemd-udevd[362841]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:52:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:01.972 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bed025f4-e9a3-440a-ad14-ab48aa0df8d1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:01 compute-0 NetworkManager[48891]: <info>  [1764089521.9854] device (tap0cdb5ab1-84): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:52:01 compute-0 NetworkManager[48891]: <info>  [1764089521.9863] device (tap0cdb5ab1-84): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:52:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:02.007 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[dcce64f5-d296-47b7-9c8f-d22758b84c47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:02 compute-0 systemd-udevd[362846]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:52:02 compute-0 NetworkManager[48891]: <info>  [1764089522.0126] manager: (tap1b70d379-80): new Veth device (/org/freedesktop/NetworkManager/Devices/439)
Nov 25 16:52:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:02.011 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[45a68518-bec2-4129-a05a-727662ac64c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:02.041 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1e9da859-01ec-447c-9bf7-a703f83993c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:02.044 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[3a8b8cb7-a323-4502-87ba-338bc9ad59dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:02 compute-0 NetworkManager[48891]: <info>  [1764089522.0678] device (tap1b70d379-80): carrier: link connected
Nov 25 16:52:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:02.073 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9b835a61-c729-4453-ac02-13fd90e65ef1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:02.087 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f4221330-284b-45e0-87cd-744a65b0ba2c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1b70d379-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:39:6a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 315], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597965, 'reachable_time': 34503, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362871, 'error': None, 'target': 'ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:02.099 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d250a6fb-adb9-4376-98c3-3ff68c44dbe9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe01:396a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 597965, 'tstamp': 597965}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362872, 'error': None, 'target': 'ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:02.111 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[34952e07-965a-4629-a4e6-af3bb2c03d1c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1b70d379-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:39:6a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 315], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597965, 'reachable_time': 34503, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 362873, 'error': None, 'target': 'ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:02.138 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8874d915-e8fb-4492-94db-31b93e3dccbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:02.191 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[439ee9d2-a1fb-4ef0-b9a8-f55c733b1b5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:02.193 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1b70d379-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:52:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:02.193 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:52:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:02.194 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1b70d379-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:52:02 compute-0 nova_compute[254092]: 2025-11-25 16:52:02.197 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:02 compute-0 kernel: tap1b70d379-80: entered promiscuous mode
Nov 25 16:52:02 compute-0 NetworkManager[48891]: <info>  [1764089522.1978] manager: (tap1b70d379-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/440)
Nov 25 16:52:02 compute-0 nova_compute[254092]: 2025-11-25 16:52:02.199 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:02.204 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1b70d379-80, col_values=(('external_ids', {'iface-id': '43f83cca-eded-4f81-a561-02d17bd21a2b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:52:02 compute-0 nova_compute[254092]: 2025-11-25 16:52:02.206 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:02 compute-0 ovn_controller[153477]: 2025-11-25T16:52:02Z|01070|binding|INFO|Releasing lport 43f83cca-eded-4f81-a561-02d17bd21a2b from this chassis (sb_readonly=0)
Nov 25 16:52:02 compute-0 nova_compute[254092]: 2025-11-25 16:52:02.207 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:02.210 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1b70d379-8b3d-4361-b11d-cafbb578194a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1b70d379-8b3d-4361-b11d-cafbb578194a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:52:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:02.213 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[511c2b34-aaf5-4a12-94e7-4633906fce6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:02.215 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:52:02 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:52:02 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:52:02 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-1b70d379-8b3d-4361-b11d-cafbb578194a
Nov 25 16:52:02 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:52:02 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:52:02 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:52:02 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/1b70d379-8b3d-4361-b11d-cafbb578194a.pid.haproxy
Nov 25 16:52:02 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:52:02 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:52:02 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:52:02 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:52:02 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:52:02 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:52:02 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:52:02 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:52:02 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:52:02 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:52:02 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:52:02 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:52:02 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:52:02 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:52:02 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:52:02 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:52:02 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:52:02 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:52:02 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:52:02 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:52:02 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 1b70d379-8b3d-4361-b11d-cafbb578194a
Nov 25 16:52:02 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:52:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:02.216 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a', 'env', 'PROCESS_TAG=haproxy-1b70d379-8b3d-4361-b11d-cafbb578194a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1b70d379-8b3d-4361-b11d-cafbb578194a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:52:02 compute-0 nova_compute[254092]: 2025-11-25 16:52:02.223 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:02 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3020784841' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:52:02 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/405001762' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:52:02 compute-0 nova_compute[254092]: 2025-11-25 16:52:02.341 254096 DEBUG nova.compute.manager [req-0b11fef1-b93c-46f9-a8cf-8e83ad25b183 req-cb507315-46e5-4407-bb9e-137ac8181df3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Received event network-vif-plugged-0cdb5ab1-8463-4494-a522-360862f2152e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:52:02 compute-0 nova_compute[254092]: 2025-11-25 16:52:02.341 254096 DEBUG oslo_concurrency.lockutils [req-0b11fef1-b93c-46f9-a8cf-8e83ad25b183 req-cb507315-46e5-4407-bb9e-137ac8181df3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:52:02 compute-0 nova_compute[254092]: 2025-11-25 16:52:02.341 254096 DEBUG oslo_concurrency.lockutils [req-0b11fef1-b93c-46f9-a8cf-8e83ad25b183 req-cb507315-46e5-4407-bb9e-137ac8181df3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:52:02 compute-0 nova_compute[254092]: 2025-11-25 16:52:02.342 254096 DEBUG oslo_concurrency.lockutils [req-0b11fef1-b93c-46f9-a8cf-8e83ad25b183 req-cb507315-46e5-4407-bb9e-137ac8181df3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:52:02 compute-0 nova_compute[254092]: 2025-11-25 16:52:02.342 254096 DEBUG nova.compute.manager [req-0b11fef1-b93c-46f9-a8cf-8e83ad25b183 req-cb507315-46e5-4407-bb9e-137ac8181df3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] No waiting events found dispatching network-vif-plugged-0cdb5ab1-8463-4494-a522-360862f2152e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:52:02 compute-0 nova_compute[254092]: 2025-11-25 16:52:02.342 254096 WARNING nova.compute.manager [req-0b11fef1-b93c-46f9-a8cf-8e83ad25b183 req-cb507315-46e5-4407-bb9e-137ac8181df3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Received unexpected event network-vif-plugged-0cdb5ab1-8463-4494-a522-360862f2152e for instance with vm_state stopped and task_state powering-on.
Nov 25 16:52:02 compute-0 podman[362905]: 2025-11-25 16:52:02.60028673 +0000 UTC m=+0.048015867 container create d18fe0e8fcd0445c2c052a23b169dba490788217707c724618b0ed4d31b9c23a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:52:02 compute-0 systemd[1]: Started libpod-conmon-d18fe0e8fcd0445c2c052a23b169dba490788217707c724618b0ed4d31b9c23a.scope.
Nov 25 16:52:02 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:52:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e4c45e0bc9eac1e04b6c01183480c0c616473c9660555153fe1e9a6aebaab91/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:52:02 compute-0 podman[362905]: 2025-11-25 16:52:02.577400347 +0000 UTC m=+0.025129504 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:52:02 compute-0 podman[362905]: 2025-11-25 16:52:02.674155767 +0000 UTC m=+0.121884934 container init d18fe0e8fcd0445c2c052a23b169dba490788217707c724618b0ed4d31b9c23a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 16:52:02 compute-0 podman[362905]: 2025-11-25 16:52:02.679813041 +0000 UTC m=+0.127542178 container start d18fe0e8fcd0445c2c052a23b169dba490788217707c724618b0ed4d31b9c23a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:52:02 compute-0 neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a[362933]: [NOTICE]   (362960) : New worker (362967) forked
Nov 25 16:52:02 compute-0 neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a[362933]: [NOTICE]   (362960) : Loading success.
Nov 25 16:52:02 compute-0 nova_compute[254092]: 2025-11-25 16:52:02.766 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for 7368c721-3e2a-4635-b2d8-5703d20438d3 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 16:52:02 compute-0 nova_compute[254092]: 2025-11-25 16:52:02.766 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089522.7656937, 7368c721-3e2a-4635-b2d8-5703d20438d3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:52:02 compute-0 nova_compute[254092]: 2025-11-25 16:52:02.766 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] VM Resumed (Lifecycle Event)
Nov 25 16:52:02 compute-0 nova_compute[254092]: 2025-11-25 16:52:02.768 254096 DEBUG nova.compute.manager [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:52:02 compute-0 nova_compute[254092]: 2025-11-25 16:52:02.771 254096 INFO nova.virt.libvirt.driver [-] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Instance rebooted successfully.
Nov 25 16:52:02 compute-0 nova_compute[254092]: 2025-11-25 16:52:02.772 254096 DEBUG nova.compute.manager [None req-9d08d918-438f-4cf1-93b4-4d4797820b54 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:52:02 compute-0 nova_compute[254092]: 2025-11-25 16:52:02.795 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:52:02 compute-0 nova_compute[254092]: 2025-11-25 16:52:02.798 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:52:02 compute-0 nova_compute[254092]: 2025-11-25 16:52:02.828 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] During sync_power_state the instance has a pending task (powering-on). Skip.
Nov 25 16:52:02 compute-0 nova_compute[254092]: 2025-11-25 16:52:02.828 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089522.7668285, 7368c721-3e2a-4635-b2d8-5703d20438d3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:52:02 compute-0 nova_compute[254092]: 2025-11-25 16:52:02.828 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] VM Started (Lifecycle Event)
Nov 25 16:52:02 compute-0 nova_compute[254092]: 2025-11-25 16:52:02.856 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:52:02 compute-0 nova_compute[254092]: 2025-11-25 16:52:02.861 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:52:03 compute-0 ceph-mon[74985]: pgmap v2041: 321 pgs: 321 active+clean; 167 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 221 op/s
Nov 25 16:52:03 compute-0 nova_compute[254092]: 2025-11-25 16:52:03.662 254096 DEBUG oslo_concurrency.lockutils [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Acquiring lock "afaa5e41-729a-48cb-bfc0-54a38b0dc96f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:52:03 compute-0 nova_compute[254092]: 2025-11-25 16:52:03.662 254096 DEBUG oslo_concurrency.lockutils [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Lock "afaa5e41-729a-48cb-bfc0-54a38b0dc96f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:52:03 compute-0 nova_compute[254092]: 2025-11-25 16:52:03.662 254096 DEBUG oslo_concurrency.lockutils [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Acquiring lock "afaa5e41-729a-48cb-bfc0-54a38b0dc96f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:52:03 compute-0 nova_compute[254092]: 2025-11-25 16:52:03.663 254096 DEBUG oslo_concurrency.lockutils [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Lock "afaa5e41-729a-48cb-bfc0-54a38b0dc96f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:52:03 compute-0 nova_compute[254092]: 2025-11-25 16:52:03.663 254096 DEBUG oslo_concurrency.lockutils [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Lock "afaa5e41-729a-48cb-bfc0-54a38b0dc96f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:52:03 compute-0 nova_compute[254092]: 2025-11-25 16:52:03.665 254096 INFO nova.compute.manager [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Terminating instance
Nov 25 16:52:03 compute-0 nova_compute[254092]: 2025-11-25 16:52:03.666 254096 DEBUG nova.compute.manager [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:52:03 compute-0 kernel: tap08e9db98-36 (unregistering): left promiscuous mode
Nov 25 16:52:03 compute-0 NetworkManager[48891]: <info>  [1764089523.7094] device (tap08e9db98-36): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:52:03 compute-0 nova_compute[254092]: 2025-11-25 16:52:03.717 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:03 compute-0 ovn_controller[153477]: 2025-11-25T16:52:03Z|01071|binding|INFO|Releasing lport 08e9db98-366d-49ea-aa38-b2d4e8a80e80 from this chassis (sb_readonly=0)
Nov 25 16:52:03 compute-0 ovn_controller[153477]: 2025-11-25T16:52:03Z|01072|binding|INFO|Setting lport 08e9db98-366d-49ea-aa38-b2d4e8a80e80 down in Southbound
Nov 25 16:52:03 compute-0 ovn_controller[153477]: 2025-11-25T16:52:03Z|01073|binding|INFO|Removing iface tap08e9db98-36 ovn-installed in OVS
Nov 25 16:52:03 compute-0 nova_compute[254092]: 2025-11-25 16:52:03.721 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:03.728 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:98:e4 10.100.0.12'], port_security=['fa:16:3e:e2:98:e4 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'afaa5e41-729a-48cb-bfc0-54a38b0dc96f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-461d8b90-d4fc-454e-911d-7fee7be073c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd04ee87178c14bcc860cdca885ea5685', 'neutron:revision_number': '4', 'neutron:security_group_ids': '94eded8c-472b-4a0e-a390-113a914b266f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=957a0d26-cdd2-49bf-b411-89b73b8c3e75, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=08e9db98-366d-49ea-aa38-b2d4e8a80e80) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:52:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:03.729 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 08e9db98-366d-49ea-aa38-b2d4e8a80e80 in datapath 461d8b90-d4fc-454e-911d-7fee7be073c4 unbound from our chassis
Nov 25 16:52:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:03.730 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 461d8b90-d4fc-454e-911d-7fee7be073c4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:52:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:03.731 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0fad0476-3a2a-42d0-9674-67477b9c51b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:03.731 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-461d8b90-d4fc-454e-911d-7fee7be073c4 namespace which is not needed anymore
Nov 25 16:52:03 compute-0 nova_compute[254092]: 2025-11-25 16:52:03.738 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:03 compute-0 systemd[1]: machine-qemu\x2d134\x2dinstance\x2d0000006a.scope: Deactivated successfully.
Nov 25 16:52:03 compute-0 systemd[1]: machine-qemu\x2d134\x2dinstance\x2d0000006a.scope: Consumed 5.990s CPU time.
Nov 25 16:52:03 compute-0 systemd-machined[216343]: Machine qemu-134-instance-0000006a terminated.
Nov 25 16:52:03 compute-0 nova_compute[254092]: 2025-11-25 16:52:03.815 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:03 compute-0 neutron-haproxy-ovnmeta-461d8b90-d4fc-454e-911d-7fee7be073c4[362546]: [NOTICE]   (362554) : haproxy version is 2.8.14-c23fe91
Nov 25 16:52:03 compute-0 neutron-haproxy-ovnmeta-461d8b90-d4fc-454e-911d-7fee7be073c4[362546]: [NOTICE]   (362554) : path to executable is /usr/sbin/haproxy
Nov 25 16:52:03 compute-0 neutron-haproxy-ovnmeta-461d8b90-d4fc-454e-911d-7fee7be073c4[362546]: [WARNING]  (362554) : Exiting Master process...
Nov 25 16:52:03 compute-0 neutron-haproxy-ovnmeta-461d8b90-d4fc-454e-911d-7fee7be073c4[362546]: [WARNING]  (362554) : Exiting Master process...
Nov 25 16:52:03 compute-0 neutron-haproxy-ovnmeta-461d8b90-d4fc-454e-911d-7fee7be073c4[362546]: [ALERT]    (362554) : Current worker (362556) exited with code 143 (Terminated)
Nov 25 16:52:03 compute-0 neutron-haproxy-ovnmeta-461d8b90-d4fc-454e-911d-7fee7be073c4[362546]: [WARNING]  (362554) : All workers exited. Exiting... (0)
Nov 25 16:52:03 compute-0 systemd[1]: libpod-b3d144886e542deab80cc1976c1abdb8c4310fb17b7fa96d127e809551324c74.scope: Deactivated successfully.
Nov 25 16:52:03 compute-0 podman[362998]: 2025-11-25 16:52:03.868692158 +0000 UTC m=+0.049012672 container died b3d144886e542deab80cc1976c1abdb8c4310fb17b7fa96d127e809551324c74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-461d8b90-d4fc-454e-911d-7fee7be073c4, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 16:52:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2042: 321 pgs: 321 active+clean; 167 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.2 MiB/s wr, 177 op/s
Nov 25 16:52:03 compute-0 nova_compute[254092]: 2025-11-25 16:52:03.884 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:03 compute-0 nova_compute[254092]: 2025-11-25 16:52:03.890 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:03 compute-0 nova_compute[254092]: 2025-11-25 16:52:03.900 254096 INFO nova.virt.libvirt.driver [-] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Instance destroyed successfully.
Nov 25 16:52:03 compute-0 nova_compute[254092]: 2025-11-25 16:52:03.901 254096 DEBUG nova.objects.instance [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Lazy-loading 'resources' on Instance uuid afaa5e41-729a-48cb-bfc0-54a38b0dc96f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:52:03 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b3d144886e542deab80cc1976c1abdb8c4310fb17b7fa96d127e809551324c74-userdata-shm.mount: Deactivated successfully.
Nov 25 16:52:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c6aa58a7c081ad9b5f642b8ac1385a57de4dfb79d58b6ff8d6feed85edc246c-merged.mount: Deactivated successfully.
Nov 25 16:52:03 compute-0 nova_compute[254092]: 2025-11-25 16:52:03.914 254096 DEBUG nova.virt.libvirt.vif [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:51:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-1469913116',display_name='tempest-ServerMetadataTestJSON-server-1469913116',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-1469913116',id=106,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:51:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={key1='alt1',key2='value2',key3='value3'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d04ee87178c14bcc860cdca885ea5685',ramdisk_id='',reservation_id='r-0xb1sqh5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='vi
rtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataTestJSON-246494039',owner_user_name='tempest-ServerMetadataTestJSON-246494039-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:52:03Z,user_data=None,user_id='406c96278eea4ca9ac09c960f9240fd6',uuid=afaa5e41-729a-48cb-bfc0-54a38b0dc96f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "08e9db98-366d-49ea-aa38-b2d4e8a80e80", "address": "fa:16:3e:e2:98:e4", "network": {"id": "461d8b90-d4fc-454e-911d-7fee7be073c4", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1141829243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d04ee87178c14bcc860cdca885ea5685", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08e9db98-36", "ovs_interfaceid": "08e9db98-366d-49ea-aa38-b2d4e8a80e80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:52:03 compute-0 nova_compute[254092]: 2025-11-25 16:52:03.916 254096 DEBUG nova.network.os_vif_util [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Converting VIF {"id": "08e9db98-366d-49ea-aa38-b2d4e8a80e80", "address": "fa:16:3e:e2:98:e4", "network": {"id": "461d8b90-d4fc-454e-911d-7fee7be073c4", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1141829243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d04ee87178c14bcc860cdca885ea5685", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08e9db98-36", "ovs_interfaceid": "08e9db98-366d-49ea-aa38-b2d4e8a80e80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:52:03 compute-0 nova_compute[254092]: 2025-11-25 16:52:03.918 254096 DEBUG nova.network.os_vif_util [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:98:e4,bridge_name='br-int',has_traffic_filtering=True,id=08e9db98-366d-49ea-aa38-b2d4e8a80e80,network=Network(461d8b90-d4fc-454e-911d-7fee7be073c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08e9db98-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:52:03 compute-0 nova_compute[254092]: 2025-11-25 16:52:03.918 254096 DEBUG os_vif [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:98:e4,bridge_name='br-int',has_traffic_filtering=True,id=08e9db98-366d-49ea-aa38-b2d4e8a80e80,network=Network(461d8b90-d4fc-454e-911d-7fee7be073c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08e9db98-36') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:52:03 compute-0 nova_compute[254092]: 2025-11-25 16:52:03.920 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:03 compute-0 nova_compute[254092]: 2025-11-25 16:52:03.920 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08e9db98-36, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:52:03 compute-0 nova_compute[254092]: 2025-11-25 16:52:03.922 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:03 compute-0 nova_compute[254092]: 2025-11-25 16:52:03.923 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:03 compute-0 nova_compute[254092]: 2025-11-25 16:52:03.925 254096 INFO os_vif [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:98:e4,bridge_name='br-int',has_traffic_filtering=True,id=08e9db98-366d-49ea-aa38-b2d4e8a80e80,network=Network(461d8b90-d4fc-454e-911d-7fee7be073c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08e9db98-36')
Nov 25 16:52:03 compute-0 podman[362998]: 2025-11-25 16:52:03.926862409 +0000 UTC m=+0.107182903 container cleanup b3d144886e542deab80cc1976c1abdb8c4310fb17b7fa96d127e809551324c74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-461d8b90-d4fc-454e-911d-7fee7be073c4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 25 16:52:03 compute-0 systemd[1]: libpod-conmon-b3d144886e542deab80cc1976c1abdb8c4310fb17b7fa96d127e809551324c74.scope: Deactivated successfully.
Nov 25 16:52:04 compute-0 podman[363046]: 2025-11-25 16:52:04.001630301 +0000 UTC m=+0.050539584 container remove b3d144886e542deab80cc1976c1abdb8c4310fb17b7fa96d127e809551324c74 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-461d8b90-d4fc-454e-911d-7fee7be073c4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:52:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:04.008 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5367d037-812b-4e46-8ce4-b0baf2e29170]: (4, ('Tue Nov 25 04:52:03 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-461d8b90-d4fc-454e-911d-7fee7be073c4 (b3d144886e542deab80cc1976c1abdb8c4310fb17b7fa96d127e809551324c74)\nb3d144886e542deab80cc1976c1abdb8c4310fb17b7fa96d127e809551324c74\nTue Nov 25 04:52:03 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-461d8b90-d4fc-454e-911d-7fee7be073c4 (b3d144886e542deab80cc1976c1abdb8c4310fb17b7fa96d127e809551324c74)\nb3d144886e542deab80cc1976c1abdb8c4310fb17b7fa96d127e809551324c74\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:04.009 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ad71313a-9403-4ea4-9f37-fe0daeace544]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:04 compute-0 nova_compute[254092]: 2025-11-25 16:52:04.010 254096 DEBUG nova.compute.manager [req-48d74822-941e-45af-81d6-830042f86af3 req-c436c4ee-279d-40d5-b3f1-5e252097e94e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Received event network-vif-unplugged-08e9db98-366d-49ea-aa38-b2d4e8a80e80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:52:04 compute-0 nova_compute[254092]: 2025-11-25 16:52:04.010 254096 DEBUG oslo_concurrency.lockutils [req-48d74822-941e-45af-81d6-830042f86af3 req-c436c4ee-279d-40d5-b3f1-5e252097e94e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "afaa5e41-729a-48cb-bfc0-54a38b0dc96f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:52:04 compute-0 nova_compute[254092]: 2025-11-25 16:52:04.010 254096 DEBUG oslo_concurrency.lockutils [req-48d74822-941e-45af-81d6-830042f86af3 req-c436c4ee-279d-40d5-b3f1-5e252097e94e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "afaa5e41-729a-48cb-bfc0-54a38b0dc96f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:52:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:04.010 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap461d8b90-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:52:04 compute-0 nova_compute[254092]: 2025-11-25 16:52:04.010 254096 DEBUG oslo_concurrency.lockutils [req-48d74822-941e-45af-81d6-830042f86af3 req-c436c4ee-279d-40d5-b3f1-5e252097e94e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "afaa5e41-729a-48cb-bfc0-54a38b0dc96f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:52:04 compute-0 nova_compute[254092]: 2025-11-25 16:52:04.011 254096 DEBUG nova.compute.manager [req-48d74822-941e-45af-81d6-830042f86af3 req-c436c4ee-279d-40d5-b3f1-5e252097e94e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] No waiting events found dispatching network-vif-unplugged-08e9db98-366d-49ea-aa38-b2d4e8a80e80 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:52:04 compute-0 nova_compute[254092]: 2025-11-25 16:52:04.011 254096 DEBUG nova.compute.manager [req-48d74822-941e-45af-81d6-830042f86af3 req-c436c4ee-279d-40d5-b3f1-5e252097e94e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Received event network-vif-unplugged-08e9db98-366d-49ea-aa38-b2d4e8a80e80 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:52:04 compute-0 nova_compute[254092]: 2025-11-25 16:52:04.012 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:04 compute-0 kernel: tap461d8b90-d0: left promiscuous mode
Nov 25 16:52:04 compute-0 nova_compute[254092]: 2025-11-25 16:52:04.027 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:04.031 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6b1393e8-dd71-46f1-90bd-867faa8b40f8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:04.041 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b9e386cd-f45e-4ec9-bff2-4e5d6b0a2b14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:04.043 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[17ffb486-ca04-4149-8a90-a2a1b5d52c4a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:04.058 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[513e05f3-5191-4254-9732-0553e6baa101]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597522, 'reachable_time': 22817, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363072, 'error': None, 'target': 'ovnmeta-461d8b90-d4fc-454e-911d-7fee7be073c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:04 compute-0 systemd[1]: run-netns-ovnmeta\x2d461d8b90\x2dd4fc\x2d454e\x2d911d\x2d7fee7be073c4.mount: Deactivated successfully.
Nov 25 16:52:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:04.062 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-461d8b90-d4fc-454e-911d-7fee7be073c4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:52:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:04.062 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[eeb1f741-b820-45a3-b111-682276878e29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:04 compute-0 ceph-mon[74985]: pgmap v2042: 321 pgs: 321 active+clean; 167 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.2 MiB/s wr, 177 op/s
Nov 25 16:52:04 compute-0 nova_compute[254092]: 2025-11-25 16:52:04.452 254096 DEBUG nova.compute.manager [req-d11226db-4701-4fb7-b45e-0275406be683 req-118035ce-10f7-47fd-96f0-c2a229821f32 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Received event network-vif-plugged-0cdb5ab1-8463-4494-a522-360862f2152e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:52:04 compute-0 nova_compute[254092]: 2025-11-25 16:52:04.453 254096 DEBUG oslo_concurrency.lockutils [req-d11226db-4701-4fb7-b45e-0275406be683 req-118035ce-10f7-47fd-96f0-c2a229821f32 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:52:04 compute-0 nova_compute[254092]: 2025-11-25 16:52:04.453 254096 DEBUG oslo_concurrency.lockutils [req-d11226db-4701-4fb7-b45e-0275406be683 req-118035ce-10f7-47fd-96f0-c2a229821f32 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:52:04 compute-0 nova_compute[254092]: 2025-11-25 16:52:04.453 254096 DEBUG oslo_concurrency.lockutils [req-d11226db-4701-4fb7-b45e-0275406be683 req-118035ce-10f7-47fd-96f0-c2a229821f32 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:52:04 compute-0 nova_compute[254092]: 2025-11-25 16:52:04.453 254096 DEBUG nova.compute.manager [req-d11226db-4701-4fb7-b45e-0275406be683 req-118035ce-10f7-47fd-96f0-c2a229821f32 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] No waiting events found dispatching network-vif-plugged-0cdb5ab1-8463-4494-a522-360862f2152e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:52:04 compute-0 nova_compute[254092]: 2025-11-25 16:52:04.454 254096 WARNING nova.compute.manager [req-d11226db-4701-4fb7-b45e-0275406be683 req-118035ce-10f7-47fd-96f0-c2a229821f32 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Received unexpected event network-vif-plugged-0cdb5ab1-8463-4494-a522-360862f2152e for instance with vm_state active and task_state None.
Nov 25 16:52:04 compute-0 nova_compute[254092]: 2025-11-25 16:52:04.621 254096 INFO nova.virt.libvirt.driver [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Deleting instance files /var/lib/nova/instances/afaa5e41-729a-48cb-bfc0-54a38b0dc96f_del
Nov 25 16:52:04 compute-0 nova_compute[254092]: 2025-11-25 16:52:04.622 254096 INFO nova.virt.libvirt.driver [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Deletion of /var/lib/nova/instances/afaa5e41-729a-48cb-bfc0-54a38b0dc96f_del complete
Nov 25 16:52:05 compute-0 nova_compute[254092]: 2025-11-25 16:52:05.053 254096 INFO nova.compute.manager [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Took 1.39 seconds to destroy the instance on the hypervisor.
Nov 25 16:52:05 compute-0 nova_compute[254092]: 2025-11-25 16:52:05.053 254096 DEBUG oslo.service.loopingcall [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:52:05 compute-0 nova_compute[254092]: 2025-11-25 16:52:05.054 254096 DEBUG nova.compute.manager [-] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:52:05 compute-0 nova_compute[254092]: 2025-11-25 16:52:05.054 254096 DEBUG nova.network.neutron [-] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:52:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2043: 321 pgs: 321 active+clean; 134 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 17 KiB/s wr, 167 op/s
Nov 25 16:52:06 compute-0 nova_compute[254092]: 2025-11-25 16:52:06.222 254096 DEBUG nova.compute.manager [req-eab69268-d475-47a9-a250-67ffffee4039 req-ad4b12af-ef24-4a66-a0b8-df3aaa89987d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Received event network-vif-plugged-08e9db98-366d-49ea-aa38-b2d4e8a80e80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:52:06 compute-0 nova_compute[254092]: 2025-11-25 16:52:06.223 254096 DEBUG oslo_concurrency.lockutils [req-eab69268-d475-47a9-a250-67ffffee4039 req-ad4b12af-ef24-4a66-a0b8-df3aaa89987d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "afaa5e41-729a-48cb-bfc0-54a38b0dc96f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:52:06 compute-0 nova_compute[254092]: 2025-11-25 16:52:06.223 254096 DEBUG oslo_concurrency.lockutils [req-eab69268-d475-47a9-a250-67ffffee4039 req-ad4b12af-ef24-4a66-a0b8-df3aaa89987d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "afaa5e41-729a-48cb-bfc0-54a38b0dc96f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:52:06 compute-0 nova_compute[254092]: 2025-11-25 16:52:06.223 254096 DEBUG oslo_concurrency.lockutils [req-eab69268-d475-47a9-a250-67ffffee4039 req-ad4b12af-ef24-4a66-a0b8-df3aaa89987d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "afaa5e41-729a-48cb-bfc0-54a38b0dc96f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:52:06 compute-0 nova_compute[254092]: 2025-11-25 16:52:06.223 254096 DEBUG nova.compute.manager [req-eab69268-d475-47a9-a250-67ffffee4039 req-ad4b12af-ef24-4a66-a0b8-df3aaa89987d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] No waiting events found dispatching network-vif-plugged-08e9db98-366d-49ea-aa38-b2d4e8a80e80 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:52:06 compute-0 nova_compute[254092]: 2025-11-25 16:52:06.223 254096 WARNING nova.compute.manager [req-eab69268-d475-47a9-a250-67ffffee4039 req-ad4b12af-ef24-4a66-a0b8-df3aaa89987d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Received unexpected event network-vif-plugged-08e9db98-366d-49ea-aa38-b2d4e8a80e80 for instance with vm_state active and task_state deleting.
Nov 25 16:52:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:52:06 compute-0 ceph-mon[74985]: pgmap v2043: 321 pgs: 321 active+clean; 134 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 17 KiB/s wr, 167 op/s
Nov 25 16:52:07 compute-0 nova_compute[254092]: 2025-11-25 16:52:07.046 254096 DEBUG nova.network.neutron [-] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:52:07 compute-0 nova_compute[254092]: 2025-11-25 16:52:07.084 254096 INFO nova.compute.manager [-] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Took 2.03 seconds to deallocate network for instance.
Nov 25 16:52:07 compute-0 nova_compute[254092]: 2025-11-25 16:52:07.137 254096 DEBUG oslo_concurrency.lockutils [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:52:07 compute-0 nova_compute[254092]: 2025-11-25 16:52:07.138 254096 DEBUG oslo_concurrency.lockutils [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:52:07 compute-0 nova_compute[254092]: 2025-11-25 16:52:07.240 254096 DEBUG oslo_concurrency.processutils [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:52:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:52:07 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3988681984' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:52:07 compute-0 nova_compute[254092]: 2025-11-25 16:52:07.685 254096 DEBUG oslo_concurrency.processutils [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:52:07 compute-0 nova_compute[254092]: 2025-11-25 16:52:07.694 254096 DEBUG nova.compute.provider_tree [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:52:07 compute-0 nova_compute[254092]: 2025-11-25 16:52:07.710 254096 DEBUG nova.scheduler.client.report [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:52:07 compute-0 nova_compute[254092]: 2025-11-25 16:52:07.729 254096 DEBUG oslo_concurrency.lockutils [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:52:07 compute-0 nova_compute[254092]: 2025-11-25 16:52:07.759 254096 INFO nova.scheduler.client.report [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Deleted allocations for instance afaa5e41-729a-48cb-bfc0-54a38b0dc96f
Nov 25 16:52:07 compute-0 nova_compute[254092]: 2025-11-25 16:52:07.812 254096 DEBUG oslo_concurrency.lockutils [None req-a690f670-36e4-4646-9575-9fcf7255647a 406c96278eea4ca9ac09c960f9240fd6 d04ee87178c14bcc860cdca885ea5685 - - default default] Lock "afaa5e41-729a-48cb-bfc0-54a38b0dc96f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:52:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2044: 321 pgs: 321 active+clean; 121 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 16 KiB/s wr, 205 op/s
Nov 25 16:52:07 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3988681984' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:52:08 compute-0 nova_compute[254092]: 2025-11-25 16:52:08.356 254096 DEBUG nova.compute.manager [req-3d3bd817-c6e9-4406-9855-239ab0234900 req-5a749f42-029e-477c-992c-3c0ada830e81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Received event network-vif-deleted-08e9db98-366d-49ea-aa38-b2d4e8a80e80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:52:08 compute-0 nova_compute[254092]: 2025-11-25 16:52:08.817 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:08 compute-0 nova_compute[254092]: 2025-11-25 16:52:08.922 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:09 compute-0 ceph-mon[74985]: pgmap v2044: 321 pgs: 321 active+clean; 121 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 16 KiB/s wr, 205 op/s
Nov 25 16:52:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2045: 321 pgs: 321 active+clean; 121 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 16 KiB/s wr, 205 op/s
Nov 25 16:52:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:52:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:52:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:52:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:52:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:52:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:52:11 compute-0 ceph-mon[74985]: pgmap v2045: 321 pgs: 321 active+clean; 121 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 16 KiB/s wr, 205 op/s
Nov 25 16:52:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:52:11 compute-0 ovn_controller[153477]: 2025-11-25T16:52:11Z|01074|binding|INFO|Releasing lport 43f83cca-eded-4f81-a561-02d17bd21a2b from this chassis (sb_readonly=0)
Nov 25 16:52:11 compute-0 nova_compute[254092]: 2025-11-25 16:52:11.362 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2046: 321 pgs: 321 active+clean; 121 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.3 KiB/s wr, 109 op/s
Nov 25 16:52:12 compute-0 sshd-session[363096]: Connection closed by authenticating user root 171.244.51.45 port 37780 [preauth]
Nov 25 16:52:13 compute-0 ceph-mon[74985]: pgmap v2046: 321 pgs: 321 active+clean; 121 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.3 KiB/s wr, 109 op/s
Nov 25 16:52:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:13.629 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:52:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:13.630 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:52:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:13.630 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:52:13 compute-0 nova_compute[254092]: 2025-11-25 16:52:13.819 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2047: 321 pgs: 321 active+clean; 121 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 97 op/s
Nov 25 16:52:13 compute-0 nova_compute[254092]: 2025-11-25 16:52:13.924 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:14 compute-0 nova_compute[254092]: 2025-11-25 16:52:14.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:52:15 compute-0 ceph-mon[74985]: pgmap v2047: 321 pgs: 321 active+clean; 121 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 97 op/s
Nov 25 16:52:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2048: 321 pgs: 321 active+clean; 121 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 12 KiB/s wr, 134 op/s
Nov 25 16:52:16 compute-0 ovn_controller[153477]: 2025-11-25T16:52:16Z|00110|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ba:99:c7 10.100.0.4
Nov 25 16:52:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:52:17 compute-0 ceph-mon[74985]: pgmap v2048: 321 pgs: 321 active+clean; 121 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 12 KiB/s wr, 134 op/s
Nov 25 16:52:17 compute-0 nova_compute[254092]: 2025-11-25 16:52:17.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:52:17 compute-0 nova_compute[254092]: 2025-11-25 16:52:17.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:52:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2049: 321 pgs: 321 active+clean; 121 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 961 KiB/s rd, 12 KiB/s wr, 80 op/s
Nov 25 16:52:18 compute-0 nova_compute[254092]: 2025-11-25 16:52:18.523 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:18 compute-0 nova_compute[254092]: 2025-11-25 16:52:18.821 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:18 compute-0 nova_compute[254092]: 2025-11-25 16:52:18.899 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089523.898389, afaa5e41-729a-48cb-bfc0-54a38b0dc96f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:52:18 compute-0 nova_compute[254092]: 2025-11-25 16:52:18.900 254096 INFO nova.compute.manager [-] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] VM Stopped (Lifecycle Event)
Nov 25 16:52:18 compute-0 nova_compute[254092]: 2025-11-25 16:52:18.917 254096 DEBUG nova.compute.manager [None req-38b5ea01-7055-48c9-8ce1-846bec617862 - - - - - -] [instance: afaa5e41-729a-48cb-bfc0-54a38b0dc96f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:52:18 compute-0 nova_compute[254092]: 2025-11-25 16:52:18.925 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:19 compute-0 ceph-mon[74985]: pgmap v2049: 321 pgs: 321 active+clean; 121 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 961 KiB/s rd, 12 KiB/s wr, 80 op/s
Nov 25 16:52:19 compute-0 nova_compute[254092]: 2025-11-25 16:52:19.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:52:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2050: 321 pgs: 321 active+clean; 121 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 528 KiB/s rd, 12 KiB/s wr, 43 op/s
Nov 25 16:52:21 compute-0 ceph-mon[74985]: pgmap v2050: 321 pgs: 321 active+clean; 121 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 528 KiB/s rd, 12 KiB/s wr, 43 op/s
Nov 25 16:52:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:52:21 compute-0 nova_compute[254092]: 2025-11-25 16:52:21.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:52:21 compute-0 nova_compute[254092]: 2025-11-25 16:52:21.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:52:21 compute-0 nova_compute[254092]: 2025-11-25 16:52:21.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 16:52:21 compute-0 nova_compute[254092]: 2025-11-25 16:52:21.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:52:21 compute-0 nova_compute[254092]: 2025-11-25 16:52:21.542 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:52:21 compute-0 nova_compute[254092]: 2025-11-25 16:52:21.543 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:52:21 compute-0 nova_compute[254092]: 2025-11-25 16:52:21.544 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:52:21 compute-0 nova_compute[254092]: 2025-11-25 16:52:21.544 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 16:52:21 compute-0 nova_compute[254092]: 2025-11-25 16:52:21.545 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:52:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2051: 321 pgs: 321 active+clean; 122 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 529 KiB/s rd, 22 KiB/s wr, 44 op/s
Nov 25 16:52:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:52:21 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/927510687' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:52:21 compute-0 nova_compute[254092]: 2025-11-25 16:52:21.983 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:52:22 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/927510687' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:52:22 compute-0 nova_compute[254092]: 2025-11-25 16:52:22.073 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:52:22 compute-0 nova_compute[254092]: 2025-11-25 16:52:22.074 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:52:22 compute-0 nova_compute[254092]: 2025-11-25 16:52:22.234 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:52:22 compute-0 nova_compute[254092]: 2025-11-25 16:52:22.235 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3600MB free_disk=59.94268798828125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 16:52:22 compute-0 nova_compute[254092]: 2025-11-25 16:52:22.235 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:52:22 compute-0 nova_compute[254092]: 2025-11-25 16:52:22.235 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:52:22 compute-0 nova_compute[254092]: 2025-11-25 16:52:22.323 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 7368c721-3e2a-4635-b2d8-5703d20438d3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:52:22 compute-0 nova_compute[254092]: 2025-11-25 16:52:22.323 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 16:52:22 compute-0 nova_compute[254092]: 2025-11-25 16:52:22.324 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 16:52:22 compute-0 nova_compute[254092]: 2025-11-25 16:52:22.356 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:52:22 compute-0 nova_compute[254092]: 2025-11-25 16:52:22.751 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:22 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:52:22 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2429199596' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:52:22 compute-0 nova_compute[254092]: 2025-11-25 16:52:22.815 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:52:22 compute-0 nova_compute[254092]: 2025-11-25 16:52:22.820 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:52:22 compute-0 nova_compute[254092]: 2025-11-25 16:52:22.831 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:52:22 compute-0 nova_compute[254092]: 2025-11-25 16:52:22.851 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 16:52:22 compute-0 nova_compute[254092]: 2025-11-25 16:52:22.851 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:52:23 compute-0 ceph-mon[74985]: pgmap v2051: 321 pgs: 321 active+clean; 122 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 529 KiB/s rd, 22 KiB/s wr, 44 op/s
Nov 25 16:52:23 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2429199596' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:52:23 compute-0 nova_compute[254092]: 2025-11-25 16:52:23.188 254096 INFO nova.compute.manager [None req-50e7fe2f-b229-4c0b-b0d0-6cdbcdb3dd55 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Get console output
Nov 25 16:52:23 compute-0 nova_compute[254092]: 2025-11-25 16:52:23.193 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 16:52:23 compute-0 podman[363144]: 2025-11-25 16:52:23.643391339 +0000 UTC m=+0.063193148 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:52:23 compute-0 podman[363145]: 2025-11-25 16:52:23.662709494 +0000 UTC m=+0.081858765 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Nov 25 16:52:23 compute-0 podman[363146]: 2025-11-25 16:52:23.743326795 +0000 UTC m=+0.154941161 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:52:23 compute-0 nova_compute[254092]: 2025-11-25 16:52:23.823 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2052: 321 pgs: 321 active+clean; 122 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 529 KiB/s rd, 22 KiB/s wr, 44 op/s
Nov 25 16:52:23 compute-0 nova_compute[254092]: 2025-11-25 16:52:23.926 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:24 compute-0 nova_compute[254092]: 2025-11-25 16:52:24.847 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:52:24 compute-0 nova_compute[254092]: 2025-11-25 16:52:24.847 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:52:24 compute-0 nova_compute[254092]: 2025-11-25 16:52:24.848 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 16:52:24 compute-0 nova_compute[254092]: 2025-11-25 16:52:24.848 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 16:52:25 compute-0 ceph-mon[74985]: pgmap v2052: 321 pgs: 321 active+clean; 122 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 529 KiB/s rd, 22 KiB/s wr, 44 op/s
Nov 25 16:52:25 compute-0 nova_compute[254092]: 2025-11-25 16:52:25.096 254096 DEBUG oslo_concurrency.lockutils [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "7368c721-3e2a-4635-b2d8-5703d20438d3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:52:25 compute-0 nova_compute[254092]: 2025-11-25 16:52:25.097 254096 DEBUG oslo_concurrency.lockutils [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:52:25 compute-0 nova_compute[254092]: 2025-11-25 16:52:25.097 254096 DEBUG oslo_concurrency.lockutils [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:52:25 compute-0 nova_compute[254092]: 2025-11-25 16:52:25.098 254096 DEBUG oslo_concurrency.lockutils [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:52:25 compute-0 nova_compute[254092]: 2025-11-25 16:52:25.098 254096 DEBUG oslo_concurrency.lockutils [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:52:25 compute-0 nova_compute[254092]: 2025-11-25 16:52:25.100 254096 INFO nova.compute.manager [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Terminating instance
Nov 25 16:52:25 compute-0 nova_compute[254092]: 2025-11-25 16:52:25.102 254096 DEBUG nova.compute.manager [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:52:25 compute-0 nova_compute[254092]: 2025-11-25 16:52:25.136 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-7368c721-3e2a-4635-b2d8-5703d20438d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:52:25 compute-0 nova_compute[254092]: 2025-11-25 16:52:25.136 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-7368c721-3e2a-4635-b2d8-5703d20438d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:52:25 compute-0 nova_compute[254092]: 2025-11-25 16:52:25.137 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 16:52:25 compute-0 nova_compute[254092]: 2025-11-25 16:52:25.137 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7368c721-3e2a-4635-b2d8-5703d20438d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:52:25 compute-0 kernel: tap0cdb5ab1-84 (unregistering): left promiscuous mode
Nov 25 16:52:25 compute-0 NetworkManager[48891]: <info>  [1764089545.1607] device (tap0cdb5ab1-84): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:52:25 compute-0 ovn_controller[153477]: 2025-11-25T16:52:25Z|01075|binding|INFO|Releasing lport 0cdb5ab1-8463-4494-a522-360862f2152e from this chassis (sb_readonly=0)
Nov 25 16:52:25 compute-0 ovn_controller[153477]: 2025-11-25T16:52:25Z|01076|binding|INFO|Setting lport 0cdb5ab1-8463-4494-a522-360862f2152e down in Southbound
Nov 25 16:52:25 compute-0 ovn_controller[153477]: 2025-11-25T16:52:25Z|01077|binding|INFO|Removing iface tap0cdb5ab1-84 ovn-installed in OVS
Nov 25 16:52:25 compute-0 nova_compute[254092]: 2025-11-25 16:52:25.175 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:25.181 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:99:c7 10.100.0.4'], port_security=['fa:16:3e:ba:99:c7 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '7368c721-3e2a-4635-b2d8-5703d20438d3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1b70d379-8b3d-4361-b11d-cafbb578194a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'c7519bf8-93c3-4492-8176-0f77cad9e544', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=86dc4e3d-c4e3-499d-a349-cb0874096a1e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=0cdb5ab1-8463-4494-a522-360862f2152e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:52:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:25.182 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 0cdb5ab1-8463-4494-a522-360862f2152e in datapath 1b70d379-8b3d-4361-b11d-cafbb578194a unbound from our chassis
Nov 25 16:52:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:25.183 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1b70d379-8b3d-4361-b11d-cafbb578194a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:52:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:25.184 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[449420f6-a9ec-4884-9f77-e22a0224d861]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:25.186 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a namespace which is not needed anymore
Nov 25 16:52:25 compute-0 nova_compute[254092]: 2025-11-25 16:52:25.186 254096 DEBUG nova.compute.manager [req-ec47e3ff-017c-4136-bca8-87ac86137e82 req-73f6eb09-67e4-440a-bfaf-3f64cfea79c1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Received event network-changed-0cdb5ab1-8463-4494-a522-360862f2152e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:52:25 compute-0 nova_compute[254092]: 2025-11-25 16:52:25.187 254096 DEBUG nova.compute.manager [req-ec47e3ff-017c-4136-bca8-87ac86137e82 req-73f6eb09-67e4-440a-bfaf-3f64cfea79c1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Refreshing instance network info cache due to event network-changed-0cdb5ab1-8463-4494-a522-360862f2152e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:52:25 compute-0 nova_compute[254092]: 2025-11-25 16:52:25.188 254096 DEBUG oslo_concurrency.lockutils [req-ec47e3ff-017c-4136-bca8-87ac86137e82 req-73f6eb09-67e4-440a-bfaf-3f64cfea79c1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-7368c721-3e2a-4635-b2d8-5703d20438d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:52:25 compute-0 nova_compute[254092]: 2025-11-25 16:52:25.195 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:25 compute-0 systemd[1]: machine-qemu\x2d135\x2dinstance\x2d00000069.scope: Deactivated successfully.
Nov 25 16:52:25 compute-0 systemd[1]: machine-qemu\x2d135\x2dinstance\x2d00000069.scope: Consumed 13.555s CPU time.
Nov 25 16:52:25 compute-0 systemd-machined[216343]: Machine qemu-135-instance-00000069 terminated.
Nov 25 16:52:25 compute-0 neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a[362933]: [NOTICE]   (362960) : haproxy version is 2.8.14-c23fe91
Nov 25 16:52:25 compute-0 neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a[362933]: [NOTICE]   (362960) : path to executable is /usr/sbin/haproxy
Nov 25 16:52:25 compute-0 neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a[362933]: [WARNING]  (362960) : Exiting Master process...
Nov 25 16:52:25 compute-0 neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a[362933]: [WARNING]  (362960) : Exiting Master process...
Nov 25 16:52:25 compute-0 neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a[362933]: [ALERT]    (362960) : Current worker (362967) exited with code 143 (Terminated)
Nov 25 16:52:25 compute-0 neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a[362933]: [WARNING]  (362960) : All workers exited. Exiting... (0)
Nov 25 16:52:25 compute-0 systemd[1]: libpod-d18fe0e8fcd0445c2c052a23b169dba490788217707c724618b0ed4d31b9c23a.scope: Deactivated successfully.
Nov 25 16:52:25 compute-0 podman[363229]: 2025-11-25 16:52:25.338617007 +0000 UTC m=+0.051219053 container died d18fe0e8fcd0445c2c052a23b169dba490788217707c724618b0ed4d31b9c23a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 16:52:25 compute-0 nova_compute[254092]: 2025-11-25 16:52:25.350 254096 INFO nova.virt.libvirt.driver [-] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Instance destroyed successfully.
Nov 25 16:52:25 compute-0 nova_compute[254092]: 2025-11-25 16:52:25.351 254096 DEBUG nova.objects.instance [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'resources' on Instance uuid 7368c721-3e2a-4635-b2d8-5703d20438d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:52:25 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d18fe0e8fcd0445c2c052a23b169dba490788217707c724618b0ed4d31b9c23a-userdata-shm.mount: Deactivated successfully.
Nov 25 16:52:25 compute-0 nova_compute[254092]: 2025-11-25 16:52:25.370 254096 DEBUG nova.virt.libvirt.vif [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:51:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1076158717',display_name='tempest-TestNetworkAdvancedServerOps-server-1076158717',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1076158717',id=105,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDWxuW33AB4qPGJ0diTXxVYa0pjcMkAB+FGOJimEwNbypH8IBF9aIPs3ix1DTpwsI1XlgcWOzbZRzhhJIctw3iK4wzXVfa+Ic+4kx7lWRJ84QudBAyK5H0oeGkPARcnCCA==',key_name='tempest-TestNetworkAdvancedServerOps-383328129',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:51:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f92d76b523b84ec9b645382bdd3192c9',ramdisk_id='',reservation_id='r-y40qfw1a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1110744031',owner_user_name='tempest-TestNetworkAdvancedServerOps-1110744031-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:52:02Z,user_data=None,user_id='3b30410358c448a693a9dc40ee6aafb0',uuid=7368c721-3e2a-4635-b2d8-5703d20438d3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0cdb5ab1-8463-4494-a522-360862f2152e", "address": "fa:16:3e:ba:99:c7", "network": {"id": "1b70d379-8b3d-4361-b11d-cafbb578194a", "bridge": "br-int", "label": "tempest-network-smoke--1591904689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cdb5ab1-84", "ovs_interfaceid": "0cdb5ab1-8463-4494-a522-360862f2152e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:52:25 compute-0 nova_compute[254092]: 2025-11-25 16:52:25.371 254096 DEBUG nova.network.os_vif_util [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converting VIF {"id": "0cdb5ab1-8463-4494-a522-360862f2152e", "address": "fa:16:3e:ba:99:c7", "network": {"id": "1b70d379-8b3d-4361-b11d-cafbb578194a", "bridge": "br-int", "label": "tempest-network-smoke--1591904689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cdb5ab1-84", "ovs_interfaceid": "0cdb5ab1-8463-4494-a522-360862f2152e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:52:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e4c45e0bc9eac1e04b6c01183480c0c616473c9660555153fe1e9a6aebaab91-merged.mount: Deactivated successfully.
Nov 25 16:52:25 compute-0 nova_compute[254092]: 2025-11-25 16:52:25.372 254096 DEBUG nova.network.os_vif_util [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:99:c7,bridge_name='br-int',has_traffic_filtering=True,id=0cdb5ab1-8463-4494-a522-360862f2152e,network=Network(1b70d379-8b3d-4361-b11d-cafbb578194a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0cdb5ab1-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:52:25 compute-0 nova_compute[254092]: 2025-11-25 16:52:25.372 254096 DEBUG os_vif [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:99:c7,bridge_name='br-int',has_traffic_filtering=True,id=0cdb5ab1-8463-4494-a522-360862f2152e,network=Network(1b70d379-8b3d-4361-b11d-cafbb578194a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0cdb5ab1-84') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:52:25 compute-0 nova_compute[254092]: 2025-11-25 16:52:25.375 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:25 compute-0 nova_compute[254092]: 2025-11-25 16:52:25.375 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0cdb5ab1-84, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:52:25 compute-0 nova_compute[254092]: 2025-11-25 16:52:25.378 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:25 compute-0 nova_compute[254092]: 2025-11-25 16:52:25.380 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:52:25 compute-0 nova_compute[254092]: 2025-11-25 16:52:25.382 254096 INFO os_vif [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:99:c7,bridge_name='br-int',has_traffic_filtering=True,id=0cdb5ab1-8463-4494-a522-360862f2152e,network=Network(1b70d379-8b3d-4361-b11d-cafbb578194a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0cdb5ab1-84')
Nov 25 16:52:25 compute-0 podman[363229]: 2025-11-25 16:52:25.385610594 +0000 UTC m=+0.098212640 container cleanup d18fe0e8fcd0445c2c052a23b169dba490788217707c724618b0ed4d31b9c23a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:52:25 compute-0 systemd[1]: libpod-conmon-d18fe0e8fcd0445c2c052a23b169dba490788217707c724618b0ed4d31b9c23a.scope: Deactivated successfully.
Nov 25 16:52:25 compute-0 podman[363282]: 2025-11-25 16:52:25.462596647 +0000 UTC m=+0.048294004 container remove d18fe0e8fcd0445c2c052a23b169dba490788217707c724618b0ed4d31b9c23a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:52:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:25.468 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[479435d1-c8d8-47a3-91ea-bc2a3c87e726]: (4, ('Tue Nov 25 04:52:25 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a (d18fe0e8fcd0445c2c052a23b169dba490788217707c724618b0ed4d31b9c23a)\nd18fe0e8fcd0445c2c052a23b169dba490788217707c724618b0ed4d31b9c23a\nTue Nov 25 04:52:25 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a (d18fe0e8fcd0445c2c052a23b169dba490788217707c724618b0ed4d31b9c23a)\nd18fe0e8fcd0445c2c052a23b169dba490788217707c724618b0ed4d31b9c23a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:25.471 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[aa90e212-16fb-4b56-88ce-ace2181ff827]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:25.471 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1b70d379-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:52:25 compute-0 kernel: tap1b70d379-80: left promiscuous mode
Nov 25 16:52:25 compute-0 nova_compute[254092]: 2025-11-25 16:52:25.473 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:25 compute-0 nova_compute[254092]: 2025-11-25 16:52:25.487 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:25.492 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8ee16b44-79ec-4abc-a45b-59e7afecc93d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:25.508 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[41b3613c-c1c6-43e6-a8fd-71d2eabc6f5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:25.510 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[de3d32b6-2898-41fb-9851-64bcc3f706fd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:25.526 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[784b0305-cb6f-458b-94f0-617a41236a70]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597959, 'reachable_time': 41808, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363302, 'error': None, 'target': 'ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:25 compute-0 systemd[1]: run-netns-ovnmeta\x2d1b70d379\x2d8b3d\x2d4361\x2db11d\x2dcafbb578194a.mount: Deactivated successfully.
Nov 25 16:52:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:25.529 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1b70d379-8b3d-4361-b11d-cafbb578194a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:52:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:25.529 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[ad41a774-74e2-4879-a8eb-eb38f0e486ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:25 compute-0 nova_compute[254092]: 2025-11-25 16:52:25.770 254096 DEBUG nova.compute.manager [req-0fdc58b9-2009-4864-a6d6-28fff7d428df req-7c6840bf-cc65-4745-b9f3-fa89330a6cc5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Received event network-vif-unplugged-0cdb5ab1-8463-4494-a522-360862f2152e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:52:25 compute-0 nova_compute[254092]: 2025-11-25 16:52:25.770 254096 DEBUG oslo_concurrency.lockutils [req-0fdc58b9-2009-4864-a6d6-28fff7d428df req-7c6840bf-cc65-4745-b9f3-fa89330a6cc5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:52:25 compute-0 nova_compute[254092]: 2025-11-25 16:52:25.771 254096 DEBUG oslo_concurrency.lockutils [req-0fdc58b9-2009-4864-a6d6-28fff7d428df req-7c6840bf-cc65-4745-b9f3-fa89330a6cc5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:52:25 compute-0 nova_compute[254092]: 2025-11-25 16:52:25.771 254096 DEBUG oslo_concurrency.lockutils [req-0fdc58b9-2009-4864-a6d6-28fff7d428df req-7c6840bf-cc65-4745-b9f3-fa89330a6cc5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:52:25 compute-0 nova_compute[254092]: 2025-11-25 16:52:25.771 254096 DEBUG nova.compute.manager [req-0fdc58b9-2009-4864-a6d6-28fff7d428df req-7c6840bf-cc65-4745-b9f3-fa89330a6cc5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] No waiting events found dispatching network-vif-unplugged-0cdb5ab1-8463-4494-a522-360862f2152e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:52:25 compute-0 nova_compute[254092]: 2025-11-25 16:52:25.771 254096 DEBUG nova.compute.manager [req-0fdc58b9-2009-4864-a6d6-28fff7d428df req-7c6840bf-cc65-4745-b9f3-fa89330a6cc5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Received event network-vif-unplugged-0cdb5ab1-8463-4494-a522-360862f2152e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:52:25 compute-0 nova_compute[254092]: 2025-11-25 16:52:25.782 254096 INFO nova.virt.libvirt.driver [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Deleting instance files /var/lib/nova/instances/7368c721-3e2a-4635-b2d8-5703d20438d3_del
Nov 25 16:52:25 compute-0 nova_compute[254092]: 2025-11-25 16:52:25.783 254096 INFO nova.virt.libvirt.driver [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Deletion of /var/lib/nova/instances/7368c721-3e2a-4635-b2d8-5703d20438d3_del complete
Nov 25 16:52:25 compute-0 nova_compute[254092]: 2025-11-25 16:52:25.833 254096 INFO nova.compute.manager [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Took 0.73 seconds to destroy the instance on the hypervisor.
Nov 25 16:52:25 compute-0 nova_compute[254092]: 2025-11-25 16:52:25.834 254096 DEBUG oslo.service.loopingcall [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:52:25 compute-0 nova_compute[254092]: 2025-11-25 16:52:25.834 254096 DEBUG nova.compute.manager [-] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:52:25 compute-0 nova_compute[254092]: 2025-11-25 16:52:25.834 254096 DEBUG nova.network.neutron [-] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:52:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2053: 321 pgs: 321 active+clean; 122 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 531 KiB/s rd, 22 KiB/s wr, 45 op/s
Nov 25 16:52:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:52:26 compute-0 nova_compute[254092]: 2025-11-25 16:52:26.571 254096 DEBUG nova.network.neutron [-] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:52:26 compute-0 nova_compute[254092]: 2025-11-25 16:52:26.590 254096 INFO nova.compute.manager [-] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Took 0.76 seconds to deallocate network for instance.
Nov 25 16:52:26 compute-0 nova_compute[254092]: 2025-11-25 16:52:26.638 254096 DEBUG oslo_concurrency.lockutils [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:52:26 compute-0 nova_compute[254092]: 2025-11-25 16:52:26.639 254096 DEBUG oslo_concurrency.lockutils [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:52:26 compute-0 nova_compute[254092]: 2025-11-25 16:52:26.681 254096 DEBUG oslo_concurrency.processutils [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:52:27 compute-0 ceph-mon[74985]: pgmap v2053: 321 pgs: 321 active+clean; 122 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 531 KiB/s rd, 22 KiB/s wr, 45 op/s
Nov 25 16:52:27 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:52:27 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3010538172' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:52:27 compute-0 nova_compute[254092]: 2025-11-25 16:52:27.120 254096 DEBUG oslo_concurrency.processutils [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:52:27 compute-0 nova_compute[254092]: 2025-11-25 16:52:27.129 254096 DEBUG nova.compute.provider_tree [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:52:27 compute-0 nova_compute[254092]: 2025-11-25 16:52:27.145 254096 DEBUG nova.scheduler.client.report [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:52:27 compute-0 nova_compute[254092]: 2025-11-25 16:52:27.209 254096 DEBUG oslo_concurrency.lockutils [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.570s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:52:27 compute-0 nova_compute[254092]: 2025-11-25 16:52:27.259 254096 INFO nova.scheduler.client.report [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Deleted allocations for instance 7368c721-3e2a-4635-b2d8-5703d20438d3
Nov 25 16:52:27 compute-0 nova_compute[254092]: 2025-11-25 16:52:27.266 254096 DEBUG oslo_concurrency.lockutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Acquiring lock "6d70119e-e45b-4a12-893e-8d5a805ca8ab" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:52:27 compute-0 nova_compute[254092]: 2025-11-25 16:52:27.266 254096 DEBUG oslo_concurrency.lockutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "6d70119e-e45b-4a12-893e-8d5a805ca8ab" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:52:27 compute-0 nova_compute[254092]: 2025-11-25 16:52:27.292 254096 DEBUG nova.compute.manager [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:52:27 compute-0 nova_compute[254092]: 2025-11-25 16:52:27.369 254096 DEBUG oslo_concurrency.lockutils [None req-e17c1d29-baf1-4f7d-873d-156e1c77452e 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.273s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:52:27 compute-0 nova_compute[254092]: 2025-11-25 16:52:27.392 254096 DEBUG oslo_concurrency.lockutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:52:27 compute-0 nova_compute[254092]: 2025-11-25 16:52:27.392 254096 DEBUG oslo_concurrency.lockutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:52:27 compute-0 nova_compute[254092]: 2025-11-25 16:52:27.400 254096 DEBUG nova.virt.hardware [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:52:27 compute-0 nova_compute[254092]: 2025-11-25 16:52:27.401 254096 INFO nova.compute.claims [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:52:27 compute-0 nova_compute[254092]: 2025-11-25 16:52:27.512 254096 DEBUG oslo_concurrency.processutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:52:27 compute-0 nova_compute[254092]: 2025-11-25 16:52:27.551 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Updating instance_info_cache with network_info: [{"id": "0cdb5ab1-8463-4494-a522-360862f2152e", "address": "fa:16:3e:ba:99:c7", "network": {"id": "1b70d379-8b3d-4361-b11d-cafbb578194a", "bridge": "br-int", "label": "tempest-network-smoke--1591904689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cdb5ab1-84", "ovs_interfaceid": "0cdb5ab1-8463-4494-a522-360862f2152e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:52:27 compute-0 nova_compute[254092]: 2025-11-25 16:52:27.575 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-7368c721-3e2a-4635-b2d8-5703d20438d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:52:27 compute-0 nova_compute[254092]: 2025-11-25 16:52:27.576 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 16:52:27 compute-0 nova_compute[254092]: 2025-11-25 16:52:27.576 254096 DEBUG oslo_concurrency.lockutils [req-ec47e3ff-017c-4136-bca8-87ac86137e82 req-73f6eb09-67e4-440a-bfaf-3f64cfea79c1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-7368c721-3e2a-4635-b2d8-5703d20438d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:52:27 compute-0 nova_compute[254092]: 2025-11-25 16:52:27.576 254096 DEBUG nova.network.neutron [req-ec47e3ff-017c-4136-bca8-87ac86137e82 req-73f6eb09-67e4-440a-bfaf-3f64cfea79c1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Refreshing network info cache for port 0cdb5ab1-8463-4494-a522-360862f2152e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:52:27 compute-0 nova_compute[254092]: 2025-11-25 16:52:27.592 254096 DEBUG nova.compute.utils [req-ec47e3ff-017c-4136-bca8-87ac86137e82 req-73f6eb09-67e4-440a-bfaf-3f64cfea79c1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Can not refresh info_cache because instance was not found refresh_info_cache_for_instance /usr/lib/python3.9/site-packages/nova/compute/utils.py:1010
Nov 25 16:52:27 compute-0 nova_compute[254092]: 2025-11-25 16:52:27.796 254096 INFO nova.network.neutron [req-ec47e3ff-017c-4136-bca8-87ac86137e82 req-73f6eb09-67e4-440a-bfaf-3f64cfea79c1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Port 0cdb5ab1-8463-4494-a522-360862f2152e from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Nov 25 16:52:27 compute-0 nova_compute[254092]: 2025-11-25 16:52:27.797 254096 DEBUG nova.network.neutron [req-ec47e3ff-017c-4136-bca8-87ac86137e82 req-73f6eb09-67e4-440a-bfaf-3f64cfea79c1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:52:27 compute-0 nova_compute[254092]: 2025-11-25 16:52:27.817 254096 DEBUG oslo_concurrency.lockutils [req-ec47e3ff-017c-4136-bca8-87ac86137e82 req-73f6eb09-67e4-440a-bfaf-3f64cfea79c1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-7368c721-3e2a-4635-b2d8-5703d20438d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:52:27 compute-0 nova_compute[254092]: 2025-11-25 16:52:27.855 254096 DEBUG nova.compute.manager [req-85fd5dd1-971d-4119-94e0-0d9b7415595f req-7cad16ae-48c1-4019-9b07-3fa5b7a9db17 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Received event network-vif-plugged-0cdb5ab1-8463-4494-a522-360862f2152e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:52:27 compute-0 nova_compute[254092]: 2025-11-25 16:52:27.855 254096 DEBUG oslo_concurrency.lockutils [req-85fd5dd1-971d-4119-94e0-0d9b7415595f req-7cad16ae-48c1-4019-9b07-3fa5b7a9db17 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:52:27 compute-0 nova_compute[254092]: 2025-11-25 16:52:27.857 254096 DEBUG oslo_concurrency.lockutils [req-85fd5dd1-971d-4119-94e0-0d9b7415595f req-7cad16ae-48c1-4019-9b07-3fa5b7a9db17 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:52:27 compute-0 nova_compute[254092]: 2025-11-25 16:52:27.857 254096 DEBUG oslo_concurrency.lockutils [req-85fd5dd1-971d-4119-94e0-0d9b7415595f req-7cad16ae-48c1-4019-9b07-3fa5b7a9db17 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7368c721-3e2a-4635-b2d8-5703d20438d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:52:27 compute-0 nova_compute[254092]: 2025-11-25 16:52:27.857 254096 DEBUG nova.compute.manager [req-85fd5dd1-971d-4119-94e0-0d9b7415595f req-7cad16ae-48c1-4019-9b07-3fa5b7a9db17 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] No waiting events found dispatching network-vif-plugged-0cdb5ab1-8463-4494-a522-360862f2152e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:52:27 compute-0 nova_compute[254092]: 2025-11-25 16:52:27.858 254096 WARNING nova.compute.manager [req-85fd5dd1-971d-4119-94e0-0d9b7415595f req-7cad16ae-48c1-4019-9b07-3fa5b7a9db17 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Received unexpected event network-vif-plugged-0cdb5ab1-8463-4494-a522-360862f2152e for instance with vm_state deleted and task_state None.
Nov 25 16:52:27 compute-0 nova_compute[254092]: 2025-11-25 16:52:27.858 254096 DEBUG nova.compute.manager [req-85fd5dd1-971d-4119-94e0-0d9b7415595f req-7cad16ae-48c1-4019-9b07-3fa5b7a9db17 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Received event network-vif-deleted-0cdb5ab1-8463-4494-a522-360862f2152e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:52:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2054: 321 pgs: 321 active+clean; 103 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 11 KiB/s wr, 19 op/s
Nov 25 16:52:27 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:52:27 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/557331089' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:52:27 compute-0 nova_compute[254092]: 2025-11-25 16:52:27.954 254096 DEBUG oslo_concurrency.processutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:52:27 compute-0 nova_compute[254092]: 2025-11-25 16:52:27.959 254096 DEBUG nova.compute.provider_tree [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:52:27 compute-0 nova_compute[254092]: 2025-11-25 16:52:27.975 254096 DEBUG nova.scheduler.client.report [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:52:28 compute-0 nova_compute[254092]: 2025-11-25 16:52:28.001 254096 DEBUG oslo_concurrency.lockutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:52:28 compute-0 nova_compute[254092]: 2025-11-25 16:52:28.002 254096 DEBUG nova.compute.manager [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:52:28 compute-0 nova_compute[254092]: 2025-11-25 16:52:28.045 254096 DEBUG nova.compute.manager [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:52:28 compute-0 nova_compute[254092]: 2025-11-25 16:52:28.045 254096 DEBUG nova.network.neutron [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:52:28 compute-0 nova_compute[254092]: 2025-11-25 16:52:28.061 254096 INFO nova.virt.libvirt.driver [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:52:28 compute-0 nova_compute[254092]: 2025-11-25 16:52:28.080 254096 DEBUG nova.compute.manager [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:52:28 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3010538172' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:52:28 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/557331089' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:52:28 compute-0 nova_compute[254092]: 2025-11-25 16:52:28.173 254096 DEBUG nova.compute.manager [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:52:28 compute-0 nova_compute[254092]: 2025-11-25 16:52:28.174 254096 DEBUG nova.virt.libvirt.driver [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:52:28 compute-0 nova_compute[254092]: 2025-11-25 16:52:28.174 254096 INFO nova.virt.libvirt.driver [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Creating image(s)
Nov 25 16:52:28 compute-0 nova_compute[254092]: 2025-11-25 16:52:28.194 254096 DEBUG nova.storage.rbd_utils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] rbd image 6d70119e-e45b-4a12-893e-8d5a805ca8ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:52:28 compute-0 nova_compute[254092]: 2025-11-25 16:52:28.216 254096 DEBUG nova.storage.rbd_utils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] rbd image 6d70119e-e45b-4a12-893e-8d5a805ca8ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:52:28 compute-0 nova_compute[254092]: 2025-11-25 16:52:28.235 254096 DEBUG nova.storage.rbd_utils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] rbd image 6d70119e-e45b-4a12-893e-8d5a805ca8ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:52:28 compute-0 nova_compute[254092]: 2025-11-25 16:52:28.239 254096 DEBUG oslo_concurrency.processutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:52:28 compute-0 nova_compute[254092]: 2025-11-25 16:52:28.307 254096 DEBUG oslo_concurrency.processutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:52:28 compute-0 nova_compute[254092]: 2025-11-25 16:52:28.308 254096 DEBUG oslo_concurrency.lockutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:52:28 compute-0 nova_compute[254092]: 2025-11-25 16:52:28.309 254096 DEBUG oslo_concurrency.lockutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:52:28 compute-0 nova_compute[254092]: 2025-11-25 16:52:28.309 254096 DEBUG oslo_concurrency.lockutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:52:28 compute-0 nova_compute[254092]: 2025-11-25 16:52:28.327 254096 DEBUG nova.storage.rbd_utils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] rbd image 6d70119e-e45b-4a12-893e-8d5a805ca8ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:52:28 compute-0 nova_compute[254092]: 2025-11-25 16:52:28.332 254096 DEBUG oslo_concurrency.processutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 6d70119e-e45b-4a12-893e-8d5a805ca8ab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:52:28 compute-0 nova_compute[254092]: 2025-11-25 16:52:28.403 254096 DEBUG nova.policy [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9deaf2356cda4c0cb2a52383b7f2e609', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '33a2e508e63149889f0d5d945726522c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:52:28 compute-0 nova_compute[254092]: 2025-11-25 16:52:28.578 254096 DEBUG oslo_concurrency.processutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 6d70119e-e45b-4a12-893e-8d5a805ca8ab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.246s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:52:28 compute-0 nova_compute[254092]: 2025-11-25 16:52:28.637 254096 DEBUG nova.storage.rbd_utils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] resizing rbd image 6d70119e-e45b-4a12-893e-8d5a805ca8ab_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:52:28 compute-0 nova_compute[254092]: 2025-11-25 16:52:28.722 254096 DEBUG nova.objects.instance [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lazy-loading 'migration_context' on Instance uuid 6d70119e-e45b-4a12-893e-8d5a805ca8ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:52:28 compute-0 nova_compute[254092]: 2025-11-25 16:52:28.737 254096 DEBUG nova.virt.libvirt.driver [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:52:28 compute-0 nova_compute[254092]: 2025-11-25 16:52:28.737 254096 DEBUG nova.virt.libvirt.driver [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Ensure instance console log exists: /var/lib/nova/instances/6d70119e-e45b-4a12-893e-8d5a805ca8ab/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:52:28 compute-0 nova_compute[254092]: 2025-11-25 16:52:28.738 254096 DEBUG oslo_concurrency.lockutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:52:28 compute-0 nova_compute[254092]: 2025-11-25 16:52:28.738 254096 DEBUG oslo_concurrency.lockutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:52:28 compute-0 nova_compute[254092]: 2025-11-25 16:52:28.738 254096 DEBUG oslo_concurrency.lockutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:52:28 compute-0 nova_compute[254092]: 2025-11-25 16:52:28.825 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:29 compute-0 ceph-mon[74985]: pgmap v2054: 321 pgs: 321 active+clean; 103 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 11 KiB/s wr, 19 op/s
Nov 25 16:52:29 compute-0 nova_compute[254092]: 2025-11-25 16:52:29.883 254096 DEBUG oslo_concurrency.lockutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Acquiring lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:52:29 compute-0 nova_compute[254092]: 2025-11-25 16:52:29.883 254096 DEBUG oslo_concurrency.lockutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:52:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2055: 321 pgs: 321 active+clean; 103 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 11 KiB/s wr, 13 op/s
Nov 25 16:52:29 compute-0 nova_compute[254092]: 2025-11-25 16:52:29.910 254096 DEBUG nova.compute.manager [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:52:29 compute-0 nova_compute[254092]: 2025-11-25 16:52:29.928 254096 DEBUG nova.network.neutron [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Successfully created port: 923a00bb-da3b-434a-b154-c338c92e0635 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:52:29 compute-0 nova_compute[254092]: 2025-11-25 16:52:29.974 254096 DEBUG oslo_concurrency.lockutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:52:29 compute-0 nova_compute[254092]: 2025-11-25 16:52:29.974 254096 DEBUG oslo_concurrency.lockutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:52:29 compute-0 nova_compute[254092]: 2025-11-25 16:52:29.981 254096 DEBUG nova.virt.hardware [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:52:29 compute-0 nova_compute[254092]: 2025-11-25 16:52:29.981 254096 INFO nova.compute.claims [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:52:30 compute-0 nova_compute[254092]: 2025-11-25 16:52:30.105 254096 DEBUG oslo_concurrency.processutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:52:30 compute-0 nova_compute[254092]: 2025-11-25 16:52:30.379 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:52:30 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3757331519' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:52:30 compute-0 nova_compute[254092]: 2025-11-25 16:52:30.591 254096 DEBUG oslo_concurrency.processutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:52:30 compute-0 nova_compute[254092]: 2025-11-25 16:52:30.597 254096 DEBUG nova.compute.provider_tree [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:52:30 compute-0 nova_compute[254092]: 2025-11-25 16:52:30.617 254096 DEBUG nova.scheduler.client.report [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:52:30 compute-0 nova_compute[254092]: 2025-11-25 16:52:30.642 254096 DEBUG oslo_concurrency.lockutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:52:30 compute-0 nova_compute[254092]: 2025-11-25 16:52:30.643 254096 DEBUG nova.compute.manager [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:52:30 compute-0 nova_compute[254092]: 2025-11-25 16:52:30.685 254096 DEBUG nova.compute.manager [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:52:30 compute-0 nova_compute[254092]: 2025-11-25 16:52:30.686 254096 DEBUG nova.network.neutron [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:52:30 compute-0 nova_compute[254092]: 2025-11-25 16:52:30.703 254096 INFO nova.virt.libvirt.driver [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:52:30 compute-0 nova_compute[254092]: 2025-11-25 16:52:30.721 254096 DEBUG nova.compute.manager [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:52:30 compute-0 nova_compute[254092]: 2025-11-25 16:52:30.813 254096 DEBUG nova.compute.manager [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:52:30 compute-0 nova_compute[254092]: 2025-11-25 16:52:30.814 254096 DEBUG nova.virt.libvirt.driver [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:52:30 compute-0 nova_compute[254092]: 2025-11-25 16:52:30.815 254096 INFO nova.virt.libvirt.driver [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Creating image(s)
Nov 25 16:52:30 compute-0 nova_compute[254092]: 2025-11-25 16:52:30.833 254096 DEBUG nova.storage.rbd_utils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] rbd image 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:52:30 compute-0 nova_compute[254092]: 2025-11-25 16:52:30.852 254096 DEBUG nova.storage.rbd_utils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] rbd image 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:52:30 compute-0 nova_compute[254092]: 2025-11-25 16:52:30.869 254096 DEBUG nova.storage.rbd_utils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] rbd image 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:52:30 compute-0 nova_compute[254092]: 2025-11-25 16:52:30.872 254096 DEBUG oslo_concurrency.processutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:52:30 compute-0 nova_compute[254092]: 2025-11-25 16:52:30.938 254096 DEBUG oslo_concurrency.processutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:52:30 compute-0 nova_compute[254092]: 2025-11-25 16:52:30.940 254096 DEBUG oslo_concurrency.lockutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:52:30 compute-0 nova_compute[254092]: 2025-11-25 16:52:30.941 254096 DEBUG oslo_concurrency.lockutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:52:30 compute-0 nova_compute[254092]: 2025-11-25 16:52:30.941 254096 DEBUG oslo_concurrency.lockutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:52:30 compute-0 nova_compute[254092]: 2025-11-25 16:52:30.968 254096 DEBUG nova.storage.rbd_utils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] rbd image 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:52:30 compute-0 nova_compute[254092]: 2025-11-25 16:52:30.974 254096 DEBUG oslo_concurrency.processutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:52:31 compute-0 ceph-mon[74985]: pgmap v2055: 321 pgs: 321 active+clean; 103 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 11 KiB/s wr, 13 op/s
Nov 25 16:52:31 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3757331519' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:52:31 compute-0 nova_compute[254092]: 2025-11-25 16:52:31.237 254096 DEBUG nova.policy [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9deaf2356cda4c0cb2a52383b7f2e609', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '33a2e508e63149889f0d5d945726522c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:52:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:52:31 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #96. Immutable memtables: 0.
Nov 25 16:52:31 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:31.254761) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 16:52:31 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 96
Nov 25 16:52:31 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089551254793, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 544, "num_deletes": 259, "total_data_size": 507220, "memory_usage": 519080, "flush_reason": "Manual Compaction"}
Nov 25 16:52:31 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #97: started
Nov 25 16:52:31 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089551258310, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 97, "file_size": 502561, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42895, "largest_seqno": 43438, "table_properties": {"data_size": 499595, "index_size": 938, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7186, "raw_average_key_size": 18, "raw_value_size": 493397, "raw_average_value_size": 1291, "num_data_blocks": 41, "num_entries": 382, "num_filter_entries": 382, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764089520, "oldest_key_time": 1764089520, "file_creation_time": 1764089551, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 97, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:52:31 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 3575 microseconds, and 1605 cpu microseconds.
Nov 25 16:52:31 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:52:31 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:31.258336) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #97: 502561 bytes OK
Nov 25 16:52:31 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:31.258349) [db/memtable_list.cc:519] [default] Level-0 commit table #97 started
Nov 25 16:52:31 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:31.259679) [db/memtable_list.cc:722] [default] Level-0 commit table #97: memtable #1 done
Nov 25 16:52:31 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:31.259691) EVENT_LOG_v1 {"time_micros": 1764089551259688, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 16:52:31 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:31.259702) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 16:52:31 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 504101, prev total WAL file size 504101, number of live WAL files 2.
Nov 25 16:52:31 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000093.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:52:31 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:31.260044) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353037' seq:72057594037927935, type:22 .. '6C6F676D0031373631' seq:0, type:0; will stop at (end)
Nov 25 16:52:31 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 16:52:31 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [97(490KB)], [95(7115KB)]
Nov 25 16:52:31 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089551260077, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [97], "files_L6": [95], "score": -1, "input_data_size": 7788699, "oldest_snapshot_seqno": -1}
Nov 25 16:52:31 compute-0 nova_compute[254092]: 2025-11-25 16:52:31.279 254096 DEBUG oslo_concurrency.processutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.305s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:52:31 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #98: 6345 keys, 7660194 bytes, temperature: kUnknown
Nov 25 16:52:31 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089551319264, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 98, "file_size": 7660194, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7619260, "index_size": 23992, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15877, "raw_key_size": 166323, "raw_average_key_size": 26, "raw_value_size": 7506752, "raw_average_value_size": 1183, "num_data_blocks": 934, "num_entries": 6345, "num_filter_entries": 6345, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764089551, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:52:31 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:52:31 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:31.319501) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 7660194 bytes
Nov 25 16:52:31 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:31.320809) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 131.5 rd, 129.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 6.9 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(30.7) write-amplify(15.2) OK, records in: 6879, records dropped: 534 output_compression: NoCompression
Nov 25 16:52:31 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:31.320828) EVENT_LOG_v1 {"time_micros": 1764089551320819, "job": 56, "event": "compaction_finished", "compaction_time_micros": 59247, "compaction_time_cpu_micros": 35308, "output_level": 6, "num_output_files": 1, "total_output_size": 7660194, "num_input_records": 6879, "num_output_records": 6345, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 16:52:31 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000097.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:52:31 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089551321018, "job": 56, "event": "table_file_deletion", "file_number": 97}
Nov 25 16:52:31 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:52:31 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089551322330, "job": 56, "event": "table_file_deletion", "file_number": 95}
Nov 25 16:52:31 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:31.259967) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:52:31 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:31.322393) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:52:31 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:31.322399) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:52:31 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:31.322401) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:52:31 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:31.322403) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:52:31 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:52:31.322405) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:52:31 compute-0 nova_compute[254092]: 2025-11-25 16:52:31.335 254096 DEBUG nova.storage.rbd_utils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] resizing rbd image 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:52:31 compute-0 nova_compute[254092]: 2025-11-25 16:52:31.415 254096 DEBUG nova.objects.instance [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lazy-loading 'migration_context' on Instance uuid 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:52:31 compute-0 nova_compute[254092]: 2025-11-25 16:52:31.428 254096 DEBUG nova.virt.libvirt.driver [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:52:31 compute-0 nova_compute[254092]: 2025-11-25 16:52:31.429 254096 DEBUG nova.virt.libvirt.driver [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Ensure instance console log exists: /var/lib/nova/instances/1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:52:31 compute-0 nova_compute[254092]: 2025-11-25 16:52:31.430 254096 DEBUG oslo_concurrency.lockutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:52:31 compute-0 nova_compute[254092]: 2025-11-25 16:52:31.430 254096 DEBUG oslo_concurrency.lockutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:52:31 compute-0 nova_compute[254092]: 2025-11-25 16:52:31.430 254096 DEBUG oslo_concurrency.lockutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:52:31 compute-0 nova_compute[254092]: 2025-11-25 16:52:31.721 254096 DEBUG nova.network.neutron [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Successfully updated port: 923a00bb-da3b-434a-b154-c338c92e0635 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:52:31 compute-0 nova_compute[254092]: 2025-11-25 16:52:31.740 254096 DEBUG oslo_concurrency.lockutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Acquiring lock "refresh_cache-6d70119e-e45b-4a12-893e-8d5a805ca8ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:52:31 compute-0 nova_compute[254092]: 2025-11-25 16:52:31.740 254096 DEBUG oslo_concurrency.lockutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Acquired lock "refresh_cache-6d70119e-e45b-4a12-893e-8d5a805ca8ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:52:31 compute-0 nova_compute[254092]: 2025-11-25 16:52:31.740 254096 DEBUG nova.network.neutron [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:52:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2056: 321 pgs: 321 active+clean; 88 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Nov 25 16:52:31 compute-0 nova_compute[254092]: 2025-11-25 16:52:31.947 254096 DEBUG nova.compute.manager [req-9ebcc348-d38f-4844-bfbe-1be98ec0b2da req-40a7f815-459a-43a3-9a69-0fee3162e6af a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Received event network-changed-923a00bb-da3b-434a-b154-c338c92e0635 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:52:31 compute-0 nova_compute[254092]: 2025-11-25 16:52:31.948 254096 DEBUG nova.compute.manager [req-9ebcc348-d38f-4844-bfbe-1be98ec0b2da req-40a7f815-459a-43a3-9a69-0fee3162e6af a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Refreshing instance network info cache due to event network-changed-923a00bb-da3b-434a-b154-c338c92e0635. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:52:31 compute-0 nova_compute[254092]: 2025-11-25 16:52:31.948 254096 DEBUG oslo_concurrency.lockutils [req-9ebcc348-d38f-4844-bfbe-1be98ec0b2da req-40a7f815-459a-43a3-9a69-0fee3162e6af a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-6d70119e-e45b-4a12-893e-8d5a805ca8ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:52:32 compute-0 nova_compute[254092]: 2025-11-25 16:52:32.371 254096 DEBUG nova.network.neutron [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:52:32 compute-0 nova_compute[254092]: 2025-11-25 16:52:32.502 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:32 compute-0 nova_compute[254092]: 2025-11-25 16:52:32.580 254096 DEBUG nova.network.neutron [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Successfully created port: a74c23e6-4075-4b59-b36f-9d06bad062d2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:52:32 compute-0 nova_compute[254092]: 2025-11-25 16:52:32.631 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:33 compute-0 ceph-mon[74985]: pgmap v2056: 321 pgs: 321 active+clean; 88 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Nov 25 16:52:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2057: 321 pgs: 321 active+clean; 88 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Nov 25 16:52:33 compute-0 nova_compute[254092]: 2025-11-25 16:52:33.989 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:34 compute-0 nova_compute[254092]: 2025-11-25 16:52:34.165 254096 DEBUG nova.network.neutron [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Updating instance_info_cache with network_info: [{"id": "923a00bb-da3b-434a-b154-c338c92e0635", "address": "fa:16:3e:4e:17:31", "network": {"id": "cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33a2e508e63149889f0d5d945726522c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923a00bb-da", "ovs_interfaceid": "923a00bb-da3b-434a-b154-c338c92e0635", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:52:34 compute-0 nova_compute[254092]: 2025-11-25 16:52:34.194 254096 DEBUG oslo_concurrency.lockutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Releasing lock "refresh_cache-6d70119e-e45b-4a12-893e-8d5a805ca8ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:52:34 compute-0 nova_compute[254092]: 2025-11-25 16:52:34.195 254096 DEBUG nova.compute.manager [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Instance network_info: |[{"id": "923a00bb-da3b-434a-b154-c338c92e0635", "address": "fa:16:3e:4e:17:31", "network": {"id": "cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33a2e508e63149889f0d5d945726522c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923a00bb-da", "ovs_interfaceid": "923a00bb-da3b-434a-b154-c338c92e0635", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:52:34 compute-0 nova_compute[254092]: 2025-11-25 16:52:34.196 254096 DEBUG oslo_concurrency.lockutils [req-9ebcc348-d38f-4844-bfbe-1be98ec0b2da req-40a7f815-459a-43a3-9a69-0fee3162e6af a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-6d70119e-e45b-4a12-893e-8d5a805ca8ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:52:34 compute-0 nova_compute[254092]: 2025-11-25 16:52:34.196 254096 DEBUG nova.network.neutron [req-9ebcc348-d38f-4844-bfbe-1be98ec0b2da req-40a7f815-459a-43a3-9a69-0fee3162e6af a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Refreshing network info cache for port 923a00bb-da3b-434a-b154-c338c92e0635 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:52:34 compute-0 nova_compute[254092]: 2025-11-25 16:52:34.202 254096 DEBUG nova.virt.libvirt.driver [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Start _get_guest_xml network_info=[{"id": "923a00bb-da3b-434a-b154-c338c92e0635", "address": "fa:16:3e:4e:17:31", "network": {"id": "cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33a2e508e63149889f0d5d945726522c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923a00bb-da", "ovs_interfaceid": "923a00bb-da3b-434a-b154-c338c92e0635", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:52:34 compute-0 nova_compute[254092]: 2025-11-25 16:52:34.212 254096 WARNING nova.virt.libvirt.driver [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:52:34 compute-0 nova_compute[254092]: 2025-11-25 16:52:34.227 254096 DEBUG nova.virt.libvirt.host [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:52:34 compute-0 nova_compute[254092]: 2025-11-25 16:52:34.228 254096 DEBUG nova.virt.libvirt.host [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:52:34 compute-0 nova_compute[254092]: 2025-11-25 16:52:34.233 254096 DEBUG nova.virt.libvirt.host [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:52:34 compute-0 nova_compute[254092]: 2025-11-25 16:52:34.234 254096 DEBUG nova.virt.libvirt.host [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:52:34 compute-0 nova_compute[254092]: 2025-11-25 16:52:34.235 254096 DEBUG nova.virt.libvirt.driver [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:52:34 compute-0 nova_compute[254092]: 2025-11-25 16:52:34.235 254096 DEBUG nova.virt.hardware [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:52:34 compute-0 nova_compute[254092]: 2025-11-25 16:52:34.236 254096 DEBUG nova.virt.hardware [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:52:34 compute-0 nova_compute[254092]: 2025-11-25 16:52:34.237 254096 DEBUG nova.virt.hardware [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:52:34 compute-0 nova_compute[254092]: 2025-11-25 16:52:34.237 254096 DEBUG nova.virt.hardware [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:52:34 compute-0 nova_compute[254092]: 2025-11-25 16:52:34.238 254096 DEBUG nova.virt.hardware [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:52:34 compute-0 nova_compute[254092]: 2025-11-25 16:52:34.239 254096 DEBUG nova.virt.hardware [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:52:34 compute-0 nova_compute[254092]: 2025-11-25 16:52:34.239 254096 DEBUG nova.virt.hardware [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:52:34 compute-0 nova_compute[254092]: 2025-11-25 16:52:34.240 254096 DEBUG nova.virt.hardware [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:52:34 compute-0 nova_compute[254092]: 2025-11-25 16:52:34.240 254096 DEBUG nova.virt.hardware [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:52:34 compute-0 nova_compute[254092]: 2025-11-25 16:52:34.241 254096 DEBUG nova.virt.hardware [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:52:34 compute-0 nova_compute[254092]: 2025-11-25 16:52:34.242 254096 DEBUG nova.virt.hardware [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:52:34 compute-0 nova_compute[254092]: 2025-11-25 16:52:34.248 254096 DEBUG oslo_concurrency.processutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:52:34 compute-0 nova_compute[254092]: 2025-11-25 16:52:34.312 254096 DEBUG nova.network.neutron [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Successfully updated port: a74c23e6-4075-4b59-b36f-9d06bad062d2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:52:34 compute-0 nova_compute[254092]: 2025-11-25 16:52:34.344 254096 DEBUG oslo_concurrency.lockutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Acquiring lock "refresh_cache-1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:52:34 compute-0 nova_compute[254092]: 2025-11-25 16:52:34.345 254096 DEBUG oslo_concurrency.lockutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Acquired lock "refresh_cache-1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:52:34 compute-0 nova_compute[254092]: 2025-11-25 16:52:34.345 254096 DEBUG nova.network.neutron [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:52:34 compute-0 nova_compute[254092]: 2025-11-25 16:52:34.484 254096 DEBUG nova.compute.manager [req-28c5cddb-6bbb-4a69-ae00-2118b6803306 req-d2ec78c6-6f3d-4096-8571-fa40ceda1ca8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Received event network-changed-a74c23e6-4075-4b59-b36f-9d06bad062d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:52:34 compute-0 nova_compute[254092]: 2025-11-25 16:52:34.485 254096 DEBUG nova.compute.manager [req-28c5cddb-6bbb-4a69-ae00-2118b6803306 req-d2ec78c6-6f3d-4096-8571-fa40ceda1ca8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Refreshing instance network info cache due to event network-changed-a74c23e6-4075-4b59-b36f-9d06bad062d2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:52:34 compute-0 nova_compute[254092]: 2025-11-25 16:52:34.486 254096 DEBUG oslo_concurrency.lockutils [req-28c5cddb-6bbb-4a69-ae00-2118b6803306 req-d2ec78c6-6f3d-4096-8571-fa40ceda1ca8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:52:34 compute-0 nova_compute[254092]: 2025-11-25 16:52:34.637 254096 DEBUG nova.network.neutron [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:52:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:52:34 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3585207271' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:52:34 compute-0 nova_compute[254092]: 2025-11-25 16:52:34.810 254096 DEBUG oslo_concurrency.processutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:52:34 compute-0 nova_compute[254092]: 2025-11-25 16:52:34.840 254096 DEBUG nova.storage.rbd_utils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] rbd image 6d70119e-e45b-4a12-893e-8d5a805ca8ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:52:34 compute-0 nova_compute[254092]: 2025-11-25 16:52:34.845 254096 DEBUG oslo_concurrency.processutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:52:35 compute-0 ceph-mon[74985]: pgmap v2057: 321 pgs: 321 active+clean; 88 MiB data, 765 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Nov 25 16:52:35 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3585207271' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:52:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:52:35 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2045095226' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:52:35 compute-0 nova_compute[254092]: 2025-11-25 16:52:35.338 254096 DEBUG oslo_concurrency.processutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:52:35 compute-0 nova_compute[254092]: 2025-11-25 16:52:35.343 254096 DEBUG nova.virt.libvirt.vif [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:52:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-142320588',display_name='tempest-ServerRescueNegativeTestJSON-server-142320588',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-142320588',id=107,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='33a2e508e63149889f0d5d945726522c',ramdisk_id='',reservation_id='r-wqxsdvnw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1769565225',owner_user_name='tempest-ServerRescueNegativeTestJSON-1769565225-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:52:28Z,user_data=None,user_id='9deaf2356cda4c0cb2a52383b7f2e609',uuid=6d70119e-e45b-4a12-893e-8d5a805ca8ab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "923a00bb-da3b-434a-b154-c338c92e0635", "address": "fa:16:3e:4e:17:31", "network": {"id": "cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33a2e508e63149889f0d5d945726522c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923a00bb-da", "ovs_interfaceid": "923a00bb-da3b-434a-b154-c338c92e0635", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:52:35 compute-0 nova_compute[254092]: 2025-11-25 16:52:35.344 254096 DEBUG nova.network.os_vif_util [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Converting VIF {"id": "923a00bb-da3b-434a-b154-c338c92e0635", "address": "fa:16:3e:4e:17:31", "network": {"id": "cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33a2e508e63149889f0d5d945726522c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923a00bb-da", "ovs_interfaceid": "923a00bb-da3b-434a-b154-c338c92e0635", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:52:35 compute-0 nova_compute[254092]: 2025-11-25 16:52:35.344 254096 DEBUG nova.network.os_vif_util [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4e:17:31,bridge_name='br-int',has_traffic_filtering=True,id=923a00bb-da3b-434a-b154-c338c92e0635,network=Network(cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap923a00bb-da') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:52:35 compute-0 nova_compute[254092]: 2025-11-25 16:52:35.346 254096 DEBUG nova.objects.instance [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lazy-loading 'pci_devices' on Instance uuid 6d70119e-e45b-4a12-893e-8d5a805ca8ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:52:35 compute-0 nova_compute[254092]: 2025-11-25 16:52:35.360 254096 DEBUG nova.virt.libvirt.driver [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:52:35 compute-0 nova_compute[254092]:   <uuid>6d70119e-e45b-4a12-893e-8d5a805ca8ab</uuid>
Nov 25 16:52:35 compute-0 nova_compute[254092]:   <name>instance-0000006b</name>
Nov 25 16:52:35 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:52:35 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:52:35 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:52:35 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-142320588</nova:name>
Nov 25 16:52:35 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:52:34</nova:creationTime>
Nov 25 16:52:35 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:52:35 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:52:35 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:52:35 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:52:35 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:52:35 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:52:35 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:52:35 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:52:35 compute-0 nova_compute[254092]:         <nova:user uuid="9deaf2356cda4c0cb2a52383b7f2e609">tempest-ServerRescueNegativeTestJSON-1769565225-project-member</nova:user>
Nov 25 16:52:35 compute-0 nova_compute[254092]:         <nova:project uuid="33a2e508e63149889f0d5d945726522c">tempest-ServerRescueNegativeTestJSON-1769565225</nova:project>
Nov 25 16:52:35 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:52:35 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:52:35 compute-0 nova_compute[254092]:         <nova:port uuid="923a00bb-da3b-434a-b154-c338c92e0635">
Nov 25 16:52:35 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:52:35 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:52:35 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:52:35 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:52:35 compute-0 nova_compute[254092]:     <system>
Nov 25 16:52:35 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:52:35 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:52:35 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:52:35 compute-0 nova_compute[254092]:       <entry name="serial">6d70119e-e45b-4a12-893e-8d5a805ca8ab</entry>
Nov 25 16:52:35 compute-0 nova_compute[254092]:       <entry name="uuid">6d70119e-e45b-4a12-893e-8d5a805ca8ab</entry>
Nov 25 16:52:35 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     </system>
Nov 25 16:52:35 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:52:35 compute-0 nova_compute[254092]:   <os>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:   </os>
Nov 25 16:52:35 compute-0 nova_compute[254092]:   <features>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:   </features>
Nov 25 16:52:35 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:52:35 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:52:35 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:52:35 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:52:35 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:52:35 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/6d70119e-e45b-4a12-893e-8d5a805ca8ab_disk">
Nov 25 16:52:35 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:       </source>
Nov 25 16:52:35 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:52:35 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:52:35 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:52:35 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/6d70119e-e45b-4a12-893e-8d5a805ca8ab_disk.config">
Nov 25 16:52:35 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:       </source>
Nov 25 16:52:35 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:52:35 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:52:35 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:52:35 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:4e:17:31"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:       <target dev="tap923a00bb-da"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:52:35 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/6d70119e-e45b-4a12-893e-8d5a805ca8ab/console.log" append="off"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     <video>
Nov 25 16:52:35 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     </video>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:52:35 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:52:35 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:52:35 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:52:35 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:52:35 compute-0 nova_compute[254092]: </domain>
Nov 25 16:52:35 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:52:35 compute-0 nova_compute[254092]: 2025-11-25 16:52:35.362 254096 DEBUG nova.compute.manager [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Preparing to wait for external event network-vif-plugged-923a00bb-da3b-434a-b154-c338c92e0635 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:52:35 compute-0 nova_compute[254092]: 2025-11-25 16:52:35.363 254096 DEBUG oslo_concurrency.lockutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Acquiring lock "6d70119e-e45b-4a12-893e-8d5a805ca8ab-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:52:35 compute-0 nova_compute[254092]: 2025-11-25 16:52:35.364 254096 DEBUG oslo_concurrency.lockutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "6d70119e-e45b-4a12-893e-8d5a805ca8ab-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:52:35 compute-0 nova_compute[254092]: 2025-11-25 16:52:35.365 254096 DEBUG oslo_concurrency.lockutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "6d70119e-e45b-4a12-893e-8d5a805ca8ab-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:52:35 compute-0 nova_compute[254092]: 2025-11-25 16:52:35.366 254096 DEBUG nova.virt.libvirt.vif [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:52:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-142320588',display_name='tempest-ServerRescueNegativeTestJSON-server-142320588',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-142320588',id=107,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='33a2e508e63149889f0d5d945726522c',ramdisk_id='',reservation_id='r-wqxsdvnw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1769565225',owner_user_name='tempest-ServerRescueNegativeTestJSON-1769565225-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:52:28Z,user_data=None,user_id='9deaf2356cda4c0cb2a52383b7f2e609',uuid=6d70119e-e45b-4a12-893e-8d5a805ca8ab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "923a00bb-da3b-434a-b154-c338c92e0635", "address": "fa:16:3e:4e:17:31", "network": {"id": "cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33a2e508e63149889f0d5d945726522c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923a00bb-da", "ovs_interfaceid": "923a00bb-da3b-434a-b154-c338c92e0635", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:52:35 compute-0 nova_compute[254092]: 2025-11-25 16:52:35.367 254096 DEBUG nova.network.os_vif_util [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Converting VIF {"id": "923a00bb-da3b-434a-b154-c338c92e0635", "address": "fa:16:3e:4e:17:31", "network": {"id": "cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33a2e508e63149889f0d5d945726522c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923a00bb-da", "ovs_interfaceid": "923a00bb-da3b-434a-b154-c338c92e0635", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:52:35 compute-0 nova_compute[254092]: 2025-11-25 16:52:35.368 254096 DEBUG nova.network.os_vif_util [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4e:17:31,bridge_name='br-int',has_traffic_filtering=True,id=923a00bb-da3b-434a-b154-c338c92e0635,network=Network(cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap923a00bb-da') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:52:35 compute-0 nova_compute[254092]: 2025-11-25 16:52:35.369 254096 DEBUG os_vif [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4e:17:31,bridge_name='br-int',has_traffic_filtering=True,id=923a00bb-da3b-434a-b154-c338c92e0635,network=Network(cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap923a00bb-da') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:52:35 compute-0 nova_compute[254092]: 2025-11-25 16:52:35.370 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:35 compute-0 nova_compute[254092]: 2025-11-25 16:52:35.371 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:52:35 compute-0 nova_compute[254092]: 2025-11-25 16:52:35.371 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:52:35 compute-0 nova_compute[254092]: 2025-11-25 16:52:35.374 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:35 compute-0 nova_compute[254092]: 2025-11-25 16:52:35.374 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap923a00bb-da, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:52:35 compute-0 nova_compute[254092]: 2025-11-25 16:52:35.375 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap923a00bb-da, col_values=(('external_ids', {'iface-id': '923a00bb-da3b-434a-b154-c338c92e0635', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4e:17:31', 'vm-uuid': '6d70119e-e45b-4a12-893e-8d5a805ca8ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:52:35 compute-0 nova_compute[254092]: 2025-11-25 16:52:35.377 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:35 compute-0 NetworkManager[48891]: <info>  [1764089555.3783] manager: (tap923a00bb-da): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/441)
Nov 25 16:52:35 compute-0 nova_compute[254092]: 2025-11-25 16:52:35.382 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:52:35 compute-0 nova_compute[254092]: 2025-11-25 16:52:35.384 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:35 compute-0 nova_compute[254092]: 2025-11-25 16:52:35.384 254096 INFO os_vif [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4e:17:31,bridge_name='br-int',has_traffic_filtering=True,id=923a00bb-da3b-434a-b154-c338c92e0635,network=Network(cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap923a00bb-da')
Nov 25 16:52:35 compute-0 nova_compute[254092]: 2025-11-25 16:52:35.443 254096 DEBUG nova.virt.libvirt.driver [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:52:35 compute-0 nova_compute[254092]: 2025-11-25 16:52:35.444 254096 DEBUG nova.virt.libvirt.driver [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:52:35 compute-0 nova_compute[254092]: 2025-11-25 16:52:35.444 254096 DEBUG nova.virt.libvirt.driver [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] No VIF found with MAC fa:16:3e:4e:17:31, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:52:35 compute-0 nova_compute[254092]: 2025-11-25 16:52:35.445 254096 INFO nova.virt.libvirt.driver [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Using config drive
Nov 25 16:52:35 compute-0 nova_compute[254092]: 2025-11-25 16:52:35.471 254096 DEBUG nova.storage.rbd_utils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] rbd image 6d70119e-e45b-4a12-893e-8d5a805ca8ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:52:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2058: 321 pgs: 321 active+clean; 110 MiB data, 774 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 2.5 MiB/s wr, 58 op/s
Nov 25 16:52:36 compute-0 nova_compute[254092]: 2025-11-25 16:52:36.250 254096 DEBUG nova.network.neutron [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Updating instance_info_cache with network_info: [{"id": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "address": "fa:16:3e:09:6e:80", "network": {"id": "cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33a2e508e63149889f0d5d945726522c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa74c23e6-40", "ovs_interfaceid": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:52:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:52:36 compute-0 nova_compute[254092]: 2025-11-25 16:52:36.270 254096 DEBUG oslo_concurrency.lockutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Releasing lock "refresh_cache-1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:52:36 compute-0 nova_compute[254092]: 2025-11-25 16:52:36.270 254096 DEBUG nova.compute.manager [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Instance network_info: |[{"id": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "address": "fa:16:3e:09:6e:80", "network": {"id": "cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33a2e508e63149889f0d5d945726522c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa74c23e6-40", "ovs_interfaceid": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:52:36 compute-0 nova_compute[254092]: 2025-11-25 16:52:36.271 254096 DEBUG oslo_concurrency.lockutils [req-28c5cddb-6bbb-4a69-ae00-2118b6803306 req-d2ec78c6-6f3d-4096-8571-fa40ceda1ca8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:52:36 compute-0 nova_compute[254092]: 2025-11-25 16:52:36.271 254096 DEBUG nova.network.neutron [req-28c5cddb-6bbb-4a69-ae00-2118b6803306 req-d2ec78c6-6f3d-4096-8571-fa40ceda1ca8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Refreshing network info cache for port a74c23e6-4075-4b59-b36f-9d06bad062d2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:52:36 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2045095226' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:52:36 compute-0 nova_compute[254092]: 2025-11-25 16:52:36.274 254096 DEBUG nova.virt.libvirt.driver [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Start _get_guest_xml network_info=[{"id": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "address": "fa:16:3e:09:6e:80", "network": {"id": "cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33a2e508e63149889f0d5d945726522c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa74c23e6-40", "ovs_interfaceid": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:52:36 compute-0 nova_compute[254092]: 2025-11-25 16:52:36.279 254096 WARNING nova.virt.libvirt.driver [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:52:36 compute-0 nova_compute[254092]: 2025-11-25 16:52:36.285 254096 DEBUG nova.virt.libvirt.host [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:52:36 compute-0 nova_compute[254092]: 2025-11-25 16:52:36.285 254096 DEBUG nova.virt.libvirt.host [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:52:36 compute-0 nova_compute[254092]: 2025-11-25 16:52:36.289 254096 DEBUG nova.virt.libvirt.host [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:52:36 compute-0 nova_compute[254092]: 2025-11-25 16:52:36.289 254096 DEBUG nova.virt.libvirt.host [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:52:36 compute-0 nova_compute[254092]: 2025-11-25 16:52:36.290 254096 DEBUG nova.virt.libvirt.driver [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:52:36 compute-0 nova_compute[254092]: 2025-11-25 16:52:36.290 254096 DEBUG nova.virt.hardware [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:52:36 compute-0 nova_compute[254092]: 2025-11-25 16:52:36.290 254096 DEBUG nova.virt.hardware [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:52:36 compute-0 nova_compute[254092]: 2025-11-25 16:52:36.291 254096 DEBUG nova.virt.hardware [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:52:36 compute-0 nova_compute[254092]: 2025-11-25 16:52:36.291 254096 DEBUG nova.virt.hardware [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:52:36 compute-0 nova_compute[254092]: 2025-11-25 16:52:36.291 254096 DEBUG nova.virt.hardware [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:52:36 compute-0 nova_compute[254092]: 2025-11-25 16:52:36.291 254096 DEBUG nova.virt.hardware [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:52:36 compute-0 nova_compute[254092]: 2025-11-25 16:52:36.291 254096 DEBUG nova.virt.hardware [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:52:36 compute-0 nova_compute[254092]: 2025-11-25 16:52:36.292 254096 DEBUG nova.virt.hardware [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:52:36 compute-0 nova_compute[254092]: 2025-11-25 16:52:36.292 254096 DEBUG nova.virt.hardware [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:52:36 compute-0 nova_compute[254092]: 2025-11-25 16:52:36.292 254096 DEBUG nova.virt.hardware [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:52:36 compute-0 nova_compute[254092]: 2025-11-25 16:52:36.292 254096 DEBUG nova.virt.hardware [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:52:36 compute-0 nova_compute[254092]: 2025-11-25 16:52:36.295 254096 DEBUG oslo_concurrency.processutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:52:36 compute-0 nova_compute[254092]: 2025-11-25 16:52:36.339 254096 INFO nova.virt.libvirt.driver [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Creating config drive at /var/lib/nova/instances/6d70119e-e45b-4a12-893e-8d5a805ca8ab/disk.config
Nov 25 16:52:36 compute-0 nova_compute[254092]: 2025-11-25 16:52:36.344 254096 DEBUG oslo_concurrency.processutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6d70119e-e45b-4a12-893e-8d5a805ca8ab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpiyix4975 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:52:36 compute-0 nova_compute[254092]: 2025-11-25 16:52:36.453 254096 DEBUG nova.network.neutron [req-9ebcc348-d38f-4844-bfbe-1be98ec0b2da req-40a7f815-459a-43a3-9a69-0fee3162e6af a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Updated VIF entry in instance network info cache for port 923a00bb-da3b-434a-b154-c338c92e0635. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:52:36 compute-0 nova_compute[254092]: 2025-11-25 16:52:36.454 254096 DEBUG nova.network.neutron [req-9ebcc348-d38f-4844-bfbe-1be98ec0b2da req-40a7f815-459a-43a3-9a69-0fee3162e6af a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Updating instance_info_cache with network_info: [{"id": "923a00bb-da3b-434a-b154-c338c92e0635", "address": "fa:16:3e:4e:17:31", "network": {"id": "cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33a2e508e63149889f0d5d945726522c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923a00bb-da", "ovs_interfaceid": "923a00bb-da3b-434a-b154-c338c92e0635", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:52:36 compute-0 nova_compute[254092]: 2025-11-25 16:52:36.468 254096 DEBUG oslo_concurrency.lockutils [req-9ebcc348-d38f-4844-bfbe-1be98ec0b2da req-40a7f815-459a-43a3-9a69-0fee3162e6af a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-6d70119e-e45b-4a12-893e-8d5a805ca8ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:52:36 compute-0 nova_compute[254092]: 2025-11-25 16:52:36.496 254096 DEBUG oslo_concurrency.processutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6d70119e-e45b-4a12-893e-8d5a805ca8ab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpiyix4975" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:52:36 compute-0 nova_compute[254092]: 2025-11-25 16:52:36.530 254096 DEBUG nova.storage.rbd_utils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] rbd image 6d70119e-e45b-4a12-893e-8d5a805ca8ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:52:36 compute-0 nova_compute[254092]: 2025-11-25 16:52:36.541 254096 DEBUG oslo_concurrency.processutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6d70119e-e45b-4a12-893e-8d5a805ca8ab/disk.config 6d70119e-e45b-4a12-893e-8d5a805ca8ab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:52:36 compute-0 nova_compute[254092]: 2025-11-25 16:52:36.731 254096 DEBUG oslo_concurrency.processutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6d70119e-e45b-4a12-893e-8d5a805ca8ab/disk.config 6d70119e-e45b-4a12-893e-8d5a805ca8ab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.190s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:52:36 compute-0 nova_compute[254092]: 2025-11-25 16:52:36.732 254096 INFO nova.virt.libvirt.driver [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Deleting local config drive /var/lib/nova/instances/6d70119e-e45b-4a12-893e-8d5a805ca8ab/disk.config because it was imported into RBD.
Nov 25 16:52:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:52:36 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/991932512' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:52:36 compute-0 kernel: tap923a00bb-da: entered promiscuous mode
Nov 25 16:52:36 compute-0 nova_compute[254092]: 2025-11-25 16:52:36.795 254096 DEBUG oslo_concurrency.processutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:52:36 compute-0 NetworkManager[48891]: <info>  [1764089556.7976] manager: (tap923a00bb-da): new Tun device (/org/freedesktop/NetworkManager/Devices/442)
Nov 25 16:52:36 compute-0 ovn_controller[153477]: 2025-11-25T16:52:36Z|01078|binding|INFO|Claiming lport 923a00bb-da3b-434a-b154-c338c92e0635 for this chassis.
Nov 25 16:52:36 compute-0 ovn_controller[153477]: 2025-11-25T16:52:36Z|01079|binding|INFO|923a00bb-da3b-434a-b154-c338c92e0635: Claiming fa:16:3e:4e:17:31 10.100.0.13
Nov 25 16:52:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:36.806 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4e:17:31 10.100.0.13'], port_security=['fa:16:3e:4e:17:31 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '6d70119e-e45b-4a12-893e-8d5a805ca8ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '33a2e508e63149889f0d5d945726522c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f668b193-ad4a-4e17-9130-f237a989e2f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fa74b059-0bab-4ac8-ad4f-59f5f4ae171b, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=923a00bb-da3b-434a-b154-c338c92e0635) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:52:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:36.807 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 923a00bb-da3b-434a-b154-c338c92e0635 in datapath cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e bound to our chassis
Nov 25 16:52:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:36.809 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e
Nov 25 16:52:36 compute-0 systemd-udevd[363873]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:52:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:36.822 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c04cb152-5f45-42b5-9549-82d5caa6cf25]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:36.823 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcf0cb5b9-c1 in ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:52:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:36.825 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcf0cb5b9-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:52:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:36.825 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ba294cb9-9491-4223-97bf-91ebaa852d3e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:36.826 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b135b342-7747-4ca8-902e-45a489e5b3af]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:36 compute-0 NetworkManager[48891]: <info>  [1764089556.8301] device (tap923a00bb-da): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:52:36 compute-0 NetworkManager[48891]: <info>  [1764089556.8310] device (tap923a00bb-da): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:52:36 compute-0 systemd-machined[216343]: New machine qemu-136-instance-0000006b.
Nov 25 16:52:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:36.837 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[2bbf0251-84a1-4adf-a6d9-4a6f9a4c96c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:36 compute-0 nova_compute[254092]: 2025-11-25 16:52:36.840 254096 DEBUG nova.storage.rbd_utils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] rbd image 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:52:36 compute-0 nova_compute[254092]: 2025-11-25 16:52:36.845 254096 DEBUG oslo_concurrency.processutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:52:36 compute-0 systemd[1]: Started Virtual Machine qemu-136-instance-0000006b.
Nov 25 16:52:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:36.865 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[01377dd8-4deb-412c-b031-97ee5fa4036b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:36 compute-0 ovn_controller[153477]: 2025-11-25T16:52:36Z|01080|binding|INFO|Setting lport 923a00bb-da3b-434a-b154-c338c92e0635 ovn-installed in OVS
Nov 25 16:52:36 compute-0 ovn_controller[153477]: 2025-11-25T16:52:36Z|01081|binding|INFO|Setting lport 923a00bb-da3b-434a-b154-c338c92e0635 up in Southbound
Nov 25 16:52:36 compute-0 nova_compute[254092]: 2025-11-25 16:52:36.885 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:36.909 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[770184cf-219a-43ef-ba22-2e4eaf65e8ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:36 compute-0 NetworkManager[48891]: <info>  [1764089556.9175] manager: (tapcf0cb5b9-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/443)
Nov 25 16:52:36 compute-0 systemd-udevd[363881]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:52:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:36.918 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[af042b1a-8df2-4e18-80fc-25f56736bfda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:36.957 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a28446dc-154d-45e7-b40a-585cd8f0d5b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:36.960 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[082b0017-6e87-461d-b0f4-9b9cab05a3da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:36 compute-0 NetworkManager[48891]: <info>  [1764089556.9913] device (tapcf0cb5b9-c0): carrier: link connected
Nov 25 16:52:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:36.998 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ba4085f5-42e5-431e-9a25-a6f8f2b8ba9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:37.014 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cd4303c0-13a4-40f6-b299-18385511cc7b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcf0cb5b9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:cc:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 319], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 601458, 'reachable_time': 32191, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363932, 'error': None, 'target': 'ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:37.028 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ba1ecbb3-b642-41b4-9ab3-74bd4ae505ba]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feef:ccc7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 601458, 'tstamp': 601458}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363933, 'error': None, 'target': 'ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:37.050 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2796c31d-937b-4d9b-955e-379cf0e69ff5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcf0cb5b9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:cc:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 319], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 601458, 'reachable_time': 32191, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 363934, 'error': None, 'target': 'ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:37.096 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[71bc3146-de22-4c0e-8109-9bbd32145d7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.159 254096 DEBUG nova.compute.manager [req-83ae6581-9adb-469b-9923-4127cee05678 req-0a3aecb5-22cf-4787-8286-b008f775f48e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Received event network-vif-plugged-923a00bb-da3b-434a-b154-c338c92e0635 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.159 254096 DEBUG oslo_concurrency.lockutils [req-83ae6581-9adb-469b-9923-4127cee05678 req-0a3aecb5-22cf-4787-8286-b008f775f48e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6d70119e-e45b-4a12-893e-8d5a805ca8ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.159 254096 DEBUG oslo_concurrency.lockutils [req-83ae6581-9adb-469b-9923-4127cee05678 req-0a3aecb5-22cf-4787-8286-b008f775f48e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6d70119e-e45b-4a12-893e-8d5a805ca8ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.160 254096 DEBUG oslo_concurrency.lockutils [req-83ae6581-9adb-469b-9923-4127cee05678 req-0a3aecb5-22cf-4787-8286-b008f775f48e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6d70119e-e45b-4a12-893e-8d5a805ca8ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.160 254096 DEBUG nova.compute.manager [req-83ae6581-9adb-469b-9923-4127cee05678 req-0a3aecb5-22cf-4787-8286-b008f775f48e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Processing event network-vif-plugged-923a00bb-da3b-434a-b154-c338c92e0635 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:52:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:37.168 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ee753148-1e7b-4209-9d11-18f9e0261265]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:37.169 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf0cb5b9-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:52:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:37.170 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:52:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:37.170 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcf0cb5b9-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:52:37 compute-0 kernel: tapcf0cb5b9-c0: entered promiscuous mode
Nov 25 16:52:37 compute-0 NetworkManager[48891]: <info>  [1764089557.1724] manager: (tapcf0cb5b9-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/444)
Nov 25 16:52:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:37.174 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcf0cb5b9-c0, col_values=(('external_ids', {'iface-id': '04d240b5-2178-4712-9ede-6d58532785de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:52:37 compute-0 ovn_controller[153477]: 2025-11-25T16:52:37Z|01082|binding|INFO|Releasing lport 04d240b5-2178-4712-9ede-6d58532785de from this chassis (sb_readonly=0)
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.171 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.188 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:37.189 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:52:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:37.190 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7c641285-bd5c-4dbd-ba72-0abc7ca09734]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:37.190 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:52:37 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:52:37 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:52:37 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e
Nov 25 16:52:37 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:52:37 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:52:37 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:52:37 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e.pid.haproxy
Nov 25 16:52:37 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:52:37 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:52:37 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:52:37 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:52:37 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:52:37 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:52:37 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:52:37 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:52:37 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:52:37 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:52:37 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:52:37 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:52:37 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:52:37 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:52:37 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:52:37 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:52:37 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:52:37 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:52:37 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:52:37 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:52:37 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e
Nov 25 16:52:37 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:52:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:37.191 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e', 'env', 'PROCESS_TAG=haproxy-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:52:37 compute-0 ceph-mon[74985]: pgmap v2058: 321 pgs: 321 active+clean; 110 MiB data, 774 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 2.5 MiB/s wr, 58 op/s
Nov 25 16:52:37 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/991932512' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:52:37 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:52:37 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1609706744' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.310 254096 DEBUG oslo_concurrency.processutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.311 254096 DEBUG nova.virt.libvirt.vif [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:52:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-2043816774',display_name='tempest-ServerRescueNegativeTestJSON-server-2043816774',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-2043816774',id=108,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='33a2e508e63149889f0d5d945726522c',ramdisk_id='',reservation_id='r-6exq9cfa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1769565225',owner_user_name='t
empest-ServerRescueNegativeTestJSON-1769565225-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:52:30Z,user_data=None,user_id='9deaf2356cda4c0cb2a52383b7f2e609',uuid=1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "address": "fa:16:3e:09:6e:80", "network": {"id": "cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33a2e508e63149889f0d5d945726522c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa74c23e6-40", "ovs_interfaceid": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.312 254096 DEBUG nova.network.os_vif_util [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Converting VIF {"id": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "address": "fa:16:3e:09:6e:80", "network": {"id": "cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33a2e508e63149889f0d5d945726522c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa74c23e6-40", "ovs_interfaceid": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.313 254096 DEBUG nova.network.os_vif_util [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:6e:80,bridge_name='br-int',has_traffic_filtering=True,id=a74c23e6-4075-4b59-b36f-9d06bad062d2,network=Network(cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa74c23e6-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.314 254096 DEBUG nova.objects.instance [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lazy-loading 'pci_devices' on Instance uuid 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.333 254096 DEBUG nova.virt.libvirt.driver [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:52:37 compute-0 nova_compute[254092]:   <uuid>1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5</uuid>
Nov 25 16:52:37 compute-0 nova_compute[254092]:   <name>instance-0000006c</name>
Nov 25 16:52:37 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:52:37 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:52:37 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:52:37 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-2043816774</nova:name>
Nov 25 16:52:37 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:52:36</nova:creationTime>
Nov 25 16:52:37 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:52:37 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:52:37 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:52:37 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:52:37 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:52:37 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:52:37 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:52:37 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:52:37 compute-0 nova_compute[254092]:         <nova:user uuid="9deaf2356cda4c0cb2a52383b7f2e609">tempest-ServerRescueNegativeTestJSON-1769565225-project-member</nova:user>
Nov 25 16:52:37 compute-0 nova_compute[254092]:         <nova:project uuid="33a2e508e63149889f0d5d945726522c">tempest-ServerRescueNegativeTestJSON-1769565225</nova:project>
Nov 25 16:52:37 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:52:37 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:52:37 compute-0 nova_compute[254092]:         <nova:port uuid="a74c23e6-4075-4b59-b36f-9d06bad062d2">
Nov 25 16:52:37 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:52:37 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:52:37 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:52:37 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:52:37 compute-0 nova_compute[254092]:     <system>
Nov 25 16:52:37 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:52:37 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:52:37 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:52:37 compute-0 nova_compute[254092]:       <entry name="serial">1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5</entry>
Nov 25 16:52:37 compute-0 nova_compute[254092]:       <entry name="uuid">1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5</entry>
Nov 25 16:52:37 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     </system>
Nov 25 16:52:37 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:52:37 compute-0 nova_compute[254092]:   <os>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:   </os>
Nov 25 16:52:37 compute-0 nova_compute[254092]:   <features>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:   </features>
Nov 25 16:52:37 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:52:37 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:52:37 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:52:37 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:52:37 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:52:37 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk">
Nov 25 16:52:37 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:       </source>
Nov 25 16:52:37 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:52:37 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:52:37 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:52:37 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk.config">
Nov 25 16:52:37 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:       </source>
Nov 25 16:52:37 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:52:37 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:52:37 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:52:37 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:09:6e:80"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:       <target dev="tapa74c23e6-40"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:52:37 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5/console.log" append="off"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     <video>
Nov 25 16:52:37 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     </video>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:52:37 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:52:37 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:52:37 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:52:37 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:52:37 compute-0 nova_compute[254092]: </domain>
Nov 25 16:52:37 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.339 254096 DEBUG nova.compute.manager [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Preparing to wait for external event network-vif-plugged-a74c23e6-4075-4b59-b36f-9d06bad062d2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.339 254096 DEBUG oslo_concurrency.lockutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Acquiring lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.339 254096 DEBUG oslo_concurrency.lockutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.340 254096 DEBUG oslo_concurrency.lockutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.341 254096 DEBUG nova.virt.libvirt.vif [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:52:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-2043816774',display_name='tempest-ServerRescueNegativeTestJSON-server-2043816774',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-2043816774',id=108,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='33a2e508e63149889f0d5d945726522c',ramdisk_id='',reservation_id='r-6exq9cfa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1769565225',owner_us
er_name='tempest-ServerRescueNegativeTestJSON-1769565225-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:52:30Z,user_data=None,user_id='9deaf2356cda4c0cb2a52383b7f2e609',uuid=1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "address": "fa:16:3e:09:6e:80", "network": {"id": "cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33a2e508e63149889f0d5d945726522c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa74c23e6-40", "ovs_interfaceid": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.341 254096 DEBUG nova.network.os_vif_util [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Converting VIF {"id": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "address": "fa:16:3e:09:6e:80", "network": {"id": "cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33a2e508e63149889f0d5d945726522c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa74c23e6-40", "ovs_interfaceid": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.342 254096 DEBUG nova.network.os_vif_util [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:6e:80,bridge_name='br-int',has_traffic_filtering=True,id=a74c23e6-4075-4b59-b36f-9d06bad062d2,network=Network(cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa74c23e6-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.342 254096 DEBUG os_vif [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:6e:80,bridge_name='br-int',has_traffic_filtering=True,id=a74c23e6-4075-4b59-b36f-9d06bad062d2,network=Network(cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa74c23e6-40') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.343 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.343 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.343 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.344 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089557.331682, 6d70119e-e45b-4a12-893e-8d5a805ca8ab => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.344 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] VM Started (Lifecycle Event)
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.346 254096 DEBUG nova.compute.manager [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.347 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.348 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa74c23e6-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.348 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa74c23e6-40, col_values=(('external_ids', {'iface-id': 'a74c23e6-4075-4b59-b36f-9d06bad062d2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:09:6e:80', 'vm-uuid': '1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.376 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.383 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:37 compute-0 NetworkManager[48891]: <info>  [1764089557.3835] manager: (tapa74c23e6-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/445)
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.385 254096 DEBUG nova.virt.libvirt.driver [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.386 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.387 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.389 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.390 254096 INFO os_vif [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:6e:80,bridge_name='br-int',has_traffic_filtering=True,id=a74c23e6-4075-4b59-b36f-9d06bad062d2,network=Network(cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa74c23e6-40')
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.392 254096 INFO nova.virt.libvirt.driver [-] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Instance spawned successfully.
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.393 254096 DEBUG nova.virt.libvirt.driver [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.409 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.409 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089557.333787, 6d70119e-e45b-4a12-893e-8d5a805ca8ab => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.410 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] VM Paused (Lifecycle Event)
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.416 254096 DEBUG nova.virt.libvirt.driver [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.417 254096 DEBUG nova.virt.libvirt.driver [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.418 254096 DEBUG nova.virt.libvirt.driver [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.418 254096 DEBUG nova.virt.libvirt.driver [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.418 254096 DEBUG nova.virt.libvirt.driver [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.419 254096 DEBUG nova.virt.libvirt.driver [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.427 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.430 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089557.3827226, 6d70119e-e45b-4a12-893e-8d5a805ca8ab => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.430 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] VM Resumed (Lifecycle Event)
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.462 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.466 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.475 254096 DEBUG nova.virt.libvirt.driver [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.475 254096 DEBUG nova.virt.libvirt.driver [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.476 254096 DEBUG nova.virt.libvirt.driver [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] No VIF found with MAC fa:16:3e:09:6e:80, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.476 254096 INFO nova.virt.libvirt.driver [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Using config drive
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.506 254096 DEBUG nova.storage.rbd_utils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] rbd image 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.516 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.519 254096 INFO nova.compute.manager [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Took 9.35 seconds to spawn the instance on the hypervisor.
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.520 254096 DEBUG nova.compute.manager [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:52:37 compute-0 podman[364012]: 2025-11-25 16:52:37.543739313 +0000 UTC m=+0.048242792 container create 9758ed213608c06fd11fb1ff6275783984c1d6dffd6b7b49825d3f97a73a2bd3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.580 254096 INFO nova.compute.manager [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Took 10.22 seconds to build instance.
Nov 25 16:52:37 compute-0 systemd[1]: Started libpod-conmon-9758ed213608c06fd11fb1ff6275783984c1d6dffd6b7b49825d3f97a73a2bd3.scope.
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.603 254096 DEBUG oslo_concurrency.lockutils [None req-cd478d32-2c93-4322-980e-b8af46747b91 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "6d70119e-e45b-4a12-893e-8d5a805ca8ab" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.336s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:52:37 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:52:37 compute-0 podman[364012]: 2025-11-25 16:52:37.519955017 +0000 UTC m=+0.024458516 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:52:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a699cce47daa3e0d0478b9596de6cbf41d67cddb37cef82117aeaf5f09e69db/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:52:37 compute-0 podman[364012]: 2025-11-25 16:52:37.628018673 +0000 UTC m=+0.132522172 container init 9758ed213608c06fd11fb1ff6275783984c1d6dffd6b7b49825d3f97a73a2bd3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 16:52:37 compute-0 podman[364012]: 2025-11-25 16:52:37.633077981 +0000 UTC m=+0.137581460 container start 9758ed213608c06fd11fb1ff6275783984c1d6dffd6b7b49825d3f97a73a2bd3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:52:37 compute-0 neutron-haproxy-ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e[364045]: [NOTICE]   (364049) : New worker (364051) forked
Nov 25 16:52:37 compute-0 neutron-haproxy-ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e[364045]: [NOTICE]   (364049) : Loading success.
Nov 25 16:52:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2059: 321 pgs: 321 active+clean; 134 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 3.5 MiB/s wr, 81 op/s
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.986 254096 DEBUG nova.network.neutron [req-28c5cddb-6bbb-4a69-ae00-2118b6803306 req-d2ec78c6-6f3d-4096-8571-fa40ceda1ca8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Updated VIF entry in instance network info cache for port a74c23e6-4075-4b59-b36f-9d06bad062d2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:52:37 compute-0 nova_compute[254092]: 2025-11-25 16:52:37.987 254096 DEBUG nova.network.neutron [req-28c5cddb-6bbb-4a69-ae00-2118b6803306 req-d2ec78c6-6f3d-4096-8571-fa40ceda1ca8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Updating instance_info_cache with network_info: [{"id": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "address": "fa:16:3e:09:6e:80", "network": {"id": "cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33a2e508e63149889f0d5d945726522c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa74c23e6-40", "ovs_interfaceid": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:52:38 compute-0 nova_compute[254092]: 2025-11-25 16:52:38.000 254096 DEBUG oslo_concurrency.lockutils [req-28c5cddb-6bbb-4a69-ae00-2118b6803306 req-d2ec78c6-6f3d-4096-8571-fa40ceda1ca8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:52:38 compute-0 nova_compute[254092]: 2025-11-25 16:52:38.044 254096 INFO nova.virt.libvirt.driver [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Creating config drive at /var/lib/nova/instances/1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5/disk.config
Nov 25 16:52:38 compute-0 nova_compute[254092]: 2025-11-25 16:52:38.048 254096 DEBUG oslo_concurrency.processutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeqf5luvq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:52:38 compute-0 nova_compute[254092]: 2025-11-25 16:52:38.192 254096 DEBUG oslo_concurrency.processutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeqf5luvq" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:52:38 compute-0 nova_compute[254092]: 2025-11-25 16:52:38.213 254096 DEBUG nova.storage.rbd_utils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] rbd image 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:52:38 compute-0 nova_compute[254092]: 2025-11-25 16:52:38.216 254096 DEBUG oslo_concurrency.processutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5/disk.config 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:52:38 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1609706744' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:52:38 compute-0 nova_compute[254092]: 2025-11-25 16:52:38.382 254096 DEBUG oslo_concurrency.processutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5/disk.config 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:52:38 compute-0 nova_compute[254092]: 2025-11-25 16:52:38.385 254096 INFO nova.virt.libvirt.driver [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Deleting local config drive /var/lib/nova/instances/1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5/disk.config because it was imported into RBD.
Nov 25 16:52:38 compute-0 kernel: tapa74c23e6-40: entered promiscuous mode
Nov 25 16:52:38 compute-0 NetworkManager[48891]: <info>  [1764089558.4536] manager: (tapa74c23e6-40): new Tun device (/org/freedesktop/NetworkManager/Devices/446)
Nov 25 16:52:38 compute-0 ovn_controller[153477]: 2025-11-25T16:52:38Z|01083|binding|INFO|Claiming lport a74c23e6-4075-4b59-b36f-9d06bad062d2 for this chassis.
Nov 25 16:52:38 compute-0 ovn_controller[153477]: 2025-11-25T16:52:38Z|01084|binding|INFO|a74c23e6-4075-4b59-b36f-9d06bad062d2: Claiming fa:16:3e:09:6e:80 10.100.0.14
Nov 25 16:52:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:38.461 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:6e:80 10.100.0.14'], port_security=['fa:16:3e:09:6e:80 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '33a2e508e63149889f0d5d945726522c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f668b193-ad4a-4e17-9130-f237a989e2f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fa74b059-0bab-4ac8-ad4f-59f5f4ae171b, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=a74c23e6-4075-4b59-b36f-9d06bad062d2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:52:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:38.464 163338 INFO neutron.agent.ovn.metadata.agent [-] Port a74c23e6-4075-4b59-b36f-9d06bad062d2 in datapath cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e bound to our chassis
Nov 25 16:52:38 compute-0 nova_compute[254092]: 2025-11-25 16:52:38.462 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:38.467 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e
Nov 25 16:52:38 compute-0 NetworkManager[48891]: <info>  [1764089558.4808] device (tapa74c23e6-40): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:52:38 compute-0 NetworkManager[48891]: <info>  [1764089558.4821] device (tapa74c23e6-40): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:52:38 compute-0 ovn_controller[153477]: 2025-11-25T16:52:38Z|01085|binding|INFO|Setting lport a74c23e6-4075-4b59-b36f-9d06bad062d2 ovn-installed in OVS
Nov 25 16:52:38 compute-0 ovn_controller[153477]: 2025-11-25T16:52:38Z|01086|binding|INFO|Setting lport a74c23e6-4075-4b59-b36f-9d06bad062d2 up in Southbound
Nov 25 16:52:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:38.486 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a5d4f7ad-5435-485a-bb95-3e2625f7fd89]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:38 compute-0 nova_compute[254092]: 2025-11-25 16:52:38.486 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:38 compute-0 nova_compute[254092]: 2025-11-25 16:52:38.488 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:38 compute-0 systemd-machined[216343]: New machine qemu-137-instance-0000006c.
Nov 25 16:52:38 compute-0 systemd[1]: Started Virtual Machine qemu-137-instance-0000006c.
Nov 25 16:52:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:38.534 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[40062747-4c4f-495a-8142-0e32c6cd97e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:38.538 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[98faef3b-6cd2-48c6-ae0c-4e4aaf41b9e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:38.573 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[cf1df38c-d43a-4712-a28f-905ec5908923]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:38.596 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8973859e-3f10-4513-83e1-a88032badf85]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcf0cb5b9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:cc:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 319], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 601458, 'reachable_time': 32191, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364123, 'error': None, 'target': 'ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:38.619 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9cf02b92-719e-49b2-846d-ea2aef5a8d95]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcf0cb5b9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 601471, 'tstamp': 601471}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364126, 'error': None, 'target': 'ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcf0cb5b9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 601474, 'tstamp': 601474}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364126, 'error': None, 'target': 'ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:38.622 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf0cb5b9-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:52:38 compute-0 nova_compute[254092]: 2025-11-25 16:52:38.624 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:38.626 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcf0cb5b9-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:52:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:38.626 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:52:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:38.626 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcf0cb5b9-c0, col_values=(('external_ids', {'iface-id': '04d240b5-2178-4712-9ede-6d58532785de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:52:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:38.627 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:52:38 compute-0 nova_compute[254092]: 2025-11-25 16:52:38.728 254096 DEBUG nova.compute.manager [req-840f9c52-bdf1-498f-9930-ad2a0f06a56a req-1d4f682f-68bd-4670-8e93-1c9ca26dfd2a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Received event network-vif-plugged-a74c23e6-4075-4b59-b36f-9d06bad062d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:52:38 compute-0 nova_compute[254092]: 2025-11-25 16:52:38.728 254096 DEBUG oslo_concurrency.lockutils [req-840f9c52-bdf1-498f-9930-ad2a0f06a56a req-1d4f682f-68bd-4670-8e93-1c9ca26dfd2a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:52:38 compute-0 nova_compute[254092]: 2025-11-25 16:52:38.729 254096 DEBUG oslo_concurrency.lockutils [req-840f9c52-bdf1-498f-9930-ad2a0f06a56a req-1d4f682f-68bd-4670-8e93-1c9ca26dfd2a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:52:38 compute-0 nova_compute[254092]: 2025-11-25 16:52:38.729 254096 DEBUG oslo_concurrency.lockutils [req-840f9c52-bdf1-498f-9930-ad2a0f06a56a req-1d4f682f-68bd-4670-8e93-1c9ca26dfd2a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:52:38 compute-0 nova_compute[254092]: 2025-11-25 16:52:38.729 254096 DEBUG nova.compute.manager [req-840f9c52-bdf1-498f-9930-ad2a0f06a56a req-1d4f682f-68bd-4670-8e93-1c9ca26dfd2a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Processing event network-vif-plugged-a74c23e6-4075-4b59-b36f-9d06bad062d2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:52:38 compute-0 nova_compute[254092]: 2025-11-25 16:52:38.990 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.291 254096 DEBUG nova.compute.manager [req-825795de-b2e0-4c03-8c4a-149868165fbc req-8a7407c6-508f-4190-926f-d8a05db63264 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Received event network-vif-plugged-923a00bb-da3b-434a-b154-c338c92e0635 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:52:39 compute-0 ceph-mon[74985]: pgmap v2059: 321 pgs: 321 active+clean; 134 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 3.5 MiB/s wr, 81 op/s
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.292 254096 DEBUG oslo_concurrency.lockutils [req-825795de-b2e0-4c03-8c4a-149868165fbc req-8a7407c6-508f-4190-926f-d8a05db63264 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6d70119e-e45b-4a12-893e-8d5a805ca8ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.293 254096 DEBUG oslo_concurrency.lockutils [req-825795de-b2e0-4c03-8c4a-149868165fbc req-8a7407c6-508f-4190-926f-d8a05db63264 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6d70119e-e45b-4a12-893e-8d5a805ca8ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.293 254096 DEBUG oslo_concurrency.lockutils [req-825795de-b2e0-4c03-8c4a-149868165fbc req-8a7407c6-508f-4190-926f-d8a05db63264 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6d70119e-e45b-4a12-893e-8d5a805ca8ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.293 254096 DEBUG nova.compute.manager [req-825795de-b2e0-4c03-8c4a-149868165fbc req-8a7407c6-508f-4190-926f-d8a05db63264 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] No waiting events found dispatching network-vif-plugged-923a00bb-da3b-434a-b154-c338c92e0635 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.294 254096 WARNING nova.compute.manager [req-825795de-b2e0-4c03-8c4a-149868165fbc req-8a7407c6-508f-4190-926f-d8a05db63264 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Received unexpected event network-vif-plugged-923a00bb-da3b-434a-b154-c338c92e0635 for instance with vm_state active and task_state None.
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.498 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.498 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.499 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.499 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.500 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.500 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.526 254096 DEBUG nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.540 254096 DEBUG nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.541 254096 DEBUG nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Image id 8b512c8e-2281-41de-a668-eb983e174ba0 yields fingerprint 9e29bca11122733e2b34fccd9459097794a3a169 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.542 254096 INFO nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] image 8b512c8e-2281-41de-a668-eb983e174ba0 at (/var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169): checking
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.542 254096 DEBUG nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] image 8b512c8e-2281-41de-a668-eb983e174ba0 at (/var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.544 254096 DEBUG nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.545 254096 DEBUG nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] 6d70119e-e45b-4a12-893e-8d5a805ca8ab is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.545 254096 DEBUG nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.545 254096 WARNING nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.546 254096 INFO nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Active base files: /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.546 254096 INFO nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Removable base files: /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.547 254096 INFO nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.547 254096 DEBUG nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.547 254096 DEBUG nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.548 254096 DEBUG nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.548 254096 INFO nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.788 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089559.7883976, 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.789 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] VM Started (Lifecycle Event)
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.791 254096 DEBUG nova.compute.manager [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.794 254096 DEBUG nova.virt.libvirt.driver [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.798 254096 INFO nova.virt.libvirt.driver [-] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Instance spawned successfully.
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.798 254096 DEBUG nova.virt.libvirt.driver [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.814 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.820 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.824 254096 DEBUG nova.virt.libvirt.driver [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.825 254096 DEBUG nova.virt.libvirt.driver [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.826 254096 DEBUG nova.virt.libvirt.driver [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.826 254096 DEBUG nova.virt.libvirt.driver [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.827 254096 DEBUG nova.virt.libvirt.driver [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.827 254096 DEBUG nova.virt.libvirt.driver [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.855 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.856 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089559.7893817, 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.856 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] VM Paused (Lifecycle Event)
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.886 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.890 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089559.793624, 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.890 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] VM Resumed (Lifecycle Event)
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.896 254096 INFO nova.compute.manager [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Took 9.08 seconds to spawn the instance on the hypervisor.
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.896 254096 DEBUG nova.compute.manager [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:52:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2060: 321 pgs: 321 active+clean; 134 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 3.5 MiB/s wr, 69 op/s
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.905 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.908 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.930 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.951 254096 INFO nova.compute.manager [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Took 10.00 seconds to build instance.
Nov 25 16:52:39 compute-0 nova_compute[254092]: 2025-11-25 16:52:39.963 254096 DEBUG oslo_concurrency.lockutils [None req-9c5dd67d-c892-4530-bcba-5715934fc71b 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:52:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:52:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:52:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:52:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:52:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:52:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:52:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:52:40
Nov 25 16:52:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:52:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:52:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['backups', 'default.rgw.control', 'images', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', 'volumes', 'cephfs.cephfs.data', '.mgr', 'vms']
Nov 25 16:52:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:52:40 compute-0 ceph-mon[74985]: pgmap v2060: 321 pgs: 321 active+clean; 134 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 3.5 MiB/s wr, 69 op/s
Nov 25 16:52:40 compute-0 nova_compute[254092]: 2025-11-25 16:52:40.347 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089545.345628, 7368c721-3e2a-4635-b2d8-5703d20438d3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:52:40 compute-0 nova_compute[254092]: 2025-11-25 16:52:40.347 254096 INFO nova.compute.manager [-] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] VM Stopped (Lifecycle Event)
Nov 25 16:52:40 compute-0 nova_compute[254092]: 2025-11-25 16:52:40.367 254096 DEBUG nova.compute.manager [None req-9d0c4a12-f49a-4b80-9084-31d0e73a6f6c - - - - - -] [instance: 7368c721-3e2a-4635-b2d8-5703d20438d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:52:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:52:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:52:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:52:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:52:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:52:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:52:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:52:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:52:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:52:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:52:40 compute-0 nova_compute[254092]: 2025-11-25 16:52:40.899 254096 DEBUG nova.compute.manager [req-9ea5efd9-dc2e-4beb-a087-14bc9e6e43cf req-60e4a35e-dc39-4e58-85a7-9534961feeef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Received event network-vif-plugged-a74c23e6-4075-4b59-b36f-9d06bad062d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:52:40 compute-0 nova_compute[254092]: 2025-11-25 16:52:40.900 254096 DEBUG oslo_concurrency.lockutils [req-9ea5efd9-dc2e-4beb-a087-14bc9e6e43cf req-60e4a35e-dc39-4e58-85a7-9534961feeef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:52:40 compute-0 nova_compute[254092]: 2025-11-25 16:52:40.900 254096 DEBUG oslo_concurrency.lockutils [req-9ea5efd9-dc2e-4beb-a087-14bc9e6e43cf req-60e4a35e-dc39-4e58-85a7-9534961feeef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:52:40 compute-0 nova_compute[254092]: 2025-11-25 16:52:40.900 254096 DEBUG oslo_concurrency.lockutils [req-9ea5efd9-dc2e-4beb-a087-14bc9e6e43cf req-60e4a35e-dc39-4e58-85a7-9534961feeef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:52:40 compute-0 nova_compute[254092]: 2025-11-25 16:52:40.901 254096 DEBUG nova.compute.manager [req-9ea5efd9-dc2e-4beb-a087-14bc9e6e43cf req-60e4a35e-dc39-4e58-85a7-9534961feeef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] No waiting events found dispatching network-vif-plugged-a74c23e6-4075-4b59-b36f-9d06bad062d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:52:40 compute-0 nova_compute[254092]: 2025-11-25 16:52:40.901 254096 WARNING nova.compute.manager [req-9ea5efd9-dc2e-4beb-a087-14bc9e6e43cf req-60e4a35e-dc39-4e58-85a7-9534961feeef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Received unexpected event network-vif-plugged-a74c23e6-4075-4b59-b36f-9d06bad062d2 for instance with vm_state active and task_state None.
Nov 25 16:52:40 compute-0 ceph-mgr[75280]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1812945073
Nov 25 16:52:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:52:41 compute-0 nova_compute[254092]: 2025-11-25 16:52:41.453 254096 INFO nova.compute.manager [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Rescuing
Nov 25 16:52:41 compute-0 nova_compute[254092]: 2025-11-25 16:52:41.454 254096 DEBUG oslo_concurrency.lockutils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Acquiring lock "refresh_cache-1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:52:41 compute-0 nova_compute[254092]: 2025-11-25 16:52:41.454 254096 DEBUG oslo_concurrency.lockutils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Acquired lock "refresh_cache-1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:52:41 compute-0 nova_compute[254092]: 2025-11-25 16:52:41.454 254096 DEBUG nova.network.neutron [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:52:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2061: 321 pgs: 321 active+clean; 134 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.6 MiB/s wr, 180 op/s
Nov 25 16:52:42 compute-0 nova_compute[254092]: 2025-11-25 16:52:42.384 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:42 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e270 do_prune osdmap full prune enabled
Nov 25 16:52:42 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e271 e271: 3 total, 3 up, 3 in
Nov 25 16:52:42 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e271: 3 total, 3 up, 3 in
Nov 25 16:52:42 compute-0 ceph-mon[74985]: pgmap v2061: 321 pgs: 321 active+clean; 134 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.6 MiB/s wr, 180 op/s
Nov 25 16:52:43 compute-0 nova_compute[254092]: 2025-11-25 16:52:43.131 254096 DEBUG nova.network.neutron [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Updating instance_info_cache with network_info: [{"id": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "address": "fa:16:3e:09:6e:80", "network": {"id": "cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33a2e508e63149889f0d5d945726522c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa74c23e6-40", "ovs_interfaceid": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:52:43 compute-0 nova_compute[254092]: 2025-11-25 16:52:43.177 254096 DEBUG oslo_concurrency.lockutils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Releasing lock "refresh_cache-1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:52:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:43.411 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:52:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:43.412 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 16:52:43 compute-0 nova_compute[254092]: 2025-11-25 16:52:43.413 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:43 compute-0 nova_compute[254092]: 2025-11-25 16:52:43.464 254096 DEBUG nova.virt.libvirt.driver [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 16:52:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2063: 321 pgs: 321 active+clean; 134 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.2 MiB/s wr, 166 op/s
Nov 25 16:52:43 compute-0 ceph-mon[74985]: osdmap e271: 3 total, 3 up, 3 in
Nov 25 16:52:43 compute-0 nova_compute[254092]: 2025-11-25 16:52:43.992 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:44 compute-0 ceph-mon[74985]: pgmap v2063: 321 pgs: 321 active+clean; 134 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.2 MiB/s wr, 166 op/s
Nov 25 16:52:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2064: 321 pgs: 321 active+clean; 134 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 1.3 MiB/s wr, 200 op/s
Nov 25 16:52:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:52:46 compute-0 ceph-mon[74985]: pgmap v2064: 321 pgs: 321 active+clean; 134 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 1.3 MiB/s wr, 200 op/s
Nov 25 16:52:47 compute-0 nova_compute[254092]: 2025-11-25 16:52:47.388 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2065: 321 pgs: 321 active+clean; 134 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 32 KiB/s wr, 191 op/s
Nov 25 16:52:47 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e271 do_prune osdmap full prune enabled
Nov 25 16:52:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e272 e272: 3 total, 3 up, 3 in
Nov 25 16:52:48 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e272: 3 total, 3 up, 3 in
Nov 25 16:52:48 compute-0 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Nov 25 16:52:48 compute-0 nova_compute[254092]: 2025-11-25 16:52:48.971 254096 DEBUG oslo_concurrency.lockutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "b12625ea-31bf-4599-a248-4c6ced8e59c2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:52:48 compute-0 nova_compute[254092]: 2025-11-25 16:52:48.972 254096 DEBUG oslo_concurrency.lockutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:52:48 compute-0 nova_compute[254092]: 2025-11-25 16:52:48.990 254096 DEBUG nova.compute.manager [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:52:48 compute-0 nova_compute[254092]: 2025-11-25 16:52:48.993 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e272 do_prune osdmap full prune enabled
Nov 25 16:52:49 compute-0 ceph-mon[74985]: pgmap v2065: 321 pgs: 321 active+clean; 134 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 32 KiB/s wr, 191 op/s
Nov 25 16:52:49 compute-0 ceph-mon[74985]: osdmap e272: 3 total, 3 up, 3 in
Nov 25 16:52:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e273 e273: 3 total, 3 up, 3 in
Nov 25 16:52:49 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e273: 3 total, 3 up, 3 in
Nov 25 16:52:49 compute-0 nova_compute[254092]: 2025-11-25 16:52:49.058 254096 DEBUG oslo_concurrency.lockutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:52:49 compute-0 nova_compute[254092]: 2025-11-25 16:52:49.059 254096 DEBUG oslo_concurrency.lockutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:52:49 compute-0 nova_compute[254092]: 2025-11-25 16:52:49.067 254096 DEBUG nova.virt.hardware [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:52:49 compute-0 nova_compute[254092]: 2025-11-25 16:52:49.067 254096 INFO nova.compute.claims [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:52:49 compute-0 nova_compute[254092]: 2025-11-25 16:52:49.210 254096 DEBUG oslo_concurrency.processutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:52:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:52:49 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3049670726' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:52:49 compute-0 nova_compute[254092]: 2025-11-25 16:52:49.689 254096 DEBUG oslo_concurrency.processutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:52:49 compute-0 nova_compute[254092]: 2025-11-25 16:52:49.695 254096 DEBUG nova.compute.provider_tree [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:52:49 compute-0 nova_compute[254092]: 2025-11-25 16:52:49.710 254096 DEBUG nova.scheduler.client.report [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:52:49 compute-0 nova_compute[254092]: 2025-11-25 16:52:49.733 254096 DEBUG oslo_concurrency.lockutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:52:49 compute-0 nova_compute[254092]: 2025-11-25 16:52:49.734 254096 DEBUG nova.compute.manager [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:52:49 compute-0 nova_compute[254092]: 2025-11-25 16:52:49.783 254096 DEBUG nova.compute.manager [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:52:49 compute-0 nova_compute[254092]: 2025-11-25 16:52:49.784 254096 DEBUG nova.network.neutron [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:52:49 compute-0 nova_compute[254092]: 2025-11-25 16:52:49.807 254096 INFO nova.virt.libvirt.driver [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:52:49 compute-0 nova_compute[254092]: 2025-11-25 16:52:49.824 254096 DEBUG nova.compute.manager [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:52:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2068: 321 pgs: 321 active+clean; 134 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.3 KiB/s wr, 84 op/s
Nov 25 16:52:49 compute-0 nova_compute[254092]: 2025-11-25 16:52:49.909 254096 DEBUG nova.compute.manager [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:52:49 compute-0 nova_compute[254092]: 2025-11-25 16:52:49.910 254096 DEBUG nova.virt.libvirt.driver [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:52:49 compute-0 nova_compute[254092]: 2025-11-25 16:52:49.911 254096 INFO nova.virt.libvirt.driver [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Creating image(s)
Nov 25 16:52:49 compute-0 nova_compute[254092]: 2025-11-25 16:52:49.938 254096 DEBUG nova.storage.rbd_utils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image b12625ea-31bf-4599-a248-4c6ced8e59c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:52:49 compute-0 nova_compute[254092]: 2025-11-25 16:52:49.967 254096 DEBUG nova.storage.rbd_utils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image b12625ea-31bf-4599-a248-4c6ced8e59c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:52:49 compute-0 nova_compute[254092]: 2025-11-25 16:52:49.992 254096 DEBUG nova.storage.rbd_utils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image b12625ea-31bf-4599-a248-4c6ced8e59c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:52:49 compute-0 nova_compute[254092]: 2025-11-25 16:52:49.996 254096 DEBUG oslo_concurrency.processutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:52:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e273 do_prune osdmap full prune enabled
Nov 25 16:52:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e274 e274: 3 total, 3 up, 3 in
Nov 25 16:52:50 compute-0 ceph-mon[74985]: osdmap e273: 3 total, 3 up, 3 in
Nov 25 16:52:50 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3049670726' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:52:50 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e274: 3 total, 3 up, 3 in
Nov 25 16:52:50 compute-0 nova_compute[254092]: 2025-11-25 16:52:50.083 254096 DEBUG oslo_concurrency.processutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:52:50 compute-0 nova_compute[254092]: 2025-11-25 16:52:50.085 254096 DEBUG oslo_concurrency.lockutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:52:50 compute-0 nova_compute[254092]: 2025-11-25 16:52:50.086 254096 DEBUG oslo_concurrency.lockutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:52:50 compute-0 nova_compute[254092]: 2025-11-25 16:52:50.086 254096 DEBUG oslo_concurrency.lockutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:52:50 compute-0 nova_compute[254092]: 2025-11-25 16:52:50.124 254096 DEBUG nova.storage.rbd_utils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image b12625ea-31bf-4599-a248-4c6ced8e59c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:52:50 compute-0 nova_compute[254092]: 2025-11-25 16:52:50.129 254096 DEBUG oslo_concurrency.processutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 b12625ea-31bf-4599-a248-4c6ced8e59c2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:52:50 compute-0 nova_compute[254092]: 2025-11-25 16:52:50.175 254096 DEBUG nova.policy [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3b30410358c448a693a9dc40ee6aafb0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:52:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:50.414 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:52:50 compute-0 nova_compute[254092]: 2025-11-25 16:52:50.427 254096 DEBUG oslo_concurrency.processutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 b12625ea-31bf-4599-a248-4c6ced8e59c2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.298s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:52:50 compute-0 nova_compute[254092]: 2025-11-25 16:52:50.486 254096 DEBUG nova.storage.rbd_utils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] resizing rbd image b12625ea-31bf-4599-a248-4c6ced8e59c2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:52:50 compute-0 nova_compute[254092]: 2025-11-25 16:52:50.598 254096 DEBUG nova.objects.instance [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'migration_context' on Instance uuid b12625ea-31bf-4599-a248-4c6ced8e59c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:52:50 compute-0 nova_compute[254092]: 2025-11-25 16:52:50.610 254096 DEBUG nova.virt.libvirt.driver [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:52:50 compute-0 nova_compute[254092]: 2025-11-25 16:52:50.611 254096 DEBUG nova.virt.libvirt.driver [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Ensure instance console log exists: /var/lib/nova/instances/b12625ea-31bf-4599-a248-4c6ced8e59c2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:52:50 compute-0 nova_compute[254092]: 2025-11-25 16:52:50.612 254096 DEBUG oslo_concurrency.lockutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:52:50 compute-0 nova_compute[254092]: 2025-11-25 16:52:50.612 254096 DEBUG oslo_concurrency.lockutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:52:50 compute-0 nova_compute[254092]: 2025-11-25 16:52:50.612 254096 DEBUG oslo_concurrency.lockutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:52:50 compute-0 ovn_controller[153477]: 2025-11-25T16:52:50Z|00111|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4e:17:31 10.100.0.13
Nov 25 16:52:50 compute-0 ovn_controller[153477]: 2025-11-25T16:52:50Z|00112|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4e:17:31 10.100.0.13
Nov 25 16:52:51 compute-0 ceph-mon[74985]: pgmap v2068: 321 pgs: 321 active+clean; 134 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.3 KiB/s wr, 84 op/s
Nov 25 16:52:51 compute-0 ceph-mon[74985]: osdmap e274: 3 total, 3 up, 3 in
Nov 25 16:52:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:52:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:52:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:52:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:52:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:52:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0006967633855896333 of space, bias 1.0, pg target 0.20902901567689 quantized to 32 (current 32)
Nov 25 16:52:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:52:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:52:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:52:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:52:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:52:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006664942086670574 of space, bias 1.0, pg target 0.19994826260011722 quantized to 32 (current 32)
Nov 25 16:52:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:52:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 16:52:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:52:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:52:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:52:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 16:52:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:52:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 16:52:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:52:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:52:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:52:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 16:52:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2070: 321 pgs: 3 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 314 active+clean; 182 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 24 MiB/s wr, 295 op/s
Nov 25 16:52:52 compute-0 nova_compute[254092]: 2025-11-25 16:52:52.391 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:52 compute-0 nova_compute[254092]: 2025-11-25 16:52:52.448 254096 DEBUG nova.network.neutron [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Successfully created port: 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:52:53 compute-0 ceph-mon[74985]: pgmap v2070: 321 pgs: 3 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 314 active+clean; 182 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 24 MiB/s wr, 295 op/s
Nov 25 16:52:53 compute-0 ovn_controller[153477]: 2025-11-25T16:52:53Z|00113|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:09:6e:80 10.100.0.14
Nov 25 16:52:53 compute-0 ovn_controller[153477]: 2025-11-25T16:52:53Z|00114|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:09:6e:80 10.100.0.14
Nov 25 16:52:53 compute-0 nova_compute[254092]: 2025-11-25 16:52:53.532 254096 DEBUG nova.virt.libvirt.driver [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 25 16:52:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2071: 321 pgs: 3 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 314 active+clean; 182 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 707 KiB/s rd, 24 MiB/s wr, 263 op/s
Nov 25 16:52:53 compute-0 nova_compute[254092]: 2025-11-25 16:52:53.954 254096 DEBUG nova.network.neutron [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Successfully updated port: 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:52:53 compute-0 nova_compute[254092]: 2025-11-25 16:52:53.970 254096 DEBUG oslo_concurrency.lockutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "refresh_cache-b12625ea-31bf-4599-a248-4c6ced8e59c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:52:53 compute-0 nova_compute[254092]: 2025-11-25 16:52:53.970 254096 DEBUG oslo_concurrency.lockutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquired lock "refresh_cache-b12625ea-31bf-4599-a248-4c6ced8e59c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:52:53 compute-0 nova_compute[254092]: 2025-11-25 16:52:53.970 254096 DEBUG nova.network.neutron [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:52:53 compute-0 nova_compute[254092]: 2025-11-25 16:52:53.996 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:54 compute-0 nova_compute[254092]: 2025-11-25 16:52:54.160 254096 DEBUG nova.network.neutron [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:52:54 compute-0 nova_compute[254092]: 2025-11-25 16:52:54.234 254096 DEBUG nova.compute.manager [req-e7628b19-a02e-46c3-9f8f-f759b2291a57 req-feb78ab9-b4f7-46e7-a0b3-a13f9b68dc5c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received event network-changed-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:52:54 compute-0 nova_compute[254092]: 2025-11-25 16:52:54.234 254096 DEBUG nova.compute.manager [req-e7628b19-a02e-46c3-9f8f-f759b2291a57 req-feb78ab9-b4f7-46e7-a0b3-a13f9b68dc5c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Refreshing instance network info cache due to event network-changed-1ef97ad6-0798-4b3d-a9cc-562f9526ae38. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:52:54 compute-0 nova_compute[254092]: 2025-11-25 16:52:54.234 254096 DEBUG oslo_concurrency.lockutils [req-e7628b19-a02e-46c3-9f8f-f759b2291a57 req-feb78ab9-b4f7-46e7-a0b3-a13f9b68dc5c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-b12625ea-31bf-4599-a248-4c6ced8e59c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:52:54 compute-0 podman[364358]: 2025-11-25 16:52:54.653421722 +0000 UTC m=+0.061151942 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:52:54 compute-0 podman[364357]: 2025-11-25 16:52:54.653581997 +0000 UTC m=+0.065206473 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd)
Nov 25 16:52:54 compute-0 podman[364359]: 2025-11-25 16:52:54.719825416 +0000 UTC m=+0.127817564 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Nov 25 16:52:55 compute-0 ceph-mon[74985]: pgmap v2071: 321 pgs: 3 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 314 active+clean; 182 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 707 KiB/s rd, 24 MiB/s wr, 263 op/s
Nov 25 16:52:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 16:52:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/619463625' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:52:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 16:52:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/619463625' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:52:55 compute-0 nova_compute[254092]: 2025-11-25 16:52:55.338 254096 DEBUG nova.network.neutron [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Updating instance_info_cache with network_info: [{"id": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "address": "fa:16:3e:5e:77:80", "network": {"id": "aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1", "bridge": "br-int", "label": "tempest-network-smoke--2125799662", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ef97ad6-07", "ovs_interfaceid": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:52:55 compute-0 nova_compute[254092]: 2025-11-25 16:52:55.368 254096 DEBUG oslo_concurrency.lockutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Releasing lock "refresh_cache-b12625ea-31bf-4599-a248-4c6ced8e59c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:52:55 compute-0 nova_compute[254092]: 2025-11-25 16:52:55.368 254096 DEBUG nova.compute.manager [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Instance network_info: |[{"id": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "address": "fa:16:3e:5e:77:80", "network": {"id": "aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1", "bridge": "br-int", "label": "tempest-network-smoke--2125799662", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ef97ad6-07", "ovs_interfaceid": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:52:55 compute-0 nova_compute[254092]: 2025-11-25 16:52:55.369 254096 DEBUG oslo_concurrency.lockutils [req-e7628b19-a02e-46c3-9f8f-f759b2291a57 req-feb78ab9-b4f7-46e7-a0b3-a13f9b68dc5c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-b12625ea-31bf-4599-a248-4c6ced8e59c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:52:55 compute-0 nova_compute[254092]: 2025-11-25 16:52:55.369 254096 DEBUG nova.network.neutron [req-e7628b19-a02e-46c3-9f8f-f759b2291a57 req-feb78ab9-b4f7-46e7-a0b3-a13f9b68dc5c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Refreshing network info cache for port 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:52:55 compute-0 nova_compute[254092]: 2025-11-25 16:52:55.372 254096 DEBUG nova.virt.libvirt.driver [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Start _get_guest_xml network_info=[{"id": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "address": "fa:16:3e:5e:77:80", "network": {"id": "aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1", "bridge": "br-int", "label": "tempest-network-smoke--2125799662", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ef97ad6-07", "ovs_interfaceid": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:52:55 compute-0 nova_compute[254092]: 2025-11-25 16:52:55.375 254096 WARNING nova.virt.libvirt.driver [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:52:55 compute-0 nova_compute[254092]: 2025-11-25 16:52:55.380 254096 DEBUG nova.virt.libvirt.host [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:52:55 compute-0 nova_compute[254092]: 2025-11-25 16:52:55.381 254096 DEBUG nova.virt.libvirt.host [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:52:55 compute-0 nova_compute[254092]: 2025-11-25 16:52:55.384 254096 DEBUG nova.virt.libvirt.host [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:52:55 compute-0 nova_compute[254092]: 2025-11-25 16:52:55.384 254096 DEBUG nova.virt.libvirt.host [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:52:55 compute-0 nova_compute[254092]: 2025-11-25 16:52:55.385 254096 DEBUG nova.virt.libvirt.driver [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:52:55 compute-0 nova_compute[254092]: 2025-11-25 16:52:55.385 254096 DEBUG nova.virt.hardware [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:52:55 compute-0 nova_compute[254092]: 2025-11-25 16:52:55.385 254096 DEBUG nova.virt.hardware [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:52:55 compute-0 nova_compute[254092]: 2025-11-25 16:52:55.386 254096 DEBUG nova.virt.hardware [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:52:55 compute-0 nova_compute[254092]: 2025-11-25 16:52:55.386 254096 DEBUG nova.virt.hardware [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:52:55 compute-0 nova_compute[254092]: 2025-11-25 16:52:55.386 254096 DEBUG nova.virt.hardware [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:52:55 compute-0 nova_compute[254092]: 2025-11-25 16:52:55.386 254096 DEBUG nova.virt.hardware [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:52:55 compute-0 nova_compute[254092]: 2025-11-25 16:52:55.386 254096 DEBUG nova.virt.hardware [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:52:55 compute-0 nova_compute[254092]: 2025-11-25 16:52:55.387 254096 DEBUG nova.virt.hardware [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:52:55 compute-0 nova_compute[254092]: 2025-11-25 16:52:55.387 254096 DEBUG nova.virt.hardware [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:52:55 compute-0 nova_compute[254092]: 2025-11-25 16:52:55.387 254096 DEBUG nova.virt.hardware [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:52:55 compute-0 nova_compute[254092]: 2025-11-25 16:52:55.388 254096 DEBUG nova.virt.hardware [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:52:55 compute-0 nova_compute[254092]: 2025-11-25 16:52:55.391 254096 DEBUG oslo_concurrency.processutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:52:55 compute-0 kernel: tapa74c23e6-40 (unregistering): left promiscuous mode
Nov 25 16:52:55 compute-0 NetworkManager[48891]: <info>  [1764089575.8082] device (tapa74c23e6-40): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:52:55 compute-0 nova_compute[254092]: 2025-11-25 16:52:55.816 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:55 compute-0 ovn_controller[153477]: 2025-11-25T16:52:55Z|01087|binding|INFO|Releasing lport a74c23e6-4075-4b59-b36f-9d06bad062d2 from this chassis (sb_readonly=0)
Nov 25 16:52:55 compute-0 ovn_controller[153477]: 2025-11-25T16:52:55Z|01088|binding|INFO|Setting lport a74c23e6-4075-4b59-b36f-9d06bad062d2 down in Southbound
Nov 25 16:52:55 compute-0 ovn_controller[153477]: 2025-11-25T16:52:55Z|01089|binding|INFO|Removing iface tapa74c23e6-40 ovn-installed in OVS
Nov 25 16:52:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:55.825 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:6e:80 10.100.0.14'], port_security=['fa:16:3e:09:6e:80 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '33a2e508e63149889f0d5d945726522c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f668b193-ad4a-4e17-9130-f237a989e2f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fa74b059-0bab-4ac8-ad4f-59f5f4ae171b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=a74c23e6-4075-4b59-b36f-9d06bad062d2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:52:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:55.827 163338 INFO neutron.agent.ovn.metadata.agent [-] Port a74c23e6-4075-4b59-b36f-9d06bad062d2 in datapath cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e unbound from our chassis
Nov 25 16:52:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:55.829 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e
Nov 25 16:52:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:52:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4079908265' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:52:55 compute-0 nova_compute[254092]: 2025-11-25 16:52:55.839 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:55.850 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3b72f23c-4f2e-463a-8604-95e8a5d7734b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:55 compute-0 systemd[1]: machine-qemu\x2d137\x2dinstance\x2d0000006c.scope: Deactivated successfully.
Nov 25 16:52:55 compute-0 systemd[1]: machine-qemu\x2d137\x2dinstance\x2d0000006c.scope: Consumed 13.856s CPU time.
Nov 25 16:52:55 compute-0 nova_compute[254092]: 2025-11-25 16:52:55.864 254096 DEBUG oslo_concurrency.processutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:52:55 compute-0 systemd-machined[216343]: Machine qemu-137-instance-0000006c terminated.
Nov 25 16:52:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:55.883 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e0b7dde4-ecf5-435a-b43b-0eb7bfc25efa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:55 compute-0 nova_compute[254092]: 2025-11-25 16:52:55.886 254096 DEBUG nova.storage.rbd_utils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image b12625ea-31bf-4599-a248-4c6ced8e59c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:52:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:55.887 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[0ae54ca1-22e9-40cc-8640-ebc5d095c17e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:55 compute-0 nova_compute[254092]: 2025-11-25 16:52:55.890 254096 DEBUG oslo_concurrency.processutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:52:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2072: 321 pgs: 321 active+clean; 237 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 22 MiB/s wr, 304 op/s
Nov 25 16:52:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:55.914 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f802f10c-1495-42e1-91f4-613dd12c38b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:55.932 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d82fceb7-7404-4349-9532-b2b0320cb759]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcf0cb5b9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:cc:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 7, 'rx_bytes': 574, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 7, 'rx_bytes': 574, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 319], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 601458, 'reachable_time': 32191, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364470, 'error': None, 'target': 'ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:55.945 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8fdf8745-a1b3-4b65-97dd-94980aa5c0b4]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcf0cb5b9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 601471, 'tstamp': 601471}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364471, 'error': None, 'target': 'ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcf0cb5b9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 601474, 'tstamp': 601474}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364471, 'error': None, 'target': 'ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:55.946 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf0cb5b9-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:52:55 compute-0 nova_compute[254092]: 2025-11-25 16:52:55.948 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:55 compute-0 nova_compute[254092]: 2025-11-25 16:52:55.952 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:55.952 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcf0cb5b9-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:52:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:55.952 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:52:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:55.952 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcf0cb5b9-c0, col_values=(('external_ids', {'iface-id': '04d240b5-2178-4712-9ede-6d58532785de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:52:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:55.953 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:52:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/619463625' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:52:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/619463625' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:52:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4079908265' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:52:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:52:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e274 do_prune osdmap full prune enabled
Nov 25 16:52:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 e275: 3 total, 3 up, 3 in
Nov 25 16:52:56 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e275: 3 total, 3 up, 3 in
Nov 25 16:52:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:52:56 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3910056781' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.335 254096 DEBUG nova.compute.manager [req-49ea0de2-9898-4d8d-9fbc-4586918b43e0 req-b503949a-4a6e-45a3-a554-2f80765bc065 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Received event network-vif-unplugged-a74c23e6-4075-4b59-b36f-9d06bad062d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.338 254096 DEBUG oslo_concurrency.lockutils [req-49ea0de2-9898-4d8d-9fbc-4586918b43e0 req-b503949a-4a6e-45a3-a554-2f80765bc065 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.338 254096 DEBUG oslo_concurrency.lockutils [req-49ea0de2-9898-4d8d-9fbc-4586918b43e0 req-b503949a-4a6e-45a3-a554-2f80765bc065 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.338 254096 DEBUG oslo_concurrency.lockutils [req-49ea0de2-9898-4d8d-9fbc-4586918b43e0 req-b503949a-4a6e-45a3-a554-2f80765bc065 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.339 254096 DEBUG nova.compute.manager [req-49ea0de2-9898-4d8d-9fbc-4586918b43e0 req-b503949a-4a6e-45a3-a554-2f80765bc065 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] No waiting events found dispatching network-vif-unplugged-a74c23e6-4075-4b59-b36f-9d06bad062d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.339 254096 WARNING nova.compute.manager [req-49ea0de2-9898-4d8d-9fbc-4586918b43e0 req-b503949a-4a6e-45a3-a554-2f80765bc065 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Received unexpected event network-vif-unplugged-a74c23e6-4075-4b59-b36f-9d06bad062d2 for instance with vm_state active and task_state rescuing.
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.339 254096 DEBUG oslo_concurrency.processutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.341 254096 DEBUG nova.virt.libvirt.vif [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:52:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1982563586',display_name='tempest-TestNetworkAdvancedServerOps-server-1982563586',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1982563586',id=109,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBARA21hEW3+D06RLHnS5fFKgy30b2MZ1ew/5ZgTFs7Q0nQH9hOe8Z3jkbVm5M1kwXkWC30h/+QH5x5eRfxJUT8solIlf//6cGDYhyCYMmZXSakiWtWl4wz/5VtXXcA2QCA==',key_name='tempest-TestNetworkAdvancedServerOps-1679437763',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f92d76b523b84ec9b645382bdd3192c9',ramdisk_id='',reservation_id='r-7fugjamt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1110744031',owner_user_name='tempest-TestNetworkAdvancedServerOps-1110744031-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:52:49Z,user_data=None,user_id='3b30410358c448a693a9dc40ee6aafb0',uuid=b12625ea-31bf-4599-a248-4c6ced8e59c2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "address": "fa:16:3e:5e:77:80", "network": {"id": "aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1", "bridge": "br-int", "label": "tempest-network-smoke--2125799662", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ef97ad6-07", "ovs_interfaceid": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.341 254096 DEBUG nova.network.os_vif_util [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converting VIF {"id": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "address": "fa:16:3e:5e:77:80", "network": {"id": "aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1", "bridge": "br-int", "label": "tempest-network-smoke--2125799662", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ef97ad6-07", "ovs_interfaceid": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.342 254096 DEBUG nova.network.os_vif_util [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:77:80,bridge_name='br-int',has_traffic_filtering=True,id=1ef97ad6-0798-4b3d-a9cc-562f9526ae38,network=Network(aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ef97ad6-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.343 254096 DEBUG nova.objects.instance [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'pci_devices' on Instance uuid b12625ea-31bf-4599-a248-4c6ced8e59c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.358 254096 DEBUG nova.virt.libvirt.driver [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:52:56 compute-0 nova_compute[254092]:   <uuid>b12625ea-31bf-4599-a248-4c6ced8e59c2</uuid>
Nov 25 16:52:56 compute-0 nova_compute[254092]:   <name>instance-0000006d</name>
Nov 25 16:52:56 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:52:56 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:52:56 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:52:56 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1982563586</nova:name>
Nov 25 16:52:56 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:52:55</nova:creationTime>
Nov 25 16:52:56 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:52:56 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:52:56 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:52:56 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:52:56 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:52:56 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:52:56 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:52:56 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:52:56 compute-0 nova_compute[254092]:         <nova:user uuid="3b30410358c448a693a9dc40ee6aafb0">tempest-TestNetworkAdvancedServerOps-1110744031-project-member</nova:user>
Nov 25 16:52:56 compute-0 nova_compute[254092]:         <nova:project uuid="f92d76b523b84ec9b645382bdd3192c9">tempest-TestNetworkAdvancedServerOps-1110744031</nova:project>
Nov 25 16:52:56 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:52:56 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:52:56 compute-0 nova_compute[254092]:         <nova:port uuid="1ef97ad6-0798-4b3d-a9cc-562f9526ae38">
Nov 25 16:52:56 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:52:56 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:52:56 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:52:56 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:52:56 compute-0 nova_compute[254092]:     <system>
Nov 25 16:52:56 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:52:56 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:52:56 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:52:56 compute-0 nova_compute[254092]:       <entry name="serial">b12625ea-31bf-4599-a248-4c6ced8e59c2</entry>
Nov 25 16:52:56 compute-0 nova_compute[254092]:       <entry name="uuid">b12625ea-31bf-4599-a248-4c6ced8e59c2</entry>
Nov 25 16:52:56 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     </system>
Nov 25 16:52:56 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:52:56 compute-0 nova_compute[254092]:   <os>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:   </os>
Nov 25 16:52:56 compute-0 nova_compute[254092]:   <features>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:   </features>
Nov 25 16:52:56 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:52:56 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:52:56 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:52:56 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:52:56 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:52:56 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/b12625ea-31bf-4599-a248-4c6ced8e59c2_disk">
Nov 25 16:52:56 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:       </source>
Nov 25 16:52:56 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:52:56 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:52:56 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:52:56 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/b12625ea-31bf-4599-a248-4c6ced8e59c2_disk.config">
Nov 25 16:52:56 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:       </source>
Nov 25 16:52:56 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:52:56 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:52:56 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:52:56 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:5e:77:80"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:       <target dev="tap1ef97ad6-07"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:52:56 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/b12625ea-31bf-4599-a248-4c6ced8e59c2/console.log" append="off"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     <video>
Nov 25 16:52:56 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     </video>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:52:56 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:52:56 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:52:56 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:52:56 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:52:56 compute-0 nova_compute[254092]: </domain>
Nov 25 16:52:56 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.358 254096 DEBUG nova.compute.manager [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Preparing to wait for external event network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.358 254096 DEBUG oslo_concurrency.lockutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.358 254096 DEBUG oslo_concurrency.lockutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.358 254096 DEBUG oslo_concurrency.lockutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.359 254096 DEBUG nova.virt.libvirt.vif [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:52:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1982563586',display_name='tempest-TestNetworkAdvancedServerOps-server-1982563586',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1982563586',id=109,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBARA21hEW3+D06RLHnS5fFKgy30b2MZ1ew/5ZgTFs7Q0nQH9hOe8Z3jkbVm5M1kwXkWC30h/+QH5x5eRfxJUT8solIlf//6cGDYhyCYMmZXSakiWtWl4wz/5VtXXcA2QCA==',key_name='tempest-TestNetworkAdvancedServerOps-1679437763',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f92d76b523b84ec9b645382bdd3192c9',ramdisk_id='',reservation_id='r-7fugjamt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1110744031',owner_user_name='tempest-TestNetworkAdvancedServerOps-1110744031-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:52:49Z,user_data=None,user_id='3b30410358c448a693a9dc40ee6aafb0',uuid=b12625ea-31bf-4599-a248-4c6ced8e59c2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "address": "fa:16:3e:5e:77:80", "network": {"id": "aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1", "bridge": "br-int", "label": "tempest-network-smoke--2125799662", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ef97ad6-07", "ovs_interfaceid": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.359 254096 DEBUG nova.network.os_vif_util [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converting VIF {"id": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "address": "fa:16:3e:5e:77:80", "network": {"id": "aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1", "bridge": "br-int", "label": "tempest-network-smoke--2125799662", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ef97ad6-07", "ovs_interfaceid": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.360 254096 DEBUG nova.network.os_vif_util [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:77:80,bridge_name='br-int',has_traffic_filtering=True,id=1ef97ad6-0798-4b3d-a9cc-562f9526ae38,network=Network(aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ef97ad6-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.360 254096 DEBUG os_vif [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:77:80,bridge_name='br-int',has_traffic_filtering=True,id=1ef97ad6-0798-4b3d-a9cc-562f9526ae38,network=Network(aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ef97ad6-07') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.361 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.361 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.361 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.364 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.364 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ef97ad6-07, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.365 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1ef97ad6-07, col_values=(('external_ids', {'iface-id': '1ef97ad6-0798-4b3d-a9cc-562f9526ae38', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5e:77:80', 'vm-uuid': 'b12625ea-31bf-4599-a248-4c6ced8e59c2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:52:56 compute-0 NetworkManager[48891]: <info>  [1764089576.3667] manager: (tap1ef97ad6-07): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/447)
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.366 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.370 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.374 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.375 254096 INFO os_vif [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:77:80,bridge_name='br-int',has_traffic_filtering=True,id=1ef97ad6-0798-4b3d-a9cc-562f9526ae38,network=Network(aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ef97ad6-07')
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.435 254096 DEBUG nova.virt.libvirt.driver [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.435 254096 DEBUG nova.virt.libvirt.driver [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.435 254096 DEBUG nova.virt.libvirt.driver [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] No VIF found with MAC fa:16:3e:5e:77:80, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.436 254096 INFO nova.virt.libvirt.driver [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Using config drive
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.466 254096 DEBUG nova.storage.rbd_utils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image b12625ea-31bf-4599-a248-4c6ced8e59c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.546 254096 INFO nova.virt.libvirt.driver [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Instance shutdown successfully after 13 seconds.
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.554 254096 INFO nova.virt.libvirt.driver [-] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Instance destroyed successfully.
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.555 254096 DEBUG nova.objects.instance [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lazy-loading 'numa_topology' on Instance uuid 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.570 254096 INFO nova.virt.libvirt.driver [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Attempting rescue
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.572 254096 DEBUG nova.virt.libvirt.driver [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.577 254096 DEBUG nova.virt.libvirt.driver [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.577 254096 INFO nova.virt.libvirt.driver [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Creating image(s)
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.606 254096 DEBUG nova.storage.rbd_utils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] rbd image 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.609 254096 DEBUG nova.objects.instance [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lazy-loading 'trusted_certs' on Instance uuid 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.644 254096 DEBUG nova.storage.rbd_utils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] rbd image 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.678 254096 DEBUG nova.storage.rbd_utils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] rbd image 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.685 254096 DEBUG oslo_concurrency.processutils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.800 254096 DEBUG oslo_concurrency.processutils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.115s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.801 254096 DEBUG oslo_concurrency.lockutils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.802 254096 DEBUG oslo_concurrency.lockutils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.803 254096 DEBUG oslo_concurrency.lockutils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.835 254096 DEBUG nova.storage.rbd_utils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] rbd image 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:52:56 compute-0 nova_compute[254092]: 2025-11-25 16:52:56.844 254096 DEBUG oslo_concurrency.processutils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:52:57 compute-0 nova_compute[254092]: 2025-11-25 16:52:57.044 254096 INFO nova.virt.libvirt.driver [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Creating config drive at /var/lib/nova/instances/b12625ea-31bf-4599-a248-4c6ced8e59c2/disk.config
Nov 25 16:52:57 compute-0 nova_compute[254092]: 2025-11-25 16:52:57.050 254096 DEBUG oslo_concurrency.processutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b12625ea-31bf-4599-a248-4c6ced8e59c2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjz7a6jei execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:52:57 compute-0 nova_compute[254092]: 2025-11-25 16:52:57.155 254096 DEBUG oslo_concurrency.processutils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.311s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:52:57 compute-0 nova_compute[254092]: 2025-11-25 16:52:57.156 254096 DEBUG nova.objects.instance [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lazy-loading 'migration_context' on Instance uuid 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:52:57 compute-0 nova_compute[254092]: 2025-11-25 16:52:57.168 254096 DEBUG nova.virt.libvirt.driver [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:52:57 compute-0 nova_compute[254092]: 2025-11-25 16:52:57.169 254096 DEBUG nova.virt.libvirt.driver [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Start _get_guest_xml network_info=[{"id": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "address": "fa:16:3e:09:6e:80", "network": {"id": "cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "vif_mac": "fa:16:3e:09:6e:80"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33a2e508e63149889f0d5d945726522c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa74c23e6-40", "ovs_interfaceid": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:52:57 compute-0 nova_compute[254092]: 2025-11-25 16:52:57.169 254096 DEBUG nova.objects.instance [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lazy-loading 'resources' on Instance uuid 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:52:57 compute-0 nova_compute[254092]: 2025-11-25 16:52:57.187 254096 WARNING nova.virt.libvirt.driver [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:52:57 compute-0 nova_compute[254092]: 2025-11-25 16:52:57.190 254096 DEBUG nova.virt.libvirt.host [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:52:57 compute-0 nova_compute[254092]: 2025-11-25 16:52:57.191 254096 DEBUG nova.virt.libvirt.host [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:52:57 compute-0 nova_compute[254092]: 2025-11-25 16:52:57.193 254096 DEBUG nova.virt.libvirt.host [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:52:57 compute-0 nova_compute[254092]: 2025-11-25 16:52:57.194 254096 DEBUG nova.virt.libvirt.host [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:52:57 compute-0 nova_compute[254092]: 2025-11-25 16:52:57.194 254096 DEBUG nova.virt.libvirt.driver [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:52:57 compute-0 nova_compute[254092]: 2025-11-25 16:52:57.194 254096 DEBUG nova.virt.hardware [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:52:57 compute-0 nova_compute[254092]: 2025-11-25 16:52:57.195 254096 DEBUG nova.virt.hardware [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:52:57 compute-0 nova_compute[254092]: 2025-11-25 16:52:57.195 254096 DEBUG nova.virt.hardware [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:52:57 compute-0 nova_compute[254092]: 2025-11-25 16:52:57.195 254096 DEBUG nova.virt.hardware [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:52:57 compute-0 nova_compute[254092]: 2025-11-25 16:52:57.195 254096 DEBUG nova.virt.hardware [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:52:57 compute-0 nova_compute[254092]: 2025-11-25 16:52:57.195 254096 DEBUG nova.virt.hardware [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:52:57 compute-0 nova_compute[254092]: 2025-11-25 16:52:57.195 254096 DEBUG nova.virt.hardware [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:52:57 compute-0 nova_compute[254092]: 2025-11-25 16:52:57.195 254096 DEBUG nova.virt.hardware [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:52:57 compute-0 nova_compute[254092]: 2025-11-25 16:52:57.196 254096 DEBUG nova.virt.hardware [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:52:57 compute-0 nova_compute[254092]: 2025-11-25 16:52:57.196 254096 DEBUG nova.virt.hardware [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:52:57 compute-0 nova_compute[254092]: 2025-11-25 16:52:57.196 254096 DEBUG nova.virt.hardware [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:52:57 compute-0 nova_compute[254092]: 2025-11-25 16:52:57.196 254096 DEBUG nova.objects.instance [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lazy-loading 'vcpu_model' on Instance uuid 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:52:57 compute-0 nova_compute[254092]: 2025-11-25 16:52:57.199 254096 DEBUG oslo_concurrency.processutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b12625ea-31bf-4599-a248-4c6ced8e59c2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjz7a6jei" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:52:57 compute-0 nova_compute[254092]: 2025-11-25 16:52:57.222 254096 DEBUG nova.storage.rbd_utils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] rbd image b12625ea-31bf-4599-a248-4c6ced8e59c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:52:57 compute-0 nova_compute[254092]: 2025-11-25 16:52:57.226 254096 DEBUG oslo_concurrency.processutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b12625ea-31bf-4599-a248-4c6ced8e59c2/disk.config b12625ea-31bf-4599-a248-4c6ced8e59c2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:52:57 compute-0 ceph-mon[74985]: pgmap v2072: 321 pgs: 321 active+clean; 237 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 22 MiB/s wr, 304 op/s
Nov 25 16:52:57 compute-0 ceph-mon[74985]: osdmap e275: 3 total, 3 up, 3 in
Nov 25 16:52:57 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3910056781' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:52:57 compute-0 nova_compute[254092]: 2025-11-25 16:52:57.291 254096 DEBUG oslo_concurrency.processutils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:52:57 compute-0 nova_compute[254092]: 2025-11-25 16:52:57.419 254096 DEBUG oslo_concurrency.processutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b12625ea-31bf-4599-a248-4c6ced8e59c2/disk.config b12625ea-31bf-4599-a248-4c6ced8e59c2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.193s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:52:57 compute-0 nova_compute[254092]: 2025-11-25 16:52:57.419 254096 INFO nova.virt.libvirt.driver [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Deleting local config drive /var/lib/nova/instances/b12625ea-31bf-4599-a248-4c6ced8e59c2/disk.config because it was imported into RBD.
Nov 25 16:52:57 compute-0 kernel: tap1ef97ad6-07: entered promiscuous mode
Nov 25 16:52:57 compute-0 systemd-udevd[364444]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:52:57 compute-0 NetworkManager[48891]: <info>  [1764089577.4651] manager: (tap1ef97ad6-07): new Tun device (/org/freedesktop/NetworkManager/Devices/448)
Nov 25 16:52:57 compute-0 NetworkManager[48891]: <info>  [1764089577.4847] device (tap1ef97ad6-07): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:52:57 compute-0 NetworkManager[48891]: <info>  [1764089577.4853] device (tap1ef97ad6-07): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:52:57 compute-0 ovn_controller[153477]: 2025-11-25T16:52:57Z|01090|binding|INFO|Claiming lport 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 for this chassis.
Nov 25 16:52:57 compute-0 ovn_controller[153477]: 2025-11-25T16:52:57Z|01091|binding|INFO|1ef97ad6-0798-4b3d-a9cc-562f9526ae38: Claiming fa:16:3e:5e:77:80 10.100.0.8
Nov 25 16:52:57 compute-0 nova_compute[254092]: 2025-11-25 16:52:57.525 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.536 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:77:80 10.100.0.8'], port_security=['fa:16:3e:5e:77:80 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'b12625ea-31bf-4599-a248-4c6ced8e59c2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1e6bccce-d7db-425d-9e27-bd7a85fa0b4a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a32ed1af-29d9-45da-a55a-60d785fe0bce, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1ef97ad6-0798-4b3d-a9cc-562f9526ae38) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.537 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 in datapath aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1 bound to our chassis
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.538 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.549 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4bc337e0-3aac-4134-8361-ea41d338f7dd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.550 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaa7a2aa2-91 in ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.552 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaa7a2aa2-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.552 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3acaf7d0-eb5f-473b-aa0d-aaee3fe2249b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.553 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8c25b9e3-04fb-4f73-8d32-cdd5c095b15c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.564 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[a154f111-dc0c-41c9-8667-462ee3cb7efc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:57 compute-0 systemd-machined[216343]: New machine qemu-138-instance-0000006d.
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.579 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f910c861-116c-4d31-995a-eb127b2ee5f7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:57 compute-0 nova_compute[254092]: 2025-11-25 16:52:57.590 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:57 compute-0 systemd[1]: Started Virtual Machine qemu-138-instance-0000006d.
Nov 25 16:52:57 compute-0 ovn_controller[153477]: 2025-11-25T16:52:57Z|01092|binding|INFO|Setting lport 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 ovn-installed in OVS
Nov 25 16:52:57 compute-0 ovn_controller[153477]: 2025-11-25T16:52:57Z|01093|binding|INFO|Setting lport 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 up in Southbound
Nov 25 16:52:57 compute-0 nova_compute[254092]: 2025-11-25 16:52:57.594 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.608 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6f0182ad-d796-4a65-8293-152855d174d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.612 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6092e4ba-174b-4641-951a-6b8ac26b7969]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:57 compute-0 NetworkManager[48891]: <info>  [1764089577.6137] manager: (tapaa7a2aa2-90): new Veth device (/org/freedesktop/NetworkManager/Devices/449)
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.650 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[2179fd0c-5613-4cbd-bc39-675a8cbefa69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.658 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9cfe3b72-ed8d-4fa0-bc6c-cc0f582d741f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:57 compute-0 NetworkManager[48891]: <info>  [1764089577.6818] device (tapaa7a2aa2-90): carrier: link connected
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.687 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[45d3c66d-1d3f-4766-81df-926a1b5a4520]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.701 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[78dfef53-3911-4112-86a5-10aba1b93660]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa7a2aa2-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:f6:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 323], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603527, 'reachable_time': 20731, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364729, 'error': None, 'target': 'ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.722 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a63ca012-0762-412c-8e52-721c378686a1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe52:f6a9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 603527, 'tstamp': 603527}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364730, 'error': None, 'target': 'ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.737 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[485b82b4-e17e-4e13-8bc6-712c31c94af0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa7a2aa2-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:f6:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 323], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603527, 'reachable_time': 20731, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 364731, 'error': None, 'target': 'ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:52:57 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3511631983' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.764 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[53ed4055-7fa5-4426-a52e-7c46e30d41a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:57 compute-0 nova_compute[254092]: 2025-11-25 16:52:57.773 254096 DEBUG oslo_concurrency.processutils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:52:57 compute-0 nova_compute[254092]: 2025-11-25 16:52:57.774 254096 DEBUG oslo_concurrency.processutils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.821 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[18482dfb-7ec3-45bc-a03c-61bde883e6c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.822 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa7a2aa2-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.822 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.823 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa7a2aa2-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:52:57 compute-0 nova_compute[254092]: 2025-11-25 16:52:57.825 254096 DEBUG nova.network.neutron [req-e7628b19-a02e-46c3-9f8f-f759b2291a57 req-feb78ab9-b4f7-46e7-a0b3-a13f9b68dc5c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Updated VIF entry in instance network info cache for port 1ef97ad6-0798-4b3d-a9cc-562f9526ae38. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:52:57 compute-0 kernel: tapaa7a2aa2-90: entered promiscuous mode
Nov 25 16:52:57 compute-0 nova_compute[254092]: 2025-11-25 16:52:57.826 254096 DEBUG nova.network.neutron [req-e7628b19-a02e-46c3-9f8f-f759b2291a57 req-feb78ab9-b4f7-46e7-a0b3-a13f9b68dc5c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Updating instance_info_cache with network_info: [{"id": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "address": "fa:16:3e:5e:77:80", "network": {"id": "aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1", "bridge": "br-int", "label": "tempest-network-smoke--2125799662", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ef97ad6-07", "ovs_interfaceid": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:52:57 compute-0 NetworkManager[48891]: <info>  [1764089577.8275] manager: (tapaa7a2aa2-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/450)
Nov 25 16:52:57 compute-0 nova_compute[254092]: 2025-11-25 16:52:57.829 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:57 compute-0 nova_compute[254092]: 2025-11-25 16:52:57.831 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.832 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa7a2aa2-90, col_values=(('external_ids', {'iface-id': '4ba2fcc9-e190-42a3-90c2-5eaf3bb3b6ed'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:52:57 compute-0 nova_compute[254092]: 2025-11-25 16:52:57.833 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:57 compute-0 ovn_controller[153477]: 2025-11-25T16:52:57Z|01094|binding|INFO|Releasing lport 4ba2fcc9-e190-42a3-90c2-5eaf3bb3b6ed from this chassis (sb_readonly=0)
Nov 25 16:52:57 compute-0 nova_compute[254092]: 2025-11-25 16:52:57.842 254096 DEBUG oslo_concurrency.lockutils [req-e7628b19-a02e-46c3-9f8f-f759b2291a57 req-feb78ab9-b4f7-46e7-a0b3-a13f9b68dc5c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-b12625ea-31bf-4599-a248-4c6ced8e59c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:52:57 compute-0 nova_compute[254092]: 2025-11-25 16:52:57.851 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:57 compute-0 nova_compute[254092]: 2025-11-25 16:52:57.856 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.856 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.857 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7a8a7c7f-0b41-45b7-9013-142d8066e64e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.858 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1.pid.haproxy
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:52:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:57.859 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1', 'env', 'PROCESS_TAG=haproxy-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:52:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2074: 321 pgs: 321 active+clean; 246 MiB data, 876 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 23 MiB/s wr, 362 op/s
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.179 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089578.1792219, b12625ea-31bf-4599-a248-4c6ced8e59c2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.180 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] VM Started (Lifecycle Event)
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.196 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.200 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089578.179442, b12625ea-31bf-4599-a248-4c6ced8e59c2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.200 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] VM Paused (Lifecycle Event)
Nov 25 16:52:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:52:58 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2471538734' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:52:58 compute-0 podman[364831]: 2025-11-25 16:52:58.214901756 +0000 UTC m=+0.054592574 container create 1783aaa40d4ebb6e524f8b0823a0abc04ec5e34b56392a52a2337f24d54bc35a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.223 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.225 254096 DEBUG oslo_concurrency.processutils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.226 254096 DEBUG oslo_concurrency.processutils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:52:58 compute-0 systemd[1]: Started libpod-conmon-1783aaa40d4ebb6e524f8b0823a0abc04ec5e34b56392a52a2337f24d54bc35a.scope.
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.261 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:52:58 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3511631983' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:52:58 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2471538734' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:52:58 compute-0 podman[364831]: 2025-11-25 16:52:58.185319542 +0000 UTC m=+0.025010390 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.279 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:52:58 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:52:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3934ea0b4d9ff83643ea9a1fbc52010cb7a732e5e5a2e2bfce4873181e16e161/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:52:58 compute-0 podman[364831]: 2025-11-25 16:52:58.314519214 +0000 UTC m=+0.154210062 container init 1783aaa40d4ebb6e524f8b0823a0abc04ec5e34b56392a52a2337f24d54bc35a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 16:52:58 compute-0 podman[364831]: 2025-11-25 16:52:58.320152807 +0000 UTC m=+0.159843625 container start 1783aaa40d4ebb6e524f8b0823a0abc04ec5e34b56392a52a2337f24d54bc35a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3)
Nov 25 16:52:58 compute-0 neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1[364850]: [NOTICE]   (364854) : New worker (364860) forked
Nov 25 16:52:58 compute-0 neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1[364850]: [NOTICE]   (364854) : Loading success.
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.441 254096 DEBUG nova.compute.manager [req-ae1342a1-0064-496f-a4fb-0cdb73dabe5d req-ddca33de-407c-49fd-b17c-227d21fcc701 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Received event network-vif-plugged-a74c23e6-4075-4b59-b36f-9d06bad062d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.442 254096 DEBUG oslo_concurrency.lockutils [req-ae1342a1-0064-496f-a4fb-0cdb73dabe5d req-ddca33de-407c-49fd-b17c-227d21fcc701 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.445 254096 DEBUG oslo_concurrency.lockutils [req-ae1342a1-0064-496f-a4fb-0cdb73dabe5d req-ddca33de-407c-49fd-b17c-227d21fcc701 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.445 254096 DEBUG oslo_concurrency.lockutils [req-ae1342a1-0064-496f-a4fb-0cdb73dabe5d req-ddca33de-407c-49fd-b17c-227d21fcc701 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.446 254096 DEBUG nova.compute.manager [req-ae1342a1-0064-496f-a4fb-0cdb73dabe5d req-ddca33de-407c-49fd-b17c-227d21fcc701 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] No waiting events found dispatching network-vif-plugged-a74c23e6-4075-4b59-b36f-9d06bad062d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.446 254096 WARNING nova.compute.manager [req-ae1342a1-0064-496f-a4fb-0cdb73dabe5d req-ddca33de-407c-49fd-b17c-227d21fcc701 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Received unexpected event network-vif-plugged-a74c23e6-4075-4b59-b36f-9d06bad062d2 for instance with vm_state active and task_state rescuing.
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.446 254096 DEBUG nova.compute.manager [req-ae1342a1-0064-496f-a4fb-0cdb73dabe5d req-ddca33de-407c-49fd-b17c-227d21fcc701 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received event network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.446 254096 DEBUG oslo_concurrency.lockutils [req-ae1342a1-0064-496f-a4fb-0cdb73dabe5d req-ddca33de-407c-49fd-b17c-227d21fcc701 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.447 254096 DEBUG oslo_concurrency.lockutils [req-ae1342a1-0064-496f-a4fb-0cdb73dabe5d req-ddca33de-407c-49fd-b17c-227d21fcc701 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.447 254096 DEBUG oslo_concurrency.lockutils [req-ae1342a1-0064-496f-a4fb-0cdb73dabe5d req-ddca33de-407c-49fd-b17c-227d21fcc701 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.447 254096 DEBUG nova.compute.manager [req-ae1342a1-0064-496f-a4fb-0cdb73dabe5d req-ddca33de-407c-49fd-b17c-227d21fcc701 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Processing event network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.448 254096 DEBUG nova.compute.manager [req-ae1342a1-0064-496f-a4fb-0cdb73dabe5d req-ddca33de-407c-49fd-b17c-227d21fcc701 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received event network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.448 254096 DEBUG oslo_concurrency.lockutils [req-ae1342a1-0064-496f-a4fb-0cdb73dabe5d req-ddca33de-407c-49fd-b17c-227d21fcc701 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.448 254096 DEBUG oslo_concurrency.lockutils [req-ae1342a1-0064-496f-a4fb-0cdb73dabe5d req-ddca33de-407c-49fd-b17c-227d21fcc701 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.448 254096 DEBUG oslo_concurrency.lockutils [req-ae1342a1-0064-496f-a4fb-0cdb73dabe5d req-ddca33de-407c-49fd-b17c-227d21fcc701 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.449 254096 DEBUG nova.compute.manager [req-ae1342a1-0064-496f-a4fb-0cdb73dabe5d req-ddca33de-407c-49fd-b17c-227d21fcc701 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] No waiting events found dispatching network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.449 254096 WARNING nova.compute.manager [req-ae1342a1-0064-496f-a4fb-0cdb73dabe5d req-ddca33de-407c-49fd-b17c-227d21fcc701 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received unexpected event network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 for instance with vm_state building and task_state spawning.
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.450 254096 DEBUG nova.compute.manager [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.454 254096 DEBUG nova.virt.libvirt.driver [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.454 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089578.4539948, b12625ea-31bf-4599-a248-4c6ced8e59c2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.454 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] VM Resumed (Lifecycle Event)
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.458 254096 INFO nova.virt.libvirt.driver [-] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Instance spawned successfully.
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.458 254096 DEBUG nova.virt.libvirt.driver [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.483 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.488 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.491 254096 DEBUG nova.virt.libvirt.driver [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.491 254096 DEBUG nova.virt.libvirt.driver [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.492 254096 DEBUG nova.virt.libvirt.driver [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.492 254096 DEBUG nova.virt.libvirt.driver [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.493 254096 DEBUG nova.virt.libvirt.driver [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.493 254096 DEBUG nova.virt.libvirt.driver [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.523 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.556 254096 INFO nova.compute.manager [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Took 8.65 seconds to spawn the instance on the hypervisor.
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.557 254096 DEBUG nova.compute.manager [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:52:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:52:58 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2350011716' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.632 254096 DEBUG oslo_concurrency.processutils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.633 254096 DEBUG nova.virt.libvirt.vif [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:52:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-2043816774',display_name='tempest-ServerRescueNegativeTestJSON-server-2043816774',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-2043816774',id=108,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:52:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='33a2e508e63149889f0d5d945726522c',ramdisk_id='',reservation_id='r-6exq9cfa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1769565225',owner_user_name='tempest-ServerRescueNegativeTestJSON-1769565225-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:52:39Z,user_data=None,user_id='9deaf2356cda4c0cb2a52383b7f2e609',uuid=1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "address": "fa:16:3e:09:6e:80", "network": {"id": "cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "vif_mac": "fa:16:3e:09:6e:80"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33a2e508e63149889f0d5d945726522c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa74c23e6-40", "ovs_interfaceid": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.634 254096 DEBUG nova.network.os_vif_util [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Converting VIF {"id": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "address": "fa:16:3e:09:6e:80", "network": {"id": "cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "vif_mac": "fa:16:3e:09:6e:80"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33a2e508e63149889f0d5d945726522c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa74c23e6-40", "ovs_interfaceid": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.635 254096 DEBUG nova.network.os_vif_util [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:09:6e:80,bridge_name='br-int',has_traffic_filtering=True,id=a74c23e6-4075-4b59-b36f-9d06bad062d2,network=Network(cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa74c23e6-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.636 254096 DEBUG nova.objects.instance [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lazy-loading 'pci_devices' on Instance uuid 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.672 254096 DEBUG nova.virt.libvirt.driver [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:52:58 compute-0 nova_compute[254092]:   <uuid>1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5</uuid>
Nov 25 16:52:58 compute-0 nova_compute[254092]:   <name>instance-0000006c</name>
Nov 25 16:52:58 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:52:58 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:52:58 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:52:58 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-2043816774</nova:name>
Nov 25 16:52:58 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:52:57</nova:creationTime>
Nov 25 16:52:58 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:52:58 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:52:58 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:52:58 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:52:58 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:52:58 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:52:58 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:52:58 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:52:58 compute-0 nova_compute[254092]:         <nova:user uuid="9deaf2356cda4c0cb2a52383b7f2e609">tempest-ServerRescueNegativeTestJSON-1769565225-project-member</nova:user>
Nov 25 16:52:58 compute-0 nova_compute[254092]:         <nova:project uuid="33a2e508e63149889f0d5d945726522c">tempest-ServerRescueNegativeTestJSON-1769565225</nova:project>
Nov 25 16:52:58 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:52:58 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:52:58 compute-0 nova_compute[254092]:         <nova:port uuid="a74c23e6-4075-4b59-b36f-9d06bad062d2">
Nov 25 16:52:58 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:52:58 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:52:58 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:52:58 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:52:58 compute-0 nova_compute[254092]:     <system>
Nov 25 16:52:58 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:52:58 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:52:58 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:52:58 compute-0 nova_compute[254092]:       <entry name="serial">1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5</entry>
Nov 25 16:52:58 compute-0 nova_compute[254092]:       <entry name="uuid">1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5</entry>
Nov 25 16:52:58 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     </system>
Nov 25 16:52:58 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:52:58 compute-0 nova_compute[254092]:   <os>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:   </os>
Nov 25 16:52:58 compute-0 nova_compute[254092]:   <features>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:   </features>
Nov 25 16:52:58 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:52:58 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:52:58 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:52:58 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:52:58 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:52:58 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk.rescue">
Nov 25 16:52:58 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:       </source>
Nov 25 16:52:58 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:52:58 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:52:58 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:52:58 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk">
Nov 25 16:52:58 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:       </source>
Nov 25 16:52:58 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:52:58 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:52:58 compute-0 nova_compute[254092]:       <target dev="vdb" bus="virtio"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:52:58 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk.config.rescue">
Nov 25 16:52:58 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:       </source>
Nov 25 16:52:58 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:52:58 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:52:58 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:52:58 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:09:6e:80"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:       <target dev="tapa74c23e6-40"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:52:58 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5/console.log" append="off"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     <video>
Nov 25 16:52:58 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     </video>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:52:58 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:52:58 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:52:58 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:52:58 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:52:58 compute-0 nova_compute[254092]: </domain>
Nov 25 16:52:58 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.678 254096 INFO nova.compute.manager [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Took 9.64 seconds to build instance.
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.682 254096 INFO nova.virt.libvirt.driver [-] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Instance destroyed successfully.
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.690 254096 DEBUG oslo_concurrency.lockutils [None req-cff6b03e-df71-45c0-a674-d2bb87fc1c1f 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.740 254096 DEBUG nova.virt.libvirt.driver [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.740 254096 DEBUG nova.virt.libvirt.driver [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.740 254096 DEBUG nova.virt.libvirt.driver [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.740 254096 DEBUG nova.virt.libvirt.driver [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] No VIF found with MAC fa:16:3e:09:6e:80, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.741 254096 INFO nova.virt.libvirt.driver [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Using config drive
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.763 254096 DEBUG nova.storage.rbd_utils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] rbd image 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.785 254096 DEBUG nova.objects.instance [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lazy-loading 'ec2_ids' on Instance uuid 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.814 254096 DEBUG nova.objects.instance [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lazy-loading 'keypairs' on Instance uuid 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:52:58 compute-0 nova_compute[254092]: 2025-11-25 16:52:58.998 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:59 compute-0 ceph-mon[74985]: pgmap v2074: 321 pgs: 321 active+clean; 246 MiB data, 876 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 23 MiB/s wr, 362 op/s
Nov 25 16:52:59 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2350011716' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:52:59 compute-0 nova_compute[254092]: 2025-11-25 16:52:59.434 254096 INFO nova.virt.libvirt.driver [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Creating config drive at /var/lib/nova/instances/1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5/disk.config.rescue
Nov 25 16:52:59 compute-0 nova_compute[254092]: 2025-11-25 16:52:59.440 254096 DEBUG oslo_concurrency.processutils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps9pj45tr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:52:59 compute-0 nova_compute[254092]: 2025-11-25 16:52:59.583 254096 DEBUG oslo_concurrency.processutils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps9pj45tr" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:52:59 compute-0 nova_compute[254092]: 2025-11-25 16:52:59.607 254096 DEBUG nova.storage.rbd_utils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] rbd image 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:52:59 compute-0 nova_compute[254092]: 2025-11-25 16:52:59.611 254096 DEBUG oslo_concurrency.processutils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5/disk.config.rescue 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:52:59 compute-0 nova_compute[254092]: 2025-11-25 16:52:59.762 254096 DEBUG oslo_concurrency.processutils [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5/disk.config.rescue 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:52:59 compute-0 nova_compute[254092]: 2025-11-25 16:52:59.763 254096 INFO nova.virt.libvirt.driver [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Deleting local config drive /var/lib/nova/instances/1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5/disk.config.rescue because it was imported into RBD.
Nov 25 16:52:59 compute-0 kernel: tapa74c23e6-40: entered promiscuous mode
Nov 25 16:52:59 compute-0 nova_compute[254092]: 2025-11-25 16:52:59.814 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:59 compute-0 NetworkManager[48891]: <info>  [1764089579.8151] manager: (tapa74c23e6-40): new Tun device (/org/freedesktop/NetworkManager/Devices/451)
Nov 25 16:52:59 compute-0 ovn_controller[153477]: 2025-11-25T16:52:59Z|01095|binding|INFO|Claiming lport a74c23e6-4075-4b59-b36f-9d06bad062d2 for this chassis.
Nov 25 16:52:59 compute-0 ovn_controller[153477]: 2025-11-25T16:52:59Z|01096|binding|INFO|a74c23e6-4075-4b59-b36f-9d06bad062d2: Claiming fa:16:3e:09:6e:80 10.100.0.14
Nov 25 16:52:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:59.822 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:6e:80 10.100.0.14'], port_security=['fa:16:3e:09:6e:80 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '33a2e508e63149889f0d5d945726522c', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'f668b193-ad4a-4e17-9130-f237a989e2f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fa74b059-0bab-4ac8-ad4f-59f5f4ae171b, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=a74c23e6-4075-4b59-b36f-9d06bad062d2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:52:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:59.824 163338 INFO neutron.agent.ovn.metadata.agent [-] Port a74c23e6-4075-4b59-b36f-9d06bad062d2 in datapath cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e bound to our chassis
Nov 25 16:52:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:59.825 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e
Nov 25 16:52:59 compute-0 nova_compute[254092]: 2025-11-25 16:52:59.839 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:59 compute-0 ovn_controller[153477]: 2025-11-25T16:52:59Z|01097|binding|INFO|Setting lport a74c23e6-4075-4b59-b36f-9d06bad062d2 ovn-installed in OVS
Nov 25 16:52:59 compute-0 ovn_controller[153477]: 2025-11-25T16:52:59Z|01098|binding|INFO|Setting lport a74c23e6-4075-4b59-b36f-9d06bad062d2 up in Southbound
Nov 25 16:52:59 compute-0 nova_compute[254092]: 2025-11-25 16:52:59.843 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:59 compute-0 systemd-machined[216343]: New machine qemu-139-instance-0000006c.
Nov 25 16:52:59 compute-0 systemd-udevd[364959]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:52:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:59.853 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[04eb6876-4188-4ab1-9917-113bcc06fe58]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:59 compute-0 systemd[1]: Started Virtual Machine qemu-139-instance-0000006c.
Nov 25 16:52:59 compute-0 NetworkManager[48891]: <info>  [1764089579.8726] device (tapa74c23e6-40): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:52:59 compute-0 NetworkManager[48891]: <info>  [1764089579.8733] device (tapa74c23e6-40): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:52:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:59.888 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[cf9648fc-2ae4-4c08-a541-8c2802dfbfff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:59.891 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1f0802b3-0088-4a4e-9355-ff161d2c6df8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2075: 321 pgs: 321 active+clean; 246 MiB data, 876 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 19 MiB/s wr, 294 op/s
Nov 25 16:52:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:59.919 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9405543f-6bde-4f93-b0a0-e29467f72c09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:59.938 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e934518c-e365-49fa-a2f3-a0115558eea8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcf0cb5b9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:cc:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 319], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 601458, 'reachable_time': 32191, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364971, 'error': None, 'target': 'ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:59.951 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a1fa2561-990c-40b9-950c-38274feb4907]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcf0cb5b9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 601471, 'tstamp': 601471}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364973, 'error': None, 'target': 'ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcf0cb5b9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 601474, 'tstamp': 601474}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364973, 'error': None, 'target': 'ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:52:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:59.953 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf0cb5b9-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:52:59 compute-0 nova_compute[254092]: 2025-11-25 16:52:59.954 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:59 compute-0 nova_compute[254092]: 2025-11-25 16:52:59.955 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:52:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:59.955 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcf0cb5b9-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:52:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:59.955 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:52:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:59.956 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcf0cb5b9-c0, col_values=(('external_ids', {'iface-id': '04d240b5-2178-4712-9ede-6d58532785de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:52:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:52:59.956 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:53:00 compute-0 nova_compute[254092]: 2025-11-25 16:53:00.377 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 16:53:00 compute-0 nova_compute[254092]: 2025-11-25 16:53:00.377 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089580.3768363, 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:53:00 compute-0 nova_compute[254092]: 2025-11-25 16:53:00.378 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] VM Resumed (Lifecycle Event)
Nov 25 16:53:00 compute-0 nova_compute[254092]: 2025-11-25 16:53:00.382 254096 DEBUG nova.compute.manager [None req-30ea5d7e-2147-4e5a-91d5-ec5abb6c6bd7 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:53:00 compute-0 nova_compute[254092]: 2025-11-25 16:53:00.417 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:53:00 compute-0 nova_compute[254092]: 2025-11-25 16:53:00.424 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:53:00 compute-0 nova_compute[254092]: 2025-11-25 16:53:00.453 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089580.3777165, 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:53:00 compute-0 nova_compute[254092]: 2025-11-25 16:53:00.454 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] VM Started (Lifecycle Event)
Nov 25 16:53:00 compute-0 nova_compute[254092]: 2025-11-25 16:53:00.479 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:53:00 compute-0 nova_compute[254092]: 2025-11-25 16:53:00.485 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:53:00 compute-0 sudo[365034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:53:00 compute-0 sudo[365034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:53:00 compute-0 sudo[365034]: pam_unix(sudo:session): session closed for user root
Nov 25 16:53:00 compute-0 sudo[365059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:53:00 compute-0 sudo[365059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:53:00 compute-0 sudo[365059]: pam_unix(sudo:session): session closed for user root
Nov 25 16:53:00 compute-0 nova_compute[254092]: 2025-11-25 16:53:00.755 254096 DEBUG nova.compute.manager [req-043ea24d-22c6-4298-8499-34ac8902b74f req-d9e90230-4124-43a6-a81e-10a99888a8e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Received event network-vif-plugged-a74c23e6-4075-4b59-b36f-9d06bad062d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:53:00 compute-0 nova_compute[254092]: 2025-11-25 16:53:00.756 254096 DEBUG oslo_concurrency.lockutils [req-043ea24d-22c6-4298-8499-34ac8902b74f req-d9e90230-4124-43a6-a81e-10a99888a8e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:53:00 compute-0 nova_compute[254092]: 2025-11-25 16:53:00.756 254096 DEBUG oslo_concurrency.lockutils [req-043ea24d-22c6-4298-8499-34ac8902b74f req-d9e90230-4124-43a6-a81e-10a99888a8e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:53:00 compute-0 nova_compute[254092]: 2025-11-25 16:53:00.756 254096 DEBUG oslo_concurrency.lockutils [req-043ea24d-22c6-4298-8499-34ac8902b74f req-d9e90230-4124-43a6-a81e-10a99888a8e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:53:00 compute-0 nova_compute[254092]: 2025-11-25 16:53:00.756 254096 DEBUG nova.compute.manager [req-043ea24d-22c6-4298-8499-34ac8902b74f req-d9e90230-4124-43a6-a81e-10a99888a8e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] No waiting events found dispatching network-vif-plugged-a74c23e6-4075-4b59-b36f-9d06bad062d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:53:00 compute-0 nova_compute[254092]: 2025-11-25 16:53:00.756 254096 WARNING nova.compute.manager [req-043ea24d-22c6-4298-8499-34ac8902b74f req-d9e90230-4124-43a6-a81e-10a99888a8e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Received unexpected event network-vif-plugged-a74c23e6-4075-4b59-b36f-9d06bad062d2 for instance with vm_state rescued and task_state None.
Nov 25 16:53:00 compute-0 nova_compute[254092]: 2025-11-25 16:53:00.757 254096 DEBUG nova.compute.manager [req-043ea24d-22c6-4298-8499-34ac8902b74f req-d9e90230-4124-43a6-a81e-10a99888a8e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Received event network-vif-plugged-a74c23e6-4075-4b59-b36f-9d06bad062d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:53:00 compute-0 nova_compute[254092]: 2025-11-25 16:53:00.757 254096 DEBUG oslo_concurrency.lockutils [req-043ea24d-22c6-4298-8499-34ac8902b74f req-d9e90230-4124-43a6-a81e-10a99888a8e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:53:00 compute-0 nova_compute[254092]: 2025-11-25 16:53:00.757 254096 DEBUG oslo_concurrency.lockutils [req-043ea24d-22c6-4298-8499-34ac8902b74f req-d9e90230-4124-43a6-a81e-10a99888a8e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:53:00 compute-0 nova_compute[254092]: 2025-11-25 16:53:00.757 254096 DEBUG oslo_concurrency.lockutils [req-043ea24d-22c6-4298-8499-34ac8902b74f req-d9e90230-4124-43a6-a81e-10a99888a8e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:53:00 compute-0 nova_compute[254092]: 2025-11-25 16:53:00.757 254096 DEBUG nova.compute.manager [req-043ea24d-22c6-4298-8499-34ac8902b74f req-d9e90230-4124-43a6-a81e-10a99888a8e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] No waiting events found dispatching network-vif-plugged-a74c23e6-4075-4b59-b36f-9d06bad062d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:53:00 compute-0 nova_compute[254092]: 2025-11-25 16:53:00.757 254096 WARNING nova.compute.manager [req-043ea24d-22c6-4298-8499-34ac8902b74f req-d9e90230-4124-43a6-a81e-10a99888a8e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Received unexpected event network-vif-plugged-a74c23e6-4075-4b59-b36f-9d06bad062d2 for instance with vm_state rescued and task_state None.
Nov 25 16:53:00 compute-0 sudo[365084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:53:00 compute-0 sudo[365084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:53:00 compute-0 sudo[365084]: pam_unix(sudo:session): session closed for user root
Nov 25 16:53:00 compute-0 sudo[365109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 16:53:00 compute-0 sudo[365109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:53:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:53:01 compute-0 sudo[365109]: pam_unix(sudo:session): session closed for user root
Nov 25 16:53:01 compute-0 ceph-mon[74985]: pgmap v2075: 321 pgs: 321 active+clean; 246 MiB data, 876 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 19 MiB/s wr, 294 op/s
Nov 25 16:53:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:53:01 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:53:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 16:53:01 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:53:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 16:53:01 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:53:01 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev be8c9f4c-9f2d-44e9-ba1b-db649aca7e48 does not exist
Nov 25 16:53:01 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 994401bf-b5e3-4f31-b741-74f7760a5f19 does not exist
Nov 25 16:53:01 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 642384ab-10fe-42e8-95d5-7b6c25e77b95 does not exist
Nov 25 16:53:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 16:53:01 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:53:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 16:53:01 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:53:01 compute-0 nova_compute[254092]: 2025-11-25 16:53:01.367 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:53:01 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:53:01 compute-0 sudo[365165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:53:01 compute-0 sudo[365165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:53:01 compute-0 sudo[365165]: pam_unix(sudo:session): session closed for user root
Nov 25 16:53:01 compute-0 sudo[365190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:53:01 compute-0 sudo[365190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:53:01 compute-0 sudo[365190]: pam_unix(sudo:session): session closed for user root
Nov 25 16:53:01 compute-0 sudo[365215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:53:01 compute-0 sudo[365215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:53:01 compute-0 sudo[365215]: pam_unix(sudo:session): session closed for user root
Nov 25 16:53:01 compute-0 sudo[365240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 16:53:01 compute-0 sudo[365240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:53:01 compute-0 podman[365305]: 2025-11-25 16:53:01.902561168 +0000 UTC m=+0.043487442 container create 1da874b0c3bab412ea8c6665b4ff109f4cd883a8ea73b7631c57e4e6f0ac61f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kowalevski, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 16:53:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2076: 321 pgs: 321 active+clean; 293 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 6.0 MiB/s wr, 256 op/s
Nov 25 16:53:01 compute-0 systemd[1]: Started libpod-conmon-1da874b0c3bab412ea8c6665b4ff109f4cd883a8ea73b7631c57e4e6f0ac61f8.scope.
Nov 25 16:53:01 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:53:01 compute-0 podman[365305]: 2025-11-25 16:53:01.88347365 +0000 UTC m=+0.024399954 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:53:01 compute-0 podman[365305]: 2025-11-25 16:53:01.991946607 +0000 UTC m=+0.132872911 container init 1da874b0c3bab412ea8c6665b4ff109f4cd883a8ea73b7631c57e4e6f0ac61f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kowalevski, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:53:01 compute-0 podman[365305]: 2025-11-25 16:53:01.998365303 +0000 UTC m=+0.139291577 container start 1da874b0c3bab412ea8c6665b4ff109f4cd883a8ea73b7631c57e4e6f0ac61f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 16:53:02 compute-0 podman[365305]: 2025-11-25 16:53:02.001615481 +0000 UTC m=+0.142541765 container attach 1da874b0c3bab412ea8c6665b4ff109f4cd883a8ea73b7631c57e4e6f0ac61f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kowalevski, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:53:02 compute-0 fervent_kowalevski[365320]: 167 167
Nov 25 16:53:02 compute-0 systemd[1]: libpod-1da874b0c3bab412ea8c6665b4ff109f4cd883a8ea73b7631c57e4e6f0ac61f8.scope: Deactivated successfully.
Nov 25 16:53:02 compute-0 conmon[365320]: conmon 1da874b0c3bab412ea8c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1da874b0c3bab412ea8c6665b4ff109f4cd883a8ea73b7631c57e4e6f0ac61f8.scope/container/memory.events
Nov 25 16:53:02 compute-0 podman[365305]: 2025-11-25 16:53:02.004739866 +0000 UTC m=+0.145666140 container died 1da874b0c3bab412ea8c6665b4ff109f4cd883a8ea73b7631c57e4e6f0ac61f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 16:53:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e6a1cf0eb6b20f89fff21931645a50362c5110e573b08f33850ff54d5314dbc-merged.mount: Deactivated successfully.
Nov 25 16:53:02 compute-0 podman[365305]: 2025-11-25 16:53:02.042478291 +0000 UTC m=+0.183404565 container remove 1da874b0c3bab412ea8c6665b4ff109f4cd883a8ea73b7631c57e4e6f0ac61f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:53:02 compute-0 systemd[1]: libpod-conmon-1da874b0c3bab412ea8c6665b4ff109f4cd883a8ea73b7631c57e4e6f0ac61f8.scope: Deactivated successfully.
Nov 25 16:53:02 compute-0 nova_compute[254092]: 2025-11-25 16:53:02.092 254096 INFO nova.compute.manager [None req-bcfe8ec3-c1ae-45c4-b42e-5ebd3ff3d938 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Pausing
Nov 25 16:53:02 compute-0 nova_compute[254092]: 2025-11-25 16:53:02.093 254096 DEBUG nova.objects.instance [None req-bcfe8ec3-c1ae-45c4-b42e-5ebd3ff3d938 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lazy-loading 'flavor' on Instance uuid 6d70119e-e45b-4a12-893e-8d5a805ca8ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:53:02 compute-0 nova_compute[254092]: 2025-11-25 16:53:02.156 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089582.156158, 6d70119e-e45b-4a12-893e-8d5a805ca8ab => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:53:02 compute-0 nova_compute[254092]: 2025-11-25 16:53:02.156 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] VM Paused (Lifecycle Event)
Nov 25 16:53:02 compute-0 nova_compute[254092]: 2025-11-25 16:53:02.161 254096 DEBUG nova.compute.manager [None req-bcfe8ec3-c1ae-45c4-b42e-5ebd3ff3d938 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:53:02 compute-0 nova_compute[254092]: 2025-11-25 16:53:02.194 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:53:02 compute-0 nova_compute[254092]: 2025-11-25 16:53:02.204 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:53:02 compute-0 podman[365344]: 2025-11-25 16:53:02.23158351 +0000 UTC m=+0.038441016 container create 08e21738cccdf92dab9959af76319926d3bcbe7de216f6002215a9963ead341f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lewin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:53:02 compute-0 systemd[1]: Started libpod-conmon-08e21738cccdf92dab9959af76319926d3bcbe7de216f6002215a9963ead341f.scope.
Nov 25 16:53:02 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:53:02 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:53:02 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:53:02 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:53:02 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:53:02 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:53:02 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:53:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa57a8073f695545e1fac8e4951acbde33d315cb1e012e2bc1bb98da9eb2ca2e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:53:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa57a8073f695545e1fac8e4951acbde33d315cb1e012e2bc1bb98da9eb2ca2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:53:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa57a8073f695545e1fac8e4951acbde33d315cb1e012e2bc1bb98da9eb2ca2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:53:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa57a8073f695545e1fac8e4951acbde33d315cb1e012e2bc1bb98da9eb2ca2e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:53:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa57a8073f695545e1fac8e4951acbde33d315cb1e012e2bc1bb98da9eb2ca2e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 16:53:02 compute-0 podman[365344]: 2025-11-25 16:53:02.309538609 +0000 UTC m=+0.116396135 container init 08e21738cccdf92dab9959af76319926d3bcbe7de216f6002215a9963ead341f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lewin, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 16:53:02 compute-0 podman[365344]: 2025-11-25 16:53:02.21650855 +0000 UTC m=+0.023366056 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:53:02 compute-0 podman[365344]: 2025-11-25 16:53:02.31992725 +0000 UTC m=+0.126784766 container start 08e21738cccdf92dab9959af76319926d3bcbe7de216f6002215a9963ead341f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:53:02 compute-0 podman[365344]: 2025-11-25 16:53:02.324356401 +0000 UTC m=+0.131213927 container attach 08e21738cccdf92dab9959af76319926d3bcbe7de216f6002215a9963ead341f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:53:02 compute-0 ovn_controller[153477]: 2025-11-25T16:53:02Z|01099|binding|INFO|Releasing lport 4ba2fcc9-e190-42a3-90c2-5eaf3bb3b6ed from this chassis (sb_readonly=0)
Nov 25 16:53:02 compute-0 NetworkManager[48891]: <info>  [1764089582.6024] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/452)
Nov 25 16:53:02 compute-0 NetworkManager[48891]: <info>  [1764089582.6032] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/453)
Nov 25 16:53:02 compute-0 ovn_controller[153477]: 2025-11-25T16:53:02Z|01100|binding|INFO|Releasing lport 04d240b5-2178-4712-9ede-6d58532785de from this chassis (sb_readonly=0)
Nov 25 16:53:02 compute-0 nova_compute[254092]: 2025-11-25 16:53:02.610 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:02 compute-0 ovn_controller[153477]: 2025-11-25T16:53:02Z|01101|binding|INFO|Releasing lport 4ba2fcc9-e190-42a3-90c2-5eaf3bb3b6ed from this chassis (sb_readonly=0)
Nov 25 16:53:02 compute-0 ovn_controller[153477]: 2025-11-25T16:53:02Z|01102|binding|INFO|Releasing lport 04d240b5-2178-4712-9ede-6d58532785de from this chassis (sb_readonly=0)
Nov 25 16:53:02 compute-0 nova_compute[254092]: 2025-11-25 16:53:02.657 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:02 compute-0 nova_compute[254092]: 2025-11-25 16:53:02.663 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:02 compute-0 nova_compute[254092]: 2025-11-25 16:53:02.908 254096 DEBUG nova.compute.manager [req-dd8bf3b0-00a7-4f0e-9e6a-4b0c65cb8178 req-f57a5443-e377-460d-b3b2-12a288767197 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received event network-changed-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:53:02 compute-0 nova_compute[254092]: 2025-11-25 16:53:02.909 254096 DEBUG nova.compute.manager [req-dd8bf3b0-00a7-4f0e-9e6a-4b0c65cb8178 req-f57a5443-e377-460d-b3b2-12a288767197 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Refreshing instance network info cache due to event network-changed-1ef97ad6-0798-4b3d-a9cc-562f9526ae38. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:53:02 compute-0 nova_compute[254092]: 2025-11-25 16:53:02.909 254096 DEBUG oslo_concurrency.lockutils [req-dd8bf3b0-00a7-4f0e-9e6a-4b0c65cb8178 req-f57a5443-e377-460d-b3b2-12a288767197 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-b12625ea-31bf-4599-a248-4c6ced8e59c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:53:02 compute-0 nova_compute[254092]: 2025-11-25 16:53:02.909 254096 DEBUG oslo_concurrency.lockutils [req-dd8bf3b0-00a7-4f0e-9e6a-4b0c65cb8178 req-f57a5443-e377-460d-b3b2-12a288767197 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-b12625ea-31bf-4599-a248-4c6ced8e59c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:53:02 compute-0 nova_compute[254092]: 2025-11-25 16:53:02.909 254096 DEBUG nova.network.neutron [req-dd8bf3b0-00a7-4f0e-9e6a-4b0c65cb8178 req-f57a5443-e377-460d-b3b2-12a288767197 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Refreshing network info cache for port 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:53:03 compute-0 modest_lewin[365361]: --> passed data devices: 0 physical, 3 LVM
Nov 25 16:53:03 compute-0 modest_lewin[365361]: --> relative data size: 1.0
Nov 25 16:53:03 compute-0 modest_lewin[365361]: --> All data devices are unavailable
Nov 25 16:53:03 compute-0 ceph-mon[74985]: pgmap v2076: 321 pgs: 321 active+clean; 293 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 6.0 MiB/s wr, 256 op/s
Nov 25 16:53:03 compute-0 systemd[1]: libpod-08e21738cccdf92dab9959af76319926d3bcbe7de216f6002215a9963ead341f.scope: Deactivated successfully.
Nov 25 16:53:03 compute-0 podman[365344]: 2025-11-25 16:53:03.301916567 +0000 UTC m=+1.108774063 container died 08e21738cccdf92dab9959af76319926d3bcbe7de216f6002215a9963ead341f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lewin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:53:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa57a8073f695545e1fac8e4951acbde33d315cb1e012e2bc1bb98da9eb2ca2e-merged.mount: Deactivated successfully.
Nov 25 16:53:03 compute-0 podman[365344]: 2025-11-25 16:53:03.376241776 +0000 UTC m=+1.183099302 container remove 08e21738cccdf92dab9959af76319926d3bcbe7de216f6002215a9963ead341f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lewin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 16:53:03 compute-0 sudo[365240]: pam_unix(sudo:session): session closed for user root
Nov 25 16:53:03 compute-0 systemd[1]: libpod-conmon-08e21738cccdf92dab9959af76319926d3bcbe7de216f6002215a9963ead341f.scope: Deactivated successfully.
Nov 25 16:53:03 compute-0 sudo[365402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:53:03 compute-0 sudo[365402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:53:03 compute-0 sudo[365402]: pam_unix(sudo:session): session closed for user root
Nov 25 16:53:03 compute-0 sudo[365427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:53:03 compute-0 sudo[365427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:53:03 compute-0 sudo[365427]: pam_unix(sudo:session): session closed for user root
Nov 25 16:53:03 compute-0 sudo[365452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:53:03 compute-0 sudo[365452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:53:03 compute-0 sudo[365452]: pam_unix(sudo:session): session closed for user root
Nov 25 16:53:03 compute-0 sudo[365477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 16:53:03 compute-0 sudo[365477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:53:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2077: 321 pgs: 321 active+clean; 293 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 6.0 MiB/s wr, 256 op/s
Nov 25 16:53:04 compute-0 nova_compute[254092]: 2025-11-25 16:53:04.000 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:04 compute-0 podman[365539]: 2025-11-25 16:53:04.046015658 +0000 UTC m=+0.043620777 container create cfee26adaafeb718dd655403a9094e789990c83f7e1c53873869080413f40967 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_swanson, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 16:53:04 compute-0 systemd[1]: Started libpod-conmon-cfee26adaafeb718dd655403a9094e789990c83f7e1c53873869080413f40967.scope.
Nov 25 16:53:04 compute-0 podman[365539]: 2025-11-25 16:53:04.025401367 +0000 UTC m=+0.023006516 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:53:04 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:53:04 compute-0 podman[365539]: 2025-11-25 16:53:04.144500944 +0000 UTC m=+0.142106093 container init cfee26adaafeb718dd655403a9094e789990c83f7e1c53873869080413f40967 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_swanson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 16:53:04 compute-0 podman[365539]: 2025-11-25 16:53:04.15537759 +0000 UTC m=+0.152982719 container start cfee26adaafeb718dd655403a9094e789990c83f7e1c53873869080413f40967 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 16:53:04 compute-0 podman[365539]: 2025-11-25 16:53:04.158263168 +0000 UTC m=+0.155868307 container attach cfee26adaafeb718dd655403a9094e789990c83f7e1c53873869080413f40967 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 16:53:04 compute-0 lucid_swanson[365555]: 167 167
Nov 25 16:53:04 compute-0 systemd[1]: libpod-cfee26adaafeb718dd655403a9094e789990c83f7e1c53873869080413f40967.scope: Deactivated successfully.
Nov 25 16:53:04 compute-0 podman[365539]: 2025-11-25 16:53:04.164316803 +0000 UTC m=+0.161921922 container died cfee26adaafeb718dd655403a9094e789990c83f7e1c53873869080413f40967 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 16:53:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-b827e6d7366a66443a4239c054e21080e50c9fcb514b12eb308b3f57b3a0e621-merged.mount: Deactivated successfully.
Nov 25 16:53:04 compute-0 podman[365539]: 2025-11-25 16:53:04.196000103 +0000 UTC m=+0.193605222 container remove cfee26adaafeb718dd655403a9094e789990c83f7e1c53873869080413f40967 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_swanson, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 16:53:04 compute-0 systemd[1]: libpod-conmon-cfee26adaafeb718dd655403a9094e789990c83f7e1c53873869080413f40967.scope: Deactivated successfully.
Nov 25 16:53:04 compute-0 ceph-mon[74985]: pgmap v2077: 321 pgs: 321 active+clean; 293 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 6.0 MiB/s wr, 256 op/s
Nov 25 16:53:04 compute-0 podman[365579]: 2025-11-25 16:53:04.401088366 +0000 UTC m=+0.052614420 container create 20add273b176c7573d4ff33be0485363407305056f8056ef1db8c8acccdec476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Nov 25 16:53:04 compute-0 systemd[1]: Started libpod-conmon-20add273b176c7573d4ff33be0485363407305056f8056ef1db8c8acccdec476.scope.
Nov 25 16:53:04 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:53:04 compute-0 podman[365579]: 2025-11-25 16:53:04.369420916 +0000 UTC m=+0.020946980 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:53:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34c515c88ee897ed640cc5c3e567457178a3dacf0acbae92dab25d0dadec5edb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:53:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34c515c88ee897ed640cc5c3e567457178a3dacf0acbae92dab25d0dadec5edb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:53:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34c515c88ee897ed640cc5c3e567457178a3dacf0acbae92dab25d0dadec5edb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:53:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34c515c88ee897ed640cc5c3e567457178a3dacf0acbae92dab25d0dadec5edb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:53:04 compute-0 podman[365579]: 2025-11-25 16:53:04.504802235 +0000 UTC m=+0.156328279 container init 20add273b176c7573d4ff33be0485363407305056f8056ef1db8c8acccdec476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_albattani, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:53:04 compute-0 podman[365579]: 2025-11-25 16:53:04.510808358 +0000 UTC m=+0.162334402 container start 20add273b176c7573d4ff33be0485363407305056f8056ef1db8c8acccdec476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:53:04 compute-0 podman[365579]: 2025-11-25 16:53:04.514755066 +0000 UTC m=+0.166281110 container attach 20add273b176c7573d4ff33be0485363407305056f8056ef1db8c8acccdec476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_albattani, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:53:04 compute-0 nova_compute[254092]: 2025-11-25 16:53:04.571 254096 INFO nova.compute.manager [None req-68b851e4-ca67-4fdb-b56e-9a88865ff40a 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Unpausing
Nov 25 16:53:04 compute-0 nova_compute[254092]: 2025-11-25 16:53:04.572 254096 DEBUG nova.objects.instance [None req-68b851e4-ca67-4fdb-b56e-9a88865ff40a 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lazy-loading 'flavor' on Instance uuid 6d70119e-e45b-4a12-893e-8d5a805ca8ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:53:04 compute-0 nova_compute[254092]: 2025-11-25 16:53:04.601 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089584.6014717, 6d70119e-e45b-4a12-893e-8d5a805ca8ab => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:53:04 compute-0 nova_compute[254092]: 2025-11-25 16:53:04.602 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] VM Resumed (Lifecycle Event)
Nov 25 16:53:04 compute-0 virtqemud[253880]: argument unsupported: QEMU guest agent is not configured
Nov 25 16:53:04 compute-0 nova_compute[254092]: 2025-11-25 16:53:04.605 254096 DEBUG nova.virt.libvirt.guest [None req-68b851e4-ca67-4fdb-b56e-9a88865ff40a 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Nov 25 16:53:04 compute-0 nova_compute[254092]: 2025-11-25 16:53:04.605 254096 DEBUG nova.compute.manager [None req-68b851e4-ca67-4fdb-b56e-9a88865ff40a 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:53:04 compute-0 nova_compute[254092]: 2025-11-25 16:53:04.622 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:53:04 compute-0 nova_compute[254092]: 2025-11-25 16:53:04.625 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:53:04 compute-0 nova_compute[254092]: 2025-11-25 16:53:04.647 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] During sync_power_state the instance has a pending task (unpausing). Skip.
Nov 25 16:53:04 compute-0 nova_compute[254092]: 2025-11-25 16:53:04.711 254096 DEBUG nova.network.neutron [req-dd8bf3b0-00a7-4f0e-9e6a-4b0c65cb8178 req-f57a5443-e377-460d-b3b2-12a288767197 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Updated VIF entry in instance network info cache for port 1ef97ad6-0798-4b3d-a9cc-562f9526ae38. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:53:04 compute-0 nova_compute[254092]: 2025-11-25 16:53:04.712 254096 DEBUG nova.network.neutron [req-dd8bf3b0-00a7-4f0e-9e6a-4b0c65cb8178 req-f57a5443-e377-460d-b3b2-12a288767197 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Updating instance_info_cache with network_info: [{"id": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "address": "fa:16:3e:5e:77:80", "network": {"id": "aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1", "bridge": "br-int", "label": "tempest-network-smoke--2125799662", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ef97ad6-07", "ovs_interfaceid": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:53:04 compute-0 nova_compute[254092]: 2025-11-25 16:53:04.733 254096 DEBUG oslo_concurrency.lockutils [req-dd8bf3b0-00a7-4f0e-9e6a-4b0c65cb8178 req-f57a5443-e377-460d-b3b2-12a288767197 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-b12625ea-31bf-4599-a248-4c6ced8e59c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]: {
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:     "0": [
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:         {
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:             "devices": [
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:                 "/dev/loop3"
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:             ],
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:             "lv_name": "ceph_lv0",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:             "lv_size": "21470642176",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:             "name": "ceph_lv0",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:             "tags": {
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:                 "ceph.cluster_name": "ceph",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:                 "ceph.crush_device_class": "",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:                 "ceph.encrypted": "0",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:                 "ceph.osd_id": "0",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:                 "ceph.type": "block",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:                 "ceph.vdo": "0"
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:             },
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:             "type": "block",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:             "vg_name": "ceph_vg0"
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:         }
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:     ],
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:     "1": [
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:         {
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:             "devices": [
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:                 "/dev/loop4"
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:             ],
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:             "lv_name": "ceph_lv1",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:             "lv_size": "21470642176",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:             "name": "ceph_lv1",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:             "tags": {
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:                 "ceph.cluster_name": "ceph",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:                 "ceph.crush_device_class": "",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:                 "ceph.encrypted": "0",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:                 "ceph.osd_id": "1",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:                 "ceph.type": "block",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:                 "ceph.vdo": "0"
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:             },
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:             "type": "block",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:             "vg_name": "ceph_vg1"
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:         }
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:     ],
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:     "2": [
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:         {
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:             "devices": [
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:                 "/dev/loop5"
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:             ],
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:             "lv_name": "ceph_lv2",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:             "lv_size": "21470642176",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:             "name": "ceph_lv2",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:             "tags": {
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:                 "ceph.cluster_name": "ceph",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:                 "ceph.crush_device_class": "",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:                 "ceph.encrypted": "0",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:                 "ceph.osd_id": "2",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:                 "ceph.type": "block",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:                 "ceph.vdo": "0"
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:             },
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:             "type": "block",
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:             "vg_name": "ceph_vg2"
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:         }
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]:     ]
Nov 25 16:53:05 compute-0 suspicious_albattani[365595]: }
Nov 25 16:53:05 compute-0 systemd[1]: libpod-20add273b176c7573d4ff33be0485363407305056f8056ef1db8c8acccdec476.scope: Deactivated successfully.
Nov 25 16:53:05 compute-0 podman[365579]: 2025-11-25 16:53:05.305406361 +0000 UTC m=+0.956932405 container died 20add273b176c7573d4ff33be0485363407305056f8056ef1db8c8acccdec476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:53:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-34c515c88ee897ed640cc5c3e567457178a3dacf0acbae92dab25d0dadec5edb-merged.mount: Deactivated successfully.
Nov 25 16:53:05 compute-0 podman[365579]: 2025-11-25 16:53:05.363022578 +0000 UTC m=+1.014548622 container remove 20add273b176c7573d4ff33be0485363407305056f8056ef1db8c8acccdec476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 16:53:05 compute-0 systemd[1]: libpod-conmon-20add273b176c7573d4ff33be0485363407305056f8056ef1db8c8acccdec476.scope: Deactivated successfully.
Nov 25 16:53:05 compute-0 sudo[365477]: pam_unix(sudo:session): session closed for user root
Nov 25 16:53:05 compute-0 sudo[365617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:53:05 compute-0 sudo[365617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:53:05 compute-0 sudo[365617]: pam_unix(sudo:session): session closed for user root
Nov 25 16:53:05 compute-0 sudo[365642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:53:05 compute-0 sudo[365642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:53:05 compute-0 sudo[365642]: pam_unix(sudo:session): session closed for user root
Nov 25 16:53:05 compute-0 nova_compute[254092]: 2025-11-25 16:53:05.568 254096 DEBUG oslo_concurrency.lockutils [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Acquiring lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:53:05 compute-0 nova_compute[254092]: 2025-11-25 16:53:05.568 254096 DEBUG oslo_concurrency.lockutils [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:53:05 compute-0 nova_compute[254092]: 2025-11-25 16:53:05.569 254096 DEBUG oslo_concurrency.lockutils [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Acquiring lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:53:05 compute-0 nova_compute[254092]: 2025-11-25 16:53:05.569 254096 DEBUG oslo_concurrency.lockutils [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:53:05 compute-0 nova_compute[254092]: 2025-11-25 16:53:05.569 254096 DEBUG oslo_concurrency.lockutils [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:53:05 compute-0 nova_compute[254092]: 2025-11-25 16:53:05.571 254096 INFO nova.compute.manager [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Terminating instance
Nov 25 16:53:05 compute-0 nova_compute[254092]: 2025-11-25 16:53:05.572 254096 DEBUG nova.compute.manager [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:53:05 compute-0 sudo[365667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:53:05 compute-0 sudo[365667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:53:05 compute-0 sudo[365667]: pam_unix(sudo:session): session closed for user root
Nov 25 16:53:05 compute-0 kernel: tapa74c23e6-40 (unregistering): left promiscuous mode
Nov 25 16:53:05 compute-0 NetworkManager[48891]: <info>  [1764089585.6228] device (tapa74c23e6-40): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:53:05 compute-0 sudo[365692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 16:53:05 compute-0 nova_compute[254092]: 2025-11-25 16:53:05.681 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:05 compute-0 ovn_controller[153477]: 2025-11-25T16:53:05Z|01103|binding|INFO|Releasing lport a74c23e6-4075-4b59-b36f-9d06bad062d2 from this chassis (sb_readonly=0)
Nov 25 16:53:05 compute-0 ovn_controller[153477]: 2025-11-25T16:53:05Z|01104|binding|INFO|Setting lport a74c23e6-4075-4b59-b36f-9d06bad062d2 down in Southbound
Nov 25 16:53:05 compute-0 ovn_controller[153477]: 2025-11-25T16:53:05Z|01105|binding|INFO|Removing iface tapa74c23e6-40 ovn-installed in OVS
Nov 25 16:53:05 compute-0 nova_compute[254092]: 2025-11-25 16:53:05.684 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:05 compute-0 sudo[365692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:53:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:05.692 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:6e:80 10.100.0.14'], port_security=['fa:16:3e:09:6e:80 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '33a2e508e63149889f0d5d945726522c', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'f668b193-ad4a-4e17-9130-f237a989e2f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fa74b059-0bab-4ac8-ad4f-59f5f4ae171b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=a74c23e6-4075-4b59-b36f-9d06bad062d2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:53:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:05.693 163338 INFO neutron.agent.ovn.metadata.agent [-] Port a74c23e6-4075-4b59-b36f-9d06bad062d2 in datapath cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e unbound from our chassis
Nov 25 16:53:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:05.694 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e
Nov 25 16:53:05 compute-0 nova_compute[254092]: 2025-11-25 16:53:05.699 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:05 compute-0 systemd[1]: machine-qemu\x2d139\x2dinstance\x2d0000006c.scope: Deactivated successfully.
Nov 25 16:53:05 compute-0 systemd[1]: machine-qemu\x2d139\x2dinstance\x2d0000006c.scope: Consumed 5.752s CPU time.
Nov 25 16:53:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:05.718 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[61496df6-a8f4-4a18-bd73-dced9b50c6c9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:53:05 compute-0 systemd-machined[216343]: Machine qemu-139-instance-0000006c terminated.
Nov 25 16:53:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:05.750 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f7a43505-2f75-41b4-80d4-8e25967677b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:53:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:05.753 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[72d79081-513a-457f-9a2a-13367c58bbf2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:53:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:05.792 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[dc7a693d-7d69-4c81-b61c-ae1bb2f37176]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:53:05 compute-0 nova_compute[254092]: 2025-11-25 16:53:05.795 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:05 compute-0 nova_compute[254092]: 2025-11-25 16:53:05.799 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:05 compute-0 nova_compute[254092]: 2025-11-25 16:53:05.811 254096 INFO nova.virt.libvirt.driver [-] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Instance destroyed successfully.
Nov 25 16:53:05 compute-0 nova_compute[254092]: 2025-11-25 16:53:05.812 254096 DEBUG nova.objects.instance [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lazy-loading 'resources' on Instance uuid 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:53:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:05.817 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[65adf9e9-029c-4782-9334-4aebb376abd0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcf0cb5b9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:cc:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 319], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 601458, 'reachable_time': 32191, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 365734, 'error': None, 'target': 'ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:53:05 compute-0 nova_compute[254092]: 2025-11-25 16:53:05.825 254096 DEBUG nova.virt.libvirt.vif [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:52:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-2043816774',display_name='tempest-ServerRescueNegativeTestJSON-server-2043816774',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-2043816774',id=108,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:53:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='33a2e508e63149889f0d5d945726522c',ramdisk_id='',reservation_id='r-6exq9cfa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mo
del='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1769565225',owner_user_name='tempest-ServerRescueNegativeTestJSON-1769565225-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:53:00Z,user_data=None,user_id='9deaf2356cda4c0cb2a52383b7f2e609',uuid=1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "address": "fa:16:3e:09:6e:80", "network": {"id": "cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33a2e508e63149889f0d5d945726522c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa74c23e6-40", "ovs_interfaceid": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:53:05 compute-0 nova_compute[254092]: 2025-11-25 16:53:05.825 254096 DEBUG nova.network.os_vif_util [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Converting VIF {"id": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "address": "fa:16:3e:09:6e:80", "network": {"id": "cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33a2e508e63149889f0d5d945726522c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa74c23e6-40", "ovs_interfaceid": "a74c23e6-4075-4b59-b36f-9d06bad062d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:53:05 compute-0 nova_compute[254092]: 2025-11-25 16:53:05.826 254096 DEBUG nova.network.os_vif_util [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:09:6e:80,bridge_name='br-int',has_traffic_filtering=True,id=a74c23e6-4075-4b59-b36f-9d06bad062d2,network=Network(cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa74c23e6-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:53:05 compute-0 nova_compute[254092]: 2025-11-25 16:53:05.827 254096 DEBUG os_vif [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:09:6e:80,bridge_name='br-int',has_traffic_filtering=True,id=a74c23e6-4075-4b59-b36f-9d06bad062d2,network=Network(cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa74c23e6-40') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:53:05 compute-0 nova_compute[254092]: 2025-11-25 16:53:05.828 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:05 compute-0 nova_compute[254092]: 2025-11-25 16:53:05.829 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa74c23e6-40, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:53:05 compute-0 nova_compute[254092]: 2025-11-25 16:53:05.832 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:05 compute-0 nova_compute[254092]: 2025-11-25 16:53:05.836 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:53:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:05.840 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f8ec8815-eb21-4aa4-8cdc-794dc3eb09ce]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcf0cb5b9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 601471, 'tstamp': 601471}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 365742, 'error': None, 'target': 'ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcf0cb5b9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 601474, 'tstamp': 601474}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 365742, 'error': None, 'target': 'ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:53:05 compute-0 nova_compute[254092]: 2025-11-25 16:53:05.841 254096 INFO os_vif [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:09:6e:80,bridge_name='br-int',has_traffic_filtering=True,id=a74c23e6-4075-4b59-b36f-9d06bad062d2,network=Network(cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa74c23e6-40')
Nov 25 16:53:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:05.842 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf0cb5b9-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:53:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:05.844 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcf0cb5b9-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:53:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:05.844 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:53:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:05.845 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcf0cb5b9-c0, col_values=(('external_ids', {'iface-id': '04d240b5-2178-4712-9ede-6d58532785de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:53:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:05.845 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:53:05 compute-0 nova_compute[254092]: 2025-11-25 16:53:05.861 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:05 compute-0 nova_compute[254092]: 2025-11-25 16:53:05.868 254096 DEBUG nova.compute.manager [req-09736eb0-ba89-49f7-8275-a04862958651 req-70db2fbe-79f6-4c49-8685-a9894bd857ab a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Received event network-vif-unplugged-a74c23e6-4075-4b59-b36f-9d06bad062d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:53:05 compute-0 nova_compute[254092]: 2025-11-25 16:53:05.868 254096 DEBUG oslo_concurrency.lockutils [req-09736eb0-ba89-49f7-8275-a04862958651 req-70db2fbe-79f6-4c49-8685-a9894bd857ab a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:53:05 compute-0 nova_compute[254092]: 2025-11-25 16:53:05.869 254096 DEBUG oslo_concurrency.lockutils [req-09736eb0-ba89-49f7-8275-a04862958651 req-70db2fbe-79f6-4c49-8685-a9894bd857ab a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:53:05 compute-0 nova_compute[254092]: 2025-11-25 16:53:05.869 254096 DEBUG oslo_concurrency.lockutils [req-09736eb0-ba89-49f7-8275-a04862958651 req-70db2fbe-79f6-4c49-8685-a9894bd857ab a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:53:05 compute-0 nova_compute[254092]: 2025-11-25 16:53:05.870 254096 DEBUG nova.compute.manager [req-09736eb0-ba89-49f7-8275-a04862958651 req-70db2fbe-79f6-4c49-8685-a9894bd857ab a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] No waiting events found dispatching network-vif-unplugged-a74c23e6-4075-4b59-b36f-9d06bad062d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:53:05 compute-0 nova_compute[254092]: 2025-11-25 16:53:05.870 254096 DEBUG nova.compute.manager [req-09736eb0-ba89-49f7-8275-a04862958651 req-70db2fbe-79f6-4c49-8685-a9894bd857ab a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Received event network-vif-unplugged-a74c23e6-4075-4b59-b36f-9d06bad062d2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:53:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2078: 321 pgs: 321 active+clean; 293 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 3.5 MiB/s wr, 247 op/s
Nov 25 16:53:06 compute-0 podman[365798]: 2025-11-25 16:53:06.094595838 +0000 UTC m=+0.055229102 container create acf6da4ae9b4eefe1e1c4d47027d04bebe8c7e24047a61112bb76fe408a09d2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_morse, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 16:53:06 compute-0 systemd[1]: Started libpod-conmon-acf6da4ae9b4eefe1e1c4d47027d04bebe8c7e24047a61112bb76fe408a09d2e.scope.
Nov 25 16:53:06 compute-0 podman[365798]: 2025-11-25 16:53:06.07295667 +0000 UTC m=+0.033589944 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:53:06 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:53:06 compute-0 podman[365798]: 2025-11-25 16:53:06.188835679 +0000 UTC m=+0.149468953 container init acf6da4ae9b4eefe1e1c4d47027d04bebe8c7e24047a61112bb76fe408a09d2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_morse, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:53:06 compute-0 podman[365798]: 2025-11-25 16:53:06.197213886 +0000 UTC m=+0.157847160 container start acf6da4ae9b4eefe1e1c4d47027d04bebe8c7e24047a61112bb76fe408a09d2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:53:06 compute-0 elastic_morse[365816]: 167 167
Nov 25 16:53:06 compute-0 systemd[1]: libpod-acf6da4ae9b4eefe1e1c4d47027d04bebe8c7e24047a61112bb76fe408a09d2e.scope: Deactivated successfully.
Nov 25 16:53:06 compute-0 podman[365798]: 2025-11-25 16:53:06.204207427 +0000 UTC m=+0.164840711 container attach acf6da4ae9b4eefe1e1c4d47027d04bebe8c7e24047a61112bb76fe408a09d2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 25 16:53:06 compute-0 podman[365798]: 2025-11-25 16:53:06.205911153 +0000 UTC m=+0.166544407 container died acf6da4ae9b4eefe1e1c4d47027d04bebe8c7e24047a61112bb76fe408a09d2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_morse, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 16:53:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb01c780be59edbb774d83770dd6f2351374c66bb6ada7f6a3e1f882f514ba3e-merged.mount: Deactivated successfully.
Nov 25 16:53:06 compute-0 podman[365798]: 2025-11-25 16:53:06.249899198 +0000 UTC m=+0.210532452 container remove acf6da4ae9b4eefe1e1c4d47027d04bebe8c7e24047a61112bb76fe408a09d2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:53:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:53:06 compute-0 systemd[1]: libpod-conmon-acf6da4ae9b4eefe1e1c4d47027d04bebe8c7e24047a61112bb76fe408a09d2e.scope: Deactivated successfully.
Nov 25 16:53:06 compute-0 podman[365840]: 2025-11-25 16:53:06.496447538 +0000 UTC m=+0.074492375 container create c58518d48b5b92d196f33df4325177eaa2ac9132c97d4aaba9b8b05514a9d58f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 16:53:06 compute-0 systemd[1]: Started libpod-conmon-c58518d48b5b92d196f33df4325177eaa2ac9132c97d4aaba9b8b05514a9d58f.scope.
Nov 25 16:53:06 compute-0 podman[365840]: 2025-11-25 16:53:06.473360231 +0000 UTC m=+0.051405098 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:53:06 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:53:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a830f6887257eacf81bd88f9b4baf90b297a18c700c8ac3858e2cb5e42d3480/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:53:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a830f6887257eacf81bd88f9b4baf90b297a18c700c8ac3858e2cb5e42d3480/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:53:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a830f6887257eacf81bd88f9b4baf90b297a18c700c8ac3858e2cb5e42d3480/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:53:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a830f6887257eacf81bd88f9b4baf90b297a18c700c8ac3858e2cb5e42d3480/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:53:06 compute-0 podman[365840]: 2025-11-25 16:53:06.589693193 +0000 UTC m=+0.167738060 container init c58518d48b5b92d196f33df4325177eaa2ac9132c97d4aaba9b8b05514a9d58f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_gates, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 16:53:06 compute-0 podman[365840]: 2025-11-25 16:53:06.595323426 +0000 UTC m=+0.173368263 container start c58518d48b5b92d196f33df4325177eaa2ac9132c97d4aaba9b8b05514a9d58f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_gates, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 16:53:06 compute-0 podman[365840]: 2025-11-25 16:53:06.59773354 +0000 UTC m=+0.175778417 container attach c58518d48b5b92d196f33df4325177eaa2ac9132c97d4aaba9b8b05514a9d58f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:53:06 compute-0 nova_compute[254092]: 2025-11-25 16:53:06.611 254096 INFO nova.virt.libvirt.driver [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Deleting instance files /var/lib/nova/instances/1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_del
Nov 25 16:53:06 compute-0 nova_compute[254092]: 2025-11-25 16:53:06.614 254096 INFO nova.virt.libvirt.driver [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Deletion of /var/lib/nova/instances/1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5_del complete
Nov 25 16:53:06 compute-0 nova_compute[254092]: 2025-11-25 16:53:06.662 254096 INFO nova.compute.manager [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Took 1.09 seconds to destroy the instance on the hypervisor.
Nov 25 16:53:06 compute-0 nova_compute[254092]: 2025-11-25 16:53:06.662 254096 DEBUG oslo.service.loopingcall [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:53:06 compute-0 nova_compute[254092]: 2025-11-25 16:53:06.663 254096 DEBUG nova.compute.manager [-] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:53:06 compute-0 nova_compute[254092]: 2025-11-25 16:53:06.663 254096 DEBUG nova.network.neutron [-] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:53:06 compute-0 ceph-mon[74985]: pgmap v2078: 321 pgs: 321 active+clean; 293 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 3.5 MiB/s wr, 247 op/s
Nov 25 16:53:07 compute-0 cool_gates[365857]: {
Nov 25 16:53:07 compute-0 cool_gates[365857]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 16:53:07 compute-0 cool_gates[365857]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:53:07 compute-0 cool_gates[365857]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 16:53:07 compute-0 cool_gates[365857]:         "osd_id": 1,
Nov 25 16:53:07 compute-0 cool_gates[365857]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:53:07 compute-0 cool_gates[365857]:         "type": "bluestore"
Nov 25 16:53:07 compute-0 cool_gates[365857]:     },
Nov 25 16:53:07 compute-0 cool_gates[365857]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 16:53:07 compute-0 cool_gates[365857]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:53:07 compute-0 cool_gates[365857]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 16:53:07 compute-0 cool_gates[365857]:         "osd_id": 2,
Nov 25 16:53:07 compute-0 cool_gates[365857]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:53:07 compute-0 cool_gates[365857]:         "type": "bluestore"
Nov 25 16:53:07 compute-0 cool_gates[365857]:     },
Nov 25 16:53:07 compute-0 cool_gates[365857]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 16:53:07 compute-0 cool_gates[365857]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:53:07 compute-0 cool_gates[365857]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 16:53:07 compute-0 cool_gates[365857]:         "osd_id": 0,
Nov 25 16:53:07 compute-0 cool_gates[365857]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:53:07 compute-0 cool_gates[365857]:         "type": "bluestore"
Nov 25 16:53:07 compute-0 cool_gates[365857]:     }
Nov 25 16:53:07 compute-0 cool_gates[365857]: }
Nov 25 16:53:07 compute-0 systemd[1]: libpod-c58518d48b5b92d196f33df4325177eaa2ac9132c97d4aaba9b8b05514a9d58f.scope: Deactivated successfully.
Nov 25 16:53:07 compute-0 podman[365840]: 2025-11-25 16:53:07.53420906 +0000 UTC m=+1.112253897 container died c58518d48b5b92d196f33df4325177eaa2ac9132c97d4aaba9b8b05514a9d58f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_gates, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:53:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a830f6887257eacf81bd88f9b4baf90b297a18c700c8ac3858e2cb5e42d3480-merged.mount: Deactivated successfully.
Nov 25 16:53:07 compute-0 nova_compute[254092]: 2025-11-25 16:53:07.580 254096 DEBUG nova.network.neutron [-] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:53:07 compute-0 podman[365840]: 2025-11-25 16:53:07.587903999 +0000 UTC m=+1.165948836 container remove c58518d48b5b92d196f33df4325177eaa2ac9132c97d4aaba9b8b05514a9d58f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_gates, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 16:53:07 compute-0 nova_compute[254092]: 2025-11-25 16:53:07.596 254096 INFO nova.compute.manager [-] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Took 0.93 seconds to deallocate network for instance.
Nov 25 16:53:07 compute-0 systemd[1]: libpod-conmon-c58518d48b5b92d196f33df4325177eaa2ac9132c97d4aaba9b8b05514a9d58f.scope: Deactivated successfully.
Nov 25 16:53:07 compute-0 sudo[365692]: pam_unix(sudo:session): session closed for user root
Nov 25 16:53:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:53:07 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:53:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:53:07 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:53:07 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 11d8f14a-7314-47cd-9d67-5a9873ae493c does not exist
Nov 25 16:53:07 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev e1c9161f-6687-4904-8d0c-802c95c201f6 does not exist
Nov 25 16:53:07 compute-0 nova_compute[254092]: 2025-11-25 16:53:07.657 254096 DEBUG oslo_concurrency.lockutils [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:53:07 compute-0 nova_compute[254092]: 2025-11-25 16:53:07.657 254096 DEBUG oslo_concurrency.lockutils [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:53:07 compute-0 sudo[365901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:53:07 compute-0 sudo[365901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:53:07 compute-0 sudo[365901]: pam_unix(sudo:session): session closed for user root
Nov 25 16:53:07 compute-0 sudo[365926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 16:53:07 compute-0 sudo[365926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:53:07 compute-0 sudo[365926]: pam_unix(sudo:session): session closed for user root
Nov 25 16:53:07 compute-0 nova_compute[254092]: 2025-11-25 16:53:07.751 254096 DEBUG nova.compute.manager [req-6204caa5-cff5-433b-9381-be8c8d6412a4 req-30f65e21-e4b9-4b83-a76c-674ae8e9cc37 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Received event network-vif-deleted-a74c23e6-4075-4b59-b36f-9d06bad062d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:53:07 compute-0 nova_compute[254092]: 2025-11-25 16:53:07.773 254096 DEBUG oslo_concurrency.processutils [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:53:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2079: 321 pgs: 321 active+clean; 293 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.9 MiB/s wr, 178 op/s
Nov 25 16:53:08 compute-0 nova_compute[254092]: 2025-11-25 16:53:08.022 254096 DEBUG nova.compute.manager [req-36ef8867-86c3-450a-90e4-765f66d12877 req-1172a051-699b-4a3e-936c-a64eb019c1a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Received event network-vif-plugged-a74c23e6-4075-4b59-b36f-9d06bad062d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:53:08 compute-0 nova_compute[254092]: 2025-11-25 16:53:08.022 254096 DEBUG oslo_concurrency.lockutils [req-36ef8867-86c3-450a-90e4-765f66d12877 req-1172a051-699b-4a3e-936c-a64eb019c1a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:53:08 compute-0 nova_compute[254092]: 2025-11-25 16:53:08.023 254096 DEBUG oslo_concurrency.lockutils [req-36ef8867-86c3-450a-90e4-765f66d12877 req-1172a051-699b-4a3e-936c-a64eb019c1a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:53:08 compute-0 nova_compute[254092]: 2025-11-25 16:53:08.023 254096 DEBUG oslo_concurrency.lockutils [req-36ef8867-86c3-450a-90e4-765f66d12877 req-1172a051-699b-4a3e-936c-a64eb019c1a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:53:08 compute-0 nova_compute[254092]: 2025-11-25 16:53:08.023 254096 DEBUG nova.compute.manager [req-36ef8867-86c3-450a-90e4-765f66d12877 req-1172a051-699b-4a3e-936c-a64eb019c1a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] No waiting events found dispatching network-vif-plugged-a74c23e6-4075-4b59-b36f-9d06bad062d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:53:08 compute-0 nova_compute[254092]: 2025-11-25 16:53:08.024 254096 WARNING nova.compute.manager [req-36ef8867-86c3-450a-90e4-765f66d12877 req-1172a051-699b-4a3e-936c-a64eb019c1a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Received unexpected event network-vif-plugged-a74c23e6-4075-4b59-b36f-9d06bad062d2 for instance with vm_state deleted and task_state None.
Nov 25 16:53:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:53:08 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1861149236' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:53:08 compute-0 nova_compute[254092]: 2025-11-25 16:53:08.223 254096 DEBUG oslo_concurrency.processutils [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:53:08 compute-0 nova_compute[254092]: 2025-11-25 16:53:08.230 254096 DEBUG nova.compute.provider_tree [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:53:08 compute-0 nova_compute[254092]: 2025-11-25 16:53:08.246 254096 DEBUG nova.scheduler.client.report [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:53:08 compute-0 nova_compute[254092]: 2025-11-25 16:53:08.272 254096 DEBUG oslo_concurrency.lockutils [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:53:08 compute-0 nova_compute[254092]: 2025-11-25 16:53:08.329 254096 INFO nova.scheduler.client.report [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Deleted allocations for instance 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5
Nov 25 16:53:08 compute-0 nova_compute[254092]: 2025-11-25 16:53:08.461 254096 DEBUG oslo_concurrency.lockutils [None req-6347734c-52d6-4dd8-9874-83cd55568c76 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.892s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:53:08 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:53:08 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:53:08 compute-0 ceph-mon[74985]: pgmap v2079: 321 pgs: 321 active+clean; 293 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.9 MiB/s wr, 178 op/s
Nov 25 16:53:08 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1861149236' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:53:09 compute-0 nova_compute[254092]: 2025-11-25 16:53:09.003 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:09 compute-0 nova_compute[254092]: 2025-11-25 16:53:09.103 254096 DEBUG oslo_concurrency.lockutils [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Acquiring lock "6d70119e-e45b-4a12-893e-8d5a805ca8ab" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:53:09 compute-0 nova_compute[254092]: 2025-11-25 16:53:09.103 254096 DEBUG oslo_concurrency.lockutils [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "6d70119e-e45b-4a12-893e-8d5a805ca8ab" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:53:09 compute-0 nova_compute[254092]: 2025-11-25 16:53:09.104 254096 DEBUG oslo_concurrency.lockutils [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Acquiring lock "6d70119e-e45b-4a12-893e-8d5a805ca8ab-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:53:09 compute-0 nova_compute[254092]: 2025-11-25 16:53:09.104 254096 DEBUG oslo_concurrency.lockutils [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "6d70119e-e45b-4a12-893e-8d5a805ca8ab-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:53:09 compute-0 nova_compute[254092]: 2025-11-25 16:53:09.104 254096 DEBUG oslo_concurrency.lockutils [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "6d70119e-e45b-4a12-893e-8d5a805ca8ab-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:53:09 compute-0 nova_compute[254092]: 2025-11-25 16:53:09.105 254096 INFO nova.compute.manager [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Terminating instance
Nov 25 16:53:09 compute-0 nova_compute[254092]: 2025-11-25 16:53:09.106 254096 DEBUG nova.compute.manager [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:53:09 compute-0 kernel: tap923a00bb-da (unregistering): left promiscuous mode
Nov 25 16:53:09 compute-0 NetworkManager[48891]: <info>  [1764089589.1589] device (tap923a00bb-da): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:53:09 compute-0 nova_compute[254092]: 2025-11-25 16:53:09.169 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:09 compute-0 ovn_controller[153477]: 2025-11-25T16:53:09Z|01106|binding|INFO|Releasing lport 923a00bb-da3b-434a-b154-c338c92e0635 from this chassis (sb_readonly=0)
Nov 25 16:53:09 compute-0 ovn_controller[153477]: 2025-11-25T16:53:09Z|01107|binding|INFO|Setting lport 923a00bb-da3b-434a-b154-c338c92e0635 down in Southbound
Nov 25 16:53:09 compute-0 ovn_controller[153477]: 2025-11-25T16:53:09Z|01108|binding|INFO|Removing iface tap923a00bb-da ovn-installed in OVS
Nov 25 16:53:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:09.178 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4e:17:31 10.100.0.13'], port_security=['fa:16:3e:4e:17:31 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '6d70119e-e45b-4a12-893e-8d5a805ca8ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '33a2e508e63149889f0d5d945726522c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f668b193-ad4a-4e17-9130-f237a989e2f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fa74b059-0bab-4ac8-ad4f-59f5f4ae171b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=923a00bb-da3b-434a-b154-c338c92e0635) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:53:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:09.179 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 923a00bb-da3b-434a-b154-c338c92e0635 in datapath cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e unbound from our chassis
Nov 25 16:53:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:09.180 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:53:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:09.181 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ba6b7b01-b6e8-4035-a964-f02be84dfb50]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:53:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:09.182 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e namespace which is not needed anymore
Nov 25 16:53:09 compute-0 nova_compute[254092]: 2025-11-25 16:53:09.190 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:09 compute-0 systemd[1]: machine-qemu\x2d136\x2dinstance\x2d0000006b.scope: Deactivated successfully.
Nov 25 16:53:09 compute-0 systemd[1]: machine-qemu\x2d136\x2dinstance\x2d0000006b.scope: Consumed 13.951s CPU time.
Nov 25 16:53:09 compute-0 systemd-machined[216343]: Machine qemu-136-instance-0000006b terminated.
Nov 25 16:53:09 compute-0 sshd-session[365973]: userauth_pubkey: signature algorithm ssh-rsa not in PubkeyAcceptedAlgorithms [preauth]
Nov 25 16:53:09 compute-0 neutron-haproxy-ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e[364045]: [NOTICE]   (364049) : haproxy version is 2.8.14-c23fe91
Nov 25 16:53:09 compute-0 neutron-haproxy-ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e[364045]: [NOTICE]   (364049) : path to executable is /usr/sbin/haproxy
Nov 25 16:53:09 compute-0 neutron-haproxy-ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e[364045]: [WARNING]  (364049) : Exiting Master process...
Nov 25 16:53:09 compute-0 neutron-haproxy-ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e[364045]: [ALERT]    (364049) : Current worker (364051) exited with code 143 (Terminated)
Nov 25 16:53:09 compute-0 neutron-haproxy-ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e[364045]: [WARNING]  (364049) : All workers exited. Exiting... (0)
Nov 25 16:53:09 compute-0 systemd[1]: libpod-9758ed213608c06fd11fb1ff6275783984c1d6dffd6b7b49825d3f97a73a2bd3.scope: Deactivated successfully.
Nov 25 16:53:09 compute-0 podman[365998]: 2025-11-25 16:53:09.320074771 +0000 UTC m=+0.048120529 container died 9758ed213608c06fd11fb1ff6275783984c1d6dffd6b7b49825d3f97a73a2bd3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 16:53:09 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9758ed213608c06fd11fb1ff6275783984c1d6dffd6b7b49825d3f97a73a2bd3-userdata-shm.mount: Deactivated successfully.
Nov 25 16:53:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a699cce47daa3e0d0478b9596de6cbf41d67cddb37cef82117aeaf5f09e69db-merged.mount: Deactivated successfully.
Nov 25 16:53:09 compute-0 nova_compute[254092]: 2025-11-25 16:53:09.349 254096 INFO nova.virt.libvirt.driver [-] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Instance destroyed successfully.
Nov 25 16:53:09 compute-0 nova_compute[254092]: 2025-11-25 16:53:09.351 254096 DEBUG nova.objects.instance [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lazy-loading 'resources' on Instance uuid 6d70119e-e45b-4a12-893e-8d5a805ca8ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:53:09 compute-0 podman[365998]: 2025-11-25 16:53:09.357518029 +0000 UTC m=+0.085563787 container cleanup 9758ed213608c06fd11fb1ff6275783984c1d6dffd6b7b49825d3f97a73a2bd3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 16:53:09 compute-0 systemd[1]: libpod-conmon-9758ed213608c06fd11fb1ff6275783984c1d6dffd6b7b49825d3f97a73a2bd3.scope: Deactivated successfully.
Nov 25 16:53:09 compute-0 nova_compute[254092]: 2025-11-25 16:53:09.368 254096 DEBUG nova.virt.libvirt.vif [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:52:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-142320588',display_name='tempest-ServerRescueNegativeTestJSON-server-142320588',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-142320588',id=107,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:52:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='33a2e508e63149889f0d5d945726522c',ramdisk_id='',reservation_id='r-wqxsdvnw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model
='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1769565225',owner_user_name='tempest-ServerRescueNegativeTestJSON-1769565225-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:53:04Z,user_data=None,user_id='9deaf2356cda4c0cb2a52383b7f2e609',uuid=6d70119e-e45b-4a12-893e-8d5a805ca8ab,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "923a00bb-da3b-434a-b154-c338c92e0635", "address": "fa:16:3e:4e:17:31", "network": {"id": "cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33a2e508e63149889f0d5d945726522c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923a00bb-da", "ovs_interfaceid": "923a00bb-da3b-434a-b154-c338c92e0635", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:53:09 compute-0 nova_compute[254092]: 2025-11-25 16:53:09.369 254096 DEBUG nova.network.os_vif_util [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Converting VIF {"id": "923a00bb-da3b-434a-b154-c338c92e0635", "address": "fa:16:3e:4e:17:31", "network": {"id": "cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1445011889-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33a2e508e63149889f0d5d945726522c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923a00bb-da", "ovs_interfaceid": "923a00bb-da3b-434a-b154-c338c92e0635", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:53:09 compute-0 nova_compute[254092]: 2025-11-25 16:53:09.370 254096 DEBUG nova.network.os_vif_util [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4e:17:31,bridge_name='br-int',has_traffic_filtering=True,id=923a00bb-da3b-434a-b154-c338c92e0635,network=Network(cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap923a00bb-da') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:53:09 compute-0 nova_compute[254092]: 2025-11-25 16:53:09.371 254096 DEBUG os_vif [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4e:17:31,bridge_name='br-int',has_traffic_filtering=True,id=923a00bb-da3b-434a-b154-c338c92e0635,network=Network(cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap923a00bb-da') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:53:09 compute-0 nova_compute[254092]: 2025-11-25 16:53:09.374 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:09 compute-0 nova_compute[254092]: 2025-11-25 16:53:09.374 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap923a00bb-da, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:53:09 compute-0 nova_compute[254092]: 2025-11-25 16:53:09.375 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:09 compute-0 nova_compute[254092]: 2025-11-25 16:53:09.378 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:53:09 compute-0 nova_compute[254092]: 2025-11-25 16:53:09.380 254096 INFO os_vif [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4e:17:31,bridge_name='br-int',has_traffic_filtering=True,id=923a00bb-da3b-434a-b154-c338c92e0635,network=Network(cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap923a00bb-da')
Nov 25 16:53:09 compute-0 podman[366040]: 2025-11-25 16:53:09.425592549 +0000 UTC m=+0.041232112 container remove 9758ed213608c06fd11fb1ff6275783984c1d6dffd6b7b49825d3f97a73a2bd3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 16:53:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:09.431 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[11be4882-10f8-48e8-bd32-58c428874228]: (4, ('Tue Nov 25 04:53:09 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e (9758ed213608c06fd11fb1ff6275783984c1d6dffd6b7b49825d3f97a73a2bd3)\n9758ed213608c06fd11fb1ff6275783984c1d6dffd6b7b49825d3f97a73a2bd3\nTue Nov 25 04:53:09 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e (9758ed213608c06fd11fb1ff6275783984c1d6dffd6b7b49825d3f97a73a2bd3)\n9758ed213608c06fd11fb1ff6275783984c1d6dffd6b7b49825d3f97a73a2bd3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:53:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:09.433 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a0ff0162-c0d7-474f-a6d3-2c7c6ee831b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:53:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:09.434 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf0cb5b9-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:53:09 compute-0 nova_compute[254092]: 2025-11-25 16:53:09.435 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:09 compute-0 kernel: tapcf0cb5b9-c0: left promiscuous mode
Nov 25 16:53:09 compute-0 nova_compute[254092]: 2025-11-25 16:53:09.450 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:09.453 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[736c6749-bcec-46e6-85d3-f20870b7e25f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:53:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:09.465 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0e412452-18a1-42b1-9b89-a1eb5a6c3c39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:53:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:09.466 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[af31e8a2-2476-4995-8101-926225f9a83f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:53:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:09.480 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e570401c-2ade-4105-baea-8d38e3475caa]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 601448, 'reachable_time': 30208, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366073, 'error': None, 'target': 'ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:53:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:09.483 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cf0cb5b9-c956-4cd5-8a3d-7f55d142ae3e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:53:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:09.483 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[4aaada79-abf2-411c-ad81-761741b15ce6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:53:09 compute-0 systemd[1]: run-netns-ovnmeta\x2dcf0cb5b9\x2dc956\x2d4cd5\x2d8a3d\x2d7f55d142ae3e.mount: Deactivated successfully.
Nov 25 16:53:09 compute-0 nova_compute[254092]: 2025-11-25 16:53:09.822 254096 INFO nova.virt.libvirt.driver [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Deleting instance files /var/lib/nova/instances/6d70119e-e45b-4a12-893e-8d5a805ca8ab_del
Nov 25 16:53:09 compute-0 nova_compute[254092]: 2025-11-25 16:53:09.823 254096 INFO nova.virt.libvirt.driver [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Deletion of /var/lib/nova/instances/6d70119e-e45b-4a12-893e-8d5a805ca8ab_del complete
Nov 25 16:53:09 compute-0 nova_compute[254092]: 2025-11-25 16:53:09.875 254096 INFO nova.compute.manager [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Took 0.77 seconds to destroy the instance on the hypervisor.
Nov 25 16:53:09 compute-0 nova_compute[254092]: 2025-11-25 16:53:09.875 254096 DEBUG oslo.service.loopingcall [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:53:09 compute-0 nova_compute[254092]: 2025-11-25 16:53:09.875 254096 DEBUG nova.compute.manager [-] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:53:09 compute-0 nova_compute[254092]: 2025-11-25 16:53:09.876 254096 DEBUG nova.network.neutron [-] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:53:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2080: 321 pgs: 321 active+clean; 293 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 172 op/s
Nov 25 16:53:10 compute-0 nova_compute[254092]: 2025-11-25 16:53:10.029 254096 DEBUG nova.compute.manager [req-a7ee9aa9-51ce-4d07-97ae-6b69d3bd1cf7 req-fb39423c-8ca6-480c-8eaf-e8737647f4e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Received event network-vif-unplugged-923a00bb-da3b-434a-b154-c338c92e0635 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:53:10 compute-0 nova_compute[254092]: 2025-11-25 16:53:10.029 254096 DEBUG oslo_concurrency.lockutils [req-a7ee9aa9-51ce-4d07-97ae-6b69d3bd1cf7 req-fb39423c-8ca6-480c-8eaf-e8737647f4e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6d70119e-e45b-4a12-893e-8d5a805ca8ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:53:10 compute-0 nova_compute[254092]: 2025-11-25 16:53:10.029 254096 DEBUG oslo_concurrency.lockutils [req-a7ee9aa9-51ce-4d07-97ae-6b69d3bd1cf7 req-fb39423c-8ca6-480c-8eaf-e8737647f4e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6d70119e-e45b-4a12-893e-8d5a805ca8ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:53:10 compute-0 nova_compute[254092]: 2025-11-25 16:53:10.030 254096 DEBUG oslo_concurrency.lockutils [req-a7ee9aa9-51ce-4d07-97ae-6b69d3bd1cf7 req-fb39423c-8ca6-480c-8eaf-e8737647f4e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6d70119e-e45b-4a12-893e-8d5a805ca8ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:53:10 compute-0 nova_compute[254092]: 2025-11-25 16:53:10.030 254096 DEBUG nova.compute.manager [req-a7ee9aa9-51ce-4d07-97ae-6b69d3bd1cf7 req-fb39423c-8ca6-480c-8eaf-e8737647f4e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] No waiting events found dispatching network-vif-unplugged-923a00bb-da3b-434a-b154-c338c92e0635 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:53:10 compute-0 nova_compute[254092]: 2025-11-25 16:53:10.030 254096 DEBUG nova.compute.manager [req-a7ee9aa9-51ce-4d07-97ae-6b69d3bd1cf7 req-fb39423c-8ca6-480c-8eaf-e8737647f4e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Received event network-vif-unplugged-923a00bb-da3b-434a-b154-c338c92e0635 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:53:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:53:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:53:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:53:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:53:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:53:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:53:10 compute-0 ovn_controller[153477]: 2025-11-25T16:53:10Z|00115|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5e:77:80 10.100.0.8
Nov 25 16:53:10 compute-0 ovn_controller[153477]: 2025-11-25T16:53:10Z|00116|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5e:77:80 10.100.0.8
Nov 25 16:53:10 compute-0 nova_compute[254092]: 2025-11-25 16:53:10.596 254096 DEBUG nova.network.neutron [-] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:53:10 compute-0 nova_compute[254092]: 2025-11-25 16:53:10.612 254096 INFO nova.compute.manager [-] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Took 0.74 seconds to deallocate network for instance.
Nov 25 16:53:10 compute-0 nova_compute[254092]: 2025-11-25 16:53:10.687 254096 DEBUG oslo_concurrency.lockutils [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:53:10 compute-0 nova_compute[254092]: 2025-11-25 16:53:10.688 254096 DEBUG oslo_concurrency.lockutils [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:53:10 compute-0 nova_compute[254092]: 2025-11-25 16:53:10.755 254096 DEBUG oslo_concurrency.processutils [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:53:10 compute-0 ceph-mon[74985]: pgmap v2080: 321 pgs: 321 active+clean; 293 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 172 op/s
Nov 25 16:53:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:53:11 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2767569315' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:53:11 compute-0 nova_compute[254092]: 2025-11-25 16:53:11.181 254096 DEBUG oslo_concurrency.processutils [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:53:11 compute-0 nova_compute[254092]: 2025-11-25 16:53:11.187 254096 DEBUG nova.compute.provider_tree [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:53:11 compute-0 nova_compute[254092]: 2025-11-25 16:53:11.209 254096 DEBUG nova.scheduler.client.report [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:53:11 compute-0 nova_compute[254092]: 2025-11-25 16:53:11.223 254096 DEBUG oslo_concurrency.lockutils [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.535s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:53:11 compute-0 nova_compute[254092]: 2025-11-25 16:53:11.248 254096 INFO nova.scheduler.client.report [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Deleted allocations for instance 6d70119e-e45b-4a12-893e-8d5a805ca8ab
Nov 25 16:53:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:53:11 compute-0 nova_compute[254092]: 2025-11-25 16:53:11.327 254096 DEBUG oslo_concurrency.lockutils [None req-5e043114-4366-49c2-8572-a32f8496b4c5 9deaf2356cda4c0cb2a52383b7f2e609 33a2e508e63149889f0d5d945726522c - - default default] Lock "6d70119e-e45b-4a12-893e-8d5a805ca8ab" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.224s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:53:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2081: 321 pgs: 321 active+clean; 109 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.9 MiB/s wr, 294 op/s
Nov 25 16:53:11 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2767569315' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:53:12 compute-0 nova_compute[254092]: 2025-11-25 16:53:12.236 254096 DEBUG nova.compute.manager [req-f2fc1264-d4c5-4575-ba7b-8fd3a1b568f7 req-fc42ac7b-fa8c-44d0-a2c1-1d81068f3a8e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Received event network-vif-plugged-923a00bb-da3b-434a-b154-c338c92e0635 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:53:12 compute-0 nova_compute[254092]: 2025-11-25 16:53:12.236 254096 DEBUG oslo_concurrency.lockutils [req-f2fc1264-d4c5-4575-ba7b-8fd3a1b568f7 req-fc42ac7b-fa8c-44d0-a2c1-1d81068f3a8e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6d70119e-e45b-4a12-893e-8d5a805ca8ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:53:12 compute-0 nova_compute[254092]: 2025-11-25 16:53:12.236 254096 DEBUG oslo_concurrency.lockutils [req-f2fc1264-d4c5-4575-ba7b-8fd3a1b568f7 req-fc42ac7b-fa8c-44d0-a2c1-1d81068f3a8e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6d70119e-e45b-4a12-893e-8d5a805ca8ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:53:12 compute-0 nova_compute[254092]: 2025-11-25 16:53:12.236 254096 DEBUG oslo_concurrency.lockutils [req-f2fc1264-d4c5-4575-ba7b-8fd3a1b568f7 req-fc42ac7b-fa8c-44d0-a2c1-1d81068f3a8e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6d70119e-e45b-4a12-893e-8d5a805ca8ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:53:12 compute-0 nova_compute[254092]: 2025-11-25 16:53:12.236 254096 DEBUG nova.compute.manager [req-f2fc1264-d4c5-4575-ba7b-8fd3a1b568f7 req-fc42ac7b-fa8c-44d0-a2c1-1d81068f3a8e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] No waiting events found dispatching network-vif-plugged-923a00bb-da3b-434a-b154-c338c92e0635 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:53:12 compute-0 nova_compute[254092]: 2025-11-25 16:53:12.237 254096 WARNING nova.compute.manager [req-f2fc1264-d4c5-4575-ba7b-8fd3a1b568f7 req-fc42ac7b-fa8c-44d0-a2c1-1d81068f3a8e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Received unexpected event network-vif-plugged-923a00bb-da3b-434a-b154-c338c92e0635 for instance with vm_state deleted and task_state None.
Nov 25 16:53:12 compute-0 nova_compute[254092]: 2025-11-25 16:53:12.237 254096 DEBUG nova.compute.manager [req-f2fc1264-d4c5-4575-ba7b-8fd3a1b568f7 req-fc42ac7b-fa8c-44d0-a2c1-1d81068f3a8e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Received event network-vif-deleted-923a00bb-da3b-434a-b154-c338c92e0635 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:53:13 compute-0 ceph-mon[74985]: pgmap v2081: 321 pgs: 321 active+clean; 109 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.9 MiB/s wr, 294 op/s
Nov 25 16:53:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:13.630 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:53:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:13.630 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:53:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:13.631 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:53:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2082: 321 pgs: 321 active+clean; 109 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 190 op/s
Nov 25 16:53:14 compute-0 nova_compute[254092]: 2025-11-25 16:53:14.006 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:14 compute-0 nova_compute[254092]: 2025-11-25 16:53:14.376 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:15 compute-0 ceph-mon[74985]: pgmap v2082: 321 pgs: 321 active+clean; 109 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 190 op/s
Nov 25 16:53:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2083: 321 pgs: 321 active+clean; 118 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 199 op/s
Nov 25 16:53:16 compute-0 ovn_controller[153477]: 2025-11-25T16:53:16Z|01109|binding|INFO|Releasing lport 4ba2fcc9-e190-42a3-90c2-5eaf3bb3b6ed from this chassis (sb_readonly=0)
Nov 25 16:53:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:53:16 compute-0 nova_compute[254092]: 2025-11-25 16:53:16.279 254096 INFO nova.compute.manager [None req-348206ab-1bdc-4dab-95ff-bed7af5997fb 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Get console output
Nov 25 16:53:16 compute-0 nova_compute[254092]: 2025-11-25 16:53:16.285 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 16:53:16 compute-0 nova_compute[254092]: 2025-11-25 16:53:16.327 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:16 compute-0 nova_compute[254092]: 2025-11-25 16:53:16.547 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:53:16 compute-0 nova_compute[254092]: 2025-11-25 16:53:16.664 254096 DEBUG nova.objects.instance [None req-6e1ca346-2e9c-4fd1-ad37-303759815abd 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'pci_devices' on Instance uuid b12625ea-31bf-4599-a248-4c6ced8e59c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:53:16 compute-0 nova_compute[254092]: 2025-11-25 16:53:16.685 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089596.6846845, b12625ea-31bf-4599-a248-4c6ced8e59c2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:53:16 compute-0 nova_compute[254092]: 2025-11-25 16:53:16.685 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] VM Paused (Lifecycle Event)
Nov 25 16:53:16 compute-0 nova_compute[254092]: 2025-11-25 16:53:16.705 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:53:16 compute-0 nova_compute[254092]: 2025-11-25 16:53:16.710 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:53:16 compute-0 nova_compute[254092]: 2025-11-25 16:53:16.726 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] During sync_power_state the instance has a pending task (suspending). Skip.
Nov 25 16:53:17 compute-0 ceph-mon[74985]: pgmap v2083: 321 pgs: 321 active+clean; 118 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 199 op/s
Nov 25 16:53:17 compute-0 kernel: tap1ef97ad6-07 (unregistering): left promiscuous mode
Nov 25 16:53:17 compute-0 NetworkManager[48891]: <info>  [1764089597.3438] device (tap1ef97ad6-07): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:53:17 compute-0 nova_compute[254092]: 2025-11-25 16:53:17.351 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:17 compute-0 ovn_controller[153477]: 2025-11-25T16:53:17Z|01110|binding|INFO|Releasing lport 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 from this chassis (sb_readonly=0)
Nov 25 16:53:17 compute-0 ovn_controller[153477]: 2025-11-25T16:53:17Z|01111|binding|INFO|Setting lport 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 down in Southbound
Nov 25 16:53:17 compute-0 ovn_controller[153477]: 2025-11-25T16:53:17Z|01112|binding|INFO|Removing iface tap1ef97ad6-07 ovn-installed in OVS
Nov 25 16:53:17 compute-0 nova_compute[254092]: 2025-11-25 16:53:17.355 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:17.359 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:77:80 10.100.0.8'], port_security=['fa:16:3e:5e:77:80 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'b12625ea-31bf-4599-a248-4c6ced8e59c2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1e6bccce-d7db-425d-9e27-bd7a85fa0b4a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.203'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a32ed1af-29d9-45da-a55a-60d785fe0bce, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1ef97ad6-0798-4b3d-a9cc-562f9526ae38) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:53:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:17.360 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 in datapath aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1 unbound from our chassis
Nov 25 16:53:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:17.361 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:53:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:17.362 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[866b888b-9a83-4a60-850a-3db34e36b032]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:53:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:17.362 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1 namespace which is not needed anymore
Nov 25 16:53:17 compute-0 nova_compute[254092]: 2025-11-25 16:53:17.371 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:17 compute-0 systemd[1]: machine-qemu\x2d138\x2dinstance\x2d0000006d.scope: Deactivated successfully.
Nov 25 16:53:17 compute-0 systemd[1]: machine-qemu\x2d138\x2dinstance\x2d0000006d.scope: Consumed 13.172s CPU time.
Nov 25 16:53:17 compute-0 systemd-machined[216343]: Machine qemu-138-instance-0000006d terminated.
Nov 25 16:53:17 compute-0 neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1[364850]: [NOTICE]   (364854) : haproxy version is 2.8.14-c23fe91
Nov 25 16:53:17 compute-0 neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1[364850]: [NOTICE]   (364854) : path to executable is /usr/sbin/haproxy
Nov 25 16:53:17 compute-0 neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1[364850]: [WARNING]  (364854) : Exiting Master process...
Nov 25 16:53:17 compute-0 neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1[364850]: [ALERT]    (364854) : Current worker (364860) exited with code 143 (Terminated)
Nov 25 16:53:17 compute-0 neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1[364850]: [WARNING]  (364854) : All workers exited. Exiting... (0)
Nov 25 16:53:17 compute-0 systemd[1]: libpod-1783aaa40d4ebb6e524f8b0823a0abc04ec5e34b56392a52a2337f24d54bc35a.scope: Deactivated successfully.
Nov 25 16:53:17 compute-0 podman[366125]: 2025-11-25 16:53:17.493160736 +0000 UTC m=+0.041541730 container died 1783aaa40d4ebb6e524f8b0823a0abc04ec5e34b56392a52a2337f24d54bc35a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:53:17 compute-0 nova_compute[254092]: 2025-11-25 16:53:17.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:53:17 compute-0 kernel: tap1ef97ad6-07: entered promiscuous mode
Nov 25 16:53:17 compute-0 NetworkManager[48891]: <info>  [1764089597.5101] manager: (tap1ef97ad6-07): new Tun device (/org/freedesktop/NetworkManager/Devices/454)
Nov 25 16:53:17 compute-0 ovn_controller[153477]: 2025-11-25T16:53:17Z|01113|binding|INFO|Claiming lport 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 for this chassis.
Nov 25 16:53:17 compute-0 ovn_controller[153477]: 2025-11-25T16:53:17Z|01114|binding|INFO|1ef97ad6-0798-4b3d-a9cc-562f9526ae38: Claiming fa:16:3e:5e:77:80 10.100.0.8
Nov 25 16:53:17 compute-0 kernel: tap1ef97ad6-07 (unregistering): left promiscuous mode
Nov 25 16:53:17 compute-0 nova_compute[254092]: 2025-11-25 16:53:17.594 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:17.602 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:77:80 10.100.0.8'], port_security=['fa:16:3e:5e:77:80 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'b12625ea-31bf-4599-a248-4c6ced8e59c2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1e6bccce-d7db-425d-9e27-bd7a85fa0b4a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.203'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a32ed1af-29d9-45da-a55a-60d785fe0bce, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1ef97ad6-0798-4b3d-a9cc-562f9526ae38) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:53:17 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1783aaa40d4ebb6e524f8b0823a0abc04ec5e34b56392a52a2337f24d54bc35a-userdata-shm.mount: Deactivated successfully.
Nov 25 16:53:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-3934ea0b4d9ff83643ea9a1fbc52010cb7a732e5e5a2e2bfce4873181e16e161-merged.mount: Deactivated successfully.
Nov 25 16:53:17 compute-0 nova_compute[254092]: 2025-11-25 16:53:17.608 254096 DEBUG nova.compute.manager [None req-6e1ca346-2e9c-4fd1-ad37-303759815abd 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:53:17 compute-0 podman[366125]: 2025-11-25 16:53:17.614962786 +0000 UTC m=+0.163343780 container cleanup 1783aaa40d4ebb6e524f8b0823a0abc04ec5e34b56392a52a2337f24d54bc35a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 16:53:17 compute-0 ovn_controller[153477]: 2025-11-25T16:53:17Z|01115|binding|INFO|Setting lport 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 ovn-installed in OVS
Nov 25 16:53:17 compute-0 ovn_controller[153477]: 2025-11-25T16:53:17Z|01116|binding|INFO|Setting lport 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 up in Southbound
Nov 25 16:53:17 compute-0 nova_compute[254092]: 2025-11-25 16:53:17.620 254096 DEBUG nova.compute.manager [req-7e4e24d9-1916-48f7-a326-37803eb00dd4 req-aa47c0c7-8490-4f11-9210-53b407f781c3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received event network-vif-unplugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:53:17 compute-0 nova_compute[254092]: 2025-11-25 16:53:17.620 254096 DEBUG oslo_concurrency.lockutils [req-7e4e24d9-1916-48f7-a326-37803eb00dd4 req-aa47c0c7-8490-4f11-9210-53b407f781c3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:53:17 compute-0 nova_compute[254092]: 2025-11-25 16:53:17.621 254096 DEBUG oslo_concurrency.lockutils [req-7e4e24d9-1916-48f7-a326-37803eb00dd4 req-aa47c0c7-8490-4f11-9210-53b407f781c3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:53:17 compute-0 nova_compute[254092]: 2025-11-25 16:53:17.622 254096 DEBUG oslo_concurrency.lockutils [req-7e4e24d9-1916-48f7-a326-37803eb00dd4 req-aa47c0c7-8490-4f11-9210-53b407f781c3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:53:17 compute-0 nova_compute[254092]: 2025-11-25 16:53:17.624 254096 DEBUG nova.compute.manager [req-7e4e24d9-1916-48f7-a326-37803eb00dd4 req-aa47c0c7-8490-4f11-9210-53b407f781c3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] No waiting events found dispatching network-vif-unplugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:53:17 compute-0 nova_compute[254092]: 2025-11-25 16:53:17.624 254096 WARNING nova.compute.manager [req-7e4e24d9-1916-48f7-a326-37803eb00dd4 req-aa47c0c7-8490-4f11-9210-53b407f781c3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received unexpected event network-vif-unplugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 for instance with vm_state active and task_state suspending.
Nov 25 16:53:17 compute-0 nova_compute[254092]: 2025-11-25 16:53:17.625 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:17 compute-0 ovn_controller[153477]: 2025-11-25T16:53:17Z|01117|binding|INFO|Releasing lport 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 from this chassis (sb_readonly=0)
Nov 25 16:53:17 compute-0 ovn_controller[153477]: 2025-11-25T16:53:17Z|01118|binding|INFO|Setting lport 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 down in Southbound
Nov 25 16:53:17 compute-0 ovn_controller[153477]: 2025-11-25T16:53:17Z|01119|binding|INFO|Removing iface tap1ef97ad6-07 ovn-installed in OVS
Nov 25 16:53:17 compute-0 systemd[1]: libpod-conmon-1783aaa40d4ebb6e524f8b0823a0abc04ec5e34b56392a52a2337f24d54bc35a.scope: Deactivated successfully.
Nov 25 16:53:17 compute-0 nova_compute[254092]: 2025-11-25 16:53:17.627 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:17.630 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:77:80 10.100.0.8'], port_security=['fa:16:3e:5e:77:80 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'b12625ea-31bf-4599-a248-4c6ced8e59c2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1e6bccce-d7db-425d-9e27-bd7a85fa0b4a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.203'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a32ed1af-29d9-45da-a55a-60d785fe0bce, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1ef97ad6-0798-4b3d-a9cc-562f9526ae38) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:53:17 compute-0 nova_compute[254092]: 2025-11-25 16:53:17.643 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:17 compute-0 podman[366166]: 2025-11-25 16:53:17.681002901 +0000 UTC m=+0.041818498 container remove 1783aaa40d4ebb6e524f8b0823a0abc04ec5e34b56392a52a2337f24d54bc35a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 16:53:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:17.686 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6ba7c842-391d-4375-a9c5-b6a0cdd13891]: (4, ('Tue Nov 25 04:53:17 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1 (1783aaa40d4ebb6e524f8b0823a0abc04ec5e34b56392a52a2337f24d54bc35a)\n1783aaa40d4ebb6e524f8b0823a0abc04ec5e34b56392a52a2337f24d54bc35a\nTue Nov 25 04:53:17 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1 (1783aaa40d4ebb6e524f8b0823a0abc04ec5e34b56392a52a2337f24d54bc35a)\n1783aaa40d4ebb6e524f8b0823a0abc04ec5e34b56392a52a2337f24d54bc35a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:53:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:17.688 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b676e580-851b-49db-a006-45baea669133]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:53:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:17.688 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa7a2aa2-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:53:17 compute-0 nova_compute[254092]: 2025-11-25 16:53:17.690 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:17 compute-0 kernel: tapaa7a2aa2-90: left promiscuous mode
Nov 25 16:53:17 compute-0 nova_compute[254092]: 2025-11-25 16:53:17.706 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:17.708 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bad61903-f4f8-40da-aa98-cb43198fb21d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:53:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:17.737 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cbe4a6e6-c838-45b3-9ba3-77081032eef7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:53:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:17.738 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b3248509-bc74-4845-bcee-6da9243435b9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:53:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:17.753 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f297d223-3e1b-40f5-b1d7-25eaeb3fd8b1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603519, 'reachable_time': 20349, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366185, 'error': None, 'target': 'ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:53:17 compute-0 systemd[1]: run-netns-ovnmeta\x2daa7a2aa2\x2d9e73\x2d498f\x2dbac7\x2d4dcf3eb7c3d1.mount: Deactivated successfully.
Nov 25 16:53:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:17.757 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:53:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:17.757 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[f72970c8-4364-4aaf-a01e-bb04581820ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:53:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:17.758 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 in datapath aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1 unbound from our chassis
Nov 25 16:53:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:17.759 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:53:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:17.760 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9aadc93a-cc71-432c-b152-218505401fda]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:53:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:17.761 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 in datapath aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1 unbound from our chassis
Nov 25 16:53:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:17.762 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:53:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:17.762 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[658d0c88-943a-4c10-ae58-9454557a96da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:53:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2084: 321 pgs: 321 active+clean; 121 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 425 KiB/s rd, 2.1 MiB/s wr, 146 op/s
Nov 25 16:53:18 compute-0 sshd-session[365973]: Connection closed by authenticating user root 139.19.117.131 port 36854 [preauth]
Nov 25 16:53:19 compute-0 nova_compute[254092]: 2025-11-25 16:53:19.008 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:19 compute-0 ceph-mon[74985]: pgmap v2084: 321 pgs: 321 active+clean; 121 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 425 KiB/s rd, 2.1 MiB/s wr, 146 op/s
Nov 25 16:53:19 compute-0 nova_compute[254092]: 2025-11-25 16:53:19.379 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:19 compute-0 nova_compute[254092]: 2025-11-25 16:53:19.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:53:19 compute-0 nova_compute[254092]: 2025-11-25 16:53:19.795 254096 DEBUG nova.compute.manager [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received event network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:53:19 compute-0 nova_compute[254092]: 2025-11-25 16:53:19.795 254096 DEBUG oslo_concurrency.lockutils [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:53:19 compute-0 nova_compute[254092]: 2025-11-25 16:53:19.795 254096 DEBUG oslo_concurrency.lockutils [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:53:19 compute-0 nova_compute[254092]: 2025-11-25 16:53:19.795 254096 DEBUG oslo_concurrency.lockutils [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:53:19 compute-0 nova_compute[254092]: 2025-11-25 16:53:19.796 254096 DEBUG nova.compute.manager [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] No waiting events found dispatching network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:53:19 compute-0 nova_compute[254092]: 2025-11-25 16:53:19.796 254096 WARNING nova.compute.manager [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received unexpected event network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 for instance with vm_state suspended and task_state None.
Nov 25 16:53:19 compute-0 nova_compute[254092]: 2025-11-25 16:53:19.796 254096 DEBUG nova.compute.manager [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received event network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:53:19 compute-0 nova_compute[254092]: 2025-11-25 16:53:19.796 254096 DEBUG oslo_concurrency.lockutils [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:53:19 compute-0 nova_compute[254092]: 2025-11-25 16:53:19.796 254096 DEBUG oslo_concurrency.lockutils [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:53:19 compute-0 nova_compute[254092]: 2025-11-25 16:53:19.797 254096 DEBUG oslo_concurrency.lockutils [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:53:19 compute-0 nova_compute[254092]: 2025-11-25 16:53:19.797 254096 DEBUG nova.compute.manager [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] No waiting events found dispatching network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:53:19 compute-0 nova_compute[254092]: 2025-11-25 16:53:19.797 254096 WARNING nova.compute.manager [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received unexpected event network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 for instance with vm_state suspended and task_state None.
Nov 25 16:53:19 compute-0 nova_compute[254092]: 2025-11-25 16:53:19.797 254096 DEBUG nova.compute.manager [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received event network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:53:19 compute-0 nova_compute[254092]: 2025-11-25 16:53:19.797 254096 DEBUG oslo_concurrency.lockutils [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:53:19 compute-0 nova_compute[254092]: 2025-11-25 16:53:19.798 254096 DEBUG oslo_concurrency.lockutils [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:53:19 compute-0 nova_compute[254092]: 2025-11-25 16:53:19.798 254096 DEBUG oslo_concurrency.lockutils [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:53:19 compute-0 nova_compute[254092]: 2025-11-25 16:53:19.798 254096 DEBUG nova.compute.manager [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] No waiting events found dispatching network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:53:19 compute-0 nova_compute[254092]: 2025-11-25 16:53:19.798 254096 WARNING nova.compute.manager [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received unexpected event network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 for instance with vm_state suspended and task_state None.
Nov 25 16:53:19 compute-0 nova_compute[254092]: 2025-11-25 16:53:19.798 254096 DEBUG nova.compute.manager [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received event network-vif-unplugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:53:19 compute-0 nova_compute[254092]: 2025-11-25 16:53:19.799 254096 DEBUG oslo_concurrency.lockutils [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:53:19 compute-0 nova_compute[254092]: 2025-11-25 16:53:19.799 254096 DEBUG oslo_concurrency.lockutils [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:53:19 compute-0 nova_compute[254092]: 2025-11-25 16:53:19.799 254096 DEBUG oslo_concurrency.lockutils [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:53:19 compute-0 nova_compute[254092]: 2025-11-25 16:53:19.799 254096 DEBUG nova.compute.manager [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] No waiting events found dispatching network-vif-unplugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:53:19 compute-0 nova_compute[254092]: 2025-11-25 16:53:19.800 254096 WARNING nova.compute.manager [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received unexpected event network-vif-unplugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 for instance with vm_state suspended and task_state None.
Nov 25 16:53:19 compute-0 nova_compute[254092]: 2025-11-25 16:53:19.800 254096 DEBUG nova.compute.manager [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received event network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:53:19 compute-0 nova_compute[254092]: 2025-11-25 16:53:19.800 254096 DEBUG oslo_concurrency.lockutils [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:53:19 compute-0 nova_compute[254092]: 2025-11-25 16:53:19.800 254096 DEBUG oslo_concurrency.lockutils [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:53:19 compute-0 nova_compute[254092]: 2025-11-25 16:53:19.800 254096 DEBUG oslo_concurrency.lockutils [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:53:19 compute-0 nova_compute[254092]: 2025-11-25 16:53:19.801 254096 DEBUG nova.compute.manager [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] No waiting events found dispatching network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:53:19 compute-0 nova_compute[254092]: 2025-11-25 16:53:19.801 254096 WARNING nova.compute.manager [req-54fe6ddb-644d-48bc-9b01-b87115c6864c req-aff21f6e-67e1-4c19-a888-948b986cade5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received unexpected event network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 for instance with vm_state suspended and task_state None.
Nov 25 16:53:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2085: 321 pgs: 321 active+clean; 121 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 378 KiB/s rd, 2.1 MiB/s wr, 138 op/s
Nov 25 16:53:20 compute-0 ceph-mon[74985]: pgmap v2085: 321 pgs: 321 active+clean; 121 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 378 KiB/s rd, 2.1 MiB/s wr, 138 op/s
Nov 25 16:53:20 compute-0 nova_compute[254092]: 2025-11-25 16:53:20.623 254096 INFO nova.compute.manager [None req-4b26ad17-a388-4efb-928b-006f23c09fb9 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Get console output
Nov 25 16:53:20 compute-0 nova_compute[254092]: 2025-11-25 16:53:20.798 254096 INFO nova.compute.manager [None req-956653e2-948e-409a-b570-5f0975533bc4 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Resuming
Nov 25 16:53:20 compute-0 nova_compute[254092]: 2025-11-25 16:53:20.800 254096 DEBUG nova.objects.instance [None req-956653e2-948e-409a-b570-5f0975533bc4 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'flavor' on Instance uuid b12625ea-31bf-4599-a248-4c6ced8e59c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:53:20 compute-0 nova_compute[254092]: 2025-11-25 16:53:20.807 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089585.807225, 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:53:20 compute-0 nova_compute[254092]: 2025-11-25 16:53:20.808 254096 INFO nova.compute.manager [-] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] VM Stopped (Lifecycle Event)
Nov 25 16:53:20 compute-0 nova_compute[254092]: 2025-11-25 16:53:20.823 254096 DEBUG nova.compute.manager [None req-20deafa6-e275-4822-9640-cbded10aef0e - - - - - -] [instance: 1e5cdf5e-e381-42e2-a91f-f3289b5ec7a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:53:20 compute-0 nova_compute[254092]: 2025-11-25 16:53:20.831 254096 DEBUG oslo_concurrency.lockutils [None req-956653e2-948e-409a-b570-5f0975533bc4 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "refresh_cache-b12625ea-31bf-4599-a248-4c6ced8e59c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:53:20 compute-0 nova_compute[254092]: 2025-11-25 16:53:20.832 254096 DEBUG oslo_concurrency.lockutils [None req-956653e2-948e-409a-b570-5f0975533bc4 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquired lock "refresh_cache-b12625ea-31bf-4599-a248-4c6ced8e59c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:53:20 compute-0 nova_compute[254092]: 2025-11-25 16:53:20.832 254096 DEBUG nova.network.neutron [None req-956653e2-948e-409a-b570-5f0975533bc4 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:53:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:53:21 compute-0 nova_compute[254092]: 2025-11-25 16:53:21.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:53:21 compute-0 nova_compute[254092]: 2025-11-25 16:53:21.498 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:53:21 compute-0 nova_compute[254092]: 2025-11-25 16:53:21.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:53:21 compute-0 nova_compute[254092]: 2025-11-25 16:53:21.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:53:21 compute-0 nova_compute[254092]: 2025-11-25 16:53:21.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:53:21 compute-0 nova_compute[254092]: 2025-11-25 16:53:21.522 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 16:53:21 compute-0 nova_compute[254092]: 2025-11-25 16:53:21.522 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:53:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2086: 321 pgs: 321 active+clean; 121 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 379 KiB/s rd, 2.1 MiB/s wr, 139 op/s
Nov 25 16:53:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:53:21 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1395284670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:53:21 compute-0 nova_compute[254092]: 2025-11-25 16:53:21.964 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:53:22 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1395284670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:53:22 compute-0 nova_compute[254092]: 2025-11-25 16:53:22.039 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:53:22 compute-0 nova_compute[254092]: 2025-11-25 16:53:22.040 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:53:22 compute-0 nova_compute[254092]: 2025-11-25 16:53:22.168 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:53:22 compute-0 nova_compute[254092]: 2025-11-25 16:53:22.169 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3742MB free_disk=59.94289016723633GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 16:53:22 compute-0 nova_compute[254092]: 2025-11-25 16:53:22.169 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:53:22 compute-0 nova_compute[254092]: 2025-11-25 16:53:22.170 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:53:22 compute-0 nova_compute[254092]: 2025-11-25 16:53:22.236 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance b12625ea-31bf-4599-a248-4c6ced8e59c2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:53:22 compute-0 nova_compute[254092]: 2025-11-25 16:53:22.236 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 16:53:22 compute-0 nova_compute[254092]: 2025-11-25 16:53:22.236 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 16:53:22 compute-0 nova_compute[254092]: 2025-11-25 16:53:22.267 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:53:22 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:53:22 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/634087155' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:53:22 compute-0 nova_compute[254092]: 2025-11-25 16:53:22.743 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:53:22 compute-0 nova_compute[254092]: 2025-11-25 16:53:22.748 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:53:22 compute-0 nova_compute[254092]: 2025-11-25 16:53:22.765 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:53:22 compute-0 nova_compute[254092]: 2025-11-25 16:53:22.786 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 16:53:22 compute-0 nova_compute[254092]: 2025-11-25 16:53:22.786 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:53:22 compute-0 nova_compute[254092]: 2025-11-25 16:53:22.890 254096 DEBUG nova.network.neutron [None req-956653e2-948e-409a-b570-5f0975533bc4 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Updating instance_info_cache with network_info: [{"id": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "address": "fa:16:3e:5e:77:80", "network": {"id": "aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1", "bridge": "br-int", "label": "tempest-network-smoke--2125799662", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ef97ad6-07", "ovs_interfaceid": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:53:22 compute-0 nova_compute[254092]: 2025-11-25 16:53:22.903 254096 DEBUG oslo_concurrency.lockutils [None req-956653e2-948e-409a-b570-5f0975533bc4 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Releasing lock "refresh_cache-b12625ea-31bf-4599-a248-4c6ced8e59c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:53:22 compute-0 nova_compute[254092]: 2025-11-25 16:53:22.907 254096 DEBUG nova.virt.libvirt.vif [None req-956653e2-948e-409a-b570-5f0975533bc4 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:52:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1982563586',display_name='tempest-TestNetworkAdvancedServerOps-server-1982563586',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1982563586',id=109,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBARA21hEW3+D06RLHnS5fFKgy30b2MZ1ew/5ZgTFs7Q0nQH9hOe8Z3jkbVm5M1kwXkWC30h/+QH5x5eRfxJUT8solIlf//6cGDYhyCYMmZXSakiWtWl4wz/5VtXXcA2QCA==',key_name='tempest-TestNetworkAdvancedServerOps-1679437763',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:52:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='f92d76b523b84ec9b645382bdd3192c9',ramdisk_id='',reservation_id='r-7fugjamt',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1110744031',owner_user_name='tempest-TestNetworkAdvancedServerOps-1110744031-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:53:17Z,user_data=None,user_id='3b30410358c448a693a9dc40ee6aafb0',uuid=b12625ea-31bf-4599-a248-4c6ced8e59c2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "address": "fa:16:3e:5e:77:80", "network": {"id": "aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1", "bridge": "br-int", "label": "tempest-network-smoke--2125799662", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ef97ad6-07", "ovs_interfaceid": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:53:22 compute-0 nova_compute[254092]: 2025-11-25 16:53:22.908 254096 DEBUG nova.network.os_vif_util [None req-956653e2-948e-409a-b570-5f0975533bc4 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converting VIF {"id": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "address": "fa:16:3e:5e:77:80", "network": {"id": "aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1", "bridge": "br-int", "label": "tempest-network-smoke--2125799662", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ef97ad6-07", "ovs_interfaceid": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:53:22 compute-0 nova_compute[254092]: 2025-11-25 16:53:22.908 254096 DEBUG nova.network.os_vif_util [None req-956653e2-948e-409a-b570-5f0975533bc4 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:77:80,bridge_name='br-int',has_traffic_filtering=True,id=1ef97ad6-0798-4b3d-a9cc-562f9526ae38,network=Network(aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ef97ad6-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:53:22 compute-0 nova_compute[254092]: 2025-11-25 16:53:22.908 254096 DEBUG os_vif [None req-956653e2-948e-409a-b570-5f0975533bc4 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:77:80,bridge_name='br-int',has_traffic_filtering=True,id=1ef97ad6-0798-4b3d-a9cc-562f9526ae38,network=Network(aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ef97ad6-07') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:53:22 compute-0 nova_compute[254092]: 2025-11-25 16:53:22.909 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:22 compute-0 nova_compute[254092]: 2025-11-25 16:53:22.909 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:53:22 compute-0 nova_compute[254092]: 2025-11-25 16:53:22.910 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:53:22 compute-0 nova_compute[254092]: 2025-11-25 16:53:22.912 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:22 compute-0 nova_compute[254092]: 2025-11-25 16:53:22.912 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ef97ad6-07, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:53:22 compute-0 nova_compute[254092]: 2025-11-25 16:53:22.912 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1ef97ad6-07, col_values=(('external_ids', {'iface-id': '1ef97ad6-0798-4b3d-a9cc-562f9526ae38', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5e:77:80', 'vm-uuid': 'b12625ea-31bf-4599-a248-4c6ced8e59c2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:53:22 compute-0 nova_compute[254092]: 2025-11-25 16:53:22.912 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:53:22 compute-0 nova_compute[254092]: 2025-11-25 16:53:22.913 254096 INFO os_vif [None req-956653e2-948e-409a-b570-5f0975533bc4 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:77:80,bridge_name='br-int',has_traffic_filtering=True,id=1ef97ad6-0798-4b3d-a9cc-562f9526ae38,network=Network(aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ef97ad6-07')
Nov 25 16:53:22 compute-0 nova_compute[254092]: 2025-11-25 16:53:22.926 254096 DEBUG nova.objects.instance [None req-956653e2-948e-409a-b570-5f0975533bc4 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'numa_topology' on Instance uuid b12625ea-31bf-4599-a248-4c6ced8e59c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:53:22 compute-0 kernel: tap1ef97ad6-07: entered promiscuous mode
Nov 25 16:53:22 compute-0 NetworkManager[48891]: <info>  [1764089602.9919] manager: (tap1ef97ad6-07): new Tun device (/org/freedesktop/NetworkManager/Devices/455)
Nov 25 16:53:22 compute-0 ovn_controller[153477]: 2025-11-25T16:53:22Z|01120|binding|INFO|Claiming lport 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 for this chassis.
Nov 25 16:53:22 compute-0 ovn_controller[153477]: 2025-11-25T16:53:22Z|01121|binding|INFO|1ef97ad6-0798-4b3d-a9cc-562f9526ae38: Claiming fa:16:3e:5e:77:80 10.100.0.8
Nov 25 16:53:22 compute-0 nova_compute[254092]: 2025-11-25 16:53:22.993 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.000 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:77:80 10.100.0.8'], port_security=['fa:16:3e:5e:77:80 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'b12625ea-31bf-4599-a248-4c6ced8e59c2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'neutron:revision_number': '7', 'neutron:security_group_ids': '1e6bccce-d7db-425d-9e27-bd7a85fa0b4a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.203'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a32ed1af-29d9-45da-a55a-60d785fe0bce, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1ef97ad6-0798-4b3d-a9cc-562f9526ae38) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.001 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 in datapath aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1 bound to our chassis
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.002 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1
Nov 25 16:53:23 compute-0 ceph-mon[74985]: pgmap v2086: 321 pgs: 321 active+clean; 121 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 379 KiB/s rd, 2.1 MiB/s wr, 139 op/s
Nov 25 16:53:23 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/634087155' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:53:23 compute-0 nova_compute[254092]: 2025-11-25 16:53:23.009 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:23 compute-0 ovn_controller[153477]: 2025-11-25T16:53:23Z|01122|binding|INFO|Setting lport 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 ovn-installed in OVS
Nov 25 16:53:23 compute-0 ovn_controller[153477]: 2025-11-25T16:53:23Z|01123|binding|INFO|Setting lport 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 up in Southbound
Nov 25 16:53:23 compute-0 nova_compute[254092]: 2025-11-25 16:53:23.012 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:23 compute-0 nova_compute[254092]: 2025-11-25 16:53:23.013 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.014 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[23b9e6fd-6eaa-404d-9fcc-9e46eb1c5be6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.015 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaa7a2aa2-91 in ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.017 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaa7a2aa2-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.017 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ebe947cf-99ae-4fea-a39a-301fe5a2d214]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.018 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[49893d48-81c0-4522-a7ae-7a33ea9b0e63]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:53:23 compute-0 systemd-udevd[366244]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:53:23 compute-0 systemd-machined[216343]: New machine qemu-140-instance-0000006d.
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.031 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[9163c010-5839-4688-a679-4aa119994165]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:53:23 compute-0 NetworkManager[48891]: <info>  [1764089603.0393] device (tap1ef97ad6-07): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:53:23 compute-0 NetworkManager[48891]: <info>  [1764089603.0404] device (tap1ef97ad6-07): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:53:23 compute-0 systemd[1]: Started Virtual Machine qemu-140-instance-0000006d.
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.050 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[71c6b6e9-e90a-4735-b4eb-c2b458b7b265]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.078 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1002473a-90af-4f6a-972c-57faec473794]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.082 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[410c3aba-27f8-4dbb-9594-1ce451e0e5a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:53:23 compute-0 NetworkManager[48891]: <info>  [1764089603.0838] manager: (tapaa7a2aa2-90): new Veth device (/org/freedesktop/NetworkManager/Devices/456)
Nov 25 16:53:23 compute-0 systemd-udevd[366248]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.110 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[164cc12b-50c8-409c-b57f-096cec6c4e75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.113 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[0c976627-8e45-4818-9543-e63777a6001f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:53:23 compute-0 NetworkManager[48891]: <info>  [1764089603.1314] device (tapaa7a2aa2-90): carrier: link connected
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.135 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[540feadd-ab32-4f07-affd-658c2e548317]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.150 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e50ee7d9-5289-4f32-8341-61c80d445666]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa7a2aa2-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:f6:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 329], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 606072, 'reachable_time': 28152, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366277, 'error': None, 'target': 'ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.166 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a070af18-f933-4faf-8a49-5c721232c5d2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe52:f6a9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 606072, 'tstamp': 606072}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366278, 'error': None, 'target': 'ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.185 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7f3e29d7-91cd-4395-82b3-c67e65c14313]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa7a2aa2-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:f6:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 329], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 606072, 'reachable_time': 28152, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 366279, 'error': None, 'target': 'ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.216 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4a47fd36-3fa0-46b9-8c76-549b46690a13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:53:23 compute-0 nova_compute[254092]: 2025-11-25 16:53:23.268 254096 DEBUG nova.compute.manager [req-5af42302-47d5-40ad-a655-f6603fec52a4 req-e88cd706-1c68-4544-8b1d-3511530e5312 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received event network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:53:23 compute-0 nova_compute[254092]: 2025-11-25 16:53:23.269 254096 DEBUG oslo_concurrency.lockutils [req-5af42302-47d5-40ad-a655-f6603fec52a4 req-e88cd706-1c68-4544-8b1d-3511530e5312 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:53:23 compute-0 nova_compute[254092]: 2025-11-25 16:53:23.269 254096 DEBUG oslo_concurrency.lockutils [req-5af42302-47d5-40ad-a655-f6603fec52a4 req-e88cd706-1c68-4544-8b1d-3511530e5312 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.269 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2f8b64e1-d7ac-4f01-9ebc-7eea4d81c804]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:53:23 compute-0 nova_compute[254092]: 2025-11-25 16:53:23.270 254096 DEBUG oslo_concurrency.lockutils [req-5af42302-47d5-40ad-a655-f6603fec52a4 req-e88cd706-1c68-4544-8b1d-3511530e5312 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:53:23 compute-0 nova_compute[254092]: 2025-11-25 16:53:23.270 254096 DEBUG nova.compute.manager [req-5af42302-47d5-40ad-a655-f6603fec52a4 req-e88cd706-1c68-4544-8b1d-3511530e5312 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] No waiting events found dispatching network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.270 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa7a2aa2-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.270 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:53:23 compute-0 nova_compute[254092]: 2025-11-25 16:53:23.271 254096 WARNING nova.compute.manager [req-5af42302-47d5-40ad-a655-f6603fec52a4 req-e88cd706-1c68-4544-8b1d-3511530e5312 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received unexpected event network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 for instance with vm_state suspended and task_state resuming.
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.271 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa7a2aa2-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:53:23 compute-0 kernel: tapaa7a2aa2-90: entered promiscuous mode
Nov 25 16:53:23 compute-0 NetworkManager[48891]: <info>  [1764089603.2735] manager: (tapaa7a2aa2-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/457)
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.275 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa7a2aa2-90, col_values=(('external_ids', {'iface-id': '4ba2fcc9-e190-42a3-90c2-5eaf3bb3b6ed'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.278 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:53:23 compute-0 ovn_controller[153477]: 2025-11-25T16:53:23Z|01124|binding|INFO|Releasing lport 4ba2fcc9-e190-42a3-90c2-5eaf3bb3b6ed from this chassis (sb_readonly=0)
Nov 25 16:53:23 compute-0 nova_compute[254092]: 2025-11-25 16:53:23.272 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.290 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ea1cca3c-e1d8-4906-b095-cbc0f603ff3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.292 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1.pid.haproxy
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:53:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:23.294 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1', 'env', 'PROCESS_TAG=haproxy-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:53:23 compute-0 nova_compute[254092]: 2025-11-25 16:53:23.303 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:23 compute-0 nova_compute[254092]: 2025-11-25 16:53:23.631 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for b12625ea-31bf-4599-a248-4c6ced8e59c2 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 16:53:23 compute-0 nova_compute[254092]: 2025-11-25 16:53:23.633 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089603.6308513, b12625ea-31bf-4599-a248-4c6ced8e59c2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:53:23 compute-0 nova_compute[254092]: 2025-11-25 16:53:23.633 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] VM Started (Lifecycle Event)
Nov 25 16:53:23 compute-0 nova_compute[254092]: 2025-11-25 16:53:23.649 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:53:23 compute-0 podman[366353]: 2025-11-25 16:53:23.667192327 +0000 UTC m=+0.056081425 container create e866478491d71ef5825f1d8bd9ca82c0048295cfb87d3ac749095e67b427bfd8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 16:53:23 compute-0 nova_compute[254092]: 2025-11-25 16:53:23.672 254096 DEBUG nova.compute.manager [None req-956653e2-948e-409a-b570-5f0975533bc4 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:53:23 compute-0 nova_compute[254092]: 2025-11-25 16:53:23.674 254096 DEBUG nova.objects.instance [None req-956653e2-948e-409a-b570-5f0975533bc4 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'pci_devices' on Instance uuid b12625ea-31bf-4599-a248-4c6ced8e59c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:53:23 compute-0 nova_compute[254092]: 2025-11-25 16:53:23.678 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:53:23 compute-0 nova_compute[254092]: 2025-11-25 16:53:23.691 254096 INFO nova.virt.libvirt.driver [-] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Instance running successfully.
Nov 25 16:53:23 compute-0 nova_compute[254092]: 2025-11-25 16:53:23.692 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] During sync_power_state the instance has a pending task (resuming). Skip.
Nov 25 16:53:23 compute-0 nova_compute[254092]: 2025-11-25 16:53:23.692 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089603.6360326, b12625ea-31bf-4599-a248-4c6ced8e59c2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:53:23 compute-0 nova_compute[254092]: 2025-11-25 16:53:23.693 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] VM Resumed (Lifecycle Event)
Nov 25 16:53:23 compute-0 virtqemud[253880]: argument unsupported: QEMU guest agent is not configured
Nov 25 16:53:23 compute-0 nova_compute[254092]: 2025-11-25 16:53:23.695 254096 DEBUG nova.virt.libvirt.guest [None req-956653e2-948e-409a-b570-5f0975533bc4 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Nov 25 16:53:23 compute-0 nova_compute[254092]: 2025-11-25 16:53:23.695 254096 DEBUG nova.compute.manager [None req-956653e2-948e-409a-b570-5f0975533bc4 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:53:23 compute-0 nova_compute[254092]: 2025-11-25 16:53:23.713 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:53:23 compute-0 nova_compute[254092]: 2025-11-25 16:53:23.716 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:53:23 compute-0 systemd[1]: Started libpod-conmon-e866478491d71ef5825f1d8bd9ca82c0048295cfb87d3ac749095e67b427bfd8.scope.
Nov 25 16:53:23 compute-0 podman[366353]: 2025-11-25 16:53:23.64045317 +0000 UTC m=+0.029342278 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:53:23 compute-0 nova_compute[254092]: 2025-11-25 16:53:23.741 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] During sync_power_state the instance has a pending task (resuming). Skip.
Nov 25 16:53:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:53:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d6467f1eaed3cd66ef8c9f25e2f7b2beb1393d9ccf64e1b42e45ebee6456e03/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:53:23 compute-0 podman[366353]: 2025-11-25 16:53:23.76631726 +0000 UTC m=+0.155206388 container init e866478491d71ef5825f1d8bd9ca82c0048295cfb87d3ac749095e67b427bfd8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 16:53:23 compute-0 podman[366353]: 2025-11-25 16:53:23.771331807 +0000 UTC m=+0.160220905 container start e866478491d71ef5825f1d8bd9ca82c0048295cfb87d3ac749095e67b427bfd8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 16:53:23 compute-0 neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1[366368]: [NOTICE]   (366372) : New worker (366374) forked
Nov 25 16:53:23 compute-0 neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1[366368]: [NOTICE]   (366372) : Loading success.
Nov 25 16:53:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2087: 321 pgs: 321 active+clean; 121 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 76 KiB/s rd, 105 KiB/s wr, 18 op/s
Nov 25 16:53:24 compute-0 nova_compute[254092]: 2025-11-25 16:53:24.011 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:24 compute-0 nova_compute[254092]: 2025-11-25 16:53:24.345 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089589.3433707, 6d70119e-e45b-4a12-893e-8d5a805ca8ab => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:53:24 compute-0 nova_compute[254092]: 2025-11-25 16:53:24.345 254096 INFO nova.compute.manager [-] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] VM Stopped (Lifecycle Event)
Nov 25 16:53:24 compute-0 nova_compute[254092]: 2025-11-25 16:53:24.366 254096 DEBUG nova.compute.manager [None req-ff73578c-f1e2-4a4d-a7e9-97ee0bf1b333 - - - - - -] [instance: 6d70119e-e45b-4a12-893e-8d5a805ca8ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:53:24 compute-0 nova_compute[254092]: 2025-11-25 16:53:24.439 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:24 compute-0 nova_compute[254092]: 2025-11-25 16:53:24.785 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:53:24 compute-0 nova_compute[254092]: 2025-11-25 16:53:24.785 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:53:24 compute-0 nova_compute[254092]: 2025-11-25 16:53:24.785 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:53:24 compute-0 nova_compute[254092]: 2025-11-25 16:53:24.785 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 16:53:25 compute-0 ceph-mon[74985]: pgmap v2087: 321 pgs: 321 active+clean; 121 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 76 KiB/s rd, 105 KiB/s wr, 18 op/s
Nov 25 16:53:25 compute-0 nova_compute[254092]: 2025-11-25 16:53:25.304 254096 INFO nova.compute.manager [None req-82471993-ac31-4782-b28e-389e5fb37240 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Get console output
Nov 25 16:53:25 compute-0 nova_compute[254092]: 2025-11-25 16:53:25.309 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 16:53:25 compute-0 nova_compute[254092]: 2025-11-25 16:53:25.467 254096 DEBUG nova.compute.manager [req-d9272169-0d36-4133-bb7f-b4786925f3da req-a39295ec-6859-4165-b288-a10e5b91dd15 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received event network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:53:25 compute-0 nova_compute[254092]: 2025-11-25 16:53:25.467 254096 DEBUG oslo_concurrency.lockutils [req-d9272169-0d36-4133-bb7f-b4786925f3da req-a39295ec-6859-4165-b288-a10e5b91dd15 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:53:25 compute-0 nova_compute[254092]: 2025-11-25 16:53:25.468 254096 DEBUG oslo_concurrency.lockutils [req-d9272169-0d36-4133-bb7f-b4786925f3da req-a39295ec-6859-4165-b288-a10e5b91dd15 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:53:25 compute-0 nova_compute[254092]: 2025-11-25 16:53:25.468 254096 DEBUG oslo_concurrency.lockutils [req-d9272169-0d36-4133-bb7f-b4786925f3da req-a39295ec-6859-4165-b288-a10e5b91dd15 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:53:25 compute-0 nova_compute[254092]: 2025-11-25 16:53:25.468 254096 DEBUG nova.compute.manager [req-d9272169-0d36-4133-bb7f-b4786925f3da req-a39295ec-6859-4165-b288-a10e5b91dd15 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] No waiting events found dispatching network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:53:25 compute-0 nova_compute[254092]: 2025-11-25 16:53:25.468 254096 WARNING nova.compute.manager [req-d9272169-0d36-4133-bb7f-b4786925f3da req-a39295ec-6859-4165-b288-a10e5b91dd15 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received unexpected event network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 for instance with vm_state active and task_state None.
Nov 25 16:53:25 compute-0 nova_compute[254092]: 2025-11-25 16:53:25.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:53:25 compute-0 nova_compute[254092]: 2025-11-25 16:53:25.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 16:53:25 compute-0 nova_compute[254092]: 2025-11-25 16:53:25.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 16:53:25 compute-0 podman[366384]: 2025-11-25 16:53:25.655460618 +0000 UTC m=+0.066216210 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 16:53:25 compute-0 podman[366383]: 2025-11-25 16:53:25.686327467 +0000 UTC m=+0.098501378 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 16:53:25 compute-0 podman[366385]: 2025-11-25 16:53:25.688032273 +0000 UTC m=+0.095635889 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller)
Nov 25 16:53:25 compute-0 nova_compute[254092]: 2025-11-25 16:53:25.852 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-b12625ea-31bf-4599-a248-4c6ced8e59c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:53:25 compute-0 nova_compute[254092]: 2025-11-25 16:53:25.852 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-b12625ea-31bf-4599-a248-4c6ced8e59c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:53:25 compute-0 nova_compute[254092]: 2025-11-25 16:53:25.853 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 16:53:25 compute-0 nova_compute[254092]: 2025-11-25 16:53:25.853 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b12625ea-31bf-4599-a248-4c6ced8e59c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:53:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2088: 321 pgs: 321 active+clean; 121 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 80 KiB/s rd, 105 KiB/s wr, 22 op/s
Nov 25 16:53:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:53:26 compute-0 nova_compute[254092]: 2025-11-25 16:53:26.796 254096 DEBUG nova.compute.manager [req-a7f32224-dc2a-4e2f-9509-986f8a76c65e req-9a9af927-a085-4a73-8459-34c257682871 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received event network-changed-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:53:26 compute-0 nova_compute[254092]: 2025-11-25 16:53:26.797 254096 DEBUG nova.compute.manager [req-a7f32224-dc2a-4e2f-9509-986f8a76c65e req-9a9af927-a085-4a73-8459-34c257682871 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Refreshing instance network info cache due to event network-changed-1ef97ad6-0798-4b3d-a9cc-562f9526ae38. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:53:26 compute-0 nova_compute[254092]: 2025-11-25 16:53:26.797 254096 DEBUG oslo_concurrency.lockutils [req-a7f32224-dc2a-4e2f-9509-986f8a76c65e req-9a9af927-a085-4a73-8459-34c257682871 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-b12625ea-31bf-4599-a248-4c6ced8e59c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:53:26 compute-0 nova_compute[254092]: 2025-11-25 16:53:26.934 254096 DEBUG oslo_concurrency.lockutils [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "b12625ea-31bf-4599-a248-4c6ced8e59c2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:53:26 compute-0 nova_compute[254092]: 2025-11-25 16:53:26.934 254096 DEBUG oslo_concurrency.lockutils [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:53:26 compute-0 nova_compute[254092]: 2025-11-25 16:53:26.934 254096 DEBUG oslo_concurrency.lockutils [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:53:26 compute-0 nova_compute[254092]: 2025-11-25 16:53:26.934 254096 DEBUG oslo_concurrency.lockutils [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:53:26 compute-0 nova_compute[254092]: 2025-11-25 16:53:26.935 254096 DEBUG oslo_concurrency.lockutils [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:53:26 compute-0 nova_compute[254092]: 2025-11-25 16:53:26.935 254096 INFO nova.compute.manager [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Terminating instance
Nov 25 16:53:26 compute-0 nova_compute[254092]: 2025-11-25 16:53:26.936 254096 DEBUG nova.compute.manager [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:53:26 compute-0 kernel: tap1ef97ad6-07 (unregistering): left promiscuous mode
Nov 25 16:53:26 compute-0 NetworkManager[48891]: <info>  [1764089606.9764] device (tap1ef97ad6-07): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:53:26 compute-0 nova_compute[254092]: 2025-11-25 16:53:26.984 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:26 compute-0 ovn_controller[153477]: 2025-11-25T16:53:26Z|01125|binding|INFO|Releasing lport 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 from this chassis (sb_readonly=0)
Nov 25 16:53:26 compute-0 ovn_controller[153477]: 2025-11-25T16:53:26Z|01126|binding|INFO|Setting lport 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 down in Southbound
Nov 25 16:53:26 compute-0 ovn_controller[153477]: 2025-11-25T16:53:26Z|01127|binding|INFO|Removing iface tap1ef97ad6-07 ovn-installed in OVS
Nov 25 16:53:26 compute-0 nova_compute[254092]: 2025-11-25 16:53:26.986 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:26.992 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:77:80 10.100.0.8'], port_security=['fa:16:3e:5e:77:80 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'b12625ea-31bf-4599-a248-4c6ced8e59c2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f92d76b523b84ec9b645382bdd3192c9', 'neutron:revision_number': '8', 'neutron:security_group_ids': '1e6bccce-d7db-425d-9e27-bd7a85fa0b4a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a32ed1af-29d9-45da-a55a-60d785fe0bce, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1ef97ad6-0798-4b3d-a9cc-562f9526ae38) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:53:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:26.994 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 in datapath aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1 unbound from our chassis
Nov 25 16:53:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:26.995 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:53:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:26.995 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[258622cb-dfe7-46a9-888e-5ed7f567d631]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:53:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:26.996 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1 namespace which is not needed anymore
Nov 25 16:53:27 compute-0 nova_compute[254092]: 2025-11-25 16:53:27.004 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:27 compute-0 systemd[1]: machine-qemu\x2d140\x2dinstance\x2d0000006d.scope: Deactivated successfully.
Nov 25 16:53:27 compute-0 ceph-mon[74985]: pgmap v2088: 321 pgs: 321 active+clean; 121 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 80 KiB/s rd, 105 KiB/s wr, 22 op/s
Nov 25 16:53:27 compute-0 systemd-machined[216343]: Machine qemu-140-instance-0000006d terminated.
Nov 25 16:53:27 compute-0 neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1[366368]: [NOTICE]   (366372) : haproxy version is 2.8.14-c23fe91
Nov 25 16:53:27 compute-0 neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1[366368]: [NOTICE]   (366372) : path to executable is /usr/sbin/haproxy
Nov 25 16:53:27 compute-0 neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1[366368]: [WARNING]  (366372) : Exiting Master process...
Nov 25 16:53:27 compute-0 neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1[366368]: [ALERT]    (366372) : Current worker (366374) exited with code 143 (Terminated)
Nov 25 16:53:27 compute-0 neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1[366368]: [WARNING]  (366372) : All workers exited. Exiting... (0)
Nov 25 16:53:27 compute-0 systemd[1]: libpod-e866478491d71ef5825f1d8bd9ca82c0048295cfb87d3ac749095e67b427bfd8.scope: Deactivated successfully.
Nov 25 16:53:27 compute-0 podman[366472]: 2025-11-25 16:53:27.128859088 +0000 UTC m=+0.046597787 container died e866478491d71ef5825f1d8bd9ca82c0048295cfb87d3ac749095e67b427bfd8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:53:27 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e866478491d71ef5825f1d8bd9ca82c0048295cfb87d3ac749095e67b427bfd8-userdata-shm.mount: Deactivated successfully.
Nov 25 16:53:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d6467f1eaed3cd66ef8c9f25e2f7b2beb1393d9ccf64e1b42e45ebee6456e03-merged.mount: Deactivated successfully.
Nov 25 16:53:27 compute-0 podman[366472]: 2025-11-25 16:53:27.170283134 +0000 UTC m=+0.088021833 container cleanup e866478491d71ef5825f1d8bd9ca82c0048295cfb87d3ac749095e67b427bfd8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:53:27 compute-0 systemd[1]: libpod-conmon-e866478491d71ef5825f1d8bd9ca82c0048295cfb87d3ac749095e67b427bfd8.scope: Deactivated successfully.
Nov 25 16:53:27 compute-0 nova_compute[254092]: 2025-11-25 16:53:27.179 254096 INFO nova.virt.libvirt.driver [-] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Instance destroyed successfully.
Nov 25 16:53:27 compute-0 nova_compute[254092]: 2025-11-25 16:53:27.179 254096 DEBUG nova.objects.instance [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lazy-loading 'resources' on Instance uuid b12625ea-31bf-4599-a248-4c6ced8e59c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:53:27 compute-0 nova_compute[254092]: 2025-11-25 16:53:27.189 254096 DEBUG nova.virt.libvirt.vif [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:52:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1982563586',display_name='tempest-TestNetworkAdvancedServerOps-server-1982563586',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1982563586',id=109,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBARA21hEW3+D06RLHnS5fFKgy30b2MZ1ew/5ZgTFs7Q0nQH9hOe8Z3jkbVm5M1kwXkWC30h/+QH5x5eRfxJUT8solIlf//6cGDYhyCYMmZXSakiWtWl4wz/5VtXXcA2QCA==',key_name='tempest-TestNetworkAdvancedServerOps-1679437763',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:52:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f92d76b523b84ec9b645382bdd3192c9',ramdisk_id='',reservation_id='r-7fugjamt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1110744031',owner_user_name='tempest-TestNetworkAdvancedServerOps-1110744031-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:53:23Z,user_data=None,user_id='3b30410358c448a693a9dc40ee6aafb0',uuid=b12625ea-31bf-4599-a248-4c6ced8e59c2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "address": "fa:16:3e:5e:77:80", "network": {"id": "aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1", "bridge": "br-int", "label": "tempest-network-smoke--2125799662", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ef97ad6-07", "ovs_interfaceid": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:53:27 compute-0 nova_compute[254092]: 2025-11-25 16:53:27.189 254096 DEBUG nova.network.os_vif_util [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converting VIF {"id": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "address": "fa:16:3e:5e:77:80", "network": {"id": "aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1", "bridge": "br-int", "label": "tempest-network-smoke--2125799662", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ef97ad6-07", "ovs_interfaceid": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:53:27 compute-0 nova_compute[254092]: 2025-11-25 16:53:27.190 254096 DEBUG nova.network.os_vif_util [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:77:80,bridge_name='br-int',has_traffic_filtering=True,id=1ef97ad6-0798-4b3d-a9cc-562f9526ae38,network=Network(aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ef97ad6-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:53:27 compute-0 nova_compute[254092]: 2025-11-25 16:53:27.190 254096 DEBUG os_vif [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:77:80,bridge_name='br-int',has_traffic_filtering=True,id=1ef97ad6-0798-4b3d-a9cc-562f9526ae38,network=Network(aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ef97ad6-07') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:53:27 compute-0 nova_compute[254092]: 2025-11-25 16:53:27.192 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:27 compute-0 nova_compute[254092]: 2025-11-25 16:53:27.192 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ef97ad6-07, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:53:27 compute-0 nova_compute[254092]: 2025-11-25 16:53:27.193 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:27 compute-0 nova_compute[254092]: 2025-11-25 16:53:27.195 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:53:27 compute-0 nova_compute[254092]: 2025-11-25 16:53:27.197 254096 INFO os_vif [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:77:80,bridge_name='br-int',has_traffic_filtering=True,id=1ef97ad6-0798-4b3d-a9cc-562f9526ae38,network=Network(aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ef97ad6-07')
Nov 25 16:53:27 compute-0 podman[366511]: 2025-11-25 16:53:27.240386809 +0000 UTC m=+0.043269366 container remove e866478491d71ef5825f1d8bd9ca82c0048295cfb87d3ac749095e67b427bfd8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 25 16:53:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:27.248 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0b50d11a-0f1d-4e1f-9618-e2814902f721]: (4, ('Tue Nov 25 04:53:27 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1 (e866478491d71ef5825f1d8bd9ca82c0048295cfb87d3ac749095e67b427bfd8)\ne866478491d71ef5825f1d8bd9ca82c0048295cfb87d3ac749095e67b427bfd8\nTue Nov 25 04:53:27 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1 (e866478491d71ef5825f1d8bd9ca82c0048295cfb87d3ac749095e67b427bfd8)\ne866478491d71ef5825f1d8bd9ca82c0048295cfb87d3ac749095e67b427bfd8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:53:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:27.250 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[47514d72-e114-4ac9-a785-db9d0b88618b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:53:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:27.252 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa7a2aa2-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:53:27 compute-0 nova_compute[254092]: 2025-11-25 16:53:27.254 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:27 compute-0 kernel: tapaa7a2aa2-90: left promiscuous mode
Nov 25 16:53:27 compute-0 nova_compute[254092]: 2025-11-25 16:53:27.271 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:27.275 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[29e633e3-8022-41d4-a5d7-b2949dd8560d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:53:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:27.300 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ec84c127-4d1c-47ca-b2a0-ed8f8e3e3a0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:53:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:27.302 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[242de98a-02f3-4ecf-ab67-cbefb12137dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:53:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:27.323 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c55c7b15-7e7b-4e26-b609-2d7fa760ade1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 606066, 'reachable_time': 38145, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366545, 'error': None, 'target': 'ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:53:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:27.325 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:53:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:27.325 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[56f585d7-5de6-47d1-a50a-4be83606976d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:53:27 compute-0 systemd[1]: run-netns-ovnmeta\x2daa7a2aa2\x2d9e73\x2d498f\x2dbac7\x2d4dcf3eb7c3d1.mount: Deactivated successfully.
Nov 25 16:53:27 compute-0 nova_compute[254092]: 2025-11-25 16:53:27.518 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Updating instance_info_cache with network_info: [{"id": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "address": "fa:16:3e:5e:77:80", "network": {"id": "aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1", "bridge": "br-int", "label": "tempest-network-smoke--2125799662", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ef97ad6-07", "ovs_interfaceid": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:53:27 compute-0 nova_compute[254092]: 2025-11-25 16:53:27.544 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-b12625ea-31bf-4599-a248-4c6ced8e59c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:53:27 compute-0 nova_compute[254092]: 2025-11-25 16:53:27.545 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 16:53:27 compute-0 nova_compute[254092]: 2025-11-25 16:53:27.545 254096 DEBUG oslo_concurrency.lockutils [req-a7f32224-dc2a-4e2f-9509-986f8a76c65e req-9a9af927-a085-4a73-8459-34c257682871 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-b12625ea-31bf-4599-a248-4c6ced8e59c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:53:27 compute-0 nova_compute[254092]: 2025-11-25 16:53:27.545 254096 DEBUG nova.network.neutron [req-a7f32224-dc2a-4e2f-9509-986f8a76c65e req-9a9af927-a085-4a73-8459-34c257682871 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Refreshing network info cache for port 1ef97ad6-0798-4b3d-a9cc-562f9526ae38 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:53:27 compute-0 nova_compute[254092]: 2025-11-25 16:53:27.601 254096 INFO nova.virt.libvirt.driver [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Deleting instance files /var/lib/nova/instances/b12625ea-31bf-4599-a248-4c6ced8e59c2_del
Nov 25 16:53:27 compute-0 nova_compute[254092]: 2025-11-25 16:53:27.601 254096 INFO nova.virt.libvirt.driver [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Deletion of /var/lib/nova/instances/b12625ea-31bf-4599-a248-4c6ced8e59c2_del complete
Nov 25 16:53:27 compute-0 nova_compute[254092]: 2025-11-25 16:53:27.660 254096 INFO nova.compute.manager [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Took 0.72 seconds to destroy the instance on the hypervisor.
Nov 25 16:53:27 compute-0 nova_compute[254092]: 2025-11-25 16:53:27.661 254096 DEBUG oslo.service.loopingcall [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:53:27 compute-0 nova_compute[254092]: 2025-11-25 16:53:27.661 254096 DEBUG nova.compute.manager [-] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:53:27 compute-0 nova_compute[254092]: 2025-11-25 16:53:27.661 254096 DEBUG nova.network.neutron [-] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:53:27 compute-0 nova_compute[254092]: 2025-11-25 16:53:27.726 254096 DEBUG nova.compute.manager [req-26556520-c6ec-4757-94b2-166e2a58b8e4 req-edcad552-1a50-49dd-8511-9ee6e8582e04 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received event network-vif-unplugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:53:27 compute-0 nova_compute[254092]: 2025-11-25 16:53:27.727 254096 DEBUG oslo_concurrency.lockutils [req-26556520-c6ec-4757-94b2-166e2a58b8e4 req-edcad552-1a50-49dd-8511-9ee6e8582e04 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:53:27 compute-0 nova_compute[254092]: 2025-11-25 16:53:27.727 254096 DEBUG oslo_concurrency.lockutils [req-26556520-c6ec-4757-94b2-166e2a58b8e4 req-edcad552-1a50-49dd-8511-9ee6e8582e04 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:53:27 compute-0 nova_compute[254092]: 2025-11-25 16:53:27.727 254096 DEBUG oslo_concurrency.lockutils [req-26556520-c6ec-4757-94b2-166e2a58b8e4 req-edcad552-1a50-49dd-8511-9ee6e8582e04 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:53:27 compute-0 nova_compute[254092]: 2025-11-25 16:53:27.727 254096 DEBUG nova.compute.manager [req-26556520-c6ec-4757-94b2-166e2a58b8e4 req-edcad552-1a50-49dd-8511-9ee6e8582e04 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] No waiting events found dispatching network-vif-unplugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:53:27 compute-0 nova_compute[254092]: 2025-11-25 16:53:27.728 254096 DEBUG nova.compute.manager [req-26556520-c6ec-4757-94b2-166e2a58b8e4 req-edcad552-1a50-49dd-8511-9ee6e8582e04 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received event network-vif-unplugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:53:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2089: 321 pgs: 321 active+clean; 121 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 28 KiB/s wr, 13 op/s
Nov 25 16:53:28 compute-0 nova_compute[254092]: 2025-11-25 16:53:28.365 254096 DEBUG nova.network.neutron [-] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:53:28 compute-0 nova_compute[254092]: 2025-11-25 16:53:28.386 254096 INFO nova.compute.manager [-] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Took 0.73 seconds to deallocate network for instance.
Nov 25 16:53:28 compute-0 nova_compute[254092]: 2025-11-25 16:53:28.445 254096 DEBUG oslo_concurrency.lockutils [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:53:28 compute-0 nova_compute[254092]: 2025-11-25 16:53:28.446 254096 DEBUG oslo_concurrency.lockutils [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:53:28 compute-0 nova_compute[254092]: 2025-11-25 16:53:28.503 254096 DEBUG oslo_concurrency.processutils [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:53:28 compute-0 nova_compute[254092]: 2025-11-25 16:53:28.924 254096 DEBUG nova.compute.manager [req-1b20025f-bd19-4c25-a578-e70e8a227d2d req-94144e38-b2af-4983-97a6-bb020d38adb7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received event network-vif-deleted-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:53:28 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:53:28 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2146727220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:53:28 compute-0 nova_compute[254092]: 2025-11-25 16:53:28.969 254096 DEBUG oslo_concurrency.processutils [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:53:28 compute-0 nova_compute[254092]: 2025-11-25 16:53:28.975 254096 DEBUG nova.compute.provider_tree [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:53:28 compute-0 nova_compute[254092]: 2025-11-25 16:53:28.987 254096 DEBUG nova.scheduler.client.report [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:53:29 compute-0 nova_compute[254092]: 2025-11-25 16:53:29.005 254096 DEBUG oslo_concurrency.lockutils [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:53:29 compute-0 nova_compute[254092]: 2025-11-25 16:53:29.012 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:29 compute-0 ceph-mon[74985]: pgmap v2089: 321 pgs: 321 active+clean; 121 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 28 KiB/s wr, 13 op/s
Nov 25 16:53:29 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2146727220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:53:29 compute-0 nova_compute[254092]: 2025-11-25 16:53:29.033 254096 INFO nova.scheduler.client.report [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Deleted allocations for instance b12625ea-31bf-4599-a248-4c6ced8e59c2
Nov 25 16:53:29 compute-0 nova_compute[254092]: 2025-11-25 16:53:29.083 254096 DEBUG oslo_concurrency.lockutils [None req-e888c670-2f8e-4a53-bfc4-6a9870f81e78 3b30410358c448a693a9dc40ee6aafb0 f92d76b523b84ec9b645382bdd3192c9 - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.149s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:53:29 compute-0 nova_compute[254092]: 2025-11-25 16:53:29.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:53:29 compute-0 nova_compute[254092]: 2025-11-25 16:53:29.520 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:53:29 compute-0 nova_compute[254092]: 2025-11-25 16:53:29.601 254096 DEBUG nova.network.neutron [req-a7f32224-dc2a-4e2f-9509-986f8a76c65e req-9a9af927-a085-4a73-8459-34c257682871 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Updated VIF entry in instance network info cache for port 1ef97ad6-0798-4b3d-a9cc-562f9526ae38. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:53:29 compute-0 nova_compute[254092]: 2025-11-25 16:53:29.602 254096 DEBUG nova.network.neutron [req-a7f32224-dc2a-4e2f-9509-986f8a76c65e req-9a9af927-a085-4a73-8459-34c257682871 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Updating instance_info_cache with network_info: [{"id": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "address": "fa:16:3e:5e:77:80", "network": {"id": "aa7a2aa2-9e73-498f-bac7-4dcf3eb7c3d1", "bridge": "br-int", "label": "tempest-network-smoke--2125799662", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f92d76b523b84ec9b645382bdd3192c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ef97ad6-07", "ovs_interfaceid": "1ef97ad6-0798-4b3d-a9cc-562f9526ae38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:53:29 compute-0 nova_compute[254092]: 2025-11-25 16:53:29.630 254096 DEBUG oslo_concurrency.lockutils [req-a7f32224-dc2a-4e2f-9509-986f8a76c65e req-9a9af927-a085-4a73-8459-34c257682871 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-b12625ea-31bf-4599-a248-4c6ced8e59c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:53:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2090: 321 pgs: 321 active+clean; 121 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 5.1 KiB/s rd, 12 KiB/s wr, 5 op/s
Nov 25 16:53:29 compute-0 nova_compute[254092]: 2025-11-25 16:53:29.940 254096 DEBUG nova.compute.manager [req-b26b6988-ca71-4b8b-a4ea-8642a6e8ba34 req-e083e450-e193-4425-9991-e13eb21aebc7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received event network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:53:29 compute-0 nova_compute[254092]: 2025-11-25 16:53:29.941 254096 DEBUG oslo_concurrency.lockutils [req-b26b6988-ca71-4b8b-a4ea-8642a6e8ba34 req-e083e450-e193-4425-9991-e13eb21aebc7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:53:29 compute-0 nova_compute[254092]: 2025-11-25 16:53:29.942 254096 DEBUG oslo_concurrency.lockutils [req-b26b6988-ca71-4b8b-a4ea-8642a6e8ba34 req-e083e450-e193-4425-9991-e13eb21aebc7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:53:29 compute-0 nova_compute[254092]: 2025-11-25 16:53:29.942 254096 DEBUG oslo_concurrency.lockutils [req-b26b6988-ca71-4b8b-a4ea-8642a6e8ba34 req-e083e450-e193-4425-9991-e13eb21aebc7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b12625ea-31bf-4599-a248-4c6ced8e59c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:53:29 compute-0 nova_compute[254092]: 2025-11-25 16:53:29.942 254096 DEBUG nova.compute.manager [req-b26b6988-ca71-4b8b-a4ea-8642a6e8ba34 req-e083e450-e193-4425-9991-e13eb21aebc7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] No waiting events found dispatching network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:53:29 compute-0 nova_compute[254092]: 2025-11-25 16:53:29.942 254096 WARNING nova.compute.manager [req-b26b6988-ca71-4b8b-a4ea-8642a6e8ba34 req-e083e450-e193-4425-9991-e13eb21aebc7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Received unexpected event network-vif-plugged-1ef97ad6-0798-4b3d-a9cc-562f9526ae38 for instance with vm_state deleted and task_state None.
Nov 25 16:53:31 compute-0 ceph-mon[74985]: pgmap v2090: 321 pgs: 321 active+clean; 121 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 5.1 KiB/s rd, 12 KiB/s wr, 5 op/s
Nov 25 16:53:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:53:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2091: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 13 KiB/s wr, 33 op/s
Nov 25 16:53:32 compute-0 nova_compute[254092]: 2025-11-25 16:53:32.194 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:33 compute-0 ceph-mon[74985]: pgmap v2091: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 13 KiB/s wr, 33 op/s
Nov 25 16:53:33 compute-0 nova_compute[254092]: 2025-11-25 16:53:33.746 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:33 compute-0 nova_compute[254092]: 2025-11-25 16:53:33.782 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2092: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Nov 25 16:53:34 compute-0 nova_compute[254092]: 2025-11-25 16:53:34.013 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:35 compute-0 ceph-mon[74985]: pgmap v2092: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Nov 25 16:53:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2093: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Nov 25 16:53:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:53:36 compute-0 nova_compute[254092]: 2025-11-25 16:53:36.513 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:53:36 compute-0 nova_compute[254092]: 2025-11-25 16:53:36.513 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 16:53:36 compute-0 nova_compute[254092]: 2025-11-25 16:53:36.530 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 16:53:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:36.774 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:5a:27 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-8306a8bb-cdd9-40cb-9050-ddc2efdcd179', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8306a8bb-cdd9-40cb-9050-ddc2efdcd179', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fefb0bc9b5b6422dbf452116a9137318', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a791f910-85c1-412c-90b8-601e3d703b65, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=5cfd93ef-7743-40d7-8957-75a8c3d82550) old=Port_Binding(mac=['fa:16:3e:f5:5a:27 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-8306a8bb-cdd9-40cb-9050-ddc2efdcd179', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8306a8bb-cdd9-40cb-9050-ddc2efdcd179', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fefb0bc9b5b6422dbf452116a9137318', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:53:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:36.775 163338 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 5cfd93ef-7743-40d7-8957-75a8c3d82550 in datapath 8306a8bb-cdd9-40cb-9050-ddc2efdcd179 updated
Nov 25 16:53:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:36.776 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8306a8bb-cdd9-40cb-9050-ddc2efdcd179, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:53:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:36.777 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c9591658-1b0c-44eb-9c46-468f78d4218d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:53:37 compute-0 ceph-mon[74985]: pgmap v2093: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Nov 25 16:53:37 compute-0 nova_compute[254092]: 2025-11-25 16:53:37.197 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:37 compute-0 nova_compute[254092]: 2025-11-25 16:53:37.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:53:37 compute-0 nova_compute[254092]: 2025-11-25 16:53:37.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 16:53:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2094: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 16:53:39 compute-0 nova_compute[254092]: 2025-11-25 16:53:39.017 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:39 compute-0 ceph-mon[74985]: pgmap v2094: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 16:53:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2095: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 16:53:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:53:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:53:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:53:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:53:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:53:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:53:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:53:40
Nov 25 16:53:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:53:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:53:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', 'images', 'backups', '.mgr']
Nov 25 16:53:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:53:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:53:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:53:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:53:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:53:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:53:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:53:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:53:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:53:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:53:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:53:40 compute-0 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 16:53:40 compute-0 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.2 total, 600.0 interval
                                           Cumulative writes: 32K writes, 132K keys, 32K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.03 MB/s
                                           Cumulative WAL: 32K writes, 11K syncs, 2.89 writes per sync, written: 0.12 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 9383 writes, 36K keys, 9383 commit groups, 1.0 writes per commit group, ingest: 35.89 MB, 0.06 MB/s
                                           Interval WAL: 9384 writes, 3775 syncs, 2.49 writes per sync, written: 0.04 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 16:53:41 compute-0 ceph-mon[74985]: pgmap v2095: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 16:53:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:53:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2096: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 16:53:42 compute-0 nova_compute[254092]: 2025-11-25 16:53:42.177 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089607.1757858, b12625ea-31bf-4599-a248-4c6ced8e59c2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:53:42 compute-0 nova_compute[254092]: 2025-11-25 16:53:42.177 254096 INFO nova.compute.manager [-] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] VM Stopped (Lifecycle Event)
Nov 25 16:53:42 compute-0 nova_compute[254092]: 2025-11-25 16:53:42.250 254096 DEBUG nova.compute.manager [None req-27e55956-1eeb-478b-a89c-c6ae02145609 - - - - - -] [instance: b12625ea-31bf-4599-a248-4c6ced8e59c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:53:42 compute-0 nova_compute[254092]: 2025-11-25 16:53:42.252 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:43 compute-0 ceph-mon[74985]: pgmap v2096: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 16:53:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:43.603 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:53:43 compute-0 nova_compute[254092]: 2025-11-25 16:53:43.604 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:43.605 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 16:53:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2097: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:53:44 compute-0 nova_compute[254092]: 2025-11-25 16:53:44.020 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:45 compute-0 ceph-mon[74985]: pgmap v2097: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:53:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2098: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:53:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:53:47 compute-0 ceph-mon[74985]: pgmap v2098: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:53:47 compute-0 nova_compute[254092]: 2025-11-25 16:53:47.254 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2099: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:53:49 compute-0 nova_compute[254092]: 2025-11-25 16:53:49.022 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:49 compute-0 ceph-mon[74985]: pgmap v2099: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:53:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2100: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:53:50 compute-0 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 16:53:50 compute-0 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3601.2 total, 600.0 interval
                                           Cumulative writes: 34K writes, 133K keys, 34K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.04 MB/s
                                           Cumulative WAL: 34K writes, 12K syncs, 2.87 writes per sync, written: 0.13 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 9033 writes, 34K keys, 9033 commit groups, 1.0 writes per commit group, ingest: 35.03 MB, 0.06 MB/s
                                           Interval WAL: 9033 writes, 3578 syncs, 2.52 writes per sync, written: 0.03 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 16:53:51 compute-0 ceph-mon[74985]: pgmap v2100: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:53:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:53:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:53:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:53:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:53:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:53:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 16:53:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:53:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:53:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:53:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:53:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:53:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 16:53:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:53:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 16:53:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:53:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:53:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:53:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 16:53:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:53:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 16:53:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:53:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:53:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:53:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 16:53:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2101: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:53:52 compute-0 nova_compute[254092]: 2025-11-25 16:53:52.299 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:53:52.606 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:53:53 compute-0 ceph-mon[74985]: pgmap v2101: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:53:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2102: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:53:54 compute-0 nova_compute[254092]: 2025-11-25 16:53:54.024 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:55 compute-0 ceph-mon[74985]: pgmap v2102: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:53:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 16:53:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1141173161' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:53:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 16:53:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1141173161' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:53:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2103: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:53:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/1141173161' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:53:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/1141173161' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:53:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:53:56 compute-0 podman[366571]: 2025-11-25 16:53:56.640925948 +0000 UTC m=+0.059716097 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 16:53:56 compute-0 podman[366570]: 2025-11-25 16:53:56.663468192 +0000 UTC m=+0.082920418 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118)
Nov 25 16:53:56 compute-0 podman[366572]: 2025-11-25 16:53:56.683361554 +0000 UTC m=+0.094037712 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 16:53:57 compute-0 ceph-mon[74985]: pgmap v2103: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:53:57 compute-0 nova_compute[254092]: 2025-11-25 16:53:57.301 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2104: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:53:59 compute-0 nova_compute[254092]: 2025-11-25 16:53:59.026 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:53:59 compute-0 ceph-mon[74985]: pgmap v2104: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:53:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2105: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:01 compute-0 ceph-mon[74985]: pgmap v2105: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:54:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2106: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:02 compute-0 nova_compute[254092]: 2025-11-25 16:54:02.303 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:54:03 compute-0 ceph-mon[74985]: pgmap v2106: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:54:03.564 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c8:15:fe 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-fda12e20-ad43-4206-95a1-86c7a084bd24', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fda12e20-ad43-4206-95a1-86c7a084bd24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fefb0bc9b5b6422dbf452116a9137318', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e7c702cf-93bf-4690-931b-d19f31e3f140, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=677dcda5-8641-460d-b058-dce18b1330bf) old=Port_Binding(mac=['fa:16:3e:c8:15:fe 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-fda12e20-ad43-4206-95a1-86c7a084bd24', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fda12e20-ad43-4206-95a1-86c7a084bd24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fefb0bc9b5b6422dbf452116a9137318', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:54:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:54:03.566 163338 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 677dcda5-8641-460d-b058-dce18b1330bf in datapath fda12e20-ad43-4206-95a1-86c7a084bd24 updated
Nov 25 16:54:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:54:03.567 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fda12e20-ad43-4206-95a1-86c7a084bd24, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:54:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:54:03.569 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[08446be3-bc98-47dd-b5b9-ae7b1c30c8c1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:54:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2107: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:04 compute-0 nova_compute[254092]: 2025-11-25 16:54:04.027 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:54:05 compute-0 ceph-mon[74985]: pgmap v2107: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2108: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:54:07 compute-0 ceph-mon[74985]: pgmap v2108: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:07 compute-0 nova_compute[254092]: 2025-11-25 16:54:07.305 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:54:07 compute-0 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 16:54:07 compute-0 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3601.5 total, 600.0 interval
                                           Cumulative writes: 29K writes, 116K keys, 29K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.03 MB/s
                                           Cumulative WAL: 29K writes, 10K syncs, 2.87 writes per sync, written: 0.11 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8344 writes, 32K keys, 8344 commit groups, 1.0 writes per commit group, ingest: 34.73 MB, 0.06 MB/s
                                           Interval WAL: 8344 writes, 3341 syncs, 2.50 writes per sync, written: 0.03 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 16:54:07 compute-0 sudo[366635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:54:07 compute-0 sudo[366635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:54:07 compute-0 sudo[366635]: pam_unix(sudo:session): session closed for user root
Nov 25 16:54:07 compute-0 sudo[366660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:54:07 compute-0 sudo[366660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:54:07 compute-0 sudo[366660]: pam_unix(sudo:session): session closed for user root
Nov 25 16:54:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2109: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:07 compute-0 sudo[366685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:54:07 compute-0 sudo[366685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:54:07 compute-0 sudo[366685]: pam_unix(sudo:session): session closed for user root
Nov 25 16:54:08 compute-0 sudo[366710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 16:54:08 compute-0 sudo[366710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:54:08 compute-0 sudo[366710]: pam_unix(sudo:session): session closed for user root
Nov 25 16:54:08 compute-0 sudo[366765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:54:08 compute-0 sudo[366765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:54:08 compute-0 sudo[366765]: pam_unix(sudo:session): session closed for user root
Nov 25 16:54:08 compute-0 sudo[366790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:54:08 compute-0 sudo[366790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:54:08 compute-0 sudo[366790]: pam_unix(sudo:session): session closed for user root
Nov 25 16:54:08 compute-0 sudo[366815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:54:08 compute-0 sudo[366815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:54:08 compute-0 sudo[366815]: pam_unix(sudo:session): session closed for user root
Nov 25 16:54:08 compute-0 sudo[366840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Nov 25 16:54:08 compute-0 sudo[366840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:54:09 compute-0 nova_compute[254092]: 2025-11-25 16:54:09.029 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:54:09 compute-0 sudo[366840]: pam_unix(sudo:session): session closed for user root
Nov 25 16:54:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:54:09 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:54:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:54:09 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:54:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:54:09 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:54:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 16:54:09 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:54:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 16:54:09 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:54:09 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev ff8a111a-ca02-4923-a418-97831a29ab35 does not exist
Nov 25 16:54:09 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 6491f478-3496-4ecd-9613-ff8a0f0ef3cb does not exist
Nov 25 16:54:09 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev e9db6add-d45b-4d88-b6c9-602adb6a7de3 does not exist
Nov 25 16:54:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 16:54:09 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:54:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 16:54:09 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:54:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:54:09 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:54:09 compute-0 sudo[366883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:54:09 compute-0 sudo[366883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:54:09 compute-0 sudo[366883]: pam_unix(sudo:session): session closed for user root
Nov 25 16:54:09 compute-0 ceph-mon[74985]: pgmap v2109: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:09 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:54:09 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:54:09 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:54:09 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:54:09 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:54:09 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:54:09 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:54:09 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:54:09 compute-0 sudo[366908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:54:09 compute-0 sudo[366908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:54:09 compute-0 sudo[366908]: pam_unix(sudo:session): session closed for user root
Nov 25 16:54:09 compute-0 sudo[366933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:54:09 compute-0 sudo[366933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:54:09 compute-0 sudo[366933]: pam_unix(sudo:session): session closed for user root
Nov 25 16:54:09 compute-0 sudo[366958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 16:54:09 compute-0 sudo[366958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:54:09 compute-0 podman[367022]: 2025-11-25 16:54:09.867735561 +0000 UTC m=+0.057994011 container create 47c24f5a85e56309464749c6ab0dee4b12bafe02d353203ad1b22b02611b8d26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_khorana, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 16:54:09 compute-0 systemd[1]: Started libpod-conmon-47c24f5a85e56309464749c6ab0dee4b12bafe02d353203ad1b22b02611b8d26.scope.
Nov 25 16:54:09 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:54:09 compute-0 podman[367022]: 2025-11-25 16:54:09.935230399 +0000 UTC m=+0.125488849 container init 47c24f5a85e56309464749c6ab0dee4b12bafe02d353203ad1b22b02611b8d26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_khorana, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:54:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2110: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:09 compute-0 podman[367022]: 2025-11-25 16:54:09.943071733 +0000 UTC m=+0.133330173 container start 47c24f5a85e56309464749c6ab0dee4b12bafe02d353203ad1b22b02611b8d26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_khorana, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:54:09 compute-0 podman[367022]: 2025-11-25 16:54:09.84898968 +0000 UTC m=+0.039248130 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:54:09 compute-0 podman[367022]: 2025-11-25 16:54:09.946520577 +0000 UTC m=+0.136779027 container attach 47c24f5a85e56309464749c6ab0dee4b12bafe02d353203ad1b22b02611b8d26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_khorana, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 16:54:09 compute-0 pensive_khorana[367038]: 167 167
Nov 25 16:54:09 compute-0 systemd[1]: libpod-47c24f5a85e56309464749c6ab0dee4b12bafe02d353203ad1b22b02611b8d26.scope: Deactivated successfully.
Nov 25 16:54:09 compute-0 podman[367022]: 2025-11-25 16:54:09.947824122 +0000 UTC m=+0.138082602 container died 47c24f5a85e56309464749c6ab0dee4b12bafe02d353203ad1b22b02611b8d26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 16:54:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-35a4d1ea74aad40cb31332298718d92d7ad5d60142ab1565d0b7102bf3a2874d-merged.mount: Deactivated successfully.
Nov 25 16:54:09 compute-0 podman[367022]: 2025-11-25 16:54:09.985671013 +0000 UTC m=+0.175929493 container remove 47c24f5a85e56309464749c6ab0dee4b12bafe02d353203ad1b22b02611b8d26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:54:10 compute-0 systemd[1]: libpod-conmon-47c24f5a85e56309464749c6ab0dee4b12bafe02d353203ad1b22b02611b8d26.scope: Deactivated successfully.
Nov 25 16:54:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:54:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:54:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:54:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:54:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:54:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:54:10 compute-0 podman[367062]: 2025-11-25 16:54:10.156714341 +0000 UTC m=+0.034031608 container create 7e89a45c5bbcc66d41cdc0cb2dfea4ebb579e62445a48958728777af198b5125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_jones, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 16:54:10 compute-0 systemd[1]: Started libpod-conmon-7e89a45c5bbcc66d41cdc0cb2dfea4ebb579e62445a48958728777af198b5125.scope.
Nov 25 16:54:10 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:54:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ba70a1b3a7eef3dedf743c1c3e28acc03e0682e5e8ed0c8b99be9f4b5c67b71/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:54:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ba70a1b3a7eef3dedf743c1c3e28acc03e0682e5e8ed0c8b99be9f4b5c67b71/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:54:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ba70a1b3a7eef3dedf743c1c3e28acc03e0682e5e8ed0c8b99be9f4b5c67b71/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:54:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ba70a1b3a7eef3dedf743c1c3e28acc03e0682e5e8ed0c8b99be9f4b5c67b71/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:54:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ba70a1b3a7eef3dedf743c1c3e28acc03e0682e5e8ed0c8b99be9f4b5c67b71/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 16:54:10 compute-0 podman[367062]: 2025-11-25 16:54:10.235206519 +0000 UTC m=+0.112523826 container init 7e89a45c5bbcc66d41cdc0cb2dfea4ebb579e62445a48958728777af198b5125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 16:54:10 compute-0 podman[367062]: 2025-11-25 16:54:10.142110014 +0000 UTC m=+0.019427281 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:54:10 compute-0 podman[367062]: 2025-11-25 16:54:10.244868312 +0000 UTC m=+0.122185599 container start 7e89a45c5bbcc66d41cdc0cb2dfea4ebb579e62445a48958728777af198b5125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_jones, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 16:54:10 compute-0 podman[367062]: 2025-11-25 16:54:10.248149151 +0000 UTC m=+0.125466498 container attach 7e89a45c5bbcc66d41cdc0cb2dfea4ebb579e62445a48958728777af198b5125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_jones, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:54:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:54:11 compute-0 nifty_jones[367079]: --> passed data devices: 0 physical, 3 LVM
Nov 25 16:54:11 compute-0 nifty_jones[367079]: --> relative data size: 1.0
Nov 25 16:54:11 compute-0 nifty_jones[367079]: --> All data devices are unavailable
Nov 25 16:54:11 compute-0 systemd[1]: libpod-7e89a45c5bbcc66d41cdc0cb2dfea4ebb579e62445a48958728777af198b5125.scope: Deactivated successfully.
Nov 25 16:54:11 compute-0 systemd[1]: libpod-7e89a45c5bbcc66d41cdc0cb2dfea4ebb579e62445a48958728777af198b5125.scope: Consumed 1.001s CPU time.
Nov 25 16:54:11 compute-0 conmon[367079]: conmon 7e89a45c5bbcc66d41cd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7e89a45c5bbcc66d41cdc0cb2dfea4ebb579e62445a48958728777af198b5125.scope/container/memory.events
Nov 25 16:54:11 compute-0 podman[367062]: 2025-11-25 16:54:11.305083059 +0000 UTC m=+1.182400326 container died 7e89a45c5bbcc66d41cdc0cb2dfea4ebb579e62445a48958728777af198b5125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_jones, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:54:11 compute-0 ceph-mon[74985]: pgmap v2110: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ba70a1b3a7eef3dedf743c1c3e28acc03e0682e5e8ed0c8b99be9f4b5c67b71-merged.mount: Deactivated successfully.
Nov 25 16:54:11 compute-0 podman[367062]: 2025-11-25 16:54:11.354761992 +0000 UTC m=+1.232079269 container remove 7e89a45c5bbcc66d41cdc0cb2dfea4ebb579e62445a48958728777af198b5125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:54:11 compute-0 systemd[1]: libpod-conmon-7e89a45c5bbcc66d41cdc0cb2dfea4ebb579e62445a48958728777af198b5125.scope: Deactivated successfully.
Nov 25 16:54:11 compute-0 sudo[366958]: pam_unix(sudo:session): session closed for user root
Nov 25 16:54:11 compute-0 sudo[367122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:54:11 compute-0 sudo[367122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:54:11 compute-0 sudo[367122]: pam_unix(sudo:session): session closed for user root
Nov 25 16:54:11 compute-0 sudo[367147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:54:11 compute-0 sudo[367147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:54:11 compute-0 sudo[367147]: pam_unix(sudo:session): session closed for user root
Nov 25 16:54:11 compute-0 sudo[367172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:54:11 compute-0 sudo[367172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:54:11 compute-0 sudo[367172]: pam_unix(sudo:session): session closed for user root
Nov 25 16:54:11 compute-0 sudo[367197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 16:54:11 compute-0 sudo[367197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:54:11 compute-0 podman[367258]: 2025-11-25 16:54:11.928529368 +0000 UTC m=+0.034822019 container create e3c3a32eea87184b2c8526a92f018ced2a69a8980630e52da3138dafa1d764cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 16:54:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2111: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:11 compute-0 systemd[1]: Started libpod-conmon-e3c3a32eea87184b2c8526a92f018ced2a69a8980630e52da3138dafa1d764cd.scope.
Nov 25 16:54:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:54:11 compute-0 podman[367258]: 2025-11-25 16:54:11.984290257 +0000 UTC m=+0.090582928 container init e3c3a32eea87184b2c8526a92f018ced2a69a8980630e52da3138dafa1d764cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_cerf, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 16:54:11 compute-0 podman[367258]: 2025-11-25 16:54:11.991240137 +0000 UTC m=+0.097532788 container start e3c3a32eea87184b2c8526a92f018ced2a69a8980630e52da3138dafa1d764cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_cerf, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:54:11 compute-0 podman[367258]: 2025-11-25 16:54:11.9946836 +0000 UTC m=+0.100976251 container attach e3c3a32eea87184b2c8526a92f018ced2a69a8980630e52da3138dafa1d764cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_cerf, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 16:54:11 compute-0 competent_cerf[367274]: 167 167
Nov 25 16:54:11 compute-0 systemd[1]: libpod-e3c3a32eea87184b2c8526a92f018ced2a69a8980630e52da3138dafa1d764cd.scope: Deactivated successfully.
Nov 25 16:54:11 compute-0 podman[367258]: 2025-11-25 16:54:11.996691115 +0000 UTC m=+0.102983766 container died e3c3a32eea87184b2c8526a92f018ced2a69a8980630e52da3138dafa1d764cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_cerf, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 16:54:12 compute-0 podman[367258]: 2025-11-25 16:54:11.914724412 +0000 UTC m=+0.021017083 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:54:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-4cf68c465cfbc91d416633d26ae9b29838d96983d74b1d6d4ce8aa39ea856a04-merged.mount: Deactivated successfully.
Nov 25 16:54:12 compute-0 podman[367258]: 2025-11-25 16:54:12.024292387 +0000 UTC m=+0.130585038 container remove e3c3a32eea87184b2c8526a92f018ced2a69a8980630e52da3138dafa1d764cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_cerf, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:54:12 compute-0 systemd[1]: libpod-conmon-e3c3a32eea87184b2c8526a92f018ced2a69a8980630e52da3138dafa1d764cd.scope: Deactivated successfully.
Nov 25 16:54:12 compute-0 podman[367298]: 2025-11-25 16:54:12.215436333 +0000 UTC m=+0.064932430 container create 9ee77f5786b99d5af551f15235f2b7832a4398bb5dc4f3c6297decff497888c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_rhodes, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:54:12 compute-0 podman[367298]: 2025-11-25 16:54:12.170414937 +0000 UTC m=+0.019911074 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:54:12 compute-0 nova_compute[254092]: 2025-11-25 16:54:12.307 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:54:12 compute-0 systemd[1]: Started libpod-conmon-9ee77f5786b99d5af551f15235f2b7832a4398bb5dc4f3c6297decff497888c0.scope.
Nov 25 16:54:12 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:54:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73a10e3fc97db29ed75626ea23802c7171bb675126b317cd372c2514f791b3aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:54:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73a10e3fc97db29ed75626ea23802c7171bb675126b317cd372c2514f791b3aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:54:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73a10e3fc97db29ed75626ea23802c7171bb675126b317cd372c2514f791b3aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:54:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73a10e3fc97db29ed75626ea23802c7171bb675126b317cd372c2514f791b3aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:54:12 compute-0 ceph-mon[74985]: pgmap v2111: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:12 compute-0 podman[367298]: 2025-11-25 16:54:12.41177176 +0000 UTC m=+0.261267857 container init 9ee77f5786b99d5af551f15235f2b7832a4398bb5dc4f3c6297decff497888c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:54:12 compute-0 podman[367298]: 2025-11-25 16:54:12.42392422 +0000 UTC m=+0.273420297 container start 9ee77f5786b99d5af551f15235f2b7832a4398bb5dc4f3c6297decff497888c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 16:54:12 compute-0 podman[367298]: 2025-11-25 16:54:12.488627823 +0000 UTC m=+0.338123930 container attach 9ee77f5786b99d5af551f15235f2b7832a4398bb5dc4f3c6297decff497888c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]: {
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:     "0": [
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:         {
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:             "devices": [
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:                 "/dev/loop3"
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:             ],
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:             "lv_name": "ceph_lv0",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:             "lv_size": "21470642176",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:             "name": "ceph_lv0",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:             "tags": {
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:                 "ceph.cluster_name": "ceph",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:                 "ceph.crush_device_class": "",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:                 "ceph.encrypted": "0",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:                 "ceph.osd_id": "0",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:                 "ceph.type": "block",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:                 "ceph.vdo": "0"
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:             },
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:             "type": "block",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:             "vg_name": "ceph_vg0"
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:         }
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:     ],
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:     "1": [
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:         {
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:             "devices": [
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:                 "/dev/loop4"
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:             ],
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:             "lv_name": "ceph_lv1",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:             "lv_size": "21470642176",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:             "name": "ceph_lv1",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:             "tags": {
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:                 "ceph.cluster_name": "ceph",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:                 "ceph.crush_device_class": "",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:                 "ceph.encrypted": "0",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:                 "ceph.osd_id": "1",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:                 "ceph.type": "block",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:                 "ceph.vdo": "0"
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:             },
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:             "type": "block",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:             "vg_name": "ceph_vg1"
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:         }
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:     ],
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:     "2": [
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:         {
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:             "devices": [
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:                 "/dev/loop5"
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:             ],
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:             "lv_name": "ceph_lv2",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:             "lv_size": "21470642176",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:             "name": "ceph_lv2",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:             "tags": {
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:                 "ceph.cluster_name": "ceph",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:                 "ceph.crush_device_class": "",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:                 "ceph.encrypted": "0",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:                 "ceph.osd_id": "2",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:                 "ceph.type": "block",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:                 "ceph.vdo": "0"
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:             },
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:             "type": "block",
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:             "vg_name": "ceph_vg2"
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:         }
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]:     ]
Nov 25 16:54:13 compute-0 interesting_rhodes[367315]: }
Nov 25 16:54:13 compute-0 systemd[1]: libpod-9ee77f5786b99d5af551f15235f2b7832a4398bb5dc4f3c6297decff497888c0.scope: Deactivated successfully.
Nov 25 16:54:13 compute-0 podman[367298]: 2025-11-25 16:54:13.214611695 +0000 UTC m=+1.064107772 container died 9ee77f5786b99d5af551f15235f2b7832a4398bb5dc4f3c6297decff497888c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 16:54:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-73a10e3fc97db29ed75626ea23802c7171bb675126b317cd372c2514f791b3aa-merged.mount: Deactivated successfully.
Nov 25 16:54:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:54:13.410 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c8:15:fe 10.100.0.18 10.100.0.2 10.100.0.34'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28 10.100.0.34/28', 'neutron:device_id': 'ovnmeta-fda12e20-ad43-4206-95a1-86c7a084bd24', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fda12e20-ad43-4206-95a1-86c7a084bd24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fefb0bc9b5b6422dbf452116a9137318', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e7c702cf-93bf-4690-931b-d19f31e3f140, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=677dcda5-8641-460d-b058-dce18b1330bf) old=Port_Binding(mac=['fa:16:3e:c8:15:fe 10.100.0.18 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-fda12e20-ad43-4206-95a1-86c7a084bd24', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fda12e20-ad43-4206-95a1-86c7a084bd24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fefb0bc9b5b6422dbf452116a9137318', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:54:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:54:13.413 163338 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 677dcda5-8641-460d-b058-dce18b1330bf in datapath fda12e20-ad43-4206-95a1-86c7a084bd24 updated
Nov 25 16:54:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:54:13.414 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fda12e20-ad43-4206-95a1-86c7a084bd24, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:54:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:54:13.415 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bd58c851-1584-4a23-adcb-69c3991684ab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:54:13 compute-0 podman[367298]: 2025-11-25 16:54:13.424321267 +0000 UTC m=+1.273817344 container remove 9ee77f5786b99d5af551f15235f2b7832a4398bb5dc4f3c6297decff497888c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:54:13 compute-0 systemd[1]: libpod-conmon-9ee77f5786b99d5af551f15235f2b7832a4398bb5dc4f3c6297decff497888c0.scope: Deactivated successfully.
Nov 25 16:54:13 compute-0 sudo[367197]: pam_unix(sudo:session): session closed for user root
Nov 25 16:54:13 compute-0 sudo[367338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:54:13 compute-0 sudo[367338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:54:13 compute-0 sudo[367338]: pam_unix(sudo:session): session closed for user root
Nov 25 16:54:13 compute-0 sudo[367363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:54:13 compute-0 sudo[367363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:54:13 compute-0 sudo[367363]: pam_unix(sudo:session): session closed for user root
Nov 25 16:54:13 compute-0 sudo[367388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:54:13 compute-0 sudo[367388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:54:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:54:13.631 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:54:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:54:13.632 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:54:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:54:13.632 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:54:13 compute-0 sudo[367388]: pam_unix(sudo:session): session closed for user root
Nov 25 16:54:13 compute-0 sudo[367413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 16:54:13 compute-0 sudo[367413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:54:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2112: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:14 compute-0 podman[367480]: 2025-11-25 16:54:14.01703901 +0000 UTC m=+0.042475367 container create 088c25565fd07061efb490af72050ea51cefee6427726b440e4d9b69986e6b0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:54:14 compute-0 nova_compute[254092]: 2025-11-25 16:54:14.031 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:54:14 compute-0 systemd[1]: Started libpod-conmon-088c25565fd07061efb490af72050ea51cefee6427726b440e4d9b69986e6b0d.scope.
Nov 25 16:54:14 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:54:14 compute-0 podman[367480]: 2025-11-25 16:54:14.090733027 +0000 UTC m=+0.116169384 container init 088c25565fd07061efb490af72050ea51cefee6427726b440e4d9b69986e6b0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:54:14 compute-0 podman[367480]: 2025-11-25 16:54:13.99937371 +0000 UTC m=+0.024810097 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:54:14 compute-0 podman[367480]: 2025-11-25 16:54:14.097685147 +0000 UTC m=+0.123121504 container start 088c25565fd07061efb490af72050ea51cefee6427726b440e4d9b69986e6b0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_euclid, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:54:14 compute-0 podman[367480]: 2025-11-25 16:54:14.100263797 +0000 UTC m=+0.125700154 container attach 088c25565fd07061efb490af72050ea51cefee6427726b440e4d9b69986e6b0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_euclid, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:54:14 compute-0 jolly_euclid[367496]: 167 167
Nov 25 16:54:14 compute-0 systemd[1]: libpod-088c25565fd07061efb490af72050ea51cefee6427726b440e4d9b69986e6b0d.scope: Deactivated successfully.
Nov 25 16:54:14 compute-0 podman[367480]: 2025-11-25 16:54:14.102973201 +0000 UTC m=+0.128409558 container died 088c25565fd07061efb490af72050ea51cefee6427726b440e4d9b69986e6b0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_euclid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:54:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-cbae5afc2a52d6e39ec4475975b2d7242b1dbebc5d96db0eef38db2eb29dea76-merged.mount: Deactivated successfully.
Nov 25 16:54:14 compute-0 podman[367480]: 2025-11-25 16:54:14.131702573 +0000 UTC m=+0.157138930 container remove 088c25565fd07061efb490af72050ea51cefee6427726b440e4d9b69986e6b0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_euclid, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 16:54:14 compute-0 systemd[1]: libpod-conmon-088c25565fd07061efb490af72050ea51cefee6427726b440e4d9b69986e6b0d.scope: Deactivated successfully.
Nov 25 16:54:14 compute-0 podman[367519]: 2025-11-25 16:54:14.2883561 +0000 UTC m=+0.035406015 container create 79139a2af16c887e334f5cb51755f5449c96c0b4937e139624cc244a9552c9b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 16:54:14 compute-0 systemd[1]: Started libpod-conmon-79139a2af16c887e334f5cb51755f5449c96c0b4937e139624cc244a9552c9b5.scope.
Nov 25 16:54:14 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:54:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fefe9333cc8e4ef35a1e190e99614c189b9224bd8083232b60f49a0a12ad4cc1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:54:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fefe9333cc8e4ef35a1e190e99614c189b9224bd8083232b60f49a0a12ad4cc1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:54:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fefe9333cc8e4ef35a1e190e99614c189b9224bd8083232b60f49a0a12ad4cc1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:54:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fefe9333cc8e4ef35a1e190e99614c189b9224bd8083232b60f49a0a12ad4cc1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:54:14 compute-0 podman[367519]: 2025-11-25 16:54:14.364257947 +0000 UTC m=+0.111307902 container init 79139a2af16c887e334f5cb51755f5449c96c0b4937e139624cc244a9552c9b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_shaw, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 16:54:14 compute-0 podman[367519]: 2025-11-25 16:54:14.273271599 +0000 UTC m=+0.020321524 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:54:14 compute-0 podman[367519]: 2025-11-25 16:54:14.369746437 +0000 UTC m=+0.116796362 container start 79139a2af16c887e334f5cb51755f5449c96c0b4937e139624cc244a9552c9b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_shaw, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 16:54:14 compute-0 podman[367519]: 2025-11-25 16:54:14.37278536 +0000 UTC m=+0.119835295 container attach 79139a2af16c887e334f5cb51755f5449c96c0b4937e139624cc244a9552c9b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_shaw, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 16:54:15 compute-0 ceph-mon[74985]: pgmap v2112: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:15 compute-0 wonderful_shaw[367535]: {
Nov 25 16:54:15 compute-0 wonderful_shaw[367535]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 16:54:15 compute-0 wonderful_shaw[367535]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:54:15 compute-0 wonderful_shaw[367535]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 16:54:15 compute-0 wonderful_shaw[367535]:         "osd_id": 1,
Nov 25 16:54:15 compute-0 wonderful_shaw[367535]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:54:15 compute-0 wonderful_shaw[367535]:         "type": "bluestore"
Nov 25 16:54:15 compute-0 wonderful_shaw[367535]:     },
Nov 25 16:54:15 compute-0 wonderful_shaw[367535]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 16:54:15 compute-0 wonderful_shaw[367535]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:54:15 compute-0 wonderful_shaw[367535]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 16:54:15 compute-0 wonderful_shaw[367535]:         "osd_id": 2,
Nov 25 16:54:15 compute-0 wonderful_shaw[367535]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:54:15 compute-0 wonderful_shaw[367535]:         "type": "bluestore"
Nov 25 16:54:15 compute-0 wonderful_shaw[367535]:     },
Nov 25 16:54:15 compute-0 wonderful_shaw[367535]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 16:54:15 compute-0 wonderful_shaw[367535]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:54:15 compute-0 wonderful_shaw[367535]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 16:54:15 compute-0 wonderful_shaw[367535]:         "osd_id": 0,
Nov 25 16:54:15 compute-0 wonderful_shaw[367535]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:54:15 compute-0 wonderful_shaw[367535]:         "type": "bluestore"
Nov 25 16:54:15 compute-0 wonderful_shaw[367535]:     }
Nov 25 16:54:15 compute-0 wonderful_shaw[367535]: }
Nov 25 16:54:15 compute-0 systemd[1]: libpod-79139a2af16c887e334f5cb51755f5449c96c0b4937e139624cc244a9552c9b5.scope: Deactivated successfully.
Nov 25 16:54:15 compute-0 podman[367519]: 2025-11-25 16:54:15.317589282 +0000 UTC m=+1.064639207 container died 79139a2af16c887e334f5cb51755f5449c96c0b4937e139624cc244a9552c9b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_shaw, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 16:54:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-fefe9333cc8e4ef35a1e190e99614c189b9224bd8083232b60f49a0a12ad4cc1-merged.mount: Deactivated successfully.
Nov 25 16:54:15 compute-0 podman[367519]: 2025-11-25 16:54:15.366667779 +0000 UTC m=+1.113717704 container remove 79139a2af16c887e334f5cb51755f5449c96c0b4937e139624cc244a9552c9b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:54:15 compute-0 systemd[1]: libpod-conmon-79139a2af16c887e334f5cb51755f5449c96c0b4937e139624cc244a9552c9b5.scope: Deactivated successfully.
Nov 25 16:54:15 compute-0 sudo[367413]: pam_unix(sudo:session): session closed for user root
Nov 25 16:54:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:54:15 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:54:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:54:15 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:54:15 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 6a8f14b0-e1c1-45ea-9dd6-1c84d86f9500 does not exist
Nov 25 16:54:15 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 20f7cd01-07d8-444c-ad44-38d94fc8f60a does not exist
Nov 25 16:54:15 compute-0 sudo[367579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:54:15 compute-0 sudo[367579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:54:15 compute-0 sudo[367579]: pam_unix(sudo:session): session closed for user root
Nov 25 16:54:15 compute-0 sudo[367604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 16:54:15 compute-0 sudo[367604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:54:15 compute-0 sudo[367604]: pam_unix(sudo:session): session closed for user root
Nov 25 16:54:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2113: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:16 compute-0 ovn_controller[153477]: 2025-11-25T16:54:16Z|01128|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 25 16:54:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:54:16 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:54:16 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:54:16 compute-0 ceph-mon[74985]: pgmap v2113: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:16 compute-0 nova_compute[254092]: 2025-11-25 16:54:16.522 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:54:17 compute-0 nova_compute[254092]: 2025-11-25 16:54:17.309 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:54:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2114: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:19 compute-0 nova_compute[254092]: 2025-11-25 16:54:19.033 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:54:19 compute-0 ceph-mon[74985]: pgmap v2114: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:19 compute-0 nova_compute[254092]: 2025-11-25 16:54:19.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:54:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2115: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:20 compute-0 ceph-mon[74985]: pgmap v2115: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:54:21 compute-0 nova_compute[254092]: 2025-11-25 16:54:21.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:54:21 compute-0 nova_compute[254092]: 2025-11-25 16:54:21.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:54:21 compute-0 nova_compute[254092]: 2025-11-25 16:54:21.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:54:21 compute-0 nova_compute[254092]: 2025-11-25 16:54:21.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:54:21 compute-0 nova_compute[254092]: 2025-11-25 16:54:21.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:54:21 compute-0 nova_compute[254092]: 2025-11-25 16:54:21.522 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 16:54:21 compute-0 nova_compute[254092]: 2025-11-25 16:54:21.523 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:54:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:54:21 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1130077105' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:54:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2116: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:21 compute-0 nova_compute[254092]: 2025-11-25 16:54:21.958 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:54:22 compute-0 nova_compute[254092]: 2025-11-25 16:54:22.129 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:54:22 compute-0 nova_compute[254092]: 2025-11-25 16:54:22.132 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3747MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 16:54:22 compute-0 nova_compute[254092]: 2025-11-25 16:54:22.132 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:54:22 compute-0 nova_compute[254092]: 2025-11-25 16:54:22.132 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:54:22 compute-0 nova_compute[254092]: 2025-11-25 16:54:22.313 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:54:22 compute-0 nova_compute[254092]: 2025-11-25 16:54:22.390 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 16:54:22 compute-0 nova_compute[254092]: 2025-11-25 16:54:22.390 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 16:54:22 compute-0 nova_compute[254092]: 2025-11-25 16:54:22.481 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:54:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2117: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:24 compute-0 nova_compute[254092]: 2025-11-25 16:54:24.036 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:54:25 compute-0 ceph-mgr[75280]: [devicehealth INFO root] Check health
Nov 25 16:54:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2118: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:54:26 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1130077105' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:54:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:54:26 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2832087066' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:54:26 compute-0 nova_compute[254092]: 2025-11-25 16:54:26.746 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.265s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:54:26 compute-0 nova_compute[254092]: 2025-11-25 16:54:26.753 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:54:26 compute-0 nova_compute[254092]: 2025-11-25 16:54:26.764 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:54:27 compute-0 nova_compute[254092]: 2025-11-25 16:54:27.316 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:54:27 compute-0 podman[367676]: 2025-11-25 16:54:27.642162996 +0000 UTC m=+0.054069554 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 16:54:27 compute-0 podman[367675]: 2025-11-25 16:54:27.642726011 +0000 UTC m=+0.058634888 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:54:27 compute-0 podman[367677]: 2025-11-25 16:54:27.679373069 +0000 UTC m=+0.084041910 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 16:54:27 compute-0 ceph-mon[74985]: pgmap v2116: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:27 compute-0 ceph-mon[74985]: pgmap v2117: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:27 compute-0 ceph-mon[74985]: pgmap v2118: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:27 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2832087066' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:54:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2119: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:28 compute-0 nova_compute[254092]: 2025-11-25 16:54:28.711 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 16:54:28 compute-0 nova_compute[254092]: 2025-11-25 16:54:28.712 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 6.579s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:54:29 compute-0 nova_compute[254092]: 2025-11-25 16:54:29.038 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:54:29 compute-0 ceph-mon[74985]: pgmap v2119: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2120: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:30 compute-0 nova_compute[254092]: 2025-11-25 16:54:30.711 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:54:30 compute-0 nova_compute[254092]: 2025-11-25 16:54:30.711 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:54:30 compute-0 nova_compute[254092]: 2025-11-25 16:54:30.712 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 16:54:30 compute-0 nova_compute[254092]: 2025-11-25 16:54:30.712 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 16:54:30 compute-0 nova_compute[254092]: 2025-11-25 16:54:30.724 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 16:54:30 compute-0 nova_compute[254092]: 2025-11-25 16:54:30.725 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:54:30 compute-0 nova_compute[254092]: 2025-11-25 16:54:30.725 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:54:30 compute-0 nova_compute[254092]: 2025-11-25 16:54:30.725 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:54:30 compute-0 nova_compute[254092]: 2025-11-25 16:54:30.736 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:54:30 compute-0 nova_compute[254092]: 2025-11-25 16:54:30.736 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 16:54:31 compute-0 ceph-mon[74985]: pgmap v2120: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:54:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2121: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:32 compute-0 nova_compute[254092]: 2025-11-25 16:54:32.319 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:54:32 compute-0 ceph-mon[74985]: pgmap v2121: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2122: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:34 compute-0 nova_compute[254092]: 2025-11-25 16:54:34.040 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:54:35 compute-0 ceph-mon[74985]: pgmap v2122: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2123: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:54:37 compute-0 ceph-mon[74985]: pgmap v2123: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:37 compute-0 nova_compute[254092]: 2025-11-25 16:54:37.327 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:54:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2124: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:39 compute-0 nova_compute[254092]: 2025-11-25 16:54:39.043 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:54:39 compute-0 ceph-mon[74985]: pgmap v2124: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2125: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:54:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:54:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:54:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:54:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:54:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:54:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:54:40
Nov 25 16:54:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:54:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:54:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'vms', 'volumes', 'default.rgw.control']
Nov 25 16:54:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:54:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:54:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:54:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:54:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:54:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:54:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:54:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:54:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:54:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:54:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:54:41 compute-0 ceph-mon[74985]: pgmap v2125: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:54:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2126: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:42 compute-0 nova_compute[254092]: 2025-11-25 16:54:42.329 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:54:43 compute-0 ceph-mon[74985]: pgmap v2126: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2127: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:44 compute-0 nova_compute[254092]: 2025-11-25 16:54:44.044 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:54:44 compute-0 nova_compute[254092]: 2025-11-25 16:54:44.850 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:54:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:54:44.849 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:54:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:54:44.850 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 16:54:45 compute-0 ceph-mon[74985]: pgmap v2127: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2128: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:54:47 compute-0 ceph-mon[74985]: pgmap v2128: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:47 compute-0 nova_compute[254092]: 2025-11-25 16:54:47.375 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:54:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2129: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:49 compute-0 nova_compute[254092]: 2025-11-25 16:54:49.045 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:54:49 compute-0 ceph-mon[74985]: pgmap v2129: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:54:49.852 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:54:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2130: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:54:50.752 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f1:9b:c1 10.100.0.2 2001:db8::f816:3eff:fef1:9bc1'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fef1:9bc1/64', 'neutron:device_id': 'ovnmeta-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a2c11a14e2b494e85fc351a8601421a', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f4d60955-fbc7-4da8-87c3-cdd23dd7271a, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a745fe20-e175-40f3-bdd0-3c1a94b5a9b0) old=Port_Binding(mac=['fa:16:3e:f1:9b:c1 2001:db8::f816:3eff:fef1:9bc1'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fef1:9bc1/64', 'neutron:device_id': 'ovnmeta-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a2c11a14e2b494e85fc351a8601421a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:54:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:54:50.754 163338 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a745fe20-e175-40f3-bdd0-3c1a94b5a9b0 in datapath 608565e6-7435-4ef2-aacd-e9f617a996c3 updated
Nov 25 16:54:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:54:50.754 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 608565e6-7435-4ef2-aacd-e9f617a996c3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:54:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:54:50.756 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9240022e-fbb3-48b7-8147-18e68c936df6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:54:51 compute-0 ceph-mon[74985]: pgmap v2130: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:54:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:54:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:54:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:54:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 16:54:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:54:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:54:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:54:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:54:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:54:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 16:54:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:54:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 16:54:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:54:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:54:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:54:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 16:54:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:54:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 16:54:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:54:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:54:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:54:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 16:54:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:54:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2131: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:52 compute-0 nova_compute[254092]: 2025-11-25 16:54:52.378 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:54:53 compute-0 ceph-mon[74985]: pgmap v2131: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2132: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:54 compute-0 nova_compute[254092]: 2025-11-25 16:54:54.048 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:54:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e275 do_prune osdmap full prune enabled
Nov 25 16:54:55 compute-0 ceph-mon[74985]: pgmap v2132: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:54:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e276 e276: 3 total, 3 up, 3 in
Nov 25 16:54:55 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e276: 3 total, 3 up, 3 in
Nov 25 16:54:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 16:54:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1185465127' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:54:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 16:54:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1185465127' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:54:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:54:55.911 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f1:9b:c1 2001:db8::f816:3eff:fef1:9bc1'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fef1:9bc1/64', 'neutron:device_id': 'ovnmeta-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a2c11a14e2b494e85fc351a8601421a', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f4d60955-fbc7-4da8-87c3-cdd23dd7271a, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a745fe20-e175-40f3-bdd0-3c1a94b5a9b0) old=Port_Binding(mac=['fa:16:3e:f1:9b:c1 10.100.0.2 2001:db8::f816:3eff:fef1:9bc1'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fef1:9bc1/64', 'neutron:device_id': 'ovnmeta-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a2c11a14e2b494e85fc351a8601421a', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:54:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:54:55.912 163338 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a745fe20-e175-40f3-bdd0-3c1a94b5a9b0 in datapath 608565e6-7435-4ef2-aacd-e9f617a996c3 updated
Nov 25 16:54:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:54:55.913 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 608565e6-7435-4ef2-aacd-e9f617a996c3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:54:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:54:55.914 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9dfb75c9-2f66-456c-bff1-7791094fd411]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:54:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2134: 321 pgs: 321 active+clean; 41 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 7.5 KiB/s rd, 1.3 KiB/s wr, 11 op/s
Nov 25 16:54:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e276 do_prune osdmap full prune enabled
Nov 25 16:54:56 compute-0 ceph-mon[74985]: osdmap e276: 3 total, 3 up, 3 in
Nov 25 16:54:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/1185465127' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:54:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/1185465127' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:54:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e277 e277: 3 total, 3 up, 3 in
Nov 25 16:54:56 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e277: 3 total, 3 up, 3 in
Nov 25 16:54:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:54:57 compute-0 ceph-mon[74985]: pgmap v2134: 321 pgs: 321 active+clean; 41 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 7.5 KiB/s rd, 1.3 KiB/s wr, 11 op/s
Nov 25 16:54:57 compute-0 ceph-mon[74985]: osdmap e277: 3 total, 3 up, 3 in
Nov 25 16:54:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:54:57.299 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f1:9b:c1 10.100.0.2 2001:db8::f816:3eff:fef1:9bc1'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fef1:9bc1/64', 'neutron:device_id': 'ovnmeta-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a2c11a14e2b494e85fc351a8601421a', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f4d60955-fbc7-4da8-87c3-cdd23dd7271a, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a745fe20-e175-40f3-bdd0-3c1a94b5a9b0) old=Port_Binding(mac=['fa:16:3e:f1:9b:c1 2001:db8::f816:3eff:fef1:9bc1'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fef1:9bc1/64', 'neutron:device_id': 'ovnmeta-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a2c11a14e2b494e85fc351a8601421a', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:54:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:54:57.300 163338 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a745fe20-e175-40f3-bdd0-3c1a94b5a9b0 in datapath 608565e6-7435-4ef2-aacd-e9f617a996c3 updated
Nov 25 16:54:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:54:57.301 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 608565e6-7435-4ef2-aacd-e9f617a996c3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:54:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:54:57.302 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9680b3d5-fd95-4db9-a570-f33fc501e7d0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:54:57 compute-0 nova_compute[254092]: 2025-11-25 16:54:57.380 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:54:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2136: 321 pgs: 321 active+clean; 41 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 9.4 KiB/s rd, 1.6 KiB/s wr, 13 op/s
Nov 25 16:54:58 compute-0 podman[367737]: 2025-11-25 16:54:58.652693091 +0000 UTC m=+0.067600502 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118)
Nov 25 16:54:58 compute-0 podman[367738]: 2025-11-25 16:54:58.664387229 +0000 UTC m=+0.079252319 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_id=ovn_metadata_agent)
Nov 25 16:54:58 compute-0 podman[367739]: 2025-11-25 16:54:58.712604553 +0000 UTC m=+0.125072548 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:54:59 compute-0 nova_compute[254092]: 2025-11-25 16:54:59.050 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:54:59 compute-0 ceph-mon[74985]: pgmap v2136: 321 pgs: 321 active+clean; 41 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 9.4 KiB/s rd, 1.6 KiB/s wr, 13 op/s
Nov 25 16:54:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2137: 321 pgs: 321 active+clean; 41 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 3.2 KiB/s wr, 43 op/s
Nov 25 16:55:01 compute-0 ceph-mon[74985]: pgmap v2137: 321 pgs: 321 active+clean; 41 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 3.2 KiB/s wr, 43 op/s
Nov 25 16:55:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:55:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:55:01.759 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f1:9b:c1 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a2c11a14e2b494e85fc351a8601421a', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f4d60955-fbc7-4da8-87c3-cdd23dd7271a, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a745fe20-e175-40f3-bdd0-3c1a94b5a9b0) old=Port_Binding(mac=['fa:16:3e:f1:9b:c1 10.100.0.2 2001:db8::f816:3eff:fef1:9bc1'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fef1:9bc1/64', 'neutron:device_id': 'ovnmeta-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a2c11a14e2b494e85fc351a8601421a', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:55:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:55:01.760 163338 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a745fe20-e175-40f3-bdd0-3c1a94b5a9b0 in datapath 608565e6-7435-4ef2-aacd-e9f617a996c3 updated
Nov 25 16:55:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:55:01.761 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 608565e6-7435-4ef2-aacd-e9f617a996c3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:55:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:55:01.761 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[00390be1-5b75-40d1-94fb-c771e7fe828d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:55:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2138: 321 pgs: 321 active+clean; 41 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 3.9 KiB/s wr, 49 op/s
Nov 25 16:55:02 compute-0 nova_compute[254092]: 2025-11-25 16:55:02.381 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:55:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:55:03.066 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f1:9b:c1 10.100.0.2 2001:db8::f816:3eff:fef1:9bc1'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fef1:9bc1/64', 'neutron:device_id': 'ovnmeta-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a2c11a14e2b494e85fc351a8601421a', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f4d60955-fbc7-4da8-87c3-cdd23dd7271a, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a745fe20-e175-40f3-bdd0-3c1a94b5a9b0) old=Port_Binding(mac=['fa:16:3e:f1:9b:c1 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a2c11a14e2b494e85fc351a8601421a', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:55:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:55:03.068 163338 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a745fe20-e175-40f3-bdd0-3c1a94b5a9b0 in datapath 608565e6-7435-4ef2-aacd-e9f617a996c3 updated
Nov 25 16:55:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:55:03.070 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 608565e6-7435-4ef2-aacd-e9f617a996c3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:55:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:55:03.071 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[45d54016-b7eb-40a8-b1c8-250dc29406ab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:55:03 compute-0 ceph-mon[74985]: pgmap v2138: 321 pgs: 321 active+clean; 41 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 3.9 KiB/s wr, 49 op/s
Nov 25 16:55:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2139: 321 pgs: 321 active+clean; 41 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 2.9 KiB/s wr, 36 op/s
Nov 25 16:55:04 compute-0 nova_compute[254092]: 2025-11-25 16:55:04.051 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:55:05 compute-0 ceph-mon[74985]: pgmap v2139: 321 pgs: 321 active+clean; 41 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 2.9 KiB/s wr, 36 op/s
Nov 25 16:55:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2140: 321 pgs: 321 active+clean; 41 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 KiB/s wr, 28 op/s
Nov 25 16:55:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:55:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e277 do_prune osdmap full prune enabled
Nov 25 16:55:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 e278: 3 total, 3 up, 3 in
Nov 25 16:55:06 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e278: 3 total, 3 up, 3 in
Nov 25 16:55:07 compute-0 ceph-mon[74985]: pgmap v2140: 321 pgs: 321 active+clean; 41 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 KiB/s wr, 28 op/s
Nov 25 16:55:07 compute-0 ceph-mon[74985]: osdmap e278: 3 total, 3 up, 3 in
Nov 25 16:55:07 compute-0 nova_compute[254092]: 2025-11-25 16:55:07.384 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:55:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2142: 321 pgs: 321 active+clean; 41 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 KiB/s wr, 28 op/s
Nov 25 16:55:08 compute-0 ceph-mon[74985]: pgmap v2142: 321 pgs: 321 active+clean; 41 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 KiB/s wr, 28 op/s
Nov 25 16:55:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:55:08.917 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f1:9b:c1 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a2c11a14e2b494e85fc351a8601421a', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f4d60955-fbc7-4da8-87c3-cdd23dd7271a, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a745fe20-e175-40f3-bdd0-3c1a94b5a9b0) old=Port_Binding(mac=['fa:16:3e:f1:9b:c1 10.100.0.2 2001:db8::f816:3eff:fef1:9bc1'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fef1:9bc1/64', 'neutron:device_id': 'ovnmeta-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a2c11a14e2b494e85fc351a8601421a', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:55:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:55:08.919 163338 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a745fe20-e175-40f3-bdd0-3c1a94b5a9b0 in datapath 608565e6-7435-4ef2-aacd-e9f617a996c3 updated
Nov 25 16:55:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:55:08.920 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 608565e6-7435-4ef2-aacd-e9f617a996c3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:55:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:55:08.920 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d0cd26d9-add9-432f-a5ee-df2ccbd5f2c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:55:09 compute-0 nova_compute[254092]: 2025-11-25 16:55:09.055 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:55:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2143: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 2.9 KiB/s rd, 511 B/s wr, 5 op/s
Nov 25 16:55:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:55:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:55:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:55:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:55:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:55:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:55:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:55:10.596 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f1:9b:c1 10.100.0.2 2001:db8::f816:3eff:fef1:9bc1'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fef1:9bc1/64', 'neutron:device_id': 'ovnmeta-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a2c11a14e2b494e85fc351a8601421a', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f4d60955-fbc7-4da8-87c3-cdd23dd7271a, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a745fe20-e175-40f3-bdd0-3c1a94b5a9b0) old=Port_Binding(mac=['fa:16:3e:f1:9b:c1 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a2c11a14e2b494e85fc351a8601421a', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:55:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:55:10.597 163338 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a745fe20-e175-40f3-bdd0-3c1a94b5a9b0 in datapath 608565e6-7435-4ef2-aacd-e9f617a996c3 updated
Nov 25 16:55:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:55:10.598 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 608565e6-7435-4ef2-aacd-e9f617a996c3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:55:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:55:10.599 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2886b0ac-b846-4c2f-b418-4aa701c1e5f6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:55:11 compute-0 ceph-mon[74985]: pgmap v2143: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 2.9 KiB/s rd, 511 B/s wr, 5 op/s
Nov 25 16:55:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:55:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2144: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:55:12 compute-0 nova_compute[254092]: 2025-11-25 16:55:12.386 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:55:13 compute-0 ceph-mon[74985]: pgmap v2144: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:55:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:55:13.632 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:55:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:55:13.632 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:55:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:55:13.632 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:55:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2145: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:55:14 compute-0 nova_compute[254092]: 2025-11-25 16:55:14.057 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:55:15 compute-0 ceph-mon[74985]: pgmap v2145: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:55:15 compute-0 sudo[367796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:55:15 compute-0 sudo[367796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:55:15 compute-0 sudo[367796]: pam_unix(sudo:session): session closed for user root
Nov 25 16:55:15 compute-0 sudo[367821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:55:15 compute-0 sudo[367821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:55:15 compute-0 sudo[367821]: pam_unix(sudo:session): session closed for user root
Nov 25 16:55:15 compute-0 sudo[367846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:55:15 compute-0 sudo[367846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:55:15 compute-0 sudo[367846]: pam_unix(sudo:session): session closed for user root
Nov 25 16:55:15 compute-0 sudo[367871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 25 16:55:15 compute-0 sudo[367871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:55:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2146: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:55:16 compute-0 sudo[367871]: pam_unix(sudo:session): session closed for user root
Nov 25 16:55:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:55:16 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:55:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:55:16 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:55:16 compute-0 sudo[367916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:55:16 compute-0 sudo[367916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:55:16 compute-0 sudo[367916]: pam_unix(sudo:session): session closed for user root
Nov 25 16:55:16 compute-0 sudo[367941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:55:16 compute-0 sudo[367941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:55:16 compute-0 sudo[367941]: pam_unix(sudo:session): session closed for user root
Nov 25 16:55:16 compute-0 sudo[367966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:55:16 compute-0 sudo[367966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:55:16 compute-0 sudo[367966]: pam_unix(sudo:session): session closed for user root
Nov 25 16:55:16 compute-0 sudo[367991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 16:55:16 compute-0 sudo[367991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:55:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:55:16 compute-0 sudo[367991]: pam_unix(sudo:session): session closed for user root
Nov 25 16:55:16 compute-0 sudo[368048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:55:16 compute-0 sudo[368048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:55:16 compute-0 sudo[368048]: pam_unix(sudo:session): session closed for user root
Nov 25 16:55:16 compute-0 sudo[368073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:55:16 compute-0 sudo[368073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:55:16 compute-0 sudo[368073]: pam_unix(sudo:session): session closed for user root
Nov 25 16:55:16 compute-0 sudo[368098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:55:16 compute-0 sudo[368098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:55:16 compute-0 sudo[368098]: pam_unix(sudo:session): session closed for user root
Nov 25 16:55:17 compute-0 sudo[368123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- inventory --format=json-pretty --filter-for-batch
Nov 25 16:55:17 compute-0 sudo[368123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:55:17 compute-0 ceph-mon[74985]: pgmap v2146: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:55:17 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:55:17 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:55:17 compute-0 podman[368189]: 2025-11-25 16:55:17.382093846 +0000 UTC m=+0.076774952 container create 08d470d69bd7f4056cbf714b11883a01801adc3f8c8581dbd3f0272416160621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 16:55:17 compute-0 nova_compute[254092]: 2025-11-25 16:55:17.389 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:55:17 compute-0 systemd[1]: Started libpod-conmon-08d470d69bd7f4056cbf714b11883a01801adc3f8c8581dbd3f0272416160621.scope.
Nov 25 16:55:17 compute-0 podman[368189]: 2025-11-25 16:55:17.353383744 +0000 UTC m=+0.048064950 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:55:17 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:55:17 compute-0 podman[368189]: 2025-11-25 16:55:17.481147014 +0000 UTC m=+0.175828140 container init 08d470d69bd7f4056cbf714b11883a01801adc3f8c8581dbd3f0272416160621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:55:17 compute-0 podman[368189]: 2025-11-25 16:55:17.487655321 +0000 UTC m=+0.182336417 container start 08d470d69bd7f4056cbf714b11883a01801adc3f8c8581dbd3f0272416160621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_kepler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Nov 25 16:55:17 compute-0 podman[368189]: 2025-11-25 16:55:17.490080468 +0000 UTC m=+0.184761574 container attach 08d470d69bd7f4056cbf714b11883a01801adc3f8c8581dbd3f0272416160621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_kepler, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:55:17 compute-0 vigilant_kepler[368205]: 167 167
Nov 25 16:55:17 compute-0 systemd[1]: libpod-08d470d69bd7f4056cbf714b11883a01801adc3f8c8581dbd3f0272416160621.scope: Deactivated successfully.
Nov 25 16:55:17 compute-0 podman[368189]: 2025-11-25 16:55:17.495836434 +0000 UTC m=+0.190517550 container died 08d470d69bd7f4056cbf714b11883a01801adc3f8c8581dbd3f0272416160621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 16:55:17 compute-0 nova_compute[254092]: 2025-11-25 16:55:17.507 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:55:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:55:17.516 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f1:9b:c1 2001:db8::f816:3eff:fef1:9bc1'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fef1:9bc1/64', 'neutron:device_id': 'ovnmeta-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a2c11a14e2b494e85fc351a8601421a', 'neutron:revision_number': '18', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f4d60955-fbc7-4da8-87c3-cdd23dd7271a, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a745fe20-e175-40f3-bdd0-3c1a94b5a9b0) old=Port_Binding(mac=['fa:16:3e:f1:9b:c1 10.100.0.2 2001:db8::f816:3eff:fef1:9bc1'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fef1:9bc1/64', 'neutron:device_id': 'ovnmeta-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a2c11a14e2b494e85fc351a8601421a', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:55:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:55:17.518 163338 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a745fe20-e175-40f3-bdd0-3c1a94b5a9b0 in datapath 608565e6-7435-4ef2-aacd-e9f617a996c3 updated
Nov 25 16:55:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:55:17.519 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 608565e6-7435-4ef2-aacd-e9f617a996c3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:55:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:55:17.520 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c4f17b06-e8e2-4dc9-a77c-18e835a62565]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:55:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-89b53407ea94f5bf04847df97580844ba9c111570734318de3cd9bed388fbd72-merged.mount: Deactivated successfully.
Nov 25 16:55:17 compute-0 podman[368189]: 2025-11-25 16:55:17.537346124 +0000 UTC m=+0.232027230 container remove 08d470d69bd7f4056cbf714b11883a01801adc3f8c8581dbd3f0272416160621 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 16:55:17 compute-0 systemd[1]: libpod-conmon-08d470d69bd7f4056cbf714b11883a01801adc3f8c8581dbd3f0272416160621.scope: Deactivated successfully.
Nov 25 16:55:17 compute-0 podman[368228]: 2025-11-25 16:55:17.719340671 +0000 UTC m=+0.048666616 container create 3706bc36d27a3086474d3b2c48f90949d7a3e97a42ff3f8770b9b119f5211249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_tharp, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:55:17 compute-0 systemd[1]: Started libpod-conmon-3706bc36d27a3086474d3b2c48f90949d7a3e97a42ff3f8770b9b119f5211249.scope.
Nov 25 16:55:17 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:55:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a62667caf678d5136ad2bff10dcec8131e9e507f3cd9aa8a7b8f7975da8b680f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:55:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a62667caf678d5136ad2bff10dcec8131e9e507f3cd9aa8a7b8f7975da8b680f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:55:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a62667caf678d5136ad2bff10dcec8131e9e507f3cd9aa8a7b8f7975da8b680f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:55:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a62667caf678d5136ad2bff10dcec8131e9e507f3cd9aa8a7b8f7975da8b680f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:55:17 compute-0 podman[368228]: 2025-11-25 16:55:17.693449986 +0000 UTC m=+0.022775921 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:55:17 compute-0 podman[368228]: 2025-11-25 16:55:17.798799296 +0000 UTC m=+0.128125261 container init 3706bc36d27a3086474d3b2c48f90949d7a3e97a42ff3f8770b9b119f5211249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_tharp, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:55:17 compute-0 podman[368228]: 2025-11-25 16:55:17.809814736 +0000 UTC m=+0.139140681 container start 3706bc36d27a3086474d3b2c48f90949d7a3e97a42ff3f8770b9b119f5211249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:55:17 compute-0 podman[368228]: 2025-11-25 16:55:17.812777296 +0000 UTC m=+0.142103241 container attach 3706bc36d27a3086474d3b2c48f90949d7a3e97a42ff3f8770b9b119f5211249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_tharp, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Nov 25 16:55:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2147: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:55:19 compute-0 ceph-mon[74985]: pgmap v2147: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:55:19 compute-0 nova_compute[254092]: 2025-11-25 16:55:19.059 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:55:19 compute-0 optimistic_tharp[368244]: [
Nov 25 16:55:19 compute-0 optimistic_tharp[368244]:     {
Nov 25 16:55:19 compute-0 optimistic_tharp[368244]:         "available": false,
Nov 25 16:55:19 compute-0 optimistic_tharp[368244]:         "ceph_device": false,
Nov 25 16:55:19 compute-0 optimistic_tharp[368244]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 25 16:55:19 compute-0 optimistic_tharp[368244]:         "lsm_data": {},
Nov 25 16:55:19 compute-0 optimistic_tharp[368244]:         "lvs": [],
Nov 25 16:55:19 compute-0 optimistic_tharp[368244]:         "path": "/dev/sr0",
Nov 25 16:55:19 compute-0 optimistic_tharp[368244]:         "rejected_reasons": [
Nov 25 16:55:19 compute-0 optimistic_tharp[368244]:             "Has a FileSystem",
Nov 25 16:55:19 compute-0 optimistic_tharp[368244]:             "Insufficient space (<5GB)"
Nov 25 16:55:19 compute-0 optimistic_tharp[368244]:         ],
Nov 25 16:55:19 compute-0 optimistic_tharp[368244]:         "sys_api": {
Nov 25 16:55:19 compute-0 optimistic_tharp[368244]:             "actuators": null,
Nov 25 16:55:19 compute-0 optimistic_tharp[368244]:             "device_nodes": "sr0",
Nov 25 16:55:19 compute-0 optimistic_tharp[368244]:             "devname": "sr0",
Nov 25 16:55:19 compute-0 optimistic_tharp[368244]:             "human_readable_size": "482.00 KB",
Nov 25 16:55:19 compute-0 optimistic_tharp[368244]:             "id_bus": "ata",
Nov 25 16:55:19 compute-0 optimistic_tharp[368244]:             "model": "QEMU DVD-ROM",
Nov 25 16:55:19 compute-0 optimistic_tharp[368244]:             "nr_requests": "2",
Nov 25 16:55:19 compute-0 optimistic_tharp[368244]:             "parent": "/dev/sr0",
Nov 25 16:55:19 compute-0 optimistic_tharp[368244]:             "partitions": {},
Nov 25 16:55:19 compute-0 optimistic_tharp[368244]:             "path": "/dev/sr0",
Nov 25 16:55:19 compute-0 optimistic_tharp[368244]:             "removable": "1",
Nov 25 16:55:19 compute-0 optimistic_tharp[368244]:             "rev": "2.5+",
Nov 25 16:55:19 compute-0 optimistic_tharp[368244]:             "ro": "0",
Nov 25 16:55:19 compute-0 optimistic_tharp[368244]:             "rotational": "1",
Nov 25 16:55:19 compute-0 optimistic_tharp[368244]:             "sas_address": "",
Nov 25 16:55:19 compute-0 optimistic_tharp[368244]:             "sas_device_handle": "",
Nov 25 16:55:19 compute-0 optimistic_tharp[368244]:             "scheduler_mode": "mq-deadline",
Nov 25 16:55:19 compute-0 optimistic_tharp[368244]:             "sectors": 0,
Nov 25 16:55:19 compute-0 optimistic_tharp[368244]:             "sectorsize": "2048",
Nov 25 16:55:19 compute-0 optimistic_tharp[368244]:             "size": 493568.0,
Nov 25 16:55:19 compute-0 optimistic_tharp[368244]:             "support_discard": "2048",
Nov 25 16:55:19 compute-0 optimistic_tharp[368244]:             "type": "disk",
Nov 25 16:55:19 compute-0 optimistic_tharp[368244]:             "vendor": "QEMU"
Nov 25 16:55:19 compute-0 optimistic_tharp[368244]:         }
Nov 25 16:55:19 compute-0 optimistic_tharp[368244]:     }
Nov 25 16:55:19 compute-0 optimistic_tharp[368244]: ]
Nov 25 16:55:19 compute-0 systemd[1]: libpod-3706bc36d27a3086474d3b2c48f90949d7a3e97a42ff3f8770b9b119f5211249.scope: Deactivated successfully.
Nov 25 16:55:19 compute-0 podman[368228]: 2025-11-25 16:55:19.185758121 +0000 UTC m=+1.515084076 container died 3706bc36d27a3086474d3b2c48f90949d7a3e97a42ff3f8770b9b119f5211249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_tharp, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:55:19 compute-0 systemd[1]: libpod-3706bc36d27a3086474d3b2c48f90949d7a3e97a42ff3f8770b9b119f5211249.scope: Consumed 1.433s CPU time.
Nov 25 16:55:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-a62667caf678d5136ad2bff10dcec8131e9e507f3cd9aa8a7b8f7975da8b680f-merged.mount: Deactivated successfully.
Nov 25 16:55:19 compute-0 podman[368228]: 2025-11-25 16:55:19.236907533 +0000 UTC m=+1.566233478 container remove 3706bc36d27a3086474d3b2c48f90949d7a3e97a42ff3f8770b9b119f5211249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_tharp, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:55:19 compute-0 systemd[1]: libpod-conmon-3706bc36d27a3086474d3b2c48f90949d7a3e97a42ff3f8770b9b119f5211249.scope: Deactivated successfully.
Nov 25 16:55:19 compute-0 sudo[368123]: pam_unix(sudo:session): session closed for user root
Nov 25 16:55:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:55:19 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:55:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:55:19 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:55:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:55:19 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:55:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 16:55:19 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:55:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 16:55:19 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:55:19 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 1de6067b-63ba-41c9-9f9a-11e64ced3a9a does not exist
Nov 25 16:55:19 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev b3eba2ae-6b27-4671-b668-226bc0d59093 does not exist
Nov 25 16:55:19 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev f36bbd24-9989-4b14-b17d-a95b95bfcf20 does not exist
Nov 25 16:55:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 16:55:19 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:55:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 16:55:19 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:55:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:55:19 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:55:19 compute-0 sudo[370425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:55:19 compute-0 sudo[370425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:55:19 compute-0 sudo[370425]: pam_unix(sudo:session): session closed for user root
Nov 25 16:55:19 compute-0 sudo[370450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:55:19 compute-0 sudo[370450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:55:19 compute-0 sudo[370450]: pam_unix(sudo:session): session closed for user root
Nov 25 16:55:19 compute-0 sudo[370475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:55:19 compute-0 sudo[370475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:55:19 compute-0 sudo[370475]: pam_unix(sudo:session): session closed for user root
Nov 25 16:55:19 compute-0 sudo[370500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 16:55:19 compute-0 sudo[370500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:55:19 compute-0 nova_compute[254092]: 2025-11-25 16:55:19.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:55:19 compute-0 podman[370566]: 2025-11-25 16:55:19.774468354 +0000 UTC m=+0.034646284 container create 230eda2a401e4a49c6178285c94cccd665e21a590310c2211f8f01b365bfac59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bell, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:55:19 compute-0 systemd[1]: Started libpod-conmon-230eda2a401e4a49c6178285c94cccd665e21a590310c2211f8f01b365bfac59.scope.
Nov 25 16:55:19 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:55:19 compute-0 podman[370566]: 2025-11-25 16:55:19.850269289 +0000 UTC m=+0.110447279 container init 230eda2a401e4a49c6178285c94cccd665e21a590310c2211f8f01b365bfac59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 16:55:19 compute-0 podman[370566]: 2025-11-25 16:55:19.761007788 +0000 UTC m=+0.021185738 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:55:19 compute-0 podman[370566]: 2025-11-25 16:55:19.858522794 +0000 UTC m=+0.118700724 container start 230eda2a401e4a49c6178285c94cccd665e21a590310c2211f8f01b365bfac59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bell, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:55:19 compute-0 podman[370566]: 2025-11-25 16:55:19.861265829 +0000 UTC m=+0.121443779 container attach 230eda2a401e4a49c6178285c94cccd665e21a590310c2211f8f01b365bfac59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:55:19 compute-0 strange_bell[370583]: 167 167
Nov 25 16:55:19 compute-0 systemd[1]: libpod-230eda2a401e4a49c6178285c94cccd665e21a590310c2211f8f01b365bfac59.scope: Deactivated successfully.
Nov 25 16:55:19 compute-0 podman[370566]: 2025-11-25 16:55:19.863621983 +0000 UTC m=+0.123799903 container died 230eda2a401e4a49c6178285c94cccd665e21a590310c2211f8f01b365bfac59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bell, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:55:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-a10fe5e9bea41fd9276fd2671ae64291e7454f0edf981d3d78f051a7dc1f1b71-merged.mount: Deactivated successfully.
Nov 25 16:55:19 compute-0 podman[370566]: 2025-11-25 16:55:19.915115795 +0000 UTC m=+0.175293765 container remove 230eda2a401e4a49c6178285c94cccd665e21a590310c2211f8f01b365bfac59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:55:19 compute-0 systemd[1]: libpod-conmon-230eda2a401e4a49c6178285c94cccd665e21a590310c2211f8f01b365bfac59.scope: Deactivated successfully.
Nov 25 16:55:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2148: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:55:20 compute-0 podman[370606]: 2025-11-25 16:55:20.083550212 +0000 UTC m=+0.040313838 container create c08d4306cc6950e4af47efc32240ce44b863d7510e9f1353a9ac720271ba653f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mendel, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:55:20 compute-0 systemd[1]: Started libpod-conmon-c08d4306cc6950e4af47efc32240ce44b863d7510e9f1353a9ac720271ba653f.scope.
Nov 25 16:55:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:55:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99a798fbdd038fbbeb97bea8583cf48606f821e56c544c1e17e48456f9398543/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:55:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99a798fbdd038fbbeb97bea8583cf48606f821e56c544c1e17e48456f9398543/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:55:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99a798fbdd038fbbeb97bea8583cf48606f821e56c544c1e17e48456f9398543/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:55:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99a798fbdd038fbbeb97bea8583cf48606f821e56c544c1e17e48456f9398543/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:55:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99a798fbdd038fbbeb97bea8583cf48606f821e56c544c1e17e48456f9398543/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 16:55:20 compute-0 podman[370606]: 2025-11-25 16:55:20.154418223 +0000 UTC m=+0.111181869 container init c08d4306cc6950e4af47efc32240ce44b863d7510e9f1353a9ac720271ba653f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mendel, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:55:20 compute-0 podman[370606]: 2025-11-25 16:55:20.066683233 +0000 UTC m=+0.023446879 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:55:20 compute-0 podman[370606]: 2025-11-25 16:55:20.162456492 +0000 UTC m=+0.119220118 container start c08d4306cc6950e4af47efc32240ce44b863d7510e9f1353a9ac720271ba653f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 16:55:20 compute-0 podman[370606]: 2025-11-25 16:55:20.166312377 +0000 UTC m=+0.123076003 container attach c08d4306cc6950e4af47efc32240ce44b863d7510e9f1353a9ac720271ba653f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mendel, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 16:55:20 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:55:20 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:55:20 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:55:20 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:55:20 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:55:20 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:55:20 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:55:20 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:55:21 compute-0 angry_mendel[370622]: --> passed data devices: 0 physical, 3 LVM
Nov 25 16:55:21 compute-0 angry_mendel[370622]: --> relative data size: 1.0
Nov 25 16:55:21 compute-0 angry_mendel[370622]: --> All data devices are unavailable
Nov 25 16:55:21 compute-0 systemd[1]: libpod-c08d4306cc6950e4af47efc32240ce44b863d7510e9f1353a9ac720271ba653f.scope: Deactivated successfully.
Nov 25 16:55:21 compute-0 podman[370606]: 2025-11-25 16:55:21.155677813 +0000 UTC m=+1.112441449 container died c08d4306cc6950e4af47efc32240ce44b863d7510e9f1353a9ac720271ba653f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:55:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-99a798fbdd038fbbeb97bea8583cf48606f821e56c544c1e17e48456f9398543-merged.mount: Deactivated successfully.
Nov 25 16:55:21 compute-0 podman[370606]: 2025-11-25 16:55:21.227757016 +0000 UTC m=+1.184520642 container remove c08d4306cc6950e4af47efc32240ce44b863d7510e9f1353a9ac720271ba653f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mendel, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 16:55:21 compute-0 systemd[1]: libpod-conmon-c08d4306cc6950e4af47efc32240ce44b863d7510e9f1353a9ac720271ba653f.scope: Deactivated successfully.
Nov 25 16:55:21 compute-0 sudo[370500]: pam_unix(sudo:session): session closed for user root
Nov 25 16:55:21 compute-0 ceph-mon[74985]: pgmap v2148: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:55:21 compute-0 sudo[370663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:55:21 compute-0 sudo[370663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:55:21 compute-0 sudo[370663]: pam_unix(sudo:session): session closed for user root
Nov 25 16:55:21 compute-0 sudo[370688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:55:21 compute-0 sudo[370688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:55:21 compute-0 sudo[370688]: pam_unix(sudo:session): session closed for user root
Nov 25 16:55:21 compute-0 sudo[370713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:55:21 compute-0 sudo[370713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:55:21 compute-0 sudo[370713]: pam_unix(sudo:session): session closed for user root
Nov 25 16:55:21 compute-0 nova_compute[254092]: 2025-11-25 16:55:21.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:55:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:55:21 compute-0 sudo[370738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 16:55:21 compute-0 sudo[370738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:55:21 compute-0 nova_compute[254092]: 2025-11-25 16:55:21.517 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:55:21 compute-0 nova_compute[254092]: 2025-11-25 16:55:21.517 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:55:21 compute-0 nova_compute[254092]: 2025-11-25 16:55:21.517 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:55:21 compute-0 nova_compute[254092]: 2025-11-25 16:55:21.517 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 16:55:21 compute-0 nova_compute[254092]: 2025-11-25 16:55:21.517 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:55:21 compute-0 podman[370823]: 2025-11-25 16:55:21.82121325 +0000 UTC m=+0.034070540 container create 6125a664fe4bc24e66a299167fe2e01caa49c78247af6bf755633672c403bab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_varahamihira, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:55:21 compute-0 systemd[1]: Started libpod-conmon-6125a664fe4bc24e66a299167fe2e01caa49c78247af6bf755633672c403bab9.scope.
Nov 25 16:55:21 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:55:21 compute-0 podman[370823]: 2025-11-25 16:55:21.897019734 +0000 UTC m=+0.109877044 container init 6125a664fe4bc24e66a299167fe2e01caa49c78247af6bf755633672c403bab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_varahamihira, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:55:21 compute-0 podman[370823]: 2025-11-25 16:55:21.807829625 +0000 UTC m=+0.020686915 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:55:21 compute-0 podman[370823]: 2025-11-25 16:55:21.905335481 +0000 UTC m=+0.118192781 container start 6125a664fe4bc24e66a299167fe2e01caa49c78247af6bf755633672c403bab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 16:55:21 compute-0 hopeful_varahamihira[370840]: 167 167
Nov 25 16:55:21 compute-0 podman[370823]: 2025-11-25 16:55:21.909560975 +0000 UTC m=+0.122418255 container attach 6125a664fe4bc24e66a299167fe2e01caa49c78247af6bf755633672c403bab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 16:55:21 compute-0 systemd[1]: libpod-6125a664fe4bc24e66a299167fe2e01caa49c78247af6bf755633672c403bab9.scope: Deactivated successfully.
Nov 25 16:55:21 compute-0 podman[370823]: 2025-11-25 16:55:21.911109288 +0000 UTC m=+0.123966578 container died 6125a664fe4bc24e66a299167fe2e01caa49c78247af6bf755633672c403bab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_varahamihira, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:55:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:55:21 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/143071527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:55:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-2933fc27f4c075c56a56a99b4c8321ab9d979bf91a45d90084d4d7accc04920a-merged.mount: Deactivated successfully.
Nov 25 16:55:21 compute-0 podman[370823]: 2025-11-25 16:55:21.940852638 +0000 UTC m=+0.153709928 container remove 6125a664fe4bc24e66a299167fe2e01caa49c78247af6bf755633672c403bab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_varahamihira, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:55:21 compute-0 nova_compute[254092]: 2025-11-25 16:55:21.941 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:55:21 compute-0 systemd[1]: libpod-conmon-6125a664fe4bc24e66a299167fe2e01caa49c78247af6bf755633672c403bab9.scope: Deactivated successfully.
Nov 25 16:55:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2149: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:55:22 compute-0 nova_compute[254092]: 2025-11-25 16:55:22.089 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:55:22 compute-0 nova_compute[254092]: 2025-11-25 16:55:22.090 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3758MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 16:55:22 compute-0 nova_compute[254092]: 2025-11-25 16:55:22.091 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:55:22 compute-0 nova_compute[254092]: 2025-11-25 16:55:22.091 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:55:22 compute-0 podman[370866]: 2025-11-25 16:55:22.111245439 +0000 UTC m=+0.053327914 container create f8bb2319ccd32ead8249b921474af7ba948d2cf578d1387a58d779a457d4e0c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_kapitsa, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:55:22 compute-0 systemd[1]: Started libpod-conmon-f8bb2319ccd32ead8249b921474af7ba948d2cf578d1387a58d779a457d4e0c0.scope.
Nov 25 16:55:22 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:55:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d74928ebcca5875788c375e03ca17e5aada7b472b8e3549ae80b1c3e5152cf2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:55:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d74928ebcca5875788c375e03ca17e5aada7b472b8e3549ae80b1c3e5152cf2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:55:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d74928ebcca5875788c375e03ca17e5aada7b472b8e3549ae80b1c3e5152cf2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:55:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d74928ebcca5875788c375e03ca17e5aada7b472b8e3549ae80b1c3e5152cf2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:55:22 compute-0 podman[370866]: 2025-11-25 16:55:22.082864166 +0000 UTC m=+0.024946741 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:55:22 compute-0 podman[370866]: 2025-11-25 16:55:22.182015716 +0000 UTC m=+0.124098191 container init f8bb2319ccd32ead8249b921474af7ba948d2cf578d1387a58d779a457d4e0c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:55:22 compute-0 podman[370866]: 2025-11-25 16:55:22.191460954 +0000 UTC m=+0.133543429 container start f8bb2319ccd32ead8249b921474af7ba948d2cf578d1387a58d779a457d4e0c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_kapitsa, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:55:22 compute-0 podman[370866]: 2025-11-25 16:55:22.194178278 +0000 UTC m=+0.136260753 container attach f8bb2319ccd32ead8249b921474af7ba948d2cf578d1387a58d779a457d4e0c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:55:22 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/143071527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:55:22 compute-0 nova_compute[254092]: 2025-11-25 16:55:22.318 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 16:55:22 compute-0 nova_compute[254092]: 2025-11-25 16:55:22.318 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 16:55:22 compute-0 nova_compute[254092]: 2025-11-25 16:55:22.340 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing inventories for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 16:55:22 compute-0 nova_compute[254092]: 2025-11-25 16:55:22.355 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating ProviderTree inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 16:55:22 compute-0 nova_compute[254092]: 2025-11-25 16:55:22.355 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 16:55:22 compute-0 nova_compute[254092]: 2025-11-25 16:55:22.370 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing aggregate associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 16:55:22 compute-0 nova_compute[254092]: 2025-11-25 16:55:22.390 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:55:22 compute-0 nova_compute[254092]: 2025-11-25 16:55:22.393 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing trait associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 16:55:22 compute-0 nova_compute[254092]: 2025-11-25 16:55:22.409 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:55:22 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:55:22 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1108859968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:55:22 compute-0 nova_compute[254092]: 2025-11-25 16:55:22.860 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:55:22 compute-0 nova_compute[254092]: 2025-11-25 16:55:22.867 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:55:22 compute-0 nova_compute[254092]: 2025-11-25 16:55:22.883 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:55:22 compute-0 nova_compute[254092]: 2025-11-25 16:55:22.885 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 16:55:22 compute-0 nova_compute[254092]: 2025-11-25 16:55:22.885 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]: {
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:     "0": [
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:         {
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:             "devices": [
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:                 "/dev/loop3"
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:             ],
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:             "lv_name": "ceph_lv0",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:             "lv_size": "21470642176",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:             "name": "ceph_lv0",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:             "tags": {
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:                 "ceph.cluster_name": "ceph",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:                 "ceph.crush_device_class": "",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:                 "ceph.encrypted": "0",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:                 "ceph.osd_id": "0",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:                 "ceph.type": "block",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:                 "ceph.vdo": "0"
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:             },
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:             "type": "block",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:             "vg_name": "ceph_vg0"
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:         }
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:     ],
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:     "1": [
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:         {
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:             "devices": [
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:                 "/dev/loop4"
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:             ],
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:             "lv_name": "ceph_lv1",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:             "lv_size": "21470642176",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:             "name": "ceph_lv1",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:             "tags": {
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:                 "ceph.cluster_name": "ceph",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:                 "ceph.crush_device_class": "",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:                 "ceph.encrypted": "0",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:                 "ceph.osd_id": "1",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:                 "ceph.type": "block",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:                 "ceph.vdo": "0"
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:             },
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:             "type": "block",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:             "vg_name": "ceph_vg1"
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:         }
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:     ],
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:     "2": [
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:         {
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:             "devices": [
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:                 "/dev/loop5"
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:             ],
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:             "lv_name": "ceph_lv2",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:             "lv_size": "21470642176",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:             "name": "ceph_lv2",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:             "tags": {
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:                 "ceph.cluster_name": "ceph",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:                 "ceph.crush_device_class": "",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:                 "ceph.encrypted": "0",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:                 "ceph.osd_id": "2",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:                 "ceph.type": "block",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:                 "ceph.vdo": "0"
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:             },
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:             "type": "block",
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:             "vg_name": "ceph_vg2"
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:         }
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]:     ]
Nov 25 16:55:22 compute-0 cool_kapitsa[370882]: }
Nov 25 16:55:22 compute-0 systemd[1]: libpod-f8bb2319ccd32ead8249b921474af7ba948d2cf578d1387a58d779a457d4e0c0.scope: Deactivated successfully.
Nov 25 16:55:22 compute-0 podman[370866]: 2025-11-25 16:55:22.933801792 +0000 UTC m=+0.875884287 container died f8bb2319ccd32ead8249b921474af7ba948d2cf578d1387a58d779a457d4e0c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:55:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d74928ebcca5875788c375e03ca17e5aada7b472b8e3549ae80b1c3e5152cf2-merged.mount: Deactivated successfully.
Nov 25 16:55:22 compute-0 podman[370866]: 2025-11-25 16:55:22.988972375 +0000 UTC m=+0.931054850 container remove f8bb2319ccd32ead8249b921474af7ba948d2cf578d1387a58d779a457d4e0c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_kapitsa, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 16:55:22 compute-0 systemd[1]: libpod-conmon-f8bb2319ccd32ead8249b921474af7ba948d2cf578d1387a58d779a457d4e0c0.scope: Deactivated successfully.
Nov 25 16:55:23 compute-0 sudo[370738]: pam_unix(sudo:session): session closed for user root
Nov 25 16:55:23 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 25 16:55:23 compute-0 sudo[370925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:55:23 compute-0 sudo[370925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:55:23 compute-0 sudo[370925]: pam_unix(sudo:session): session closed for user root
Nov 25 16:55:23 compute-0 sudo[370951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:55:23 compute-0 sudo[370951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:55:23 compute-0 sudo[370951]: pam_unix(sudo:session): session closed for user root
Nov 25 16:55:23 compute-0 sudo[370976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:55:23 compute-0 sudo[370976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:55:23 compute-0 sudo[370976]: pam_unix(sudo:session): session closed for user root
Nov 25 16:55:23 compute-0 sudo[371001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 16:55:23 compute-0 sudo[371001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:55:23 compute-0 ceph-mon[74985]: pgmap v2149: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:55:23 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1108859968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:55:23 compute-0 podman[371067]: 2025-11-25 16:55:23.565071055 +0000 UTC m=+0.033996657 container create 6fe4bd28e43a58db998cf13e70a34760027fe3736ea8e2c012f1242c9bb6009d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_kapitsa, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Nov 25 16:55:23 compute-0 systemd[1]: Started libpod-conmon-6fe4bd28e43a58db998cf13e70a34760027fe3736ea8e2c012f1242c9bb6009d.scope.
Nov 25 16:55:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:55:23 compute-0 podman[371067]: 2025-11-25 16:55:23.641858167 +0000 UTC m=+0.110783819 container init 6fe4bd28e43a58db998cf13e70a34760027fe3736ea8e2c012f1242c9bb6009d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_kapitsa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 16:55:23 compute-0 podman[371067]: 2025-11-25 16:55:23.648299152 +0000 UTC m=+0.117224754 container start 6fe4bd28e43a58db998cf13e70a34760027fe3736ea8e2c012f1242c9bb6009d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:55:23 compute-0 podman[371067]: 2025-11-25 16:55:23.551126635 +0000 UTC m=+0.020052257 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:55:23 compute-0 podman[371067]: 2025-11-25 16:55:23.651432727 +0000 UTC m=+0.120358359 container attach 6fe4bd28e43a58db998cf13e70a34760027fe3736ea8e2c012f1242c9bb6009d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 16:55:23 compute-0 zen_kapitsa[371083]: 167 167
Nov 25 16:55:23 compute-0 systemd[1]: libpod-6fe4bd28e43a58db998cf13e70a34760027fe3736ea8e2c012f1242c9bb6009d.scope: Deactivated successfully.
Nov 25 16:55:23 compute-0 podman[371067]: 2025-11-25 16:55:23.653958976 +0000 UTC m=+0.122884598 container died 6fe4bd28e43a58db998cf13e70a34760027fe3736ea8e2c012f1242c9bb6009d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_kapitsa, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:55:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-896a0d1b5eb7a7d060252490e73fce174449c35a793e5da397c3f87f354793df-merged.mount: Deactivated successfully.
Nov 25 16:55:23 compute-0 podman[371067]: 2025-11-25 16:55:23.69046793 +0000 UTC m=+0.159393532 container remove 6fe4bd28e43a58db998cf13e70a34760027fe3736ea8e2c012f1242c9bb6009d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_kapitsa, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:55:23 compute-0 systemd[1]: libpod-conmon-6fe4bd28e43a58db998cf13e70a34760027fe3736ea8e2c012f1242c9bb6009d.scope: Deactivated successfully.
Nov 25 16:55:23 compute-0 nova_compute[254092]: 2025-11-25 16:55:23.885 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:55:23 compute-0 nova_compute[254092]: 2025-11-25 16:55:23.887 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:55:23 compute-0 nova_compute[254092]: 2025-11-25 16:55:23.887 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:55:23 compute-0 podman[371106]: 2025-11-25 16:55:23.891971238 +0000 UTC m=+0.046176899 container create cd0b8ed93e4c14482652ac8599a45166d6b8bf267ccb1a839b1662adf763188c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_tu, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Nov 25 16:55:23 compute-0 systemd[1]: Started libpod-conmon-cd0b8ed93e4c14482652ac8599a45166d6b8bf267ccb1a839b1662adf763188c.scope.
Nov 25 16:55:23 compute-0 podman[371106]: 2025-11-25 16:55:23.874321448 +0000 UTC m=+0.028527099 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:55:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:55:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2150: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:55:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/290bf31f4baa3cb7b16a01e60dfb9546206224b97b098193cb9aa817e1b44412/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:55:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/290bf31f4baa3cb7b16a01e60dfb9546206224b97b098193cb9aa817e1b44412/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:55:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/290bf31f4baa3cb7b16a01e60dfb9546206224b97b098193cb9aa817e1b44412/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:55:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/290bf31f4baa3cb7b16a01e60dfb9546206224b97b098193cb9aa817e1b44412/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:55:23 compute-0 podman[371106]: 2025-11-25 16:55:23.992684011 +0000 UTC m=+0.146889662 container init cd0b8ed93e4c14482652ac8599a45166d6b8bf267ccb1a839b1662adf763188c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_tu, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:55:24 compute-0 podman[371106]: 2025-11-25 16:55:24.000741111 +0000 UTC m=+0.154946742 container start cd0b8ed93e4c14482652ac8599a45166d6b8bf267ccb1a839b1662adf763188c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 16:55:24 compute-0 podman[371106]: 2025-11-25 16:55:24.004364309 +0000 UTC m=+0.158569940 container attach cd0b8ed93e4c14482652ac8599a45166d6b8bf267ccb1a839b1662adf763188c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_tu, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 16:55:24 compute-0 nova_compute[254092]: 2025-11-25 16:55:24.062 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:55:24 compute-0 nova_compute[254092]: 2025-11-25 16:55:24.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:55:24 compute-0 pedantic_tu[371123]: {
Nov 25 16:55:24 compute-0 pedantic_tu[371123]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 16:55:24 compute-0 pedantic_tu[371123]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:55:24 compute-0 pedantic_tu[371123]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 16:55:24 compute-0 pedantic_tu[371123]:         "osd_id": 1,
Nov 25 16:55:24 compute-0 pedantic_tu[371123]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:55:24 compute-0 pedantic_tu[371123]:         "type": "bluestore"
Nov 25 16:55:24 compute-0 pedantic_tu[371123]:     },
Nov 25 16:55:24 compute-0 pedantic_tu[371123]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 16:55:24 compute-0 pedantic_tu[371123]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:55:24 compute-0 pedantic_tu[371123]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 16:55:24 compute-0 pedantic_tu[371123]:         "osd_id": 2,
Nov 25 16:55:24 compute-0 pedantic_tu[371123]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:55:24 compute-0 pedantic_tu[371123]:         "type": "bluestore"
Nov 25 16:55:24 compute-0 pedantic_tu[371123]:     },
Nov 25 16:55:24 compute-0 pedantic_tu[371123]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 16:55:24 compute-0 pedantic_tu[371123]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:55:24 compute-0 pedantic_tu[371123]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 16:55:24 compute-0 pedantic_tu[371123]:         "osd_id": 0,
Nov 25 16:55:24 compute-0 pedantic_tu[371123]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:55:24 compute-0 pedantic_tu[371123]:         "type": "bluestore"
Nov 25 16:55:24 compute-0 pedantic_tu[371123]:     }
Nov 25 16:55:24 compute-0 pedantic_tu[371123]: }
Nov 25 16:55:25 compute-0 systemd[1]: libpod-cd0b8ed93e4c14482652ac8599a45166d6b8bf267ccb1a839b1662adf763188c.scope: Deactivated successfully.
Nov 25 16:55:25 compute-0 podman[371106]: 2025-11-25 16:55:25.016287859 +0000 UTC m=+1.170493520 container died cd0b8ed93e4c14482652ac8599a45166d6b8bf267ccb1a839b1662adf763188c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_tu, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 16:55:25 compute-0 systemd[1]: libpod-cd0b8ed93e4c14482652ac8599a45166d6b8bf267ccb1a839b1662adf763188c.scope: Consumed 1.019s CPU time.
Nov 25 16:55:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-290bf31f4baa3cb7b16a01e60dfb9546206224b97b098193cb9aa817e1b44412-merged.mount: Deactivated successfully.
Nov 25 16:55:25 compute-0 podman[371106]: 2025-11-25 16:55:25.070392943 +0000 UTC m=+1.224598574 container remove cd0b8ed93e4c14482652ac8599a45166d6b8bf267ccb1a839b1662adf763188c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 16:55:25 compute-0 systemd[1]: libpod-conmon-cd0b8ed93e4c14482652ac8599a45166d6b8bf267ccb1a839b1662adf763188c.scope: Deactivated successfully.
Nov 25 16:55:25 compute-0 sudo[371001]: pam_unix(sudo:session): session closed for user root
Nov 25 16:55:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:55:25 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:55:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:55:25 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:55:25 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev b0013731-30c9-45da-b127-743732996315 does not exist
Nov 25 16:55:25 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 7d3a2893-9e1e-48ff-b1be-3dc218d1156f does not exist
Nov 25 16:55:25 compute-0 sudo[371166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:55:25 compute-0 sudo[371166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:55:25 compute-0 sudo[371166]: pam_unix(sudo:session): session closed for user root
Nov 25 16:55:25 compute-0 sudo[371191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 16:55:25 compute-0 sudo[371191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:55:25 compute-0 sudo[371191]: pam_unix(sudo:session): session closed for user root
Nov 25 16:55:25 compute-0 ceph-mon[74985]: pgmap v2150: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:55:25 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:55:25 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:55:25 compute-0 nova_compute[254092]: 2025-11-25 16:55:25.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:55:25 compute-0 nova_compute[254092]: 2025-11-25 16:55:25.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 16:55:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2151: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:55:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:55:27 compute-0 ceph-mon[74985]: pgmap v2151: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:55:27 compute-0 nova_compute[254092]: 2025-11-25 16:55:27.392 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:55:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2152: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:55:29 compute-0 nova_compute[254092]: 2025-11-25 16:55:29.063 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:55:29 compute-0 ceph-mon[74985]: pgmap v2152: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:55:29 compute-0 nova_compute[254092]: 2025-11-25 16:55:29.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:55:29 compute-0 nova_compute[254092]: 2025-11-25 16:55:29.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 16:55:29 compute-0 nova_compute[254092]: 2025-11-25 16:55:29.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 16:55:29 compute-0 nova_compute[254092]: 2025-11-25 16:55:29.511 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 16:55:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2153: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:55:30 compute-0 podman[371217]: 2025-11-25 16:55:30.116711421 +0000 UTC m=+0.529107581 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 16:55:30 compute-0 podman[371216]: 2025-11-25 16:55:30.120825243 +0000 UTC m=+0.533421149 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:55:30 compute-0 podman[371218]: 2025-11-25 16:55:30.147946552 +0000 UTC m=+0.560586389 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Nov 25 16:55:30 compute-0 ceph-mon[74985]: pgmap v2153: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:55:30 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #99. Immutable memtables: 0.
Nov 25 16:55:30 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:55:30.343854) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 16:55:30 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 99
Nov 25 16:55:30 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089730343914, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 1831, "num_deletes": 254, "total_data_size": 2951670, "memory_usage": 2993696, "flush_reason": "Manual Compaction"}
Nov 25 16:55:30 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #100: started
Nov 25 16:55:30 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089730363023, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 100, "file_size": 2867283, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43439, "largest_seqno": 45269, "table_properties": {"data_size": 2858854, "index_size": 5179, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17520, "raw_average_key_size": 20, "raw_value_size": 2841877, "raw_average_value_size": 3296, "num_data_blocks": 230, "num_entries": 862, "num_filter_entries": 862, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764089552, "oldest_key_time": 1764089552, "file_creation_time": 1764089730, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 100, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:55:30 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 19196 microseconds, and 6793 cpu microseconds.
Nov 25 16:55:30 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:55:30 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:55:30.363055) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #100: 2867283 bytes OK
Nov 25 16:55:30 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:55:30.363071) [db/memtable_list.cc:519] [default] Level-0 commit table #100 started
Nov 25 16:55:30 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:55:30.364736) [db/memtable_list.cc:722] [default] Level-0 commit table #100: memtable #1 done
Nov 25 16:55:30 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:55:30.364751) EVENT_LOG_v1 {"time_micros": 1764089730364746, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 16:55:30 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:55:30.364768) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 16:55:30 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 2943783, prev total WAL file size 2943783, number of live WAL files 2.
Nov 25 16:55:30 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000096.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:55:30 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:55:30.365603) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Nov 25 16:55:30 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 16:55:30 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [100(2800KB)], [98(7480KB)]
Nov 25 16:55:30 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089730365667, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [100], "files_L6": [98], "score": -1, "input_data_size": 10527477, "oldest_snapshot_seqno": -1}
Nov 25 16:55:30 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #101: 6685 keys, 8893735 bytes, temperature: kUnknown
Nov 25 16:55:30 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089730417409, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 101, "file_size": 8893735, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8849247, "index_size": 26648, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16773, "raw_key_size": 174120, "raw_average_key_size": 26, "raw_value_size": 8729614, "raw_average_value_size": 1305, "num_data_blocks": 1043, "num_entries": 6685, "num_filter_entries": 6685, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764089730, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:55:30 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:55:30 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:55:30.417712) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 8893735 bytes
Nov 25 16:55:30 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:55:30.419129) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 203.1 rd, 171.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 7.3 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(6.8) write-amplify(3.1) OK, records in: 7207, records dropped: 522 output_compression: NoCompression
Nov 25 16:55:30 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:55:30.419149) EVENT_LOG_v1 {"time_micros": 1764089730419140, "job": 58, "event": "compaction_finished", "compaction_time_micros": 51823, "compaction_time_cpu_micros": 19602, "output_level": 6, "num_output_files": 1, "total_output_size": 8893735, "num_input_records": 7207, "num_output_records": 6685, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 16:55:30 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000100.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:55:30 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089730420008, "job": 58, "event": "table_file_deletion", "file_number": 100}
Nov 25 16:55:30 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:55:30 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089730421936, "job": 58, "event": "table_file_deletion", "file_number": 98}
Nov 25 16:55:30 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:55:30.365529) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:55:30 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:55:30.422047) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:55:30 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:55:30.422058) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:55:30 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:55:30.422061) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:55:30 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:55:30.422064) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:55:30 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:55:30.422066) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:55:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:55:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2154: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:55:32 compute-0 nova_compute[254092]: 2025-11-25 16:55:32.394 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:55:32 compute-0 nova_compute[254092]: 2025-11-25 16:55:32.506 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:55:33 compute-0 ceph-mon[74985]: pgmap v2154: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:55:34 compute-0 nova_compute[254092]: 2025-11-25 16:55:34.066 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:55:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2155: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:55:35 compute-0 ceph-mon[74985]: pgmap v2155: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:55:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2156: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:55:36 compute-0 ceph-mon[74985]: pgmap v2156: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:55:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:55:37 compute-0 nova_compute[254092]: 2025-11-25 16:55:37.397 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:55:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2157: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:55:39 compute-0 nova_compute[254092]: 2025-11-25 16:55:39.069 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:55:39 compute-0 ceph-mon[74985]: pgmap v2157: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:55:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:55:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:55:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:55:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:55:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:55:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:55:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2158: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:55:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:55:40
Nov 25 16:55:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:55:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:55:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['images', '.mgr', 'volumes', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', 'vms']
Nov 25 16:55:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:55:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:55:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:55:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:55:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:55:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:55:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:55:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:55:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:55:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:55:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:55:41 compute-0 ceph-mon[74985]: pgmap v2158: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:55:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:55:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2159: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:55:42 compute-0 nova_compute[254092]: 2025-11-25 16:55:42.400 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:55:43 compute-0 ceph-mon[74985]: pgmap v2159: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:55:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2160: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:55:44 compute-0 nova_compute[254092]: 2025-11-25 16:55:44.118 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:55:45 compute-0 ceph-mon[74985]: pgmap v2160: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:55:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:55:45.258 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:55:45 compute-0 nova_compute[254092]: 2025-11-25 16:55:45.258 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:55:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:55:45.259 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 16:55:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2161: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:55:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:55:47 compute-0 ceph-mon[74985]: pgmap v2161: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:55:47 compute-0 nova_compute[254092]: 2025-11-25 16:55:47.461 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:55:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2162: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:55:49 compute-0 nova_compute[254092]: 2025-11-25 16:55:49.119 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:55:49 compute-0 ceph-mon[74985]: pgmap v2162: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:55:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2163: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:55:51 compute-0 ceph-mon[74985]: pgmap v2163: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:55:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:55:51.260 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:55:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:55:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:55:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:55:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:55:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 16:55:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:55:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:55:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:55:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:55:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:55:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 16:55:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:55:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 16:55:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:55:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:55:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:55:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 16:55:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:55:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 16:55:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:55:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:55:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:55:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 16:55:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:55:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2164: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:55:52 compute-0 nova_compute[254092]: 2025-11-25 16:55:52.463 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:55:53 compute-0 ceph-mon[74985]: pgmap v2164: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:55:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2165: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:55:54 compute-0 nova_compute[254092]: 2025-11-25 16:55:54.120 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:55:55 compute-0 ceph-mon[74985]: pgmap v2165: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:55:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 16:55:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3888755253' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:55:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 16:55:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3888755253' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:55:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2166: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:55:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/3888755253' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:55:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/3888755253' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:55:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:55:56.319 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f1:9b:c1 2001:db8:0:1:f816:3eff:fef1:9bc1'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fef1:9bc1/64', 'neutron:device_id': 'ovnmeta-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a2c11a14e2b494e85fc351a8601421a', 'neutron:revision_number': '30', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f4d60955-fbc7-4da8-87c3-cdd23dd7271a, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a745fe20-e175-40f3-bdd0-3c1a94b5a9b0) old=Port_Binding(mac=['fa:16:3e:f1:9b:c1 2001:db8::f816:3eff:fef1:9bc1'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fef1:9bc1/64', 'neutron:device_id': 'ovnmeta-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-608565e6-7435-4ef2-aacd-e9f617a996c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a2c11a14e2b494e85fc351a8601421a', 'neutron:revision_number': '28', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:55:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:55:56.320 163338 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a745fe20-e175-40f3-bdd0-3c1a94b5a9b0 in datapath 608565e6-7435-4ef2-aacd-e9f617a996c3 updated
Nov 25 16:55:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:55:56.320 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 608565e6-7435-4ef2-aacd-e9f617a996c3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:55:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:55:56.321 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[901e4296-5ad1-478c-b48b-55d890fb0d1b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:55:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:55:57 compute-0 ceph-mon[74985]: pgmap v2166: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:55:57 compute-0 nova_compute[254092]: 2025-11-25 16:55:57.464 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:55:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2167: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:55:59 compute-0 nova_compute[254092]: 2025-11-25 16:55:59.122 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:55:59 compute-0 ceph-mon[74985]: pgmap v2167: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:56:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2168: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:56:00 compute-0 podman[371281]: 2025-11-25 16:56:00.662283538 +0000 UTC m=+0.071270932 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Nov 25 16:56:00 compute-0 podman[371280]: 2025-11-25 16:56:00.665954928 +0000 UTC m=+0.084570414 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd)
Nov 25 16:56:00 compute-0 podman[371282]: 2025-11-25 16:56:00.685414448 +0000 UTC m=+0.095916803 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 16:56:01 compute-0 ceph-mon[74985]: pgmap v2168: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:56:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:56:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2169: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Nov 25 16:56:02 compute-0 nova_compute[254092]: 2025-11-25 16:56:02.466 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:56:03 compute-0 ceph-mon[74985]: pgmap v2169: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Nov 25 16:56:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2170: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Nov 25 16:56:04 compute-0 nova_compute[254092]: 2025-11-25 16:56:04.125 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:56:05 compute-0 ceph-mon[74985]: pgmap v2170: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Nov 25 16:56:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2171: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Nov 25 16:56:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:56:07 compute-0 ceph-mon[74985]: pgmap v2171: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Nov 25 16:56:07 compute-0 nova_compute[254092]: 2025-11-25 16:56:07.515 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:56:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2172: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Nov 25 16:56:09 compute-0 nova_compute[254092]: 2025-11-25 16:56:09.127 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:56:09 compute-0 ceph-mon[74985]: pgmap v2172: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Nov 25 16:56:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:56:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:56:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:56:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:56:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:56:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:56:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2173: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Nov 25 16:56:11 compute-0 ceph-mon[74985]: pgmap v2173: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Nov 25 16:56:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:56:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2174: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Nov 25 16:56:12 compute-0 nova_compute[254092]: 2025-11-25 16:56:12.517 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:56:13 compute-0 ceph-mon[74985]: pgmap v2174: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Nov 25 16:56:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:56:13.632 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:56:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:56:13.632 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:56:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:56:13.633 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:56:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2175: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 0 B/s wr, 69 op/s
Nov 25 16:56:14 compute-0 nova_compute[254092]: 2025-11-25 16:56:14.130 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:56:15 compute-0 ceph-mon[74985]: pgmap v2175: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 0 B/s wr, 69 op/s
Nov 25 16:56:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2176: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 0 B/s wr, 69 op/s
Nov 25 16:56:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:56:17 compute-0 ceph-mon[74985]: pgmap v2176: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 0 B/s wr, 69 op/s
Nov 25 16:56:17 compute-0 nova_compute[254092]: 2025-11-25 16:56:17.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:56:17 compute-0 nova_compute[254092]: 2025-11-25 16:56:17.556 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:56:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2177: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:56:19 compute-0 nova_compute[254092]: 2025-11-25 16:56:19.132 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:56:19 compute-0 ceph-mon[74985]: pgmap v2177: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:56:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2178: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:56:21 compute-0 ceph-mon[74985]: pgmap v2178: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:56:21 compute-0 nova_compute[254092]: 2025-11-25 16:56:21.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:56:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:56:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2179: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:56:22 compute-0 ceph-mon[74985]: pgmap v2179: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:56:22 compute-0 nova_compute[254092]: 2025-11-25 16:56:22.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:56:22 compute-0 nova_compute[254092]: 2025-11-25 16:56:22.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:56:22 compute-0 nova_compute[254092]: 2025-11-25 16:56:22.525 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:56:22 compute-0 nova_compute[254092]: 2025-11-25 16:56:22.525 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:56:22 compute-0 nova_compute[254092]: 2025-11-25 16:56:22.525 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:56:22 compute-0 nova_compute[254092]: 2025-11-25 16:56:22.525 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 16:56:22 compute-0 nova_compute[254092]: 2025-11-25 16:56:22.526 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:56:22 compute-0 nova_compute[254092]: 2025-11-25 16:56:22.558 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:56:22 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:56:22 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1127705648' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:56:22 compute-0 nova_compute[254092]: 2025-11-25 16:56:22.936 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:56:23 compute-0 nova_compute[254092]: 2025-11-25 16:56:23.077 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:56:23 compute-0 nova_compute[254092]: 2025-11-25 16:56:23.078 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3866MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 16:56:23 compute-0 nova_compute[254092]: 2025-11-25 16:56:23.078 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:56:23 compute-0 nova_compute[254092]: 2025-11-25 16:56:23.079 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:56:23 compute-0 nova_compute[254092]: 2025-11-25 16:56:23.128 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 16:56:23 compute-0 nova_compute[254092]: 2025-11-25 16:56:23.129 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 16:56:23 compute-0 nova_compute[254092]: 2025-11-25 16:56:23.145 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:56:23 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1127705648' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:56:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:56:23 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3930245941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:56:23 compute-0 nova_compute[254092]: 2025-11-25 16:56:23.548 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.402s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:56:23 compute-0 nova_compute[254092]: 2025-11-25 16:56:23.553 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:56:23 compute-0 nova_compute[254092]: 2025-11-25 16:56:23.571 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:56:23 compute-0 nova_compute[254092]: 2025-11-25 16:56:23.572 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 16:56:23 compute-0 nova_compute[254092]: 2025-11-25 16:56:23.573 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.494s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:56:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2180: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:56:24 compute-0 nova_compute[254092]: 2025-11-25 16:56:24.133 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:56:24 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3930245941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:56:24 compute-0 ceph-mon[74985]: pgmap v2180: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:56:25 compute-0 sudo[371387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:56:25 compute-0 sudo[371387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:56:25 compute-0 sudo[371387]: pam_unix(sudo:session): session closed for user root
Nov 25 16:56:25 compute-0 sudo[371412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:56:25 compute-0 sudo[371412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:56:25 compute-0 sudo[371412]: pam_unix(sudo:session): session closed for user root
Nov 25 16:56:25 compute-0 sudo[371437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:56:25 compute-0 sudo[371437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:56:25 compute-0 sudo[371437]: pam_unix(sudo:session): session closed for user root
Nov 25 16:56:25 compute-0 sudo[371462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 16:56:25 compute-0 sudo[371462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:56:25 compute-0 nova_compute[254092]: 2025-11-25 16:56:25.575 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:56:25 compute-0 nova_compute[254092]: 2025-11-25 16:56:25.575 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:56:25 compute-0 nova_compute[254092]: 2025-11-25 16:56:25.575 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:56:25 compute-0 nova_compute[254092]: 2025-11-25 16:56:25.575 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 16:56:25 compute-0 sudo[371462]: pam_unix(sudo:session): session closed for user root
Nov 25 16:56:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:56:26 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:56:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 16:56:26 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:56:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 16:56:26 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:56:26 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev c8099620-b715-4b49-ae09-e2246fad99b4 does not exist
Nov 25 16:56:26 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 16d1d671-ca2a-4b40-92d7-30573412b72e does not exist
Nov 25 16:56:26 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev faed2e19-dc3e-4342-972f-38115209b21c does not exist
Nov 25 16:56:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 16:56:26 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:56:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 16:56:26 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:56:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:56:26 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:56:26 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:56:26 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:56:26 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:56:26 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:56:26 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:56:26 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:56:26 compute-0 sudo[371519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:56:26 compute-0 sudo[371519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:56:26 compute-0 sudo[371519]: pam_unix(sudo:session): session closed for user root
Nov 25 16:56:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2181: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:56:26 compute-0 sudo[371544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:56:26 compute-0 sudo[371544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:56:26 compute-0 sudo[371544]: pam_unix(sudo:session): session closed for user root
Nov 25 16:56:26 compute-0 sudo[371569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:56:26 compute-0 sudo[371569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:56:26 compute-0 sudo[371569]: pam_unix(sudo:session): session closed for user root
Nov 25 16:56:26 compute-0 sudo[371594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 16:56:26 compute-0 sudo[371594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:56:26 compute-0 nova_compute[254092]: 2025-11-25 16:56:26.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:56:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:56:26 compute-0 podman[371659]: 2025-11-25 16:56:26.666794181 +0000 UTC m=+0.045499080 container create 65b9f7224cc78319febc95bf7cd5aef529fffb5f8438f7341608b9301aed7aee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kowalevski, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 16:56:26 compute-0 systemd[1]: Started libpod-conmon-65b9f7224cc78319febc95bf7cd5aef529fffb5f8438f7341608b9301aed7aee.scope.
Nov 25 16:56:26 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:56:26 compute-0 podman[371659]: 2025-11-25 16:56:26.646443617 +0000 UTC m=+0.025148556 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:56:26 compute-0 podman[371659]: 2025-11-25 16:56:26.754558602 +0000 UTC m=+0.133263511 container init 65b9f7224cc78319febc95bf7cd5aef529fffb5f8438f7341608b9301aed7aee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kowalevski, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 16:56:26 compute-0 podman[371659]: 2025-11-25 16:56:26.763200608 +0000 UTC m=+0.141905507 container start 65b9f7224cc78319febc95bf7cd5aef529fffb5f8438f7341608b9301aed7aee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:56:26 compute-0 podman[371659]: 2025-11-25 16:56:26.766699083 +0000 UTC m=+0.145403992 container attach 65b9f7224cc78319febc95bf7cd5aef529fffb5f8438f7341608b9301aed7aee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kowalevski, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 16:56:26 compute-0 frosty_kowalevski[371676]: 167 167
Nov 25 16:56:26 compute-0 systemd[1]: libpod-65b9f7224cc78319febc95bf7cd5aef529fffb5f8438f7341608b9301aed7aee.scope: Deactivated successfully.
Nov 25 16:56:26 compute-0 conmon[371676]: conmon 65b9f7224cc78319febc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-65b9f7224cc78319febc95bf7cd5aef529fffb5f8438f7341608b9301aed7aee.scope/container/memory.events
Nov 25 16:56:26 compute-0 podman[371681]: 2025-11-25 16:56:26.810831175 +0000 UTC m=+0.027805218 container died 65b9f7224cc78319febc95bf7cd5aef529fffb5f8438f7341608b9301aed7aee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:56:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e79dc638523f0f2876dd386516e4bbfd9e20a9d394ca63dfc33cfe295e2105d-merged.mount: Deactivated successfully.
Nov 25 16:56:26 compute-0 podman[371681]: 2025-11-25 16:56:26.862832921 +0000 UTC m=+0.079806924 container remove 65b9f7224cc78319febc95bf7cd5aef529fffb5f8438f7341608b9301aed7aee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kowalevski, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 16:56:26 compute-0 systemd[1]: libpod-conmon-65b9f7224cc78319febc95bf7cd5aef529fffb5f8438f7341608b9301aed7aee.scope: Deactivated successfully.
Nov 25 16:56:27 compute-0 podman[371704]: 2025-11-25 16:56:27.074531727 +0000 UTC m=+0.039537978 container create e0d78f2c74a1ae9cebb47e3877e9e5ca942b67c2ae6419df59b6c28481398597 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_keldysh, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 16:56:27 compute-0 ceph-mon[74985]: pgmap v2181: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:56:27 compute-0 systemd[1]: Started libpod-conmon-e0d78f2c74a1ae9cebb47e3877e9e5ca942b67c2ae6419df59b6c28481398597.scope.
Nov 25 16:56:27 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:56:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc9aca0d8e4e72797b34eb2b3379e2579b72891632ddf6b11b8245689c39f49c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:56:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc9aca0d8e4e72797b34eb2b3379e2579b72891632ddf6b11b8245689c39f49c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:56:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc9aca0d8e4e72797b34eb2b3379e2579b72891632ddf6b11b8245689c39f49c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:56:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc9aca0d8e4e72797b34eb2b3379e2579b72891632ddf6b11b8245689c39f49c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:56:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc9aca0d8e4e72797b34eb2b3379e2579b72891632ddf6b11b8245689c39f49c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 16:56:27 compute-0 podman[371704]: 2025-11-25 16:56:27.05777214 +0000 UTC m=+0.022778421 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:56:27 compute-0 podman[371704]: 2025-11-25 16:56:27.159200163 +0000 UTC m=+0.124206434 container init e0d78f2c74a1ae9cebb47e3877e9e5ca942b67c2ae6419df59b6c28481398597 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 16:56:27 compute-0 podman[371704]: 2025-11-25 16:56:27.171783256 +0000 UTC m=+0.136789507 container start e0d78f2c74a1ae9cebb47e3877e9e5ca942b67c2ae6419df59b6c28481398597 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_keldysh, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 16:56:27 compute-0 podman[371704]: 2025-11-25 16:56:27.174992223 +0000 UTC m=+0.139998474 container attach e0d78f2c74a1ae9cebb47e3877e9e5ca942b67c2ae6419df59b6c28481398597 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_keldysh, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:56:27 compute-0 nova_compute[254092]: 2025-11-25 16:56:27.601 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:56:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2182: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:56:29 compute-0 nova_compute[254092]: 2025-11-25 16:56:29.136 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:56:29 compute-0 ceph-mon[74985]: pgmap v2182: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:56:29 compute-0 hardcore_keldysh[371720]: --> passed data devices: 0 physical, 3 LVM
Nov 25 16:56:29 compute-0 hardcore_keldysh[371720]: --> relative data size: 1.0
Nov 25 16:56:29 compute-0 hardcore_keldysh[371720]: --> All data devices are unavailable
Nov 25 16:56:29 compute-0 systemd[1]: libpod-e0d78f2c74a1ae9cebb47e3877e9e5ca942b67c2ae6419df59b6c28481398597.scope: Deactivated successfully.
Nov 25 16:56:29 compute-0 podman[371749]: 2025-11-25 16:56:29.432694383 +0000 UTC m=+0.028159147 container died e0d78f2c74a1ae9cebb47e3877e9e5ca942b67c2ae6419df59b6c28481398597 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_keldysh, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 16:56:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc9aca0d8e4e72797b34eb2b3379e2579b72891632ddf6b11b8245689c39f49c-merged.mount: Deactivated successfully.
Nov 25 16:56:29 compute-0 podman[371749]: 2025-11-25 16:56:29.481102682 +0000 UTC m=+0.076567426 container remove e0d78f2c74a1ae9cebb47e3877e9e5ca942b67c2ae6419df59b6c28481398597 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_keldysh, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:56:29 compute-0 systemd[1]: libpod-conmon-e0d78f2c74a1ae9cebb47e3877e9e5ca942b67c2ae6419df59b6c28481398597.scope: Deactivated successfully.
Nov 25 16:56:29 compute-0 sudo[371594]: pam_unix(sudo:session): session closed for user root
Nov 25 16:56:29 compute-0 sudo[371765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:56:29 compute-0 sudo[371765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:56:29 compute-0 sudo[371765]: pam_unix(sudo:session): session closed for user root
Nov 25 16:56:29 compute-0 sudo[371790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:56:29 compute-0 sudo[371790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:56:29 compute-0 sudo[371790]: pam_unix(sudo:session): session closed for user root
Nov 25 16:56:29 compute-0 sudo[371815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:56:29 compute-0 sudo[371815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:56:29 compute-0 sudo[371815]: pam_unix(sudo:session): session closed for user root
Nov 25 16:56:29 compute-0 sudo[371840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 16:56:29 compute-0 sudo[371840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:56:30 compute-0 podman[371905]: 2025-11-25 16:56:30.115963053 +0000 UTC m=+0.036885056 container create 30525bd64b8badfd5a6b99c55b15de276f9b87899baad78af45068fb51250075 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_stonebraker, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:56:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2183: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:56:30 compute-0 systemd[1]: Started libpod-conmon-30525bd64b8badfd5a6b99c55b15de276f9b87899baad78af45068fb51250075.scope.
Nov 25 16:56:30 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:56:30 compute-0 podman[371905]: 2025-11-25 16:56:30.193234227 +0000 UTC m=+0.114156280 container init 30525bd64b8badfd5a6b99c55b15de276f9b87899baad78af45068fb51250075 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_stonebraker, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 16:56:30 compute-0 podman[371905]: 2025-11-25 16:56:30.101496788 +0000 UTC m=+0.022418811 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:56:30 compute-0 podman[371905]: 2025-11-25 16:56:30.199092716 +0000 UTC m=+0.120014719 container start 30525bd64b8badfd5a6b99c55b15de276f9b87899baad78af45068fb51250075 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_stonebraker, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 16:56:30 compute-0 podman[371905]: 2025-11-25 16:56:30.20179203 +0000 UTC m=+0.122714063 container attach 30525bd64b8badfd5a6b99c55b15de276f9b87899baad78af45068fb51250075 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 16:56:30 compute-0 condescending_stonebraker[371922]: 167 167
Nov 25 16:56:30 compute-0 systemd[1]: libpod-30525bd64b8badfd5a6b99c55b15de276f9b87899baad78af45068fb51250075.scope: Deactivated successfully.
Nov 25 16:56:30 compute-0 podman[371905]: 2025-11-25 16:56:30.204540445 +0000 UTC m=+0.125462448 container died 30525bd64b8badfd5a6b99c55b15de276f9b87899baad78af45068fb51250075 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_stonebraker, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 16:56:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-911cef95f7d0214a09f5b16a8f9a23e5d77d5098653c19d96ee1919c701b3f1f-merged.mount: Deactivated successfully.
Nov 25 16:56:30 compute-0 podman[371905]: 2025-11-25 16:56:30.237936015 +0000 UTC m=+0.158858028 container remove 30525bd64b8badfd5a6b99c55b15de276f9b87899baad78af45068fb51250075 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 16:56:30 compute-0 systemd[1]: libpod-conmon-30525bd64b8badfd5a6b99c55b15de276f9b87899baad78af45068fb51250075.scope: Deactivated successfully.
Nov 25 16:56:30 compute-0 podman[371945]: 2025-11-25 16:56:30.391487227 +0000 UTC m=+0.039515127 container create 3783753abe8d538754995dc9d408e252fc9c2f33936e65f7bdd7e5939106050b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_noyce, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 16:56:30 compute-0 systemd[1]: Started libpod-conmon-3783753abe8d538754995dc9d408e252fc9c2f33936e65f7bdd7e5939106050b.scope.
Nov 25 16:56:30 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:56:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab31de570c393ebee6d293edf76fb4b1459fa7a53a1dfb4943ab2db573f82df9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:56:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab31de570c393ebee6d293edf76fb4b1459fa7a53a1dfb4943ab2db573f82df9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:56:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab31de570c393ebee6d293edf76fb4b1459fa7a53a1dfb4943ab2db573f82df9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:56:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab31de570c393ebee6d293edf76fb4b1459fa7a53a1dfb4943ab2db573f82df9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:56:30 compute-0 podman[371945]: 2025-11-25 16:56:30.45952976 +0000 UTC m=+0.107557670 container init 3783753abe8d538754995dc9d408e252fc9c2f33936e65f7bdd7e5939106050b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_noyce, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:56:30 compute-0 podman[371945]: 2025-11-25 16:56:30.465880902 +0000 UTC m=+0.113908802 container start 3783753abe8d538754995dc9d408e252fc9c2f33936e65f7bdd7e5939106050b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_noyce, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:56:30 compute-0 podman[371945]: 2025-11-25 16:56:30.468516714 +0000 UTC m=+0.116544634 container attach 3783753abe8d538754995dc9d408e252fc9c2f33936e65f7bdd7e5939106050b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_noyce, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:56:30 compute-0 podman[371945]: 2025-11-25 16:56:30.3758445 +0000 UTC m=+0.023872420 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:56:30 compute-0 nova_compute[254092]: 2025-11-25 16:56:30.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:56:30 compute-0 nova_compute[254092]: 2025-11-25 16:56:30.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 16:56:30 compute-0 nova_compute[254092]: 2025-11-25 16:56:30.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 16:56:30 compute-0 nova_compute[254092]: 2025-11-25 16:56:30.509 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 16:56:31 compute-0 ceph-mon[74985]: pgmap v2183: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:56:31 compute-0 jovial_noyce[371962]: {
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:     "0": [
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:         {
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:             "devices": [
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:                 "/dev/loop3"
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:             ],
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:             "lv_name": "ceph_lv0",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:             "lv_size": "21470642176",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:             "name": "ceph_lv0",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:             "tags": {
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:                 "ceph.cluster_name": "ceph",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:                 "ceph.crush_device_class": "",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:                 "ceph.encrypted": "0",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:                 "ceph.osd_id": "0",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:                 "ceph.type": "block",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:                 "ceph.vdo": "0"
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:             },
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:             "type": "block",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:             "vg_name": "ceph_vg0"
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:         }
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:     ],
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:     "1": [
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:         {
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:             "devices": [
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:                 "/dev/loop4"
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:             ],
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:             "lv_name": "ceph_lv1",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:             "lv_size": "21470642176",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:             "name": "ceph_lv1",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:             "tags": {
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:                 "ceph.cluster_name": "ceph",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:                 "ceph.crush_device_class": "",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:                 "ceph.encrypted": "0",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:                 "ceph.osd_id": "1",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:                 "ceph.type": "block",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:                 "ceph.vdo": "0"
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:             },
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:             "type": "block",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:             "vg_name": "ceph_vg1"
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:         }
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:     ],
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:     "2": [
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:         {
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:             "devices": [
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:                 "/dev/loop5"
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:             ],
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:             "lv_name": "ceph_lv2",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:             "lv_size": "21470642176",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:             "name": "ceph_lv2",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:             "tags": {
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:                 "ceph.cluster_name": "ceph",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:                 "ceph.crush_device_class": "",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:                 "ceph.encrypted": "0",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:                 "ceph.osd_id": "2",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:                 "ceph.type": "block",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:                 "ceph.vdo": "0"
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:             },
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:             "type": "block",
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:             "vg_name": "ceph_vg2"
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:         }
Nov 25 16:56:31 compute-0 jovial_noyce[371962]:     ]
Nov 25 16:56:31 compute-0 jovial_noyce[371962]: }
Nov 25 16:56:31 compute-0 systemd[1]: libpod-3783753abe8d538754995dc9d408e252fc9c2f33936e65f7bdd7e5939106050b.scope: Deactivated successfully.
Nov 25 16:56:31 compute-0 podman[371945]: 2025-11-25 16:56:31.21479786 +0000 UTC m=+0.862825760 container died 3783753abe8d538754995dc9d408e252fc9c2f33936e65f7bdd7e5939106050b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_noyce, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:56:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab31de570c393ebee6d293edf76fb4b1459fa7a53a1dfb4943ab2db573f82df9-merged.mount: Deactivated successfully.
Nov 25 16:56:31 compute-0 podman[371945]: 2025-11-25 16:56:31.273874039 +0000 UTC m=+0.921901939 container remove 3783753abe8d538754995dc9d408e252fc9c2f33936e65f7bdd7e5939106050b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:56:31 compute-0 systemd[1]: libpod-conmon-3783753abe8d538754995dc9d408e252fc9c2f33936e65f7bdd7e5939106050b.scope: Deactivated successfully.
Nov 25 16:56:31 compute-0 sudo[371840]: pam_unix(sudo:session): session closed for user root
Nov 25 16:56:31 compute-0 podman[371979]: 2025-11-25 16:56:31.313719694 +0000 UTC m=+0.068153777 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent)
Nov 25 16:56:31 compute-0 podman[371972]: 2025-11-25 16:56:31.317071696 +0000 UTC m=+0.068752103 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 16:56:31 compute-0 sudo[372037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:56:31 compute-0 sudo[372037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:56:31 compute-0 sudo[372037]: pam_unix(sudo:session): session closed for user root
Nov 25 16:56:31 compute-0 podman[371980]: 2025-11-25 16:56:31.383503445 +0000 UTC m=+0.135091070 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251118, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 16:56:31 compute-0 sudo[372067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:56:31 compute-0 sudo[372067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:56:31 compute-0 sudo[372067]: pam_unix(sudo:session): session closed for user root
Nov 25 16:56:31 compute-0 sudo[372092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:56:31 compute-0 sudo[372092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:56:31 compute-0 sudo[372092]: pam_unix(sudo:session): session closed for user root
Nov 25 16:56:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:56:31 compute-0 sudo[372117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 16:56:31 compute-0 sudo[372117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:56:31 compute-0 podman[372185]: 2025-11-25 16:56:31.844267575 +0000 UTC m=+0.039238810 container create 4e58758da9837347ccc6a5085f078dd13a8267480609425cfc9c818e48314704 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_dhawan, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:56:31 compute-0 systemd[1]: Started libpod-conmon-4e58758da9837347ccc6a5085f078dd13a8267480609425cfc9c818e48314704.scope.
Nov 25 16:56:31 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:56:31 compute-0 podman[372185]: 2025-11-25 16:56:31.912683278 +0000 UTC m=+0.107654553 container init 4e58758da9837347ccc6a5085f078dd13a8267480609425cfc9c818e48314704 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_dhawan, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:56:31 compute-0 podman[372185]: 2025-11-25 16:56:31.919677948 +0000 UTC m=+0.114649183 container start 4e58758da9837347ccc6a5085f078dd13a8267480609425cfc9c818e48314704 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:56:31 compute-0 podman[372185]: 2025-11-25 16:56:31.829443231 +0000 UTC m=+0.024414496 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:56:31 compute-0 podman[372185]: 2025-11-25 16:56:31.922819654 +0000 UTC m=+0.117790919 container attach 4e58758da9837347ccc6a5085f078dd13a8267480609425cfc9c818e48314704 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_dhawan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:56:31 compute-0 relaxed_dhawan[372201]: 167 167
Nov 25 16:56:31 compute-0 systemd[1]: libpod-4e58758da9837347ccc6a5085f078dd13a8267480609425cfc9c818e48314704.scope: Deactivated successfully.
Nov 25 16:56:31 compute-0 podman[372185]: 2025-11-25 16:56:31.924956442 +0000 UTC m=+0.119927687 container died 4e58758da9837347ccc6a5085f078dd13a8267480609425cfc9c818e48314704 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_dhawan, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 16:56:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-8fab899b22e7fd5c57e2dbd6541d004f2b1f7176c63bdfcb5780bf4a7a35106b-merged.mount: Deactivated successfully.
Nov 25 16:56:31 compute-0 podman[372185]: 2025-11-25 16:56:31.959022769 +0000 UTC m=+0.153994014 container remove 4e58758da9837347ccc6a5085f078dd13a8267480609425cfc9c818e48314704 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_dhawan, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:56:31 compute-0 systemd[1]: libpod-conmon-4e58758da9837347ccc6a5085f078dd13a8267480609425cfc9c818e48314704.scope: Deactivated successfully.
Nov 25 16:56:32 compute-0 podman[372223]: 2025-11-25 16:56:32.103984538 +0000 UTC m=+0.042132399 container create 5645debcbfac6c145962652a3db797e53ecea246298fc86c1e17be55cc01f26d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shockley, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:56:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2184: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:56:32 compute-0 systemd[1]: Started libpod-conmon-5645debcbfac6c145962652a3db797e53ecea246298fc86c1e17be55cc01f26d.scope.
Nov 25 16:56:32 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:56:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23f8278ee9ab847b7fac53bec50a431eefb8d500119dcb0257fcb3fbdeb1ae23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:56:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23f8278ee9ab847b7fac53bec50a431eefb8d500119dcb0257fcb3fbdeb1ae23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:56:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23f8278ee9ab847b7fac53bec50a431eefb8d500119dcb0257fcb3fbdeb1ae23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:56:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23f8278ee9ab847b7fac53bec50a431eefb8d500119dcb0257fcb3fbdeb1ae23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:56:32 compute-0 podman[372223]: 2025-11-25 16:56:32.176458062 +0000 UTC m=+0.114605913 container init 5645debcbfac6c145962652a3db797e53ecea246298fc86c1e17be55cc01f26d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shockley, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 16:56:32 compute-0 podman[372223]: 2025-11-25 16:56:32.082871003 +0000 UTC m=+0.021018904 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:56:32 compute-0 podman[372223]: 2025-11-25 16:56:32.183383631 +0000 UTC m=+0.121531482 container start 5645debcbfac6c145962652a3db797e53ecea246298fc86c1e17be55cc01f26d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:56:32 compute-0 podman[372223]: 2025-11-25 16:56:32.186415343 +0000 UTC m=+0.124563194 container attach 5645debcbfac6c145962652a3db797e53ecea246298fc86c1e17be55cc01f26d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shockley, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 16:56:32 compute-0 nova_compute[254092]: 2025-11-25 16:56:32.604 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:56:33 compute-0 thirsty_shockley[372239]: {
Nov 25 16:56:33 compute-0 thirsty_shockley[372239]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 16:56:33 compute-0 thirsty_shockley[372239]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:56:33 compute-0 thirsty_shockley[372239]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 16:56:33 compute-0 thirsty_shockley[372239]:         "osd_id": 1,
Nov 25 16:56:33 compute-0 thirsty_shockley[372239]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:56:33 compute-0 thirsty_shockley[372239]:         "type": "bluestore"
Nov 25 16:56:33 compute-0 thirsty_shockley[372239]:     },
Nov 25 16:56:33 compute-0 thirsty_shockley[372239]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 16:56:33 compute-0 thirsty_shockley[372239]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:56:33 compute-0 thirsty_shockley[372239]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 16:56:33 compute-0 thirsty_shockley[372239]:         "osd_id": 2,
Nov 25 16:56:33 compute-0 thirsty_shockley[372239]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:56:33 compute-0 thirsty_shockley[372239]:         "type": "bluestore"
Nov 25 16:56:33 compute-0 thirsty_shockley[372239]:     },
Nov 25 16:56:33 compute-0 thirsty_shockley[372239]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 16:56:33 compute-0 thirsty_shockley[372239]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:56:33 compute-0 thirsty_shockley[372239]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 16:56:33 compute-0 thirsty_shockley[372239]:         "osd_id": 0,
Nov 25 16:56:33 compute-0 thirsty_shockley[372239]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:56:33 compute-0 thirsty_shockley[372239]:         "type": "bluestore"
Nov 25 16:56:33 compute-0 thirsty_shockley[372239]:     }
Nov 25 16:56:33 compute-0 thirsty_shockley[372239]: }
Nov 25 16:56:33 compute-0 systemd[1]: libpod-5645debcbfac6c145962652a3db797e53ecea246298fc86c1e17be55cc01f26d.scope: Deactivated successfully.
Nov 25 16:56:33 compute-0 podman[372223]: 2025-11-25 16:56:33.081389198 +0000 UTC m=+1.019537049 container died 5645debcbfac6c145962652a3db797e53ecea246298fc86c1e17be55cc01f26d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shockley, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 16:56:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-23f8278ee9ab847b7fac53bec50a431eefb8d500119dcb0257fcb3fbdeb1ae23-merged.mount: Deactivated successfully.
Nov 25 16:56:33 compute-0 podman[372223]: 2025-11-25 16:56:33.131342829 +0000 UTC m=+1.069490680 container remove 5645debcbfac6c145962652a3db797e53ecea246298fc86c1e17be55cc01f26d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 16:56:33 compute-0 systemd[1]: libpod-conmon-5645debcbfac6c145962652a3db797e53ecea246298fc86c1e17be55cc01f26d.scope: Deactivated successfully.
Nov 25 16:56:33 compute-0 ceph-mon[74985]: pgmap v2184: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:56:33 compute-0 sudo[372117]: pam_unix(sudo:session): session closed for user root
Nov 25 16:56:33 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:56:33 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:56:33 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:56:33 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:56:33 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev e9800288-981b-4947-9bfe-7eab647c1666 does not exist
Nov 25 16:56:33 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 35619cca-792c-4d15-bb83-d7c174e53912 does not exist
Nov 25 16:56:33 compute-0 sudo[372284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:56:33 compute-0 sudo[372284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:56:33 compute-0 sudo[372284]: pam_unix(sudo:session): session closed for user root
Nov 25 16:56:33 compute-0 sudo[372309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 16:56:33 compute-0 sudo[372309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:56:33 compute-0 sudo[372309]: pam_unix(sudo:session): session closed for user root
Nov 25 16:56:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2185: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:56:34 compute-0 nova_compute[254092]: 2025-11-25 16:56:34.136 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:56:34 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:56:34 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:56:35 compute-0 ceph-mon[74985]: pgmap v2185: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:56:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2186: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:56:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:56:37 compute-0 ceph-mon[74985]: pgmap v2186: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:56:37 compute-0 nova_compute[254092]: 2025-11-25 16:56:37.651 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:56:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2187: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:56:39 compute-0 nova_compute[254092]: 2025-11-25 16:56:39.138 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:56:39 compute-0 ceph-mon[74985]: pgmap v2187: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:56:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:56:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:56:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:56:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:56:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:56:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:56:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2188: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:56:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:56:40
Nov 25 16:56:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:56:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:56:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', 'images', 'default.rgw.meta', '.rgw.root', 'volumes', 'cephfs.cephfs.data', 'vms', '.mgr', 'cephfs.cephfs.meta', 'backups']
Nov 25 16:56:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:56:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:56:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:56:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:56:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:56:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:56:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:56:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:56:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:56:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:56:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:56:41 compute-0 ceph-mon[74985]: pgmap v2188: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:56:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:56:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2189: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:56:42 compute-0 nova_compute[254092]: 2025-11-25 16:56:42.653 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:56:43 compute-0 ceph-mon[74985]: pgmap v2189: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:56:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2190: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:56:44 compute-0 nova_compute[254092]: 2025-11-25 16:56:44.140 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:56:45 compute-0 ceph-mon[74985]: pgmap v2190: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:56:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:56:45.461 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:56:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:56:45.462 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 16:56:45 compute-0 nova_compute[254092]: 2025-11-25 16:56:45.461 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:56:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2191: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:56:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:56:47 compute-0 ceph-mon[74985]: pgmap v2191: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:56:47 compute-0 nova_compute[254092]: 2025-11-25 16:56:47.656 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:56:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2192: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:56:49 compute-0 nova_compute[254092]: 2025-11-25 16:56:49.143 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:56:49 compute-0 ceph-mon[74985]: pgmap v2192: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:56:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2193: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:56:51 compute-0 ceph-mon[74985]: pgmap v2193: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:56:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:56:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:56:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:56:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:56:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 16:56:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:56:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:56:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:56:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:56:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:56:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 16:56:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:56:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 16:56:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:56:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:56:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:56:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 16:56:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:56:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 16:56:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:56:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:56:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:56:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 16:56:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:56:51.463 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:56:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:56:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2194: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:56:52 compute-0 nova_compute[254092]: 2025-11-25 16:56:52.657 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:56:53 compute-0 ceph-mon[74985]: pgmap v2194: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:56:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2195: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:56:54 compute-0 nova_compute[254092]: 2025-11-25 16:56:54.144 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:56:55 compute-0 ceph-mon[74985]: pgmap v2195: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:56:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 16:56:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/78123321' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:56:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 16:56:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/78123321' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:56:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2196: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:56:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/78123321' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:56:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/78123321' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:56:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:56:57 compute-0 ceph-mon[74985]: pgmap v2196: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:56:57 compute-0 nova_compute[254092]: 2025-11-25 16:56:57.659 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:56:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2197: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:56:59 compute-0 nova_compute[254092]: 2025-11-25 16:56:59.146 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:56:59 compute-0 ceph-mon[74985]: pgmap v2197: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:57:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2198: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:57:01 compute-0 ceph-mon[74985]: pgmap v2198: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:57:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:57:01 compute-0 podman[372334]: 2025-11-25 16:57:01.716173345 +0000 UTC m=+0.111310773 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible)
Nov 25 16:57:01 compute-0 podman[372335]: 2025-11-25 16:57:01.732202302 +0000 UTC m=+0.128365647 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:57:01 compute-0 podman[372336]: 2025-11-25 16:57:01.73616641 +0000 UTC m=+0.126744483 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 25 16:57:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2199: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:57:02 compute-0 nova_compute[254092]: 2025-11-25 16:57:02.662 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:57:03 compute-0 ceph-mon[74985]: pgmap v2199: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:57:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2200: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:57:04 compute-0 nova_compute[254092]: 2025-11-25 16:57:04.147 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:57:05 compute-0 ceph-mon[74985]: pgmap v2200: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:57:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2201: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:57:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:57:07 compute-0 ceph-mon[74985]: pgmap v2201: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:57:07 compute-0 nova_compute[254092]: 2025-11-25 16:57:07.713 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:57:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2202: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:57:09 compute-0 nova_compute[254092]: 2025-11-25 16:57:09.149 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:57:09 compute-0 ceph-mon[74985]: pgmap v2202: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:57:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:57:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:57:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:57:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:57:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:57:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:57:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2203: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:57:11 compute-0 ceph-mon[74985]: pgmap v2203: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:57:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:57:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2204: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:57:12 compute-0 nova_compute[254092]: 2025-11-25 16:57:12.761 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:57:13 compute-0 sshd-session[372393]: Connection closed by authenticating user root 171.244.51.45 port 40998 [preauth]
Nov 25 16:57:13 compute-0 ceph-mon[74985]: pgmap v2204: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:57:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:57:13.633 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:57:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:57:13.633 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:57:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:57:13.634 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:57:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2205: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:57:14 compute-0 nova_compute[254092]: 2025-11-25 16:57:14.174 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:57:15 compute-0 ceph-mon[74985]: pgmap v2205: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:57:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2206: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:57:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:57:17 compute-0 ceph-mon[74985]: pgmap v2206: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:57:17 compute-0 nova_compute[254092]: 2025-11-25 16:57:17.763 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:57:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2207: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:57:19 compute-0 nova_compute[254092]: 2025-11-25 16:57:19.175 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:57:19 compute-0 nova_compute[254092]: 2025-11-25 16:57:19.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:57:19 compute-0 ceph-mon[74985]: pgmap v2207: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:57:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2208: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:57:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:57:21 compute-0 ceph-mon[74985]: pgmap v2208: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:57:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2209: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:57:22 compute-0 nova_compute[254092]: 2025-11-25 16:57:22.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:57:22 compute-0 nova_compute[254092]: 2025-11-25 16:57:22.786 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:57:23 compute-0 nova_compute[254092]: 2025-11-25 16:57:23.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:57:23 compute-0 nova_compute[254092]: 2025-11-25 16:57:23.518 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:57:23 compute-0 nova_compute[254092]: 2025-11-25 16:57:23.519 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:57:23 compute-0 nova_compute[254092]: 2025-11-25 16:57:23.519 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:57:23 compute-0 nova_compute[254092]: 2025-11-25 16:57:23.519 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 16:57:23 compute-0 nova_compute[254092]: 2025-11-25 16:57:23.519 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:57:23 compute-0 ceph-mon[74985]: pgmap v2209: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:57:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:57:23 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3571550181' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:57:23 compute-0 nova_compute[254092]: 2025-11-25 16:57:23.971 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:57:24 compute-0 nova_compute[254092]: 2025-11-25 16:57:24.119 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:57:24 compute-0 nova_compute[254092]: 2025-11-25 16:57:24.121 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3872MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 16:57:24 compute-0 nova_compute[254092]: 2025-11-25 16:57:24.121 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:57:24 compute-0 nova_compute[254092]: 2025-11-25 16:57:24.121 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:57:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2210: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:57:24 compute-0 nova_compute[254092]: 2025-11-25 16:57:24.178 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:57:24 compute-0 nova_compute[254092]: 2025-11-25 16:57:24.188 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 16:57:24 compute-0 nova_compute[254092]: 2025-11-25 16:57:24.189 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 16:57:24 compute-0 nova_compute[254092]: 2025-11-25 16:57:24.276 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:57:24 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3571550181' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:57:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:57:24 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3318878177' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:57:24 compute-0 nova_compute[254092]: 2025-11-25 16:57:24.694 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:57:24 compute-0 nova_compute[254092]: 2025-11-25 16:57:24.699 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:57:24 compute-0 nova_compute[254092]: 2025-11-25 16:57:24.717 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:57:24 compute-0 nova_compute[254092]: 2025-11-25 16:57:24.718 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 16:57:24 compute-0 nova_compute[254092]: 2025-11-25 16:57:24.718 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:57:25 compute-0 ceph-mon[74985]: pgmap v2210: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:57:25 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3318878177' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:57:25 compute-0 nova_compute[254092]: 2025-11-25 16:57:25.719 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:57:25 compute-0 nova_compute[254092]: 2025-11-25 16:57:25.719 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:57:25 compute-0 nova_compute[254092]: 2025-11-25 16:57:25.720 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:57:25 compute-0 nova_compute[254092]: 2025-11-25 16:57:25.720 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 16:57:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2211: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:57:26 compute-0 nova_compute[254092]: 2025-11-25 16:57:26.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:57:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:57:27 compute-0 nova_compute[254092]: 2025-11-25 16:57:27.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:57:27 compute-0 ceph-mon[74985]: pgmap v2211: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:57:27 compute-0 nova_compute[254092]: 2025-11-25 16:57:27.787 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:57:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2212: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:57:29 compute-0 nova_compute[254092]: 2025-11-25 16:57:29.185 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:57:29 compute-0 ceph-mon[74985]: pgmap v2212: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:57:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2213: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:57:31 compute-0 nova_compute[254092]: 2025-11-25 16:57:31.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:57:31 compute-0 nova_compute[254092]: 2025-11-25 16:57:31.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 16:57:31 compute-0 nova_compute[254092]: 2025-11-25 16:57:31.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 16:57:31 compute-0 nova_compute[254092]: 2025-11-25 16:57:31.513 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 16:57:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:57:31 compute-0 ceph-mon[74985]: pgmap v2213: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:57:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2214: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:57:32 compute-0 podman[372440]: 2025-11-25 16:57:32.692169792 +0000 UTC m=+0.099354776 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 16:57:32 compute-0 podman[372439]: 2025-11-25 16:57:32.692289525 +0000 UTC m=+0.093938838 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd)
Nov 25 16:57:32 compute-0 podman[372441]: 2025-11-25 16:57:32.717751349 +0000 UTC m=+0.123279599 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:57:32 compute-0 nova_compute[254092]: 2025-11-25 16:57:32.829 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:57:33 compute-0 sudo[372505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:57:33 compute-0 sudo[372505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:57:33 compute-0 sudo[372505]: pam_unix(sudo:session): session closed for user root
Nov 25 16:57:33 compute-0 sudo[372530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:57:33 compute-0 sudo[372530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:57:33 compute-0 sudo[372530]: pam_unix(sudo:session): session closed for user root
Nov 25 16:57:33 compute-0 sudo[372555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:57:33 compute-0 sudo[372555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:57:33 compute-0 sudo[372555]: pam_unix(sudo:session): session closed for user root
Nov 25 16:57:33 compute-0 sudo[372580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 16:57:33 compute-0 sudo[372580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:57:33 compute-0 ceph-mon[74985]: pgmap v2214: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:57:34 compute-0 sudo[372580]: pam_unix(sudo:session): session closed for user root
Nov 25 16:57:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 16:57:34 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 16:57:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2215: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:57:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:57:34 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:57:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 16:57:34 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:57:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 16:57:34 compute-0 nova_compute[254092]: 2025-11-25 16:57:34.231 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:57:34 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:57:34 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 95eac55f-af9f-4557-9c1f-dfd5da1d708e does not exist
Nov 25 16:57:34 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 9995f906-4427-4961-bb2c-305ad104d0d4 does not exist
Nov 25 16:57:34 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 1fefd49b-82fa-48f4-8cf0-d58869d88069 does not exist
Nov 25 16:57:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 16:57:34 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:57:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 16:57:34 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:57:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:57:34 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:57:34 compute-0 sudo[372637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:57:34 compute-0 sudo[372637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:57:34 compute-0 sudo[372637]: pam_unix(sudo:session): session closed for user root
Nov 25 16:57:34 compute-0 sudo[372662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:57:34 compute-0 sudo[372662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:57:34 compute-0 sudo[372662]: pam_unix(sudo:session): session closed for user root
Nov 25 16:57:34 compute-0 sudo[372687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:57:34 compute-0 sudo[372687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:57:34 compute-0 sudo[372687]: pam_unix(sudo:session): session closed for user root
Nov 25 16:57:34 compute-0 nova_compute[254092]: 2025-11-25 16:57:34.508 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:57:34 compute-0 sudo[372712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 16:57:34 compute-0 sudo[372712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:57:34 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 16:57:34 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:57:34 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:57:34 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:57:34 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:57:34 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:57:34 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:57:34 compute-0 podman[372777]: 2025-11-25 16:57:34.92586943 +0000 UTC m=+0.075718215 container create ce1bf1842f4a9e95c502908973170cdc1ac05ef125eda71ce7fcbe1e899cdec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shamir, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 16:57:34 compute-0 systemd[1]: Started libpod-conmon-ce1bf1842f4a9e95c502908973170cdc1ac05ef125eda71ce7fcbe1e899cdec6.scope.
Nov 25 16:57:34 compute-0 podman[372777]: 2025-11-25 16:57:34.88658913 +0000 UTC m=+0.036437995 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:57:35 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:57:35 compute-0 podman[372777]: 2025-11-25 16:57:35.051041648 +0000 UTC m=+0.200890463 container init ce1bf1842f4a9e95c502908973170cdc1ac05ef125eda71ce7fcbe1e899cdec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:57:35 compute-0 podman[372777]: 2025-11-25 16:57:35.057842844 +0000 UTC m=+0.207691629 container start ce1bf1842f4a9e95c502908973170cdc1ac05ef125eda71ce7fcbe1e899cdec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Nov 25 16:57:35 compute-0 zealous_shamir[372793]: 167 167
Nov 25 16:57:35 compute-0 systemd[1]: libpod-ce1bf1842f4a9e95c502908973170cdc1ac05ef125eda71ce7fcbe1e899cdec6.scope: Deactivated successfully.
Nov 25 16:57:35 compute-0 podman[372777]: 2025-11-25 16:57:35.064491115 +0000 UTC m=+0.214339900 container attach ce1bf1842f4a9e95c502908973170cdc1ac05ef125eda71ce7fcbe1e899cdec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Nov 25 16:57:35 compute-0 podman[372777]: 2025-11-25 16:57:35.065600764 +0000 UTC m=+0.215449579 container died ce1bf1842f4a9e95c502908973170cdc1ac05ef125eda71ce7fcbe1e899cdec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:57:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae318328ba4c744bc5827feaa263b5166d900d7584e9879ca182d674d7cb22cd-merged.mount: Deactivated successfully.
Nov 25 16:57:35 compute-0 podman[372777]: 2025-11-25 16:57:35.279342806 +0000 UTC m=+0.429191601 container remove ce1bf1842f4a9e95c502908973170cdc1ac05ef125eda71ce7fcbe1e899cdec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 16:57:35 compute-0 systemd[1]: libpod-conmon-ce1bf1842f4a9e95c502908973170cdc1ac05ef125eda71ce7fcbe1e899cdec6.scope: Deactivated successfully.
Nov 25 16:57:35 compute-0 podman[372819]: 2025-11-25 16:57:35.474586883 +0000 UTC m=+0.077668765 container create 159a79ee352799f7c3fd293948bcbb29a3c572e18af89bfa7d4878581d22b40a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 16:57:35 compute-0 podman[372819]: 2025-11-25 16:57:35.423513362 +0000 UTC m=+0.026595354 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:57:35 compute-0 systemd[1]: Started libpod-conmon-159a79ee352799f7c3fd293948bcbb29a3c572e18af89bfa7d4878581d22b40a.scope.
Nov 25 16:57:35 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:57:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42a6a3f631f7efbae7bd1c0c14f609c305fdd439ccc436bce3982746c20d01be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:57:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42a6a3f631f7efbae7bd1c0c14f609c305fdd439ccc436bce3982746c20d01be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:57:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42a6a3f631f7efbae7bd1c0c14f609c305fdd439ccc436bce3982746c20d01be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:57:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42a6a3f631f7efbae7bd1c0c14f609c305fdd439ccc436bce3982746c20d01be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:57:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42a6a3f631f7efbae7bd1c0c14f609c305fdd439ccc436bce3982746c20d01be/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 16:57:35 compute-0 podman[372819]: 2025-11-25 16:57:35.641097879 +0000 UTC m=+0.244179841 container init 159a79ee352799f7c3fd293948bcbb29a3c572e18af89bfa7d4878581d22b40a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_jennings, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 16:57:35 compute-0 podman[372819]: 2025-11-25 16:57:35.655681036 +0000 UTC m=+0.258762918 container start 159a79ee352799f7c3fd293948bcbb29a3c572e18af89bfa7d4878581d22b40a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 16:57:35 compute-0 podman[372819]: 2025-11-25 16:57:35.6628094 +0000 UTC m=+0.265891362 container attach 159a79ee352799f7c3fd293948bcbb29a3c572e18af89bfa7d4878581d22b40a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:57:35 compute-0 ceph-mon[74985]: pgmap v2215: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:57:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2216: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:57:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:57:36 compute-0 nova_compute[254092]: 2025-11-25 16:57:36.541 254096 DEBUG oslo_concurrency.lockutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Acquiring lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:57:36 compute-0 nova_compute[254092]: 2025-11-25 16:57:36.543 254096 DEBUG oslo_concurrency.lockutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:57:36 compute-0 nova_compute[254092]: 2025-11-25 16:57:36.595 254096 DEBUG nova.compute.manager [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:57:36 compute-0 condescending_jennings[372836]: --> passed data devices: 0 physical, 3 LVM
Nov 25 16:57:36 compute-0 condescending_jennings[372836]: --> relative data size: 1.0
Nov 25 16:57:36 compute-0 condescending_jennings[372836]: --> All data devices are unavailable
Nov 25 16:57:36 compute-0 systemd[1]: libpod-159a79ee352799f7c3fd293948bcbb29a3c572e18af89bfa7d4878581d22b40a.scope: Deactivated successfully.
Nov 25 16:57:36 compute-0 podman[372819]: 2025-11-25 16:57:36.712424697 +0000 UTC m=+1.315506599 container died 159a79ee352799f7c3fd293948bcbb29a3c572e18af89bfa7d4878581d22b40a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_jennings, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 16:57:36 compute-0 systemd[1]: libpod-159a79ee352799f7c3fd293948bcbb29a3c572e18af89bfa7d4878581d22b40a.scope: Consumed 1.003s CPU time.
Nov 25 16:57:36 compute-0 nova_compute[254092]: 2025-11-25 16:57:36.719 254096 DEBUG oslo_concurrency.lockutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:57:36 compute-0 nova_compute[254092]: 2025-11-25 16:57:36.721 254096 DEBUG oslo_concurrency.lockutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:57:36 compute-0 nova_compute[254092]: 2025-11-25 16:57:36.731 254096 DEBUG nova.virt.hardware [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:57:36 compute-0 nova_compute[254092]: 2025-11-25 16:57:36.732 254096 INFO nova.compute.claims [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:57:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-42a6a3f631f7efbae7bd1c0c14f609c305fdd439ccc436bce3982746c20d01be-merged.mount: Deactivated successfully.
Nov 25 16:57:36 compute-0 nova_compute[254092]: 2025-11-25 16:57:36.897 254096 DEBUG oslo_concurrency.processutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:57:36 compute-0 podman[372819]: 2025-11-25 16:57:36.920928766 +0000 UTC m=+1.524010668 container remove 159a79ee352799f7c3fd293948bcbb29a3c572e18af89bfa7d4878581d22b40a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 16:57:36 compute-0 systemd[1]: libpod-conmon-159a79ee352799f7c3fd293948bcbb29a3c572e18af89bfa7d4878581d22b40a.scope: Deactivated successfully.
Nov 25 16:57:36 compute-0 sudo[372712]: pam_unix(sudo:session): session closed for user root
Nov 25 16:57:37 compute-0 sudo[372879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:57:37 compute-0 sudo[372879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:57:37 compute-0 sudo[372879]: pam_unix(sudo:session): session closed for user root
Nov 25 16:57:37 compute-0 sudo[372923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:57:37 compute-0 sudo[372923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:57:37 compute-0 sudo[372923]: pam_unix(sudo:session): session closed for user root
Nov 25 16:57:37 compute-0 sudo[372948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:57:37 compute-0 sudo[372948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:57:37 compute-0 sudo[372948]: pam_unix(sudo:session): session closed for user root
Nov 25 16:57:37 compute-0 sudo[372973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 16:57:37 compute-0 sudo[372973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:57:37 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:57:37 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1505803417' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:57:37 compute-0 nova_compute[254092]: 2025-11-25 16:57:37.345 254096 DEBUG oslo_concurrency.processutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:57:37 compute-0 nova_compute[254092]: 2025-11-25 16:57:37.355 254096 DEBUG nova.compute.provider_tree [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:57:37 compute-0 nova_compute[254092]: 2025-11-25 16:57:37.371 254096 DEBUG nova.scheduler.client.report [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:57:37 compute-0 nova_compute[254092]: 2025-11-25 16:57:37.403 254096 DEBUG oslo_concurrency.lockutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:57:37 compute-0 nova_compute[254092]: 2025-11-25 16:57:37.406 254096 DEBUG nova.compute.manager [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:57:37 compute-0 nova_compute[254092]: 2025-11-25 16:57:37.452 254096 DEBUG nova.compute.manager [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:57:37 compute-0 nova_compute[254092]: 2025-11-25 16:57:37.453 254096 DEBUG nova.network.neutron [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:57:37 compute-0 nova_compute[254092]: 2025-11-25 16:57:37.471 254096 INFO nova.virt.libvirt.driver [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:57:37 compute-0 nova_compute[254092]: 2025-11-25 16:57:37.497 254096 DEBUG nova.compute.manager [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:57:37 compute-0 nova_compute[254092]: 2025-11-25 16:57:37.583 254096 DEBUG nova.compute.manager [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:57:37 compute-0 nova_compute[254092]: 2025-11-25 16:57:37.585 254096 DEBUG nova.virt.libvirt.driver [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:57:37 compute-0 nova_compute[254092]: 2025-11-25 16:57:37.585 254096 INFO nova.virt.libvirt.driver [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Creating image(s)
Nov 25 16:57:37 compute-0 nova_compute[254092]: 2025-11-25 16:57:37.614 254096 DEBUG nova.storage.rbd_utils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] rbd image 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:57:37 compute-0 nova_compute[254092]: 2025-11-25 16:57:37.644 254096 DEBUG nova.storage.rbd_utils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] rbd image 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:57:37 compute-0 nova_compute[254092]: 2025-11-25 16:57:37.680 254096 DEBUG nova.storage.rbd_utils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] rbd image 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:57:37 compute-0 nova_compute[254092]: 2025-11-25 16:57:37.686 254096 DEBUG oslo_concurrency.processutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:57:37 compute-0 podman[373056]: 2025-11-25 16:57:37.68774511 +0000 UTC m=+0.057569488 container create 5500289e6071f0b214a48a153cdc861179d344e121cd4e30c63ec295fcd07723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_germain, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:57:37 compute-0 systemd[1]: Started libpod-conmon-5500289e6071f0b214a48a153cdc861179d344e121cd4e30c63ec295fcd07723.scope.
Nov 25 16:57:37 compute-0 nova_compute[254092]: 2025-11-25 16:57:37.740 254096 DEBUG nova.policy [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2a830f6b7532459380b24ae0297b12bb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0fef68bf8cf647f89586309d548d4bd7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:57:37 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:57:37 compute-0 podman[373056]: 2025-11-25 16:57:37.664398625 +0000 UTC m=+0.034223003 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:57:37 compute-0 podman[373056]: 2025-11-25 16:57:37.780332552 +0000 UTC m=+0.150156960 container init 5500289e6071f0b214a48a153cdc861179d344e121cd4e30c63ec295fcd07723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:57:37 compute-0 podman[373056]: 2025-11-25 16:57:37.787032475 +0000 UTC m=+0.156856853 container start 5500289e6071f0b214a48a153cdc861179d344e121cd4e30c63ec295fcd07723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_germain, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:57:37 compute-0 nova_compute[254092]: 2025-11-25 16:57:37.787 254096 DEBUG oslo_concurrency.processutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:57:37 compute-0 nova_compute[254092]: 2025-11-25 16:57:37.788 254096 DEBUG oslo_concurrency.lockutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:57:37 compute-0 podman[373056]: 2025-11-25 16:57:37.790296044 +0000 UTC m=+0.160120422 container attach 5500289e6071f0b214a48a153cdc861179d344e121cd4e30c63ec295fcd07723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:57:37 compute-0 nova_compute[254092]: 2025-11-25 16:57:37.789 254096 DEBUG oslo_concurrency.lockutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:57:37 compute-0 nova_compute[254092]: 2025-11-25 16:57:37.790 254096 DEBUG oslo_concurrency.lockutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:57:37 compute-0 epic_germain[373112]: 167 167
Nov 25 16:57:37 compute-0 systemd[1]: libpod-5500289e6071f0b214a48a153cdc861179d344e121cd4e30c63ec295fcd07723.scope: Deactivated successfully.
Nov 25 16:57:37 compute-0 conmon[373112]: conmon 5500289e6071f0b214a4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5500289e6071f0b214a48a153cdc861179d344e121cd4e30c63ec295fcd07723.scope/container/memory.events
Nov 25 16:57:37 compute-0 podman[373056]: 2025-11-25 16:57:37.79676679 +0000 UTC m=+0.166591158 container died 5500289e6071f0b214a48a153cdc861179d344e121cd4e30c63ec295fcd07723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_germain, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:57:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-add913bfab9dbc5cbf46755a74ff1022e13c357c563ddb35c16bca25a5e27a73-merged.mount: Deactivated successfully.
Nov 25 16:57:37 compute-0 nova_compute[254092]: 2025-11-25 16:57:37.834 254096 DEBUG nova.storage.rbd_utils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] rbd image 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:57:37 compute-0 ceph-mon[74985]: pgmap v2216: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:57:37 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1505803417' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:57:37 compute-0 podman[373056]: 2025-11-25 16:57:37.841330254 +0000 UTC m=+0.211154612 container remove 5500289e6071f0b214a48a153cdc861179d344e121cd4e30c63ec295fcd07723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_germain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 16:57:37 compute-0 nova_compute[254092]: 2025-11-25 16:57:37.843 254096 DEBUG oslo_concurrency.processutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:57:37 compute-0 systemd[1]: libpod-conmon-5500289e6071f0b214a48a153cdc861179d344e121cd4e30c63ec295fcd07723.scope: Deactivated successfully.
Nov 25 16:57:37 compute-0 nova_compute[254092]: 2025-11-25 16:57:37.900 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:57:38 compute-0 podman[373170]: 2025-11-25 16:57:38.135254939 +0000 UTC m=+0.126820965 container create 35845bed41e1314fcdd30f4e849f8cd195dd5c49772851fef763e91f19a613e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lewin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:57:38 compute-0 podman[373170]: 2025-11-25 16:57:38.051191109 +0000 UTC m=+0.042757175 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:57:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2217: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:57:38 compute-0 systemd[1]: Started libpod-conmon-35845bed41e1314fcdd30f4e849f8cd195dd5c49772851fef763e91f19a613e2.scope.
Nov 25 16:57:38 compute-0 nova_compute[254092]: 2025-11-25 16:57:38.203 254096 DEBUG oslo_concurrency.processutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.360s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:57:38 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:57:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b8870570886e2ebdf3b7e29a626eacd41c877256e6e3b3d92616adeeae4d1ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:57:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b8870570886e2ebdf3b7e29a626eacd41c877256e6e3b3d92616adeeae4d1ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:57:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b8870570886e2ebdf3b7e29a626eacd41c877256e6e3b3d92616adeeae4d1ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:57:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b8870570886e2ebdf3b7e29a626eacd41c877256e6e3b3d92616adeeae4d1ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:57:38 compute-0 podman[373170]: 2025-11-25 16:57:38.264086058 +0000 UTC m=+0.255652154 container init 35845bed41e1314fcdd30f4e849f8cd195dd5c49772851fef763e91f19a613e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lewin, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 16:57:38 compute-0 podman[373170]: 2025-11-25 16:57:38.282155381 +0000 UTC m=+0.273721407 container start 35845bed41e1314fcdd30f4e849f8cd195dd5c49772851fef763e91f19a613e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lewin, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:57:38 compute-0 podman[373170]: 2025-11-25 16:57:38.288196475 +0000 UTC m=+0.279762551 container attach 35845bed41e1314fcdd30f4e849f8cd195dd5c49772851fef763e91f19a613e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lewin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 16:57:38 compute-0 nova_compute[254092]: 2025-11-25 16:57:38.291 254096 DEBUG nova.storage.rbd_utils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] resizing rbd image 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:57:38 compute-0 nova_compute[254092]: 2025-11-25 16:57:38.408 254096 DEBUG nova.objects.instance [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lazy-loading 'migration_context' on Instance uuid 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:57:38 compute-0 nova_compute[254092]: 2025-11-25 16:57:38.425 254096 DEBUG nova.virt.libvirt.driver [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:57:38 compute-0 nova_compute[254092]: 2025-11-25 16:57:38.425 254096 DEBUG nova.virt.libvirt.driver [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Ensure instance console log exists: /var/lib/nova/instances/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:57:38 compute-0 nova_compute[254092]: 2025-11-25 16:57:38.426 254096 DEBUG oslo_concurrency.lockutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:57:38 compute-0 nova_compute[254092]: 2025-11-25 16:57:38.426 254096 DEBUG oslo_concurrency.lockutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:57:38 compute-0 nova_compute[254092]: 2025-11-25 16:57:38.427 254096 DEBUG oslo_concurrency.lockutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:57:39 compute-0 cool_lewin[373187]: {
Nov 25 16:57:39 compute-0 cool_lewin[373187]:     "0": [
Nov 25 16:57:39 compute-0 cool_lewin[373187]:         {
Nov 25 16:57:39 compute-0 cool_lewin[373187]:             "devices": [
Nov 25 16:57:39 compute-0 cool_lewin[373187]:                 "/dev/loop3"
Nov 25 16:57:39 compute-0 cool_lewin[373187]:             ],
Nov 25 16:57:39 compute-0 cool_lewin[373187]:             "lv_name": "ceph_lv0",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:             "lv_size": "21470642176",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:             "name": "ceph_lv0",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:             "tags": {
Nov 25 16:57:39 compute-0 cool_lewin[373187]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:                 "ceph.cluster_name": "ceph",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:                 "ceph.crush_device_class": "",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:                 "ceph.encrypted": "0",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:                 "ceph.osd_id": "0",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:                 "ceph.type": "block",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:                 "ceph.vdo": "0"
Nov 25 16:57:39 compute-0 cool_lewin[373187]:             },
Nov 25 16:57:39 compute-0 cool_lewin[373187]:             "type": "block",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:             "vg_name": "ceph_vg0"
Nov 25 16:57:39 compute-0 cool_lewin[373187]:         }
Nov 25 16:57:39 compute-0 cool_lewin[373187]:     ],
Nov 25 16:57:39 compute-0 cool_lewin[373187]:     "1": [
Nov 25 16:57:39 compute-0 cool_lewin[373187]:         {
Nov 25 16:57:39 compute-0 cool_lewin[373187]:             "devices": [
Nov 25 16:57:39 compute-0 cool_lewin[373187]:                 "/dev/loop4"
Nov 25 16:57:39 compute-0 cool_lewin[373187]:             ],
Nov 25 16:57:39 compute-0 cool_lewin[373187]:             "lv_name": "ceph_lv1",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:             "lv_size": "21470642176",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:             "name": "ceph_lv1",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:             "tags": {
Nov 25 16:57:39 compute-0 cool_lewin[373187]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:                 "ceph.cluster_name": "ceph",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:                 "ceph.crush_device_class": "",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:                 "ceph.encrypted": "0",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:                 "ceph.osd_id": "1",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:                 "ceph.type": "block",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:                 "ceph.vdo": "0"
Nov 25 16:57:39 compute-0 cool_lewin[373187]:             },
Nov 25 16:57:39 compute-0 cool_lewin[373187]:             "type": "block",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:             "vg_name": "ceph_vg1"
Nov 25 16:57:39 compute-0 cool_lewin[373187]:         }
Nov 25 16:57:39 compute-0 cool_lewin[373187]:     ],
Nov 25 16:57:39 compute-0 cool_lewin[373187]:     "2": [
Nov 25 16:57:39 compute-0 cool_lewin[373187]:         {
Nov 25 16:57:39 compute-0 cool_lewin[373187]:             "devices": [
Nov 25 16:57:39 compute-0 cool_lewin[373187]:                 "/dev/loop5"
Nov 25 16:57:39 compute-0 cool_lewin[373187]:             ],
Nov 25 16:57:39 compute-0 cool_lewin[373187]:             "lv_name": "ceph_lv2",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:             "lv_size": "21470642176",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:             "name": "ceph_lv2",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:             "tags": {
Nov 25 16:57:39 compute-0 cool_lewin[373187]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:                 "ceph.cluster_name": "ceph",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:                 "ceph.crush_device_class": "",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:                 "ceph.encrypted": "0",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:                 "ceph.osd_id": "2",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:                 "ceph.type": "block",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:                 "ceph.vdo": "0"
Nov 25 16:57:39 compute-0 cool_lewin[373187]:             },
Nov 25 16:57:39 compute-0 cool_lewin[373187]:             "type": "block",
Nov 25 16:57:39 compute-0 cool_lewin[373187]:             "vg_name": "ceph_vg2"
Nov 25 16:57:39 compute-0 cool_lewin[373187]:         }
Nov 25 16:57:39 compute-0 cool_lewin[373187]:     ]
Nov 25 16:57:39 compute-0 cool_lewin[373187]: }
Nov 25 16:57:39 compute-0 systemd[1]: libpod-35845bed41e1314fcdd30f4e849f8cd195dd5c49772851fef763e91f19a613e2.scope: Deactivated successfully.
Nov 25 16:57:39 compute-0 podman[373170]: 2025-11-25 16:57:39.127236036 +0000 UTC m=+1.118802072 container died 35845bed41e1314fcdd30f4e849f8cd195dd5c49772851fef763e91f19a613e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 16:57:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b8870570886e2ebdf3b7e29a626eacd41c877256e6e3b3d92616adeeae4d1ce-merged.mount: Deactivated successfully.
Nov 25 16:57:39 compute-0 podman[373170]: 2025-11-25 16:57:39.203904255 +0000 UTC m=+1.195470301 container remove 35845bed41e1314fcdd30f4e849f8cd195dd5c49772851fef763e91f19a613e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_lewin, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 16:57:39 compute-0 systemd[1]: libpod-conmon-35845bed41e1314fcdd30f4e849f8cd195dd5c49772851fef763e91f19a613e2.scope: Deactivated successfully.
Nov 25 16:57:39 compute-0 nova_compute[254092]: 2025-11-25 16:57:39.233 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:57:39 compute-0 sudo[372973]: pam_unix(sudo:session): session closed for user root
Nov 25 16:57:39 compute-0 sudo[373282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:57:39 compute-0 sudo[373282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:57:39 compute-0 sudo[373282]: pam_unix(sudo:session): session closed for user root
Nov 25 16:57:39 compute-0 sudo[373307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:57:39 compute-0 sudo[373307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:57:39 compute-0 sudo[373307]: pam_unix(sudo:session): session closed for user root
Nov 25 16:57:39 compute-0 sudo[373332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:57:39 compute-0 sudo[373332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:57:39 compute-0 sudo[373332]: pam_unix(sudo:session): session closed for user root
Nov 25 16:57:39 compute-0 sudo[373357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 16:57:39 compute-0 sudo[373357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:57:39 compute-0 nova_compute[254092]: 2025-11-25 16:57:39.518 254096 DEBUG nova.network.neutron [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Successfully created port: 75edff1b-5ceb-4f80-befe-e1a5ec106382 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:57:39 compute-0 podman[373422]: 2025-11-25 16:57:39.823140011 +0000 UTC m=+0.051577967 container create 8e20fabc7122b4bbcb2d1300bea47bab603f5f21d16a01c0f5591018db7fc13f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:57:39 compute-0 ceph-mon[74985]: pgmap v2217: 321 pgs: 321 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail
Nov 25 16:57:39 compute-0 systemd[1]: Started libpod-conmon-8e20fabc7122b4bbcb2d1300bea47bab603f5f21d16a01c0f5591018db7fc13f.scope.
Nov 25 16:57:39 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:57:39 compute-0 podman[373422]: 2025-11-25 16:57:39.809278103 +0000 UTC m=+0.037716079 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:57:39 compute-0 podman[373422]: 2025-11-25 16:57:39.903957102 +0000 UTC m=+0.132395078 container init 8e20fabc7122b4bbcb2d1300bea47bab603f5f21d16a01c0f5591018db7fc13f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:57:39 compute-0 podman[373422]: 2025-11-25 16:57:39.91492218 +0000 UTC m=+0.143360136 container start 8e20fabc7122b4bbcb2d1300bea47bab603f5f21d16a01c0f5591018db7fc13f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ride, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 16:57:39 compute-0 podman[373422]: 2025-11-25 16:57:39.917600663 +0000 UTC m=+0.146038619 container attach 8e20fabc7122b4bbcb2d1300bea47bab603f5f21d16a01c0f5591018db7fc13f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 16:57:39 compute-0 tender_ride[373439]: 167 167
Nov 25 16:57:39 compute-0 systemd[1]: libpod-8e20fabc7122b4bbcb2d1300bea47bab603f5f21d16a01c0f5591018db7fc13f.scope: Deactivated successfully.
Nov 25 16:57:39 compute-0 podman[373422]: 2025-11-25 16:57:39.923489574 +0000 UTC m=+0.151927530 container died 8e20fabc7122b4bbcb2d1300bea47bab603f5f21d16a01c0f5591018db7fc13f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ride, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:57:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-cab6bee6c13118da8a45e84bdfbe2557e37928011c5cd73a8a91cb078013da84-merged.mount: Deactivated successfully.
Nov 25 16:57:39 compute-0 podman[373422]: 2025-11-25 16:57:39.95680952 +0000 UTC m=+0.185247496 container remove 8e20fabc7122b4bbcb2d1300bea47bab603f5f21d16a01c0f5591018db7fc13f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 16:57:39 compute-0 systemd[1]: libpod-conmon-8e20fabc7122b4bbcb2d1300bea47bab603f5f21d16a01c0f5591018db7fc13f.scope: Deactivated successfully.
Nov 25 16:57:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:57:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:57:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:57:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:57:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:57:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:57:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:57:40
Nov 25 16:57:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:57:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:57:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['backups', 'default.rgw.log', 'vms', 'cephfs.cephfs.data', 'images', '.rgw.root', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control']
Nov 25 16:57:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:57:40 compute-0 podman[373465]: 2025-11-25 16:57:40.137845471 +0000 UTC m=+0.049430787 container create 80bdd75e76091af8135f21733114e168887461c679a53e692d879a64b91d24cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_merkle, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:57:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2218: 321 pgs: 321 active+clean; 81 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.3 MiB/s wr, 15 op/s
Nov 25 16:57:40 compute-0 systemd[1]: Started libpod-conmon-80bdd75e76091af8135f21733114e168887461c679a53e692d879a64b91d24cb.scope.
Nov 25 16:57:40 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:57:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d464579c917cfa3c2d67a7961b4beedbf3453030f5209eb4ea89246255239f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:57:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d464579c917cfa3c2d67a7961b4beedbf3453030f5209eb4ea89246255239f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:57:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d464579c917cfa3c2d67a7961b4beedbf3453030f5209eb4ea89246255239f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:57:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d464579c917cfa3c2d67a7961b4beedbf3453030f5209eb4ea89246255239f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:57:40 compute-0 podman[373465]: 2025-11-25 16:57:40.11651377 +0000 UTC m=+0.028099176 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:57:40 compute-0 podman[373465]: 2025-11-25 16:57:40.215677251 +0000 UTC m=+0.127262617 container init 80bdd75e76091af8135f21733114e168887461c679a53e692d879a64b91d24cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 16:57:40 compute-0 podman[373465]: 2025-11-25 16:57:40.224545193 +0000 UTC m=+0.136130519 container start 80bdd75e76091af8135f21733114e168887461c679a53e692d879a64b91d24cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_merkle, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:57:40 compute-0 podman[373465]: 2025-11-25 16:57:40.227727119 +0000 UTC m=+0.139312445 container attach 80bdd75e76091af8135f21733114e168887461c679a53e692d879a64b91d24cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:57:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:57:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:57:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:57:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:57:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:57:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:57:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:57:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:57:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:57:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:57:41 compute-0 nostalgic_merkle[373482]: {
Nov 25 16:57:41 compute-0 nostalgic_merkle[373482]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 16:57:41 compute-0 nostalgic_merkle[373482]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:57:41 compute-0 nostalgic_merkle[373482]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 16:57:41 compute-0 nostalgic_merkle[373482]:         "osd_id": 1,
Nov 25 16:57:41 compute-0 nostalgic_merkle[373482]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:57:41 compute-0 nostalgic_merkle[373482]:         "type": "bluestore"
Nov 25 16:57:41 compute-0 nostalgic_merkle[373482]:     },
Nov 25 16:57:41 compute-0 nostalgic_merkle[373482]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 16:57:41 compute-0 nostalgic_merkle[373482]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:57:41 compute-0 nostalgic_merkle[373482]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 16:57:41 compute-0 nostalgic_merkle[373482]:         "osd_id": 2,
Nov 25 16:57:41 compute-0 nostalgic_merkle[373482]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:57:41 compute-0 nostalgic_merkle[373482]:         "type": "bluestore"
Nov 25 16:57:41 compute-0 nostalgic_merkle[373482]:     },
Nov 25 16:57:41 compute-0 nostalgic_merkle[373482]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 16:57:41 compute-0 nostalgic_merkle[373482]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:57:41 compute-0 nostalgic_merkle[373482]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 16:57:41 compute-0 nostalgic_merkle[373482]:         "osd_id": 0,
Nov 25 16:57:41 compute-0 nostalgic_merkle[373482]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:57:41 compute-0 nostalgic_merkle[373482]:         "type": "bluestore"
Nov 25 16:57:41 compute-0 nostalgic_merkle[373482]:     }
Nov 25 16:57:41 compute-0 nostalgic_merkle[373482]: }
Nov 25 16:57:41 compute-0 systemd[1]: libpod-80bdd75e76091af8135f21733114e168887461c679a53e692d879a64b91d24cb.scope: Deactivated successfully.
Nov 25 16:57:41 compute-0 podman[373465]: 2025-11-25 16:57:41.246821416 +0000 UTC m=+1.158406732 container died 80bdd75e76091af8135f21733114e168887461c679a53e692d879a64b91d24cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_merkle, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:57:41 compute-0 systemd[1]: libpod-80bdd75e76091af8135f21733114e168887461c679a53e692d879a64b91d24cb.scope: Consumed 1.026s CPU time.
Nov 25 16:57:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-02d464579c917cfa3c2d67a7961b4beedbf3453030f5209eb4ea89246255239f-merged.mount: Deactivated successfully.
Nov 25 16:57:41 compute-0 podman[373465]: 2025-11-25 16:57:41.306922822 +0000 UTC m=+1.218508138 container remove 80bdd75e76091af8135f21733114e168887461c679a53e692d879a64b91d24cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 16:57:41 compute-0 systemd[1]: libpod-conmon-80bdd75e76091af8135f21733114e168887461c679a53e692d879a64b91d24cb.scope: Deactivated successfully.
Nov 25 16:57:41 compute-0 sudo[373357]: pam_unix(sudo:session): session closed for user root
Nov 25 16:57:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:57:41 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:57:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:57:41 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:57:41 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 11a71731-195e-4d02-ab78-2b5ce9435e65 does not exist
Nov 25 16:57:41 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 290cbf2d-08b8-46a4-82fb-82f91d11014a does not exist
Nov 25 16:57:41 compute-0 sudo[373529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:57:41 compute-0 sudo[373529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:57:41 compute-0 sudo[373529]: pam_unix(sudo:session): session closed for user root
Nov 25 16:57:41 compute-0 sudo[373554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 16:57:41 compute-0 sudo[373554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:57:41 compute-0 sudo[373554]: pam_unix(sudo:session): session closed for user root
Nov 25 16:57:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:57:41 compute-0 nova_compute[254092]: 2025-11-25 16:57:41.643 254096 DEBUG nova.network.neutron [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Successfully updated port: 75edff1b-5ceb-4f80-befe-e1a5ec106382 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:57:41 compute-0 nova_compute[254092]: 2025-11-25 16:57:41.654 254096 DEBUG oslo_concurrency.lockutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Acquiring lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:57:41 compute-0 nova_compute[254092]: 2025-11-25 16:57:41.654 254096 DEBUG oslo_concurrency.lockutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Acquired lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:57:41 compute-0 nova_compute[254092]: 2025-11-25 16:57:41.654 254096 DEBUG nova.network.neutron [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:57:41 compute-0 nova_compute[254092]: 2025-11-25 16:57:41.809 254096 DEBUG nova.compute.manager [req-24a78294-0425-4985-b96b-7ecb95de83b1 req-8e8373df-ba75-42c3-a9d1-26dd4b04765e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Received event network-changed-75edff1b-5ceb-4f80-befe-e1a5ec106382 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:57:41 compute-0 nova_compute[254092]: 2025-11-25 16:57:41.809 254096 DEBUG nova.compute.manager [req-24a78294-0425-4985-b96b-7ecb95de83b1 req-8e8373df-ba75-42c3-a9d1-26dd4b04765e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Refreshing instance network info cache due to event network-changed-75edff1b-5ceb-4f80-befe-e1a5ec106382. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:57:41 compute-0 nova_compute[254092]: 2025-11-25 16:57:41.809 254096 DEBUG oslo_concurrency.lockutils [req-24a78294-0425-4985-b96b-7ecb95de83b1 req-8e8373df-ba75-42c3-a9d1-26dd4b04765e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:57:41 compute-0 ceph-mon[74985]: pgmap v2218: 321 pgs: 321 active+clean; 81 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.3 MiB/s wr, 15 op/s
Nov 25 16:57:41 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:57:41 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:57:41 compute-0 nova_compute[254092]: 2025-11-25 16:57:41.941 254096 DEBUG nova.network.neutron [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:57:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2219: 321 pgs: 321 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 16:57:42 compute-0 nova_compute[254092]: 2025-11-25 16:57:42.903 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:57:43 compute-0 nova_compute[254092]: 2025-11-25 16:57:43.404 254096 DEBUG nova.network.neutron [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Updating instance_info_cache with network_info: [{"id": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "address": "fa:16:3e:9e:02:f1", "network": {"id": "6c62671a-f9ae-4033-9c32-b04ccb3e0de4", "bridge": "br-int", "label": "tempest-TestShelveInstance-32572327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fef68bf8cf647f89586309d548d4bd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75edff1b-5c", "ovs_interfaceid": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:57:43 compute-0 nova_compute[254092]: 2025-11-25 16:57:43.424 254096 DEBUG oslo_concurrency.lockutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Releasing lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:57:43 compute-0 nova_compute[254092]: 2025-11-25 16:57:43.424 254096 DEBUG nova.compute.manager [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Instance network_info: |[{"id": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "address": "fa:16:3e:9e:02:f1", "network": {"id": "6c62671a-f9ae-4033-9c32-b04ccb3e0de4", "bridge": "br-int", "label": "tempest-TestShelveInstance-32572327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fef68bf8cf647f89586309d548d4bd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75edff1b-5c", "ovs_interfaceid": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:57:43 compute-0 nova_compute[254092]: 2025-11-25 16:57:43.424 254096 DEBUG oslo_concurrency.lockutils [req-24a78294-0425-4985-b96b-7ecb95de83b1 req-8e8373df-ba75-42c3-a9d1-26dd4b04765e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:57:43 compute-0 nova_compute[254092]: 2025-11-25 16:57:43.425 254096 DEBUG nova.network.neutron [req-24a78294-0425-4985-b96b-7ecb95de83b1 req-8e8373df-ba75-42c3-a9d1-26dd4b04765e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Refreshing network info cache for port 75edff1b-5ceb-4f80-befe-e1a5ec106382 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:57:43 compute-0 nova_compute[254092]: 2025-11-25 16:57:43.427 254096 DEBUG nova.virt.libvirt.driver [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Start _get_guest_xml network_info=[{"id": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "address": "fa:16:3e:9e:02:f1", "network": {"id": "6c62671a-f9ae-4033-9c32-b04ccb3e0de4", "bridge": "br-int", "label": "tempest-TestShelveInstance-32572327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fef68bf8cf647f89586309d548d4bd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75edff1b-5c", "ovs_interfaceid": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:57:43 compute-0 nova_compute[254092]: 2025-11-25 16:57:43.432 254096 WARNING nova.virt.libvirt.driver [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:57:43 compute-0 nova_compute[254092]: 2025-11-25 16:57:43.438 254096 DEBUG nova.virt.libvirt.host [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:57:43 compute-0 nova_compute[254092]: 2025-11-25 16:57:43.438 254096 DEBUG nova.virt.libvirt.host [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:57:43 compute-0 nova_compute[254092]: 2025-11-25 16:57:43.446 254096 DEBUG nova.virt.libvirt.host [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:57:43 compute-0 nova_compute[254092]: 2025-11-25 16:57:43.446 254096 DEBUG nova.virt.libvirt.host [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:57:43 compute-0 nova_compute[254092]: 2025-11-25 16:57:43.446 254096 DEBUG nova.virt.libvirt.driver [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:57:43 compute-0 nova_compute[254092]: 2025-11-25 16:57:43.447 254096 DEBUG nova.virt.hardware [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:57:43 compute-0 nova_compute[254092]: 2025-11-25 16:57:43.447 254096 DEBUG nova.virt.hardware [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:57:43 compute-0 nova_compute[254092]: 2025-11-25 16:57:43.447 254096 DEBUG nova.virt.hardware [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:57:43 compute-0 nova_compute[254092]: 2025-11-25 16:57:43.447 254096 DEBUG nova.virt.hardware [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:57:43 compute-0 nova_compute[254092]: 2025-11-25 16:57:43.447 254096 DEBUG nova.virt.hardware [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:57:43 compute-0 nova_compute[254092]: 2025-11-25 16:57:43.447 254096 DEBUG nova.virt.hardware [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:57:43 compute-0 nova_compute[254092]: 2025-11-25 16:57:43.448 254096 DEBUG nova.virt.hardware [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:57:43 compute-0 nova_compute[254092]: 2025-11-25 16:57:43.448 254096 DEBUG nova.virt.hardware [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:57:43 compute-0 nova_compute[254092]: 2025-11-25 16:57:43.448 254096 DEBUG nova.virt.hardware [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:57:43 compute-0 nova_compute[254092]: 2025-11-25 16:57:43.448 254096 DEBUG nova.virt.hardware [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:57:43 compute-0 nova_compute[254092]: 2025-11-25 16:57:43.448 254096 DEBUG nova.virt.hardware [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:57:43 compute-0 nova_compute[254092]: 2025-11-25 16:57:43.451 254096 DEBUG oslo_concurrency.processutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:57:43 compute-0 ceph-mon[74985]: pgmap v2219: 321 pgs: 321 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 16:57:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:57:43 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/466815479' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:57:43 compute-0 nova_compute[254092]: 2025-11-25 16:57:43.899 254096 DEBUG oslo_concurrency.processutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:57:43 compute-0 nova_compute[254092]: 2025-11-25 16:57:43.925 254096 DEBUG nova.storage.rbd_utils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] rbd image 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:57:43 compute-0 nova_compute[254092]: 2025-11-25 16:57:43.929 254096 DEBUG oslo_concurrency.processutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:57:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2220: 321 pgs: 321 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 16:57:44 compute-0 nova_compute[254092]: 2025-11-25 16:57:44.272 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:57:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:57:44 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3913018023' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:57:44 compute-0 nova_compute[254092]: 2025-11-25 16:57:44.386 254096 DEBUG oslo_concurrency.processutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:57:44 compute-0 nova_compute[254092]: 2025-11-25 16:57:44.388 254096 DEBUG nova.virt.libvirt.vif [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:57:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1491050401',display_name='tempest-TestShelveInstance-server-1491050401',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1491050401',id=110,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCU1tFZ5889TlnfI/RUb+xoO74TlS+bqxewhKSm7AlfWLTCPQbO0K87XgSi1H6tslLwcv758vqLas7auaeYLUcvYqQB29Lbjk/VpKf97D9aRztTuSqsJ0wNarM8/2CjMZA==',key_name='tempest-TestShelveInstance-1027394080',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0fef68bf8cf647f89586309d548d4bd7',ramdisk_id='',reservation_id='r-8exg2mmt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-535396087',owner_user_name='tempest-TestShelveInstance-535396087-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:57:37Z,user_data=None,user_id='2a830f6b7532459380b24ae0297b12bb',uuid=5d0e35bb-8f66-41f8-81fc-54e1b53c4dda,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "address": "fa:16:3e:9e:02:f1", "network": {"id": "6c62671a-f9ae-4033-9c32-b04ccb3e0de4", "bridge": "br-int", "label": "tempest-TestShelveInstance-32572327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "0fef68bf8cf647f89586309d548d4bd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75edff1b-5c", "ovs_interfaceid": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:57:44 compute-0 nova_compute[254092]: 2025-11-25 16:57:44.388 254096 DEBUG nova.network.os_vif_util [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Converting VIF {"id": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "address": "fa:16:3e:9e:02:f1", "network": {"id": "6c62671a-f9ae-4033-9c32-b04ccb3e0de4", "bridge": "br-int", "label": "tempest-TestShelveInstance-32572327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fef68bf8cf647f89586309d548d4bd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75edff1b-5c", "ovs_interfaceid": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:57:44 compute-0 nova_compute[254092]: 2025-11-25 16:57:44.389 254096 DEBUG nova.network.os_vif_util [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:02:f1,bridge_name='br-int',has_traffic_filtering=True,id=75edff1b-5ceb-4f80-befe-e1a5ec106382,network=Network(6c62671a-f9ae-4033-9c32-b04ccb3e0de4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75edff1b-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:57:44 compute-0 nova_compute[254092]: 2025-11-25 16:57:44.391 254096 DEBUG nova.objects.instance [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:57:44 compute-0 nova_compute[254092]: 2025-11-25 16:57:44.410 254096 DEBUG nova.virt.libvirt.driver [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:57:44 compute-0 nova_compute[254092]:   <uuid>5d0e35bb-8f66-41f8-81fc-54e1b53c4dda</uuid>
Nov 25 16:57:44 compute-0 nova_compute[254092]:   <name>instance-0000006e</name>
Nov 25 16:57:44 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:57:44 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:57:44 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:57:44 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:       <nova:name>tempest-TestShelveInstance-server-1491050401</nova:name>
Nov 25 16:57:44 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:57:43</nova:creationTime>
Nov 25 16:57:44 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:57:44 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:57:44 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:57:44 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:57:44 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:57:44 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:57:44 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:57:44 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:57:44 compute-0 nova_compute[254092]:         <nova:user uuid="2a830f6b7532459380b24ae0297b12bb">tempest-TestShelveInstance-535396087-project-member</nova:user>
Nov 25 16:57:44 compute-0 nova_compute[254092]:         <nova:project uuid="0fef68bf8cf647f89586309d548d4bd7">tempest-TestShelveInstance-535396087</nova:project>
Nov 25 16:57:44 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:57:44 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:57:44 compute-0 nova_compute[254092]:         <nova:port uuid="75edff1b-5ceb-4f80-befe-e1a5ec106382">
Nov 25 16:57:44 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:57:44 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:57:44 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:57:44 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:57:44 compute-0 nova_compute[254092]:     <system>
Nov 25 16:57:44 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:57:44 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:57:44 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:57:44 compute-0 nova_compute[254092]:       <entry name="serial">5d0e35bb-8f66-41f8-81fc-54e1b53c4dda</entry>
Nov 25 16:57:44 compute-0 nova_compute[254092]:       <entry name="uuid">5d0e35bb-8f66-41f8-81fc-54e1b53c4dda</entry>
Nov 25 16:57:44 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     </system>
Nov 25 16:57:44 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:57:44 compute-0 nova_compute[254092]:   <os>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:   </os>
Nov 25 16:57:44 compute-0 nova_compute[254092]:   <features>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:   </features>
Nov 25 16:57:44 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:57:44 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:57:44 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:57:44 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:57:44 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:57:44 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk">
Nov 25 16:57:44 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:       </source>
Nov 25 16:57:44 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:57:44 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:57:44 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:57:44 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk.config">
Nov 25 16:57:44 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:       </source>
Nov 25 16:57:44 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:57:44 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:57:44 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:57:44 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:9e:02:f1"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:       <target dev="tap75edff1b-5c"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:57:44 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda/console.log" append="off"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     <video>
Nov 25 16:57:44 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     </video>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:57:44 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:57:44 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:57:44 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:57:44 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:57:44 compute-0 nova_compute[254092]: </domain>
Nov 25 16:57:44 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:57:44 compute-0 nova_compute[254092]: 2025-11-25 16:57:44.412 254096 DEBUG nova.compute.manager [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Preparing to wait for external event network-vif-plugged-75edff1b-5ceb-4f80-befe-e1a5ec106382 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:57:44 compute-0 nova_compute[254092]: 2025-11-25 16:57:44.412 254096 DEBUG oslo_concurrency.lockutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Acquiring lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:57:44 compute-0 nova_compute[254092]: 2025-11-25 16:57:44.413 254096 DEBUG oslo_concurrency.lockutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:57:44 compute-0 nova_compute[254092]: 2025-11-25 16:57:44.413 254096 DEBUG oslo_concurrency.lockutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:57:44 compute-0 nova_compute[254092]: 2025-11-25 16:57:44.415 254096 DEBUG nova.virt.libvirt.vif [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:57:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1491050401',display_name='tempest-TestShelveInstance-server-1491050401',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1491050401',id=110,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCU1tFZ5889TlnfI/RUb+xoO74TlS+bqxewhKSm7AlfWLTCPQbO0K87XgSi1H6tslLwcv758vqLas7auaeYLUcvYqQB29Lbjk/VpKf97D9aRztTuSqsJ0wNarM8/2CjMZA==',key_name='tempest-TestShelveInstance-1027394080',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0fef68bf8cf647f89586309d548d4bd7',ramdisk_id='',reservation_id='r-8exg2mmt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-535396087',owner_user_name='tempest-TestShelveInstance-535396087-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:57:37Z,user_data=None,user_id='2a830f6b7532459380b24ae0297b12bb',uuid=5d0e35bb-8f66-41f8-81fc-54e1b53c4dda,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "address": "fa:16:3e:9e:02:f1", "network": {"id": "6c62671a-f9ae-4033-9c32-b04ccb3e0de4", "bridge": "br-int", "label": "tempest-TestShelveInstance-32572327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "0fef68bf8cf647f89586309d548d4bd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75edff1b-5c", "ovs_interfaceid": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:57:44 compute-0 nova_compute[254092]: 2025-11-25 16:57:44.415 254096 DEBUG nova.network.os_vif_util [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Converting VIF {"id": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "address": "fa:16:3e:9e:02:f1", "network": {"id": "6c62671a-f9ae-4033-9c32-b04ccb3e0de4", "bridge": "br-int", "label": "tempest-TestShelveInstance-32572327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fef68bf8cf647f89586309d548d4bd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75edff1b-5c", "ovs_interfaceid": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:57:44 compute-0 nova_compute[254092]: 2025-11-25 16:57:44.416 254096 DEBUG nova.network.os_vif_util [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:02:f1,bridge_name='br-int',has_traffic_filtering=True,id=75edff1b-5ceb-4f80-befe-e1a5ec106382,network=Network(6c62671a-f9ae-4033-9c32-b04ccb3e0de4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75edff1b-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:57:44 compute-0 nova_compute[254092]: 2025-11-25 16:57:44.417 254096 DEBUG os_vif [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:02:f1,bridge_name='br-int',has_traffic_filtering=True,id=75edff1b-5ceb-4f80-befe-e1a5ec106382,network=Network(6c62671a-f9ae-4033-9c32-b04ccb3e0de4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75edff1b-5c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:57:44 compute-0 nova_compute[254092]: 2025-11-25 16:57:44.419 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:57:44 compute-0 nova_compute[254092]: 2025-11-25 16:57:44.419 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:57:44 compute-0 nova_compute[254092]: 2025-11-25 16:57:44.420 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:57:44 compute-0 nova_compute[254092]: 2025-11-25 16:57:44.425 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:57:44 compute-0 nova_compute[254092]: 2025-11-25 16:57:44.426 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap75edff1b-5c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:57:44 compute-0 nova_compute[254092]: 2025-11-25 16:57:44.426 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap75edff1b-5c, col_values=(('external_ids', {'iface-id': '75edff1b-5ceb-4f80-befe-e1a5ec106382', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9e:02:f1', 'vm-uuid': '5d0e35bb-8f66-41f8-81fc-54e1b53c4dda'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:57:44 compute-0 nova_compute[254092]: 2025-11-25 16:57:44.428 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:57:44 compute-0 nova_compute[254092]: 2025-11-25 16:57:44.429 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:57:44 compute-0 NetworkManager[48891]: <info>  [1764089864.4299] manager: (tap75edff1b-5c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/458)
Nov 25 16:57:44 compute-0 nova_compute[254092]: 2025-11-25 16:57:44.436 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:57:44 compute-0 nova_compute[254092]: 2025-11-25 16:57:44.438 254096 INFO os_vif [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:02:f1,bridge_name='br-int',has_traffic_filtering=True,id=75edff1b-5ceb-4f80-befe-e1a5ec106382,network=Network(6c62671a-f9ae-4033-9c32-b04ccb3e0de4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75edff1b-5c')
Nov 25 16:57:44 compute-0 nova_compute[254092]: 2025-11-25 16:57:44.510 254096 DEBUG nova.virt.libvirt.driver [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:57:44 compute-0 nova_compute[254092]: 2025-11-25 16:57:44.511 254096 DEBUG nova.virt.libvirt.driver [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:57:44 compute-0 nova_compute[254092]: 2025-11-25 16:57:44.511 254096 DEBUG nova.virt.libvirt.driver [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] No VIF found with MAC fa:16:3e:9e:02:f1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:57:44 compute-0 nova_compute[254092]: 2025-11-25 16:57:44.512 254096 INFO nova.virt.libvirt.driver [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Using config drive
Nov 25 16:57:44 compute-0 nova_compute[254092]: 2025-11-25 16:57:44.538 254096 DEBUG nova.storage.rbd_utils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] rbd image 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:57:44 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/466815479' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:57:44 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3913018023' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:57:45 compute-0 nova_compute[254092]: 2025-11-25 16:57:45.574 254096 INFO nova.virt.libvirt.driver [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Creating config drive at /var/lib/nova/instances/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda/disk.config
Nov 25 16:57:45 compute-0 nova_compute[254092]: 2025-11-25 16:57:45.579 254096 DEBUG oslo_concurrency.processutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp08ctyvp8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:57:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:57:45.680 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:57:45 compute-0 nova_compute[254092]: 2025-11-25 16:57:45.680 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:57:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:57:45.681 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 16:57:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:57:45.681 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:57:45 compute-0 nova_compute[254092]: 2025-11-25 16:57:45.715 254096 DEBUG oslo_concurrency.processutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp08ctyvp8" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:57:45 compute-0 nova_compute[254092]: 2025-11-25 16:57:45.740 254096 DEBUG nova.storage.rbd_utils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] rbd image 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:57:45 compute-0 nova_compute[254092]: 2025-11-25 16:57:45.743 254096 DEBUG oslo_concurrency.processutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda/disk.config 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:57:45 compute-0 nova_compute[254092]: 2025-11-25 16:57:45.830 254096 DEBUG nova.network.neutron [req-24a78294-0425-4985-b96b-7ecb95de83b1 req-8e8373df-ba75-42c3-a9d1-26dd4b04765e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Updated VIF entry in instance network info cache for port 75edff1b-5ceb-4f80-befe-e1a5ec106382. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:57:45 compute-0 nova_compute[254092]: 2025-11-25 16:57:45.831 254096 DEBUG nova.network.neutron [req-24a78294-0425-4985-b96b-7ecb95de83b1 req-8e8373df-ba75-42c3-a9d1-26dd4b04765e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Updating instance_info_cache with network_info: [{"id": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "address": "fa:16:3e:9e:02:f1", "network": {"id": "6c62671a-f9ae-4033-9c32-b04ccb3e0de4", "bridge": "br-int", "label": "tempest-TestShelveInstance-32572327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fef68bf8cf647f89586309d548d4bd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75edff1b-5c", "ovs_interfaceid": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:57:45 compute-0 nova_compute[254092]: 2025-11-25 16:57:45.846 254096 DEBUG oslo_concurrency.lockutils [req-24a78294-0425-4985-b96b-7ecb95de83b1 req-8e8373df-ba75-42c3-a9d1-26dd4b04765e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:57:45 compute-0 nova_compute[254092]: 2025-11-25 16:57:45.887 254096 DEBUG oslo_concurrency.processutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda/disk.config 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:57:45 compute-0 nova_compute[254092]: 2025-11-25 16:57:45.889 254096 INFO nova.virt.libvirt.driver [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Deleting local config drive /var/lib/nova/instances/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda/disk.config because it was imported into RBD.
Nov 25 16:57:45 compute-0 systemd[1]: Starting libvirt secret daemon...
Nov 25 16:57:45 compute-0 ceph-mon[74985]: pgmap v2220: 321 pgs: 321 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 16:57:45 compute-0 systemd[1]: Started libvirt secret daemon.
Nov 25 16:57:45 compute-0 kernel: tap75edff1b-5c: entered promiscuous mode
Nov 25 16:57:45 compute-0 NetworkManager[48891]: <info>  [1764089865.9879] manager: (tap75edff1b-5c): new Tun device (/org/freedesktop/NetworkManager/Devices/459)
Nov 25 16:57:45 compute-0 ovn_controller[153477]: 2025-11-25T16:57:45Z|01129|binding|INFO|Claiming lport 75edff1b-5ceb-4f80-befe-e1a5ec106382 for this chassis.
Nov 25 16:57:45 compute-0 ovn_controller[153477]: 2025-11-25T16:57:45Z|01130|binding|INFO|75edff1b-5ceb-4f80-befe-e1a5ec106382: Claiming fa:16:3e:9e:02:f1 10.100.0.9
Nov 25 16:57:45 compute-0 nova_compute[254092]: 2025-11-25 16:57:45.988 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:57:45 compute-0 nova_compute[254092]: 2025-11-25 16:57:45.993 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:57:45 compute-0 nova_compute[254092]: 2025-11-25 16:57:45.995 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.003 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:02:f1 10.100.0.9'], port_security=['fa:16:3e:9e:02:f1 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '5d0e35bb-8f66-41f8-81fc-54e1b53c4dda', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c62671a-f9ae-4033-9c32-b04ccb3e0de4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0fef68bf8cf647f89586309d548d4bd7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '749158e6-7a67-48ec-831f-8c4fe94e7a88', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=91b4ffe7-2c89-49f5-b8f3-271ba9b0feff, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=75edff1b-5ceb-4f80-befe-e1a5ec106382) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.005 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 75edff1b-5ceb-4f80-befe-e1a5ec106382 in datapath 6c62671a-f9ae-4033-9c32-b04ccb3e0de4 bound to our chassis
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.006 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6c62671a-f9ae-4033-9c32-b04ccb3e0de4
Nov 25 16:57:46 compute-0 systemd-udevd[373734]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.018 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[50a4bb1b-f1d2-4eed-9b8a-de948fce6bcb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.019 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6c62671a-f1 in ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.024 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6c62671a-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.024 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[38a50e05-99ab-490b-b753-58da97fcd70a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.024 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ac71e15e-aff7-46c0-9c41-36b153f29e0e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:57:46 compute-0 systemd-machined[216343]: New machine qemu-141-instance-0000006e.
Nov 25 16:57:46 compute-0 NetworkManager[48891]: <info>  [1764089866.0306] device (tap75edff1b-5c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:57:46 compute-0 NetworkManager[48891]: <info>  [1764089866.0321] device (tap75edff1b-5c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.038 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[eb572d03-90ef-4fca-a439-17af921f8816]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:57:46 compute-0 systemd[1]: Started Virtual Machine qemu-141-instance-0000006e.
Nov 25 16:57:46 compute-0 nova_compute[254092]: 2025-11-25 16:57:46.052 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:57:46 compute-0 ovn_controller[153477]: 2025-11-25T16:57:46Z|01131|binding|INFO|Setting lport 75edff1b-5ceb-4f80-befe-e1a5ec106382 ovn-installed in OVS
Nov 25 16:57:46 compute-0 ovn_controller[153477]: 2025-11-25T16:57:46Z|01132|binding|INFO|Setting lport 75edff1b-5ceb-4f80-befe-e1a5ec106382 up in Southbound
Nov 25 16:57:46 compute-0 nova_compute[254092]: 2025-11-25 16:57:46.057 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.060 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a5e51aec-23a9-4e7e-a542-7c312bdb1801]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.087 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[13eafc9b-1c88-45fb-8b85-a144d0718e0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.091 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[51e54160-abf2-4115-92df-f59ddb59e679]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:57:46 compute-0 NetworkManager[48891]: <info>  [1764089866.0928] manager: (tap6c62671a-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/460)
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.122 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[b01328b2-5327-4210-9edf-b2bbd28bd9ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.125 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[36b6f068-a968-481c-9a50-aecb31c02425]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:57:46 compute-0 NetworkManager[48891]: <info>  [1764089866.1497] device (tap6c62671a-f0): carrier: link connected
Nov 25 16:57:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2221: 321 pgs: 321 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.155 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[3789f067-7b10-4d42-9c5b-42e8f583e38f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.171 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[506954df-1468-41f0-a57c-10a393cbd1cb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c62671a-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:c8:26'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 332], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632374, 'reachable_time': 22805, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 373766, 'error': None, 'target': 'ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.188 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[41458a7f-4aa3-43bf-92d6-577733172e4b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe44:c826'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 632374, 'tstamp': 632374}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 373767, 'error': None, 'target': 'ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.202 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7a383c72-d718-4378-b3b3-f56342e6ed78]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c62671a-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:c8:26'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 332], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632374, 'reachable_time': 22805, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 373768, 'error': None, 'target': 'ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.233 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0905096e-2da5-4be9-be5c-b144421838e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.288 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8d4d1c3c-80f3-4261-a1e3-2db5e31677f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.289 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c62671a-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.289 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.290 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c62671a-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:57:46 compute-0 nova_compute[254092]: 2025-11-25 16:57:46.291 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:57:46 compute-0 NetworkManager[48891]: <info>  [1764089866.2919] manager: (tap6c62671a-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/461)
Nov 25 16:57:46 compute-0 kernel: tap6c62671a-f0: entered promiscuous mode
Nov 25 16:57:46 compute-0 nova_compute[254092]: 2025-11-25 16:57:46.292 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.295 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6c62671a-f0, col_values=(('external_ids', {'iface-id': 'f4863bb8-2150-4d2f-9927-dcfc70177f3b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:57:46 compute-0 nova_compute[254092]: 2025-11-25 16:57:46.296 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:57:46 compute-0 ovn_controller[153477]: 2025-11-25T16:57:46Z|01133|binding|INFO|Releasing lport f4863bb8-2150-4d2f-9927-dcfc70177f3b from this chassis (sb_readonly=0)
Nov 25 16:57:46 compute-0 nova_compute[254092]: 2025-11-25 16:57:46.297 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.299 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6c62671a-f9ae-4033-9c32-b04ccb3e0de4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6c62671a-f9ae-4033-9c32-b04ccb3e0de4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.302 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[589ed177-aa4c-4e87-bf80-7bb0c89bfb3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.303 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-6c62671a-f9ae-4033-9c32-b04ccb3e0de4
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/6c62671a-f9ae-4033-9c32-b04ccb3e0de4.pid.haproxy
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 6c62671a-f9ae-4033-9c32-b04ccb3e0de4
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:57:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:57:46.304 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4', 'env', 'PROCESS_TAG=haproxy-6c62671a-f9ae-4033-9c32-b04ccb3e0de4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6c62671a-f9ae-4033-9c32-b04ccb3e0de4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:57:46 compute-0 nova_compute[254092]: 2025-11-25 16:57:46.307 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:57:46 compute-0 nova_compute[254092]: 2025-11-25 16:57:46.340 254096 DEBUG nova.compute.manager [req-1e1c5c9a-c4d0-4952-a482-180c86086a1c req-ec63ba96-303c-45b3-9876-dc133e77f0dd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Received event network-vif-plugged-75edff1b-5ceb-4f80-befe-e1a5ec106382 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:57:46 compute-0 nova_compute[254092]: 2025-11-25 16:57:46.340 254096 DEBUG oslo_concurrency.lockutils [req-1e1c5c9a-c4d0-4952-a482-180c86086a1c req-ec63ba96-303c-45b3-9876-dc133e77f0dd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:57:46 compute-0 nova_compute[254092]: 2025-11-25 16:57:46.341 254096 DEBUG oslo_concurrency.lockutils [req-1e1c5c9a-c4d0-4952-a482-180c86086a1c req-ec63ba96-303c-45b3-9876-dc133e77f0dd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:57:46 compute-0 nova_compute[254092]: 2025-11-25 16:57:46.341 254096 DEBUG oslo_concurrency.lockutils [req-1e1c5c9a-c4d0-4952-a482-180c86086a1c req-ec63ba96-303c-45b3-9876-dc133e77f0dd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:57:46 compute-0 nova_compute[254092]: 2025-11-25 16:57:46.341 254096 DEBUG nova.compute.manager [req-1e1c5c9a-c4d0-4952-a482-180c86086a1c req-ec63ba96-303c-45b3-9876-dc133e77f0dd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Processing event network-vif-plugged-75edff1b-5ceb-4f80-befe-e1a5ec106382 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:57:46 compute-0 nova_compute[254092]: 2025-11-25 16:57:46.508 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089866.5080724, 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:57:46 compute-0 nova_compute[254092]: 2025-11-25 16:57:46.508 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] VM Started (Lifecycle Event)
Nov 25 16:57:46 compute-0 nova_compute[254092]: 2025-11-25 16:57:46.510 254096 DEBUG nova.compute.manager [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:57:46 compute-0 nova_compute[254092]: 2025-11-25 16:57:46.514 254096 DEBUG nova.virt.libvirt.driver [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:57:46 compute-0 nova_compute[254092]: 2025-11-25 16:57:46.518 254096 INFO nova.virt.libvirt.driver [-] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Instance spawned successfully.
Nov 25 16:57:46 compute-0 nova_compute[254092]: 2025-11-25 16:57:46.519 254096 DEBUG nova.virt.libvirt.driver [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:57:46 compute-0 nova_compute[254092]: 2025-11-25 16:57:46.525 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:57:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:57:46 compute-0 nova_compute[254092]: 2025-11-25 16:57:46.529 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:57:46 compute-0 nova_compute[254092]: 2025-11-25 16:57:46.537 254096 DEBUG nova.virt.libvirt.driver [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:57:46 compute-0 nova_compute[254092]: 2025-11-25 16:57:46.538 254096 DEBUG nova.virt.libvirt.driver [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:57:46 compute-0 nova_compute[254092]: 2025-11-25 16:57:46.539 254096 DEBUG nova.virt.libvirt.driver [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:57:46 compute-0 nova_compute[254092]: 2025-11-25 16:57:46.539 254096 DEBUG nova.virt.libvirt.driver [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:57:46 compute-0 nova_compute[254092]: 2025-11-25 16:57:46.540 254096 DEBUG nova.virt.libvirt.driver [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:57:46 compute-0 nova_compute[254092]: 2025-11-25 16:57:46.541 254096 DEBUG nova.virt.libvirt.driver [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:57:46 compute-0 nova_compute[254092]: 2025-11-25 16:57:46.547 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:57:46 compute-0 nova_compute[254092]: 2025-11-25 16:57:46.548 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089866.5111504, 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:57:46 compute-0 nova_compute[254092]: 2025-11-25 16:57:46.548 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] VM Paused (Lifecycle Event)
Nov 25 16:57:46 compute-0 nova_compute[254092]: 2025-11-25 16:57:46.571 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:57:46 compute-0 nova_compute[254092]: 2025-11-25 16:57:46.575 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089866.5133576, 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:57:46 compute-0 nova_compute[254092]: 2025-11-25 16:57:46.575 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] VM Resumed (Lifecycle Event)
Nov 25 16:57:46 compute-0 nova_compute[254092]: 2025-11-25 16:57:46.592 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:57:46 compute-0 nova_compute[254092]: 2025-11-25 16:57:46.595 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:57:46 compute-0 nova_compute[254092]: 2025-11-25 16:57:46.600 254096 INFO nova.compute.manager [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Took 9.02 seconds to spawn the instance on the hypervisor.
Nov 25 16:57:46 compute-0 nova_compute[254092]: 2025-11-25 16:57:46.600 254096 DEBUG nova.compute.manager [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:57:46 compute-0 nova_compute[254092]: 2025-11-25 16:57:46.623 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:57:46 compute-0 podman[373842]: 2025-11-25 16:57:46.635088609 +0000 UTC m=+0.044047770 container create 9792b6943dc5d98d33bfd4a64cfb498dce4097f67f71f784956c465a04b29754 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 25 16:57:46 compute-0 nova_compute[254092]: 2025-11-25 16:57:46.663 254096 INFO nova.compute.manager [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Took 9.98 seconds to build instance.
Nov 25 16:57:46 compute-0 systemd[1]: Started libpod-conmon-9792b6943dc5d98d33bfd4a64cfb498dce4097f67f71f784956c465a04b29754.scope.
Nov 25 16:57:46 compute-0 nova_compute[254092]: 2025-11-25 16:57:46.677 254096 DEBUG oslo_concurrency.lockutils [None req-ed39cb31-3484-40ab-a73c-851de70deefa 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.133s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:57:46 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:57:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f537bbdf1e418b80ee2344362b8e77ed6b022d25a22d9d91c2f957f20bed9fbc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:57:46 compute-0 podman[373842]: 2025-11-25 16:57:46.613617465 +0000 UTC m=+0.022576646 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:57:46 compute-0 podman[373842]: 2025-11-25 16:57:46.714729999 +0000 UTC m=+0.123689180 container init 9792b6943dc5d98d33bfd4a64cfb498dce4097f67f71f784956c465a04b29754 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 16:57:46 compute-0 podman[373842]: 2025-11-25 16:57:46.724395001 +0000 UTC m=+0.133354162 container start 9792b6943dc5d98d33bfd4a64cfb498dce4097f67f71f784956c465a04b29754 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 16:57:46 compute-0 neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4[373858]: [NOTICE]   (373862) : New worker (373864) forked
Nov 25 16:57:46 compute-0 neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4[373858]: [NOTICE]   (373862) : Loading success.
Nov 25 16:57:47 compute-0 ceph-mon[74985]: pgmap v2221: 321 pgs: 321 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Nov 25 16:57:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2222: 321 pgs: 321 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Nov 25 16:57:48 compute-0 nova_compute[254092]: 2025-11-25 16:57:48.431 254096 DEBUG nova.compute.manager [req-4f91f607-8b5f-41e1-91db-c312de80db24 req-1ce23d4c-a164-485d-ad54-3653bcd2d392 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Received event network-vif-plugged-75edff1b-5ceb-4f80-befe-e1a5ec106382 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:57:48 compute-0 nova_compute[254092]: 2025-11-25 16:57:48.431 254096 DEBUG oslo_concurrency.lockutils [req-4f91f607-8b5f-41e1-91db-c312de80db24 req-1ce23d4c-a164-485d-ad54-3653bcd2d392 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:57:48 compute-0 nova_compute[254092]: 2025-11-25 16:57:48.431 254096 DEBUG oslo_concurrency.lockutils [req-4f91f607-8b5f-41e1-91db-c312de80db24 req-1ce23d4c-a164-485d-ad54-3653bcd2d392 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:57:48 compute-0 nova_compute[254092]: 2025-11-25 16:57:48.431 254096 DEBUG oslo_concurrency.lockutils [req-4f91f607-8b5f-41e1-91db-c312de80db24 req-1ce23d4c-a164-485d-ad54-3653bcd2d392 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:57:48 compute-0 nova_compute[254092]: 2025-11-25 16:57:48.432 254096 DEBUG nova.compute.manager [req-4f91f607-8b5f-41e1-91db-c312de80db24 req-1ce23d4c-a164-485d-ad54-3653bcd2d392 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] No waiting events found dispatching network-vif-plugged-75edff1b-5ceb-4f80-befe-e1a5ec106382 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:57:48 compute-0 nova_compute[254092]: 2025-11-25 16:57:48.432 254096 WARNING nova.compute.manager [req-4f91f607-8b5f-41e1-91db-c312de80db24 req-1ce23d4c-a164-485d-ad54-3653bcd2d392 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Received unexpected event network-vif-plugged-75edff1b-5ceb-4f80-befe-e1a5ec106382 for instance with vm_state active and task_state None.
Nov 25 16:57:49 compute-0 nova_compute[254092]: 2025-11-25 16:57:49.272 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:57:49 compute-0 nova_compute[254092]: 2025-11-25 16:57:49.283 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:57:49 compute-0 NetworkManager[48891]: <info>  [1764089869.2853] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/462)
Nov 25 16:57:49 compute-0 NetworkManager[48891]: <info>  [1764089869.2862] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/463)
Nov 25 16:57:49 compute-0 nova_compute[254092]: 2025-11-25 16:57:49.353 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:57:49 compute-0 ovn_controller[153477]: 2025-11-25T16:57:49Z|01134|binding|INFO|Releasing lport f4863bb8-2150-4d2f-9927-dcfc70177f3b from this chassis (sb_readonly=0)
Nov 25 16:57:49 compute-0 nova_compute[254092]: 2025-11-25 16:57:49.368 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:57:49 compute-0 nova_compute[254092]: 2025-11-25 16:57:49.428 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:57:50 compute-0 ceph-mon[74985]: pgmap v2222: 321 pgs: 321 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Nov 25 16:57:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2223: 321 pgs: 321 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 86 op/s
Nov 25 16:57:50 compute-0 nova_compute[254092]: 2025-11-25 16:57:50.189 254096 DEBUG nova.compute.manager [req-687cbd07-7c0c-4b24-aaca-6985112dd192 req-05bed1bb-a246-4c1e-b790-fb824caba03c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Received event network-changed-75edff1b-5ceb-4f80-befe-e1a5ec106382 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:57:50 compute-0 nova_compute[254092]: 2025-11-25 16:57:50.190 254096 DEBUG nova.compute.manager [req-687cbd07-7c0c-4b24-aaca-6985112dd192 req-05bed1bb-a246-4c1e-b790-fb824caba03c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Refreshing instance network info cache due to event network-changed-75edff1b-5ceb-4f80-befe-e1a5ec106382. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:57:50 compute-0 nova_compute[254092]: 2025-11-25 16:57:50.190 254096 DEBUG oslo_concurrency.lockutils [req-687cbd07-7c0c-4b24-aaca-6985112dd192 req-05bed1bb-a246-4c1e-b790-fb824caba03c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:57:50 compute-0 nova_compute[254092]: 2025-11-25 16:57:50.190 254096 DEBUG oslo_concurrency.lockutils [req-687cbd07-7c0c-4b24-aaca-6985112dd192 req-05bed1bb-a246-4c1e-b790-fb824caba03c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:57:50 compute-0 nova_compute[254092]: 2025-11-25 16:57:50.190 254096 DEBUG nova.network.neutron [req-687cbd07-7c0c-4b24-aaca-6985112dd192 req-05bed1bb-a246-4c1e-b790-fb824caba03c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Refreshing network info cache for port 75edff1b-5ceb-4f80-befe-e1a5ec106382 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:57:51 compute-0 ceph-mon[74985]: pgmap v2223: 321 pgs: 321 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 86 op/s
Nov 25 16:57:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:57:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:57:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:57:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:57:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Nov 25 16:57:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:57:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:57:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:57:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:57:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:57:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 16:57:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:57:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 16:57:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:57:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:57:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:57:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 16:57:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:57:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 16:57:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:57:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:57:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:57:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 16:57:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:57:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2224: 321 pgs: 321 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 448 KiB/s wr, 85 op/s
Nov 25 16:57:53 compute-0 ceph-mon[74985]: pgmap v2224: 321 pgs: 321 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 448 KiB/s wr, 85 op/s
Nov 25 16:57:53 compute-0 nova_compute[254092]: 2025-11-25 16:57:53.414 254096 DEBUG nova.network.neutron [req-687cbd07-7c0c-4b24-aaca-6985112dd192 req-05bed1bb-a246-4c1e-b790-fb824caba03c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Updated VIF entry in instance network info cache for port 75edff1b-5ceb-4f80-befe-e1a5ec106382. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:57:53 compute-0 nova_compute[254092]: 2025-11-25 16:57:53.414 254096 DEBUG nova.network.neutron [req-687cbd07-7c0c-4b24-aaca-6985112dd192 req-05bed1bb-a246-4c1e-b790-fb824caba03c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Updating instance_info_cache with network_info: [{"id": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "address": "fa:16:3e:9e:02:f1", "network": {"id": "6c62671a-f9ae-4033-9c32-b04ccb3e0de4", "bridge": "br-int", "label": "tempest-TestShelveInstance-32572327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fef68bf8cf647f89586309d548d4bd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75edff1b-5c", "ovs_interfaceid": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:57:53 compute-0 nova_compute[254092]: 2025-11-25 16:57:53.851 254096 DEBUG oslo_concurrency.lockutils [req-687cbd07-7c0c-4b24-aaca-6985112dd192 req-05bed1bb-a246-4c1e-b790-fb824caba03c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:57:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2225: 321 pgs: 321 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 16:57:54 compute-0 nova_compute[254092]: 2025-11-25 16:57:54.415 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:57:54 compute-0 nova_compute[254092]: 2025-11-25 16:57:54.429 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:57:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 16:57:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1494414632' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:57:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 16:57:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1494414632' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:57:55 compute-0 ceph-mon[74985]: pgmap v2225: 321 pgs: 321 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 16:57:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/1494414632' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:57:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/1494414632' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:57:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2226: 321 pgs: 321 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 16:57:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:57:57 compute-0 ceph-mon[74985]: pgmap v2226: 321 pgs: 321 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 16:57:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2227: 321 pgs: 321 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 71 op/s
Nov 25 16:57:59 compute-0 nova_compute[254092]: 2025-11-25 16:57:59.442 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:57:59 compute-0 nova_compute[254092]: 2025-11-25 16:57:59.444 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:57:59 compute-0 nova_compute[254092]: 2025-11-25 16:57:59.444 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5014 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 25 16:57:59 compute-0 nova_compute[254092]: 2025-11-25 16:57:59.444 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 25 16:57:59 compute-0 nova_compute[254092]: 2025-11-25 16:57:59.445 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 25 16:57:59 compute-0 nova_compute[254092]: 2025-11-25 16:57:59.446 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:57:59 compute-0 ceph-mon[74985]: pgmap v2227: 321 pgs: 321 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 71 op/s
Nov 25 16:58:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2228: 321 pgs: 321 active+clean; 109 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 106 op/s
Nov 25 16:58:01 compute-0 ovn_controller[153477]: 2025-11-25T16:58:01Z|00117|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9e:02:f1 10.100.0.9
Nov 25 16:58:01 compute-0 ovn_controller[153477]: 2025-11-25T16:58:01Z|00118|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9e:02:f1 10.100.0.9
Nov 25 16:58:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:58:01 compute-0 ceph-mon[74985]: pgmap v2228: 321 pgs: 321 active+clean; 109 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 106 op/s
Nov 25 16:58:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2229: 321 pgs: 321 active+clean; 112 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 516 KiB/s rd, 2.0 MiB/s wr, 65 op/s
Nov 25 16:58:03 compute-0 podman[373877]: 2025-11-25 16:58:03.653489359 +0000 UTC m=+0.062597726 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 16:58:03 compute-0 podman[373878]: 2025-11-25 16:58:03.680372651 +0000 UTC m=+0.086425365 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 16:58:03 compute-0 podman[373879]: 2025-11-25 16:58:03.680486804 +0000 UTC m=+0.083226858 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2)
Nov 25 16:58:03 compute-0 ceph-mon[74985]: pgmap v2229: 321 pgs: 321 active+clean; 112 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 516 KiB/s rd, 2.0 MiB/s wr, 65 op/s
Nov 25 16:58:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2230: 321 pgs: 321 active+clean; 112 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 146 KiB/s rd, 2.0 MiB/s wr, 51 op/s
Nov 25 16:58:04 compute-0 nova_compute[254092]: 2025-11-25 16:58:04.445 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:58:05 compute-0 ceph-mon[74985]: pgmap v2230: 321 pgs: 321 active+clean; 112 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 146 KiB/s rd, 2.0 MiB/s wr, 51 op/s
Nov 25 16:58:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2231: 321 pgs: 321 active+clean; 121 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 229 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 25 16:58:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:58:07 compute-0 ceph-mon[74985]: pgmap v2231: 321 pgs: 321 active+clean; 121 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 229 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 25 16:58:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2232: 321 pgs: 321 active+clean; 121 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 229 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 25 16:58:09 compute-0 ceph-mon[74985]: pgmap v2232: 321 pgs: 321 active+clean; 121 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 229 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 25 16:58:09 compute-0 nova_compute[254092]: 2025-11-25 16:58:09.446 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:58:09 compute-0 nova_compute[254092]: 2025-11-25 16:58:09.448 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:58:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:58:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:58:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:58:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:58:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:58:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:58:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2233: 321 pgs: 321 active+clean; 121 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 229 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 16:58:10 compute-0 sshd-session[373944]: Received disconnect from 193.46.255.7 port 31480:11:  [preauth]
Nov 25 16:58:10 compute-0 sshd-session[373944]: Disconnected from authenticating user root 193.46.255.7 port 31480 [preauth]
Nov 25 16:58:11 compute-0 ceph-mon[74985]: pgmap v2233: 321 pgs: 321 active+clean; 121 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 229 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 16:58:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:58:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2234: 321 pgs: 321 active+clean; 121 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 125 KiB/s rd, 108 KiB/s wr, 27 op/s
Nov 25 16:58:13 compute-0 ceph-mon[74985]: pgmap v2234: 321 pgs: 321 active+clean; 121 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 125 KiB/s rd, 108 KiB/s wr, 27 op/s
Nov 25 16:58:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:58:13.634 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:58:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:58:13.635 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:58:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:58:13.635 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:58:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2235: 321 pgs: 321 active+clean; 121 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 84 KiB/s rd, 100 KiB/s wr, 11 op/s
Nov 25 16:58:14 compute-0 nova_compute[254092]: 2025-11-25 16:58:14.449 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:58:14 compute-0 nova_compute[254092]: 2025-11-25 16:58:14.451 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:58:14 compute-0 nova_compute[254092]: 2025-11-25 16:58:14.451 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 25 16:58:14 compute-0 nova_compute[254092]: 2025-11-25 16:58:14.451 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 25 16:58:14 compute-0 nova_compute[254092]: 2025-11-25 16:58:14.478 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:58:14 compute-0 nova_compute[254092]: 2025-11-25 16:58:14.478 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 25 16:58:15 compute-0 ceph-mon[74985]: pgmap v2235: 321 pgs: 321 active+clean; 121 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 84 KiB/s rd, 100 KiB/s wr, 11 op/s
Nov 25 16:58:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2236: 321 pgs: 321 active+clean; 121 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 84 KiB/s rd, 100 KiB/s wr, 11 op/s
Nov 25 16:58:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:58:17 compute-0 ceph-mon[74985]: pgmap v2236: 321 pgs: 321 active+clean; 121 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 84 KiB/s rd, 100 KiB/s wr, 11 op/s
Nov 25 16:58:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2237: 321 pgs: 321 active+clean; 121 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Nov 25 16:58:19 compute-0 nova_compute[254092]: 2025-11-25 16:58:19.479 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:58:19 compute-0 ceph-mon[74985]: pgmap v2237: 321 pgs: 321 active+clean; 121 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Nov 25 16:58:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2238: 321 pgs: 321 active+clean; 121 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s wr, 0 op/s
Nov 25 16:58:20 compute-0 nova_compute[254092]: 2025-11-25 16:58:20.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:58:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:58:21 compute-0 ceph-mon[74985]: pgmap v2238: 321 pgs: 321 active+clean; 121 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s wr, 0 op/s
Nov 25 16:58:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2239: 321 pgs: 321 active+clean; 121 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Nov 25 16:58:23 compute-0 nova_compute[254092]: 2025-11-25 16:58:23.197 254096 DEBUG oslo_concurrency.lockutils [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Acquiring lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:58:23 compute-0 nova_compute[254092]: 2025-11-25 16:58:23.199 254096 DEBUG oslo_concurrency.lockutils [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:58:23 compute-0 nova_compute[254092]: 2025-11-25 16:58:23.199 254096 INFO nova.compute.manager [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Shelving
Nov 25 16:58:23 compute-0 nova_compute[254092]: 2025-11-25 16:58:23.222 254096 DEBUG nova.virt.libvirt.driver [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 25 16:58:23 compute-0 nova_compute[254092]: 2025-11-25 16:58:23.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:58:23 compute-0 ceph-mon[74985]: pgmap v2239: 321 pgs: 321 active+clean; 121 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Nov 25 16:58:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2240: 321 pgs: 321 active+clean; 121 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Nov 25 16:58:24 compute-0 nova_compute[254092]: 2025-11-25 16:58:24.480 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:58:24 compute-0 nova_compute[254092]: 2025-11-25 16:58:24.483 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:58:24 compute-0 nova_compute[254092]: 2025-11-25 16:58:24.483 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 25 16:58:24 compute-0 nova_compute[254092]: 2025-11-25 16:58:24.483 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 25 16:58:24 compute-0 nova_compute[254092]: 2025-11-25 16:58:24.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:58:24 compute-0 nova_compute[254092]: 2025-11-25 16:58:24.523 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:58:24 compute-0 nova_compute[254092]: 2025-11-25 16:58:24.524 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 25 16:58:25 compute-0 kernel: tap75edff1b-5c (unregistering): left promiscuous mode
Nov 25 16:58:25 compute-0 nova_compute[254092]: 2025-11-25 16:58:25.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:58:25 compute-0 NetworkManager[48891]: <info>  [1764089905.4982] device (tap75edff1b-5c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:58:25 compute-0 nova_compute[254092]: 2025-11-25 16:58:25.506 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:58:25 compute-0 ovn_controller[153477]: 2025-11-25T16:58:25Z|01135|binding|INFO|Releasing lport 75edff1b-5ceb-4f80-befe-e1a5ec106382 from this chassis (sb_readonly=0)
Nov 25 16:58:25 compute-0 ovn_controller[153477]: 2025-11-25T16:58:25Z|01136|binding|INFO|Setting lport 75edff1b-5ceb-4f80-befe-e1a5ec106382 down in Southbound
Nov 25 16:58:25 compute-0 ovn_controller[153477]: 2025-11-25T16:58:25Z|01137|binding|INFO|Removing iface tap75edff1b-5c ovn-installed in OVS
Nov 25 16:58:25 compute-0 nova_compute[254092]: 2025-11-25 16:58:25.510 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:58:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:58:25.516 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:02:f1 10.100.0.9'], port_security=['fa:16:3e:9e:02:f1 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '5d0e35bb-8f66-41f8-81fc-54e1b53c4dda', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c62671a-f9ae-4033-9c32-b04ccb3e0de4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0fef68bf8cf647f89586309d548d4bd7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '749158e6-7a67-48ec-831f-8c4fe94e7a88', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.200'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=91b4ffe7-2c89-49f5-b8f3-271ba9b0feff, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=75edff1b-5ceb-4f80-befe-e1a5ec106382) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:58:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:58:25.518 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 75edff1b-5ceb-4f80-befe-e1a5ec106382 in datapath 6c62671a-f9ae-4033-9c32-b04ccb3e0de4 unbound from our chassis
Nov 25 16:58:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:58:25.520 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6c62671a-f9ae-4033-9c32-b04ccb3e0de4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:58:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:58:25.521 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[485647f4-1950-4924-9a78-55bc08c3c145]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:58:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:58:25.522 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4 namespace which is not needed anymore
Nov 25 16:58:25 compute-0 nova_compute[254092]: 2025-11-25 16:58:25.534 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:58:25 compute-0 nova_compute[254092]: 2025-11-25 16:58:25.535 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:58:25 compute-0 nova_compute[254092]: 2025-11-25 16:58:25.535 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:58:25 compute-0 nova_compute[254092]: 2025-11-25 16:58:25.535 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 16:58:25 compute-0 nova_compute[254092]: 2025-11-25 16:58:25.536 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:58:25 compute-0 systemd[1]: machine-qemu\x2d141\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Nov 25 16:58:25 compute-0 systemd[1]: machine-qemu\x2d141\x2dinstance\x2d0000006e.scope: Consumed 14.187s CPU time.
Nov 25 16:58:25 compute-0 nova_compute[254092]: 2025-11-25 16:58:25.582 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:58:25 compute-0 systemd-machined[216343]: Machine qemu-141-instance-0000006e terminated.
Nov 25 16:58:25 compute-0 neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4[373858]: [NOTICE]   (373862) : haproxy version is 2.8.14-c23fe91
Nov 25 16:58:25 compute-0 neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4[373858]: [NOTICE]   (373862) : path to executable is /usr/sbin/haproxy
Nov 25 16:58:25 compute-0 neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4[373858]: [WARNING]  (373862) : Exiting Master process...
Nov 25 16:58:25 compute-0 neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4[373858]: [ALERT]    (373862) : Current worker (373864) exited with code 143 (Terminated)
Nov 25 16:58:25 compute-0 neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4[373858]: [WARNING]  (373862) : All workers exited. Exiting... (0)
Nov 25 16:58:25 compute-0 systemd[1]: libpod-9792b6943dc5d98d33bfd4a64cfb498dce4097f67f71f784956c465a04b29754.scope: Deactivated successfully.
Nov 25 16:58:25 compute-0 podman[373972]: 2025-11-25 16:58:25.68881638 +0000 UTC m=+0.049557211 container died 9792b6943dc5d98d33bfd4a64cfb498dce4097f67f71f784956c465a04b29754 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 16:58:25 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9792b6943dc5d98d33bfd4a64cfb498dce4097f67f71f784956c465a04b29754-userdata-shm.mount: Deactivated successfully.
Nov 25 16:58:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-f537bbdf1e418b80ee2344362b8e77ed6b022d25a22d9d91c2f957f20bed9fbc-merged.mount: Deactivated successfully.
Nov 25 16:58:25 compute-0 ceph-mon[74985]: pgmap v2240: 321 pgs: 321 active+clean; 121 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Nov 25 16:58:25 compute-0 podman[373972]: 2025-11-25 16:58:25.732579662 +0000 UTC m=+0.093320433 container cleanup 9792b6943dc5d98d33bfd4a64cfb498dce4097f67f71f784956c465a04b29754 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 16:58:25 compute-0 systemd[1]: libpod-conmon-9792b6943dc5d98d33bfd4a64cfb498dce4097f67f71f784956c465a04b29754.scope: Deactivated successfully.
Nov 25 16:58:25 compute-0 nova_compute[254092]: 2025-11-25 16:58:25.773 254096 DEBUG nova.compute.manager [req-d6a1c407-d987-4eb5-b863-ecc7793b87b7 req-41a91c2b-4273-4a3c-ac0f-466e4234829a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Received event network-vif-unplugged-75edff1b-5ceb-4f80-befe-e1a5ec106382 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:58:25 compute-0 nova_compute[254092]: 2025-11-25 16:58:25.778 254096 DEBUG oslo_concurrency.lockutils [req-d6a1c407-d987-4eb5-b863-ecc7793b87b7 req-41a91c2b-4273-4a3c-ac0f-466e4234829a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:58:25 compute-0 nova_compute[254092]: 2025-11-25 16:58:25.778 254096 DEBUG oslo_concurrency.lockutils [req-d6a1c407-d987-4eb5-b863-ecc7793b87b7 req-41a91c2b-4273-4a3c-ac0f-466e4234829a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:58:25 compute-0 nova_compute[254092]: 2025-11-25 16:58:25.778 254096 DEBUG oslo_concurrency.lockutils [req-d6a1c407-d987-4eb5-b863-ecc7793b87b7 req-41a91c2b-4273-4a3c-ac0f-466e4234829a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:58:25 compute-0 nova_compute[254092]: 2025-11-25 16:58:25.779 254096 DEBUG nova.compute.manager [req-d6a1c407-d987-4eb5-b863-ecc7793b87b7 req-41a91c2b-4273-4a3c-ac0f-466e4234829a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] No waiting events found dispatching network-vif-unplugged-75edff1b-5ceb-4f80-befe-e1a5ec106382 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:58:25 compute-0 nova_compute[254092]: 2025-11-25 16:58:25.779 254096 WARNING nova.compute.manager [req-d6a1c407-d987-4eb5-b863-ecc7793b87b7 req-41a91c2b-4273-4a3c-ac0f-466e4234829a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Received unexpected event network-vif-unplugged-75edff1b-5ceb-4f80-befe-e1a5ec106382 for instance with vm_state active and task_state shelving.
Nov 25 16:58:25 compute-0 podman[374023]: 2025-11-25 16:58:25.819816818 +0000 UTC m=+0.050993880 container remove 9792b6943dc5d98d33bfd4a64cfb498dce4097f67f71f784956c465a04b29754 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 16:58:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:58:25.827 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[23055595-7867-456d-8bea-5f3a18cc60fe]: (4, ('Tue Nov 25 04:58:25 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4 (9792b6943dc5d98d33bfd4a64cfb498dce4097f67f71f784956c465a04b29754)\n9792b6943dc5d98d33bfd4a64cfb498dce4097f67f71f784956c465a04b29754\nTue Nov 25 04:58:25 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4 (9792b6943dc5d98d33bfd4a64cfb498dce4097f67f71f784956c465a04b29754)\n9792b6943dc5d98d33bfd4a64cfb498dce4097f67f71f784956c465a04b29754\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:58:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:58:25.828 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ad8982cb-3566-403a-958e-b7cbbc2e381b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:58:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:58:25.829 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c62671a-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:58:25 compute-0 kernel: tap6c62671a-f0: left promiscuous mode
Nov 25 16:58:25 compute-0 nova_compute[254092]: 2025-11-25 16:58:25.830 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:58:25 compute-0 nova_compute[254092]: 2025-11-25 16:58:25.850 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:58:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:58:25.852 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2f4befd1-ef41-4bfb-8660-5f805402d733]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:58:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:58:25.869 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1c098cc1-ad8f-452b-8c7b-47c9f1af22e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:58:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:58:25.870 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f706a40d-3c9b-4bf9-afa7-d30c7763a67c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:58:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:58:25.887 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c064bb62-1498-4e62-985b-238f0a21cd2e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632367, 'reachable_time': 21345, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374049, 'error': None, 'target': 'ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:58:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:58:25.889 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:58:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:58:25.889 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[1737fbff-fa18-4ac8-a9db-b78ccfa59809]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:58:25 compute-0 systemd[1]: run-netns-ovnmeta\x2d6c62671a\x2df9ae\x2d4033\x2d9c32\x2db04ccb3e0de4.mount: Deactivated successfully.
Nov 25 16:58:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:58:25 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3080967315' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:58:25 compute-0 nova_compute[254092]: 2025-11-25 16:58:25.997 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:58:26 compute-0 nova_compute[254092]: 2025-11-25 16:58:26.050 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:58:26 compute-0 nova_compute[254092]: 2025-11-25 16:58:26.050 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 16:58:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2241: 321 pgs: 321 active+clean; 121 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 28 KiB/s wr, 4 op/s
Nov 25 16:58:26 compute-0 nova_compute[254092]: 2025-11-25 16:58:26.230 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:58:26 compute-0 nova_compute[254092]: 2025-11-25 16:58:26.231 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3812MB free_disk=59.94274139404297GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 16:58:26 compute-0 nova_compute[254092]: 2025-11-25 16:58:26.232 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:58:26 compute-0 nova_compute[254092]: 2025-11-25 16:58:26.232 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:58:26 compute-0 nova_compute[254092]: 2025-11-25 16:58:26.243 254096 INFO nova.virt.libvirt.driver [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Instance shutdown successfully after 3 seconds.
Nov 25 16:58:26 compute-0 nova_compute[254092]: 2025-11-25 16:58:26.250 254096 INFO nova.virt.libvirt.driver [-] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Instance destroyed successfully.
Nov 25 16:58:26 compute-0 nova_compute[254092]: 2025-11-25 16:58:26.250 254096 DEBUG nova.objects.instance [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lazy-loading 'numa_topology' on Instance uuid 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:58:26 compute-0 nova_compute[254092]: 2025-11-25 16:58:26.298 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 16:58:26 compute-0 nova_compute[254092]: 2025-11-25 16:58:26.299 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 16:58:26 compute-0 nova_compute[254092]: 2025-11-25 16:58:26.299 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 16:58:26 compute-0 nova_compute[254092]: 2025-11-25 16:58:26.337 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:58:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:58:26 compute-0 nova_compute[254092]: 2025-11-25 16:58:26.610 254096 INFO nova.virt.libvirt.driver [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Beginning cold snapshot process
Nov 25 16:58:26 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3080967315' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:58:26 compute-0 nova_compute[254092]: 2025-11-25 16:58:26.767 254096 DEBUG nova.virt.libvirt.imagebackend [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] No parent info for 8b512c8e-2281-41de-a668-eb983e174ba0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 25 16:58:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:58:26 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3793325635' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:58:26 compute-0 nova_compute[254092]: 2025-11-25 16:58:26.811 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:58:26 compute-0 nova_compute[254092]: 2025-11-25 16:58:26.817 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:58:26 compute-0 nova_compute[254092]: 2025-11-25 16:58:26.830 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:58:26 compute-0 nova_compute[254092]: 2025-11-25 16:58:26.844 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 16:58:26 compute-0 nova_compute[254092]: 2025-11-25 16:58:26.845 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:58:26 compute-0 nova_compute[254092]: 2025-11-25 16:58:26.981 254096 DEBUG nova.storage.rbd_utils [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] creating snapshot(2d7784b1d2864fbcb9bcb6d80a6ef20d) on rbd image(5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 16:58:27 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e278 do_prune osdmap full prune enabled
Nov 25 16:58:27 compute-0 ceph-mon[74985]: pgmap v2241: 321 pgs: 321 active+clean; 121 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 28 KiB/s wr, 4 op/s
Nov 25 16:58:27 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3793325635' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:58:27 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e279 e279: 3 total, 3 up, 3 in
Nov 25 16:58:27 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e279: 3 total, 3 up, 3 in
Nov 25 16:58:27 compute-0 nova_compute[254092]: 2025-11-25 16:58:27.816 254096 DEBUG nova.storage.rbd_utils [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] cloning vms/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk@2d7784b1d2864fbcb9bcb6d80a6ef20d to images/e39c28b6-49f5-4073-a746-38050d6a700b clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 16:58:27 compute-0 nova_compute[254092]: 2025-11-25 16:58:27.867 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:58:27 compute-0 nova_compute[254092]: 2025-11-25 16:58:27.868 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:58:27 compute-0 nova_compute[254092]: 2025-11-25 16:58:27.868 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:58:27 compute-0 nova_compute[254092]: 2025-11-25 16:58:27.869 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 16:58:27 compute-0 nova_compute[254092]: 2025-11-25 16:58:27.886 254096 DEBUG nova.compute.manager [req-286f7f6f-a033-497c-b4ec-62e2210f55c3 req-9ed54d72-7b94-46e6-8699-ec7de07b0e80 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Received event network-vif-plugged-75edff1b-5ceb-4f80-befe-e1a5ec106382 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:58:27 compute-0 nova_compute[254092]: 2025-11-25 16:58:27.887 254096 DEBUG oslo_concurrency.lockutils [req-286f7f6f-a033-497c-b4ec-62e2210f55c3 req-9ed54d72-7b94-46e6-8699-ec7de07b0e80 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:58:27 compute-0 nova_compute[254092]: 2025-11-25 16:58:27.888 254096 DEBUG oslo_concurrency.lockutils [req-286f7f6f-a033-497c-b4ec-62e2210f55c3 req-9ed54d72-7b94-46e6-8699-ec7de07b0e80 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:58:27 compute-0 nova_compute[254092]: 2025-11-25 16:58:27.889 254096 DEBUG oslo_concurrency.lockutils [req-286f7f6f-a033-497c-b4ec-62e2210f55c3 req-9ed54d72-7b94-46e6-8699-ec7de07b0e80 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:58:27 compute-0 nova_compute[254092]: 2025-11-25 16:58:27.889 254096 DEBUG nova.compute.manager [req-286f7f6f-a033-497c-b4ec-62e2210f55c3 req-9ed54d72-7b94-46e6-8699-ec7de07b0e80 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] No waiting events found dispatching network-vif-plugged-75edff1b-5ceb-4f80-befe-e1a5ec106382 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:58:27 compute-0 nova_compute[254092]: 2025-11-25 16:58:27.891 254096 WARNING nova.compute.manager [req-286f7f6f-a033-497c-b4ec-62e2210f55c3 req-9ed54d72-7b94-46e6-8699-ec7de07b0e80 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Received unexpected event network-vif-plugged-75edff1b-5ceb-4f80-befe-e1a5ec106382 for instance with vm_state active and task_state shelving_image_uploading.
Nov 25 16:58:27 compute-0 nova_compute[254092]: 2025-11-25 16:58:27.995 254096 DEBUG nova.storage.rbd_utils [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] flattening images/e39c28b6-49f5-4073-a746-38050d6a700b flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 16:58:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2243: 321 pgs: 321 active+clean; 121 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 2.1 KiB/s rd, 33 KiB/s wr, 5 op/s
Nov 25 16:58:28 compute-0 nova_compute[254092]: 2025-11-25 16:58:28.524 254096 DEBUG nova.storage.rbd_utils [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] removing snapshot(2d7784b1d2864fbcb9bcb6d80a6ef20d) on rbd image(5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 16:58:28 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e279 do_prune osdmap full prune enabled
Nov 25 16:58:28 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e280 e280: 3 total, 3 up, 3 in
Nov 25 16:58:28 compute-0 ceph-mon[74985]: osdmap e279: 3 total, 3 up, 3 in
Nov 25 16:58:28 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e280: 3 total, 3 up, 3 in
Nov 25 16:58:28 compute-0 nova_compute[254092]: 2025-11-25 16:58:28.894 254096 DEBUG nova.storage.rbd_utils [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] creating snapshot(snap) on rbd image(e39c28b6-49f5-4073-a746-38050d6a700b) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 16:58:29 compute-0 nova_compute[254092]: 2025-11-25 16:58:29.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:58:29 compute-0 nova_compute[254092]: 2025-11-25 16:58:29.526 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:58:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e280 do_prune osdmap full prune enabled
Nov 25 16:58:29 compute-0 ceph-mon[74985]: pgmap v2243: 321 pgs: 321 active+clean; 121 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 2.1 KiB/s rd, 33 KiB/s wr, 5 op/s
Nov 25 16:58:29 compute-0 ceph-mon[74985]: osdmap e280: 3 total, 3 up, 3 in
Nov 25 16:58:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e281 e281: 3 total, 3 up, 3 in
Nov 25 16:58:29 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e281: 3 total, 3 up, 3 in
Nov 25 16:58:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2246: 321 pgs: 321 active+clean; 163 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 3.6 MiB/s wr, 95 op/s
Nov 25 16:58:30 compute-0 ceph-mon[74985]: osdmap e281: 3 total, 3 up, 3 in
Nov 25 16:58:31 compute-0 nova_compute[254092]: 2025-11-25 16:58:31.199 254096 INFO nova.virt.libvirt.driver [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Snapshot image upload complete
Nov 25 16:58:31 compute-0 nova_compute[254092]: 2025-11-25 16:58:31.199 254096 DEBUG nova.compute.manager [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:58:31 compute-0 nova_compute[254092]: 2025-11-25 16:58:31.250 254096 INFO nova.compute.manager [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Shelve offloading
Nov 25 16:58:31 compute-0 nova_compute[254092]: 2025-11-25 16:58:31.258 254096 INFO nova.virt.libvirt.driver [-] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Instance destroyed successfully.
Nov 25 16:58:31 compute-0 nova_compute[254092]: 2025-11-25 16:58:31.259 254096 DEBUG nova.compute.manager [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:58:31 compute-0 nova_compute[254092]: 2025-11-25 16:58:31.261 254096 DEBUG oslo_concurrency.lockutils [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Acquiring lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:58:31 compute-0 nova_compute[254092]: 2025-11-25 16:58:31.261 254096 DEBUG oslo_concurrency.lockutils [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Acquired lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:58:31 compute-0 nova_compute[254092]: 2025-11-25 16:58:31.262 254096 DEBUG nova.network.neutron [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:58:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:58:31 compute-0 ceph-mon[74985]: pgmap v2246: 321 pgs: 321 active+clean; 163 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 3.6 MiB/s wr, 95 op/s
Nov 25 16:58:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2247: 321 pgs: 321 active+clean; 200 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 153 op/s
Nov 25 16:58:32 compute-0 nova_compute[254092]: 2025-11-25 16:58:32.744 254096 DEBUG nova.network.neutron [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Updating instance_info_cache with network_info: [{"id": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "address": "fa:16:3e:9e:02:f1", "network": {"id": "6c62671a-f9ae-4033-9c32-b04ccb3e0de4", "bridge": "br-int", "label": "tempest-TestShelveInstance-32572327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fef68bf8cf647f89586309d548d4bd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75edff1b-5c", "ovs_interfaceid": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:58:32 compute-0 nova_compute[254092]: 2025-11-25 16:58:32.792 254096 DEBUG oslo_concurrency.lockutils [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Releasing lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:58:33 compute-0 nova_compute[254092]: 2025-11-25 16:58:33.498 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:58:33 compute-0 nova_compute[254092]: 2025-11-25 16:58:33.498 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 16:58:33 compute-0 nova_compute[254092]: 2025-11-25 16:58:33.499 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 16:58:33 compute-0 nova_compute[254092]: 2025-11-25 16:58:33.526 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:58:33 compute-0 nova_compute[254092]: 2025-11-25 16:58:33.526 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:58:33 compute-0 nova_compute[254092]: 2025-11-25 16:58:33.527 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 16:58:33 compute-0 nova_compute[254092]: 2025-11-25 16:58:33.527 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:58:33 compute-0 ceph-mon[74985]: pgmap v2247: 321 pgs: 321 active+clean; 200 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 153 op/s
Nov 25 16:58:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2248: 321 pgs: 321 active+clean; 200 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 7.3 MiB/s rd, 7.3 MiB/s wr, 143 op/s
Nov 25 16:58:34 compute-0 nova_compute[254092]: 2025-11-25 16:58:34.350 254096 INFO nova.virt.libvirt.driver [-] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Instance destroyed successfully.
Nov 25 16:58:34 compute-0 nova_compute[254092]: 2025-11-25 16:58:34.351 254096 DEBUG nova.objects.instance [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lazy-loading 'resources' on Instance uuid 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:58:34 compute-0 nova_compute[254092]: 2025-11-25 16:58:34.366 254096 DEBUG nova.virt.libvirt.vif [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:57:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1491050401',display_name='tempest-TestShelveInstance-server-1491050401',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1491050401',id=110,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCU1tFZ5889TlnfI/RUb+xoO74TlS+bqxewhKSm7AlfWLTCPQbO0K87XgSi1H6tslLwcv758vqLas7auaeYLUcvYqQB29Lbjk/VpKf97D9aRztTuSqsJ0wNarM8/2CjMZA==',key_name='tempest-TestShelveInstance-1027394080',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:57:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='0fef68bf8cf647f89586309d548d4bd7',ramdisk_id='',reservation_id='r-8exg2mmt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-535396087',owner_user_name='tempest-TestShelveInstance-535396087-project-member',shelved_at='2025-11-25T16:58:31.199734',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='e39c28b6-49f5-4073-a746-38050d6a700b'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:58:26Z,user_data=None,user_id='2a830f6b7532459380b24ae0297b12bb',uuid=5d0e35bb-8f66-41f8-81fc-54e1b53c4dda,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "address": "fa:16:3e:9e:02:f1", "network": {"id": "6c62671a-f9ae-4033-9c32-b04ccb3e0de4", "bridge": "br-int", "label": "tempest-TestShelveInstance-32572327-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fef68bf8cf647f89586309d548d4bd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75edff1b-5c", "ovs_interfaceid": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:58:34 compute-0 nova_compute[254092]: 2025-11-25 16:58:34.367 254096 DEBUG nova.network.os_vif_util [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Converting VIF {"id": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "address": "fa:16:3e:9e:02:f1", "network": {"id": "6c62671a-f9ae-4033-9c32-b04ccb3e0de4", "bridge": "br-int", "label": "tempest-TestShelveInstance-32572327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fef68bf8cf647f89586309d548d4bd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75edff1b-5c", "ovs_interfaceid": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:58:34 compute-0 nova_compute[254092]: 2025-11-25 16:58:34.368 254096 DEBUG nova.network.os_vif_util [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:02:f1,bridge_name='br-int',has_traffic_filtering=True,id=75edff1b-5ceb-4f80-befe-e1a5ec106382,network=Network(6c62671a-f9ae-4033-9c32-b04ccb3e0de4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75edff1b-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:58:34 compute-0 nova_compute[254092]: 2025-11-25 16:58:34.369 254096 DEBUG os_vif [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:02:f1,bridge_name='br-int',has_traffic_filtering=True,id=75edff1b-5ceb-4f80-befe-e1a5ec106382,network=Network(6c62671a-f9ae-4033-9c32-b04ccb3e0de4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75edff1b-5c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:58:34 compute-0 nova_compute[254092]: 2025-11-25 16:58:34.372 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:58:34 compute-0 nova_compute[254092]: 2025-11-25 16:58:34.372 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap75edff1b-5c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:58:34 compute-0 nova_compute[254092]: 2025-11-25 16:58:34.415 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:58:34 compute-0 nova_compute[254092]: 2025-11-25 16:58:34.419 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:58:34 compute-0 nova_compute[254092]: 2025-11-25 16:58:34.423 254096 INFO os_vif [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:02:f1,bridge_name='br-int',has_traffic_filtering=True,id=75edff1b-5ceb-4f80-befe-e1a5ec106382,network=Network(6c62671a-f9ae-4033-9c32-b04ccb3e0de4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75edff1b-5c')
Nov 25 16:58:34 compute-0 nova_compute[254092]: 2025-11-25 16:58:34.468 254096 DEBUG nova.compute.manager [req-c2cac09c-c044-4744-803e-7f88e5c375e5 req-c77c0551-60bb-4f38-b432-d5793a7e81ab a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Received event network-changed-75edff1b-5ceb-4f80-befe-e1a5ec106382 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:58:34 compute-0 nova_compute[254092]: 2025-11-25 16:58:34.468 254096 DEBUG nova.compute.manager [req-c2cac09c-c044-4744-803e-7f88e5c375e5 req-c77c0551-60bb-4f38-b432-d5793a7e81ab a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Refreshing instance network info cache due to event network-changed-75edff1b-5ceb-4f80-befe-e1a5ec106382. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:58:34 compute-0 nova_compute[254092]: 2025-11-25 16:58:34.468 254096 DEBUG oslo_concurrency.lockutils [req-c2cac09c-c044-4744-803e-7f88e5c375e5 req-c77c0551-60bb-4f38-b432-d5793a7e81ab a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:58:34 compute-0 nova_compute[254092]: 2025-11-25 16:58:34.529 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:58:34 compute-0 podman[374235]: 2025-11-25 16:58:34.672929839 +0000 UTC m=+0.082935569 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:58:34 compute-0 podman[374236]: 2025-11-25 16:58:34.691553227 +0000 UTC m=+0.096287844 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 16:58:34 compute-0 podman[374237]: 2025-11-25 16:58:34.736672116 +0000 UTC m=+0.125549231 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 16:58:34 compute-0 nova_compute[254092]: 2025-11-25 16:58:34.891 254096 INFO nova.virt.libvirt.driver [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Deleting instance files /var/lib/nova/instances/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_del
Nov 25 16:58:34 compute-0 nova_compute[254092]: 2025-11-25 16:58:34.892 254096 INFO nova.virt.libvirt.driver [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Deletion of /var/lib/nova/instances/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_del complete
Nov 25 16:58:34 compute-0 nova_compute[254092]: 2025-11-25 16:58:34.905 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Updating instance_info_cache with network_info: [{"id": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "address": "fa:16:3e:9e:02:f1", "network": {"id": "6c62671a-f9ae-4033-9c32-b04ccb3e0de4", "bridge": "br-int", "label": "tempest-TestShelveInstance-32572327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fef68bf8cf647f89586309d548d4bd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75edff1b-5c", "ovs_interfaceid": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:58:34 compute-0 nova_compute[254092]: 2025-11-25 16:58:34.960 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:58:34 compute-0 nova_compute[254092]: 2025-11-25 16:58:34.960 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 16:58:34 compute-0 nova_compute[254092]: 2025-11-25 16:58:34.961 254096 DEBUG oslo_concurrency.lockutils [req-c2cac09c-c044-4744-803e-7f88e5c375e5 req-c77c0551-60bb-4f38-b432-d5793a7e81ab a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:58:34 compute-0 nova_compute[254092]: 2025-11-25 16:58:34.961 254096 DEBUG nova.network.neutron [req-c2cac09c-c044-4744-803e-7f88e5c375e5 req-c77c0551-60bb-4f38-b432-d5793a7e81ab a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Refreshing network info cache for port 75edff1b-5ceb-4f80-befe-e1a5ec106382 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:58:35 compute-0 nova_compute[254092]: 2025-11-25 16:58:35.016 254096 INFO nova.scheduler.client.report [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Deleted allocations for instance 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda
Nov 25 16:58:35 compute-0 nova_compute[254092]: 2025-11-25 16:58:35.054 254096 DEBUG oslo_concurrency.lockutils [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:58:35 compute-0 nova_compute[254092]: 2025-11-25 16:58:35.055 254096 DEBUG oslo_concurrency.lockutils [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:58:35 compute-0 nova_compute[254092]: 2025-11-25 16:58:35.104 254096 DEBUG oslo_concurrency.processutils [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:58:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:58:35 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3784194978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:58:35 compute-0 nova_compute[254092]: 2025-11-25 16:58:35.545 254096 DEBUG oslo_concurrency.processutils [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:58:35 compute-0 nova_compute[254092]: 2025-11-25 16:58:35.552 254096 DEBUG nova.compute.provider_tree [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:58:35 compute-0 nova_compute[254092]: 2025-11-25 16:58:35.572 254096 DEBUG nova.scheduler.client.report [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:58:35 compute-0 nova_compute[254092]: 2025-11-25 16:58:35.607 254096 DEBUG oslo_concurrency.lockutils [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.552s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:58:35 compute-0 nova_compute[254092]: 2025-11-25 16:58:35.662 254096 DEBUG oslo_concurrency.lockutils [None req-57635644-ef12-4756-9b40-ee6b9f069a51 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 12.463s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:58:35 compute-0 ceph-mon[74985]: pgmap v2248: 321 pgs: 321 active+clean; 200 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 7.3 MiB/s rd, 7.3 MiB/s wr, 143 op/s
Nov 25 16:58:35 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3784194978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:58:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2249: 321 pgs: 321 active+clean; 120 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 170 op/s
Nov 25 16:58:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:58:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e281 do_prune osdmap full prune enabled
Nov 25 16:58:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e282 e282: 3 total, 3 up, 3 in
Nov 25 16:58:36 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e282: 3 total, 3 up, 3 in
Nov 25 16:58:36 compute-0 nova_compute[254092]: 2025-11-25 16:58:36.592 254096 DEBUG nova.network.neutron [req-c2cac09c-c044-4744-803e-7f88e5c375e5 req-c77c0551-60bb-4f38-b432-d5793a7e81ab a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Updated VIF entry in instance network info cache for port 75edff1b-5ceb-4f80-befe-e1a5ec106382. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:58:36 compute-0 nova_compute[254092]: 2025-11-25 16:58:36.592 254096 DEBUG nova.network.neutron [req-c2cac09c-c044-4744-803e-7f88e5c375e5 req-c77c0551-60bb-4f38-b432-d5793a7e81ab a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Updating instance_info_cache with network_info: [{"id": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "address": "fa:16:3e:9e:02:f1", "network": {"id": "6c62671a-f9ae-4033-9c32-b04ccb3e0de4", "bridge": null, "label": "tempest-TestShelveInstance-32572327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fef68bf8cf647f89586309d548d4bd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap75edff1b-5c", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:58:36 compute-0 nova_compute[254092]: 2025-11-25 16:58:36.610 254096 DEBUG oslo_concurrency.lockutils [req-c2cac09c-c044-4744-803e-7f88e5c375e5 req-c77c0551-60bb-4f38-b432-d5793a7e81ab a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:58:37 compute-0 nova_compute[254092]: 2025-11-25 16:58:37.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:58:37 compute-0 nova_compute[254092]: 2025-11-25 16:58:37.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 16:58:37 compute-0 nova_compute[254092]: 2025-11-25 16:58:37.510 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 16:58:37 compute-0 ceph-mon[74985]: pgmap v2249: 321 pgs: 321 active+clean; 120 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 170 op/s
Nov 25 16:58:37 compute-0 ceph-mon[74985]: osdmap e282: 3 total, 3 up, 3 in
Nov 25 16:58:37 compute-0 nova_compute[254092]: 2025-11-25 16:58:37.740 254096 DEBUG oslo_concurrency.lockutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Acquiring lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:58:37 compute-0 nova_compute[254092]: 2025-11-25 16:58:37.740 254096 DEBUG oslo_concurrency.lockutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:58:37 compute-0 nova_compute[254092]: 2025-11-25 16:58:37.741 254096 INFO nova.compute.manager [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Unshelving
Nov 25 16:58:37 compute-0 nova_compute[254092]: 2025-11-25 16:58:37.809 254096 DEBUG oslo_concurrency.lockutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:58:37 compute-0 nova_compute[254092]: 2025-11-25 16:58:37.809 254096 DEBUG oslo_concurrency.lockutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:58:37 compute-0 nova_compute[254092]: 2025-11-25 16:58:37.814 254096 DEBUG nova.objects.instance [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lazy-loading 'pci_requests' on Instance uuid 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:58:37 compute-0 nova_compute[254092]: 2025-11-25 16:58:37.825 254096 DEBUG nova.objects.instance [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lazy-loading 'numa_topology' on Instance uuid 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:58:37 compute-0 nova_compute[254092]: 2025-11-25 16:58:37.837 254096 DEBUG nova.virt.hardware [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:58:37 compute-0 nova_compute[254092]: 2025-11-25 16:58:37.837 254096 INFO nova.compute.claims [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:58:37 compute-0 nova_compute[254092]: 2025-11-25 16:58:37.917 254096 DEBUG oslo_concurrency.processutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:58:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2251: 321 pgs: 321 active+clean; 120 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 5.6 MiB/s wr, 163 op/s
Nov 25 16:58:38 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:58:38 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1631280970' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:58:38 compute-0 nova_compute[254092]: 2025-11-25 16:58:38.393 254096 DEBUG oslo_concurrency.processutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:58:38 compute-0 nova_compute[254092]: 2025-11-25 16:58:38.398 254096 DEBUG nova.compute.provider_tree [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:58:38 compute-0 nova_compute[254092]: 2025-11-25 16:58:38.415 254096 DEBUG nova.scheduler.client.report [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:58:38 compute-0 nova_compute[254092]: 2025-11-25 16:58:38.432 254096 DEBUG oslo_concurrency.lockutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:58:38 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1631280970' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:58:38 compute-0 nova_compute[254092]: 2025-11-25 16:58:38.584 254096 INFO nova.network.neutron [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Updating port 75edff1b-5ceb-4f80-befe-e1a5ec106382 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Nov 25 16:58:39 compute-0 nova_compute[254092]: 2025-11-25 16:58:39.075 254096 DEBUG oslo_concurrency.lockutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Acquiring lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:58:39 compute-0 nova_compute[254092]: 2025-11-25 16:58:39.076 254096 DEBUG oslo_concurrency.lockutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Acquired lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:58:39 compute-0 nova_compute[254092]: 2025-11-25 16:58:39.076 254096 DEBUG nova.network.neutron [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:58:39 compute-0 nova_compute[254092]: 2025-11-25 16:58:39.184 254096 DEBUG nova.compute.manager [req-bce1a306-05bb-47ad-9fc0-0fbce95411ce req-1392e996-cca7-4817-858d-beaf2a574811 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Received event network-changed-75edff1b-5ceb-4f80-befe-e1a5ec106382 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:58:39 compute-0 nova_compute[254092]: 2025-11-25 16:58:39.184 254096 DEBUG nova.compute.manager [req-bce1a306-05bb-47ad-9fc0-0fbce95411ce req-1392e996-cca7-4817-858d-beaf2a574811 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Refreshing instance network info cache due to event network-changed-75edff1b-5ceb-4f80-befe-e1a5ec106382. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:58:39 compute-0 nova_compute[254092]: 2025-11-25 16:58:39.185 254096 DEBUG oslo_concurrency.lockutils [req-bce1a306-05bb-47ad-9fc0-0fbce95411ce req-1392e996-cca7-4817-858d-beaf2a574811 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:58:39 compute-0 nova_compute[254092]: 2025-11-25 16:58:39.418 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:58:39 compute-0 nova_compute[254092]: 2025-11-25 16:58:39.531 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:58:39 compute-0 ceph-mon[74985]: pgmap v2251: 321 pgs: 321 active+clean; 120 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 5.6 MiB/s wr, 163 op/s
Nov 25 16:58:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:58:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:58:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:58:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:58:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:58:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:58:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:58:40
Nov 25 16:58:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:58:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:58:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'default.rgw.log', 'vms', '.rgw.root', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'images']
Nov 25 16:58:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:58:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2252: 321 pgs: 321 active+clean; 120 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 575 KiB/s rd, 2.5 MiB/s wr, 84 op/s
Nov 25 16:58:40 compute-0 nova_compute[254092]: 2025-11-25 16:58:40.208 254096 DEBUG nova.network.neutron [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Updating instance_info_cache with network_info: [{"id": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "address": "fa:16:3e:9e:02:f1", "network": {"id": "6c62671a-f9ae-4033-9c32-b04ccb3e0de4", "bridge": "br-int", "label": "tempest-TestShelveInstance-32572327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fef68bf8cf647f89586309d548d4bd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75edff1b-5c", "ovs_interfaceid": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:58:40 compute-0 nova_compute[254092]: 2025-11-25 16:58:40.223 254096 DEBUG oslo_concurrency.lockutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Releasing lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:58:40 compute-0 nova_compute[254092]: 2025-11-25 16:58:40.225 254096 DEBUG nova.virt.libvirt.driver [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:58:40 compute-0 nova_compute[254092]: 2025-11-25 16:58:40.225 254096 INFO nova.virt.libvirt.driver [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Creating image(s)
Nov 25 16:58:40 compute-0 nova_compute[254092]: 2025-11-25 16:58:40.247 254096 DEBUG nova.storage.rbd_utils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] rbd image 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:58:40 compute-0 nova_compute[254092]: 2025-11-25 16:58:40.251 254096 DEBUG nova.objects.instance [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:58:40 compute-0 nova_compute[254092]: 2025-11-25 16:58:40.252 254096 DEBUG oslo_concurrency.lockutils [req-bce1a306-05bb-47ad-9fc0-0fbce95411ce req-1392e996-cca7-4817-858d-beaf2a574811 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:58:40 compute-0 nova_compute[254092]: 2025-11-25 16:58:40.253 254096 DEBUG nova.network.neutron [req-bce1a306-05bb-47ad-9fc0-0fbce95411ce req-1392e996-cca7-4817-858d-beaf2a574811 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Refreshing network info cache for port 75edff1b-5ceb-4f80-befe-e1a5ec106382 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:58:40 compute-0 nova_compute[254092]: 2025-11-25 16:58:40.288 254096 DEBUG nova.storage.rbd_utils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] rbd image 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:58:40 compute-0 nova_compute[254092]: 2025-11-25 16:58:40.312 254096 DEBUG nova.storage.rbd_utils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] rbd image 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:58:40 compute-0 nova_compute[254092]: 2025-11-25 16:58:40.316 254096 DEBUG oslo_concurrency.lockutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Acquiring lock "3cd7afdb8044e756124699d2a63eed57978d47cf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:58:40 compute-0 nova_compute[254092]: 2025-11-25 16:58:40.317 254096 DEBUG oslo_concurrency.lockutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "3cd7afdb8044e756124699d2a63eed57978d47cf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:58:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:58:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:58:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:58:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:58:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:58:40 compute-0 nova_compute[254092]: 2025-11-25 16:58:40.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:58:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:58:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:58:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:58:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:58:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:58:40 compute-0 nova_compute[254092]: 2025-11-25 16:58:40.628 254096 DEBUG nova.virt.libvirt.imagebackend [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Image locations are: [{'url': 'rbd://d82baeae-c742-50a4-b8f6-b5257c8a2c92/images/e39c28b6-49f5-4073-a746-38050d6a700b/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://d82baeae-c742-50a4-b8f6-b5257c8a2c92/images/e39c28b6-49f5-4073-a746-38050d6a700b/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 25 16:58:40 compute-0 nova_compute[254092]: 2025-11-25 16:58:40.703 254096 DEBUG nova.virt.libvirt.imagebackend [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Selected location: {'url': 'rbd://d82baeae-c742-50a4-b8f6-b5257c8a2c92/images/e39c28b6-49f5-4073-a746-38050d6a700b/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Nov 25 16:58:40 compute-0 nova_compute[254092]: 2025-11-25 16:58:40.705 254096 DEBUG nova.storage.rbd_utils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] cloning images/e39c28b6-49f5-4073-a746-38050d6a700b@snap to None/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 16:58:40 compute-0 nova_compute[254092]: 2025-11-25 16:58:40.763 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089905.7616975, 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:58:40 compute-0 nova_compute[254092]: 2025-11-25 16:58:40.764 254096 INFO nova.compute.manager [-] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] VM Stopped (Lifecycle Event)
Nov 25 16:58:40 compute-0 nova_compute[254092]: 2025-11-25 16:58:40.782 254096 DEBUG nova.compute.manager [None req-fa11f7c4-d102-49d1-baac-aa4bc034cad4 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:58:40 compute-0 nova_compute[254092]: 2025-11-25 16:58:40.829 254096 DEBUG oslo_concurrency.lockutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "3cd7afdb8044e756124699d2a63eed57978d47cf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.512s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:58:40 compute-0 nova_compute[254092]: 2025-11-25 16:58:40.967 254096 DEBUG nova.objects.instance [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lazy-loading 'migration_context' on Instance uuid 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:58:41 compute-0 nova_compute[254092]: 2025-11-25 16:58:41.025 254096 DEBUG nova.storage.rbd_utils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] flattening vms/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 16:58:41 compute-0 nova_compute[254092]: 2025-11-25 16:58:41.456 254096 DEBUG nova.virt.libvirt.driver [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Image rbd:vms/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Nov 25 16:58:41 compute-0 nova_compute[254092]: 2025-11-25 16:58:41.456 254096 DEBUG nova.virt.libvirt.driver [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:58:41 compute-0 nova_compute[254092]: 2025-11-25 16:58:41.457 254096 DEBUG nova.virt.libvirt.driver [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Ensure instance console log exists: /var/lib/nova/instances/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:58:41 compute-0 nova_compute[254092]: 2025-11-25 16:58:41.457 254096 DEBUG oslo_concurrency.lockutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:58:41 compute-0 nova_compute[254092]: 2025-11-25 16:58:41.457 254096 DEBUG oslo_concurrency.lockutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:58:41 compute-0 nova_compute[254092]: 2025-11-25 16:58:41.457 254096 DEBUG oslo_concurrency.lockutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:58:41 compute-0 nova_compute[254092]: 2025-11-25 16:58:41.459 254096 DEBUG nova.virt.libvirt.driver [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Start _get_guest_xml network_info=[{"id": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "address": "fa:16:3e:9e:02:f1", "network": {"id": "6c62671a-f9ae-4033-9c32-b04ccb3e0de4", "bridge": "br-int", "label": "tempest-TestShelveInstance-32572327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fef68bf8cf647f89586309d548d4bd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75edff1b-5c", "ovs_interfaceid": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-25T16:58:23Z,direct_url=<?>,disk_format='raw',id=e39c28b6-49f5-4073-a746-38050d6a700b,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-1491050401-shelved',owner='0fef68bf8cf647f89586309d548d4bd7',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-25T16:58:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:58:41 compute-0 nova_compute[254092]: 2025-11-25 16:58:41.463 254096 WARNING nova.virt.libvirt.driver [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:58:41 compute-0 nova_compute[254092]: 2025-11-25 16:58:41.467 254096 DEBUG nova.virt.libvirt.host [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:58:41 compute-0 nova_compute[254092]: 2025-11-25 16:58:41.467 254096 DEBUG nova.virt.libvirt.host [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:58:41 compute-0 nova_compute[254092]: 2025-11-25 16:58:41.470 254096 DEBUG nova.virt.libvirt.host [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:58:41 compute-0 nova_compute[254092]: 2025-11-25 16:58:41.471 254096 DEBUG nova.virt.libvirt.host [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:58:41 compute-0 nova_compute[254092]: 2025-11-25 16:58:41.471 254096 DEBUG nova.virt.libvirt.driver [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:58:41 compute-0 nova_compute[254092]: 2025-11-25 16:58:41.471 254096 DEBUG nova.virt.hardware [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-25T16:58:23Z,direct_url=<?>,disk_format='raw',id=e39c28b6-49f5-4073-a746-38050d6a700b,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-1491050401-shelved',owner='0fef68bf8cf647f89586309d548d4bd7',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-25T16:58:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:58:41 compute-0 nova_compute[254092]: 2025-11-25 16:58:41.472 254096 DEBUG nova.virt.hardware [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:58:41 compute-0 nova_compute[254092]: 2025-11-25 16:58:41.472 254096 DEBUG nova.virt.hardware [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:58:41 compute-0 nova_compute[254092]: 2025-11-25 16:58:41.472 254096 DEBUG nova.virt.hardware [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:58:41 compute-0 nova_compute[254092]: 2025-11-25 16:58:41.472 254096 DEBUG nova.virt.hardware [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:58:41 compute-0 nova_compute[254092]: 2025-11-25 16:58:41.472 254096 DEBUG nova.virt.hardware [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:58:41 compute-0 nova_compute[254092]: 2025-11-25 16:58:41.472 254096 DEBUG nova.virt.hardware [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:58:41 compute-0 nova_compute[254092]: 2025-11-25 16:58:41.473 254096 DEBUG nova.virt.hardware [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:58:41 compute-0 nova_compute[254092]: 2025-11-25 16:58:41.473 254096 DEBUG nova.virt.hardware [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:58:41 compute-0 nova_compute[254092]: 2025-11-25 16:58:41.473 254096 DEBUG nova.virt.hardware [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:58:41 compute-0 nova_compute[254092]: 2025-11-25 16:58:41.473 254096 DEBUG nova.virt.hardware [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:58:41 compute-0 nova_compute[254092]: 2025-11-25 16:58:41.473 254096 DEBUG nova.objects.instance [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:58:41 compute-0 nova_compute[254092]: 2025-11-25 16:58:41.487 254096 DEBUG oslo_concurrency.processutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:58:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:58:41 compute-0 nova_compute[254092]: 2025-11-25 16:58:41.536 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:58:41 compute-0 nova_compute[254092]: 2025-11-25 16:58:41.537 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 16:58:41 compute-0 sudo[374557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:58:41 compute-0 sudo[374557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:58:41 compute-0 ceph-mon[74985]: pgmap v2252: 321 pgs: 321 active+clean; 120 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 575 KiB/s rd, 2.5 MiB/s wr, 84 op/s
Nov 25 16:58:41 compute-0 sudo[374557]: pam_unix(sudo:session): session closed for user root
Nov 25 16:58:41 compute-0 nova_compute[254092]: 2025-11-25 16:58:41.574 254096 DEBUG nova.network.neutron [req-bce1a306-05bb-47ad-9fc0-0fbce95411ce req-1392e996-cca7-4817-858d-beaf2a574811 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Updated VIF entry in instance network info cache for port 75edff1b-5ceb-4f80-befe-e1a5ec106382. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:58:41 compute-0 nova_compute[254092]: 2025-11-25 16:58:41.574 254096 DEBUG nova.network.neutron [req-bce1a306-05bb-47ad-9fc0-0fbce95411ce req-1392e996-cca7-4817-858d-beaf2a574811 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Updating instance_info_cache with network_info: [{"id": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "address": "fa:16:3e:9e:02:f1", "network": {"id": "6c62671a-f9ae-4033-9c32-b04ccb3e0de4", "bridge": "br-int", "label": "tempest-TestShelveInstance-32572327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fef68bf8cf647f89586309d548d4bd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75edff1b-5c", "ovs_interfaceid": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:58:41 compute-0 nova_compute[254092]: 2025-11-25 16:58:41.588 254096 DEBUG oslo_concurrency.lockutils [req-bce1a306-05bb-47ad-9fc0-0fbce95411ce req-1392e996-cca7-4817-858d-beaf2a574811 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:58:41 compute-0 sudo[374582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:58:41 compute-0 sudo[374582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:58:41 compute-0 sudo[374582]: pam_unix(sudo:session): session closed for user root
Nov 25 16:58:41 compute-0 sudo[374625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:58:41 compute-0 sudo[374625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:58:41 compute-0 sudo[374625]: pam_unix(sudo:session): session closed for user root
Nov 25 16:58:41 compute-0 sudo[374651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 25 16:58:41 compute-0 sudo[374651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:58:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:58:41 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4207005865' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:58:41 compute-0 nova_compute[254092]: 2025-11-25 16:58:41.960 254096 DEBUG oslo_concurrency.processutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:58:41 compute-0 nova_compute[254092]: 2025-11-25 16:58:41.980 254096 DEBUG nova.storage.rbd_utils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] rbd image 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:58:41 compute-0 nova_compute[254092]: 2025-11-25 16:58:41.985 254096 DEBUG oslo_concurrency.processutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:58:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2253: 321 pgs: 321 active+clean; 120 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 1.4 KiB/s wr, 44 op/s
Nov 25 16:58:42 compute-0 podman[374771]: 2025-11-25 16:58:42.300929374 +0000 UTC m=+0.160994506 container exec 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:58:42 compute-0 podman[374771]: 2025-11-25 16:58:42.388789477 +0000 UTC m=+0.248854619 container exec_died 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 16:58:42 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:58:42 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2018880638' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:58:42 compute-0 nova_compute[254092]: 2025-11-25 16:58:42.455 254096 DEBUG oslo_concurrency.processutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:58:42 compute-0 nova_compute[254092]: 2025-11-25 16:58:42.459 254096 DEBUG nova.virt.libvirt.vif [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:57:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1491050401',display_name='tempest-TestShelveInstance-server-1491050401',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1491050401',id=110,image_ref='e39c28b6-49f5-4073-a746-38050d6a700b',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-1027394080',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:57:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='0fef68bf8cf647f89586309d548d4bd7',ramdisk_id='',reservation_id='r-8exg2mmt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image
_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-535396087',owner_user_name='tempest-TestShelveInstance-535396087-project-member',shelved_at='2025-11-25T16:58:31.199734',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='e39c28b6-49f5-4073-a746-38050d6a700b'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:58:37Z,user_data=None,user_id='2a830f6b7532459380b24ae0297b12bb',uuid=5d0e35bb-8f66-41f8-81fc-54e1b53c4dda,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "address": "fa:16:3e:9e:02:f1", "network": {"id": "6c62671a-f9ae-4033-9c32-b04ccb3e0de4", "bridge": "br-int", "label": "tempest-TestShelveInstance-32572327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fef68bf8cf647f89586309d548d4bd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75edff1b-5c", "ovs_interfaceid": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:58:42 compute-0 nova_compute[254092]: 2025-11-25 16:58:42.460 254096 DEBUG nova.network.os_vif_util [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Converting VIF {"id": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "address": "fa:16:3e:9e:02:f1", "network": {"id": "6c62671a-f9ae-4033-9c32-b04ccb3e0de4", "bridge": "br-int", "label": "tempest-TestShelveInstance-32572327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fef68bf8cf647f89586309d548d4bd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75edff1b-5c", "ovs_interfaceid": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:58:42 compute-0 nova_compute[254092]: 2025-11-25 16:58:42.461 254096 DEBUG nova.network.os_vif_util [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:02:f1,bridge_name='br-int',has_traffic_filtering=True,id=75edff1b-5ceb-4f80-befe-e1a5ec106382,network=Network(6c62671a-f9ae-4033-9c32-b04ccb3e0de4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75edff1b-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:58:42 compute-0 nova_compute[254092]: 2025-11-25 16:58:42.463 254096 DEBUG nova.objects.instance [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:58:42 compute-0 nova_compute[254092]: 2025-11-25 16:58:42.475 254096 DEBUG nova.virt.libvirt.driver [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:58:42 compute-0 nova_compute[254092]:   <uuid>5d0e35bb-8f66-41f8-81fc-54e1b53c4dda</uuid>
Nov 25 16:58:42 compute-0 nova_compute[254092]:   <name>instance-0000006e</name>
Nov 25 16:58:42 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:58:42 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:58:42 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:58:42 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:       <nova:name>tempest-TestShelveInstance-server-1491050401</nova:name>
Nov 25 16:58:42 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:58:41</nova:creationTime>
Nov 25 16:58:42 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:58:42 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:58:42 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:58:42 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:58:42 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:58:42 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:58:42 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:58:42 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:58:42 compute-0 nova_compute[254092]:         <nova:user uuid="2a830f6b7532459380b24ae0297b12bb">tempest-TestShelveInstance-535396087-project-member</nova:user>
Nov 25 16:58:42 compute-0 nova_compute[254092]:         <nova:project uuid="0fef68bf8cf647f89586309d548d4bd7">tempest-TestShelveInstance-535396087</nova:project>
Nov 25 16:58:42 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:58:42 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="e39c28b6-49f5-4073-a746-38050d6a700b"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:58:42 compute-0 nova_compute[254092]:         <nova:port uuid="75edff1b-5ceb-4f80-befe-e1a5ec106382">
Nov 25 16:58:42 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:58:42 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:58:42 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:58:42 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:58:42 compute-0 nova_compute[254092]:     <system>
Nov 25 16:58:42 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:58:42 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:58:42 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:58:42 compute-0 nova_compute[254092]:       <entry name="serial">5d0e35bb-8f66-41f8-81fc-54e1b53c4dda</entry>
Nov 25 16:58:42 compute-0 nova_compute[254092]:       <entry name="uuid">5d0e35bb-8f66-41f8-81fc-54e1b53c4dda</entry>
Nov 25 16:58:42 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     </system>
Nov 25 16:58:42 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:58:42 compute-0 nova_compute[254092]:   <os>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:   </os>
Nov 25 16:58:42 compute-0 nova_compute[254092]:   <features>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:   </features>
Nov 25 16:58:42 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:58:42 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:58:42 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:58:42 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:58:42 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:58:42 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk">
Nov 25 16:58:42 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:       </source>
Nov 25 16:58:42 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:58:42 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:58:42 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:58:42 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk.config">
Nov 25 16:58:42 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:       </source>
Nov 25 16:58:42 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:58:42 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:58:42 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:58:42 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:9e:02:f1"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:       <target dev="tap75edff1b-5c"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:58:42 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda/console.log" append="off"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     <video>
Nov 25 16:58:42 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     </video>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     <input type="keyboard" bus="usb"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:58:42 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:58:42 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:58:42 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:58:42 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:58:42 compute-0 nova_compute[254092]: </domain>
Nov 25 16:58:42 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:58:42 compute-0 nova_compute[254092]: 2025-11-25 16:58:42.476 254096 DEBUG nova.compute.manager [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Preparing to wait for external event network-vif-plugged-75edff1b-5ceb-4f80-befe-e1a5ec106382 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:58:42 compute-0 nova_compute[254092]: 2025-11-25 16:58:42.476 254096 DEBUG oslo_concurrency.lockutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Acquiring lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:58:42 compute-0 nova_compute[254092]: 2025-11-25 16:58:42.476 254096 DEBUG oslo_concurrency.lockutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:58:42 compute-0 nova_compute[254092]: 2025-11-25 16:58:42.476 254096 DEBUG oslo_concurrency.lockutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:58:42 compute-0 nova_compute[254092]: 2025-11-25 16:58:42.477 254096 DEBUG nova.virt.libvirt.vif [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:57:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1491050401',display_name='tempest-TestShelveInstance-server-1491050401',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1491050401',id=110,image_ref='e39c28b6-49f5-4073-a746-38050d6a700b',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-1027394080',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:57:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='0fef68bf8cf647f89586309d548d4bd7',ramdisk_id='',reservation_id='r-8exg2mmt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-535396087',owner_user_name='tempest-TestShelveInstance-535396087-project-member',shelved_at='2025-11-25T16:58:31.199734',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='e39c28b6-49f5-4073-a746-38050d6a700b'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:58:37Z,user_data=None,user_id='2a830f6b7532459380b24ae0297b12bb',uuid=5d0e35bb-8f66-41f8-81fc-54e1b53c4dda,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "address": "fa:16:3e:9e:02:f1", "network": {"id": "6c62671a-f9ae-4033-9c32-b04ccb3e0de4", "bridge": "br-int", "label": "tempest-TestShelveInstance-32572327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fef68bf8cf647f89586309d548d4bd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75edff1b-5c", "ovs_interfaceid": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:58:42 compute-0 nova_compute[254092]: 2025-11-25 16:58:42.477 254096 DEBUG nova.network.os_vif_util [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Converting VIF {"id": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "address": "fa:16:3e:9e:02:f1", "network": {"id": "6c62671a-f9ae-4033-9c32-b04ccb3e0de4", "bridge": "br-int", "label": "tempest-TestShelveInstance-32572327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fef68bf8cf647f89586309d548d4bd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75edff1b-5c", "ovs_interfaceid": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:58:42 compute-0 nova_compute[254092]: 2025-11-25 16:58:42.478 254096 DEBUG nova.network.os_vif_util [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:02:f1,bridge_name='br-int',has_traffic_filtering=True,id=75edff1b-5ceb-4f80-befe-e1a5ec106382,network=Network(6c62671a-f9ae-4033-9c32-b04ccb3e0de4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75edff1b-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:58:42 compute-0 nova_compute[254092]: 2025-11-25 16:58:42.478 254096 DEBUG os_vif [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:02:f1,bridge_name='br-int',has_traffic_filtering=True,id=75edff1b-5ceb-4f80-befe-e1a5ec106382,network=Network(6c62671a-f9ae-4033-9c32-b04ccb3e0de4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75edff1b-5c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:58:42 compute-0 nova_compute[254092]: 2025-11-25 16:58:42.478 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:58:42 compute-0 nova_compute[254092]: 2025-11-25 16:58:42.479 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:58:42 compute-0 nova_compute[254092]: 2025-11-25 16:58:42.479 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:58:42 compute-0 nova_compute[254092]: 2025-11-25 16:58:42.481 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:58:42 compute-0 nova_compute[254092]: 2025-11-25 16:58:42.481 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap75edff1b-5c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:58:42 compute-0 nova_compute[254092]: 2025-11-25 16:58:42.482 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap75edff1b-5c, col_values=(('external_ids', {'iface-id': '75edff1b-5ceb-4f80-befe-e1a5ec106382', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9e:02:f1', 'vm-uuid': '5d0e35bb-8f66-41f8-81fc-54e1b53c4dda'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:58:42 compute-0 nova_compute[254092]: 2025-11-25 16:58:42.483 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:58:42 compute-0 NetworkManager[48891]: <info>  [1764089922.4841] manager: (tap75edff1b-5c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/464)
Nov 25 16:58:42 compute-0 nova_compute[254092]: 2025-11-25 16:58:42.485 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:58:42 compute-0 nova_compute[254092]: 2025-11-25 16:58:42.489 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:58:42 compute-0 nova_compute[254092]: 2025-11-25 16:58:42.490 254096 INFO os_vif [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:02:f1,bridge_name='br-int',has_traffic_filtering=True,id=75edff1b-5ceb-4f80-befe-e1a5ec106382,network=Network(6c62671a-f9ae-4033-9c32-b04ccb3e0de4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75edff1b-5c')
Nov 25 16:58:42 compute-0 nova_compute[254092]: 2025-11-25 16:58:42.535 254096 DEBUG nova.virt.libvirt.driver [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:58:42 compute-0 nova_compute[254092]: 2025-11-25 16:58:42.535 254096 DEBUG nova.virt.libvirt.driver [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:58:42 compute-0 nova_compute[254092]: 2025-11-25 16:58:42.536 254096 DEBUG nova.virt.libvirt.driver [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] No VIF found with MAC fa:16:3e:9e:02:f1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:58:42 compute-0 nova_compute[254092]: 2025-11-25 16:58:42.536 254096 INFO nova.virt.libvirt.driver [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Using config drive
Nov 25 16:58:42 compute-0 nova_compute[254092]: 2025-11-25 16:58:42.561 254096 DEBUG nova.storage.rbd_utils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] rbd image 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:58:42 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4207005865' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:58:42 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2018880638' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:58:42 compute-0 nova_compute[254092]: 2025-11-25 16:58:42.580 254096 DEBUG nova.objects.instance [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:58:42 compute-0 nova_compute[254092]: 2025-11-25 16:58:42.619 254096 DEBUG nova.objects.instance [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lazy-loading 'keypairs' on Instance uuid 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:58:43 compute-0 nova_compute[254092]: 2025-11-25 16:58:43.139 254096 INFO nova.virt.libvirt.driver [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Creating config drive at /var/lib/nova/instances/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda/disk.config
Nov 25 16:58:43 compute-0 nova_compute[254092]: 2025-11-25 16:58:43.145 254096 DEBUG oslo_concurrency.processutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkkccyspj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:58:43 compute-0 sudo[374651]: pam_unix(sudo:session): session closed for user root
Nov 25 16:58:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:58:43 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:58:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:58:43 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:58:43 compute-0 sudo[374965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:58:43 compute-0 sudo[374965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:58:43 compute-0 sudo[374965]: pam_unix(sudo:session): session closed for user root
Nov 25 16:58:43 compute-0 sudo[374992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:58:43 compute-0 nova_compute[254092]: 2025-11-25 16:58:43.318 254096 DEBUG oslo_concurrency.processutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkkccyspj" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:58:43 compute-0 sudo[374992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:58:43 compute-0 sudo[374992]: pam_unix(sudo:session): session closed for user root
Nov 25 16:58:43 compute-0 nova_compute[254092]: 2025-11-25 16:58:43.345 254096 DEBUG nova.storage.rbd_utils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] rbd image 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:58:43 compute-0 nova_compute[254092]: 2025-11-25 16:58:43.350 254096 DEBUG oslo_concurrency.processutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda/disk.config 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:58:43 compute-0 sudo[375024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:58:43 compute-0 sudo[375024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:58:43 compute-0 sudo[375024]: pam_unix(sudo:session): session closed for user root
Nov 25 16:58:43 compute-0 sudo[375061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 16:58:43 compute-0 sudo[375061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:58:43 compute-0 nova_compute[254092]: 2025-11-25 16:58:43.513 254096 DEBUG oslo_concurrency.processutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda/disk.config 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:58:43 compute-0 nova_compute[254092]: 2025-11-25 16:58:43.514 254096 INFO nova.virt.libvirt.driver [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Deleting local config drive /var/lib/nova/instances/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda/disk.config because it was imported into RBD.
Nov 25 16:58:43 compute-0 kernel: tap75edff1b-5c: entered promiscuous mode
Nov 25 16:58:43 compute-0 NetworkManager[48891]: <info>  [1764089923.5754] manager: (tap75edff1b-5c): new Tun device (/org/freedesktop/NetworkManager/Devices/465)
Nov 25 16:58:43 compute-0 ovn_controller[153477]: 2025-11-25T16:58:43Z|01138|binding|INFO|Claiming lport 75edff1b-5ceb-4f80-befe-e1a5ec106382 for this chassis.
Nov 25 16:58:43 compute-0 ovn_controller[153477]: 2025-11-25T16:58:43Z|01139|binding|INFO|75edff1b-5ceb-4f80-befe-e1a5ec106382: Claiming fa:16:3e:9e:02:f1 10.100.0.9
Nov 25 16:58:43 compute-0 nova_compute[254092]: 2025-11-25 16:58:43.579 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.586 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:02:f1 10.100.0.9'], port_security=['fa:16:3e:9e:02:f1 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '5d0e35bb-8f66-41f8-81fc-54e1b53c4dda', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c62671a-f9ae-4033-9c32-b04ccb3e0de4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0fef68bf8cf647f89586309d548d4bd7', 'neutron:revision_number': '7', 'neutron:security_group_ids': '749158e6-7a67-48ec-831f-8c4fe94e7a88', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.200'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=91b4ffe7-2c89-49f5-b8f3-271ba9b0feff, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=75edff1b-5ceb-4f80-befe-e1a5ec106382) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.587 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 75edff1b-5ceb-4f80-befe-e1a5ec106382 in datapath 6c62671a-f9ae-4033-9c32-b04ccb3e0de4 bound to our chassis
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.588 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6c62671a-f9ae-4033-9c32-b04ccb3e0de4
Nov 25 16:58:43 compute-0 ovn_controller[153477]: 2025-11-25T16:58:43Z|01140|binding|INFO|Setting lport 75edff1b-5ceb-4f80-befe-e1a5ec106382 ovn-installed in OVS
Nov 25 16:58:43 compute-0 ovn_controller[153477]: 2025-11-25T16:58:43Z|01141|binding|INFO|Setting lport 75edff1b-5ceb-4f80-befe-e1a5ec106382 up in Southbound
Nov 25 16:58:43 compute-0 nova_compute[254092]: 2025-11-25 16:58:43.604 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.603 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e354db62-1b5f-4fba-8751-7d24045c410c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.604 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6c62671a-f1 in ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.606 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6c62671a-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.606 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d68c082a-c3a9-4be8-a1ef-1bd4be78ce27]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.607 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0983896b-5eb8-4aa5-8da8-8adb419378b5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:58:43 compute-0 ceph-mon[74985]: pgmap v2253: 321 pgs: 321 active+clean; 120 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 1.4 KiB/s wr, 44 op/s
Nov 25 16:58:43 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:58:43 compute-0 nova_compute[254092]: 2025-11-25 16:58:43.610 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:58:43 compute-0 systemd-udevd[375128]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:58:43 compute-0 systemd-machined[216343]: New machine qemu-142-instance-0000006e.
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.620 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[5fa1609c-ac4a-486d-ae49-51d5de1e7488]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:58:43 compute-0 NetworkManager[48891]: <info>  [1764089923.6327] device (tap75edff1b-5c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:58:43 compute-0 systemd[1]: Started Virtual Machine qemu-142-instance-0000006e.
Nov 25 16:58:43 compute-0 NetworkManager[48891]: <info>  [1764089923.6354] device (tap75edff1b-5c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.648 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ae661d72-1334-40d9-ba0e-87a5a4f16883]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.681 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f2c5d72b-878e-461b-9d23-18f9a67015f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.686 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5813ad05-8f3e-4496-8300-0b1525e03d22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:58:43 compute-0 systemd-udevd[375134]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:58:43 compute-0 NetworkManager[48891]: <info>  [1764089923.6877] manager: (tap6c62671a-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/466)
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.718 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[070a519b-029c-4c0d-a150-32313a207a0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.721 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1e9dbc77-12c6-42f4-845f-6a567ccea8d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:58:43 compute-0 NetworkManager[48891]: <info>  [1764089923.7429] device (tap6c62671a-f0): carrier: link connected
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.749 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[abf020db-6bdc-4898-9ebf-81d60196c9cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.773 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[15f12d89-d6cd-404d-bc85-a98c2e8b9793]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c62671a-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:c8:26'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 335], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 638133, 'reachable_time': 26386, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375165, 'error': None, 'target': 'ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.792 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[37fd37bb-4d12-4cf3-b4bf-ff4de20ec99a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe44:c826'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 638133, 'tstamp': 638133}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375166, 'error': None, 'target': 'ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.812 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ba455f16-ca8a-4400-a923-f5d3a6cf3970]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c62671a-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:c8:26'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 335], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 638133, 'reachable_time': 26386, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 375167, 'error': None, 'target': 'ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.850 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e01d97cc-ad98-4ae4-8d2a-ca643e2f4295]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:58:43 compute-0 nova_compute[254092]: 2025-11-25 16:58:43.876 254096 DEBUG nova.compute.manager [req-c027087c-ecc6-4256-9bae-8518010583e2 req-6ce7dfa5-b4dd-48e4-b957-1ffbe690eebb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Received event network-vif-plugged-75edff1b-5ceb-4f80-befe-e1a5ec106382 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:58:43 compute-0 nova_compute[254092]: 2025-11-25 16:58:43.876 254096 DEBUG oslo_concurrency.lockutils [req-c027087c-ecc6-4256-9bae-8518010583e2 req-6ce7dfa5-b4dd-48e4-b957-1ffbe690eebb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:58:43 compute-0 nova_compute[254092]: 2025-11-25 16:58:43.877 254096 DEBUG oslo_concurrency.lockutils [req-c027087c-ecc6-4256-9bae-8518010583e2 req-6ce7dfa5-b4dd-48e4-b957-1ffbe690eebb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:58:43 compute-0 nova_compute[254092]: 2025-11-25 16:58:43.877 254096 DEBUG oslo_concurrency.lockutils [req-c027087c-ecc6-4256-9bae-8518010583e2 req-6ce7dfa5-b4dd-48e4-b957-1ffbe690eebb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:58:43 compute-0 nova_compute[254092]: 2025-11-25 16:58:43.877 254096 DEBUG nova.compute.manager [req-c027087c-ecc6-4256-9bae-8518010583e2 req-6ce7dfa5-b4dd-48e4-b957-1ffbe690eebb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Processing event network-vif-plugged-75edff1b-5ceb-4f80-befe-e1a5ec106382 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.917 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0705fbee-37de-492e-815d-6ae849c19831]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.919 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c62671a-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.920 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.920 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c62671a-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:58:43 compute-0 nova_compute[254092]: 2025-11-25 16:58:43.922 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:58:43 compute-0 NetworkManager[48891]: <info>  [1764089923.9234] manager: (tap6c62671a-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/467)
Nov 25 16:58:43 compute-0 kernel: tap6c62671a-f0: entered promiscuous mode
Nov 25 16:58:43 compute-0 nova_compute[254092]: 2025-11-25 16:58:43.924 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.927 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6c62671a-f0, col_values=(('external_ids', {'iface-id': 'f4863bb8-2150-4d2f-9927-dcfc70177f3b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:58:43 compute-0 nova_compute[254092]: 2025-11-25 16:58:43.929 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:58:43 compute-0 ovn_controller[153477]: 2025-11-25T16:58:43Z|01142|binding|INFO|Releasing lport f4863bb8-2150-4d2f-9927-dcfc70177f3b from this chassis (sb_readonly=0)
Nov 25 16:58:43 compute-0 nova_compute[254092]: 2025-11-25 16:58:43.929 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.931 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6c62671a-f9ae-4033-9c32-b04ccb3e0de4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6c62671a-f9ae-4033-9c32-b04ccb3e0de4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.932 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[07de8174-6660-48ed-842e-ef6e490ff7ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.933 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-6c62671a-f9ae-4033-9c32-b04ccb3e0de4
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/6c62671a-f9ae-4033-9c32-b04ccb3e0de4.pid.haproxy
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 6c62671a-f9ae-4033-9c32-b04ccb3e0de4
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:58:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:58:43.935 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4', 'env', 'PROCESS_TAG=haproxy-6c62671a-f9ae-4033-9c32-b04ccb3e0de4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6c62671a-f9ae-4033-9c32-b04ccb3e0de4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:58:43 compute-0 nova_compute[254092]: 2025-11-25 16:58:43.945 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:58:43 compute-0 sudo[375061]: pam_unix(sudo:session): session closed for user root
Nov 25 16:58:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:58:43 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:58:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 16:58:43 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:58:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 16:58:43 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:58:44 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev eda2d163-ce61-45a6-99b6-7746e5f14770 does not exist
Nov 25 16:58:44 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev a9743d25-e6f2-4ed8-b35d-ff697a66337a does not exist
Nov 25 16:58:44 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 954c77ab-833d-4fd7-a532-1434444020b7 does not exist
Nov 25 16:58:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 16:58:44 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:58:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 16:58:44 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:58:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:58:44 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:58:44 compute-0 sudo[375191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:58:44 compute-0 sudo[375191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:58:44 compute-0 sudo[375191]: pam_unix(sudo:session): session closed for user root
Nov 25 16:58:44 compute-0 sudo[375250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:58:44 compute-0 sudo[375250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:58:44 compute-0 sudo[375250]: pam_unix(sudo:session): session closed for user root
Nov 25 16:58:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2254: 321 pgs: 321 active+clean; 120 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 1.4 KiB/s wr, 44 op/s
Nov 25 16:58:44 compute-0 nova_compute[254092]: 2025-11-25 16:58:44.197 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089924.1967454, 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:58:44 compute-0 nova_compute[254092]: 2025-11-25 16:58:44.197 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] VM Started (Lifecycle Event)
Nov 25 16:58:44 compute-0 nova_compute[254092]: 2025-11-25 16:58:44.200 254096 DEBUG nova.compute.manager [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:58:44 compute-0 nova_compute[254092]: 2025-11-25 16:58:44.203 254096 DEBUG nova.virt.libvirt.driver [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:58:44 compute-0 nova_compute[254092]: 2025-11-25 16:58:44.205 254096 INFO nova.virt.libvirt.driver [-] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Instance spawned successfully.
Nov 25 16:58:44 compute-0 sudo[375282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:58:44 compute-0 sudo[375282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:58:44 compute-0 sudo[375282]: pam_unix(sudo:session): session closed for user root
Nov 25 16:58:44 compute-0 nova_compute[254092]: 2025-11-25 16:58:44.222 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:58:44 compute-0 nova_compute[254092]: 2025-11-25 16:58:44.226 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:58:44 compute-0 nova_compute[254092]: 2025-11-25 16:58:44.252 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:58:44 compute-0 nova_compute[254092]: 2025-11-25 16:58:44.252 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089924.1969492, 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:58:44 compute-0 nova_compute[254092]: 2025-11-25 16:58:44.253 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] VM Paused (Lifecycle Event)
Nov 25 16:58:44 compute-0 nova_compute[254092]: 2025-11-25 16:58:44.276 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:58:44 compute-0 nova_compute[254092]: 2025-11-25 16:58:44.280 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089924.2031713, 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:58:44 compute-0 nova_compute[254092]: 2025-11-25 16:58:44.280 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] VM Resumed (Lifecycle Event)
Nov 25 16:58:44 compute-0 sudo[375321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 16:58:44 compute-0 sudo[375321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:58:44 compute-0 nova_compute[254092]: 2025-11-25 16:58:44.297 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:58:44 compute-0 nova_compute[254092]: 2025-11-25 16:58:44.303 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:58:44 compute-0 podman[375333]: 2025-11-25 16:58:44.312411278 +0000 UTC m=+0.057265180 container create 37ea73a5b76932edd6618993af23071beac5e9dfbce74aa295ead9905d869697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 16:58:44 compute-0 nova_compute[254092]: 2025-11-25 16:58:44.324 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:58:44 compute-0 systemd[1]: Started libpod-conmon-37ea73a5b76932edd6618993af23071beac5e9dfbce74aa295ead9905d869697.scope.
Nov 25 16:58:44 compute-0 podman[375333]: 2025-11-25 16:58:44.279417499 +0000 UTC m=+0.024271421 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:58:44 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:58:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/594d52d0fc086d39ee392d5c99998b632b0abad46d43aa1aa2121c7e09f49de6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:58:44 compute-0 podman[375333]: 2025-11-25 16:58:44.402272405 +0000 UTC m=+0.147126327 container init 37ea73a5b76932edd6618993af23071beac5e9dfbce74aa295ead9905d869697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 25 16:58:44 compute-0 podman[375333]: 2025-11-25 16:58:44.408181476 +0000 UTC m=+0.153035378 container start 37ea73a5b76932edd6618993af23071beac5e9dfbce74aa295ead9905d869697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:58:44 compute-0 neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4[375366]: [NOTICE]   (375372) : New worker (375379) forked
Nov 25 16:58:44 compute-0 neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4[375366]: [NOTICE]   (375372) : Loading success.
Nov 25 16:58:44 compute-0 nova_compute[254092]: 2025-11-25 16:58:44.532 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:58:44 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:58:44 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:58:44 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:58:44 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:58:44 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:58:44 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:58:44 compute-0 podman[375421]: 2025-11-25 16:58:44.636953577 +0000 UTC m=+0.035274282 container create 3eaee56156d3b3d15915ae9cd2cdd008cd3b1852b8b2d97b3da2448ab795dc3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:58:44 compute-0 systemd[1]: Started libpod-conmon-3eaee56156d3b3d15915ae9cd2cdd008cd3b1852b8b2d97b3da2448ab795dc3e.scope.
Nov 25 16:58:44 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:58:44 compute-0 podman[375421]: 2025-11-25 16:58:44.622517954 +0000 UTC m=+0.020838679 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:58:44 compute-0 podman[375421]: 2025-11-25 16:58:44.731696588 +0000 UTC m=+0.130017313 container init 3eaee56156d3b3d15915ae9cd2cdd008cd3b1852b8b2d97b3da2448ab795dc3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hopper, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 16:58:44 compute-0 podman[375421]: 2025-11-25 16:58:44.741821153 +0000 UTC m=+0.140141858 container start 3eaee56156d3b3d15915ae9cd2cdd008cd3b1852b8b2d97b3da2448ab795dc3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 16:58:44 compute-0 podman[375421]: 2025-11-25 16:58:44.7453637 +0000 UTC m=+0.143684405 container attach 3eaee56156d3b3d15915ae9cd2cdd008cd3b1852b8b2d97b3da2448ab795dc3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:58:44 compute-0 objective_hopper[375437]: 167 167
Nov 25 16:58:44 compute-0 systemd[1]: libpod-3eaee56156d3b3d15915ae9cd2cdd008cd3b1852b8b2d97b3da2448ab795dc3e.scope: Deactivated successfully.
Nov 25 16:58:44 compute-0 podman[375421]: 2025-11-25 16:58:44.752752871 +0000 UTC m=+0.151073576 container died 3eaee56156d3b3d15915ae9cd2cdd008cd3b1852b8b2d97b3da2448ab795dc3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hopper, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:58:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e26bdeda77d984bece039e957e5016be6606975fba5abf64f676f3e066884b1-merged.mount: Deactivated successfully.
Nov 25 16:58:44 compute-0 podman[375421]: 2025-11-25 16:58:44.799380731 +0000 UTC m=+0.197701436 container remove 3eaee56156d3b3d15915ae9cd2cdd008cd3b1852b8b2d97b3da2448ab795dc3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 16:58:44 compute-0 systemd[1]: libpod-conmon-3eaee56156d3b3d15915ae9cd2cdd008cd3b1852b8b2d97b3da2448ab795dc3e.scope: Deactivated successfully.
Nov 25 16:58:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e282 do_prune osdmap full prune enabled
Nov 25 16:58:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e283 e283: 3 total, 3 up, 3 in
Nov 25 16:58:45 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e283: 3 total, 3 up, 3 in
Nov 25 16:58:45 compute-0 podman[375460]: 2025-11-25 16:58:45.027016981 +0000 UTC m=+0.073798211 container create 84f098177ff40a7e372c369d546dc9454fd88d0142be1132dac082af8024b8da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 16:58:45 compute-0 podman[375460]: 2025-11-25 16:58:44.999147102 +0000 UTC m=+0.045928312 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:58:45 compute-0 systemd[1]: Started libpod-conmon-84f098177ff40a7e372c369d546dc9454fd88d0142be1132dac082af8024b8da.scope.
Nov 25 16:58:45 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:58:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13df3c11c7e3e6474f6132da8dbcf1dcce617405676d4f1b633666ba425ccf2d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:58:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13df3c11c7e3e6474f6132da8dbcf1dcce617405676d4f1b633666ba425ccf2d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:58:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13df3c11c7e3e6474f6132da8dbcf1dcce617405676d4f1b633666ba425ccf2d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:58:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13df3c11c7e3e6474f6132da8dbcf1dcce617405676d4f1b633666ba425ccf2d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:58:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13df3c11c7e3e6474f6132da8dbcf1dcce617405676d4f1b633666ba425ccf2d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 16:58:45 compute-0 podman[375460]: 2025-11-25 16:58:45.156536749 +0000 UTC m=+0.203318019 container init 84f098177ff40a7e372c369d546dc9454fd88d0142be1132dac082af8024b8da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:58:45 compute-0 podman[375460]: 2025-11-25 16:58:45.165868823 +0000 UTC m=+0.212650053 container start 84f098177ff40a7e372c369d546dc9454fd88d0142be1132dac082af8024b8da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:58:45 compute-0 podman[375460]: 2025-11-25 16:58:45.179588367 +0000 UTC m=+0.226369607 container attach 84f098177ff40a7e372c369d546dc9454fd88d0142be1132dac082af8024b8da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:58:45 compute-0 nova_compute[254092]: 2025-11-25 16:58:45.459 254096 DEBUG nova.compute.manager [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:58:45 compute-0 nova_compute[254092]: 2025-11-25 16:58:45.787 254096 DEBUG oslo_concurrency.lockutils [None req-786c337e-8731-43d3-88cd-3e09414fd7c5 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 8.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:58:45 compute-0 nova_compute[254092]: 2025-11-25 16:58:45.799 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:58:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:58:45.800 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:58:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:58:45.802 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 16:58:46 compute-0 ceph-mon[74985]: pgmap v2254: 321 pgs: 321 active+clean; 120 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 1.4 KiB/s wr, 44 op/s
Nov 25 16:58:46 compute-0 ceph-mon[74985]: osdmap e283: 3 total, 3 up, 3 in
Nov 25 16:58:46 compute-0 nova_compute[254092]: 2025-11-25 16:58:46.039 254096 DEBUG nova.compute.manager [req-9810599e-c8e9-4369-988f-1784b8983258 req-57092a46-2468-4252-badd-cbbb25df5399 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Received event network-vif-plugged-75edff1b-5ceb-4f80-befe-e1a5ec106382 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:58:46 compute-0 nova_compute[254092]: 2025-11-25 16:58:46.040 254096 DEBUG oslo_concurrency.lockutils [req-9810599e-c8e9-4369-988f-1784b8983258 req-57092a46-2468-4252-badd-cbbb25df5399 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:58:46 compute-0 nova_compute[254092]: 2025-11-25 16:58:46.040 254096 DEBUG oslo_concurrency.lockutils [req-9810599e-c8e9-4369-988f-1784b8983258 req-57092a46-2468-4252-badd-cbbb25df5399 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:58:46 compute-0 nova_compute[254092]: 2025-11-25 16:58:46.040 254096 DEBUG oslo_concurrency.lockutils [req-9810599e-c8e9-4369-988f-1784b8983258 req-57092a46-2468-4252-badd-cbbb25df5399 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:58:46 compute-0 nova_compute[254092]: 2025-11-25 16:58:46.041 254096 DEBUG nova.compute.manager [req-9810599e-c8e9-4369-988f-1784b8983258 req-57092a46-2468-4252-badd-cbbb25df5399 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] No waiting events found dispatching network-vif-plugged-75edff1b-5ceb-4f80-befe-e1a5ec106382 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:58:46 compute-0 nova_compute[254092]: 2025-11-25 16:58:46.041 254096 WARNING nova.compute.manager [req-9810599e-c8e9-4369-988f-1784b8983258 req-57092a46-2468-4252-badd-cbbb25df5399 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Received unexpected event network-vif-plugged-75edff1b-5ceb-4f80-befe-e1a5ec106382 for instance with vm_state active and task_state None.
Nov 25 16:58:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2256: 321 pgs: 321 active+clean; 163 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 4.9 MiB/s wr, 144 op/s
Nov 25 16:58:46 compute-0 compassionate_pascal[375477]: --> passed data devices: 0 physical, 3 LVM
Nov 25 16:58:46 compute-0 compassionate_pascal[375477]: --> relative data size: 1.0
Nov 25 16:58:46 compute-0 compassionate_pascal[375477]: --> All data devices are unavailable
Nov 25 16:58:46 compute-0 systemd[1]: libpod-84f098177ff40a7e372c369d546dc9454fd88d0142be1132dac082af8024b8da.scope: Deactivated successfully.
Nov 25 16:58:46 compute-0 conmon[375477]: conmon 84f098177ff40a7e372c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-84f098177ff40a7e372c369d546dc9454fd88d0142be1132dac082af8024b8da.scope/container/memory.events
Nov 25 16:58:46 compute-0 podman[375460]: 2025-11-25 16:58:46.237595572 +0000 UTC m=+1.284376802 container died 84f098177ff40a7e372c369d546dc9454fd88d0142be1132dac082af8024b8da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_pascal, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 16:58:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-13df3c11c7e3e6474f6132da8dbcf1dcce617405676d4f1b633666ba425ccf2d-merged.mount: Deactivated successfully.
Nov 25 16:58:46 compute-0 podman[375460]: 2025-11-25 16:58:46.311767852 +0000 UTC m=+1.358549052 container remove 84f098177ff40a7e372c369d546dc9454fd88d0142be1132dac082af8024b8da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_pascal, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:58:46 compute-0 systemd[1]: libpod-conmon-84f098177ff40a7e372c369d546dc9454fd88d0142be1132dac082af8024b8da.scope: Deactivated successfully.
Nov 25 16:58:46 compute-0 sudo[375321]: pam_unix(sudo:session): session closed for user root
Nov 25 16:58:46 compute-0 sudo[375517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:58:46 compute-0 sudo[375517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:58:46 compute-0 sudo[375517]: pam_unix(sudo:session): session closed for user root
Nov 25 16:58:46 compute-0 sudo[375542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:58:46 compute-0 sudo[375542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:58:46 compute-0 sudo[375542]: pam_unix(sudo:session): session closed for user root
Nov 25 16:58:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:58:46 compute-0 sudo[375567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:58:46 compute-0 sudo[375567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:58:46 compute-0 sudo[375567]: pam_unix(sudo:session): session closed for user root
Nov 25 16:58:46 compute-0 sudo[375592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 16:58:46 compute-0 sudo[375592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:58:47 compute-0 podman[375658]: 2025-11-25 16:58:47.042757172 +0000 UTC m=+0.054583588 container create 71327b7aae782ae8a0ee4efbceb10e8f25c2cee38cca5f5380b95003454ff6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_archimedes, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:58:47 compute-0 systemd[1]: Started libpod-conmon-71327b7aae782ae8a0ee4efbceb10e8f25c2cee38cca5f5380b95003454ff6b4.scope.
Nov 25 16:58:47 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:58:47 compute-0 podman[375658]: 2025-11-25 16:58:47.023298562 +0000 UTC m=+0.035125008 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:58:47 compute-0 podman[375658]: 2025-11-25 16:58:47.13304237 +0000 UTC m=+0.144868846 container init 71327b7aae782ae8a0ee4efbceb10e8f25c2cee38cca5f5380b95003454ff6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_archimedes, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:58:47 compute-0 podman[375658]: 2025-11-25 16:58:47.140667128 +0000 UTC m=+0.152493524 container start 71327b7aae782ae8a0ee4efbceb10e8f25c2cee38cca5f5380b95003454ff6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_archimedes, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 16:58:47 compute-0 podman[375658]: 2025-11-25 16:58:47.143788323 +0000 UTC m=+0.155614739 container attach 71327b7aae782ae8a0ee4efbceb10e8f25c2cee38cca5f5380b95003454ff6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:58:47 compute-0 blissful_archimedes[375674]: 167 167
Nov 25 16:58:47 compute-0 podman[375658]: 2025-11-25 16:58:47.145589373 +0000 UTC m=+0.157415769 container died 71327b7aae782ae8a0ee4efbceb10e8f25c2cee38cca5f5380b95003454ff6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 16:58:47 compute-0 systemd[1]: libpod-71327b7aae782ae8a0ee4efbceb10e8f25c2cee38cca5f5380b95003454ff6b4.scope: Deactivated successfully.
Nov 25 16:58:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-20bf69ffaaa6489f32bf5df14855fff396475f3d272a9e156a0eccbc0cddf3a0-merged.mount: Deactivated successfully.
Nov 25 16:58:47 compute-0 podman[375658]: 2025-11-25 16:58:47.192031807 +0000 UTC m=+0.203858203 container remove 71327b7aae782ae8a0ee4efbceb10e8f25c2cee38cca5f5380b95003454ff6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 16:58:47 compute-0 systemd[1]: libpod-conmon-71327b7aae782ae8a0ee4efbceb10e8f25c2cee38cca5f5380b95003454ff6b4.scope: Deactivated successfully.
Nov 25 16:58:47 compute-0 podman[375699]: 2025-11-25 16:58:47.353796473 +0000 UTC m=+0.045326795 container create fbd78733aa01be0f46bd25429651c5784ab4717fc0ca98ca8a27c648cea5a031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 16:58:47 compute-0 systemd[1]: Started libpod-conmon-fbd78733aa01be0f46bd25429651c5784ab4717fc0ca98ca8a27c648cea5a031.scope.
Nov 25 16:58:47 compute-0 podman[375699]: 2025-11-25 16:58:47.330575941 +0000 UTC m=+0.022106253 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:58:47 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:58:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74fa42b62c583b75aade4f01139dd18ea0fc0bca3e3fc3dcba0bf15882c7ee40/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:58:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74fa42b62c583b75aade4f01139dd18ea0fc0bca3e3fc3dcba0bf15882c7ee40/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:58:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74fa42b62c583b75aade4f01139dd18ea0fc0bca3e3fc3dcba0bf15882c7ee40/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:58:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74fa42b62c583b75aade4f01139dd18ea0fc0bca3e3fc3dcba0bf15882c7ee40/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:58:47 compute-0 podman[375699]: 2025-11-25 16:58:47.46129069 +0000 UTC m=+0.152821002 container init fbd78733aa01be0f46bd25429651c5784ab4717fc0ca98ca8a27c648cea5a031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bohr, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 16:58:47 compute-0 podman[375699]: 2025-11-25 16:58:47.468930499 +0000 UTC m=+0.160460791 container start fbd78733aa01be0f46bd25429651c5784ab4717fc0ca98ca8a27c648cea5a031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:58:47 compute-0 podman[375699]: 2025-11-25 16:58:47.476578297 +0000 UTC m=+0.168108589 container attach fbd78733aa01be0f46bd25429651c5784ab4717fc0ca98ca8a27c648cea5a031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bohr, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:58:47 compute-0 nova_compute[254092]: 2025-11-25 16:58:47.517 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:58:48 compute-0 ceph-mon[74985]: pgmap v2256: 321 pgs: 321 active+clean; 163 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 4.9 MiB/s wr, 144 op/s
Nov 25 16:58:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2257: 321 pgs: 321 active+clean; 163 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 4.7 MiB/s wr, 138 op/s
Nov 25 16:58:48 compute-0 elated_bohr[375716]: {
Nov 25 16:58:48 compute-0 elated_bohr[375716]:     "0": [
Nov 25 16:58:48 compute-0 elated_bohr[375716]:         {
Nov 25 16:58:48 compute-0 elated_bohr[375716]:             "devices": [
Nov 25 16:58:48 compute-0 elated_bohr[375716]:                 "/dev/loop3"
Nov 25 16:58:48 compute-0 elated_bohr[375716]:             ],
Nov 25 16:58:48 compute-0 elated_bohr[375716]:             "lv_name": "ceph_lv0",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:             "lv_size": "21470642176",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:             "name": "ceph_lv0",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:             "tags": {
Nov 25 16:58:48 compute-0 elated_bohr[375716]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:                 "ceph.cluster_name": "ceph",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:                 "ceph.crush_device_class": "",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:                 "ceph.encrypted": "0",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:                 "ceph.osd_id": "0",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:                 "ceph.type": "block",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:                 "ceph.vdo": "0"
Nov 25 16:58:48 compute-0 elated_bohr[375716]:             },
Nov 25 16:58:48 compute-0 elated_bohr[375716]:             "type": "block",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:             "vg_name": "ceph_vg0"
Nov 25 16:58:48 compute-0 elated_bohr[375716]:         }
Nov 25 16:58:48 compute-0 elated_bohr[375716]:     ],
Nov 25 16:58:48 compute-0 elated_bohr[375716]:     "1": [
Nov 25 16:58:48 compute-0 elated_bohr[375716]:         {
Nov 25 16:58:48 compute-0 elated_bohr[375716]:             "devices": [
Nov 25 16:58:48 compute-0 elated_bohr[375716]:                 "/dev/loop4"
Nov 25 16:58:48 compute-0 elated_bohr[375716]:             ],
Nov 25 16:58:48 compute-0 elated_bohr[375716]:             "lv_name": "ceph_lv1",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:             "lv_size": "21470642176",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:             "name": "ceph_lv1",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:             "tags": {
Nov 25 16:58:48 compute-0 elated_bohr[375716]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:                 "ceph.cluster_name": "ceph",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:                 "ceph.crush_device_class": "",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:                 "ceph.encrypted": "0",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:                 "ceph.osd_id": "1",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:                 "ceph.type": "block",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:                 "ceph.vdo": "0"
Nov 25 16:58:48 compute-0 elated_bohr[375716]:             },
Nov 25 16:58:48 compute-0 elated_bohr[375716]:             "type": "block",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:             "vg_name": "ceph_vg1"
Nov 25 16:58:48 compute-0 elated_bohr[375716]:         }
Nov 25 16:58:48 compute-0 elated_bohr[375716]:     ],
Nov 25 16:58:48 compute-0 elated_bohr[375716]:     "2": [
Nov 25 16:58:48 compute-0 elated_bohr[375716]:         {
Nov 25 16:58:48 compute-0 elated_bohr[375716]:             "devices": [
Nov 25 16:58:48 compute-0 elated_bohr[375716]:                 "/dev/loop5"
Nov 25 16:58:48 compute-0 elated_bohr[375716]:             ],
Nov 25 16:58:48 compute-0 elated_bohr[375716]:             "lv_name": "ceph_lv2",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:             "lv_size": "21470642176",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:             "name": "ceph_lv2",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:             "tags": {
Nov 25 16:58:48 compute-0 elated_bohr[375716]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:                 "ceph.cluster_name": "ceph",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:                 "ceph.crush_device_class": "",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:                 "ceph.encrypted": "0",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:                 "ceph.osd_id": "2",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:                 "ceph.type": "block",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:                 "ceph.vdo": "0"
Nov 25 16:58:48 compute-0 elated_bohr[375716]:             },
Nov 25 16:58:48 compute-0 elated_bohr[375716]:             "type": "block",
Nov 25 16:58:48 compute-0 elated_bohr[375716]:             "vg_name": "ceph_vg2"
Nov 25 16:58:48 compute-0 elated_bohr[375716]:         }
Nov 25 16:58:48 compute-0 elated_bohr[375716]:     ]
Nov 25 16:58:48 compute-0 elated_bohr[375716]: }
Nov 25 16:58:48 compute-0 systemd[1]: libpod-fbd78733aa01be0f46bd25429651c5784ab4717fc0ca98ca8a27c648cea5a031.scope: Deactivated successfully.
Nov 25 16:58:48 compute-0 podman[375699]: 2025-11-25 16:58:48.291490652 +0000 UTC m=+0.983020964 container died fbd78733aa01be0f46bd25429651c5784ab4717fc0ca98ca8a27c648cea5a031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:58:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-74fa42b62c583b75aade4f01139dd18ea0fc0bca3e3fc3dcba0bf15882c7ee40-merged.mount: Deactivated successfully.
Nov 25 16:58:48 compute-0 podman[375699]: 2025-11-25 16:58:48.346588432 +0000 UTC m=+1.038118744 container remove fbd78733aa01be0f46bd25429651c5784ab4717fc0ca98ca8a27c648cea5a031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 16:58:48 compute-0 systemd[1]: libpod-conmon-fbd78733aa01be0f46bd25429651c5784ab4717fc0ca98ca8a27c648cea5a031.scope: Deactivated successfully.
Nov 25 16:58:48 compute-0 sudo[375592]: pam_unix(sudo:session): session closed for user root
Nov 25 16:58:48 compute-0 sudo[375738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:58:48 compute-0 sudo[375738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:58:48 compute-0 sudo[375738]: pam_unix(sudo:session): session closed for user root
Nov 25 16:58:48 compute-0 sudo[375763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:58:48 compute-0 sudo[375763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:58:48 compute-0 sudo[375763]: pam_unix(sudo:session): session closed for user root
Nov 25 16:58:48 compute-0 sudo[375788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:58:48 compute-0 sudo[375788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:58:48 compute-0 sudo[375788]: pam_unix(sudo:session): session closed for user root
Nov 25 16:58:48 compute-0 sudo[375813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 16:58:48 compute-0 sudo[375813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:58:49 compute-0 podman[375874]: 2025-11-25 16:58:49.039624348 +0000 UTC m=+0.040749831 container create 7390493d18fb4a390dcc19489939a34ca23d82b8bcd766b74560920afeace541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 16:58:49 compute-0 systemd[1]: Started libpod-conmon-7390493d18fb4a390dcc19489939a34ca23d82b8bcd766b74560920afeace541.scope.
Nov 25 16:58:49 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:58:49 compute-0 podman[375874]: 2025-11-25 16:58:49.025082071 +0000 UTC m=+0.026207574 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:58:49 compute-0 podman[375874]: 2025-11-25 16:58:49.124789547 +0000 UTC m=+0.125915030 container init 7390493d18fb4a390dcc19489939a34ca23d82b8bcd766b74560920afeace541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:58:49 compute-0 podman[375874]: 2025-11-25 16:58:49.131321495 +0000 UTC m=+0.132446978 container start 7390493d18fb4a390dcc19489939a34ca23d82b8bcd766b74560920afeace541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 16:58:49 compute-0 confident_mahavira[375891]: 167 167
Nov 25 16:58:49 compute-0 systemd[1]: libpod-7390493d18fb4a390dcc19489939a34ca23d82b8bcd766b74560920afeace541.scope: Deactivated successfully.
Nov 25 16:58:49 compute-0 podman[375874]: 2025-11-25 16:58:49.138361437 +0000 UTC m=+0.139486940 container attach 7390493d18fb4a390dcc19489939a34ca23d82b8bcd766b74560920afeace541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mahavira, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:58:49 compute-0 podman[375874]: 2025-11-25 16:58:49.138768378 +0000 UTC m=+0.139893861 container died 7390493d18fb4a390dcc19489939a34ca23d82b8bcd766b74560920afeace541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mahavira, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 16:58:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-928dc2a17756ba43b95912df350203e57da654692434fe07d8165638de1fce82-merged.mount: Deactivated successfully.
Nov 25 16:58:49 compute-0 podman[375874]: 2025-11-25 16:58:49.174661186 +0000 UTC m=+0.175786669 container remove 7390493d18fb4a390dcc19489939a34ca23d82b8bcd766b74560920afeace541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 16:58:49 compute-0 systemd[1]: libpod-conmon-7390493d18fb4a390dcc19489939a34ca23d82b8bcd766b74560920afeace541.scope: Deactivated successfully.
Nov 25 16:58:49 compute-0 podman[375914]: 2025-11-25 16:58:49.394698158 +0000 UTC m=+0.058119143 container create a9c0ac47e49ed6024ad03cc85631853493b5c7906393fe5efbf499cad80ca868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_haibt, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 16:58:49 compute-0 podman[375914]: 2025-11-25 16:58:49.366782698 +0000 UTC m=+0.030203673 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:58:49 compute-0 systemd[1]: Started libpod-conmon-a9c0ac47e49ed6024ad03cc85631853493b5c7906393fe5efbf499cad80ca868.scope.
Nov 25 16:58:49 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:58:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f61f490c76119141088c7a6bb4fe4bd06ea3048cf1b5031de3997c23b896b466/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:58:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f61f490c76119141088c7a6bb4fe4bd06ea3048cf1b5031de3997c23b896b466/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:58:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f61f490c76119141088c7a6bb4fe4bd06ea3048cf1b5031de3997c23b896b466/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:58:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f61f490c76119141088c7a6bb4fe4bd06ea3048cf1b5031de3997c23b896b466/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:58:49 compute-0 podman[375914]: 2025-11-25 16:58:49.519768495 +0000 UTC m=+0.183189450 container init a9c0ac47e49ed6024ad03cc85631853493b5c7906393fe5efbf499cad80ca868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_haibt, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 16:58:49 compute-0 podman[375914]: 2025-11-25 16:58:49.533972472 +0000 UTC m=+0.197393417 container start a9c0ac47e49ed6024ad03cc85631853493b5c7906393fe5efbf499cad80ca868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Nov 25 16:58:49 compute-0 nova_compute[254092]: 2025-11-25 16:58:49.535 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:58:49 compute-0 podman[375914]: 2025-11-25 16:58:49.540370856 +0000 UTC m=+0.203791801 container attach a9c0ac47e49ed6024ad03cc85631853493b5c7906393fe5efbf499cad80ca868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_haibt, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:58:50 compute-0 ceph-mon[74985]: pgmap v2257: 321 pgs: 321 active+clean; 163 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 4.7 MiB/s wr, 138 op/s
Nov 25 16:58:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2258: 321 pgs: 321 active+clean; 121 MiB data, 817 MiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 208 op/s
Nov 25 16:58:50 compute-0 hardcore_haibt[375931]: {
Nov 25 16:58:50 compute-0 hardcore_haibt[375931]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 16:58:50 compute-0 hardcore_haibt[375931]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:58:50 compute-0 hardcore_haibt[375931]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 16:58:50 compute-0 hardcore_haibt[375931]:         "osd_id": 1,
Nov 25 16:58:50 compute-0 hardcore_haibt[375931]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:58:50 compute-0 hardcore_haibt[375931]:         "type": "bluestore"
Nov 25 16:58:50 compute-0 hardcore_haibt[375931]:     },
Nov 25 16:58:50 compute-0 hardcore_haibt[375931]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 16:58:50 compute-0 hardcore_haibt[375931]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:58:50 compute-0 hardcore_haibt[375931]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 16:58:50 compute-0 hardcore_haibt[375931]:         "osd_id": 2,
Nov 25 16:58:50 compute-0 hardcore_haibt[375931]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:58:50 compute-0 hardcore_haibt[375931]:         "type": "bluestore"
Nov 25 16:58:50 compute-0 hardcore_haibt[375931]:     },
Nov 25 16:58:50 compute-0 hardcore_haibt[375931]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 16:58:50 compute-0 hardcore_haibt[375931]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:58:50 compute-0 hardcore_haibt[375931]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 16:58:50 compute-0 hardcore_haibt[375931]:         "osd_id": 0,
Nov 25 16:58:50 compute-0 hardcore_haibt[375931]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:58:50 compute-0 hardcore_haibt[375931]:         "type": "bluestore"
Nov 25 16:58:50 compute-0 hardcore_haibt[375931]:     }
Nov 25 16:58:50 compute-0 hardcore_haibt[375931]: }
Nov 25 16:58:50 compute-0 systemd[1]: libpod-a9c0ac47e49ed6024ad03cc85631853493b5c7906393fe5efbf499cad80ca868.scope: Deactivated successfully.
Nov 25 16:58:50 compute-0 podman[375914]: 2025-11-25 16:58:50.445413375 +0000 UTC m=+1.108834320 container died a9c0ac47e49ed6024ad03cc85631853493b5c7906393fe5efbf499cad80ca868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_haibt, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 16:58:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-f61f490c76119141088c7a6bb4fe4bd06ea3048cf1b5031de3997c23b896b466-merged.mount: Deactivated successfully.
Nov 25 16:58:50 compute-0 podman[375914]: 2025-11-25 16:58:50.504172156 +0000 UTC m=+1.167593141 container remove a9c0ac47e49ed6024ad03cc85631853493b5c7906393fe5efbf499cad80ca868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:58:50 compute-0 systemd[1]: libpod-conmon-a9c0ac47e49ed6024ad03cc85631853493b5c7906393fe5efbf499cad80ca868.scope: Deactivated successfully.
Nov 25 16:58:50 compute-0 sudo[375813]: pam_unix(sudo:session): session closed for user root
Nov 25 16:58:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 16:58:50 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:58:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 16:58:50 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:58:50 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev d5b39198-acd6-4758-be83-0d80370ce72d does not exist
Nov 25 16:58:50 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 6a213abf-c272-435b-b2c9-519794bb25f1 does not exist
Nov 25 16:58:50 compute-0 sudo[375977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:58:50 compute-0 sudo[375977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:58:50 compute-0 sudo[375977]: pam_unix(sudo:session): session closed for user root
Nov 25 16:58:50 compute-0 sudo[376002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 16:58:50 compute-0 sudo[376002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:58:50 compute-0 sudo[376002]: pam_unix(sudo:session): session closed for user root
Nov 25 16:58:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:58:50.805 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:58:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:58:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:58:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:58:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:58:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007613079540274793 of space, bias 1.0, pg target 0.22839238620824379 quantized to 32 (current 32)
Nov 25 16:58:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:58:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:58:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:58:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:58:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:58:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 16:58:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:58:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 16:58:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:58:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:58:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:58:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 16:58:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:58:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 16:58:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:58:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:58:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:58:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 16:58:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:58:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e283 do_prune osdmap full prune enabled
Nov 25 16:58:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 e284: 3 total, 3 up, 3 in
Nov 25 16:58:51 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e284: 3 total, 3 up, 3 in
Nov 25 16:58:51 compute-0 ceph-mon[74985]: pgmap v2258: 321 pgs: 321 active+clean; 121 MiB data, 817 MiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 208 op/s
Nov 25 16:58:51 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:58:51 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:58:51 compute-0 ceph-mon[74985]: osdmap e284: 3 total, 3 up, 3 in
Nov 25 16:58:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2260: 321 pgs: 321 active+clean; 121 MiB data, 817 MiB used, 59 GiB / 60 GiB avail; 8.8 MiB/s rd, 5.8 MiB/s wr, 258 op/s
Nov 25 16:58:52 compute-0 nova_compute[254092]: 2025-11-25 16:58:52.520 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:58:53 compute-0 ceph-mon[74985]: pgmap v2260: 321 pgs: 321 active+clean; 121 MiB data, 817 MiB used, 59 GiB / 60 GiB avail; 8.8 MiB/s rd, 5.8 MiB/s wr, 258 op/s
Nov 25 16:58:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2261: 321 pgs: 321 active+clean; 121 MiB data, 817 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 1.5 MiB/s wr, 106 op/s
Nov 25 16:58:54 compute-0 nova_compute[254092]: 2025-11-25 16:58:54.536 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:58:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 16:58:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3961666780' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:58:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 16:58:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3961666780' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:58:55 compute-0 ceph-mon[74985]: pgmap v2261: 321 pgs: 321 active+clean; 121 MiB data, 817 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 1.5 MiB/s wr, 106 op/s
Nov 25 16:58:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/3961666780' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:58:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/3961666780' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:58:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2262: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 409 B/s wr, 74 op/s
Nov 25 16:58:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:58:57 compute-0 ovn_controller[153477]: 2025-11-25T16:58:57Z|00119|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9e:02:f1 10.100.0.9
Nov 25 16:58:57 compute-0 nova_compute[254092]: 2025-11-25 16:58:57.522 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:58:57 compute-0 ceph-mon[74985]: pgmap v2262: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 409 B/s wr, 74 op/s
Nov 25 16:58:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2263: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 409 B/s wr, 74 op/s
Nov 25 16:58:59 compute-0 nova_compute[254092]: 2025-11-25 16:58:59.538 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:58:59 compute-0 ceph-mon[74985]: pgmap v2263: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 409 B/s wr, 74 op/s
Nov 25 16:59:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2264: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 553 KiB/s rd, 16 KiB/s wr, 47 op/s
Nov 25 16:59:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:59:01 compute-0 ceph-mon[74985]: pgmap v2264: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 553 KiB/s rd, 16 KiB/s wr, 47 op/s
Nov 25 16:59:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2265: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 608 KiB/s rd, 15 KiB/s wr, 50 op/s
Nov 25 16:59:02 compute-0 nova_compute[254092]: 2025-11-25 16:59:02.524 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:03 compute-0 ceph-mon[74985]: pgmap v2265: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 608 KiB/s rd, 15 KiB/s wr, 50 op/s
Nov 25 16:59:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2266: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 540 KiB/s rd, 13 KiB/s wr, 44 op/s
Nov 25 16:59:04 compute-0 nova_compute[254092]: 2025-11-25 16:59:04.539 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:05 compute-0 ceph-mon[74985]: pgmap v2266: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 540 KiB/s rd, 13 KiB/s wr, 44 op/s
Nov 25 16:59:05 compute-0 podman[376029]: 2025-11-25 16:59:05.650801297 +0000 UTC m=+0.062085661 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 16:59:05 compute-0 podman[376028]: 2025-11-25 16:59:05.656742929 +0000 UTC m=+0.070687326 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 16:59:05 compute-0 podman[376030]: 2025-11-25 16:59:05.688554186 +0000 UTC m=+0.090741523 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible)
Nov 25 16:59:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2267: 321 pgs: 321 active+clean; 122 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 540 KiB/s rd, 23 KiB/s wr, 45 op/s
Nov 25 16:59:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:59:07 compute-0 nova_compute[254092]: 2025-11-25 16:59:07.526 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:07 compute-0 ceph-mon[74985]: pgmap v2267: 321 pgs: 321 active+clean; 122 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 540 KiB/s rd, 23 KiB/s wr, 45 op/s
Nov 25 16:59:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2268: 321 pgs: 321 active+clean; 122 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 506 KiB/s rd, 23 KiB/s wr, 40 op/s
Nov 25 16:59:09 compute-0 nova_compute[254092]: 2025-11-25 16:59:09.542 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:09 compute-0 ceph-mon[74985]: pgmap v2268: 321 pgs: 321 active+clean; 122 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 506 KiB/s rd, 23 KiB/s wr, 40 op/s
Nov 25 16:59:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:59:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:59:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:59:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:59:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:59:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:59:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2269: 321 pgs: 321 active+clean; 122 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 506 KiB/s rd, 26 KiB/s wr, 40 op/s
Nov 25 16:59:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:59:11 compute-0 ceph-mon[74985]: pgmap v2269: 321 pgs: 321 active+clean; 122 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 506 KiB/s rd, 26 KiB/s wr, 40 op/s
Nov 25 16:59:11 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #102. Immutable memtables: 0.
Nov 25 16:59:11 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:59:11.684522) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 16:59:11 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 102
Nov 25 16:59:11 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089951684540, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 2079, "num_deletes": 253, "total_data_size": 3404100, "memory_usage": 3453560, "flush_reason": "Manual Compaction"}
Nov 25 16:59:11 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #103: started
Nov 25 16:59:11 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089951699506, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 103, "file_size": 3337098, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45270, "largest_seqno": 47348, "table_properties": {"data_size": 3327625, "index_size": 6031, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19211, "raw_average_key_size": 20, "raw_value_size": 3308632, "raw_average_value_size": 3493, "num_data_blocks": 267, "num_entries": 947, "num_filter_entries": 947, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764089732, "oldest_key_time": 1764089732, "file_creation_time": 1764089951, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 103, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:59:11 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 15036 microseconds, and 6393 cpu microseconds.
Nov 25 16:59:11 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:59:11 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:59:11.699551) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #103: 3337098 bytes OK
Nov 25 16:59:11 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:59:11.699571) [db/memtable_list.cc:519] [default] Level-0 commit table #103 started
Nov 25 16:59:11 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:59:11.701722) [db/memtable_list.cc:722] [default] Level-0 commit table #103: memtable #1 done
Nov 25 16:59:11 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:59:11.701744) EVENT_LOG_v1 {"time_micros": 1764089951701737, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 16:59:11 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:59:11.701765) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 16:59:11 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 3395390, prev total WAL file size 3395390, number of live WAL files 2.
Nov 25 16:59:11 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000099.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:59:11 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:59:11.702828) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Nov 25 16:59:11 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 16:59:11 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [103(3258KB)], [101(8685KB)]
Nov 25 16:59:11 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089951702879, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [103], "files_L6": [101], "score": -1, "input_data_size": 12230833, "oldest_snapshot_seqno": -1}
Nov 25 16:59:11 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #104: 7110 keys, 10569211 bytes, temperature: kUnknown
Nov 25 16:59:11 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089951749968, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 104, "file_size": 10569211, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10520294, "index_size": 30053, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17797, "raw_key_size": 183608, "raw_average_key_size": 25, "raw_value_size": 10391420, "raw_average_value_size": 1461, "num_data_blocks": 1184, "num_entries": 7110, "num_filter_entries": 7110, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764089951, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Nov 25 16:59:11 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 16:59:11 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:59:11.750238) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 10569211 bytes
Nov 25 16:59:11 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:59:11.751388) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 259.3 rd, 224.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 8.5 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(6.8) write-amplify(3.2) OK, records in: 7632, records dropped: 522 output_compression: NoCompression
Nov 25 16:59:11 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:59:11.751407) EVENT_LOG_v1 {"time_micros": 1764089951751398, "job": 60, "event": "compaction_finished", "compaction_time_micros": 47172, "compaction_time_cpu_micros": 25621, "output_level": 6, "num_output_files": 1, "total_output_size": 10569211, "num_input_records": 7632, "num_output_records": 7110, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 16:59:11 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000103.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:59:11 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089951752121, "job": 60, "event": "table_file_deletion", "file_number": 103}
Nov 25 16:59:11 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 16:59:11 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764089951754190, "job": 60, "event": "table_file_deletion", "file_number": 101}
Nov 25 16:59:11 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:59:11.702680) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:59:11 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:59:11.754281) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:59:11 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:59:11.754287) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:59:11 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:59:11.754288) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:59:11 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:59:11.754289) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:59:11 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-16:59:11.754291) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 16:59:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2270: 321 pgs: 321 active+clean; 122 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 79 KiB/s rd, 13 KiB/s wr, 5 op/s
Nov 25 16:59:12 compute-0 nova_compute[254092]: 2025-11-25 16:59:12.529 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:59:13.635 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:59:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:59:13.635 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:59:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:59:13.636 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:59:13 compute-0 ceph-mon[74985]: pgmap v2270: 321 pgs: 321 active+clean; 122 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 79 KiB/s rd, 13 KiB/s wr, 5 op/s
Nov 25 16:59:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2271: 321 pgs: 321 active+clean; 122 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s wr, 0 op/s
Nov 25 16:59:14 compute-0 nova_compute[254092]: 2025-11-25 16:59:14.544 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:15 compute-0 ceph-mon[74985]: pgmap v2271: 321 pgs: 321 active+clean; 122 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s wr, 0 op/s
Nov 25 16:59:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2272: 321 pgs: 321 active+clean; 122 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s wr, 1 op/s
Nov 25 16:59:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:59:16 compute-0 nova_compute[254092]: 2025-11-25 16:59:16.899 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:17 compute-0 nova_compute[254092]: 2025-11-25 16:59:17.531 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:17 compute-0 ceph-mon[74985]: pgmap v2272: 321 pgs: 321 active+clean; 122 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s wr, 1 op/s
Nov 25 16:59:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2273: 321 pgs: 321 active+clean; 122 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 3.7 KiB/s wr, 0 op/s
Nov 25 16:59:19 compute-0 nova_compute[254092]: 2025-11-25 16:59:19.546 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:19 compute-0 ceph-mon[74985]: pgmap v2273: 321 pgs: 321 active+clean; 122 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 3.7 KiB/s wr, 0 op/s
Nov 25 16:59:19 compute-0 nova_compute[254092]: 2025-11-25 16:59:19.833 254096 DEBUG nova.compute.manager [req-81042c1a-074b-4a21-88dd-7e593320162a req-2c8e0469-d7ea-4015-a7d8-0de8104e4ce5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Received event network-changed-75edff1b-5ceb-4f80-befe-e1a5ec106382 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:59:19 compute-0 nova_compute[254092]: 2025-11-25 16:59:19.834 254096 DEBUG nova.compute.manager [req-81042c1a-074b-4a21-88dd-7e593320162a req-2c8e0469-d7ea-4015-a7d8-0de8104e4ce5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Refreshing instance network info cache due to event network-changed-75edff1b-5ceb-4f80-befe-e1a5ec106382. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:59:19 compute-0 nova_compute[254092]: 2025-11-25 16:59:19.834 254096 DEBUG oslo_concurrency.lockutils [req-81042c1a-074b-4a21-88dd-7e593320162a req-2c8e0469-d7ea-4015-a7d8-0de8104e4ce5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:59:19 compute-0 nova_compute[254092]: 2025-11-25 16:59:19.834 254096 DEBUG oslo_concurrency.lockutils [req-81042c1a-074b-4a21-88dd-7e593320162a req-2c8e0469-d7ea-4015-a7d8-0de8104e4ce5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:59:19 compute-0 nova_compute[254092]: 2025-11-25 16:59:19.834 254096 DEBUG nova.network.neutron [req-81042c1a-074b-4a21-88dd-7e593320162a req-2c8e0469-d7ea-4015-a7d8-0de8104e4ce5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Refreshing network info cache for port 75edff1b-5ceb-4f80-befe-e1a5ec106382 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:59:19 compute-0 nova_compute[254092]: 2025-11-25 16:59:19.904 254096 DEBUG oslo_concurrency.lockutils [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Acquiring lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:59:19 compute-0 nova_compute[254092]: 2025-11-25 16:59:19.905 254096 DEBUG oslo_concurrency.lockutils [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:59:19 compute-0 nova_compute[254092]: 2025-11-25 16:59:19.905 254096 DEBUG oslo_concurrency.lockutils [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Acquiring lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:59:19 compute-0 nova_compute[254092]: 2025-11-25 16:59:19.905 254096 DEBUG oslo_concurrency.lockutils [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:59:19 compute-0 nova_compute[254092]: 2025-11-25 16:59:19.905 254096 DEBUG oslo_concurrency.lockutils [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:59:19 compute-0 nova_compute[254092]: 2025-11-25 16:59:19.906 254096 INFO nova.compute.manager [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Terminating instance
Nov 25 16:59:19 compute-0 nova_compute[254092]: 2025-11-25 16:59:19.907 254096 DEBUG nova.compute.manager [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 16:59:19 compute-0 kernel: tap75edff1b-5c (unregistering): left promiscuous mode
Nov 25 16:59:19 compute-0 NetworkManager[48891]: <info>  [1764089959.9732] device (tap75edff1b-5c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 16:59:19 compute-0 ovn_controller[153477]: 2025-11-25T16:59:19Z|01143|binding|INFO|Releasing lport 75edff1b-5ceb-4f80-befe-e1a5ec106382 from this chassis (sb_readonly=0)
Nov 25 16:59:19 compute-0 ovn_controller[153477]: 2025-11-25T16:59:19Z|01144|binding|INFO|Setting lport 75edff1b-5ceb-4f80-befe-e1a5ec106382 down in Southbound
Nov 25 16:59:19 compute-0 ovn_controller[153477]: 2025-11-25T16:59:19Z|01145|binding|INFO|Removing iface tap75edff1b-5c ovn-installed in OVS
Nov 25 16:59:19 compute-0 nova_compute[254092]: 2025-11-25 16:59:19.982 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:19 compute-0 nova_compute[254092]: 2025-11-25 16:59:19.984 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:19 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:59:19.995 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:02:f1 10.100.0.9'], port_security=['fa:16:3e:9e:02:f1 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '5d0e35bb-8f66-41f8-81fc-54e1b53c4dda', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c62671a-f9ae-4033-9c32-b04ccb3e0de4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0fef68bf8cf647f89586309d548d4bd7', 'neutron:revision_number': '9', 'neutron:security_group_ids': '749158e6-7a67-48ec-831f-8c4fe94e7a88', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=91b4ffe7-2c89-49f5-b8f3-271ba9b0feff, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=75edff1b-5ceb-4f80-befe-e1a5ec106382) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:59:19 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:59:19.996 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 75edff1b-5ceb-4f80-befe-e1a5ec106382 in datapath 6c62671a-f9ae-4033-9c32-b04ccb3e0de4 unbound from our chassis
Nov 25 16:59:19 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:59:19.997 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6c62671a-f9ae-4033-9c32-b04ccb3e0de4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 16:59:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:59:19.998 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[01a799bb-80f1-4f4d-b896-a1d4da9bec30]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:59:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:59:19.999 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4 namespace which is not needed anymore
Nov 25 16:59:20 compute-0 nova_compute[254092]: 2025-11-25 16:59:20.005 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:20 compute-0 systemd[1]: machine-qemu\x2d142\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Nov 25 16:59:20 compute-0 systemd[1]: machine-qemu\x2d142\x2dinstance\x2d0000006e.scope: Consumed 14.273s CPU time.
Nov 25 16:59:20 compute-0 systemd-machined[216343]: Machine qemu-142-instance-0000006e terminated.
Nov 25 16:59:20 compute-0 neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4[375366]: [NOTICE]   (375372) : haproxy version is 2.8.14-c23fe91
Nov 25 16:59:20 compute-0 neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4[375366]: [NOTICE]   (375372) : path to executable is /usr/sbin/haproxy
Nov 25 16:59:20 compute-0 neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4[375366]: [WARNING]  (375372) : Exiting Master process...
Nov 25 16:59:20 compute-0 neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4[375366]: [ALERT]    (375372) : Current worker (375379) exited with code 143 (Terminated)
Nov 25 16:59:20 compute-0 neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4[375366]: [WARNING]  (375372) : All workers exited. Exiting... (0)
Nov 25 16:59:20 compute-0 nova_compute[254092]: 2025-11-25 16:59:20.143 254096 INFO nova.virt.libvirt.driver [-] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Instance destroyed successfully.
Nov 25 16:59:20 compute-0 nova_compute[254092]: 2025-11-25 16:59:20.143 254096 DEBUG nova.objects.instance [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lazy-loading 'resources' on Instance uuid 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:59:20 compute-0 systemd[1]: libpod-37ea73a5b76932edd6618993af23071beac5e9dfbce74aa295ead9905d869697.scope: Deactivated successfully.
Nov 25 16:59:20 compute-0 podman[376116]: 2025-11-25 16:59:20.152964675 +0000 UTC m=+0.053248771 container died 37ea73a5b76932edd6618993af23071beac5e9dfbce74aa295ead9905d869697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 16:59:20 compute-0 nova_compute[254092]: 2025-11-25 16:59:20.161 254096 DEBUG nova.virt.libvirt.vif [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-25T16:57:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1491050401',display_name='tempest-TestShelveInstance-server-1491050401',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1491050401',id=110,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCU1tFZ5889TlnfI/RUb+xoO74TlS+bqxewhKSm7AlfWLTCPQbO0K87XgSi1H6tslLwcv758vqLas7auaeYLUcvYqQB29Lbjk/VpKf97D9aRztTuSqsJ0wNarM8/2CjMZA==',key_name='tempest-TestShelveInstance-1027394080',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:58:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0fef68bf8cf647f89586309d548d4bd7',ramdisk_id='',reservation_id='r-8exg2mmt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-535396087',owner_user_name='tempest-TestShelveInstance-535396087-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:58:45Z,user_data=None,user_id='2a830f6b7532459380b24ae0297b12bb',uuid=5d0e35bb-8f66-41f8-81fc-54e1b53c4dda,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "address": "fa:16:3e:9e:02:f1", "network": {"id": "6c62671a-f9ae-4033-9c32-b04ccb3e0de4", "bridge": "br-int", "label": "tempest-TestShelveInstance-32572327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fef68bf8cf647f89586309d548d4bd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75edff1b-5c", "ovs_interfaceid": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 16:59:20 compute-0 nova_compute[254092]: 2025-11-25 16:59:20.162 254096 DEBUG nova.network.os_vif_util [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Converting VIF {"id": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "address": "fa:16:3e:9e:02:f1", "network": {"id": "6c62671a-f9ae-4033-9c32-b04ccb3e0de4", "bridge": "br-int", "label": "tempest-TestShelveInstance-32572327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fef68bf8cf647f89586309d548d4bd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75edff1b-5c", "ovs_interfaceid": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:59:20 compute-0 nova_compute[254092]: 2025-11-25 16:59:20.163 254096 DEBUG nova.network.os_vif_util [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:02:f1,bridge_name='br-int',has_traffic_filtering=True,id=75edff1b-5ceb-4f80-befe-e1a5ec106382,network=Network(6c62671a-f9ae-4033-9c32-b04ccb3e0de4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75edff1b-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:59:20 compute-0 nova_compute[254092]: 2025-11-25 16:59:20.164 254096 DEBUG os_vif [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:02:f1,bridge_name='br-int',has_traffic_filtering=True,id=75edff1b-5ceb-4f80-befe-e1a5ec106382,network=Network(6c62671a-f9ae-4033-9c32-b04ccb3e0de4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75edff1b-5c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 16:59:20 compute-0 nova_compute[254092]: 2025-11-25 16:59:20.166 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:20 compute-0 nova_compute[254092]: 2025-11-25 16:59:20.166 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap75edff1b-5c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:59:20 compute-0 nova_compute[254092]: 2025-11-25 16:59:20.167 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:20 compute-0 nova_compute[254092]: 2025-11-25 16:59:20.170 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:20 compute-0 nova_compute[254092]: 2025-11-25 16:59:20.172 254096 INFO os_vif [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:02:f1,bridge_name='br-int',has_traffic_filtering=True,id=75edff1b-5ceb-4f80-befe-e1a5ec106382,network=Network(6c62671a-f9ae-4033-9c32-b04ccb3e0de4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75edff1b-5c')
Nov 25 16:59:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-37ea73a5b76932edd6618993af23071beac5e9dfbce74aa295ead9905d869697-userdata-shm.mount: Deactivated successfully.
Nov 25 16:59:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-594d52d0fc086d39ee392d5c99998b632b0abad46d43aa1aa2121c7e09f49de6-merged.mount: Deactivated successfully.
Nov 25 16:59:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2274: 321 pgs: 321 active+clean; 122 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 8.7 KiB/s rd, 3.7 KiB/s wr, 2 op/s
Nov 25 16:59:20 compute-0 podman[376116]: 2025-11-25 16:59:20.21848995 +0000 UTC m=+0.118774036 container cleanup 37ea73a5b76932edd6618993af23071beac5e9dfbce74aa295ead9905d869697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 25 16:59:20 compute-0 systemd[1]: libpod-conmon-37ea73a5b76932edd6618993af23071beac5e9dfbce74aa295ead9905d869697.scope: Deactivated successfully.
Nov 25 16:59:20 compute-0 podman[376175]: 2025-11-25 16:59:20.293098932 +0000 UTC m=+0.054743331 container remove 37ea73a5b76932edd6618993af23071beac5e9dfbce74aa295ead9905d869697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 16:59:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:59:20.298 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b29742eb-7f18-49e2-9a3e-daba519ef67c]: (4, ('Tue Nov 25 04:59:20 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4 (37ea73a5b76932edd6618993af23071beac5e9dfbce74aa295ead9905d869697)\n37ea73a5b76932edd6618993af23071beac5e9dfbce74aa295ead9905d869697\nTue Nov 25 04:59:20 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4 (37ea73a5b76932edd6618993af23071beac5e9dfbce74aa295ead9905d869697)\n37ea73a5b76932edd6618993af23071beac5e9dfbce74aa295ead9905d869697\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:59:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:59:20.300 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b86d3442-08f5-44ba-a693-cd035987133c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:59:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:59:20.301 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c62671a-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:59:20 compute-0 nova_compute[254092]: 2025-11-25 16:59:20.349 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:20 compute-0 kernel: tap6c62671a-f0: left promiscuous mode
Nov 25 16:59:20 compute-0 nova_compute[254092]: 2025-11-25 16:59:20.367 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:59:20.370 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1d0aaadd-3e7f-4077-9624-c7766d34b3f9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:59:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:59:20.389 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8d355193-5826-4ea4-bc48-1021cc62b8df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:59:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:59:20.390 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e0e7fd37-75ce-4ee7-b679-39a3147647c2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:59:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:59:20.409 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a9021eac-ac2e-4875-a30f-7b2dc78dc691]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 638126, 'reachable_time': 22674, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376190, 'error': None, 'target': 'ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:59:20 compute-0 systemd[1]: run-netns-ovnmeta\x2d6c62671a\x2df9ae\x2d4033\x2d9c32\x2db04ccb3e0de4.mount: Deactivated successfully.
Nov 25 16:59:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:59:20.412 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6c62671a-f9ae-4033-9c32-b04ccb3e0de4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 16:59:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:59:20.412 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[a7692737-149a-4a9f-be98-98ffc12cc324]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:59:20 compute-0 nova_compute[254092]: 2025-11-25 16:59:20.515 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:59:20 compute-0 nova_compute[254092]: 2025-11-25 16:59:20.797 254096 INFO nova.virt.libvirt.driver [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Deleting instance files /var/lib/nova/instances/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_del
Nov 25 16:59:20 compute-0 nova_compute[254092]: 2025-11-25 16:59:20.798 254096 INFO nova.virt.libvirt.driver [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Deletion of /var/lib/nova/instances/5d0e35bb-8f66-41f8-81fc-54e1b53c4dda_del complete
Nov 25 16:59:20 compute-0 nova_compute[254092]: 2025-11-25 16:59:20.846 254096 INFO nova.compute.manager [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Took 0.94 seconds to destroy the instance on the hypervisor.
Nov 25 16:59:20 compute-0 nova_compute[254092]: 2025-11-25 16:59:20.847 254096 DEBUG oslo.service.loopingcall [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 16:59:20 compute-0 nova_compute[254092]: 2025-11-25 16:59:20.848 254096 DEBUG nova.compute.manager [-] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 16:59:20 compute-0 nova_compute[254092]: 2025-11-25 16:59:20.848 254096 DEBUG nova.network.neutron [-] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 16:59:21 compute-0 nova_compute[254092]: 2025-11-25 16:59:21.310 254096 DEBUG nova.network.neutron [req-81042c1a-074b-4a21-88dd-7e593320162a req-2c8e0469-d7ea-4015-a7d8-0de8104e4ce5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Updated VIF entry in instance network info cache for port 75edff1b-5ceb-4f80-befe-e1a5ec106382. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:59:21 compute-0 nova_compute[254092]: 2025-11-25 16:59:21.310 254096 DEBUG nova.network.neutron [req-81042c1a-074b-4a21-88dd-7e593320162a req-2c8e0469-d7ea-4015-a7d8-0de8104e4ce5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Updating instance_info_cache with network_info: [{"id": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "address": "fa:16:3e:9e:02:f1", "network": {"id": "6c62671a-f9ae-4033-9c32-b04ccb3e0de4", "bridge": "br-int", "label": "tempest-TestShelveInstance-32572327-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fef68bf8cf647f89586309d548d4bd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75edff1b-5c", "ovs_interfaceid": "75edff1b-5ceb-4f80-befe-e1a5ec106382", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:59:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:59:21 compute-0 nova_compute[254092]: 2025-11-25 16:59:21.641 254096 DEBUG oslo_concurrency.lockutils [req-81042c1a-074b-4a21-88dd-7e593320162a req-2c8e0469-d7ea-4015-a7d8-0de8104e4ce5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:59:21 compute-0 ceph-mon[74985]: pgmap v2274: 321 pgs: 321 active+clean; 122 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 8.7 KiB/s rd, 3.7 KiB/s wr, 2 op/s
Nov 25 16:59:22 compute-0 nova_compute[254092]: 2025-11-25 16:59:22.069 254096 DEBUG nova.compute.manager [req-5d5e0ab8-d64d-4fc0-8a38-502879ec6ade req-7297a4e8-9155-4218-9151-447903b2350d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Received event network-vif-unplugged-75edff1b-5ceb-4f80-befe-e1a5ec106382 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:59:22 compute-0 nova_compute[254092]: 2025-11-25 16:59:22.070 254096 DEBUG oslo_concurrency.lockutils [req-5d5e0ab8-d64d-4fc0-8a38-502879ec6ade req-7297a4e8-9155-4218-9151-447903b2350d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:59:22 compute-0 nova_compute[254092]: 2025-11-25 16:59:22.070 254096 DEBUG oslo_concurrency.lockutils [req-5d5e0ab8-d64d-4fc0-8a38-502879ec6ade req-7297a4e8-9155-4218-9151-447903b2350d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:59:22 compute-0 nova_compute[254092]: 2025-11-25 16:59:22.070 254096 DEBUG oslo_concurrency.lockutils [req-5d5e0ab8-d64d-4fc0-8a38-502879ec6ade req-7297a4e8-9155-4218-9151-447903b2350d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:59:22 compute-0 nova_compute[254092]: 2025-11-25 16:59:22.070 254096 DEBUG nova.compute.manager [req-5d5e0ab8-d64d-4fc0-8a38-502879ec6ade req-7297a4e8-9155-4218-9151-447903b2350d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] No waiting events found dispatching network-vif-unplugged-75edff1b-5ceb-4f80-befe-e1a5ec106382 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:59:22 compute-0 nova_compute[254092]: 2025-11-25 16:59:22.071 254096 DEBUG nova.compute.manager [req-5d5e0ab8-d64d-4fc0-8a38-502879ec6ade req-7297a4e8-9155-4218-9151-447903b2350d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Received event network-vif-unplugged-75edff1b-5ceb-4f80-befe-e1a5ec106382 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 16:59:22 compute-0 nova_compute[254092]: 2025-11-25 16:59:22.071 254096 DEBUG nova.compute.manager [req-5d5e0ab8-d64d-4fc0-8a38-502879ec6ade req-7297a4e8-9155-4218-9151-447903b2350d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Received event network-vif-plugged-75edff1b-5ceb-4f80-befe-e1a5ec106382 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:59:22 compute-0 nova_compute[254092]: 2025-11-25 16:59:22.071 254096 DEBUG oslo_concurrency.lockutils [req-5d5e0ab8-d64d-4fc0-8a38-502879ec6ade req-7297a4e8-9155-4218-9151-447903b2350d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:59:22 compute-0 nova_compute[254092]: 2025-11-25 16:59:22.071 254096 DEBUG oslo_concurrency.lockutils [req-5d5e0ab8-d64d-4fc0-8a38-502879ec6ade req-7297a4e8-9155-4218-9151-447903b2350d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:59:22 compute-0 nova_compute[254092]: 2025-11-25 16:59:22.071 254096 DEBUG oslo_concurrency.lockutils [req-5d5e0ab8-d64d-4fc0-8a38-502879ec6ade req-7297a4e8-9155-4218-9151-447903b2350d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:59:22 compute-0 nova_compute[254092]: 2025-11-25 16:59:22.072 254096 DEBUG nova.compute.manager [req-5d5e0ab8-d64d-4fc0-8a38-502879ec6ade req-7297a4e8-9155-4218-9151-447903b2350d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] No waiting events found dispatching network-vif-plugged-75edff1b-5ceb-4f80-befe-e1a5ec106382 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:59:22 compute-0 nova_compute[254092]: 2025-11-25 16:59:22.072 254096 WARNING nova.compute.manager [req-5d5e0ab8-d64d-4fc0-8a38-502879ec6ade req-7297a4e8-9155-4218-9151-447903b2350d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Received unexpected event network-vif-plugged-75edff1b-5ceb-4f80-befe-e1a5ec106382 for instance with vm_state active and task_state deleting.
Nov 25 16:59:22 compute-0 nova_compute[254092]: 2025-11-25 16:59:22.159 254096 DEBUG nova.network.neutron [-] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:59:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2275: 321 pgs: 321 active+clean; 95 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 213 KiB/s rd, 1.3 KiB/s wr, 7 op/s
Nov 25 16:59:22 compute-0 nova_compute[254092]: 2025-11-25 16:59:22.226 254096 INFO nova.compute.manager [-] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Took 1.38 seconds to deallocate network for instance.
Nov 25 16:59:22 compute-0 nova_compute[254092]: 2025-11-25 16:59:22.284 254096 DEBUG oslo_concurrency.lockutils [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:59:22 compute-0 nova_compute[254092]: 2025-11-25 16:59:22.285 254096 DEBUG oslo_concurrency.lockutils [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:59:22 compute-0 nova_compute[254092]: 2025-11-25 16:59:22.440 254096 DEBUG oslo_concurrency.processutils [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:59:22 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:59:22 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4112997725' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:59:22 compute-0 nova_compute[254092]: 2025-11-25 16:59:22.873 254096 DEBUG oslo_concurrency.processutils [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:59:22 compute-0 nova_compute[254092]: 2025-11-25 16:59:22.880 254096 DEBUG nova.compute.provider_tree [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:59:22 compute-0 nova_compute[254092]: 2025-11-25 16:59:22.897 254096 DEBUG nova.scheduler.client.report [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:59:22 compute-0 nova_compute[254092]: 2025-11-25 16:59:22.920 254096 DEBUG oslo_concurrency.lockutils [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:59:23 compute-0 nova_compute[254092]: 2025-11-25 16:59:23.009 254096 INFO nova.scheduler.client.report [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Deleted allocations for instance 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda
Nov 25 16:59:23 compute-0 nova_compute[254092]: 2025-11-25 16:59:23.072 254096 DEBUG oslo_concurrency.lockutils [None req-96b81928-d12a-4583-9849-05a807a64075 2a830f6b7532459380b24ae0297b12bb 0fef68bf8cf647f89586309d548d4bd7 - - default default] Lock "5d0e35bb-8f66-41f8-81fc-54e1b53c4dda" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.167s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:59:23 compute-0 nova_compute[254092]: 2025-11-25 16:59:23.168 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:23 compute-0 nova_compute[254092]: 2025-11-25 16:59:23.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:59:23 compute-0 ceph-mon[74985]: pgmap v2275: 321 pgs: 321 active+clean; 95 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 213 KiB/s rd, 1.3 KiB/s wr, 7 op/s
Nov 25 16:59:23 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4112997725' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:59:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2276: 321 pgs: 321 active+clean; 95 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 213 KiB/s rd, 1023 B/s wr, 7 op/s
Nov 25 16:59:24 compute-0 nova_compute[254092]: 2025-11-25 16:59:24.212 254096 DEBUG nova.compute.manager [req-85a2dbda-bc50-428c-8003-eed2340a277b req-b99d063b-78df-4dcb-b61a-c09bf0aff64a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Received event network-vif-deleted-75edff1b-5ceb-4f80-befe-e1a5ec106382 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:59:24 compute-0 nova_compute[254092]: 2025-11-25 16:59:24.578 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:25 compute-0 nova_compute[254092]: 2025-11-25 16:59:25.168 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:25 compute-0 ceph-mon[74985]: pgmap v2276: 321 pgs: 321 active+clean; 95 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 213 KiB/s rd, 1023 B/s wr, 7 op/s
Nov 25 16:59:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2277: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 2.2 KiB/s wr, 33 op/s
Nov 25 16:59:26 compute-0 nova_compute[254092]: 2025-11-25 16:59:26.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:59:26 compute-0 nova_compute[254092]: 2025-11-25 16:59:26.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:59:26 compute-0 nova_compute[254092]: 2025-11-25 16:59:26.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:59:26 compute-0 nova_compute[254092]: 2025-11-25 16:59:26.524 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:59:26 compute-0 nova_compute[254092]: 2025-11-25 16:59:26.525 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:59:26 compute-0 nova_compute[254092]: 2025-11-25 16:59:26.525 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:59:26 compute-0 nova_compute[254092]: 2025-11-25 16:59:26.525 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 16:59:26 compute-0 nova_compute[254092]: 2025-11-25 16:59:26.526 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:59:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:59:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:59:26 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3805136051' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:59:26 compute-0 nova_compute[254092]: 2025-11-25 16:59:26.981 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:59:27 compute-0 nova_compute[254092]: 2025-11-25 16:59:27.202 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:59:27 compute-0 nova_compute[254092]: 2025-11-25 16:59:27.204 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3856MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 16:59:27 compute-0 nova_compute[254092]: 2025-11-25 16:59:27.204 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:59:27 compute-0 nova_compute[254092]: 2025-11-25 16:59:27.205 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:59:27 compute-0 nova_compute[254092]: 2025-11-25 16:59:27.254 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 16:59:27 compute-0 nova_compute[254092]: 2025-11-25 16:59:27.255 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 16:59:27 compute-0 nova_compute[254092]: 2025-11-25 16:59:27.269 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:27 compute-0 nova_compute[254092]: 2025-11-25 16:59:27.276 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:59:27 compute-0 nova_compute[254092]: 2025-11-25 16:59:27.433 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2278: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 1.2 KiB/s wr, 33 op/s
Nov 25 16:59:28 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:59:28 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2118453245' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:59:28 compute-0 ceph-mon[74985]: pgmap v2277: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 2.2 KiB/s wr, 33 op/s
Nov 25 16:59:28 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3805136051' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:59:28 compute-0 nova_compute[254092]: 2025-11-25 16:59:28.411 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:59:28 compute-0 nova_compute[254092]: 2025-11-25 16:59:28.418 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:59:28 compute-0 nova_compute[254092]: 2025-11-25 16:59:28.434 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:59:28 compute-0 nova_compute[254092]: 2025-11-25 16:59:28.454 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 16:59:28 compute-0 nova_compute[254092]: 2025-11-25 16:59:28.454 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.250s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:59:29 compute-0 ceph-mon[74985]: pgmap v2278: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 1.2 KiB/s wr, 33 op/s
Nov 25 16:59:29 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2118453245' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:59:29 compute-0 nova_compute[254092]: 2025-11-25 16:59:29.580 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:29 compute-0 nova_compute[254092]: 2025-11-25 16:59:29.980 254096 DEBUG oslo_concurrency.lockutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "d643dc57-9536-4a67-9a17-c20512710ea5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:59:29 compute-0 nova_compute[254092]: 2025-11-25 16:59:29.980 254096 DEBUG oslo_concurrency.lockutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "d643dc57-9536-4a67-9a17-c20512710ea5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:59:30 compute-0 nova_compute[254092]: 2025-11-25 16:59:30.046 254096 DEBUG nova.compute.manager [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 16:59:30 compute-0 nova_compute[254092]: 2025-11-25 16:59:30.151 254096 DEBUG oslo_concurrency.lockutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:59:30 compute-0 nova_compute[254092]: 2025-11-25 16:59:30.152 254096 DEBUG oslo_concurrency.lockutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:59:30 compute-0 nova_compute[254092]: 2025-11-25 16:59:30.160 254096 DEBUG nova.virt.hardware [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 16:59:30 compute-0 nova_compute[254092]: 2025-11-25 16:59:30.161 254096 INFO nova.compute.claims [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Claim successful on node compute-0.ctlplane.example.com
Nov 25 16:59:30 compute-0 nova_compute[254092]: 2025-11-25 16:59:30.171 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2279: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 1.2 KiB/s wr, 33 op/s
Nov 25 16:59:30 compute-0 nova_compute[254092]: 2025-11-25 16:59:30.260 254096 DEBUG oslo_concurrency.processutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:59:30 compute-0 nova_compute[254092]: 2025-11-25 16:59:30.452 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:59:30 compute-0 nova_compute[254092]: 2025-11-25 16:59:30.453 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:59:30 compute-0 nova_compute[254092]: 2025-11-25 16:59:30.454 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 16:59:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 16:59:30 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3389327970' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:59:30 compute-0 nova_compute[254092]: 2025-11-25 16:59:30.727 254096 DEBUG oslo_concurrency.processutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:59:30 compute-0 nova_compute[254092]: 2025-11-25 16:59:30.734 254096 DEBUG nova.compute.provider_tree [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 16:59:30 compute-0 nova_compute[254092]: 2025-11-25 16:59:30.758 254096 DEBUG nova.scheduler.client.report [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 16:59:30 compute-0 nova_compute[254092]: 2025-11-25 16:59:30.785 254096 DEBUG oslo_concurrency.lockutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:59:30 compute-0 nova_compute[254092]: 2025-11-25 16:59:30.786 254096 DEBUG nova.compute.manager [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 16:59:30 compute-0 nova_compute[254092]: 2025-11-25 16:59:30.843 254096 DEBUG nova.compute.manager [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 16:59:30 compute-0 nova_compute[254092]: 2025-11-25 16:59:30.844 254096 DEBUG nova.network.neutron [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 16:59:30 compute-0 nova_compute[254092]: 2025-11-25 16:59:30.865 254096 INFO nova.virt.libvirt.driver [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 16:59:30 compute-0 nova_compute[254092]: 2025-11-25 16:59:30.880 254096 DEBUG nova.compute.manager [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 16:59:30 compute-0 nova_compute[254092]: 2025-11-25 16:59:30.982 254096 DEBUG nova.compute.manager [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 16:59:30 compute-0 nova_compute[254092]: 2025-11-25 16:59:30.984 254096 DEBUG nova.virt.libvirt.driver [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 16:59:30 compute-0 nova_compute[254092]: 2025-11-25 16:59:30.984 254096 INFO nova.virt.libvirt.driver [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Creating image(s)
Nov 25 16:59:31 compute-0 nova_compute[254092]: 2025-11-25 16:59:31.012 254096 DEBUG nova.storage.rbd_utils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image d643dc57-9536-4a67-9a17-c20512710ea5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:59:31 compute-0 nova_compute[254092]: 2025-11-25 16:59:31.042 254096 DEBUG nova.storage.rbd_utils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image d643dc57-9536-4a67-9a17-c20512710ea5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:59:31 compute-0 nova_compute[254092]: 2025-11-25 16:59:31.070 254096 DEBUG nova.storage.rbd_utils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image d643dc57-9536-4a67-9a17-c20512710ea5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:59:31 compute-0 nova_compute[254092]: 2025-11-25 16:59:31.074 254096 DEBUG oslo_concurrency.processutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:59:31 compute-0 nova_compute[254092]: 2025-11-25 16:59:31.140 254096 DEBUG nova.policy [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e13382ac65e94c7b8a89e67de0e754df', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 16:59:31 compute-0 nova_compute[254092]: 2025-11-25 16:59:31.181 254096 DEBUG oslo_concurrency.processutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:59:31 compute-0 nova_compute[254092]: 2025-11-25 16:59:31.182 254096 DEBUG oslo_concurrency.lockutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:59:31 compute-0 nova_compute[254092]: 2025-11-25 16:59:31.183 254096 DEBUG oslo_concurrency.lockutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:59:31 compute-0 nova_compute[254092]: 2025-11-25 16:59:31.183 254096 DEBUG oslo_concurrency.lockutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:59:31 compute-0 nova_compute[254092]: 2025-11-25 16:59:31.211 254096 DEBUG nova.storage.rbd_utils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image d643dc57-9536-4a67-9a17-c20512710ea5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:59:31 compute-0 nova_compute[254092]: 2025-11-25 16:59:31.216 254096 DEBUG oslo_concurrency.processutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 d643dc57-9536-4a67-9a17-c20512710ea5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:59:31 compute-0 ceph-mon[74985]: pgmap v2279: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 1.2 KiB/s wr, 33 op/s
Nov 25 16:59:31 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3389327970' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 16:59:31 compute-0 nova_compute[254092]: 2025-11-25 16:59:31.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:59:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:59:31 compute-0 nova_compute[254092]: 2025-11-25 16:59:31.549 254096 DEBUG oslo_concurrency.processutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 d643dc57-9536-4a67-9a17-c20512710ea5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:59:31 compute-0 nova_compute[254092]: 2025-11-25 16:59:31.641 254096 DEBUG nova.storage.rbd_utils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] resizing rbd image d643dc57-9536-4a67-9a17-c20512710ea5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 16:59:31 compute-0 nova_compute[254092]: 2025-11-25 16:59:31.759 254096 DEBUG nova.objects.instance [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'migration_context' on Instance uuid d643dc57-9536-4a67-9a17-c20512710ea5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:59:31 compute-0 nova_compute[254092]: 2025-11-25 16:59:31.775 254096 DEBUG nova.virt.libvirt.driver [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 16:59:31 compute-0 nova_compute[254092]: 2025-11-25 16:59:31.775 254096 DEBUG nova.virt.libvirt.driver [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Ensure instance console log exists: /var/lib/nova/instances/d643dc57-9536-4a67-9a17-c20512710ea5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 16:59:31 compute-0 nova_compute[254092]: 2025-11-25 16:59:31.775 254096 DEBUG oslo_concurrency.lockutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:59:31 compute-0 nova_compute[254092]: 2025-11-25 16:59:31.776 254096 DEBUG oslo_concurrency.lockutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:59:31 compute-0 nova_compute[254092]: 2025-11-25 16:59:31.776 254096 DEBUG oslo_concurrency.lockutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:59:31 compute-0 nova_compute[254092]: 2025-11-25 16:59:31.967 254096 DEBUG nova.network.neutron [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Successfully created port: 12c882d8-4cd5-4233-8d3b-650401885991 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 16:59:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2280: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 222 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Nov 25 16:59:33 compute-0 nova_compute[254092]: 2025-11-25 16:59:33.093 254096 DEBUG nova.network.neutron [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Successfully updated port: 12c882d8-4cd5-4233-8d3b-650401885991 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 16:59:33 compute-0 nova_compute[254092]: 2025-11-25 16:59:33.108 254096 DEBUG oslo_concurrency.lockutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "refresh_cache-d643dc57-9536-4a67-9a17-c20512710ea5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:59:33 compute-0 nova_compute[254092]: 2025-11-25 16:59:33.109 254096 DEBUG oslo_concurrency.lockutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquired lock "refresh_cache-d643dc57-9536-4a67-9a17-c20512710ea5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:59:33 compute-0 nova_compute[254092]: 2025-11-25 16:59:33.109 254096 DEBUG nova.network.neutron [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 16:59:33 compute-0 nova_compute[254092]: 2025-11-25 16:59:33.230 254096 DEBUG nova.compute.manager [req-7bcd1c4d-a3e8-4015-be68-5072f9da4b72 req-cc40554c-6978-4919-bb4f-196c068020f9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Received event network-changed-12c882d8-4cd5-4233-8d3b-650401885991 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:59:33 compute-0 nova_compute[254092]: 2025-11-25 16:59:33.231 254096 DEBUG nova.compute.manager [req-7bcd1c4d-a3e8-4015-be68-5072f9da4b72 req-cc40554c-6978-4919-bb4f-196c068020f9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Refreshing instance network info cache due to event network-changed-12c882d8-4cd5-4233-8d3b-650401885991. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:59:33 compute-0 nova_compute[254092]: 2025-11-25 16:59:33.231 254096 DEBUG oslo_concurrency.lockutils [req-7bcd1c4d-a3e8-4015-be68-5072f9da4b72 req-cc40554c-6978-4919-bb4f-196c068020f9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-d643dc57-9536-4a67-9a17-c20512710ea5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:59:33 compute-0 nova_compute[254092]: 2025-11-25 16:59:33.370 254096 DEBUG nova.network.neutron [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 16:59:33 compute-0 ceph-mon[74985]: pgmap v2280: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 222 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Nov 25 16:59:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2281: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Nov 25 16:59:34 compute-0 nova_compute[254092]: 2025-11-25 16:59:34.515 254096 DEBUG nova.network.neutron [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Updating instance_info_cache with network_info: [{"id": "12c882d8-4cd5-4233-8d3b-650401885991", "address": "fa:16:3e:80:dd:57", "network": {"id": "22f446da-e1c5-4251-b5dd-3071154486f0", "bridge": "br-int", "label": "tempest-network-smoke--753652618", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12c882d8-4c", "ovs_interfaceid": "12c882d8-4cd5-4233-8d3b-650401885991", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:59:34 compute-0 nova_compute[254092]: 2025-11-25 16:59:34.537 254096 DEBUG oslo_concurrency.lockutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Releasing lock "refresh_cache-d643dc57-9536-4a67-9a17-c20512710ea5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:59:34 compute-0 nova_compute[254092]: 2025-11-25 16:59:34.538 254096 DEBUG nova.compute.manager [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Instance network_info: |[{"id": "12c882d8-4cd5-4233-8d3b-650401885991", "address": "fa:16:3e:80:dd:57", "network": {"id": "22f446da-e1c5-4251-b5dd-3071154486f0", "bridge": "br-int", "label": "tempest-network-smoke--753652618", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12c882d8-4c", "ovs_interfaceid": "12c882d8-4cd5-4233-8d3b-650401885991", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 16:59:34 compute-0 nova_compute[254092]: 2025-11-25 16:59:34.540 254096 DEBUG oslo_concurrency.lockutils [req-7bcd1c4d-a3e8-4015-be68-5072f9da4b72 req-cc40554c-6978-4919-bb4f-196c068020f9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-d643dc57-9536-4a67-9a17-c20512710ea5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:59:34 compute-0 nova_compute[254092]: 2025-11-25 16:59:34.540 254096 DEBUG nova.network.neutron [req-7bcd1c4d-a3e8-4015-be68-5072f9da4b72 req-cc40554c-6978-4919-bb4f-196c068020f9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Refreshing network info cache for port 12c882d8-4cd5-4233-8d3b-650401885991 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:59:34 compute-0 nova_compute[254092]: 2025-11-25 16:59:34.547 254096 DEBUG nova.virt.libvirt.driver [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Start _get_guest_xml network_info=[{"id": "12c882d8-4cd5-4233-8d3b-650401885991", "address": "fa:16:3e:80:dd:57", "network": {"id": "22f446da-e1c5-4251-b5dd-3071154486f0", "bridge": "br-int", "label": "tempest-network-smoke--753652618", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12c882d8-4c", "ovs_interfaceid": "12c882d8-4cd5-4233-8d3b-650401885991", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 16:59:34 compute-0 nova_compute[254092]: 2025-11-25 16:59:34.557 254096 WARNING nova.virt.libvirt.driver [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 16:59:34 compute-0 nova_compute[254092]: 2025-11-25 16:59:34.564 254096 DEBUG nova.virt.libvirt.host [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 16:59:34 compute-0 nova_compute[254092]: 2025-11-25 16:59:34.565 254096 DEBUG nova.virt.libvirt.host [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 16:59:34 compute-0 nova_compute[254092]: 2025-11-25 16:59:34.572 254096 DEBUG nova.virt.libvirt.host [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 16:59:34 compute-0 nova_compute[254092]: 2025-11-25 16:59:34.573 254096 DEBUG nova.virt.libvirt.host [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 16:59:34 compute-0 nova_compute[254092]: 2025-11-25 16:59:34.573 254096 DEBUG nova.virt.libvirt.driver [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 16:59:34 compute-0 nova_compute[254092]: 2025-11-25 16:59:34.574 254096 DEBUG nova.virt.hardware [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 16:59:34 compute-0 nova_compute[254092]: 2025-11-25 16:59:34.574 254096 DEBUG nova.virt.hardware [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 16:59:34 compute-0 nova_compute[254092]: 2025-11-25 16:59:34.574 254096 DEBUG nova.virt.hardware [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 16:59:34 compute-0 nova_compute[254092]: 2025-11-25 16:59:34.575 254096 DEBUG nova.virt.hardware [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 16:59:34 compute-0 nova_compute[254092]: 2025-11-25 16:59:34.575 254096 DEBUG nova.virt.hardware [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 16:59:34 compute-0 nova_compute[254092]: 2025-11-25 16:59:34.575 254096 DEBUG nova.virt.hardware [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 16:59:34 compute-0 nova_compute[254092]: 2025-11-25 16:59:34.575 254096 DEBUG nova.virt.hardware [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 16:59:34 compute-0 nova_compute[254092]: 2025-11-25 16:59:34.575 254096 DEBUG nova.virt.hardware [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 16:59:34 compute-0 nova_compute[254092]: 2025-11-25 16:59:34.576 254096 DEBUG nova.virt.hardware [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 16:59:34 compute-0 nova_compute[254092]: 2025-11-25 16:59:34.576 254096 DEBUG nova.virt.hardware [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 16:59:34 compute-0 nova_compute[254092]: 2025-11-25 16:59:34.576 254096 DEBUG nova.virt.hardware [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 16:59:34 compute-0 nova_compute[254092]: 2025-11-25 16:59:34.579 254096 DEBUG oslo_concurrency.processutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:59:34 compute-0 nova_compute[254092]: 2025-11-25 16:59:34.620 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:59:35 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1593843188' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:59:35 compute-0 nova_compute[254092]: 2025-11-25 16:59:35.020 254096 DEBUG oslo_concurrency.processutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:59:35 compute-0 nova_compute[254092]: 2025-11-25 16:59:35.052 254096 DEBUG nova.storage.rbd_utils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image d643dc57-9536-4a67-9a17-c20512710ea5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:59:35 compute-0 nova_compute[254092]: 2025-11-25 16:59:35.059 254096 DEBUG oslo_concurrency.processutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:59:35 compute-0 nova_compute[254092]: 2025-11-25 16:59:35.142 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764089960.140186, 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:59:35 compute-0 nova_compute[254092]: 2025-11-25 16:59:35.143 254096 INFO nova.compute.manager [-] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] VM Stopped (Lifecycle Event)
Nov 25 16:59:35 compute-0 nova_compute[254092]: 2025-11-25 16:59:35.167 254096 DEBUG nova.compute.manager [None req-fc057178-a07c-4b11-bed0-29d4dbedfb20 - - - - - -] [instance: 5d0e35bb-8f66-41f8-81fc-54e1b53c4dda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:59:35 compute-0 nova_compute[254092]: 2025-11-25 16:59:35.173 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:35 compute-0 ceph-mon[74985]: pgmap v2281: 321 pgs: 321 active+clean; 41 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Nov 25 16:59:35 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1593843188' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:59:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 16:59:35 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1936021589' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:59:35 compute-0 nova_compute[254092]: 2025-11-25 16:59:35.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:59:35 compute-0 nova_compute[254092]: 2025-11-25 16:59:35.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 16:59:35 compute-0 nova_compute[254092]: 2025-11-25 16:59:35.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 16:59:35 compute-0 nova_compute[254092]: 2025-11-25 16:59:35.509 254096 DEBUG oslo_concurrency.processutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:59:35 compute-0 nova_compute[254092]: 2025-11-25 16:59:35.510 254096 DEBUG nova.virt.libvirt.vif [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:59:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-888996977',display_name='tempest-TestNetworkBasicOps-server-888996977',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-888996977',id=111,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLtS45HGfMK2vg5lA1DMxbi57g++ufQ+h4UViHDHhpxxz0TeEAmFiy6LE8nwpuctZL207E2zBi1qnO46vlL2kuhASWctkQ5Cos3uH5AqhXg1h51/mOABxlzeqFcNuWZ38Q==',key_name='tempest-TestNetworkBasicOps-202100109',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-5inbkv7z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:59:30Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=d643dc57-9536-4a67-9a17-c20512710ea5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "12c882d8-4cd5-4233-8d3b-650401885991", "address": "fa:16:3e:80:dd:57", "network": {"id": "22f446da-e1c5-4251-b5dd-3071154486f0", "bridge": "br-int", "label": "tempest-network-smoke--753652618", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12c882d8-4c", "ovs_interfaceid": "12c882d8-4cd5-4233-8d3b-650401885991", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 16:59:35 compute-0 nova_compute[254092]: 2025-11-25 16:59:35.510 254096 DEBUG nova.network.os_vif_util [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "12c882d8-4cd5-4233-8d3b-650401885991", "address": "fa:16:3e:80:dd:57", "network": {"id": "22f446da-e1c5-4251-b5dd-3071154486f0", "bridge": "br-int", "label": "tempest-network-smoke--753652618", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12c882d8-4c", "ovs_interfaceid": "12c882d8-4cd5-4233-8d3b-650401885991", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:59:35 compute-0 nova_compute[254092]: 2025-11-25 16:59:35.511 254096 DEBUG nova.network.os_vif_util [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:dd:57,bridge_name='br-int',has_traffic_filtering=True,id=12c882d8-4cd5-4233-8d3b-650401885991,network=Network(22f446da-e1c5-4251-b5dd-3071154486f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12c882d8-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:59:35 compute-0 nova_compute[254092]: 2025-11-25 16:59:35.512 254096 DEBUG nova.objects.instance [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'pci_devices' on Instance uuid d643dc57-9536-4a67-9a17-c20512710ea5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 16:59:35 compute-0 nova_compute[254092]: 2025-11-25 16:59:35.516 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 25 16:59:35 compute-0 nova_compute[254092]: 2025-11-25 16:59:35.516 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 16:59:35 compute-0 nova_compute[254092]: 2025-11-25 16:59:35.529 254096 DEBUG nova.virt.libvirt.driver [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] End _get_guest_xml xml=<domain type="kvm">
Nov 25 16:59:35 compute-0 nova_compute[254092]:   <uuid>d643dc57-9536-4a67-9a17-c20512710ea5</uuid>
Nov 25 16:59:35 compute-0 nova_compute[254092]:   <name>instance-0000006f</name>
Nov 25 16:59:35 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 16:59:35 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 16:59:35 compute-0 nova_compute[254092]:   <metadata>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 16:59:35 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:       <nova:name>tempest-TestNetworkBasicOps-server-888996977</nova:name>
Nov 25 16:59:35 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 16:59:34</nova:creationTime>
Nov 25 16:59:35 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 16:59:35 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 16:59:35 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 16:59:35 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 16:59:35 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 16:59:35 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 16:59:35 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 16:59:35 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 16:59:35 compute-0 nova_compute[254092]:         <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 16:59:35 compute-0 nova_compute[254092]:         <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 16:59:35 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 16:59:35 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 16:59:35 compute-0 nova_compute[254092]:         <nova:port uuid="12c882d8-4cd5-4233-8d3b-650401885991">
Nov 25 16:59:35 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 16:59:35 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 16:59:35 compute-0 nova_compute[254092]:   </metadata>
Nov 25 16:59:35 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 16:59:35 compute-0 nova_compute[254092]:     <system>
Nov 25 16:59:35 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 16:59:35 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 16:59:35 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 16:59:35 compute-0 nova_compute[254092]:       <entry name="serial">d643dc57-9536-4a67-9a17-c20512710ea5</entry>
Nov 25 16:59:35 compute-0 nova_compute[254092]:       <entry name="uuid">d643dc57-9536-4a67-9a17-c20512710ea5</entry>
Nov 25 16:59:35 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     </system>
Nov 25 16:59:35 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 16:59:35 compute-0 nova_compute[254092]:   <os>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:   </os>
Nov 25 16:59:35 compute-0 nova_compute[254092]:   <features>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     <apic/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:   </features>
Nov 25 16:59:35 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 16:59:35 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:   </clock>
Nov 25 16:59:35 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 16:59:35 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:   </cpu>
Nov 25 16:59:35 compute-0 nova_compute[254092]:   <devices>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 16:59:35 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/d643dc57-9536-4a67-9a17-c20512710ea5_disk">
Nov 25 16:59:35 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:       </source>
Nov 25 16:59:35 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:59:35 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:59:35 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 16:59:35 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/d643dc57-9536-4a67-9a17-c20512710ea5_disk.config">
Nov 25 16:59:35 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:       </source>
Nov 25 16:59:35 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 16:59:35 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:       </auth>
Nov 25 16:59:35 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     </disk>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 16:59:35 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:80:dd:57"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:       <target dev="tap12c882d8-4c"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     </interface>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 16:59:35 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/d643dc57-9536-4a67-9a17-c20512710ea5/console.log" append="off"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     </serial>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     <video>
Nov 25 16:59:35 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     </video>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 16:59:35 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     </rng>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 16:59:35 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 16:59:35 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 16:59:35 compute-0 nova_compute[254092]:   </devices>
Nov 25 16:59:35 compute-0 nova_compute[254092]: </domain>
Nov 25 16:59:35 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 16:59:35 compute-0 nova_compute[254092]: 2025-11-25 16:59:35.530 254096 DEBUG nova.compute.manager [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Preparing to wait for external event network-vif-plugged-12c882d8-4cd5-4233-8d3b-650401885991 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 16:59:35 compute-0 nova_compute[254092]: 2025-11-25 16:59:35.530 254096 DEBUG oslo_concurrency.lockutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "d643dc57-9536-4a67-9a17-c20512710ea5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:59:35 compute-0 nova_compute[254092]: 2025-11-25 16:59:35.531 254096 DEBUG oslo_concurrency.lockutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "d643dc57-9536-4a67-9a17-c20512710ea5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:59:35 compute-0 nova_compute[254092]: 2025-11-25 16:59:35.531 254096 DEBUG oslo_concurrency.lockutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "d643dc57-9536-4a67-9a17-c20512710ea5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:59:35 compute-0 nova_compute[254092]: 2025-11-25 16:59:35.532 254096 DEBUG nova.virt.libvirt.vif [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T16:59:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-888996977',display_name='tempest-TestNetworkBasicOps-server-888996977',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-888996977',id=111,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLtS45HGfMK2vg5lA1DMxbi57g++ufQ+h4UViHDHhpxxz0TeEAmFiy6LE8nwpuctZL207E2zBi1qnO46vlL2kuhASWctkQ5Cos3uH5AqhXg1h51/mOABxlzeqFcNuWZ38Q==',key_name='tempest-TestNetworkBasicOps-202100109',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-5inbkv7z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T16:59:30Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=d643dc57-9536-4a67-9a17-c20512710ea5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "12c882d8-4cd5-4233-8d3b-650401885991", "address": "fa:16:3e:80:dd:57", "network": {"id": "22f446da-e1c5-4251-b5dd-3071154486f0", "bridge": "br-int", "label": "tempest-network-smoke--753652618", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12c882d8-4c", "ovs_interfaceid": "12c882d8-4cd5-4233-8d3b-650401885991", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 16:59:35 compute-0 nova_compute[254092]: 2025-11-25 16:59:35.532 254096 DEBUG nova.network.os_vif_util [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "12c882d8-4cd5-4233-8d3b-650401885991", "address": "fa:16:3e:80:dd:57", "network": {"id": "22f446da-e1c5-4251-b5dd-3071154486f0", "bridge": "br-int", "label": "tempest-network-smoke--753652618", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12c882d8-4c", "ovs_interfaceid": "12c882d8-4cd5-4233-8d3b-650401885991", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 16:59:35 compute-0 nova_compute[254092]: 2025-11-25 16:59:35.533 254096 DEBUG nova.network.os_vif_util [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:dd:57,bridge_name='br-int',has_traffic_filtering=True,id=12c882d8-4cd5-4233-8d3b-650401885991,network=Network(22f446da-e1c5-4251-b5dd-3071154486f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12c882d8-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 16:59:35 compute-0 nova_compute[254092]: 2025-11-25 16:59:35.533 254096 DEBUG os_vif [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:dd:57,bridge_name='br-int',has_traffic_filtering=True,id=12c882d8-4cd5-4233-8d3b-650401885991,network=Network(22f446da-e1c5-4251-b5dd-3071154486f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12c882d8-4c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 16:59:35 compute-0 nova_compute[254092]: 2025-11-25 16:59:35.534 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:35 compute-0 nova_compute[254092]: 2025-11-25 16:59:35.534 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:59:35 compute-0 nova_compute[254092]: 2025-11-25 16:59:35.535 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:59:35 compute-0 nova_compute[254092]: 2025-11-25 16:59:35.540 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:35 compute-0 nova_compute[254092]: 2025-11-25 16:59:35.541 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap12c882d8-4c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:59:35 compute-0 nova_compute[254092]: 2025-11-25 16:59:35.542 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap12c882d8-4c, col_values=(('external_ids', {'iface-id': '12c882d8-4cd5-4233-8d3b-650401885991', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:80:dd:57', 'vm-uuid': 'd643dc57-9536-4a67-9a17-c20512710ea5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:59:35 compute-0 NetworkManager[48891]: <info>  [1764089975.5455] manager: (tap12c882d8-4c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/468)
Nov 25 16:59:35 compute-0 nova_compute[254092]: 2025-11-25 16:59:35.544 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:35 compute-0 nova_compute[254092]: 2025-11-25 16:59:35.550 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 16:59:35 compute-0 nova_compute[254092]: 2025-11-25 16:59:35.552 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:35 compute-0 nova_compute[254092]: 2025-11-25 16:59:35.555 254096 INFO os_vif [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:dd:57,bridge_name='br-int',has_traffic_filtering=True,id=12c882d8-4cd5-4233-8d3b-650401885991,network=Network(22f446da-e1c5-4251-b5dd-3071154486f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12c882d8-4c')
Nov 25 16:59:35 compute-0 nova_compute[254092]: 2025-11-25 16:59:35.597 254096 DEBUG nova.virt.libvirt.driver [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:59:35 compute-0 nova_compute[254092]: 2025-11-25 16:59:35.597 254096 DEBUG nova.virt.libvirt.driver [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 16:59:35 compute-0 nova_compute[254092]: 2025-11-25 16:59:35.598 254096 DEBUG nova.virt.libvirt.driver [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No VIF found with MAC fa:16:3e:80:dd:57, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 16:59:35 compute-0 nova_compute[254092]: 2025-11-25 16:59:35.598 254096 INFO nova.virt.libvirt.driver [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Using config drive
Nov 25 16:59:35 compute-0 nova_compute[254092]: 2025-11-25 16:59:35.627 254096 DEBUG nova.storage.rbd_utils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image d643dc57-9536-4a67-9a17-c20512710ea5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:59:36 compute-0 nova_compute[254092]: 2025-11-25 16:59:36.074 254096 INFO nova.virt.libvirt.driver [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Creating config drive at /var/lib/nova/instances/d643dc57-9536-4a67-9a17-c20512710ea5/disk.config
Nov 25 16:59:36 compute-0 nova_compute[254092]: 2025-11-25 16:59:36.081 254096 DEBUG oslo_concurrency.processutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d643dc57-9536-4a67-9a17-c20512710ea5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprtk3p6bj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:59:36 compute-0 nova_compute[254092]: 2025-11-25 16:59:36.122 254096 DEBUG nova.network.neutron [req-7bcd1c4d-a3e8-4015-be68-5072f9da4b72 req-cc40554c-6978-4919-bb4f-196c068020f9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Updated VIF entry in instance network info cache for port 12c882d8-4cd5-4233-8d3b-650401885991. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:59:36 compute-0 nova_compute[254092]: 2025-11-25 16:59:36.122 254096 DEBUG nova.network.neutron [req-7bcd1c4d-a3e8-4015-be68-5072f9da4b72 req-cc40554c-6978-4919-bb4f-196c068020f9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Updating instance_info_cache with network_info: [{"id": "12c882d8-4cd5-4233-8d3b-650401885991", "address": "fa:16:3e:80:dd:57", "network": {"id": "22f446da-e1c5-4251-b5dd-3071154486f0", "bridge": "br-int", "label": "tempest-network-smoke--753652618", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12c882d8-4c", "ovs_interfaceid": "12c882d8-4cd5-4233-8d3b-650401885991", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:59:36 compute-0 nova_compute[254092]: 2025-11-25 16:59:36.149 254096 DEBUG oslo_concurrency.lockutils [req-7bcd1c4d-a3e8-4015-be68-5072f9da4b72 req-cc40554c-6978-4919-bb4f-196c068020f9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-d643dc57-9536-4a67-9a17-c20512710ea5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:59:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2282: 321 pgs: 321 active+clean; 88 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Nov 25 16:59:36 compute-0 nova_compute[254092]: 2025-11-25 16:59:36.229 254096 DEBUG oslo_concurrency.processutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d643dc57-9536-4a67-9a17-c20512710ea5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprtk3p6bj" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:59:36 compute-0 nova_compute[254092]: 2025-11-25 16:59:36.253 254096 DEBUG nova.storage.rbd_utils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image d643dc57-9536-4a67-9a17-c20512710ea5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 16:59:36 compute-0 nova_compute[254092]: 2025-11-25 16:59:36.258 254096 DEBUG oslo_concurrency.processutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d643dc57-9536-4a67-9a17-c20512710ea5/disk.config d643dc57-9536-4a67-9a17-c20512710ea5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 16:59:36 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1936021589' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 16:59:36 compute-0 nova_compute[254092]: 2025-11-25 16:59:36.462 254096 DEBUG oslo_concurrency.processutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d643dc57-9536-4a67-9a17-c20512710ea5/disk.config d643dc57-9536-4a67-9a17-c20512710ea5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.204s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 16:59:36 compute-0 nova_compute[254092]: 2025-11-25 16:59:36.463 254096 INFO nova.virt.libvirt.driver [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Deleting local config drive /var/lib/nova/instances/d643dc57-9536-4a67-9a17-c20512710ea5/disk.config because it was imported into RBD.
Nov 25 16:59:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:59:36 compute-0 kernel: tap12c882d8-4c: entered promiscuous mode
Nov 25 16:59:36 compute-0 NetworkManager[48891]: <info>  [1764089976.5557] manager: (tap12c882d8-4c): new Tun device (/org/freedesktop/NetworkManager/Devices/469)
Nov 25 16:59:36 compute-0 ovn_controller[153477]: 2025-11-25T16:59:36Z|01146|binding|INFO|Claiming lport 12c882d8-4cd5-4233-8d3b-650401885991 for this chassis.
Nov 25 16:59:36 compute-0 ovn_controller[153477]: 2025-11-25T16:59:36Z|01147|binding|INFO|12c882d8-4cd5-4233-8d3b-650401885991: Claiming fa:16:3e:80:dd:57 10.100.0.3
Nov 25 16:59:36 compute-0 nova_compute[254092]: 2025-11-25 16:59:36.557 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.580 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:dd:57 10.100.0.3'], port_security=['fa:16:3e:80:dd:57 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'd643dc57-9536-4a67-9a17-c20512710ea5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-22f446da-e1c5-4251-b5dd-3071154486f0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '2', 'neutron:security_group_ids': '055c4680-7ea5-4bc6-a453-5482dfbe9b96', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=168e2889-06cc-4097-8922-1d94c15fa45a, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=12c882d8-4cd5-4233-8d3b-650401885991) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.582 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 12c882d8-4cd5-4233-8d3b-650401885991 in datapath 22f446da-e1c5-4251-b5dd-3071154486f0 bound to our chassis
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.583 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 22f446da-e1c5-4251-b5dd-3071154486f0
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.597 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e2bad55d-8b56-4369-96fe-ca92a5ab72bc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.598 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap22f446da-e1 in ovnmeta-22f446da-e1c5-4251-b5dd-3071154486f0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.602 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap22f446da-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.602 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[943814aa-7cd2-4423-9998-a1a166eed6f4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.603 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[98a44baf-62c8-4a99-9f1b-ea63b34da527]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.621 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[f52c252b-bf2a-4ac5-869d-e44aa56679fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:59:36 compute-0 systemd-udevd[376607]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:59:36 compute-0 systemd[1]: Started Virtual Machine qemu-143-instance-0000006f.
Nov 25 16:59:36 compute-0 systemd-machined[216343]: New machine qemu-143-instance-0000006f.
Nov 25 16:59:36 compute-0 nova_compute[254092]: 2025-11-25 16:59:36.648 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:36 compute-0 nova_compute[254092]: 2025-11-25 16:59:36.652 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:36 compute-0 NetworkManager[48891]: <info>  [1764089976.6548] device (tap12c882d8-4c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 16:59:36 compute-0 ovn_controller[153477]: 2025-11-25T16:59:36Z|01148|binding|INFO|Setting lport 12c882d8-4cd5-4233-8d3b-650401885991 ovn-installed in OVS
Nov 25 16:59:36 compute-0 ovn_controller[153477]: 2025-11-25T16:59:36Z|01149|binding|INFO|Setting lport 12c882d8-4cd5-4233-8d3b-650401885991 up in Southbound
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.655 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[96cc9887-4bc6-4519-90a3-6f3da471cfa4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:59:36 compute-0 nova_compute[254092]: 2025-11-25 16:59:36.656 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:36 compute-0 NetworkManager[48891]: <info>  [1764089976.6594] device (tap12c882d8-4c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.691 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[3b27b41f-c1aa-4e6a-80bd-1f5ba8db5756]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:59:36 compute-0 podman[376580]: 2025-11-25 16:59:36.696787221 +0000 UTC m=+0.091237856 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 16:59:36 compute-0 podman[376579]: 2025-11-25 16:59:36.697094749 +0000 UTC m=+0.084785421 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.698 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e21c4ef6-e156-48d2-85d5-e7b1c8349fd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:59:36 compute-0 systemd-udevd[376615]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 16:59:36 compute-0 NetworkManager[48891]: <info>  [1764089976.7060] manager: (tap22f446da-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/470)
Nov 25 16:59:36 compute-0 podman[376582]: 2025-11-25 16:59:36.72833418 +0000 UTC m=+0.120270067 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.736 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[fddfebaf-188c-4442-b6f3-8a338ffd95d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.740 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[01ac0a75-c75c-462f-8968-48d10c713502]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:59:36 compute-0 NetworkManager[48891]: <info>  [1764089976.7612] device (tap22f446da-e0): carrier: link connected
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.766 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[41588749-f7fa-442b-aeba-3c307855aee1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.783 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[edb2753c-3537-478e-ba03-ddbfbdf31c52]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap22f446da-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:2d:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 338], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 643435, 'reachable_time': 29055, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376672, 'error': None, 'target': 'ovnmeta-22f446da-e1c5-4251-b5dd-3071154486f0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.799 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2d6bdf96-142f-4d0d-98cb-a4eafa161a1c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe46:2d67'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 643435, 'tstamp': 643435}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376673, 'error': None, 'target': 'ovnmeta-22f446da-e1c5-4251-b5dd-3071154486f0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.814 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a27d6389-5c71-4765-a17e-4330cafe6512]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap22f446da-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:2d:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 338], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 643435, 'reachable_time': 29055, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 376674, 'error': None, 'target': 'ovnmeta-22f446da-e1c5-4251-b5dd-3071154486f0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.842 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[33fe39fd-5077-42e7-9f33-e8ad250ed4b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.899 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0d666747-8475-4136-90de-605b262ee5ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.901 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap22f446da-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.901 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.901 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap22f446da-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:59:36 compute-0 nova_compute[254092]: 2025-11-25 16:59:36.903 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:36 compute-0 NetworkManager[48891]: <info>  [1764089976.9043] manager: (tap22f446da-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/471)
Nov 25 16:59:36 compute-0 kernel: tap22f446da-e0: entered promiscuous mode
Nov 25 16:59:36 compute-0 nova_compute[254092]: 2025-11-25 16:59:36.907 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.907 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap22f446da-e0, col_values=(('external_ids', {'iface-id': 'ecc3dd32-bc67-4be1-9d9a-caf7485b6c03'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:59:36 compute-0 nova_compute[254092]: 2025-11-25 16:59:36.909 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:36 compute-0 ovn_controller[153477]: 2025-11-25T16:59:36Z|01150|binding|INFO|Releasing lport ecc3dd32-bc67-4be1-9d9a-caf7485b6c03 from this chassis (sb_readonly=0)
Nov 25 16:59:36 compute-0 nova_compute[254092]: 2025-11-25 16:59:36.926 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:36 compute-0 nova_compute[254092]: 2025-11-25 16:59:36.928 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.929 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/22f446da-e1c5-4251-b5dd-3071154486f0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/22f446da-e1c5-4251-b5dd-3071154486f0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.931 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[272db025-e476-4aa6-ad2d-05903ee212fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.932 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]: global
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-22f446da-e1c5-4251-b5dd-3071154486f0
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/22f446da-e1c5-4251-b5dd-3071154486f0.pid.haproxy
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]: 
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 22f446da-e1c5-4251-b5dd-3071154486f0
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 16:59:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:59:36.933 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-22f446da-e1c5-4251-b5dd-3071154486f0', 'env', 'PROCESS_TAG=haproxy-22f446da-e1c5-4251-b5dd-3071154486f0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/22f446da-e1c5-4251-b5dd-3071154486f0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 16:59:37 compute-0 nova_compute[254092]: 2025-11-25 16:59:37.157 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089977.1560907, d643dc57-9536-4a67-9a17-c20512710ea5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:59:37 compute-0 nova_compute[254092]: 2025-11-25 16:59:37.157 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] VM Started (Lifecycle Event)
Nov 25 16:59:37 compute-0 nova_compute[254092]: 2025-11-25 16:59:37.182 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:59:37 compute-0 nova_compute[254092]: 2025-11-25 16:59:37.190 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089977.1563592, d643dc57-9536-4a67-9a17-c20512710ea5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:59:37 compute-0 nova_compute[254092]: 2025-11-25 16:59:37.191 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] VM Paused (Lifecycle Event)
Nov 25 16:59:37 compute-0 nova_compute[254092]: 2025-11-25 16:59:37.209 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:59:37 compute-0 nova_compute[254092]: 2025-11-25 16:59:37.213 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:59:37 compute-0 nova_compute[254092]: 2025-11-25 16:59:37.232 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:59:37 compute-0 podman[376748]: 2025-11-25 16:59:37.378346813 +0000 UTC m=+0.080361840 container create 674b4e1967ef9cd85acacaf9fda63acf7b62b312be423cc14ca81dbd85394f99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-22f446da-e1c5-4251-b5dd-3071154486f0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:59:37 compute-0 systemd[1]: Started libpod-conmon-674b4e1967ef9cd85acacaf9fda63acf7b62b312be423cc14ca81dbd85394f99.scope.
Nov 25 16:59:37 compute-0 podman[376748]: 2025-11-25 16:59:37.34445278 +0000 UTC m=+0.046467847 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 16:59:37 compute-0 ceph-mon[74985]: pgmap v2282: 321 pgs: 321 active+clean; 88 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Nov 25 16:59:37 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:59:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7b995b572a86953529648503ae39e65ee754ec7d4b7a8548926206da37e805c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 16:59:37 compute-0 podman[376748]: 2025-11-25 16:59:37.486157949 +0000 UTC m=+0.188172976 container init 674b4e1967ef9cd85acacaf9fda63acf7b62b312be423cc14ca81dbd85394f99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-22f446da-e1c5-4251-b5dd-3071154486f0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 16:59:37 compute-0 podman[376748]: 2025-11-25 16:59:37.496915502 +0000 UTC m=+0.198930509 container start 674b4e1967ef9cd85acacaf9fda63acf7b62b312be423cc14ca81dbd85394f99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-22f446da-e1c5-4251-b5dd-3071154486f0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 16:59:37 compute-0 nova_compute[254092]: 2025-11-25 16:59:37.511 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 16:59:37 compute-0 neutron-haproxy-ovnmeta-22f446da-e1c5-4251-b5dd-3071154486f0[376763]: [NOTICE]   (376767) : New worker (376769) forked
Nov 25 16:59:37 compute-0 neutron-haproxy-ovnmeta-22f446da-e1c5-4251-b5dd-3071154486f0[376763]: [NOTICE]   (376767) : Loading success.
Nov 25 16:59:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2283: 321 pgs: 321 active+clean; 88 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 16:59:39 compute-0 ceph-mon[74985]: pgmap v2283: 321 pgs: 321 active+clean; 88 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 16:59:39 compute-0 nova_compute[254092]: 2025-11-25 16:59:39.585 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:59:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:59:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:59:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:59:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 16:59:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 16:59:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_16:59:40
Nov 25 16:59:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 16:59:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 16:59:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', 'volumes', 'default.rgw.meta', '.rgw.root', 'backups', 'images', 'default.rgw.log', 'cephfs.cephfs.meta']
Nov 25 16:59:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 16:59:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2284: 321 pgs: 321 active+clean; 88 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Nov 25 16:59:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 16:59:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:59:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:59:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:59:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:59:40 compute-0 nova_compute[254092]: 2025-11-25 16:59:40.545 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 16:59:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 16:59:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 16:59:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 16:59:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 16:59:41 compute-0 ceph-mon[74985]: pgmap v2284: 321 pgs: 321 active+clean; 88 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Nov 25 16:59:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:59:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2285: 321 pgs: 321 active+clean; 88 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Nov 25 16:59:43 compute-0 ceph-mon[74985]: pgmap v2285: 321 pgs: 321 active+clean; 88 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Nov 25 16:59:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2286: 321 pgs: 321 active+clean; 88 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Nov 25 16:59:44 compute-0 nova_compute[254092]: 2025-11-25 16:59:44.585 254096 DEBUG nova.compute.manager [req-955d53d5-7f99-4053-a1d8-7f6d82f17f56 req-25ade9f3-46f8-45bb-9f12-26edf1497495 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Received event network-vif-plugged-12c882d8-4cd5-4233-8d3b-650401885991 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:59:44 compute-0 nova_compute[254092]: 2025-11-25 16:59:44.586 254096 DEBUG oslo_concurrency.lockutils [req-955d53d5-7f99-4053-a1d8-7f6d82f17f56 req-25ade9f3-46f8-45bb-9f12-26edf1497495 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "d643dc57-9536-4a67-9a17-c20512710ea5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:59:44 compute-0 nova_compute[254092]: 2025-11-25 16:59:44.586 254096 DEBUG oslo_concurrency.lockutils [req-955d53d5-7f99-4053-a1d8-7f6d82f17f56 req-25ade9f3-46f8-45bb-9f12-26edf1497495 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d643dc57-9536-4a67-9a17-c20512710ea5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:59:44 compute-0 nova_compute[254092]: 2025-11-25 16:59:44.586 254096 DEBUG oslo_concurrency.lockutils [req-955d53d5-7f99-4053-a1d8-7f6d82f17f56 req-25ade9f3-46f8-45bb-9f12-26edf1497495 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d643dc57-9536-4a67-9a17-c20512710ea5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:59:44 compute-0 nova_compute[254092]: 2025-11-25 16:59:44.586 254096 DEBUG nova.compute.manager [req-955d53d5-7f99-4053-a1d8-7f6d82f17f56 req-25ade9f3-46f8-45bb-9f12-26edf1497495 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Processing event network-vif-plugged-12c882d8-4cd5-4233-8d3b-650401885991 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 16:59:44 compute-0 nova_compute[254092]: 2025-11-25 16:59:44.587 254096 DEBUG nova.compute.manager [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Instance event wait completed in 7 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 16:59:44 compute-0 nova_compute[254092]: 2025-11-25 16:59:44.588 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:44 compute-0 nova_compute[254092]: 2025-11-25 16:59:44.591 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764089984.5914507, d643dc57-9536-4a67-9a17-c20512710ea5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 16:59:44 compute-0 nova_compute[254092]: 2025-11-25 16:59:44.591 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] VM Resumed (Lifecycle Event)
Nov 25 16:59:44 compute-0 nova_compute[254092]: 2025-11-25 16:59:44.593 254096 DEBUG nova.virt.libvirt.driver [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 16:59:44 compute-0 nova_compute[254092]: 2025-11-25 16:59:44.595 254096 INFO nova.virt.libvirt.driver [-] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Instance spawned successfully.
Nov 25 16:59:44 compute-0 nova_compute[254092]: 2025-11-25 16:59:44.595 254096 DEBUG nova.virt.libvirt.driver [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 16:59:44 compute-0 nova_compute[254092]: 2025-11-25 16:59:44.618 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:59:44 compute-0 nova_compute[254092]: 2025-11-25 16:59:44.622 254096 DEBUG nova.virt.libvirt.driver [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:59:44 compute-0 nova_compute[254092]: 2025-11-25 16:59:44.623 254096 DEBUG nova.virt.libvirt.driver [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:59:44 compute-0 nova_compute[254092]: 2025-11-25 16:59:44.623 254096 DEBUG nova.virt.libvirt.driver [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:59:44 compute-0 nova_compute[254092]: 2025-11-25 16:59:44.623 254096 DEBUG nova.virt.libvirt.driver [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:59:44 compute-0 nova_compute[254092]: 2025-11-25 16:59:44.624 254096 DEBUG nova.virt.libvirt.driver [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:59:44 compute-0 nova_compute[254092]: 2025-11-25 16:59:44.624 254096 DEBUG nova.virt.libvirt.driver [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 16:59:44 compute-0 nova_compute[254092]: 2025-11-25 16:59:44.628 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 16:59:44 compute-0 nova_compute[254092]: 2025-11-25 16:59:44.663 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 16:59:44 compute-0 nova_compute[254092]: 2025-11-25 16:59:44.711 254096 INFO nova.compute.manager [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Took 13.73 seconds to spawn the instance on the hypervisor.
Nov 25 16:59:44 compute-0 nova_compute[254092]: 2025-11-25 16:59:44.711 254096 DEBUG nova.compute.manager [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 16:59:44 compute-0 nova_compute[254092]: 2025-11-25 16:59:44.787 254096 INFO nova.compute.manager [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Took 14.66 seconds to build instance.
Nov 25 16:59:44 compute-0 nova_compute[254092]: 2025-11-25 16:59:44.815 254096 DEBUG oslo_concurrency.lockutils [None req-4cebfbce-c9a0-4ffa-be15-e77489b99bd7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "d643dc57-9536-4a67-9a17-c20512710ea5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:59:45 compute-0 nova_compute[254092]: 2025-11-25 16:59:45.548 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:45 compute-0 ceph-mon[74985]: pgmap v2286: 321 pgs: 321 active+clean; 88 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Nov 25 16:59:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2287: 321 pgs: 321 active+clean; 88 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 135 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Nov 25 16:59:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:59:46.290 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 16:59:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:59:46.291 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 16:59:46 compute-0 nova_compute[254092]: 2025-11-25 16:59:46.292 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:59:46 compute-0 nova_compute[254092]: 2025-11-25 16:59:46.682 254096 DEBUG nova.compute.manager [req-45b277b5-24bf-4b6e-9662-19e9c8b83b99 req-f995cbd5-f6fe-47cb-90cc-3f018d1c2ab5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Received event network-vif-plugged-12c882d8-4cd5-4233-8d3b-650401885991 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:59:46 compute-0 nova_compute[254092]: 2025-11-25 16:59:46.683 254096 DEBUG oslo_concurrency.lockutils [req-45b277b5-24bf-4b6e-9662-19e9c8b83b99 req-f995cbd5-f6fe-47cb-90cc-3f018d1c2ab5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "d643dc57-9536-4a67-9a17-c20512710ea5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 16:59:46 compute-0 nova_compute[254092]: 2025-11-25 16:59:46.683 254096 DEBUG oslo_concurrency.lockutils [req-45b277b5-24bf-4b6e-9662-19e9c8b83b99 req-f995cbd5-f6fe-47cb-90cc-3f018d1c2ab5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d643dc57-9536-4a67-9a17-c20512710ea5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 16:59:46 compute-0 nova_compute[254092]: 2025-11-25 16:59:46.683 254096 DEBUG oslo_concurrency.lockutils [req-45b277b5-24bf-4b6e-9662-19e9c8b83b99 req-f995cbd5-f6fe-47cb-90cc-3f018d1c2ab5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d643dc57-9536-4a67-9a17-c20512710ea5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 16:59:46 compute-0 nova_compute[254092]: 2025-11-25 16:59:46.683 254096 DEBUG nova.compute.manager [req-45b277b5-24bf-4b6e-9662-19e9c8b83b99 req-f995cbd5-f6fe-47cb-90cc-3f018d1c2ab5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] No waiting events found dispatching network-vif-plugged-12c882d8-4cd5-4233-8d3b-650401885991 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 16:59:46 compute-0 nova_compute[254092]: 2025-11-25 16:59:46.683 254096 WARNING nova.compute.manager [req-45b277b5-24bf-4b6e-9662-19e9c8b83b99 req-f995cbd5-f6fe-47cb-90cc-3f018d1c2ab5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Received unexpected event network-vif-plugged-12c882d8-4cd5-4233-8d3b-650401885991 for instance with vm_state active and task_state None.
Nov 25 16:59:47 compute-0 ceph-mon[74985]: pgmap v2287: 321 pgs: 321 active+clean; 88 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 135 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Nov 25 16:59:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2288: 321 pgs: 321 active+clean; 88 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 117 KiB/s rd, 12 KiB/s wr, 13 op/s
Nov 25 16:59:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 16:59:49.293 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 16:59:49 compute-0 nova_compute[254092]: 2025-11-25 16:59:49.591 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:49 compute-0 ceph-mon[74985]: pgmap v2288: 321 pgs: 321 active+clean; 88 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 117 KiB/s rd, 12 KiB/s wr, 13 op/s
Nov 25 16:59:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2289: 321 pgs: 321 active+clean; 88 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 16:59:50 compute-0 nova_compute[254092]: 2025-11-25 16:59:50.586 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:50 compute-0 sudo[376778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:59:50 compute-0 sudo[376778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:59:50 compute-0 sudo[376778]: pam_unix(sudo:session): session closed for user root
Nov 25 16:59:50 compute-0 sudo[376803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:59:50 compute-0 sudo[376803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:59:50 compute-0 sudo[376803]: pam_unix(sudo:session): session closed for user root
Nov 25 16:59:50 compute-0 NetworkManager[48891]: <info>  [1764089990.8900] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/472)
Nov 25 16:59:50 compute-0 ovn_controller[153477]: 2025-11-25T16:59:50Z|01151|binding|INFO|Releasing lport ecc3dd32-bc67-4be1-9d9a-caf7485b6c03 from this chassis (sb_readonly=0)
Nov 25 16:59:50 compute-0 nova_compute[254092]: 2025-11-25 16:59:50.887 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:50 compute-0 NetworkManager[48891]: <info>  [1764089990.8917] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/473)
Nov 25 16:59:50 compute-0 nova_compute[254092]: 2025-11-25 16:59:50.919 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:50 compute-0 ovn_controller[153477]: 2025-11-25T16:59:50Z|01152|binding|INFO|Releasing lport ecc3dd32-bc67-4be1-9d9a-caf7485b6c03 from this chassis (sb_readonly=0)
Nov 25 16:59:50 compute-0 nova_compute[254092]: 2025-11-25 16:59:50.926 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:50 compute-0 sudo[376828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:59:50 compute-0 sudo[376828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:59:50 compute-0 sudo[376828]: pam_unix(sudo:session): session closed for user root
Nov 25 16:59:50 compute-0 sudo[376854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 16:59:50 compute-0 sudo[376854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:59:51 compute-0 sudo[376854]: pam_unix(sudo:session): session closed for user root
Nov 25 16:59:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 16:59:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:59:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 16:59:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:59:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Nov 25 16:59:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:59:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:59:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:59:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:59:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:59:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 16:59:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:59:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 16:59:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:59:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:59:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:59:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 16:59:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:59:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 16:59:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:59:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 16:59:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 16:59:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 16:59:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:59:51 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:59:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 16:59:51 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:59:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 16:59:51 compute-0 nova_compute[254092]: 2025-11-25 16:59:51.519 254096 DEBUG nova.compute.manager [req-e30e9399-5c08-4616-af1b-a1c7967a7cb6 req-e014aeb3-7682-4719-9367-08be93b39cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Received event network-changed-12c882d8-4cd5-4233-8d3b-650401885991 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 16:59:51 compute-0 nova_compute[254092]: 2025-11-25 16:59:51.520 254096 DEBUG nova.compute.manager [req-e30e9399-5c08-4616-af1b-a1c7967a7cb6 req-e014aeb3-7682-4719-9367-08be93b39cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Refreshing instance network info cache due to event network-changed-12c882d8-4cd5-4233-8d3b-650401885991. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 16:59:51 compute-0 nova_compute[254092]: 2025-11-25 16:59:51.520 254096 DEBUG oslo_concurrency.lockutils [req-e30e9399-5c08-4616-af1b-a1c7967a7cb6 req-e014aeb3-7682-4719-9367-08be93b39cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-d643dc57-9536-4a67-9a17-c20512710ea5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 16:59:51 compute-0 nova_compute[254092]: 2025-11-25 16:59:51.520 254096 DEBUG oslo_concurrency.lockutils [req-e30e9399-5c08-4616-af1b-a1c7967a7cb6 req-e014aeb3-7682-4719-9367-08be93b39cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-d643dc57-9536-4a67-9a17-c20512710ea5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 16:59:51 compute-0 nova_compute[254092]: 2025-11-25 16:59:51.520 254096 DEBUG nova.network.neutron [req-e30e9399-5c08-4616-af1b-a1c7967a7cb6 req-e014aeb3-7682-4719-9367-08be93b39cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Refreshing network info cache for port 12c882d8-4cd5-4233-8d3b-650401885991 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 16:59:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:59:51 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:59:51 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 299318d3-c3e7-4744-b840-9b6797c7f81c does not exist
Nov 25 16:59:51 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev c368e4f1-62df-42cd-a7df-7ec2643c664f does not exist
Nov 25 16:59:51 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 8482a452-8b31-4723-9aeb-0a687675b3b7 does not exist
Nov 25 16:59:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 16:59:51 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:59:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 16:59:51 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:59:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 16:59:51 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:59:51 compute-0 sudo[376910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:59:51 compute-0 sudo[376910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:59:51 compute-0 sudo[376910]: pam_unix(sudo:session): session closed for user root
Nov 25 16:59:51 compute-0 sudo[376935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:59:51 compute-0 sudo[376935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:59:51 compute-0 sudo[376935]: pam_unix(sudo:session): session closed for user root
Nov 25 16:59:51 compute-0 sudo[376960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:59:51 compute-0 sudo[376960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:59:51 compute-0 sudo[376960]: pam_unix(sudo:session): session closed for user root
Nov 25 16:59:51 compute-0 sudo[376985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 16:59:51 compute-0 sudo[376985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:59:51 compute-0 ceph-mon[74985]: pgmap v2289: 321 pgs: 321 active+clean; 88 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 16:59:51 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:59:51 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 16:59:51 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 16:59:51 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 16:59:51 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 16:59:51 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 16:59:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2290: 321 pgs: 321 active+clean; 88 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 255 B/s wr, 65 op/s
Nov 25 16:59:52 compute-0 podman[377050]: 2025-11-25 16:59:52.17004442 +0000 UTC m=+0.027337826 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:59:52 compute-0 podman[377050]: 2025-11-25 16:59:52.27910049 +0000 UTC m=+0.136393856 container create ad4894b485834a1c16e7ceab23a2cc8faa9b6e1c8ffc895aea78f4daba4a4a36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jennings, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 16:59:52 compute-0 systemd[1]: Started libpod-conmon-ad4894b485834a1c16e7ceab23a2cc8faa9b6e1c8ffc895aea78f4daba4a4a36.scope.
Nov 25 16:59:52 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:59:52 compute-0 podman[377050]: 2025-11-25 16:59:52.457276754 +0000 UTC m=+0.314570140 container init ad4894b485834a1c16e7ceab23a2cc8faa9b6e1c8ffc895aea78f4daba4a4a36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 16:59:52 compute-0 podman[377050]: 2025-11-25 16:59:52.464482159 +0000 UTC m=+0.321775515 container start ad4894b485834a1c16e7ceab23a2cc8faa9b6e1c8ffc895aea78f4daba4a4a36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:59:52 compute-0 zen_jennings[377067]: 167 167
Nov 25 16:59:52 compute-0 systemd[1]: libpod-ad4894b485834a1c16e7ceab23a2cc8faa9b6e1c8ffc895aea78f4daba4a4a36.scope: Deactivated successfully.
Nov 25 16:59:52 compute-0 podman[377050]: 2025-11-25 16:59:52.641935903 +0000 UTC m=+0.499229289 container attach ad4894b485834a1c16e7ceab23a2cc8faa9b6e1c8ffc895aea78f4daba4a4a36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jennings, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 16:59:52 compute-0 podman[377050]: 2025-11-25 16:59:52.643163936 +0000 UTC m=+0.500457302 container died ad4894b485834a1c16e7ceab23a2cc8faa9b6e1c8ffc895aea78f4daba4a4a36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jennings, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:59:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-85c109c12f18b211279d6f18e581e27e3849c471c414ee3fc9af08b236fcb541-merged.mount: Deactivated successfully.
Nov 25 16:59:53 compute-0 ceph-mon[74985]: pgmap v2290: 321 pgs: 321 active+clean; 88 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 255 B/s wr, 65 op/s
Nov 25 16:59:53 compute-0 podman[377050]: 2025-11-25 16:59:53.440133532 +0000 UTC m=+1.297426928 container remove ad4894b485834a1c16e7ceab23a2cc8faa9b6e1c8ffc895aea78f4daba4a4a36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jennings, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:59:53 compute-0 systemd[1]: libpod-conmon-ad4894b485834a1c16e7ceab23a2cc8faa9b6e1c8ffc895aea78f4daba4a4a36.scope: Deactivated successfully.
Nov 25 16:59:53 compute-0 podman[377092]: 2025-11-25 16:59:53.700418901 +0000 UTC m=+0.090598728 container create 6c5e503f0b88d53244a994be1f77da7b067193367098825b21d4bfe15934fdb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_shannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 16:59:53 compute-0 podman[377092]: 2025-11-25 16:59:53.651857109 +0000 UTC m=+0.042036936 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:59:53 compute-0 systemd[1]: Started libpod-conmon-6c5e503f0b88d53244a994be1f77da7b067193367098825b21d4bfe15934fdb7.scope.
Nov 25 16:59:53 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:59:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b79bdc654c7ccab54da0234f74c6802e60cd760d58cfb8fa9c2697629eb46018/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:59:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b79bdc654c7ccab54da0234f74c6802e60cd760d58cfb8fa9c2697629eb46018/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:59:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b79bdc654c7ccab54da0234f74c6802e60cd760d58cfb8fa9c2697629eb46018/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:59:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b79bdc654c7ccab54da0234f74c6802e60cd760d58cfb8fa9c2697629eb46018/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:59:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b79bdc654c7ccab54da0234f74c6802e60cd760d58cfb8fa9c2697629eb46018/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 16:59:53 compute-0 nova_compute[254092]: 2025-11-25 16:59:53.866 254096 DEBUG nova.network.neutron [req-e30e9399-5c08-4616-af1b-a1c7967a7cb6 req-e014aeb3-7682-4719-9367-08be93b39cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Updated VIF entry in instance network info cache for port 12c882d8-4cd5-4233-8d3b-650401885991. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 16:59:53 compute-0 nova_compute[254092]: 2025-11-25 16:59:53.867 254096 DEBUG nova.network.neutron [req-e30e9399-5c08-4616-af1b-a1c7967a7cb6 req-e014aeb3-7682-4719-9367-08be93b39cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Updating instance_info_cache with network_info: [{"id": "12c882d8-4cd5-4233-8d3b-650401885991", "address": "fa:16:3e:80:dd:57", "network": {"id": "22f446da-e1c5-4251-b5dd-3071154486f0", "bridge": "br-int", "label": "tempest-network-smoke--753652618", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12c882d8-4c", "ovs_interfaceid": "12c882d8-4cd5-4233-8d3b-650401885991", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 16:59:53 compute-0 podman[377092]: 2025-11-25 16:59:53.899089972 +0000 UTC m=+0.289269819 container init 6c5e503f0b88d53244a994be1f77da7b067193367098825b21d4bfe15934fdb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 16:59:53 compute-0 nova_compute[254092]: 2025-11-25 16:59:53.906 254096 DEBUG oslo_concurrency.lockutils [req-e30e9399-5c08-4616-af1b-a1c7967a7cb6 req-e014aeb3-7682-4719-9367-08be93b39cca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-d643dc57-9536-4a67-9a17-c20512710ea5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 16:59:53 compute-0 podman[377092]: 2025-11-25 16:59:53.909990169 +0000 UTC m=+0.300169956 container start 6c5e503f0b88d53244a994be1f77da7b067193367098825b21d4bfe15934fdb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_shannon, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 16:59:54 compute-0 podman[377092]: 2025-11-25 16:59:54.008279516 +0000 UTC m=+0.398459353 container attach 6c5e503f0b88d53244a994be1f77da7b067193367098825b21d4bfe15934fdb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 16:59:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2291: 321 pgs: 321 active+clean; 88 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Nov 25 16:59:54 compute-0 nova_compute[254092]: 2025-11-25 16:59:54.652 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:54 compute-0 friendly_shannon[377109]: --> passed data devices: 0 physical, 3 LVM
Nov 25 16:59:54 compute-0 friendly_shannon[377109]: --> relative data size: 1.0
Nov 25 16:59:54 compute-0 friendly_shannon[377109]: --> All data devices are unavailable
Nov 25 16:59:54 compute-0 systemd[1]: libpod-6c5e503f0b88d53244a994be1f77da7b067193367098825b21d4bfe15934fdb7.scope: Deactivated successfully.
Nov 25 16:59:54 compute-0 podman[377092]: 2025-11-25 16:59:54.932268322 +0000 UTC m=+1.322448129 container died 6c5e503f0b88d53244a994be1f77da7b067193367098825b21d4bfe15934fdb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_shannon, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 16:59:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-b79bdc654c7ccab54da0234f74c6802e60cd760d58cfb8fa9c2697629eb46018-merged.mount: Deactivated successfully.
Nov 25 16:59:55 compute-0 podman[377092]: 2025-11-25 16:59:55.111055911 +0000 UTC m=+1.501235748 container remove 6c5e503f0b88d53244a994be1f77da7b067193367098825b21d4bfe15934fdb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 16:59:55 compute-0 systemd[1]: libpod-conmon-6c5e503f0b88d53244a994be1f77da7b067193367098825b21d4bfe15934fdb7.scope: Deactivated successfully.
Nov 25 16:59:55 compute-0 sudo[376985]: pam_unix(sudo:session): session closed for user root
Nov 25 16:59:55 compute-0 sudo[377152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:59:55 compute-0 sudo[377152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:59:55 compute-0 sudo[377152]: pam_unix(sudo:session): session closed for user root
Nov 25 16:59:55 compute-0 sudo[377177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:59:55 compute-0 sudo[377177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:59:55 compute-0 sudo[377177]: pam_unix(sudo:session): session closed for user root
Nov 25 16:59:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 16:59:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/283618193' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:59:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 16:59:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/283618193' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:59:55 compute-0 sudo[377202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:59:55 compute-0 sudo[377202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:59:55 compute-0 sudo[377202]: pam_unix(sudo:session): session closed for user root
Nov 25 16:59:55 compute-0 sudo[377227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 16:59:55 compute-0 sudo[377227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:59:55 compute-0 ceph-mon[74985]: pgmap v2291: 321 pgs: 321 active+clean; 88 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Nov 25 16:59:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/283618193' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 16:59:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/283618193' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 16:59:55 compute-0 nova_compute[254092]: 2025-11-25 16:59:55.588 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:55 compute-0 podman[377293]: 2025-11-25 16:59:55.790836276 +0000 UTC m=+0.062095223 container create 4bc64ca10506b706ddac9b536cc3abd25aa59a6ef2760751ac63c4b0f2c8c4ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_greider, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:59:55 compute-0 podman[377293]: 2025-11-25 16:59:55.749828009 +0000 UTC m=+0.021086966 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:59:55 compute-0 systemd[1]: Started libpod-conmon-4bc64ca10506b706ddac9b536cc3abd25aa59a6ef2760751ac63c4b0f2c8c4ae.scope.
Nov 25 16:59:55 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:59:55 compute-0 podman[377293]: 2025-11-25 16:59:55.949054705 +0000 UTC m=+0.220313662 container init 4bc64ca10506b706ddac9b536cc3abd25aa59a6ef2760751ac63c4b0f2c8c4ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:59:55 compute-0 podman[377293]: 2025-11-25 16:59:55.955002297 +0000 UTC m=+0.226261224 container start 4bc64ca10506b706ddac9b536cc3abd25aa59a6ef2760751ac63c4b0f2c8c4ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 16:59:55 compute-0 strange_greider[377309]: 167 167
Nov 25 16:59:55 compute-0 systemd[1]: libpod-4bc64ca10506b706ddac9b536cc3abd25aa59a6ef2760751ac63c4b0f2c8c4ae.scope: Deactivated successfully.
Nov 25 16:59:55 compute-0 podman[377293]: 2025-11-25 16:59:55.985090266 +0000 UTC m=+0.256349233 container attach 4bc64ca10506b706ddac9b536cc3abd25aa59a6ef2760751ac63c4b0f2c8c4ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_greider, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:59:55 compute-0 podman[377293]: 2025-11-25 16:59:55.985553299 +0000 UTC m=+0.256812236 container died 4bc64ca10506b706ddac9b536cc3abd25aa59a6ef2760751ac63c4b0f2c8c4ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_greider, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:59:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-33726ae32b7b6bb533a44a04cc5e9a6d824182d3e094286ff03e0e2565e829b3-merged.mount: Deactivated successfully.
Nov 25 16:59:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2292: 321 pgs: 321 active+clean; 88 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 49 KiB/s wr, 67 op/s
Nov 25 16:59:56 compute-0 podman[377293]: 2025-11-25 16:59:56.257026853 +0000 UTC m=+0.528285790 container remove 4bc64ca10506b706ddac9b536cc3abd25aa59a6ef2760751ac63c4b0f2c8c4ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 16:59:56 compute-0 systemd[1]: libpod-conmon-4bc64ca10506b706ddac9b536cc3abd25aa59a6ef2760751ac63c4b0f2c8c4ae.scope: Deactivated successfully.
Nov 25 16:59:56 compute-0 podman[377335]: 2025-11-25 16:59:56.487918631 +0000 UTC m=+0.076864684 container create 11090318f7ceb581197f0091f1da404883861fdd9ecba464f58cc3d884249296 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:59:56 compute-0 podman[377335]: 2025-11-25 16:59:56.441199029 +0000 UTC m=+0.030145082 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:59:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 16:59:56 compute-0 systemd[1]: Started libpod-conmon-11090318f7ceb581197f0091f1da404883861fdd9ecba464f58cc3d884249296.scope.
Nov 25 16:59:56 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:59:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4ee9396678ceb285ab7a919fa7ad23d0e21542fdd33c0cdec14f675c9e6b1f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:59:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4ee9396678ceb285ab7a919fa7ad23d0e21542fdd33c0cdec14f675c9e6b1f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:59:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4ee9396678ceb285ab7a919fa7ad23d0e21542fdd33c0cdec14f675c9e6b1f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:59:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4ee9396678ceb285ab7a919fa7ad23d0e21542fdd33c0cdec14f675c9e6b1f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:59:56 compute-0 podman[377335]: 2025-11-25 16:59:56.636197159 +0000 UTC m=+0.225143232 container init 11090318f7ceb581197f0091f1da404883861fdd9ecba464f58cc3d884249296 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_rubin, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 16:59:56 compute-0 podman[377335]: 2025-11-25 16:59:56.644375122 +0000 UTC m=+0.233321175 container start 11090318f7ceb581197f0091f1da404883861fdd9ecba464f58cc3d884249296 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 16:59:56 compute-0 podman[377335]: 2025-11-25 16:59:56.668925921 +0000 UTC m=+0.257872054 container attach 11090318f7ceb581197f0091f1da404883861fdd9ecba464f58cc3d884249296 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_rubin, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 16:59:57 compute-0 ovn_controller[153477]: 2025-11-25T16:59:57Z|00120|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:80:dd:57 10.100.0.3
Nov 25 16:59:57 compute-0 ovn_controller[153477]: 2025-11-25T16:59:57Z|00121|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:80:dd:57 10.100.0.3
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]: {
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:     "0": [
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:         {
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:             "devices": [
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:                 "/dev/loop3"
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:             ],
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:             "lv_name": "ceph_lv0",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:             "lv_size": "21470642176",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:             "name": "ceph_lv0",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:             "tags": {
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:                 "ceph.cluster_name": "ceph",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:                 "ceph.crush_device_class": "",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:                 "ceph.encrypted": "0",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:                 "ceph.osd_id": "0",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:                 "ceph.type": "block",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:                 "ceph.vdo": "0"
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:             },
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:             "type": "block",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:             "vg_name": "ceph_vg0"
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:         }
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:     ],
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:     "1": [
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:         {
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:             "devices": [
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:                 "/dev/loop4"
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:             ],
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:             "lv_name": "ceph_lv1",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:             "lv_size": "21470642176",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:             "name": "ceph_lv1",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:             "tags": {
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:                 "ceph.cluster_name": "ceph",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:                 "ceph.crush_device_class": "",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:                 "ceph.encrypted": "0",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:                 "ceph.osd_id": "1",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:                 "ceph.type": "block",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:                 "ceph.vdo": "0"
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:             },
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:             "type": "block",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:             "vg_name": "ceph_vg1"
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:         }
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:     ],
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:     "2": [
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:         {
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:             "devices": [
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:                 "/dev/loop5"
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:             ],
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:             "lv_name": "ceph_lv2",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:             "lv_size": "21470642176",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:             "name": "ceph_lv2",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:             "tags": {
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:                 "ceph.cluster_name": "ceph",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:                 "ceph.crush_device_class": "",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:                 "ceph.encrypted": "0",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:                 "ceph.osd_id": "2",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:                 "ceph.type": "block",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:                 "ceph.vdo": "0"
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:             },
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:             "type": "block",
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:             "vg_name": "ceph_vg2"
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:         }
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]:     ]
Nov 25 16:59:57 compute-0 vigorous_rubin[377352]: }
Nov 25 16:59:57 compute-0 systemd[1]: libpod-11090318f7ceb581197f0091f1da404883861fdd9ecba464f58cc3d884249296.scope: Deactivated successfully.
Nov 25 16:59:57 compute-0 podman[377335]: 2025-11-25 16:59:57.408857103 +0000 UTC m=+0.997803156 container died 11090318f7ceb581197f0091f1da404883861fdd9ecba464f58cc3d884249296 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 16:59:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4ee9396678ceb285ab7a919fa7ad23d0e21542fdd33c0cdec14f675c9e6b1f8-merged.mount: Deactivated successfully.
Nov 25 16:59:57 compute-0 podman[377335]: 2025-11-25 16:59:57.619187292 +0000 UTC m=+1.208133325 container remove 11090318f7ceb581197f0091f1da404883861fdd9ecba464f58cc3d884249296 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_rubin, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 16:59:57 compute-0 ceph-mon[74985]: pgmap v2292: 321 pgs: 321 active+clean; 88 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 49 KiB/s wr, 67 op/s
Nov 25 16:59:57 compute-0 sudo[377227]: pam_unix(sudo:session): session closed for user root
Nov 25 16:59:57 compute-0 systemd[1]: libpod-conmon-11090318f7ceb581197f0091f1da404883861fdd9ecba464f58cc3d884249296.scope: Deactivated successfully.
Nov 25 16:59:57 compute-0 sudo[377373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:59:57 compute-0 sudo[377373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:59:57 compute-0 sudo[377373]: pam_unix(sudo:session): session closed for user root
Nov 25 16:59:57 compute-0 sudo[377398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 16:59:57 compute-0 sudo[377398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:59:57 compute-0 sudo[377398]: pam_unix(sudo:session): session closed for user root
Nov 25 16:59:57 compute-0 sudo[377423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 16:59:57 compute-0 sudo[377423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:59:57 compute-0 sudo[377423]: pam_unix(sudo:session): session closed for user root
Nov 25 16:59:57 compute-0 sudo[377448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 16:59:57 compute-0 sudo[377448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 16:59:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2293: 321 pgs: 321 active+clean; 88 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 49 KiB/s wr, 63 op/s
Nov 25 16:59:58 compute-0 podman[377513]: 2025-11-25 16:59:58.287873265 +0000 UTC m=+0.106699847 container create cdb605ffebc5501dee6dc9a0de078c677a761f77ad5a28cb456985463d4c6d9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_perlman, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 16:59:58 compute-0 podman[377513]: 2025-11-25 16:59:58.210065855 +0000 UTC m=+0.028892507 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:59:58 compute-0 systemd[1]: Started libpod-conmon-cdb605ffebc5501dee6dc9a0de078c677a761f77ad5a28cb456985463d4c6d9b.scope.
Nov 25 16:59:58 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:59:58 compute-0 podman[377513]: 2025-11-25 16:59:58.39381761 +0000 UTC m=+0.212644202 container init cdb605ffebc5501dee6dc9a0de078c677a761f77ad5a28cb456985463d4c6d9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_perlman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 16:59:58 compute-0 podman[377513]: 2025-11-25 16:59:58.400273926 +0000 UTC m=+0.219100538 container start cdb605ffebc5501dee6dc9a0de078c677a761f77ad5a28cb456985463d4c6d9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 16:59:58 compute-0 condescending_perlman[377529]: 167 167
Nov 25 16:59:58 compute-0 systemd[1]: libpod-cdb605ffebc5501dee6dc9a0de078c677a761f77ad5a28cb456985463d4c6d9b.scope: Deactivated successfully.
Nov 25 16:59:58 compute-0 podman[377513]: 2025-11-25 16:59:58.416162019 +0000 UTC m=+0.234988621 container attach cdb605ffebc5501dee6dc9a0de078c677a761f77ad5a28cb456985463d4c6d9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_perlman, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:59:58 compute-0 podman[377513]: 2025-11-25 16:59:58.416827427 +0000 UTC m=+0.235654029 container died cdb605ffebc5501dee6dc9a0de078c677a761f77ad5a28cb456985463d4c6d9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_perlman, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 16:59:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf6a85e3ade79185c54e11427104d69549b17203139983356917871973e7491f-merged.mount: Deactivated successfully.
Nov 25 16:59:58 compute-0 podman[377513]: 2025-11-25 16:59:58.556337147 +0000 UTC m=+0.375163759 container remove cdb605ffebc5501dee6dc9a0de078c677a761f77ad5a28cb456985463d4c6d9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_perlman, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 16:59:58 compute-0 systemd[1]: libpod-conmon-cdb605ffebc5501dee6dc9a0de078c677a761f77ad5a28cb456985463d4c6d9b.scope: Deactivated successfully.
Nov 25 16:59:58 compute-0 podman[377555]: 2025-11-25 16:59:58.748455689 +0000 UTC m=+0.058473644 container create 99d479f83d3ccce6d0cae715859ce461eeadde5fb11de809315c2cb49ae1f6d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 16:59:58 compute-0 podman[377555]: 2025-11-25 16:59:58.710707851 +0000 UTC m=+0.020725826 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 16:59:58 compute-0 systemd[1]: Started libpod-conmon-99d479f83d3ccce6d0cae715859ce461eeadde5fb11de809315c2cb49ae1f6d4.scope.
Nov 25 16:59:58 compute-0 systemd[1]: Started libcrun container.
Nov 25 16:59:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b591ac8093b33a0301f06f0a3e1a5d97e200ca0e626ac11838fa4014d1ce2453/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 16:59:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b591ac8093b33a0301f06f0a3e1a5d97e200ca0e626ac11838fa4014d1ce2453/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 16:59:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b591ac8093b33a0301f06f0a3e1a5d97e200ca0e626ac11838fa4014d1ce2453/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 16:59:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b591ac8093b33a0301f06f0a3e1a5d97e200ca0e626ac11838fa4014d1ce2453/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 16:59:58 compute-0 podman[377555]: 2025-11-25 16:59:58.893299534 +0000 UTC m=+0.203317569 container init 99d479f83d3ccce6d0cae715859ce461eeadde5fb11de809315c2cb49ae1f6d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 16:59:58 compute-0 podman[377555]: 2025-11-25 16:59:58.900529541 +0000 UTC m=+0.210547526 container start 99d479f83d3ccce6d0cae715859ce461eeadde5fb11de809315c2cb49ae1f6d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_swirles, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 16:59:58 compute-0 podman[377555]: 2025-11-25 16:59:58.941683241 +0000 UTC m=+0.251701276 container attach 99d479f83d3ccce6d0cae715859ce461eeadde5fb11de809315c2cb49ae1f6d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_swirles, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 16:59:59 compute-0 nova_compute[254092]: 2025-11-25 16:59:59.654 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 16:59:59 compute-0 ceph-mon[74985]: pgmap v2293: 321 pgs: 321 active+clean; 88 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 49 KiB/s wr, 63 op/s
Nov 25 16:59:59 compute-0 heuristic_swirles[377571]: {
Nov 25 16:59:59 compute-0 heuristic_swirles[377571]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 16:59:59 compute-0 heuristic_swirles[377571]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:59:59 compute-0 heuristic_swirles[377571]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 16:59:59 compute-0 heuristic_swirles[377571]:         "osd_id": 1,
Nov 25 16:59:59 compute-0 heuristic_swirles[377571]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 16:59:59 compute-0 heuristic_swirles[377571]:         "type": "bluestore"
Nov 25 16:59:59 compute-0 heuristic_swirles[377571]:     },
Nov 25 16:59:59 compute-0 heuristic_swirles[377571]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 16:59:59 compute-0 heuristic_swirles[377571]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:59:59 compute-0 heuristic_swirles[377571]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 16:59:59 compute-0 heuristic_swirles[377571]:         "osd_id": 2,
Nov 25 16:59:59 compute-0 heuristic_swirles[377571]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 16:59:59 compute-0 heuristic_swirles[377571]:         "type": "bluestore"
Nov 25 16:59:59 compute-0 heuristic_swirles[377571]:     },
Nov 25 16:59:59 compute-0 heuristic_swirles[377571]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 16:59:59 compute-0 heuristic_swirles[377571]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 16:59:59 compute-0 heuristic_swirles[377571]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 16:59:59 compute-0 heuristic_swirles[377571]:         "osd_id": 0,
Nov 25 16:59:59 compute-0 heuristic_swirles[377571]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 16:59:59 compute-0 heuristic_swirles[377571]:         "type": "bluestore"
Nov 25 16:59:59 compute-0 heuristic_swirles[377571]:     }
Nov 25 16:59:59 compute-0 heuristic_swirles[377571]: }
Nov 25 16:59:59 compute-0 systemd[1]: libpod-99d479f83d3ccce6d0cae715859ce461eeadde5fb11de809315c2cb49ae1f6d4.scope: Deactivated successfully.
Nov 25 16:59:59 compute-0 podman[377555]: 2025-11-25 16:59:59.90137828 +0000 UTC m=+1.211396245 container died 99d479f83d3ccce6d0cae715859ce461eeadde5fb11de809315c2cb49ae1f6d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_swirles, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 16:59:59 compute-0 systemd[1]: libpod-99d479f83d3ccce6d0cae715859ce461eeadde5fb11de809315c2cb49ae1f6d4.scope: Consumed 1.003s CPU time.
Nov 25 16:59:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-b591ac8093b33a0301f06f0a3e1a5d97e200ca0e626ac11838fa4014d1ce2453-merged.mount: Deactivated successfully.
Nov 25 17:00:00 compute-0 podman[377555]: 2025-11-25 17:00:00.049111794 +0000 UTC m=+1.359129749 container remove 99d479f83d3ccce6d0cae715859ce461eeadde5fb11de809315c2cb49ae1f6d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_swirles, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 17:00:00 compute-0 systemd[1]: libpod-conmon-99d479f83d3ccce6d0cae715859ce461eeadde5fb11de809315c2cb49ae1f6d4.scope: Deactivated successfully.
Nov 25 17:00:00 compute-0 sudo[377448]: pam_unix(sudo:session): session closed for user root
Nov 25 17:00:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:00:00 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:00:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:00:00 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:00:00 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 5b0b0f24-b4bb-4a12-822e-03137a56220d does not exist
Nov 25 17:00:00 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 41fa2b6d-3c3d-4512-a448-85c15d073042 does not exist
Nov 25 17:00:00 compute-0 sudo[377617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:00:00 compute-0 sudo[377617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:00:00 compute-0 sudo[377617]: pam_unix(sudo:session): session closed for user root
Nov 25 17:00:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2294: 321 pgs: 321 active+clean; 112 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.5 MiB/s wr, 106 op/s
Nov 25 17:00:00 compute-0 sudo[377642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:00:00 compute-0 sudo[377642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:00:00 compute-0 sudo[377642]: pam_unix(sudo:session): session closed for user root
Nov 25 17:00:00 compute-0 nova_compute[254092]: 2025-11-25 17:00:00.590 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:01 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:00:01 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:00:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:00:01 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #105. Immutable memtables: 0.
Nov 25 17:00:01 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:01.658502) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 17:00:01 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 105
Nov 25 17:00:01 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090001658529, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 656, "num_deletes": 257, "total_data_size": 774169, "memory_usage": 787768, "flush_reason": "Manual Compaction"}
Nov 25 17:00:01 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #106: started
Nov 25 17:00:01 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090001853912, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 106, "file_size": 767463, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47349, "largest_seqno": 48004, "table_properties": {"data_size": 763908, "index_size": 1399, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7908, "raw_average_key_size": 18, "raw_value_size": 756830, "raw_average_value_size": 1806, "num_data_blocks": 62, "num_entries": 419, "num_filter_entries": 419, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764089952, "oldest_key_time": 1764089952, "file_creation_time": 1764090001, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 106, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:00:01 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 195462 microseconds, and 2550 cpu microseconds.
Nov 25 17:00:01 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:00:01 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:01.853958) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #106: 767463 bytes OK
Nov 25 17:00:01 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:01.853979) [db/memtable_list.cc:519] [default] Level-0 commit table #106 started
Nov 25 17:00:01 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:01.990218) [db/memtable_list.cc:722] [default] Level-0 commit table #106: memtable #1 done
Nov 25 17:00:01 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:01.990260) EVENT_LOG_v1 {"time_micros": 1764090001990250, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 17:00:01 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:01.990282) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 17:00:01 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 770640, prev total WAL file size 770640, number of live WAL files 2.
Nov 25 17:00:01 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000102.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:00:01 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:01.990967) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373630' seq:72057594037927935, type:22 .. '6C6F676D0032303133' seq:0, type:0; will stop at (end)
Nov 25 17:00:01 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 17:00:01 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [106(749KB)], [104(10MB)]
Nov 25 17:00:01 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090001990990, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [106], "files_L6": [104], "score": -1, "input_data_size": 11336674, "oldest_snapshot_seqno": -1}
Nov 25 17:00:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2295: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 372 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 25 17:00:02 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #107: 7003 keys, 11207180 bytes, temperature: kUnknown
Nov 25 17:00:02 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090002266774, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 107, "file_size": 11207180, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11157705, "index_size": 30867, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17541, "raw_key_size": 182338, "raw_average_key_size": 26, "raw_value_size": 11029466, "raw_average_value_size": 1574, "num_data_blocks": 1216, "num_entries": 7003, "num_filter_entries": 7003, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764090001, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:00:02 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:00:02 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:02.267131) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 11207180 bytes
Nov 25 17:00:02 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:02.343937) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 41.1 rd, 40.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 10.1 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(29.4) write-amplify(14.6) OK, records in: 7529, records dropped: 526 output_compression: NoCompression
Nov 25 17:00:02 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:02.343974) EVENT_LOG_v1 {"time_micros": 1764090002343961, "job": 62, "event": "compaction_finished", "compaction_time_micros": 275937, "compaction_time_cpu_micros": 24437, "output_level": 6, "num_output_files": 1, "total_output_size": 11207180, "num_input_records": 7529, "num_output_records": 7003, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 17:00:02 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000106.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:00:02 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090002344274, "job": 62, "event": "table_file_deletion", "file_number": 106}
Nov 25 17:00:02 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:00:02 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090002345998, "job": 62, "event": "table_file_deletion", "file_number": 104}
Nov 25 17:00:02 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:01.990898) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:00:02 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:02.346096) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:00:02 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:02.346101) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:00:02 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:02.346103) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:00:02 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:02.346104) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:00:02 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:02.346106) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:00:02 compute-0 ceph-mon[74985]: pgmap v2294: 321 pgs: 321 active+clean; 112 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.5 MiB/s wr, 106 op/s
Nov 25 17:00:03 compute-0 ceph-mon[74985]: pgmap v2295: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 372 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 25 17:00:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2296: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 372 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 25 17:00:04 compute-0 nova_compute[254092]: 2025-11-25 17:00:04.332 254096 INFO nova.compute.manager [None req-5c54b0bc-a485-4cf5-bfb4-fbc2f3ecabda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Get console output
Nov 25 17:00:04 compute-0 nova_compute[254092]: 2025-11-25 17:00:04.339 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 17:00:04 compute-0 nova_compute[254092]: 2025-11-25 17:00:04.658 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:05 compute-0 nova_compute[254092]: 2025-11-25 17:00:05.597 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:05 compute-0 ceph-mon[74985]: pgmap v2296: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 372 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 25 17:00:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2297: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 372 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 25 17:00:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:00:07 compute-0 podman[377668]: 2025-11-25 17:00:07.653340811 +0000 UTC m=+0.063954353 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 25 17:00:07 compute-0 podman[377667]: 2025-11-25 17:00:07.664527616 +0000 UTC m=+0.077452141 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible)
Nov 25 17:00:07 compute-0 podman[377669]: 2025-11-25 17:00:07.689481395 +0000 UTC m=+0.098774111 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 17:00:07 compute-0 ceph-mon[74985]: pgmap v2297: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 372 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 25 17:00:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2298: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 359 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 25 17:00:09 compute-0 nova_compute[254092]: 2025-11-25 17:00:09.661 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:09 compute-0 ceph-mon[74985]: pgmap v2298: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 359 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 25 17:00:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:00:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:00:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:00:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:00:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:00:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:00:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2299: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 364 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:00:10 compute-0 nova_compute[254092]: 2025-11-25 17:00:10.600 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:00:11 compute-0 ceph-mon[74985]: pgmap v2299: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 364 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:00:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2300: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 625 KiB/s wr, 20 op/s
Nov 25 17:00:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:13.636 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:00:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:13.637 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:00:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:13.638 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:00:13 compute-0 ceph-mon[74985]: pgmap v2300: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 625 KiB/s wr, 20 op/s
Nov 25 17:00:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2301: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 6.0 KiB/s rd, 13 KiB/s wr, 0 op/s
Nov 25 17:00:14 compute-0 nova_compute[254092]: 2025-11-25 17:00:14.662 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:15 compute-0 nova_compute[254092]: 2025-11-25 17:00:15.603 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:15 compute-0 ceph-mon[74985]: pgmap v2301: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 6.0 KiB/s rd, 13 KiB/s wr, 0 op/s
Nov 25 17:00:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2302: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 6.0 KiB/s rd, 16 KiB/s wr, 1 op/s
Nov 25 17:00:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:00:17 compute-0 ceph-mon[74985]: pgmap v2302: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 6.0 KiB/s rd, 16 KiB/s wr, 1 op/s
Nov 25 17:00:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2303: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 5.3 KiB/s rd, 3.3 KiB/s wr, 0 op/s
Nov 25 17:00:19 compute-0 nova_compute[254092]: 2025-11-25 17:00:19.664 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:20 compute-0 ceph-mon[74985]: pgmap v2303: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 5.3 KiB/s rd, 3.3 KiB/s wr, 0 op/s
Nov 25 17:00:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2304: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 5.3 KiB/s rd, 3.3 KiB/s wr, 0 op/s
Nov 25 17:00:20 compute-0 nova_compute[254092]: 2025-11-25 17:00:20.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:00:20 compute-0 nova_compute[254092]: 2025-11-25 17:00:20.605 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:21 compute-0 nova_compute[254092]: 2025-11-25 17:00:21.408 254096 DEBUG oslo_concurrency.lockutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "810cd712-872a-49a6-bf45-04a319ba8d57" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:00:21 compute-0 nova_compute[254092]: 2025-11-25 17:00:21.408 254096 DEBUG oslo_concurrency.lockutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "810cd712-872a-49a6-bf45-04a319ba8d57" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:00:21 compute-0 nova_compute[254092]: 2025-11-25 17:00:21.422 254096 DEBUG nova.compute.manager [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 17:00:21 compute-0 nova_compute[254092]: 2025-11-25 17:00:21.511 254096 DEBUG oslo_concurrency.lockutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:00:21 compute-0 nova_compute[254092]: 2025-11-25 17:00:21.511 254096 DEBUG oslo_concurrency.lockutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:00:21 compute-0 nova_compute[254092]: 2025-11-25 17:00:21.520 254096 DEBUG nova.virt.hardware [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 17:00:21 compute-0 nova_compute[254092]: 2025-11-25 17:00:21.520 254096 INFO nova.compute.claims [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Claim successful on node compute-0.ctlplane.example.com
Nov 25 17:00:21 compute-0 nova_compute[254092]: 2025-11-25 17:00:21.620 254096 DEBUG oslo_concurrency.processutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:00:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:00:21 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #108. Immutable memtables: 0.
Nov 25 17:00:21 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:21.642491) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 17:00:21 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 108
Nov 25 17:00:21 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090021642512, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 396, "num_deletes": 251, "total_data_size": 293062, "memory_usage": 300208, "flush_reason": "Manual Compaction"}
Nov 25 17:00:21 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #109: started
Nov 25 17:00:21 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090021645497, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 109, "file_size": 237486, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 48005, "largest_seqno": 48400, "table_properties": {"data_size": 235205, "index_size": 445, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 6163, "raw_average_key_size": 20, "raw_value_size": 230675, "raw_average_value_size": 756, "num_data_blocks": 20, "num_entries": 305, "num_filter_entries": 305, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764090002, "oldest_key_time": 1764090002, "file_creation_time": 1764090021, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 109, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:00:21 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 3034 microseconds, and 878 cpu microseconds.
Nov 25 17:00:21 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:00:21 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:21.645525) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #109: 237486 bytes OK
Nov 25 17:00:21 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:21.645537) [db/memtable_list.cc:519] [default] Level-0 commit table #109 started
Nov 25 17:00:21 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:21.646534) [db/memtable_list.cc:722] [default] Level-0 commit table #109: memtable #1 done
Nov 25 17:00:21 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:21.646542) EVENT_LOG_v1 {"time_micros": 1764090021646539, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 17:00:21 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:21.646554) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 17:00:21 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 290543, prev total WAL file size 290543, number of live WAL files 2.
Nov 25 17:00:21 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000105.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:00:21 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:21.646862) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373538' seq:72057594037927935, type:22 .. '6D6772737461740032303130' seq:0, type:0; will stop at (end)
Nov 25 17:00:21 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 17:00:21 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [109(231KB)], [107(10MB)]
Nov 25 17:00:21 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090021647038, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [109], "files_L6": [107], "score": -1, "input_data_size": 11444666, "oldest_snapshot_seqno": -1}
Nov 25 17:00:21 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #110: 6803 keys, 8194578 bytes, temperature: kUnknown
Nov 25 17:00:21 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090021712513, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 110, "file_size": 8194578, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8151080, "index_size": 25402, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17029, "raw_key_size": 178356, "raw_average_key_size": 26, "raw_value_size": 8030952, "raw_average_value_size": 1180, "num_data_blocks": 988, "num_entries": 6803, "num_filter_entries": 6803, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764090021, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:00:21 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:00:21 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:21.712771) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 8194578 bytes
Nov 25 17:00:21 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:21.714593) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 174.7 rd, 125.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 10.7 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(82.7) write-amplify(34.5) OK, records in: 7308, records dropped: 505 output_compression: NoCompression
Nov 25 17:00:21 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:21.714610) EVENT_LOG_v1 {"time_micros": 1764090021714602, "job": 64, "event": "compaction_finished", "compaction_time_micros": 65499, "compaction_time_cpu_micros": 27762, "output_level": 6, "num_output_files": 1, "total_output_size": 8194578, "num_input_records": 7308, "num_output_records": 6803, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 17:00:21 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000109.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:00:21 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090021714881, "job": 64, "event": "table_file_deletion", "file_number": 109}
Nov 25 17:00:21 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:00:21 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090021716687, "job": 64, "event": "table_file_deletion", "file_number": 107}
Nov 25 17:00:21 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:21.646794) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:00:21 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:21.716785) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:00:21 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:21.716789) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:00:21 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:21.716790) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:00:21 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:21.716792) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:00:21 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:00:21.716793) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:00:22 compute-0 ceph-mon[74985]: pgmap v2304: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 5.3 KiB/s rd, 3.3 KiB/s wr, 0 op/s
Nov 25 17:00:22 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:00:22 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/494636634' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:00:22 compute-0 nova_compute[254092]: 2025-11-25 17:00:22.157 254096 DEBUG oslo_concurrency.processutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:00:22 compute-0 nova_compute[254092]: 2025-11-25 17:00:22.166 254096 DEBUG nova.compute.provider_tree [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:00:22 compute-0 nova_compute[254092]: 2025-11-25 17:00:22.181 254096 DEBUG nova.scheduler.client.report [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:00:22 compute-0 nova_compute[254092]: 2025-11-25 17:00:22.210 254096 DEBUG oslo_concurrency.lockutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:00:22 compute-0 nova_compute[254092]: 2025-11-25 17:00:22.211 254096 DEBUG nova.compute.manager [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 17:00:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2305: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 3.3 KiB/s wr, 0 op/s
Nov 25 17:00:22 compute-0 nova_compute[254092]: 2025-11-25 17:00:22.252 254096 DEBUG nova.compute.manager [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 17:00:22 compute-0 nova_compute[254092]: 2025-11-25 17:00:22.253 254096 DEBUG nova.network.neutron [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 17:00:22 compute-0 nova_compute[254092]: 2025-11-25 17:00:22.269 254096 INFO nova.virt.libvirt.driver [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 17:00:22 compute-0 nova_compute[254092]: 2025-11-25 17:00:22.281 254096 DEBUG nova.compute.manager [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 17:00:22 compute-0 nova_compute[254092]: 2025-11-25 17:00:22.372 254096 DEBUG nova.compute.manager [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 17:00:22 compute-0 nova_compute[254092]: 2025-11-25 17:00:22.373 254096 DEBUG nova.virt.libvirt.driver [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 17:00:22 compute-0 nova_compute[254092]: 2025-11-25 17:00:22.374 254096 INFO nova.virt.libvirt.driver [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Creating image(s)
Nov 25 17:00:22 compute-0 nova_compute[254092]: 2025-11-25 17:00:22.406 254096 DEBUG nova.storage.rbd_utils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 810cd712-872a-49a6-bf45-04a319ba8d57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:00:22 compute-0 nova_compute[254092]: 2025-11-25 17:00:22.434 254096 DEBUG nova.storage.rbd_utils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 810cd712-872a-49a6-bf45-04a319ba8d57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:00:22 compute-0 nova_compute[254092]: 2025-11-25 17:00:22.463 254096 DEBUG nova.storage.rbd_utils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 810cd712-872a-49a6-bf45-04a319ba8d57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:00:22 compute-0 nova_compute[254092]: 2025-11-25 17:00:22.468 254096 DEBUG oslo_concurrency.processutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:00:22 compute-0 nova_compute[254092]: 2025-11-25 17:00:22.515 254096 DEBUG nova.policy [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e13382ac65e94c7b8a89e67de0e754df', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 17:00:22 compute-0 nova_compute[254092]: 2025-11-25 17:00:22.555 254096 DEBUG oslo_concurrency.processutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:00:22 compute-0 nova_compute[254092]: 2025-11-25 17:00:22.556 254096 DEBUG oslo_concurrency.lockutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:00:22 compute-0 nova_compute[254092]: 2025-11-25 17:00:22.557 254096 DEBUG oslo_concurrency.lockutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:00:22 compute-0 nova_compute[254092]: 2025-11-25 17:00:22.558 254096 DEBUG oslo_concurrency.lockutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:00:22 compute-0 nova_compute[254092]: 2025-11-25 17:00:22.587 254096 DEBUG nova.storage.rbd_utils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 810cd712-872a-49a6-bf45-04a319ba8d57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:00:22 compute-0 nova_compute[254092]: 2025-11-25 17:00:22.592 254096 DEBUG oslo_concurrency.processutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 810cd712-872a-49a6-bf45-04a319ba8d57_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:00:22 compute-0 nova_compute[254092]: 2025-11-25 17:00:22.981 254096 DEBUG oslo_concurrency.processutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 810cd712-872a-49a6-bf45-04a319ba8d57_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.390s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:00:23 compute-0 nova_compute[254092]: 2025-11-25 17:00:23.046 254096 DEBUG nova.storage.rbd_utils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] resizing rbd image 810cd712-872a-49a6-bf45-04a319ba8d57_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 17:00:23 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/494636634' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:00:23 compute-0 nova_compute[254092]: 2025-11-25 17:00:23.163 254096 DEBUG nova.objects.instance [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'migration_context' on Instance uuid 810cd712-872a-49a6-bf45-04a319ba8d57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:00:23 compute-0 nova_compute[254092]: 2025-11-25 17:00:23.175 254096 DEBUG nova.virt.libvirt.driver [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 17:00:23 compute-0 nova_compute[254092]: 2025-11-25 17:00:23.176 254096 DEBUG nova.virt.libvirt.driver [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Ensure instance console log exists: /var/lib/nova/instances/810cd712-872a-49a6-bf45-04a319ba8d57/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 17:00:23 compute-0 nova_compute[254092]: 2025-11-25 17:00:23.176 254096 DEBUG oslo_concurrency.lockutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:00:23 compute-0 nova_compute[254092]: 2025-11-25 17:00:23.177 254096 DEBUG oslo_concurrency.lockutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:00:23 compute-0 nova_compute[254092]: 2025-11-25 17:00:23.177 254096 DEBUG oslo_concurrency.lockutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:00:23 compute-0 nova_compute[254092]: 2025-11-25 17:00:23.223 254096 DEBUG nova.network.neutron [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Successfully created port: fae63c22-d59b-4f75-9a0f-23ee49042eb6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 17:00:23 compute-0 nova_compute[254092]: 2025-11-25 17:00:23.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:00:24 compute-0 ceph-mon[74985]: pgmap v2305: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 3.3 KiB/s wr, 0 op/s
Nov 25 17:00:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2306: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s wr, 0 op/s
Nov 25 17:00:24 compute-0 nova_compute[254092]: 2025-11-25 17:00:24.263 254096 DEBUG nova.network.neutron [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Successfully updated port: fae63c22-d59b-4f75-9a0f-23ee49042eb6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:00:24 compute-0 nova_compute[254092]: 2025-11-25 17:00:24.280 254096 DEBUG oslo_concurrency.lockutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "refresh_cache-810cd712-872a-49a6-bf45-04a319ba8d57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:00:24 compute-0 nova_compute[254092]: 2025-11-25 17:00:24.280 254096 DEBUG oslo_concurrency.lockutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquired lock "refresh_cache-810cd712-872a-49a6-bf45-04a319ba8d57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:00:24 compute-0 nova_compute[254092]: 2025-11-25 17:00:24.281 254096 DEBUG nova.network.neutron [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 17:00:24 compute-0 nova_compute[254092]: 2025-11-25 17:00:24.443 254096 DEBUG nova.compute.manager [req-830ba954-4e98-471e-ba57-c80b25e9a908 req-25bc10dd-4504-4fe7-833d-19b100773f97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Received event network-changed-fae63c22-d59b-4f75-9a0f-23ee49042eb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:00:24 compute-0 nova_compute[254092]: 2025-11-25 17:00:24.444 254096 DEBUG nova.compute.manager [req-830ba954-4e98-471e-ba57-c80b25e9a908 req-25bc10dd-4504-4fe7-833d-19b100773f97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Refreshing instance network info cache due to event network-changed-fae63c22-d59b-4f75-9a0f-23ee49042eb6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:00:24 compute-0 nova_compute[254092]: 2025-11-25 17:00:24.444 254096 DEBUG oslo_concurrency.lockutils [req-830ba954-4e98-471e-ba57-c80b25e9a908 req-25bc10dd-4504-4fe7-833d-19b100773f97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-810cd712-872a-49a6-bf45-04a319ba8d57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:00:24 compute-0 nova_compute[254092]: 2025-11-25 17:00:24.526 254096 DEBUG nova.network.neutron [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:00:24 compute-0 nova_compute[254092]: 2025-11-25 17:00:24.665 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:25 compute-0 nova_compute[254092]: 2025-11-25 17:00:25.599 254096 DEBUG nova.network.neutron [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Updating instance_info_cache with network_info: [{"id": "fae63c22-d59b-4f75-9a0f-23ee49042eb6", "address": "fa:16:3e:f3:c0:ff", "network": {"id": "a6fc284a-0332-49e7-8f7e-5297640d0e32", "bridge": "br-int", "label": "tempest-network-smoke--220608266", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfae63c22-d5", "ovs_interfaceid": "fae63c22-d59b-4f75-9a0f-23ee49042eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:00:25 compute-0 nova_compute[254092]: 2025-11-25 17:00:25.607 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:25 compute-0 nova_compute[254092]: 2025-11-25 17:00:25.736 254096 DEBUG oslo_concurrency.lockutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Releasing lock "refresh_cache-810cd712-872a-49a6-bf45-04a319ba8d57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:00:25 compute-0 nova_compute[254092]: 2025-11-25 17:00:25.737 254096 DEBUG nova.compute.manager [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Instance network_info: |[{"id": "fae63c22-d59b-4f75-9a0f-23ee49042eb6", "address": "fa:16:3e:f3:c0:ff", "network": {"id": "a6fc284a-0332-49e7-8f7e-5297640d0e32", "bridge": "br-int", "label": "tempest-network-smoke--220608266", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfae63c22-d5", "ovs_interfaceid": "fae63c22-d59b-4f75-9a0f-23ee49042eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 17:00:25 compute-0 nova_compute[254092]: 2025-11-25 17:00:25.737 254096 DEBUG oslo_concurrency.lockutils [req-830ba954-4e98-471e-ba57-c80b25e9a908 req-25bc10dd-4504-4fe7-833d-19b100773f97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-810cd712-872a-49a6-bf45-04a319ba8d57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:00:25 compute-0 nova_compute[254092]: 2025-11-25 17:00:25.738 254096 DEBUG nova.network.neutron [req-830ba954-4e98-471e-ba57-c80b25e9a908 req-25bc10dd-4504-4fe7-833d-19b100773f97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Refreshing network info cache for port fae63c22-d59b-4f75-9a0f-23ee49042eb6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:00:25 compute-0 nova_compute[254092]: 2025-11-25 17:00:25.740 254096 DEBUG nova.virt.libvirt.driver [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Start _get_guest_xml network_info=[{"id": "fae63c22-d59b-4f75-9a0f-23ee49042eb6", "address": "fa:16:3e:f3:c0:ff", "network": {"id": "a6fc284a-0332-49e7-8f7e-5297640d0e32", "bridge": "br-int", "label": "tempest-network-smoke--220608266", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfae63c22-d5", "ovs_interfaceid": "fae63c22-d59b-4f75-9a0f-23ee49042eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 17:00:25 compute-0 nova_compute[254092]: 2025-11-25 17:00:25.745 254096 WARNING nova.virt.libvirt.driver [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:00:25 compute-0 nova_compute[254092]: 2025-11-25 17:00:25.748 254096 DEBUG nova.virt.libvirt.host [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 17:00:25 compute-0 nova_compute[254092]: 2025-11-25 17:00:25.748 254096 DEBUG nova.virt.libvirt.host [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 17:00:25 compute-0 nova_compute[254092]: 2025-11-25 17:00:25.766 254096 DEBUG nova.virt.libvirt.host [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 17:00:25 compute-0 nova_compute[254092]: 2025-11-25 17:00:25.767 254096 DEBUG nova.virt.libvirt.host [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 17:00:25 compute-0 nova_compute[254092]: 2025-11-25 17:00:25.767 254096 DEBUG nova.virt.libvirt.driver [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 17:00:25 compute-0 nova_compute[254092]: 2025-11-25 17:00:25.768 254096 DEBUG nova.virt.hardware [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 17:00:25 compute-0 nova_compute[254092]: 2025-11-25 17:00:25.768 254096 DEBUG nova.virt.hardware [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 17:00:25 compute-0 nova_compute[254092]: 2025-11-25 17:00:25.769 254096 DEBUG nova.virt.hardware [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 17:00:25 compute-0 nova_compute[254092]: 2025-11-25 17:00:25.769 254096 DEBUG nova.virt.hardware [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 17:00:25 compute-0 nova_compute[254092]: 2025-11-25 17:00:25.769 254096 DEBUG nova.virt.hardware [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 17:00:25 compute-0 nova_compute[254092]: 2025-11-25 17:00:25.769 254096 DEBUG nova.virt.hardware [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 17:00:25 compute-0 nova_compute[254092]: 2025-11-25 17:00:25.770 254096 DEBUG nova.virt.hardware [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 17:00:25 compute-0 nova_compute[254092]: 2025-11-25 17:00:25.770 254096 DEBUG nova.virt.hardware [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 17:00:25 compute-0 nova_compute[254092]: 2025-11-25 17:00:25.770 254096 DEBUG nova.virt.hardware [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 17:00:25 compute-0 nova_compute[254092]: 2025-11-25 17:00:25.771 254096 DEBUG nova.virt.hardware [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 17:00:25 compute-0 nova_compute[254092]: 2025-11-25 17:00:25.771 254096 DEBUG nova.virt.hardware [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 17:00:25 compute-0 nova_compute[254092]: 2025-11-25 17:00:25.773 254096 DEBUG oslo_concurrency.processutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:00:26 compute-0 ceph-mon[74985]: pgmap v2306: 321 pgs: 321 active+clean; 121 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s wr, 0 op/s
Nov 25 17:00:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2307: 321 pgs: 321 active+clean; 167 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:00:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:00:26 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1098099446' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:00:26 compute-0 nova_compute[254092]: 2025-11-25 17:00:26.264 254096 DEBUG oslo_concurrency.processutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:00:26 compute-0 nova_compute[254092]: 2025-11-25 17:00:26.292 254096 DEBUG nova.storage.rbd_utils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 810cd712-872a-49a6-bf45-04a319ba8d57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:00:26 compute-0 nova_compute[254092]: 2025-11-25 17:00:26.297 254096 DEBUG oslo_concurrency.processutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:00:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:00:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:00:26 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1015243251' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:00:26 compute-0 nova_compute[254092]: 2025-11-25 17:00:26.735 254096 DEBUG oslo_concurrency.processutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:00:26 compute-0 nova_compute[254092]: 2025-11-25 17:00:26.739 254096 DEBUG nova.virt.libvirt.vif [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:00:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-869327373',display_name='tempest-TestNetworkBasicOps-server-869327373',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-869327373',id=112,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNuOSoLJisGupCr6c7mO9PpNQ98EI+RPV6Pzjh9gT1H1Xu3OmtIley7lcmeOBktNLnaIXezfnI8n1+SWLXO5zvxz2wr1BCBoDOK2e1yKu9xR1PRa/EJoxyZJU+8n+8oakg==',key_name='tempest-TestNetworkBasicOps-1211766568',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-fdgy2dxp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:00:22Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=810cd712-872a-49a6-bf45-04a319ba8d57,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fae63c22-d59b-4f75-9a0f-23ee49042eb6", "address": "fa:16:3e:f3:c0:ff", "network": {"id": "a6fc284a-0332-49e7-8f7e-5297640d0e32", "bridge": "br-int", "label": "tempest-network-smoke--220608266", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfae63c22-d5", "ovs_interfaceid": "fae63c22-d59b-4f75-9a0f-23ee49042eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:00:26 compute-0 nova_compute[254092]: 2025-11-25 17:00:26.740 254096 DEBUG nova.network.os_vif_util [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "fae63c22-d59b-4f75-9a0f-23ee49042eb6", "address": "fa:16:3e:f3:c0:ff", "network": {"id": "a6fc284a-0332-49e7-8f7e-5297640d0e32", "bridge": "br-int", "label": "tempest-network-smoke--220608266", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfae63c22-d5", "ovs_interfaceid": "fae63c22-d59b-4f75-9a0f-23ee49042eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:00:26 compute-0 nova_compute[254092]: 2025-11-25 17:00:26.744 254096 DEBUG nova.network.os_vif_util [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:c0:ff,bridge_name='br-int',has_traffic_filtering=True,id=fae63c22-d59b-4f75-9a0f-23ee49042eb6,network=Network(a6fc284a-0332-49e7-8f7e-5297640d0e32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfae63c22-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:00:26 compute-0 nova_compute[254092]: 2025-11-25 17:00:26.747 254096 DEBUG nova.objects.instance [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'pci_devices' on Instance uuid 810cd712-872a-49a6-bf45-04a319ba8d57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:00:26 compute-0 nova_compute[254092]: 2025-11-25 17:00:26.767 254096 DEBUG nova.virt.libvirt.driver [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] End _get_guest_xml xml=<domain type="kvm">
Nov 25 17:00:26 compute-0 nova_compute[254092]:   <uuid>810cd712-872a-49a6-bf45-04a319ba8d57</uuid>
Nov 25 17:00:26 compute-0 nova_compute[254092]:   <name>instance-00000070</name>
Nov 25 17:00:26 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 17:00:26 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 17:00:26 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:00:26 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:       <nova:name>tempest-TestNetworkBasicOps-server-869327373</nova:name>
Nov 25 17:00:26 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 17:00:25</nova:creationTime>
Nov 25 17:00:26 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 17:00:26 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 17:00:26 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 17:00:26 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 17:00:26 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:00:26 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 17:00:26 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 17:00:26 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 17:00:26 compute-0 nova_compute[254092]:         <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 17:00:26 compute-0 nova_compute[254092]:         <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 17:00:26 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 17:00:26 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 17:00:26 compute-0 nova_compute[254092]:         <nova:port uuid="fae63c22-d59b-4f75-9a0f-23ee49042eb6">
Nov 25 17:00:26 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.26" ipVersion="4"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 17:00:26 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 17:00:26 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:00:26 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 17:00:26 compute-0 nova_compute[254092]:     <system>
Nov 25 17:00:26 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 17:00:26 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 17:00:26 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:00:26 compute-0 nova_compute[254092]:       <entry name="serial">810cd712-872a-49a6-bf45-04a319ba8d57</entry>
Nov 25 17:00:26 compute-0 nova_compute[254092]:       <entry name="uuid">810cd712-872a-49a6-bf45-04a319ba8d57</entry>
Nov 25 17:00:26 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     </system>
Nov 25 17:00:26 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:00:26 compute-0 nova_compute[254092]:   <os>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:   </os>
Nov 25 17:00:26 compute-0 nova_compute[254092]:   <features>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:   </features>
Nov 25 17:00:26 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 17:00:26 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:00:26 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 17:00:26 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:00:26 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 17:00:26 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/810cd712-872a-49a6-bf45-04a319ba8d57_disk">
Nov 25 17:00:26 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:       </source>
Nov 25 17:00:26 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:00:26 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:00:26 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 17:00:26 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/810cd712-872a-49a6-bf45-04a319ba8d57_disk.config">
Nov 25 17:00:26 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:       </source>
Nov 25 17:00:26 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:00:26 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:00:26 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 17:00:26 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:f3:c0:ff"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:       <target dev="tapfae63c22-d5"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 17:00:26 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/810cd712-872a-49a6-bf45-04a319ba8d57/console.log" append="off"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     <video>
Nov 25 17:00:26 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     </video>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 17:00:26 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 17:00:26 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 17:00:26 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:00:26 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:00:26 compute-0 nova_compute[254092]: </domain>
Nov 25 17:00:26 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 17:00:26 compute-0 nova_compute[254092]: 2025-11-25 17:00:26.770 254096 DEBUG nova.compute.manager [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Preparing to wait for external event network-vif-plugged-fae63c22-d59b-4f75-9a0f-23ee49042eb6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 17:00:26 compute-0 nova_compute[254092]: 2025-11-25 17:00:26.770 254096 DEBUG oslo_concurrency.lockutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "810cd712-872a-49a6-bf45-04a319ba8d57-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:00:26 compute-0 nova_compute[254092]: 2025-11-25 17:00:26.771 254096 DEBUG oslo_concurrency.lockutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "810cd712-872a-49a6-bf45-04a319ba8d57-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:00:26 compute-0 nova_compute[254092]: 2025-11-25 17:00:26.771 254096 DEBUG oslo_concurrency.lockutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "810cd712-872a-49a6-bf45-04a319ba8d57-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:00:26 compute-0 nova_compute[254092]: 2025-11-25 17:00:26.772 254096 DEBUG nova.virt.libvirt.vif [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:00:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-869327373',display_name='tempest-TestNetworkBasicOps-server-869327373',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-869327373',id=112,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNuOSoLJisGupCr6c7mO9PpNQ98EI+RPV6Pzjh9gT1H1Xu3OmtIley7lcmeOBktNLnaIXezfnI8n1+SWLXO5zvxz2wr1BCBoDOK2e1yKu9xR1PRa/EJoxyZJU+8n+8oakg==',key_name='tempest-TestNetworkBasicOps-1211766568',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-fdgy2dxp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:00:22Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=810cd712-872a-49a6-bf45-04a319ba8d57,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fae63c22-d59b-4f75-9a0f-23ee49042eb6", "address": "fa:16:3e:f3:c0:ff", "network": {"id": "a6fc284a-0332-49e7-8f7e-5297640d0e32", "bridge": "br-int", "label": "tempest-network-smoke--220608266", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfae63c22-d5", "ovs_interfaceid": "fae63c22-d59b-4f75-9a0f-23ee49042eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:00:26 compute-0 nova_compute[254092]: 2025-11-25 17:00:26.773 254096 DEBUG nova.network.os_vif_util [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "fae63c22-d59b-4f75-9a0f-23ee49042eb6", "address": "fa:16:3e:f3:c0:ff", "network": {"id": "a6fc284a-0332-49e7-8f7e-5297640d0e32", "bridge": "br-int", "label": "tempest-network-smoke--220608266", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfae63c22-d5", "ovs_interfaceid": "fae63c22-d59b-4f75-9a0f-23ee49042eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:00:26 compute-0 nova_compute[254092]: 2025-11-25 17:00:26.773 254096 DEBUG nova.network.os_vif_util [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:c0:ff,bridge_name='br-int',has_traffic_filtering=True,id=fae63c22-d59b-4f75-9a0f-23ee49042eb6,network=Network(a6fc284a-0332-49e7-8f7e-5297640d0e32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfae63c22-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:00:26 compute-0 nova_compute[254092]: 2025-11-25 17:00:26.774 254096 DEBUG os_vif [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:c0:ff,bridge_name='br-int',has_traffic_filtering=True,id=fae63c22-d59b-4f75-9a0f-23ee49042eb6,network=Network(a6fc284a-0332-49e7-8f7e-5297640d0e32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfae63c22-d5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:00:26 compute-0 nova_compute[254092]: 2025-11-25 17:00:26.775 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:26 compute-0 nova_compute[254092]: 2025-11-25 17:00:26.775 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:00:26 compute-0 nova_compute[254092]: 2025-11-25 17:00:26.776 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:00:26 compute-0 nova_compute[254092]: 2025-11-25 17:00:26.779 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:26 compute-0 nova_compute[254092]: 2025-11-25 17:00:26.779 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfae63c22-d5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:00:26 compute-0 nova_compute[254092]: 2025-11-25 17:00:26.780 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfae63c22-d5, col_values=(('external_ids', {'iface-id': 'fae63c22-d59b-4f75-9a0f-23ee49042eb6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f3:c0:ff', 'vm-uuid': '810cd712-872a-49a6-bf45-04a319ba8d57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:00:26 compute-0 NetworkManager[48891]: <info>  [1764090026.8302] manager: (tapfae63c22-d5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/474)
Nov 25 17:00:26 compute-0 nova_compute[254092]: 2025-11-25 17:00:26.830 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:26 compute-0 nova_compute[254092]: 2025-11-25 17:00:26.836 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:00:26 compute-0 nova_compute[254092]: 2025-11-25 17:00:26.840 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:26 compute-0 nova_compute[254092]: 2025-11-25 17:00:26.841 254096 INFO os_vif [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:c0:ff,bridge_name='br-int',has_traffic_filtering=True,id=fae63c22-d59b-4f75-9a0f-23ee49042eb6,network=Network(a6fc284a-0332-49e7-8f7e-5297640d0e32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfae63c22-d5')
Nov 25 17:00:26 compute-0 nova_compute[254092]: 2025-11-25 17:00:26.902 254096 DEBUG nova.virt.libvirt.driver [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:00:26 compute-0 nova_compute[254092]: 2025-11-25 17:00:26.903 254096 DEBUG nova.virt.libvirt.driver [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:00:26 compute-0 nova_compute[254092]: 2025-11-25 17:00:26.904 254096 DEBUG nova.virt.libvirt.driver [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No VIF found with MAC fa:16:3e:f3:c0:ff, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:00:26 compute-0 nova_compute[254092]: 2025-11-25 17:00:26.904 254096 INFO nova.virt.libvirt.driver [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Using config drive
Nov 25 17:00:26 compute-0 nova_compute[254092]: 2025-11-25 17:00:26.927 254096 DEBUG nova.storage.rbd_utils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 810cd712-872a-49a6-bf45-04a319ba8d57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:00:27 compute-0 ceph-mon[74985]: pgmap v2307: 321 pgs: 321 active+clean; 167 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:00:27 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1098099446' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:00:27 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1015243251' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:00:27 compute-0 nova_compute[254092]: 2025-11-25 17:00:27.347 254096 INFO nova.virt.libvirt.driver [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Creating config drive at /var/lib/nova/instances/810cd712-872a-49a6-bf45-04a319ba8d57/disk.config
Nov 25 17:00:27 compute-0 nova_compute[254092]: 2025-11-25 17:00:27.351 254096 DEBUG oslo_concurrency.processutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/810cd712-872a-49a6-bf45-04a319ba8d57/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaemv3g6x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:00:27 compute-0 nova_compute[254092]: 2025-11-25 17:00:27.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:00:27 compute-0 nova_compute[254092]: 2025-11-25 17:00:27.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:00:27 compute-0 nova_compute[254092]: 2025-11-25 17:00:27.508 254096 DEBUG oslo_concurrency.processutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/810cd712-872a-49a6-bf45-04a319ba8d57/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaemv3g6x" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:00:27 compute-0 nova_compute[254092]: 2025-11-25 17:00:27.539 254096 DEBUG nova.storage.rbd_utils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 810cd712-872a-49a6-bf45-04a319ba8d57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:00:27 compute-0 nova_compute[254092]: 2025-11-25 17:00:27.543 254096 DEBUG oslo_concurrency.processutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/810cd712-872a-49a6-bf45-04a319ba8d57/disk.config 810cd712-872a-49a6-bf45-04a319ba8d57_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:00:27 compute-0 nova_compute[254092]: 2025-11-25 17:00:27.709 254096 DEBUG oslo_concurrency.processutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/810cd712-872a-49a6-bf45-04a319ba8d57/disk.config 810cd712-872a-49a6-bf45-04a319ba8d57_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:00:27 compute-0 nova_compute[254092]: 2025-11-25 17:00:27.711 254096 INFO nova.virt.libvirt.driver [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Deleting local config drive /var/lib/nova/instances/810cd712-872a-49a6-bf45-04a319ba8d57/disk.config because it was imported into RBD.
Nov 25 17:00:27 compute-0 kernel: tapfae63c22-d5: entered promiscuous mode
Nov 25 17:00:27 compute-0 NetworkManager[48891]: <info>  [1764090027.7814] manager: (tapfae63c22-d5): new Tun device (/org/freedesktop/NetworkManager/Devices/475)
Nov 25 17:00:27 compute-0 ovn_controller[153477]: 2025-11-25T17:00:27Z|01153|binding|INFO|Claiming lport fae63c22-d59b-4f75-9a0f-23ee49042eb6 for this chassis.
Nov 25 17:00:27 compute-0 ovn_controller[153477]: 2025-11-25T17:00:27Z|01154|binding|INFO|fae63c22-d59b-4f75-9a0f-23ee49042eb6: Claiming fa:16:3e:f3:c0:ff 10.100.0.26
Nov 25 17:00:27 compute-0 nova_compute[254092]: 2025-11-25 17:00:27.782 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:27 compute-0 nova_compute[254092]: 2025-11-25 17:00:27.788 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:27.801 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:c0:ff 10.100.0.26'], port_security=['fa:16:3e:f3:c0:ff 10.100.0.26'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.26/28', 'neutron:device_id': '810cd712-872a-49a6-bf45-04a319ba8d57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a6fc284a-0332-49e7-8f7e-5297640d0e32', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '2', 'neutron:security_group_ids': '02789c4f-78fd-4a08-94c2-0c5eb3bf492c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be501355-0d69-488a-a6b4-4788d24c4a8e, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=fae63c22-d59b-4f75-9a0f-23ee49042eb6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:00:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:27.805 163338 INFO neutron.agent.ovn.metadata.agent [-] Port fae63c22-d59b-4f75-9a0f-23ee49042eb6 in datapath a6fc284a-0332-49e7-8f7e-5297640d0e32 bound to our chassis
Nov 25 17:00:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:27.808 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a6fc284a-0332-49e7-8f7e-5297640d0e32
Nov 25 17:00:27 compute-0 systemd-udevd[378056]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:00:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:27.834 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1e26c557-f426-4f79-979a-91d4488f71de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:00:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:27.836 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa6fc284a-01 in ovnmeta-a6fc284a-0332-49e7-8f7e-5297640d0e32 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 17:00:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:27.841 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa6fc284a-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 17:00:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:27.841 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[462f04d5-5526-4aea-bb41-6bc5b89744bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:00:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:27.843 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fc79a9ee-0d94-4f4b-9d67-db35fd44f3b5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:00:27 compute-0 nova_compute[254092]: 2025-11-25 17:00:27.846 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:27 compute-0 NetworkManager[48891]: <info>  [1764090027.8512] device (tapfae63c22-d5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:00:27 compute-0 ovn_controller[153477]: 2025-11-25T17:00:27Z|01155|binding|INFO|Setting lport fae63c22-d59b-4f75-9a0f-23ee49042eb6 ovn-installed in OVS
Nov 25 17:00:27 compute-0 ovn_controller[153477]: 2025-11-25T17:00:27Z|01156|binding|INFO|Setting lport fae63c22-d59b-4f75-9a0f-23ee49042eb6 up in Southbound
Nov 25 17:00:27 compute-0 NetworkManager[48891]: <info>  [1764090027.8528] device (tapfae63c22-d5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:00:27 compute-0 nova_compute[254092]: 2025-11-25 17:00:27.853 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:27 compute-0 systemd-machined[216343]: New machine qemu-144-instance-00000070.
Nov 25 17:00:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:27.859 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[08d64a01-0dd9-402a-93d3-6830225c5e08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:00:27 compute-0 systemd[1]: Started Virtual Machine qemu-144-instance-00000070.
Nov 25 17:00:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:27.882 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1f685932-9ca2-49d4-8a5f-fda95e14c28b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:00:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:27.916 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6d8a321a-dcaf-4406-aa06-31f4e179a091]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:00:27 compute-0 NetworkManager[48891]: <info>  [1764090027.9249] manager: (tapa6fc284a-00): new Veth device (/org/freedesktop/NetworkManager/Devices/476)
Nov 25 17:00:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:27.923 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[74fd1be6-746e-4be7-8a0f-2c5f0bb4ee46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:00:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:27.961 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8c9c5d58-7302-4eaa-9445-2ba4441ab7b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:00:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:27.964 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[5981bf0b-c137-444b-8dfc-154ba9ee9800]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:00:27 compute-0 NetworkManager[48891]: <info>  [1764090027.9970] device (tapa6fc284a-00): carrier: link connected
Nov 25 17:00:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:28.002 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ae5b5cf4-cbd2-481a-b4c2-e5459dd19409]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:00:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:28.025 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f08dafd8-f334-4ccd-b100-0f09cc7d5dc7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa6fc284a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:d9:1d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 340], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 648558, 'reachable_time': 16099, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378089, 'error': None, 'target': 'ovnmeta-a6fc284a-0332-49e7-8f7e-5297640d0e32', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:00:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:28.045 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[17735c4c-0356-43ed-8a0d-7694a8f98823]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1d:d91d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 648558, 'tstamp': 648558}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 378090, 'error': None, 'target': 'ovnmeta-a6fc284a-0332-49e7-8f7e-5297640d0e32', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:00:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:28.073 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c40128cd-d9f6-4c34-8f56-1efb052e5b74]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa6fc284a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:d9:1d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 340], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 648558, 'reachable_time': 16099, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 378091, 'error': None, 'target': 'ovnmeta-a6fc284a-0332-49e7-8f7e-5297640d0e32', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:00:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:28.118 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[27b97303-c6d6-4cdc-a17b-1a0e26aefff3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:00:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:28.193 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b80d72c3-305c-4f7c-83f3-f6e400109e49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:00:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:28.195 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa6fc284a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:00:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:28.195 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:00:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:28.196 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa6fc284a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:00:28 compute-0 nova_compute[254092]: 2025-11-25 17:00:28.197 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:28 compute-0 kernel: tapa6fc284a-00: entered promiscuous mode
Nov 25 17:00:28 compute-0 NetworkManager[48891]: <info>  [1764090028.1985] manager: (tapa6fc284a-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/477)
Nov 25 17:00:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:28.200 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa6fc284a-00, col_values=(('external_ids', {'iface-id': '9a1f1538-db9c-44f0-800c-07be1e2da375'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:00:28 compute-0 nova_compute[254092]: 2025-11-25 17:00:28.201 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:28 compute-0 ovn_controller[153477]: 2025-11-25T17:00:28Z|01157|binding|INFO|Releasing lport 9a1f1538-db9c-44f0-800c-07be1e2da375 from this chassis (sb_readonly=0)
Nov 25 17:00:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:28.203 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a6fc284a-0332-49e7-8f7e-5297640d0e32.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a6fc284a-0332-49e7-8f7e-5297640d0e32.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 17:00:28 compute-0 nova_compute[254092]: 2025-11-25 17:00:28.203 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:28.204 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5819e859-7b41-44d9-b478-a0566c10debc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:00:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:28.205 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 17:00:28 compute-0 ovn_metadata_agent[163333]: global
Nov 25 17:00:28 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 17:00:28 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-a6fc284a-0332-49e7-8f7e-5297640d0e32
Nov 25 17:00:28 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 17:00:28 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 17:00:28 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 17:00:28 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/a6fc284a-0332-49e7-8f7e-5297640d0e32.pid.haproxy
Nov 25 17:00:28 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 17:00:28 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:00:28 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 17:00:28 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 17:00:28 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 17:00:28 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 17:00:28 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 17:00:28 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 17:00:28 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 17:00:28 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 17:00:28 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 17:00:28 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 17:00:28 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 17:00:28 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 17:00:28 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 17:00:28 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:00:28 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:00:28 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 17:00:28 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 17:00:28 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 17:00:28 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID a6fc284a-0332-49e7-8f7e-5297640d0e32
Nov 25 17:00:28 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 17:00:28 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:28.207 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a6fc284a-0332-49e7-8f7e-5297640d0e32', 'env', 'PROCESS_TAG=haproxy-a6fc284a-0332-49e7-8f7e-5297640d0e32', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a6fc284a-0332-49e7-8f7e-5297640d0e32.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 17:00:28 compute-0 nova_compute[254092]: 2025-11-25 17:00:28.217 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2308: 321 pgs: 321 active+clean; 167 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:00:28 compute-0 nova_compute[254092]: 2025-11-25 17:00:28.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:00:28 compute-0 nova_compute[254092]: 2025-11-25 17:00:28.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:00:28 compute-0 nova_compute[254092]: 2025-11-25 17:00:28.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:00:28 compute-0 nova_compute[254092]: 2025-11-25 17:00:28.582 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:00:28 compute-0 nova_compute[254092]: 2025-11-25 17:00:28.583 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:00:28 compute-0 nova_compute[254092]: 2025-11-25 17:00:28.583 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:00:28 compute-0 nova_compute[254092]: 2025-11-25 17:00:28.583 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:00:28 compute-0 nova_compute[254092]: 2025-11-25 17:00:28.583 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:00:28 compute-0 nova_compute[254092]: 2025-11-25 17:00:28.638 254096 DEBUG nova.compute.manager [req-0f3621cd-6025-4370-8250-5063ca40fe5b req-0556c3cd-0595-4f3e-8937-c312c009fb39 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Received event network-vif-plugged-fae63c22-d59b-4f75-9a0f-23ee49042eb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:00:28 compute-0 nova_compute[254092]: 2025-11-25 17:00:28.638 254096 DEBUG oslo_concurrency.lockutils [req-0f3621cd-6025-4370-8250-5063ca40fe5b req-0556c3cd-0595-4f3e-8937-c312c009fb39 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "810cd712-872a-49a6-bf45-04a319ba8d57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:00:28 compute-0 nova_compute[254092]: 2025-11-25 17:00:28.638 254096 DEBUG oslo_concurrency.lockutils [req-0f3621cd-6025-4370-8250-5063ca40fe5b req-0556c3cd-0595-4f3e-8937-c312c009fb39 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "810cd712-872a-49a6-bf45-04a319ba8d57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:00:28 compute-0 nova_compute[254092]: 2025-11-25 17:00:28.639 254096 DEBUG oslo_concurrency.lockutils [req-0f3621cd-6025-4370-8250-5063ca40fe5b req-0556c3cd-0595-4f3e-8937-c312c009fb39 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "810cd712-872a-49a6-bf45-04a319ba8d57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:00:28 compute-0 nova_compute[254092]: 2025-11-25 17:00:28.639 254096 DEBUG nova.compute.manager [req-0f3621cd-6025-4370-8250-5063ca40fe5b req-0556c3cd-0595-4f3e-8937-c312c009fb39 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Processing event network-vif-plugged-fae63c22-d59b-4f75-9a0f-23ee49042eb6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 17:00:28 compute-0 nova_compute[254092]: 2025-11-25 17:00:28.688 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090028.688423, 810cd712-872a-49a6-bf45-04a319ba8d57 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:00:28 compute-0 nova_compute[254092]: 2025-11-25 17:00:28.689 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] VM Started (Lifecycle Event)
Nov 25 17:00:28 compute-0 nova_compute[254092]: 2025-11-25 17:00:28.691 254096 DEBUG nova.compute.manager [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 17:00:28 compute-0 nova_compute[254092]: 2025-11-25 17:00:28.701 254096 DEBUG nova.virt.libvirt.driver [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 17:00:28 compute-0 nova_compute[254092]: 2025-11-25 17:00:28.709 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:00:28 compute-0 nova_compute[254092]: 2025-11-25 17:00:28.713 254096 INFO nova.virt.libvirt.driver [-] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Instance spawned successfully.
Nov 25 17:00:28 compute-0 nova_compute[254092]: 2025-11-25 17:00:28.716 254096 DEBUG nova.virt.libvirt.driver [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 17:00:28 compute-0 nova_compute[254092]: 2025-11-25 17:00:28.725 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:00:28 compute-0 nova_compute[254092]: 2025-11-25 17:00:28.750 254096 DEBUG nova.virt.libvirt.driver [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:00:28 compute-0 nova_compute[254092]: 2025-11-25 17:00:28.751 254096 DEBUG nova.virt.libvirt.driver [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:00:28 compute-0 nova_compute[254092]: 2025-11-25 17:00:28.752 254096 DEBUG nova.virt.libvirt.driver [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:00:28 compute-0 nova_compute[254092]: 2025-11-25 17:00:28.752 254096 DEBUG nova.virt.libvirt.driver [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:00:28 compute-0 nova_compute[254092]: 2025-11-25 17:00:28.753 254096 DEBUG nova.virt.libvirt.driver [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:00:28 compute-0 nova_compute[254092]: 2025-11-25 17:00:28.754 254096 DEBUG nova.virt.libvirt.driver [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:00:28 compute-0 nova_compute[254092]: 2025-11-25 17:00:28.761 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:00:28 compute-0 nova_compute[254092]: 2025-11-25 17:00:28.761 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090028.6886623, 810cd712-872a-49a6-bf45-04a319ba8d57 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:00:28 compute-0 nova_compute[254092]: 2025-11-25 17:00:28.762 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] VM Paused (Lifecycle Event)
Nov 25 17:00:28 compute-0 nova_compute[254092]: 2025-11-25 17:00:28.789 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:00:28 compute-0 nova_compute[254092]: 2025-11-25 17:00:28.794 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090028.6952353, 810cd712-872a-49a6-bf45-04a319ba8d57 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:00:28 compute-0 nova_compute[254092]: 2025-11-25 17:00:28.794 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] VM Resumed (Lifecycle Event)
Nov 25 17:00:28 compute-0 nova_compute[254092]: 2025-11-25 17:00:28.818 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:00:28 compute-0 nova_compute[254092]: 2025-11-25 17:00:28.823 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:00:28 compute-0 nova_compute[254092]: 2025-11-25 17:00:28.831 254096 INFO nova.compute.manager [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Took 6.46 seconds to spawn the instance on the hypervisor.
Nov 25 17:00:28 compute-0 nova_compute[254092]: 2025-11-25 17:00:28.832 254096 DEBUG nova.compute.manager [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:00:28 compute-0 nova_compute[254092]: 2025-11-25 17:00:28.846 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:00:28 compute-0 nova_compute[254092]: 2025-11-25 17:00:28.911 254096 INFO nova.compute.manager [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Took 7.44 seconds to build instance.
Nov 25 17:00:28 compute-0 nova_compute[254092]: 2025-11-25 17:00:28.927 254096 DEBUG nova.network.neutron [req-830ba954-4e98-471e-ba57-c80b25e9a908 req-25bc10dd-4504-4fe7-833d-19b100773f97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Updated VIF entry in instance network info cache for port fae63c22-d59b-4f75-9a0f-23ee49042eb6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:00:28 compute-0 nova_compute[254092]: 2025-11-25 17:00:28.928 254096 DEBUG nova.network.neutron [req-830ba954-4e98-471e-ba57-c80b25e9a908 req-25bc10dd-4504-4fe7-833d-19b100773f97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Updating instance_info_cache with network_info: [{"id": "fae63c22-d59b-4f75-9a0f-23ee49042eb6", "address": "fa:16:3e:f3:c0:ff", "network": {"id": "a6fc284a-0332-49e7-8f7e-5297640d0e32", "bridge": "br-int", "label": "tempest-network-smoke--220608266", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfae63c22-d5", "ovs_interfaceid": "fae63c22-d59b-4f75-9a0f-23ee49042eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:00:28 compute-0 nova_compute[254092]: 2025-11-25 17:00:28.931 254096 DEBUG oslo_concurrency.lockutils [None req-07c6afa3-21c1-4d6f-b52d-f8a7ee5087c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "810cd712-872a-49a6-bf45-04a319ba8d57" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.522s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:00:28 compute-0 podman[378185]: 2025-11-25 17:00:28.940129143 +0000 UTC m=+0.056746787 container create b63da8a018b74b76d1d785011fc9162769e6cf7218b88ff076794a3bb577c0f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a6fc284a-0332-49e7-8f7e-5297640d0e32, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 17:00:28 compute-0 nova_compute[254092]: 2025-11-25 17:00:28.941 254096 DEBUG oslo_concurrency.lockutils [req-830ba954-4e98-471e-ba57-c80b25e9a908 req-25bc10dd-4504-4fe7-833d-19b100773f97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-810cd712-872a-49a6-bf45-04a319ba8d57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:00:28 compute-0 systemd[1]: Started libpod-conmon-b63da8a018b74b76d1d785011fc9162769e6cf7218b88ff076794a3bb577c0f4.scope.
Nov 25 17:00:29 compute-0 podman[378185]: 2025-11-25 17:00:28.91031342 +0000 UTC m=+0.026931114 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 17:00:29 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:00:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0170379c24e2e0c12161dc8a7845072d42d78dc45d297c38ef19f95a48d9a9cb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 17:00:29 compute-0 podman[378185]: 2025-11-25 17:00:29.052352479 +0000 UTC m=+0.168970203 container init b63da8a018b74b76d1d785011fc9162769e6cf7218b88ff076794a3bb577c0f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a6fc284a-0332-49e7-8f7e-5297640d0e32, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:00:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:00:29 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2054360708' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:00:29 compute-0 podman[378185]: 2025-11-25 17:00:29.060749038 +0000 UTC m=+0.177366722 container start b63da8a018b74b76d1d785011fc9162769e6cf7218b88ff076794a3bb577c0f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a6fc284a-0332-49e7-8f7e-5297640d0e32, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 25 17:00:29 compute-0 nova_compute[254092]: 2025-11-25 17:00:29.085 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:00:29 compute-0 neutron-haproxy-ovnmeta-a6fc284a-0332-49e7-8f7e-5297640d0e32[378201]: [NOTICE]   (378206) : New worker (378209) forked
Nov 25 17:00:29 compute-0 neutron-haproxy-ovnmeta-a6fc284a-0332-49e7-8f7e-5297640d0e32[378201]: [NOTICE]   (378206) : Loading success.
Nov 25 17:00:29 compute-0 nova_compute[254092]: 2025-11-25 17:00:29.353 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:00:29 compute-0 nova_compute[254092]: 2025-11-25 17:00:29.354 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:00:29 compute-0 nova_compute[254092]: 2025-11-25 17:00:29.359 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:00:29 compute-0 nova_compute[254092]: 2025-11-25 17:00:29.359 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:00:29 compute-0 ceph-mon[74985]: pgmap v2308: 321 pgs: 321 active+clean; 167 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:00:29 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2054360708' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:00:29 compute-0 nova_compute[254092]: 2025-11-25 17:00:29.581 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:00:29 compute-0 nova_compute[254092]: 2025-11-25 17:00:29.583 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3464MB free_disk=59.92196273803711GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:00:29 compute-0 nova_compute[254092]: 2025-11-25 17:00:29.583 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:00:29 compute-0 nova_compute[254092]: 2025-11-25 17:00:29.583 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:00:29 compute-0 nova_compute[254092]: 2025-11-25 17:00:29.646 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance d643dc57-9536-4a67-9a17-c20512710ea5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 17:00:29 compute-0 nova_compute[254092]: 2025-11-25 17:00:29.646 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 810cd712-872a-49a6-bf45-04a319ba8d57 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 17:00:29 compute-0 nova_compute[254092]: 2025-11-25 17:00:29.646 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:00:29 compute-0 nova_compute[254092]: 2025-11-25 17:00:29.647 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:00:29 compute-0 nova_compute[254092]: 2025-11-25 17:00:29.668 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:29 compute-0 nova_compute[254092]: 2025-11-25 17:00:29.828 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing inventories for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 17:00:29 compute-0 nova_compute[254092]: 2025-11-25 17:00:29.848 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating ProviderTree inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 17:00:29 compute-0 nova_compute[254092]: 2025-11-25 17:00:29.848 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 17:00:29 compute-0 nova_compute[254092]: 2025-11-25 17:00:29.870 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing aggregate associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 17:00:29 compute-0 nova_compute[254092]: 2025-11-25 17:00:29.898 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing trait associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 17:00:29 compute-0 nova_compute[254092]: 2025-11-25 17:00:29.952 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:00:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2309: 321 pgs: 321 active+clean; 167 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 911 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Nov 25 17:00:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:00:30 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/959655731' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:00:30 compute-0 nova_compute[254092]: 2025-11-25 17:00:30.425 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:00:30 compute-0 nova_compute[254092]: 2025-11-25 17:00:30.432 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:00:30 compute-0 nova_compute[254092]: 2025-11-25 17:00:30.447 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:00:30 compute-0 nova_compute[254092]: 2025-11-25 17:00:30.482 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:00:30 compute-0 nova_compute[254092]: 2025-11-25 17:00:30.483 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.900s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:00:30 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/959655731' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:00:30 compute-0 nova_compute[254092]: 2025-11-25 17:00:30.802 254096 DEBUG nova.compute.manager [req-e19be41b-7863-4bf3-932b-859ca4f1e6f1 req-63e4ab4e-f37c-4d64-b4c3-9d7b542aa083 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Received event network-vif-plugged-fae63c22-d59b-4f75-9a0f-23ee49042eb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:00:30 compute-0 nova_compute[254092]: 2025-11-25 17:00:30.802 254096 DEBUG oslo_concurrency.lockutils [req-e19be41b-7863-4bf3-932b-859ca4f1e6f1 req-63e4ab4e-f37c-4d64-b4c3-9d7b542aa083 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "810cd712-872a-49a6-bf45-04a319ba8d57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:00:30 compute-0 nova_compute[254092]: 2025-11-25 17:00:30.802 254096 DEBUG oslo_concurrency.lockutils [req-e19be41b-7863-4bf3-932b-859ca4f1e6f1 req-63e4ab4e-f37c-4d64-b4c3-9d7b542aa083 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "810cd712-872a-49a6-bf45-04a319ba8d57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:00:30 compute-0 nova_compute[254092]: 2025-11-25 17:00:30.803 254096 DEBUG oslo_concurrency.lockutils [req-e19be41b-7863-4bf3-932b-859ca4f1e6f1 req-63e4ab4e-f37c-4d64-b4c3-9d7b542aa083 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "810cd712-872a-49a6-bf45-04a319ba8d57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:00:30 compute-0 nova_compute[254092]: 2025-11-25 17:00:30.803 254096 DEBUG nova.compute.manager [req-e19be41b-7863-4bf3-932b-859ca4f1e6f1 req-63e4ab4e-f37c-4d64-b4c3-9d7b542aa083 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] No waiting events found dispatching network-vif-plugged-fae63c22-d59b-4f75-9a0f-23ee49042eb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:00:30 compute-0 nova_compute[254092]: 2025-11-25 17:00:30.803 254096 WARNING nova.compute.manager [req-e19be41b-7863-4bf3-932b-859ca4f1e6f1 req-63e4ab4e-f37c-4d64-b4c3-9d7b542aa083 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Received unexpected event network-vif-plugged-fae63c22-d59b-4f75-9a0f-23ee49042eb6 for instance with vm_state active and task_state None.
Nov 25 17:00:31 compute-0 ceph-mon[74985]: pgmap v2309: 321 pgs: 321 active+clean; 167 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 911 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Nov 25 17:00:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:00:31 compute-0 nova_compute[254092]: 2025-11-25 17:00:31.830 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2310: 321 pgs: 321 active+clean; 167 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 98 op/s
Nov 25 17:00:32 compute-0 nova_compute[254092]: 2025-11-25 17:00:32.478 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:00:32 compute-0 nova_compute[254092]: 2025-11-25 17:00:32.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:00:33 compute-0 ceph-mon[74985]: pgmap v2310: 321 pgs: 321 active+clean; 167 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 98 op/s
Nov 25 17:00:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2311: 321 pgs: 321 active+clean; 167 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 98 op/s
Nov 25 17:00:34 compute-0 nova_compute[254092]: 2025-11-25 17:00:34.671 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:35 compute-0 ceph-mon[74985]: pgmap v2311: 321 pgs: 321 active+clean; 167 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 98 op/s
Nov 25 17:00:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2312: 321 pgs: 321 active+clean; 167 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 17:00:36 compute-0 nova_compute[254092]: 2025-11-25 17:00:36.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:00:36 compute-0 nova_compute[254092]: 2025-11-25 17:00:36.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:00:36 compute-0 nova_compute[254092]: 2025-11-25 17:00:36.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:00:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:00:36 compute-0 nova_compute[254092]: 2025-11-25 17:00:36.834 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:37 compute-0 nova_compute[254092]: 2025-11-25 17:00:37.162 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-d643dc57-9536-4a67-9a17-c20512710ea5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:00:37 compute-0 nova_compute[254092]: 2025-11-25 17:00:37.162 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-d643dc57-9536-4a67-9a17-c20512710ea5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:00:37 compute-0 nova_compute[254092]: 2025-11-25 17:00:37.162 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 17:00:37 compute-0 nova_compute[254092]: 2025-11-25 17:00:37.162 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d643dc57-9536-4a67-9a17-c20512710ea5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:00:37 compute-0 nova_compute[254092]: 2025-11-25 17:00:37.602 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:37 compute-0 ceph-mon[74985]: pgmap v2312: 321 pgs: 321 active+clean; 167 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 17:00:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2313: 321 pgs: 321 active+clean; 167 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 25 17:00:38 compute-0 nova_compute[254092]: 2025-11-25 17:00:38.522 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Updating instance_info_cache with network_info: [{"id": "12c882d8-4cd5-4233-8d3b-650401885991", "address": "fa:16:3e:80:dd:57", "network": {"id": "22f446da-e1c5-4251-b5dd-3071154486f0", "bridge": "br-int", "label": "tempest-network-smoke--753652618", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12c882d8-4c", "ovs_interfaceid": "12c882d8-4cd5-4233-8d3b-650401885991", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:00:38 compute-0 nova_compute[254092]: 2025-11-25 17:00:38.532 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-d643dc57-9536-4a67-9a17-c20512710ea5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:00:38 compute-0 nova_compute[254092]: 2025-11-25 17:00:38.533 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 17:00:38 compute-0 podman[378242]: 2025-11-25 17:00:38.638224579 +0000 UTC m=+0.053173709 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 17:00:38 compute-0 podman[378241]: 2025-11-25 17:00:38.692382804 +0000 UTC m=+0.107602641 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:00:38 compute-0 podman[378243]: 2025-11-25 17:00:38.709401817 +0000 UTC m=+0.115844785 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 25 17:00:39 compute-0 ceph-mon[74985]: pgmap v2313: 321 pgs: 321 active+clean; 167 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 25 17:00:39 compute-0 nova_compute[254092]: 2025-11-25 17:00:39.673 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:00:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:00:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:00:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:00:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:00:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:00:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:00:40
Nov 25 17:00:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:00:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:00:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.meta', 'images', 'volumes', 'default.rgw.log', 'backups', 'vms', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root']
Nov 25 17:00:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:00:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2314: 321 pgs: 321 active+clean; 167 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 74 op/s
Nov 25 17:00:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:00:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:00:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:00:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:00:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:00:40 compute-0 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Nov 25 17:00:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:00:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:00:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:00:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:00:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:00:41 compute-0 ceph-mon[74985]: pgmap v2314: 321 pgs: 321 active+clean; 167 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 74 op/s
Nov 25 17:00:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:00:41 compute-0 nova_compute[254092]: 2025-11-25 17:00:41.835 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2315: 321 pgs: 321 active+clean; 170 MiB data, 860 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 576 KiB/s wr, 58 op/s
Nov 25 17:00:42 compute-0 ovn_controller[153477]: 2025-11-25T17:00:42Z|00122|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f3:c0:ff 10.100.0.26
Nov 25 17:00:42 compute-0 ovn_controller[153477]: 2025-11-25T17:00:42Z|00123|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f3:c0:ff 10.100.0.26
Nov 25 17:00:42 compute-0 nova_compute[254092]: 2025-11-25 17:00:42.852 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:43 compute-0 ceph-mon[74985]: pgmap v2315: 321 pgs: 321 active+clean; 170 MiB data, 860 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 576 KiB/s wr, 58 op/s
Nov 25 17:00:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2316: 321 pgs: 321 active+clean; 170 MiB data, 860 MiB used, 59 GiB / 60 GiB avail; 276 KiB/s rd, 575 KiB/s wr, 24 op/s
Nov 25 17:00:44 compute-0 nova_compute[254092]: 2025-11-25 17:00:44.676 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:45 compute-0 ceph-mon[74985]: pgmap v2316: 321 pgs: 321 active+clean; 170 MiB data, 860 MiB used, 59 GiB / 60 GiB avail; 276 KiB/s rd, 575 KiB/s wr, 24 op/s
Nov 25 17:00:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2317: 321 pgs: 321 active+clean; 200 MiB data, 878 MiB used, 59 GiB / 60 GiB avail; 417 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Nov 25 17:00:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:00:46 compute-0 nova_compute[254092]: 2025-11-25 17:00:46.876 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:47 compute-0 ceph-mon[74985]: pgmap v2317: 321 pgs: 321 active+clean; 200 MiB data, 878 MiB used, 59 GiB / 60 GiB avail; 417 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Nov 25 17:00:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2318: 321 pgs: 321 active+clean; 200 MiB data, 878 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:00:48 compute-0 nova_compute[254092]: 2025-11-25 17:00:48.834 254096 DEBUG oslo_concurrency.lockutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "caf64ca2-5f73-454a-8442-9965c9853cba" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:00:48 compute-0 nova_compute[254092]: 2025-11-25 17:00:48.835 254096 DEBUG oslo_concurrency.lockutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "caf64ca2-5f73-454a-8442-9965c9853cba" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:00:48 compute-0 nova_compute[254092]: 2025-11-25 17:00:48.859 254096 DEBUG nova.compute.manager [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 17:00:48 compute-0 nova_compute[254092]: 2025-11-25 17:00:48.963 254096 DEBUG oslo_concurrency.lockutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:00:48 compute-0 nova_compute[254092]: 2025-11-25 17:00:48.964 254096 DEBUG oslo_concurrency.lockutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:00:48 compute-0 nova_compute[254092]: 2025-11-25 17:00:48.972 254096 DEBUG nova.virt.hardware [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 17:00:48 compute-0 nova_compute[254092]: 2025-11-25 17:00:48.973 254096 INFO nova.compute.claims [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Claim successful on node compute-0.ctlplane.example.com
Nov 25 17:00:49 compute-0 nova_compute[254092]: 2025-11-25 17:00:49.078 254096 DEBUG oslo_concurrency.processutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:00:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:00:49 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3931076854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:00:49 compute-0 nova_compute[254092]: 2025-11-25 17:00:49.557 254096 DEBUG oslo_concurrency.processutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:00:49 compute-0 nova_compute[254092]: 2025-11-25 17:00:49.563 254096 DEBUG nova.compute.provider_tree [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:00:49 compute-0 nova_compute[254092]: 2025-11-25 17:00:49.586 254096 DEBUG nova.scheduler.client.report [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:00:49 compute-0 nova_compute[254092]: 2025-11-25 17:00:49.606 254096 DEBUG oslo_concurrency.lockutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:00:49 compute-0 nova_compute[254092]: 2025-11-25 17:00:49.607 254096 DEBUG nova.compute.manager [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 17:00:49 compute-0 nova_compute[254092]: 2025-11-25 17:00:49.662 254096 DEBUG nova.compute.manager [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 17:00:49 compute-0 nova_compute[254092]: 2025-11-25 17:00:49.663 254096 DEBUG nova.network.neutron [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 17:00:49 compute-0 nova_compute[254092]: 2025-11-25 17:00:49.678 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:49 compute-0 nova_compute[254092]: 2025-11-25 17:00:49.686 254096 INFO nova.virt.libvirt.driver [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 17:00:49 compute-0 nova_compute[254092]: 2025-11-25 17:00:49.704 254096 DEBUG nova.compute.manager [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 17:00:49 compute-0 ceph-mon[74985]: pgmap v2318: 321 pgs: 321 active+clean; 200 MiB data, 878 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:00:49 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3931076854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:00:49 compute-0 nova_compute[254092]: 2025-11-25 17:00:49.831 254096 DEBUG nova.compute.manager [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 17:00:49 compute-0 nova_compute[254092]: 2025-11-25 17:00:49.832 254096 DEBUG nova.virt.libvirt.driver [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 17:00:49 compute-0 nova_compute[254092]: 2025-11-25 17:00:49.832 254096 INFO nova.virt.libvirt.driver [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Creating image(s)
Nov 25 17:00:49 compute-0 nova_compute[254092]: 2025-11-25 17:00:49.855 254096 DEBUG nova.storage.rbd_utils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image caf64ca2-5f73-454a-8442-9965c9853cba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:00:49 compute-0 nova_compute[254092]: 2025-11-25 17:00:49.877 254096 DEBUG nova.storage.rbd_utils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image caf64ca2-5f73-454a-8442-9965c9853cba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:00:49 compute-0 nova_compute[254092]: 2025-11-25 17:00:49.902 254096 DEBUG nova.storage.rbd_utils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image caf64ca2-5f73-454a-8442-9965c9853cba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:00:49 compute-0 nova_compute[254092]: 2025-11-25 17:00:49.906 254096 DEBUG oslo_concurrency.processutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:00:49 compute-0 nova_compute[254092]: 2025-11-25 17:00:49.944 254096 DEBUG nova.policy [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '72f29c5d97464520bee19cb128258fd5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f19d2d0776e84c3c99c640c0b7ed3743', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 17:00:49 compute-0 nova_compute[254092]: 2025-11-25 17:00:49.978 254096 DEBUG oslo_concurrency.processutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:00:49 compute-0 nova_compute[254092]: 2025-11-25 17:00:49.980 254096 DEBUG oslo_concurrency.lockutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:00:49 compute-0 nova_compute[254092]: 2025-11-25 17:00:49.981 254096 DEBUG oslo_concurrency.lockutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:00:49 compute-0 nova_compute[254092]: 2025-11-25 17:00:49.981 254096 DEBUG oslo_concurrency.lockutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:00:50 compute-0 nova_compute[254092]: 2025-11-25 17:00:50.009 254096 DEBUG nova.storage.rbd_utils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image caf64ca2-5f73-454a-8442-9965c9853cba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:00:50 compute-0 nova_compute[254092]: 2025-11-25 17:00:50.013 254096 DEBUG oslo_concurrency.processutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 caf64ca2-5f73-454a-8442-9965c9853cba_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:00:50 compute-0 nova_compute[254092]: 2025-11-25 17:00:50.134 254096 DEBUG oslo_concurrency.lockutils [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "810cd712-872a-49a6-bf45-04a319ba8d57" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:00:50 compute-0 nova_compute[254092]: 2025-11-25 17:00:50.135 254096 DEBUG oslo_concurrency.lockutils [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "810cd712-872a-49a6-bf45-04a319ba8d57" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:00:50 compute-0 nova_compute[254092]: 2025-11-25 17:00:50.136 254096 DEBUG oslo_concurrency.lockutils [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "810cd712-872a-49a6-bf45-04a319ba8d57-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:00:50 compute-0 nova_compute[254092]: 2025-11-25 17:00:50.136 254096 DEBUG oslo_concurrency.lockutils [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "810cd712-872a-49a6-bf45-04a319ba8d57-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:00:50 compute-0 nova_compute[254092]: 2025-11-25 17:00:50.136 254096 DEBUG oslo_concurrency.lockutils [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "810cd712-872a-49a6-bf45-04a319ba8d57-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:00:50 compute-0 nova_compute[254092]: 2025-11-25 17:00:50.138 254096 INFO nova.compute.manager [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Terminating instance
Nov 25 17:00:50 compute-0 nova_compute[254092]: 2025-11-25 17:00:50.139 254096 DEBUG nova.compute.manager [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 17:00:50 compute-0 kernel: tapfae63c22-d5 (unregistering): left promiscuous mode
Nov 25 17:00:50 compute-0 NetworkManager[48891]: <info>  [1764090050.2093] device (tapfae63c22-d5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:00:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2319: 321 pgs: 321 active+clean; 200 MiB data, 878 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:00:50 compute-0 ovn_controller[153477]: 2025-11-25T17:00:50Z|01158|binding|INFO|Releasing lport fae63c22-d59b-4f75-9a0f-23ee49042eb6 from this chassis (sb_readonly=0)
Nov 25 17:00:50 compute-0 ovn_controller[153477]: 2025-11-25T17:00:50Z|01159|binding|INFO|Setting lport fae63c22-d59b-4f75-9a0f-23ee49042eb6 down in Southbound
Nov 25 17:00:50 compute-0 nova_compute[254092]: 2025-11-25 17:00:50.268 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:50 compute-0 ovn_controller[153477]: 2025-11-25T17:00:50Z|01160|binding|INFO|Removing iface tapfae63c22-d5 ovn-installed in OVS
Nov 25 17:00:50 compute-0 nova_compute[254092]: 2025-11-25 17:00:50.270 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:50.282 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:c0:ff 10.100.0.26'], port_security=['fa:16:3e:f3:c0:ff 10.100.0.26'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.26/28', 'neutron:device_id': '810cd712-872a-49a6-bf45-04a319ba8d57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a6fc284a-0332-49e7-8f7e-5297640d0e32', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '4', 'neutron:security_group_ids': '02789c4f-78fd-4a08-94c2-0c5eb3bf492c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be501355-0d69-488a-a6b4-4788d24c4a8e, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=fae63c22-d59b-4f75-9a0f-23ee49042eb6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:00:50 compute-0 nova_compute[254092]: 2025-11-25 17:00:50.283 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:50.284 163338 INFO neutron.agent.ovn.metadata.agent [-] Port fae63c22-d59b-4f75-9a0f-23ee49042eb6 in datapath a6fc284a-0332-49e7-8f7e-5297640d0e32 unbound from our chassis
Nov 25 17:00:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:50.285 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a6fc284a-0332-49e7-8f7e-5297640d0e32, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 17:00:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:50.287 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8de2816b-d4f0-40f0-975b-8abf1c60fdd8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:00:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:50.287 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a6fc284a-0332-49e7-8f7e-5297640d0e32 namespace which is not needed anymore
Nov 25 17:00:50 compute-0 systemd[1]: machine-qemu\x2d144\x2dinstance\x2d00000070.scope: Deactivated successfully.
Nov 25 17:00:50 compute-0 systemd[1]: machine-qemu\x2d144\x2dinstance\x2d00000070.scope: Consumed 12.944s CPU time.
Nov 25 17:00:50 compute-0 systemd-machined[216343]: Machine qemu-144-instance-00000070 terminated.
Nov 25 17:00:50 compute-0 nova_compute[254092]: 2025-11-25 17:00:50.356 254096 DEBUG oslo_concurrency.processutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 caf64ca2-5f73-454a-8442-9965c9853cba_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.343s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:00:50 compute-0 neutron-haproxy-ovnmeta-a6fc284a-0332-49e7-8f7e-5297640d0e32[378201]: [NOTICE]   (378206) : haproxy version is 2.8.14-c23fe91
Nov 25 17:00:50 compute-0 neutron-haproxy-ovnmeta-a6fc284a-0332-49e7-8f7e-5297640d0e32[378201]: [NOTICE]   (378206) : path to executable is /usr/sbin/haproxy
Nov 25 17:00:50 compute-0 neutron-haproxy-ovnmeta-a6fc284a-0332-49e7-8f7e-5297640d0e32[378201]: [WARNING]  (378206) : Exiting Master process...
Nov 25 17:00:50 compute-0 neutron-haproxy-ovnmeta-a6fc284a-0332-49e7-8f7e-5297640d0e32[378201]: [WARNING]  (378206) : Exiting Master process...
Nov 25 17:00:50 compute-0 neutron-haproxy-ovnmeta-a6fc284a-0332-49e7-8f7e-5297640d0e32[378201]: [ALERT]    (378206) : Current worker (378209) exited with code 143 (Terminated)
Nov 25 17:00:50 compute-0 neutron-haproxy-ovnmeta-a6fc284a-0332-49e7-8f7e-5297640d0e32[378201]: [WARNING]  (378206) : All workers exited. Exiting... (0)
Nov 25 17:00:50 compute-0 systemd[1]: libpod-b63da8a018b74b76d1d785011fc9162769e6cf7218b88ff076794a3bb577c0f4.scope: Deactivated successfully.
Nov 25 17:00:50 compute-0 podman[378447]: 2025-11-25 17:00:50.42090819 +0000 UTC m=+0.046012635 container died b63da8a018b74b76d1d785011fc9162769e6cf7218b88ff076794a3bb577c0f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a6fc284a-0332-49e7-8f7e-5297640d0e32, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 17:00:50 compute-0 nova_compute[254092]: 2025-11-25 17:00:50.422 254096 INFO nova.virt.libvirt.driver [-] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Instance destroyed successfully.
Nov 25 17:00:50 compute-0 nova_compute[254092]: 2025-11-25 17:00:50.424 254096 DEBUG nova.objects.instance [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'resources' on Instance uuid 810cd712-872a-49a6-bf45-04a319ba8d57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:00:50 compute-0 nova_compute[254092]: 2025-11-25 17:00:50.428 254096 DEBUG nova.storage.rbd_utils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] resizing rbd image caf64ca2-5f73-454a-8442-9965c9853cba_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 17:00:50 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b63da8a018b74b76d1d785011fc9162769e6cf7218b88ff076794a3bb577c0f4-userdata-shm.mount: Deactivated successfully.
Nov 25 17:00:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-0170379c24e2e0c12161dc8a7845072d42d78dc45d297c38ef19f95a48d9a9cb-merged.mount: Deactivated successfully.
Nov 25 17:00:50 compute-0 podman[378447]: 2025-11-25 17:00:50.457790704 +0000 UTC m=+0.082895139 container cleanup b63da8a018b74b76d1d785011fc9162769e6cf7218b88ff076794a3bb577c0f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a6fc284a-0332-49e7-8f7e-5297640d0e32, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 17:00:50 compute-0 systemd[1]: libpod-conmon-b63da8a018b74b76d1d785011fc9162769e6cf7218b88ff076794a3bb577c0f4.scope: Deactivated successfully.
Nov 25 17:00:50 compute-0 nova_compute[254092]: 2025-11-25 17:00:50.472 254096 DEBUG nova.virt.libvirt.vif [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:00:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-869327373',display_name='tempest-TestNetworkBasicOps-server-869327373',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-869327373',id=112,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNuOSoLJisGupCr6c7mO9PpNQ98EI+RPV6Pzjh9gT1H1Xu3OmtIley7lcmeOBktNLnaIXezfnI8n1+SWLXO5zvxz2wr1BCBoDOK2e1yKu9xR1PRa/EJoxyZJU+8n+8oakg==',key_name='tempest-TestNetworkBasicOps-1211766568',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:00:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-fdgy2dxp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:00:28Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=810cd712-872a-49a6-bf45-04a319ba8d57,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fae63c22-d59b-4f75-9a0f-23ee49042eb6", "address": "fa:16:3e:f3:c0:ff", "network": {"id": "a6fc284a-0332-49e7-8f7e-5297640d0e32", "bridge": "br-int", "label": "tempest-network-smoke--220608266", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfae63c22-d5", "ovs_interfaceid": "fae63c22-d59b-4f75-9a0f-23ee49042eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:00:50 compute-0 nova_compute[254092]: 2025-11-25 17:00:50.473 254096 DEBUG nova.network.os_vif_util [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "fae63c22-d59b-4f75-9a0f-23ee49042eb6", "address": "fa:16:3e:f3:c0:ff", "network": {"id": "a6fc284a-0332-49e7-8f7e-5297640d0e32", "bridge": "br-int", "label": "tempest-network-smoke--220608266", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfae63c22-d5", "ovs_interfaceid": "fae63c22-d59b-4f75-9a0f-23ee49042eb6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:00:50 compute-0 nova_compute[254092]: 2025-11-25 17:00:50.474 254096 DEBUG nova.network.os_vif_util [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:c0:ff,bridge_name='br-int',has_traffic_filtering=True,id=fae63c22-d59b-4f75-9a0f-23ee49042eb6,network=Network(a6fc284a-0332-49e7-8f7e-5297640d0e32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfae63c22-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:00:50 compute-0 nova_compute[254092]: 2025-11-25 17:00:50.474 254096 DEBUG os_vif [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:c0:ff,bridge_name='br-int',has_traffic_filtering=True,id=fae63c22-d59b-4f75-9a0f-23ee49042eb6,network=Network(a6fc284a-0332-49e7-8f7e-5297640d0e32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfae63c22-d5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:00:50 compute-0 nova_compute[254092]: 2025-11-25 17:00:50.476 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:50 compute-0 nova_compute[254092]: 2025-11-25 17:00:50.476 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfae63c22-d5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:00:50 compute-0 nova_compute[254092]: 2025-11-25 17:00:50.478 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:50 compute-0 nova_compute[254092]: 2025-11-25 17:00:50.481 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:00:50 compute-0 nova_compute[254092]: 2025-11-25 17:00:50.483 254096 INFO os_vif [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:c0:ff,bridge_name='br-int',has_traffic_filtering=True,id=fae63c22-d59b-4f75-9a0f-23ee49042eb6,network=Network(a6fc284a-0332-49e7-8f7e-5297640d0e32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfae63c22-d5')
Nov 25 17:00:50 compute-0 podman[378534]: 2025-11-25 17:00:50.521533641 +0000 UTC m=+0.040618518 container remove b63da8a018b74b76d1d785011fc9162769e6cf7218b88ff076794a3bb577c0f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-a6fc284a-0332-49e7-8f7e-5297640d0e32, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 17:00:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:50.528 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c8330e4d-163c-4160-8834-421143777a2a]: (4, ('Tue Nov 25 05:00:50 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a6fc284a-0332-49e7-8f7e-5297640d0e32 (b63da8a018b74b76d1d785011fc9162769e6cf7218b88ff076794a3bb577c0f4)\nb63da8a018b74b76d1d785011fc9162769e6cf7218b88ff076794a3bb577c0f4\nTue Nov 25 05:00:50 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a6fc284a-0332-49e7-8f7e-5297640d0e32 (b63da8a018b74b76d1d785011fc9162769e6cf7218b88ff076794a3bb577c0f4)\nb63da8a018b74b76d1d785011fc9162769e6cf7218b88ff076794a3bb577c0f4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:00:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:50.529 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b0dae655-8fc3-46e3-877f-76df0d2b4c35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:00:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:50.530 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa6fc284a-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:00:50 compute-0 nova_compute[254092]: 2025-11-25 17:00:50.532 254096 DEBUG nova.objects.instance [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lazy-loading 'migration_context' on Instance uuid caf64ca2-5f73-454a-8442-9965c9853cba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:00:50 compute-0 kernel: tapa6fc284a-00: left promiscuous mode
Nov 25 17:00:50 compute-0 nova_compute[254092]: 2025-11-25 17:00:50.535 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:50 compute-0 nova_compute[254092]: 2025-11-25 17:00:50.537 254096 DEBUG nova.compute.manager [req-43db5c0e-6cbf-46ac-bab9-4ef97cefd4ef req-7a208fe8-4516-4403-9f7f-f203ad98c608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Received event network-vif-unplugged-fae63c22-d59b-4f75-9a0f-23ee49042eb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:00:50 compute-0 nova_compute[254092]: 2025-11-25 17:00:50.538 254096 DEBUG oslo_concurrency.lockutils [req-43db5c0e-6cbf-46ac-bab9-4ef97cefd4ef req-7a208fe8-4516-4403-9f7f-f203ad98c608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "810cd712-872a-49a6-bf45-04a319ba8d57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:00:50 compute-0 nova_compute[254092]: 2025-11-25 17:00:50.538 254096 DEBUG oslo_concurrency.lockutils [req-43db5c0e-6cbf-46ac-bab9-4ef97cefd4ef req-7a208fe8-4516-4403-9f7f-f203ad98c608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "810cd712-872a-49a6-bf45-04a319ba8d57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:00:50 compute-0 nova_compute[254092]: 2025-11-25 17:00:50.538 254096 DEBUG oslo_concurrency.lockutils [req-43db5c0e-6cbf-46ac-bab9-4ef97cefd4ef req-7a208fe8-4516-4403-9f7f-f203ad98c608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "810cd712-872a-49a6-bf45-04a319ba8d57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:00:50 compute-0 nova_compute[254092]: 2025-11-25 17:00:50.539 254096 DEBUG nova.compute.manager [req-43db5c0e-6cbf-46ac-bab9-4ef97cefd4ef req-7a208fe8-4516-4403-9f7f-f203ad98c608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] No waiting events found dispatching network-vif-unplugged-fae63c22-d59b-4f75-9a0f-23ee49042eb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:00:50 compute-0 nova_compute[254092]: 2025-11-25 17:00:50.539 254096 DEBUG nova.compute.manager [req-43db5c0e-6cbf-46ac-bab9-4ef97cefd4ef req-7a208fe8-4516-4403-9f7f-f203ad98c608 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Received event network-vif-unplugged-fae63c22-d59b-4f75-9a0f-23ee49042eb6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 17:00:50 compute-0 nova_compute[254092]: 2025-11-25 17:00:50.545 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:50.547 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[dc89bb4a-60fc-4ab8-9300-c5e357a43ff4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:00:50 compute-0 nova_compute[254092]: 2025-11-25 17:00:50.547 254096 DEBUG nova.virt.libvirt.driver [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 17:00:50 compute-0 nova_compute[254092]: 2025-11-25 17:00:50.548 254096 DEBUG nova.virt.libvirt.driver [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Ensure instance console log exists: /var/lib/nova/instances/caf64ca2-5f73-454a-8442-9965c9853cba/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 17:00:50 compute-0 nova_compute[254092]: 2025-11-25 17:00:50.548 254096 DEBUG oslo_concurrency.lockutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:00:50 compute-0 nova_compute[254092]: 2025-11-25 17:00:50.549 254096 DEBUG oslo_concurrency.lockutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:00:50 compute-0 nova_compute[254092]: 2025-11-25 17:00:50.550 254096 DEBUG oslo_concurrency.lockutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:00:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:50.574 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f6621e0a-c456-4054-a604-e2706fffdf23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:00:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:50.575 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[683c1471-1115-4aef-a410-8ef209fc44cd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:00:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:50.594 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[629447ec-5488-4d78-bb92-359e3c010510]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 648550, 'reachable_time': 26144, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378583, 'error': None, 'target': 'ovnmeta-a6fc284a-0332-49e7-8f7e-5297640d0e32', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:00:50 compute-0 systemd[1]: run-netns-ovnmeta\x2da6fc284a\x2d0332\x2d49e7\x2d8f7e\x2d5297640d0e32.mount: Deactivated successfully.
Nov 25 17:00:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:50.596 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a6fc284a-0332-49e7-8f7e-5297640d0e32 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 17:00:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:50.597 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[c5f11cf6-0ebc-42fa-b328-70341ace36c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:00:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:50.639 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:00:50 compute-0 nova_compute[254092]: 2025-11-25 17:00:50.639 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:50.640 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 17:00:50 compute-0 nova_compute[254092]: 2025-11-25 17:00:50.741 254096 DEBUG nova.network.neutron [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Successfully created port: 1caaa3da-b3eb-4441-b6b2-8eaa71146e77 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 17:00:50 compute-0 nova_compute[254092]: 2025-11-25 17:00:50.858 254096 INFO nova.virt.libvirt.driver [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Deleting instance files /var/lib/nova/instances/810cd712-872a-49a6-bf45-04a319ba8d57_del
Nov 25 17:00:50 compute-0 nova_compute[254092]: 2025-11-25 17:00:50.859 254096 INFO nova.virt.libvirt.driver [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Deletion of /var/lib/nova/instances/810cd712-872a-49a6-bf45-04a319ba8d57_del complete
Nov 25 17:00:50 compute-0 nova_compute[254092]: 2025-11-25 17:00:50.905 254096 INFO nova.compute.manager [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Took 0.77 seconds to destroy the instance on the hypervisor.
Nov 25 17:00:50 compute-0 nova_compute[254092]: 2025-11-25 17:00:50.906 254096 DEBUG oslo.service.loopingcall [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 17:00:50 compute-0 nova_compute[254092]: 2025-11-25 17:00:50.906 254096 DEBUG nova.compute.manager [-] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 17:00:50 compute-0 nova_compute[254092]: 2025-11-25 17:00:50.906 254096 DEBUG nova.network.neutron [-] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 17:00:51 compute-0 nova_compute[254092]: 2025-11-25 17:00:51.331 254096 DEBUG nova.network.neutron [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Successfully updated port: 1caaa3da-b3eb-4441-b6b2-8eaa71146e77 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:00:51 compute-0 nova_compute[254092]: 2025-11-25 17:00:51.344 254096 DEBUG oslo_concurrency.lockutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "refresh_cache-caf64ca2-5f73-454a-8442-9965c9853cba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:00:51 compute-0 nova_compute[254092]: 2025-11-25 17:00:51.345 254096 DEBUG oslo_concurrency.lockutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquired lock "refresh_cache-caf64ca2-5f73-454a-8442-9965c9853cba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:00:51 compute-0 nova_compute[254092]: 2025-11-25 17:00:51.345 254096 DEBUG nova.network.neutron [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 17:00:51 compute-0 nova_compute[254092]: 2025-11-25 17:00:51.429 254096 DEBUG nova.compute.manager [req-e031d9d7-0cf3-4ec8-8dd9-b9627a183694 req-e2d8ab73-3831-40ad-a1b4-c85ea3165c19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Received event network-changed-1caaa3da-b3eb-4441-b6b2-8eaa71146e77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:00:51 compute-0 nova_compute[254092]: 2025-11-25 17:00:51.429 254096 DEBUG nova.compute.manager [req-e031d9d7-0cf3-4ec8-8dd9-b9627a183694 req-e2d8ab73-3831-40ad-a1b4-c85ea3165c19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Refreshing instance network info cache due to event network-changed-1caaa3da-b3eb-4441-b6b2-8eaa71146e77. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:00:51 compute-0 nova_compute[254092]: 2025-11-25 17:00:51.429 254096 DEBUG oslo_concurrency.lockutils [req-e031d9d7-0cf3-4ec8-8dd9-b9627a183694 req-e2d8ab73-3831-40ad-a1b4-c85ea3165c19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-caf64ca2-5f73-454a-8442-9965c9853cba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:00:51 compute-0 nova_compute[254092]: 2025-11-25 17:00:51.448 254096 DEBUG nova.network.neutron [-] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:00:51 compute-0 nova_compute[254092]: 2025-11-25 17:00:51.462 254096 INFO nova.compute.manager [-] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Took 0.56 seconds to deallocate network for instance.
Nov 25 17:00:51 compute-0 nova_compute[254092]: 2025-11-25 17:00:51.470 254096 DEBUG nova.network.neutron [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:00:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:00:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:00:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:00:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:00:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001519118419124829 of space, bias 1.0, pg target 0.45573552573744874 quantized to 32 (current 32)
Nov 25 17:00:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:00:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:00:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:00:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:00:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:00:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 17:00:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:00:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:00:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:00:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:00:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:00:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:00:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:00:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:00:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:00:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:00:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:00:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:00:51 compute-0 nova_compute[254092]: 2025-11-25 17:00:51.502 254096 DEBUG oslo_concurrency.lockutils [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:00:51 compute-0 nova_compute[254092]: 2025-11-25 17:00:51.503 254096 DEBUG oslo_concurrency.lockutils [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:00:51 compute-0 nova_compute[254092]: 2025-11-25 17:00:51.585 254096 DEBUG oslo_concurrency.processutils [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:00:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:00:51 compute-0 ceph-mon[74985]: pgmap v2319: 321 pgs: 321 active+clean; 200 MiB data, 878 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:00:52 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:00:52 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4071352961' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:00:52 compute-0 nova_compute[254092]: 2025-11-25 17:00:52.074 254096 DEBUG oslo_concurrency.processutils [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:00:52 compute-0 nova_compute[254092]: 2025-11-25 17:00:52.079 254096 DEBUG nova.compute.provider_tree [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:00:52 compute-0 nova_compute[254092]: 2025-11-25 17:00:52.090 254096 DEBUG nova.scheduler.client.report [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:00:52 compute-0 nova_compute[254092]: 2025-11-25 17:00:52.106 254096 DEBUG oslo_concurrency.lockutils [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:00:52 compute-0 nova_compute[254092]: 2025-11-25 17:00:52.126 254096 INFO nova.scheduler.client.report [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Deleted allocations for instance 810cd712-872a-49a6-bf45-04a319ba8d57
Nov 25 17:00:52 compute-0 nova_compute[254092]: 2025-11-25 17:00:52.167 254096 DEBUG nova.network.neutron [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Updating instance_info_cache with network_info: [{"id": "1caaa3da-b3eb-4441-b6b2-8eaa71146e77", "address": "fa:16:3e:97:ec:69", "network": {"id": "0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2", "bridge": "br-int", "label": "tempest-network-smoke--80928445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1caaa3da-b3", "ovs_interfaceid": "1caaa3da-b3eb-4441-b6b2-8eaa71146e77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:00:52 compute-0 nova_compute[254092]: 2025-11-25 17:00:52.183 254096 DEBUG oslo_concurrency.lockutils [None req-41e3b7a8-7cc0-42bd-abbd-c9957d52a463 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "810cd712-872a-49a6-bf45-04a319ba8d57" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:00:52 compute-0 nova_compute[254092]: 2025-11-25 17:00:52.184 254096 DEBUG oslo_concurrency.lockutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Releasing lock "refresh_cache-caf64ca2-5f73-454a-8442-9965c9853cba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:00:52 compute-0 nova_compute[254092]: 2025-11-25 17:00:52.184 254096 DEBUG nova.compute.manager [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Instance network_info: |[{"id": "1caaa3da-b3eb-4441-b6b2-8eaa71146e77", "address": "fa:16:3e:97:ec:69", "network": {"id": "0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2", "bridge": "br-int", "label": "tempest-network-smoke--80928445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1caaa3da-b3", "ovs_interfaceid": "1caaa3da-b3eb-4441-b6b2-8eaa71146e77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 17:00:52 compute-0 nova_compute[254092]: 2025-11-25 17:00:52.185 254096 DEBUG oslo_concurrency.lockutils [req-e031d9d7-0cf3-4ec8-8dd9-b9627a183694 req-e2d8ab73-3831-40ad-a1b4-c85ea3165c19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-caf64ca2-5f73-454a-8442-9965c9853cba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:00:52 compute-0 nova_compute[254092]: 2025-11-25 17:00:52.185 254096 DEBUG nova.network.neutron [req-e031d9d7-0cf3-4ec8-8dd9-b9627a183694 req-e2d8ab73-3831-40ad-a1b4-c85ea3165c19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Refreshing network info cache for port 1caaa3da-b3eb-4441-b6b2-8eaa71146e77 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:00:52 compute-0 nova_compute[254092]: 2025-11-25 17:00:52.187 254096 DEBUG nova.virt.libvirt.driver [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Start _get_guest_xml network_info=[{"id": "1caaa3da-b3eb-4441-b6b2-8eaa71146e77", "address": "fa:16:3e:97:ec:69", "network": {"id": "0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2", "bridge": "br-int", "label": "tempest-network-smoke--80928445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1caaa3da-b3", "ovs_interfaceid": "1caaa3da-b3eb-4441-b6b2-8eaa71146e77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 17:00:52 compute-0 nova_compute[254092]: 2025-11-25 17:00:52.191 254096 WARNING nova.virt.libvirt.driver [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:00:52 compute-0 nova_compute[254092]: 2025-11-25 17:00:52.196 254096 DEBUG nova.virt.libvirt.host [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 17:00:52 compute-0 nova_compute[254092]: 2025-11-25 17:00:52.196 254096 DEBUG nova.virt.libvirt.host [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 17:00:52 compute-0 nova_compute[254092]: 2025-11-25 17:00:52.202 254096 DEBUG nova.virt.libvirt.host [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 17:00:52 compute-0 nova_compute[254092]: 2025-11-25 17:00:52.203 254096 DEBUG nova.virt.libvirt.host [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 17:00:52 compute-0 nova_compute[254092]: 2025-11-25 17:00:52.203 254096 DEBUG nova.virt.libvirt.driver [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 17:00:52 compute-0 nova_compute[254092]: 2025-11-25 17:00:52.203 254096 DEBUG nova.virt.hardware [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 17:00:52 compute-0 nova_compute[254092]: 2025-11-25 17:00:52.204 254096 DEBUG nova.virt.hardware [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 17:00:52 compute-0 nova_compute[254092]: 2025-11-25 17:00:52.204 254096 DEBUG nova.virt.hardware [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 17:00:52 compute-0 nova_compute[254092]: 2025-11-25 17:00:52.204 254096 DEBUG nova.virt.hardware [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 17:00:52 compute-0 nova_compute[254092]: 2025-11-25 17:00:52.204 254096 DEBUG nova.virt.hardware [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 17:00:52 compute-0 nova_compute[254092]: 2025-11-25 17:00:52.205 254096 DEBUG nova.virt.hardware [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 17:00:52 compute-0 nova_compute[254092]: 2025-11-25 17:00:52.205 254096 DEBUG nova.virt.hardware [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 17:00:52 compute-0 nova_compute[254092]: 2025-11-25 17:00:52.205 254096 DEBUG nova.virt.hardware [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 17:00:52 compute-0 nova_compute[254092]: 2025-11-25 17:00:52.205 254096 DEBUG nova.virt.hardware [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 17:00:52 compute-0 nova_compute[254092]: 2025-11-25 17:00:52.205 254096 DEBUG nova.virt.hardware [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 17:00:52 compute-0 nova_compute[254092]: 2025-11-25 17:00:52.206 254096 DEBUG nova.virt.hardware [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 17:00:52 compute-0 nova_compute[254092]: 2025-11-25 17:00:52.209 254096 DEBUG oslo_concurrency.processutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:00:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2320: 321 pgs: 321 active+clean; 185 MiB data, 883 MiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 2.5 MiB/s wr, 93 op/s
Nov 25 17:00:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:52.643 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:00:52 compute-0 nova_compute[254092]: 2025-11-25 17:00:52.650 254096 DEBUG nova.compute.manager [req-9c10ea2c-2464-4c34-a358-b4f1a88dfcae req-6bf1a60a-b509-4c74-bf5e-daad135fae90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Received event network-vif-plugged-fae63c22-d59b-4f75-9a0f-23ee49042eb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:00:52 compute-0 nova_compute[254092]: 2025-11-25 17:00:52.651 254096 DEBUG oslo_concurrency.lockutils [req-9c10ea2c-2464-4c34-a358-b4f1a88dfcae req-6bf1a60a-b509-4c74-bf5e-daad135fae90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "810cd712-872a-49a6-bf45-04a319ba8d57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:00:52 compute-0 nova_compute[254092]: 2025-11-25 17:00:52.651 254096 DEBUG oslo_concurrency.lockutils [req-9c10ea2c-2464-4c34-a358-b4f1a88dfcae req-6bf1a60a-b509-4c74-bf5e-daad135fae90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "810cd712-872a-49a6-bf45-04a319ba8d57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:00:52 compute-0 nova_compute[254092]: 2025-11-25 17:00:52.652 254096 DEBUG oslo_concurrency.lockutils [req-9c10ea2c-2464-4c34-a358-b4f1a88dfcae req-6bf1a60a-b509-4c74-bf5e-daad135fae90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "810cd712-872a-49a6-bf45-04a319ba8d57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:00:52 compute-0 nova_compute[254092]: 2025-11-25 17:00:52.652 254096 DEBUG nova.compute.manager [req-9c10ea2c-2464-4c34-a358-b4f1a88dfcae req-6bf1a60a-b509-4c74-bf5e-daad135fae90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] No waiting events found dispatching network-vif-plugged-fae63c22-d59b-4f75-9a0f-23ee49042eb6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:00:52 compute-0 nova_compute[254092]: 2025-11-25 17:00:52.652 254096 WARNING nova.compute.manager [req-9c10ea2c-2464-4c34-a358-b4f1a88dfcae req-6bf1a60a-b509-4c74-bf5e-daad135fae90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Received unexpected event network-vif-plugged-fae63c22-d59b-4f75-9a0f-23ee49042eb6 for instance with vm_state deleted and task_state None.
Nov 25 17:00:52 compute-0 nova_compute[254092]: 2025-11-25 17:00:52.652 254096 DEBUG nova.compute.manager [req-9c10ea2c-2464-4c34-a358-b4f1a88dfcae req-6bf1a60a-b509-4c74-bf5e-daad135fae90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Received event network-vif-deleted-fae63c22-d59b-4f75-9a0f-23ee49042eb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:00:52 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:00:52 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/131512354' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:00:52 compute-0 nova_compute[254092]: 2025-11-25 17:00:52.706 254096 DEBUG oslo_concurrency.processutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:00:52 compute-0 nova_compute[254092]: 2025-11-25 17:00:52.733 254096 DEBUG nova.storage.rbd_utils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image caf64ca2-5f73-454a-8442-9965c9853cba_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:00:52 compute-0 nova_compute[254092]: 2025-11-25 17:00:52.737 254096 DEBUG oslo_concurrency.processutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:00:52 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4071352961' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:00:52 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/131512354' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:00:53 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:00:53 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3399783084' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:00:53 compute-0 nova_compute[254092]: 2025-11-25 17:00:53.231 254096 DEBUG oslo_concurrency.processutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:00:53 compute-0 nova_compute[254092]: 2025-11-25 17:00:53.233 254096 DEBUG nova.virt.libvirt.vif [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:00:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1085347357',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1085347357',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-804365052-acc',id=113,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC5IGkA0EZcfpUrnJAPAyNPE3aP4ux+1YVZrN6xmNxNmyBv5luv7uNh5XsGgePIRhuMTv5vEwnWkpC0iguYDb2SFlQPUW7qNQRGe9ic9lTfmn148JQBqNQ9VGr6RxpqguQ==',key_name='tempest-TestSecurityGroupsBasicOps-1631979734',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f19d2d0776e84c3c99c640c0b7ed3743',ramdisk_id='',reservation_id='r-o6t5fl29',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-804365052',owner_user_name='tempest-TestSecurityGroupsBasicOps-804365052-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:00:49Z,user_data=None,user_id='72f29c5d97464520bee19cb128258fd5',uuid=caf64ca2-5f73-454a-8442-9965c9853cba,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1caaa3da-b3eb-4441-b6b2-8eaa71146e77", "address": "fa:16:3e:97:ec:69", "network": {"id": "0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2", "bridge": "br-int", "label": "tempest-network-smoke--80928445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1caaa3da-b3", "ovs_interfaceid": "1caaa3da-b3eb-4441-b6b2-8eaa71146e77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:00:53 compute-0 nova_compute[254092]: 2025-11-25 17:00:53.234 254096 DEBUG nova.network.os_vif_util [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converting VIF {"id": "1caaa3da-b3eb-4441-b6b2-8eaa71146e77", "address": "fa:16:3e:97:ec:69", "network": {"id": "0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2", "bridge": "br-int", "label": "tempest-network-smoke--80928445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1caaa3da-b3", "ovs_interfaceid": "1caaa3da-b3eb-4441-b6b2-8eaa71146e77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:00:53 compute-0 nova_compute[254092]: 2025-11-25 17:00:53.235 254096 DEBUG nova.network.os_vif_util [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:ec:69,bridge_name='br-int',has_traffic_filtering=True,id=1caaa3da-b3eb-4441-b6b2-8eaa71146e77,network=Network(0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1caaa3da-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:00:53 compute-0 nova_compute[254092]: 2025-11-25 17:00:53.236 254096 DEBUG nova.objects.instance [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lazy-loading 'pci_devices' on Instance uuid caf64ca2-5f73-454a-8442-9965c9853cba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:00:53 compute-0 nova_compute[254092]: 2025-11-25 17:00:53.249 254096 DEBUG nova.virt.libvirt.driver [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] End _get_guest_xml xml=<domain type="kvm">
Nov 25 17:00:53 compute-0 nova_compute[254092]:   <uuid>caf64ca2-5f73-454a-8442-9965c9853cba</uuid>
Nov 25 17:00:53 compute-0 nova_compute[254092]:   <name>instance-00000071</name>
Nov 25 17:00:53 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 17:00:53 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 17:00:53 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:00:53 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1085347357</nova:name>
Nov 25 17:00:53 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 17:00:52</nova:creationTime>
Nov 25 17:00:53 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 17:00:53 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 17:00:53 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 17:00:53 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 17:00:53 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:00:53 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 17:00:53 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 17:00:53 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 17:00:53 compute-0 nova_compute[254092]:         <nova:user uuid="72f29c5d97464520bee19cb128258fd5">tempest-TestSecurityGroupsBasicOps-804365052-project-member</nova:user>
Nov 25 17:00:53 compute-0 nova_compute[254092]:         <nova:project uuid="f19d2d0776e84c3c99c640c0b7ed3743">tempest-TestSecurityGroupsBasicOps-804365052</nova:project>
Nov 25 17:00:53 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 17:00:53 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 17:00:53 compute-0 nova_compute[254092]:         <nova:port uuid="1caaa3da-b3eb-4441-b6b2-8eaa71146e77">
Nov 25 17:00:53 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 17:00:53 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 17:00:53 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:00:53 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 17:00:53 compute-0 nova_compute[254092]:     <system>
Nov 25 17:00:53 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 17:00:53 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 17:00:53 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:00:53 compute-0 nova_compute[254092]:       <entry name="serial">caf64ca2-5f73-454a-8442-9965c9853cba</entry>
Nov 25 17:00:53 compute-0 nova_compute[254092]:       <entry name="uuid">caf64ca2-5f73-454a-8442-9965c9853cba</entry>
Nov 25 17:00:53 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     </system>
Nov 25 17:00:53 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:00:53 compute-0 nova_compute[254092]:   <os>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:   </os>
Nov 25 17:00:53 compute-0 nova_compute[254092]:   <features>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:   </features>
Nov 25 17:00:53 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 17:00:53 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:00:53 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 17:00:53 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:00:53 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 17:00:53 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/caf64ca2-5f73-454a-8442-9965c9853cba_disk">
Nov 25 17:00:53 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:       </source>
Nov 25 17:00:53 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:00:53 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:00:53 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 17:00:53 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/caf64ca2-5f73-454a-8442-9965c9853cba_disk.config">
Nov 25 17:00:53 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:       </source>
Nov 25 17:00:53 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:00:53 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:00:53 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 17:00:53 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:97:ec:69"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:       <target dev="tap1caaa3da-b3"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 17:00:53 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/caf64ca2-5f73-454a-8442-9965c9853cba/console.log" append="off"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     <video>
Nov 25 17:00:53 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     </video>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 17:00:53 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 17:00:53 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 17:00:53 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:00:53 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:00:53 compute-0 nova_compute[254092]: </domain>
Nov 25 17:00:53 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 17:00:53 compute-0 nova_compute[254092]: 2025-11-25 17:00:53.251 254096 DEBUG nova.compute.manager [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Preparing to wait for external event network-vif-plugged-1caaa3da-b3eb-4441-b6b2-8eaa71146e77 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 17:00:53 compute-0 nova_compute[254092]: 2025-11-25 17:00:53.252 254096 DEBUG oslo_concurrency.lockutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "caf64ca2-5f73-454a-8442-9965c9853cba-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:00:53 compute-0 nova_compute[254092]: 2025-11-25 17:00:53.252 254096 DEBUG oslo_concurrency.lockutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "caf64ca2-5f73-454a-8442-9965c9853cba-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:00:53 compute-0 nova_compute[254092]: 2025-11-25 17:00:53.252 254096 DEBUG oslo_concurrency.lockutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "caf64ca2-5f73-454a-8442-9965c9853cba-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:00:53 compute-0 nova_compute[254092]: 2025-11-25 17:00:53.253 254096 DEBUG nova.virt.libvirt.vif [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:00:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1085347357',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1085347357',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-804365052-acc',id=113,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC5IGkA0EZcfpUrnJAPAyNPE3aP4ux+1YVZrN6xmNxNmyBv5luv7uNh5XsGgePIRhuMTv5vEwnWkpC0iguYDb2SFlQPUW7qNQRGe9ic9lTfmn148JQBqNQ9VGr6RxpqguQ==',key_name='tempest-TestSecurityGroupsBasicOps-1631979734',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f19d2d0776e84c3c99c640c0b7ed3743',ramdisk_id='',reservation_id='r-o6t5fl29',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-804365052',owner_user_name='tempest-TestSecurityGroupsBasicOps-804365052-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:00:49Z,user_data=None,user_id='72f29c5d97464520bee19cb128258fd5',uuid=caf64ca2-5f73-454a-8442-9965c9853cba,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1caaa3da-b3eb-4441-b6b2-8eaa71146e77", "address": "fa:16:3e:97:ec:69", "network": {"id": "0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2", "bridge": "br-int", "label": "tempest-network-smoke--80928445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1caaa3da-b3", "ovs_interfaceid": "1caaa3da-b3eb-4441-b6b2-8eaa71146e77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:00:53 compute-0 nova_compute[254092]: 2025-11-25 17:00:53.253 254096 DEBUG nova.network.os_vif_util [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converting VIF {"id": "1caaa3da-b3eb-4441-b6b2-8eaa71146e77", "address": "fa:16:3e:97:ec:69", "network": {"id": "0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2", "bridge": "br-int", "label": "tempest-network-smoke--80928445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1caaa3da-b3", "ovs_interfaceid": "1caaa3da-b3eb-4441-b6b2-8eaa71146e77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:00:53 compute-0 nova_compute[254092]: 2025-11-25 17:00:53.254 254096 DEBUG nova.network.os_vif_util [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:ec:69,bridge_name='br-int',has_traffic_filtering=True,id=1caaa3da-b3eb-4441-b6b2-8eaa71146e77,network=Network(0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1caaa3da-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:00:53 compute-0 nova_compute[254092]: 2025-11-25 17:00:53.254 254096 DEBUG os_vif [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:ec:69,bridge_name='br-int',has_traffic_filtering=True,id=1caaa3da-b3eb-4441-b6b2-8eaa71146e77,network=Network(0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1caaa3da-b3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:00:53 compute-0 nova_compute[254092]: 2025-11-25 17:00:53.255 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:53 compute-0 nova_compute[254092]: 2025-11-25 17:00:53.256 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:00:53 compute-0 nova_compute[254092]: 2025-11-25 17:00:53.256 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:00:53 compute-0 nova_compute[254092]: 2025-11-25 17:00:53.260 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:53 compute-0 nova_compute[254092]: 2025-11-25 17:00:53.260 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1caaa3da-b3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:00:53 compute-0 nova_compute[254092]: 2025-11-25 17:00:53.260 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1caaa3da-b3, col_values=(('external_ids', {'iface-id': '1caaa3da-b3eb-4441-b6b2-8eaa71146e77', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:97:ec:69', 'vm-uuid': 'caf64ca2-5f73-454a-8442-9965c9853cba'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:00:53 compute-0 nova_compute[254092]: 2025-11-25 17:00:53.262 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:53 compute-0 NetworkManager[48891]: <info>  [1764090053.2638] manager: (tap1caaa3da-b3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/478)
Nov 25 17:00:53 compute-0 nova_compute[254092]: 2025-11-25 17:00:53.265 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:00:53 compute-0 nova_compute[254092]: 2025-11-25 17:00:53.267 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:53 compute-0 nova_compute[254092]: 2025-11-25 17:00:53.267 254096 INFO os_vif [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:ec:69,bridge_name='br-int',has_traffic_filtering=True,id=1caaa3da-b3eb-4441-b6b2-8eaa71146e77,network=Network(0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1caaa3da-b3')
Nov 25 17:00:53 compute-0 nova_compute[254092]: 2025-11-25 17:00:53.325 254096 DEBUG nova.virt.libvirt.driver [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:00:53 compute-0 nova_compute[254092]: 2025-11-25 17:00:53.325 254096 DEBUG nova.virt.libvirt.driver [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:00:53 compute-0 nova_compute[254092]: 2025-11-25 17:00:53.326 254096 DEBUG nova.virt.libvirt.driver [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] No VIF found with MAC fa:16:3e:97:ec:69, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:00:53 compute-0 nova_compute[254092]: 2025-11-25 17:00:53.326 254096 INFO nova.virt.libvirt.driver [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Using config drive
Nov 25 17:00:53 compute-0 nova_compute[254092]: 2025-11-25 17:00:53.359 254096 DEBUG nova.storage.rbd_utils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image caf64ca2-5f73-454a-8442-9965c9853cba_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:00:53 compute-0 nova_compute[254092]: 2025-11-25 17:00:53.458 254096 DEBUG nova.network.neutron [req-e031d9d7-0cf3-4ec8-8dd9-b9627a183694 req-e2d8ab73-3831-40ad-a1b4-c85ea3165c19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Updated VIF entry in instance network info cache for port 1caaa3da-b3eb-4441-b6b2-8eaa71146e77. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:00:53 compute-0 nova_compute[254092]: 2025-11-25 17:00:53.459 254096 DEBUG nova.network.neutron [req-e031d9d7-0cf3-4ec8-8dd9-b9627a183694 req-e2d8ab73-3831-40ad-a1b4-c85ea3165c19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Updating instance_info_cache with network_info: [{"id": "1caaa3da-b3eb-4441-b6b2-8eaa71146e77", "address": "fa:16:3e:97:ec:69", "network": {"id": "0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2", "bridge": "br-int", "label": "tempest-network-smoke--80928445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1caaa3da-b3", "ovs_interfaceid": "1caaa3da-b3eb-4441-b6b2-8eaa71146e77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:00:53 compute-0 nova_compute[254092]: 2025-11-25 17:00:53.472 254096 DEBUG oslo_concurrency.lockutils [req-e031d9d7-0cf3-4ec8-8dd9-b9627a183694 req-e2d8ab73-3831-40ad-a1b4-c85ea3165c19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-caf64ca2-5f73-454a-8442-9965c9853cba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:00:53 compute-0 nova_compute[254092]: 2025-11-25 17:00:53.739 254096 INFO nova.virt.libvirt.driver [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Creating config drive at /var/lib/nova/instances/caf64ca2-5f73-454a-8442-9965c9853cba/disk.config
Nov 25 17:00:53 compute-0 nova_compute[254092]: 2025-11-25 17:00:53.747 254096 DEBUG oslo_concurrency.processutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/caf64ca2-5f73-454a-8442-9965c9853cba/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp73tfq0hk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:00:53 compute-0 ceph-mon[74985]: pgmap v2320: 321 pgs: 321 active+clean; 185 MiB data, 883 MiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 2.5 MiB/s wr, 93 op/s
Nov 25 17:00:53 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3399783084' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:00:53 compute-0 nova_compute[254092]: 2025-11-25 17:00:53.891 254096 DEBUG oslo_concurrency.processutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/caf64ca2-5f73-454a-8442-9965c9853cba/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp73tfq0hk" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:00:53 compute-0 nova_compute[254092]: 2025-11-25 17:00:53.923 254096 DEBUG nova.storage.rbd_utils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image caf64ca2-5f73-454a-8442-9965c9853cba_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:00:53 compute-0 nova_compute[254092]: 2025-11-25 17:00:53.929 254096 DEBUG oslo_concurrency.processutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/caf64ca2-5f73-454a-8442-9965c9853cba/disk.config caf64ca2-5f73-454a-8442-9965c9853cba_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:00:54 compute-0 nova_compute[254092]: 2025-11-25 17:00:54.068 254096 DEBUG oslo_concurrency.processutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/caf64ca2-5f73-454a-8442-9965c9853cba/disk.config caf64ca2-5f73-454a-8442-9965c9853cba_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:00:54 compute-0 nova_compute[254092]: 2025-11-25 17:00:54.068 254096 INFO nova.virt.libvirt.driver [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Deleting local config drive /var/lib/nova/instances/caf64ca2-5f73-454a-8442-9965c9853cba/disk.config because it was imported into RBD.
Nov 25 17:00:54 compute-0 kernel: tap1caaa3da-b3: entered promiscuous mode
Nov 25 17:00:54 compute-0 NetworkManager[48891]: <info>  [1764090054.1252] manager: (tap1caaa3da-b3): new Tun device (/org/freedesktop/NetworkManager/Devices/479)
Nov 25 17:00:54 compute-0 ovn_controller[153477]: 2025-11-25T17:00:54Z|01161|binding|INFO|Claiming lport 1caaa3da-b3eb-4441-b6b2-8eaa71146e77 for this chassis.
Nov 25 17:00:54 compute-0 ovn_controller[153477]: 2025-11-25T17:00:54Z|01162|binding|INFO|1caaa3da-b3eb-4441-b6b2-8eaa71146e77: Claiming fa:16:3e:97:ec:69 10.100.0.8
Nov 25 17:00:54 compute-0 nova_compute[254092]: 2025-11-25 17:00:54.125 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.136 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:ec:69 10.100.0.8'], port_security=['fa:16:3e:97:ec:69 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'caf64ca2-5f73-454a-8442-9965c9853cba', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f19d2d0776e84c3c99c640c0b7ed3743', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7b034178-39ad-4db7-adab-aaf6bc34bd4a e7198f6b-79d7-48d7-845d-93c396c87f35', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5e96b6e1-6935-4458-bc78-50ea3ed2412d, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1caaa3da-b3eb-4441-b6b2-8eaa71146e77) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.137 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1caaa3da-b3eb-4441-b6b2-8eaa71146e77 in datapath 0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2 bound to our chassis
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.138 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2
Nov 25 17:00:54 compute-0 ovn_controller[153477]: 2025-11-25T17:00:54Z|01163|binding|INFO|Setting lport 1caaa3da-b3eb-4441-b6b2-8eaa71146e77 ovn-installed in OVS
Nov 25 17:00:54 compute-0 ovn_controller[153477]: 2025-11-25T17:00:54Z|01164|binding|INFO|Setting lport 1caaa3da-b3eb-4441-b6b2-8eaa71146e77 up in Southbound
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.149 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[94112d3c-1606-4766-942d-0ec10c34b944]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.150 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0f953cb4-a1 in ovnmeta-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 17:00:54 compute-0 nova_compute[254092]: 2025-11-25 17:00:54.150 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.152 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0f953cb4-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.152 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[22ef31e8-8037-4d86-a672-3ed41aa5f325]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.153 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7ec9856c-fe68-4231-9ed0-7a447d1b91a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:00:54 compute-0 nova_compute[254092]: 2025-11-25 17:00:54.154 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:54 compute-0 systemd-machined[216343]: New machine qemu-145-instance-00000071.
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.167 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[a991557e-0812-4661-9a3f-cab24d365327]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.180 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[147cef6a-29f9-4ea7-99e0-2b4c97c9ede5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:00:54 compute-0 systemd[1]: Started Virtual Machine qemu-145-instance-00000071.
Nov 25 17:00:54 compute-0 systemd-udevd[378746]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:00:54 compute-0 NetworkManager[48891]: <info>  [1764090054.2094] device (tap1caaa3da-b3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:00:54 compute-0 NetworkManager[48891]: <info>  [1764090054.2105] device (tap1caaa3da-b3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.216 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e489d2f5-4396-403a-9f2e-631aaf9cd4bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:00:54 compute-0 systemd-udevd[378749]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:00:54 compute-0 NetworkManager[48891]: <info>  [1764090054.2242] manager: (tap0f953cb4-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/480)
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.222 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b503125a-2f5c-4bbf-944c-55b3ef1dbc0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:00:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2321: 321 pgs: 321 active+clean; 185 MiB data, 883 MiB used, 59 GiB / 60 GiB avail; 161 KiB/s rd, 2.0 MiB/s wr, 72 op/s
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.256 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[2e3d84b5-5c6b-4b95-b8bb-940c5c3ebde5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.261 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f3926b9e-1680-4d25-8ce1-b47c9dc0875c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:00:54 compute-0 NetworkManager[48891]: <info>  [1764090054.2848] device (tap0f953cb4-a0): carrier: link connected
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.288 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[250eab1f-98ad-410b-a8e8-9276fe68d55f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.304 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a91bf7e4-80a9-4060-a982-ab595424c63c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0f953cb4-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1e:94:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 343], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 651187, 'reachable_time': 29997, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378775, 'error': None, 'target': 'ovnmeta-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.321 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b78c97ae-d3f0-4af5-bd1a-9d0cc58f793e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1e:942b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651187, 'tstamp': 651187}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 378776, 'error': None, 'target': 'ovnmeta-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.337 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a2c7e538-f8ec-46a4-996d-772747a5f503]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0f953cb4-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1e:94:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 343], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 651187, 'reachable_time': 29997, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 378777, 'error': None, 'target': 'ovnmeta-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:00:54 compute-0 ovn_controller[153477]: 2025-11-25T17:00:54Z|01165|binding|INFO|Releasing lport ecc3dd32-bc67-4be1-9d9a-caf7485b6c03 from this chassis (sb_readonly=0)
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.369 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1b8a4188-4774-46b7-9f10-4d1e902c1ec7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:00:54 compute-0 nova_compute[254092]: 2025-11-25 17:00:54.385 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:54 compute-0 nova_compute[254092]: 2025-11-25 17:00:54.390 254096 DEBUG nova.compute.manager [req-4c38600d-4db7-4492-9ad3-6abccb8f9100 req-be7777e6-abb7-4ca2-ba72-aaaa046cd91f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Received event network-vif-plugged-1caaa3da-b3eb-4441-b6b2-8eaa71146e77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:00:54 compute-0 nova_compute[254092]: 2025-11-25 17:00:54.391 254096 DEBUG oslo_concurrency.lockutils [req-4c38600d-4db7-4492-9ad3-6abccb8f9100 req-be7777e6-abb7-4ca2-ba72-aaaa046cd91f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "caf64ca2-5f73-454a-8442-9965c9853cba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:00:54 compute-0 nova_compute[254092]: 2025-11-25 17:00:54.391 254096 DEBUG oslo_concurrency.lockutils [req-4c38600d-4db7-4492-9ad3-6abccb8f9100 req-be7777e6-abb7-4ca2-ba72-aaaa046cd91f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "caf64ca2-5f73-454a-8442-9965c9853cba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:00:54 compute-0 nova_compute[254092]: 2025-11-25 17:00:54.391 254096 DEBUG oslo_concurrency.lockutils [req-4c38600d-4db7-4492-9ad3-6abccb8f9100 req-be7777e6-abb7-4ca2-ba72-aaaa046cd91f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "caf64ca2-5f73-454a-8442-9965c9853cba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:00:54 compute-0 nova_compute[254092]: 2025-11-25 17:00:54.391 254096 DEBUG nova.compute.manager [req-4c38600d-4db7-4492-9ad3-6abccb8f9100 req-be7777e6-abb7-4ca2-ba72-aaaa046cd91f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Processing event network-vif-plugged-1caaa3da-b3eb-4441-b6b2-8eaa71146e77 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.430 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e5c5bc34-1dd8-4abe-a716-cc617f3c7ba6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.431 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0f953cb4-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.431 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.431 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0f953cb4-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:00:54 compute-0 nova_compute[254092]: 2025-11-25 17:00:54.433 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:54 compute-0 NetworkManager[48891]: <info>  [1764090054.4338] manager: (tap0f953cb4-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/481)
Nov 25 17:00:54 compute-0 kernel: tap0f953cb4-a0: entered promiscuous mode
Nov 25 17:00:54 compute-0 nova_compute[254092]: 2025-11-25 17:00:54.436 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.437 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0f953cb4-a0, col_values=(('external_ids', {'iface-id': '2166482b-c36e-4cfe-a45d-40e1c4e6a3e6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:00:54 compute-0 nova_compute[254092]: 2025-11-25 17:00:54.438 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:54 compute-0 ovn_controller[153477]: 2025-11-25T17:00:54Z|01166|binding|INFO|Releasing lport 2166482b-c36e-4cfe-a45d-40e1c4e6a3e6 from this chassis (sb_readonly=0)
Nov 25 17:00:54 compute-0 nova_compute[254092]: 2025-11-25 17:00:54.457 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:54 compute-0 nova_compute[254092]: 2025-11-25 17:00:54.457 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.458 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.459 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[615c5c57-1119-4981-9984-85376d369e05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.461 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]: global
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2.pid.haproxy
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 17:00:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:54.461 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2', 'env', 'PROCESS_TAG=haproxy-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 17:00:54 compute-0 nova_compute[254092]: 2025-11-25 17:00:54.714 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:54 compute-0 podman[378842]: 2025-11-25 17:00:54.856011274 +0000 UTC m=+0.044811942 container create 35e8d2270953bd1a25fea7d9c8adaada067b7f2a8ab10919807214e78ffaaf72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 25 17:00:54 compute-0 systemd[1]: Started libpod-conmon-35e8d2270953bd1a25fea7d9c8adaada067b7f2a8ab10919807214e78ffaaf72.scope.
Nov 25 17:00:54 compute-0 nova_compute[254092]: 2025-11-25 17:00:54.914 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090054.914092, caf64ca2-5f73-454a-8442-9965c9853cba => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:00:54 compute-0 nova_compute[254092]: 2025-11-25 17:00:54.915 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] VM Started (Lifecycle Event)
Nov 25 17:00:54 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:00:54 compute-0 nova_compute[254092]: 2025-11-25 17:00:54.917 254096 DEBUG nova.compute.manager [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 17:00:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a186dab47ed7f953257a28b5bb609e6cd78bd265db3c45fccf8e31014045c990/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 17:00:54 compute-0 nova_compute[254092]: 2025-11-25 17:00:54.921 254096 DEBUG nova.compute.manager [req-fa104a96-1425-416f-96d5-1ced0aff36c8 req-ba15338c-334f-4806-a765-196afa8ad6a3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Received event network-changed-12c882d8-4cd5-4233-8d3b-650401885991 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:00:54 compute-0 nova_compute[254092]: 2025-11-25 17:00:54.921 254096 DEBUG nova.compute.manager [req-fa104a96-1425-416f-96d5-1ced0aff36c8 req-ba15338c-334f-4806-a765-196afa8ad6a3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Refreshing instance network info cache due to event network-changed-12c882d8-4cd5-4233-8d3b-650401885991. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:00:54 compute-0 nova_compute[254092]: 2025-11-25 17:00:54.921 254096 DEBUG oslo_concurrency.lockutils [req-fa104a96-1425-416f-96d5-1ced0aff36c8 req-ba15338c-334f-4806-a765-196afa8ad6a3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-d643dc57-9536-4a67-9a17-c20512710ea5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:00:54 compute-0 nova_compute[254092]: 2025-11-25 17:00:54.922 254096 DEBUG oslo_concurrency.lockutils [req-fa104a96-1425-416f-96d5-1ced0aff36c8 req-ba15338c-334f-4806-a765-196afa8ad6a3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-d643dc57-9536-4a67-9a17-c20512710ea5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:00:54 compute-0 nova_compute[254092]: 2025-11-25 17:00:54.922 254096 DEBUG nova.network.neutron [req-fa104a96-1425-416f-96d5-1ced0aff36c8 req-ba15338c-334f-4806-a765-196afa8ad6a3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Refreshing network info cache for port 12c882d8-4cd5-4233-8d3b-650401885991 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:00:54 compute-0 nova_compute[254092]: 2025-11-25 17:00:54.923 254096 DEBUG nova.virt.libvirt.driver [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 17:00:54 compute-0 nova_compute[254092]: 2025-11-25 17:00:54.926 254096 INFO nova.virt.libvirt.driver [-] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Instance spawned successfully.
Nov 25 17:00:54 compute-0 nova_compute[254092]: 2025-11-25 17:00:54.926 254096 DEBUG nova.virt.libvirt.driver [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 17:00:54 compute-0 podman[378842]: 2025-11-25 17:00:54.834196309 +0000 UTC m=+0.022997007 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 17:00:54 compute-0 podman[378842]: 2025-11-25 17:00:54.932921158 +0000 UTC m=+0.121721856 container init 35e8d2270953bd1a25fea7d9c8adaada067b7f2a8ab10919807214e78ffaaf72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 17:00:54 compute-0 podman[378842]: 2025-11-25 17:00:54.941101281 +0000 UTC m=+0.129901959 container start 35e8d2270953bd1a25fea7d9c8adaada067b7f2a8ab10919807214e78ffaaf72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 17:00:54 compute-0 nova_compute[254092]: 2025-11-25 17:00:54.947 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:00:54 compute-0 nova_compute[254092]: 2025-11-25 17:00:54.950 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:00:54 compute-0 nova_compute[254092]: 2025-11-25 17:00:54.958 254096 DEBUG nova.virt.libvirt.driver [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:00:54 compute-0 neutron-haproxy-ovnmeta-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2[378866]: [NOTICE]   (378870) : New worker (378872) forked
Nov 25 17:00:54 compute-0 neutron-haproxy-ovnmeta-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2[378866]: [NOTICE]   (378870) : Loading success.
Nov 25 17:00:54 compute-0 nova_compute[254092]: 2025-11-25 17:00:54.959 254096 DEBUG nova.virt.libvirt.driver [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:00:54 compute-0 nova_compute[254092]: 2025-11-25 17:00:54.960 254096 DEBUG nova.virt.libvirt.driver [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:00:54 compute-0 nova_compute[254092]: 2025-11-25 17:00:54.960 254096 DEBUG nova.virt.libvirt.driver [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:00:54 compute-0 nova_compute[254092]: 2025-11-25 17:00:54.960 254096 DEBUG nova.virt.libvirt.driver [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:00:54 compute-0 nova_compute[254092]: 2025-11-25 17:00:54.961 254096 DEBUG nova.virt.libvirt.driver [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:00:54 compute-0 nova_compute[254092]: 2025-11-25 17:00:54.964 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:00:54 compute-0 nova_compute[254092]: 2025-11-25 17:00:54.965 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090054.914177, caf64ca2-5f73-454a-8442-9965c9853cba => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:00:54 compute-0 nova_compute[254092]: 2025-11-25 17:00:54.965 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] VM Paused (Lifecycle Event)
Nov 25 17:00:54 compute-0 nova_compute[254092]: 2025-11-25 17:00:54.999 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:00:55 compute-0 nova_compute[254092]: 2025-11-25 17:00:55.002 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090054.9194849, caf64ca2-5f73-454a-8442-9965c9853cba => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:00:55 compute-0 nova_compute[254092]: 2025-11-25 17:00:55.002 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] VM Resumed (Lifecycle Event)
Nov 25 17:00:55 compute-0 nova_compute[254092]: 2025-11-25 17:00:55.020 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:00:55 compute-0 nova_compute[254092]: 2025-11-25 17:00:55.022 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:00:55 compute-0 nova_compute[254092]: 2025-11-25 17:00:55.037 254096 INFO nova.compute.manager [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Took 5.21 seconds to spawn the instance on the hypervisor.
Nov 25 17:00:55 compute-0 nova_compute[254092]: 2025-11-25 17:00:55.038 254096 DEBUG nova.compute.manager [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:00:55 compute-0 nova_compute[254092]: 2025-11-25 17:00:55.042 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:00:55 compute-0 nova_compute[254092]: 2025-11-25 17:00:55.053 254096 DEBUG oslo_concurrency.lockutils [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "d643dc57-9536-4a67-9a17-c20512710ea5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:00:55 compute-0 nova_compute[254092]: 2025-11-25 17:00:55.054 254096 DEBUG oslo_concurrency.lockutils [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "d643dc57-9536-4a67-9a17-c20512710ea5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:00:55 compute-0 nova_compute[254092]: 2025-11-25 17:00:55.054 254096 DEBUG oslo_concurrency.lockutils [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "d643dc57-9536-4a67-9a17-c20512710ea5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:00:55 compute-0 nova_compute[254092]: 2025-11-25 17:00:55.054 254096 DEBUG oslo_concurrency.lockutils [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "d643dc57-9536-4a67-9a17-c20512710ea5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:00:55 compute-0 nova_compute[254092]: 2025-11-25 17:00:55.054 254096 DEBUG oslo_concurrency.lockutils [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "d643dc57-9536-4a67-9a17-c20512710ea5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:00:55 compute-0 nova_compute[254092]: 2025-11-25 17:00:55.055 254096 INFO nova.compute.manager [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Terminating instance
Nov 25 17:00:55 compute-0 nova_compute[254092]: 2025-11-25 17:00:55.056 254096 DEBUG nova.compute.manager [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 17:00:55 compute-0 nova_compute[254092]: 2025-11-25 17:00:55.098 254096 INFO nova.compute.manager [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Took 6.17 seconds to build instance.
Nov 25 17:00:55 compute-0 kernel: tap12c882d8-4c (unregistering): left promiscuous mode
Nov 25 17:00:55 compute-0 NetworkManager[48891]: <info>  [1764090055.1140] device (tap12c882d8-4c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:00:55 compute-0 nova_compute[254092]: 2025-11-25 17:00:55.116 254096 DEBUG oslo_concurrency.lockutils [None req-3a30d9f2-5d59-44cb-83be-b2cb008615e9 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "caf64ca2-5f73-454a-8442-9965c9853cba" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.282s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:00:55 compute-0 nova_compute[254092]: 2025-11-25 17:00:55.120 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:55 compute-0 ovn_controller[153477]: 2025-11-25T17:00:55Z|01167|binding|INFO|Releasing lport 12c882d8-4cd5-4233-8d3b-650401885991 from this chassis (sb_readonly=0)
Nov 25 17:00:55 compute-0 ovn_controller[153477]: 2025-11-25T17:00:55Z|01168|binding|INFO|Setting lport 12c882d8-4cd5-4233-8d3b-650401885991 down in Southbound
Nov 25 17:00:55 compute-0 ovn_controller[153477]: 2025-11-25T17:00:55Z|01169|binding|INFO|Removing iface tap12c882d8-4c ovn-installed in OVS
Nov 25 17:00:55 compute-0 nova_compute[254092]: 2025-11-25 17:00:55.123 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:55.127 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:dd:57 10.100.0.3'], port_security=['fa:16:3e:80:dd:57 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'd643dc57-9536-4a67-9a17-c20512710ea5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-22f446da-e1c5-4251-b5dd-3071154486f0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '4', 'neutron:security_group_ids': '055c4680-7ea5-4bc6-a453-5482dfbe9b96', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=168e2889-06cc-4097-8922-1d94c15fa45a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=12c882d8-4cd5-4233-8d3b-650401885991) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:00:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:55.128 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 12c882d8-4cd5-4233-8d3b-650401885991 in datapath 22f446da-e1c5-4251-b5dd-3071154486f0 unbound from our chassis
Nov 25 17:00:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:55.129 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 22f446da-e1c5-4251-b5dd-3071154486f0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 17:00:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:55.130 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2e7bc16c-8339-4040-83c5-01d0449b9a07]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:00:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:55.131 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-22f446da-e1c5-4251-b5dd-3071154486f0 namespace which is not needed anymore
Nov 25 17:00:55 compute-0 nova_compute[254092]: 2025-11-25 17:00:55.135 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:55 compute-0 systemd[1]: machine-qemu\x2d143\x2dinstance\x2d0000006f.scope: Deactivated successfully.
Nov 25 17:00:55 compute-0 systemd[1]: machine-qemu\x2d143\x2dinstance\x2d0000006f.scope: Consumed 15.619s CPU time.
Nov 25 17:00:55 compute-0 systemd-machined[216343]: Machine qemu-143-instance-0000006f terminated.
Nov 25 17:00:55 compute-0 neutron-haproxy-ovnmeta-22f446da-e1c5-4251-b5dd-3071154486f0[376763]: [NOTICE]   (376767) : haproxy version is 2.8.14-c23fe91
Nov 25 17:00:55 compute-0 neutron-haproxy-ovnmeta-22f446da-e1c5-4251-b5dd-3071154486f0[376763]: [NOTICE]   (376767) : path to executable is /usr/sbin/haproxy
Nov 25 17:00:55 compute-0 neutron-haproxy-ovnmeta-22f446da-e1c5-4251-b5dd-3071154486f0[376763]: [WARNING]  (376767) : Exiting Master process...
Nov 25 17:00:55 compute-0 neutron-haproxy-ovnmeta-22f446da-e1c5-4251-b5dd-3071154486f0[376763]: [ALERT]    (376767) : Current worker (376769) exited with code 143 (Terminated)
Nov 25 17:00:55 compute-0 neutron-haproxy-ovnmeta-22f446da-e1c5-4251-b5dd-3071154486f0[376763]: [WARNING]  (376767) : All workers exited. Exiting... (0)
Nov 25 17:00:55 compute-0 systemd[1]: libpod-674b4e1967ef9cd85acacaf9fda63acf7b62b312be423cc14ca81dbd85394f99.scope: Deactivated successfully.
Nov 25 17:00:55 compute-0 podman[378901]: 2025-11-25 17:00:55.269958328 +0000 UTC m=+0.043447545 container died 674b4e1967ef9cd85acacaf9fda63acf7b62b312be423cc14ca81dbd85394f99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-22f446da-e1c5-4251-b5dd-3071154486f0, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 17:00:55 compute-0 nova_compute[254092]: 2025-11-25 17:00:55.291 254096 INFO nova.virt.libvirt.driver [-] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Instance destroyed successfully.
Nov 25 17:00:55 compute-0 nova_compute[254092]: 2025-11-25 17:00:55.291 254096 DEBUG nova.objects.instance [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'resources' on Instance uuid d643dc57-9536-4a67-9a17-c20512710ea5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:00:55 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-674b4e1967ef9cd85acacaf9fda63acf7b62b312be423cc14ca81dbd85394f99-userdata-shm.mount: Deactivated successfully.
Nov 25 17:00:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7b995b572a86953529648503ae39e65ee754ec7d4b7a8548926206da37e805c-merged.mount: Deactivated successfully.
Nov 25 17:00:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:00:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/291562265' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:00:55 compute-0 nova_compute[254092]: 2025-11-25 17:00:55.303 254096 DEBUG nova.virt.libvirt.vif [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T16:59:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-888996977',display_name='tempest-TestNetworkBasicOps-server-888996977',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-888996977',id=111,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLtS45HGfMK2vg5lA1DMxbi57g++ufQ+h4UViHDHhpxxz0TeEAmFiy6LE8nwpuctZL207E2zBi1qnO46vlL2kuhASWctkQ5Cos3uH5AqhXg1h51/mOABxlzeqFcNuWZ38Q==',key_name='tempest-TestNetworkBasicOps-202100109',keypairs=<?>,launch_index=0,launched_at=2025-11-25T16:59:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-5inbkv7z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T16:59:44Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=d643dc57-9536-4a67-9a17-c20512710ea5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "12c882d8-4cd5-4233-8d3b-650401885991", "address": "fa:16:3e:80:dd:57", "network": {"id": "22f446da-e1c5-4251-b5dd-3071154486f0", "bridge": "br-int", "label": "tempest-network-smoke--753652618", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12c882d8-4c", "ovs_interfaceid": "12c882d8-4cd5-4233-8d3b-650401885991", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:00:55 compute-0 nova_compute[254092]: 2025-11-25 17:00:55.303 254096 DEBUG nova.network.os_vif_util [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "12c882d8-4cd5-4233-8d3b-650401885991", "address": "fa:16:3e:80:dd:57", "network": {"id": "22f446da-e1c5-4251-b5dd-3071154486f0", "bridge": "br-int", "label": "tempest-network-smoke--753652618", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12c882d8-4c", "ovs_interfaceid": "12c882d8-4cd5-4233-8d3b-650401885991", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:00:55 compute-0 nova_compute[254092]: 2025-11-25 17:00:55.304 254096 DEBUG nova.network.os_vif_util [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:80:dd:57,bridge_name='br-int',has_traffic_filtering=True,id=12c882d8-4cd5-4233-8d3b-650401885991,network=Network(22f446da-e1c5-4251-b5dd-3071154486f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12c882d8-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:00:55 compute-0 nova_compute[254092]: 2025-11-25 17:00:55.304 254096 DEBUG os_vif [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:80:dd:57,bridge_name='br-int',has_traffic_filtering=True,id=12c882d8-4cd5-4233-8d3b-650401885991,network=Network(22f446da-e1c5-4251-b5dd-3071154486f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12c882d8-4c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:00:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:00:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/291562265' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:00:55 compute-0 nova_compute[254092]: 2025-11-25 17:00:55.306 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:55 compute-0 nova_compute[254092]: 2025-11-25 17:00:55.307 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap12c882d8-4c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:00:55 compute-0 nova_compute[254092]: 2025-11-25 17:00:55.308 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:55 compute-0 podman[378901]: 2025-11-25 17:00:55.309964417 +0000 UTC m=+0.083453624 container cleanup 674b4e1967ef9cd85acacaf9fda63acf7b62b312be423cc14ca81dbd85394f99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-22f446da-e1c5-4251-b5dd-3071154486f0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 25 17:00:55 compute-0 nova_compute[254092]: 2025-11-25 17:00:55.310 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:00:55 compute-0 nova_compute[254092]: 2025-11-25 17:00:55.315 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:55 compute-0 nova_compute[254092]: 2025-11-25 17:00:55.317 254096 INFO os_vif [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:80:dd:57,bridge_name='br-int',has_traffic_filtering=True,id=12c882d8-4cd5-4233-8d3b-650401885991,network=Network(22f446da-e1c5-4251-b5dd-3071154486f0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12c882d8-4c')
Nov 25 17:00:55 compute-0 systemd[1]: libpod-conmon-674b4e1967ef9cd85acacaf9fda63acf7b62b312be423cc14ca81dbd85394f99.scope: Deactivated successfully.
Nov 25 17:00:55 compute-0 podman[378949]: 2025-11-25 17:00:55.392851074 +0000 UTC m=+0.054159835 container remove 674b4e1967ef9cd85acacaf9fda63acf7b62b312be423cc14ca81dbd85394f99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-22f446da-e1c5-4251-b5dd-3071154486f0, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:00:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:55.399 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d955fe55-9a6b-4b29-b950-d74ed38cdc23]: (4, ('Tue Nov 25 05:00:55 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-22f446da-e1c5-4251-b5dd-3071154486f0 (674b4e1967ef9cd85acacaf9fda63acf7b62b312be423cc14ca81dbd85394f99)\n674b4e1967ef9cd85acacaf9fda63acf7b62b312be423cc14ca81dbd85394f99\nTue Nov 25 05:00:55 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-22f446da-e1c5-4251-b5dd-3071154486f0 (674b4e1967ef9cd85acacaf9fda63acf7b62b312be423cc14ca81dbd85394f99)\n674b4e1967ef9cd85acacaf9fda63acf7b62b312be423cc14ca81dbd85394f99\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:00:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:55.401 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c470c826-a5b7-4307-a661-09277e5ac62c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:00:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:55.402 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap22f446da-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:00:55 compute-0 nova_compute[254092]: 2025-11-25 17:00:55.404 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:55 compute-0 kernel: tap22f446da-e0: left promiscuous mode
Nov 25 17:00:55 compute-0 nova_compute[254092]: 2025-11-25 17:00:55.423 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:00:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:55.426 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[eade23f2-ff45-4d49-b756-81102d9dca9d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:00:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:55.449 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[37c96e3e-0281-4b48-97e0-76e12c873074]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:00:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:55.450 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8d96ea4c-5ec9-4d28-88e4-6a169a52e943]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:00:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:55.470 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4e2d165a-324d-4e4c-9582-db5877fc53e5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 643427, 'reachable_time': 32842, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378975, 'error': None, 'target': 'ovnmeta-22f446da-e1c5-4251-b5dd-3071154486f0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:00:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:55.474 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-22f446da-e1c5-4251-b5dd-3071154486f0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 17:00:55 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:00:55.474 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[b305dd9e-3f38-4141-b632-90a245abf6a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:00:55 compute-0 systemd[1]: run-netns-ovnmeta\x2d22f446da\x2de1c5\x2d4251\x2db5dd\x2d3071154486f0.mount: Deactivated successfully.
Nov 25 17:00:55 compute-0 nova_compute[254092]: 2025-11-25 17:00:55.724 254096 INFO nova.virt.libvirt.driver [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Deleting instance files /var/lib/nova/instances/d643dc57-9536-4a67-9a17-c20512710ea5_del
Nov 25 17:00:55 compute-0 nova_compute[254092]: 2025-11-25 17:00:55.725 254096 INFO nova.virt.libvirt.driver [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Deletion of /var/lib/nova/instances/d643dc57-9536-4a67-9a17-c20512710ea5_del complete
Nov 25 17:00:55 compute-0 nova_compute[254092]: 2025-11-25 17:00:55.769 254096 INFO nova.compute.manager [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Took 0.71 seconds to destroy the instance on the hypervisor.
Nov 25 17:00:55 compute-0 nova_compute[254092]: 2025-11-25 17:00:55.770 254096 DEBUG oslo.service.loopingcall [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 17:00:55 compute-0 nova_compute[254092]: 2025-11-25 17:00:55.770 254096 DEBUG nova.compute.manager [-] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 17:00:55 compute-0 nova_compute[254092]: 2025-11-25 17:00:55.770 254096 DEBUG nova.network.neutron [-] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 17:00:55 compute-0 ceph-mon[74985]: pgmap v2321: 321 pgs: 321 active+clean; 185 MiB data, 883 MiB used, 59 GiB / 60 GiB avail; 161 KiB/s rd, 2.0 MiB/s wr, 72 op/s
Nov 25 17:00:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/291562265' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:00:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/291562265' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:00:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2322: 321 pgs: 321 active+clean; 148 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 377 KiB/s rd, 3.4 MiB/s wr, 123 op/s
Nov 25 17:00:56 compute-0 nova_compute[254092]: 2025-11-25 17:00:56.259 254096 DEBUG nova.network.neutron [-] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:00:56 compute-0 nova_compute[254092]: 2025-11-25 17:00:56.271 254096 INFO nova.compute.manager [-] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Took 0.50 seconds to deallocate network for instance.
Nov 25 17:00:56 compute-0 nova_compute[254092]: 2025-11-25 17:00:56.310 254096 DEBUG oslo_concurrency.lockutils [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:00:56 compute-0 nova_compute[254092]: 2025-11-25 17:00:56.310 254096 DEBUG oslo_concurrency.lockutils [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:00:56 compute-0 nova_compute[254092]: 2025-11-25 17:00:56.363 254096 DEBUG oslo_concurrency.processutils [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:00:56 compute-0 nova_compute[254092]: 2025-11-25 17:00:56.479 254096 DEBUG nova.compute.manager [req-67597a34-de5f-4a3b-a99b-14932b7806f0 req-e3706bd4-e36f-4309-b9e3-147988a63841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Received event network-vif-plugged-1caaa3da-b3eb-4441-b6b2-8eaa71146e77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:00:56 compute-0 nova_compute[254092]: 2025-11-25 17:00:56.480 254096 DEBUG oslo_concurrency.lockutils [req-67597a34-de5f-4a3b-a99b-14932b7806f0 req-e3706bd4-e36f-4309-b9e3-147988a63841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "caf64ca2-5f73-454a-8442-9965c9853cba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:00:56 compute-0 nova_compute[254092]: 2025-11-25 17:00:56.480 254096 DEBUG oslo_concurrency.lockutils [req-67597a34-de5f-4a3b-a99b-14932b7806f0 req-e3706bd4-e36f-4309-b9e3-147988a63841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "caf64ca2-5f73-454a-8442-9965c9853cba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:00:56 compute-0 nova_compute[254092]: 2025-11-25 17:00:56.480 254096 DEBUG oslo_concurrency.lockutils [req-67597a34-de5f-4a3b-a99b-14932b7806f0 req-e3706bd4-e36f-4309-b9e3-147988a63841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "caf64ca2-5f73-454a-8442-9965c9853cba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:00:56 compute-0 nova_compute[254092]: 2025-11-25 17:00:56.481 254096 DEBUG nova.compute.manager [req-67597a34-de5f-4a3b-a99b-14932b7806f0 req-e3706bd4-e36f-4309-b9e3-147988a63841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] No waiting events found dispatching network-vif-plugged-1caaa3da-b3eb-4441-b6b2-8eaa71146e77 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:00:56 compute-0 nova_compute[254092]: 2025-11-25 17:00:56.481 254096 WARNING nova.compute.manager [req-67597a34-de5f-4a3b-a99b-14932b7806f0 req-e3706bd4-e36f-4309-b9e3-147988a63841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Received unexpected event network-vif-plugged-1caaa3da-b3eb-4441-b6b2-8eaa71146e77 for instance with vm_state active and task_state None.
Nov 25 17:00:56 compute-0 nova_compute[254092]: 2025-11-25 17:00:56.481 254096 DEBUG nova.compute.manager [req-67597a34-de5f-4a3b-a99b-14932b7806f0 req-e3706bd4-e36f-4309-b9e3-147988a63841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Received event network-vif-unplugged-12c882d8-4cd5-4233-8d3b-650401885991 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:00:56 compute-0 nova_compute[254092]: 2025-11-25 17:00:56.481 254096 DEBUG oslo_concurrency.lockutils [req-67597a34-de5f-4a3b-a99b-14932b7806f0 req-e3706bd4-e36f-4309-b9e3-147988a63841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "d643dc57-9536-4a67-9a17-c20512710ea5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:00:56 compute-0 nova_compute[254092]: 2025-11-25 17:00:56.482 254096 DEBUG oslo_concurrency.lockutils [req-67597a34-de5f-4a3b-a99b-14932b7806f0 req-e3706bd4-e36f-4309-b9e3-147988a63841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d643dc57-9536-4a67-9a17-c20512710ea5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:00:56 compute-0 nova_compute[254092]: 2025-11-25 17:00:56.482 254096 DEBUG oslo_concurrency.lockutils [req-67597a34-de5f-4a3b-a99b-14932b7806f0 req-e3706bd4-e36f-4309-b9e3-147988a63841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d643dc57-9536-4a67-9a17-c20512710ea5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:00:56 compute-0 nova_compute[254092]: 2025-11-25 17:00:56.482 254096 DEBUG nova.compute.manager [req-67597a34-de5f-4a3b-a99b-14932b7806f0 req-e3706bd4-e36f-4309-b9e3-147988a63841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] No waiting events found dispatching network-vif-unplugged-12c882d8-4cd5-4233-8d3b-650401885991 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:00:56 compute-0 nova_compute[254092]: 2025-11-25 17:00:56.482 254096 WARNING nova.compute.manager [req-67597a34-de5f-4a3b-a99b-14932b7806f0 req-e3706bd4-e36f-4309-b9e3-147988a63841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Received unexpected event network-vif-unplugged-12c882d8-4cd5-4233-8d3b-650401885991 for instance with vm_state deleted and task_state None.
Nov 25 17:00:56 compute-0 nova_compute[254092]: 2025-11-25 17:00:56.482 254096 DEBUG nova.compute.manager [req-67597a34-de5f-4a3b-a99b-14932b7806f0 req-e3706bd4-e36f-4309-b9e3-147988a63841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Received event network-vif-plugged-12c882d8-4cd5-4233-8d3b-650401885991 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:00:56 compute-0 nova_compute[254092]: 2025-11-25 17:00:56.483 254096 DEBUG oslo_concurrency.lockutils [req-67597a34-de5f-4a3b-a99b-14932b7806f0 req-e3706bd4-e36f-4309-b9e3-147988a63841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "d643dc57-9536-4a67-9a17-c20512710ea5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:00:56 compute-0 nova_compute[254092]: 2025-11-25 17:00:56.483 254096 DEBUG oslo_concurrency.lockutils [req-67597a34-de5f-4a3b-a99b-14932b7806f0 req-e3706bd4-e36f-4309-b9e3-147988a63841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d643dc57-9536-4a67-9a17-c20512710ea5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:00:56 compute-0 nova_compute[254092]: 2025-11-25 17:00:56.483 254096 DEBUG oslo_concurrency.lockutils [req-67597a34-de5f-4a3b-a99b-14932b7806f0 req-e3706bd4-e36f-4309-b9e3-147988a63841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "d643dc57-9536-4a67-9a17-c20512710ea5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:00:56 compute-0 nova_compute[254092]: 2025-11-25 17:00:56.483 254096 DEBUG nova.compute.manager [req-67597a34-de5f-4a3b-a99b-14932b7806f0 req-e3706bd4-e36f-4309-b9e3-147988a63841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] No waiting events found dispatching network-vif-plugged-12c882d8-4cd5-4233-8d3b-650401885991 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:00:56 compute-0 nova_compute[254092]: 2025-11-25 17:00:56.484 254096 WARNING nova.compute.manager [req-67597a34-de5f-4a3b-a99b-14932b7806f0 req-e3706bd4-e36f-4309-b9e3-147988a63841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Received unexpected event network-vif-plugged-12c882d8-4cd5-4233-8d3b-650401885991 for instance with vm_state deleted and task_state None.
Nov 25 17:00:56 compute-0 nova_compute[254092]: 2025-11-25 17:00:56.484 254096 DEBUG nova.compute.manager [req-67597a34-de5f-4a3b-a99b-14932b7806f0 req-e3706bd4-e36f-4309-b9e3-147988a63841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Received event network-vif-deleted-12c882d8-4cd5-4233-8d3b-650401885991 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:00:56 compute-0 nova_compute[254092]: 2025-11-25 17:00:56.587 254096 DEBUG nova.network.neutron [req-fa104a96-1425-416f-96d5-1ced0aff36c8 req-ba15338c-334f-4806-a765-196afa8ad6a3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Updated VIF entry in instance network info cache for port 12c882d8-4cd5-4233-8d3b-650401885991. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:00:56 compute-0 nova_compute[254092]: 2025-11-25 17:00:56.588 254096 DEBUG nova.network.neutron [req-fa104a96-1425-416f-96d5-1ced0aff36c8 req-ba15338c-334f-4806-a765-196afa8ad6a3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Updating instance_info_cache with network_info: [{"id": "12c882d8-4cd5-4233-8d3b-650401885991", "address": "fa:16:3e:80:dd:57", "network": {"id": "22f446da-e1c5-4251-b5dd-3071154486f0", "bridge": "br-int", "label": "tempest-network-smoke--753652618", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12c882d8-4c", "ovs_interfaceid": "12c882d8-4cd5-4233-8d3b-650401885991", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:00:56 compute-0 nova_compute[254092]: 2025-11-25 17:00:56.601 254096 DEBUG oslo_concurrency.lockutils [req-fa104a96-1425-416f-96d5-1ced0aff36c8 req-ba15338c-334f-4806-a765-196afa8ad6a3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-d643dc57-9536-4a67-9a17-c20512710ea5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:00:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:00:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:00:56 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/969505326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:00:56 compute-0 nova_compute[254092]: 2025-11-25 17:00:56.815 254096 DEBUG oslo_concurrency.processutils [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:00:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/969505326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:00:56 compute-0 nova_compute[254092]: 2025-11-25 17:00:56.820 254096 DEBUG nova.compute.provider_tree [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:00:56 compute-0 nova_compute[254092]: 2025-11-25 17:00:56.836 254096 DEBUG nova.scheduler.client.report [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:00:56 compute-0 nova_compute[254092]: 2025-11-25 17:00:56.855 254096 DEBUG oslo_concurrency.lockutils [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.545s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:00:56 compute-0 nova_compute[254092]: 2025-11-25 17:00:56.876 254096 INFO nova.scheduler.client.report [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Deleted allocations for instance d643dc57-9536-4a67-9a17-c20512710ea5
Nov 25 17:00:56 compute-0 nova_compute[254092]: 2025-11-25 17:00:56.944 254096 DEBUG oslo_concurrency.lockutils [None req-99f16ed6-8697-48bf-a8b1-ca8f8bf22b0c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "d643dc57-9536-4a67-9a17-c20512710ea5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.890s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:00:57 compute-0 nova_compute[254092]: 2025-11-25 17:00:57.789 254096 DEBUG nova.compute.manager [req-12b54dd7-f2d3-45c3-a9e6-c9d73561557c req-852211d7-ae2d-4645-91de-1df0c55a62a4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Received event network-changed-1caaa3da-b3eb-4441-b6b2-8eaa71146e77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:00:57 compute-0 nova_compute[254092]: 2025-11-25 17:00:57.790 254096 DEBUG nova.compute.manager [req-12b54dd7-f2d3-45c3-a9e6-c9d73561557c req-852211d7-ae2d-4645-91de-1df0c55a62a4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Refreshing instance network info cache due to event network-changed-1caaa3da-b3eb-4441-b6b2-8eaa71146e77. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:00:57 compute-0 nova_compute[254092]: 2025-11-25 17:00:57.791 254096 DEBUG oslo_concurrency.lockutils [req-12b54dd7-f2d3-45c3-a9e6-c9d73561557c req-852211d7-ae2d-4645-91de-1df0c55a62a4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-caf64ca2-5f73-454a-8442-9965c9853cba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:00:57 compute-0 nova_compute[254092]: 2025-11-25 17:00:57.791 254096 DEBUG oslo_concurrency.lockutils [req-12b54dd7-f2d3-45c3-a9e6-c9d73561557c req-852211d7-ae2d-4645-91de-1df0c55a62a4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-caf64ca2-5f73-454a-8442-9965c9853cba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:00:57 compute-0 nova_compute[254092]: 2025-11-25 17:00:57.791 254096 DEBUG nova.network.neutron [req-12b54dd7-f2d3-45c3-a9e6-c9d73561557c req-852211d7-ae2d-4645-91de-1df0c55a62a4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Refreshing network info cache for port 1caaa3da-b3eb-4441-b6b2-8eaa71146e77 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:00:57 compute-0 ceph-mon[74985]: pgmap v2322: 321 pgs: 321 active+clean; 148 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 377 KiB/s rd, 3.4 MiB/s wr, 123 op/s
Nov 25 17:00:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2323: 321 pgs: 321 active+clean; 148 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 235 KiB/s rd, 1.8 MiB/s wr, 80 op/s
Nov 25 17:00:59 compute-0 nova_compute[254092]: 2025-11-25 17:00:59.194 254096 DEBUG nova.network.neutron [req-12b54dd7-f2d3-45c3-a9e6-c9d73561557c req-852211d7-ae2d-4645-91de-1df0c55a62a4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Updated VIF entry in instance network info cache for port 1caaa3da-b3eb-4441-b6b2-8eaa71146e77. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:00:59 compute-0 nova_compute[254092]: 2025-11-25 17:00:59.196 254096 DEBUG nova.network.neutron [req-12b54dd7-f2d3-45c3-a9e6-c9d73561557c req-852211d7-ae2d-4645-91de-1df0c55a62a4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Updating instance_info_cache with network_info: [{"id": "1caaa3da-b3eb-4441-b6b2-8eaa71146e77", "address": "fa:16:3e:97:ec:69", "network": {"id": "0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2", "bridge": "br-int", "label": "tempest-network-smoke--80928445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1caaa3da-b3", "ovs_interfaceid": "1caaa3da-b3eb-4441-b6b2-8eaa71146e77", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:00:59 compute-0 nova_compute[254092]: 2025-11-25 17:00:59.232 254096 DEBUG oslo_concurrency.lockutils [req-12b54dd7-f2d3-45c3-a9e6-c9d73561557c req-852211d7-ae2d-4645-91de-1df0c55a62a4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-caf64ca2-5f73-454a-8442-9965c9853cba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:00:59 compute-0 nova_compute[254092]: 2025-11-25 17:00:59.715 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:01:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2324: 321 pgs: 321 active+clean; 88 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 148 op/s
Nov 25 17:01:00 compute-0 nova_compute[254092]: 2025-11-25 17:01:00.308 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:01:00 compute-0 ceph-mon[74985]: pgmap v2323: 321 pgs: 321 active+clean; 148 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 235 KiB/s rd, 1.8 MiB/s wr, 80 op/s
Nov 25 17:01:00 compute-0 sudo[378999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:01:00 compute-0 sudo[378999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:01:00 compute-0 sudo[378999]: pam_unix(sudo:session): session closed for user root
Nov 25 17:01:00 compute-0 sudo[379024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:01:00 compute-0 sudo[379024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:01:00 compute-0 sudo[379024]: pam_unix(sudo:session): session closed for user root
Nov 25 17:01:00 compute-0 sudo[379049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:01:00 compute-0 sudo[379049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:01:00 compute-0 sudo[379049]: pam_unix(sudo:session): session closed for user root
Nov 25 17:01:00 compute-0 sudo[379074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:01:00 compute-0 sudo[379074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:01:00 compute-0 sudo[379074]: pam_unix(sudo:session): session closed for user root
Nov 25 17:01:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:01:00 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:01:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:01:00 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:01:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:01:01 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:01:01 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 6dd16109-48a3-41a7-88c3-7ec686c82017 does not exist
Nov 25 17:01:01 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 1a893b26-9cc6-4f7a-9e75-8d61a09ab8be does not exist
Nov 25 17:01:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:01:01 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:01:01 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev fb485b3d-ba54-4a64-a9ce-442d8c3e69c6 does not exist
Nov 25 17:01:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:01:01 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:01:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:01:01 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:01:01 compute-0 CROND[379154]: (root) CMD (run-parts /etc/cron.hourly)
Nov 25 17:01:01 compute-0 sudo[379130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:01:01 compute-0 sudo[379130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:01:01 compute-0 run-parts[379158]: (/etc/cron.hourly) starting 0anacron
Nov 25 17:01:01 compute-0 sudo[379130]: pam_unix(sudo:session): session closed for user root
Nov 25 17:01:01 compute-0 run-parts[379166]: (/etc/cron.hourly) finished 0anacron
Nov 25 17:01:01 compute-0 CROND[379153]: (root) CMDEND (run-parts /etc/cron.hourly)
Nov 25 17:01:01 compute-0 sudo[379165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:01:01 compute-0 sudo[379165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:01:01 compute-0 sudo[379165]: pam_unix(sudo:session): session closed for user root
Nov 25 17:01:01 compute-0 ovn_controller[153477]: 2025-11-25T17:01:01Z|01170|binding|INFO|Releasing lport 2166482b-c36e-4cfe-a45d-40e1c4e6a3e6 from this chassis (sb_readonly=0)
Nov 25 17:01:01 compute-0 sudo[379191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:01:01 compute-0 sudo[379191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:01:01 compute-0 sudo[379191]: pam_unix(sudo:session): session closed for user root
Nov 25 17:01:01 compute-0 nova_compute[254092]: 2025-11-25 17:01:01.291 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:01:01 compute-0 sudo[379216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:01:01 compute-0 sudo[379216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:01:01 compute-0 ceph-mon[74985]: pgmap v2324: 321 pgs: 321 active+clean; 88 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 148 op/s
Nov 25 17:01:01 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:01:01 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:01:01 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:01:01 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:01:01 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:01:01 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:01:01 compute-0 podman[379281]: 2025-11-25 17:01:01.646037946 +0000 UTC m=+0.040140224 container create 507d60965167b358b55e8061183045bdff70b09d8ea16191fef8704f84f29101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 17:01:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:01:01 compute-0 systemd[1]: Started libpod-conmon-507d60965167b358b55e8061183045bdff70b09d8ea16191fef8704f84f29101.scope.
Nov 25 17:01:01 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:01:01 compute-0 podman[379281]: 2025-11-25 17:01:01.626874334 +0000 UTC m=+0.020976632 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:01:01 compute-0 podman[379281]: 2025-11-25 17:01:01.72473425 +0000 UTC m=+0.118836548 container init 507d60965167b358b55e8061183045bdff70b09d8ea16191fef8704f84f29101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:01:01 compute-0 podman[379281]: 2025-11-25 17:01:01.731844063 +0000 UTC m=+0.125946341 container start 507d60965167b358b55e8061183045bdff70b09d8ea16191fef8704f84f29101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_neumann, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 17:01:01 compute-0 podman[379281]: 2025-11-25 17:01:01.734714281 +0000 UTC m=+0.128816589 container attach 507d60965167b358b55e8061183045bdff70b09d8ea16191fef8704f84f29101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_neumann, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:01:01 compute-0 lucid_neumann[379298]: 167 167
Nov 25 17:01:01 compute-0 systemd[1]: libpod-507d60965167b358b55e8061183045bdff70b09d8ea16191fef8704f84f29101.scope: Deactivated successfully.
Nov 25 17:01:01 compute-0 podman[379281]: 2025-11-25 17:01:01.738728471 +0000 UTC m=+0.132830759 container died 507d60965167b358b55e8061183045bdff70b09d8ea16191fef8704f84f29101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:01:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-f02ebf654e595645d8bba24ee9bc559bf3a8d967035c5aa4f3a0d4a09e8ce606-merged.mount: Deactivated successfully.
Nov 25 17:01:01 compute-0 podman[379281]: 2025-11-25 17:01:01.774393182 +0000 UTC m=+0.168495460 container remove 507d60965167b358b55e8061183045bdff70b09d8ea16191fef8704f84f29101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_neumann, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 17:01:01 compute-0 systemd[1]: libpod-conmon-507d60965167b358b55e8061183045bdff70b09d8ea16191fef8704f84f29101.scope: Deactivated successfully.
Nov 25 17:01:01 compute-0 podman[379322]: 2025-11-25 17:01:01.935388537 +0000 UTC m=+0.044290217 container create 3733599740cdc5ffdd236c26c4dfac00c5ea65a7fd78a19e79041a2f5859b7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:01:01 compute-0 systemd[1]: Started libpod-conmon-3733599740cdc5ffdd236c26c4dfac00c5ea65a7fd78a19e79041a2f5859b7cb.scope.
Nov 25 17:01:02 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:01:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e2348c6bd2e9dd9acf4940e1027ce4d8f350e8c0d3657ebba7e5ac57fc66a32/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:01:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e2348c6bd2e9dd9acf4940e1027ce4d8f350e8c0d3657ebba7e5ac57fc66a32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:01:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e2348c6bd2e9dd9acf4940e1027ce4d8f350e8c0d3657ebba7e5ac57fc66a32/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:01:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e2348c6bd2e9dd9acf4940e1027ce4d8f350e8c0d3657ebba7e5ac57fc66a32/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:01:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e2348c6bd2e9dd9acf4940e1027ce4d8f350e8c0d3657ebba7e5ac57fc66a32/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:01:02 compute-0 podman[379322]: 2025-11-25 17:01:01.913737217 +0000 UTC m=+0.022638917 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:01:02 compute-0 podman[379322]: 2025-11-25 17:01:02.038213958 +0000 UTC m=+0.147115708 container init 3733599740cdc5ffdd236c26c4dfac00c5ea65a7fd78a19e79041a2f5859b7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_robinson, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:01:02 compute-0 podman[379322]: 2025-11-25 17:01:02.045725402 +0000 UTC m=+0.154627082 container start 3733599740cdc5ffdd236c26c4dfac00c5ea65a7fd78a19e79041a2f5859b7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_robinson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:01:02 compute-0 podman[379322]: 2025-11-25 17:01:02.051409707 +0000 UTC m=+0.160311437 container attach 3733599740cdc5ffdd236c26c4dfac00c5ea65a7fd78a19e79041a2f5859b7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_robinson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:01:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2325: 321 pgs: 321 active+clean; 88 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 156 op/s
Nov 25 17:01:03 compute-0 fervent_robinson[379338]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:01:03 compute-0 fervent_robinson[379338]: --> relative data size: 1.0
Nov 25 17:01:03 compute-0 fervent_robinson[379338]: --> All data devices are unavailable
Nov 25 17:01:03 compute-0 systemd[1]: libpod-3733599740cdc5ffdd236c26c4dfac00c5ea65a7fd78a19e79041a2f5859b7cb.scope: Deactivated successfully.
Nov 25 17:01:03 compute-0 systemd[1]: libpod-3733599740cdc5ffdd236c26c4dfac00c5ea65a7fd78a19e79041a2f5859b7cb.scope: Consumed 1.036s CPU time.
Nov 25 17:01:03 compute-0 podman[379367]: 2025-11-25 17:01:03.177084096 +0000 UTC m=+0.026775580 container died 3733599740cdc5ffdd236c26c4dfac00c5ea65a7fd78a19e79041a2f5859b7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_robinson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:01:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e2348c6bd2e9dd9acf4940e1027ce4d8f350e8c0d3657ebba7e5ac57fc66a32-merged.mount: Deactivated successfully.
Nov 25 17:01:03 compute-0 ceph-mon[74985]: pgmap v2325: 321 pgs: 321 active+clean; 88 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 156 op/s
Nov 25 17:01:03 compute-0 podman[379367]: 2025-11-25 17:01:03.841113361 +0000 UTC m=+0.690804825 container remove 3733599740cdc5ffdd236c26c4dfac00c5ea65a7fd78a19e79041a2f5859b7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 17:01:03 compute-0 systemd[1]: libpod-conmon-3733599740cdc5ffdd236c26c4dfac00c5ea65a7fd78a19e79041a2f5859b7cb.scope: Deactivated successfully.
Nov 25 17:01:03 compute-0 sudo[379216]: pam_unix(sudo:session): session closed for user root
Nov 25 17:01:03 compute-0 sudo[379382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:01:03 compute-0 sudo[379382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:01:03 compute-0 sudo[379382]: pam_unix(sudo:session): session closed for user root
Nov 25 17:01:04 compute-0 sudo[379407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:01:04 compute-0 sudo[379407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:01:04 compute-0 sudo[379407]: pam_unix(sudo:session): session closed for user root
Nov 25 17:01:04 compute-0 auditd[702]: Audit daemon rotating log files
Nov 25 17:01:04 compute-0 sudo[379432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:01:04 compute-0 sudo[379432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:01:04 compute-0 sudo[379432]: pam_unix(sudo:session): session closed for user root
Nov 25 17:01:04 compute-0 sudo[379457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:01:04 compute-0 sudo[379457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:01:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2326: 321 pgs: 321 active+clean; 88 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 127 op/s
Nov 25 17:01:04 compute-0 podman[379522]: 2025-11-25 17:01:04.512359484 +0000 UTC m=+0.094092404 container create 486bab736e55a038b9cf0307e87e28ed700a324cca9284cf0bea3cd699ddb7e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 17:01:04 compute-0 podman[379522]: 2025-11-25 17:01:04.443297792 +0000 UTC m=+0.025030722 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:01:04 compute-0 systemd[1]: Started libpod-conmon-486bab736e55a038b9cf0307e87e28ed700a324cca9284cf0bea3cd699ddb7e9.scope.
Nov 25 17:01:04 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:01:04 compute-0 nova_compute[254092]: 2025-11-25 17:01:04.776 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:01:04 compute-0 podman[379522]: 2025-11-25 17:01:04.806528456 +0000 UTC m=+0.388261386 container init 486bab736e55a038b9cf0307e87e28ed700a324cca9284cf0bea3cd699ddb7e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_joliot, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 17:01:04 compute-0 podman[379522]: 2025-11-25 17:01:04.817999348 +0000 UTC m=+0.399732268 container start 486bab736e55a038b9cf0307e87e28ed700a324cca9284cf0bea3cd699ddb7e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_joliot, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 17:01:04 compute-0 laughing_joliot[379536]: 167 167
Nov 25 17:01:04 compute-0 systemd[1]: libpod-486bab736e55a038b9cf0307e87e28ed700a324cca9284cf0bea3cd699ddb7e9.scope: Deactivated successfully.
Nov 25 17:01:04 compute-0 podman[379522]: 2025-11-25 17:01:04.859116698 +0000 UTC m=+0.440849648 container attach 486bab736e55a038b9cf0307e87e28ed700a324cca9284cf0bea3cd699ddb7e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_joliot, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True)
Nov 25 17:01:04 compute-0 podman[379522]: 2025-11-25 17:01:04.860186147 +0000 UTC m=+0.441919067 container died 486bab736e55a038b9cf0307e87e28ed700a324cca9284cf0bea3cd699ddb7e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 17:01:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-68c2b6cdb0b3ba2fd314383fd1c423cb50362da065ebe4546fb43eae8c3d215b-merged.mount: Deactivated successfully.
Nov 25 17:01:05 compute-0 nova_compute[254092]: 2025-11-25 17:01:05.310 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:01:05 compute-0 podman[379522]: 2025-11-25 17:01:05.374474124 +0000 UTC m=+0.956207034 container remove 486bab736e55a038b9cf0307e87e28ed700a324cca9284cf0bea3cd699ddb7e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 17:01:05 compute-0 nova_compute[254092]: 2025-11-25 17:01:05.389 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090050.3713243, 810cd712-872a-49a6-bf45-04a319ba8d57 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:01:05 compute-0 nova_compute[254092]: 2025-11-25 17:01:05.390 254096 INFO nova.compute.manager [-] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] VM Stopped (Lifecycle Event)
Nov 25 17:01:05 compute-0 systemd[1]: libpod-conmon-486bab736e55a038b9cf0307e87e28ed700a324cca9284cf0bea3cd699ddb7e9.scope: Deactivated successfully.
Nov 25 17:01:05 compute-0 nova_compute[254092]: 2025-11-25 17:01:05.417 254096 DEBUG nova.compute.manager [None req-7c1c081a-afb5-485f-9254-73d67a0d689d - - - - - -] [instance: 810cd712-872a-49a6-bf45-04a319ba8d57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:01:05 compute-0 podman[379562]: 2025-11-25 17:01:05.551935167 +0000 UTC m=+0.019586884 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:01:05 compute-0 podman[379562]: 2025-11-25 17:01:05.729188595 +0000 UTC m=+0.196840292 container create e849a7a2780ad5f2b6d6bac134b08b777ff994ac42ee98d1a6c3f1127a978379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bhaskara, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:01:05 compute-0 ceph-mon[74985]: pgmap v2326: 321 pgs: 321 active+clean; 88 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 127 op/s
Nov 25 17:01:05 compute-0 systemd[1]: Started libpod-conmon-e849a7a2780ad5f2b6d6bac134b08b777ff994ac42ee98d1a6c3f1127a978379.scope.
Nov 25 17:01:05 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:01:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b2c1304f8d0794fbbb498ae81547086f6e89ed9cc177fdc5c157680a8899503/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:01:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b2c1304f8d0794fbbb498ae81547086f6e89ed9cc177fdc5c157680a8899503/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:01:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b2c1304f8d0794fbbb498ae81547086f6e89ed9cc177fdc5c157680a8899503/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:01:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b2c1304f8d0794fbbb498ae81547086f6e89ed9cc177fdc5c157680a8899503/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:01:05 compute-0 podman[379562]: 2025-11-25 17:01:05.99006835 +0000 UTC m=+0.457720057 container init e849a7a2780ad5f2b6d6bac134b08b777ff994ac42ee98d1a6c3f1127a978379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 17:01:06 compute-0 podman[379562]: 2025-11-25 17:01:06.00251382 +0000 UTC m=+0.470165517 container start e849a7a2780ad5f2b6d6bac134b08b777ff994ac42ee98d1a6c3f1127a978379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:01:06 compute-0 podman[379562]: 2025-11-25 17:01:06.073426621 +0000 UTC m=+0.541078338 container attach e849a7a2780ad5f2b6d6bac134b08b777ff994ac42ee98d1a6c3f1127a978379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:01:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2327: 321 pgs: 321 active+clean; 88 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 127 op/s
Nov 25 17:01:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]: {
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:     "0": [
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:         {
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:             "devices": [
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:                 "/dev/loop3"
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:             ],
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:             "lv_name": "ceph_lv0",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:             "lv_size": "21470642176",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:             "name": "ceph_lv0",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:             "tags": {
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:                 "ceph.cluster_name": "ceph",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:                 "ceph.crush_device_class": "",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:                 "ceph.encrypted": "0",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:                 "ceph.osd_id": "0",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:                 "ceph.type": "block",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:                 "ceph.vdo": "0"
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:             },
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:             "type": "block",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:             "vg_name": "ceph_vg0"
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:         }
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:     ],
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:     "1": [
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:         {
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:             "devices": [
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:                 "/dev/loop4"
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:             ],
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:             "lv_name": "ceph_lv1",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:             "lv_size": "21470642176",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:             "name": "ceph_lv1",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:             "tags": {
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:                 "ceph.cluster_name": "ceph",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:                 "ceph.crush_device_class": "",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:                 "ceph.encrypted": "0",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:                 "ceph.osd_id": "1",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:                 "ceph.type": "block",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:                 "ceph.vdo": "0"
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:             },
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:             "type": "block",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:             "vg_name": "ceph_vg1"
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:         }
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:     ],
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:     "2": [
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:         {
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:             "devices": [
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:                 "/dev/loop5"
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:             ],
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:             "lv_name": "ceph_lv2",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:             "lv_size": "21470642176",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:             "name": "ceph_lv2",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:             "tags": {
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:                 "ceph.cluster_name": "ceph",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:                 "ceph.crush_device_class": "",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:                 "ceph.encrypted": "0",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:                 "ceph.osd_id": "2",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:                 "ceph.type": "block",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:                 "ceph.vdo": "0"
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:             },
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:             "type": "block",
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:             "vg_name": "ceph_vg2"
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:         }
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]:     ]
Nov 25 17:01:06 compute-0 nifty_bhaskara[379579]: }
Nov 25 17:01:06 compute-0 systemd[1]: libpod-e849a7a2780ad5f2b6d6bac134b08b777ff994ac42ee98d1a6c3f1127a978379.scope: Deactivated successfully.
Nov 25 17:01:06 compute-0 podman[379562]: 2025-11-25 17:01:06.776443778 +0000 UTC m=+1.244095475 container died e849a7a2780ad5f2b6d6bac134b08b777ff994ac42ee98d1a6c3f1127a978379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:01:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b2c1304f8d0794fbbb498ae81547086f6e89ed9cc177fdc5c157680a8899503-merged.mount: Deactivated successfully.
Nov 25 17:01:07 compute-0 podman[379562]: 2025-11-25 17:01:07.230278529 +0000 UTC m=+1.697930226 container remove e849a7a2780ad5f2b6d6bac134b08b777ff994ac42ee98d1a6c3f1127a978379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:01:07 compute-0 systemd[1]: libpod-conmon-e849a7a2780ad5f2b6d6bac134b08b777ff994ac42ee98d1a6c3f1127a978379.scope: Deactivated successfully.
Nov 25 17:01:07 compute-0 sudo[379457]: pam_unix(sudo:session): session closed for user root
Nov 25 17:01:07 compute-0 sudo[379600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:01:07 compute-0 sudo[379600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:01:07 compute-0 sudo[379600]: pam_unix(sudo:session): session closed for user root
Nov 25 17:01:07 compute-0 sudo[379625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:01:07 compute-0 sudo[379625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:01:07 compute-0 sudo[379625]: pam_unix(sudo:session): session closed for user root
Nov 25 17:01:07 compute-0 sudo[379650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:01:07 compute-0 sudo[379650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:01:07 compute-0 sudo[379650]: pam_unix(sudo:session): session closed for user root
Nov 25 17:01:07 compute-0 sudo[379675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:01:07 compute-0 sudo[379675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:01:08 compute-0 ceph-mon[74985]: pgmap v2327: 321 pgs: 321 active+clean; 88 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 127 op/s
Nov 25 17:01:08 compute-0 podman[379741]: 2025-11-25 17:01:07.927590011 +0000 UTC m=+0.022530645 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:01:08 compute-0 podman[379741]: 2025-11-25 17:01:08.114711288 +0000 UTC m=+0.209651902 container create 2e393d38df3c851607a5005968bcc21b7898275c5a79194bf687ec94f69076e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:01:08 compute-0 systemd[1]: Started libpod-conmon-2e393d38df3c851607a5005968bcc21b7898275c5a79194bf687ec94f69076e6.scope.
Nov 25 17:01:08 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:01:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2328: 321 pgs: 321 active+clean; 88 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 511 B/s wr, 76 op/s
Nov 25 17:01:08 compute-0 podman[379741]: 2025-11-25 17:01:08.27524519 +0000 UTC m=+0.370185874 container init 2e393d38df3c851607a5005968bcc21b7898275c5a79194bf687ec94f69076e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:01:08 compute-0 podman[379741]: 2025-11-25 17:01:08.284682726 +0000 UTC m=+0.379623340 container start 2e393d38df3c851607a5005968bcc21b7898275c5a79194bf687ec94f69076e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lichterman, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Nov 25 17:01:08 compute-0 affectionate_lichterman[379757]: 167 167
Nov 25 17:01:08 compute-0 systemd[1]: libpod-2e393d38df3c851607a5005968bcc21b7898275c5a79194bf687ec94f69076e6.scope: Deactivated successfully.
Nov 25 17:01:08 compute-0 podman[379741]: 2025-11-25 17:01:08.313856351 +0000 UTC m=+0.408796965 container attach 2e393d38df3c851607a5005968bcc21b7898275c5a79194bf687ec94f69076e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lichterman, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 17:01:08 compute-0 podman[379741]: 2025-11-25 17:01:08.314577661 +0000 UTC m=+0.409518275 container died 2e393d38df3c851607a5005968bcc21b7898275c5a79194bf687ec94f69076e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Nov 25 17:01:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-a87ba094b85bdecb279eced366cab2f3aa55c0966dc41e17a03ba9ba863832dc-merged.mount: Deactivated successfully.
Nov 25 17:01:08 compute-0 ovn_controller[153477]: 2025-11-25T17:01:08Z|00124|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:97:ec:69 10.100.0.8
Nov 25 17:01:08 compute-0 ovn_controller[153477]: 2025-11-25T17:01:08Z|00125|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:97:ec:69 10.100.0.8
Nov 25 17:01:08 compute-0 podman[379741]: 2025-11-25 17:01:08.669663202 +0000 UTC m=+0.764603816 container remove 2e393d38df3c851607a5005968bcc21b7898275c5a79194bf687ec94f69076e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 17:01:08 compute-0 systemd[1]: libpod-conmon-2e393d38df3c851607a5005968bcc21b7898275c5a79194bf687ec94f69076e6.scope: Deactivated successfully.
Nov 25 17:01:08 compute-0 podman[379786]: 2025-11-25 17:01:08.912604039 +0000 UTC m=+0.112576498 container create 0da25dfbdbe5ec4c4e115db45c808c6b2415fe50b31b3b8125fa8e9dc254af73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:01:08 compute-0 podman[379777]: 2025-11-25 17:01:08.916485004 +0000 UTC m=+0.123088613 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 17:01:08 compute-0 podman[379785]: 2025-11-25 17:01:08.917865852 +0000 UTC m=+0.121363227 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 17:01:08 compute-0 podman[379786]: 2025-11-25 17:01:08.824893949 +0000 UTC m=+0.024866438 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:01:09 compute-0 systemd[1]: Started libpod-conmon-0da25dfbdbe5ec4c4e115db45c808c6b2415fe50b31b3b8125fa8e9dc254af73.scope.
Nov 25 17:01:09 compute-0 podman[379783]: 2025-11-25 17:01:09.081036756 +0000 UTC m=+0.284287734 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 17:01:09 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:01:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3e4e37d0e58b663654a5a6ce727ed066a909ff4ea2aee270600dda09c8eb299/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:01:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3e4e37d0e58b663654a5a6ce727ed066a909ff4ea2aee270600dda09c8eb299/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:01:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3e4e37d0e58b663654a5a6ce727ed066a909ff4ea2aee270600dda09c8eb299/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:01:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3e4e37d0e58b663654a5a6ce727ed066a909ff4ea2aee270600dda09c8eb299/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:01:09 compute-0 podman[379786]: 2025-11-25 17:01:09.166779481 +0000 UTC m=+0.366751940 container init 0da25dfbdbe5ec4c4e115db45c808c6b2415fe50b31b3b8125fa8e9dc254af73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_elbakyan, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:01:09 compute-0 podman[379786]: 2025-11-25 17:01:09.174558563 +0000 UTC m=+0.374531022 container start 0da25dfbdbe5ec4c4e115db45c808c6b2415fe50b31b3b8125fa8e9dc254af73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_elbakyan, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 17:01:09 compute-0 nova_compute[254092]: 2025-11-25 17:01:09.210 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:01:09 compute-0 podman[379786]: 2025-11-25 17:01:09.352562812 +0000 UTC m=+0.552535271 container attach 0da25dfbdbe5ec4c4e115db45c808c6b2415fe50b31b3b8125fa8e9dc254af73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 17:01:09 compute-0 nova_compute[254092]: 2025-11-25 17:01:09.803 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:01:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:01:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:01:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:01:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:01:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:01:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:01:10 compute-0 dreamy_elbakyan[379864]: {
Nov 25 17:01:10 compute-0 dreamy_elbakyan[379864]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:01:10 compute-0 dreamy_elbakyan[379864]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:01:10 compute-0 dreamy_elbakyan[379864]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:01:10 compute-0 dreamy_elbakyan[379864]:         "osd_id": 1,
Nov 25 17:01:10 compute-0 dreamy_elbakyan[379864]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:01:10 compute-0 dreamy_elbakyan[379864]:         "type": "bluestore"
Nov 25 17:01:10 compute-0 dreamy_elbakyan[379864]:     },
Nov 25 17:01:10 compute-0 dreamy_elbakyan[379864]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:01:10 compute-0 dreamy_elbakyan[379864]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:01:10 compute-0 dreamy_elbakyan[379864]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:01:10 compute-0 dreamy_elbakyan[379864]:         "osd_id": 2,
Nov 25 17:01:10 compute-0 dreamy_elbakyan[379864]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:01:10 compute-0 dreamy_elbakyan[379864]:         "type": "bluestore"
Nov 25 17:01:10 compute-0 dreamy_elbakyan[379864]:     },
Nov 25 17:01:10 compute-0 dreamy_elbakyan[379864]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:01:10 compute-0 dreamy_elbakyan[379864]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:01:10 compute-0 dreamy_elbakyan[379864]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:01:10 compute-0 dreamy_elbakyan[379864]:         "osd_id": 0,
Nov 25 17:01:10 compute-0 dreamy_elbakyan[379864]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:01:10 compute-0 dreamy_elbakyan[379864]:         "type": "bluestore"
Nov 25 17:01:10 compute-0 dreamy_elbakyan[379864]:     }
Nov 25 17:01:10 compute-0 dreamy_elbakyan[379864]: }
Nov 25 17:01:10 compute-0 ceph-mon[74985]: pgmap v2328: 321 pgs: 321 active+clean; 88 MiB data, 809 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 511 B/s wr, 76 op/s
Nov 25 17:01:10 compute-0 systemd[1]: libpod-0da25dfbdbe5ec4c4e115db45c808c6b2415fe50b31b3b8125fa8e9dc254af73.scope: Deactivated successfully.
Nov 25 17:01:10 compute-0 podman[379786]: 2025-11-25 17:01:10.120563619 +0000 UTC m=+1.320536078 container died 0da25dfbdbe5ec4c4e115db45c808c6b2415fe50b31b3b8125fa8e9dc254af73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_elbakyan, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:01:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3e4e37d0e58b663654a5a6ce727ed066a909ff4ea2aee270600dda09c8eb299-merged.mount: Deactivated successfully.
Nov 25 17:01:10 compute-0 podman[379786]: 2025-11-25 17:01:10.175099253 +0000 UTC m=+1.375071712 container remove 0da25dfbdbe5ec4c4e115db45c808c6b2415fe50b31b3b8125fa8e9dc254af73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_elbakyan, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 17:01:10 compute-0 systemd[1]: libpod-conmon-0da25dfbdbe5ec4c4e115db45c808c6b2415fe50b31b3b8125fa8e9dc254af73.scope: Deactivated successfully.
Nov 25 17:01:10 compute-0 sudo[379675]: pam_unix(sudo:session): session closed for user root
Nov 25 17:01:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:01:10 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:01:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:01:10 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:01:10 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev c21ae369-2853-478b-a50f-d5461ae1e5e6 does not exist
Nov 25 17:01:10 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev eee7bdfd-c403-42b9-b6f1-8aa72ed11f2d does not exist
Nov 25 17:01:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2329: 321 pgs: 321 active+clean; 115 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.9 MiB/s wr, 120 op/s
Nov 25 17:01:10 compute-0 nova_compute[254092]: 2025-11-25 17:01:10.288 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090055.286896, d643dc57-9536-4a67-9a17-c20512710ea5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:01:10 compute-0 nova_compute[254092]: 2025-11-25 17:01:10.289 254096 INFO nova.compute.manager [-] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] VM Stopped (Lifecycle Event)
Nov 25 17:01:10 compute-0 sudo[379908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:01:10 compute-0 sudo[379908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:01:10 compute-0 sudo[379908]: pam_unix(sudo:session): session closed for user root
Nov 25 17:01:10 compute-0 nova_compute[254092]: 2025-11-25 17:01:10.307 254096 DEBUG nova.compute.manager [None req-8ba4d26d-1747-41c8-8461-44bb703c0c91 - - - - - -] [instance: d643dc57-9536-4a67-9a17-c20512710ea5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:01:10 compute-0 nova_compute[254092]: 2025-11-25 17:01:10.313 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:01:10 compute-0 sudo[379933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:01:10 compute-0 sudo[379933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:01:10 compute-0 sudo[379933]: pam_unix(sudo:session): session closed for user root
Nov 25 17:01:11 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:01:11 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:01:11 compute-0 ceph-mon[74985]: pgmap v2329: 321 pgs: 321 active+clean; 115 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.9 MiB/s wr, 120 op/s
Nov 25 17:01:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:01:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2330: 321 pgs: 321 active+clean; 121 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 501 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Nov 25 17:01:13 compute-0 ceph-mon[74985]: pgmap v2330: 321 pgs: 321 active+clean; 121 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 501 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Nov 25 17:01:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:13.637 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:01:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:13.639 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:01:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:13.640 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:01:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2331: 321 pgs: 321 active+clean; 121 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 229 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 25 17:01:14 compute-0 nova_compute[254092]: 2025-11-25 17:01:14.281 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:01:14 compute-0 nova_compute[254092]: 2025-11-25 17:01:14.805 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:01:15 compute-0 nova_compute[254092]: 2025-11-25 17:01:15.316 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:01:15 compute-0 ceph-mon[74985]: pgmap v2331: 321 pgs: 321 active+clean; 121 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 229 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 25 17:01:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2332: 321 pgs: 321 active+clean; 121 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 281 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:01:16 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 17:01:16 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 17:01:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:01:17 compute-0 ceph-mon[74985]: pgmap v2332: 321 pgs: 321 active+clean; 121 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 281 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:01:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2333: 321 pgs: 321 active+clean; 121 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 281 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:01:19 compute-0 ceph-mon[74985]: pgmap v2333: 321 pgs: 321 active+clean; 121 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 281 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:01:19 compute-0 nova_compute[254092]: 2025-11-25 17:01:19.807 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:01:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2334: 321 pgs: 321 active+clean; 121 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 281 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:01:20 compute-0 nova_compute[254092]: 2025-11-25 17:01:20.369 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:01:20 compute-0 nova_compute[254092]: 2025-11-25 17:01:20.374 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:01:20 compute-0 nova_compute[254092]: 2025-11-25 17:01:20.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:01:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:01:21 compute-0 ceph-mon[74985]: pgmap v2334: 321 pgs: 321 active+clean; 121 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 281 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:01:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2335: 321 pgs: 321 active+clean; 121 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 300 KiB/s wr, 19 op/s
Nov 25 17:01:23 compute-0 ceph-mon[74985]: pgmap v2335: 321 pgs: 321 active+clean; 121 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 300 KiB/s wr, 19 op/s
Nov 25 17:01:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2336: 321 pgs: 321 active+clean; 121 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 13 KiB/s wr, 4 op/s
Nov 25 17:01:24 compute-0 nova_compute[254092]: 2025-11-25 17:01:24.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:01:24 compute-0 nova_compute[254092]: 2025-11-25 17:01:24.809 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:01:24 compute-0 nova_compute[254092]: 2025-11-25 17:01:24.925 254096 DEBUG oslo_concurrency.lockutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:01:24 compute-0 nova_compute[254092]: 2025-11-25 17:01:24.925 254096 DEBUG oslo_concurrency.lockutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:01:24 compute-0 nova_compute[254092]: 2025-11-25 17:01:24.959 254096 DEBUG nova.compute.manager [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 17:01:25 compute-0 nova_compute[254092]: 2025-11-25 17:01:25.083 254096 DEBUG oslo_concurrency.lockutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:01:25 compute-0 nova_compute[254092]: 2025-11-25 17:01:25.084 254096 DEBUG oslo_concurrency.lockutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:01:25 compute-0 nova_compute[254092]: 2025-11-25 17:01:25.093 254096 DEBUG nova.virt.hardware [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 17:01:25 compute-0 nova_compute[254092]: 2025-11-25 17:01:25.094 254096 INFO nova.compute.claims [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Claim successful on node compute-0.ctlplane.example.com
Nov 25 17:01:25 compute-0 nova_compute[254092]: 2025-11-25 17:01:25.220 254096 DEBUG oslo_concurrency.processutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:01:25 compute-0 nova_compute[254092]: 2025-11-25 17:01:25.382 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:01:25 compute-0 nova_compute[254092]: 2025-11-25 17:01:25.422 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:01:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:01:25 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/448640309' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:01:25 compute-0 nova_compute[254092]: 2025-11-25 17:01:25.639 254096 DEBUG oslo_concurrency.processutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:01:25 compute-0 nova_compute[254092]: 2025-11-25 17:01:25.646 254096 DEBUG nova.compute.provider_tree [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:01:25 compute-0 nova_compute[254092]: 2025-11-25 17:01:25.667 254096 DEBUG nova.scheduler.client.report [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:01:25 compute-0 nova_compute[254092]: 2025-11-25 17:01:25.698 254096 DEBUG oslo_concurrency.lockutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:01:25 compute-0 nova_compute[254092]: 2025-11-25 17:01:25.699 254096 DEBUG nova.compute.manager [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 17:01:25 compute-0 nova_compute[254092]: 2025-11-25 17:01:25.805 254096 DEBUG nova.compute.manager [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 17:01:25 compute-0 nova_compute[254092]: 2025-11-25 17:01:25.805 254096 DEBUG nova.network.neutron [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 17:01:25 compute-0 nova_compute[254092]: 2025-11-25 17:01:25.846 254096 INFO nova.virt.libvirt.driver [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 17:01:25 compute-0 nova_compute[254092]: 2025-11-25 17:01:25.864 254096 DEBUG nova.compute.manager [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 17:01:25 compute-0 ceph-mon[74985]: pgmap v2336: 321 pgs: 321 active+clean; 121 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 13 KiB/s wr, 4 op/s
Nov 25 17:01:25 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/448640309' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:01:25 compute-0 nova_compute[254092]: 2025-11-25 17:01:25.965 254096 DEBUG nova.compute.manager [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 17:01:25 compute-0 nova_compute[254092]: 2025-11-25 17:01:25.967 254096 DEBUG nova.virt.libvirt.driver [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 17:01:25 compute-0 nova_compute[254092]: 2025-11-25 17:01:25.967 254096 INFO nova.virt.libvirt.driver [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Creating image(s)
Nov 25 17:01:25 compute-0 nova_compute[254092]: 2025-11-25 17:01:25.987 254096 DEBUG nova.storage.rbd_utils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:01:26 compute-0 nova_compute[254092]: 2025-11-25 17:01:26.006 254096 DEBUG nova.storage.rbd_utils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:01:26 compute-0 nova_compute[254092]: 2025-11-25 17:01:26.025 254096 DEBUG nova.storage.rbd_utils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:01:26 compute-0 nova_compute[254092]: 2025-11-25 17:01:26.029 254096 DEBUG oslo_concurrency.processutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:01:26 compute-0 nova_compute[254092]: 2025-11-25 17:01:26.098 254096 DEBUG oslo_concurrency.processutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:01:26 compute-0 nova_compute[254092]: 2025-11-25 17:01:26.098 254096 DEBUG oslo_concurrency.lockutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:01:26 compute-0 nova_compute[254092]: 2025-11-25 17:01:26.099 254096 DEBUG oslo_concurrency.lockutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:01:26 compute-0 nova_compute[254092]: 2025-11-25 17:01:26.099 254096 DEBUG oslo_concurrency.lockutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:01:26 compute-0 nova_compute[254092]: 2025-11-25 17:01:26.117 254096 DEBUG nova.storage.rbd_utils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:01:26 compute-0 nova_compute[254092]: 2025-11-25 17:01:26.120 254096 DEBUG oslo_concurrency.processutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:01:26 compute-0 nova_compute[254092]: 2025-11-25 17:01:26.220 254096 DEBUG nova.policy [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e13382ac65e94c7b8a89e67de0e754df', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 17:01:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2337: 321 pgs: 321 active+clean; 121 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 13 KiB/s wr, 4 op/s
Nov 25 17:01:26 compute-0 nova_compute[254092]: 2025-11-25 17:01:26.415 254096 DEBUG oslo_concurrency.processutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.295s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:01:26 compute-0 nova_compute[254092]: 2025-11-25 17:01:26.479 254096 DEBUG nova.storage.rbd_utils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] resizing rbd image f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 17:01:26 compute-0 nova_compute[254092]: 2025-11-25 17:01:26.568 254096 DEBUG nova.objects.instance [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'migration_context' on Instance uuid f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:01:26 compute-0 nova_compute[254092]: 2025-11-25 17:01:26.582 254096 DEBUG nova.virt.libvirt.driver [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 17:01:26 compute-0 nova_compute[254092]: 2025-11-25 17:01:26.582 254096 DEBUG nova.virt.libvirt.driver [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Ensure instance console log exists: /var/lib/nova/instances/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 17:01:26 compute-0 nova_compute[254092]: 2025-11-25 17:01:26.583 254096 DEBUG oslo_concurrency.lockutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:01:26 compute-0 nova_compute[254092]: 2025-11-25 17:01:26.583 254096 DEBUG oslo_concurrency.lockutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:01:26 compute-0 nova_compute[254092]: 2025-11-25 17:01:26.583 254096 DEBUG oslo_concurrency.lockutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:01:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:01:27 compute-0 nova_compute[254092]: 2025-11-25 17:01:27.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:01:27 compute-0 nova_compute[254092]: 2025-11-25 17:01:27.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:01:27 compute-0 ceph-mon[74985]: pgmap v2337: 321 pgs: 321 active+clean; 121 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 13 KiB/s wr, 4 op/s
Nov 25 17:01:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2338: 321 pgs: 321 active+clean; 121 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 1.3 KiB/s wr, 0 op/s
Nov 25 17:01:28 compute-0 nova_compute[254092]: 2025-11-25 17:01:28.560 254096 DEBUG nova.network.neutron [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Successfully created port: 9a960a19-c599-4217-b99c-ac16fe6384b1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 17:01:29 compute-0 nova_compute[254092]: 2025-11-25 17:01:29.422 254096 DEBUG nova.network.neutron [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Successfully updated port: 9a960a19-c599-4217-b99c-ac16fe6384b1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:01:29 compute-0 nova_compute[254092]: 2025-11-25 17:01:29.442 254096 DEBUG oslo_concurrency.lockutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "refresh_cache-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:01:29 compute-0 nova_compute[254092]: 2025-11-25 17:01:29.442 254096 DEBUG oslo_concurrency.lockutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquired lock "refresh_cache-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:01:29 compute-0 nova_compute[254092]: 2025-11-25 17:01:29.442 254096 DEBUG nova.network.neutron [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 17:01:29 compute-0 nova_compute[254092]: 2025-11-25 17:01:29.502 254096 DEBUG nova.compute.manager [req-79e75234-7082-4f5c-9b20-eec306bcb30b req-5a494cb6-e113-432a-95a0-59b90e72380d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Received event network-changed-9a960a19-c599-4217-b99c-ac16fe6384b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:01:29 compute-0 nova_compute[254092]: 2025-11-25 17:01:29.503 254096 DEBUG nova.compute.manager [req-79e75234-7082-4f5c-9b20-eec306bcb30b req-5a494cb6-e113-432a-95a0-59b90e72380d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Refreshing instance network info cache due to event network-changed-9a960a19-c599-4217-b99c-ac16fe6384b1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:01:29 compute-0 nova_compute[254092]: 2025-11-25 17:01:29.503 254096 DEBUG oslo_concurrency.lockutils [req-79e75234-7082-4f5c-9b20-eec306bcb30b req-5a494cb6-e113-432a-95a0-59b90e72380d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:01:29 compute-0 nova_compute[254092]: 2025-11-25 17:01:29.597 254096 DEBUG nova.network.neutron [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:01:29 compute-0 nova_compute[254092]: 2025-11-25 17:01:29.811 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:01:29 compute-0 ceph-mon[74985]: pgmap v2338: 321 pgs: 321 active+clean; 121 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 1.3 KiB/s wr, 0 op/s
Nov 25 17:01:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2339: 321 pgs: 321 active+clean; 150 MiB data, 845 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.1 MiB/s wr, 24 op/s
Nov 25 17:01:30 compute-0 nova_compute[254092]: 2025-11-25 17:01:30.307 254096 DEBUG nova.network.neutron [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Updating instance_info_cache with network_info: [{"id": "9a960a19-c599-4217-b99c-ac16fe6384b1", "address": "fa:16:3e:a8:c9:7b", "network": {"id": "131ae834-ee81-42ce-b61e-863b3a8d52e1", "bridge": "br-int", "label": "tempest-network-smoke--413727726", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a960a19-c5", "ovs_interfaceid": "9a960a19-c599-4217-b99c-ac16fe6384b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:01:30 compute-0 nova_compute[254092]: 2025-11-25 17:01:30.337 254096 DEBUG oslo_concurrency.lockutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Releasing lock "refresh_cache-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:01:30 compute-0 nova_compute[254092]: 2025-11-25 17:01:30.338 254096 DEBUG nova.compute.manager [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Instance network_info: |[{"id": "9a960a19-c599-4217-b99c-ac16fe6384b1", "address": "fa:16:3e:a8:c9:7b", "network": {"id": "131ae834-ee81-42ce-b61e-863b3a8d52e1", "bridge": "br-int", "label": "tempest-network-smoke--413727726", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a960a19-c5", "ovs_interfaceid": "9a960a19-c599-4217-b99c-ac16fe6384b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 17:01:30 compute-0 nova_compute[254092]: 2025-11-25 17:01:30.338 254096 DEBUG oslo_concurrency.lockutils [req-79e75234-7082-4f5c-9b20-eec306bcb30b req-5a494cb6-e113-432a-95a0-59b90e72380d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:01:30 compute-0 nova_compute[254092]: 2025-11-25 17:01:30.339 254096 DEBUG nova.network.neutron [req-79e75234-7082-4f5c-9b20-eec306bcb30b req-5a494cb6-e113-432a-95a0-59b90e72380d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Refreshing network info cache for port 9a960a19-c599-4217-b99c-ac16fe6384b1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:01:30 compute-0 nova_compute[254092]: 2025-11-25 17:01:30.344 254096 DEBUG nova.virt.libvirt.driver [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Start _get_guest_xml network_info=[{"id": "9a960a19-c599-4217-b99c-ac16fe6384b1", "address": "fa:16:3e:a8:c9:7b", "network": {"id": "131ae834-ee81-42ce-b61e-863b3a8d52e1", "bridge": "br-int", "label": "tempest-network-smoke--413727726", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a960a19-c5", "ovs_interfaceid": "9a960a19-c599-4217-b99c-ac16fe6384b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 17:01:30 compute-0 nova_compute[254092]: 2025-11-25 17:01:30.352 254096 WARNING nova.virt.libvirt.driver [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:01:30 compute-0 nova_compute[254092]: 2025-11-25 17:01:30.364 254096 DEBUG nova.virt.libvirt.host [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 17:01:30 compute-0 nova_compute[254092]: 2025-11-25 17:01:30.365 254096 DEBUG nova.virt.libvirt.host [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 17:01:30 compute-0 nova_compute[254092]: 2025-11-25 17:01:30.369 254096 DEBUG nova.virt.libvirt.host [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 17:01:30 compute-0 nova_compute[254092]: 2025-11-25 17:01:30.370 254096 DEBUG nova.virt.libvirt.host [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 17:01:30 compute-0 nova_compute[254092]: 2025-11-25 17:01:30.371 254096 DEBUG nova.virt.libvirt.driver [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 17:01:30 compute-0 nova_compute[254092]: 2025-11-25 17:01:30.372 254096 DEBUG nova.virt.hardware [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 17:01:30 compute-0 nova_compute[254092]: 2025-11-25 17:01:30.373 254096 DEBUG nova.virt.hardware [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 17:01:30 compute-0 nova_compute[254092]: 2025-11-25 17:01:30.373 254096 DEBUG nova.virt.hardware [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 17:01:30 compute-0 nova_compute[254092]: 2025-11-25 17:01:30.374 254096 DEBUG nova.virt.hardware [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 17:01:30 compute-0 nova_compute[254092]: 2025-11-25 17:01:30.374 254096 DEBUG nova.virt.hardware [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 17:01:30 compute-0 nova_compute[254092]: 2025-11-25 17:01:30.375 254096 DEBUG nova.virt.hardware [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 17:01:30 compute-0 nova_compute[254092]: 2025-11-25 17:01:30.375 254096 DEBUG nova.virt.hardware [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 17:01:30 compute-0 nova_compute[254092]: 2025-11-25 17:01:30.376 254096 DEBUG nova.virt.hardware [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 17:01:30 compute-0 nova_compute[254092]: 2025-11-25 17:01:30.376 254096 DEBUG nova.virt.hardware [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 17:01:30 compute-0 nova_compute[254092]: 2025-11-25 17:01:30.377 254096 DEBUG nova.virt.hardware [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 17:01:30 compute-0 nova_compute[254092]: 2025-11-25 17:01:30.377 254096 DEBUG nova.virt.hardware [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 17:01:30 compute-0 nova_compute[254092]: 2025-11-25 17:01:30.382 254096 DEBUG oslo_concurrency.processutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:01:30 compute-0 nova_compute[254092]: 2025-11-25 17:01:30.420 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:01:30 compute-0 nova_compute[254092]: 2025-11-25 17:01:30.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:01:30 compute-0 nova_compute[254092]: 2025-11-25 17:01:30.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:01:30 compute-0 nova_compute[254092]: 2025-11-25 17:01:30.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:01:30 compute-0 nova_compute[254092]: 2025-11-25 17:01:30.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:01:30 compute-0 nova_compute[254092]: 2025-11-25 17:01:30.523 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:01:30 compute-0 nova_compute[254092]: 2025-11-25 17:01:30.524 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:01:30 compute-0 nova_compute[254092]: 2025-11-25 17:01:30.525 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:01:30 compute-0 nova_compute[254092]: 2025-11-25 17:01:30.525 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:01:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:01:30 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/733548415' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:01:30 compute-0 nova_compute[254092]: 2025-11-25 17:01:30.844 254096 DEBUG oslo_concurrency.processutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:01:30 compute-0 nova_compute[254092]: 2025-11-25 17:01:30.873 254096 DEBUG nova.storage.rbd_utils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:01:30 compute-0 nova_compute[254092]: 2025-11-25 17:01:30.888 254096 DEBUG oslo_concurrency.processutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:01:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:01:31 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/8111940' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:01:31 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/733548415' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:01:31 compute-0 nova_compute[254092]: 2025-11-25 17:01:31.045 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:01:31 compute-0 nova_compute[254092]: 2025-11-25 17:01:31.137 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:01:31 compute-0 nova_compute[254092]: 2025-11-25 17:01:31.137 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:01:31 compute-0 nova_compute[254092]: 2025-11-25 17:01:31.270 254096 DEBUG oslo_concurrency.lockutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Acquiring lock "1f0c80e2-19cd-43ba-881d-e24e5bcd62fb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:01:31 compute-0 nova_compute[254092]: 2025-11-25 17:01:31.270 254096 DEBUG oslo_concurrency.lockutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Lock "1f0c80e2-19cd-43ba-881d-e24e5bcd62fb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:01:31 compute-0 nova_compute[254092]: 2025-11-25 17:01:31.306 254096 DEBUG nova.compute.manager [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 17:01:31 compute-0 nova_compute[254092]: 2025-11-25 17:01:31.383 254096 DEBUG oslo_concurrency.lockutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:01:31 compute-0 nova_compute[254092]: 2025-11-25 17:01:31.384 254096 DEBUG oslo_concurrency.lockutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:01:31 compute-0 nova_compute[254092]: 2025-11-25 17:01:31.391 254096 DEBUG nova.virt.hardware [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 17:01:31 compute-0 nova_compute[254092]: 2025-11-25 17:01:31.393 254096 INFO nova.compute.claims [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Claim successful on node compute-0.ctlplane.example.com
Nov 25 17:01:31 compute-0 nova_compute[254092]: 2025-11-25 17:01:31.411 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:01:31 compute-0 nova_compute[254092]: 2025-11-25 17:01:31.412 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3579MB free_disk=59.942745208740234GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:01:31 compute-0 nova_compute[254092]: 2025-11-25 17:01:31.413 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:01:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:01:31 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3123534453' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:01:31 compute-0 nova_compute[254092]: 2025-11-25 17:01:31.503 254096 DEBUG oslo_concurrency.processutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:01:31 compute-0 nova_compute[254092]: 2025-11-25 17:01:31.505 254096 DEBUG nova.virt.libvirt.vif [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:01:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1605221886',display_name='tempest-TestNetworkBasicOps-server-1605221886',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1605221886',id=114,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG8GG+LWFgTdI2Lx3HIK8iitzYjRcEiXMosisS4j8Mff81tp3LiT3UmyZlgVw+9dc2rljXR4tNO0hQkUya+Lw6KQzFUnQWm0QB4mvWvH4UQzxuEdyxR/UzUx6isIg0+QoA==',key_name='tempest-TestNetworkBasicOps-823633184',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-4x69xfhl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:01:25Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9a960a19-c599-4217-b99c-ac16fe6384b1", "address": "fa:16:3e:a8:c9:7b", "network": {"id": "131ae834-ee81-42ce-b61e-863b3a8d52e1", "bridge": "br-int", "label": "tempest-network-smoke--413727726", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a960a19-c5", "ovs_interfaceid": "9a960a19-c599-4217-b99c-ac16fe6384b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:01:31 compute-0 nova_compute[254092]: 2025-11-25 17:01:31.505 254096 DEBUG nova.network.os_vif_util [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "9a960a19-c599-4217-b99c-ac16fe6384b1", "address": "fa:16:3e:a8:c9:7b", "network": {"id": "131ae834-ee81-42ce-b61e-863b3a8d52e1", "bridge": "br-int", "label": "tempest-network-smoke--413727726", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a960a19-c5", "ovs_interfaceid": "9a960a19-c599-4217-b99c-ac16fe6384b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:01:31 compute-0 nova_compute[254092]: 2025-11-25 17:01:31.507 254096 DEBUG nova.network.os_vif_util [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:c9:7b,bridge_name='br-int',has_traffic_filtering=True,id=9a960a19-c599-4217-b99c-ac16fe6384b1,network=Network(131ae834-ee81-42ce-b61e-863b3a8d52e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a960a19-c5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:01:31 compute-0 nova_compute[254092]: 2025-11-25 17:01:31.508 254096 DEBUG nova.objects.instance [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'pci_devices' on Instance uuid f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:01:31 compute-0 nova_compute[254092]: 2025-11-25 17:01:31.529 254096 DEBUG nova.virt.libvirt.driver [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] End _get_guest_xml xml=<domain type="kvm">
Nov 25 17:01:31 compute-0 nova_compute[254092]:   <uuid>f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7</uuid>
Nov 25 17:01:31 compute-0 nova_compute[254092]:   <name>instance-00000072</name>
Nov 25 17:01:31 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 17:01:31 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 17:01:31 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:01:31 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:       <nova:name>tempest-TestNetworkBasicOps-server-1605221886</nova:name>
Nov 25 17:01:31 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 17:01:30</nova:creationTime>
Nov 25 17:01:31 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 17:01:31 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 17:01:31 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 17:01:31 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 17:01:31 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:01:31 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 17:01:31 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 17:01:31 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 17:01:31 compute-0 nova_compute[254092]:         <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 17:01:31 compute-0 nova_compute[254092]:         <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 17:01:31 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 17:01:31 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 17:01:31 compute-0 nova_compute[254092]:         <nova:port uuid="9a960a19-c599-4217-b99c-ac16fe6384b1">
Nov 25 17:01:31 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 17:01:31 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 17:01:31 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:01:31 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 17:01:31 compute-0 nova_compute[254092]:     <system>
Nov 25 17:01:31 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 17:01:31 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 17:01:31 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:01:31 compute-0 nova_compute[254092]:       <entry name="serial">f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7</entry>
Nov 25 17:01:31 compute-0 nova_compute[254092]:       <entry name="uuid">f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7</entry>
Nov 25 17:01:31 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     </system>
Nov 25 17:01:31 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:01:31 compute-0 nova_compute[254092]:   <os>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:   </os>
Nov 25 17:01:31 compute-0 nova_compute[254092]:   <features>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:   </features>
Nov 25 17:01:31 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 17:01:31 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:01:31 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 17:01:31 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:01:31 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 17:01:31 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7_disk">
Nov 25 17:01:31 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:       </source>
Nov 25 17:01:31 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:01:31 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:01:31 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 17:01:31 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7_disk.config">
Nov 25 17:01:31 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:       </source>
Nov 25 17:01:31 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:01:31 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:01:31 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 17:01:31 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:a8:c9:7b"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:       <target dev="tap9a960a19-c5"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 17:01:31 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7/console.log" append="off"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     <video>
Nov 25 17:01:31 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     </video>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 17:01:31 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 17:01:31 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 17:01:31 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:01:31 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:01:31 compute-0 nova_compute[254092]: </domain>
Nov 25 17:01:31 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 17:01:31 compute-0 nova_compute[254092]: 2025-11-25 17:01:31.531 254096 DEBUG nova.compute.manager [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Preparing to wait for external event network-vif-plugged-9a960a19-c599-4217-b99c-ac16fe6384b1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 17:01:31 compute-0 nova_compute[254092]: 2025-11-25 17:01:31.532 254096 DEBUG oslo_concurrency.lockutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:01:31 compute-0 nova_compute[254092]: 2025-11-25 17:01:31.533 254096 DEBUG oslo_concurrency.lockutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:01:31 compute-0 nova_compute[254092]: 2025-11-25 17:01:31.533 254096 DEBUG oslo_concurrency.lockutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:01:31 compute-0 nova_compute[254092]: 2025-11-25 17:01:31.535 254096 DEBUG nova.virt.libvirt.vif [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:01:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1605221886',display_name='tempest-TestNetworkBasicOps-server-1605221886',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1605221886',id=114,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG8GG+LWFgTdI2Lx3HIK8iitzYjRcEiXMosisS4j8Mff81tp3LiT3UmyZlgVw+9dc2rljXR4tNO0hQkUya+Lw6KQzFUnQWm0QB4mvWvH4UQzxuEdyxR/UzUx6isIg0+QoA==',key_name='tempest-TestNetworkBasicOps-823633184',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-4x69xfhl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:01:25Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9a960a19-c599-4217-b99c-ac16fe6384b1", "address": "fa:16:3e:a8:c9:7b", "network": {"id": "131ae834-ee81-42ce-b61e-863b3a8d52e1", "bridge": "br-int", "label": "tempest-network-smoke--413727726", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a960a19-c5", "ovs_interfaceid": "9a960a19-c599-4217-b99c-ac16fe6384b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:01:31 compute-0 nova_compute[254092]: 2025-11-25 17:01:31.536 254096 DEBUG nova.network.os_vif_util [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "9a960a19-c599-4217-b99c-ac16fe6384b1", "address": "fa:16:3e:a8:c9:7b", "network": {"id": "131ae834-ee81-42ce-b61e-863b3a8d52e1", "bridge": "br-int", "label": "tempest-network-smoke--413727726", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a960a19-c5", "ovs_interfaceid": "9a960a19-c599-4217-b99c-ac16fe6384b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:01:31 compute-0 nova_compute[254092]: 2025-11-25 17:01:31.537 254096 DEBUG nova.network.os_vif_util [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:c9:7b,bridge_name='br-int',has_traffic_filtering=True,id=9a960a19-c599-4217-b99c-ac16fe6384b1,network=Network(131ae834-ee81-42ce-b61e-863b3a8d52e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a960a19-c5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:01:31 compute-0 nova_compute[254092]: 2025-11-25 17:01:31.538 254096 DEBUG os_vif [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:c9:7b,bridge_name='br-int',has_traffic_filtering=True,id=9a960a19-c599-4217-b99c-ac16fe6384b1,network=Network(131ae834-ee81-42ce-b61e-863b3a8d52e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a960a19-c5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:01:31 compute-0 nova_compute[254092]: 2025-11-25 17:01:31.540 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:01:31 compute-0 nova_compute[254092]: 2025-11-25 17:01:31.541 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:01:31 compute-0 nova_compute[254092]: 2025-11-25 17:01:31.541 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:01:31 compute-0 nova_compute[254092]: 2025-11-25 17:01:31.547 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:01:31 compute-0 nova_compute[254092]: 2025-11-25 17:01:31.547 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9a960a19-c5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:01:31 compute-0 nova_compute[254092]: 2025-11-25 17:01:31.548 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9a960a19-c5, col_values=(('external_ids', {'iface-id': '9a960a19-c599-4217-b99c-ac16fe6384b1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a8:c9:7b', 'vm-uuid': 'f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:01:31 compute-0 nova_compute[254092]: 2025-11-25 17:01:31.550 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:01:31 compute-0 NetworkManager[48891]: <info>  [1764090091.5519] manager: (tap9a960a19-c5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/482)
Nov 25 17:01:31 compute-0 nova_compute[254092]: 2025-11-25 17:01:31.553 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:01:31 compute-0 nova_compute[254092]: 2025-11-25 17:01:31.558 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:01:31 compute-0 nova_compute[254092]: 2025-11-25 17:01:31.559 254096 INFO os_vif [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:c9:7b,bridge_name='br-int',has_traffic_filtering=True,id=9a960a19-c599-4217-b99c-ac16fe6384b1,network=Network(131ae834-ee81-42ce-b61e-863b3a8d52e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a960a19-c5')
Nov 25 17:01:31 compute-0 nova_compute[254092]: 2025-11-25 17:01:31.564 254096 DEBUG oslo_concurrency.processutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:01:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:01:31 compute-0 nova_compute[254092]: 2025-11-25 17:01:31.660 254096 DEBUG nova.virt.libvirt.driver [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:01:31 compute-0 nova_compute[254092]: 2025-11-25 17:01:31.661 254096 DEBUG nova.virt.libvirt.driver [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:01:31 compute-0 nova_compute[254092]: 2025-11-25 17:01:31.661 254096 DEBUG nova.virt.libvirt.driver [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No VIF found with MAC fa:16:3e:a8:c9:7b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:01:31 compute-0 nova_compute[254092]: 2025-11-25 17:01:31.661 254096 INFO nova.virt.libvirt.driver [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Using config drive
Nov 25 17:01:31 compute-0 nova_compute[254092]: 2025-11-25 17:01:31.681 254096 DEBUG nova.storage.rbd_utils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:01:32 compute-0 ceph-mon[74985]: pgmap v2339: 321 pgs: 321 active+clean; 150 MiB data, 845 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.1 MiB/s wr, 24 op/s
Nov 25 17:01:32 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/8111940' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:01:32 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3123534453' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:01:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:01:32 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4171348109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:01:32 compute-0 nova_compute[254092]: 2025-11-25 17:01:32.066 254096 DEBUG oslo_concurrency.processutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:01:32 compute-0 nova_compute[254092]: 2025-11-25 17:01:32.073 254096 DEBUG nova.compute.provider_tree [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:01:32 compute-0 nova_compute[254092]: 2025-11-25 17:01:32.092 254096 DEBUG nova.scheduler.client.report [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:01:32 compute-0 nova_compute[254092]: 2025-11-25 17:01:32.122 254096 DEBUG oslo_concurrency.lockutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:01:32 compute-0 nova_compute[254092]: 2025-11-25 17:01:32.124 254096 DEBUG nova.compute.manager [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 17:01:32 compute-0 nova_compute[254092]: 2025-11-25 17:01:32.128 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:01:32 compute-0 nova_compute[254092]: 2025-11-25 17:01:32.236 254096 DEBUG nova.compute.manager [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 17:01:32 compute-0 nova_compute[254092]: 2025-11-25 17:01:32.237 254096 DEBUG nova.network.neutron [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 17:01:32 compute-0 nova_compute[254092]: 2025-11-25 17:01:32.256 254096 INFO nova.virt.libvirt.driver [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 17:01:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2340: 321 pgs: 321 active+clean; 167 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:01:32 compute-0 nova_compute[254092]: 2025-11-25 17:01:32.280 254096 DEBUG nova.compute.manager [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 17:01:32 compute-0 nova_compute[254092]: 2025-11-25 17:01:32.293 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance caf64ca2-5f73-454a-8442-9965c9853cba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 17:01:32 compute-0 nova_compute[254092]: 2025-11-25 17:01:32.293 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 17:01:32 compute-0 nova_compute[254092]: 2025-11-25 17:01:32.293 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 17:01:32 compute-0 nova_compute[254092]: 2025-11-25 17:01:32.293 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:01:32 compute-0 nova_compute[254092]: 2025-11-25 17:01:32.293 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:01:32 compute-0 nova_compute[254092]: 2025-11-25 17:01:32.377 254096 DEBUG nova.compute.manager [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 17:01:32 compute-0 nova_compute[254092]: 2025-11-25 17:01:32.378 254096 DEBUG nova.virt.libvirt.driver [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 17:01:32 compute-0 nova_compute[254092]: 2025-11-25 17:01:32.378 254096 INFO nova.virt.libvirt.driver [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Creating image(s)
Nov 25 17:01:32 compute-0 nova_compute[254092]: 2025-11-25 17:01:32.399 254096 DEBUG nova.storage.rbd_utils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] rbd image 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:01:32 compute-0 nova_compute[254092]: 2025-11-25 17:01:32.431 254096 DEBUG nova.storage.rbd_utils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] rbd image 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:01:32 compute-0 nova_compute[254092]: 2025-11-25 17:01:32.467 254096 DEBUG nova.storage.rbd_utils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] rbd image 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:01:32 compute-0 nova_compute[254092]: 2025-11-25 17:01:32.472 254096 DEBUG oslo_concurrency.processutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:01:32 compute-0 nova_compute[254092]: 2025-11-25 17:01:32.533 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:01:32 compute-0 nova_compute[254092]: 2025-11-25 17:01:32.580 254096 DEBUG nova.network.neutron [req-79e75234-7082-4f5c-9b20-eec306bcb30b req-5a494cb6-e113-432a-95a0-59b90e72380d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Updated VIF entry in instance network info cache for port 9a960a19-c599-4217-b99c-ac16fe6384b1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:01:32 compute-0 nova_compute[254092]: 2025-11-25 17:01:32.581 254096 DEBUG nova.network.neutron [req-79e75234-7082-4f5c-9b20-eec306bcb30b req-5a494cb6-e113-432a-95a0-59b90e72380d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Updating instance_info_cache with network_info: [{"id": "9a960a19-c599-4217-b99c-ac16fe6384b1", "address": "fa:16:3e:a8:c9:7b", "network": {"id": "131ae834-ee81-42ce-b61e-863b3a8d52e1", "bridge": "br-int", "label": "tempest-network-smoke--413727726", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a960a19-c5", "ovs_interfaceid": "9a960a19-c599-4217-b99c-ac16fe6384b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:01:32 compute-0 nova_compute[254092]: 2025-11-25 17:01:32.585 254096 DEBUG nova.policy [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'baa046e735b94aba93374dff061b9e77', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'aa394380a92d48188f2de86f1a100c08', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 17:01:32 compute-0 nova_compute[254092]: 2025-11-25 17:01:32.590 254096 DEBUG oslo_concurrency.processutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.118s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:01:32 compute-0 nova_compute[254092]: 2025-11-25 17:01:32.591 254096 DEBUG oslo_concurrency.lockutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:01:32 compute-0 nova_compute[254092]: 2025-11-25 17:01:32.592 254096 DEBUG oslo_concurrency.lockutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:01:32 compute-0 nova_compute[254092]: 2025-11-25 17:01:32.592 254096 DEBUG oslo_concurrency.lockutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:01:32 compute-0 nova_compute[254092]: 2025-11-25 17:01:32.619 254096 DEBUG nova.storage.rbd_utils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] rbd image 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:01:32 compute-0 nova_compute[254092]: 2025-11-25 17:01:32.625 254096 DEBUG oslo_concurrency.processutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:01:32 compute-0 nova_compute[254092]: 2025-11-25 17:01:32.674 254096 DEBUG oslo_concurrency.lockutils [req-79e75234-7082-4f5c-9b20-eec306bcb30b req-5a494cb6-e113-432a-95a0-59b90e72380d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:01:32 compute-0 nova_compute[254092]: 2025-11-25 17:01:32.759 254096 INFO nova.virt.libvirt.driver [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Creating config drive at /var/lib/nova/instances/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7/disk.config
Nov 25 17:01:32 compute-0 nova_compute[254092]: 2025-11-25 17:01:32.766 254096 DEBUG oslo_concurrency.processutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppwo4f3c5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:01:32 compute-0 nova_compute[254092]: 2025-11-25 17:01:32.927 254096 DEBUG oslo_concurrency.processutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppwo4f3c5" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:01:32 compute-0 nova_compute[254092]: 2025-11-25 17:01:32.964 254096 DEBUG nova.storage.rbd_utils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:01:32 compute-0 nova_compute[254092]: 2025-11-25 17:01:32.971 254096 DEBUG oslo_concurrency.processutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7/disk.config f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:01:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:01:32 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3971799756' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.007 254096 DEBUG oslo_concurrency.processutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.382s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:01:33 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4171348109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:01:33 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3971799756' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.035 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.069 254096 DEBUG nova.storage.rbd_utils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] resizing rbd image 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.097 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.114 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.137 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.137 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.009s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.140 254096 DEBUG oslo_concurrency.processutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7/disk.config f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.141 254096 INFO nova.virt.libvirt.driver [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Deleting local config drive /var/lib/nova/instances/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7/disk.config because it was imported into RBD.
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.175 254096 DEBUG nova.objects.instance [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Lazy-loading 'migration_context' on Instance uuid 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:01:33 compute-0 NetworkManager[48891]: <info>  [1764090093.1863] manager: (tap9a960a19-c5): new Tun device (/org/freedesktop/NetworkManager/Devices/483)
Nov 25 17:01:33 compute-0 kernel: tap9a960a19-c5: entered promiscuous mode
Nov 25 17:01:33 compute-0 ovn_controller[153477]: 2025-11-25T17:01:33Z|01171|binding|INFO|Claiming lport 9a960a19-c599-4217-b99c-ac16fe6384b1 for this chassis.
Nov 25 17:01:33 compute-0 ovn_controller[153477]: 2025-11-25T17:01:33Z|01172|binding|INFO|9a960a19-c599-4217-b99c-ac16fe6384b1: Claiming fa:16:3e:a8:c9:7b 10.100.0.3
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.188 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.193 254096 DEBUG nova.virt.libvirt.driver [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.194 254096 DEBUG nova.virt.libvirt.driver [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Ensure instance console log exists: /var/lib/nova/instances/1f0c80e2-19cd-43ba-881d-e24e5bcd62fb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.194 254096 DEBUG oslo_concurrency.lockutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.195 254096 DEBUG oslo_concurrency.lockutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.195 254096 DEBUG oslo_concurrency.lockutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.197 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:c9:7b 10.100.0.3'], port_security=['fa:16:3e:a8:c9:7b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-131ae834-ee81-42ce-b61e-863b3a8d52e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '2', 'neutron:security_group_ids': '573985ee-22d8-4e8a-b764-ea06c40f2ee7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ece2fc14-3f44-4554-9543-96a461b3adc3, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=9a960a19-c599-4217-b99c-ac16fe6384b1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.198 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 9a960a19-c599-4217-b99c-ac16fe6384b1 in datapath 131ae834-ee81-42ce-b61e-863b3a8d52e1 bound to our chassis
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.199 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 131ae834-ee81-42ce-b61e-863b3a8d52e1
Nov 25 17:01:33 compute-0 ovn_controller[153477]: 2025-11-25T17:01:33Z|01173|binding|INFO|Setting lport 9a960a19-c599-4217-b99c-ac16fe6384b1 ovn-installed in OVS
Nov 25 17:01:33 compute-0 ovn_controller[153477]: 2025-11-25T17:01:33Z|01174|binding|INFO|Setting lport 9a960a19-c599-4217-b99c-ac16fe6384b1 up in Southbound
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.232 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.231 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0a725a70-5a09-43f6-88d0-b796af324bd1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.233 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap131ae834-e1 in ovnmeta-131ae834-ee81-42ce-b61e-863b3a8d52e1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.235 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap131ae834-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.236 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[32ffe03f-ceea-4fa1-8f71-606353c7b0c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.236 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[41f0c8d2-7d9a-4551-9e10-6cfeeb804d7f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:01:33 compute-0 systemd-udevd[380513]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:01:33 compute-0 NetworkManager[48891]: <info>  [1764090093.2500] device (tap9a960a19-c5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:01:33 compute-0 NetworkManager[48891]: <info>  [1764090093.2510] device (tap9a960a19-c5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.250 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[14fd8c3f-8789-40c3-9614-c6cdf886da60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:01:33 compute-0 systemd-machined[216343]: New machine qemu-146-instance-00000072.
Nov 25 17:01:33 compute-0 systemd[1]: Started Virtual Machine qemu-146-instance-00000072.
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.273 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[54c5f3b2-cbbf-463c-ae23-ebbfc54f5fae]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.302 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[49f4b1dc-9724-4106-8e3b-efbc225dceda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:01:33 compute-0 systemd-udevd[380519]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:01:33 compute-0 NetworkManager[48891]: <info>  [1764090093.3079] manager: (tap131ae834-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/484)
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.307 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fe7912d7-9001-4939-897c-2371483e50ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.337 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d6443277-1bce-4737-842f-8e553464d272]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.340 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[508b7a61-1bc1-4653-b231-4abb509d21ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:01:33 compute-0 NetworkManager[48891]: <info>  [1764090093.3595] device (tap131ae834-e0): carrier: link connected
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.365 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[46f60e1d-9fe7-4533-8bb0-d69aea915072]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.385 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cdb770f7-7672-403f-b3de-c7b1a2eb2000]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap131ae834-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:27:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 346], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655095, 'reachable_time': 20234, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 380548, 'error': None, 'target': 'ovnmeta-131ae834-ee81-42ce-b61e-863b3a8d52e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.399 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4f56c93d-d8c4-4eee-b54d-0bc0160e6da8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4e:278b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 655095, 'tstamp': 655095}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 380549, 'error': None, 'target': 'ovnmeta-131ae834-ee81-42ce-b61e-863b3a8d52e1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.416 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0a791cb8-9b1f-4cf0-85ed-ce8717d0dd8f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap131ae834-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:27:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 346], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655095, 'reachable_time': 20234, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 380550, 'error': None, 'target': 'ovnmeta-131ae834-ee81-42ce-b61e-863b3a8d52e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.448 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7493bb7a-1264-4750-b754-b1b9bff16fe2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.521 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2c543888-9e10-44f1-b649-6fbcbbb7ad32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.523 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap131ae834-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.523 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.524 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap131ae834-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:01:33 compute-0 kernel: tap131ae834-e0: entered promiscuous mode
Nov 25 17:01:33 compute-0 NetworkManager[48891]: <info>  [1764090093.5278] manager: (tap131ae834-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/485)
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.527 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.533 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap131ae834-e0, col_values=(('external_ids', {'iface-id': '60519f01-35a8-45ac-b477-17b0e31a750f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.535 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:01:33 compute-0 ovn_controller[153477]: 2025-11-25T17:01:33Z|01175|binding|INFO|Releasing lport 60519f01-35a8-45ac-b477-17b0e31a750f from this chassis (sb_readonly=0)
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.559 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.561 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/131ae834-ee81-42ce-b61e-863b3a8d52e1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/131ae834-ee81-42ce-b61e-863b3a8d52e1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.563 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a16b1fa5-1247-4e75-bbd9-e2db6a9280d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.565 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]: global
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-131ae834-ee81-42ce-b61e-863b3a8d52e1
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/131ae834-ee81-42ce-b61e-863b3a8d52e1.pid.haproxy
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 131ae834-ee81-42ce-b61e-863b3a8d52e1
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 17:01:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:33.567 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-131ae834-ee81-42ce-b61e-863b3a8d52e1', 'env', 'PROCESS_TAG=haproxy-131ae834-ee81-42ce-b61e-863b3a8d52e1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/131ae834-ee81-42ce-b61e-863b3a8d52e1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.617 254096 DEBUG nova.compute.manager [req-65c664d6-3f9e-4fd8-8328-ab2b8c7105b0 req-76d17873-1eed-43cc-a388-0c47f67c9ebe a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Received event network-vif-plugged-9a960a19-c599-4217-b99c-ac16fe6384b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.618 254096 DEBUG oslo_concurrency.lockutils [req-65c664d6-3f9e-4fd8-8328-ab2b8c7105b0 req-76d17873-1eed-43cc-a388-0c47f67c9ebe a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.619 254096 DEBUG oslo_concurrency.lockutils [req-65c664d6-3f9e-4fd8-8328-ab2b8c7105b0 req-76d17873-1eed-43cc-a388-0c47f67c9ebe a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.620 254096 DEBUG oslo_concurrency.lockutils [req-65c664d6-3f9e-4fd8-8328-ab2b8c7105b0 req-76d17873-1eed-43cc-a388-0c47f67c9ebe a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.620 254096 DEBUG nova.compute.manager [req-65c664d6-3f9e-4fd8-8328-ab2b8c7105b0 req-76d17873-1eed-43cc-a388-0c47f67c9ebe a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Processing event network-vif-plugged-9a960a19-c599-4217-b99c-ac16fe6384b1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.722 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090093.7218275, f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.723 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] VM Started (Lifecycle Event)
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.725 254096 DEBUG nova.compute.manager [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.733 254096 DEBUG nova.virt.libvirt.driver [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.735 254096 INFO nova.virt.libvirt.driver [-] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Instance spawned successfully.
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.735 254096 DEBUG nova.virt.libvirt.driver [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.738 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.741 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.750 254096 DEBUG nova.virt.libvirt.driver [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.751 254096 DEBUG nova.virt.libvirt.driver [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.751 254096 DEBUG nova.virt.libvirt.driver [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.752 254096 DEBUG nova.virt.libvirt.driver [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.752 254096 DEBUG nova.virt.libvirt.driver [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.753 254096 DEBUG nova.virt.libvirt.driver [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.757 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.757 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090093.722729, f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.758 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] VM Paused (Lifecycle Event)
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.782 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.785 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090093.7321575, f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.785 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] VM Resumed (Lifecycle Event)
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.806 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.811 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.818 254096 INFO nova.compute.manager [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Took 7.85 seconds to spawn the instance on the hypervisor.
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.818 254096 DEBUG nova.compute.manager [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.839 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.877 254096 INFO nova.compute.manager [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Took 8.82 seconds to build instance.
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.885 254096 DEBUG nova.network.neutron [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Successfully created port: 3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 17:01:33 compute-0 nova_compute[254092]: 2025-11-25 17:01:33.891 254096 DEBUG oslo_concurrency.lockutils [None req-9156c4f0-47a7-4e25-bf42-5527f3b2d419 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.965s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:01:33 compute-0 podman[380624]: 2025-11-25 17:01:33.954408291 +0000 UTC m=+0.041618725 container create 352356d044e2c6b782a050833cce8d18529a1bd71d70042c1c70b3d48d73bc01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-131ae834-ee81-42ce-b61e-863b3a8d52e1, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 17:01:33 compute-0 systemd[1]: Started libpod-conmon-352356d044e2c6b782a050833cce8d18529a1bd71d70042c1c70b3d48d73bc01.scope.
Nov 25 17:01:34 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:01:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a30edfa0bf3043d83325a2d2e3ffee6672b0c56070c560af5182afdcb7482f9a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 17:01:34 compute-0 podman[380624]: 2025-11-25 17:01:33.933346537 +0000 UTC m=+0.020557001 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 17:01:34 compute-0 podman[380624]: 2025-11-25 17:01:34.037940155 +0000 UTC m=+0.125150619 container init 352356d044e2c6b782a050833cce8d18529a1bd71d70042c1c70b3d48d73bc01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-131ae834-ee81-42ce-b61e-863b3a8d52e1, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 17:01:34 compute-0 podman[380624]: 2025-11-25 17:01:34.043074225 +0000 UTC m=+0.130284659 container start 352356d044e2c6b782a050833cce8d18529a1bd71d70042c1c70b3d48d73bc01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-131ae834-ee81-42ce-b61e-863b3a8d52e1, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 25 17:01:34 compute-0 ceph-mon[74985]: pgmap v2340: 321 pgs: 321 active+clean; 167 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:01:34 compute-0 neutron-haproxy-ovnmeta-131ae834-ee81-42ce-b61e-863b3a8d52e1[380639]: [NOTICE]   (380643) : New worker (380645) forked
Nov 25 17:01:34 compute-0 neutron-haproxy-ovnmeta-131ae834-ee81-42ce-b61e-863b3a8d52e1[380639]: [NOTICE]   (380643) : Loading success.
Nov 25 17:01:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2341: 321 pgs: 321 active+clean; 167 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:01:34 compute-0 nova_compute[254092]: 2025-11-25 17:01:34.743 254096 DEBUG nova.network.neutron [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Successfully updated port: 3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:01:34 compute-0 nova_compute[254092]: 2025-11-25 17:01:34.764 254096 DEBUG oslo_concurrency.lockutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Acquiring lock "refresh_cache-1f0c80e2-19cd-43ba-881d-e24e5bcd62fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:01:34 compute-0 nova_compute[254092]: 2025-11-25 17:01:34.765 254096 DEBUG oslo_concurrency.lockutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Acquired lock "refresh_cache-1f0c80e2-19cd-43ba-881d-e24e5bcd62fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:01:34 compute-0 nova_compute[254092]: 2025-11-25 17:01:34.765 254096 DEBUG nova.network.neutron [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 17:01:34 compute-0 nova_compute[254092]: 2025-11-25 17:01:34.849 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:01:34 compute-0 nova_compute[254092]: 2025-11-25 17:01:34.889 254096 DEBUG nova.compute.manager [req-88d8f2e5-ee34-456f-9418-89b40ca40e49 req-d96f5352-829d-452c-a692-f714f23b2239 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Received event network-changed-3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:01:34 compute-0 nova_compute[254092]: 2025-11-25 17:01:34.889 254096 DEBUG nova.compute.manager [req-88d8f2e5-ee34-456f-9418-89b40ca40e49 req-d96f5352-829d-452c-a692-f714f23b2239 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Refreshing instance network info cache due to event network-changed-3751c8d6-0f1e-4902-8016-cf37bf3c1ad3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:01:34 compute-0 nova_compute[254092]: 2025-11-25 17:01:34.890 254096 DEBUG oslo_concurrency.lockutils [req-88d8f2e5-ee34-456f-9418-89b40ca40e49 req-d96f5352-829d-452c-a692-f714f23b2239 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-1f0c80e2-19cd-43ba-881d-e24e5bcd62fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:01:34 compute-0 nova_compute[254092]: 2025-11-25 17:01:34.965 254096 DEBUG nova.network.neutron [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:01:35 compute-0 nova_compute[254092]: 2025-11-25 17:01:35.134 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:01:35 compute-0 nova_compute[254092]: 2025-11-25 17:01:35.134 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:01:35 compute-0 nova_compute[254092]: 2025-11-25 17:01:35.789 254096 DEBUG nova.compute.manager [req-c8184f94-b1ac-428f-90dc-c5043aa75ab7 req-a357e044-7fcd-4d33-8114-7b38b4d82493 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Received event network-vif-plugged-9a960a19-c599-4217-b99c-ac16fe6384b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:01:35 compute-0 nova_compute[254092]: 2025-11-25 17:01:35.790 254096 DEBUG oslo_concurrency.lockutils [req-c8184f94-b1ac-428f-90dc-c5043aa75ab7 req-a357e044-7fcd-4d33-8114-7b38b4d82493 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:01:35 compute-0 nova_compute[254092]: 2025-11-25 17:01:35.790 254096 DEBUG oslo_concurrency.lockutils [req-c8184f94-b1ac-428f-90dc-c5043aa75ab7 req-a357e044-7fcd-4d33-8114-7b38b4d82493 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:01:35 compute-0 nova_compute[254092]: 2025-11-25 17:01:35.791 254096 DEBUG oslo_concurrency.lockutils [req-c8184f94-b1ac-428f-90dc-c5043aa75ab7 req-a357e044-7fcd-4d33-8114-7b38b4d82493 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:01:35 compute-0 nova_compute[254092]: 2025-11-25 17:01:35.791 254096 DEBUG nova.compute.manager [req-c8184f94-b1ac-428f-90dc-c5043aa75ab7 req-a357e044-7fcd-4d33-8114-7b38b4d82493 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] No waiting events found dispatching network-vif-plugged-9a960a19-c599-4217-b99c-ac16fe6384b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:01:35 compute-0 nova_compute[254092]: 2025-11-25 17:01:35.791 254096 WARNING nova.compute.manager [req-c8184f94-b1ac-428f-90dc-c5043aa75ab7 req-a357e044-7fcd-4d33-8114-7b38b4d82493 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Received unexpected event network-vif-plugged-9a960a19-c599-4217-b99c-ac16fe6384b1 for instance with vm_state active and task_state None.
Nov 25 17:01:36 compute-0 nova_compute[254092]: 2025-11-25 17:01:36.043 254096 DEBUG nova.network.neutron [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Updating instance_info_cache with network_info: [{"id": "3751c8d6-0f1e-4902-8016-cf37bf3c1ad3", "address": "fa:16:3e:cd:6a:fb", "network": {"id": "3b79c328-9376-4e36-9211-72ee228f98d6", "bridge": "br-int", "label": "tempest-network-smoke--1598826579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa394380a92d48188f2de86f1a100c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3751c8d6-0f", "ovs_interfaceid": "3751c8d6-0f1e-4902-8016-cf37bf3c1ad3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:01:36 compute-0 ceph-mon[74985]: pgmap v2341: 321 pgs: 321 active+clean; 167 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:01:36 compute-0 nova_compute[254092]: 2025-11-25 17:01:36.067 254096 DEBUG oslo_concurrency.lockutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Releasing lock "refresh_cache-1f0c80e2-19cd-43ba-881d-e24e5bcd62fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:01:36 compute-0 nova_compute[254092]: 2025-11-25 17:01:36.068 254096 DEBUG nova.compute.manager [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Instance network_info: |[{"id": "3751c8d6-0f1e-4902-8016-cf37bf3c1ad3", "address": "fa:16:3e:cd:6a:fb", "network": {"id": "3b79c328-9376-4e36-9211-72ee228f98d6", "bridge": "br-int", "label": "tempest-network-smoke--1598826579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa394380a92d48188f2de86f1a100c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3751c8d6-0f", "ovs_interfaceid": "3751c8d6-0f1e-4902-8016-cf37bf3c1ad3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 17:01:36 compute-0 nova_compute[254092]: 2025-11-25 17:01:36.069 254096 DEBUG oslo_concurrency.lockutils [req-88d8f2e5-ee34-456f-9418-89b40ca40e49 req-d96f5352-829d-452c-a692-f714f23b2239 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-1f0c80e2-19cd-43ba-881d-e24e5bcd62fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:01:36 compute-0 nova_compute[254092]: 2025-11-25 17:01:36.070 254096 DEBUG nova.network.neutron [req-88d8f2e5-ee34-456f-9418-89b40ca40e49 req-d96f5352-829d-452c-a692-f714f23b2239 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Refreshing network info cache for port 3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:01:36 compute-0 nova_compute[254092]: 2025-11-25 17:01:36.076 254096 DEBUG nova.virt.libvirt.driver [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Start _get_guest_xml network_info=[{"id": "3751c8d6-0f1e-4902-8016-cf37bf3c1ad3", "address": "fa:16:3e:cd:6a:fb", "network": {"id": "3b79c328-9376-4e36-9211-72ee228f98d6", "bridge": "br-int", "label": "tempest-network-smoke--1598826579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa394380a92d48188f2de86f1a100c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3751c8d6-0f", "ovs_interfaceid": "3751c8d6-0f1e-4902-8016-cf37bf3c1ad3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 17:01:36 compute-0 nova_compute[254092]: 2025-11-25 17:01:36.084 254096 WARNING nova.virt.libvirt.driver [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:01:36 compute-0 nova_compute[254092]: 2025-11-25 17:01:36.095 254096 DEBUG nova.virt.libvirt.host [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 17:01:36 compute-0 nova_compute[254092]: 2025-11-25 17:01:36.096 254096 DEBUG nova.virt.libvirt.host [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 17:01:36 compute-0 nova_compute[254092]: 2025-11-25 17:01:36.100 254096 DEBUG nova.virt.libvirt.host [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 17:01:36 compute-0 nova_compute[254092]: 2025-11-25 17:01:36.101 254096 DEBUG nova.virt.libvirt.host [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 17:01:36 compute-0 nova_compute[254092]: 2025-11-25 17:01:36.102 254096 DEBUG nova.virt.libvirt.driver [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 17:01:36 compute-0 nova_compute[254092]: 2025-11-25 17:01:36.102 254096 DEBUG nova.virt.hardware [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 17:01:36 compute-0 nova_compute[254092]: 2025-11-25 17:01:36.102 254096 DEBUG nova.virt.hardware [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 17:01:36 compute-0 nova_compute[254092]: 2025-11-25 17:01:36.103 254096 DEBUG nova.virt.hardware [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 17:01:36 compute-0 nova_compute[254092]: 2025-11-25 17:01:36.103 254096 DEBUG nova.virt.hardware [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 17:01:36 compute-0 nova_compute[254092]: 2025-11-25 17:01:36.103 254096 DEBUG nova.virt.hardware [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 17:01:36 compute-0 nova_compute[254092]: 2025-11-25 17:01:36.104 254096 DEBUG nova.virt.hardware [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 17:01:36 compute-0 nova_compute[254092]: 2025-11-25 17:01:36.104 254096 DEBUG nova.virt.hardware [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 17:01:36 compute-0 nova_compute[254092]: 2025-11-25 17:01:36.105 254096 DEBUG nova.virt.hardware [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 17:01:36 compute-0 nova_compute[254092]: 2025-11-25 17:01:36.105 254096 DEBUG nova.virt.hardware [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 17:01:36 compute-0 nova_compute[254092]: 2025-11-25 17:01:36.105 254096 DEBUG nova.virt.hardware [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 17:01:36 compute-0 nova_compute[254092]: 2025-11-25 17:01:36.105 254096 DEBUG nova.virt.hardware [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 17:01:36 compute-0 nova_compute[254092]: 2025-11-25 17:01:36.109 254096 DEBUG oslo_concurrency.processutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:01:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2342: 321 pgs: 321 active+clean; 213 MiB data, 878 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 125 op/s
Nov 25 17:01:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:01:36 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4260906191' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:01:36 compute-0 nova_compute[254092]: 2025-11-25 17:01:36.538 254096 DEBUG oslo_concurrency.processutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:01:36 compute-0 nova_compute[254092]: 2025-11-25 17:01:36.571 254096 DEBUG nova.storage.rbd_utils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] rbd image 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:01:36 compute-0 nova_compute[254092]: 2025-11-25 17:01:36.576 254096 DEBUG oslo_concurrency.processutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:01:36 compute-0 nova_compute[254092]: 2025-11-25 17:01:36.605 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:01:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:01:37 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:01:37 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3082642859' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.049 254096 DEBUG oslo_concurrency.processutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.052 254096 DEBUG nova.virt.libvirt.vif [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:01:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-877248969-access_point-1210434942',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-877248969-access_point-1210434942',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-877248969-acc',id=115,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNYG6882dVO70OVi4d9NVrB3PaeuhIrFZ+oR1NKshvHYJDUOm1rbaI60huuXoUEKrmzCPg+QgDBxi0uURLyDj9uJZlfeSkkPsCqzCs3wQ3F9X3LJ3PkXg4AAZvavey5RFw==',key_name='tempest-TestSecurityGroupsBasicOps-143893991',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aa394380a92d48188f2de86f1a100c08',ramdisk_id='',reservation_id='r-2k83oce0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-877248969',owner_user_name='tempest-TestSecurityGroupsBasicOps-877248969-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:01:32Z,user_data=None,user_id='baa046e735b94aba93374dff061b9e77',uuid=1f0c80e2-19cd-43ba-881d-e24e5bcd62fb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3751c8d6-0f1e-4902-8016-cf37bf3c1ad3", "address": "fa:16:3e:cd:6a:fb", "network": {"id": "3b79c328-9376-4e36-9211-72ee228f98d6", "bridge": "br-int", "label": "tempest-network-smoke--1598826579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "aa394380a92d48188f2de86f1a100c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3751c8d6-0f", "ovs_interfaceid": "3751c8d6-0f1e-4902-8016-cf37bf3c1ad3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.052 254096 DEBUG nova.network.os_vif_util [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Converting VIF {"id": "3751c8d6-0f1e-4902-8016-cf37bf3c1ad3", "address": "fa:16:3e:cd:6a:fb", "network": {"id": "3b79c328-9376-4e36-9211-72ee228f98d6", "bridge": "br-int", "label": "tempest-network-smoke--1598826579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa394380a92d48188f2de86f1a100c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3751c8d6-0f", "ovs_interfaceid": "3751c8d6-0f1e-4902-8016-cf37bf3c1ad3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.054 254096 DEBUG nova.network.os_vif_util [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:6a:fb,bridge_name='br-int',has_traffic_filtering=True,id=3751c8d6-0f1e-4902-8016-cf37bf3c1ad3,network=Network(3b79c328-9376-4e36-9211-72ee228f98d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3751c8d6-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.056 254096 DEBUG nova.objects.instance [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:01:37 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4260906191' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:01:37 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3082642859' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.073 254096 DEBUG nova.virt.libvirt.driver [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] End _get_guest_xml xml=<domain type="kvm">
Nov 25 17:01:37 compute-0 nova_compute[254092]:   <uuid>1f0c80e2-19cd-43ba-881d-e24e5bcd62fb</uuid>
Nov 25 17:01:37 compute-0 nova_compute[254092]:   <name>instance-00000073</name>
Nov 25 17:01:37 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 17:01:37 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 17:01:37 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:01:37 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-877248969-access_point-1210434942</nova:name>
Nov 25 17:01:37 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 17:01:36</nova:creationTime>
Nov 25 17:01:37 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 17:01:37 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 17:01:37 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 17:01:37 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 17:01:37 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:01:37 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 17:01:37 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 17:01:37 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 17:01:37 compute-0 nova_compute[254092]:         <nova:user uuid="baa046e735b94aba93374dff061b9e77">tempest-TestSecurityGroupsBasicOps-877248969-project-member</nova:user>
Nov 25 17:01:37 compute-0 nova_compute[254092]:         <nova:project uuid="aa394380a92d48188f2de86f1a100c08">tempest-TestSecurityGroupsBasicOps-877248969</nova:project>
Nov 25 17:01:37 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 17:01:37 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 17:01:37 compute-0 nova_compute[254092]:         <nova:port uuid="3751c8d6-0f1e-4902-8016-cf37bf3c1ad3">
Nov 25 17:01:37 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 17:01:37 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 17:01:37 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:01:37 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 17:01:37 compute-0 nova_compute[254092]:     <system>
Nov 25 17:01:37 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 17:01:37 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 17:01:37 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:01:37 compute-0 nova_compute[254092]:       <entry name="serial">1f0c80e2-19cd-43ba-881d-e24e5bcd62fb</entry>
Nov 25 17:01:37 compute-0 nova_compute[254092]:       <entry name="uuid">1f0c80e2-19cd-43ba-881d-e24e5bcd62fb</entry>
Nov 25 17:01:37 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     </system>
Nov 25 17:01:37 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:01:37 compute-0 nova_compute[254092]:   <os>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:   </os>
Nov 25 17:01:37 compute-0 nova_compute[254092]:   <features>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:   </features>
Nov 25 17:01:37 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 17:01:37 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:01:37 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 17:01:37 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:01:37 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 17:01:37 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/1f0c80e2-19cd-43ba-881d-e24e5bcd62fb_disk">
Nov 25 17:01:37 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:       </source>
Nov 25 17:01:37 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:01:37 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:01:37 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 17:01:37 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/1f0c80e2-19cd-43ba-881d-e24e5bcd62fb_disk.config">
Nov 25 17:01:37 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:       </source>
Nov 25 17:01:37 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:01:37 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:01:37 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 17:01:37 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:cd:6a:fb"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:       <target dev="tap3751c8d6-0f"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 17:01:37 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/1f0c80e2-19cd-43ba-881d-e24e5bcd62fb/console.log" append="off"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     <video>
Nov 25 17:01:37 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     </video>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 17:01:37 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 17:01:37 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 17:01:37 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:01:37 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:01:37 compute-0 nova_compute[254092]: </domain>
Nov 25 17:01:37 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.076 254096 DEBUG nova.compute.manager [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Preparing to wait for external event network-vif-plugged-3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.076 254096 DEBUG oslo_concurrency.lockutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Acquiring lock "1f0c80e2-19cd-43ba-881d-e24e5bcd62fb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.076 254096 DEBUG oslo_concurrency.lockutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Lock "1f0c80e2-19cd-43ba-881d-e24e5bcd62fb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.077 254096 DEBUG oslo_concurrency.lockutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Lock "1f0c80e2-19cd-43ba-881d-e24e5bcd62fb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.078 254096 DEBUG nova.virt.libvirt.vif [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:01:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-877248969-access_point-1210434942',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-877248969-access_point-1210434942',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-877248969-acc',id=115,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNYG6882dVO70OVi4d9NVrB3PaeuhIrFZ+oR1NKshvHYJDUOm1rbaI60huuXoUEKrmzCPg+QgDBxi0uURLyDj9uJZlfeSkkPsCqzCs3wQ3F9X3LJ3PkXg4AAZvavey5RFw==',key_name='tempest-TestSecurityGroupsBasicOps-143893991',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aa394380a92d48188f2de86f1a100c08',ramdisk_id='',reservation_id='r-2k83oce0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-877248969',owner_user_name='tempest-TestSecurityGroupsBasicOps-877248969-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:01:32Z,user_data=None,user_id='baa046e735b94aba93374dff061b9e77',uuid=1f0c80e2-19cd-43ba-881d-e24e5bcd62fb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3751c8d6-0f1e-4902-8016-cf37bf3c1ad3", "address": "fa:16:3e:cd:6a:fb", "network": {"id": "3b79c328-9376-4e36-9211-72ee228f98d6", "bridge": "br-int", "label": "tempest-network-smoke--1598826579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "aa394380a92d48188f2de86f1a100c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3751c8d6-0f", "ovs_interfaceid": "3751c8d6-0f1e-4902-8016-cf37bf3c1ad3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.078 254096 DEBUG nova.network.os_vif_util [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Converting VIF {"id": "3751c8d6-0f1e-4902-8016-cf37bf3c1ad3", "address": "fa:16:3e:cd:6a:fb", "network": {"id": "3b79c328-9376-4e36-9211-72ee228f98d6", "bridge": "br-int", "label": "tempest-network-smoke--1598826579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa394380a92d48188f2de86f1a100c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3751c8d6-0f", "ovs_interfaceid": "3751c8d6-0f1e-4902-8016-cf37bf3c1ad3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.079 254096 DEBUG nova.network.os_vif_util [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:6a:fb,bridge_name='br-int',has_traffic_filtering=True,id=3751c8d6-0f1e-4902-8016-cf37bf3c1ad3,network=Network(3b79c328-9376-4e36-9211-72ee228f98d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3751c8d6-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.079 254096 DEBUG os_vif [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:6a:fb,bridge_name='br-int',has_traffic_filtering=True,id=3751c8d6-0f1e-4902-8016-cf37bf3c1ad3,network=Network(3b79c328-9376-4e36-9211-72ee228f98d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3751c8d6-0f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.080 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.081 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.081 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.085 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.085 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3751c8d6-0f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.086 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3751c8d6-0f, col_values=(('external_ids', {'iface-id': '3751c8d6-0f1e-4902-8016-cf37bf3c1ad3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cd:6a:fb', 'vm-uuid': '1f0c80e2-19cd-43ba-881d-e24e5bcd62fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:01:37 compute-0 NetworkManager[48891]: <info>  [1764090097.1262] manager: (tap3751c8d6-0f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/486)
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.125 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.129 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.132 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.134 254096 INFO os_vif [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:6a:fb,bridge_name='br-int',has_traffic_filtering=True,id=3751c8d6-0f1e-4902-8016-cf37bf3c1ad3,network=Network(3b79c328-9376-4e36-9211-72ee228f98d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3751c8d6-0f')
Nov 25 17:01:37 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 17:01:37 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.1 total, 600.0 interval
                                           Cumulative writes: 10K writes, 48K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
                                           Cumulative WAL: 10K writes, 10K syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1440 writes, 7257 keys, 1440 commit groups, 1.0 writes per commit group, ingest: 9.18 MB, 0.02 MB/s
                                           Interval WAL: 1440 writes, 1440 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     27.1      2.07              0.18        32    0.065       0      0       0.0       0.0
                                             L6      1/0    7.81 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.4    105.9     88.4      2.76              0.70        31    0.089    181K    17K       0.0       0.0
                                            Sum      1/0    7.81 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.4     60.5     62.1      4.83              0.88        63    0.077    181K    17K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.4     83.6     81.8      0.84              0.19        14    0.060     51K   4061       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0    105.9     88.4      2.76              0.70        31    0.089    181K    17K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     27.7      2.03              0.18        31    0.065       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.1      0.05              0.00         1    0.047       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.055, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.29 GB write, 0.07 MB/s write, 0.29 GB read, 0.07 MB/s read, 4.8 seconds
                                           Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.8 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563cc83831f0#2 capacity: 304.00 MB usage: 33.75 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.00023 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2219,32.43 MB,10.6665%) FilterBlock(64,511.73 KB,0.164388%) IndexBlock(64,848.36 KB,0.272525%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.192 254096 DEBUG nova.virt.libvirt.driver [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.192 254096 DEBUG nova.virt.libvirt.driver [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.193 254096 DEBUG nova.virt.libvirt.driver [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] No VIF found with MAC fa:16:3e:cd:6a:fb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.193 254096 INFO nova.virt.libvirt.driver [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Using config drive
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.215 254096 DEBUG nova.storage.rbd_utils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] rbd image 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.220 254096 DEBUG nova.network.neutron [req-88d8f2e5-ee34-456f-9418-89b40ca40e49 req-d96f5352-829d-452c-a692-f714f23b2239 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Updated VIF entry in instance network info cache for port 3751c8d6-0f1e-4902-8016-cf37bf3c1ad3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.221 254096 DEBUG nova.network.neutron [req-88d8f2e5-ee34-456f-9418-89b40ca40e49 req-d96f5352-829d-452c-a692-f714f23b2239 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Updating instance_info_cache with network_info: [{"id": "3751c8d6-0f1e-4902-8016-cf37bf3c1ad3", "address": "fa:16:3e:cd:6a:fb", "network": {"id": "3b79c328-9376-4e36-9211-72ee228f98d6", "bridge": "br-int", "label": "tempest-network-smoke--1598826579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa394380a92d48188f2de86f1a100c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3751c8d6-0f", "ovs_interfaceid": "3751c8d6-0f1e-4902-8016-cf37bf3c1ad3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.247 254096 DEBUG oslo_concurrency.lockutils [req-88d8f2e5-ee34-456f-9418-89b40ca40e49 req-d96f5352-829d-452c-a692-f714f23b2239 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-1f0c80e2-19cd-43ba-881d-e24e5bcd62fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.512 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.561 254096 INFO nova.virt.libvirt.driver [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Creating config drive at /var/lib/nova/instances/1f0c80e2-19cd-43ba-881d-e24e5bcd62fb/disk.config
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.566 254096 DEBUG oslo_concurrency.processutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1f0c80e2-19cd-43ba-881d-e24e5bcd62fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz4a2v30t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.706 254096 DEBUG oslo_concurrency.processutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1f0c80e2-19cd-43ba-881d-e24e5bcd62fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz4a2v30t" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.737 254096 DEBUG nova.storage.rbd_utils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] rbd image 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.741 254096 DEBUG oslo_concurrency.processutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1f0c80e2-19cd-43ba-881d-e24e5bcd62fb/disk.config 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.885 254096 DEBUG nova.compute.manager [req-a9e6b7e6-ee1e-4767-8cbd-546171304528 req-9fa4264b-1f1a-4c65-9136-52df5c2098c1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Received event network-changed-9a960a19-c599-4217-b99c-ac16fe6384b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.886 254096 DEBUG nova.compute.manager [req-a9e6b7e6-ee1e-4767-8cbd-546171304528 req-9fa4264b-1f1a-4c65-9136-52df5c2098c1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Refreshing instance network info cache due to event network-changed-9a960a19-c599-4217-b99c-ac16fe6384b1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.887 254096 DEBUG oslo_concurrency.lockutils [req-a9e6b7e6-ee1e-4767-8cbd-546171304528 req-9fa4264b-1f1a-4c65-9136-52df5c2098c1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.887 254096 DEBUG oslo_concurrency.lockutils [req-a9e6b7e6-ee1e-4767-8cbd-546171304528 req-9fa4264b-1f1a-4c65-9136-52df5c2098c1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.887 254096 DEBUG nova.network.neutron [req-a9e6b7e6-ee1e-4767-8cbd-546171304528 req-9fa4264b-1f1a-4c65-9136-52df5c2098c1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Refreshing network info cache for port 9a960a19-c599-4217-b99c-ac16fe6384b1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.905 254096 DEBUG oslo_concurrency.processutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1f0c80e2-19cd-43ba-881d-e24e5bcd62fb/disk.config 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.906 254096 INFO nova.virt.libvirt.driver [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Deleting local config drive /var/lib/nova/instances/1f0c80e2-19cd-43ba-881d-e24e5bcd62fb/disk.config because it was imported into RBD.
Nov 25 17:01:37 compute-0 kernel: tap3751c8d6-0f: entered promiscuous mode
Nov 25 17:01:37 compute-0 NetworkManager[48891]: <info>  [1764090097.9649] manager: (tap3751c8d6-0f): new Tun device (/org/freedesktop/NetworkManager/Devices/487)
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.964 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.969 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:01:37 compute-0 ovn_controller[153477]: 2025-11-25T17:01:37Z|01176|binding|INFO|Claiming lport 3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 for this chassis.
Nov 25 17:01:37 compute-0 ovn_controller[153477]: 2025-11-25T17:01:37Z|01177|binding|INFO|3751c8d6-0f1e-4902-8016-cf37bf3c1ad3: Claiming fa:16:3e:cd:6a:fb 10.100.0.7
Nov 25 17:01:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:37.978 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:6a:fb 10.100.0.7'], port_security=['fa:16:3e:cd:6a:fb 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '1f0c80e2-19cd-43ba-881d-e24e5bcd62fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3b79c328-9376-4e36-9211-72ee228f98d6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aa394380a92d48188f2de86f1a100c08', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1d20ac94-0311-45d3-bbc9-0b5ca7a32bc8 4b68beb0-85ec-4a9a-a335-1e3ed4aadbc5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ffca90a1-5def-405b-be68-948ef468bd95, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=3751c8d6-0f1e-4902-8016-cf37bf3c1ad3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:01:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:37.980 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 in datapath 3b79c328-9376-4e36-9211-72ee228f98d6 bound to our chassis
Nov 25 17:01:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:37.982 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3b79c328-9376-4e36-9211-72ee228f98d6
Nov 25 17:01:37 compute-0 ovn_controller[153477]: 2025-11-25T17:01:37Z|01178|binding|INFO|Setting lport 3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 ovn-installed in OVS
Nov 25 17:01:37 compute-0 ovn_controller[153477]: 2025-11-25T17:01:37Z|01179|binding|INFO|Setting lport 3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 up in Southbound
Nov 25 17:01:37 compute-0 nova_compute[254092]: 2025-11-25 17:01:37.986 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:01:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:37.995 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9b89df06-6d57-4edb-8929-c579392b1dda]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:01:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:37.996 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3b79c328-91 in ovnmeta-3b79c328-9376-4e36-9211-72ee228f98d6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:38.006 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3b79c328-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:38.006 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4c473830-0007-4093-af36-51fd0ed7c91e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:01:38 compute-0 systemd-machined[216343]: New machine qemu-147-instance-00000073.
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:38.007 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6577c232-b68d-400b-9396-f5dd7af4d4e8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:01:38 compute-0 systemd[1]: Started Virtual Machine qemu-147-instance-00000073.
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:38.030 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[03fdd1db-3e71-4c1d-b916-240de1a7a461]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:01:38 compute-0 systemd-udevd[380794]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:38.054 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[00f76205-f34a-4c47-aa68-1bdfd2874a02]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:01:38 compute-0 NetworkManager[48891]: <info>  [1764090098.0675] device (tap3751c8d6-0f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:01:38 compute-0 NetworkManager[48891]: <info>  [1764090098.0689] device (tap3751c8d6-0f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:01:38 compute-0 ceph-mon[74985]: pgmap v2342: 321 pgs: 321 active+clean; 213 MiB data, 878 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 125 op/s
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:38.109 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a0d61c2a-802f-454d-8f3b-2305de927054]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:38.120 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[414520f5-c6ac-45d1-8de6-98bc2d0e96e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:01:38 compute-0 NetworkManager[48891]: <info>  [1764090098.1209] manager: (tap3b79c328-90): new Veth device (/org/freedesktop/NetworkManager/Devices/488)
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:38.164 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[7de674da-6378-4605-9162-200d00929fbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:38.167 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[bbf8011f-7a6c-4e60-9ccc-538be3083147]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:01:38 compute-0 NetworkManager[48891]: <info>  [1764090098.2003] device (tap3b79c328-90): carrier: link connected
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:38.207 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ca1127e4-f114-47d9-93d9-d2f4330f969c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:38.226 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7c864161-736f-42ff-bc28-e760d1570448]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3b79c328-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:83:5c:7f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 348], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655579, 'reachable_time': 29590, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 380822, 'error': None, 'target': 'ovnmeta-3b79c328-9376-4e36-9211-72ee228f98d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:38.244 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[eb1eb271-9cea-48fb-8e2a-d18f810640b8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe83:5c7f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 655579, 'tstamp': 655579}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 380823, 'error': None, 'target': 'ovnmeta-3b79c328-9376-4e36-9211-72ee228f98d6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:38.261 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7a13c822-d9b2-4281-b375-dfafd2b3a6a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3b79c328-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:83:5c:7f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 348], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655579, 'reachable_time': 29590, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 380824, 'error': None, 'target': 'ovnmeta-3b79c328-9376-4e36-9211-72ee228f98d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:01:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2343: 321 pgs: 321 active+clean; 213 MiB data, 878 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 125 op/s
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:38.305 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e4ca3dc0-27b9-4539-b807-fabdcc7554a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:38.379 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8289deae-1fd6-4d08-8579-a821df2ff598]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:38.380 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3b79c328-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:38.381 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:38.381 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3b79c328-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:01:38 compute-0 kernel: tap3b79c328-90: entered promiscuous mode
Nov 25 17:01:38 compute-0 NetworkManager[48891]: <info>  [1764090098.3845] manager: (tap3b79c328-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/489)
Nov 25 17:01:38 compute-0 nova_compute[254092]: 2025-11-25 17:01:38.384 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:38.388 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3b79c328-90, col_values=(('external_ids', {'iface-id': 'a599bfa4-c512-4c62-b0a4-4d2ab863ab24'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:01:38 compute-0 ovn_controller[153477]: 2025-11-25T17:01:38Z|01180|binding|INFO|Releasing lport a599bfa4-c512-4c62-b0a4-4d2ab863ab24 from this chassis (sb_readonly=0)
Nov 25 17:01:38 compute-0 nova_compute[254092]: 2025-11-25 17:01:38.389 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:38.392 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3b79c328-9376-4e36-9211-72ee228f98d6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3b79c328-9376-4e36-9211-72ee228f98d6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:38.393 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[20bf0343-0e88-47d9-b347-b5d1b20259a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:38.394 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]: global
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-3b79c328-9376-4e36-9211-72ee228f98d6
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/3b79c328-9376-4e36-9211-72ee228f98d6.pid.haproxy
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 3b79c328-9376-4e36-9211-72ee228f98d6
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 17:01:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:01:38.395 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3b79c328-9376-4e36-9211-72ee228f98d6', 'env', 'PROCESS_TAG=haproxy-3b79c328-9376-4e36-9211-72ee228f98d6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3b79c328-9376-4e36-9211-72ee228f98d6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 17:01:38 compute-0 nova_compute[254092]: 2025-11-25 17:01:38.402 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:01:38 compute-0 nova_compute[254092]: 2025-11-25 17:01:38.554 254096 DEBUG nova.compute.manager [req-bf6ce311-1b10-4926-a712-817bbbf99bd3 req-2939f881-b4fa-45bb-8dda-fa38c5ce69a7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Received event network-vif-plugged-3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:01:38 compute-0 nova_compute[254092]: 2025-11-25 17:01:38.555 254096 DEBUG oslo_concurrency.lockutils [req-bf6ce311-1b10-4926-a712-817bbbf99bd3 req-2939f881-b4fa-45bb-8dda-fa38c5ce69a7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1f0c80e2-19cd-43ba-881d-e24e5bcd62fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:01:38 compute-0 nova_compute[254092]: 2025-11-25 17:01:38.555 254096 DEBUG oslo_concurrency.lockutils [req-bf6ce311-1b10-4926-a712-817bbbf99bd3 req-2939f881-b4fa-45bb-8dda-fa38c5ce69a7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1f0c80e2-19cd-43ba-881d-e24e5bcd62fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:01:38 compute-0 nova_compute[254092]: 2025-11-25 17:01:38.555 254096 DEBUG oslo_concurrency.lockutils [req-bf6ce311-1b10-4926-a712-817bbbf99bd3 req-2939f881-b4fa-45bb-8dda-fa38c5ce69a7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1f0c80e2-19cd-43ba-881d-e24e5bcd62fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:01:38 compute-0 nova_compute[254092]: 2025-11-25 17:01:38.556 254096 DEBUG nova.compute.manager [req-bf6ce311-1b10-4926-a712-817bbbf99bd3 req-2939f881-b4fa-45bb-8dda-fa38c5ce69a7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Processing event network-vif-plugged-3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 17:01:38 compute-0 podman[380881]: 2025-11-25 17:01:38.743728751 +0000 UTC m=+0.045922451 container create 00a20a85ab7e65128e1d1ac7bee10650c586484d09886324dcbaf6d802683d27 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3b79c328-9376-4e36-9211-72ee228f98d6, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 17:01:38 compute-0 systemd[1]: Started libpod-conmon-00a20a85ab7e65128e1d1ac7bee10650c586484d09886324dcbaf6d802683d27.scope.
Nov 25 17:01:38 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:01:38 compute-0 nova_compute[254092]: 2025-11-25 17:01:38.809 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090098.8088908, 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:01:38 compute-0 nova_compute[254092]: 2025-11-25 17:01:38.809 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] VM Started (Lifecycle Event)
Nov 25 17:01:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60bb6ae46dd6df8dbd8f0da40a4c941046d2ca1eb5dcd17ed4f8729c326d3da3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 17:01:38 compute-0 nova_compute[254092]: 2025-11-25 17:01:38.811 254096 DEBUG nova.compute.manager [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 17:01:38 compute-0 nova_compute[254092]: 2025-11-25 17:01:38.815 254096 DEBUG nova.virt.libvirt.driver [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 17:01:38 compute-0 podman[380881]: 2025-11-25 17:01:38.720993322 +0000 UTC m=+0.023187052 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 17:01:38 compute-0 nova_compute[254092]: 2025-11-25 17:01:38.818 254096 INFO nova.virt.libvirt.driver [-] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Instance spawned successfully.
Nov 25 17:01:38 compute-0 nova_compute[254092]: 2025-11-25 17:01:38.818 254096 DEBUG nova.virt.libvirt.driver [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 17:01:38 compute-0 nova_compute[254092]: 2025-11-25 17:01:38.834 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:01:38 compute-0 podman[380881]: 2025-11-25 17:01:38.834544755 +0000 UTC m=+0.136738465 container init 00a20a85ab7e65128e1d1ac7bee10650c586484d09886324dcbaf6d802683d27 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3b79c328-9376-4e36-9211-72ee228f98d6, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 17:01:38 compute-0 nova_compute[254092]: 2025-11-25 17:01:38.839 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:01:38 compute-0 podman[380881]: 2025-11-25 17:01:38.840000824 +0000 UTC m=+0.142194524 container start 00a20a85ab7e65128e1d1ac7bee10650c586484d09886324dcbaf6d802683d27 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3b79c328-9376-4e36-9211-72ee228f98d6, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:01:38 compute-0 nova_compute[254092]: 2025-11-25 17:01:38.842 254096 DEBUG nova.virt.libvirt.driver [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:01:38 compute-0 nova_compute[254092]: 2025-11-25 17:01:38.842 254096 DEBUG nova.virt.libvirt.driver [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:01:38 compute-0 nova_compute[254092]: 2025-11-25 17:01:38.843 254096 DEBUG nova.virt.libvirt.driver [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:01:38 compute-0 nova_compute[254092]: 2025-11-25 17:01:38.843 254096 DEBUG nova.virt.libvirt.driver [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:01:38 compute-0 nova_compute[254092]: 2025-11-25 17:01:38.844 254096 DEBUG nova.virt.libvirt.driver [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:01:38 compute-0 nova_compute[254092]: 2025-11-25 17:01:38.844 254096 DEBUG nova.virt.libvirt.driver [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:01:38 compute-0 neutron-haproxy-ovnmeta-3b79c328-9376-4e36-9211-72ee228f98d6[380911]: [NOTICE]   (380915) : New worker (380917) forked
Nov 25 17:01:38 compute-0 neutron-haproxy-ovnmeta-3b79c328-9376-4e36-9211-72ee228f98d6[380911]: [NOTICE]   (380915) : Loading success.
Nov 25 17:01:38 compute-0 nova_compute[254092]: 2025-11-25 17:01:38.875 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:01:38 compute-0 nova_compute[254092]: 2025-11-25 17:01:38.875 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090098.809031, 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:01:38 compute-0 nova_compute[254092]: 2025-11-25 17:01:38.876 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] VM Paused (Lifecycle Event)
Nov 25 17:01:38 compute-0 nova_compute[254092]: 2025-11-25 17:01:38.894 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:01:38 compute-0 nova_compute[254092]: 2025-11-25 17:01:38.897 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090098.8145552, 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:01:38 compute-0 nova_compute[254092]: 2025-11-25 17:01:38.897 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] VM Resumed (Lifecycle Event)
Nov 25 17:01:38 compute-0 nova_compute[254092]: 2025-11-25 17:01:38.916 254096 INFO nova.compute.manager [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Took 6.54 seconds to spawn the instance on the hypervisor.
Nov 25 17:01:38 compute-0 nova_compute[254092]: 2025-11-25 17:01:38.917 254096 DEBUG nova.compute.manager [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:01:38 compute-0 nova_compute[254092]: 2025-11-25 17:01:38.918 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:01:38 compute-0 nova_compute[254092]: 2025-11-25 17:01:38.923 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:01:38 compute-0 nova_compute[254092]: 2025-11-25 17:01:38.947 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:01:38 compute-0 nova_compute[254092]: 2025-11-25 17:01:38.973 254096 INFO nova.compute.manager [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Took 7.61 seconds to build instance.
Nov 25 17:01:38 compute-0 nova_compute[254092]: 2025-11-25 17:01:38.986 254096 DEBUG oslo_concurrency.lockutils [None req-1eb760ba-3a0d-4538-902e-9115588e0bd0 baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Lock "1f0c80e2-19cd-43ba-881d-e24e5bcd62fb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:01:39 compute-0 nova_compute[254092]: 2025-11-25 17:01:39.285 254096 DEBUG nova.network.neutron [req-a9e6b7e6-ee1e-4767-8cbd-546171304528 req-9fa4264b-1f1a-4c65-9136-52df5c2098c1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Updated VIF entry in instance network info cache for port 9a960a19-c599-4217-b99c-ac16fe6384b1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:01:39 compute-0 nova_compute[254092]: 2025-11-25 17:01:39.286 254096 DEBUG nova.network.neutron [req-a9e6b7e6-ee1e-4767-8cbd-546171304528 req-9fa4264b-1f1a-4c65-9136-52df5c2098c1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Updating instance_info_cache with network_info: [{"id": "9a960a19-c599-4217-b99c-ac16fe6384b1", "address": "fa:16:3e:a8:c9:7b", "network": {"id": "131ae834-ee81-42ce-b61e-863b3a8d52e1", "bridge": "br-int", "label": "tempest-network-smoke--413727726", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a960a19-c5", "ovs_interfaceid": "9a960a19-c599-4217-b99c-ac16fe6384b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:01:39 compute-0 nova_compute[254092]: 2025-11-25 17:01:39.399 254096 DEBUG oslo_concurrency.lockutils [req-a9e6b7e6-ee1e-4767-8cbd-546171304528 req-9fa4264b-1f1a-4c65-9136-52df5c2098c1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:01:39 compute-0 podman[380927]: 2025-11-25 17:01:39.660050428 +0000 UTC m=+0.078880760 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 25 17:01:39 compute-0 podman[380926]: 2025-11-25 17:01:39.69206791 +0000 UTC m=+0.111154778 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 25 17:01:39 compute-0 podman[380928]: 2025-11-25 17:01:39.713610746 +0000 UTC m=+0.128984314 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 17:01:39 compute-0 nova_compute[254092]: 2025-11-25 17:01:39.853 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:01:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:01:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:01:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:01:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:01:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:01:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:01:40 compute-0 ceph-mon[74985]: pgmap v2343: 321 pgs: 321 active+clean; 213 MiB data, 878 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 125 op/s
Nov 25 17:01:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:01:40
Nov 25 17:01:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:01:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:01:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['.mgr', 'backups', 'volumes', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', '.rgw.root', 'default.rgw.control', 'default.rgw.log', 'images', 'cephfs.cephfs.data']
Nov 25 17:01:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:01:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2344: 321 pgs: 321 active+clean; 214 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.6 MiB/s wr, 170 op/s
Nov 25 17:01:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:01:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:01:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:01:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:01:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:01:40 compute-0 nova_compute[254092]: 2025-11-25 17:01:40.507 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:01:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:01:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:01:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:01:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:01:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:01:40 compute-0 nova_compute[254092]: 2025-11-25 17:01:40.634 254096 DEBUG nova.compute.manager [req-1ecf9a97-33d1-4e7e-bf78-c884f8348f4a req-c5e77572-3473-4e34-9461-bc8aec1e7f75 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Received event network-vif-plugged-3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:01:40 compute-0 nova_compute[254092]: 2025-11-25 17:01:40.635 254096 DEBUG oslo_concurrency.lockutils [req-1ecf9a97-33d1-4e7e-bf78-c884f8348f4a req-c5e77572-3473-4e34-9461-bc8aec1e7f75 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1f0c80e2-19cd-43ba-881d-e24e5bcd62fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:01:40 compute-0 nova_compute[254092]: 2025-11-25 17:01:40.635 254096 DEBUG oslo_concurrency.lockutils [req-1ecf9a97-33d1-4e7e-bf78-c884f8348f4a req-c5e77572-3473-4e34-9461-bc8aec1e7f75 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1f0c80e2-19cd-43ba-881d-e24e5bcd62fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:01:40 compute-0 nova_compute[254092]: 2025-11-25 17:01:40.635 254096 DEBUG oslo_concurrency.lockutils [req-1ecf9a97-33d1-4e7e-bf78-c884f8348f4a req-c5e77572-3473-4e34-9461-bc8aec1e7f75 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1f0c80e2-19cd-43ba-881d-e24e5bcd62fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:01:40 compute-0 nova_compute[254092]: 2025-11-25 17:01:40.635 254096 DEBUG nova.compute.manager [req-1ecf9a97-33d1-4e7e-bf78-c884f8348f4a req-c5e77572-3473-4e34-9461-bc8aec1e7f75 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] No waiting events found dispatching network-vif-plugged-3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:01:40 compute-0 nova_compute[254092]: 2025-11-25 17:01:40.636 254096 WARNING nova.compute.manager [req-1ecf9a97-33d1-4e7e-bf78-c884f8348f4a req-c5e77572-3473-4e34-9461-bc8aec1e7f75 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Received unexpected event network-vif-plugged-3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 for instance with vm_state active and task_state None.
Nov 25 17:01:41 compute-0 ceph-mon[74985]: pgmap v2344: 321 pgs: 321 active+clean; 214 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.6 MiB/s wr, 170 op/s
Nov 25 17:01:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:01:42 compute-0 nova_compute[254092]: 2025-11-25 17:01:42.165 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:01:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2345: 321 pgs: 321 active+clean; 214 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 2.4 MiB/s wr, 164 op/s
Nov 25 17:01:43 compute-0 ceph-mon[74985]: pgmap v2345: 321 pgs: 321 active+clean; 214 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 2.4 MiB/s wr, 164 op/s
Nov 25 17:01:44 compute-0 nova_compute[254092]: 2025-11-25 17:01:44.134 254096 DEBUG nova.compute.manager [req-e6e1c3db-2c9a-4bca-94ab-9250a677733d req-d6c606eb-eb4e-4275-841c-a5284679d176 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Received event network-changed-3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:01:44 compute-0 nova_compute[254092]: 2025-11-25 17:01:44.135 254096 DEBUG nova.compute.manager [req-e6e1c3db-2c9a-4bca-94ab-9250a677733d req-d6c606eb-eb4e-4275-841c-a5284679d176 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Refreshing instance network info cache due to event network-changed-3751c8d6-0f1e-4902-8016-cf37bf3c1ad3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:01:44 compute-0 nova_compute[254092]: 2025-11-25 17:01:44.135 254096 DEBUG oslo_concurrency.lockutils [req-e6e1c3db-2c9a-4bca-94ab-9250a677733d req-d6c606eb-eb4e-4275-841c-a5284679d176 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-1f0c80e2-19cd-43ba-881d-e24e5bcd62fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:01:44 compute-0 nova_compute[254092]: 2025-11-25 17:01:44.135 254096 DEBUG oslo_concurrency.lockutils [req-e6e1c3db-2c9a-4bca-94ab-9250a677733d req-d6c606eb-eb4e-4275-841c-a5284679d176 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-1f0c80e2-19cd-43ba-881d-e24e5bcd62fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:01:44 compute-0 nova_compute[254092]: 2025-11-25 17:01:44.135 254096 DEBUG nova.network.neutron [req-e6e1c3db-2c9a-4bca-94ab-9250a677733d req-d6c606eb-eb4e-4275-841c-a5284679d176 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Refreshing network info cache for port 3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:01:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2346: 321 pgs: 321 active+clean; 214 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 161 op/s
Nov 25 17:01:44 compute-0 nova_compute[254092]: 2025-11-25 17:01:44.853 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:01:45 compute-0 ceph-mon[74985]: pgmap v2346: 321 pgs: 321 active+clean; 214 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 161 op/s
Nov 25 17:01:45 compute-0 nova_compute[254092]: 2025-11-25 17:01:45.718 254096 DEBUG nova.network.neutron [req-e6e1c3db-2c9a-4bca-94ab-9250a677733d req-d6c606eb-eb4e-4275-841c-a5284679d176 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Updated VIF entry in instance network info cache for port 3751c8d6-0f1e-4902-8016-cf37bf3c1ad3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:01:45 compute-0 nova_compute[254092]: 2025-11-25 17:01:45.719 254096 DEBUG nova.network.neutron [req-e6e1c3db-2c9a-4bca-94ab-9250a677733d req-d6c606eb-eb4e-4275-841c-a5284679d176 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Updating instance_info_cache with network_info: [{"id": "3751c8d6-0f1e-4902-8016-cf37bf3c1ad3", "address": "fa:16:3e:cd:6a:fb", "network": {"id": "3b79c328-9376-4e36-9211-72ee228f98d6", "bridge": "br-int", "label": "tempest-network-smoke--1598826579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa394380a92d48188f2de86f1a100c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3751c8d6-0f", "ovs_interfaceid": "3751c8d6-0f1e-4902-8016-cf37bf3c1ad3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:01:45 compute-0 nova_compute[254092]: 2025-11-25 17:01:45.891 254096 DEBUG oslo_concurrency.lockutils [req-e6e1c3db-2c9a-4bca-94ab-9250a677733d req-d6c606eb-eb4e-4275-841c-a5284679d176 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-1f0c80e2-19cd-43ba-881d-e24e5bcd62fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:01:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2347: 321 pgs: 321 active+clean; 214 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 177 op/s
Nov 25 17:01:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:01:47 compute-0 nova_compute[254092]: 2025-11-25 17:01:47.168 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:01:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2348: 321 pgs: 321 active+clean; 214 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 15 KiB/s wr, 79 op/s
Nov 25 17:01:49 compute-0 nova_compute[254092]: 2025-11-25 17:01:49.855 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:01:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2349: 321 pgs: 321 active+clean; 225 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 97 op/s
Nov 25 17:01:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:01:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:01:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:01:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:01:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016948831355407957 of space, bias 1.0, pg target 0.5084649406622387 quantized to 32 (current 32)
Nov 25 17:01:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:01:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:01:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:01:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:01:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:01:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 17:01:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:01:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:01:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:01:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:01:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:01:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:01:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:01:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:01:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:01:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:01:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:01:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:01:52 compute-0 nova_compute[254092]: 2025-11-25 17:01:52.209 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:01:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2350: 321 pgs: 321 active+clean; 226 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 955 KiB/s rd, 1.4 MiB/s wr, 56 op/s
Nov 25 17:01:53 compute-0 ceph-osd[88890]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_commit, latency = 7.032744408s
Nov 25 17:01:53 compute-0 ceph-osd[88890]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_sync, latency = 7.161718845s
Nov 25 17:01:53 compute-0 ceph-osd[88890]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.162920952s, txc = 0x563f6707ec00
Nov 25 17:01:53 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:01:54 compute-0 ceph-osd[89991]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 7.214835167s
Nov 25 17:01:54 compute-0 ceph-osd[89991]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 7.214835644s
Nov 25 17:01:54 compute-0 ceph-osd[89991]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.347607136s, txc = 0x5618dceb2000
Nov 25 17:01:54 compute-0 ceph-osd[89991]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.332983017s, txc = 0x5618dc2cc900
Nov 25 17:01:54 compute-0 ceph-mon[74985]: pgmap v2347: 321 pgs: 321 active+clean; 214 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 177 op/s
Nov 25 17:01:54 compute-0 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_flush, latency = 7.010043621s
Nov 25 17:01:54 compute-0 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_sync, latency = 7.254073143s
Nov 25 17:01:54 compute-0 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.254876614s, txc = 0x55750bf89b00
Nov 25 17:01:54 compute-0 ceph-osd[88890]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.675047874s, txc = 0x563f6643f200
Nov 25 17:01:54 compute-0 ceph-osd[88890]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.672909737s, txc = 0x563f67005b00
Nov 25 17:01:54 compute-0 ceph-osd[88890]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.672739983s, txc = 0x563f68024300
Nov 25 17:01:54 compute-0 ceph-osd[88890]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.672325134s, txc = 0x563f66436900
Nov 25 17:01:54 compute-0 ceph-osd[88890]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.672142982s, txc = 0x563f6707f500
Nov 25 17:01:54 compute-0 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.703565598s, txc = 0x55750ca00300
Nov 25 17:01:54 compute-0 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.703107357s, txc = 0x55750c880900
Nov 25 17:01:54 compute-0 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.702624798s, txc = 0x55750bff4300
Nov 25 17:01:54 compute-0 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.701981544s, txc = 0x55750bff5500
Nov 25 17:01:54 compute-0 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.701769829s, txc = 0x55750bf82600
Nov 25 17:01:54 compute-0 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.701525211s, txc = 0x55750cc8ec00
Nov 25 17:01:54 compute-0 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.700932980s, txc = 0x55750bff4900
Nov 25 17:01:54 compute-0 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.700676918s, txc = 0x55750bf81b00
Nov 25 17:01:54 compute-0 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.700468540s, txc = 0x55750bf5a300
Nov 25 17:01:54 compute-0 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.700205803s, txc = 0x55750c01ec00
Nov 25 17:01:54 compute-0 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.699466705s, txc = 0x55750bec2f00
Nov 25 17:01:54 compute-0 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.697698116s, txc = 0x55750c01e600
Nov 25 17:01:54 compute-0 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.696033478s, txc = 0x55750c01f800
Nov 25 17:01:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2351: 321 pgs: 321 active+clean; 226 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 455 KiB/s rd, 1.4 MiB/s wr, 38 op/s
Nov 25 17:01:54 compute-0 nova_compute[254092]: 2025-11-25 17:01:54.857 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:01:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:01:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3354207974' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:01:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:01:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3354207974' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:01:55 compute-0 ceph-mon[74985]: pgmap v2348: 321 pgs: 321 active+clean; 214 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 15 KiB/s wr, 79 op/s
Nov 25 17:01:55 compute-0 ceph-mon[74985]: pgmap v2349: 321 pgs: 321 active+clean; 225 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 97 op/s
Nov 25 17:01:55 compute-0 ceph-mon[74985]: pgmap v2350: 321 pgs: 321 active+clean; 226 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 955 KiB/s rd, 1.4 MiB/s wr, 56 op/s
Nov 25 17:01:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2352: 321 pgs: 321 active+clean; 239 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 474 KiB/s rd, 2.6 MiB/s wr, 53 op/s
Nov 25 17:01:56 compute-0 ceph-mon[74985]: pgmap v2351: 321 pgs: 321 active+clean; 226 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 455 KiB/s rd, 1.4 MiB/s wr, 38 op/s
Nov 25 17:01:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/3354207974' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:01:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/3354207974' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:01:57 compute-0 nova_compute[254092]: 2025-11-25 17:01:57.237 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:01:57 compute-0 ceph-mon[74985]: pgmap v2352: 321 pgs: 321 active+clean; 239 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 474 KiB/s rd, 2.6 MiB/s wr, 53 op/s
Nov 25 17:01:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2353: 321 pgs: 321 active+clean; 239 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 2.6 MiB/s wr, 37 op/s
Nov 25 17:01:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:01:59 compute-0 ovn_controller[153477]: 2025-11-25T17:01:59Z|00126|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cd:6a:fb 10.100.0.7
Nov 25 17:01:59 compute-0 ovn_controller[153477]: 2025-11-25T17:01:59Z|00127|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cd:6a:fb 10.100.0.7
Nov 25 17:01:59 compute-0 nova_compute[254092]: 2025-11-25 17:01:59.861 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:00 compute-0 ceph-mon[74985]: pgmap v2353: 321 pgs: 321 active+clean; 239 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 2.6 MiB/s wr, 37 op/s
Nov 25 17:02:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2354: 321 pgs: 321 active+clean; 244 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 148 KiB/s rd, 3.0 MiB/s wr, 49 op/s
Nov 25 17:02:00 compute-0 ovn_controller[153477]: 2025-11-25T17:02:00Z|00128|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a8:c9:7b 10.100.0.3
Nov 25 17:02:00 compute-0 ovn_controller[153477]: 2025-11-25T17:02:00Z|00129|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a8:c9:7b 10.100.0.3
Nov 25 17:02:02 compute-0 ceph-mon[74985]: pgmap v2354: 321 pgs: 321 active+clean; 244 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 148 KiB/s rd, 3.0 MiB/s wr, 49 op/s
Nov 25 17:02:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2355: 321 pgs: 321 active+clean; 266 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 422 KiB/s rd, 3.2 MiB/s wr, 85 op/s
Nov 25 17:02:02 compute-0 nova_compute[254092]: 2025-11-25 17:02:02.279 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:02:04 compute-0 ceph-mon[74985]: pgmap v2355: 321 pgs: 321 active+clean; 266 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 422 KiB/s rd, 3.2 MiB/s wr, 85 op/s
Nov 25 17:02:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2356: 321 pgs: 321 active+clean; 266 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 414 KiB/s rd, 2.8 MiB/s wr, 80 op/s
Nov 25 17:02:04 compute-0 nova_compute[254092]: 2025-11-25 17:02:04.880 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:06 compute-0 ceph-mon[74985]: pgmap v2356: 321 pgs: 321 active+clean; 266 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 414 KiB/s rd, 2.8 MiB/s wr, 80 op/s
Nov 25 17:02:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2357: 321 pgs: 321 active+clean; 279 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 537 KiB/s rd, 2.9 MiB/s wr, 100 op/s
Nov 25 17:02:06 compute-0 nova_compute[254092]: 2025-11-25 17:02:06.755 254096 INFO nova.compute.manager [None req-18bccb64-ee3f-4a67-a2be-697e2f46b053 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Get console output
Nov 25 17:02:06 compute-0 nova_compute[254092]: 2025-11-25 17:02:06.760 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 17:02:07 compute-0 nova_compute[254092]: 2025-11-25 17:02:07.282 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:07 compute-0 nova_compute[254092]: 2025-11-25 17:02:07.321 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:07.321 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:02:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:07.322 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 17:02:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:07.323 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:02:08 compute-0 ceph-mon[74985]: pgmap v2357: 321 pgs: 321 active+clean; 279 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 537 KiB/s rd, 2.9 MiB/s wr, 100 op/s
Nov 25 17:02:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2358: 321 pgs: 321 active+clean; 279 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 518 KiB/s rd, 1.7 MiB/s wr, 86 op/s
Nov 25 17:02:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:02:09 compute-0 nova_compute[254092]: 2025-11-25 17:02:09.882 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:02:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:02:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:02:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:02:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:02:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:02:10 compute-0 nova_compute[254092]: 2025-11-25 17:02:10.208 254096 DEBUG oslo_concurrency.lockutils [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "interface-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:02:10 compute-0 nova_compute[254092]: 2025-11-25 17:02:10.209 254096 DEBUG oslo_concurrency.lockutils [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "interface-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:02:10 compute-0 nova_compute[254092]: 2025-11-25 17:02:10.209 254096 DEBUG nova.objects.instance [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'flavor' on Instance uuid f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:02:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2359: 321 pgs: 321 active+clean; 279 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 518 KiB/s rd, 1.7 MiB/s wr, 86 op/s
Nov 25 17:02:10 compute-0 ceph-mon[74985]: pgmap v2358: 321 pgs: 321 active+clean; 279 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 518 KiB/s rd, 1.7 MiB/s wr, 86 op/s
Nov 25 17:02:10 compute-0 sudo[380992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:02:10 compute-0 sudo[380992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:02:10 compute-0 sudo[380992]: pam_unix(sudo:session): session closed for user root
Nov 25 17:02:10 compute-0 sudo[381032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:02:10 compute-0 nova_compute[254092]: 2025-11-25 17:02:10.606 254096 DEBUG nova.objects.instance [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'pci_requests' on Instance uuid f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:02:10 compute-0 sudo[381032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:02:10 compute-0 podman[381016]: 2025-11-25 17:02:10.609957164 +0000 UTC m=+0.086040520 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3)
Nov 25 17:02:10 compute-0 sudo[381032]: pam_unix(sudo:session): session closed for user root
Nov 25 17:02:10 compute-0 podman[381017]: 2025-11-25 17:02:10.615900568 +0000 UTC m=+0.089318771 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 17:02:10 compute-0 nova_compute[254092]: 2025-11-25 17:02:10.625 254096 DEBUG nova.network.neutron [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 17:02:10 compute-0 podman[381018]: 2025-11-25 17:02:10.630330984 +0000 UTC m=+0.098973926 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 17:02:10 compute-0 sudo[381102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:02:10 compute-0 sudo[381102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:02:10 compute-0 sudo[381102]: pam_unix(sudo:session): session closed for user root
Nov 25 17:02:10 compute-0 sudo[381129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:02:10 compute-0 sudo[381129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:02:10 compute-0 nova_compute[254092]: 2025-11-25 17:02:10.799 254096 DEBUG nova.policy [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e13382ac65e94c7b8a89e67de0e754df', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 17:02:11 compute-0 sudo[381129]: pam_unix(sudo:session): session closed for user root
Nov 25 17:02:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:02:11 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:02:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:02:11 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:02:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:02:11 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:02:11 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 94f15886-27c9-46aa-9936-f2558df14349 does not exist
Nov 25 17:02:11 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev be5e1915-7b8d-494e-bd82-7f765d41102a does not exist
Nov 25 17:02:11 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 26088abf-ce88-43b4-9f04-c9d46545b748 does not exist
Nov 25 17:02:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:02:11 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:02:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:02:11 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:02:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:02:11 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:02:11 compute-0 ceph-mon[74985]: pgmap v2359: 321 pgs: 321 active+clean; 279 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 518 KiB/s rd, 1.7 MiB/s wr, 86 op/s
Nov 25 17:02:11 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:02:11 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:02:11 compute-0 nova_compute[254092]: 2025-11-25 17:02:11.571 254096 DEBUG nova.network.neutron [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Successfully created port: 1e4a418b-b459-456b-ae99-26f8b034a7bc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 17:02:11 compute-0 sudo[381186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:02:11 compute-0 sudo[381186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:02:11 compute-0 sudo[381186]: pam_unix(sudo:session): session closed for user root
Nov 25 17:02:11 compute-0 sudo[381211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:02:11 compute-0 sudo[381211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:02:11 compute-0 sudo[381211]: pam_unix(sudo:session): session closed for user root
Nov 25 17:02:11 compute-0 sudo[381236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:02:11 compute-0 sudo[381236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:02:11 compute-0 sudo[381236]: pam_unix(sudo:session): session closed for user root
Nov 25 17:02:11 compute-0 sudo[381261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:02:11 compute-0 sudo[381261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:02:12 compute-0 podman[381326]: 2025-11-25 17:02:12.152582126 +0000 UTC m=+0.029130890 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:02:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2360: 321 pgs: 321 active+clean; 279 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 419 KiB/s rd, 1.3 MiB/s wr, 74 op/s
Nov 25 17:02:12 compute-0 nova_compute[254092]: 2025-11-25 17:02:12.285 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:12 compute-0 podman[381326]: 2025-11-25 17:02:12.403337004 +0000 UTC m=+0.279885748 container create aaa9fed22137f30cce6bf18b01589d581c33170454f0f3f2b22f90d754cf4532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:02:12 compute-0 systemd[1]: Started libpod-conmon-aaa9fed22137f30cce6bf18b01589d581c33170454f0f3f2b22f90d754cf4532.scope.
Nov 25 17:02:12 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:02:12 compute-0 podman[381326]: 2025-11-25 17:02:12.624982733 +0000 UTC m=+0.501531477 container init aaa9fed22137f30cce6bf18b01589d581c33170454f0f3f2b22f90d754cf4532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 17:02:12 compute-0 podman[381326]: 2025-11-25 17:02:12.639366048 +0000 UTC m=+0.515914782 container start aaa9fed22137f30cce6bf18b01589d581c33170454f0f3f2b22f90d754cf4532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 17:02:12 compute-0 busy_wilbur[381342]: 167 167
Nov 25 17:02:12 compute-0 systemd[1]: libpod-aaa9fed22137f30cce6bf18b01589d581c33170454f0f3f2b22f90d754cf4532.scope: Deactivated successfully.
Nov 25 17:02:12 compute-0 conmon[381342]: conmon aaa9fed22137f30cce6b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aaa9fed22137f30cce6bf18b01589d581c33170454f0f3f2b22f90d754cf4532.scope/container/memory.events
Nov 25 17:02:12 compute-0 podman[381326]: 2025-11-25 17:02:12.756775708 +0000 UTC m=+0.633324482 container attach aaa9fed22137f30cce6bf18b01589d581c33170454f0f3f2b22f90d754cf4532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_wilbur, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 17:02:12 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:02:12 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:02:12 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:02:12 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:02:12 compute-0 podman[381326]: 2025-11-25 17:02:12.759496773 +0000 UTC m=+0.636045527 container died aaa9fed22137f30cce6bf18b01589d581c33170454f0f3f2b22f90d754cf4532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_wilbur, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:02:12 compute-0 nova_compute[254092]: 2025-11-25 17:02:12.998 254096 DEBUG nova.network.neutron [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Successfully updated port: 1e4a418b-b459-456b-ae99-26f8b034a7bc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:02:13 compute-0 nova_compute[254092]: 2025-11-25 17:02:13.026 254096 DEBUG oslo_concurrency.lockutils [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "refresh_cache-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:02:13 compute-0 nova_compute[254092]: 2025-11-25 17:02:13.027 254096 DEBUG oslo_concurrency.lockutils [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquired lock "refresh_cache-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:02:13 compute-0 nova_compute[254092]: 2025-11-25 17:02:13.027 254096 DEBUG nova.network.neutron [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 17:02:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-87b495a205acfe8172566596bb6ac34c8979a7284083b3987f6a1ef51a91c734-merged.mount: Deactivated successfully.
Nov 25 17:02:13 compute-0 podman[381326]: 2025-11-25 17:02:13.083897711 +0000 UTC m=+0.960446475 container remove aaa9fed22137f30cce6bf18b01589d581c33170454f0f3f2b22f90d754cf4532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_wilbur, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:02:13 compute-0 systemd[1]: libpod-conmon-aaa9fed22137f30cce6bf18b01589d581c33170454f0f3f2b22f90d754cf4532.scope: Deactivated successfully.
Nov 25 17:02:13 compute-0 nova_compute[254092]: 2025-11-25 17:02:13.120 254096 DEBUG nova.compute.manager [req-41ad3168-ce28-4944-8753-6bd0f38c2de4 req-c840d07d-80b6-4454-85cf-6a3e2768a5c2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Received event network-changed-1e4a418b-b459-456b-ae99-26f8b034a7bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:02:13 compute-0 nova_compute[254092]: 2025-11-25 17:02:13.120 254096 DEBUG nova.compute.manager [req-41ad3168-ce28-4944-8753-6bd0f38c2de4 req-c840d07d-80b6-4454-85cf-6a3e2768a5c2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Refreshing instance network info cache due to event network-changed-1e4a418b-b459-456b-ae99-26f8b034a7bc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:02:13 compute-0 nova_compute[254092]: 2025-11-25 17:02:13.120 254096 DEBUG oslo_concurrency.lockutils [req-41ad3168-ce28-4944-8753-6bd0f38c2de4 req-c840d07d-80b6-4454-85cf-6a3e2768a5c2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:02:13 compute-0 podman[381367]: 2025-11-25 17:02:13.334779902 +0000 UTC m=+0.060319755 container create d6f8b1f7971ae6e495562cd2b59fbf9983ba1f8e9071a8a4efb18570a2d4f26b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_galileo, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:02:13 compute-0 systemd[1]: Started libpod-conmon-d6f8b1f7971ae6e495562cd2b59fbf9983ba1f8e9071a8a4efb18570a2d4f26b.scope.
Nov 25 17:02:13 compute-0 podman[381367]: 2025-11-25 17:02:13.301006485 +0000 UTC m=+0.026546378 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:02:13 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:02:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acd32e13a86f5152688cf1a171d77cfaa9a07e6d23dcd03e1870eb8037015ab6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:02:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acd32e13a86f5152688cf1a171d77cfaa9a07e6d23dcd03e1870eb8037015ab6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:02:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acd32e13a86f5152688cf1a171d77cfaa9a07e6d23dcd03e1870eb8037015ab6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:02:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acd32e13a86f5152688cf1a171d77cfaa9a07e6d23dcd03e1870eb8037015ab6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:02:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acd32e13a86f5152688cf1a171d77cfaa9a07e6d23dcd03e1870eb8037015ab6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:02:13 compute-0 podman[381367]: 2025-11-25 17:02:13.419110594 +0000 UTC m=+0.144650467 container init d6f8b1f7971ae6e495562cd2b59fbf9983ba1f8e9071a8a4efb18570a2d4f26b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_galileo, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 17:02:13 compute-0 podman[381367]: 2025-11-25 17:02:13.427079403 +0000 UTC m=+0.152619256 container start d6f8b1f7971ae6e495562cd2b59fbf9983ba1f8e9071a8a4efb18570a2d4f26b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_galileo, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 17:02:13 compute-0 podman[381367]: 2025-11-25 17:02:13.430148327 +0000 UTC m=+0.155688200 container attach d6f8b1f7971ae6e495562cd2b59fbf9983ba1f8e9071a8a4efb18570a2d4f26b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_galileo, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:02:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:13.639 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:02:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:13.640 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:02:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:13.643 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:02:13 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:02:14 compute-0 ceph-mon[74985]: pgmap v2360: 321 pgs: 321 active+clean; 279 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 419 KiB/s rd, 1.3 MiB/s wr, 74 op/s
Nov 25 17:02:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2361: 321 pgs: 321 active+clean; 279 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 122 KiB/s rd, 99 KiB/s wr, 20 op/s
Nov 25 17:02:14 compute-0 admiring_galileo[381384]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:02:14 compute-0 admiring_galileo[381384]: --> relative data size: 1.0
Nov 25 17:02:14 compute-0 admiring_galileo[381384]: --> All data devices are unavailable
Nov 25 17:02:14 compute-0 systemd[1]: libpod-d6f8b1f7971ae6e495562cd2b59fbf9983ba1f8e9071a8a4efb18570a2d4f26b.scope: Deactivated successfully.
Nov 25 17:02:14 compute-0 podman[381367]: 2025-11-25 17:02:14.538906159 +0000 UTC m=+1.264446042 container died d6f8b1f7971ae6e495562cd2b59fbf9983ba1f8e9071a8a4efb18570a2d4f26b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_galileo, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 17:02:14 compute-0 systemd[1]: libpod-d6f8b1f7971ae6e495562cd2b59fbf9983ba1f8e9071a8a4efb18570a2d4f26b.scope: Consumed 1.039s CPU time.
Nov 25 17:02:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-acd32e13a86f5152688cf1a171d77cfaa9a07e6d23dcd03e1870eb8037015ab6-merged.mount: Deactivated successfully.
Nov 25 17:02:14 compute-0 podman[381367]: 2025-11-25 17:02:14.601516616 +0000 UTC m=+1.327056469 container remove d6f8b1f7971ae6e495562cd2b59fbf9983ba1f8e9071a8a4efb18570a2d4f26b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_galileo, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 17:02:14 compute-0 systemd[1]: libpod-conmon-d6f8b1f7971ae6e495562cd2b59fbf9983ba1f8e9071a8a4efb18570a2d4f26b.scope: Deactivated successfully.
Nov 25 17:02:14 compute-0 sudo[381261]: pam_unix(sudo:session): session closed for user root
Nov 25 17:02:14 compute-0 sudo[381427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:02:14 compute-0 sudo[381427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:02:14 compute-0 sudo[381427]: pam_unix(sudo:session): session closed for user root
Nov 25 17:02:14 compute-0 sudo[381452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:02:14 compute-0 sudo[381452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:02:14 compute-0 sudo[381452]: pam_unix(sudo:session): session closed for user root
Nov 25 17:02:14 compute-0 sudo[381477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:02:14 compute-0 sudo[381477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:02:14 compute-0 nova_compute[254092]: 2025-11-25 17:02:14.885 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:14 compute-0 sudo[381477]: pam_unix(sudo:session): session closed for user root
Nov 25 17:02:14 compute-0 sudo[381502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:02:14 compute-0 sudo[381502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:02:15 compute-0 podman[381567]: 2025-11-25 17:02:15.394136497 +0000 UTC m=+0.050919358 container create 78147897d9fff6f5d0e1bbf7b62c030d054be3105dd54a7e56f74a70cae73e1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 17:02:15 compute-0 systemd[1]: Started libpod-conmon-78147897d9fff6f5d0e1bbf7b62c030d054be3105dd54a7e56f74a70cae73e1d.scope.
Nov 25 17:02:15 compute-0 podman[381567]: 2025-11-25 17:02:15.36980828 +0000 UTC m=+0.026591141 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:02:15 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:02:15 compute-0 podman[381567]: 2025-11-25 17:02:15.484957648 +0000 UTC m=+0.141740559 container init 78147897d9fff6f5d0e1bbf7b62c030d054be3105dd54a7e56f74a70cae73e1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 17:02:15 compute-0 podman[381567]: 2025-11-25 17:02:15.493650566 +0000 UTC m=+0.150433397 container start 78147897d9fff6f5d0e1bbf7b62c030d054be3105dd54a7e56f74a70cae73e1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_diffie, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 17:02:15 compute-0 podman[381567]: 2025-11-25 17:02:15.497398119 +0000 UTC m=+0.154181030 container attach 78147897d9fff6f5d0e1bbf7b62c030d054be3105dd54a7e56f74a70cae73e1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 17:02:15 compute-0 condescending_diffie[381584]: 167 167
Nov 25 17:02:15 compute-0 systemd[1]: libpod-78147897d9fff6f5d0e1bbf7b62c030d054be3105dd54a7e56f74a70cae73e1d.scope: Deactivated successfully.
Nov 25 17:02:15 compute-0 podman[381567]: 2025-11-25 17:02:15.500411752 +0000 UTC m=+0.157194613 container died 78147897d9fff6f5d0e1bbf7b62c030d054be3105dd54a7e56f74a70cae73e1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_diffie, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 17:02:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-a912da37af3aabc05611fb3c9ac66aa2fe296cb0c02d771307b5ab2ed122611c-merged.mount: Deactivated successfully.
Nov 25 17:02:15 compute-0 podman[381567]: 2025-11-25 17:02:15.554527075 +0000 UTC m=+0.211309906 container remove 78147897d9fff6f5d0e1bbf7b62c030d054be3105dd54a7e56f74a70cae73e1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 17:02:15 compute-0 systemd[1]: libpod-conmon-78147897d9fff6f5d0e1bbf7b62c030d054be3105dd54a7e56f74a70cae73e1d.scope: Deactivated successfully.
Nov 25 17:02:15 compute-0 podman[381609]: 2025-11-25 17:02:15.771076136 +0000 UTC m=+0.051398762 container create 700b29a382a9e461f0d597bb977c54006735a001dfc918d76832d89d7644b7a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_blackwell, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 17:02:15 compute-0 systemd[1]: Started libpod-conmon-700b29a382a9e461f0d597bb977c54006735a001dfc918d76832d89d7644b7a8.scope.
Nov 25 17:02:15 compute-0 podman[381609]: 2025-11-25 17:02:15.750281635 +0000 UTC m=+0.030604281 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:02:15 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:02:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3acceab3bf7e0ebcf143522fcd221f49f6899325ffc1e9d7f92f01664810118b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:02:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3acceab3bf7e0ebcf143522fcd221f49f6899325ffc1e9d7f92f01664810118b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:02:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3acceab3bf7e0ebcf143522fcd221f49f6899325ffc1e9d7f92f01664810118b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:02:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3acceab3bf7e0ebcf143522fcd221f49f6899325ffc1e9d7f92f01664810118b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:02:15 compute-0 podman[381609]: 2025-11-25 17:02:15.872928369 +0000 UTC m=+0.153251025 container init 700b29a382a9e461f0d597bb977c54006735a001dfc918d76832d89d7644b7a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_blackwell, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:02:15 compute-0 podman[381609]: 2025-11-25 17:02:15.883657444 +0000 UTC m=+0.163980070 container start 700b29a382a9e461f0d597bb977c54006735a001dfc918d76832d89d7644b7a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_blackwell, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 17:02:15 compute-0 podman[381609]: 2025-11-25 17:02:15.887472288 +0000 UTC m=+0.167794914 container attach 700b29a382a9e461f0d597bb977c54006735a001dfc918d76832d89d7644b7a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_blackwell, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 17:02:16 compute-0 ceph-mon[74985]: pgmap v2361: 321 pgs: 321 active+clean; 279 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 122 KiB/s rd, 99 KiB/s wr, 20 op/s
Nov 25 17:02:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2362: 321 pgs: 321 active+clean; 279 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 122 KiB/s rd, 101 KiB/s wr, 20 op/s
Nov 25 17:02:16 compute-0 nova_compute[254092]: 2025-11-25 17:02:16.589 254096 DEBUG nova.network.neutron [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Updating instance_info_cache with network_info: [{"id": "9a960a19-c599-4217-b99c-ac16fe6384b1", "address": "fa:16:3e:a8:c9:7b", "network": {"id": "131ae834-ee81-42ce-b61e-863b3a8d52e1", "bridge": "br-int", "label": "tempest-network-smoke--413727726", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a960a19-c5", "ovs_interfaceid": "9a960a19-c599-4217-b99c-ac16fe6384b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "address": "fa:16:3e:00:fe:32", "network": {"id": "c9bf74ed-ebec-4028-924f-11064256236f", "bridge": "br-int", "label": "tempest-network-smoke--963060460", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4a418b-b4", "ovs_interfaceid": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:02:16 compute-0 musing_blackwell[381625]: {
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:     "0": [
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:         {
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:             "devices": [
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:                 "/dev/loop3"
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:             ],
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:             "lv_name": "ceph_lv0",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:             "lv_size": "21470642176",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:             "name": "ceph_lv0",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:             "tags": {
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:                 "ceph.cluster_name": "ceph",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:                 "ceph.crush_device_class": "",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:                 "ceph.encrypted": "0",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:                 "ceph.osd_id": "0",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:                 "ceph.type": "block",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:                 "ceph.vdo": "0"
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:             },
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:             "type": "block",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:             "vg_name": "ceph_vg0"
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:         }
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:     ],
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:     "1": [
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:         {
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:             "devices": [
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:                 "/dev/loop4"
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:             ],
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:             "lv_name": "ceph_lv1",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:             "lv_size": "21470642176",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:             "name": "ceph_lv1",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:             "tags": {
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:                 "ceph.cluster_name": "ceph",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:                 "ceph.crush_device_class": "",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:                 "ceph.encrypted": "0",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:                 "ceph.osd_id": "1",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:                 "ceph.type": "block",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:                 "ceph.vdo": "0"
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:             },
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:             "type": "block",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:             "vg_name": "ceph_vg1"
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:         }
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:     ],
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:     "2": [
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:         {
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:             "devices": [
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:                 "/dev/loop5"
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:             ],
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:             "lv_name": "ceph_lv2",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:             "lv_size": "21470642176",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:             "name": "ceph_lv2",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:             "tags": {
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:                 "ceph.cluster_name": "ceph",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:                 "ceph.crush_device_class": "",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:                 "ceph.encrypted": "0",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:                 "ceph.osd_id": "2",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:                 "ceph.type": "block",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:                 "ceph.vdo": "0"
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:             },
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:             "type": "block",
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:             "vg_name": "ceph_vg2"
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:         }
Nov 25 17:02:16 compute-0 musing_blackwell[381625]:     ]
Nov 25 17:02:16 compute-0 musing_blackwell[381625]: }
Nov 25 17:02:16 compute-0 systemd[1]: libpod-700b29a382a9e461f0d597bb977c54006735a001dfc918d76832d89d7644b7a8.scope: Deactivated successfully.
Nov 25 17:02:16 compute-0 podman[381609]: 2025-11-25 17:02:16.731951061 +0000 UTC m=+1.012273687 container died 700b29a382a9e461f0d597bb977c54006735a001dfc918d76832d89d7644b7a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_blackwell, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 17:02:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-3acceab3bf7e0ebcf143522fcd221f49f6899325ffc1e9d7f92f01664810118b-merged.mount: Deactivated successfully.
Nov 25 17:02:16 compute-0 nova_compute[254092]: 2025-11-25 17:02:16.762 254096 DEBUG oslo_concurrency.lockutils [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Releasing lock "refresh_cache-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:02:16 compute-0 nova_compute[254092]: 2025-11-25 17:02:16.763 254096 DEBUG oslo_concurrency.lockutils [req-41ad3168-ce28-4944-8753-6bd0f38c2de4 req-c840d07d-80b6-4454-85cf-6a3e2768a5c2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:02:16 compute-0 nova_compute[254092]: 2025-11-25 17:02:16.763 254096 DEBUG nova.network.neutron [req-41ad3168-ce28-4944-8753-6bd0f38c2de4 req-c840d07d-80b6-4454-85cf-6a3e2768a5c2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Refreshing network info cache for port 1e4a418b-b459-456b-ae99-26f8b034a7bc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:02:16 compute-0 nova_compute[254092]: 2025-11-25 17:02:16.767 254096 DEBUG nova.virt.libvirt.vif [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:01:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1605221886',display_name='tempest-TestNetworkBasicOps-server-1605221886',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1605221886',id=114,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG8GG+LWFgTdI2Lx3HIK8iitzYjRcEiXMosisS4j8Mff81tp3LiT3UmyZlgVw+9dc2rljXR4tNO0hQkUya+Lw6KQzFUnQWm0QB4mvWvH4UQzxuEdyxR/UzUx6isIg0+QoA==',key_name='tempest-TestNetworkBasicOps-823633184',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:01:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-4x69xfhl',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:01:33Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "address": "fa:16:3e:00:fe:32", "network": {"id": "c9bf74ed-ebec-4028-924f-11064256236f", "bridge": "br-int", "label": "tempest-network-smoke--963060460", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4a418b-b4", "ovs_interfaceid": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:02:16 compute-0 nova_compute[254092]: 2025-11-25 17:02:16.767 254096 DEBUG nova.network.os_vif_util [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "address": "fa:16:3e:00:fe:32", "network": {"id": "c9bf74ed-ebec-4028-924f-11064256236f", "bridge": "br-int", "label": "tempest-network-smoke--963060460", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4a418b-b4", "ovs_interfaceid": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:02:16 compute-0 nova_compute[254092]: 2025-11-25 17:02:16.768 254096 DEBUG nova.network.os_vif_util [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:fe:32,bridge_name='br-int',has_traffic_filtering=True,id=1e4a418b-b459-456b-ae99-26f8b034a7bc,network=Network(c9bf74ed-ebec-4028-924f-11064256236f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e4a418b-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:02:16 compute-0 nova_compute[254092]: 2025-11-25 17:02:16.768 254096 DEBUG os_vif [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:fe:32,bridge_name='br-int',has_traffic_filtering=True,id=1e4a418b-b459-456b-ae99-26f8b034a7bc,network=Network(c9bf74ed-ebec-4028-924f-11064256236f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e4a418b-b4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:02:16 compute-0 nova_compute[254092]: 2025-11-25 17:02:16.769 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:16 compute-0 nova_compute[254092]: 2025-11-25 17:02:16.769 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:02:16 compute-0 nova_compute[254092]: 2025-11-25 17:02:16.770 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:02:16 compute-0 nova_compute[254092]: 2025-11-25 17:02:16.773 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:16 compute-0 nova_compute[254092]: 2025-11-25 17:02:16.774 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1e4a418b-b4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:02:16 compute-0 nova_compute[254092]: 2025-11-25 17:02:16.774 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1e4a418b-b4, col_values=(('external_ids', {'iface-id': '1e4a418b-b459-456b-ae99-26f8b034a7bc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:00:fe:32', 'vm-uuid': 'f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:02:16 compute-0 NetworkManager[48891]: <info>  [1764090136.7771] manager: (tap1e4a418b-b4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/490)
Nov 25 17:02:16 compute-0 nova_compute[254092]: 2025-11-25 17:02:16.777 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:02:16 compute-0 nova_compute[254092]: 2025-11-25 17:02:16.793 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:16 compute-0 podman[381609]: 2025-11-25 17:02:16.794455375 +0000 UTC m=+1.074778001 container remove 700b29a382a9e461f0d597bb977c54006735a001dfc918d76832d89d7644b7a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 17:02:16 compute-0 nova_compute[254092]: 2025-11-25 17:02:16.795 254096 INFO os_vif [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:fe:32,bridge_name='br-int',has_traffic_filtering=True,id=1e4a418b-b459-456b-ae99-26f8b034a7bc,network=Network(c9bf74ed-ebec-4028-924f-11064256236f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e4a418b-b4')
Nov 25 17:02:16 compute-0 nova_compute[254092]: 2025-11-25 17:02:16.795 254096 DEBUG nova.virt.libvirt.vif [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:01:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1605221886',display_name='tempest-TestNetworkBasicOps-server-1605221886',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1605221886',id=114,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG8GG+LWFgTdI2Lx3HIK8iitzYjRcEiXMosisS4j8Mff81tp3LiT3UmyZlgVw+9dc2rljXR4tNO0hQkUya+Lw6KQzFUnQWm0QB4mvWvH4UQzxuEdyxR/UzUx6isIg0+QoA==',key_name='tempest-TestNetworkBasicOps-823633184',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:01:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-4x69xfhl',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:01:33Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "address": "fa:16:3e:00:fe:32", "network": {"id": "c9bf74ed-ebec-4028-924f-11064256236f", "bridge": "br-int", "label": "tempest-network-smoke--963060460", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4a418b-b4", "ovs_interfaceid": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:02:16 compute-0 nova_compute[254092]: 2025-11-25 17:02:16.796 254096 DEBUG nova.network.os_vif_util [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "address": "fa:16:3e:00:fe:32", "network": {"id": "c9bf74ed-ebec-4028-924f-11064256236f", "bridge": "br-int", "label": "tempest-network-smoke--963060460", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4a418b-b4", "ovs_interfaceid": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:02:16 compute-0 nova_compute[254092]: 2025-11-25 17:02:16.796 254096 DEBUG nova.network.os_vif_util [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:fe:32,bridge_name='br-int',has_traffic_filtering=True,id=1e4a418b-b459-456b-ae99-26f8b034a7bc,network=Network(c9bf74ed-ebec-4028-924f-11064256236f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e4a418b-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:02:16 compute-0 nova_compute[254092]: 2025-11-25 17:02:16.800 254096 DEBUG nova.virt.libvirt.guest [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] attach device xml: <interface type="ethernet">
Nov 25 17:02:16 compute-0 nova_compute[254092]:   <mac address="fa:16:3e:00:fe:32"/>
Nov 25 17:02:16 compute-0 nova_compute[254092]:   <model type="virtio"/>
Nov 25 17:02:16 compute-0 nova_compute[254092]:   <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:02:16 compute-0 nova_compute[254092]:   <mtu size="1442"/>
Nov 25 17:02:16 compute-0 nova_compute[254092]:   <target dev="tap1e4a418b-b4"/>
Nov 25 17:02:16 compute-0 nova_compute[254092]: </interface>
Nov 25 17:02:16 compute-0 nova_compute[254092]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 25 17:02:16 compute-0 systemd[1]: libpod-conmon-700b29a382a9e461f0d597bb977c54006735a001dfc918d76832d89d7644b7a8.scope: Deactivated successfully.
Nov 25 17:02:16 compute-0 kernel: tap1e4a418b-b4: entered promiscuous mode
Nov 25 17:02:16 compute-0 NetworkManager[48891]: <info>  [1764090136.8149] manager: (tap1e4a418b-b4): new Tun device (/org/freedesktop/NetworkManager/Devices/491)
Nov 25 17:02:16 compute-0 ovn_controller[153477]: 2025-11-25T17:02:16Z|01181|binding|INFO|Claiming lport 1e4a418b-b459-456b-ae99-26f8b034a7bc for this chassis.
Nov 25 17:02:16 compute-0 nova_compute[254092]: 2025-11-25 17:02:16.816 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:16 compute-0 ovn_controller[153477]: 2025-11-25T17:02:16Z|01182|binding|INFO|1e4a418b-b459-456b-ae99-26f8b034a7bc: Claiming fa:16:3e:00:fe:32 10.100.0.24
Nov 25 17:02:16 compute-0 sudo[381502]: pam_unix(sudo:session): session closed for user root
Nov 25 17:02:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:16.825 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:fe:32 10.100.0.24'], port_security=['fa:16:3e:00:fe:32 10.100.0.24'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.24/28', 'neutron:device_id': 'f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c9bf74ed-ebec-4028-924f-11064256236f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '2', 'neutron:security_group_ids': '63a1678d-cfcd-4cc9-9221-d7d228d35cc1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eadf10b2-41b2-4301-a0d6-9c1d0e6514cb, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1e4a418b-b459-456b-ae99-26f8b034a7bc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:02:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:16.826 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1e4a418b-b459-456b-ae99-26f8b034a7bc in datapath c9bf74ed-ebec-4028-924f-11064256236f bound to our chassis
Nov 25 17:02:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:16.828 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c9bf74ed-ebec-4028-924f-11064256236f
Nov 25 17:02:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:16.839 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[523f7f5f-de1d-4a0b-8e5b-f9066bdc7375]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:16.840 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc9bf74ed-e1 in ovnmeta-c9bf74ed-ebec-4028-924f-11064256236f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 17:02:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:16.842 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc9bf74ed-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 17:02:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:16.842 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0bc302b9-8437-4817-bcdf-b13eb7dbfae2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:16.843 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c5627bfa-f956-4627-8503-0f8ede271e03]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:16 compute-0 systemd-udevd[381656]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:02:16 compute-0 ovn_controller[153477]: 2025-11-25T17:02:16Z|01183|binding|INFO|Setting lport 1e4a418b-b459-456b-ae99-26f8b034a7bc ovn-installed in OVS
Nov 25 17:02:16 compute-0 ovn_controller[153477]: 2025-11-25T17:02:16Z|01184|binding|INFO|Setting lport 1e4a418b-b459-456b-ae99-26f8b034a7bc up in Southbound
Nov 25 17:02:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:16.856 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[c2f000bf-b13e-4adc-9680-9c7b13ed0dbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:16 compute-0 nova_compute[254092]: 2025-11-25 17:02:16.857 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:16 compute-0 nova_compute[254092]: 2025-11-25 17:02:16.860 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:16 compute-0 NetworkManager[48891]: <info>  [1764090136.8698] device (tap1e4a418b-b4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:02:16 compute-0 NetworkManager[48891]: <info>  [1764090136.8707] device (tap1e4a418b-b4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:02:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:16.884 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a39e6e48-7e74-4039-a369-b962c413615a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:16 compute-0 sudo[381655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:02:16 compute-0 nova_compute[254092]: 2025-11-25 17:02:16.901 254096 DEBUG nova.virt.libvirt.driver [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:02:16 compute-0 nova_compute[254092]: 2025-11-25 17:02:16.901 254096 DEBUG nova.virt.libvirt.driver [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:02:16 compute-0 nova_compute[254092]: 2025-11-25 17:02:16.902 254096 DEBUG nova.virt.libvirt.driver [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No VIF found with MAC fa:16:3e:a8:c9:7b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:02:16 compute-0 nova_compute[254092]: 2025-11-25 17:02:16.902 254096 DEBUG nova.virt.libvirt.driver [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No VIF found with MAC fa:16:3e:00:fe:32, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:02:16 compute-0 sudo[381655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:02:16 compute-0 sudo[381655]: pam_unix(sudo:session): session closed for user root
Nov 25 17:02:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:16.910 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[7315fd4e-719b-45ee-9c02-36e6760b72eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:16 compute-0 NetworkManager[48891]: <info>  [1764090136.9167] manager: (tapc9bf74ed-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/492)
Nov 25 17:02:16 compute-0 systemd-udevd[381673]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:02:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:16.917 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[af2ad0c3-0be7-46c8-acd3-0a847773c8c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:16 compute-0 nova_compute[254092]: 2025-11-25 17:02:16.920 254096 DEBUG nova.virt.libvirt.guest [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:02:16 compute-0 nova_compute[254092]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:02:16 compute-0 nova_compute[254092]:   <nova:name>tempest-TestNetworkBasicOps-server-1605221886</nova:name>
Nov 25 17:02:16 compute-0 nova_compute[254092]:   <nova:creationTime>2025-11-25 17:02:16</nova:creationTime>
Nov 25 17:02:16 compute-0 nova_compute[254092]:   <nova:flavor name="m1.nano">
Nov 25 17:02:16 compute-0 nova_compute[254092]:     <nova:memory>128</nova:memory>
Nov 25 17:02:16 compute-0 nova_compute[254092]:     <nova:disk>1</nova:disk>
Nov 25 17:02:16 compute-0 nova_compute[254092]:     <nova:swap>0</nova:swap>
Nov 25 17:02:16 compute-0 nova_compute[254092]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:02:16 compute-0 nova_compute[254092]:     <nova:vcpus>1</nova:vcpus>
Nov 25 17:02:16 compute-0 nova_compute[254092]:   </nova:flavor>
Nov 25 17:02:16 compute-0 nova_compute[254092]:   <nova:owner>
Nov 25 17:02:16 compute-0 nova_compute[254092]:     <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 17:02:16 compute-0 nova_compute[254092]:     <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 17:02:16 compute-0 nova_compute[254092]:   </nova:owner>
Nov 25 17:02:16 compute-0 nova_compute[254092]:   <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:02:16 compute-0 nova_compute[254092]:   <nova:ports>
Nov 25 17:02:16 compute-0 nova_compute[254092]:     <nova:port uuid="9a960a19-c599-4217-b99c-ac16fe6384b1">
Nov 25 17:02:16 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 17:02:16 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 17:02:16 compute-0 nova_compute[254092]:     <nova:port uuid="1e4a418b-b459-456b-ae99-26f8b034a7bc">
Nov 25 17:02:16 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.24" ipVersion="4"/>
Nov 25 17:02:16 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 17:02:16 compute-0 nova_compute[254092]:   </nova:ports>
Nov 25 17:02:16 compute-0 nova_compute[254092]: </nova:instance>
Nov 25 17:02:16 compute-0 nova_compute[254092]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 25 17:02:16 compute-0 nova_compute[254092]: 2025-11-25 17:02:16.937 254096 DEBUG oslo_concurrency.lockutils [None req-ccc4c6b0-ea5e-4f8b-822c-cb476b53d981 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "interface-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:02:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:16.947 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[29225fe3-4b89-45fc-8bf0-12766cd0012f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:16.950 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a51c55ba-30cb-4fc6-b208-45df09b98771]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:16 compute-0 sudo[381687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:02:16 compute-0 sudo[381687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:02:16 compute-0 sudo[381687]: pam_unix(sudo:session): session closed for user root
Nov 25 17:02:16 compute-0 NetworkManager[48891]: <info>  [1764090136.9745] device (tapc9bf74ed-e0): carrier: link connected
Nov 25 17:02:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:16.981 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[7fb32078-e0b6-4082-a0ed-b5158567e4bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:16.996 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5f0ecfe6-e46f-4ff4-b8bc-a4a9a979d011]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc9bf74ed-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f3:51:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 350], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 659456, 'reachable_time': 36679, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 381737, 'error': None, 'target': 'ovnmeta-c9bf74ed-ebec-4028-924f-11064256236f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:17.012 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ed76bcac-2583-415d-83dc-8955ed8e53c2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef3:5156'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 659456, 'tstamp': 659456}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 381755, 'error': None, 'target': 'ovnmeta-c9bf74ed-ebec-4028-924f-11064256236f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:17 compute-0 sudo[381731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:02:17 compute-0 sudo[381731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:02:17 compute-0 sudo[381731]: pam_unix(sudo:session): session closed for user root
Nov 25 17:02:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:17.032 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cf5dfa1c-17c0-4563-86a1-2f94366d87cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc9bf74ed-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f3:51:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 350], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 659456, 'reachable_time': 36679, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 381757, 'error': None, 'target': 'ovnmeta-c9bf74ed-ebec-4028-924f-11064256236f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:17.067 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9c59545e-4524-4477-b5a7-9f0340e59931]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:17 compute-0 sudo[381759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:02:17 compute-0 sudo[381759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:02:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:17.125 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a71e66f0-c0ce-4ea8-a26e-e8240450c658]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:17.127 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc9bf74ed-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:02:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:17.127 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:02:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:17.128 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc9bf74ed-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:02:17 compute-0 NetworkManager[48891]: <info>  [1764090137.1306] manager: (tapc9bf74ed-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/493)
Nov 25 17:02:17 compute-0 nova_compute[254092]: 2025-11-25 17:02:17.130 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:17 compute-0 kernel: tapc9bf74ed-e0: entered promiscuous mode
Nov 25 17:02:17 compute-0 nova_compute[254092]: 2025-11-25 17:02:17.135 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:17.135 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc9bf74ed-e0, col_values=(('external_ids', {'iface-id': 'bdd28a66-21f2-47f4-b133-a95bb7f7eb47'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:02:17 compute-0 nova_compute[254092]: 2025-11-25 17:02:17.137 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:17 compute-0 ovn_controller[153477]: 2025-11-25T17:02:17Z|01185|binding|INFO|Releasing lport bdd28a66-21f2-47f4-b133-a95bb7f7eb47 from this chassis (sb_readonly=0)
Nov 25 17:02:17 compute-0 nova_compute[254092]: 2025-11-25 17:02:17.151 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:17 compute-0 nova_compute[254092]: 2025-11-25 17:02:17.152 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:17.153 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c9bf74ed-ebec-4028-924f-11064256236f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c9bf74ed-ebec-4028-924f-11064256236f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 17:02:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:17.154 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[55519cb6-8846-45b0-98ee-bebeaf6db519]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:17.155 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 17:02:17 compute-0 ovn_metadata_agent[163333]: global
Nov 25 17:02:17 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 17:02:17 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-c9bf74ed-ebec-4028-924f-11064256236f
Nov 25 17:02:17 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 17:02:17 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 17:02:17 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 17:02:17 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/c9bf74ed-ebec-4028-924f-11064256236f.pid.haproxy
Nov 25 17:02:17 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 17:02:17 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:02:17 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 17:02:17 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 17:02:17 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 17:02:17 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 17:02:17 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 17:02:17 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 17:02:17 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 17:02:17 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 17:02:17 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 17:02:17 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 17:02:17 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 17:02:17 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 17:02:17 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 17:02:17 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:02:17 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:02:17 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 17:02:17 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 17:02:17 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 17:02:17 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID c9bf74ed-ebec-4028-924f-11064256236f
Nov 25 17:02:17 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 17:02:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:17.157 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c9bf74ed-ebec-4028-924f-11064256236f', 'env', 'PROCESS_TAG=haproxy-c9bf74ed-ebec-4028-924f-11064256236f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c9bf74ed-ebec-4028-924f-11064256236f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 17:02:17 compute-0 podman[381830]: 2025-11-25 17:02:17.384886499 +0000 UTC m=+0.022620201 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:02:17 compute-0 podman[381830]: 2025-11-25 17:02:17.515172952 +0000 UTC m=+0.152906634 container create 86d5aa6ce8e9763a08ef920a2d9d0a8c7666c45de7d7927a38ff1468cf35fa72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_banach, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:02:17 compute-0 systemd[1]: Started libpod-conmon-86d5aa6ce8e9763a08ef920a2d9d0a8c7666c45de7d7927a38ff1468cf35fa72.scope.
Nov 25 17:02:17 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:02:17 compute-0 podman[381830]: 2025-11-25 17:02:17.600766311 +0000 UTC m=+0.238500023 container init 86d5aa6ce8e9763a08ef920a2d9d0a8c7666c45de7d7927a38ff1468cf35fa72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_banach, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 17:02:17 compute-0 podman[381830]: 2025-11-25 17:02:17.610501317 +0000 UTC m=+0.248234999 container start 86d5aa6ce8e9763a08ef920a2d9d0a8c7666c45de7d7927a38ff1468cf35fa72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_banach, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:02:17 compute-0 podman[381830]: 2025-11-25 17:02:17.61495121 +0000 UTC m=+0.252684922 container attach 86d5aa6ce8e9763a08ef920a2d9d0a8c7666c45de7d7927a38ff1468cf35fa72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_banach, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 17:02:17 compute-0 nova_compute[254092]: 2025-11-25 17:02:17.615 254096 DEBUG nova.compute.manager [req-445a1fac-a619-4442-8a67-7d622d5813c1 req-917b2277-50bd-49cc-b360-b7782a4b7591 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Received event network-vif-plugged-1e4a418b-b459-456b-ae99-26f8b034a7bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:02:17 compute-0 nova_compute[254092]: 2025-11-25 17:02:17.616 254096 DEBUG oslo_concurrency.lockutils [req-445a1fac-a619-4442-8a67-7d622d5813c1 req-917b2277-50bd-49cc-b360-b7782a4b7591 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:02:17 compute-0 nova_compute[254092]: 2025-11-25 17:02:17.616 254096 DEBUG oslo_concurrency.lockutils [req-445a1fac-a619-4442-8a67-7d622d5813c1 req-917b2277-50bd-49cc-b360-b7782a4b7591 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:02:17 compute-0 nova_compute[254092]: 2025-11-25 17:02:17.616 254096 DEBUG oslo_concurrency.lockutils [req-445a1fac-a619-4442-8a67-7d622d5813c1 req-917b2277-50bd-49cc-b360-b7782a4b7591 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:02:17 compute-0 nova_compute[254092]: 2025-11-25 17:02:17.616 254096 DEBUG nova.compute.manager [req-445a1fac-a619-4442-8a67-7d622d5813c1 req-917b2277-50bd-49cc-b360-b7782a4b7591 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] No waiting events found dispatching network-vif-plugged-1e4a418b-b459-456b-ae99-26f8b034a7bc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:02:17 compute-0 nova_compute[254092]: 2025-11-25 17:02:17.617 254096 WARNING nova.compute.manager [req-445a1fac-a619-4442-8a67-7d622d5813c1 req-917b2277-50bd-49cc-b360-b7782a4b7591 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Received unexpected event network-vif-plugged-1e4a418b-b459-456b-ae99-26f8b034a7bc for instance with vm_state active and task_state None.
Nov 25 17:02:17 compute-0 systemd[1]: libpod-86d5aa6ce8e9763a08ef920a2d9d0a8c7666c45de7d7927a38ff1468cf35fa72.scope: Deactivated successfully.
Nov 25 17:02:17 compute-0 amazing_banach[381862]: 167 167
Nov 25 17:02:17 compute-0 conmon[381862]: conmon 86d5aa6ce8e9763a08ef <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-86d5aa6ce8e9763a08ef920a2d9d0a8c7666c45de7d7927a38ff1468cf35fa72.scope/container/memory.events
Nov 25 17:02:17 compute-0 podman[381830]: 2025-11-25 17:02:17.620259375 +0000 UTC m=+0.257993067 container died 86d5aa6ce8e9763a08ef920a2d9d0a8c7666c45de7d7927a38ff1468cf35fa72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_banach, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 17:02:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-547a6d4ab3a9f9769bbbe89fa9770da782f92dc761584148ea42b3105590374e-merged.mount: Deactivated successfully.
Nov 25 17:02:17 compute-0 podman[381870]: 2025-11-25 17:02:17.672820037 +0000 UTC m=+0.083216024 container create ff2760e56b93ab990cdcac45a2b9730ee0f14fd90bfe5228462928f0cdee5b5c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c9bf74ed-ebec-4028-924f-11064256236f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 17:02:17 compute-0 podman[381830]: 2025-11-25 17:02:17.698527052 +0000 UTC m=+0.336260734 container remove 86d5aa6ce8e9763a08ef920a2d9d0a8c7666c45de7d7927a38ff1468cf35fa72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_banach, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 17:02:17 compute-0 podman[381870]: 2025-11-25 17:02:17.612550684 +0000 UTC m=+0.022946701 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 17:02:17 compute-0 systemd[1]: Started libpod-conmon-ff2760e56b93ab990cdcac45a2b9730ee0f14fd90bfe5228462928f0cdee5b5c.scope.
Nov 25 17:02:17 compute-0 systemd[1]: libpod-conmon-86d5aa6ce8e9763a08ef920a2d9d0a8c7666c45de7d7927a38ff1468cf35fa72.scope: Deactivated successfully.
Nov 25 17:02:17 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:02:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a50b459a7e868aa11f7b7f6e6a9b7180c5b5afde03ead9ed833c5fc59e6efb1b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 17:02:17 compute-0 podman[381870]: 2025-11-25 17:02:17.766453235 +0000 UTC m=+0.176849252 container init ff2760e56b93ab990cdcac45a2b9730ee0f14fd90bfe5228462928f0cdee5b5c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c9bf74ed-ebec-4028-924f-11064256236f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:02:17 compute-0 podman[381870]: 2025-11-25 17:02:17.772407738 +0000 UTC m=+0.182803725 container start ff2760e56b93ab990cdcac45a2b9730ee0f14fd90bfe5228462928f0cdee5b5c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c9bf74ed-ebec-4028-924f-11064256236f, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 25 17:02:17 compute-0 neutron-haproxy-ovnmeta-c9bf74ed-ebec-4028-924f-11064256236f[381901]: [NOTICE]   (381908) : New worker (381910) forked
Nov 25 17:02:17 compute-0 neutron-haproxy-ovnmeta-c9bf74ed-ebec-4028-924f-11064256236f[381901]: [NOTICE]   (381908) : Loading success.
Nov 25 17:02:17 compute-0 podman[381924]: 2025-11-25 17:02:17.888193665 +0000 UTC m=+0.040328368 container create 99c2bdf1c0b492165eda5c5d9b56abb62096ce65efb47dfaaf07b5c13b414199 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_fermat, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:02:17 compute-0 systemd[1]: Started libpod-conmon-99c2bdf1c0b492165eda5c5d9b56abb62096ce65efb47dfaaf07b5c13b414199.scope.
Nov 25 17:02:17 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:02:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/594a17a76766e36cc9d55cd2b6097ebda9d11d6fcd781a54b9178d36e4c61adf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:02:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/594a17a76766e36cc9d55cd2b6097ebda9d11d6fcd781a54b9178d36e4c61adf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:02:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/594a17a76766e36cc9d55cd2b6097ebda9d11d6fcd781a54b9178d36e4c61adf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:02:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/594a17a76766e36cc9d55cd2b6097ebda9d11d6fcd781a54b9178d36e4c61adf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:02:17 compute-0 podman[381924]: 2025-11-25 17:02:17.869902022 +0000 UTC m=+0.022036725 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:02:17 compute-0 podman[381924]: 2025-11-25 17:02:17.971097158 +0000 UTC m=+0.123231871 container init 99c2bdf1c0b492165eda5c5d9b56abb62096ce65efb47dfaaf07b5c13b414199 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_fermat, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:02:17 compute-0 podman[381924]: 2025-11-25 17:02:17.982896921 +0000 UTC m=+0.135031654 container start 99c2bdf1c0b492165eda5c5d9b56abb62096ce65efb47dfaaf07b5c13b414199 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:02:17 compute-0 podman[381924]: 2025-11-25 17:02:17.987618111 +0000 UTC m=+0.139752824 container attach 99c2bdf1c0b492165eda5c5d9b56abb62096ce65efb47dfaaf07b5c13b414199 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:02:18 compute-0 ceph-mon[74985]: pgmap v2362: 321 pgs: 321 active+clean; 279 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 122 KiB/s rd, 101 KiB/s wr, 20 op/s
Nov 25 17:02:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2363: 321 pgs: 321 active+clean; 279 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s wr, 0 op/s
Nov 25 17:02:18 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:02:18 compute-0 nova_compute[254092]: 2025-11-25 17:02:18.823 254096 DEBUG oslo_concurrency.lockutils [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "interface-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-1e4a418b-b459-456b-ae99-26f8b034a7bc" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:02:18 compute-0 nova_compute[254092]: 2025-11-25 17:02:18.824 254096 DEBUG oslo_concurrency.lockutils [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "interface-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-1e4a418b-b459-456b-ae99-26f8b034a7bc" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:02:18 compute-0 nova_compute[254092]: 2025-11-25 17:02:18.838 254096 DEBUG nova.objects.instance [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'flavor' on Instance uuid f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:02:18 compute-0 nova_compute[254092]: 2025-11-25 17:02:18.874 254096 DEBUG nova.virt.libvirt.vif [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:01:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1605221886',display_name='tempest-TestNetworkBasicOps-server-1605221886',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1605221886',id=114,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG8GG+LWFgTdI2Lx3HIK8iitzYjRcEiXMosisS4j8Mff81tp3LiT3UmyZlgVw+9dc2rljXR4tNO0hQkUya+Lw6KQzFUnQWm0QB4mvWvH4UQzxuEdyxR/UzUx6isIg0+QoA==',key_name='tempest-TestNetworkBasicOps-823633184',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:01:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-4x69xfhl',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:01:33Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "address": "fa:16:3e:00:fe:32", "network": {"id": "c9bf74ed-ebec-4028-924f-11064256236f", "bridge": "br-int", "label": "tempest-network-smoke--963060460", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4a418b-b4", "ovs_interfaceid": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:02:18 compute-0 nova_compute[254092]: 2025-11-25 17:02:18.875 254096 DEBUG nova.network.os_vif_util [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "address": "fa:16:3e:00:fe:32", "network": {"id": "c9bf74ed-ebec-4028-924f-11064256236f", "bridge": "br-int", "label": "tempest-network-smoke--963060460", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4a418b-b4", "ovs_interfaceid": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:02:18 compute-0 nova_compute[254092]: 2025-11-25 17:02:18.876 254096 DEBUG nova.network.os_vif_util [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:fe:32,bridge_name='br-int',has_traffic_filtering=True,id=1e4a418b-b459-456b-ae99-26f8b034a7bc,network=Network(c9bf74ed-ebec-4028-924f-11064256236f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e4a418b-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:02:18 compute-0 nova_compute[254092]: 2025-11-25 17:02:18.881 254096 DEBUG nova.virt.libvirt.guest [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:00:fe:32"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1e4a418b-b4"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 17:02:18 compute-0 nova_compute[254092]: 2025-11-25 17:02:18.884 254096 DEBUG nova.virt.libvirt.guest [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:00:fe:32"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1e4a418b-b4"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 17:02:18 compute-0 nova_compute[254092]: 2025-11-25 17:02:18.888 254096 DEBUG nova.virt.libvirt.driver [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Attempting to detach device tap1e4a418b-b4 from instance f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 25 17:02:18 compute-0 nova_compute[254092]: 2025-11-25 17:02:18.888 254096 DEBUG nova.virt.libvirt.guest [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] detach device xml: <interface type="ethernet">
Nov 25 17:02:18 compute-0 nova_compute[254092]:   <mac address="fa:16:3e:00:fe:32"/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:   <model type="virtio"/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:   <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:   <mtu size="1442"/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:   <target dev="tap1e4a418b-b4"/>
Nov 25 17:02:18 compute-0 nova_compute[254092]: </interface>
Nov 25 17:02:18 compute-0 nova_compute[254092]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 25 17:02:18 compute-0 nova_compute[254092]: 2025-11-25 17:02:18.901 254096 DEBUG nova.virt.libvirt.guest [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:00:fe:32"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1e4a418b-b4"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 17:02:18 compute-0 nova_compute[254092]: 2025-11-25 17:02:18.905 254096 DEBUG nova.virt.libvirt.guest [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:00:fe:32"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1e4a418b-b4"/></interface>not found in domain: <domain type='kvm' id='146'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:   <name>instance-00000072</name>
Nov 25 17:02:18 compute-0 nova_compute[254092]:   <uuid>f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7</uuid>
Nov 25 17:02:18 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:02:18 compute-0 nova_compute[254092]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:   <nova:name>tempest-TestNetworkBasicOps-server-1605221886</nova:name>
Nov 25 17:02:18 compute-0 nova_compute[254092]:   <nova:creationTime>2025-11-25 17:02:16</nova:creationTime>
Nov 25 17:02:18 compute-0 nova_compute[254092]:   <nova:flavor name="m1.nano">
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <nova:memory>128</nova:memory>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <nova:disk>1</nova:disk>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <nova:swap>0</nova:swap>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <nova:vcpus>1</nova:vcpus>
Nov 25 17:02:18 compute-0 nova_compute[254092]:   </nova:flavor>
Nov 25 17:02:18 compute-0 nova_compute[254092]:   <nova:owner>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 17:02:18 compute-0 nova_compute[254092]:   </nova:owner>
Nov 25 17:02:18 compute-0 nova_compute[254092]:   <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:   <nova:ports>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <nova:port uuid="9a960a19-c599-4217-b99c-ac16fe6384b1">
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <nova:port uuid="1e4a418b-b459-456b-ae99-26f8b034a7bc">
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.24" ipVersion="4"/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 17:02:18 compute-0 nova_compute[254092]:   </nova:ports>
Nov 25 17:02:18 compute-0 nova_compute[254092]: </nova:instance>
Nov 25 17:02:18 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:02:18 compute-0 nova_compute[254092]:   <memory unit='KiB'>131072</memory>
Nov 25 17:02:18 compute-0 nova_compute[254092]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 17:02:18 compute-0 nova_compute[254092]:   <vcpu placement='static'>1</vcpu>
Nov 25 17:02:18 compute-0 nova_compute[254092]:   <resource>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <partition>/machine</partition>
Nov 25 17:02:18 compute-0 nova_compute[254092]:   </resource>
Nov 25 17:02:18 compute-0 nova_compute[254092]:   <sysinfo type='smbios'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <system>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <entry name='manufacturer'>RDO</entry>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <entry name='product'>OpenStack Compute</entry>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <entry name='serial'>f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7</entry>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <entry name='uuid'>f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7</entry>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <entry name='family'>Virtual Machine</entry>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     </system>
Nov 25 17:02:18 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:02:18 compute-0 nova_compute[254092]:   <os>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <boot dev='hd'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <smbios mode='sysinfo'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:   </os>
Nov 25 17:02:18 compute-0 blissful_fermat[381940]: {
Nov 25 17:02:18 compute-0 nova_compute[254092]:   <features>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:02:18 compute-0 blissful_fermat[381940]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:02:18 compute-0 blissful_fermat[381940]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <vmcoreinfo state='on'/>
Nov 25 17:02:18 compute-0 blissful_fermat[381940]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:02:18 compute-0 blissful_fermat[381940]:         "osd_id": 1,
Nov 25 17:02:18 compute-0 nova_compute[254092]:   </features>
Nov 25 17:02:18 compute-0 blissful_fermat[381940]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:02:18 compute-0 blissful_fermat[381940]:         "type": "bluestore"
Nov 25 17:02:18 compute-0 nova_compute[254092]:   <cpu mode='custom' match='exact' check='full'>
Nov 25 17:02:18 compute-0 blissful_fermat[381940]:     },
Nov 25 17:02:18 compute-0 blissful_fermat[381940]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 25 17:02:18 compute-0 blissful_fermat[381940]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:02:18 compute-0 blissful_fermat[381940]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <vendor>AMD</vendor>
Nov 25 17:02:18 compute-0 blissful_fermat[381940]:         "osd_id": 2,
Nov 25 17:02:18 compute-0 blissful_fermat[381940]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:02:18 compute-0 blissful_fermat[381940]:         "type": "bluestore"
Nov 25 17:02:18 compute-0 blissful_fermat[381940]:     },
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 17:02:18 compute-0 blissful_fermat[381940]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:02:18 compute-0 blissful_fermat[381940]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:02:18 compute-0 blissful_fermat[381940]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <feature policy='require' name='x2apic'/>
Nov 25 17:02:18 compute-0 blissful_fermat[381940]:         "osd_id": 0,
Nov 25 17:02:18 compute-0 blissful_fermat[381940]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:02:18 compute-0 blissful_fermat[381940]:         "type": "bluestore"
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <feature policy='require' name='tsc-deadline'/>
Nov 25 17:02:18 compute-0 blissful_fermat[381940]:     }
Nov 25 17:02:18 compute-0 blissful_fermat[381940]: }
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <feature policy='require' name='hypervisor'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <feature policy='require' name='tsc_adjust'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <feature policy='require' name='spec-ctrl'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <feature policy='require' name='stibp'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <feature policy='require' name='ssbd'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <feature policy='require' name='cmp_legacy'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <feature policy='require' name='overflow-recov'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <feature policy='require' name='succor'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <feature policy='require' name='ibrs'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <feature policy='require' name='amd-ssbd'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <feature policy='require' name='virt-ssbd'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <feature policy='disable' name='lbrv'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <feature policy='disable' name='tsc-scale'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <feature policy='disable' name='vmcb-clean'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <feature policy='disable' name='flushbyasid'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <feature policy='disable' name='pause-filter'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <feature policy='disable' name='pfthreshold'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <feature policy='disable' name='xsaves'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <feature policy='disable' name='svm'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <feature policy='require' name='topoext'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <feature policy='disable' name='npt'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <feature policy='disable' name='nrip-save'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:02:18 compute-0 nova_compute[254092]:   <clock offset='utc'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <timer name='pit' tickpolicy='delay'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <timer name='hpet' present='no'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:02:18 compute-0 nova_compute[254092]:   <on_poweroff>destroy</on_poweroff>
Nov 25 17:02:18 compute-0 nova_compute[254092]:   <on_reboot>restart</on_reboot>
Nov 25 17:02:18 compute-0 nova_compute[254092]:   <on_crash>destroy</on_crash>
Nov 25 17:02:18 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <disk type='network' device='disk'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <auth username='openstack'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:         <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <source protocol='rbd' name='vms/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7_disk' index='2'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:         <host name='192.168.122.100' port='6789'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       </source>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <target dev='vda' bus='virtio'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <alias name='virtio-disk0'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <disk type='network' device='cdrom'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <auth username='openstack'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:         <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <source protocol='rbd' name='vms/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7_disk.config' index='1'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:         <host name='192.168.122.100' port='6789'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       </source>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <target dev='sda' bus='sata'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <readonly/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <alias name='sata0-0-0'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <controller type='pci' index='0' model='pcie-root'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <alias name='pcie.0'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <target chassis='1' port='0x10'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <alias name='pci.1'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <target chassis='2' port='0x11'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <alias name='pci.2'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <target chassis='3' port='0x12'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <alias name='pci.3'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <target chassis='4' port='0x13'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <alias name='pci.4'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <target chassis='5' port='0x14'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <alias name='pci.5'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <target chassis='6' port='0x15'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <alias name='pci.6'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <target chassis='7' port='0x16'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <alias name='pci.7'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <target chassis='8' port='0x17'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <alias name='pci.8'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <target chassis='9' port='0x18'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <alias name='pci.9'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <target chassis='10' port='0x19'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <alias name='pci.10'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <target chassis='11' port='0x1a'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <alias name='pci.11'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <target chassis='12' port='0x1b'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <alias name='pci.12'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <target chassis='13' port='0x1c'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <alias name='pci.13'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <target chassis='14' port='0x1d'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <alias name='pci.14'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <target chassis='15' port='0x1e'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <alias name='pci.15'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <target chassis='16' port='0x1f'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <alias name='pci.16'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <target chassis='17' port='0x20'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <alias name='pci.17'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <target chassis='18' port='0x21'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <alias name='pci.18'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <target chassis='19' port='0x22'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <alias name='pci.19'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <target chassis='20' port='0x23'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <alias name='pci.20'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <target chassis='21' port='0x24'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <alias name='pci.21'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <target chassis='22' port='0x25'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <alias name='pci.22'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <target chassis='23' port='0x26'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <alias name='pci.23'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <target chassis='24' port='0x27'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <alias name='pci.24'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <target chassis='25' port='0x28'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <alias name='pci.25'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <model name='pcie-pci-bridge'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <alias name='pci.26'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <alias name='usb'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <controller type='sata' index='0'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <alias name='ide'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <interface type='ethernet'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <mac address='fa:16:3e:a8:c9:7b'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <target dev='tap9a960a19-c5'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <model type='virtio'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <mtu size='1442'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <alias name='net0'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <interface type='ethernet'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <mac address='fa:16:3e:00:fe:32'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <target dev='tap1e4a418b-b4'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <model type='virtio'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <mtu size='1442'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <alias name='net1'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <serial type='pty'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <source path='/dev/pts/0'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <log file='/var/lib/nova/instances/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7/console.log' append='off'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <target type='isa-serial' port='0'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:         <model name='isa-serial'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       </target>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <alias name='serial0'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <console type='pty' tty='/dev/pts/0'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <source path='/dev/pts/0'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <log file='/var/lib/nova/instances/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7/console.log' append='off'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <target type='serial' port='0'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <alias name='serial0'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     </console>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <input type='tablet' bus='usb'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <alias name='input0'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <address type='usb' bus='0' port='1'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     </input>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <input type='mouse' bus='ps2'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <alias name='input1'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     </input>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <input type='keyboard' bus='ps2'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <alias name='input2'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     </input>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <listen type='address' address='::0'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     </graphics>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <audio id='1' type='none'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <video>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <model type='virtio' heads='1' primary='yes'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <alias name='video0'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     </video>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <watchdog model='itco' action='reset'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <alias name='watchdog0'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     </watchdog>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <memballoon model='virtio'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <stats period='10'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <alias name='balloon0'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <rng model='virtio'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <backend model='random'>/dev/urandom</backend>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <alias name='rng0'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:02:18 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:02:18 compute-0 nova_compute[254092]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <label>system_u:system_r:svirt_t:s0:c147,c521</label>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c147,c521</imagelabel>
Nov 25 17:02:18 compute-0 nova_compute[254092]:   </seclabel>
Nov 25 17:02:18 compute-0 nova_compute[254092]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <label>+107:+107</label>
Nov 25 17:02:18 compute-0 nova_compute[254092]:     <imagelabel>+107:+107</imagelabel>
Nov 25 17:02:18 compute-0 nova_compute[254092]:   </seclabel>
Nov 25 17:02:18 compute-0 nova_compute[254092]: </domain>
Nov 25 17:02:18 compute-0 nova_compute[254092]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 17:02:18 compute-0 nova_compute[254092]: 2025-11-25 17:02:18.907 254096 INFO nova.virt.libvirt.driver [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully detached device tap1e4a418b-b4 from instance f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7 from the persistent domain config.
Nov 25 17:02:18 compute-0 nova_compute[254092]: 2025-11-25 17:02:18.907 254096 DEBUG nova.virt.libvirt.driver [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] (1/8): Attempting to detach device tap1e4a418b-b4 with device alias net1 from instance f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 25 17:02:18 compute-0 nova_compute[254092]: 2025-11-25 17:02:18.907 254096 DEBUG nova.virt.libvirt.guest [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] detach device xml: <interface type="ethernet">
Nov 25 17:02:18 compute-0 nova_compute[254092]:   <mac address="fa:16:3e:00:fe:32"/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:   <model type="virtio"/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:   <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:   <mtu size="1442"/>
Nov 25 17:02:18 compute-0 nova_compute[254092]:   <target dev="tap1e4a418b-b4"/>
Nov 25 17:02:18 compute-0 nova_compute[254092]: </interface>
Nov 25 17:02:18 compute-0 nova_compute[254092]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 25 17:02:18 compute-0 systemd[1]: libpod-99c2bdf1c0b492165eda5c5d9b56abb62096ce65efb47dfaaf07b5c13b414199.scope: Deactivated successfully.
Nov 25 17:02:18 compute-0 podman[381924]: 2025-11-25 17:02:18.933828454 +0000 UTC m=+1.085963147 container died 99c2bdf1c0b492165eda5c5d9b56abb62096ce65efb47dfaaf07b5c13b414199 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 17:02:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-594a17a76766e36cc9d55cd2b6097ebda9d11d6fcd781a54b9178d36e4c61adf-merged.mount: Deactivated successfully.
Nov 25 17:02:18 compute-0 podman[381924]: 2025-11-25 17:02:18.998289662 +0000 UTC m=+1.150424345 container remove 99c2bdf1c0b492165eda5c5d9b56abb62096ce65efb47dfaaf07b5c13b414199 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_fermat, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:02:19 compute-0 kernel: tap1e4a418b-b4 (unregistering): left promiscuous mode
Nov 25 17:02:19 compute-0 systemd[1]: libpod-conmon-99c2bdf1c0b492165eda5c5d9b56abb62096ce65efb47dfaaf07b5c13b414199.scope: Deactivated successfully.
Nov 25 17:02:19 compute-0 NetworkManager[48891]: <info>  [1764090139.0063] device (tap1e4a418b-b4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:02:19 compute-0 ovn_controller[153477]: 2025-11-25T17:02:19Z|01186|binding|INFO|Releasing lport 1e4a418b-b459-456b-ae99-26f8b034a7bc from this chassis (sb_readonly=0)
Nov 25 17:02:19 compute-0 ovn_controller[153477]: 2025-11-25T17:02:19Z|01187|binding|INFO|Setting lport 1e4a418b-b459-456b-ae99-26f8b034a7bc down in Southbound
Nov 25 17:02:19 compute-0 ovn_controller[153477]: 2025-11-25T17:02:19Z|01188|binding|INFO|Removing iface tap1e4a418b-b4 ovn-installed in OVS
Nov 25 17:02:19 compute-0 nova_compute[254092]: 2025-11-25 17:02:19.012 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:19 compute-0 nova_compute[254092]: 2025-11-25 17:02:19.014 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:19 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:19.020 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:fe:32 10.100.0.24'], port_security=['fa:16:3e:00:fe:32 10.100.0.24'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.24/28', 'neutron:device_id': 'f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c9bf74ed-ebec-4028-924f-11064256236f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '4', 'neutron:security_group_ids': '63a1678d-cfcd-4cc9-9221-d7d228d35cc1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eadf10b2-41b2-4301-a0d6-9c1d0e6514cb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1e4a418b-b459-456b-ae99-26f8b034a7bc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:02:19 compute-0 nova_compute[254092]: 2025-11-25 17:02:19.022 254096 DEBUG nova.virt.libvirt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Received event <DeviceRemovedEvent: 1764090139.0218506, f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 25 17:02:19 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:19.022 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1e4a418b-b459-456b-ae99-26f8b034a7bc in datapath c9bf74ed-ebec-4028-924f-11064256236f unbound from our chassis
Nov 25 17:02:19 compute-0 nova_compute[254092]: 2025-11-25 17:02:19.023 254096 DEBUG nova.virt.libvirt.driver [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Start waiting for the detach event from libvirt for device tap1e4a418b-b4 with device alias net1 for instance f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 25 17:02:19 compute-0 nova_compute[254092]: 2025-11-25 17:02:19.023 254096 DEBUG nova.virt.libvirt.guest [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:00:fe:32"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1e4a418b-b4"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 17:02:19 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:19.024 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c9bf74ed-ebec-4028-924f-11064256236f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 17:02:19 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:19.025 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[626d8409-1afe-419b-b42f-f1a39c43b373]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:19 compute-0 nova_compute[254092]: 2025-11-25 17:02:19.026 254096 DEBUG nova.virt.libvirt.guest [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:00:fe:32"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1e4a418b-b4"/></interface>not found in domain: <domain type='kvm' id='146'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:   <name>instance-00000072</name>
Nov 25 17:02:19 compute-0 nova_compute[254092]:   <uuid>f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7</uuid>
Nov 25 17:02:19 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:02:19 compute-0 nova_compute[254092]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:   <nova:name>tempest-TestNetworkBasicOps-server-1605221886</nova:name>
Nov 25 17:02:19 compute-0 nova_compute[254092]:   <nova:creationTime>2025-11-25 17:02:16</nova:creationTime>
Nov 25 17:02:19 compute-0 nova_compute[254092]:   <nova:flavor name="m1.nano">
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <nova:memory>128</nova:memory>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <nova:disk>1</nova:disk>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <nova:swap>0</nova:swap>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <nova:vcpus>1</nova:vcpus>
Nov 25 17:02:19 compute-0 nova_compute[254092]:   </nova:flavor>
Nov 25 17:02:19 compute-0 nova_compute[254092]:   <nova:owner>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 17:02:19 compute-0 nova_compute[254092]:   </nova:owner>
Nov 25 17:02:19 compute-0 nova_compute[254092]:   <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:   <nova:ports>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <nova:port uuid="9a960a19-c599-4217-b99c-ac16fe6384b1">
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <nova:port uuid="1e4a418b-b459-456b-ae99-26f8b034a7bc">
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.24" ipVersion="4"/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 17:02:19 compute-0 nova_compute[254092]:   </nova:ports>
Nov 25 17:02:19 compute-0 nova_compute[254092]: </nova:instance>
Nov 25 17:02:19 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:02:19 compute-0 nova_compute[254092]:   <memory unit='KiB'>131072</memory>
Nov 25 17:02:19 compute-0 nova_compute[254092]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 17:02:19 compute-0 nova_compute[254092]:   <vcpu placement='static'>1</vcpu>
Nov 25 17:02:19 compute-0 nova_compute[254092]:   <resource>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <partition>/machine</partition>
Nov 25 17:02:19 compute-0 nova_compute[254092]:   </resource>
Nov 25 17:02:19 compute-0 nova_compute[254092]:   <sysinfo type='smbios'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <system>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <entry name='manufacturer'>RDO</entry>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <entry name='product'>OpenStack Compute</entry>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <entry name='serial'>f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7</entry>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <entry name='uuid'>f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7</entry>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <entry name='family'>Virtual Machine</entry>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     </system>
Nov 25 17:02:19 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:02:19 compute-0 nova_compute[254092]:   <os>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <boot dev='hd'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <smbios mode='sysinfo'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:   </os>
Nov 25 17:02:19 compute-0 nova_compute[254092]:   <features>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <vmcoreinfo state='on'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:   </features>
Nov 25 17:02:19 compute-0 nova_compute[254092]:   <cpu mode='custom' match='exact' check='full'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <vendor>AMD</vendor>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <feature policy='require' name='x2apic'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <feature policy='require' name='tsc-deadline'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <feature policy='require' name='hypervisor'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <feature policy='require' name='tsc_adjust'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <feature policy='require' name='spec-ctrl'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <feature policy='require' name='stibp'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <feature policy='require' name='ssbd'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <feature policy='require' name='cmp_legacy'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <feature policy='require' name='overflow-recov'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <feature policy='require' name='succor'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <feature policy='require' name='ibrs'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <feature policy='require' name='amd-ssbd'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <feature policy='require' name='virt-ssbd'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <feature policy='disable' name='lbrv'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <feature policy='disable' name='tsc-scale'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <feature policy='disable' name='vmcb-clean'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <feature policy='disable' name='flushbyasid'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <feature policy='disable' name='pause-filter'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <feature policy='disable' name='pfthreshold'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <feature policy='disable' name='xsaves'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <feature policy='disable' name='svm'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <feature policy='require' name='topoext'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <feature policy='disable' name='npt'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <feature policy='disable' name='nrip-save'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:02:19 compute-0 nova_compute[254092]:   <clock offset='utc'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <timer name='pit' tickpolicy='delay'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <timer name='hpet' present='no'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:02:19 compute-0 nova_compute[254092]:   <on_poweroff>destroy</on_poweroff>
Nov 25 17:02:19 compute-0 nova_compute[254092]:   <on_reboot>restart</on_reboot>
Nov 25 17:02:19 compute-0 nova_compute[254092]:   <on_crash>destroy</on_crash>
Nov 25 17:02:19 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <disk type='network' device='disk'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <auth username='openstack'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:         <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <source protocol='rbd' name='vms/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7_disk' index='2'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:         <host name='192.168.122.100' port='6789'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       </source>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <target dev='vda' bus='virtio'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <alias name='virtio-disk0'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <disk type='network' device='cdrom'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <auth username='openstack'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:         <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <source protocol='rbd' name='vms/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7_disk.config' index='1'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:         <host name='192.168.122.100' port='6789'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       </source>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <target dev='sda' bus='sata'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <readonly/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <alias name='sata0-0-0'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <controller type='pci' index='0' model='pcie-root'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <alias name='pcie.0'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <target chassis='1' port='0x10'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <alias name='pci.1'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <target chassis='2' port='0x11'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <alias name='pci.2'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <target chassis='3' port='0x12'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <alias name='pci.3'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <target chassis='4' port='0x13'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <alias name='pci.4'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <target chassis='5' port='0x14'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <alias name='pci.5'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <target chassis='6' port='0x15'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <alias name='pci.6'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <target chassis='7' port='0x16'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <alias name='pci.7'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <target chassis='8' port='0x17'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <alias name='pci.8'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <target chassis='9' port='0x18'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <alias name='pci.9'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <target chassis='10' port='0x19'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <alias name='pci.10'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <target chassis='11' port='0x1a'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <alias name='pci.11'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <target chassis='12' port='0x1b'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <alias name='pci.12'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <target chassis='13' port='0x1c'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <alias name='pci.13'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <target chassis='14' port='0x1d'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <alias name='pci.14'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <target chassis='15' port='0x1e'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <alias name='pci.15'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <target chassis='16' port='0x1f'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <alias name='pci.16'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <target chassis='17' port='0x20'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <alias name='pci.17'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 17:02:19 compute-0 sudo[381759]: pam_unix(sudo:session): session closed for user root
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <target chassis='18' port='0x21'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <alias name='pci.18'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <target chassis='19' port='0x22'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <alias name='pci.19'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <target chassis='20' port='0x23'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <alias name='pci.20'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <target chassis='21' port='0x24'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <alias name='pci.21'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <target chassis='22' port='0x25'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <alias name='pci.22'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <target chassis='23' port='0x26'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <alias name='pci.23'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <target chassis='24' port='0x27'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <alias name='pci.24'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <target chassis='25' port='0x28'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <alias name='pci.25'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <model name='pcie-pci-bridge'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <alias name='pci.26'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <alias name='usb'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <controller type='sata' index='0'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <alias name='ide'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <interface type='ethernet'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <mac address='fa:16:3e:a8:c9:7b'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <target dev='tap9a960a19-c5'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <model type='virtio'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <mtu size='1442'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <alias name='net0'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <serial type='pty'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <source path='/dev/pts/0'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <log file='/var/lib/nova/instances/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7/console.log' append='off'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <target type='isa-serial' port='0'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:         <model name='isa-serial'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       </target>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <alias name='serial0'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <console type='pty' tty='/dev/pts/0'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <source path='/dev/pts/0'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <log file='/var/lib/nova/instances/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7/console.log' append='off'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <target type='serial' port='0'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <alias name='serial0'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     </console>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <input type='tablet' bus='usb'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <alias name='input0'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <address type='usb' bus='0' port='1'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     </input>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <input type='mouse' bus='ps2'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <alias name='input1'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     </input>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <input type='keyboard' bus='ps2'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <alias name='input2'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     </input>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <listen type='address' address='::0'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     </graphics>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <audio id='1' type='none'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <video>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <model type='virtio' heads='1' primary='yes'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <alias name='video0'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     </video>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <watchdog model='itco' action='reset'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <alias name='watchdog0'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     </watchdog>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <memballoon model='virtio'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <stats period='10'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <alias name='balloon0'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <rng model='virtio'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <backend model='random'>/dev/urandom</backend>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <alias name='rng0'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:02:19 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:02:19 compute-0 nova_compute[254092]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <label>system_u:system_r:svirt_t:s0:c147,c521</label>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c147,c521</imagelabel>
Nov 25 17:02:19 compute-0 nova_compute[254092]:   </seclabel>
Nov 25 17:02:19 compute-0 nova_compute[254092]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <label>+107:+107</label>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <imagelabel>+107:+107</imagelabel>
Nov 25 17:02:19 compute-0 nova_compute[254092]:   </seclabel>
Nov 25 17:02:19 compute-0 nova_compute[254092]: </domain>
Nov 25 17:02:19 compute-0 nova_compute[254092]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 17:02:19 compute-0 nova_compute[254092]: 2025-11-25 17:02:19.026 254096 INFO nova.virt.libvirt.driver [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully detached device tap1e4a418b-b4 from instance f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7 from the live domain config.
Nov 25 17:02:19 compute-0 nova_compute[254092]: 2025-11-25 17:02:19.027 254096 DEBUG nova.virt.libvirt.vif [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:01:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1605221886',display_name='tempest-TestNetworkBasicOps-server-1605221886',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1605221886',id=114,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG8GG+LWFgTdI2Lx3HIK8iitzYjRcEiXMosisS4j8Mff81tp3LiT3UmyZlgVw+9dc2rljXR4tNO0hQkUya+Lw6KQzFUnQWm0QB4mvWvH4UQzxuEdyxR/UzUx6isIg0+QoA==',key_name='tempest-TestNetworkBasicOps-823633184',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:01:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-4x69xfhl',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:01:33Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "address": "fa:16:3e:00:fe:32", "network": {"id": "c9bf74ed-ebec-4028-924f-11064256236f", "bridge": "br-int", "label": "tempest-network-smoke--963060460", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4a418b-b4", "ovs_interfaceid": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:02:19 compute-0 nova_compute[254092]: 2025-11-25 17:02:19.027 254096 DEBUG nova.network.os_vif_util [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "address": "fa:16:3e:00:fe:32", "network": {"id": "c9bf74ed-ebec-4028-924f-11064256236f", "bridge": "br-int", "label": "tempest-network-smoke--963060460", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4a418b-b4", "ovs_interfaceid": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:02:19 compute-0 nova_compute[254092]: 2025-11-25 17:02:19.028 254096 DEBUG nova.network.os_vif_util [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:fe:32,bridge_name='br-int',has_traffic_filtering=True,id=1e4a418b-b459-456b-ae99-26f8b034a7bc,network=Network(c9bf74ed-ebec-4028-924f-11064256236f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e4a418b-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:02:19 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:19.026 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c9bf74ed-ebec-4028-924f-11064256236f namespace which is not needed anymore
Nov 25 17:02:19 compute-0 nova_compute[254092]: 2025-11-25 17:02:19.028 254096 DEBUG os_vif [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:fe:32,bridge_name='br-int',has_traffic_filtering=True,id=1e4a418b-b459-456b-ae99-26f8b034a7bc,network=Network(c9bf74ed-ebec-4028-924f-11064256236f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e4a418b-b4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:02:19 compute-0 nova_compute[254092]: 2025-11-25 17:02:19.030 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:19 compute-0 nova_compute[254092]: 2025-11-25 17:02:19.030 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e4a418b-b4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:02:19 compute-0 nova_compute[254092]: 2025-11-25 17:02:19.032 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:02:19 compute-0 nova_compute[254092]: 2025-11-25 17:02:19.035 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:19 compute-0 nova_compute[254092]: 2025-11-25 17:02:19.037 254096 INFO os_vif [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:fe:32,bridge_name='br-int',has_traffic_filtering=True,id=1e4a418b-b459-456b-ae99-26f8b034a7bc,network=Network(c9bf74ed-ebec-4028-924f-11064256236f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e4a418b-b4')
Nov 25 17:02:19 compute-0 nova_compute[254092]: 2025-11-25 17:02:19.038 254096 DEBUG nova.virt.libvirt.guest [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:02:19 compute-0 nova_compute[254092]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:   <nova:name>tempest-TestNetworkBasicOps-server-1605221886</nova:name>
Nov 25 17:02:19 compute-0 nova_compute[254092]:   <nova:creationTime>2025-11-25 17:02:19</nova:creationTime>
Nov 25 17:02:19 compute-0 nova_compute[254092]:   <nova:flavor name="m1.nano">
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <nova:memory>128</nova:memory>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <nova:disk>1</nova:disk>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <nova:swap>0</nova:swap>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <nova:vcpus>1</nova:vcpus>
Nov 25 17:02:19 compute-0 nova_compute[254092]:   </nova:flavor>
Nov 25 17:02:19 compute-0 nova_compute[254092]:   <nova:owner>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 17:02:19 compute-0 nova_compute[254092]:   </nova:owner>
Nov 25 17:02:19 compute-0 nova_compute[254092]:   <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:   <nova:ports>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     <nova:port uuid="9a960a19-c599-4217-b99c-ac16fe6384b1">
Nov 25 17:02:19 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 17:02:19 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 17:02:19 compute-0 nova_compute[254092]:   </nova:ports>
Nov 25 17:02:19 compute-0 nova_compute[254092]: </nova:instance>
Nov 25 17:02:19 compute-0 nova_compute[254092]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 25 17:02:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:02:19 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:02:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:02:19 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:02:19 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev c9c5aa85-6fef-446f-ac04-64e6f8fc6759 does not exist
Nov 25 17:02:19 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev a954c476-8917-471a-8798-15741ae50634 does not exist
Nov 25 17:02:19 compute-0 sudo[381995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:02:19 compute-0 sudo[381995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:02:19 compute-0 sudo[381995]: pam_unix(sudo:session): session closed for user root
Nov 25 17:02:19 compute-0 neutron-haproxy-ovnmeta-c9bf74ed-ebec-4028-924f-11064256236f[381901]: [NOTICE]   (381908) : haproxy version is 2.8.14-c23fe91
Nov 25 17:02:19 compute-0 neutron-haproxy-ovnmeta-c9bf74ed-ebec-4028-924f-11064256236f[381901]: [NOTICE]   (381908) : path to executable is /usr/sbin/haproxy
Nov 25 17:02:19 compute-0 neutron-haproxy-ovnmeta-c9bf74ed-ebec-4028-924f-11064256236f[381901]: [WARNING]  (381908) : Exiting Master process...
Nov 25 17:02:19 compute-0 neutron-haproxy-ovnmeta-c9bf74ed-ebec-4028-924f-11064256236f[381901]: [ALERT]    (381908) : Current worker (381910) exited with code 143 (Terminated)
Nov 25 17:02:19 compute-0 neutron-haproxy-ovnmeta-c9bf74ed-ebec-4028-924f-11064256236f[381901]: [WARNING]  (381908) : All workers exited. Exiting... (0)
Nov 25 17:02:19 compute-0 systemd[1]: libpod-ff2760e56b93ab990cdcac45a2b9730ee0f14fd90bfe5228462928f0cdee5b5c.scope: Deactivated successfully.
Nov 25 17:02:19 compute-0 conmon[381901]: conmon ff2760e56b93ab990cdc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ff2760e56b93ab990cdcac45a2b9730ee0f14fd90bfe5228462928f0cdee5b5c.scope/container/memory.events
Nov 25 17:02:19 compute-0 sudo[382038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:02:19 compute-0 sudo[382038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:02:19 compute-0 podman[382031]: 2025-11-25 17:02:19.166767663 +0000 UTC m=+0.049047937 container died ff2760e56b93ab990cdcac45a2b9730ee0f14fd90bfe5228462928f0cdee5b5c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c9bf74ed-ebec-4028-924f-11064256236f, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 17:02:19 compute-0 sudo[382038]: pam_unix(sudo:session): session closed for user root
Nov 25 17:02:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ff2760e56b93ab990cdcac45a2b9730ee0f14fd90bfe5228462928f0cdee5b5c-userdata-shm.mount: Deactivated successfully.
Nov 25 17:02:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-a50b459a7e868aa11f7b7f6e6a9b7180c5b5afde03ead9ed833c5fc59e6efb1b-merged.mount: Deactivated successfully.
Nov 25 17:02:19 compute-0 podman[382031]: 2025-11-25 17:02:19.222665576 +0000 UTC m=+0.104945850 container cleanup ff2760e56b93ab990cdcac45a2b9730ee0f14fd90bfe5228462928f0cdee5b5c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c9bf74ed-ebec-4028-924f-11064256236f, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 17:02:19 compute-0 systemd[1]: libpod-conmon-ff2760e56b93ab990cdcac45a2b9730ee0f14fd90bfe5228462928f0cdee5b5c.scope: Deactivated successfully.
Nov 25 17:02:19 compute-0 podman[382088]: 2025-11-25 17:02:19.297395176 +0000 UTC m=+0.051853724 container remove ff2760e56b93ab990cdcac45a2b9730ee0f14fd90bfe5228462928f0cdee5b5c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c9bf74ed-ebec-4028-924f-11064256236f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 17:02:19 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:19.303 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[836e79ef-df12-4906-943f-01e45a4bcec8]: (4, ('Tue Nov 25 05:02:19 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c9bf74ed-ebec-4028-924f-11064256236f (ff2760e56b93ab990cdcac45a2b9730ee0f14fd90bfe5228462928f0cdee5b5c)\nff2760e56b93ab990cdcac45a2b9730ee0f14fd90bfe5228462928f0cdee5b5c\nTue Nov 25 05:02:19 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c9bf74ed-ebec-4028-924f-11064256236f (ff2760e56b93ab990cdcac45a2b9730ee0f14fd90bfe5228462928f0cdee5b5c)\nff2760e56b93ab990cdcac45a2b9730ee0f14fd90bfe5228462928f0cdee5b5c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:19 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:19.305 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5a39944d-43de-4f60-93bf-2a7ece518358]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:19 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:19.306 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc9bf74ed-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:02:19 compute-0 nova_compute[254092]: 2025-11-25 17:02:19.307 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:19 compute-0 kernel: tapc9bf74ed-e0: left promiscuous mode
Nov 25 17:02:19 compute-0 nova_compute[254092]: 2025-11-25 17:02:19.320 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:19 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:19.323 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[57273f22-43c0-4cd4-b64c-53af824b4e3d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:19 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:19.337 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f30f7563-cd95-481c-be99-6875a5a0895e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:19 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:19.338 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c045344e-0b37-47b5-a62b-5877f897459d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:19 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:19.355 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[03bc83ad-f96d-4f0a-96c8-ca046a0dbd10]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 659449, 'reachable_time': 42360, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382103, 'error': None, 'target': 'ovnmeta-c9bf74ed-ebec-4028-924f-11064256236f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:19 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:19.357 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c9bf74ed-ebec-4028-924f-11064256236f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 17:02:19 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:19.358 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[b2f42fb5-3c6d-413d-a0ba-3f8f777bdea0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:19 compute-0 systemd[1]: run-netns-ovnmeta\x2dc9bf74ed\x2debec\x2d4028\x2d924f\x2d11064256236f.mount: Deactivated successfully.
Nov 25 17:02:19 compute-0 nova_compute[254092]: 2025-11-25 17:02:19.571 254096 DEBUG nova.network.neutron [req-41ad3168-ce28-4944-8753-6bd0f38c2de4 req-c840d07d-80b6-4454-85cf-6a3e2768a5c2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Updated VIF entry in instance network info cache for port 1e4a418b-b459-456b-ae99-26f8b034a7bc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:02:19 compute-0 nova_compute[254092]: 2025-11-25 17:02:19.572 254096 DEBUG nova.network.neutron [req-41ad3168-ce28-4944-8753-6bd0f38c2de4 req-c840d07d-80b6-4454-85cf-6a3e2768a5c2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Updating instance_info_cache with network_info: [{"id": "9a960a19-c599-4217-b99c-ac16fe6384b1", "address": "fa:16:3e:a8:c9:7b", "network": {"id": "131ae834-ee81-42ce-b61e-863b3a8d52e1", "bridge": "br-int", "label": "tempest-network-smoke--413727726", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a960a19-c5", "ovs_interfaceid": "9a960a19-c599-4217-b99c-ac16fe6384b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "address": "fa:16:3e:00:fe:32", "network": {"id": "c9bf74ed-ebec-4028-924f-11064256236f", "bridge": "br-int", "label": "tempest-network-smoke--963060460", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4a418b-b4", "ovs_interfaceid": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:02:19 compute-0 nova_compute[254092]: 2025-11-25 17:02:19.585 254096 DEBUG oslo_concurrency.lockutils [req-41ad3168-ce28-4944-8753-6bd0f38c2de4 req-c840d07d-80b6-4454-85cf-6a3e2768a5c2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:02:19 compute-0 nova_compute[254092]: 2025-11-25 17:02:19.720 254096 DEBUG nova.compute.manager [req-b90775f4-5eaa-4945-be8b-e70dc5f4bafd req-56a2781e-edac-4868-bf14-74a7c86db9cb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Received event network-vif-plugged-1e4a418b-b459-456b-ae99-26f8b034a7bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:02:19 compute-0 nova_compute[254092]: 2025-11-25 17:02:19.721 254096 DEBUG oslo_concurrency.lockutils [req-b90775f4-5eaa-4945-be8b-e70dc5f4bafd req-56a2781e-edac-4868-bf14-74a7c86db9cb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:02:19 compute-0 nova_compute[254092]: 2025-11-25 17:02:19.721 254096 DEBUG oslo_concurrency.lockutils [req-b90775f4-5eaa-4945-be8b-e70dc5f4bafd req-56a2781e-edac-4868-bf14-74a7c86db9cb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:02:19 compute-0 nova_compute[254092]: 2025-11-25 17:02:19.721 254096 DEBUG oslo_concurrency.lockutils [req-b90775f4-5eaa-4945-be8b-e70dc5f4bafd req-56a2781e-edac-4868-bf14-74a7c86db9cb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:02:19 compute-0 nova_compute[254092]: 2025-11-25 17:02:19.721 254096 DEBUG nova.compute.manager [req-b90775f4-5eaa-4945-be8b-e70dc5f4bafd req-56a2781e-edac-4868-bf14-74a7c86db9cb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] No waiting events found dispatching network-vif-plugged-1e4a418b-b459-456b-ae99-26f8b034a7bc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:02:19 compute-0 nova_compute[254092]: 2025-11-25 17:02:19.722 254096 WARNING nova.compute.manager [req-b90775f4-5eaa-4945-be8b-e70dc5f4bafd req-56a2781e-edac-4868-bf14-74a7c86db9cb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Received unexpected event network-vif-plugged-1e4a418b-b459-456b-ae99-26f8b034a7bc for instance with vm_state active and task_state None.
Nov 25 17:02:19 compute-0 nova_compute[254092]: 2025-11-25 17:02:19.929 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:20 compute-0 ceph-mon[74985]: pgmap v2363: 321 pgs: 321 active+clean; 279 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s wr, 0 op/s
Nov 25 17:02:20 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:02:20 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:02:20 compute-0 nova_compute[254092]: 2025-11-25 17:02:20.104 254096 DEBUG oslo_concurrency.lockutils [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "refresh_cache-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:02:20 compute-0 nova_compute[254092]: 2025-11-25 17:02:20.104 254096 DEBUG oslo_concurrency.lockutils [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquired lock "refresh_cache-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:02:20 compute-0 nova_compute[254092]: 2025-11-25 17:02:20.105 254096 DEBUG nova.network.neutron [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 17:02:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2364: 321 pgs: 321 active+clean; 279 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 5.0 KiB/s rd, 15 KiB/s wr, 0 op/s
Nov 25 17:02:21 compute-0 nova_compute[254092]: 2025-11-25 17:02:21.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:02:22 compute-0 ceph-mon[74985]: pgmap v2364: 321 pgs: 321 active+clean; 279 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 5.0 KiB/s rd, 15 KiB/s wr, 0 op/s
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.065 254096 DEBUG nova.compute.manager [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Received event network-vif-unplugged-1e4a418b-b459-456b-ae99-26f8b034a7bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.066 254096 DEBUG oslo_concurrency.lockutils [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.066 254096 DEBUG oslo_concurrency.lockutils [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.067 254096 DEBUG oslo_concurrency.lockutils [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.067 254096 DEBUG nova.compute.manager [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] No waiting events found dispatching network-vif-unplugged-1e4a418b-b459-456b-ae99-26f8b034a7bc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.067 254096 WARNING nova.compute.manager [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Received unexpected event network-vif-unplugged-1e4a418b-b459-456b-ae99-26f8b034a7bc for instance with vm_state active and task_state None.
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.068 254096 DEBUG nova.compute.manager [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Received event network-vif-plugged-1e4a418b-b459-456b-ae99-26f8b034a7bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.068 254096 DEBUG oslo_concurrency.lockutils [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.068 254096 DEBUG oslo_concurrency.lockutils [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.068 254096 DEBUG oslo_concurrency.lockutils [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.069 254096 DEBUG nova.compute.manager [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] No waiting events found dispatching network-vif-plugged-1e4a418b-b459-456b-ae99-26f8b034a7bc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.069 254096 WARNING nova.compute.manager [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Received unexpected event network-vif-plugged-1e4a418b-b459-456b-ae99-26f8b034a7bc for instance with vm_state active and task_state None.
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.070 254096 DEBUG nova.compute.manager [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Received event network-vif-deleted-1e4a418b-b459-456b-ae99-26f8b034a7bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.070 254096 INFO nova.compute.manager [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Neutron deleted interface 1e4a418b-b459-456b-ae99-26f8b034a7bc; detaching it from the instance and deleting it from the info cache
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.070 254096 DEBUG nova.network.neutron [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Updating instance_info_cache with network_info: [{"id": "9a960a19-c599-4217-b99c-ac16fe6384b1", "address": "fa:16:3e:a8:c9:7b", "network": {"id": "131ae834-ee81-42ce-b61e-863b3a8d52e1", "bridge": "br-int", "label": "tempest-network-smoke--413727726", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a960a19-c5", "ovs_interfaceid": "9a960a19-c599-4217-b99c-ac16fe6384b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.086 254096 DEBUG nova.objects.instance [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lazy-loading 'system_metadata' on Instance uuid f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.106 254096 DEBUG nova.objects.instance [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lazy-loading 'flavor' on Instance uuid f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.124 254096 DEBUG nova.virt.libvirt.vif [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:01:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1605221886',display_name='tempest-TestNetworkBasicOps-server-1605221886',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1605221886',id=114,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG8GG+LWFgTdI2Lx3HIK8iitzYjRcEiXMosisS4j8Mff81tp3LiT3UmyZlgVw+9dc2rljXR4tNO0hQkUya+Lw6KQzFUnQWm0QB4mvWvH4UQzxuEdyxR/UzUx6isIg0+QoA==',key_name='tempest-TestNetworkBasicOps-823633184',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:01:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-4x69xfhl',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:01:33Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "address": "fa:16:3e:00:fe:32", "network": {"id": "c9bf74ed-ebec-4028-924f-11064256236f", "bridge": "br-int", "label": "tempest-network-smoke--963060460", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4a418b-b4", "ovs_interfaceid": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.125 254096 DEBUG nova.network.os_vif_util [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Converting VIF {"id": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "address": "fa:16:3e:00:fe:32", "network": {"id": "c9bf74ed-ebec-4028-924f-11064256236f", "bridge": "br-int", "label": "tempest-network-smoke--963060460", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4a418b-b4", "ovs_interfaceid": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.126 254096 DEBUG nova.network.os_vif_util [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:fe:32,bridge_name='br-int',has_traffic_filtering=True,id=1e4a418b-b459-456b-ae99-26f8b034a7bc,network=Network(c9bf74ed-ebec-4028-924f-11064256236f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e4a418b-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.132 254096 DEBUG nova.virt.libvirt.guest [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:00:fe:32"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1e4a418b-b4"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.138 254096 DEBUG nova.virt.libvirt.guest [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:00:fe:32"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1e4a418b-b4"/></interface>not found in domain: <domain type='kvm' id='146'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <name>instance-00000072</name>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <uuid>f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7</uuid>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <nova:name>tempest-TestNetworkBasicOps-server-1605221886</nova:name>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <nova:creationTime>2025-11-25 17:02:19</nova:creationTime>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <nova:flavor name="m1.nano">
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <nova:memory>128</nova:memory>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <nova:disk>1</nova:disk>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <nova:swap>0</nova:swap>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <nova:vcpus>1</nova:vcpus>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   </nova:flavor>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <nova:owner>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   </nova:owner>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <nova:ports>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <nova:port uuid="9a960a19-c599-4217-b99c-ac16fe6384b1">
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   </nova:ports>
Nov 25 17:02:22 compute-0 nova_compute[254092]: </nova:instance>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <memory unit='KiB'>131072</memory>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <vcpu placement='static'>1</vcpu>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <resource>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <partition>/machine</partition>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   </resource>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <sysinfo type='smbios'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <system>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <entry name='manufacturer'>RDO</entry>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <entry name='product'>OpenStack Compute</entry>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <entry name='serial'>f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7</entry>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <entry name='uuid'>f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7</entry>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <entry name='family'>Virtual Machine</entry>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </system>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <os>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <boot dev='hd'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <smbios mode='sysinfo'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   </os>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <features>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <vmcoreinfo state='on'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   </features>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <cpu mode='custom' match='exact' check='full'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <vendor>AMD</vendor>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='require' name='x2apic'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='require' name='tsc-deadline'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='require' name='hypervisor'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='require' name='tsc_adjust'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='require' name='spec-ctrl'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='require' name='stibp'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='require' name='ssbd'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='require' name='cmp_legacy'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='require' name='overflow-recov'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='require' name='succor'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='require' name='ibrs'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='require' name='amd-ssbd'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='require' name='virt-ssbd'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='disable' name='lbrv'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='disable' name='tsc-scale'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='disable' name='vmcb-clean'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='disable' name='flushbyasid'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='disable' name='pause-filter'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='disable' name='pfthreshold'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='disable' name='xsaves'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='disable' name='svm'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='require' name='topoext'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='disable' name='npt'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='disable' name='nrip-save'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <clock offset='utc'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <timer name='pit' tickpolicy='delay'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <timer name='hpet' present='no'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <on_poweroff>destroy</on_poweroff>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <on_reboot>restart</on_reboot>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <on_crash>destroy</on_crash>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <disk type='network' device='disk'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <auth username='openstack'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:         <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <source protocol='rbd' name='vms/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7_disk' index='2'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:         <host name='192.168.122.100' port='6789'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       </source>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target dev='vda' bus='virtio'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='virtio-disk0'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <disk type='network' device='cdrom'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <auth username='openstack'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:         <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <source protocol='rbd' name='vms/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7_disk.config' index='1'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:         <host name='192.168.122.100' port='6789'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       </source>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target dev='sda' bus='sata'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <readonly/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='sata0-0-0'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='0' model='pcie-root'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pcie.0'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='1' port='0x10'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.1'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='2' port='0x11'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.2'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='3' port='0x12'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.3'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='4' port='0x13'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.4'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='5' port='0x14'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.5'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='6' port='0x15'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.6'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='7' port='0x16'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.7'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='8' port='0x17'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.8'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='9' port='0x18'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.9'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='10' port='0x19'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.10'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='11' port='0x1a'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.11'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='12' port='0x1b'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.12'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='13' port='0x1c'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.13'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='14' port='0x1d'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.14'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='15' port='0x1e'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.15'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='16' port='0x1f'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.16'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='17' port='0x20'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.17'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='18' port='0x21'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.18'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='19' port='0x22'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.19'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='20' port='0x23'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.20'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='21' port='0x24'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.21'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='22' port='0x25'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.22'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='23' port='0x26'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.23'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='24' port='0x27'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.24'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='25' port='0x28'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.25'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-pci-bridge'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.26'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='usb'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='sata' index='0'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='ide'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <interface type='ethernet'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <mac address='fa:16:3e:a8:c9:7b'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target dev='tap9a960a19-c5'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model type='virtio'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <mtu size='1442'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='net0'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <serial type='pty'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <source path='/dev/pts/0'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <log file='/var/lib/nova/instances/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7/console.log' append='off'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target type='isa-serial' port='0'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:         <model name='isa-serial'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       </target>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='serial0'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <console type='pty' tty='/dev/pts/0'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <source path='/dev/pts/0'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <log file='/var/lib/nova/instances/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7/console.log' append='off'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target type='serial' port='0'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='serial0'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </console>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <input type='tablet' bus='usb'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='input0'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='usb' bus='0' port='1'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </input>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <input type='mouse' bus='ps2'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='input1'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </input>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <input type='keyboard' bus='ps2'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='input2'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </input>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <listen type='address' address='::0'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </graphics>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <audio id='1' type='none'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <video>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model type='virtio' heads='1' primary='yes'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='video0'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </video>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <watchdog model='itco' action='reset'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='watchdog0'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </watchdog>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <memballoon model='virtio'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <stats period='10'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='balloon0'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <rng model='virtio'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <backend model='random'>/dev/urandom</backend>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='rng0'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <label>system_u:system_r:svirt_t:s0:c147,c521</label>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c147,c521</imagelabel>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   </seclabel>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <label>+107:+107</label>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <imagelabel>+107:+107</imagelabel>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   </seclabel>
Nov 25 17:02:22 compute-0 nova_compute[254092]: </domain>
Nov 25 17:02:22 compute-0 nova_compute[254092]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.139 254096 DEBUG nova.virt.libvirt.guest [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:00:fe:32"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1e4a418b-b4"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.145 254096 DEBUG nova.virt.libvirt.guest [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:00:fe:32"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1e4a418b-b4"/></interface>not found in domain: <domain type='kvm' id='146'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <name>instance-00000072</name>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <uuid>f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7</uuid>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <nova:name>tempest-TestNetworkBasicOps-server-1605221886</nova:name>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <nova:creationTime>2025-11-25 17:02:19</nova:creationTime>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <nova:flavor name="m1.nano">
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <nova:memory>128</nova:memory>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <nova:disk>1</nova:disk>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <nova:swap>0</nova:swap>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <nova:vcpus>1</nova:vcpus>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   </nova:flavor>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <nova:owner>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   </nova:owner>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <nova:ports>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <nova:port uuid="9a960a19-c599-4217-b99c-ac16fe6384b1">
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   </nova:ports>
Nov 25 17:02:22 compute-0 nova_compute[254092]: </nova:instance>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <memory unit='KiB'>131072</memory>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <vcpu placement='static'>1</vcpu>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <resource>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <partition>/machine</partition>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   </resource>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <sysinfo type='smbios'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <system>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <entry name='manufacturer'>RDO</entry>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <entry name='product'>OpenStack Compute</entry>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <entry name='serial'>f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7</entry>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <entry name='uuid'>f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7</entry>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <entry name='family'>Virtual Machine</entry>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </system>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <os>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <boot dev='hd'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <smbios mode='sysinfo'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   </os>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <features>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <vmcoreinfo state='on'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   </features>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <cpu mode='custom' match='exact' check='full'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <vendor>AMD</vendor>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='require' name='x2apic'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='require' name='tsc-deadline'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='require' name='hypervisor'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='require' name='tsc_adjust'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='require' name='spec-ctrl'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='require' name='stibp'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='require' name='ssbd'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='require' name='cmp_legacy'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='require' name='overflow-recov'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='require' name='succor'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='require' name='ibrs'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='require' name='amd-ssbd'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='require' name='virt-ssbd'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='disable' name='lbrv'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='disable' name='tsc-scale'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='disable' name='vmcb-clean'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='disable' name='flushbyasid'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='disable' name='pause-filter'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='disable' name='pfthreshold'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='disable' name='xsaves'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='disable' name='svm'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='require' name='topoext'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='disable' name='npt'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <feature policy='disable' name='nrip-save'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <clock offset='utc'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <timer name='pit' tickpolicy='delay'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <timer name='hpet' present='no'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <on_poweroff>destroy</on_poweroff>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <on_reboot>restart</on_reboot>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <on_crash>destroy</on_crash>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <disk type='network' device='disk'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <auth username='openstack'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:         <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <source protocol='rbd' name='vms/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7_disk' index='2'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:         <host name='192.168.122.100' port='6789'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       </source>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target dev='vda' bus='virtio'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='virtio-disk0'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <disk type='network' device='cdrom'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <auth username='openstack'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:         <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <source protocol='rbd' name='vms/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7_disk.config' index='1'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:         <host name='192.168.122.100' port='6789'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       </source>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target dev='sda' bus='sata'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <readonly/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='sata0-0-0'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='0' model='pcie-root'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pcie.0'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='1' port='0x10'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.1'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='2' port='0x11'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.2'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='3' port='0x12'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.3'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='4' port='0x13'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.4'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='5' port='0x14'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.5'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='6' port='0x15'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.6'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='7' port='0x16'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.7'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='8' port='0x17'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.8'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='9' port='0x18'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.9'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='10' port='0x19'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.10'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='11' port='0x1a'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.11'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='12' port='0x1b'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.12'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='13' port='0x1c'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.13'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='14' port='0x1d'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.14'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='15' port='0x1e'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.15'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='16' port='0x1f'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.16'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='17' port='0x20'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.17'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='18' port='0x21'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.18'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='19' port='0x22'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.19'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='20' port='0x23'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.20'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='21' port='0x24'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.21'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='22' port='0x25'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.22'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='23' port='0x26'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.23'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='24' port='0x27'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.24'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target chassis='25' port='0x28'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.25'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model name='pcie-pci-bridge'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='pci.26'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='usb'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <controller type='sata' index='0'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='ide'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <interface type='ethernet'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <mac address='fa:16:3e:a8:c9:7b'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target dev='tap9a960a19-c5'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model type='virtio'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <mtu size='1442'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='net0'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <serial type='pty'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <source path='/dev/pts/0'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <log file='/var/lib/nova/instances/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7/console.log' append='off'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target type='isa-serial' port='0'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:         <model name='isa-serial'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       </target>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='serial0'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <console type='pty' tty='/dev/pts/0'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <source path='/dev/pts/0'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <log file='/var/lib/nova/instances/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7/console.log' append='off'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <target type='serial' port='0'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='serial0'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </console>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <input type='tablet' bus='usb'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='input0'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='usb' bus='0' port='1'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </input>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <input type='mouse' bus='ps2'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='input1'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </input>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <input type='keyboard' bus='ps2'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='input2'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </input>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <listen type='address' address='::0'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </graphics>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <audio id='1' type='none'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <video>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <model type='virtio' heads='1' primary='yes'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='video0'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </video>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <watchdog model='itco' action='reset'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='watchdog0'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </watchdog>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <memballoon model='virtio'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <stats period='10'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='balloon0'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <rng model='virtio'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <backend model='random'>/dev/urandom</backend>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <alias name='rng0'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <label>system_u:system_r:svirt_t:s0:c147,c521</label>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c147,c521</imagelabel>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   </seclabel>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <label>+107:+107</label>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <imagelabel>+107:+107</imagelabel>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   </seclabel>
Nov 25 17:02:22 compute-0 nova_compute[254092]: </domain>
Nov 25 17:02:22 compute-0 nova_compute[254092]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.146 254096 WARNING nova.virt.libvirt.driver [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Detaching interface fa:16:3e:00:fe:32 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap1e4a418b-b4' not found.
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.147 254096 DEBUG nova.virt.libvirt.vif [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:01:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1605221886',display_name='tempest-TestNetworkBasicOps-server-1605221886',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1605221886',id=114,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG8GG+LWFgTdI2Lx3HIK8iitzYjRcEiXMosisS4j8Mff81tp3LiT3UmyZlgVw+9dc2rljXR4tNO0hQkUya+Lw6KQzFUnQWm0QB4mvWvH4UQzxuEdyxR/UzUx6isIg0+QoA==',key_name='tempest-TestNetworkBasicOps-823633184',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:01:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-4x69xfhl',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:01:33Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "address": "fa:16:3e:00:fe:32", "network": {"id": "c9bf74ed-ebec-4028-924f-11064256236f", "bridge": "br-int", "label": "tempest-network-smoke--963060460", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4a418b-b4", "ovs_interfaceid": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.147 254096 DEBUG nova.network.os_vif_util [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Converting VIF {"id": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "address": "fa:16:3e:00:fe:32", "network": {"id": "c9bf74ed-ebec-4028-924f-11064256236f", "bridge": "br-int", "label": "tempest-network-smoke--963060460", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4a418b-b4", "ovs_interfaceid": "1e4a418b-b459-456b-ae99-26f8b034a7bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.148 254096 DEBUG nova.network.os_vif_util [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:fe:32,bridge_name='br-int',has_traffic_filtering=True,id=1e4a418b-b459-456b-ae99-26f8b034a7bc,network=Network(c9bf74ed-ebec-4028-924f-11064256236f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e4a418b-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.149 254096 DEBUG os_vif [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:fe:32,bridge_name='br-int',has_traffic_filtering=True,id=1e4a418b-b459-456b-ae99-26f8b034a7bc,network=Network(c9bf74ed-ebec-4028-924f-11064256236f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e4a418b-b4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.151 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.151 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e4a418b-b4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.152 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.154 254096 INFO os_vif [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:fe:32,bridge_name='br-int',has_traffic_filtering=True,id=1e4a418b-b459-456b-ae99-26f8b034a7bc,network=Network(c9bf74ed-ebec-4028-924f-11064256236f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e4a418b-b4')
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.155 254096 DEBUG nova.virt.libvirt.guest [req-ec9328a5-6996-45bc-bc0d-44f930c002b9 req-70cf5104-6114-4648-ba55-e2896aba3b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <nova:name>tempest-TestNetworkBasicOps-server-1605221886</nova:name>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <nova:creationTime>2025-11-25 17:02:22</nova:creationTime>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <nova:flavor name="m1.nano">
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <nova:memory>128</nova:memory>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <nova:disk>1</nova:disk>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <nova:swap>0</nova:swap>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <nova:vcpus>1</nova:vcpus>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   </nova:flavor>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <nova:owner>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   </nova:owner>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   <nova:ports>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     <nova:port uuid="9a960a19-c599-4217-b99c-ac16fe6384b1">
Nov 25 17:02:22 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 17:02:22 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 17:02:22 compute-0 nova_compute[254092]:   </nova:ports>
Nov 25 17:02:22 compute-0 nova_compute[254092]: </nova:instance>
Nov 25 17:02:22 compute-0 nova_compute[254092]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 25 17:02:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2365: 321 pgs: 321 active+clean; 279 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 6.0 KiB/s rd, 16 KiB/s wr, 1 op/s
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.477 254096 INFO nova.network.neutron [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Port 1e4a418b-b459-456b-ae99-26f8b034a7bc from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.478 254096 DEBUG nova.network.neutron [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Updating instance_info_cache with network_info: [{"id": "9a960a19-c599-4217-b99c-ac16fe6384b1", "address": "fa:16:3e:a8:c9:7b", "network": {"id": "131ae834-ee81-42ce-b61e-863b3a8d52e1", "bridge": "br-int", "label": "tempest-network-smoke--413727726", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a960a19-c5", "ovs_interfaceid": "9a960a19-c599-4217-b99c-ac16fe6384b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.499 254096 DEBUG oslo_concurrency.lockutils [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Releasing lock "refresh_cache-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.522 254096 DEBUG oslo_concurrency.lockutils [None req-d5469595-0b60-44f4-9b7a-c9c1619e4bb5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "interface-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-1e4a418b-b459-456b-ae99-26f8b034a7bc" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.603 254096 DEBUG oslo_concurrency.lockutils [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Acquiring lock "1f0c80e2-19cd-43ba-881d-e24e5bcd62fb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.604 254096 DEBUG oslo_concurrency.lockutils [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Lock "1f0c80e2-19cd-43ba-881d-e24e5bcd62fb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.604 254096 DEBUG oslo_concurrency.lockutils [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Acquiring lock "1f0c80e2-19cd-43ba-881d-e24e5bcd62fb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.604 254096 DEBUG oslo_concurrency.lockutils [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Lock "1f0c80e2-19cd-43ba-881d-e24e5bcd62fb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.605 254096 DEBUG oslo_concurrency.lockutils [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Lock "1f0c80e2-19cd-43ba-881d-e24e5bcd62fb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.606 254096 INFO nova.compute.manager [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Terminating instance
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.606 254096 DEBUG nova.compute.manager [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 17:02:22 compute-0 ovn_controller[153477]: 2025-11-25T17:02:22Z|01189|binding|INFO|Releasing lport 2166482b-c36e-4cfe-a45d-40e1c4e6a3e6 from this chassis (sb_readonly=0)
Nov 25 17:02:22 compute-0 ovn_controller[153477]: 2025-11-25T17:02:22Z|01190|binding|INFO|Releasing lport 60519f01-35a8-45ac-b477-17b0e31a750f from this chassis (sb_readonly=0)
Nov 25 17:02:22 compute-0 ovn_controller[153477]: 2025-11-25T17:02:22Z|01191|binding|INFO|Releasing lport a599bfa4-c512-4c62-b0a4-4d2ab863ab24 from this chassis (sb_readonly=0)
Nov 25 17:02:22 compute-0 kernel: tap3751c8d6-0f (unregistering): left promiscuous mode
Nov 25 17:02:22 compute-0 NetworkManager[48891]: <info>  [1764090142.6693] device (tap3751c8d6-0f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:02:22 compute-0 ovn_controller[153477]: 2025-11-25T17:02:22Z|01192|binding|INFO|Releasing lport 3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 from this chassis (sb_readonly=0)
Nov 25 17:02:22 compute-0 ovn_controller[153477]: 2025-11-25T17:02:22Z|01193|binding|INFO|Setting lport 3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 down in Southbound
Nov 25 17:02:22 compute-0 ovn_controller[153477]: 2025-11-25T17:02:22Z|01194|binding|INFO|Removing iface tap3751c8d6-0f ovn-installed in OVS
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.703 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.706 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.708 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:22.716 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:6a:fb 10.100.0.7'], port_security=['fa:16:3e:cd:6a:fb 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '1f0c80e2-19cd-43ba-881d-e24e5bcd62fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3b79c328-9376-4e36-9211-72ee228f98d6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aa394380a92d48188f2de86f1a100c08', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1d20ac94-0311-45d3-bbc9-0b5ca7a32bc8 4b68beb0-85ec-4a9a-a335-1e3ed4aadbc5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ffca90a1-5def-405b-be68-948ef468bd95, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=3751c8d6-0f1e-4902-8016-cf37bf3c1ad3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:02:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:22.718 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 in datapath 3b79c328-9376-4e36-9211-72ee228f98d6 unbound from our chassis
Nov 25 17:02:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:22.720 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3b79c328-9376-4e36-9211-72ee228f98d6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.721 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:22.721 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e8bb8d31-c2cd-4e5b-8eca-4b1cc7cb58ce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:22.722 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3b79c328-9376-4e36-9211-72ee228f98d6 namespace which is not needed anymore
Nov 25 17:02:22 compute-0 systemd[1]: machine-qemu\x2d147\x2dinstance\x2d00000073.scope: Deactivated successfully.
Nov 25 17:02:22 compute-0 systemd[1]: machine-qemu\x2d147\x2dinstance\x2d00000073.scope: Consumed 15.922s CPU time.
Nov 25 17:02:22 compute-0 systemd-machined[216343]: Machine qemu-147-instance-00000073 terminated.
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.851 254096 INFO nova.virt.libvirt.driver [-] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Instance destroyed successfully.
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.851 254096 DEBUG nova.objects.instance [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Lazy-loading 'resources' on Instance uuid 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.863 254096 DEBUG nova.virt.libvirt.vif [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:01:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-877248969-access_point-1210434942',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-877248969-access_point-1210434942',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-877248969-acc',id=115,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNYG6882dVO70OVi4d9NVrB3PaeuhIrFZ+oR1NKshvHYJDUOm1rbaI60huuXoUEKrmzCPg+QgDBxi0uURLyDj9uJZlfeSkkPsCqzCs3wQ3F9X3LJ3PkXg4AAZvavey5RFw==',key_name='tempest-TestSecurityGroupsBasicOps-143893991',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:01:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='aa394380a92d48188f2de86f1a100c08',ramdisk_id='',reservation_id='r-2k83oce0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-877248969',owner_user_name='tempest-TestSecurityGroupsBasicOps-877248969-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:01:38Z,user_data=None,user_id='baa046e735b94aba93374dff061b9e77',uuid=1f0c80e2-19cd-43ba-881d-e24e5bcd62fb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3751c8d6-0f1e-4902-8016-cf37bf3c1ad3", "address": "fa:16:3e:cd:6a:fb", "network": {"id": "3b79c328-9376-4e36-9211-72ee228f98d6", "bridge": "br-int", "label": "tempest-network-smoke--1598826579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa394380a92d48188f2de86f1a100c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3751c8d6-0f", "ovs_interfaceid": "3751c8d6-0f1e-4902-8016-cf37bf3c1ad3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.863 254096 DEBUG nova.network.os_vif_util [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Converting VIF {"id": "3751c8d6-0f1e-4902-8016-cf37bf3c1ad3", "address": "fa:16:3e:cd:6a:fb", "network": {"id": "3b79c328-9376-4e36-9211-72ee228f98d6", "bridge": "br-int", "label": "tempest-network-smoke--1598826579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa394380a92d48188f2de86f1a100c08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3751c8d6-0f", "ovs_interfaceid": "3751c8d6-0f1e-4902-8016-cf37bf3c1ad3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.863 254096 DEBUG nova.network.os_vif_util [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cd:6a:fb,bridge_name='br-int',has_traffic_filtering=True,id=3751c8d6-0f1e-4902-8016-cf37bf3c1ad3,network=Network(3b79c328-9376-4e36-9211-72ee228f98d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3751c8d6-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.864 254096 DEBUG os_vif [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cd:6a:fb,bridge_name='br-int',has_traffic_filtering=True,id=3751c8d6-0f1e-4902-8016-cf37bf3c1ad3,network=Network(3b79c328-9376-4e36-9211-72ee228f98d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3751c8d6-0f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.865 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.866 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3751c8d6-0f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.867 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.869 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.870 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:22 compute-0 nova_compute[254092]: 2025-11-25 17:02:22.873 254096 INFO os_vif [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cd:6a:fb,bridge_name='br-int',has_traffic_filtering=True,id=3751c8d6-0f1e-4902-8016-cf37bf3c1ad3,network=Network(3b79c328-9376-4e36-9211-72ee228f98d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3751c8d6-0f')
Nov 25 17:02:22 compute-0 neutron-haproxy-ovnmeta-3b79c328-9376-4e36-9211-72ee228f98d6[380911]: [NOTICE]   (380915) : haproxy version is 2.8.14-c23fe91
Nov 25 17:02:22 compute-0 neutron-haproxy-ovnmeta-3b79c328-9376-4e36-9211-72ee228f98d6[380911]: [NOTICE]   (380915) : path to executable is /usr/sbin/haproxy
Nov 25 17:02:22 compute-0 neutron-haproxy-ovnmeta-3b79c328-9376-4e36-9211-72ee228f98d6[380911]: [WARNING]  (380915) : Exiting Master process...
Nov 25 17:02:22 compute-0 neutron-haproxy-ovnmeta-3b79c328-9376-4e36-9211-72ee228f98d6[380911]: [WARNING]  (380915) : Exiting Master process...
Nov 25 17:02:22 compute-0 neutron-haproxy-ovnmeta-3b79c328-9376-4e36-9211-72ee228f98d6[380911]: [ALERT]    (380915) : Current worker (380917) exited with code 143 (Terminated)
Nov 25 17:02:22 compute-0 neutron-haproxy-ovnmeta-3b79c328-9376-4e36-9211-72ee228f98d6[380911]: [WARNING]  (380915) : All workers exited. Exiting... (0)
Nov 25 17:02:22 compute-0 systemd[1]: libpod-00a20a85ab7e65128e1d1ac7bee10650c586484d09886324dcbaf6d802683d27.scope: Deactivated successfully.
Nov 25 17:02:22 compute-0 podman[382132]: 2025-11-25 17:02:22.926210768 +0000 UTC m=+0.060866521 container died 00a20a85ab7e65128e1d1ac7bee10650c586484d09886324dcbaf6d802683d27 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3b79c328-9376-4e36-9211-72ee228f98d6, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 17:02:22 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-00a20a85ab7e65128e1d1ac7bee10650c586484d09886324dcbaf6d802683d27-userdata-shm.mount: Deactivated successfully.
Nov 25 17:02:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-60bb6ae46dd6df8dbd8f0da40a4c941046d2ca1eb5dcd17ed4f8729c326d3da3-merged.mount: Deactivated successfully.
Nov 25 17:02:22 compute-0 podman[382132]: 2025-11-25 17:02:22.970134503 +0000 UTC m=+0.104790266 container cleanup 00a20a85ab7e65128e1d1ac7bee10650c586484d09886324dcbaf6d802683d27 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3b79c328-9376-4e36-9211-72ee228f98d6, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 25 17:02:22 compute-0 systemd[1]: libpod-conmon-00a20a85ab7e65128e1d1ac7bee10650c586484d09886324dcbaf6d802683d27.scope: Deactivated successfully.
Nov 25 17:02:23 compute-0 podman[382188]: 2025-11-25 17:02:23.037543972 +0000 UTC m=+0.045028696 container remove 00a20a85ab7e65128e1d1ac7bee10650c586484d09886324dcbaf6d802683d27 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3b79c328-9376-4e36-9211-72ee228f98d6, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 25 17:02:23 compute-0 nova_compute[254092]: 2025-11-25 17:02:23.042 254096 DEBUG nova.compute.manager [req-ec758bcc-0b1e-410c-872f-9f5479aa3a7e req-811d91b1-efdd-42c1-b47d-40fdab645b90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Received event network-vif-unplugged-3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:02:23 compute-0 nova_compute[254092]: 2025-11-25 17:02:23.043 254096 DEBUG oslo_concurrency.lockutils [req-ec758bcc-0b1e-410c-872f-9f5479aa3a7e req-811d91b1-efdd-42c1-b47d-40fdab645b90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1f0c80e2-19cd-43ba-881d-e24e5bcd62fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:02:23 compute-0 nova_compute[254092]: 2025-11-25 17:02:23.043 254096 DEBUG oslo_concurrency.lockutils [req-ec758bcc-0b1e-410c-872f-9f5479aa3a7e req-811d91b1-efdd-42c1-b47d-40fdab645b90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1f0c80e2-19cd-43ba-881d-e24e5bcd62fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:02:23 compute-0 nova_compute[254092]: 2025-11-25 17:02:23.043 254096 DEBUG oslo_concurrency.lockutils [req-ec758bcc-0b1e-410c-872f-9f5479aa3a7e req-811d91b1-efdd-42c1-b47d-40fdab645b90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1f0c80e2-19cd-43ba-881d-e24e5bcd62fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:02:23 compute-0 nova_compute[254092]: 2025-11-25 17:02:23.044 254096 DEBUG nova.compute.manager [req-ec758bcc-0b1e-410c-872f-9f5479aa3a7e req-811d91b1-efdd-42c1-b47d-40fdab645b90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] No waiting events found dispatching network-vif-unplugged-3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:02:23 compute-0 nova_compute[254092]: 2025-11-25 17:02:23.044 254096 DEBUG nova.compute.manager [req-ec758bcc-0b1e-410c-872f-9f5479aa3a7e req-811d91b1-efdd-42c1-b47d-40fdab645b90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Received event network-vif-unplugged-3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 17:02:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:23.044 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8c5fefb5-8966-43df-9f5d-5b76bd35e4af]: (4, ('Tue Nov 25 05:02:22 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3b79c328-9376-4e36-9211-72ee228f98d6 (00a20a85ab7e65128e1d1ac7bee10650c586484d09886324dcbaf6d802683d27)\n00a20a85ab7e65128e1d1ac7bee10650c586484d09886324dcbaf6d802683d27\nTue Nov 25 05:02:22 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3b79c328-9376-4e36-9211-72ee228f98d6 (00a20a85ab7e65128e1d1ac7bee10650c586484d09886324dcbaf6d802683d27)\n00a20a85ab7e65128e1d1ac7bee10650c586484d09886324dcbaf6d802683d27\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:23.046 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[feb87f30-9d6a-4902-9676-2991b4e9997c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:23.047 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3b79c328-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:02:23 compute-0 kernel: tap3b79c328-90: left promiscuous mode
Nov 25 17:02:23 compute-0 nova_compute[254092]: 2025-11-25 17:02:23.048 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:23 compute-0 nova_compute[254092]: 2025-11-25 17:02:23.061 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:23.064 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e12e6961-0806-4942-bd3e-5579198b75ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:23.077 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[64d2c186-33d6-43ee-84cf-a8e0fa1a7d2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:23.078 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[93cd08da-cc64-4628-a185-9aab6333b0dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:23.096 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[29b961be-8920-43e8-b44c-8f6d4b42e994]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655569, 'reachable_time': 29369, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382205, 'error': None, 'target': 'ovnmeta-3b79c328-9376-4e36-9211-72ee228f98d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:23 compute-0 systemd[1]: run-netns-ovnmeta\x2d3b79c328\x2d9376\x2d4e36\x2d9211\x2d72ee228f98d6.mount: Deactivated successfully.
Nov 25 17:02:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:23.099 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3b79c328-9376-4e36-9211-72ee228f98d6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 17:02:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:23.100 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[64fe4c1b-284d-4c31-92f3-8ea0498fc133]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:23 compute-0 nova_compute[254092]: 2025-11-25 17:02:23.249 254096 INFO nova.virt.libvirt.driver [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Deleting instance files /var/lib/nova/instances/1f0c80e2-19cd-43ba-881d-e24e5bcd62fb_del
Nov 25 17:02:23 compute-0 nova_compute[254092]: 2025-11-25 17:02:23.250 254096 INFO nova.virt.libvirt.driver [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Deletion of /var/lib/nova/instances/1f0c80e2-19cd-43ba-881d-e24e5bcd62fb_del complete
Nov 25 17:02:23 compute-0 nova_compute[254092]: 2025-11-25 17:02:23.305 254096 INFO nova.compute.manager [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Took 0.70 seconds to destroy the instance on the hypervisor.
Nov 25 17:02:23 compute-0 nova_compute[254092]: 2025-11-25 17:02:23.306 254096 DEBUG oslo.service.loopingcall [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 17:02:23 compute-0 nova_compute[254092]: 2025-11-25 17:02:23.306 254096 DEBUG nova.compute.manager [-] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 17:02:23 compute-0 nova_compute[254092]: 2025-11-25 17:02:23.306 254096 DEBUG nova.network.neutron [-] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 17:02:23 compute-0 nova_compute[254092]: 2025-11-25 17:02:23.772 254096 DEBUG oslo_concurrency.lockutils [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:02:23 compute-0 nova_compute[254092]: 2025-11-25 17:02:23.772 254096 DEBUG oslo_concurrency.lockutils [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:02:23 compute-0 nova_compute[254092]: 2025-11-25 17:02:23.773 254096 DEBUG oslo_concurrency.lockutils [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:02:23 compute-0 nova_compute[254092]: 2025-11-25 17:02:23.773 254096 DEBUG oslo_concurrency.lockutils [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:02:23 compute-0 nova_compute[254092]: 2025-11-25 17:02:23.773 254096 DEBUG oslo_concurrency.lockutils [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:02:23 compute-0 nova_compute[254092]: 2025-11-25 17:02:23.774 254096 INFO nova.compute.manager [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Terminating instance
Nov 25 17:02:23 compute-0 nova_compute[254092]: 2025-11-25 17:02:23.775 254096 DEBUG nova.compute.manager [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 17:02:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:02:23 compute-0 kernel: tap9a960a19-c5 (unregistering): left promiscuous mode
Nov 25 17:02:23 compute-0 NetworkManager[48891]: <info>  [1764090143.8139] device (tap9a960a19-c5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:02:23 compute-0 ovn_controller[153477]: 2025-11-25T17:02:23Z|01195|binding|INFO|Releasing lport 9a960a19-c599-4217-b99c-ac16fe6384b1 from this chassis (sb_readonly=0)
Nov 25 17:02:23 compute-0 ovn_controller[153477]: 2025-11-25T17:02:23Z|01196|binding|INFO|Setting lport 9a960a19-c599-4217-b99c-ac16fe6384b1 down in Southbound
Nov 25 17:02:23 compute-0 nova_compute[254092]: 2025-11-25 17:02:23.820 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:23 compute-0 ovn_controller[153477]: 2025-11-25T17:02:23Z|01197|binding|INFO|Removing iface tap9a960a19-c5 ovn-installed in OVS
Nov 25 17:02:23 compute-0 ovn_controller[153477]: 2025-11-25T17:02:23Z|01198|binding|INFO|Releasing lport 2166482b-c36e-4cfe-a45d-40e1c4e6a3e6 from this chassis (sb_readonly=0)
Nov 25 17:02:23 compute-0 ovn_controller[153477]: 2025-11-25T17:02:23Z|01199|binding|INFO|Releasing lport 60519f01-35a8-45ac-b477-17b0e31a750f from this chassis (sb_readonly=0)
Nov 25 17:02:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:23.832 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:c9:7b 10.100.0.3'], port_security=['fa:16:3e:a8:c9:7b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-131ae834-ee81-42ce-b61e-863b3a8d52e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '4', 'neutron:security_group_ids': '573985ee-22d8-4e8a-b764-ea06c40f2ee7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ece2fc14-3f44-4554-9543-96a461b3adc3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=9a960a19-c599-4217-b99c-ac16fe6384b1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:02:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:23.833 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 9a960a19-c599-4217-b99c-ac16fe6384b1 in datapath 131ae834-ee81-42ce-b61e-863b3a8d52e1 unbound from our chassis
Nov 25 17:02:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:23.835 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 131ae834-ee81-42ce-b61e-863b3a8d52e1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 17:02:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:23.836 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4cb22bc5-9ce7-49d9-8bc3-37adecf8caae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:23.837 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-131ae834-ee81-42ce-b61e-863b3a8d52e1 namespace which is not needed anymore
Nov 25 17:02:23 compute-0 nova_compute[254092]: 2025-11-25 17:02:23.841 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:23 compute-0 nova_compute[254092]: 2025-11-25 17:02:23.861 254096 DEBUG nova.network.neutron [-] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:02:23 compute-0 systemd[1]: machine-qemu\x2d146\x2dinstance\x2d00000072.scope: Deactivated successfully.
Nov 25 17:02:23 compute-0 systemd[1]: machine-qemu\x2d146\x2dinstance\x2d00000072.scope: Consumed 15.360s CPU time.
Nov 25 17:02:23 compute-0 systemd-machined[216343]: Machine qemu-146-instance-00000072 terminated.
Nov 25 17:02:23 compute-0 nova_compute[254092]: 2025-11-25 17:02:23.888 254096 INFO nova.compute.manager [-] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Took 0.58 seconds to deallocate network for instance.
Nov 25 17:02:23 compute-0 nova_compute[254092]: 2025-11-25 17:02:23.936 254096 DEBUG oslo_concurrency.lockutils [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:02:23 compute-0 nova_compute[254092]: 2025-11-25 17:02:23.938 254096 DEBUG oslo_concurrency.lockutils [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:02:23 compute-0 neutron-haproxy-ovnmeta-131ae834-ee81-42ce-b61e-863b3a8d52e1[380639]: [NOTICE]   (380643) : haproxy version is 2.8.14-c23fe91
Nov 25 17:02:23 compute-0 neutron-haproxy-ovnmeta-131ae834-ee81-42ce-b61e-863b3a8d52e1[380639]: [NOTICE]   (380643) : path to executable is /usr/sbin/haproxy
Nov 25 17:02:23 compute-0 neutron-haproxy-ovnmeta-131ae834-ee81-42ce-b61e-863b3a8d52e1[380639]: [WARNING]  (380643) : Exiting Master process...
Nov 25 17:02:23 compute-0 neutron-haproxy-ovnmeta-131ae834-ee81-42ce-b61e-863b3a8d52e1[380639]: [WARNING]  (380643) : Exiting Master process...
Nov 25 17:02:23 compute-0 neutron-haproxy-ovnmeta-131ae834-ee81-42ce-b61e-863b3a8d52e1[380639]: [ALERT]    (380643) : Current worker (380645) exited with code 143 (Terminated)
Nov 25 17:02:23 compute-0 neutron-haproxy-ovnmeta-131ae834-ee81-42ce-b61e-863b3a8d52e1[380639]: [WARNING]  (380643) : All workers exited. Exiting... (0)
Nov 25 17:02:23 compute-0 systemd[1]: libpod-352356d044e2c6b782a050833cce8d18529a1bd71d70042c1c70b3d48d73bc01.scope: Deactivated successfully.
Nov 25 17:02:23 compute-0 podman[382227]: 2025-11-25 17:02:23.992547856 +0000 UTC m=+0.046422754 container died 352356d044e2c6b782a050833cce8d18529a1bd71d70042c1c70b3d48d73bc01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-131ae834-ee81-42ce-b61e-863b3a8d52e1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 17:02:24 compute-0 nova_compute[254092]: 2025-11-25 17:02:24.025 254096 INFO nova.virt.libvirt.driver [-] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Instance destroyed successfully.
Nov 25 17:02:24 compute-0 nova_compute[254092]: 2025-11-25 17:02:24.026 254096 DEBUG nova.objects.instance [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'resources' on Instance uuid f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:02:24 compute-0 nova_compute[254092]: 2025-11-25 17:02:24.035 254096 DEBUG oslo_concurrency.processutils [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:02:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-a30edfa0bf3043d83325a2d2e3ffee6672b0c56070c560af5182afdcb7482f9a-merged.mount: Deactivated successfully.
Nov 25 17:02:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-352356d044e2c6b782a050833cce8d18529a1bd71d70042c1c70b3d48d73bc01-userdata-shm.mount: Deactivated successfully.
Nov 25 17:02:24 compute-0 podman[382227]: 2025-11-25 17:02:24.044541392 +0000 UTC m=+0.098416300 container cleanup 352356d044e2c6b782a050833cce8d18529a1bd71d70042c1c70b3d48d73bc01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-131ae834-ee81-42ce-b61e-863b3a8d52e1, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:02:24 compute-0 systemd[1]: libpod-conmon-352356d044e2c6b782a050833cce8d18529a1bd71d70042c1c70b3d48d73bc01.scope: Deactivated successfully.
Nov 25 17:02:24 compute-0 ceph-mon[74985]: pgmap v2365: 321 pgs: 321 active+clean; 279 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 6.0 KiB/s rd, 16 KiB/s wr, 1 op/s
Nov 25 17:02:24 compute-0 nova_compute[254092]: 2025-11-25 17:02:24.082 254096 DEBUG nova.virt.libvirt.vif [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:01:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1605221886',display_name='tempest-TestNetworkBasicOps-server-1605221886',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1605221886',id=114,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG8GG+LWFgTdI2Lx3HIK8iitzYjRcEiXMosisS4j8Mff81tp3LiT3UmyZlgVw+9dc2rljXR4tNO0hQkUya+Lw6KQzFUnQWm0QB4mvWvH4UQzxuEdyxR/UzUx6isIg0+QoA==',key_name='tempest-TestNetworkBasicOps-823633184',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:01:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-4x69xfhl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:01:33Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9a960a19-c599-4217-b99c-ac16fe6384b1", "address": "fa:16:3e:a8:c9:7b", "network": {"id": "131ae834-ee81-42ce-b61e-863b3a8d52e1", "bridge": "br-int", "label": "tempest-network-smoke--413727726", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a960a19-c5", "ovs_interfaceid": "9a960a19-c599-4217-b99c-ac16fe6384b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:02:24 compute-0 nova_compute[254092]: 2025-11-25 17:02:24.083 254096 DEBUG nova.network.os_vif_util [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "9a960a19-c599-4217-b99c-ac16fe6384b1", "address": "fa:16:3e:a8:c9:7b", "network": {"id": "131ae834-ee81-42ce-b61e-863b3a8d52e1", "bridge": "br-int", "label": "tempest-network-smoke--413727726", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a960a19-c5", "ovs_interfaceid": "9a960a19-c599-4217-b99c-ac16fe6384b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:02:24 compute-0 nova_compute[254092]: 2025-11-25 17:02:24.083 254096 DEBUG nova.network.os_vif_util [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a8:c9:7b,bridge_name='br-int',has_traffic_filtering=True,id=9a960a19-c599-4217-b99c-ac16fe6384b1,network=Network(131ae834-ee81-42ce-b61e-863b3a8d52e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a960a19-c5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:02:24 compute-0 nova_compute[254092]: 2025-11-25 17:02:24.084 254096 DEBUG os_vif [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:c9:7b,bridge_name='br-int',has_traffic_filtering=True,id=9a960a19-c599-4217-b99c-ac16fe6384b1,network=Network(131ae834-ee81-42ce-b61e-863b3a8d52e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a960a19-c5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:02:24 compute-0 nova_compute[254092]: 2025-11-25 17:02:24.086 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:24 compute-0 nova_compute[254092]: 2025-11-25 17:02:24.086 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9a960a19-c5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:02:24 compute-0 nova_compute[254092]: 2025-11-25 17:02:24.088 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:24 compute-0 nova_compute[254092]: 2025-11-25 17:02:24.090 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:02:24 compute-0 nova_compute[254092]: 2025-11-25 17:02:24.092 254096 INFO os_vif [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:c9:7b,bridge_name='br-int',has_traffic_filtering=True,id=9a960a19-c599-4217-b99c-ac16fe6384b1,network=Network(131ae834-ee81-42ce-b61e-863b3a8d52e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a960a19-c5')
Nov 25 17:02:24 compute-0 podman[382265]: 2025-11-25 17:02:24.117691698 +0000 UTC m=+0.042219239 container remove 352356d044e2c6b782a050833cce8d18529a1bd71d70042c1c70b3d48d73bc01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-131ae834-ee81-42ce-b61e-863b3a8d52e1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 17:02:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:24.125 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ab8ea83c-0ba7-4bde-a515-17d7fe5b62a8]: (4, ('Tue Nov 25 05:02:23 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-131ae834-ee81-42ce-b61e-863b3a8d52e1 (352356d044e2c6b782a050833cce8d18529a1bd71d70042c1c70b3d48d73bc01)\n352356d044e2c6b782a050833cce8d18529a1bd71d70042c1c70b3d48d73bc01\nTue Nov 25 05:02:24 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-131ae834-ee81-42ce-b61e-863b3a8d52e1 (352356d044e2c6b782a050833cce8d18529a1bd71d70042c1c70b3d48d73bc01)\n352356d044e2c6b782a050833cce8d18529a1bd71d70042c1c70b3d48d73bc01\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:24.127 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[64ab9f0d-d3af-4d68-9bea-f2be4586659d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:24.129 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap131ae834-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:02:24 compute-0 nova_compute[254092]: 2025-11-25 17:02:24.131 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:24 compute-0 kernel: tap131ae834-e0: left promiscuous mode
Nov 25 17:02:24 compute-0 nova_compute[254092]: 2025-11-25 17:02:24.146 254096 DEBUG nova.compute.manager [req-032c4885-5e2f-46b6-b389-921b624c0211 req-f6731c82-7193-4d72-aed4-160cf37bdf2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Received event network-changed-3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:02:24 compute-0 nova_compute[254092]: 2025-11-25 17:02:24.146 254096 DEBUG nova.compute.manager [req-032c4885-5e2f-46b6-b389-921b624c0211 req-f6731c82-7193-4d72-aed4-160cf37bdf2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Refreshing instance network info cache due to event network-changed-3751c8d6-0f1e-4902-8016-cf37bf3c1ad3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:02:24 compute-0 nova_compute[254092]: 2025-11-25 17:02:24.147 254096 DEBUG oslo_concurrency.lockutils [req-032c4885-5e2f-46b6-b389-921b624c0211 req-f6731c82-7193-4d72-aed4-160cf37bdf2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-1f0c80e2-19cd-43ba-881d-e24e5bcd62fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:02:24 compute-0 nova_compute[254092]: 2025-11-25 17:02:24.147 254096 DEBUG oslo_concurrency.lockutils [req-032c4885-5e2f-46b6-b389-921b624c0211 req-f6731c82-7193-4d72-aed4-160cf37bdf2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-1f0c80e2-19cd-43ba-881d-e24e5bcd62fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:02:24 compute-0 nova_compute[254092]: 2025-11-25 17:02:24.147 254096 DEBUG nova.network.neutron [req-032c4885-5e2f-46b6-b389-921b624c0211 req-f6731c82-7193-4d72-aed4-160cf37bdf2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Refreshing network info cache for port 3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:02:24 compute-0 nova_compute[254092]: 2025-11-25 17:02:24.148 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:24.149 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[346c879f-912c-4c56-87dd-b8ae16d4e4bd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:24.166 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8d01cd98-567e-4f84-b7c7-86772e74b3aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:24.168 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[de7da1e8-862e-4fd1-85a3-35b471694b39]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:24.190 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d8f6ba04-ea0f-4544-bff7-dbd695a53d9d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655088, 'reachable_time': 33864, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382299, 'error': None, 'target': 'ovnmeta-131ae834-ee81-42ce-b61e-863b3a8d52e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:24 compute-0 systemd[1]: run-netns-ovnmeta\x2d131ae834\x2dee81\x2d42ce\x2db61e\x2d863b3a8d52e1.mount: Deactivated successfully.
Nov 25 17:02:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:24.197 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-131ae834-ee81-42ce-b61e-863b3a8d52e1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 17:02:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:24.198 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[547553f3-6254-4cae-9ad3-648a730ef7b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2366: 321 pgs: 321 active+clean; 279 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 6.0 KiB/s rd, 4.0 KiB/s wr, 0 op/s
Nov 25 17:02:24 compute-0 nova_compute[254092]: 2025-11-25 17:02:24.292 254096 DEBUG nova.network.neutron [req-032c4885-5e2f-46b6-b389-921b624c0211 req-f6731c82-7193-4d72-aed4-160cf37bdf2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:02:24 compute-0 nova_compute[254092]: 2025-11-25 17:02:24.523 254096 INFO nova.virt.libvirt.driver [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Deleting instance files /var/lib/nova/instances/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7_del
Nov 25 17:02:24 compute-0 nova_compute[254092]: 2025-11-25 17:02:24.524 254096 INFO nova.virt.libvirt.driver [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Deletion of /var/lib/nova/instances/f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7_del complete
Nov 25 17:02:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:02:24 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4092346102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:02:24 compute-0 nova_compute[254092]: 2025-11-25 17:02:24.558 254096 DEBUG oslo_concurrency.processutils [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:02:24 compute-0 nova_compute[254092]: 2025-11-25 17:02:24.565 254096 DEBUG nova.compute.provider_tree [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:02:24 compute-0 nova_compute[254092]: 2025-11-25 17:02:24.572 254096 INFO nova.compute.manager [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Took 0.80 seconds to destroy the instance on the hypervisor.
Nov 25 17:02:24 compute-0 nova_compute[254092]: 2025-11-25 17:02:24.572 254096 DEBUG oslo.service.loopingcall [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 17:02:24 compute-0 nova_compute[254092]: 2025-11-25 17:02:24.573 254096 DEBUG nova.compute.manager [-] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 17:02:24 compute-0 nova_compute[254092]: 2025-11-25 17:02:24.573 254096 DEBUG nova.network.neutron [-] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 17:02:24 compute-0 nova_compute[254092]: 2025-11-25 17:02:24.579 254096 DEBUG nova.scheduler.client.report [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:02:24 compute-0 nova_compute[254092]: 2025-11-25 17:02:24.600 254096 DEBUG oslo_concurrency.lockutils [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:02:24 compute-0 nova_compute[254092]: 2025-11-25 17:02:24.618 254096 DEBUG nova.network.neutron [req-032c4885-5e2f-46b6-b389-921b624c0211 req-f6731c82-7193-4d72-aed4-160cf37bdf2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:02:24 compute-0 nova_compute[254092]: 2025-11-25 17:02:24.628 254096 INFO nova.scheduler.client.report [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Deleted allocations for instance 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb
Nov 25 17:02:24 compute-0 nova_compute[254092]: 2025-11-25 17:02:24.631 254096 DEBUG oslo_concurrency.lockutils [req-032c4885-5e2f-46b6-b389-921b624c0211 req-f6731c82-7193-4d72-aed4-160cf37bdf2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-1f0c80e2-19cd-43ba-881d-e24e5bcd62fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:02:24 compute-0 nova_compute[254092]: 2025-11-25 17:02:24.631 254096 DEBUG nova.compute.manager [req-032c4885-5e2f-46b6-b389-921b624c0211 req-f6731c82-7193-4d72-aed4-160cf37bdf2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Received event network-vif-deleted-3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:02:24 compute-0 nova_compute[254092]: 2025-11-25 17:02:24.631 254096 DEBUG nova.compute.manager [req-032c4885-5e2f-46b6-b389-921b624c0211 req-f6731c82-7193-4d72-aed4-160cf37bdf2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Received event network-vif-unplugged-9a960a19-c599-4217-b99c-ac16fe6384b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:02:24 compute-0 nova_compute[254092]: 2025-11-25 17:02:24.632 254096 DEBUG oslo_concurrency.lockutils [req-032c4885-5e2f-46b6-b389-921b624c0211 req-f6731c82-7193-4d72-aed4-160cf37bdf2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:02:24 compute-0 nova_compute[254092]: 2025-11-25 17:02:24.632 254096 DEBUG oslo_concurrency.lockutils [req-032c4885-5e2f-46b6-b389-921b624c0211 req-f6731c82-7193-4d72-aed4-160cf37bdf2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:02:24 compute-0 nova_compute[254092]: 2025-11-25 17:02:24.632 254096 DEBUG oslo_concurrency.lockutils [req-032c4885-5e2f-46b6-b389-921b624c0211 req-f6731c82-7193-4d72-aed4-160cf37bdf2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:02:24 compute-0 nova_compute[254092]: 2025-11-25 17:02:24.632 254096 DEBUG nova.compute.manager [req-032c4885-5e2f-46b6-b389-921b624c0211 req-f6731c82-7193-4d72-aed4-160cf37bdf2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] No waiting events found dispatching network-vif-unplugged-9a960a19-c599-4217-b99c-ac16fe6384b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:02:24 compute-0 nova_compute[254092]: 2025-11-25 17:02:24.632 254096 DEBUG nova.compute.manager [req-032c4885-5e2f-46b6-b389-921b624c0211 req-f6731c82-7193-4d72-aed4-160cf37bdf2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Received event network-vif-unplugged-9a960a19-c599-4217-b99c-ac16fe6384b1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 17:02:24 compute-0 nova_compute[254092]: 2025-11-25 17:02:24.776 254096 DEBUG oslo_concurrency.lockutils [None req-7dc572ec-e115-46cc-a24d-43cfd659854e baa046e735b94aba93374dff061b9e77 aa394380a92d48188f2de86f1a100c08 - - default default] Lock "1f0c80e2-19cd-43ba-881d-e24e5bcd62fb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.172s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:02:24 compute-0 nova_compute[254092]: 2025-11-25 17:02:24.932 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:25 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4092346102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:02:25 compute-0 nova_compute[254092]: 2025-11-25 17:02:25.380 254096 DEBUG nova.compute.manager [req-58169c00-5e6f-4336-9761-16ca9229d52e req-9196b7bb-1d33-474a-903b-1ac3e1052eca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Received event network-vif-plugged-3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:02:25 compute-0 nova_compute[254092]: 2025-11-25 17:02:25.380 254096 DEBUG oslo_concurrency.lockutils [req-58169c00-5e6f-4336-9761-16ca9229d52e req-9196b7bb-1d33-474a-903b-1ac3e1052eca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1f0c80e2-19cd-43ba-881d-e24e5bcd62fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:02:25 compute-0 nova_compute[254092]: 2025-11-25 17:02:25.381 254096 DEBUG oslo_concurrency.lockutils [req-58169c00-5e6f-4336-9761-16ca9229d52e req-9196b7bb-1d33-474a-903b-1ac3e1052eca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1f0c80e2-19cd-43ba-881d-e24e5bcd62fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:02:25 compute-0 nova_compute[254092]: 2025-11-25 17:02:25.381 254096 DEBUG oslo_concurrency.lockutils [req-58169c00-5e6f-4336-9761-16ca9229d52e req-9196b7bb-1d33-474a-903b-1ac3e1052eca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1f0c80e2-19cd-43ba-881d-e24e5bcd62fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:02:25 compute-0 nova_compute[254092]: 2025-11-25 17:02:25.381 254096 DEBUG nova.compute.manager [req-58169c00-5e6f-4336-9761-16ca9229d52e req-9196b7bb-1d33-474a-903b-1ac3e1052eca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] No waiting events found dispatching network-vif-plugged-3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:02:25 compute-0 nova_compute[254092]: 2025-11-25 17:02:25.382 254096 WARNING nova.compute.manager [req-58169c00-5e6f-4336-9761-16ca9229d52e req-9196b7bb-1d33-474a-903b-1ac3e1052eca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Received unexpected event network-vif-plugged-3751c8d6-0f1e-4902-8016-cf37bf3c1ad3 for instance with vm_state deleted and task_state None.
Nov 25 17:02:25 compute-0 nova_compute[254092]: 2025-11-25 17:02:25.382 254096 DEBUG nova.compute.manager [req-58169c00-5e6f-4336-9761-16ca9229d52e req-9196b7bb-1d33-474a-903b-1ac3e1052eca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Received event network-changed-9a960a19-c599-4217-b99c-ac16fe6384b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:02:25 compute-0 nova_compute[254092]: 2025-11-25 17:02:25.383 254096 DEBUG nova.compute.manager [req-58169c00-5e6f-4336-9761-16ca9229d52e req-9196b7bb-1d33-474a-903b-1ac3e1052eca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Refreshing instance network info cache due to event network-changed-9a960a19-c599-4217-b99c-ac16fe6384b1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:02:25 compute-0 nova_compute[254092]: 2025-11-25 17:02:25.383 254096 DEBUG oslo_concurrency.lockutils [req-58169c00-5e6f-4336-9761-16ca9229d52e req-9196b7bb-1d33-474a-903b-1ac3e1052eca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:02:25 compute-0 nova_compute[254092]: 2025-11-25 17:02:25.383 254096 DEBUG oslo_concurrency.lockutils [req-58169c00-5e6f-4336-9761-16ca9229d52e req-9196b7bb-1d33-474a-903b-1ac3e1052eca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:02:25 compute-0 nova_compute[254092]: 2025-11-25 17:02:25.384 254096 DEBUG nova.network.neutron [req-58169c00-5e6f-4336-9761-16ca9229d52e req-9196b7bb-1d33-474a-903b-1ac3e1052eca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Refreshing network info cache for port 9a960a19-c599-4217-b99c-ac16fe6384b1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:02:25 compute-0 nova_compute[254092]: 2025-11-25 17:02:25.481 254096 DEBUG nova.network.neutron [-] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:02:25 compute-0 nova_compute[254092]: 2025-11-25 17:02:25.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:02:25 compute-0 nova_compute[254092]: 2025-11-25 17:02:25.501 254096 INFO nova.compute.manager [-] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Took 0.93 seconds to deallocate network for instance.
Nov 25 17:02:25 compute-0 nova_compute[254092]: 2025-11-25 17:02:25.552 254096 DEBUG oslo_concurrency.lockutils [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:02:25 compute-0 nova_compute[254092]: 2025-11-25 17:02:25.552 254096 DEBUG oslo_concurrency.lockutils [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:02:25 compute-0 nova_compute[254092]: 2025-11-25 17:02:25.561 254096 INFO nova.network.neutron [req-58169c00-5e6f-4336-9761-16ca9229d52e req-9196b7bb-1d33-474a-903b-1ac3e1052eca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Port 9a960a19-c599-4217-b99c-ac16fe6384b1 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Nov 25 17:02:25 compute-0 nova_compute[254092]: 2025-11-25 17:02:25.562 254096 DEBUG nova.network.neutron [req-58169c00-5e6f-4336-9761-16ca9229d52e req-9196b7bb-1d33-474a-903b-1ac3e1052eca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:02:25 compute-0 nova_compute[254092]: 2025-11-25 17:02:25.574 254096 DEBUG oslo_concurrency.lockutils [req-58169c00-5e6f-4336-9761-16ca9229d52e req-9196b7bb-1d33-474a-903b-1ac3e1052eca a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:02:25 compute-0 nova_compute[254092]: 2025-11-25 17:02:25.631 254096 DEBUG oslo_concurrency.processutils [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:02:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:02:26 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4138400497' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:02:26 compute-0 nova_compute[254092]: 2025-11-25 17:02:26.078 254096 DEBUG oslo_concurrency.processutils [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:02:26 compute-0 nova_compute[254092]: 2025-11-25 17:02:26.085 254096 DEBUG nova.compute.provider_tree [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:02:26 compute-0 ceph-mon[74985]: pgmap v2366: 321 pgs: 321 active+clean; 279 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 6.0 KiB/s rd, 4.0 KiB/s wr, 0 op/s
Nov 25 17:02:26 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4138400497' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:02:26 compute-0 nova_compute[254092]: 2025-11-25 17:02:26.111 254096 DEBUG nova.scheduler.client.report [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:02:26 compute-0 nova_compute[254092]: 2025-11-25 17:02:26.145 254096 DEBUG oslo_concurrency.lockutils [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:02:26 compute-0 nova_compute[254092]: 2025-11-25 17:02:26.172 254096 INFO nova.scheduler.client.report [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Deleted allocations for instance f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7
Nov 25 17:02:26 compute-0 nova_compute[254092]: 2025-11-25 17:02:26.218 254096 DEBUG oslo_concurrency.lockutils [None req-5a6534f4-25b7-491a-80f8-84206d63442b e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.445s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:02:26 compute-0 nova_compute[254092]: 2025-11-25 17:02:26.220 254096 DEBUG nova.compute.manager [req-f6183de9-fb02-4ea0-af5a-c91c7c5de528 req-98f85006-ebd4-415a-9403-27c9eee43d6e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Received event network-vif-plugged-9a960a19-c599-4217-b99c-ac16fe6384b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:02:26 compute-0 nova_compute[254092]: 2025-11-25 17:02:26.221 254096 DEBUG oslo_concurrency.lockutils [req-f6183de9-fb02-4ea0-af5a-c91c7c5de528 req-98f85006-ebd4-415a-9403-27c9eee43d6e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:02:26 compute-0 nova_compute[254092]: 2025-11-25 17:02:26.221 254096 DEBUG oslo_concurrency.lockutils [req-f6183de9-fb02-4ea0-af5a-c91c7c5de528 req-98f85006-ebd4-415a-9403-27c9eee43d6e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:02:26 compute-0 nova_compute[254092]: 2025-11-25 17:02:26.221 254096 DEBUG oslo_concurrency.lockutils [req-f6183de9-fb02-4ea0-af5a-c91c7c5de528 req-98f85006-ebd4-415a-9403-27c9eee43d6e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:02:26 compute-0 nova_compute[254092]: 2025-11-25 17:02:26.221 254096 DEBUG nova.compute.manager [req-f6183de9-fb02-4ea0-af5a-c91c7c5de528 req-98f85006-ebd4-415a-9403-27c9eee43d6e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] No waiting events found dispatching network-vif-plugged-9a960a19-c599-4217-b99c-ac16fe6384b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:02:26 compute-0 nova_compute[254092]: 2025-11-25 17:02:26.221 254096 WARNING nova.compute.manager [req-f6183de9-fb02-4ea0-af5a-c91c7c5de528 req-98f85006-ebd4-415a-9403-27c9eee43d6e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Received unexpected event network-vif-plugged-9a960a19-c599-4217-b99c-ac16fe6384b1 for instance with vm_state deleted and task_state None.
Nov 25 17:02:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2367: 321 pgs: 321 active+clean; 121 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 13 KiB/s wr, 56 op/s
Nov 25 17:02:27 compute-0 sshd-session[382104]: Connection closed by authenticating user root 171.244.51.45 port 45682 [preauth]
Nov 25 17:02:27 compute-0 nova_compute[254092]: 2025-11-25 17:02:27.503 254096 DEBUG nova.compute.manager [req-d80494db-578d-4920-a83d-a4bf4a1ba759 req-3785548d-1941-4bec-9ec4-6b0ee87c8b08 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Received event network-vif-deleted-9a960a19-c599-4217-b99c-ac16fe6384b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:02:28 compute-0 ceph-mon[74985]: pgmap v2367: 321 pgs: 321 active+clean; 121 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 13 KiB/s wr, 56 op/s
Nov 25 17:02:28 compute-0 ovn_controller[153477]: 2025-11-25T17:02:28Z|01200|binding|INFO|Releasing lport 2166482b-c36e-4cfe-a45d-40e1c4e6a3e6 from this chassis (sb_readonly=0)
Nov 25 17:02:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2368: 321 pgs: 321 active+clean; 121 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 11 KiB/s wr, 56 op/s
Nov 25 17:02:28 compute-0 nova_compute[254092]: 2025-11-25 17:02:28.296 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:28 compute-0 nova_compute[254092]: 2025-11-25 17:02:28.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:02:28 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:02:29 compute-0 nova_compute[254092]: 2025-11-25 17:02:29.089 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:29 compute-0 ovn_controller[153477]: 2025-11-25T17:02:29Z|01201|binding|INFO|Releasing lport 2166482b-c36e-4cfe-a45d-40e1c4e6a3e6 from this chassis (sb_readonly=0)
Nov 25 17:02:29 compute-0 nova_compute[254092]: 2025-11-25 17:02:29.247 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:29 compute-0 ovn_controller[153477]: 2025-11-25T17:02:29Z|01202|binding|INFO|Releasing lport 2166482b-c36e-4cfe-a45d-40e1c4e6a3e6 from this chassis (sb_readonly=0)
Nov 25 17:02:29 compute-0 nova_compute[254092]: 2025-11-25 17:02:29.384 254096 DEBUG nova.compute.manager [req-0769acf2-d9f2-4e77-8b48-a58b10a5dec7 req-e04f5430-41e2-4ddb-ab73-48138ff3fad1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Received event network-changed-1caaa3da-b3eb-4441-b6b2-8eaa71146e77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:02:29 compute-0 nova_compute[254092]: 2025-11-25 17:02:29.385 254096 DEBUG nova.compute.manager [req-0769acf2-d9f2-4e77-8b48-a58b10a5dec7 req-e04f5430-41e2-4ddb-ab73-48138ff3fad1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Refreshing instance network info cache due to event network-changed-1caaa3da-b3eb-4441-b6b2-8eaa71146e77. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:02:29 compute-0 nova_compute[254092]: 2025-11-25 17:02:29.385 254096 DEBUG oslo_concurrency.lockutils [req-0769acf2-d9f2-4e77-8b48-a58b10a5dec7 req-e04f5430-41e2-4ddb-ab73-48138ff3fad1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-caf64ca2-5f73-454a-8442-9965c9853cba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:02:29 compute-0 nova_compute[254092]: 2025-11-25 17:02:29.385 254096 DEBUG oslo_concurrency.lockutils [req-0769acf2-d9f2-4e77-8b48-a58b10a5dec7 req-e04f5430-41e2-4ddb-ab73-48138ff3fad1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-caf64ca2-5f73-454a-8442-9965c9853cba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:02:29 compute-0 nova_compute[254092]: 2025-11-25 17:02:29.385 254096 DEBUG nova.network.neutron [req-0769acf2-d9f2-4e77-8b48-a58b10a5dec7 req-e04f5430-41e2-4ddb-ab73-48138ff3fad1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Refreshing network info cache for port 1caaa3da-b3eb-4441-b6b2-8eaa71146e77 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:02:29 compute-0 nova_compute[254092]: 2025-11-25 17:02:29.386 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:29 compute-0 nova_compute[254092]: 2025-11-25 17:02:29.456 254096 DEBUG oslo_concurrency.lockutils [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "caf64ca2-5f73-454a-8442-9965c9853cba" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:02:29 compute-0 nova_compute[254092]: 2025-11-25 17:02:29.456 254096 DEBUG oslo_concurrency.lockutils [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "caf64ca2-5f73-454a-8442-9965c9853cba" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:02:29 compute-0 nova_compute[254092]: 2025-11-25 17:02:29.456 254096 DEBUG oslo_concurrency.lockutils [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "caf64ca2-5f73-454a-8442-9965c9853cba-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:02:29 compute-0 nova_compute[254092]: 2025-11-25 17:02:29.456 254096 DEBUG oslo_concurrency.lockutils [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "caf64ca2-5f73-454a-8442-9965c9853cba-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:02:29 compute-0 nova_compute[254092]: 2025-11-25 17:02:29.457 254096 DEBUG oslo_concurrency.lockutils [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "caf64ca2-5f73-454a-8442-9965c9853cba-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:02:29 compute-0 nova_compute[254092]: 2025-11-25 17:02:29.457 254096 INFO nova.compute.manager [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Terminating instance
Nov 25 17:02:29 compute-0 nova_compute[254092]: 2025-11-25 17:02:29.458 254096 DEBUG nova.compute.manager [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 17:02:29 compute-0 nova_compute[254092]: 2025-11-25 17:02:29.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:02:29 compute-0 kernel: tap1caaa3da-b3 (unregistering): left promiscuous mode
Nov 25 17:02:29 compute-0 NetworkManager[48891]: <info>  [1764090149.5098] device (tap1caaa3da-b3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:02:29 compute-0 nova_compute[254092]: 2025-11-25 17:02:29.515 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:29 compute-0 ovn_controller[153477]: 2025-11-25T17:02:29Z|01203|binding|INFO|Releasing lport 1caaa3da-b3eb-4441-b6b2-8eaa71146e77 from this chassis (sb_readonly=0)
Nov 25 17:02:29 compute-0 ovn_controller[153477]: 2025-11-25T17:02:29Z|01204|binding|INFO|Setting lport 1caaa3da-b3eb-4441-b6b2-8eaa71146e77 down in Southbound
Nov 25 17:02:29 compute-0 ovn_controller[153477]: 2025-11-25T17:02:29Z|01205|binding|INFO|Removing iface tap1caaa3da-b3 ovn-installed in OVS
Nov 25 17:02:29 compute-0 nova_compute[254092]: 2025-11-25 17:02:29.518 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:29 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:29.522 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:ec:69 10.100.0.8'], port_security=['fa:16:3e:97:ec:69 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'caf64ca2-5f73-454a-8442-9965c9853cba', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f19d2d0776e84c3c99c640c0b7ed3743', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7b034178-39ad-4db7-adab-aaf6bc34bd4a e7198f6b-79d7-48d7-845d-93c396c87f35', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5e96b6e1-6935-4458-bc78-50ea3ed2412d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1caaa3da-b3eb-4441-b6b2-8eaa71146e77) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:02:29 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:29.523 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1caaa3da-b3eb-4441-b6b2-8eaa71146e77 in datapath 0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2 unbound from our chassis
Nov 25 17:02:29 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:29.524 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 17:02:29 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:29.525 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5c458bbc-b7eb-45ca-af0a-d3e6b4862483]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:29 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:29.526 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2 namespace which is not needed anymore
Nov 25 17:02:29 compute-0 nova_compute[254092]: 2025-11-25 17:02:29.539 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:29 compute-0 systemd[1]: machine-qemu\x2d145\x2dinstance\x2d00000071.scope: Deactivated successfully.
Nov 25 17:02:29 compute-0 systemd[1]: machine-qemu\x2d145\x2dinstance\x2d00000071.scope: Consumed 16.470s CPU time.
Nov 25 17:02:29 compute-0 systemd-machined[216343]: Machine qemu-145-instance-00000071 terminated.
Nov 25 17:02:29 compute-0 neutron-haproxy-ovnmeta-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2[378866]: [NOTICE]   (378870) : haproxy version is 2.8.14-c23fe91
Nov 25 17:02:29 compute-0 neutron-haproxy-ovnmeta-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2[378866]: [NOTICE]   (378870) : path to executable is /usr/sbin/haproxy
Nov 25 17:02:29 compute-0 neutron-haproxy-ovnmeta-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2[378866]: [WARNING]  (378870) : Exiting Master process...
Nov 25 17:02:29 compute-0 neutron-haproxy-ovnmeta-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2[378866]: [WARNING]  (378870) : Exiting Master process...
Nov 25 17:02:29 compute-0 neutron-haproxy-ovnmeta-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2[378866]: [ALERT]    (378870) : Current worker (378872) exited with code 143 (Terminated)
Nov 25 17:02:29 compute-0 neutron-haproxy-ovnmeta-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2[378866]: [WARNING]  (378870) : All workers exited. Exiting... (0)
Nov 25 17:02:29 compute-0 systemd[1]: libpod-35e8d2270953bd1a25fea7d9c8adaada067b7f2a8ab10919807214e78ffaaf72.scope: Deactivated successfully.
Nov 25 17:02:29 compute-0 podman[382367]: 2025-11-25 17:02:29.661354541 +0000 UTC m=+0.042770974 container died 35e8d2270953bd1a25fea7d9c8adaada067b7f2a8ab10919807214e78ffaaf72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 17:02:29 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-35e8d2270953bd1a25fea7d9c8adaada067b7f2a8ab10919807214e78ffaaf72-userdata-shm.mount: Deactivated successfully.
Nov 25 17:02:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-a186dab47ed7f953257a28b5bb609e6cd78bd265db3c45fccf8e31014045c990-merged.mount: Deactivated successfully.
Nov 25 17:02:29 compute-0 nova_compute[254092]: 2025-11-25 17:02:29.692 254096 INFO nova.virt.libvirt.driver [-] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Instance destroyed successfully.
Nov 25 17:02:29 compute-0 nova_compute[254092]: 2025-11-25 17:02:29.693 254096 DEBUG nova.objects.instance [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lazy-loading 'resources' on Instance uuid caf64ca2-5f73-454a-8442-9965c9853cba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:02:29 compute-0 podman[382367]: 2025-11-25 17:02:29.696709361 +0000 UTC m=+0.078125794 container cleanup 35e8d2270953bd1a25fea7d9c8adaada067b7f2a8ab10919807214e78ffaaf72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 17:02:29 compute-0 nova_compute[254092]: 2025-11-25 17:02:29.703 254096 DEBUG nova.virt.libvirt.vif [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:00:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1085347357',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1085347357',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-804365052-acc',id=113,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC5IGkA0EZcfpUrnJAPAyNPE3aP4ux+1YVZrN6xmNxNmyBv5luv7uNh5XsGgePIRhuMTv5vEwnWkpC0iguYDb2SFlQPUW7qNQRGe9ic9lTfmn148JQBqNQ9VGr6RxpqguQ==',key_name='tempest-TestSecurityGroupsBasicOps-1631979734',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:00:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f19d2d0776e84c3c99c640c0b7ed3743',ramdisk_id='',reservation_id='r-o6t5fl29',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-804365052',owner_user_name='tempest-TestSecurityGroupsBasicOps-804365052-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:00:55Z,user_data=None,user_id='72f29c5d97464520bee19cb128258fd5',uuid=caf64ca2-5f73-454a-8442-9965c9853cba,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1caaa3da-b3eb-4441-b6b2-8eaa71146e77", "address": "fa:16:3e:97:ec:69", "network": {"id": "0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2", "bridge": "br-int", "label": "tempest-network-smoke--80928445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1caaa3da-b3", "ovs_interfaceid": "1caaa3da-b3eb-4441-b6b2-8eaa71146e77", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:02:29 compute-0 nova_compute[254092]: 2025-11-25 17:02:29.704 254096 DEBUG nova.network.os_vif_util [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converting VIF {"id": "1caaa3da-b3eb-4441-b6b2-8eaa71146e77", "address": "fa:16:3e:97:ec:69", "network": {"id": "0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2", "bridge": "br-int", "label": "tempest-network-smoke--80928445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1caaa3da-b3", "ovs_interfaceid": "1caaa3da-b3eb-4441-b6b2-8eaa71146e77", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:02:29 compute-0 nova_compute[254092]: 2025-11-25 17:02:29.705 254096 DEBUG nova.network.os_vif_util [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:97:ec:69,bridge_name='br-int',has_traffic_filtering=True,id=1caaa3da-b3eb-4441-b6b2-8eaa71146e77,network=Network(0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1caaa3da-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:02:29 compute-0 nova_compute[254092]: 2025-11-25 17:02:29.705 254096 DEBUG os_vif [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:ec:69,bridge_name='br-int',has_traffic_filtering=True,id=1caaa3da-b3eb-4441-b6b2-8eaa71146e77,network=Network(0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1caaa3da-b3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:02:29 compute-0 nova_compute[254092]: 2025-11-25 17:02:29.707 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:29 compute-0 nova_compute[254092]: 2025-11-25 17:02:29.707 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1caaa3da-b3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:02:29 compute-0 nova_compute[254092]: 2025-11-25 17:02:29.709 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:29 compute-0 nova_compute[254092]: 2025-11-25 17:02:29.710 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:29 compute-0 nova_compute[254092]: 2025-11-25 17:02:29.712 254096 INFO os_vif [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:ec:69,bridge_name='br-int',has_traffic_filtering=True,id=1caaa3da-b3eb-4441-b6b2-8eaa71146e77,network=Network(0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1caaa3da-b3')
Nov 25 17:02:29 compute-0 systemd[1]: libpod-conmon-35e8d2270953bd1a25fea7d9c8adaada067b7f2a8ab10919807214e78ffaaf72.scope: Deactivated successfully.
Nov 25 17:02:29 compute-0 podman[382404]: 2025-11-25 17:02:29.759264136 +0000 UTC m=+0.040524972 container remove 35e8d2270953bd1a25fea7d9c8adaada067b7f2a8ab10919807214e78ffaaf72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 25 17:02:29 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:29.766 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5a204f1a-c83b-4eed-ad01-fe4e2f33b4c1]: (4, ('Tue Nov 25 05:02:29 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2 (35e8d2270953bd1a25fea7d9c8adaada067b7f2a8ab10919807214e78ffaaf72)\n35e8d2270953bd1a25fea7d9c8adaada067b7f2a8ab10919807214e78ffaaf72\nTue Nov 25 05:02:29 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2 (35e8d2270953bd1a25fea7d9c8adaada067b7f2a8ab10919807214e78ffaaf72)\n35e8d2270953bd1a25fea7d9c8adaada067b7f2a8ab10919807214e78ffaaf72\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:29 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:29.767 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f2aef5e4-90e5-4190-b72d-6ff43e8e14b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:29 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:29.768 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0f953cb4-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:02:29 compute-0 nova_compute[254092]: 2025-11-25 17:02:29.769 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:29 compute-0 kernel: tap0f953cb4-a0: left promiscuous mode
Nov 25 17:02:29 compute-0 nova_compute[254092]: 2025-11-25 17:02:29.781 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:29 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:29.783 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4efc4827-b6a5-4dbd-80f0-156c6658995f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:29 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:29.793 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[72dfe14d-bf32-41b2-9524-a00d1c649105]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:29 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:29.794 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5232ef4a-a92f-4a4c-872e-3b521ccc407f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:29 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:29.809 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[124e3724-3997-4dbd-966f-20e4fe33fdde]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 651180, 'reachable_time': 40219, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382438, 'error': None, 'target': 'ovnmeta-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:29 compute-0 systemd[1]: run-netns-ovnmeta\x2d0f953cb4\x2dad2a\x2d4f81\x2d8b2f\x2da1d71f5b8cf2.mount: Deactivated successfully.
Nov 25 17:02:29 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:29.813 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 17:02:29 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:29.813 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[4d2ab195-7cb5-40cf-a42b-657f1c87443a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:29 compute-0 nova_compute[254092]: 2025-11-25 17:02:29.933 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:30 compute-0 ceph-mon[74985]: pgmap v2368: 321 pgs: 321 active+clean; 121 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 11 KiB/s wr, 56 op/s
Nov 25 17:02:30 compute-0 nova_compute[254092]: 2025-11-25 17:02:30.135 254096 INFO nova.virt.libvirt.driver [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Deleting instance files /var/lib/nova/instances/caf64ca2-5f73-454a-8442-9965c9853cba_del
Nov 25 17:02:30 compute-0 nova_compute[254092]: 2025-11-25 17:02:30.136 254096 INFO nova.virt.libvirt.driver [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Deletion of /var/lib/nova/instances/caf64ca2-5f73-454a-8442-9965c9853cba_del complete
Nov 25 17:02:30 compute-0 nova_compute[254092]: 2025-11-25 17:02:30.184 254096 INFO nova.compute.manager [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Took 0.73 seconds to destroy the instance on the hypervisor.
Nov 25 17:02:30 compute-0 nova_compute[254092]: 2025-11-25 17:02:30.185 254096 DEBUG oslo.service.loopingcall [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 17:02:30 compute-0 nova_compute[254092]: 2025-11-25 17:02:30.185 254096 DEBUG nova.compute.manager [-] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 17:02:30 compute-0 nova_compute[254092]: 2025-11-25 17:02:30.185 254096 DEBUG nova.network.neutron [-] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 17:02:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2369: 321 pgs: 321 active+clean; 76 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 11 KiB/s wr, 60 op/s
Nov 25 17:02:31 compute-0 nova_compute[254092]: 2025-11-25 17:02:31.270 254096 DEBUG nova.network.neutron [-] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:02:31 compute-0 nova_compute[254092]: 2025-11-25 17:02:31.284 254096 INFO nova.compute.manager [-] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Took 1.10 seconds to deallocate network for instance.
Nov 25 17:02:31 compute-0 nova_compute[254092]: 2025-11-25 17:02:31.325 254096 DEBUG oslo_concurrency.lockutils [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:02:31 compute-0 nova_compute[254092]: 2025-11-25 17:02:31.326 254096 DEBUG oslo_concurrency.lockutils [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:02:31 compute-0 nova_compute[254092]: 2025-11-25 17:02:31.329 254096 DEBUG nova.network.neutron [req-0769acf2-d9f2-4e77-8b48-a58b10a5dec7 req-e04f5430-41e2-4ddb-ab73-48138ff3fad1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Updated VIF entry in instance network info cache for port 1caaa3da-b3eb-4441-b6b2-8eaa71146e77. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:02:31 compute-0 nova_compute[254092]: 2025-11-25 17:02:31.330 254096 DEBUG nova.network.neutron [req-0769acf2-d9f2-4e77-8b48-a58b10a5dec7 req-e04f5430-41e2-4ddb-ab73-48138ff3fad1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Updating instance_info_cache with network_info: [{"id": "1caaa3da-b3eb-4441-b6b2-8eaa71146e77", "address": "fa:16:3e:97:ec:69", "network": {"id": "0f953cb4-ad2a-4f81-8b2f-a1d71f5b8cf2", "bridge": "br-int", "label": "tempest-network-smoke--80928445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1caaa3da-b3", "ovs_interfaceid": "1caaa3da-b3eb-4441-b6b2-8eaa71146e77", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:02:31 compute-0 nova_compute[254092]: 2025-11-25 17:02:31.333 254096 DEBUG nova.compute.manager [req-9e467b53-fa53-4e7f-a8eb-c0fff2fcbd6f req-5dd574ec-553f-47a5-a201-7f6fe0bcb5b8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Received event network-vif-deleted-1caaa3da-b3eb-4441-b6b2-8eaa71146e77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:02:31 compute-0 nova_compute[254092]: 2025-11-25 17:02:31.349 254096 DEBUG oslo_concurrency.lockutils [req-0769acf2-d9f2-4e77-8b48-a58b10a5dec7 req-e04f5430-41e2-4ddb-ab73-48138ff3fad1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-caf64ca2-5f73-454a-8442-9965c9853cba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:02:31 compute-0 nova_compute[254092]: 2025-11-25 17:02:31.378 254096 DEBUG oslo_concurrency.processutils [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:02:31 compute-0 nova_compute[254092]: 2025-11-25 17:02:31.477 254096 DEBUG nova.compute.manager [req-d0eb3a5d-49e3-483e-8c3c-0e09e4e3d9ba req-3a02c914-53b0-4525-9d7e-fc588cf07408 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Received event network-vif-unplugged-1caaa3da-b3eb-4441-b6b2-8eaa71146e77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:02:31 compute-0 nova_compute[254092]: 2025-11-25 17:02:31.478 254096 DEBUG oslo_concurrency.lockutils [req-d0eb3a5d-49e3-483e-8c3c-0e09e4e3d9ba req-3a02c914-53b0-4525-9d7e-fc588cf07408 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "caf64ca2-5f73-454a-8442-9965c9853cba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:02:31 compute-0 nova_compute[254092]: 2025-11-25 17:02:31.478 254096 DEBUG oslo_concurrency.lockutils [req-d0eb3a5d-49e3-483e-8c3c-0e09e4e3d9ba req-3a02c914-53b0-4525-9d7e-fc588cf07408 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "caf64ca2-5f73-454a-8442-9965c9853cba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:02:31 compute-0 nova_compute[254092]: 2025-11-25 17:02:31.479 254096 DEBUG oslo_concurrency.lockutils [req-d0eb3a5d-49e3-483e-8c3c-0e09e4e3d9ba req-3a02c914-53b0-4525-9d7e-fc588cf07408 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "caf64ca2-5f73-454a-8442-9965c9853cba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:02:31 compute-0 nova_compute[254092]: 2025-11-25 17:02:31.479 254096 DEBUG nova.compute.manager [req-d0eb3a5d-49e3-483e-8c3c-0e09e4e3d9ba req-3a02c914-53b0-4525-9d7e-fc588cf07408 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] No waiting events found dispatching network-vif-unplugged-1caaa3da-b3eb-4441-b6b2-8eaa71146e77 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:02:31 compute-0 nova_compute[254092]: 2025-11-25 17:02:31.480 254096 WARNING nova.compute.manager [req-d0eb3a5d-49e3-483e-8c3c-0e09e4e3d9ba req-3a02c914-53b0-4525-9d7e-fc588cf07408 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Received unexpected event network-vif-unplugged-1caaa3da-b3eb-4441-b6b2-8eaa71146e77 for instance with vm_state deleted and task_state None.
Nov 25 17:02:31 compute-0 nova_compute[254092]: 2025-11-25 17:02:31.480 254096 DEBUG nova.compute.manager [req-d0eb3a5d-49e3-483e-8c3c-0e09e4e3d9ba req-3a02c914-53b0-4525-9d7e-fc588cf07408 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Received event network-vif-plugged-1caaa3da-b3eb-4441-b6b2-8eaa71146e77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:02:31 compute-0 nova_compute[254092]: 2025-11-25 17:02:31.481 254096 DEBUG oslo_concurrency.lockutils [req-d0eb3a5d-49e3-483e-8c3c-0e09e4e3d9ba req-3a02c914-53b0-4525-9d7e-fc588cf07408 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "caf64ca2-5f73-454a-8442-9965c9853cba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:02:31 compute-0 nova_compute[254092]: 2025-11-25 17:02:31.481 254096 DEBUG oslo_concurrency.lockutils [req-d0eb3a5d-49e3-483e-8c3c-0e09e4e3d9ba req-3a02c914-53b0-4525-9d7e-fc588cf07408 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "caf64ca2-5f73-454a-8442-9965c9853cba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:02:31 compute-0 nova_compute[254092]: 2025-11-25 17:02:31.483 254096 DEBUG oslo_concurrency.lockutils [req-d0eb3a5d-49e3-483e-8c3c-0e09e4e3d9ba req-3a02c914-53b0-4525-9d7e-fc588cf07408 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "caf64ca2-5f73-454a-8442-9965c9853cba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:02:31 compute-0 nova_compute[254092]: 2025-11-25 17:02:31.483 254096 DEBUG nova.compute.manager [req-d0eb3a5d-49e3-483e-8c3c-0e09e4e3d9ba req-3a02c914-53b0-4525-9d7e-fc588cf07408 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] No waiting events found dispatching network-vif-plugged-1caaa3da-b3eb-4441-b6b2-8eaa71146e77 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:02:31 compute-0 nova_compute[254092]: 2025-11-25 17:02:31.484 254096 WARNING nova.compute.manager [req-d0eb3a5d-49e3-483e-8c3c-0e09e4e3d9ba req-3a02c914-53b0-4525-9d7e-fc588cf07408 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Received unexpected event network-vif-plugged-1caaa3da-b3eb-4441-b6b2-8eaa71146e77 for instance with vm_state deleted and task_state None.
Nov 25 17:02:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:02:31 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2100472373' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:02:31 compute-0 nova_compute[254092]: 2025-11-25 17:02:31.840 254096 DEBUG oslo_concurrency.processutils [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:02:31 compute-0 nova_compute[254092]: 2025-11-25 17:02:31.847 254096 DEBUG nova.compute.provider_tree [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:02:31 compute-0 nova_compute[254092]: 2025-11-25 17:02:31.864 254096 DEBUG nova.scheduler.client.report [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:02:31 compute-0 nova_compute[254092]: 2025-11-25 17:02:31.885 254096 DEBUG oslo_concurrency.lockutils [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:02:31 compute-0 nova_compute[254092]: 2025-11-25 17:02:31.926 254096 INFO nova.scheduler.client.report [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Deleted allocations for instance caf64ca2-5f73-454a-8442-9965c9853cba
Nov 25 17:02:31 compute-0 nova_compute[254092]: 2025-11-25 17:02:31.986 254096 DEBUG oslo_concurrency.lockutils [None req-fb33b3ce-ccbd-4129-916a-9c921ac9f43c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "caf64ca2-5f73-454a-8442-9965c9853cba" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.530s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:02:32 compute-0 ceph-mon[74985]: pgmap v2369: 321 pgs: 321 active+clean; 76 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 11 KiB/s wr, 60 op/s
Nov 25 17:02:32 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2100472373' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:02:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2370: 321 pgs: 321 active+clean; 41 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 11 KiB/s wr, 82 op/s
Nov 25 17:02:32 compute-0 nova_compute[254092]: 2025-11-25 17:02:32.490 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:02:32 compute-0 nova_compute[254092]: 2025-11-25 17:02:32.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:02:32 compute-0 nova_compute[254092]: 2025-11-25 17:02:32.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:02:32 compute-0 nova_compute[254092]: 2025-11-25 17:02:32.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:02:32 compute-0 nova_compute[254092]: 2025-11-25 17:02:32.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:02:32 compute-0 nova_compute[254092]: 2025-11-25 17:02:32.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:02:32 compute-0 nova_compute[254092]: 2025-11-25 17:02:32.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:02:32 compute-0 nova_compute[254092]: 2025-11-25 17:02:32.521 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:02:32 compute-0 nova_compute[254092]: 2025-11-25 17:02:32.522 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:02:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:02:32 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/351245696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:02:32 compute-0 nova_compute[254092]: 2025-11-25 17:02:32.956 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:02:33 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/351245696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:02:33 compute-0 nova_compute[254092]: 2025-11-25 17:02:33.161 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:02:33 compute-0 nova_compute[254092]: 2025-11-25 17:02:33.162 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3765MB free_disk=59.97190475463867GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:02:33 compute-0 nova_compute[254092]: 2025-11-25 17:02:33.162 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:02:33 compute-0 nova_compute[254092]: 2025-11-25 17:02:33.163 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:02:33 compute-0 nova_compute[254092]: 2025-11-25 17:02:33.227 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:02:33 compute-0 nova_compute[254092]: 2025-11-25 17:02:33.227 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:02:33 compute-0 nova_compute[254092]: 2025-11-25 17:02:33.242 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:02:33 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:02:33 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3372710800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:02:33 compute-0 nova_compute[254092]: 2025-11-25 17:02:33.719 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:02:33 compute-0 nova_compute[254092]: 2025-11-25 17:02:33.728 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:02:33 compute-0 nova_compute[254092]: 2025-11-25 17:02:33.750 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:02:33 compute-0 nova_compute[254092]: 2025-11-25 17:02:33.780 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:02:33 compute-0 nova_compute[254092]: 2025-11-25 17:02:33.780 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:02:33 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:02:34 compute-0 ceph-mon[74985]: pgmap v2370: 321 pgs: 321 active+clean; 41 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 11 KiB/s wr, 82 op/s
Nov 25 17:02:34 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3372710800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:02:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2371: 321 pgs: 321 active+clean; 41 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 9.5 KiB/s wr, 81 op/s
Nov 25 17:02:34 compute-0 nova_compute[254092]: 2025-11-25 17:02:34.710 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:34 compute-0 nova_compute[254092]: 2025-11-25 17:02:34.936 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:35 compute-0 ceph-mon[74985]: pgmap v2371: 321 pgs: 321 active+clean; 41 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 9.5 KiB/s wr, 81 op/s
Nov 25 17:02:35 compute-0 nova_compute[254092]: 2025-11-25 17:02:35.782 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:02:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2372: 321 pgs: 321 active+clean; 41 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 9.8 KiB/s wr, 83 op/s
Nov 25 17:02:37 compute-0 ceph-mon[74985]: pgmap v2372: 321 pgs: 321 active+clean; 41 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 9.8 KiB/s wr, 83 op/s
Nov 25 17:02:37 compute-0 nova_compute[254092]: 2025-11-25 17:02:37.849 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090142.8476992, 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:02:37 compute-0 nova_compute[254092]: 2025-11-25 17:02:37.850 254096 INFO nova.compute.manager [-] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] VM Stopped (Lifecycle Event)
Nov 25 17:02:37 compute-0 nova_compute[254092]: 2025-11-25 17:02:37.874 254096 DEBUG nova.compute.manager [None req-edc7846e-6919-4b1b-b55f-4c07b52f5def - - - - - -] [instance: 1f0c80e2-19cd-43ba-881d-e24e5bcd62fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:02:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2373: 321 pgs: 321 active+clean; 41 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 17:02:38 compute-0 nova_compute[254092]: 2025-11-25 17:02:38.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:02:38 compute-0 nova_compute[254092]: 2025-11-25 17:02:38.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:02:38 compute-0 nova_compute[254092]: 2025-11-25 17:02:38.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:02:38 compute-0 nova_compute[254092]: 2025-11-25 17:02:38.513 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 17:02:38 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:02:39 compute-0 nova_compute[254092]: 2025-11-25 17:02:39.023 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090144.0231204, f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:02:39 compute-0 nova_compute[254092]: 2025-11-25 17:02:39.024 254096 INFO nova.compute.manager [-] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] VM Stopped (Lifecycle Event)
Nov 25 17:02:39 compute-0 nova_compute[254092]: 2025-11-25 17:02:39.049 254096 DEBUG nova.compute.manager [None req-71500f56-a537-41fa-95f6-a7c715edce1f - - - - - -] [instance: f6c33cdd-b4eb-4e4e-91bd-ef0685b91ae7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:02:39 compute-0 ceph-mon[74985]: pgmap v2373: 321 pgs: 321 active+clean; 41 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 17:02:39 compute-0 nova_compute[254092]: 2025-11-25 17:02:39.634 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:39 compute-0 nova_compute[254092]: 2025-11-25 17:02:39.713 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:39 compute-0 nova_compute[254092]: 2025-11-25 17:02:39.938 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:02:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:02:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:02:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:02:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:02:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:02:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:02:40
Nov 25 17:02:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:02:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:02:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr', 'backups', 'volumes', 'vms']
Nov 25 17:02:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:02:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2374: 321 pgs: 321 active+clean; 41 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 17:02:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:02:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:02:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:02:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:02:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:02:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:02:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:02:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:02:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:02:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:02:41 compute-0 ceph-mon[74985]: pgmap v2374: 321 pgs: 321 active+clean; 41 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 17:02:41 compute-0 podman[382508]: 2025-11-25 17:02:41.654961772 +0000 UTC m=+0.062295060 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 25 17:02:41 compute-0 podman[382507]: 2025-11-25 17:02:41.692583564 +0000 UTC m=+0.103253253 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 17:02:41 compute-0 podman[382509]: 2025-11-25 17:02:41.722266878 +0000 UTC m=+0.124331861 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 17:02:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2375: 321 pgs: 321 active+clean; 41 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.2 KiB/s wr, 23 op/s
Nov 25 17:02:43 compute-0 ceph-mon[74985]: pgmap v2375: 321 pgs: 321 active+clean; 41 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.2 KiB/s wr, 23 op/s
Nov 25 17:02:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:02:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2376: 321 pgs: 321 active+clean; 41 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 1.1 KiB/s rd, 341 B/s wr, 1 op/s
Nov 25 17:02:44 compute-0 nova_compute[254092]: 2025-11-25 17:02:44.691 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090149.687964, caf64ca2-5f73-454a-8442-9965c9853cba => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:02:44 compute-0 nova_compute[254092]: 2025-11-25 17:02:44.691 254096 INFO nova.compute.manager [-] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] VM Stopped (Lifecycle Event)
Nov 25 17:02:44 compute-0 nova_compute[254092]: 2025-11-25 17:02:44.717 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:44 compute-0 nova_compute[254092]: 2025-11-25 17:02:44.726 254096 DEBUG nova.compute.manager [None req-b91c3135-c46e-454a-b5a9-4d5afb28690f - - - - - -] [instance: caf64ca2-5f73-454a-8442-9965c9853cba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:02:44 compute-0 nova_compute[254092]: 2025-11-25 17:02:44.940 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:45 compute-0 ceph-mon[74985]: pgmap v2376: 321 pgs: 321 active+clean; 41 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 1.1 KiB/s rd, 341 B/s wr, 1 op/s
Nov 25 17:02:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2377: 321 pgs: 321 active+clean; 41 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 1.1 KiB/s rd, 341 B/s wr, 1 op/s
Nov 25 17:02:47 compute-0 ceph-mon[74985]: pgmap v2377: 321 pgs: 321 active+clean; 41 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 1.1 KiB/s rd, 341 B/s wr, 1 op/s
Nov 25 17:02:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2378: 321 pgs: 321 active+clean; 41 MiB data, 815 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:02:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:02:49 compute-0 ceph-mon[74985]: pgmap v2378: 321 pgs: 321 active+clean; 41 MiB data, 815 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:02:49 compute-0 nova_compute[254092]: 2025-11-25 17:02:49.721 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:49 compute-0 nova_compute[254092]: 2025-11-25 17:02:49.942 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2379: 321 pgs: 321 active+clean; 41 MiB data, 815 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:02:51 compute-0 nova_compute[254092]: 2025-11-25 17:02:51.103 254096 DEBUG oslo_concurrency.lockutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "73a187fa-5479-4191-bd44-757c3840137a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:02:51 compute-0 nova_compute[254092]: 2025-11-25 17:02:51.104 254096 DEBUG oslo_concurrency.lockutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "73a187fa-5479-4191-bd44-757c3840137a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:02:51 compute-0 nova_compute[254092]: 2025-11-25 17:02:51.124 254096 DEBUG nova.compute.manager [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 17:02:51 compute-0 nova_compute[254092]: 2025-11-25 17:02:51.228 254096 DEBUG oslo_concurrency.lockutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:02:51 compute-0 nova_compute[254092]: 2025-11-25 17:02:51.229 254096 DEBUG oslo_concurrency.lockutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:02:51 compute-0 nova_compute[254092]: 2025-11-25 17:02:51.235 254096 DEBUG nova.virt.hardware [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 17:02:51 compute-0 nova_compute[254092]: 2025-11-25 17:02:51.235 254096 INFO nova.compute.claims [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Claim successful on node compute-0.ctlplane.example.com
Nov 25 17:02:51 compute-0 nova_compute[254092]: 2025-11-25 17:02:51.407 254096 DEBUG oslo_concurrency.processutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:02:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:02:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:02:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:02:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:02:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 17:02:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:02:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:02:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:02:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:02:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:02:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 17:02:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:02:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:02:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:02:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:02:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:02:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:02:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:02:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:02:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:02:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:02:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:02:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:02:51 compute-0 ceph-mon[74985]: pgmap v2379: 321 pgs: 321 active+clean; 41 MiB data, 815 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:02:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:02:51 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3742630985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:02:51 compute-0 nova_compute[254092]: 2025-11-25 17:02:51.861 254096 DEBUG oslo_concurrency.processutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:02:51 compute-0 nova_compute[254092]: 2025-11-25 17:02:51.867 254096 DEBUG nova.compute.provider_tree [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:02:51 compute-0 nova_compute[254092]: 2025-11-25 17:02:51.887 254096 DEBUG nova.scheduler.client.report [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:02:51 compute-0 nova_compute[254092]: 2025-11-25 17:02:51.921 254096 DEBUG oslo_concurrency.lockutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:02:51 compute-0 nova_compute[254092]: 2025-11-25 17:02:51.922 254096 DEBUG nova.compute.manager [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 17:02:51 compute-0 nova_compute[254092]: 2025-11-25 17:02:51.960 254096 DEBUG nova.compute.manager [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 17:02:51 compute-0 nova_compute[254092]: 2025-11-25 17:02:51.960 254096 DEBUG nova.network.neutron [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 17:02:51 compute-0 nova_compute[254092]: 2025-11-25 17:02:51.975 254096 INFO nova.virt.libvirt.driver [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 17:02:51 compute-0 nova_compute[254092]: 2025-11-25 17:02:51.988 254096 DEBUG nova.compute.manager [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 17:02:52 compute-0 nova_compute[254092]: 2025-11-25 17:02:52.071 254096 DEBUG nova.compute.manager [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 17:02:52 compute-0 nova_compute[254092]: 2025-11-25 17:02:52.072 254096 DEBUG nova.virt.libvirt.driver [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 17:02:52 compute-0 nova_compute[254092]: 2025-11-25 17:02:52.072 254096 INFO nova.virt.libvirt.driver [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Creating image(s)
Nov 25 17:02:52 compute-0 nova_compute[254092]: 2025-11-25 17:02:52.092 254096 DEBUG nova.storage.rbd_utils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 73a187fa-5479-4191-bd44-757c3840137a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:02:52 compute-0 nova_compute[254092]: 2025-11-25 17:02:52.111 254096 DEBUG nova.storage.rbd_utils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 73a187fa-5479-4191-bd44-757c3840137a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:02:52 compute-0 nova_compute[254092]: 2025-11-25 17:02:52.131 254096 DEBUG nova.storage.rbd_utils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 73a187fa-5479-4191-bd44-757c3840137a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:02:52 compute-0 nova_compute[254092]: 2025-11-25 17:02:52.134 254096 DEBUG oslo_concurrency.processutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:02:52 compute-0 nova_compute[254092]: 2025-11-25 17:02:52.237 254096 DEBUG oslo_concurrency.processutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:02:52 compute-0 nova_compute[254092]: 2025-11-25 17:02:52.238 254096 DEBUG oslo_concurrency.lockutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:02:52 compute-0 nova_compute[254092]: 2025-11-25 17:02:52.239 254096 DEBUG oslo_concurrency.lockutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:02:52 compute-0 nova_compute[254092]: 2025-11-25 17:02:52.239 254096 DEBUG oslo_concurrency.lockutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:02:52 compute-0 nova_compute[254092]: 2025-11-25 17:02:52.258 254096 DEBUG nova.storage.rbd_utils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 73a187fa-5479-4191-bd44-757c3840137a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:02:52 compute-0 nova_compute[254092]: 2025-11-25 17:02:52.261 254096 DEBUG oslo_concurrency.processutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 73a187fa-5479-4191-bd44-757c3840137a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:02:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2380: 321 pgs: 321 active+clean; 41 MiB data, 815 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:02:52 compute-0 nova_compute[254092]: 2025-11-25 17:02:52.512 254096 DEBUG nova.policy [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e13382ac65e94c7b8a89e67de0e754df', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 17:02:52 compute-0 nova_compute[254092]: 2025-11-25 17:02:52.538 254096 DEBUG oslo_concurrency.processutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 73a187fa-5479-4191-bd44-757c3840137a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.277s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:02:52 compute-0 nova_compute[254092]: 2025-11-25 17:02:52.626 254096 DEBUG nova.storage.rbd_utils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] resizing rbd image 73a187fa-5479-4191-bd44-757c3840137a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 17:02:52 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3742630985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:02:52 compute-0 nova_compute[254092]: 2025-11-25 17:02:52.727 254096 DEBUG nova.objects.instance [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'migration_context' on Instance uuid 73a187fa-5479-4191-bd44-757c3840137a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:02:52 compute-0 nova_compute[254092]: 2025-11-25 17:02:52.739 254096 DEBUG nova.virt.libvirt.driver [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 17:02:52 compute-0 nova_compute[254092]: 2025-11-25 17:02:52.740 254096 DEBUG nova.virt.libvirt.driver [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Ensure instance console log exists: /var/lib/nova/instances/73a187fa-5479-4191-bd44-757c3840137a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 17:02:52 compute-0 nova_compute[254092]: 2025-11-25 17:02:52.740 254096 DEBUG oslo_concurrency.lockutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:02:52 compute-0 nova_compute[254092]: 2025-11-25 17:02:52.741 254096 DEBUG oslo_concurrency.lockutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:02:52 compute-0 nova_compute[254092]: 2025-11-25 17:02:52.741 254096 DEBUG oslo_concurrency.lockutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:02:53 compute-0 nova_compute[254092]: 2025-11-25 17:02:53.571 254096 DEBUG nova.network.neutron [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Successfully created port: b0512c6a-fbc4-4639-8508-e6493d18bd3a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 17:02:53 compute-0 ceph-mon[74985]: pgmap v2380: 321 pgs: 321 active+clean; 41 MiB data, 815 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:02:53 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:02:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2381: 321 pgs: 321 active+clean; 41 MiB data, 815 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:02:54 compute-0 nova_compute[254092]: 2025-11-25 17:02:54.483 254096 DEBUG nova.network.neutron [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Successfully updated port: b0512c6a-fbc4-4639-8508-e6493d18bd3a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:02:54 compute-0 nova_compute[254092]: 2025-11-25 17:02:54.538 254096 DEBUG oslo_concurrency.lockutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "refresh_cache-73a187fa-5479-4191-bd44-757c3840137a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:02:54 compute-0 nova_compute[254092]: 2025-11-25 17:02:54.538 254096 DEBUG oslo_concurrency.lockutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquired lock "refresh_cache-73a187fa-5479-4191-bd44-757c3840137a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:02:54 compute-0 nova_compute[254092]: 2025-11-25 17:02:54.539 254096 DEBUG nova.network.neutron [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 17:02:54 compute-0 nova_compute[254092]: 2025-11-25 17:02:54.605 254096 DEBUG nova.compute.manager [req-03908e70-6dc8-49e4-8ac9-8ca1b9ab4e99 req-c86049a9-8e71-4f79-9d02-6145b257ef7a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Received event network-changed-b0512c6a-fbc4-4639-8508-e6493d18bd3a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:02:54 compute-0 nova_compute[254092]: 2025-11-25 17:02:54.605 254096 DEBUG nova.compute.manager [req-03908e70-6dc8-49e4-8ac9-8ca1b9ab4e99 req-c86049a9-8e71-4f79-9d02-6145b257ef7a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Refreshing instance network info cache due to event network-changed-b0512c6a-fbc4-4639-8508-e6493d18bd3a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:02:54 compute-0 nova_compute[254092]: 2025-11-25 17:02:54.605 254096 DEBUG oslo_concurrency.lockutils [req-03908e70-6dc8-49e4-8ac9-8ca1b9ab4e99 req-c86049a9-8e71-4f79-9d02-6145b257ef7a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-73a187fa-5479-4191-bd44-757c3840137a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:02:54 compute-0 nova_compute[254092]: 2025-11-25 17:02:54.661 254096 DEBUG nova.network.neutron [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:02:54 compute-0 nova_compute[254092]: 2025-11-25 17:02:54.724 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:54 compute-0 nova_compute[254092]: 2025-11-25 17:02:54.943 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:55 compute-0 nova_compute[254092]: 2025-11-25 17:02:55.232 254096 DEBUG oslo_concurrency.lockutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "12deb2c6-31fb-4186-940b-8131e43ea3f8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:02:55 compute-0 nova_compute[254092]: 2025-11-25 17:02:55.232 254096 DEBUG oslo_concurrency.lockutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "12deb2c6-31fb-4186-940b-8131e43ea3f8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:02:55 compute-0 nova_compute[254092]: 2025-11-25 17:02:55.275 254096 DEBUG nova.compute.manager [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 17:02:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:02:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3949095382' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:02:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:02:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3949095382' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:02:55 compute-0 nova_compute[254092]: 2025-11-25 17:02:55.365 254096 DEBUG oslo_concurrency.lockutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:02:55 compute-0 nova_compute[254092]: 2025-11-25 17:02:55.366 254096 DEBUG oslo_concurrency.lockutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:02:55 compute-0 nova_compute[254092]: 2025-11-25 17:02:55.381 254096 DEBUG nova.virt.hardware [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 17:02:55 compute-0 nova_compute[254092]: 2025-11-25 17:02:55.382 254096 INFO nova.compute.claims [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Claim successful on node compute-0.ctlplane.example.com
Nov 25 17:02:55 compute-0 nova_compute[254092]: 2025-11-25 17:02:55.488 254096 DEBUG nova.network.neutron [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Updating instance_info_cache with network_info: [{"id": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "address": "fa:16:3e:d7:b5:b4", "network": {"id": "cd54b59f-8568-4ad8-a75d-4fcdad6f8dce", "bridge": "br-int", "label": "tempest-network-smoke--1847695866", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0512c6a-fb", "ovs_interfaceid": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:02:55 compute-0 nova_compute[254092]: 2025-11-25 17:02:55.510 254096 DEBUG oslo_concurrency.lockutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Releasing lock "refresh_cache-73a187fa-5479-4191-bd44-757c3840137a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:02:55 compute-0 nova_compute[254092]: 2025-11-25 17:02:55.511 254096 DEBUG nova.compute.manager [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Instance network_info: |[{"id": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "address": "fa:16:3e:d7:b5:b4", "network": {"id": "cd54b59f-8568-4ad8-a75d-4fcdad6f8dce", "bridge": "br-int", "label": "tempest-network-smoke--1847695866", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0512c6a-fb", "ovs_interfaceid": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 17:02:55 compute-0 nova_compute[254092]: 2025-11-25 17:02:55.512 254096 DEBUG oslo_concurrency.lockutils [req-03908e70-6dc8-49e4-8ac9-8ca1b9ab4e99 req-c86049a9-8e71-4f79-9d02-6145b257ef7a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-73a187fa-5479-4191-bd44-757c3840137a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:02:55 compute-0 nova_compute[254092]: 2025-11-25 17:02:55.513 254096 DEBUG nova.network.neutron [req-03908e70-6dc8-49e4-8ac9-8ca1b9ab4e99 req-c86049a9-8e71-4f79-9d02-6145b257ef7a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Refreshing network info cache for port b0512c6a-fbc4-4639-8508-e6493d18bd3a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:02:55 compute-0 nova_compute[254092]: 2025-11-25 17:02:55.518 254096 DEBUG nova.virt.libvirt.driver [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Start _get_guest_xml network_info=[{"id": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "address": "fa:16:3e:d7:b5:b4", "network": {"id": "cd54b59f-8568-4ad8-a75d-4fcdad6f8dce", "bridge": "br-int", "label": "tempest-network-smoke--1847695866", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0512c6a-fb", "ovs_interfaceid": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 17:02:55 compute-0 nova_compute[254092]: 2025-11-25 17:02:55.530 254096 WARNING nova.virt.libvirt.driver [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:02:55 compute-0 nova_compute[254092]: 2025-11-25 17:02:55.535 254096 DEBUG nova.virt.libvirt.host [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 17:02:55 compute-0 nova_compute[254092]: 2025-11-25 17:02:55.536 254096 DEBUG nova.virt.libvirt.host [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 17:02:55 compute-0 nova_compute[254092]: 2025-11-25 17:02:55.543 254096 DEBUG oslo_concurrency.processutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:02:55 compute-0 nova_compute[254092]: 2025-11-25 17:02:55.591 254096 DEBUG nova.virt.libvirt.host [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 17:02:55 compute-0 nova_compute[254092]: 2025-11-25 17:02:55.593 254096 DEBUG nova.virt.libvirt.host [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 17:02:55 compute-0 nova_compute[254092]: 2025-11-25 17:02:55.594 254096 DEBUG nova.virt.libvirt.driver [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 17:02:55 compute-0 nova_compute[254092]: 2025-11-25 17:02:55.594 254096 DEBUG nova.virt.hardware [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 17:02:55 compute-0 nova_compute[254092]: 2025-11-25 17:02:55.595 254096 DEBUG nova.virt.hardware [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 17:02:55 compute-0 nova_compute[254092]: 2025-11-25 17:02:55.596 254096 DEBUG nova.virt.hardware [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 17:02:55 compute-0 nova_compute[254092]: 2025-11-25 17:02:55.596 254096 DEBUG nova.virt.hardware [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 17:02:55 compute-0 nova_compute[254092]: 2025-11-25 17:02:55.596 254096 DEBUG nova.virt.hardware [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 17:02:55 compute-0 nova_compute[254092]: 2025-11-25 17:02:55.596 254096 DEBUG nova.virt.hardware [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 17:02:55 compute-0 nova_compute[254092]: 2025-11-25 17:02:55.597 254096 DEBUG nova.virt.hardware [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 17:02:55 compute-0 nova_compute[254092]: 2025-11-25 17:02:55.597 254096 DEBUG nova.virt.hardware [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 17:02:55 compute-0 nova_compute[254092]: 2025-11-25 17:02:55.597 254096 DEBUG nova.virt.hardware [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 17:02:55 compute-0 nova_compute[254092]: 2025-11-25 17:02:55.598 254096 DEBUG nova.virt.hardware [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 17:02:55 compute-0 nova_compute[254092]: 2025-11-25 17:02:55.598 254096 DEBUG nova.virt.hardware [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 17:02:55 compute-0 nova_compute[254092]: 2025-11-25 17:02:55.603 254096 DEBUG oslo_concurrency.processutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:02:55 compute-0 ceph-mon[74985]: pgmap v2381: 321 pgs: 321 active+clean; 41 MiB data, 815 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:02:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/3949095382' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:02:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/3949095382' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:02:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:02:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2154018297' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:02:55 compute-0 nova_compute[254092]: 2025-11-25 17:02:55.994 254096 DEBUG oslo_concurrency.processutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:55.999 254096 DEBUG nova.compute.provider_tree [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.013 254096 DEBUG nova.scheduler.client.report [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.032 254096 DEBUG oslo_concurrency.lockutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.033 254096 DEBUG nova.compute.manager [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 17:02:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:02:56 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1002318026' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.072 254096 DEBUG nova.compute.manager [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.072 254096 DEBUG nova.network.neutron [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.079 254096 DEBUG oslo_concurrency.processutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.100 254096 DEBUG nova.storage.rbd_utils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 73a187fa-5479-4191-bd44-757c3840137a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.104 254096 DEBUG oslo_concurrency.processutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.141 254096 INFO nova.virt.libvirt.driver [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.170 254096 DEBUG nova.compute.manager [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.287 254096 DEBUG nova.compute.manager [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.288 254096 DEBUG nova.virt.libvirt.driver [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.289 254096 INFO nova.virt.libvirt.driver [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Creating image(s)
Nov 25 17:02:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2382: 321 pgs: 321 active+clean; 88 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.308 254096 DEBUG nova.storage.rbd_utils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image 12deb2c6-31fb-4186-940b-8131e43ea3f8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.327 254096 DEBUG nova.storage.rbd_utils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image 12deb2c6-31fb-4186-940b-8131e43ea3f8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.344 254096 DEBUG nova.storage.rbd_utils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image 12deb2c6-31fb-4186-940b-8131e43ea3f8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.347 254096 DEBUG oslo_concurrency.processutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.379 254096 DEBUG nova.policy [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '72f29c5d97464520bee19cb128258fd5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f19d2d0776e84c3c99c640c0b7ed3743', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.415 254096 DEBUG oslo_concurrency.processutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.416 254096 DEBUG oslo_concurrency.lockutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.416 254096 DEBUG oslo_concurrency.lockutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.416 254096 DEBUG oslo_concurrency.lockutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.434 254096 DEBUG nova.storage.rbd_utils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image 12deb2c6-31fb-4186-940b-8131e43ea3f8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.438 254096 DEBUG oslo_concurrency.processutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 12deb2c6-31fb-4186-940b-8131e43ea3f8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:02:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:02:56 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/595674024' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.564 254096 DEBUG oslo_concurrency.processutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.567 254096 DEBUG nova.virt.libvirt.vif [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:02:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-365721910',display_name='tempest-TestNetworkBasicOps-server-365721910',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-365721910',id=116,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL8fF/LQWIeXhdQwSKSLAErz4jvUPWKjw4L1GZb+ASjyhqtlkSjIxhUDwHlkKBv0qHWvbUkdKHkhYl9JuEVV2LarQZvIoe1QUEsDx05YVfl0dpyKfWcSmCOAyR6fZFjEqA==',key_name='tempest-TestNetworkBasicOps-1895434798',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-5bv1kvw6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:02:52Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=73a187fa-5479-4191-bd44-757c3840137a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "address": "fa:16:3e:d7:b5:b4", "network": {"id": "cd54b59f-8568-4ad8-a75d-4fcdad6f8dce", "bridge": "br-int", "label": "tempest-network-smoke--1847695866", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0512c6a-fb", "ovs_interfaceid": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.568 254096 DEBUG nova.network.os_vif_util [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "address": "fa:16:3e:d7:b5:b4", "network": {"id": "cd54b59f-8568-4ad8-a75d-4fcdad6f8dce", "bridge": "br-int", "label": "tempest-network-smoke--1847695866", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0512c6a-fb", "ovs_interfaceid": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.570 254096 DEBUG nova.network.os_vif_util [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:b5:b4,bridge_name='br-int',has_traffic_filtering=True,id=b0512c6a-fbc4-4639-8508-e6493d18bd3a,network=Network(cd54b59f-8568-4ad8-a75d-4fcdad6f8dce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0512c6a-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.572 254096 DEBUG nova.objects.instance [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'pci_devices' on Instance uuid 73a187fa-5479-4191-bd44-757c3840137a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.588 254096 DEBUG nova.virt.libvirt.driver [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] End _get_guest_xml xml=<domain type="kvm">
Nov 25 17:02:56 compute-0 nova_compute[254092]:   <uuid>73a187fa-5479-4191-bd44-757c3840137a</uuid>
Nov 25 17:02:56 compute-0 nova_compute[254092]:   <name>instance-00000074</name>
Nov 25 17:02:56 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 17:02:56 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 17:02:56 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:02:56 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:       <nova:name>tempest-TestNetworkBasicOps-server-365721910</nova:name>
Nov 25 17:02:56 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 17:02:55</nova:creationTime>
Nov 25 17:02:56 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 17:02:56 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 17:02:56 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 17:02:56 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 17:02:56 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:02:56 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 17:02:56 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 17:02:56 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 17:02:56 compute-0 nova_compute[254092]:         <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 17:02:56 compute-0 nova_compute[254092]:         <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 17:02:56 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 17:02:56 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 17:02:56 compute-0 nova_compute[254092]:         <nova:port uuid="b0512c6a-fbc4-4639-8508-e6493d18bd3a">
Nov 25 17:02:56 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 17:02:56 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 17:02:56 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:02:56 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 17:02:56 compute-0 nova_compute[254092]:     <system>
Nov 25 17:02:56 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 17:02:56 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 17:02:56 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:02:56 compute-0 nova_compute[254092]:       <entry name="serial">73a187fa-5479-4191-bd44-757c3840137a</entry>
Nov 25 17:02:56 compute-0 nova_compute[254092]:       <entry name="uuid">73a187fa-5479-4191-bd44-757c3840137a</entry>
Nov 25 17:02:56 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     </system>
Nov 25 17:02:56 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:02:56 compute-0 nova_compute[254092]:   <os>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:   </os>
Nov 25 17:02:56 compute-0 nova_compute[254092]:   <features>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:   </features>
Nov 25 17:02:56 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 17:02:56 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:02:56 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 17:02:56 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:02:56 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 17:02:56 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/73a187fa-5479-4191-bd44-757c3840137a_disk">
Nov 25 17:02:56 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:       </source>
Nov 25 17:02:56 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:02:56 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:02:56 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 17:02:56 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/73a187fa-5479-4191-bd44-757c3840137a_disk.config">
Nov 25 17:02:56 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:       </source>
Nov 25 17:02:56 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:02:56 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:02:56 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 17:02:56 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:d7:b5:b4"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:       <target dev="tapb0512c6a-fb"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 17:02:56 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/73a187fa-5479-4191-bd44-757c3840137a/console.log" append="off"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     <video>
Nov 25 17:02:56 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     </video>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 17:02:56 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 17:02:56 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 17:02:56 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:02:56 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:02:56 compute-0 nova_compute[254092]: </domain>
Nov 25 17:02:56 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.590 254096 DEBUG nova.compute.manager [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Preparing to wait for external event network-vif-plugged-b0512c6a-fbc4-4639-8508-e6493d18bd3a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.591 254096 DEBUG oslo_concurrency.lockutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "73a187fa-5479-4191-bd44-757c3840137a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.592 254096 DEBUG oslo_concurrency.lockutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "73a187fa-5479-4191-bd44-757c3840137a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.593 254096 DEBUG oslo_concurrency.lockutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "73a187fa-5479-4191-bd44-757c3840137a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.595 254096 DEBUG nova.virt.libvirt.vif [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:02:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-365721910',display_name='tempest-TestNetworkBasicOps-server-365721910',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-365721910',id=116,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL8fF/LQWIeXhdQwSKSLAErz4jvUPWKjw4L1GZb+ASjyhqtlkSjIxhUDwHlkKBv0qHWvbUkdKHkhYl9JuEVV2LarQZvIoe1QUEsDx05YVfl0dpyKfWcSmCOAyR6fZFjEqA==',key_name='tempest-TestNetworkBasicOps-1895434798',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-5bv1kvw6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:02:52Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=73a187fa-5479-4191-bd44-757c3840137a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "address": "fa:16:3e:d7:b5:b4", "network": {"id": "cd54b59f-8568-4ad8-a75d-4fcdad6f8dce", "bridge": "br-int", "label": "tempest-network-smoke--1847695866", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0512c6a-fb", "ovs_interfaceid": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.596 254096 DEBUG nova.network.os_vif_util [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "address": "fa:16:3e:d7:b5:b4", "network": {"id": "cd54b59f-8568-4ad8-a75d-4fcdad6f8dce", "bridge": "br-int", "label": "tempest-network-smoke--1847695866", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0512c6a-fb", "ovs_interfaceid": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.598 254096 DEBUG nova.network.os_vif_util [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:b5:b4,bridge_name='br-int',has_traffic_filtering=True,id=b0512c6a-fbc4-4639-8508-e6493d18bd3a,network=Network(cd54b59f-8568-4ad8-a75d-4fcdad6f8dce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0512c6a-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.599 254096 DEBUG os_vif [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:b5:b4,bridge_name='br-int',has_traffic_filtering=True,id=b0512c6a-fbc4-4639-8508-e6493d18bd3a,network=Network(cd54b59f-8568-4ad8-a75d-4fcdad6f8dce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0512c6a-fb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.601 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.602 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.603 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.609 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.609 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb0512c6a-fb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.611 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb0512c6a-fb, col_values=(('external_ids', {'iface-id': 'b0512c6a-fbc4-4639-8508-e6493d18bd3a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d7:b5:b4', 'vm-uuid': '73a187fa-5479-4191-bd44-757c3840137a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.613 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:56 compute-0 NetworkManager[48891]: <info>  [1764090176.6141] manager: (tapb0512c6a-fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/494)
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.617 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.619 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.622 254096 INFO os_vif [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:b5:b4,bridge_name='br-int',has_traffic_filtering=True,id=b0512c6a-fbc4-4639-8508-e6493d18bd3a,network=Network(cd54b59f-8568-4ad8-a75d-4fcdad6f8dce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0512c6a-fb')
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.688 254096 DEBUG nova.virt.libvirt.driver [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.689 254096 DEBUG nova.virt.libvirt.driver [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.689 254096 DEBUG nova.virt.libvirt.driver [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No VIF found with MAC fa:16:3e:d7:b5:b4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.689 254096 INFO nova.virt.libvirt.driver [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Using config drive
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.709 254096 DEBUG nova.storage.rbd_utils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 73a187fa-5479-4191-bd44-757c3840137a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.795 254096 DEBUG nova.network.neutron [req-03908e70-6dc8-49e4-8ac9-8ca1b9ab4e99 req-c86049a9-8e71-4f79-9d02-6145b257ef7a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Updated VIF entry in instance network info cache for port b0512c6a-fbc4-4639-8508-e6493d18bd3a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.795 254096 DEBUG nova.network.neutron [req-03908e70-6dc8-49e4-8ac9-8ca1b9ab4e99 req-c86049a9-8e71-4f79-9d02-6145b257ef7a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Updating instance_info_cache with network_info: [{"id": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "address": "fa:16:3e:d7:b5:b4", "network": {"id": "cd54b59f-8568-4ad8-a75d-4fcdad6f8dce", "bridge": "br-int", "label": "tempest-network-smoke--1847695866", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0512c6a-fb", "ovs_interfaceid": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.818 254096 DEBUG oslo_concurrency.lockutils [req-03908e70-6dc8-49e4-8ac9-8ca1b9ab4e99 req-c86049a9-8e71-4f79-9d02-6145b257ef7a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-73a187fa-5479-4191-bd44-757c3840137a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:02:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2154018297' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:02:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1002318026' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:02:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/595674024' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:02:56 compute-0 nova_compute[254092]: 2025-11-25 17:02:56.959 254096 DEBUG oslo_concurrency.processutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 12deb2c6-31fb-4186-940b-8131e43ea3f8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:02:57 compute-0 nova_compute[254092]: 2025-11-25 17:02:57.012 254096 DEBUG nova.storage.rbd_utils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] resizing rbd image 12deb2c6-31fb-4186-940b-8131e43ea3f8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 17:02:57 compute-0 nova_compute[254092]: 2025-11-25 17:02:57.072 254096 INFO nova.virt.libvirt.driver [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Creating config drive at /var/lib/nova/instances/73a187fa-5479-4191-bd44-757c3840137a/disk.config
Nov 25 17:02:57 compute-0 nova_compute[254092]: 2025-11-25 17:02:57.077 254096 DEBUG oslo_concurrency.processutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/73a187fa-5479-4191-bd44-757c3840137a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr8h_emcr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:02:57 compute-0 nova_compute[254092]: 2025-11-25 17:02:57.154 254096 DEBUG nova.objects.instance [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lazy-loading 'migration_context' on Instance uuid 12deb2c6-31fb-4186-940b-8131e43ea3f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:02:57 compute-0 nova_compute[254092]: 2025-11-25 17:02:57.175 254096 DEBUG nova.virt.libvirt.driver [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 17:02:57 compute-0 nova_compute[254092]: 2025-11-25 17:02:57.175 254096 DEBUG nova.virt.libvirt.driver [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Ensure instance console log exists: /var/lib/nova/instances/12deb2c6-31fb-4186-940b-8131e43ea3f8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 17:02:57 compute-0 nova_compute[254092]: 2025-11-25 17:02:57.176 254096 DEBUG oslo_concurrency.lockutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:02:57 compute-0 nova_compute[254092]: 2025-11-25 17:02:57.177 254096 DEBUG oslo_concurrency.lockutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:02:57 compute-0 nova_compute[254092]: 2025-11-25 17:02:57.177 254096 DEBUG oslo_concurrency.lockutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:02:57 compute-0 nova_compute[254092]: 2025-11-25 17:02:57.236 254096 DEBUG oslo_concurrency.processutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/73a187fa-5479-4191-bd44-757c3840137a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr8h_emcr" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:02:57 compute-0 nova_compute[254092]: 2025-11-25 17:02:57.274 254096 DEBUG nova.storage.rbd_utils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 73a187fa-5479-4191-bd44-757c3840137a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:02:57 compute-0 nova_compute[254092]: 2025-11-25 17:02:57.280 254096 DEBUG oslo_concurrency.processutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/73a187fa-5479-4191-bd44-757c3840137a/disk.config 73a187fa-5479-4191-bd44-757c3840137a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:02:57 compute-0 nova_compute[254092]: 2025-11-25 17:02:57.379 254096 DEBUG nova.network.neutron [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Successfully created port: 142675a5-3c37-4e43-9f80-e8fedd63f3cf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 17:02:57 compute-0 nova_compute[254092]: 2025-11-25 17:02:57.490 254096 DEBUG oslo_concurrency.processutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/73a187fa-5479-4191-bd44-757c3840137a/disk.config 73a187fa-5479-4191-bd44-757c3840137a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.210s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:02:57 compute-0 nova_compute[254092]: 2025-11-25 17:02:57.492 254096 INFO nova.virt.libvirt.driver [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Deleting local config drive /var/lib/nova/instances/73a187fa-5479-4191-bd44-757c3840137a/disk.config because it was imported into RBD.
Nov 25 17:02:57 compute-0 kernel: tapb0512c6a-fb: entered promiscuous mode
Nov 25 17:02:57 compute-0 NetworkManager[48891]: <info>  [1764090177.5926] manager: (tapb0512c6a-fb): new Tun device (/org/freedesktop/NetworkManager/Devices/495)
Nov 25 17:02:57 compute-0 ovn_controller[153477]: 2025-11-25T17:02:57Z|01206|binding|INFO|Claiming lport b0512c6a-fbc4-4639-8508-e6493d18bd3a for this chassis.
Nov 25 17:02:57 compute-0 ovn_controller[153477]: 2025-11-25T17:02:57Z|01207|binding|INFO|b0512c6a-fbc4-4639-8508-e6493d18bd3a: Claiming fa:16:3e:d7:b5:b4 10.100.0.9
Nov 25 17:02:57 compute-0 nova_compute[254092]: 2025-11-25 17:02:57.593 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:57 compute-0 nova_compute[254092]: 2025-11-25 17:02:57.596 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.605 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:b5:b4 10.100.0.9'], port_security=['fa:16:3e:d7:b5:b4 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '73a187fa-5479-4191-bd44-757c3840137a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '2', 'neutron:security_group_ids': '05b9ee3b-4cbe-486c-b386-9a71c1c7373a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=78a09f35-40c2-481d-a3e7-9cd1e21550b9, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=b0512c6a-fbc4-4639-8508-e6493d18bd3a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.607 163338 INFO neutron.agent.ovn.metadata.agent [-] Port b0512c6a-fbc4-4639-8508-e6493d18bd3a in datapath cd54b59f-8568-4ad8-a75d-4fcdad6f8dce bound to our chassis
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.608 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cd54b59f-8568-4ad8-a75d-4fcdad6f8dce
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.618 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[48fe6405-c08b-499a-aaa3-822055e92850]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.620 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcd54b59f-81 in ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.622 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcd54b59f-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.622 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[dc3e2029-62cc-44fa-bf0f-1f911490e6ee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.623 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7da812b8-9eb1-4c83-a448-95819239aa58]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:57 compute-0 systemd-udevd[383076]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:02:57 compute-0 systemd-machined[216343]: New machine qemu-148-instance-00000074.
Nov 25 17:02:57 compute-0 NetworkManager[48891]: <info>  [1764090177.6381] device (tapb0512c6a-fb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.636 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[2f3ad6fe-7dab-488d-9186-b5f4e5ae0304]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:57 compute-0 NetworkManager[48891]: <info>  [1764090177.6390] device (tapb0512c6a-fb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:02:57 compute-0 nova_compute[254092]: 2025-11-25 17:02:57.656 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:57 compute-0 systemd[1]: Started Virtual Machine qemu-148-instance-00000074.
Nov 25 17:02:57 compute-0 ovn_controller[153477]: 2025-11-25T17:02:57Z|01208|binding|INFO|Setting lport b0512c6a-fbc4-4639-8508-e6493d18bd3a ovn-installed in OVS
Nov 25 17:02:57 compute-0 ovn_controller[153477]: 2025-11-25T17:02:57Z|01209|binding|INFO|Setting lport b0512c6a-fbc4-4639-8508-e6493d18bd3a up in Southbound
Nov 25 17:02:57 compute-0 nova_compute[254092]: 2025-11-25 17:02:57.662 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.663 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6efaf004-f326-4549-9d90-acb89ff7f403]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.691 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[da8a5e08-c51d-45ad-89a8-c6bc1afc4456]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.696 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[588150be-28e1-475a-8900-e3b7dbcc77bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:57 compute-0 NetworkManager[48891]: <info>  [1764090177.6976] manager: (tapcd54b59f-80): new Veth device (/org/freedesktop/NetworkManager/Devices/496)
Nov 25 17:02:57 compute-0 systemd-udevd[383080]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.727 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6f2eb403-d49f-4562-9a75-10d56837dfb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.731 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[049b5b74-7132-424b-8f38-0a7931c66378]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:57 compute-0 NetworkManager[48891]: <info>  [1764090177.7569] device (tapcd54b59f-80): carrier: link connected
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.761 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9835f549-ccbb-4a63-b64d-3f50f9bc5329]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.776 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[519dd5ff-3d2e-42aa-bd24-4e07a5b6c926]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcd54b59f-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:34:f1:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 355], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663534, 'reachable_time': 36364, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383109, 'error': None, 'target': 'ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.789 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[739a8534-962f-4f14-bd10-0a89bf64beab]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe34:f118'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663534, 'tstamp': 663534}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 383110, 'error': None, 'target': 'ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.808 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f96714f2-74c2-49e0-8e44-279b89bd9931]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcd54b59f-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:34:f1:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 355], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663534, 'reachable_time': 36364, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 383111, 'error': None, 'target': 'ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.838 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b39cfbf8-7d36-4335-a401-6687822306bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.893 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[92a605ed-ede1-491f-a8a0-6232b90e72d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.894 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcd54b59f-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.895 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:02:57 compute-0 ceph-mon[74985]: pgmap v2382: 321 pgs: 321 active+clean; 88 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.896 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcd54b59f-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:02:57 compute-0 nova_compute[254092]: 2025-11-25 17:02:57.900 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:57 compute-0 kernel: tapcd54b59f-80: entered promiscuous mode
Nov 25 17:02:57 compute-0 NetworkManager[48891]: <info>  [1764090177.9016] manager: (tapcd54b59f-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/497)
Nov 25 17:02:57 compute-0 nova_compute[254092]: 2025-11-25 17:02:57.902 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.906 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcd54b59f-80, col_values=(('external_ids', {'iface-id': 'd6826306-6b17-44f3-a972-7d5089250616'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:02:57 compute-0 nova_compute[254092]: 2025-11-25 17:02:57.907 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:57 compute-0 ovn_controller[153477]: 2025-11-25T17:02:57Z|01210|binding|INFO|Releasing lport d6826306-6b17-44f3-a972-7d5089250616 from this chassis (sb_readonly=0)
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.908 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cd54b59f-8568-4ad8-a75d-4fcdad6f8dce.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cd54b59f-8568-4ad8-a75d-4fcdad6f8dce.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.909 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[36bec1dd-2fac-4b1f-b60c-e2630555ee51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.910 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]: global
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/cd54b59f-8568-4ad8-a75d-4fcdad6f8dce.pid.haproxy
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID cd54b59f-8568-4ad8-a75d-4fcdad6f8dce
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 17:02:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:02:57.910 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce', 'env', 'PROCESS_TAG=haproxy-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cd54b59f-8568-4ad8-a75d-4fcdad6f8dce.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 17:02:57 compute-0 nova_compute[254092]: 2025-11-25 17:02:57.919 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:02:57 compute-0 nova_compute[254092]: 2025-11-25 17:02:57.925 254096 DEBUG nova.compute.manager [req-128f2660-080b-453b-8605-6d9747647582 req-3d5631f3-e43b-4424-b72f-509c0211dcae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Received event network-vif-plugged-b0512c6a-fbc4-4639-8508-e6493d18bd3a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:02:57 compute-0 nova_compute[254092]: 2025-11-25 17:02:57.926 254096 DEBUG oslo_concurrency.lockutils [req-128f2660-080b-453b-8605-6d9747647582 req-3d5631f3-e43b-4424-b72f-509c0211dcae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "73a187fa-5479-4191-bd44-757c3840137a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:02:57 compute-0 nova_compute[254092]: 2025-11-25 17:02:57.926 254096 DEBUG oslo_concurrency.lockutils [req-128f2660-080b-453b-8605-6d9747647582 req-3d5631f3-e43b-4424-b72f-509c0211dcae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "73a187fa-5479-4191-bd44-757c3840137a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:02:57 compute-0 nova_compute[254092]: 2025-11-25 17:02:57.926 254096 DEBUG oslo_concurrency.lockutils [req-128f2660-080b-453b-8605-6d9747647582 req-3d5631f3-e43b-4424-b72f-509c0211dcae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "73a187fa-5479-4191-bd44-757c3840137a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:02:57 compute-0 nova_compute[254092]: 2025-11-25 17:02:57.927 254096 DEBUG nova.compute.manager [req-128f2660-080b-453b-8605-6d9747647582 req-3d5631f3-e43b-4424-b72f-509c0211dcae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Processing event network-vif-plugged-b0512c6a-fbc4-4639-8508-e6493d18bd3a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 17:02:58 compute-0 nova_compute[254092]: 2025-11-25 17:02:58.093 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090178.0933955, 73a187fa-5479-4191-bd44-757c3840137a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:02:58 compute-0 nova_compute[254092]: 2025-11-25 17:02:58.095 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73a187fa-5479-4191-bd44-757c3840137a] VM Started (Lifecycle Event)
Nov 25 17:02:58 compute-0 nova_compute[254092]: 2025-11-25 17:02:58.097 254096 DEBUG nova.compute.manager [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 17:02:58 compute-0 nova_compute[254092]: 2025-11-25 17:02:58.101 254096 DEBUG nova.virt.libvirt.driver [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 17:02:58 compute-0 nova_compute[254092]: 2025-11-25 17:02:58.104 254096 INFO nova.virt.libvirt.driver [-] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Instance spawned successfully.
Nov 25 17:02:58 compute-0 nova_compute[254092]: 2025-11-25 17:02:58.105 254096 DEBUG nova.virt.libvirt.driver [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 17:02:58 compute-0 nova_compute[254092]: 2025-11-25 17:02:58.118 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:02:58 compute-0 nova_compute[254092]: 2025-11-25 17:02:58.124 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:02:58 compute-0 nova_compute[254092]: 2025-11-25 17:02:58.131 254096 DEBUG nova.virt.libvirt.driver [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:02:58 compute-0 nova_compute[254092]: 2025-11-25 17:02:58.131 254096 DEBUG nova.virt.libvirt.driver [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:02:58 compute-0 nova_compute[254092]: 2025-11-25 17:02:58.132 254096 DEBUG nova.virt.libvirt.driver [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:02:58 compute-0 nova_compute[254092]: 2025-11-25 17:02:58.132 254096 DEBUG nova.virt.libvirt.driver [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:02:58 compute-0 nova_compute[254092]: 2025-11-25 17:02:58.133 254096 DEBUG nova.virt.libvirt.driver [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:02:58 compute-0 nova_compute[254092]: 2025-11-25 17:02:58.133 254096 DEBUG nova.virt.libvirt.driver [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:02:58 compute-0 nova_compute[254092]: 2025-11-25 17:02:58.158 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73a187fa-5479-4191-bd44-757c3840137a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:02:58 compute-0 nova_compute[254092]: 2025-11-25 17:02:58.158 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090178.0935426, 73a187fa-5479-4191-bd44-757c3840137a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:02:58 compute-0 nova_compute[254092]: 2025-11-25 17:02:58.159 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73a187fa-5479-4191-bd44-757c3840137a] VM Paused (Lifecycle Event)
Nov 25 17:02:58 compute-0 nova_compute[254092]: 2025-11-25 17:02:58.185 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:02:58 compute-0 nova_compute[254092]: 2025-11-25 17:02:58.190 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090178.0998085, 73a187fa-5479-4191-bd44-757c3840137a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:02:58 compute-0 nova_compute[254092]: 2025-11-25 17:02:58.191 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73a187fa-5479-4191-bd44-757c3840137a] VM Resumed (Lifecycle Event)
Nov 25 17:02:58 compute-0 nova_compute[254092]: 2025-11-25 17:02:58.208 254096 INFO nova.compute.manager [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Took 6.14 seconds to spawn the instance on the hypervisor.
Nov 25 17:02:58 compute-0 nova_compute[254092]: 2025-11-25 17:02:58.208 254096 DEBUG nova.compute.manager [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:02:58 compute-0 nova_compute[254092]: 2025-11-25 17:02:58.210 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:02:58 compute-0 nova_compute[254092]: 2025-11-25 17:02:58.215 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:02:58 compute-0 nova_compute[254092]: 2025-11-25 17:02:58.255 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 73a187fa-5479-4191-bd44-757c3840137a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:02:58 compute-0 nova_compute[254092]: 2025-11-25 17:02:58.274 254096 INFO nova.compute.manager [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Took 7.07 seconds to build instance.
Nov 25 17:02:58 compute-0 nova_compute[254092]: 2025-11-25 17:02:58.291 254096 DEBUG oslo_concurrency.lockutils [None req-1c149661-2a10-45c9-bdcf-911496bb97eb e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "73a187fa-5479-4191-bd44-757c3840137a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.187s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:02:58 compute-0 podman[383185]: 2025-11-25 17:02:58.303522491 +0000 UTC m=+0.053235111 container create 1d84d91536685470a3976870ef163cfcc0e7223c9cafe7c5d6341cd4e34ce434 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 17:02:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2383: 321 pgs: 321 active+clean; 88 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:02:58 compute-0 systemd[1]: Started libpod-conmon-1d84d91536685470a3976870ef163cfcc0e7223c9cafe7c5d6341cd4e34ce434.scope.
Nov 25 17:02:58 compute-0 podman[383185]: 2025-11-25 17:02:58.271206475 +0000 UTC m=+0.020919115 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 17:02:58 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:02:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e893ac8ad73aa3d08cc59807e1b727426905ed7800c4aa1c5d54d074cb489584/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 17:02:58 compute-0 podman[383185]: 2025-11-25 17:02:58.406151137 +0000 UTC m=+0.155863757 container init 1d84d91536685470a3976870ef163cfcc0e7223c9cafe7c5d6341cd4e34ce434 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 17:02:58 compute-0 podman[383185]: 2025-11-25 17:02:58.411263127 +0000 UTC m=+0.160975747 container start 1d84d91536685470a3976870ef163cfcc0e7223c9cafe7c5d6341cd4e34ce434 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3)
Nov 25 17:02:58 compute-0 neutron-haproxy-ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce[383201]: [NOTICE]   (383205) : New worker (383207) forked
Nov 25 17:02:58 compute-0 neutron-haproxy-ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce[383201]: [NOTICE]   (383205) : Loading success.
Nov 25 17:02:58 compute-0 nova_compute[254092]: 2025-11-25 17:02:58.604 254096 DEBUG nova.network.neutron [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Successfully updated port: 142675a5-3c37-4e43-9f80-e8fedd63f3cf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:02:58 compute-0 nova_compute[254092]: 2025-11-25 17:02:58.617 254096 DEBUG oslo_concurrency.lockutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "refresh_cache-12deb2c6-31fb-4186-940b-8131e43ea3f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:02:58 compute-0 nova_compute[254092]: 2025-11-25 17:02:58.617 254096 DEBUG oslo_concurrency.lockutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquired lock "refresh_cache-12deb2c6-31fb-4186-940b-8131e43ea3f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:02:58 compute-0 nova_compute[254092]: 2025-11-25 17:02:58.617 254096 DEBUG nova.network.neutron [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 17:02:58 compute-0 nova_compute[254092]: 2025-11-25 17:02:58.776 254096 DEBUG nova.network.neutron [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:02:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:02:59 compute-0 sshd-session[381601]: Connection closed by authenticating user root 47.76.50.188 port 58772 [preauth]
Nov 25 17:02:59 compute-0 nova_compute[254092]: 2025-11-25 17:02:59.663 254096 DEBUG nova.network.neutron [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Updating instance_info_cache with network_info: [{"id": "142675a5-3c37-4e43-9f80-e8fedd63f3cf", "address": "fa:16:3e:4d:64:18", "network": {"id": "51d8d234-0a41-496f-82c7-0c98aa4761b8", "bridge": "br-int", "label": "tempest-network-smoke--615777126", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap142675a5-3c", "ovs_interfaceid": "142675a5-3c37-4e43-9f80-e8fedd63f3cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:02:59 compute-0 nova_compute[254092]: 2025-11-25 17:02:59.678 254096 DEBUG oslo_concurrency.lockutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Releasing lock "refresh_cache-12deb2c6-31fb-4186-940b-8131e43ea3f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:02:59 compute-0 nova_compute[254092]: 2025-11-25 17:02:59.679 254096 DEBUG nova.compute.manager [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Instance network_info: |[{"id": "142675a5-3c37-4e43-9f80-e8fedd63f3cf", "address": "fa:16:3e:4d:64:18", "network": {"id": "51d8d234-0a41-496f-82c7-0c98aa4761b8", "bridge": "br-int", "label": "tempest-network-smoke--615777126", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap142675a5-3c", "ovs_interfaceid": "142675a5-3c37-4e43-9f80-e8fedd63f3cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 17:02:59 compute-0 nova_compute[254092]: 2025-11-25 17:02:59.683 254096 DEBUG nova.virt.libvirt.driver [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Start _get_guest_xml network_info=[{"id": "142675a5-3c37-4e43-9f80-e8fedd63f3cf", "address": "fa:16:3e:4d:64:18", "network": {"id": "51d8d234-0a41-496f-82c7-0c98aa4761b8", "bridge": "br-int", "label": "tempest-network-smoke--615777126", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap142675a5-3c", "ovs_interfaceid": "142675a5-3c37-4e43-9f80-e8fedd63f3cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 17:02:59 compute-0 nova_compute[254092]: 2025-11-25 17:02:59.688 254096 WARNING nova.virt.libvirt.driver [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:02:59 compute-0 nova_compute[254092]: 2025-11-25 17:02:59.694 254096 DEBUG nova.virt.libvirt.host [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 17:02:59 compute-0 nova_compute[254092]: 2025-11-25 17:02:59.695 254096 DEBUG nova.virt.libvirt.host [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 17:02:59 compute-0 nova_compute[254092]: 2025-11-25 17:02:59.698 254096 DEBUG nova.virt.libvirt.host [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 17:02:59 compute-0 nova_compute[254092]: 2025-11-25 17:02:59.698 254096 DEBUG nova.virt.libvirt.host [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 17:02:59 compute-0 nova_compute[254092]: 2025-11-25 17:02:59.699 254096 DEBUG nova.virt.libvirt.driver [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 17:02:59 compute-0 nova_compute[254092]: 2025-11-25 17:02:59.699 254096 DEBUG nova.virt.hardware [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 17:02:59 compute-0 nova_compute[254092]: 2025-11-25 17:02:59.700 254096 DEBUG nova.virt.hardware [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 17:02:59 compute-0 nova_compute[254092]: 2025-11-25 17:02:59.700 254096 DEBUG nova.virt.hardware [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 17:02:59 compute-0 nova_compute[254092]: 2025-11-25 17:02:59.701 254096 DEBUG nova.virt.hardware [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 17:02:59 compute-0 nova_compute[254092]: 2025-11-25 17:02:59.701 254096 DEBUG nova.virt.hardware [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 17:02:59 compute-0 nova_compute[254092]: 2025-11-25 17:02:59.701 254096 DEBUG nova.virt.hardware [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 17:02:59 compute-0 nova_compute[254092]: 2025-11-25 17:02:59.702 254096 DEBUG nova.virt.hardware [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 17:02:59 compute-0 nova_compute[254092]: 2025-11-25 17:02:59.702 254096 DEBUG nova.virt.hardware [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 17:02:59 compute-0 nova_compute[254092]: 2025-11-25 17:02:59.702 254096 DEBUG nova.virt.hardware [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 17:02:59 compute-0 nova_compute[254092]: 2025-11-25 17:02:59.704 254096 DEBUG nova.virt.hardware [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 17:02:59 compute-0 nova_compute[254092]: 2025-11-25 17:02:59.704 254096 DEBUG nova.virt.hardware [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 17:02:59 compute-0 nova_compute[254092]: 2025-11-25 17:02:59.708 254096 DEBUG oslo_concurrency.processutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:02:59 compute-0 ceph-mon[74985]: pgmap v2383: 321 pgs: 321 active+clean; 88 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:02:59 compute-0 nova_compute[254092]: 2025-11-25 17:02:59.945 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:00 compute-0 nova_compute[254092]: 2025-11-25 17:03:00.020 254096 DEBUG nova.compute.manager [req-d71ba9bf-118f-4c6b-b02f-de75ec910a00 req-b65e7b60-45fe-47d8-86fd-79d8bf5eb24f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Received event network-vif-plugged-b0512c6a-fbc4-4639-8508-e6493d18bd3a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:03:00 compute-0 nova_compute[254092]: 2025-11-25 17:03:00.022 254096 DEBUG oslo_concurrency.lockutils [req-d71ba9bf-118f-4c6b-b02f-de75ec910a00 req-b65e7b60-45fe-47d8-86fd-79d8bf5eb24f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "73a187fa-5479-4191-bd44-757c3840137a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:03:00 compute-0 nova_compute[254092]: 2025-11-25 17:03:00.024 254096 DEBUG oslo_concurrency.lockutils [req-d71ba9bf-118f-4c6b-b02f-de75ec910a00 req-b65e7b60-45fe-47d8-86fd-79d8bf5eb24f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "73a187fa-5479-4191-bd44-757c3840137a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:03:00 compute-0 nova_compute[254092]: 2025-11-25 17:03:00.025 254096 DEBUG oslo_concurrency.lockutils [req-d71ba9bf-118f-4c6b-b02f-de75ec910a00 req-b65e7b60-45fe-47d8-86fd-79d8bf5eb24f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "73a187fa-5479-4191-bd44-757c3840137a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:03:00 compute-0 nova_compute[254092]: 2025-11-25 17:03:00.026 254096 DEBUG nova.compute.manager [req-d71ba9bf-118f-4c6b-b02f-de75ec910a00 req-b65e7b60-45fe-47d8-86fd-79d8bf5eb24f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] No waiting events found dispatching network-vif-plugged-b0512c6a-fbc4-4639-8508-e6493d18bd3a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:03:00 compute-0 nova_compute[254092]: 2025-11-25 17:03:00.027 254096 WARNING nova.compute.manager [req-d71ba9bf-118f-4c6b-b02f-de75ec910a00 req-b65e7b60-45fe-47d8-86fd-79d8bf5eb24f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Received unexpected event network-vif-plugged-b0512c6a-fbc4-4639-8508-e6493d18bd3a for instance with vm_state active and task_state None.
Nov 25 17:03:00 compute-0 nova_compute[254092]: 2025-11-25 17:03:00.028 254096 DEBUG nova.compute.manager [req-d71ba9bf-118f-4c6b-b02f-de75ec910a00 req-b65e7b60-45fe-47d8-86fd-79d8bf5eb24f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Received event network-changed-142675a5-3c37-4e43-9f80-e8fedd63f3cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:03:00 compute-0 nova_compute[254092]: 2025-11-25 17:03:00.029 254096 DEBUG nova.compute.manager [req-d71ba9bf-118f-4c6b-b02f-de75ec910a00 req-b65e7b60-45fe-47d8-86fd-79d8bf5eb24f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Refreshing instance network info cache due to event network-changed-142675a5-3c37-4e43-9f80-e8fedd63f3cf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:03:00 compute-0 nova_compute[254092]: 2025-11-25 17:03:00.031 254096 DEBUG oslo_concurrency.lockutils [req-d71ba9bf-118f-4c6b-b02f-de75ec910a00 req-b65e7b60-45fe-47d8-86fd-79d8bf5eb24f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-12deb2c6-31fb-4186-940b-8131e43ea3f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:03:00 compute-0 nova_compute[254092]: 2025-11-25 17:03:00.032 254096 DEBUG oslo_concurrency.lockutils [req-d71ba9bf-118f-4c6b-b02f-de75ec910a00 req-b65e7b60-45fe-47d8-86fd-79d8bf5eb24f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-12deb2c6-31fb-4186-940b-8131e43ea3f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:03:00 compute-0 nova_compute[254092]: 2025-11-25 17:03:00.033 254096 DEBUG nova.network.neutron [req-d71ba9bf-118f-4c6b-b02f-de75ec910a00 req-b65e7b60-45fe-47d8-86fd-79d8bf5eb24f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Refreshing network info cache for port 142675a5-3c37-4e43-9f80-e8fedd63f3cf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:03:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:03:00 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4005194116' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:03:00 compute-0 nova_compute[254092]: 2025-11-25 17:03:00.142 254096 DEBUG oslo_concurrency.processutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:03:00 compute-0 nova_compute[254092]: 2025-11-25 17:03:00.169 254096 DEBUG nova.storage.rbd_utils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image 12deb2c6-31fb-4186-940b-8131e43ea3f8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:03:00 compute-0 nova_compute[254092]: 2025-11-25 17:03:00.175 254096 DEBUG oslo_concurrency.processutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:03:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2384: 321 pgs: 321 active+clean; 103 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.7 MiB/s wr, 66 op/s
Nov 25 17:03:00 compute-0 NetworkManager[48891]: <info>  [1764090180.5802] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/498)
Nov 25 17:03:00 compute-0 NetworkManager[48891]: <info>  [1764090180.5819] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/499)
Nov 25 17:03:00 compute-0 nova_compute[254092]: 2025-11-25 17:03:00.589 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:03:00 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/680489669' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:03:00 compute-0 nova_compute[254092]: 2025-11-25 17:03:00.660 254096 DEBUG oslo_concurrency.processutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:03:00 compute-0 nova_compute[254092]: 2025-11-25 17:03:00.662 254096 DEBUG nova.virt.libvirt.vif [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:02:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1828112716',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1828112716',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-804365052-acc',id=117,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBENCxp3bTXGsKjC+pXKPCNnWwcQujVQHZ7u1aFGP/yEo2FpTOpo1ONs5vq960JiIu5OhMB87S9/LqqgZ1/j7VB13L8Lrffj2U41nmW0H1eg/FnO9M5/++4mSrfYAP8cVng==',key_name='tempest-TestSecurityGroupsBasicOps-209947328',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f19d2d0776e84c3c99c640c0b7ed3743',ramdisk_id='',reservation_id='r-ez50uxgr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-804365052',owner_user_name='tempest-TestSecurityGroupsBasicOps-804365052-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:02:56Z,user_data=None,user_id='72f29c5d97464520bee19cb128258fd5',uuid=12deb2c6-31fb-4186-940b-8131e43ea3f8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "142675a5-3c37-4e43-9f80-e8fedd63f3cf", "address": "fa:16:3e:4d:64:18", "network": {"id": "51d8d234-0a41-496f-82c7-0c98aa4761b8", "bridge": "br-int", "label": "tempest-network-smoke--615777126", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap142675a5-3c", "ovs_interfaceid": "142675a5-3c37-4e43-9f80-e8fedd63f3cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:03:00 compute-0 nova_compute[254092]: 2025-11-25 17:03:00.662 254096 DEBUG nova.network.os_vif_util [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converting VIF {"id": "142675a5-3c37-4e43-9f80-e8fedd63f3cf", "address": "fa:16:3e:4d:64:18", "network": {"id": "51d8d234-0a41-496f-82c7-0c98aa4761b8", "bridge": "br-int", "label": "tempest-network-smoke--615777126", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap142675a5-3c", "ovs_interfaceid": "142675a5-3c37-4e43-9f80-e8fedd63f3cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:03:00 compute-0 nova_compute[254092]: 2025-11-25 17:03:00.663 254096 DEBUG nova.network.os_vif_util [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:64:18,bridge_name='br-int',has_traffic_filtering=True,id=142675a5-3c37-4e43-9f80-e8fedd63f3cf,network=Network(51d8d234-0a41-496f-82c7-0c98aa4761b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap142675a5-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:03:00 compute-0 nova_compute[254092]: 2025-11-25 17:03:00.664 254096 DEBUG nova.objects.instance [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lazy-loading 'pci_devices' on Instance uuid 12deb2c6-31fb-4186-940b-8131e43ea3f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:03:00 compute-0 nova_compute[254092]: 2025-11-25 17:03:00.666 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:00 compute-0 ovn_controller[153477]: 2025-11-25T17:03:00Z|01211|binding|INFO|Releasing lport d6826306-6b17-44f3-a972-7d5089250616 from this chassis (sb_readonly=0)
Nov 25 17:03:00 compute-0 nova_compute[254092]: 2025-11-25 17:03:00.673 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:00 compute-0 nova_compute[254092]: 2025-11-25 17:03:00.684 254096 DEBUG nova.virt.libvirt.driver [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] End _get_guest_xml xml=<domain type="kvm">
Nov 25 17:03:00 compute-0 nova_compute[254092]:   <uuid>12deb2c6-31fb-4186-940b-8131e43ea3f8</uuid>
Nov 25 17:03:00 compute-0 nova_compute[254092]:   <name>instance-00000075</name>
Nov 25 17:03:00 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 17:03:00 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 17:03:00 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:03:00 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1828112716</nova:name>
Nov 25 17:03:00 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 17:02:59</nova:creationTime>
Nov 25 17:03:00 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 17:03:00 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 17:03:00 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 17:03:00 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 17:03:00 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:03:00 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 17:03:00 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 17:03:00 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 17:03:00 compute-0 nova_compute[254092]:         <nova:user uuid="72f29c5d97464520bee19cb128258fd5">tempest-TestSecurityGroupsBasicOps-804365052-project-member</nova:user>
Nov 25 17:03:00 compute-0 nova_compute[254092]:         <nova:project uuid="f19d2d0776e84c3c99c640c0b7ed3743">tempest-TestSecurityGroupsBasicOps-804365052</nova:project>
Nov 25 17:03:00 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 17:03:00 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 17:03:00 compute-0 nova_compute[254092]:         <nova:port uuid="142675a5-3c37-4e43-9f80-e8fedd63f3cf">
Nov 25 17:03:00 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 17:03:00 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 17:03:00 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:03:00 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 17:03:00 compute-0 nova_compute[254092]:     <system>
Nov 25 17:03:00 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 17:03:00 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 17:03:00 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:03:00 compute-0 nova_compute[254092]:       <entry name="serial">12deb2c6-31fb-4186-940b-8131e43ea3f8</entry>
Nov 25 17:03:00 compute-0 nova_compute[254092]:       <entry name="uuid">12deb2c6-31fb-4186-940b-8131e43ea3f8</entry>
Nov 25 17:03:00 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     </system>
Nov 25 17:03:00 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:03:00 compute-0 nova_compute[254092]:   <os>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:   </os>
Nov 25 17:03:00 compute-0 nova_compute[254092]:   <features>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:   </features>
Nov 25 17:03:00 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 17:03:00 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:03:00 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 17:03:00 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:03:00 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 17:03:00 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/12deb2c6-31fb-4186-940b-8131e43ea3f8_disk">
Nov 25 17:03:00 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:       </source>
Nov 25 17:03:00 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:03:00 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:03:00 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 17:03:00 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/12deb2c6-31fb-4186-940b-8131e43ea3f8_disk.config">
Nov 25 17:03:00 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:       </source>
Nov 25 17:03:00 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:03:00 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:03:00 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 17:03:00 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:4d:64:18"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:       <target dev="tap142675a5-3c"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 17:03:00 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/12deb2c6-31fb-4186-940b-8131e43ea3f8/console.log" append="off"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     <video>
Nov 25 17:03:00 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     </video>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 17:03:00 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 17:03:00 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 17:03:00 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:03:00 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:03:00 compute-0 nova_compute[254092]: </domain>
Nov 25 17:03:00 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 17:03:00 compute-0 nova_compute[254092]: 2025-11-25 17:03:00.685 254096 DEBUG nova.compute.manager [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Preparing to wait for external event network-vif-plugged-142675a5-3c37-4e43-9f80-e8fedd63f3cf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 17:03:00 compute-0 nova_compute[254092]: 2025-11-25 17:03:00.685 254096 DEBUG oslo_concurrency.lockutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "12deb2c6-31fb-4186-940b-8131e43ea3f8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:03:00 compute-0 nova_compute[254092]: 2025-11-25 17:03:00.685 254096 DEBUG oslo_concurrency.lockutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "12deb2c6-31fb-4186-940b-8131e43ea3f8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:03:00 compute-0 nova_compute[254092]: 2025-11-25 17:03:00.686 254096 DEBUG oslo_concurrency.lockutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "12deb2c6-31fb-4186-940b-8131e43ea3f8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:03:00 compute-0 nova_compute[254092]: 2025-11-25 17:03:00.686 254096 DEBUG nova.virt.libvirt.vif [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:02:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1828112716',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1828112716',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-804365052-acc',id=117,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBENCxp3bTXGsKjC+pXKPCNnWwcQujVQHZ7u1aFGP/yEo2FpTOpo1ONs5vq960JiIu5OhMB87S9/LqqgZ1/j7VB13L8Lrffj2U41nmW0H1eg/FnO9M5/++4mSrfYAP8cVng==',key_name='tempest-TestSecurityGroupsBasicOps-209947328',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f19d2d0776e84c3c99c640c0b7ed3743',ramdisk_id='',reservation_id='r-ez50uxgr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-804365052',owner_user_name='tempest-TestSecurityGroupsBasicOps-804365052-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:02:56Z,user_data=None,user_id='72f29c5d97464520bee19cb128258fd5',uuid=12deb2c6-31fb-4186-940b-8131e43ea3f8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "142675a5-3c37-4e43-9f80-e8fedd63f3cf", "address": "fa:16:3e:4d:64:18", "network": {"id": "51d8d234-0a41-496f-82c7-0c98aa4761b8", "bridge": "br-int", "label": "tempest-network-smoke--615777126", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap142675a5-3c", "ovs_interfaceid": "142675a5-3c37-4e43-9f80-e8fedd63f3cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:03:00 compute-0 nova_compute[254092]: 2025-11-25 17:03:00.687 254096 DEBUG nova.network.os_vif_util [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converting VIF {"id": "142675a5-3c37-4e43-9f80-e8fedd63f3cf", "address": "fa:16:3e:4d:64:18", "network": {"id": "51d8d234-0a41-496f-82c7-0c98aa4761b8", "bridge": "br-int", "label": "tempest-network-smoke--615777126", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap142675a5-3c", "ovs_interfaceid": "142675a5-3c37-4e43-9f80-e8fedd63f3cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:03:00 compute-0 nova_compute[254092]: 2025-11-25 17:03:00.687 254096 DEBUG nova.network.os_vif_util [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:64:18,bridge_name='br-int',has_traffic_filtering=True,id=142675a5-3c37-4e43-9f80-e8fedd63f3cf,network=Network(51d8d234-0a41-496f-82c7-0c98aa4761b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap142675a5-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:03:00 compute-0 nova_compute[254092]: 2025-11-25 17:03:00.688 254096 DEBUG os_vif [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:64:18,bridge_name='br-int',has_traffic_filtering=True,id=142675a5-3c37-4e43-9f80-e8fedd63f3cf,network=Network(51d8d234-0a41-496f-82c7-0c98aa4761b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap142675a5-3c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:03:00 compute-0 nova_compute[254092]: 2025-11-25 17:03:00.688 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:00 compute-0 nova_compute[254092]: 2025-11-25 17:03:00.688 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:03:00 compute-0 nova_compute[254092]: 2025-11-25 17:03:00.689 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:03:00 compute-0 nova_compute[254092]: 2025-11-25 17:03:00.694 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:00 compute-0 nova_compute[254092]: 2025-11-25 17:03:00.694 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap142675a5-3c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:03:00 compute-0 nova_compute[254092]: 2025-11-25 17:03:00.694 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap142675a5-3c, col_values=(('external_ids', {'iface-id': '142675a5-3c37-4e43-9f80-e8fedd63f3cf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4d:64:18', 'vm-uuid': '12deb2c6-31fb-4186-940b-8131e43ea3f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:03:00 compute-0 nova_compute[254092]: 2025-11-25 17:03:00.697 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:00 compute-0 NetworkManager[48891]: <info>  [1764090180.6984] manager: (tap142675a5-3c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/500)
Nov 25 17:03:00 compute-0 nova_compute[254092]: 2025-11-25 17:03:00.701 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:03:00 compute-0 nova_compute[254092]: 2025-11-25 17:03:00.705 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:00 compute-0 nova_compute[254092]: 2025-11-25 17:03:00.706 254096 INFO os_vif [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:64:18,bridge_name='br-int',has_traffic_filtering=True,id=142675a5-3c37-4e43-9f80-e8fedd63f3cf,network=Network(51d8d234-0a41-496f-82c7-0c98aa4761b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap142675a5-3c')
Nov 25 17:03:00 compute-0 nova_compute[254092]: 2025-11-25 17:03:00.752 254096 DEBUG nova.virt.libvirt.driver [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:03:00 compute-0 nova_compute[254092]: 2025-11-25 17:03:00.753 254096 DEBUG nova.virt.libvirt.driver [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:03:00 compute-0 nova_compute[254092]: 2025-11-25 17:03:00.753 254096 DEBUG nova.virt.libvirt.driver [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] No VIF found with MAC fa:16:3e:4d:64:18, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:03:00 compute-0 nova_compute[254092]: 2025-11-25 17:03:00.753 254096 INFO nova.virt.libvirt.driver [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Using config drive
Nov 25 17:03:00 compute-0 nova_compute[254092]: 2025-11-25 17:03:00.770 254096 DEBUG nova.storage.rbd_utils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image 12deb2c6-31fb-4186-940b-8131e43ea3f8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:03:00 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4005194116' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:03:00 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/680489669' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:03:00 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #111. Immutable memtables: 0.
Nov 25 17:03:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:03:00.954614) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 17:03:00 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 111
Nov 25 17:03:00 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090180954659, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 1521, "num_deletes": 251, "total_data_size": 2344495, "memory_usage": 2377952, "flush_reason": "Manual Compaction"}
Nov 25 17:03:00 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #112: started
Nov 25 17:03:00 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090180967210, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 112, "file_size": 2310245, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 48401, "largest_seqno": 49921, "table_properties": {"data_size": 2303162, "index_size": 4090, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14991, "raw_average_key_size": 20, "raw_value_size": 2288989, "raw_average_value_size": 3068, "num_data_blocks": 182, "num_entries": 746, "num_filter_entries": 746, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764090022, "oldest_key_time": 1764090022, "file_creation_time": 1764090180, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 112, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:03:00 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 12642 microseconds, and 6135 cpu microseconds.
Nov 25 17:03:00 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:03:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:03:00.967252) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #112: 2310245 bytes OK
Nov 25 17:03:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:03:00.967271) [db/memtable_list.cc:519] [default] Level-0 commit table #112 started
Nov 25 17:03:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:03:00.969491) [db/memtable_list.cc:722] [default] Level-0 commit table #112: memtable #1 done
Nov 25 17:03:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:03:00.969521) EVENT_LOG_v1 {"time_micros": 1764090180969511, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 17:03:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:03:00.969548) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 17:03:00 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 2337797, prev total WAL file size 2337797, number of live WAL files 2.
Nov 25 17:03:00 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000108.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:03:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:03:00.970788) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Nov 25 17:03:00 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 17:03:00 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [112(2256KB)], [110(8002KB)]
Nov 25 17:03:00 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090180970855, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [112], "files_L6": [110], "score": -1, "input_data_size": 10504823, "oldest_snapshot_seqno": -1}
Nov 25 17:03:01 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #113: 7035 keys, 8754317 bytes, temperature: kUnknown
Nov 25 17:03:01 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090181040183, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 113, "file_size": 8754317, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8708835, "index_size": 26810, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17605, "raw_key_size": 183920, "raw_average_key_size": 26, "raw_value_size": 8584273, "raw_average_value_size": 1220, "num_data_blocks": 1044, "num_entries": 7035, "num_filter_entries": 7035, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764090180, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:03:01 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:03:01 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:03:01.040408) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 8754317 bytes
Nov 25 17:03:01 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:03:01.042176) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 151.4 rd, 126.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 7.8 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(8.3) write-amplify(3.8) OK, records in: 7549, records dropped: 514 output_compression: NoCompression
Nov 25 17:03:01 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:03:01.042202) EVENT_LOG_v1 {"time_micros": 1764090181042191, "job": 66, "event": "compaction_finished", "compaction_time_micros": 69400, "compaction_time_cpu_micros": 20362, "output_level": 6, "num_output_files": 1, "total_output_size": 8754317, "num_input_records": 7549, "num_output_records": 7035, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 17:03:01 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000112.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:03:01 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090181042838, "job": 66, "event": "table_file_deletion", "file_number": 112}
Nov 25 17:03:01 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:03:01 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090181044632, "job": 66, "event": "table_file_deletion", "file_number": 110}
Nov 25 17:03:01 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:03:00.970689) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:03:01 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:03:01.044721) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:03:01 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:03:01.044725) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:03:01 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:03:01.044727) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:03:01 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:03:01.044728) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:03:01 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:03:01.044730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:03:01 compute-0 nova_compute[254092]: 2025-11-25 17:03:01.123 254096 DEBUG nova.compute.manager [req-f68afde4-9c2e-47d3-b2c8-826d60d575a7 req-7c89969b-5376-435e-9619-6935eecc45be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Received event network-changed-b0512c6a-fbc4-4639-8508-e6493d18bd3a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:03:01 compute-0 nova_compute[254092]: 2025-11-25 17:03:01.124 254096 DEBUG nova.compute.manager [req-f68afde4-9c2e-47d3-b2c8-826d60d575a7 req-7c89969b-5376-435e-9619-6935eecc45be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Refreshing instance network info cache due to event network-changed-b0512c6a-fbc4-4639-8508-e6493d18bd3a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:03:01 compute-0 nova_compute[254092]: 2025-11-25 17:03:01.124 254096 DEBUG oslo_concurrency.lockutils [req-f68afde4-9c2e-47d3-b2c8-826d60d575a7 req-7c89969b-5376-435e-9619-6935eecc45be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-73a187fa-5479-4191-bd44-757c3840137a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:03:01 compute-0 nova_compute[254092]: 2025-11-25 17:03:01.124 254096 DEBUG oslo_concurrency.lockutils [req-f68afde4-9c2e-47d3-b2c8-826d60d575a7 req-7c89969b-5376-435e-9619-6935eecc45be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-73a187fa-5479-4191-bd44-757c3840137a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:03:01 compute-0 nova_compute[254092]: 2025-11-25 17:03:01.124 254096 DEBUG nova.network.neutron [req-f68afde4-9c2e-47d3-b2c8-826d60d575a7 req-7c89969b-5376-435e-9619-6935eecc45be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Refreshing network info cache for port b0512c6a-fbc4-4639-8508-e6493d18bd3a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:03:01 compute-0 nova_compute[254092]: 2025-11-25 17:03:01.233 254096 INFO nova.virt.libvirt.driver [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Creating config drive at /var/lib/nova/instances/12deb2c6-31fb-4186-940b-8131e43ea3f8/disk.config
Nov 25 17:03:01 compute-0 nova_compute[254092]: 2025-11-25 17:03:01.238 254096 DEBUG oslo_concurrency.processutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/12deb2c6-31fb-4186-940b-8131e43ea3f8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2d5flnyv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:03:01 compute-0 nova_compute[254092]: 2025-11-25 17:03:01.282 254096 DEBUG nova.network.neutron [req-d71ba9bf-118f-4c6b-b02f-de75ec910a00 req-b65e7b60-45fe-47d8-86fd-79d8bf5eb24f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Updated VIF entry in instance network info cache for port 142675a5-3c37-4e43-9f80-e8fedd63f3cf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:03:01 compute-0 nova_compute[254092]: 2025-11-25 17:03:01.282 254096 DEBUG nova.network.neutron [req-d71ba9bf-118f-4c6b-b02f-de75ec910a00 req-b65e7b60-45fe-47d8-86fd-79d8bf5eb24f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Updating instance_info_cache with network_info: [{"id": "142675a5-3c37-4e43-9f80-e8fedd63f3cf", "address": "fa:16:3e:4d:64:18", "network": {"id": "51d8d234-0a41-496f-82c7-0c98aa4761b8", "bridge": "br-int", "label": "tempest-network-smoke--615777126", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap142675a5-3c", "ovs_interfaceid": "142675a5-3c37-4e43-9f80-e8fedd63f3cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:03:01 compute-0 nova_compute[254092]: 2025-11-25 17:03:01.298 254096 DEBUG oslo_concurrency.lockutils [req-d71ba9bf-118f-4c6b-b02f-de75ec910a00 req-b65e7b60-45fe-47d8-86fd-79d8bf5eb24f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-12deb2c6-31fb-4186-940b-8131e43ea3f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:03:01 compute-0 nova_compute[254092]: 2025-11-25 17:03:01.378 254096 DEBUG oslo_concurrency.processutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/12deb2c6-31fb-4186-940b-8131e43ea3f8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2d5flnyv" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:03:01 compute-0 nova_compute[254092]: 2025-11-25 17:03:01.404 254096 DEBUG nova.storage.rbd_utils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image 12deb2c6-31fb-4186-940b-8131e43ea3f8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:03:01 compute-0 nova_compute[254092]: 2025-11-25 17:03:01.410 254096 DEBUG oslo_concurrency.processutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/12deb2c6-31fb-4186-940b-8131e43ea3f8/disk.config 12deb2c6-31fb-4186-940b-8131e43ea3f8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:03:01 compute-0 nova_compute[254092]: 2025-11-25 17:03:01.615 254096 DEBUG oslo_concurrency.processutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/12deb2c6-31fb-4186-940b-8131e43ea3f8/disk.config 12deb2c6-31fb-4186-940b-8131e43ea3f8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.205s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:03:01 compute-0 nova_compute[254092]: 2025-11-25 17:03:01.616 254096 INFO nova.virt.libvirt.driver [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Deleting local config drive /var/lib/nova/instances/12deb2c6-31fb-4186-940b-8131e43ea3f8/disk.config because it was imported into RBD.
Nov 25 17:03:01 compute-0 kernel: tap142675a5-3c: entered promiscuous mode
Nov 25 17:03:01 compute-0 NetworkManager[48891]: <info>  [1764090181.6708] manager: (tap142675a5-3c): new Tun device (/org/freedesktop/NetworkManager/Devices/501)
Nov 25 17:03:01 compute-0 nova_compute[254092]: 2025-11-25 17:03:01.671 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:01 compute-0 nova_compute[254092]: 2025-11-25 17:03:01.674 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:01 compute-0 ovn_controller[153477]: 2025-11-25T17:03:01Z|01212|binding|INFO|Claiming lport 142675a5-3c37-4e43-9f80-e8fedd63f3cf for this chassis.
Nov 25 17:03:01 compute-0 ovn_controller[153477]: 2025-11-25T17:03:01Z|01213|binding|INFO|142675a5-3c37-4e43-9f80-e8fedd63f3cf: Claiming fa:16:3e:4d:64:18 10.100.0.3
Nov 25 17:03:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:01.686 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:64:18 10.100.0.3'], port_security=['fa:16:3e:4d:64:18 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '12deb2c6-31fb-4186-940b-8131e43ea3f8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-51d8d234-0a41-496f-82c7-0c98aa4761b8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f19d2d0776e84c3c99c640c0b7ed3743', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0d070883-7c27-4b8a-ba4e-1b0864814a05 3898aea7-272e-40b9-acc3-c2b5ff1428fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1cc609c-e570-4376-a4c0-1e1c78f929d6, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=142675a5-3c37-4e43-9f80-e8fedd63f3cf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:03:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:01.687 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 142675a5-3c37-4e43-9f80-e8fedd63f3cf in datapath 51d8d234-0a41-496f-82c7-0c98aa4761b8 bound to our chassis
Nov 25 17:03:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:01.689 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 51d8d234-0a41-496f-82c7-0c98aa4761b8
Nov 25 17:03:01 compute-0 ovn_controller[153477]: 2025-11-25T17:03:01Z|01214|binding|INFO|Setting lport 142675a5-3c37-4e43-9f80-e8fedd63f3cf ovn-installed in OVS
Nov 25 17:03:01 compute-0 ovn_controller[153477]: 2025-11-25T17:03:01Z|01215|binding|INFO|Setting lport 142675a5-3c37-4e43-9f80-e8fedd63f3cf up in Southbound
Nov 25 17:03:01 compute-0 nova_compute[254092]: 2025-11-25 17:03:01.698 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:01.703 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c246db76-40ed-4655-bcf2-d396ad1513b3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:03:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:01.704 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap51d8d234-01 in ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 17:03:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:01.705 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap51d8d234-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 17:03:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:01.705 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a024c44d-d90b-4a30-947d-ea74619ee278]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:03:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:01.706 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[47972a2e-086a-4447-aae3-624ee98f266e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:03:01 compute-0 systemd-machined[216343]: New machine qemu-149-instance-00000075.
Nov 25 17:03:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:01.718 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[78aee2a6-b40b-4904-a836-8a666d96c21b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:03:01 compute-0 systemd[1]: Started Virtual Machine qemu-149-instance-00000075.
Nov 25 17:03:01 compute-0 systemd-udevd[383355]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:03:01 compute-0 NetworkManager[48891]: <info>  [1764090181.7607] device (tap142675a5-3c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:03:01 compute-0 NetworkManager[48891]: <info>  [1764090181.7617] device (tap142675a5-3c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:03:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:01.759 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0b5009f8-7700-4473-8c4e-d8a9799a2db1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:03:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:01.792 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d47719a3-660f-43ef-b531-623f75bbb876]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:03:01 compute-0 systemd-udevd[383357]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:03:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:01.797 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[79427c90-1fd0-41c6-9d4e-ffedb5b20dfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:03:01 compute-0 NetworkManager[48891]: <info>  [1764090181.7998] manager: (tap51d8d234-00): new Veth device (/org/freedesktop/NetworkManager/Devices/502)
Nov 25 17:03:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:01.845 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1d8e07f7-8c36-4a18-b009-50f22903b97a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:03:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:01.847 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[3a254952-1022-499f-92fb-e180191ba284]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:03:01 compute-0 NetworkManager[48891]: <info>  [1764090181.8730] device (tap51d8d234-00): carrier: link connected
Nov 25 17:03:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:01.880 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[28199e01-e8ce-4005-a460-9b8a0771eb69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:03:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:01.899 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3c151136-6d58-4eb9-9c3a-28fd4d1a86c5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap51d8d234-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7d:9b:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 357], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663946, 'reachable_time': 17868, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383385, 'error': None, 'target': 'ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:03:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:01.916 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[31594a13-362d-4d32-b744-d1f7f3417949]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7d:9b5b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663946, 'tstamp': 663946}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 383386, 'error': None, 'target': 'ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:03:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:01.932 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[72de177a-2dde-47d8-8ba3-2c94cbb813d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap51d8d234-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7d:9b:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 357], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663946, 'reachable_time': 17868, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 383387, 'error': None, 'target': 'ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:03:01 compute-0 ceph-mon[74985]: pgmap v2384: 321 pgs: 321 active+clean; 103 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.7 MiB/s wr, 66 op/s
Nov 25 17:03:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:01.959 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5d3493c3-4115-4a67-b0b8-0cbddd884f04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:03:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:02.052 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ba6c2967-301b-45c9-81f1-5229d1fedf89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:03:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:02.054 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap51d8d234-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:03:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:02.055 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:03:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:02.055 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap51d8d234-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:03:02 compute-0 nova_compute[254092]: 2025-11-25 17:03:02.057 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:02 compute-0 NetworkManager[48891]: <info>  [1764090182.0578] manager: (tap51d8d234-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/503)
Nov 25 17:03:02 compute-0 kernel: tap51d8d234-00: entered promiscuous mode
Nov 25 17:03:02 compute-0 nova_compute[254092]: 2025-11-25 17:03:02.059 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:02.063 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap51d8d234-00, col_values=(('external_ids', {'iface-id': '81151907-1dea-47e9-9a64-99f3b4cd12fa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:03:02 compute-0 nova_compute[254092]: 2025-11-25 17:03:02.065 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:02 compute-0 ovn_controller[153477]: 2025-11-25T17:03:02Z|01216|binding|INFO|Releasing lport 81151907-1dea-47e9-9a64-99f3b4cd12fa from this chassis (sb_readonly=0)
Nov 25 17:03:02 compute-0 nova_compute[254092]: 2025-11-25 17:03:02.128 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:02.131 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/51d8d234-0a41-496f-82c7-0c98aa4761b8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/51d8d234-0a41-496f-82c7-0c98aa4761b8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 17:03:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:02.132 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[85fe2416-cfdd-463c-99bb-5d66e32a184c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:03:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:02.133 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 17:03:02 compute-0 ovn_metadata_agent[163333]: global
Nov 25 17:03:02 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 17:03:02 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-51d8d234-0a41-496f-82c7-0c98aa4761b8
Nov 25 17:03:02 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 17:03:02 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 17:03:02 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 17:03:02 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/51d8d234-0a41-496f-82c7-0c98aa4761b8.pid.haproxy
Nov 25 17:03:02 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 17:03:02 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:03:02 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 17:03:02 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 17:03:02 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 17:03:02 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 17:03:02 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 17:03:02 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 17:03:02 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 17:03:02 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 17:03:02 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 17:03:02 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 17:03:02 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 17:03:02 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 17:03:02 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 17:03:02 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:03:02 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:03:02 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 17:03:02 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 17:03:02 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 17:03:02 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 51d8d234-0a41-496f-82c7-0c98aa4761b8
Nov 25 17:03:02 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 17:03:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:02.135 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8', 'env', 'PROCESS_TAG=haproxy-51d8d234-0a41-496f-82c7-0c98aa4761b8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/51d8d234-0a41-496f-82c7-0c98aa4761b8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 17:03:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2385: 321 pgs: 321 active+clean; 134 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 127 op/s
Nov 25 17:03:02 compute-0 nova_compute[254092]: 2025-11-25 17:03:02.465 254096 DEBUG nova.network.neutron [req-f68afde4-9c2e-47d3-b2c8-826d60d575a7 req-7c89969b-5376-435e-9619-6935eecc45be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Updated VIF entry in instance network info cache for port b0512c6a-fbc4-4639-8508-e6493d18bd3a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:03:02 compute-0 nova_compute[254092]: 2025-11-25 17:03:02.466 254096 DEBUG nova.network.neutron [req-f68afde4-9c2e-47d3-b2c8-826d60d575a7 req-7c89969b-5376-435e-9619-6935eecc45be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Updating instance_info_cache with network_info: [{"id": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "address": "fa:16:3e:d7:b5:b4", "network": {"id": "cd54b59f-8568-4ad8-a75d-4fcdad6f8dce", "bridge": "br-int", "label": "tempest-network-smoke--1847695866", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0512c6a-fb", "ovs_interfaceid": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:03:02 compute-0 nova_compute[254092]: 2025-11-25 17:03:02.485 254096 DEBUG oslo_concurrency.lockutils [req-f68afde4-9c2e-47d3-b2c8-826d60d575a7 req-7c89969b-5376-435e-9619-6935eecc45be a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-73a187fa-5479-4191-bd44-757c3840137a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:03:02 compute-0 podman[383459]: 2025-11-25 17:03:02.501306759 +0000 UTC m=+0.063392210 container create f95f4e65c56a1c3e5424fbff018a47f094cf6312f6583c3abb2fac951bec3087 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 17:03:02 compute-0 nova_compute[254092]: 2025-11-25 17:03:02.505 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090182.5052378, 12deb2c6-31fb-4186-940b-8131e43ea3f8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:03:02 compute-0 nova_compute[254092]: 2025-11-25 17:03:02.506 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] VM Started (Lifecycle Event)
Nov 25 17:03:02 compute-0 nova_compute[254092]: 2025-11-25 17:03:02.520 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:03:02 compute-0 nova_compute[254092]: 2025-11-25 17:03:02.524 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090182.5061996, 12deb2c6-31fb-4186-940b-8131e43ea3f8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:03:02 compute-0 nova_compute[254092]: 2025-11-25 17:03:02.524 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] VM Paused (Lifecycle Event)
Nov 25 17:03:02 compute-0 nova_compute[254092]: 2025-11-25 17:03:02.542 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:03:02 compute-0 nova_compute[254092]: 2025-11-25 17:03:02.547 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:03:02 compute-0 systemd[1]: Started libpod-conmon-f95f4e65c56a1c3e5424fbff018a47f094cf6312f6583c3abb2fac951bec3087.scope.
Nov 25 17:03:02 compute-0 podman[383459]: 2025-11-25 17:03:02.465398824 +0000 UTC m=+0.027484335 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 17:03:02 compute-0 nova_compute[254092]: 2025-11-25 17:03:02.563 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:03:02 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:03:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee1fa6406befdaaca4f1d07638e3ce32c9871c9601e12c9f35106bd16f6c1c0d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 17:03:02 compute-0 podman[383459]: 2025-11-25 17:03:02.598249267 +0000 UTC m=+0.160334718 container init f95f4e65c56a1c3e5424fbff018a47f094cf6312f6583c3abb2fac951bec3087 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 17:03:02 compute-0 podman[383459]: 2025-11-25 17:03:02.609209078 +0000 UTC m=+0.171294549 container start f95f4e65c56a1c3e5424fbff018a47f094cf6312f6583c3abb2fac951bec3087 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:03:02 compute-0 neutron-haproxy-ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8[383476]: [NOTICE]   (383480) : New worker (383482) forked
Nov 25 17:03:02 compute-0 neutron-haproxy-ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8[383476]: [NOTICE]   (383480) : Loading success.
Nov 25 17:03:03 compute-0 nova_compute[254092]: 2025-11-25 17:03:03.235 254096 DEBUG nova.compute.manager [req-242c6b7e-cc94-4466-89fd-f663d7f69279 req-73fecbb3-72b8-4d6d-934b-206356429bf9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Received event network-vif-plugged-142675a5-3c37-4e43-9f80-e8fedd63f3cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:03:03 compute-0 nova_compute[254092]: 2025-11-25 17:03:03.235 254096 DEBUG oslo_concurrency.lockutils [req-242c6b7e-cc94-4466-89fd-f663d7f69279 req-73fecbb3-72b8-4d6d-934b-206356429bf9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "12deb2c6-31fb-4186-940b-8131e43ea3f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:03:03 compute-0 nova_compute[254092]: 2025-11-25 17:03:03.236 254096 DEBUG oslo_concurrency.lockutils [req-242c6b7e-cc94-4466-89fd-f663d7f69279 req-73fecbb3-72b8-4d6d-934b-206356429bf9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "12deb2c6-31fb-4186-940b-8131e43ea3f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:03:03 compute-0 nova_compute[254092]: 2025-11-25 17:03:03.237 254096 DEBUG oslo_concurrency.lockutils [req-242c6b7e-cc94-4466-89fd-f663d7f69279 req-73fecbb3-72b8-4d6d-934b-206356429bf9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "12deb2c6-31fb-4186-940b-8131e43ea3f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:03:03 compute-0 nova_compute[254092]: 2025-11-25 17:03:03.237 254096 DEBUG nova.compute.manager [req-242c6b7e-cc94-4466-89fd-f663d7f69279 req-73fecbb3-72b8-4d6d-934b-206356429bf9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Processing event network-vif-plugged-142675a5-3c37-4e43-9f80-e8fedd63f3cf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 17:03:03 compute-0 nova_compute[254092]: 2025-11-25 17:03:03.237 254096 DEBUG nova.compute.manager [req-242c6b7e-cc94-4466-89fd-f663d7f69279 req-73fecbb3-72b8-4d6d-934b-206356429bf9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Received event network-vif-plugged-142675a5-3c37-4e43-9f80-e8fedd63f3cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:03:03 compute-0 nova_compute[254092]: 2025-11-25 17:03:03.238 254096 DEBUG oslo_concurrency.lockutils [req-242c6b7e-cc94-4466-89fd-f663d7f69279 req-73fecbb3-72b8-4d6d-934b-206356429bf9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "12deb2c6-31fb-4186-940b-8131e43ea3f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:03:03 compute-0 nova_compute[254092]: 2025-11-25 17:03:03.238 254096 DEBUG oslo_concurrency.lockutils [req-242c6b7e-cc94-4466-89fd-f663d7f69279 req-73fecbb3-72b8-4d6d-934b-206356429bf9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "12deb2c6-31fb-4186-940b-8131e43ea3f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:03:03 compute-0 nova_compute[254092]: 2025-11-25 17:03:03.238 254096 DEBUG oslo_concurrency.lockutils [req-242c6b7e-cc94-4466-89fd-f663d7f69279 req-73fecbb3-72b8-4d6d-934b-206356429bf9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "12deb2c6-31fb-4186-940b-8131e43ea3f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:03:03 compute-0 nova_compute[254092]: 2025-11-25 17:03:03.239 254096 DEBUG nova.compute.manager [req-242c6b7e-cc94-4466-89fd-f663d7f69279 req-73fecbb3-72b8-4d6d-934b-206356429bf9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] No waiting events found dispatching network-vif-plugged-142675a5-3c37-4e43-9f80-e8fedd63f3cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:03:03 compute-0 nova_compute[254092]: 2025-11-25 17:03:03.239 254096 WARNING nova.compute.manager [req-242c6b7e-cc94-4466-89fd-f663d7f69279 req-73fecbb3-72b8-4d6d-934b-206356429bf9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Received unexpected event network-vif-plugged-142675a5-3c37-4e43-9f80-e8fedd63f3cf for instance with vm_state building and task_state spawning.
Nov 25 17:03:03 compute-0 nova_compute[254092]: 2025-11-25 17:03:03.240 254096 DEBUG nova.compute.manager [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 17:03:03 compute-0 nova_compute[254092]: 2025-11-25 17:03:03.246 254096 DEBUG nova.virt.libvirt.driver [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 17:03:03 compute-0 nova_compute[254092]: 2025-11-25 17:03:03.247 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090183.246282, 12deb2c6-31fb-4186-940b-8131e43ea3f8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:03:03 compute-0 nova_compute[254092]: 2025-11-25 17:03:03.248 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] VM Resumed (Lifecycle Event)
Nov 25 17:03:03 compute-0 nova_compute[254092]: 2025-11-25 17:03:03.255 254096 INFO nova.virt.libvirt.driver [-] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Instance spawned successfully.
Nov 25 17:03:03 compute-0 nova_compute[254092]: 2025-11-25 17:03:03.255 254096 DEBUG nova.virt.libvirt.driver [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 17:03:03 compute-0 nova_compute[254092]: 2025-11-25 17:03:03.281 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:03:03 compute-0 nova_compute[254092]: 2025-11-25 17:03:03.290 254096 DEBUG nova.virt.libvirt.driver [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:03:03 compute-0 nova_compute[254092]: 2025-11-25 17:03:03.291 254096 DEBUG nova.virt.libvirt.driver [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:03:03 compute-0 nova_compute[254092]: 2025-11-25 17:03:03.292 254096 DEBUG nova.virt.libvirt.driver [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:03:03 compute-0 nova_compute[254092]: 2025-11-25 17:03:03.293 254096 DEBUG nova.virt.libvirt.driver [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:03:03 compute-0 nova_compute[254092]: 2025-11-25 17:03:03.294 254096 DEBUG nova.virt.libvirt.driver [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:03:03 compute-0 nova_compute[254092]: 2025-11-25 17:03:03.294 254096 DEBUG nova.virt.libvirt.driver [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:03:03 compute-0 nova_compute[254092]: 2025-11-25 17:03:03.301 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:03:03 compute-0 nova_compute[254092]: 2025-11-25 17:03:03.326 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:03:03 compute-0 nova_compute[254092]: 2025-11-25 17:03:03.369 254096 INFO nova.compute.manager [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Took 7.08 seconds to spawn the instance on the hypervisor.
Nov 25 17:03:03 compute-0 nova_compute[254092]: 2025-11-25 17:03:03.370 254096 DEBUG nova.compute.manager [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:03:03 compute-0 nova_compute[254092]: 2025-11-25 17:03:03.436 254096 INFO nova.compute.manager [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Took 8.10 seconds to build instance.
Nov 25 17:03:03 compute-0 nova_compute[254092]: 2025-11-25 17:03:03.450 254096 DEBUG oslo_concurrency.lockutils [None req-dfa849ad-f0f3-446e-a150-f1df5221eb54 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "12deb2c6-31fb-4186-940b-8131e43ea3f8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.218s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:03:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:03:04 compute-0 ceph-mon[74985]: pgmap v2385: 321 pgs: 321 active+clean; 134 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 127 op/s
Nov 25 17:03:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2386: 321 pgs: 321 active+clean; 134 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 127 op/s
Nov 25 17:03:04 compute-0 nova_compute[254092]: 2025-11-25 17:03:04.990 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:05 compute-0 nova_compute[254092]: 2025-11-25 17:03:05.696 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:06 compute-0 ceph-mon[74985]: pgmap v2386: 321 pgs: 321 active+clean; 134 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 127 op/s
Nov 25 17:03:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2387: 321 pgs: 321 active+clean; 134 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 201 op/s
Nov 25 17:03:07 compute-0 ceph-mon[74985]: pgmap v2387: 321 pgs: 321 active+clean; 134 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 201 op/s
Nov 25 17:03:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2388: 321 pgs: 321 active+clean; 134 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Nov 25 17:03:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:03:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:09.136 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:03:09 compute-0 nova_compute[254092]: 2025-11-25 17:03:09.137 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:09.138 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 17:03:09 compute-0 ceph-mon[74985]: pgmap v2388: 321 pgs: 321 active+clean; 134 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Nov 25 17:03:09 compute-0 nova_compute[254092]: 2025-11-25 17:03:09.651 254096 DEBUG nova.compute.manager [req-2f9e9070-e6ed-4ab6-9873-8340c3d32c81 req-d8bf7499-067e-4928-a234-ca302a2a27d1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Received event network-changed-142675a5-3c37-4e43-9f80-e8fedd63f3cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:03:09 compute-0 nova_compute[254092]: 2025-11-25 17:03:09.651 254096 DEBUG nova.compute.manager [req-2f9e9070-e6ed-4ab6-9873-8340c3d32c81 req-d8bf7499-067e-4928-a234-ca302a2a27d1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Refreshing instance network info cache due to event network-changed-142675a5-3c37-4e43-9f80-e8fedd63f3cf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:03:09 compute-0 nova_compute[254092]: 2025-11-25 17:03:09.651 254096 DEBUG oslo_concurrency.lockutils [req-2f9e9070-e6ed-4ab6-9873-8340c3d32c81 req-d8bf7499-067e-4928-a234-ca302a2a27d1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-12deb2c6-31fb-4186-940b-8131e43ea3f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:03:09 compute-0 nova_compute[254092]: 2025-11-25 17:03:09.651 254096 DEBUG oslo_concurrency.lockutils [req-2f9e9070-e6ed-4ab6-9873-8340c3d32c81 req-d8bf7499-067e-4928-a234-ca302a2a27d1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-12deb2c6-31fb-4186-940b-8131e43ea3f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:03:09 compute-0 nova_compute[254092]: 2025-11-25 17:03:09.652 254096 DEBUG nova.network.neutron [req-2f9e9070-e6ed-4ab6-9873-8340c3d32c81 req-d8bf7499-067e-4928-a234-ca302a2a27d1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Refreshing network info cache for port 142675a5-3c37-4e43-9f80-e8fedd63f3cf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:03:09 compute-0 nova_compute[254092]: 2025-11-25 17:03:09.992 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:03:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:03:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:03:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:03:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:03:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:03:10 compute-0 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Nov 25 17:03:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2389: 321 pgs: 321 active+clean; 137 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.9 MiB/s wr, 178 op/s
Nov 25 17:03:10 compute-0 nova_compute[254092]: 2025-11-25 17:03:10.699 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:11 compute-0 ceph-mon[74985]: pgmap v2389: 321 pgs: 321 active+clean; 137 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.9 MiB/s wr, 178 op/s
Nov 25 17:03:11 compute-0 nova_compute[254092]: 2025-11-25 17:03:11.753 254096 DEBUG nova.network.neutron [req-2f9e9070-e6ed-4ab6-9873-8340c3d32c81 req-d8bf7499-067e-4928-a234-ca302a2a27d1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Updated VIF entry in instance network info cache for port 142675a5-3c37-4e43-9f80-e8fedd63f3cf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:03:11 compute-0 nova_compute[254092]: 2025-11-25 17:03:11.753 254096 DEBUG nova.network.neutron [req-2f9e9070-e6ed-4ab6-9873-8340c3d32c81 req-d8bf7499-067e-4928-a234-ca302a2a27d1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Updating instance_info_cache with network_info: [{"id": "142675a5-3c37-4e43-9f80-e8fedd63f3cf", "address": "fa:16:3e:4d:64:18", "network": {"id": "51d8d234-0a41-496f-82c7-0c98aa4761b8", "bridge": "br-int", "label": "tempest-network-smoke--615777126", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap142675a5-3c", "ovs_interfaceid": "142675a5-3c37-4e43-9f80-e8fedd63f3cf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:03:11 compute-0 nova_compute[254092]: 2025-11-25 17:03:11.902 254096 DEBUG oslo_concurrency.lockutils [req-2f9e9070-e6ed-4ab6-9873-8340c3d32c81 req-d8bf7499-067e-4928-a234-ca302a2a27d1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-12deb2c6-31fb-4186-940b-8131e43ea3f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:03:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2390: 321 pgs: 321 active+clean; 147 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.2 MiB/s wr, 154 op/s
Nov 25 17:03:12 compute-0 ovn_controller[153477]: 2025-11-25T17:03:12Z|00130|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d7:b5:b4 10.100.0.9
Nov 25 17:03:12 compute-0 ovn_controller[153477]: 2025-11-25T17:03:12Z|00131|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d7:b5:b4 10.100.0.9
Nov 25 17:03:12 compute-0 podman[383494]: 2025-11-25 17:03:12.648402794 +0000 UTC m=+0.054328650 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 17:03:12 compute-0 podman[383493]: 2025-11-25 17:03:12.676102464 +0000 UTC m=+0.084098698 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 17:03:12 compute-0 podman[383495]: 2025-11-25 17:03:12.686836879 +0000 UTC m=+0.084822609 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 17:03:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:13.140 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:03:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:13.640 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:03:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:13.640 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:03:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:13.642 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:03:13 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:03:13 compute-0 ceph-mon[74985]: pgmap v2390: 321 pgs: 321 active+clean; 147 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.2 MiB/s wr, 154 op/s
Nov 25 17:03:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2391: 321 pgs: 321 active+clean; 147 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 93 op/s
Nov 25 17:03:14 compute-0 nova_compute[254092]: 2025-11-25 17:03:14.994 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:15 compute-0 nova_compute[254092]: 2025-11-25 17:03:15.726 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:16 compute-0 ceph-mon[74985]: pgmap v2391: 321 pgs: 321 active+clean; 147 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 93 op/s
Nov 25 17:03:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2392: 321 pgs: 321 active+clean; 165 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.3 MiB/s wr, 141 op/s
Nov 25 17:03:17 compute-0 ceph-mon[74985]: pgmap v2392: 321 pgs: 321 active+clean; 165 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.3 MiB/s wr, 141 op/s
Nov 25 17:03:17 compute-0 ovn_controller[153477]: 2025-11-25T17:03:17Z|00132|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4d:64:18 10.100.0.3
Nov 25 17:03:17 compute-0 ovn_controller[153477]: 2025-11-25T17:03:17Z|00133|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4d:64:18 10.100.0.3
Nov 25 17:03:18 compute-0 nova_compute[254092]: 2025-11-25 17:03:18.141 254096 INFO nova.compute.manager [None req-ca50335a-bedf-4755-a1a5-28ade0816061 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Get console output
Nov 25 17:03:18 compute-0 nova_compute[254092]: 2025-11-25 17:03:18.147 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 17:03:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2393: 321 pgs: 321 active+clean; 165 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 324 KiB/s rd, 2.3 MiB/s wr, 67 op/s
Nov 25 17:03:18 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:03:19 compute-0 sudo[383555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:03:19 compute-0 sudo[383555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:03:19 compute-0 sudo[383555]: pam_unix(sudo:session): session closed for user root
Nov 25 17:03:19 compute-0 sudo[383580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:03:19 compute-0 sudo[383580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:03:19 compute-0 sudo[383580]: pam_unix(sudo:session): session closed for user root
Nov 25 17:03:19 compute-0 sudo[383605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:03:19 compute-0 sudo[383605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:03:19 compute-0 ceph-mon[74985]: pgmap v2393: 321 pgs: 321 active+clean; 165 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 324 KiB/s rd, 2.3 MiB/s wr, 67 op/s
Nov 25 17:03:19 compute-0 sudo[383605]: pam_unix(sudo:session): session closed for user root
Nov 25 17:03:19 compute-0 sudo[383630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:03:19 compute-0 sudo[383630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:03:19 compute-0 nova_compute[254092]: 2025-11-25 17:03:19.904 254096 DEBUG nova.compute.manager [req-8fddb543-cbc2-4194-8ebf-d6bdb31fa341 req-b078b992-e387-4448-a2dc-fc7e3ac3abd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Received event network-changed-b0512c6a-fbc4-4639-8508-e6493d18bd3a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:03:19 compute-0 nova_compute[254092]: 2025-11-25 17:03:19.906 254096 DEBUG nova.compute.manager [req-8fddb543-cbc2-4194-8ebf-d6bdb31fa341 req-b078b992-e387-4448-a2dc-fc7e3ac3abd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Refreshing instance network info cache due to event network-changed-b0512c6a-fbc4-4639-8508-e6493d18bd3a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:03:19 compute-0 nova_compute[254092]: 2025-11-25 17:03:19.906 254096 DEBUG oslo_concurrency.lockutils [req-8fddb543-cbc2-4194-8ebf-d6bdb31fa341 req-b078b992-e387-4448-a2dc-fc7e3ac3abd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-73a187fa-5479-4191-bd44-757c3840137a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:03:19 compute-0 nova_compute[254092]: 2025-11-25 17:03:19.906 254096 DEBUG oslo_concurrency.lockutils [req-8fddb543-cbc2-4194-8ebf-d6bdb31fa341 req-b078b992-e387-4448-a2dc-fc7e3ac3abd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-73a187fa-5479-4191-bd44-757c3840137a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:03:19 compute-0 nova_compute[254092]: 2025-11-25 17:03:19.907 254096 DEBUG nova.network.neutron [req-8fddb543-cbc2-4194-8ebf-d6bdb31fa341 req-b078b992-e387-4448-a2dc-fc7e3ac3abd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Refreshing network info cache for port b0512c6a-fbc4-4639-8508-e6493d18bd3a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:03:19 compute-0 sudo[383630]: pam_unix(sudo:session): session closed for user root
Nov 25 17:03:19 compute-0 nova_compute[254092]: 2025-11-25 17:03:19.997 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:03:20 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:03:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:03:20 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:03:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:03:20 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:03:20 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 9904e545-a491-41a1-a29e-be89a6e15be3 does not exist
Nov 25 17:03:20 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 7ee7dbb0-eaea-4376-9d93-df1aef7b8595 does not exist
Nov 25 17:03:20 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev f3fd32c5-181a-4489-abd3-88bdae72dc8c does not exist
Nov 25 17:03:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:03:20 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:03:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:03:20 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:03:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:03:20 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:03:20 compute-0 sudo[383687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:03:20 compute-0 sudo[383687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:03:20 compute-0 sudo[383687]: pam_unix(sudo:session): session closed for user root
Nov 25 17:03:20 compute-0 sudo[383712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:03:20 compute-0 sudo[383712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:03:20 compute-0 sudo[383712]: pam_unix(sudo:session): session closed for user root
Nov 25 17:03:20 compute-0 sudo[383737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:03:20 compute-0 sudo[383737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:03:20 compute-0 sudo[383737]: pam_unix(sudo:session): session closed for user root
Nov 25 17:03:20 compute-0 sudo[383762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:03:20 compute-0 sudo[383762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:03:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2394: 321 pgs: 321 active+clean; 178 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 456 KiB/s rd, 2.8 MiB/s wr, 79 op/s
Nov 25 17:03:20 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:03:20 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:03:20 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:03:20 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:03:20 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:03:20 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:03:20 compute-0 nova_compute[254092]: 2025-11-25 17:03:20.729 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:20 compute-0 podman[383828]: 2025-11-25 17:03:20.745092871 +0000 UTC m=+0.073014443 container create bc6a3cfd4a90fa9510197368308071eedcbb4c895e20c4ae1a22fb35b144ad29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_pascal, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:03:20 compute-0 podman[383828]: 2025-11-25 17:03:20.701371023 +0000 UTC m=+0.029292585 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:03:20 compute-0 systemd[1]: Started libpod-conmon-bc6a3cfd4a90fa9510197368308071eedcbb4c895e20c4ae1a22fb35b144ad29.scope.
Nov 25 17:03:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:03:20 compute-0 podman[383828]: 2025-11-25 17:03:20.974397131 +0000 UTC m=+0.302318683 container init bc6a3cfd4a90fa9510197368308071eedcbb4c895e20c4ae1a22fb35b144ad29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:03:20 compute-0 podman[383828]: 2025-11-25 17:03:20.987959173 +0000 UTC m=+0.315880705 container start bc6a3cfd4a90fa9510197368308071eedcbb4c895e20c4ae1a22fb35b144ad29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:03:20 compute-0 cool_pascal[383844]: 167 167
Nov 25 17:03:20 compute-0 systemd[1]: libpod-bc6a3cfd4a90fa9510197368308071eedcbb4c895e20c4ae1a22fb35b144ad29.scope: Deactivated successfully.
Nov 25 17:03:21 compute-0 podman[383828]: 2025-11-25 17:03:21.067596447 +0000 UTC m=+0.395517999 container attach bc6a3cfd4a90fa9510197368308071eedcbb4c895e20c4ae1a22fb35b144ad29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 17:03:21 compute-0 podman[383828]: 2025-11-25 17:03:21.069414797 +0000 UTC m=+0.397336359 container died bc6a3cfd4a90fa9510197368308071eedcbb4c895e20c4ae1a22fb35b144ad29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_pascal, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:03:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa33f7921770edc5a1bfef3073060f2a652ae9260e283cdc0d79ffb6ae10944d-merged.mount: Deactivated successfully.
Nov 25 17:03:21 compute-0 podman[383828]: 2025-11-25 17:03:21.257854605 +0000 UTC m=+0.585776147 container remove bc6a3cfd4a90fa9510197368308071eedcbb4c895e20c4ae1a22fb35b144ad29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_pascal, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Nov 25 17:03:21 compute-0 systemd[1]: libpod-conmon-bc6a3cfd4a90fa9510197368308071eedcbb4c895e20c4ae1a22fb35b144ad29.scope: Deactivated successfully.
Nov 25 17:03:21 compute-0 podman[383869]: 2025-11-25 17:03:21.447720033 +0000 UTC m=+0.037894840 container create 172a9a5970faf3ab0930623abcc9be32e72021635ef2e0c4448d28c0c3a55d6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_sanderson, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 17:03:21 compute-0 nova_compute[254092]: 2025-11-25 17:03:21.482 254096 DEBUG nova.network.neutron [req-8fddb543-cbc2-4194-8ebf-d6bdb31fa341 req-b078b992-e387-4448-a2dc-fc7e3ac3abd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Updated VIF entry in instance network info cache for port b0512c6a-fbc4-4639-8508-e6493d18bd3a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:03:21 compute-0 nova_compute[254092]: 2025-11-25 17:03:21.484 254096 DEBUG nova.network.neutron [req-8fddb543-cbc2-4194-8ebf-d6bdb31fa341 req-b078b992-e387-4448-a2dc-fc7e3ac3abd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Updating instance_info_cache with network_info: [{"id": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "address": "fa:16:3e:d7:b5:b4", "network": {"id": "cd54b59f-8568-4ad8-a75d-4fcdad6f8dce", "bridge": "br-int", "label": "tempest-network-smoke--1847695866", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0512c6a-fb", "ovs_interfaceid": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:03:21 compute-0 nova_compute[254092]: 2025-11-25 17:03:21.501 254096 DEBUG oslo_concurrency.lockutils [req-8fddb543-cbc2-4194-8ebf-d6bdb31fa341 req-b078b992-e387-4448-a2dc-fc7e3ac3abd8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-73a187fa-5479-4191-bd44-757c3840137a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:03:21 compute-0 systemd[1]: Started libpod-conmon-172a9a5970faf3ab0930623abcc9be32e72021635ef2e0c4448d28c0c3a55d6e.scope.
Nov 25 17:03:21 compute-0 podman[383869]: 2025-11-25 17:03:21.430866811 +0000 UTC m=+0.021041648 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:03:21 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:03:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1232eeb55da2bdb1edee7623cac15bd782fee1b044538c44795ab3b3eaaab759/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:03:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1232eeb55da2bdb1edee7623cac15bd782fee1b044538c44795ab3b3eaaab759/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:03:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1232eeb55da2bdb1edee7623cac15bd782fee1b044538c44795ab3b3eaaab759/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:03:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1232eeb55da2bdb1edee7623cac15bd782fee1b044538c44795ab3b3eaaab759/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:03:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1232eeb55da2bdb1edee7623cac15bd782fee1b044538c44795ab3b3eaaab759/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:03:21 compute-0 podman[383869]: 2025-11-25 17:03:21.556123867 +0000 UTC m=+0.146298714 container init 172a9a5970faf3ab0930623abcc9be32e72021635ef2e0c4448d28c0c3a55d6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_sanderson, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:03:21 compute-0 podman[383869]: 2025-11-25 17:03:21.565439052 +0000 UTC m=+0.155613859 container start 172a9a5970faf3ab0930623abcc9be32e72021635ef2e0c4448d28c0c3a55d6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_sanderson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 17:03:21 compute-0 podman[383869]: 2025-11-25 17:03:21.569437372 +0000 UTC m=+0.159612189 container attach 172a9a5970faf3ab0930623abcc9be32e72021635ef2e0c4448d28c0c3a55d6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_sanderson, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:03:22 compute-0 ceph-mon[74985]: pgmap v2394: 321 pgs: 321 active+clean; 178 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 456 KiB/s rd, 2.8 MiB/s wr, 79 op/s
Nov 25 17:03:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2395: 321 pgs: 321 active+clean; 200 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 708 KiB/s rd, 4.1 MiB/s wr, 123 op/s
Nov 25 17:03:22 compute-0 nova_compute[254092]: 2025-11-25 17:03:22.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:03:22 compute-0 competent_sanderson[383885]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:03:22 compute-0 competent_sanderson[383885]: --> relative data size: 1.0
Nov 25 17:03:22 compute-0 competent_sanderson[383885]: --> All data devices are unavailable
Nov 25 17:03:22 compute-0 systemd[1]: libpod-172a9a5970faf3ab0930623abcc9be32e72021635ef2e0c4448d28c0c3a55d6e.scope: Deactivated successfully.
Nov 25 17:03:22 compute-0 podman[383869]: 2025-11-25 17:03:22.610001533 +0000 UTC m=+1.200176350 container died 172a9a5970faf3ab0930623abcc9be32e72021635ef2e0c4448d28c0c3a55d6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_sanderson, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:03:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-1232eeb55da2bdb1edee7623cac15bd782fee1b044538c44795ab3b3eaaab759-merged.mount: Deactivated successfully.
Nov 25 17:03:22 compute-0 podman[383869]: 2025-11-25 17:03:22.660916919 +0000 UTC m=+1.251091736 container remove 172a9a5970faf3ab0930623abcc9be32e72021635ef2e0c4448d28c0c3a55d6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_sanderson, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:03:22 compute-0 systemd[1]: libpod-conmon-172a9a5970faf3ab0930623abcc9be32e72021635ef2e0c4448d28c0c3a55d6e.scope: Deactivated successfully.
Nov 25 17:03:22 compute-0 sudo[383762]: pam_unix(sudo:session): session closed for user root
Nov 25 17:03:22 compute-0 sudo[383926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:03:22 compute-0 sudo[383926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:03:22 compute-0 sudo[383926]: pam_unix(sudo:session): session closed for user root
Nov 25 17:03:22 compute-0 sudo[383951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:03:22 compute-0 sudo[383951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:03:22 compute-0 sudo[383951]: pam_unix(sudo:session): session closed for user root
Nov 25 17:03:22 compute-0 sudo[383976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:03:22 compute-0 sudo[383976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:03:22 compute-0 sudo[383976]: pam_unix(sudo:session): session closed for user root
Nov 25 17:03:22 compute-0 sudo[384001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:03:22 compute-0 sudo[384001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:03:23 compute-0 podman[384067]: 2025-11-25 17:03:23.363177831 +0000 UTC m=+0.088175440 container create f85774e91afa8a374593dff72b4a9d59969ff30c060149638ad9ac137aaa3227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:03:23 compute-0 podman[384067]: 2025-11-25 17:03:23.299202656 +0000 UTC m=+0.024200295 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:03:23 compute-0 systemd[1]: Started libpod-conmon-f85774e91afa8a374593dff72b4a9d59969ff30c060149638ad9ac137aaa3227.scope.
Nov 25 17:03:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:03:23 compute-0 podman[384067]: 2025-11-25 17:03:23.483930173 +0000 UTC m=+0.208927852 container init f85774e91afa8a374593dff72b4a9d59969ff30c060149638ad9ac137aaa3227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bhabha, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:03:23 compute-0 podman[384067]: 2025-11-25 17:03:23.493577477 +0000 UTC m=+0.218575096 container start f85774e91afa8a374593dff72b4a9d59969ff30c060149638ad9ac137aaa3227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bhabha, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:03:23 compute-0 condescending_bhabha[384083]: 167 167
Nov 25 17:03:23 compute-0 systemd[1]: libpod-f85774e91afa8a374593dff72b4a9d59969ff30c060149638ad9ac137aaa3227.scope: Deactivated successfully.
Nov 25 17:03:23 compute-0 podman[384067]: 2025-11-25 17:03:23.503092809 +0000 UTC m=+0.228090458 container attach f85774e91afa8a374593dff72b4a9d59969ff30c060149638ad9ac137aaa3227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:03:23 compute-0 podman[384067]: 2025-11-25 17:03:23.503777047 +0000 UTC m=+0.228774716 container died f85774e91afa8a374593dff72b4a9d59969ff30c060149638ad9ac137aaa3227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bhabha, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 17:03:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d51c7aa2a31891c73042b3eeadb06f1ec7fa91d7004bbad0ddc9fd85f39454d-merged.mount: Deactivated successfully.
Nov 25 17:03:23 compute-0 podman[384067]: 2025-11-25 17:03:23.565311955 +0000 UTC m=+0.290309564 container remove f85774e91afa8a374593dff72b4a9d59969ff30c060149638ad9ac137aaa3227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bhabha, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:03:23 compute-0 systemd[1]: libpod-conmon-f85774e91afa8a374593dff72b4a9d59969ff30c060149638ad9ac137aaa3227.scope: Deactivated successfully.
Nov 25 17:03:23 compute-0 podman[384109]: 2025-11-25 17:03:23.776141597 +0000 UTC m=+0.045752815 container create f5bd86d780fefe6f1f151f3bf646024c00ab50d84ea765314cd0e5e821e7c4d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_goodall, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:03:23 compute-0 systemd[1]: Started libpod-conmon-f5bd86d780fefe6f1f151f3bf646024c00ab50d84ea765314cd0e5e821e7c4d5.scope.
Nov 25 17:03:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:03:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:03:23 compute-0 podman[384109]: 2025-11-25 17:03:23.754895525 +0000 UTC m=+0.024506763 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:03:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60ebb4e772e0de8d881c8cdf7d727af10e85ab156d324d67c73a693bf8b9eff8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:03:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60ebb4e772e0de8d881c8cdf7d727af10e85ab156d324d67c73a693bf8b9eff8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:03:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60ebb4e772e0de8d881c8cdf7d727af10e85ab156d324d67c73a693bf8b9eff8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:03:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60ebb4e772e0de8d881c8cdf7d727af10e85ab156d324d67c73a693bf8b9eff8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:03:23 compute-0 podman[384109]: 2025-11-25 17:03:23.882506695 +0000 UTC m=+0.152117913 container init f5bd86d780fefe6f1f151f3bf646024c00ab50d84ea765314cd0e5e821e7c4d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:03:23 compute-0 podman[384109]: 2025-11-25 17:03:23.890289838 +0000 UTC m=+0.159901056 container start f5bd86d780fefe6f1f151f3bf646024c00ab50d84ea765314cd0e5e821e7c4d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_goodall, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 17:03:23 compute-0 podman[384109]: 2025-11-25 17:03:23.907394708 +0000 UTC m=+0.177005926 container attach f5bd86d780fefe6f1f151f3bf646024c00ab50d84ea765314cd0e5e821e7c4d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_goodall, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:03:24 compute-0 ceph-mon[74985]: pgmap v2395: 321 pgs: 321 active+clean; 200 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 708 KiB/s rd, 4.1 MiB/s wr, 123 op/s
Nov 25 17:03:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2396: 321 pgs: 321 active+clean; 200 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 686 KiB/s rd, 2.9 MiB/s wr, 107 op/s
Nov 25 17:03:24 compute-0 nifty_goodall[384126]: {
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:     "0": [
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:         {
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:             "devices": [
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:                 "/dev/loop3"
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:             ],
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:             "lv_name": "ceph_lv0",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:             "lv_size": "21470642176",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:             "name": "ceph_lv0",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:             "tags": {
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:                 "ceph.cluster_name": "ceph",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:                 "ceph.crush_device_class": "",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:                 "ceph.encrypted": "0",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:                 "ceph.osd_id": "0",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:                 "ceph.type": "block",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:                 "ceph.vdo": "0"
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:             },
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:             "type": "block",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:             "vg_name": "ceph_vg0"
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:         }
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:     ],
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:     "1": [
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:         {
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:             "devices": [
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:                 "/dev/loop4"
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:             ],
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:             "lv_name": "ceph_lv1",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:             "lv_size": "21470642176",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:             "name": "ceph_lv1",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:             "tags": {
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:                 "ceph.cluster_name": "ceph",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:                 "ceph.crush_device_class": "",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:                 "ceph.encrypted": "0",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:                 "ceph.osd_id": "1",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:                 "ceph.type": "block",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:                 "ceph.vdo": "0"
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:             },
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:             "type": "block",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:             "vg_name": "ceph_vg1"
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:         }
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:     ],
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:     "2": [
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:         {
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:             "devices": [
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:                 "/dev/loop5"
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:             ],
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:             "lv_name": "ceph_lv2",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:             "lv_size": "21470642176",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:             "name": "ceph_lv2",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:             "tags": {
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:                 "ceph.cluster_name": "ceph",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:                 "ceph.crush_device_class": "",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:                 "ceph.encrypted": "0",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:                 "ceph.osd_id": "2",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:                 "ceph.type": "block",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:                 "ceph.vdo": "0"
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:             },
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:             "type": "block",
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:             "vg_name": "ceph_vg2"
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:         }
Nov 25 17:03:24 compute-0 nifty_goodall[384126]:     ]
Nov 25 17:03:24 compute-0 nifty_goodall[384126]: }
Nov 25 17:03:24 compute-0 systemd[1]: libpod-f5bd86d780fefe6f1f151f3bf646024c00ab50d84ea765314cd0e5e821e7c4d5.scope: Deactivated successfully.
Nov 25 17:03:24 compute-0 podman[384109]: 2025-11-25 17:03:24.646089849 +0000 UTC m=+0.915701077 container died f5bd86d780fefe6f1f151f3bf646024c00ab50d84ea765314cd0e5e821e7c4d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 17:03:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-60ebb4e772e0de8d881c8cdf7d727af10e85ab156d324d67c73a693bf8b9eff8-merged.mount: Deactivated successfully.
Nov 25 17:03:25 compute-0 nova_compute[254092]: 2025-11-25 17:03:25.000 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:25 compute-0 podman[384109]: 2025-11-25 17:03:25.025370251 +0000 UTC m=+1.294981479 container remove f5bd86d780fefe6f1f151f3bf646024c00ab50d84ea765314cd0e5e821e7c4d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:03:25 compute-0 systemd[1]: libpod-conmon-f5bd86d780fefe6f1f151f3bf646024c00ab50d84ea765314cd0e5e821e7c4d5.scope: Deactivated successfully.
Nov 25 17:03:25 compute-0 sudo[384001]: pam_unix(sudo:session): session closed for user root
Nov 25 17:03:25 compute-0 sudo[384149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:03:25 compute-0 sudo[384149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:03:25 compute-0 sudo[384149]: pam_unix(sudo:session): session closed for user root
Nov 25 17:03:25 compute-0 sudo[384174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:03:25 compute-0 sudo[384174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:03:25 compute-0 sudo[384174]: pam_unix(sudo:session): session closed for user root
Nov 25 17:03:25 compute-0 sudo[384199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:03:25 compute-0 sudo[384199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:03:25 compute-0 sudo[384199]: pam_unix(sudo:session): session closed for user root
Nov 25 17:03:25 compute-0 sudo[384224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:03:25 compute-0 sudo[384224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:03:25 compute-0 podman[384291]: 2025-11-25 17:03:25.613897723 +0000 UTC m=+0.042362803 container create e355e6e06f806b842fbd0ff0e91268640927eaaabce56b5d13e8d45f90daa1db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_mclean, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:03:25 compute-0 systemd[1]: Started libpod-conmon-e355e6e06f806b842fbd0ff0e91268640927eaaabce56b5d13e8d45f90daa1db.scope.
Nov 25 17:03:25 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:03:25 compute-0 podman[384291]: 2025-11-25 17:03:25.591688204 +0000 UTC m=+0.020153304 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:03:25 compute-0 podman[384291]: 2025-11-25 17:03:25.691836541 +0000 UTC m=+0.120301671 container init e355e6e06f806b842fbd0ff0e91268640927eaaabce56b5d13e8d45f90daa1db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 17:03:25 compute-0 podman[384291]: 2025-11-25 17:03:25.699306786 +0000 UTC m=+0.127771856 container start e355e6e06f806b842fbd0ff0e91268640927eaaabce56b5d13e8d45f90daa1db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 17:03:25 compute-0 podman[384291]: 2025-11-25 17:03:25.702034631 +0000 UTC m=+0.130499761 container attach e355e6e06f806b842fbd0ff0e91268640927eaaabce56b5d13e8d45f90daa1db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_mclean, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:03:25 compute-0 strange_mclean[384308]: 167 167
Nov 25 17:03:25 compute-0 systemd[1]: libpod-e355e6e06f806b842fbd0ff0e91268640927eaaabce56b5d13e8d45f90daa1db.scope: Deactivated successfully.
Nov 25 17:03:25 compute-0 podman[384291]: 2025-11-25 17:03:25.704395976 +0000 UTC m=+0.132861066 container died e355e6e06f806b842fbd0ff0e91268640927eaaabce56b5d13e8d45f90daa1db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_mclean, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:03:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-67d079414dd73fc1facc06b94ddc08e778364628bf97c05b23e268a8641def9c-merged.mount: Deactivated successfully.
Nov 25 17:03:25 compute-0 nova_compute[254092]: 2025-11-25 17:03:25.732 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:25 compute-0 podman[384291]: 2025-11-25 17:03:25.746202743 +0000 UTC m=+0.174667823 container remove e355e6e06f806b842fbd0ff0e91268640927eaaabce56b5d13e8d45f90daa1db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:03:25 compute-0 systemd[1]: libpod-conmon-e355e6e06f806b842fbd0ff0e91268640927eaaabce56b5d13e8d45f90daa1db.scope: Deactivated successfully.
Nov 25 17:03:25 compute-0 nova_compute[254092]: 2025-11-25 17:03:25.819 254096 DEBUG oslo_concurrency.lockutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "dc02b95b-290f-441d-9b04-957187d0f885" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:03:25 compute-0 nova_compute[254092]: 2025-11-25 17:03:25.821 254096 DEBUG oslo_concurrency.lockutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "dc02b95b-290f-441d-9b04-957187d0f885" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:03:25 compute-0 nova_compute[254092]: 2025-11-25 17:03:25.836 254096 DEBUG nova.compute.manager [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 17:03:25 compute-0 nova_compute[254092]: 2025-11-25 17:03:25.904 254096 DEBUG oslo_concurrency.lockutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:03:25 compute-0 nova_compute[254092]: 2025-11-25 17:03:25.905 254096 DEBUG oslo_concurrency.lockutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:03:25 compute-0 nova_compute[254092]: 2025-11-25 17:03:25.914 254096 DEBUG nova.virt.hardware [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 17:03:25 compute-0 nova_compute[254092]: 2025-11-25 17:03:25.915 254096 INFO nova.compute.claims [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Claim successful on node compute-0.ctlplane.example.com
Nov 25 17:03:25 compute-0 podman[384331]: 2025-11-25 17:03:25.915573878 +0000 UTC m=+0.044647716 container create afd227fb3c5efd2b0463cdc88e38b2f2f908992a60b08b306c5eb10030e4c44f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 17:03:25 compute-0 systemd[1]: Started libpod-conmon-afd227fb3c5efd2b0463cdc88e38b2f2f908992a60b08b306c5eb10030e4c44f.scope.
Nov 25 17:03:25 compute-0 podman[384331]: 2025-11-25 17:03:25.897254245 +0000 UTC m=+0.026328103 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:03:26 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:03:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd86b6d83e48861a910bf60f1fe80963069e66dc9c88ad0bbf7c7a78cec933c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:03:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd86b6d83e48861a910bf60f1fe80963069e66dc9c88ad0bbf7c7a78cec933c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:03:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd86b6d83e48861a910bf60f1fe80963069e66dc9c88ad0bbf7c7a78cec933c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:03:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd86b6d83e48861a910bf60f1fe80963069e66dc9c88ad0bbf7c7a78cec933c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:03:26 compute-0 podman[384331]: 2025-11-25 17:03:26.016050374 +0000 UTC m=+0.145124212 container init afd227fb3c5efd2b0463cdc88e38b2f2f908992a60b08b306c5eb10030e4c44f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_pare, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:03:26 compute-0 podman[384331]: 2025-11-25 17:03:26.028772152 +0000 UTC m=+0.157846020 container start afd227fb3c5efd2b0463cdc88e38b2f2f908992a60b08b306c5eb10030e4c44f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_pare, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 17:03:26 compute-0 podman[384331]: 2025-11-25 17:03:26.032146736 +0000 UTC m=+0.161220574 container attach afd227fb3c5efd2b0463cdc88e38b2f2f908992a60b08b306c5eb10030e4c44f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_pare, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 17:03:26 compute-0 nova_compute[254092]: 2025-11-25 17:03:26.042 254096 DEBUG oslo_concurrency.processutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:03:26 compute-0 ceph-mon[74985]: pgmap v2396: 321 pgs: 321 active+clean; 200 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 686 KiB/s rd, 2.9 MiB/s wr, 107 op/s
Nov 25 17:03:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2397: 321 pgs: 321 active+clean; 200 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 686 KiB/s rd, 2.9 MiB/s wr, 108 op/s
Nov 25 17:03:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:03:26 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3922475283' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:03:26 compute-0 nova_compute[254092]: 2025-11-25 17:03:26.483 254096 DEBUG oslo_concurrency.processutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:03:26 compute-0 nova_compute[254092]: 2025-11-25 17:03:26.490 254096 DEBUG nova.compute.provider_tree [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:03:26 compute-0 nova_compute[254092]: 2025-11-25 17:03:26.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:03:26 compute-0 nova_compute[254092]: 2025-11-25 17:03:26.639 254096 DEBUG nova.scheduler.client.report [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:03:26 compute-0 nova_compute[254092]: 2025-11-25 17:03:26.673 254096 DEBUG oslo_concurrency.lockutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:03:26 compute-0 nova_compute[254092]: 2025-11-25 17:03:26.674 254096 DEBUG nova.compute.manager [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 17:03:26 compute-0 nova_compute[254092]: 2025-11-25 17:03:26.739 254096 DEBUG nova.compute.manager [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 17:03:26 compute-0 nova_compute[254092]: 2025-11-25 17:03:26.739 254096 DEBUG nova.network.neutron [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 17:03:26 compute-0 nova_compute[254092]: 2025-11-25 17:03:26.769 254096 INFO nova.virt.libvirt.driver [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 17:03:26 compute-0 nova_compute[254092]: 2025-11-25 17:03:26.803 254096 DEBUG nova.compute.manager [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 17:03:26 compute-0 nova_compute[254092]: 2025-11-25 17:03:26.922 254096 DEBUG nova.compute.manager [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 17:03:26 compute-0 nova_compute[254092]: 2025-11-25 17:03:26.923 254096 DEBUG nova.virt.libvirt.driver [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 17:03:26 compute-0 nova_compute[254092]: 2025-11-25 17:03:26.923 254096 INFO nova.virt.libvirt.driver [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Creating image(s)
Nov 25 17:03:26 compute-0 nova_compute[254092]: 2025-11-25 17:03:26.949 254096 DEBUG nova.storage.rbd_utils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image dc02b95b-290f-441d-9b04-957187d0f885_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:03:26 compute-0 nova_compute[254092]: 2025-11-25 17:03:26.973 254096 DEBUG nova.storage.rbd_utils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image dc02b95b-290f-441d-9b04-957187d0f885_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:03:26 compute-0 fervent_pare[384348]: {
Nov 25 17:03:26 compute-0 fervent_pare[384348]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:03:26 compute-0 fervent_pare[384348]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:03:26 compute-0 fervent_pare[384348]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:03:26 compute-0 fervent_pare[384348]:         "osd_id": 1,
Nov 25 17:03:26 compute-0 fervent_pare[384348]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:03:26 compute-0 fervent_pare[384348]:         "type": "bluestore"
Nov 25 17:03:26 compute-0 fervent_pare[384348]:     },
Nov 25 17:03:26 compute-0 fervent_pare[384348]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:03:26 compute-0 fervent_pare[384348]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:03:26 compute-0 fervent_pare[384348]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:03:26 compute-0 fervent_pare[384348]:         "osd_id": 2,
Nov 25 17:03:26 compute-0 fervent_pare[384348]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:03:26 compute-0 fervent_pare[384348]:         "type": "bluestore"
Nov 25 17:03:26 compute-0 fervent_pare[384348]:     },
Nov 25 17:03:26 compute-0 fervent_pare[384348]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:03:26 compute-0 fervent_pare[384348]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:03:26 compute-0 fervent_pare[384348]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:03:26 compute-0 fervent_pare[384348]:         "osd_id": 0,
Nov 25 17:03:26 compute-0 fervent_pare[384348]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:03:26 compute-0 fervent_pare[384348]:         "type": "bluestore"
Nov 25 17:03:26 compute-0 fervent_pare[384348]:     }
Nov 25 17:03:26 compute-0 fervent_pare[384348]: }
Nov 25 17:03:26 compute-0 nova_compute[254092]: 2025-11-25 17:03:26.994 254096 DEBUG nova.storage.rbd_utils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image dc02b95b-290f-441d-9b04-957187d0f885_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:03:26 compute-0 nova_compute[254092]: 2025-11-25 17:03:26.998 254096 DEBUG oslo_concurrency.processutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:03:27 compute-0 systemd[1]: libpod-afd227fb3c5efd2b0463cdc88e38b2f2f908992a60b08b306c5eb10030e4c44f.scope: Deactivated successfully.
Nov 25 17:03:27 compute-0 podman[384331]: 2025-11-25 17:03:27.022412346 +0000 UTC m=+1.151486184 container died afd227fb3c5efd2b0463cdc88e38b2f2f908992a60b08b306c5eb10030e4c44f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_pare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 17:03:27 compute-0 nova_compute[254092]: 2025-11-25 17:03:27.072 254096 DEBUG oslo_concurrency.processutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:03:27 compute-0 nova_compute[254092]: 2025-11-25 17:03:27.126 254096 DEBUG oslo_concurrency.lockutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:03:27 compute-0 nova_compute[254092]: 2025-11-25 17:03:27.127 254096 DEBUG oslo_concurrency.lockutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:03:27 compute-0 nova_compute[254092]: 2025-11-25 17:03:27.127 254096 DEBUG oslo_concurrency.lockutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:03:27 compute-0 nova_compute[254092]: 2025-11-25 17:03:27.184 254096 DEBUG nova.storage.rbd_utils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image dc02b95b-290f-441d-9b04-957187d0f885_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:03:27 compute-0 nova_compute[254092]: 2025-11-25 17:03:27.189 254096 DEBUG oslo_concurrency.processutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 dc02b95b-290f-441d-9b04-957187d0f885_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:03:27 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3922475283' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:03:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd86b6d83e48861a910bf60f1fe80963069e66dc9c88ad0bbf7c7a78cec933c6-merged.mount: Deactivated successfully.
Nov 25 17:03:27 compute-0 nova_compute[254092]: 2025-11-25 17:03:27.231 254096 DEBUG nova.policy [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e13382ac65e94c7b8a89e67de0e754df', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 17:03:27 compute-0 podman[384331]: 2025-11-25 17:03:27.252354683 +0000 UTC m=+1.381428521 container remove afd227fb3c5efd2b0463cdc88e38b2f2f908992a60b08b306c5eb10030e4c44f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_pare, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 17:03:27 compute-0 systemd[1]: libpod-conmon-afd227fb3c5efd2b0463cdc88e38b2f2f908992a60b08b306c5eb10030e4c44f.scope: Deactivated successfully.
Nov 25 17:03:27 compute-0 sudo[384224]: pam_unix(sudo:session): session closed for user root
Nov 25 17:03:27 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:03:27 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:03:27 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:03:27 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:03:27 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev b10c8dc0-b501-452f-a6f6-23e959a2a05a does not exist
Nov 25 17:03:27 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev d4570110-d28d-407a-be2a-fc2346339b91 does not exist
Nov 25 17:03:27 compute-0 sudo[384507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:03:27 compute-0 sudo[384507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:03:27 compute-0 sudo[384507]: pam_unix(sudo:session): session closed for user root
Nov 25 17:03:27 compute-0 sudo[384535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:03:27 compute-0 sudo[384535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:03:27 compute-0 sudo[384535]: pam_unix(sudo:session): session closed for user root
Nov 25 17:03:28 compute-0 nova_compute[254092]: 2025-11-25 17:03:28.163 254096 DEBUG oslo_concurrency.processutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 dc02b95b-290f-441d-9b04-957187d0f885_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.974s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:03:28 compute-0 ceph-mon[74985]: pgmap v2397: 321 pgs: 321 active+clean; 200 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 686 KiB/s rd, 2.9 MiB/s wr, 108 op/s
Nov 25 17:03:28 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:03:28 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:03:28 compute-0 nova_compute[254092]: 2025-11-25 17:03:28.235 254096 DEBUG nova.storage.rbd_utils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] resizing rbd image dc02b95b-290f-441d-9b04-957187d0f885_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 17:03:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2398: 321 pgs: 321 active+clean; 200 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 393 KiB/s rd, 2.0 MiB/s wr, 60 op/s
Nov 25 17:03:28 compute-0 nova_compute[254092]: 2025-11-25 17:03:28.342 254096 DEBUG nova.objects.instance [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'migration_context' on Instance uuid dc02b95b-290f-441d-9b04-957187d0f885 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:03:28 compute-0 nova_compute[254092]: 2025-11-25 17:03:28.356 254096 DEBUG nova.virt.libvirt.driver [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 17:03:28 compute-0 nova_compute[254092]: 2025-11-25 17:03:28.357 254096 DEBUG nova.virt.libvirt.driver [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Ensure instance console log exists: /var/lib/nova/instances/dc02b95b-290f-441d-9b04-957187d0f885/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 17:03:28 compute-0 nova_compute[254092]: 2025-11-25 17:03:28.358 254096 DEBUG oslo_concurrency.lockutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:03:28 compute-0 nova_compute[254092]: 2025-11-25 17:03:28.359 254096 DEBUG oslo_concurrency.lockutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:03:28 compute-0 nova_compute[254092]: 2025-11-25 17:03:28.359 254096 DEBUG oslo_concurrency.lockutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:03:28 compute-0 nova_compute[254092]: 2025-11-25 17:03:28.660 254096 DEBUG nova.network.neutron [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Successfully created port: 9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 17:03:28 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:03:29 compute-0 ceph-mon[74985]: pgmap v2398: 321 pgs: 321 active+clean; 200 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 393 KiB/s rd, 2.0 MiB/s wr, 60 op/s
Nov 25 17:03:29 compute-0 nova_compute[254092]: 2025-11-25 17:03:29.568 254096 DEBUG nova.network.neutron [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Successfully updated port: 9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:03:29 compute-0 nova_compute[254092]: 2025-11-25 17:03:29.586 254096 DEBUG oslo_concurrency.lockutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "refresh_cache-dc02b95b-290f-441d-9b04-957187d0f885" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:03:29 compute-0 nova_compute[254092]: 2025-11-25 17:03:29.586 254096 DEBUG oslo_concurrency.lockutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquired lock "refresh_cache-dc02b95b-290f-441d-9b04-957187d0f885" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:03:29 compute-0 nova_compute[254092]: 2025-11-25 17:03:29.586 254096 DEBUG nova.network.neutron [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 17:03:29 compute-0 nova_compute[254092]: 2025-11-25 17:03:29.668 254096 DEBUG nova.compute.manager [req-e189a099-59ec-411a-9f59-736d55fbd660 req-96e9c989-1b84-442f-b6c9-e364ae58d0b4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Received event network-changed-9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:03:29 compute-0 nova_compute[254092]: 2025-11-25 17:03:29.668 254096 DEBUG nova.compute.manager [req-e189a099-59ec-411a-9f59-736d55fbd660 req-96e9c989-1b84-442f-b6c9-e364ae58d0b4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Refreshing instance network info cache due to event network-changed-9fb8b6ba-4534-4320-a88f-3da6cdc1eb28. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:03:29 compute-0 nova_compute[254092]: 2025-11-25 17:03:29.669 254096 DEBUG oslo_concurrency.lockutils [req-e189a099-59ec-411a-9f59-736d55fbd660 req-96e9c989-1b84-442f-b6c9-e364ae58d0b4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-dc02b95b-290f-441d-9b04-957187d0f885" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:03:29 compute-0 nova_compute[254092]: 2025-11-25 17:03:29.759 254096 DEBUG nova.network.neutron [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:03:30 compute-0 nova_compute[254092]: 2025-11-25 17:03:30.003 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2399: 321 pgs: 321 active+clean; 208 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 393 KiB/s rd, 2.3 MiB/s wr, 61 op/s
Nov 25 17:03:30 compute-0 nova_compute[254092]: 2025-11-25 17:03:30.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:03:30 compute-0 nova_compute[254092]: 2025-11-25 17:03:30.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:03:30 compute-0 nova_compute[254092]: 2025-11-25 17:03:30.774 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:30 compute-0 nova_compute[254092]: 2025-11-25 17:03:30.896 254096 DEBUG oslo_concurrency.lockutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:03:30 compute-0 nova_compute[254092]: 2025-11-25 17:03:30.897 254096 DEBUG oslo_concurrency.lockutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:03:30 compute-0 nova_compute[254092]: 2025-11-25 17:03:30.914 254096 DEBUG nova.compute.manager [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 17:03:31 compute-0 nova_compute[254092]: 2025-11-25 17:03:31.010 254096 DEBUG oslo_concurrency.lockutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:03:31 compute-0 nova_compute[254092]: 2025-11-25 17:03:31.011 254096 DEBUG oslo_concurrency.lockutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:03:31 compute-0 nova_compute[254092]: 2025-11-25 17:03:31.021 254096 DEBUG nova.virt.hardware [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 17:03:31 compute-0 nova_compute[254092]: 2025-11-25 17:03:31.022 254096 INFO nova.compute.claims [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Claim successful on node compute-0.ctlplane.example.com
Nov 25 17:03:31 compute-0 nova_compute[254092]: 2025-11-25 17:03:31.240 254096 DEBUG oslo_concurrency.processutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:03:31 compute-0 ceph-mon[74985]: pgmap v2399: 321 pgs: 321 active+clean; 208 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 393 KiB/s rd, 2.3 MiB/s wr, 61 op/s
Nov 25 17:03:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:03:31 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4190977656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:03:31 compute-0 nova_compute[254092]: 2025-11-25 17:03:31.719 254096 DEBUG oslo_concurrency.processutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:03:31 compute-0 nova_compute[254092]: 2025-11-25 17:03:31.724 254096 DEBUG nova.compute.provider_tree [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:03:31 compute-0 nova_compute[254092]: 2025-11-25 17:03:31.734 254096 DEBUG nova.network.neutron [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Updating instance_info_cache with network_info: [{"id": "9fb8b6ba-4534-4320-a88f-3da6cdc1eb28", "address": "fa:16:3e:c6:a2:cd", "network": {"id": "cd54b59f-8568-4ad8-a75d-4fcdad6f8dce", "bridge": "br-int", "label": "tempest-network-smoke--1847695866", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fb8b6ba-45", "ovs_interfaceid": "9fb8b6ba-4534-4320-a88f-3da6cdc1eb28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:03:31 compute-0 nova_compute[254092]: 2025-11-25 17:03:31.739 254096 DEBUG nova.scheduler.client.report [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:03:31 compute-0 nova_compute[254092]: 2025-11-25 17:03:31.752 254096 DEBUG oslo_concurrency.lockutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Releasing lock "refresh_cache-dc02b95b-290f-441d-9b04-957187d0f885" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:03:31 compute-0 nova_compute[254092]: 2025-11-25 17:03:31.752 254096 DEBUG nova.compute.manager [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Instance network_info: |[{"id": "9fb8b6ba-4534-4320-a88f-3da6cdc1eb28", "address": "fa:16:3e:c6:a2:cd", "network": {"id": "cd54b59f-8568-4ad8-a75d-4fcdad6f8dce", "bridge": "br-int", "label": "tempest-network-smoke--1847695866", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fb8b6ba-45", "ovs_interfaceid": "9fb8b6ba-4534-4320-a88f-3da6cdc1eb28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 17:03:31 compute-0 nova_compute[254092]: 2025-11-25 17:03:31.752 254096 DEBUG oslo_concurrency.lockutils [req-e189a099-59ec-411a-9f59-736d55fbd660 req-96e9c989-1b84-442f-b6c9-e364ae58d0b4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-dc02b95b-290f-441d-9b04-957187d0f885" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:03:31 compute-0 nova_compute[254092]: 2025-11-25 17:03:31.753 254096 DEBUG nova.network.neutron [req-e189a099-59ec-411a-9f59-736d55fbd660 req-96e9c989-1b84-442f-b6c9-e364ae58d0b4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Refreshing network info cache for port 9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:03:31 compute-0 nova_compute[254092]: 2025-11-25 17:03:31.755 254096 DEBUG nova.virt.libvirt.driver [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Start _get_guest_xml network_info=[{"id": "9fb8b6ba-4534-4320-a88f-3da6cdc1eb28", "address": "fa:16:3e:c6:a2:cd", "network": {"id": "cd54b59f-8568-4ad8-a75d-4fcdad6f8dce", "bridge": "br-int", "label": "tempest-network-smoke--1847695866", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fb8b6ba-45", "ovs_interfaceid": "9fb8b6ba-4534-4320-a88f-3da6cdc1eb28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 17:03:31 compute-0 nova_compute[254092]: 2025-11-25 17:03:31.757 254096 DEBUG oslo_concurrency.lockutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:03:31 compute-0 nova_compute[254092]: 2025-11-25 17:03:31.758 254096 DEBUG nova.compute.manager [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 17:03:31 compute-0 nova_compute[254092]: 2025-11-25 17:03:31.765 254096 WARNING nova.virt.libvirt.driver [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:03:31 compute-0 nova_compute[254092]: 2025-11-25 17:03:31.773 254096 DEBUG nova.virt.libvirt.host [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 17:03:31 compute-0 nova_compute[254092]: 2025-11-25 17:03:31.773 254096 DEBUG nova.virt.libvirt.host [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 17:03:31 compute-0 nova_compute[254092]: 2025-11-25 17:03:31.777 254096 DEBUG nova.virt.libvirt.host [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 17:03:31 compute-0 nova_compute[254092]: 2025-11-25 17:03:31.777 254096 DEBUG nova.virt.libvirt.host [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 17:03:31 compute-0 nova_compute[254092]: 2025-11-25 17:03:31.777 254096 DEBUG nova.virt.libvirt.driver [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 17:03:31 compute-0 nova_compute[254092]: 2025-11-25 17:03:31.778 254096 DEBUG nova.virt.hardware [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 17:03:31 compute-0 nova_compute[254092]: 2025-11-25 17:03:31.778 254096 DEBUG nova.virt.hardware [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 17:03:31 compute-0 nova_compute[254092]: 2025-11-25 17:03:31.778 254096 DEBUG nova.virt.hardware [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 17:03:31 compute-0 nova_compute[254092]: 2025-11-25 17:03:31.778 254096 DEBUG nova.virt.hardware [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 17:03:31 compute-0 nova_compute[254092]: 2025-11-25 17:03:31.779 254096 DEBUG nova.virt.hardware [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 17:03:31 compute-0 nova_compute[254092]: 2025-11-25 17:03:31.779 254096 DEBUG nova.virt.hardware [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 17:03:31 compute-0 nova_compute[254092]: 2025-11-25 17:03:31.779 254096 DEBUG nova.virt.hardware [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 17:03:31 compute-0 nova_compute[254092]: 2025-11-25 17:03:31.779 254096 DEBUG nova.virt.hardware [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 17:03:31 compute-0 nova_compute[254092]: 2025-11-25 17:03:31.780 254096 DEBUG nova.virt.hardware [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 17:03:31 compute-0 nova_compute[254092]: 2025-11-25 17:03:31.780 254096 DEBUG nova.virt.hardware [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 17:03:31 compute-0 nova_compute[254092]: 2025-11-25 17:03:31.780 254096 DEBUG nova.virt.hardware [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 17:03:31 compute-0 nova_compute[254092]: 2025-11-25 17:03:31.783 254096 DEBUG oslo_concurrency.processutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:03:31 compute-0 nova_compute[254092]: 2025-11-25 17:03:31.818 254096 DEBUG nova.compute.manager [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 17:03:31 compute-0 nova_compute[254092]: 2025-11-25 17:03:31.819 254096 DEBUG nova.network.neutron [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 17:03:31 compute-0 nova_compute[254092]: 2025-11-25 17:03:31.841 254096 INFO nova.virt.libvirt.driver [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 17:03:31 compute-0 nova_compute[254092]: 2025-11-25 17:03:31.859 254096 DEBUG nova.compute.manager [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 17:03:31 compute-0 nova_compute[254092]: 2025-11-25 17:03:31.953 254096 DEBUG nova.compute.manager [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 17:03:31 compute-0 nova_compute[254092]: 2025-11-25 17:03:31.954 254096 DEBUG nova.virt.libvirt.driver [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 17:03:31 compute-0 nova_compute[254092]: 2025-11-25 17:03:31.955 254096 INFO nova.virt.libvirt.driver [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Creating image(s)
Nov 25 17:03:31 compute-0 nova_compute[254092]: 2025-11-25 17:03:31.975 254096 DEBUG nova.storage.rbd_utils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:03:31 compute-0 nova_compute[254092]: 2025-11-25 17:03:31.997 254096 DEBUG nova.storage.rbd_utils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:03:32 compute-0 nova_compute[254092]: 2025-11-25 17:03:32.016 254096 DEBUG nova.storage.rbd_utils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:03:32 compute-0 nova_compute[254092]: 2025-11-25 17:03:32.020 254096 DEBUG oslo_concurrency.processutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:03:32 compute-0 nova_compute[254092]: 2025-11-25 17:03:32.077 254096 DEBUG nova.policy [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '72f29c5d97464520bee19cb128258fd5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f19d2d0776e84c3c99c640c0b7ed3743', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 17:03:32 compute-0 nova_compute[254092]: 2025-11-25 17:03:32.101 254096 DEBUG oslo_concurrency.processutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:03:32 compute-0 nova_compute[254092]: 2025-11-25 17:03:32.102 254096 DEBUG oslo_concurrency.lockutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:03:32 compute-0 nova_compute[254092]: 2025-11-25 17:03:32.102 254096 DEBUG oslo_concurrency.lockutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:03:32 compute-0 nova_compute[254092]: 2025-11-25 17:03:32.103 254096 DEBUG oslo_concurrency.lockutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:03:32 compute-0 nova_compute[254092]: 2025-11-25 17:03:32.123 254096 DEBUG nova.storage.rbd_utils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:03:32 compute-0 nova_compute[254092]: 2025-11-25 17:03:32.126 254096 DEBUG oslo_concurrency.processutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:03:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:03:32 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1048479177' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:03:32 compute-0 nova_compute[254092]: 2025-11-25 17:03:32.257 254096 DEBUG oslo_concurrency.processutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:03:32 compute-0 nova_compute[254092]: 2025-11-25 17:03:32.289 254096 DEBUG nova.storage.rbd_utils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image dc02b95b-290f-441d-9b04-957187d0f885_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:03:32 compute-0 nova_compute[254092]: 2025-11-25 17:03:32.293 254096 DEBUG oslo_concurrency.processutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:03:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2400: 321 pgs: 321 active+clean; 246 MiB data, 931 MiB used, 59 GiB / 60 GiB avail; 278 KiB/s rd, 3.2 MiB/s wr, 76 op/s
Nov 25 17:03:32 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4190977656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:03:32 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1048479177' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:03:32 compute-0 nova_compute[254092]: 2025-11-25 17:03:32.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:03:32 compute-0 nova_compute[254092]: 2025-11-25 17:03:32.683 254096 DEBUG oslo_concurrency.processutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:03:32 compute-0 nova_compute[254092]: 2025-11-25 17:03:32.762 254096 DEBUG nova.storage.rbd_utils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] resizing rbd image 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 17:03:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:03:32 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4143354871' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:03:32 compute-0 nova_compute[254092]: 2025-11-25 17:03:32.857 254096 DEBUG oslo_concurrency.processutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:03:32 compute-0 nova_compute[254092]: 2025-11-25 17:03:32.860 254096 DEBUG nova.virt.libvirt.vif [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:03:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1163677477',display_name='tempest-TestNetworkBasicOps-server-1163677477',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1163677477',id=118,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCxvG1kVT8rj7rz3z1G56tHhBp7SuNAkkLNuv/fKw8COC5VTTwy6Y98cgqFvLD/dMqbJMho2xMxaZGbc9qY1ZWoyzmuMb54S1JTVXIQHKR2yNSi3mjSBeFFRS3qe8724pw==',key_name='tempest-TestNetworkBasicOps-1314414902',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-akatof1w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:03:26Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=dc02b95b-290f-441d-9b04-957187d0f885,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9fb8b6ba-4534-4320-a88f-3da6cdc1eb28", "address": "fa:16:3e:c6:a2:cd", "network": {"id": "cd54b59f-8568-4ad8-a75d-4fcdad6f8dce", "bridge": "br-int", "label": "tempest-network-smoke--1847695866", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fb8b6ba-45", "ovs_interfaceid": "9fb8b6ba-4534-4320-a88f-3da6cdc1eb28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:03:32 compute-0 nova_compute[254092]: 2025-11-25 17:03:32.860 254096 DEBUG nova.network.os_vif_util [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "9fb8b6ba-4534-4320-a88f-3da6cdc1eb28", "address": "fa:16:3e:c6:a2:cd", "network": {"id": "cd54b59f-8568-4ad8-a75d-4fcdad6f8dce", "bridge": "br-int", "label": "tempest-network-smoke--1847695866", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fb8b6ba-45", "ovs_interfaceid": "9fb8b6ba-4534-4320-a88f-3da6cdc1eb28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:03:32 compute-0 nova_compute[254092]: 2025-11-25 17:03:32.862 254096 DEBUG nova.network.os_vif_util [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:a2:cd,bridge_name='br-int',has_traffic_filtering=True,id=9fb8b6ba-4534-4320-a88f-3da6cdc1eb28,network=Network(cd54b59f-8568-4ad8-a75d-4fcdad6f8dce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fb8b6ba-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:03:32 compute-0 nova_compute[254092]: 2025-11-25 17:03:32.864 254096 DEBUG nova.objects.instance [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'pci_devices' on Instance uuid dc02b95b-290f-441d-9b04-957187d0f885 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:03:32 compute-0 nova_compute[254092]: 2025-11-25 17:03:32.877 254096 DEBUG nova.virt.libvirt.driver [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] End _get_guest_xml xml=<domain type="kvm">
Nov 25 17:03:32 compute-0 nova_compute[254092]:   <uuid>dc02b95b-290f-441d-9b04-957187d0f885</uuid>
Nov 25 17:03:32 compute-0 nova_compute[254092]:   <name>instance-00000076</name>
Nov 25 17:03:32 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 17:03:32 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 17:03:32 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:03:32 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:       <nova:name>tempest-TestNetworkBasicOps-server-1163677477</nova:name>
Nov 25 17:03:32 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 17:03:31</nova:creationTime>
Nov 25 17:03:32 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 17:03:32 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 17:03:32 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 17:03:32 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 17:03:32 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:03:32 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 17:03:32 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 17:03:32 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 17:03:32 compute-0 nova_compute[254092]:         <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 17:03:32 compute-0 nova_compute[254092]:         <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 17:03:32 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 17:03:32 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 17:03:32 compute-0 nova_compute[254092]:         <nova:port uuid="9fb8b6ba-4534-4320-a88f-3da6cdc1eb28">
Nov 25 17:03:32 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 17:03:32 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 17:03:32 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:03:32 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 17:03:32 compute-0 nova_compute[254092]:     <system>
Nov 25 17:03:32 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 17:03:32 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 17:03:32 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:03:32 compute-0 nova_compute[254092]:       <entry name="serial">dc02b95b-290f-441d-9b04-957187d0f885</entry>
Nov 25 17:03:32 compute-0 nova_compute[254092]:       <entry name="uuid">dc02b95b-290f-441d-9b04-957187d0f885</entry>
Nov 25 17:03:32 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     </system>
Nov 25 17:03:32 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:03:32 compute-0 nova_compute[254092]:   <os>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:   </os>
Nov 25 17:03:32 compute-0 nova_compute[254092]:   <features>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:   </features>
Nov 25 17:03:32 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 17:03:32 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:03:32 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 17:03:32 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:03:32 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 17:03:32 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/dc02b95b-290f-441d-9b04-957187d0f885_disk">
Nov 25 17:03:32 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:       </source>
Nov 25 17:03:32 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:03:32 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:03:32 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 17:03:32 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/dc02b95b-290f-441d-9b04-957187d0f885_disk.config">
Nov 25 17:03:32 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:       </source>
Nov 25 17:03:32 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:03:32 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:03:32 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 17:03:32 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:c6:a2:cd"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:       <target dev="tap9fb8b6ba-45"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 17:03:32 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/dc02b95b-290f-441d-9b04-957187d0f885/console.log" append="off"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     <video>
Nov 25 17:03:32 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     </video>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 17:03:32 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 17:03:32 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 17:03:32 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:03:32 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:03:32 compute-0 nova_compute[254092]: </domain>
Nov 25 17:03:32 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 17:03:32 compute-0 nova_compute[254092]: 2025-11-25 17:03:32.877 254096 DEBUG nova.compute.manager [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Preparing to wait for external event network-vif-plugged-9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 17:03:32 compute-0 nova_compute[254092]: 2025-11-25 17:03:32.878 254096 DEBUG oslo_concurrency.lockutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "dc02b95b-290f-441d-9b04-957187d0f885-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:03:32 compute-0 nova_compute[254092]: 2025-11-25 17:03:32.878 254096 DEBUG oslo_concurrency.lockutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "dc02b95b-290f-441d-9b04-957187d0f885-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:03:32 compute-0 nova_compute[254092]: 2025-11-25 17:03:32.879 254096 DEBUG oslo_concurrency.lockutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "dc02b95b-290f-441d-9b04-957187d0f885-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:03:32 compute-0 nova_compute[254092]: 2025-11-25 17:03:32.880 254096 DEBUG nova.virt.libvirt.vif [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:03:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1163677477',display_name='tempest-TestNetworkBasicOps-server-1163677477',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1163677477',id=118,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCxvG1kVT8rj7rz3z1G56tHhBp7SuNAkkLNuv/fKw8COC5VTTwy6Y98cgqFvLD/dMqbJMho2xMxaZGbc9qY1ZWoyzmuMb54S1JTVXIQHKR2yNSi3mjSBeFFRS3qe8724pw==',key_name='tempest-TestNetworkBasicOps-1314414902',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-akatof1w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:03:26Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=dc02b95b-290f-441d-9b04-957187d0f885,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9fb8b6ba-4534-4320-a88f-3da6cdc1eb28", "address": "fa:16:3e:c6:a2:cd", "network": {"id": "cd54b59f-8568-4ad8-a75d-4fcdad6f8dce", "bridge": "br-int", "label": "tempest-network-smoke--1847695866", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fb8b6ba-45", "ovs_interfaceid": "9fb8b6ba-4534-4320-a88f-3da6cdc1eb28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:03:32 compute-0 nova_compute[254092]: 2025-11-25 17:03:32.880 254096 DEBUG nova.network.os_vif_util [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "9fb8b6ba-4534-4320-a88f-3da6cdc1eb28", "address": "fa:16:3e:c6:a2:cd", "network": {"id": "cd54b59f-8568-4ad8-a75d-4fcdad6f8dce", "bridge": "br-int", "label": "tempest-network-smoke--1847695866", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fb8b6ba-45", "ovs_interfaceid": "9fb8b6ba-4534-4320-a88f-3da6cdc1eb28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:03:32 compute-0 nova_compute[254092]: 2025-11-25 17:03:32.881 254096 DEBUG nova.network.os_vif_util [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:a2:cd,bridge_name='br-int',has_traffic_filtering=True,id=9fb8b6ba-4534-4320-a88f-3da6cdc1eb28,network=Network(cd54b59f-8568-4ad8-a75d-4fcdad6f8dce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fb8b6ba-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:03:32 compute-0 nova_compute[254092]: 2025-11-25 17:03:32.882 254096 DEBUG os_vif [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:a2:cd,bridge_name='br-int',has_traffic_filtering=True,id=9fb8b6ba-4534-4320-a88f-3da6cdc1eb28,network=Network(cd54b59f-8568-4ad8-a75d-4fcdad6f8dce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fb8b6ba-45') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:03:32 compute-0 nova_compute[254092]: 2025-11-25 17:03:32.883 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:32 compute-0 nova_compute[254092]: 2025-11-25 17:03:32.884 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:03:32 compute-0 nova_compute[254092]: 2025-11-25 17:03:32.884 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:03:32 compute-0 nova_compute[254092]: 2025-11-25 17:03:32.889 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:32 compute-0 nova_compute[254092]: 2025-11-25 17:03:32.890 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9fb8b6ba-45, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:03:32 compute-0 nova_compute[254092]: 2025-11-25 17:03:32.891 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9fb8b6ba-45, col_values=(('external_ids', {'iface-id': '9fb8b6ba-4534-4320-a88f-3da6cdc1eb28', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c6:a2:cd', 'vm-uuid': 'dc02b95b-290f-441d-9b04-957187d0f885'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:03:32 compute-0 nova_compute[254092]: 2025-11-25 17:03:32.893 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:32 compute-0 NetworkManager[48891]: <info>  [1764090212.8942] manager: (tap9fb8b6ba-45): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/504)
Nov 25 17:03:32 compute-0 nova_compute[254092]: 2025-11-25 17:03:32.897 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:03:32 compute-0 nova_compute[254092]: 2025-11-25 17:03:32.899 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:32 compute-0 nova_compute[254092]: 2025-11-25 17:03:32.900 254096 INFO os_vif [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:a2:cd,bridge_name='br-int',has_traffic_filtering=True,id=9fb8b6ba-4534-4320-a88f-3da6cdc1eb28,network=Network(cd54b59f-8568-4ad8-a75d-4fcdad6f8dce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fb8b6ba-45')
Nov 25 17:03:32 compute-0 nova_compute[254092]: 2025-11-25 17:03:32.991 254096 DEBUG nova.virt.libvirt.driver [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:03:32 compute-0 nova_compute[254092]: 2025-11-25 17:03:32.991 254096 DEBUG nova.virt.libvirt.driver [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:03:32 compute-0 nova_compute[254092]: 2025-11-25 17:03:32.992 254096 DEBUG nova.virt.libvirt.driver [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No VIF found with MAC fa:16:3e:c6:a2:cd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:03:32 compute-0 nova_compute[254092]: 2025-11-25 17:03:32.992 254096 INFO nova.virt.libvirt.driver [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Using config drive
Nov 25 17:03:33 compute-0 nova_compute[254092]: 2025-11-25 17:03:33.010 254096 DEBUG nova.storage.rbd_utils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image dc02b95b-290f-441d-9b04-957187d0f885_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:03:33 compute-0 nova_compute[254092]: 2025-11-25 17:03:33.083 254096 DEBUG nova.objects.instance [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lazy-loading 'migration_context' on Instance uuid 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:03:33 compute-0 nova_compute[254092]: 2025-11-25 17:03:33.096 254096 DEBUG nova.virt.libvirt.driver [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 17:03:33 compute-0 nova_compute[254092]: 2025-11-25 17:03:33.096 254096 DEBUG nova.virt.libvirt.driver [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Ensure instance console log exists: /var/lib/nova/instances/4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 17:03:33 compute-0 nova_compute[254092]: 2025-11-25 17:03:33.097 254096 DEBUG oslo_concurrency.lockutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:03:33 compute-0 nova_compute[254092]: 2025-11-25 17:03:33.097 254096 DEBUG oslo_concurrency.lockutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:03:33 compute-0 nova_compute[254092]: 2025-11-25 17:03:33.097 254096 DEBUG oslo_concurrency.lockutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:03:33 compute-0 nova_compute[254092]: 2025-11-25 17:03:33.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:03:33 compute-0 nova_compute[254092]: 2025-11-25 17:03:33.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:03:33 compute-0 ceph-mon[74985]: pgmap v2400: 321 pgs: 321 active+clean; 246 MiB data, 931 MiB used, 59 GiB / 60 GiB avail; 278 KiB/s rd, 3.2 MiB/s wr, 76 op/s
Nov 25 17:03:33 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4143354871' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:03:33 compute-0 nova_compute[254092]: 2025-11-25 17:03:33.791 254096 DEBUG nova.network.neutron [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Successfully created port: 29c53aaa-054f-442e-8673-22d0d7fc5f72 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 17:03:33 compute-0 nova_compute[254092]: 2025-11-25 17:03:33.821 254096 INFO nova.virt.libvirt.driver [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Creating config drive at /var/lib/nova/instances/dc02b95b-290f-441d-9b04-957187d0f885/disk.config
Nov 25 17:03:33 compute-0 nova_compute[254092]: 2025-11-25 17:03:33.826 254096 DEBUG oslo_concurrency.processutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dc02b95b-290f-441d-9b04-957187d0f885/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyn3pllm1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:03:33 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:03:33 compute-0 nova_compute[254092]: 2025-11-25 17:03:33.989 254096 DEBUG oslo_concurrency.processutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dc02b95b-290f-441d-9b04-957187d0f885/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyn3pllm1" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:03:34 compute-0 nova_compute[254092]: 2025-11-25 17:03:34.034 254096 DEBUG nova.storage.rbd_utils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image dc02b95b-290f-441d-9b04-957187d0f885_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:03:34 compute-0 nova_compute[254092]: 2025-11-25 17:03:34.041 254096 DEBUG oslo_concurrency.processutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dc02b95b-290f-441d-9b04-957187d0f885/disk.config dc02b95b-290f-441d-9b04-957187d0f885_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:03:34 compute-0 nova_compute[254092]: 2025-11-25 17:03:34.106 254096 DEBUG nova.network.neutron [req-e189a099-59ec-411a-9f59-736d55fbd660 req-96e9c989-1b84-442f-b6c9-e364ae58d0b4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Updated VIF entry in instance network info cache for port 9fb8b6ba-4534-4320-a88f-3da6cdc1eb28. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:03:34 compute-0 nova_compute[254092]: 2025-11-25 17:03:34.108 254096 DEBUG nova.network.neutron [req-e189a099-59ec-411a-9f59-736d55fbd660 req-96e9c989-1b84-442f-b6c9-e364ae58d0b4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Updating instance_info_cache with network_info: [{"id": "9fb8b6ba-4534-4320-a88f-3da6cdc1eb28", "address": "fa:16:3e:c6:a2:cd", "network": {"id": "cd54b59f-8568-4ad8-a75d-4fcdad6f8dce", "bridge": "br-int", "label": "tempest-network-smoke--1847695866", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fb8b6ba-45", "ovs_interfaceid": "9fb8b6ba-4534-4320-a88f-3da6cdc1eb28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:03:34 compute-0 nova_compute[254092]: 2025-11-25 17:03:34.148 254096 DEBUG oslo_concurrency.lockutils [req-e189a099-59ec-411a-9f59-736d55fbd660 req-96e9c989-1b84-442f-b6c9-e364ae58d0b4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-dc02b95b-290f-441d-9b04-957187d0f885" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:03:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2401: 321 pgs: 321 active+clean; 246 MiB data, 931 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 17:03:34 compute-0 nova_compute[254092]: 2025-11-25 17:03:34.362 254096 DEBUG oslo_concurrency.processutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dc02b95b-290f-441d-9b04-957187d0f885/disk.config dc02b95b-290f-441d-9b04-957187d0f885_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.321s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:03:34 compute-0 nova_compute[254092]: 2025-11-25 17:03:34.362 254096 INFO nova.virt.libvirt.driver [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Deleting local config drive /var/lib/nova/instances/dc02b95b-290f-441d-9b04-957187d0f885/disk.config because it was imported into RBD.
Nov 25 17:03:34 compute-0 kernel: tap9fb8b6ba-45: entered promiscuous mode
Nov 25 17:03:34 compute-0 NetworkManager[48891]: <info>  [1764090214.4299] manager: (tap9fb8b6ba-45): new Tun device (/org/freedesktop/NetworkManager/Devices/505)
Nov 25 17:03:34 compute-0 nova_compute[254092]: 2025-11-25 17:03:34.467 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:34 compute-0 ovn_controller[153477]: 2025-11-25T17:03:34Z|01217|binding|INFO|Claiming lport 9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 for this chassis.
Nov 25 17:03:34 compute-0 ovn_controller[153477]: 2025-11-25T17:03:34Z|01218|binding|INFO|9fb8b6ba-4534-4320-a88f-3da6cdc1eb28: Claiming fa:16:3e:c6:a2:cd 10.100.0.8
Nov 25 17:03:34 compute-0 nova_compute[254092]: 2025-11-25 17:03:34.473 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:34.480 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:a2:cd 10.100.0.8'], port_security=['fa:16:3e:c6:a2:cd 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'dc02b95b-290f-441d-9b04-957187d0f885', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a1a0257b-ea79-4747-8da2-0da26a4a2e35', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=78a09f35-40c2-481d-a3e7-9cd1e21550b9, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=9fb8b6ba-4534-4320-a88f-3da6cdc1eb28) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:03:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:34.481 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 in datapath cd54b59f-8568-4ad8-a75d-4fcdad6f8dce bound to our chassis
Nov 25 17:03:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:34.483 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cd54b59f-8568-4ad8-a75d-4fcdad6f8dce
Nov 25 17:03:34 compute-0 nova_compute[254092]: 2025-11-25 17:03:34.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:03:34 compute-0 ovn_controller[153477]: 2025-11-25T17:03:34Z|01219|binding|INFO|Setting lport 9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 ovn-installed in OVS
Nov 25 17:03:34 compute-0 ovn_controller[153477]: 2025-11-25T17:03:34Z|01220|binding|INFO|Setting lport 9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 up in Southbound
Nov 25 17:03:34 compute-0 nova_compute[254092]: 2025-11-25 17:03:34.506 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:34.513 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1991711a-2726-4f35-bec0-4483a2a7fff4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:03:34 compute-0 systemd-machined[216343]: New machine qemu-150-instance-00000076.
Nov 25 17:03:34 compute-0 systemd-udevd[384958]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:03:34 compute-0 nova_compute[254092]: 2025-11-25 17:03:34.526 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:03:34 compute-0 nova_compute[254092]: 2025-11-25 17:03:34.527 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:03:34 compute-0 nova_compute[254092]: 2025-11-25 17:03:34.528 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:03:34 compute-0 nova_compute[254092]: 2025-11-25 17:03:34.528 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:03:34 compute-0 nova_compute[254092]: 2025-11-25 17:03:34.528 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:03:34 compute-0 systemd[1]: Started Virtual Machine qemu-150-instance-00000076.
Nov 25 17:03:34 compute-0 NetworkManager[48891]: <info>  [1764090214.5361] device (tap9fb8b6ba-45): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:03:34 compute-0 NetworkManager[48891]: <info>  [1764090214.5370] device (tap9fb8b6ba-45): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:03:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:34.547 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[2ed3c6c6-cccc-43cc-9279-efaa2a2656ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:03:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:34.549 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6ecc8477-b54c-4970-a107-df215b701c57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:03:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:34.593 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a95a993a-ed57-4b24-aec9-69cb446c6c15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:03:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:34.616 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[43a6fff5-4159-48c9-a238-80ec900e4854]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcd54b59f-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:34:f1:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 355], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663534, 'reachable_time': 36364, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 384971, 'error': None, 'target': 'ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:03:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:34.640 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5c77decd-40df-4383-b3f4-fb436aabbfc4]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcd54b59f-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663545, 'tstamp': 663545}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 384972, 'error': None, 'target': 'ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcd54b59f-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663547, 'tstamp': 663547}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 384972, 'error': None, 'target': 'ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:03:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:34.643 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcd54b59f-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:03:34 compute-0 nova_compute[254092]: 2025-11-25 17:03:34.645 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:34.646 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcd54b59f-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:03:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:34.646 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:03:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:34.647 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcd54b59f-80, col_values=(('external_ids', {'iface-id': 'd6826306-6b17-44f3-a972-7d5089250616'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:03:34 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:34.647 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:03:34 compute-0 nova_compute[254092]: 2025-11-25 17:03:34.782 254096 DEBUG nova.compute.manager [req-199f122e-4067-4b34-968b-e6f52618460b req-0f0ba3c2-9848-4e1f-8918-b59eac2c4b84 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Received event network-vif-plugged-9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:03:34 compute-0 nova_compute[254092]: 2025-11-25 17:03:34.783 254096 DEBUG oslo_concurrency.lockutils [req-199f122e-4067-4b34-968b-e6f52618460b req-0f0ba3c2-9848-4e1f-8918-b59eac2c4b84 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "dc02b95b-290f-441d-9b04-957187d0f885-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:03:34 compute-0 nova_compute[254092]: 2025-11-25 17:03:34.784 254096 DEBUG oslo_concurrency.lockutils [req-199f122e-4067-4b34-968b-e6f52618460b req-0f0ba3c2-9848-4e1f-8918-b59eac2c4b84 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "dc02b95b-290f-441d-9b04-957187d0f885-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:03:34 compute-0 nova_compute[254092]: 2025-11-25 17:03:34.784 254096 DEBUG oslo_concurrency.lockutils [req-199f122e-4067-4b34-968b-e6f52618460b req-0f0ba3c2-9848-4e1f-8918-b59eac2c4b84 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "dc02b95b-290f-441d-9b04-957187d0f885-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:03:34 compute-0 nova_compute[254092]: 2025-11-25 17:03:34.784 254096 DEBUG nova.compute.manager [req-199f122e-4067-4b34-968b-e6f52618460b req-0f0ba3c2-9848-4e1f-8918-b59eac2c4b84 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Processing event network-vif-plugged-9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 17:03:34 compute-0 nova_compute[254092]: 2025-11-25 17:03:34.789 254096 DEBUG nova.network.neutron [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Successfully updated port: 29c53aaa-054f-442e-8673-22d0d7fc5f72 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:03:34 compute-0 nova_compute[254092]: 2025-11-25 17:03:34.814 254096 DEBUG oslo_concurrency.lockutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "refresh_cache-4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:03:34 compute-0 nova_compute[254092]: 2025-11-25 17:03:34.815 254096 DEBUG oslo_concurrency.lockutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquired lock "refresh_cache-4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:03:34 compute-0 nova_compute[254092]: 2025-11-25 17:03:34.815 254096 DEBUG nova.network.neutron [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 17:03:34 compute-0 nova_compute[254092]: 2025-11-25 17:03:34.895 254096 DEBUG nova.compute.manager [req-16e92d0f-d0b1-472d-8821-f3b5280bc4d2 req-7664e066-d38d-4a57-8f1c-74f3acc185cf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Received event network-changed-29c53aaa-054f-442e-8673-22d0d7fc5f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:03:34 compute-0 nova_compute[254092]: 2025-11-25 17:03:34.896 254096 DEBUG nova.compute.manager [req-16e92d0f-d0b1-472d-8821-f3b5280bc4d2 req-7664e066-d38d-4a57-8f1c-74f3acc185cf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Refreshing instance network info cache due to event network-changed-29c53aaa-054f-442e-8673-22d0d7fc5f72. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:03:34 compute-0 nova_compute[254092]: 2025-11-25 17:03:34.900 254096 DEBUG oslo_concurrency.lockutils [req-16e92d0f-d0b1-472d-8821-f3b5280bc4d2 req-7664e066-d38d-4a57-8f1c-74f3acc185cf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:03:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:03:34 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/544051314' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:03:34 compute-0 nova_compute[254092]: 2025-11-25 17:03:34.970 254096 DEBUG nova.network.neutron [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:03:34 compute-0 nova_compute[254092]: 2025-11-25 17:03:34.977 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090214.9771938, dc02b95b-290f-441d-9b04-957187d0f885 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:03:34 compute-0 nova_compute[254092]: 2025-11-25 17:03:34.978 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dc02b95b-290f-441d-9b04-957187d0f885] VM Started (Lifecycle Event)
Nov 25 17:03:34 compute-0 nova_compute[254092]: 2025-11-25 17:03:34.981 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:03:34 compute-0 nova_compute[254092]: 2025-11-25 17:03:34.985 254096 DEBUG nova.compute.manager [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 17:03:34 compute-0 nova_compute[254092]: 2025-11-25 17:03:34.990 254096 DEBUG nova.virt.libvirt.driver [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 17:03:34 compute-0 nova_compute[254092]: 2025-11-25 17:03:34.993 254096 INFO nova.virt.libvirt.driver [-] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Instance spawned successfully.
Nov 25 17:03:34 compute-0 nova_compute[254092]: 2025-11-25 17:03:34.993 254096 DEBUG nova.virt.libvirt.driver [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 17:03:34 compute-0 nova_compute[254092]: 2025-11-25 17:03:34.997 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:03:35 compute-0 nova_compute[254092]: 2025-11-25 17:03:35.001 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:03:35 compute-0 nova_compute[254092]: 2025-11-25 17:03:35.005 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:35 compute-0 nova_compute[254092]: 2025-11-25 17:03:35.012 254096 DEBUG nova.virt.libvirt.driver [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:03:35 compute-0 nova_compute[254092]: 2025-11-25 17:03:35.012 254096 DEBUG nova.virt.libvirt.driver [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:03:35 compute-0 nova_compute[254092]: 2025-11-25 17:03:35.013 254096 DEBUG nova.virt.libvirt.driver [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:03:35 compute-0 nova_compute[254092]: 2025-11-25 17:03:35.013 254096 DEBUG nova.virt.libvirt.driver [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:03:35 compute-0 nova_compute[254092]: 2025-11-25 17:03:35.013 254096 DEBUG nova.virt.libvirt.driver [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:03:35 compute-0 nova_compute[254092]: 2025-11-25 17:03:35.014 254096 DEBUG nova.virt.libvirt.driver [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:03:35 compute-0 nova_compute[254092]: 2025-11-25 17:03:35.017 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dc02b95b-290f-441d-9b04-957187d0f885] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:03:35 compute-0 nova_compute[254092]: 2025-11-25 17:03:35.017 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090214.9807823, dc02b95b-290f-441d-9b04-957187d0f885 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:03:35 compute-0 nova_compute[254092]: 2025-11-25 17:03:35.017 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dc02b95b-290f-441d-9b04-957187d0f885] VM Paused (Lifecycle Event)
Nov 25 17:03:35 compute-0 nova_compute[254092]: 2025-11-25 17:03:35.074 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:03:35 compute-0 nova_compute[254092]: 2025-11-25 17:03:35.077 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090214.990103, dc02b95b-290f-441d-9b04-957187d0f885 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:03:35 compute-0 nova_compute[254092]: 2025-11-25 17:03:35.077 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dc02b95b-290f-441d-9b04-957187d0f885] VM Resumed (Lifecycle Event)
Nov 25 17:03:35 compute-0 nova_compute[254092]: 2025-11-25 17:03:35.107 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:03:35 compute-0 nova_compute[254092]: 2025-11-25 17:03:35.109 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:03:35 compute-0 nova_compute[254092]: 2025-11-25 17:03:35.109 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:03:35 compute-0 nova_compute[254092]: 2025-11-25 17:03:35.110 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:03:35 compute-0 nova_compute[254092]: 2025-11-25 17:03:35.113 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:03:35 compute-0 nova_compute[254092]: 2025-11-25 17:03:35.114 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:03:35 compute-0 nova_compute[254092]: 2025-11-25 17:03:35.117 254096 INFO nova.compute.manager [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Took 8.20 seconds to spawn the instance on the hypervisor.
Nov 25 17:03:35 compute-0 nova_compute[254092]: 2025-11-25 17:03:35.118 254096 DEBUG nova.compute.manager [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:03:35 compute-0 nova_compute[254092]: 2025-11-25 17:03:35.119 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:03:35 compute-0 nova_compute[254092]: 2025-11-25 17:03:35.119 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:03:35 compute-0 nova_compute[254092]: 2025-11-25 17:03:35.146 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: dc02b95b-290f-441d-9b04-957187d0f885] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:03:35 compute-0 nova_compute[254092]: 2025-11-25 17:03:35.195 254096 INFO nova.compute.manager [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Took 9.32 seconds to build instance.
Nov 25 17:03:35 compute-0 nova_compute[254092]: 2025-11-25 17:03:35.228 254096 DEBUG oslo_concurrency.lockutils [None req-cce065c3-f42c-4585-b1e4-98ee5290f22e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "dc02b95b-290f-441d-9b04-957187d0f885" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.408s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:03:35 compute-0 nova_compute[254092]: 2025-11-25 17:03:35.330 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:03:35 compute-0 nova_compute[254092]: 2025-11-25 17:03:35.331 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3296MB free_disk=59.876468658447266GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:03:35 compute-0 nova_compute[254092]: 2025-11-25 17:03:35.331 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:03:35 compute-0 nova_compute[254092]: 2025-11-25 17:03:35.331 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:03:35 compute-0 nova_compute[254092]: 2025-11-25 17:03:35.395 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 73a187fa-5479-4191-bd44-757c3840137a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 17:03:35 compute-0 nova_compute[254092]: 2025-11-25 17:03:35.395 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 12deb2c6-31fb-4186-940b-8131e43ea3f8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 17:03:35 compute-0 nova_compute[254092]: 2025-11-25 17:03:35.395 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance dc02b95b-290f-441d-9b04-957187d0f885 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 17:03:35 compute-0 nova_compute[254092]: 2025-11-25 17:03:35.395 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 17:03:35 compute-0 nova_compute[254092]: 2025-11-25 17:03:35.395 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:03:35 compute-0 nova_compute[254092]: 2025-11-25 17:03:35.396 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:03:35 compute-0 nova_compute[254092]: 2025-11-25 17:03:35.482 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:03:35 compute-0 ceph-mon[74985]: pgmap v2401: 321 pgs: 321 active+clean; 246 MiB data, 931 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 17:03:35 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/544051314' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:03:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:03:35 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2121133011' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:03:35 compute-0 nova_compute[254092]: 2025-11-25 17:03:35.934 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:03:35 compute-0 nova_compute[254092]: 2025-11-25 17:03:35.938 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:03:35 compute-0 nova_compute[254092]: 2025-11-25 17:03:35.952 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:03:35 compute-0 nova_compute[254092]: 2025-11-25 17:03:35.979 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:03:35 compute-0 nova_compute[254092]: 2025-11-25 17:03:35.979 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:03:35 compute-0 nova_compute[254092]: 2025-11-25 17:03:35.987 254096 DEBUG nova.network.neutron [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Updating instance_info_cache with network_info: [{"id": "29c53aaa-054f-442e-8673-22d0d7fc5f72", "address": "fa:16:3e:da:b2:95", "network": {"id": "51d8d234-0a41-496f-82c7-0c98aa4761b8", "bridge": "br-int", "label": "tempest-network-smoke--615777126", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29c53aaa-05", "ovs_interfaceid": "29c53aaa-054f-442e-8673-22d0d7fc5f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:03:36 compute-0 nova_compute[254092]: 2025-11-25 17:03:36.003 254096 DEBUG oslo_concurrency.lockutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Releasing lock "refresh_cache-4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:03:36 compute-0 nova_compute[254092]: 2025-11-25 17:03:36.003 254096 DEBUG nova.compute.manager [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Instance network_info: |[{"id": "29c53aaa-054f-442e-8673-22d0d7fc5f72", "address": "fa:16:3e:da:b2:95", "network": {"id": "51d8d234-0a41-496f-82c7-0c98aa4761b8", "bridge": "br-int", "label": "tempest-network-smoke--615777126", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29c53aaa-05", "ovs_interfaceid": "29c53aaa-054f-442e-8673-22d0d7fc5f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 17:03:36 compute-0 nova_compute[254092]: 2025-11-25 17:03:36.004 254096 DEBUG oslo_concurrency.lockutils [req-16e92d0f-d0b1-472d-8821-f3b5280bc4d2 req-7664e066-d38d-4a57-8f1c-74f3acc185cf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:03:36 compute-0 nova_compute[254092]: 2025-11-25 17:03:36.004 254096 DEBUG nova.network.neutron [req-16e92d0f-d0b1-472d-8821-f3b5280bc4d2 req-7664e066-d38d-4a57-8f1c-74f3acc185cf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Refreshing network info cache for port 29c53aaa-054f-442e-8673-22d0d7fc5f72 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:03:36 compute-0 nova_compute[254092]: 2025-11-25 17:03:36.006 254096 DEBUG nova.virt.libvirt.driver [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Start _get_guest_xml network_info=[{"id": "29c53aaa-054f-442e-8673-22d0d7fc5f72", "address": "fa:16:3e:da:b2:95", "network": {"id": "51d8d234-0a41-496f-82c7-0c98aa4761b8", "bridge": "br-int", "label": "tempest-network-smoke--615777126", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29c53aaa-05", "ovs_interfaceid": "29c53aaa-054f-442e-8673-22d0d7fc5f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 17:03:36 compute-0 nova_compute[254092]: 2025-11-25 17:03:36.011 254096 WARNING nova.virt.libvirt.driver [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:03:36 compute-0 nova_compute[254092]: 2025-11-25 17:03:36.017 254096 DEBUG nova.virt.libvirt.host [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 17:03:36 compute-0 nova_compute[254092]: 2025-11-25 17:03:36.017 254096 DEBUG nova.virt.libvirt.host [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 17:03:36 compute-0 nova_compute[254092]: 2025-11-25 17:03:36.023 254096 DEBUG nova.virt.libvirt.host [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 17:03:36 compute-0 nova_compute[254092]: 2025-11-25 17:03:36.023 254096 DEBUG nova.virt.libvirt.host [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 17:03:36 compute-0 nova_compute[254092]: 2025-11-25 17:03:36.024 254096 DEBUG nova.virt.libvirt.driver [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 17:03:36 compute-0 nova_compute[254092]: 2025-11-25 17:03:36.024 254096 DEBUG nova.virt.hardware [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 17:03:36 compute-0 nova_compute[254092]: 2025-11-25 17:03:36.024 254096 DEBUG nova.virt.hardware [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 17:03:36 compute-0 nova_compute[254092]: 2025-11-25 17:03:36.024 254096 DEBUG nova.virt.hardware [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 17:03:36 compute-0 nova_compute[254092]: 2025-11-25 17:03:36.025 254096 DEBUG nova.virt.hardware [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 17:03:36 compute-0 nova_compute[254092]: 2025-11-25 17:03:36.025 254096 DEBUG nova.virt.hardware [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 17:03:36 compute-0 nova_compute[254092]: 2025-11-25 17:03:36.025 254096 DEBUG nova.virt.hardware [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 17:03:36 compute-0 nova_compute[254092]: 2025-11-25 17:03:36.025 254096 DEBUG nova.virt.hardware [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 17:03:36 compute-0 nova_compute[254092]: 2025-11-25 17:03:36.026 254096 DEBUG nova.virt.hardware [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 17:03:36 compute-0 nova_compute[254092]: 2025-11-25 17:03:36.026 254096 DEBUG nova.virt.hardware [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 17:03:36 compute-0 nova_compute[254092]: 2025-11-25 17:03:36.026 254096 DEBUG nova.virt.hardware [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 17:03:36 compute-0 nova_compute[254092]: 2025-11-25 17:03:36.026 254096 DEBUG nova.virt.hardware [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 17:03:36 compute-0 nova_compute[254092]: 2025-11-25 17:03:36.029 254096 DEBUG oslo_concurrency.processutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:03:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2402: 321 pgs: 321 active+clean; 293 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 281 KiB/s rd, 3.6 MiB/s wr, 74 op/s
Nov 25 17:03:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:03:36 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3109160244' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:03:36 compute-0 nova_compute[254092]: 2025-11-25 17:03:36.550 254096 DEBUG oslo_concurrency.processutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:03:36 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2121133011' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:03:36 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3109160244' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:03:36 compute-0 nova_compute[254092]: 2025-11-25 17:03:36.582 254096 DEBUG nova.storage.rbd_utils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:03:36 compute-0 nova_compute[254092]: 2025-11-25 17:03:36.586 254096 DEBUG oslo_concurrency.processutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:03:36 compute-0 nova_compute[254092]: 2025-11-25 17:03:36.861 254096 DEBUG nova.compute.manager [req-82555fc7-3d81-4123-be53-7a23201e75fe req-16e52819-0ab1-4d20-8e32-2bbc1870a131 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Received event network-vif-plugged-9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:03:36 compute-0 nova_compute[254092]: 2025-11-25 17:03:36.861 254096 DEBUG oslo_concurrency.lockutils [req-82555fc7-3d81-4123-be53-7a23201e75fe req-16e52819-0ab1-4d20-8e32-2bbc1870a131 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "dc02b95b-290f-441d-9b04-957187d0f885-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:03:36 compute-0 nova_compute[254092]: 2025-11-25 17:03:36.862 254096 DEBUG oslo_concurrency.lockutils [req-82555fc7-3d81-4123-be53-7a23201e75fe req-16e52819-0ab1-4d20-8e32-2bbc1870a131 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "dc02b95b-290f-441d-9b04-957187d0f885-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:03:36 compute-0 nova_compute[254092]: 2025-11-25 17:03:36.862 254096 DEBUG oslo_concurrency.lockutils [req-82555fc7-3d81-4123-be53-7a23201e75fe req-16e52819-0ab1-4d20-8e32-2bbc1870a131 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "dc02b95b-290f-441d-9b04-957187d0f885-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:03:36 compute-0 nova_compute[254092]: 2025-11-25 17:03:36.862 254096 DEBUG nova.compute.manager [req-82555fc7-3d81-4123-be53-7a23201e75fe req-16e52819-0ab1-4d20-8e32-2bbc1870a131 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] No waiting events found dispatching network-vif-plugged-9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:03:36 compute-0 nova_compute[254092]: 2025-11-25 17:03:36.863 254096 WARNING nova.compute.manager [req-82555fc7-3d81-4123-be53-7a23201e75fe req-16e52819-0ab1-4d20-8e32-2bbc1870a131 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Received unexpected event network-vif-plugged-9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 for instance with vm_state active and task_state None.
Nov 25 17:03:37 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:03:37 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/315091351' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:03:37 compute-0 nova_compute[254092]: 2025-11-25 17:03:37.021 254096 DEBUG oslo_concurrency.processutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:03:37 compute-0 nova_compute[254092]: 2025-11-25 17:03:37.022 254096 DEBUG nova.virt.libvirt.vif [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:03:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-gen-0-527929422',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-gen-0-527929422',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-804365052-gen',id=119,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBENCxp3bTXGsKjC+pXKPCNnWwcQujVQHZ7u1aFGP/yEo2FpTOpo1ONs5vq960JiIu5OhMB87S9/LqqgZ1/j7VB13L8Lrffj2U41nmW0H1eg/FnO9M5/++4mSrfYAP8cVng==',key_name='tempest-TestSecurityGroupsBasicOps-209947328',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f19d2d0776e84c3c99c640c0b7ed3743',ramdisk_id='',reservation_id='r-4a5cegzc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-804365052',owner_user_name='tempest-TestSecurityGroupsBasicOps-804365052-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:03:31Z,user_data=None,user_id='72f29c5d97464520bee19cb128258fd5',uuid=4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "29c53aaa-054f-442e-8673-22d0d7fc5f72", "address": "fa:16:3e:da:b2:95", "network": {"id": "51d8d234-0a41-496f-82c7-0c98aa4761b8", "bridge": "br-int", "label": "tempest-network-smoke--615777126", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29c53aaa-05", "ovs_interfaceid": "29c53aaa-054f-442e-8673-22d0d7fc5f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:03:37 compute-0 nova_compute[254092]: 2025-11-25 17:03:37.023 254096 DEBUG nova.network.os_vif_util [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converting VIF {"id": "29c53aaa-054f-442e-8673-22d0d7fc5f72", "address": "fa:16:3e:da:b2:95", "network": {"id": "51d8d234-0a41-496f-82c7-0c98aa4761b8", "bridge": "br-int", "label": "tempest-network-smoke--615777126", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29c53aaa-05", "ovs_interfaceid": "29c53aaa-054f-442e-8673-22d0d7fc5f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:03:37 compute-0 nova_compute[254092]: 2025-11-25 17:03:37.024 254096 DEBUG nova.network.os_vif_util [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:b2:95,bridge_name='br-int',has_traffic_filtering=True,id=29c53aaa-054f-442e-8673-22d0d7fc5f72,network=Network(51d8d234-0a41-496f-82c7-0c98aa4761b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29c53aaa-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:03:37 compute-0 nova_compute[254092]: 2025-11-25 17:03:37.025 254096 DEBUG nova.objects.instance [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:03:37 compute-0 nova_compute[254092]: 2025-11-25 17:03:37.046 254096 DEBUG nova.virt.libvirt.driver [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] End _get_guest_xml xml=<domain type="kvm">
Nov 25 17:03:37 compute-0 nova_compute[254092]:   <uuid>4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3</uuid>
Nov 25 17:03:37 compute-0 nova_compute[254092]:   <name>instance-00000077</name>
Nov 25 17:03:37 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 17:03:37 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 17:03:37 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:03:37 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-gen-0-527929422</nova:name>
Nov 25 17:03:37 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 17:03:36</nova:creationTime>
Nov 25 17:03:37 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 17:03:37 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 17:03:37 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 17:03:37 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 17:03:37 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:03:37 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 17:03:37 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 17:03:37 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 17:03:37 compute-0 nova_compute[254092]:         <nova:user uuid="72f29c5d97464520bee19cb128258fd5">tempest-TestSecurityGroupsBasicOps-804365052-project-member</nova:user>
Nov 25 17:03:37 compute-0 nova_compute[254092]:         <nova:project uuid="f19d2d0776e84c3c99c640c0b7ed3743">tempest-TestSecurityGroupsBasicOps-804365052</nova:project>
Nov 25 17:03:37 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 17:03:37 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 17:03:37 compute-0 nova_compute[254092]:         <nova:port uuid="29c53aaa-054f-442e-8673-22d0d7fc5f72">
Nov 25 17:03:37 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 17:03:37 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 17:03:37 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:03:37 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 17:03:37 compute-0 nova_compute[254092]:     <system>
Nov 25 17:03:37 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 17:03:37 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 17:03:37 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:03:37 compute-0 nova_compute[254092]:       <entry name="serial">4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3</entry>
Nov 25 17:03:37 compute-0 nova_compute[254092]:       <entry name="uuid">4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3</entry>
Nov 25 17:03:37 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     </system>
Nov 25 17:03:37 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:03:37 compute-0 nova_compute[254092]:   <os>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:   </os>
Nov 25 17:03:37 compute-0 nova_compute[254092]:   <features>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:   </features>
Nov 25 17:03:37 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 17:03:37 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:03:37 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 17:03:37 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:03:37 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 17:03:37 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3_disk">
Nov 25 17:03:37 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:       </source>
Nov 25 17:03:37 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:03:37 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:03:37 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 17:03:37 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3_disk.config">
Nov 25 17:03:37 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:       </source>
Nov 25 17:03:37 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:03:37 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:03:37 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 17:03:37 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:da:b2:95"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:       <target dev="tap29c53aaa-05"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 17:03:37 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3/console.log" append="off"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     <video>
Nov 25 17:03:37 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     </video>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 17:03:37 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 17:03:37 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 17:03:37 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:03:37 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:03:37 compute-0 nova_compute[254092]: </domain>
Nov 25 17:03:37 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 17:03:37 compute-0 nova_compute[254092]: 2025-11-25 17:03:37.052 254096 DEBUG nova.compute.manager [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Preparing to wait for external event network-vif-plugged-29c53aaa-054f-442e-8673-22d0d7fc5f72 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 17:03:37 compute-0 nova_compute[254092]: 2025-11-25 17:03:37.052 254096 DEBUG oslo_concurrency.lockutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:03:37 compute-0 nova_compute[254092]: 2025-11-25 17:03:37.052 254096 DEBUG oslo_concurrency.lockutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:03:37 compute-0 nova_compute[254092]: 2025-11-25 17:03:37.053 254096 DEBUG oslo_concurrency.lockutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:03:37 compute-0 nova_compute[254092]: 2025-11-25 17:03:37.053 254096 DEBUG nova.virt.libvirt.vif [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:03:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-gen-0-527929422',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-gen-0-527929422',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-804365052-gen',id=119,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBENCxp3bTXGsKjC+pXKPCNnWwcQujVQHZ7u1aFGP/yEo2FpTOpo1ONs5vq960JiIu5OhMB87S9/LqqgZ1/j7VB13L8Lrffj2U41nmW0H1eg/FnO9M5/++4mSrfYAP8cVng==',key_name='tempest-TestSecurityGroupsBasicOps-209947328',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f19d2d0776e84c3c99c640c0b7ed3743',ramdisk_id='',reservation_id='r-4a5cegzc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-804365052',owner_user_name='tempest-TestSecurityGroupsBasicOps-804365052-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:03:31Z,user_data=None,user_id='72f29c5d97464520bee19cb128258fd5',uuid=4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "29c53aaa-054f-442e-8673-22d0d7fc5f72", "address": "fa:16:3e:da:b2:95", "network": {"id": "51d8d234-0a41-496f-82c7-0c98aa4761b8", "bridge": "br-int", "label": "tempest-network-smoke--615777126", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29c53aaa-05", "ovs_interfaceid": "29c53aaa-054f-442e-8673-22d0d7fc5f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:03:37 compute-0 nova_compute[254092]: 2025-11-25 17:03:37.053 254096 DEBUG nova.network.os_vif_util [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converting VIF {"id": "29c53aaa-054f-442e-8673-22d0d7fc5f72", "address": "fa:16:3e:da:b2:95", "network": {"id": "51d8d234-0a41-496f-82c7-0c98aa4761b8", "bridge": "br-int", "label": "tempest-network-smoke--615777126", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29c53aaa-05", "ovs_interfaceid": "29c53aaa-054f-442e-8673-22d0d7fc5f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:03:37 compute-0 nova_compute[254092]: 2025-11-25 17:03:37.054 254096 DEBUG nova.network.os_vif_util [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:b2:95,bridge_name='br-int',has_traffic_filtering=True,id=29c53aaa-054f-442e-8673-22d0d7fc5f72,network=Network(51d8d234-0a41-496f-82c7-0c98aa4761b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29c53aaa-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:03:37 compute-0 nova_compute[254092]: 2025-11-25 17:03:37.054 254096 DEBUG os_vif [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:b2:95,bridge_name='br-int',has_traffic_filtering=True,id=29c53aaa-054f-442e-8673-22d0d7fc5f72,network=Network(51d8d234-0a41-496f-82c7-0c98aa4761b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29c53aaa-05') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:03:37 compute-0 nova_compute[254092]: 2025-11-25 17:03:37.055 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:37 compute-0 nova_compute[254092]: 2025-11-25 17:03:37.056 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:03:37 compute-0 nova_compute[254092]: 2025-11-25 17:03:37.056 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:03:37 compute-0 nova_compute[254092]: 2025-11-25 17:03:37.059 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:37 compute-0 nova_compute[254092]: 2025-11-25 17:03:37.060 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap29c53aaa-05, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:03:37 compute-0 nova_compute[254092]: 2025-11-25 17:03:37.060 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap29c53aaa-05, col_values=(('external_ids', {'iface-id': '29c53aaa-054f-442e-8673-22d0d7fc5f72', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:da:b2:95', 'vm-uuid': '4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:03:37 compute-0 NetworkManager[48891]: <info>  [1764090217.0628] manager: (tap29c53aaa-05): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/506)
Nov 25 17:03:37 compute-0 nova_compute[254092]: 2025-11-25 17:03:37.062 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:37 compute-0 nova_compute[254092]: 2025-11-25 17:03:37.065 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:03:37 compute-0 nova_compute[254092]: 2025-11-25 17:03:37.068 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:37 compute-0 nova_compute[254092]: 2025-11-25 17:03:37.068 254096 INFO os_vif [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:b2:95,bridge_name='br-int',has_traffic_filtering=True,id=29c53aaa-054f-442e-8673-22d0d7fc5f72,network=Network(51d8d234-0a41-496f-82c7-0c98aa4761b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29c53aaa-05')
Nov 25 17:03:37 compute-0 nova_compute[254092]: 2025-11-25 17:03:37.116 254096 DEBUG nova.virt.libvirt.driver [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:03:37 compute-0 nova_compute[254092]: 2025-11-25 17:03:37.116 254096 DEBUG nova.virt.libvirt.driver [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:03:37 compute-0 nova_compute[254092]: 2025-11-25 17:03:37.116 254096 DEBUG nova.virt.libvirt.driver [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] No VIF found with MAC fa:16:3e:da:b2:95, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:03:37 compute-0 nova_compute[254092]: 2025-11-25 17:03:37.117 254096 INFO nova.virt.libvirt.driver [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Using config drive
Nov 25 17:03:37 compute-0 nova_compute[254092]: 2025-11-25 17:03:37.137 254096 DEBUG nova.storage.rbd_utils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:03:37 compute-0 ceph-mon[74985]: pgmap v2402: 321 pgs: 321 active+clean; 293 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 281 KiB/s rd, 3.6 MiB/s wr, 74 op/s
Nov 25 17:03:37 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/315091351' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:03:37 compute-0 nova_compute[254092]: 2025-11-25 17:03:37.695 254096 DEBUG nova.compute.manager [req-f5eccd91-bd2a-44e7-a5b7-e0634a942d64 req-f9d4c360-05d5-432e-8a60-6bc0afb4baa9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Received event network-changed-9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:03:37 compute-0 nova_compute[254092]: 2025-11-25 17:03:37.696 254096 DEBUG nova.compute.manager [req-f5eccd91-bd2a-44e7-a5b7-e0634a942d64 req-f9d4c360-05d5-432e-8a60-6bc0afb4baa9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Refreshing instance network info cache due to event network-changed-9fb8b6ba-4534-4320-a88f-3da6cdc1eb28. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:03:37 compute-0 nova_compute[254092]: 2025-11-25 17:03:37.696 254096 DEBUG oslo_concurrency.lockutils [req-f5eccd91-bd2a-44e7-a5b7-e0634a942d64 req-f9d4c360-05d5-432e-8a60-6bc0afb4baa9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-dc02b95b-290f-441d-9b04-957187d0f885" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:03:37 compute-0 nova_compute[254092]: 2025-11-25 17:03:37.697 254096 DEBUG oslo_concurrency.lockutils [req-f5eccd91-bd2a-44e7-a5b7-e0634a942d64 req-f9d4c360-05d5-432e-8a60-6bc0afb4baa9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-dc02b95b-290f-441d-9b04-957187d0f885" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:03:37 compute-0 nova_compute[254092]: 2025-11-25 17:03:37.697 254096 DEBUG nova.network.neutron [req-f5eccd91-bd2a-44e7-a5b7-e0634a942d64 req-f9d4c360-05d5-432e-8a60-6bc0afb4baa9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Refreshing network info cache for port 9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:03:37 compute-0 nova_compute[254092]: 2025-11-25 17:03:37.698 254096 DEBUG nova.network.neutron [req-16e92d0f-d0b1-472d-8821-f3b5280bc4d2 req-7664e066-d38d-4a57-8f1c-74f3acc185cf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Updated VIF entry in instance network info cache for port 29c53aaa-054f-442e-8673-22d0d7fc5f72. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:03:37 compute-0 nova_compute[254092]: 2025-11-25 17:03:37.699 254096 DEBUG nova.network.neutron [req-16e92d0f-d0b1-472d-8821-f3b5280bc4d2 req-7664e066-d38d-4a57-8f1c-74f3acc185cf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Updating instance_info_cache with network_info: [{"id": "29c53aaa-054f-442e-8673-22d0d7fc5f72", "address": "fa:16:3e:da:b2:95", "network": {"id": "51d8d234-0a41-496f-82c7-0c98aa4761b8", "bridge": "br-int", "label": "tempest-network-smoke--615777126", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29c53aaa-05", "ovs_interfaceid": "29c53aaa-054f-442e-8673-22d0d7fc5f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:03:37 compute-0 nova_compute[254092]: 2025-11-25 17:03:37.726 254096 DEBUG oslo_concurrency.lockutils [req-16e92d0f-d0b1-472d-8821-f3b5280bc4d2 req-7664e066-d38d-4a57-8f1c-74f3acc185cf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:03:37 compute-0 nova_compute[254092]: 2025-11-25 17:03:37.729 254096 INFO nova.virt.libvirt.driver [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Creating config drive at /var/lib/nova/instances/4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3/disk.config
Nov 25 17:03:37 compute-0 nova_compute[254092]: 2025-11-25 17:03:37.735 254096 DEBUG oslo_concurrency.processutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7utnzcdy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:03:37 compute-0 nova_compute[254092]: 2025-11-25 17:03:37.878 254096 DEBUG oslo_concurrency.processutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7utnzcdy" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:03:37 compute-0 nova_compute[254092]: 2025-11-25 17:03:37.902 254096 DEBUG nova.storage.rbd_utils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:03:37 compute-0 nova_compute[254092]: 2025-11-25 17:03:37.905 254096 DEBUG oslo_concurrency.processutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3/disk.config 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:03:37 compute-0 nova_compute[254092]: 2025-11-25 17:03:37.979 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:03:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2403: 321 pgs: 321 active+clean; 293 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 281 KiB/s rd, 3.6 MiB/s wr, 73 op/s
Nov 25 17:03:38 compute-0 nova_compute[254092]: 2025-11-25 17:03:38.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:03:38 compute-0 nova_compute[254092]: 2025-11-25 17:03:38.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 17:03:38 compute-0 nova_compute[254092]: 2025-11-25 17:03:38.512 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 17:03:38 compute-0 nova_compute[254092]: 2025-11-25 17:03:38.832 254096 DEBUG oslo_concurrency.processutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3/disk.config 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.927s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:03:38 compute-0 nova_compute[254092]: 2025-11-25 17:03:38.833 254096 INFO nova.virt.libvirt.driver [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Deleting local config drive /var/lib/nova/instances/4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3/disk.config because it was imported into RBD.
Nov 25 17:03:38 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:03:38 compute-0 kernel: tap29c53aaa-05: entered promiscuous mode
Nov 25 17:03:38 compute-0 NetworkManager[48891]: <info>  [1764090218.8759] manager: (tap29c53aaa-05): new Tun device (/org/freedesktop/NetworkManager/Devices/507)
Nov 25 17:03:38 compute-0 ovn_controller[153477]: 2025-11-25T17:03:38Z|01221|binding|INFO|Claiming lport 29c53aaa-054f-442e-8673-22d0d7fc5f72 for this chassis.
Nov 25 17:03:38 compute-0 ovn_controller[153477]: 2025-11-25T17:03:38Z|01222|binding|INFO|29c53aaa-054f-442e-8673-22d0d7fc5f72: Claiming fa:16:3e:da:b2:95 10.100.0.10
Nov 25 17:03:38 compute-0 nova_compute[254092]: 2025-11-25 17:03:38.881 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:38.896 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:b2:95 10.100.0.10'], port_security=['fa:16:3e:da:b2:95 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-51d8d234-0a41-496f-82c7-0c98aa4761b8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f19d2d0776e84c3c99c640c0b7ed3743', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3898aea7-272e-40b9-acc3-c2b5ff1428fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1cc609c-e570-4376-a4c0-1e1c78f929d6, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=29c53aaa-054f-442e-8673-22d0d7fc5f72) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:03:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:38.898 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 29c53aaa-054f-442e-8673-22d0d7fc5f72 in datapath 51d8d234-0a41-496f-82c7-0c98aa4761b8 bound to our chassis
Nov 25 17:03:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:38.899 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 51d8d234-0a41-496f-82c7-0c98aa4761b8
Nov 25 17:03:38 compute-0 ovn_controller[153477]: 2025-11-25T17:03:38Z|01223|binding|INFO|Setting lport 29c53aaa-054f-442e-8673-22d0d7fc5f72 ovn-installed in OVS
Nov 25 17:03:38 compute-0 ovn_controller[153477]: 2025-11-25T17:03:38Z|01224|binding|INFO|Setting lport 29c53aaa-054f-442e-8673-22d0d7fc5f72 up in Southbound
Nov 25 17:03:38 compute-0 nova_compute[254092]: 2025-11-25 17:03:38.915 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:38 compute-0 systemd-machined[216343]: New machine qemu-151-instance-00000077.
Nov 25 17:03:38 compute-0 nova_compute[254092]: 2025-11-25 17:03:38.919 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:38.920 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[abf38955-7d17-4526-8b53-903f2b358026]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:03:38 compute-0 systemd[1]: Started Virtual Machine qemu-151-instance-00000077.
Nov 25 17:03:38 compute-0 systemd-udevd[385197]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:03:38 compute-0 NetworkManager[48891]: <info>  [1764090218.9469] device (tap29c53aaa-05): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:03:38 compute-0 NetworkManager[48891]: <info>  [1764090218.9482] device (tap29c53aaa-05): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:03:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:38.956 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9368d79c-51fb-4349-9cae-9ff762169d5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:03:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:38.959 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[94afebe8-1349-4705-b15b-ea6c014acc6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:03:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:38.994 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[5d03e426-1720-463c-af21-a23809ab832e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:03:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:39.012 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[674b036b-e5e4-4913-a549-c2dc6f437649]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap51d8d234-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7d:9b:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 357], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663946, 'reachable_time': 17868, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 385208, 'error': None, 'target': 'ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:03:39 compute-0 nova_compute[254092]: 2025-11-25 17:03:39.024 254096 DEBUG nova.network.neutron [req-f5eccd91-bd2a-44e7-a5b7-e0634a942d64 req-f9d4c360-05d5-432e-8a60-6bc0afb4baa9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Updated VIF entry in instance network info cache for port 9fb8b6ba-4534-4320-a88f-3da6cdc1eb28. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:03:39 compute-0 nova_compute[254092]: 2025-11-25 17:03:39.024 254096 DEBUG nova.network.neutron [req-f5eccd91-bd2a-44e7-a5b7-e0634a942d64 req-f9d4c360-05d5-432e-8a60-6bc0afb4baa9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Updating instance_info_cache with network_info: [{"id": "9fb8b6ba-4534-4320-a88f-3da6cdc1eb28", "address": "fa:16:3e:c6:a2:cd", "network": {"id": "cd54b59f-8568-4ad8-a75d-4fcdad6f8dce", "bridge": "br-int", "label": "tempest-network-smoke--1847695866", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fb8b6ba-45", "ovs_interfaceid": "9fb8b6ba-4534-4320-a88f-3da6cdc1eb28", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:03:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:39.025 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[24feb474-7a3b-481d-b296-c39a43fd415e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap51d8d234-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663959, 'tstamp': 663959}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 385210, 'error': None, 'target': 'ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap51d8d234-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663963, 'tstamp': 663963}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 385210, 'error': None, 'target': 'ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:03:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:39.026 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap51d8d234-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:03:39 compute-0 nova_compute[254092]: 2025-11-25 17:03:39.028 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:39 compute-0 nova_compute[254092]: 2025-11-25 17:03:39.029 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:39.030 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap51d8d234-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:03:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:39.030 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:03:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:39.030 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap51d8d234-00, col_values=(('external_ids', {'iface-id': '81151907-1dea-47e9-9a64-99f3b4cd12fa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:03:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:39.031 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:03:39 compute-0 nova_compute[254092]: 2025-11-25 17:03:39.116 254096 DEBUG oslo_concurrency.lockutils [req-f5eccd91-bd2a-44e7-a5b7-e0634a942d64 req-f9d4c360-05d5-432e-8a60-6bc0afb4baa9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-dc02b95b-290f-441d-9b04-957187d0f885" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:03:39 compute-0 nova_compute[254092]: 2025-11-25 17:03:39.211 254096 DEBUG nova.compute.manager [req-464ca48e-7a2e-4a51-8837-a7f2e7e4f250 req-edcde14e-3d8f-4db6-bbcc-5798ba92ad1b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Received event network-vif-plugged-29c53aaa-054f-442e-8673-22d0d7fc5f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:03:39 compute-0 nova_compute[254092]: 2025-11-25 17:03:39.211 254096 DEBUG oslo_concurrency.lockutils [req-464ca48e-7a2e-4a51-8837-a7f2e7e4f250 req-edcde14e-3d8f-4db6-bbcc-5798ba92ad1b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:03:39 compute-0 nova_compute[254092]: 2025-11-25 17:03:39.212 254096 DEBUG oslo_concurrency.lockutils [req-464ca48e-7a2e-4a51-8837-a7f2e7e4f250 req-edcde14e-3d8f-4db6-bbcc-5798ba92ad1b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:03:39 compute-0 nova_compute[254092]: 2025-11-25 17:03:39.212 254096 DEBUG oslo_concurrency.lockutils [req-464ca48e-7a2e-4a51-8837-a7f2e7e4f250 req-edcde14e-3d8f-4db6-bbcc-5798ba92ad1b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:03:39 compute-0 nova_compute[254092]: 2025-11-25 17:03:39.212 254096 DEBUG nova.compute.manager [req-464ca48e-7a2e-4a51-8837-a7f2e7e4f250 req-edcde14e-3d8f-4db6-bbcc-5798ba92ad1b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Processing event network-vif-plugged-29c53aaa-054f-442e-8673-22d0d7fc5f72 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 17:03:39 compute-0 ceph-mon[74985]: pgmap v2403: 321 pgs: 321 active+clean; 293 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 281 KiB/s rd, 3.6 MiB/s wr, 73 op/s
Nov 25 17:03:40 compute-0 nova_compute[254092]: 2025-11-25 17:03:40.006 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:03:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:03:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:03:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:03:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:03:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:03:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:03:40
Nov 25 17:03:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:03:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:03:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.control', 'backups', '.rgw.root', 'images', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', '.mgr', 'vms', 'cephfs.cephfs.meta']
Nov 25 17:03:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:03:40 compute-0 nova_compute[254092]: 2025-11-25 17:03:40.273 254096 DEBUG nova.compute.manager [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 17:03:40 compute-0 nova_compute[254092]: 2025-11-25 17:03:40.274 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090220.272911, 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:03:40 compute-0 nova_compute[254092]: 2025-11-25 17:03:40.275 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] VM Started (Lifecycle Event)
Nov 25 17:03:40 compute-0 nova_compute[254092]: 2025-11-25 17:03:40.278 254096 DEBUG nova.virt.libvirt.driver [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 17:03:40 compute-0 nova_compute[254092]: 2025-11-25 17:03:40.281 254096 INFO nova.virt.libvirt.driver [-] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Instance spawned successfully.
Nov 25 17:03:40 compute-0 nova_compute[254092]: 2025-11-25 17:03:40.282 254096 DEBUG nova.virt.libvirt.driver [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 17:03:40 compute-0 nova_compute[254092]: 2025-11-25 17:03:40.299 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:03:40 compute-0 nova_compute[254092]: 2025-11-25 17:03:40.305 254096 DEBUG nova.virt.libvirt.driver [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:03:40 compute-0 nova_compute[254092]: 2025-11-25 17:03:40.306 254096 DEBUG nova.virt.libvirt.driver [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:03:40 compute-0 nova_compute[254092]: 2025-11-25 17:03:40.306 254096 DEBUG nova.virt.libvirt.driver [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:03:40 compute-0 nova_compute[254092]: 2025-11-25 17:03:40.307 254096 DEBUG nova.virt.libvirt.driver [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:03:40 compute-0 nova_compute[254092]: 2025-11-25 17:03:40.308 254096 DEBUG nova.virt.libvirt.driver [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:03:40 compute-0 nova_compute[254092]: 2025-11-25 17:03:40.308 254096 DEBUG nova.virt.libvirt.driver [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:03:40 compute-0 nova_compute[254092]: 2025-11-25 17:03:40.313 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:03:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2404: 321 pgs: 321 active+clean; 293 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 595 KiB/s rd, 3.6 MiB/s wr, 85 op/s
Nov 25 17:03:40 compute-0 nova_compute[254092]: 2025-11-25 17:03:40.353 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:03:40 compute-0 nova_compute[254092]: 2025-11-25 17:03:40.354 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090220.275255, 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:03:40 compute-0 nova_compute[254092]: 2025-11-25 17:03:40.355 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] VM Paused (Lifecycle Event)
Nov 25 17:03:40 compute-0 nova_compute[254092]: 2025-11-25 17:03:40.380 254096 INFO nova.compute.manager [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Took 8.43 seconds to spawn the instance on the hypervisor.
Nov 25 17:03:40 compute-0 nova_compute[254092]: 2025-11-25 17:03:40.381 254096 DEBUG nova.compute.manager [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:03:40 compute-0 nova_compute[254092]: 2025-11-25 17:03:40.382 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:03:40 compute-0 nova_compute[254092]: 2025-11-25 17:03:40.393 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090220.278339, 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:03:40 compute-0 nova_compute[254092]: 2025-11-25 17:03:40.394 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] VM Resumed (Lifecycle Event)
Nov 25 17:03:40 compute-0 nova_compute[254092]: 2025-11-25 17:03:40.419 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:03:40 compute-0 nova_compute[254092]: 2025-11-25 17:03:40.423 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:03:40 compute-0 nova_compute[254092]: 2025-11-25 17:03:40.456 254096 INFO nova.compute.manager [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Took 9.48 seconds to build instance.
Nov 25 17:03:40 compute-0 nova_compute[254092]: 2025-11-25 17:03:40.471 254096 DEBUG oslo_concurrency.lockutils [None req-b577b8e3-6377-4c7c-ace2-1ea9ccc60f55 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:03:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:03:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:03:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:03:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:03:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:03:40 compute-0 nova_compute[254092]: 2025-11-25 17:03:40.512 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:03:40 compute-0 nova_compute[254092]: 2025-11-25 17:03:40.513 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:03:40 compute-0 nova_compute[254092]: 2025-11-25 17:03:40.513 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:03:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:03:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:03:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:03:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:03:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:03:40 compute-0 nova_compute[254092]: 2025-11-25 17:03:40.665 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-73a187fa-5479-4191-bd44-757c3840137a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:03:40 compute-0 nova_compute[254092]: 2025-11-25 17:03:40.666 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-73a187fa-5479-4191-bd44-757c3840137a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:03:40 compute-0 nova_compute[254092]: 2025-11-25 17:03:40.666 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 17:03:40 compute-0 nova_compute[254092]: 2025-11-25 17:03:40.666 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 73a187fa-5479-4191-bd44-757c3840137a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:03:40 compute-0 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 17:03:40 compute-0 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.2 total, 600.0 interval
                                           Cumulative writes: 35K writes, 142K keys, 35K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.03 MB/s
                                           Cumulative WAL: 35K writes, 12K syncs, 2.88 writes per sync, written: 0.13 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2402 writes, 10K keys, 2402 commit groups, 1.0 writes per commit group, ingest: 12.11 MB, 0.02 MB/s
                                           Interval WAL: 2402 writes, 888 syncs, 2.70 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 17:03:41 compute-0 nova_compute[254092]: 2025-11-25 17:03:41.319 254096 DEBUG nova.compute.manager [req-b9002579-0d42-4126-8e02-1fde11d9a7fc req-73f94103-6c17-4650-9424-989b7c250154 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Received event network-vif-plugged-29c53aaa-054f-442e-8673-22d0d7fc5f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:03:41 compute-0 nova_compute[254092]: 2025-11-25 17:03:41.319 254096 DEBUG oslo_concurrency.lockutils [req-b9002579-0d42-4126-8e02-1fde11d9a7fc req-73f94103-6c17-4650-9424-989b7c250154 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:03:41 compute-0 nova_compute[254092]: 2025-11-25 17:03:41.319 254096 DEBUG oslo_concurrency.lockutils [req-b9002579-0d42-4126-8e02-1fde11d9a7fc req-73f94103-6c17-4650-9424-989b7c250154 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:03:41 compute-0 nova_compute[254092]: 2025-11-25 17:03:41.320 254096 DEBUG oslo_concurrency.lockutils [req-b9002579-0d42-4126-8e02-1fde11d9a7fc req-73f94103-6c17-4650-9424-989b7c250154 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:03:41 compute-0 nova_compute[254092]: 2025-11-25 17:03:41.320 254096 DEBUG nova.compute.manager [req-b9002579-0d42-4126-8e02-1fde11d9a7fc req-73f94103-6c17-4650-9424-989b7c250154 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] No waiting events found dispatching network-vif-plugged-29c53aaa-054f-442e-8673-22d0d7fc5f72 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:03:41 compute-0 nova_compute[254092]: 2025-11-25 17:03:41.320 254096 WARNING nova.compute.manager [req-b9002579-0d42-4126-8e02-1fde11d9a7fc req-73f94103-6c17-4650-9424-989b7c250154 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Received unexpected event network-vif-plugged-29c53aaa-054f-442e-8673-22d0d7fc5f72 for instance with vm_state active and task_state None.
Nov 25 17:03:41 compute-0 ceph-mon[74985]: pgmap v2404: 321 pgs: 321 active+clean; 293 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 595 KiB/s rd, 3.6 MiB/s wr, 85 op/s
Nov 25 17:03:42 compute-0 nova_compute[254092]: 2025-11-25 17:03:42.065 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2405: 321 pgs: 321 active+clean; 293 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.3 MiB/s wr, 135 op/s
Nov 25 17:03:43 compute-0 podman[385254]: 2025-11-25 17:03:43.657549217 +0000 UTC m=+0.070143195 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 17:03:43 compute-0 podman[385253]: 2025-11-25 17:03:43.677505134 +0000 UTC m=+0.089517527 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 17:03:43 compute-0 podman[385255]: 2025-11-25 17:03:43.707272801 +0000 UTC m=+0.105582278 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Nov 25 17:03:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:03:43 compute-0 ceph-mon[74985]: pgmap v2405: 321 pgs: 321 active+clean; 293 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.3 MiB/s wr, 135 op/s
Nov 25 17:03:43 compute-0 nova_compute[254092]: 2025-11-25 17:03:43.957 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Updating instance_info_cache with network_info: [{"id": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "address": "fa:16:3e:d7:b5:b4", "network": {"id": "cd54b59f-8568-4ad8-a75d-4fcdad6f8dce", "bridge": "br-int", "label": "tempest-network-smoke--1847695866", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0512c6a-fb", "ovs_interfaceid": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:03:43 compute-0 nova_compute[254092]: 2025-11-25 17:03:43.968 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-73a187fa-5479-4191-bd44-757c3840137a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:03:43 compute-0 nova_compute[254092]: 2025-11-25 17:03:43.968 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 17:03:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2406: 321 pgs: 321 active+clean; 293 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Nov 25 17:03:44 compute-0 nova_compute[254092]: 2025-11-25 17:03:44.947 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:03:45 compute-0 nova_compute[254092]: 2025-11-25 17:03:45.009 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:45 compute-0 nova_compute[254092]: 2025-11-25 17:03:45.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:03:45 compute-0 ceph-mon[74985]: pgmap v2406: 321 pgs: 321 active+clean; 293 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Nov 25 17:03:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2407: 321 pgs: 321 active+clean; 293 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 175 op/s
Nov 25 17:03:47 compute-0 nova_compute[254092]: 2025-11-25 17:03:47.069 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:47 compute-0 ceph-mon[74985]: pgmap v2407: 321 pgs: 321 active+clean; 293 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 175 op/s
Nov 25 17:03:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2408: 321 pgs: 321 active+clean; 293 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 15 KiB/s wr, 129 op/s
Nov 25 17:03:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:03:49 compute-0 ceph-mon[74985]: pgmap v2408: 321 pgs: 321 active+clean; 293 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 15 KiB/s wr, 129 op/s
Nov 25 17:03:50 compute-0 ovn_controller[153477]: 2025-11-25T17:03:50Z|00134|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c6:a2:cd 10.100.0.8
Nov 25 17:03:50 compute-0 ovn_controller[153477]: 2025-11-25T17:03:50Z|00135|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c6:a2:cd 10.100.0.8
Nov 25 17:03:50 compute-0 nova_compute[254092]: 2025-11-25 17:03:50.014 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:50 compute-0 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 17:03:50 compute-0 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4201.2 total, 600.0 interval
                                           Cumulative writes: 37K writes, 145K keys, 37K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s
                                           Cumulative WAL: 37K writes, 13K syncs, 2.86 writes per sync, written: 0.14 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2826 writes, 11K keys, 2826 commit groups, 1.0 writes per commit group, ingest: 15.40 MB, 0.03 MB/s
                                           Interval WAL: 2826 writes, 994 syncs, 2.84 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 17:03:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2409: 321 pgs: 321 active+clean; 313 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 946 KiB/s wr, 137 op/s
Nov 25 17:03:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:03:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:03:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:03:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:03:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002392791153871199 of space, bias 1.0, pg target 0.7178373461613596 quantized to 32 (current 32)
Nov 25 17:03:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:03:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:03:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:03:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:03:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:03:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 17:03:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:03:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:03:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:03:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:03:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:03:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:03:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:03:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:03:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:03:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:03:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:03:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:03:52 compute-0 ceph-mon[74985]: pgmap v2409: 321 pgs: 321 active+clean; 313 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 946 KiB/s wr, 137 op/s
Nov 25 17:03:52 compute-0 nova_compute[254092]: 2025-11-25 17:03:52.073 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2410: 321 pgs: 321 active+clean; 326 MiB data, 978 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.2 MiB/s wr, 180 op/s
Nov 25 17:03:53 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:03:54 compute-0 ceph-mon[74985]: pgmap v2410: 321 pgs: 321 active+clean; 326 MiB data, 978 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.2 MiB/s wr, 180 op/s
Nov 25 17:03:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2411: 321 pgs: 321 active+clean; 326 MiB data, 978 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 130 op/s
Nov 25 17:03:55 compute-0 nova_compute[254092]: 2025-11-25 17:03:55.014 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:03:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2759510647' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:03:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:03:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2759510647' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:03:55 compute-0 nova_compute[254092]: 2025-11-25 17:03:55.515 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:03:55 compute-0 nova_compute[254092]: 2025-11-25 17:03:55.515 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 17:03:56 compute-0 ceph-mon[74985]: pgmap v2411: 321 pgs: 321 active+clean; 326 MiB data, 978 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 130 op/s
Nov 25 17:03:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/2759510647' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:03:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/2759510647' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:03:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2412: 321 pgs: 321 active+clean; 347 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.2 MiB/s wr, 172 op/s
Nov 25 17:03:56 compute-0 ovn_controller[153477]: 2025-11-25T17:03:56Z|00136|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:da:b2:95 10.100.0.10
Nov 25 17:03:56 compute-0 ovn_controller[153477]: 2025-11-25T17:03:56Z|00137|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:da:b2:95 10.100.0.10
Nov 25 17:03:56 compute-0 nova_compute[254092]: 2025-11-25 17:03:56.713 254096 INFO nova.compute.manager [None req-12fe7c42-fa57-4a08-ae91-b950af7f9d4c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Get console output
Nov 25 17:03:56 compute-0 nova_compute[254092]: 2025-11-25 17:03:56.724 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 17:03:57 compute-0 nova_compute[254092]: 2025-11-25 17:03:57.019 254096 DEBUG oslo_concurrency.lockutils [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "dc02b95b-290f-441d-9b04-957187d0f885" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:03:57 compute-0 nova_compute[254092]: 2025-11-25 17:03:57.020 254096 DEBUG oslo_concurrency.lockutils [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "dc02b95b-290f-441d-9b04-957187d0f885" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:03:57 compute-0 nova_compute[254092]: 2025-11-25 17:03:57.022 254096 DEBUG oslo_concurrency.lockutils [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "dc02b95b-290f-441d-9b04-957187d0f885-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:03:57 compute-0 nova_compute[254092]: 2025-11-25 17:03:57.022 254096 DEBUG oslo_concurrency.lockutils [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "dc02b95b-290f-441d-9b04-957187d0f885-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:03:57 compute-0 nova_compute[254092]: 2025-11-25 17:03:57.022 254096 DEBUG oslo_concurrency.lockutils [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "dc02b95b-290f-441d-9b04-957187d0f885-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:03:57 compute-0 nova_compute[254092]: 2025-11-25 17:03:57.023 254096 INFO nova.compute.manager [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Terminating instance
Nov 25 17:03:57 compute-0 nova_compute[254092]: 2025-11-25 17:03:57.024 254096 DEBUG nova.compute.manager [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 17:03:57 compute-0 nova_compute[254092]: 2025-11-25 17:03:57.076 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:57 compute-0 kernel: tap9fb8b6ba-45 (unregistering): left promiscuous mode
Nov 25 17:03:57 compute-0 NetworkManager[48891]: <info>  [1764090237.0957] device (tap9fb8b6ba-45): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:03:57 compute-0 nova_compute[254092]: 2025-11-25 17:03:57.115 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:57 compute-0 ovn_controller[153477]: 2025-11-25T17:03:57Z|01225|binding|INFO|Releasing lport 9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 from this chassis (sb_readonly=0)
Nov 25 17:03:57 compute-0 ovn_controller[153477]: 2025-11-25T17:03:57Z|01226|binding|INFO|Setting lport 9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 down in Southbound
Nov 25 17:03:57 compute-0 ovn_controller[153477]: 2025-11-25T17:03:57Z|01227|binding|INFO|Removing iface tap9fb8b6ba-45 ovn-installed in OVS
Nov 25 17:03:57 compute-0 nova_compute[254092]: 2025-11-25 17:03:57.121 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:57.128 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:a2:cd 10.100.0.8'], port_security=['fa:16:3e:c6:a2:cd 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'dc02b95b-290f-441d-9b04-957187d0f885', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a1a0257b-ea79-4747-8da2-0da26a4a2e35', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.235'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=78a09f35-40c2-481d-a3e7-9cd1e21550b9, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=9fb8b6ba-4534-4320-a88f-3da6cdc1eb28) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:03:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:57.130 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 in datapath cd54b59f-8568-4ad8-a75d-4fcdad6f8dce unbound from our chassis
Nov 25 17:03:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:57.131 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cd54b59f-8568-4ad8-a75d-4fcdad6f8dce
Nov 25 17:03:57 compute-0 nova_compute[254092]: 2025-11-25 17:03:57.141 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:57.155 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c3d9ea93-c158-42d4-ba63-23e3e9aefd92]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:03:57 compute-0 systemd[1]: machine-qemu\x2d150\x2dinstance\x2d00000076.scope: Deactivated successfully.
Nov 25 17:03:57 compute-0 systemd[1]: machine-qemu\x2d150\x2dinstance\x2d00000076.scope: Consumed 13.991s CPU time.
Nov 25 17:03:57 compute-0 systemd-machined[216343]: Machine qemu-150-instance-00000076 terminated.
Nov 25 17:03:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:57.189 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[84bea833-03fc-468b-8fa9-69c76049a9f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:03:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:57.192 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[01e63c89-be24-4e31-80df-c066770258a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:03:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:57.222 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[bb36f38f-9ce9-46d0-9a14-7f2d4fc37a4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:03:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:57.244 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1b940767-8bdc-4ed3-82a5-42035816dbe4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcd54b59f-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:34:f1:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 355], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663534, 'reachable_time': 36364, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 385326, 'error': None, 'target': 'ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:03:57 compute-0 nova_compute[254092]: 2025-11-25 17:03:57.247 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:57 compute-0 nova_compute[254092]: 2025-11-25 17:03:57.257 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:57.262 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9da0db21-7f35-4e99-a0b5-f40148be7939]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcd54b59f-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663545, 'tstamp': 663545}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 385331, 'error': None, 'target': 'ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcd54b59f-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663547, 'tstamp': 663547}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 385331, 'error': None, 'target': 'ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:03:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:57.263 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcd54b59f-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:03:57 compute-0 nova_compute[254092]: 2025-11-25 17:03:57.264 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:57 compute-0 nova_compute[254092]: 2025-11-25 17:03:57.266 254096 INFO nova.virt.libvirt.driver [-] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Instance destroyed successfully.
Nov 25 17:03:57 compute-0 nova_compute[254092]: 2025-11-25 17:03:57.267 254096 DEBUG nova.objects.instance [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'resources' on Instance uuid dc02b95b-290f-441d-9b04-957187d0f885 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:03:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:57.271 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcd54b59f-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:03:57 compute-0 nova_compute[254092]: 2025-11-25 17:03:57.271 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:57.277 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:03:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:57.278 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcd54b59f-80, col_values=(('external_ids', {'iface-id': 'd6826306-6b17-44f3-a972-7d5089250616'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:03:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:03:57.278 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:03:57 compute-0 nova_compute[254092]: 2025-11-25 17:03:57.279 254096 DEBUG nova.virt.libvirt.vif [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:03:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1163677477',display_name='tempest-TestNetworkBasicOps-server-1163677477',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1163677477',id=118,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCxvG1kVT8rj7rz3z1G56tHhBp7SuNAkkLNuv/fKw8COC5VTTwy6Y98cgqFvLD/dMqbJMho2xMxaZGbc9qY1ZWoyzmuMb54S1JTVXIQHKR2yNSi3mjSBeFFRS3qe8724pw==',key_name='tempest-TestNetworkBasicOps-1314414902',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:03:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-akatof1w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:03:35Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=dc02b95b-290f-441d-9b04-957187d0f885,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9fb8b6ba-4534-4320-a88f-3da6cdc1eb28", "address": "fa:16:3e:c6:a2:cd", "network": {"id": "cd54b59f-8568-4ad8-a75d-4fcdad6f8dce", "bridge": "br-int", "label": "tempest-network-smoke--1847695866", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fb8b6ba-45", "ovs_interfaceid": "9fb8b6ba-4534-4320-a88f-3da6cdc1eb28", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:03:57 compute-0 nova_compute[254092]: 2025-11-25 17:03:57.280 254096 DEBUG nova.network.os_vif_util [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "9fb8b6ba-4534-4320-a88f-3da6cdc1eb28", "address": "fa:16:3e:c6:a2:cd", "network": {"id": "cd54b59f-8568-4ad8-a75d-4fcdad6f8dce", "bridge": "br-int", "label": "tempest-network-smoke--1847695866", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fb8b6ba-45", "ovs_interfaceid": "9fb8b6ba-4534-4320-a88f-3da6cdc1eb28", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:03:57 compute-0 nova_compute[254092]: 2025-11-25 17:03:57.281 254096 DEBUG nova.network.os_vif_util [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c6:a2:cd,bridge_name='br-int',has_traffic_filtering=True,id=9fb8b6ba-4534-4320-a88f-3da6cdc1eb28,network=Network(cd54b59f-8568-4ad8-a75d-4fcdad6f8dce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fb8b6ba-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:03:57 compute-0 nova_compute[254092]: 2025-11-25 17:03:57.282 254096 DEBUG os_vif [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c6:a2:cd,bridge_name='br-int',has_traffic_filtering=True,id=9fb8b6ba-4534-4320-a88f-3da6cdc1eb28,network=Network(cd54b59f-8568-4ad8-a75d-4fcdad6f8dce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fb8b6ba-45') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:03:57 compute-0 nova_compute[254092]: 2025-11-25 17:03:57.286 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:57 compute-0 nova_compute[254092]: 2025-11-25 17:03:57.286 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9fb8b6ba-45, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:03:57 compute-0 nova_compute[254092]: 2025-11-25 17:03:57.290 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:03:57 compute-0 nova_compute[254092]: 2025-11-25 17:03:57.293 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:03:57 compute-0 nova_compute[254092]: 2025-11-25 17:03:57.295 254096 INFO os_vif [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c6:a2:cd,bridge_name='br-int',has_traffic_filtering=True,id=9fb8b6ba-4534-4320-a88f-3da6cdc1eb28,network=Network(cd54b59f-8568-4ad8-a75d-4fcdad6f8dce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fb8b6ba-45')
Nov 25 17:03:57 compute-0 nova_compute[254092]: 2025-11-25 17:03:57.654 254096 DEBUG nova.compute.manager [req-7905c43c-af5b-4039-ba65-3c7f33426815 req-5f490ca9-af25-48e4-b6d8-ba8c4b20ba32 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Received event network-vif-unplugged-9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:03:57 compute-0 nova_compute[254092]: 2025-11-25 17:03:57.654 254096 DEBUG oslo_concurrency.lockutils [req-7905c43c-af5b-4039-ba65-3c7f33426815 req-5f490ca9-af25-48e4-b6d8-ba8c4b20ba32 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "dc02b95b-290f-441d-9b04-957187d0f885-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:03:57 compute-0 nova_compute[254092]: 2025-11-25 17:03:57.654 254096 DEBUG oslo_concurrency.lockutils [req-7905c43c-af5b-4039-ba65-3c7f33426815 req-5f490ca9-af25-48e4-b6d8-ba8c4b20ba32 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "dc02b95b-290f-441d-9b04-957187d0f885-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:03:57 compute-0 nova_compute[254092]: 2025-11-25 17:03:57.654 254096 DEBUG oslo_concurrency.lockutils [req-7905c43c-af5b-4039-ba65-3c7f33426815 req-5f490ca9-af25-48e4-b6d8-ba8c4b20ba32 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "dc02b95b-290f-441d-9b04-957187d0f885-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:03:57 compute-0 nova_compute[254092]: 2025-11-25 17:03:57.655 254096 DEBUG nova.compute.manager [req-7905c43c-af5b-4039-ba65-3c7f33426815 req-5f490ca9-af25-48e4-b6d8-ba8c4b20ba32 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] No waiting events found dispatching network-vif-unplugged-9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:03:57 compute-0 nova_compute[254092]: 2025-11-25 17:03:57.655 254096 DEBUG nova.compute.manager [req-7905c43c-af5b-4039-ba65-3c7f33426815 req-5f490ca9-af25-48e4-b6d8-ba8c4b20ba32 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Received event network-vif-unplugged-9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 17:03:57 compute-0 nova_compute[254092]: 2025-11-25 17:03:57.716 254096 INFO nova.virt.libvirt.driver [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Deleting instance files /var/lib/nova/instances/dc02b95b-290f-441d-9b04-957187d0f885_del
Nov 25 17:03:57 compute-0 nova_compute[254092]: 2025-11-25 17:03:57.718 254096 INFO nova.virt.libvirt.driver [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Deletion of /var/lib/nova/instances/dc02b95b-290f-441d-9b04-957187d0f885_del complete
Nov 25 17:03:57 compute-0 nova_compute[254092]: 2025-11-25 17:03:57.761 254096 INFO nova.compute.manager [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Took 0.74 seconds to destroy the instance on the hypervisor.
Nov 25 17:03:57 compute-0 nova_compute[254092]: 2025-11-25 17:03:57.762 254096 DEBUG oslo.service.loopingcall [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 17:03:57 compute-0 nova_compute[254092]: 2025-11-25 17:03:57.762 254096 DEBUG nova.compute.manager [-] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 17:03:57 compute-0 nova_compute[254092]: 2025-11-25 17:03:57.762 254096 DEBUG nova.network.neutron [-] [instance: dc02b95b-290f-441d-9b04-957187d0f885] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 17:03:58 compute-0 ceph-mon[74985]: pgmap v2412: 321 pgs: 321 active+clean; 347 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.2 MiB/s wr, 172 op/s
Nov 25 17:03:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2413: 321 pgs: 321 active+clean; 347 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 470 KiB/s rd, 4.2 MiB/s wr, 105 op/s
Nov 25 17:03:58 compute-0 nova_compute[254092]: 2025-11-25 17:03:58.676 254096 DEBUG nova.network.neutron [-] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:03:58 compute-0 nova_compute[254092]: 2025-11-25 17:03:58.693 254096 INFO nova.compute.manager [-] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Took 0.93 seconds to deallocate network for instance.
Nov 25 17:03:58 compute-0 nova_compute[254092]: 2025-11-25 17:03:58.744 254096 DEBUG oslo_concurrency.lockutils [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:03:58 compute-0 nova_compute[254092]: 2025-11-25 17:03:58.744 254096 DEBUG oslo_concurrency.lockutils [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:03:58 compute-0 nova_compute[254092]: 2025-11-25 17:03:58.785 254096 DEBUG nova.compute.manager [req-c7d3b3f7-c52b-4be3-a8c0-3303ecb25179 req-11907621-a425-4ce0-8ba1-ef51b149efc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Received event network-vif-deleted-9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:03:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:03:58 compute-0 nova_compute[254092]: 2025-11-25 17:03:58.867 254096 DEBUG oslo_concurrency.processutils [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:03:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:03:59 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2856563403' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:03:59 compute-0 nova_compute[254092]: 2025-11-25 17:03:59.336 254096 DEBUG oslo_concurrency.processutils [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:03:59 compute-0 nova_compute[254092]: 2025-11-25 17:03:59.346 254096 DEBUG nova.compute.provider_tree [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:03:59 compute-0 nova_compute[254092]: 2025-11-25 17:03:59.364 254096 DEBUG nova.scheduler.client.report [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:03:59 compute-0 nova_compute[254092]: 2025-11-25 17:03:59.389 254096 DEBUG oslo_concurrency.lockutils [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:03:59 compute-0 nova_compute[254092]: 2025-11-25 17:03:59.422 254096 INFO nova.scheduler.client.report [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Deleted allocations for instance dc02b95b-290f-441d-9b04-957187d0f885
Nov 25 17:03:59 compute-0 nova_compute[254092]: 2025-11-25 17:03:59.507 254096 DEBUG oslo_concurrency.lockutils [None req-b6a43684-53e8-43a7-8f19-d6d7351aba52 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "dc02b95b-290f-441d-9b04-957187d0f885" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.487s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:03:59 compute-0 nova_compute[254092]: 2025-11-25 17:03:59.761 254096 DEBUG nova.compute.manager [req-6c29f852-5121-44c9-96b5-cfc3f660dd2e req-12f665d2-8c5f-4538-a59f-9c7f962740a8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Received event network-vif-plugged-9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:03:59 compute-0 nova_compute[254092]: 2025-11-25 17:03:59.762 254096 DEBUG oslo_concurrency.lockutils [req-6c29f852-5121-44c9-96b5-cfc3f660dd2e req-12f665d2-8c5f-4538-a59f-9c7f962740a8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "dc02b95b-290f-441d-9b04-957187d0f885-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:03:59 compute-0 nova_compute[254092]: 2025-11-25 17:03:59.762 254096 DEBUG oslo_concurrency.lockutils [req-6c29f852-5121-44c9-96b5-cfc3f660dd2e req-12f665d2-8c5f-4538-a59f-9c7f962740a8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "dc02b95b-290f-441d-9b04-957187d0f885-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:03:59 compute-0 nova_compute[254092]: 2025-11-25 17:03:59.763 254096 DEBUG oslo_concurrency.lockutils [req-6c29f852-5121-44c9-96b5-cfc3f660dd2e req-12f665d2-8c5f-4538-a59f-9c7f962740a8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "dc02b95b-290f-441d-9b04-957187d0f885-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:03:59 compute-0 nova_compute[254092]: 2025-11-25 17:03:59.763 254096 DEBUG nova.compute.manager [req-6c29f852-5121-44c9-96b5-cfc3f660dd2e req-12f665d2-8c5f-4538-a59f-9c7f962740a8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] No waiting events found dispatching network-vif-plugged-9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:03:59 compute-0 nova_compute[254092]: 2025-11-25 17:03:59.764 254096 WARNING nova.compute.manager [req-6c29f852-5121-44c9-96b5-cfc3f660dd2e req-12f665d2-8c5f-4538-a59f-9c7f962740a8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Received unexpected event network-vif-plugged-9fb8b6ba-4534-4320-a88f-3da6cdc1eb28 for instance with vm_state deleted and task_state None.
Nov 25 17:04:00 compute-0 nova_compute[254092]: 2025-11-25 17:04:00.017 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:00 compute-0 ceph-mon[74985]: pgmap v2413: 321 pgs: 321 active+clean; 347 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 470 KiB/s rd, 4.2 MiB/s wr, 105 op/s
Nov 25 17:04:00 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2856563403' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:04:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2414: 321 pgs: 321 active+clean; 320 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 531 KiB/s rd, 4.2 MiB/s wr, 130 op/s
Nov 25 17:04:01 compute-0 nova_compute[254092]: 2025-11-25 17:04:01.304 254096 DEBUG oslo_concurrency.lockutils [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "73a187fa-5479-4191-bd44-757c3840137a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:04:01 compute-0 nova_compute[254092]: 2025-11-25 17:04:01.305 254096 DEBUG oslo_concurrency.lockutils [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "73a187fa-5479-4191-bd44-757c3840137a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:04:01 compute-0 nova_compute[254092]: 2025-11-25 17:04:01.305 254096 DEBUG oslo_concurrency.lockutils [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "73a187fa-5479-4191-bd44-757c3840137a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:04:01 compute-0 nova_compute[254092]: 2025-11-25 17:04:01.306 254096 DEBUG oslo_concurrency.lockutils [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "73a187fa-5479-4191-bd44-757c3840137a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:04:01 compute-0 nova_compute[254092]: 2025-11-25 17:04:01.307 254096 DEBUG oslo_concurrency.lockutils [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "73a187fa-5479-4191-bd44-757c3840137a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:04:01 compute-0 nova_compute[254092]: 2025-11-25 17:04:01.310 254096 INFO nova.compute.manager [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Terminating instance
Nov 25 17:04:01 compute-0 nova_compute[254092]: 2025-11-25 17:04:01.313 254096 DEBUG nova.compute.manager [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 17:04:01 compute-0 kernel: tapb0512c6a-fb (unregistering): left promiscuous mode
Nov 25 17:04:01 compute-0 NetworkManager[48891]: <info>  [1764090241.3839] device (tapb0512c6a-fb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:04:01 compute-0 ovn_controller[153477]: 2025-11-25T17:04:01Z|01228|binding|INFO|Releasing lport b0512c6a-fbc4-4639-8508-e6493d18bd3a from this chassis (sb_readonly=0)
Nov 25 17:04:01 compute-0 ovn_controller[153477]: 2025-11-25T17:04:01Z|01229|binding|INFO|Setting lport b0512c6a-fbc4-4639-8508-e6493d18bd3a down in Southbound
Nov 25 17:04:01 compute-0 nova_compute[254092]: 2025-11-25 17:04:01.391 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:01 compute-0 ovn_controller[153477]: 2025-11-25T17:04:01Z|01230|binding|INFO|Removing iface tapb0512c6a-fb ovn-installed in OVS
Nov 25 17:04:01 compute-0 nova_compute[254092]: 2025-11-25 17:04:01.396 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:01.402 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:b5:b4 10.100.0.9'], port_security=['fa:16:3e:d7:b5:b4 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '73a187fa-5479-4191-bd44-757c3840137a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '4', 'neutron:security_group_ids': '05b9ee3b-4cbe-486c-b386-9a71c1c7373a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=78a09f35-40c2-481d-a3e7-9cd1e21550b9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=b0512c6a-fbc4-4639-8508-e6493d18bd3a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:04:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:01.403 163338 INFO neutron.agent.ovn.metadata.agent [-] Port b0512c6a-fbc4-4639-8508-e6493d18bd3a in datapath cd54b59f-8568-4ad8-a75d-4fcdad6f8dce unbound from our chassis
Nov 25 17:04:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:01.405 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cd54b59f-8568-4ad8-a75d-4fcdad6f8dce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 17:04:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:01.406 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[73088e11-a1ee-40c6-9349-7c45ff21d1af]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:01.407 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce namespace which is not needed anymore
Nov 25 17:04:01 compute-0 nova_compute[254092]: 2025-11-25 17:04:01.416 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:01 compute-0 systemd[1]: machine-qemu\x2d148\x2dinstance\x2d00000074.scope: Deactivated successfully.
Nov 25 17:04:01 compute-0 systemd[1]: machine-qemu\x2d148\x2dinstance\x2d00000074.scope: Consumed 15.256s CPU time.
Nov 25 17:04:01 compute-0 systemd-machined[216343]: Machine qemu-148-instance-00000074 terminated.
Nov 25 17:04:01 compute-0 nova_compute[254092]: 2025-11-25 17:04:01.564 254096 INFO nova.virt.libvirt.driver [-] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Instance destroyed successfully.
Nov 25 17:04:01 compute-0 nova_compute[254092]: 2025-11-25 17:04:01.564 254096 DEBUG nova.objects.instance [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'resources' on Instance uuid 73a187fa-5479-4191-bd44-757c3840137a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:04:01 compute-0 neutron-haproxy-ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce[383201]: [NOTICE]   (383205) : haproxy version is 2.8.14-c23fe91
Nov 25 17:04:01 compute-0 neutron-haproxy-ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce[383201]: [NOTICE]   (383205) : path to executable is /usr/sbin/haproxy
Nov 25 17:04:01 compute-0 neutron-haproxy-ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce[383201]: [WARNING]  (383205) : Exiting Master process...
Nov 25 17:04:01 compute-0 neutron-haproxy-ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce[383201]: [WARNING]  (383205) : Exiting Master process...
Nov 25 17:04:01 compute-0 neutron-haproxy-ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce[383201]: [ALERT]    (383205) : Current worker (383207) exited with code 143 (Terminated)
Nov 25 17:04:01 compute-0 neutron-haproxy-ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce[383201]: [WARNING]  (383205) : All workers exited. Exiting... (0)
Nov 25 17:04:01 compute-0 systemd[1]: libpod-1d84d91536685470a3976870ef163cfcc0e7223c9cafe7c5d6341cd4e34ce434.scope: Deactivated successfully.
Nov 25 17:04:01 compute-0 podman[385400]: 2025-11-25 17:04:01.579016709 +0000 UTC m=+0.055697339 container died 1d84d91536685470a3976870ef163cfcc0e7223c9cafe7c5d6341cd4e34ce434 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 17:04:01 compute-0 nova_compute[254092]: 2025-11-25 17:04:01.579 254096 DEBUG nova.virt.libvirt.vif [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:02:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-365721910',display_name='tempest-TestNetworkBasicOps-server-365721910',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-365721910',id=116,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL8fF/LQWIeXhdQwSKSLAErz4jvUPWKjw4L1GZb+ASjyhqtlkSjIxhUDwHlkKBv0qHWvbUkdKHkhYl9JuEVV2LarQZvIoe1QUEsDx05YVfl0dpyKfWcSmCOAyR6fZFjEqA==',key_name='tempest-TestNetworkBasicOps-1895434798',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:02:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-5bv1kvw6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:02:58Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=73a187fa-5479-4191-bd44-757c3840137a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "address": "fa:16:3e:d7:b5:b4", "network": {"id": "cd54b59f-8568-4ad8-a75d-4fcdad6f8dce", "bridge": "br-int", "label": "tempest-network-smoke--1847695866", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0512c6a-fb", "ovs_interfaceid": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:04:01 compute-0 nova_compute[254092]: 2025-11-25 17:04:01.580 254096 DEBUG nova.network.os_vif_util [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "address": "fa:16:3e:d7:b5:b4", "network": {"id": "cd54b59f-8568-4ad8-a75d-4fcdad6f8dce", "bridge": "br-int", "label": "tempest-network-smoke--1847695866", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0512c6a-fb", "ovs_interfaceid": "b0512c6a-fbc4-4639-8508-e6493d18bd3a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:04:01 compute-0 nova_compute[254092]: 2025-11-25 17:04:01.580 254096 DEBUG nova.network.os_vif_util [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d7:b5:b4,bridge_name='br-int',has_traffic_filtering=True,id=b0512c6a-fbc4-4639-8508-e6493d18bd3a,network=Network(cd54b59f-8568-4ad8-a75d-4fcdad6f8dce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0512c6a-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:04:01 compute-0 nova_compute[254092]: 2025-11-25 17:04:01.581 254096 DEBUG os_vif [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d7:b5:b4,bridge_name='br-int',has_traffic_filtering=True,id=b0512c6a-fbc4-4639-8508-e6493d18bd3a,network=Network(cd54b59f-8568-4ad8-a75d-4fcdad6f8dce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0512c6a-fb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:04:01 compute-0 nova_compute[254092]: 2025-11-25 17:04:01.582 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:01 compute-0 nova_compute[254092]: 2025-11-25 17:04:01.583 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb0512c6a-fb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:04:01 compute-0 nova_compute[254092]: 2025-11-25 17:04:01.584 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:01 compute-0 nova_compute[254092]: 2025-11-25 17:04:01.586 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:04:01 compute-0 nova_compute[254092]: 2025-11-25 17:04:01.587 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:01 compute-0 nova_compute[254092]: 2025-11-25 17:04:01.589 254096 INFO os_vif [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d7:b5:b4,bridge_name='br-int',has_traffic_filtering=True,id=b0512c6a-fbc4-4639-8508-e6493d18bd3a,network=Network(cd54b59f-8568-4ad8-a75d-4fcdad6f8dce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0512c6a-fb')
Nov 25 17:04:01 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1d84d91536685470a3976870ef163cfcc0e7223c9cafe7c5d6341cd4e34ce434-userdata-shm.mount: Deactivated successfully.
Nov 25 17:04:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-e893ac8ad73aa3d08cc59807e1b727426905ed7800c4aa1c5d54d074cb489584-merged.mount: Deactivated successfully.
Nov 25 17:04:01 compute-0 podman[385400]: 2025-11-25 17:04:01.633444421 +0000 UTC m=+0.110125031 container cleanup 1d84d91536685470a3976870ef163cfcc0e7223c9cafe7c5d6341cd4e34ce434 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 17:04:01 compute-0 systemd[1]: libpod-conmon-1d84d91536685470a3976870ef163cfcc0e7223c9cafe7c5d6341cd4e34ce434.scope: Deactivated successfully.
Nov 25 17:04:01 compute-0 podman[385456]: 2025-11-25 17:04:01.719690497 +0000 UTC m=+0.059101262 container remove 1d84d91536685470a3976870ef163cfcc0e7223c9cafe7c5d6341cd4e34ce434 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 17:04:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:01.727 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cb5af08c-4c65-4ee0-adaa-a2c1cc47e4c0]: (4, ('Tue Nov 25 05:04:01 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce (1d84d91536685470a3976870ef163cfcc0e7223c9cafe7c5d6341cd4e34ce434)\n1d84d91536685470a3976870ef163cfcc0e7223c9cafe7c5d6341cd4e34ce434\nTue Nov 25 05:04:01 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce (1d84d91536685470a3976870ef163cfcc0e7223c9cafe7c5d6341cd4e34ce434)\n1d84d91536685470a3976870ef163cfcc0e7223c9cafe7c5d6341cd4e34ce434\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:01.731 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6b3f0b04-280a-444a-b3ea-dbf2cf782dbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:01.734 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcd54b59f-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:04:01 compute-0 nova_compute[254092]: 2025-11-25 17:04:01.736 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:01 compute-0 kernel: tapcd54b59f-80: left promiscuous mode
Nov 25 17:04:01 compute-0 nova_compute[254092]: 2025-11-25 17:04:01.752 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:01.755 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9ffcf4c2-3fc3-4980-98cc-8f09cc22cec9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:01.770 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ce149889-9a03-41b8-9567-73dbd591466f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:01.772 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cae6b5f9-ba2b-4b7e-826c-1ff13d35c5e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:01.793 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[05feeacc-913d-4bda-9208-4bfddd077957]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663527, 'reachable_time': 28426, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 385475, 'error': None, 'target': 'ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:01 compute-0 systemd[1]: run-netns-ovnmeta\x2dcd54b59f\x2d8568\x2d4ad8\x2da75d\x2d4fcdad6f8dce.mount: Deactivated successfully.
Nov 25 17:04:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:01.797 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cd54b59f-8568-4ad8-a75d-4fcdad6f8dce deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 17:04:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:01.797 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[a4571518-4fc3-4986-90eb-0c552d28493e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:01 compute-0 nova_compute[254092]: 2025-11-25 17:04:01.911 254096 DEBUG nova.compute.manager [req-4643ddd2-eff7-4bf9-89fb-3345af4d607f req-717160d1-8293-494b-b783-f88934407a90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Received event network-vif-unplugged-b0512c6a-fbc4-4639-8508-e6493d18bd3a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:04:01 compute-0 nova_compute[254092]: 2025-11-25 17:04:01.913 254096 DEBUG oslo_concurrency.lockutils [req-4643ddd2-eff7-4bf9-89fb-3345af4d607f req-717160d1-8293-494b-b783-f88934407a90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "73a187fa-5479-4191-bd44-757c3840137a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:04:01 compute-0 nova_compute[254092]: 2025-11-25 17:04:01.914 254096 DEBUG oslo_concurrency.lockutils [req-4643ddd2-eff7-4bf9-89fb-3345af4d607f req-717160d1-8293-494b-b783-f88934407a90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "73a187fa-5479-4191-bd44-757c3840137a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:04:01 compute-0 nova_compute[254092]: 2025-11-25 17:04:01.914 254096 DEBUG oslo_concurrency.lockutils [req-4643ddd2-eff7-4bf9-89fb-3345af4d607f req-717160d1-8293-494b-b783-f88934407a90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "73a187fa-5479-4191-bd44-757c3840137a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:04:01 compute-0 nova_compute[254092]: 2025-11-25 17:04:01.914 254096 DEBUG nova.compute.manager [req-4643ddd2-eff7-4bf9-89fb-3345af4d607f req-717160d1-8293-494b-b783-f88934407a90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] No waiting events found dispatching network-vif-unplugged-b0512c6a-fbc4-4639-8508-e6493d18bd3a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:04:01 compute-0 nova_compute[254092]: 2025-11-25 17:04:01.914 254096 DEBUG nova.compute.manager [req-4643ddd2-eff7-4bf9-89fb-3345af4d607f req-717160d1-8293-494b-b783-f88934407a90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Received event network-vif-unplugged-b0512c6a-fbc4-4639-8508-e6493d18bd3a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 17:04:01 compute-0 nova_compute[254092]: 2025-11-25 17:04:01.915 254096 DEBUG nova.compute.manager [req-4643ddd2-eff7-4bf9-89fb-3345af4d607f req-717160d1-8293-494b-b783-f88934407a90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Received event network-vif-plugged-b0512c6a-fbc4-4639-8508-e6493d18bd3a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:04:01 compute-0 nova_compute[254092]: 2025-11-25 17:04:01.915 254096 DEBUG oslo_concurrency.lockutils [req-4643ddd2-eff7-4bf9-89fb-3345af4d607f req-717160d1-8293-494b-b783-f88934407a90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "73a187fa-5479-4191-bd44-757c3840137a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:04:01 compute-0 nova_compute[254092]: 2025-11-25 17:04:01.915 254096 DEBUG oslo_concurrency.lockutils [req-4643ddd2-eff7-4bf9-89fb-3345af4d607f req-717160d1-8293-494b-b783-f88934407a90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "73a187fa-5479-4191-bd44-757c3840137a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:04:01 compute-0 nova_compute[254092]: 2025-11-25 17:04:01.915 254096 DEBUG oslo_concurrency.lockutils [req-4643ddd2-eff7-4bf9-89fb-3345af4d607f req-717160d1-8293-494b-b783-f88934407a90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "73a187fa-5479-4191-bd44-757c3840137a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:04:01 compute-0 nova_compute[254092]: 2025-11-25 17:04:01.915 254096 DEBUG nova.compute.manager [req-4643ddd2-eff7-4bf9-89fb-3345af4d607f req-717160d1-8293-494b-b783-f88934407a90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] No waiting events found dispatching network-vif-plugged-b0512c6a-fbc4-4639-8508-e6493d18bd3a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:04:01 compute-0 nova_compute[254092]: 2025-11-25 17:04:01.915 254096 WARNING nova.compute.manager [req-4643ddd2-eff7-4bf9-89fb-3345af4d607f req-717160d1-8293-494b-b783-f88934407a90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Received unexpected event network-vif-plugged-b0512c6a-fbc4-4639-8508-e6493d18bd3a for instance with vm_state active and task_state deleting.
Nov 25 17:04:01 compute-0 nova_compute[254092]: 2025-11-25 17:04:01.960 254096 INFO nova.virt.libvirt.driver [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Deleting instance files /var/lib/nova/instances/73a187fa-5479-4191-bd44-757c3840137a_del
Nov 25 17:04:01 compute-0 nova_compute[254092]: 2025-11-25 17:04:01.962 254096 INFO nova.virt.libvirt.driver [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Deletion of /var/lib/nova/instances/73a187fa-5479-4191-bd44-757c3840137a_del complete
Nov 25 17:04:02 compute-0 nova_compute[254092]: 2025-11-25 17:04:02.021 254096 INFO nova.compute.manager [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Took 0.71 seconds to destroy the instance on the hypervisor.
Nov 25 17:04:02 compute-0 nova_compute[254092]: 2025-11-25 17:04:02.022 254096 DEBUG oslo.service.loopingcall [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 17:04:02 compute-0 nova_compute[254092]: 2025-11-25 17:04:02.022 254096 DEBUG nova.compute.manager [-] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 17:04:02 compute-0 nova_compute[254092]: 2025-11-25 17:04:02.022 254096 DEBUG nova.network.neutron [-] [instance: 73a187fa-5479-4191-bd44-757c3840137a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 17:04:02 compute-0 ceph-mon[74985]: pgmap v2414: 321 pgs: 321 active+clean; 320 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 531 KiB/s rd, 4.2 MiB/s wr, 130 op/s
Nov 25 17:04:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2415: 321 pgs: 321 active+clean; 279 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 582 KiB/s rd, 3.4 MiB/s wr, 147 op/s
Nov 25 17:04:02 compute-0 nova_compute[254092]: 2025-11-25 17:04:02.801 254096 DEBUG nova.network.neutron [-] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:04:02 compute-0 nova_compute[254092]: 2025-11-25 17:04:02.821 254096 INFO nova.compute.manager [-] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Took 0.80 seconds to deallocate network for instance.
Nov 25 17:04:02 compute-0 nova_compute[254092]: 2025-11-25 17:04:02.871 254096 DEBUG oslo_concurrency.lockutils [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:04:02 compute-0 nova_compute[254092]: 2025-11-25 17:04:02.872 254096 DEBUG oslo_concurrency.lockutils [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:04:02 compute-0 nova_compute[254092]: 2025-11-25 17:04:02.886 254096 DEBUG nova.compute.manager [req-0dafa6df-9821-41c6-a14a-3bff348c6a28 req-aa840026-3e9b-4eac-800c-7f2212c7762a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Received event network-vif-deleted-b0512c6a-fbc4-4639-8508-e6493d18bd3a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:04:02 compute-0 nova_compute[254092]: 2025-11-25 17:04:02.912 254096 DEBUG oslo_concurrency.lockutils [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:04:02 compute-0 nova_compute[254092]: 2025-11-25 17:04:02.912 254096 DEBUG oslo_concurrency.lockutils [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:04:02 compute-0 nova_compute[254092]: 2025-11-25 17:04:02.912 254096 DEBUG oslo_concurrency.lockutils [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:04:02 compute-0 nova_compute[254092]: 2025-11-25 17:04:02.913 254096 DEBUG oslo_concurrency.lockutils [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:04:02 compute-0 nova_compute[254092]: 2025-11-25 17:04:02.913 254096 DEBUG oslo_concurrency.lockutils [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:04:02 compute-0 nova_compute[254092]: 2025-11-25 17:04:02.914 254096 INFO nova.compute.manager [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Terminating instance
Nov 25 17:04:02 compute-0 nova_compute[254092]: 2025-11-25 17:04:02.915 254096 DEBUG nova.compute.manager [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 17:04:02 compute-0 nova_compute[254092]: 2025-11-25 17:04:02.947 254096 DEBUG oslo_concurrency.processutils [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:04:02 compute-0 kernel: tap29c53aaa-05 (unregistering): left promiscuous mode
Nov 25 17:04:02 compute-0 NetworkManager[48891]: <info>  [1764090242.9914] device (tap29c53aaa-05): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:04:02 compute-0 ovn_controller[153477]: 2025-11-25T17:04:02Z|01231|binding|INFO|Releasing lport 29c53aaa-054f-442e-8673-22d0d7fc5f72 from this chassis (sb_readonly=0)
Nov 25 17:04:02 compute-0 ovn_controller[153477]: 2025-11-25T17:04:02Z|01232|binding|INFO|Setting lport 29c53aaa-054f-442e-8673-22d0d7fc5f72 down in Southbound
Nov 25 17:04:02 compute-0 ovn_controller[153477]: 2025-11-25T17:04:02Z|01233|binding|INFO|Removing iface tap29c53aaa-05 ovn-installed in OVS
Nov 25 17:04:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:03.006 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:b2:95 10.100.0.10'], port_security=['fa:16:3e:da:b2:95 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-51d8d234-0a41-496f-82c7-0c98aa4761b8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f19d2d0776e84c3c99c640c0b7ed3743', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3898aea7-272e-40b9-acc3-c2b5ff1428fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1cc609c-e570-4376-a4c0-1e1c78f929d6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=29c53aaa-054f-442e-8673-22d0d7fc5f72) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:04:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:03.008 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 29c53aaa-054f-442e-8673-22d0d7fc5f72 in datapath 51d8d234-0a41-496f-82c7-0c98aa4761b8 unbound from our chassis
Nov 25 17:04:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:03.010 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 51d8d234-0a41-496f-82c7-0c98aa4761b8
Nov 25 17:04:03 compute-0 nova_compute[254092]: 2025-11-25 17:04:03.015 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:03 compute-0 nova_compute[254092]: 2025-11-25 17:04:03.030 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:03.042 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a5ef6ff1-0f3d-4a74-9b55-cf408418da6b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:03.082 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f2ab2c2d-9180-4857-8110-c95cadddb7c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:03 compute-0 systemd[1]: machine-qemu\x2d151\x2dinstance\x2d00000077.scope: Deactivated successfully.
Nov 25 17:04:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:03.085 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[02b44783-b0d7-404e-864e-42f0172a9651]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:03 compute-0 systemd[1]: machine-qemu\x2d151\x2dinstance\x2d00000077.scope: Consumed 17.727s CPU time.
Nov 25 17:04:03 compute-0 systemd-machined[216343]: Machine qemu-151-instance-00000077 terminated.
Nov 25 17:04:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:03.121 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1cab360f-0d09-47e7-954f-a75a4993c727]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:03.140 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1b5ec2cb-4fac-45c7-8af1-bc0571da2eac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap51d8d234-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7d:9b:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 357], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663946, 'reachable_time': 17868, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 385489, 'error': None, 'target': 'ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:03 compute-0 nova_compute[254092]: 2025-11-25 17:04:03.154 254096 INFO nova.virt.libvirt.driver [-] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Instance destroyed successfully.
Nov 25 17:04:03 compute-0 nova_compute[254092]: 2025-11-25 17:04:03.155 254096 DEBUG nova.objects.instance [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lazy-loading 'resources' on Instance uuid 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:04:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:03.161 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b365abbd-dd9a-43f0-8ac5-e6b79d889dfb]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap51d8d234-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663959, 'tstamp': 663959}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 385515, 'error': None, 'target': 'ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap51d8d234-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663963, 'tstamp': 663963}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 385515, 'error': None, 'target': 'ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:03.163 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap51d8d234-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:04:03 compute-0 nova_compute[254092]: 2025-11-25 17:04:03.166 254096 DEBUG nova.virt.libvirt.vif [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:03:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-gen-0-527929422',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-gen-0-527929422',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-804365052-gen',id=119,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBENCxp3bTXGsKjC+pXKPCNnWwcQujVQHZ7u1aFGP/yEo2FpTOpo1ONs5vq960JiIu5OhMB87S9/LqqgZ1/j7VB13L8Lrffj2U41nmW0H1eg/FnO9M5/++4mSrfYAP8cVng==',key_name='tempest-TestSecurityGroupsBasicOps-209947328',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:03:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f19d2d0776e84c3c99c640c0b7ed3743',ramdisk_id='',reservation_id='r-4a5cegzc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-804365052',owner_user_name='tempest-TestSecurityGroupsBasicOps-804365052-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:03:40Z,user_data=None,user_id='72f29c5d97464520bee19cb128258fd5',uuid=4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "29c53aaa-054f-442e-8673-22d0d7fc5f72", "address": "fa:16:3e:da:b2:95", "network": {"id": "51d8d234-0a41-496f-82c7-0c98aa4761b8", "bridge": "br-int", "label": "tempest-network-smoke--615777126", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29c53aaa-05", "ovs_interfaceid": "29c53aaa-054f-442e-8673-22d0d7fc5f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:04:03 compute-0 nova_compute[254092]: 2025-11-25 17:04:03.167 254096 DEBUG nova.network.os_vif_util [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converting VIF {"id": "29c53aaa-054f-442e-8673-22d0d7fc5f72", "address": "fa:16:3e:da:b2:95", "network": {"id": "51d8d234-0a41-496f-82c7-0c98aa4761b8", "bridge": "br-int", "label": "tempest-network-smoke--615777126", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29c53aaa-05", "ovs_interfaceid": "29c53aaa-054f-442e-8673-22d0d7fc5f72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:04:03 compute-0 nova_compute[254092]: 2025-11-25 17:04:03.168 254096 DEBUG nova.network.os_vif_util [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:b2:95,bridge_name='br-int',has_traffic_filtering=True,id=29c53aaa-054f-442e-8673-22d0d7fc5f72,network=Network(51d8d234-0a41-496f-82c7-0c98aa4761b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29c53aaa-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:04:03 compute-0 nova_compute[254092]: 2025-11-25 17:04:03.168 254096 DEBUG os_vif [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:b2:95,bridge_name='br-int',has_traffic_filtering=True,id=29c53aaa-054f-442e-8673-22d0d7fc5f72,network=Network(51d8d234-0a41-496f-82c7-0c98aa4761b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29c53aaa-05') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:04:03 compute-0 nova_compute[254092]: 2025-11-25 17:04:03.170 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:03 compute-0 nova_compute[254092]: 2025-11-25 17:04:03.170 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap29c53aaa-05, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:04:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:03.170 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap51d8d234-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:04:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:03.171 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:04:03 compute-0 nova_compute[254092]: 2025-11-25 17:04:03.171 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:03.171 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap51d8d234-00, col_values=(('external_ids', {'iface-id': '81151907-1dea-47e9-9a64-99f3b4cd12fa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:04:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:03.172 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:04:03 compute-0 nova_compute[254092]: 2025-11-25 17:04:03.173 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:04:03 compute-0 nova_compute[254092]: 2025-11-25 17:04:03.175 254096 INFO os_vif [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:b2:95,bridge_name='br-int',has_traffic_filtering=True,id=29c53aaa-054f-442e-8673-22d0d7fc5f72,network=Network(51d8d234-0a41-496f-82c7-0c98aa4761b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29c53aaa-05')
Nov 25 17:04:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:04:03 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1752532862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:04:03 compute-0 nova_compute[254092]: 2025-11-25 17:04:03.460 254096 DEBUG oslo_concurrency.processutils [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:04:03 compute-0 nova_compute[254092]: 2025-11-25 17:04:03.467 254096 DEBUG nova.compute.provider_tree [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:04:03 compute-0 nova_compute[254092]: 2025-11-25 17:04:03.488 254096 DEBUG nova.scheduler.client.report [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:04:03 compute-0 nova_compute[254092]: 2025-11-25 17:04:03.496 254096 INFO nova.virt.libvirt.driver [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Deleting instance files /var/lib/nova/instances/4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3_del
Nov 25 17:04:03 compute-0 nova_compute[254092]: 2025-11-25 17:04:03.496 254096 INFO nova.virt.libvirt.driver [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Deletion of /var/lib/nova/instances/4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3_del complete
Nov 25 17:04:03 compute-0 nova_compute[254092]: 2025-11-25 17:04:03.526 254096 DEBUG oslo_concurrency.lockutils [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:04:03 compute-0 nova_compute[254092]: 2025-11-25 17:04:03.567 254096 INFO nova.compute.manager [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Took 0.65 seconds to destroy the instance on the hypervisor.
Nov 25 17:04:03 compute-0 nova_compute[254092]: 2025-11-25 17:04:03.567 254096 DEBUG oslo.service.loopingcall [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 17:04:03 compute-0 nova_compute[254092]: 2025-11-25 17:04:03.568 254096 DEBUG nova.compute.manager [-] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 17:04:03 compute-0 nova_compute[254092]: 2025-11-25 17:04:03.568 254096 DEBUG nova.network.neutron [-] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 17:04:03 compute-0 nova_compute[254092]: 2025-11-25 17:04:03.572 254096 INFO nova.scheduler.client.report [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Deleted allocations for instance 73a187fa-5479-4191-bd44-757c3840137a
Nov 25 17:04:03 compute-0 nova_compute[254092]: 2025-11-25 17:04:03.648 254096 DEBUG oslo_concurrency.lockutils [None req-c3aa0103-0b5d-4d91-86bd-24ef7d3afe45 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "73a187fa-5479-4191-bd44-757c3840137a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.344s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:04:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:04:04 compute-0 nova_compute[254092]: 2025-11-25 17:04:04.010 254096 DEBUG nova.compute.manager [req-f755ada4-0c61-4daa-8f44-40270061a682 req-3cfb9089-5bee-4faf-bb2f-04eb471aabc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Received event network-vif-unplugged-29c53aaa-054f-442e-8673-22d0d7fc5f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:04:04 compute-0 nova_compute[254092]: 2025-11-25 17:04:04.010 254096 DEBUG oslo_concurrency.lockutils [req-f755ada4-0c61-4daa-8f44-40270061a682 req-3cfb9089-5bee-4faf-bb2f-04eb471aabc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:04:04 compute-0 nova_compute[254092]: 2025-11-25 17:04:04.010 254096 DEBUG oslo_concurrency.lockutils [req-f755ada4-0c61-4daa-8f44-40270061a682 req-3cfb9089-5bee-4faf-bb2f-04eb471aabc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:04:04 compute-0 nova_compute[254092]: 2025-11-25 17:04:04.010 254096 DEBUG oslo_concurrency.lockutils [req-f755ada4-0c61-4daa-8f44-40270061a682 req-3cfb9089-5bee-4faf-bb2f-04eb471aabc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:04:04 compute-0 nova_compute[254092]: 2025-11-25 17:04:04.011 254096 DEBUG nova.compute.manager [req-f755ada4-0c61-4daa-8f44-40270061a682 req-3cfb9089-5bee-4faf-bb2f-04eb471aabc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] No waiting events found dispatching network-vif-unplugged-29c53aaa-054f-442e-8673-22d0d7fc5f72 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:04:04 compute-0 nova_compute[254092]: 2025-11-25 17:04:04.011 254096 DEBUG nova.compute.manager [req-f755ada4-0c61-4daa-8f44-40270061a682 req-3cfb9089-5bee-4faf-bb2f-04eb471aabc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Received event network-vif-unplugged-29c53aaa-054f-442e-8673-22d0d7fc5f72 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 17:04:04 compute-0 nova_compute[254092]: 2025-11-25 17:04:04.011 254096 DEBUG nova.compute.manager [req-f755ada4-0c61-4daa-8f44-40270061a682 req-3cfb9089-5bee-4faf-bb2f-04eb471aabc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Received event network-vif-plugged-29c53aaa-054f-442e-8673-22d0d7fc5f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:04:04 compute-0 nova_compute[254092]: 2025-11-25 17:04:04.011 254096 DEBUG oslo_concurrency.lockutils [req-f755ada4-0c61-4daa-8f44-40270061a682 req-3cfb9089-5bee-4faf-bb2f-04eb471aabc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:04:04 compute-0 nova_compute[254092]: 2025-11-25 17:04:04.011 254096 DEBUG oslo_concurrency.lockutils [req-f755ada4-0c61-4daa-8f44-40270061a682 req-3cfb9089-5bee-4faf-bb2f-04eb471aabc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:04:04 compute-0 nova_compute[254092]: 2025-11-25 17:04:04.011 254096 DEBUG oslo_concurrency.lockutils [req-f755ada4-0c61-4daa-8f44-40270061a682 req-3cfb9089-5bee-4faf-bb2f-04eb471aabc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:04:04 compute-0 nova_compute[254092]: 2025-11-25 17:04:04.011 254096 DEBUG nova.compute.manager [req-f755ada4-0c61-4daa-8f44-40270061a682 req-3cfb9089-5bee-4faf-bb2f-04eb471aabc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] No waiting events found dispatching network-vif-plugged-29c53aaa-054f-442e-8673-22d0d7fc5f72 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:04:04 compute-0 nova_compute[254092]: 2025-11-25 17:04:04.012 254096 WARNING nova.compute.manager [req-f755ada4-0c61-4daa-8f44-40270061a682 req-3cfb9089-5bee-4faf-bb2f-04eb471aabc4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Received unexpected event network-vif-plugged-29c53aaa-054f-442e-8673-22d0d7fc5f72 for instance with vm_state active and task_state deleting.
Nov 25 17:04:04 compute-0 ceph-mon[74985]: pgmap v2415: 321 pgs: 321 active+clean; 279 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 582 KiB/s rd, 3.4 MiB/s wr, 147 op/s
Nov 25 17:04:04 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1752532862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:04:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2416: 321 pgs: 321 active+clean; 279 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 354 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Nov 25 17:04:05 compute-0 nova_compute[254092]: 2025-11-25 17:04:05.044 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:05 compute-0 nova_compute[254092]: 2025-11-25 17:04:05.561 254096 DEBUG nova.network.neutron [-] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:04:05 compute-0 nova_compute[254092]: 2025-11-25 17:04:05.578 254096 INFO nova.compute.manager [-] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Took 2.01 seconds to deallocate network for instance.
Nov 25 17:04:05 compute-0 nova_compute[254092]: 2025-11-25 17:04:05.656 254096 DEBUG nova.compute.manager [req-0f43215a-db42-496b-87e7-0ac2a0034c23 req-c26d33b8-3599-4fed-9944-abb3651dac19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Received event network-vif-deleted-29c53aaa-054f-442e-8673-22d0d7fc5f72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:04:05 compute-0 nova_compute[254092]: 2025-11-25 17:04:05.660 254096 DEBUG oslo_concurrency.lockutils [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:04:05 compute-0 nova_compute[254092]: 2025-11-25 17:04:05.660 254096 DEBUG oslo_concurrency.lockutils [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:04:05 compute-0 nova_compute[254092]: 2025-11-25 17:04:05.726 254096 DEBUG oslo_concurrency.processutils [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:04:06 compute-0 ceph-mon[74985]: pgmap v2416: 321 pgs: 321 active+clean; 279 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 354 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Nov 25 17:04:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:04:06 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2596410790' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:04:06 compute-0 nova_compute[254092]: 2025-11-25 17:04:06.192 254096 DEBUG oslo_concurrency.processutils [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:04:06 compute-0 nova_compute[254092]: 2025-11-25 17:04:06.199 254096 DEBUG nova.compute.provider_tree [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:04:06 compute-0 nova_compute[254092]: 2025-11-25 17:04:06.216 254096 DEBUG nova.scheduler.client.report [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:04:06 compute-0 nova_compute[254092]: 2025-11-25 17:04:06.244 254096 DEBUG oslo_concurrency.lockutils [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:04:06 compute-0 nova_compute[254092]: 2025-11-25 17:04:06.273 254096 INFO nova.scheduler.client.report [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Deleted allocations for instance 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3
Nov 25 17:04:06 compute-0 nova_compute[254092]: 2025-11-25 17:04:06.344 254096 DEBUG oslo_concurrency.lockutils [None req-a5e596b4-3966-41ea-88d0-06df5376e29b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.431s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:04:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2417: 321 pgs: 321 active+clean; 121 MiB data, 876 MiB used, 59 GiB / 60 GiB avail; 392 KiB/s rd, 2.2 MiB/s wr, 151 op/s
Nov 25 17:04:07 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2596410790' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:04:07 compute-0 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 17:04:07 compute-0 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4201.5 total, 600.0 interval
                                           Cumulative writes: 32K writes, 127K keys, 32K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.03 MB/s
                                           Cumulative WAL: 32K writes, 11K syncs, 2.85 writes per sync, written: 0.12 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2559 writes, 10K keys, 2559 commit groups, 1.0 writes per commit group, ingest: 12.29 MB, 0.02 MB/s
                                           Interval WAL: 2559 writes, 986 syncs, 2.60 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 17:04:07 compute-0 nova_compute[254092]: 2025-11-25 17:04:07.755 254096 DEBUG nova.compute.manager [req-c2e7a8f3-eca7-4e5c-8aed-22a5f3f147bf req-390ef716-1b0d-49b0-9b7c-254bb33fd1dc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Received event network-changed-142675a5-3c37-4e43-9f80-e8fedd63f3cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:04:07 compute-0 nova_compute[254092]: 2025-11-25 17:04:07.756 254096 DEBUG nova.compute.manager [req-c2e7a8f3-eca7-4e5c-8aed-22a5f3f147bf req-390ef716-1b0d-49b0-9b7c-254bb33fd1dc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Refreshing instance network info cache due to event network-changed-142675a5-3c37-4e43-9f80-e8fedd63f3cf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:04:07 compute-0 nova_compute[254092]: 2025-11-25 17:04:07.756 254096 DEBUG oslo_concurrency.lockutils [req-c2e7a8f3-eca7-4e5c-8aed-22a5f3f147bf req-390ef716-1b0d-49b0-9b7c-254bb33fd1dc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-12deb2c6-31fb-4186-940b-8131e43ea3f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:04:07 compute-0 nova_compute[254092]: 2025-11-25 17:04:07.757 254096 DEBUG oslo_concurrency.lockutils [req-c2e7a8f3-eca7-4e5c-8aed-22a5f3f147bf req-390ef716-1b0d-49b0-9b7c-254bb33fd1dc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-12deb2c6-31fb-4186-940b-8131e43ea3f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:04:07 compute-0 nova_compute[254092]: 2025-11-25 17:04:07.758 254096 DEBUG nova.network.neutron [req-c2e7a8f3-eca7-4e5c-8aed-22a5f3f147bf req-390ef716-1b0d-49b0-9b7c-254bb33fd1dc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Refreshing network info cache for port 142675a5-3c37-4e43-9f80-e8fedd63f3cf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:04:07 compute-0 nova_compute[254092]: 2025-11-25 17:04:07.831 254096 DEBUG oslo_concurrency.lockutils [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "12deb2c6-31fb-4186-940b-8131e43ea3f8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:04:07 compute-0 nova_compute[254092]: 2025-11-25 17:04:07.832 254096 DEBUG oslo_concurrency.lockutils [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "12deb2c6-31fb-4186-940b-8131e43ea3f8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:04:07 compute-0 nova_compute[254092]: 2025-11-25 17:04:07.833 254096 DEBUG oslo_concurrency.lockutils [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "12deb2c6-31fb-4186-940b-8131e43ea3f8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:04:07 compute-0 nova_compute[254092]: 2025-11-25 17:04:07.834 254096 DEBUG oslo_concurrency.lockutils [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "12deb2c6-31fb-4186-940b-8131e43ea3f8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:04:07 compute-0 nova_compute[254092]: 2025-11-25 17:04:07.834 254096 DEBUG oslo_concurrency.lockutils [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "12deb2c6-31fb-4186-940b-8131e43ea3f8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:04:07 compute-0 nova_compute[254092]: 2025-11-25 17:04:07.836 254096 INFO nova.compute.manager [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Terminating instance
Nov 25 17:04:07 compute-0 nova_compute[254092]: 2025-11-25 17:04:07.838 254096 DEBUG nova.compute.manager [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 17:04:07 compute-0 kernel: tap142675a5-3c (unregistering): left promiscuous mode
Nov 25 17:04:07 compute-0 NetworkManager[48891]: <info>  [1764090247.8904] device (tap142675a5-3c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:04:07 compute-0 ovn_controller[153477]: 2025-11-25T17:04:07Z|01234|binding|INFO|Releasing lport 142675a5-3c37-4e43-9f80-e8fedd63f3cf from this chassis (sb_readonly=0)
Nov 25 17:04:07 compute-0 ovn_controller[153477]: 2025-11-25T17:04:07Z|01235|binding|INFO|Setting lport 142675a5-3c37-4e43-9f80-e8fedd63f3cf down in Southbound
Nov 25 17:04:07 compute-0 nova_compute[254092]: 2025-11-25 17:04:07.897 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:07 compute-0 ovn_controller[153477]: 2025-11-25T17:04:07Z|01236|binding|INFO|Removing iface tap142675a5-3c ovn-installed in OVS
Nov 25 17:04:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:07.909 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:64:18 10.100.0.3'], port_security=['fa:16:3e:4d:64:18 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '12deb2c6-31fb-4186-940b-8131e43ea3f8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-51d8d234-0a41-496f-82c7-0c98aa4761b8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f19d2d0776e84c3c99c640c0b7ed3743', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0d070883-7c27-4b8a-ba4e-1b0864814a05 3898aea7-272e-40b9-acc3-c2b5ff1428fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1cc609c-e570-4376-a4c0-1e1c78f929d6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=142675a5-3c37-4e43-9f80-e8fedd63f3cf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:04:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:07.910 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 142675a5-3c37-4e43-9f80-e8fedd63f3cf in datapath 51d8d234-0a41-496f-82c7-0c98aa4761b8 unbound from our chassis
Nov 25 17:04:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:07.912 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 51d8d234-0a41-496f-82c7-0c98aa4761b8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 17:04:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:07.912 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[93cd8707-c374-41fb-8440-1c09bde72b37]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:07.913 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8 namespace which is not needed anymore
Nov 25 17:04:07 compute-0 nova_compute[254092]: 2025-11-25 17:04:07.923 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:07 compute-0 systemd[1]: machine-qemu\x2d149\x2dinstance\x2d00000075.scope: Deactivated successfully.
Nov 25 17:04:07 compute-0 systemd[1]: machine-qemu\x2d149\x2dinstance\x2d00000075.scope: Consumed 15.976s CPU time.
Nov 25 17:04:07 compute-0 systemd-machined[216343]: Machine qemu-149-instance-00000075 terminated.
Nov 25 17:04:08 compute-0 neutron-haproxy-ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8[383476]: [NOTICE]   (383480) : haproxy version is 2.8.14-c23fe91
Nov 25 17:04:08 compute-0 neutron-haproxy-ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8[383476]: [NOTICE]   (383480) : path to executable is /usr/sbin/haproxy
Nov 25 17:04:08 compute-0 neutron-haproxy-ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8[383476]: [WARNING]  (383480) : Exiting Master process...
Nov 25 17:04:08 compute-0 neutron-haproxy-ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8[383476]: [WARNING]  (383480) : Exiting Master process...
Nov 25 17:04:08 compute-0 neutron-haproxy-ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8[383476]: [ALERT]    (383480) : Current worker (383482) exited with code 143 (Terminated)
Nov 25 17:04:08 compute-0 neutron-haproxy-ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8[383476]: [WARNING]  (383480) : All workers exited. Exiting... (0)
Nov 25 17:04:08 compute-0 systemd[1]: libpod-f95f4e65c56a1c3e5424fbff018a47f094cf6312f6583c3abb2fac951bec3087.scope: Deactivated successfully.
Nov 25 17:04:08 compute-0 nova_compute[254092]: 2025-11-25 17:04:08.065 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:08 compute-0 podman[385589]: 2025-11-25 17:04:08.06836162 +0000 UTC m=+0.046432974 container died f95f4e65c56a1c3e5424fbff018a47f094cf6312f6583c3abb2fac951bec3087 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 17:04:08 compute-0 nova_compute[254092]: 2025-11-25 17:04:08.073 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:08 compute-0 nova_compute[254092]: 2025-11-25 17:04:08.085 254096 INFO nova.virt.libvirt.driver [-] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Instance destroyed successfully.
Nov 25 17:04:08 compute-0 nova_compute[254092]: 2025-11-25 17:04:08.086 254096 DEBUG nova.objects.instance [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lazy-loading 'resources' on Instance uuid 12deb2c6-31fb-4186-940b-8131e43ea3f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:04:08 compute-0 nova_compute[254092]: 2025-11-25 17:04:08.098 254096 DEBUG nova.virt.libvirt.vif [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:02:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1828112716',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1828112716',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-804365052-acc',id=117,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBENCxp3bTXGsKjC+pXKPCNnWwcQujVQHZ7u1aFGP/yEo2FpTOpo1ONs5vq960JiIu5OhMB87S9/LqqgZ1/j7VB13L8Lrffj2U41nmW0H1eg/FnO9M5/++4mSrfYAP8cVng==',key_name='tempest-TestSecurityGroupsBasicOps-209947328',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:03:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f19d2d0776e84c3c99c640c0b7ed3743',ramdisk_id='',reservation_id='r-ez50uxgr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-804365052',owner_user_name='tempest-TestSecurityGroupsBasicOps-804365052-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:03:03Z,user_data=None,user_id='72f29c5d97464520bee19cb128258fd5',uuid=12deb2c6-31fb-4186-940b-8131e43ea3f8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "142675a5-3c37-4e43-9f80-e8fedd63f3cf", "address": "fa:16:3e:4d:64:18", "network": {"id": "51d8d234-0a41-496f-82c7-0c98aa4761b8", "bridge": "br-int", "label": "tempest-network-smoke--615777126", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap142675a5-3c", "ovs_interfaceid": "142675a5-3c37-4e43-9f80-e8fedd63f3cf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:04:08 compute-0 nova_compute[254092]: 2025-11-25 17:04:08.098 254096 DEBUG nova.network.os_vif_util [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converting VIF {"id": "142675a5-3c37-4e43-9f80-e8fedd63f3cf", "address": "fa:16:3e:4d:64:18", "network": {"id": "51d8d234-0a41-496f-82c7-0c98aa4761b8", "bridge": "br-int", "label": "tempest-network-smoke--615777126", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap142675a5-3c", "ovs_interfaceid": "142675a5-3c37-4e43-9f80-e8fedd63f3cf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:04:08 compute-0 nova_compute[254092]: 2025-11-25 17:04:08.099 254096 DEBUG nova.network.os_vif_util [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4d:64:18,bridge_name='br-int',has_traffic_filtering=True,id=142675a5-3c37-4e43-9f80-e8fedd63f3cf,network=Network(51d8d234-0a41-496f-82c7-0c98aa4761b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap142675a5-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:04:08 compute-0 nova_compute[254092]: 2025-11-25 17:04:08.099 254096 DEBUG os_vif [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4d:64:18,bridge_name='br-int',has_traffic_filtering=True,id=142675a5-3c37-4e43-9f80-e8fedd63f3cf,network=Network(51d8d234-0a41-496f-82c7-0c98aa4761b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap142675a5-3c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:04:08 compute-0 nova_compute[254092]: 2025-11-25 17:04:08.101 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:08 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f95f4e65c56a1c3e5424fbff018a47f094cf6312f6583c3abb2fac951bec3087-userdata-shm.mount: Deactivated successfully.
Nov 25 17:04:08 compute-0 nova_compute[254092]: 2025-11-25 17:04:08.101 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap142675a5-3c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:04:08 compute-0 nova_compute[254092]: 2025-11-25 17:04:08.103 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee1fa6406befdaaca4f1d07638e3ce32c9871c9601e12c9f35106bd16f6c1c0d-merged.mount: Deactivated successfully.
Nov 25 17:04:08 compute-0 nova_compute[254092]: 2025-11-25 17:04:08.105 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:04:08 compute-0 nova_compute[254092]: 2025-11-25 17:04:08.108 254096 INFO os_vif [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4d:64:18,bridge_name='br-int',has_traffic_filtering=True,id=142675a5-3c37-4e43-9f80-e8fedd63f3cf,network=Network(51d8d234-0a41-496f-82c7-0c98aa4761b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap142675a5-3c')
Nov 25 17:04:08 compute-0 podman[385589]: 2025-11-25 17:04:08.110245359 +0000 UTC m=+0.088316713 container cleanup f95f4e65c56a1c3e5424fbff018a47f094cf6312f6583c3abb2fac951bec3087 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:04:08 compute-0 systemd[1]: libpod-conmon-f95f4e65c56a1c3e5424fbff018a47f094cf6312f6583c3abb2fac951bec3087.scope: Deactivated successfully.
Nov 25 17:04:08 compute-0 ceph-mon[74985]: pgmap v2417: 321 pgs: 321 active+clean; 121 MiB data, 876 MiB used, 59 GiB / 60 GiB avail; 392 KiB/s rd, 2.2 MiB/s wr, 151 op/s
Nov 25 17:04:08 compute-0 podman[385637]: 2025-11-25 17:04:08.175730444 +0000 UTC m=+0.042667531 container remove f95f4e65c56a1c3e5424fbff018a47f094cf6312f6583c3abb2fac951bec3087 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:04:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:08.182 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f83ad36e-c97f-49cd-995f-da44fc30ec87]: (4, ('Tue Nov 25 05:04:08 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8 (f95f4e65c56a1c3e5424fbff018a47f094cf6312f6583c3abb2fac951bec3087)\nf95f4e65c56a1c3e5424fbff018a47f094cf6312f6583c3abb2fac951bec3087\nTue Nov 25 05:04:08 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8 (f95f4e65c56a1c3e5424fbff018a47f094cf6312f6583c3abb2fac951bec3087)\nf95f4e65c56a1c3e5424fbff018a47f094cf6312f6583c3abb2fac951bec3087\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:08.183 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[318d8709-191b-4d35-8f4e-a64debb87cd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:08.184 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap51d8d234-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:04:08 compute-0 nova_compute[254092]: 2025-11-25 17:04:08.185 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:08 compute-0 kernel: tap51d8d234-00: left promiscuous mode
Nov 25 17:04:08 compute-0 nova_compute[254092]: 2025-11-25 17:04:08.187 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:08.192 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2665aae5-24a0-482d-a555-9d1204adbfd2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:08 compute-0 nova_compute[254092]: 2025-11-25 17:04:08.201 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:08.209 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bf1fce61-0e05-4685-a069-d318656a31e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:08.210 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[abafb875-2bef-4a9d-8fa1-7efbc64a08ae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:08.227 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c0647207-75da-43e5-ab67-46b2777b51dc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663937, 'reachable_time': 27179, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 385663, 'error': None, 'target': 'ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:08 compute-0 systemd[1]: run-netns-ovnmeta\x2d51d8d234\x2d0a41\x2d496f\x2d82c7\x2d0c98aa4761b8.mount: Deactivated successfully.
Nov 25 17:04:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:08.231 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-51d8d234-0a41-496f-82c7-0c98aa4761b8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 17:04:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:08.231 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[2356c90e-68b6-4323-a1a7-6e7593a72008]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2418: 321 pgs: 321 active+clean; 121 MiB data, 876 MiB used, 59 GiB / 60 GiB avail; 172 KiB/s rd, 129 KiB/s wr, 109 op/s
Nov 25 17:04:08 compute-0 nova_compute[254092]: 2025-11-25 17:04:08.516 254096 INFO nova.virt.libvirt.driver [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Deleting instance files /var/lib/nova/instances/12deb2c6-31fb-4186-940b-8131e43ea3f8_del
Nov 25 17:04:08 compute-0 nova_compute[254092]: 2025-11-25 17:04:08.517 254096 INFO nova.virt.libvirt.driver [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Deletion of /var/lib/nova/instances/12deb2c6-31fb-4186-940b-8131e43ea3f8_del complete
Nov 25 17:04:08 compute-0 nova_compute[254092]: 2025-11-25 17:04:08.581 254096 INFO nova.compute.manager [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Took 0.74 seconds to destroy the instance on the hypervisor.
Nov 25 17:04:08 compute-0 nova_compute[254092]: 2025-11-25 17:04:08.582 254096 DEBUG oslo.service.loopingcall [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 17:04:08 compute-0 nova_compute[254092]: 2025-11-25 17:04:08.583 254096 DEBUG nova.compute.manager [-] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 17:04:08 compute-0 nova_compute[254092]: 2025-11-25 17:04:08.583 254096 DEBUG nova.network.neutron [-] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 17:04:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:04:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:09.510 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:04:09 compute-0 nova_compute[254092]: 2025-11-25 17:04:09.511 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:09.511 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 17:04:09 compute-0 nova_compute[254092]: 2025-11-25 17:04:09.732 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:09 compute-0 nova_compute[254092]: 2025-11-25 17:04:09.852 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:09 compute-0 nova_compute[254092]: 2025-11-25 17:04:09.866 254096 DEBUG nova.compute.manager [req-93126ac0-016c-49f6-9559-769ad09a5381 req-e7b41d13-11f2-4893-8594-7fe5d3534ea4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Received event network-vif-unplugged-142675a5-3c37-4e43-9f80-e8fedd63f3cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:04:09 compute-0 nova_compute[254092]: 2025-11-25 17:04:09.866 254096 DEBUG oslo_concurrency.lockutils [req-93126ac0-016c-49f6-9559-769ad09a5381 req-e7b41d13-11f2-4893-8594-7fe5d3534ea4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "12deb2c6-31fb-4186-940b-8131e43ea3f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:04:09 compute-0 nova_compute[254092]: 2025-11-25 17:04:09.867 254096 DEBUG oslo_concurrency.lockutils [req-93126ac0-016c-49f6-9559-769ad09a5381 req-e7b41d13-11f2-4893-8594-7fe5d3534ea4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "12deb2c6-31fb-4186-940b-8131e43ea3f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:04:09 compute-0 nova_compute[254092]: 2025-11-25 17:04:09.867 254096 DEBUG oslo_concurrency.lockutils [req-93126ac0-016c-49f6-9559-769ad09a5381 req-e7b41d13-11f2-4893-8594-7fe5d3534ea4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "12deb2c6-31fb-4186-940b-8131e43ea3f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:04:09 compute-0 nova_compute[254092]: 2025-11-25 17:04:09.867 254096 DEBUG nova.compute.manager [req-93126ac0-016c-49f6-9559-769ad09a5381 req-e7b41d13-11f2-4893-8594-7fe5d3534ea4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] No waiting events found dispatching network-vif-unplugged-142675a5-3c37-4e43-9f80-e8fedd63f3cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:04:09 compute-0 nova_compute[254092]: 2025-11-25 17:04:09.868 254096 DEBUG nova.compute.manager [req-93126ac0-016c-49f6-9559-769ad09a5381 req-e7b41d13-11f2-4893-8594-7fe5d3534ea4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Received event network-vif-unplugged-142675a5-3c37-4e43-9f80-e8fedd63f3cf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 17:04:09 compute-0 nova_compute[254092]: 2025-11-25 17:04:09.868 254096 DEBUG nova.compute.manager [req-93126ac0-016c-49f6-9559-769ad09a5381 req-e7b41d13-11f2-4893-8594-7fe5d3534ea4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Received event network-vif-plugged-142675a5-3c37-4e43-9f80-e8fedd63f3cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:04:09 compute-0 nova_compute[254092]: 2025-11-25 17:04:09.868 254096 DEBUG oslo_concurrency.lockutils [req-93126ac0-016c-49f6-9559-769ad09a5381 req-e7b41d13-11f2-4893-8594-7fe5d3534ea4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "12deb2c6-31fb-4186-940b-8131e43ea3f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:04:09 compute-0 nova_compute[254092]: 2025-11-25 17:04:09.869 254096 DEBUG oslo_concurrency.lockutils [req-93126ac0-016c-49f6-9559-769ad09a5381 req-e7b41d13-11f2-4893-8594-7fe5d3534ea4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "12deb2c6-31fb-4186-940b-8131e43ea3f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:04:09 compute-0 nova_compute[254092]: 2025-11-25 17:04:09.869 254096 DEBUG oslo_concurrency.lockutils [req-93126ac0-016c-49f6-9559-769ad09a5381 req-e7b41d13-11f2-4893-8594-7fe5d3534ea4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "12deb2c6-31fb-4186-940b-8131e43ea3f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:04:09 compute-0 nova_compute[254092]: 2025-11-25 17:04:09.869 254096 DEBUG nova.compute.manager [req-93126ac0-016c-49f6-9559-769ad09a5381 req-e7b41d13-11f2-4893-8594-7fe5d3534ea4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] No waiting events found dispatching network-vif-plugged-142675a5-3c37-4e43-9f80-e8fedd63f3cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:04:09 compute-0 nova_compute[254092]: 2025-11-25 17:04:09.870 254096 WARNING nova.compute.manager [req-93126ac0-016c-49f6-9559-769ad09a5381 req-e7b41d13-11f2-4893-8594-7fe5d3534ea4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Received unexpected event network-vif-plugged-142675a5-3c37-4e43-9f80-e8fedd63f3cf for instance with vm_state active and task_state deleting.
Nov 25 17:04:09 compute-0 nova_compute[254092]: 2025-11-25 17:04:09.890 254096 DEBUG nova.network.neutron [-] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:04:09 compute-0 nova_compute[254092]: 2025-11-25 17:04:09.901 254096 INFO nova.compute.manager [-] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Took 1.32 seconds to deallocate network for instance.
Nov 25 17:04:09 compute-0 nova_compute[254092]: 2025-11-25 17:04:09.941 254096 DEBUG oslo_concurrency.lockutils [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:04:09 compute-0 nova_compute[254092]: 2025-11-25 17:04:09.942 254096 DEBUG oslo_concurrency.lockutils [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:04:09 compute-0 nova_compute[254092]: 2025-11-25 17:04:09.955 254096 DEBUG nova.compute.manager [req-8d5b3268-1fcb-45fb-8bf5-4521a53ab1d5 req-5c58149b-e29f-4fb8-a344-2d2ddc157894 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Received event network-vif-deleted-142675a5-3c37-4e43-9f80-e8fedd63f3cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:04:09 compute-0 nova_compute[254092]: 2025-11-25 17:04:09.996 254096 DEBUG oslo_concurrency.processutils [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:04:10 compute-0 nova_compute[254092]: 2025-11-25 17:04:10.045 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:04:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:04:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:04:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:04:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:04:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:04:10 compute-0 nova_compute[254092]: 2025-11-25 17:04:10.114 254096 DEBUG nova.network.neutron [req-c2e7a8f3-eca7-4e5c-8aed-22a5f3f147bf req-390ef716-1b0d-49b0-9b7c-254bb33fd1dc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Updated VIF entry in instance network info cache for port 142675a5-3c37-4e43-9f80-e8fedd63f3cf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:04:10 compute-0 nova_compute[254092]: 2025-11-25 17:04:10.115 254096 DEBUG nova.network.neutron [req-c2e7a8f3-eca7-4e5c-8aed-22a5f3f147bf req-390ef716-1b0d-49b0-9b7c-254bb33fd1dc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Updating instance_info_cache with network_info: [{"id": "142675a5-3c37-4e43-9f80-e8fedd63f3cf", "address": "fa:16:3e:4d:64:18", "network": {"id": "51d8d234-0a41-496f-82c7-0c98aa4761b8", "bridge": "br-int", "label": "tempest-network-smoke--615777126", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap142675a5-3c", "ovs_interfaceid": "142675a5-3c37-4e43-9f80-e8fedd63f3cf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:04:10 compute-0 nova_compute[254092]: 2025-11-25 17:04:10.127 254096 DEBUG oslo_concurrency.lockutils [req-c2e7a8f3-eca7-4e5c-8aed-22a5f3f147bf req-390ef716-1b0d-49b0-9b7c-254bb33fd1dc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-12deb2c6-31fb-4186-940b-8131e43ea3f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:04:10 compute-0 ceph-mon[74985]: pgmap v2418: 321 pgs: 321 active+clean; 121 MiB data, 876 MiB used, 59 GiB / 60 GiB avail; 172 KiB/s rd, 129 KiB/s wr, 109 op/s
Nov 25 17:04:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2419: 321 pgs: 321 active+clean; 98 MiB data, 856 MiB used, 59 GiB / 60 GiB avail; 175 KiB/s rd, 129 KiB/s wr, 114 op/s
Nov 25 17:04:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:04:10 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2199238265' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:04:10 compute-0 nova_compute[254092]: 2025-11-25 17:04:10.473 254096 DEBUG oslo_concurrency.processutils [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:04:10 compute-0 nova_compute[254092]: 2025-11-25 17:04:10.478 254096 DEBUG nova.compute.provider_tree [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:04:10 compute-0 nova_compute[254092]: 2025-11-25 17:04:10.503 254096 DEBUG nova.scheduler.client.report [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:04:10 compute-0 nova_compute[254092]: 2025-11-25 17:04:10.530 254096 DEBUG oslo_concurrency.lockutils [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:04:10 compute-0 nova_compute[254092]: 2025-11-25 17:04:10.558 254096 INFO nova.scheduler.client.report [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Deleted allocations for instance 12deb2c6-31fb-4186-940b-8131e43ea3f8
Nov 25 17:04:10 compute-0 nova_compute[254092]: 2025-11-25 17:04:10.625 254096 DEBUG oslo_concurrency.lockutils [None req-9d2a0c43-5a5f-4e4c-8140-56272bf79acb 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "12deb2c6-31fb-4186-940b-8131e43ea3f8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:04:11 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2199238265' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:04:12 compute-0 ceph-mon[74985]: pgmap v2419: 321 pgs: 321 active+clean; 98 MiB data, 856 MiB used, 59 GiB / 60 GiB avail; 175 KiB/s rd, 129 KiB/s wr, 114 op/s
Nov 25 17:04:12 compute-0 nova_compute[254092]: 2025-11-25 17:04:12.265 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090237.2632968, dc02b95b-290f-441d-9b04-957187d0f885 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:04:12 compute-0 nova_compute[254092]: 2025-11-25 17:04:12.266 254096 INFO nova.compute.manager [-] [instance: dc02b95b-290f-441d-9b04-957187d0f885] VM Stopped (Lifecycle Event)
Nov 25 17:04:12 compute-0 nova_compute[254092]: 2025-11-25 17:04:12.286 254096 DEBUG nova.compute.manager [None req-3b53a3c4-ef3b-4fd1-9613-e4d084d00f18 - - - - - -] [instance: dc02b95b-290f-441d-9b04-957187d0f885] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:04:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2420: 321 pgs: 321 active+clean; 41 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 130 KiB/s rd, 115 KiB/s wr, 112 op/s
Nov 25 17:04:13 compute-0 nova_compute[254092]: 2025-11-25 17:04:13.103 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:13.641 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:04:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:13.642 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:04:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:13.642 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:04:13 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:04:14 compute-0 ceph-mon[74985]: pgmap v2420: 321 pgs: 321 active+clean; 41 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 130 KiB/s rd, 115 KiB/s wr, 112 op/s
Nov 25 17:04:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2421: 321 pgs: 321 active+clean; 41 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 35 KiB/s wr, 86 op/s
Nov 25 17:04:14 compute-0 podman[385688]: 2025-11-25 17:04:14.70897685 +0000 UTC m=+0.089790113 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 25 17:04:14 compute-0 podman[385689]: 2025-11-25 17:04:14.721919215 +0000 UTC m=+0.100464407 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 17:04:14 compute-0 podman[385690]: 2025-11-25 17:04:14.760170814 +0000 UTC m=+0.133013619 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 17:04:15 compute-0 nova_compute[254092]: 2025-11-25 17:04:15.046 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:16 compute-0 ceph-mon[74985]: pgmap v2421: 321 pgs: 321 active+clean; 41 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 35 KiB/s wr, 86 op/s
Nov 25 17:04:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2422: 321 pgs: 321 active+clean; 41 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 35 KiB/s wr, 86 op/s
Nov 25 17:04:16 compute-0 nova_compute[254092]: 2025-11-25 17:04:16.562 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090241.561269, 73a187fa-5479-4191-bd44-757c3840137a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:04:16 compute-0 nova_compute[254092]: 2025-11-25 17:04:16.563 254096 INFO nova.compute.manager [-] [instance: 73a187fa-5479-4191-bd44-757c3840137a] VM Stopped (Lifecycle Event)
Nov 25 17:04:16 compute-0 nova_compute[254092]: 2025-11-25 17:04:16.581 254096 DEBUG nova.compute.manager [None req-d049721e-1c0c-439b-bd08-b9fe24fb59a7 - - - - - -] [instance: 73a187fa-5479-4191-bd44-757c3840137a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:04:18 compute-0 nova_compute[254092]: 2025-11-25 17:04:18.106 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:18 compute-0 nova_compute[254092]: 2025-11-25 17:04:18.150 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090243.1498055, 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:04:18 compute-0 nova_compute[254092]: 2025-11-25 17:04:18.150 254096 INFO nova.compute.manager [-] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] VM Stopped (Lifecycle Event)
Nov 25 17:04:18 compute-0 nova_compute[254092]: 2025-11-25 17:04:18.168 254096 DEBUG nova.compute.manager [None req-5d545722-ca76-42e6-b412-de15cae2adad - - - - - -] [instance: 4c6e96e1-6f61-4d08-8e7e-b7457f0da1c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:04:18 compute-0 ceph-mon[74985]: pgmap v2422: 321 pgs: 321 active+clean; 41 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 35 KiB/s wr, 86 op/s
Nov 25 17:04:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2423: 321 pgs: 321 active+clean; 41 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Nov 25 17:04:18 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:04:19 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:19.513 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:04:20 compute-0 nova_compute[254092]: 2025-11-25 17:04:20.048 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:20 compute-0 ceph-mon[74985]: pgmap v2423: 321 pgs: 321 active+clean; 41 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Nov 25 17:04:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2424: 321 pgs: 321 active+clean; 41 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Nov 25 17:04:22 compute-0 ceph-mon[74985]: pgmap v2424: 321 pgs: 321 active+clean; 41 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Nov 25 17:04:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2425: 321 pgs: 321 active+clean; 41 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.8 KiB/s wr, 22 op/s
Nov 25 17:04:23 compute-0 nova_compute[254092]: 2025-11-25 17:04:23.083 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090248.0821123, 12deb2c6-31fb-4186-940b-8131e43ea3f8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:04:23 compute-0 nova_compute[254092]: 2025-11-25 17:04:23.083 254096 INFO nova.compute.manager [-] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] VM Stopped (Lifecycle Event)
Nov 25 17:04:23 compute-0 nova_compute[254092]: 2025-11-25 17:04:23.101 254096 DEBUG nova.compute.manager [None req-2a6fb1d7-0be5-4fc5-8c9b-8ee6770e1536 - - - - - -] [instance: 12deb2c6-31fb-4186-940b-8131e43ea3f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:04:23 compute-0 nova_compute[254092]: 2025-11-25 17:04:23.108 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:23 compute-0 ceph-mon[74985]: pgmap v2425: 321 pgs: 321 active+clean; 41 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.8 KiB/s wr, 22 op/s
Nov 25 17:04:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:04:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2426: 321 pgs: 321 active+clean; 41 MiB data, 821 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:04:24 compute-0 nova_compute[254092]: 2025-11-25 17:04:24.507 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:04:25 compute-0 nova_compute[254092]: 2025-11-25 17:04:25.120 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:25 compute-0 ceph-mgr[75280]: [devicehealth INFO root] Check health
Nov 25 17:04:25 compute-0 ceph-mon[74985]: pgmap v2426: 321 pgs: 321 active+clean; 41 MiB data, 821 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:04:25 compute-0 nova_compute[254092]: 2025-11-25 17:04:25.887 254096 DEBUG oslo_concurrency.lockutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:04:25 compute-0 nova_compute[254092]: 2025-11-25 17:04:25.887 254096 DEBUG oslo_concurrency.lockutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:04:25 compute-0 nova_compute[254092]: 2025-11-25 17:04:25.906 254096 DEBUG nova.compute.manager [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 17:04:26 compute-0 nova_compute[254092]: 2025-11-25 17:04:26.029 254096 DEBUG oslo_concurrency.lockutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:04:26 compute-0 nova_compute[254092]: 2025-11-25 17:04:26.030 254096 DEBUG oslo_concurrency.lockutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:04:26 compute-0 nova_compute[254092]: 2025-11-25 17:04:26.038 254096 DEBUG nova.virt.hardware [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 17:04:26 compute-0 nova_compute[254092]: 2025-11-25 17:04:26.038 254096 INFO nova.compute.claims [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Claim successful on node compute-0.ctlplane.example.com
Nov 25 17:04:26 compute-0 nova_compute[254092]: 2025-11-25 17:04:26.205 254096 DEBUG oslo_concurrency.processutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:04:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2427: 321 pgs: 321 active+clean; 41 MiB data, 821 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:04:26 compute-0 nova_compute[254092]: 2025-11-25 17:04:26.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:04:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:04:26 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3814858778' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:04:26 compute-0 nova_compute[254092]: 2025-11-25 17:04:26.632 254096 DEBUG oslo_concurrency.processutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:04:26 compute-0 nova_compute[254092]: 2025-11-25 17:04:26.639 254096 DEBUG nova.compute.provider_tree [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:04:26 compute-0 nova_compute[254092]: 2025-11-25 17:04:26.652 254096 DEBUG nova.scheduler.client.report [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:04:26 compute-0 nova_compute[254092]: 2025-11-25 17:04:26.695 254096 DEBUG oslo_concurrency.lockutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:04:26 compute-0 nova_compute[254092]: 2025-11-25 17:04:26.696 254096 DEBUG nova.compute.manager [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 17:04:26 compute-0 nova_compute[254092]: 2025-11-25 17:04:26.744 254096 DEBUG nova.compute.manager [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 17:04:26 compute-0 nova_compute[254092]: 2025-11-25 17:04:26.745 254096 DEBUG nova.network.neutron [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 17:04:26 compute-0 nova_compute[254092]: 2025-11-25 17:04:26.797 254096 INFO nova.virt.libvirt.driver [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 17:04:26 compute-0 nova_compute[254092]: 2025-11-25 17:04:26.816 254096 DEBUG nova.compute.manager [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 17:04:26 compute-0 nova_compute[254092]: 2025-11-25 17:04:26.914 254096 DEBUG nova.compute.manager [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 17:04:26 compute-0 nova_compute[254092]: 2025-11-25 17:04:26.915 254096 DEBUG nova.virt.libvirt.driver [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 17:04:26 compute-0 nova_compute[254092]: 2025-11-25 17:04:26.916 254096 INFO nova.virt.libvirt.driver [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Creating image(s)
Nov 25 17:04:26 compute-0 nova_compute[254092]: 2025-11-25 17:04:26.939 254096 DEBUG nova.storage.rbd_utils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image fb2e8439-c6c2-4a88-9e88-faf385d9a4d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:04:26 compute-0 nova_compute[254092]: 2025-11-25 17:04:26.961 254096 DEBUG nova.storage.rbd_utils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image fb2e8439-c6c2-4a88-9e88-faf385d9a4d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:04:26 compute-0 nova_compute[254092]: 2025-11-25 17:04:26.981 254096 DEBUG nova.storage.rbd_utils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image fb2e8439-c6c2-4a88-9e88-faf385d9a4d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:04:26 compute-0 nova_compute[254092]: 2025-11-25 17:04:26.984 254096 DEBUG oslo_concurrency.processutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:04:27 compute-0 nova_compute[254092]: 2025-11-25 17:04:27.056 254096 DEBUG oslo_concurrency.processutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:04:27 compute-0 nova_compute[254092]: 2025-11-25 17:04:27.057 254096 DEBUG oslo_concurrency.lockutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:04:27 compute-0 nova_compute[254092]: 2025-11-25 17:04:27.057 254096 DEBUG oslo_concurrency.lockutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:04:27 compute-0 nova_compute[254092]: 2025-11-25 17:04:27.058 254096 DEBUG oslo_concurrency.lockutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:04:27 compute-0 nova_compute[254092]: 2025-11-25 17:04:27.078 254096 DEBUG nova.storage.rbd_utils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image fb2e8439-c6c2-4a88-9e88-faf385d9a4d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:04:27 compute-0 nova_compute[254092]: 2025-11-25 17:04:27.082 254096 DEBUG oslo_concurrency.processutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 fb2e8439-c6c2-4a88-9e88-faf385d9a4d9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:04:27 compute-0 nova_compute[254092]: 2025-11-25 17:04:27.240 254096 DEBUG nova.policy [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e13382ac65e94c7b8a89e67de0e754df', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 17:04:27 compute-0 sudo[385865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:04:27 compute-0 sudo[385865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:04:27 compute-0 sudo[385865]: pam_unix(sudo:session): session closed for user root
Nov 25 17:04:27 compute-0 sudo[385890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:04:27 compute-0 sudo[385890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:04:27 compute-0 sudo[385890]: pam_unix(sudo:session): session closed for user root
Nov 25 17:04:27 compute-0 ceph-mon[74985]: pgmap v2427: 321 pgs: 321 active+clean; 41 MiB data, 821 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:04:27 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3814858778' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:04:27 compute-0 sudo[385915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:04:27 compute-0 sudo[385915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:04:27 compute-0 sudo[385915]: pam_unix(sudo:session): session closed for user root
Nov 25 17:04:27 compute-0 sudo[385940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:04:27 compute-0 sudo[385940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:04:28 compute-0 nova_compute[254092]: 2025-11-25 17:04:28.110 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:28 compute-0 sudo[385940]: pam_unix(sudo:session): session closed for user root
Nov 25 17:04:28 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:04:28 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:04:28 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:04:28 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:04:28 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:04:28 compute-0 nova_compute[254092]: 2025-11-25 17:04:28.247 254096 DEBUG oslo_concurrency.processutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 fb2e8439-c6c2-4a88-9e88-faf385d9a4d9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:04:28 compute-0 nova_compute[254092]: 2025-11-25 17:04:28.298 254096 DEBUG nova.storage.rbd_utils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] resizing rbd image fb2e8439-c6c2-4a88-9e88-faf385d9a4d9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 17:04:28 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:04:28 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev fcdad622-b63a-4f24-8a25-4a4c496e0cd6 does not exist
Nov 25 17:04:28 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev fad3c683-0a64-46ea-b453-6984bd41ad20 does not exist
Nov 25 17:04:28 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 2e3b12b3-43aa-4150-a580-ddecd7303524 does not exist
Nov 25 17:04:28 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:04:28 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:04:28 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:04:28 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:04:28 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:04:28 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:04:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2428: 321 pgs: 321 active+clean; 41 MiB data, 821 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:04:28 compute-0 sudo[386050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:04:28 compute-0 sudo[386050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:04:28 compute-0 sudo[386050]: pam_unix(sudo:session): session closed for user root
Nov 25 17:04:28 compute-0 sudo[386075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:04:28 compute-0 sudo[386075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:04:28 compute-0 sudo[386075]: pam_unix(sudo:session): session closed for user root
Nov 25 17:04:28 compute-0 sudo[386100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:04:28 compute-0 sudo[386100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:04:28 compute-0 sudo[386100]: pam_unix(sudo:session): session closed for user root
Nov 25 17:04:28 compute-0 sudo[386125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:04:28 compute-0 sudo[386125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:04:28 compute-0 nova_compute[254092]: 2025-11-25 17:04:28.613 254096 DEBUG nova.network.neutron [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Successfully created port: a13b6cf4-602d-4af3-b369-9dfa273e1514 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 17:04:28 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:04:28 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:04:28 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:04:28 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:04:28 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:04:28 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:04:28 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:04:28 compute-0 podman[386189]: 2025-11-25 17:04:28.953913123 +0000 UTC m=+0.087960154 container create d45e1a3758a2b6a4e9b55fd7df6bd0c9159fa6122ab0f58d0b626e01dc465760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_boyd, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:04:28 compute-0 podman[386189]: 2025-11-25 17:04:28.88743997 +0000 UTC m=+0.021487021 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:04:29 compute-0 systemd[1]: Started libpod-conmon-d45e1a3758a2b6a4e9b55fd7df6bd0c9159fa6122ab0f58d0b626e01dc465760.scope.
Nov 25 17:04:29 compute-0 nova_compute[254092]: 2025-11-25 17:04:29.057 254096 DEBUG nova.objects.instance [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'migration_context' on Instance uuid fb2e8439-c6c2-4a88-9e88-faf385d9a4d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:04:29 compute-0 nova_compute[254092]: 2025-11-25 17:04:29.068 254096 DEBUG nova.virt.libvirt.driver [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 17:04:29 compute-0 nova_compute[254092]: 2025-11-25 17:04:29.068 254096 DEBUG nova.virt.libvirt.driver [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Ensure instance console log exists: /var/lib/nova/instances/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 17:04:29 compute-0 nova_compute[254092]: 2025-11-25 17:04:29.069 254096 DEBUG oslo_concurrency.lockutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:04:29 compute-0 nova_compute[254092]: 2025-11-25 17:04:29.069 254096 DEBUG oslo_concurrency.lockutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:04:29 compute-0 nova_compute[254092]: 2025-11-25 17:04:29.069 254096 DEBUG oslo_concurrency.lockutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:04:29 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:04:29 compute-0 podman[386189]: 2025-11-25 17:04:29.10950222 +0000 UTC m=+0.243549271 container init d45e1a3758a2b6a4e9b55fd7df6bd0c9159fa6122ab0f58d0b626e01dc465760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 17:04:29 compute-0 podman[386189]: 2025-11-25 17:04:29.115668429 +0000 UTC m=+0.249715460 container start d45e1a3758a2b6a4e9b55fd7df6bd0c9159fa6122ab0f58d0b626e01dc465760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_boyd, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 17:04:29 compute-0 musing_boyd[386220]: 167 167
Nov 25 17:04:29 compute-0 systemd[1]: libpod-d45e1a3758a2b6a4e9b55fd7df6bd0c9159fa6122ab0f58d0b626e01dc465760.scope: Deactivated successfully.
Nov 25 17:04:29 compute-0 conmon[386220]: conmon d45e1a3758a2b6a4e9b5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d45e1a3758a2b6a4e9b55fd7df6bd0c9159fa6122ab0f58d0b626e01dc465760.scope/container/memory.events
Nov 25 17:04:29 compute-0 podman[386189]: 2025-11-25 17:04:29.169525827 +0000 UTC m=+0.303572868 container attach d45e1a3758a2b6a4e9b55fd7df6bd0c9159fa6122ab0f58d0b626e01dc465760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_boyd, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:04:29 compute-0 podman[386189]: 2025-11-25 17:04:29.169949318 +0000 UTC m=+0.303996349 container died d45e1a3758a2b6a4e9b55fd7df6bd0c9159fa6122ab0f58d0b626e01dc465760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_boyd, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 17:04:29 compute-0 nova_compute[254092]: 2025-11-25 17:04:29.233 254096 DEBUG oslo_concurrency.lockutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "c5d6f631-cec2-431d-b476-feafa21e4f80" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:04:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f6be5949ab66033ef739e525333872c0fa1649fd950a9317beb08e20bdc14eb-merged.mount: Deactivated successfully.
Nov 25 17:04:29 compute-0 nova_compute[254092]: 2025-11-25 17:04:29.235 254096 DEBUG oslo_concurrency.lockutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "c5d6f631-cec2-431d-b476-feafa21e4f80" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:04:29 compute-0 nova_compute[254092]: 2025-11-25 17:04:29.250 254096 DEBUG nova.compute.manager [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 17:04:29 compute-0 podman[386189]: 2025-11-25 17:04:29.252917154 +0000 UTC m=+0.386964185 container remove d45e1a3758a2b6a4e9b55fd7df6bd0c9159fa6122ab0f58d0b626e01dc465760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_boyd, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:04:29 compute-0 systemd[1]: libpod-conmon-d45e1a3758a2b6a4e9b55fd7df6bd0c9159fa6122ab0f58d0b626e01dc465760.scope: Deactivated successfully.
Nov 25 17:04:29 compute-0 nova_compute[254092]: 2025-11-25 17:04:29.314 254096 DEBUG oslo_concurrency.lockutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:04:29 compute-0 nova_compute[254092]: 2025-11-25 17:04:29.314 254096 DEBUG oslo_concurrency.lockutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:04:29 compute-0 nova_compute[254092]: 2025-11-25 17:04:29.321 254096 DEBUG nova.virt.hardware [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 17:04:29 compute-0 nova_compute[254092]: 2025-11-25 17:04:29.321 254096 INFO nova.compute.claims [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Claim successful on node compute-0.ctlplane.example.com
Nov 25 17:04:29 compute-0 nova_compute[254092]: 2025-11-25 17:04:29.408 254096 DEBUG oslo_concurrency.processutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:04:29 compute-0 podman[386250]: 2025-11-25 17:04:29.412154551 +0000 UTC m=+0.041739206 container create c08f218e8dd38c836e18fe36152cf2668cfda5a0ecf7d3cb275cd9be72fef959 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 17:04:29 compute-0 nova_compute[254092]: 2025-11-25 17:04:29.442 254096 DEBUG nova.network.neutron [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Successfully updated port: a13b6cf4-602d-4af3-b369-9dfa273e1514 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:04:29 compute-0 systemd[1]: Started libpod-conmon-c08f218e8dd38c836e18fe36152cf2668cfda5a0ecf7d3cb275cd9be72fef959.scope.
Nov 25 17:04:29 compute-0 nova_compute[254092]: 2025-11-25 17:04:29.455 254096 DEBUG oslo_concurrency.lockutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:04:29 compute-0 nova_compute[254092]: 2025-11-25 17:04:29.456 254096 DEBUG oslo_concurrency.lockutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquired lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:04:29 compute-0 nova_compute[254092]: 2025-11-25 17:04:29.456 254096 DEBUG nova.network.neutron [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 17:04:29 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:04:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad4d5e742aece0f2f5e1d3a7c73902dfe7a0641a44c30736ba27293b73fd269b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:04:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad4d5e742aece0f2f5e1d3a7c73902dfe7a0641a44c30736ba27293b73fd269b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:04:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad4d5e742aece0f2f5e1d3a7c73902dfe7a0641a44c30736ba27293b73fd269b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:04:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad4d5e742aece0f2f5e1d3a7c73902dfe7a0641a44c30736ba27293b73fd269b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:04:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad4d5e742aece0f2f5e1d3a7c73902dfe7a0641a44c30736ba27293b73fd269b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:04:29 compute-0 podman[386250]: 2025-11-25 17:04:29.39352003 +0000 UTC m=+0.023104715 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:04:29 compute-0 nova_compute[254092]: 2025-11-25 17:04:29.521 254096 DEBUG nova.compute.manager [req-6ec81308-38a9-4ada-b909-f534e49cf7aa req-4a735ca0-dcca-4f55-9c88-4fd7c9113379 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Received event network-changed-a13b6cf4-602d-4af3-b369-9dfa273e1514 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:04:29 compute-0 nova_compute[254092]: 2025-11-25 17:04:29.521 254096 DEBUG nova.compute.manager [req-6ec81308-38a9-4ada-b909-f534e49cf7aa req-4a735ca0-dcca-4f55-9c88-4fd7c9113379 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Refreshing instance network info cache due to event network-changed-a13b6cf4-602d-4af3-b369-9dfa273e1514. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:04:29 compute-0 nova_compute[254092]: 2025-11-25 17:04:29.521 254096 DEBUG oslo_concurrency.lockutils [req-6ec81308-38a9-4ada-b909-f534e49cf7aa req-4a735ca0-dcca-4f55-9c88-4fd7c9113379 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:04:29 compute-0 nova_compute[254092]: 2025-11-25 17:04:29.590 254096 DEBUG nova.network.neutron [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:04:29 compute-0 podman[386250]: 2025-11-25 17:04:29.90684365 +0000 UTC m=+0.536428345 container init c08f218e8dd38c836e18fe36152cf2668cfda5a0ecf7d3cb275cd9be72fef959 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_satoshi, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:04:29 compute-0 podman[386250]: 2025-11-25 17:04:29.915010424 +0000 UTC m=+0.544595099 container start c08f218e8dd38c836e18fe36152cf2668cfda5a0ecf7d3cb275cd9be72fef959 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_satoshi, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 17:04:29 compute-0 ceph-mon[74985]: pgmap v2428: 321 pgs: 321 active+clean; 41 MiB data, 821 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:04:29 compute-0 podman[386250]: 2025-11-25 17:04:29.948194153 +0000 UTC m=+0.577778848 container attach c08f218e8dd38c836e18fe36152cf2668cfda5a0ecf7d3cb275cd9be72fef959 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_satoshi, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:04:30 compute-0 nova_compute[254092]: 2025-11-25 17:04:30.124 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:04:30 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/351854010' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:04:30 compute-0 nova_compute[254092]: 2025-11-25 17:04:30.220 254096 DEBUG oslo_concurrency.processutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.812s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:04:30 compute-0 nova_compute[254092]: 2025-11-25 17:04:30.226 254096 DEBUG nova.compute.provider_tree [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:04:30 compute-0 nova_compute[254092]: 2025-11-25 17:04:30.244 254096 DEBUG nova.scheduler.client.report [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:04:30 compute-0 nova_compute[254092]: 2025-11-25 17:04:30.305 254096 DEBUG oslo_concurrency.lockutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.990s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:04:30 compute-0 nova_compute[254092]: 2025-11-25 17:04:30.305 254096 DEBUG nova.compute.manager [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 17:04:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2429: 321 pgs: 321 active+clean; 57 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 419 KiB/s wr, 11 op/s
Nov 25 17:04:30 compute-0 nova_compute[254092]: 2025-11-25 17:04:30.397 254096 DEBUG nova.compute.manager [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 17:04:30 compute-0 nova_compute[254092]: 2025-11-25 17:04:30.397 254096 DEBUG nova.network.neutron [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 17:04:30 compute-0 nova_compute[254092]: 2025-11-25 17:04:30.439 254096 INFO nova.virt.libvirt.driver [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 17:04:30 compute-0 nova_compute[254092]: 2025-11-25 17:04:30.481 254096 DEBUG nova.compute.manager [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 17:04:30 compute-0 nova_compute[254092]: 2025-11-25 17:04:30.627 254096 DEBUG nova.compute.manager [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 17:04:30 compute-0 nova_compute[254092]: 2025-11-25 17:04:30.628 254096 DEBUG nova.virt.libvirt.driver [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 17:04:30 compute-0 nova_compute[254092]: 2025-11-25 17:04:30.628 254096 INFO nova.virt.libvirt.driver [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Creating image(s)
Nov 25 17:04:30 compute-0 nova_compute[254092]: 2025-11-25 17:04:30.646 254096 DEBUG nova.storage.rbd_utils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image c5d6f631-cec2-431d-b476-feafa21e4f80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:04:30 compute-0 nova_compute[254092]: 2025-11-25 17:04:30.664 254096 DEBUG nova.storage.rbd_utils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image c5d6f631-cec2-431d-b476-feafa21e4f80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:04:30 compute-0 nova_compute[254092]: 2025-11-25 17:04:30.683 254096 DEBUG nova.storage.rbd_utils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image c5d6f631-cec2-431d-b476-feafa21e4f80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:04:30 compute-0 nova_compute[254092]: 2025-11-25 17:04:30.686 254096 DEBUG oslo_concurrency.processutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:04:30 compute-0 nova_compute[254092]: 2025-11-25 17:04:30.722 254096 DEBUG nova.policy [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '72f29c5d97464520bee19cb128258fd5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f19d2d0776e84c3c99c640c0b7ed3743', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 17:04:30 compute-0 nova_compute[254092]: 2025-11-25 17:04:30.759 254096 DEBUG oslo_concurrency.processutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:04:30 compute-0 nova_compute[254092]: 2025-11-25 17:04:30.760 254096 DEBUG oslo_concurrency.lockutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:04:30 compute-0 nova_compute[254092]: 2025-11-25 17:04:30.761 254096 DEBUG oslo_concurrency.lockutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:04:30 compute-0 nova_compute[254092]: 2025-11-25 17:04:30.761 254096 DEBUG oslo_concurrency.lockutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:04:30 compute-0 nova_compute[254092]: 2025-11-25 17:04:30.780 254096 DEBUG nova.storage.rbd_utils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image c5d6f631-cec2-431d-b476-feafa21e4f80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:04:30 compute-0 nova_compute[254092]: 2025-11-25 17:04:30.783 254096 DEBUG oslo_concurrency.processutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 c5d6f631-cec2-431d-b476-feafa21e4f80_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:04:31 compute-0 strange_satoshi[386268]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:04:31 compute-0 strange_satoshi[386268]: --> relative data size: 1.0
Nov 25 17:04:31 compute-0 strange_satoshi[386268]: --> All data devices are unavailable
Nov 25 17:04:31 compute-0 systemd[1]: libpod-c08f218e8dd38c836e18fe36152cf2668cfda5a0ecf7d3cb275cd9be72fef959.scope: Deactivated successfully.
Nov 25 17:04:31 compute-0 podman[386250]: 2025-11-25 17:04:31.08041811 +0000 UTC m=+1.710002795 container died c08f218e8dd38c836e18fe36152cf2668cfda5a0ecf7d3cb275cd9be72fef959 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_satoshi, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 17:04:31 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/351854010' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:04:31 compute-0 nova_compute[254092]: 2025-11-25 17:04:31.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:04:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad4d5e742aece0f2f5e1d3a7c73902dfe7a0641a44c30736ba27293b73fd269b-merged.mount: Deactivated successfully.
Nov 25 17:04:32 compute-0 ceph-mon[74985]: pgmap v2429: 321 pgs: 321 active+clean; 57 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 419 KiB/s wr, 11 op/s
Nov 25 17:04:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2430: 321 pgs: 321 active+clean; 88 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:04:32 compute-0 podman[386250]: 2025-11-25 17:04:32.451067873 +0000 UTC m=+3.080652538 container remove c08f218e8dd38c836e18fe36152cf2668cfda5a0ecf7d3cb275cd9be72fef959 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 17:04:32 compute-0 systemd[1]: libpod-conmon-c08f218e8dd38c836e18fe36152cf2668cfda5a0ecf7d3cb275cd9be72fef959.scope: Deactivated successfully.
Nov 25 17:04:32 compute-0 nova_compute[254092]: 2025-11-25 17:04:32.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:04:32 compute-0 nova_compute[254092]: 2025-11-25 17:04:32.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:04:32 compute-0 sudo[386125]: pam_unix(sudo:session): session closed for user root
Nov 25 17:04:32 compute-0 sudo[386424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:04:32 compute-0 sudo[386424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:04:32 compute-0 sudo[386424]: pam_unix(sudo:session): session closed for user root
Nov 25 17:04:32 compute-0 sudo[386449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:04:32 compute-0 sudo[386449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:04:32 compute-0 sudo[386449]: pam_unix(sudo:session): session closed for user root
Nov 25 17:04:32 compute-0 sudo[386474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:04:32 compute-0 sudo[386474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:04:32 compute-0 sudo[386474]: pam_unix(sudo:session): session closed for user root
Nov 25 17:04:32 compute-0 sudo[386499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:04:32 compute-0 sudo[386499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:04:33 compute-0 nova_compute[254092]: 2025-11-25 17:04:33.112 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:33 compute-0 podman[386562]: 2025-11-25 17:04:33.093275237 +0000 UTC m=+0.021711697 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:04:33 compute-0 podman[386562]: 2025-11-25 17:04:33.391754834 +0000 UTC m=+0.320191274 container create 643cf86c9e6ea07a83eb788698992c6342c0a20a9ae4352c3d90ce577381cae4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_saha, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:04:33 compute-0 ceph-mon[74985]: pgmap v2430: 321 pgs: 321 active+clean; 88 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:04:33 compute-0 nova_compute[254092]: 2025-11-25 17:04:33.581 254096 DEBUG nova.network.neutron [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Updating instance_info_cache with network_info: [{"id": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "address": "fa:16:3e:7c:d6:7a", "network": {"id": "136c69a7-c4f8-40a2-be13-7ef82b7b3709", "bridge": "br-int", "label": "tempest-network-smoke--1102379118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13b6cf4-60", "ovs_interfaceid": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:04:33 compute-0 nova_compute[254092]: 2025-11-25 17:04:33.623 254096 DEBUG oslo_concurrency.processutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 c5d6f631-cec2-431d-b476-feafa21e4f80_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.839s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:04:33 compute-0 systemd[1]: Started libpod-conmon-643cf86c9e6ea07a83eb788698992c6342c0a20a9ae4352c3d90ce577381cae4.scope.
Nov 25 17:04:33 compute-0 nova_compute[254092]: 2025-11-25 17:04:33.657 254096 DEBUG oslo_concurrency.lockutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Releasing lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:04:33 compute-0 nova_compute[254092]: 2025-11-25 17:04:33.657 254096 DEBUG nova.compute.manager [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Instance network_info: |[{"id": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "address": "fa:16:3e:7c:d6:7a", "network": {"id": "136c69a7-c4f8-40a2-be13-7ef82b7b3709", "bridge": "br-int", "label": "tempest-network-smoke--1102379118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13b6cf4-60", "ovs_interfaceid": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 17:04:33 compute-0 nova_compute[254092]: 2025-11-25 17:04:33.658 254096 DEBUG oslo_concurrency.lockutils [req-6ec81308-38a9-4ada-b909-f534e49cf7aa req-4a735ca0-dcca-4f55-9c88-4fd7c9113379 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:04:33 compute-0 nova_compute[254092]: 2025-11-25 17:04:33.658 254096 DEBUG nova.network.neutron [req-6ec81308-38a9-4ada-b909-f534e49cf7aa req-4a735ca0-dcca-4f55-9c88-4fd7c9113379 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Refreshing network info cache for port a13b6cf4-602d-4af3-b369-9dfa273e1514 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:04:33 compute-0 nova_compute[254092]: 2025-11-25 17:04:33.661 254096 DEBUG nova.virt.libvirt.driver [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Start _get_guest_xml network_info=[{"id": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "address": "fa:16:3e:7c:d6:7a", "network": {"id": "136c69a7-c4f8-40a2-be13-7ef82b7b3709", "bridge": "br-int", "label": "tempest-network-smoke--1102379118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13b6cf4-60", "ovs_interfaceid": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 17:04:33 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:04:33 compute-0 nova_compute[254092]: 2025-11-25 17:04:33.701 254096 DEBUG nova.storage.rbd_utils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] resizing rbd image c5d6f631-cec2-431d-b476-feafa21e4f80_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 17:04:33 compute-0 podman[386562]: 2025-11-25 17:04:33.743613414 +0000 UTC m=+0.672049894 container init 643cf86c9e6ea07a83eb788698992c6342c0a20a9ae4352c3d90ce577381cae4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:04:33 compute-0 podman[386562]: 2025-11-25 17:04:33.755713207 +0000 UTC m=+0.684149667 container start 643cf86c9e6ea07a83eb788698992c6342c0a20a9ae4352c3d90ce577381cae4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_saha, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:04:33 compute-0 nova_compute[254092]: 2025-11-25 17:04:33.761 254096 WARNING nova.virt.libvirt.driver [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:04:33 compute-0 tender_saha[386593]: 167 167
Nov 25 17:04:33 compute-0 systemd[1]: libpod-643cf86c9e6ea07a83eb788698992c6342c0a20a9ae4352c3d90ce577381cae4.scope: Deactivated successfully.
Nov 25 17:04:33 compute-0 nova_compute[254092]: 2025-11-25 17:04:33.769 254096 DEBUG nova.virt.libvirt.host [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 17:04:33 compute-0 nova_compute[254092]: 2025-11-25 17:04:33.770 254096 DEBUG nova.virt.libvirt.host [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 17:04:33 compute-0 nova_compute[254092]: 2025-11-25 17:04:33.773 254096 DEBUG nova.virt.libvirt.host [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 17:04:33 compute-0 nova_compute[254092]: 2025-11-25 17:04:33.773 254096 DEBUG nova.virt.libvirt.host [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 17:04:33 compute-0 nova_compute[254092]: 2025-11-25 17:04:33.773 254096 DEBUG nova.virt.libvirt.driver [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 17:04:33 compute-0 nova_compute[254092]: 2025-11-25 17:04:33.774 254096 DEBUG nova.virt.hardware [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 17:04:33 compute-0 nova_compute[254092]: 2025-11-25 17:04:33.774 254096 DEBUG nova.virt.hardware [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 17:04:33 compute-0 nova_compute[254092]: 2025-11-25 17:04:33.774 254096 DEBUG nova.virt.hardware [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 17:04:33 compute-0 nova_compute[254092]: 2025-11-25 17:04:33.774 254096 DEBUG nova.virt.hardware [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 17:04:33 compute-0 nova_compute[254092]: 2025-11-25 17:04:33.775 254096 DEBUG nova.virt.hardware [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 17:04:33 compute-0 nova_compute[254092]: 2025-11-25 17:04:33.775 254096 DEBUG nova.virt.hardware [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 17:04:33 compute-0 nova_compute[254092]: 2025-11-25 17:04:33.775 254096 DEBUG nova.virt.hardware [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 17:04:33 compute-0 nova_compute[254092]: 2025-11-25 17:04:33.775 254096 DEBUG nova.virt.hardware [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 17:04:33 compute-0 nova_compute[254092]: 2025-11-25 17:04:33.776 254096 DEBUG nova.virt.hardware [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 17:04:33 compute-0 nova_compute[254092]: 2025-11-25 17:04:33.776 254096 DEBUG nova.virt.hardware [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 17:04:33 compute-0 nova_compute[254092]: 2025-11-25 17:04:33.776 254096 DEBUG nova.virt.hardware [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 17:04:33 compute-0 nova_compute[254092]: 2025-11-25 17:04:33.778 254096 DEBUG oslo_concurrency.processutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:04:33 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:04:33 compute-0 podman[386562]: 2025-11-25 17:04:33.954315074 +0000 UTC m=+0.882751564 container attach 643cf86c9e6ea07a83eb788698992c6342c0a20a9ae4352c3d90ce577381cae4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:04:33 compute-0 podman[386562]: 2025-11-25 17:04:33.955114916 +0000 UTC m=+0.883551396 container died 643cf86c9e6ea07a83eb788698992c6342c0a20a9ae4352c3d90ce577381cae4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_saha, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:04:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-ecc096ffeb811a5ed747939091b0d93c399811f9feb458a098f279b4412b9018-merged.mount: Deactivated successfully.
Nov 25 17:04:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:04:34 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2270052297' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:04:34 compute-0 nova_compute[254092]: 2025-11-25 17:04:34.295 254096 DEBUG oslo_concurrency.processutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:04:34 compute-0 nova_compute[254092]: 2025-11-25 17:04:34.325 254096 DEBUG nova.storage.rbd_utils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image fb2e8439-c6c2-4a88-9e88-faf385d9a4d9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:04:34 compute-0 nova_compute[254092]: 2025-11-25 17:04:34.331 254096 DEBUG oslo_concurrency.processutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:04:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2431: 321 pgs: 321 active+clean; 88 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:04:34 compute-0 podman[386562]: 2025-11-25 17:04:34.37235748 +0000 UTC m=+1.300793930 container remove 643cf86c9e6ea07a83eb788698992c6342c0a20a9ae4352c3d90ce577381cae4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:04:34 compute-0 systemd[1]: libpod-conmon-643cf86c9e6ea07a83eb788698992c6342c0a20a9ae4352c3d90ce577381cae4.scope: Deactivated successfully.
Nov 25 17:04:34 compute-0 nova_compute[254092]: 2025-11-25 17:04:34.421 254096 DEBUG nova.objects.instance [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lazy-loading 'migration_context' on Instance uuid c5d6f631-cec2-431d-b476-feafa21e4f80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:04:34 compute-0 nova_compute[254092]: 2025-11-25 17:04:34.436 254096 DEBUG nova.virt.libvirt.driver [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 17:04:34 compute-0 nova_compute[254092]: 2025-11-25 17:04:34.437 254096 DEBUG nova.virt.libvirt.driver [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Ensure instance console log exists: /var/lib/nova/instances/c5d6f631-cec2-431d-b476-feafa21e4f80/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 17:04:34 compute-0 nova_compute[254092]: 2025-11-25 17:04:34.437 254096 DEBUG oslo_concurrency.lockutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:04:34 compute-0 nova_compute[254092]: 2025-11-25 17:04:34.438 254096 DEBUG oslo_concurrency.lockutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:04:34 compute-0 nova_compute[254092]: 2025-11-25 17:04:34.438 254096 DEBUG oslo_concurrency.lockutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:04:34 compute-0 nova_compute[254092]: 2025-11-25 17:04:34.459 254096 DEBUG nova.network.neutron [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Successfully created port: b68643ca-2301-486a-984d-43fc41d1f773 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 17:04:34 compute-0 nova_compute[254092]: 2025-11-25 17:04:34.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:04:34 compute-0 nova_compute[254092]: 2025-11-25 17:04:34.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:04:34 compute-0 podman[386736]: 2025-11-25 17:04:34.537781207 +0000 UTC m=+0.023390463 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:04:34 compute-0 podman[386736]: 2025-11-25 17:04:34.740447006 +0000 UTC m=+0.226056292 container create e30332b8f143af0bd91a9738ad7b687276f8a487f6c9f74b3d65c796c14bf6e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_newton, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Nov 25 17:04:34 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2270052297' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:04:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:04:34 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2722532707' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:04:34 compute-0 nova_compute[254092]: 2025-11-25 17:04:34.842 254096 DEBUG oslo_concurrency.processutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:04:34 compute-0 nova_compute[254092]: 2025-11-25 17:04:34.843 254096 DEBUG nova.virt.libvirt.vif [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:04:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1064240069',display_name='tempest-TestNetworkBasicOps-server-1064240069',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1064240069',id=120,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJxRJG0xj4lNhOZp9ZWVpUqIVMXoOTqx2HT5kEbWz5Kzrp9lUM1esv4/Y9y+6X72HfZQHeQJVGMfF2AfhTMmKKwlb9fZPc7AjVKpzxCULHdu4UtL6CrGGLwggOltMQGwgQ==',key_name='tempest-TestNetworkBasicOps-808036867',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-ziqlckum',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:04:26Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=fb2e8439-c6c2-4a88-9e88-faf385d9a4d9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "address": "fa:16:3e:7c:d6:7a", "network": {"id": "136c69a7-c4f8-40a2-be13-7ef82b7b3709", "bridge": "br-int", "label": "tempest-network-smoke--1102379118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13b6cf4-60", "ovs_interfaceid": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:04:34 compute-0 nova_compute[254092]: 2025-11-25 17:04:34.844 254096 DEBUG nova.network.os_vif_util [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "address": "fa:16:3e:7c:d6:7a", "network": {"id": "136c69a7-c4f8-40a2-be13-7ef82b7b3709", "bridge": "br-int", "label": "tempest-network-smoke--1102379118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13b6cf4-60", "ovs_interfaceid": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:04:34 compute-0 nova_compute[254092]: 2025-11-25 17:04:34.845 254096 DEBUG nova.network.os_vif_util [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:d6:7a,bridge_name='br-int',has_traffic_filtering=True,id=a13b6cf4-602d-4af3-b369-9dfa273e1514,network=Network(136c69a7-c4f8-40a2-be13-7ef82b7b3709),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa13b6cf4-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:04:34 compute-0 nova_compute[254092]: 2025-11-25 17:04:34.846 254096 DEBUG nova.objects.instance [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'pci_devices' on Instance uuid fb2e8439-c6c2-4a88-9e88-faf385d9a4d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:04:34 compute-0 nova_compute[254092]: 2025-11-25 17:04:34.859 254096 DEBUG nova.virt.libvirt.driver [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] End _get_guest_xml xml=<domain type="kvm">
Nov 25 17:04:34 compute-0 nova_compute[254092]:   <uuid>fb2e8439-c6c2-4a88-9e88-faf385d9a4d9</uuid>
Nov 25 17:04:34 compute-0 nova_compute[254092]:   <name>instance-00000078</name>
Nov 25 17:04:34 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 17:04:34 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 17:04:34 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:04:34 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:       <nova:name>tempest-TestNetworkBasicOps-server-1064240069</nova:name>
Nov 25 17:04:34 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 17:04:33</nova:creationTime>
Nov 25 17:04:34 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 17:04:34 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 17:04:34 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 17:04:34 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 17:04:34 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:04:34 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 17:04:34 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 17:04:34 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 17:04:34 compute-0 nova_compute[254092]:         <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 17:04:34 compute-0 nova_compute[254092]:         <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 17:04:34 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 17:04:34 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 17:04:34 compute-0 nova_compute[254092]:         <nova:port uuid="a13b6cf4-602d-4af3-b369-9dfa273e1514">
Nov 25 17:04:34 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 17:04:34 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 17:04:34 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:04:34 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 17:04:34 compute-0 nova_compute[254092]:     <system>
Nov 25 17:04:34 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 17:04:34 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 17:04:34 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:04:34 compute-0 nova_compute[254092]:       <entry name="serial">fb2e8439-c6c2-4a88-9e88-faf385d9a4d9</entry>
Nov 25 17:04:34 compute-0 nova_compute[254092]:       <entry name="uuid">fb2e8439-c6c2-4a88-9e88-faf385d9a4d9</entry>
Nov 25 17:04:34 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     </system>
Nov 25 17:04:34 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:04:34 compute-0 nova_compute[254092]:   <os>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:   </os>
Nov 25 17:04:34 compute-0 nova_compute[254092]:   <features>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:   </features>
Nov 25 17:04:34 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 17:04:34 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:04:34 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 17:04:34 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:04:34 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 17:04:34 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9_disk">
Nov 25 17:04:34 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:       </source>
Nov 25 17:04:34 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:04:34 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:04:34 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 17:04:34 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9_disk.config">
Nov 25 17:04:34 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:       </source>
Nov 25 17:04:34 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:04:34 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:04:34 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 17:04:34 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:7c:d6:7a"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:       <target dev="tapa13b6cf4-60"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 17:04:34 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9/console.log" append="off"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     <video>
Nov 25 17:04:34 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     </video>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 17:04:34 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 17:04:34 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 17:04:34 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:04:34 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:04:34 compute-0 nova_compute[254092]: </domain>
Nov 25 17:04:34 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 17:04:34 compute-0 systemd[1]: Started libpod-conmon-e30332b8f143af0bd91a9738ad7b687276f8a487f6c9f74b3d65c796c14bf6e3.scope.
Nov 25 17:04:34 compute-0 nova_compute[254092]: 2025-11-25 17:04:34.860 254096 DEBUG nova.compute.manager [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Preparing to wait for external event network-vif-plugged-a13b6cf4-602d-4af3-b369-9dfa273e1514 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 17:04:34 compute-0 nova_compute[254092]: 2025-11-25 17:04:34.861 254096 DEBUG oslo_concurrency.lockutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:04:34 compute-0 nova_compute[254092]: 2025-11-25 17:04:34.862 254096 DEBUG oslo_concurrency.lockutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:04:34 compute-0 nova_compute[254092]: 2025-11-25 17:04:34.862 254096 DEBUG oslo_concurrency.lockutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:04:34 compute-0 nova_compute[254092]: 2025-11-25 17:04:34.864 254096 DEBUG nova.virt.libvirt.vif [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:04:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1064240069',display_name='tempest-TestNetworkBasicOps-server-1064240069',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1064240069',id=120,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJxRJG0xj4lNhOZp9ZWVpUqIVMXoOTqx2HT5kEbWz5Kzrp9lUM1esv4/Y9y+6X72HfZQHeQJVGMfF2AfhTMmKKwlb9fZPc7AjVKpzxCULHdu4UtL6CrGGLwggOltMQGwgQ==',key_name='tempest-TestNetworkBasicOps-808036867',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-ziqlckum',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:04:26Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=fb2e8439-c6c2-4a88-9e88-faf385d9a4d9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "address": "fa:16:3e:7c:d6:7a", "network": {"id": "136c69a7-c4f8-40a2-be13-7ef82b7b3709", "bridge": "br-int", "label": "tempest-network-smoke--1102379118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13b6cf4-60", "ovs_interfaceid": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:04:34 compute-0 nova_compute[254092]: 2025-11-25 17:04:34.864 254096 DEBUG nova.network.os_vif_util [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "address": "fa:16:3e:7c:d6:7a", "network": {"id": "136c69a7-c4f8-40a2-be13-7ef82b7b3709", "bridge": "br-int", "label": "tempest-network-smoke--1102379118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13b6cf4-60", "ovs_interfaceid": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:04:34 compute-0 nova_compute[254092]: 2025-11-25 17:04:34.865 254096 DEBUG nova.network.os_vif_util [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:d6:7a,bridge_name='br-int',has_traffic_filtering=True,id=a13b6cf4-602d-4af3-b369-9dfa273e1514,network=Network(136c69a7-c4f8-40a2-be13-7ef82b7b3709),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa13b6cf4-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:04:34 compute-0 nova_compute[254092]: 2025-11-25 17:04:34.865 254096 DEBUG os_vif [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:d6:7a,bridge_name='br-int',has_traffic_filtering=True,id=a13b6cf4-602d-4af3-b369-9dfa273e1514,network=Network(136c69a7-c4f8-40a2-be13-7ef82b7b3709),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa13b6cf4-60') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:04:34 compute-0 nova_compute[254092]: 2025-11-25 17:04:34.866 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:34 compute-0 nova_compute[254092]: 2025-11-25 17:04:34.867 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:04:34 compute-0 nova_compute[254092]: 2025-11-25 17:04:34.868 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:04:34 compute-0 nova_compute[254092]: 2025-11-25 17:04:34.873 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:34 compute-0 nova_compute[254092]: 2025-11-25 17:04:34.874 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa13b6cf4-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:04:34 compute-0 nova_compute[254092]: 2025-11-25 17:04:34.874 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa13b6cf4-60, col_values=(('external_ids', {'iface-id': 'a13b6cf4-602d-4af3-b369-9dfa273e1514', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7c:d6:7a', 'vm-uuid': 'fb2e8439-c6c2-4a88-9e88-faf385d9a4d9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:04:34 compute-0 nova_compute[254092]: 2025-11-25 17:04:34.876 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:34 compute-0 NetworkManager[48891]: <info>  [1764090274.8775] manager: (tapa13b6cf4-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/508)
Nov 25 17:04:34 compute-0 nova_compute[254092]: 2025-11-25 17:04:34.878 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:04:34 compute-0 nova_compute[254092]: 2025-11-25 17:04:34.884 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:34 compute-0 nova_compute[254092]: 2025-11-25 17:04:34.887 254096 INFO os_vif [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:d6:7a,bridge_name='br-int',has_traffic_filtering=True,id=a13b6cf4-602d-4af3-b369-9dfa273e1514,network=Network(136c69a7-c4f8-40a2-be13-7ef82b7b3709),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa13b6cf4-60')
Nov 25 17:04:34 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:04:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d6244b098f921f6a4720046a171e56b2891175ca8e453b1989eb2214f61da6d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:04:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d6244b098f921f6a4720046a171e56b2891175ca8e453b1989eb2214f61da6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:04:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d6244b098f921f6a4720046a171e56b2891175ca8e453b1989eb2214f61da6d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:04:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d6244b098f921f6a4720046a171e56b2891175ca8e453b1989eb2214f61da6d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:04:35 compute-0 nova_compute[254092]: 2025-11-25 17:04:35.156 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:35 compute-0 nova_compute[254092]: 2025-11-25 17:04:35.175 254096 DEBUG nova.virt.libvirt.driver [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:04:35 compute-0 nova_compute[254092]: 2025-11-25 17:04:35.176 254096 DEBUG nova.virt.libvirt.driver [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:04:35 compute-0 nova_compute[254092]: 2025-11-25 17:04:35.176 254096 DEBUG nova.virt.libvirt.driver [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No VIF found with MAC fa:16:3e:7c:d6:7a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:04:35 compute-0 nova_compute[254092]: 2025-11-25 17:04:35.177 254096 INFO nova.virt.libvirt.driver [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Using config drive
Nov 25 17:04:35 compute-0 nova_compute[254092]: 2025-11-25 17:04:35.208 254096 DEBUG nova.storage.rbd_utils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image fb2e8439-c6c2-4a88-9e88-faf385d9a4d9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:04:35 compute-0 podman[386736]: 2025-11-25 17:04:35.231800264 +0000 UTC m=+0.717409530 container init e30332b8f143af0bd91a9738ad7b687276f8a487f6c9f74b3d65c796c14bf6e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_newton, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:04:35 compute-0 podman[386736]: 2025-11-25 17:04:35.241975872 +0000 UTC m=+0.727585118 container start e30332b8f143af0bd91a9738ad7b687276f8a487f6c9f74b3d65c796c14bf6e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 17:04:35 compute-0 podman[386736]: 2025-11-25 17:04:35.392776198 +0000 UTC m=+0.878385434 container attach e30332b8f143af0bd91a9738ad7b687276f8a487f6c9f74b3d65c796c14bf6e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_newton, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:04:35 compute-0 nova_compute[254092]: 2025-11-25 17:04:35.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:04:35 compute-0 nova_compute[254092]: 2025-11-25 17:04:35.516 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:04:35 compute-0 nova_compute[254092]: 2025-11-25 17:04:35.517 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:04:35 compute-0 nova_compute[254092]: 2025-11-25 17:04:35.517 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:04:35 compute-0 nova_compute[254092]: 2025-11-25 17:04:35.518 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:04:35 compute-0 nova_compute[254092]: 2025-11-25 17:04:35.518 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:04:35 compute-0 ceph-mon[74985]: pgmap v2431: 321 pgs: 321 active+clean; 88 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:04:35 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2722532707' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:04:35 compute-0 nova_compute[254092]: 2025-11-25 17:04:35.834 254096 DEBUG nova.network.neutron [req-6ec81308-38a9-4ada-b909-f534e49cf7aa req-4a735ca0-dcca-4f55-9c88-4fd7c9113379 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Updated VIF entry in instance network info cache for port a13b6cf4-602d-4af3-b369-9dfa273e1514. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:04:35 compute-0 nova_compute[254092]: 2025-11-25 17:04:35.835 254096 DEBUG nova.network.neutron [req-6ec81308-38a9-4ada-b909-f534e49cf7aa req-4a735ca0-dcca-4f55-9c88-4fd7c9113379 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Updating instance_info_cache with network_info: [{"id": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "address": "fa:16:3e:7c:d6:7a", "network": {"id": "136c69a7-c4f8-40a2-be13-7ef82b7b3709", "bridge": "br-int", "label": "tempest-network-smoke--1102379118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13b6cf4-60", "ovs_interfaceid": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:04:35 compute-0 nova_compute[254092]: 2025-11-25 17:04:35.850 254096 DEBUG oslo_concurrency.lockutils [req-6ec81308-38a9-4ada-b909-f534e49cf7aa req-4a735ca0-dcca-4f55-9c88-4fd7c9113379 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:04:35 compute-0 nova_compute[254092]: 2025-11-25 17:04:35.887 254096 INFO nova.virt.libvirt.driver [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Creating config drive at /var/lib/nova/instances/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9/disk.config
Nov 25 17:04:35 compute-0 nova_compute[254092]: 2025-11-25 17:04:35.892 254096 DEBUG oslo_concurrency.processutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3i5zl3m1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:04:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:04:35 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3537005772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:04:35 compute-0 nova_compute[254092]: 2025-11-25 17:04:35.995 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:04:36 compute-0 elastic_newton[386754]: {
Nov 25 17:04:36 compute-0 elastic_newton[386754]:     "0": [
Nov 25 17:04:36 compute-0 elastic_newton[386754]:         {
Nov 25 17:04:36 compute-0 elastic_newton[386754]:             "devices": [
Nov 25 17:04:36 compute-0 elastic_newton[386754]:                 "/dev/loop3"
Nov 25 17:04:36 compute-0 elastic_newton[386754]:             ],
Nov 25 17:04:36 compute-0 elastic_newton[386754]:             "lv_name": "ceph_lv0",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:             "lv_size": "21470642176",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:             "name": "ceph_lv0",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:             "tags": {
Nov 25 17:04:36 compute-0 elastic_newton[386754]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:                 "ceph.cluster_name": "ceph",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:                 "ceph.crush_device_class": "",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:                 "ceph.encrypted": "0",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:                 "ceph.osd_id": "0",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:                 "ceph.type": "block",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:                 "ceph.vdo": "0"
Nov 25 17:04:36 compute-0 elastic_newton[386754]:             },
Nov 25 17:04:36 compute-0 elastic_newton[386754]:             "type": "block",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:             "vg_name": "ceph_vg0"
Nov 25 17:04:36 compute-0 elastic_newton[386754]:         }
Nov 25 17:04:36 compute-0 elastic_newton[386754]:     ],
Nov 25 17:04:36 compute-0 elastic_newton[386754]:     "1": [
Nov 25 17:04:36 compute-0 elastic_newton[386754]:         {
Nov 25 17:04:36 compute-0 elastic_newton[386754]:             "devices": [
Nov 25 17:04:36 compute-0 elastic_newton[386754]:                 "/dev/loop4"
Nov 25 17:04:36 compute-0 elastic_newton[386754]:             ],
Nov 25 17:04:36 compute-0 elastic_newton[386754]:             "lv_name": "ceph_lv1",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:             "lv_size": "21470642176",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:             "name": "ceph_lv1",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:             "tags": {
Nov 25 17:04:36 compute-0 elastic_newton[386754]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:                 "ceph.cluster_name": "ceph",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:                 "ceph.crush_device_class": "",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:                 "ceph.encrypted": "0",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:                 "ceph.osd_id": "1",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:                 "ceph.type": "block",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:                 "ceph.vdo": "0"
Nov 25 17:04:36 compute-0 elastic_newton[386754]:             },
Nov 25 17:04:36 compute-0 elastic_newton[386754]:             "type": "block",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:             "vg_name": "ceph_vg1"
Nov 25 17:04:36 compute-0 elastic_newton[386754]:         }
Nov 25 17:04:36 compute-0 elastic_newton[386754]:     ],
Nov 25 17:04:36 compute-0 elastic_newton[386754]:     "2": [
Nov 25 17:04:36 compute-0 elastic_newton[386754]:         {
Nov 25 17:04:36 compute-0 elastic_newton[386754]:             "devices": [
Nov 25 17:04:36 compute-0 elastic_newton[386754]:                 "/dev/loop5"
Nov 25 17:04:36 compute-0 elastic_newton[386754]:             ],
Nov 25 17:04:36 compute-0 elastic_newton[386754]:             "lv_name": "ceph_lv2",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:             "lv_size": "21470642176",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:             "name": "ceph_lv2",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:             "tags": {
Nov 25 17:04:36 compute-0 elastic_newton[386754]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:                 "ceph.cluster_name": "ceph",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:                 "ceph.crush_device_class": "",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:                 "ceph.encrypted": "0",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:                 "ceph.osd_id": "2",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:                 "ceph.type": "block",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:                 "ceph.vdo": "0"
Nov 25 17:04:36 compute-0 elastic_newton[386754]:             },
Nov 25 17:04:36 compute-0 elastic_newton[386754]:             "type": "block",
Nov 25 17:04:36 compute-0 elastic_newton[386754]:             "vg_name": "ceph_vg2"
Nov 25 17:04:36 compute-0 elastic_newton[386754]:         }
Nov 25 17:04:36 compute-0 elastic_newton[386754]:     ]
Nov 25 17:04:36 compute-0 elastic_newton[386754]: }
Nov 25 17:04:36 compute-0 nova_compute[254092]: 2025-11-25 17:04:36.051 254096 DEBUG oslo_concurrency.processutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3i5zl3m1" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:04:36 compute-0 systemd[1]: libpod-e30332b8f143af0bd91a9738ad7b687276f8a487f6c9f74b3d65c796c14bf6e3.scope: Deactivated successfully.
Nov 25 17:04:36 compute-0 podman[386736]: 2025-11-25 17:04:36.065262243 +0000 UTC m=+1.550871469 container died e30332b8f143af0bd91a9738ad7b687276f8a487f6c9f74b3d65c796c14bf6e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 17:04:36 compute-0 nova_compute[254092]: 2025-11-25 17:04:36.087 254096 DEBUG nova.storage.rbd_utils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image fb2e8439-c6c2-4a88-9e88-faf385d9a4d9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:04:36 compute-0 nova_compute[254092]: 2025-11-25 17:04:36.091 254096 DEBUG oslo_concurrency.processutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9/disk.config fb2e8439-c6c2-4a88-9e88-faf385d9a4d9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:04:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d6244b098f921f6a4720046a171e56b2891175ca8e453b1989eb2214f61da6d-merged.mount: Deactivated successfully.
Nov 25 17:04:36 compute-0 podman[386736]: 2025-11-25 17:04:36.158946023 +0000 UTC m=+1.644555269 container remove e30332b8f143af0bd91a9738ad7b687276f8a487f6c9f74b3d65c796c14bf6e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:04:36 compute-0 nova_compute[254092]: 2025-11-25 17:04:36.167 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:04:36 compute-0 nova_compute[254092]: 2025-11-25 17:04:36.168 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:04:36 compute-0 systemd[1]: libpod-conmon-e30332b8f143af0bd91a9738ad7b687276f8a487f6c9f74b3d65c796c14bf6e3.scope: Deactivated successfully.
Nov 25 17:04:36 compute-0 sudo[386499]: pam_unix(sudo:session): session closed for user root
Nov 25 17:04:36 compute-0 nova_compute[254092]: 2025-11-25 17:04:36.210 254096 DEBUG nova.network.neutron [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Successfully updated port: b68643ca-2301-486a-984d-43fc41d1f773 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:04:36 compute-0 nova_compute[254092]: 2025-11-25 17:04:36.226 254096 DEBUG oslo_concurrency.lockutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "refresh_cache-c5d6f631-cec2-431d-b476-feafa21e4f80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:04:36 compute-0 nova_compute[254092]: 2025-11-25 17:04:36.226 254096 DEBUG oslo_concurrency.lockutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquired lock "refresh_cache-c5d6f631-cec2-431d-b476-feafa21e4f80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:04:36 compute-0 nova_compute[254092]: 2025-11-25 17:04:36.227 254096 DEBUG nova.network.neutron [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 17:04:36 compute-0 sudo[386855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:04:36 compute-0 sudo[386855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:04:36 compute-0 sudo[386855]: pam_unix(sudo:session): session closed for user root
Nov 25 17:04:36 compute-0 nova_compute[254092]: 2025-11-25 17:04:36.321 254096 DEBUG oslo_concurrency.processutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9/disk.config fb2e8439-c6c2-4a88-9e88-faf385d9a4d9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.230s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:04:36 compute-0 nova_compute[254092]: 2025-11-25 17:04:36.322 254096 INFO nova.virt.libvirt.driver [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Deleting local config drive /var/lib/nova/instances/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9/disk.config because it was imported into RBD.
Nov 25 17:04:36 compute-0 sudo[386883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:04:36 compute-0 nova_compute[254092]: 2025-11-25 17:04:36.354 254096 DEBUG nova.compute.manager [req-02583fb5-c5d7-43d7-832c-29feec630068 req-453e6c27-35d1-41ba-8342-f6874611ce87 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Received event network-changed-b68643ca-2301-486a-984d-43fc41d1f773 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:04:36 compute-0 nova_compute[254092]: 2025-11-25 17:04:36.354 254096 DEBUG nova.compute.manager [req-02583fb5-c5d7-43d7-832c-29feec630068 req-453e6c27-35d1-41ba-8342-f6874611ce87 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Refreshing instance network info cache due to event network-changed-b68643ca-2301-486a-984d-43fc41d1f773. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:04:36 compute-0 nova_compute[254092]: 2025-11-25 17:04:36.355 254096 DEBUG oslo_concurrency.lockutils [req-02583fb5-c5d7-43d7-832c-29feec630068 req-453e6c27-35d1-41ba-8342-f6874611ce87 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-c5d6f631-cec2-431d-b476-feafa21e4f80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:04:36 compute-0 sudo[386883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:04:36 compute-0 sudo[386883]: pam_unix(sudo:session): session closed for user root
Nov 25 17:04:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2432: 321 pgs: 321 active+clean; 134 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Nov 25 17:04:36 compute-0 kernel: tapa13b6cf4-60: entered promiscuous mode
Nov 25 17:04:36 compute-0 NetworkManager[48891]: <info>  [1764090276.3878] manager: (tapa13b6cf4-60): new Tun device (/org/freedesktop/NetworkManager/Devices/509)
Nov 25 17:04:36 compute-0 ovn_controller[153477]: 2025-11-25T17:04:36Z|01237|binding|INFO|Claiming lport a13b6cf4-602d-4af3-b369-9dfa273e1514 for this chassis.
Nov 25 17:04:36 compute-0 ovn_controller[153477]: 2025-11-25T17:04:36Z|01238|binding|INFO|a13b6cf4-602d-4af3-b369-9dfa273e1514: Claiming fa:16:3e:7c:d6:7a 10.100.0.11
Nov 25 17:04:36 compute-0 nova_compute[254092]: 2025-11-25 17:04:36.390 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:36 compute-0 nova_compute[254092]: 2025-11-25 17:04:36.394 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:36 compute-0 nova_compute[254092]: 2025-11-25 17:04:36.399 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:36 compute-0 nova_compute[254092]: 2025-11-25 17:04:36.406 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:04:36 compute-0 nova_compute[254092]: 2025-11-25 17:04:36.407 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3686MB free_disk=59.967525482177734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:04:36 compute-0 nova_compute[254092]: 2025-11-25 17:04:36.408 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:04:36 compute-0 nova_compute[254092]: 2025-11-25 17:04:36.408 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:04:36 compute-0 nova_compute[254092]: 2025-11-25 17:04:36.413 254096 DEBUG nova.network.neutron [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:04:36 compute-0 systemd-udevd[386943]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.424 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:d6:7a 10.100.0.11'], port_security=['fa:16:3e:7c:d6:7a 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'fb2e8439-c6c2-4a88-9e88-faf385d9a4d9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-136c69a7-c4f8-40a2-be13-7ef82b7b3709', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e177dffd-fd87-489e-a59d-1d241fe7a148', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ad26d7dc-a577-438b-b143-107c43340ab4, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=a13b6cf4-602d-4af3-b369-9dfa273e1514) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.425 163338 INFO neutron.agent.ovn.metadata.agent [-] Port a13b6cf4-602d-4af3-b369-9dfa273e1514 in datapath 136c69a7-c4f8-40a2-be13-7ef82b7b3709 bound to our chassis
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.426 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 136c69a7-c4f8-40a2-be13-7ef82b7b3709
Nov 25 17:04:36 compute-0 systemd-machined[216343]: New machine qemu-152-instance-00000078.
Nov 25 17:04:36 compute-0 NetworkManager[48891]: <info>  [1764090276.4321] device (tapa13b6cf4-60): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:04:36 compute-0 NetworkManager[48891]: <info>  [1764090276.4329] device (tapa13b6cf4-60): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.438 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4452e7b8-eb4b-4c8a-ba34-bca5d04435bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.439 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap136c69a7-c1 in ovnmeta-136c69a7-c4f8-40a2-be13-7ef82b7b3709 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.441 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap136c69a7-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.441 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[10b36087-3979-459e-ac2a-d2c8d2aae6cb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.441 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[54a56681-8766-4eab-bb67-9e9f25bc2965]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.451 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[d2e38b11-9096-43af-9f3f-bf9c4b1dc71f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:36 compute-0 systemd[1]: Started Virtual Machine qemu-152-instance-00000078.
Nov 25 17:04:36 compute-0 sudo[386913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:04:36 compute-0 nova_compute[254092]: 2025-11-25 17:04:36.469 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:36 compute-0 ovn_controller[153477]: 2025-11-25T17:04:36Z|01239|binding|INFO|Setting lport a13b6cf4-602d-4af3-b369-9dfa273e1514 ovn-installed in OVS
Nov 25 17:04:36 compute-0 ovn_controller[153477]: 2025-11-25T17:04:36Z|01240|binding|INFO|Setting lport a13b6cf4-602d-4af3-b369-9dfa273e1514 up in Southbound
Nov 25 17:04:36 compute-0 nova_compute[254092]: 2025-11-25 17:04:36.471 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:36 compute-0 sudo[386913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.473 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[adcb4468-76cf-452a-9faa-ac8f3c3954e1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:36 compute-0 sudo[386913]: pam_unix(sudo:session): session closed for user root
Nov 25 17:04:36 compute-0 nova_compute[254092]: 2025-11-25 17:04:36.498 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance fb2e8439-c6c2-4a88-9e88-faf385d9a4d9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 17:04:36 compute-0 nova_compute[254092]: 2025-11-25 17:04:36.498 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance c5d6f631-cec2-431d-b476-feafa21e4f80 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 17:04:36 compute-0 nova_compute[254092]: 2025-11-25 17:04:36.498 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:04:36 compute-0 nova_compute[254092]: 2025-11-25 17:04:36.498 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.506 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[fe4644bd-419b-4c74-845c-5a45f1e61bcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:36 compute-0 NetworkManager[48891]: <info>  [1764090276.5130] manager: (tap136c69a7-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/510)
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.513 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c8007109-81c3-4164-b216-473196f68ecf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:36 compute-0 systemd-udevd[386948]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:04:36 compute-0 sudo[386953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:04:36 compute-0 sudo[386953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.546 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1e0c5286-30eb-4f17-8863-d3166b6e9b19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:36 compute-0 nova_compute[254092]: 2025-11-25 17:04:36.549 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.549 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a2e26eca-00be-4b6d-abf1-383767bf7fcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:36 compute-0 NetworkManager[48891]: <info>  [1764090276.5735] device (tap136c69a7-c0): carrier: link connected
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.579 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[0d0a3601-3a7d-4ea3-a04b-e93b7332ff92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.596 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4986fdbe-ce31-4507-b755-7c77efa92e45]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap136c69a7-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:85:24'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 365], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 673416, 'reachable_time': 42133, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 387005, 'error': None, 'target': 'ovnmeta-136c69a7-c4f8-40a2-be13-7ef82b7b3709', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.614 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c08235ed-a52b-45c9-a349-be77a5835900]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef4:8524'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 673416, 'tstamp': 673416}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 387006, 'error': None, 'target': 'ovnmeta-136c69a7-c4f8-40a2-be13-7ef82b7b3709', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.630 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1353b96a-4a2e-4efd-8ccf-8e730796bb25]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap136c69a7-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:85:24'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 365], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 673416, 'reachable_time': 42133, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 387008, 'error': None, 'target': 'ovnmeta-136c69a7-c4f8-40a2-be13-7ef82b7b3709', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.664 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2c1b153b-a006-4a91-bcdb-c479764c7e52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.728 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2018b91d-de80-4787-9c23-3d1dece207ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.729 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap136c69a7-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.729 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.730 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap136c69a7-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:04:36 compute-0 kernel: tap136c69a7-c0: entered promiscuous mode
Nov 25 17:04:36 compute-0 NetworkManager[48891]: <info>  [1764090276.7321] manager: (tap136c69a7-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/511)
Nov 25 17:04:36 compute-0 nova_compute[254092]: 2025-11-25 17:04:36.733 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.736 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap136c69a7-c0, col_values=(('external_ids', {'iface-id': '77b25960-ed8e-4022-ac16-27312b8189a1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:04:36 compute-0 ovn_controller[153477]: 2025-11-25T17:04:36Z|01241|binding|INFO|Releasing lport 77b25960-ed8e-4022-ac16-27312b8189a1 from this chassis (sb_readonly=0)
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.738 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/136c69a7-c4f8-40a2-be13-7ef82b7b3709.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/136c69a7-c4f8-40a2-be13-7ef82b7b3709.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.740 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c6362363-4d1e-41dc-9c9a-2f112e17ae89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.740 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]: global
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-136c69a7-c4f8-40a2-be13-7ef82b7b3709
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/136c69a7-c4f8-40a2-be13-7ef82b7b3709.pid.haproxy
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 136c69a7-c4f8-40a2-be13-7ef82b7b3709
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 17:04:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:36.741 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-136c69a7-c4f8-40a2-be13-7ef82b7b3709', 'env', 'PROCESS_TAG=haproxy-136c69a7-c4f8-40a2-be13-7ef82b7b3709', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/136c69a7-c4f8-40a2-be13-7ef82b7b3709.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 17:04:36 compute-0 nova_compute[254092]: 2025-11-25 17:04:36.738 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:36 compute-0 nova_compute[254092]: 2025-11-25 17:04:36.751 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:36 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3537005772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:04:36 compute-0 nova_compute[254092]: 2025-11-25 17:04:36.803 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090276.8025553, fb2e8439-c6c2-4a88-9e88-faf385d9a4d9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:04:36 compute-0 nova_compute[254092]: 2025-11-25 17:04:36.804 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] VM Started (Lifecycle Event)
Nov 25 17:04:36 compute-0 nova_compute[254092]: 2025-11-25 17:04:36.826 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:04:36 compute-0 nova_compute[254092]: 2025-11-25 17:04:36.829 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090276.8026872, fb2e8439-c6c2-4a88-9e88-faf385d9a4d9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:04:36 compute-0 nova_compute[254092]: 2025-11-25 17:04:36.829 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] VM Paused (Lifecycle Event)
Nov 25 17:04:36 compute-0 nova_compute[254092]: 2025-11-25 17:04:36.846 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:04:36 compute-0 nova_compute[254092]: 2025-11-25 17:04:36.850 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:04:36 compute-0 podman[387115]: 2025-11-25 17:04:36.867713564 +0000 UTC m=+0.042593810 container create e63da288822d456aa9d088b24e4664b34e458e84a7b60b9a29de0138d586fc3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hellman, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:04:36 compute-0 nova_compute[254092]: 2025-11-25 17:04:36.870 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:04:36 compute-0 systemd[1]: Started libpod-conmon-e63da288822d456aa9d088b24e4664b34e458e84a7b60b9a29de0138d586fc3c.scope.
Nov 25 17:04:36 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:04:36 compute-0 podman[387115]: 2025-11-25 17:04:36.850577453 +0000 UTC m=+0.025457719 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:04:36 compute-0 podman[387115]: 2025-11-25 17:04:36.94705646 +0000 UTC m=+0.121936726 container init e63da288822d456aa9d088b24e4664b34e458e84a7b60b9a29de0138d586fc3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hellman, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:04:36 compute-0 podman[387115]: 2025-11-25 17:04:36.955152732 +0000 UTC m=+0.130032978 container start e63da288822d456aa9d088b24e4664b34e458e84a7b60b9a29de0138d586fc3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:04:36 compute-0 dazzling_hellman[387131]: 167 167
Nov 25 17:04:36 compute-0 podman[387115]: 2025-11-25 17:04:36.958676178 +0000 UTC m=+0.133556444 container attach e63da288822d456aa9d088b24e4664b34e458e84a7b60b9a29de0138d586fc3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hellman, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:04:36 compute-0 systemd[1]: libpod-e63da288822d456aa9d088b24e4664b34e458e84a7b60b9a29de0138d586fc3c.scope: Deactivated successfully.
Nov 25 17:04:36 compute-0 podman[387115]: 2025-11-25 17:04:36.961802504 +0000 UTC m=+0.136682760 container died e63da288822d456aa9d088b24e4664b34e458e84a7b60b9a29de0138d586fc3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hellman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:04:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-e88c1eb3cb7eda942452ec02566e68cd9d0585bbd09bf543ec5aed6e7997d9a8-merged.mount: Deactivated successfully.
Nov 25 17:04:36 compute-0 podman[387115]: 2025-11-25 17:04:36.993359029 +0000 UTC m=+0.168239275 container remove e63da288822d456aa9d088b24e4664b34e458e84a7b60b9a29de0138d586fc3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hellman, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Nov 25 17:04:37 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:04:37 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1684110260' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:04:37 compute-0 systemd[1]: libpod-conmon-e63da288822d456aa9d088b24e4664b34e458e84a7b60b9a29de0138d586fc3c.scope: Deactivated successfully.
Nov 25 17:04:37 compute-0 nova_compute[254092]: 2025-11-25 17:04:37.036 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:04:37 compute-0 nova_compute[254092]: 2025-11-25 17:04:37.043 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:04:37 compute-0 nova_compute[254092]: 2025-11-25 17:04:37.054 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:04:37 compute-0 nova_compute[254092]: 2025-11-25 17:04:37.069 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:04:37 compute-0 nova_compute[254092]: 2025-11-25 17:04:37.070 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:04:37 compute-0 podman[387172]: 2025-11-25 17:04:37.084873819 +0000 UTC m=+0.041672294 container create be459a871ff06d6d95ace21d31adaea26bc0530b8e7d8f38d64b7c0fa602731f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-136c69a7-c4f8-40a2-be13-7ef82b7b3709, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 17:04:37 compute-0 systemd[1]: Started libpod-conmon-be459a871ff06d6d95ace21d31adaea26bc0530b8e7d8f38d64b7c0fa602731f.scope.
Nov 25 17:04:37 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:04:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1ef1abaaa64a067771ec5da28f54ab4615764922df7ecc3992312613eab4146/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 17:04:37 compute-0 podman[387193]: 2025-11-25 17:04:37.156269278 +0000 UTC m=+0.037805798 container create 0238a6dab6d498660d84e2e83d737a3939973104e21fb0ca19dbbbeee65980b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Nov 25 17:04:37 compute-0 podman[387172]: 2025-11-25 17:04:37.064118521 +0000 UTC m=+0.020917006 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 17:04:37 compute-0 podman[387172]: 2025-11-25 17:04:37.165706587 +0000 UTC m=+0.122505072 container init be459a871ff06d6d95ace21d31adaea26bc0530b8e7d8f38d64b7c0fa602731f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-136c69a7-c4f8-40a2-be13-7ef82b7b3709, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:04:37 compute-0 podman[387172]: 2025-11-25 17:04:37.171743442 +0000 UTC m=+0.128541917 container start be459a871ff06d6d95ace21d31adaea26bc0530b8e7d8f38d64b7c0fa602731f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-136c69a7-c4f8-40a2-be13-7ef82b7b3709, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 25 17:04:37 compute-0 systemd[1]: Started libpod-conmon-0238a6dab6d498660d84e2e83d737a3939973104e21fb0ca19dbbbeee65980b8.scope.
Nov 25 17:04:37 compute-0 neutron-haproxy-ovnmeta-136c69a7-c4f8-40a2-be13-7ef82b7b3709[387200]: [NOTICE]   (387211) : New worker (387217) forked
Nov 25 17:04:37 compute-0 neutron-haproxy-ovnmeta-136c69a7-c4f8-40a2-be13-7ef82b7b3709[387200]: [NOTICE]   (387211) : Loading success.
Nov 25 17:04:37 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:04:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f7c4458fe556d7b06e62521f4fb6c681de07bb960672e67442b813e380d4acc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:04:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f7c4458fe556d7b06e62521f4fb6c681de07bb960672e67442b813e380d4acc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:04:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f7c4458fe556d7b06e62521f4fb6c681de07bb960672e67442b813e380d4acc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:04:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f7c4458fe556d7b06e62521f4fb6c681de07bb960672e67442b813e380d4acc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:04:37 compute-0 podman[387193]: 2025-11-25 17:04:37.14031465 +0000 UTC m=+0.021851200 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:04:37 compute-0 podman[387193]: 2025-11-25 17:04:37.252757104 +0000 UTC m=+0.134293624 container init 0238a6dab6d498660d84e2e83d737a3939973104e21fb0ca19dbbbeee65980b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_proskuriakova, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:04:37 compute-0 podman[387193]: 2025-11-25 17:04:37.260828196 +0000 UTC m=+0.142364716 container start 0238a6dab6d498660d84e2e83d737a3939973104e21fb0ca19dbbbeee65980b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 17:04:37 compute-0 podman[387193]: 2025-11-25 17:04:37.264365752 +0000 UTC m=+0.145902262 container attach 0238a6dab6d498660d84e2e83d737a3939973104e21fb0ca19dbbbeee65980b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Nov 25 17:04:37 compute-0 nova_compute[254092]: 2025-11-25 17:04:37.392 254096 DEBUG nova.network.neutron [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Updating instance_info_cache with network_info: [{"id": "b68643ca-2301-486a-984d-43fc41d1f773", "address": "fa:16:3e:dd:cf:7c", "network": {"id": "14274c68-4eff-4ea0-b9e4-434b52f5c24f", "bridge": "br-int", "label": "tempest-network-smoke--836711466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb68643ca-23", "ovs_interfaceid": "b68643ca-2301-486a-984d-43fc41d1f773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:04:37 compute-0 nova_compute[254092]: 2025-11-25 17:04:37.409 254096 DEBUG oslo_concurrency.lockutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Releasing lock "refresh_cache-c5d6f631-cec2-431d-b476-feafa21e4f80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:04:37 compute-0 nova_compute[254092]: 2025-11-25 17:04:37.410 254096 DEBUG nova.compute.manager [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Instance network_info: |[{"id": "b68643ca-2301-486a-984d-43fc41d1f773", "address": "fa:16:3e:dd:cf:7c", "network": {"id": "14274c68-4eff-4ea0-b9e4-434b52f5c24f", "bridge": "br-int", "label": "tempest-network-smoke--836711466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb68643ca-23", "ovs_interfaceid": "b68643ca-2301-486a-984d-43fc41d1f773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 17:04:37 compute-0 nova_compute[254092]: 2025-11-25 17:04:37.410 254096 DEBUG oslo_concurrency.lockutils [req-02583fb5-c5d7-43d7-832c-29feec630068 req-453e6c27-35d1-41ba-8342-f6874611ce87 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-c5d6f631-cec2-431d-b476-feafa21e4f80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:04:37 compute-0 nova_compute[254092]: 2025-11-25 17:04:37.410 254096 DEBUG nova.network.neutron [req-02583fb5-c5d7-43d7-832c-29feec630068 req-453e6c27-35d1-41ba-8342-f6874611ce87 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Refreshing network info cache for port b68643ca-2301-486a-984d-43fc41d1f773 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:04:37 compute-0 nova_compute[254092]: 2025-11-25 17:04:37.413 254096 DEBUG nova.virt.libvirt.driver [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Start _get_guest_xml network_info=[{"id": "b68643ca-2301-486a-984d-43fc41d1f773", "address": "fa:16:3e:dd:cf:7c", "network": {"id": "14274c68-4eff-4ea0-b9e4-434b52f5c24f", "bridge": "br-int", "label": "tempest-network-smoke--836711466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb68643ca-23", "ovs_interfaceid": "b68643ca-2301-486a-984d-43fc41d1f773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 17:04:37 compute-0 nova_compute[254092]: 2025-11-25 17:04:37.417 254096 WARNING nova.virt.libvirt.driver [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:04:37 compute-0 nova_compute[254092]: 2025-11-25 17:04:37.424 254096 DEBUG nova.virt.libvirt.host [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 17:04:37 compute-0 nova_compute[254092]: 2025-11-25 17:04:37.425 254096 DEBUG nova.virt.libvirt.host [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 17:04:37 compute-0 nova_compute[254092]: 2025-11-25 17:04:37.428 254096 DEBUG nova.virt.libvirt.host [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 17:04:37 compute-0 nova_compute[254092]: 2025-11-25 17:04:37.428 254096 DEBUG nova.virt.libvirt.host [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 17:04:37 compute-0 nova_compute[254092]: 2025-11-25 17:04:37.429 254096 DEBUG nova.virt.libvirt.driver [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 17:04:37 compute-0 nova_compute[254092]: 2025-11-25 17:04:37.429 254096 DEBUG nova.virt.hardware [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 17:04:37 compute-0 nova_compute[254092]: 2025-11-25 17:04:37.430 254096 DEBUG nova.virt.hardware [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 17:04:37 compute-0 nova_compute[254092]: 2025-11-25 17:04:37.430 254096 DEBUG nova.virt.hardware [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 17:04:37 compute-0 nova_compute[254092]: 2025-11-25 17:04:37.430 254096 DEBUG nova.virt.hardware [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 17:04:37 compute-0 nova_compute[254092]: 2025-11-25 17:04:37.431 254096 DEBUG nova.virt.hardware [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 17:04:37 compute-0 nova_compute[254092]: 2025-11-25 17:04:37.431 254096 DEBUG nova.virt.hardware [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 17:04:37 compute-0 nova_compute[254092]: 2025-11-25 17:04:37.431 254096 DEBUG nova.virt.hardware [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 17:04:37 compute-0 nova_compute[254092]: 2025-11-25 17:04:37.431 254096 DEBUG nova.virt.hardware [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 17:04:37 compute-0 nova_compute[254092]: 2025-11-25 17:04:37.432 254096 DEBUG nova.virt.hardware [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 17:04:37 compute-0 nova_compute[254092]: 2025-11-25 17:04:37.432 254096 DEBUG nova.virt.hardware [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 17:04:37 compute-0 nova_compute[254092]: 2025-11-25 17:04:37.432 254096 DEBUG nova.virt.hardware [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 17:04:37 compute-0 nova_compute[254092]: 2025-11-25 17:04:37.436 254096 DEBUG oslo_concurrency.processutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:04:37 compute-0 ceph-mon[74985]: pgmap v2432: 321 pgs: 321 active+clean; 134 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Nov 25 17:04:37 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1684110260' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:04:37 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:04:37 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3794994296' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:04:37 compute-0 nova_compute[254092]: 2025-11-25 17:04:37.920 254096 DEBUG oslo_concurrency.processutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:04:37 compute-0 nova_compute[254092]: 2025-11-25 17:04:37.944 254096 DEBUG nova.storage.rbd_utils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image c5d6f631-cec2-431d-b476-feafa21e4f80_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:04:37 compute-0 nova_compute[254092]: 2025-11-25 17:04:37.948 254096 DEBUG oslo_concurrency.processutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:04:38 compute-0 interesting_proskuriakova[387215]: {
Nov 25 17:04:38 compute-0 interesting_proskuriakova[387215]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:04:38 compute-0 interesting_proskuriakova[387215]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:04:38 compute-0 interesting_proskuriakova[387215]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:04:38 compute-0 interesting_proskuriakova[387215]:         "osd_id": 1,
Nov 25 17:04:38 compute-0 interesting_proskuriakova[387215]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:04:38 compute-0 interesting_proskuriakova[387215]:         "type": "bluestore"
Nov 25 17:04:38 compute-0 interesting_proskuriakova[387215]:     },
Nov 25 17:04:38 compute-0 interesting_proskuriakova[387215]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:04:38 compute-0 interesting_proskuriakova[387215]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:04:38 compute-0 interesting_proskuriakova[387215]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:04:38 compute-0 interesting_proskuriakova[387215]:         "osd_id": 2,
Nov 25 17:04:38 compute-0 interesting_proskuriakova[387215]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:04:38 compute-0 interesting_proskuriakova[387215]:         "type": "bluestore"
Nov 25 17:04:38 compute-0 interesting_proskuriakova[387215]:     },
Nov 25 17:04:38 compute-0 interesting_proskuriakova[387215]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:04:38 compute-0 interesting_proskuriakova[387215]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:04:38 compute-0 interesting_proskuriakova[387215]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:04:38 compute-0 interesting_proskuriakova[387215]:         "osd_id": 0,
Nov 25 17:04:38 compute-0 interesting_proskuriakova[387215]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:04:38 compute-0 interesting_proskuriakova[387215]:         "type": "bluestore"
Nov 25 17:04:38 compute-0 interesting_proskuriakova[387215]:     }
Nov 25 17:04:38 compute-0 interesting_proskuriakova[387215]: }
Nov 25 17:04:38 compute-0 systemd[1]: libpod-0238a6dab6d498660d84e2e83d737a3939973104e21fb0ca19dbbbeee65980b8.scope: Deactivated successfully.
Nov 25 17:04:38 compute-0 podman[387193]: 2025-11-25 17:04:38.202591856 +0000 UTC m=+1.084128376 container died 0238a6dab6d498660d84e2e83d737a3939973104e21fb0ca19dbbbeee65980b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:04:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f7c4458fe556d7b06e62521f4fb6c681de07bb960672e67442b813e380d4acc-merged.mount: Deactivated successfully.
Nov 25 17:04:38 compute-0 podman[387193]: 2025-11-25 17:04:38.269886942 +0000 UTC m=+1.151423462 container remove 0238a6dab6d498660d84e2e83d737a3939973104e21fb0ca19dbbbeee65980b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_proskuriakova, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 17:04:38 compute-0 systemd[1]: libpod-conmon-0238a6dab6d498660d84e2e83d737a3939973104e21fb0ca19dbbbeee65980b8.scope: Deactivated successfully.
Nov 25 17:04:38 compute-0 sudo[386953]: pam_unix(sudo:session): session closed for user root
Nov 25 17:04:38 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:04:38 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:04:38 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:04:38 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:04:38 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 2bfa7d58-78fd-4e61-b93c-ae769a6cff5b does not exist
Nov 25 17:04:38 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev efa73423-c184-40b5-86d4-5eb73c0c55ca does not exist
Nov 25 17:04:38 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:04:38 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3015512114' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:04:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2433: 321 pgs: 321 active+clean; 134 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.381 254096 DEBUG oslo_concurrency.processutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:04:38 compute-0 sudo[387331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.385 254096 DEBUG nova.virt.libvirt.vif [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:04:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1468495184',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1468495184',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-804365052-acc',id=121,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBItBRZm2yDqYd+wVxwOqSlZSDzlMlxevUAlkeawR8Ad9B/U2D7ctHPpd22ukw4f9eeesjVGXob1S5He80yqU0J/RQxjf579CY5kWM2qsSzAfmn3IDw4uBaYcepmxt/Q0ZA==',key_name='tempest-TestSecurityGroupsBasicOps-1015716509',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f19d2d0776e84c3c99c640c0b7ed3743',ramdisk_id='',reservation_id='r-h2rb4r91',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-804365052',owner_user_name='tempest-TestSecurityGroupsBasicOps-804365052-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:04:30Z,user_data=None,user_id='72f29c5d97464520bee19cb128258fd5',uuid=c5d6f631-cec2-431d-b476-feafa21e4f80,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b68643ca-2301-486a-984d-43fc41d1f773", "address": "fa:16:3e:dd:cf:7c", "network": {"id": "14274c68-4eff-4ea0-b9e4-434b52f5c24f", "bridge": "br-int", "label": "tempest-network-smoke--836711466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb68643ca-23", "ovs_interfaceid": "b68643ca-2301-486a-984d-43fc41d1f773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.385 254096 DEBUG nova.network.os_vif_util [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converting VIF {"id": "b68643ca-2301-486a-984d-43fc41d1f773", "address": "fa:16:3e:dd:cf:7c", "network": {"id": "14274c68-4eff-4ea0-b9e4-434b52f5c24f", "bridge": "br-int", "label": "tempest-network-smoke--836711466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb68643ca-23", "ovs_interfaceid": "b68643ca-2301-486a-984d-43fc41d1f773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.386 254096 DEBUG nova.network.os_vif_util [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:cf:7c,bridge_name='br-int',has_traffic_filtering=True,id=b68643ca-2301-486a-984d-43fc41d1f773,network=Network(14274c68-4eff-4ea0-b9e4-434b52f5c24f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb68643ca-23') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.387 254096 DEBUG nova.objects.instance [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lazy-loading 'pci_devices' on Instance uuid c5d6f631-cec2-431d-b476-feafa21e4f80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:04:38 compute-0 sudo[387331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:04:38 compute-0 sudo[387331]: pam_unix(sudo:session): session closed for user root
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.403 254096 DEBUG nova.virt.libvirt.driver [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] End _get_guest_xml xml=<domain type="kvm">
Nov 25 17:04:38 compute-0 nova_compute[254092]:   <uuid>c5d6f631-cec2-431d-b476-feafa21e4f80</uuid>
Nov 25 17:04:38 compute-0 nova_compute[254092]:   <name>instance-00000079</name>
Nov 25 17:04:38 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 17:04:38 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 17:04:38 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:04:38 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1468495184</nova:name>
Nov 25 17:04:38 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 17:04:37</nova:creationTime>
Nov 25 17:04:38 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 17:04:38 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 17:04:38 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 17:04:38 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 17:04:38 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:04:38 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 17:04:38 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 17:04:38 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 17:04:38 compute-0 nova_compute[254092]:         <nova:user uuid="72f29c5d97464520bee19cb128258fd5">tempest-TestSecurityGroupsBasicOps-804365052-project-member</nova:user>
Nov 25 17:04:38 compute-0 nova_compute[254092]:         <nova:project uuid="f19d2d0776e84c3c99c640c0b7ed3743">tempest-TestSecurityGroupsBasicOps-804365052</nova:project>
Nov 25 17:04:38 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 17:04:38 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 17:04:38 compute-0 nova_compute[254092]:         <nova:port uuid="b68643ca-2301-486a-984d-43fc41d1f773">
Nov 25 17:04:38 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 17:04:38 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 17:04:38 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:04:38 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 17:04:38 compute-0 nova_compute[254092]:     <system>
Nov 25 17:04:38 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 17:04:38 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 17:04:38 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:04:38 compute-0 nova_compute[254092]:       <entry name="serial">c5d6f631-cec2-431d-b476-feafa21e4f80</entry>
Nov 25 17:04:38 compute-0 nova_compute[254092]:       <entry name="uuid">c5d6f631-cec2-431d-b476-feafa21e4f80</entry>
Nov 25 17:04:38 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     </system>
Nov 25 17:04:38 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:04:38 compute-0 nova_compute[254092]:   <os>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:   </os>
Nov 25 17:04:38 compute-0 nova_compute[254092]:   <features>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:   </features>
Nov 25 17:04:38 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 17:04:38 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:04:38 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 17:04:38 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:04:38 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 17:04:38 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/c5d6f631-cec2-431d-b476-feafa21e4f80_disk">
Nov 25 17:04:38 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:       </source>
Nov 25 17:04:38 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:04:38 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:04:38 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 17:04:38 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/c5d6f631-cec2-431d-b476-feafa21e4f80_disk.config">
Nov 25 17:04:38 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:       </source>
Nov 25 17:04:38 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:04:38 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:04:38 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 17:04:38 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:dd:cf:7c"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:       <target dev="tapb68643ca-23"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 17:04:38 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/c5d6f631-cec2-431d-b476-feafa21e4f80/console.log" append="off"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     <video>
Nov 25 17:04:38 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     </video>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 17:04:38 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 17:04:38 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 17:04:38 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:04:38 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:04:38 compute-0 nova_compute[254092]: </domain>
Nov 25 17:04:38 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.404 254096 DEBUG nova.compute.manager [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Preparing to wait for external event network-vif-plugged-b68643ca-2301-486a-984d-43fc41d1f773 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.404 254096 DEBUG oslo_concurrency.lockutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "c5d6f631-cec2-431d-b476-feafa21e4f80-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.404 254096 DEBUG oslo_concurrency.lockutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "c5d6f631-cec2-431d-b476-feafa21e4f80-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.405 254096 DEBUG oslo_concurrency.lockutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "c5d6f631-cec2-431d-b476-feafa21e4f80-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.405 254096 DEBUG nova.virt.libvirt.vif [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:04:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1468495184',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1468495184',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-804365052-acc',id=121,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBItBRZm2yDqYd+wVxwOqSlZSDzlMlxevUAlkeawR8Ad9B/U2D7ctHPpd22ukw4f9eeesjVGXob1S5He80yqU0J/RQxjf579CY5kWM2qsSzAfmn3IDw4uBaYcepmxt/Q0ZA==',key_name='tempest-TestSecurityGroupsBasicOps-1015716509',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f19d2d0776e84c3c99c640c0b7ed3743',ramdisk_id='',reservation_id='r-h2rb4r91',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-804365052',owner_user_name='tempest-TestSecurityGroupsBasicOps-804365052-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:04:30Z,user_data=None,user_id='72f29c5d97464520bee19cb128258fd5',uuid=c5d6f631-cec2-431d-b476-feafa21e4f80,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b68643ca-2301-486a-984d-43fc41d1f773", "address": "fa:16:3e:dd:cf:7c", "network": {"id": "14274c68-4eff-4ea0-b9e4-434b52f5c24f", "bridge": "br-int", "label": "tempest-network-smoke--836711466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb68643ca-23", "ovs_interfaceid": "b68643ca-2301-486a-984d-43fc41d1f773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.406 254096 DEBUG nova.network.os_vif_util [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converting VIF {"id": "b68643ca-2301-486a-984d-43fc41d1f773", "address": "fa:16:3e:dd:cf:7c", "network": {"id": "14274c68-4eff-4ea0-b9e4-434b52f5c24f", "bridge": "br-int", "label": "tempest-network-smoke--836711466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb68643ca-23", "ovs_interfaceid": "b68643ca-2301-486a-984d-43fc41d1f773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.406 254096 DEBUG nova.network.os_vif_util [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:cf:7c,bridge_name='br-int',has_traffic_filtering=True,id=b68643ca-2301-486a-984d-43fc41d1f773,network=Network(14274c68-4eff-4ea0-b9e4-434b52f5c24f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb68643ca-23') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.406 254096 DEBUG os_vif [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:cf:7c,bridge_name='br-int',has_traffic_filtering=True,id=b68643ca-2301-486a-984d-43fc41d1f773,network=Network(14274c68-4eff-4ea0-b9e4-434b52f5c24f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb68643ca-23') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.407 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.407 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.408 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.410 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.411 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb68643ca-23, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.411 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb68643ca-23, col_values=(('external_ids', {'iface-id': 'b68643ca-2301-486a-984d-43fc41d1f773', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dd:cf:7c', 'vm-uuid': 'c5d6f631-cec2-431d-b476-feafa21e4f80'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.413 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:38 compute-0 NetworkManager[48891]: <info>  [1764090278.4136] manager: (tapb68643ca-23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/512)
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.415 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.421 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.422 254096 INFO os_vif [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:cf:7c,bridge_name='br-int',has_traffic_filtering=True,id=b68643ca-2301-486a-984d-43fc41d1f773,network=Network(14274c68-4eff-4ea0-b9e4-434b52f5c24f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb68643ca-23')
Nov 25 17:04:38 compute-0 sudo[387358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:04:38 compute-0 sudo[387358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:04:38 compute-0 sudo[387358]: pam_unix(sudo:session): session closed for user root
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.469 254096 DEBUG nova.virt.libvirt.driver [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.469 254096 DEBUG nova.virt.libvirt.driver [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.469 254096 DEBUG nova.virt.libvirt.driver [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] No VIF found with MAC fa:16:3e:dd:cf:7c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.469 254096 INFO nova.virt.libvirt.driver [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Using config drive
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.489 254096 DEBUG nova.storage.rbd_utils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image c5d6f631-cec2-431d-b476-feafa21e4f80_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.698 254096 DEBUG nova.compute.manager [req-4e4d79ab-e2c6-4d00-99c3-a93fd7e37177 req-538b3c2a-6618-4096-98b6-5ede9e0d4e2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Received event network-vif-plugged-a13b6cf4-602d-4af3-b369-9dfa273e1514 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.698 254096 DEBUG oslo_concurrency.lockutils [req-4e4d79ab-e2c6-4d00-99c3-a93fd7e37177 req-538b3c2a-6618-4096-98b6-5ede9e0d4e2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.699 254096 DEBUG oslo_concurrency.lockutils [req-4e4d79ab-e2c6-4d00-99c3-a93fd7e37177 req-538b3c2a-6618-4096-98b6-5ede9e0d4e2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.699 254096 DEBUG oslo_concurrency.lockutils [req-4e4d79ab-e2c6-4d00-99c3-a93fd7e37177 req-538b3c2a-6618-4096-98b6-5ede9e0d4e2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.699 254096 DEBUG nova.compute.manager [req-4e4d79ab-e2c6-4d00-99c3-a93fd7e37177 req-538b3c2a-6618-4096-98b6-5ede9e0d4e2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Processing event network-vif-plugged-a13b6cf4-602d-4af3-b369-9dfa273e1514 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.699 254096 DEBUG nova.compute.manager [req-4e4d79ab-e2c6-4d00-99c3-a93fd7e37177 req-538b3c2a-6618-4096-98b6-5ede9e0d4e2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Received event network-vif-plugged-a13b6cf4-602d-4af3-b369-9dfa273e1514 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.699 254096 DEBUG oslo_concurrency.lockutils [req-4e4d79ab-e2c6-4d00-99c3-a93fd7e37177 req-538b3c2a-6618-4096-98b6-5ede9e0d4e2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.700 254096 DEBUG oslo_concurrency.lockutils [req-4e4d79ab-e2c6-4d00-99c3-a93fd7e37177 req-538b3c2a-6618-4096-98b6-5ede9e0d4e2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.700 254096 DEBUG oslo_concurrency.lockutils [req-4e4d79ab-e2c6-4d00-99c3-a93fd7e37177 req-538b3c2a-6618-4096-98b6-5ede9e0d4e2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.700 254096 DEBUG nova.compute.manager [req-4e4d79ab-e2c6-4d00-99c3-a93fd7e37177 req-538b3c2a-6618-4096-98b6-5ede9e0d4e2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] No waiting events found dispatching network-vif-plugged-a13b6cf4-602d-4af3-b369-9dfa273e1514 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.700 254096 WARNING nova.compute.manager [req-4e4d79ab-e2c6-4d00-99c3-a93fd7e37177 req-538b3c2a-6618-4096-98b6-5ede9e0d4e2c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Received unexpected event network-vif-plugged-a13b6cf4-602d-4af3-b369-9dfa273e1514 for instance with vm_state building and task_state spawning.
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.701 254096 DEBUG nova.compute.manager [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.705 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090278.7047448, fb2e8439-c6c2-4a88-9e88-faf385d9a4d9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.705 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] VM Resumed (Lifecycle Event)
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.706 254096 DEBUG nova.virt.libvirt.driver [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.710 254096 INFO nova.virt.libvirt.driver [-] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Instance spawned successfully.
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.710 254096 DEBUG nova.virt.libvirt.driver [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.733 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.739 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.742 254096 DEBUG nova.virt.libvirt.driver [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.742 254096 DEBUG nova.virt.libvirt.driver [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.742 254096 DEBUG nova.virt.libvirt.driver [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.743 254096 DEBUG nova.virt.libvirt.driver [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.743 254096 DEBUG nova.virt.libvirt.driver [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.743 254096 DEBUG nova.virt.libvirt.driver [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.766 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:04:38 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3794994296' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:04:38 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:04:38 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:04:38 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3015512114' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.800 254096 INFO nova.compute.manager [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Took 11.89 seconds to spawn the instance on the hypervisor.
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.800 254096 DEBUG nova.compute.manager [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:04:38 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.863 254096 INFO nova.compute.manager [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Took 12.91 seconds to build instance.
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.888 254096 INFO nova.virt.libvirt.driver [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Creating config drive at /var/lib/nova/instances/c5d6f631-cec2-431d-b476-feafa21e4f80/disk.config
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.896 254096 DEBUG oslo_concurrency.processutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c5d6f631-cec2-431d-b476-feafa21e4f80/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptypfhp13 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.946 254096 DEBUG nova.network.neutron [req-02583fb5-c5d7-43d7-832c-29feec630068 req-453e6c27-35d1-41ba-8342-f6874611ce87 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Updated VIF entry in instance network info cache for port b68643ca-2301-486a-984d-43fc41d1f773. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.947 254096 DEBUG nova.network.neutron [req-02583fb5-c5d7-43d7-832c-29feec630068 req-453e6c27-35d1-41ba-8342-f6874611ce87 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Updating instance_info_cache with network_info: [{"id": "b68643ca-2301-486a-984d-43fc41d1f773", "address": "fa:16:3e:dd:cf:7c", "network": {"id": "14274c68-4eff-4ea0-b9e4-434b52f5c24f", "bridge": "br-int", "label": "tempest-network-smoke--836711466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb68643ca-23", "ovs_interfaceid": "b68643ca-2301-486a-984d-43fc41d1f773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.950 254096 DEBUG oslo_concurrency.lockutils [None req-059998a7-65b4-42ca-962a-b181361c5344 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:04:38 compute-0 nova_compute[254092]: 2025-11-25 17:04:38.961 254096 DEBUG oslo_concurrency.lockutils [req-02583fb5-c5d7-43d7-832c-29feec630068 req-453e6c27-35d1-41ba-8342-f6874611ce87 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-c5d6f631-cec2-431d-b476-feafa21e4f80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:04:39 compute-0 nova_compute[254092]: 2025-11-25 17:04:39.049 254096 DEBUG oslo_concurrency.processutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c5d6f631-cec2-431d-b476-feafa21e4f80/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptypfhp13" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:04:39 compute-0 nova_compute[254092]: 2025-11-25 17:04:39.086 254096 DEBUG nova.storage.rbd_utils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image c5d6f631-cec2-431d-b476-feafa21e4f80_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:04:39 compute-0 nova_compute[254092]: 2025-11-25 17:04:39.092 254096 DEBUG oslo_concurrency.processutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c5d6f631-cec2-431d-b476-feafa21e4f80/disk.config c5d6f631-cec2-431d-b476-feafa21e4f80_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:04:39 compute-0 nova_compute[254092]: 2025-11-25 17:04:39.139 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:04:39 compute-0 nova_compute[254092]: 2025-11-25 17:04:39.272 254096 DEBUG oslo_concurrency.processutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c5d6f631-cec2-431d-b476-feafa21e4f80/disk.config c5d6f631-cec2-431d-b476-feafa21e4f80_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:04:39 compute-0 nova_compute[254092]: 2025-11-25 17:04:39.273 254096 INFO nova.virt.libvirt.driver [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Deleting local config drive /var/lib/nova/instances/c5d6f631-cec2-431d-b476-feafa21e4f80/disk.config because it was imported into RBD.
Nov 25 17:04:39 compute-0 kernel: tapb68643ca-23: entered promiscuous mode
Nov 25 17:04:39 compute-0 NetworkManager[48891]: <info>  [1764090279.3737] manager: (tapb68643ca-23): new Tun device (/org/freedesktop/NetworkManager/Devices/513)
Nov 25 17:04:39 compute-0 ovn_controller[153477]: 2025-11-25T17:04:39Z|01242|binding|INFO|Claiming lport b68643ca-2301-486a-984d-43fc41d1f773 for this chassis.
Nov 25 17:04:39 compute-0 ovn_controller[153477]: 2025-11-25T17:04:39Z|01243|binding|INFO|b68643ca-2301-486a-984d-43fc41d1f773: Claiming fa:16:3e:dd:cf:7c 10.100.0.6
Nov 25 17:04:39 compute-0 nova_compute[254092]: 2025-11-25 17:04:39.375 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:39 compute-0 nova_compute[254092]: 2025-11-25 17:04:39.378 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:39 compute-0 NetworkManager[48891]: <info>  [1764090279.3943] device (tapb68643ca-23): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.389 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:cf:7c 10.100.0.6'], port_security=['fa:16:3e:dd:cf:7c 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c5d6f631-cec2-431d-b476-feafa21e4f80', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14274c68-4eff-4ea0-b9e4-434b52f5c24f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f19d2d0776e84c3c99c640c0b7ed3743', 'neutron:revision_number': '2', 'neutron:security_group_ids': '141c3ba7-cca0-4ab6-b807-4cd786e57588 76216c71-c251-4ffd-abb8-124655d34fd9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6fde4dc0-c42b-4191-847a-6d09a5cf68e7, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=b68643ca-2301-486a-984d-43fc41d1f773) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.390 163338 INFO neutron.agent.ovn.metadata.agent [-] Port b68643ca-2301-486a-984d-43fc41d1f773 in datapath 14274c68-4eff-4ea0-b9e4-434b52f5c24f bound to our chassis
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.391 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 14274c68-4eff-4ea0-b9e4-434b52f5c24f
Nov 25 17:04:39 compute-0 NetworkManager[48891]: <info>  [1764090279.4002] device (tapb68643ca-23): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.410 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a7ed97a8-abb1-4ad1-af80-8c193546e401]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.411 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap14274c68-41 in ovnmeta-14274c68-4eff-4ea0-b9e4-434b52f5c24f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.414 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap14274c68-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.415 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9e8c719a-f95a-4fed-9a1c-ee1ccbcc7315]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.416 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[87db727e-31aa-4f86-89b5-47ad946ffab2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.431 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[ddc66dc3-8e2d-49d0-882a-2085c82a4634]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:39 compute-0 systemd-machined[216343]: New machine qemu-153-instance-00000079.
Nov 25 17:04:39 compute-0 systemd[1]: Started Virtual Machine qemu-153-instance-00000079.
Nov 25 17:04:39 compute-0 nova_compute[254092]: 2025-11-25 17:04:39.460 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.459 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c02ff18b-1ff5-4673-a6c4-e49ada03f499]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:39 compute-0 ovn_controller[153477]: 2025-11-25T17:04:39Z|01244|binding|INFO|Setting lport b68643ca-2301-486a-984d-43fc41d1f773 ovn-installed in OVS
Nov 25 17:04:39 compute-0 ovn_controller[153477]: 2025-11-25T17:04:39Z|01245|binding|INFO|Setting lport b68643ca-2301-486a-984d-43fc41d1f773 up in Southbound
Nov 25 17:04:39 compute-0 nova_compute[254092]: 2025-11-25 17:04:39.470 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.493 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[503a3439-9ae3-4ffe-aa90-53bc40138a71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:39 compute-0 systemd-udevd[387459]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:04:39 compute-0 NetworkManager[48891]: <info>  [1764090279.5022] manager: (tap14274c68-40): new Veth device (/org/freedesktop/NetworkManager/Devices/514)
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.504 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9f6ab014-57bc-469d-b6ff-9570c26d4777]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.535 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e79349f9-3445-4858-b67d-f709c7603629]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.537 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a33b7696-b9d8-4843-9d16-82db54c0d475]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:39 compute-0 NetworkManager[48891]: <info>  [1764090279.5662] device (tap14274c68-40): carrier: link connected
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.573 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[b08c6657-42bd-4844-8dc5-dc8efd6d9139]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.591 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[90c1077d-8936-456c-9879-7c672fa9ef3b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14274c68-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:aa:c8:c9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 367], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 673715, 'reachable_time': 25107, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 387489, 'error': None, 'target': 'ovnmeta-14274c68-4eff-4ea0-b9e4-434b52f5c24f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.615 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[804ad0a5-9689-4ac0-ac18-66fbf5ae8c5e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feaa:c8c9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 673715, 'tstamp': 673715}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 387490, 'error': None, 'target': 'ovnmeta-14274c68-4eff-4ea0-b9e4-434b52f5c24f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.633 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[03f51289-8ea7-4060-982d-102b8e2fb33e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14274c68-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:aa:c8:c9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 367], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 673715, 'reachable_time': 25107, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 387491, 'error': None, 'target': 'ovnmeta-14274c68-4eff-4ea0-b9e4-434b52f5c24f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.683 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7d3c9d7a-d4ff-4a22-871f-7a5d2cedf91e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.746 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[385fc5cd-013b-4e2f-bfed-542e64b9cf4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.747 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14274c68-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.747 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.748 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14274c68-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:04:39 compute-0 nova_compute[254092]: 2025-11-25 17:04:39.750 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:39 compute-0 NetworkManager[48891]: <info>  [1764090279.7506] manager: (tap14274c68-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/515)
Nov 25 17:04:39 compute-0 kernel: tap14274c68-40: entered promiscuous mode
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.758 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap14274c68-40, col_values=(('external_ids', {'iface-id': '426285e8-6729-4138-ae8b-27c8efd6b758'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:04:39 compute-0 ovn_controller[153477]: 2025-11-25T17:04:39Z|01246|binding|INFO|Releasing lport 426285e8-6729-4138-ae8b-27c8efd6b758 from this chassis (sb_readonly=0)
Nov 25 17:04:39 compute-0 nova_compute[254092]: 2025-11-25 17:04:39.759 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.763 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/14274c68-4eff-4ea0-b9e4-434b52f5c24f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/14274c68-4eff-4ea0-b9e4-434b52f5c24f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.765 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2a088ad7-9f91-45cc-bddf-00ec87719651]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.765 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]: global
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-14274c68-4eff-4ea0-b9e4-434b52f5c24f
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/14274c68-4eff-4ea0-b9e4-434b52f5c24f.pid.haproxy
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 14274c68-4eff-4ea0-b9e4-434b52f5c24f
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 17:04:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:04:39.767 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-14274c68-4eff-4ea0-b9e4-434b52f5c24f', 'env', 'PROCESS_TAG=haproxy-14274c68-4eff-4ea0-b9e4-434b52f5c24f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/14274c68-4eff-4ea0-b9e4-434b52f5c24f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 17:04:39 compute-0 nova_compute[254092]: 2025-11-25 17:04:39.773 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:39 compute-0 ceph-mon[74985]: pgmap v2433: 321 pgs: 321 active+clean; 134 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Nov 25 17:04:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:04:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:04:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:04:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:04:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:04:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:04:40 compute-0 nova_compute[254092]: 2025-11-25 17:04:40.137 254096 DEBUG nova.compute.manager [req-e8ad5cb0-6f04-4f40-b2a1-76a4539d3e30 req-26256f81-2841-4ef6-a7c9-0bd7238bb2a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Received event network-vif-plugged-b68643ca-2301-486a-984d-43fc41d1f773 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:04:40 compute-0 nova_compute[254092]: 2025-11-25 17:04:40.138 254096 DEBUG oslo_concurrency.lockutils [req-e8ad5cb0-6f04-4f40-b2a1-76a4539d3e30 req-26256f81-2841-4ef6-a7c9-0bd7238bb2a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "c5d6f631-cec2-431d-b476-feafa21e4f80-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:04:40 compute-0 nova_compute[254092]: 2025-11-25 17:04:40.139 254096 DEBUG oslo_concurrency.lockutils [req-e8ad5cb0-6f04-4f40-b2a1-76a4539d3e30 req-26256f81-2841-4ef6-a7c9-0bd7238bb2a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "c5d6f631-cec2-431d-b476-feafa21e4f80-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:04:40 compute-0 nova_compute[254092]: 2025-11-25 17:04:40.139 254096 DEBUG oslo_concurrency.lockutils [req-e8ad5cb0-6f04-4f40-b2a1-76a4539d3e30 req-26256f81-2841-4ef6-a7c9-0bd7238bb2a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "c5d6f631-cec2-431d-b476-feafa21e4f80-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:04:40 compute-0 nova_compute[254092]: 2025-11-25 17:04:40.139 254096 DEBUG nova.compute.manager [req-e8ad5cb0-6f04-4f40-b2a1-76a4539d3e30 req-26256f81-2841-4ef6-a7c9-0bd7238bb2a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Processing event network-vif-plugged-b68643ca-2301-486a-984d-43fc41d1f773 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 17:04:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:04:40
Nov 25 17:04:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:04:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:04:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['images', 'default.rgw.control', '.rgw.root', '.mgr', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', 'backups', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log']
Nov 25 17:04:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:04:40 compute-0 nova_compute[254092]: 2025-11-25 17:04:40.158 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:40 compute-0 podman[387523]: 2025-11-25 17:04:40.218440418 +0000 UTC m=+0.050418514 container create 69b068750b4a7797212dbb8a32206f2f3076426921bb309f2b4e7ad633bde27c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-14274c68-4eff-4ea0-b9e4-434b52f5c24f, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 17:04:40 compute-0 systemd[1]: Started libpod-conmon-69b068750b4a7797212dbb8a32206f2f3076426921bb309f2b4e7ad633bde27c.scope.
Nov 25 17:04:40 compute-0 podman[387523]: 2025-11-25 17:04:40.18938518 +0000 UTC m=+0.021363276 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 17:04:40 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:04:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bc152218d65abff31069b204813b7e011958f1c03540ebac42a934b9611177c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 17:04:40 compute-0 podman[387523]: 2025-11-25 17:04:40.310200034 +0000 UTC m=+0.142178110 container init 69b068750b4a7797212dbb8a32206f2f3076426921bb309f2b4e7ad633bde27c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-14274c68-4eff-4ea0-b9e4-434b52f5c24f, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 25 17:04:40 compute-0 podman[387523]: 2025-11-25 17:04:40.314935834 +0000 UTC m=+0.146913910 container start 69b068750b4a7797212dbb8a32206f2f3076426921bb309f2b4e7ad633bde27c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-14274c68-4eff-4ea0-b9e4-434b52f5c24f, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 17:04:40 compute-0 neutron-haproxy-ovnmeta-14274c68-4eff-4ea0-b9e4-434b52f5c24f[387539]: [NOTICE]   (387543) : New worker (387545) forked
Nov 25 17:04:40 compute-0 neutron-haproxy-ovnmeta-14274c68-4eff-4ea0-b9e4-434b52f5c24f[387539]: [NOTICE]   (387543) : Loading success.
Nov 25 17:04:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2434: 321 pgs: 321 active+clean; 134 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 3.6 MiB/s wr, 70 op/s
Nov 25 17:04:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:04:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:04:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:04:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:04:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:04:40 compute-0 nova_compute[254092]: 2025-11-25 17:04:40.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:04:40 compute-0 nova_compute[254092]: 2025-11-25 17:04:40.498 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:04:40 compute-0 nova_compute[254092]: 2025-11-25 17:04:40.550 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 17:04:40 compute-0 nova_compute[254092]: 2025-11-25 17:04:40.574 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090280.574189, c5d6f631-cec2-431d-b476-feafa21e4f80 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:04:40 compute-0 nova_compute[254092]: 2025-11-25 17:04:40.575 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] VM Started (Lifecycle Event)
Nov 25 17:04:40 compute-0 nova_compute[254092]: 2025-11-25 17:04:40.581 254096 DEBUG nova.compute.manager [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 17:04:40 compute-0 nova_compute[254092]: 2025-11-25 17:04:40.604 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:04:40 compute-0 nova_compute[254092]: 2025-11-25 17:04:40.605 254096 DEBUG nova.virt.libvirt.driver [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 17:04:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:04:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:04:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:04:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:04:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:04:40 compute-0 nova_compute[254092]: 2025-11-25 17:04:40.612 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:04:40 compute-0 nova_compute[254092]: 2025-11-25 17:04:40.617 254096 INFO nova.virt.libvirt.driver [-] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Instance spawned successfully.
Nov 25 17:04:40 compute-0 nova_compute[254092]: 2025-11-25 17:04:40.619 254096 DEBUG nova.virt.libvirt.driver [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 17:04:40 compute-0 nova_compute[254092]: 2025-11-25 17:04:40.673 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:04:40 compute-0 nova_compute[254092]: 2025-11-25 17:04:40.677 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090280.5751793, c5d6f631-cec2-431d-b476-feafa21e4f80 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:04:40 compute-0 nova_compute[254092]: 2025-11-25 17:04:40.680 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] VM Paused (Lifecycle Event)
Nov 25 17:04:40 compute-0 nova_compute[254092]: 2025-11-25 17:04:40.700 254096 DEBUG nova.virt.libvirt.driver [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:04:40 compute-0 nova_compute[254092]: 2025-11-25 17:04:40.701 254096 DEBUG nova.virt.libvirt.driver [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:04:40 compute-0 nova_compute[254092]: 2025-11-25 17:04:40.701 254096 DEBUG nova.virt.libvirt.driver [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:04:40 compute-0 nova_compute[254092]: 2025-11-25 17:04:40.702 254096 DEBUG nova.virt.libvirt.driver [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:04:40 compute-0 nova_compute[254092]: 2025-11-25 17:04:40.702 254096 DEBUG nova.virt.libvirt.driver [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:04:40 compute-0 nova_compute[254092]: 2025-11-25 17:04:40.702 254096 DEBUG nova.virt.libvirt.driver [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:04:40 compute-0 nova_compute[254092]: 2025-11-25 17:04:40.721 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:04:40 compute-0 nova_compute[254092]: 2025-11-25 17:04:40.724 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090280.597982, c5d6f631-cec2-431d-b476-feafa21e4f80 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:04:40 compute-0 nova_compute[254092]: 2025-11-25 17:04:40.724 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] VM Resumed (Lifecycle Event)
Nov 25 17:04:40 compute-0 nova_compute[254092]: 2025-11-25 17:04:40.747 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:04:40 compute-0 nova_compute[254092]: 2025-11-25 17:04:40.750 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:04:40 compute-0 nova_compute[254092]: 2025-11-25 17:04:40.755 254096 INFO nova.compute.manager [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Took 10.13 seconds to spawn the instance on the hypervisor.
Nov 25 17:04:40 compute-0 nova_compute[254092]: 2025-11-25 17:04:40.755 254096 DEBUG nova.compute.manager [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:04:40 compute-0 nova_compute[254092]: 2025-11-25 17:04:40.763 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:04:40 compute-0 nova_compute[254092]: 2025-11-25 17:04:40.811 254096 INFO nova.compute.manager [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Took 11.52 seconds to build instance.
Nov 25 17:04:40 compute-0 nova_compute[254092]: 2025-11-25 17:04:40.823 254096 DEBUG oslo_concurrency.lockutils [None req-3669d51a-25ee-4517-94a6-2cd16f0f3049 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "c5d6f631-cec2-431d-b476-feafa21e4f80" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:04:41 compute-0 ceph-mon[74985]: pgmap v2434: 321 pgs: 321 active+clean; 134 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 3.6 MiB/s wr, 70 op/s
Nov 25 17:04:42 compute-0 nova_compute[254092]: 2025-11-25 17:04:42.215 254096 DEBUG nova.compute.manager [req-cf043702-6e95-4357-9bb0-69f4f66bd88d req-5db277ca-4af4-4b91-988b-c463f3e9e5e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Received event network-vif-plugged-b68643ca-2301-486a-984d-43fc41d1f773 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:04:42 compute-0 nova_compute[254092]: 2025-11-25 17:04:42.215 254096 DEBUG oslo_concurrency.lockutils [req-cf043702-6e95-4357-9bb0-69f4f66bd88d req-5db277ca-4af4-4b91-988b-c463f3e9e5e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "c5d6f631-cec2-431d-b476-feafa21e4f80-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:04:42 compute-0 nova_compute[254092]: 2025-11-25 17:04:42.215 254096 DEBUG oslo_concurrency.lockutils [req-cf043702-6e95-4357-9bb0-69f4f66bd88d req-5db277ca-4af4-4b91-988b-c463f3e9e5e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "c5d6f631-cec2-431d-b476-feafa21e4f80-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:04:42 compute-0 nova_compute[254092]: 2025-11-25 17:04:42.215 254096 DEBUG oslo_concurrency.lockutils [req-cf043702-6e95-4357-9bb0-69f4f66bd88d req-5db277ca-4af4-4b91-988b-c463f3e9e5e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "c5d6f631-cec2-431d-b476-feafa21e4f80-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:04:42 compute-0 nova_compute[254092]: 2025-11-25 17:04:42.215 254096 DEBUG nova.compute.manager [req-cf043702-6e95-4357-9bb0-69f4f66bd88d req-5db277ca-4af4-4b91-988b-c463f3e9e5e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] No waiting events found dispatching network-vif-plugged-b68643ca-2301-486a-984d-43fc41d1f773 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:04:42 compute-0 nova_compute[254092]: 2025-11-25 17:04:42.216 254096 WARNING nova.compute.manager [req-cf043702-6e95-4357-9bb0-69f4f66bd88d req-5db277ca-4af4-4b91-988b-c463f3e9e5e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Received unexpected event network-vif-plugged-b68643ca-2301-486a-984d-43fc41d1f773 for instance with vm_state active and task_state None.
Nov 25 17:04:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2435: 321 pgs: 321 active+clean; 134 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.2 MiB/s wr, 113 op/s
Nov 25 17:04:43 compute-0 nova_compute[254092]: 2025-11-25 17:04:43.475 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:04:44 compute-0 ceph-mon[74985]: pgmap v2435: 321 pgs: 321 active+clean; 134 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.2 MiB/s wr, 113 op/s
Nov 25 17:04:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2436: 321 pgs: 321 active+clean; 134 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Nov 25 17:04:45 compute-0 nova_compute[254092]: 2025-11-25 17:04:45.160 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:45 compute-0 NetworkManager[48891]: <info>  [1764090285.3937] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/516)
Nov 25 17:04:45 compute-0 NetworkManager[48891]: <info>  [1764090285.3943] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/517)
Nov 25 17:04:45 compute-0 ovn_controller[153477]: 2025-11-25T17:04:45Z|01247|binding|INFO|Releasing lport 77b25960-ed8e-4022-ac16-27312b8189a1 from this chassis (sb_readonly=0)
Nov 25 17:04:45 compute-0 ovn_controller[153477]: 2025-11-25T17:04:45Z|01248|binding|INFO|Releasing lport 426285e8-6729-4138-ae8b-27c8efd6b758 from this chassis (sb_readonly=0)
Nov 25 17:04:45 compute-0 nova_compute[254092]: 2025-11-25 17:04:45.402 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:45 compute-0 ovn_controller[153477]: 2025-11-25T17:04:45Z|01249|binding|INFO|Releasing lport 77b25960-ed8e-4022-ac16-27312b8189a1 from this chassis (sb_readonly=0)
Nov 25 17:04:45 compute-0 ovn_controller[153477]: 2025-11-25T17:04:45Z|01250|binding|INFO|Releasing lport 426285e8-6729-4138-ae8b-27c8efd6b758 from this chassis (sb_readonly=0)
Nov 25 17:04:45 compute-0 nova_compute[254092]: 2025-11-25 17:04:45.418 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:45 compute-0 nova_compute[254092]: 2025-11-25 17:04:45.437 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:45 compute-0 podman[387597]: 2025-11-25 17:04:45.648987697 +0000 UTC m=+0.062086133 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Nov 25 17:04:45 compute-0 podman[387598]: 2025-11-25 17:04:45.654704864 +0000 UTC m=+0.063499033 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 25 17:04:45 compute-0 podman[387599]: 2025-11-25 17:04:45.701896249 +0000 UTC m=+0.107273453 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 17:04:45 compute-0 nova_compute[254092]: 2025-11-25 17:04:45.798 254096 DEBUG nova.compute.manager [req-54031d2b-5a30-419a-b5ee-db8877ec9569 req-4b302e9c-15b1-4c63-b943-fb7fc8c24676 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Received event network-changed-a13b6cf4-602d-4af3-b369-9dfa273e1514 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:04:45 compute-0 nova_compute[254092]: 2025-11-25 17:04:45.799 254096 DEBUG nova.compute.manager [req-54031d2b-5a30-419a-b5ee-db8877ec9569 req-4b302e9c-15b1-4c63-b943-fb7fc8c24676 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Refreshing instance network info cache due to event network-changed-a13b6cf4-602d-4af3-b369-9dfa273e1514. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:04:45 compute-0 nova_compute[254092]: 2025-11-25 17:04:45.799 254096 DEBUG oslo_concurrency.lockutils [req-54031d2b-5a30-419a-b5ee-db8877ec9569 req-4b302e9c-15b1-4c63-b943-fb7fc8c24676 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:04:45 compute-0 nova_compute[254092]: 2025-11-25 17:04:45.799 254096 DEBUG oslo_concurrency.lockutils [req-54031d2b-5a30-419a-b5ee-db8877ec9569 req-4b302e9c-15b1-4c63-b943-fb7fc8c24676 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:04:45 compute-0 nova_compute[254092]: 2025-11-25 17:04:45.799 254096 DEBUG nova.network.neutron [req-54031d2b-5a30-419a-b5ee-db8877ec9569 req-4b302e9c-15b1-4c63-b943-fb7fc8c24676 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Refreshing network info cache for port a13b6cf4-602d-4af3-b369-9dfa273e1514 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:04:46 compute-0 ceph-mon[74985]: pgmap v2436: 321 pgs: 321 active+clean; 134 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Nov 25 17:04:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2437: 321 pgs: 321 active+clean; 134 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Nov 25 17:04:47 compute-0 nova_compute[254092]: 2025-11-25 17:04:47.479 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:04:47 compute-0 nova_compute[254092]: 2025-11-25 17:04:47.502 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Triggering sync for uuid fb2e8439-c6c2-4a88-9e88-faf385d9a4d9 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 25 17:04:47 compute-0 nova_compute[254092]: 2025-11-25 17:04:47.503 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Triggering sync for uuid c5d6f631-cec2-431d-b476-feafa21e4f80 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 25 17:04:47 compute-0 nova_compute[254092]: 2025-11-25 17:04:47.504 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:04:47 compute-0 nova_compute[254092]: 2025-11-25 17:04:47.505 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:04:47 compute-0 nova_compute[254092]: 2025-11-25 17:04:47.505 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "c5d6f631-cec2-431d-b476-feafa21e4f80" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:04:47 compute-0 nova_compute[254092]: 2025-11-25 17:04:47.506 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "c5d6f631-cec2-431d-b476-feafa21e4f80" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:04:47 compute-0 nova_compute[254092]: 2025-11-25 17:04:47.544 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "c5d6f631-cec2-431d-b476-feafa21e4f80" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.038s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:04:47 compute-0 nova_compute[254092]: 2025-11-25 17:04:47.547 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.043s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:04:47 compute-0 nova_compute[254092]: 2025-11-25 17:04:47.757 254096 DEBUG nova.network.neutron [req-54031d2b-5a30-419a-b5ee-db8877ec9569 req-4b302e9c-15b1-4c63-b943-fb7fc8c24676 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Updated VIF entry in instance network info cache for port a13b6cf4-602d-4af3-b369-9dfa273e1514. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:04:47 compute-0 nova_compute[254092]: 2025-11-25 17:04:47.759 254096 DEBUG nova.network.neutron [req-54031d2b-5a30-419a-b5ee-db8877ec9569 req-4b302e9c-15b1-4c63-b943-fb7fc8c24676 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Updating instance_info_cache with network_info: [{"id": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "address": "fa:16:3e:7c:d6:7a", "network": {"id": "136c69a7-c4f8-40a2-be13-7ef82b7b3709", "bridge": "br-int", "label": "tempest-network-smoke--1102379118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13b6cf4-60", "ovs_interfaceid": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:04:47 compute-0 nova_compute[254092]: 2025-11-25 17:04:47.783 254096 DEBUG oslo_concurrency.lockutils [req-54031d2b-5a30-419a-b5ee-db8877ec9569 req-4b302e9c-15b1-4c63-b943-fb7fc8c24676 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:04:47 compute-0 nova_compute[254092]: 2025-11-25 17:04:47.877 254096 DEBUG nova.compute.manager [req-a427043a-51a2-49f5-bc22-f20b1a5160bc req-6efdad15-79c2-4f1d-ab1a-59d571c20202 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Received event network-changed-b68643ca-2301-486a-984d-43fc41d1f773 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:04:47 compute-0 nova_compute[254092]: 2025-11-25 17:04:47.878 254096 DEBUG nova.compute.manager [req-a427043a-51a2-49f5-bc22-f20b1a5160bc req-6efdad15-79c2-4f1d-ab1a-59d571c20202 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Refreshing instance network info cache due to event network-changed-b68643ca-2301-486a-984d-43fc41d1f773. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:04:47 compute-0 nova_compute[254092]: 2025-11-25 17:04:47.879 254096 DEBUG oslo_concurrency.lockutils [req-a427043a-51a2-49f5-bc22-f20b1a5160bc req-6efdad15-79c2-4f1d-ab1a-59d571c20202 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-c5d6f631-cec2-431d-b476-feafa21e4f80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:04:47 compute-0 nova_compute[254092]: 2025-11-25 17:04:47.879 254096 DEBUG oslo_concurrency.lockutils [req-a427043a-51a2-49f5-bc22-f20b1a5160bc req-6efdad15-79c2-4f1d-ab1a-59d571c20202 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-c5d6f631-cec2-431d-b476-feafa21e4f80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:04:47 compute-0 nova_compute[254092]: 2025-11-25 17:04:47.879 254096 DEBUG nova.network.neutron [req-a427043a-51a2-49f5-bc22-f20b1a5160bc req-6efdad15-79c2-4f1d-ab1a-59d571c20202 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Refreshing network info cache for port b68643ca-2301-486a-984d-43fc41d1f773 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:04:48 compute-0 ceph-mon[74985]: pgmap v2437: 321 pgs: 321 active+clean; 134 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Nov 25 17:04:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2438: 321 pgs: 321 active+clean; 134 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 147 op/s
Nov 25 17:04:48 compute-0 nova_compute[254092]: 2025-11-25 17:04:48.477 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:04:49 compute-0 nova_compute[254092]: 2025-11-25 17:04:49.567 254096 DEBUG nova.network.neutron [req-a427043a-51a2-49f5-bc22-f20b1a5160bc req-6efdad15-79c2-4f1d-ab1a-59d571c20202 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Updated VIF entry in instance network info cache for port b68643ca-2301-486a-984d-43fc41d1f773. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:04:49 compute-0 nova_compute[254092]: 2025-11-25 17:04:49.569 254096 DEBUG nova.network.neutron [req-a427043a-51a2-49f5-bc22-f20b1a5160bc req-6efdad15-79c2-4f1d-ab1a-59d571c20202 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Updating instance_info_cache with network_info: [{"id": "b68643ca-2301-486a-984d-43fc41d1f773", "address": "fa:16:3e:dd:cf:7c", "network": {"id": "14274c68-4eff-4ea0-b9e4-434b52f5c24f", "bridge": "br-int", "label": "tempest-network-smoke--836711466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb68643ca-23", "ovs_interfaceid": "b68643ca-2301-486a-984d-43fc41d1f773", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:04:49 compute-0 nova_compute[254092]: 2025-11-25 17:04:49.587 254096 DEBUG oslo_concurrency.lockutils [req-a427043a-51a2-49f5-bc22-f20b1a5160bc req-6efdad15-79c2-4f1d-ab1a-59d571c20202 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-c5d6f631-cec2-431d-b476-feafa21e4f80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:04:50 compute-0 ceph-mon[74985]: pgmap v2438: 321 pgs: 321 active+clean; 134 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 147 op/s
Nov 25 17:04:50 compute-0 nova_compute[254092]: 2025-11-25 17:04:50.194 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2439: 321 pgs: 321 active+clean; 134 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 78 KiB/s wr, 148 op/s
Nov 25 17:04:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:04:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:04:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:04:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:04:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007068107174252784 of space, bias 1.0, pg target 0.21204321522758351 quantized to 32 (current 32)
Nov 25 17:04:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:04:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:04:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:04:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:04:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:04:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 17:04:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:04:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:04:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:04:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:04:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:04:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:04:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:04:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:04:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:04:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:04:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:04:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:04:51 compute-0 ovn_controller[153477]: 2025-11-25T17:04:51Z|00138|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7c:d6:7a 10.100.0.11
Nov 25 17:04:51 compute-0 ovn_controller[153477]: 2025-11-25T17:04:51Z|00139|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7c:d6:7a 10.100.0.11
Nov 25 17:04:52 compute-0 ceph-mon[74985]: pgmap v2439: 321 pgs: 321 active+clean; 134 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 78 KiB/s wr, 148 op/s
Nov 25 17:04:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2440: 321 pgs: 321 active+clean; 146 MiB data, 865 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.2 MiB/s wr, 172 op/s
Nov 25 17:04:53 compute-0 nova_compute[254092]: 2025-11-25 17:04:53.479 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:53 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:04:54 compute-0 ceph-mon[74985]: pgmap v2440: 321 pgs: 321 active+clean; 146 MiB data, 865 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.2 MiB/s wr, 172 op/s
Nov 25 17:04:54 compute-0 ovn_controller[153477]: 2025-11-25T17:04:54Z|00140|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:dd:cf:7c 10.100.0.6
Nov 25 17:04:54 compute-0 ovn_controller[153477]: 2025-11-25T17:04:54Z|00141|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:dd:cf:7c 10.100.0.6
Nov 25 17:04:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2441: 321 pgs: 321 active+clean; 146 MiB data, 865 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.2 MiB/s wr, 117 op/s
Nov 25 17:04:55 compute-0 nova_compute[254092]: 2025-11-25 17:04:55.197 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:56 compute-0 ceph-mon[74985]: pgmap v2441: 321 pgs: 321 active+clean; 146 MiB data, 865 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.2 MiB/s wr, 117 op/s
Nov 25 17:04:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2442: 321 pgs: 321 active+clean; 200 MiB data, 927 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.3 MiB/s wr, 204 op/s
Nov 25 17:04:57 compute-0 nova_compute[254092]: 2025-11-25 17:04:57.102 254096 INFO nova.compute.manager [None req-6eaa24f1-d76e-4be5-9453-22e72120ca67 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Get console output
Nov 25 17:04:57 compute-0 nova_compute[254092]: 2025-11-25 17:04:57.109 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 17:04:58 compute-0 ceph-mon[74985]: pgmap v2442: 321 pgs: 321 active+clean; 200 MiB data, 927 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.3 MiB/s wr, 204 op/s
Nov 25 17:04:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2443: 321 pgs: 321 active+clean; 200 MiB data, 927 MiB used, 59 GiB / 60 GiB avail; 668 KiB/s rd, 4.3 MiB/s wr, 127 op/s
Nov 25 17:04:58 compute-0 nova_compute[254092]: 2025-11-25 17:04:58.480 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:04:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:05:00 compute-0 ceph-mon[74985]: pgmap v2443: 321 pgs: 321 active+clean; 200 MiB data, 927 MiB used, 59 GiB / 60 GiB avail; 668 KiB/s rd, 4.3 MiB/s wr, 127 op/s
Nov 25 17:05:00 compute-0 nova_compute[254092]: 2025-11-25 17:05:00.210 254096 DEBUG oslo_concurrency.lockutils [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "interface-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:05:00 compute-0 nova_compute[254092]: 2025-11-25 17:05:00.210 254096 DEBUG oslo_concurrency.lockutils [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "interface-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:05:00 compute-0 nova_compute[254092]: 2025-11-25 17:05:00.211 254096 DEBUG nova.objects.instance [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'flavor' on Instance uuid fb2e8439-c6c2-4a88-9e88-faf385d9a4d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:05:00 compute-0 nova_compute[254092]: 2025-11-25 17:05:00.244 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2444: 321 pgs: 321 active+clean; 200 MiB data, 927 MiB used, 59 GiB / 60 GiB avail; 668 KiB/s rd, 4.3 MiB/s wr, 127 op/s
Nov 25 17:05:01 compute-0 nova_compute[254092]: 2025-11-25 17:05:01.519 254096 DEBUG nova.objects.instance [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'pci_requests' on Instance uuid fb2e8439-c6c2-4a88-9e88-faf385d9a4d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:05:01 compute-0 nova_compute[254092]: 2025-11-25 17:05:01.536 254096 DEBUG nova.network.neutron [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 17:05:02 compute-0 ceph-mon[74985]: pgmap v2444: 321 pgs: 321 active+clean; 200 MiB data, 927 MiB used, 59 GiB / 60 GiB avail; 668 KiB/s rd, 4.3 MiB/s wr, 127 op/s
Nov 25 17:05:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2445: 321 pgs: 321 active+clean; 200 MiB data, 927 MiB used, 59 GiB / 60 GiB avail; 666 KiB/s rd, 4.2 MiB/s wr, 127 op/s
Nov 25 17:05:02 compute-0 nova_compute[254092]: 2025-11-25 17:05:02.510 254096 DEBUG nova.policy [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e13382ac65e94c7b8a89e67de0e754df', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 17:05:03 compute-0 nova_compute[254092]: 2025-11-25 17:05:03.120 254096 DEBUG nova.network.neutron [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Successfully created port: d5b14553-424b-4985-9ed6-2f4afac92c00 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 17:05:03 compute-0 nova_compute[254092]: 2025-11-25 17:05:03.482 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:05:04 compute-0 ceph-mon[74985]: pgmap v2445: 321 pgs: 321 active+clean; 200 MiB data, 927 MiB used, 59 GiB / 60 GiB avail; 666 KiB/s rd, 4.2 MiB/s wr, 127 op/s
Nov 25 17:05:04 compute-0 nova_compute[254092]: 2025-11-25 17:05:04.287 254096 DEBUG nova.network.neutron [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Successfully updated port: d5b14553-424b-4985-9ed6-2f4afac92c00 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:05:04 compute-0 nova_compute[254092]: 2025-11-25 17:05:04.311 254096 DEBUG oslo_concurrency.lockutils [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:05:04 compute-0 nova_compute[254092]: 2025-11-25 17:05:04.311 254096 DEBUG oslo_concurrency.lockutils [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquired lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:05:04 compute-0 nova_compute[254092]: 2025-11-25 17:05:04.311 254096 DEBUG nova.network.neutron [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 17:05:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2446: 321 pgs: 321 active+clean; 200 MiB data, 927 MiB used, 59 GiB / 60 GiB avail; 426 KiB/s rd, 3.1 MiB/s wr, 87 op/s
Nov 25 17:05:04 compute-0 nova_compute[254092]: 2025-11-25 17:05:04.433 254096 DEBUG nova.compute.manager [req-e94c127e-447a-4b7b-a048-21be369b18fd req-d6f472ab-e3cc-4eae-b6f4-3174e97a56d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Received event network-changed-d5b14553-424b-4985-9ed6-2f4afac92c00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:05:04 compute-0 nova_compute[254092]: 2025-11-25 17:05:04.434 254096 DEBUG nova.compute.manager [req-e94c127e-447a-4b7b-a048-21be369b18fd req-d6f472ab-e3cc-4eae-b6f4-3174e97a56d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Refreshing instance network info cache due to event network-changed-d5b14553-424b-4985-9ed6-2f4afac92c00. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:05:04 compute-0 nova_compute[254092]: 2025-11-25 17:05:04.435 254096 DEBUG oslo_concurrency.lockutils [req-e94c127e-447a-4b7b-a048-21be369b18fd req-d6f472ab-e3cc-4eae-b6f4-3174e97a56d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:05:05 compute-0 nova_compute[254092]: 2025-11-25 17:05:05.247 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:06 compute-0 ceph-mon[74985]: pgmap v2446: 321 pgs: 321 active+clean; 200 MiB data, 927 MiB used, 59 GiB / 60 GiB avail; 426 KiB/s rd, 3.1 MiB/s wr, 87 op/s
Nov 25 17:05:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2447: 321 pgs: 321 active+clean; 200 MiB data, 927 MiB used, 59 GiB / 60 GiB avail; 427 KiB/s rd, 3.1 MiB/s wr, 88 op/s
Nov 25 17:05:06 compute-0 nova_compute[254092]: 2025-11-25 17:05:06.699 254096 DEBUG oslo_concurrency.lockutils [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "c5d6f631-cec2-431d-b476-feafa21e4f80" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:05:06 compute-0 nova_compute[254092]: 2025-11-25 17:05:06.700 254096 DEBUG oslo_concurrency.lockutils [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "c5d6f631-cec2-431d-b476-feafa21e4f80" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:05:06 compute-0 nova_compute[254092]: 2025-11-25 17:05:06.700 254096 DEBUG oslo_concurrency.lockutils [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "c5d6f631-cec2-431d-b476-feafa21e4f80-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:05:06 compute-0 nova_compute[254092]: 2025-11-25 17:05:06.700 254096 DEBUG oslo_concurrency.lockutils [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "c5d6f631-cec2-431d-b476-feafa21e4f80-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:05:06 compute-0 nova_compute[254092]: 2025-11-25 17:05:06.701 254096 DEBUG oslo_concurrency.lockutils [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "c5d6f631-cec2-431d-b476-feafa21e4f80-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:05:06 compute-0 nova_compute[254092]: 2025-11-25 17:05:06.702 254096 INFO nova.compute.manager [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Terminating instance
Nov 25 17:05:06 compute-0 nova_compute[254092]: 2025-11-25 17:05:06.703 254096 DEBUG nova.compute.manager [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 17:05:06 compute-0 kernel: tapb68643ca-23 (unregistering): left promiscuous mode
Nov 25 17:05:06 compute-0 NetworkManager[48891]: <info>  [1764090306.7710] device (tapb68643ca-23): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:05:06 compute-0 nova_compute[254092]: 2025-11-25 17:05:06.770 254096 DEBUG nova.compute.manager [req-8f47ef03-07f7-42db-bf8d-6c4702dcba9b req-cdb4e4af-1a1c-4e4d-9c02-bf5698a2506b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Received event network-changed-b68643ca-2301-486a-984d-43fc41d1f773 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:05:06 compute-0 nova_compute[254092]: 2025-11-25 17:05:06.770 254096 DEBUG nova.compute.manager [req-8f47ef03-07f7-42db-bf8d-6c4702dcba9b req-cdb4e4af-1a1c-4e4d-9c02-bf5698a2506b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Refreshing instance network info cache due to event network-changed-b68643ca-2301-486a-984d-43fc41d1f773. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:05:06 compute-0 nova_compute[254092]: 2025-11-25 17:05:06.770 254096 DEBUG oslo_concurrency.lockutils [req-8f47ef03-07f7-42db-bf8d-6c4702dcba9b req-cdb4e4af-1a1c-4e4d-9c02-bf5698a2506b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-c5d6f631-cec2-431d-b476-feafa21e4f80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:05:06 compute-0 nova_compute[254092]: 2025-11-25 17:05:06.770 254096 DEBUG oslo_concurrency.lockutils [req-8f47ef03-07f7-42db-bf8d-6c4702dcba9b req-cdb4e4af-1a1c-4e4d-9c02-bf5698a2506b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-c5d6f631-cec2-431d-b476-feafa21e4f80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:05:06 compute-0 nova_compute[254092]: 2025-11-25 17:05:06.771 254096 DEBUG nova.network.neutron [req-8f47ef03-07f7-42db-bf8d-6c4702dcba9b req-cdb4e4af-1a1c-4e4d-9c02-bf5698a2506b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Refreshing network info cache for port b68643ca-2301-486a-984d-43fc41d1f773 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:05:06 compute-0 ovn_controller[153477]: 2025-11-25T17:05:06Z|01251|binding|INFO|Releasing lport b68643ca-2301-486a-984d-43fc41d1f773 from this chassis (sb_readonly=0)
Nov 25 17:05:06 compute-0 nova_compute[254092]: 2025-11-25 17:05:06.778 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:06 compute-0 ovn_controller[153477]: 2025-11-25T17:05:06Z|01252|binding|INFO|Setting lport b68643ca-2301-486a-984d-43fc41d1f773 down in Southbound
Nov 25 17:05:06 compute-0 ovn_controller[153477]: 2025-11-25T17:05:06Z|01253|binding|INFO|Removing iface tapb68643ca-23 ovn-installed in OVS
Nov 25 17:05:06 compute-0 nova_compute[254092]: 2025-11-25 17:05:06.780 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:06.787 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:cf:7c 10.100.0.6'], port_security=['fa:16:3e:dd:cf:7c 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c5d6f631-cec2-431d-b476-feafa21e4f80', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14274c68-4eff-4ea0-b9e4-434b52f5c24f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f19d2d0776e84c3c99c640c0b7ed3743', 'neutron:revision_number': '4', 'neutron:security_group_ids': '141c3ba7-cca0-4ab6-b807-4cd786e57588 76216c71-c251-4ffd-abb8-124655d34fd9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6fde4dc0-c42b-4191-847a-6d09a5cf68e7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=b68643ca-2301-486a-984d-43fc41d1f773) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:05:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:06.788 163338 INFO neutron.agent.ovn.metadata.agent [-] Port b68643ca-2301-486a-984d-43fc41d1f773 in datapath 14274c68-4eff-4ea0-b9e4-434b52f5c24f unbound from our chassis
Nov 25 17:05:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:06.789 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 14274c68-4eff-4ea0-b9e4-434b52f5c24f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 17:05:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:06.790 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[edf06ffc-cb01-44c4-903d-62e3a72a081f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:05:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:06.790 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-14274c68-4eff-4ea0-b9e4-434b52f5c24f namespace which is not needed anymore
Nov 25 17:05:06 compute-0 nova_compute[254092]: 2025-11-25 17:05:06.793 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:06 compute-0 systemd[1]: machine-qemu\x2d153\x2dinstance\x2d00000079.scope: Deactivated successfully.
Nov 25 17:05:06 compute-0 systemd[1]: machine-qemu\x2d153\x2dinstance\x2d00000079.scope: Consumed 14.214s CPU time.
Nov 25 17:05:06 compute-0 systemd-machined[216343]: Machine qemu-153-instance-00000079 terminated.
Nov 25 17:05:06 compute-0 neutron-haproxy-ovnmeta-14274c68-4eff-4ea0-b9e4-434b52f5c24f[387539]: [NOTICE]   (387543) : haproxy version is 2.8.14-c23fe91
Nov 25 17:05:06 compute-0 neutron-haproxy-ovnmeta-14274c68-4eff-4ea0-b9e4-434b52f5c24f[387539]: [NOTICE]   (387543) : path to executable is /usr/sbin/haproxy
Nov 25 17:05:06 compute-0 neutron-haproxy-ovnmeta-14274c68-4eff-4ea0-b9e4-434b52f5c24f[387539]: [WARNING]  (387543) : Exiting Master process...
Nov 25 17:05:06 compute-0 neutron-haproxy-ovnmeta-14274c68-4eff-4ea0-b9e4-434b52f5c24f[387539]: [ALERT]    (387543) : Current worker (387545) exited with code 143 (Terminated)
Nov 25 17:05:06 compute-0 neutron-haproxy-ovnmeta-14274c68-4eff-4ea0-b9e4-434b52f5c24f[387539]: [WARNING]  (387543) : All workers exited. Exiting... (0)
Nov 25 17:05:06 compute-0 systemd[1]: libpod-69b068750b4a7797212dbb8a32206f2f3076426921bb309f2b4e7ad633bde27c.scope: Deactivated successfully.
Nov 25 17:05:06 compute-0 podman[387685]: 2025-11-25 17:05:06.941392189 +0000 UTC m=+0.058544727 container died 69b068750b4a7797212dbb8a32206f2f3076426921bb309f2b4e7ad633bde27c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-14274c68-4eff-4ea0-b9e4-434b52f5c24f, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 25 17:05:06 compute-0 nova_compute[254092]: 2025-11-25 17:05:06.949 254096 INFO nova.virt.libvirt.driver [-] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Instance destroyed successfully.
Nov 25 17:05:06 compute-0 nova_compute[254092]: 2025-11-25 17:05:06.950 254096 DEBUG nova.objects.instance [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lazy-loading 'resources' on Instance uuid c5d6f631-cec2-431d-b476-feafa21e4f80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:05:06 compute-0 nova_compute[254092]: 2025-11-25 17:05:06.965 254096 DEBUG nova.virt.libvirt.vif [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:04:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1468495184',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1468495184',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-804365052-acc',id=121,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBItBRZm2yDqYd+wVxwOqSlZSDzlMlxevUAlkeawR8Ad9B/U2D7ctHPpd22ukw4f9eeesjVGXob1S5He80yqU0J/RQxjf579CY5kWM2qsSzAfmn3IDw4uBaYcepmxt/Q0ZA==',key_name='tempest-TestSecurityGroupsBasicOps-1015716509',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:04:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f19d2d0776e84c3c99c640c0b7ed3743',ramdisk_id='',reservation_id='r-h2rb4r91',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-804365052',owner_user_name='tempest-TestSecurityGroupsBasicOps-804365052-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:04:40Z,user_data=None,user_id='72f29c5d97464520bee19cb128258fd5',uuid=c5d6f631-cec2-431d-b476-feafa21e4f80,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b68643ca-2301-486a-984d-43fc41d1f773", "address": "fa:16:3e:dd:cf:7c", "network": {"id": "14274c68-4eff-4ea0-b9e4-434b52f5c24f", "bridge": "br-int", "label": "tempest-network-smoke--836711466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb68643ca-23", "ovs_interfaceid": "b68643ca-2301-486a-984d-43fc41d1f773", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:05:06 compute-0 nova_compute[254092]: 2025-11-25 17:05:06.968 254096 DEBUG nova.network.os_vif_util [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converting VIF {"id": "b68643ca-2301-486a-984d-43fc41d1f773", "address": "fa:16:3e:dd:cf:7c", "network": {"id": "14274c68-4eff-4ea0-b9e4-434b52f5c24f", "bridge": "br-int", "label": "tempest-network-smoke--836711466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb68643ca-23", "ovs_interfaceid": "b68643ca-2301-486a-984d-43fc41d1f773", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:05:06 compute-0 nova_compute[254092]: 2025-11-25 17:05:06.968 254096 DEBUG nova.network.os_vif_util [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:dd:cf:7c,bridge_name='br-int',has_traffic_filtering=True,id=b68643ca-2301-486a-984d-43fc41d1f773,network=Network(14274c68-4eff-4ea0-b9e4-434b52f5c24f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb68643ca-23') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:05:06 compute-0 nova_compute[254092]: 2025-11-25 17:05:06.969 254096 DEBUG os_vif [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:dd:cf:7c,bridge_name='br-int',has_traffic_filtering=True,id=b68643ca-2301-486a-984d-43fc41d1f773,network=Network(14274c68-4eff-4ea0-b9e4-434b52f5c24f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb68643ca-23') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:05:06 compute-0 nova_compute[254092]: 2025-11-25 17:05:06.971 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:06 compute-0 nova_compute[254092]: 2025-11-25 17:05:06.972 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb68643ca-23, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:05:06 compute-0 nova_compute[254092]: 2025-11-25 17:05:06.975 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:06 compute-0 nova_compute[254092]: 2025-11-25 17:05:06.977 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:05:06 compute-0 nova_compute[254092]: 2025-11-25 17:05:06.979 254096 INFO os_vif [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:dd:cf:7c,bridge_name='br-int',has_traffic_filtering=True,id=b68643ca-2301-486a-984d-43fc41d1f773,network=Network(14274c68-4eff-4ea0-b9e4-434b52f5c24f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb68643ca-23')
Nov 25 17:05:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bc152218d65abff31069b204813b7e011958f1c03540ebac42a934b9611177c-merged.mount: Deactivated successfully.
Nov 25 17:05:06 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-69b068750b4a7797212dbb8a32206f2f3076426921bb309f2b4e7ad633bde27c-userdata-shm.mount: Deactivated successfully.
Nov 25 17:05:07 compute-0 podman[387685]: 2025-11-25 17:05:07.004065908 +0000 UTC m=+0.121218456 container cleanup 69b068750b4a7797212dbb8a32206f2f3076426921bb309f2b4e7ad633bde27c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-14274c68-4eff-4ea0-b9e4-434b52f5c24f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 17:05:07 compute-0 systemd[1]: libpod-conmon-69b068750b4a7797212dbb8a32206f2f3076426921bb309f2b4e7ad633bde27c.scope: Deactivated successfully.
Nov 25 17:05:07 compute-0 nova_compute[254092]: 2025-11-25 17:05:07.134 254096 DEBUG nova.network.neutron [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Updating instance_info_cache with network_info: [{"id": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "address": "fa:16:3e:7c:d6:7a", "network": {"id": "136c69a7-c4f8-40a2-be13-7ef82b7b3709", "bridge": "br-int", "label": "tempest-network-smoke--1102379118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13b6cf4-60", "ovs_interfaceid": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d5b14553-424b-4985-9ed6-2f4afac92c00", "address": "fa:16:3e:8b:9c:66", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b14553-42", "ovs_interfaceid": "d5b14553-424b-4985-9ed6-2f4afac92c00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:05:07 compute-0 nova_compute[254092]: 2025-11-25 17:05:07.206 254096 DEBUG oslo_concurrency.lockutils [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Releasing lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:05:07 compute-0 nova_compute[254092]: 2025-11-25 17:05:07.207 254096 DEBUG oslo_concurrency.lockutils [req-e94c127e-447a-4b7b-a048-21be369b18fd req-d6f472ab-e3cc-4eae-b6f4-3174e97a56d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:05:07 compute-0 nova_compute[254092]: 2025-11-25 17:05:07.208 254096 DEBUG nova.network.neutron [req-e94c127e-447a-4b7b-a048-21be369b18fd req-d6f472ab-e3cc-4eae-b6f4-3174e97a56d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Refreshing network info cache for port d5b14553-424b-4985-9ed6-2f4afac92c00 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:05:07 compute-0 nova_compute[254092]: 2025-11-25 17:05:07.211 254096 DEBUG nova.virt.libvirt.vif [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:04:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1064240069',display_name='tempest-TestNetworkBasicOps-server-1064240069',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1064240069',id=120,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJxRJG0xj4lNhOZp9ZWVpUqIVMXoOTqx2HT5kEbWz5Kzrp9lUM1esv4/Y9y+6X72HfZQHeQJVGMfF2AfhTMmKKwlb9fZPc7AjVKpzxCULHdu4UtL6CrGGLwggOltMQGwgQ==',key_name='tempest-TestNetworkBasicOps-808036867',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:04:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-ziqlckum',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:04:38Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=fb2e8439-c6c2-4a88-9e88-faf385d9a4d9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d5b14553-424b-4985-9ed6-2f4afac92c00", "address": "fa:16:3e:8b:9c:66", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b14553-42", "ovs_interfaceid": "d5b14553-424b-4985-9ed6-2f4afac92c00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:05:07 compute-0 nova_compute[254092]: 2025-11-25 17:05:07.212 254096 DEBUG nova.network.os_vif_util [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "d5b14553-424b-4985-9ed6-2f4afac92c00", "address": "fa:16:3e:8b:9c:66", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b14553-42", "ovs_interfaceid": "d5b14553-424b-4985-9ed6-2f4afac92c00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:05:07 compute-0 nova_compute[254092]: 2025-11-25 17:05:07.213 254096 DEBUG nova.network.os_vif_util [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:9c:66,bridge_name='br-int',has_traffic_filtering=True,id=d5b14553-424b-4985-9ed6-2f4afac92c00,network=Network(4a48b006-a4d1-4fa5-88f1-79386ed2958f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5b14553-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:05:07 compute-0 nova_compute[254092]: 2025-11-25 17:05:07.213 254096 DEBUG os_vif [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:9c:66,bridge_name='br-int',has_traffic_filtering=True,id=d5b14553-424b-4985-9ed6-2f4afac92c00,network=Network(4a48b006-a4d1-4fa5-88f1-79386ed2958f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5b14553-42') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:05:07 compute-0 nova_compute[254092]: 2025-11-25 17:05:07.214 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:07 compute-0 nova_compute[254092]: 2025-11-25 17:05:07.214 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:05:07 compute-0 nova_compute[254092]: 2025-11-25 17:05:07.214 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:05:07 compute-0 nova_compute[254092]: 2025-11-25 17:05:07.217 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:07 compute-0 nova_compute[254092]: 2025-11-25 17:05:07.218 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd5b14553-42, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:05:07 compute-0 nova_compute[254092]: 2025-11-25 17:05:07.218 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd5b14553-42, col_values=(('external_ids', {'iface-id': 'd5b14553-424b-4985-9ed6-2f4afac92c00', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8b:9c:66', 'vm-uuid': 'fb2e8439-c6c2-4a88-9e88-faf385d9a4d9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:05:07 compute-0 nova_compute[254092]: 2025-11-25 17:05:07.220 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:07 compute-0 NetworkManager[48891]: <info>  [1764090307.2217] manager: (tapd5b14553-42): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/518)
Nov 25 17:05:07 compute-0 nova_compute[254092]: 2025-11-25 17:05:07.230 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:05:07 compute-0 nova_compute[254092]: 2025-11-25 17:05:07.234 254096 INFO os_vif [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:9c:66,bridge_name='br-int',has_traffic_filtering=True,id=d5b14553-424b-4985-9ed6-2f4afac92c00,network=Network(4a48b006-a4d1-4fa5-88f1-79386ed2958f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5b14553-42')
Nov 25 17:05:07 compute-0 nova_compute[254092]: 2025-11-25 17:05:07.235 254096 DEBUG nova.virt.libvirt.vif [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:04:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1064240069',display_name='tempest-TestNetworkBasicOps-server-1064240069',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1064240069',id=120,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJxRJG0xj4lNhOZp9ZWVpUqIVMXoOTqx2HT5kEbWz5Kzrp9lUM1esv4/Y9y+6X72HfZQHeQJVGMfF2AfhTMmKKwlb9fZPc7AjVKpzxCULHdu4UtL6CrGGLwggOltMQGwgQ==',key_name='tempest-TestNetworkBasicOps-808036867',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:04:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-ziqlckum',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:04:38Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=fb2e8439-c6c2-4a88-9e88-faf385d9a4d9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d5b14553-424b-4985-9ed6-2f4afac92c00", "address": "fa:16:3e:8b:9c:66", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b14553-42", "ovs_interfaceid": "d5b14553-424b-4985-9ed6-2f4afac92c00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:05:07 compute-0 nova_compute[254092]: 2025-11-25 17:05:07.236 254096 DEBUG nova.network.os_vif_util [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "d5b14553-424b-4985-9ed6-2f4afac92c00", "address": "fa:16:3e:8b:9c:66", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b14553-42", "ovs_interfaceid": "d5b14553-424b-4985-9ed6-2f4afac92c00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:05:07 compute-0 nova_compute[254092]: 2025-11-25 17:05:07.236 254096 DEBUG nova.network.os_vif_util [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:9c:66,bridge_name='br-int',has_traffic_filtering=True,id=d5b14553-424b-4985-9ed6-2f4afac92c00,network=Network(4a48b006-a4d1-4fa5-88f1-79386ed2958f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5b14553-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:05:07 compute-0 nova_compute[254092]: 2025-11-25 17:05:07.241 254096 DEBUG nova.virt.libvirt.guest [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] attach device xml: <interface type="ethernet">
Nov 25 17:05:07 compute-0 nova_compute[254092]:   <mac address="fa:16:3e:8b:9c:66"/>
Nov 25 17:05:07 compute-0 nova_compute[254092]:   <model type="virtio"/>
Nov 25 17:05:07 compute-0 nova_compute[254092]:   <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:05:07 compute-0 nova_compute[254092]:   <mtu size="1442"/>
Nov 25 17:05:07 compute-0 nova_compute[254092]:   <target dev="tapd5b14553-42"/>
Nov 25 17:05:07 compute-0 nova_compute[254092]: </interface>
Nov 25 17:05:07 compute-0 nova_compute[254092]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 25 17:05:07 compute-0 kernel: tapd5b14553-42: entered promiscuous mode
Nov 25 17:05:07 compute-0 NetworkManager[48891]: <info>  [1764090307.2568] manager: (tapd5b14553-42): new Tun device (/org/freedesktop/NetworkManager/Devices/519)
Nov 25 17:05:07 compute-0 systemd-udevd[387665]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:05:07 compute-0 ovn_controller[153477]: 2025-11-25T17:05:07Z|01254|binding|INFO|Claiming lport d5b14553-424b-4985-9ed6-2f4afac92c00 for this chassis.
Nov 25 17:05:07 compute-0 ovn_controller[153477]: 2025-11-25T17:05:07Z|01255|binding|INFO|d5b14553-424b-4985-9ed6-2f4afac92c00: Claiming fa:16:3e:8b:9c:66 10.100.0.28
Nov 25 17:05:07 compute-0 nova_compute[254092]: 2025-11-25 17:05:07.260 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:07 compute-0 NetworkManager[48891]: <info>  [1764090307.2793] device (tapd5b14553-42): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:05:07 compute-0 NetworkManager[48891]: <info>  [1764090307.2809] device (tapd5b14553-42): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:05:07 compute-0 nova_compute[254092]: 2025-11-25 17:05:07.321 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:07 compute-0 ovn_controller[153477]: 2025-11-25T17:05:07Z|01256|binding|INFO|Setting lport d5b14553-424b-4985-9ed6-2f4afac92c00 ovn-installed in OVS
Nov 25 17:05:07 compute-0 nova_compute[254092]: 2025-11-25 17:05:07.324 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:07 compute-0 ovn_controller[153477]: 2025-11-25T17:05:07Z|01257|binding|INFO|Setting lport d5b14553-424b-4985-9ed6-2f4afac92c00 up in Southbound
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.413 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:9c:66 10.100.0.28'], port_security=['fa:16:3e:8b:9c:66 10.100.0.28'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.28/28', 'neutron:device_id': 'fb2e8439-c6c2-4a88-9e88-faf385d9a4d9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a48b006-a4d1-4fa5-88f1-79386ed2958f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '2', 'neutron:security_group_ids': '63a1678d-cfcd-4cc9-9221-d7d228d35cc1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3bba81af-c0c6-44bd-ba73-08033ea9224a, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=d5b14553-424b-4985-9ed6-2f4afac92c00) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:05:07 compute-0 ceph-mon[74985]: pgmap v2447: 321 pgs: 321 active+clean; 200 MiB data, 927 MiB used, 59 GiB / 60 GiB avail; 427 KiB/s rd, 3.1 MiB/s wr, 88 op/s
Nov 25 17:05:07 compute-0 nova_compute[254092]: 2025-11-25 17:05:07.445 254096 DEBUG nova.virt.libvirt.driver [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:05:07 compute-0 nova_compute[254092]: 2025-11-25 17:05:07.445 254096 DEBUG nova.virt.libvirt.driver [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:05:07 compute-0 nova_compute[254092]: 2025-11-25 17:05:07.445 254096 DEBUG nova.virt.libvirt.driver [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No VIF found with MAC fa:16:3e:7c:d6:7a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:05:07 compute-0 nova_compute[254092]: 2025-11-25 17:05:07.445 254096 DEBUG nova.virt.libvirt.driver [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No VIF found with MAC fa:16:3e:8b:9c:66, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:05:07 compute-0 podman[387740]: 2025-11-25 17:05:07.450882244 +0000 UTC m=+0.425075771 container remove 69b068750b4a7797212dbb8a32206f2f3076426921bb309f2b4e7ad633bde27c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-14274c68-4eff-4ea0-b9e4-434b52f5c24f, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.458 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9b6f95fa-bf52-4df3-a5cc-f94e0c069c44]: (4, ('Tue Nov 25 05:05:06 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-14274c68-4eff-4ea0-b9e4-434b52f5c24f (69b068750b4a7797212dbb8a32206f2f3076426921bb309f2b4e7ad633bde27c)\n69b068750b4a7797212dbb8a32206f2f3076426921bb309f2b4e7ad633bde27c\nTue Nov 25 05:05:07 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-14274c68-4eff-4ea0-b9e4-434b52f5c24f (69b068750b4a7797212dbb8a32206f2f3076426921bb309f2b4e7ad633bde27c)\n69b068750b4a7797212dbb8a32206f2f3076426921bb309f2b4e7ad633bde27c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.460 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[213943e1-91cb-4d35-b72c-54e6139cc295]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.462 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14274c68-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:05:07 compute-0 nova_compute[254092]: 2025-11-25 17:05:07.553 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:07 compute-0 kernel: tap14274c68-40: left promiscuous mode
Nov 25 17:05:07 compute-0 nova_compute[254092]: 2025-11-25 17:05:07.561 254096 DEBUG nova.virt.libvirt.guest [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:05:07 compute-0 nova_compute[254092]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:05:07 compute-0 nova_compute[254092]:   <nova:name>tempest-TestNetworkBasicOps-server-1064240069</nova:name>
Nov 25 17:05:07 compute-0 nova_compute[254092]:   <nova:creationTime>2025-11-25 17:05:07</nova:creationTime>
Nov 25 17:05:07 compute-0 nova_compute[254092]:   <nova:flavor name="m1.nano">
Nov 25 17:05:07 compute-0 nova_compute[254092]:     <nova:memory>128</nova:memory>
Nov 25 17:05:07 compute-0 nova_compute[254092]:     <nova:disk>1</nova:disk>
Nov 25 17:05:07 compute-0 nova_compute[254092]:     <nova:swap>0</nova:swap>
Nov 25 17:05:07 compute-0 nova_compute[254092]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:05:07 compute-0 nova_compute[254092]:     <nova:vcpus>1</nova:vcpus>
Nov 25 17:05:07 compute-0 nova_compute[254092]:   </nova:flavor>
Nov 25 17:05:07 compute-0 nova_compute[254092]:   <nova:owner>
Nov 25 17:05:07 compute-0 nova_compute[254092]:     <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 17:05:07 compute-0 nova_compute[254092]:     <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 17:05:07 compute-0 nova_compute[254092]:   </nova:owner>
Nov 25 17:05:07 compute-0 nova_compute[254092]:   <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:05:07 compute-0 nova_compute[254092]:   <nova:ports>
Nov 25 17:05:07 compute-0 nova_compute[254092]:     <nova:port uuid="a13b6cf4-602d-4af3-b369-9dfa273e1514">
Nov 25 17:05:07 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 17:05:07 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 17:05:07 compute-0 nova_compute[254092]:     <nova:port uuid="d5b14553-424b-4985-9ed6-2f4afac92c00">
Nov 25 17:05:07 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.28" ipVersion="4"/>
Nov 25 17:05:07 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 17:05:07 compute-0 nova_compute[254092]:   </nova:ports>
Nov 25 17:05:07 compute-0 nova_compute[254092]: </nova:instance>
Nov 25 17:05:07 compute-0 nova_compute[254092]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 25 17:05:07 compute-0 nova_compute[254092]: 2025-11-25 17:05:07.568 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.570 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0343eee6-43ef-475a-9858-83165949b92a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.581 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b7313a9a-e38e-4069-88ed-2ab35c439d0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.583 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4ac644b8-15d9-4f42-9763-8d8c772a3b55]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.597 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3a78f81c-07c4-4513-8b01-b5205edb4f77]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 673707, 'reachable_time': 28531, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 387765, 'error': None, 'target': 'ovnmeta-14274c68-4eff-4ea0-b9e4-434b52f5c24f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:05:07 compute-0 systemd[1]: run-netns-ovnmeta\x2d14274c68\x2d4eff\x2d4ea0\x2db9e4\x2d434b52f5c24f.mount: Deactivated successfully.
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.601 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-14274c68-4eff-4ea0-b9e4-434b52f5c24f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.601 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[d4baad06-0ecc-474a-ba98-fab105f75a92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.602 163338 INFO neutron.agent.ovn.metadata.agent [-] Port d5b14553-424b-4985-9ed6-2f4afac92c00 in datapath 4a48b006-a4d1-4fa5-88f1-79386ed2958f unbound from our chassis
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.603 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4a48b006-a4d1-4fa5-88f1-79386ed2958f
Nov 25 17:05:07 compute-0 nova_compute[254092]: 2025-11-25 17:05:07.614 254096 DEBUG oslo_concurrency.lockutils [None req-66b26b57-8a30-4a63-aa4f-df72dbdbe783 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "interface-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 7.403s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.617 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e0fa7c7b-ad81-4a44-9a2a-601a0a13c64e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.618 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4a48b006-a1 in ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.621 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4a48b006-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.621 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[143524c2-908c-4524-8f16-e8e0d2495be1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.621 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e93db2fe-0c11-4a56-9e9c-8e9149ece79e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.633 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[83e158a4-b01f-451c-b463-a632e16a4ad4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.656 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4b119000-303a-4110-ab8c-e69d1b0d357a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.693 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[86b53e8c-10a5-43bf-8b14-0b15d10dda87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.699 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[dae71665-97b3-43bf-9e62-10c138079a29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:05:07 compute-0 NetworkManager[48891]: <info>  [1764090307.7012] manager: (tap4a48b006-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/520)
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.744 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c17975e6-29a5-44ff-a5a5-b9dd7a0eb014]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.749 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[4f7084f8-c80f-41e7-aeb4-89827f0c3762]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:05:07 compute-0 NetworkManager[48891]: <info>  [1764090307.7829] device (tap4a48b006-a0): carrier: link connected
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.790 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[87692b4a-7c18-4270-b24b-5e2ea0090cbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.814 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[455e2716-e0d9-4bcf-8b1d-e525b8bd37b7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4a48b006-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:66:02:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 370], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676537, 'reachable_time': 38501, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 387789, 'error': None, 'target': 'ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.839 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3caed1ee-4581-49f1-a945-78744d2b2ff1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe66:295'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 676537, 'tstamp': 676537}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 387790, 'error': None, 'target': 'ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.861 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ad0cf4e2-77d5-484f-b19d-a3a215642e35]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4a48b006-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:66:02:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 370], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676537, 'reachable_time': 38501, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 387791, 'error': None, 'target': 'ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.898 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f64ebeb8-dbc7-465e-a3f7-3f6fe632913b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.957 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[39e88408-72be-4ec1-9012-1f1993ad1064]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.958 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a48b006-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.959 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.959 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a48b006-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:05:07 compute-0 nova_compute[254092]: 2025-11-25 17:05:07.961 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:07 compute-0 NetworkManager[48891]: <info>  [1764090307.9616] manager: (tap4a48b006-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/521)
Nov 25 17:05:07 compute-0 kernel: tap4a48b006-a0: entered promiscuous mode
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.964 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4a48b006-a0, col_values=(('external_ids', {'iface-id': '85d5b09a-dc15-4154-acec-abe7a2e5fc19'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:05:07 compute-0 ovn_controller[153477]: 2025-11-25T17:05:07Z|01258|binding|INFO|Releasing lport 85d5b09a-dc15-4154-acec-abe7a2e5fc19 from this chassis (sb_readonly=0)
Nov 25 17:05:07 compute-0 nova_compute[254092]: 2025-11-25 17:05:07.983 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.984 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4a48b006-a4d1-4fa5-88f1-79386ed2958f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4a48b006-a4d1-4fa5-88f1-79386ed2958f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.985 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fd42f658-8acf-407c-9332-974b9936fd24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.986 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]: global
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-4a48b006-a4d1-4fa5-88f1-79386ed2958f
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/4a48b006-a4d1-4fa5-88f1-79386ed2958f.pid.haproxy
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 4a48b006-a4d1-4fa5-88f1-79386ed2958f
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 17:05:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:07.987 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f', 'env', 'PROCESS_TAG=haproxy-4a48b006-a4d1-4fa5-88f1-79386ed2958f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4a48b006-a4d1-4fa5-88f1-79386ed2958f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 17:05:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2448: 321 pgs: 321 active+clean; 200 MiB data, 927 MiB used, 59 GiB / 60 GiB avail; 1.3 KiB/s rd, 31 KiB/s wr, 1 op/s
Nov 25 17:05:08 compute-0 podman[387823]: 2025-11-25 17:05:08.471722083 +0000 UTC m=+0.122248354 container create f602f147986b5f054e0df3fda565934f2dd505c3a439662c13696fbbd0a4b465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 17:05:08 compute-0 podman[387823]: 2025-11-25 17:05:08.383681249 +0000 UTC m=+0.034207590 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 17:05:08 compute-0 systemd[1]: Started libpod-conmon-f602f147986b5f054e0df3fda565934f2dd505c3a439662c13696fbbd0a4b465.scope.
Nov 25 17:05:08 compute-0 nova_compute[254092]: 2025-11-25 17:05:08.548 254096 DEBUG nova.network.neutron [req-8f47ef03-07f7-42db-bf8d-6c4702dcba9b req-cdb4e4af-1a1c-4e4d-9c02-bf5698a2506b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Updated VIF entry in instance network info cache for port b68643ca-2301-486a-984d-43fc41d1f773. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:05:08 compute-0 nova_compute[254092]: 2025-11-25 17:05:08.549 254096 DEBUG nova.network.neutron [req-8f47ef03-07f7-42db-bf8d-6c4702dcba9b req-cdb4e4af-1a1c-4e4d-9c02-bf5698a2506b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Updating instance_info_cache with network_info: [{"id": "b68643ca-2301-486a-984d-43fc41d1f773", "address": "fa:16:3e:dd:cf:7c", "network": {"id": "14274c68-4eff-4ea0-b9e4-434b52f5c24f", "bridge": "br-int", "label": "tempest-network-smoke--836711466", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb68643ca-23", "ovs_interfaceid": "b68643ca-2301-486a-984d-43fc41d1f773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:05:08 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:05:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b5bfab8528d115e346b339b56d12524bd34cf4549cb171cd97809410d53295b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 17:05:08 compute-0 nova_compute[254092]: 2025-11-25 17:05:08.611 254096 DEBUG oslo_concurrency.lockutils [req-8f47ef03-07f7-42db-bf8d-6c4702dcba9b req-cdb4e4af-1a1c-4e4d-9c02-bf5698a2506b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-c5d6f631-cec2-431d-b476-feafa21e4f80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:05:08 compute-0 podman[387823]: 2025-11-25 17:05:08.615421884 +0000 UTC m=+0.265948155 container init f602f147986b5f054e0df3fda565934f2dd505c3a439662c13696fbbd0a4b465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 17:05:08 compute-0 podman[387823]: 2025-11-25 17:05:08.621054509 +0000 UTC m=+0.271580760 container start f602f147986b5f054e0df3fda565934f2dd505c3a439662c13696fbbd0a4b465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3)
Nov 25 17:05:08 compute-0 neutron-haproxy-ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f[387838]: [NOTICE]   (387842) : New worker (387844) forked
Nov 25 17:05:08 compute-0 neutron-haproxy-ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f[387838]: [NOTICE]   (387842) : Loading success.
Nov 25 17:05:08 compute-0 nova_compute[254092]: 2025-11-25 17:05:08.826 254096 DEBUG nova.network.neutron [req-e94c127e-447a-4b7b-a048-21be369b18fd req-d6f472ab-e3cc-4eae-b6f4-3174e97a56d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Updated VIF entry in instance network info cache for port d5b14553-424b-4985-9ed6-2f4afac92c00. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:05:08 compute-0 nova_compute[254092]: 2025-11-25 17:05:08.826 254096 DEBUG nova.network.neutron [req-e94c127e-447a-4b7b-a048-21be369b18fd req-d6f472ab-e3cc-4eae-b6f4-3174e97a56d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Updating instance_info_cache with network_info: [{"id": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "address": "fa:16:3e:7c:d6:7a", "network": {"id": "136c69a7-c4f8-40a2-be13-7ef82b7b3709", "bridge": "br-int", "label": "tempest-network-smoke--1102379118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13b6cf4-60", "ovs_interfaceid": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d5b14553-424b-4985-9ed6-2f4afac92c00", "address": "fa:16:3e:8b:9c:66", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b14553-42", "ovs_interfaceid": "d5b14553-424b-4985-9ed6-2f4afac92c00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:05:08 compute-0 nova_compute[254092]: 2025-11-25 17:05:08.849 254096 DEBUG oslo_concurrency.lockutils [req-e94c127e-447a-4b7b-a048-21be369b18fd req-d6f472ab-e3cc-4eae-b6f4-3174e97a56d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:05:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:05:08 compute-0 nova_compute[254092]: 2025-11-25 17:05:08.862 254096 DEBUG nova.compute.manager [req-d57712a4-4919-4ae9-9925-fdbf155bb3e6 req-30f50edc-a4d0-4943-a3da-04950a23ad35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Received event network-vif-unplugged-b68643ca-2301-486a-984d-43fc41d1f773 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:05:08 compute-0 nova_compute[254092]: 2025-11-25 17:05:08.862 254096 DEBUG oslo_concurrency.lockutils [req-d57712a4-4919-4ae9-9925-fdbf155bb3e6 req-30f50edc-a4d0-4943-a3da-04950a23ad35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "c5d6f631-cec2-431d-b476-feafa21e4f80-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:05:08 compute-0 nova_compute[254092]: 2025-11-25 17:05:08.862 254096 DEBUG oslo_concurrency.lockutils [req-d57712a4-4919-4ae9-9925-fdbf155bb3e6 req-30f50edc-a4d0-4943-a3da-04950a23ad35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "c5d6f631-cec2-431d-b476-feafa21e4f80-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:05:08 compute-0 nova_compute[254092]: 2025-11-25 17:05:08.863 254096 DEBUG oslo_concurrency.lockutils [req-d57712a4-4919-4ae9-9925-fdbf155bb3e6 req-30f50edc-a4d0-4943-a3da-04950a23ad35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "c5d6f631-cec2-431d-b476-feafa21e4f80-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:05:08 compute-0 nova_compute[254092]: 2025-11-25 17:05:08.863 254096 DEBUG nova.compute.manager [req-d57712a4-4919-4ae9-9925-fdbf155bb3e6 req-30f50edc-a4d0-4943-a3da-04950a23ad35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] No waiting events found dispatching network-vif-unplugged-b68643ca-2301-486a-984d-43fc41d1f773 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:05:08 compute-0 nova_compute[254092]: 2025-11-25 17:05:08.863 254096 DEBUG nova.compute.manager [req-d57712a4-4919-4ae9-9925-fdbf155bb3e6 req-30f50edc-a4d0-4943-a3da-04950a23ad35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Received event network-vif-unplugged-b68643ca-2301-486a-984d-43fc41d1f773 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 17:05:08 compute-0 nova_compute[254092]: 2025-11-25 17:05:08.863 254096 DEBUG nova.compute.manager [req-d57712a4-4919-4ae9-9925-fdbf155bb3e6 req-30f50edc-a4d0-4943-a3da-04950a23ad35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Received event network-vif-plugged-b68643ca-2301-486a-984d-43fc41d1f773 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:05:08 compute-0 nova_compute[254092]: 2025-11-25 17:05:08.863 254096 DEBUG oslo_concurrency.lockutils [req-d57712a4-4919-4ae9-9925-fdbf155bb3e6 req-30f50edc-a4d0-4943-a3da-04950a23ad35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "c5d6f631-cec2-431d-b476-feafa21e4f80-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:05:08 compute-0 nova_compute[254092]: 2025-11-25 17:05:08.864 254096 DEBUG oslo_concurrency.lockutils [req-d57712a4-4919-4ae9-9925-fdbf155bb3e6 req-30f50edc-a4d0-4943-a3da-04950a23ad35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "c5d6f631-cec2-431d-b476-feafa21e4f80-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:05:08 compute-0 nova_compute[254092]: 2025-11-25 17:05:08.864 254096 DEBUG oslo_concurrency.lockutils [req-d57712a4-4919-4ae9-9925-fdbf155bb3e6 req-30f50edc-a4d0-4943-a3da-04950a23ad35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "c5d6f631-cec2-431d-b476-feafa21e4f80-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:05:08 compute-0 nova_compute[254092]: 2025-11-25 17:05:08.864 254096 DEBUG nova.compute.manager [req-d57712a4-4919-4ae9-9925-fdbf155bb3e6 req-30f50edc-a4d0-4943-a3da-04950a23ad35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] No waiting events found dispatching network-vif-plugged-b68643ca-2301-486a-984d-43fc41d1f773 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:05:08 compute-0 nova_compute[254092]: 2025-11-25 17:05:08.864 254096 WARNING nova.compute.manager [req-d57712a4-4919-4ae9-9925-fdbf155bb3e6 req-30f50edc-a4d0-4943-a3da-04950a23ad35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Received unexpected event network-vif-plugged-b68643ca-2301-486a-984d-43fc41d1f773 for instance with vm_state active and task_state deleting.
Nov 25 17:05:08 compute-0 nova_compute[254092]: 2025-11-25 17:05:08.864 254096 DEBUG nova.compute.manager [req-d57712a4-4919-4ae9-9925-fdbf155bb3e6 req-30f50edc-a4d0-4943-a3da-04950a23ad35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Received event network-vif-plugged-d5b14553-424b-4985-9ed6-2f4afac92c00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:05:08 compute-0 nova_compute[254092]: 2025-11-25 17:05:08.864 254096 DEBUG oslo_concurrency.lockutils [req-d57712a4-4919-4ae9-9925-fdbf155bb3e6 req-30f50edc-a4d0-4943-a3da-04950a23ad35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:05:08 compute-0 nova_compute[254092]: 2025-11-25 17:05:08.865 254096 DEBUG oslo_concurrency.lockutils [req-d57712a4-4919-4ae9-9925-fdbf155bb3e6 req-30f50edc-a4d0-4943-a3da-04950a23ad35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:05:08 compute-0 nova_compute[254092]: 2025-11-25 17:05:08.865 254096 DEBUG oslo_concurrency.lockutils [req-d57712a4-4919-4ae9-9925-fdbf155bb3e6 req-30f50edc-a4d0-4943-a3da-04950a23ad35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:05:08 compute-0 nova_compute[254092]: 2025-11-25 17:05:08.865 254096 DEBUG nova.compute.manager [req-d57712a4-4919-4ae9-9925-fdbf155bb3e6 req-30f50edc-a4d0-4943-a3da-04950a23ad35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] No waiting events found dispatching network-vif-plugged-d5b14553-424b-4985-9ed6-2f4afac92c00 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:05:08 compute-0 nova_compute[254092]: 2025-11-25 17:05:08.865 254096 WARNING nova.compute.manager [req-d57712a4-4919-4ae9-9925-fdbf155bb3e6 req-30f50edc-a4d0-4943-a3da-04950a23ad35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Received unexpected event network-vif-plugged-d5b14553-424b-4985-9ed6-2f4afac92c00 for instance with vm_state active and task_state None.
Nov 25 17:05:08 compute-0 nova_compute[254092]: 2025-11-25 17:05:08.865 254096 DEBUG nova.compute.manager [req-d57712a4-4919-4ae9-9925-fdbf155bb3e6 req-30f50edc-a4d0-4943-a3da-04950a23ad35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Received event network-vif-plugged-d5b14553-424b-4985-9ed6-2f4afac92c00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:05:08 compute-0 nova_compute[254092]: 2025-11-25 17:05:08.866 254096 DEBUG oslo_concurrency.lockutils [req-d57712a4-4919-4ae9-9925-fdbf155bb3e6 req-30f50edc-a4d0-4943-a3da-04950a23ad35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:05:08 compute-0 nova_compute[254092]: 2025-11-25 17:05:08.866 254096 DEBUG oslo_concurrency.lockutils [req-d57712a4-4919-4ae9-9925-fdbf155bb3e6 req-30f50edc-a4d0-4943-a3da-04950a23ad35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:05:08 compute-0 nova_compute[254092]: 2025-11-25 17:05:08.866 254096 DEBUG oslo_concurrency.lockutils [req-d57712a4-4919-4ae9-9925-fdbf155bb3e6 req-30f50edc-a4d0-4943-a3da-04950a23ad35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:05:08 compute-0 nova_compute[254092]: 2025-11-25 17:05:08.866 254096 DEBUG nova.compute.manager [req-d57712a4-4919-4ae9-9925-fdbf155bb3e6 req-30f50edc-a4d0-4943-a3da-04950a23ad35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] No waiting events found dispatching network-vif-plugged-d5b14553-424b-4985-9ed6-2f4afac92c00 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:05:08 compute-0 nova_compute[254092]: 2025-11-25 17:05:08.866 254096 WARNING nova.compute.manager [req-d57712a4-4919-4ae9-9925-fdbf155bb3e6 req-30f50edc-a4d0-4943-a3da-04950a23ad35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Received unexpected event network-vif-plugged-d5b14553-424b-4985-9ed6-2f4afac92c00 for instance with vm_state active and task_state None.
Nov 25 17:05:09 compute-0 ovn_controller[153477]: 2025-11-25T17:05:09Z|00142|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8b:9c:66 10.100.0.28
Nov 25 17:05:09 compute-0 ovn_controller[153477]: 2025-11-25T17:05:09Z|00143|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8b:9c:66 10.100.0.28
Nov 25 17:05:09 compute-0 ceph-mon[74985]: pgmap v2448: 321 pgs: 321 active+clean; 200 MiB data, 927 MiB used, 59 GiB / 60 GiB avail; 1.3 KiB/s rd, 31 KiB/s wr, 1 op/s
Nov 25 17:05:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:05:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:05:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:05:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:05:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:05:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:05:10 compute-0 nova_compute[254092]: 2025-11-25 17:05:10.269 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2449: 321 pgs: 321 active+clean; 181 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 31 KiB/s wr, 18 op/s
Nov 25 17:05:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:10.981 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:05:10 compute-0 nova_compute[254092]: 2025-11-25 17:05:10.982 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:10.982 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 17:05:11 compute-0 nova_compute[254092]: 2025-11-25 17:05:11.015 254096 INFO nova.virt.libvirt.driver [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Deleting instance files /var/lib/nova/instances/c5d6f631-cec2-431d-b476-feafa21e4f80_del
Nov 25 17:05:11 compute-0 nova_compute[254092]: 2025-11-25 17:05:11.016 254096 INFO nova.virt.libvirt.driver [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Deletion of /var/lib/nova/instances/c5d6f631-cec2-431d-b476-feafa21e4f80_del complete
Nov 25 17:05:11 compute-0 nova_compute[254092]: 2025-11-25 17:05:11.061 254096 INFO nova.compute.manager [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Took 4.36 seconds to destroy the instance on the hypervisor.
Nov 25 17:05:11 compute-0 nova_compute[254092]: 2025-11-25 17:05:11.061 254096 DEBUG oslo.service.loopingcall [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 17:05:11 compute-0 nova_compute[254092]: 2025-11-25 17:05:11.062 254096 DEBUG nova.compute.manager [-] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 17:05:11 compute-0 nova_compute[254092]: 2025-11-25 17:05:11.062 254096 DEBUG nova.network.neutron [-] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 17:05:11 compute-0 ceph-mon[74985]: pgmap v2449: 321 pgs: 321 active+clean; 181 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 31 KiB/s wr, 18 op/s
Nov 25 17:05:11 compute-0 nova_compute[254092]: 2025-11-25 17:05:11.952 254096 DEBUG nova.network.neutron [-] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:05:11 compute-0 nova_compute[254092]: 2025-11-25 17:05:11.974 254096 INFO nova.compute.manager [-] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Took 0.91 seconds to deallocate network for instance.
Nov 25 17:05:12 compute-0 nova_compute[254092]: 2025-11-25 17:05:12.017 254096 DEBUG oslo_concurrency.lockutils [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:05:12 compute-0 nova_compute[254092]: 2025-11-25 17:05:12.018 254096 DEBUG oslo_concurrency.lockutils [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:05:12 compute-0 nova_compute[254092]: 2025-11-25 17:05:12.046 254096 DEBUG nova.compute.manager [req-e06ce0ac-d76c-4b57-8bb9-703fe7dc695e req-3bfdba21-d10d-4081-b61a-4aac228889b0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Received event network-vif-deleted-b68643ca-2301-486a-984d-43fc41d1f773 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:05:12 compute-0 nova_compute[254092]: 2025-11-25 17:05:12.104 254096 DEBUG oslo_concurrency.processutils [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:05:12 compute-0 nova_compute[254092]: 2025-11-25 17:05:12.222 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2450: 321 pgs: 321 active+clean; 121 MiB data, 881 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 34 KiB/s wr, 23 op/s
Nov 25 17:05:12 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:05:12 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2221322770' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:05:12 compute-0 nova_compute[254092]: 2025-11-25 17:05:12.516 254096 DEBUG oslo_concurrency.processutils [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:05:12 compute-0 nova_compute[254092]: 2025-11-25 17:05:12.522 254096 DEBUG nova.compute.provider_tree [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:05:12 compute-0 nova_compute[254092]: 2025-11-25 17:05:12.539 254096 DEBUG nova.scheduler.client.report [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:05:12 compute-0 nova_compute[254092]: 2025-11-25 17:05:12.561 254096 DEBUG oslo_concurrency.lockutils [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.543s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:05:12 compute-0 nova_compute[254092]: 2025-11-25 17:05:12.597 254096 INFO nova.scheduler.client.report [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Deleted allocations for instance c5d6f631-cec2-431d-b476-feafa21e4f80
Nov 25 17:05:12 compute-0 nova_compute[254092]: 2025-11-25 17:05:12.662 254096 DEBUG oslo_concurrency.lockutils [None req-ca2b56f5-dad8-4299-9710-2501457a9b4b 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "c5d6f631-cec2-431d-b476-feafa21e4f80" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.962s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:05:12 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2221322770' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:05:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:13.642 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:05:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:13.643 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:05:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:13.658 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.015s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:05:13 compute-0 ceph-mon[74985]: pgmap v2450: 321 pgs: 321 active+clean; 121 MiB data, 881 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 34 KiB/s wr, 23 op/s
Nov 25 17:05:13 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:05:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2451: 321 pgs: 321 active+clean; 121 MiB data, 881 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 11 KiB/s wr, 22 op/s
Nov 25 17:05:15 compute-0 nova_compute[254092]: 2025-11-25 17:05:15.274 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:15 compute-0 ceph-mon[74985]: pgmap v2451: 321 pgs: 321 active+clean; 121 MiB data, 881 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 11 KiB/s wr, 22 op/s
Nov 25 17:05:16 compute-0 ovn_controller[153477]: 2025-11-25T17:05:16Z|01259|binding|INFO|Releasing lport 77b25960-ed8e-4022-ac16-27312b8189a1 from this chassis (sb_readonly=0)
Nov 25 17:05:16 compute-0 ovn_controller[153477]: 2025-11-25T17:05:16Z|01260|binding|INFO|Releasing lport 85d5b09a-dc15-4154-acec-abe7a2e5fc19 from this chassis (sb_readonly=0)
Nov 25 17:05:16 compute-0 nova_compute[254092]: 2025-11-25 17:05:16.349 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2452: 321 pgs: 321 active+clean; 121 MiB data, 881 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 15 KiB/s wr, 29 op/s
Nov 25 17:05:16 compute-0 podman[387877]: 2025-11-25 17:05:16.63838166 +0000 UTC m=+0.054431124 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 17:05:16 compute-0 podman[387878]: 2025-11-25 17:05:16.677523353 +0000 UTC m=+0.091572332 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:05:16 compute-0 podman[387876]: 2025-11-25 17:05:16.680449473 +0000 UTC m=+0.090242405 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd)
Nov 25 17:05:16 compute-0 nova_compute[254092]: 2025-11-25 17:05:16.972 254096 DEBUG oslo_concurrency.lockutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "b747b045-786f-49a8-907c-cc222a07fa05" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:05:16 compute-0 nova_compute[254092]: 2025-11-25 17:05:16.973 254096 DEBUG oslo_concurrency.lockutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "b747b045-786f-49a8-907c-cc222a07fa05" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:05:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:16.985 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:05:16 compute-0 nova_compute[254092]: 2025-11-25 17:05:16.987 254096 DEBUG nova.compute.manager [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 17:05:17 compute-0 nova_compute[254092]: 2025-11-25 17:05:17.048 254096 DEBUG oslo_concurrency.lockutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:05:17 compute-0 nova_compute[254092]: 2025-11-25 17:05:17.048 254096 DEBUG oslo_concurrency.lockutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:05:17 compute-0 nova_compute[254092]: 2025-11-25 17:05:17.055 254096 DEBUG nova.virt.hardware [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 17:05:17 compute-0 nova_compute[254092]: 2025-11-25 17:05:17.055 254096 INFO nova.compute.claims [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Claim successful on node compute-0.ctlplane.example.com
Nov 25 17:05:17 compute-0 nova_compute[254092]: 2025-11-25 17:05:17.161 254096 DEBUG oslo_concurrency.processutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:05:17 compute-0 nova_compute[254092]: 2025-11-25 17:05:17.225 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:17 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:05:17 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2275450836' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:05:17 compute-0 nova_compute[254092]: 2025-11-25 17:05:17.622 254096 DEBUG oslo_concurrency.processutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:05:17 compute-0 nova_compute[254092]: 2025-11-25 17:05:17.628 254096 DEBUG nova.compute.provider_tree [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:05:17 compute-0 nova_compute[254092]: 2025-11-25 17:05:17.655 254096 DEBUG nova.scheduler.client.report [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:05:17 compute-0 nova_compute[254092]: 2025-11-25 17:05:17.683 254096 DEBUG oslo_concurrency.lockutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:05:17 compute-0 nova_compute[254092]: 2025-11-25 17:05:17.683 254096 DEBUG nova.compute.manager [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 17:05:17 compute-0 nova_compute[254092]: 2025-11-25 17:05:17.725 254096 DEBUG nova.compute.manager [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 17:05:17 compute-0 nova_compute[254092]: 2025-11-25 17:05:17.726 254096 DEBUG nova.network.neutron [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 17:05:17 compute-0 nova_compute[254092]: 2025-11-25 17:05:17.745 254096 INFO nova.virt.libvirt.driver [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 17:05:17 compute-0 nova_compute[254092]: 2025-11-25 17:05:17.764 254096 DEBUG nova.compute.manager [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 17:05:17 compute-0 ceph-mon[74985]: pgmap v2452: 321 pgs: 321 active+clean; 121 MiB data, 881 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 15 KiB/s wr, 29 op/s
Nov 25 17:05:17 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2275450836' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:05:17 compute-0 nova_compute[254092]: 2025-11-25 17:05:17.909 254096 DEBUG nova.compute.manager [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 17:05:17 compute-0 nova_compute[254092]: 2025-11-25 17:05:17.911 254096 DEBUG nova.virt.libvirt.driver [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 17:05:17 compute-0 nova_compute[254092]: 2025-11-25 17:05:17.912 254096 INFO nova.virt.libvirt.driver [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Creating image(s)
Nov 25 17:05:17 compute-0 nova_compute[254092]: 2025-11-25 17:05:17.950 254096 DEBUG nova.storage.rbd_utils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image b747b045-786f-49a8-907c-cc222a07fa05_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:05:17 compute-0 nova_compute[254092]: 2025-11-25 17:05:17.986 254096 DEBUG nova.storage.rbd_utils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image b747b045-786f-49a8-907c-cc222a07fa05_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:05:18 compute-0 nova_compute[254092]: 2025-11-25 17:05:18.022 254096 DEBUG nova.storage.rbd_utils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image b747b045-786f-49a8-907c-cc222a07fa05_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:05:18 compute-0 nova_compute[254092]: 2025-11-25 17:05:18.027 254096 DEBUG oslo_concurrency.processutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:05:18 compute-0 nova_compute[254092]: 2025-11-25 17:05:18.070 254096 DEBUG nova.policy [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e13382ac65e94c7b8a89e67de0e754df', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 17:05:18 compute-0 nova_compute[254092]: 2025-11-25 17:05:18.108 254096 DEBUG oslo_concurrency.processutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:05:18 compute-0 nova_compute[254092]: 2025-11-25 17:05:18.108 254096 DEBUG oslo_concurrency.lockutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:05:18 compute-0 nova_compute[254092]: 2025-11-25 17:05:18.109 254096 DEBUG oslo_concurrency.lockutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:05:18 compute-0 nova_compute[254092]: 2025-11-25 17:05:18.110 254096 DEBUG oslo_concurrency.lockutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:05:18 compute-0 nova_compute[254092]: 2025-11-25 17:05:18.134 254096 DEBUG nova.storage.rbd_utils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image b747b045-786f-49a8-907c-cc222a07fa05_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:05:18 compute-0 nova_compute[254092]: 2025-11-25 17:05:18.138 254096 DEBUG oslo_concurrency.processutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 b747b045-786f-49a8-907c-cc222a07fa05_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:05:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2453: 321 pgs: 321 active+clean; 121 MiB data, 881 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 8.2 KiB/s wr, 28 op/s
Nov 25 17:05:18 compute-0 nova_compute[254092]: 2025-11-25 17:05:18.441 254096 DEBUG oslo_concurrency.processutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 b747b045-786f-49a8-907c-cc222a07fa05_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.303s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:05:18 compute-0 nova_compute[254092]: 2025-11-25 17:05:18.501 254096 DEBUG nova.storage.rbd_utils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] resizing rbd image b747b045-786f-49a8-907c-cc222a07fa05_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 17:05:18 compute-0 nova_compute[254092]: 2025-11-25 17:05:18.579 254096 DEBUG nova.objects.instance [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'migration_context' on Instance uuid b747b045-786f-49a8-907c-cc222a07fa05 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:05:18 compute-0 nova_compute[254092]: 2025-11-25 17:05:18.591 254096 DEBUG nova.virt.libvirt.driver [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 17:05:18 compute-0 nova_compute[254092]: 2025-11-25 17:05:18.592 254096 DEBUG nova.virt.libvirt.driver [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Ensure instance console log exists: /var/lib/nova/instances/b747b045-786f-49a8-907c-cc222a07fa05/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 17:05:18 compute-0 nova_compute[254092]: 2025-11-25 17:05:18.592 254096 DEBUG oslo_concurrency.lockutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:05:18 compute-0 nova_compute[254092]: 2025-11-25 17:05:18.592 254096 DEBUG oslo_concurrency.lockutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:05:18 compute-0 nova_compute[254092]: 2025-11-25 17:05:18.593 254096 DEBUG oslo_concurrency.lockutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:05:18 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:05:19 compute-0 nova_compute[254092]: 2025-11-25 17:05:19.536 254096 DEBUG nova.network.neutron [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Successfully created port: 4f707331-3a0e-47b6-98ee-569db81bd594 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 17:05:19 compute-0 nova_compute[254092]: 2025-11-25 17:05:19.720 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:19 compute-0 ceph-mon[74985]: pgmap v2453: 321 pgs: 321 active+clean; 121 MiB data, 881 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 8.2 KiB/s wr, 28 op/s
Nov 25 17:05:20 compute-0 nova_compute[254092]: 2025-11-25 17:05:20.276 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2454: 321 pgs: 321 active+clean; 132 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 380 KiB/s wr, 53 op/s
Nov 25 17:05:20 compute-0 nova_compute[254092]: 2025-11-25 17:05:20.816 254096 DEBUG nova.network.neutron [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Successfully updated port: 4f707331-3a0e-47b6-98ee-569db81bd594 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:05:20 compute-0 nova_compute[254092]: 2025-11-25 17:05:20.838 254096 DEBUG oslo_concurrency.lockutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "refresh_cache-b747b045-786f-49a8-907c-cc222a07fa05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:05:20 compute-0 nova_compute[254092]: 2025-11-25 17:05:20.838 254096 DEBUG oslo_concurrency.lockutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquired lock "refresh_cache-b747b045-786f-49a8-907c-cc222a07fa05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:05:20 compute-0 nova_compute[254092]: 2025-11-25 17:05:20.838 254096 DEBUG nova.network.neutron [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 17:05:20 compute-0 nova_compute[254092]: 2025-11-25 17:05:20.913 254096 DEBUG nova.compute.manager [req-3f886f70-3874-4f85-b62b-b6f58893a08b req-d73f59e2-75d3-499e-b50b-af4d1df6df81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Received event network-changed-4f707331-3a0e-47b6-98ee-569db81bd594 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:05:20 compute-0 nova_compute[254092]: 2025-11-25 17:05:20.913 254096 DEBUG nova.compute.manager [req-3f886f70-3874-4f85-b62b-b6f58893a08b req-d73f59e2-75d3-499e-b50b-af4d1df6df81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Refreshing instance network info cache due to event network-changed-4f707331-3a0e-47b6-98ee-569db81bd594. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:05:20 compute-0 nova_compute[254092]: 2025-11-25 17:05:20.914 254096 DEBUG oslo_concurrency.lockutils [req-3f886f70-3874-4f85-b62b-b6f58893a08b req-d73f59e2-75d3-499e-b50b-af4d1df6df81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-b747b045-786f-49a8-907c-cc222a07fa05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:05:21 compute-0 nova_compute[254092]: 2025-11-25 17:05:21.098 254096 DEBUG nova.network.neutron [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:05:21 compute-0 ceph-mon[74985]: pgmap v2454: 321 pgs: 321 active+clean; 132 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 380 KiB/s wr, 53 op/s
Nov 25 17:05:21 compute-0 nova_compute[254092]: 2025-11-25 17:05:21.948 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090306.947035, c5d6f631-cec2-431d-b476-feafa21e4f80 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:05:21 compute-0 nova_compute[254092]: 2025-11-25 17:05:21.949 254096 INFO nova.compute.manager [-] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] VM Stopped (Lifecycle Event)
Nov 25 17:05:21 compute-0 nova_compute[254092]: 2025-11-25 17:05:21.964 254096 DEBUG nova.compute.manager [None req-bf84427c-3325-4aa3-a504-ac65871c6100 - - - - - -] [instance: c5d6f631-cec2-431d-b476-feafa21e4f80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:05:22 compute-0 nova_compute[254092]: 2025-11-25 17:05:22.111 254096 DEBUG nova.network.neutron [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Updating instance_info_cache with network_info: [{"id": "4f707331-3a0e-47b6-98ee-569db81bd594", "address": "fa:16:3e:15:5c:fc", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f707331-3a", "ovs_interfaceid": "4f707331-3a0e-47b6-98ee-569db81bd594", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:05:22 compute-0 nova_compute[254092]: 2025-11-25 17:05:22.140 254096 DEBUG oslo_concurrency.lockutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Releasing lock "refresh_cache-b747b045-786f-49a8-907c-cc222a07fa05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:05:22 compute-0 nova_compute[254092]: 2025-11-25 17:05:22.140 254096 DEBUG nova.compute.manager [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Instance network_info: |[{"id": "4f707331-3a0e-47b6-98ee-569db81bd594", "address": "fa:16:3e:15:5c:fc", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f707331-3a", "ovs_interfaceid": "4f707331-3a0e-47b6-98ee-569db81bd594", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 17:05:22 compute-0 nova_compute[254092]: 2025-11-25 17:05:22.141 254096 DEBUG oslo_concurrency.lockutils [req-3f886f70-3874-4f85-b62b-b6f58893a08b req-d73f59e2-75d3-499e-b50b-af4d1df6df81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-b747b045-786f-49a8-907c-cc222a07fa05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:05:22 compute-0 nova_compute[254092]: 2025-11-25 17:05:22.141 254096 DEBUG nova.network.neutron [req-3f886f70-3874-4f85-b62b-b6f58893a08b req-d73f59e2-75d3-499e-b50b-af4d1df6df81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Refreshing network info cache for port 4f707331-3a0e-47b6-98ee-569db81bd594 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:05:22 compute-0 nova_compute[254092]: 2025-11-25 17:05:22.145 254096 DEBUG nova.virt.libvirt.driver [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Start _get_guest_xml network_info=[{"id": "4f707331-3a0e-47b6-98ee-569db81bd594", "address": "fa:16:3e:15:5c:fc", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f707331-3a", "ovs_interfaceid": "4f707331-3a0e-47b6-98ee-569db81bd594", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 17:05:22 compute-0 nova_compute[254092]: 2025-11-25 17:05:22.149 254096 WARNING nova.virt.libvirt.driver [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:05:22 compute-0 nova_compute[254092]: 2025-11-25 17:05:22.180 254096 DEBUG nova.virt.libvirt.host [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 17:05:22 compute-0 nova_compute[254092]: 2025-11-25 17:05:22.181 254096 DEBUG nova.virt.libvirt.host [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 17:05:22 compute-0 nova_compute[254092]: 2025-11-25 17:05:22.192 254096 DEBUG nova.virt.libvirt.host [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 17:05:22 compute-0 nova_compute[254092]: 2025-11-25 17:05:22.192 254096 DEBUG nova.virt.libvirt.host [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 17:05:22 compute-0 nova_compute[254092]: 2025-11-25 17:05:22.193 254096 DEBUG nova.virt.libvirt.driver [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 17:05:22 compute-0 nova_compute[254092]: 2025-11-25 17:05:22.193 254096 DEBUG nova.virt.hardware [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 17:05:22 compute-0 nova_compute[254092]: 2025-11-25 17:05:22.194 254096 DEBUG nova.virt.hardware [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 17:05:22 compute-0 nova_compute[254092]: 2025-11-25 17:05:22.194 254096 DEBUG nova.virt.hardware [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 17:05:22 compute-0 nova_compute[254092]: 2025-11-25 17:05:22.194 254096 DEBUG nova.virt.hardware [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 17:05:22 compute-0 nova_compute[254092]: 2025-11-25 17:05:22.194 254096 DEBUG nova.virt.hardware [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 17:05:22 compute-0 nova_compute[254092]: 2025-11-25 17:05:22.195 254096 DEBUG nova.virt.hardware [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 17:05:22 compute-0 nova_compute[254092]: 2025-11-25 17:05:22.195 254096 DEBUG nova.virt.hardware [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 17:05:22 compute-0 nova_compute[254092]: 2025-11-25 17:05:22.195 254096 DEBUG nova.virt.hardware [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 17:05:22 compute-0 nova_compute[254092]: 2025-11-25 17:05:22.196 254096 DEBUG nova.virt.hardware [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 17:05:22 compute-0 nova_compute[254092]: 2025-11-25 17:05:22.196 254096 DEBUG nova.virt.hardware [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 17:05:22 compute-0 nova_compute[254092]: 2025-11-25 17:05:22.196 254096 DEBUG nova.virt.hardware [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 17:05:22 compute-0 nova_compute[254092]: 2025-11-25 17:05:22.199 254096 DEBUG oslo_concurrency.processutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:05:22 compute-0 nova_compute[254092]: 2025-11-25 17:05:22.248 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2455: 321 pgs: 321 active+clean; 167 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Nov 25 17:05:22 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:05:22 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2839384658' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:05:22 compute-0 nova_compute[254092]: 2025-11-25 17:05:22.710 254096 DEBUG oslo_concurrency.processutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:05:22 compute-0 nova_compute[254092]: 2025-11-25 17:05:22.746 254096 DEBUG nova.storage.rbd_utils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image b747b045-786f-49a8-907c-cc222a07fa05_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:05:22 compute-0 nova_compute[254092]: 2025-11-25 17:05:22.752 254096 DEBUG oslo_concurrency.processutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:05:22 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2839384658' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:05:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:05:23 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/883095188' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:05:23 compute-0 nova_compute[254092]: 2025-11-25 17:05:23.195 254096 DEBUG oslo_concurrency.processutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:05:23 compute-0 nova_compute[254092]: 2025-11-25 17:05:23.197 254096 DEBUG nova.virt.libvirt.vif [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:05:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2108522710',display_name='tempest-TestNetworkBasicOps-server-2108522710',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2108522710',id=122,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJvG1eO5jPF+SV0Ao2r1oFJ5d8qXkQHoty8TB6rNrtbLrWvCsgRx2hMEMQlhuNosicoMv5mD+tjKe9vDZEtzlGPJcyIy9mr2/vCsJq0iL2bTTiQYg0Y14H1/7blYpbPNoQ==',key_name='tempest-TestNetworkBasicOps-1506231644',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-5q8f6jt1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:05:17Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=b747b045-786f-49a8-907c-cc222a07fa05,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4f707331-3a0e-47b6-98ee-569db81bd594", "address": "fa:16:3e:15:5c:fc", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f707331-3a", "ovs_interfaceid": "4f707331-3a0e-47b6-98ee-569db81bd594", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:05:23 compute-0 nova_compute[254092]: 2025-11-25 17:05:23.198 254096 DEBUG nova.network.os_vif_util [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "4f707331-3a0e-47b6-98ee-569db81bd594", "address": "fa:16:3e:15:5c:fc", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f707331-3a", "ovs_interfaceid": "4f707331-3a0e-47b6-98ee-569db81bd594", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:05:23 compute-0 nova_compute[254092]: 2025-11-25 17:05:23.199 254096 DEBUG nova.network.os_vif_util [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:5c:fc,bridge_name='br-int',has_traffic_filtering=True,id=4f707331-3a0e-47b6-98ee-569db81bd594,network=Network(4a48b006-a4d1-4fa5-88f1-79386ed2958f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f707331-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:05:23 compute-0 nova_compute[254092]: 2025-11-25 17:05:23.200 254096 DEBUG nova.objects.instance [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'pci_devices' on Instance uuid b747b045-786f-49a8-907c-cc222a07fa05 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:05:23 compute-0 nova_compute[254092]: 2025-11-25 17:05:23.234 254096 DEBUG nova.virt.libvirt.driver [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] End _get_guest_xml xml=<domain type="kvm">
Nov 25 17:05:23 compute-0 nova_compute[254092]:   <uuid>b747b045-786f-49a8-907c-cc222a07fa05</uuid>
Nov 25 17:05:23 compute-0 nova_compute[254092]:   <name>instance-0000007a</name>
Nov 25 17:05:23 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 17:05:23 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 17:05:23 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:05:23 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:       <nova:name>tempest-TestNetworkBasicOps-server-2108522710</nova:name>
Nov 25 17:05:23 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 17:05:22</nova:creationTime>
Nov 25 17:05:23 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 17:05:23 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 17:05:23 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 17:05:23 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 17:05:23 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:05:23 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 17:05:23 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 17:05:23 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 17:05:23 compute-0 nova_compute[254092]:         <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 17:05:23 compute-0 nova_compute[254092]:         <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 17:05:23 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 17:05:23 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 17:05:23 compute-0 nova_compute[254092]:         <nova:port uuid="4f707331-3a0e-47b6-98ee-569db81bd594">
Nov 25 17:05:23 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.26" ipVersion="4"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 17:05:23 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 17:05:23 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:05:23 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 17:05:23 compute-0 nova_compute[254092]:     <system>
Nov 25 17:05:23 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 17:05:23 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 17:05:23 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:05:23 compute-0 nova_compute[254092]:       <entry name="serial">b747b045-786f-49a8-907c-cc222a07fa05</entry>
Nov 25 17:05:23 compute-0 nova_compute[254092]:       <entry name="uuid">b747b045-786f-49a8-907c-cc222a07fa05</entry>
Nov 25 17:05:23 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     </system>
Nov 25 17:05:23 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:05:23 compute-0 nova_compute[254092]:   <os>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:   </os>
Nov 25 17:05:23 compute-0 nova_compute[254092]:   <features>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:   </features>
Nov 25 17:05:23 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 17:05:23 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:05:23 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 17:05:23 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:05:23 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 17:05:23 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/b747b045-786f-49a8-907c-cc222a07fa05_disk">
Nov 25 17:05:23 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:       </source>
Nov 25 17:05:23 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:05:23 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:05:23 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 17:05:23 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/b747b045-786f-49a8-907c-cc222a07fa05_disk.config">
Nov 25 17:05:23 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:       </source>
Nov 25 17:05:23 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:05:23 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:05:23 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 17:05:23 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:15:5c:fc"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:       <target dev="tap4f707331-3a"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 17:05:23 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/b747b045-786f-49a8-907c-cc222a07fa05/console.log" append="off"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     <video>
Nov 25 17:05:23 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     </video>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 17:05:23 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 17:05:23 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 17:05:23 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:05:23 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:05:23 compute-0 nova_compute[254092]: </domain>
Nov 25 17:05:23 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 17:05:23 compute-0 nova_compute[254092]: 2025-11-25 17:05:23.237 254096 DEBUG nova.compute.manager [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Preparing to wait for external event network-vif-plugged-4f707331-3a0e-47b6-98ee-569db81bd594 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 17:05:23 compute-0 nova_compute[254092]: 2025-11-25 17:05:23.237 254096 DEBUG oslo_concurrency.lockutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "b747b045-786f-49a8-907c-cc222a07fa05-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:05:23 compute-0 nova_compute[254092]: 2025-11-25 17:05:23.238 254096 DEBUG oslo_concurrency.lockutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "b747b045-786f-49a8-907c-cc222a07fa05-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:05:23 compute-0 nova_compute[254092]: 2025-11-25 17:05:23.238 254096 DEBUG oslo_concurrency.lockutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "b747b045-786f-49a8-907c-cc222a07fa05-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:05:23 compute-0 nova_compute[254092]: 2025-11-25 17:05:23.239 254096 DEBUG nova.virt.libvirt.vif [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:05:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2108522710',display_name='tempest-TestNetworkBasicOps-server-2108522710',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2108522710',id=122,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJvG1eO5jPF+SV0Ao2r1oFJ5d8qXkQHoty8TB6rNrtbLrWvCsgRx2hMEMQlhuNosicoMv5mD+tjKe9vDZEtzlGPJcyIy9mr2/vCsJq0iL2bTTiQYg0Y14H1/7blYpbPNoQ==',key_name='tempest-TestNetworkBasicOps-1506231644',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-5q8f6jt1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:05:17Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=b747b045-786f-49a8-907c-cc222a07fa05,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4f707331-3a0e-47b6-98ee-569db81bd594", "address": "fa:16:3e:15:5c:fc", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f707331-3a", "ovs_interfaceid": "4f707331-3a0e-47b6-98ee-569db81bd594", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:05:23 compute-0 nova_compute[254092]: 2025-11-25 17:05:23.239 254096 DEBUG nova.network.os_vif_util [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "4f707331-3a0e-47b6-98ee-569db81bd594", "address": "fa:16:3e:15:5c:fc", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f707331-3a", "ovs_interfaceid": "4f707331-3a0e-47b6-98ee-569db81bd594", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:05:23 compute-0 nova_compute[254092]: 2025-11-25 17:05:23.240 254096 DEBUG nova.network.os_vif_util [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:5c:fc,bridge_name='br-int',has_traffic_filtering=True,id=4f707331-3a0e-47b6-98ee-569db81bd594,network=Network(4a48b006-a4d1-4fa5-88f1-79386ed2958f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f707331-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:05:23 compute-0 nova_compute[254092]: 2025-11-25 17:05:23.240 254096 DEBUG os_vif [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:5c:fc,bridge_name='br-int',has_traffic_filtering=True,id=4f707331-3a0e-47b6-98ee-569db81bd594,network=Network(4a48b006-a4d1-4fa5-88f1-79386ed2958f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f707331-3a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:05:23 compute-0 nova_compute[254092]: 2025-11-25 17:05:23.241 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:23 compute-0 nova_compute[254092]: 2025-11-25 17:05:23.242 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:05:23 compute-0 nova_compute[254092]: 2025-11-25 17:05:23.242 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:05:23 compute-0 nova_compute[254092]: 2025-11-25 17:05:23.246 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:23 compute-0 nova_compute[254092]: 2025-11-25 17:05:23.246 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4f707331-3a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:05:23 compute-0 nova_compute[254092]: 2025-11-25 17:05:23.246 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4f707331-3a, col_values=(('external_ids', {'iface-id': '4f707331-3a0e-47b6-98ee-569db81bd594', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:15:5c:fc', 'vm-uuid': 'b747b045-786f-49a8-907c-cc222a07fa05'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:05:23 compute-0 nova_compute[254092]: 2025-11-25 17:05:23.275 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:23 compute-0 NetworkManager[48891]: <info>  [1764090323.2770] manager: (tap4f707331-3a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/522)
Nov 25 17:05:23 compute-0 nova_compute[254092]: 2025-11-25 17:05:23.279 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:05:23 compute-0 nova_compute[254092]: 2025-11-25 17:05:23.284 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:23 compute-0 nova_compute[254092]: 2025-11-25 17:05:23.286 254096 INFO os_vif [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:5c:fc,bridge_name='br-int',has_traffic_filtering=True,id=4f707331-3a0e-47b6-98ee-569db81bd594,network=Network(4a48b006-a4d1-4fa5-88f1-79386ed2958f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f707331-3a')
Nov 25 17:05:23 compute-0 nova_compute[254092]: 2025-11-25 17:05:23.353 254096 DEBUG nova.virt.libvirt.driver [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:05:23 compute-0 nova_compute[254092]: 2025-11-25 17:05:23.354 254096 DEBUG nova.virt.libvirt.driver [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:05:23 compute-0 nova_compute[254092]: 2025-11-25 17:05:23.354 254096 DEBUG nova.virt.libvirt.driver [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No VIF found with MAC fa:16:3e:15:5c:fc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:05:23 compute-0 nova_compute[254092]: 2025-11-25 17:05:23.356 254096 INFO nova.virt.libvirt.driver [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Using config drive
Nov 25 17:05:23 compute-0 nova_compute[254092]: 2025-11-25 17:05:23.395 254096 DEBUG nova.storage.rbd_utils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image b747b045-786f-49a8-907c-cc222a07fa05_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:05:23 compute-0 nova_compute[254092]: 2025-11-25 17:05:23.403 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:23 compute-0 nova_compute[254092]: 2025-11-25 17:05:23.676 254096 DEBUG nova.network.neutron [req-3f886f70-3874-4f85-b62b-b6f58893a08b req-d73f59e2-75d3-499e-b50b-af4d1df6df81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Updated VIF entry in instance network info cache for port 4f707331-3a0e-47b6-98ee-569db81bd594. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:05:23 compute-0 nova_compute[254092]: 2025-11-25 17:05:23.676 254096 DEBUG nova.network.neutron [req-3f886f70-3874-4f85-b62b-b6f58893a08b req-d73f59e2-75d3-499e-b50b-af4d1df6df81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Updating instance_info_cache with network_info: [{"id": "4f707331-3a0e-47b6-98ee-569db81bd594", "address": "fa:16:3e:15:5c:fc", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f707331-3a", "ovs_interfaceid": "4f707331-3a0e-47b6-98ee-569db81bd594", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:05:23 compute-0 nova_compute[254092]: 2025-11-25 17:05:23.696 254096 DEBUG oslo_concurrency.lockutils [req-3f886f70-3874-4f85-b62b-b6f58893a08b req-d73f59e2-75d3-499e-b50b-af4d1df6df81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-b747b045-786f-49a8-907c-cc222a07fa05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:05:23 compute-0 nova_compute[254092]: 2025-11-25 17:05:23.706 254096 INFO nova.virt.libvirt.driver [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Creating config drive at /var/lib/nova/instances/b747b045-786f-49a8-907c-cc222a07fa05/disk.config
Nov 25 17:05:23 compute-0 nova_compute[254092]: 2025-11-25 17:05:23.719 254096 DEBUG oslo_concurrency.processutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b747b045-786f-49a8-907c-cc222a07fa05/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpainul2bu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:05:23 compute-0 ceph-mon[74985]: pgmap v2455: 321 pgs: 321 active+clean; 167 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Nov 25 17:05:23 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/883095188' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:05:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:05:23 compute-0 nova_compute[254092]: 2025-11-25 17:05:23.868 254096 DEBUG oslo_concurrency.processutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b747b045-786f-49a8-907c-cc222a07fa05/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpainul2bu" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:05:23 compute-0 nova_compute[254092]: 2025-11-25 17:05:23.895 254096 DEBUG nova.storage.rbd_utils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image b747b045-786f-49a8-907c-cc222a07fa05_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:05:23 compute-0 nova_compute[254092]: 2025-11-25 17:05:23.900 254096 DEBUG oslo_concurrency.processutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b747b045-786f-49a8-907c-cc222a07fa05/disk.config b747b045-786f-49a8-907c-cc222a07fa05_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:05:24 compute-0 nova_compute[254092]: 2025-11-25 17:05:24.082 254096 DEBUG oslo_concurrency.processutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b747b045-786f-49a8-907c-cc222a07fa05/disk.config b747b045-786f-49a8-907c-cc222a07fa05_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:05:24 compute-0 nova_compute[254092]: 2025-11-25 17:05:24.084 254096 INFO nova.virt.libvirt.driver [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Deleting local config drive /var/lib/nova/instances/b747b045-786f-49a8-907c-cc222a07fa05/disk.config because it was imported into RBD.
Nov 25 17:05:24 compute-0 kernel: tap4f707331-3a: entered promiscuous mode
Nov 25 17:05:24 compute-0 NetworkManager[48891]: <info>  [1764090324.1557] manager: (tap4f707331-3a): new Tun device (/org/freedesktop/NetworkManager/Devices/523)
Nov 25 17:05:24 compute-0 ovn_controller[153477]: 2025-11-25T17:05:24Z|01261|binding|INFO|Claiming lport 4f707331-3a0e-47b6-98ee-569db81bd594 for this chassis.
Nov 25 17:05:24 compute-0 ovn_controller[153477]: 2025-11-25T17:05:24Z|01262|binding|INFO|4f707331-3a0e-47b6-98ee-569db81bd594: Claiming fa:16:3e:15:5c:fc 10.100.0.26
Nov 25 17:05:24 compute-0 nova_compute[254092]: 2025-11-25 17:05:24.158 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:24.165 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:5c:fc 10.100.0.26'], port_security=['fa:16:3e:15:5c:fc 10.100.0.26'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.26/28', 'neutron:device_id': 'b747b045-786f-49a8-907c-cc222a07fa05', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a48b006-a4d1-4fa5-88f1-79386ed2958f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '2', 'neutron:security_group_ids': '30729089-f7ac-4b8e-887e-74cce32287f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3bba81af-c0c6-44bd-ba73-08033ea9224a, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=4f707331-3a0e-47b6-98ee-569db81bd594) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:05:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:24.167 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 4f707331-3a0e-47b6-98ee-569db81bd594 in datapath 4a48b006-a4d1-4fa5-88f1-79386ed2958f bound to our chassis
Nov 25 17:05:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:24.168 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4a48b006-a4d1-4fa5-88f1-79386ed2958f
Nov 25 17:05:24 compute-0 ovn_controller[153477]: 2025-11-25T17:05:24Z|01263|binding|INFO|Setting lport 4f707331-3a0e-47b6-98ee-569db81bd594 ovn-installed in OVS
Nov 25 17:05:24 compute-0 ovn_controller[153477]: 2025-11-25T17:05:24Z|01264|binding|INFO|Setting lport 4f707331-3a0e-47b6-98ee-569db81bd594 up in Southbound
Nov 25 17:05:24 compute-0 nova_compute[254092]: 2025-11-25 17:05:24.189 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:24.191 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5370a3c1-696b-433b-b44a-9712353c7178]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:05:24 compute-0 systemd-machined[216343]: New machine qemu-154-instance-0000007a.
Nov 25 17:05:24 compute-0 nova_compute[254092]: 2025-11-25 17:05:24.196 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:24 compute-0 systemd[1]: Started Virtual Machine qemu-154-instance-0000007a.
Nov 25 17:05:24 compute-0 systemd-udevd[388263]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:05:24 compute-0 NetworkManager[48891]: <info>  [1764090324.2212] device (tap4f707331-3a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:05:24 compute-0 NetworkManager[48891]: <info>  [1764090324.2223] device (tap4f707331-3a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:05:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:24.222 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ff7d9726-3d17-4dab-8fc6-294228335cd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:05:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:24.225 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[4deecb15-9d99-40a4-8ceb-0f5eebee6e2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:05:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:24.261 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[af568685-102b-45da-86cb-51cc2b5d00ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:05:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:24.280 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8ef0c7ec-0bd3-48eb-843b-b23756d7f276]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4a48b006-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:66:02:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 370], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676537, 'reachable_time': 38501, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 388273, 'error': None, 'target': 'ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:05:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:24.294 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[472f53fc-8720-4472-8cdd-558354aee892]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tap4a48b006-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 676551, 'tstamp': 676551}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 388275, 'error': None, 'target': 'ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4a48b006-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 676554, 'tstamp': 676554}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 388275, 'error': None, 'target': 'ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:05:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:24.296 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a48b006-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:05:24 compute-0 nova_compute[254092]: 2025-11-25 17:05:24.297 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:24 compute-0 nova_compute[254092]: 2025-11-25 17:05:24.298 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:24.299 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a48b006-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:05:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:24.299 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:05:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:24.300 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4a48b006-a0, col_values=(('external_ids', {'iface-id': '85d5b09a-dc15-4154-acec-abe7a2e5fc19'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:05:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:24.300 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:05:24 compute-0 nova_compute[254092]: 2025-11-25 17:05:24.384 254096 DEBUG nova.compute.manager [req-dad4e447-2e37-4208-9183-0f6d414bde60 req-59cc24a6-a43c-404f-ba08-2bcaaa3fbfe5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Received event network-vif-plugged-4f707331-3a0e-47b6-98ee-569db81bd594 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:05:24 compute-0 nova_compute[254092]: 2025-11-25 17:05:24.384 254096 DEBUG oslo_concurrency.lockutils [req-dad4e447-2e37-4208-9183-0f6d414bde60 req-59cc24a6-a43c-404f-ba08-2bcaaa3fbfe5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b747b045-786f-49a8-907c-cc222a07fa05-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:05:24 compute-0 nova_compute[254092]: 2025-11-25 17:05:24.385 254096 DEBUG oslo_concurrency.lockutils [req-dad4e447-2e37-4208-9183-0f6d414bde60 req-59cc24a6-a43c-404f-ba08-2bcaaa3fbfe5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b747b045-786f-49a8-907c-cc222a07fa05-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:05:24 compute-0 nova_compute[254092]: 2025-11-25 17:05:24.385 254096 DEBUG oslo_concurrency.lockutils [req-dad4e447-2e37-4208-9183-0f6d414bde60 req-59cc24a6-a43c-404f-ba08-2bcaaa3fbfe5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b747b045-786f-49a8-907c-cc222a07fa05-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:05:24 compute-0 nova_compute[254092]: 2025-11-25 17:05:24.385 254096 DEBUG nova.compute.manager [req-dad4e447-2e37-4208-9183-0f6d414bde60 req-59cc24a6-a43c-404f-ba08-2bcaaa3fbfe5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Processing event network-vif-plugged-4f707331-3a0e-47b6-98ee-569db81bd594 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 17:05:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2456: 321 pgs: 321 active+clean; 167 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 25 17:05:24 compute-0 nova_compute[254092]: 2025-11-25 17:05:24.985 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090324.9847481, b747b045-786f-49a8-907c-cc222a07fa05 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:05:24 compute-0 nova_compute[254092]: 2025-11-25 17:05:24.985 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b747b045-786f-49a8-907c-cc222a07fa05] VM Started (Lifecycle Event)
Nov 25 17:05:24 compute-0 nova_compute[254092]: 2025-11-25 17:05:24.988 254096 DEBUG nova.compute.manager [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 17:05:24 compute-0 nova_compute[254092]: 2025-11-25 17:05:24.991 254096 DEBUG nova.virt.libvirt.driver [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 17:05:24 compute-0 nova_compute[254092]: 2025-11-25 17:05:24.994 254096 INFO nova.virt.libvirt.driver [-] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Instance spawned successfully.
Nov 25 17:05:24 compute-0 nova_compute[254092]: 2025-11-25 17:05:24.994 254096 DEBUG nova.virt.libvirt.driver [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 17:05:25 compute-0 nova_compute[254092]: 2025-11-25 17:05:25.004 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:05:25 compute-0 nova_compute[254092]: 2025-11-25 17:05:25.010 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:05:25 compute-0 nova_compute[254092]: 2025-11-25 17:05:25.015 254096 DEBUG nova.virt.libvirt.driver [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:05:25 compute-0 nova_compute[254092]: 2025-11-25 17:05:25.016 254096 DEBUG nova.virt.libvirt.driver [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:05:25 compute-0 nova_compute[254092]: 2025-11-25 17:05:25.016 254096 DEBUG nova.virt.libvirt.driver [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:05:25 compute-0 nova_compute[254092]: 2025-11-25 17:05:25.017 254096 DEBUG nova.virt.libvirt.driver [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:05:25 compute-0 nova_compute[254092]: 2025-11-25 17:05:25.017 254096 DEBUG nova.virt.libvirt.driver [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:05:25 compute-0 nova_compute[254092]: 2025-11-25 17:05:25.018 254096 DEBUG nova.virt.libvirt.driver [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:05:25 compute-0 nova_compute[254092]: 2025-11-25 17:05:25.042 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b747b045-786f-49a8-907c-cc222a07fa05] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:05:25 compute-0 nova_compute[254092]: 2025-11-25 17:05:25.043 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090324.984854, b747b045-786f-49a8-907c-cc222a07fa05 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:05:25 compute-0 nova_compute[254092]: 2025-11-25 17:05:25.043 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b747b045-786f-49a8-907c-cc222a07fa05] VM Paused (Lifecycle Event)
Nov 25 17:05:25 compute-0 nova_compute[254092]: 2025-11-25 17:05:25.058 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:05:25 compute-0 nova_compute[254092]: 2025-11-25 17:05:25.061 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090324.9915025, b747b045-786f-49a8-907c-cc222a07fa05 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:05:25 compute-0 nova_compute[254092]: 2025-11-25 17:05:25.061 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b747b045-786f-49a8-907c-cc222a07fa05] VM Resumed (Lifecycle Event)
Nov 25 17:05:25 compute-0 nova_compute[254092]: 2025-11-25 17:05:25.077 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:05:25 compute-0 nova_compute[254092]: 2025-11-25 17:05:25.081 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:05:25 compute-0 nova_compute[254092]: 2025-11-25 17:05:25.108 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: b747b045-786f-49a8-907c-cc222a07fa05] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:05:25 compute-0 nova_compute[254092]: 2025-11-25 17:05:25.121 254096 INFO nova.compute.manager [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Took 7.21 seconds to spawn the instance on the hypervisor.
Nov 25 17:05:25 compute-0 nova_compute[254092]: 2025-11-25 17:05:25.122 254096 DEBUG nova.compute.manager [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:05:25 compute-0 nova_compute[254092]: 2025-11-25 17:05:25.172 254096 INFO nova.compute.manager [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Took 8.14 seconds to build instance.
Nov 25 17:05:25 compute-0 nova_compute[254092]: 2025-11-25 17:05:25.190 254096 DEBUG oslo_concurrency.lockutils [None req-b668c2b1-2bdc-4128-b348-88d9d10320c8 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "b747b045-786f-49a8-907c-cc222a07fa05" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.217s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:05:25 compute-0 nova_compute[254092]: 2025-11-25 17:05:25.278 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:25 compute-0 nova_compute[254092]: 2025-11-25 17:05:25.522 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:05:25 compute-0 ceph-mon[74985]: pgmap v2456: 321 pgs: 321 active+clean; 167 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 25 17:05:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2457: 321 pgs: 321 active+clean; 167 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Nov 25 17:05:26 compute-0 nova_compute[254092]: 2025-11-25 17:05:26.459 254096 DEBUG nova.compute.manager [req-9f6fb995-d509-4e07-b048-32cc58d20957 req-51943d5e-23b9-4c5e-b2a4-d31e92afc486 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Received event network-vif-plugged-4f707331-3a0e-47b6-98ee-569db81bd594 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:05:26 compute-0 nova_compute[254092]: 2025-11-25 17:05:26.460 254096 DEBUG oslo_concurrency.lockutils [req-9f6fb995-d509-4e07-b048-32cc58d20957 req-51943d5e-23b9-4c5e-b2a4-d31e92afc486 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b747b045-786f-49a8-907c-cc222a07fa05-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:05:26 compute-0 nova_compute[254092]: 2025-11-25 17:05:26.460 254096 DEBUG oslo_concurrency.lockutils [req-9f6fb995-d509-4e07-b048-32cc58d20957 req-51943d5e-23b9-4c5e-b2a4-d31e92afc486 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b747b045-786f-49a8-907c-cc222a07fa05-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:05:26 compute-0 nova_compute[254092]: 2025-11-25 17:05:26.460 254096 DEBUG oslo_concurrency.lockutils [req-9f6fb995-d509-4e07-b048-32cc58d20957 req-51943d5e-23b9-4c5e-b2a4-d31e92afc486 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b747b045-786f-49a8-907c-cc222a07fa05-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:05:26 compute-0 nova_compute[254092]: 2025-11-25 17:05:26.460 254096 DEBUG nova.compute.manager [req-9f6fb995-d509-4e07-b048-32cc58d20957 req-51943d5e-23b9-4c5e-b2a4-d31e92afc486 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] No waiting events found dispatching network-vif-plugged-4f707331-3a0e-47b6-98ee-569db81bd594 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:05:26 compute-0 nova_compute[254092]: 2025-11-25 17:05:26.461 254096 WARNING nova.compute.manager [req-9f6fb995-d509-4e07-b048-32cc58d20957 req-51943d5e-23b9-4c5e-b2a4-d31e92afc486 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Received unexpected event network-vif-plugged-4f707331-3a0e-47b6-98ee-569db81bd594 for instance with vm_state active and task_state None.
Nov 25 17:05:27 compute-0 nova_compute[254092]: 2025-11-25 17:05:27.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:05:27 compute-0 ceph-mon[74985]: pgmap v2457: 321 pgs: 321 active+clean; 167 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Nov 25 17:05:28 compute-0 nova_compute[254092]: 2025-11-25 17:05:28.276 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2458: 321 pgs: 321 active+clean; 167 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 347 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Nov 25 17:05:28 compute-0 nova_compute[254092]: 2025-11-25 17:05:28.830 254096 DEBUG oslo_concurrency.lockutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "cbf5c589-9701-44c9-9600-739675853610" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:05:28 compute-0 nova_compute[254092]: 2025-11-25 17:05:28.830 254096 DEBUG oslo_concurrency.lockutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "cbf5c589-9701-44c9-9600-739675853610" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:05:28 compute-0 nova_compute[254092]: 2025-11-25 17:05:28.846 254096 DEBUG nova.compute.manager [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 17:05:28 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:05:28 compute-0 nova_compute[254092]: 2025-11-25 17:05:28.920 254096 DEBUG oslo_concurrency.lockutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:05:28 compute-0 nova_compute[254092]: 2025-11-25 17:05:28.920 254096 DEBUG oslo_concurrency.lockutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:05:28 compute-0 nova_compute[254092]: 2025-11-25 17:05:28.927 254096 DEBUG nova.virt.hardware [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 17:05:28 compute-0 nova_compute[254092]: 2025-11-25 17:05:28.927 254096 INFO nova.compute.claims [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Claim successful on node compute-0.ctlplane.example.com
Nov 25 17:05:29 compute-0 nova_compute[254092]: 2025-11-25 17:05:29.107 254096 DEBUG oslo_concurrency.processutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:05:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:05:29 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1761654020' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:05:29 compute-0 nova_compute[254092]: 2025-11-25 17:05:29.567 254096 DEBUG oslo_concurrency.processutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:05:29 compute-0 nova_compute[254092]: 2025-11-25 17:05:29.576 254096 DEBUG nova.compute.provider_tree [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:05:29 compute-0 nova_compute[254092]: 2025-11-25 17:05:29.592 254096 DEBUG nova.scheduler.client.report [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:05:29 compute-0 nova_compute[254092]: 2025-11-25 17:05:29.614 254096 DEBUG oslo_concurrency.lockutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:05:29 compute-0 nova_compute[254092]: 2025-11-25 17:05:29.616 254096 DEBUG nova.compute.manager [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 17:05:29 compute-0 nova_compute[254092]: 2025-11-25 17:05:29.669 254096 DEBUG nova.compute.manager [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 17:05:29 compute-0 nova_compute[254092]: 2025-11-25 17:05:29.670 254096 DEBUG nova.network.neutron [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 17:05:29 compute-0 nova_compute[254092]: 2025-11-25 17:05:29.689 254096 INFO nova.virt.libvirt.driver [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 17:05:29 compute-0 nova_compute[254092]: 2025-11-25 17:05:29.706 254096 DEBUG nova.compute.manager [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 17:05:29 compute-0 nova_compute[254092]: 2025-11-25 17:05:29.801 254096 DEBUG nova.compute.manager [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 17:05:29 compute-0 nova_compute[254092]: 2025-11-25 17:05:29.804 254096 DEBUG nova.virt.libvirt.driver [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 17:05:29 compute-0 nova_compute[254092]: 2025-11-25 17:05:29.805 254096 INFO nova.virt.libvirt.driver [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Creating image(s)
Nov 25 17:05:29 compute-0 nova_compute[254092]: 2025-11-25 17:05:29.843 254096 DEBUG nova.storage.rbd_utils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image cbf5c589-9701-44c9-9600-739675853610_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:05:29 compute-0 nova_compute[254092]: 2025-11-25 17:05:29.882 254096 DEBUG nova.storage.rbd_utils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image cbf5c589-9701-44c9-9600-739675853610_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:05:29 compute-0 ceph-mon[74985]: pgmap v2458: 321 pgs: 321 active+clean; 167 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 347 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Nov 25 17:05:29 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1761654020' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:05:29 compute-0 nova_compute[254092]: 2025-11-25 17:05:29.930 254096 DEBUG nova.storage.rbd_utils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image cbf5c589-9701-44c9-9600-739675853610_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:05:29 compute-0 nova_compute[254092]: 2025-11-25 17:05:29.936 254096 DEBUG oslo_concurrency.processutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:05:30 compute-0 nova_compute[254092]: 2025-11-25 17:05:30.048 254096 DEBUG oslo_concurrency.processutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.112s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:05:30 compute-0 nova_compute[254092]: 2025-11-25 17:05:30.049 254096 DEBUG oslo_concurrency.lockutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:05:30 compute-0 nova_compute[254092]: 2025-11-25 17:05:30.050 254096 DEBUG oslo_concurrency.lockutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:05:30 compute-0 nova_compute[254092]: 2025-11-25 17:05:30.051 254096 DEBUG oslo_concurrency.lockutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:05:30 compute-0 nova_compute[254092]: 2025-11-25 17:05:30.081 254096 DEBUG nova.storage.rbd_utils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image cbf5c589-9701-44c9-9600-739675853610_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:05:30 compute-0 nova_compute[254092]: 2025-11-25 17:05:30.087 254096 DEBUG oslo_concurrency.processutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 cbf5c589-9701-44c9-9600-739675853610_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:05:30 compute-0 nova_compute[254092]: 2025-11-25 17:05:30.304 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2459: 321 pgs: 321 active+clean; 167 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 737 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Nov 25 17:05:30 compute-0 nova_compute[254092]: 2025-11-25 17:05:30.409 254096 DEBUG oslo_concurrency.processutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 cbf5c589-9701-44c9-9600-739675853610_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.322s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:05:30 compute-0 nova_compute[254092]: 2025-11-25 17:05:30.489 254096 DEBUG nova.storage.rbd_utils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] resizing rbd image cbf5c589-9701-44c9-9600-739675853610_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 17:05:30 compute-0 nova_compute[254092]: 2025-11-25 17:05:30.528 254096 DEBUG nova.policy [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '72f29c5d97464520bee19cb128258fd5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f19d2d0776e84c3c99c640c0b7ed3743', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 17:05:30 compute-0 nova_compute[254092]: 2025-11-25 17:05:30.579 254096 DEBUG nova.objects.instance [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lazy-loading 'migration_context' on Instance uuid cbf5c589-9701-44c9-9600-739675853610 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:05:30 compute-0 nova_compute[254092]: 2025-11-25 17:05:30.589 254096 DEBUG nova.virt.libvirt.driver [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 17:05:30 compute-0 nova_compute[254092]: 2025-11-25 17:05:30.589 254096 DEBUG nova.virt.libvirt.driver [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Ensure instance console log exists: /var/lib/nova/instances/cbf5c589-9701-44c9-9600-739675853610/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 17:05:30 compute-0 nova_compute[254092]: 2025-11-25 17:05:30.589 254096 DEBUG oslo_concurrency.lockutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:05:30 compute-0 nova_compute[254092]: 2025-11-25 17:05:30.590 254096 DEBUG oslo_concurrency.lockutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:05:30 compute-0 nova_compute[254092]: 2025-11-25 17:05:30.590 254096 DEBUG oslo_concurrency.lockutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:05:31 compute-0 ceph-mon[74985]: pgmap v2459: 321 pgs: 321 active+clean; 167 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 737 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Nov 25 17:05:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2460: 321 pgs: 321 active+clean; 197 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.6 MiB/s wr, 80 op/s
Nov 25 17:05:32 compute-0 nova_compute[254092]: 2025-11-25 17:05:32.590 254096 DEBUG nova.network.neutron [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Successfully created port: 332ae922-3280-48c2-8889-d1ab181a43db _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 17:05:33 compute-0 nova_compute[254092]: 2025-11-25 17:05:33.278 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:33 compute-0 nova_compute[254092]: 2025-11-25 17:05:33.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:05:33 compute-0 nova_compute[254092]: 2025-11-25 17:05:33.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:05:33 compute-0 nova_compute[254092]: 2025-11-25 17:05:33.817 254096 DEBUG nova.network.neutron [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Successfully updated port: 332ae922-3280-48c2-8889-d1ab181a43db _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:05:33 compute-0 nova_compute[254092]: 2025-11-25 17:05:33.832 254096 DEBUG oslo_concurrency.lockutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "refresh_cache-cbf5c589-9701-44c9-9600-739675853610" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:05:33 compute-0 nova_compute[254092]: 2025-11-25 17:05:33.833 254096 DEBUG oslo_concurrency.lockutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquired lock "refresh_cache-cbf5c589-9701-44c9-9600-739675853610" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:05:33 compute-0 nova_compute[254092]: 2025-11-25 17:05:33.833 254096 DEBUG nova.network.neutron [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 17:05:33 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:05:33 compute-0 ceph-mon[74985]: pgmap v2460: 321 pgs: 321 active+clean; 197 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.6 MiB/s wr, 80 op/s
Nov 25 17:05:33 compute-0 nova_compute[254092]: 2025-11-25 17:05:33.934 254096 DEBUG nova.compute.manager [req-b06c40c8-70ec-4852-a649-3e3687c592bb req-acddc051-a60b-4d7a-89cf-cd2b2f7fef81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Received event network-changed-332ae922-3280-48c2-8889-d1ab181a43db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:05:33 compute-0 nova_compute[254092]: 2025-11-25 17:05:33.934 254096 DEBUG nova.compute.manager [req-b06c40c8-70ec-4852-a649-3e3687c592bb req-acddc051-a60b-4d7a-89cf-cd2b2f7fef81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Refreshing instance network info cache due to event network-changed-332ae922-3280-48c2-8889-d1ab181a43db. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:05:33 compute-0 nova_compute[254092]: 2025-11-25 17:05:33.934 254096 DEBUG oslo_concurrency.lockutils [req-b06c40c8-70ec-4852-a649-3e3687c592bb req-acddc051-a60b-4d7a-89cf-cd2b2f7fef81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-cbf5c589-9701-44c9-9600-739675853610" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:05:34 compute-0 nova_compute[254092]: 2025-11-25 17:05:34.285 254096 DEBUG nova.network.neutron [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:05:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2461: 321 pgs: 321 active+clean; 197 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 77 op/s
Nov 25 17:05:34 compute-0 nova_compute[254092]: 2025-11-25 17:05:34.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:05:35 compute-0 nova_compute[254092]: 2025-11-25 17:05:35.306 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:35 compute-0 nova_compute[254092]: 2025-11-25 17:05:35.564 254096 DEBUG nova.network.neutron [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Updating instance_info_cache with network_info: [{"id": "332ae922-3280-48c2-8889-d1ab181a43db", "address": "fa:16:3e:98:d4:72", "network": {"id": "bf9312b9-f4d2-496f-a143-7586e12fbee3", "bridge": "br-int", "label": "tempest-network-smoke--1513117463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap332ae922-32", "ovs_interfaceid": "332ae922-3280-48c2-8889-d1ab181a43db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:05:35 compute-0 nova_compute[254092]: 2025-11-25 17:05:35.582 254096 DEBUG oslo_concurrency.lockutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Releasing lock "refresh_cache-cbf5c589-9701-44c9-9600-739675853610" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:05:35 compute-0 nova_compute[254092]: 2025-11-25 17:05:35.582 254096 DEBUG nova.compute.manager [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Instance network_info: |[{"id": "332ae922-3280-48c2-8889-d1ab181a43db", "address": "fa:16:3e:98:d4:72", "network": {"id": "bf9312b9-f4d2-496f-a143-7586e12fbee3", "bridge": "br-int", "label": "tempest-network-smoke--1513117463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap332ae922-32", "ovs_interfaceid": "332ae922-3280-48c2-8889-d1ab181a43db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 17:05:35 compute-0 nova_compute[254092]: 2025-11-25 17:05:35.582 254096 DEBUG oslo_concurrency.lockutils [req-b06c40c8-70ec-4852-a649-3e3687c592bb req-acddc051-a60b-4d7a-89cf-cd2b2f7fef81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-cbf5c589-9701-44c9-9600-739675853610" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:05:35 compute-0 nova_compute[254092]: 2025-11-25 17:05:35.583 254096 DEBUG nova.network.neutron [req-b06c40c8-70ec-4852-a649-3e3687c592bb req-acddc051-a60b-4d7a-89cf-cd2b2f7fef81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Refreshing network info cache for port 332ae922-3280-48c2-8889-d1ab181a43db _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:05:35 compute-0 nova_compute[254092]: 2025-11-25 17:05:35.585 254096 DEBUG nova.virt.libvirt.driver [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Start _get_guest_xml network_info=[{"id": "332ae922-3280-48c2-8889-d1ab181a43db", "address": "fa:16:3e:98:d4:72", "network": {"id": "bf9312b9-f4d2-496f-a143-7586e12fbee3", "bridge": "br-int", "label": "tempest-network-smoke--1513117463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap332ae922-32", "ovs_interfaceid": "332ae922-3280-48c2-8889-d1ab181a43db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 17:05:35 compute-0 nova_compute[254092]: 2025-11-25 17:05:35.589 254096 WARNING nova.virt.libvirt.driver [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:05:35 compute-0 nova_compute[254092]: 2025-11-25 17:05:35.593 254096 DEBUG nova.virt.libvirt.host [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 17:05:35 compute-0 nova_compute[254092]: 2025-11-25 17:05:35.594 254096 DEBUG nova.virt.libvirt.host [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 17:05:35 compute-0 nova_compute[254092]: 2025-11-25 17:05:35.600 254096 DEBUG nova.virt.libvirt.host [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 17:05:35 compute-0 nova_compute[254092]: 2025-11-25 17:05:35.600 254096 DEBUG nova.virt.libvirt.host [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 17:05:35 compute-0 nova_compute[254092]: 2025-11-25 17:05:35.601 254096 DEBUG nova.virt.libvirt.driver [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 17:05:35 compute-0 nova_compute[254092]: 2025-11-25 17:05:35.601 254096 DEBUG nova.virt.hardware [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 17:05:35 compute-0 nova_compute[254092]: 2025-11-25 17:05:35.601 254096 DEBUG nova.virt.hardware [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 17:05:35 compute-0 nova_compute[254092]: 2025-11-25 17:05:35.602 254096 DEBUG nova.virt.hardware [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 17:05:35 compute-0 nova_compute[254092]: 2025-11-25 17:05:35.602 254096 DEBUG nova.virt.hardware [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 17:05:35 compute-0 nova_compute[254092]: 2025-11-25 17:05:35.602 254096 DEBUG nova.virt.hardware [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 17:05:35 compute-0 nova_compute[254092]: 2025-11-25 17:05:35.602 254096 DEBUG nova.virt.hardware [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 17:05:35 compute-0 nova_compute[254092]: 2025-11-25 17:05:35.603 254096 DEBUG nova.virt.hardware [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 17:05:35 compute-0 nova_compute[254092]: 2025-11-25 17:05:35.603 254096 DEBUG nova.virt.hardware [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 17:05:35 compute-0 nova_compute[254092]: 2025-11-25 17:05:35.603 254096 DEBUG nova.virt.hardware [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 17:05:35 compute-0 nova_compute[254092]: 2025-11-25 17:05:35.603 254096 DEBUG nova.virt.hardware [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 17:05:35 compute-0 nova_compute[254092]: 2025-11-25 17:05:35.604 254096 DEBUG nova.virt.hardware [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 17:05:35 compute-0 nova_compute[254092]: 2025-11-25 17:05:35.606 254096 DEBUG oslo_concurrency.processutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:05:35 compute-0 ceph-mon[74985]: pgmap v2461: 321 pgs: 321 active+clean; 197 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 77 op/s
Nov 25 17:05:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:05:36 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/877801198' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:05:36 compute-0 nova_compute[254092]: 2025-11-25 17:05:36.058 254096 DEBUG oslo_concurrency.processutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:05:36 compute-0 nova_compute[254092]: 2025-11-25 17:05:36.084 254096 DEBUG nova.storage.rbd_utils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image cbf5c589-9701-44c9-9600-739675853610_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:05:36 compute-0 nova_compute[254092]: 2025-11-25 17:05:36.088 254096 DEBUG oslo_concurrency.processutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:05:36 compute-0 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Nov 25 17:05:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2462: 321 pgs: 321 active+clean; 213 MiB data, 923 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Nov 25 17:05:36 compute-0 nova_compute[254092]: 2025-11-25 17:05:36.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:05:36 compute-0 nova_compute[254092]: 2025-11-25 17:05:36.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:05:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:05:36 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2040764990' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:05:36 compute-0 nova_compute[254092]: 2025-11-25 17:05:36.609 254096 DEBUG oslo_concurrency.processutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:05:36 compute-0 nova_compute[254092]: 2025-11-25 17:05:36.611 254096 DEBUG nova.virt.libvirt.vif [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:05:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1342099728',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1342099728',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-804365052-acc',id=123,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM0UWFHQnhc6ntJ2USd1Ng03MEqRsDa8bLN+NceDu38bBM1Ko27FOUngPC7ayjW27qy5z0eeftBbrxj3YI6kVPDUfNU6Nlwiw/u+/fuCOrRMF8RUzgbS8zeTV5MDACNtGg==',key_name='tempest-TestSecurityGroupsBasicOps-278470401',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f19d2d0776e84c3c99c640c0b7ed3743',ramdisk_id='',reservation_id='r-jw45rb9c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-804365052',owner_user_name='tempest-TestSecurityGroupsBasicOps-804365052-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:05:29Z,user_data=None,user_id='72f29c5d97464520bee19cb128258fd5',uuid=cbf5c589-9701-44c9-9600-739675853610,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "332ae922-3280-48c2-8889-d1ab181a43db", "address": "fa:16:3e:98:d4:72", "network": {"id": "bf9312b9-f4d2-496f-a143-7586e12fbee3", "bridge": "br-int", "label": "tempest-network-smoke--1513117463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap332ae922-32", "ovs_interfaceid": "332ae922-3280-48c2-8889-d1ab181a43db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:05:36 compute-0 nova_compute[254092]: 2025-11-25 17:05:36.612 254096 DEBUG nova.network.os_vif_util [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converting VIF {"id": "332ae922-3280-48c2-8889-d1ab181a43db", "address": "fa:16:3e:98:d4:72", "network": {"id": "bf9312b9-f4d2-496f-a143-7586e12fbee3", "bridge": "br-int", "label": "tempest-network-smoke--1513117463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap332ae922-32", "ovs_interfaceid": "332ae922-3280-48c2-8889-d1ab181a43db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:05:36 compute-0 nova_compute[254092]: 2025-11-25 17:05:36.613 254096 DEBUG nova.network.os_vif_util [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:d4:72,bridge_name='br-int',has_traffic_filtering=True,id=332ae922-3280-48c2-8889-d1ab181a43db,network=Network(bf9312b9-f4d2-496f-a143-7586e12fbee3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap332ae922-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:05:36 compute-0 nova_compute[254092]: 2025-11-25 17:05:36.614 254096 DEBUG nova.objects.instance [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lazy-loading 'pci_devices' on Instance uuid cbf5c589-9701-44c9-9600-739675853610 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:05:36 compute-0 nova_compute[254092]: 2025-11-25 17:05:36.626 254096 DEBUG nova.virt.libvirt.driver [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] End _get_guest_xml xml=<domain type="kvm">
Nov 25 17:05:36 compute-0 nova_compute[254092]:   <uuid>cbf5c589-9701-44c9-9600-739675853610</uuid>
Nov 25 17:05:36 compute-0 nova_compute[254092]:   <name>instance-0000007b</name>
Nov 25 17:05:36 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 17:05:36 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 17:05:36 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:05:36 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1342099728</nova:name>
Nov 25 17:05:36 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 17:05:35</nova:creationTime>
Nov 25 17:05:36 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 17:05:36 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 17:05:36 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 17:05:36 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 17:05:36 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:05:36 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 17:05:36 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 17:05:36 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 17:05:36 compute-0 nova_compute[254092]:         <nova:user uuid="72f29c5d97464520bee19cb128258fd5">tempest-TestSecurityGroupsBasicOps-804365052-project-member</nova:user>
Nov 25 17:05:36 compute-0 nova_compute[254092]:         <nova:project uuid="f19d2d0776e84c3c99c640c0b7ed3743">tempest-TestSecurityGroupsBasicOps-804365052</nova:project>
Nov 25 17:05:36 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 17:05:36 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 17:05:36 compute-0 nova_compute[254092]:         <nova:port uuid="332ae922-3280-48c2-8889-d1ab181a43db">
Nov 25 17:05:36 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 17:05:36 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 17:05:36 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:05:36 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 17:05:36 compute-0 nova_compute[254092]:     <system>
Nov 25 17:05:36 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 17:05:36 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 17:05:36 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:05:36 compute-0 nova_compute[254092]:       <entry name="serial">cbf5c589-9701-44c9-9600-739675853610</entry>
Nov 25 17:05:36 compute-0 nova_compute[254092]:       <entry name="uuid">cbf5c589-9701-44c9-9600-739675853610</entry>
Nov 25 17:05:36 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     </system>
Nov 25 17:05:36 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:05:36 compute-0 nova_compute[254092]:   <os>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:   </os>
Nov 25 17:05:36 compute-0 nova_compute[254092]:   <features>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:   </features>
Nov 25 17:05:36 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 17:05:36 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:05:36 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 17:05:36 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:05:36 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 17:05:36 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/cbf5c589-9701-44c9-9600-739675853610_disk">
Nov 25 17:05:36 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:       </source>
Nov 25 17:05:36 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:05:36 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:05:36 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 17:05:36 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/cbf5c589-9701-44c9-9600-739675853610_disk.config">
Nov 25 17:05:36 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:       </source>
Nov 25 17:05:36 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:05:36 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:05:36 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 17:05:36 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:98:d4:72"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:       <target dev="tap332ae922-32"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 17:05:36 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/cbf5c589-9701-44c9-9600-739675853610/console.log" append="off"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     <video>
Nov 25 17:05:36 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     </video>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 17:05:36 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 17:05:36 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 17:05:36 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:05:36 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:05:36 compute-0 nova_compute[254092]: </domain>
Nov 25 17:05:36 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 17:05:36 compute-0 nova_compute[254092]: 2025-11-25 17:05:36.628 254096 DEBUG nova.compute.manager [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Preparing to wait for external event network-vif-plugged-332ae922-3280-48c2-8889-d1ab181a43db prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 17:05:36 compute-0 nova_compute[254092]: 2025-11-25 17:05:36.628 254096 DEBUG oslo_concurrency.lockutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "cbf5c589-9701-44c9-9600-739675853610-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:05:36 compute-0 nova_compute[254092]: 2025-11-25 17:05:36.629 254096 DEBUG oslo_concurrency.lockutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "cbf5c589-9701-44c9-9600-739675853610-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:05:36 compute-0 nova_compute[254092]: 2025-11-25 17:05:36.630 254096 DEBUG oslo_concurrency.lockutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "cbf5c589-9701-44c9-9600-739675853610-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:05:36 compute-0 nova_compute[254092]: 2025-11-25 17:05:36.632 254096 DEBUG nova.virt.libvirt.vif [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:05:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1342099728',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1342099728',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-804365052-acc',id=123,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM0UWFHQnhc6ntJ2USd1Ng03MEqRsDa8bLN+NceDu38bBM1Ko27FOUngPC7ayjW27qy5z0eeftBbrxj3YI6kVPDUfNU6Nlwiw/u+/fuCOrRMF8RUzgbS8zeTV5MDACNtGg==',key_name='tempest-TestSecurityGroupsBasicOps-278470401',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f19d2d0776e84c3c99c640c0b7ed3743',ramdisk_id='',reservation_id='r-jw45rb9c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-804365052',owner_user_name='tempest-TestSecurityGroupsBasicOps-804365052-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:05:29Z,user_data=None,user_id='72f29c5d97464520bee19cb128258fd5',uuid=cbf5c589-9701-44c9-9600-739675853610,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "332ae922-3280-48c2-8889-d1ab181a43db", "address": "fa:16:3e:98:d4:72", "network": {"id": "bf9312b9-f4d2-496f-a143-7586e12fbee3", "bridge": "br-int", "label": "tempest-network-smoke--1513117463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap332ae922-32", "ovs_interfaceid": "332ae922-3280-48c2-8889-d1ab181a43db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:05:36 compute-0 nova_compute[254092]: 2025-11-25 17:05:36.633 254096 DEBUG nova.network.os_vif_util [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converting VIF {"id": "332ae922-3280-48c2-8889-d1ab181a43db", "address": "fa:16:3e:98:d4:72", "network": {"id": "bf9312b9-f4d2-496f-a143-7586e12fbee3", "bridge": "br-int", "label": "tempest-network-smoke--1513117463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap332ae922-32", "ovs_interfaceid": "332ae922-3280-48c2-8889-d1ab181a43db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:05:36 compute-0 nova_compute[254092]: 2025-11-25 17:05:36.635 254096 DEBUG nova.network.os_vif_util [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:d4:72,bridge_name='br-int',has_traffic_filtering=True,id=332ae922-3280-48c2-8889-d1ab181a43db,network=Network(bf9312b9-f4d2-496f-a143-7586e12fbee3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap332ae922-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:05:36 compute-0 nova_compute[254092]: 2025-11-25 17:05:36.636 254096 DEBUG os_vif [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:d4:72,bridge_name='br-int',has_traffic_filtering=True,id=332ae922-3280-48c2-8889-d1ab181a43db,network=Network(bf9312b9-f4d2-496f-a143-7586e12fbee3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap332ae922-32') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:05:36 compute-0 nova_compute[254092]: 2025-11-25 17:05:36.637 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:36 compute-0 nova_compute[254092]: 2025-11-25 17:05:36.638 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:05:36 compute-0 nova_compute[254092]: 2025-11-25 17:05:36.640 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:05:36 compute-0 nova_compute[254092]: 2025-11-25 17:05:36.645 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:36 compute-0 nova_compute[254092]: 2025-11-25 17:05:36.645 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap332ae922-32, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:05:36 compute-0 nova_compute[254092]: 2025-11-25 17:05:36.647 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap332ae922-32, col_values=(('external_ids', {'iface-id': '332ae922-3280-48c2-8889-d1ab181a43db', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:98:d4:72', 'vm-uuid': 'cbf5c589-9701-44c9-9600-739675853610'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:05:36 compute-0 nova_compute[254092]: 2025-11-25 17:05:36.650 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:36 compute-0 NetworkManager[48891]: <info>  [1764090336.6507] manager: (tap332ae922-32): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/524)
Nov 25 17:05:36 compute-0 nova_compute[254092]: 2025-11-25 17:05:36.654 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:05:36 compute-0 nova_compute[254092]: 2025-11-25 17:05:36.658 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:36 compute-0 nova_compute[254092]: 2025-11-25 17:05:36.659 254096 INFO os_vif [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:d4:72,bridge_name='br-int',has_traffic_filtering=True,id=332ae922-3280-48c2-8889-d1ab181a43db,network=Network(bf9312b9-f4d2-496f-a143-7586e12fbee3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap332ae922-32')
Nov 25 17:05:36 compute-0 nova_compute[254092]: 2025-11-25 17:05:36.845 254096 DEBUG nova.virt.libvirt.driver [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:05:36 compute-0 nova_compute[254092]: 2025-11-25 17:05:36.846 254096 DEBUG nova.virt.libvirt.driver [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:05:36 compute-0 nova_compute[254092]: 2025-11-25 17:05:36.846 254096 DEBUG nova.virt.libvirt.driver [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] No VIF found with MAC fa:16:3e:98:d4:72, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:05:36 compute-0 nova_compute[254092]: 2025-11-25 17:05:36.846 254096 INFO nova.virt.libvirt.driver [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Using config drive
Nov 25 17:05:36 compute-0 nova_compute[254092]: 2025-11-25 17:05:36.864 254096 DEBUG nova.storage.rbd_utils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image cbf5c589-9701-44c9-9600-739675853610_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:05:36 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/877801198' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:05:36 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2040764990' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:05:37 compute-0 nova_compute[254092]: 2025-11-25 17:05:37.053 254096 DEBUG nova.network.neutron [req-b06c40c8-70ec-4852-a649-3e3687c592bb req-acddc051-a60b-4d7a-89cf-cd2b2f7fef81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Updated VIF entry in instance network info cache for port 332ae922-3280-48c2-8889-d1ab181a43db. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:05:37 compute-0 nova_compute[254092]: 2025-11-25 17:05:37.054 254096 DEBUG nova.network.neutron [req-b06c40c8-70ec-4852-a649-3e3687c592bb req-acddc051-a60b-4d7a-89cf-cd2b2f7fef81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Updating instance_info_cache with network_info: [{"id": "332ae922-3280-48c2-8889-d1ab181a43db", "address": "fa:16:3e:98:d4:72", "network": {"id": "bf9312b9-f4d2-496f-a143-7586e12fbee3", "bridge": "br-int", "label": "tempest-network-smoke--1513117463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap332ae922-32", "ovs_interfaceid": "332ae922-3280-48c2-8889-d1ab181a43db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:05:37 compute-0 nova_compute[254092]: 2025-11-25 17:05:37.066 254096 DEBUG oslo_concurrency.lockutils [req-b06c40c8-70ec-4852-a649-3e3687c592bb req-acddc051-a60b-4d7a-89cf-cd2b2f7fef81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-cbf5c589-9701-44c9-9600-739675853610" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:05:37 compute-0 ovn_controller[153477]: 2025-11-25T17:05:37Z|00144|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:15:5c:fc 10.100.0.26
Nov 25 17:05:37 compute-0 ovn_controller[153477]: 2025-11-25T17:05:37Z|00145|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:15:5c:fc 10.100.0.26
Nov 25 17:05:37 compute-0 nova_compute[254092]: 2025-11-25 17:05:37.490 254096 INFO nova.virt.libvirt.driver [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Creating config drive at /var/lib/nova/instances/cbf5c589-9701-44c9-9600-739675853610/disk.config
Nov 25 17:05:37 compute-0 nova_compute[254092]: 2025-11-25 17:05:37.495 254096 DEBUG oslo_concurrency.processutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cbf5c589-9701-44c9-9600-739675853610/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo8jtavxj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:05:37 compute-0 nova_compute[254092]: 2025-11-25 17:05:37.533 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:05:37 compute-0 nova_compute[254092]: 2025-11-25 17:05:37.553 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:05:37 compute-0 nova_compute[254092]: 2025-11-25 17:05:37.553 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:05:37 compute-0 nova_compute[254092]: 2025-11-25 17:05:37.554 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:05:37 compute-0 nova_compute[254092]: 2025-11-25 17:05:37.554 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:05:37 compute-0 nova_compute[254092]: 2025-11-25 17:05:37.554 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:05:37 compute-0 nova_compute[254092]: 2025-11-25 17:05:37.646 254096 DEBUG oslo_concurrency.processutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cbf5c589-9701-44c9-9600-739675853610/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo8jtavxj" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:05:37 compute-0 nova_compute[254092]: 2025-11-25 17:05:37.670 254096 DEBUG nova.storage.rbd_utils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image cbf5c589-9701-44c9-9600-739675853610_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:05:37 compute-0 nova_compute[254092]: 2025-11-25 17:05:37.676 254096 DEBUG oslo_concurrency.processutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cbf5c589-9701-44c9-9600-739675853610/disk.config cbf5c589-9701-44c9-9600-739675853610_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:05:37 compute-0 nova_compute[254092]: 2025-11-25 17:05:37.959 254096 DEBUG oslo_concurrency.processutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cbf5c589-9701-44c9-9600-739675853610/disk.config cbf5c589-9701-44c9-9600-739675853610_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.283s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:05:37 compute-0 nova_compute[254092]: 2025-11-25 17:05:37.961 254096 INFO nova.virt.libvirt.driver [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Deleting local config drive /var/lib/nova/instances/cbf5c589-9701-44c9-9600-739675853610/disk.config because it was imported into RBD.
Nov 25 17:05:37 compute-0 ceph-mon[74985]: pgmap v2462: 321 pgs: 321 active+clean; 213 MiB data, 923 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Nov 25 17:05:38 compute-0 kernel: tap332ae922-32: entered promiscuous mode
Nov 25 17:05:38 compute-0 NetworkManager[48891]: <info>  [1764090338.0199] manager: (tap332ae922-32): new Tun device (/org/freedesktop/NetworkManager/Devices/525)
Nov 25 17:05:38 compute-0 ovn_controller[153477]: 2025-11-25T17:05:38Z|01265|binding|INFO|Claiming lport 332ae922-3280-48c2-8889-d1ab181a43db for this chassis.
Nov 25 17:05:38 compute-0 ovn_controller[153477]: 2025-11-25T17:05:38Z|01266|binding|INFO|332ae922-3280-48c2-8889-d1ab181a43db: Claiming fa:16:3e:98:d4:72 10.100.0.8
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.031 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:38 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:05:38 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4201702930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:05:38 compute-0 ovn_controller[153477]: 2025-11-25T17:05:38Z|01267|binding|INFO|Setting lport 332ae922-3280-48c2-8889-d1ab181a43db ovn-installed in OVS
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.055 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.058 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:38 compute-0 systemd-udevd[388664]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.067 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:05:38 compute-0 systemd-machined[216343]: New machine qemu-155-instance-0000007b.
Nov 25 17:05:38 compute-0 NetworkManager[48891]: <info>  [1764090338.0804] device (tap332ae922-32): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:05:38 compute-0 NetworkManager[48891]: <info>  [1764090338.0814] device (tap332ae922-32): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:05:38 compute-0 ovn_controller[153477]: 2025-11-25T17:05:38Z|01268|binding|INFO|Setting lport 332ae922-3280-48c2-8889-d1ab181a43db up in Southbound
Nov 25 17:05:38 compute-0 systemd[1]: Started Virtual Machine qemu-155-instance-0000007b.
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.083 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:98:d4:72 10.100.0.8'], port_security=['fa:16:3e:98:d4:72 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'cbf5c589-9701-44c9-9600-739675853610', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf9312b9-f4d2-496f-a143-7586e12fbee3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f19d2d0776e84c3c99c640c0b7ed3743', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2db36ef1-2db7-4ccb-b8a5-63a9a57f3dde 7a710be2-f756-4f31-8d8e-270c10735b5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=87dee46c-adf2-4475-bb58-d34bae0a4b6e, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=332ae922-3280-48c2-8889-d1ab181a43db) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.084 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 332ae922-3280-48c2-8889-d1ab181a43db in datapath bf9312b9-f4d2-496f-a143-7586e12fbee3 bound to our chassis
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.085 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bf9312b9-f4d2-496f-a143-7586e12fbee3
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.097 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9ec33aad-2028-41d7-af10-d92975701337]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.097 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbf9312b9-f1 in ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.099 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbf9312b9-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.099 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[efb71741-9c0a-43e3-a854-75e9c165370b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.100 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9282fcc9-91f1-4cf8-99a6-eb2af19fa2f7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.117 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[54ef51bb-6be0-4ee2-a221-e5b5e8b8fa9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.145 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c8802ae8-74a1-4349-83ea-651b1b63521d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.184 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d0f08f29-6a21-48ca-88c5-b3397ec4f1cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:05:38 compute-0 NetworkManager[48891]: <info>  [1764090338.1905] manager: (tapbf9312b9-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/526)
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.192 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[546f129c-14ef-4f1e-9995-712f9e951ec8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.210 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.211 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.218 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.219 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.225 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000007b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.226 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000007b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.230 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a2507c12-6ff4-49a9-b737-abe36aefc7cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.234 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[7a48d2e9-2fe8-4663-bfda-4680b38300f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:05:38 compute-0 NetworkManager[48891]: <info>  [1764090338.2665] device (tapbf9312b9-f0): carrier: link connected
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.271 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f18db40f-6b18-4da7-b692-0eb3030c0d5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.289 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d236f983-5ce1-4824-a78a-81258fc2ee60]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbf9312b9-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4f:2f:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 373], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 679585, 'reachable_time': 39201, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 388697, 'error': None, 'target': 'ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.303 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2c1f5c70-c55c-47f7-8caa-4814789f6b2f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4f:2fbb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 679585, 'tstamp': 679585}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 388712, 'error': None, 'target': 'ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.327 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c05c5ecc-2711-4867-aaf0-7e8205baf646]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbf9312b9-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4f:2f:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 373], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 679585, 'reachable_time': 39201, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 388715, 'error': None, 'target': 'ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.360 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fa8b2757-3306-42cc-957a-27873f75f805]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:05:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2463: 321 pgs: 321 active+clean; 213 MiB data, 923 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 81 op/s
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.437 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3872aafc-97b7-4f6a-aa1c-050bb6336b86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.439 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf9312b9-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.439 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.439 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf9312b9-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:05:38 compute-0 NetworkManager[48891]: <info>  [1764090338.4420] manager: (tapbf9312b9-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/527)
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.441 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:38 compute-0 kernel: tapbf9312b9-f0: entered promiscuous mode
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.445 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbf9312b9-f0, col_values=(('external_ids', {'iface-id': 'c21d5ea9-f23a-4c6c-8f75-917f0bc5fa35'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:05:38 compute-0 ovn_controller[153477]: 2025-11-25T17:05:38Z|01269|binding|INFO|Releasing lport c21d5ea9-f23a-4c6c-8f75-917f0bc5fa35 from this chassis (sb_readonly=0)
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.447 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.462 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.463 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bf9312b9-f4d2-496f-a143-7586e12fbee3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bf9312b9-f4d2-496f-a143-7586e12fbee3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.464 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6f8946d8-5f8c-4b7a-86ba-c5d24ef2ebaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.464 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]: global
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-bf9312b9-f4d2-496f-a143-7586e12fbee3
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/bf9312b9-f4d2-496f-a143-7586e12fbee3.pid.haproxy
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID bf9312b9-f4d2-496f-a143-7586e12fbee3
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 17:05:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:05:38.465 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3', 'env', 'PROCESS_TAG=haproxy-bf9312b9-f4d2-496f-a143-7586e12fbee3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bf9312b9-f4d2-496f-a143-7586e12fbee3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.492 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.494 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3269MB free_disk=59.90095520019531GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.494 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.495 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.528 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090338.526368, cbf5c589-9701-44c9-9600-739675853610 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.528 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cbf5c589-9701-44c9-9600-739675853610] VM Started (Lifecycle Event)
Nov 25 17:05:38 compute-0 sudo[388748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:05:38 compute-0 sudo[388748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:05:38 compute-0 sudo[388748]: pam_unix(sudo:session): session closed for user root
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.551 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cbf5c589-9701-44c9-9600-739675853610] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.554 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090338.5277677, cbf5c589-9701-44c9-9600-739675853610 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.555 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cbf5c589-9701-44c9-9600-739675853610] VM Paused (Lifecycle Event)
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.574 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cbf5c589-9701-44c9-9600-739675853610] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.579 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cbf5c589-9701-44c9-9600-739675853610] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:05:38 compute-0 sudo[388776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.601 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cbf5c589-9701-44c9-9600-739675853610] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:05:38 compute-0 sudo[388776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:05:38 compute-0 sudo[388776]: pam_unix(sudo:session): session closed for user root
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.611 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance fb2e8439-c6c2-4a88-9e88-faf385d9a4d9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.611 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance b747b045-786f-49a8-907c-cc222a07fa05 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.611 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance cbf5c589-9701-44c9-9600-739675853610 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.612 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.612 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.629 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing inventories for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.654 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating ProviderTree inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.655 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 17:05:38 compute-0 sudo[388801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:05:38 compute-0 sudo[388801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:05:38 compute-0 sudo[388801]: pam_unix(sudo:session): session closed for user root
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.682 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing aggregate associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.711 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing trait associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 17:05:38 compute-0 sudo[388826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 25 17:05:38 compute-0 sudo[388826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.717 254096 DEBUG nova.compute.manager [req-9ac7543d-8940-4571-8ad4-cd7a4f9e654c req-0762ab8a-b0ed-4cc8-b27a-01ee5c467ba4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Received event network-vif-plugged-332ae922-3280-48c2-8889-d1ab181a43db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.718 254096 DEBUG oslo_concurrency.lockutils [req-9ac7543d-8940-4571-8ad4-cd7a4f9e654c req-0762ab8a-b0ed-4cc8-b27a-01ee5c467ba4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "cbf5c589-9701-44c9-9600-739675853610-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.718 254096 DEBUG oslo_concurrency.lockutils [req-9ac7543d-8940-4571-8ad4-cd7a4f9e654c req-0762ab8a-b0ed-4cc8-b27a-01ee5c467ba4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "cbf5c589-9701-44c9-9600-739675853610-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.719 254096 DEBUG oslo_concurrency.lockutils [req-9ac7543d-8940-4571-8ad4-cd7a4f9e654c req-0762ab8a-b0ed-4cc8-b27a-01ee5c467ba4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "cbf5c589-9701-44c9-9600-739675853610-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.719 254096 DEBUG nova.compute.manager [req-9ac7543d-8940-4571-8ad4-cd7a4f9e654c req-0762ab8a-b0ed-4cc8-b27a-01ee5c467ba4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Processing event network-vif-plugged-332ae922-3280-48c2-8889-d1ab181a43db _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.719 254096 DEBUG nova.compute.manager [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.737 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090338.729908, cbf5c589-9701-44c9-9600-739675853610 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.737 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cbf5c589-9701-44c9-9600-739675853610] VM Resumed (Lifecycle Event)
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.746 254096 DEBUG nova.virt.libvirt.driver [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.753 254096 INFO nova.virt.libvirt.driver [-] [instance: cbf5c589-9701-44c9-9600-739675853610] Instance spawned successfully.
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.753 254096 DEBUG nova.virt.libvirt.driver [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.779 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cbf5c589-9701-44c9-9600-739675853610] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.784 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cbf5c589-9701-44c9-9600-739675853610] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.787 254096 DEBUG nova.virt.libvirt.driver [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.788 254096 DEBUG nova.virt.libvirt.driver [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.788 254096 DEBUG nova.virt.libvirt.driver [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.788 254096 DEBUG nova.virt.libvirt.driver [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.789 254096 DEBUG nova.virt.libvirt.driver [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.789 254096 DEBUG nova.virt.libvirt.driver [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.833 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:05:38 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.866 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cbf5c589-9701-44c9-9600-739675853610] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.888 254096 INFO nova.compute.manager [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Took 9.09 seconds to spawn the instance on the hypervisor.
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.889 254096 DEBUG nova.compute.manager [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:05:38 compute-0 podman[388871]: 2025-11-25 17:05:38.893858676 +0000 UTC m=+0.059776651 container create 91d5773898e199d05495201593304eeb31a0c1a9d892489f684afca4dc952948 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 25 17:05:38 compute-0 systemd[1]: Started libpod-conmon-91d5773898e199d05495201593304eeb31a0c1a9d892489f684afca4dc952948.scope.
Nov 25 17:05:38 compute-0 podman[388871]: 2025-11-25 17:05:38.857288553 +0000 UTC m=+0.023206428 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.958 254096 INFO nova.compute.manager [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Took 10.07 seconds to build instance.
Nov 25 17:05:38 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4201702930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:05:38 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:05:38 compute-0 nova_compute[254092]: 2025-11-25 17:05:38.983 254096 DEBUG oslo_concurrency.lockutils [None req-d0be7669-bc1d-4c82-888a-b31cfdfc30e1 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "cbf5c589-9701-44c9-9600-739675853610" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.152s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:05:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46a68caf33163305d309df46a027dcb2a4dc5df4d4d0497aef68c09c791db936/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 17:05:39 compute-0 podman[388871]: 2025-11-25 17:05:39.008276375 +0000 UTC m=+0.174194260 container init 91d5773898e199d05495201593304eeb31a0c1a9d892489f684afca4dc952948 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 17:05:39 compute-0 podman[388871]: 2025-11-25 17:05:39.016311364 +0000 UTC m=+0.182229219 container start 91d5773898e199d05495201593304eeb31a0c1a9d892489f684afca4dc952948 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 17:05:39 compute-0 neutron-haproxy-ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3[388900]: [NOTICE]   (388929) : New worker (388931) forked
Nov 25 17:05:39 compute-0 neutron-haproxy-ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3[388900]: [NOTICE]   (388929) : Loading success.
Nov 25 17:05:39 compute-0 sudo[388826]: pam_unix(sudo:session): session closed for user root
Nov 25 17:05:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:05:39 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:05:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:05:39 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:05:39 compute-0 sudo[388940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:05:39 compute-0 sudo[388940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:05:39 compute-0 sudo[388940]: pam_unix(sudo:session): session closed for user root
Nov 25 17:05:39 compute-0 sudo[388965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:05:39 compute-0 sudo[388965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:05:39 compute-0 sudo[388965]: pam_unix(sudo:session): session closed for user root
Nov 25 17:05:39 compute-0 sudo[388990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:05:39 compute-0 sudo[388990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:05:39 compute-0 sudo[388990]: pam_unix(sudo:session): session closed for user root
Nov 25 17:05:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:05:39 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/783463155' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:05:39 compute-0 sudo[389015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:05:39 compute-0 sudo[389015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:05:39 compute-0 nova_compute[254092]: 2025-11-25 17:05:39.334 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:05:39 compute-0 nova_compute[254092]: 2025-11-25 17:05:39.341 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:05:39 compute-0 nova_compute[254092]: 2025-11-25 17:05:39.367 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:05:39 compute-0 nova_compute[254092]: 2025-11-25 17:05:39.401 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:05:39 compute-0 nova_compute[254092]: 2025-11-25 17:05:39.402 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.907s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:05:39 compute-0 sudo[389015]: pam_unix(sudo:session): session closed for user root
Nov 25 17:05:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:05:39 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:05:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:05:39 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:05:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:05:39 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:05:39 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 86612fb8-28c9-4bb6-a942-6dafa3caa0f5 does not exist
Nov 25 17:05:39 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 64a2cb5d-e142-429c-8a06-3c2aa06d51e1 does not exist
Nov 25 17:05:39 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev e0cddf10-368b-432d-866b-215efa82304e does not exist
Nov 25 17:05:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:05:39 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:05:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:05:39 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:05:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:05:39 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:05:39 compute-0 sudo[389074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:05:39 compute-0 sudo[389074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:05:40 compute-0 sudo[389074]: pam_unix(sudo:session): session closed for user root
Nov 25 17:05:40 compute-0 sudo[389099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:05:40 compute-0 sudo[389099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:05:40 compute-0 sudo[389099]: pam_unix(sudo:session): session closed for user root
Nov 25 17:05:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:05:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:05:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:05:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:05:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:05:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:05:40 compute-0 sudo[389124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:05:40 compute-0 sudo[389124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:05:40 compute-0 ceph-mon[74985]: pgmap v2463: 321 pgs: 321 active+clean; 213 MiB data, 923 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 81 op/s
Nov 25 17:05:40 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:05:40 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:05:40 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/783463155' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:05:40 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:05:40 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:05:40 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:05:40 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:05:40 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:05:40 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:05:40 compute-0 sudo[389124]: pam_unix(sudo:session): session closed for user root
Nov 25 17:05:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:05:40
Nov 25 17:05:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:05:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:05:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'images', 'volumes', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', 'vms']
Nov 25 17:05:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:05:40 compute-0 sudo[389149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:05:40 compute-0 sudo[389149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:05:40 compute-0 nova_compute[254092]: 2025-11-25 17:05:40.322 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:40 compute-0 nova_compute[254092]: 2025-11-25 17:05:40.364 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:05:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2464: 321 pgs: 321 active+clean; 224 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.8 MiB/s wr, 135 op/s
Nov 25 17:05:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:05:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:05:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:05:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:05:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:05:40 compute-0 nova_compute[254092]: 2025-11-25 17:05:40.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:05:40 compute-0 nova_compute[254092]: 2025-11-25 17:05:40.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:05:40 compute-0 nova_compute[254092]: 2025-11-25 17:05:40.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:05:40 compute-0 podman[389214]: 2025-11-25 17:05:40.54815469 +0000 UTC m=+0.069988721 container create 162b42da745ab0ee7ed1c859afa6a0818ce2a986341e9abb7fcc95e7322e06ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 17:05:40 compute-0 podman[389214]: 2025-11-25 17:05:40.509371296 +0000 UTC m=+0.031205357 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:05:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:05:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:05:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:05:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:05:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:05:40 compute-0 systemd[1]: Started libpod-conmon-162b42da745ab0ee7ed1c859afa6a0818ce2a986341e9abb7fcc95e7322e06ce.scope.
Nov 25 17:05:40 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:05:40 compute-0 nova_compute[254092]: 2025-11-25 17:05:40.697 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:05:40 compute-0 nova_compute[254092]: 2025-11-25 17:05:40.697 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:05:40 compute-0 nova_compute[254092]: 2025-11-25 17:05:40.698 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 17:05:40 compute-0 nova_compute[254092]: 2025-11-25 17:05:40.698 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid fb2e8439-c6c2-4a88-9e88-faf385d9a4d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:05:40 compute-0 podman[389214]: 2025-11-25 17:05:40.786495757 +0000 UTC m=+0.308329828 container init 162b42da745ab0ee7ed1c859afa6a0818ce2a986341e9abb7fcc95e7322e06ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 17:05:40 compute-0 podman[389214]: 2025-11-25 17:05:40.796687247 +0000 UTC m=+0.318521278 container start 162b42da745ab0ee7ed1c859afa6a0818ce2a986341e9abb7fcc95e7322e06ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_brahmagupta, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:05:40 compute-0 nova_compute[254092]: 2025-11-25 17:05:40.798 254096 DEBUG nova.compute.manager [req-370e5a38-ab73-405a-aa12-1d6bd2ff4794 req-93170d33-eeb1-48ef-b51d-e4e46c70ee62 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Received event network-vif-plugged-332ae922-3280-48c2-8889-d1ab181a43db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:05:40 compute-0 nova_compute[254092]: 2025-11-25 17:05:40.798 254096 DEBUG oslo_concurrency.lockutils [req-370e5a38-ab73-405a-aa12-1d6bd2ff4794 req-93170d33-eeb1-48ef-b51d-e4e46c70ee62 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "cbf5c589-9701-44c9-9600-739675853610-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:05:40 compute-0 nova_compute[254092]: 2025-11-25 17:05:40.799 254096 DEBUG oslo_concurrency.lockutils [req-370e5a38-ab73-405a-aa12-1d6bd2ff4794 req-93170d33-eeb1-48ef-b51d-e4e46c70ee62 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "cbf5c589-9701-44c9-9600-739675853610-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:05:40 compute-0 nova_compute[254092]: 2025-11-25 17:05:40.799 254096 DEBUG oslo_concurrency.lockutils [req-370e5a38-ab73-405a-aa12-1d6bd2ff4794 req-93170d33-eeb1-48ef-b51d-e4e46c70ee62 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "cbf5c589-9701-44c9-9600-739675853610-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:05:40 compute-0 nova_compute[254092]: 2025-11-25 17:05:40.800 254096 DEBUG nova.compute.manager [req-370e5a38-ab73-405a-aa12-1d6bd2ff4794 req-93170d33-eeb1-48ef-b51d-e4e46c70ee62 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] No waiting events found dispatching network-vif-plugged-332ae922-3280-48c2-8889-d1ab181a43db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:05:40 compute-0 nova_compute[254092]: 2025-11-25 17:05:40.800 254096 WARNING nova.compute.manager [req-370e5a38-ab73-405a-aa12-1d6bd2ff4794 req-93170d33-eeb1-48ef-b51d-e4e46c70ee62 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Received unexpected event network-vif-plugged-332ae922-3280-48c2-8889-d1ab181a43db for instance with vm_state active and task_state None.
Nov 25 17:05:40 compute-0 festive_brahmagupta[389230]: 167 167
Nov 25 17:05:40 compute-0 systemd[1]: libpod-162b42da745ab0ee7ed1c859afa6a0818ce2a986341e9abb7fcc95e7322e06ce.scope: Deactivated successfully.
Nov 25 17:05:40 compute-0 podman[389214]: 2025-11-25 17:05:40.939509714 +0000 UTC m=+0.461343775 container attach 162b42da745ab0ee7ed1c859afa6a0818ce2a986341e9abb7fcc95e7322e06ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:05:40 compute-0 podman[389214]: 2025-11-25 17:05:40.940858191 +0000 UTC m=+0.462692222 container died 162b42da745ab0ee7ed1c859afa6a0818ce2a986341e9abb7fcc95e7322e06ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 17:05:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc29dfffff1a59ff1c6c7c5dcf49070f888c03b3c13b67415771157dc8d5888d-merged.mount: Deactivated successfully.
Nov 25 17:05:41 compute-0 podman[389214]: 2025-11-25 17:05:41.39328621 +0000 UTC m=+0.915120251 container remove 162b42da745ab0ee7ed1c859afa6a0818ce2a986341e9abb7fcc95e7322e06ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 17:05:41 compute-0 systemd[1]: libpod-conmon-162b42da745ab0ee7ed1c859afa6a0818ce2a986341e9abb7fcc95e7322e06ce.scope: Deactivated successfully.
Nov 25 17:05:41 compute-0 nova_compute[254092]: 2025-11-25 17:05:41.650 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:41 compute-0 podman[389256]: 2025-11-25 17:05:41.597779189 +0000 UTC m=+0.023582417 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:05:41 compute-0 podman[389256]: 2025-11-25 17:05:41.713545104 +0000 UTC m=+0.139348282 container create 223a45befae96d47bc9638ea47e569a5e15ec761cd641b134e8ed8a2500d4d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_faraday, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:05:41 compute-0 systemd[1]: Started libpod-conmon-223a45befae96d47bc9638ea47e569a5e15ec761cd641b134e8ed8a2500d4d36.scope.
Nov 25 17:05:41 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:05:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/944e5fc3e739b7f60d76b1d40a70af9d470e0efebc7bafb7bd8a9f02c6eaea5e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:05:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/944e5fc3e739b7f60d76b1d40a70af9d470e0efebc7bafb7bd8a9f02c6eaea5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:05:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/944e5fc3e739b7f60d76b1d40a70af9d470e0efebc7bafb7bd8a9f02c6eaea5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:05:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/944e5fc3e739b7f60d76b1d40a70af9d470e0efebc7bafb7bd8a9f02c6eaea5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:05:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/944e5fc3e739b7f60d76b1d40a70af9d470e0efebc7bafb7bd8a9f02c6eaea5e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:05:42 compute-0 podman[389256]: 2025-11-25 17:05:42.035206747 +0000 UTC m=+0.461009915 container init 223a45befae96d47bc9638ea47e569a5e15ec761cd641b134e8ed8a2500d4d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:05:42 compute-0 podman[389256]: 2025-11-25 17:05:42.046277081 +0000 UTC m=+0.472080259 container start 223a45befae96d47bc9638ea47e569a5e15ec761cd641b134e8ed8a2500d4d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_faraday, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 17:05:42 compute-0 podman[389256]: 2025-11-25 17:05:42.12539065 +0000 UTC m=+0.551193828 container attach 223a45befae96d47bc9638ea47e569a5e15ec761cd641b134e8ed8a2500d4d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 17:05:42 compute-0 ceph-mon[74985]: pgmap v2464: 321 pgs: 321 active+clean; 224 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.8 MiB/s wr, 135 op/s
Nov 25 17:05:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2465: 321 pgs: 321 active+clean; 246 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.9 MiB/s wr, 204 op/s
Nov 25 17:05:43 compute-0 infallible_faraday[389273]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:05:43 compute-0 infallible_faraday[389273]: --> relative data size: 1.0
Nov 25 17:05:43 compute-0 infallible_faraday[389273]: --> All data devices are unavailable
Nov 25 17:05:43 compute-0 systemd[1]: libpod-223a45befae96d47bc9638ea47e569a5e15ec761cd641b134e8ed8a2500d4d36.scope: Deactivated successfully.
Nov 25 17:05:43 compute-0 podman[389256]: 2025-11-25 17:05:43.086963445 +0000 UTC m=+1.512766623 container died 223a45befae96d47bc9638ea47e569a5e15ec761cd641b134e8ed8a2500d4d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_faraday, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:05:43 compute-0 ceph-mon[74985]: pgmap v2465: 321 pgs: 321 active+clean; 246 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.9 MiB/s wr, 204 op/s
Nov 25 17:05:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-944e5fc3e739b7f60d76b1d40a70af9d470e0efebc7bafb7bd8a9f02c6eaea5e-merged.mount: Deactivated successfully.
Nov 25 17:05:43 compute-0 nova_compute[254092]: 2025-11-25 17:05:43.532 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Updating instance_info_cache with network_info: [{"id": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "address": "fa:16:3e:7c:d6:7a", "network": {"id": "136c69a7-c4f8-40a2-be13-7ef82b7b3709", "bridge": "br-int", "label": "tempest-network-smoke--1102379118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13b6cf4-60", "ovs_interfaceid": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d5b14553-424b-4985-9ed6-2f4afac92c00", "address": "fa:16:3e:8b:9c:66", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b14553-42", "ovs_interfaceid": "d5b14553-424b-4985-9ed6-2f4afac92c00", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:05:43 compute-0 nova_compute[254092]: 2025-11-25 17:05:43.573 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:05:43 compute-0 nova_compute[254092]: 2025-11-25 17:05:43.573 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 17:05:43 compute-0 nova_compute[254092]: 2025-11-25 17:05:43.652 254096 DEBUG nova.compute.manager [req-c9f72620-4c45-4b85-9432-a3a690c7da8a req-73560125-482d-44ff-9cea-303a08d221e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Received event network-changed-332ae922-3280-48c2-8889-d1ab181a43db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:05:43 compute-0 nova_compute[254092]: 2025-11-25 17:05:43.653 254096 DEBUG nova.compute.manager [req-c9f72620-4c45-4b85-9432-a3a690c7da8a req-73560125-482d-44ff-9cea-303a08d221e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Refreshing instance network info cache due to event network-changed-332ae922-3280-48c2-8889-d1ab181a43db. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:05:43 compute-0 nova_compute[254092]: 2025-11-25 17:05:43.653 254096 DEBUG oslo_concurrency.lockutils [req-c9f72620-4c45-4b85-9432-a3a690c7da8a req-73560125-482d-44ff-9cea-303a08d221e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-cbf5c589-9701-44c9-9600-739675853610" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:05:43 compute-0 nova_compute[254092]: 2025-11-25 17:05:43.654 254096 DEBUG oslo_concurrency.lockutils [req-c9f72620-4c45-4b85-9432-a3a690c7da8a req-73560125-482d-44ff-9cea-303a08d221e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-cbf5c589-9701-44c9-9600-739675853610" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:05:43 compute-0 nova_compute[254092]: 2025-11-25 17:05:43.654 254096 DEBUG nova.network.neutron [req-c9f72620-4c45-4b85-9432-a3a690c7da8a req-73560125-482d-44ff-9cea-303a08d221e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Refreshing network info cache for port 332ae922-3280-48c2-8889-d1ab181a43db _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:05:43 compute-0 podman[389256]: 2025-11-25 17:05:43.662606354 +0000 UTC m=+2.088409532 container remove 223a45befae96d47bc9638ea47e569a5e15ec761cd641b134e8ed8a2500d4d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_faraday, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:05:43 compute-0 sudo[389149]: pam_unix(sudo:session): session closed for user root
Nov 25 17:05:43 compute-0 systemd[1]: libpod-conmon-223a45befae96d47bc9638ea47e569a5e15ec761cd641b134e8ed8a2500d4d36.scope: Deactivated successfully.
Nov 25 17:05:43 compute-0 sudo[389315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:05:43 compute-0 sudo[389315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:05:43 compute-0 sudo[389315]: pam_unix(sudo:session): session closed for user root
Nov 25 17:05:43 compute-0 sudo[389340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:05:43 compute-0 sudo[389340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:05:43 compute-0 sudo[389340]: pam_unix(sudo:session): session closed for user root
Nov 25 17:05:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:05:43 compute-0 sudo[389365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:05:43 compute-0 sudo[389365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:05:43 compute-0 sudo[389365]: pam_unix(sudo:session): session closed for user root
Nov 25 17:05:43 compute-0 sudo[389390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:05:43 compute-0 sudo[389390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:05:44 compute-0 nova_compute[254092]: 2025-11-25 17:05:44.140 254096 DEBUG nova.compute.manager [req-962e4daa-3a9e-4959-8bc4-fd5b9905d99c req-c50c2630-fe8d-4e4f-8e9c-f6be405ad6c8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Received event network-changed-d5b14553-424b-4985-9ed6-2f4afac92c00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:05:44 compute-0 nova_compute[254092]: 2025-11-25 17:05:44.140 254096 DEBUG nova.compute.manager [req-962e4daa-3a9e-4959-8bc4-fd5b9905d99c req-c50c2630-fe8d-4e4f-8e9c-f6be405ad6c8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Refreshing instance network info cache due to event network-changed-d5b14553-424b-4985-9ed6-2f4afac92c00. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:05:44 compute-0 nova_compute[254092]: 2025-11-25 17:05:44.140 254096 DEBUG oslo_concurrency.lockutils [req-962e4daa-3a9e-4959-8bc4-fd5b9905d99c req-c50c2630-fe8d-4e4f-8e9c-f6be405ad6c8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:05:44 compute-0 nova_compute[254092]: 2025-11-25 17:05:44.141 254096 DEBUG oslo_concurrency.lockutils [req-962e4daa-3a9e-4959-8bc4-fd5b9905d99c req-c50c2630-fe8d-4e4f-8e9c-f6be405ad6c8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:05:44 compute-0 nova_compute[254092]: 2025-11-25 17:05:44.141 254096 DEBUG nova.network.neutron [req-962e4daa-3a9e-4959-8bc4-fd5b9905d99c req-c50c2630-fe8d-4e4f-8e9c-f6be405ad6c8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Refreshing network info cache for port d5b14553-424b-4985-9ed6-2f4afac92c00 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:05:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2466: 321 pgs: 321 active+clean; 246 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.8 MiB/s wr, 161 op/s
Nov 25 17:05:44 compute-0 podman[389457]: 2025-11-25 17:05:44.398750265 +0000 UTC m=+0.073844767 container create 21d3a262c0f7d056394235d4ec4e18f23400d9f2003f597eebb60070bef2765f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shockley, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 17:05:44 compute-0 podman[389457]: 2025-11-25 17:05:44.349210106 +0000 UTC m=+0.024304628 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:05:44 compute-0 systemd[1]: Started libpod-conmon-21d3a262c0f7d056394235d4ec4e18f23400d9f2003f597eebb60070bef2765f.scope.
Nov 25 17:05:44 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:05:44 compute-0 podman[389457]: 2025-11-25 17:05:44.521284566 +0000 UTC m=+0.196379088 container init 21d3a262c0f7d056394235d4ec4e18f23400d9f2003f597eebb60070bef2765f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shockley, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:05:44 compute-0 podman[389457]: 2025-11-25 17:05:44.529040628 +0000 UTC m=+0.204135140 container start 21d3a262c0f7d056394235d4ec4e18f23400d9f2003f597eebb60070bef2765f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shockley, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 17:05:44 compute-0 podman[389457]: 2025-11-25 17:05:44.53309679 +0000 UTC m=+0.208191312 container attach 21d3a262c0f7d056394235d4ec4e18f23400d9f2003f597eebb60070bef2765f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shockley, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 17:05:44 compute-0 wizardly_shockley[389474]: 167 167
Nov 25 17:05:44 compute-0 systemd[1]: libpod-21d3a262c0f7d056394235d4ec4e18f23400d9f2003f597eebb60070bef2765f.scope: Deactivated successfully.
Nov 25 17:05:44 compute-0 podman[389457]: 2025-11-25 17:05:44.536503463 +0000 UTC m=+0.211598005 container died 21d3a262c0f7d056394235d4ec4e18f23400d9f2003f597eebb60070bef2765f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 17:05:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b14ccb44408ba7c42b964885bd0dd9935d81a115c299bb21d86727c3036869d-merged.mount: Deactivated successfully.
Nov 25 17:05:44 compute-0 podman[389457]: 2025-11-25 17:05:44.586135975 +0000 UTC m=+0.261230517 container remove 21d3a262c0f7d056394235d4ec4e18f23400d9f2003f597eebb60070bef2765f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:05:44 compute-0 systemd[1]: libpod-conmon-21d3a262c0f7d056394235d4ec4e18f23400d9f2003f597eebb60070bef2765f.scope: Deactivated successfully.
Nov 25 17:05:44 compute-0 podman[389497]: 2025-11-25 17:05:44.826811996 +0000 UTC m=+0.064356186 container create 6eb33d4f75e2ec1f63b49abd007544646d3b6e2bdc3360e4a2a9fe3d42a669c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_bassi, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 17:05:44 compute-0 systemd[1]: Started libpod-conmon-6eb33d4f75e2ec1f63b49abd007544646d3b6e2bdc3360e4a2a9fe3d42a669c0.scope.
Nov 25 17:05:44 compute-0 podman[389497]: 2025-11-25 17:05:44.803311231 +0000 UTC m=+0.040855421 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:05:44 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:05:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19ddf52ca0e3c53bca15e8fa773f96576c307655ee40c43846d6bbad0d4ede86/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:05:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19ddf52ca0e3c53bca15e8fa773f96576c307655ee40c43846d6bbad0d4ede86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:05:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19ddf52ca0e3c53bca15e8fa773f96576c307655ee40c43846d6bbad0d4ede86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:05:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19ddf52ca0e3c53bca15e8fa773f96576c307655ee40c43846d6bbad0d4ede86/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:05:44 compute-0 podman[389497]: 2025-11-25 17:05:44.920285929 +0000 UTC m=+0.157830159 container init 6eb33d4f75e2ec1f63b49abd007544646d3b6e2bdc3360e4a2a9fe3d42a669c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_bassi, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 17:05:44 compute-0 podman[389497]: 2025-11-25 17:05:44.929874012 +0000 UTC m=+0.167418192 container start 6eb33d4f75e2ec1f63b49abd007544646d3b6e2bdc3360e4a2a9fe3d42a669c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_bassi, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:05:44 compute-0 podman[389497]: 2025-11-25 17:05:44.933452321 +0000 UTC m=+0.170996511 container attach 6eb33d4f75e2ec1f63b49abd007544646d3b6e2bdc3360e4a2a9fe3d42a669c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_bassi, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 17:05:45 compute-0 nova_compute[254092]: 2025-11-25 17:05:45.159 254096 DEBUG nova.network.neutron [req-c9f72620-4c45-4b85-9432-a3a690c7da8a req-73560125-482d-44ff-9cea-303a08d221e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Updated VIF entry in instance network info cache for port 332ae922-3280-48c2-8889-d1ab181a43db. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:05:45 compute-0 nova_compute[254092]: 2025-11-25 17:05:45.161 254096 DEBUG nova.network.neutron [req-c9f72620-4c45-4b85-9432-a3a690c7da8a req-73560125-482d-44ff-9cea-303a08d221e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Updating instance_info_cache with network_info: [{"id": "332ae922-3280-48c2-8889-d1ab181a43db", "address": "fa:16:3e:98:d4:72", "network": {"id": "bf9312b9-f4d2-496f-a143-7586e12fbee3", "bridge": "br-int", "label": "tempest-network-smoke--1513117463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap332ae922-32", "ovs_interfaceid": "332ae922-3280-48c2-8889-d1ab181a43db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:05:45 compute-0 nova_compute[254092]: 2025-11-25 17:05:45.189 254096 DEBUG oslo_concurrency.lockutils [req-c9f72620-4c45-4b85-9432-a3a690c7da8a req-73560125-482d-44ff-9cea-303a08d221e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-cbf5c589-9701-44c9-9600-739675853610" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:05:45 compute-0 nova_compute[254092]: 2025-11-25 17:05:45.324 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:45 compute-0 ceph-mon[74985]: pgmap v2466: 321 pgs: 321 active+clean; 246 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.8 MiB/s wr, 161 op/s
Nov 25 17:05:45 compute-0 nova_compute[254092]: 2025-11-25 17:05:45.527 254096 DEBUG nova.network.neutron [req-962e4daa-3a9e-4959-8bc4-fd5b9905d99c req-c50c2630-fe8d-4e4f-8e9c-f6be405ad6c8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Updated VIF entry in instance network info cache for port d5b14553-424b-4985-9ed6-2f4afac92c00. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:05:45 compute-0 nova_compute[254092]: 2025-11-25 17:05:45.527 254096 DEBUG nova.network.neutron [req-962e4daa-3a9e-4959-8bc4-fd5b9905d99c req-c50c2630-fe8d-4e4f-8e9c-f6be405ad6c8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Updating instance_info_cache with network_info: [{"id": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "address": "fa:16:3e:7c:d6:7a", "network": {"id": "136c69a7-c4f8-40a2-be13-7ef82b7b3709", "bridge": "br-int", "label": "tempest-network-smoke--1102379118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13b6cf4-60", "ovs_interfaceid": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d5b14553-424b-4985-9ed6-2f4afac92c00", "address": "fa:16:3e:8b:9c:66", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b14553-42", "ovs_interfaceid": "d5b14553-424b-4985-9ed6-2f4afac92c00", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:05:45 compute-0 nova_compute[254092]: 2025-11-25 17:05:45.539 254096 DEBUG oslo_concurrency.lockutils [req-962e4daa-3a9e-4959-8bc4-fd5b9905d99c req-c50c2630-fe8d-4e4f-8e9c-f6be405ad6c8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:05:45 compute-0 nova_compute[254092]: 2025-11-25 17:05:45.569 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:05:45 compute-0 zen_bassi[389513]: {
Nov 25 17:05:45 compute-0 zen_bassi[389513]:     "0": [
Nov 25 17:05:45 compute-0 zen_bassi[389513]:         {
Nov 25 17:05:45 compute-0 zen_bassi[389513]:             "devices": [
Nov 25 17:05:45 compute-0 zen_bassi[389513]:                 "/dev/loop3"
Nov 25 17:05:45 compute-0 zen_bassi[389513]:             ],
Nov 25 17:05:45 compute-0 zen_bassi[389513]:             "lv_name": "ceph_lv0",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:             "lv_size": "21470642176",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:             "name": "ceph_lv0",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:             "tags": {
Nov 25 17:05:45 compute-0 zen_bassi[389513]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:                 "ceph.cluster_name": "ceph",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:                 "ceph.crush_device_class": "",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:                 "ceph.encrypted": "0",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:                 "ceph.osd_id": "0",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:                 "ceph.type": "block",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:                 "ceph.vdo": "0"
Nov 25 17:05:45 compute-0 zen_bassi[389513]:             },
Nov 25 17:05:45 compute-0 zen_bassi[389513]:             "type": "block",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:             "vg_name": "ceph_vg0"
Nov 25 17:05:45 compute-0 zen_bassi[389513]:         }
Nov 25 17:05:45 compute-0 zen_bassi[389513]:     ],
Nov 25 17:05:45 compute-0 zen_bassi[389513]:     "1": [
Nov 25 17:05:45 compute-0 zen_bassi[389513]:         {
Nov 25 17:05:45 compute-0 zen_bassi[389513]:             "devices": [
Nov 25 17:05:45 compute-0 zen_bassi[389513]:                 "/dev/loop4"
Nov 25 17:05:45 compute-0 zen_bassi[389513]:             ],
Nov 25 17:05:45 compute-0 zen_bassi[389513]:             "lv_name": "ceph_lv1",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:             "lv_size": "21470642176",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:             "name": "ceph_lv1",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:             "tags": {
Nov 25 17:05:45 compute-0 zen_bassi[389513]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:                 "ceph.cluster_name": "ceph",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:                 "ceph.crush_device_class": "",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:                 "ceph.encrypted": "0",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:                 "ceph.osd_id": "1",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:                 "ceph.type": "block",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:                 "ceph.vdo": "0"
Nov 25 17:05:45 compute-0 zen_bassi[389513]:             },
Nov 25 17:05:45 compute-0 zen_bassi[389513]:             "type": "block",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:             "vg_name": "ceph_vg1"
Nov 25 17:05:45 compute-0 zen_bassi[389513]:         }
Nov 25 17:05:45 compute-0 zen_bassi[389513]:     ],
Nov 25 17:05:45 compute-0 zen_bassi[389513]:     "2": [
Nov 25 17:05:45 compute-0 zen_bassi[389513]:         {
Nov 25 17:05:45 compute-0 zen_bassi[389513]:             "devices": [
Nov 25 17:05:45 compute-0 zen_bassi[389513]:                 "/dev/loop5"
Nov 25 17:05:45 compute-0 zen_bassi[389513]:             ],
Nov 25 17:05:45 compute-0 zen_bassi[389513]:             "lv_name": "ceph_lv2",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:             "lv_size": "21470642176",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:             "name": "ceph_lv2",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:             "tags": {
Nov 25 17:05:45 compute-0 zen_bassi[389513]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:                 "ceph.cluster_name": "ceph",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:                 "ceph.crush_device_class": "",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:                 "ceph.encrypted": "0",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:                 "ceph.osd_id": "2",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:                 "ceph.type": "block",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:                 "ceph.vdo": "0"
Nov 25 17:05:45 compute-0 zen_bassi[389513]:             },
Nov 25 17:05:45 compute-0 zen_bassi[389513]:             "type": "block",
Nov 25 17:05:45 compute-0 zen_bassi[389513]:             "vg_name": "ceph_vg2"
Nov 25 17:05:45 compute-0 zen_bassi[389513]:         }
Nov 25 17:05:45 compute-0 zen_bassi[389513]:     ]
Nov 25 17:05:45 compute-0 zen_bassi[389513]: }
Nov 25 17:05:45 compute-0 systemd[1]: libpod-6eb33d4f75e2ec1f63b49abd007544646d3b6e2bdc3360e4a2a9fe3d42a669c0.scope: Deactivated successfully.
Nov 25 17:05:45 compute-0 podman[389497]: 2025-11-25 17:05:45.742936383 +0000 UTC m=+0.980480583 container died 6eb33d4f75e2ec1f63b49abd007544646d3b6e2bdc3360e4a2a9fe3d42a669c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_bassi, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 17:05:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-19ddf52ca0e3c53bca15e8fa773f96576c307655ee40c43846d6bbad0d4ede86-merged.mount: Deactivated successfully.
Nov 25 17:05:45 compute-0 podman[389497]: 2025-11-25 17:05:45.806380693 +0000 UTC m=+1.043924873 container remove 6eb33d4f75e2ec1f63b49abd007544646d3b6e2bdc3360e4a2a9fe3d42a669c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:05:45 compute-0 systemd[1]: libpod-conmon-6eb33d4f75e2ec1f63b49abd007544646d3b6e2bdc3360e4a2a9fe3d42a669c0.scope: Deactivated successfully.
Nov 25 17:05:45 compute-0 sudo[389390]: pam_unix(sudo:session): session closed for user root
Nov 25 17:05:45 compute-0 sudo[389533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:05:45 compute-0 sudo[389533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:05:45 compute-0 sudo[389533]: pam_unix(sudo:session): session closed for user root
Nov 25 17:05:45 compute-0 sudo[389558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:05:45 compute-0 sudo[389558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:05:45 compute-0 sudo[389558]: pam_unix(sudo:session): session closed for user root
Nov 25 17:05:46 compute-0 sudo[389583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:05:46 compute-0 sudo[389583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:05:46 compute-0 sudo[389583]: pam_unix(sudo:session): session closed for user root
Nov 25 17:05:46 compute-0 sudo[389608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:05:46 compute-0 sudo[389608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:05:46 compute-0 podman[389672]: 2025-11-25 17:05:46.39324424 +0000 UTC m=+0.036784690 container create 728431ae597287e9556d14aa50c60ed5c3c5302bbf98074d1cae80fc43d3ef03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Nov 25 17:05:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2467: 321 pgs: 321 active+clean; 246 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.8 MiB/s wr, 166 op/s
Nov 25 17:05:46 compute-0 systemd[1]: Started libpod-conmon-728431ae597287e9556d14aa50c60ed5c3c5302bbf98074d1cae80fc43d3ef03.scope.
Nov 25 17:05:46 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:05:46 compute-0 podman[389672]: 2025-11-25 17:05:46.473055749 +0000 UTC m=+0.116596219 container init 728431ae597287e9556d14aa50c60ed5c3c5302bbf98074d1cae80fc43d3ef03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:05:46 compute-0 podman[389672]: 2025-11-25 17:05:46.375217135 +0000 UTC m=+0.018757605 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:05:46 compute-0 podman[389672]: 2025-11-25 17:05:46.479057913 +0000 UTC m=+0.122598363 container start 728431ae597287e9556d14aa50c60ed5c3c5302bbf98074d1cae80fc43d3ef03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_chaum, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:05:46 compute-0 podman[389672]: 2025-11-25 17:05:46.481577652 +0000 UTC m=+0.125118102 container attach 728431ae597287e9556d14aa50c60ed5c3c5302bbf98074d1cae80fc43d3ef03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_chaum, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 17:05:46 compute-0 sweet_chaum[389688]: 167 167
Nov 25 17:05:46 compute-0 systemd[1]: libpod-728431ae597287e9556d14aa50c60ed5c3c5302bbf98074d1cae80fc43d3ef03.scope: Deactivated successfully.
Nov 25 17:05:46 compute-0 podman[389672]: 2025-11-25 17:05:46.484498112 +0000 UTC m=+0.128038562 container died 728431ae597287e9556d14aa50c60ed5c3c5302bbf98074d1cae80fc43d3ef03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_chaum, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Nov 25 17:05:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-87860ac5170e9f7e89fc930bb6d5a2a1f93fdcc24e2f8949a86c077f530f1d63-merged.mount: Deactivated successfully.
Nov 25 17:05:46 compute-0 podman[389672]: 2025-11-25 17:05:46.516892741 +0000 UTC m=+0.160433191 container remove 728431ae597287e9556d14aa50c60ed5c3c5302bbf98074d1cae80fc43d3ef03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:05:46 compute-0 systemd[1]: libpod-conmon-728431ae597287e9556d14aa50c60ed5c3c5302bbf98074d1cae80fc43d3ef03.scope: Deactivated successfully.
Nov 25 17:05:46 compute-0 nova_compute[254092]: 2025-11-25 17:05:46.652 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:46 compute-0 podman[389710]: 2025-11-25 17:05:46.703220751 +0000 UTC m=+0.039744420 container create db57609355803ad18b370fd9324542705d4e5b36bb432a6ce5d5a7b090687935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:05:46 compute-0 systemd[1]: Started libpod-conmon-db57609355803ad18b370fd9324542705d4e5b36bb432a6ce5d5a7b090687935.scope.
Nov 25 17:05:46 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:05:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/417a3527dc6c1222db4aee2bc07d2c6c25aeec05bf2df18813d5177af07b7c51/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:05:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/417a3527dc6c1222db4aee2bc07d2c6c25aeec05bf2df18813d5177af07b7c51/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:05:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/417a3527dc6c1222db4aee2bc07d2c6c25aeec05bf2df18813d5177af07b7c51/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:05:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/417a3527dc6c1222db4aee2bc07d2c6c25aeec05bf2df18813d5177af07b7c51/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:05:46 compute-0 podman[389710]: 2025-11-25 17:05:46.775230697 +0000 UTC m=+0.111754406 container init db57609355803ad18b370fd9324542705d4e5b36bb432a6ce5d5a7b090687935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_antonelli, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 17:05:46 compute-0 podman[389710]: 2025-11-25 17:05:46.78335547 +0000 UTC m=+0.119879139 container start db57609355803ad18b370fd9324542705d4e5b36bb432a6ce5d5a7b090687935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 17:05:46 compute-0 podman[389710]: 2025-11-25 17:05:46.68784237 +0000 UTC m=+0.024366059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:05:46 compute-0 podman[389710]: 2025-11-25 17:05:46.78808535 +0000 UTC m=+0.124609069 container attach db57609355803ad18b370fd9324542705d4e5b36bb432a6ce5d5a7b090687935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 17:05:46 compute-0 podman[389728]: 2025-11-25 17:05:46.811917423 +0000 UTC m=+0.069540229 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 25 17:05:46 compute-0 podman[389729]: 2025-11-25 17:05:46.843353885 +0000 UTC m=+0.090586475 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 17:05:46 compute-0 podman[389725]: 2025-11-25 17:05:46.859316073 +0000 UTC m=+0.118164392 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 17:05:47 compute-0 ceph-mon[74985]: pgmap v2467: 321 pgs: 321 active+clean; 246 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.8 MiB/s wr, 166 op/s
Nov 25 17:05:47 compute-0 zealous_antonelli[389730]: {
Nov 25 17:05:47 compute-0 zealous_antonelli[389730]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:05:47 compute-0 zealous_antonelli[389730]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:05:47 compute-0 zealous_antonelli[389730]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:05:47 compute-0 zealous_antonelli[389730]:         "osd_id": 1,
Nov 25 17:05:47 compute-0 zealous_antonelli[389730]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:05:47 compute-0 zealous_antonelli[389730]:         "type": "bluestore"
Nov 25 17:05:47 compute-0 zealous_antonelli[389730]:     },
Nov 25 17:05:47 compute-0 zealous_antonelli[389730]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:05:47 compute-0 zealous_antonelli[389730]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:05:47 compute-0 zealous_antonelli[389730]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:05:47 compute-0 zealous_antonelli[389730]:         "osd_id": 2,
Nov 25 17:05:47 compute-0 zealous_antonelli[389730]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:05:47 compute-0 zealous_antonelli[389730]:         "type": "bluestore"
Nov 25 17:05:47 compute-0 zealous_antonelli[389730]:     },
Nov 25 17:05:47 compute-0 zealous_antonelli[389730]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:05:47 compute-0 zealous_antonelli[389730]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:05:47 compute-0 zealous_antonelli[389730]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:05:47 compute-0 zealous_antonelli[389730]:         "osd_id": 0,
Nov 25 17:05:47 compute-0 zealous_antonelli[389730]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:05:47 compute-0 zealous_antonelli[389730]:         "type": "bluestore"
Nov 25 17:05:47 compute-0 zealous_antonelli[389730]:     }
Nov 25 17:05:47 compute-0 zealous_antonelli[389730]: }
Nov 25 17:05:47 compute-0 systemd[1]: libpod-db57609355803ad18b370fd9324542705d4e5b36bb432a6ce5d5a7b090687935.scope: Deactivated successfully.
Nov 25 17:05:47 compute-0 systemd[1]: libpod-db57609355803ad18b370fd9324542705d4e5b36bb432a6ce5d5a7b090687935.scope: Consumed 1.013s CPU time.
Nov 25 17:05:47 compute-0 podman[389710]: 2025-11-25 17:05:47.806254206 +0000 UTC m=+1.142777895 container died db57609355803ad18b370fd9324542705d4e5b36bb432a6ce5d5a7b090687935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_antonelli, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:05:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-417a3527dc6c1222db4aee2bc07d2c6c25aeec05bf2df18813d5177af07b7c51-merged.mount: Deactivated successfully.
Nov 25 17:05:47 compute-0 podman[389710]: 2025-11-25 17:05:47.86217323 +0000 UTC m=+1.198696899 container remove db57609355803ad18b370fd9324542705d4e5b36bb432a6ce5d5a7b090687935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_antonelli, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 17:05:47 compute-0 systemd[1]: libpod-conmon-db57609355803ad18b370fd9324542705d4e5b36bb432a6ce5d5a7b090687935.scope: Deactivated successfully.
Nov 25 17:05:47 compute-0 sudo[389608]: pam_unix(sudo:session): session closed for user root
Nov 25 17:05:47 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:05:47 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:05:47 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:05:47 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:05:47 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 67a44557-5f9f-4bb0-9907-9f536d2c66d5 does not exist
Nov 25 17:05:47 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 6cb48162-5c50-4a24-978f-281f0c9dd401 does not exist
Nov 25 17:05:47 compute-0 sudo[389835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:05:47 compute-0 sudo[389835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:05:47 compute-0 sudo[389835]: pam_unix(sudo:session): session closed for user root
Nov 25 17:05:48 compute-0 sudo[389860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:05:48 compute-0 sudo[389860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:05:48 compute-0 sudo[389860]: pam_unix(sudo:session): session closed for user root
Nov 25 17:05:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2468: 321 pgs: 321 active+clean; 246 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 140 op/s
Nov 25 17:05:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:05:48 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:05:48 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:05:50 compute-0 ceph-mon[74985]: pgmap v2468: 321 pgs: 321 active+clean; 246 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 140 op/s
Nov 25 17:05:50 compute-0 nova_compute[254092]: 2025-11-25 17:05:50.327 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2469: 321 pgs: 321 active+clean; 254 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.9 MiB/s wr, 148 op/s
Nov 25 17:05:51 compute-0 ovn_controller[153477]: 2025-11-25T17:05:51Z|00146|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:98:d4:72 10.100.0.8
Nov 25 17:05:51 compute-0 ovn_controller[153477]: 2025-11-25T17:05:51Z|00147|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:98:d4:72 10.100.0.8
Nov 25 17:05:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:05:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:05:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:05:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:05:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0020349662409650322 of space, bias 1.0, pg target 0.6104898722895097 quantized to 32 (current 32)
Nov 25 17:05:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:05:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:05:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:05:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:05:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:05:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 17:05:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:05:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:05:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:05:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:05:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:05:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:05:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:05:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:05:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:05:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:05:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:05:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:05:51 compute-0 nova_compute[254092]: 2025-11-25 17:05:51.656 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:52 compute-0 ceph-mon[74985]: pgmap v2469: 321 pgs: 321 active+clean; 254 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.9 MiB/s wr, 148 op/s
Nov 25 17:05:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2470: 321 pgs: 321 active+clean; 260 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.6 MiB/s wr, 111 op/s
Nov 25 17:05:53 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:05:54 compute-0 ceph-mon[74985]: pgmap v2470: 321 pgs: 321 active+clean; 260 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.6 MiB/s wr, 111 op/s
Nov 25 17:05:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2471: 321 pgs: 321 active+clean; 260 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 229 KiB/s rd, 1.5 MiB/s wr, 30 op/s
Nov 25 17:05:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:05:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1951635910' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:05:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:05:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1951635910' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:05:55 compute-0 nova_compute[254092]: 2025-11-25 17:05:55.329 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:56 compute-0 ceph-mon[74985]: pgmap v2471: 321 pgs: 321 active+clean; 260 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 229 KiB/s rd, 1.5 MiB/s wr, 30 op/s
Nov 25 17:05:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/1951635910' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:05:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/1951635910' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:05:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2472: 321 pgs: 321 active+clean; 279 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 394 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:05:56 compute-0 nova_compute[254092]: 2025-11-25 17:05:56.659 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:05:58 compute-0 ceph-mon[74985]: pgmap v2472: 321 pgs: 321 active+clean; 279 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 394 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:05:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2473: 321 pgs: 321 active+clean; 279 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 258 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 25 17:05:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:06:00 compute-0 ceph-mon[74985]: pgmap v2473: 321 pgs: 321 active+clean; 279 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 258 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 25 17:06:00 compute-0 nova_compute[254092]: 2025-11-25 17:06:00.332 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2474: 321 pgs: 321 active+clean; 279 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 258 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 25 17:06:01 compute-0 ceph-mon[74985]: pgmap v2474: 321 pgs: 321 active+clean; 279 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 258 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 25 17:06:01 compute-0 nova_compute[254092]: 2025-11-25 17:06:01.661 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2475: 321 pgs: 321 active+clean; 279 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 254 KiB/s rd, 1.5 MiB/s wr, 52 op/s
Nov 25 17:06:03 compute-0 nova_compute[254092]: 2025-11-25 17:06:03.460 254096 DEBUG oslo_concurrency.lockutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "a4e18007-11e8-4531-9dc8-8cbc10fe2b75" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:06:03 compute-0 nova_compute[254092]: 2025-11-25 17:06:03.461 254096 DEBUG oslo_concurrency.lockutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "a4e18007-11e8-4531-9dc8-8cbc10fe2b75" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:06:03 compute-0 ceph-mon[74985]: pgmap v2475: 321 pgs: 321 active+clean; 279 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 254 KiB/s rd, 1.5 MiB/s wr, 52 op/s
Nov 25 17:06:03 compute-0 nova_compute[254092]: 2025-11-25 17:06:03.475 254096 DEBUG nova.compute.manager [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 17:06:03 compute-0 nova_compute[254092]: 2025-11-25 17:06:03.560 254096 DEBUG oslo_concurrency.lockutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:06:03 compute-0 nova_compute[254092]: 2025-11-25 17:06:03.560 254096 DEBUG oslo_concurrency.lockutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:06:03 compute-0 nova_compute[254092]: 2025-11-25 17:06:03.571 254096 DEBUG nova.virt.hardware [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 17:06:03 compute-0 nova_compute[254092]: 2025-11-25 17:06:03.571 254096 INFO nova.compute.claims [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Claim successful on node compute-0.ctlplane.example.com
Nov 25 17:06:03 compute-0 nova_compute[254092]: 2025-11-25 17:06:03.728 254096 DEBUG oslo_concurrency.processutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:06:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:06:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:06:04 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2228265069' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:06:04 compute-0 nova_compute[254092]: 2025-11-25 17:06:04.248 254096 DEBUG oslo_concurrency.processutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:06:04 compute-0 nova_compute[254092]: 2025-11-25 17:06:04.256 254096 DEBUG nova.compute.provider_tree [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:06:04 compute-0 nova_compute[254092]: 2025-11-25 17:06:04.270 254096 DEBUG nova.scheduler.client.report [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:06:04 compute-0 nova_compute[254092]: 2025-11-25 17:06:04.290 254096 DEBUG oslo_concurrency.lockutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:06:04 compute-0 nova_compute[254092]: 2025-11-25 17:06:04.291 254096 DEBUG nova.compute.manager [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 17:06:04 compute-0 nova_compute[254092]: 2025-11-25 17:06:04.332 254096 DEBUG nova.compute.manager [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 17:06:04 compute-0 nova_compute[254092]: 2025-11-25 17:06:04.333 254096 DEBUG nova.network.neutron [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 17:06:04 compute-0 nova_compute[254092]: 2025-11-25 17:06:04.351 254096 INFO nova.virt.libvirt.driver [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 17:06:04 compute-0 nova_compute[254092]: 2025-11-25 17:06:04.363 254096 DEBUG nova.compute.manager [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 17:06:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2476: 321 pgs: 321 active+clean; 279 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 165 KiB/s rd, 677 KiB/s wr, 35 op/s
Nov 25 17:06:04 compute-0 nova_compute[254092]: 2025-11-25 17:06:04.457 254096 DEBUG nova.compute.manager [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 17:06:04 compute-0 nova_compute[254092]: 2025-11-25 17:06:04.458 254096 DEBUG nova.virt.libvirt.driver [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 17:06:04 compute-0 nova_compute[254092]: 2025-11-25 17:06:04.459 254096 INFO nova.virt.libvirt.driver [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Creating image(s)
Nov 25 17:06:04 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2228265069' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:06:04 compute-0 nova_compute[254092]: 2025-11-25 17:06:04.495 254096 DEBUG nova.storage.rbd_utils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image a4e18007-11e8-4531-9dc8-8cbc10fe2b75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:06:04 compute-0 nova_compute[254092]: 2025-11-25 17:06:04.531 254096 DEBUG nova.storage.rbd_utils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image a4e18007-11e8-4531-9dc8-8cbc10fe2b75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:06:04 compute-0 nova_compute[254092]: 2025-11-25 17:06:04.564 254096 DEBUG nova.storage.rbd_utils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image a4e18007-11e8-4531-9dc8-8cbc10fe2b75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:06:04 compute-0 nova_compute[254092]: 2025-11-25 17:06:04.569 254096 DEBUG oslo_concurrency.processutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:06:04 compute-0 nova_compute[254092]: 2025-11-25 17:06:04.621 254096 DEBUG nova.policy [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '72f29c5d97464520bee19cb128258fd5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f19d2d0776e84c3c99c640c0b7ed3743', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 17:06:04 compute-0 nova_compute[254092]: 2025-11-25 17:06:04.680 254096 DEBUG oslo_concurrency.processutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.112s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:06:04 compute-0 nova_compute[254092]: 2025-11-25 17:06:04.681 254096 DEBUG oslo_concurrency.lockutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:06:04 compute-0 nova_compute[254092]: 2025-11-25 17:06:04.682 254096 DEBUG oslo_concurrency.lockutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:06:04 compute-0 nova_compute[254092]: 2025-11-25 17:06:04.682 254096 DEBUG oslo_concurrency.lockutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:06:04 compute-0 nova_compute[254092]: 2025-11-25 17:06:04.712 254096 DEBUG nova.storage.rbd_utils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image a4e18007-11e8-4531-9dc8-8cbc10fe2b75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:06:04 compute-0 nova_compute[254092]: 2025-11-25 17:06:04.717 254096 DEBUG oslo_concurrency.processutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 a4e18007-11e8-4531-9dc8-8cbc10fe2b75_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:06:05 compute-0 nova_compute[254092]: 2025-11-25 17:06:05.048 254096 DEBUG oslo_concurrency.processutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 a4e18007-11e8-4531-9dc8-8cbc10fe2b75_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.330s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:06:05 compute-0 nova_compute[254092]: 2025-11-25 17:06:05.099 254096 DEBUG nova.storage.rbd_utils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] resizing rbd image a4e18007-11e8-4531-9dc8-8cbc10fe2b75_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 17:06:05 compute-0 nova_compute[254092]: 2025-11-25 17:06:05.194 254096 DEBUG nova.objects.instance [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lazy-loading 'migration_context' on Instance uuid a4e18007-11e8-4531-9dc8-8cbc10fe2b75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:06:05 compute-0 nova_compute[254092]: 2025-11-25 17:06:05.207 254096 DEBUG nova.virt.libvirt.driver [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 17:06:05 compute-0 nova_compute[254092]: 2025-11-25 17:06:05.208 254096 DEBUG nova.virt.libvirt.driver [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Ensure instance console log exists: /var/lib/nova/instances/a4e18007-11e8-4531-9dc8-8cbc10fe2b75/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 17:06:05 compute-0 nova_compute[254092]: 2025-11-25 17:06:05.208 254096 DEBUG oslo_concurrency.lockutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:06:05 compute-0 nova_compute[254092]: 2025-11-25 17:06:05.209 254096 DEBUG oslo_concurrency.lockutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:06:05 compute-0 nova_compute[254092]: 2025-11-25 17:06:05.209 254096 DEBUG oslo_concurrency.lockutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:06:05 compute-0 nova_compute[254092]: 2025-11-25 17:06:05.336 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:05 compute-0 ceph-mon[74985]: pgmap v2476: 321 pgs: 321 active+clean; 279 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 165 KiB/s rd, 677 KiB/s wr, 35 op/s
Nov 25 17:06:05 compute-0 nova_compute[254092]: 2025-11-25 17:06:05.887 254096 DEBUG nova.network.neutron [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Successfully created port: 843d689b-6e0b-4ce9-9177-6c3cd41a19d6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 17:06:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2477: 321 pgs: 321 active+clean; 316 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 210 KiB/s rd, 1.9 MiB/s wr, 112 op/s
Nov 25 17:06:06 compute-0 nova_compute[254092]: 2025-11-25 17:06:06.663 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:06 compute-0 nova_compute[254092]: 2025-11-25 17:06:06.697 254096 DEBUG nova.network.neutron [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Successfully updated port: 843d689b-6e0b-4ce9-9177-6c3cd41a19d6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:06:06 compute-0 nova_compute[254092]: 2025-11-25 17:06:06.719 254096 DEBUG oslo_concurrency.lockutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "refresh_cache-a4e18007-11e8-4531-9dc8-8cbc10fe2b75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:06:06 compute-0 nova_compute[254092]: 2025-11-25 17:06:06.719 254096 DEBUG oslo_concurrency.lockutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquired lock "refresh_cache-a4e18007-11e8-4531-9dc8-8cbc10fe2b75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:06:06 compute-0 nova_compute[254092]: 2025-11-25 17:06:06.719 254096 DEBUG nova.network.neutron [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 17:06:06 compute-0 nova_compute[254092]: 2025-11-25 17:06:06.817 254096 DEBUG nova.compute.manager [req-732507c5-3b27-4665-bb48-7fdd9137eb54 req-58747e03-c6f1-47c5-9c85-5bd38ceeba58 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Received event network-changed-843d689b-6e0b-4ce9-9177-6c3cd41a19d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:06:06 compute-0 nova_compute[254092]: 2025-11-25 17:06:06.818 254096 DEBUG nova.compute.manager [req-732507c5-3b27-4665-bb48-7fdd9137eb54 req-58747e03-c6f1-47c5-9c85-5bd38ceeba58 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Refreshing instance network info cache due to event network-changed-843d689b-6e0b-4ce9-9177-6c3cd41a19d6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:06:06 compute-0 nova_compute[254092]: 2025-11-25 17:06:06.818 254096 DEBUG oslo_concurrency.lockutils [req-732507c5-3b27-4665-bb48-7fdd9137eb54 req-58747e03-c6f1-47c5-9c85-5bd38ceeba58 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-a4e18007-11e8-4531-9dc8-8cbc10fe2b75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:06:06 compute-0 nova_compute[254092]: 2025-11-25 17:06:06.866 254096 DEBUG nova.network.neutron [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:06:07 compute-0 ceph-mon[74985]: pgmap v2477: 321 pgs: 321 active+clean; 316 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 210 KiB/s rd, 1.9 MiB/s wr, 112 op/s
Nov 25 17:06:07 compute-0 nova_compute[254092]: 2025-11-25 17:06:07.920 254096 DEBUG nova.network.neutron [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Updating instance_info_cache with network_info: [{"id": "843d689b-6e0b-4ce9-9177-6c3cd41a19d6", "address": "fa:16:3e:81:3c:32", "network": {"id": "bf9312b9-f4d2-496f-a143-7586e12fbee3", "bridge": "br-int", "label": "tempest-network-smoke--1513117463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap843d689b-6e", "ovs_interfaceid": "843d689b-6e0b-4ce9-9177-6c3cd41a19d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:06:07 compute-0 nova_compute[254092]: 2025-11-25 17:06:07.944 254096 DEBUG oslo_concurrency.lockutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Releasing lock "refresh_cache-a4e18007-11e8-4531-9dc8-8cbc10fe2b75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:06:07 compute-0 nova_compute[254092]: 2025-11-25 17:06:07.945 254096 DEBUG nova.compute.manager [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Instance network_info: |[{"id": "843d689b-6e0b-4ce9-9177-6c3cd41a19d6", "address": "fa:16:3e:81:3c:32", "network": {"id": "bf9312b9-f4d2-496f-a143-7586e12fbee3", "bridge": "br-int", "label": "tempest-network-smoke--1513117463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap843d689b-6e", "ovs_interfaceid": "843d689b-6e0b-4ce9-9177-6c3cd41a19d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 17:06:07 compute-0 nova_compute[254092]: 2025-11-25 17:06:07.946 254096 DEBUG oslo_concurrency.lockutils [req-732507c5-3b27-4665-bb48-7fdd9137eb54 req-58747e03-c6f1-47c5-9c85-5bd38ceeba58 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-a4e18007-11e8-4531-9dc8-8cbc10fe2b75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:06:07 compute-0 nova_compute[254092]: 2025-11-25 17:06:07.946 254096 DEBUG nova.network.neutron [req-732507c5-3b27-4665-bb48-7fdd9137eb54 req-58747e03-c6f1-47c5-9c85-5bd38ceeba58 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Refreshing network info cache for port 843d689b-6e0b-4ce9-9177-6c3cd41a19d6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:06:07 compute-0 nova_compute[254092]: 2025-11-25 17:06:07.950 254096 DEBUG nova.virt.libvirt.driver [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Start _get_guest_xml network_info=[{"id": "843d689b-6e0b-4ce9-9177-6c3cd41a19d6", "address": "fa:16:3e:81:3c:32", "network": {"id": "bf9312b9-f4d2-496f-a143-7586e12fbee3", "bridge": "br-int", "label": "tempest-network-smoke--1513117463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap843d689b-6e", "ovs_interfaceid": "843d689b-6e0b-4ce9-9177-6c3cd41a19d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 17:06:07 compute-0 nova_compute[254092]: 2025-11-25 17:06:07.954 254096 WARNING nova.virt.libvirt.driver [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:06:07 compute-0 nova_compute[254092]: 2025-11-25 17:06:07.970 254096 DEBUG nova.virt.libvirt.host [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 17:06:07 compute-0 nova_compute[254092]: 2025-11-25 17:06:07.971 254096 DEBUG nova.virt.libvirt.host [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 17:06:07 compute-0 nova_compute[254092]: 2025-11-25 17:06:07.975 254096 DEBUG nova.virt.libvirt.host [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 17:06:07 compute-0 nova_compute[254092]: 2025-11-25 17:06:07.976 254096 DEBUG nova.virt.libvirt.host [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 17:06:07 compute-0 nova_compute[254092]: 2025-11-25 17:06:07.977 254096 DEBUG nova.virt.libvirt.driver [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 17:06:07 compute-0 nova_compute[254092]: 2025-11-25 17:06:07.977 254096 DEBUG nova.virt.hardware [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 17:06:07 compute-0 nova_compute[254092]: 2025-11-25 17:06:07.978 254096 DEBUG nova.virt.hardware [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 17:06:07 compute-0 nova_compute[254092]: 2025-11-25 17:06:07.978 254096 DEBUG nova.virt.hardware [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 17:06:07 compute-0 nova_compute[254092]: 2025-11-25 17:06:07.979 254096 DEBUG nova.virt.hardware [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 17:06:07 compute-0 nova_compute[254092]: 2025-11-25 17:06:07.979 254096 DEBUG nova.virt.hardware [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 17:06:07 compute-0 nova_compute[254092]: 2025-11-25 17:06:07.979 254096 DEBUG nova.virt.hardware [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 17:06:07 compute-0 nova_compute[254092]: 2025-11-25 17:06:07.980 254096 DEBUG nova.virt.hardware [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 17:06:07 compute-0 nova_compute[254092]: 2025-11-25 17:06:07.980 254096 DEBUG nova.virt.hardware [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 17:06:07 compute-0 nova_compute[254092]: 2025-11-25 17:06:07.981 254096 DEBUG nova.virt.hardware [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 17:06:07 compute-0 nova_compute[254092]: 2025-11-25 17:06:07.981 254096 DEBUG nova.virt.hardware [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 17:06:07 compute-0 nova_compute[254092]: 2025-11-25 17:06:07.981 254096 DEBUG nova.virt.hardware [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 17:06:07 compute-0 nova_compute[254092]: 2025-11-25 17:06:07.986 254096 DEBUG oslo_concurrency.processutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:06:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2478: 321 pgs: 321 active+clean; 316 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 1.3 MiB/s wr, 77 op/s
Nov 25 17:06:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:06:08 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/732536767' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:06:08 compute-0 nova_compute[254092]: 2025-11-25 17:06:08.488 254096 DEBUG oslo_concurrency.processutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:06:08 compute-0 nova_compute[254092]: 2025-11-25 17:06:08.513 254096 DEBUG nova.storage.rbd_utils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image a4e18007-11e8-4531-9dc8-8cbc10fe2b75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:06:08 compute-0 nova_compute[254092]: 2025-11-25 17:06:08.555 254096 DEBUG oslo_concurrency.processutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:06:08 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/732536767' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:06:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:06:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:06:08 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1670394506' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:06:09 compute-0 nova_compute[254092]: 2025-11-25 17:06:09.014 254096 DEBUG oslo_concurrency.processutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:06:09 compute-0 nova_compute[254092]: 2025-11-25 17:06:09.018 254096 DEBUG nova.virt.libvirt.vif [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:06:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-gen-1-1448912693',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-gen-1-1448912693',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-804365052-gen',id=124,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM0UWFHQnhc6ntJ2USd1Ng03MEqRsDa8bLN+NceDu38bBM1Ko27FOUngPC7ayjW27qy5z0eeftBbrxj3YI6kVPDUfNU6Nlwiw/u+/fuCOrRMF8RUzgbS8zeTV5MDACNtGg==',key_name='tempest-TestSecurityGroupsBasicOps-278470401',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f19d2d0776e84c3c99c640c0b7ed3743',ramdisk_id='',reservation_id='r-nzwglgyu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-804365052',owner_user_name='tempest-TestSecurityGroupsBasicOps-804365052-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:06:04Z,user_data=None,user_id='72f29c5d97464520bee19cb128258fd5',uuid=a4e18007-11e8-4531-9dc8-8cbc10fe2b75,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "843d689b-6e0b-4ce9-9177-6c3cd41a19d6", "address": "fa:16:3e:81:3c:32", "network": {"id": "bf9312b9-f4d2-496f-a143-7586e12fbee3", "bridge": "br-int", "label": "tempest-network-smoke--1513117463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap843d689b-6e", "ovs_interfaceid": "843d689b-6e0b-4ce9-9177-6c3cd41a19d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:06:09 compute-0 nova_compute[254092]: 2025-11-25 17:06:09.019 254096 DEBUG nova.network.os_vif_util [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converting VIF {"id": "843d689b-6e0b-4ce9-9177-6c3cd41a19d6", "address": "fa:16:3e:81:3c:32", "network": {"id": "bf9312b9-f4d2-496f-a143-7586e12fbee3", "bridge": "br-int", "label": "tempest-network-smoke--1513117463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap843d689b-6e", "ovs_interfaceid": "843d689b-6e0b-4ce9-9177-6c3cd41a19d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:06:09 compute-0 nova_compute[254092]: 2025-11-25 17:06:09.021 254096 DEBUG nova.network.os_vif_util [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:3c:32,bridge_name='br-int',has_traffic_filtering=True,id=843d689b-6e0b-4ce9-9177-6c3cd41a19d6,network=Network(bf9312b9-f4d2-496f-a143-7586e12fbee3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap843d689b-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:06:09 compute-0 nova_compute[254092]: 2025-11-25 17:06:09.023 254096 DEBUG nova.objects.instance [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lazy-loading 'pci_devices' on Instance uuid a4e18007-11e8-4531-9dc8-8cbc10fe2b75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:06:09 compute-0 nova_compute[254092]: 2025-11-25 17:06:09.040 254096 DEBUG nova.virt.libvirt.driver [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] End _get_guest_xml xml=<domain type="kvm">
Nov 25 17:06:09 compute-0 nova_compute[254092]:   <uuid>a4e18007-11e8-4531-9dc8-8cbc10fe2b75</uuid>
Nov 25 17:06:09 compute-0 nova_compute[254092]:   <name>instance-0000007c</name>
Nov 25 17:06:09 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 17:06:09 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 17:06:09 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:06:09 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-gen-1-1448912693</nova:name>
Nov 25 17:06:09 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 17:06:07</nova:creationTime>
Nov 25 17:06:09 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 17:06:09 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 17:06:09 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 17:06:09 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 17:06:09 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:06:09 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 17:06:09 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 17:06:09 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 17:06:09 compute-0 nova_compute[254092]:         <nova:user uuid="72f29c5d97464520bee19cb128258fd5">tempest-TestSecurityGroupsBasicOps-804365052-project-member</nova:user>
Nov 25 17:06:09 compute-0 nova_compute[254092]:         <nova:project uuid="f19d2d0776e84c3c99c640c0b7ed3743">tempest-TestSecurityGroupsBasicOps-804365052</nova:project>
Nov 25 17:06:09 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 17:06:09 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 17:06:09 compute-0 nova_compute[254092]:         <nova:port uuid="843d689b-6e0b-4ce9-9177-6c3cd41a19d6">
Nov 25 17:06:09 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 17:06:09 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 17:06:09 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:06:09 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 17:06:09 compute-0 nova_compute[254092]:     <system>
Nov 25 17:06:09 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 17:06:09 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 17:06:09 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:06:09 compute-0 nova_compute[254092]:       <entry name="serial">a4e18007-11e8-4531-9dc8-8cbc10fe2b75</entry>
Nov 25 17:06:09 compute-0 nova_compute[254092]:       <entry name="uuid">a4e18007-11e8-4531-9dc8-8cbc10fe2b75</entry>
Nov 25 17:06:09 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     </system>
Nov 25 17:06:09 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:06:09 compute-0 nova_compute[254092]:   <os>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:   </os>
Nov 25 17:06:09 compute-0 nova_compute[254092]:   <features>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:   </features>
Nov 25 17:06:09 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 17:06:09 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:06:09 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 17:06:09 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:06:09 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 17:06:09 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/a4e18007-11e8-4531-9dc8-8cbc10fe2b75_disk">
Nov 25 17:06:09 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:       </source>
Nov 25 17:06:09 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:06:09 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:06:09 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 17:06:09 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/a4e18007-11e8-4531-9dc8-8cbc10fe2b75_disk.config">
Nov 25 17:06:09 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:       </source>
Nov 25 17:06:09 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:06:09 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:06:09 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 17:06:09 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:81:3c:32"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:       <target dev="tap843d689b-6e"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 17:06:09 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/a4e18007-11e8-4531-9dc8-8cbc10fe2b75/console.log" append="off"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     <video>
Nov 25 17:06:09 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     </video>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 17:06:09 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 17:06:09 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 17:06:09 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:06:09 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:06:09 compute-0 nova_compute[254092]: </domain>
Nov 25 17:06:09 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 17:06:09 compute-0 nova_compute[254092]: 2025-11-25 17:06:09.042 254096 DEBUG nova.compute.manager [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Preparing to wait for external event network-vif-plugged-843d689b-6e0b-4ce9-9177-6c3cd41a19d6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 17:06:09 compute-0 nova_compute[254092]: 2025-11-25 17:06:09.043 254096 DEBUG oslo_concurrency.lockutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "a4e18007-11e8-4531-9dc8-8cbc10fe2b75-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:06:09 compute-0 nova_compute[254092]: 2025-11-25 17:06:09.044 254096 DEBUG oslo_concurrency.lockutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "a4e18007-11e8-4531-9dc8-8cbc10fe2b75-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:06:09 compute-0 nova_compute[254092]: 2025-11-25 17:06:09.044 254096 DEBUG oslo_concurrency.lockutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "a4e18007-11e8-4531-9dc8-8cbc10fe2b75-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:06:09 compute-0 nova_compute[254092]: 2025-11-25 17:06:09.046 254096 DEBUG nova.virt.libvirt.vif [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:06:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-gen-1-1448912693',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-gen-1-1448912693',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-804365052-gen',id=124,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM0UWFHQnhc6ntJ2USd1Ng03MEqRsDa8bLN+NceDu38bBM1Ko27FOUngPC7ayjW27qy5z0eeftBbrxj3YI6kVPDUfNU6Nlwiw/u+/fuCOrRMF8RUzgbS8zeTV5MDACNtGg==',key_name='tempest-TestSecurityGroupsBasicOps-278470401',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f19d2d0776e84c3c99c640c0b7ed3743',ramdisk_id='',reservation_id='r-nzwglgyu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-804365052',owner_user_name='tempest-TestSecurityGroupsBasicOps-804365052-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:06:04Z,user_data=None,user_id='72f29c5d97464520bee19cb128258fd5',uuid=a4e18007-11e8-4531-9dc8-8cbc10fe2b75,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "843d689b-6e0b-4ce9-9177-6c3cd41a19d6", "address": "fa:16:3e:81:3c:32", "network": {"id": "bf9312b9-f4d2-496f-a143-7586e12fbee3", "bridge": "br-int", "label": "tempest-network-smoke--1513117463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap843d689b-6e", "ovs_interfaceid": "843d689b-6e0b-4ce9-9177-6c3cd41a19d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:06:09 compute-0 nova_compute[254092]: 2025-11-25 17:06:09.046 254096 DEBUG nova.network.os_vif_util [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converting VIF {"id": "843d689b-6e0b-4ce9-9177-6c3cd41a19d6", "address": "fa:16:3e:81:3c:32", "network": {"id": "bf9312b9-f4d2-496f-a143-7586e12fbee3", "bridge": "br-int", "label": "tempest-network-smoke--1513117463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap843d689b-6e", "ovs_interfaceid": "843d689b-6e0b-4ce9-9177-6c3cd41a19d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:06:09 compute-0 nova_compute[254092]: 2025-11-25 17:06:09.047 254096 DEBUG nova.network.os_vif_util [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:3c:32,bridge_name='br-int',has_traffic_filtering=True,id=843d689b-6e0b-4ce9-9177-6c3cd41a19d6,network=Network(bf9312b9-f4d2-496f-a143-7586e12fbee3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap843d689b-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:06:09 compute-0 nova_compute[254092]: 2025-11-25 17:06:09.048 254096 DEBUG os_vif [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:3c:32,bridge_name='br-int',has_traffic_filtering=True,id=843d689b-6e0b-4ce9-9177-6c3cd41a19d6,network=Network(bf9312b9-f4d2-496f-a143-7586e12fbee3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap843d689b-6e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:06:09 compute-0 nova_compute[254092]: 2025-11-25 17:06:09.049 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:09 compute-0 nova_compute[254092]: 2025-11-25 17:06:09.050 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:06:09 compute-0 nova_compute[254092]: 2025-11-25 17:06:09.051 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:06:09 compute-0 nova_compute[254092]: 2025-11-25 17:06:09.057 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:09 compute-0 nova_compute[254092]: 2025-11-25 17:06:09.057 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap843d689b-6e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:06:09 compute-0 nova_compute[254092]: 2025-11-25 17:06:09.058 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap843d689b-6e, col_values=(('external_ids', {'iface-id': '843d689b-6e0b-4ce9-9177-6c3cd41a19d6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:81:3c:32', 'vm-uuid': 'a4e18007-11e8-4531-9dc8-8cbc10fe2b75'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:06:09 compute-0 nova_compute[254092]: 2025-11-25 17:06:09.061 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:09 compute-0 NetworkManager[48891]: <info>  [1764090369.0622] manager: (tap843d689b-6e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/528)
Nov 25 17:06:09 compute-0 nova_compute[254092]: 2025-11-25 17:06:09.070 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:09 compute-0 nova_compute[254092]: 2025-11-25 17:06:09.072 254096 INFO os_vif [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:3c:32,bridge_name='br-int',has_traffic_filtering=True,id=843d689b-6e0b-4ce9-9177-6c3cd41a19d6,network=Network(bf9312b9-f4d2-496f-a143-7586e12fbee3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap843d689b-6e')
Nov 25 17:06:09 compute-0 nova_compute[254092]: 2025-11-25 17:06:09.136 254096 DEBUG nova.virt.libvirt.driver [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:06:09 compute-0 nova_compute[254092]: 2025-11-25 17:06:09.138 254096 DEBUG nova.virt.libvirt.driver [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:06:09 compute-0 nova_compute[254092]: 2025-11-25 17:06:09.138 254096 DEBUG nova.virt.libvirt.driver [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] No VIF found with MAC fa:16:3e:81:3c:32, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:06:09 compute-0 nova_compute[254092]: 2025-11-25 17:06:09.139 254096 INFO nova.virt.libvirt.driver [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Using config drive
Nov 25 17:06:09 compute-0 nova_compute[254092]: 2025-11-25 17:06:09.168 254096 DEBUG nova.storage.rbd_utils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image a4e18007-11e8-4531-9dc8-8cbc10fe2b75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:06:09 compute-0 nova_compute[254092]: 2025-11-25 17:06:09.475 254096 DEBUG nova.network.neutron [req-732507c5-3b27-4665-bb48-7fdd9137eb54 req-58747e03-c6f1-47c5-9c85-5bd38ceeba58 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Updated VIF entry in instance network info cache for port 843d689b-6e0b-4ce9-9177-6c3cd41a19d6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:06:09 compute-0 nova_compute[254092]: 2025-11-25 17:06:09.476 254096 DEBUG nova.network.neutron [req-732507c5-3b27-4665-bb48-7fdd9137eb54 req-58747e03-c6f1-47c5-9c85-5bd38ceeba58 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Updating instance_info_cache with network_info: [{"id": "843d689b-6e0b-4ce9-9177-6c3cd41a19d6", "address": "fa:16:3e:81:3c:32", "network": {"id": "bf9312b9-f4d2-496f-a143-7586e12fbee3", "bridge": "br-int", "label": "tempest-network-smoke--1513117463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap843d689b-6e", "ovs_interfaceid": "843d689b-6e0b-4ce9-9177-6c3cd41a19d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:06:09 compute-0 nova_compute[254092]: 2025-11-25 17:06:09.497 254096 DEBUG oslo_concurrency.lockutils [req-732507c5-3b27-4665-bb48-7fdd9137eb54 req-58747e03-c6f1-47c5-9c85-5bd38ceeba58 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-a4e18007-11e8-4531-9dc8-8cbc10fe2b75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:06:09 compute-0 nova_compute[254092]: 2025-11-25 17:06:09.513 254096 INFO nova.virt.libvirt.driver [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Creating config drive at /var/lib/nova/instances/a4e18007-11e8-4531-9dc8-8cbc10fe2b75/disk.config
Nov 25 17:06:09 compute-0 nova_compute[254092]: 2025-11-25 17:06:09.518 254096 DEBUG oslo_concurrency.processutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a4e18007-11e8-4531-9dc8-8cbc10fe2b75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2ur27q0a execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:06:09 compute-0 ceph-mon[74985]: pgmap v2478: 321 pgs: 321 active+clean; 316 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 1.3 MiB/s wr, 77 op/s
Nov 25 17:06:09 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1670394506' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:06:09 compute-0 nova_compute[254092]: 2025-11-25 17:06:09.674 254096 DEBUG oslo_concurrency.processutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a4e18007-11e8-4531-9dc8-8cbc10fe2b75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2ur27q0a" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:06:09 compute-0 nova_compute[254092]: 2025-11-25 17:06:09.708 254096 DEBUG nova.storage.rbd_utils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image a4e18007-11e8-4531-9dc8-8cbc10fe2b75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:06:09 compute-0 nova_compute[254092]: 2025-11-25 17:06:09.712 254096 DEBUG oslo_concurrency.processutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a4e18007-11e8-4531-9dc8-8cbc10fe2b75/disk.config a4e18007-11e8-4531-9dc8-8cbc10fe2b75_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:06:09 compute-0 nova_compute[254092]: 2025-11-25 17:06:09.888 254096 DEBUG oslo_concurrency.processutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a4e18007-11e8-4531-9dc8-8cbc10fe2b75/disk.config a4e18007-11e8-4531-9dc8-8cbc10fe2b75_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:06:09 compute-0 nova_compute[254092]: 2025-11-25 17:06:09.889 254096 INFO nova.virt.libvirt.driver [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Deleting local config drive /var/lib/nova/instances/a4e18007-11e8-4531-9dc8-8cbc10fe2b75/disk.config because it was imported into RBD.
Nov 25 17:06:09 compute-0 kernel: tap843d689b-6e: entered promiscuous mode
Nov 25 17:06:09 compute-0 NetworkManager[48891]: <info>  [1764090369.9521] manager: (tap843d689b-6e): new Tun device (/org/freedesktop/NetworkManager/Devices/529)
Nov 25 17:06:09 compute-0 ovn_controller[153477]: 2025-11-25T17:06:09Z|01270|binding|INFO|Claiming lport 843d689b-6e0b-4ce9-9177-6c3cd41a19d6 for this chassis.
Nov 25 17:06:09 compute-0 ovn_controller[153477]: 2025-11-25T17:06:09Z|01271|binding|INFO|843d689b-6e0b-4ce9-9177-6c3cd41a19d6: Claiming fa:16:3e:81:3c:32 10.100.0.5
Nov 25 17:06:09 compute-0 nova_compute[254092]: 2025-11-25 17:06:09.953 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:09.967 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:3c:32 10.100.0.5'], port_security=['fa:16:3e:81:3c:32 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'a4e18007-11e8-4531-9dc8-8cbc10fe2b75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf9312b9-f4d2-496f-a143-7586e12fbee3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f19d2d0776e84c3c99c640c0b7ed3743', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7a710be2-f756-4f31-8d8e-270c10735b5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=87dee46c-adf2-4475-bb58-d34bae0a4b6e, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=843d689b-6e0b-4ce9-9177-6c3cd41a19d6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:06:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:09.968 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 843d689b-6e0b-4ce9-9177-6c3cd41a19d6 in datapath bf9312b9-f4d2-496f-a143-7586e12fbee3 bound to our chassis
Nov 25 17:06:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:09.969 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bf9312b9-f4d2-496f-a143-7586e12fbee3
Nov 25 17:06:09 compute-0 nova_compute[254092]: 2025-11-25 17:06:09.971 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:09 compute-0 ovn_controller[153477]: 2025-11-25T17:06:09Z|01272|binding|INFO|Setting lport 843d689b-6e0b-4ce9-9177-6c3cd41a19d6 ovn-installed in OVS
Nov 25 17:06:09 compute-0 ovn_controller[153477]: 2025-11-25T17:06:09Z|01273|binding|INFO|Setting lport 843d689b-6e0b-4ce9-9177-6c3cd41a19d6 up in Southbound
Nov 25 17:06:09 compute-0 nova_compute[254092]: 2025-11-25 17:06:09.976 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:09.986 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d0f6cb0c-595c-4944-9441-bd299f89fc3d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:09 compute-0 systemd-machined[216343]: New machine qemu-156-instance-0000007c.
Nov 25 17:06:10 compute-0 systemd[1]: Started Virtual Machine qemu-156-instance-0000007c.
Nov 25 17:06:10 compute-0 systemd-udevd[390213]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:06:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:10.015 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[81a2913e-1357-4e4f-ac68-e8142f68b61c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:10.019 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[83553275-9fe6-41da-8198-b623c49e4d93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:10 compute-0 NetworkManager[48891]: <info>  [1764090370.0248] device (tap843d689b-6e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:06:10 compute-0 NetworkManager[48891]: <info>  [1764090370.0263] device (tap843d689b-6e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:06:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:10.046 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6f2b5819-6249-4570-a7af-6248894d76bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:10.067 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[94038631-53eb-4a07-8cf3-3ca1bf201b91]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbf9312b9-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4f:2f:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 373], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 679585, 'reachable_time': 39201, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 390221, 'error': None, 'target': 'ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:06:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:06:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:06:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:06:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:06:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:06:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:10.090 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a6ba1dd3-2212-4532-9056-9162e8b09964]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapbf9312b9-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 679598, 'tstamp': 679598}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 390224, 'error': None, 'target': 'ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapbf9312b9-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 679602, 'tstamp': 679602}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 390224, 'error': None, 'target': 'ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:10.091 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf9312b9-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:06:10 compute-0 nova_compute[254092]: 2025-11-25 17:06:10.093 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:10.094 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf9312b9-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:06:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:10.094 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:06:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:10.095 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbf9312b9-f0, col_values=(('external_ids', {'iface-id': 'c21d5ea9-f23a-4c6c-8f75-917f0bc5fa35'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:06:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:10.095 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:06:10 compute-0 nova_compute[254092]: 2025-11-25 17:06:10.337 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:10 compute-0 nova_compute[254092]: 2025-11-25 17:06:10.406 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090370.4056184, a4e18007-11e8-4531-9dc8-8cbc10fe2b75 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:06:10 compute-0 nova_compute[254092]: 2025-11-25 17:06:10.407 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] VM Started (Lifecycle Event)
Nov 25 17:06:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2479: 321 pgs: 321 active+clean; 326 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 1.8 MiB/s wr, 90 op/s
Nov 25 17:06:10 compute-0 nova_compute[254092]: 2025-11-25 17:06:10.436 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:06:10 compute-0 nova_compute[254092]: 2025-11-25 17:06:10.440 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090370.4058151, a4e18007-11e8-4531-9dc8-8cbc10fe2b75 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:06:10 compute-0 nova_compute[254092]: 2025-11-25 17:06:10.441 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] VM Paused (Lifecycle Event)
Nov 25 17:06:10 compute-0 nova_compute[254092]: 2025-11-25 17:06:10.464 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:06:10 compute-0 nova_compute[254092]: 2025-11-25 17:06:10.468 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:06:10 compute-0 nova_compute[254092]: 2025-11-25 17:06:10.495 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:06:10 compute-0 nova_compute[254092]: 2025-11-25 17:06:10.703 254096 DEBUG nova.compute.manager [req-d68818f4-51f8-486a-b0d3-ee7e597fab07 req-412cfead-4a33-49d5-988e-dbc1832e92ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Received event network-vif-plugged-843d689b-6e0b-4ce9-9177-6c3cd41a19d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:06:10 compute-0 nova_compute[254092]: 2025-11-25 17:06:10.705 254096 DEBUG oslo_concurrency.lockutils [req-d68818f4-51f8-486a-b0d3-ee7e597fab07 req-412cfead-4a33-49d5-988e-dbc1832e92ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a4e18007-11e8-4531-9dc8-8cbc10fe2b75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:06:10 compute-0 nova_compute[254092]: 2025-11-25 17:06:10.706 254096 DEBUG oslo_concurrency.lockutils [req-d68818f4-51f8-486a-b0d3-ee7e597fab07 req-412cfead-4a33-49d5-988e-dbc1832e92ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a4e18007-11e8-4531-9dc8-8cbc10fe2b75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:06:10 compute-0 nova_compute[254092]: 2025-11-25 17:06:10.706 254096 DEBUG oslo_concurrency.lockutils [req-d68818f4-51f8-486a-b0d3-ee7e597fab07 req-412cfead-4a33-49d5-988e-dbc1832e92ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a4e18007-11e8-4531-9dc8-8cbc10fe2b75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:06:10 compute-0 nova_compute[254092]: 2025-11-25 17:06:10.707 254096 DEBUG nova.compute.manager [req-d68818f4-51f8-486a-b0d3-ee7e597fab07 req-412cfead-4a33-49d5-988e-dbc1832e92ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Processing event network-vif-plugged-843d689b-6e0b-4ce9-9177-6c3cd41a19d6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 17:06:10 compute-0 nova_compute[254092]: 2025-11-25 17:06:10.708 254096 DEBUG nova.compute.manager [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 17:06:10 compute-0 nova_compute[254092]: 2025-11-25 17:06:10.711 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090370.7108247, a4e18007-11e8-4531-9dc8-8cbc10fe2b75 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:06:10 compute-0 nova_compute[254092]: 2025-11-25 17:06:10.711 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] VM Resumed (Lifecycle Event)
Nov 25 17:06:10 compute-0 nova_compute[254092]: 2025-11-25 17:06:10.713 254096 DEBUG nova.virt.libvirt.driver [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 17:06:10 compute-0 nova_compute[254092]: 2025-11-25 17:06:10.717 254096 INFO nova.virt.libvirt.driver [-] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Instance spawned successfully.
Nov 25 17:06:10 compute-0 nova_compute[254092]: 2025-11-25 17:06:10.717 254096 DEBUG nova.virt.libvirt.driver [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 17:06:10 compute-0 nova_compute[254092]: 2025-11-25 17:06:10.732 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:06:10 compute-0 nova_compute[254092]: 2025-11-25 17:06:10.737 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:06:10 compute-0 nova_compute[254092]: 2025-11-25 17:06:10.744 254096 DEBUG nova.virt.libvirt.driver [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:06:10 compute-0 nova_compute[254092]: 2025-11-25 17:06:10.744 254096 DEBUG nova.virt.libvirt.driver [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:06:10 compute-0 nova_compute[254092]: 2025-11-25 17:06:10.745 254096 DEBUG nova.virt.libvirt.driver [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:06:10 compute-0 nova_compute[254092]: 2025-11-25 17:06:10.745 254096 DEBUG nova.virt.libvirt.driver [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:06:10 compute-0 nova_compute[254092]: 2025-11-25 17:06:10.746 254096 DEBUG nova.virt.libvirt.driver [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:06:10 compute-0 nova_compute[254092]: 2025-11-25 17:06:10.746 254096 DEBUG nova.virt.libvirt.driver [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:06:10 compute-0 nova_compute[254092]: 2025-11-25 17:06:10.765 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:06:10 compute-0 nova_compute[254092]: 2025-11-25 17:06:10.819 254096 INFO nova.compute.manager [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Took 6.36 seconds to spawn the instance on the hypervisor.
Nov 25 17:06:10 compute-0 nova_compute[254092]: 2025-11-25 17:06:10.820 254096 DEBUG nova.compute.manager [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:06:10 compute-0 nova_compute[254092]: 2025-11-25 17:06:10.879 254096 INFO nova.compute.manager [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Took 7.35 seconds to build instance.
Nov 25 17:06:10 compute-0 nova_compute[254092]: 2025-11-25 17:06:10.892 254096 DEBUG oslo_concurrency.lockutils [None req-225d4e54-b51e-44cb-8f19-c46ee3edc211 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "a4e18007-11e8-4531-9dc8-8cbc10fe2b75" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.431s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:06:11 compute-0 ceph-mon[74985]: pgmap v2479: 321 pgs: 321 active+clean; 326 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 1.8 MiB/s wr, 90 op/s
Nov 25 17:06:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2480: 321 pgs: 321 active+clean; 326 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 1.8 MiB/s wr, 95 op/s
Nov 25 17:06:12 compute-0 nova_compute[254092]: 2025-11-25 17:06:12.798 254096 DEBUG nova.compute.manager [req-b66bafda-230d-46c3-994a-486684627367 req-b33e7423-d539-4c0d-ad96-2664b5f3878f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Received event network-vif-plugged-843d689b-6e0b-4ce9-9177-6c3cd41a19d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:06:12 compute-0 nova_compute[254092]: 2025-11-25 17:06:12.800 254096 DEBUG oslo_concurrency.lockutils [req-b66bafda-230d-46c3-994a-486684627367 req-b33e7423-d539-4c0d-ad96-2664b5f3878f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a4e18007-11e8-4531-9dc8-8cbc10fe2b75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:06:12 compute-0 nova_compute[254092]: 2025-11-25 17:06:12.800 254096 DEBUG oslo_concurrency.lockutils [req-b66bafda-230d-46c3-994a-486684627367 req-b33e7423-d539-4c0d-ad96-2664b5f3878f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a4e18007-11e8-4531-9dc8-8cbc10fe2b75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:06:12 compute-0 nova_compute[254092]: 2025-11-25 17:06:12.801 254096 DEBUG oslo_concurrency.lockutils [req-b66bafda-230d-46c3-994a-486684627367 req-b33e7423-d539-4c0d-ad96-2664b5f3878f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a4e18007-11e8-4531-9dc8-8cbc10fe2b75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:06:12 compute-0 nova_compute[254092]: 2025-11-25 17:06:12.801 254096 DEBUG nova.compute.manager [req-b66bafda-230d-46c3-994a-486684627367 req-b33e7423-d539-4c0d-ad96-2664b5f3878f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] No waiting events found dispatching network-vif-plugged-843d689b-6e0b-4ce9-9177-6c3cd41a19d6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:06:12 compute-0 nova_compute[254092]: 2025-11-25 17:06:12.802 254096 WARNING nova.compute.manager [req-b66bafda-230d-46c3-994a-486684627367 req-b33e7423-d539-4c0d-ad96-2664b5f3878f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Received unexpected event network-vif-plugged-843d689b-6e0b-4ce9-9177-6c3cd41a19d6 for instance with vm_state active and task_state None.
Nov 25 17:06:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:13.643 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:06:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:13.644 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:06:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:13.646 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:06:13 compute-0 ceph-mon[74985]: pgmap v2480: 321 pgs: 321 active+clean; 326 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 1.8 MiB/s wr, 95 op/s
Nov 25 17:06:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:13.770 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:06:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:13.771 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 17:06:13 compute-0 nova_compute[254092]: 2025-11-25 17:06:13.803 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:13 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:06:13 compute-0 nova_compute[254092]: 2025-11-25 17:06:13.959 254096 DEBUG oslo_concurrency.lockutils [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "b747b045-786f-49a8-907c-cc222a07fa05" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:06:13 compute-0 nova_compute[254092]: 2025-11-25 17:06:13.960 254096 DEBUG oslo_concurrency.lockutils [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "b747b045-786f-49a8-907c-cc222a07fa05" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:06:13 compute-0 nova_compute[254092]: 2025-11-25 17:06:13.960 254096 DEBUG oslo_concurrency.lockutils [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "b747b045-786f-49a8-907c-cc222a07fa05-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:06:13 compute-0 nova_compute[254092]: 2025-11-25 17:06:13.961 254096 DEBUG oslo_concurrency.lockutils [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "b747b045-786f-49a8-907c-cc222a07fa05-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:06:13 compute-0 nova_compute[254092]: 2025-11-25 17:06:13.962 254096 DEBUG oslo_concurrency.lockutils [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "b747b045-786f-49a8-907c-cc222a07fa05-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:06:13 compute-0 nova_compute[254092]: 2025-11-25 17:06:13.964 254096 INFO nova.compute.manager [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Terminating instance
Nov 25 17:06:13 compute-0 nova_compute[254092]: 2025-11-25 17:06:13.965 254096 DEBUG nova.compute.manager [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 17:06:14 compute-0 kernel: tap4f707331-3a (unregistering): left promiscuous mode
Nov 25 17:06:14 compute-0 NetworkManager[48891]: <info>  [1764090374.0188] device (tap4f707331-3a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:06:14 compute-0 nova_compute[254092]: 2025-11-25 17:06:14.026 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:14 compute-0 ovn_controller[153477]: 2025-11-25T17:06:14Z|01274|binding|INFO|Releasing lport 4f707331-3a0e-47b6-98ee-569db81bd594 from this chassis (sb_readonly=0)
Nov 25 17:06:14 compute-0 ovn_controller[153477]: 2025-11-25T17:06:14Z|01275|binding|INFO|Setting lport 4f707331-3a0e-47b6-98ee-569db81bd594 down in Southbound
Nov 25 17:06:14 compute-0 ovn_controller[153477]: 2025-11-25T17:06:14Z|01276|binding|INFO|Removing iface tap4f707331-3a ovn-installed in OVS
Nov 25 17:06:14 compute-0 nova_compute[254092]: 2025-11-25 17:06:14.030 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:14.034 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:5c:fc 10.100.0.26'], port_security=['fa:16:3e:15:5c:fc 10.100.0.26'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.26/28', 'neutron:device_id': 'b747b045-786f-49a8-907c-cc222a07fa05', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a48b006-a4d1-4fa5-88f1-79386ed2958f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '4', 'neutron:security_group_ids': '30729089-f7ac-4b8e-887e-74cce32287f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3bba81af-c0c6-44bd-ba73-08033ea9224a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=4f707331-3a0e-47b6-98ee-569db81bd594) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:06:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:14.035 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 4f707331-3a0e-47b6-98ee-569db81bd594 in datapath 4a48b006-a4d1-4fa5-88f1-79386ed2958f unbound from our chassis
Nov 25 17:06:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:14.036 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4a48b006-a4d1-4fa5-88f1-79386ed2958f
Nov 25 17:06:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:14.051 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7b585b8f-4c34-4edf-958d-7d4733390bfc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:14 compute-0 nova_compute[254092]: 2025-11-25 17:06:14.059 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:14.097 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f4ad3e31-32bb-41ee-9814-5752891adf86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:14.101 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[b7071753-ff4b-4618-b4d0-52afd2d397ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:14 compute-0 systemd[1]: machine-qemu\x2d154\x2dinstance\x2d0000007a.scope: Deactivated successfully.
Nov 25 17:06:14 compute-0 systemd[1]: machine-qemu\x2d154\x2dinstance\x2d0000007a.scope: Consumed 14.819s CPU time.
Nov 25 17:06:14 compute-0 systemd-machined[216343]: Machine qemu-154-instance-0000007a terminated.
Nov 25 17:06:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:14.136 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e401413f-7ac2-4b48-88a2-625e78342ab9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:14.159 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ea16f1ce-c59e-4208-8855-d3118c3e3c47]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4a48b006-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:66:02:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 13, 'tx_packets': 7, 'rx_bytes': 922, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 13, 'tx_packets': 7, 'rx_bytes': 922, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 370], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676537, 'reachable_time': 38501, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 600, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 600, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 390279, 'error': None, 'target': 'ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:14.183 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1732aa52-9cdc-423a-87c6-7f8b20f0fdee]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tap4a48b006-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 676551, 'tstamp': 676551}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 390280, 'error': None, 'target': 'ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4a48b006-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 676554, 'tstamp': 676554}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 390280, 'error': None, 'target': 'ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:14.186 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a48b006-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:06:14 compute-0 nova_compute[254092]: 2025-11-25 17:06:14.188 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:14.199 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a48b006-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:06:14 compute-0 nova_compute[254092]: 2025-11-25 17:06:14.199 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:14.200 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:06:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:14.201 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4a48b006-a0, col_values=(('external_ids', {'iface-id': '85d5b09a-dc15-4154-acec-abe7a2e5fc19'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:06:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:14.201 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:06:14 compute-0 nova_compute[254092]: 2025-11-25 17:06:14.205 254096 INFO nova.virt.libvirt.driver [-] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Instance destroyed successfully.
Nov 25 17:06:14 compute-0 nova_compute[254092]: 2025-11-25 17:06:14.206 254096 DEBUG nova.objects.instance [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'resources' on Instance uuid b747b045-786f-49a8-907c-cc222a07fa05 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:06:14 compute-0 nova_compute[254092]: 2025-11-25 17:06:14.219 254096 DEBUG nova.virt.libvirt.vif [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:05:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2108522710',display_name='tempest-TestNetworkBasicOps-server-2108522710',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2108522710',id=122,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJvG1eO5jPF+SV0Ao2r1oFJ5d8qXkQHoty8TB6rNrtbLrWvCsgRx2hMEMQlhuNosicoMv5mD+tjKe9vDZEtzlGPJcyIy9mr2/vCsJq0iL2bTTiQYg0Y14H1/7blYpbPNoQ==',key_name='tempest-TestNetworkBasicOps-1506231644',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:05:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-5q8f6jt1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:05:25Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=b747b045-786f-49a8-907c-cc222a07fa05,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4f707331-3a0e-47b6-98ee-569db81bd594", "address": "fa:16:3e:15:5c:fc", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f707331-3a", "ovs_interfaceid": "4f707331-3a0e-47b6-98ee-569db81bd594", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:06:14 compute-0 nova_compute[254092]: 2025-11-25 17:06:14.219 254096 DEBUG nova.network.os_vif_util [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "4f707331-3a0e-47b6-98ee-569db81bd594", "address": "fa:16:3e:15:5c:fc", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f707331-3a", "ovs_interfaceid": "4f707331-3a0e-47b6-98ee-569db81bd594", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:06:14 compute-0 nova_compute[254092]: 2025-11-25 17:06:14.220 254096 DEBUG nova.network.os_vif_util [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:5c:fc,bridge_name='br-int',has_traffic_filtering=True,id=4f707331-3a0e-47b6-98ee-569db81bd594,network=Network(4a48b006-a4d1-4fa5-88f1-79386ed2958f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f707331-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:06:14 compute-0 nova_compute[254092]: 2025-11-25 17:06:14.220 254096 DEBUG os_vif [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:5c:fc,bridge_name='br-int',has_traffic_filtering=True,id=4f707331-3a0e-47b6-98ee-569db81bd594,network=Network(4a48b006-a4d1-4fa5-88f1-79386ed2958f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f707331-3a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:06:14 compute-0 nova_compute[254092]: 2025-11-25 17:06:14.221 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:14 compute-0 nova_compute[254092]: 2025-11-25 17:06:14.221 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f707331-3a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:06:14 compute-0 nova_compute[254092]: 2025-11-25 17:06:14.224 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:06:14 compute-0 nova_compute[254092]: 2025-11-25 17:06:14.227 254096 INFO os_vif [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:5c:fc,bridge_name='br-int',has_traffic_filtering=True,id=4f707331-3a0e-47b6-98ee-569db81bd594,network=Network(4a48b006-a4d1-4fa5-88f1-79386ed2958f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f707331-3a')
Nov 25 17:06:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2481: 321 pgs: 321 active+clean; 326 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 1.8 MiB/s wr, 94 op/s
Nov 25 17:06:14 compute-0 nova_compute[254092]: 2025-11-25 17:06:14.564 254096 INFO nova.virt.libvirt.driver [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Deleting instance files /var/lib/nova/instances/b747b045-786f-49a8-907c-cc222a07fa05_del
Nov 25 17:06:14 compute-0 nova_compute[254092]: 2025-11-25 17:06:14.566 254096 INFO nova.virt.libvirt.driver [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Deletion of /var/lib/nova/instances/b747b045-786f-49a8-907c-cc222a07fa05_del complete
Nov 25 17:06:14 compute-0 nova_compute[254092]: 2025-11-25 17:06:14.642 254096 INFO nova.compute.manager [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Took 0.68 seconds to destroy the instance on the hypervisor.
Nov 25 17:06:14 compute-0 nova_compute[254092]: 2025-11-25 17:06:14.643 254096 DEBUG oslo.service.loopingcall [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 17:06:14 compute-0 nova_compute[254092]: 2025-11-25 17:06:14.645 254096 DEBUG nova.compute.manager [-] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 17:06:14 compute-0 nova_compute[254092]: 2025-11-25 17:06:14.646 254096 DEBUG nova.network.neutron [-] [instance: b747b045-786f-49a8-907c-cc222a07fa05] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 17:06:14 compute-0 nova_compute[254092]: 2025-11-25 17:06:14.906 254096 DEBUG nova.compute.manager [req-0e93bae4-4cb0-48e8-b51f-5317433bc885 req-59043fcb-dc82-4a98-a221-da6d7835a67e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Received event network-changed-843d689b-6e0b-4ce9-9177-6c3cd41a19d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:06:14 compute-0 nova_compute[254092]: 2025-11-25 17:06:14.907 254096 DEBUG nova.compute.manager [req-0e93bae4-4cb0-48e8-b51f-5317433bc885 req-59043fcb-dc82-4a98-a221-da6d7835a67e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Refreshing instance network info cache due to event network-changed-843d689b-6e0b-4ce9-9177-6c3cd41a19d6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:06:14 compute-0 nova_compute[254092]: 2025-11-25 17:06:14.908 254096 DEBUG oslo_concurrency.lockutils [req-0e93bae4-4cb0-48e8-b51f-5317433bc885 req-59043fcb-dc82-4a98-a221-da6d7835a67e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-a4e18007-11e8-4531-9dc8-8cbc10fe2b75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:06:14 compute-0 nova_compute[254092]: 2025-11-25 17:06:14.908 254096 DEBUG oslo_concurrency.lockutils [req-0e93bae4-4cb0-48e8-b51f-5317433bc885 req-59043fcb-dc82-4a98-a221-da6d7835a67e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-a4e18007-11e8-4531-9dc8-8cbc10fe2b75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:06:14 compute-0 nova_compute[254092]: 2025-11-25 17:06:14.908 254096 DEBUG nova.network.neutron [req-0e93bae4-4cb0-48e8-b51f-5317433bc885 req-59043fcb-dc82-4a98-a221-da6d7835a67e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Refreshing network info cache for port 843d689b-6e0b-4ce9-9177-6c3cd41a19d6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:06:15 compute-0 nova_compute[254092]: 2025-11-25 17:06:15.341 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:15 compute-0 nova_compute[254092]: 2025-11-25 17:06:15.418 254096 DEBUG nova.network.neutron [-] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:06:15 compute-0 nova_compute[254092]: 2025-11-25 17:06:15.440 254096 INFO nova.compute.manager [-] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Took 0.79 seconds to deallocate network for instance.
Nov 25 17:06:15 compute-0 nova_compute[254092]: 2025-11-25 17:06:15.505 254096 DEBUG oslo_concurrency.lockutils [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:06:15 compute-0 nova_compute[254092]: 2025-11-25 17:06:15.505 254096 DEBUG oslo_concurrency.lockutils [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:06:15 compute-0 nova_compute[254092]: 2025-11-25 17:06:15.646 254096 DEBUG oslo_concurrency.processutils [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:06:15 compute-0 ceph-mon[74985]: pgmap v2481: 321 pgs: 321 active+clean; 326 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 1.8 MiB/s wr, 94 op/s
Nov 25 17:06:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:06:16 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1269446327' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:06:16 compute-0 nova_compute[254092]: 2025-11-25 17:06:16.108 254096 DEBUG oslo_concurrency.processutils [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:06:16 compute-0 nova_compute[254092]: 2025-11-25 17:06:16.117 254096 DEBUG nova.compute.provider_tree [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:06:16 compute-0 nova_compute[254092]: 2025-11-25 17:06:16.136 254096 DEBUG nova.scheduler.client.report [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:06:16 compute-0 nova_compute[254092]: 2025-11-25 17:06:16.139 254096 DEBUG nova.network.neutron [req-0e93bae4-4cb0-48e8-b51f-5317433bc885 req-59043fcb-dc82-4a98-a221-da6d7835a67e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Updated VIF entry in instance network info cache for port 843d689b-6e0b-4ce9-9177-6c3cd41a19d6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:06:16 compute-0 nova_compute[254092]: 2025-11-25 17:06:16.140 254096 DEBUG nova.network.neutron [req-0e93bae4-4cb0-48e8-b51f-5317433bc885 req-59043fcb-dc82-4a98-a221-da6d7835a67e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Updating instance_info_cache with network_info: [{"id": "843d689b-6e0b-4ce9-9177-6c3cd41a19d6", "address": "fa:16:3e:81:3c:32", "network": {"id": "bf9312b9-f4d2-496f-a143-7586e12fbee3", "bridge": "br-int", "label": "tempest-network-smoke--1513117463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap843d689b-6e", "ovs_interfaceid": "843d689b-6e0b-4ce9-9177-6c3cd41a19d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:06:16 compute-0 nova_compute[254092]: 2025-11-25 17:06:16.172 254096 DEBUG oslo_concurrency.lockutils [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:06:16 compute-0 nova_compute[254092]: 2025-11-25 17:06:16.178 254096 DEBUG oslo_concurrency.lockutils [req-0e93bae4-4cb0-48e8-b51f-5317433bc885 req-59043fcb-dc82-4a98-a221-da6d7835a67e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-a4e18007-11e8-4531-9dc8-8cbc10fe2b75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:06:16 compute-0 nova_compute[254092]: 2025-11-25 17:06:16.178 254096 DEBUG nova.compute.manager [req-0e93bae4-4cb0-48e8-b51f-5317433bc885 req-59043fcb-dc82-4a98-a221-da6d7835a67e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Received event network-vif-unplugged-4f707331-3a0e-47b6-98ee-569db81bd594 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:06:16 compute-0 nova_compute[254092]: 2025-11-25 17:06:16.178 254096 DEBUG oslo_concurrency.lockutils [req-0e93bae4-4cb0-48e8-b51f-5317433bc885 req-59043fcb-dc82-4a98-a221-da6d7835a67e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b747b045-786f-49a8-907c-cc222a07fa05-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:06:16 compute-0 nova_compute[254092]: 2025-11-25 17:06:16.178 254096 DEBUG oslo_concurrency.lockutils [req-0e93bae4-4cb0-48e8-b51f-5317433bc885 req-59043fcb-dc82-4a98-a221-da6d7835a67e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b747b045-786f-49a8-907c-cc222a07fa05-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:06:16 compute-0 nova_compute[254092]: 2025-11-25 17:06:16.179 254096 DEBUG oslo_concurrency.lockutils [req-0e93bae4-4cb0-48e8-b51f-5317433bc885 req-59043fcb-dc82-4a98-a221-da6d7835a67e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b747b045-786f-49a8-907c-cc222a07fa05-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:06:16 compute-0 nova_compute[254092]: 2025-11-25 17:06:16.179 254096 DEBUG nova.compute.manager [req-0e93bae4-4cb0-48e8-b51f-5317433bc885 req-59043fcb-dc82-4a98-a221-da6d7835a67e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] No waiting events found dispatching network-vif-unplugged-4f707331-3a0e-47b6-98ee-569db81bd594 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:06:16 compute-0 nova_compute[254092]: 2025-11-25 17:06:16.179 254096 DEBUG nova.compute.manager [req-0e93bae4-4cb0-48e8-b51f-5317433bc885 req-59043fcb-dc82-4a98-a221-da6d7835a67e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Received event network-vif-unplugged-4f707331-3a0e-47b6-98ee-569db81bd594 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 17:06:16 compute-0 nova_compute[254092]: 2025-11-25 17:06:16.180 254096 DEBUG nova.compute.manager [req-0e93bae4-4cb0-48e8-b51f-5317433bc885 req-59043fcb-dc82-4a98-a221-da6d7835a67e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Received event network-vif-plugged-4f707331-3a0e-47b6-98ee-569db81bd594 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:06:16 compute-0 nova_compute[254092]: 2025-11-25 17:06:16.180 254096 DEBUG oslo_concurrency.lockutils [req-0e93bae4-4cb0-48e8-b51f-5317433bc885 req-59043fcb-dc82-4a98-a221-da6d7835a67e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "b747b045-786f-49a8-907c-cc222a07fa05-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:06:16 compute-0 nova_compute[254092]: 2025-11-25 17:06:16.180 254096 DEBUG oslo_concurrency.lockutils [req-0e93bae4-4cb0-48e8-b51f-5317433bc885 req-59043fcb-dc82-4a98-a221-da6d7835a67e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b747b045-786f-49a8-907c-cc222a07fa05-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:06:16 compute-0 nova_compute[254092]: 2025-11-25 17:06:16.180 254096 DEBUG oslo_concurrency.lockutils [req-0e93bae4-4cb0-48e8-b51f-5317433bc885 req-59043fcb-dc82-4a98-a221-da6d7835a67e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "b747b045-786f-49a8-907c-cc222a07fa05-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:06:16 compute-0 nova_compute[254092]: 2025-11-25 17:06:16.181 254096 DEBUG nova.compute.manager [req-0e93bae4-4cb0-48e8-b51f-5317433bc885 req-59043fcb-dc82-4a98-a221-da6d7835a67e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] No waiting events found dispatching network-vif-plugged-4f707331-3a0e-47b6-98ee-569db81bd594 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:06:16 compute-0 nova_compute[254092]: 2025-11-25 17:06:16.181 254096 WARNING nova.compute.manager [req-0e93bae4-4cb0-48e8-b51f-5317433bc885 req-59043fcb-dc82-4a98-a221-da6d7835a67e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Received unexpected event network-vif-plugged-4f707331-3a0e-47b6-98ee-569db81bd594 for instance with vm_state active and task_state deleting.
Nov 25 17:06:16 compute-0 nova_compute[254092]: 2025-11-25 17:06:16.194 254096 INFO nova.scheduler.client.report [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Deleted allocations for instance b747b045-786f-49a8-907c-cc222a07fa05
Nov 25 17:06:16 compute-0 nova_compute[254092]: 2025-11-25 17:06:16.274 254096 DEBUG oslo_concurrency.lockutils [None req-a2235995-a9f5-4ae1-8c37-b529b07d7703 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "b747b045-786f-49a8-907c-cc222a07fa05" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.315s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:06:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2482: 321 pgs: 321 active+clean; 246 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 190 op/s
Nov 25 17:06:16 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1269446327' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:06:17 compute-0 nova_compute[254092]: 2025-11-25 17:06:17.028 254096 DEBUG nova.compute.manager [req-e7ce1670-db52-43e1-add1-95eab78c0f72 req-230eb07d-91e3-47f1-82c4-2fd043665d1f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Received event network-vif-deleted-4f707331-3a0e-47b6-98ee-569db81bd594 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:06:17 compute-0 nova_compute[254092]: 2025-11-25 17:06:17.029 254096 DEBUG nova.compute.manager [req-e7ce1670-db52-43e1-add1-95eab78c0f72 req-230eb07d-91e3-47f1-82c4-2fd043665d1f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Received event network-changed-843d689b-6e0b-4ce9-9177-6c3cd41a19d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:06:17 compute-0 nova_compute[254092]: 2025-11-25 17:06:17.029 254096 DEBUG nova.compute.manager [req-e7ce1670-db52-43e1-add1-95eab78c0f72 req-230eb07d-91e3-47f1-82c4-2fd043665d1f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Refreshing instance network info cache due to event network-changed-843d689b-6e0b-4ce9-9177-6c3cd41a19d6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:06:17 compute-0 nova_compute[254092]: 2025-11-25 17:06:17.029 254096 DEBUG oslo_concurrency.lockutils [req-e7ce1670-db52-43e1-add1-95eab78c0f72 req-230eb07d-91e3-47f1-82c4-2fd043665d1f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-a4e18007-11e8-4531-9dc8-8cbc10fe2b75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:06:17 compute-0 nova_compute[254092]: 2025-11-25 17:06:17.029 254096 DEBUG oslo_concurrency.lockutils [req-e7ce1670-db52-43e1-add1-95eab78c0f72 req-230eb07d-91e3-47f1-82c4-2fd043665d1f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-a4e18007-11e8-4531-9dc8-8cbc10fe2b75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:06:17 compute-0 nova_compute[254092]: 2025-11-25 17:06:17.030 254096 DEBUG nova.network.neutron [req-e7ce1670-db52-43e1-add1-95eab78c0f72 req-230eb07d-91e3-47f1-82c4-2fd043665d1f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Refreshing network info cache for port 843d689b-6e0b-4ce9-9177-6c3cd41a19d6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:06:17 compute-0 nova_compute[254092]: 2025-11-25 17:06:17.424 254096 DEBUG oslo_concurrency.lockutils [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "interface-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-d5b14553-424b-4985-9ed6-2f4afac92c00" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:06:17 compute-0 nova_compute[254092]: 2025-11-25 17:06:17.424 254096 DEBUG oslo_concurrency.lockutils [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "interface-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-d5b14553-424b-4985-9ed6-2f4afac92c00" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:06:17 compute-0 nova_compute[254092]: 2025-11-25 17:06:17.442 254096 DEBUG nova.objects.instance [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'flavor' on Instance uuid fb2e8439-c6c2-4a88-9e88-faf385d9a4d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:06:17 compute-0 nova_compute[254092]: 2025-11-25 17:06:17.466 254096 DEBUG nova.virt.libvirt.vif [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:04:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1064240069',display_name='tempest-TestNetworkBasicOps-server-1064240069',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1064240069',id=120,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJxRJG0xj4lNhOZp9ZWVpUqIVMXoOTqx2HT5kEbWz5Kzrp9lUM1esv4/Y9y+6X72HfZQHeQJVGMfF2AfhTMmKKwlb9fZPc7AjVKpzxCULHdu4UtL6CrGGLwggOltMQGwgQ==',key_name='tempest-TestNetworkBasicOps-808036867',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:04:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-ziqlckum',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:04:38Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=fb2e8439-c6c2-4a88-9e88-faf385d9a4d9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d5b14553-424b-4985-9ed6-2f4afac92c00", "address": "fa:16:3e:8b:9c:66", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b14553-42", "ovs_interfaceid": "d5b14553-424b-4985-9ed6-2f4afac92c00", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:06:17 compute-0 nova_compute[254092]: 2025-11-25 17:06:17.467 254096 DEBUG nova.network.os_vif_util [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "d5b14553-424b-4985-9ed6-2f4afac92c00", "address": "fa:16:3e:8b:9c:66", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b14553-42", "ovs_interfaceid": "d5b14553-424b-4985-9ed6-2f4afac92c00", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:06:17 compute-0 nova_compute[254092]: 2025-11-25 17:06:17.468 254096 DEBUG nova.network.os_vif_util [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8b:9c:66,bridge_name='br-int',has_traffic_filtering=True,id=d5b14553-424b-4985-9ed6-2f4afac92c00,network=Network(4a48b006-a4d1-4fa5-88f1-79386ed2958f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5b14553-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:06:17 compute-0 nova_compute[254092]: 2025-11-25 17:06:17.471 254096 DEBUG nova.virt.libvirt.guest [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:8b:9c:66"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapd5b14553-42"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 17:06:17 compute-0 nova_compute[254092]: 2025-11-25 17:06:17.475 254096 DEBUG nova.virt.libvirt.guest [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:8b:9c:66"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapd5b14553-42"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 17:06:17 compute-0 nova_compute[254092]: 2025-11-25 17:06:17.478 254096 DEBUG nova.virt.libvirt.driver [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Attempting to detach device tapd5b14553-42 from instance fb2e8439-c6c2-4a88-9e88-faf385d9a4d9 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 25 17:06:17 compute-0 nova_compute[254092]: 2025-11-25 17:06:17.479 254096 DEBUG nova.virt.libvirt.guest [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] detach device xml: <interface type="ethernet">
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <mac address="fa:16:3e:8b:9c:66"/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <model type="virtio"/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <mtu size="1442"/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <target dev="tapd5b14553-42"/>
Nov 25 17:06:17 compute-0 nova_compute[254092]: </interface>
Nov 25 17:06:17 compute-0 nova_compute[254092]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 25 17:06:17 compute-0 nova_compute[254092]: 2025-11-25 17:06:17.492 254096 DEBUG nova.virt.libvirt.guest [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:8b:9c:66"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapd5b14553-42"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 17:06:17 compute-0 nova_compute[254092]: 2025-11-25 17:06:17.495 254096 DEBUG nova.virt.libvirt.guest [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:8b:9c:66"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapd5b14553-42"/></interface>not found in domain: <domain type='kvm' id='152'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <name>instance-00000078</name>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <uuid>fb2e8439-c6c2-4a88-9e88-faf385d9a4d9</uuid>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <nova:name>tempest-TestNetworkBasicOps-server-1064240069</nova:name>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <nova:creationTime>2025-11-25 17:05:07</nova:creationTime>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <nova:flavor name="m1.nano">
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <nova:memory>128</nova:memory>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <nova:disk>1</nova:disk>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <nova:swap>0</nova:swap>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <nova:vcpus>1</nova:vcpus>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   </nova:flavor>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <nova:owner>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   </nova:owner>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <nova:ports>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <nova:port uuid="a13b6cf4-602d-4af3-b369-9dfa273e1514">
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <nova:port uuid="d5b14553-424b-4985-9ed6-2f4afac92c00">
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.28" ipVersion="4"/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   </nova:ports>
Nov 25 17:06:17 compute-0 nova_compute[254092]: </nova:instance>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <memory unit='KiB'>131072</memory>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <vcpu placement='static'>1</vcpu>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <resource>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <partition>/machine</partition>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   </resource>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <sysinfo type='smbios'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <system>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <entry name='manufacturer'>RDO</entry>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <entry name='product'>OpenStack Compute</entry>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <entry name='serial'>fb2e8439-c6c2-4a88-9e88-faf385d9a4d9</entry>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <entry name='uuid'>fb2e8439-c6c2-4a88-9e88-faf385d9a4d9</entry>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <entry name='family'>Virtual Machine</entry>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </system>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <os>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <boot dev='hd'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <smbios mode='sysinfo'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   </os>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <features>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <vmcoreinfo state='on'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   </features>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <cpu mode='custom' match='exact' check='full'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <vendor>AMD</vendor>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='require' name='x2apic'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='require' name='tsc-deadline'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='require' name='hypervisor'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='require' name='tsc_adjust'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='require' name='spec-ctrl'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='require' name='stibp'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='require' name='ssbd'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='require' name='cmp_legacy'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='require' name='overflow-recov'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='require' name='succor'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='require' name='ibrs'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='require' name='amd-ssbd'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='require' name='virt-ssbd'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='disable' name='lbrv'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='disable' name='tsc-scale'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='disable' name='vmcb-clean'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='disable' name='flushbyasid'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='disable' name='pause-filter'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='disable' name='pfthreshold'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='disable' name='xsaves'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='disable' name='svm'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='require' name='topoext'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='disable' name='npt'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='disable' name='nrip-save'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <clock offset='utc'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <timer name='pit' tickpolicy='delay'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <timer name='hpet' present='no'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <on_poweroff>destroy</on_poweroff>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <on_reboot>restart</on_reboot>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <on_crash>destroy</on_crash>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <disk type='network' device='disk'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <auth username='openstack'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:         <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <source protocol='rbd' name='vms/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9_disk' index='2'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:         <host name='192.168.122.100' port='6789'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       </source>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target dev='vda' bus='virtio'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='virtio-disk0'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <disk type='network' device='cdrom'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <auth username='openstack'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:         <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <source protocol='rbd' name='vms/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9_disk.config' index='1'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:         <host name='192.168.122.100' port='6789'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       </source>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target dev='sda' bus='sata'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <readonly/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='sata0-0-0'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='0' model='pcie-root'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pcie.0'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='1' port='0x10'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.1'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='2' port='0x11'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.2'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='3' port='0x12'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.3'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='4' port='0x13'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.4'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='5' port='0x14'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.5'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='6' port='0x15'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.6'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='7' port='0x16'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.7'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='8' port='0x17'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.8'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='9' port='0x18'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.9'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='10' port='0x19'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.10'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='11' port='0x1a'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.11'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='12' port='0x1b'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.12'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='13' port='0x1c'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.13'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='14' port='0x1d'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.14'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='15' port='0x1e'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.15'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='16' port='0x1f'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.16'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='17' port='0x20'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.17'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='18' port='0x21'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.18'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='19' port='0x22'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.19'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='20' port='0x23'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.20'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='21' port='0x24'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.21'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='22' port='0x25'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.22'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='23' port='0x26'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.23'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='24' port='0x27'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.24'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='25' port='0x28'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.25'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-pci-bridge'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.26'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='usb'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='sata' index='0'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='ide'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <interface type='ethernet'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <mac address='fa:16:3e:7c:d6:7a'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target dev='tapa13b6cf4-60'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model type='virtio'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <mtu size='1442'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='net0'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <interface type='ethernet'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <mac address='fa:16:3e:8b:9c:66'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target dev='tapd5b14553-42'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model type='virtio'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <mtu size='1442'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='net1'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <serial type='pty'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <source path='/dev/pts/0'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <log file='/var/lib/nova/instances/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9/console.log' append='off'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target type='isa-serial' port='0'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:         <model name='isa-serial'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       </target>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='serial0'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <console type='pty' tty='/dev/pts/0'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <source path='/dev/pts/0'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <log file='/var/lib/nova/instances/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9/console.log' append='off'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target type='serial' port='0'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='serial0'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </console>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <input type='tablet' bus='usb'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='input0'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='usb' bus='0' port='1'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </input>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <input type='mouse' bus='ps2'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='input1'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </input>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <input type='keyboard' bus='ps2'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='input2'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </input>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <listen type='address' address='::0'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </graphics>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <audio id='1' type='none'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <video>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model type='virtio' heads='1' primary='yes'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='video0'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </video>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <watchdog model='itco' action='reset'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='watchdog0'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </watchdog>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <memballoon model='virtio'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <stats period='10'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='balloon0'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <rng model='virtio'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <backend model='random'>/dev/urandom</backend>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='rng0'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <label>system_u:system_r:svirt_t:s0:c156,c809</label>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c156,c809</imagelabel>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   </seclabel>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <label>+107:+107</label>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <imagelabel>+107:+107</imagelabel>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   </seclabel>
Nov 25 17:06:17 compute-0 nova_compute[254092]: </domain>
Nov 25 17:06:17 compute-0 nova_compute[254092]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 17:06:17 compute-0 nova_compute[254092]: 2025-11-25 17:06:17.499 254096 INFO nova.virt.libvirt.driver [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully detached device tapd5b14553-42 from instance fb2e8439-c6c2-4a88-9e88-faf385d9a4d9 from the persistent domain config.
Nov 25 17:06:17 compute-0 nova_compute[254092]: 2025-11-25 17:06:17.499 254096 DEBUG nova.virt.libvirt.driver [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] (1/8): Attempting to detach device tapd5b14553-42 with device alias net1 from instance fb2e8439-c6c2-4a88-9e88-faf385d9a4d9 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 25 17:06:17 compute-0 nova_compute[254092]: 2025-11-25 17:06:17.499 254096 DEBUG nova.virt.libvirt.guest [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] detach device xml: <interface type="ethernet">
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <mac address="fa:16:3e:8b:9c:66"/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <model type="virtio"/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <mtu size="1442"/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <target dev="tapd5b14553-42"/>
Nov 25 17:06:17 compute-0 nova_compute[254092]: </interface>
Nov 25 17:06:17 compute-0 nova_compute[254092]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 25 17:06:17 compute-0 kernel: tapd5b14553-42 (unregistering): left promiscuous mode
Nov 25 17:06:17 compute-0 NetworkManager[48891]: <info>  [1764090377.6074] device (tapd5b14553-42): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:06:17 compute-0 ovn_controller[153477]: 2025-11-25T17:06:17Z|01277|binding|INFO|Releasing lport d5b14553-424b-4985-9ed6-2f4afac92c00 from this chassis (sb_readonly=0)
Nov 25 17:06:17 compute-0 ovn_controller[153477]: 2025-11-25T17:06:17Z|01278|binding|INFO|Setting lport d5b14553-424b-4985-9ed6-2f4afac92c00 down in Southbound
Nov 25 17:06:17 compute-0 ovn_controller[153477]: 2025-11-25T17:06:17Z|01279|binding|INFO|Removing iface tapd5b14553-42 ovn-installed in OVS
Nov 25 17:06:17 compute-0 nova_compute[254092]: 2025-11-25 17:06:17.614 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:17 compute-0 nova_compute[254092]: 2025-11-25 17:06:17.621 254096 DEBUG nova.virt.libvirt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Received event <DeviceRemovedEvent: 1764090377.6206884, fb2e8439-c6c2-4a88-9e88-faf385d9a4d9 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 25 17:06:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:17.622 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:9c:66 10.100.0.28', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.28/28', 'neutron:device_id': 'fb2e8439-c6c2-4a88-9e88-faf385d9a4d9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a48b006-a4d1-4fa5-88f1-79386ed2958f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3bba81af-c0c6-44bd-ba73-08033ea9224a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=d5b14553-424b-4985-9ed6-2f4afac92c00) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:06:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:17.625 163338 INFO neutron.agent.ovn.metadata.agent [-] Port d5b14553-424b-4985-9ed6-2f4afac92c00 in datapath 4a48b006-a4d1-4fa5-88f1-79386ed2958f unbound from our chassis
Nov 25 17:06:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:17.628 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4a48b006-a4d1-4fa5-88f1-79386ed2958f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 17:06:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:17.629 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9c593889-1baa-42d8-b9c2-487abae5f4b8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:17.630 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f namespace which is not needed anymore
Nov 25 17:06:17 compute-0 nova_compute[254092]: 2025-11-25 17:06:17.628 254096 DEBUG nova.virt.libvirt.driver [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Start waiting for the detach event from libvirt for device tapd5b14553-42 with device alias net1 for instance fb2e8439-c6c2-4a88-9e88-faf385d9a4d9 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 25 17:06:17 compute-0 nova_compute[254092]: 2025-11-25 17:06:17.628 254096 DEBUG nova.virt.libvirt.guest [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:8b:9c:66"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapd5b14553-42"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 17:06:17 compute-0 nova_compute[254092]: 2025-11-25 17:06:17.629 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:17 compute-0 nova_compute[254092]: 2025-11-25 17:06:17.633 254096 DEBUG nova.virt.libvirt.guest [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:8b:9c:66"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapd5b14553-42"/></interface>not found in domain: <domain type='kvm' id='152'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <name>instance-00000078</name>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <uuid>fb2e8439-c6c2-4a88-9e88-faf385d9a4d9</uuid>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <nova:name>tempest-TestNetworkBasicOps-server-1064240069</nova:name>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <nova:creationTime>2025-11-25 17:05:07</nova:creationTime>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <nova:flavor name="m1.nano">
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <nova:memory>128</nova:memory>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <nova:disk>1</nova:disk>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <nova:swap>0</nova:swap>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <nova:vcpus>1</nova:vcpus>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   </nova:flavor>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <nova:owner>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   </nova:owner>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <nova:ports>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <nova:port uuid="a13b6cf4-602d-4af3-b369-9dfa273e1514">
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <nova:port uuid="d5b14553-424b-4985-9ed6-2f4afac92c00">
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.28" ipVersion="4"/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   </nova:ports>
Nov 25 17:06:17 compute-0 nova_compute[254092]: </nova:instance>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <memory unit='KiB'>131072</memory>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <vcpu placement='static'>1</vcpu>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <resource>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <partition>/machine</partition>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   </resource>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <sysinfo type='smbios'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <system>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <entry name='manufacturer'>RDO</entry>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <entry name='product'>OpenStack Compute</entry>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <entry name='serial'>fb2e8439-c6c2-4a88-9e88-faf385d9a4d9</entry>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <entry name='uuid'>fb2e8439-c6c2-4a88-9e88-faf385d9a4d9</entry>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <entry name='family'>Virtual Machine</entry>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </system>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <os>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <boot dev='hd'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <smbios mode='sysinfo'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   </os>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <features>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <vmcoreinfo state='on'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   </features>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <cpu mode='custom' match='exact' check='full'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <vendor>AMD</vendor>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='require' name='x2apic'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='require' name='tsc-deadline'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='require' name='hypervisor'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='require' name='tsc_adjust'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='require' name='spec-ctrl'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='require' name='stibp'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='require' name='ssbd'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='require' name='cmp_legacy'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='require' name='overflow-recov'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='require' name='succor'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='require' name='ibrs'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='require' name='amd-ssbd'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='require' name='virt-ssbd'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='disable' name='lbrv'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='disable' name='tsc-scale'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='disable' name='vmcb-clean'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='disable' name='flushbyasid'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='disable' name='pause-filter'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='disable' name='pfthreshold'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='disable' name='xsaves'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='disable' name='svm'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='require' name='topoext'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='disable' name='npt'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <feature policy='disable' name='nrip-save'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <clock offset='utc'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <timer name='pit' tickpolicy='delay'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <timer name='hpet' present='no'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <on_poweroff>destroy</on_poweroff>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <on_reboot>restart</on_reboot>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <on_crash>destroy</on_crash>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <disk type='network' device='disk'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <auth username='openstack'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:         <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <source protocol='rbd' name='vms/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9_disk' index='2'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:         <host name='192.168.122.100' port='6789'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       </source>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target dev='vda' bus='virtio'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='virtio-disk0'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <disk type='network' device='cdrom'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <auth username='openstack'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:         <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <source protocol='rbd' name='vms/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9_disk.config' index='1'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:         <host name='192.168.122.100' port='6789'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       </source>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target dev='sda' bus='sata'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <readonly/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='sata0-0-0'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='0' model='pcie-root'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pcie.0'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='1' port='0x10'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.1'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='2' port='0x11'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.2'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='3' port='0x12'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.3'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='4' port='0x13'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.4'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='5' port='0x14'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.5'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='6' port='0x15'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.6'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='7' port='0x16'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.7'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='8' port='0x17'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.8'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='9' port='0x18'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.9'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='10' port='0x19'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.10'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='11' port='0x1a'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.11'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='12' port='0x1b'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.12'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='13' port='0x1c'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.13'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='14' port='0x1d'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.14'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='15' port='0x1e'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.15'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='16' port='0x1f'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.16'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='17' port='0x20'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.17'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='18' port='0x21'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.18'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='19' port='0x22'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.19'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='20' port='0x23'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.20'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='21' port='0x24'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.21'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='22' port='0x25'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.22'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='23' port='0x26'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.23'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='24' port='0x27'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.24'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target chassis='25' port='0x28'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.25'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model name='pcie-pci-bridge'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='pci.26'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='usb'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <controller type='sata' index='0'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='ide'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <interface type='ethernet'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <mac address='fa:16:3e:7c:d6:7a'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target dev='tapa13b6cf4-60'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model type='virtio'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <mtu size='1442'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='net0'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <serial type='pty'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <source path='/dev/pts/0'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <log file='/var/lib/nova/instances/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9/console.log' append='off'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target type='isa-serial' port='0'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:         <model name='isa-serial'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       </target>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='serial0'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <console type='pty' tty='/dev/pts/0'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <source path='/dev/pts/0'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <log file='/var/lib/nova/instances/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9/console.log' append='off'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <target type='serial' port='0'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='serial0'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </console>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <input type='tablet' bus='usb'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='input0'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='usb' bus='0' port='1'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </input>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <input type='mouse' bus='ps2'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='input1'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </input>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <input type='keyboard' bus='ps2'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='input2'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </input>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <listen type='address' address='::0'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </graphics>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <audio id='1' type='none'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <video>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <model type='virtio' heads='1' primary='yes'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='video0'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </video>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <watchdog model='itco' action='reset'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='watchdog0'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </watchdog>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <memballoon model='virtio'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <stats period='10'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='balloon0'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <rng model='virtio'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <backend model='random'>/dev/urandom</backend>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <alias name='rng0'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <label>system_u:system_r:svirt_t:s0:c156,c809</label>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c156,c809</imagelabel>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   </seclabel>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <label>+107:+107</label>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <imagelabel>+107:+107</imagelabel>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   </seclabel>
Nov 25 17:06:17 compute-0 nova_compute[254092]: </domain>
Nov 25 17:06:17 compute-0 nova_compute[254092]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 17:06:17 compute-0 nova_compute[254092]: 2025-11-25 17:06:17.634 254096 INFO nova.virt.libvirt.driver [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully detached device tapd5b14553-42 from instance fb2e8439-c6c2-4a88-9e88-faf385d9a4d9 from the live domain config.
Nov 25 17:06:17 compute-0 nova_compute[254092]: 2025-11-25 17:06:17.634 254096 DEBUG nova.virt.libvirt.vif [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:04:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1064240069',display_name='tempest-TestNetworkBasicOps-server-1064240069',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1064240069',id=120,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJxRJG0xj4lNhOZp9ZWVpUqIVMXoOTqx2HT5kEbWz5Kzrp9lUM1esv4/Y9y+6X72HfZQHeQJVGMfF2AfhTMmKKwlb9fZPc7AjVKpzxCULHdu4UtL6CrGGLwggOltMQGwgQ==',key_name='tempest-TestNetworkBasicOps-808036867',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:04:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-ziqlckum',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:04:38Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=fb2e8439-c6c2-4a88-9e88-faf385d9a4d9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d5b14553-424b-4985-9ed6-2f4afac92c00", "address": "fa:16:3e:8b:9c:66", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b14553-42", "ovs_interfaceid": "d5b14553-424b-4985-9ed6-2f4afac92c00", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:06:17 compute-0 nova_compute[254092]: 2025-11-25 17:06:17.635 254096 DEBUG nova.network.os_vif_util [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "d5b14553-424b-4985-9ed6-2f4afac92c00", "address": "fa:16:3e:8b:9c:66", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b14553-42", "ovs_interfaceid": "d5b14553-424b-4985-9ed6-2f4afac92c00", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:06:17 compute-0 nova_compute[254092]: 2025-11-25 17:06:17.635 254096 DEBUG nova.network.os_vif_util [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8b:9c:66,bridge_name='br-int',has_traffic_filtering=True,id=d5b14553-424b-4985-9ed6-2f4afac92c00,network=Network(4a48b006-a4d1-4fa5-88f1-79386ed2958f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5b14553-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:06:17 compute-0 nova_compute[254092]: 2025-11-25 17:06:17.636 254096 DEBUG os_vif [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8b:9c:66,bridge_name='br-int',has_traffic_filtering=True,id=d5b14553-424b-4985-9ed6-2f4afac92c00,network=Network(4a48b006-a4d1-4fa5-88f1-79386ed2958f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5b14553-42') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:06:17 compute-0 nova_compute[254092]: 2025-11-25 17:06:17.637 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:17 compute-0 nova_compute[254092]: 2025-11-25 17:06:17.637 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5b14553-42, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:06:17 compute-0 nova_compute[254092]: 2025-11-25 17:06:17.642 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:17 compute-0 nova_compute[254092]: 2025-11-25 17:06:17.647 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:06:17 compute-0 nova_compute[254092]: 2025-11-25 17:06:17.653 254096 INFO os_vif [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8b:9c:66,bridge_name='br-int',has_traffic_filtering=True,id=d5b14553-424b-4985-9ed6-2f4afac92c00,network=Network(4a48b006-a4d1-4fa5-88f1-79386ed2958f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5b14553-42')
Nov 25 17:06:17 compute-0 nova_compute[254092]: 2025-11-25 17:06:17.653 254096 DEBUG nova.virt.libvirt.guest [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <nova:name>tempest-TestNetworkBasicOps-server-1064240069</nova:name>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <nova:creationTime>2025-11-25 17:06:17</nova:creationTime>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <nova:flavor name="m1.nano">
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <nova:memory>128</nova:memory>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <nova:disk>1</nova:disk>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <nova:swap>0</nova:swap>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <nova:vcpus>1</nova:vcpus>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   </nova:flavor>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <nova:owner>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   </nova:owner>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   <nova:ports>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     <nova:port uuid="a13b6cf4-602d-4af3-b369-9dfa273e1514">
Nov 25 17:06:17 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 17:06:17 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 17:06:17 compute-0 nova_compute[254092]:   </nova:ports>
Nov 25 17:06:17 compute-0 nova_compute[254092]: </nova:instance>
Nov 25 17:06:17 compute-0 nova_compute[254092]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 25 17:06:17 compute-0 podman[390334]: 2025-11-25 17:06:17.66652192 +0000 UTC m=+0.075399310 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 25 17:06:17 compute-0 ceph-mon[74985]: pgmap v2482: 321 pgs: 321 active+clean; 246 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 190 op/s
Nov 25 17:06:17 compute-0 podman[390335]: 2025-11-25 17:06:17.691198727 +0000 UTC m=+0.099288904 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 17:06:17 compute-0 podman[390336]: 2025-11-25 17:06:17.712046599 +0000 UTC m=+0.114731908 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 17:06:17 compute-0 neutron-haproxy-ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f[387838]: [NOTICE]   (387842) : haproxy version is 2.8.14-c23fe91
Nov 25 17:06:17 compute-0 neutron-haproxy-ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f[387838]: [NOTICE]   (387842) : path to executable is /usr/sbin/haproxy
Nov 25 17:06:17 compute-0 neutron-haproxy-ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f[387838]: [WARNING]  (387842) : Exiting Master process...
Nov 25 17:06:17 compute-0 neutron-haproxy-ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f[387838]: [WARNING]  (387842) : Exiting Master process...
Nov 25 17:06:17 compute-0 neutron-haproxy-ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f[387838]: [ALERT]    (387842) : Current worker (387844) exited with code 143 (Terminated)
Nov 25 17:06:17 compute-0 neutron-haproxy-ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f[387838]: [WARNING]  (387842) : All workers exited. Exiting... (0)
Nov 25 17:06:17 compute-0 systemd[1]: libpod-f602f147986b5f054e0df3fda565934f2dd505c3a439662c13696fbbd0a4b465.scope: Deactivated successfully.
Nov 25 17:06:17 compute-0 podman[390413]: 2025-11-25 17:06:17.766941104 +0000 UTC m=+0.041181441 container died f602f147986b5f054e0df3fda565934f2dd505c3a439662c13696fbbd0a4b465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 17:06:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:17.775 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:06:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b5bfab8528d115e346b339b56d12524bd34cf4549cb171cd97809410d53295b-merged.mount: Deactivated successfully.
Nov 25 17:06:17 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f602f147986b5f054e0df3fda565934f2dd505c3a439662c13696fbbd0a4b465-userdata-shm.mount: Deactivated successfully.
Nov 25 17:06:17 compute-0 podman[390413]: 2025-11-25 17:06:17.806056827 +0000 UTC m=+0.080297204 container cleanup f602f147986b5f054e0df3fda565934f2dd505c3a439662c13696fbbd0a4b465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 17:06:17 compute-0 systemd[1]: libpod-conmon-f602f147986b5f054e0df3fda565934f2dd505c3a439662c13696fbbd0a4b465.scope: Deactivated successfully.
Nov 25 17:06:17 compute-0 nova_compute[254092]: 2025-11-25 17:06:17.848 254096 DEBUG nova.compute.manager [req-8771217d-ed0a-4e2c-b0d0-0169e3f092a5 req-c1165913-ae1c-4473-a07f-65cc645a435d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Received event network-vif-unplugged-d5b14553-424b-4985-9ed6-2f4afac92c00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:06:17 compute-0 nova_compute[254092]: 2025-11-25 17:06:17.848 254096 DEBUG oslo_concurrency.lockutils [req-8771217d-ed0a-4e2c-b0d0-0169e3f092a5 req-c1165913-ae1c-4473-a07f-65cc645a435d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:06:17 compute-0 nova_compute[254092]: 2025-11-25 17:06:17.849 254096 DEBUG oslo_concurrency.lockutils [req-8771217d-ed0a-4e2c-b0d0-0169e3f092a5 req-c1165913-ae1c-4473-a07f-65cc645a435d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:06:17 compute-0 nova_compute[254092]: 2025-11-25 17:06:17.849 254096 DEBUG oslo_concurrency.lockutils [req-8771217d-ed0a-4e2c-b0d0-0169e3f092a5 req-c1165913-ae1c-4473-a07f-65cc645a435d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:06:17 compute-0 nova_compute[254092]: 2025-11-25 17:06:17.849 254096 DEBUG nova.compute.manager [req-8771217d-ed0a-4e2c-b0d0-0169e3f092a5 req-c1165913-ae1c-4473-a07f-65cc645a435d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] No waiting events found dispatching network-vif-unplugged-d5b14553-424b-4985-9ed6-2f4afac92c00 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:06:17 compute-0 nova_compute[254092]: 2025-11-25 17:06:17.849 254096 WARNING nova.compute.manager [req-8771217d-ed0a-4e2c-b0d0-0169e3f092a5 req-c1165913-ae1c-4473-a07f-65cc645a435d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Received unexpected event network-vif-unplugged-d5b14553-424b-4985-9ed6-2f4afac92c00 for instance with vm_state active and task_state None.
Nov 25 17:06:17 compute-0 podman[390441]: 2025-11-25 17:06:17.871182263 +0000 UTC m=+0.042730473 container remove f602f147986b5f054e0df3fda565934f2dd505c3a439662c13696fbbd0a4b465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 25 17:06:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:17.879 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bd3dc20f-31c0-4a9c-9fa9-89c9e98c4d05]: (4, ('Tue Nov 25 05:06:17 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f (f602f147986b5f054e0df3fda565934f2dd505c3a439662c13696fbbd0a4b465)\nf602f147986b5f054e0df3fda565934f2dd505c3a439662c13696fbbd0a4b465\nTue Nov 25 05:06:17 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f (f602f147986b5f054e0df3fda565934f2dd505c3a439662c13696fbbd0a4b465)\nf602f147986b5f054e0df3fda565934f2dd505c3a439662c13696fbbd0a4b465\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:17.881 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[341ca183-55af-49e2-8fb5-75399515b804]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:17.882 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a48b006-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:06:17 compute-0 nova_compute[254092]: 2025-11-25 17:06:17.884 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:17 compute-0 kernel: tap4a48b006-a0: left promiscuous mode
Nov 25 17:06:17 compute-0 nova_compute[254092]: 2025-11-25 17:06:17.897 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:17.901 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[414c3260-be2b-43b4-a8da-c0bce12bcb08]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:17.917 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2ef4c2d3-2011-4536-b6b7-e84240f31a61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:17.918 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[919ba36f-754e-4e87-b1d7-907f435340cd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:17.931 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4bbdea09-d530-4346-9a36-2db4a6cdd675]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676527, 'reachable_time': 43008, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 390456, 'error': None, 'target': 'ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:17 compute-0 systemd[1]: run-netns-ovnmeta\x2d4a48b006\x2da4d1\x2d4fa5\x2d88f1\x2d79386ed2958f.mount: Deactivated successfully.
Nov 25 17:06:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:17.935 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4a48b006-a4d1-4fa5-88f1-79386ed2958f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 17:06:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:17.935 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[7adedc87-a271-4b3c-8392-9021fe0f92d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2483: 321 pgs: 321 active+clean; 246 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 538 KiB/s wr, 113 op/s
Nov 25 17:06:18 compute-0 nova_compute[254092]: 2025-11-25 17:06:18.736 254096 DEBUG nova.network.neutron [req-e7ce1670-db52-43e1-add1-95eab78c0f72 req-230eb07d-91e3-47f1-82c4-2fd043665d1f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Updated VIF entry in instance network info cache for port 843d689b-6e0b-4ce9-9177-6c3cd41a19d6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:06:18 compute-0 nova_compute[254092]: 2025-11-25 17:06:18.737 254096 DEBUG nova.network.neutron [req-e7ce1670-db52-43e1-add1-95eab78c0f72 req-230eb07d-91e3-47f1-82c4-2fd043665d1f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Updating instance_info_cache with network_info: [{"id": "843d689b-6e0b-4ce9-9177-6c3cd41a19d6", "address": "fa:16:3e:81:3c:32", "network": {"id": "bf9312b9-f4d2-496f-a143-7586e12fbee3", "bridge": "br-int", "label": "tempest-network-smoke--1513117463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap843d689b-6e", "ovs_interfaceid": "843d689b-6e0b-4ce9-9177-6c3cd41a19d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:06:18 compute-0 nova_compute[254092]: 2025-11-25 17:06:18.750 254096 DEBUG oslo_concurrency.lockutils [req-e7ce1670-db52-43e1-add1-95eab78c0f72 req-230eb07d-91e3-47f1-82c4-2fd043665d1f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-a4e18007-11e8-4531-9dc8-8cbc10fe2b75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:06:18 compute-0 nova_compute[254092]: 2025-11-25 17:06:18.810 254096 DEBUG oslo_concurrency.lockutils [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:06:18 compute-0 nova_compute[254092]: 2025-11-25 17:06:18.811 254096 DEBUG oslo_concurrency.lockutils [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquired lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:06:18 compute-0 nova_compute[254092]: 2025-11-25 17:06:18.811 254096 DEBUG nova.network.neutron [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 17:06:18 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:06:19 compute-0 nova_compute[254092]: 2025-11-25 17:06:19.117 254096 DEBUG nova.compute.manager [req-a0616eb3-a7a4-4d03-a949-d33f61eb0d3c req-ad99e7aa-bae0-41d7-88bb-638265c52b97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Received event network-vif-deleted-d5b14553-424b-4985-9ed6-2f4afac92c00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:06:19 compute-0 nova_compute[254092]: 2025-11-25 17:06:19.117 254096 INFO nova.compute.manager [req-a0616eb3-a7a4-4d03-a949-d33f61eb0d3c req-ad99e7aa-bae0-41d7-88bb-638265c52b97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Neutron deleted interface d5b14553-424b-4985-9ed6-2f4afac92c00; detaching it from the instance and deleting it from the info cache
Nov 25 17:06:19 compute-0 nova_compute[254092]: 2025-11-25 17:06:19.117 254096 DEBUG nova.network.neutron [req-a0616eb3-a7a4-4d03-a949-d33f61eb0d3c req-ad99e7aa-bae0-41d7-88bb-638265c52b97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Updating instance_info_cache with network_info: [{"id": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "address": "fa:16:3e:7c:d6:7a", "network": {"id": "136c69a7-c4f8-40a2-be13-7ef82b7b3709", "bridge": "br-int", "label": "tempest-network-smoke--1102379118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13b6cf4-60", "ovs_interfaceid": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:06:19 compute-0 nova_compute[254092]: 2025-11-25 17:06:19.140 254096 DEBUG nova.objects.instance [req-a0616eb3-a7a4-4d03-a949-d33f61eb0d3c req-ad99e7aa-bae0-41d7-88bb-638265c52b97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lazy-loading 'system_metadata' on Instance uuid fb2e8439-c6c2-4a88-9e88-faf385d9a4d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:06:19 compute-0 nova_compute[254092]: 2025-11-25 17:06:19.165 254096 DEBUG nova.objects.instance [req-a0616eb3-a7a4-4d03-a949-d33f61eb0d3c req-ad99e7aa-bae0-41d7-88bb-638265c52b97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lazy-loading 'flavor' on Instance uuid fb2e8439-c6c2-4a88-9e88-faf385d9a4d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:06:19 compute-0 nova_compute[254092]: 2025-11-25 17:06:19.184 254096 DEBUG nova.virt.libvirt.vif [req-a0616eb3-a7a4-4d03-a949-d33f61eb0d3c req-ad99e7aa-bae0-41d7-88bb-638265c52b97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:04:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1064240069',display_name='tempest-TestNetworkBasicOps-server-1064240069',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1064240069',id=120,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJxRJG0xj4lNhOZp9ZWVpUqIVMXoOTqx2HT5kEbWz5Kzrp9lUM1esv4/Y9y+6X72HfZQHeQJVGMfF2AfhTMmKKwlb9fZPc7AjVKpzxCULHdu4UtL6CrGGLwggOltMQGwgQ==',key_name='tempest-TestNetworkBasicOps-808036867',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:04:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-ziqlckum',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:04:38Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=fb2e8439-c6c2-4a88-9e88-faf385d9a4d9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d5b14553-424b-4985-9ed6-2f4afac92c00", "address": "fa:16:3e:8b:9c:66", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b14553-42", "ovs_interfaceid": "d5b14553-424b-4985-9ed6-2f4afac92c00", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:06:19 compute-0 nova_compute[254092]: 2025-11-25 17:06:19.185 254096 DEBUG nova.network.os_vif_util [req-a0616eb3-a7a4-4d03-a949-d33f61eb0d3c req-ad99e7aa-bae0-41d7-88bb-638265c52b97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Converting VIF {"id": "d5b14553-424b-4985-9ed6-2f4afac92c00", "address": "fa:16:3e:8b:9c:66", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b14553-42", "ovs_interfaceid": "d5b14553-424b-4985-9ed6-2f4afac92c00", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:06:19 compute-0 nova_compute[254092]: 2025-11-25 17:06:19.185 254096 DEBUG nova.network.os_vif_util [req-a0616eb3-a7a4-4d03-a949-d33f61eb0d3c req-ad99e7aa-bae0-41d7-88bb-638265c52b97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8b:9c:66,bridge_name='br-int',has_traffic_filtering=True,id=d5b14553-424b-4985-9ed6-2f4afac92c00,network=Network(4a48b006-a4d1-4fa5-88f1-79386ed2958f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5b14553-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:06:19 compute-0 nova_compute[254092]: 2025-11-25 17:06:19.189 254096 DEBUG nova.virt.libvirt.guest [req-a0616eb3-a7a4-4d03-a949-d33f61eb0d3c req-ad99e7aa-bae0-41d7-88bb-638265c52b97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:8b:9c:66"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapd5b14553-42"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 17:06:19 compute-0 nova_compute[254092]: 2025-11-25 17:06:19.194 254096 DEBUG nova.virt.libvirt.guest [req-a0616eb3-a7a4-4d03-a949-d33f61eb0d3c req-ad99e7aa-bae0-41d7-88bb-638265c52b97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:8b:9c:66"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapd5b14553-42"/></interface>not found in domain: <domain type='kvm' id='152'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <name>instance-00000078</name>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <uuid>fb2e8439-c6c2-4a88-9e88-faf385d9a4d9</uuid>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <nova:name>tempest-TestNetworkBasicOps-server-1064240069</nova:name>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <nova:creationTime>2025-11-25 17:06:17</nova:creationTime>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <nova:flavor name="m1.nano">
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <nova:memory>128</nova:memory>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <nova:disk>1</nova:disk>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <nova:swap>0</nova:swap>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <nova:vcpus>1</nova:vcpus>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   </nova:flavor>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <nova:owner>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   </nova:owner>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <nova:ports>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <nova:port uuid="a13b6cf4-602d-4af3-b369-9dfa273e1514">
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   </nova:ports>
Nov 25 17:06:19 compute-0 nova_compute[254092]: </nova:instance>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <memory unit='KiB'>131072</memory>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <vcpu placement='static'>1</vcpu>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <resource>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <partition>/machine</partition>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   </resource>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <sysinfo type='smbios'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <system>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <entry name='manufacturer'>RDO</entry>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <entry name='product'>OpenStack Compute</entry>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <entry name='serial'>fb2e8439-c6c2-4a88-9e88-faf385d9a4d9</entry>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <entry name='uuid'>fb2e8439-c6c2-4a88-9e88-faf385d9a4d9</entry>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <entry name='family'>Virtual Machine</entry>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </system>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <os>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <boot dev='hd'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <smbios mode='sysinfo'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   </os>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <features>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <vmcoreinfo state='on'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   </features>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <cpu mode='custom' match='exact' check='full'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <vendor>AMD</vendor>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='require' name='x2apic'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='require' name='tsc-deadline'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='require' name='hypervisor'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='require' name='tsc_adjust'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='require' name='spec-ctrl'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='require' name='stibp'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='require' name='ssbd'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='require' name='cmp_legacy'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='require' name='overflow-recov'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='require' name='succor'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='require' name='ibrs'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='require' name='amd-ssbd'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='require' name='virt-ssbd'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='disable' name='lbrv'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='disable' name='tsc-scale'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='disable' name='vmcb-clean'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='disable' name='flushbyasid'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='disable' name='pause-filter'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='disable' name='pfthreshold'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='disable' name='xsaves'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='disable' name='svm'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='require' name='topoext'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='disable' name='npt'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='disable' name='nrip-save'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <clock offset='utc'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <timer name='pit' tickpolicy='delay'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <timer name='hpet' present='no'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <on_poweroff>destroy</on_poweroff>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <on_reboot>restart</on_reboot>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <on_crash>destroy</on_crash>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <disk type='network' device='disk'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <auth username='openstack'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:         <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <source protocol='rbd' name='vms/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9_disk' index='2'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:         <host name='192.168.122.100' port='6789'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       </source>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target dev='vda' bus='virtio'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='virtio-disk0'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <disk type='network' device='cdrom'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <auth username='openstack'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:         <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <source protocol='rbd' name='vms/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9_disk.config' index='1'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:         <host name='192.168.122.100' port='6789'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       </source>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target dev='sda' bus='sata'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <readonly/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='sata0-0-0'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='0' model='pcie-root'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pcie.0'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='1' port='0x10'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.1'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='2' port='0x11'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.2'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='3' port='0x12'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.3'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='4' port='0x13'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.4'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='5' port='0x14'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.5'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='6' port='0x15'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.6'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='7' port='0x16'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.7'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='8' port='0x17'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.8'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='9' port='0x18'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.9'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='10' port='0x19'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.10'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='11' port='0x1a'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.11'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='12' port='0x1b'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.12'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='13' port='0x1c'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.13'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='14' port='0x1d'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.14'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='15' port='0x1e'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.15'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='16' port='0x1f'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.16'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='17' port='0x20'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.17'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='18' port='0x21'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.18'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='19' port='0x22'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.19'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='20' port='0x23'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.20'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='21' port='0x24'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.21'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='22' port='0x25'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.22'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='23' port='0x26'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.23'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='24' port='0x27'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.24'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='25' port='0x28'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.25'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-pci-bridge'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.26'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='usb'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='sata' index='0'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='ide'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <interface type='ethernet'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <mac address='fa:16:3e:7c:d6:7a'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target dev='tapa13b6cf4-60'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model type='virtio'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <mtu size='1442'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='net0'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <serial type='pty'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <source path='/dev/pts/0'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <log file='/var/lib/nova/instances/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9/console.log' append='off'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target type='isa-serial' port='0'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:         <model name='isa-serial'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       </target>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='serial0'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <console type='pty' tty='/dev/pts/0'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <source path='/dev/pts/0'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <log file='/var/lib/nova/instances/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9/console.log' append='off'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target type='serial' port='0'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='serial0'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </console>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <input type='tablet' bus='usb'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='input0'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='usb' bus='0' port='1'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </input>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <input type='mouse' bus='ps2'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='input1'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </input>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <input type='keyboard' bus='ps2'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='input2'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </input>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <listen type='address' address='::0'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </graphics>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <audio id='1' type='none'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <video>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model type='virtio' heads='1' primary='yes'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='video0'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </video>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <watchdog model='itco' action='reset'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='watchdog0'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </watchdog>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <memballoon model='virtio'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <stats period='10'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='balloon0'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <rng model='virtio'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <backend model='random'>/dev/urandom</backend>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='rng0'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <label>system_u:system_r:svirt_t:s0:c156,c809</label>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c156,c809</imagelabel>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   </seclabel>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <label>+107:+107</label>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <imagelabel>+107:+107</imagelabel>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   </seclabel>
Nov 25 17:06:19 compute-0 nova_compute[254092]: </domain>
Nov 25 17:06:19 compute-0 nova_compute[254092]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 17:06:19 compute-0 nova_compute[254092]: 2025-11-25 17:06:19.194 254096 DEBUG nova.virt.libvirt.guest [req-a0616eb3-a7a4-4d03-a949-d33f61eb0d3c req-ad99e7aa-bae0-41d7-88bb-638265c52b97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:8b:9c:66"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapd5b14553-42"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 25 17:06:19 compute-0 nova_compute[254092]: 2025-11-25 17:06:19.202 254096 DEBUG nova.virt.libvirt.guest [req-a0616eb3-a7a4-4d03-a949-d33f61eb0d3c req-ad99e7aa-bae0-41d7-88bb-638265c52b97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:8b:9c:66"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapd5b14553-42"/></interface>not found in domain: <domain type='kvm' id='152'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <name>instance-00000078</name>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <uuid>fb2e8439-c6c2-4a88-9e88-faf385d9a4d9</uuid>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <nova:name>tempest-TestNetworkBasicOps-server-1064240069</nova:name>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <nova:creationTime>2025-11-25 17:06:17</nova:creationTime>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <nova:flavor name="m1.nano">
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <nova:memory>128</nova:memory>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <nova:disk>1</nova:disk>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <nova:swap>0</nova:swap>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <nova:vcpus>1</nova:vcpus>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   </nova:flavor>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <nova:owner>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   </nova:owner>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <nova:ports>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <nova:port uuid="a13b6cf4-602d-4af3-b369-9dfa273e1514">
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   </nova:ports>
Nov 25 17:06:19 compute-0 nova_compute[254092]: </nova:instance>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <memory unit='KiB'>131072</memory>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <vcpu placement='static'>1</vcpu>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <resource>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <partition>/machine</partition>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   </resource>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <sysinfo type='smbios'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <system>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <entry name='manufacturer'>RDO</entry>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <entry name='product'>OpenStack Compute</entry>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <entry name='serial'>fb2e8439-c6c2-4a88-9e88-faf385d9a4d9</entry>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <entry name='uuid'>fb2e8439-c6c2-4a88-9e88-faf385d9a4d9</entry>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <entry name='family'>Virtual Machine</entry>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </system>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <os>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <boot dev='hd'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <smbios mode='sysinfo'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   </os>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <features>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <vmcoreinfo state='on'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   </features>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <cpu mode='custom' match='exact' check='full'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <model fallback='forbid'>EPYC-Rome</model>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <vendor>AMD</vendor>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='require' name='x2apic'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='require' name='tsc-deadline'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='require' name='hypervisor'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='require' name='tsc_adjust'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='require' name='spec-ctrl'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='require' name='stibp'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='require' name='ssbd'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='require' name='cmp_legacy'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='require' name='overflow-recov'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='require' name='succor'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='require' name='ibrs'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='require' name='amd-ssbd'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='require' name='virt-ssbd'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='disable' name='lbrv'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='disable' name='tsc-scale'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='disable' name='vmcb-clean'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='disable' name='flushbyasid'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='disable' name='pause-filter'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='disable' name='pfthreshold'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='disable' name='svme-addr-chk'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='require' name='lfence-always-serializing'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='disable' name='xsaves'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='disable' name='svm'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='require' name='topoext'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='disable' name='npt'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <feature policy='disable' name='nrip-save'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <clock offset='utc'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <timer name='pit' tickpolicy='delay'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <timer name='rtc' tickpolicy='catchup'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <timer name='hpet' present='no'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <on_poweroff>destroy</on_poweroff>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <on_reboot>restart</on_reboot>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <on_crash>destroy</on_crash>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <disk type='network' device='disk'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <auth username='openstack'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:         <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <source protocol='rbd' name='vms/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9_disk' index='2'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:         <host name='192.168.122.100' port='6789'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       </source>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target dev='vda' bus='virtio'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='virtio-disk0'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <disk type='network' device='cdrom'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <driver name='qemu' type='raw' cache='none'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <auth username='openstack'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:         <secret type='ceph' uuid='d82baeae-c742-50a4-b8f6-b5257c8a2c92'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <source protocol='rbd' name='vms/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9_disk.config' index='1'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:         <host name='192.168.122.100' port='6789'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       </source>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target dev='sda' bus='sata'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <readonly/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='sata0-0-0'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='0' model='pcie-root'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pcie.0'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='1' port='0x10'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.1'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='2' port='0x11'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.2'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='3' port='0x12'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.3'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='4' port='0x13'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.4'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='5' port='0x14'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.5'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='6' port='0x15'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.6'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='7' port='0x16'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.7'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='8' port='0x17'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.8'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='9' port='0x18'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.9'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='10' port='0x19'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.10'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='11' port='0x1a'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.11'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='12' port='0x1b'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.12'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='13' port='0x1c'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.13'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='14' port='0x1d'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.14'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='15' port='0x1e'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.15'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='16' port='0x1f'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.16'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='17' port='0x20'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.17'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='18' port='0x21'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.18'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='19' port='0x22'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.19'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='20' port='0x23'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.20'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='21' port='0x24'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.21'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='22' port='0x25'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.22'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='23' port='0x26'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.23'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='24' port='0x27'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.24'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-root-port'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target chassis='25' port='0x28'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.25'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model name='pcie-pci-bridge'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='pci.26'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='usb'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <controller type='sata' index='0'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='ide'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </controller>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <interface type='ethernet'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <mac address='fa:16:3e:7c:d6:7a'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target dev='tapa13b6cf4-60'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model type='virtio'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <driver name='vhost' rx_queue_size='512'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <mtu size='1442'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='net0'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <serial type='pty'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <source path='/dev/pts/0'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <log file='/var/lib/nova/instances/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9/console.log' append='off'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target type='isa-serial' port='0'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:         <model name='isa-serial'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       </target>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='serial0'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <console type='pty' tty='/dev/pts/0'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <source path='/dev/pts/0'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <log file='/var/lib/nova/instances/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9/console.log' append='off'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <target type='serial' port='0'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='serial0'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </console>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <input type='tablet' bus='usb'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='input0'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='usb' bus='0' port='1'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </input>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <input type='mouse' bus='ps2'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='input1'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </input>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <input type='keyboard' bus='ps2'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='input2'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </input>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <listen type='address' address='::0'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </graphics>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <audio id='1' type='none'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <video>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <model type='virtio' heads='1' primary='yes'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='video0'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </video>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <watchdog model='itco' action='reset'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='watchdog0'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </watchdog>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <memballoon model='virtio'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <stats period='10'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='balloon0'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <rng model='virtio'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <backend model='random'>/dev/urandom</backend>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <alias name='rng0'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <label>system_u:system_r:svirt_t:s0:c156,c809</label>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c156,c809</imagelabel>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   </seclabel>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <label>+107:+107</label>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <imagelabel>+107:+107</imagelabel>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   </seclabel>
Nov 25 17:06:19 compute-0 nova_compute[254092]: </domain>
Nov 25 17:06:19 compute-0 nova_compute[254092]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Nov 25 17:06:19 compute-0 nova_compute[254092]: 2025-11-25 17:06:19.202 254096 WARNING nova.virt.libvirt.driver [req-a0616eb3-a7a4-4d03-a949-d33f61eb0d3c req-ad99e7aa-bae0-41d7-88bb-638265c52b97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Detaching interface fa:16:3e:8b:9c:66 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tapd5b14553-42' not found.
Nov 25 17:06:19 compute-0 nova_compute[254092]: 2025-11-25 17:06:19.203 254096 DEBUG nova.virt.libvirt.vif [req-a0616eb3-a7a4-4d03-a949-d33f61eb0d3c req-ad99e7aa-bae0-41d7-88bb-638265c52b97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:04:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1064240069',display_name='tempest-TestNetworkBasicOps-server-1064240069',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1064240069',id=120,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJxRJG0xj4lNhOZp9ZWVpUqIVMXoOTqx2HT5kEbWz5Kzrp9lUM1esv4/Y9y+6X72HfZQHeQJVGMfF2AfhTMmKKwlb9fZPc7AjVKpzxCULHdu4UtL6CrGGLwggOltMQGwgQ==',key_name='tempest-TestNetworkBasicOps-808036867',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:04:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-ziqlckum',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:04:38Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=fb2e8439-c6c2-4a88-9e88-faf385d9a4d9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d5b14553-424b-4985-9ed6-2f4afac92c00", "address": "fa:16:3e:8b:9c:66", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b14553-42", "ovs_interfaceid": "d5b14553-424b-4985-9ed6-2f4afac92c00", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:06:19 compute-0 nova_compute[254092]: 2025-11-25 17:06:19.204 254096 DEBUG nova.network.os_vif_util [req-a0616eb3-a7a4-4d03-a949-d33f61eb0d3c req-ad99e7aa-bae0-41d7-88bb-638265c52b97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Converting VIF {"id": "d5b14553-424b-4985-9ed6-2f4afac92c00", "address": "fa:16:3e:8b:9c:66", "network": {"id": "4a48b006-a4d1-4fa5-88f1-79386ed2958f", "bridge": "br-int", "label": "tempest-network-smoke--1693378648", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5b14553-42", "ovs_interfaceid": "d5b14553-424b-4985-9ed6-2f4afac92c00", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:06:19 compute-0 nova_compute[254092]: 2025-11-25 17:06:19.204 254096 DEBUG nova.network.os_vif_util [req-a0616eb3-a7a4-4d03-a949-d33f61eb0d3c req-ad99e7aa-bae0-41d7-88bb-638265c52b97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8b:9c:66,bridge_name='br-int',has_traffic_filtering=True,id=d5b14553-424b-4985-9ed6-2f4afac92c00,network=Network(4a48b006-a4d1-4fa5-88f1-79386ed2958f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5b14553-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:06:19 compute-0 nova_compute[254092]: 2025-11-25 17:06:19.205 254096 DEBUG os_vif [req-a0616eb3-a7a4-4d03-a949-d33f61eb0d3c req-ad99e7aa-bae0-41d7-88bb-638265c52b97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8b:9c:66,bridge_name='br-int',has_traffic_filtering=True,id=d5b14553-424b-4985-9ed6-2f4afac92c00,network=Network(4a48b006-a4d1-4fa5-88f1-79386ed2958f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5b14553-42') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:06:19 compute-0 nova_compute[254092]: 2025-11-25 17:06:19.206 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:19 compute-0 nova_compute[254092]: 2025-11-25 17:06:19.206 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5b14553-42, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:06:19 compute-0 nova_compute[254092]: 2025-11-25 17:06:19.206 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:06:19 compute-0 nova_compute[254092]: 2025-11-25 17:06:19.209 254096 INFO os_vif [req-a0616eb3-a7a4-4d03-a949-d33f61eb0d3c req-ad99e7aa-bae0-41d7-88bb-638265c52b97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8b:9c:66,bridge_name='br-int',has_traffic_filtering=True,id=d5b14553-424b-4985-9ed6-2f4afac92c00,network=Network(4a48b006-a4d1-4fa5-88f1-79386ed2958f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5b14553-42')
Nov 25 17:06:19 compute-0 nova_compute[254092]: 2025-11-25 17:06:19.209 254096 DEBUG nova.virt.libvirt.guest [req-a0616eb3-a7a4-4d03-a949-d33f61eb0d3c req-ad99e7aa-bae0-41d7-88bb-638265c52b97 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <nova:name>tempest-TestNetworkBasicOps-server-1064240069</nova:name>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <nova:creationTime>2025-11-25 17:06:19</nova:creationTime>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <nova:flavor name="m1.nano">
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <nova:memory>128</nova:memory>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <nova:disk>1</nova:disk>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <nova:swap>0</nova:swap>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <nova:vcpus>1</nova:vcpus>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   </nova:flavor>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <nova:owner>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   </nova:owner>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   <nova:ports>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     <nova:port uuid="a13b6cf4-602d-4af3-b369-9dfa273e1514">
Nov 25 17:06:19 compute-0 nova_compute[254092]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 17:06:19 compute-0 nova_compute[254092]:     </nova:port>
Nov 25 17:06:19 compute-0 nova_compute[254092]:   </nova:ports>
Nov 25 17:06:19 compute-0 nova_compute[254092]: </nova:instance>
Nov 25 17:06:19 compute-0 nova_compute[254092]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Nov 25 17:06:19 compute-0 ceph-mon[74985]: pgmap v2483: 321 pgs: 321 active+clean; 246 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 538 KiB/s wr, 113 op/s
Nov 25 17:06:20 compute-0 nova_compute[254092]: 2025-11-25 17:06:20.016 254096 DEBUG nova.compute.manager [req-6c0fc1d0-93d0-4537-91b0-971c90d20d6e req-c0005787-52bf-4431-bcd7-9cd934604e66 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Received event network-vif-plugged-d5b14553-424b-4985-9ed6-2f4afac92c00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:06:20 compute-0 nova_compute[254092]: 2025-11-25 17:06:20.016 254096 DEBUG oslo_concurrency.lockutils [req-6c0fc1d0-93d0-4537-91b0-971c90d20d6e req-c0005787-52bf-4431-bcd7-9cd934604e66 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:06:20 compute-0 nova_compute[254092]: 2025-11-25 17:06:20.017 254096 DEBUG oslo_concurrency.lockutils [req-6c0fc1d0-93d0-4537-91b0-971c90d20d6e req-c0005787-52bf-4431-bcd7-9cd934604e66 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:06:20 compute-0 nova_compute[254092]: 2025-11-25 17:06:20.017 254096 DEBUG oslo_concurrency.lockutils [req-6c0fc1d0-93d0-4537-91b0-971c90d20d6e req-c0005787-52bf-4431-bcd7-9cd934604e66 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:06:20 compute-0 nova_compute[254092]: 2025-11-25 17:06:20.017 254096 DEBUG nova.compute.manager [req-6c0fc1d0-93d0-4537-91b0-971c90d20d6e req-c0005787-52bf-4431-bcd7-9cd934604e66 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] No waiting events found dispatching network-vif-plugged-d5b14553-424b-4985-9ed6-2f4afac92c00 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:06:20 compute-0 nova_compute[254092]: 2025-11-25 17:06:20.017 254096 WARNING nova.compute.manager [req-6c0fc1d0-93d0-4537-91b0-971c90d20d6e req-c0005787-52bf-4431-bcd7-9cd934604e66 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Received unexpected event network-vif-plugged-d5b14553-424b-4985-9ed6-2f4afac92c00 for instance with vm_state active and task_state None.
Nov 25 17:06:20 compute-0 nova_compute[254092]: 2025-11-25 17:06:20.342 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2484: 321 pgs: 321 active+clean; 246 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 538 KiB/s wr, 113 op/s
Nov 25 17:06:21 compute-0 ovn_controller[153477]: 2025-11-25T17:06:21Z|01280|binding|INFO|Releasing lport c21d5ea9-f23a-4c6c-8f75-917f0bc5fa35 from this chassis (sb_readonly=0)
Nov 25 17:06:21 compute-0 ovn_controller[153477]: 2025-11-25T17:06:21Z|01281|binding|INFO|Releasing lport 77b25960-ed8e-4022-ac16-27312b8189a1 from this chassis (sb_readonly=0)
Nov 25 17:06:21 compute-0 ceph-mon[74985]: pgmap v2484: 321 pgs: 321 active+clean; 246 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 538 KiB/s wr, 113 op/s
Nov 25 17:06:21 compute-0 nova_compute[254092]: 2025-11-25 17:06:21.725 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:21 compute-0 nova_compute[254092]: 2025-11-25 17:06:21.840 254096 INFO nova.network.neutron [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Port d5b14553-424b-4985-9ed6-2f4afac92c00 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Nov 25 17:06:21 compute-0 nova_compute[254092]: 2025-11-25 17:06:21.840 254096 DEBUG nova.network.neutron [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Updating instance_info_cache with network_info: [{"id": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "address": "fa:16:3e:7c:d6:7a", "network": {"id": "136c69a7-c4f8-40a2-be13-7ef82b7b3709", "bridge": "br-int", "label": "tempest-network-smoke--1102379118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13b6cf4-60", "ovs_interfaceid": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:06:21 compute-0 nova_compute[254092]: 2025-11-25 17:06:21.854 254096 DEBUG oslo_concurrency.lockutils [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Releasing lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:06:21 compute-0 nova_compute[254092]: 2025-11-25 17:06:21.877 254096 DEBUG oslo_concurrency.lockutils [None req-014e163f-2b2b-4d34-8b0a-bcff2e129ebc e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "interface-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-d5b14553-424b-4985-9ed6-2f4afac92c00" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 4.452s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:06:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2485: 321 pgs: 321 active+clean; 246 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 101 op/s
Nov 25 17:06:22 compute-0 nova_compute[254092]: 2025-11-25 17:06:22.673 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:22 compute-0 nova_compute[254092]: 2025-11-25 17:06:22.786 254096 DEBUG nova.compute.manager [req-bb5bca7a-74a0-49d5-9c6b-72c58d8d7098 req-9f557562-9812-4d9c-b802-1f36119bbfbc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Received event network-changed-a13b6cf4-602d-4af3-b369-9dfa273e1514 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:06:22 compute-0 nova_compute[254092]: 2025-11-25 17:06:22.787 254096 DEBUG nova.compute.manager [req-bb5bca7a-74a0-49d5-9c6b-72c58d8d7098 req-9f557562-9812-4d9c-b802-1f36119bbfbc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Refreshing instance network info cache due to event network-changed-a13b6cf4-602d-4af3-b369-9dfa273e1514. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:06:22 compute-0 nova_compute[254092]: 2025-11-25 17:06:22.787 254096 DEBUG oslo_concurrency.lockutils [req-bb5bca7a-74a0-49d5-9c6b-72c58d8d7098 req-9f557562-9812-4d9c-b802-1f36119bbfbc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:06:22 compute-0 nova_compute[254092]: 2025-11-25 17:06:22.787 254096 DEBUG oslo_concurrency.lockutils [req-bb5bca7a-74a0-49d5-9c6b-72c58d8d7098 req-9f557562-9812-4d9c-b802-1f36119bbfbc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:06:22 compute-0 nova_compute[254092]: 2025-11-25 17:06:22.787 254096 DEBUG nova.network.neutron [req-bb5bca7a-74a0-49d5-9c6b-72c58d8d7098 req-9f557562-9812-4d9c-b802-1f36119bbfbc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Refreshing network info cache for port a13b6cf4-602d-4af3-b369-9dfa273e1514 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:06:22 compute-0 nova_compute[254092]: 2025-11-25 17:06:22.874 254096 DEBUG oslo_concurrency.lockutils [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:06:22 compute-0 nova_compute[254092]: 2025-11-25 17:06:22.875 254096 DEBUG oslo_concurrency.lockutils [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:06:22 compute-0 nova_compute[254092]: 2025-11-25 17:06:22.875 254096 DEBUG oslo_concurrency.lockutils [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:06:22 compute-0 nova_compute[254092]: 2025-11-25 17:06:22.876 254096 DEBUG oslo_concurrency.lockutils [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:06:22 compute-0 nova_compute[254092]: 2025-11-25 17:06:22.876 254096 DEBUG oslo_concurrency.lockutils [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:06:22 compute-0 nova_compute[254092]: 2025-11-25 17:06:22.877 254096 INFO nova.compute.manager [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Terminating instance
Nov 25 17:06:22 compute-0 nova_compute[254092]: 2025-11-25 17:06:22.879 254096 DEBUG nova.compute.manager [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 17:06:22 compute-0 kernel: tapa13b6cf4-60 (unregistering): left promiscuous mode
Nov 25 17:06:22 compute-0 NetworkManager[48891]: <info>  [1764090382.9501] device (tapa13b6cf4-60): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:06:22 compute-0 nova_compute[254092]: 2025-11-25 17:06:22.968 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:22 compute-0 ovn_controller[153477]: 2025-11-25T17:06:22Z|01282|binding|INFO|Releasing lport a13b6cf4-602d-4af3-b369-9dfa273e1514 from this chassis (sb_readonly=0)
Nov 25 17:06:22 compute-0 ovn_controller[153477]: 2025-11-25T17:06:22Z|01283|binding|INFO|Setting lport a13b6cf4-602d-4af3-b369-9dfa273e1514 down in Southbound
Nov 25 17:06:22 compute-0 ovn_controller[153477]: 2025-11-25T17:06:22Z|01284|binding|INFO|Removing iface tapa13b6cf4-60 ovn-installed in OVS
Nov 25 17:06:22 compute-0 nova_compute[254092]: 2025-11-25 17:06:22.972 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:22.980 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:d6:7a 10.100.0.11'], port_security=['fa:16:3e:7c:d6:7a 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'fb2e8439-c6c2-4a88-9e88-faf385d9a4d9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-136c69a7-c4f8-40a2-be13-7ef82b7b3709', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e177dffd-fd87-489e-a59d-1d241fe7a148', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ad26d7dc-a577-438b-b143-107c43340ab4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=a13b6cf4-602d-4af3-b369-9dfa273e1514) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:06:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:22.981 163338 INFO neutron.agent.ovn.metadata.agent [-] Port a13b6cf4-602d-4af3-b369-9dfa273e1514 in datapath 136c69a7-c4f8-40a2-be13-7ef82b7b3709 unbound from our chassis
Nov 25 17:06:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:22.981 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 136c69a7-c4f8-40a2-be13-7ef82b7b3709, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 17:06:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:22.983 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ab3fc7ea-9b8e-4037-9b4f-6c41c94eef4e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:22.984 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-136c69a7-c4f8-40a2-be13-7ef82b7b3709 namespace which is not needed anymore
Nov 25 17:06:22 compute-0 nova_compute[254092]: 2025-11-25 17:06:22.989 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:23 compute-0 systemd[1]: machine-qemu\x2d152\x2dinstance\x2d00000078.scope: Deactivated successfully.
Nov 25 17:06:23 compute-0 systemd[1]: machine-qemu\x2d152\x2dinstance\x2d00000078.scope: Consumed 18.021s CPU time.
Nov 25 17:06:23 compute-0 systemd-machined[216343]: Machine qemu-152-instance-00000078 terminated.
Nov 25 17:06:23 compute-0 kernel: tapa13b6cf4-60: entered promiscuous mode
Nov 25 17:06:23 compute-0 kernel: tapa13b6cf4-60 (unregistering): left promiscuous mode
Nov 25 17:06:23 compute-0 NetworkManager[48891]: <info>  [1764090383.1012] manager: (tapa13b6cf4-60): new Tun device (/org/freedesktop/NetworkManager/Devices/530)
Nov 25 17:06:23 compute-0 nova_compute[254092]: 2025-11-25 17:06:23.105 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:23 compute-0 nova_compute[254092]: 2025-11-25 17:06:23.122 254096 INFO nova.virt.libvirt.driver [-] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Instance destroyed successfully.
Nov 25 17:06:23 compute-0 nova_compute[254092]: 2025-11-25 17:06:23.124 254096 DEBUG nova.objects.instance [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'resources' on Instance uuid fb2e8439-c6c2-4a88-9e88-faf385d9a4d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:06:23 compute-0 neutron-haproxy-ovnmeta-136c69a7-c4f8-40a2-be13-7ef82b7b3709[387200]: [NOTICE]   (387211) : haproxy version is 2.8.14-c23fe91
Nov 25 17:06:23 compute-0 neutron-haproxy-ovnmeta-136c69a7-c4f8-40a2-be13-7ef82b7b3709[387200]: [NOTICE]   (387211) : path to executable is /usr/sbin/haproxy
Nov 25 17:06:23 compute-0 neutron-haproxy-ovnmeta-136c69a7-c4f8-40a2-be13-7ef82b7b3709[387200]: [WARNING]  (387211) : Exiting Master process...
Nov 25 17:06:23 compute-0 neutron-haproxy-ovnmeta-136c69a7-c4f8-40a2-be13-7ef82b7b3709[387200]: [WARNING]  (387211) : Exiting Master process...
Nov 25 17:06:23 compute-0 nova_compute[254092]: 2025-11-25 17:06:23.134 254096 DEBUG nova.virt.libvirt.vif [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:04:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1064240069',display_name='tempest-TestNetworkBasicOps-server-1064240069',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1064240069',id=120,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJxRJG0xj4lNhOZp9ZWVpUqIVMXoOTqx2HT5kEbWz5Kzrp9lUM1esv4/Y9y+6X72HfZQHeQJVGMfF2AfhTMmKKwlb9fZPc7AjVKpzxCULHdu4UtL6CrGGLwggOltMQGwgQ==',key_name='tempest-TestNetworkBasicOps-808036867',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:04:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-ziqlckum',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:04:38Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=fb2e8439-c6c2-4a88-9e88-faf385d9a4d9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "address": "fa:16:3e:7c:d6:7a", "network": {"id": "136c69a7-c4f8-40a2-be13-7ef82b7b3709", "bridge": "br-int", "label": "tempest-network-smoke--1102379118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13b6cf4-60", "ovs_interfaceid": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:06:23 compute-0 neutron-haproxy-ovnmeta-136c69a7-c4f8-40a2-be13-7ef82b7b3709[387200]: [ALERT]    (387211) : Current worker (387217) exited with code 143 (Terminated)
Nov 25 17:06:23 compute-0 neutron-haproxy-ovnmeta-136c69a7-c4f8-40a2-be13-7ef82b7b3709[387200]: [WARNING]  (387211) : All workers exited. Exiting... (0)
Nov 25 17:06:23 compute-0 nova_compute[254092]: 2025-11-25 17:06:23.134 254096 DEBUG nova.network.os_vif_util [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "address": "fa:16:3e:7c:d6:7a", "network": {"id": "136c69a7-c4f8-40a2-be13-7ef82b7b3709", "bridge": "br-int", "label": "tempest-network-smoke--1102379118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13b6cf4-60", "ovs_interfaceid": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:06:23 compute-0 nova_compute[254092]: 2025-11-25 17:06:23.135 254096 DEBUG nova.network.os_vif_util [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7c:d6:7a,bridge_name='br-int',has_traffic_filtering=True,id=a13b6cf4-602d-4af3-b369-9dfa273e1514,network=Network(136c69a7-c4f8-40a2-be13-7ef82b7b3709),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa13b6cf4-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:06:23 compute-0 nova_compute[254092]: 2025-11-25 17:06:23.135 254096 DEBUG os_vif [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7c:d6:7a,bridge_name='br-int',has_traffic_filtering=True,id=a13b6cf4-602d-4af3-b369-9dfa273e1514,network=Network(136c69a7-c4f8-40a2-be13-7ef82b7b3709),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa13b6cf4-60') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:06:23 compute-0 systemd[1]: libpod-be459a871ff06d6d95ace21d31adaea26bc0530b8e7d8f38d64b7c0fa602731f.scope: Deactivated successfully.
Nov 25 17:06:23 compute-0 nova_compute[254092]: 2025-11-25 17:06:23.138 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:23 compute-0 nova_compute[254092]: 2025-11-25 17:06:23.139 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa13b6cf4-60, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:06:23 compute-0 nova_compute[254092]: 2025-11-25 17:06:23.142 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:06:23 compute-0 nova_compute[254092]: 2025-11-25 17:06:23.144 254096 INFO os_vif [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7c:d6:7a,bridge_name='br-int',has_traffic_filtering=True,id=a13b6cf4-602d-4af3-b369-9dfa273e1514,network=Network(136c69a7-c4f8-40a2-be13-7ef82b7b3709),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa13b6cf4-60')
Nov 25 17:06:23 compute-0 podman[390479]: 2025-11-25 17:06:23.145111697 +0000 UTC m=+0.056720186 container died be459a871ff06d6d95ace21d31adaea26bc0530b8e7d8f38d64b7c0fa602731f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-136c69a7-c4f8-40a2-be13-7ef82b7b3709, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 17:06:23 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-be459a871ff06d6d95ace21d31adaea26bc0530b8e7d8f38d64b7c0fa602731f-userdata-shm.mount: Deactivated successfully.
Nov 25 17:06:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1ef1abaaa64a067771ec5da28f54ab4615764922df7ecc3992312613eab4146-merged.mount: Deactivated successfully.
Nov 25 17:06:23 compute-0 podman[390479]: 2025-11-25 17:06:23.18678168 +0000 UTC m=+0.098390169 container cleanup be459a871ff06d6d95ace21d31adaea26bc0530b8e7d8f38d64b7c0fa602731f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-136c69a7-c4f8-40a2-be13-7ef82b7b3709, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 17:06:23 compute-0 systemd[1]: libpod-conmon-be459a871ff06d6d95ace21d31adaea26bc0530b8e7d8f38d64b7c0fa602731f.scope: Deactivated successfully.
Nov 25 17:06:23 compute-0 podman[390533]: 2025-11-25 17:06:23.252084651 +0000 UTC m=+0.044721108 container remove be459a871ff06d6d95ace21d31adaea26bc0530b8e7d8f38d64b7c0fa602731f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-136c69a7-c4f8-40a2-be13-7ef82b7b3709, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 17:06:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:23.258 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3b89fec4-593f-46e7-9b73-d54aaad4c767]: (4, ('Tue Nov 25 05:06:23 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-136c69a7-c4f8-40a2-be13-7ef82b7b3709 (be459a871ff06d6d95ace21d31adaea26bc0530b8e7d8f38d64b7c0fa602731f)\nbe459a871ff06d6d95ace21d31adaea26bc0530b8e7d8f38d64b7c0fa602731f\nTue Nov 25 05:06:23 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-136c69a7-c4f8-40a2-be13-7ef82b7b3709 (be459a871ff06d6d95ace21d31adaea26bc0530b8e7d8f38d64b7c0fa602731f)\nbe459a871ff06d6d95ace21d31adaea26bc0530b8e7d8f38d64b7c0fa602731f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:23.260 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[42344219-32b3-4050-9238-aca543198ee5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:23.260 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap136c69a7-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:06:23 compute-0 kernel: tap136c69a7-c0: left promiscuous mode
Nov 25 17:06:23 compute-0 nova_compute[254092]: 2025-11-25 17:06:23.264 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:23 compute-0 nova_compute[254092]: 2025-11-25 17:06:23.276 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:23.279 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[add525f8-0863-4032-a9d8-2d7445b0b790]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:23.299 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[983af79e-fbbb-416b-96c5-46e014bac094]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:23.301 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ba889745-209c-4a7d-8397-6884e96ca091]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:23.315 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3d9d961b-1e9a-4d2f-83d9-885f03979017]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 673409, 'reachable_time': 31258, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 390548, 'error': None, 'target': 'ovnmeta-136c69a7-c4f8-40a2-be13-7ef82b7b3709', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:23.318 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-136c69a7-c4f8-40a2-be13-7ef82b7b3709 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 17:06:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:23.318 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[567bd2e8-9915-427a-9f5c-c0a828fc6722]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:23 compute-0 systemd[1]: run-netns-ovnmeta\x2d136c69a7\x2dc4f8\x2d40a2\x2dbe13\x2d7ef82b7b3709.mount: Deactivated successfully.
Nov 25 17:06:23 compute-0 nova_compute[254092]: 2025-11-25 17:06:23.520 254096 INFO nova.virt.libvirt.driver [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Deleting instance files /var/lib/nova/instances/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9_del
Nov 25 17:06:23 compute-0 nova_compute[254092]: 2025-11-25 17:06:23.521 254096 INFO nova.virt.libvirt.driver [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Deletion of /var/lib/nova/instances/fb2e8439-c6c2-4a88-9e88-faf385d9a4d9_del complete
Nov 25 17:06:23 compute-0 nova_compute[254092]: 2025-11-25 17:06:23.601 254096 INFO nova.compute.manager [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Took 0.72 seconds to destroy the instance on the hypervisor.
Nov 25 17:06:23 compute-0 nova_compute[254092]: 2025-11-25 17:06:23.601 254096 DEBUG oslo.service.loopingcall [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 17:06:23 compute-0 nova_compute[254092]: 2025-11-25 17:06:23.602 254096 DEBUG nova.compute.manager [-] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 17:06:23 compute-0 nova_compute[254092]: 2025-11-25 17:06:23.603 254096 DEBUG nova.network.neutron [-] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 17:06:23 compute-0 ceph-mon[74985]: pgmap v2485: 321 pgs: 321 active+clean; 246 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 101 op/s
Nov 25 17:06:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:06:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2486: 321 pgs: 321 active+clean; 246 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 7.5 KiB/s wr, 95 op/s
Nov 25 17:06:24 compute-0 ovn_controller[153477]: 2025-11-25T17:06:24Z|00148|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:81:3c:32 10.100.0.5
Nov 25 17:06:24 compute-0 ovn_controller[153477]: 2025-11-25T17:06:24Z|00149|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:81:3c:32 10.100.0.5
Nov 25 17:06:24 compute-0 nova_compute[254092]: 2025-11-25 17:06:24.854 254096 DEBUG nova.network.neutron [-] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:06:24 compute-0 nova_compute[254092]: 2025-11-25 17:06:24.880 254096 INFO nova.compute.manager [-] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Took 1.28 seconds to deallocate network for instance.
Nov 25 17:06:24 compute-0 nova_compute[254092]: 2025-11-25 17:06:24.904 254096 DEBUG nova.compute.manager [req-aeb8a65f-c3b8-4501-b383-70b391d41aa8 req-c747639f-e913-425f-a68e-29392dc08442 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Received event network-vif-unplugged-a13b6cf4-602d-4af3-b369-9dfa273e1514 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:06:24 compute-0 nova_compute[254092]: 2025-11-25 17:06:24.906 254096 DEBUG oslo_concurrency.lockutils [req-aeb8a65f-c3b8-4501-b383-70b391d41aa8 req-c747639f-e913-425f-a68e-29392dc08442 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:06:24 compute-0 nova_compute[254092]: 2025-11-25 17:06:24.906 254096 DEBUG oslo_concurrency.lockutils [req-aeb8a65f-c3b8-4501-b383-70b391d41aa8 req-c747639f-e913-425f-a68e-29392dc08442 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:06:24 compute-0 nova_compute[254092]: 2025-11-25 17:06:24.906 254096 DEBUG oslo_concurrency.lockutils [req-aeb8a65f-c3b8-4501-b383-70b391d41aa8 req-c747639f-e913-425f-a68e-29392dc08442 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:06:24 compute-0 nova_compute[254092]: 2025-11-25 17:06:24.907 254096 DEBUG nova.compute.manager [req-aeb8a65f-c3b8-4501-b383-70b391d41aa8 req-c747639f-e913-425f-a68e-29392dc08442 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] No waiting events found dispatching network-vif-unplugged-a13b6cf4-602d-4af3-b369-9dfa273e1514 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:06:24 compute-0 nova_compute[254092]: 2025-11-25 17:06:24.907 254096 DEBUG nova.compute.manager [req-aeb8a65f-c3b8-4501-b383-70b391d41aa8 req-c747639f-e913-425f-a68e-29392dc08442 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Received event network-vif-unplugged-a13b6cf4-602d-4af3-b369-9dfa273e1514 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 17:06:24 compute-0 nova_compute[254092]: 2025-11-25 17:06:24.907 254096 DEBUG nova.compute.manager [req-aeb8a65f-c3b8-4501-b383-70b391d41aa8 req-c747639f-e913-425f-a68e-29392dc08442 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Received event network-vif-plugged-a13b6cf4-602d-4af3-b369-9dfa273e1514 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:06:24 compute-0 nova_compute[254092]: 2025-11-25 17:06:24.907 254096 DEBUG oslo_concurrency.lockutils [req-aeb8a65f-c3b8-4501-b383-70b391d41aa8 req-c747639f-e913-425f-a68e-29392dc08442 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:06:24 compute-0 nova_compute[254092]: 2025-11-25 17:06:24.907 254096 DEBUG oslo_concurrency.lockutils [req-aeb8a65f-c3b8-4501-b383-70b391d41aa8 req-c747639f-e913-425f-a68e-29392dc08442 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:06:24 compute-0 nova_compute[254092]: 2025-11-25 17:06:24.908 254096 DEBUG oslo_concurrency.lockutils [req-aeb8a65f-c3b8-4501-b383-70b391d41aa8 req-c747639f-e913-425f-a68e-29392dc08442 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:06:24 compute-0 nova_compute[254092]: 2025-11-25 17:06:24.908 254096 DEBUG nova.compute.manager [req-aeb8a65f-c3b8-4501-b383-70b391d41aa8 req-c747639f-e913-425f-a68e-29392dc08442 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] No waiting events found dispatching network-vif-plugged-a13b6cf4-602d-4af3-b369-9dfa273e1514 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:06:24 compute-0 nova_compute[254092]: 2025-11-25 17:06:24.908 254096 WARNING nova.compute.manager [req-aeb8a65f-c3b8-4501-b383-70b391d41aa8 req-c747639f-e913-425f-a68e-29392dc08442 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Received unexpected event network-vif-plugged-a13b6cf4-602d-4af3-b369-9dfa273e1514 for instance with vm_state active and task_state deleting.
Nov 25 17:06:24 compute-0 nova_compute[254092]: 2025-11-25 17:06:24.927 254096 DEBUG oslo_concurrency.lockutils [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:06:24 compute-0 nova_compute[254092]: 2025-11-25 17:06:24.928 254096 DEBUG oslo_concurrency.lockutils [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:06:25 compute-0 nova_compute[254092]: 2025-11-25 17:06:25.014 254096 DEBUG nova.network.neutron [req-bb5bca7a-74a0-49d5-9c6b-72c58d8d7098 req-9f557562-9812-4d9c-b802-1f36119bbfbc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Updated VIF entry in instance network info cache for port a13b6cf4-602d-4af3-b369-9dfa273e1514. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:06:25 compute-0 nova_compute[254092]: 2025-11-25 17:06:25.015 254096 DEBUG nova.network.neutron [req-bb5bca7a-74a0-49d5-9c6b-72c58d8d7098 req-9f557562-9812-4d9c-b802-1f36119bbfbc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Updating instance_info_cache with network_info: [{"id": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "address": "fa:16:3e:7c:d6:7a", "network": {"id": "136c69a7-c4f8-40a2-be13-7ef82b7b3709", "bridge": "br-int", "label": "tempest-network-smoke--1102379118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa13b6cf4-60", "ovs_interfaceid": "a13b6cf4-602d-4af3-b369-9dfa273e1514", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:06:25 compute-0 nova_compute[254092]: 2025-11-25 17:06:25.037 254096 DEBUG oslo_concurrency.lockutils [req-bb5bca7a-74a0-49d5-9c6b-72c58d8d7098 req-9f557562-9812-4d9c-b802-1f36119bbfbc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:06:25 compute-0 nova_compute[254092]: 2025-11-25 17:06:25.052 254096 DEBUG oslo_concurrency.processutils [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:06:25 compute-0 nova_compute[254092]: 2025-11-25 17:06:25.344 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:06:25 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3892230981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:06:25 compute-0 nova_compute[254092]: 2025-11-25 17:06:25.532 254096 DEBUG oslo_concurrency.processutils [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:06:25 compute-0 nova_compute[254092]: 2025-11-25 17:06:25.542 254096 DEBUG nova.compute.provider_tree [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:06:25 compute-0 nova_compute[254092]: 2025-11-25 17:06:25.560 254096 DEBUG nova.scheduler.client.report [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:06:25 compute-0 nova_compute[254092]: 2025-11-25 17:06:25.583 254096 DEBUG oslo_concurrency.lockutils [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:06:25 compute-0 nova_compute[254092]: 2025-11-25 17:06:25.610 254096 INFO nova.scheduler.client.report [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Deleted allocations for instance fb2e8439-c6c2-4a88-9e88-faf385d9a4d9
Nov 25 17:06:25 compute-0 ceph-mon[74985]: pgmap v2486: 321 pgs: 321 active+clean; 246 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 7.5 KiB/s wr, 95 op/s
Nov 25 17:06:25 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3892230981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:06:25 compute-0 nova_compute[254092]: 2025-11-25 17:06:25.780 254096 DEBUG oslo_concurrency.lockutils [None req-792a15e1-ee12-4c43-b5d2-d2df82359f02 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "fb2e8439-c6c2-4a88-9e88-faf385d9a4d9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.905s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:06:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2487: 321 pgs: 321 active+clean; 198 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 182 op/s
Nov 25 17:06:26 compute-0 nova_compute[254092]: 2025-11-25 17:06:26.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:06:27 compute-0 nova_compute[254092]: 2025-11-25 17:06:27.023 254096 DEBUG nova.compute.manager [req-e7b285b9-9ed2-4948-a757-9a176cd14b45 req-9ca61513-5d6d-43ba-a488-c6d7c0a9857c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Received event network-vif-deleted-a13b6cf4-602d-4af3-b369-9dfa273e1514 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:06:27 compute-0 nova_compute[254092]: 2025-11-25 17:06:27.023 254096 INFO nova.compute.manager [req-e7b285b9-9ed2-4948-a757-9a176cd14b45 req-9ca61513-5d6d-43ba-a488-c6d7c0a9857c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Neutron deleted interface a13b6cf4-602d-4af3-b369-9dfa273e1514; detaching it from the instance and deleting it from the info cache
Nov 25 17:06:27 compute-0 nova_compute[254092]: 2025-11-25 17:06:27.024 254096 DEBUG nova.network.neutron [req-e7b285b9-9ed2-4948-a757-9a176cd14b45 req-9ca61513-5d6d-43ba-a488-c6d7c0a9857c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Nov 25 17:06:27 compute-0 nova_compute[254092]: 2025-11-25 17:06:27.026 254096 DEBUG nova.compute.manager [req-e7b285b9-9ed2-4948-a757-9a176cd14b45 req-9ca61513-5d6d-43ba-a488-c6d7c0a9857c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Detach interface failed, port_id=a13b6cf4-602d-4af3-b369-9dfa273e1514, reason: Instance fb2e8439-c6c2-4a88-9e88-faf385d9a4d9 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 25 17:06:27 compute-0 nova_compute[254092]: 2025-11-25 17:06:27.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:06:27 compute-0 ceph-mon[74985]: pgmap v2487: 321 pgs: 321 active+clean; 198 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 182 op/s
Nov 25 17:06:28 compute-0 nova_compute[254092]: 2025-11-25 17:06:28.140 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2488: 321 pgs: 321 active+clean; 198 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 316 KiB/s rd, 2.1 MiB/s wr, 87 op/s
Nov 25 17:06:28 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:06:29 compute-0 nova_compute[254092]: 2025-11-25 17:06:29.203 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090374.202772, b747b045-786f-49a8-907c-cc222a07fa05 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:06:29 compute-0 nova_compute[254092]: 2025-11-25 17:06:29.204 254096 INFO nova.compute.manager [-] [instance: b747b045-786f-49a8-907c-cc222a07fa05] VM Stopped (Lifecycle Event)
Nov 25 17:06:29 compute-0 nova_compute[254092]: 2025-11-25 17:06:29.225 254096 DEBUG nova.compute.manager [None req-c4f46b7f-1820-42ab-9a90-c0f7dc5d9115 - - - - - -] [instance: b747b045-786f-49a8-907c-cc222a07fa05] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:06:29 compute-0 ceph-mon[74985]: pgmap v2488: 321 pgs: 321 active+clean; 198 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 316 KiB/s rd, 2.1 MiB/s wr, 87 op/s
Nov 25 17:06:30 compute-0 ovn_controller[153477]: 2025-11-25T17:06:30Z|01285|binding|INFO|Releasing lport c21d5ea9-f23a-4c6c-8f75-917f0bc5fa35 from this chassis (sb_readonly=0)
Nov 25 17:06:30 compute-0 nova_compute[254092]: 2025-11-25 17:06:30.394 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2489: 321 pgs: 321 active+clean; 200 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Nov 25 17:06:30 compute-0 nova_compute[254092]: 2025-11-25 17:06:30.793 254096 DEBUG oslo_concurrency.lockutils [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "a4e18007-11e8-4531-9dc8-8cbc10fe2b75" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:06:30 compute-0 nova_compute[254092]: 2025-11-25 17:06:30.793 254096 DEBUG oslo_concurrency.lockutils [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "a4e18007-11e8-4531-9dc8-8cbc10fe2b75" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:06:30 compute-0 nova_compute[254092]: 2025-11-25 17:06:30.794 254096 DEBUG oslo_concurrency.lockutils [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "a4e18007-11e8-4531-9dc8-8cbc10fe2b75-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:06:30 compute-0 nova_compute[254092]: 2025-11-25 17:06:30.794 254096 DEBUG oslo_concurrency.lockutils [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "a4e18007-11e8-4531-9dc8-8cbc10fe2b75-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:06:30 compute-0 nova_compute[254092]: 2025-11-25 17:06:30.794 254096 DEBUG oslo_concurrency.lockutils [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "a4e18007-11e8-4531-9dc8-8cbc10fe2b75-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:06:30 compute-0 nova_compute[254092]: 2025-11-25 17:06:30.795 254096 INFO nova.compute.manager [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Terminating instance
Nov 25 17:06:30 compute-0 nova_compute[254092]: 2025-11-25 17:06:30.796 254096 DEBUG nova.compute.manager [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 17:06:30 compute-0 kernel: tap843d689b-6e (unregistering): left promiscuous mode
Nov 25 17:06:30 compute-0 NetworkManager[48891]: <info>  [1764090390.8530] device (tap843d689b-6e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:06:30 compute-0 ovn_controller[153477]: 2025-11-25T17:06:30Z|01286|binding|INFO|Releasing lport 843d689b-6e0b-4ce9-9177-6c3cd41a19d6 from this chassis (sb_readonly=0)
Nov 25 17:06:30 compute-0 ovn_controller[153477]: 2025-11-25T17:06:30Z|01287|binding|INFO|Setting lport 843d689b-6e0b-4ce9-9177-6c3cd41a19d6 down in Southbound
Nov 25 17:06:30 compute-0 ovn_controller[153477]: 2025-11-25T17:06:30Z|01288|binding|INFO|Removing iface tap843d689b-6e ovn-installed in OVS
Nov 25 17:06:30 compute-0 nova_compute[254092]: 2025-11-25 17:06:30.858 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:30.864 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:3c:32 10.100.0.5', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'a4e18007-11e8-4531-9dc8-8cbc10fe2b75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf9312b9-f4d2-496f-a143-7586e12fbee3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f19d2d0776e84c3c99c640c0b7ed3743', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=87dee46c-adf2-4475-bb58-d34bae0a4b6e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=843d689b-6e0b-4ce9-9177-6c3cd41a19d6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:06:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:30.866 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 843d689b-6e0b-4ce9-9177-6c3cd41a19d6 in datapath bf9312b9-f4d2-496f-a143-7586e12fbee3 unbound from our chassis
Nov 25 17:06:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:30.867 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bf9312b9-f4d2-496f-a143-7586e12fbee3
Nov 25 17:06:30 compute-0 nova_compute[254092]: 2025-11-25 17:06:30.872 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:30.888 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a17f4e39-8609-4744-b60b-7d7c168d0a63]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:30.916 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[3fca2da6-6c51-4796-983e-30d32585fa50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:30.918 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[abfaab40-849f-4267-881b-5f77a0262826]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:30 compute-0 systemd[1]: machine-qemu\x2d156\x2dinstance\x2d0000007c.scope: Deactivated successfully.
Nov 25 17:06:30 compute-0 systemd[1]: machine-qemu\x2d156\x2dinstance\x2d0000007c.scope: Consumed 15.270s CPU time.
Nov 25 17:06:30 compute-0 systemd-machined[216343]: Machine qemu-156-instance-0000007c terminated.
Nov 25 17:06:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:30.947 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8c073617-60e4-4e2d-aa39-a5e91d4df36b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:30.970 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[65daf25b-a10f-4b5a-b093-ace3af10307e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbf9312b9-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4f:2f:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 19, 'tx_packets': 7, 'rx_bytes': 1370, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 19, 'tx_packets': 7, 'rx_bytes': 1370, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 373], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 679585, 'reachable_time': 41771, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 13, 'inoctets': 936, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 13, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 936, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 13, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 390583, 'error': None, 'target': 'ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:30.989 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2da183a1-24dc-4b47-8838-5db037cf80cd]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapbf9312b9-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 679598, 'tstamp': 679598}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 390584, 'error': None, 'target': 'ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapbf9312b9-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 679602, 'tstamp': 679602}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 390584, 'error': None, 'target': 'ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:30.991 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf9312b9-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:06:30 compute-0 nova_compute[254092]: 2025-11-25 17:06:30.993 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:30 compute-0 nova_compute[254092]: 2025-11-25 17:06:30.998 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:30.999 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf9312b9-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:06:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:30.999 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:06:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:31.000 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbf9312b9-f0, col_values=(('external_ids', {'iface-id': 'c21d5ea9-f23a-4c6c-8f75-917f0bc5fa35'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:06:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:31.000 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:06:31 compute-0 kernel: tap843d689b-6e: entered promiscuous mode
Nov 25 17:06:31 compute-0 kernel: tap843d689b-6e (unregistering): left promiscuous mode
Nov 25 17:06:31 compute-0 NetworkManager[48891]: <info>  [1764090391.0224] manager: (tap843d689b-6e): new Tun device (/org/freedesktop/NetworkManager/Devices/531)
Nov 25 17:06:31 compute-0 nova_compute[254092]: 2025-11-25 17:06:31.027 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:31 compute-0 nova_compute[254092]: 2025-11-25 17:06:31.034 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:31 compute-0 nova_compute[254092]: 2025-11-25 17:06:31.038 254096 INFO nova.virt.libvirt.driver [-] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Instance destroyed successfully.
Nov 25 17:06:31 compute-0 nova_compute[254092]: 2025-11-25 17:06:31.039 254096 DEBUG nova.objects.instance [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lazy-loading 'resources' on Instance uuid a4e18007-11e8-4531-9dc8-8cbc10fe2b75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:06:31 compute-0 nova_compute[254092]: 2025-11-25 17:06:31.059 254096 DEBUG nova.virt.libvirt.vif [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:06:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-gen-1-1448912693',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-gen-1-1448912693',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-804365052-gen',id=124,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM0UWFHQnhc6ntJ2USd1Ng03MEqRsDa8bLN+NceDu38bBM1Ko27FOUngPC7ayjW27qy5z0eeftBbrxj3YI6kVPDUfNU6Nlwiw/u+/fuCOrRMF8RUzgbS8zeTV5MDACNtGg==',key_name='tempest-TestSecurityGroupsBasicOps-278470401',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:06:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f19d2d0776e84c3c99c640c0b7ed3743',ramdisk_id='',reservation_id='r-nzwglgyu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-804365052',owner_user_name='tempest-TestSecurityGroupsBasicOps-804365052-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:06:10Z,user_data=None,user_id='72f29c5d97464520bee19cb128258fd5',uuid=a4e18007-11e8-4531-9dc8-8cbc10fe2b75,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "843d689b-6e0b-4ce9-9177-6c3cd41a19d6", "address": "fa:16:3e:81:3c:32", "network": {"id": "bf9312b9-f4d2-496f-a143-7586e12fbee3", "bridge": "br-int", "label": "tempest-network-smoke--1513117463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap843d689b-6e", "ovs_interfaceid": "843d689b-6e0b-4ce9-9177-6c3cd41a19d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:06:31 compute-0 nova_compute[254092]: 2025-11-25 17:06:31.060 254096 DEBUG nova.network.os_vif_util [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converting VIF {"id": "843d689b-6e0b-4ce9-9177-6c3cd41a19d6", "address": "fa:16:3e:81:3c:32", "network": {"id": "bf9312b9-f4d2-496f-a143-7586e12fbee3", "bridge": "br-int", "label": "tempest-network-smoke--1513117463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap843d689b-6e", "ovs_interfaceid": "843d689b-6e0b-4ce9-9177-6c3cd41a19d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:06:31 compute-0 nova_compute[254092]: 2025-11-25 17:06:31.061 254096 DEBUG nova.network.os_vif_util [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:81:3c:32,bridge_name='br-int',has_traffic_filtering=True,id=843d689b-6e0b-4ce9-9177-6c3cd41a19d6,network=Network(bf9312b9-f4d2-496f-a143-7586e12fbee3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap843d689b-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:06:31 compute-0 nova_compute[254092]: 2025-11-25 17:06:31.061 254096 DEBUG os_vif [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:81:3c:32,bridge_name='br-int',has_traffic_filtering=True,id=843d689b-6e0b-4ce9-9177-6c3cd41a19d6,network=Network(bf9312b9-f4d2-496f-a143-7586e12fbee3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap843d689b-6e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:06:31 compute-0 nova_compute[254092]: 2025-11-25 17:06:31.062 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:31 compute-0 nova_compute[254092]: 2025-11-25 17:06:31.062 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap843d689b-6e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:06:31 compute-0 nova_compute[254092]: 2025-11-25 17:06:31.063 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:31 compute-0 nova_compute[254092]: 2025-11-25 17:06:31.065 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:31 compute-0 nova_compute[254092]: 2025-11-25 17:06:31.067 254096 INFO os_vif [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:81:3c:32,bridge_name='br-int',has_traffic_filtering=True,id=843d689b-6e0b-4ce9-9177-6c3cd41a19d6,network=Network(bf9312b9-f4d2-496f-a143-7586e12fbee3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap843d689b-6e')
Nov 25 17:06:31 compute-0 nova_compute[254092]: 2025-11-25 17:06:31.085 254096 DEBUG nova.compute.manager [req-e013e149-c8e5-4e6b-8d62-817457e30cb5 req-4a93ba56-d2d1-480d-8ccf-04b01e9aab31 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Received event network-vif-unplugged-843d689b-6e0b-4ce9-9177-6c3cd41a19d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:06:31 compute-0 nova_compute[254092]: 2025-11-25 17:06:31.086 254096 DEBUG oslo_concurrency.lockutils [req-e013e149-c8e5-4e6b-8d62-817457e30cb5 req-4a93ba56-d2d1-480d-8ccf-04b01e9aab31 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a4e18007-11e8-4531-9dc8-8cbc10fe2b75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:06:31 compute-0 nova_compute[254092]: 2025-11-25 17:06:31.086 254096 DEBUG oslo_concurrency.lockutils [req-e013e149-c8e5-4e6b-8d62-817457e30cb5 req-4a93ba56-d2d1-480d-8ccf-04b01e9aab31 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a4e18007-11e8-4531-9dc8-8cbc10fe2b75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:06:31 compute-0 nova_compute[254092]: 2025-11-25 17:06:31.086 254096 DEBUG oslo_concurrency.lockutils [req-e013e149-c8e5-4e6b-8d62-817457e30cb5 req-4a93ba56-d2d1-480d-8ccf-04b01e9aab31 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a4e18007-11e8-4531-9dc8-8cbc10fe2b75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:06:31 compute-0 nova_compute[254092]: 2025-11-25 17:06:31.086 254096 DEBUG nova.compute.manager [req-e013e149-c8e5-4e6b-8d62-817457e30cb5 req-4a93ba56-d2d1-480d-8ccf-04b01e9aab31 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] No waiting events found dispatching network-vif-unplugged-843d689b-6e0b-4ce9-9177-6c3cd41a19d6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:06:31 compute-0 nova_compute[254092]: 2025-11-25 17:06:31.087 254096 DEBUG nova.compute.manager [req-e013e149-c8e5-4e6b-8d62-817457e30cb5 req-4a93ba56-d2d1-480d-8ccf-04b01e9aab31 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Received event network-vif-unplugged-843d689b-6e0b-4ce9-9177-6c3cd41a19d6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 17:06:31 compute-0 nova_compute[254092]: 2025-11-25 17:06:31.477 254096 INFO nova.virt.libvirt.driver [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Deleting instance files /var/lib/nova/instances/a4e18007-11e8-4531-9dc8-8cbc10fe2b75_del
Nov 25 17:06:31 compute-0 nova_compute[254092]: 2025-11-25 17:06:31.477 254096 INFO nova.virt.libvirt.driver [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Deletion of /var/lib/nova/instances/a4e18007-11e8-4531-9dc8-8cbc10fe2b75_del complete
Nov 25 17:06:31 compute-0 nova_compute[254092]: 2025-11-25 17:06:31.548 254096 INFO nova.compute.manager [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Took 0.75 seconds to destroy the instance on the hypervisor.
Nov 25 17:06:31 compute-0 nova_compute[254092]: 2025-11-25 17:06:31.549 254096 DEBUG oslo.service.loopingcall [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 17:06:31 compute-0 nova_compute[254092]: 2025-11-25 17:06:31.549 254096 DEBUG nova.compute.manager [-] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 17:06:31 compute-0 nova_compute[254092]: 2025-11-25 17:06:31.549 254096 DEBUG nova.network.neutron [-] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 17:06:31 compute-0 ceph-mon[74985]: pgmap v2489: 321 pgs: 321 active+clean; 200 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Nov 25 17:06:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2490: 321 pgs: 321 active+clean; 200 MiB data, 949 MiB used, 59 GiB / 60 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Nov 25 17:06:32 compute-0 nova_compute[254092]: 2025-11-25 17:06:32.894 254096 DEBUG nova.network.neutron [-] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:06:32 compute-0 nova_compute[254092]: 2025-11-25 17:06:32.917 254096 INFO nova.compute.manager [-] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Took 1.37 seconds to deallocate network for instance.
Nov 25 17:06:32 compute-0 nova_compute[254092]: 2025-11-25 17:06:32.967 254096 DEBUG oslo_concurrency.lockutils [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:06:32 compute-0 nova_compute[254092]: 2025-11-25 17:06:32.968 254096 DEBUG oslo_concurrency.lockutils [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:06:33 compute-0 nova_compute[254092]: 2025-11-25 17:06:33.054 254096 DEBUG oslo_concurrency.processutils [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:06:33 compute-0 nova_compute[254092]: 2025-11-25 17:06:33.189 254096 DEBUG nova.compute.manager [req-3b743f65-995f-4629-8950-ed67258ca6ab req-864aa1ee-afdc-41fb-95e4-c6a3db9fca00 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Received event network-vif-plugged-843d689b-6e0b-4ce9-9177-6c3cd41a19d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:06:33 compute-0 nova_compute[254092]: 2025-11-25 17:06:33.189 254096 DEBUG oslo_concurrency.lockutils [req-3b743f65-995f-4629-8950-ed67258ca6ab req-864aa1ee-afdc-41fb-95e4-c6a3db9fca00 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a4e18007-11e8-4531-9dc8-8cbc10fe2b75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:06:33 compute-0 nova_compute[254092]: 2025-11-25 17:06:33.190 254096 DEBUG oslo_concurrency.lockutils [req-3b743f65-995f-4629-8950-ed67258ca6ab req-864aa1ee-afdc-41fb-95e4-c6a3db9fca00 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a4e18007-11e8-4531-9dc8-8cbc10fe2b75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:06:33 compute-0 nova_compute[254092]: 2025-11-25 17:06:33.190 254096 DEBUG oslo_concurrency.lockutils [req-3b743f65-995f-4629-8950-ed67258ca6ab req-864aa1ee-afdc-41fb-95e4-c6a3db9fca00 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a4e18007-11e8-4531-9dc8-8cbc10fe2b75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:06:33 compute-0 nova_compute[254092]: 2025-11-25 17:06:33.190 254096 DEBUG nova.compute.manager [req-3b743f65-995f-4629-8950-ed67258ca6ab req-864aa1ee-afdc-41fb-95e4-c6a3db9fca00 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] No waiting events found dispatching network-vif-plugged-843d689b-6e0b-4ce9-9177-6c3cd41a19d6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:06:33 compute-0 nova_compute[254092]: 2025-11-25 17:06:33.190 254096 WARNING nova.compute.manager [req-3b743f65-995f-4629-8950-ed67258ca6ab req-864aa1ee-afdc-41fb-95e4-c6a3db9fca00 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Received unexpected event network-vif-plugged-843d689b-6e0b-4ce9-9177-6c3cd41a19d6 for instance with vm_state deleted and task_state None.
Nov 25 17:06:33 compute-0 nova_compute[254092]: 2025-11-25 17:06:33.191 254096 DEBUG nova.compute.manager [req-3b743f65-995f-4629-8950-ed67258ca6ab req-864aa1ee-afdc-41fb-95e4-c6a3db9fca00 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Received event network-vif-deleted-843d689b-6e0b-4ce9-9177-6c3cd41a19d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:06:33 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:06:33 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3243127858' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:06:33 compute-0 nova_compute[254092]: 2025-11-25 17:06:33.503 254096 DEBUG oslo_concurrency.processutils [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:06:33 compute-0 nova_compute[254092]: 2025-11-25 17:06:33.509 254096 DEBUG nova.compute.provider_tree [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:06:33 compute-0 nova_compute[254092]: 2025-11-25 17:06:33.523 254096 DEBUG nova.scheduler.client.report [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:06:33 compute-0 nova_compute[254092]: 2025-11-25 17:06:33.540 254096 DEBUG oslo_concurrency.lockutils [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.572s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:06:33 compute-0 nova_compute[254092]: 2025-11-25 17:06:33.568 254096 INFO nova.scheduler.client.report [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Deleted allocations for instance a4e18007-11e8-4531-9dc8-8cbc10fe2b75
Nov 25 17:06:33 compute-0 nova_compute[254092]: 2025-11-25 17:06:33.637 254096 DEBUG oslo_concurrency.lockutils [None req-185d3475-623b-4f5c-a6d5-aefc6e260447 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "a4e18007-11e8-4531-9dc8-8cbc10fe2b75" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.843s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:06:33 compute-0 ceph-mon[74985]: pgmap v2490: 321 pgs: 321 active+clean; 200 MiB data, 949 MiB used, 59 GiB / 60 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Nov 25 17:06:33 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3243127858' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:06:33 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:06:34 compute-0 nova_compute[254092]: 2025-11-25 17:06:34.409 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2491: 321 pgs: 321 active+clean; 200 MiB data, 949 MiB used, 59 GiB / 60 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Nov 25 17:06:34 compute-0 nova_compute[254092]: 2025-11-25 17:06:34.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:06:34 compute-0 nova_compute[254092]: 2025-11-25 17:06:34.988 254096 DEBUG oslo_concurrency.lockutils [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "cbf5c589-9701-44c9-9600-739675853610" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:06:34 compute-0 nova_compute[254092]: 2025-11-25 17:06:34.988 254096 DEBUG oslo_concurrency.lockutils [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "cbf5c589-9701-44c9-9600-739675853610" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:06:34 compute-0 nova_compute[254092]: 2025-11-25 17:06:34.988 254096 DEBUG oslo_concurrency.lockutils [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "cbf5c589-9701-44c9-9600-739675853610-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:06:34 compute-0 nova_compute[254092]: 2025-11-25 17:06:34.989 254096 DEBUG oslo_concurrency.lockutils [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "cbf5c589-9701-44c9-9600-739675853610-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:06:34 compute-0 nova_compute[254092]: 2025-11-25 17:06:34.989 254096 DEBUG oslo_concurrency.lockutils [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "cbf5c589-9701-44c9-9600-739675853610-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:06:34 compute-0 nova_compute[254092]: 2025-11-25 17:06:34.990 254096 INFO nova.compute.manager [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Terminating instance
Nov 25 17:06:34 compute-0 nova_compute[254092]: 2025-11-25 17:06:34.991 254096 DEBUG nova.compute.manager [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 17:06:35 compute-0 kernel: tap332ae922-32 (unregistering): left promiscuous mode
Nov 25 17:06:35 compute-0 NetworkManager[48891]: <info>  [1764090395.0457] device (tap332ae922-32): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:06:35 compute-0 ovn_controller[153477]: 2025-11-25T17:06:35Z|01289|binding|INFO|Releasing lport 332ae922-3280-48c2-8889-d1ab181a43db from this chassis (sb_readonly=0)
Nov 25 17:06:35 compute-0 ovn_controller[153477]: 2025-11-25T17:06:35Z|01290|binding|INFO|Setting lport 332ae922-3280-48c2-8889-d1ab181a43db down in Southbound
Nov 25 17:06:35 compute-0 nova_compute[254092]: 2025-11-25 17:06:35.056 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:35 compute-0 ovn_controller[153477]: 2025-11-25T17:06:35Z|01291|binding|INFO|Removing iface tap332ae922-32 ovn-installed in OVS
Nov 25 17:06:35 compute-0 nova_compute[254092]: 2025-11-25 17:06:35.059 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:35.062 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:98:d4:72 10.100.0.8'], port_security=['fa:16:3e:98:d4:72 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'cbf5c589-9701-44c9-9600-739675853610', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf9312b9-f4d2-496f-a143-7586e12fbee3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f19d2d0776e84c3c99c640c0b7ed3743', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2db36ef1-2db7-4ccb-b8a5-63a9a57f3dde 7a710be2-f756-4f31-8d8e-270c10735b5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=87dee46c-adf2-4475-bb58-d34bae0a4b6e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=332ae922-3280-48c2-8889-d1ab181a43db) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:06:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:35.063 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 332ae922-3280-48c2-8889-d1ab181a43db in datapath bf9312b9-f4d2-496f-a143-7586e12fbee3 unbound from our chassis
Nov 25 17:06:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:35.065 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bf9312b9-f4d2-496f-a143-7586e12fbee3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 17:06:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:35.065 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9ec2d73f-77a2-4836-bf89-8369036ede84]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:35.066 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3 namespace which is not needed anymore
Nov 25 17:06:35 compute-0 nova_compute[254092]: 2025-11-25 17:06:35.081 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:35 compute-0 systemd[1]: machine-qemu\x2d155\x2dinstance\x2d0000007b.scope: Deactivated successfully.
Nov 25 17:06:35 compute-0 systemd[1]: machine-qemu\x2d155\x2dinstance\x2d0000007b.scope: Consumed 14.364s CPU time.
Nov 25 17:06:35 compute-0 systemd-machined[216343]: Machine qemu-155-instance-0000007b terminated.
Nov 25 17:06:35 compute-0 neutron-haproxy-ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3[388900]: [NOTICE]   (388929) : haproxy version is 2.8.14-c23fe91
Nov 25 17:06:35 compute-0 neutron-haproxy-ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3[388900]: [NOTICE]   (388929) : path to executable is /usr/sbin/haproxy
Nov 25 17:06:35 compute-0 neutron-haproxy-ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3[388900]: [WARNING]  (388929) : Exiting Master process...
Nov 25 17:06:35 compute-0 neutron-haproxy-ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3[388900]: [ALERT]    (388929) : Current worker (388931) exited with code 143 (Terminated)
Nov 25 17:06:35 compute-0 neutron-haproxy-ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3[388900]: [WARNING]  (388929) : All workers exited. Exiting... (0)
Nov 25 17:06:35 compute-0 nova_compute[254092]: 2025-11-25 17:06:35.227 254096 INFO nova.virt.libvirt.driver [-] [instance: cbf5c589-9701-44c9-9600-739675853610] Instance destroyed successfully.
Nov 25 17:06:35 compute-0 systemd[1]: libpod-91d5773898e199d05495201593304eeb31a0c1a9d892489f684afca4dc952948.scope: Deactivated successfully.
Nov 25 17:06:35 compute-0 nova_compute[254092]: 2025-11-25 17:06:35.228 254096 DEBUG nova.objects.instance [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lazy-loading 'resources' on Instance uuid cbf5c589-9701-44c9-9600-739675853610 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:06:35 compute-0 podman[390656]: 2025-11-25 17:06:35.233500389 +0000 UTC m=+0.062927306 container died 91d5773898e199d05495201593304eeb31a0c1a9d892489f684afca4dc952948 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:06:35 compute-0 nova_compute[254092]: 2025-11-25 17:06:35.239 254096 DEBUG nova.virt.libvirt.vif [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:05:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1342099728',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1342099728',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-804365052-acc',id=123,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM0UWFHQnhc6ntJ2USd1Ng03MEqRsDa8bLN+NceDu38bBM1Ko27FOUngPC7ayjW27qy5z0eeftBbrxj3YI6kVPDUfNU6Nlwiw/u+/fuCOrRMF8RUzgbS8zeTV5MDACNtGg==',key_name='tempest-TestSecurityGroupsBasicOps-278470401',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:05:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f19d2d0776e84c3c99c640c0b7ed3743',ramdisk_id='',reservation_id='r-jw45rb9c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-804365052',owner_user_name='tempest-TestSecurityGroupsBasicOps-804365052-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:05:38Z,user_data=None,user_id='72f29c5d97464520bee19cb128258fd5',uuid=cbf5c589-9701-44c9-9600-739675853610,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "332ae922-3280-48c2-8889-d1ab181a43db", "address": "fa:16:3e:98:d4:72", "network": {"id": "bf9312b9-f4d2-496f-a143-7586e12fbee3", "bridge": "br-int", "label": "tempest-network-smoke--1513117463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap332ae922-32", "ovs_interfaceid": "332ae922-3280-48c2-8889-d1ab181a43db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:06:35 compute-0 nova_compute[254092]: 2025-11-25 17:06:35.239 254096 DEBUG nova.network.os_vif_util [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converting VIF {"id": "332ae922-3280-48c2-8889-d1ab181a43db", "address": "fa:16:3e:98:d4:72", "network": {"id": "bf9312b9-f4d2-496f-a143-7586e12fbee3", "bridge": "br-int", "label": "tempest-network-smoke--1513117463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap332ae922-32", "ovs_interfaceid": "332ae922-3280-48c2-8889-d1ab181a43db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:06:35 compute-0 nova_compute[254092]: 2025-11-25 17:06:35.242 254096 DEBUG nova.network.os_vif_util [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:98:d4:72,bridge_name='br-int',has_traffic_filtering=True,id=332ae922-3280-48c2-8889-d1ab181a43db,network=Network(bf9312b9-f4d2-496f-a143-7586e12fbee3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap332ae922-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:06:35 compute-0 nova_compute[254092]: 2025-11-25 17:06:35.243 254096 DEBUG os_vif [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:98:d4:72,bridge_name='br-int',has_traffic_filtering=True,id=332ae922-3280-48c2-8889-d1ab181a43db,network=Network(bf9312b9-f4d2-496f-a143-7586e12fbee3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap332ae922-32') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:06:35 compute-0 nova_compute[254092]: 2025-11-25 17:06:35.244 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:35 compute-0 nova_compute[254092]: 2025-11-25 17:06:35.245 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap332ae922-32, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:06:35 compute-0 nova_compute[254092]: 2025-11-25 17:06:35.246 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:35 compute-0 nova_compute[254092]: 2025-11-25 17:06:35.247 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:35 compute-0 nova_compute[254092]: 2025-11-25 17:06:35.249 254096 INFO os_vif [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:98:d4:72,bridge_name='br-int',has_traffic_filtering=True,id=332ae922-3280-48c2-8889-d1ab181a43db,network=Network(bf9312b9-f4d2-496f-a143-7586e12fbee3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap332ae922-32')
Nov 25 17:06:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-46a68caf33163305d309df46a027dcb2a4dc5df4d4d0497aef68c09c791db936-merged.mount: Deactivated successfully.
Nov 25 17:06:35 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-91d5773898e199d05495201593304eeb31a0c1a9d892489f684afca4dc952948-userdata-shm.mount: Deactivated successfully.
Nov 25 17:06:35 compute-0 podman[390656]: 2025-11-25 17:06:35.275146432 +0000 UTC m=+0.104573359 container cleanup 91d5773898e199d05495201593304eeb31a0c1a9d892489f684afca4dc952948 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 17:06:35 compute-0 systemd[1]: libpod-conmon-91d5773898e199d05495201593304eeb31a0c1a9d892489f684afca4dc952948.scope: Deactivated successfully.
Nov 25 17:06:35 compute-0 nova_compute[254092]: 2025-11-25 17:06:35.294 254096 DEBUG nova.compute.manager [req-1700acd6-defa-41a1-b775-f71eb2e32eac req-5c233b12-c5d9-4982-884b-43dd9b280619 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Received event network-changed-332ae922-3280-48c2-8889-d1ab181a43db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:06:35 compute-0 nova_compute[254092]: 2025-11-25 17:06:35.295 254096 DEBUG nova.compute.manager [req-1700acd6-defa-41a1-b775-f71eb2e32eac req-5c233b12-c5d9-4982-884b-43dd9b280619 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Refreshing instance network info cache due to event network-changed-332ae922-3280-48c2-8889-d1ab181a43db. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:06:35 compute-0 nova_compute[254092]: 2025-11-25 17:06:35.295 254096 DEBUG oslo_concurrency.lockutils [req-1700acd6-defa-41a1-b775-f71eb2e32eac req-5c233b12-c5d9-4982-884b-43dd9b280619 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-cbf5c589-9701-44c9-9600-739675853610" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:06:35 compute-0 nova_compute[254092]: 2025-11-25 17:06:35.295 254096 DEBUG oslo_concurrency.lockutils [req-1700acd6-defa-41a1-b775-f71eb2e32eac req-5c233b12-c5d9-4982-884b-43dd9b280619 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-cbf5c589-9701-44c9-9600-739675853610" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:06:35 compute-0 nova_compute[254092]: 2025-11-25 17:06:35.295 254096 DEBUG nova.network.neutron [req-1700acd6-defa-41a1-b775-f71eb2e32eac req-5c233b12-c5d9-4982-884b-43dd9b280619 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Refreshing network info cache for port 332ae922-3280-48c2-8889-d1ab181a43db _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:06:35 compute-0 podman[390715]: 2025-11-25 17:06:35.333603415 +0000 UTC m=+0.036801040 container remove 91d5773898e199d05495201593304eeb31a0c1a9d892489f684afca4dc952948 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:06:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:35.340 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c2f858db-bcf8-46bc-9ec9-c57a98192679]: (4, ('Tue Nov 25 05:06:35 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3 (91d5773898e199d05495201593304eeb31a0c1a9d892489f684afca4dc952948)\n91d5773898e199d05495201593304eeb31a0c1a9d892489f684afca4dc952948\nTue Nov 25 05:06:35 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3 (91d5773898e199d05495201593304eeb31a0c1a9d892489f684afca4dc952948)\n91d5773898e199d05495201593304eeb31a0c1a9d892489f684afca4dc952948\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:35.341 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8882450c-6a3c-41c2-a2eb-cd921fdb98c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:35.342 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf9312b9-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:06:35 compute-0 nova_compute[254092]: 2025-11-25 17:06:35.395 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:35 compute-0 kernel: tapbf9312b9-f0: left promiscuous mode
Nov 25 17:06:35 compute-0 nova_compute[254092]: 2025-11-25 17:06:35.412 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:35 compute-0 nova_compute[254092]: 2025-11-25 17:06:35.413 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:35.415 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c6ac8ef8-5d96-4ab4-8a35-78d4bf4a792a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:35.427 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[72fe6aa0-35f4-4fd2-8731-e089d6105b01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:35.427 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[40e6e3a8-2cea-4414-8509-752bcfe8f50c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:35.449 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a54c807c-0094-43ac-ad11-2691168ea244]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 679576, 'reachable_time': 23661, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 390729, 'error': None, 'target': 'ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:35 compute-0 systemd[1]: run-netns-ovnmeta\x2dbf9312b9\x2df4d2\x2d496f\x2da143\x2d7586e12fbee3.mount: Deactivated successfully.
Nov 25 17:06:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:35.452 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bf9312b9-f4d2-496f-a143-7586e12fbee3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 17:06:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:35.452 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[0a5c0e2f-6354-424a-b6c5-e630b1d878cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:35 compute-0 nova_compute[254092]: 2025-11-25 17:06:35.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:06:35 compute-0 nova_compute[254092]: 2025-11-25 17:06:35.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:06:35 compute-0 nova_compute[254092]: 2025-11-25 17:06:35.642 254096 INFO nova.virt.libvirt.driver [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Deleting instance files /var/lib/nova/instances/cbf5c589-9701-44c9-9600-739675853610_del
Nov 25 17:06:35 compute-0 nova_compute[254092]: 2025-11-25 17:06:35.644 254096 INFO nova.virt.libvirt.driver [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Deletion of /var/lib/nova/instances/cbf5c589-9701-44c9-9600-739675853610_del complete
Nov 25 17:06:35 compute-0 nova_compute[254092]: 2025-11-25 17:06:35.703 254096 INFO nova.compute.manager [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Took 0.71 seconds to destroy the instance on the hypervisor.
Nov 25 17:06:35 compute-0 nova_compute[254092]: 2025-11-25 17:06:35.704 254096 DEBUG oslo.service.loopingcall [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 17:06:35 compute-0 nova_compute[254092]: 2025-11-25 17:06:35.704 254096 DEBUG nova.compute.manager [-] [instance: cbf5c589-9701-44c9-9600-739675853610] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 17:06:35 compute-0 nova_compute[254092]: 2025-11-25 17:06:35.704 254096 DEBUG nova.network.neutron [-] [instance: cbf5c589-9701-44c9-9600-739675853610] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 17:06:35 compute-0 ceph-mon[74985]: pgmap v2491: 321 pgs: 321 active+clean; 200 MiB data, 949 MiB used, 59 GiB / 60 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Nov 25 17:06:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2492: 321 pgs: 321 active+clean; 79 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 368 KiB/s rd, 2.2 MiB/s wr, 128 op/s
Nov 25 17:06:36 compute-0 nova_compute[254092]: 2025-11-25 17:06:36.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:06:36 compute-0 nova_compute[254092]: 2025-11-25 17:06:36.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:06:36 compute-0 nova_compute[254092]: 2025-11-25 17:06:36.840 254096 DEBUG nova.network.neutron [-] [instance: cbf5c589-9701-44c9-9600-739675853610] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:06:36 compute-0 nova_compute[254092]: 2025-11-25 17:06:36.867 254096 INFO nova.compute.manager [-] [instance: cbf5c589-9701-44c9-9600-739675853610] Took 1.16 seconds to deallocate network for instance.
Nov 25 17:06:36 compute-0 nova_compute[254092]: 2025-11-25 17:06:36.918 254096 DEBUG nova.network.neutron [req-1700acd6-defa-41a1-b775-f71eb2e32eac req-5c233b12-c5d9-4982-884b-43dd9b280619 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Updated VIF entry in instance network info cache for port 332ae922-3280-48c2-8889-d1ab181a43db. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:06:36 compute-0 nova_compute[254092]: 2025-11-25 17:06:36.919 254096 DEBUG nova.network.neutron [req-1700acd6-defa-41a1-b775-f71eb2e32eac req-5c233b12-c5d9-4982-884b-43dd9b280619 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Updating instance_info_cache with network_info: [{"id": "332ae922-3280-48c2-8889-d1ab181a43db", "address": "fa:16:3e:98:d4:72", "network": {"id": "bf9312b9-f4d2-496f-a143-7586e12fbee3", "bridge": "br-int", "label": "tempest-network-smoke--1513117463", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap332ae922-32", "ovs_interfaceid": "332ae922-3280-48c2-8889-d1ab181a43db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:06:36 compute-0 nova_compute[254092]: 2025-11-25 17:06:36.923 254096 DEBUG oslo_concurrency.lockutils [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:06:36 compute-0 nova_compute[254092]: 2025-11-25 17:06:36.923 254096 DEBUG oslo_concurrency.lockutils [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:06:36 compute-0 nova_compute[254092]: 2025-11-25 17:06:36.939 254096 DEBUG oslo_concurrency.lockutils [req-1700acd6-defa-41a1-b775-f71eb2e32eac req-5c233b12-c5d9-4982-884b-43dd9b280619 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-cbf5c589-9701-44c9-9600-739675853610" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:06:36 compute-0 nova_compute[254092]: 2025-11-25 17:06:36.979 254096 DEBUG oslo_concurrency.processutils [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:06:37 compute-0 nova_compute[254092]: 2025-11-25 17:06:37.374 254096 DEBUG nova.compute.manager [req-787289b9-7423-4d9e-b742-525160a2e1e3 req-ad18cb25-d9b4-418a-8cfc-08d5c7c17427 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Received event network-vif-unplugged-332ae922-3280-48c2-8889-d1ab181a43db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:06:37 compute-0 nova_compute[254092]: 2025-11-25 17:06:37.375 254096 DEBUG oslo_concurrency.lockutils [req-787289b9-7423-4d9e-b742-525160a2e1e3 req-ad18cb25-d9b4-418a-8cfc-08d5c7c17427 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "cbf5c589-9701-44c9-9600-739675853610-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:06:37 compute-0 nova_compute[254092]: 2025-11-25 17:06:37.375 254096 DEBUG oslo_concurrency.lockutils [req-787289b9-7423-4d9e-b742-525160a2e1e3 req-ad18cb25-d9b4-418a-8cfc-08d5c7c17427 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "cbf5c589-9701-44c9-9600-739675853610-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:06:37 compute-0 nova_compute[254092]: 2025-11-25 17:06:37.376 254096 DEBUG oslo_concurrency.lockutils [req-787289b9-7423-4d9e-b742-525160a2e1e3 req-ad18cb25-d9b4-418a-8cfc-08d5c7c17427 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "cbf5c589-9701-44c9-9600-739675853610-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:06:37 compute-0 nova_compute[254092]: 2025-11-25 17:06:37.376 254096 DEBUG nova.compute.manager [req-787289b9-7423-4d9e-b742-525160a2e1e3 req-ad18cb25-d9b4-418a-8cfc-08d5c7c17427 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] No waiting events found dispatching network-vif-unplugged-332ae922-3280-48c2-8889-d1ab181a43db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:06:37 compute-0 nova_compute[254092]: 2025-11-25 17:06:37.376 254096 WARNING nova.compute.manager [req-787289b9-7423-4d9e-b742-525160a2e1e3 req-ad18cb25-d9b4-418a-8cfc-08d5c7c17427 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Received unexpected event network-vif-unplugged-332ae922-3280-48c2-8889-d1ab181a43db for instance with vm_state deleted and task_state None.
Nov 25 17:06:37 compute-0 nova_compute[254092]: 2025-11-25 17:06:37.377 254096 DEBUG nova.compute.manager [req-787289b9-7423-4d9e-b742-525160a2e1e3 req-ad18cb25-d9b4-418a-8cfc-08d5c7c17427 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Received event network-vif-plugged-332ae922-3280-48c2-8889-d1ab181a43db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:06:37 compute-0 nova_compute[254092]: 2025-11-25 17:06:37.377 254096 DEBUG oslo_concurrency.lockutils [req-787289b9-7423-4d9e-b742-525160a2e1e3 req-ad18cb25-d9b4-418a-8cfc-08d5c7c17427 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "cbf5c589-9701-44c9-9600-739675853610-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:06:37 compute-0 nova_compute[254092]: 2025-11-25 17:06:37.377 254096 DEBUG oslo_concurrency.lockutils [req-787289b9-7423-4d9e-b742-525160a2e1e3 req-ad18cb25-d9b4-418a-8cfc-08d5c7c17427 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "cbf5c589-9701-44c9-9600-739675853610-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:06:37 compute-0 nova_compute[254092]: 2025-11-25 17:06:37.378 254096 DEBUG oslo_concurrency.lockutils [req-787289b9-7423-4d9e-b742-525160a2e1e3 req-ad18cb25-d9b4-418a-8cfc-08d5c7c17427 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "cbf5c589-9701-44c9-9600-739675853610-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:06:37 compute-0 nova_compute[254092]: 2025-11-25 17:06:37.378 254096 DEBUG nova.compute.manager [req-787289b9-7423-4d9e-b742-525160a2e1e3 req-ad18cb25-d9b4-418a-8cfc-08d5c7c17427 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] No waiting events found dispatching network-vif-plugged-332ae922-3280-48c2-8889-d1ab181a43db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:06:37 compute-0 nova_compute[254092]: 2025-11-25 17:06:37.378 254096 WARNING nova.compute.manager [req-787289b9-7423-4d9e-b742-525160a2e1e3 req-ad18cb25-d9b4-418a-8cfc-08d5c7c17427 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Received unexpected event network-vif-plugged-332ae922-3280-48c2-8889-d1ab181a43db for instance with vm_state deleted and task_state None.
Nov 25 17:06:37 compute-0 nova_compute[254092]: 2025-11-25 17:06:37.378 254096 DEBUG nova.compute.manager [req-787289b9-7423-4d9e-b742-525160a2e1e3 req-ad18cb25-d9b4-418a-8cfc-08d5c7c17427 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Received event network-vif-deleted-332ae922-3280-48c2-8889-d1ab181a43db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:06:37 compute-0 nova_compute[254092]: 2025-11-25 17:06:37.379 254096 INFO nova.compute.manager [req-787289b9-7423-4d9e-b742-525160a2e1e3 req-ad18cb25-d9b4-418a-8cfc-08d5c7c17427 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Neutron deleted interface 332ae922-3280-48c2-8889-d1ab181a43db; detaching it from the instance and deleting it from the info cache
Nov 25 17:06:37 compute-0 nova_compute[254092]: 2025-11-25 17:06:37.379 254096 DEBUG nova.network.neutron [req-787289b9-7423-4d9e-b742-525160a2e1e3 req-ad18cb25-d9b4-418a-8cfc-08d5c7c17427 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:06:37 compute-0 nova_compute[254092]: 2025-11-25 17:06:37.396 254096 DEBUG nova.compute.manager [req-787289b9-7423-4d9e-b742-525160a2e1e3 req-ad18cb25-d9b4-418a-8cfc-08d5c7c17427 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cbf5c589-9701-44c9-9600-739675853610] Detach interface failed, port_id=332ae922-3280-48c2-8889-d1ab181a43db, reason: Instance cbf5c589-9701-44c9-9600-739675853610 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 25 17:06:37 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:06:37 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/899762936' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:06:37 compute-0 nova_compute[254092]: 2025-11-25 17:06:37.429 254096 DEBUG oslo_concurrency.processutils [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:06:37 compute-0 nova_compute[254092]: 2025-11-25 17:06:37.434 254096 DEBUG nova.compute.provider_tree [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:06:37 compute-0 nova_compute[254092]: 2025-11-25 17:06:37.446 254096 DEBUG nova.scheduler.client.report [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:06:37 compute-0 nova_compute[254092]: 2025-11-25 17:06:37.466 254096 DEBUG oslo_concurrency.lockutils [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.543s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:06:37 compute-0 nova_compute[254092]: 2025-11-25 17:06:37.499 254096 INFO nova.scheduler.client.report [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Deleted allocations for instance cbf5c589-9701-44c9-9600-739675853610
Nov 25 17:06:37 compute-0 nova_compute[254092]: 2025-11-25 17:06:37.576 254096 DEBUG oslo_concurrency.lockutils [None req-f41240bc-b81d-4bbf-8938-37711505e282 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "cbf5c589-9701-44c9-9600-739675853610" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:06:37 compute-0 ceph-mon[74985]: pgmap v2492: 321 pgs: 321 active+clean; 79 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 368 KiB/s rd, 2.2 MiB/s wr, 128 op/s
Nov 25 17:06:37 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/899762936' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:06:38 compute-0 nova_compute[254092]: 2025-11-25 17:06:38.121 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090383.1200678, fb2e8439-c6c2-4a88-9e88-faf385d9a4d9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:06:38 compute-0 nova_compute[254092]: 2025-11-25 17:06:38.122 254096 INFO nova.compute.manager [-] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] VM Stopped (Lifecycle Event)
Nov 25 17:06:38 compute-0 nova_compute[254092]: 2025-11-25 17:06:38.143 254096 DEBUG nova.compute.manager [None req-6cd9b905-e4f3-4a6b-b4a1-a1bca36d9130 - - - - - -] [instance: fb2e8439-c6c2-4a88-9e88-faf385d9a4d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:06:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2493: 321 pgs: 321 active+clean; 79 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 33 KiB/s wr, 41 op/s
Nov 25 17:06:38 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:06:39 compute-0 nova_compute[254092]: 2025-11-25 17:06:39.354 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:39 compute-0 nova_compute[254092]: 2025-11-25 17:06:39.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:06:39 compute-0 nova_compute[254092]: 2025-11-25 17:06:39.517 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:06:39 compute-0 nova_compute[254092]: 2025-11-25 17:06:39.517 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:06:39 compute-0 nova_compute[254092]: 2025-11-25 17:06:39.517 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:06:39 compute-0 nova_compute[254092]: 2025-11-25 17:06:39.517 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:06:39 compute-0 nova_compute[254092]: 2025-11-25 17:06:39.518 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:06:39 compute-0 sshd-session[390753]: Connection closed by authenticating user root 171.244.51.45 port 60406 [preauth]
Nov 25 17:06:39 compute-0 ceph-mon[74985]: pgmap v2493: 321 pgs: 321 active+clean; 79 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 33 KiB/s wr, 41 op/s
Nov 25 17:06:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:06:39 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4279901587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:06:39 compute-0 nova_compute[254092]: 2025-11-25 17:06:39.925 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:06:40 compute-0 nova_compute[254092]: 2025-11-25 17:06:40.068 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:06:40 compute-0 nova_compute[254092]: 2025-11-25 17:06:40.069 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3755MB free_disk=59.966068267822266GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:06:40 compute-0 nova_compute[254092]: 2025-11-25 17:06:40.069 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:06:40 compute-0 nova_compute[254092]: 2025-11-25 17:06:40.070 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:06:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:06:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:06:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:06:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:06:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:06:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:06:40 compute-0 nova_compute[254092]: 2025-11-25 17:06:40.121 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:06:40 compute-0 nova_compute[254092]: 2025-11-25 17:06:40.121 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:06:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:06:40
Nov 25 17:06:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:06:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:06:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control', '.rgw.root', 'volumes', 'default.rgw.meta', '.mgr', 'images', 'vms', 'cephfs.cephfs.meta']
Nov 25 17:06:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:06:40 compute-0 nova_compute[254092]: 2025-11-25 17:06:40.162 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:06:40 compute-0 nova_compute[254092]: 2025-11-25 17:06:40.247 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:40 compute-0 nova_compute[254092]: 2025-11-25 17:06:40.414 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2494: 321 pgs: 321 active+clean; 41 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 33 KiB/s wr, 62 op/s
Nov 25 17:06:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:06:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:06:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:06:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:06:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:06:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:06:40 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/590871679' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:06:40 compute-0 nova_compute[254092]: 2025-11-25 17:06:40.598 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:06:40 compute-0 nova_compute[254092]: 2025-11-25 17:06:40.603 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:06:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:06:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:06:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:06:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:06:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:06:40 compute-0 nova_compute[254092]: 2025-11-25 17:06:40.629 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:06:40 compute-0 nova_compute[254092]: 2025-11-25 17:06:40.665 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:06:40 compute-0 nova_compute[254092]: 2025-11-25 17:06:40.666 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:06:40 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #114. Immutable memtables: 0.
Nov 25 17:06:40 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:06:40.868099) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 17:06:40 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 114
Nov 25 17:06:40 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090400868130, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 2061, "num_deletes": 251, "total_data_size": 3345100, "memory_usage": 3395120, "flush_reason": "Manual Compaction"}
Nov 25 17:06:40 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #115: started
Nov 25 17:06:40 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4279901587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:06:40 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/590871679' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:06:40 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090400898031, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 115, "file_size": 3278837, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49922, "largest_seqno": 51982, "table_properties": {"data_size": 3269523, "index_size": 5872, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19061, "raw_average_key_size": 20, "raw_value_size": 3250956, "raw_average_value_size": 3440, "num_data_blocks": 260, "num_entries": 945, "num_filter_entries": 945, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764090181, "oldest_key_time": 1764090181, "file_creation_time": 1764090400, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 115, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:06:40 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 29983 microseconds, and 7020 cpu microseconds.
Nov 25 17:06:40 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:06:40 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:06:40.898078) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #115: 3278837 bytes OK
Nov 25 17:06:40 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:06:40.898097) [db/memtable_list.cc:519] [default] Level-0 commit table #115 started
Nov 25 17:06:40 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:06:40.902680) [db/memtable_list.cc:722] [default] Level-0 commit table #115: memtable #1 done
Nov 25 17:06:40 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:06:40.902699) EVENT_LOG_v1 {"time_micros": 1764090400902693, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 17:06:40 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:06:40.902716) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 17:06:40 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 3336446, prev total WAL file size 3336446, number of live WAL files 2.
Nov 25 17:06:40 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000111.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:06:40 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:06:40.903465) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Nov 25 17:06:40 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 17:06:40 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [115(3201KB)], [113(8549KB)]
Nov 25 17:06:40 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090400903493, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [115], "files_L6": [113], "score": -1, "input_data_size": 12033154, "oldest_snapshot_seqno": -1}
Nov 25 17:06:41 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #116: 7466 keys, 10311875 bytes, temperature: kUnknown
Nov 25 17:06:41 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090401007864, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 116, "file_size": 10311875, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10262109, "index_size": 30006, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18693, "raw_key_size": 193554, "raw_average_key_size": 25, "raw_value_size": 10128684, "raw_average_value_size": 1356, "num_data_blocks": 1177, "num_entries": 7466, "num_filter_entries": 7466, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764090400, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:06:41 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:06:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:06:41.008067) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 10311875 bytes
Nov 25 17:06:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:06:41.010586) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 115.2 rd, 98.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 8.3 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(6.8) write-amplify(3.1) OK, records in: 7980, records dropped: 514 output_compression: NoCompression
Nov 25 17:06:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:06:41.010623) EVENT_LOG_v1 {"time_micros": 1764090401010609, "job": 68, "event": "compaction_finished", "compaction_time_micros": 104437, "compaction_time_cpu_micros": 23871, "output_level": 6, "num_output_files": 1, "total_output_size": 10311875, "num_input_records": 7980, "num_output_records": 7466, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 17:06:41 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000115.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:06:41 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090401011325, "job": 68, "event": "table_file_deletion", "file_number": 115}
Nov 25 17:06:41 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:06:41 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090401012943, "job": 68, "event": "table_file_deletion", "file_number": 113}
Nov 25 17:06:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:06:40.903397) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:06:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:06:41.013035) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:06:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:06:41.013042) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:06:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:06:41.013045) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:06:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:06:41.013048) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:06:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:06:41.013050) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:06:41 compute-0 nova_compute[254092]: 2025-11-25 17:06:41.666 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:06:41 compute-0 nova_compute[254092]: 2025-11-25 17:06:41.666 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:06:41 compute-0 nova_compute[254092]: 2025-11-25 17:06:41.704 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 17:06:41 compute-0 nova_compute[254092]: 2025-11-25 17:06:41.705 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:06:41 compute-0 ceph-mon[74985]: pgmap v2494: 321 pgs: 321 active+clean; 41 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 33 KiB/s wr, 62 op/s
Nov 25 17:06:42 compute-0 nova_compute[254092]: 2025-11-25 17:06:42.036 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:42 compute-0 nova_compute[254092]: 2025-11-25 17:06:42.154 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2495: 321 pgs: 321 active+clean; 41 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 24 KiB/s wr, 57 op/s
Nov 25 17:06:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:06:43 compute-0 ceph-mon[74985]: pgmap v2495: 321 pgs: 321 active+clean; 41 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 24 KiB/s wr, 57 op/s
Nov 25 17:06:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2496: 321 pgs: 321 active+clean; 41 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 23 KiB/s wr, 57 op/s
Nov 25 17:06:45 compute-0 nova_compute[254092]: 2025-11-25 17:06:45.250 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:45 compute-0 nova_compute[254092]: 2025-11-25 17:06:45.416 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:45 compute-0 ceph-mon[74985]: pgmap v2496: 321 pgs: 321 active+clean; 41 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 23 KiB/s wr, 57 op/s
Nov 25 17:06:46 compute-0 nova_compute[254092]: 2025-11-25 17:06:46.035 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090391.0342882, a4e18007-11e8-4531-9dc8-8cbc10fe2b75 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:06:46 compute-0 nova_compute[254092]: 2025-11-25 17:06:46.035 254096 INFO nova.compute.manager [-] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] VM Stopped (Lifecycle Event)
Nov 25 17:06:46 compute-0 nova_compute[254092]: 2025-11-25 17:06:46.054 254096 DEBUG nova.compute.manager [None req-d3d84c8b-b3c3-4117-8481-17b1e8d0ca9a - - - - - -] [instance: a4e18007-11e8-4531-9dc8-8cbc10fe2b75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:06:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2497: 321 pgs: 321 active+clean; 41 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 23 KiB/s wr, 57 op/s
Nov 25 17:06:46 compute-0 nova_compute[254092]: 2025-11-25 17:06:46.756 254096 DEBUG oslo_concurrency.lockutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "c64717b5-8862-4f84-989e-9f21bdc37759" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:06:46 compute-0 nova_compute[254092]: 2025-11-25 17:06:46.757 254096 DEBUG oslo_concurrency.lockutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "c64717b5-8862-4f84-989e-9f21bdc37759" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:06:46 compute-0 nova_compute[254092]: 2025-11-25 17:06:46.769 254096 DEBUG nova.compute.manager [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 17:06:46 compute-0 nova_compute[254092]: 2025-11-25 17:06:46.836 254096 DEBUG oslo_concurrency.lockutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:06:46 compute-0 nova_compute[254092]: 2025-11-25 17:06:46.837 254096 DEBUG oslo_concurrency.lockutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:06:46 compute-0 nova_compute[254092]: 2025-11-25 17:06:46.843 254096 DEBUG nova.virt.hardware [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 17:06:46 compute-0 nova_compute[254092]: 2025-11-25 17:06:46.844 254096 INFO nova.compute.claims [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Claim successful on node compute-0.ctlplane.example.com
Nov 25 17:06:46 compute-0 nova_compute[254092]: 2025-11-25 17:06:46.955 254096 DEBUG oslo_concurrency.processutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:06:47 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:06:47 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3157608422' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:06:47 compute-0 nova_compute[254092]: 2025-11-25 17:06:47.420 254096 DEBUG oslo_concurrency.processutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:06:47 compute-0 nova_compute[254092]: 2025-11-25 17:06:47.429 254096 DEBUG nova.compute.provider_tree [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:06:47 compute-0 nova_compute[254092]: 2025-11-25 17:06:47.445 254096 DEBUG nova.scheduler.client.report [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:06:47 compute-0 nova_compute[254092]: 2025-11-25 17:06:47.465 254096 DEBUG oslo_concurrency.lockutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:06:47 compute-0 nova_compute[254092]: 2025-11-25 17:06:47.466 254096 DEBUG nova.compute.manager [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 17:06:47 compute-0 nova_compute[254092]: 2025-11-25 17:06:47.511 254096 DEBUG nova.compute.manager [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 17:06:47 compute-0 nova_compute[254092]: 2025-11-25 17:06:47.511 254096 DEBUG nova.network.neutron [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 17:06:47 compute-0 nova_compute[254092]: 2025-11-25 17:06:47.529 254096 INFO nova.virt.libvirt.driver [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 17:06:47 compute-0 nova_compute[254092]: 2025-11-25 17:06:47.549 254096 DEBUG nova.compute.manager [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 17:06:47 compute-0 nova_compute[254092]: 2025-11-25 17:06:47.635 254096 DEBUG nova.compute.manager [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 17:06:47 compute-0 nova_compute[254092]: 2025-11-25 17:06:47.636 254096 DEBUG nova.virt.libvirt.driver [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 17:06:47 compute-0 nova_compute[254092]: 2025-11-25 17:06:47.637 254096 INFO nova.virt.libvirt.driver [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Creating image(s)
Nov 25 17:06:47 compute-0 nova_compute[254092]: 2025-11-25 17:06:47.655 254096 DEBUG nova.storage.rbd_utils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image c64717b5-8862-4f84-989e-9f21bdc37759_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:06:47 compute-0 nova_compute[254092]: 2025-11-25 17:06:47.678 254096 DEBUG nova.storage.rbd_utils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image c64717b5-8862-4f84-989e-9f21bdc37759_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:06:47 compute-0 nova_compute[254092]: 2025-11-25 17:06:47.702 254096 DEBUG nova.storage.rbd_utils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image c64717b5-8862-4f84-989e-9f21bdc37759_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:06:47 compute-0 nova_compute[254092]: 2025-11-25 17:06:47.706 254096 DEBUG oslo_concurrency.processutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:06:47 compute-0 nova_compute[254092]: 2025-11-25 17:06:47.778 254096 DEBUG oslo_concurrency.processutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:06:47 compute-0 nova_compute[254092]: 2025-11-25 17:06:47.779 254096 DEBUG oslo_concurrency.lockutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:06:47 compute-0 nova_compute[254092]: 2025-11-25 17:06:47.780 254096 DEBUG oslo_concurrency.lockutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:06:47 compute-0 nova_compute[254092]: 2025-11-25 17:06:47.780 254096 DEBUG oslo_concurrency.lockutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:06:47 compute-0 nova_compute[254092]: 2025-11-25 17:06:47.808 254096 DEBUG nova.storage.rbd_utils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image c64717b5-8862-4f84-989e-9f21bdc37759_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:06:47 compute-0 nova_compute[254092]: 2025-11-25 17:06:47.812 254096 DEBUG oslo_concurrency.processutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 c64717b5-8862-4f84-989e-9f21bdc37759_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:06:47 compute-0 ceph-mon[74985]: pgmap v2497: 321 pgs: 321 active+clean; 41 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 23 KiB/s wr, 57 op/s
Nov 25 17:06:47 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3157608422' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:06:47 compute-0 nova_compute[254092]: 2025-11-25 17:06:47.969 254096 DEBUG nova.policy [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e13382ac65e94c7b8a89e67de0e754df', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 17:06:48 compute-0 nova_compute[254092]: 2025-11-25 17:06:48.125 254096 DEBUG oslo_concurrency.processutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 c64717b5-8862-4f84-989e-9f21bdc37759_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.313s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:06:48 compute-0 sudo[390917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:06:48 compute-0 sudo[390917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:06:48 compute-0 sudo[390917]: pam_unix(sudo:session): session closed for user root
Nov 25 17:06:48 compute-0 nova_compute[254092]: 2025-11-25 17:06:48.229 254096 DEBUG nova.storage.rbd_utils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] resizing rbd image c64717b5-8862-4f84-989e-9f21bdc37759_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 17:06:48 compute-0 sudo[390981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:06:48 compute-0 sudo[390981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:06:48 compute-0 podman[390957]: 2025-11-25 17:06:48.242438061 +0000 UTC m=+0.065682613 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 17:06:48 compute-0 sudo[390981]: pam_unix(sudo:session): session closed for user root
Nov 25 17:06:48 compute-0 podman[390941]: 2025-11-25 17:06:48.247553392 +0000 UTC m=+0.076432858 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 17:06:48 compute-0 podman[390958]: 2025-11-25 17:06:48.270289885 +0000 UTC m=+0.093683701 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 17:06:48 compute-0 sudo[391077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:06:48 compute-0 sudo[391077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:06:48 compute-0 sudo[391077]: pam_unix(sudo:session): session closed for user root
Nov 25 17:06:48 compute-0 nova_compute[254092]: 2025-11-25 17:06:48.329 254096 DEBUG nova.objects.instance [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'migration_context' on Instance uuid c64717b5-8862-4f84-989e-9f21bdc37759 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:06:48 compute-0 nova_compute[254092]: 2025-11-25 17:06:48.339 254096 DEBUG nova.virt.libvirt.driver [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 17:06:48 compute-0 nova_compute[254092]: 2025-11-25 17:06:48.339 254096 DEBUG nova.virt.libvirt.driver [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Ensure instance console log exists: /var/lib/nova/instances/c64717b5-8862-4f84-989e-9f21bdc37759/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 17:06:48 compute-0 nova_compute[254092]: 2025-11-25 17:06:48.340 254096 DEBUG oslo_concurrency.lockutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:06:48 compute-0 nova_compute[254092]: 2025-11-25 17:06:48.340 254096 DEBUG oslo_concurrency.lockutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:06:48 compute-0 nova_compute[254092]: 2025-11-25 17:06:48.340 254096 DEBUG oslo_concurrency.lockutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:06:48 compute-0 sudo[391122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:06:48 compute-0 sudo[391122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:06:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2498: 321 pgs: 321 active+clean; 41 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 341 B/s wr, 20 op/s
Nov 25 17:06:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:06:48 compute-0 sudo[391122]: pam_unix(sudo:session): session closed for user root
Nov 25 17:06:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:06:48 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:06:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:06:48 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:06:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:06:48 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:06:48 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev e9519a5a-22f9-420b-9120-db8677a1a2bf does not exist
Nov 25 17:06:48 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 601e55e3-6c0d-4323-8a21-75f3512931ea does not exist
Nov 25 17:06:48 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev b3759223-48a0-4194-9dda-378bf4c9ef37 does not exist
Nov 25 17:06:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:06:48 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:06:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:06:48 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:06:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:06:48 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:06:49 compute-0 sudo[391186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:06:49 compute-0 sudo[391186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:06:49 compute-0 sudo[391186]: pam_unix(sudo:session): session closed for user root
Nov 25 17:06:49 compute-0 sudo[391211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:06:49 compute-0 sudo[391211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:06:49 compute-0 sudo[391211]: pam_unix(sudo:session): session closed for user root
Nov 25 17:06:49 compute-0 sudo[391236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:06:49 compute-0 sudo[391236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:06:49 compute-0 sudo[391236]: pam_unix(sudo:session): session closed for user root
Nov 25 17:06:49 compute-0 sudo[391261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:06:49 compute-0 sudo[391261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:06:49 compute-0 nova_compute[254092]: 2025-11-25 17:06:49.471 254096 DEBUG nova.network.neutron [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Successfully updated port: c7eaeb08-d94a-4ecb-a87f-459a8d848a74 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:06:49 compute-0 nova_compute[254092]: 2025-11-25 17:06:49.489 254096 DEBUG oslo_concurrency.lockutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "refresh_cache-c64717b5-8862-4f84-989e-9f21bdc37759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:06:49 compute-0 nova_compute[254092]: 2025-11-25 17:06:49.490 254096 DEBUG oslo_concurrency.lockutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquired lock "refresh_cache-c64717b5-8862-4f84-989e-9f21bdc37759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:06:49 compute-0 nova_compute[254092]: 2025-11-25 17:06:49.490 254096 DEBUG nova.network.neutron [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 17:06:49 compute-0 nova_compute[254092]: 2025-11-25 17:06:49.606 254096 DEBUG nova.compute.manager [req-035f26a7-92e2-4c0b-bbb6-8046d3766b2d req-21e0e317-c926-44f0-af06-58dff1eae923 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Received event network-changed-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:06:49 compute-0 nova_compute[254092]: 2025-11-25 17:06:49.606 254096 DEBUG nova.compute.manager [req-035f26a7-92e2-4c0b-bbb6-8046d3766b2d req-21e0e317-c926-44f0-af06-58dff1eae923 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Refreshing instance network info cache due to event network-changed-c7eaeb08-d94a-4ecb-a87f-459a8d848a74. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:06:49 compute-0 nova_compute[254092]: 2025-11-25 17:06:49.606 254096 DEBUG oslo_concurrency.lockutils [req-035f26a7-92e2-4c0b-bbb6-8046d3766b2d req-21e0e317-c926-44f0-af06-58dff1eae923 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-c64717b5-8862-4f84-989e-9f21bdc37759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:06:49 compute-0 podman[391326]: 2025-11-25 17:06:49.789536285 +0000 UTC m=+0.055374760 container create 4ca64966e3a73dba273f2d8e06cfc35289c09ab7a1b0a2f75d350fbf96c88a40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_knuth, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:06:49 compute-0 systemd[1]: Started libpod-conmon-4ca64966e3a73dba273f2d8e06cfc35289c09ab7a1b0a2f75d350fbf96c88a40.scope.
Nov 25 17:06:49 compute-0 nova_compute[254092]: 2025-11-25 17:06:49.841 254096 DEBUG nova.network.neutron [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:06:49 compute-0 podman[391326]: 2025-11-25 17:06:49.770336748 +0000 UTC m=+0.036175243 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:06:49 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:06:49 compute-0 podman[391326]: 2025-11-25 17:06:49.907764057 +0000 UTC m=+0.173602552 container init 4ca64966e3a73dba273f2d8e06cfc35289c09ab7a1b0a2f75d350fbf96c88a40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_knuth, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 17:06:49 compute-0 ceph-mon[74985]: pgmap v2498: 321 pgs: 321 active+clean; 41 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 341 B/s wr, 20 op/s
Nov 25 17:06:49 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:06:49 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:06:49 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:06:49 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:06:49 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:06:49 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:06:49 compute-0 podman[391326]: 2025-11-25 17:06:49.915615943 +0000 UTC m=+0.181454418 container start 4ca64966e3a73dba273f2d8e06cfc35289c09ab7a1b0a2f75d350fbf96c88a40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_knuth, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:06:49 compute-0 podman[391326]: 2025-11-25 17:06:49.919002746 +0000 UTC m=+0.184841221 container attach 4ca64966e3a73dba273f2d8e06cfc35289c09ab7a1b0a2f75d350fbf96c88a40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_knuth, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:06:49 compute-0 practical_knuth[391342]: 167 167
Nov 25 17:06:49 compute-0 systemd[1]: libpod-4ca64966e3a73dba273f2d8e06cfc35289c09ab7a1b0a2f75d350fbf96c88a40.scope: Deactivated successfully.
Nov 25 17:06:49 compute-0 conmon[391342]: conmon 4ca64966e3a73dba273f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4ca64966e3a73dba273f2d8e06cfc35289c09ab7a1b0a2f75d350fbf96c88a40.scope/container/memory.events
Nov 25 17:06:49 compute-0 podman[391326]: 2025-11-25 17:06:49.92717485 +0000 UTC m=+0.193013325 container died 4ca64966e3a73dba273f2d8e06cfc35289c09ab7a1b0a2f75d350fbf96c88a40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_knuth, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:06:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ec95dff1dca1f233ffb1fb980ecde4451e90bf122af5635050a8453f0dd5706-merged.mount: Deactivated successfully.
Nov 25 17:06:49 compute-0 podman[391326]: 2025-11-25 17:06:49.966418756 +0000 UTC m=+0.232257231 container remove 4ca64966e3a73dba273f2d8e06cfc35289c09ab7a1b0a2f75d350fbf96c88a40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:06:49 compute-0 systemd[1]: libpod-conmon-4ca64966e3a73dba273f2d8e06cfc35289c09ab7a1b0a2f75d350fbf96c88a40.scope: Deactivated successfully.
Nov 25 17:06:50 compute-0 podman[391366]: 2025-11-25 17:06:50.175255515 +0000 UTC m=+0.066914567 container create afd29dc2cf69b0a95f5a2419b61d6ff781e419f3d7c2d3307d23dd7c0a6733a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_engelbart, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:06:50 compute-0 nova_compute[254092]: 2025-11-25 17:06:50.224 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090395.2232823, cbf5c589-9701-44c9-9600-739675853610 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:06:50 compute-0 nova_compute[254092]: 2025-11-25 17:06:50.225 254096 INFO nova.compute.manager [-] [instance: cbf5c589-9701-44c9-9600-739675853610] VM Stopped (Lifecycle Event)
Nov 25 17:06:50 compute-0 systemd[1]: Started libpod-conmon-afd29dc2cf69b0a95f5a2419b61d6ff781e419f3d7c2d3307d23dd7c0a6733a9.scope.
Nov 25 17:06:50 compute-0 podman[391366]: 2025-11-25 17:06:50.146715552 +0000 UTC m=+0.038374624 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:06:50 compute-0 nova_compute[254092]: 2025-11-25 17:06:50.248 254096 DEBUG nova.compute.manager [None req-adead496-b7f4-4535-86fa-e8e7a5a6d386 - - - - - -] [instance: cbf5c589-9701-44c9-9600-739675853610] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:06:50 compute-0 nova_compute[254092]: 2025-11-25 17:06:50.254 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:50 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:06:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f103226642ad3bc353649e28868d42e554e9024d1b6bf18f54b4bd60707ec016/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:06:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f103226642ad3bc353649e28868d42e554e9024d1b6bf18f54b4bd60707ec016/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:06:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f103226642ad3bc353649e28868d42e554e9024d1b6bf18f54b4bd60707ec016/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:06:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f103226642ad3bc353649e28868d42e554e9024d1b6bf18f54b4bd60707ec016/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:06:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f103226642ad3bc353649e28868d42e554e9024d1b6bf18f54b4bd60707ec016/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:06:50 compute-0 podman[391366]: 2025-11-25 17:06:50.299967655 +0000 UTC m=+0.191626687 container init afd29dc2cf69b0a95f5a2419b61d6ff781e419f3d7c2d3307d23dd7c0a6733a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:06:50 compute-0 podman[391366]: 2025-11-25 17:06:50.311180092 +0000 UTC m=+0.202839154 container start afd29dc2cf69b0a95f5a2419b61d6ff781e419f3d7c2d3307d23dd7c0a6733a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_engelbart, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:06:50 compute-0 podman[391366]: 2025-11-25 17:06:50.315735018 +0000 UTC m=+0.207394080 container attach afd29dc2cf69b0a95f5a2419b61d6ff781e419f3d7c2d3307d23dd7c0a6733a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:06:50 compute-0 nova_compute[254092]: 2025-11-25 17:06:50.419 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2499: 321 pgs: 321 active+clean; 60 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 192 KiB/s wr, 22 op/s
Nov 25 17:06:51 compute-0 bold_engelbart[391383]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:06:51 compute-0 bold_engelbart[391383]: --> relative data size: 1.0
Nov 25 17:06:51 compute-0 bold_engelbart[391383]: --> All data devices are unavailable
Nov 25 17:06:51 compute-0 systemd[1]: libpod-afd29dc2cf69b0a95f5a2419b61d6ff781e419f3d7c2d3307d23dd7c0a6733a9.scope: Deactivated successfully.
Nov 25 17:06:51 compute-0 systemd[1]: libpod-afd29dc2cf69b0a95f5a2419b61d6ff781e419f3d7c2d3307d23dd7c0a6733a9.scope: Consumed 1.121s CPU time.
Nov 25 17:06:51 compute-0 podman[391366]: 2025-11-25 17:06:51.484800773 +0000 UTC m=+1.376459805 container died afd29dc2cf69b0a95f5a2419b61d6ff781e419f3d7c2d3307d23dd7c0a6733a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_engelbart, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 17:06:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-f103226642ad3bc353649e28868d42e554e9024d1b6bf18f54b4bd60707ec016-merged.mount: Deactivated successfully.
Nov 25 17:06:51 compute-0 podman[391366]: 2025-11-25 17:06:51.550549586 +0000 UTC m=+1.442208618 container remove afd29dc2cf69b0a95f5a2419b61d6ff781e419f3d7c2d3307d23dd7c0a6733a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 17:06:51 compute-0 systemd[1]: libpod-conmon-afd29dc2cf69b0a95f5a2419b61d6ff781e419f3d7c2d3307d23dd7c0a6733a9.scope: Deactivated successfully.
Nov 25 17:06:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:06:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:06:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:06:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:06:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 3.6564656996809276e-05 of space, bias 1.0, pg target 0.010969397099042783 quantized to 32 (current 32)
Nov 25 17:06:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:06:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:06:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:06:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:06:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:06:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 17:06:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:06:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:06:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:06:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:06:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:06:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:06:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:06:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:06:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:06:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:06:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:06:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:06:51 compute-0 sudo[391261]: pam_unix(sudo:session): session closed for user root
Nov 25 17:06:51 compute-0 nova_compute[254092]: 2025-11-25 17:06:51.589 254096 DEBUG nova.network.neutron [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Updating instance_info_cache with network_info: [{"id": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "address": "fa:16:3e:51:b5:45", "network": {"id": "fe8ef6db-e551-4904-a3ea-4af9320e49b5", "bridge": "br-int", "label": "tempest-network-smoke--1027346304", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7eaeb08-d9", "ovs_interfaceid": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:06:51 compute-0 nova_compute[254092]: 2025-11-25 17:06:51.614 254096 DEBUG oslo_concurrency.lockutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Releasing lock "refresh_cache-c64717b5-8862-4f84-989e-9f21bdc37759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:06:51 compute-0 nova_compute[254092]: 2025-11-25 17:06:51.614 254096 DEBUG nova.compute.manager [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Instance network_info: |[{"id": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "address": "fa:16:3e:51:b5:45", "network": {"id": "fe8ef6db-e551-4904-a3ea-4af9320e49b5", "bridge": "br-int", "label": "tempest-network-smoke--1027346304", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7eaeb08-d9", "ovs_interfaceid": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 17:06:51 compute-0 nova_compute[254092]: 2025-11-25 17:06:51.614 254096 DEBUG oslo_concurrency.lockutils [req-035f26a7-92e2-4c0b-bbb6-8046d3766b2d req-21e0e317-c926-44f0-af06-58dff1eae923 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-c64717b5-8862-4f84-989e-9f21bdc37759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:06:51 compute-0 nova_compute[254092]: 2025-11-25 17:06:51.614 254096 DEBUG nova.network.neutron [req-035f26a7-92e2-4c0b-bbb6-8046d3766b2d req-21e0e317-c926-44f0-af06-58dff1eae923 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Refreshing network info cache for port c7eaeb08-d94a-4ecb-a87f-459a8d848a74 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:06:51 compute-0 nova_compute[254092]: 2025-11-25 17:06:51.616 254096 DEBUG nova.virt.libvirt.driver [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Start _get_guest_xml network_info=[{"id": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "address": "fa:16:3e:51:b5:45", "network": {"id": "fe8ef6db-e551-4904-a3ea-4af9320e49b5", "bridge": "br-int", "label": "tempest-network-smoke--1027346304", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7eaeb08-d9", "ovs_interfaceid": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 17:06:51 compute-0 nova_compute[254092]: 2025-11-25 17:06:51.624 254096 WARNING nova.virt.libvirt.driver [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:06:51 compute-0 nova_compute[254092]: 2025-11-25 17:06:51.633 254096 DEBUG nova.virt.libvirt.host [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 17:06:51 compute-0 nova_compute[254092]: 2025-11-25 17:06:51.634 254096 DEBUG nova.virt.libvirt.host [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 17:06:51 compute-0 nova_compute[254092]: 2025-11-25 17:06:51.637 254096 DEBUG nova.virt.libvirt.host [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 17:06:51 compute-0 nova_compute[254092]: 2025-11-25 17:06:51.637 254096 DEBUG nova.virt.libvirt.host [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 17:06:51 compute-0 nova_compute[254092]: 2025-11-25 17:06:51.638 254096 DEBUG nova.virt.libvirt.driver [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 17:06:51 compute-0 nova_compute[254092]: 2025-11-25 17:06:51.638 254096 DEBUG nova.virt.hardware [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 17:06:51 compute-0 nova_compute[254092]: 2025-11-25 17:06:51.638 254096 DEBUG nova.virt.hardware [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 17:06:51 compute-0 nova_compute[254092]: 2025-11-25 17:06:51.639 254096 DEBUG nova.virt.hardware [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 17:06:51 compute-0 nova_compute[254092]: 2025-11-25 17:06:51.639 254096 DEBUG nova.virt.hardware [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 17:06:51 compute-0 nova_compute[254092]: 2025-11-25 17:06:51.639 254096 DEBUG nova.virt.hardware [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 17:06:51 compute-0 nova_compute[254092]: 2025-11-25 17:06:51.639 254096 DEBUG nova.virt.hardware [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 17:06:51 compute-0 nova_compute[254092]: 2025-11-25 17:06:51.639 254096 DEBUG nova.virt.hardware [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 17:06:51 compute-0 nova_compute[254092]: 2025-11-25 17:06:51.639 254096 DEBUG nova.virt.hardware [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 17:06:51 compute-0 nova_compute[254092]: 2025-11-25 17:06:51.640 254096 DEBUG nova.virt.hardware [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 17:06:51 compute-0 nova_compute[254092]: 2025-11-25 17:06:51.640 254096 DEBUG nova.virt.hardware [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 17:06:51 compute-0 nova_compute[254092]: 2025-11-25 17:06:51.640 254096 DEBUG nova.virt.hardware [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 17:06:51 compute-0 nova_compute[254092]: 2025-11-25 17:06:51.643 254096 DEBUG oslo_concurrency.processutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:06:51 compute-0 sudo[391428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:06:51 compute-0 sudo[391428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:06:51 compute-0 sudo[391428]: pam_unix(sudo:session): session closed for user root
Nov 25 17:06:51 compute-0 sudo[391453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:06:51 compute-0 sudo[391453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:06:51 compute-0 sudo[391453]: pam_unix(sudo:session): session closed for user root
Nov 25 17:06:51 compute-0 sudo[391479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:06:51 compute-0 sudo[391479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:06:51 compute-0 sudo[391479]: pam_unix(sudo:session): session closed for user root
Nov 25 17:06:51 compute-0 sudo[391514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:06:51 compute-0 sudo[391514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:06:51 compute-0 ceph-mon[74985]: pgmap v2499: 321 pgs: 321 active+clean; 60 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 192 KiB/s wr, 22 op/s
Nov 25 17:06:52 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:06:52 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1966176760' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:06:52 compute-0 nova_compute[254092]: 2025-11-25 17:06:52.144 254096 DEBUG oslo_concurrency.processutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:06:52 compute-0 nova_compute[254092]: 2025-11-25 17:06:52.166 254096 DEBUG nova.storage.rbd_utils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image c64717b5-8862-4f84-989e-9f21bdc37759_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:06:52 compute-0 nova_compute[254092]: 2025-11-25 17:06:52.170 254096 DEBUG oslo_concurrency.processutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:06:52 compute-0 podman[391585]: 2025-11-25 17:06:52.180789303 +0000 UTC m=+0.043298389 container create b337b37de8acbf3334ad601ab661bde877f904b020e1633b2f6c939744dbffc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:06:52 compute-0 systemd[1]: Started libpod-conmon-b337b37de8acbf3334ad601ab661bde877f904b020e1633b2f6c939744dbffc0.scope.
Nov 25 17:06:52 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:06:52 compute-0 podman[391585]: 2025-11-25 17:06:52.162782078 +0000 UTC m=+0.025291164 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:06:52 compute-0 podman[391585]: 2025-11-25 17:06:52.262148674 +0000 UTC m=+0.124657830 container init b337b37de8acbf3334ad601ab661bde877f904b020e1633b2f6c939744dbffc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 17:06:52 compute-0 podman[391585]: 2025-11-25 17:06:52.268857538 +0000 UTC m=+0.131366624 container start b337b37de8acbf3334ad601ab661bde877f904b020e1633b2f6c939744dbffc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:06:52 compute-0 podman[391585]: 2025-11-25 17:06:52.272560009 +0000 UTC m=+0.135069185 container attach b337b37de8acbf3334ad601ab661bde877f904b020e1633b2f6c939744dbffc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_volhard, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:06:52 compute-0 flamboyant_volhard[391622]: 167 167
Nov 25 17:06:52 compute-0 systemd[1]: libpod-b337b37de8acbf3334ad601ab661bde877f904b020e1633b2f6c939744dbffc0.scope: Deactivated successfully.
Nov 25 17:06:52 compute-0 podman[391585]: 2025-11-25 17:06:52.275005796 +0000 UTC m=+0.137514892 container died b337b37de8acbf3334ad601ab661bde877f904b020e1633b2f6c939744dbffc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_volhard, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:06:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-70d9377299eb680a4080d6c1518e71ffcb13623abec1db2db7eb9fb9156e7d28-merged.mount: Deactivated successfully.
Nov 25 17:06:52 compute-0 podman[391585]: 2025-11-25 17:06:52.323915558 +0000 UTC m=+0.186424674 container remove b337b37de8acbf3334ad601ab661bde877f904b020e1633b2f6c939744dbffc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_volhard, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:06:52 compute-0 systemd[1]: libpod-conmon-b337b37de8acbf3334ad601ab661bde877f904b020e1633b2f6c939744dbffc0.scope: Deactivated successfully.
Nov 25 17:06:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2500: 321 pgs: 321 active+clean; 88 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:06:52 compute-0 podman[391663]: 2025-11-25 17:06:52.521679762 +0000 UTC m=+0.047904005 container create ee3a53faa9ac12bf89a146721629d02949e2c0c04e7a360233389227d16d7913 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mirzakhani, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 17:06:52 compute-0 systemd[1]: Started libpod-conmon-ee3a53faa9ac12bf89a146721629d02949e2c0c04e7a360233389227d16d7913.scope.
Nov 25 17:06:52 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:06:52 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3074771413' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:06:52 compute-0 podman[391663]: 2025-11-25 17:06:52.495580807 +0000 UTC m=+0.021805080 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:06:52 compute-0 nova_compute[254092]: 2025-11-25 17:06:52.589 254096 DEBUG oslo_concurrency.processutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:06:52 compute-0 nova_compute[254092]: 2025-11-25 17:06:52.591 254096 DEBUG nova.virt.libvirt.vif [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:06:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-191068635',display_name='tempest-TestNetworkBasicOps-server-191068635',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-191068635',id=125,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJpHrgKSNh4nxd1EAQPL6cFToRs8NuFCJRe5V6wi+HlNMXfO3CA8jeqzTpxAut873d8a0itMoBeEb+RuoOJ/ichSywJk9w6n7vQ8jIINL58mZvzURBI/PRqCBb7SIebsPQ==',key_name='tempest-TestNetworkBasicOps-186218804',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-xthfdnfp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:06:47Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=c64717b5-8862-4f84-989e-9f21bdc37759,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "address": "fa:16:3e:51:b5:45", "network": {"id": "fe8ef6db-e551-4904-a3ea-4af9320e49b5", "bridge": "br-int", "label": "tempest-network-smoke--1027346304", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7eaeb08-d9", "ovs_interfaceid": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:06:52 compute-0 nova_compute[254092]: 2025-11-25 17:06:52.592 254096 DEBUG nova.network.os_vif_util [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "address": "fa:16:3e:51:b5:45", "network": {"id": "fe8ef6db-e551-4904-a3ea-4af9320e49b5", "bridge": "br-int", "label": "tempest-network-smoke--1027346304", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7eaeb08-d9", "ovs_interfaceid": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:06:52 compute-0 nova_compute[254092]: 2025-11-25 17:06:52.593 254096 DEBUG nova.network.os_vif_util [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:51:b5:45,bridge_name='br-int',has_traffic_filtering=True,id=c7eaeb08-d94a-4ecb-a87f-459a8d848a74,network=Network(fe8ef6db-e551-4904-a3ea-4af9320e49b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc7eaeb08-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:06:52 compute-0 nova_compute[254092]: 2025-11-25 17:06:52.594 254096 DEBUG nova.objects.instance [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'pci_devices' on Instance uuid c64717b5-8862-4f84-989e-9f21bdc37759 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:06:52 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:06:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/055ccbd554e2eab4075af89e6fd2ae2b25e713f3d5222ae972da9fb0394f6500/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:06:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/055ccbd554e2eab4075af89e6fd2ae2b25e713f3d5222ae972da9fb0394f6500/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:06:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/055ccbd554e2eab4075af89e6fd2ae2b25e713f3d5222ae972da9fb0394f6500/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:06:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/055ccbd554e2eab4075af89e6fd2ae2b25e713f3d5222ae972da9fb0394f6500/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:06:52 compute-0 nova_compute[254092]: 2025-11-25 17:06:52.610 254096 DEBUG nova.virt.libvirt.driver [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] End _get_guest_xml xml=<domain type="kvm">
Nov 25 17:06:52 compute-0 nova_compute[254092]:   <uuid>c64717b5-8862-4f84-989e-9f21bdc37759</uuid>
Nov 25 17:06:52 compute-0 nova_compute[254092]:   <name>instance-0000007d</name>
Nov 25 17:06:52 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 17:06:52 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 17:06:52 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:06:52 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:       <nova:name>tempest-TestNetworkBasicOps-server-191068635</nova:name>
Nov 25 17:06:52 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 17:06:51</nova:creationTime>
Nov 25 17:06:52 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 17:06:52 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 17:06:52 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 17:06:52 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 17:06:52 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:06:52 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 17:06:52 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 17:06:52 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 17:06:52 compute-0 nova_compute[254092]:         <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 17:06:52 compute-0 nova_compute[254092]:         <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 17:06:52 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 17:06:52 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 17:06:52 compute-0 nova_compute[254092]:         <nova:port uuid="c7eaeb08-d94a-4ecb-a87f-459a8d848a74">
Nov 25 17:06:52 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 17:06:52 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 17:06:52 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:06:52 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 17:06:52 compute-0 nova_compute[254092]:     <system>
Nov 25 17:06:52 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 17:06:52 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 17:06:52 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:06:52 compute-0 nova_compute[254092]:       <entry name="serial">c64717b5-8862-4f84-989e-9f21bdc37759</entry>
Nov 25 17:06:52 compute-0 nova_compute[254092]:       <entry name="uuid">c64717b5-8862-4f84-989e-9f21bdc37759</entry>
Nov 25 17:06:52 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     </system>
Nov 25 17:06:52 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:06:52 compute-0 nova_compute[254092]:   <os>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:   </os>
Nov 25 17:06:52 compute-0 nova_compute[254092]:   <features>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:   </features>
Nov 25 17:06:52 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 17:06:52 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:06:52 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 17:06:52 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:06:52 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 17:06:52 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/c64717b5-8862-4f84-989e-9f21bdc37759_disk">
Nov 25 17:06:52 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:       </source>
Nov 25 17:06:52 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:06:52 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:06:52 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 17:06:52 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/c64717b5-8862-4f84-989e-9f21bdc37759_disk.config">
Nov 25 17:06:52 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:       </source>
Nov 25 17:06:52 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:06:52 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:06:52 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 17:06:52 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:51:b5:45"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:       <target dev="tapc7eaeb08-d9"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 17:06:52 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/c64717b5-8862-4f84-989e-9f21bdc37759/console.log" append="off"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     <video>
Nov 25 17:06:52 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     </video>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 17:06:52 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 17:06:52 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 17:06:52 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:06:52 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:06:52 compute-0 nova_compute[254092]: </domain>
Nov 25 17:06:52 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 17:06:52 compute-0 nova_compute[254092]: 2025-11-25 17:06:52.611 254096 DEBUG nova.compute.manager [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Preparing to wait for external event network-vif-plugged-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 17:06:52 compute-0 nova_compute[254092]: 2025-11-25 17:06:52.611 254096 DEBUG oslo_concurrency.lockutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "c64717b5-8862-4f84-989e-9f21bdc37759-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:06:52 compute-0 nova_compute[254092]: 2025-11-25 17:06:52.611 254096 DEBUG oslo_concurrency.lockutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "c64717b5-8862-4f84-989e-9f21bdc37759-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:06:52 compute-0 nova_compute[254092]: 2025-11-25 17:06:52.611 254096 DEBUG oslo_concurrency.lockutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "c64717b5-8862-4f84-989e-9f21bdc37759-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:06:52 compute-0 nova_compute[254092]: 2025-11-25 17:06:52.613 254096 DEBUG nova.virt.libvirt.vif [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:06:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-191068635',display_name='tempest-TestNetworkBasicOps-server-191068635',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-191068635',id=125,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJpHrgKSNh4nxd1EAQPL6cFToRs8NuFCJRe5V6wi+HlNMXfO3CA8jeqzTpxAut873d8a0itMoBeEb+RuoOJ/ichSywJk9w6n7vQ8jIINL58mZvzURBI/PRqCBb7SIebsPQ==',key_name='tempest-TestNetworkBasicOps-186218804',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-xthfdnfp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:06:47Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=c64717b5-8862-4f84-989e-9f21bdc37759,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "address": "fa:16:3e:51:b5:45", "network": {"id": "fe8ef6db-e551-4904-a3ea-4af9320e49b5", "bridge": "br-int", "label": "tempest-network-smoke--1027346304", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7eaeb08-d9", "ovs_interfaceid": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:06:52 compute-0 nova_compute[254092]: 2025-11-25 17:06:52.613 254096 DEBUG nova.network.os_vif_util [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "address": "fa:16:3e:51:b5:45", "network": {"id": "fe8ef6db-e551-4904-a3ea-4af9320e49b5", "bridge": "br-int", "label": "tempest-network-smoke--1027346304", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7eaeb08-d9", "ovs_interfaceid": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:06:52 compute-0 nova_compute[254092]: 2025-11-25 17:06:52.614 254096 DEBUG nova.network.os_vif_util [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:51:b5:45,bridge_name='br-int',has_traffic_filtering=True,id=c7eaeb08-d94a-4ecb-a87f-459a8d848a74,network=Network(fe8ef6db-e551-4904-a3ea-4af9320e49b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc7eaeb08-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:06:52 compute-0 nova_compute[254092]: 2025-11-25 17:06:52.614 254096 DEBUG os_vif [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:b5:45,bridge_name='br-int',has_traffic_filtering=True,id=c7eaeb08-d94a-4ecb-a87f-459a8d848a74,network=Network(fe8ef6db-e551-4904-a3ea-4af9320e49b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc7eaeb08-d9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:06:52 compute-0 nova_compute[254092]: 2025-11-25 17:06:52.619 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:52 compute-0 nova_compute[254092]: 2025-11-25 17:06:52.620 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:06:52 compute-0 podman[391663]: 2025-11-25 17:06:52.620488023 +0000 UTC m=+0.146712276 container init ee3a53faa9ac12bf89a146721629d02949e2c0c04e7a360233389227d16d7913 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 17:06:52 compute-0 nova_compute[254092]: 2025-11-25 17:06:52.621 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:06:52 compute-0 nova_compute[254092]: 2025-11-25 17:06:52.626 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:52 compute-0 nova_compute[254092]: 2025-11-25 17:06:52.627 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc7eaeb08-d9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:06:52 compute-0 nova_compute[254092]: 2025-11-25 17:06:52.628 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc7eaeb08-d9, col_values=(('external_ids', {'iface-id': 'c7eaeb08-d94a-4ecb-a87f-459a8d848a74', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:51:b5:45', 'vm-uuid': 'c64717b5-8862-4f84-989e-9f21bdc37759'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:06:52 compute-0 nova_compute[254092]: 2025-11-25 17:06:52.629 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:52 compute-0 podman[391663]: 2025-11-25 17:06:52.631205316 +0000 UTC m=+0.157429549 container start ee3a53faa9ac12bf89a146721629d02949e2c0c04e7a360233389227d16d7913 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mirzakhani, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 17:06:52 compute-0 NetworkManager[48891]: <info>  [1764090412.6316] manager: (tapc7eaeb08-d9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/532)
Nov 25 17:06:52 compute-0 nova_compute[254092]: 2025-11-25 17:06:52.633 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:06:52 compute-0 podman[391663]: 2025-11-25 17:06:52.636541222 +0000 UTC m=+0.162765465 container attach ee3a53faa9ac12bf89a146721629d02949e2c0c04e7a360233389227d16d7913 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mirzakhani, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 17:06:52 compute-0 nova_compute[254092]: 2025-11-25 17:06:52.639 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:52 compute-0 nova_compute[254092]: 2025-11-25 17:06:52.640 254096 INFO os_vif [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:b5:45,bridge_name='br-int',has_traffic_filtering=True,id=c7eaeb08-d94a-4ecb-a87f-459a8d848a74,network=Network(fe8ef6db-e551-4904-a3ea-4af9320e49b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc7eaeb08-d9')
Nov 25 17:06:52 compute-0 nova_compute[254092]: 2025-11-25 17:06:52.684 254096 DEBUG nova.virt.libvirt.driver [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:06:52 compute-0 nova_compute[254092]: 2025-11-25 17:06:52.685 254096 DEBUG nova.virt.libvirt.driver [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:06:52 compute-0 nova_compute[254092]: 2025-11-25 17:06:52.685 254096 DEBUG nova.virt.libvirt.driver [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No VIF found with MAC fa:16:3e:51:b5:45, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:06:52 compute-0 nova_compute[254092]: 2025-11-25 17:06:52.686 254096 INFO nova.virt.libvirt.driver [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Using config drive
Nov 25 17:06:52 compute-0 nova_compute[254092]: 2025-11-25 17:06:52.714 254096 DEBUG nova.storage.rbd_utils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image c64717b5-8862-4f84-989e-9f21bdc37759_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:06:52 compute-0 sshd-session[391393]: Invalid user ubuntu from 150.95.85.24 port 42926
Nov 25 17:06:52 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1966176760' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:06:52 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3074771413' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:06:53 compute-0 sshd-session[391393]: Received disconnect from 150.95.85.24 port 42926:11:  [preauth]
Nov 25 17:06:53 compute-0 sshd-session[391393]: Disconnected from invalid user ubuntu 150.95.85.24 port 42926 [preauth]
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]: {
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:     "0": [
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:         {
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:             "devices": [
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:                 "/dev/loop3"
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:             ],
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:             "lv_name": "ceph_lv0",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:             "lv_size": "21470642176",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:             "name": "ceph_lv0",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:             "tags": {
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:                 "ceph.cluster_name": "ceph",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:                 "ceph.crush_device_class": "",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:                 "ceph.encrypted": "0",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:                 "ceph.osd_id": "0",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:                 "ceph.type": "block",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:                 "ceph.vdo": "0"
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:             },
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:             "type": "block",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:             "vg_name": "ceph_vg0"
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:         }
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:     ],
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:     "1": [
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:         {
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:             "devices": [
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:                 "/dev/loop4"
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:             ],
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:             "lv_name": "ceph_lv1",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:             "lv_size": "21470642176",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:             "name": "ceph_lv1",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:             "tags": {
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:                 "ceph.cluster_name": "ceph",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:                 "ceph.crush_device_class": "",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:                 "ceph.encrypted": "0",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:                 "ceph.osd_id": "1",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:                 "ceph.type": "block",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:                 "ceph.vdo": "0"
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:             },
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:             "type": "block",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:             "vg_name": "ceph_vg1"
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:         }
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:     ],
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:     "2": [
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:         {
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:             "devices": [
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:                 "/dev/loop5"
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:             ],
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:             "lv_name": "ceph_lv2",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:             "lv_size": "21470642176",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:             "name": "ceph_lv2",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:             "tags": {
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:                 "ceph.cluster_name": "ceph",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:                 "ceph.crush_device_class": "",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:                 "ceph.encrypted": "0",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:                 "ceph.osd_id": "2",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:                 "ceph.type": "block",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:                 "ceph.vdo": "0"
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:             },
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:             "type": "block",
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:             "vg_name": "ceph_vg2"
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:         }
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]:     ]
Nov 25 17:06:53 compute-0 nifty_mirzakhani[391680]: }
Nov 25 17:06:53 compute-0 systemd[1]: libpod-ee3a53faa9ac12bf89a146721629d02949e2c0c04e7a360233389227d16d7913.scope: Deactivated successfully.
Nov 25 17:06:53 compute-0 podman[391663]: 2025-11-25 17:06:53.373349782 +0000 UTC m=+0.899574025 container died ee3a53faa9ac12bf89a146721629d02949e2c0c04e7a360233389227d16d7913 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mirzakhani, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 17:06:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-055ccbd554e2eab4075af89e6fd2ae2b25e713f3d5222ae972da9fb0394f6500-merged.mount: Deactivated successfully.
Nov 25 17:06:53 compute-0 podman[391663]: 2025-11-25 17:06:53.439588139 +0000 UTC m=+0.965812382 container remove ee3a53faa9ac12bf89a146721629d02949e2c0c04e7a360233389227d16d7913 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mirzakhani, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 17:06:53 compute-0 systemd[1]: libpod-conmon-ee3a53faa9ac12bf89a146721629d02949e2c0c04e7a360233389227d16d7913.scope: Deactivated successfully.
Nov 25 17:06:53 compute-0 sudo[391514]: pam_unix(sudo:session): session closed for user root
Nov 25 17:06:53 compute-0 sudo[391725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:06:53 compute-0 sudo[391725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:06:53 compute-0 sudo[391725]: pam_unix(sudo:session): session closed for user root
Nov 25 17:06:53 compute-0 sudo[391750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:06:53 compute-0 sudo[391750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:06:53 compute-0 sudo[391750]: pam_unix(sudo:session): session closed for user root
Nov 25 17:06:53 compute-0 sudo[391775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:06:53 compute-0 sudo[391775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:06:53 compute-0 sudo[391775]: pam_unix(sudo:session): session closed for user root
Nov 25 17:06:53 compute-0 nova_compute[254092]: 2025-11-25 17:06:53.742 254096 INFO nova.virt.libvirt.driver [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Creating config drive at /var/lib/nova/instances/c64717b5-8862-4f84-989e-9f21bdc37759/disk.config
Nov 25 17:06:53 compute-0 nova_compute[254092]: 2025-11-25 17:06:53.748 254096 DEBUG oslo_concurrency.processutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c64717b5-8862-4f84-989e-9f21bdc37759/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3hto8v6y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:06:53 compute-0 sudo[391800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:06:53 compute-0 sudo[391800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:06:53 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:06:53 compute-0 nova_compute[254092]: 2025-11-25 17:06:53.899 254096 DEBUG oslo_concurrency.processutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c64717b5-8862-4f84-989e-9f21bdc37759/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3hto8v6y" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:06:53 compute-0 nova_compute[254092]: 2025-11-25 17:06:53.938 254096 DEBUG nova.storage.rbd_utils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image c64717b5-8862-4f84-989e-9f21bdc37759_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:06:53 compute-0 ceph-mon[74985]: pgmap v2500: 321 pgs: 321 active+clean; 88 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:06:53 compute-0 nova_compute[254092]: 2025-11-25 17:06:53.945 254096 DEBUG oslo_concurrency.processutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c64717b5-8862-4f84-989e-9f21bdc37759/disk.config c64717b5-8862-4f84-989e-9f21bdc37759_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:06:54 compute-0 nova_compute[254092]: 2025-11-25 17:06:54.147 254096 DEBUG oslo_concurrency.processutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c64717b5-8862-4f84-989e-9f21bdc37759/disk.config c64717b5-8862-4f84-989e-9f21bdc37759_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.202s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:06:54 compute-0 nova_compute[254092]: 2025-11-25 17:06:54.148 254096 INFO nova.virt.libvirt.driver [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Deleting local config drive /var/lib/nova/instances/c64717b5-8862-4f84-989e-9f21bdc37759/disk.config because it was imported into RBD.
Nov 25 17:06:54 compute-0 podman[391905]: 2025-11-25 17:06:54.220837987 +0000 UTC m=+0.060487690 container create a64378e0742d54511a80ccf76fac8ab9871560fe9ae753c13a8df75ccd5c15bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:06:54 compute-0 kernel: tapc7eaeb08-d9: entered promiscuous mode
Nov 25 17:06:54 compute-0 NetworkManager[48891]: <info>  [1764090414.2318] manager: (tapc7eaeb08-d9): new Tun device (/org/freedesktop/NetworkManager/Devices/533)
Nov 25 17:06:54 compute-0 ovn_controller[153477]: 2025-11-25T17:06:54Z|01292|binding|INFO|Claiming lport c7eaeb08-d94a-4ecb-a87f-459a8d848a74 for this chassis.
Nov 25 17:06:54 compute-0 nova_compute[254092]: 2025-11-25 17:06:54.268 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:54 compute-0 ovn_controller[153477]: 2025-11-25T17:06:54Z|01293|binding|INFO|c7eaeb08-d94a-4ecb-a87f-459a8d848a74: Claiming fa:16:3e:51:b5:45 10.100.0.7
Nov 25 17:06:54 compute-0 nova_compute[254092]: 2025-11-25 17:06:54.274 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:54 compute-0 systemd[1]: Started libpod-conmon-a64378e0742d54511a80ccf76fac8ab9871560fe9ae753c13a8df75ccd5c15bf.scope.
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.283 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:b5:45 10.100.0.7'], port_security=['fa:16:3e:51:b5:45 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1132643307', 'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'c64717b5-8862-4f84-989e-9f21bdc37759', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fe8ef6db-e551-4904-a3ea-4af9320e49b5', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1132643307', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '2', 'neutron:security_group_ids': '63a1678d-cfcd-4cc9-9221-d7d228d35cc1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8eb7fa40-72a8-424a-b9c3-ddea4719d3ea, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=c7eaeb08-d94a-4ecb-a87f-459a8d848a74) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.284 163338 INFO neutron.agent.ovn.metadata.agent [-] Port c7eaeb08-d94a-4ecb-a87f-459a8d848a74 in datapath fe8ef6db-e551-4904-a3ea-4af9320e49b5 bound to our chassis
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.286 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fe8ef6db-e551-4904-a3ea-4af9320e49b5
Nov 25 17:06:54 compute-0 podman[391905]: 2025-11-25 17:06:54.194141295 +0000 UTC m=+0.033791008 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:06:54 compute-0 systemd-udevd[391937]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.307 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fd6415e9-8ad2-43d1-ad26-19c690c70a34]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.308 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfe8ef6db-e1 in ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.310 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfe8ef6db-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.311 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0c61573b-bfbb-4879-84f2-1e98434d063e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.312 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[124f599c-43d2-4f98-b194-74eaf76ae29b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:54 compute-0 NetworkManager[48891]: <info>  [1764090414.3205] device (tapc7eaeb08-d9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:06:54 compute-0 NetworkManager[48891]: <info>  [1764090414.3220] device (tapc7eaeb08-d9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:06:54 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.333 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[e231c046-7dbc-49ac-8a85-9c60b59b2e63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:54 compute-0 systemd-machined[216343]: New machine qemu-157-instance-0000007d.
Nov 25 17:06:54 compute-0 ovn_controller[153477]: 2025-11-25T17:06:54Z|01294|binding|INFO|Setting lport c7eaeb08-d94a-4ecb-a87f-459a8d848a74 ovn-installed in OVS
Nov 25 17:06:54 compute-0 ovn_controller[153477]: 2025-11-25T17:06:54Z|01295|binding|INFO|Setting lport c7eaeb08-d94a-4ecb-a87f-459a8d848a74 up in Southbound
Nov 25 17:06:54 compute-0 systemd[1]: Started Virtual Machine qemu-157-instance-0000007d.
Nov 25 17:06:54 compute-0 podman[391905]: 2025-11-25 17:06:54.363959343 +0000 UTC m=+0.203609126 container init a64378e0742d54511a80ccf76fac8ab9871560fe9ae753c13a8df75ccd5c15bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_hoover, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:06:54 compute-0 nova_compute[254092]: 2025-11-25 17:06:54.366 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:54 compute-0 podman[391905]: 2025-11-25 17:06:54.377406132 +0000 UTC m=+0.217055835 container start a64378e0742d54511a80ccf76fac8ab9871560fe9ae753c13a8df75ccd5c15bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:06:54 compute-0 podman[391905]: 2025-11-25 17:06:54.380691242 +0000 UTC m=+0.220340935 container attach a64378e0742d54511a80ccf76fac8ab9871560fe9ae753c13a8df75ccd5c15bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.381 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c27a760d-c286-4929-99d2-481f61c510a9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:54 compute-0 systemd[1]: libpod-a64378e0742d54511a80ccf76fac8ab9871560fe9ae753c13a8df75ccd5c15bf.scope: Deactivated successfully.
Nov 25 17:06:54 compute-0 distracted_hoover[391931]: 167 167
Nov 25 17:06:54 compute-0 conmon[391931]: conmon a64378e0742d54511a80 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a64378e0742d54511a80ccf76fac8ab9871560fe9ae753c13a8df75ccd5c15bf.scope/container/memory.events
Nov 25 17:06:54 compute-0 podman[391905]: 2025-11-25 17:06:54.386801349 +0000 UTC m=+0.226451052 container died a64378e0742d54511a80ccf76fac8ab9871560fe9ae753c13a8df75ccd5c15bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_hoover, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 17:06:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-f63c608166666fa43e4ff66dbee3dbf0bb9c33f59feb1f9a11227abb862ccb83-merged.mount: Deactivated successfully.
Nov 25 17:06:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2501: 321 pgs: 321 active+clean; 88 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:06:54 compute-0 podman[391905]: 2025-11-25 17:06:54.433452359 +0000 UTC m=+0.273102092 container remove a64378e0742d54511a80ccf76fac8ab9871560fe9ae753c13a8df75ccd5c15bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_hoover, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.433 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[aa5c6d98-ab15-4140-83f6-1e5a4523cd89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.441 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c1944594-6ad3-48c0-94fe-a880205c5575]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:54 compute-0 NetworkManager[48891]: <info>  [1764090414.4421] manager: (tapfe8ef6db-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/534)
Nov 25 17:06:54 compute-0 systemd[1]: libpod-conmon-a64378e0742d54511a80ccf76fac8ab9871560fe9ae753c13a8df75ccd5c15bf.scope: Deactivated successfully.
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.488 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[4d7953de-f19b-424f-80bb-f9b878bbb459]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.492 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[039dbfc4-2ec2-452e-ab69-028cc7df7e5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:54 compute-0 NetworkManager[48891]: <info>  [1764090414.5297] device (tapfe8ef6db-e0): carrier: link connected
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.537 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[772bec90-5fff-46b1-b3ed-f0f3fb2ac84d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.551 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[97deb257-6616-4d4e-a476-2cbc5e637ce4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfe8ef6db-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:aa:36:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 380], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 687212, 'reachable_time': 27072, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391985, 'error': None, 'target': 'ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.568 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[271654f5-5c7c-4ceb-9f5d-37347ff9ddbc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feaa:36f6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 687212, 'tstamp': 687212}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391987, 'error': None, 'target': 'ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.585 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[de2a027f-125f-4962-993a-a93547b9e504]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfe8ef6db-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:aa:36:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 380], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 687212, 'reachable_time': 27072, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 391994, 'error': None, 'target': 'ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.619 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2753ad3f-5f6b-46bf-ae91-dd1ae7d5f540]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:54 compute-0 podman[391992]: 2025-11-25 17:06:54.631890882 +0000 UTC m=+0.050554878 container create a49d419cd9b3ef31f6334db208b4ed79a6a2e787ec84fa256929608d8bc00004 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_edison, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.681 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[168ddc46-313d-4aca-bd4f-4e5d0da7ac0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:54 compute-0 systemd[1]: Started libpod-conmon-a49d419cd9b3ef31f6334db208b4ed79a6a2e787ec84fa256929608d8bc00004.scope.
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.682 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfe8ef6db-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.683 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.683 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfe8ef6db-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:06:54 compute-0 NetworkManager[48891]: <info>  [1764090414.6861] manager: (tapfe8ef6db-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/535)
Nov 25 17:06:54 compute-0 kernel: tapfe8ef6db-e0: entered promiscuous mode
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.687 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfe8ef6db-e0, col_values=(('external_ids', {'iface-id': 'abb62292-a5ed-40d9-8b98-1dcedecc4b03'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:06:54 compute-0 ovn_controller[153477]: 2025-11-25T17:06:54Z|01296|binding|INFO|Releasing lport abb62292-a5ed-40d9-8b98-1dcedecc4b03 from this chassis (sb_readonly=0)
Nov 25 17:06:54 compute-0 podman[391992]: 2025-11-25 17:06:54.603820812 +0000 UTC m=+0.022484838 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:06:54 compute-0 nova_compute[254092]: 2025-11-25 17:06:54.700 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.705 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fe8ef6db-e551-4904-a3ea-4af9320e49b5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fe8ef6db-e551-4904-a3ea-4af9320e49b5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 17:06:54 compute-0 nova_compute[254092]: 2025-11-25 17:06:54.705 254096 DEBUG nova.compute.manager [req-5351e15f-30f9-4447-8667-8173918b7079 req-71be6ae6-8509-4d1e-95d6-c2171cae9067 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Received event network-vif-plugged-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:06:54 compute-0 nova_compute[254092]: 2025-11-25 17:06:54.705 254096 DEBUG oslo_concurrency.lockutils [req-5351e15f-30f9-4447-8667-8173918b7079 req-71be6ae6-8509-4d1e-95d6-c2171cae9067 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "c64717b5-8862-4f84-989e-9f21bdc37759-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:06:54 compute-0 nova_compute[254092]: 2025-11-25 17:06:54.705 254096 DEBUG oslo_concurrency.lockutils [req-5351e15f-30f9-4447-8667-8173918b7079 req-71be6ae6-8509-4d1e-95d6-c2171cae9067 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "c64717b5-8862-4f84-989e-9f21bdc37759-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:06:54 compute-0 nova_compute[254092]: 2025-11-25 17:06:54.706 254096 DEBUG oslo_concurrency.lockutils [req-5351e15f-30f9-4447-8667-8173918b7079 req-71be6ae6-8509-4d1e-95d6-c2171cae9067 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "c64717b5-8862-4f84-989e-9f21bdc37759-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:06:54 compute-0 nova_compute[254092]: 2025-11-25 17:06:54.706 254096 DEBUG nova.compute.manager [req-5351e15f-30f9-4447-8667-8173918b7079 req-71be6ae6-8509-4d1e-95d6-c2171cae9067 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Processing event network-vif-plugged-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 17:06:54 compute-0 nova_compute[254092]: 2025-11-25 17:06:54.706 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.707 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8103f029-bbfc-48b6-a032-8e43bd613080]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.708 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]: global
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-fe8ef6db-e551-4904-a3ea-4af9320e49b5
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/fe8ef6db-e551-4904-a3ea-4af9320e49b5.pid.haproxy
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID fe8ef6db-e551-4904-a3ea-4af9320e49b5
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 17:06:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:06:54.710 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5', 'env', 'PROCESS_TAG=haproxy-fe8ef6db-e551-4904-a3ea-4af9320e49b5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fe8ef6db-e551-4904-a3ea-4af9320e49b5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 17:06:54 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:06:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d404b3347d994af65cdf86f54c4cbf74f4739d20cbaf4e6b4e2433217de74e9b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:06:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d404b3347d994af65cdf86f54c4cbf74f4739d20cbaf4e6b4e2433217de74e9b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:06:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d404b3347d994af65cdf86f54c4cbf74f4739d20cbaf4e6b4e2433217de74e9b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:06:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d404b3347d994af65cdf86f54c4cbf74f4739d20cbaf4e6b4e2433217de74e9b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:06:54 compute-0 podman[391992]: 2025-11-25 17:06:54.747203474 +0000 UTC m=+0.165867490 container init a49d419cd9b3ef31f6334db208b4ed79a6a2e787ec84fa256929608d8bc00004 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_edison, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 17:06:54 compute-0 nova_compute[254092]: 2025-11-25 17:06:54.753 254096 DEBUG nova.network.neutron [req-035f26a7-92e2-4c0b-bbb6-8046d3766b2d req-21e0e317-c926-44f0-af06-58dff1eae923 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Updated VIF entry in instance network info cache for port c7eaeb08-d94a-4ecb-a87f-459a8d848a74. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:06:54 compute-0 nova_compute[254092]: 2025-11-25 17:06:54.754 254096 DEBUG nova.network.neutron [req-035f26a7-92e2-4c0b-bbb6-8046d3766b2d req-21e0e317-c926-44f0-af06-58dff1eae923 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Updating instance_info_cache with network_info: [{"id": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "address": "fa:16:3e:51:b5:45", "network": {"id": "fe8ef6db-e551-4904-a3ea-4af9320e49b5", "bridge": "br-int", "label": "tempest-network-smoke--1027346304", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7eaeb08-d9", "ovs_interfaceid": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:06:54 compute-0 podman[391992]: 2025-11-25 17:06:54.757392683 +0000 UTC m=+0.176056679 container start a49d419cd9b3ef31f6334db208b4ed79a6a2e787ec84fa256929608d8bc00004 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 17:06:54 compute-0 podman[391992]: 2025-11-25 17:06:54.760313093 +0000 UTC m=+0.178977089 container attach a49d419cd9b3ef31f6334db208b4ed79a6a2e787ec84fa256929608d8bc00004 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_edison, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:06:54 compute-0 nova_compute[254092]: 2025-11-25 17:06:54.766 254096 DEBUG oslo_concurrency.lockutils [req-035f26a7-92e2-4c0b-bbb6-8046d3766b2d req-21e0e317-c926-44f0-af06-58dff1eae923 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-c64717b5-8862-4f84-989e-9f21bdc37759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:06:55 compute-0 podman[392081]: 2025-11-25 17:06:55.059004656 +0000 UTC m=+0.042667681 container create a1ba0795f5e72b4bb1e249265ea2f671284e1bf7f90f1285d2cd0339baacd4f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 25 17:06:55 compute-0 systemd[1]: Started libpod-conmon-a1ba0795f5e72b4bb1e249265ea2f671284e1bf7f90f1285d2cd0339baacd4f2.scope.
Nov 25 17:06:55 compute-0 nova_compute[254092]: 2025-11-25 17:06:55.099 254096 DEBUG nova.compute.manager [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 17:06:55 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:06:55 compute-0 nova_compute[254092]: 2025-11-25 17:06:55.101 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090415.0986474, c64717b5-8862-4f84-989e-9f21bdc37759 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:06:55 compute-0 nova_compute[254092]: 2025-11-25 17:06:55.102 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] VM Started (Lifecycle Event)
Nov 25 17:06:55 compute-0 nova_compute[254092]: 2025-11-25 17:06:55.104 254096 DEBUG nova.virt.libvirt.driver [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 17:06:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffdb7dcc8a3df1de0e20ad88d7e2ca7a9b516d5bc7839b4b7f23d1b1edc5d54e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 17:06:55 compute-0 nova_compute[254092]: 2025-11-25 17:06:55.108 254096 INFO nova.virt.libvirt.driver [-] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Instance spawned successfully.
Nov 25 17:06:55 compute-0 nova_compute[254092]: 2025-11-25 17:06:55.108 254096 DEBUG nova.virt.libvirt.driver [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 17:06:55 compute-0 podman[392081]: 2025-11-25 17:06:55.122396225 +0000 UTC m=+0.106059260 container init a1ba0795f5e72b4bb1e249265ea2f671284e1bf7f90f1285d2cd0339baacd4f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 17:06:55 compute-0 nova_compute[254092]: 2025-11-25 17:06:55.125 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:06:55 compute-0 podman[392081]: 2025-11-25 17:06:55.127922137 +0000 UTC m=+0.111585162 container start a1ba0795f5e72b4bb1e249265ea2f671284e1bf7f90f1285d2cd0339baacd4f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 17:06:55 compute-0 podman[392081]: 2025-11-25 17:06:55.036564321 +0000 UTC m=+0.020227366 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 17:06:55 compute-0 nova_compute[254092]: 2025-11-25 17:06:55.136 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:06:55 compute-0 nova_compute[254092]: 2025-11-25 17:06:55.141 254096 DEBUG nova.virt.libvirt.driver [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:06:55 compute-0 nova_compute[254092]: 2025-11-25 17:06:55.141 254096 DEBUG nova.virt.libvirt.driver [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:06:55 compute-0 nova_compute[254092]: 2025-11-25 17:06:55.142 254096 DEBUG nova.virt.libvirt.driver [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:06:55 compute-0 nova_compute[254092]: 2025-11-25 17:06:55.142 254096 DEBUG nova.virt.libvirt.driver [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:06:55 compute-0 nova_compute[254092]: 2025-11-25 17:06:55.143 254096 DEBUG nova.virt.libvirt.driver [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:06:55 compute-0 nova_compute[254092]: 2025-11-25 17:06:55.143 254096 DEBUG nova.virt.libvirt.driver [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:06:55 compute-0 neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5[392103]: [NOTICE]   (392107) : New worker (392109) forked
Nov 25 17:06:55 compute-0 neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5[392103]: [NOTICE]   (392107) : Loading success.
Nov 25 17:06:55 compute-0 nova_compute[254092]: 2025-11-25 17:06:55.173 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:06:55 compute-0 nova_compute[254092]: 2025-11-25 17:06:55.173 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090415.1004786, c64717b5-8862-4f84-989e-9f21bdc37759 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:06:55 compute-0 nova_compute[254092]: 2025-11-25 17:06:55.174 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] VM Paused (Lifecycle Event)
Nov 25 17:06:55 compute-0 nova_compute[254092]: 2025-11-25 17:06:55.199 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:06:55 compute-0 nova_compute[254092]: 2025-11-25 17:06:55.204 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090415.1038833, c64717b5-8862-4f84-989e-9f21bdc37759 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:06:55 compute-0 nova_compute[254092]: 2025-11-25 17:06:55.204 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] VM Resumed (Lifecycle Event)
Nov 25 17:06:55 compute-0 nova_compute[254092]: 2025-11-25 17:06:55.211 254096 INFO nova.compute.manager [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Took 7.58 seconds to spawn the instance on the hypervisor.
Nov 25 17:06:55 compute-0 nova_compute[254092]: 2025-11-25 17:06:55.211 254096 DEBUG nova.compute.manager [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:06:55 compute-0 nova_compute[254092]: 2025-11-25 17:06:55.235 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:06:55 compute-0 nova_compute[254092]: 2025-11-25 17:06:55.238 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:06:55 compute-0 nova_compute[254092]: 2025-11-25 17:06:55.265 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:06:55 compute-0 nova_compute[254092]: 2025-11-25 17:06:55.278 254096 INFO nova.compute.manager [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Took 8.47 seconds to build instance.
Nov 25 17:06:55 compute-0 nova_compute[254092]: 2025-11-25 17:06:55.299 254096 DEBUG oslo_concurrency.lockutils [None req-591dbbec-439b-4764-9d60-71d69fc0ffa4 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "c64717b5-8862-4f84-989e-9f21bdc37759" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.543s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:06:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:06:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1945348001' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:06:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:06:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1945348001' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:06:55 compute-0 nova_compute[254092]: 2025-11-25 17:06:55.420 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:55 compute-0 elastic_edison[392016]: {
Nov 25 17:06:55 compute-0 elastic_edison[392016]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:06:55 compute-0 elastic_edison[392016]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:06:55 compute-0 elastic_edison[392016]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:06:55 compute-0 elastic_edison[392016]:         "osd_id": 1,
Nov 25 17:06:55 compute-0 elastic_edison[392016]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:06:55 compute-0 elastic_edison[392016]:         "type": "bluestore"
Nov 25 17:06:55 compute-0 elastic_edison[392016]:     },
Nov 25 17:06:55 compute-0 elastic_edison[392016]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:06:55 compute-0 elastic_edison[392016]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:06:55 compute-0 elastic_edison[392016]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:06:55 compute-0 elastic_edison[392016]:         "osd_id": 2,
Nov 25 17:06:55 compute-0 elastic_edison[392016]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:06:55 compute-0 elastic_edison[392016]:         "type": "bluestore"
Nov 25 17:06:55 compute-0 elastic_edison[392016]:     },
Nov 25 17:06:55 compute-0 elastic_edison[392016]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:06:55 compute-0 elastic_edison[392016]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:06:55 compute-0 elastic_edison[392016]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:06:55 compute-0 elastic_edison[392016]:         "osd_id": 0,
Nov 25 17:06:55 compute-0 elastic_edison[392016]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:06:55 compute-0 elastic_edison[392016]:         "type": "bluestore"
Nov 25 17:06:55 compute-0 elastic_edison[392016]:     }
Nov 25 17:06:55 compute-0 elastic_edison[392016]: }
Nov 25 17:06:55 compute-0 systemd[1]: libpod-a49d419cd9b3ef31f6334db208b4ed79a6a2e787ec84fa256929608d8bc00004.scope: Deactivated successfully.
Nov 25 17:06:55 compute-0 systemd[1]: libpod-a49d419cd9b3ef31f6334db208b4ed79a6a2e787ec84fa256929608d8bc00004.scope: Consumed 1.003s CPU time.
Nov 25 17:06:55 compute-0 conmon[392016]: conmon a49d419cd9b3ef31f633 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a49d419cd9b3ef31f6334db208b4ed79a6a2e787ec84fa256929608d8bc00004.scope/container/memory.events
Nov 25 17:06:55 compute-0 podman[391992]: 2025-11-25 17:06:55.782024337 +0000 UTC m=+1.200688323 container died a49d419cd9b3ef31f6334db208b4ed79a6a2e787ec84fa256929608d8bc00004 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 17:06:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-d404b3347d994af65cdf86f54c4cbf74f4739d20cbaf4e6b4e2433217de74e9b-merged.mount: Deactivated successfully.
Nov 25 17:06:55 compute-0 podman[391992]: 2025-11-25 17:06:55.844720498 +0000 UTC m=+1.263384494 container remove a49d419cd9b3ef31f6334db208b4ed79a6a2e787ec84fa256929608d8bc00004 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_edison, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Nov 25 17:06:55 compute-0 systemd[1]: libpod-conmon-a49d419cd9b3ef31f6334db208b4ed79a6a2e787ec84fa256929608d8bc00004.scope: Deactivated successfully.
Nov 25 17:06:55 compute-0 sudo[391800]: pam_unix(sudo:session): session closed for user root
Nov 25 17:06:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:06:55 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:06:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:06:55 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:06:55 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 863fda66-fd77-4b93-9507-3e0a06e113c3 does not exist
Nov 25 17:06:55 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 00c2b88d-ea1c-44e5-883e-a7e6a964d255 does not exist
Nov 25 17:06:55 compute-0 nova_compute[254092]: 2025-11-25 17:06:55.944 254096 DEBUG oslo_concurrency.lockutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "bd1d0296-ae28-4eac-9f38-80e6ca17dbff" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:06:55 compute-0 nova_compute[254092]: 2025-11-25 17:06:55.945 254096 DEBUG oslo_concurrency.lockutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "bd1d0296-ae28-4eac-9f38-80e6ca17dbff" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:06:55 compute-0 ceph-mon[74985]: pgmap v2501: 321 pgs: 321 active+clean; 88 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:06:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/1945348001' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:06:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/1945348001' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:06:55 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:06:55 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:06:55 compute-0 sudo[392156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:06:55 compute-0 sudo[392156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:06:55 compute-0 nova_compute[254092]: 2025-11-25 17:06:55.963 254096 DEBUG nova.compute.manager [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 17:06:55 compute-0 sudo[392156]: pam_unix(sudo:session): session closed for user root
Nov 25 17:06:56 compute-0 sudo[392181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:06:56 compute-0 sudo[392181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:06:56 compute-0 sudo[392181]: pam_unix(sudo:session): session closed for user root
Nov 25 17:06:56 compute-0 nova_compute[254092]: 2025-11-25 17:06:56.033 254096 DEBUG oslo_concurrency.lockutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:06:56 compute-0 nova_compute[254092]: 2025-11-25 17:06:56.034 254096 DEBUG oslo_concurrency.lockutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:06:56 compute-0 nova_compute[254092]: 2025-11-25 17:06:56.041 254096 DEBUG nova.virt.hardware [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 17:06:56 compute-0 nova_compute[254092]: 2025-11-25 17:06:56.041 254096 INFO nova.compute.claims [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Claim successful on node compute-0.ctlplane.example.com
Nov 25 17:06:56 compute-0 nova_compute[254092]: 2025-11-25 17:06:56.163 254096 DEBUG oslo_concurrency.processutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:06:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2502: 321 pgs: 321 active+clean; 88 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 356 KiB/s rd, 1.8 MiB/s wr, 47 op/s
Nov 25 17:06:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:06:56 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/345240936' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:06:56 compute-0 nova_compute[254092]: 2025-11-25 17:06:56.633 254096 DEBUG oslo_concurrency.processutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:06:56 compute-0 nova_compute[254092]: 2025-11-25 17:06:56.640 254096 DEBUG nova.compute.provider_tree [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:06:56 compute-0 nova_compute[254092]: 2025-11-25 17:06:56.661 254096 DEBUG nova.scheduler.client.report [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:06:56 compute-0 nova_compute[254092]: 2025-11-25 17:06:56.681 254096 DEBUG oslo_concurrency.lockutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:06:56 compute-0 nova_compute[254092]: 2025-11-25 17:06:56.682 254096 DEBUG nova.compute.manager [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 17:06:56 compute-0 nova_compute[254092]: 2025-11-25 17:06:56.720 254096 DEBUG nova.compute.manager [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 17:06:56 compute-0 nova_compute[254092]: 2025-11-25 17:06:56.721 254096 DEBUG nova.network.neutron [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 17:06:56 compute-0 nova_compute[254092]: 2025-11-25 17:06:56.772 254096 INFO nova.virt.libvirt.driver [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 17:06:56 compute-0 nova_compute[254092]: 2025-11-25 17:06:56.788 254096 DEBUG nova.compute.manager [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 17:06:56 compute-0 nova_compute[254092]: 2025-11-25 17:06:56.851 254096 DEBUG nova.compute.manager [req-fbd5f0c5-c244-4175-9faa-e593deb35040 req-7e93e75b-dab2-48ee-b4fa-da8547657176 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Received event network-vif-plugged-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:06:56 compute-0 nova_compute[254092]: 2025-11-25 17:06:56.851 254096 DEBUG oslo_concurrency.lockutils [req-fbd5f0c5-c244-4175-9faa-e593deb35040 req-7e93e75b-dab2-48ee-b4fa-da8547657176 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "c64717b5-8862-4f84-989e-9f21bdc37759-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:06:56 compute-0 nova_compute[254092]: 2025-11-25 17:06:56.852 254096 DEBUG oslo_concurrency.lockutils [req-fbd5f0c5-c244-4175-9faa-e593deb35040 req-7e93e75b-dab2-48ee-b4fa-da8547657176 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "c64717b5-8862-4f84-989e-9f21bdc37759-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:06:56 compute-0 nova_compute[254092]: 2025-11-25 17:06:56.852 254096 DEBUG oslo_concurrency.lockutils [req-fbd5f0c5-c244-4175-9faa-e593deb35040 req-7e93e75b-dab2-48ee-b4fa-da8547657176 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "c64717b5-8862-4f84-989e-9f21bdc37759-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:06:56 compute-0 nova_compute[254092]: 2025-11-25 17:06:56.853 254096 DEBUG nova.compute.manager [req-fbd5f0c5-c244-4175-9faa-e593deb35040 req-7e93e75b-dab2-48ee-b4fa-da8547657176 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] No waiting events found dispatching network-vif-plugged-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:06:56 compute-0 nova_compute[254092]: 2025-11-25 17:06:56.853 254096 WARNING nova.compute.manager [req-fbd5f0c5-c244-4175-9faa-e593deb35040 req-7e93e75b-dab2-48ee-b4fa-da8547657176 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Received unexpected event network-vif-plugged-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 for instance with vm_state active and task_state None.
Nov 25 17:06:56 compute-0 nova_compute[254092]: 2025-11-25 17:06:56.861 254096 DEBUG nova.compute.manager [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 17:06:56 compute-0 nova_compute[254092]: 2025-11-25 17:06:56.862 254096 DEBUG nova.virt.libvirt.driver [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 17:06:56 compute-0 nova_compute[254092]: 2025-11-25 17:06:56.863 254096 INFO nova.virt.libvirt.driver [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Creating image(s)
Nov 25 17:06:56 compute-0 nova_compute[254092]: 2025-11-25 17:06:56.891 254096 DEBUG nova.storage.rbd_utils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image bd1d0296-ae28-4eac-9f38-80e6ca17dbff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:06:56 compute-0 nova_compute[254092]: 2025-11-25 17:06:56.919 254096 DEBUG nova.storage.rbd_utils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image bd1d0296-ae28-4eac-9f38-80e6ca17dbff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:06:56 compute-0 nova_compute[254092]: 2025-11-25 17:06:56.948 254096 DEBUG nova.storage.rbd_utils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image bd1d0296-ae28-4eac-9f38-80e6ca17dbff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:06:56 compute-0 nova_compute[254092]: 2025-11-25 17:06:56.955 254096 DEBUG oslo_concurrency.processutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:06:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/345240936' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:06:57 compute-0 nova_compute[254092]: 2025-11-25 17:06:57.050 254096 DEBUG oslo_concurrency.processutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:06:57 compute-0 nova_compute[254092]: 2025-11-25 17:06:57.051 254096 DEBUG oslo_concurrency.lockutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:06:57 compute-0 nova_compute[254092]: 2025-11-25 17:06:57.052 254096 DEBUG oslo_concurrency.lockutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:06:57 compute-0 nova_compute[254092]: 2025-11-25 17:06:57.053 254096 DEBUG oslo_concurrency.lockutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:06:57 compute-0 nova_compute[254092]: 2025-11-25 17:06:57.077 254096 DEBUG nova.storage.rbd_utils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image bd1d0296-ae28-4eac-9f38-80e6ca17dbff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:06:57 compute-0 nova_compute[254092]: 2025-11-25 17:06:57.081 254096 DEBUG oslo_concurrency.processutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 bd1d0296-ae28-4eac-9f38-80e6ca17dbff_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:06:57 compute-0 nova_compute[254092]: 2025-11-25 17:06:57.230 254096 DEBUG nova.policy [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '72f29c5d97464520bee19cb128258fd5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f19d2d0776e84c3c99c640c0b7ed3743', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 17:06:57 compute-0 nova_compute[254092]: 2025-11-25 17:06:57.386 254096 DEBUG oslo_concurrency.processutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 bd1d0296-ae28-4eac-9f38-80e6ca17dbff_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.304s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:06:57 compute-0 nova_compute[254092]: 2025-11-25 17:06:57.466 254096 DEBUG nova.storage.rbd_utils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] resizing rbd image bd1d0296-ae28-4eac-9f38-80e6ca17dbff_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 17:06:57 compute-0 nova_compute[254092]: 2025-11-25 17:06:57.552 254096 DEBUG nova.objects.instance [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lazy-loading 'migration_context' on Instance uuid bd1d0296-ae28-4eac-9f38-80e6ca17dbff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:06:57 compute-0 nova_compute[254092]: 2025-11-25 17:06:57.565 254096 DEBUG nova.virt.libvirt.driver [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 17:06:57 compute-0 nova_compute[254092]: 2025-11-25 17:06:57.566 254096 DEBUG nova.virt.libvirt.driver [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Ensure instance console log exists: /var/lib/nova/instances/bd1d0296-ae28-4eac-9f38-80e6ca17dbff/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 17:06:57 compute-0 nova_compute[254092]: 2025-11-25 17:06:57.566 254096 DEBUG oslo_concurrency.lockutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:06:57 compute-0 nova_compute[254092]: 2025-11-25 17:06:57.567 254096 DEBUG oslo_concurrency.lockutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:06:57 compute-0 nova_compute[254092]: 2025-11-25 17:06:57.567 254096 DEBUG oslo_concurrency.lockutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:06:57 compute-0 nova_compute[254092]: 2025-11-25 17:06:57.631 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:57 compute-0 ceph-mon[74985]: pgmap v2502: 321 pgs: 321 active+clean; 88 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 356 KiB/s rd, 1.8 MiB/s wr, 47 op/s
Nov 25 17:06:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2503: 321 pgs: 321 active+clean; 88 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 356 KiB/s rd, 1.8 MiB/s wr, 47 op/s
Nov 25 17:06:58 compute-0 nova_compute[254092]: 2025-11-25 17:06:58.770 254096 DEBUG nova.network.neutron [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Successfully created port: 54e9d2ac-a4ca-41fe-9c2e-76eba828c99c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 17:06:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:06:59 compute-0 nova_compute[254092]: 2025-11-25 17:06:59.870 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:59 compute-0 NetworkManager[48891]: <info>  [1764090419.8721] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/536)
Nov 25 17:06:59 compute-0 NetworkManager[48891]: <info>  [1764090419.8744] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/537)
Nov 25 17:06:59 compute-0 nova_compute[254092]: 2025-11-25 17:06:59.948 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:59 compute-0 ovn_controller[153477]: 2025-11-25T17:06:59Z|01297|binding|INFO|Releasing lport abb62292-a5ed-40d9-8b98-1dcedecc4b03 from this chassis (sb_readonly=0)
Nov 25 17:06:59 compute-0 nova_compute[254092]: 2025-11-25 17:06:59.957 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:06:59 compute-0 ceph-mon[74985]: pgmap v2503: 321 pgs: 321 active+clean; 88 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 356 KiB/s rd, 1.8 MiB/s wr, 47 op/s
Nov 25 17:07:00 compute-0 nova_compute[254092]: 2025-11-25 17:07:00.375 254096 DEBUG nova.network.neutron [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Successfully updated port: 54e9d2ac-a4ca-41fe-9c2e-76eba828c99c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:07:00 compute-0 nova_compute[254092]: 2025-11-25 17:07:00.403 254096 DEBUG oslo_concurrency.lockutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "refresh_cache-bd1d0296-ae28-4eac-9f38-80e6ca17dbff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:07:00 compute-0 nova_compute[254092]: 2025-11-25 17:07:00.404 254096 DEBUG oslo_concurrency.lockutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquired lock "refresh_cache-bd1d0296-ae28-4eac-9f38-80e6ca17dbff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:07:00 compute-0 nova_compute[254092]: 2025-11-25 17:07:00.405 254096 DEBUG nova.network.neutron [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 17:07:00 compute-0 nova_compute[254092]: 2025-11-25 17:07:00.425 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2504: 321 pgs: 321 active+clean; 116 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 624 KiB/s rd, 2.8 MiB/s wr, 72 op/s
Nov 25 17:07:00 compute-0 nova_compute[254092]: 2025-11-25 17:07:00.463 254096 DEBUG nova.compute.manager [req-8382deba-a21a-4134-ae2b-a169df5ae17e req-6722a20e-c721-443b-bd90-4f0a14ee576a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Received event network-changed-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:07:00 compute-0 nova_compute[254092]: 2025-11-25 17:07:00.464 254096 DEBUG nova.compute.manager [req-8382deba-a21a-4134-ae2b-a169df5ae17e req-6722a20e-c721-443b-bd90-4f0a14ee576a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Refreshing instance network info cache due to event network-changed-c7eaeb08-d94a-4ecb-a87f-459a8d848a74. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:07:00 compute-0 nova_compute[254092]: 2025-11-25 17:07:00.465 254096 DEBUG oslo_concurrency.lockutils [req-8382deba-a21a-4134-ae2b-a169df5ae17e req-6722a20e-c721-443b-bd90-4f0a14ee576a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-c64717b5-8862-4f84-989e-9f21bdc37759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:07:00 compute-0 nova_compute[254092]: 2025-11-25 17:07:00.465 254096 DEBUG oslo_concurrency.lockutils [req-8382deba-a21a-4134-ae2b-a169df5ae17e req-6722a20e-c721-443b-bd90-4f0a14ee576a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-c64717b5-8862-4f84-989e-9f21bdc37759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:07:00 compute-0 nova_compute[254092]: 2025-11-25 17:07:00.466 254096 DEBUG nova.network.neutron [req-8382deba-a21a-4134-ae2b-a169df5ae17e req-6722a20e-c721-443b-bd90-4f0a14ee576a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Refreshing network info cache for port c7eaeb08-d94a-4ecb-a87f-459a8d848a74 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:07:00 compute-0 nova_compute[254092]: 2025-11-25 17:07:00.641 254096 DEBUG nova.network.neutron [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:07:00 compute-0 nova_compute[254092]: 2025-11-25 17:07:00.722 254096 DEBUG oslo_concurrency.lockutils [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "c64717b5-8862-4f84-989e-9f21bdc37759" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:07:00 compute-0 nova_compute[254092]: 2025-11-25 17:07:00.722 254096 DEBUG oslo_concurrency.lockutils [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "c64717b5-8862-4f84-989e-9f21bdc37759" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:07:00 compute-0 nova_compute[254092]: 2025-11-25 17:07:00.723 254096 DEBUG oslo_concurrency.lockutils [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "c64717b5-8862-4f84-989e-9f21bdc37759-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:07:00 compute-0 nova_compute[254092]: 2025-11-25 17:07:00.724 254096 DEBUG oslo_concurrency.lockutils [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "c64717b5-8862-4f84-989e-9f21bdc37759-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:07:00 compute-0 nova_compute[254092]: 2025-11-25 17:07:00.724 254096 DEBUG oslo_concurrency.lockutils [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "c64717b5-8862-4f84-989e-9f21bdc37759-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:07:00 compute-0 nova_compute[254092]: 2025-11-25 17:07:00.726 254096 INFO nova.compute.manager [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Terminating instance
Nov 25 17:07:00 compute-0 nova_compute[254092]: 2025-11-25 17:07:00.728 254096 DEBUG nova.compute.manager [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 17:07:00 compute-0 kernel: tapc7eaeb08-d9 (unregistering): left promiscuous mode
Nov 25 17:07:00 compute-0 NetworkManager[48891]: <info>  [1764090420.7741] device (tapc7eaeb08-d9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:07:00 compute-0 ovn_controller[153477]: 2025-11-25T17:07:00Z|01298|binding|INFO|Releasing lport c7eaeb08-d94a-4ecb-a87f-459a8d848a74 from this chassis (sb_readonly=0)
Nov 25 17:07:00 compute-0 nova_compute[254092]: 2025-11-25 17:07:00.787 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:00 compute-0 ovn_controller[153477]: 2025-11-25T17:07:00Z|01299|binding|INFO|Setting lport c7eaeb08-d94a-4ecb-a87f-459a8d848a74 down in Southbound
Nov 25 17:07:00 compute-0 ovn_controller[153477]: 2025-11-25T17:07:00Z|01300|binding|INFO|Removing iface tapc7eaeb08-d9 ovn-installed in OVS
Nov 25 17:07:00 compute-0 nova_compute[254092]: 2025-11-25 17:07:00.789 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:00.793 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:b5:45 10.100.0.7'], port_security=['fa:16:3e:51:b5:45 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1132643307', 'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'c64717b5-8862-4f84-989e-9f21bdc37759', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fe8ef6db-e551-4904-a3ea-4af9320e49b5', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1132643307', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '4', 'neutron:security_group_ids': '63a1678d-cfcd-4cc9-9221-d7d228d35cc1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.248'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8eb7fa40-72a8-424a-b9c3-ddea4719d3ea, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=c7eaeb08-d94a-4ecb-a87f-459a8d848a74) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:07:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:00.794 163338 INFO neutron.agent.ovn.metadata.agent [-] Port c7eaeb08-d94a-4ecb-a87f-459a8d848a74 in datapath fe8ef6db-e551-4904-a3ea-4af9320e49b5 unbound from our chassis
Nov 25 17:07:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:00.795 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fe8ef6db-e551-4904-a3ea-4af9320e49b5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 17:07:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:00.796 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8c47ab58-a5cf-46b1-a2d0-14cae4950699]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:00.797 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5 namespace which is not needed anymore
Nov 25 17:07:00 compute-0 nova_compute[254092]: 2025-11-25 17:07:00.823 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:00 compute-0 systemd[1]: machine-qemu\x2d157\x2dinstance\x2d0000007d.scope: Deactivated successfully.
Nov 25 17:07:00 compute-0 systemd[1]: machine-qemu\x2d157\x2dinstance\x2d0000007d.scope: Consumed 6.562s CPU time.
Nov 25 17:07:00 compute-0 systemd-machined[216343]: Machine qemu-157-instance-0000007d terminated.
Nov 25 17:07:00 compute-0 neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5[392103]: [NOTICE]   (392107) : haproxy version is 2.8.14-c23fe91
Nov 25 17:07:00 compute-0 neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5[392103]: [NOTICE]   (392107) : path to executable is /usr/sbin/haproxy
Nov 25 17:07:00 compute-0 neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5[392103]: [WARNING]  (392107) : Exiting Master process...
Nov 25 17:07:00 compute-0 neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5[392103]: [ALERT]    (392107) : Current worker (392109) exited with code 143 (Terminated)
Nov 25 17:07:00 compute-0 neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5[392103]: [WARNING]  (392107) : All workers exited. Exiting... (0)
Nov 25 17:07:00 compute-0 systemd[1]: libpod-a1ba0795f5e72b4bb1e249265ea2f671284e1bf7f90f1285d2cd0339baacd4f2.scope: Deactivated successfully.
Nov 25 17:07:00 compute-0 nova_compute[254092]: 2025-11-25 17:07:00.976 254096 INFO nova.virt.libvirt.driver [-] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Instance destroyed successfully.
Nov 25 17:07:00 compute-0 nova_compute[254092]: 2025-11-25 17:07:00.977 254096 DEBUG nova.objects.instance [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'resources' on Instance uuid c64717b5-8862-4f84-989e-9f21bdc37759 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:07:00 compute-0 podman[392420]: 2025-11-25 17:07:00.981622892 +0000 UTC m=+0.063945195 container died a1ba0795f5e72b4bb1e249265ea2f671284e1bf7f90f1285d2cd0339baacd4f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 17:07:00 compute-0 nova_compute[254092]: 2025-11-25 17:07:00.994 254096 DEBUG nova.virt.libvirt.vif [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:06:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-191068635',display_name='tempest-TestNetworkBasicOps-server-191068635',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-191068635',id=125,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJpHrgKSNh4nxd1EAQPL6cFToRs8NuFCJRe5V6wi+HlNMXfO3CA8jeqzTpxAut873d8a0itMoBeEb+RuoOJ/ichSywJk9w6n7vQ8jIINL58mZvzURBI/PRqCBb7SIebsPQ==',key_name='tempest-TestNetworkBasicOps-186218804',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:06:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-xthfdnfp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:06:55Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=c64717b5-8862-4f84-989e-9f21bdc37759,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "address": "fa:16:3e:51:b5:45", "network": {"id": "fe8ef6db-e551-4904-a3ea-4af9320e49b5", "bridge": "br-int", "label": "tempest-network-smoke--1027346304", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7eaeb08-d9", "ovs_interfaceid": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:07:00 compute-0 nova_compute[254092]: 2025-11-25 17:07:00.995 254096 DEBUG nova.network.os_vif_util [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "address": "fa:16:3e:51:b5:45", "network": {"id": "fe8ef6db-e551-4904-a3ea-4af9320e49b5", "bridge": "br-int", "label": "tempest-network-smoke--1027346304", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7eaeb08-d9", "ovs_interfaceid": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:07:00 compute-0 nova_compute[254092]: 2025-11-25 17:07:00.996 254096 DEBUG nova.network.os_vif_util [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:51:b5:45,bridge_name='br-int',has_traffic_filtering=True,id=c7eaeb08-d94a-4ecb-a87f-459a8d848a74,network=Network(fe8ef6db-e551-4904-a3ea-4af9320e49b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc7eaeb08-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:07:00 compute-0 nova_compute[254092]: 2025-11-25 17:07:00.997 254096 DEBUG os_vif [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:b5:45,bridge_name='br-int',has_traffic_filtering=True,id=c7eaeb08-d94a-4ecb-a87f-459a8d848a74,network=Network(fe8ef6db-e551-4904-a3ea-4af9320e49b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc7eaeb08-d9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:07:01 compute-0 nova_compute[254092]: 2025-11-25 17:07:01.000 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:01 compute-0 nova_compute[254092]: 2025-11-25 17:07:01.001 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc7eaeb08-d9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:07:01 compute-0 nova_compute[254092]: 2025-11-25 17:07:01.003 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:01 compute-0 nova_compute[254092]: 2025-11-25 17:07:01.007 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:07:01 compute-0 nova_compute[254092]: 2025-11-25 17:07:01.010 254096 INFO os_vif [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:b5:45,bridge_name='br-int',has_traffic_filtering=True,id=c7eaeb08-d94a-4ecb-a87f-459a8d848a74,network=Network(fe8ef6db-e551-4904-a3ea-4af9320e49b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc7eaeb08-d9')
Nov 25 17:07:01 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a1ba0795f5e72b4bb1e249265ea2f671284e1bf7f90f1285d2cd0339baacd4f2-userdata-shm.mount: Deactivated successfully.
Nov 25 17:07:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-ffdb7dcc8a3df1de0e20ad88d7e2ca7a9b516d5bc7839b4b7f23d1b1edc5d54e-merged.mount: Deactivated successfully.
Nov 25 17:07:01 compute-0 podman[392420]: 2025-11-25 17:07:01.031274364 +0000 UTC m=+0.113596637 container cleanup a1ba0795f5e72b4bb1e249265ea2f671284e1bf7f90f1285d2cd0339baacd4f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 17:07:01 compute-0 systemd[1]: libpod-conmon-a1ba0795f5e72b4bb1e249265ea2f671284e1bf7f90f1285d2cd0339baacd4f2.scope: Deactivated successfully.
Nov 25 17:07:01 compute-0 podman[392474]: 2025-11-25 17:07:01.087151407 +0000 UTC m=+0.035712511 container remove a1ba0795f5e72b4bb1e249265ea2f671284e1bf7f90f1285d2cd0339baacd4f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 17:07:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:01.092 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6a0dafd8-549f-47c9-8627-79a66518b534]: (4, ('Tue Nov 25 05:07:00 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5 (a1ba0795f5e72b4bb1e249265ea2f671284e1bf7f90f1285d2cd0339baacd4f2)\na1ba0795f5e72b4bb1e249265ea2f671284e1bf7f90f1285d2cd0339baacd4f2\nTue Nov 25 05:07:01 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5 (a1ba0795f5e72b4bb1e249265ea2f671284e1bf7f90f1285d2cd0339baacd4f2)\na1ba0795f5e72b4bb1e249265ea2f671284e1bf7f90f1285d2cd0339baacd4f2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:01.093 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b05f7947-564d-4401-afa5-93f4d6148793]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:01.094 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfe8ef6db-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:07:01 compute-0 nova_compute[254092]: 2025-11-25 17:07:01.111 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:01 compute-0 kernel: tapfe8ef6db-e0: left promiscuous mode
Nov 25 17:07:01 compute-0 nova_compute[254092]: 2025-11-25 17:07:01.130 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:01.134 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f9cfe9a6-d742-4634-b1c5-39999df69c48]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:01.148 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bcf0808e-febe-4e78-9a06-d2b1916cbcaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:01.149 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[15453177-dafa-44cf-ae70-021198cbf609]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:01.163 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4cee5f85-2872-4846-98f4-e9f4a5f33edb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 687201, 'reachable_time': 19992, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 392492, 'error': None, 'target': 'ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:01.166 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 17:07:01 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:01.166 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[91e11f9b-f90a-4c99-b536-de7d9520927b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:01 compute-0 systemd[1]: run-netns-ovnmeta\x2dfe8ef6db\x2de551\x2d4904\x2da3ea\x2d4af9320e49b5.mount: Deactivated successfully.
Nov 25 17:07:01 compute-0 nova_compute[254092]: 2025-11-25 17:07:01.403 254096 INFO nova.virt.libvirt.driver [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Deleting instance files /var/lib/nova/instances/c64717b5-8862-4f84-989e-9f21bdc37759_del
Nov 25 17:07:01 compute-0 nova_compute[254092]: 2025-11-25 17:07:01.404 254096 INFO nova.virt.libvirt.driver [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Deletion of /var/lib/nova/instances/c64717b5-8862-4f84-989e-9f21bdc37759_del complete
Nov 25 17:07:01 compute-0 nova_compute[254092]: 2025-11-25 17:07:01.476 254096 INFO nova.compute.manager [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Took 0.75 seconds to destroy the instance on the hypervisor.
Nov 25 17:07:01 compute-0 nova_compute[254092]: 2025-11-25 17:07:01.477 254096 DEBUG oslo.service.loopingcall [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 17:07:01 compute-0 nova_compute[254092]: 2025-11-25 17:07:01.478 254096 DEBUG nova.compute.manager [-] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 17:07:01 compute-0 nova_compute[254092]: 2025-11-25 17:07:01.478 254096 DEBUG nova.network.neutron [-] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 17:07:01 compute-0 ceph-mon[74985]: pgmap v2504: 321 pgs: 321 active+clean; 116 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 624 KiB/s rd, 2.8 MiB/s wr, 72 op/s
Nov 25 17:07:02 compute-0 nova_compute[254092]: 2025-11-25 17:07:02.088 254096 DEBUG nova.network.neutron [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Updating instance_info_cache with network_info: [{"id": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "address": "fa:16:3e:09:40:a3", "network": {"id": "ef2caff8-43ec-4364-a979-521405023410", "bridge": "br-int", "label": "tempest-network-smoke--168780319", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e9d2ac-a4", "ovs_interfaceid": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:07:02 compute-0 nova_compute[254092]: 2025-11-25 17:07:02.115 254096 DEBUG oslo_concurrency.lockutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Releasing lock "refresh_cache-bd1d0296-ae28-4eac-9f38-80e6ca17dbff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:07:02 compute-0 nova_compute[254092]: 2025-11-25 17:07:02.115 254096 DEBUG nova.compute.manager [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Instance network_info: |[{"id": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "address": "fa:16:3e:09:40:a3", "network": {"id": "ef2caff8-43ec-4364-a979-521405023410", "bridge": "br-int", "label": "tempest-network-smoke--168780319", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e9d2ac-a4", "ovs_interfaceid": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 17:07:02 compute-0 nova_compute[254092]: 2025-11-25 17:07:02.120 254096 DEBUG nova.virt.libvirt.driver [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Start _get_guest_xml network_info=[{"id": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "address": "fa:16:3e:09:40:a3", "network": {"id": "ef2caff8-43ec-4364-a979-521405023410", "bridge": "br-int", "label": "tempest-network-smoke--168780319", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e9d2ac-a4", "ovs_interfaceid": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 17:07:02 compute-0 nova_compute[254092]: 2025-11-25 17:07:02.126 254096 WARNING nova.virt.libvirt.driver [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:07:02 compute-0 nova_compute[254092]: 2025-11-25 17:07:02.134 254096 DEBUG nova.virt.libvirt.host [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 17:07:02 compute-0 nova_compute[254092]: 2025-11-25 17:07:02.135 254096 DEBUG nova.virt.libvirt.host [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 17:07:02 compute-0 nova_compute[254092]: 2025-11-25 17:07:02.140 254096 DEBUG nova.virt.libvirt.host [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 17:07:02 compute-0 nova_compute[254092]: 2025-11-25 17:07:02.141 254096 DEBUG nova.virt.libvirt.host [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 17:07:02 compute-0 nova_compute[254092]: 2025-11-25 17:07:02.141 254096 DEBUG nova.virt.libvirt.driver [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 17:07:02 compute-0 nova_compute[254092]: 2025-11-25 17:07:02.141 254096 DEBUG nova.virt.hardware [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 17:07:02 compute-0 nova_compute[254092]: 2025-11-25 17:07:02.142 254096 DEBUG nova.virt.hardware [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 17:07:02 compute-0 nova_compute[254092]: 2025-11-25 17:07:02.142 254096 DEBUG nova.virt.hardware [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 17:07:02 compute-0 nova_compute[254092]: 2025-11-25 17:07:02.143 254096 DEBUG nova.virt.hardware [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 17:07:02 compute-0 nova_compute[254092]: 2025-11-25 17:07:02.143 254096 DEBUG nova.virt.hardware [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 17:07:02 compute-0 nova_compute[254092]: 2025-11-25 17:07:02.143 254096 DEBUG nova.virt.hardware [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 17:07:02 compute-0 nova_compute[254092]: 2025-11-25 17:07:02.143 254096 DEBUG nova.virt.hardware [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 17:07:02 compute-0 nova_compute[254092]: 2025-11-25 17:07:02.144 254096 DEBUG nova.virt.hardware [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 17:07:02 compute-0 nova_compute[254092]: 2025-11-25 17:07:02.144 254096 DEBUG nova.virt.hardware [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 17:07:02 compute-0 nova_compute[254092]: 2025-11-25 17:07:02.144 254096 DEBUG nova.virt.hardware [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 17:07:02 compute-0 nova_compute[254092]: 2025-11-25 17:07:02.144 254096 DEBUG nova.virt.hardware [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 17:07:02 compute-0 nova_compute[254092]: 2025-11-25 17:07:02.148 254096 DEBUG oslo_concurrency.processutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:07:02 compute-0 nova_compute[254092]: 2025-11-25 17:07:02.238 254096 DEBUG nova.network.neutron [req-8382deba-a21a-4134-ae2b-a169df5ae17e req-6722a20e-c721-443b-bd90-4f0a14ee576a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Updated VIF entry in instance network info cache for port c7eaeb08-d94a-4ecb-a87f-459a8d848a74. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:07:02 compute-0 nova_compute[254092]: 2025-11-25 17:07:02.240 254096 DEBUG nova.network.neutron [req-8382deba-a21a-4134-ae2b-a169df5ae17e req-6722a20e-c721-443b-bd90-4f0a14ee576a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Updating instance_info_cache with network_info: [{"id": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "address": "fa:16:3e:51:b5:45", "network": {"id": "fe8ef6db-e551-4904-a3ea-4af9320e49b5", "bridge": "br-int", "label": "tempest-network-smoke--1027346304", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7eaeb08-d9", "ovs_interfaceid": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:07:02 compute-0 nova_compute[254092]: 2025-11-25 17:07:02.338 254096 DEBUG oslo_concurrency.lockutils [req-8382deba-a21a-4134-ae2b-a169df5ae17e req-6722a20e-c721-443b-bd90-4f0a14ee576a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-c64717b5-8862-4f84-989e-9f21bdc37759" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:07:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2505: 321 pgs: 321 active+clean; 134 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.4 MiB/s wr, 126 op/s
Nov 25 17:07:02 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:07:02 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1794554221' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:07:02 compute-0 nova_compute[254092]: 2025-11-25 17:07:02.621 254096 DEBUG oslo_concurrency.processutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:07:02 compute-0 nova_compute[254092]: 2025-11-25 17:07:02.646 254096 DEBUG nova.storage.rbd_utils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image bd1d0296-ae28-4eac-9f38-80e6ca17dbff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:07:02 compute-0 nova_compute[254092]: 2025-11-25 17:07:02.650 254096 DEBUG oslo_concurrency.processutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:07:02 compute-0 nova_compute[254092]: 2025-11-25 17:07:02.972 254096 DEBUG nova.compute.manager [req-f14cfd37-29ef-40ab-87c4-03c946f8b09e req-70995d04-313c-4c44-a164-aaa244e938ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Received event network-changed-54e9d2ac-a4ca-41fe-9c2e-76eba828c99c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:07:02 compute-0 nova_compute[254092]: 2025-11-25 17:07:02.973 254096 DEBUG nova.compute.manager [req-f14cfd37-29ef-40ab-87c4-03c946f8b09e req-70995d04-313c-4c44-a164-aaa244e938ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Refreshing instance network info cache due to event network-changed-54e9d2ac-a4ca-41fe-9c2e-76eba828c99c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:07:02 compute-0 nova_compute[254092]: 2025-11-25 17:07:02.973 254096 DEBUG oslo_concurrency.lockutils [req-f14cfd37-29ef-40ab-87c4-03c946f8b09e req-70995d04-313c-4c44-a164-aaa244e938ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-bd1d0296-ae28-4eac-9f38-80e6ca17dbff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:07:02 compute-0 nova_compute[254092]: 2025-11-25 17:07:02.973 254096 DEBUG oslo_concurrency.lockutils [req-f14cfd37-29ef-40ab-87c4-03c946f8b09e req-70995d04-313c-4c44-a164-aaa244e938ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-bd1d0296-ae28-4eac-9f38-80e6ca17dbff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:07:02 compute-0 nova_compute[254092]: 2025-11-25 17:07:02.974 254096 DEBUG nova.network.neutron [req-f14cfd37-29ef-40ab-87c4-03c946f8b09e req-70995d04-313c-4c44-a164-aaa244e938ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Refreshing network info cache for port 54e9d2ac-a4ca-41fe-9c2e-76eba828c99c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:07:02 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1794554221' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:07:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:07:03 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1470768128' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:07:03 compute-0 nova_compute[254092]: 2025-11-25 17:07:03.141 254096 DEBUG oslo_concurrency.processutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:07:03 compute-0 nova_compute[254092]: 2025-11-25 17:07:03.144 254096 DEBUG nova.virt.libvirt.vif [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:06:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1036615353',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1036615353',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-804365052-acc',id=126,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJewMHMcnizEkGIStteBKbtUtrMJBt/EToyGgUtC6JrzbfMY3fgmQCA8nS1qzlnKU2lH+JSVFP+p/6sjgfKpYyiluFasU1UBquzWDp3IJiLFG1sq+FYLyyElJSOAHc9Qrw==',key_name='tempest-TestSecurityGroupsBasicOps-1818642837',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f19d2d0776e84c3c99c640c0b7ed3743',ramdisk_id='',reservation_id='r-uvh0a05a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-804365052',owner_user_name='tempest-TestSecurityGroupsBasicOps-804365052-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:06:56Z,user_data=None,user_id='72f29c5d97464520bee19cb128258fd5',uuid=bd1d0296-ae28-4eac-9f38-80e6ca17dbff,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "address": "fa:16:3e:09:40:a3", "network": {"id": "ef2caff8-43ec-4364-a979-521405023410", "bridge": "br-int", "label": "tempest-network-smoke--168780319", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e9d2ac-a4", "ovs_interfaceid": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:07:03 compute-0 nova_compute[254092]: 2025-11-25 17:07:03.144 254096 DEBUG nova.network.os_vif_util [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converting VIF {"id": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "address": "fa:16:3e:09:40:a3", "network": {"id": "ef2caff8-43ec-4364-a979-521405023410", "bridge": "br-int", "label": "tempest-network-smoke--168780319", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e9d2ac-a4", "ovs_interfaceid": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:07:03 compute-0 nova_compute[254092]: 2025-11-25 17:07:03.146 254096 DEBUG nova.network.os_vif_util [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:40:a3,bridge_name='br-int',has_traffic_filtering=True,id=54e9d2ac-a4ca-41fe-9c2e-76eba828c99c,network=Network(ef2caff8-43ec-4364-a979-521405023410),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54e9d2ac-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:07:03 compute-0 nova_compute[254092]: 2025-11-25 17:07:03.148 254096 DEBUG nova.objects.instance [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lazy-loading 'pci_devices' on Instance uuid bd1d0296-ae28-4eac-9f38-80e6ca17dbff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:07:03 compute-0 nova_compute[254092]: 2025-11-25 17:07:03.164 254096 DEBUG nova.virt.libvirt.driver [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] End _get_guest_xml xml=<domain type="kvm">
Nov 25 17:07:03 compute-0 nova_compute[254092]:   <uuid>bd1d0296-ae28-4eac-9f38-80e6ca17dbff</uuid>
Nov 25 17:07:03 compute-0 nova_compute[254092]:   <name>instance-0000007e</name>
Nov 25 17:07:03 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 17:07:03 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 17:07:03 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:07:03 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1036615353</nova:name>
Nov 25 17:07:03 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 17:07:02</nova:creationTime>
Nov 25 17:07:03 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 17:07:03 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 17:07:03 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 17:07:03 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 17:07:03 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:07:03 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 17:07:03 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 17:07:03 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 17:07:03 compute-0 nova_compute[254092]:         <nova:user uuid="72f29c5d97464520bee19cb128258fd5">tempest-TestSecurityGroupsBasicOps-804365052-project-member</nova:user>
Nov 25 17:07:03 compute-0 nova_compute[254092]:         <nova:project uuid="f19d2d0776e84c3c99c640c0b7ed3743">tempest-TestSecurityGroupsBasicOps-804365052</nova:project>
Nov 25 17:07:03 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 17:07:03 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 17:07:03 compute-0 nova_compute[254092]:         <nova:port uuid="54e9d2ac-a4ca-41fe-9c2e-76eba828c99c">
Nov 25 17:07:03 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 17:07:03 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 17:07:03 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:07:03 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 17:07:03 compute-0 nova_compute[254092]:     <system>
Nov 25 17:07:03 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 17:07:03 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 17:07:03 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:07:03 compute-0 nova_compute[254092]:       <entry name="serial">bd1d0296-ae28-4eac-9f38-80e6ca17dbff</entry>
Nov 25 17:07:03 compute-0 nova_compute[254092]:       <entry name="uuid">bd1d0296-ae28-4eac-9f38-80e6ca17dbff</entry>
Nov 25 17:07:03 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     </system>
Nov 25 17:07:03 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:07:03 compute-0 nova_compute[254092]:   <os>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:   </os>
Nov 25 17:07:03 compute-0 nova_compute[254092]:   <features>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:   </features>
Nov 25 17:07:03 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 17:07:03 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:07:03 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 17:07:03 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:07:03 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 17:07:03 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/bd1d0296-ae28-4eac-9f38-80e6ca17dbff_disk">
Nov 25 17:07:03 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:       </source>
Nov 25 17:07:03 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:07:03 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:07:03 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 17:07:03 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/bd1d0296-ae28-4eac-9f38-80e6ca17dbff_disk.config">
Nov 25 17:07:03 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:       </source>
Nov 25 17:07:03 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:07:03 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:07:03 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 17:07:03 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:09:40:a3"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:       <target dev="tap54e9d2ac-a4"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 17:07:03 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/bd1d0296-ae28-4eac-9f38-80e6ca17dbff/console.log" append="off"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     <video>
Nov 25 17:07:03 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     </video>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 17:07:03 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 17:07:03 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 17:07:03 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:07:03 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:07:03 compute-0 nova_compute[254092]: </domain>
Nov 25 17:07:03 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 17:07:03 compute-0 nova_compute[254092]: 2025-11-25 17:07:03.166 254096 DEBUG nova.compute.manager [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Preparing to wait for external event network-vif-plugged-54e9d2ac-a4ca-41fe-9c2e-76eba828c99c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 17:07:03 compute-0 nova_compute[254092]: 2025-11-25 17:07:03.166 254096 DEBUG oslo_concurrency.lockutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "bd1d0296-ae28-4eac-9f38-80e6ca17dbff-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:07:03 compute-0 nova_compute[254092]: 2025-11-25 17:07:03.167 254096 DEBUG oslo_concurrency.lockutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "bd1d0296-ae28-4eac-9f38-80e6ca17dbff-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:07:03 compute-0 nova_compute[254092]: 2025-11-25 17:07:03.167 254096 DEBUG oslo_concurrency.lockutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "bd1d0296-ae28-4eac-9f38-80e6ca17dbff-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:07:03 compute-0 nova_compute[254092]: 2025-11-25 17:07:03.168 254096 DEBUG nova.virt.libvirt.vif [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:06:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1036615353',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1036615353',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-804365052-acc',id=126,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJewMHMcnizEkGIStteBKbtUtrMJBt/EToyGgUtC6JrzbfMY3fgmQCA8nS1qzlnKU2lH+JSVFP+p/6sjgfKpYyiluFasU1UBquzWDp3IJiLFG1sq+FYLyyElJSOAHc9Qrw==',key_name='tempest-TestSecurityGroupsBasicOps-1818642837',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f19d2d0776e84c3c99c640c0b7ed3743',ramdisk_id='',reservation_id='r-uvh0a05a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-804365052',owner_user_name='tempest-TestSecurityGroupsBasicOps-804365052-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:06:56Z,user_data=None,user_id='72f29c5d97464520bee19cb128258fd5',uuid=bd1d0296-ae28-4eac-9f38-80e6ca17dbff,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "address": "fa:16:3e:09:40:a3", "network": {"id": "ef2caff8-43ec-4364-a979-521405023410", "bridge": "br-int", "label": "tempest-network-smoke--168780319", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e9d2ac-a4", "ovs_interfaceid": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:07:03 compute-0 nova_compute[254092]: 2025-11-25 17:07:03.168 254096 DEBUG nova.network.os_vif_util [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converting VIF {"id": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "address": "fa:16:3e:09:40:a3", "network": {"id": "ef2caff8-43ec-4364-a979-521405023410", "bridge": "br-int", "label": "tempest-network-smoke--168780319", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e9d2ac-a4", "ovs_interfaceid": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:07:03 compute-0 nova_compute[254092]: 2025-11-25 17:07:03.169 254096 DEBUG nova.network.os_vif_util [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:40:a3,bridge_name='br-int',has_traffic_filtering=True,id=54e9d2ac-a4ca-41fe-9c2e-76eba828c99c,network=Network(ef2caff8-43ec-4364-a979-521405023410),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54e9d2ac-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:07:03 compute-0 nova_compute[254092]: 2025-11-25 17:07:03.169 254096 DEBUG os_vif [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:40:a3,bridge_name='br-int',has_traffic_filtering=True,id=54e9d2ac-a4ca-41fe-9c2e-76eba828c99c,network=Network(ef2caff8-43ec-4364-a979-521405023410),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54e9d2ac-a4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:07:03 compute-0 nova_compute[254092]: 2025-11-25 17:07:03.170 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:03 compute-0 nova_compute[254092]: 2025-11-25 17:07:03.170 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:07:03 compute-0 nova_compute[254092]: 2025-11-25 17:07:03.171 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:07:03 compute-0 nova_compute[254092]: 2025-11-25 17:07:03.174 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:03 compute-0 nova_compute[254092]: 2025-11-25 17:07:03.175 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap54e9d2ac-a4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:07:03 compute-0 nova_compute[254092]: 2025-11-25 17:07:03.175 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap54e9d2ac-a4, col_values=(('external_ids', {'iface-id': '54e9d2ac-a4ca-41fe-9c2e-76eba828c99c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:09:40:a3', 'vm-uuid': 'bd1d0296-ae28-4eac-9f38-80e6ca17dbff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:07:03 compute-0 NetworkManager[48891]: <info>  [1764090423.1786] manager: (tap54e9d2ac-a4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/538)
Nov 25 17:07:03 compute-0 nova_compute[254092]: 2025-11-25 17:07:03.187 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:03 compute-0 nova_compute[254092]: 2025-11-25 17:07:03.190 254096 INFO os_vif [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:40:a3,bridge_name='br-int',has_traffic_filtering=True,id=54e9d2ac-a4ca-41fe-9c2e-76eba828c99c,network=Network(ef2caff8-43ec-4364-a979-521405023410),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54e9d2ac-a4')
Nov 25 17:07:03 compute-0 nova_compute[254092]: 2025-11-25 17:07:03.207 254096 DEBUG nova.network.neutron [-] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:07:03 compute-0 nova_compute[254092]: 2025-11-25 17:07:03.229 254096 INFO nova.compute.manager [-] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Took 1.75 seconds to deallocate network for instance.
Nov 25 17:07:03 compute-0 nova_compute[254092]: 2025-11-25 17:07:03.250 254096 DEBUG nova.virt.libvirt.driver [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:07:03 compute-0 nova_compute[254092]: 2025-11-25 17:07:03.250 254096 DEBUG nova.virt.libvirt.driver [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:07:03 compute-0 nova_compute[254092]: 2025-11-25 17:07:03.251 254096 DEBUG nova.virt.libvirt.driver [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] No VIF found with MAC fa:16:3e:09:40:a3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:07:03 compute-0 nova_compute[254092]: 2025-11-25 17:07:03.251 254096 INFO nova.virt.libvirt.driver [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Using config drive
Nov 25 17:07:03 compute-0 nova_compute[254092]: 2025-11-25 17:07:03.276 254096 DEBUG nova.storage.rbd_utils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image bd1d0296-ae28-4eac-9f38-80e6ca17dbff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:07:03 compute-0 nova_compute[254092]: 2025-11-25 17:07:03.283 254096 DEBUG oslo_concurrency.lockutils [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:07:03 compute-0 nova_compute[254092]: 2025-11-25 17:07:03.284 254096 DEBUG oslo_concurrency.lockutils [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:07:03 compute-0 nova_compute[254092]: 2025-11-25 17:07:03.367 254096 DEBUG oslo_concurrency.processutils [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:07:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:07:03 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3927573114' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:07:03 compute-0 nova_compute[254092]: 2025-11-25 17:07:03.792 254096 INFO nova.virt.libvirt.driver [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Creating config drive at /var/lib/nova/instances/bd1d0296-ae28-4eac-9f38-80e6ca17dbff/disk.config
Nov 25 17:07:03 compute-0 nova_compute[254092]: 2025-11-25 17:07:03.799 254096 DEBUG oslo_concurrency.processutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bd1d0296-ae28-4eac-9f38-80e6ca17dbff/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpltzamna4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:07:03 compute-0 nova_compute[254092]: 2025-11-25 17:07:03.841 254096 DEBUG oslo_concurrency.processutils [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:07:03 compute-0 nova_compute[254092]: 2025-11-25 17:07:03.850 254096 DEBUG nova.compute.provider_tree [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:07:03 compute-0 nova_compute[254092]: 2025-11-25 17:07:03.863 254096 DEBUG nova.scheduler.client.report [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:07:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:07:03 compute-0 nova_compute[254092]: 2025-11-25 17:07:03.895 254096 DEBUG oslo_concurrency.lockutils [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:07:03 compute-0 nova_compute[254092]: 2025-11-25 17:07:03.920 254096 INFO nova.scheduler.client.report [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Deleted allocations for instance c64717b5-8862-4f84-989e-9f21bdc37759
Nov 25 17:07:03 compute-0 nova_compute[254092]: 2025-11-25 17:07:03.940 254096 DEBUG oslo_concurrency.processutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bd1d0296-ae28-4eac-9f38-80e6ca17dbff/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpltzamna4" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:07:03 compute-0 nova_compute[254092]: 2025-11-25 17:07:03.969 254096 DEBUG nova.storage.rbd_utils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image bd1d0296-ae28-4eac-9f38-80e6ca17dbff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:07:03 compute-0 nova_compute[254092]: 2025-11-25 17:07:03.973 254096 DEBUG oslo_concurrency.processutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bd1d0296-ae28-4eac-9f38-80e6ca17dbff/disk.config bd1d0296-ae28-4eac-9f38-80e6ca17dbff_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:07:03 compute-0 ceph-mon[74985]: pgmap v2505: 321 pgs: 321 active+clean; 134 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.4 MiB/s wr, 126 op/s
Nov 25 17:07:03 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1470768128' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:07:03 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3927573114' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:07:04 compute-0 nova_compute[254092]: 2025-11-25 17:07:04.021 254096 DEBUG oslo_concurrency.lockutils [None req-bec75948-b3eb-4519-8b81-b4bd5a42298e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "c64717b5-8862-4f84-989e-9f21bdc37759" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.299s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:07:04 compute-0 nova_compute[254092]: 2025-11-25 17:07:04.192 254096 DEBUG oslo_concurrency.processutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bd1d0296-ae28-4eac-9f38-80e6ca17dbff/disk.config bd1d0296-ae28-4eac-9f38-80e6ca17dbff_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.219s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:07:04 compute-0 nova_compute[254092]: 2025-11-25 17:07:04.193 254096 INFO nova.virt.libvirt.driver [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Deleting local config drive /var/lib/nova/instances/bd1d0296-ae28-4eac-9f38-80e6ca17dbff/disk.config because it was imported into RBD.
Nov 25 17:07:04 compute-0 kernel: tap54e9d2ac-a4: entered promiscuous mode
Nov 25 17:07:04 compute-0 NetworkManager[48891]: <info>  [1764090424.2465] manager: (tap54e9d2ac-a4): new Tun device (/org/freedesktop/NetworkManager/Devices/539)
Nov 25 17:07:04 compute-0 nova_compute[254092]: 2025-11-25 17:07:04.247 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:04 compute-0 ovn_controller[153477]: 2025-11-25T17:07:04Z|01301|binding|INFO|Claiming lport 54e9d2ac-a4ca-41fe-9c2e-76eba828c99c for this chassis.
Nov 25 17:07:04 compute-0 ovn_controller[153477]: 2025-11-25T17:07:04Z|01302|binding|INFO|54e9d2ac-a4ca-41fe-9c2e-76eba828c99c: Claiming fa:16:3e:09:40:a3 10.100.0.6
Nov 25 17:07:04 compute-0 nova_compute[254092]: 2025-11-25 17:07:04.249 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.261 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:40:a3 10.100.0.6'], port_security=['fa:16:3e:09:40:a3 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'bd1d0296-ae28-4eac-9f38-80e6ca17dbff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef2caff8-43ec-4364-a979-521405023410', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f19d2d0776e84c3c99c640c0b7ed3743', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1b953fef-d250-41a4-af84-97bf9c7f4822 779da3fb-8b66-4cf7-a59e-bc7311564ace', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32482432-a111-4664-9b0c-204a3e2a38fd, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=54e9d2ac-a4ca-41fe-9c2e-76eba828c99c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.262 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 54e9d2ac-a4ca-41fe-9c2e-76eba828c99c in datapath ef2caff8-43ec-4364-a979-521405023410 bound to our chassis
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.263 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ef2caff8-43ec-4364-a979-521405023410
Nov 25 17:07:04 compute-0 ovn_controller[153477]: 2025-11-25T17:07:04Z|01303|binding|INFO|Setting lport 54e9d2ac-a4ca-41fe-9c2e-76eba828c99c ovn-installed in OVS
Nov 25 17:07:04 compute-0 ovn_controller[153477]: 2025-11-25T17:07:04Z|01304|binding|INFO|Setting lport 54e9d2ac-a4ca-41fe-9c2e-76eba828c99c up in Southbound
Nov 25 17:07:04 compute-0 nova_compute[254092]: 2025-11-25 17:07:04.265 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:04 compute-0 systemd-udevd[392652]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:07:04 compute-0 systemd-machined[216343]: New machine qemu-158-instance-0000007e.
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.277 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b809437b-014e-4cef-9444-f94cc59b6255]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.279 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapef2caff8-41 in ovnmeta-ef2caff8-43ec-4364-a979-521405023410 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.281 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapef2caff8-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.281 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3ce652ad-a77f-4ebd-95cc-77a6d4897276]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.282 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b10ca708-efcc-4690-9359-5e7174489fb4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:04 compute-0 NetworkManager[48891]: <info>  [1764090424.2841] device (tap54e9d2ac-a4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:07:04 compute-0 systemd[1]: Started Virtual Machine qemu-158-instance-0000007e.
Nov 25 17:07:04 compute-0 NetworkManager[48891]: <info>  [1764090424.2859] device (tap54e9d2ac-a4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.295 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[88c818dc-4940-4eb5-87da-d5d5104931eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.320 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[171d1290-58da-44de-8bbf-20f2efbc31ee]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2506: 321 pgs: 321 active+clean; 134 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.458 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ee84d615-acd5-4aad-824f-44d47d01bc86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:04 compute-0 NetworkManager[48891]: <info>  [1764090424.4644] manager: (tapef2caff8-40): new Veth device (/org/freedesktop/NetworkManager/Devices/540)
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.463 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[29a6378a-44ac-46e1-b9e3-3147be62ab60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:04 compute-0 systemd-udevd[392655]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.503 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d4ad8275-2312-4892-86c3-a2e471de9276]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.506 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[2bfeb635-e5d4-4a85-875c-fffe05c80d05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:04 compute-0 NetworkManager[48891]: <info>  [1764090424.5302] device (tapef2caff8-40): carrier: link connected
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.536 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8f8736f9-76e3-4ea4-be00-50d4f69cb508]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.552 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6d45d84a-3576-4b0f-8b98-4733ef349f4e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef2caff8-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:05:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 383], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688212, 'reachable_time': 34047, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 392685, 'error': None, 'target': 'ovnmeta-ef2caff8-43ec-4364-a979-521405023410', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.570 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f21b257a-b243-4384-864e-ae778b34a933]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe73:52b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 688212, 'tstamp': 688212}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 392686, 'error': None, 'target': 'ovnmeta-ef2caff8-43ec-4364-a979-521405023410', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.594 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f9f981d7-6fd9-4553-a664-1c2a06e0d113]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef2caff8-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:05:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 383], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688212, 'reachable_time': 34047, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 392687, 'error': None, 'target': 'ovnmeta-ef2caff8-43ec-4364-a979-521405023410', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.631 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e04cf98f-72b1-4e6d-84ec-944d0b072fc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.696 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c81acd9b-2361-462c-acc5-2a4f2b899aab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.698 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef2caff8-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.698 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.699 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef2caff8-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:07:04 compute-0 NetworkManager[48891]: <info>  [1764090424.7020] manager: (tapef2caff8-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/541)
Nov 25 17:07:04 compute-0 kernel: tapef2caff8-40: entered promiscuous mode
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.704 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapef2caff8-40, col_values=(('external_ids', {'iface-id': '50aec7ed-4f15-4d72-87dd-48c327de28ce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:07:04 compute-0 ovn_controller[153477]: 2025-11-25T17:07:04Z|01305|binding|INFO|Releasing lport 50aec7ed-4f15-4d72-87dd-48c327de28ce from this chassis (sb_readonly=0)
Nov 25 17:07:04 compute-0 nova_compute[254092]: 2025-11-25 17:07:04.706 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:04 compute-0 nova_compute[254092]: 2025-11-25 17:07:04.718 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.719 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ef2caff8-43ec-4364-a979-521405023410.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ef2caff8-43ec-4364-a979-521405023410.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.720 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[073465eb-78a4-4a38-ab73-641c326b1bc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.721 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]: global
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-ef2caff8-43ec-4364-a979-521405023410
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/ef2caff8-43ec-4364-a979-521405023410.pid.haproxy
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID ef2caff8-43ec-4364-a979-521405023410
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 17:07:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:04.724 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ef2caff8-43ec-4364-a979-521405023410', 'env', 'PROCESS_TAG=haproxy-ef2caff8-43ec-4364-a979-521405023410', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ef2caff8-43ec-4364-a979-521405023410.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 17:07:04 compute-0 nova_compute[254092]: 2025-11-25 17:07:04.904 254096 DEBUG nova.compute.manager [req-0598eba6-f8e6-4702-bacc-5445d7611cbc req-8fde816f-612b-4545-b06e-b94ba9631048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Received event network-vif-plugged-54e9d2ac-a4ca-41fe-9c2e-76eba828c99c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:07:04 compute-0 nova_compute[254092]: 2025-11-25 17:07:04.905 254096 DEBUG oslo_concurrency.lockutils [req-0598eba6-f8e6-4702-bacc-5445d7611cbc req-8fde816f-612b-4545-b06e-b94ba9631048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "bd1d0296-ae28-4eac-9f38-80e6ca17dbff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:07:04 compute-0 nova_compute[254092]: 2025-11-25 17:07:04.905 254096 DEBUG oslo_concurrency.lockutils [req-0598eba6-f8e6-4702-bacc-5445d7611cbc req-8fde816f-612b-4545-b06e-b94ba9631048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "bd1d0296-ae28-4eac-9f38-80e6ca17dbff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:07:04 compute-0 nova_compute[254092]: 2025-11-25 17:07:04.906 254096 DEBUG oslo_concurrency.lockutils [req-0598eba6-f8e6-4702-bacc-5445d7611cbc req-8fde816f-612b-4545-b06e-b94ba9631048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "bd1d0296-ae28-4eac-9f38-80e6ca17dbff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:07:04 compute-0 nova_compute[254092]: 2025-11-25 17:07:04.906 254096 DEBUG nova.compute.manager [req-0598eba6-f8e6-4702-bacc-5445d7611cbc req-8fde816f-612b-4545-b06e-b94ba9631048 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Processing event network-vif-plugged-54e9d2ac-a4ca-41fe-9c2e-76eba828c99c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 17:07:04 compute-0 nova_compute[254092]: 2025-11-25 17:07:04.956 254096 DEBUG nova.network.neutron [req-f14cfd37-29ef-40ab-87c4-03c946f8b09e req-70995d04-313c-4c44-a164-aaa244e938ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Updated VIF entry in instance network info cache for port 54e9d2ac-a4ca-41fe-9c2e-76eba828c99c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:07:04 compute-0 nova_compute[254092]: 2025-11-25 17:07:04.957 254096 DEBUG nova.network.neutron [req-f14cfd37-29ef-40ab-87c4-03c946f8b09e req-70995d04-313c-4c44-a164-aaa244e938ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Updating instance_info_cache with network_info: [{"id": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "address": "fa:16:3e:09:40:a3", "network": {"id": "ef2caff8-43ec-4364-a979-521405023410", "bridge": "br-int", "label": "tempest-network-smoke--168780319", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e9d2ac-a4", "ovs_interfaceid": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:07:04 compute-0 nova_compute[254092]: 2025-11-25 17:07:04.976 254096 DEBUG oslo_concurrency.lockutils [req-f14cfd37-29ef-40ab-87c4-03c946f8b09e req-70995d04-313c-4c44-a164-aaa244e938ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-bd1d0296-ae28-4eac-9f38-80e6ca17dbff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:07:04 compute-0 nova_compute[254092]: 2025-11-25 17:07:04.977 254096 DEBUG nova.compute.manager [req-f14cfd37-29ef-40ab-87c4-03c946f8b09e req-70995d04-313c-4c44-a164-aaa244e938ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Received event network-vif-unplugged-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:07:04 compute-0 nova_compute[254092]: 2025-11-25 17:07:04.977 254096 DEBUG oslo_concurrency.lockutils [req-f14cfd37-29ef-40ab-87c4-03c946f8b09e req-70995d04-313c-4c44-a164-aaa244e938ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "c64717b5-8862-4f84-989e-9f21bdc37759-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:07:04 compute-0 nova_compute[254092]: 2025-11-25 17:07:04.977 254096 DEBUG oslo_concurrency.lockutils [req-f14cfd37-29ef-40ab-87c4-03c946f8b09e req-70995d04-313c-4c44-a164-aaa244e938ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "c64717b5-8862-4f84-989e-9f21bdc37759-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:07:04 compute-0 nova_compute[254092]: 2025-11-25 17:07:04.977 254096 DEBUG oslo_concurrency.lockutils [req-f14cfd37-29ef-40ab-87c4-03c946f8b09e req-70995d04-313c-4c44-a164-aaa244e938ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "c64717b5-8862-4f84-989e-9f21bdc37759-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:07:04 compute-0 nova_compute[254092]: 2025-11-25 17:07:04.978 254096 DEBUG nova.compute.manager [req-f14cfd37-29ef-40ab-87c4-03c946f8b09e req-70995d04-313c-4c44-a164-aaa244e938ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] No waiting events found dispatching network-vif-unplugged-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:07:04 compute-0 nova_compute[254092]: 2025-11-25 17:07:04.978 254096 DEBUG nova.compute.manager [req-f14cfd37-29ef-40ab-87c4-03c946f8b09e req-70995d04-313c-4c44-a164-aaa244e938ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Received event network-vif-unplugged-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 17:07:04 compute-0 nova_compute[254092]: 2025-11-25 17:07:04.978 254096 DEBUG nova.compute.manager [req-f14cfd37-29ef-40ab-87c4-03c946f8b09e req-70995d04-313c-4c44-a164-aaa244e938ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Received event network-vif-plugged-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:07:04 compute-0 nova_compute[254092]: 2025-11-25 17:07:04.978 254096 DEBUG oslo_concurrency.lockutils [req-f14cfd37-29ef-40ab-87c4-03c946f8b09e req-70995d04-313c-4c44-a164-aaa244e938ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "c64717b5-8862-4f84-989e-9f21bdc37759-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:07:04 compute-0 nova_compute[254092]: 2025-11-25 17:07:04.979 254096 DEBUG oslo_concurrency.lockutils [req-f14cfd37-29ef-40ab-87c4-03c946f8b09e req-70995d04-313c-4c44-a164-aaa244e938ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "c64717b5-8862-4f84-989e-9f21bdc37759-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:07:04 compute-0 nova_compute[254092]: 2025-11-25 17:07:04.979 254096 DEBUG oslo_concurrency.lockutils [req-f14cfd37-29ef-40ab-87c4-03c946f8b09e req-70995d04-313c-4c44-a164-aaa244e938ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "c64717b5-8862-4f84-989e-9f21bdc37759-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:07:04 compute-0 nova_compute[254092]: 2025-11-25 17:07:04.979 254096 DEBUG nova.compute.manager [req-f14cfd37-29ef-40ab-87c4-03c946f8b09e req-70995d04-313c-4c44-a164-aaa244e938ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] No waiting events found dispatching network-vif-plugged-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:07:04 compute-0 nova_compute[254092]: 2025-11-25 17:07:04.979 254096 WARNING nova.compute.manager [req-f14cfd37-29ef-40ab-87c4-03c946f8b09e req-70995d04-313c-4c44-a164-aaa244e938ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Received unexpected event network-vif-plugged-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 for instance with vm_state active and task_state deleting.
Nov 25 17:07:05 compute-0 podman[392719]: 2025-11-25 17:07:05.103453377 +0000 UTC m=+0.075707948 container create a232e357ff178d3fae9e120dd480f7eb5719f326bae0fa556d3df6152413c7d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ef2caff8-43ec-4364-a979-521405023410, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 25 17:07:05 compute-0 systemd[1]: Started libpod-conmon-a232e357ff178d3fae9e120dd480f7eb5719f326bae0fa556d3df6152413c7d0.scope.
Nov 25 17:07:05 compute-0 podman[392719]: 2025-11-25 17:07:05.053939988 +0000 UTC m=+0.026194589 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 17:07:05 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:07:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e69c52ab9da1464e9256b97d314ba04db47e1086bb356a0791f8cd44f4454be6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 17:07:05 compute-0 podman[392719]: 2025-11-25 17:07:05.19655777 +0000 UTC m=+0.168812401 container init a232e357ff178d3fae9e120dd480f7eb5719f326bae0fa556d3df6152413c7d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ef2caff8-43ec-4364-a979-521405023410, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 17:07:05 compute-0 nova_compute[254092]: 2025-11-25 17:07:05.203 254096 DEBUG nova.compute.manager [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 17:07:05 compute-0 podman[392719]: 2025-11-25 17:07:05.203041848 +0000 UTC m=+0.175296439 container start a232e357ff178d3fae9e120dd480f7eb5719f326bae0fa556d3df6152413c7d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ef2caff8-43ec-4364-a979-521405023410, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 17:07:05 compute-0 nova_compute[254092]: 2025-11-25 17:07:05.204 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090425.2027023, bd1d0296-ae28-4eac-9f38-80e6ca17dbff => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:07:05 compute-0 nova_compute[254092]: 2025-11-25 17:07:05.204 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] VM Started (Lifecycle Event)
Nov 25 17:07:05 compute-0 nova_compute[254092]: 2025-11-25 17:07:05.209 254096 DEBUG nova.virt.libvirt.driver [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 17:07:05 compute-0 nova_compute[254092]: 2025-11-25 17:07:05.214 254096 INFO nova.virt.libvirt.driver [-] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Instance spawned successfully.
Nov 25 17:07:05 compute-0 nova_compute[254092]: 2025-11-25 17:07:05.214 254096 DEBUG nova.virt.libvirt.driver [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 17:07:05 compute-0 nova_compute[254092]: 2025-11-25 17:07:05.227 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:07:05 compute-0 nova_compute[254092]: 2025-11-25 17:07:05.233 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:07:05 compute-0 nova_compute[254092]: 2025-11-25 17:07:05.236 254096 DEBUG nova.virt.libvirt.driver [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:07:05 compute-0 nova_compute[254092]: 2025-11-25 17:07:05.237 254096 DEBUG nova.virt.libvirt.driver [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:07:05 compute-0 neutron-haproxy-ovnmeta-ef2caff8-43ec-4364-a979-521405023410[392775]: [NOTICE]   (392780) : New worker (392782) forked
Nov 25 17:07:05 compute-0 neutron-haproxy-ovnmeta-ef2caff8-43ec-4364-a979-521405023410[392775]: [NOTICE]   (392780) : Loading success.
Nov 25 17:07:05 compute-0 nova_compute[254092]: 2025-11-25 17:07:05.238 254096 DEBUG nova.virt.libvirt.driver [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:07:05 compute-0 nova_compute[254092]: 2025-11-25 17:07:05.238 254096 DEBUG nova.virt.libvirt.driver [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:07:05 compute-0 nova_compute[254092]: 2025-11-25 17:07:05.239 254096 DEBUG nova.virt.libvirt.driver [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:07:05 compute-0 nova_compute[254092]: 2025-11-25 17:07:05.239 254096 DEBUG nova.virt.libvirt.driver [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:07:05 compute-0 nova_compute[254092]: 2025-11-25 17:07:05.247 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:07:05 compute-0 nova_compute[254092]: 2025-11-25 17:07:05.247 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090425.202915, bd1d0296-ae28-4eac-9f38-80e6ca17dbff => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:07:05 compute-0 nova_compute[254092]: 2025-11-25 17:07:05.248 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] VM Paused (Lifecycle Event)
Nov 25 17:07:05 compute-0 nova_compute[254092]: 2025-11-25 17:07:05.266 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:07:05 compute-0 nova_compute[254092]: 2025-11-25 17:07:05.271 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090425.2095118, bd1d0296-ae28-4eac-9f38-80e6ca17dbff => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:07:05 compute-0 nova_compute[254092]: 2025-11-25 17:07:05.271 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] VM Resumed (Lifecycle Event)
Nov 25 17:07:05 compute-0 nova_compute[254092]: 2025-11-25 17:07:05.294 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:07:05 compute-0 nova_compute[254092]: 2025-11-25 17:07:05.298 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:07:05 compute-0 nova_compute[254092]: 2025-11-25 17:07:05.310 254096 INFO nova.compute.manager [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Took 8.45 seconds to spawn the instance on the hypervisor.
Nov 25 17:07:05 compute-0 nova_compute[254092]: 2025-11-25 17:07:05.310 254096 DEBUG nova.compute.manager [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:07:05 compute-0 nova_compute[254092]: 2025-11-25 17:07:05.318 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:07:05 compute-0 nova_compute[254092]: 2025-11-25 17:07:05.367 254096 INFO nova.compute.manager [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Took 9.36 seconds to build instance.
Nov 25 17:07:05 compute-0 nova_compute[254092]: 2025-11-25 17:07:05.385 254096 DEBUG oslo_concurrency.lockutils [None req-c888f1fb-b645-4617-85b8-2e450bc6ee12 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "bd1d0296-ae28-4eac-9f38-80e6ca17dbff" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.440s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:07:05 compute-0 nova_compute[254092]: 2025-11-25 17:07:05.425 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:06 compute-0 ceph-mon[74985]: pgmap v2506: 321 pgs: 321 active+clean; 134 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 17:07:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2507: 321 pgs: 321 active+clean; 88 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 134 op/s
Nov 25 17:07:07 compute-0 nova_compute[254092]: 2025-11-25 17:07:07.001 254096 DEBUG nova.compute.manager [req-c1139e38-6350-4003-a3f5-d1627455e542 req-856584a8-915c-4bad-af77-74e118ef7cd5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Received event network-vif-plugged-54e9d2ac-a4ca-41fe-9c2e-76eba828c99c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:07:07 compute-0 nova_compute[254092]: 2025-11-25 17:07:07.002 254096 DEBUG oslo_concurrency.lockutils [req-c1139e38-6350-4003-a3f5-d1627455e542 req-856584a8-915c-4bad-af77-74e118ef7cd5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "bd1d0296-ae28-4eac-9f38-80e6ca17dbff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:07:07 compute-0 nova_compute[254092]: 2025-11-25 17:07:07.002 254096 DEBUG oslo_concurrency.lockutils [req-c1139e38-6350-4003-a3f5-d1627455e542 req-856584a8-915c-4bad-af77-74e118ef7cd5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "bd1d0296-ae28-4eac-9f38-80e6ca17dbff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:07:07 compute-0 nova_compute[254092]: 2025-11-25 17:07:07.002 254096 DEBUG oslo_concurrency.lockutils [req-c1139e38-6350-4003-a3f5-d1627455e542 req-856584a8-915c-4bad-af77-74e118ef7cd5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "bd1d0296-ae28-4eac-9f38-80e6ca17dbff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:07:07 compute-0 nova_compute[254092]: 2025-11-25 17:07:07.002 254096 DEBUG nova.compute.manager [req-c1139e38-6350-4003-a3f5-d1627455e542 req-856584a8-915c-4bad-af77-74e118ef7cd5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] No waiting events found dispatching network-vif-plugged-54e9d2ac-a4ca-41fe-9c2e-76eba828c99c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:07:07 compute-0 nova_compute[254092]: 2025-11-25 17:07:07.003 254096 WARNING nova.compute.manager [req-c1139e38-6350-4003-a3f5-d1627455e542 req-856584a8-915c-4bad-af77-74e118ef7cd5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Received unexpected event network-vif-plugged-54e9d2ac-a4ca-41fe-9c2e-76eba828c99c for instance with vm_state active and task_state None.
Nov 25 17:07:08 compute-0 ceph-mon[74985]: pgmap v2507: 321 pgs: 321 active+clean; 88 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 134 op/s
Nov 25 17:07:08 compute-0 nova_compute[254092]: 2025-11-25 17:07:08.177 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2508: 321 pgs: 321 active+clean; 88 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 113 op/s
Nov 25 17:07:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:07:10 compute-0 ceph-mon[74985]: pgmap v2508: 321 pgs: 321 active+clean; 88 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 113 op/s
Nov 25 17:07:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:07:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:07:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:07:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:07:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:07:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:07:10 compute-0 nova_compute[254092]: 2025-11-25 17:07:10.427 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2509: 321 pgs: 321 active+clean; 88 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 150 op/s
Nov 25 17:07:11 compute-0 nova_compute[254092]: 2025-11-25 17:07:11.117 254096 DEBUG nova.compute.manager [req-44d7a3f4-a061-45c4-b0ef-623a79d34af6 req-e9cf1ea1-038e-4c81-be8c-f54f7b09941b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Received event network-changed-54e9d2ac-a4ca-41fe-9c2e-76eba828c99c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:07:11 compute-0 nova_compute[254092]: 2025-11-25 17:07:11.118 254096 DEBUG nova.compute.manager [req-44d7a3f4-a061-45c4-b0ef-623a79d34af6 req-e9cf1ea1-038e-4c81-be8c-f54f7b09941b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Refreshing instance network info cache due to event network-changed-54e9d2ac-a4ca-41fe-9c2e-76eba828c99c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:07:11 compute-0 nova_compute[254092]: 2025-11-25 17:07:11.118 254096 DEBUG oslo_concurrency.lockutils [req-44d7a3f4-a061-45c4-b0ef-623a79d34af6 req-e9cf1ea1-038e-4c81-be8c-f54f7b09941b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-bd1d0296-ae28-4eac-9f38-80e6ca17dbff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:07:11 compute-0 nova_compute[254092]: 2025-11-25 17:07:11.119 254096 DEBUG oslo_concurrency.lockutils [req-44d7a3f4-a061-45c4-b0ef-623a79d34af6 req-e9cf1ea1-038e-4c81-be8c-f54f7b09941b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-bd1d0296-ae28-4eac-9f38-80e6ca17dbff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:07:11 compute-0 nova_compute[254092]: 2025-11-25 17:07:11.119 254096 DEBUG nova.network.neutron [req-44d7a3f4-a061-45c4-b0ef-623a79d34af6 req-e9cf1ea1-038e-4c81-be8c-f54f7b09941b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Refreshing network info cache for port 54e9d2ac-a4ca-41fe-9c2e-76eba828c99c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:07:12 compute-0 ceph-mon[74985]: pgmap v2509: 321 pgs: 321 active+clean; 88 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 150 op/s
Nov 25 17:07:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2510: 321 pgs: 321 active+clean; 88 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 815 KiB/s wr, 155 op/s
Nov 25 17:07:13 compute-0 nova_compute[254092]: 2025-11-25 17:07:13.179 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:13.644 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:07:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:13.645 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:07:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:13.645 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:07:13 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:07:13 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #117. Immutable memtables: 0.
Nov 25 17:07:13 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:07:13.881680) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 17:07:13 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 117
Nov 25 17:07:13 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090433881706, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 542, "num_deletes": 255, "total_data_size": 526218, "memory_usage": 536584, "flush_reason": "Manual Compaction"}
Nov 25 17:07:13 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #118: started
Nov 25 17:07:13 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090433887991, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 118, "file_size": 521526, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 51983, "largest_seqno": 52524, "table_properties": {"data_size": 518529, "index_size": 969, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7082, "raw_average_key_size": 18, "raw_value_size": 512431, "raw_average_value_size": 1355, "num_data_blocks": 43, "num_entries": 378, "num_filter_entries": 378, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764090401, "oldest_key_time": 1764090401, "file_creation_time": 1764090433, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 118, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:07:13 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 6373 microseconds, and 2439 cpu microseconds.
Nov 25 17:07:13 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:07:13 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:07:13.888045) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #118: 521526 bytes OK
Nov 25 17:07:13 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:07:13.888071) [db/memtable_list.cc:519] [default] Level-0 commit table #118 started
Nov 25 17:07:13 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:07:13.889848) [db/memtable_list.cc:722] [default] Level-0 commit table #118: memtable #1 done
Nov 25 17:07:13 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:07:13.889868) EVENT_LOG_v1 {"time_micros": 1764090433889862, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 17:07:13 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:07:13.889888) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 17:07:13 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 523105, prev total WAL file size 523105, number of live WAL files 2.
Nov 25 17:07:13 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000114.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:07:13 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:07:13.890434) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303132' seq:72057594037927935, type:22 .. '6C6F676D0032323633' seq:0, type:0; will stop at (end)
Nov 25 17:07:13 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 17:07:13 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [118(509KB)], [116(10070KB)]
Nov 25 17:07:13 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090433890474, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [118], "files_L6": [116], "score": -1, "input_data_size": 10833401, "oldest_snapshot_seqno": -1}
Nov 25 17:07:13 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #119: 7322 keys, 10705590 bytes, temperature: kUnknown
Nov 25 17:07:13 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090433959000, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 119, "file_size": 10705590, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10655767, "index_size": 30383, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18373, "raw_key_size": 191526, "raw_average_key_size": 26, "raw_value_size": 10523882, "raw_average_value_size": 1437, "num_data_blocks": 1190, "num_entries": 7322, "num_filter_entries": 7322, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764090433, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:07:13 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:07:13 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:07:13.960523) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 10705590 bytes
Nov 25 17:07:13 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:07:13.961979) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 157.9 rd, 156.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 9.8 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(41.3) write-amplify(20.5) OK, records in: 7844, records dropped: 522 output_compression: NoCompression
Nov 25 17:07:13 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:07:13.962015) EVENT_LOG_v1 {"time_micros": 1764090433962000, "job": 70, "event": "compaction_finished", "compaction_time_micros": 68600, "compaction_time_cpu_micros": 30151, "output_level": 6, "num_output_files": 1, "total_output_size": 10705590, "num_input_records": 7844, "num_output_records": 7322, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 17:07:13 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000118.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:07:13 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090433962316, "job": 70, "event": "table_file_deletion", "file_number": 118}
Nov 25 17:07:13 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:07:13 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090433965081, "job": 70, "event": "table_file_deletion", "file_number": 116}
Nov 25 17:07:13 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:07:13.890340) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:07:13 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:07:13.965130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:07:13 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:07:13.965136) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:07:13 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:07:13.965139) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:07:13 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:07:13.965142) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:07:13 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:07:13.965144) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:07:14 compute-0 ceph-mon[74985]: pgmap v2510: 321 pgs: 321 active+clean; 88 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 815 KiB/s wr, 155 op/s
Nov 25 17:07:14 compute-0 nova_compute[254092]: 2025-11-25 17:07:14.209 254096 DEBUG nova.network.neutron [req-44d7a3f4-a061-45c4-b0ef-623a79d34af6 req-e9cf1ea1-038e-4c81-be8c-f54f7b09941b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Updated VIF entry in instance network info cache for port 54e9d2ac-a4ca-41fe-9c2e-76eba828c99c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:07:14 compute-0 nova_compute[254092]: 2025-11-25 17:07:14.209 254096 DEBUG nova.network.neutron [req-44d7a3f4-a061-45c4-b0ef-623a79d34af6 req-e9cf1ea1-038e-4c81-be8c-f54f7b09941b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Updating instance_info_cache with network_info: [{"id": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "address": "fa:16:3e:09:40:a3", "network": {"id": "ef2caff8-43ec-4364-a979-521405023410", "bridge": "br-int", "label": "tempest-network-smoke--168780319", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e9d2ac-a4", "ovs_interfaceid": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:07:14 compute-0 nova_compute[254092]: 2025-11-25 17:07:14.240 254096 DEBUG oslo_concurrency.lockutils [req-44d7a3f4-a061-45c4-b0ef-623a79d34af6 req-e9cf1ea1-038e-4c81-be8c-f54f7b09941b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-bd1d0296-ae28-4eac-9f38-80e6ca17dbff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:07:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2511: 321 pgs: 321 active+clean; 88 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Nov 25 17:07:15 compute-0 nova_compute[254092]: 2025-11-25 17:07:15.429 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:15 compute-0 nova_compute[254092]: 2025-11-25 17:07:15.935 254096 DEBUG oslo_concurrency.lockutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "2c787d91-7197-42cc-9ee6-870806f4904b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:07:15 compute-0 nova_compute[254092]: 2025-11-25 17:07:15.935 254096 DEBUG oslo_concurrency.lockutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "2c787d91-7197-42cc-9ee6-870806f4904b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:07:15 compute-0 nova_compute[254092]: 2025-11-25 17:07:15.950 254096 DEBUG nova.compute.manager [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 17:07:15 compute-0 nova_compute[254092]: 2025-11-25 17:07:15.970 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090420.966844, c64717b5-8862-4f84-989e-9f21bdc37759 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:07:15 compute-0 nova_compute[254092]: 2025-11-25 17:07:15.971 254096 INFO nova.compute.manager [-] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] VM Stopped (Lifecycle Event)
Nov 25 17:07:15 compute-0 nova_compute[254092]: 2025-11-25 17:07:15.998 254096 DEBUG nova.compute.manager [None req-48858c16-7d19-4f27-bf65-98c03332c5f9 - - - - - -] [instance: c64717b5-8862-4f84-989e-9f21bdc37759] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:07:16 compute-0 nova_compute[254092]: 2025-11-25 17:07:16.024 254096 DEBUG oslo_concurrency.lockutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:07:16 compute-0 nova_compute[254092]: 2025-11-25 17:07:16.025 254096 DEBUG oslo_concurrency.lockutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:07:16 compute-0 nova_compute[254092]: 2025-11-25 17:07:16.033 254096 DEBUG nova.virt.hardware [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 17:07:16 compute-0 nova_compute[254092]: 2025-11-25 17:07:16.033 254096 INFO nova.compute.claims [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Claim successful on node compute-0.ctlplane.example.com
Nov 25 17:07:16 compute-0 ceph-mon[74985]: pgmap v2511: 321 pgs: 321 active+clean; 88 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Nov 25 17:07:16 compute-0 nova_compute[254092]: 2025-11-25 17:07:16.170 254096 DEBUG oslo_concurrency.processutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:07:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2512: 321 pgs: 321 active+clean; 88 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Nov 25 17:07:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:07:16 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1254771916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:07:16 compute-0 nova_compute[254092]: 2025-11-25 17:07:16.630 254096 DEBUG oslo_concurrency.processutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:07:16 compute-0 nova_compute[254092]: 2025-11-25 17:07:16.637 254096 DEBUG nova.compute.provider_tree [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:07:16 compute-0 nova_compute[254092]: 2025-11-25 17:07:16.651 254096 DEBUG nova.scheduler.client.report [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:07:16 compute-0 nova_compute[254092]: 2025-11-25 17:07:16.683 254096 DEBUG oslo_concurrency.lockutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:07:16 compute-0 nova_compute[254092]: 2025-11-25 17:07:16.684 254096 DEBUG nova.compute.manager [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 17:07:16 compute-0 nova_compute[254092]: 2025-11-25 17:07:16.913 254096 DEBUG nova.compute.manager [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 17:07:16 compute-0 nova_compute[254092]: 2025-11-25 17:07:16.914 254096 DEBUG nova.network.neutron [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 17:07:16 compute-0 nova_compute[254092]: 2025-11-25 17:07:16.938 254096 INFO nova.virt.libvirt.driver [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 17:07:16 compute-0 nova_compute[254092]: 2025-11-25 17:07:16.954 254096 DEBUG nova.compute.manager [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 17:07:17 compute-0 nova_compute[254092]: 2025-11-25 17:07:17.039 254096 DEBUG nova.compute.manager [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 17:07:17 compute-0 nova_compute[254092]: 2025-11-25 17:07:17.040 254096 DEBUG nova.virt.libvirt.driver [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 17:07:17 compute-0 nova_compute[254092]: 2025-11-25 17:07:17.040 254096 INFO nova.virt.libvirt.driver [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Creating image(s)
Nov 25 17:07:17 compute-0 nova_compute[254092]: 2025-11-25 17:07:17.072 254096 DEBUG nova.storage.rbd_utils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 2c787d91-7197-42cc-9ee6-870806f4904b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:07:17 compute-0 nova_compute[254092]: 2025-11-25 17:07:17.098 254096 DEBUG nova.storage.rbd_utils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 2c787d91-7197-42cc-9ee6-870806f4904b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:07:17 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1254771916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:07:17 compute-0 nova_compute[254092]: 2025-11-25 17:07:17.132 254096 DEBUG nova.storage.rbd_utils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 2c787d91-7197-42cc-9ee6-870806f4904b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:07:17 compute-0 nova_compute[254092]: 2025-11-25 17:07:17.140 254096 DEBUG oslo_concurrency.processutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:07:17 compute-0 nova_compute[254092]: 2025-11-25 17:07:17.222 254096 DEBUG oslo_concurrency.processutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:07:17 compute-0 nova_compute[254092]: 2025-11-25 17:07:17.224 254096 DEBUG oslo_concurrency.lockutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:07:17 compute-0 nova_compute[254092]: 2025-11-25 17:07:17.224 254096 DEBUG oslo_concurrency.lockutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:07:17 compute-0 nova_compute[254092]: 2025-11-25 17:07:17.225 254096 DEBUG oslo_concurrency.lockutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:07:17 compute-0 nova_compute[254092]: 2025-11-25 17:07:17.256 254096 DEBUG nova.storage.rbd_utils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 2c787d91-7197-42cc-9ee6-870806f4904b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:07:17 compute-0 nova_compute[254092]: 2025-11-25 17:07:17.262 254096 DEBUG oslo_concurrency.processutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 2c787d91-7197-42cc-9ee6-870806f4904b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:07:17 compute-0 nova_compute[254092]: 2025-11-25 17:07:17.372 254096 DEBUG nova.policy [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e13382ac65e94c7b8a89e67de0e754df', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 17:07:17 compute-0 nova_compute[254092]: 2025-11-25 17:07:17.534 254096 DEBUG oslo_concurrency.processutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 2c787d91-7197-42cc-9ee6-870806f4904b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:07:17 compute-0 nova_compute[254092]: 2025-11-25 17:07:17.596 254096 DEBUG nova.storage.rbd_utils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] resizing rbd image 2c787d91-7197-42cc-9ee6-870806f4904b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 17:07:17 compute-0 nova_compute[254092]: 2025-11-25 17:07:17.681 254096 DEBUG nova.objects.instance [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'migration_context' on Instance uuid 2c787d91-7197-42cc-9ee6-870806f4904b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:07:17 compute-0 nova_compute[254092]: 2025-11-25 17:07:17.695 254096 DEBUG nova.virt.libvirt.driver [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 17:07:17 compute-0 nova_compute[254092]: 2025-11-25 17:07:17.695 254096 DEBUG nova.virt.libvirt.driver [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Ensure instance console log exists: /var/lib/nova/instances/2c787d91-7197-42cc-9ee6-870806f4904b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 17:07:17 compute-0 nova_compute[254092]: 2025-11-25 17:07:17.695 254096 DEBUG oslo_concurrency.lockutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:07:17 compute-0 nova_compute[254092]: 2025-11-25 17:07:17.696 254096 DEBUG oslo_concurrency.lockutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:07:17 compute-0 nova_compute[254092]: 2025-11-25 17:07:17.696 254096 DEBUG oslo_concurrency.lockutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:07:18 compute-0 ceph-mon[74985]: pgmap v2512: 321 pgs: 321 active+clean; 88 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Nov 25 17:07:18 compute-0 nova_compute[254092]: 2025-11-25 17:07:18.182 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2513: 321 pgs: 321 active+clean; 88 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 66 op/s
Nov 25 17:07:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:18.595 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:07:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:18.596 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 17:07:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:18.597 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:07:18 compute-0 nova_compute[254092]: 2025-11-25 17:07:18.636 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:18 compute-0 podman[392980]: 2025-11-25 17:07:18.676781867 +0000 UTC m=+0.098320918 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 17:07:18 compute-0 podman[392979]: 2025-11-25 17:07:18.714217643 +0000 UTC m=+0.128005541 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd)
Nov 25 17:07:18 compute-0 podman[392981]: 2025-11-25 17:07:18.749564243 +0000 UTC m=+0.154775696 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 17:07:18 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:07:18 compute-0 nova_compute[254092]: 2025-11-25 17:07:18.937 254096 DEBUG nova.network.neutron [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Successfully updated port: c7eaeb08-d94a-4ecb-a87f-459a8d848a74 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:07:18 compute-0 nova_compute[254092]: 2025-11-25 17:07:18.947 254096 DEBUG oslo_concurrency.lockutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "refresh_cache-2c787d91-7197-42cc-9ee6-870806f4904b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:07:18 compute-0 nova_compute[254092]: 2025-11-25 17:07:18.947 254096 DEBUG oslo_concurrency.lockutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquired lock "refresh_cache-2c787d91-7197-42cc-9ee6-870806f4904b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:07:18 compute-0 nova_compute[254092]: 2025-11-25 17:07:18.947 254096 DEBUG nova.network.neutron [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 17:07:19 compute-0 nova_compute[254092]: 2025-11-25 17:07:19.009 254096 DEBUG nova.compute.manager [req-719feb89-fe4a-491c-a416-fac55588d79a req-8adade26-a268-472a-a7b4-a1699c810bdf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Received event network-changed-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:07:19 compute-0 nova_compute[254092]: 2025-11-25 17:07:19.010 254096 DEBUG nova.compute.manager [req-719feb89-fe4a-491c-a416-fac55588d79a req-8adade26-a268-472a-a7b4-a1699c810bdf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Refreshing instance network info cache due to event network-changed-c7eaeb08-d94a-4ecb-a87f-459a8d848a74. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:07:19 compute-0 nova_compute[254092]: 2025-11-25 17:07:19.010 254096 DEBUG oslo_concurrency.lockutils [req-719feb89-fe4a-491c-a416-fac55588d79a req-8adade26-a268-472a-a7b4-a1699c810bdf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-2c787d91-7197-42cc-9ee6-870806f4904b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:07:19 compute-0 ovn_controller[153477]: 2025-11-25T17:07:19Z|00150|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:09:40:a3 10.100.0.6
Nov 25 17:07:19 compute-0 ovn_controller[153477]: 2025-11-25T17:07:19Z|00151|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:09:40:a3 10.100.0.6
Nov 25 17:07:19 compute-0 nova_compute[254092]: 2025-11-25 17:07:19.605 254096 DEBUG nova.network.neutron [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:07:20 compute-0 ceph-mon[74985]: pgmap v2513: 321 pgs: 321 active+clean; 88 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 66 op/s
Nov 25 17:07:20 compute-0 nova_compute[254092]: 2025-11-25 17:07:20.432 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2514: 321 pgs: 321 active+clean; 126 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.3 MiB/s wr, 102 op/s
Nov 25 17:07:21 compute-0 nova_compute[254092]: 2025-11-25 17:07:21.012 254096 DEBUG nova.network.neutron [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Updating instance_info_cache with network_info: [{"id": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "address": "fa:16:3e:51:b5:45", "network": {"id": "fe8ef6db-e551-4904-a3ea-4af9320e49b5", "bridge": "br-int", "label": "tempest-network-smoke--1027346304", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7eaeb08-d9", "ovs_interfaceid": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:07:21 compute-0 nova_compute[254092]: 2025-11-25 17:07:21.031 254096 DEBUG oslo_concurrency.lockutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Releasing lock "refresh_cache-2c787d91-7197-42cc-9ee6-870806f4904b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:07:21 compute-0 nova_compute[254092]: 2025-11-25 17:07:21.031 254096 DEBUG nova.compute.manager [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Instance network_info: |[{"id": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "address": "fa:16:3e:51:b5:45", "network": {"id": "fe8ef6db-e551-4904-a3ea-4af9320e49b5", "bridge": "br-int", "label": "tempest-network-smoke--1027346304", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7eaeb08-d9", "ovs_interfaceid": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 17:07:21 compute-0 nova_compute[254092]: 2025-11-25 17:07:21.032 254096 DEBUG oslo_concurrency.lockutils [req-719feb89-fe4a-491c-a416-fac55588d79a req-8adade26-a268-472a-a7b4-a1699c810bdf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-2c787d91-7197-42cc-9ee6-870806f4904b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:07:21 compute-0 nova_compute[254092]: 2025-11-25 17:07:21.032 254096 DEBUG nova.network.neutron [req-719feb89-fe4a-491c-a416-fac55588d79a req-8adade26-a268-472a-a7b4-a1699c810bdf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Refreshing network info cache for port c7eaeb08-d94a-4ecb-a87f-459a8d848a74 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:07:21 compute-0 nova_compute[254092]: 2025-11-25 17:07:21.037 254096 DEBUG nova.virt.libvirt.driver [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Start _get_guest_xml network_info=[{"id": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "address": "fa:16:3e:51:b5:45", "network": {"id": "fe8ef6db-e551-4904-a3ea-4af9320e49b5", "bridge": "br-int", "label": "tempest-network-smoke--1027346304", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7eaeb08-d9", "ovs_interfaceid": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 17:07:21 compute-0 nova_compute[254092]: 2025-11-25 17:07:21.044 254096 WARNING nova.virt.libvirt.driver [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:07:21 compute-0 nova_compute[254092]: 2025-11-25 17:07:21.059 254096 DEBUG nova.virt.libvirt.host [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 17:07:21 compute-0 nova_compute[254092]: 2025-11-25 17:07:21.060 254096 DEBUG nova.virt.libvirt.host [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 17:07:21 compute-0 nova_compute[254092]: 2025-11-25 17:07:21.064 254096 DEBUG nova.virt.libvirt.host [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 17:07:21 compute-0 nova_compute[254092]: 2025-11-25 17:07:21.065 254096 DEBUG nova.virt.libvirt.host [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 17:07:21 compute-0 nova_compute[254092]: 2025-11-25 17:07:21.066 254096 DEBUG nova.virt.libvirt.driver [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 17:07:21 compute-0 nova_compute[254092]: 2025-11-25 17:07:21.066 254096 DEBUG nova.virt.hardware [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 17:07:21 compute-0 nova_compute[254092]: 2025-11-25 17:07:21.067 254096 DEBUG nova.virt.hardware [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 17:07:21 compute-0 nova_compute[254092]: 2025-11-25 17:07:21.067 254096 DEBUG nova.virt.hardware [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 17:07:21 compute-0 nova_compute[254092]: 2025-11-25 17:07:21.068 254096 DEBUG nova.virt.hardware [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 17:07:21 compute-0 nova_compute[254092]: 2025-11-25 17:07:21.068 254096 DEBUG nova.virt.hardware [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 17:07:21 compute-0 nova_compute[254092]: 2025-11-25 17:07:21.069 254096 DEBUG nova.virt.hardware [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 17:07:21 compute-0 nova_compute[254092]: 2025-11-25 17:07:21.069 254096 DEBUG nova.virt.hardware [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 17:07:21 compute-0 nova_compute[254092]: 2025-11-25 17:07:21.070 254096 DEBUG nova.virt.hardware [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 17:07:21 compute-0 nova_compute[254092]: 2025-11-25 17:07:21.070 254096 DEBUG nova.virt.hardware [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 17:07:21 compute-0 nova_compute[254092]: 2025-11-25 17:07:21.071 254096 DEBUG nova.virt.hardware [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 17:07:21 compute-0 nova_compute[254092]: 2025-11-25 17:07:21.071 254096 DEBUG nova.virt.hardware [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 17:07:21 compute-0 nova_compute[254092]: 2025-11-25 17:07:21.076 254096 DEBUG oslo_concurrency.processutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:07:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:07:21 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/798358495' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:07:21 compute-0 nova_compute[254092]: 2025-11-25 17:07:21.543 254096 DEBUG oslo_concurrency.processutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:07:21 compute-0 nova_compute[254092]: 2025-11-25 17:07:21.565 254096 DEBUG nova.storage.rbd_utils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 2c787d91-7197-42cc-9ee6-870806f4904b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:07:21 compute-0 nova_compute[254092]: 2025-11-25 17:07:21.569 254096 DEBUG oslo_concurrency.processutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:07:22 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:07:22 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/376944058' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:07:22 compute-0 nova_compute[254092]: 2025-11-25 17:07:22.046 254096 DEBUG oslo_concurrency.processutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:07:22 compute-0 nova_compute[254092]: 2025-11-25 17:07:22.048 254096 DEBUG nova.virt.libvirt.vif [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:07:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1986324305',display_name='tempest-TestNetworkBasicOps-server-1986324305',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1986324305',id=127,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKrxKz+ClDmDPtttl68dpUGqC2hAC/9rCcixCbug4R50S5nwSU3HKhq3xj3GQLRg8Ve4nqve6H8xXdBSp8jAZjPjBXa2nszRROcY53aqbkyt1QUaGjq4KGsGK4a3amVOFw==',key_name='tempest-TestNetworkBasicOps-929152585',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-14xdfzgb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:07:16Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=2c787d91-7197-42cc-9ee6-870806f4904b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "address": "fa:16:3e:51:b5:45", "network": {"id": "fe8ef6db-e551-4904-a3ea-4af9320e49b5", "bridge": "br-int", "label": "tempest-network-smoke--1027346304", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7eaeb08-d9", "ovs_interfaceid": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:07:22 compute-0 nova_compute[254092]: 2025-11-25 17:07:22.048 254096 DEBUG nova.network.os_vif_util [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "address": "fa:16:3e:51:b5:45", "network": {"id": "fe8ef6db-e551-4904-a3ea-4af9320e49b5", "bridge": "br-int", "label": "tempest-network-smoke--1027346304", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7eaeb08-d9", "ovs_interfaceid": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:07:22 compute-0 nova_compute[254092]: 2025-11-25 17:07:22.049 254096 DEBUG nova.network.os_vif_util [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:51:b5:45,bridge_name='br-int',has_traffic_filtering=True,id=c7eaeb08-d94a-4ecb-a87f-459a8d848a74,network=Network(fe8ef6db-e551-4904-a3ea-4af9320e49b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc7eaeb08-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:07:22 compute-0 nova_compute[254092]: 2025-11-25 17:07:22.050 254096 DEBUG nova.objects.instance [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2c787d91-7197-42cc-9ee6-870806f4904b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:07:22 compute-0 nova_compute[254092]: 2025-11-25 17:07:22.067 254096 DEBUG nova.virt.libvirt.driver [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] End _get_guest_xml xml=<domain type="kvm">
Nov 25 17:07:22 compute-0 nova_compute[254092]:   <uuid>2c787d91-7197-42cc-9ee6-870806f4904b</uuid>
Nov 25 17:07:22 compute-0 nova_compute[254092]:   <name>instance-0000007f</name>
Nov 25 17:07:22 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 17:07:22 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 17:07:22 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:07:22 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:       <nova:name>tempest-TestNetworkBasicOps-server-1986324305</nova:name>
Nov 25 17:07:22 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 17:07:21</nova:creationTime>
Nov 25 17:07:22 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 17:07:22 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 17:07:22 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 17:07:22 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 17:07:22 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:07:22 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 17:07:22 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 17:07:22 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 17:07:22 compute-0 nova_compute[254092]:         <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 17:07:22 compute-0 nova_compute[254092]:         <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 17:07:22 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 17:07:22 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 17:07:22 compute-0 nova_compute[254092]:         <nova:port uuid="c7eaeb08-d94a-4ecb-a87f-459a8d848a74">
Nov 25 17:07:22 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 17:07:22 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 17:07:22 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:07:22 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 17:07:22 compute-0 nova_compute[254092]:     <system>
Nov 25 17:07:22 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 17:07:22 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 17:07:22 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:07:22 compute-0 nova_compute[254092]:       <entry name="serial">2c787d91-7197-42cc-9ee6-870806f4904b</entry>
Nov 25 17:07:22 compute-0 nova_compute[254092]:       <entry name="uuid">2c787d91-7197-42cc-9ee6-870806f4904b</entry>
Nov 25 17:07:22 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     </system>
Nov 25 17:07:22 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:07:22 compute-0 nova_compute[254092]:   <os>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:   </os>
Nov 25 17:07:22 compute-0 nova_compute[254092]:   <features>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:   </features>
Nov 25 17:07:22 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 17:07:22 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:07:22 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 17:07:22 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:07:22 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 17:07:22 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/2c787d91-7197-42cc-9ee6-870806f4904b_disk">
Nov 25 17:07:22 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:       </source>
Nov 25 17:07:22 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:07:22 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:07:22 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 17:07:22 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/2c787d91-7197-42cc-9ee6-870806f4904b_disk.config">
Nov 25 17:07:22 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:       </source>
Nov 25 17:07:22 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:07:22 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:07:22 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 17:07:22 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:51:b5:45"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:       <target dev="tapc7eaeb08-d9"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 17:07:22 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/2c787d91-7197-42cc-9ee6-870806f4904b/console.log" append="off"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     <video>
Nov 25 17:07:22 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     </video>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 17:07:22 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 17:07:22 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 17:07:22 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:07:22 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:07:22 compute-0 nova_compute[254092]: </domain>
Nov 25 17:07:22 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 17:07:22 compute-0 nova_compute[254092]: 2025-11-25 17:07:22.068 254096 DEBUG nova.compute.manager [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Preparing to wait for external event network-vif-plugged-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 17:07:22 compute-0 nova_compute[254092]: 2025-11-25 17:07:22.068 254096 DEBUG oslo_concurrency.lockutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "2c787d91-7197-42cc-9ee6-870806f4904b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:07:22 compute-0 nova_compute[254092]: 2025-11-25 17:07:22.069 254096 DEBUG oslo_concurrency.lockutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "2c787d91-7197-42cc-9ee6-870806f4904b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:07:22 compute-0 nova_compute[254092]: 2025-11-25 17:07:22.069 254096 DEBUG oslo_concurrency.lockutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "2c787d91-7197-42cc-9ee6-870806f4904b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:07:22 compute-0 nova_compute[254092]: 2025-11-25 17:07:22.070 254096 DEBUG nova.virt.libvirt.vif [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:07:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1986324305',display_name='tempest-TestNetworkBasicOps-server-1986324305',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1986324305',id=127,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKrxKz+ClDmDPtttl68dpUGqC2hAC/9rCcixCbug4R50S5nwSU3HKhq3xj3GQLRg8Ve4nqve6H8xXdBSp8jAZjPjBXa2nszRROcY53aqbkyt1QUaGjq4KGsGK4a3amVOFw==',key_name='tempest-TestNetworkBasicOps-929152585',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-14xdfzgb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:07:16Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=2c787d91-7197-42cc-9ee6-870806f4904b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "address": "fa:16:3e:51:b5:45", "network": {"id": "fe8ef6db-e551-4904-a3ea-4af9320e49b5", "bridge": "br-int", "label": "tempest-network-smoke--1027346304", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": 
{}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7eaeb08-d9", "ovs_interfaceid": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:07:22 compute-0 nova_compute[254092]: 2025-11-25 17:07:22.071 254096 DEBUG nova.network.os_vif_util [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "address": "fa:16:3e:51:b5:45", "network": {"id": "fe8ef6db-e551-4904-a3ea-4af9320e49b5", "bridge": "br-int", "label": "tempest-network-smoke--1027346304", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7eaeb08-d9", "ovs_interfaceid": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:07:22 compute-0 nova_compute[254092]: 2025-11-25 17:07:22.071 254096 DEBUG nova.network.os_vif_util [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:51:b5:45,bridge_name='br-int',has_traffic_filtering=True,id=c7eaeb08-d94a-4ecb-a87f-459a8d848a74,network=Network(fe8ef6db-e551-4904-a3ea-4af9320e49b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc7eaeb08-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:07:22 compute-0 nova_compute[254092]: 2025-11-25 17:07:22.072 254096 DEBUG os_vif [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:b5:45,bridge_name='br-int',has_traffic_filtering=True,id=c7eaeb08-d94a-4ecb-a87f-459a8d848a74,network=Network(fe8ef6db-e551-4904-a3ea-4af9320e49b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc7eaeb08-d9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:07:22 compute-0 nova_compute[254092]: 2025-11-25 17:07:22.073 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:22 compute-0 nova_compute[254092]: 2025-11-25 17:07:22.073 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:07:22 compute-0 nova_compute[254092]: 2025-11-25 17:07:22.074 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:07:22 compute-0 nova_compute[254092]: 2025-11-25 17:07:22.077 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:22 compute-0 nova_compute[254092]: 2025-11-25 17:07:22.078 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc7eaeb08-d9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:07:22 compute-0 nova_compute[254092]: 2025-11-25 17:07:22.078 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc7eaeb08-d9, col_values=(('external_ids', {'iface-id': 'c7eaeb08-d94a-4ecb-a87f-459a8d848a74', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:51:b5:45', 'vm-uuid': '2c787d91-7197-42cc-9ee6-870806f4904b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:07:22 compute-0 nova_compute[254092]: 2025-11-25 17:07:22.118 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:22 compute-0 NetworkManager[48891]: <info>  [1764090442.1191] manager: (tapc7eaeb08-d9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/542)
Nov 25 17:07:22 compute-0 nova_compute[254092]: 2025-11-25 17:07:22.121 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:07:22 compute-0 nova_compute[254092]: 2025-11-25 17:07:22.126 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:22 compute-0 nova_compute[254092]: 2025-11-25 17:07:22.127 254096 INFO os_vif [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:b5:45,bridge_name='br-int',has_traffic_filtering=True,id=c7eaeb08-d94a-4ecb-a87f-459a8d848a74,network=Network(fe8ef6db-e551-4904-a3ea-4af9320e49b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc7eaeb08-d9')
Nov 25 17:07:22 compute-0 ceph-mon[74985]: pgmap v2514: 321 pgs: 321 active+clean; 126 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.3 MiB/s wr, 102 op/s
Nov 25 17:07:22 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/798358495' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:07:22 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/376944058' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:07:22 compute-0 nova_compute[254092]: 2025-11-25 17:07:22.178 254096 DEBUG nova.virt.libvirt.driver [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:07:22 compute-0 nova_compute[254092]: 2025-11-25 17:07:22.178 254096 DEBUG nova.virt.libvirt.driver [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:07:22 compute-0 nova_compute[254092]: 2025-11-25 17:07:22.179 254096 DEBUG nova.virt.libvirt.driver [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No VIF found with MAC fa:16:3e:51:b5:45, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:07:22 compute-0 nova_compute[254092]: 2025-11-25 17:07:22.179 254096 INFO nova.virt.libvirt.driver [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Using config drive
Nov 25 17:07:22 compute-0 nova_compute[254092]: 2025-11-25 17:07:22.199 254096 DEBUG nova.storage.rbd_utils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 2c787d91-7197-42cc-9ee6-870806f4904b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:07:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2515: 321 pgs: 321 active+clean; 167 MiB data, 923 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.9 MiB/s wr, 120 op/s
Nov 25 17:07:22 compute-0 nova_compute[254092]: 2025-11-25 17:07:22.801 254096 INFO nova.virt.libvirt.driver [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Creating config drive at /var/lib/nova/instances/2c787d91-7197-42cc-9ee6-870806f4904b/disk.config
Nov 25 17:07:22 compute-0 nova_compute[254092]: 2025-11-25 17:07:22.815 254096 DEBUG oslo_concurrency.processutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2c787d91-7197-42cc-9ee6-870806f4904b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuood24s_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:07:22 compute-0 nova_compute[254092]: 2025-11-25 17:07:22.862 254096 DEBUG nova.network.neutron [req-719feb89-fe4a-491c-a416-fac55588d79a req-8adade26-a268-472a-a7b4-a1699c810bdf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Updated VIF entry in instance network info cache for port c7eaeb08-d94a-4ecb-a87f-459a8d848a74. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:07:22 compute-0 nova_compute[254092]: 2025-11-25 17:07:22.863 254096 DEBUG nova.network.neutron [req-719feb89-fe4a-491c-a416-fac55588d79a req-8adade26-a268-472a-a7b4-a1699c810bdf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Updating instance_info_cache with network_info: [{"id": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "address": "fa:16:3e:51:b5:45", "network": {"id": "fe8ef6db-e551-4904-a3ea-4af9320e49b5", "bridge": "br-int", "label": "tempest-network-smoke--1027346304", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7eaeb08-d9", "ovs_interfaceid": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:07:22 compute-0 nova_compute[254092]: 2025-11-25 17:07:22.876 254096 DEBUG oslo_concurrency.lockutils [req-719feb89-fe4a-491c-a416-fac55588d79a req-8adade26-a268-472a-a7b4-a1699c810bdf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-2c787d91-7197-42cc-9ee6-870806f4904b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:07:22 compute-0 nova_compute[254092]: 2025-11-25 17:07:22.970 254096 DEBUG oslo_concurrency.processutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2c787d91-7197-42cc-9ee6-870806f4904b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuood24s_" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:07:22 compute-0 nova_compute[254092]: 2025-11-25 17:07:22.995 254096 DEBUG nova.storage.rbd_utils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 2c787d91-7197-42cc-9ee6-870806f4904b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:07:22 compute-0 nova_compute[254092]: 2025-11-25 17:07:22.998 254096 DEBUG oslo_concurrency.processutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2c787d91-7197-42cc-9ee6-870806f4904b/disk.config 2c787d91-7197-42cc-9ee6-870806f4904b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:07:23 compute-0 nova_compute[254092]: 2025-11-25 17:07:23.153 254096 DEBUG oslo_concurrency.processutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2c787d91-7197-42cc-9ee6-870806f4904b/disk.config 2c787d91-7197-42cc-9ee6-870806f4904b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:07:23 compute-0 nova_compute[254092]: 2025-11-25 17:07:23.154 254096 INFO nova.virt.libvirt.driver [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Deleting local config drive /var/lib/nova/instances/2c787d91-7197-42cc-9ee6-870806f4904b/disk.config because it was imported into RBD.
Nov 25 17:07:23 compute-0 kernel: tapc7eaeb08-d9: entered promiscuous mode
Nov 25 17:07:23 compute-0 NetworkManager[48891]: <info>  [1764090443.2018] manager: (tapc7eaeb08-d9): new Tun device (/org/freedesktop/NetworkManager/Devices/543)
Nov 25 17:07:23 compute-0 ovn_controller[153477]: 2025-11-25T17:07:23Z|01306|binding|INFO|Claiming lport c7eaeb08-d94a-4ecb-a87f-459a8d848a74 for this chassis.
Nov 25 17:07:23 compute-0 ovn_controller[153477]: 2025-11-25T17:07:23Z|01307|binding|INFO|c7eaeb08-d94a-4ecb-a87f-459a8d848a74: Claiming fa:16:3e:51:b5:45 10.100.0.7
Nov 25 17:07:23 compute-0 nova_compute[254092]: 2025-11-25 17:07:23.205 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.212 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:b5:45 10.100.0.7'], port_security=['fa:16:3e:51:b5:45 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1132643307', 'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2c787d91-7197-42cc-9ee6-870806f4904b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fe8ef6db-e551-4904-a3ea-4af9320e49b5', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1132643307', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '7', 'neutron:security_group_ids': '63a1678d-cfcd-4cc9-9221-d7d228d35cc1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.248'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8eb7fa40-72a8-424a-b9c3-ddea4719d3ea, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=c7eaeb08-d94a-4ecb-a87f-459a8d848a74) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.215 163338 INFO neutron.agent.ovn.metadata.agent [-] Port c7eaeb08-d94a-4ecb-a87f-459a8d848a74 in datapath fe8ef6db-e551-4904-a3ea-4af9320e49b5 bound to our chassis
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.217 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fe8ef6db-e551-4904-a3ea-4af9320e49b5
Nov 25 17:07:23 compute-0 ovn_controller[153477]: 2025-11-25T17:07:23Z|01308|binding|INFO|Setting lport c7eaeb08-d94a-4ecb-a87f-459a8d848a74 ovn-installed in OVS
Nov 25 17:07:23 compute-0 ovn_controller[153477]: 2025-11-25T17:07:23Z|01309|binding|INFO|Setting lport c7eaeb08-d94a-4ecb-a87f-459a8d848a74 up in Southbound
Nov 25 17:07:23 compute-0 nova_compute[254092]: 2025-11-25 17:07:23.221 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.232 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[84b0eb04-0989-4b22-8cc1-f3b6471858e5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.234 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfe8ef6db-e1 in ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.236 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfe8ef6db-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.236 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9d18c1ad-9913-4bbd-972e-5d634fe8c6a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.237 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1bcf1fea-bbba-4df7-9dda-e34bb41f8768]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:23 compute-0 systemd-machined[216343]: New machine qemu-159-instance-0000007f.
Nov 25 17:07:23 compute-0 systemd-udevd[393182]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.249 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[08a85ee2-cfdd-4699-8cee-1b50671d0f79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:23 compute-0 systemd[1]: Started Virtual Machine qemu-159-instance-0000007f.
Nov 25 17:07:23 compute-0 NetworkManager[48891]: <info>  [1764090443.2573] device (tapc7eaeb08-d9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:07:23 compute-0 NetworkManager[48891]: <info>  [1764090443.2585] device (tapc7eaeb08-d9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.271 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[792cc5c4-ae48-4db1-89bc-7927dcb20ceb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.298 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[4330e8c5-6cc8-443f-b48e-66f37cfa80e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.303 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[87bae3b5-4b1f-4a4b-be91-44f5f6f1f5c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:23 compute-0 systemd-udevd[393185]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:07:23 compute-0 NetworkManager[48891]: <info>  [1764090443.3051] manager: (tapfe8ef6db-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/544)
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.340 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a740a8bc-1a90-4c5a-aade-9ad7ffde5182]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.343 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d56b5a48-d26a-48bd-8dcd-e0cba0960cbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:23 compute-0 NetworkManager[48891]: <info>  [1764090443.3675] device (tapfe8ef6db-e0): carrier: link connected
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.374 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d9031fcd-13b8-41af-b98a-2aca7970edaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.390 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d4310ae3-8fd8-4d8d-be31-0c67f67fbbcf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfe8ef6db-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:aa:36:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 385], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 690095, 'reachable_time': 29700, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 393213, 'error': None, 'target': 'ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.405 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d728b746-6e82-41cf-88e6-5aab643f78c1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feaa:36f6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 690095, 'tstamp': 690095}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 393214, 'error': None, 'target': 'ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.421 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cb4cd10f-e997-420d-b0d9-6a20dd5a1d13]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfe8ef6db-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:aa:36:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 385], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 690095, 'reachable_time': 29700, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 393215, 'error': None, 'target': 'ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.456 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0e17cb5c-33f2-47c7-839e-17a4f691a4eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.517 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d247288f-7c86-4186-829b-6cf837da5244]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.518 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfe8ef6db-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.519 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.519 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfe8ef6db-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:07:23 compute-0 nova_compute[254092]: 2025-11-25 17:07:23.521 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:23 compute-0 NetworkManager[48891]: <info>  [1764090443.5217] manager: (tapfe8ef6db-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/545)
Nov 25 17:07:23 compute-0 kernel: tapfe8ef6db-e0: entered promiscuous mode
Nov 25 17:07:23 compute-0 nova_compute[254092]: 2025-11-25 17:07:23.525 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.526 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfe8ef6db-e0, col_values=(('external_ids', {'iface-id': 'abb62292-a5ed-40d9-8b98-1dcedecc4b03'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:07:23 compute-0 ovn_controller[153477]: 2025-11-25T17:07:23Z|01310|binding|INFO|Releasing lport abb62292-a5ed-40d9-8b98-1dcedecc4b03 from this chassis (sb_readonly=0)
Nov 25 17:07:23 compute-0 nova_compute[254092]: 2025-11-25 17:07:23.529 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:23 compute-0 nova_compute[254092]: 2025-11-25 17:07:23.542 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:23 compute-0 nova_compute[254092]: 2025-11-25 17:07:23.543 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.545 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fe8ef6db-e551-4904-a3ea-4af9320e49b5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fe8ef6db-e551-4904-a3ea-4af9320e49b5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.546 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0c8bf51f-010a-48e8-bb3e-5467f6094e58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.547 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]: global
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-fe8ef6db-e551-4904-a3ea-4af9320e49b5
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/fe8ef6db-e551-4904-a3ea-4af9320e49b5.pid.haproxy
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID fe8ef6db-e551-4904-a3ea-4af9320e49b5
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 17:07:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:23.547 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5', 'env', 'PROCESS_TAG=haproxy-fe8ef6db-e551-4904-a3ea-4af9320e49b5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fe8ef6db-e551-4904-a3ea-4af9320e49b5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 17:07:23 compute-0 nova_compute[254092]: 2025-11-25 17:07:23.750 254096 DEBUG nova.compute.manager [req-662bb182-fb6f-4d1d-ac31-6da1425bc10d req-a9880cb9-95e7-46cb-9d23-197011dd5f8f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Received event network-vif-plugged-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:07:23 compute-0 nova_compute[254092]: 2025-11-25 17:07:23.750 254096 DEBUG oslo_concurrency.lockutils [req-662bb182-fb6f-4d1d-ac31-6da1425bc10d req-a9880cb9-95e7-46cb-9d23-197011dd5f8f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2c787d91-7197-42cc-9ee6-870806f4904b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:07:23 compute-0 nova_compute[254092]: 2025-11-25 17:07:23.751 254096 DEBUG oslo_concurrency.lockutils [req-662bb182-fb6f-4d1d-ac31-6da1425bc10d req-a9880cb9-95e7-46cb-9d23-197011dd5f8f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2c787d91-7197-42cc-9ee6-870806f4904b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:07:23 compute-0 nova_compute[254092]: 2025-11-25 17:07:23.751 254096 DEBUG oslo_concurrency.lockutils [req-662bb182-fb6f-4d1d-ac31-6da1425bc10d req-a9880cb9-95e7-46cb-9d23-197011dd5f8f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2c787d91-7197-42cc-9ee6-870806f4904b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:07:23 compute-0 nova_compute[254092]: 2025-11-25 17:07:23.752 254096 DEBUG nova.compute.manager [req-662bb182-fb6f-4d1d-ac31-6da1425bc10d req-a9880cb9-95e7-46cb-9d23-197011dd5f8f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Processing event network-vif-plugged-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 17:07:23 compute-0 nova_compute[254092]: 2025-11-25 17:07:23.770 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090443.768909, 2c787d91-7197-42cc-9ee6-870806f4904b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:07:23 compute-0 nova_compute[254092]: 2025-11-25 17:07:23.770 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] VM Started (Lifecycle Event)
Nov 25 17:07:23 compute-0 nova_compute[254092]: 2025-11-25 17:07:23.773 254096 DEBUG nova.compute.manager [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 17:07:23 compute-0 nova_compute[254092]: 2025-11-25 17:07:23.778 254096 DEBUG nova.virt.libvirt.driver [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 17:07:23 compute-0 nova_compute[254092]: 2025-11-25 17:07:23.781 254096 INFO nova.virt.libvirt.driver [-] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Instance spawned successfully.
Nov 25 17:07:23 compute-0 nova_compute[254092]: 2025-11-25 17:07:23.782 254096 DEBUG nova.virt.libvirt.driver [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 17:07:23 compute-0 nova_compute[254092]: 2025-11-25 17:07:23.795 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:07:23 compute-0 nova_compute[254092]: 2025-11-25 17:07:23.801 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:07:23 compute-0 nova_compute[254092]: 2025-11-25 17:07:23.807 254096 DEBUG nova.virt.libvirt.driver [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:07:23 compute-0 nova_compute[254092]: 2025-11-25 17:07:23.807 254096 DEBUG nova.virt.libvirt.driver [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:07:23 compute-0 nova_compute[254092]: 2025-11-25 17:07:23.807 254096 DEBUG nova.virt.libvirt.driver [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:07:23 compute-0 nova_compute[254092]: 2025-11-25 17:07:23.808 254096 DEBUG nova.virt.libvirt.driver [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:07:23 compute-0 nova_compute[254092]: 2025-11-25 17:07:23.808 254096 DEBUG nova.virt.libvirt.driver [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:07:23 compute-0 nova_compute[254092]: 2025-11-25 17:07:23.808 254096 DEBUG nova.virt.libvirt.driver [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:07:23 compute-0 nova_compute[254092]: 2025-11-25 17:07:23.825 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:07:23 compute-0 nova_compute[254092]: 2025-11-25 17:07:23.826 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090443.7691536, 2c787d91-7197-42cc-9ee6-870806f4904b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:07:23 compute-0 nova_compute[254092]: 2025-11-25 17:07:23.826 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] VM Paused (Lifecycle Event)
Nov 25 17:07:23 compute-0 nova_compute[254092]: 2025-11-25 17:07:23.848 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:07:23 compute-0 nova_compute[254092]: 2025-11-25 17:07:23.852 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090443.7775419, 2c787d91-7197-42cc-9ee6-870806f4904b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:07:23 compute-0 nova_compute[254092]: 2025-11-25 17:07:23.852 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] VM Resumed (Lifecycle Event)
Nov 25 17:07:23 compute-0 nova_compute[254092]: 2025-11-25 17:07:23.875 254096 INFO nova.compute.manager [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Took 6.84 seconds to spawn the instance on the hypervisor.
Nov 25 17:07:23 compute-0 nova_compute[254092]: 2025-11-25 17:07:23.876 254096 DEBUG nova.compute.manager [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:07:23 compute-0 nova_compute[254092]: 2025-11-25 17:07:23.877 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:07:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:07:23 compute-0 nova_compute[254092]: 2025-11-25 17:07:23.883 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:07:23 compute-0 nova_compute[254092]: 2025-11-25 17:07:23.917 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:07:23 compute-0 podman[393289]: 2025-11-25 17:07:23.932045299 +0000 UTC m=+0.061863278 container create 0a41703d14b83196166ed6dd48fb97b7a8bcbff2c43f89e38d3da5321a88d5bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 17:07:23 compute-0 nova_compute[254092]: 2025-11-25 17:07:23.963 254096 INFO nova.compute.manager [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Took 7.96 seconds to build instance.
Nov 25 17:07:23 compute-0 systemd[1]: Started libpod-conmon-0a41703d14b83196166ed6dd48fb97b7a8bcbff2c43f89e38d3da5321a88d5bf.scope.
Nov 25 17:07:23 compute-0 nova_compute[254092]: 2025-11-25 17:07:23.995 254096 DEBUG oslo_concurrency.lockutils [None req-cd03f674-4942-4548-bf71-9a400e049cda e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "2c787d91-7197-42cc-9ee6-870806f4904b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:07:23 compute-0 podman[393289]: 2025-11-25 17:07:23.905972074 +0000 UTC m=+0.035790083 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 17:07:24 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:07:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b791ef2e7b3ae2735d694805a682a8604ad7328a1136199e68bfe2d8104709ce/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 17:07:24 compute-0 podman[393289]: 2025-11-25 17:07:24.025224884 +0000 UTC m=+0.155042933 container init 0a41703d14b83196166ed6dd48fb97b7a8bcbff2c43f89e38d3da5321a88d5bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 17:07:24 compute-0 podman[393289]: 2025-11-25 17:07:24.03016355 +0000 UTC m=+0.159981569 container start 0a41703d14b83196166ed6dd48fb97b7a8bcbff2c43f89e38d3da5321a88d5bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 17:07:24 compute-0 neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5[393304]: [NOTICE]   (393308) : New worker (393310) forked
Nov 25 17:07:24 compute-0 neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5[393304]: [NOTICE]   (393308) : Loading success.
Nov 25 17:07:24 compute-0 ceph-mon[74985]: pgmap v2515: 321 pgs: 321 active+clean; 167 MiB data, 923 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.9 MiB/s wr, 120 op/s
Nov 25 17:07:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2516: 321 pgs: 321 active+clean; 167 MiB data, 923 MiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Nov 25 17:07:25 compute-0 nova_compute[254092]: 2025-11-25 17:07:25.435 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:25 compute-0 nova_compute[254092]: 2025-11-25 17:07:25.863 254096 DEBUG nova.compute.manager [req-6412835c-b04c-4e4d-8a88-9a79c12dfe35 req-9605dfa6-7735-40df-bd0b-5fdac44bd670 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Received event network-vif-plugged-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:07:25 compute-0 nova_compute[254092]: 2025-11-25 17:07:25.864 254096 DEBUG oslo_concurrency.lockutils [req-6412835c-b04c-4e4d-8a88-9a79c12dfe35 req-9605dfa6-7735-40df-bd0b-5fdac44bd670 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2c787d91-7197-42cc-9ee6-870806f4904b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:07:25 compute-0 nova_compute[254092]: 2025-11-25 17:07:25.864 254096 DEBUG oslo_concurrency.lockutils [req-6412835c-b04c-4e4d-8a88-9a79c12dfe35 req-9605dfa6-7735-40df-bd0b-5fdac44bd670 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2c787d91-7197-42cc-9ee6-870806f4904b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:07:25 compute-0 nova_compute[254092]: 2025-11-25 17:07:25.864 254096 DEBUG oslo_concurrency.lockutils [req-6412835c-b04c-4e4d-8a88-9a79c12dfe35 req-9605dfa6-7735-40df-bd0b-5fdac44bd670 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2c787d91-7197-42cc-9ee6-870806f4904b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:07:25 compute-0 nova_compute[254092]: 2025-11-25 17:07:25.864 254096 DEBUG nova.compute.manager [req-6412835c-b04c-4e4d-8a88-9a79c12dfe35 req-9605dfa6-7735-40df-bd0b-5fdac44bd670 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] No waiting events found dispatching network-vif-plugged-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:07:25 compute-0 nova_compute[254092]: 2025-11-25 17:07:25.864 254096 WARNING nova.compute.manager [req-6412835c-b04c-4e4d-8a88-9a79c12dfe35 req-9605dfa6-7735-40df-bd0b-5fdac44bd670 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Received unexpected event network-vif-plugged-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 for instance with vm_state active and task_state None.
Nov 25 17:07:26 compute-0 ceph-mon[74985]: pgmap v2516: 321 pgs: 321 active+clean; 167 MiB data, 923 MiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Nov 25 17:07:26 compute-0 nova_compute[254092]: 2025-11-25 17:07:26.413 254096 DEBUG oslo_concurrency.lockutils [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "2c787d91-7197-42cc-9ee6-870806f4904b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:07:26 compute-0 nova_compute[254092]: 2025-11-25 17:07:26.413 254096 DEBUG oslo_concurrency.lockutils [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "2c787d91-7197-42cc-9ee6-870806f4904b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:07:26 compute-0 nova_compute[254092]: 2025-11-25 17:07:26.414 254096 DEBUG oslo_concurrency.lockutils [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "2c787d91-7197-42cc-9ee6-870806f4904b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:07:26 compute-0 nova_compute[254092]: 2025-11-25 17:07:26.414 254096 DEBUG oslo_concurrency.lockutils [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "2c787d91-7197-42cc-9ee6-870806f4904b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:07:26 compute-0 nova_compute[254092]: 2025-11-25 17:07:26.414 254096 DEBUG oslo_concurrency.lockutils [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "2c787d91-7197-42cc-9ee6-870806f4904b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:07:26 compute-0 nova_compute[254092]: 2025-11-25 17:07:26.415 254096 INFO nova.compute.manager [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Terminating instance
Nov 25 17:07:26 compute-0 nova_compute[254092]: 2025-11-25 17:07:26.415 254096 DEBUG nova.compute.manager [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 17:07:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2517: 321 pgs: 321 active+clean; 167 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 162 op/s
Nov 25 17:07:26 compute-0 kernel: tapc7eaeb08-d9 (unregistering): left promiscuous mode
Nov 25 17:07:26 compute-0 NetworkManager[48891]: <info>  [1764090446.4617] device (tapc7eaeb08-d9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:07:26 compute-0 ovn_controller[153477]: 2025-11-25T17:07:26Z|01311|binding|INFO|Releasing lport c7eaeb08-d94a-4ecb-a87f-459a8d848a74 from this chassis (sb_readonly=0)
Nov 25 17:07:26 compute-0 ovn_controller[153477]: 2025-11-25T17:07:26Z|01312|binding|INFO|Setting lport c7eaeb08-d94a-4ecb-a87f-459a8d848a74 down in Southbound
Nov 25 17:07:26 compute-0 ovn_controller[153477]: 2025-11-25T17:07:26Z|01313|binding|INFO|Removing iface tapc7eaeb08-d9 ovn-installed in OVS
Nov 25 17:07:26 compute-0 nova_compute[254092]: 2025-11-25 17:07:26.527 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:26 compute-0 nova_compute[254092]: 2025-11-25 17:07:26.530 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:26.535 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:b5:45 10.100.0.7'], port_security=['fa:16:3e:51:b5:45 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1132643307', 'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2c787d91-7197-42cc-9ee6-870806f4904b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fe8ef6db-e551-4904-a3ea-4af9320e49b5', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1132643307', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '9', 'neutron:security_group_ids': '63a1678d-cfcd-4cc9-9221-d7d228d35cc1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.248', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8eb7fa40-72a8-424a-b9c3-ddea4719d3ea, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=c7eaeb08-d94a-4ecb-a87f-459a8d848a74) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:07:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:26.537 163338 INFO neutron.agent.ovn.metadata.agent [-] Port c7eaeb08-d94a-4ecb-a87f-459a8d848a74 in datapath fe8ef6db-e551-4904-a3ea-4af9320e49b5 unbound from our chassis
Nov 25 17:07:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:26.538 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fe8ef6db-e551-4904-a3ea-4af9320e49b5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 17:07:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:26.539 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d8fa72d3-db56-4eae-a6e6-c6ecbb979012]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:26.539 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5 namespace which is not needed anymore
Nov 25 17:07:26 compute-0 nova_compute[254092]: 2025-11-25 17:07:26.542 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:26 compute-0 systemd[1]: machine-qemu\x2d159\x2dinstance\x2d0000007f.scope: Deactivated successfully.
Nov 25 17:07:26 compute-0 systemd[1]: machine-qemu\x2d159\x2dinstance\x2d0000007f.scope: Consumed 3.247s CPU time.
Nov 25 17:07:26 compute-0 systemd-machined[216343]: Machine qemu-159-instance-0000007f terminated.
Nov 25 17:07:26 compute-0 nova_compute[254092]: 2025-11-25 17:07:26.650 254096 INFO nova.virt.libvirt.driver [-] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Instance destroyed successfully.
Nov 25 17:07:26 compute-0 nova_compute[254092]: 2025-11-25 17:07:26.651 254096 DEBUG nova.objects.instance [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'resources' on Instance uuid 2c787d91-7197-42cc-9ee6-870806f4904b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:07:26 compute-0 neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5[393304]: [NOTICE]   (393308) : haproxy version is 2.8.14-c23fe91
Nov 25 17:07:26 compute-0 neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5[393304]: [NOTICE]   (393308) : path to executable is /usr/sbin/haproxy
Nov 25 17:07:26 compute-0 neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5[393304]: [WARNING]  (393308) : Exiting Master process...
Nov 25 17:07:26 compute-0 neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5[393304]: [WARNING]  (393308) : Exiting Master process...
Nov 25 17:07:26 compute-0 neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5[393304]: [ALERT]    (393308) : Current worker (393310) exited with code 143 (Terminated)
Nov 25 17:07:26 compute-0 neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5[393304]: [WARNING]  (393308) : All workers exited. Exiting... (0)
Nov 25 17:07:26 compute-0 nova_compute[254092]: 2025-11-25 17:07:26.664 254096 DEBUG nova.virt.libvirt.vif [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:07:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1986324305',display_name='tempest-TestNetworkBasicOps-server-1986324305',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1986324305',id=127,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKrxKz+ClDmDPtttl68dpUGqC2hAC/9rCcixCbug4R50S5nwSU3HKhq3xj3GQLRg8Ve4nqve6H8xXdBSp8jAZjPjBXa2nszRROcY53aqbkyt1QUaGjq4KGsGK4a3amVOFw==',key_name='tempest-TestNetworkBasicOps-929152585',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:07:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-14xdfzgb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:07:23Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=2c787d91-7197-42cc-9ee6-870806f4904b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "address": "fa:16:3e:51:b5:45", "network": {"id": "fe8ef6db-e551-4904-a3ea-4af9320e49b5", "bridge": "br-int", "label": "tempest-network-smoke--1027346304", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7eaeb08-d9", "ovs_interfaceid": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:07:26 compute-0 nova_compute[254092]: 2025-11-25 17:07:26.665 254096 DEBUG nova.network.os_vif_util [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "address": "fa:16:3e:51:b5:45", "network": {"id": "fe8ef6db-e551-4904-a3ea-4af9320e49b5", "bridge": "br-int", "label": "tempest-network-smoke--1027346304", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7eaeb08-d9", "ovs_interfaceid": "c7eaeb08-d94a-4ecb-a87f-459a8d848a74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:07:26 compute-0 systemd[1]: libpod-0a41703d14b83196166ed6dd48fb97b7a8bcbff2c43f89e38d3da5321a88d5bf.scope: Deactivated successfully.
Nov 25 17:07:26 compute-0 nova_compute[254092]: 2025-11-25 17:07:26.665 254096 DEBUG nova.network.os_vif_util [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:51:b5:45,bridge_name='br-int',has_traffic_filtering=True,id=c7eaeb08-d94a-4ecb-a87f-459a8d848a74,network=Network(fe8ef6db-e551-4904-a3ea-4af9320e49b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc7eaeb08-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:07:26 compute-0 nova_compute[254092]: 2025-11-25 17:07:26.666 254096 DEBUG os_vif [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:b5:45,bridge_name='br-int',has_traffic_filtering=True,id=c7eaeb08-d94a-4ecb-a87f-459a8d848a74,network=Network(fe8ef6db-e551-4904-a3ea-4af9320e49b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc7eaeb08-d9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:07:26 compute-0 nova_compute[254092]: 2025-11-25 17:07:26.667 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:26 compute-0 nova_compute[254092]: 2025-11-25 17:07:26.667 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc7eaeb08-d9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:07:26 compute-0 nova_compute[254092]: 2025-11-25 17:07:26.669 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:26 compute-0 nova_compute[254092]: 2025-11-25 17:07:26.670 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:26 compute-0 nova_compute[254092]: 2025-11-25 17:07:26.672 254096 INFO os_vif [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:b5:45,bridge_name='br-int',has_traffic_filtering=True,id=c7eaeb08-d94a-4ecb-a87f-459a8d848a74,network=Network(fe8ef6db-e551-4904-a3ea-4af9320e49b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc7eaeb08-d9')
Nov 25 17:07:26 compute-0 podman[393342]: 2025-11-25 17:07:26.672932776 +0000 UTC m=+0.048741678 container died 0a41703d14b83196166ed6dd48fb97b7a8bcbff2c43f89e38d3da5321a88d5bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 17:07:26 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0a41703d14b83196166ed6dd48fb97b7a8bcbff2c43f89e38d3da5321a88d5bf-userdata-shm.mount: Deactivated successfully.
Nov 25 17:07:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-b791ef2e7b3ae2735d694805a682a8604ad7328a1136199e68bfe2d8104709ce-merged.mount: Deactivated successfully.
Nov 25 17:07:26 compute-0 podman[393342]: 2025-11-25 17:07:26.719046811 +0000 UTC m=+0.094855713 container cleanup 0a41703d14b83196166ed6dd48fb97b7a8bcbff2c43f89e38d3da5321a88d5bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 17:07:26 compute-0 systemd[1]: libpod-conmon-0a41703d14b83196166ed6dd48fb97b7a8bcbff2c43f89e38d3da5321a88d5bf.scope: Deactivated successfully.
Nov 25 17:07:26 compute-0 podman[393399]: 2025-11-25 17:07:26.780096365 +0000 UTC m=+0.042762994 container remove 0a41703d14b83196166ed6dd48fb97b7a8bcbff2c43f89e38d3da5321a88d5bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 17:07:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:26.784 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[25f6c480-9609-4151-9e90-9374b3b81b28]: (4, ('Tue Nov 25 05:07:26 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5 (0a41703d14b83196166ed6dd48fb97b7a8bcbff2c43f89e38d3da5321a88d5bf)\n0a41703d14b83196166ed6dd48fb97b7a8bcbff2c43f89e38d3da5321a88d5bf\nTue Nov 25 05:07:26 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5 (0a41703d14b83196166ed6dd48fb97b7a8bcbff2c43f89e38d3da5321a88d5bf)\n0a41703d14b83196166ed6dd48fb97b7a8bcbff2c43f89e38d3da5321a88d5bf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:26.786 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[65dda74e-96b3-4d06-80f6-53c2cb9816f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:26.787 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfe8ef6db-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:07:26 compute-0 nova_compute[254092]: 2025-11-25 17:07:26.789 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:26 compute-0 kernel: tapfe8ef6db-e0: left promiscuous mode
Nov 25 17:07:26 compute-0 nova_compute[254092]: 2025-11-25 17:07:26.791 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:26.793 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e0a1e698-5770-4c80-b8bc-50955926b60e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:26 compute-0 nova_compute[254092]: 2025-11-25 17:07:26.804 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:26.810 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cf76c340-a01c-49e7-8b0f-ecaa176b61b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:26.810 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d06b87b8-6263-4231-86e3-da3816d5ebd3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:26.824 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a4971bf7-b197-4788-8007-1945ac8c9fe6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 690088, 'reachable_time': 28420, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 393414, 'error': None, 'target': 'ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:26.827 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fe8ef6db-e551-4904-a3ea-4af9320e49b5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 17:07:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:26.828 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[bc3c28f3-aaa3-4d6b-b255-e72e015a09e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:26 compute-0 systemd[1]: run-netns-ovnmeta\x2dfe8ef6db\x2de551\x2d4904\x2da3ea\x2d4af9320e49b5.mount: Deactivated successfully.
Nov 25 17:07:27 compute-0 nova_compute[254092]: 2025-11-25 17:07:27.004 254096 INFO nova.virt.libvirt.driver [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Deleting instance files /var/lib/nova/instances/2c787d91-7197-42cc-9ee6-870806f4904b_del
Nov 25 17:07:27 compute-0 nova_compute[254092]: 2025-11-25 17:07:27.005 254096 INFO nova.virt.libvirt.driver [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Deletion of /var/lib/nova/instances/2c787d91-7197-42cc-9ee6-870806f4904b_del complete
Nov 25 17:07:27 compute-0 nova_compute[254092]: 2025-11-25 17:07:27.071 254096 INFO nova.compute.manager [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Took 0.66 seconds to destroy the instance on the hypervisor.
Nov 25 17:07:27 compute-0 nova_compute[254092]: 2025-11-25 17:07:27.072 254096 DEBUG oslo.service.loopingcall [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 17:07:27 compute-0 nova_compute[254092]: 2025-11-25 17:07:27.072 254096 DEBUG nova.compute.manager [-] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 17:07:27 compute-0 nova_compute[254092]: 2025-11-25 17:07:27.072 254096 DEBUG nova.network.neutron [-] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 17:07:27 compute-0 nova_compute[254092]: 2025-11-25 17:07:27.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:07:27 compute-0 nova_compute[254092]: 2025-11-25 17:07:27.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:07:27 compute-0 nova_compute[254092]: 2025-11-25 17:07:27.963 254096 DEBUG nova.compute.manager [req-42770828-64fe-47cb-9a48-709a12c566c0 req-9585c3cb-6d25-486c-b6a4-c53a06791abb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Received event network-vif-unplugged-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:07:27 compute-0 nova_compute[254092]: 2025-11-25 17:07:27.963 254096 DEBUG oslo_concurrency.lockutils [req-42770828-64fe-47cb-9a48-709a12c566c0 req-9585c3cb-6d25-486c-b6a4-c53a06791abb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2c787d91-7197-42cc-9ee6-870806f4904b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:07:27 compute-0 nova_compute[254092]: 2025-11-25 17:07:27.964 254096 DEBUG oslo_concurrency.lockutils [req-42770828-64fe-47cb-9a48-709a12c566c0 req-9585c3cb-6d25-486c-b6a4-c53a06791abb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2c787d91-7197-42cc-9ee6-870806f4904b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:07:27 compute-0 nova_compute[254092]: 2025-11-25 17:07:27.964 254096 DEBUG oslo_concurrency.lockutils [req-42770828-64fe-47cb-9a48-709a12c566c0 req-9585c3cb-6d25-486c-b6a4-c53a06791abb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2c787d91-7197-42cc-9ee6-870806f4904b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:07:27 compute-0 nova_compute[254092]: 2025-11-25 17:07:27.964 254096 DEBUG nova.compute.manager [req-42770828-64fe-47cb-9a48-709a12c566c0 req-9585c3cb-6d25-486c-b6a4-c53a06791abb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] No waiting events found dispatching network-vif-unplugged-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:07:27 compute-0 nova_compute[254092]: 2025-11-25 17:07:27.965 254096 DEBUG nova.compute.manager [req-42770828-64fe-47cb-9a48-709a12c566c0 req-9585c3cb-6d25-486c-b6a4-c53a06791abb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Received event network-vif-unplugged-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 17:07:27 compute-0 nova_compute[254092]: 2025-11-25 17:07:27.965 254096 DEBUG nova.compute.manager [req-42770828-64fe-47cb-9a48-709a12c566c0 req-9585c3cb-6d25-486c-b6a4-c53a06791abb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Received event network-vif-plugged-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:07:27 compute-0 nova_compute[254092]: 2025-11-25 17:07:27.965 254096 DEBUG oslo_concurrency.lockutils [req-42770828-64fe-47cb-9a48-709a12c566c0 req-9585c3cb-6d25-486c-b6a4-c53a06791abb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2c787d91-7197-42cc-9ee6-870806f4904b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:07:27 compute-0 nova_compute[254092]: 2025-11-25 17:07:27.966 254096 DEBUG oslo_concurrency.lockutils [req-42770828-64fe-47cb-9a48-709a12c566c0 req-9585c3cb-6d25-486c-b6a4-c53a06791abb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2c787d91-7197-42cc-9ee6-870806f4904b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:07:27 compute-0 nova_compute[254092]: 2025-11-25 17:07:27.966 254096 DEBUG oslo_concurrency.lockutils [req-42770828-64fe-47cb-9a48-709a12c566c0 req-9585c3cb-6d25-486c-b6a4-c53a06791abb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2c787d91-7197-42cc-9ee6-870806f4904b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:07:27 compute-0 nova_compute[254092]: 2025-11-25 17:07:27.966 254096 DEBUG nova.compute.manager [req-42770828-64fe-47cb-9a48-709a12c566c0 req-9585c3cb-6d25-486c-b6a4-c53a06791abb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] No waiting events found dispatching network-vif-plugged-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:07:27 compute-0 nova_compute[254092]: 2025-11-25 17:07:27.967 254096 WARNING nova.compute.manager [req-42770828-64fe-47cb-9a48-709a12c566c0 req-9585c3cb-6d25-486c-b6a4-c53a06791abb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Received unexpected event network-vif-plugged-c7eaeb08-d94a-4ecb-a87f-459a8d848a74 for instance with vm_state active and task_state deleting.
Nov 25 17:07:28 compute-0 ceph-mon[74985]: pgmap v2517: 321 pgs: 321 active+clean; 167 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 162 op/s
Nov 25 17:07:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2518: 321 pgs: 321 active+clean; 167 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 162 op/s
Nov 25 17:07:28 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:07:29 compute-0 nova_compute[254092]: 2025-11-25 17:07:29.504 254096 DEBUG nova.network.neutron [-] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:07:29 compute-0 nova_compute[254092]: 2025-11-25 17:07:29.542 254096 INFO nova.compute.manager [-] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Took 2.47 seconds to deallocate network for instance.
Nov 25 17:07:29 compute-0 nova_compute[254092]: 2025-11-25 17:07:29.595 254096 DEBUG oslo_concurrency.lockutils [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:07:29 compute-0 nova_compute[254092]: 2025-11-25 17:07:29.595 254096 DEBUG oslo_concurrency.lockutils [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:07:29 compute-0 nova_compute[254092]: 2025-11-25 17:07:29.769 254096 DEBUG oslo_concurrency.processutils [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:07:30 compute-0 ceph-mon[74985]: pgmap v2518: 321 pgs: 321 active+clean; 167 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 162 op/s
Nov 25 17:07:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:07:30 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/152362480' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:07:30 compute-0 nova_compute[254092]: 2025-11-25 17:07:30.200 254096 DEBUG oslo_concurrency.processutils [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:07:30 compute-0 nova_compute[254092]: 2025-11-25 17:07:30.208 254096 DEBUG nova.compute.provider_tree [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:07:30 compute-0 nova_compute[254092]: 2025-11-25 17:07:30.230 254096 DEBUG nova.scheduler.client.report [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:07:30 compute-0 nova_compute[254092]: 2025-11-25 17:07:30.265 254096 DEBUG oslo_concurrency.lockutils [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:07:30 compute-0 nova_compute[254092]: 2025-11-25 17:07:30.296 254096 INFO nova.scheduler.client.report [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Deleted allocations for instance 2c787d91-7197-42cc-9ee6-870806f4904b
Nov 25 17:07:30 compute-0 nova_compute[254092]: 2025-11-25 17:07:30.376 254096 DEBUG oslo_concurrency.lockutils [None req-ab50400f-9c9a-4f55-af1a-6177a73290a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "2c787d91-7197-42cc-9ee6-870806f4904b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.963s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:07:30 compute-0 nova_compute[254092]: 2025-11-25 17:07:30.436 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2519: 321 pgs: 321 active+clean; 144 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 166 op/s
Nov 25 17:07:31 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/152362480' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:07:31 compute-0 nova_compute[254092]: 2025-11-25 17:07:31.670 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:31 compute-0 nova_compute[254092]: 2025-11-25 17:07:31.906 254096 DEBUG oslo_concurrency.lockutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "600da46c-eccb-4422-9531-4fa91fdda153" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:07:31 compute-0 nova_compute[254092]: 2025-11-25 17:07:31.907 254096 DEBUG oslo_concurrency.lockutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "600da46c-eccb-4422-9531-4fa91fdda153" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:07:31 compute-0 nova_compute[254092]: 2025-11-25 17:07:31.926 254096 DEBUG nova.compute.manager [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 17:07:31 compute-0 nova_compute[254092]: 2025-11-25 17:07:31.998 254096 DEBUG oslo_concurrency.lockutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:07:31 compute-0 nova_compute[254092]: 2025-11-25 17:07:31.998 254096 DEBUG oslo_concurrency.lockutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:07:32 compute-0 nova_compute[254092]: 2025-11-25 17:07:32.005 254096 DEBUG nova.virt.hardware [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 17:07:32 compute-0 nova_compute[254092]: 2025-11-25 17:07:32.006 254096 INFO nova.compute.claims [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Claim successful on node compute-0.ctlplane.example.com
Nov 25 17:07:32 compute-0 nova_compute[254092]: 2025-11-25 17:07:32.138 254096 DEBUG oslo_concurrency.processutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:07:32 compute-0 ceph-mon[74985]: pgmap v2519: 321 pgs: 321 active+clean; 144 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 166 op/s
Nov 25 17:07:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2520: 321 pgs: 321 active+clean; 121 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.6 MiB/s wr, 155 op/s
Nov 25 17:07:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:07:32 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/147495736' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:07:32 compute-0 nova_compute[254092]: 2025-11-25 17:07:32.644 254096 DEBUG oslo_concurrency.processutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:07:32 compute-0 nova_compute[254092]: 2025-11-25 17:07:32.652 254096 DEBUG nova.compute.provider_tree [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:07:32 compute-0 nova_compute[254092]: 2025-11-25 17:07:32.675 254096 DEBUG nova.scheduler.client.report [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:07:32 compute-0 nova_compute[254092]: 2025-11-25 17:07:32.703 254096 DEBUG oslo_concurrency.lockutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:07:32 compute-0 nova_compute[254092]: 2025-11-25 17:07:32.704 254096 DEBUG nova.compute.manager [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 17:07:32 compute-0 nova_compute[254092]: 2025-11-25 17:07:32.775 254096 DEBUG nova.compute.manager [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 17:07:32 compute-0 nova_compute[254092]: 2025-11-25 17:07:32.776 254096 DEBUG nova.network.neutron [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 17:07:32 compute-0 nova_compute[254092]: 2025-11-25 17:07:32.801 254096 INFO nova.virt.libvirt.driver [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 17:07:32 compute-0 nova_compute[254092]: 2025-11-25 17:07:32.831 254096 DEBUG nova.compute.manager [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 17:07:32 compute-0 nova_compute[254092]: 2025-11-25 17:07:32.960 254096 DEBUG nova.compute.manager [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 17:07:32 compute-0 nova_compute[254092]: 2025-11-25 17:07:32.962 254096 DEBUG nova.virt.libvirt.driver [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 17:07:32 compute-0 nova_compute[254092]: 2025-11-25 17:07:32.963 254096 INFO nova.virt.libvirt.driver [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Creating image(s)
Nov 25 17:07:33 compute-0 nova_compute[254092]: 2025-11-25 17:07:33.005 254096 DEBUG nova.storage.rbd_utils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image 600da46c-eccb-4422-9531-4fa91fdda153_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:07:33 compute-0 nova_compute[254092]: 2025-11-25 17:07:33.043 254096 DEBUG nova.storage.rbd_utils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image 600da46c-eccb-4422-9531-4fa91fdda153_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:07:33 compute-0 nova_compute[254092]: 2025-11-25 17:07:33.075 254096 DEBUG nova.storage.rbd_utils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image 600da46c-eccb-4422-9531-4fa91fdda153_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:07:33 compute-0 nova_compute[254092]: 2025-11-25 17:07:33.079 254096 DEBUG oslo_concurrency.processutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:07:33 compute-0 nova_compute[254092]: 2025-11-25 17:07:33.116 254096 DEBUG nova.policy [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '72f29c5d97464520bee19cb128258fd5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f19d2d0776e84c3c99c640c0b7ed3743', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 17:07:33 compute-0 nova_compute[254092]: 2025-11-25 17:07:33.153 254096 DEBUG oslo_concurrency.processutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:07:33 compute-0 nova_compute[254092]: 2025-11-25 17:07:33.154 254096 DEBUG oslo_concurrency.lockutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:07:33 compute-0 nova_compute[254092]: 2025-11-25 17:07:33.155 254096 DEBUG oslo_concurrency.lockutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:07:33 compute-0 nova_compute[254092]: 2025-11-25 17:07:33.155 254096 DEBUG oslo_concurrency.lockutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:07:33 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/147495736' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:07:33 compute-0 nova_compute[254092]: 2025-11-25 17:07:33.193 254096 DEBUG nova.storage.rbd_utils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image 600da46c-eccb-4422-9531-4fa91fdda153_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:07:33 compute-0 nova_compute[254092]: 2025-11-25 17:07:33.199 254096 DEBUG oslo_concurrency.processutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 600da46c-eccb-4422-9531-4fa91fdda153_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:07:33 compute-0 nova_compute[254092]: 2025-11-25 17:07:33.528 254096 DEBUG oslo_concurrency.processutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 600da46c-eccb-4422-9531-4fa91fdda153_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.330s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:07:33 compute-0 nova_compute[254092]: 2025-11-25 17:07:33.615 254096 DEBUG nova.storage.rbd_utils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] resizing rbd image 600da46c-eccb-4422-9531-4fa91fdda153_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 17:07:33 compute-0 nova_compute[254092]: 2025-11-25 17:07:33.747 254096 DEBUG nova.objects.instance [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lazy-loading 'migration_context' on Instance uuid 600da46c-eccb-4422-9531-4fa91fdda153 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:07:33 compute-0 nova_compute[254092]: 2025-11-25 17:07:33.764 254096 DEBUG nova.virt.libvirt.driver [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 17:07:33 compute-0 nova_compute[254092]: 2025-11-25 17:07:33.765 254096 DEBUG nova.virt.libvirt.driver [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Ensure instance console log exists: /var/lib/nova/instances/600da46c-eccb-4422-9531-4fa91fdda153/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 17:07:33 compute-0 nova_compute[254092]: 2025-11-25 17:07:33.765 254096 DEBUG oslo_concurrency.lockutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:07:33 compute-0 nova_compute[254092]: 2025-11-25 17:07:33.765 254096 DEBUG oslo_concurrency.lockutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:07:33 compute-0 nova_compute[254092]: 2025-11-25 17:07:33.765 254096 DEBUG oslo_concurrency.lockutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:07:33 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:07:34 compute-0 nova_compute[254092]: 2025-11-25 17:07:34.118 254096 DEBUG nova.network.neutron [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Successfully created port: 18b66f97-4edf-40c8-b35b-66005e28732c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 17:07:34 compute-0 ceph-mon[74985]: pgmap v2520: 321 pgs: 321 active+clean; 121 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.6 MiB/s wr, 155 op/s
Nov 25 17:07:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2521: 321 pgs: 321 active+clean; 121 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 100 op/s
Nov 25 17:07:34 compute-0 nova_compute[254092]: 2025-11-25 17:07:34.501 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:07:35 compute-0 nova_compute[254092]: 2025-11-25 17:07:35.194 254096 DEBUG nova.network.neutron [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Successfully updated port: 18b66f97-4edf-40c8-b35b-66005e28732c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:07:35 compute-0 nova_compute[254092]: 2025-11-25 17:07:35.207 254096 DEBUG oslo_concurrency.lockutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "refresh_cache-600da46c-eccb-4422-9531-4fa91fdda153" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:07:35 compute-0 nova_compute[254092]: 2025-11-25 17:07:35.208 254096 DEBUG oslo_concurrency.lockutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquired lock "refresh_cache-600da46c-eccb-4422-9531-4fa91fdda153" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:07:35 compute-0 nova_compute[254092]: 2025-11-25 17:07:35.209 254096 DEBUG nova.network.neutron [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 17:07:35 compute-0 nova_compute[254092]: 2025-11-25 17:07:35.300 254096 DEBUG nova.compute.manager [req-3f649425-5c1e-4637-b373-c4a714d7c98b req-dae568a0-cdf9-4b4b-b739-14e5bb8ca3f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Received event network-changed-18b66f97-4edf-40c8-b35b-66005e28732c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:07:35 compute-0 nova_compute[254092]: 2025-11-25 17:07:35.301 254096 DEBUG nova.compute.manager [req-3f649425-5c1e-4637-b373-c4a714d7c98b req-dae568a0-cdf9-4b4b-b739-14e5bb8ca3f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Refreshing instance network info cache due to event network-changed-18b66f97-4edf-40c8-b35b-66005e28732c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:07:35 compute-0 nova_compute[254092]: 2025-11-25 17:07:35.301 254096 DEBUG oslo_concurrency.lockutils [req-3f649425-5c1e-4637-b373-c4a714d7c98b req-dae568a0-cdf9-4b4b-b739-14e5bb8ca3f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-600da46c-eccb-4422-9531-4fa91fdda153" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:07:35 compute-0 nova_compute[254092]: 2025-11-25 17:07:35.382 254096 DEBUG nova.network.neutron [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:07:35 compute-0 nova_compute[254092]: 2025-11-25 17:07:35.438 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:36 compute-0 ceph-mon[74985]: pgmap v2521: 321 pgs: 321 active+clean; 121 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 100 op/s
Nov 25 17:07:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2522: 321 pgs: 321 active+clean; 167 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Nov 25 17:07:36 compute-0 nova_compute[254092]: 2025-11-25 17:07:36.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:07:36 compute-0 nova_compute[254092]: 2025-11-25 17:07:36.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:07:36 compute-0 ovn_controller[153477]: 2025-11-25T17:07:36Z|01314|binding|INFO|Releasing lport 50aec7ed-4f15-4d72-87dd-48c327de28ce from this chassis (sb_readonly=0)
Nov 25 17:07:36 compute-0 nova_compute[254092]: 2025-11-25 17:07:36.604 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:36 compute-0 nova_compute[254092]: 2025-11-25 17:07:36.672 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:37 compute-0 nova_compute[254092]: 2025-11-25 17:07:37.217 254096 DEBUG nova.network.neutron [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Updating instance_info_cache with network_info: [{"id": "18b66f97-4edf-40c8-b35b-66005e28732c", "address": "fa:16:3e:9b:97:5e", "network": {"id": "ef2caff8-43ec-4364-a979-521405023410", "bridge": "br-int", "label": "tempest-network-smoke--168780319", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b66f97-4e", "ovs_interfaceid": "18b66f97-4edf-40c8-b35b-66005e28732c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:07:37 compute-0 nova_compute[254092]: 2025-11-25 17:07:37.246 254096 DEBUG oslo_concurrency.lockutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Releasing lock "refresh_cache-600da46c-eccb-4422-9531-4fa91fdda153" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:07:37 compute-0 nova_compute[254092]: 2025-11-25 17:07:37.247 254096 DEBUG nova.compute.manager [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Instance network_info: |[{"id": "18b66f97-4edf-40c8-b35b-66005e28732c", "address": "fa:16:3e:9b:97:5e", "network": {"id": "ef2caff8-43ec-4364-a979-521405023410", "bridge": "br-int", "label": "tempest-network-smoke--168780319", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b66f97-4e", "ovs_interfaceid": "18b66f97-4edf-40c8-b35b-66005e28732c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 17:07:37 compute-0 nova_compute[254092]: 2025-11-25 17:07:37.248 254096 DEBUG oslo_concurrency.lockutils [req-3f649425-5c1e-4637-b373-c4a714d7c98b req-dae568a0-cdf9-4b4b-b739-14e5bb8ca3f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-600da46c-eccb-4422-9531-4fa91fdda153" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:07:37 compute-0 nova_compute[254092]: 2025-11-25 17:07:37.249 254096 DEBUG nova.network.neutron [req-3f649425-5c1e-4637-b373-c4a714d7c98b req-dae568a0-cdf9-4b4b-b739-14e5bb8ca3f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Refreshing network info cache for port 18b66f97-4edf-40c8-b35b-66005e28732c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:07:37 compute-0 nova_compute[254092]: 2025-11-25 17:07:37.254 254096 DEBUG nova.virt.libvirt.driver [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Start _get_guest_xml network_info=[{"id": "18b66f97-4edf-40c8-b35b-66005e28732c", "address": "fa:16:3e:9b:97:5e", "network": {"id": "ef2caff8-43ec-4364-a979-521405023410", "bridge": "br-int", "label": "tempest-network-smoke--168780319", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b66f97-4e", "ovs_interfaceid": "18b66f97-4edf-40c8-b35b-66005e28732c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 17:07:37 compute-0 nova_compute[254092]: 2025-11-25 17:07:37.262 254096 WARNING nova.virt.libvirt.driver [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:07:37 compute-0 nova_compute[254092]: 2025-11-25 17:07:37.273 254096 DEBUG nova.virt.libvirt.host [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 17:07:37 compute-0 nova_compute[254092]: 2025-11-25 17:07:37.274 254096 DEBUG nova.virt.libvirt.host [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 17:07:37 compute-0 nova_compute[254092]: 2025-11-25 17:07:37.280 254096 DEBUG nova.virt.libvirt.host [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 17:07:37 compute-0 nova_compute[254092]: 2025-11-25 17:07:37.282 254096 DEBUG nova.virt.libvirt.host [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 17:07:37 compute-0 nova_compute[254092]: 2025-11-25 17:07:37.282 254096 DEBUG nova.virt.libvirt.driver [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 17:07:37 compute-0 nova_compute[254092]: 2025-11-25 17:07:37.283 254096 DEBUG nova.virt.hardware [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 17:07:37 compute-0 nova_compute[254092]: 2025-11-25 17:07:37.284 254096 DEBUG nova.virt.hardware [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 17:07:37 compute-0 nova_compute[254092]: 2025-11-25 17:07:37.284 254096 DEBUG nova.virt.hardware [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 17:07:37 compute-0 nova_compute[254092]: 2025-11-25 17:07:37.284 254096 DEBUG nova.virt.hardware [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 17:07:37 compute-0 nova_compute[254092]: 2025-11-25 17:07:37.285 254096 DEBUG nova.virt.hardware [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 17:07:37 compute-0 nova_compute[254092]: 2025-11-25 17:07:37.285 254096 DEBUG nova.virt.hardware [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 17:07:37 compute-0 nova_compute[254092]: 2025-11-25 17:07:37.285 254096 DEBUG nova.virt.hardware [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 17:07:37 compute-0 nova_compute[254092]: 2025-11-25 17:07:37.286 254096 DEBUG nova.virt.hardware [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 17:07:37 compute-0 nova_compute[254092]: 2025-11-25 17:07:37.286 254096 DEBUG nova.virt.hardware [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 17:07:37 compute-0 nova_compute[254092]: 2025-11-25 17:07:37.286 254096 DEBUG nova.virt.hardware [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 17:07:37 compute-0 nova_compute[254092]: 2025-11-25 17:07:37.287 254096 DEBUG nova.virt.hardware [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 17:07:37 compute-0 nova_compute[254092]: 2025-11-25 17:07:37.293 254096 DEBUG oslo_concurrency.processutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:07:37 compute-0 nova_compute[254092]: 2025-11-25 17:07:37.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:07:37 compute-0 nova_compute[254092]: 2025-11-25 17:07:37.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:07:37 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:07:37 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1379043969' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:07:37 compute-0 nova_compute[254092]: 2025-11-25 17:07:37.762 254096 DEBUG oslo_concurrency.processutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:07:37 compute-0 nova_compute[254092]: 2025-11-25 17:07:37.784 254096 DEBUG nova.storage.rbd_utils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image 600da46c-eccb-4422-9531-4fa91fdda153_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:07:37 compute-0 nova_compute[254092]: 2025-11-25 17:07:37.789 254096 DEBUG oslo_concurrency.processutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:07:38 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:07:38 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2765316076' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:07:38 compute-0 ceph-mon[74985]: pgmap v2522: 321 pgs: 321 active+clean; 167 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Nov 25 17:07:38 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1379043969' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:07:38 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2765316076' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:07:38 compute-0 nova_compute[254092]: 2025-11-25 17:07:38.216 254096 DEBUG oslo_concurrency.processutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:07:38 compute-0 nova_compute[254092]: 2025-11-25 17:07:38.218 254096 DEBUG nova.virt.libvirt.vif [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:07:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-gen-1-1657599294',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-gen-1-1657599294',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-804365052-gen',id=128,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJewMHMcnizEkGIStteBKbtUtrMJBt/EToyGgUtC6JrzbfMY3fgmQCA8nS1qzlnKU2lH+JSVFP+p/6sjgfKpYyiluFasU1UBquzWDp3IJiLFG1sq+FYLyyElJSOAHc9Qrw==',key_name='tempest-TestSecurityGroupsBasicOps-1818642837',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f19d2d0776e84c3c99c640c0b7ed3743',ramdisk_id='',reservation_id='r-vyqdrhti',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-804365052',owner_user_name='tempest-TestSecurityGroupsBasicOps-804365052-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:07:32Z,user_data=None,user_id='72f29c5d97464520bee19cb128258fd5',uuid=600da46c-eccb-4422-9531-4fa91fdda153,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "18b66f97-4edf-40c8-b35b-66005e28732c", "address": "fa:16:3e:9b:97:5e", "network": {"id": "ef2caff8-43ec-4364-a979-521405023410", "bridge": "br-int", "label": "tempest-network-smoke--168780319", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b66f97-4e", "ovs_interfaceid": "18b66f97-4edf-40c8-b35b-66005e28732c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:07:38 compute-0 nova_compute[254092]: 2025-11-25 17:07:38.218 254096 DEBUG nova.network.os_vif_util [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converting VIF {"id": "18b66f97-4edf-40c8-b35b-66005e28732c", "address": "fa:16:3e:9b:97:5e", "network": {"id": "ef2caff8-43ec-4364-a979-521405023410", "bridge": "br-int", "label": "tempest-network-smoke--168780319", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b66f97-4e", "ovs_interfaceid": "18b66f97-4edf-40c8-b35b-66005e28732c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:07:38 compute-0 nova_compute[254092]: 2025-11-25 17:07:38.219 254096 DEBUG nova.network.os_vif_util [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:97:5e,bridge_name='br-int',has_traffic_filtering=True,id=18b66f97-4edf-40c8-b35b-66005e28732c,network=Network(ef2caff8-43ec-4364-a979-521405023410),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18b66f97-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:07:38 compute-0 nova_compute[254092]: 2025-11-25 17:07:38.221 254096 DEBUG nova.objects.instance [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lazy-loading 'pci_devices' on Instance uuid 600da46c-eccb-4422-9531-4fa91fdda153 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:07:38 compute-0 nova_compute[254092]: 2025-11-25 17:07:38.236 254096 DEBUG nova.virt.libvirt.driver [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] End _get_guest_xml xml=<domain type="kvm">
Nov 25 17:07:38 compute-0 nova_compute[254092]:   <uuid>600da46c-eccb-4422-9531-4fa91fdda153</uuid>
Nov 25 17:07:38 compute-0 nova_compute[254092]:   <name>instance-00000080</name>
Nov 25 17:07:38 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 17:07:38 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 17:07:38 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:07:38 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-gen-1-1657599294</nova:name>
Nov 25 17:07:38 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 17:07:37</nova:creationTime>
Nov 25 17:07:38 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 17:07:38 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 17:07:38 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 17:07:38 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 17:07:38 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:07:38 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 17:07:38 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 17:07:38 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 17:07:38 compute-0 nova_compute[254092]:         <nova:user uuid="72f29c5d97464520bee19cb128258fd5">tempest-TestSecurityGroupsBasicOps-804365052-project-member</nova:user>
Nov 25 17:07:38 compute-0 nova_compute[254092]:         <nova:project uuid="f19d2d0776e84c3c99c640c0b7ed3743">tempest-TestSecurityGroupsBasicOps-804365052</nova:project>
Nov 25 17:07:38 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 17:07:38 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 17:07:38 compute-0 nova_compute[254092]:         <nova:port uuid="18b66f97-4edf-40c8-b35b-66005e28732c">
Nov 25 17:07:38 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 17:07:38 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 17:07:38 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:07:38 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 17:07:38 compute-0 nova_compute[254092]:     <system>
Nov 25 17:07:38 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 17:07:38 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 17:07:38 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:07:38 compute-0 nova_compute[254092]:       <entry name="serial">600da46c-eccb-4422-9531-4fa91fdda153</entry>
Nov 25 17:07:38 compute-0 nova_compute[254092]:       <entry name="uuid">600da46c-eccb-4422-9531-4fa91fdda153</entry>
Nov 25 17:07:38 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     </system>
Nov 25 17:07:38 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:07:38 compute-0 nova_compute[254092]:   <os>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:   </os>
Nov 25 17:07:38 compute-0 nova_compute[254092]:   <features>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:   </features>
Nov 25 17:07:38 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 17:07:38 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:07:38 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 17:07:38 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:07:38 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 17:07:38 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/600da46c-eccb-4422-9531-4fa91fdda153_disk">
Nov 25 17:07:38 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:       </source>
Nov 25 17:07:38 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:07:38 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:07:38 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 17:07:38 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/600da46c-eccb-4422-9531-4fa91fdda153_disk.config">
Nov 25 17:07:38 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:       </source>
Nov 25 17:07:38 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:07:38 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:07:38 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 17:07:38 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:9b:97:5e"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:       <target dev="tap18b66f97-4e"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 17:07:38 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/600da46c-eccb-4422-9531-4fa91fdda153/console.log" append="off"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     <video>
Nov 25 17:07:38 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     </video>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 17:07:38 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 17:07:38 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 17:07:38 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:07:38 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:07:38 compute-0 nova_compute[254092]: </domain>
Nov 25 17:07:38 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 17:07:38 compute-0 nova_compute[254092]: 2025-11-25 17:07:38.237 254096 DEBUG nova.compute.manager [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Preparing to wait for external event network-vif-plugged-18b66f97-4edf-40c8-b35b-66005e28732c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 17:07:38 compute-0 nova_compute[254092]: 2025-11-25 17:07:38.237 254096 DEBUG oslo_concurrency.lockutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "600da46c-eccb-4422-9531-4fa91fdda153-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:07:38 compute-0 nova_compute[254092]: 2025-11-25 17:07:38.238 254096 DEBUG oslo_concurrency.lockutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "600da46c-eccb-4422-9531-4fa91fdda153-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:07:38 compute-0 nova_compute[254092]: 2025-11-25 17:07:38.239 254096 DEBUG oslo_concurrency.lockutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "600da46c-eccb-4422-9531-4fa91fdda153-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:07:38 compute-0 nova_compute[254092]: 2025-11-25 17:07:38.240 254096 DEBUG nova.virt.libvirt.vif [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:07:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-gen-1-1657599294',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-gen-1-1657599294',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-804365052-gen',id=128,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJewMHMcnizEkGIStteBKbtUtrMJBt/EToyGgUtC6JrzbfMY3fgmQCA8nS1qzlnKU2lH+JSVFP+p/6sjgfKpYyiluFasU1UBquzWDp3IJiLFG1sq+FYLyyElJSOAHc9Qrw==',key_name='tempest-TestSecurityGroupsBasicOps-1818642837',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f19d2d0776e84c3c99c640c0b7ed3743',ramdisk_id='',reservation_id='r-vyqdrhti',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-804365052',owner_user_name='tempest-TestSecurityGroupsBasicOps-804365052-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:07:32Z,user_data=None,user_id='72f29c5d97464520bee19cb128258fd5',uuid=600da46c-eccb-4422-9531-4fa91fdda153,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "18b66f97-4edf-40c8-b35b-66005e28732c", "address": "fa:16:3e:9b:97:5e", "network": {"id": "ef2caff8-43ec-4364-a979-521405023410", "bridge": "br-int", "label": "tempest-network-smoke--168780319", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b66f97-4e", "ovs_interfaceid": "18b66f97-4edf-40c8-b35b-66005e28732c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:07:38 compute-0 nova_compute[254092]: 2025-11-25 17:07:38.241 254096 DEBUG nova.network.os_vif_util [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converting VIF {"id": "18b66f97-4edf-40c8-b35b-66005e28732c", "address": "fa:16:3e:9b:97:5e", "network": {"id": "ef2caff8-43ec-4364-a979-521405023410", "bridge": "br-int", "label": "tempest-network-smoke--168780319", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b66f97-4e", "ovs_interfaceid": "18b66f97-4edf-40c8-b35b-66005e28732c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:07:38 compute-0 nova_compute[254092]: 2025-11-25 17:07:38.242 254096 DEBUG nova.network.os_vif_util [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:97:5e,bridge_name='br-int',has_traffic_filtering=True,id=18b66f97-4edf-40c8-b35b-66005e28732c,network=Network(ef2caff8-43ec-4364-a979-521405023410),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18b66f97-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:07:38 compute-0 nova_compute[254092]: 2025-11-25 17:07:38.242 254096 DEBUG os_vif [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:97:5e,bridge_name='br-int',has_traffic_filtering=True,id=18b66f97-4edf-40c8-b35b-66005e28732c,network=Network(ef2caff8-43ec-4364-a979-521405023410),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18b66f97-4e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:07:38 compute-0 nova_compute[254092]: 2025-11-25 17:07:38.243 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:38 compute-0 nova_compute[254092]: 2025-11-25 17:07:38.244 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:07:38 compute-0 nova_compute[254092]: 2025-11-25 17:07:38.245 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:07:38 compute-0 nova_compute[254092]: 2025-11-25 17:07:38.249 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:38 compute-0 nova_compute[254092]: 2025-11-25 17:07:38.250 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap18b66f97-4e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:07:38 compute-0 nova_compute[254092]: 2025-11-25 17:07:38.250 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap18b66f97-4e, col_values=(('external_ids', {'iface-id': '18b66f97-4edf-40c8-b35b-66005e28732c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9b:97:5e', 'vm-uuid': '600da46c-eccb-4422-9531-4fa91fdda153'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:07:38 compute-0 NetworkManager[48891]: <info>  [1764090458.2536] manager: (tap18b66f97-4e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/546)
Nov 25 17:07:38 compute-0 nova_compute[254092]: 2025-11-25 17:07:38.252 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:38 compute-0 nova_compute[254092]: 2025-11-25 17:07:38.256 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:07:38 compute-0 nova_compute[254092]: 2025-11-25 17:07:38.261 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:38 compute-0 nova_compute[254092]: 2025-11-25 17:07:38.262 254096 INFO os_vif [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:97:5e,bridge_name='br-int',has_traffic_filtering=True,id=18b66f97-4edf-40c8-b35b-66005e28732c,network=Network(ef2caff8-43ec-4364-a979-521405023410),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18b66f97-4e')
Nov 25 17:07:38 compute-0 nova_compute[254092]: 2025-11-25 17:07:38.309 254096 DEBUG nova.virt.libvirt.driver [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:07:38 compute-0 nova_compute[254092]: 2025-11-25 17:07:38.310 254096 DEBUG nova.virt.libvirt.driver [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:07:38 compute-0 nova_compute[254092]: 2025-11-25 17:07:38.310 254096 DEBUG nova.virt.libvirt.driver [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] No VIF found with MAC fa:16:3e:9b:97:5e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:07:38 compute-0 nova_compute[254092]: 2025-11-25 17:07:38.310 254096 INFO nova.virt.libvirt.driver [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Using config drive
Nov 25 17:07:38 compute-0 nova_compute[254092]: 2025-11-25 17:07:38.335 254096 DEBUG nova.storage.rbd_utils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image 600da46c-eccb-4422-9531-4fa91fdda153_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:07:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2523: 321 pgs: 321 active+clean; 167 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 126 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Nov 25 17:07:38 compute-0 nova_compute[254092]: 2025-11-25 17:07:38.866 254096 INFO nova.virt.libvirt.driver [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Creating config drive at /var/lib/nova/instances/600da46c-eccb-4422-9531-4fa91fdda153/disk.config
Nov 25 17:07:38 compute-0 nova_compute[254092]: 2025-11-25 17:07:38.871 254096 DEBUG oslo_concurrency.processutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/600da46c-eccb-4422-9531-4fa91fdda153/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps6gpoo_u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:07:38 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:07:39 compute-0 nova_compute[254092]: 2025-11-25 17:07:39.008 254096 DEBUG oslo_concurrency.processutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/600da46c-eccb-4422-9531-4fa91fdda153/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps6gpoo_u" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:07:39 compute-0 nova_compute[254092]: 2025-11-25 17:07:39.031 254096 DEBUG nova.storage.rbd_utils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] rbd image 600da46c-eccb-4422-9531-4fa91fdda153_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:07:39 compute-0 nova_compute[254092]: 2025-11-25 17:07:39.034 254096 DEBUG oslo_concurrency.processutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/600da46c-eccb-4422-9531-4fa91fdda153/disk.config 600da46c-eccb-4422-9531-4fa91fdda153_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:07:39 compute-0 nova_compute[254092]: 2025-11-25 17:07:39.186 254096 DEBUG oslo_concurrency.processutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/600da46c-eccb-4422-9531-4fa91fdda153/disk.config 600da46c-eccb-4422-9531-4fa91fdda153_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:07:39 compute-0 nova_compute[254092]: 2025-11-25 17:07:39.187 254096 INFO nova.virt.libvirt.driver [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Deleting local config drive /var/lib/nova/instances/600da46c-eccb-4422-9531-4fa91fdda153/disk.config because it was imported into RBD.
Nov 25 17:07:39 compute-0 kernel: tap18b66f97-4e: entered promiscuous mode
Nov 25 17:07:39 compute-0 ovn_controller[153477]: 2025-11-25T17:07:39Z|01315|binding|INFO|Claiming lport 18b66f97-4edf-40c8-b35b-66005e28732c for this chassis.
Nov 25 17:07:39 compute-0 NetworkManager[48891]: <info>  [1764090459.2415] manager: (tap18b66f97-4e): new Tun device (/org/freedesktop/NetworkManager/Devices/547)
Nov 25 17:07:39 compute-0 ovn_controller[153477]: 2025-11-25T17:07:39Z|01316|binding|INFO|18b66f97-4edf-40c8-b35b-66005e28732c: Claiming fa:16:3e:9b:97:5e 10.100.0.14
Nov 25 17:07:39 compute-0 nova_compute[254092]: 2025-11-25 17:07:39.240 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:39.250 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:97:5e 10.100.0.14'], port_security=['fa:16:3e:9b:97:5e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '600da46c-eccb-4422-9531-4fa91fdda153', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef2caff8-43ec-4364-a979-521405023410', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f19d2d0776e84c3c99c640c0b7ed3743', 'neutron:revision_number': '2', 'neutron:security_group_ids': '779da3fb-8b66-4cf7-a59e-bc7311564ace', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32482432-a111-4664-9b0c-204a3e2a38fd, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=18b66f97-4edf-40c8-b35b-66005e28732c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:07:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:39.251 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 18b66f97-4edf-40c8-b35b-66005e28732c in datapath ef2caff8-43ec-4364-a979-521405023410 bound to our chassis
Nov 25 17:07:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:39.253 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ef2caff8-43ec-4364-a979-521405023410
Nov 25 17:07:39 compute-0 nova_compute[254092]: 2025-11-25 17:07:39.256 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:39 compute-0 ovn_controller[153477]: 2025-11-25T17:07:39Z|01317|binding|INFO|Setting lport 18b66f97-4edf-40c8-b35b-66005e28732c ovn-installed in OVS
Nov 25 17:07:39 compute-0 ovn_controller[153477]: 2025-11-25T17:07:39Z|01318|binding|INFO|Setting lport 18b66f97-4edf-40c8-b35b-66005e28732c up in Southbound
Nov 25 17:07:39 compute-0 nova_compute[254092]: 2025-11-25 17:07:39.260 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:39 compute-0 systemd-udevd[393762]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:07:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:39.269 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[79efaf68-a367-43a8-8b83-13575e650ca6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:39 compute-0 NetworkManager[48891]: <info>  [1764090459.2819] device (tap18b66f97-4e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:07:39 compute-0 NetworkManager[48891]: <info>  [1764090459.2825] device (tap18b66f97-4e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:07:39 compute-0 systemd-machined[216343]: New machine qemu-160-instance-00000080.
Nov 25 17:07:39 compute-0 systemd[1]: Started Virtual Machine qemu-160-instance-00000080.
Nov 25 17:07:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:39.306 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[da418dc7-ce3d-4c6e-9407-8f2f25fb4a25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:39.310 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9e16d365-baea-45f6-a05d-dcceec329e96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:39.337 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[482c02f3-eb33-46e0-82b7-b983933e6e7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:39.353 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[677ff912-ccb5-48e5-bc58-b340966d9205]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef2caff8-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:05:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 383], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688212, 'reachable_time': 34047, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 393775, 'error': None, 'target': 'ovnmeta-ef2caff8-43ec-4364-a979-521405023410', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:39.368 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[53e76928-cf1e-4e31-bb16-9131d2e55473]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapef2caff8-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 688224, 'tstamp': 688224}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 393777, 'error': None, 'target': 'ovnmeta-ef2caff8-43ec-4364-a979-521405023410', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapef2caff8-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 688228, 'tstamp': 688228}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 393777, 'error': None, 'target': 'ovnmeta-ef2caff8-43ec-4364-a979-521405023410', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:39.371 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef2caff8-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:07:39 compute-0 nova_compute[254092]: 2025-11-25 17:07:39.372 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:39 compute-0 nova_compute[254092]: 2025-11-25 17:07:39.373 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:39.374 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef2caff8-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:07:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:39.374 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:07:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:39.375 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapef2caff8-40, col_values=(('external_ids', {'iface-id': '50aec7ed-4f15-4d72-87dd-48c327de28ce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:07:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:39.375 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:07:39 compute-0 nova_compute[254092]: 2025-11-25 17:07:39.383 254096 DEBUG nova.network.neutron [req-3f649425-5c1e-4637-b373-c4a714d7c98b req-dae568a0-cdf9-4b4b-b739-14e5bb8ca3f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Updated VIF entry in instance network info cache for port 18b66f97-4edf-40c8-b35b-66005e28732c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:07:39 compute-0 nova_compute[254092]: 2025-11-25 17:07:39.384 254096 DEBUG nova.network.neutron [req-3f649425-5c1e-4637-b373-c4a714d7c98b req-dae568a0-cdf9-4b4b-b739-14e5bb8ca3f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Updating instance_info_cache with network_info: [{"id": "18b66f97-4edf-40c8-b35b-66005e28732c", "address": "fa:16:3e:9b:97:5e", "network": {"id": "ef2caff8-43ec-4364-a979-521405023410", "bridge": "br-int", "label": "tempest-network-smoke--168780319", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b66f97-4e", "ovs_interfaceid": "18b66f97-4edf-40c8-b35b-66005e28732c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:07:39 compute-0 nova_compute[254092]: 2025-11-25 17:07:39.399 254096 DEBUG oslo_concurrency.lockutils [req-3f649425-5c1e-4637-b373-c4a714d7c98b req-dae568a0-cdf9-4b4b-b739-14e5bb8ca3f5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-600da46c-eccb-4422-9531-4fa91fdda153" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:07:39 compute-0 nova_compute[254092]: 2025-11-25 17:07:39.506 254096 DEBUG nova.compute.manager [req-43ad6606-bb96-47f4-87d7-dfbc0c2fa8a1 req-a26e35b8-c0b1-4468-b005-3916a047d852 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Received event network-vif-plugged-18b66f97-4edf-40c8-b35b-66005e28732c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:07:39 compute-0 nova_compute[254092]: 2025-11-25 17:07:39.507 254096 DEBUG oslo_concurrency.lockutils [req-43ad6606-bb96-47f4-87d7-dfbc0c2fa8a1 req-a26e35b8-c0b1-4468-b005-3916a047d852 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "600da46c-eccb-4422-9531-4fa91fdda153-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:07:39 compute-0 nova_compute[254092]: 2025-11-25 17:07:39.507 254096 DEBUG oslo_concurrency.lockutils [req-43ad6606-bb96-47f4-87d7-dfbc0c2fa8a1 req-a26e35b8-c0b1-4468-b005-3916a047d852 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "600da46c-eccb-4422-9531-4fa91fdda153-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:07:39 compute-0 nova_compute[254092]: 2025-11-25 17:07:39.507 254096 DEBUG oslo_concurrency.lockutils [req-43ad6606-bb96-47f4-87d7-dfbc0c2fa8a1 req-a26e35b8-c0b1-4468-b005-3916a047d852 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "600da46c-eccb-4422-9531-4fa91fdda153-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:07:39 compute-0 nova_compute[254092]: 2025-11-25 17:07:39.507 254096 DEBUG nova.compute.manager [req-43ad6606-bb96-47f4-87d7-dfbc0c2fa8a1 req-a26e35b8-c0b1-4468-b005-3916a047d852 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Processing event network-vif-plugged-18b66f97-4edf-40c8-b35b-66005e28732c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 17:07:40 compute-0 nova_compute[254092]: 2025-11-25 17:07:40.058 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090460.0574887, 600da46c-eccb-4422-9531-4fa91fdda153 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:07:40 compute-0 nova_compute[254092]: 2025-11-25 17:07:40.058 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] VM Started (Lifecycle Event)
Nov 25 17:07:40 compute-0 nova_compute[254092]: 2025-11-25 17:07:40.060 254096 DEBUG nova.compute.manager [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 17:07:40 compute-0 nova_compute[254092]: 2025-11-25 17:07:40.063 254096 DEBUG nova.virt.libvirt.driver [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 17:07:40 compute-0 nova_compute[254092]: 2025-11-25 17:07:40.066 254096 INFO nova.virt.libvirt.driver [-] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Instance spawned successfully.
Nov 25 17:07:40 compute-0 nova_compute[254092]: 2025-11-25 17:07:40.066 254096 DEBUG nova.virt.libvirt.driver [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 17:07:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:07:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:07:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:07:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:07:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:07:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:07:40 compute-0 nova_compute[254092]: 2025-11-25 17:07:40.089 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:07:40 compute-0 nova_compute[254092]: 2025-11-25 17:07:40.095 254096 DEBUG nova.virt.libvirt.driver [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:07:40 compute-0 nova_compute[254092]: 2025-11-25 17:07:40.095 254096 DEBUG nova.virt.libvirt.driver [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:07:40 compute-0 nova_compute[254092]: 2025-11-25 17:07:40.096 254096 DEBUG nova.virt.libvirt.driver [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:07:40 compute-0 nova_compute[254092]: 2025-11-25 17:07:40.096 254096 DEBUG nova.virt.libvirt.driver [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:07:40 compute-0 nova_compute[254092]: 2025-11-25 17:07:40.097 254096 DEBUG nova.virt.libvirt.driver [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:07:40 compute-0 nova_compute[254092]: 2025-11-25 17:07:40.097 254096 DEBUG nova.virt.libvirt.driver [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:07:40 compute-0 nova_compute[254092]: 2025-11-25 17:07:40.103 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:07:40 compute-0 nova_compute[254092]: 2025-11-25 17:07:40.137 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:07:40 compute-0 nova_compute[254092]: 2025-11-25 17:07:40.138 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090460.0577555, 600da46c-eccb-4422-9531-4fa91fdda153 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:07:40 compute-0 nova_compute[254092]: 2025-11-25 17:07:40.138 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] VM Paused (Lifecycle Event)
Nov 25 17:07:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:07:40
Nov 25 17:07:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:07:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:07:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'volumes', 'vms', 'backups', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root']
Nov 25 17:07:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:07:40 compute-0 nova_compute[254092]: 2025-11-25 17:07:40.166 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:07:40 compute-0 nova_compute[254092]: 2025-11-25 17:07:40.169 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090460.0635598, 600da46c-eccb-4422-9531-4fa91fdda153 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:07:40 compute-0 nova_compute[254092]: 2025-11-25 17:07:40.169 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] VM Resumed (Lifecycle Event)
Nov 25 17:07:40 compute-0 nova_compute[254092]: 2025-11-25 17:07:40.180 254096 INFO nova.compute.manager [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Took 7.22 seconds to spawn the instance on the hypervisor.
Nov 25 17:07:40 compute-0 nova_compute[254092]: 2025-11-25 17:07:40.180 254096 DEBUG nova.compute.manager [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:07:40 compute-0 nova_compute[254092]: 2025-11-25 17:07:40.205 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:07:40 compute-0 nova_compute[254092]: 2025-11-25 17:07:40.208 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:07:40 compute-0 ceph-mon[74985]: pgmap v2523: 321 pgs: 321 active+clean; 167 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 126 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Nov 25 17:07:40 compute-0 nova_compute[254092]: 2025-11-25 17:07:40.229 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:07:40 compute-0 nova_compute[254092]: 2025-11-25 17:07:40.244 254096 INFO nova.compute.manager [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Took 8.26 seconds to build instance.
Nov 25 17:07:40 compute-0 nova_compute[254092]: 2025-11-25 17:07:40.261 254096 DEBUG oslo_concurrency.lockutils [None req-9c683691-2cad-434a-8505-3474d2ac8a3c 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "600da46c-eccb-4422-9531-4fa91fdda153" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.354s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:07:40 compute-0 nova_compute[254092]: 2025-11-25 17:07:40.439 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2524: 321 pgs: 321 active+clean; 167 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 127 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Nov 25 17:07:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:07:40 compute-0 nova_compute[254092]: 2025-11-25 17:07:40.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:07:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:07:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:07:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:07:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:07:40 compute-0 nova_compute[254092]: 2025-11-25 17:07:40.519 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:07:40 compute-0 nova_compute[254092]: 2025-11-25 17:07:40.520 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:07:40 compute-0 nova_compute[254092]: 2025-11-25 17:07:40.520 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:07:40 compute-0 nova_compute[254092]: 2025-11-25 17:07:40.520 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:07:40 compute-0 nova_compute[254092]: 2025-11-25 17:07:40.520 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:07:40 compute-0 nova_compute[254092]: 2025-11-25 17:07:40.588 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:07:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:07:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:07:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:07:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:07:40 compute-0 ceph-mgr[75280]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1812945073
Nov 25 17:07:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:07:40 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/838234383' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:07:40 compute-0 nova_compute[254092]: 2025-11-25 17:07:40.979 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:07:41 compute-0 nova_compute[254092]: 2025-11-25 17:07:41.072 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:07:41 compute-0 nova_compute[254092]: 2025-11-25 17:07:41.072 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:07:41 compute-0 nova_compute[254092]: 2025-11-25 17:07:41.077 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:07:41 compute-0 nova_compute[254092]: 2025-11-25 17:07:41.077 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:07:41 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/838234383' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:07:41 compute-0 nova_compute[254092]: 2025-11-25 17:07:41.278 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:07:41 compute-0 nova_compute[254092]: 2025-11-25 17:07:41.280 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3443MB free_disk=59.92197799682617GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:07:41 compute-0 nova_compute[254092]: 2025-11-25 17:07:41.280 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:07:41 compute-0 nova_compute[254092]: 2025-11-25 17:07:41.280 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:07:41 compute-0 nova_compute[254092]: 2025-11-25 17:07:41.361 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance bd1d0296-ae28-4eac-9f38-80e6ca17dbff actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 17:07:41 compute-0 nova_compute[254092]: 2025-11-25 17:07:41.361 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 600da46c-eccb-4422-9531-4fa91fdda153 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 17:07:41 compute-0 nova_compute[254092]: 2025-11-25 17:07:41.361 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:07:41 compute-0 nova_compute[254092]: 2025-11-25 17:07:41.361 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:07:41 compute-0 nova_compute[254092]: 2025-11-25 17:07:41.412 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:07:41 compute-0 nova_compute[254092]: 2025-11-25 17:07:41.591 254096 DEBUG nova.compute.manager [req-8eb32aca-3a01-48e4-a50e-b20a1670f273 req-a5f8b518-21e0-4db6-bf7e-20b13bfece26 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Received event network-vif-plugged-18b66f97-4edf-40c8-b35b-66005e28732c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:07:41 compute-0 nova_compute[254092]: 2025-11-25 17:07:41.592 254096 DEBUG oslo_concurrency.lockutils [req-8eb32aca-3a01-48e4-a50e-b20a1670f273 req-a5f8b518-21e0-4db6-bf7e-20b13bfece26 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "600da46c-eccb-4422-9531-4fa91fdda153-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:07:41 compute-0 nova_compute[254092]: 2025-11-25 17:07:41.593 254096 DEBUG oslo_concurrency.lockutils [req-8eb32aca-3a01-48e4-a50e-b20a1670f273 req-a5f8b518-21e0-4db6-bf7e-20b13bfece26 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "600da46c-eccb-4422-9531-4fa91fdda153-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:07:41 compute-0 nova_compute[254092]: 2025-11-25 17:07:41.593 254096 DEBUG oslo_concurrency.lockutils [req-8eb32aca-3a01-48e4-a50e-b20a1670f273 req-a5f8b518-21e0-4db6-bf7e-20b13bfece26 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "600da46c-eccb-4422-9531-4fa91fdda153-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:07:41 compute-0 nova_compute[254092]: 2025-11-25 17:07:41.593 254096 DEBUG nova.compute.manager [req-8eb32aca-3a01-48e4-a50e-b20a1670f273 req-a5f8b518-21e0-4db6-bf7e-20b13bfece26 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] No waiting events found dispatching network-vif-plugged-18b66f97-4edf-40c8-b35b-66005e28732c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:07:41 compute-0 nova_compute[254092]: 2025-11-25 17:07:41.594 254096 WARNING nova.compute.manager [req-8eb32aca-3a01-48e4-a50e-b20a1670f273 req-a5f8b518-21e0-4db6-bf7e-20b13bfece26 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Received unexpected event network-vif-plugged-18b66f97-4edf-40c8-b35b-66005e28732c for instance with vm_state active and task_state None.
Nov 25 17:07:41 compute-0 nova_compute[254092]: 2025-11-25 17:07:41.650 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090446.6488988, 2c787d91-7197-42cc-9ee6-870806f4904b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:07:41 compute-0 nova_compute[254092]: 2025-11-25 17:07:41.650 254096 INFO nova.compute.manager [-] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] VM Stopped (Lifecycle Event)
Nov 25 17:07:41 compute-0 nova_compute[254092]: 2025-11-25 17:07:41.667 254096 DEBUG nova.compute.manager [None req-02f18b01-fbef-49b6-bd2a-0749b4566b14 - - - - - -] [instance: 2c787d91-7197-42cc-9ee6-870806f4904b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:07:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:07:41 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1638363428' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:07:41 compute-0 nova_compute[254092]: 2025-11-25 17:07:41.854 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:07:41 compute-0 nova_compute[254092]: 2025-11-25 17:07:41.860 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:07:41 compute-0 nova_compute[254092]: 2025-11-25 17:07:41.877 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:07:41 compute-0 nova_compute[254092]: 2025-11-25 17:07:41.903 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:07:41 compute-0 nova_compute[254092]: 2025-11-25 17:07:41.903 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:07:42 compute-0 ceph-mon[74985]: pgmap v2524: 321 pgs: 321 active+clean; 167 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 127 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Nov 25 17:07:42 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1638363428' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:07:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2525: 321 pgs: 321 active+clean; 167 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 382 KiB/s rd, 1.8 MiB/s wr, 73 op/s
Nov 25 17:07:42 compute-0 nova_compute[254092]: 2025-11-25 17:07:42.904 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:07:42 compute-0 nova_compute[254092]: 2025-11-25 17:07:42.905 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:07:42 compute-0 nova_compute[254092]: 2025-11-25 17:07:42.905 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:07:43 compute-0 nova_compute[254092]: 2025-11-25 17:07:43.154 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-bd1d0296-ae28-4eac-9f38-80e6ca17dbff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:07:43 compute-0 nova_compute[254092]: 2025-11-25 17:07:43.155 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-bd1d0296-ae28-4eac-9f38-80e6ca17dbff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:07:43 compute-0 nova_compute[254092]: 2025-11-25 17:07:43.155 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 17:07:43 compute-0 nova_compute[254092]: 2025-11-25 17:07:43.156 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid bd1d0296-ae28-4eac-9f38-80e6ca17dbff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:07:43 compute-0 nova_compute[254092]: 2025-11-25 17:07:43.252 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:07:44 compute-0 ceph-mon[74985]: pgmap v2525: 321 pgs: 321 active+clean; 167 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 382 KiB/s rd, 1.8 MiB/s wr, 73 op/s
Nov 25 17:07:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2526: 321 pgs: 321 active+clean; 167 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 364 KiB/s rd, 1.8 MiB/s wr, 48 op/s
Nov 25 17:07:44 compute-0 nova_compute[254092]: 2025-11-25 17:07:44.902 254096 DEBUG nova.compute.manager [req-ad67aa9a-8bb4-4075-bba6-513c7dabdf1a req-7305a4c4-1f98-469b-ac6a-81a8ef3ba1b3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Received event network-changed-18b66f97-4edf-40c8-b35b-66005e28732c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:07:44 compute-0 nova_compute[254092]: 2025-11-25 17:07:44.902 254096 DEBUG nova.compute.manager [req-ad67aa9a-8bb4-4075-bba6-513c7dabdf1a req-7305a4c4-1f98-469b-ac6a-81a8ef3ba1b3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Refreshing instance network info cache due to event network-changed-18b66f97-4edf-40c8-b35b-66005e28732c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:07:44 compute-0 nova_compute[254092]: 2025-11-25 17:07:44.903 254096 DEBUG oslo_concurrency.lockutils [req-ad67aa9a-8bb4-4075-bba6-513c7dabdf1a req-7305a4c4-1f98-469b-ac6a-81a8ef3ba1b3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-600da46c-eccb-4422-9531-4fa91fdda153" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:07:44 compute-0 nova_compute[254092]: 2025-11-25 17:07:44.903 254096 DEBUG oslo_concurrency.lockutils [req-ad67aa9a-8bb4-4075-bba6-513c7dabdf1a req-7305a4c4-1f98-469b-ac6a-81a8ef3ba1b3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-600da46c-eccb-4422-9531-4fa91fdda153" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:07:44 compute-0 nova_compute[254092]: 2025-11-25 17:07:44.903 254096 DEBUG nova.network.neutron [req-ad67aa9a-8bb4-4075-bba6-513c7dabdf1a req-7305a4c4-1f98-469b-ac6a-81a8ef3ba1b3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Refreshing network info cache for port 18b66f97-4edf-40c8-b35b-66005e28732c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:07:45 compute-0 ceph-mon[74985]: pgmap v2526: 321 pgs: 321 active+clean; 167 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 364 KiB/s rd, 1.8 MiB/s wr, 48 op/s
Nov 25 17:07:45 compute-0 nova_compute[254092]: 2025-11-25 17:07:45.440 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:45 compute-0 nova_compute[254092]: 2025-11-25 17:07:45.666 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Updating instance_info_cache with network_info: [{"id": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "address": "fa:16:3e:09:40:a3", "network": {"id": "ef2caff8-43ec-4364-a979-521405023410", "bridge": "br-int", "label": "tempest-network-smoke--168780319", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e9d2ac-a4", "ovs_interfaceid": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:07:45 compute-0 nova_compute[254092]: 2025-11-25 17:07:45.690 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-bd1d0296-ae28-4eac-9f38-80e6ca17dbff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:07:45 compute-0 nova_compute[254092]: 2025-11-25 17:07:45.691 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 17:07:45 compute-0 nova_compute[254092]: 2025-11-25 17:07:45.692 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:07:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2527: 321 pgs: 321 active+clean; 167 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 17:07:46 compute-0 nova_compute[254092]: 2025-11-25 17:07:46.663 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:46 compute-0 nova_compute[254092]: 2025-11-25 17:07:46.880 254096 DEBUG nova.network.neutron [req-ad67aa9a-8bb4-4075-bba6-513c7dabdf1a req-7305a4c4-1f98-469b-ac6a-81a8ef3ba1b3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Updated VIF entry in instance network info cache for port 18b66f97-4edf-40c8-b35b-66005e28732c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:07:46 compute-0 nova_compute[254092]: 2025-11-25 17:07:46.881 254096 DEBUG nova.network.neutron [req-ad67aa9a-8bb4-4075-bba6-513c7dabdf1a req-7305a4c4-1f98-469b-ac6a-81a8ef3ba1b3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Updating instance_info_cache with network_info: [{"id": "18b66f97-4edf-40c8-b35b-66005e28732c", "address": "fa:16:3e:9b:97:5e", "network": {"id": "ef2caff8-43ec-4364-a979-521405023410", "bridge": "br-int", "label": "tempest-network-smoke--168780319", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b66f97-4e", "ovs_interfaceid": "18b66f97-4edf-40c8-b35b-66005e28732c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:07:46 compute-0 nova_compute[254092]: 2025-11-25 17:07:46.905 254096 DEBUG oslo_concurrency.lockutils [req-ad67aa9a-8bb4-4075-bba6-513c7dabdf1a req-7305a4c4-1f98-469b-ac6a-81a8ef3ba1b3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-600da46c-eccb-4422-9531-4fa91fdda153" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:07:47 compute-0 nova_compute[254092]: 2025-11-25 17:07:47.278 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:07:47 compute-0 ceph-mon[74985]: pgmap v2527: 321 pgs: 321 active+clean; 167 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 17:07:48 compute-0 nova_compute[254092]: 2025-11-25 17:07:48.255 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2528: 321 pgs: 321 active+clean; 167 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:07:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:07:49 compute-0 ceph-mon[74985]: pgmap v2528: 321 pgs: 321 active+clean; 167 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:07:49 compute-0 podman[393866]: 2025-11-25 17:07:49.699025889 +0000 UTC m=+0.099201993 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 17:07:49 compute-0 podman[393867]: 2025-11-25 17:07:49.703534723 +0000 UTC m=+0.098155044 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 17:07:49 compute-0 podman[393865]: 2025-11-25 17:07:49.717315241 +0000 UTC m=+0.116957899 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=multipathd)
Nov 25 17:07:50 compute-0 nova_compute[254092]: 2025-11-25 17:07:50.443 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2529: 321 pgs: 321 active+clean; 167 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 74 op/s
Nov 25 17:07:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:07:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:07:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:07:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:07:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001108131674480693 of space, bias 1.0, pg target 0.3324395023442079 quantized to 32 (current 32)
Nov 25 17:07:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:07:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:07:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:07:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:07:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:07:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 17:07:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:07:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:07:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:07:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:07:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:07:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:07:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:07:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:07:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:07:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:07:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:07:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:07:51 compute-0 ceph-mon[74985]: pgmap v2529: 321 pgs: 321 active+clean; 167 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 74 op/s
Nov 25 17:07:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2530: 321 pgs: 321 active+clean; 167 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.7 KiB/s wr, 71 op/s
Nov 25 17:07:53 compute-0 nova_compute[254092]: 2025-11-25 17:07:53.256 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:53 compute-0 ceph-mon[74985]: pgmap v2530: 321 pgs: 321 active+clean; 167 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.7 KiB/s wr, 71 op/s
Nov 25 17:07:53 compute-0 ovn_controller[153477]: 2025-11-25T17:07:53Z|00152|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9b:97:5e 10.100.0.14
Nov 25 17:07:53 compute-0 ovn_controller[153477]: 2025-11-25T17:07:53Z|00153|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9b:97:5e 10.100.0.14
Nov 25 17:07:53 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:07:54 compute-0 nova_compute[254092]: 2025-11-25 17:07:54.139 254096 DEBUG oslo_concurrency.lockutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:07:54 compute-0 nova_compute[254092]: 2025-11-25 17:07:54.140 254096 DEBUG oslo_concurrency.lockutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:07:54 compute-0 nova_compute[254092]: 2025-11-25 17:07:54.153 254096 DEBUG nova.compute.manager [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 17:07:54 compute-0 nova_compute[254092]: 2025-11-25 17:07:54.216 254096 DEBUG oslo_concurrency.lockutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:07:54 compute-0 nova_compute[254092]: 2025-11-25 17:07:54.217 254096 DEBUG oslo_concurrency.lockutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:07:54 compute-0 nova_compute[254092]: 2025-11-25 17:07:54.224 254096 DEBUG nova.virt.hardware [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 17:07:54 compute-0 nova_compute[254092]: 2025-11-25 17:07:54.225 254096 INFO nova.compute.claims [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Claim successful on node compute-0.ctlplane.example.com
Nov 25 17:07:54 compute-0 nova_compute[254092]: 2025-11-25 17:07:54.342 254096 DEBUG oslo_concurrency.processutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:07:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2531: 321 pgs: 321 active+clean; 167 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 4.7 KiB/s wr, 53 op/s
Nov 25 17:07:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:07:54 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/820752934' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:07:54 compute-0 nova_compute[254092]: 2025-11-25 17:07:54.833 254096 DEBUG oslo_concurrency.processutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:07:54 compute-0 nova_compute[254092]: 2025-11-25 17:07:54.840 254096 DEBUG nova.compute.provider_tree [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:07:54 compute-0 nova_compute[254092]: 2025-11-25 17:07:54.857 254096 DEBUG nova.scheduler.client.report [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:07:54 compute-0 nova_compute[254092]: 2025-11-25 17:07:54.880 254096 DEBUG oslo_concurrency.lockutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:07:54 compute-0 nova_compute[254092]: 2025-11-25 17:07:54.881 254096 DEBUG nova.compute.manager [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 17:07:54 compute-0 nova_compute[254092]: 2025-11-25 17:07:54.931 254096 DEBUG nova.compute.manager [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 17:07:54 compute-0 nova_compute[254092]: 2025-11-25 17:07:54.932 254096 DEBUG nova.network.neutron [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 17:07:54 compute-0 nova_compute[254092]: 2025-11-25 17:07:54.958 254096 INFO nova.virt.libvirt.driver [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 17:07:54 compute-0 nova_compute[254092]: 2025-11-25 17:07:54.975 254096 DEBUG nova.compute.manager [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 17:07:55 compute-0 nova_compute[254092]: 2025-11-25 17:07:55.073 254096 DEBUG nova.compute.manager [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 17:07:55 compute-0 nova_compute[254092]: 2025-11-25 17:07:55.074 254096 DEBUG nova.virt.libvirt.driver [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 17:07:55 compute-0 nova_compute[254092]: 2025-11-25 17:07:55.075 254096 INFO nova.virt.libvirt.driver [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Creating image(s)
Nov 25 17:07:55 compute-0 nova_compute[254092]: 2025-11-25 17:07:55.098 254096 DEBUG nova.storage.rbd_utils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:07:55 compute-0 nova_compute[254092]: 2025-11-25 17:07:55.119 254096 DEBUG nova.storage.rbd_utils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:07:55 compute-0 nova_compute[254092]: 2025-11-25 17:07:55.138 254096 DEBUG nova.storage.rbd_utils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:07:55 compute-0 nova_compute[254092]: 2025-11-25 17:07:55.141 254096 DEBUG oslo_concurrency.processutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:07:55 compute-0 nova_compute[254092]: 2025-11-25 17:07:55.227 254096 DEBUG oslo_concurrency.processutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:07:55 compute-0 nova_compute[254092]: 2025-11-25 17:07:55.230 254096 DEBUG oslo_concurrency.lockutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:07:55 compute-0 nova_compute[254092]: 2025-11-25 17:07:55.231 254096 DEBUG oslo_concurrency.lockutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:07:55 compute-0 nova_compute[254092]: 2025-11-25 17:07:55.232 254096 DEBUG oslo_concurrency.lockutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:07:55 compute-0 nova_compute[254092]: 2025-11-25 17:07:55.270 254096 DEBUG nova.storage.rbd_utils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:07:55 compute-0 nova_compute[254092]: 2025-11-25 17:07:55.276 254096 DEBUG oslo_concurrency.processutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:07:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:07:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4236060575' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:07:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:07:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4236060575' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:07:55 compute-0 nova_compute[254092]: 2025-11-25 17:07:55.449 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:55 compute-0 nova_compute[254092]: 2025-11-25 17:07:55.587 254096 DEBUG oslo_concurrency.processutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.310s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:07:55 compute-0 ceph-mon[74985]: pgmap v2531: 321 pgs: 321 active+clean; 167 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 4.7 KiB/s wr, 53 op/s
Nov 25 17:07:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/820752934' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:07:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/4236060575' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:07:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/4236060575' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:07:55 compute-0 nova_compute[254092]: 2025-11-25 17:07:55.667 254096 DEBUG nova.policy [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e13382ac65e94c7b8a89e67de0e754df', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 17:07:55 compute-0 nova_compute[254092]: 2025-11-25 17:07:55.674 254096 DEBUG nova.storage.rbd_utils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] resizing rbd image 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 17:07:55 compute-0 nova_compute[254092]: 2025-11-25 17:07:55.766 254096 DEBUG nova.objects.instance [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'migration_context' on Instance uuid 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:07:55 compute-0 nova_compute[254092]: 2025-11-25 17:07:55.779 254096 DEBUG nova.virt.libvirt.driver [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 17:07:55 compute-0 nova_compute[254092]: 2025-11-25 17:07:55.780 254096 DEBUG nova.virt.libvirt.driver [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Ensure instance console log exists: /var/lib/nova/instances/239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 17:07:55 compute-0 nova_compute[254092]: 2025-11-25 17:07:55.780 254096 DEBUG oslo_concurrency.lockutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:07:55 compute-0 nova_compute[254092]: 2025-11-25 17:07:55.781 254096 DEBUG oslo_concurrency.lockutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:07:55 compute-0 nova_compute[254092]: 2025-11-25 17:07:55.781 254096 DEBUG oslo_concurrency.lockutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:07:56 compute-0 sudo[394116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:07:56 compute-0 sudo[394116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:07:56 compute-0 sudo[394116]: pam_unix(sudo:session): session closed for user root
Nov 25 17:07:56 compute-0 sudo[394142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:07:56 compute-0 sudo[394142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:07:56 compute-0 sudo[394142]: pam_unix(sudo:session): session closed for user root
Nov 25 17:07:56 compute-0 sudo[394167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:07:56 compute-0 sudo[394167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:07:56 compute-0 sudo[394167]: pam_unix(sudo:session): session closed for user root
Nov 25 17:07:56 compute-0 sudo[394192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:07:56 compute-0 sudo[394192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:07:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2532: 321 pgs: 321 active+clean; 243 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.8 MiB/s wr, 132 op/s
Nov 25 17:07:56 compute-0 nova_compute[254092]: 2025-11-25 17:07:56.646 254096 DEBUG nova.network.neutron [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Successfully created port: 1e4bb581-2eb1-4909-a066-11e1096cbffa _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 17:07:56 compute-0 sudo[394192]: pam_unix(sudo:session): session closed for user root
Nov 25 17:07:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 17:07:56 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 17:07:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:07:56 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:07:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:07:56 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:07:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:07:56 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:07:56 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev eb353d93-a044-4788-8e8f-672f0eba129b does not exist
Nov 25 17:07:56 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev c0e10d0e-de65-4c0b-8616-6174ca0dd093 does not exist
Nov 25 17:07:56 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev c3f08a5f-93cd-40f0-9e4e-ff30b8aad232 does not exist
Nov 25 17:07:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:07:56 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:07:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:07:56 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:07:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:07:56 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:07:57 compute-0 sudo[394248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:07:57 compute-0 sudo[394248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:07:57 compute-0 sudo[394248]: pam_unix(sudo:session): session closed for user root
Nov 25 17:07:57 compute-0 sudo[394273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:07:57 compute-0 sudo[394273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:07:57 compute-0 sudo[394273]: pam_unix(sudo:session): session closed for user root
Nov 25 17:07:57 compute-0 sudo[394298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:07:57 compute-0 sudo[394298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:07:57 compute-0 sudo[394298]: pam_unix(sudo:session): session closed for user root
Nov 25 17:07:57 compute-0 sudo[394323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:07:57 compute-0 sudo[394323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:07:57 compute-0 nova_compute[254092]: 2025-11-25 17:07:57.347 254096 DEBUG nova.network.neutron [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Successfully updated port: 1e4bb581-2eb1-4909-a066-11e1096cbffa _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:07:57 compute-0 nova_compute[254092]: 2025-11-25 17:07:57.358 254096 DEBUG oslo_concurrency.lockutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "refresh_cache-239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:07:57 compute-0 nova_compute[254092]: 2025-11-25 17:07:57.359 254096 DEBUG oslo_concurrency.lockutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquired lock "refresh_cache-239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:07:57 compute-0 nova_compute[254092]: 2025-11-25 17:07:57.359 254096 DEBUG nova.network.neutron [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 17:07:57 compute-0 nova_compute[254092]: 2025-11-25 17:07:57.453 254096 DEBUG nova.compute.manager [req-eec76090-511d-4e31-8028-95b6d0324cfc req-d2a5df87-19ae-4d64-8bf5-f3115537f93e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Received event network-changed-1e4bb581-2eb1-4909-a066-11e1096cbffa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:07:57 compute-0 nova_compute[254092]: 2025-11-25 17:07:57.454 254096 DEBUG nova.compute.manager [req-eec76090-511d-4e31-8028-95b6d0324cfc req-d2a5df87-19ae-4d64-8bf5-f3115537f93e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Refreshing instance network info cache due to event network-changed-1e4bb581-2eb1-4909-a066-11e1096cbffa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:07:57 compute-0 nova_compute[254092]: 2025-11-25 17:07:57.454 254096 DEBUG oslo_concurrency.lockutils [req-eec76090-511d-4e31-8028-95b6d0324cfc req-d2a5df87-19ae-4d64-8bf5-f3115537f93e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:07:57 compute-0 nova_compute[254092]: 2025-11-25 17:07:57.512 254096 DEBUG nova.network.neutron [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:07:57 compute-0 podman[394385]: 2025-11-25 17:07:57.596937964 +0000 UTC m=+0.065533118 container create 7a4b6e88db36b1fe8fd82054b8a7ad21b61982b5edb6967e575a5804ff151ddf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 17:07:57 compute-0 ceph-mon[74985]: pgmap v2532: 321 pgs: 321 active+clean; 243 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.8 MiB/s wr, 132 op/s
Nov 25 17:07:57 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 17:07:57 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:07:57 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:07:57 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:07:57 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:07:57 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:07:57 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:07:57 compute-0 systemd[1]: Started libpod-conmon-7a4b6e88db36b1fe8fd82054b8a7ad21b61982b5edb6967e575a5804ff151ddf.scope.
Nov 25 17:07:57 compute-0 podman[394385]: 2025-11-25 17:07:57.573515112 +0000 UTC m=+0.042110266 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:07:57 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:07:57 compute-0 podman[394385]: 2025-11-25 17:07:57.702744056 +0000 UTC m=+0.171339190 container init 7a4b6e88db36b1fe8fd82054b8a7ad21b61982b5edb6967e575a5804ff151ddf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:07:57 compute-0 podman[394385]: 2025-11-25 17:07:57.713536303 +0000 UTC m=+0.182131427 container start 7a4b6e88db36b1fe8fd82054b8a7ad21b61982b5edb6967e575a5804ff151ddf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_williams, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 17:07:57 compute-0 podman[394385]: 2025-11-25 17:07:57.717783258 +0000 UTC m=+0.186378412 container attach 7a4b6e88db36b1fe8fd82054b8a7ad21b61982b5edb6967e575a5804ff151ddf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_williams, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 17:07:57 compute-0 busy_williams[394401]: 167 167
Nov 25 17:07:57 compute-0 systemd[1]: libpod-7a4b6e88db36b1fe8fd82054b8a7ad21b61982b5edb6967e575a5804ff151ddf.scope: Deactivated successfully.
Nov 25 17:07:57 compute-0 conmon[394401]: conmon 7a4b6e88db36b1fe8fd8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7a4b6e88db36b1fe8fd82054b8a7ad21b61982b5edb6967e575a5804ff151ddf.scope/container/memory.events
Nov 25 17:07:57 compute-0 podman[394385]: 2025-11-25 17:07:57.722494388 +0000 UTC m=+0.191089512 container died 7a4b6e88db36b1fe8fd82054b8a7ad21b61982b5edb6967e575a5804ff151ddf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_williams, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:07:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-47b2a252ed1e123587282309b290071d0e09b77bab9afa3579684f0261ef8379-merged.mount: Deactivated successfully.
Nov 25 17:07:57 compute-0 podman[394385]: 2025-11-25 17:07:57.770265588 +0000 UTC m=+0.238860712 container remove 7a4b6e88db36b1fe8fd82054b8a7ad21b61982b5edb6967e575a5804ff151ddf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_williams, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:07:57 compute-0 systemd[1]: libpod-conmon-7a4b6e88db36b1fe8fd82054b8a7ad21b61982b5edb6967e575a5804ff151ddf.scope: Deactivated successfully.
Nov 25 17:07:58 compute-0 podman[394423]: 2025-11-25 17:07:58.037746354 +0000 UTC m=+0.058192916 container create a25f3e661ed1c31d1bf86fce4afbe28cc015efc787e106fc4a2cdb77ca7202b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_wozniak, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 17:07:58 compute-0 systemd[1]: Started libpod-conmon-a25f3e661ed1c31d1bf86fce4afbe28cc015efc787e106fc4a2cdb77ca7202b4.scope.
Nov 25 17:07:58 compute-0 podman[394423]: 2025-11-25 17:07:58.015632378 +0000 UTC m=+0.036078950 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:07:58 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:07:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d676cd6deaf5cddb0e0e738bf93c737abda0a9553cacc7f456782cd76f8c13b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:07:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d676cd6deaf5cddb0e0e738bf93c737abda0a9553cacc7f456782cd76f8c13b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:07:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d676cd6deaf5cddb0e0e738bf93c737abda0a9553cacc7f456782cd76f8c13b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:07:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d676cd6deaf5cddb0e0e738bf93c737abda0a9553cacc7f456782cd76f8c13b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:07:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d676cd6deaf5cddb0e0e738bf93c737abda0a9553cacc7f456782cd76f8c13b4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:07:58 compute-0 podman[394423]: 2025-11-25 17:07:58.158761794 +0000 UTC m=+0.179208426 container init a25f3e661ed1c31d1bf86fce4afbe28cc015efc787e106fc4a2cdb77ca7202b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_wozniak, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 17:07:58 compute-0 podman[394423]: 2025-11-25 17:07:58.172293595 +0000 UTC m=+0.192740147 container start a25f3e661ed1c31d1bf86fce4afbe28cc015efc787e106fc4a2cdb77ca7202b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:07:58 compute-0 podman[394423]: 2025-11-25 17:07:58.176065129 +0000 UTC m=+0.196511711 container attach a25f3e661ed1c31d1bf86fce4afbe28cc015efc787e106fc4a2cdb77ca7202b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_wozniak, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Nov 25 17:07:58 compute-0 nova_compute[254092]: 2025-11-25 17:07:58.258 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2533: 321 pgs: 321 active+clean; 243 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 334 KiB/s rd, 3.8 MiB/s wr, 79 op/s
Nov 25 17:07:58 compute-0 nova_compute[254092]: 2025-11-25 17:07:58.612 254096 DEBUG nova.network.neutron [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Updating instance_info_cache with network_info: [{"id": "1e4bb581-2eb1-4909-a066-11e1096cbffa", "address": "fa:16:3e:5d:af:f0", "network": {"id": "829d1d91-46ff-47ac-8fc7-645e2a94fa93", "bridge": "br-int", "label": "tempest-network-smoke--538767373", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4bb581-2e", "ovs_interfaceid": "1e4bb581-2eb1-4909-a066-11e1096cbffa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:07:58 compute-0 nova_compute[254092]: 2025-11-25 17:07:58.642 254096 DEBUG oslo_concurrency.lockutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Releasing lock "refresh_cache-239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:07:58 compute-0 nova_compute[254092]: 2025-11-25 17:07:58.643 254096 DEBUG nova.compute.manager [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Instance network_info: |[{"id": "1e4bb581-2eb1-4909-a066-11e1096cbffa", "address": "fa:16:3e:5d:af:f0", "network": {"id": "829d1d91-46ff-47ac-8fc7-645e2a94fa93", "bridge": "br-int", "label": "tempest-network-smoke--538767373", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4bb581-2e", "ovs_interfaceid": "1e4bb581-2eb1-4909-a066-11e1096cbffa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 17:07:58 compute-0 nova_compute[254092]: 2025-11-25 17:07:58.643 254096 DEBUG oslo_concurrency.lockutils [req-eec76090-511d-4e31-8028-95b6d0324cfc req-d2a5df87-19ae-4d64-8bf5-f3115537f93e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:07:58 compute-0 nova_compute[254092]: 2025-11-25 17:07:58.644 254096 DEBUG nova.network.neutron [req-eec76090-511d-4e31-8028-95b6d0324cfc req-d2a5df87-19ae-4d64-8bf5-f3115537f93e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Refreshing network info cache for port 1e4bb581-2eb1-4909-a066-11e1096cbffa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:07:58 compute-0 nova_compute[254092]: 2025-11-25 17:07:58.646 254096 DEBUG nova.virt.libvirt.driver [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Start _get_guest_xml network_info=[{"id": "1e4bb581-2eb1-4909-a066-11e1096cbffa", "address": "fa:16:3e:5d:af:f0", "network": {"id": "829d1d91-46ff-47ac-8fc7-645e2a94fa93", "bridge": "br-int", "label": "tempest-network-smoke--538767373", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4bb581-2e", "ovs_interfaceid": "1e4bb581-2eb1-4909-a066-11e1096cbffa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 17:07:58 compute-0 nova_compute[254092]: 2025-11-25 17:07:58.652 254096 WARNING nova.virt.libvirt.driver [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:07:58 compute-0 nova_compute[254092]: 2025-11-25 17:07:58.659 254096 DEBUG nova.virt.libvirt.host [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 17:07:58 compute-0 nova_compute[254092]: 2025-11-25 17:07:58.660 254096 DEBUG nova.virt.libvirt.host [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 17:07:58 compute-0 nova_compute[254092]: 2025-11-25 17:07:58.664 254096 DEBUG nova.virt.libvirt.host [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 17:07:58 compute-0 nova_compute[254092]: 2025-11-25 17:07:58.664 254096 DEBUG nova.virt.libvirt.host [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 17:07:58 compute-0 nova_compute[254092]: 2025-11-25 17:07:58.665 254096 DEBUG nova.virt.libvirt.driver [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 17:07:58 compute-0 nova_compute[254092]: 2025-11-25 17:07:58.665 254096 DEBUG nova.virt.hardware [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 17:07:58 compute-0 nova_compute[254092]: 2025-11-25 17:07:58.666 254096 DEBUG nova.virt.hardware [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 17:07:58 compute-0 nova_compute[254092]: 2025-11-25 17:07:58.666 254096 DEBUG nova.virt.hardware [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 17:07:58 compute-0 nova_compute[254092]: 2025-11-25 17:07:58.666 254096 DEBUG nova.virt.hardware [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 17:07:58 compute-0 nova_compute[254092]: 2025-11-25 17:07:58.667 254096 DEBUG nova.virt.hardware [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 17:07:58 compute-0 nova_compute[254092]: 2025-11-25 17:07:58.667 254096 DEBUG nova.virt.hardware [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 17:07:58 compute-0 nova_compute[254092]: 2025-11-25 17:07:58.667 254096 DEBUG nova.virt.hardware [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 17:07:58 compute-0 nova_compute[254092]: 2025-11-25 17:07:58.668 254096 DEBUG nova.virt.hardware [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 17:07:58 compute-0 nova_compute[254092]: 2025-11-25 17:07:58.668 254096 DEBUG nova.virt.hardware [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 17:07:58 compute-0 nova_compute[254092]: 2025-11-25 17:07:58.668 254096 DEBUG nova.virt.hardware [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 17:07:58 compute-0 nova_compute[254092]: 2025-11-25 17:07:58.668 254096 DEBUG nova.virt.hardware [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 17:07:58 compute-0 nova_compute[254092]: 2025-11-25 17:07:58.671 254096 DEBUG oslo_concurrency.processutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:07:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:07:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:07:59 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1168052522' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:07:59 compute-0 nova_compute[254092]: 2025-11-25 17:07:59.104 254096 DEBUG oslo_concurrency.processutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:07:59 compute-0 nova_compute[254092]: 2025-11-25 17:07:59.129 254096 DEBUG nova.storage.rbd_utils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:07:59 compute-0 nova_compute[254092]: 2025-11-25 17:07:59.134 254096 DEBUG oslo_concurrency.processutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:07:59 compute-0 wizardly_wozniak[394440]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:07:59 compute-0 wizardly_wozniak[394440]: --> relative data size: 1.0
Nov 25 17:07:59 compute-0 wizardly_wozniak[394440]: --> All data devices are unavailable
Nov 25 17:07:59 compute-0 systemd[1]: libpod-a25f3e661ed1c31d1bf86fce4afbe28cc015efc787e106fc4a2cdb77ca7202b4.scope: Deactivated successfully.
Nov 25 17:07:59 compute-0 systemd[1]: libpod-a25f3e661ed1c31d1bf86fce4afbe28cc015efc787e106fc4a2cdb77ca7202b4.scope: Consumed 1.009s CPU time.
Nov 25 17:07:59 compute-0 podman[394520]: 2025-11-25 17:07:59.33078835 +0000 UTC m=+0.043162204 container died a25f3e661ed1c31d1bf86fce4afbe28cc015efc787e106fc4a2cdb77ca7202b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:07:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-d676cd6deaf5cddb0e0e738bf93c737abda0a9553cacc7f456782cd76f8c13b4-merged.mount: Deactivated successfully.
Nov 25 17:07:59 compute-0 podman[394520]: 2025-11-25 17:07:59.394960041 +0000 UTC m=+0.107333885 container remove a25f3e661ed1c31d1bf86fce4afbe28cc015efc787e106fc4a2cdb77ca7202b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_wozniak, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 17:07:59 compute-0 systemd[1]: libpod-conmon-a25f3e661ed1c31d1bf86fce4afbe28cc015efc787e106fc4a2cdb77ca7202b4.scope: Deactivated successfully.
Nov 25 17:07:59 compute-0 sudo[394323]: pam_unix(sudo:session): session closed for user root
Nov 25 17:07:59 compute-0 sudo[394545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:07:59 compute-0 sudo[394545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:07:59 compute-0 sudo[394545]: pam_unix(sudo:session): session closed for user root
Nov 25 17:07:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:07:59 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/266493158' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:07:59 compute-0 nova_compute[254092]: 2025-11-25 17:07:59.567 254096 DEBUG oslo_concurrency.processutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:07:59 compute-0 nova_compute[254092]: 2025-11-25 17:07:59.570 254096 DEBUG nova.virt.libvirt.vif [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:07:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1742985816',display_name='tempest-TestNetworkBasicOps-server-1742985816',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1742985816',id=129,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM00WPYJfplltH8QXqH5VBLBDZhQ795ogiRVA83nhFmClvp98XVxMKrxCuE4fsMnEOta0H4tqzcEHlYCGCoNe9LzeAmLZ6Pp7hI8JV+Hz5Z8Dy6OeDo6E9caHpgF+YsYXg==',key_name='tempest-TestNetworkBasicOps-1671052',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-v513b262',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:07:55Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1e4bb581-2eb1-4909-a066-11e1096cbffa", "address": "fa:16:3e:5d:af:f0", "network": {"id": "829d1d91-46ff-47ac-8fc7-645e2a94fa93", "bridge": "br-int", "label": "tempest-network-smoke--538767373", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], 
"version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4bb581-2e", "ovs_interfaceid": "1e4bb581-2eb1-4909-a066-11e1096cbffa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:07:59 compute-0 nova_compute[254092]: 2025-11-25 17:07:59.570 254096 DEBUG nova.network.os_vif_util [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "1e4bb581-2eb1-4909-a066-11e1096cbffa", "address": "fa:16:3e:5d:af:f0", "network": {"id": "829d1d91-46ff-47ac-8fc7-645e2a94fa93", "bridge": "br-int", "label": "tempest-network-smoke--538767373", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4bb581-2e", "ovs_interfaceid": "1e4bb581-2eb1-4909-a066-11e1096cbffa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:07:59 compute-0 nova_compute[254092]: 2025-11-25 17:07:59.571 254096 DEBUG nova.network.os_vif_util [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:af:f0,bridge_name='br-int',has_traffic_filtering=True,id=1e4bb581-2eb1-4909-a066-11e1096cbffa,network=Network(829d1d91-46ff-47ac-8fc7-645e2a94fa93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e4bb581-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:07:59 compute-0 nova_compute[254092]: 2025-11-25 17:07:59.572 254096 DEBUG nova.objects.instance [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'pci_devices' on Instance uuid 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:07:59 compute-0 nova_compute[254092]: 2025-11-25 17:07:59.587 254096 DEBUG nova.virt.libvirt.driver [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] End _get_guest_xml xml=<domain type="kvm">
Nov 25 17:07:59 compute-0 nova_compute[254092]:   <uuid>239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e</uuid>
Nov 25 17:07:59 compute-0 nova_compute[254092]:   <name>instance-00000081</name>
Nov 25 17:07:59 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 17:07:59 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 17:07:59 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:07:59 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:       <nova:name>tempest-TestNetworkBasicOps-server-1742985816</nova:name>
Nov 25 17:07:59 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 17:07:58</nova:creationTime>
Nov 25 17:07:59 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 17:07:59 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 17:07:59 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 17:07:59 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 17:07:59 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:07:59 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 17:07:59 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 17:07:59 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 17:07:59 compute-0 nova_compute[254092]:         <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 17:07:59 compute-0 nova_compute[254092]:         <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 17:07:59 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 17:07:59 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 17:07:59 compute-0 nova_compute[254092]:         <nova:port uuid="1e4bb581-2eb1-4909-a066-11e1096cbffa">
Nov 25 17:07:59 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 17:07:59 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 17:07:59 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:07:59 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 17:07:59 compute-0 nova_compute[254092]:     <system>
Nov 25 17:07:59 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 17:07:59 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 17:07:59 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:07:59 compute-0 nova_compute[254092]:       <entry name="serial">239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e</entry>
Nov 25 17:07:59 compute-0 nova_compute[254092]:       <entry name="uuid">239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e</entry>
Nov 25 17:07:59 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     </system>
Nov 25 17:07:59 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:07:59 compute-0 nova_compute[254092]:   <os>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:   </os>
Nov 25 17:07:59 compute-0 nova_compute[254092]:   <features>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:   </features>
Nov 25 17:07:59 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 17:07:59 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:07:59 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 17:07:59 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:07:59 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 17:07:59 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e_disk">
Nov 25 17:07:59 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:       </source>
Nov 25 17:07:59 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:07:59 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:07:59 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 17:07:59 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e_disk.config">
Nov 25 17:07:59 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:       </source>
Nov 25 17:07:59 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:07:59 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:07:59 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 17:07:59 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:5d:af:f0"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:       <target dev="tap1e4bb581-2e"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 17:07:59 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e/console.log" append="off"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     <video>
Nov 25 17:07:59 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     </video>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 17:07:59 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 17:07:59 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 17:07:59 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:07:59 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:07:59 compute-0 nova_compute[254092]: </domain>
Nov 25 17:07:59 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 17:07:59 compute-0 nova_compute[254092]: 2025-11-25 17:07:59.587 254096 DEBUG nova.compute.manager [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Preparing to wait for external event network-vif-plugged-1e4bb581-2eb1-4909-a066-11e1096cbffa prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 17:07:59 compute-0 nova_compute[254092]: 2025-11-25 17:07:59.588 254096 DEBUG oslo_concurrency.lockutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:07:59 compute-0 nova_compute[254092]: 2025-11-25 17:07:59.588 254096 DEBUG oslo_concurrency.lockutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:07:59 compute-0 nova_compute[254092]: 2025-11-25 17:07:59.588 254096 DEBUG oslo_concurrency.lockutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:07:59 compute-0 nova_compute[254092]: 2025-11-25 17:07:59.588 254096 DEBUG nova.virt.libvirt.vif [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:07:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1742985816',display_name='tempest-TestNetworkBasicOps-server-1742985816',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1742985816',id=129,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM00WPYJfplltH8QXqH5VBLBDZhQ795ogiRVA83nhFmClvp98XVxMKrxCuE4fsMnEOta0H4tqzcEHlYCGCoNe9LzeAmLZ6Pp7hI8JV+Hz5Z8Dy6OeDo6E9caHpgF+YsYXg==',key_name='tempest-TestNetworkBasicOps-1671052',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-v513b262',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:07:55Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1e4bb581-2eb1-4909-a066-11e1096cbffa", "address": "fa:16:3e:5d:af:f0", "network": {"id": "829d1d91-46ff-47ac-8fc7-645e2a94fa93", "bridge": "br-int", "label": "tempest-network-smoke--538767373", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": 
[], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4bb581-2e", "ovs_interfaceid": "1e4bb581-2eb1-4909-a066-11e1096cbffa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:07:59 compute-0 nova_compute[254092]: 2025-11-25 17:07:59.589 254096 DEBUG nova.network.os_vif_util [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "1e4bb581-2eb1-4909-a066-11e1096cbffa", "address": "fa:16:3e:5d:af:f0", "network": {"id": "829d1d91-46ff-47ac-8fc7-645e2a94fa93", "bridge": "br-int", "label": "tempest-network-smoke--538767373", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4bb581-2e", "ovs_interfaceid": "1e4bb581-2eb1-4909-a066-11e1096cbffa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:07:59 compute-0 nova_compute[254092]: 2025-11-25 17:07:59.589 254096 DEBUG nova.network.os_vif_util [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:af:f0,bridge_name='br-int',has_traffic_filtering=True,id=1e4bb581-2eb1-4909-a066-11e1096cbffa,network=Network(829d1d91-46ff-47ac-8fc7-645e2a94fa93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e4bb581-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:07:59 compute-0 nova_compute[254092]: 2025-11-25 17:07:59.589 254096 DEBUG os_vif [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:af:f0,bridge_name='br-int',has_traffic_filtering=True,id=1e4bb581-2eb1-4909-a066-11e1096cbffa,network=Network(829d1d91-46ff-47ac-8fc7-645e2a94fa93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e4bb581-2e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:07:59 compute-0 nova_compute[254092]: 2025-11-25 17:07:59.590 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:59 compute-0 nova_compute[254092]: 2025-11-25 17:07:59.590 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:07:59 compute-0 nova_compute[254092]: 2025-11-25 17:07:59.591 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:07:59 compute-0 nova_compute[254092]: 2025-11-25 17:07:59.594 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:59 compute-0 nova_compute[254092]: 2025-11-25 17:07:59.594 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1e4bb581-2e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:07:59 compute-0 nova_compute[254092]: 2025-11-25 17:07:59.594 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1e4bb581-2e, col_values=(('external_ids', {'iface-id': '1e4bb581-2eb1-4909-a066-11e1096cbffa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5d:af:f0', 'vm-uuid': '239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:07:59 compute-0 nova_compute[254092]: 2025-11-25 17:07:59.596 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:59 compute-0 sudo[394570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:07:59 compute-0 NetworkManager[48891]: <info>  [1764090479.5975] manager: (tap1e4bb581-2e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/548)
Nov 25 17:07:59 compute-0 nova_compute[254092]: 2025-11-25 17:07:59.599 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:07:59 compute-0 sudo[394570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:07:59 compute-0 sudo[394570]: pam_unix(sudo:session): session closed for user root
Nov 25 17:07:59 compute-0 nova_compute[254092]: 2025-11-25 17:07:59.604 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:59 compute-0 nova_compute[254092]: 2025-11-25 17:07:59.605 254096 INFO os_vif [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:af:f0,bridge_name='br-int',has_traffic_filtering=True,id=1e4bb581-2eb1-4909-a066-11e1096cbffa,network=Network(829d1d91-46ff-47ac-8fc7-645e2a94fa93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e4bb581-2e')
Nov 25 17:07:59 compute-0 ceph-mon[74985]: pgmap v2533: 321 pgs: 321 active+clean; 243 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 334 KiB/s rd, 3.8 MiB/s wr, 79 op/s
Nov 25 17:07:59 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1168052522' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:07:59 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/266493158' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:07:59 compute-0 nova_compute[254092]: 2025-11-25 17:07:59.648 254096 DEBUG nova.virt.libvirt.driver [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:07:59 compute-0 nova_compute[254092]: 2025-11-25 17:07:59.648 254096 DEBUG nova.virt.libvirt.driver [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:07:59 compute-0 nova_compute[254092]: 2025-11-25 17:07:59.648 254096 DEBUG nova.virt.libvirt.driver [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No VIF found with MAC fa:16:3e:5d:af:f0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:07:59 compute-0 nova_compute[254092]: 2025-11-25 17:07:59.649 254096 INFO nova.virt.libvirt.driver [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Using config drive
Nov 25 17:07:59 compute-0 nova_compute[254092]: 2025-11-25 17:07:59.668 254096 DEBUG nova.storage.rbd_utils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:07:59 compute-0 sudo[394599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:07:59 compute-0 sudo[394599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:07:59 compute-0 sudo[394599]: pam_unix(sudo:session): session closed for user root
Nov 25 17:07:59 compute-0 sudo[394642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:07:59 compute-0 sudo[394642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:07:59 compute-0 nova_compute[254092]: 2025-11-25 17:07:59.859 254096 DEBUG oslo_concurrency.lockutils [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "600da46c-eccb-4422-9531-4fa91fdda153" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:07:59 compute-0 nova_compute[254092]: 2025-11-25 17:07:59.860 254096 DEBUG oslo_concurrency.lockutils [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "600da46c-eccb-4422-9531-4fa91fdda153" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:07:59 compute-0 nova_compute[254092]: 2025-11-25 17:07:59.860 254096 DEBUG oslo_concurrency.lockutils [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "600da46c-eccb-4422-9531-4fa91fdda153-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:07:59 compute-0 nova_compute[254092]: 2025-11-25 17:07:59.860 254096 DEBUG oslo_concurrency.lockutils [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "600da46c-eccb-4422-9531-4fa91fdda153-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:07:59 compute-0 nova_compute[254092]: 2025-11-25 17:07:59.860 254096 DEBUG oslo_concurrency.lockutils [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "600da46c-eccb-4422-9531-4fa91fdda153-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:07:59 compute-0 nova_compute[254092]: 2025-11-25 17:07:59.862 254096 INFO nova.compute.manager [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Terminating instance
Nov 25 17:07:59 compute-0 nova_compute[254092]: 2025-11-25 17:07:59.863 254096 DEBUG nova.compute.manager [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 17:07:59 compute-0 kernel: tap18b66f97-4e (unregistering): left promiscuous mode
Nov 25 17:07:59 compute-0 NetworkManager[48891]: <info>  [1764090479.9178] device (tap18b66f97-4e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:07:59 compute-0 ovn_controller[153477]: 2025-11-25T17:07:59Z|01319|binding|INFO|Releasing lport 18b66f97-4edf-40c8-b35b-66005e28732c from this chassis (sb_readonly=0)
Nov 25 17:07:59 compute-0 ovn_controller[153477]: 2025-11-25T17:07:59Z|01320|binding|INFO|Setting lport 18b66f97-4edf-40c8-b35b-66005e28732c down in Southbound
Nov 25 17:07:59 compute-0 ovn_controller[153477]: 2025-11-25T17:07:59Z|01321|binding|INFO|Removing iface tap18b66f97-4e ovn-installed in OVS
Nov 25 17:07:59 compute-0 nova_compute[254092]: 2025-11-25 17:07:59.925 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:59.933 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:97:5e 10.100.0.14'], port_security=['fa:16:3e:9b:97:5e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '600da46c-eccb-4422-9531-4fa91fdda153', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef2caff8-43ec-4364-a979-521405023410', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f19d2d0776e84c3c99c640c0b7ed3743', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'fd5575e0-e7b7-4a41-9013-0119bbe9a244', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32482432-a111-4664-9b0c-204a3e2a38fd, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=18b66f97-4edf-40c8-b35b-66005e28732c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:07:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:59.934 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 18b66f97-4edf-40c8-b35b-66005e28732c in datapath ef2caff8-43ec-4364-a979-521405023410 unbound from our chassis
Nov 25 17:07:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:59.936 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ef2caff8-43ec-4364-a979-521405023410
Nov 25 17:07:59 compute-0 nova_compute[254092]: 2025-11-25 17:07:59.947 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:07:59 compute-0 systemd[1]: machine-qemu\x2d160\x2dinstance\x2d00000080.scope: Deactivated successfully.
Nov 25 17:07:59 compute-0 systemd[1]: machine-qemu\x2d160\x2dinstance\x2d00000080.scope: Consumed 13.650s CPU time.
Nov 25 17:07:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:59.964 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[54f8866e-b238-441d-b004-3d0995df41e9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:07:59 compute-0 systemd-machined[216343]: Machine qemu-160-instance-00000080 terminated.
Nov 25 17:07:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:07:59.997 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1216f144-e829-4407-9fe2-8182a90dd464]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.000 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[36eccbf4-8503-4531-8d28-e7bf9a336e88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.029 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[79bfafd9-597d-47af-ac48-32f6fe5c48ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:00 compute-0 nova_compute[254092]: 2025-11-25 17:08:00.031 254096 INFO nova.virt.libvirt.driver [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Creating config drive at /var/lib/nova/instances/239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e/disk.config
Nov 25 17:08:00 compute-0 nova_compute[254092]: 2025-11-25 17:08:00.035 254096 DEBUG oslo_concurrency.processutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpygcj1kuc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.046 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fed02793-399a-4b2d-a8d0-87c54972b4cf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef2caff8-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:05:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 8, 'rx_bytes': 658, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 8, 'rx_bytes': 658, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 383], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688212, 'reachable_time': 34047, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 394713, 'error': None, 'target': 'ovnmeta-ef2caff8-43ec-4364-a979-521405023410', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.061 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d26df08b-f350-42c6-a2ce-cf17e646c6dc]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapef2caff8-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 688224, 'tstamp': 688224}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 394717, 'error': None, 'target': 'ovnmeta-ef2caff8-43ec-4364-a979-521405023410', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapef2caff8-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 688228, 'tstamp': 688228}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 394717, 'error': None, 'target': 'ovnmeta-ef2caff8-43ec-4364-a979-521405023410', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.062 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef2caff8-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.071 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef2caff8-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.071 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.072 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapef2caff8-40, col_values=(('external_ids', {'iface-id': '50aec7ed-4f15-4d72-87dd-48c327de28ce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.072 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:08:00 compute-0 nova_compute[254092]: 2025-11-25 17:08:00.074 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:00 compute-0 nova_compute[254092]: 2025-11-25 17:08:00.085 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:00 compute-0 nova_compute[254092]: 2025-11-25 17:08:00.096 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:00 compute-0 nova_compute[254092]: 2025-11-25 17:08:00.098 254096 INFO nova.virt.libvirt.driver [-] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Instance destroyed successfully.
Nov 25 17:08:00 compute-0 nova_compute[254092]: 2025-11-25 17:08:00.099 254096 DEBUG nova.objects.instance [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lazy-loading 'resources' on Instance uuid 600da46c-eccb-4422-9531-4fa91fdda153 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:08:00 compute-0 podman[394714]: 2025-11-25 17:08:00.105761957 +0000 UTC m=+0.047019941 container create 03c134d7d2fddaaa4ec0bd2eb0f2a8062bef30dbe5fe0964aab20a4993c94c52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:08:00 compute-0 nova_compute[254092]: 2025-11-25 17:08:00.113 254096 DEBUG nova.virt.libvirt.vif [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:07:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-gen-1-1657599294',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-gen-1-1657599294',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-804365052-gen',id=128,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJewMHMcnizEkGIStteBKbtUtrMJBt/EToyGgUtC6JrzbfMY3fgmQCA8nS1qzlnKU2lH+JSVFP+p/6sjgfKpYyiluFasU1UBquzWDp3IJiLFG1sq+FYLyyElJSOAHc9Qrw==',key_name='tempest-TestSecurityGroupsBasicOps-1818642837',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:07:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f19d2d0776e84c3c99c640c0b7ed3743',ramdisk_id='',reservation_id='r-vyqdrhti',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-804365052',owner_user_name='tempest-TestSecurityGroupsBasicOps-804365052-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:07:40Z,user_data=None,user_id='72f29c5d97464520bee19cb128258fd5',uuid=600da46c-eccb-4422-9531-4fa91fdda153,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "18b66f97-4edf-40c8-b35b-66005e28732c", "address": "fa:16:3e:9b:97:5e", "network": {"id": "ef2caff8-43ec-4364-a979-521405023410", "bridge": "br-int", "label": "tempest-network-smoke--168780319", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b66f97-4e", "ovs_interfaceid": "18b66f97-4edf-40c8-b35b-66005e28732c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:08:00 compute-0 nova_compute[254092]: 2025-11-25 17:08:00.113 254096 DEBUG nova.network.os_vif_util [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converting VIF {"id": "18b66f97-4edf-40c8-b35b-66005e28732c", "address": "fa:16:3e:9b:97:5e", "network": {"id": "ef2caff8-43ec-4364-a979-521405023410", "bridge": "br-int", "label": "tempest-network-smoke--168780319", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b66f97-4e", "ovs_interfaceid": "18b66f97-4edf-40c8-b35b-66005e28732c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:08:00 compute-0 nova_compute[254092]: 2025-11-25 17:08:00.114 254096 DEBUG nova.network.os_vif_util [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9b:97:5e,bridge_name='br-int',has_traffic_filtering=True,id=18b66f97-4edf-40c8-b35b-66005e28732c,network=Network(ef2caff8-43ec-4364-a979-521405023410),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18b66f97-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:08:00 compute-0 nova_compute[254092]: 2025-11-25 17:08:00.114 254096 DEBUG os_vif [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9b:97:5e,bridge_name='br-int',has_traffic_filtering=True,id=18b66f97-4edf-40c8-b35b-66005e28732c,network=Network(ef2caff8-43ec-4364-a979-521405023410),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18b66f97-4e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:08:00 compute-0 nova_compute[254092]: 2025-11-25 17:08:00.116 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:00 compute-0 nova_compute[254092]: 2025-11-25 17:08:00.116 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18b66f97-4e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:08:00 compute-0 nova_compute[254092]: 2025-11-25 17:08:00.118 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:00 compute-0 nova_compute[254092]: 2025-11-25 17:08:00.120 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:08:00 compute-0 nova_compute[254092]: 2025-11-25 17:08:00.122 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:00 compute-0 nova_compute[254092]: 2025-11-25 17:08:00.124 254096 INFO os_vif [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9b:97:5e,bridge_name='br-int',has_traffic_filtering=True,id=18b66f97-4edf-40c8-b35b-66005e28732c,network=Network(ef2caff8-43ec-4364-a979-521405023410),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18b66f97-4e')
Nov 25 17:08:00 compute-0 systemd[1]: Started libpod-conmon-03c134d7d2fddaaa4ec0bd2eb0f2a8062bef30dbe5fe0964aab20a4993c94c52.scope.
Nov 25 17:08:00 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:08:00 compute-0 podman[394714]: 2025-11-25 17:08:00.180885386 +0000 UTC m=+0.122143390 container init 03c134d7d2fddaaa4ec0bd2eb0f2a8062bef30dbe5fe0964aab20a4993c94c52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_goldberg, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 17:08:00 compute-0 podman[394714]: 2025-11-25 17:08:00.08768476 +0000 UTC m=+0.028942774 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:08:00 compute-0 nova_compute[254092]: 2025-11-25 17:08:00.182 254096 DEBUG nova.network.neutron [req-eec76090-511d-4e31-8028-95b6d0324cfc req-d2a5df87-19ae-4d64-8bf5-f3115537f93e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Updated VIF entry in instance network info cache for port 1e4bb581-2eb1-4909-a066-11e1096cbffa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:08:00 compute-0 nova_compute[254092]: 2025-11-25 17:08:00.183 254096 DEBUG nova.network.neutron [req-eec76090-511d-4e31-8028-95b6d0324cfc req-d2a5df87-19ae-4d64-8bf5-f3115537f93e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Updating instance_info_cache with network_info: [{"id": "1e4bb581-2eb1-4909-a066-11e1096cbffa", "address": "fa:16:3e:5d:af:f0", "network": {"id": "829d1d91-46ff-47ac-8fc7-645e2a94fa93", "bridge": "br-int", "label": "tempest-network-smoke--538767373", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4bb581-2e", "ovs_interfaceid": "1e4bb581-2eb1-4909-a066-11e1096cbffa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:08:00 compute-0 nova_compute[254092]: 2025-11-25 17:08:00.184 254096 DEBUG oslo_concurrency.processutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpygcj1kuc" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:08:00 compute-0 podman[394714]: 2025-11-25 17:08:00.192532707 +0000 UTC m=+0.133790681 container start 03c134d7d2fddaaa4ec0bd2eb0f2a8062bef30dbe5fe0964aab20a4993c94c52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 17:08:00 compute-0 podman[394714]: 2025-11-25 17:08:00.196813114 +0000 UTC m=+0.138071118 container attach 03c134d7d2fddaaa4ec0bd2eb0f2a8062bef30dbe5fe0964aab20a4993c94c52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:08:00 compute-0 sweet_goldberg[394764]: 167 167
Nov 25 17:08:00 compute-0 systemd[1]: libpod-03c134d7d2fddaaa4ec0bd2eb0f2a8062bef30dbe5fe0964aab20a4993c94c52.scope: Deactivated successfully.
Nov 25 17:08:00 compute-0 podman[394714]: 2025-11-25 17:08:00.205392129 +0000 UTC m=+0.146650123 container died 03c134d7d2fddaaa4ec0bd2eb0f2a8062bef30dbe5fe0964aab20a4993c94c52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_goldberg, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:08:00 compute-0 nova_compute[254092]: 2025-11-25 17:08:00.212 254096 DEBUG nova.storage.rbd_utils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:08:00 compute-0 nova_compute[254092]: 2025-11-25 17:08:00.223 254096 DEBUG oslo_concurrency.processutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e/disk.config 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:08:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9ae562390774e8cb4080234ee850bc8f4843542156650cca254cc75f2bde7c6-merged.mount: Deactivated successfully.
Nov 25 17:08:00 compute-0 podman[394714]: 2025-11-25 17:08:00.249593971 +0000 UTC m=+0.190851965 container remove 03c134d7d2fddaaa4ec0bd2eb0f2a8062bef30dbe5fe0964aab20a4993c94c52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_goldberg, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 17:08:00 compute-0 systemd[1]: libpod-conmon-03c134d7d2fddaaa4ec0bd2eb0f2a8062bef30dbe5fe0964aab20a4993c94c52.scope: Deactivated successfully.
Nov 25 17:08:00 compute-0 nova_compute[254092]: 2025-11-25 17:08:00.276 254096 DEBUG nova.compute.manager [req-8ab2156c-6af5-45ab-b3f6-ac8b5f1e448f req-49f520a5-18b2-4e91-a283-801002231811 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Received event network-vif-unplugged-18b66f97-4edf-40c8-b35b-66005e28732c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:08:00 compute-0 nova_compute[254092]: 2025-11-25 17:08:00.276 254096 DEBUG oslo_concurrency.lockutils [req-8ab2156c-6af5-45ab-b3f6-ac8b5f1e448f req-49f520a5-18b2-4e91-a283-801002231811 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "600da46c-eccb-4422-9531-4fa91fdda153-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:08:00 compute-0 nova_compute[254092]: 2025-11-25 17:08:00.277 254096 DEBUG oslo_concurrency.lockutils [req-8ab2156c-6af5-45ab-b3f6-ac8b5f1e448f req-49f520a5-18b2-4e91-a283-801002231811 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "600da46c-eccb-4422-9531-4fa91fdda153-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:08:00 compute-0 nova_compute[254092]: 2025-11-25 17:08:00.277 254096 DEBUG oslo_concurrency.lockutils [req-8ab2156c-6af5-45ab-b3f6-ac8b5f1e448f req-49f520a5-18b2-4e91-a283-801002231811 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "600da46c-eccb-4422-9531-4fa91fdda153-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:00 compute-0 nova_compute[254092]: 2025-11-25 17:08:00.277 254096 DEBUG nova.compute.manager [req-8ab2156c-6af5-45ab-b3f6-ac8b5f1e448f req-49f520a5-18b2-4e91-a283-801002231811 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] No waiting events found dispatching network-vif-unplugged-18b66f97-4edf-40c8-b35b-66005e28732c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:08:00 compute-0 nova_compute[254092]: 2025-11-25 17:08:00.277 254096 DEBUG nova.compute.manager [req-8ab2156c-6af5-45ab-b3f6-ac8b5f1e448f req-49f520a5-18b2-4e91-a283-801002231811 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Received event network-vif-unplugged-18b66f97-4edf-40c8-b35b-66005e28732c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 17:08:00 compute-0 nova_compute[254092]: 2025-11-25 17:08:00.278 254096 DEBUG oslo_concurrency.lockutils [req-eec76090-511d-4e31-8028-95b6d0324cfc req-d2a5df87-19ae-4d64-8bf5-f3115537f93e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:08:00 compute-0 nova_compute[254092]: 2025-11-25 17:08:00.406 254096 DEBUG oslo_concurrency.processutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e/disk.config 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:08:00 compute-0 nova_compute[254092]: 2025-11-25 17:08:00.407 254096 INFO nova.virt.libvirt.driver [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Deleting local config drive /var/lib/nova/instances/239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e/disk.config because it was imported into RBD.
Nov 25 17:08:00 compute-0 nova_compute[254092]: 2025-11-25 17:08:00.452 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2534: 321 pgs: 321 active+clean; 246 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Nov 25 17:08:00 compute-0 podman[394829]: 2025-11-25 17:08:00.480124754 +0000 UTC m=+0.061524448 container create b8dc833c3d0690ec67426fb3e01a7c6c326d694292d0c913740aa23dce3cc7df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:08:00 compute-0 kernel: tap1e4bb581-2e: entered promiscuous mode
Nov 25 17:08:00 compute-0 nova_compute[254092]: 2025-11-25 17:08:00.516 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:00 compute-0 NetworkManager[48891]: <info>  [1764090480.5180] manager: (tap1e4bb581-2e): new Tun device (/org/freedesktop/NetworkManager/Devices/549)
Nov 25 17:08:00 compute-0 ovn_controller[153477]: 2025-11-25T17:08:00Z|01322|binding|INFO|Claiming lport 1e4bb581-2eb1-4909-a066-11e1096cbffa for this chassis.
Nov 25 17:08:00 compute-0 ovn_controller[153477]: 2025-11-25T17:08:00Z|01323|binding|INFO|1e4bb581-2eb1-4909-a066-11e1096cbffa: Claiming fa:16:3e:5d:af:f0 10.100.0.7
Nov 25 17:08:00 compute-0 systemd-udevd[394690]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.526 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:af:f0 10.100.0.7'], port_security=['fa:16:3e:5d:af:f0 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-829d1d91-46ff-47ac-8fc7-645e2a94fa93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0a3b1eb5-e8b2-426c-a986-74c661517045', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08d0b9df-f126-461f-88f6-bb27189af180, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1e4bb581-2eb1-4909-a066-11e1096cbffa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.528 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1e4bb581-2eb1-4909-a066-11e1096cbffa in datapath 829d1d91-46ff-47ac-8fc7-645e2a94fa93 bound to our chassis
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.529 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 829d1d91-46ff-47ac-8fc7-645e2a94fa93
Nov 25 17:08:00 compute-0 ovn_controller[153477]: 2025-11-25T17:08:00Z|01324|binding|INFO|Setting lport 1e4bb581-2eb1-4909-a066-11e1096cbffa ovn-installed in OVS
Nov 25 17:08:00 compute-0 ovn_controller[153477]: 2025-11-25T17:08:00Z|01325|binding|INFO|Setting lport 1e4bb581-2eb1-4909-a066-11e1096cbffa up in Southbound
Nov 25 17:08:00 compute-0 systemd[1]: Started libpod-conmon-b8dc833c3d0690ec67426fb3e01a7c6c326d694292d0c913740aa23dce3cc7df.scope.
Nov 25 17:08:00 compute-0 nova_compute[254092]: 2025-11-25 17:08:00.538 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:00 compute-0 NetworkManager[48891]: <info>  [1764090480.5441] device (tap1e4bb581-2e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:08:00 compute-0 NetworkManager[48891]: <info>  [1764090480.5457] device (tap1e4bb581-2e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:08:00 compute-0 podman[394829]: 2025-11-25 17:08:00.454627725 +0000 UTC m=+0.036027439 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:08:00 compute-0 nova_compute[254092]: 2025-11-25 17:08:00.550 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.555 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[35e1d67d-81e9-4d6c-bc1b-46a5359f2c21]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.556 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap829d1d91-41 in ovnmeta-829d1d91-46ff-47ac-8fc7-645e2a94fa93 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.559 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap829d1d91-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.559 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6718f14f-de2c-4150-9839-150cea77d442]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.560 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[65c25884-dbdb-4d75-ab96-6859f2324bfd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:00 compute-0 systemd-machined[216343]: New machine qemu-161-instance-00000081.
Nov 25 17:08:00 compute-0 nova_compute[254092]: 2025-11-25 17:08:00.572 254096 INFO nova.virt.libvirt.driver [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Deleting instance files /var/lib/nova/instances/600da46c-eccb-4422-9531-4fa91fdda153_del
Nov 25 17:08:00 compute-0 nova_compute[254092]: 2025-11-25 17:08:00.573 254096 INFO nova.virt.libvirt.driver [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Deletion of /var/lib/nova/instances/600da46c-eccb-4422-9531-4fa91fdda153_del complete
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.572 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[409db817-0bf8-40a7-9a73-78797fad8c9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:00 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:08:00 compute-0 systemd[1]: Started Virtual Machine qemu-161-instance-00000081.
Nov 25 17:08:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c8f6d301ff9bccaca120494d267326ba626e25cc0cd5ff0edd7aab4202edabb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:08:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c8f6d301ff9bccaca120494d267326ba626e25cc0cd5ff0edd7aab4202edabb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:08:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c8f6d301ff9bccaca120494d267326ba626e25cc0cd5ff0edd7aab4202edabb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:08:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c8f6d301ff9bccaca120494d267326ba626e25cc0cd5ff0edd7aab4202edabb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:08:00 compute-0 podman[394829]: 2025-11-25 17:08:00.603253501 +0000 UTC m=+0.184653225 container init b8dc833c3d0690ec67426fb3e01a7c6c326d694292d0c913740aa23dce3cc7df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_varahamihira, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:08:00 compute-0 podman[394829]: 2025-11-25 17:08:00.61159201 +0000 UTC m=+0.192991684 container start b8dc833c3d0690ec67426fb3e01a7c6c326d694292d0c913740aa23dce3cc7df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.612 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[06898e71-04e5-44fc-a8b6-b407e186724f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:00 compute-0 podman[394829]: 2025-11-25 17:08:00.617353928 +0000 UTC m=+0.198753622 container attach b8dc833c3d0690ec67426fb3e01a7c6c326d694292d0c913740aa23dce3cc7df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 17:08:00 compute-0 nova_compute[254092]: 2025-11-25 17:08:00.627 254096 INFO nova.compute.manager [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Took 0.76 seconds to destroy the instance on the hypervisor.
Nov 25 17:08:00 compute-0 nova_compute[254092]: 2025-11-25 17:08:00.628 254096 DEBUG oslo.service.loopingcall [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 17:08:00 compute-0 nova_compute[254092]: 2025-11-25 17:08:00.629 254096 DEBUG nova.compute.manager [-] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 17:08:00 compute-0 nova_compute[254092]: 2025-11-25 17:08:00.629 254096 DEBUG nova.network.neutron [-] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.643 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[bb3880ac-028d-4665-a4d5-e6dd51e9bbac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:00 compute-0 NetworkManager[48891]: <info>  [1764090480.6511] manager: (tap829d1d91-40): new Veth device (/org/freedesktop/NetworkManager/Devices/550)
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.653 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[741db36e-5d47-4ee2-9c7f-fb5fe4e97123]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.695 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[60c2d008-8a76-42a5-8983-5d70f843ea17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.698 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[db8e7e9e-0523-47f8-9a9d-e92c9b376fbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:00 compute-0 NetworkManager[48891]: <info>  [1764090480.7212] device (tap829d1d91-40): carrier: link connected
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.728 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[733ae763-148b-4c24-87a7-1f766ca01ee7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.746 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c3b9840c-7b23-4d5f-ac07-2bff4be4f200]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap829d1d91-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:3f:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 390], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693831, 'reachable_time': 15069, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 394893, 'error': None, 'target': 'ovnmeta-829d1d91-46ff-47ac-8fc7-645e2a94fa93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:00 compute-0 nova_compute[254092]: 2025-11-25 17:08:00.758 254096 DEBUG nova.compute.manager [req-0de09189-df6b-4f17-ab86-994638a83245 req-9a3950d4-964d-4ff3-b452-63f39123054e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Received event network-vif-plugged-1e4bb581-2eb1-4909-a066-11e1096cbffa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:08:00 compute-0 nova_compute[254092]: 2025-11-25 17:08:00.758 254096 DEBUG oslo_concurrency.lockutils [req-0de09189-df6b-4f17-ab86-994638a83245 req-9a3950d4-964d-4ff3-b452-63f39123054e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:08:00 compute-0 nova_compute[254092]: 2025-11-25 17:08:00.759 254096 DEBUG oslo_concurrency.lockutils [req-0de09189-df6b-4f17-ab86-994638a83245 req-9a3950d4-964d-4ff3-b452-63f39123054e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:08:00 compute-0 nova_compute[254092]: 2025-11-25 17:08:00.759 254096 DEBUG oslo_concurrency.lockutils [req-0de09189-df6b-4f17-ab86-994638a83245 req-9a3950d4-964d-4ff3-b452-63f39123054e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:00 compute-0 nova_compute[254092]: 2025-11-25 17:08:00.759 254096 DEBUG nova.compute.manager [req-0de09189-df6b-4f17-ab86-994638a83245 req-9a3950d4-964d-4ff3-b452-63f39123054e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Processing event network-vif-plugged-1e4bb581-2eb1-4909-a066-11e1096cbffa _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.761 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e52d2a3e-6791-40cb-8385-1c3f33a89262]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9f:3f5b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 693831, 'tstamp': 693831}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 394894, 'error': None, 'target': 'ovnmeta-829d1d91-46ff-47ac-8fc7-645e2a94fa93', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.781 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8d6b83ac-04d7-41df-8582-8d2f89cbe346]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap829d1d91-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:3f:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 390], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693831, 'reachable_time': 15069, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 394895, 'error': None, 'target': 'ovnmeta-829d1d91-46ff-47ac-8fc7-645e2a94fa93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.811 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fab2396a-3772-4e1a-acf3-1df00bcc8034]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.876 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8cc5b7c9-93f5-4534-b264-29e42ced07e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.877 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap829d1d91-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.878 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.878 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap829d1d91-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:08:00 compute-0 NetworkManager[48891]: <info>  [1764090480.8814] manager: (tap829d1d91-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/551)
Nov 25 17:08:00 compute-0 kernel: tap829d1d91-40: entered promiscuous mode
Nov 25 17:08:00 compute-0 nova_compute[254092]: 2025-11-25 17:08:00.887 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.890 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap829d1d91-40, col_values=(('external_ids', {'iface-id': '28d8cf8b-f054-409e-bf0f-959d42d6b803'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:08:00 compute-0 ovn_controller[153477]: 2025-11-25T17:08:00Z|01326|binding|INFO|Releasing lport 28d8cf8b-f054-409e-bf0f-959d42d6b803 from this chassis (sb_readonly=0)
Nov 25 17:08:00 compute-0 nova_compute[254092]: 2025-11-25 17:08:00.891 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.893 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/829d1d91-46ff-47ac-8fc7-645e2a94fa93.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/829d1d91-46ff-47ac-8fc7-645e2a94fa93.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.904 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5e08c785-fa75-4629-8503-07676affd0c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.905 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]: global
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-829d1d91-46ff-47ac-8fc7-645e2a94fa93
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/829d1d91-46ff-47ac-8fc7-645e2a94fa93.pid.haproxy
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 829d1d91-46ff-47ac-8fc7-645e2a94fa93
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 17:08:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:00.905 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-829d1d91-46ff-47ac-8fc7-645e2a94fa93', 'env', 'PROCESS_TAG=haproxy-829d1d91-46ff-47ac-8fc7-645e2a94fa93', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/829d1d91-46ff-47ac-8fc7-645e2a94fa93.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 17:08:00 compute-0 nova_compute[254092]: 2025-11-25 17:08:00.910 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:01 compute-0 nova_compute[254092]: 2025-11-25 17:08:01.172 254096 DEBUG nova.network.neutron [-] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:08:01 compute-0 nova_compute[254092]: 2025-11-25 17:08:01.185 254096 INFO nova.compute.manager [-] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Took 0.56 seconds to deallocate network for instance.
Nov 25 17:08:01 compute-0 nova_compute[254092]: 2025-11-25 17:08:01.242 254096 DEBUG oslo_concurrency.lockutils [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:08:01 compute-0 nova_compute[254092]: 2025-11-25 17:08:01.242 254096 DEBUG oslo_concurrency.lockutils [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:08:01 compute-0 podman[394960]: 2025-11-25 17:08:01.305210485 +0000 UTC m=+0.049709465 container create 134ea456686b97a7401586f288fd284a78c694c7d1278dcbd7c05dd068d7094b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-829d1d91-46ff-47ac-8fc7-645e2a94fa93, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 25 17:08:01 compute-0 systemd[1]: Started libpod-conmon-134ea456686b97a7401586f288fd284a78c694c7d1278dcbd7c05dd068d7094b.scope.
Nov 25 17:08:01 compute-0 nova_compute[254092]: 2025-11-25 17:08:01.351 254096 DEBUG oslo_concurrency.processutils [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:08:01 compute-0 podman[394960]: 2025-11-25 17:08:01.278396859 +0000 UTC m=+0.022895859 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]: {
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:     "0": [
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:         {
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:             "devices": [
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:                 "/dev/loop3"
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:             ],
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:             "lv_name": "ceph_lv0",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:             "lv_size": "21470642176",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:             "name": "ceph_lv0",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:             "tags": {
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:                 "ceph.cluster_name": "ceph",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:                 "ceph.crush_device_class": "",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:                 "ceph.encrypted": "0",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:                 "ceph.osd_id": "0",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:                 "ceph.type": "block",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:                 "ceph.vdo": "0"
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:             },
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:             "type": "block",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:             "vg_name": "ceph_vg0"
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:         }
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:     ],
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:     "1": [
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:         {
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:             "devices": [
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:                 "/dev/loop4"
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:             ],
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:             "lv_name": "ceph_lv1",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:             "lv_size": "21470642176",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:             "name": "ceph_lv1",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:             "tags": {
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:                 "ceph.cluster_name": "ceph",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:                 "ceph.crush_device_class": "",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:                 "ceph.encrypted": "0",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:                 "ceph.osd_id": "1",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:                 "ceph.type": "block",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:                 "ceph.vdo": "0"
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:             },
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:             "type": "block",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:             "vg_name": "ceph_vg1"
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:         }
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:     ],
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:     "2": [
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:         {
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:             "devices": [
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:                 "/dev/loop5"
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:             ],
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:             "lv_name": "ceph_lv2",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:             "lv_size": "21470642176",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:             "name": "ceph_lv2",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:             "tags": {
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:                 "ceph.cluster_name": "ceph",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:                 "ceph.crush_device_class": "",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:                 "ceph.encrypted": "0",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:                 "ceph.osd_id": "2",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:                 "ceph.type": "block",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:                 "ceph.vdo": "0"
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:             },
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:             "type": "block",
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:             "vg_name": "ceph_vg2"
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:         }
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]:     ]
Nov 25 17:08:01 compute-0 blissful_varahamihira[394859]: }
Nov 25 17:08:01 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:08:01 compute-0 nova_compute[254092]: 2025-11-25 17:08:01.391 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090481.3698726, 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:08:01 compute-0 nova_compute[254092]: 2025-11-25 17:08:01.392 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] VM Started (Lifecycle Event)
Nov 25 17:08:01 compute-0 nova_compute[254092]: 2025-11-25 17:08:01.394 254096 DEBUG nova.compute.manager [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 17:08:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df8974f3be415033b929b46e3c2caddf48c1e9be811e4c30ce085d7210f3d05c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 17:08:01 compute-0 systemd[1]: libpod-b8dc833c3d0690ec67426fb3e01a7c6c326d694292d0c913740aa23dce3cc7df.scope: Deactivated successfully.
Nov 25 17:08:01 compute-0 conmon[394859]: conmon b8dc833c3d0690ec6742 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b8dc833c3d0690ec67426fb3e01a7c6c326d694292d0c913740aa23dce3cc7df.scope/container/memory.events
Nov 25 17:08:01 compute-0 nova_compute[254092]: 2025-11-25 17:08:01.405 254096 DEBUG nova.virt.libvirt.driver [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 17:08:01 compute-0 nova_compute[254092]: 2025-11-25 17:08:01.410 254096 INFO nova.virt.libvirt.driver [-] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Instance spawned successfully.
Nov 25 17:08:01 compute-0 nova_compute[254092]: 2025-11-25 17:08:01.410 254096 DEBUG nova.virt.libvirt.driver [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 17:08:01 compute-0 podman[394960]: 2025-11-25 17:08:01.413873476 +0000 UTC m=+0.158372476 container init 134ea456686b97a7401586f288fd284a78c694c7d1278dcbd7c05dd068d7094b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-829d1d91-46ff-47ac-8fc7-645e2a94fa93, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 17:08:01 compute-0 nova_compute[254092]: 2025-11-25 17:08:01.415 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:08:01 compute-0 podman[394960]: 2025-11-25 17:08:01.421556226 +0000 UTC m=+0.166055206 container start 134ea456686b97a7401586f288fd284a78c694c7d1278dcbd7c05dd068d7094b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-829d1d91-46ff-47ac-8fc7-645e2a94fa93, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 25 17:08:01 compute-0 nova_compute[254092]: 2025-11-25 17:08:01.427 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:08:01 compute-0 nova_compute[254092]: 2025-11-25 17:08:01.432 254096 DEBUG nova.virt.libvirt.driver [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:08:01 compute-0 nova_compute[254092]: 2025-11-25 17:08:01.432 254096 DEBUG nova.virt.libvirt.driver [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:08:01 compute-0 nova_compute[254092]: 2025-11-25 17:08:01.433 254096 DEBUG nova.virt.libvirt.driver [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:08:01 compute-0 nova_compute[254092]: 2025-11-25 17:08:01.433 254096 DEBUG nova.virt.libvirt.driver [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:08:01 compute-0 nova_compute[254092]: 2025-11-25 17:08:01.434 254096 DEBUG nova.virt.libvirt.driver [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:08:01 compute-0 nova_compute[254092]: 2025-11-25 17:08:01.434 254096 DEBUG nova.virt.libvirt.driver [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:08:01 compute-0 neutron-haproxy-ovnmeta-829d1d91-46ff-47ac-8fc7-645e2a94fa93[394988]: [NOTICE]   (394994) : New worker (395002) forked
Nov 25 17:08:01 compute-0 neutron-haproxy-ovnmeta-829d1d91-46ff-47ac-8fc7-645e2a94fa93[394988]: [NOTICE]   (394994) : Loading success.
Nov 25 17:08:01 compute-0 podman[394992]: 2025-11-25 17:08:01.471080155 +0000 UTC m=+0.035753582 container died b8dc833c3d0690ec67426fb3e01a7c6c326d694292d0c913740aa23dce3cc7df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_varahamihira, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:08:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c8f6d301ff9bccaca120494d267326ba626e25cc0cd5ff0edd7aab4202edabb-merged.mount: Deactivated successfully.
Nov 25 17:08:01 compute-0 podman[394992]: 2025-11-25 17:08:01.525137387 +0000 UTC m=+0.089810794 container remove b8dc833c3d0690ec67426fb3e01a7c6c326d694292d0c913740aa23dce3cc7df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_varahamihira, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:08:01 compute-0 systemd[1]: libpod-conmon-b8dc833c3d0690ec67426fb3e01a7c6c326d694292d0c913740aa23dce3cc7df.scope: Deactivated successfully.
Nov 25 17:08:01 compute-0 sudo[394642]: pam_unix(sudo:session): session closed for user root
Nov 25 17:08:01 compute-0 nova_compute[254092]: 2025-11-25 17:08:01.618 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:08:01 compute-0 nova_compute[254092]: 2025-11-25 17:08:01.619 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090481.3700488, 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:08:01 compute-0 nova_compute[254092]: 2025-11-25 17:08:01.619 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] VM Paused (Lifecycle Event)
Nov 25 17:08:01 compute-0 ceph-mon[74985]: pgmap v2534: 321 pgs: 321 active+clean; 246 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Nov 25 17:08:01 compute-0 nova_compute[254092]: 2025-11-25 17:08:01.638 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:08:01 compute-0 sudo[395035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:08:01 compute-0 nova_compute[254092]: 2025-11-25 17:08:01.647 254096 INFO nova.compute.manager [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Took 6.57 seconds to spawn the instance on the hypervisor.
Nov 25 17:08:01 compute-0 nova_compute[254092]: 2025-11-25 17:08:01.647 254096 DEBUG nova.compute.manager [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:08:01 compute-0 nova_compute[254092]: 2025-11-25 17:08:01.649 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090481.4051926, 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:08:01 compute-0 nova_compute[254092]: 2025-11-25 17:08:01.649 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] VM Resumed (Lifecycle Event)
Nov 25 17:08:01 compute-0 sudo[395035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:08:01 compute-0 sudo[395035]: pam_unix(sudo:session): session closed for user root
Nov 25 17:08:01 compute-0 nova_compute[254092]: 2025-11-25 17:08:01.674 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:08:01 compute-0 nova_compute[254092]: 2025-11-25 17:08:01.678 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:08:01 compute-0 nova_compute[254092]: 2025-11-25 17:08:01.699 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:08:01 compute-0 nova_compute[254092]: 2025-11-25 17:08:01.717 254096 INFO nova.compute.manager [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Took 7.53 seconds to build instance.
Nov 25 17:08:01 compute-0 sudo[395060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:08:01 compute-0 sudo[395060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:08:01 compute-0 sudo[395060]: pam_unix(sudo:session): session closed for user root
Nov 25 17:08:01 compute-0 nova_compute[254092]: 2025-11-25 17:08:01.735 254096 DEBUG oslo_concurrency.lockutils [None req-878caddf-6a02-4521-a54e-c9351513eba5 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:08:01 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4181830008' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:08:01 compute-0 sudo[395085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:08:01 compute-0 sudo[395085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:08:01 compute-0 sudo[395085]: pam_unix(sudo:session): session closed for user root
Nov 25 17:08:01 compute-0 nova_compute[254092]: 2025-11-25 17:08:01.803 254096 DEBUG oslo_concurrency.processutils [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:08:01 compute-0 nova_compute[254092]: 2025-11-25 17:08:01.817 254096 DEBUG nova.compute.provider_tree [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:08:01 compute-0 nova_compute[254092]: 2025-11-25 17:08:01.831 254096 DEBUG nova.scheduler.client.report [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:08:01 compute-0 nova_compute[254092]: 2025-11-25 17:08:01.866 254096 DEBUG oslo_concurrency.lockutils [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:01 compute-0 sudo[395112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:08:01 compute-0 sudo[395112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:08:01 compute-0 nova_compute[254092]: 2025-11-25 17:08:01.900 254096 INFO nova.scheduler.client.report [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Deleted allocations for instance 600da46c-eccb-4422-9531-4fa91fdda153
Nov 25 17:08:02 compute-0 nova_compute[254092]: 2025-11-25 17:08:02.010 254096 DEBUG oslo_concurrency.lockutils [None req-0d5f110e-b6b6-4f15-9aeb-b658356f0035 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "600da46c-eccb-4422-9531-4fa91fdda153" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:02 compute-0 podman[395178]: 2025-11-25 17:08:02.305786149 +0000 UTC m=+0.062924038 container create 882b0dd61111698e8daa1d5a645922d6e3970f531adf60ef1e2e901dfe142eac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_galileo, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 17:08:02 compute-0 podman[395178]: 2025-11-25 17:08:02.267629542 +0000 UTC m=+0.024767461 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:08:02 compute-0 systemd[1]: Started libpod-conmon-882b0dd61111698e8daa1d5a645922d6e3970f531adf60ef1e2e901dfe142eac.scope.
Nov 25 17:08:02 compute-0 nova_compute[254092]: 2025-11-25 17:08:02.387 254096 DEBUG nova.compute.manager [req-d63b9c8c-d0b3-40f9-bd76-1cd2f0373def req-6e486f04-fade-4fc2-9de6-421e5b8e25e8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Received event network-vif-plugged-18b66f97-4edf-40c8-b35b-66005e28732c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:08:02 compute-0 nova_compute[254092]: 2025-11-25 17:08:02.387 254096 DEBUG oslo_concurrency.lockutils [req-d63b9c8c-d0b3-40f9-bd76-1cd2f0373def req-6e486f04-fade-4fc2-9de6-421e5b8e25e8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "600da46c-eccb-4422-9531-4fa91fdda153-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:08:02 compute-0 nova_compute[254092]: 2025-11-25 17:08:02.387 254096 DEBUG oslo_concurrency.lockutils [req-d63b9c8c-d0b3-40f9-bd76-1cd2f0373def req-6e486f04-fade-4fc2-9de6-421e5b8e25e8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "600da46c-eccb-4422-9531-4fa91fdda153-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:08:02 compute-0 nova_compute[254092]: 2025-11-25 17:08:02.388 254096 DEBUG oslo_concurrency.lockutils [req-d63b9c8c-d0b3-40f9-bd76-1cd2f0373def req-6e486f04-fade-4fc2-9de6-421e5b8e25e8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "600da46c-eccb-4422-9531-4fa91fdda153-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:02 compute-0 nova_compute[254092]: 2025-11-25 17:08:02.388 254096 DEBUG nova.compute.manager [req-d63b9c8c-d0b3-40f9-bd76-1cd2f0373def req-6e486f04-fade-4fc2-9de6-421e5b8e25e8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] No waiting events found dispatching network-vif-plugged-18b66f97-4edf-40c8-b35b-66005e28732c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:08:02 compute-0 nova_compute[254092]: 2025-11-25 17:08:02.388 254096 WARNING nova.compute.manager [req-d63b9c8c-d0b3-40f9-bd76-1cd2f0373def req-6e486f04-fade-4fc2-9de6-421e5b8e25e8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Received unexpected event network-vif-plugged-18b66f97-4edf-40c8-b35b-66005e28732c for instance with vm_state deleted and task_state None.
Nov 25 17:08:02 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:08:02 compute-0 podman[395178]: 2025-11-25 17:08:02.427056715 +0000 UTC m=+0.184194614 container init 882b0dd61111698e8daa1d5a645922d6e3970f531adf60ef1e2e901dfe142eac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_galileo, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 17:08:02 compute-0 podman[395178]: 2025-11-25 17:08:02.435764154 +0000 UTC m=+0.192902063 container start 882b0dd61111698e8daa1d5a645922d6e3970f531adf60ef1e2e901dfe142eac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_galileo, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 17:08:02 compute-0 podman[395178]: 2025-11-25 17:08:02.439761774 +0000 UTC m=+0.196899683 container attach 882b0dd61111698e8daa1d5a645922d6e3970f531adf60ef1e2e901dfe142eac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_galileo, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 17:08:02 compute-0 heuristic_galileo[395194]: 167 167
Nov 25 17:08:02 compute-0 systemd[1]: libpod-882b0dd61111698e8daa1d5a645922d6e3970f531adf60ef1e2e901dfe142eac.scope: Deactivated successfully.
Nov 25 17:08:02 compute-0 podman[395178]: 2025-11-25 17:08:02.44326842 +0000 UTC m=+0.200406299 container died 882b0dd61111698e8daa1d5a645922d6e3970f531adf60ef1e2e901dfe142eac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 17:08:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2535: 321 pgs: 321 active+clean; 205 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 362 KiB/s rd, 3.9 MiB/s wr, 117 op/s
Nov 25 17:08:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4f5d1b39b261f479904d24370dacf6d7a75d1536ef2669845c62f34a184cc80-merged.mount: Deactivated successfully.
Nov 25 17:08:02 compute-0 podman[395178]: 2025-11-25 17:08:02.492830399 +0000 UTC m=+0.249968278 container remove 882b0dd61111698e8daa1d5a645922d6e3970f531adf60ef1e2e901dfe142eac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:08:02 compute-0 systemd[1]: libpod-conmon-882b0dd61111698e8daa1d5a645922d6e3970f531adf60ef1e2e901dfe142eac.scope: Deactivated successfully.
Nov 25 17:08:02 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4181830008' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:08:02 compute-0 podman[395218]: 2025-11-25 17:08:02.709753499 +0000 UTC m=+0.045037536 container create 35033e764223393b19dd779793d162791e346b4b1ee63feb4186fda16eb007cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mcclintock, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 17:08:02 compute-0 systemd[1]: Started libpod-conmon-35033e764223393b19dd779793d162791e346b4b1ee63feb4186fda16eb007cc.scope.
Nov 25 17:08:02 compute-0 podman[395218]: 2025-11-25 17:08:02.689900064 +0000 UTC m=+0.025184131 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:08:02 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:08:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d00591a997ae0998d040040fd25bc18286124a985a5b735f6a001dbbdb9e65eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:08:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d00591a997ae0998d040040fd25bc18286124a985a5b735f6a001dbbdb9e65eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:08:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d00591a997ae0998d040040fd25bc18286124a985a5b735f6a001dbbdb9e65eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:08:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d00591a997ae0998d040040fd25bc18286124a985a5b735f6a001dbbdb9e65eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:08:02 compute-0 podman[395218]: 2025-11-25 17:08:02.818339937 +0000 UTC m=+0.153624004 container init 35033e764223393b19dd779793d162791e346b4b1ee63feb4186fda16eb007cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:08:02 compute-0 podman[395218]: 2025-11-25 17:08:02.827885909 +0000 UTC m=+0.163169956 container start 35033e764223393b19dd779793d162791e346b4b1ee63feb4186fda16eb007cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 17:08:02 compute-0 podman[395218]: 2025-11-25 17:08:02.831665743 +0000 UTC m=+0.166949810 container attach 35033e764223393b19dd779793d162791e346b4b1ee63feb4186fda16eb007cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mcclintock, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 17:08:02 compute-0 nova_compute[254092]: 2025-11-25 17:08:02.865 254096 DEBUG nova.compute.manager [req-b09aed61-e4ad-453a-a28f-1bfa5a2d307b req-bbdbf3fd-31fd-47ea-bc16-c68de73fc7bc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Received event network-vif-plugged-1e4bb581-2eb1-4909-a066-11e1096cbffa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:08:02 compute-0 nova_compute[254092]: 2025-11-25 17:08:02.866 254096 DEBUG oslo_concurrency.lockutils [req-b09aed61-e4ad-453a-a28f-1bfa5a2d307b req-bbdbf3fd-31fd-47ea-bc16-c68de73fc7bc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:08:02 compute-0 nova_compute[254092]: 2025-11-25 17:08:02.867 254096 DEBUG oslo_concurrency.lockutils [req-b09aed61-e4ad-453a-a28f-1bfa5a2d307b req-bbdbf3fd-31fd-47ea-bc16-c68de73fc7bc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:08:02 compute-0 nova_compute[254092]: 2025-11-25 17:08:02.867 254096 DEBUG oslo_concurrency.lockutils [req-b09aed61-e4ad-453a-a28f-1bfa5a2d307b req-bbdbf3fd-31fd-47ea-bc16-c68de73fc7bc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:02 compute-0 nova_compute[254092]: 2025-11-25 17:08:02.867 254096 DEBUG nova.compute.manager [req-b09aed61-e4ad-453a-a28f-1bfa5a2d307b req-bbdbf3fd-31fd-47ea-bc16-c68de73fc7bc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] No waiting events found dispatching network-vif-plugged-1e4bb581-2eb1-4909-a066-11e1096cbffa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:08:02 compute-0 nova_compute[254092]: 2025-11-25 17:08:02.867 254096 WARNING nova.compute.manager [req-b09aed61-e4ad-453a-a28f-1bfa5a2d307b req-bbdbf3fd-31fd-47ea-bc16-c68de73fc7bc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Received unexpected event network-vif-plugged-1e4bb581-2eb1-4909-a066-11e1096cbffa for instance with vm_state active and task_state None.
Nov 25 17:08:02 compute-0 nova_compute[254092]: 2025-11-25 17:08:02.867 254096 DEBUG nova.compute.manager [req-b09aed61-e4ad-453a-a28f-1bfa5a2d307b req-bbdbf3fd-31fd-47ea-bc16-c68de73fc7bc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Received event network-vif-deleted-18b66f97-4edf-40c8-b35b-66005e28732c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:08:03 compute-0 ceph-mon[74985]: pgmap v2535: 321 pgs: 321 active+clean; 205 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 362 KiB/s rd, 3.9 MiB/s wr, 117 op/s
Nov 25 17:08:03 compute-0 modest_mcclintock[395235]: {
Nov 25 17:08:03 compute-0 modest_mcclintock[395235]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:08:03 compute-0 modest_mcclintock[395235]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:08:03 compute-0 modest_mcclintock[395235]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:08:03 compute-0 modest_mcclintock[395235]:         "osd_id": 1,
Nov 25 17:08:03 compute-0 modest_mcclintock[395235]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:08:03 compute-0 modest_mcclintock[395235]:         "type": "bluestore"
Nov 25 17:08:03 compute-0 modest_mcclintock[395235]:     },
Nov 25 17:08:03 compute-0 modest_mcclintock[395235]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:08:03 compute-0 modest_mcclintock[395235]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:08:03 compute-0 modest_mcclintock[395235]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:08:03 compute-0 modest_mcclintock[395235]:         "osd_id": 2,
Nov 25 17:08:03 compute-0 modest_mcclintock[395235]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:08:03 compute-0 modest_mcclintock[395235]:         "type": "bluestore"
Nov 25 17:08:03 compute-0 modest_mcclintock[395235]:     },
Nov 25 17:08:03 compute-0 modest_mcclintock[395235]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:08:03 compute-0 modest_mcclintock[395235]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:08:03 compute-0 modest_mcclintock[395235]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:08:03 compute-0 modest_mcclintock[395235]:         "osd_id": 0,
Nov 25 17:08:03 compute-0 modest_mcclintock[395235]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:08:03 compute-0 modest_mcclintock[395235]:         "type": "bluestore"
Nov 25 17:08:03 compute-0 modest_mcclintock[395235]:     }
Nov 25 17:08:03 compute-0 modest_mcclintock[395235]: }
Nov 25 17:08:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:08:03 compute-0 systemd[1]: libpod-35033e764223393b19dd779793d162791e346b4b1ee63feb4186fda16eb007cc.scope: Deactivated successfully.
Nov 25 17:08:03 compute-0 systemd[1]: libpod-35033e764223393b19dd779793d162791e346b4b1ee63feb4186fda16eb007cc.scope: Consumed 1.078s CPU time.
Nov 25 17:08:03 compute-0 podman[395268]: 2025-11-25 17:08:03.953227485 +0000 UTC m=+0.034122347 container died 35033e764223393b19dd779793d162791e346b4b1ee63feb4186fda16eb007cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mcclintock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 17:08:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-d00591a997ae0998d040040fd25bc18286124a985a5b735f6a001dbbdb9e65eb-merged.mount: Deactivated successfully.
Nov 25 17:08:04 compute-0 podman[395268]: 2025-11-25 17:08:04.030743942 +0000 UTC m=+0.111638814 container remove 35033e764223393b19dd779793d162791e346b4b1ee63feb4186fda16eb007cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:08:04 compute-0 systemd[1]: libpod-conmon-35033e764223393b19dd779793d162791e346b4b1ee63feb4186fda16eb007cc.scope: Deactivated successfully.
Nov 25 17:08:04 compute-0 sudo[395112]: pam_unix(sudo:session): session closed for user root
Nov 25 17:08:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:08:04 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:08:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:08:04 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:08:04 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 9ca37df6-69ce-428f-a820-f8c37a06f52f does not exist
Nov 25 17:08:04 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev c288f77c-0f83-4c0a-b20f-78b30e66e99c does not exist
Nov 25 17:08:04 compute-0 sudo[395283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:08:04 compute-0 sudo[395283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:08:04 compute-0 sudo[395283]: pam_unix(sudo:session): session closed for user root
Nov 25 17:08:04 compute-0 sudo[395308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:08:04 compute-0 sudo[395308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:08:04 compute-0 sudo[395308]: pam_unix(sudo:session): session closed for user root
Nov 25 17:08:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2536: 321 pgs: 321 active+clean; 205 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 362 KiB/s rd, 3.9 MiB/s wr, 117 op/s
Nov 25 17:08:04 compute-0 nova_compute[254092]: 2025-11-25 17:08:04.842 254096 DEBUG nova.compute.manager [req-b1f39396-9d28-4a52-831d-1e81fa3d6234 req-65f93528-16df-475d-a81e-90fddc66f93f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Received event network-changed-54e9d2ac-a4ca-41fe-9c2e-76eba828c99c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:08:04 compute-0 nova_compute[254092]: 2025-11-25 17:08:04.842 254096 DEBUG nova.compute.manager [req-b1f39396-9d28-4a52-831d-1e81fa3d6234 req-65f93528-16df-475d-a81e-90fddc66f93f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Refreshing instance network info cache due to event network-changed-54e9d2ac-a4ca-41fe-9c2e-76eba828c99c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:08:04 compute-0 nova_compute[254092]: 2025-11-25 17:08:04.842 254096 DEBUG oslo_concurrency.lockutils [req-b1f39396-9d28-4a52-831d-1e81fa3d6234 req-65f93528-16df-475d-a81e-90fddc66f93f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-bd1d0296-ae28-4eac-9f38-80e6ca17dbff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:08:04 compute-0 nova_compute[254092]: 2025-11-25 17:08:04.843 254096 DEBUG oslo_concurrency.lockutils [req-b1f39396-9d28-4a52-831d-1e81fa3d6234 req-65f93528-16df-475d-a81e-90fddc66f93f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-bd1d0296-ae28-4eac-9f38-80e6ca17dbff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:08:04 compute-0 nova_compute[254092]: 2025-11-25 17:08:04.843 254096 DEBUG nova.network.neutron [req-b1f39396-9d28-4a52-831d-1e81fa3d6234 req-65f93528-16df-475d-a81e-90fddc66f93f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Refreshing network info cache for port 54e9d2ac-a4ca-41fe-9c2e-76eba828c99c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:08:05 compute-0 nova_compute[254092]: 2025-11-25 17:08:05.018 254096 DEBUG oslo_concurrency.lockutils [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "bd1d0296-ae28-4eac-9f38-80e6ca17dbff" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:08:05 compute-0 nova_compute[254092]: 2025-11-25 17:08:05.019 254096 DEBUG oslo_concurrency.lockutils [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "bd1d0296-ae28-4eac-9f38-80e6ca17dbff" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:08:05 compute-0 nova_compute[254092]: 2025-11-25 17:08:05.019 254096 DEBUG oslo_concurrency.lockutils [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "bd1d0296-ae28-4eac-9f38-80e6ca17dbff-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:08:05 compute-0 nova_compute[254092]: 2025-11-25 17:08:05.019 254096 DEBUG oslo_concurrency.lockutils [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "bd1d0296-ae28-4eac-9f38-80e6ca17dbff-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:08:05 compute-0 nova_compute[254092]: 2025-11-25 17:08:05.019 254096 DEBUG oslo_concurrency.lockutils [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "bd1d0296-ae28-4eac-9f38-80e6ca17dbff-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:05 compute-0 nova_compute[254092]: 2025-11-25 17:08:05.020 254096 INFO nova.compute.manager [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Terminating instance
Nov 25 17:08:05 compute-0 nova_compute[254092]: 2025-11-25 17:08:05.021 254096 DEBUG nova.compute.manager [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 17:08:05 compute-0 kernel: tap54e9d2ac-a4 (unregistering): left promiscuous mode
Nov 25 17:08:05 compute-0 NetworkManager[48891]: <info>  [1764090485.0831] device (tap54e9d2ac-a4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:08:05 compute-0 nova_compute[254092]: 2025-11-25 17:08:05.101 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:05 compute-0 ovn_controller[153477]: 2025-11-25T17:08:05Z|01327|binding|INFO|Releasing lport 54e9d2ac-a4ca-41fe-9c2e-76eba828c99c from this chassis (sb_readonly=0)
Nov 25 17:08:05 compute-0 ovn_controller[153477]: 2025-11-25T17:08:05Z|01328|binding|INFO|Setting lport 54e9d2ac-a4ca-41fe-9c2e-76eba828c99c down in Southbound
Nov 25 17:08:05 compute-0 ovn_controller[153477]: 2025-11-25T17:08:05Z|01329|binding|INFO|Removing iface tap54e9d2ac-a4 ovn-installed in OVS
Nov 25 17:08:05 compute-0 nova_compute[254092]: 2025-11-25 17:08:05.106 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:05 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:08:05 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:08:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:05.117 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:40:a3 10.100.0.6'], port_security=['fa:16:3e:09:40:a3 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'bd1d0296-ae28-4eac-9f38-80e6ca17dbff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef2caff8-43ec-4364-a979-521405023410', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f19d2d0776e84c3c99c640c0b7ed3743', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1b953fef-d250-41a4-af84-97bf9c7f4822 779da3fb-8b66-4cf7-a59e-bc7311564ace', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32482432-a111-4664-9b0c-204a3e2a38fd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=54e9d2ac-a4ca-41fe-9c2e-76eba828c99c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:08:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:05.118 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 54e9d2ac-a4ca-41fe-9c2e-76eba828c99c in datapath ef2caff8-43ec-4364-a979-521405023410 unbound from our chassis
Nov 25 17:08:05 compute-0 nova_compute[254092]: 2025-11-25 17:08:05.118 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:05.119 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ef2caff8-43ec-4364-a979-521405023410, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 17:08:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:05.120 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[46a38125-57a9-49c6-8542-96f8488d23f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:05.121 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ef2caff8-43ec-4364-a979-521405023410 namespace which is not needed anymore
Nov 25 17:08:05 compute-0 nova_compute[254092]: 2025-11-25 17:08:05.139 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:05 compute-0 systemd[1]: machine-qemu\x2d158\x2dinstance\x2d0000007e.scope: Deactivated successfully.
Nov 25 17:08:05 compute-0 systemd[1]: machine-qemu\x2d158\x2dinstance\x2d0000007e.scope: Consumed 16.583s CPU time.
Nov 25 17:08:05 compute-0 systemd-machined[216343]: Machine qemu-158-instance-0000007e terminated.
Nov 25 17:08:05 compute-0 nova_compute[254092]: 2025-11-25 17:08:05.262 254096 INFO nova.virt.libvirt.driver [-] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Instance destroyed successfully.
Nov 25 17:08:05 compute-0 nova_compute[254092]: 2025-11-25 17:08:05.262 254096 DEBUG nova.objects.instance [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lazy-loading 'resources' on Instance uuid bd1d0296-ae28-4eac-9f38-80e6ca17dbff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:08:05 compute-0 nova_compute[254092]: 2025-11-25 17:08:05.273 254096 DEBUG nova.virt.libvirt.vif [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:06:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1036615353',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-804365052-access_point-1036615353',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-804365052-acc',id=126,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJewMHMcnizEkGIStteBKbtUtrMJBt/EToyGgUtC6JrzbfMY3fgmQCA8nS1qzlnKU2lH+JSVFP+p/6sjgfKpYyiluFasU1UBquzWDp3IJiLFG1sq+FYLyyElJSOAHc9Qrw==',key_name='tempest-TestSecurityGroupsBasicOps-1818642837',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:07:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f19d2d0776e84c3c99c640c0b7ed3743',ramdisk_id='',reservation_id='r-uvh0a05a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-804365052',owner_user_name='tempest-TestSecurityGroupsBasicOps-804365052-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:07:05Z,user_data=None,user_id='72f29c5d97464520bee19cb128258fd5',uuid=bd1d0296-ae28-4eac-9f38-80e6ca17dbff,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "address": "fa:16:3e:09:40:a3", "network": {"id": "ef2caff8-43ec-4364-a979-521405023410", "bridge": "br-int", "label": "tempest-network-smoke--168780319", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e9d2ac-a4", "ovs_interfaceid": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:08:05 compute-0 nova_compute[254092]: 2025-11-25 17:08:05.273 254096 DEBUG nova.network.os_vif_util [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converting VIF {"id": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "address": "fa:16:3e:09:40:a3", "network": {"id": "ef2caff8-43ec-4364-a979-521405023410", "bridge": "br-int", "label": "tempest-network-smoke--168780319", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e9d2ac-a4", "ovs_interfaceid": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:08:05 compute-0 nova_compute[254092]: 2025-11-25 17:08:05.274 254096 DEBUG nova.network.os_vif_util [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:09:40:a3,bridge_name='br-int',has_traffic_filtering=True,id=54e9d2ac-a4ca-41fe-9c2e-76eba828c99c,network=Network(ef2caff8-43ec-4364-a979-521405023410),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54e9d2ac-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:08:05 compute-0 nova_compute[254092]: 2025-11-25 17:08:05.274 254096 DEBUG os_vif [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:09:40:a3,bridge_name='br-int',has_traffic_filtering=True,id=54e9d2ac-a4ca-41fe-9c2e-76eba828c99c,network=Network(ef2caff8-43ec-4364-a979-521405023410),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54e9d2ac-a4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:08:05 compute-0 nova_compute[254092]: 2025-11-25 17:08:05.276 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:05 compute-0 nova_compute[254092]: 2025-11-25 17:08:05.277 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap54e9d2ac-a4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:08:05 compute-0 nova_compute[254092]: 2025-11-25 17:08:05.278 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:05 compute-0 nova_compute[254092]: 2025-11-25 17:08:05.279 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:05 compute-0 nova_compute[254092]: 2025-11-25 17:08:05.282 254096 INFO os_vif [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:09:40:a3,bridge_name='br-int',has_traffic_filtering=True,id=54e9d2ac-a4ca-41fe-9c2e-76eba828c99c,network=Network(ef2caff8-43ec-4364-a979-521405023410),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54e9d2ac-a4')
Nov 25 17:08:05 compute-0 neutron-haproxy-ovnmeta-ef2caff8-43ec-4364-a979-521405023410[392775]: [NOTICE]   (392780) : haproxy version is 2.8.14-c23fe91
Nov 25 17:08:05 compute-0 neutron-haproxy-ovnmeta-ef2caff8-43ec-4364-a979-521405023410[392775]: [NOTICE]   (392780) : path to executable is /usr/sbin/haproxy
Nov 25 17:08:05 compute-0 neutron-haproxy-ovnmeta-ef2caff8-43ec-4364-a979-521405023410[392775]: [WARNING]  (392780) : Exiting Master process...
Nov 25 17:08:05 compute-0 neutron-haproxy-ovnmeta-ef2caff8-43ec-4364-a979-521405023410[392775]: [ALERT]    (392780) : Current worker (392782) exited with code 143 (Terminated)
Nov 25 17:08:05 compute-0 neutron-haproxy-ovnmeta-ef2caff8-43ec-4364-a979-521405023410[392775]: [WARNING]  (392780) : All workers exited. Exiting... (0)
Nov 25 17:08:05 compute-0 systemd[1]: libpod-a232e357ff178d3fae9e120dd480f7eb5719f326bae0fa556d3df6152413c7d0.scope: Deactivated successfully.
Nov 25 17:08:05 compute-0 podman[395358]: 2025-11-25 17:08:05.325025541 +0000 UTC m=+0.071364529 container died a232e357ff178d3fae9e120dd480f7eb5719f326bae0fa556d3df6152413c7d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ef2caff8-43ec-4364-a979-521405023410, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:08:05 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a232e357ff178d3fae9e120dd480f7eb5719f326bae0fa556d3df6152413c7d0-userdata-shm.mount: Deactivated successfully.
Nov 25 17:08:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-e69c52ab9da1464e9256b97d314ba04db47e1086bb356a0791f8cd44f4454be6-merged.mount: Deactivated successfully.
Nov 25 17:08:05 compute-0 podman[395358]: 2025-11-25 17:08:05.369869631 +0000 UTC m=+0.116208609 container cleanup a232e357ff178d3fae9e120dd480f7eb5719f326bae0fa556d3df6152413c7d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ef2caff8-43ec-4364-a979-521405023410, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 17:08:05 compute-0 systemd[1]: libpod-conmon-a232e357ff178d3fae9e120dd480f7eb5719f326bae0fa556d3df6152413c7d0.scope: Deactivated successfully.
Nov 25 17:08:05 compute-0 podman[395418]: 2025-11-25 17:08:05.449480874 +0000 UTC m=+0.052574933 container remove a232e357ff178d3fae9e120dd480f7eb5719f326bae0fa556d3df6152413c7d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ef2caff8-43ec-4364-a979-521405023410, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 25 17:08:05 compute-0 nova_compute[254092]: 2025-11-25 17:08:05.453 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:05.457 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4ff464f6-6486-4c29-b0be-dc0fb3ef0a31]: (4, ('Tue Nov 25 05:08:05 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ef2caff8-43ec-4364-a979-521405023410 (a232e357ff178d3fae9e120dd480f7eb5719f326bae0fa556d3df6152413c7d0)\na232e357ff178d3fae9e120dd480f7eb5719f326bae0fa556d3df6152413c7d0\nTue Nov 25 05:08:05 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ef2caff8-43ec-4364-a979-521405023410 (a232e357ff178d3fae9e120dd480f7eb5719f326bae0fa556d3df6152413c7d0)\na232e357ff178d3fae9e120dd480f7eb5719f326bae0fa556d3df6152413c7d0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:05.459 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e220fde4-54ea-4ab9-9f97-aef0503b0b75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:05.460 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef2caff8-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:08:05 compute-0 kernel: tapef2caff8-40: left promiscuous mode
Nov 25 17:08:05 compute-0 nova_compute[254092]: 2025-11-25 17:08:05.462 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:05 compute-0 nova_compute[254092]: 2025-11-25 17:08:05.476 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:05.481 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e743f26e-6dca-4b20-b112-1a28fc4b5873]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:05.499 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6600c6e7-6d9f-4cc3-b1dd-1b993062a767]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:05.502 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[26a355b8-dab9-4170-9870-f7353cfaa9c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:05.527 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f7f5b518-bf50-4536-a930-1e01a9589d04]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688203, 'reachable_time': 38596, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395433, 'error': None, 'target': 'ovnmeta-ef2caff8-43ec-4364-a979-521405023410', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:05.531 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ef2caff8-43ec-4364-a979-521405023410 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 17:08:05 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:05.531 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[238a4dfa-cbf5-43d1-b641-973dbd7740d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:05 compute-0 systemd[1]: run-netns-ovnmeta\x2def2caff8\x2d43ec\x2d4364\x2da979\x2d521405023410.mount: Deactivated successfully.
Nov 25 17:08:05 compute-0 nova_compute[254092]: 2025-11-25 17:08:05.634 254096 INFO nova.virt.libvirt.driver [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Deleting instance files /var/lib/nova/instances/bd1d0296-ae28-4eac-9f38-80e6ca17dbff_del
Nov 25 17:08:05 compute-0 nova_compute[254092]: 2025-11-25 17:08:05.635 254096 INFO nova.virt.libvirt.driver [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Deletion of /var/lib/nova/instances/bd1d0296-ae28-4eac-9f38-80e6ca17dbff_del complete
Nov 25 17:08:05 compute-0 nova_compute[254092]: 2025-11-25 17:08:05.704 254096 INFO nova.compute.manager [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Took 0.68 seconds to destroy the instance on the hypervisor.
Nov 25 17:08:05 compute-0 nova_compute[254092]: 2025-11-25 17:08:05.705 254096 DEBUG oslo.service.loopingcall [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 17:08:05 compute-0 nova_compute[254092]: 2025-11-25 17:08:05.705 254096 DEBUG nova.compute.manager [-] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 17:08:05 compute-0 nova_compute[254092]: 2025-11-25 17:08:05.706 254096 DEBUG nova.network.neutron [-] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 17:08:05 compute-0 nova_compute[254092]: 2025-11-25 17:08:05.955 254096 DEBUG nova.compute.manager [req-6b01f9a2-35b5-4b7b-aa2e-4d9eae6848bd req-fc4ab9b0-c60d-4f67-b0eb-c4ad26a0de19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Received event network-vif-unplugged-54e9d2ac-a4ca-41fe-9c2e-76eba828c99c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:08:05 compute-0 nova_compute[254092]: 2025-11-25 17:08:05.955 254096 DEBUG oslo_concurrency.lockutils [req-6b01f9a2-35b5-4b7b-aa2e-4d9eae6848bd req-fc4ab9b0-c60d-4f67-b0eb-c4ad26a0de19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "bd1d0296-ae28-4eac-9f38-80e6ca17dbff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:08:05 compute-0 nova_compute[254092]: 2025-11-25 17:08:05.957 254096 DEBUG oslo_concurrency.lockutils [req-6b01f9a2-35b5-4b7b-aa2e-4d9eae6848bd req-fc4ab9b0-c60d-4f67-b0eb-c4ad26a0de19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "bd1d0296-ae28-4eac-9f38-80e6ca17dbff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:08:05 compute-0 nova_compute[254092]: 2025-11-25 17:08:05.957 254096 DEBUG oslo_concurrency.lockutils [req-6b01f9a2-35b5-4b7b-aa2e-4d9eae6848bd req-fc4ab9b0-c60d-4f67-b0eb-c4ad26a0de19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "bd1d0296-ae28-4eac-9f38-80e6ca17dbff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:05 compute-0 nova_compute[254092]: 2025-11-25 17:08:05.957 254096 DEBUG nova.compute.manager [req-6b01f9a2-35b5-4b7b-aa2e-4d9eae6848bd req-fc4ab9b0-c60d-4f67-b0eb-c4ad26a0de19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] No waiting events found dispatching network-vif-unplugged-54e9d2ac-a4ca-41fe-9c2e-76eba828c99c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:08:05 compute-0 nova_compute[254092]: 2025-11-25 17:08:05.958 254096 DEBUG nova.compute.manager [req-6b01f9a2-35b5-4b7b-aa2e-4d9eae6848bd req-fc4ab9b0-c60d-4f67-b0eb-c4ad26a0de19 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Received event network-vif-unplugged-54e9d2ac-a4ca-41fe-9c2e-76eba828c99c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 17:08:06 compute-0 ceph-mon[74985]: pgmap v2536: 321 pgs: 321 active+clean; 205 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 362 KiB/s rd, 3.9 MiB/s wr, 117 op/s
Nov 25 17:08:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2537: 321 pgs: 321 active+clean; 131 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 207 op/s
Nov 25 17:08:07 compute-0 nova_compute[254092]: 2025-11-25 17:08:07.016 254096 DEBUG nova.network.neutron [-] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:08:07 compute-0 nova_compute[254092]: 2025-11-25 17:08:07.031 254096 INFO nova.compute.manager [-] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Took 1.33 seconds to deallocate network for instance.
Nov 25 17:08:07 compute-0 nova_compute[254092]: 2025-11-25 17:08:07.071 254096 DEBUG oslo_concurrency.lockutils [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:08:07 compute-0 nova_compute[254092]: 2025-11-25 17:08:07.071 254096 DEBUG oslo_concurrency.lockutils [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:08:07 compute-0 nova_compute[254092]: 2025-11-25 17:08:07.117 254096 DEBUG nova.network.neutron [req-b1f39396-9d28-4a52-831d-1e81fa3d6234 req-65f93528-16df-475d-a81e-90fddc66f93f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Updated VIF entry in instance network info cache for port 54e9d2ac-a4ca-41fe-9c2e-76eba828c99c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:08:07 compute-0 nova_compute[254092]: 2025-11-25 17:08:07.118 254096 DEBUG nova.network.neutron [req-b1f39396-9d28-4a52-831d-1e81fa3d6234 req-65f93528-16df-475d-a81e-90fddc66f93f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Updating instance_info_cache with network_info: [{"id": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "address": "fa:16:3e:09:40:a3", "network": {"id": "ef2caff8-43ec-4364-a979-521405023410", "bridge": "br-int", "label": "tempest-network-smoke--168780319", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f19d2d0776e84c3c99c640c0b7ed3743", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54e9d2ac-a4", "ovs_interfaceid": "54e9d2ac-a4ca-41fe-9c2e-76eba828c99c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:08:07 compute-0 nova_compute[254092]: 2025-11-25 17:08:07.137 254096 DEBUG oslo_concurrency.lockutils [req-b1f39396-9d28-4a52-831d-1e81fa3d6234 req-65f93528-16df-475d-a81e-90fddc66f93f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-bd1d0296-ae28-4eac-9f38-80e6ca17dbff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:08:07 compute-0 nova_compute[254092]: 2025-11-25 17:08:07.161 254096 DEBUG oslo_concurrency.processutils [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:08:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:08:07 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/94981960' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:08:07 compute-0 nova_compute[254092]: 2025-11-25 17:08:07.619 254096 DEBUG oslo_concurrency.processutils [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:08:07 compute-0 nova_compute[254092]: 2025-11-25 17:08:07.625 254096 DEBUG nova.compute.provider_tree [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:08:07 compute-0 nova_compute[254092]: 2025-11-25 17:08:07.639 254096 DEBUG nova.scheduler.client.report [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:08:07 compute-0 nova_compute[254092]: 2025-11-25 17:08:07.675 254096 DEBUG oslo_concurrency.lockutils [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:07 compute-0 nova_compute[254092]: 2025-11-25 17:08:07.707 254096 INFO nova.scheduler.client.report [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Deleted allocations for instance bd1d0296-ae28-4eac-9f38-80e6ca17dbff
Nov 25 17:08:07 compute-0 nova_compute[254092]: 2025-11-25 17:08:07.774 254096 DEBUG oslo_concurrency.lockutils [None req-821bc65e-bb0e-4b00-af18-d2a7ba8ad604 72f29c5d97464520bee19cb128258fd5 f19d2d0776e84c3c99c640c0b7ed3743 - - default default] Lock "bd1d0296-ae28-4eac-9f38-80e6ca17dbff" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:08 compute-0 nova_compute[254092]: 2025-11-25 17:08:08.066 254096 DEBUG nova.compute.manager [req-f8ab15c8-dd23-40ee-adfd-a6a64434f5ad req-694ad333-dbea-4c18-91aa-2969cbb3b310 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Received event network-vif-plugged-54e9d2ac-a4ca-41fe-9c2e-76eba828c99c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:08:08 compute-0 nova_compute[254092]: 2025-11-25 17:08:08.067 254096 DEBUG oslo_concurrency.lockutils [req-f8ab15c8-dd23-40ee-adfd-a6a64434f5ad req-694ad333-dbea-4c18-91aa-2969cbb3b310 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "bd1d0296-ae28-4eac-9f38-80e6ca17dbff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:08:08 compute-0 nova_compute[254092]: 2025-11-25 17:08:08.067 254096 DEBUG oslo_concurrency.lockutils [req-f8ab15c8-dd23-40ee-adfd-a6a64434f5ad req-694ad333-dbea-4c18-91aa-2969cbb3b310 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "bd1d0296-ae28-4eac-9f38-80e6ca17dbff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:08:08 compute-0 nova_compute[254092]: 2025-11-25 17:08:08.068 254096 DEBUG oslo_concurrency.lockutils [req-f8ab15c8-dd23-40ee-adfd-a6a64434f5ad req-694ad333-dbea-4c18-91aa-2969cbb3b310 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "bd1d0296-ae28-4eac-9f38-80e6ca17dbff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:08 compute-0 nova_compute[254092]: 2025-11-25 17:08:08.068 254096 DEBUG nova.compute.manager [req-f8ab15c8-dd23-40ee-adfd-a6a64434f5ad req-694ad333-dbea-4c18-91aa-2969cbb3b310 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] No waiting events found dispatching network-vif-plugged-54e9d2ac-a4ca-41fe-9c2e-76eba828c99c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:08:08 compute-0 nova_compute[254092]: 2025-11-25 17:08:08.068 254096 WARNING nova.compute.manager [req-f8ab15c8-dd23-40ee-adfd-a6a64434f5ad req-694ad333-dbea-4c18-91aa-2969cbb3b310 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Received unexpected event network-vif-plugged-54e9d2ac-a4ca-41fe-9c2e-76eba828c99c for instance with vm_state deleted and task_state None.
Nov 25 17:08:08 compute-0 nova_compute[254092]: 2025-11-25 17:08:08.069 254096 DEBUG nova.compute.manager [req-f8ab15c8-dd23-40ee-adfd-a6a64434f5ad req-694ad333-dbea-4c18-91aa-2969cbb3b310 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Received event network-vif-deleted-54e9d2ac-a4ca-41fe-9c2e-76eba828c99c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:08:08 compute-0 nova_compute[254092]: 2025-11-25 17:08:08.069 254096 INFO nova.compute.manager [req-f8ab15c8-dd23-40ee-adfd-a6a64434f5ad req-694ad333-dbea-4c18-91aa-2969cbb3b310 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Neutron deleted interface 54e9d2ac-a4ca-41fe-9c2e-76eba828c99c; detaching it from the instance and deleting it from the info cache
Nov 25 17:08:08 compute-0 nova_compute[254092]: 2025-11-25 17:08:08.069 254096 DEBUG nova.network.neutron [req-f8ab15c8-dd23-40ee-adfd-a6a64434f5ad req-694ad333-dbea-4c18-91aa-2969cbb3b310 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Nov 25 17:08:08 compute-0 nova_compute[254092]: 2025-11-25 17:08:08.072 254096 DEBUG nova.compute.manager [req-f8ab15c8-dd23-40ee-adfd-a6a64434f5ad req-694ad333-dbea-4c18-91aa-2969cbb3b310 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Detach interface failed, port_id=54e9d2ac-a4ca-41fe-9c2e-76eba828c99c, reason: Instance bd1d0296-ae28-4eac-9f38-80e6ca17dbff could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 25 17:08:08 compute-0 nova_compute[254092]: 2025-11-25 17:08:08.073 254096 DEBUG nova.compute.manager [req-f8ab15c8-dd23-40ee-adfd-a6a64434f5ad req-694ad333-dbea-4c18-91aa-2969cbb3b310 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Received event network-changed-1e4bb581-2eb1-4909-a066-11e1096cbffa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:08:08 compute-0 nova_compute[254092]: 2025-11-25 17:08:08.073 254096 DEBUG nova.compute.manager [req-f8ab15c8-dd23-40ee-adfd-a6a64434f5ad req-694ad333-dbea-4c18-91aa-2969cbb3b310 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Refreshing instance network info cache due to event network-changed-1e4bb581-2eb1-4909-a066-11e1096cbffa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:08:08 compute-0 nova_compute[254092]: 2025-11-25 17:08:08.073 254096 DEBUG oslo_concurrency.lockutils [req-f8ab15c8-dd23-40ee-adfd-a6a64434f5ad req-694ad333-dbea-4c18-91aa-2969cbb3b310 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:08:08 compute-0 nova_compute[254092]: 2025-11-25 17:08:08.073 254096 DEBUG oslo_concurrency.lockutils [req-f8ab15c8-dd23-40ee-adfd-a6a64434f5ad req-694ad333-dbea-4c18-91aa-2969cbb3b310 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:08:08 compute-0 nova_compute[254092]: 2025-11-25 17:08:08.074 254096 DEBUG nova.network.neutron [req-f8ab15c8-dd23-40ee-adfd-a6a64434f5ad req-694ad333-dbea-4c18-91aa-2969cbb3b310 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Refreshing network info cache for port 1e4bb581-2eb1-4909-a066-11e1096cbffa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:08:08 compute-0 ceph-mon[74985]: pgmap v2537: 321 pgs: 321 active+clean; 131 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 207 op/s
Nov 25 17:08:08 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/94981960' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:08:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2538: 321 pgs: 321 active+clean; 131 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 121 KiB/s wr, 128 op/s
Nov 25 17:08:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:08:09 compute-0 nova_compute[254092]: 2025-11-25 17:08:09.641 254096 DEBUG nova.network.neutron [req-f8ab15c8-dd23-40ee-adfd-a6a64434f5ad req-694ad333-dbea-4c18-91aa-2969cbb3b310 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Updated VIF entry in instance network info cache for port 1e4bb581-2eb1-4909-a066-11e1096cbffa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:08:09 compute-0 nova_compute[254092]: 2025-11-25 17:08:09.642 254096 DEBUG nova.network.neutron [req-f8ab15c8-dd23-40ee-adfd-a6a64434f5ad req-694ad333-dbea-4c18-91aa-2969cbb3b310 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Updating instance_info_cache with network_info: [{"id": "1e4bb581-2eb1-4909-a066-11e1096cbffa", "address": "fa:16:3e:5d:af:f0", "network": {"id": "829d1d91-46ff-47ac-8fc7-645e2a94fa93", "bridge": "br-int", "label": "tempest-network-smoke--538767373", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4bb581-2e", "ovs_interfaceid": "1e4bb581-2eb1-4909-a066-11e1096cbffa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:08:09 compute-0 nova_compute[254092]: 2025-11-25 17:08:09.662 254096 DEBUG oslo_concurrency.lockutils [req-f8ab15c8-dd23-40ee-adfd-a6a64434f5ad req-694ad333-dbea-4c18-91aa-2969cbb3b310 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:08:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:08:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:08:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:08:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:08:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:08:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:08:10 compute-0 ceph-mon[74985]: pgmap v2538: 321 pgs: 321 active+clean; 131 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 121 KiB/s wr, 128 op/s
Nov 25 17:08:10 compute-0 nova_compute[254092]: 2025-11-25 17:08:10.281 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:10 compute-0 nova_compute[254092]: 2025-11-25 17:08:10.456 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2539: 321 pgs: 321 active+clean; 88 MiB data, 896 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 121 KiB/s wr, 141 op/s
Nov 25 17:08:10 compute-0 ovn_controller[153477]: 2025-11-25T17:08:10Z|01330|binding|INFO|Releasing lport 28d8cf8b-f054-409e-bf0f-959d42d6b803 from this chassis (sb_readonly=0)
Nov 25 17:08:11 compute-0 nova_compute[254092]: 2025-11-25 17:08:11.037 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:12 compute-0 ceph-mon[74985]: pgmap v2539: 321 pgs: 321 active+clean; 88 MiB data, 896 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 121 KiB/s wr, 141 op/s
Nov 25 17:08:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2540: 321 pgs: 321 active+clean; 88 MiB data, 881 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 26 KiB/s wr, 129 op/s
Nov 25 17:08:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:13.645 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:08:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:13.646 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:08:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:13.646 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:13 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:08:13 compute-0 ovn_controller[153477]: 2025-11-25T17:08:13Z|00154|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5d:af:f0 10.100.0.7
Nov 25 17:08:13 compute-0 ovn_controller[153477]: 2025-11-25T17:08:13Z|00155|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5d:af:f0 10.100.0.7
Nov 25 17:08:14 compute-0 ceph-mon[74985]: pgmap v2540: 321 pgs: 321 active+clean; 88 MiB data, 881 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 26 KiB/s wr, 129 op/s
Nov 25 17:08:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2541: 321 pgs: 321 active+clean; 88 MiB data, 881 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 104 op/s
Nov 25 17:08:15 compute-0 nova_compute[254092]: 2025-11-25 17:08:15.096 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090480.0954783, 600da46c-eccb-4422-9531-4fa91fdda153 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:08:15 compute-0 nova_compute[254092]: 2025-11-25 17:08:15.096 254096 INFO nova.compute.manager [-] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] VM Stopped (Lifecycle Event)
Nov 25 17:08:15 compute-0 nova_compute[254092]: 2025-11-25 17:08:15.124 254096 DEBUG nova.compute.manager [None req-15e40d0e-61af-482d-9c91-8041ed84922e - - - - - -] [instance: 600da46c-eccb-4422-9531-4fa91fdda153] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:08:15 compute-0 nova_compute[254092]: 2025-11-25 17:08:15.302 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:15 compute-0 nova_compute[254092]: 2025-11-25 17:08:15.458 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:16 compute-0 ceph-mon[74985]: pgmap v2541: 321 pgs: 321 active+clean; 88 MiB data, 881 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 104 op/s
Nov 25 17:08:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2542: 321 pgs: 321 active+clean; 121 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 167 op/s
Nov 25 17:08:18 compute-0 ceph-mon[74985]: pgmap v2542: 321 pgs: 321 active+clean; 121 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 167 op/s
Nov 25 17:08:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2543: 321 pgs: 321 active+clean; 121 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 334 KiB/s rd, 2.1 MiB/s wr, 77 op/s
Nov 25 17:08:18 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:08:20 compute-0 ceph-mon[74985]: pgmap v2543: 321 pgs: 321 active+clean; 121 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 334 KiB/s rd, 2.1 MiB/s wr, 77 op/s
Nov 25 17:08:20 compute-0 nova_compute[254092]: 2025-11-25 17:08:20.257 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090485.2546551, bd1d0296-ae28-4eac-9f38-80e6ca17dbff => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:08:20 compute-0 nova_compute[254092]: 2025-11-25 17:08:20.257 254096 INFO nova.compute.manager [-] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] VM Stopped (Lifecycle Event)
Nov 25 17:08:20 compute-0 nova_compute[254092]: 2025-11-25 17:08:20.284 254096 DEBUG nova.compute.manager [None req-d4fbcff8-32e2-42cc-91da-c005b04142a2 - - - - - -] [instance: bd1d0296-ae28-4eac-9f38-80e6ca17dbff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:08:20 compute-0 nova_compute[254092]: 2025-11-25 17:08:20.306 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:20 compute-0 nova_compute[254092]: 2025-11-25 17:08:20.460 254096 INFO nova.compute.manager [None req-ab059e85-ca32-4ef3-9c76-fab88e85d6a9 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Get console output
Nov 25 17:08:20 compute-0 nova_compute[254092]: 2025-11-25 17:08:20.461 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2544: 321 pgs: 321 active+clean; 121 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 334 KiB/s rd, 2.1 MiB/s wr, 77 op/s
Nov 25 17:08:20 compute-0 nova_compute[254092]: 2025-11-25 17:08:20.472 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 17:08:20 compute-0 podman[395457]: 2025-11-25 17:08:20.689865739 +0000 UTC m=+0.086017160 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 17:08:20 compute-0 podman[395456]: 2025-11-25 17:08:20.695601356 +0000 UTC m=+0.095325025 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 17:08:20 compute-0 podman[395458]: 2025-11-25 17:08:20.724240632 +0000 UTC m=+0.114098480 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 25 17:08:22 compute-0 ceph-mon[74985]: pgmap v2544: 321 pgs: 321 active+clean; 121 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 334 KiB/s rd, 2.1 MiB/s wr, 77 op/s
Nov 25 17:08:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2545: 321 pgs: 321 active+clean; 121 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:08:22 compute-0 ovn_controller[153477]: 2025-11-25T17:08:22Z|00156|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5d:af:f0 10.100.0.7
Nov 25 17:08:22 compute-0 nova_compute[254092]: 2025-11-25 17:08:22.865 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:22.866 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:08:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:22.867 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 17:08:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:08:24 compute-0 ceph-mon[74985]: pgmap v2545: 321 pgs: 321 active+clean; 121 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:08:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2546: 321 pgs: 321 active+clean; 121 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:08:25 compute-0 ovn_controller[153477]: 2025-11-25T17:08:24Z|00157|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5d:af:f0 10.100.0.7
Nov 25 17:08:25 compute-0 nova_compute[254092]: 2025-11-25 17:08:25.310 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:25 compute-0 nova_compute[254092]: 2025-11-25 17:08:25.490 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:26 compute-0 nova_compute[254092]: 2025-11-25 17:08:26.225 254096 DEBUG oslo_concurrency.lockutils [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:08:26 compute-0 ceph-mon[74985]: pgmap v2546: 321 pgs: 321 active+clean; 121 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:08:26 compute-0 nova_compute[254092]: 2025-11-25 17:08:26.227 254096 DEBUG oslo_concurrency.lockutils [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:08:26 compute-0 nova_compute[254092]: 2025-11-25 17:08:26.228 254096 DEBUG oslo_concurrency.lockutils [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:08:26 compute-0 nova_compute[254092]: 2025-11-25 17:08:26.229 254096 DEBUG oslo_concurrency.lockutils [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:08:26 compute-0 nova_compute[254092]: 2025-11-25 17:08:26.229 254096 DEBUG oslo_concurrency.lockutils [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:26 compute-0 nova_compute[254092]: 2025-11-25 17:08:26.230 254096 INFO nova.compute.manager [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Terminating instance
Nov 25 17:08:26 compute-0 nova_compute[254092]: 2025-11-25 17:08:26.232 254096 DEBUG nova.compute.manager [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 17:08:26 compute-0 kernel: tap1e4bb581-2e (unregistering): left promiscuous mode
Nov 25 17:08:26 compute-0 nova_compute[254092]: 2025-11-25 17:08:26.295 254096 DEBUG nova.compute.manager [req-ec4eafcd-fef7-4da1-9619-2f570e79e4a0 req-94e9ab51-f9d2-4ac3-b65e-576247c74f35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Received event network-changed-1e4bb581-2eb1-4909-a066-11e1096cbffa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:08:26 compute-0 NetworkManager[48891]: <info>  [1764090506.2972] device (tap1e4bb581-2e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:08:26 compute-0 nova_compute[254092]: 2025-11-25 17:08:26.297 254096 DEBUG nova.compute.manager [req-ec4eafcd-fef7-4da1-9619-2f570e79e4a0 req-94e9ab51-f9d2-4ac3-b65e-576247c74f35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Refreshing instance network info cache due to event network-changed-1e4bb581-2eb1-4909-a066-11e1096cbffa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:08:26 compute-0 nova_compute[254092]: 2025-11-25 17:08:26.300 254096 DEBUG oslo_concurrency.lockutils [req-ec4eafcd-fef7-4da1-9619-2f570e79e4a0 req-94e9ab51-f9d2-4ac3-b65e-576247c74f35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:08:26 compute-0 nova_compute[254092]: 2025-11-25 17:08:26.300 254096 DEBUG oslo_concurrency.lockutils [req-ec4eafcd-fef7-4da1-9619-2f570e79e4a0 req-94e9ab51-f9d2-4ac3-b65e-576247c74f35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:08:26 compute-0 nova_compute[254092]: 2025-11-25 17:08:26.300 254096 DEBUG nova.network.neutron [req-ec4eafcd-fef7-4da1-9619-2f570e79e4a0 req-94e9ab51-f9d2-4ac3-b65e-576247c74f35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Refreshing network info cache for port 1e4bb581-2eb1-4909-a066-11e1096cbffa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:08:26 compute-0 nova_compute[254092]: 2025-11-25 17:08:26.308 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:26 compute-0 ovn_controller[153477]: 2025-11-25T17:08:26Z|01331|binding|INFO|Releasing lport 1e4bb581-2eb1-4909-a066-11e1096cbffa from this chassis (sb_readonly=0)
Nov 25 17:08:26 compute-0 ovn_controller[153477]: 2025-11-25T17:08:26Z|01332|binding|INFO|Setting lport 1e4bb581-2eb1-4909-a066-11e1096cbffa down in Southbound
Nov 25 17:08:26 compute-0 ovn_controller[153477]: 2025-11-25T17:08:26Z|01333|binding|INFO|Removing iface tap1e4bb581-2e ovn-installed in OVS
Nov 25 17:08:26 compute-0 nova_compute[254092]: 2025-11-25 17:08:26.310 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:26 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 17:08:26 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 17:08:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:26.321 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:af:f0 10.100.0.7'], port_security=['fa:16:3e:5d:af:f0 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-829d1d91-46ff-47ac-8fc7-645e2a94fa93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0a3b1eb5-e8b2-426c-a986-74c661517045', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08d0b9df-f126-461f-88f6-bb27189af180, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1e4bb581-2eb1-4909-a066-11e1096cbffa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:08:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:26.325 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1e4bb581-2eb1-4909-a066-11e1096cbffa in datapath 829d1d91-46ff-47ac-8fc7-645e2a94fa93 unbound from our chassis
Nov 25 17:08:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:26.328 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 829d1d91-46ff-47ac-8fc7-645e2a94fa93, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 17:08:26 compute-0 nova_compute[254092]: 2025-11-25 17:08:26.328 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:26.330 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0267cc8a-50d6-4e40-a850-2a9cbac969ef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:26.331 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-829d1d91-46ff-47ac-8fc7-645e2a94fa93 namespace which is not needed anymore
Nov 25 17:08:26 compute-0 systemd[1]: machine-qemu\x2d161\x2dinstance\x2d00000081.scope: Deactivated successfully.
Nov 25 17:08:26 compute-0 systemd[1]: machine-qemu\x2d161\x2dinstance\x2d00000081.scope: Consumed 14.428s CPU time.
Nov 25 17:08:26 compute-0 systemd-machined[216343]: Machine qemu-161-instance-00000081 terminated.
Nov 25 17:08:26 compute-0 kernel: tap1e4bb581-2e: entered promiscuous mode
Nov 25 17:08:26 compute-0 NetworkManager[48891]: <info>  [1764090506.4574] manager: (tap1e4bb581-2e): new Tun device (/org/freedesktop/NetworkManager/Devices/552)
Nov 25 17:08:26 compute-0 nova_compute[254092]: 2025-11-25 17:08:26.457 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:26 compute-0 ovn_controller[153477]: 2025-11-25T17:08:26Z|01334|binding|INFO|Claiming lport 1e4bb581-2eb1-4909-a066-11e1096cbffa for this chassis.
Nov 25 17:08:26 compute-0 ovn_controller[153477]: 2025-11-25T17:08:26Z|01335|binding|INFO|1e4bb581-2eb1-4909-a066-11e1096cbffa: Claiming fa:16:3e:5d:af:f0 10.100.0.7
Nov 25 17:08:26 compute-0 kernel: tap1e4bb581-2e (unregistering): left promiscuous mode
Nov 25 17:08:26 compute-0 systemd-udevd[395523]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:08:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:26.467 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:af:f0 10.100.0.7'], port_security=['fa:16:3e:5d:af:f0 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-829d1d91-46ff-47ac-8fc7-645e2a94fa93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0a3b1eb5-e8b2-426c-a986-74c661517045', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08d0b9df-f126-461f-88f6-bb27189af180, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1e4bb581-2eb1-4909-a066-11e1096cbffa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:08:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2547: 321 pgs: 321 active+clean; 121 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:08:26 compute-0 neutron-haproxy-ovnmeta-829d1d91-46ff-47ac-8fc7-645e2a94fa93[394988]: [NOTICE]   (394994) : haproxy version is 2.8.14-c23fe91
Nov 25 17:08:26 compute-0 neutron-haproxy-ovnmeta-829d1d91-46ff-47ac-8fc7-645e2a94fa93[394988]: [NOTICE]   (394994) : path to executable is /usr/sbin/haproxy
Nov 25 17:08:26 compute-0 neutron-haproxy-ovnmeta-829d1d91-46ff-47ac-8fc7-645e2a94fa93[394988]: [WARNING]  (394994) : Exiting Master process...
Nov 25 17:08:26 compute-0 neutron-haproxy-ovnmeta-829d1d91-46ff-47ac-8fc7-645e2a94fa93[394988]: [ALERT]    (394994) : Current worker (395002) exited with code 143 (Terminated)
Nov 25 17:08:26 compute-0 neutron-haproxy-ovnmeta-829d1d91-46ff-47ac-8fc7-645e2a94fa93[394988]: [WARNING]  (394994) : All workers exited. Exiting... (0)
Nov 25 17:08:26 compute-0 systemd[1]: libpod-134ea456686b97a7401586f288fd284a78c694c7d1278dcbd7c05dd068d7094b.scope: Deactivated successfully.
Nov 25 17:08:26 compute-0 podman[395544]: 2025-11-25 17:08:26.485456951 +0000 UTC m=+0.050662030 container died 134ea456686b97a7401586f288fd284a78c694c7d1278dcbd7c05dd068d7094b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-829d1d91-46ff-47ac-8fc7-645e2a94fa93, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 17:08:26 compute-0 nova_compute[254092]: 2025-11-25 17:08:26.489 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:26 compute-0 ovn_controller[153477]: 2025-11-25T17:08:26Z|01336|binding|INFO|Setting lport 1e4bb581-2eb1-4909-a066-11e1096cbffa ovn-installed in OVS
Nov 25 17:08:26 compute-0 ovn_controller[153477]: 2025-11-25T17:08:26Z|01337|binding|INFO|Setting lport 1e4bb581-2eb1-4909-a066-11e1096cbffa up in Southbound
Nov 25 17:08:26 compute-0 ovn_controller[153477]: 2025-11-25T17:08:26Z|01338|binding|INFO|Releasing lport 1e4bb581-2eb1-4909-a066-11e1096cbffa from this chassis (sb_readonly=1)
Nov 25 17:08:26 compute-0 ovn_controller[153477]: 2025-11-25T17:08:26Z|01339|if_status|INFO|Dropped 5 log messages in last 1626 seconds (most recently, 1626 seconds ago) due to excessive rate
Nov 25 17:08:26 compute-0 ovn_controller[153477]: 2025-11-25T17:08:26Z|01340|if_status|INFO|Not setting lport 1e4bb581-2eb1-4909-a066-11e1096cbffa down as sb is readonly
Nov 25 17:08:26 compute-0 ovn_controller[153477]: 2025-11-25T17:08:26Z|01341|binding|INFO|Removing iface tap1e4bb581-2e ovn-installed in OVS
Nov 25 17:08:26 compute-0 ovn_controller[153477]: 2025-11-25T17:08:26Z|01342|binding|INFO|Releasing lport 1e4bb581-2eb1-4909-a066-11e1096cbffa from this chassis (sb_readonly=0)
Nov 25 17:08:26 compute-0 ovn_controller[153477]: 2025-11-25T17:08:26Z|01343|binding|INFO|Setting lport 1e4bb581-2eb1-4909-a066-11e1096cbffa down in Southbound
Nov 25 17:08:26 compute-0 nova_compute[254092]: 2025-11-25 17:08:26.493 254096 INFO nova.virt.libvirt.driver [-] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Instance destroyed successfully.
Nov 25 17:08:26 compute-0 nova_compute[254092]: 2025-11-25 17:08:26.494 254096 DEBUG nova.objects.instance [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'resources' on Instance uuid 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:08:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:26.498 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:af:f0 10.100.0.7'], port_security=['fa:16:3e:5d:af:f0 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-829d1d91-46ff-47ac-8fc7-645e2a94fa93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0a3b1eb5-e8b2-426c-a986-74c661517045', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08d0b9df-f126-461f-88f6-bb27189af180, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=1e4bb581-2eb1-4909-a066-11e1096cbffa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:08:26 compute-0 nova_compute[254092]: 2025-11-25 17:08:26.509 254096 DEBUG nova.virt.libvirt.vif [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:07:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1742985816',display_name='tempest-TestNetworkBasicOps-server-1742985816',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1742985816',id=129,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM00WPYJfplltH8QXqH5VBLBDZhQ795ogiRVA83nhFmClvp98XVxMKrxCuE4fsMnEOta0H4tqzcEHlYCGCoNe9LzeAmLZ6Pp7hI8JV+Hz5Z8Dy6OeDo6E9caHpgF+YsYXg==',key_name='tempest-TestNetworkBasicOps-1671052',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:08:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-v513b262',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:08:01Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1e4bb581-2eb1-4909-a066-11e1096cbffa", "address": "fa:16:3e:5d:af:f0", "network": {"id": "829d1d91-46ff-47ac-8fc7-645e2a94fa93", "bridge": "br-int", "label": "tempest-network-smoke--538767373", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4bb581-2e", "ovs_interfaceid": "1e4bb581-2eb1-4909-a066-11e1096cbffa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:08:26 compute-0 nova_compute[254092]: 2025-11-25 17:08:26.510 254096 DEBUG nova.network.os_vif_util [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "1e4bb581-2eb1-4909-a066-11e1096cbffa", "address": "fa:16:3e:5d:af:f0", "network": {"id": "829d1d91-46ff-47ac-8fc7-645e2a94fa93", "bridge": "br-int", "label": "tempest-network-smoke--538767373", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4bb581-2e", "ovs_interfaceid": "1e4bb581-2eb1-4909-a066-11e1096cbffa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:08:26 compute-0 nova_compute[254092]: 2025-11-25 17:08:26.511 254096 DEBUG nova.network.os_vif_util [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5d:af:f0,bridge_name='br-int',has_traffic_filtering=True,id=1e4bb581-2eb1-4909-a066-11e1096cbffa,network=Network(829d1d91-46ff-47ac-8fc7-645e2a94fa93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e4bb581-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:08:26 compute-0 nova_compute[254092]: 2025-11-25 17:08:26.512 254096 DEBUG os_vif [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5d:af:f0,bridge_name='br-int',has_traffic_filtering=True,id=1e4bb581-2eb1-4909-a066-11e1096cbffa,network=Network(829d1d91-46ff-47ac-8fc7-645e2a94fa93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e4bb581-2e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:08:26 compute-0 nova_compute[254092]: 2025-11-25 17:08:26.514 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:26 compute-0 nova_compute[254092]: 2025-11-25 17:08:26.515 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e4bb581-2e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:08:26 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-134ea456686b97a7401586f288fd284a78c694c7d1278dcbd7c05dd068d7094b-userdata-shm.mount: Deactivated successfully.
Nov 25 17:08:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-df8974f3be415033b929b46e3c2caddf48c1e9be811e4c30ce085d7210f3d05c-merged.mount: Deactivated successfully.
Nov 25 17:08:26 compute-0 nova_compute[254092]: 2025-11-25 17:08:26.522 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:26 compute-0 nova_compute[254092]: 2025-11-25 17:08:26.524 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:08:26 compute-0 nova_compute[254092]: 2025-11-25 17:08:26.528 254096 INFO os_vif [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5d:af:f0,bridge_name='br-int',has_traffic_filtering=True,id=1e4bb581-2eb1-4909-a066-11e1096cbffa,network=Network(829d1d91-46ff-47ac-8fc7-645e2a94fa93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e4bb581-2e')
Nov 25 17:08:26 compute-0 podman[395544]: 2025-11-25 17:08:26.536039299 +0000 UTC m=+0.101244388 container cleanup 134ea456686b97a7401586f288fd284a78c694c7d1278dcbd7c05dd068d7094b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-829d1d91-46ff-47ac-8fc7-645e2a94fa93, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 17:08:26 compute-0 systemd[1]: libpod-conmon-134ea456686b97a7401586f288fd284a78c694c7d1278dcbd7c05dd068d7094b.scope: Deactivated successfully.
Nov 25 17:08:26 compute-0 podman[395591]: 2025-11-25 17:08:26.604570759 +0000 UTC m=+0.047550885 container remove 134ea456686b97a7401586f288fd284a78c694c7d1278dcbd7c05dd068d7094b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-829d1d91-46ff-47ac-8fc7-645e2a94fa93, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true)
Nov 25 17:08:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:26.614 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[42e614af-68ed-4e96-9e7f-d6a99802d922]: (4, ('Tue Nov 25 05:08:26 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-829d1d91-46ff-47ac-8fc7-645e2a94fa93 (134ea456686b97a7401586f288fd284a78c694c7d1278dcbd7c05dd068d7094b)\n134ea456686b97a7401586f288fd284a78c694c7d1278dcbd7c05dd068d7094b\nTue Nov 25 05:08:26 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-829d1d91-46ff-47ac-8fc7-645e2a94fa93 (134ea456686b97a7401586f288fd284a78c694c7d1278dcbd7c05dd068d7094b)\n134ea456686b97a7401586f288fd284a78c694c7d1278dcbd7c05dd068d7094b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:26.617 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[aa2d283a-8116-4dc9-8595-243450db05ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:26.618 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap829d1d91-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:08:26 compute-0 kernel: tap829d1d91-40: left promiscuous mode
Nov 25 17:08:26 compute-0 nova_compute[254092]: 2025-11-25 17:08:26.657 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:26 compute-0 nova_compute[254092]: 2025-11-25 17:08:26.659 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:26.663 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[aedc2578-878f-462c-aa88-347c89abde6a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:26 compute-0 nova_compute[254092]: 2025-11-25 17:08:26.681 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:26.686 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[57938066-b65a-4861-9c13-b7737d902366]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:26.687 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[37a21ed2-fac1-48f5-a68d-bc5c81c569e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:26.707 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7e1a3236-98f1-492c-bbde-ff2bcf86820b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693822, 'reachable_time': 18567, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395612, 'error': None, 'target': 'ovnmeta-829d1d91-46ff-47ac-8fc7-645e2a94fa93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:26.710 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-829d1d91-46ff-47ac-8fc7-645e2a94fa93 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 17:08:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:26.710 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[2e6128ef-021f-4ae4-836d-fcc0bc32e2c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:26.712 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1e4bb581-2eb1-4909-a066-11e1096cbffa in datapath 829d1d91-46ff-47ac-8fc7-645e2a94fa93 unbound from our chassis
Nov 25 17:08:26 compute-0 systemd[1]: run-netns-ovnmeta\x2d829d1d91\x2d46ff\x2d47ac\x2d8fc7\x2d645e2a94fa93.mount: Deactivated successfully.
Nov 25 17:08:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:26.714 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 829d1d91-46ff-47ac-8fc7-645e2a94fa93, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 17:08:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:26.715 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3f039633-d3ba-4ee2-9495-37aac5decd88]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:26.716 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 1e4bb581-2eb1-4909-a066-11e1096cbffa in datapath 829d1d91-46ff-47ac-8fc7-645e2a94fa93 unbound from our chassis
Nov 25 17:08:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:26.717 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 829d1d91-46ff-47ac-8fc7-645e2a94fa93, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 17:08:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:26.718 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[77a3a9b7-a83b-4843-b7f9-6acb54a27c8a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:26 compute-0 nova_compute[254092]: 2025-11-25 17:08:26.838 254096 DEBUG nova.compute.manager [req-dedad734-873e-491f-afa2-a316a4442115 req-ce826117-48c6-4683-ae6c-bcd1f9b0fac7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Received event network-vif-unplugged-1e4bb581-2eb1-4909-a066-11e1096cbffa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:08:26 compute-0 nova_compute[254092]: 2025-11-25 17:08:26.839 254096 DEBUG oslo_concurrency.lockutils [req-dedad734-873e-491f-afa2-a316a4442115 req-ce826117-48c6-4683-ae6c-bcd1f9b0fac7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:08:26 compute-0 nova_compute[254092]: 2025-11-25 17:08:26.839 254096 DEBUG oslo_concurrency.lockutils [req-dedad734-873e-491f-afa2-a316a4442115 req-ce826117-48c6-4683-ae6c-bcd1f9b0fac7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:08:26 compute-0 nova_compute[254092]: 2025-11-25 17:08:26.839 254096 DEBUG oslo_concurrency.lockutils [req-dedad734-873e-491f-afa2-a316a4442115 req-ce826117-48c6-4683-ae6c-bcd1f9b0fac7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:26 compute-0 nova_compute[254092]: 2025-11-25 17:08:26.839 254096 DEBUG nova.compute.manager [req-dedad734-873e-491f-afa2-a316a4442115 req-ce826117-48c6-4683-ae6c-bcd1f9b0fac7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] No waiting events found dispatching network-vif-unplugged-1e4bb581-2eb1-4909-a066-11e1096cbffa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:08:26 compute-0 nova_compute[254092]: 2025-11-25 17:08:26.840 254096 DEBUG nova.compute.manager [req-dedad734-873e-491f-afa2-a316a4442115 req-ce826117-48c6-4683-ae6c-bcd1f9b0fac7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Received event network-vif-unplugged-1e4bb581-2eb1-4909-a066-11e1096cbffa for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 17:08:26 compute-0 nova_compute[254092]: 2025-11-25 17:08:26.973 254096 INFO nova.virt.libvirt.driver [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Deleting instance files /var/lib/nova/instances/239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e_del
Nov 25 17:08:26 compute-0 nova_compute[254092]: 2025-11-25 17:08:26.974 254096 INFO nova.virt.libvirt.driver [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Deletion of /var/lib/nova/instances/239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e_del complete
Nov 25 17:08:27 compute-0 nova_compute[254092]: 2025-11-25 17:08:27.493 254096 INFO nova.compute.manager [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Took 1.26 seconds to destroy the instance on the hypervisor.
Nov 25 17:08:27 compute-0 nova_compute[254092]: 2025-11-25 17:08:27.494 254096 DEBUG oslo.service.loopingcall [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 17:08:27 compute-0 nova_compute[254092]: 2025-11-25 17:08:27.495 254096 DEBUG nova.compute.manager [-] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 17:08:27 compute-0 nova_compute[254092]: 2025-11-25 17:08:27.495 254096 DEBUG nova.network.neutron [-] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 17:08:28 compute-0 nova_compute[254092]: 2025-11-25 17:08:28.191 254096 DEBUG oslo_concurrency.lockutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Acquiring lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:08:28 compute-0 nova_compute[254092]: 2025-11-25 17:08:28.191 254096 DEBUG oslo_concurrency.lockutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:08:28 compute-0 nova_compute[254092]: 2025-11-25 17:08:28.213 254096 DEBUG nova.compute.manager [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 17:08:28 compute-0 ceph-mon[74985]: pgmap v2547: 321 pgs: 321 active+clean; 121 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:08:28 compute-0 nova_compute[254092]: 2025-11-25 17:08:28.290 254096 DEBUG oslo_concurrency.lockutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:08:28 compute-0 nova_compute[254092]: 2025-11-25 17:08:28.291 254096 DEBUG oslo_concurrency.lockutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:08:28 compute-0 nova_compute[254092]: 2025-11-25 17:08:28.300 254096 DEBUG nova.virt.hardware [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 17:08:28 compute-0 nova_compute[254092]: 2025-11-25 17:08:28.300 254096 INFO nova.compute.claims [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Claim successful on node compute-0.ctlplane.example.com
Nov 25 17:08:28 compute-0 nova_compute[254092]: 2025-11-25 17:08:28.454 254096 DEBUG oslo_concurrency.processutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:08:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2548: 321 pgs: 321 active+clean; 121 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 17 KiB/s wr, 1 op/s
Nov 25 17:08:28 compute-0 nova_compute[254092]: 2025-11-25 17:08:28.509 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:08:28 compute-0 nova_compute[254092]: 2025-11-25 17:08:28.589 254096 DEBUG nova.network.neutron [req-ec4eafcd-fef7-4da1-9619-2f570e79e4a0 req-94e9ab51-f9d2-4ac3-b65e-576247c74f35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Updated VIF entry in instance network info cache for port 1e4bb581-2eb1-4909-a066-11e1096cbffa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:08:28 compute-0 nova_compute[254092]: 2025-11-25 17:08:28.591 254096 DEBUG nova.network.neutron [req-ec4eafcd-fef7-4da1-9619-2f570e79e4a0 req-94e9ab51-f9d2-4ac3-b65e-576247c74f35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Updating instance_info_cache with network_info: [{"id": "1e4bb581-2eb1-4909-a066-11e1096cbffa", "address": "fa:16:3e:5d:af:f0", "network": {"id": "829d1d91-46ff-47ac-8fc7-645e2a94fa93", "bridge": "br-int", "label": "tempest-network-smoke--538767373", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "9.8.7.6", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e4bb581-2e", "ovs_interfaceid": "1e4bb581-2eb1-4909-a066-11e1096cbffa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:08:28 compute-0 nova_compute[254092]: 2025-11-25 17:08:28.609 254096 DEBUG oslo_concurrency.lockutils [req-ec4eafcd-fef7-4da1-9619-2f570e79e4a0 req-94e9ab51-f9d2-4ac3-b65e-576247c74f35 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:08:28 compute-0 nova_compute[254092]: 2025-11-25 17:08:28.618 254096 DEBUG nova.network.neutron [-] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:08:28 compute-0 nova_compute[254092]: 2025-11-25 17:08:28.631 254096 INFO nova.compute.manager [-] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Took 1.14 seconds to deallocate network for instance.
Nov 25 17:08:28 compute-0 nova_compute[254092]: 2025-11-25 17:08:28.669 254096 DEBUG oslo_concurrency.lockutils [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:08:28 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:08:28 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3898967013' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:08:28 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:08:28 compute-0 nova_compute[254092]: 2025-11-25 17:08:28.902 254096 DEBUG oslo_concurrency.processutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:08:28 compute-0 nova_compute[254092]: 2025-11-25 17:08:28.967 254096 DEBUG nova.compute.manager [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Received event network-vif-plugged-1e4bb581-2eb1-4909-a066-11e1096cbffa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:08:28 compute-0 nova_compute[254092]: 2025-11-25 17:08:28.967 254096 DEBUG oslo_concurrency.lockutils [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:08:28 compute-0 nova_compute[254092]: 2025-11-25 17:08:28.968 254096 DEBUG oslo_concurrency.lockutils [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:08:28 compute-0 nova_compute[254092]: 2025-11-25 17:08:28.968 254096 DEBUG oslo_concurrency.lockutils [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:28 compute-0 nova_compute[254092]: 2025-11-25 17:08:28.968 254096 DEBUG nova.compute.manager [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] No waiting events found dispatching network-vif-plugged-1e4bb581-2eb1-4909-a066-11e1096cbffa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:08:28 compute-0 nova_compute[254092]: 2025-11-25 17:08:28.968 254096 WARNING nova.compute.manager [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Received unexpected event network-vif-plugged-1e4bb581-2eb1-4909-a066-11e1096cbffa for instance with vm_state deleted and task_state None.
Nov 25 17:08:28 compute-0 nova_compute[254092]: 2025-11-25 17:08:28.969 254096 DEBUG nova.compute.manager [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Received event network-vif-plugged-1e4bb581-2eb1-4909-a066-11e1096cbffa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:08:28 compute-0 nova_compute[254092]: 2025-11-25 17:08:28.969 254096 DEBUG oslo_concurrency.lockutils [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:08:28 compute-0 nova_compute[254092]: 2025-11-25 17:08:28.969 254096 DEBUG oslo_concurrency.lockutils [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:08:28 compute-0 nova_compute[254092]: 2025-11-25 17:08:28.970 254096 DEBUG oslo_concurrency.lockutils [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:28 compute-0 nova_compute[254092]: 2025-11-25 17:08:28.970 254096 DEBUG nova.compute.manager [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] No waiting events found dispatching network-vif-plugged-1e4bb581-2eb1-4909-a066-11e1096cbffa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:08:28 compute-0 nova_compute[254092]: 2025-11-25 17:08:28.970 254096 WARNING nova.compute.manager [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Received unexpected event network-vif-plugged-1e4bb581-2eb1-4909-a066-11e1096cbffa for instance with vm_state deleted and task_state None.
Nov 25 17:08:28 compute-0 nova_compute[254092]: 2025-11-25 17:08:28.970 254096 DEBUG nova.compute.manager [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Received event network-vif-plugged-1e4bb581-2eb1-4909-a066-11e1096cbffa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:08:28 compute-0 nova_compute[254092]: 2025-11-25 17:08:28.971 254096 DEBUG oslo_concurrency.lockutils [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:08:28 compute-0 nova_compute[254092]: 2025-11-25 17:08:28.971 254096 DEBUG oslo_concurrency.lockutils [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:08:28 compute-0 nova_compute[254092]: 2025-11-25 17:08:28.971 254096 DEBUG oslo_concurrency.lockutils [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:28 compute-0 nova_compute[254092]: 2025-11-25 17:08:28.971 254096 DEBUG nova.compute.manager [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] No waiting events found dispatching network-vif-plugged-1e4bb581-2eb1-4909-a066-11e1096cbffa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:08:28 compute-0 nova_compute[254092]: 2025-11-25 17:08:28.971 254096 WARNING nova.compute.manager [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Received unexpected event network-vif-plugged-1e4bb581-2eb1-4909-a066-11e1096cbffa for instance with vm_state deleted and task_state None.
Nov 25 17:08:28 compute-0 nova_compute[254092]: 2025-11-25 17:08:28.972 254096 DEBUG nova.compute.manager [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Received event network-vif-plugged-1e4bb581-2eb1-4909-a066-11e1096cbffa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:08:28 compute-0 nova_compute[254092]: 2025-11-25 17:08:28.972 254096 DEBUG oslo_concurrency.lockutils [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:08:28 compute-0 nova_compute[254092]: 2025-11-25 17:08:28.972 254096 DEBUG oslo_concurrency.lockutils [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:08:28 compute-0 nova_compute[254092]: 2025-11-25 17:08:28.972 254096 DEBUG oslo_concurrency.lockutils [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:28 compute-0 nova_compute[254092]: 2025-11-25 17:08:28.973 254096 DEBUG nova.compute.manager [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] No waiting events found dispatching network-vif-plugged-1e4bb581-2eb1-4909-a066-11e1096cbffa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:08:28 compute-0 nova_compute[254092]: 2025-11-25 17:08:28.973 254096 WARNING nova.compute.manager [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Received unexpected event network-vif-plugged-1e4bb581-2eb1-4909-a066-11e1096cbffa for instance with vm_state deleted and task_state None.
Nov 25 17:08:28 compute-0 nova_compute[254092]: 2025-11-25 17:08:28.973 254096 DEBUG nova.compute.manager [req-a3fbb82b-5ecc-42ed-9935-43ccb905564c req-6b4d8c4e-f4de-4d30-80cd-a9bc043d58b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Received event network-vif-deleted-1e4bb581-2eb1-4909-a066-11e1096cbffa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:08:29 compute-0 nova_compute[254092]: 2025-11-25 17:08:29.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:08:29 compute-0 nova_compute[254092]: 2025-11-25 17:08:29.589 254096 DEBUG nova.compute.provider_tree [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:08:29 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3898967013' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:08:29 compute-0 nova_compute[254092]: 2025-11-25 17:08:29.600 254096 DEBUG nova.scheduler.client.report [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:08:29 compute-0 nova_compute[254092]: 2025-11-25 17:08:29.622 254096 DEBUG oslo_concurrency.lockutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.332s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:29 compute-0 nova_compute[254092]: 2025-11-25 17:08:29.623 254096 DEBUG nova.compute.manager [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 17:08:29 compute-0 nova_compute[254092]: 2025-11-25 17:08:29.625 254096 DEBUG oslo_concurrency.lockutils [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.956s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:08:29 compute-0 nova_compute[254092]: 2025-11-25 17:08:29.670 254096 DEBUG nova.compute.manager [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 17:08:29 compute-0 nova_compute[254092]: 2025-11-25 17:08:29.671 254096 DEBUG nova.network.neutron [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 17:08:29 compute-0 nova_compute[254092]: 2025-11-25 17:08:29.689 254096 INFO nova.virt.libvirt.driver [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 17:08:29 compute-0 nova_compute[254092]: 2025-11-25 17:08:29.692 254096 DEBUG oslo_concurrency.processutils [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:08:29 compute-0 nova_compute[254092]: 2025-11-25 17:08:29.752 254096 DEBUG nova.compute.manager [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 17:08:29 compute-0 nova_compute[254092]: 2025-11-25 17:08:29.841 254096 DEBUG nova.compute.manager [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 17:08:29 compute-0 nova_compute[254092]: 2025-11-25 17:08:29.842 254096 DEBUG nova.virt.libvirt.driver [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 17:08:29 compute-0 nova_compute[254092]: 2025-11-25 17:08:29.842 254096 INFO nova.virt.libvirt.driver [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Creating image(s)
Nov 25 17:08:29 compute-0 nova_compute[254092]: 2025-11-25 17:08:29.864 254096 DEBUG nova.storage.rbd_utils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] rbd image 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:08:29 compute-0 nova_compute[254092]: 2025-11-25 17:08:29.891 254096 DEBUG nova.storage.rbd_utils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] rbd image 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:08:29 compute-0 nova_compute[254092]: 2025-11-25 17:08:29.920 254096 DEBUG nova.storage.rbd_utils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] rbd image 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:08:29 compute-0 nova_compute[254092]: 2025-11-25 17:08:29.925 254096 DEBUG oslo_concurrency.processutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:08:29 compute-0 nova_compute[254092]: 2025-11-25 17:08:29.980 254096 DEBUG nova.policy [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c10f7baf67a8412caf2428d3200f851d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6d8d30c9803a4a3fa7d9179a85cf828e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 17:08:30 compute-0 nova_compute[254092]: 2025-11-25 17:08:30.033 254096 DEBUG oslo_concurrency.processutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:08:30 compute-0 nova_compute[254092]: 2025-11-25 17:08:30.034 254096 DEBUG oslo_concurrency.lockutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:08:30 compute-0 nova_compute[254092]: 2025-11-25 17:08:30.035 254096 DEBUG oslo_concurrency.lockutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:08:30 compute-0 nova_compute[254092]: 2025-11-25 17:08:30.035 254096 DEBUG oslo_concurrency.lockutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:30 compute-0 nova_compute[254092]: 2025-11-25 17:08:30.056 254096 DEBUG nova.storage.rbd_utils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] rbd image 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:08:30 compute-0 nova_compute[254092]: 2025-11-25 17:08:30.060 254096 DEBUG oslo_concurrency.processutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:08:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:08:30 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1779679148' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:08:30 compute-0 nova_compute[254092]: 2025-11-25 17:08:30.136 254096 DEBUG oslo_concurrency.processutils [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:08:30 compute-0 nova_compute[254092]: 2025-11-25 17:08:30.144 254096 DEBUG nova.compute.provider_tree [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:08:30 compute-0 nova_compute[254092]: 2025-11-25 17:08:30.160 254096 DEBUG nova.scheduler.client.report [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:08:30 compute-0 nova_compute[254092]: 2025-11-25 17:08:30.177 254096 DEBUG oslo_concurrency.lockutils [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.552s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:30 compute-0 nova_compute[254092]: 2025-11-25 17:08:30.204 254096 INFO nova.scheduler.client.report [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Deleted allocations for instance 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e
Nov 25 17:08:30 compute-0 nova_compute[254092]: 2025-11-25 17:08:30.293 254096 DEBUG oslo_concurrency.lockutils [None req-8e3ffa7d-7a9c-4e9e-98a6-b4259cf3af70 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.066s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:30 compute-0 nova_compute[254092]: 2025-11-25 17:08:30.320 254096 DEBUG oslo_concurrency.processutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.260s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:08:30 compute-0 nova_compute[254092]: 2025-11-25 17:08:30.388 254096 DEBUG nova.storage.rbd_utils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] resizing rbd image 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 17:08:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2549: 321 pgs: 321 active+clean; 103 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 18 KiB/s wr, 14 op/s
Nov 25 17:08:30 compute-0 nova_compute[254092]: 2025-11-25 17:08:30.481 254096 DEBUG nova.objects.instance [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lazy-loading 'migration_context' on Instance uuid 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:08:30 compute-0 nova_compute[254092]: 2025-11-25 17:08:30.492 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:30 compute-0 nova_compute[254092]: 2025-11-25 17:08:30.497 254096 DEBUG nova.virt.libvirt.driver [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 17:08:30 compute-0 nova_compute[254092]: 2025-11-25 17:08:30.498 254096 DEBUG nova.virt.libvirt.driver [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Ensure instance console log exists: /var/lib/nova/instances/8dfb4b8d-30c5-414d-98f6-1f788dbae8c0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 17:08:30 compute-0 nova_compute[254092]: 2025-11-25 17:08:30.498 254096 DEBUG oslo_concurrency.lockutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:08:30 compute-0 nova_compute[254092]: 2025-11-25 17:08:30.499 254096 DEBUG oslo_concurrency.lockutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:08:30 compute-0 nova_compute[254092]: 2025-11-25 17:08:30.499 254096 DEBUG oslo_concurrency.lockutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:30 compute-0 ceph-mon[74985]: pgmap v2548: 321 pgs: 321 active+clean; 121 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 17 KiB/s wr, 1 op/s
Nov 25 17:08:30 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1779679148' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:08:30 compute-0 nova_compute[254092]: 2025-11-25 17:08:30.747 254096 DEBUG nova.network.neutron [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Successfully created port: cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 17:08:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:30.869 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:08:31 compute-0 nova_compute[254092]: 2025-11-25 17:08:31.440 254096 DEBUG nova.network.neutron [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Successfully updated port: cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:08:31 compute-0 nova_compute[254092]: 2025-11-25 17:08:31.461 254096 DEBUG oslo_concurrency.lockutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Acquiring lock "refresh_cache-8dfb4b8d-30c5-414d-98f6-1f788dbae8c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:08:31 compute-0 nova_compute[254092]: 2025-11-25 17:08:31.462 254096 DEBUG oslo_concurrency.lockutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Acquired lock "refresh_cache-8dfb4b8d-30c5-414d-98f6-1f788dbae8c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:08:31 compute-0 nova_compute[254092]: 2025-11-25 17:08:31.462 254096 DEBUG nova.network.neutron [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 17:08:31 compute-0 nova_compute[254092]: 2025-11-25 17:08:31.512 254096 DEBUG nova.compute.manager [req-10b05ab1-277e-4316-aa20-e50489d35c2d req-ac0faaaa-f870-4983-9ab3-436f94aa8cdb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received event network-changed-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:08:31 compute-0 nova_compute[254092]: 2025-11-25 17:08:31.512 254096 DEBUG nova.compute.manager [req-10b05ab1-277e-4316-aa20-e50489d35c2d req-ac0faaaa-f870-4983-9ab3-436f94aa8cdb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Refreshing instance network info cache due to event network-changed-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:08:31 compute-0 nova_compute[254092]: 2025-11-25 17:08:31.513 254096 DEBUG oslo_concurrency.lockutils [req-10b05ab1-277e-4316-aa20-e50489d35c2d req-ac0faaaa-f870-4983-9ab3-436f94aa8cdb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-8dfb4b8d-30c5-414d-98f6-1f788dbae8c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:08:31 compute-0 nova_compute[254092]: 2025-11-25 17:08:31.523 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:31 compute-0 ceph-mon[74985]: pgmap v2549: 321 pgs: 321 active+clean; 103 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 18 KiB/s wr, 14 op/s
Nov 25 17:08:31 compute-0 nova_compute[254092]: 2025-11-25 17:08:31.631 254096 DEBUG nova.network.neutron [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:08:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2550: 321 pgs: 321 active+clean; 76 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.3 MiB/s wr, 42 op/s
Nov 25 17:08:33 compute-0 ceph-mon[74985]: pgmap v2550: 321 pgs: 321 active+clean; 76 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.3 MiB/s wr, 42 op/s
Nov 25 17:08:33 compute-0 nova_compute[254092]: 2025-11-25 17:08:33.749 254096 DEBUG nova.network.neutron [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Updating instance_info_cache with network_info: [{"id": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "address": "fa:16:3e:37:ef:7f", "network": {"id": "a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1349202703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6d8d30c9803a4a3fa7d9179a85cf828e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd0a47c2-54", "ovs_interfaceid": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:08:33 compute-0 nova_compute[254092]: 2025-11-25 17:08:33.768 254096 DEBUG oslo_concurrency.lockutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Releasing lock "refresh_cache-8dfb4b8d-30c5-414d-98f6-1f788dbae8c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:08:33 compute-0 nova_compute[254092]: 2025-11-25 17:08:33.768 254096 DEBUG nova.compute.manager [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Instance network_info: |[{"id": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "address": "fa:16:3e:37:ef:7f", "network": {"id": "a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1349202703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6d8d30c9803a4a3fa7d9179a85cf828e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd0a47c2-54", "ovs_interfaceid": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 17:08:33 compute-0 nova_compute[254092]: 2025-11-25 17:08:33.769 254096 DEBUG oslo_concurrency.lockutils [req-10b05ab1-277e-4316-aa20-e50489d35c2d req-ac0faaaa-f870-4983-9ab3-436f94aa8cdb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-8dfb4b8d-30c5-414d-98f6-1f788dbae8c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:08:33 compute-0 nova_compute[254092]: 2025-11-25 17:08:33.769 254096 DEBUG nova.network.neutron [req-10b05ab1-277e-4316-aa20-e50489d35c2d req-ac0faaaa-f870-4983-9ab3-436f94aa8cdb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Refreshing network info cache for port cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:08:33 compute-0 nova_compute[254092]: 2025-11-25 17:08:33.772 254096 DEBUG nova.virt.libvirt.driver [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Start _get_guest_xml network_info=[{"id": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "address": "fa:16:3e:37:ef:7f", "network": {"id": "a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1349202703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6d8d30c9803a4a3fa7d9179a85cf828e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd0a47c2-54", "ovs_interfaceid": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 17:08:33 compute-0 nova_compute[254092]: 2025-11-25 17:08:33.778 254096 WARNING nova.virt.libvirt.driver [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:08:33 compute-0 nova_compute[254092]: 2025-11-25 17:08:33.783 254096 DEBUG nova.virt.libvirt.host [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 17:08:33 compute-0 nova_compute[254092]: 2025-11-25 17:08:33.783 254096 DEBUG nova.virt.libvirt.host [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 17:08:33 compute-0 nova_compute[254092]: 2025-11-25 17:08:33.787 254096 DEBUG nova.virt.libvirt.host [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 17:08:33 compute-0 nova_compute[254092]: 2025-11-25 17:08:33.788 254096 DEBUG nova.virt.libvirt.host [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 17:08:33 compute-0 nova_compute[254092]: 2025-11-25 17:08:33.788 254096 DEBUG nova.virt.libvirt.driver [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 17:08:33 compute-0 nova_compute[254092]: 2025-11-25 17:08:33.788 254096 DEBUG nova.virt.hardware [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 17:08:33 compute-0 nova_compute[254092]: 2025-11-25 17:08:33.788 254096 DEBUG nova.virt.hardware [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 17:08:33 compute-0 nova_compute[254092]: 2025-11-25 17:08:33.788 254096 DEBUG nova.virt.hardware [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 17:08:33 compute-0 nova_compute[254092]: 2025-11-25 17:08:33.789 254096 DEBUG nova.virt.hardware [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 17:08:33 compute-0 nova_compute[254092]: 2025-11-25 17:08:33.789 254096 DEBUG nova.virt.hardware [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 17:08:33 compute-0 nova_compute[254092]: 2025-11-25 17:08:33.789 254096 DEBUG nova.virt.hardware [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 17:08:33 compute-0 nova_compute[254092]: 2025-11-25 17:08:33.789 254096 DEBUG nova.virt.hardware [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 17:08:33 compute-0 nova_compute[254092]: 2025-11-25 17:08:33.789 254096 DEBUG nova.virt.hardware [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 17:08:33 compute-0 nova_compute[254092]: 2025-11-25 17:08:33.789 254096 DEBUG nova.virt.hardware [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 17:08:33 compute-0 nova_compute[254092]: 2025-11-25 17:08:33.790 254096 DEBUG nova.virt.hardware [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 17:08:33 compute-0 nova_compute[254092]: 2025-11-25 17:08:33.790 254096 DEBUG nova.virt.hardware [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 17:08:33 compute-0 nova_compute[254092]: 2025-11-25 17:08:33.792 254096 DEBUG oslo_concurrency.processutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:08:33 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:08:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:08:34 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2635124925' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:08:34 compute-0 nova_compute[254092]: 2025-11-25 17:08:34.282 254096 DEBUG oslo_concurrency.processutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:08:34 compute-0 nova_compute[254092]: 2025-11-25 17:08:34.321 254096 DEBUG nova.storage.rbd_utils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] rbd image 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:08:34 compute-0 nova_compute[254092]: 2025-11-25 17:08:34.328 254096 DEBUG oslo_concurrency.processutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:08:34 compute-0 nova_compute[254092]: 2025-11-25 17:08:34.434 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2551: 321 pgs: 321 active+clean; 76 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 1.3 MiB/s wr, 41 op/s
Nov 25 17:08:34 compute-0 nova_compute[254092]: 2025-11-25 17:08:34.542 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:34 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2635124925' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:08:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:08:34 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4187159391' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:08:34 compute-0 nova_compute[254092]: 2025-11-25 17:08:34.800 254096 DEBUG oslo_concurrency.processutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:08:34 compute-0 nova_compute[254092]: 2025-11-25 17:08:34.801 254096 DEBUG nova.virt.libvirt.vif [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:08:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1287608653',display_name='tempest-TestServerAdvancedOps-server-1287608653',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1287608653',id=130,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d8d30c9803a4a3fa7d9179a85cf828e',ramdisk_id='',reservation_id='r-lv49apwo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-343246187',owner_user_name='tempest-TestServerAdvancedOps-343246187-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:08:29Z,user_data=None,user_id='c10f7baf67a8412caf2428d3200f851d',uuid=8dfb4b8d-30c5-414d-98f6-1f788dbae8c0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "address": "fa:16:3e:37:ef:7f", "network": {"id": "a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1349202703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6d8d30c9803a4a3fa7d9179a85cf828e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd0a47c2-54", "ovs_interfaceid": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:08:34 compute-0 nova_compute[254092]: 2025-11-25 17:08:34.802 254096 DEBUG nova.network.os_vif_util [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Converting VIF {"id": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "address": "fa:16:3e:37:ef:7f", "network": {"id": "a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1349202703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6d8d30c9803a4a3fa7d9179a85cf828e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd0a47c2-54", "ovs_interfaceid": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:08:34 compute-0 nova_compute[254092]: 2025-11-25 17:08:34.803 254096 DEBUG nova.network.os_vif_util [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:ef:7f,bridge_name='br-int',has_traffic_filtering=True,id=cd0a47c2-5422-4bcf-a714-1ecdc3db9e87,network=Network(a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd0a47c2-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:08:34 compute-0 nova_compute[254092]: 2025-11-25 17:08:34.804 254096 DEBUG nova.objects.instance [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lazy-loading 'pci_devices' on Instance uuid 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:08:34 compute-0 nova_compute[254092]: 2025-11-25 17:08:34.837 254096 DEBUG nova.virt.libvirt.driver [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] End _get_guest_xml xml=<domain type="kvm">
Nov 25 17:08:34 compute-0 nova_compute[254092]:   <uuid>8dfb4b8d-30c5-414d-98f6-1f788dbae8c0</uuid>
Nov 25 17:08:34 compute-0 nova_compute[254092]:   <name>instance-00000082</name>
Nov 25 17:08:34 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 17:08:34 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 17:08:34 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:08:34 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:       <nova:name>tempest-TestServerAdvancedOps-server-1287608653</nova:name>
Nov 25 17:08:34 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 17:08:33</nova:creationTime>
Nov 25 17:08:34 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 17:08:34 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 17:08:34 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 17:08:34 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 17:08:34 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:08:34 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 17:08:34 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 17:08:34 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 17:08:34 compute-0 nova_compute[254092]:         <nova:user uuid="c10f7baf67a8412caf2428d3200f851d">tempest-TestServerAdvancedOps-343246187-project-member</nova:user>
Nov 25 17:08:34 compute-0 nova_compute[254092]:         <nova:project uuid="6d8d30c9803a4a3fa7d9179a85cf828e">tempest-TestServerAdvancedOps-343246187</nova:project>
Nov 25 17:08:34 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 17:08:34 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 17:08:34 compute-0 nova_compute[254092]:         <nova:port uuid="cd0a47c2-5422-4bcf-a714-1ecdc3db9e87">
Nov 25 17:08:34 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 17:08:34 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 17:08:34 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:08:34 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 17:08:34 compute-0 nova_compute[254092]:     <system>
Nov 25 17:08:34 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 17:08:34 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 17:08:34 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:08:34 compute-0 nova_compute[254092]:       <entry name="serial">8dfb4b8d-30c5-414d-98f6-1f788dbae8c0</entry>
Nov 25 17:08:34 compute-0 nova_compute[254092]:       <entry name="uuid">8dfb4b8d-30c5-414d-98f6-1f788dbae8c0</entry>
Nov 25 17:08:34 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     </system>
Nov 25 17:08:34 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:08:34 compute-0 nova_compute[254092]:   <os>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:   </os>
Nov 25 17:08:34 compute-0 nova_compute[254092]:   <features>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:   </features>
Nov 25 17:08:34 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 17:08:34 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:08:34 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 17:08:34 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:08:34 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 17:08:34 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/8dfb4b8d-30c5-414d-98f6-1f788dbae8c0_disk">
Nov 25 17:08:34 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:       </source>
Nov 25 17:08:34 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:08:34 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:08:34 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 17:08:34 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/8dfb4b8d-30c5-414d-98f6-1f788dbae8c0_disk.config">
Nov 25 17:08:34 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:       </source>
Nov 25 17:08:34 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:08:34 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:08:34 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 17:08:34 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:37:ef:7f"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:       <target dev="tapcd0a47c2-54"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 17:08:34 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/8dfb4b8d-30c5-414d-98f6-1f788dbae8c0/console.log" append="off"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     <video>
Nov 25 17:08:34 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     </video>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 17:08:34 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 17:08:34 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 17:08:34 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:08:34 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:08:34 compute-0 nova_compute[254092]: </domain>
Nov 25 17:08:34 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 17:08:34 compute-0 nova_compute[254092]: 2025-11-25 17:08:34.838 254096 DEBUG nova.compute.manager [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Preparing to wait for external event network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 17:08:34 compute-0 nova_compute[254092]: 2025-11-25 17:08:34.838 254096 DEBUG oslo_concurrency.lockutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Acquiring lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:08:34 compute-0 nova_compute[254092]: 2025-11-25 17:08:34.839 254096 DEBUG oslo_concurrency.lockutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:08:34 compute-0 nova_compute[254092]: 2025-11-25 17:08:34.839 254096 DEBUG oslo_concurrency.lockutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:34 compute-0 nova_compute[254092]: 2025-11-25 17:08:34.840 254096 DEBUG nova.virt.libvirt.vif [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:08:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1287608653',display_name='tempest-TestServerAdvancedOps-server-1287608653',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1287608653',id=130,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6d8d30c9803a4a3fa7d9179a85cf828e',ramdisk_id='',reservation_id='r-lv49apwo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-343246187',owner_user_name='tempest-TestServerAdvancedOps-343246187-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:08:29Z,user_data=None,user_id='c10f7baf67a8412caf2428d3200f851d',uuid=8dfb4b8d-30c5-414d-98f6-1f788dbae8c0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "address": "fa:16:3e:37:ef:7f", "network": {"id": "a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1349202703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6d8d30c9803a4a3fa7d9179a85cf828e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd0a47c2-54", "ovs_interfaceid": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:08:34 compute-0 nova_compute[254092]: 2025-11-25 17:08:34.840 254096 DEBUG nova.network.os_vif_util [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Converting VIF {"id": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "address": "fa:16:3e:37:ef:7f", "network": {"id": "a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1349202703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6d8d30c9803a4a3fa7d9179a85cf828e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd0a47c2-54", "ovs_interfaceid": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:08:34 compute-0 nova_compute[254092]: 2025-11-25 17:08:34.841 254096 DEBUG nova.network.os_vif_util [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:ef:7f,bridge_name='br-int',has_traffic_filtering=True,id=cd0a47c2-5422-4bcf-a714-1ecdc3db9e87,network=Network(a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd0a47c2-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:08:34 compute-0 nova_compute[254092]: 2025-11-25 17:08:34.841 254096 DEBUG os_vif [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:ef:7f,bridge_name='br-int',has_traffic_filtering=True,id=cd0a47c2-5422-4bcf-a714-1ecdc3db9e87,network=Network(a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd0a47c2-54') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:08:34 compute-0 nova_compute[254092]: 2025-11-25 17:08:34.842 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:34 compute-0 nova_compute[254092]: 2025-11-25 17:08:34.842 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:08:34 compute-0 nova_compute[254092]: 2025-11-25 17:08:34.842 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:08:34 compute-0 nova_compute[254092]: 2025-11-25 17:08:34.844 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:34 compute-0 nova_compute[254092]: 2025-11-25 17:08:34.845 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcd0a47c2-54, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:08:34 compute-0 nova_compute[254092]: 2025-11-25 17:08:34.845 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcd0a47c2-54, col_values=(('external_ids', {'iface-id': 'cd0a47c2-5422-4bcf-a714-1ecdc3db9e87', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:37:ef:7f', 'vm-uuid': '8dfb4b8d-30c5-414d-98f6-1f788dbae8c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:08:34 compute-0 nova_compute[254092]: 2025-11-25 17:08:34.847 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:34 compute-0 NetworkManager[48891]: <info>  [1764090514.8477] manager: (tapcd0a47c2-54): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/553)
Nov 25 17:08:34 compute-0 nova_compute[254092]: 2025-11-25 17:08:34.849 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:08:34 compute-0 nova_compute[254092]: 2025-11-25 17:08:34.854 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:34 compute-0 nova_compute[254092]: 2025-11-25 17:08:34.855 254096 INFO os_vif [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:ef:7f,bridge_name='br-int',has_traffic_filtering=True,id=cd0a47c2-5422-4bcf-a714-1ecdc3db9e87,network=Network(a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd0a47c2-54')
Nov 25 17:08:34 compute-0 nova_compute[254092]: 2025-11-25 17:08:34.965 254096 DEBUG nova.virt.libvirt.driver [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:08:34 compute-0 nova_compute[254092]: 2025-11-25 17:08:34.966 254096 DEBUG nova.virt.libvirt.driver [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:08:34 compute-0 nova_compute[254092]: 2025-11-25 17:08:34.966 254096 DEBUG nova.virt.libvirt.driver [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] No VIF found with MAC fa:16:3e:37:ef:7f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:08:34 compute-0 nova_compute[254092]: 2025-11-25 17:08:34.967 254096 INFO nova.virt.libvirt.driver [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Using config drive
Nov 25 17:08:34 compute-0 nova_compute[254092]: 2025-11-25 17:08:34.996 254096 DEBUG nova.storage.rbd_utils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] rbd image 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:08:35 compute-0 nova_compute[254092]: 2025-11-25 17:08:35.496 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:35 compute-0 ceph-mon[74985]: pgmap v2551: 321 pgs: 321 active+clean; 76 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 1.3 MiB/s wr, 41 op/s
Nov 25 17:08:35 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4187159391' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:08:35 compute-0 nova_compute[254092]: 2025-11-25 17:08:35.736 254096 INFO nova.virt.libvirt.driver [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Creating config drive at /var/lib/nova/instances/8dfb4b8d-30c5-414d-98f6-1f788dbae8c0/disk.config
Nov 25 17:08:35 compute-0 nova_compute[254092]: 2025-11-25 17:08:35.742 254096 DEBUG oslo_concurrency.processutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8dfb4b8d-30c5-414d-98f6-1f788dbae8c0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7g_mmbmj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:08:35 compute-0 nova_compute[254092]: 2025-11-25 17:08:35.774 254096 DEBUG nova.network.neutron [req-10b05ab1-277e-4316-aa20-e50489d35c2d req-ac0faaaa-f870-4983-9ab3-436f94aa8cdb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Updated VIF entry in instance network info cache for port cd0a47c2-5422-4bcf-a714-1ecdc3db9e87. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:08:35 compute-0 nova_compute[254092]: 2025-11-25 17:08:35.775 254096 DEBUG nova.network.neutron [req-10b05ab1-277e-4316-aa20-e50489d35c2d req-ac0faaaa-f870-4983-9ab3-436f94aa8cdb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Updating instance_info_cache with network_info: [{"id": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "address": "fa:16:3e:37:ef:7f", "network": {"id": "a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1349202703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6d8d30c9803a4a3fa7d9179a85cf828e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd0a47c2-54", "ovs_interfaceid": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:08:35 compute-0 nova_compute[254092]: 2025-11-25 17:08:35.814 254096 DEBUG oslo_concurrency.lockutils [req-10b05ab1-277e-4316-aa20-e50489d35c2d req-ac0faaaa-f870-4983-9ab3-436f94aa8cdb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-8dfb4b8d-30c5-414d-98f6-1f788dbae8c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:08:35 compute-0 nova_compute[254092]: 2025-11-25 17:08:35.882 254096 DEBUG oslo_concurrency.processutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8dfb4b8d-30c5-414d-98f6-1f788dbae8c0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7g_mmbmj" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:08:35 compute-0 nova_compute[254092]: 2025-11-25 17:08:35.923 254096 DEBUG nova.storage.rbd_utils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] rbd image 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:08:35 compute-0 nova_compute[254092]: 2025-11-25 17:08:35.928 254096 DEBUG oslo_concurrency.processutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8dfb4b8d-30c5-414d-98f6-1f788dbae8c0/disk.config 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:08:36 compute-0 nova_compute[254092]: 2025-11-25 17:08:36.100 254096 DEBUG oslo_concurrency.processutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8dfb4b8d-30c5-414d-98f6-1f788dbae8c0/disk.config 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:08:36 compute-0 nova_compute[254092]: 2025-11-25 17:08:36.101 254096 INFO nova.virt.libvirt.driver [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Deleting local config drive /var/lib/nova/instances/8dfb4b8d-30c5-414d-98f6-1f788dbae8c0/disk.config because it was imported into RBD.
Nov 25 17:08:36 compute-0 kernel: tapcd0a47c2-54: entered promiscuous mode
Nov 25 17:08:36 compute-0 NetworkManager[48891]: <info>  [1764090516.1967] manager: (tapcd0a47c2-54): new Tun device (/org/freedesktop/NetworkManager/Devices/554)
Nov 25 17:08:36 compute-0 ovn_controller[153477]: 2025-11-25T17:08:36Z|01344|binding|INFO|Claiming lport cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 for this chassis.
Nov 25 17:08:36 compute-0 ovn_controller[153477]: 2025-11-25T17:08:36Z|01345|binding|INFO|cd0a47c2-5422-4bcf-a714-1ecdc3db9e87: Claiming fa:16:3e:37:ef:7f 10.100.0.3
Nov 25 17:08:36 compute-0 nova_compute[254092]: 2025-11-25 17:08:36.237 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:36.248 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:ef:7f 10.100.0.3'], port_security=['fa:16:3e:37:ef:7f 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8dfb4b8d-30c5-414d-98f6-1f788dbae8c0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d8d30c9803a4a3fa7d9179a85cf828e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ab0cb237-2b37-42b4-9676-2ced940d4952', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4a73e907-7a5d-4e02-85a2-a26af225c516, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=cd0a47c2-5422-4bcf-a714-1ecdc3db9e87) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:08:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:36.250 163338 INFO neutron.agent.ovn.metadata.agent [-] Port cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 in datapath a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b bound to our chassis
Nov 25 17:08:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:36.250 163338 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 25 17:08:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:36.251 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5ee4a058-f2fa-48fa-a95f-15e945c98f51]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:36 compute-0 systemd-udevd[395959]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:08:36 compute-0 systemd-machined[216343]: New machine qemu-162-instance-00000082.
Nov 25 17:08:36 compute-0 nova_compute[254092]: 2025-11-25 17:08:36.282 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:36 compute-0 NetworkManager[48891]: <info>  [1764090516.2844] device (tapcd0a47c2-54): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:08:36 compute-0 NetworkManager[48891]: <info>  [1764090516.2855] device (tapcd0a47c2-54): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:08:36 compute-0 systemd[1]: Started Virtual Machine qemu-162-instance-00000082.
Nov 25 17:08:36 compute-0 ovn_controller[153477]: 2025-11-25T17:08:36Z|01346|binding|INFO|Setting lport cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 ovn-installed in OVS
Nov 25 17:08:36 compute-0 ovn_controller[153477]: 2025-11-25T17:08:36Z|01347|binding|INFO|Setting lport cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 up in Southbound
Nov 25 17:08:36 compute-0 nova_compute[254092]: 2025-11-25 17:08:36.290 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2552: 321 pgs: 321 active+clean; 88 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Nov 25 17:08:36 compute-0 nova_compute[254092]: 2025-11-25 17:08:36.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:08:36 compute-0 nova_compute[254092]: 2025-11-25 17:08:36.902 254096 DEBUG nova.compute.manager [req-9159f7be-f08d-4047-89f7-268dd48166e0 req-2d8d15ea-c1e3-4998-b035-939441e676da a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received event network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:08:36 compute-0 nova_compute[254092]: 2025-11-25 17:08:36.904 254096 DEBUG oslo_concurrency.lockutils [req-9159f7be-f08d-4047-89f7-268dd48166e0 req-2d8d15ea-c1e3-4998-b035-939441e676da a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:08:36 compute-0 nova_compute[254092]: 2025-11-25 17:08:36.904 254096 DEBUG oslo_concurrency.lockutils [req-9159f7be-f08d-4047-89f7-268dd48166e0 req-2d8d15ea-c1e3-4998-b035-939441e676da a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:08:36 compute-0 nova_compute[254092]: 2025-11-25 17:08:36.905 254096 DEBUG oslo_concurrency.lockutils [req-9159f7be-f08d-4047-89f7-268dd48166e0 req-2d8d15ea-c1e3-4998-b035-939441e676da a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:36 compute-0 nova_compute[254092]: 2025-11-25 17:08:36.905 254096 DEBUG nova.compute.manager [req-9159f7be-f08d-4047-89f7-268dd48166e0 req-2d8d15ea-c1e3-4998-b035-939441e676da a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Processing event network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 17:08:36 compute-0 nova_compute[254092]: 2025-11-25 17:08:36.983 254096 DEBUG nova.compute.manager [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 17:08:36 compute-0 nova_compute[254092]: 2025-11-25 17:08:36.984 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090516.9821105, 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:08:36 compute-0 nova_compute[254092]: 2025-11-25 17:08:36.984 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] VM Started (Lifecycle Event)
Nov 25 17:08:36 compute-0 nova_compute[254092]: 2025-11-25 17:08:36.988 254096 DEBUG nova.virt.libvirt.driver [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 17:08:36 compute-0 nova_compute[254092]: 2025-11-25 17:08:36.991 254096 INFO nova.virt.libvirt.driver [-] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Instance spawned successfully.
Nov 25 17:08:36 compute-0 nova_compute[254092]: 2025-11-25 17:08:36.992 254096 DEBUG nova.virt.libvirt.driver [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 17:08:37 compute-0 nova_compute[254092]: 2025-11-25 17:08:37.015 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:08:37 compute-0 nova_compute[254092]: 2025-11-25 17:08:37.022 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:08:37 compute-0 nova_compute[254092]: 2025-11-25 17:08:37.025 254096 DEBUG nova.virt.libvirt.driver [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:08:37 compute-0 nova_compute[254092]: 2025-11-25 17:08:37.026 254096 DEBUG nova.virt.libvirt.driver [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:08:37 compute-0 nova_compute[254092]: 2025-11-25 17:08:37.026 254096 DEBUG nova.virt.libvirt.driver [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:08:37 compute-0 nova_compute[254092]: 2025-11-25 17:08:37.027 254096 DEBUG nova.virt.libvirt.driver [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:08:37 compute-0 nova_compute[254092]: 2025-11-25 17:08:37.027 254096 DEBUG nova.virt.libvirt.driver [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:08:37 compute-0 nova_compute[254092]: 2025-11-25 17:08:37.027 254096 DEBUG nova.virt.libvirt.driver [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:08:37 compute-0 nova_compute[254092]: 2025-11-25 17:08:37.047 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:08:37 compute-0 nova_compute[254092]: 2025-11-25 17:08:37.048 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090516.9824736, 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:08:37 compute-0 nova_compute[254092]: 2025-11-25 17:08:37.048 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] VM Paused (Lifecycle Event)
Nov 25 17:08:37 compute-0 nova_compute[254092]: 2025-11-25 17:08:37.070 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:08:37 compute-0 nova_compute[254092]: 2025-11-25 17:08:37.073 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090516.9875927, 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:08:37 compute-0 nova_compute[254092]: 2025-11-25 17:08:37.074 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] VM Resumed (Lifecycle Event)
Nov 25 17:08:37 compute-0 nova_compute[254092]: 2025-11-25 17:08:37.087 254096 INFO nova.compute.manager [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Took 7.25 seconds to spawn the instance on the hypervisor.
Nov 25 17:08:37 compute-0 nova_compute[254092]: 2025-11-25 17:08:37.087 254096 DEBUG nova.compute.manager [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:08:37 compute-0 nova_compute[254092]: 2025-11-25 17:08:37.091 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:08:37 compute-0 nova_compute[254092]: 2025-11-25 17:08:37.097 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:08:37 compute-0 nova_compute[254092]: 2025-11-25 17:08:37.127 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:08:37 compute-0 nova_compute[254092]: 2025-11-25 17:08:37.159 254096 INFO nova.compute.manager [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Took 8.90 seconds to build instance.
Nov 25 17:08:37 compute-0 nova_compute[254092]: 2025-11-25 17:08:37.186 254096 DEBUG oslo_concurrency.lockutils [None req-31ae68a8-e024-4b82-948b-8d922daf1dae c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.994s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:37 compute-0 nova_compute[254092]: 2025-11-25 17:08:37.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:08:37 compute-0 nova_compute[254092]: 2025-11-25 17:08:37.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:08:37 compute-0 nova_compute[254092]: 2025-11-25 17:08:37.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:08:37 compute-0 ceph-mon[74985]: pgmap v2552: 321 pgs: 321 active+clean; 88 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Nov 25 17:08:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2553: 321 pgs: 321 active+clean; 88 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Nov 25 17:08:38 compute-0 nova_compute[254092]: 2025-11-25 17:08:38.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:08:38 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:08:38 compute-0 nova_compute[254092]: 2025-11-25 17:08:38.992 254096 DEBUG nova.compute.manager [req-2daaf189-e56c-4000-a3b0-6af1c5fc86c5 req-f5c6c1f0-45f3-4791-a947-fd460c8de319 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received event network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:08:38 compute-0 nova_compute[254092]: 2025-11-25 17:08:38.993 254096 DEBUG oslo_concurrency.lockutils [req-2daaf189-e56c-4000-a3b0-6af1c5fc86c5 req-f5c6c1f0-45f3-4791-a947-fd460c8de319 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:08:38 compute-0 nova_compute[254092]: 2025-11-25 17:08:38.993 254096 DEBUG oslo_concurrency.lockutils [req-2daaf189-e56c-4000-a3b0-6af1c5fc86c5 req-f5c6c1f0-45f3-4791-a947-fd460c8de319 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:08:38 compute-0 nova_compute[254092]: 2025-11-25 17:08:38.993 254096 DEBUG oslo_concurrency.lockutils [req-2daaf189-e56c-4000-a3b0-6af1c5fc86c5 req-f5c6c1f0-45f3-4791-a947-fd460c8de319 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:38 compute-0 nova_compute[254092]: 2025-11-25 17:08:38.993 254096 DEBUG nova.compute.manager [req-2daaf189-e56c-4000-a3b0-6af1c5fc86c5 req-f5c6c1f0-45f3-4791-a947-fd460c8de319 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] No waiting events found dispatching network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:08:38 compute-0 nova_compute[254092]: 2025-11-25 17:08:38.994 254096 WARNING nova.compute.manager [req-2daaf189-e56c-4000-a3b0-6af1c5fc86c5 req-f5c6c1f0-45f3-4791-a947-fd460c8de319 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received unexpected event network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 for instance with vm_state active and task_state None.
Nov 25 17:08:39 compute-0 ceph-mon[74985]: pgmap v2553: 321 pgs: 321 active+clean; 88 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Nov 25 17:08:39 compute-0 nova_compute[254092]: 2025-11-25 17:08:39.885 254096 DEBUG nova.objects.instance [None req-1f531883-117b-40b3-8c65-f71c375a05c7 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lazy-loading 'pci_devices' on Instance uuid 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:08:39 compute-0 nova_compute[254092]: 2025-11-25 17:08:39.891 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:39 compute-0 nova_compute[254092]: 2025-11-25 17:08:39.901 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090519.9017365, 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:08:39 compute-0 nova_compute[254092]: 2025-11-25 17:08:39.902 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] VM Paused (Lifecycle Event)
Nov 25 17:08:39 compute-0 nova_compute[254092]: 2025-11-25 17:08:39.933 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:08:39 compute-0 nova_compute[254092]: 2025-11-25 17:08:39.938 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:08:39 compute-0 nova_compute[254092]: 2025-11-25 17:08:39.963 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] During sync_power_state the instance has a pending task (suspending). Skip.
Nov 25 17:08:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:08:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:08:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:08:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:08:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:08:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:08:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:08:40
Nov 25 17:08:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:08:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:08:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', '.mgr', 'volumes', 'vms', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', 'default.rgw.meta', 'default.rgw.control']
Nov 25 17:08:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:08:40 compute-0 kernel: tapcd0a47c2-54 (unregistering): left promiscuous mode
Nov 25 17:08:40 compute-0 NetworkManager[48891]: <info>  [1764090520.4096] device (tapcd0a47c2-54): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:08:40 compute-0 ovn_controller[153477]: 2025-11-25T17:08:40Z|01348|binding|INFO|Releasing lport cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 from this chassis (sb_readonly=0)
Nov 25 17:08:40 compute-0 ovn_controller[153477]: 2025-11-25T17:08:40Z|01349|binding|INFO|Setting lport cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 down in Southbound
Nov 25 17:08:40 compute-0 ovn_controller[153477]: 2025-11-25T17:08:40Z|01350|binding|INFO|Removing iface tapcd0a47c2-54 ovn-installed in OVS
Nov 25 17:08:40 compute-0 nova_compute[254092]: 2025-11-25 17:08:40.421 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:40.428 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:ef:7f 10.100.0.3'], port_security=['fa:16:3e:37:ef:7f 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8dfb4b8d-30c5-414d-98f6-1f788dbae8c0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d8d30c9803a4a3fa7d9179a85cf828e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ab0cb237-2b37-42b4-9676-2ced940d4952', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4a73e907-7a5d-4e02-85a2-a26af225c516, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=cd0a47c2-5422-4bcf-a714-1ecdc3db9e87) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:08:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:40.430 163338 INFO neutron.agent.ovn.metadata.agent [-] Port cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 in datapath a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b unbound from our chassis
Nov 25 17:08:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:40.430 163338 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 25 17:08:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:40.432 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1e9796f1-8e18-41a5-9479-6379daae8445]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:40 compute-0 nova_compute[254092]: 2025-11-25 17:08:40.443 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:40 compute-0 systemd[1]: machine-qemu\x2d162\x2dinstance\x2d00000082.scope: Deactivated successfully.
Nov 25 17:08:40 compute-0 systemd[1]: machine-qemu\x2d162\x2dinstance\x2d00000082.scope: Consumed 3.819s CPU time.
Nov 25 17:08:40 compute-0 systemd-machined[216343]: Machine qemu-162-instance-00000082 terminated.
Nov 25 17:08:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2554: 321 pgs: 321 active+clean; 88 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 538 KiB/s rd, 1.8 MiB/s wr, 75 op/s
Nov 25 17:08:40 compute-0 nova_compute[254092]: 2025-11-25 17:08:40.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:08:40 compute-0 nova_compute[254092]: 2025-11-25 17:08:40.496 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:08:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:08:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:08:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:08:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:08:40 compute-0 nova_compute[254092]: 2025-11-25 17:08:40.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:08:40 compute-0 nova_compute[254092]: 2025-11-25 17:08:40.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:08:40 compute-0 nova_compute[254092]: 2025-11-25 17:08:40.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:40 compute-0 nova_compute[254092]: 2025-11-25 17:08:40.522 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:08:40 compute-0 nova_compute[254092]: 2025-11-25 17:08:40.523 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:08:40 compute-0 nova_compute[254092]: 2025-11-25 17:08:40.585 254096 DEBUG nova.compute.manager [None req-1f531883-117b-40b3-8c65-f71c375a05c7 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:08:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:08:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:08:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:08:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:08:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:08:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:08:41 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/63606502' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:08:41 compute-0 nova_compute[254092]: 2025-11-25 17:08:41.037 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:08:41 compute-0 nova_compute[254092]: 2025-11-25 17:08:41.083 254096 DEBUG nova.compute.manager [req-4dd8b3c6-f3bc-4e6c-b530-9604569bf4ec req-8a0db8f5-54b0-437e-83b3-9f23e3862c2f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received event network-vif-unplugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:08:41 compute-0 nova_compute[254092]: 2025-11-25 17:08:41.083 254096 DEBUG oslo_concurrency.lockutils [req-4dd8b3c6-f3bc-4e6c-b530-9604569bf4ec req-8a0db8f5-54b0-437e-83b3-9f23e3862c2f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:08:41 compute-0 nova_compute[254092]: 2025-11-25 17:08:41.084 254096 DEBUG oslo_concurrency.lockutils [req-4dd8b3c6-f3bc-4e6c-b530-9604569bf4ec req-8a0db8f5-54b0-437e-83b3-9f23e3862c2f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:08:41 compute-0 nova_compute[254092]: 2025-11-25 17:08:41.084 254096 DEBUG oslo_concurrency.lockutils [req-4dd8b3c6-f3bc-4e6c-b530-9604569bf4ec req-8a0db8f5-54b0-437e-83b3-9f23e3862c2f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:41 compute-0 nova_compute[254092]: 2025-11-25 17:08:41.084 254096 DEBUG nova.compute.manager [req-4dd8b3c6-f3bc-4e6c-b530-9604569bf4ec req-8a0db8f5-54b0-437e-83b3-9f23e3862c2f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] No waiting events found dispatching network-vif-unplugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:08:41 compute-0 nova_compute[254092]: 2025-11-25 17:08:41.084 254096 WARNING nova.compute.manager [req-4dd8b3c6-f3bc-4e6c-b530-9604569bf4ec req-8a0db8f5-54b0-437e-83b3-9f23e3862c2f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received unexpected event network-vif-unplugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 for instance with vm_state suspended and task_state None.
Nov 25 17:08:41 compute-0 nova_compute[254092]: 2025-11-25 17:08:41.085 254096 DEBUG nova.compute.manager [req-4dd8b3c6-f3bc-4e6c-b530-9604569bf4ec req-8a0db8f5-54b0-437e-83b3-9f23e3862c2f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received event network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:08:41 compute-0 nova_compute[254092]: 2025-11-25 17:08:41.085 254096 DEBUG oslo_concurrency.lockutils [req-4dd8b3c6-f3bc-4e6c-b530-9604569bf4ec req-8a0db8f5-54b0-437e-83b3-9f23e3862c2f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:08:41 compute-0 nova_compute[254092]: 2025-11-25 17:08:41.085 254096 DEBUG oslo_concurrency.lockutils [req-4dd8b3c6-f3bc-4e6c-b530-9604569bf4ec req-8a0db8f5-54b0-437e-83b3-9f23e3862c2f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:08:41 compute-0 nova_compute[254092]: 2025-11-25 17:08:41.086 254096 DEBUG oslo_concurrency.lockutils [req-4dd8b3c6-f3bc-4e6c-b530-9604569bf4ec req-8a0db8f5-54b0-437e-83b3-9f23e3862c2f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:41 compute-0 nova_compute[254092]: 2025-11-25 17:08:41.086 254096 DEBUG nova.compute.manager [req-4dd8b3c6-f3bc-4e6c-b530-9604569bf4ec req-8a0db8f5-54b0-437e-83b3-9f23e3862c2f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] No waiting events found dispatching network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:08:41 compute-0 nova_compute[254092]: 2025-11-25 17:08:41.086 254096 WARNING nova.compute.manager [req-4dd8b3c6-f3bc-4e6c-b530-9604569bf4ec req-8a0db8f5-54b0-437e-83b3-9f23e3862c2f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received unexpected event network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 for instance with vm_state suspended and task_state None.
Nov 25 17:08:41 compute-0 nova_compute[254092]: 2025-11-25 17:08:41.106 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:08:41 compute-0 nova_compute[254092]: 2025-11-25 17:08:41.106 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:08:41 compute-0 nova_compute[254092]: 2025-11-25 17:08:41.259 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:08:41 compute-0 nova_compute[254092]: 2025-11-25 17:08:41.260 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3643MB free_disk=59.9675178527832GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:08:41 compute-0 nova_compute[254092]: 2025-11-25 17:08:41.261 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:08:41 compute-0 nova_compute[254092]: 2025-11-25 17:08:41.261 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:08:41 compute-0 nova_compute[254092]: 2025-11-25 17:08:41.316 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 17:08:41 compute-0 nova_compute[254092]: 2025-11-25 17:08:41.316 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:08:41 compute-0 nova_compute[254092]: 2025-11-25 17:08:41.316 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:08:41 compute-0 nova_compute[254092]: 2025-11-25 17:08:41.348 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:08:41 compute-0 nova_compute[254092]: 2025-11-25 17:08:41.488 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090506.4877155, 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:08:41 compute-0 nova_compute[254092]: 2025-11-25 17:08:41.489 254096 INFO nova.compute.manager [-] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] VM Stopped (Lifecycle Event)
Nov 25 17:08:41 compute-0 nova_compute[254092]: 2025-11-25 17:08:41.505 254096 DEBUG nova.compute.manager [None req-e7ca1aef-eee9-4dc3-ac74-3d6a3d90e8c2 - - - - - -] [instance: 239f4ff1-2a0a-48d4-9f58-e6ff9ab24b0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:08:41 compute-0 ceph-mon[74985]: pgmap v2554: 321 pgs: 321 active+clean; 88 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 538 KiB/s rd, 1.8 MiB/s wr, 75 op/s
Nov 25 17:08:41 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/63606502' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:08:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:08:41 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2761545435' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:08:41 compute-0 nova_compute[254092]: 2025-11-25 17:08:41.762 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:08:41 compute-0 nova_compute[254092]: 2025-11-25 17:08:41.767 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:08:41 compute-0 nova_compute[254092]: 2025-11-25 17:08:41.791 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:08:41 compute-0 nova_compute[254092]: 2025-11-25 17:08:41.809 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:08:41 compute-0 nova_compute[254092]: 2025-11-25 17:08:41.810 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.549s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:42 compute-0 nova_compute[254092]: 2025-11-25 17:08:42.464 254096 INFO nova.compute.manager [None req-f7aba151-ec0e-4d5a-919f-afd798cb315f c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Resuming
Nov 25 17:08:42 compute-0 nova_compute[254092]: 2025-11-25 17:08:42.466 254096 DEBUG nova.objects.instance [None req-f7aba151-ec0e-4d5a-919f-afd798cb315f c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lazy-loading 'flavor' on Instance uuid 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:08:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2555: 321 pgs: 321 active+clean; 88 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 115 op/s
Nov 25 17:08:42 compute-0 nova_compute[254092]: 2025-11-25 17:08:42.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:08:42 compute-0 nova_compute[254092]: 2025-11-25 17:08:42.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:08:42 compute-0 nova_compute[254092]: 2025-11-25 17:08:42.507 254096 DEBUG oslo_concurrency.lockutils [None req-f7aba151-ec0e-4d5a-919f-afd798cb315f c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Acquiring lock "refresh_cache-8dfb4b8d-30c5-414d-98f6-1f788dbae8c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:08:42 compute-0 nova_compute[254092]: 2025-11-25 17:08:42.508 254096 DEBUG oslo_concurrency.lockutils [None req-f7aba151-ec0e-4d5a-919f-afd798cb315f c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Acquired lock "refresh_cache-8dfb4b8d-30c5-414d-98f6-1f788dbae8c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:08:42 compute-0 nova_compute[254092]: 2025-11-25 17:08:42.508 254096 DEBUG nova.network.neutron [None req-f7aba151-ec0e-4d5a-919f-afd798cb315f c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 17:08:42 compute-0 nova_compute[254092]: 2025-11-25 17:08:42.516 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 17:08:42 compute-0 nova_compute[254092]: 2025-11-25 17:08:42.517 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:08:42 compute-0 nova_compute[254092]: 2025-11-25 17:08:42.517 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 17:08:42 compute-0 nova_compute[254092]: 2025-11-25 17:08:42.529 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 17:08:42 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2761545435' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:08:43 compute-0 nova_compute[254092]: 2025-11-25 17:08:43.508 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:08:43 compute-0 ceph-mon[74985]: pgmap v2555: 321 pgs: 321 active+clean; 88 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 115 op/s
Nov 25 17:08:43 compute-0 nova_compute[254092]: 2025-11-25 17:08:43.748 254096 DEBUG nova.network.neutron [None req-f7aba151-ec0e-4d5a-919f-afd798cb315f c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Updating instance_info_cache with network_info: [{"id": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "address": "fa:16:3e:37:ef:7f", "network": {"id": "a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1349202703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6d8d30c9803a4a3fa7d9179a85cf828e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd0a47c2-54", "ovs_interfaceid": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:08:43 compute-0 nova_compute[254092]: 2025-11-25 17:08:43.762 254096 DEBUG oslo_concurrency.lockutils [None req-f7aba151-ec0e-4d5a-919f-afd798cb315f c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Releasing lock "refresh_cache-8dfb4b8d-30c5-414d-98f6-1f788dbae8c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:08:43 compute-0 nova_compute[254092]: 2025-11-25 17:08:43.770 254096 DEBUG nova.virt.libvirt.vif [None req-f7aba151-ec0e-4d5a-919f-afd798cb315f c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:08:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1287608653',display_name='tempest-TestServerAdvancedOps-server-1287608653',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1287608653',id=130,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:08:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='6d8d30c9803a4a3fa7d9179a85cf828e',ramdisk_id='',reservation_id='r-lv49apwo',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_
disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-343246187',owner_user_name='tempest-TestServerAdvancedOps-343246187-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:08:40Z,user_data=None,user_id='c10f7baf67a8412caf2428d3200f851d',uuid=8dfb4b8d-30c5-414d-98f6-1f788dbae8c0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "address": "fa:16:3e:37:ef:7f", "network": {"id": "a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1349202703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6d8d30c9803a4a3fa7d9179a85cf828e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd0a47c2-54", "ovs_interfaceid": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:08:43 compute-0 nova_compute[254092]: 2025-11-25 17:08:43.771 254096 DEBUG nova.network.os_vif_util [None req-f7aba151-ec0e-4d5a-919f-afd798cb315f c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Converting VIF {"id": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "address": "fa:16:3e:37:ef:7f", "network": {"id": "a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1349202703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6d8d30c9803a4a3fa7d9179a85cf828e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd0a47c2-54", "ovs_interfaceid": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:08:43 compute-0 nova_compute[254092]: 2025-11-25 17:08:43.772 254096 DEBUG nova.network.os_vif_util [None req-f7aba151-ec0e-4d5a-919f-afd798cb315f c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:ef:7f,bridge_name='br-int',has_traffic_filtering=True,id=cd0a47c2-5422-4bcf-a714-1ecdc3db9e87,network=Network(a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd0a47c2-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:08:43 compute-0 nova_compute[254092]: 2025-11-25 17:08:43.773 254096 DEBUG os_vif [None req-f7aba151-ec0e-4d5a-919f-afd798cb315f c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:ef:7f,bridge_name='br-int',has_traffic_filtering=True,id=cd0a47c2-5422-4bcf-a714-1ecdc3db9e87,network=Network(a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd0a47c2-54') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:08:43 compute-0 nova_compute[254092]: 2025-11-25 17:08:43.774 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:43 compute-0 nova_compute[254092]: 2025-11-25 17:08:43.775 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:08:43 compute-0 nova_compute[254092]: 2025-11-25 17:08:43.776 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:08:43 compute-0 nova_compute[254092]: 2025-11-25 17:08:43.780 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:43 compute-0 nova_compute[254092]: 2025-11-25 17:08:43.781 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcd0a47c2-54, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:08:43 compute-0 nova_compute[254092]: 2025-11-25 17:08:43.782 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcd0a47c2-54, col_values=(('external_ids', {'iface-id': 'cd0a47c2-5422-4bcf-a714-1ecdc3db9e87', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:37:ef:7f', 'vm-uuid': '8dfb4b8d-30c5-414d-98f6-1f788dbae8c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:08:43 compute-0 nova_compute[254092]: 2025-11-25 17:08:43.783 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:08:43 compute-0 nova_compute[254092]: 2025-11-25 17:08:43.783 254096 INFO os_vif [None req-f7aba151-ec0e-4d5a-919f-afd798cb315f c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:ef:7f,bridge_name='br-int',has_traffic_filtering=True,id=cd0a47c2-5422-4bcf-a714-1ecdc3db9e87,network=Network(a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd0a47c2-54')
Nov 25 17:08:43 compute-0 nova_compute[254092]: 2025-11-25 17:08:43.810 254096 DEBUG nova.objects.instance [None req-f7aba151-ec0e-4d5a-919f-afd798cb315f c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lazy-loading 'numa_topology' on Instance uuid 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:08:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:08:43 compute-0 kernel: tapcd0a47c2-54: entered promiscuous mode
Nov 25 17:08:43 compute-0 ovn_controller[153477]: 2025-11-25T17:08:43Z|01351|binding|INFO|Claiming lport cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 for this chassis.
Nov 25 17:08:43 compute-0 ovn_controller[153477]: 2025-11-25T17:08:43Z|01352|binding|INFO|cd0a47c2-5422-4bcf-a714-1ecdc3db9e87: Claiming fa:16:3e:37:ef:7f 10.100.0.3
Nov 25 17:08:43 compute-0 nova_compute[254092]: 2025-11-25 17:08:43.907 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:43 compute-0 NetworkManager[48891]: <info>  [1764090523.9109] manager: (tapcd0a47c2-54): new Tun device (/org/freedesktop/NetworkManager/Devices/555)
Nov 25 17:08:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:43.920 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:ef:7f 10.100.0.3'], port_security=['fa:16:3e:37:ef:7f 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8dfb4b8d-30c5-414d-98f6-1f788dbae8c0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d8d30c9803a4a3fa7d9179a85cf828e', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'ab0cb237-2b37-42b4-9676-2ced940d4952', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4a73e907-7a5d-4e02-85a2-a26af225c516, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=cd0a47c2-5422-4bcf-a714-1ecdc3db9e87) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:08:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:43.922 163338 INFO neutron.agent.ovn.metadata.agent [-] Port cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 in datapath a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b bound to our chassis
Nov 25 17:08:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:43.923 163338 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 25 17:08:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:43.925 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1f249bd9-1ecd-4e3d-ba4a-a785f5d23106]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:43 compute-0 ovn_controller[153477]: 2025-11-25T17:08:43Z|01353|binding|INFO|Setting lport cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 up in Southbound
Nov 25 17:08:43 compute-0 ovn_controller[153477]: 2025-11-25T17:08:43Z|01354|binding|INFO|Setting lport cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 ovn-installed in OVS
Nov 25 17:08:43 compute-0 nova_compute[254092]: 2025-11-25 17:08:43.932 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:43 compute-0 nova_compute[254092]: 2025-11-25 17:08:43.934 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:43 compute-0 nova_compute[254092]: 2025-11-25 17:08:43.941 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:43 compute-0 systemd-udevd[396090]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:08:43 compute-0 systemd-machined[216343]: New machine qemu-163-instance-00000082.
Nov 25 17:08:43 compute-0 NetworkManager[48891]: <info>  [1764090523.9690] device (tapcd0a47c2-54): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:08:43 compute-0 NetworkManager[48891]: <info>  [1764090523.9706] device (tapcd0a47c2-54): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:08:43 compute-0 systemd[1]: Started Virtual Machine qemu-163-instance-00000082.
Nov 25 17:08:44 compute-0 nova_compute[254092]: 2025-11-25 17:08:44.139 254096 DEBUG nova.compute.manager [req-2be11e68-2249-4715-aca3-382f926f22e1 req-7f9e4744-6d2f-48ca-a4fb-e082632a4870 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received event network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:08:44 compute-0 nova_compute[254092]: 2025-11-25 17:08:44.140 254096 DEBUG oslo_concurrency.lockutils [req-2be11e68-2249-4715-aca3-382f926f22e1 req-7f9e4744-6d2f-48ca-a4fb-e082632a4870 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:08:44 compute-0 nova_compute[254092]: 2025-11-25 17:08:44.140 254096 DEBUG oslo_concurrency.lockutils [req-2be11e68-2249-4715-aca3-382f926f22e1 req-7f9e4744-6d2f-48ca-a4fb-e082632a4870 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:08:44 compute-0 nova_compute[254092]: 2025-11-25 17:08:44.141 254096 DEBUG oslo_concurrency.lockutils [req-2be11e68-2249-4715-aca3-382f926f22e1 req-7f9e4744-6d2f-48ca-a4fb-e082632a4870 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:44 compute-0 nova_compute[254092]: 2025-11-25 17:08:44.141 254096 DEBUG nova.compute.manager [req-2be11e68-2249-4715-aca3-382f926f22e1 req-7f9e4744-6d2f-48ca-a4fb-e082632a4870 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] No waiting events found dispatching network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:08:44 compute-0 nova_compute[254092]: 2025-11-25 17:08:44.141 254096 WARNING nova.compute.manager [req-2be11e68-2249-4715-aca3-382f926f22e1 req-7f9e4744-6d2f-48ca-a4fb-e082632a4870 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received unexpected event network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 for instance with vm_state suspended and task_state resuming.
Nov 25 17:08:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2556: 321 pgs: 321 active+clean; 88 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 491 KiB/s wr, 87 op/s
Nov 25 17:08:44 compute-0 nova_compute[254092]: 2025-11-25 17:08:44.717 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 17:08:44 compute-0 nova_compute[254092]: 2025-11-25 17:08:44.718 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090524.7168858, 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:08:44 compute-0 nova_compute[254092]: 2025-11-25 17:08:44.718 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] VM Started (Lifecycle Event)
Nov 25 17:08:44 compute-0 nova_compute[254092]: 2025-11-25 17:08:44.745 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:08:44 compute-0 nova_compute[254092]: 2025-11-25 17:08:44.753 254096 DEBUG nova.compute.manager [None req-f7aba151-ec0e-4d5a-919f-afd798cb315f c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 17:08:44 compute-0 nova_compute[254092]: 2025-11-25 17:08:44.753 254096 DEBUG nova.objects.instance [None req-f7aba151-ec0e-4d5a-919f-afd798cb315f c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lazy-loading 'pci_devices' on Instance uuid 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:08:44 compute-0 nova_compute[254092]: 2025-11-25 17:08:44.757 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:08:44 compute-0 nova_compute[254092]: 2025-11-25 17:08:44.787 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] During sync_power_state the instance has a pending task (resuming). Skip.
Nov 25 17:08:44 compute-0 nova_compute[254092]: 2025-11-25 17:08:44.788 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090524.7222576, 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:08:44 compute-0 nova_compute[254092]: 2025-11-25 17:08:44.788 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] VM Resumed (Lifecycle Event)
Nov 25 17:08:44 compute-0 nova_compute[254092]: 2025-11-25 17:08:44.796 254096 INFO nova.virt.libvirt.driver [-] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Instance running successfully.
Nov 25 17:08:44 compute-0 virtqemud[253880]: argument unsupported: QEMU guest agent is not configured
Nov 25 17:08:44 compute-0 nova_compute[254092]: 2025-11-25 17:08:44.799 254096 DEBUG nova.virt.libvirt.guest [None req-f7aba151-ec0e-4d5a-919f-afd798cb315f c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Nov 25 17:08:44 compute-0 nova_compute[254092]: 2025-11-25 17:08:44.800 254096 DEBUG nova.compute.manager [None req-f7aba151-ec0e-4d5a-919f-afd798cb315f c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:08:44 compute-0 nova_compute[254092]: 2025-11-25 17:08:44.808 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:08:44 compute-0 nova_compute[254092]: 2025-11-25 17:08:44.814 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:08:44 compute-0 nova_compute[254092]: 2025-11-25 17:08:44.841 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] During sync_power_state the instance has a pending task (resuming). Skip.
Nov 25 17:08:44 compute-0 nova_compute[254092]: 2025-11-25 17:08:44.892 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:45 compute-0 nova_compute[254092]: 2025-11-25 17:08:45.498 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:45 compute-0 ceph-mon[74985]: pgmap v2556: 321 pgs: 321 active+clean; 88 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 491 KiB/s wr, 87 op/s
Nov 25 17:08:46 compute-0 nova_compute[254092]: 2025-11-25 17:08:46.277 254096 DEBUG nova.compute.manager [req-9dccaa09-77f2-4cec-bd38-909c46f0c23f req-a526c345-d858-4f87-a6f7-0aa92dab99b3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received event network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:08:46 compute-0 nova_compute[254092]: 2025-11-25 17:08:46.278 254096 DEBUG oslo_concurrency.lockutils [req-9dccaa09-77f2-4cec-bd38-909c46f0c23f req-a526c345-d858-4f87-a6f7-0aa92dab99b3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:08:46 compute-0 nova_compute[254092]: 2025-11-25 17:08:46.279 254096 DEBUG oslo_concurrency.lockutils [req-9dccaa09-77f2-4cec-bd38-909c46f0c23f req-a526c345-d858-4f87-a6f7-0aa92dab99b3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:08:46 compute-0 nova_compute[254092]: 2025-11-25 17:08:46.279 254096 DEBUG oslo_concurrency.lockutils [req-9dccaa09-77f2-4cec-bd38-909c46f0c23f req-a526c345-d858-4f87-a6f7-0aa92dab99b3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:46 compute-0 nova_compute[254092]: 2025-11-25 17:08:46.280 254096 DEBUG nova.compute.manager [req-9dccaa09-77f2-4cec-bd38-909c46f0c23f req-a526c345-d858-4f87-a6f7-0aa92dab99b3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] No waiting events found dispatching network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:08:46 compute-0 nova_compute[254092]: 2025-11-25 17:08:46.280 254096 WARNING nova.compute.manager [req-9dccaa09-77f2-4cec-bd38-909c46f0c23f req-a526c345-d858-4f87-a6f7-0aa92dab99b3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received unexpected event network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 for instance with vm_state active and task_state None.
Nov 25 17:08:46 compute-0 nova_compute[254092]: 2025-11-25 17:08:46.421 254096 DEBUG nova.objects.instance [None req-d4791176-05a4-47a3-ad14-30dd70df3b03 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lazy-loading 'pci_devices' on Instance uuid 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:08:46 compute-0 nova_compute[254092]: 2025-11-25 17:08:46.451 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090526.4507444, 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:08:46 compute-0 nova_compute[254092]: 2025-11-25 17:08:46.452 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] VM Paused (Lifecycle Event)
Nov 25 17:08:46 compute-0 nova_compute[254092]: 2025-11-25 17:08:46.471 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:08:46 compute-0 nova_compute[254092]: 2025-11-25 17:08:46.476 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:08:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2557: 321 pgs: 321 active+clean; 88 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 491 KiB/s wr, 92 op/s
Nov 25 17:08:46 compute-0 nova_compute[254092]: 2025-11-25 17:08:46.493 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] During sync_power_state the instance has a pending task (suspending). Skip.
Nov 25 17:08:46 compute-0 kernel: tapcd0a47c2-54 (unregistering): left promiscuous mode
Nov 25 17:08:46 compute-0 NetworkManager[48891]: <info>  [1764090526.9682] device (tapcd0a47c2-54): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:08:46 compute-0 ovn_controller[153477]: 2025-11-25T17:08:46Z|01355|binding|INFO|Releasing lport cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 from this chassis (sb_readonly=0)
Nov 25 17:08:46 compute-0 ovn_controller[153477]: 2025-11-25T17:08:46Z|01356|binding|INFO|Setting lport cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 down in Southbound
Nov 25 17:08:46 compute-0 ovn_controller[153477]: 2025-11-25T17:08:46Z|01357|binding|INFO|Removing iface tapcd0a47c2-54 ovn-installed in OVS
Nov 25 17:08:46 compute-0 nova_compute[254092]: 2025-11-25 17:08:46.973 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:46.981 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:ef:7f 10.100.0.3'], port_security=['fa:16:3e:37:ef:7f 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8dfb4b8d-30c5-414d-98f6-1f788dbae8c0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d8d30c9803a4a3fa7d9179a85cf828e', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'ab0cb237-2b37-42b4-9676-2ced940d4952', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4a73e907-7a5d-4e02-85a2-a26af225c516, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=cd0a47c2-5422-4bcf-a714-1ecdc3db9e87) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:08:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:46.984 163338 INFO neutron.agent.ovn.metadata.agent [-] Port cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 in datapath a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b unbound from our chassis
Nov 25 17:08:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:46.985 163338 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 25 17:08:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:46.987 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a3f4c96d-e227-43a5-8dc3-d4d1d6d075d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:46 compute-0 nova_compute[254092]: 2025-11-25 17:08:46.991 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:47 compute-0 systemd[1]: machine-qemu\x2d163\x2dinstance\x2d00000082.scope: Deactivated successfully.
Nov 25 17:08:47 compute-0 systemd[1]: machine-qemu\x2d163\x2dinstance\x2d00000082.scope: Consumed 2.535s CPU time.
Nov 25 17:08:47 compute-0 systemd-machined[216343]: Machine qemu-163-instance-00000082 terminated.
Nov 25 17:08:47 compute-0 nova_compute[254092]: 2025-11-25 17:08:47.154 254096 DEBUG nova.compute.manager [None req-d4791176-05a4-47a3-ad14-30dd70df3b03 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:08:47 compute-0 nova_compute[254092]: 2025-11-25 17:08:47.856 254096 DEBUG oslo_concurrency.lockutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:08:47 compute-0 nova_compute[254092]: 2025-11-25 17:08:47.857 254096 DEBUG oslo_concurrency.lockutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:08:47 compute-0 nova_compute[254092]: 2025-11-25 17:08:47.872 254096 DEBUG nova.compute.manager [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 17:08:47 compute-0 ceph-mon[74985]: pgmap v2557: 321 pgs: 321 active+clean; 88 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 491 KiB/s wr, 92 op/s
Nov 25 17:08:47 compute-0 nova_compute[254092]: 2025-11-25 17:08:47.959 254096 DEBUG oslo_concurrency.lockutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:08:47 compute-0 nova_compute[254092]: 2025-11-25 17:08:47.960 254096 DEBUG oslo_concurrency.lockutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:08:47 compute-0 nova_compute[254092]: 2025-11-25 17:08:47.968 254096 DEBUG nova.virt.hardware [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 17:08:47 compute-0 nova_compute[254092]: 2025-11-25 17:08:47.968 254096 INFO nova.compute.claims [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Claim successful on node compute-0.ctlplane.example.com
Nov 25 17:08:48 compute-0 nova_compute[254092]: 2025-11-25 17:08:48.100 254096 DEBUG oslo_concurrency.processutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:08:48 compute-0 nova_compute[254092]: 2025-11-25 17:08:48.357 254096 DEBUG nova.compute.manager [req-fa6a51f1-db52-43f8-9dc1-f33ed9b51ac1 req-13d6c89e-1fef-495b-92cd-6bfe6e4b1cff a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received event network-vif-unplugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:08:48 compute-0 nova_compute[254092]: 2025-11-25 17:08:48.358 254096 DEBUG oslo_concurrency.lockutils [req-fa6a51f1-db52-43f8-9dc1-f33ed9b51ac1 req-13d6c89e-1fef-495b-92cd-6bfe6e4b1cff a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:08:48 compute-0 nova_compute[254092]: 2025-11-25 17:08:48.358 254096 DEBUG oslo_concurrency.lockutils [req-fa6a51f1-db52-43f8-9dc1-f33ed9b51ac1 req-13d6c89e-1fef-495b-92cd-6bfe6e4b1cff a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:08:48 compute-0 nova_compute[254092]: 2025-11-25 17:08:48.358 254096 DEBUG oslo_concurrency.lockutils [req-fa6a51f1-db52-43f8-9dc1-f33ed9b51ac1 req-13d6c89e-1fef-495b-92cd-6bfe6e4b1cff a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:48 compute-0 nova_compute[254092]: 2025-11-25 17:08:48.358 254096 DEBUG nova.compute.manager [req-fa6a51f1-db52-43f8-9dc1-f33ed9b51ac1 req-13d6c89e-1fef-495b-92cd-6bfe6e4b1cff a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] No waiting events found dispatching network-vif-unplugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:08:48 compute-0 nova_compute[254092]: 2025-11-25 17:08:48.359 254096 WARNING nova.compute.manager [req-fa6a51f1-db52-43f8-9dc1-f33ed9b51ac1 req-13d6c89e-1fef-495b-92cd-6bfe6e4b1cff a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received unexpected event network-vif-unplugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 for instance with vm_state suspended and task_state None.
Nov 25 17:08:48 compute-0 nova_compute[254092]: 2025-11-25 17:08:48.359 254096 DEBUG nova.compute.manager [req-fa6a51f1-db52-43f8-9dc1-f33ed9b51ac1 req-13d6c89e-1fef-495b-92cd-6bfe6e4b1cff a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received event network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:08:48 compute-0 nova_compute[254092]: 2025-11-25 17:08:48.359 254096 DEBUG oslo_concurrency.lockutils [req-fa6a51f1-db52-43f8-9dc1-f33ed9b51ac1 req-13d6c89e-1fef-495b-92cd-6bfe6e4b1cff a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:08:48 compute-0 nova_compute[254092]: 2025-11-25 17:08:48.360 254096 DEBUG oslo_concurrency.lockutils [req-fa6a51f1-db52-43f8-9dc1-f33ed9b51ac1 req-13d6c89e-1fef-495b-92cd-6bfe6e4b1cff a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:08:48 compute-0 nova_compute[254092]: 2025-11-25 17:08:48.360 254096 DEBUG oslo_concurrency.lockutils [req-fa6a51f1-db52-43f8-9dc1-f33ed9b51ac1 req-13d6c89e-1fef-495b-92cd-6bfe6e4b1cff a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:48 compute-0 nova_compute[254092]: 2025-11-25 17:08:48.360 254096 DEBUG nova.compute.manager [req-fa6a51f1-db52-43f8-9dc1-f33ed9b51ac1 req-13d6c89e-1fef-495b-92cd-6bfe6e4b1cff a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] No waiting events found dispatching network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:08:48 compute-0 nova_compute[254092]: 2025-11-25 17:08:48.360 254096 WARNING nova.compute.manager [req-fa6a51f1-db52-43f8-9dc1-f33ed9b51ac1 req-13d6c89e-1fef-495b-92cd-6bfe6e4b1cff a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received unexpected event network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 for instance with vm_state suspended and task_state None.
Nov 25 17:08:48 compute-0 nova_compute[254092]: 2025-11-25 17:08:48.406 254096 INFO nova.compute.manager [None req-0bec9add-d31c-445c-a34f-855cd855182e c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Resuming
Nov 25 17:08:48 compute-0 nova_compute[254092]: 2025-11-25 17:08:48.407 254096 DEBUG nova.objects.instance [None req-0bec9add-d31c-445c-a34f-855cd855182e c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lazy-loading 'flavor' on Instance uuid 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:08:48 compute-0 nova_compute[254092]: 2025-11-25 17:08:48.446 254096 DEBUG oslo_concurrency.lockutils [None req-0bec9add-d31c-445c-a34f-855cd855182e c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Acquiring lock "refresh_cache-8dfb4b8d-30c5-414d-98f6-1f788dbae8c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:08:48 compute-0 nova_compute[254092]: 2025-11-25 17:08:48.446 254096 DEBUG oslo_concurrency.lockutils [None req-0bec9add-d31c-445c-a34f-855cd855182e c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Acquired lock "refresh_cache-8dfb4b8d-30c5-414d-98f6-1f788dbae8c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:08:48 compute-0 nova_compute[254092]: 2025-11-25 17:08:48.447 254096 DEBUG nova.network.neutron [None req-0bec9add-d31c-445c-a34f-855cd855182e c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 17:08:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2558: 321 pgs: 321 active+clean; 88 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 78 op/s
Nov 25 17:08:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:08:48 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/567139493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:08:48 compute-0 nova_compute[254092]: 2025-11-25 17:08:48.579 254096 DEBUG oslo_concurrency.processutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:08:48 compute-0 nova_compute[254092]: 2025-11-25 17:08:48.584 254096 DEBUG nova.compute.provider_tree [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:08:48 compute-0 nova_compute[254092]: 2025-11-25 17:08:48.596 254096 DEBUG nova.scheduler.client.report [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:08:48 compute-0 nova_compute[254092]: 2025-11-25 17:08:48.614 254096 DEBUG oslo_concurrency.lockutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:48 compute-0 nova_compute[254092]: 2025-11-25 17:08:48.616 254096 DEBUG nova.compute.manager [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 17:08:48 compute-0 nova_compute[254092]: 2025-11-25 17:08:48.662 254096 DEBUG nova.compute.manager [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 17:08:48 compute-0 nova_compute[254092]: 2025-11-25 17:08:48.663 254096 DEBUG nova.network.neutron [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 17:08:48 compute-0 nova_compute[254092]: 2025-11-25 17:08:48.681 254096 INFO nova.virt.libvirt.driver [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 17:08:48 compute-0 nova_compute[254092]: 2025-11-25 17:08:48.700 254096 DEBUG nova.compute.manager [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 17:08:48 compute-0 nova_compute[254092]: 2025-11-25 17:08:48.789 254096 DEBUG nova.compute.manager [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 17:08:48 compute-0 nova_compute[254092]: 2025-11-25 17:08:48.791 254096 DEBUG nova.virt.libvirt.driver [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 17:08:48 compute-0 nova_compute[254092]: 2025-11-25 17:08:48.792 254096 INFO nova.virt.libvirt.driver [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Creating image(s)
Nov 25 17:08:48 compute-0 nova_compute[254092]: 2025-11-25 17:08:48.829 254096 DEBUG nova.storage.rbd_utils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image e9a105a6-90a1-4e21-9296-61a55e2ceec3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:08:48 compute-0 nova_compute[254092]: 2025-11-25 17:08:48.865 254096 DEBUG nova.storage.rbd_utils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image e9a105a6-90a1-4e21-9296-61a55e2ceec3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:08:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:08:48 compute-0 nova_compute[254092]: 2025-11-25 17:08:48.895 254096 DEBUG nova.storage.rbd_utils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image e9a105a6-90a1-4e21-9296-61a55e2ceec3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:08:48 compute-0 nova_compute[254092]: 2025-11-25 17:08:48.900 254096 DEBUG oslo_concurrency.processutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:08:48 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #120. Immutable memtables: 0.
Nov 25 17:08:48 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:08:48.903196) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 17:08:48 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 120
Nov 25 17:08:48 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090528903242, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 1029, "num_deletes": 251, "total_data_size": 1409745, "memory_usage": 1431664, "flush_reason": "Manual Compaction"}
Nov 25 17:08:48 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #121: started
Nov 25 17:08:48 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090528910262, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 121, "file_size": 856788, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52525, "largest_seqno": 53553, "table_properties": {"data_size": 852821, "index_size": 1555, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10739, "raw_average_key_size": 20, "raw_value_size": 844254, "raw_average_value_size": 1632, "num_data_blocks": 70, "num_entries": 517, "num_filter_entries": 517, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764090434, "oldest_key_time": 1764090434, "file_creation_time": 1764090528, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 121, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:08:48 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 7118 microseconds, and 3852 cpu microseconds.
Nov 25 17:08:48 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:08:48 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:08:48.910313) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #121: 856788 bytes OK
Nov 25 17:08:48 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:08:48.910334) [db/memtable_list.cc:519] [default] Level-0 commit table #121 started
Nov 25 17:08:48 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:08:48.912000) [db/memtable_list.cc:722] [default] Level-0 commit table #121: memtable #1 done
Nov 25 17:08:48 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:08:48.912018) EVENT_LOG_v1 {"time_micros": 1764090528912013, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 17:08:48 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:08:48.912035) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 17:08:48 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 1404883, prev total WAL file size 1404883, number of live WAL files 2.
Nov 25 17:08:48 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000117.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:08:48 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:08:48.912856) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303039' seq:72057594037927935, type:22 .. '6D6772737461740032323631' seq:0, type:0; will stop at (end)
Nov 25 17:08:48 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 17:08:48 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [121(836KB)], [119(10MB)]
Nov 25 17:08:48 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090528912919, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [121], "files_L6": [119], "score": -1, "input_data_size": 11562378, "oldest_snapshot_seqno": -1}
Nov 25 17:08:48 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/567139493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:08:48 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #122: 7366 keys, 8786319 bytes, temperature: kUnknown
Nov 25 17:08:48 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090528968880, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 122, "file_size": 8786319, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8739899, "index_size": 26916, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18437, "raw_key_size": 192652, "raw_average_key_size": 26, "raw_value_size": 8610923, "raw_average_value_size": 1169, "num_data_blocks": 1048, "num_entries": 7366, "num_filter_entries": 7366, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764090528, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:08:48 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:08:48 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:08:48.969056) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 8786319 bytes
Nov 25 17:08:48 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:08:48.969976) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 206.4 rd, 156.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 10.2 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(23.7) write-amplify(10.3) OK, records in: 7839, records dropped: 473 output_compression: NoCompression
Nov 25 17:08:48 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:08:48.969991) EVENT_LOG_v1 {"time_micros": 1764090528969984, "job": 72, "event": "compaction_finished", "compaction_time_micros": 56006, "compaction_time_cpu_micros": 22800, "output_level": 6, "num_output_files": 1, "total_output_size": 8786319, "num_input_records": 7839, "num_output_records": 7366, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 17:08:48 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000121.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:08:48 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090528970201, "job": 72, "event": "table_file_deletion", "file_number": 121}
Nov 25 17:08:48 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:08:48 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090528971669, "job": 72, "event": "table_file_deletion", "file_number": 119}
Nov 25 17:08:48 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:08:48.912720) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:08:48 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:08:48.971740) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:08:48 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:08:48.971745) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:08:48 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:08:48.971746) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:08:48 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:08:48.971748) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:08:48 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:08:48.971749) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:08:48 compute-0 nova_compute[254092]: 2025-11-25 17:08:48.985 254096 DEBUG oslo_concurrency.processutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:08:48 compute-0 nova_compute[254092]: 2025-11-25 17:08:48.987 254096 DEBUG oslo_concurrency.lockutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:08:48 compute-0 nova_compute[254092]: 2025-11-25 17:08:48.988 254096 DEBUG oslo_concurrency.lockutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:08:48 compute-0 nova_compute[254092]: 2025-11-25 17:08:48.988 254096 DEBUG oslo_concurrency.lockutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:49 compute-0 nova_compute[254092]: 2025-11-25 17:08:49.016 254096 DEBUG nova.storage.rbd_utils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image e9a105a6-90a1-4e21-9296-61a55e2ceec3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:08:49 compute-0 nova_compute[254092]: 2025-11-25 17:08:49.019 254096 DEBUG oslo_concurrency.processutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 e9a105a6-90a1-4e21-9296-61a55e2ceec3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:08:49 compute-0 nova_compute[254092]: 2025-11-25 17:08:49.054 254096 DEBUG nova.policy [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e13382ac65e94c7b8a89e67de0e754df', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 17:08:49 compute-0 nova_compute[254092]: 2025-11-25 17:08:49.282 254096 DEBUG oslo_concurrency.processutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 e9a105a6-90a1-4e21-9296-61a55e2ceec3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.263s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:08:49 compute-0 nova_compute[254092]: 2025-11-25 17:08:49.344 254096 DEBUG nova.storage.rbd_utils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] resizing rbd image e9a105a6-90a1-4e21-9296-61a55e2ceec3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 17:08:49 compute-0 nova_compute[254092]: 2025-11-25 17:08:49.417 254096 DEBUG nova.objects.instance [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'migration_context' on Instance uuid e9a105a6-90a1-4e21-9296-61a55e2ceec3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:08:49 compute-0 nova_compute[254092]: 2025-11-25 17:08:49.432 254096 DEBUG nova.virt.libvirt.driver [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 17:08:49 compute-0 nova_compute[254092]: 2025-11-25 17:08:49.433 254096 DEBUG nova.virt.libvirt.driver [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Ensure instance console log exists: /var/lib/nova/instances/e9a105a6-90a1-4e21-9296-61a55e2ceec3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 17:08:49 compute-0 nova_compute[254092]: 2025-11-25 17:08:49.433 254096 DEBUG oslo_concurrency.lockutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:08:49 compute-0 nova_compute[254092]: 2025-11-25 17:08:49.433 254096 DEBUG oslo_concurrency.lockutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:08:49 compute-0 nova_compute[254092]: 2025-11-25 17:08:49.434 254096 DEBUG oslo_concurrency.lockutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:49 compute-0 nova_compute[254092]: 2025-11-25 17:08:49.657 254096 DEBUG nova.network.neutron [None req-0bec9add-d31c-445c-a34f-855cd855182e c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Updating instance_info_cache with network_info: [{"id": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "address": "fa:16:3e:37:ef:7f", "network": {"id": "a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1349202703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6d8d30c9803a4a3fa7d9179a85cf828e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd0a47c2-54", "ovs_interfaceid": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:08:49 compute-0 nova_compute[254092]: 2025-11-25 17:08:49.668 254096 DEBUG oslo_concurrency.lockutils [None req-0bec9add-d31c-445c-a34f-855cd855182e c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Releasing lock "refresh_cache-8dfb4b8d-30c5-414d-98f6-1f788dbae8c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:08:49 compute-0 nova_compute[254092]: 2025-11-25 17:08:49.674 254096 DEBUG nova.virt.libvirt.vif [None req-0bec9add-d31c-445c-a34f-855cd855182e c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:08:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1287608653',display_name='tempest-TestServerAdvancedOps-server-1287608653',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1287608653',id=130,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:08:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='6d8d30c9803a4a3fa7d9179a85cf828e',ramdisk_id='',reservation_id='r-lv49apwo',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-343246187',owner_user_name='tempest-TestServerAdvancedOps-343246187-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:08:47Z,user_data=None,user_id='c10f7baf67a8412caf2428d3200f851d',uuid=8dfb4b8d-30c5-414d-98f6-1f788dbae8c0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "address": "fa:16:3e:37:ef:7f", "network": {"id": "a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1349202703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6d8d30c9803a4a3fa7d9179a85cf828e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd0a47c2-54", "ovs_interfaceid": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:08:49 compute-0 nova_compute[254092]: 2025-11-25 17:08:49.675 254096 DEBUG nova.network.os_vif_util [None req-0bec9add-d31c-445c-a34f-855cd855182e c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Converting VIF {"id": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "address": "fa:16:3e:37:ef:7f", "network": {"id": "a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1349202703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6d8d30c9803a4a3fa7d9179a85cf828e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd0a47c2-54", "ovs_interfaceid": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:08:49 compute-0 nova_compute[254092]: 2025-11-25 17:08:49.675 254096 DEBUG nova.network.os_vif_util [None req-0bec9add-d31c-445c-a34f-855cd855182e c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:ef:7f,bridge_name='br-int',has_traffic_filtering=True,id=cd0a47c2-5422-4bcf-a714-1ecdc3db9e87,network=Network(a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd0a47c2-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:08:49 compute-0 nova_compute[254092]: 2025-11-25 17:08:49.676 254096 DEBUG os_vif [None req-0bec9add-d31c-445c-a34f-855cd855182e c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:ef:7f,bridge_name='br-int',has_traffic_filtering=True,id=cd0a47c2-5422-4bcf-a714-1ecdc3db9e87,network=Network(a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd0a47c2-54') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:08:49 compute-0 nova_compute[254092]: 2025-11-25 17:08:49.676 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:49 compute-0 nova_compute[254092]: 2025-11-25 17:08:49.677 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:08:49 compute-0 nova_compute[254092]: 2025-11-25 17:08:49.677 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:08:49 compute-0 nova_compute[254092]: 2025-11-25 17:08:49.680 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:49 compute-0 nova_compute[254092]: 2025-11-25 17:08:49.680 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcd0a47c2-54, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:08:49 compute-0 nova_compute[254092]: 2025-11-25 17:08:49.680 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcd0a47c2-54, col_values=(('external_ids', {'iface-id': 'cd0a47c2-5422-4bcf-a714-1ecdc3db9e87', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:37:ef:7f', 'vm-uuid': '8dfb4b8d-30c5-414d-98f6-1f788dbae8c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:08:49 compute-0 nova_compute[254092]: 2025-11-25 17:08:49.681 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:08:49 compute-0 nova_compute[254092]: 2025-11-25 17:08:49.681 254096 INFO os_vif [None req-0bec9add-d31c-445c-a34f-855cd855182e c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:ef:7f,bridge_name='br-int',has_traffic_filtering=True,id=cd0a47c2-5422-4bcf-a714-1ecdc3db9e87,network=Network(a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd0a47c2-54')
Nov 25 17:08:49 compute-0 nova_compute[254092]: 2025-11-25 17:08:49.699 254096 DEBUG nova.objects.instance [None req-0bec9add-d31c-445c-a34f-855cd855182e c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lazy-loading 'numa_topology' on Instance uuid 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:08:49 compute-0 nova_compute[254092]: 2025-11-25 17:08:49.717 254096 DEBUG nova.network.neutron [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Successfully created port: 9b06b5b4-bc07-48ef-b51a-ecf0abc558ab _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 17:08:49 compute-0 kernel: tapcd0a47c2-54: entered promiscuous mode
Nov 25 17:08:49 compute-0 systemd-udevd[396148]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:08:49 compute-0 NetworkManager[48891]: <info>  [1764090529.7754] manager: (tapcd0a47c2-54): new Tun device (/org/freedesktop/NetworkManager/Devices/556)
Nov 25 17:08:49 compute-0 ovn_controller[153477]: 2025-11-25T17:08:49Z|01358|binding|INFO|Claiming lport cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 for this chassis.
Nov 25 17:08:49 compute-0 ovn_controller[153477]: 2025-11-25T17:08:49Z|01359|binding|INFO|cd0a47c2-5422-4bcf-a714-1ecdc3db9e87: Claiming fa:16:3e:37:ef:7f 10.100.0.3
Nov 25 17:08:49 compute-0 nova_compute[254092]: 2025-11-25 17:08:49.778 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:49.783 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:ef:7f 10.100.0.3'], port_security=['fa:16:3e:37:ef:7f 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8dfb4b8d-30c5-414d-98f6-1f788dbae8c0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d8d30c9803a4a3fa7d9179a85cf828e', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'ab0cb237-2b37-42b4-9676-2ced940d4952', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4a73e907-7a5d-4e02-85a2-a26af225c516, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=cd0a47c2-5422-4bcf-a714-1ecdc3db9e87) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:08:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:49.786 163338 INFO neutron.agent.ovn.metadata.agent [-] Port cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 in datapath a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b bound to our chassis
Nov 25 17:08:49 compute-0 NetworkManager[48891]: <info>  [1764090529.7881] device (tapcd0a47c2-54): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:08:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:49.787 163338 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 25 17:08:49 compute-0 NetworkManager[48891]: <info>  [1764090529.7892] device (tapcd0a47c2-54): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:08:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:49.788 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c12031d9-9919-4b4d-b91b-d634b6594c0d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:49 compute-0 ovn_controller[153477]: 2025-11-25T17:08:49Z|01360|binding|INFO|Setting lport cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 ovn-installed in OVS
Nov 25 17:08:49 compute-0 ovn_controller[153477]: 2025-11-25T17:08:49Z|01361|binding|INFO|Setting lport cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 up in Southbound
Nov 25 17:08:49 compute-0 nova_compute[254092]: 2025-11-25 17:08:49.807 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:49 compute-0 nova_compute[254092]: 2025-11-25 17:08:49.808 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:49 compute-0 nova_compute[254092]: 2025-11-25 17:08:49.812 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:49 compute-0 systemd-machined[216343]: New machine qemu-164-instance-00000082.
Nov 25 17:08:49 compute-0 systemd[1]: Started Virtual Machine qemu-164-instance-00000082.
Nov 25 17:08:49 compute-0 nova_compute[254092]: 2025-11-25 17:08:49.894 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:49 compute-0 ceph-mon[74985]: pgmap v2558: 321 pgs: 321 active+clean; 88 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 78 op/s
Nov 25 17:08:50 compute-0 nova_compute[254092]: 2025-11-25 17:08:50.354 254096 DEBUG nova.virt.libvirt.host [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Removed pending event for 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 17:08:50 compute-0 nova_compute[254092]: 2025-11-25 17:08:50.354 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090530.3538384, 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:08:50 compute-0 nova_compute[254092]: 2025-11-25 17:08:50.355 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] VM Started (Lifecycle Event)
Nov 25 17:08:50 compute-0 nova_compute[254092]: 2025-11-25 17:08:50.369 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:08:50 compute-0 nova_compute[254092]: 2025-11-25 17:08:50.375 254096 DEBUG nova.compute.manager [None req-0bec9add-d31c-445c-a34f-855cd855182e c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 17:08:50 compute-0 nova_compute[254092]: 2025-11-25 17:08:50.376 254096 DEBUG nova.objects.instance [None req-0bec9add-d31c-445c-a34f-855cd855182e c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lazy-loading 'pci_devices' on Instance uuid 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:08:50 compute-0 nova_compute[254092]: 2025-11-25 17:08:50.379 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:08:50 compute-0 nova_compute[254092]: 2025-11-25 17:08:50.409 254096 INFO nova.virt.libvirt.driver [-] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Instance running successfully.
Nov 25 17:08:50 compute-0 virtqemud[253880]: argument unsupported: QEMU guest agent is not configured
Nov 25 17:08:50 compute-0 nova_compute[254092]: 2025-11-25 17:08:50.410 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] During sync_power_state the instance has a pending task (resuming). Skip.
Nov 25 17:08:50 compute-0 nova_compute[254092]: 2025-11-25 17:08:50.411 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090530.3584712, 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:08:50 compute-0 nova_compute[254092]: 2025-11-25 17:08:50.411 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] VM Resumed (Lifecycle Event)
Nov 25 17:08:50 compute-0 nova_compute[254092]: 2025-11-25 17:08:50.412 254096 DEBUG nova.virt.libvirt.guest [None req-0bec9add-d31c-445c-a34f-855cd855182e c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Nov 25 17:08:50 compute-0 nova_compute[254092]: 2025-11-25 17:08:50.412 254096 DEBUG nova.compute.manager [None req-0bec9add-d31c-445c-a34f-855cd855182e c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:08:50 compute-0 nova_compute[254092]: 2025-11-25 17:08:50.431 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:08:50 compute-0 nova_compute[254092]: 2025-11-25 17:08:50.437 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:08:50 compute-0 nova_compute[254092]: 2025-11-25 17:08:50.461 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] During sync_power_state the instance has a pending task (resuming). Skip.
Nov 25 17:08:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2559: 321 pgs: 321 active+clean; 103 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 365 KiB/s wr, 90 op/s
Nov 25 17:08:50 compute-0 nova_compute[254092]: 2025-11-25 17:08:50.501 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:08:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:08:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:08:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:08:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00041556527560721496 of space, bias 1.0, pg target 0.12466958268216449 quantized to 32 (current 32)
Nov 25 17:08:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:08:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:08:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:08:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:08:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:08:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 17:08:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:08:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:08:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:08:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:08:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:08:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:08:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:08:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:08:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:08:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:08:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:08:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:08:51 compute-0 podman[396418]: 2025-11-25 17:08:51.649143436 +0000 UTC m=+0.060882621 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 17:08:51 compute-0 podman[396417]: 2025-11-25 17:08:51.649386772 +0000 UTC m=+0.060913162 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:08:51 compute-0 podman[396419]: 2025-11-25 17:08:51.685187555 +0000 UTC m=+0.079894833 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 17:08:51 compute-0 nova_compute[254092]: 2025-11-25 17:08:51.865 254096 DEBUG nova.compute.manager [req-3ca7861a-0750-409d-b644-2018529a2852 req-643082ab-2e8f-490f-a633-d8d9da3b011b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received event network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:08:51 compute-0 nova_compute[254092]: 2025-11-25 17:08:51.866 254096 DEBUG oslo_concurrency.lockutils [req-3ca7861a-0750-409d-b644-2018529a2852 req-643082ab-2e8f-490f-a633-d8d9da3b011b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:08:51 compute-0 nova_compute[254092]: 2025-11-25 17:08:51.866 254096 DEBUG oslo_concurrency.lockutils [req-3ca7861a-0750-409d-b644-2018529a2852 req-643082ab-2e8f-490f-a633-d8d9da3b011b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:08:51 compute-0 nova_compute[254092]: 2025-11-25 17:08:51.866 254096 DEBUG oslo_concurrency.lockutils [req-3ca7861a-0750-409d-b644-2018529a2852 req-643082ab-2e8f-490f-a633-d8d9da3b011b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:51 compute-0 nova_compute[254092]: 2025-11-25 17:08:51.866 254096 DEBUG nova.compute.manager [req-3ca7861a-0750-409d-b644-2018529a2852 req-643082ab-2e8f-490f-a633-d8d9da3b011b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] No waiting events found dispatching network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:08:51 compute-0 nova_compute[254092]: 2025-11-25 17:08:51.867 254096 WARNING nova.compute.manager [req-3ca7861a-0750-409d-b644-2018529a2852 req-643082ab-2e8f-490f-a633-d8d9da3b011b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received unexpected event network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 for instance with vm_state active and task_state None.
Nov 25 17:08:52 compute-0 ceph-mon[74985]: pgmap v2559: 321 pgs: 321 active+clean; 103 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 365 KiB/s wr, 90 op/s
Nov 25 17:08:52 compute-0 nova_compute[254092]: 2025-11-25 17:08:52.375 254096 DEBUG nova.network.neutron [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Successfully updated port: 9b06b5b4-bc07-48ef-b51a-ecf0abc558ab _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:08:52 compute-0 nova_compute[254092]: 2025-11-25 17:08:52.434 254096 DEBUG oslo_concurrency.lockutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "refresh_cache-e9a105a6-90a1-4e21-9296-61a55e2ceec3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:08:52 compute-0 nova_compute[254092]: 2025-11-25 17:08:52.435 254096 DEBUG oslo_concurrency.lockutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquired lock "refresh_cache-e9a105a6-90a1-4e21-9296-61a55e2ceec3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:08:52 compute-0 nova_compute[254092]: 2025-11-25 17:08:52.435 254096 DEBUG nova.network.neutron [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 17:08:52 compute-0 nova_compute[254092]: 2025-11-25 17:08:52.447 254096 DEBUG nova.compute.manager [req-3c87c2a2-e7c9-45ff-a085-cde80d80f50a req-b01434c3-6724-4251-8d89-789febfd3396 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Received event network-changed-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:08:52 compute-0 nova_compute[254092]: 2025-11-25 17:08:52.448 254096 DEBUG nova.compute.manager [req-3c87c2a2-e7c9-45ff-a085-cde80d80f50a req-b01434c3-6724-4251-8d89-789febfd3396 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Refreshing instance network info cache due to event network-changed-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:08:52 compute-0 nova_compute[254092]: 2025-11-25 17:08:52.448 254096 DEBUG oslo_concurrency.lockutils [req-3c87c2a2-e7c9-45ff-a085-cde80d80f50a req-b01434c3-6724-4251-8d89-789febfd3396 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-e9a105a6-90a1-4e21-9296-61a55e2ceec3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:08:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2560: 321 pgs: 321 active+clean; 134 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 86 op/s
Nov 25 17:08:52 compute-0 nova_compute[254092]: 2025-11-25 17:08:52.573 254096 DEBUG nova.network.neutron [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:08:53 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:08:54 compute-0 ceph-mon[74985]: pgmap v2560: 321 pgs: 321 active+clean; 134 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 86 op/s
Nov 25 17:08:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2561: 321 pgs: 321 active+clean; 134 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 25 17:08:54 compute-0 nova_compute[254092]: 2025-11-25 17:08:54.689 254096 DEBUG nova.network.neutron [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Updating instance_info_cache with network_info: [{"id": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "address": "fa:16:3e:e4:f0:6f", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b06b5b4-bc", "ovs_interfaceid": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:08:54 compute-0 nova_compute[254092]: 2025-11-25 17:08:54.732 254096 DEBUG oslo_concurrency.lockutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Releasing lock "refresh_cache-e9a105a6-90a1-4e21-9296-61a55e2ceec3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:08:54 compute-0 nova_compute[254092]: 2025-11-25 17:08:54.733 254096 DEBUG nova.compute.manager [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Instance network_info: |[{"id": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "address": "fa:16:3e:e4:f0:6f", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b06b5b4-bc", "ovs_interfaceid": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 17:08:54 compute-0 nova_compute[254092]: 2025-11-25 17:08:54.733 254096 DEBUG oslo_concurrency.lockutils [req-3c87c2a2-e7c9-45ff-a085-cde80d80f50a req-b01434c3-6724-4251-8d89-789febfd3396 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-e9a105a6-90a1-4e21-9296-61a55e2ceec3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:08:54 compute-0 nova_compute[254092]: 2025-11-25 17:08:54.733 254096 DEBUG nova.network.neutron [req-3c87c2a2-e7c9-45ff-a085-cde80d80f50a req-b01434c3-6724-4251-8d89-789febfd3396 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Refreshing network info cache for port 9b06b5b4-bc07-48ef-b51a-ecf0abc558ab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:08:54 compute-0 nova_compute[254092]: 2025-11-25 17:08:54.736 254096 DEBUG nova.virt.libvirt.driver [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Start _get_guest_xml network_info=[{"id": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "address": "fa:16:3e:e4:f0:6f", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b06b5b4-bc", "ovs_interfaceid": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 17:08:54 compute-0 nova_compute[254092]: 2025-11-25 17:08:54.743 254096 DEBUG nova.compute.manager [req-cd5c0f79-89f0-4bad-b694-e15d6aead532 req-1d21848b-6752-450e-aa25-003b610ff535 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received event network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:08:54 compute-0 nova_compute[254092]: 2025-11-25 17:08:54.743 254096 DEBUG oslo_concurrency.lockutils [req-cd5c0f79-89f0-4bad-b694-e15d6aead532 req-1d21848b-6752-450e-aa25-003b610ff535 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:08:54 compute-0 nova_compute[254092]: 2025-11-25 17:08:54.743 254096 DEBUG oslo_concurrency.lockutils [req-cd5c0f79-89f0-4bad-b694-e15d6aead532 req-1d21848b-6752-450e-aa25-003b610ff535 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:08:54 compute-0 nova_compute[254092]: 2025-11-25 17:08:54.744 254096 DEBUG oslo_concurrency.lockutils [req-cd5c0f79-89f0-4bad-b694-e15d6aead532 req-1d21848b-6752-450e-aa25-003b610ff535 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:54 compute-0 nova_compute[254092]: 2025-11-25 17:08:54.744 254096 DEBUG nova.compute.manager [req-cd5c0f79-89f0-4bad-b694-e15d6aead532 req-1d21848b-6752-450e-aa25-003b610ff535 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] No waiting events found dispatching network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:08:54 compute-0 nova_compute[254092]: 2025-11-25 17:08:54.744 254096 WARNING nova.compute.manager [req-cd5c0f79-89f0-4bad-b694-e15d6aead532 req-1d21848b-6752-450e-aa25-003b610ff535 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received unexpected event network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 for instance with vm_state active and task_state None.
Nov 25 17:08:54 compute-0 nova_compute[254092]: 2025-11-25 17:08:54.745 254096 WARNING nova.virt.libvirt.driver [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:08:54 compute-0 nova_compute[254092]: 2025-11-25 17:08:54.749 254096 DEBUG nova.virt.libvirt.host [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 17:08:54 compute-0 nova_compute[254092]: 2025-11-25 17:08:54.749 254096 DEBUG nova.virt.libvirt.host [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 17:08:54 compute-0 nova_compute[254092]: 2025-11-25 17:08:54.752 254096 DEBUG nova.virt.libvirt.host [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 17:08:54 compute-0 nova_compute[254092]: 2025-11-25 17:08:54.753 254096 DEBUG nova.virt.libvirt.host [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 17:08:54 compute-0 nova_compute[254092]: 2025-11-25 17:08:54.753 254096 DEBUG nova.virt.libvirt.driver [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 17:08:54 compute-0 nova_compute[254092]: 2025-11-25 17:08:54.753 254096 DEBUG nova.virt.hardware [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 17:08:54 compute-0 nova_compute[254092]: 2025-11-25 17:08:54.754 254096 DEBUG nova.virt.hardware [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 17:08:54 compute-0 nova_compute[254092]: 2025-11-25 17:08:54.754 254096 DEBUG nova.virt.hardware [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 17:08:54 compute-0 nova_compute[254092]: 2025-11-25 17:08:54.755 254096 DEBUG nova.virt.hardware [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 17:08:54 compute-0 nova_compute[254092]: 2025-11-25 17:08:54.755 254096 DEBUG nova.virt.hardware [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 17:08:54 compute-0 nova_compute[254092]: 2025-11-25 17:08:54.755 254096 DEBUG nova.virt.hardware [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 17:08:54 compute-0 nova_compute[254092]: 2025-11-25 17:08:54.755 254096 DEBUG nova.virt.hardware [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 17:08:54 compute-0 nova_compute[254092]: 2025-11-25 17:08:54.756 254096 DEBUG nova.virt.hardware [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 17:08:54 compute-0 nova_compute[254092]: 2025-11-25 17:08:54.756 254096 DEBUG nova.virt.hardware [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 17:08:54 compute-0 nova_compute[254092]: 2025-11-25 17:08:54.756 254096 DEBUG nova.virt.hardware [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 17:08:54 compute-0 nova_compute[254092]: 2025-11-25 17:08:54.757 254096 DEBUG nova.virt.hardware [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 17:08:54 compute-0 nova_compute[254092]: 2025-11-25 17:08:54.760 254096 DEBUG oslo_concurrency.processutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:08:54 compute-0 nova_compute[254092]: 2025-11-25 17:08:54.842 254096 DEBUG oslo_concurrency.lockutils [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Acquiring lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:08:54 compute-0 nova_compute[254092]: 2025-11-25 17:08:54.843 254096 DEBUG oslo_concurrency.lockutils [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:08:54 compute-0 nova_compute[254092]: 2025-11-25 17:08:54.843 254096 DEBUG oslo_concurrency.lockutils [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Acquiring lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:08:54 compute-0 nova_compute[254092]: 2025-11-25 17:08:54.843 254096 DEBUG oslo_concurrency.lockutils [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:08:54 compute-0 nova_compute[254092]: 2025-11-25 17:08:54.843 254096 DEBUG oslo_concurrency.lockutils [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:54 compute-0 nova_compute[254092]: 2025-11-25 17:08:54.845 254096 INFO nova.compute.manager [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Terminating instance
Nov 25 17:08:54 compute-0 nova_compute[254092]: 2025-11-25 17:08:54.846 254096 DEBUG nova.compute.manager [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 17:08:54 compute-0 kernel: tapcd0a47c2-54 (unregistering): left promiscuous mode
Nov 25 17:08:54 compute-0 NetworkManager[48891]: <info>  [1764090534.8820] device (tapcd0a47c2-54): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:08:54 compute-0 ovn_controller[153477]: 2025-11-25T17:08:54Z|01362|binding|INFO|Releasing lport cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 from this chassis (sb_readonly=0)
Nov 25 17:08:54 compute-0 ovn_controller[153477]: 2025-11-25T17:08:54Z|01363|binding|INFO|Setting lport cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 down in Southbound
Nov 25 17:08:54 compute-0 ovn_controller[153477]: 2025-11-25T17:08:54Z|01364|binding|INFO|Removing iface tapcd0a47c2-54 ovn-installed in OVS
Nov 25 17:08:54 compute-0 nova_compute[254092]: 2025-11-25 17:08:54.891 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:54.898 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:ef:7f 10.100.0.3'], port_security=['fa:16:3e:37:ef:7f 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8dfb4b8d-30c5-414d-98f6-1f788dbae8c0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d8d30c9803a4a3fa7d9179a85cf828e', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'ab0cb237-2b37-42b4-9676-2ced940d4952', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4a73e907-7a5d-4e02-85a2-a26af225c516, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=cd0a47c2-5422-4bcf-a714-1ecdc3db9e87) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:08:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:54.899 163338 INFO neutron.agent.ovn.metadata.agent [-] Port cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 in datapath a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b unbound from our chassis
Nov 25 17:08:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:54.900 163338 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 25 17:08:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:54.901 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8d26f984-3719-460e-8220-a0abf31c0933]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:54 compute-0 nova_compute[254092]: 2025-11-25 17:08:54.916 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:54 compute-0 systemd[1]: machine-qemu\x2d164\x2dinstance\x2d00000082.scope: Deactivated successfully.
Nov 25 17:08:54 compute-0 systemd[1]: machine-qemu\x2d164\x2dinstance\x2d00000082.scope: Consumed 4.941s CPU time.
Nov 25 17:08:54 compute-0 systemd-machined[216343]: Machine qemu-164-instance-00000082 terminated.
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.089 254096 INFO nova.virt.libvirt.driver [-] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Instance destroyed successfully.
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.090 254096 DEBUG nova.objects.instance [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lazy-loading 'resources' on Instance uuid 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.104 254096 DEBUG nova.virt.libvirt.vif [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:08:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1287608653',display_name='tempest-TestServerAdvancedOps-server-1287608653',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1287608653',id=130,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:08:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6d8d30c9803a4a3fa7d9179a85cf828e',ramdisk_id='',reservation_id='r-lv49apwo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerAdvancedOps-343246187',owner_user_name='tempest-TestServerAdvancedOps-343246187-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:08:50Z,user_data=None,user_id='c10f7baf67a8412caf2428d3200f851d',uuid=8dfb4b8d-30c5-414d-98f6-1f788dbae8c0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "address": "fa:16:3e:37:ef:7f", "network": {"id": "a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1349202703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6d8d30c9803a4a3fa7d9179a85cf828e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd0a47c2-54", "ovs_interfaceid": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.105 254096 DEBUG nova.network.os_vif_util [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Converting VIF {"id": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "address": "fa:16:3e:37:ef:7f", "network": {"id": "a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1349202703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6d8d30c9803a4a3fa7d9179a85cf828e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd0a47c2-54", "ovs_interfaceid": "cd0a47c2-5422-4bcf-a714-1ecdc3db9e87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.106 254096 DEBUG nova.network.os_vif_util [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:ef:7f,bridge_name='br-int',has_traffic_filtering=True,id=cd0a47c2-5422-4bcf-a714-1ecdc3db9e87,network=Network(a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd0a47c2-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.106 254096 DEBUG os_vif [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:ef:7f,bridge_name='br-int',has_traffic_filtering=True,id=cd0a47c2-5422-4bcf-a714-1ecdc3db9e87,network=Network(a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd0a47c2-54') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.109 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.110 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcd0a47c2-54, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.114 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.116 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.119 254096 INFO os_vif [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:ef:7f,bridge_name='br-int',has_traffic_filtering=True,id=cd0a47c2-5422-4bcf-a714-1ecdc3db9e87,network=Network(a4464ea7-d9ca-4c67-acfd-1e06ae9e5a4b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd0a47c2-54')
Nov 25 17:08:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:08:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1410214177' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.213 254096 DEBUG oslo_concurrency.processutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.235 254096 DEBUG nova.storage.rbd_utils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image e9a105a6-90a1-4e21-9296-61a55e2ceec3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.240 254096 DEBUG oslo_concurrency.processutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:08:55 compute-0 ceph-mon[74985]: pgmap v2561: 321 pgs: 321 active+clean; 134 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Nov 25 17:08:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1410214177' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:08:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:08:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4262876514' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:08:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:08:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4262876514' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.466 254096 INFO nova.virt.libvirt.driver [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Deleting instance files /var/lib/nova/instances/8dfb4b8d-30c5-414d-98f6-1f788dbae8c0_del
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.467 254096 INFO nova.virt.libvirt.driver [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Deletion of /var/lib/nova/instances/8dfb4b8d-30c5-414d-98f6-1f788dbae8c0_del complete
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.501 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.545 254096 INFO nova.compute.manager [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Took 0.70 seconds to destroy the instance on the hypervisor.
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.546 254096 DEBUG oslo.service.loopingcall [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.547 254096 DEBUG nova.compute.manager [-] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.547 254096 DEBUG nova.network.neutron [-] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 17:08:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:08:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1041196014' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.701 254096 DEBUG oslo_concurrency.processutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.705 254096 DEBUG nova.virt.libvirt.vif [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:08:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-367110792',display_name='tempest-TestNetworkBasicOps-server-367110792',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-367110792',id=131,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDN+GWJbAZWOIdymw+cz2oLcyndkUPhl6vtYaMAR6OngHd8OlebpGiETQxsMoSybRqEA+rFizH3rxBjAI6Jko4gUoKJ0EE0bXq9XY/gGruR3mEMNu5mTsv7YmUDww+bvsg==',key_name='tempest-TestNetworkBasicOps-2139833866',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-3zuncmiv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:08:48Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=e9a105a6-90a1-4e21-9296-61a55e2ceec3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "address": "fa:16:3e:e4:f0:6f", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b06b5b4-bc", "ovs_interfaceid": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.705 254096 DEBUG nova.network.os_vif_util [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "address": "fa:16:3e:e4:f0:6f", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b06b5b4-bc", "ovs_interfaceid": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.707 254096 DEBUG nova.network.os_vif_util [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:f0:6f,bridge_name='br-int',has_traffic_filtering=True,id=9b06b5b4-bc07-48ef-b51a-ecf0abc558ab,network=Network(877a4e79-06f0-432b-a5f9-1a0277ccd412),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b06b5b4-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.709 254096 DEBUG nova.objects.instance [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'pci_devices' on Instance uuid e9a105a6-90a1-4e21-9296-61a55e2ceec3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.724 254096 DEBUG nova.virt.libvirt.driver [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] End _get_guest_xml xml=<domain type="kvm">
Nov 25 17:08:55 compute-0 nova_compute[254092]:   <uuid>e9a105a6-90a1-4e21-9296-61a55e2ceec3</uuid>
Nov 25 17:08:55 compute-0 nova_compute[254092]:   <name>instance-00000083</name>
Nov 25 17:08:55 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 17:08:55 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 17:08:55 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:08:55 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:       <nova:name>tempest-TestNetworkBasicOps-server-367110792</nova:name>
Nov 25 17:08:55 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 17:08:54</nova:creationTime>
Nov 25 17:08:55 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 17:08:55 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 17:08:55 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 17:08:55 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 17:08:55 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:08:55 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 17:08:55 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 17:08:55 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 17:08:55 compute-0 nova_compute[254092]:         <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 17:08:55 compute-0 nova_compute[254092]:         <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 17:08:55 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 17:08:55 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 17:08:55 compute-0 nova_compute[254092]:         <nova:port uuid="9b06b5b4-bc07-48ef-b51a-ecf0abc558ab">
Nov 25 17:08:55 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 17:08:55 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 17:08:55 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:08:55 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 17:08:55 compute-0 nova_compute[254092]:     <system>
Nov 25 17:08:55 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 17:08:55 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 17:08:55 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:08:55 compute-0 nova_compute[254092]:       <entry name="serial">e9a105a6-90a1-4e21-9296-61a55e2ceec3</entry>
Nov 25 17:08:55 compute-0 nova_compute[254092]:       <entry name="uuid">e9a105a6-90a1-4e21-9296-61a55e2ceec3</entry>
Nov 25 17:08:55 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     </system>
Nov 25 17:08:55 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:08:55 compute-0 nova_compute[254092]:   <os>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:   </os>
Nov 25 17:08:55 compute-0 nova_compute[254092]:   <features>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:   </features>
Nov 25 17:08:55 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 17:08:55 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:08:55 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 17:08:55 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:08:55 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 17:08:55 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/e9a105a6-90a1-4e21-9296-61a55e2ceec3_disk">
Nov 25 17:08:55 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:       </source>
Nov 25 17:08:55 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:08:55 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:08:55 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 17:08:55 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/e9a105a6-90a1-4e21-9296-61a55e2ceec3_disk.config">
Nov 25 17:08:55 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:       </source>
Nov 25 17:08:55 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:08:55 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:08:55 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 17:08:55 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:e4:f0:6f"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:       <target dev="tap9b06b5b4-bc"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 17:08:55 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/e9a105a6-90a1-4e21-9296-61a55e2ceec3/console.log" append="off"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     <video>
Nov 25 17:08:55 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     </video>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 17:08:55 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 17:08:55 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 17:08:55 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:08:55 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:08:55 compute-0 nova_compute[254092]: </domain>
Nov 25 17:08:55 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.726 254096 DEBUG nova.compute.manager [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Preparing to wait for external event network-vif-plugged-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.727 254096 DEBUG oslo_concurrency.lockutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.727 254096 DEBUG oslo_concurrency.lockutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.727 254096 DEBUG oslo_concurrency.lockutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.728 254096 DEBUG nova.virt.libvirt.vif [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:08:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-367110792',display_name='tempest-TestNetworkBasicOps-server-367110792',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-367110792',id=131,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDN+GWJbAZWOIdymw+cz2oLcyndkUPhl6vtYaMAR6OngHd8OlebpGiETQxsMoSybRqEA+rFizH3rxBjAI6Jko4gUoKJ0EE0bXq9XY/gGruR3mEMNu5mTsv7YmUDww+bvsg==',key_name='tempest-TestNetworkBasicOps-2139833866',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-3zuncmiv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:08:48Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=e9a105a6-90a1-4e21-9296-61a55e2ceec3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "address": "fa:16:3e:e4:f0:6f", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b06b5b4-bc", "ovs_interfaceid": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.729 254096 DEBUG nova.network.os_vif_util [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "address": "fa:16:3e:e4:f0:6f", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b06b5b4-bc", "ovs_interfaceid": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.730 254096 DEBUG nova.network.os_vif_util [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:f0:6f,bridge_name='br-int',has_traffic_filtering=True,id=9b06b5b4-bc07-48ef-b51a-ecf0abc558ab,network=Network(877a4e79-06f0-432b-a5f9-1a0277ccd412),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b06b5b4-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.730 254096 DEBUG os_vif [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:f0:6f,bridge_name='br-int',has_traffic_filtering=True,id=9b06b5b4-bc07-48ef-b51a-ecf0abc558ab,network=Network(877a4e79-06f0-432b-a5f9-1a0277ccd412),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b06b5b4-bc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.731 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.731 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.732 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.735 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.736 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9b06b5b4-bc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.737 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9b06b5b4-bc, col_values=(('external_ids', {'iface-id': '9b06b5b4-bc07-48ef-b51a-ecf0abc558ab', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e4:f0:6f', 'vm-uuid': 'e9a105a6-90a1-4e21-9296-61a55e2ceec3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.739 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:55 compute-0 NetworkManager[48891]: <info>  [1764090535.7399] manager: (tap9b06b5b4-bc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/557)
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.742 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.748 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.749 254096 INFO os_vif [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:f0:6f,bridge_name='br-int',has_traffic_filtering=True,id=9b06b5b4-bc07-48ef-b51a-ecf0abc558ab,network=Network(877a4e79-06f0-432b-a5f9-1a0277ccd412),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b06b5b4-bc')
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.812 254096 DEBUG nova.virt.libvirt.driver [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.813 254096 DEBUG nova.virt.libvirt.driver [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.814 254096 DEBUG nova.virt.libvirt.driver [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No VIF found with MAC fa:16:3e:e4:f0:6f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.815 254096 INFO nova.virt.libvirt.driver [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Using config drive
Nov 25 17:08:55 compute-0 nova_compute[254092]: 2025-11-25 17:08:55.845 254096 DEBUG nova.storage.rbd_utils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image e9a105a6-90a1-4e21-9296-61a55e2ceec3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:08:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/4262876514' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:08:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/4262876514' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:08:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1041196014' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:08:56 compute-0 nova_compute[254092]: 2025-11-25 17:08:56.442 254096 INFO nova.virt.libvirt.driver [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Creating config drive at /var/lib/nova/instances/e9a105a6-90a1-4e21-9296-61a55e2ceec3/disk.config
Nov 25 17:08:56 compute-0 nova_compute[254092]: 2025-11-25 17:08:56.452 254096 DEBUG oslo_concurrency.processutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e9a105a6-90a1-4e21-9296-61a55e2ceec3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkmr0db8u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:08:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2562: 321 pgs: 321 active+clean; 99 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 50 op/s
Nov 25 17:08:56 compute-0 nova_compute[254092]: 2025-11-25 17:08:56.519 254096 DEBUG nova.network.neutron [-] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:08:56 compute-0 nova_compute[254092]: 2025-11-25 17:08:56.533 254096 INFO nova.compute.manager [-] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Took 0.99 seconds to deallocate network for instance.
Nov 25 17:08:56 compute-0 nova_compute[254092]: 2025-11-25 17:08:56.577 254096 DEBUG oslo_concurrency.lockutils [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:08:56 compute-0 nova_compute[254092]: 2025-11-25 17:08:56.578 254096 DEBUG oslo_concurrency.lockutils [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:08:56 compute-0 nova_compute[254092]: 2025-11-25 17:08:56.611 254096 DEBUG oslo_concurrency.processutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e9a105a6-90a1-4e21-9296-61a55e2ceec3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkmr0db8u" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:08:56 compute-0 nova_compute[254092]: 2025-11-25 17:08:56.642 254096 DEBUG nova.storage.rbd_utils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image e9a105a6-90a1-4e21-9296-61a55e2ceec3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:08:56 compute-0 nova_compute[254092]: 2025-11-25 17:08:56.646 254096 DEBUG oslo_concurrency.processutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e9a105a6-90a1-4e21-9296-61a55e2ceec3/disk.config e9a105a6-90a1-4e21-9296-61a55e2ceec3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:08:56 compute-0 nova_compute[254092]: 2025-11-25 17:08:56.683 254096 DEBUG nova.network.neutron [req-3c87c2a2-e7c9-45ff-a085-cde80d80f50a req-b01434c3-6724-4251-8d89-789febfd3396 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Updated VIF entry in instance network info cache for port 9b06b5b4-bc07-48ef-b51a-ecf0abc558ab. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:08:56 compute-0 nova_compute[254092]: 2025-11-25 17:08:56.684 254096 DEBUG nova.network.neutron [req-3c87c2a2-e7c9-45ff-a085-cde80d80f50a req-b01434c3-6724-4251-8d89-789febfd3396 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Updating instance_info_cache with network_info: [{"id": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "address": "fa:16:3e:e4:f0:6f", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b06b5b4-bc", "ovs_interfaceid": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:08:56 compute-0 nova_compute[254092]: 2025-11-25 17:08:56.705 254096 DEBUG oslo_concurrency.lockutils [req-3c87c2a2-e7c9-45ff-a085-cde80d80f50a req-b01434c3-6724-4251-8d89-789febfd3396 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-e9a105a6-90a1-4e21-9296-61a55e2ceec3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:08:56 compute-0 nova_compute[254092]: 2025-11-25 17:08:56.716 254096 DEBUG oslo_concurrency.processutils [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:08:56 compute-0 nova_compute[254092]: 2025-11-25 17:08:56.822 254096 DEBUG oslo_concurrency.processutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e9a105a6-90a1-4e21-9296-61a55e2ceec3/disk.config e9a105a6-90a1-4e21-9296-61a55e2ceec3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:08:56 compute-0 nova_compute[254092]: 2025-11-25 17:08:56.823 254096 INFO nova.virt.libvirt.driver [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Deleting local config drive /var/lib/nova/instances/e9a105a6-90a1-4e21-9296-61a55e2ceec3/disk.config because it was imported into RBD.
Nov 25 17:08:56 compute-0 nova_compute[254092]: 2025-11-25 17:08:56.830 254096 DEBUG nova.compute.manager [req-ad44d002-c159-4cad-8524-a7b574632655 req-6cabe628-6423-4573-84aa-89fb447b52fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received event network-vif-unplugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:08:56 compute-0 nova_compute[254092]: 2025-11-25 17:08:56.830 254096 DEBUG oslo_concurrency.lockutils [req-ad44d002-c159-4cad-8524-a7b574632655 req-6cabe628-6423-4573-84aa-89fb447b52fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:08:56 compute-0 nova_compute[254092]: 2025-11-25 17:08:56.830 254096 DEBUG oslo_concurrency.lockutils [req-ad44d002-c159-4cad-8524-a7b574632655 req-6cabe628-6423-4573-84aa-89fb447b52fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:08:56 compute-0 nova_compute[254092]: 2025-11-25 17:08:56.831 254096 DEBUG oslo_concurrency.lockutils [req-ad44d002-c159-4cad-8524-a7b574632655 req-6cabe628-6423-4573-84aa-89fb447b52fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:56 compute-0 nova_compute[254092]: 2025-11-25 17:08:56.831 254096 DEBUG nova.compute.manager [req-ad44d002-c159-4cad-8524-a7b574632655 req-6cabe628-6423-4573-84aa-89fb447b52fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] No waiting events found dispatching network-vif-unplugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:08:56 compute-0 nova_compute[254092]: 2025-11-25 17:08:56.831 254096 WARNING nova.compute.manager [req-ad44d002-c159-4cad-8524-a7b574632655 req-6cabe628-6423-4573-84aa-89fb447b52fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received unexpected event network-vif-unplugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 for instance with vm_state deleted and task_state None.
Nov 25 17:08:56 compute-0 nova_compute[254092]: 2025-11-25 17:08:56.832 254096 DEBUG nova.compute.manager [req-ad44d002-c159-4cad-8524-a7b574632655 req-6cabe628-6423-4573-84aa-89fb447b52fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received event network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:08:56 compute-0 nova_compute[254092]: 2025-11-25 17:08:56.833 254096 DEBUG oslo_concurrency.lockutils [req-ad44d002-c159-4cad-8524-a7b574632655 req-6cabe628-6423-4573-84aa-89fb447b52fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:08:56 compute-0 nova_compute[254092]: 2025-11-25 17:08:56.833 254096 DEBUG oslo_concurrency.lockutils [req-ad44d002-c159-4cad-8524-a7b574632655 req-6cabe628-6423-4573-84aa-89fb447b52fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:08:56 compute-0 nova_compute[254092]: 2025-11-25 17:08:56.834 254096 DEBUG oslo_concurrency.lockutils [req-ad44d002-c159-4cad-8524-a7b574632655 req-6cabe628-6423-4573-84aa-89fb447b52fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:56 compute-0 nova_compute[254092]: 2025-11-25 17:08:56.834 254096 DEBUG nova.compute.manager [req-ad44d002-c159-4cad-8524-a7b574632655 req-6cabe628-6423-4573-84aa-89fb447b52fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] No waiting events found dispatching network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:08:56 compute-0 nova_compute[254092]: 2025-11-25 17:08:56.834 254096 WARNING nova.compute.manager [req-ad44d002-c159-4cad-8524-a7b574632655 req-6cabe628-6423-4573-84aa-89fb447b52fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received unexpected event network-vif-plugged-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 for instance with vm_state deleted and task_state None.
Nov 25 17:08:56 compute-0 nova_compute[254092]: 2025-11-25 17:08:56.834 254096 DEBUG nova.compute.manager [req-ad44d002-c159-4cad-8524-a7b574632655 req-6cabe628-6423-4573-84aa-89fb447b52fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Received event network-vif-deleted-cd0a47c2-5422-4bcf-a714-1ecdc3db9e87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:08:56 compute-0 kernel: tap9b06b5b4-bc: entered promiscuous mode
Nov 25 17:08:56 compute-0 NetworkManager[48891]: <info>  [1764090536.8749] manager: (tap9b06b5b4-bc): new Tun device (/org/freedesktop/NetworkManager/Devices/558)
Nov 25 17:08:56 compute-0 systemd-udevd[396502]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:08:56 compute-0 ovn_controller[153477]: 2025-11-25T17:08:56Z|01365|binding|INFO|Claiming lport 9b06b5b4-bc07-48ef-b51a-ecf0abc558ab for this chassis.
Nov 25 17:08:56 compute-0 ovn_controller[153477]: 2025-11-25T17:08:56Z|01366|binding|INFO|9b06b5b4-bc07-48ef-b51a-ecf0abc558ab: Claiming fa:16:3e:e4:f0:6f 10.100.0.3
Nov 25 17:08:56 compute-0 nova_compute[254092]: 2025-11-25 17:08:56.879 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:56 compute-0 NetworkManager[48891]: <info>  [1764090536.8867] device (tap9b06b5b4-bc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:08:56 compute-0 NetworkManager[48891]: <info>  [1764090536.8879] device (tap9b06b5b4-bc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:08:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:56.893 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:f0:6f 10.100.0.3'], port_security=['fa:16:3e:e4:f0:6f 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'e9a105a6-90a1-4e21-9296-61a55e2ceec3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-877a4e79-06f0-432b-a5f9-1a0277ccd412', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f3c9a6b1-fb73-4f95-84e0-2b0bae619305', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cb339b20-1b85-426d-99a4-46bcf16ea0e7, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=9b06b5b4-bc07-48ef-b51a-ecf0abc558ab) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:08:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:56.894 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 9b06b5b4-bc07-48ef-b51a-ecf0abc558ab in datapath 877a4e79-06f0-432b-a5f9-1a0277ccd412 bound to our chassis
Nov 25 17:08:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:56.895 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 877a4e79-06f0-432b-a5f9-1a0277ccd412
Nov 25 17:08:56 compute-0 systemd-machined[216343]: New machine qemu-165-instance-00000083.
Nov 25 17:08:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:56.911 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e31ec9e7-00f0-4653-9a37-7c4445f14143]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:56.912 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap877a4e79-01 in ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 17:08:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:56.913 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap877a4e79-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 17:08:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:56.913 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[649a08bf-2bd1-4167-b83d-45b2f445b1f2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:56.914 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[dbe1fba2-04a7-4ecb-a03e-9d1c7d7ff73b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:56.929 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[1857684b-3e13-413a-a344-344f81173a12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:56 compute-0 systemd[1]: Started Virtual Machine qemu-165-instance-00000083.
Nov 25 17:08:56 compute-0 nova_compute[254092]: 2025-11-25 17:08:56.947 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:56 compute-0 ovn_controller[153477]: 2025-11-25T17:08:56Z|01367|binding|INFO|Setting lport 9b06b5b4-bc07-48ef-b51a-ecf0abc558ab ovn-installed in OVS
Nov 25 17:08:56 compute-0 ovn_controller[153477]: 2025-11-25T17:08:56Z|01368|binding|INFO|Setting lport 9b06b5b4-bc07-48ef-b51a-ecf0abc558ab up in Southbound
Nov 25 17:08:56 compute-0 nova_compute[254092]: 2025-11-25 17:08:56.955 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:56.954 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[112cbc39-ea70-4b18-80aa-4271f161c328]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:56.982 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f99b5aad-9a8e-41eb-8036-0fa5195c83e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:56 compute-0 NetworkManager[48891]: <info>  [1764090536.9894] manager: (tap877a4e79-00): new Veth device (/org/freedesktop/NetworkManager/Devices/559)
Nov 25 17:08:56 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:56.990 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[75504731-3a97-4a41-944e-5c92d8b08802]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:57.030 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[20a7c52b-d1a4-44a0-ae62-818e8ac9c53d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:57.034 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a3641001-5364-4a91-87aa-97d13befb1c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:57 compute-0 NetworkManager[48891]: <info>  [1764090537.0574] device (tap877a4e79-00): carrier: link connected
Nov 25 17:08:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:57.062 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[2143caef-8caa-4b5f-b435-c97fead5b830]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:57.078 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b65772d7-ad85-45c4-b06e-0b096bf50209]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap877a4e79-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:16:39'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 400], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699464, 'reachable_time': 30269, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 396700, 'error': None, 'target': 'ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:57.099 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[37c71797-4720-401c-aa1a-fd517930675e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb4:1639'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 699464, 'tstamp': 699464}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 396701, 'error': None, 'target': 'ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:57.119 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[50106787-fd44-474e-8b18-729fee308bed]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap877a4e79-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:16:39'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 400], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699464, 'reachable_time': 30269, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 396702, 'error': None, 'target': 'ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:57.153 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bdb50713-abb1-4678-8757-3af5fb5f1579]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:08:57 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3996582494' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:08:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:57.209 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[faa84eda-7825-469e-9b0a-6d54cf26d2f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:57.212 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap877a4e79-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:08:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:57.213 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:08:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:57.213 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap877a4e79-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:08:57 compute-0 nova_compute[254092]: 2025-11-25 17:08:57.220 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:57 compute-0 NetworkManager[48891]: <info>  [1764090537.2212] manager: (tap877a4e79-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/560)
Nov 25 17:08:57 compute-0 kernel: tap877a4e79-00: entered promiscuous mode
Nov 25 17:08:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:57.227 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap877a4e79-00, col_values=(('external_ids', {'iface-id': '62f37f6b-9026-4c42-8bb8-f4b3e0610e3b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:08:57 compute-0 nova_compute[254092]: 2025-11-25 17:08:57.227 254096 DEBUG oslo_concurrency.processutils [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:08:57 compute-0 ovn_controller[153477]: 2025-11-25T17:08:57Z|01369|binding|INFO|Releasing lport 62f37f6b-9026-4c42-8bb8-f4b3e0610e3b from this chassis (sb_readonly=0)
Nov 25 17:08:57 compute-0 nova_compute[254092]: 2025-11-25 17:08:57.229 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:57 compute-0 nova_compute[254092]: 2025-11-25 17:08:57.237 254096 DEBUG nova.compute.provider_tree [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:08:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:57.256 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/877a4e79-06f0-432b-a5f9-1a0277ccd412.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/877a4e79-06f0-432b-a5f9-1a0277ccd412.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 17:08:57 compute-0 nova_compute[254092]: 2025-11-25 17:08:57.256 254096 DEBUG nova.scheduler.client.report [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:08:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:57.257 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[52287e79-17d3-4618-a5f9-ff047d183305]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:08:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:57.258 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 17:08:57 compute-0 ovn_metadata_agent[163333]: global
Nov 25 17:08:57 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 17:08:57 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-877a4e79-06f0-432b-a5f9-1a0277ccd412
Nov 25 17:08:57 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 17:08:57 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 17:08:57 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 17:08:57 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/877a4e79-06f0-432b-a5f9-1a0277ccd412.pid.haproxy
Nov 25 17:08:57 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 17:08:57 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:08:57 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 17:08:57 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 17:08:57 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 17:08:57 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 17:08:57 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 17:08:57 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 17:08:57 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 17:08:57 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 17:08:57 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 17:08:57 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 17:08:57 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 17:08:57 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 17:08:57 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 17:08:57 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:08:57 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:08:57 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 17:08:57 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 17:08:57 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 17:08:57 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 877a4e79-06f0-432b-a5f9-1a0277ccd412
Nov 25 17:08:57 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 17:08:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:08:57.259 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412', 'env', 'PROCESS_TAG=haproxy-877a4e79-06f0-432b-a5f9-1a0277ccd412', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/877a4e79-06f0-432b-a5f9-1a0277ccd412.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 17:08:57 compute-0 nova_compute[254092]: 2025-11-25 17:08:57.259 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:57 compute-0 nova_compute[254092]: 2025-11-25 17:08:57.277 254096 DEBUG oslo_concurrency.lockutils [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:57 compute-0 nova_compute[254092]: 2025-11-25 17:08:57.291 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090537.2908912, e9a105a6-90a1-4e21-9296-61a55e2ceec3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:08:57 compute-0 nova_compute[254092]: 2025-11-25 17:08:57.291 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] VM Started (Lifecycle Event)
Nov 25 17:08:57 compute-0 nova_compute[254092]: 2025-11-25 17:08:57.310 254096 DEBUG nova.compute.manager [req-e40d5c05-b078-49b5-93f7-74547c9cb40b req-33a381c3-c4c8-4859-9832-abe173786d78 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Received event network-vif-plugged-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:08:57 compute-0 nova_compute[254092]: 2025-11-25 17:08:57.311 254096 DEBUG oslo_concurrency.lockutils [req-e40d5c05-b078-49b5-93f7-74547c9cb40b req-33a381c3-c4c8-4859-9832-abe173786d78 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:08:57 compute-0 nova_compute[254092]: 2025-11-25 17:08:57.311 254096 DEBUG oslo_concurrency.lockutils [req-e40d5c05-b078-49b5-93f7-74547c9cb40b req-33a381c3-c4c8-4859-9832-abe173786d78 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:08:57 compute-0 nova_compute[254092]: 2025-11-25 17:08:57.311 254096 DEBUG oslo_concurrency.lockutils [req-e40d5c05-b078-49b5-93f7-74547c9cb40b req-33a381c3-c4c8-4859-9832-abe173786d78 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:57 compute-0 nova_compute[254092]: 2025-11-25 17:08:57.311 254096 DEBUG nova.compute.manager [req-e40d5c05-b078-49b5-93f7-74547c9cb40b req-33a381c3-c4c8-4859-9832-abe173786d78 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Processing event network-vif-plugged-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 17:08:57 compute-0 nova_compute[254092]: 2025-11-25 17:08:57.313 254096 INFO nova.scheduler.client.report [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Deleted allocations for instance 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0
Nov 25 17:08:57 compute-0 nova_compute[254092]: 2025-11-25 17:08:57.314 254096 DEBUG nova.compute.manager [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 17:08:57 compute-0 nova_compute[254092]: 2025-11-25 17:08:57.315 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:08:57 compute-0 ceph-mon[74985]: pgmap v2562: 321 pgs: 321 active+clean; 99 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 50 op/s
Nov 25 17:08:57 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3996582494' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:08:57 compute-0 nova_compute[254092]: 2025-11-25 17:08:57.320 254096 DEBUG nova.virt.libvirt.driver [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 17:08:57 compute-0 nova_compute[254092]: 2025-11-25 17:08:57.321 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:08:57 compute-0 nova_compute[254092]: 2025-11-25 17:08:57.329 254096 INFO nova.virt.libvirt.driver [-] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Instance spawned successfully.
Nov 25 17:08:57 compute-0 nova_compute[254092]: 2025-11-25 17:08:57.330 254096 DEBUG nova.virt.libvirt.driver [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 17:08:57 compute-0 nova_compute[254092]: 2025-11-25 17:08:57.376 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:08:57 compute-0 nova_compute[254092]: 2025-11-25 17:08:57.376 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090537.29106, e9a105a6-90a1-4e21-9296-61a55e2ceec3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:08:57 compute-0 nova_compute[254092]: 2025-11-25 17:08:57.376 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] VM Paused (Lifecycle Event)
Nov 25 17:08:57 compute-0 nova_compute[254092]: 2025-11-25 17:08:57.387 254096 DEBUG nova.virt.libvirt.driver [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:08:57 compute-0 nova_compute[254092]: 2025-11-25 17:08:57.387 254096 DEBUG nova.virt.libvirt.driver [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:08:57 compute-0 nova_compute[254092]: 2025-11-25 17:08:57.388 254096 DEBUG nova.virt.libvirt.driver [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:08:57 compute-0 nova_compute[254092]: 2025-11-25 17:08:57.388 254096 DEBUG nova.virt.libvirt.driver [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:08:57 compute-0 nova_compute[254092]: 2025-11-25 17:08:57.388 254096 DEBUG nova.virt.libvirt.driver [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:08:57 compute-0 nova_compute[254092]: 2025-11-25 17:08:57.388 254096 DEBUG nova.virt.libvirt.driver [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:08:57 compute-0 nova_compute[254092]: 2025-11-25 17:08:57.415 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:08:57 compute-0 nova_compute[254092]: 2025-11-25 17:08:57.419 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090537.3183403, e9a105a6-90a1-4e21-9296-61a55e2ceec3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:08:57 compute-0 nova_compute[254092]: 2025-11-25 17:08:57.419 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] VM Resumed (Lifecycle Event)
Nov 25 17:08:57 compute-0 nova_compute[254092]: 2025-11-25 17:08:57.429 254096 DEBUG oslo_concurrency.lockutils [None req-723765cc-57e4-41ae-a4c8-cb5de7d82751 c10f7baf67a8412caf2428d3200f851d 6d8d30c9803a4a3fa7d9179a85cf828e - - default default] Lock "8dfb4b8d-30c5-414d-98f6-1f788dbae8c0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:57 compute-0 nova_compute[254092]: 2025-11-25 17:08:57.458 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:08:57 compute-0 nova_compute[254092]: 2025-11-25 17:08:57.461 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:08:57 compute-0 nova_compute[254092]: 2025-11-25 17:08:57.465 254096 INFO nova.compute.manager [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Took 8.68 seconds to spawn the instance on the hypervisor.
Nov 25 17:08:57 compute-0 nova_compute[254092]: 2025-11-25 17:08:57.465 254096 DEBUG nova.compute.manager [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:08:57 compute-0 nova_compute[254092]: 2025-11-25 17:08:57.493 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:08:57 compute-0 nova_compute[254092]: 2025-11-25 17:08:57.527 254096 INFO nova.compute.manager [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Took 9.60 seconds to build instance.
Nov 25 17:08:57 compute-0 nova_compute[254092]: 2025-11-25 17:08:57.545 254096 DEBUG oslo_concurrency.lockutils [None req-896e8d2c-5d2c-4d54-a19d-cfd31fea7d8c e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:57 compute-0 podman[396777]: 2025-11-25 17:08:57.62710224 +0000 UTC m=+0.050942398 container create 36396cb8c4c59d39d430289b1192a036d13154c2c9b9c1ecde01d12a70459652 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 17:08:57 compute-0 systemd[1]: Started libpod-conmon-36396cb8c4c59d39d430289b1192a036d13154c2c9b9c1ecde01d12a70459652.scope.
Nov 25 17:08:57 compute-0 podman[396777]: 2025-11-25 17:08:57.600374506 +0000 UTC m=+0.024214684 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 17:08:57 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:08:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0e0d9216ee6b30cc22072c4c0c5dbb641729836999db290bc3d2789f7b7629e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 17:08:57 compute-0 podman[396777]: 2025-11-25 17:08:57.718153517 +0000 UTC m=+0.141993695 container init 36396cb8c4c59d39d430289b1192a036d13154c2c9b9c1ecde01d12a70459652 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 17:08:57 compute-0 podman[396777]: 2025-11-25 17:08:57.725614222 +0000 UTC m=+0.149454380 container start 36396cb8c4c59d39d430289b1192a036d13154c2c9b9c1ecde01d12a70459652 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 17:08:57 compute-0 neutron-haproxy-ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412[396790]: [NOTICE]   (396794) : New worker (396796) forked
Nov 25 17:08:57 compute-0 neutron-haproxy-ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412[396790]: [NOTICE]   (396794) : Loading success.
Nov 25 17:08:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2563: 321 pgs: 321 active+clean; 99 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 1.8 MiB/s wr, 45 op/s
Nov 25 17:08:58 compute-0 nova_compute[254092]: 2025-11-25 17:08:58.535 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:08:58 compute-0 nova_compute[254092]: 2025-11-25 17:08:58.536 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 17:08:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:08:59 compute-0 nova_compute[254092]: 2025-11-25 17:08:59.409 254096 DEBUG nova.compute.manager [req-c3ae7125-d717-449a-a617-70973fb44ecc req-e651e592-85a7-47eb-b4bd-f04df66ab2fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Received event network-vif-plugged-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:08:59 compute-0 nova_compute[254092]: 2025-11-25 17:08:59.409 254096 DEBUG oslo_concurrency.lockutils [req-c3ae7125-d717-449a-a617-70973fb44ecc req-e651e592-85a7-47eb-b4bd-f04df66ab2fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:08:59 compute-0 nova_compute[254092]: 2025-11-25 17:08:59.409 254096 DEBUG oslo_concurrency.lockutils [req-c3ae7125-d717-449a-a617-70973fb44ecc req-e651e592-85a7-47eb-b4bd-f04df66ab2fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:08:59 compute-0 nova_compute[254092]: 2025-11-25 17:08:59.410 254096 DEBUG oslo_concurrency.lockutils [req-c3ae7125-d717-449a-a617-70973fb44ecc req-e651e592-85a7-47eb-b4bd-f04df66ab2fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:08:59 compute-0 nova_compute[254092]: 2025-11-25 17:08:59.410 254096 DEBUG nova.compute.manager [req-c3ae7125-d717-449a-a617-70973fb44ecc req-e651e592-85a7-47eb-b4bd-f04df66ab2fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] No waiting events found dispatching network-vif-plugged-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:08:59 compute-0 nova_compute[254092]: 2025-11-25 17:08:59.410 254096 WARNING nova.compute.manager [req-c3ae7125-d717-449a-a617-70973fb44ecc req-e651e592-85a7-47eb-b4bd-f04df66ab2fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Received unexpected event network-vif-plugged-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab for instance with vm_state active and task_state None.
Nov 25 17:08:59 compute-0 ovn_controller[153477]: 2025-11-25T17:08:59Z|01370|binding|INFO|Releasing lport 62f37f6b-9026-4c42-8bb8-f4b3e0610e3b from this chassis (sb_readonly=0)
Nov 25 17:08:59 compute-0 nova_compute[254092]: 2025-11-25 17:08:59.489 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:08:59 compute-0 ceph-mon[74985]: pgmap v2563: 321 pgs: 321 active+clean; 99 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 1.8 MiB/s wr, 45 op/s
Nov 25 17:09:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2564: 321 pgs: 321 active+clean; 88 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 416 KiB/s rd, 1.8 MiB/s wr, 72 op/s
Nov 25 17:09:00 compute-0 nova_compute[254092]: 2025-11-25 17:09:00.504 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:00 compute-0 nova_compute[254092]: 2025-11-25 17:09:00.739 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:01 compute-0 ovn_controller[153477]: 2025-11-25T17:09:01Z|01371|binding|INFO|Releasing lport 62f37f6b-9026-4c42-8bb8-f4b3e0610e3b from this chassis (sb_readonly=0)
Nov 25 17:09:01 compute-0 NetworkManager[48891]: <info>  [1764090541.1943] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/561)
Nov 25 17:09:01 compute-0 NetworkManager[48891]: <info>  [1764090541.1950] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/562)
Nov 25 17:09:01 compute-0 nova_compute[254092]: 2025-11-25 17:09:01.193 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:01 compute-0 ovn_controller[153477]: 2025-11-25T17:09:01Z|01372|binding|INFO|Releasing lport 62f37f6b-9026-4c42-8bb8-f4b3e0610e3b from this chassis (sb_readonly=0)
Nov 25 17:09:01 compute-0 nova_compute[254092]: 2025-11-25 17:09:01.234 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:01 compute-0 nova_compute[254092]: 2025-11-25 17:09:01.239 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:01 compute-0 ceph-mon[74985]: pgmap v2564: 321 pgs: 321 active+clean; 88 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 416 KiB/s rd, 1.8 MiB/s wr, 72 op/s
Nov 25 17:09:01 compute-0 nova_compute[254092]: 2025-11-25 17:09:01.837 254096 DEBUG nova.compute.manager [req-2158489c-69f7-4960-b8d2-19b2bfe488b7 req-9b81740c-819d-4d27-a9f1-53b27ebb7df7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Received event network-changed-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:09:01 compute-0 nova_compute[254092]: 2025-11-25 17:09:01.837 254096 DEBUG nova.compute.manager [req-2158489c-69f7-4960-b8d2-19b2bfe488b7 req-9b81740c-819d-4d27-a9f1-53b27ebb7df7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Refreshing instance network info cache due to event network-changed-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:09:01 compute-0 nova_compute[254092]: 2025-11-25 17:09:01.838 254096 DEBUG oslo_concurrency.lockutils [req-2158489c-69f7-4960-b8d2-19b2bfe488b7 req-9b81740c-819d-4d27-a9f1-53b27ebb7df7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-e9a105a6-90a1-4e21-9296-61a55e2ceec3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:09:01 compute-0 nova_compute[254092]: 2025-11-25 17:09:01.838 254096 DEBUG oslo_concurrency.lockutils [req-2158489c-69f7-4960-b8d2-19b2bfe488b7 req-9b81740c-819d-4d27-a9f1-53b27ebb7df7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-e9a105a6-90a1-4e21-9296-61a55e2ceec3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:09:01 compute-0 nova_compute[254092]: 2025-11-25 17:09:01.838 254096 DEBUG nova.network.neutron [req-2158489c-69f7-4960-b8d2-19b2bfe488b7 req-9b81740c-819d-4d27-a9f1-53b27ebb7df7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Refreshing network info cache for port 9b06b5b4-bc07-48ef-b51a-ecf0abc558ab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:09:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2565: 321 pgs: 321 active+clean; 88 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 120 op/s
Nov 25 17:09:03 compute-0 nova_compute[254092]: 2025-11-25 17:09:03.467 254096 DEBUG nova.network.neutron [req-2158489c-69f7-4960-b8d2-19b2bfe488b7 req-9b81740c-819d-4d27-a9f1-53b27ebb7df7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Updated VIF entry in instance network info cache for port 9b06b5b4-bc07-48ef-b51a-ecf0abc558ab. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:09:03 compute-0 nova_compute[254092]: 2025-11-25 17:09:03.467 254096 DEBUG nova.network.neutron [req-2158489c-69f7-4960-b8d2-19b2bfe488b7 req-9b81740c-819d-4d27-a9f1-53b27ebb7df7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Updating instance_info_cache with network_info: [{"id": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "address": "fa:16:3e:e4:f0:6f", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b06b5b4-bc", "ovs_interfaceid": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:09:03 compute-0 nova_compute[254092]: 2025-11-25 17:09:03.547 254096 DEBUG oslo_concurrency.lockutils [req-2158489c-69f7-4960-b8d2-19b2bfe488b7 req-9b81740c-819d-4d27-a9f1-53b27ebb7df7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-e9a105a6-90a1-4e21-9296-61a55e2ceec3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:09:03 compute-0 ceph-mon[74985]: pgmap v2565: 321 pgs: 321 active+clean; 88 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 120 op/s
Nov 25 17:09:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:09:04 compute-0 sudo[396806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:09:04 compute-0 sudo[396806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:09:04 compute-0 sudo[396806]: pam_unix(sudo:session): session closed for user root
Nov 25 17:09:04 compute-0 sudo[396831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:09:04 compute-0 sudo[396831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:09:04 compute-0 sudo[396831]: pam_unix(sudo:session): session closed for user root
Nov 25 17:09:04 compute-0 sudo[396856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:09:04 compute-0 sudo[396856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:09:04 compute-0 sudo[396856]: pam_unix(sudo:session): session closed for user root
Nov 25 17:09:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2566: 321 pgs: 321 active+clean; 88 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 102 op/s
Nov 25 17:09:04 compute-0 sudo[396881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 25 17:09:04 compute-0 sudo[396881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:09:05 compute-0 podman[396982]: 2025-11-25 17:09:05.203601599 +0000 UTC m=+0.126190692 container exec 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:09:05 compute-0 podman[396982]: 2025-11-25 17:09:05.31335983 +0000 UTC m=+0.235948893 container exec_died 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:09:05 compute-0 nova_compute[254092]: 2025-11-25 17:09:05.506 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:05 compute-0 ceph-mon[74985]: pgmap v2566: 321 pgs: 321 active+clean; 88 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 102 op/s
Nov 25 17:09:05 compute-0 nova_compute[254092]: 2025-11-25 17:09:05.741 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:06 compute-0 sudo[396881]: pam_unix(sudo:session): session closed for user root
Nov 25 17:09:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:09:06 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:09:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:09:06 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:09:06 compute-0 sudo[397142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:09:06 compute-0 sudo[397142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:09:06 compute-0 sudo[397142]: pam_unix(sudo:session): session closed for user root
Nov 25 17:09:06 compute-0 sudo[397167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:09:06 compute-0 sudo[397167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:09:06 compute-0 sudo[397167]: pam_unix(sudo:session): session closed for user root
Nov 25 17:09:06 compute-0 sudo[397192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:09:06 compute-0 sudo[397192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:09:06 compute-0 sudo[397192]: pam_unix(sudo:session): session closed for user root
Nov 25 17:09:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2567: 321 pgs: 321 active+clean; 88 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 102 op/s
Nov 25 17:09:06 compute-0 sudo[397217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:09:06 compute-0 sudo[397217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:09:07 compute-0 sudo[397217]: pam_unix(sudo:session): session closed for user root
Nov 25 17:09:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:09:07 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:09:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:09:07 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:09:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:09:07 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:09:07 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev b1c94ba0-b95f-4f70-8667-b9b7df4293f1 does not exist
Nov 25 17:09:07 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 1d06334d-8e40-4181-98bf-a38f57a27919 does not exist
Nov 25 17:09:07 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 586ee341-b3a9-4635-94b7-7b80c47d919a does not exist
Nov 25 17:09:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:09:07 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:09:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:09:07 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:09:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:09:07 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:09:07 compute-0 sudo[397274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:09:07 compute-0 sudo[397274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:09:07 compute-0 sudo[397274]: pam_unix(sudo:session): session closed for user root
Nov 25 17:09:07 compute-0 sudo[397299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:09:07 compute-0 sudo[397299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:09:07 compute-0 sudo[397299]: pam_unix(sudo:session): session closed for user root
Nov 25 17:09:07 compute-0 sudo[397324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:09:07 compute-0 sudo[397324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:09:07 compute-0 sudo[397324]: pam_unix(sudo:session): session closed for user root
Nov 25 17:09:07 compute-0 sudo[397349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:09:07 compute-0 sudo[397349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:09:07 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:09:07 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:09:07 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:09:07 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:09:07 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:09:07 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:09:07 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:09:07 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:09:07 compute-0 nova_compute[254092]: 2025-11-25 17:09:07.584 254096 DEBUG oslo_concurrency.lockutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "e5fb0d68-c20f-4118-96eb-4e6de1db03a1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:09:07 compute-0 nova_compute[254092]: 2025-11-25 17:09:07.585 254096 DEBUG oslo_concurrency.lockutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "e5fb0d68-c20f-4118-96eb-4e6de1db03a1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:09:07 compute-0 nova_compute[254092]: 2025-11-25 17:09:07.604 254096 DEBUG nova.compute.manager [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 17:09:07 compute-0 podman[397415]: 2025-11-25 17:09:07.717064049 +0000 UTC m=+0.097088844 container create ff4bddfac6f409a5fba0721f811202617e99a5a87877457257e2503a3b746839 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:09:07 compute-0 podman[397415]: 2025-11-25 17:09:07.641135895 +0000 UTC m=+0.021160700 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:09:07 compute-0 nova_compute[254092]: 2025-11-25 17:09:07.754 254096 DEBUG oslo_concurrency.lockutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:09:07 compute-0 nova_compute[254092]: 2025-11-25 17:09:07.754 254096 DEBUG oslo_concurrency.lockutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:09:07 compute-0 nova_compute[254092]: 2025-11-25 17:09:07.762 254096 DEBUG nova.virt.hardware [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 17:09:07 compute-0 nova_compute[254092]: 2025-11-25 17:09:07.763 254096 INFO nova.compute.claims [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Claim successful on node compute-0.ctlplane.example.com
Nov 25 17:09:07 compute-0 systemd[1]: Started libpod-conmon-ff4bddfac6f409a5fba0721f811202617e99a5a87877457257e2503a3b746839.scope.
Nov 25 17:09:07 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:09:07 compute-0 nova_compute[254092]: 2025-11-25 17:09:07.935 254096 DEBUG oslo_concurrency.processutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:09:07 compute-0 podman[397415]: 2025-11-25 17:09:07.987842405 +0000 UTC m=+0.367867240 container init ff4bddfac6f409a5fba0721f811202617e99a5a87877457257e2503a3b746839 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 17:09:07 compute-0 podman[397415]: 2025-11-25 17:09:07.995893576 +0000 UTC m=+0.375918361 container start ff4bddfac6f409a5fba0721f811202617e99a5a87877457257e2503a3b746839 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lamarr, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:09:08 compute-0 affectionate_lamarr[397431]: 167 167
Nov 25 17:09:08 compute-0 systemd[1]: libpod-ff4bddfac6f409a5fba0721f811202617e99a5a87877457257e2503a3b746839.scope: Deactivated successfully.
Nov 25 17:09:08 compute-0 conmon[397431]: conmon ff4bddfac6f409a5fba0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ff4bddfac6f409a5fba0721f811202617e99a5a87877457257e2503a3b746839.scope/container/memory.events
Nov 25 17:09:08 compute-0 podman[397415]: 2025-11-25 17:09:08.039207065 +0000 UTC m=+0.419231870 container attach ff4bddfac6f409a5fba0721f811202617e99a5a87877457257e2503a3b746839 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 17:09:08 compute-0 podman[397415]: 2025-11-25 17:09:08.040180301 +0000 UTC m=+0.420205086 container died ff4bddfac6f409a5fba0721f811202617e99a5a87877457257e2503a3b746839 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lamarr, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:09:08 compute-0 nova_compute[254092]: 2025-11-25 17:09:08.252 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9dd10fc9d48e9664e9ef00d6bdcb95be3460ca169357ff9532c4620d629c6bf-merged.mount: Deactivated successfully.
Nov 25 17:09:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:09:08 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4253505592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:09:08 compute-0 nova_compute[254092]: 2025-11-25 17:09:08.367 254096 DEBUG oslo_concurrency.processutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:09:08 compute-0 nova_compute[254092]: 2025-11-25 17:09:08.373 254096 DEBUG nova.compute.provider_tree [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:09:08 compute-0 nova_compute[254092]: 2025-11-25 17:09:08.385 254096 DEBUG nova.scheduler.client.report [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:09:08 compute-0 nova_compute[254092]: 2025-11-25 17:09:08.430 254096 DEBUG oslo_concurrency.lockutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:09:08 compute-0 nova_compute[254092]: 2025-11-25 17:09:08.431 254096 DEBUG nova.compute.manager [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 17:09:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2568: 321 pgs: 321 active+clean; 88 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 86 op/s
Nov 25 17:09:08 compute-0 ceph-mon[74985]: pgmap v2567: 321 pgs: 321 active+clean; 88 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 102 op/s
Nov 25 17:09:08 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4253505592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:09:08 compute-0 nova_compute[254092]: 2025-11-25 17:09:08.629 254096 DEBUG nova.compute.manager [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 17:09:08 compute-0 nova_compute[254092]: 2025-11-25 17:09:08.629 254096 DEBUG nova.network.neutron [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 17:09:08 compute-0 nova_compute[254092]: 2025-11-25 17:09:08.672 254096 INFO nova.virt.libvirt.driver [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 17:09:08 compute-0 nova_compute[254092]: 2025-11-25 17:09:08.734 254096 DEBUG nova.compute.manager [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 17:09:08 compute-0 podman[397415]: 2025-11-25 17:09:08.762106522 +0000 UTC m=+1.142131327 container remove ff4bddfac6f409a5fba0721f811202617e99a5a87877457257e2503a3b746839 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lamarr, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:09:08 compute-0 systemd[1]: libpod-conmon-ff4bddfac6f409a5fba0721f811202617e99a5a87877457257e2503a3b746839.scope: Deactivated successfully.
Nov 25 17:09:08 compute-0 nova_compute[254092]: 2025-11-25 17:09:08.877 254096 DEBUG nova.compute.manager [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 17:09:08 compute-0 nova_compute[254092]: 2025-11-25 17:09:08.878 254096 DEBUG nova.virt.libvirt.driver [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 17:09:08 compute-0 nova_compute[254092]: 2025-11-25 17:09:08.879 254096 INFO nova.virt.libvirt.driver [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Creating image(s)
Nov 25 17:09:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:09:08 compute-0 nova_compute[254092]: 2025-11-25 17:09:08.914 254096 DEBUG nova.storage.rbd_utils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image e5fb0d68-c20f-4118-96eb-4e6de1db03a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:09:08 compute-0 nova_compute[254092]: 2025-11-25 17:09:08.946 254096 DEBUG nova.storage.rbd_utils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image e5fb0d68-c20f-4118-96eb-4e6de1db03a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:09:08 compute-0 nova_compute[254092]: 2025-11-25 17:09:08.965 254096 DEBUG nova.storage.rbd_utils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image e5fb0d68-c20f-4118-96eb-4e6de1db03a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:09:08 compute-0 nova_compute[254092]: 2025-11-25 17:09:08.969 254096 DEBUG oslo_concurrency.processutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:09:09 compute-0 nova_compute[254092]: 2025-11-25 17:09:09.009 254096 DEBUG nova.policy [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e13382ac65e94c7b8a89e67de0e754df', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 17:09:09 compute-0 podman[397479]: 2025-11-25 17:09:08.928941588 +0000 UTC m=+0.038419745 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:09:09 compute-0 nova_compute[254092]: 2025-11-25 17:09:09.043 254096 DEBUG oslo_concurrency.processutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:09:09 compute-0 nova_compute[254092]: 2025-11-25 17:09:09.043 254096 DEBUG oslo_concurrency.lockutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:09:09 compute-0 nova_compute[254092]: 2025-11-25 17:09:09.044 254096 DEBUG oslo_concurrency.lockutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:09:09 compute-0 nova_compute[254092]: 2025-11-25 17:09:09.044 254096 DEBUG oslo_concurrency.lockutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:09:09 compute-0 nova_compute[254092]: 2025-11-25 17:09:09.067 254096 DEBUG nova.storage.rbd_utils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image e5fb0d68-c20f-4118-96eb-4e6de1db03a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:09:09 compute-0 nova_compute[254092]: 2025-11-25 17:09:09.071 254096 DEBUG oslo_concurrency.processutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 e5fb0d68-c20f-4118-96eb-4e6de1db03a1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:09:09 compute-0 podman[397479]: 2025-11-25 17:09:09.164454377 +0000 UTC m=+0.273932534 container create dc7f4f01737a4014ae5173e0070087b97c04da807d0d030c01bc612b322d4b44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:09:09 compute-0 systemd[1]: Started libpod-conmon-dc7f4f01737a4014ae5173e0070087b97c04da807d0d030c01bc612b322d4b44.scope.
Nov 25 17:09:09 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:09:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd8b50090f1e9efe79c72904f20f11359eb689b4b36852aa5e9f8b4791bccdcf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:09:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd8b50090f1e9efe79c72904f20f11359eb689b4b36852aa5e9f8b4791bccdcf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:09:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd8b50090f1e9efe79c72904f20f11359eb689b4b36852aa5e9f8b4791bccdcf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:09:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd8b50090f1e9efe79c72904f20f11359eb689b4b36852aa5e9f8b4791bccdcf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:09:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd8b50090f1e9efe79c72904f20f11359eb689b4b36852aa5e9f8b4791bccdcf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:09:09 compute-0 podman[397479]: 2025-11-25 17:09:09.620534567 +0000 UTC m=+0.730012734 container init dc7f4f01737a4014ae5173e0070087b97c04da807d0d030c01bc612b322d4b44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_jang, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 17:09:09 compute-0 podman[397479]: 2025-11-25 17:09:09.628631149 +0000 UTC m=+0.738109296 container start dc7f4f01737a4014ae5173e0070087b97c04da807d0d030c01bc612b322d4b44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:09:09 compute-0 podman[397479]: 2025-11-25 17:09:09.639115527 +0000 UTC m=+0.748593704 container attach dc7f4f01737a4014ae5173e0070087b97c04da807d0d030c01bc612b322d4b44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_jang, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 17:09:09 compute-0 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Nov 25 17:09:09 compute-0 ceph-mon[74985]: pgmap v2568: 321 pgs: 321 active+clean; 88 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 86 op/s
Nov 25 17:09:10 compute-0 nova_compute[254092]: 2025-11-25 17:09:10.080 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090535.0785992, 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:09:10 compute-0 nova_compute[254092]: 2025-11-25 17:09:10.082 254096 INFO nova.compute.manager [-] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] VM Stopped (Lifecycle Event)
Nov 25 17:09:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:09:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:09:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:09:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:09:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:09:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:09:10 compute-0 nova_compute[254092]: 2025-11-25 17:09:10.108 254096 DEBUG nova.compute.manager [None req-2c275085-f0e9-441c-a688-e890800f1705 - - - - - -] [instance: 8dfb4b8d-30c5-414d-98f6-1f788dbae8c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:09:10 compute-0 nova_compute[254092]: 2025-11-25 17:09:10.442 254096 DEBUG nova.network.neutron [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Successfully created port: 8f6c27c2-adf3-4556-9c61-69f27de29c0a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 17:09:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2569: 321 pgs: 321 active+clean; 97 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 964 KiB/s wr, 93 op/s
Nov 25 17:09:10 compute-0 nova_compute[254092]: 2025-11-25 17:09:10.508 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:10 compute-0 nova_compute[254092]: 2025-11-25 17:09:10.666 254096 DEBUG oslo_concurrency.processutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 e5fb0d68-c20f-4118-96eb-4e6de1db03a1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:09:10 compute-0 gallant_jang[397583]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:09:10 compute-0 gallant_jang[397583]: --> relative data size: 1.0
Nov 25 17:09:10 compute-0 gallant_jang[397583]: --> All data devices are unavailable
Nov 25 17:09:10 compute-0 systemd[1]: libpod-dc7f4f01737a4014ae5173e0070087b97c04da807d0d030c01bc612b322d4b44.scope: Deactivated successfully.
Nov 25 17:09:10 compute-0 systemd[1]: libpod-dc7f4f01737a4014ae5173e0070087b97c04da807d0d030c01bc612b322d4b44.scope: Consumed 1.027s CPU time.
Nov 25 17:09:10 compute-0 podman[397479]: 2025-11-25 17:09:10.730523872 +0000 UTC m=+1.840002039 container died dc7f4f01737a4014ae5173e0070087b97c04da807d0d030c01bc612b322d4b44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_jang, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:09:10 compute-0 nova_compute[254092]: 2025-11-25 17:09:10.748 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:10 compute-0 nova_compute[254092]: 2025-11-25 17:09:10.755 254096 DEBUG nova.storage.rbd_utils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] resizing rbd image e5fb0d68-c20f-4118-96eb-4e6de1db03a1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 17:09:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd8b50090f1e9efe79c72904f20f11359eb689b4b36852aa5e9f8b4791bccdcf-merged.mount: Deactivated successfully.
Nov 25 17:09:11 compute-0 podman[397479]: 2025-11-25 17:09:11.251494101 +0000 UTC m=+2.360972248 container remove dc7f4f01737a4014ae5173e0070087b97c04da807d0d030c01bc612b322d4b44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:09:11 compute-0 sudo[397349]: pam_unix(sudo:session): session closed for user root
Nov 25 17:09:11 compute-0 systemd[1]: libpod-conmon-dc7f4f01737a4014ae5173e0070087b97c04da807d0d030c01bc612b322d4b44.scope: Deactivated successfully.
Nov 25 17:09:11 compute-0 sudo[397683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:09:11 compute-0 sudo[397683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:09:11 compute-0 sudo[397683]: pam_unix(sudo:session): session closed for user root
Nov 25 17:09:11 compute-0 sudo[397708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:09:11 compute-0 nova_compute[254092]: 2025-11-25 17:09:11.406 254096 DEBUG nova.objects.instance [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'migration_context' on Instance uuid e5fb0d68-c20f-4118-96eb-4e6de1db03a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:09:11 compute-0 sudo[397708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:09:11 compute-0 sudo[397708]: pam_unix(sudo:session): session closed for user root
Nov 25 17:09:11 compute-0 nova_compute[254092]: 2025-11-25 17:09:11.426 254096 DEBUG nova.virt.libvirt.driver [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 17:09:11 compute-0 nova_compute[254092]: 2025-11-25 17:09:11.426 254096 DEBUG nova.virt.libvirt.driver [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Ensure instance console log exists: /var/lib/nova/instances/e5fb0d68-c20f-4118-96eb-4e6de1db03a1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 17:09:11 compute-0 nova_compute[254092]: 2025-11-25 17:09:11.427 254096 DEBUG oslo_concurrency.lockutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:09:11 compute-0 nova_compute[254092]: 2025-11-25 17:09:11.427 254096 DEBUG oslo_concurrency.lockutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:09:11 compute-0 nova_compute[254092]: 2025-11-25 17:09:11.427 254096 DEBUG oslo_concurrency.lockutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:09:11 compute-0 sudo[397751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:09:11 compute-0 sudo[397751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:09:11 compute-0 sudo[397751]: pam_unix(sudo:session): session closed for user root
Nov 25 17:09:11 compute-0 sudo[397776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:09:11 compute-0 sudo[397776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:09:11 compute-0 podman[397843]: 2025-11-25 17:09:11.854083799 +0000 UTC m=+0.044698117 container create 8d46efaf428a446a86c7abd1924cbff3bc429ce88cb04970f64db7df71206a81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hodgkin, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:09:11 compute-0 systemd[1]: Started libpod-conmon-8d46efaf428a446a86c7abd1924cbff3bc429ce88cb04970f64db7df71206a81.scope.
Nov 25 17:09:11 compute-0 ovn_controller[153477]: 2025-11-25T17:09:11Z|00158|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e4:f0:6f 10.100.0.3
Nov 25 17:09:11 compute-0 ovn_controller[153477]: 2025-11-25T17:09:11Z|00159|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e4:f0:6f 10.100.0.3
Nov 25 17:09:11 compute-0 podman[397843]: 2025-11-25 17:09:11.836527858 +0000 UTC m=+0.027142206 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:09:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:09:11 compute-0 podman[397843]: 2025-11-25 17:09:11.953236399 +0000 UTC m=+0.143850747 container init 8d46efaf428a446a86c7abd1924cbff3bc429ce88cb04970f64db7df71206a81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hodgkin, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 17:09:11 compute-0 podman[397843]: 2025-11-25 17:09:11.960490527 +0000 UTC m=+0.151104845 container start 8d46efaf428a446a86c7abd1924cbff3bc429ce88cb04970f64db7df71206a81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hodgkin, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:09:11 compute-0 podman[397843]: 2025-11-25 17:09:11.964409945 +0000 UTC m=+0.155024293 container attach 8d46efaf428a446a86c7abd1924cbff3bc429ce88cb04970f64db7df71206a81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hodgkin, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:09:11 compute-0 dazzling_hodgkin[397859]: 167 167
Nov 25 17:09:11 compute-0 systemd[1]: libpod-8d46efaf428a446a86c7abd1924cbff3bc429ce88cb04970f64db7df71206a81.scope: Deactivated successfully.
Nov 25 17:09:11 compute-0 conmon[397859]: conmon 8d46efaf428a446a86c7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8d46efaf428a446a86c7abd1924cbff3bc429ce88cb04970f64db7df71206a81.scope/container/memory.events
Nov 25 17:09:11 compute-0 podman[397843]: 2025-11-25 17:09:11.967898631 +0000 UTC m=+0.158512949 container died 8d46efaf428a446a86c7abd1924cbff3bc429ce88cb04970f64db7df71206a81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 17:09:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-5089f48ff53013247aede612232e6dd61a870b5b2acc4754a60c308532f2a12c-merged.mount: Deactivated successfully.
Nov 25 17:09:12 compute-0 podman[397843]: 2025-11-25 17:09:12.010657113 +0000 UTC m=+0.201271431 container remove 8d46efaf428a446a86c7abd1924cbff3bc429ce88cb04970f64db7df71206a81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hodgkin, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:09:12 compute-0 systemd[1]: libpod-conmon-8d46efaf428a446a86c7abd1924cbff3bc429ce88cb04970f64db7df71206a81.scope: Deactivated successfully.
Nov 25 17:09:12 compute-0 ceph-mon[74985]: pgmap v2569: 321 pgs: 321 active+clean; 97 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 964 KiB/s wr, 93 op/s
Nov 25 17:09:12 compute-0 podman[397883]: 2025-11-25 17:09:12.246976426 +0000 UTC m=+0.066522506 container create bc69f255783889da731c7f93b4a88f96d932536dac553053975e1d4492e15547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:09:12 compute-0 systemd[1]: Started libpod-conmon-bc69f255783889da731c7f93b4a88f96d932536dac553053975e1d4492e15547.scope.
Nov 25 17:09:12 compute-0 podman[397883]: 2025-11-25 17:09:12.215489931 +0000 UTC m=+0.035036031 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:09:12 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:09:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a78aa2dc253d7752f72ea120988c4687dc77b5eb58bc3f30b3469d79b38d658/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:09:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a78aa2dc253d7752f72ea120988c4687dc77b5eb58bc3f30b3469d79b38d658/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:09:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a78aa2dc253d7752f72ea120988c4687dc77b5eb58bc3f30b3469d79b38d658/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:09:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a78aa2dc253d7752f72ea120988c4687dc77b5eb58bc3f30b3469d79b38d658/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:09:12 compute-0 podman[397883]: 2025-11-25 17:09:12.36710737 +0000 UTC m=+0.186653500 container init bc69f255783889da731c7f93b4a88f96d932536dac553053975e1d4492e15547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_shamir, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 17:09:12 compute-0 podman[397883]: 2025-11-25 17:09:12.375059978 +0000 UTC m=+0.194606058 container start bc69f255783889da731c7f93b4a88f96d932536dac553053975e1d4492e15547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_shamir, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 17:09:12 compute-0 podman[397883]: 2025-11-25 17:09:12.382982916 +0000 UTC m=+0.202529016 container attach bc69f255783889da731c7f93b4a88f96d932536dac553053975e1d4492e15547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_shamir, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 17:09:12 compute-0 nova_compute[254092]: 2025-11-25 17:09:12.458 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2570: 321 pgs: 321 active+clean; 147 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.5 MiB/s wr, 116 op/s
Nov 25 17:09:12 compute-0 nova_compute[254092]: 2025-11-25 17:09:12.532 254096 DEBUG nova.network.neutron [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Successfully updated port: 8f6c27c2-adf3-4556-9c61-69f27de29c0a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:09:12 compute-0 nova_compute[254092]: 2025-11-25 17:09:12.558 254096 DEBUG oslo_concurrency.lockutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "refresh_cache-e5fb0d68-c20f-4118-96eb-4e6de1db03a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:09:12 compute-0 nova_compute[254092]: 2025-11-25 17:09:12.558 254096 DEBUG oslo_concurrency.lockutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquired lock "refresh_cache-e5fb0d68-c20f-4118-96eb-4e6de1db03a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:09:12 compute-0 nova_compute[254092]: 2025-11-25 17:09:12.558 254096 DEBUG nova.network.neutron [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 17:09:12 compute-0 nova_compute[254092]: 2025-11-25 17:09:12.690 254096 DEBUG nova.compute.manager [req-e31fe721-8614-4fb8-ad27-a8840e47c4dc req-6c3cbac0-225f-4e7c-9327-54c75c416625 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Received event network-changed-8f6c27c2-adf3-4556-9c61-69f27de29c0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:09:12 compute-0 nova_compute[254092]: 2025-11-25 17:09:12.690 254096 DEBUG nova.compute.manager [req-e31fe721-8614-4fb8-ad27-a8840e47c4dc req-6c3cbac0-225f-4e7c-9327-54c75c416625 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Refreshing instance network info cache due to event network-changed-8f6c27c2-adf3-4556-9c61-69f27de29c0a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:09:12 compute-0 nova_compute[254092]: 2025-11-25 17:09:12.691 254096 DEBUG oslo_concurrency.lockutils [req-e31fe721-8614-4fb8-ad27-a8840e47c4dc req-6c3cbac0-225f-4e7c-9327-54c75c416625 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-e5fb0d68-c20f-4118-96eb-4e6de1db03a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:09:12 compute-0 nova_compute[254092]: 2025-11-25 17:09:12.751 254096 DEBUG nova.network.neutron [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]: {
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:     "0": [
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:         {
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:             "devices": [
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:                 "/dev/loop3"
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:             ],
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:             "lv_name": "ceph_lv0",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:             "lv_size": "21470642176",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:             "name": "ceph_lv0",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:             "tags": {
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:                 "ceph.cluster_name": "ceph",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:                 "ceph.crush_device_class": "",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:                 "ceph.encrypted": "0",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:                 "ceph.osd_id": "0",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:                 "ceph.type": "block",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:                 "ceph.vdo": "0"
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:             },
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:             "type": "block",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:             "vg_name": "ceph_vg0"
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:         }
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:     ],
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:     "1": [
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:         {
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:             "devices": [
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:                 "/dev/loop4"
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:             ],
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:             "lv_name": "ceph_lv1",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:             "lv_size": "21470642176",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:             "name": "ceph_lv1",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:             "tags": {
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:                 "ceph.cluster_name": "ceph",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:                 "ceph.crush_device_class": "",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:                 "ceph.encrypted": "0",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:                 "ceph.osd_id": "1",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:                 "ceph.type": "block",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:                 "ceph.vdo": "0"
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:             },
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:             "type": "block",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:             "vg_name": "ceph_vg1"
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:         }
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:     ],
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:     "2": [
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:         {
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:             "devices": [
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:                 "/dev/loop5"
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:             ],
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:             "lv_name": "ceph_lv2",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:             "lv_size": "21470642176",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:             "name": "ceph_lv2",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:             "tags": {
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:                 "ceph.cluster_name": "ceph",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:                 "ceph.crush_device_class": "",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:                 "ceph.encrypted": "0",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:                 "ceph.osd_id": "2",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:                 "ceph.type": "block",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:                 "ceph.vdo": "0"
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:             },
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:             "type": "block",
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:             "vg_name": "ceph_vg2"
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:         }
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]:     ]
Nov 25 17:09:13 compute-0 mystifying_shamir[397901]: }
Nov 25 17:09:13 compute-0 systemd[1]: libpod-bc69f255783889da731c7f93b4a88f96d932536dac553053975e1d4492e15547.scope: Deactivated successfully.
Nov 25 17:09:13 compute-0 podman[397883]: 2025-11-25 17:09:13.177390215 +0000 UTC m=+0.996936295 container died bc69f255783889da731c7f93b4a88f96d932536dac553053975e1d4492e15547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_shamir, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:09:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a78aa2dc253d7752f72ea120988c4687dc77b5eb58bc3f30b3469d79b38d658-merged.mount: Deactivated successfully.
Nov 25 17:09:13 compute-0 podman[397883]: 2025-11-25 17:09:13.236478326 +0000 UTC m=+1.056024406 container remove bc69f255783889da731c7f93b4a88f96d932536dac553053975e1d4492e15547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 17:09:13 compute-0 systemd[1]: libpod-conmon-bc69f255783889da731c7f93b4a88f96d932536dac553053975e1d4492e15547.scope: Deactivated successfully.
Nov 25 17:09:13 compute-0 sudo[397776]: pam_unix(sudo:session): session closed for user root
Nov 25 17:09:13 compute-0 sudo[397924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:09:13 compute-0 sudo[397924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:09:13 compute-0 sudo[397924]: pam_unix(sudo:session): session closed for user root
Nov 25 17:09:13 compute-0 sudo[397949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:09:13 compute-0 sudo[397949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:09:13 compute-0 sudo[397949]: pam_unix(sudo:session): session closed for user root
Nov 25 17:09:13 compute-0 sudo[397974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:09:13 compute-0 sudo[397974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:09:13 compute-0 sudo[397974]: pam_unix(sudo:session): session closed for user root
Nov 25 17:09:13 compute-0 sudo[397999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:09:13 compute-0 sudo[397999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:09:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:13.646 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:09:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:13.647 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:09:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:13.647 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:09:13 compute-0 nova_compute[254092]: 2025-11-25 17:09:13.749 254096 DEBUG nova.network.neutron [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Updating instance_info_cache with network_info: [{"id": "8f6c27c2-adf3-4556-9c61-69f27de29c0a", "address": "fa:16:3e:66:a8:a6", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f6c27c2-ad", "ovs_interfaceid": "8f6c27c2-adf3-4556-9c61-69f27de29c0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:09:13 compute-0 nova_compute[254092]: 2025-11-25 17:09:13.770 254096 DEBUG oslo_concurrency.lockutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Releasing lock "refresh_cache-e5fb0d68-c20f-4118-96eb-4e6de1db03a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:09:13 compute-0 nova_compute[254092]: 2025-11-25 17:09:13.771 254096 DEBUG nova.compute.manager [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Instance network_info: |[{"id": "8f6c27c2-adf3-4556-9c61-69f27de29c0a", "address": "fa:16:3e:66:a8:a6", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f6c27c2-ad", "ovs_interfaceid": "8f6c27c2-adf3-4556-9c61-69f27de29c0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 17:09:13 compute-0 nova_compute[254092]: 2025-11-25 17:09:13.773 254096 DEBUG oslo_concurrency.lockutils [req-e31fe721-8614-4fb8-ad27-a8840e47c4dc req-6c3cbac0-225f-4e7c-9327-54c75c416625 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-e5fb0d68-c20f-4118-96eb-4e6de1db03a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:09:13 compute-0 nova_compute[254092]: 2025-11-25 17:09:13.774 254096 DEBUG nova.network.neutron [req-e31fe721-8614-4fb8-ad27-a8840e47c4dc req-6c3cbac0-225f-4e7c-9327-54c75c416625 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Refreshing network info cache for port 8f6c27c2-adf3-4556-9c61-69f27de29c0a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:09:13 compute-0 nova_compute[254092]: 2025-11-25 17:09:13.777 254096 DEBUG nova.virt.libvirt.driver [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Start _get_guest_xml network_info=[{"id": "8f6c27c2-adf3-4556-9c61-69f27de29c0a", "address": "fa:16:3e:66:a8:a6", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f6c27c2-ad", "ovs_interfaceid": "8f6c27c2-adf3-4556-9c61-69f27de29c0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 17:09:13 compute-0 nova_compute[254092]: 2025-11-25 17:09:13.784 254096 WARNING nova.virt.libvirt.driver [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:09:13 compute-0 nova_compute[254092]: 2025-11-25 17:09:13.792 254096 DEBUG nova.virt.libvirt.host [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 17:09:13 compute-0 nova_compute[254092]: 2025-11-25 17:09:13.793 254096 DEBUG nova.virt.libvirt.host [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 17:09:13 compute-0 nova_compute[254092]: 2025-11-25 17:09:13.796 254096 DEBUG nova.virt.libvirt.host [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 17:09:13 compute-0 nova_compute[254092]: 2025-11-25 17:09:13.797 254096 DEBUG nova.virt.libvirt.host [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 17:09:13 compute-0 nova_compute[254092]: 2025-11-25 17:09:13.798 254096 DEBUG nova.virt.libvirt.driver [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 17:09:13 compute-0 nova_compute[254092]: 2025-11-25 17:09:13.798 254096 DEBUG nova.virt.hardware [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 17:09:13 compute-0 nova_compute[254092]: 2025-11-25 17:09:13.799 254096 DEBUG nova.virt.hardware [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 17:09:13 compute-0 nova_compute[254092]: 2025-11-25 17:09:13.799 254096 DEBUG nova.virt.hardware [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 17:09:13 compute-0 nova_compute[254092]: 2025-11-25 17:09:13.800 254096 DEBUG nova.virt.hardware [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 17:09:13 compute-0 nova_compute[254092]: 2025-11-25 17:09:13.800 254096 DEBUG nova.virt.hardware [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 17:09:13 compute-0 nova_compute[254092]: 2025-11-25 17:09:13.800 254096 DEBUG nova.virt.hardware [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 17:09:13 compute-0 nova_compute[254092]: 2025-11-25 17:09:13.801 254096 DEBUG nova.virt.hardware [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 17:09:13 compute-0 nova_compute[254092]: 2025-11-25 17:09:13.801 254096 DEBUG nova.virt.hardware [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 17:09:13 compute-0 nova_compute[254092]: 2025-11-25 17:09:13.801 254096 DEBUG nova.virt.hardware [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 17:09:13 compute-0 nova_compute[254092]: 2025-11-25 17:09:13.801 254096 DEBUG nova.virt.hardware [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 17:09:13 compute-0 nova_compute[254092]: 2025-11-25 17:09:13.802 254096 DEBUG nova.virt.hardware [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 17:09:13 compute-0 nova_compute[254092]: 2025-11-25 17:09:13.805 254096 DEBUG oslo_concurrency.processutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:09:13 compute-0 podman[398065]: 2025-11-25 17:09:13.84586618 +0000 UTC m=+0.063533573 container create bb7330cd53dd86ad0f4e74a376b8606d16939c0b764a668d8d25d1d925b9f797 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mccarthy, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:09:13 compute-0 systemd[1]: Started libpod-conmon-bb7330cd53dd86ad0f4e74a376b8606d16939c0b764a668d8d25d1d925b9f797.scope.
Nov 25 17:09:13 compute-0 podman[398065]: 2025-11-25 17:09:13.813331958 +0000 UTC m=+0.030999411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:09:13 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:09:13 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:09:13 compute-0 podman[398065]: 2025-11-25 17:09:13.946853469 +0000 UTC m=+0.164520872 container init bb7330cd53dd86ad0f4e74a376b8606d16939c0b764a668d8d25d1d925b9f797 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mccarthy, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 17:09:13 compute-0 podman[398065]: 2025-11-25 17:09:13.955399324 +0000 UTC m=+0.173066677 container start bb7330cd53dd86ad0f4e74a376b8606d16939c0b764a668d8d25d1d925b9f797 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mccarthy, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 17:09:13 compute-0 podman[398065]: 2025-11-25 17:09:13.959332462 +0000 UTC m=+0.176999855 container attach bb7330cd53dd86ad0f4e74a376b8606d16939c0b764a668d8d25d1d925b9f797 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mccarthy, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:09:13 compute-0 vibrant_mccarthy[398081]: 167 167
Nov 25 17:09:13 compute-0 systemd[1]: libpod-bb7330cd53dd86ad0f4e74a376b8606d16939c0b764a668d8d25d1d925b9f797.scope: Deactivated successfully.
Nov 25 17:09:13 compute-0 conmon[398081]: conmon bb7330cd53dd86ad0f4e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bb7330cd53dd86ad0f4e74a376b8606d16939c0b764a668d8d25d1d925b9f797.scope/container/memory.events
Nov 25 17:09:13 compute-0 podman[398065]: 2025-11-25 17:09:13.963686141 +0000 UTC m=+0.181353504 container died bb7330cd53dd86ad0f4e74a376b8606d16939c0b764a668d8d25d1d925b9f797 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mccarthy, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:09:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-9bcdbf64cb0632f76c236d9f283ea777b8ea63b75aac0f2fa946c47821073f09-merged.mount: Deactivated successfully.
Nov 25 17:09:14 compute-0 podman[398065]: 2025-11-25 17:09:14.017708334 +0000 UTC m=+0.235375687 container remove bb7330cd53dd86ad0f4e74a376b8606d16939c0b764a668d8d25d1d925b9f797 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mccarthy, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:09:14 compute-0 systemd[1]: libpod-conmon-bb7330cd53dd86ad0f4e74a376b8606d16939c0b764a668d8d25d1d925b9f797.scope: Deactivated successfully.
Nov 25 17:09:14 compute-0 ceph-mon[74985]: pgmap v2570: 321 pgs: 321 active+clean; 147 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.5 MiB/s wr, 116 op/s
Nov 25 17:09:14 compute-0 podman[398124]: 2025-11-25 17:09:14.238396226 +0000 UTC m=+0.056279045 container create 63d5fcd3d22081e473f96c8d10cb95d92f0891f27cb9fec24d881379eed94c82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_newton, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 17:09:14 compute-0 systemd[1]: Started libpod-conmon-63d5fcd3d22081e473f96c8d10cb95d92f0891f27cb9fec24d881379eed94c82.scope.
Nov 25 17:09:14 compute-0 podman[398124]: 2025-11-25 17:09:14.211322904 +0000 UTC m=+0.029205743 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:09:14 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:09:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:09:14 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/351559669' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:09:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd0bd28b1d19cb90382b981199719036ab841f06839c1fef963566895076923d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:09:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd0bd28b1d19cb90382b981199719036ab841f06839c1fef963566895076923d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:09:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd0bd28b1d19cb90382b981199719036ab841f06839c1fef963566895076923d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:09:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd0bd28b1d19cb90382b981199719036ab841f06839c1fef963566895076923d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:09:14 compute-0 podman[398124]: 2025-11-25 17:09:14.327618893 +0000 UTC m=+0.145501732 container init 63d5fcd3d22081e473f96c8d10cb95d92f0891f27cb9fec24d881379eed94c82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:09:14 compute-0 podman[398124]: 2025-11-25 17:09:14.338621325 +0000 UTC m=+0.156504144 container start 63d5fcd3d22081e473f96c8d10cb95d92f0891f27cb9fec24d881379eed94c82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:09:14 compute-0 nova_compute[254092]: 2025-11-25 17:09:14.338 254096 DEBUG oslo_concurrency.processutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:09:14 compute-0 podman[398124]: 2025-11-25 17:09:14.342272586 +0000 UTC m=+0.160155405 container attach 63d5fcd3d22081e473f96c8d10cb95d92f0891f27cb9fec24d881379eed94c82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_newton, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:09:14 compute-0 nova_compute[254092]: 2025-11-25 17:09:14.362 254096 DEBUG nova.storage.rbd_utils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image e5fb0d68-c20f-4118-96eb-4e6de1db03a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:09:14 compute-0 nova_compute[254092]: 2025-11-25 17:09:14.366 254096 DEBUG oslo_concurrency.processutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:09:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2571: 321 pgs: 321 active+clean; 147 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 3.5 MiB/s wr, 56 op/s
Nov 25 17:09:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:09:14 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4201384133' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:09:14 compute-0 nova_compute[254092]: 2025-11-25 17:09:14.833 254096 DEBUG oslo_concurrency.processutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:09:14 compute-0 nova_compute[254092]: 2025-11-25 17:09:14.835 254096 DEBUG nova.virt.libvirt.vif [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:09:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1686027048',display_name='tempest-TestNetworkBasicOps-server-1686027048',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1686027048',id=132,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLNdl5to0NzjjAFgFNhpsdh8Jfq3ScLPPqyXuaui5ecMeBqyO36Oalgk9S8OnNSAFKWqaym/gRqT0RKa9nF633E5JOkmAjnn0MFwmhHcgXwoFxcxFcA2kbyimTGOoXBEcA==',key_name='tempest-TestNetworkBasicOps-1297437986',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-dsazanwp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:09:08Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=e5fb0d68-c20f-4118-96eb-4e6de1db03a1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8f6c27c2-adf3-4556-9c61-69f27de29c0a", "address": "fa:16:3e:66:a8:a6", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f6c27c2-ad", "ovs_interfaceid": "8f6c27c2-adf3-4556-9c61-69f27de29c0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:09:14 compute-0 nova_compute[254092]: 2025-11-25 17:09:14.835 254096 DEBUG nova.network.os_vif_util [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "8f6c27c2-adf3-4556-9c61-69f27de29c0a", "address": "fa:16:3e:66:a8:a6", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f6c27c2-ad", "ovs_interfaceid": "8f6c27c2-adf3-4556-9c61-69f27de29c0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:09:14 compute-0 nova_compute[254092]: 2025-11-25 17:09:14.836 254096 DEBUG nova.network.os_vif_util [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:a8:a6,bridge_name='br-int',has_traffic_filtering=True,id=8f6c27c2-adf3-4556-9c61-69f27de29c0a,network=Network(877a4e79-06f0-432b-a5f9-1a0277ccd412),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f6c27c2-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:09:14 compute-0 nova_compute[254092]: 2025-11-25 17:09:14.837 254096 DEBUG nova.objects.instance [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'pci_devices' on Instance uuid e5fb0d68-c20f-4118-96eb-4e6de1db03a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:09:14 compute-0 nova_compute[254092]: 2025-11-25 17:09:14.850 254096 DEBUG nova.virt.libvirt.driver [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] End _get_guest_xml xml=<domain type="kvm">
Nov 25 17:09:14 compute-0 nova_compute[254092]:   <uuid>e5fb0d68-c20f-4118-96eb-4e6de1db03a1</uuid>
Nov 25 17:09:14 compute-0 nova_compute[254092]:   <name>instance-00000084</name>
Nov 25 17:09:14 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 17:09:14 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 17:09:14 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:09:14 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:       <nova:name>tempest-TestNetworkBasicOps-server-1686027048</nova:name>
Nov 25 17:09:14 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 17:09:13</nova:creationTime>
Nov 25 17:09:14 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 17:09:14 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 17:09:14 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 17:09:14 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 17:09:14 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:09:14 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 17:09:14 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 17:09:14 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 17:09:14 compute-0 nova_compute[254092]:         <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 17:09:14 compute-0 nova_compute[254092]:         <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 17:09:14 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 17:09:14 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 17:09:14 compute-0 nova_compute[254092]:         <nova:port uuid="8f6c27c2-adf3-4556-9c61-69f27de29c0a">
Nov 25 17:09:14 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 17:09:14 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 17:09:14 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:09:14 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 17:09:14 compute-0 nova_compute[254092]:     <system>
Nov 25 17:09:14 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 17:09:14 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 17:09:14 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:09:14 compute-0 nova_compute[254092]:       <entry name="serial">e5fb0d68-c20f-4118-96eb-4e6de1db03a1</entry>
Nov 25 17:09:14 compute-0 nova_compute[254092]:       <entry name="uuid">e5fb0d68-c20f-4118-96eb-4e6de1db03a1</entry>
Nov 25 17:09:14 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     </system>
Nov 25 17:09:14 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:09:14 compute-0 nova_compute[254092]:   <os>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:   </os>
Nov 25 17:09:14 compute-0 nova_compute[254092]:   <features>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:   </features>
Nov 25 17:09:14 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 17:09:14 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:09:14 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 17:09:14 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:09:14 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 17:09:14 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/e5fb0d68-c20f-4118-96eb-4e6de1db03a1_disk">
Nov 25 17:09:14 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:       </source>
Nov 25 17:09:14 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:09:14 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:09:14 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 17:09:14 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/e5fb0d68-c20f-4118-96eb-4e6de1db03a1_disk.config">
Nov 25 17:09:14 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:       </source>
Nov 25 17:09:14 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:09:14 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:09:14 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 17:09:14 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:66:a8:a6"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:       <target dev="tap8f6c27c2-ad"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 17:09:14 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/e5fb0d68-c20f-4118-96eb-4e6de1db03a1/console.log" append="off"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     <video>
Nov 25 17:09:14 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     </video>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 17:09:14 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 17:09:14 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 17:09:14 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:09:14 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:09:14 compute-0 nova_compute[254092]: </domain>
Nov 25 17:09:14 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 17:09:14 compute-0 nova_compute[254092]: 2025-11-25 17:09:14.850 254096 DEBUG nova.compute.manager [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Preparing to wait for external event network-vif-plugged-8f6c27c2-adf3-4556-9c61-69f27de29c0a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 17:09:14 compute-0 nova_compute[254092]: 2025-11-25 17:09:14.851 254096 DEBUG oslo_concurrency.lockutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "e5fb0d68-c20f-4118-96eb-4e6de1db03a1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:09:14 compute-0 nova_compute[254092]: 2025-11-25 17:09:14.851 254096 DEBUG oslo_concurrency.lockutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "e5fb0d68-c20f-4118-96eb-4e6de1db03a1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:09:14 compute-0 nova_compute[254092]: 2025-11-25 17:09:14.851 254096 DEBUG oslo_concurrency.lockutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "e5fb0d68-c20f-4118-96eb-4e6de1db03a1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:09:14 compute-0 nova_compute[254092]: 2025-11-25 17:09:14.851 254096 DEBUG nova.virt.libvirt.vif [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:09:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1686027048',display_name='tempest-TestNetworkBasicOps-server-1686027048',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1686027048',id=132,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLNdl5to0NzjjAFgFNhpsdh8Jfq3ScLPPqyXuaui5ecMeBqyO36Oalgk9S8OnNSAFKWqaym/gRqT0RKa9nF633E5JOkmAjnn0MFwmhHcgXwoFxcxFcA2kbyimTGOoXBEcA==',key_name='tempest-TestNetworkBasicOps-1297437986',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-dsazanwp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:09:08Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=e5fb0d68-c20f-4118-96eb-4e6de1db03a1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8f6c27c2-adf3-4556-9c61-69f27de29c0a", "address": "fa:16:3e:66:a8:a6", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f6c27c2-ad", "ovs_interfaceid": "8f6c27c2-adf3-4556-9c61-69f27de29c0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:09:14 compute-0 nova_compute[254092]: 2025-11-25 17:09:14.852 254096 DEBUG nova.network.os_vif_util [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "8f6c27c2-adf3-4556-9c61-69f27de29c0a", "address": "fa:16:3e:66:a8:a6", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f6c27c2-ad", "ovs_interfaceid": "8f6c27c2-adf3-4556-9c61-69f27de29c0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:09:14 compute-0 nova_compute[254092]: 2025-11-25 17:09:14.852 254096 DEBUG nova.network.os_vif_util [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:a8:a6,bridge_name='br-int',has_traffic_filtering=True,id=8f6c27c2-adf3-4556-9c61-69f27de29c0a,network=Network(877a4e79-06f0-432b-a5f9-1a0277ccd412),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f6c27c2-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:09:14 compute-0 nova_compute[254092]: 2025-11-25 17:09:14.852 254096 DEBUG os_vif [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:a8:a6,bridge_name='br-int',has_traffic_filtering=True,id=8f6c27c2-adf3-4556-9c61-69f27de29c0a,network=Network(877a4e79-06f0-432b-a5f9-1a0277ccd412),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f6c27c2-ad') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:09:14 compute-0 nova_compute[254092]: 2025-11-25 17:09:14.853 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:14 compute-0 nova_compute[254092]: 2025-11-25 17:09:14.853 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:09:14 compute-0 nova_compute[254092]: 2025-11-25 17:09:14.854 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:09:14 compute-0 nova_compute[254092]: 2025-11-25 17:09:14.855 254096 DEBUG nova.network.neutron [req-e31fe721-8614-4fb8-ad27-a8840e47c4dc req-6c3cbac0-225f-4e7c-9327-54c75c416625 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Updated VIF entry in instance network info cache for port 8f6c27c2-adf3-4556-9c61-69f27de29c0a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:09:14 compute-0 nova_compute[254092]: 2025-11-25 17:09:14.855 254096 DEBUG nova.network.neutron [req-e31fe721-8614-4fb8-ad27-a8840e47c4dc req-6c3cbac0-225f-4e7c-9327-54c75c416625 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Updating instance_info_cache with network_info: [{"id": "8f6c27c2-adf3-4556-9c61-69f27de29c0a", "address": "fa:16:3e:66:a8:a6", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f6c27c2-ad", "ovs_interfaceid": "8f6c27c2-adf3-4556-9c61-69f27de29c0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:09:14 compute-0 nova_compute[254092]: 2025-11-25 17:09:14.858 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:14 compute-0 nova_compute[254092]: 2025-11-25 17:09:14.858 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f6c27c2-ad, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:09:14 compute-0 nova_compute[254092]: 2025-11-25 17:09:14.859 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8f6c27c2-ad, col_values=(('external_ids', {'iface-id': '8f6c27c2-adf3-4556-9c61-69f27de29c0a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:66:a8:a6', 'vm-uuid': 'e5fb0d68-c20f-4118-96eb-4e6de1db03a1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:09:14 compute-0 nova_compute[254092]: 2025-11-25 17:09:14.861 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:14 compute-0 NetworkManager[48891]: <info>  [1764090554.8621] manager: (tap8f6c27c2-ad): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/563)
Nov 25 17:09:14 compute-0 nova_compute[254092]: 2025-11-25 17:09:14.864 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:09:14 compute-0 nova_compute[254092]: 2025-11-25 17:09:14.868 254096 DEBUG oslo_concurrency.lockutils [req-e31fe721-8614-4fb8-ad27-a8840e47c4dc req-6c3cbac0-225f-4e7c-9327-54c75c416625 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-e5fb0d68-c20f-4118-96eb-4e6de1db03a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:09:14 compute-0 nova_compute[254092]: 2025-11-25 17:09:14.869 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:14 compute-0 nova_compute[254092]: 2025-11-25 17:09:14.870 254096 INFO os_vif [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:a8:a6,bridge_name='br-int',has_traffic_filtering=True,id=8f6c27c2-adf3-4556-9c61-69f27de29c0a,network=Network(877a4e79-06f0-432b-a5f9-1a0277ccd412),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f6c27c2-ad')
Nov 25 17:09:14 compute-0 nova_compute[254092]: 2025-11-25 17:09:14.927 254096 DEBUG nova.virt.libvirt.driver [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:09:14 compute-0 nova_compute[254092]: 2025-11-25 17:09:14.928 254096 DEBUG nova.virt.libvirt.driver [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:09:14 compute-0 nova_compute[254092]: 2025-11-25 17:09:14.928 254096 DEBUG nova.virt.libvirt.driver [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No VIF found with MAC fa:16:3e:66:a8:a6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:09:14 compute-0 nova_compute[254092]: 2025-11-25 17:09:14.929 254096 INFO nova.virt.libvirt.driver [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Using config drive
Nov 25 17:09:14 compute-0 nova_compute[254092]: 2025-11-25 17:09:14.948 254096 DEBUG nova.storage.rbd_utils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image e5fb0d68-c20f-4118-96eb-4e6de1db03a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:09:15 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/351559669' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:09:15 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4201384133' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:09:15 compute-0 sleepy_newton[398141]: {
Nov 25 17:09:15 compute-0 sleepy_newton[398141]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:09:15 compute-0 sleepy_newton[398141]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:09:15 compute-0 sleepy_newton[398141]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:09:15 compute-0 sleepy_newton[398141]:         "osd_id": 1,
Nov 25 17:09:15 compute-0 sleepy_newton[398141]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:09:15 compute-0 sleepy_newton[398141]:         "type": "bluestore"
Nov 25 17:09:15 compute-0 sleepy_newton[398141]:     },
Nov 25 17:09:15 compute-0 sleepy_newton[398141]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:09:15 compute-0 sleepy_newton[398141]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:09:15 compute-0 sleepy_newton[398141]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:09:15 compute-0 sleepy_newton[398141]:         "osd_id": 2,
Nov 25 17:09:15 compute-0 sleepy_newton[398141]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:09:15 compute-0 sleepy_newton[398141]:         "type": "bluestore"
Nov 25 17:09:15 compute-0 sleepy_newton[398141]:     },
Nov 25 17:09:15 compute-0 sleepy_newton[398141]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:09:15 compute-0 sleepy_newton[398141]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:09:15 compute-0 sleepy_newton[398141]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:09:15 compute-0 sleepy_newton[398141]:         "osd_id": 0,
Nov 25 17:09:15 compute-0 sleepy_newton[398141]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:09:15 compute-0 sleepy_newton[398141]:         "type": "bluestore"
Nov 25 17:09:15 compute-0 sleepy_newton[398141]:     }
Nov 25 17:09:15 compute-0 sleepy_newton[398141]: }
Nov 25 17:09:15 compute-0 systemd[1]: libpod-63d5fcd3d22081e473f96c8d10cb95d92f0891f27cb9fec24d881379eed94c82.scope: Deactivated successfully.
Nov 25 17:09:15 compute-0 podman[398124]: 2025-11-25 17:09:15.334560502 +0000 UTC m=+1.152443321 container died 63d5fcd3d22081e473f96c8d10cb95d92f0891f27cb9fec24d881379eed94c82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_newton, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:09:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd0bd28b1d19cb90382b981199719036ab841f06839c1fef963566895076923d-merged.mount: Deactivated successfully.
Nov 25 17:09:15 compute-0 podman[398124]: 2025-11-25 17:09:15.424779677 +0000 UTC m=+1.242662496 container remove 63d5fcd3d22081e473f96c8d10cb95d92f0891f27cb9fec24d881379eed94c82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 17:09:15 compute-0 systemd[1]: libpod-conmon-63d5fcd3d22081e473f96c8d10cb95d92f0891f27cb9fec24d881379eed94c82.scope: Deactivated successfully.
Nov 25 17:09:15 compute-0 sudo[397999]: pam_unix(sudo:session): session closed for user root
Nov 25 17:09:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:09:15 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:09:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:09:15 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:09:15 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev ca6cdad2-82c9-4deb-8975-2af298c71564 does not exist
Nov 25 17:09:15 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 897da2fc-4144-4cfd-91f9-10347b5c32c2 does not exist
Nov 25 17:09:15 compute-0 nova_compute[254092]: 2025-11-25 17:09:15.511 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:15 compute-0 sudo[398251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:09:15 compute-0 sudo[398251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:09:15 compute-0 sudo[398251]: pam_unix(sudo:session): session closed for user root
Nov 25 17:09:15 compute-0 sudo[398276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:09:15 compute-0 sudo[398276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:09:15 compute-0 sudo[398276]: pam_unix(sudo:session): session closed for user root
Nov 25 17:09:15 compute-0 nova_compute[254092]: 2025-11-25 17:09:15.834 254096 INFO nova.virt.libvirt.driver [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Creating config drive at /var/lib/nova/instances/e5fb0d68-c20f-4118-96eb-4e6de1db03a1/disk.config
Nov 25 17:09:15 compute-0 nova_compute[254092]: 2025-11-25 17:09:15.841 254096 DEBUG oslo_concurrency.processutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e5fb0d68-c20f-4118-96eb-4e6de1db03a1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvfl44r_h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:09:15 compute-0 nova_compute[254092]: 2025-11-25 17:09:15.981 254096 DEBUG oslo_concurrency.processutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e5fb0d68-c20f-4118-96eb-4e6de1db03a1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvfl44r_h" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:09:16 compute-0 nova_compute[254092]: 2025-11-25 17:09:16.010 254096 DEBUG nova.storage.rbd_utils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image e5fb0d68-c20f-4118-96eb-4e6de1db03a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:09:16 compute-0 nova_compute[254092]: 2025-11-25 17:09:16.014 254096 DEBUG oslo_concurrency.processutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e5fb0d68-c20f-4118-96eb-4e6de1db03a1/disk.config e5fb0d68-c20f-4118-96eb-4e6de1db03a1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:09:16 compute-0 ceph-mon[74985]: pgmap v2571: 321 pgs: 321 active+clean; 147 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 3.5 MiB/s wr, 56 op/s
Nov 25 17:09:16 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:09:16 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:09:16 compute-0 nova_compute[254092]: 2025-11-25 17:09:16.186 254096 DEBUG oslo_concurrency.processutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e5fb0d68-c20f-4118-96eb-4e6de1db03a1/disk.config e5fb0d68-c20f-4118-96eb-4e6de1db03a1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:09:16 compute-0 nova_compute[254092]: 2025-11-25 17:09:16.187 254096 INFO nova.virt.libvirt.driver [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Deleting local config drive /var/lib/nova/instances/e5fb0d68-c20f-4118-96eb-4e6de1db03a1/disk.config because it was imported into RBD.
Nov 25 17:09:16 compute-0 kernel: tap8f6c27c2-ad: entered promiscuous mode
Nov 25 17:09:16 compute-0 NetworkManager[48891]: <info>  [1764090556.2509] manager: (tap8f6c27c2-ad): new Tun device (/org/freedesktop/NetworkManager/Devices/564)
Nov 25 17:09:16 compute-0 ovn_controller[153477]: 2025-11-25T17:09:16Z|01373|binding|INFO|Claiming lport 8f6c27c2-adf3-4556-9c61-69f27de29c0a for this chassis.
Nov 25 17:09:16 compute-0 nova_compute[254092]: 2025-11-25 17:09:16.252 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:16 compute-0 ovn_controller[153477]: 2025-11-25T17:09:16Z|01374|binding|INFO|8f6c27c2-adf3-4556-9c61-69f27de29c0a: Claiming fa:16:3e:66:a8:a6 10.100.0.4
Nov 25 17:09:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:16.260 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:a8:a6 10.100.0.4'], port_security=['fa:16:3e:66:a8:a6 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'e5fb0d68-c20f-4118-96eb-4e6de1db03a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-877a4e79-06f0-432b-a5f9-1a0277ccd412', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '2', 'neutron:security_group_ids': '16916ada-fafd-4ccd-ac83-b8a7fafd2092', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cb339b20-1b85-426d-99a4-46bcf16ea0e7, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=8f6c27c2-adf3-4556-9c61-69f27de29c0a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:09:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:16.263 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 8f6c27c2-adf3-4556-9c61-69f27de29c0a in datapath 877a4e79-06f0-432b-a5f9-1a0277ccd412 bound to our chassis
Nov 25 17:09:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:16.265 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 877a4e79-06f0-432b-a5f9-1a0277ccd412
Nov 25 17:09:16 compute-0 ovn_controller[153477]: 2025-11-25T17:09:16Z|01375|binding|INFO|Setting lport 8f6c27c2-adf3-4556-9c61-69f27de29c0a ovn-installed in OVS
Nov 25 17:09:16 compute-0 ovn_controller[153477]: 2025-11-25T17:09:16Z|01376|binding|INFO|Setting lport 8f6c27c2-adf3-4556-9c61-69f27de29c0a up in Southbound
Nov 25 17:09:16 compute-0 nova_compute[254092]: 2025-11-25 17:09:16.279 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:16.280 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c1da6e84-17d2-4256-8645-fa7a33e0e761]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:09:16 compute-0 systemd-machined[216343]: New machine qemu-166-instance-00000084.
Nov 25 17:09:16 compute-0 systemd[1]: Started Virtual Machine qemu-166-instance-00000084.
Nov 25 17:09:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:16.310 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ac44a671-b4ae-48b0-8917-4806177196aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:09:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:16.312 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[70b3b034-829f-4e8b-81b8-2042174e536b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:09:16 compute-0 systemd-udevd[398357]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:09:16 compute-0 NetworkManager[48891]: <info>  [1764090556.3270] device (tap8f6c27c2-ad): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:09:16 compute-0 NetworkManager[48891]: <info>  [1764090556.3280] device (tap8f6c27c2-ad): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:09:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:16.340 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a852738f-fae7-47f8-ad57-0f60e506ecf1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:09:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:16.355 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5ad5e965-892a-40f8-b1f4-bf40e3942772]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap877a4e79-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:16:39'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 400], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699464, 'reachable_time': 30269, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 398367, 'error': None, 'target': 'ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:09:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:16.373 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[aef77d5e-3771-4d5a-87fb-6c502c02e5f6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap877a4e79-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 699476, 'tstamp': 699476}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 398368, 'error': None, 'target': 'ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap877a4e79-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 699479, 'tstamp': 699479}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 398368, 'error': None, 'target': 'ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:09:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:16.375 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap877a4e79-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:09:16 compute-0 nova_compute[254092]: 2025-11-25 17:09:16.376 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:16 compute-0 nova_compute[254092]: 2025-11-25 17:09:16.377 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:16.377 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap877a4e79-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:09:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:16.378 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:09:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:16.378 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap877a4e79-00, col_values=(('external_ids', {'iface-id': '62f37f6b-9026-4c42-8bb8-f4b3e0610e3b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:09:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:16.378 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:09:16 compute-0 nova_compute[254092]: 2025-11-25 17:09:16.451 254096 DEBUG nova.compute.manager [req-7abc8fec-8067-4334-8021-e9a5246ac812 req-32359c8c-4578-4bb7-af17-03d3de5b5ab8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Received event network-vif-plugged-8f6c27c2-adf3-4556-9c61-69f27de29c0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:09:16 compute-0 nova_compute[254092]: 2025-11-25 17:09:16.451 254096 DEBUG oslo_concurrency.lockutils [req-7abc8fec-8067-4334-8021-e9a5246ac812 req-32359c8c-4578-4bb7-af17-03d3de5b5ab8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e5fb0d68-c20f-4118-96eb-4e6de1db03a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:09:16 compute-0 nova_compute[254092]: 2025-11-25 17:09:16.451 254096 DEBUG oslo_concurrency.lockutils [req-7abc8fec-8067-4334-8021-e9a5246ac812 req-32359c8c-4578-4bb7-af17-03d3de5b5ab8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e5fb0d68-c20f-4118-96eb-4e6de1db03a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:09:16 compute-0 nova_compute[254092]: 2025-11-25 17:09:16.452 254096 DEBUG oslo_concurrency.lockutils [req-7abc8fec-8067-4334-8021-e9a5246ac812 req-32359c8c-4578-4bb7-af17-03d3de5b5ab8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e5fb0d68-c20f-4118-96eb-4e6de1db03a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:09:16 compute-0 nova_compute[254092]: 2025-11-25 17:09:16.452 254096 DEBUG nova.compute.manager [req-7abc8fec-8067-4334-8021-e9a5246ac812 req-32359c8c-4578-4bb7-af17-03d3de5b5ab8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Processing event network-vif-plugged-8f6c27c2-adf3-4556-9c61-69f27de29c0a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 17:09:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2572: 321 pgs: 321 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 338 KiB/s rd, 3.9 MiB/s wr, 95 op/s
Nov 25 17:09:16 compute-0 nova_compute[254092]: 2025-11-25 17:09:16.744 254096 DEBUG nova.compute.manager [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 17:09:16 compute-0 nova_compute[254092]: 2025-11-25 17:09:16.745 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090556.7433374, e5fb0d68-c20f-4118-96eb-4e6de1db03a1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:09:16 compute-0 nova_compute[254092]: 2025-11-25 17:09:16.745 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] VM Started (Lifecycle Event)
Nov 25 17:09:16 compute-0 nova_compute[254092]: 2025-11-25 17:09:16.751 254096 DEBUG nova.virt.libvirt.driver [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 17:09:16 compute-0 nova_compute[254092]: 2025-11-25 17:09:16.756 254096 INFO nova.virt.libvirt.driver [-] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Instance spawned successfully.
Nov 25 17:09:16 compute-0 nova_compute[254092]: 2025-11-25 17:09:16.756 254096 DEBUG nova.virt.libvirt.driver [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 17:09:16 compute-0 nova_compute[254092]: 2025-11-25 17:09:16.768 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:09:16 compute-0 nova_compute[254092]: 2025-11-25 17:09:16.774 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:09:16 compute-0 nova_compute[254092]: 2025-11-25 17:09:16.778 254096 DEBUG nova.virt.libvirt.driver [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:09:16 compute-0 nova_compute[254092]: 2025-11-25 17:09:16.779 254096 DEBUG nova.virt.libvirt.driver [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:09:16 compute-0 nova_compute[254092]: 2025-11-25 17:09:16.779 254096 DEBUG nova.virt.libvirt.driver [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:09:16 compute-0 nova_compute[254092]: 2025-11-25 17:09:16.779 254096 DEBUG nova.virt.libvirt.driver [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:09:16 compute-0 nova_compute[254092]: 2025-11-25 17:09:16.780 254096 DEBUG nova.virt.libvirt.driver [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:09:16 compute-0 nova_compute[254092]: 2025-11-25 17:09:16.780 254096 DEBUG nova.virt.libvirt.driver [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:09:16 compute-0 nova_compute[254092]: 2025-11-25 17:09:16.802 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:09:16 compute-0 nova_compute[254092]: 2025-11-25 17:09:16.802 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090556.7445712, e5fb0d68-c20f-4118-96eb-4e6de1db03a1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:09:16 compute-0 nova_compute[254092]: 2025-11-25 17:09:16.802 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] VM Paused (Lifecycle Event)
Nov 25 17:09:16 compute-0 nova_compute[254092]: 2025-11-25 17:09:16.825 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:09:16 compute-0 nova_compute[254092]: 2025-11-25 17:09:16.828 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090556.7502584, e5fb0d68-c20f-4118-96eb-4e6de1db03a1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:09:16 compute-0 nova_compute[254092]: 2025-11-25 17:09:16.828 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] VM Resumed (Lifecycle Event)
Nov 25 17:09:16 compute-0 nova_compute[254092]: 2025-11-25 17:09:16.842 254096 INFO nova.compute.manager [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Took 7.96 seconds to spawn the instance on the hypervisor.
Nov 25 17:09:16 compute-0 nova_compute[254092]: 2025-11-25 17:09:16.842 254096 DEBUG nova.compute.manager [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:09:16 compute-0 nova_compute[254092]: 2025-11-25 17:09:16.843 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:09:16 compute-0 nova_compute[254092]: 2025-11-25 17:09:16.850 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:09:16 compute-0 nova_compute[254092]: 2025-11-25 17:09:16.877 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:09:16 compute-0 nova_compute[254092]: 2025-11-25 17:09:16.904 254096 INFO nova.compute.manager [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Took 9.18 seconds to build instance.
Nov 25 17:09:16 compute-0 nova_compute[254092]: 2025-11-25 17:09:16.917 254096 DEBUG oslo_concurrency.lockutils [None req-a2ef4327-dca4-4269-8f27-1c10e0f3db0e e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "e5fb0d68-c20f-4118-96eb-4e6de1db03a1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.332s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:09:17 compute-0 nova_compute[254092]: 2025-11-25 17:09:17.497 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:18 compute-0 ceph-mon[74985]: pgmap v2572: 321 pgs: 321 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 338 KiB/s rd, 3.9 MiB/s wr, 95 op/s
Nov 25 17:09:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2573: 321 pgs: 321 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 338 KiB/s rd, 3.9 MiB/s wr, 95 op/s
Nov 25 17:09:18 compute-0 nova_compute[254092]: 2025-11-25 17:09:18.585 254096 DEBUG nova.compute.manager [req-adcbb1d1-1f33-4ce4-81ce-ad0675e2d372 req-eb2a6809-a2a7-4e35-8d3c-ef57c51588fa a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Received event network-vif-plugged-8f6c27c2-adf3-4556-9c61-69f27de29c0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:09:18 compute-0 nova_compute[254092]: 2025-11-25 17:09:18.585 254096 DEBUG oslo_concurrency.lockutils [req-adcbb1d1-1f33-4ce4-81ce-ad0675e2d372 req-eb2a6809-a2a7-4e35-8d3c-ef57c51588fa a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e5fb0d68-c20f-4118-96eb-4e6de1db03a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:09:18 compute-0 nova_compute[254092]: 2025-11-25 17:09:18.585 254096 DEBUG oslo_concurrency.lockutils [req-adcbb1d1-1f33-4ce4-81ce-ad0675e2d372 req-eb2a6809-a2a7-4e35-8d3c-ef57c51588fa a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e5fb0d68-c20f-4118-96eb-4e6de1db03a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:09:18 compute-0 nova_compute[254092]: 2025-11-25 17:09:18.586 254096 DEBUG oslo_concurrency.lockutils [req-adcbb1d1-1f33-4ce4-81ce-ad0675e2d372 req-eb2a6809-a2a7-4e35-8d3c-ef57c51588fa a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e5fb0d68-c20f-4118-96eb-4e6de1db03a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:09:18 compute-0 nova_compute[254092]: 2025-11-25 17:09:18.586 254096 DEBUG nova.compute.manager [req-adcbb1d1-1f33-4ce4-81ce-ad0675e2d372 req-eb2a6809-a2a7-4e35-8d3c-ef57c51588fa a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] No waiting events found dispatching network-vif-plugged-8f6c27c2-adf3-4556-9c61-69f27de29c0a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:09:18 compute-0 nova_compute[254092]: 2025-11-25 17:09:18.586 254096 WARNING nova.compute.manager [req-adcbb1d1-1f33-4ce4-81ce-ad0675e2d372 req-eb2a6809-a2a7-4e35-8d3c-ef57c51588fa a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Received unexpected event network-vif-plugged-8f6c27c2-adf3-4556-9c61-69f27de29c0a for instance with vm_state active and task_state None.
Nov 25 17:09:18 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:09:19 compute-0 ceph-mon[74985]: pgmap v2573: 321 pgs: 321 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 338 KiB/s rd, 3.9 MiB/s wr, 95 op/s
Nov 25 17:09:19 compute-0 nova_compute[254092]: 2025-11-25 17:09:19.914 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2574: 321 pgs: 321 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 733 KiB/s rd, 3.9 MiB/s wr, 109 op/s
Nov 25 17:09:20 compute-0 nova_compute[254092]: 2025-11-25 17:09:20.512 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:21 compute-0 ceph-mon[74985]: pgmap v2574: 321 pgs: 321 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 733 KiB/s rd, 3.9 MiB/s wr, 109 op/s
Nov 25 17:09:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2575: 321 pgs: 321 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.0 MiB/s wr, 161 op/s
Nov 25 17:09:22 compute-0 podman[398412]: 2025-11-25 17:09:22.676186768 +0000 UTC m=+0.094221324 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 25 17:09:22 compute-0 podman[398414]: 2025-11-25 17:09:22.676187788 +0000 UTC m=+0.090027910 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 17:09:22 compute-0 podman[398413]: 2025-11-25 17:09:22.680734954 +0000 UTC m=+0.090143344 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 17:09:23 compute-0 nova_compute[254092]: 2025-11-25 17:09:23.253 254096 DEBUG nova.compute.manager [req-8f32222b-cf70-47b5-ab9a-7b7423bab85d req-cec9ba60-1c9c-44b5-ba98-a1652634b2e9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Received event network-changed-8f6c27c2-adf3-4556-9c61-69f27de29c0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:09:23 compute-0 nova_compute[254092]: 2025-11-25 17:09:23.253 254096 DEBUG nova.compute.manager [req-8f32222b-cf70-47b5-ab9a-7b7423bab85d req-cec9ba60-1c9c-44b5-ba98-a1652634b2e9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Refreshing instance network info cache due to event network-changed-8f6c27c2-adf3-4556-9c61-69f27de29c0a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:09:23 compute-0 nova_compute[254092]: 2025-11-25 17:09:23.253 254096 DEBUG oslo_concurrency.lockutils [req-8f32222b-cf70-47b5-ab9a-7b7423bab85d req-cec9ba60-1c9c-44b5-ba98-a1652634b2e9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-e5fb0d68-c20f-4118-96eb-4e6de1db03a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:09:23 compute-0 nova_compute[254092]: 2025-11-25 17:09:23.253 254096 DEBUG oslo_concurrency.lockutils [req-8f32222b-cf70-47b5-ab9a-7b7423bab85d req-cec9ba60-1c9c-44b5-ba98-a1652634b2e9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-e5fb0d68-c20f-4118-96eb-4e6de1db03a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:09:23 compute-0 nova_compute[254092]: 2025-11-25 17:09:23.254 254096 DEBUG nova.network.neutron [req-8f32222b-cf70-47b5-ab9a-7b7423bab85d req-cec9ba60-1c9c-44b5-ba98-a1652634b2e9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Refreshing network info cache for port 8f6c27c2-adf3-4556-9c61-69f27de29c0a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:09:23 compute-0 ceph-mon[74985]: pgmap v2575: 321 pgs: 321 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.0 MiB/s wr, 161 op/s
Nov 25 17:09:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:09:24 compute-0 nova_compute[254092]: 2025-11-25 17:09:24.345 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2576: 321 pgs: 321 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 419 KiB/s wr, 113 op/s
Nov 25 17:09:24 compute-0 nova_compute[254092]: 2025-11-25 17:09:24.915 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:24 compute-0 nova_compute[254092]: 2025-11-25 17:09:24.971 254096 DEBUG nova.network.neutron [req-8f32222b-cf70-47b5-ab9a-7b7423bab85d req-cec9ba60-1c9c-44b5-ba98-a1652634b2e9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Updated VIF entry in instance network info cache for port 8f6c27c2-adf3-4556-9c61-69f27de29c0a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:09:24 compute-0 nova_compute[254092]: 2025-11-25 17:09:24.971 254096 DEBUG nova.network.neutron [req-8f32222b-cf70-47b5-ab9a-7b7423bab85d req-cec9ba60-1c9c-44b5-ba98-a1652634b2e9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Updating instance_info_cache with network_info: [{"id": "8f6c27c2-adf3-4556-9c61-69f27de29c0a", "address": "fa:16:3e:66:a8:a6", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f6c27c2-ad", "ovs_interfaceid": "8f6c27c2-adf3-4556-9c61-69f27de29c0a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:09:25 compute-0 nova_compute[254092]: 2025-11-25 17:09:24.999 254096 DEBUG oslo_concurrency.lockutils [req-8f32222b-cf70-47b5-ab9a-7b7423bab85d req-cec9ba60-1c9c-44b5-ba98-a1652634b2e9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-e5fb0d68-c20f-4118-96eb-4e6de1db03a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:09:25 compute-0 nova_compute[254092]: 2025-11-25 17:09:25.514 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:25.812 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:09:25 compute-0 nova_compute[254092]: 2025-11-25 17:09:25.813 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:25.813 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 17:09:25 compute-0 ceph-mon[74985]: pgmap v2576: 321 pgs: 321 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 419 KiB/s wr, 113 op/s
Nov 25 17:09:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2577: 321 pgs: 321 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 423 KiB/s wr, 113 op/s
Nov 25 17:09:28 compute-0 ceph-mon[74985]: pgmap v2577: 321 pgs: 321 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 423 KiB/s wr, 113 op/s
Nov 25 17:09:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2578: 321 pgs: 321 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 74 op/s
Nov 25 17:09:28 compute-0 nova_compute[254092]: 2025-11-25 17:09:28.510 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:09:28 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:09:29 compute-0 nova_compute[254092]: 2025-11-25 17:09:29.918 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:30 compute-0 ceph-mon[74985]: pgmap v2578: 321 pgs: 321 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 74 op/s
Nov 25 17:09:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2579: 321 pgs: 321 active+clean; 190 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.4 MiB/s wr, 108 op/s
Nov 25 17:09:30 compute-0 nova_compute[254092]: 2025-11-25 17:09:30.517 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:30 compute-0 ovn_controller[153477]: 2025-11-25T17:09:30Z|00160|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:66:a8:a6 10.100.0.4
Nov 25 17:09:30 compute-0 ovn_controller[153477]: 2025-11-25T17:09:30Z|00161|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:66:a8:a6 10.100.0.4
Nov 25 17:09:31 compute-0 nova_compute[254092]: 2025-11-25 17:09:31.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:09:32 compute-0 ceph-mon[74985]: pgmap v2579: 321 pgs: 321 active+clean; 190 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.4 MiB/s wr, 108 op/s
Nov 25 17:09:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2580: 321 pgs: 321 active+clean; 200 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 124 op/s
Nov 25 17:09:33 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:09:34 compute-0 ceph-mon[74985]: pgmap v2580: 321 pgs: 321 active+clean; 200 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 124 op/s
Nov 25 17:09:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2581: 321 pgs: 321 active+clean; 200 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:09:34 compute-0 nova_compute[254092]: 2025-11-25 17:09:34.801 254096 DEBUG oslo_concurrency.lockutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Acquiring lock "6a4ffa69-afb1-46b7-9109-8edeb9481103" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:09:34 compute-0 nova_compute[254092]: 2025-11-25 17:09:34.802 254096 DEBUG oslo_concurrency.lockutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Lock "6a4ffa69-afb1-46b7-9109-8edeb9481103" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:09:34 compute-0 nova_compute[254092]: 2025-11-25 17:09:34.816 254096 DEBUG nova.compute.manager [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 17:09:34 compute-0 nova_compute[254092]: 2025-11-25 17:09:34.890 254096 DEBUG oslo_concurrency.lockutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:09:34 compute-0 nova_compute[254092]: 2025-11-25 17:09:34.891 254096 DEBUG oslo_concurrency.lockutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:09:34 compute-0 nova_compute[254092]: 2025-11-25 17:09:34.899 254096 DEBUG nova.virt.hardware [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 17:09:34 compute-0 nova_compute[254092]: 2025-11-25 17:09:34.899 254096 INFO nova.compute.claims [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Claim successful on node compute-0.ctlplane.example.com
Nov 25 17:09:34 compute-0 nova_compute[254092]: 2025-11-25 17:09:34.921 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:35 compute-0 nova_compute[254092]: 2025-11-25 17:09:35.193 254096 DEBUG oslo_concurrency.processutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:09:35 compute-0 nova_compute[254092]: 2025-11-25 17:09:35.519 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:09:35 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4099504700' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:09:35 compute-0 nova_compute[254092]: 2025-11-25 17:09:35.730 254096 DEBUG oslo_concurrency.processutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:09:35 compute-0 nova_compute[254092]: 2025-11-25 17:09:35.739 254096 DEBUG nova.compute.provider_tree [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:09:35 compute-0 nova_compute[254092]: 2025-11-25 17:09:35.755 254096 DEBUG nova.scheduler.client.report [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:09:35 compute-0 nova_compute[254092]: 2025-11-25 17:09:35.779 254096 DEBUG oslo_concurrency.lockutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.888s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:09:35 compute-0 nova_compute[254092]: 2025-11-25 17:09:35.781 254096 DEBUG nova.compute.manager [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 17:09:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:35.815 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:09:35 compute-0 nova_compute[254092]: 2025-11-25 17:09:35.847 254096 DEBUG nova.compute.manager [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 17:09:35 compute-0 nova_compute[254092]: 2025-11-25 17:09:35.848 254096 DEBUG nova.network.neutron [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 17:09:35 compute-0 nova_compute[254092]: 2025-11-25 17:09:35.871 254096 INFO nova.virt.libvirt.driver [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 17:09:35 compute-0 nova_compute[254092]: 2025-11-25 17:09:35.889 254096 DEBUG nova.compute.manager [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 17:09:35 compute-0 nova_compute[254092]: 2025-11-25 17:09:35.985 254096 DEBUG nova.compute.manager [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 17:09:35 compute-0 nova_compute[254092]: 2025-11-25 17:09:35.987 254096 DEBUG nova.virt.libvirt.driver [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 17:09:35 compute-0 nova_compute[254092]: 2025-11-25 17:09:35.988 254096 INFO nova.virt.libvirt.driver [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Creating image(s)
Nov 25 17:09:36 compute-0 nova_compute[254092]: 2025-11-25 17:09:36.025 254096 DEBUG nova.storage.rbd_utils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] rbd image 6a4ffa69-afb1-46b7-9109-8edeb9481103_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:09:36 compute-0 nova_compute[254092]: 2025-11-25 17:09:36.056 254096 DEBUG nova.storage.rbd_utils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] rbd image 6a4ffa69-afb1-46b7-9109-8edeb9481103_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:09:36 compute-0 nova_compute[254092]: 2025-11-25 17:09:36.081 254096 DEBUG nova.storage.rbd_utils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] rbd image 6a4ffa69-afb1-46b7-9109-8edeb9481103_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:09:36 compute-0 nova_compute[254092]: 2025-11-25 17:09:36.085 254096 DEBUG oslo_concurrency.processutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:09:36 compute-0 nova_compute[254092]: 2025-11-25 17:09:36.122 254096 DEBUG nova.policy [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '24306f395dd542b6a11b3bd0faadd4ad', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '49df21ca46894c8fb4040c7e9eccaef4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 17:09:36 compute-0 nova_compute[254092]: 2025-11-25 17:09:36.164 254096 DEBUG oslo_concurrency.processutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:09:36 compute-0 nova_compute[254092]: 2025-11-25 17:09:36.165 254096 DEBUG oslo_concurrency.lockutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:09:36 compute-0 nova_compute[254092]: 2025-11-25 17:09:36.166 254096 DEBUG oslo_concurrency.lockutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:09:36 compute-0 nova_compute[254092]: 2025-11-25 17:09:36.166 254096 DEBUG oslo_concurrency.lockutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:09:36 compute-0 nova_compute[254092]: 2025-11-25 17:09:36.189 254096 DEBUG nova.storage.rbd_utils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] rbd image 6a4ffa69-afb1-46b7-9109-8edeb9481103_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:09:36 compute-0 nova_compute[254092]: 2025-11-25 17:09:36.193 254096 DEBUG oslo_concurrency.processutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 6a4ffa69-afb1-46b7-9109-8edeb9481103_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:09:36 compute-0 ceph-mon[74985]: pgmap v2581: 321 pgs: 321 active+clean; 200 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:09:36 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4099504700' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:09:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2582: 321 pgs: 321 active+clean; 200 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:09:36 compute-0 nova_compute[254092]: 2025-11-25 17:09:36.538 254096 DEBUG oslo_concurrency.processutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 6a4ffa69-afb1-46b7-9109-8edeb9481103_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.346s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:09:36 compute-0 nova_compute[254092]: 2025-11-25 17:09:36.614 254096 DEBUG nova.storage.rbd_utils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] resizing rbd image 6a4ffa69-afb1-46b7-9109-8edeb9481103_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 17:09:36 compute-0 nova_compute[254092]: 2025-11-25 17:09:36.731 254096 DEBUG nova.objects.instance [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Lazy-loading 'migration_context' on Instance uuid 6a4ffa69-afb1-46b7-9109-8edeb9481103 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:09:36 compute-0 nova_compute[254092]: 2025-11-25 17:09:36.745 254096 DEBUG nova.virt.libvirt.driver [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 17:09:36 compute-0 nova_compute[254092]: 2025-11-25 17:09:36.746 254096 DEBUG nova.virt.libvirt.driver [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Ensure instance console log exists: /var/lib/nova/instances/6a4ffa69-afb1-46b7-9109-8edeb9481103/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 17:09:36 compute-0 nova_compute[254092]: 2025-11-25 17:09:36.746 254096 DEBUG oslo_concurrency.lockutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:09:36 compute-0 nova_compute[254092]: 2025-11-25 17:09:36.747 254096 DEBUG oslo_concurrency.lockutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:09:36 compute-0 nova_compute[254092]: 2025-11-25 17:09:36.748 254096 DEBUG oslo_concurrency.lockutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:09:36 compute-0 nova_compute[254092]: 2025-11-25 17:09:36.847 254096 DEBUG nova.network.neutron [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Successfully created port: 507a4b35-dd4f-4777-a88c-c40597fe827b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 17:09:37 compute-0 ceph-mon[74985]: pgmap v2582: 321 pgs: 321 active+clean; 200 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:09:37 compute-0 nova_compute[254092]: 2025-11-25 17:09:37.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:09:37 compute-0 nova_compute[254092]: 2025-11-25 17:09:37.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:09:37 compute-0 nova_compute[254092]: 2025-11-25 17:09:37.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:09:37 compute-0 nova_compute[254092]: 2025-11-25 17:09:37.509 254096 DEBUG nova.network.neutron [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Successfully updated port: 507a4b35-dd4f-4777-a88c-c40597fe827b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:09:37 compute-0 nova_compute[254092]: 2025-11-25 17:09:37.528 254096 DEBUG oslo_concurrency.lockutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Acquiring lock "refresh_cache-6a4ffa69-afb1-46b7-9109-8edeb9481103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:09:37 compute-0 nova_compute[254092]: 2025-11-25 17:09:37.529 254096 DEBUG oslo_concurrency.lockutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Acquired lock "refresh_cache-6a4ffa69-afb1-46b7-9109-8edeb9481103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:09:37 compute-0 nova_compute[254092]: 2025-11-25 17:09:37.529 254096 DEBUG nova.network.neutron [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 17:09:37 compute-0 nova_compute[254092]: 2025-11-25 17:09:37.614 254096 DEBUG nova.compute.manager [req-78af7c70-b4f0-46e9-870d-e78c3d8ffca6 req-42673a01-8626-43b3-889e-0da9a35ab6b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Received event network-changed-507a4b35-dd4f-4777-a88c-c40597fe827b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:09:37 compute-0 nova_compute[254092]: 2025-11-25 17:09:37.614 254096 DEBUG nova.compute.manager [req-78af7c70-b4f0-46e9-870d-e78c3d8ffca6 req-42673a01-8626-43b3-889e-0da9a35ab6b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Refreshing instance network info cache due to event network-changed-507a4b35-dd4f-4777-a88c-c40597fe827b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:09:37 compute-0 nova_compute[254092]: 2025-11-25 17:09:37.615 254096 DEBUG oslo_concurrency.lockutils [req-78af7c70-b4f0-46e9-870d-e78c3d8ffca6 req-42673a01-8626-43b3-889e-0da9a35ab6b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-6a4ffa69-afb1-46b7-9109-8edeb9481103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:09:38 compute-0 nova_compute[254092]: 2025-11-25 17:09:38.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:09:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2583: 321 pgs: 321 active+clean; 200 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:09:38 compute-0 nova_compute[254092]: 2025-11-25 17:09:38.748 254096 DEBUG nova.network.neutron [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:09:38 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:09:39 compute-0 nova_compute[254092]: 2025-11-25 17:09:39.161 254096 INFO nova.compute.manager [None req-1da6962b-6e80-454f-85be-4d52380b7ed1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Get console output
Nov 25 17:09:39 compute-0 nova_compute[254092]: 2025-11-25 17:09:39.182 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 17:09:39 compute-0 nova_compute[254092]: 2025-11-25 17:09:39.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:09:39 compute-0 nova_compute[254092]: 2025-11-25 17:09:39.600 254096 DEBUG nova.network.neutron [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Updating instance_info_cache with network_info: [{"id": "507a4b35-dd4f-4777-a88c-c40597fe827b", "address": "fa:16:3e:d4:22:50", "network": {"id": "e024dc03-b986-42e0-ad9c-68e6318af670", "bridge": "br-int", "label": "tempest-TestServerBasicOps-808271538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "49df21ca46894c8fb4040c7e9eccaef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap507a4b35-dd", "ovs_interfaceid": "507a4b35-dd4f-4777-a88c-c40597fe827b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:09:39 compute-0 nova_compute[254092]: 2025-11-25 17:09:39.618 254096 DEBUG oslo_concurrency.lockutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Releasing lock "refresh_cache-6a4ffa69-afb1-46b7-9109-8edeb9481103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:09:39 compute-0 nova_compute[254092]: 2025-11-25 17:09:39.619 254096 DEBUG nova.compute.manager [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Instance network_info: |[{"id": "507a4b35-dd4f-4777-a88c-c40597fe827b", "address": "fa:16:3e:d4:22:50", "network": {"id": "e024dc03-b986-42e0-ad9c-68e6318af670", "bridge": "br-int", "label": "tempest-TestServerBasicOps-808271538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "49df21ca46894c8fb4040c7e9eccaef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap507a4b35-dd", "ovs_interfaceid": "507a4b35-dd4f-4777-a88c-c40597fe827b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 17:09:39 compute-0 nova_compute[254092]: 2025-11-25 17:09:39.620 254096 DEBUG oslo_concurrency.lockutils [req-78af7c70-b4f0-46e9-870d-e78c3d8ffca6 req-42673a01-8626-43b3-889e-0da9a35ab6b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-6a4ffa69-afb1-46b7-9109-8edeb9481103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:09:39 compute-0 nova_compute[254092]: 2025-11-25 17:09:39.621 254096 DEBUG nova.network.neutron [req-78af7c70-b4f0-46e9-870d-e78c3d8ffca6 req-42673a01-8626-43b3-889e-0da9a35ab6b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Refreshing network info cache for port 507a4b35-dd4f-4777-a88c-c40597fe827b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:09:39 compute-0 nova_compute[254092]: 2025-11-25 17:09:39.628 254096 DEBUG nova.virt.libvirt.driver [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Start _get_guest_xml network_info=[{"id": "507a4b35-dd4f-4777-a88c-c40597fe827b", "address": "fa:16:3e:d4:22:50", "network": {"id": "e024dc03-b986-42e0-ad9c-68e6318af670", "bridge": "br-int", "label": "tempest-TestServerBasicOps-808271538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "49df21ca46894c8fb4040c7e9eccaef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap507a4b35-dd", "ovs_interfaceid": "507a4b35-dd4f-4777-a88c-c40597fe827b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 17:09:39 compute-0 nova_compute[254092]: 2025-11-25 17:09:39.634 254096 WARNING nova.virt.libvirt.driver [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:09:39 compute-0 nova_compute[254092]: 2025-11-25 17:09:39.642 254096 DEBUG nova.virt.libvirt.host [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 17:09:39 compute-0 nova_compute[254092]: 2025-11-25 17:09:39.644 254096 DEBUG nova.virt.libvirt.host [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 17:09:39 compute-0 nova_compute[254092]: 2025-11-25 17:09:39.654 254096 DEBUG nova.virt.libvirt.host [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 17:09:39 compute-0 nova_compute[254092]: 2025-11-25 17:09:39.656 254096 DEBUG nova.virt.libvirt.host [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 17:09:39 compute-0 nova_compute[254092]: 2025-11-25 17:09:39.657 254096 DEBUG nova.virt.libvirt.driver [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 17:09:39 compute-0 nova_compute[254092]: 2025-11-25 17:09:39.657 254096 DEBUG nova.virt.hardware [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 17:09:39 compute-0 nova_compute[254092]: 2025-11-25 17:09:39.659 254096 DEBUG nova.virt.hardware [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 17:09:39 compute-0 nova_compute[254092]: 2025-11-25 17:09:39.659 254096 DEBUG nova.virt.hardware [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 17:09:39 compute-0 nova_compute[254092]: 2025-11-25 17:09:39.659 254096 DEBUG nova.virt.hardware [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 17:09:39 compute-0 nova_compute[254092]: 2025-11-25 17:09:39.660 254096 DEBUG nova.virt.hardware [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 17:09:39 compute-0 nova_compute[254092]: 2025-11-25 17:09:39.660 254096 DEBUG nova.virt.hardware [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 17:09:39 compute-0 nova_compute[254092]: 2025-11-25 17:09:39.661 254096 DEBUG nova.virt.hardware [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 17:09:39 compute-0 nova_compute[254092]: 2025-11-25 17:09:39.661 254096 DEBUG nova.virt.hardware [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 17:09:39 compute-0 nova_compute[254092]: 2025-11-25 17:09:39.662 254096 DEBUG nova.virt.hardware [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 17:09:39 compute-0 nova_compute[254092]: 2025-11-25 17:09:39.662 254096 DEBUG nova.virt.hardware [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 17:09:39 compute-0 nova_compute[254092]: 2025-11-25 17:09:39.663 254096 DEBUG nova.virt.hardware [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 17:09:39 compute-0 nova_compute[254092]: 2025-11-25 17:09:39.669 254096 DEBUG oslo_concurrency.processutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:09:39 compute-0 ceph-mon[74985]: pgmap v2583: 321 pgs: 321 active+clean; 200 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:09:39 compute-0 nova_compute[254092]: 2025-11-25 17:09:39.925 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:40 compute-0 nova_compute[254092]: 2025-11-25 17:09:40.029 254096 DEBUG nova.compute.manager [req-678570ca-5d49-4fc7-a73c-e0917360d9d8 req-7a9d7730-cd3f-4fcf-b8d6-2fb75e82aadd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Received event network-changed-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:09:40 compute-0 nova_compute[254092]: 2025-11-25 17:09:40.029 254096 DEBUG nova.compute.manager [req-678570ca-5d49-4fc7-a73c-e0917360d9d8 req-7a9d7730-cd3f-4fcf-b8d6-2fb75e82aadd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Refreshing instance network info cache due to event network-changed-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:09:40 compute-0 nova_compute[254092]: 2025-11-25 17:09:40.030 254096 DEBUG oslo_concurrency.lockutils [req-678570ca-5d49-4fc7-a73c-e0917360d9d8 req-7a9d7730-cd3f-4fcf-b8d6-2fb75e82aadd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-e9a105a6-90a1-4e21-9296-61a55e2ceec3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:09:40 compute-0 nova_compute[254092]: 2025-11-25 17:09:40.030 254096 DEBUG oslo_concurrency.lockutils [req-678570ca-5d49-4fc7-a73c-e0917360d9d8 req-7a9d7730-cd3f-4fcf-b8d6-2fb75e82aadd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-e9a105a6-90a1-4e21-9296-61a55e2ceec3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:09:40 compute-0 nova_compute[254092]: 2025-11-25 17:09:40.030 254096 DEBUG nova.network.neutron [req-678570ca-5d49-4fc7-a73c-e0917360d9d8 req-7a9d7730-cd3f-4fcf-b8d6-2fb75e82aadd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Refreshing network info cache for port 9b06b5b4-bc07-48ef-b51a-ecf0abc558ab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:09:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:09:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:09:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:09:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:09:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:09:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:09:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:09:40
Nov 25 17:09:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:09:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:09:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'volumes', 'default.rgw.control', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', 'images', 'cephfs.cephfs.data', 'default.rgw.log', 'backups']
Nov 25 17:09:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:09:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:09:40 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/826761267' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:09:40 compute-0 nova_compute[254092]: 2025-11-25 17:09:40.211 254096 DEBUG oslo_concurrency.processutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:09:40 compute-0 nova_compute[254092]: 2025-11-25 17:09:40.236 254096 DEBUG nova.storage.rbd_utils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] rbd image 6a4ffa69-afb1-46b7-9109-8edeb9481103_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:09:40 compute-0 nova_compute[254092]: 2025-11-25 17:09:40.243 254096 DEBUG oslo_concurrency.processutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:09:40 compute-0 nova_compute[254092]: 2025-11-25 17:09:40.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:09:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2584: 321 pgs: 321 active+clean; 232 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 3.2 MiB/s wr, 80 op/s
Nov 25 17:09:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:09:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:09:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:09:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:09:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:09:40 compute-0 nova_compute[254092]: 2025-11-25 17:09:40.525 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:09:40 compute-0 nova_compute[254092]: 2025-11-25 17:09:40.527 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:09:40 compute-0 nova_compute[254092]: 2025-11-25 17:09:40.528 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:09:40 compute-0 nova_compute[254092]: 2025-11-25 17:09:40.529 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:09:40 compute-0 nova_compute[254092]: 2025-11-25 17:09:40.530 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:09:40 compute-0 nova_compute[254092]: 2025-11-25 17:09:40.605 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:09:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:09:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:09:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:09:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:09:40 compute-0 nova_compute[254092]: 2025-11-25 17:09:40.696 254096 DEBUG nova.network.neutron [req-78af7c70-b4f0-46e9-870d-e78c3d8ffca6 req-42673a01-8626-43b3-889e-0da9a35ab6b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Updated VIF entry in instance network info cache for port 507a4b35-dd4f-4777-a88c-c40597fe827b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:09:40 compute-0 nova_compute[254092]: 2025-11-25 17:09:40.698 254096 DEBUG nova.network.neutron [req-78af7c70-b4f0-46e9-870d-e78c3d8ffca6 req-42673a01-8626-43b3-889e-0da9a35ab6b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Updating instance_info_cache with network_info: [{"id": "507a4b35-dd4f-4777-a88c-c40597fe827b", "address": "fa:16:3e:d4:22:50", "network": {"id": "e024dc03-b986-42e0-ad9c-68e6318af670", "bridge": "br-int", "label": "tempest-TestServerBasicOps-808271538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "49df21ca46894c8fb4040c7e9eccaef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap507a4b35-dd", "ovs_interfaceid": "507a4b35-dd4f-4777-a88c-c40597fe827b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:09:40 compute-0 nova_compute[254092]: 2025-11-25 17:09:40.714 254096 DEBUG oslo_concurrency.lockutils [req-78af7c70-b4f0-46e9-870d-e78c3d8ffca6 req-42673a01-8626-43b3-889e-0da9a35ab6b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-6a4ffa69-afb1-46b7-9109-8edeb9481103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:09:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:09:40 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2765816984' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:09:40 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/826761267' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:09:40 compute-0 nova_compute[254092]: 2025-11-25 17:09:40.797 254096 DEBUG oslo_concurrency.processutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:09:40 compute-0 nova_compute[254092]: 2025-11-25 17:09:40.800 254096 DEBUG nova.virt.libvirt.vif [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:09:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-955582536',display_name='tempest-TestServerBasicOps-server-955582536',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-955582536',id=133,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDdtyaxmMVL8CAc7S++Y8gbOFbk1LCqSryz68UQakJZFbiX886AD33j7kjiNk5kHLWkm6AP4LpqGE2BaOqlADwTjW6lF33LqPINwV3iZ2r2irHV9h1AdPWEego3dYIkYew==',key_name='tempest-TestServerBasicOps-2071633386',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='49df21ca46894c8fb4040c7e9eccaef4',ramdisk_id='',reservation_id='r-w0qyy7pr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1965980686',owner_user_name='tempest-TestServerBasicOps-1965980686-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:09:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='24306f395dd542b6a11b3bd0faadd4ad',uuid=6a4ffa69-afb1-46b7-9109-8edeb9481103,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "507a4b35-dd4f-4777-a88c-c40597fe827b", "address": "fa:16:3e:d4:22:50", "network": {"id": "e024dc03-b986-42e0-ad9c-68e6318af670", "bridge": "br-int", "label": "tempest-TestServerBasicOps-808271538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "49df21ca46894c8fb4040c7e9eccaef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap507a4b35-dd", "ovs_interfaceid": "507a4b35-dd4f-4777-a88c-c40597fe827b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:09:40 compute-0 nova_compute[254092]: 2025-11-25 17:09:40.801 254096 DEBUG nova.network.os_vif_util [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Converting VIF {"id": "507a4b35-dd4f-4777-a88c-c40597fe827b", "address": "fa:16:3e:d4:22:50", "network": {"id": "e024dc03-b986-42e0-ad9c-68e6318af670", "bridge": "br-int", "label": "tempest-TestServerBasicOps-808271538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "49df21ca46894c8fb4040c7e9eccaef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap507a4b35-dd", "ovs_interfaceid": "507a4b35-dd4f-4777-a88c-c40597fe827b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:09:40 compute-0 nova_compute[254092]: 2025-11-25 17:09:40.802 254096 DEBUG nova.network.os_vif_util [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d4:22:50,bridge_name='br-int',has_traffic_filtering=True,id=507a4b35-dd4f-4777-a88c-c40597fe827b,network=Network(e024dc03-b986-42e0-ad9c-68e6318af670),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap507a4b35-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:09:40 compute-0 nova_compute[254092]: 2025-11-25 17:09:40.804 254096 DEBUG nova.objects.instance [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6a4ffa69-afb1-46b7-9109-8edeb9481103 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:09:40 compute-0 nova_compute[254092]: 2025-11-25 17:09:40.824 254096 DEBUG nova.virt.libvirt.driver [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] End _get_guest_xml xml=<domain type="kvm">
Nov 25 17:09:40 compute-0 nova_compute[254092]:   <uuid>6a4ffa69-afb1-46b7-9109-8edeb9481103</uuid>
Nov 25 17:09:40 compute-0 nova_compute[254092]:   <name>instance-00000085</name>
Nov 25 17:09:40 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 17:09:40 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 17:09:40 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:09:40 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:       <nova:name>tempest-TestServerBasicOps-server-955582536</nova:name>
Nov 25 17:09:40 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 17:09:39</nova:creationTime>
Nov 25 17:09:40 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 17:09:40 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 17:09:40 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 17:09:40 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 17:09:40 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:09:40 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 17:09:40 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 17:09:40 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 17:09:40 compute-0 nova_compute[254092]:         <nova:user uuid="24306f395dd542b6a11b3bd0faadd4ad">tempest-TestServerBasicOps-1965980686-project-member</nova:user>
Nov 25 17:09:40 compute-0 nova_compute[254092]:         <nova:project uuid="49df21ca46894c8fb4040c7e9eccaef4">tempest-TestServerBasicOps-1965980686</nova:project>
Nov 25 17:09:40 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 17:09:40 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 17:09:40 compute-0 nova_compute[254092]:         <nova:port uuid="507a4b35-dd4f-4777-a88c-c40597fe827b">
Nov 25 17:09:40 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 17:09:40 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 17:09:40 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:09:40 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 17:09:40 compute-0 nova_compute[254092]:     <system>
Nov 25 17:09:40 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 17:09:40 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 17:09:40 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:09:40 compute-0 nova_compute[254092]:       <entry name="serial">6a4ffa69-afb1-46b7-9109-8edeb9481103</entry>
Nov 25 17:09:40 compute-0 nova_compute[254092]:       <entry name="uuid">6a4ffa69-afb1-46b7-9109-8edeb9481103</entry>
Nov 25 17:09:40 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     </system>
Nov 25 17:09:40 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:09:40 compute-0 nova_compute[254092]:   <os>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:   </os>
Nov 25 17:09:40 compute-0 nova_compute[254092]:   <features>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:   </features>
Nov 25 17:09:40 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 17:09:40 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:09:40 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 17:09:40 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:09:40 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 17:09:40 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/6a4ffa69-afb1-46b7-9109-8edeb9481103_disk">
Nov 25 17:09:40 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:       </source>
Nov 25 17:09:40 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:09:40 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:09:40 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 17:09:40 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/6a4ffa69-afb1-46b7-9109-8edeb9481103_disk.config">
Nov 25 17:09:40 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:       </source>
Nov 25 17:09:40 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:09:40 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:09:40 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 17:09:40 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:d4:22:50"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:       <target dev="tap507a4b35-dd"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 17:09:40 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/6a4ffa69-afb1-46b7-9109-8edeb9481103/console.log" append="off"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     <video>
Nov 25 17:09:40 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     </video>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 17:09:40 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 17:09:40 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 17:09:40 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:09:40 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:09:40 compute-0 nova_compute[254092]: </domain>
Nov 25 17:09:40 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 17:09:40 compute-0 nova_compute[254092]: 2025-11-25 17:09:40.833 254096 DEBUG nova.compute.manager [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Preparing to wait for external event network-vif-plugged-507a4b35-dd4f-4777-a88c-c40597fe827b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 17:09:40 compute-0 nova_compute[254092]: 2025-11-25 17:09:40.833 254096 DEBUG oslo_concurrency.lockutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Acquiring lock "6a4ffa69-afb1-46b7-9109-8edeb9481103-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:09:40 compute-0 nova_compute[254092]: 2025-11-25 17:09:40.834 254096 DEBUG oslo_concurrency.lockutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Lock "6a4ffa69-afb1-46b7-9109-8edeb9481103-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:09:40 compute-0 nova_compute[254092]: 2025-11-25 17:09:40.834 254096 DEBUG oslo_concurrency.lockutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Lock "6a4ffa69-afb1-46b7-9109-8edeb9481103-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:09:40 compute-0 nova_compute[254092]: 2025-11-25 17:09:40.835 254096 DEBUG nova.virt.libvirt.vif [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:09:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-955582536',display_name='tempest-TestServerBasicOps-server-955582536',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-955582536',id=133,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDdtyaxmMVL8CAc7S++Y8gbOFbk1LCqSryz68UQakJZFbiX886AD33j7kjiNk5kHLWkm6AP4LpqGE2BaOqlADwTjW6lF33LqPINwV3iZ2r2irHV9h1AdPWEego3dYIkYew==',key_name='tempest-TestServerBasicOps-2071633386',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='49df21ca46894c8fb4040c7e9eccaef4',ramdisk_id='',reservation_id='r-w0qyy7pr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1965980686',owner_user_name='tempest-TestServerBasicOps-1965980686-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:09:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='24306f395dd542b6a11b3bd0faadd4ad',uuid=6a4ffa69-afb1-46b7-9109-8edeb9481103,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "507a4b35-dd4f-4777-a88c-c40597fe827b", "address": "fa:16:3e:d4:22:50", "network": {"id": "e024dc03-b986-42e0-ad9c-68e6318af670", "bridge": "br-int", "label": "tempest-TestServerBasicOps-808271538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, 
"meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "49df21ca46894c8fb4040c7e9eccaef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap507a4b35-dd", "ovs_interfaceid": "507a4b35-dd4f-4777-a88c-c40597fe827b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:09:40 compute-0 nova_compute[254092]: 2025-11-25 17:09:40.836 254096 DEBUG nova.network.os_vif_util [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Converting VIF {"id": "507a4b35-dd4f-4777-a88c-c40597fe827b", "address": "fa:16:3e:d4:22:50", "network": {"id": "e024dc03-b986-42e0-ad9c-68e6318af670", "bridge": "br-int", "label": "tempest-TestServerBasicOps-808271538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "49df21ca46894c8fb4040c7e9eccaef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap507a4b35-dd", "ovs_interfaceid": "507a4b35-dd4f-4777-a88c-c40597fe827b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:09:40 compute-0 nova_compute[254092]: 2025-11-25 17:09:40.837 254096 DEBUG nova.network.os_vif_util [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d4:22:50,bridge_name='br-int',has_traffic_filtering=True,id=507a4b35-dd4f-4777-a88c-c40597fe827b,network=Network(e024dc03-b986-42e0-ad9c-68e6318af670),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap507a4b35-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:09:40 compute-0 nova_compute[254092]: 2025-11-25 17:09:40.838 254096 DEBUG os_vif [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:22:50,bridge_name='br-int',has_traffic_filtering=True,id=507a4b35-dd4f-4777-a88c-c40597fe827b,network=Network(e024dc03-b986-42e0-ad9c-68e6318af670),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap507a4b35-dd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:09:40 compute-0 nova_compute[254092]: 2025-11-25 17:09:40.839 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:40 compute-0 nova_compute[254092]: 2025-11-25 17:09:40.839 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:09:40 compute-0 nova_compute[254092]: 2025-11-25 17:09:40.840 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:09:40 compute-0 nova_compute[254092]: 2025-11-25 17:09:40.844 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:40 compute-0 nova_compute[254092]: 2025-11-25 17:09:40.844 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap507a4b35-dd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:09:40 compute-0 nova_compute[254092]: 2025-11-25 17:09:40.845 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap507a4b35-dd, col_values=(('external_ids', {'iface-id': '507a4b35-dd4f-4777-a88c-c40597fe827b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d4:22:50', 'vm-uuid': '6a4ffa69-afb1-46b7-9109-8edeb9481103'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:09:40 compute-0 nova_compute[254092]: 2025-11-25 17:09:40.847 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:40 compute-0 NetworkManager[48891]: <info>  [1764090580.8483] manager: (tap507a4b35-dd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/565)
Nov 25 17:09:40 compute-0 nova_compute[254092]: 2025-11-25 17:09:40.850 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:09:40 compute-0 nova_compute[254092]: 2025-11-25 17:09:40.855 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:40 compute-0 nova_compute[254092]: 2025-11-25 17:09:40.856 254096 INFO os_vif [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:22:50,bridge_name='br-int',has_traffic_filtering=True,id=507a4b35-dd4f-4777-a88c-c40597fe827b,network=Network(e024dc03-b986-42e0-ad9c-68e6318af670),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap507a4b35-dd')
Nov 25 17:09:41 compute-0 nova_compute[254092]: 2025-11-25 17:09:41.028 254096 DEBUG nova.virt.libvirt.driver [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:09:41 compute-0 nova_compute[254092]: 2025-11-25 17:09:41.029 254096 DEBUG nova.virt.libvirt.driver [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:09:41 compute-0 nova_compute[254092]: 2025-11-25 17:09:41.030 254096 DEBUG nova.virt.libvirt.driver [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] No VIF found with MAC fa:16:3e:d4:22:50, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:09:41 compute-0 nova_compute[254092]: 2025-11-25 17:09:41.031 254096 INFO nova.virt.libvirt.driver [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Using config drive
Nov 25 17:09:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:09:41 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/808388358' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:09:41 compute-0 nova_compute[254092]: 2025-11-25 17:09:41.121 254096 DEBUG nova.storage.rbd_utils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] rbd image 6a4ffa69-afb1-46b7-9109-8edeb9481103_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:09:41 compute-0 nova_compute[254092]: 2025-11-25 17:09:41.135 254096 INFO nova.compute.manager [None req-0c87e0ed-a00b-48e8-8457-c887aa2f1ca7 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Get console output
Nov 25 17:09:41 compute-0 nova_compute[254092]: 2025-11-25 17:09:41.137 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:09:41 compute-0 nova_compute[254092]: 2025-11-25 17:09:41.145 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 17:09:41 compute-0 nova_compute[254092]: 2025-11-25 17:09:41.233 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:09:41 compute-0 nova_compute[254092]: 2025-11-25 17:09:41.234 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:09:41 compute-0 nova_compute[254092]: 2025-11-25 17:09:41.238 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:09:41 compute-0 nova_compute[254092]: 2025-11-25 17:09:41.238 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:09:41 compute-0 nova_compute[254092]: 2025-11-25 17:09:41.242 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:09:41 compute-0 nova_compute[254092]: 2025-11-25 17:09:41.243 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:09:41 compute-0 nova_compute[254092]: 2025-11-25 17:09:41.418 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:09:41 compute-0 nova_compute[254092]: 2025-11-25 17:09:41.420 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3304MB free_disk=59.89732360839844GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:09:41 compute-0 nova_compute[254092]: 2025-11-25 17:09:41.420 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:09:41 compute-0 nova_compute[254092]: 2025-11-25 17:09:41.420 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:09:41 compute-0 nova_compute[254092]: 2025-11-25 17:09:41.531 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance e9a105a6-90a1-4e21-9296-61a55e2ceec3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 17:09:41 compute-0 nova_compute[254092]: 2025-11-25 17:09:41.532 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance e5fb0d68-c20f-4118-96eb-4e6de1db03a1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 17:09:41 compute-0 nova_compute[254092]: 2025-11-25 17:09:41.532 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 6a4ffa69-afb1-46b7-9109-8edeb9481103 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 17:09:41 compute-0 nova_compute[254092]: 2025-11-25 17:09:41.532 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:09:41 compute-0 nova_compute[254092]: 2025-11-25 17:09:41.532 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:09:41 compute-0 nova_compute[254092]: 2025-11-25 17:09:41.617 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:09:41 compute-0 ceph-mon[74985]: pgmap v2584: 321 pgs: 321 active+clean; 232 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 3.2 MiB/s wr, 80 op/s
Nov 25 17:09:41 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2765816984' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:09:41 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/808388358' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:09:41 compute-0 nova_compute[254092]: 2025-11-25 17:09:41.924 254096 INFO nova.virt.libvirt.driver [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Creating config drive at /var/lib/nova/instances/6a4ffa69-afb1-46b7-9109-8edeb9481103/disk.config
Nov 25 17:09:41 compute-0 nova_compute[254092]: 2025-11-25 17:09:41.938 254096 DEBUG oslo_concurrency.processutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6a4ffa69-afb1-46b7-9109-8edeb9481103/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnok5dwcm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:09:42 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:09:42 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/538877808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:09:42 compute-0 nova_compute[254092]: 2025-11-25 17:09:42.084 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:09:42 compute-0 nova_compute[254092]: 2025-11-25 17:09:42.093 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:09:42 compute-0 nova_compute[254092]: 2025-11-25 17:09:42.099 254096 DEBUG oslo_concurrency.processutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6a4ffa69-afb1-46b7-9109-8edeb9481103/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnok5dwcm" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:09:42 compute-0 nova_compute[254092]: 2025-11-25 17:09:42.129 254096 DEBUG nova.storage.rbd_utils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] rbd image 6a4ffa69-afb1-46b7-9109-8edeb9481103_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:09:42 compute-0 nova_compute[254092]: 2025-11-25 17:09:42.133 254096 DEBUG oslo_concurrency.processutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6a4ffa69-afb1-46b7-9109-8edeb9481103/disk.config 6a4ffa69-afb1-46b7-9109-8edeb9481103_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:09:42 compute-0 nova_compute[254092]: 2025-11-25 17:09:42.167 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:09:42 compute-0 nova_compute[254092]: 2025-11-25 17:09:42.173 254096 DEBUG nova.compute.manager [req-c119ade4-1e98-4da4-931c-fc573b531298 req-cd31a8f1-1412-468f-9292-88a00a9b3069 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Received event network-vif-unplugged-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:09:42 compute-0 nova_compute[254092]: 2025-11-25 17:09:42.173 254096 DEBUG oslo_concurrency.lockutils [req-c119ade4-1e98-4da4-931c-fc573b531298 req-cd31a8f1-1412-468f-9292-88a00a9b3069 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:09:42 compute-0 nova_compute[254092]: 2025-11-25 17:09:42.173 254096 DEBUG oslo_concurrency.lockutils [req-c119ade4-1e98-4da4-931c-fc573b531298 req-cd31a8f1-1412-468f-9292-88a00a9b3069 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:09:42 compute-0 nova_compute[254092]: 2025-11-25 17:09:42.174 254096 DEBUG oslo_concurrency.lockutils [req-c119ade4-1e98-4da4-931c-fc573b531298 req-cd31a8f1-1412-468f-9292-88a00a9b3069 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:09:42 compute-0 nova_compute[254092]: 2025-11-25 17:09:42.174 254096 DEBUG nova.compute.manager [req-c119ade4-1e98-4da4-931c-fc573b531298 req-cd31a8f1-1412-468f-9292-88a00a9b3069 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] No waiting events found dispatching network-vif-unplugged-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:09:42 compute-0 nova_compute[254092]: 2025-11-25 17:09:42.174 254096 WARNING nova.compute.manager [req-c119ade4-1e98-4da4-931c-fc573b531298 req-cd31a8f1-1412-468f-9292-88a00a9b3069 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Received unexpected event network-vif-unplugged-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab for instance with vm_state active and task_state None.
Nov 25 17:09:42 compute-0 nova_compute[254092]: 2025-11-25 17:09:42.174 254096 DEBUG nova.compute.manager [req-c119ade4-1e98-4da4-931c-fc573b531298 req-cd31a8f1-1412-468f-9292-88a00a9b3069 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Received event network-vif-plugged-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:09:42 compute-0 nova_compute[254092]: 2025-11-25 17:09:42.175 254096 DEBUG oslo_concurrency.lockutils [req-c119ade4-1e98-4da4-931c-fc573b531298 req-cd31a8f1-1412-468f-9292-88a00a9b3069 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:09:42 compute-0 nova_compute[254092]: 2025-11-25 17:09:42.175 254096 DEBUG oslo_concurrency.lockutils [req-c119ade4-1e98-4da4-931c-fc573b531298 req-cd31a8f1-1412-468f-9292-88a00a9b3069 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:09:42 compute-0 nova_compute[254092]: 2025-11-25 17:09:42.175 254096 DEBUG oslo_concurrency.lockutils [req-c119ade4-1e98-4da4-931c-fc573b531298 req-cd31a8f1-1412-468f-9292-88a00a9b3069 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:09:42 compute-0 nova_compute[254092]: 2025-11-25 17:09:42.175 254096 DEBUG nova.compute.manager [req-c119ade4-1e98-4da4-931c-fc573b531298 req-cd31a8f1-1412-468f-9292-88a00a9b3069 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] No waiting events found dispatching network-vif-plugged-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:09:42 compute-0 nova_compute[254092]: 2025-11-25 17:09:42.175 254096 WARNING nova.compute.manager [req-c119ade4-1e98-4da4-931c-fc573b531298 req-cd31a8f1-1412-468f-9292-88a00a9b3069 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Received unexpected event network-vif-plugged-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab for instance with vm_state active and task_state None.
Nov 25 17:09:42 compute-0 nova_compute[254092]: 2025-11-25 17:09:42.199 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:09:42 compute-0 nova_compute[254092]: 2025-11-25 17:09:42.199 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:09:42 compute-0 nova_compute[254092]: 2025-11-25 17:09:42.275 254096 DEBUG oslo_concurrency.processutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6a4ffa69-afb1-46b7-9109-8edeb9481103/disk.config 6a4ffa69-afb1-46b7-9109-8edeb9481103_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:09:42 compute-0 nova_compute[254092]: 2025-11-25 17:09:42.276 254096 INFO nova.virt.libvirt.driver [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Deleting local config drive /var/lib/nova/instances/6a4ffa69-afb1-46b7-9109-8edeb9481103/disk.config because it was imported into RBD.
Nov 25 17:09:42 compute-0 kernel: tap507a4b35-dd: entered promiscuous mode
Nov 25 17:09:42 compute-0 NetworkManager[48891]: <info>  [1764090582.3247] manager: (tap507a4b35-dd): new Tun device (/org/freedesktop/NetworkManager/Devices/566)
Nov 25 17:09:42 compute-0 ovn_controller[153477]: 2025-11-25T17:09:42Z|01377|binding|INFO|Claiming lport 507a4b35-dd4f-4777-a88c-c40597fe827b for this chassis.
Nov 25 17:09:42 compute-0 ovn_controller[153477]: 2025-11-25T17:09:42Z|01378|binding|INFO|507a4b35-dd4f-4777-a88c-c40597fe827b: Claiming fa:16:3e:d4:22:50 10.100.0.9
Nov 25 17:09:42 compute-0 nova_compute[254092]: 2025-11-25 17:09:42.325 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.336 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d4:22:50 10.100.0.9'], port_security=['fa:16:3e:d4:22:50 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '6a4ffa69-afb1-46b7-9109-8edeb9481103', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e024dc03-b986-42e0-ad9c-68e6318af670', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '49df21ca46894c8fb4040c7e9eccaef4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5dda0fb5-abf0-44ce-9142-5535344390ea 94b8f488-8d50-467c-9417-5b43dfa0fc8c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5edf8fd3-0892-465d-8830-58affb8f0bec, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=507a4b35-dd4f-4777-a88c-c40597fe827b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.338 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 507a4b35-dd4f-4777-a88c-c40597fe827b in datapath e024dc03-b986-42e0-ad9c-68e6318af670 bound to our chassis
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.340 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e024dc03-b986-42e0-ad9c-68e6318af670
Nov 25 17:09:42 compute-0 ovn_controller[153477]: 2025-11-25T17:09:42Z|01379|binding|INFO|Setting lport 507a4b35-dd4f-4777-a88c-c40597fe827b ovn-installed in OVS
Nov 25 17:09:42 compute-0 ovn_controller[153477]: 2025-11-25T17:09:42Z|01380|binding|INFO|Setting lport 507a4b35-dd4f-4777-a88c-c40597fe827b up in Southbound
Nov 25 17:09:42 compute-0 nova_compute[254092]: 2025-11-25 17:09:42.343 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.354 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8d87fb36-143c-4872-828f-685d6749c365]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.356 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape024dc03-b1 in ovnmeta-e024dc03-b986-42e0-ad9c-68e6318af670 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.357 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape024dc03-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.357 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[519c2b4c-63db-423e-bd6f-3df0d93384a2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.359 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[25d1be8d-ce0f-491b-8094-518cfa3a5550]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:09:42 compute-0 systemd-udevd[398846]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:09:42 compute-0 systemd-machined[216343]: New machine qemu-167-instance-00000085.
Nov 25 17:09:42 compute-0 NetworkManager[48891]: <info>  [1764090582.3746] device (tap507a4b35-dd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:09:42 compute-0 NetworkManager[48891]: <info>  [1764090582.3757] device (tap507a4b35-dd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:09:42 compute-0 systemd[1]: Started Virtual Machine qemu-167-instance-00000085.
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.383 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[f8793675-336f-4d51-ab3f-bb7defdb1d94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.408 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[35091f8f-cb3f-4bc5-bb96-e5d6b89425a5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.436 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e3f8b487-14b4-40e5-84ca-2ee5f6b30088]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.441 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[10e0db70-4839-458c-89b3-f52c32ff2557]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:09:42 compute-0 NetworkManager[48891]: <info>  [1764090582.4428] manager: (tape024dc03-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/567)
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.476 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[4ed4a95e-9249-477e-9030-437fc973ca03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.479 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[5ee28ebd-5e37-43e6-bd8c-0a2aa2570344]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:09:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2585: 321 pgs: 321 active+clean; 246 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 111 KiB/s rd, 2.5 MiB/s wr, 58 op/s
Nov 25 17:09:42 compute-0 NetworkManager[48891]: <info>  [1764090582.5156] device (tape024dc03-b0): carrier: link connected
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.524 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[41a9c97f-8c9a-417f-a499-157cebc00488]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.543 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[45cc44df-0f50-4152-bd15-da1912ab05a2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape024dc03-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f0:af:8d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 403], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 704010, 'reachable_time': 34527, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 398878, 'error': None, 'target': 'ovnmeta-e024dc03-b986-42e0-ad9c-68e6318af670', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.560 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[520dd638-372a-4317-a9e5-9487c1d1e8de]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef0:af8d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 704010, 'tstamp': 704010}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 398879, 'error': None, 'target': 'ovnmeta-e024dc03-b986-42e0-ad9c-68e6318af670', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.585 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4602c558-b161-4d31-9283-8b299fc800fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape024dc03-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f0:af:8d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 403], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 704010, 'reachable_time': 34527, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 398880, 'error': None, 'target': 'ovnmeta-e024dc03-b986-42e0-ad9c-68e6318af670', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.630 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[faadb7bc-c266-4f64-83db-fa8a64f668c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.722 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4d902918-3d7d-45a9-98f8-4a1613041074]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.724 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape024dc03-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.724 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.724 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape024dc03-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:09:42 compute-0 nova_compute[254092]: 2025-11-25 17:09:42.726 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:42 compute-0 kernel: tape024dc03-b0: entered promiscuous mode
Nov 25 17:09:42 compute-0 nova_compute[254092]: 2025-11-25 17:09:42.728 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:42 compute-0 NetworkManager[48891]: <info>  [1764090582.7291] manager: (tape024dc03-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/568)
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.729 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape024dc03-b0, col_values=(('external_ids', {'iface-id': '40effa5c-1023-4055-bf6e-f3fb50577f32'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:09:42 compute-0 nova_compute[254092]: 2025-11-25 17:09:42.730 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:42 compute-0 ovn_controller[153477]: 2025-11-25T17:09:42Z|01381|binding|INFO|Releasing lport 40effa5c-1023-4055-bf6e-f3fb50577f32 from this chassis (sb_readonly=0)
Nov 25 17:09:42 compute-0 nova_compute[254092]: 2025-11-25 17:09:42.745 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.746 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e024dc03-b986-42e0-ad9c-68e6318af670.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e024dc03-b986-42e0-ad9c-68e6318af670.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.748 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f4078301-9564-404e-904b-ea251371b826]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.750 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]: global
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-e024dc03-b986-42e0-ad9c-68e6318af670
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/e024dc03-b986-42e0-ad9c-68e6318af670.pid.haproxy
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID e024dc03-b986-42e0-ad9c-68e6318af670
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 17:09:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:42.751 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e024dc03-b986-42e0-ad9c-68e6318af670', 'env', 'PROCESS_TAG=haproxy-e024dc03-b986-42e0-ad9c-68e6318af670', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e024dc03-b986-42e0-ad9c-68e6318af670.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 17:09:42 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/538877808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:09:42 compute-0 nova_compute[254092]: 2025-11-25 17:09:42.783 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090582.782857, 6a4ffa69-afb1-46b7-9109-8edeb9481103 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:09:42 compute-0 nova_compute[254092]: 2025-11-25 17:09:42.784 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] VM Started (Lifecycle Event)
Nov 25 17:09:42 compute-0 nova_compute[254092]: 2025-11-25 17:09:42.807 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:09:42 compute-0 nova_compute[254092]: 2025-11-25 17:09:42.813 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090582.7832856, 6a4ffa69-afb1-46b7-9109-8edeb9481103 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:09:42 compute-0 nova_compute[254092]: 2025-11-25 17:09:42.813 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] VM Paused (Lifecycle Event)
Nov 25 17:09:42 compute-0 nova_compute[254092]: 2025-11-25 17:09:42.831 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:09:42 compute-0 nova_compute[254092]: 2025-11-25 17:09:42.836 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:09:42 compute-0 nova_compute[254092]: 2025-11-25 17:09:42.854 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:09:43 compute-0 podman[398954]: 2025-11-25 17:09:43.133130504 +0000 UTC m=+0.048458530 container create 484a44a437498435fda71d5a3fc29676d159f6f7123ac291143e0a043c30f59b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e024dc03-b986-42e0-ad9c-68e6318af670, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:09:43 compute-0 systemd[1]: Started libpod-conmon-484a44a437498435fda71d5a3fc29676d159f6f7123ac291143e0a043c30f59b.scope.
Nov 25 17:09:43 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:09:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a09b45283fd3fcfbf225fb1458ae704eb97d1c98df77d8854db8ed15a4adda7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 17:09:43 compute-0 podman[398954]: 2025-11-25 17:09:43.106888015 +0000 UTC m=+0.022216011 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 17:09:43 compute-0 podman[398954]: 2025-11-25 17:09:43.215043551 +0000 UTC m=+0.130371537 container init 484a44a437498435fda71d5a3fc29676d159f6f7123ac291143e0a043c30f59b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e024dc03-b986-42e0-ad9c-68e6318af670, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 25 17:09:43 compute-0 podman[398954]: 2025-11-25 17:09:43.220366587 +0000 UTC m=+0.135694573 container start 484a44a437498435fda71d5a3fc29676d159f6f7123ac291143e0a043c30f59b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e024dc03-b986-42e0-ad9c-68e6318af670, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 17:09:43 compute-0 neutron-haproxy-ovnmeta-e024dc03-b986-42e0-ad9c-68e6318af670[398969]: [NOTICE]   (398973) : New worker (398975) forked
Nov 25 17:09:43 compute-0 neutron-haproxy-ovnmeta-e024dc03-b986-42e0-ad9c-68e6318af670[398969]: [NOTICE]   (398973) : Loading success.
Nov 25 17:09:43 compute-0 nova_compute[254092]: 2025-11-25 17:09:43.767 254096 DEBUG nova.network.neutron [req-678570ca-5d49-4fc7-a73c-e0917360d9d8 req-7a9d7730-cd3f-4fcf-b8d6-2fb75e82aadd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Updated VIF entry in instance network info cache for port 9b06b5b4-bc07-48ef-b51a-ecf0abc558ab. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:09:43 compute-0 nova_compute[254092]: 2025-11-25 17:09:43.768 254096 DEBUG nova.network.neutron [req-678570ca-5d49-4fc7-a73c-e0917360d9d8 req-7a9d7730-cd3f-4fcf-b8d6-2fb75e82aadd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Updating instance_info_cache with network_info: [{"id": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "address": "fa:16:3e:e4:f0:6f", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b06b5b4-bc", "ovs_interfaceid": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:09:43 compute-0 nova_compute[254092]: 2025-11-25 17:09:43.781 254096 DEBUG oslo_concurrency.lockutils [req-678570ca-5d49-4fc7-a73c-e0917360d9d8 req-7a9d7730-cd3f-4fcf-b8d6-2fb75e82aadd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-e9a105a6-90a1-4e21-9296-61a55e2ceec3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:09:43 compute-0 ceph-mon[74985]: pgmap v2585: 321 pgs: 321 active+clean; 246 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 111 KiB/s rd, 2.5 MiB/s wr, 58 op/s
Nov 25 17:09:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:09:44 compute-0 nova_compute[254092]: 2025-11-25 17:09:44.055 254096 DEBUG nova.compute.manager [req-260715ae-82b7-4563-9764-214ae707e4d6 req-b4b3f3e8-c801-41b9-8c98-1edcb85153fb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Received event network-changed-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:09:44 compute-0 nova_compute[254092]: 2025-11-25 17:09:44.055 254096 DEBUG nova.compute.manager [req-260715ae-82b7-4563-9764-214ae707e4d6 req-b4b3f3e8-c801-41b9-8c98-1edcb85153fb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Refreshing instance network info cache due to event network-changed-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:09:44 compute-0 nova_compute[254092]: 2025-11-25 17:09:44.056 254096 DEBUG oslo_concurrency.lockutils [req-260715ae-82b7-4563-9764-214ae707e4d6 req-b4b3f3e8-c801-41b9-8c98-1edcb85153fb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-e9a105a6-90a1-4e21-9296-61a55e2ceec3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:09:44 compute-0 nova_compute[254092]: 2025-11-25 17:09:44.056 254096 DEBUG oslo_concurrency.lockutils [req-260715ae-82b7-4563-9764-214ae707e4d6 req-b4b3f3e8-c801-41b9-8c98-1edcb85153fb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-e9a105a6-90a1-4e21-9296-61a55e2ceec3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:09:44 compute-0 nova_compute[254092]: 2025-11-25 17:09:44.056 254096 DEBUG nova.network.neutron [req-260715ae-82b7-4563-9764-214ae707e4d6 req-b4b3f3e8-c801-41b9-8c98-1edcb85153fb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Refreshing network info cache for port 9b06b5b4-bc07-48ef-b51a-ecf0abc558ab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:09:44 compute-0 nova_compute[254092]: 2025-11-25 17:09:44.222 254096 DEBUG nova.compute.manager [req-b6378c69-6462-49a3-9cb4-daa9e757c584 req-fc6ef151-79fe-4451-9180-c868167e54e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Received event network-vif-plugged-507a4b35-dd4f-4777-a88c-c40597fe827b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:09:44 compute-0 nova_compute[254092]: 2025-11-25 17:09:44.222 254096 DEBUG oslo_concurrency.lockutils [req-b6378c69-6462-49a3-9cb4-daa9e757c584 req-fc6ef151-79fe-4451-9180-c868167e54e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6a4ffa69-afb1-46b7-9109-8edeb9481103-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:09:44 compute-0 nova_compute[254092]: 2025-11-25 17:09:44.223 254096 DEBUG oslo_concurrency.lockutils [req-b6378c69-6462-49a3-9cb4-daa9e757c584 req-fc6ef151-79fe-4451-9180-c868167e54e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6a4ffa69-afb1-46b7-9109-8edeb9481103-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:09:44 compute-0 nova_compute[254092]: 2025-11-25 17:09:44.223 254096 DEBUG oslo_concurrency.lockutils [req-b6378c69-6462-49a3-9cb4-daa9e757c584 req-fc6ef151-79fe-4451-9180-c868167e54e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6a4ffa69-afb1-46b7-9109-8edeb9481103-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:09:44 compute-0 nova_compute[254092]: 2025-11-25 17:09:44.224 254096 DEBUG nova.compute.manager [req-b6378c69-6462-49a3-9cb4-daa9e757c584 req-fc6ef151-79fe-4451-9180-c868167e54e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Processing event network-vif-plugged-507a4b35-dd4f-4777-a88c-c40597fe827b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 17:09:44 compute-0 nova_compute[254092]: 2025-11-25 17:09:44.224 254096 DEBUG nova.compute.manager [req-b6378c69-6462-49a3-9cb4-daa9e757c584 req-fc6ef151-79fe-4451-9180-c868167e54e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Received event network-vif-plugged-507a4b35-dd4f-4777-a88c-c40597fe827b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:09:44 compute-0 nova_compute[254092]: 2025-11-25 17:09:44.225 254096 DEBUG oslo_concurrency.lockutils [req-b6378c69-6462-49a3-9cb4-daa9e757c584 req-fc6ef151-79fe-4451-9180-c868167e54e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6a4ffa69-afb1-46b7-9109-8edeb9481103-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:09:44 compute-0 nova_compute[254092]: 2025-11-25 17:09:44.225 254096 DEBUG oslo_concurrency.lockutils [req-b6378c69-6462-49a3-9cb4-daa9e757c584 req-fc6ef151-79fe-4451-9180-c868167e54e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6a4ffa69-afb1-46b7-9109-8edeb9481103-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:09:44 compute-0 nova_compute[254092]: 2025-11-25 17:09:44.226 254096 DEBUG oslo_concurrency.lockutils [req-b6378c69-6462-49a3-9cb4-daa9e757c584 req-fc6ef151-79fe-4451-9180-c868167e54e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6a4ffa69-afb1-46b7-9109-8edeb9481103-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:09:44 compute-0 nova_compute[254092]: 2025-11-25 17:09:44.226 254096 DEBUG nova.compute.manager [req-b6378c69-6462-49a3-9cb4-daa9e757c584 req-fc6ef151-79fe-4451-9180-c868167e54e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] No waiting events found dispatching network-vif-plugged-507a4b35-dd4f-4777-a88c-c40597fe827b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:09:44 compute-0 nova_compute[254092]: 2025-11-25 17:09:44.227 254096 WARNING nova.compute.manager [req-b6378c69-6462-49a3-9cb4-daa9e757c584 req-fc6ef151-79fe-4451-9180-c868167e54e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Received unexpected event network-vif-plugged-507a4b35-dd4f-4777-a88c-c40597fe827b for instance with vm_state building and task_state spawning.
Nov 25 17:09:44 compute-0 nova_compute[254092]: 2025-11-25 17:09:44.228 254096 DEBUG nova.compute.manager [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 17:09:44 compute-0 nova_compute[254092]: 2025-11-25 17:09:44.234 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090584.2345781, 6a4ffa69-afb1-46b7-9109-8edeb9481103 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:09:44 compute-0 nova_compute[254092]: 2025-11-25 17:09:44.235 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] VM Resumed (Lifecycle Event)
Nov 25 17:09:44 compute-0 nova_compute[254092]: 2025-11-25 17:09:44.238 254096 DEBUG nova.virt.libvirt.driver [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 17:09:44 compute-0 nova_compute[254092]: 2025-11-25 17:09:44.244 254096 INFO nova.virt.libvirt.driver [-] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Instance spawned successfully.
Nov 25 17:09:44 compute-0 nova_compute[254092]: 2025-11-25 17:09:44.245 254096 DEBUG nova.virt.libvirt.driver [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 17:09:44 compute-0 nova_compute[254092]: 2025-11-25 17:09:44.260 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:09:44 compute-0 nova_compute[254092]: 2025-11-25 17:09:44.269 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:09:44 compute-0 nova_compute[254092]: 2025-11-25 17:09:44.276 254096 DEBUG nova.virt.libvirt.driver [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:09:44 compute-0 nova_compute[254092]: 2025-11-25 17:09:44.277 254096 DEBUG nova.virt.libvirt.driver [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:09:44 compute-0 nova_compute[254092]: 2025-11-25 17:09:44.277 254096 DEBUG nova.virt.libvirt.driver [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:09:44 compute-0 nova_compute[254092]: 2025-11-25 17:09:44.278 254096 DEBUG nova.virt.libvirt.driver [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:09:44 compute-0 nova_compute[254092]: 2025-11-25 17:09:44.279 254096 DEBUG nova.virt.libvirt.driver [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:09:44 compute-0 nova_compute[254092]: 2025-11-25 17:09:44.280 254096 DEBUG nova.virt.libvirt.driver [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:09:44 compute-0 nova_compute[254092]: 2025-11-25 17:09:44.320 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:09:44 compute-0 nova_compute[254092]: 2025-11-25 17:09:44.363 254096 INFO nova.compute.manager [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Took 8.38 seconds to spawn the instance on the hypervisor.
Nov 25 17:09:44 compute-0 nova_compute[254092]: 2025-11-25 17:09:44.364 254096 DEBUG nova.compute.manager [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:09:44 compute-0 nova_compute[254092]: 2025-11-25 17:09:44.431 254096 INFO nova.compute.manager [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Took 9.57 seconds to build instance.
Nov 25 17:09:44 compute-0 nova_compute[254092]: 2025-11-25 17:09:44.449 254096 DEBUG oslo_concurrency.lockutils [None req-9172bf73-61a4-41c0-93d2-eb3861683c38 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Lock "6a4ffa69-afb1-46b7-9109-8edeb9481103" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:09:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2586: 321 pgs: 321 active+clean; 246 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 17:09:44 compute-0 nova_compute[254092]: 2025-11-25 17:09:44.553 254096 INFO nova.compute.manager [None req-2ee2c256-307f-4235-b5c2-7652d2305870 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Get console output
Nov 25 17:09:44 compute-0 nova_compute[254092]: 2025-11-25 17:09:44.559 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 17:09:45 compute-0 nova_compute[254092]: 2025-11-25 17:09:45.200 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:09:45 compute-0 nova_compute[254092]: 2025-11-25 17:09:45.200 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:09:45 compute-0 nova_compute[254092]: 2025-11-25 17:09:45.201 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:09:45 compute-0 nova_compute[254092]: 2025-11-25 17:09:45.526 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:45 compute-0 nova_compute[254092]: 2025-11-25 17:09:45.745 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-e9a105a6-90a1-4e21-9296-61a55e2ceec3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:09:45 compute-0 ceph-mon[74985]: pgmap v2586: 321 pgs: 321 active+clean; 246 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 17:09:45 compute-0 nova_compute[254092]: 2025-11-25 17:09:45.848 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:46 compute-0 nova_compute[254092]: 2025-11-25 17:09:46.259 254096 DEBUG nova.network.neutron [req-260715ae-82b7-4563-9764-214ae707e4d6 req-b4b3f3e8-c801-41b9-8c98-1edcb85153fb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Updated VIF entry in instance network info cache for port 9b06b5b4-bc07-48ef-b51a-ecf0abc558ab. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:09:46 compute-0 nova_compute[254092]: 2025-11-25 17:09:46.260 254096 DEBUG nova.network.neutron [req-260715ae-82b7-4563-9764-214ae707e4d6 req-b4b3f3e8-c801-41b9-8c98-1edcb85153fb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Updating instance_info_cache with network_info: [{"id": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "address": "fa:16:3e:e4:f0:6f", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b06b5b4-bc", "ovs_interfaceid": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:09:46 compute-0 nova_compute[254092]: 2025-11-25 17:09:46.284 254096 DEBUG oslo_concurrency.lockutils [req-260715ae-82b7-4563-9764-214ae707e4d6 req-b4b3f3e8-c801-41b9-8c98-1edcb85153fb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-e9a105a6-90a1-4e21-9296-61a55e2ceec3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:09:46 compute-0 nova_compute[254092]: 2025-11-25 17:09:46.286 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-e9a105a6-90a1-4e21-9296-61a55e2ceec3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:09:46 compute-0 nova_compute[254092]: 2025-11-25 17:09:46.287 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 17:09:46 compute-0 nova_compute[254092]: 2025-11-25 17:09:46.288 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e9a105a6-90a1-4e21-9296-61a55e2ceec3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:09:46 compute-0 nova_compute[254092]: 2025-11-25 17:09:46.305 254096 DEBUG nova.compute.manager [req-cd8de5b2-a1d3-4a5d-8137-f403300a6cfb req-4afe77f4-264d-4909-836b-6d337746d74d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Received event network-vif-plugged-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:09:46 compute-0 nova_compute[254092]: 2025-11-25 17:09:46.306 254096 DEBUG oslo_concurrency.lockutils [req-cd8de5b2-a1d3-4a5d-8137-f403300a6cfb req-4afe77f4-264d-4909-836b-6d337746d74d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:09:46 compute-0 nova_compute[254092]: 2025-11-25 17:09:46.306 254096 DEBUG oslo_concurrency.lockutils [req-cd8de5b2-a1d3-4a5d-8137-f403300a6cfb req-4afe77f4-264d-4909-836b-6d337746d74d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:09:46 compute-0 nova_compute[254092]: 2025-11-25 17:09:46.307 254096 DEBUG oslo_concurrency.lockutils [req-cd8de5b2-a1d3-4a5d-8137-f403300a6cfb req-4afe77f4-264d-4909-836b-6d337746d74d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:09:46 compute-0 nova_compute[254092]: 2025-11-25 17:09:46.307 254096 DEBUG nova.compute.manager [req-cd8de5b2-a1d3-4a5d-8137-f403300a6cfb req-4afe77f4-264d-4909-836b-6d337746d74d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] No waiting events found dispatching network-vif-plugged-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:09:46 compute-0 nova_compute[254092]: 2025-11-25 17:09:46.307 254096 WARNING nova.compute.manager [req-cd8de5b2-a1d3-4a5d-8137-f403300a6cfb req-4afe77f4-264d-4909-836b-6d337746d74d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Received unexpected event network-vif-plugged-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab for instance with vm_state active and task_state None.
Nov 25 17:09:46 compute-0 nova_compute[254092]: 2025-11-25 17:09:46.308 254096 DEBUG nova.compute.manager [req-cd8de5b2-a1d3-4a5d-8137-f403300a6cfb req-4afe77f4-264d-4909-836b-6d337746d74d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Received event network-vif-plugged-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:09:46 compute-0 nova_compute[254092]: 2025-11-25 17:09:46.308 254096 DEBUG oslo_concurrency.lockutils [req-cd8de5b2-a1d3-4a5d-8137-f403300a6cfb req-4afe77f4-264d-4909-836b-6d337746d74d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:09:46 compute-0 nova_compute[254092]: 2025-11-25 17:09:46.309 254096 DEBUG oslo_concurrency.lockutils [req-cd8de5b2-a1d3-4a5d-8137-f403300a6cfb req-4afe77f4-264d-4909-836b-6d337746d74d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:09:46 compute-0 nova_compute[254092]: 2025-11-25 17:09:46.309 254096 DEBUG oslo_concurrency.lockutils [req-cd8de5b2-a1d3-4a5d-8137-f403300a6cfb req-4afe77f4-264d-4909-836b-6d337746d74d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:09:46 compute-0 nova_compute[254092]: 2025-11-25 17:09:46.309 254096 DEBUG nova.compute.manager [req-cd8de5b2-a1d3-4a5d-8137-f403300a6cfb req-4afe77f4-264d-4909-836b-6d337746d74d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] No waiting events found dispatching network-vif-plugged-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:09:46 compute-0 nova_compute[254092]: 2025-11-25 17:09:46.310 254096 WARNING nova.compute.manager [req-cd8de5b2-a1d3-4a5d-8137-f403300a6cfb req-4afe77f4-264d-4909-836b-6d337746d74d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Received unexpected event network-vif-plugged-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab for instance with vm_state active and task_state None.
Nov 25 17:09:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2587: 321 pgs: 321 active+clean; 246 MiB data, 1018 MiB used, 59 GiB / 60 GiB avail; 984 KiB/s rd, 1.8 MiB/s wr, 70 op/s
Nov 25 17:09:46 compute-0 nova_compute[254092]: 2025-11-25 17:09:46.517 254096 DEBUG oslo_concurrency.lockutils [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "e5fb0d68-c20f-4118-96eb-4e6de1db03a1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:09:46 compute-0 nova_compute[254092]: 2025-11-25 17:09:46.517 254096 DEBUG oslo_concurrency.lockutils [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "e5fb0d68-c20f-4118-96eb-4e6de1db03a1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:09:46 compute-0 nova_compute[254092]: 2025-11-25 17:09:46.518 254096 DEBUG oslo_concurrency.lockutils [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "e5fb0d68-c20f-4118-96eb-4e6de1db03a1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:09:46 compute-0 nova_compute[254092]: 2025-11-25 17:09:46.518 254096 DEBUG oslo_concurrency.lockutils [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "e5fb0d68-c20f-4118-96eb-4e6de1db03a1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:09:46 compute-0 nova_compute[254092]: 2025-11-25 17:09:46.518 254096 DEBUG oslo_concurrency.lockutils [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "e5fb0d68-c20f-4118-96eb-4e6de1db03a1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:09:46 compute-0 nova_compute[254092]: 2025-11-25 17:09:46.520 254096 INFO nova.compute.manager [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Terminating instance
Nov 25 17:09:46 compute-0 nova_compute[254092]: 2025-11-25 17:09:46.522 254096 DEBUG nova.compute.manager [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 17:09:46 compute-0 kernel: tap8f6c27c2-ad (unregistering): left promiscuous mode
Nov 25 17:09:46 compute-0 NetworkManager[48891]: <info>  [1764090586.5928] device (tap8f6c27c2-ad): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:09:46 compute-0 ovn_controller[153477]: 2025-11-25T17:09:46Z|01382|binding|INFO|Releasing lport 8f6c27c2-adf3-4556-9c61-69f27de29c0a from this chassis (sb_readonly=0)
Nov 25 17:09:46 compute-0 ovn_controller[153477]: 2025-11-25T17:09:46Z|01383|binding|INFO|Setting lport 8f6c27c2-adf3-4556-9c61-69f27de29c0a down in Southbound
Nov 25 17:09:46 compute-0 nova_compute[254092]: 2025-11-25 17:09:46.604 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:46 compute-0 ovn_controller[153477]: 2025-11-25T17:09:46Z|01384|binding|INFO|Removing iface tap8f6c27c2-ad ovn-installed in OVS
Nov 25 17:09:46 compute-0 nova_compute[254092]: 2025-11-25 17:09:46.608 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:46.617 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:a8:a6 10.100.0.4'], port_security=['fa:16:3e:66:a8:a6 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'e5fb0d68-c20f-4118-96eb-4e6de1db03a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-877a4e79-06f0-432b-a5f9-1a0277ccd412', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '4', 'neutron:security_group_ids': '16916ada-fafd-4ccd-ac83-b8a7fafd2092', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cb339b20-1b85-426d-99a4-46bcf16ea0e7, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=8f6c27c2-adf3-4556-9c61-69f27de29c0a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:09:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:46.618 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 8f6c27c2-adf3-4556-9c61-69f27de29c0a in datapath 877a4e79-06f0-432b-a5f9-1a0277ccd412 unbound from our chassis
Nov 25 17:09:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:46.620 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 877a4e79-06f0-432b-a5f9-1a0277ccd412
Nov 25 17:09:46 compute-0 nova_compute[254092]: 2025-11-25 17:09:46.632 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:46.641 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4118e919-e657-4b49-8ca8-57c24b05ca56]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:09:46 compute-0 systemd[1]: machine-qemu\x2d166\x2dinstance\x2d00000084.scope: Deactivated successfully.
Nov 25 17:09:46 compute-0 systemd[1]: machine-qemu\x2d166\x2dinstance\x2d00000084.scope: Consumed 13.474s CPU time.
Nov 25 17:09:46 compute-0 systemd-machined[216343]: Machine qemu-166-instance-00000084 terminated.
Nov 25 17:09:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:46.677 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[88fd7b03-8831-4ffe-92cc-15ddb2547d21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:09:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:46.681 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[70c14873-e609-4692-b142-80de272e3630]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:09:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:46.718 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[bb3ed1ec-ae79-4198-aabe-0a340bd228e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:09:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:46.739 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[58ad68a4-499c-4cd9-b69d-53d3f159367d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap877a4e79-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:16:39'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 400], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699464, 'reachable_time': 30269, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 398995, 'error': None, 'target': 'ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:09:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:46.770 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7bfb0e1b-0f78-4c8b-afe9-6b280d9c1980]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap877a4e79-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 699476, 'tstamp': 699476}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 398998, 'error': None, 'target': 'ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap877a4e79-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 699479, 'tstamp': 699479}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 398998, 'error': None, 'target': 'ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:09:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:46.772 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap877a4e79-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:09:46 compute-0 nova_compute[254092]: 2025-11-25 17:09:46.773 254096 INFO nova.virt.libvirt.driver [-] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Instance destroyed successfully.
Nov 25 17:09:46 compute-0 nova_compute[254092]: 2025-11-25 17:09:46.774 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:46 compute-0 nova_compute[254092]: 2025-11-25 17:09:46.774 254096 DEBUG nova.objects.instance [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'resources' on Instance uuid e5fb0d68-c20f-4118-96eb-4e6de1db03a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:09:46 compute-0 nova_compute[254092]: 2025-11-25 17:09:46.777 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:46.777 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap877a4e79-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:09:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:46.777 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:09:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:46.778 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap877a4e79-00, col_values=(('external_ids', {'iface-id': '62f37f6b-9026-4c42-8bb8-f4b3e0610e3b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:09:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:46.778 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:09:46 compute-0 nova_compute[254092]: 2025-11-25 17:09:46.785 254096 DEBUG nova.virt.libvirt.vif [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:09:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1686027048',display_name='tempest-TestNetworkBasicOps-server-1686027048',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1686027048',id=132,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLNdl5to0NzjjAFgFNhpsdh8Jfq3ScLPPqyXuaui5ecMeBqyO36Oalgk9S8OnNSAFKWqaym/gRqT0RKa9nF633E5JOkmAjnn0MFwmhHcgXwoFxcxFcA2kbyimTGOoXBEcA==',key_name='tempest-TestNetworkBasicOps-1297437986',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:09:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-dsazanwp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:09:16Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=e5fb0d68-c20f-4118-96eb-4e6de1db03a1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8f6c27c2-adf3-4556-9c61-69f27de29c0a", "address": "fa:16:3e:66:a8:a6", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f6c27c2-ad", "ovs_interfaceid": "8f6c27c2-adf3-4556-9c61-69f27de29c0a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:09:46 compute-0 nova_compute[254092]: 2025-11-25 17:09:46.785 254096 DEBUG nova.network.os_vif_util [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "8f6c27c2-adf3-4556-9c61-69f27de29c0a", "address": "fa:16:3e:66:a8:a6", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f6c27c2-ad", "ovs_interfaceid": "8f6c27c2-adf3-4556-9c61-69f27de29c0a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:09:46 compute-0 nova_compute[254092]: 2025-11-25 17:09:46.786 254096 DEBUG nova.network.os_vif_util [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:66:a8:a6,bridge_name='br-int',has_traffic_filtering=True,id=8f6c27c2-adf3-4556-9c61-69f27de29c0a,network=Network(877a4e79-06f0-432b-a5f9-1a0277ccd412),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f6c27c2-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:09:46 compute-0 nova_compute[254092]: 2025-11-25 17:09:46.787 254096 DEBUG os_vif [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:66:a8:a6,bridge_name='br-int',has_traffic_filtering=True,id=8f6c27c2-adf3-4556-9c61-69f27de29c0a,network=Network(877a4e79-06f0-432b-a5f9-1a0277ccd412),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f6c27c2-ad') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:09:46 compute-0 nova_compute[254092]: 2025-11-25 17:09:46.789 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:46 compute-0 nova_compute[254092]: 2025-11-25 17:09:46.789 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f6c27c2-ad, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:09:46 compute-0 nova_compute[254092]: 2025-11-25 17:09:46.793 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:46 compute-0 nova_compute[254092]: 2025-11-25 17:09:46.795 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:09:46 compute-0 nova_compute[254092]: 2025-11-25 17:09:46.797 254096 INFO os_vif [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:66:a8:a6,bridge_name='br-int',has_traffic_filtering=True,id=8f6c27c2-adf3-4556-9c61-69f27de29c0a,network=Network(877a4e79-06f0-432b-a5f9-1a0277ccd412),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f6c27c2-ad')
Nov 25 17:09:46 compute-0 nova_compute[254092]: 2025-11-25 17:09:46.938 254096 DEBUG nova.compute.manager [req-558e1ed2-aaaa-4fe1-9e9b-a628c630cb73 req-431a20a1-d414-49e6-ac74-81ad038da0f7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Received event network-vif-unplugged-8f6c27c2-adf3-4556-9c61-69f27de29c0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:09:46 compute-0 nova_compute[254092]: 2025-11-25 17:09:46.939 254096 DEBUG oslo_concurrency.lockutils [req-558e1ed2-aaaa-4fe1-9e9b-a628c630cb73 req-431a20a1-d414-49e6-ac74-81ad038da0f7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e5fb0d68-c20f-4118-96eb-4e6de1db03a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:09:46 compute-0 nova_compute[254092]: 2025-11-25 17:09:46.939 254096 DEBUG oslo_concurrency.lockutils [req-558e1ed2-aaaa-4fe1-9e9b-a628c630cb73 req-431a20a1-d414-49e6-ac74-81ad038da0f7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e5fb0d68-c20f-4118-96eb-4e6de1db03a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:09:46 compute-0 nova_compute[254092]: 2025-11-25 17:09:46.939 254096 DEBUG oslo_concurrency.lockutils [req-558e1ed2-aaaa-4fe1-9e9b-a628c630cb73 req-431a20a1-d414-49e6-ac74-81ad038da0f7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e5fb0d68-c20f-4118-96eb-4e6de1db03a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:09:46 compute-0 nova_compute[254092]: 2025-11-25 17:09:46.940 254096 DEBUG nova.compute.manager [req-558e1ed2-aaaa-4fe1-9e9b-a628c630cb73 req-431a20a1-d414-49e6-ac74-81ad038da0f7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] No waiting events found dispatching network-vif-unplugged-8f6c27c2-adf3-4556-9c61-69f27de29c0a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:09:46 compute-0 nova_compute[254092]: 2025-11-25 17:09:46.940 254096 DEBUG nova.compute.manager [req-558e1ed2-aaaa-4fe1-9e9b-a628c630cb73 req-431a20a1-d414-49e6-ac74-81ad038da0f7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Received event network-vif-unplugged-8f6c27c2-adf3-4556-9c61-69f27de29c0a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 17:09:47 compute-0 nova_compute[254092]: 2025-11-25 17:09:47.157 254096 INFO nova.virt.libvirt.driver [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Deleting instance files /var/lib/nova/instances/e5fb0d68-c20f-4118-96eb-4e6de1db03a1_del
Nov 25 17:09:47 compute-0 nova_compute[254092]: 2025-11-25 17:09:47.158 254096 INFO nova.virt.libvirt.driver [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Deletion of /var/lib/nova/instances/e5fb0d68-c20f-4118-96eb-4e6de1db03a1_del complete
Nov 25 17:09:47 compute-0 nova_compute[254092]: 2025-11-25 17:09:47.224 254096 INFO nova.compute.manager [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Took 0.70 seconds to destroy the instance on the hypervisor.
Nov 25 17:09:47 compute-0 nova_compute[254092]: 2025-11-25 17:09:47.226 254096 DEBUG oslo.service.loopingcall [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 17:09:47 compute-0 nova_compute[254092]: 2025-11-25 17:09:47.226 254096 DEBUG nova.compute.manager [-] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 17:09:47 compute-0 nova_compute[254092]: 2025-11-25 17:09:47.227 254096 DEBUG nova.network.neutron [-] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 17:09:47 compute-0 nova_compute[254092]: 2025-11-25 17:09:47.809 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Updating instance_info_cache with network_info: [{"id": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "address": "fa:16:3e:e4:f0:6f", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b06b5b4-bc", "ovs_interfaceid": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:09:47 compute-0 ceph-mon[74985]: pgmap v2587: 321 pgs: 321 active+clean; 246 MiB data, 1018 MiB used, 59 GiB / 60 GiB avail; 984 KiB/s rd, 1.8 MiB/s wr, 70 op/s
Nov 25 17:09:47 compute-0 nova_compute[254092]: 2025-11-25 17:09:47.838 254096 DEBUG nova.network.neutron [-] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:09:47 compute-0 nova_compute[254092]: 2025-11-25 17:09:47.841 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-e9a105a6-90a1-4e21-9296-61a55e2ceec3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:09:47 compute-0 nova_compute[254092]: 2025-11-25 17:09:47.842 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 17:09:47 compute-0 nova_compute[254092]: 2025-11-25 17:09:47.842 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:09:47 compute-0 nova_compute[254092]: 2025-11-25 17:09:47.855 254096 INFO nova.compute.manager [-] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Took 0.63 seconds to deallocate network for instance.
Nov 25 17:09:47 compute-0 nova_compute[254092]: 2025-11-25 17:09:47.905 254096 DEBUG oslo_concurrency.lockutils [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:09:47 compute-0 nova_compute[254092]: 2025-11-25 17:09:47.906 254096 DEBUG oslo_concurrency.lockutils [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:09:47 compute-0 nova_compute[254092]: 2025-11-25 17:09:47.997 254096 DEBUG oslo_concurrency.processutils [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:09:48 compute-0 nova_compute[254092]: 2025-11-25 17:09:48.418 254096 DEBUG nova.compute.manager [req-b0f4dc08-44f9-448c-a601-c111ca302ea7 req-99a13390-213f-4dce-b249-64773c28575f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Received event network-changed-8f6c27c2-adf3-4556-9c61-69f27de29c0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:09:48 compute-0 nova_compute[254092]: 2025-11-25 17:09:48.418 254096 DEBUG nova.compute.manager [req-b0f4dc08-44f9-448c-a601-c111ca302ea7 req-99a13390-213f-4dce-b249-64773c28575f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Refreshing instance network info cache due to event network-changed-8f6c27c2-adf3-4556-9c61-69f27de29c0a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:09:48 compute-0 nova_compute[254092]: 2025-11-25 17:09:48.419 254096 DEBUG oslo_concurrency.lockutils [req-b0f4dc08-44f9-448c-a601-c111ca302ea7 req-99a13390-213f-4dce-b249-64773c28575f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-e5fb0d68-c20f-4118-96eb-4e6de1db03a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:09:48 compute-0 nova_compute[254092]: 2025-11-25 17:09:48.419 254096 DEBUG oslo_concurrency.lockutils [req-b0f4dc08-44f9-448c-a601-c111ca302ea7 req-99a13390-213f-4dce-b249-64773c28575f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-e5fb0d68-c20f-4118-96eb-4e6de1db03a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:09:48 compute-0 nova_compute[254092]: 2025-11-25 17:09:48.419 254096 DEBUG nova.network.neutron [req-b0f4dc08-44f9-448c-a601-c111ca302ea7 req-99a13390-213f-4dce-b249-64773c28575f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Refreshing network info cache for port 8f6c27c2-adf3-4556-9c61-69f27de29c0a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:09:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:09:48 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2543606504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:09:48 compute-0 nova_compute[254092]: 2025-11-25 17:09:48.439 254096 DEBUG oslo_concurrency.processutils [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:09:48 compute-0 nova_compute[254092]: 2025-11-25 17:09:48.446 254096 DEBUG nova.compute.provider_tree [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:09:48 compute-0 nova_compute[254092]: 2025-11-25 17:09:48.459 254096 DEBUG nova.scheduler.client.report [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:09:48 compute-0 nova_compute[254092]: 2025-11-25 17:09:48.479 254096 DEBUG oslo_concurrency.lockutils [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:09:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2588: 321 pgs: 321 active+clean; 246 MiB data, 1018 MiB used, 59 GiB / 60 GiB avail; 982 KiB/s rd, 1.8 MiB/s wr, 70 op/s
Nov 25 17:09:48 compute-0 nova_compute[254092]: 2025-11-25 17:09:48.508 254096 INFO nova.scheduler.client.report [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Deleted allocations for instance e5fb0d68-c20f-4118-96eb-4e6de1db03a1
Nov 25 17:09:48 compute-0 nova_compute[254092]: 2025-11-25 17:09:48.569 254096 DEBUG oslo_concurrency.lockutils [None req-fd739c46-7f43-4f22-af60-1c25670acae1 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "e5fb0d68-c20f-4118-96eb-4e6de1db03a1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.052s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:09:48 compute-0 nova_compute[254092]: 2025-11-25 17:09:48.587 254096 DEBUG nova.network.neutron [req-b0f4dc08-44f9-448c-a601-c111ca302ea7 req-99a13390-213f-4dce-b249-64773c28575f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:09:48 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2543606504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:09:48 compute-0 nova_compute[254092]: 2025-11-25 17:09:48.842 254096 DEBUG nova.network.neutron [req-b0f4dc08-44f9-448c-a601-c111ca302ea7 req-99a13390-213f-4dce-b249-64773c28575f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:09:48 compute-0 nova_compute[254092]: 2025-11-25 17:09:48.858 254096 DEBUG oslo_concurrency.lockutils [req-b0f4dc08-44f9-448c-a601-c111ca302ea7 req-99a13390-213f-4dce-b249-64773c28575f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-e5fb0d68-c20f-4118-96eb-4e6de1db03a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:09:48 compute-0 nova_compute[254092]: 2025-11-25 17:09:48.859 254096 DEBUG nova.compute.manager [req-b0f4dc08-44f9-448c-a601-c111ca302ea7 req-99a13390-213f-4dce-b249-64773c28575f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Received event network-changed-507a4b35-dd4f-4777-a88c-c40597fe827b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:09:48 compute-0 nova_compute[254092]: 2025-11-25 17:09:48.859 254096 DEBUG nova.compute.manager [req-b0f4dc08-44f9-448c-a601-c111ca302ea7 req-99a13390-213f-4dce-b249-64773c28575f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Refreshing instance network info cache due to event network-changed-507a4b35-dd4f-4777-a88c-c40597fe827b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:09:48 compute-0 nova_compute[254092]: 2025-11-25 17:09:48.859 254096 DEBUG oslo_concurrency.lockutils [req-b0f4dc08-44f9-448c-a601-c111ca302ea7 req-99a13390-213f-4dce-b249-64773c28575f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-6a4ffa69-afb1-46b7-9109-8edeb9481103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:09:48 compute-0 nova_compute[254092]: 2025-11-25 17:09:48.860 254096 DEBUG oslo_concurrency.lockutils [req-b0f4dc08-44f9-448c-a601-c111ca302ea7 req-99a13390-213f-4dce-b249-64773c28575f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-6a4ffa69-afb1-46b7-9109-8edeb9481103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:09:48 compute-0 nova_compute[254092]: 2025-11-25 17:09:48.860 254096 DEBUG nova.network.neutron [req-b0f4dc08-44f9-448c-a601-c111ca302ea7 req-99a13390-213f-4dce-b249-64773c28575f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Refreshing network info cache for port 507a4b35-dd4f-4777-a88c-c40597fe827b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:09:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:09:49 compute-0 nova_compute[254092]: 2025-11-25 17:09:49.028 254096 DEBUG nova.compute.manager [req-148a1512-81be-4173-801f-ee8d5e37c31e req-abf91867-8a43-49b0-8e7e-e6462080256c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Received event network-vif-plugged-8f6c27c2-adf3-4556-9c61-69f27de29c0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:09:49 compute-0 nova_compute[254092]: 2025-11-25 17:09:49.028 254096 DEBUG oslo_concurrency.lockutils [req-148a1512-81be-4173-801f-ee8d5e37c31e req-abf91867-8a43-49b0-8e7e-e6462080256c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e5fb0d68-c20f-4118-96eb-4e6de1db03a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:09:49 compute-0 nova_compute[254092]: 2025-11-25 17:09:49.028 254096 DEBUG oslo_concurrency.lockutils [req-148a1512-81be-4173-801f-ee8d5e37c31e req-abf91867-8a43-49b0-8e7e-e6462080256c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e5fb0d68-c20f-4118-96eb-4e6de1db03a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:09:49 compute-0 nova_compute[254092]: 2025-11-25 17:09:49.029 254096 DEBUG oslo_concurrency.lockutils [req-148a1512-81be-4173-801f-ee8d5e37c31e req-abf91867-8a43-49b0-8e7e-e6462080256c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e5fb0d68-c20f-4118-96eb-4e6de1db03a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:09:49 compute-0 nova_compute[254092]: 2025-11-25 17:09:49.029 254096 DEBUG nova.compute.manager [req-148a1512-81be-4173-801f-ee8d5e37c31e req-abf91867-8a43-49b0-8e7e-e6462080256c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] No waiting events found dispatching network-vif-plugged-8f6c27c2-adf3-4556-9c61-69f27de29c0a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:09:49 compute-0 nova_compute[254092]: 2025-11-25 17:09:49.029 254096 WARNING nova.compute.manager [req-148a1512-81be-4173-801f-ee8d5e37c31e req-abf91867-8a43-49b0-8e7e-e6462080256c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Received unexpected event network-vif-plugged-8f6c27c2-adf3-4556-9c61-69f27de29c0a for instance with vm_state deleted and task_state None.
Nov 25 17:09:49 compute-0 nova_compute[254092]: 2025-11-25 17:09:49.132 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:09:49 compute-0 ceph-mon[74985]: pgmap v2588: 321 pgs: 321 active+clean; 246 MiB data, 1018 MiB used, 59 GiB / 60 GiB avail; 982 KiB/s rd, 1.8 MiB/s wr, 70 op/s
Nov 25 17:09:50 compute-0 nova_compute[254092]: 2025-11-25 17:09:50.111 254096 DEBUG nova.network.neutron [req-b0f4dc08-44f9-448c-a601-c111ca302ea7 req-99a13390-213f-4dce-b249-64773c28575f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Updated VIF entry in instance network info cache for port 507a4b35-dd4f-4777-a88c-c40597fe827b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:09:50 compute-0 nova_compute[254092]: 2025-11-25 17:09:50.112 254096 DEBUG nova.network.neutron [req-b0f4dc08-44f9-448c-a601-c111ca302ea7 req-99a13390-213f-4dce-b249-64773c28575f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Updating instance_info_cache with network_info: [{"id": "507a4b35-dd4f-4777-a88c-c40597fe827b", "address": "fa:16:3e:d4:22:50", "network": {"id": "e024dc03-b986-42e0-ad9c-68e6318af670", "bridge": "br-int", "label": "tempest-TestServerBasicOps-808271538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "49df21ca46894c8fb4040c7e9eccaef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap507a4b35-dd", "ovs_interfaceid": "507a4b35-dd4f-4777-a88c-c40597fe827b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:09:50 compute-0 nova_compute[254092]: 2025-11-25 17:09:50.136 254096 DEBUG oslo_concurrency.lockutils [req-b0f4dc08-44f9-448c-a601-c111ca302ea7 req-99a13390-213f-4dce-b249-64773c28575f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-6a4ffa69-afb1-46b7-9109-8edeb9481103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:09:50 compute-0 nova_compute[254092]: 2025-11-25 17:09:50.137 254096 DEBUG nova.compute.manager [req-b0f4dc08-44f9-448c-a601-c111ca302ea7 req-99a13390-213f-4dce-b249-64773c28575f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Received event network-vif-deleted-8f6c27c2-adf3-4556-9c61-69f27de29c0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:09:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2589: 321 pgs: 321 active+clean; 190 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 118 op/s
Nov 25 17:09:50 compute-0 nova_compute[254092]: 2025-11-25 17:09:50.530 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:50 compute-0 nova_compute[254092]: 2025-11-25 17:09:50.904 254096 DEBUG nova.compute.manager [req-b6cc9f20-3e66-43a5-bbc1-67a000ec266b req-84c41fe3-ea66-42c1-b89e-a5a28ce002f3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Received event network-changed-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:09:50 compute-0 nova_compute[254092]: 2025-11-25 17:09:50.905 254096 DEBUG nova.compute.manager [req-b6cc9f20-3e66-43a5-bbc1-67a000ec266b req-84c41fe3-ea66-42c1-b89e-a5a28ce002f3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Refreshing instance network info cache due to event network-changed-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:09:50 compute-0 nova_compute[254092]: 2025-11-25 17:09:50.905 254096 DEBUG oslo_concurrency.lockutils [req-b6cc9f20-3e66-43a5-bbc1-67a000ec266b req-84c41fe3-ea66-42c1-b89e-a5a28ce002f3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-e9a105a6-90a1-4e21-9296-61a55e2ceec3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:09:50 compute-0 nova_compute[254092]: 2025-11-25 17:09:50.905 254096 DEBUG oslo_concurrency.lockutils [req-b6cc9f20-3e66-43a5-bbc1-67a000ec266b req-84c41fe3-ea66-42c1-b89e-a5a28ce002f3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-e9a105a6-90a1-4e21-9296-61a55e2ceec3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:09:50 compute-0 nova_compute[254092]: 2025-11-25 17:09:50.906 254096 DEBUG nova.network.neutron [req-b6cc9f20-3e66-43a5-bbc1-67a000ec266b req-84c41fe3-ea66-42c1-b89e-a5a28ce002f3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Refreshing network info cache for port 9b06b5b4-bc07-48ef-b51a-ecf0abc558ab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:09:50 compute-0 nova_compute[254092]: 2025-11-25 17:09:50.980 254096 DEBUG oslo_concurrency.lockutils [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:09:50 compute-0 nova_compute[254092]: 2025-11-25 17:09:50.981 254096 DEBUG oslo_concurrency.lockutils [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:09:50 compute-0 nova_compute[254092]: 2025-11-25 17:09:50.981 254096 DEBUG oslo_concurrency.lockutils [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:09:50 compute-0 nova_compute[254092]: 2025-11-25 17:09:50.982 254096 DEBUG oslo_concurrency.lockutils [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:09:50 compute-0 nova_compute[254092]: 2025-11-25 17:09:50.982 254096 DEBUG oslo_concurrency.lockutils [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:09:50 compute-0 nova_compute[254092]: 2025-11-25 17:09:50.984 254096 INFO nova.compute.manager [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Terminating instance
Nov 25 17:09:50 compute-0 nova_compute[254092]: 2025-11-25 17:09:50.986 254096 DEBUG nova.compute.manager [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 17:09:51 compute-0 kernel: tap9b06b5b4-bc (unregistering): left promiscuous mode
Nov 25 17:09:51 compute-0 NetworkManager[48891]: <info>  [1764090591.0326] device (tap9b06b5b4-bc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:09:51 compute-0 ovn_controller[153477]: 2025-11-25T17:09:51Z|01385|binding|INFO|Releasing lport 9b06b5b4-bc07-48ef-b51a-ecf0abc558ab from this chassis (sb_readonly=0)
Nov 25 17:09:51 compute-0 ovn_controller[153477]: 2025-11-25T17:09:51Z|01386|binding|INFO|Setting lport 9b06b5b4-bc07-48ef-b51a-ecf0abc558ab down in Southbound
Nov 25 17:09:51 compute-0 ovn_controller[153477]: 2025-11-25T17:09:51Z|01387|binding|INFO|Removing iface tap9b06b5b4-bc ovn-installed in OVS
Nov 25 17:09:51 compute-0 nova_compute[254092]: 2025-11-25 17:09:51.047 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:51.052 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:f0:6f 10.100.0.3'], port_security=['fa:16:3e:e4:f0:6f 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'e9a105a6-90a1-4e21-9296-61a55e2ceec3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-877a4e79-06f0-432b-a5f9-1a0277ccd412', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'f3c9a6b1-fb73-4f95-84e0-2b0bae619305', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cb339b20-1b85-426d-99a4-46bcf16ea0e7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=9b06b5b4-bc07-48ef-b51a-ecf0abc558ab) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:09:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:51.054 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 9b06b5b4-bc07-48ef-b51a-ecf0abc558ab in datapath 877a4e79-06f0-432b-a5f9-1a0277ccd412 unbound from our chassis
Nov 25 17:09:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:51.057 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 877a4e79-06f0-432b-a5f9-1a0277ccd412, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 17:09:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:51.058 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a6e308de-523a-4e5f-a0a4-40c3bafde057]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:09:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:51.059 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412 namespace which is not needed anymore
Nov 25 17:09:51 compute-0 nova_compute[254092]: 2025-11-25 17:09:51.075 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:51 compute-0 systemd[1]: machine-qemu\x2d165\x2dinstance\x2d00000083.scope: Deactivated successfully.
Nov 25 17:09:51 compute-0 systemd[1]: machine-qemu\x2d165\x2dinstance\x2d00000083.scope: Consumed 14.504s CPU time.
Nov 25 17:09:51 compute-0 systemd-machined[216343]: Machine qemu-165-instance-00000083 terminated.
Nov 25 17:09:51 compute-0 nova_compute[254092]: 2025-11-25 17:09:51.213 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:51 compute-0 neutron-haproxy-ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412[396790]: [NOTICE]   (396794) : haproxy version is 2.8.14-c23fe91
Nov 25 17:09:51 compute-0 neutron-haproxy-ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412[396790]: [NOTICE]   (396794) : path to executable is /usr/sbin/haproxy
Nov 25 17:09:51 compute-0 neutron-haproxy-ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412[396790]: [WARNING]  (396794) : Exiting Master process...
Nov 25 17:09:51 compute-0 neutron-haproxy-ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412[396790]: [ALERT]    (396794) : Current worker (396796) exited with code 143 (Terminated)
Nov 25 17:09:51 compute-0 neutron-haproxy-ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412[396790]: [WARNING]  (396794) : All workers exited. Exiting... (0)
Nov 25 17:09:51 compute-0 systemd[1]: libpod-36396cb8c4c59d39d430289b1192a036d13154c2c9b9c1ecde01d12a70459652.scope: Deactivated successfully.
Nov 25 17:09:51 compute-0 nova_compute[254092]: 2025-11-25 17:09:51.220 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:51 compute-0 nova_compute[254092]: 2025-11-25 17:09:51.227 254096 INFO nova.virt.libvirt.driver [-] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Instance destroyed successfully.
Nov 25 17:09:51 compute-0 nova_compute[254092]: 2025-11-25 17:09:51.227 254096 DEBUG nova.objects.instance [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'resources' on Instance uuid e9a105a6-90a1-4e21-9296-61a55e2ceec3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:09:51 compute-0 podman[399073]: 2025-11-25 17:09:51.230004176 +0000 UTC m=+0.051945195 container died 36396cb8c4c59d39d430289b1192a036d13154c2c9b9c1ecde01d12a70459652 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 17:09:51 compute-0 nova_compute[254092]: 2025-11-25 17:09:51.237 254096 DEBUG nova.virt.libvirt.vif [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:08:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-367110792',display_name='tempest-TestNetworkBasicOps-server-367110792',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-367110792',id=131,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDN+GWJbAZWOIdymw+cz2oLcyndkUPhl6vtYaMAR6OngHd8OlebpGiETQxsMoSybRqEA+rFizH3rxBjAI6Jko4gUoKJ0EE0bXq9XY/gGruR3mEMNu5mTsv7YmUDww+bvsg==',key_name='tempest-TestNetworkBasicOps-2139833866',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:08:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-3zuncmiv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:08:57Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=e9a105a6-90a1-4e21-9296-61a55e2ceec3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "address": "fa:16:3e:e4:f0:6f", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b06b5b4-bc", "ovs_interfaceid": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:09:51 compute-0 nova_compute[254092]: 2025-11-25 17:09:51.238 254096 DEBUG nova.network.os_vif_util [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "address": "fa:16:3e:e4:f0:6f", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b06b5b4-bc", "ovs_interfaceid": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:09:51 compute-0 nova_compute[254092]: 2025-11-25 17:09:51.238 254096 DEBUG nova.network.os_vif_util [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e4:f0:6f,bridge_name='br-int',has_traffic_filtering=True,id=9b06b5b4-bc07-48ef-b51a-ecf0abc558ab,network=Network(877a4e79-06f0-432b-a5f9-1a0277ccd412),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b06b5b4-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:09:51 compute-0 nova_compute[254092]: 2025-11-25 17:09:51.238 254096 DEBUG os_vif [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e4:f0:6f,bridge_name='br-int',has_traffic_filtering=True,id=9b06b5b4-bc07-48ef-b51a-ecf0abc558ab,network=Network(877a4e79-06f0-432b-a5f9-1a0277ccd412),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b06b5b4-bc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:09:51 compute-0 nova_compute[254092]: 2025-11-25 17:09:51.240 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:51 compute-0 nova_compute[254092]: 2025-11-25 17:09:51.241 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9b06b5b4-bc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:09:51 compute-0 nova_compute[254092]: 2025-11-25 17:09:51.242 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:51 compute-0 nova_compute[254092]: 2025-11-25 17:09:51.245 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:09:51 compute-0 nova_compute[254092]: 2025-11-25 17:09:51.247 254096 INFO os_vif [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e4:f0:6f,bridge_name='br-int',has_traffic_filtering=True,id=9b06b5b4-bc07-48ef-b51a-ecf0abc558ab,network=Network(877a4e79-06f0-432b-a5f9-1a0277ccd412),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b06b5b4-bc')
Nov 25 17:09:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-36396cb8c4c59d39d430289b1192a036d13154c2c9b9c1ecde01d12a70459652-userdata-shm.mount: Deactivated successfully.
Nov 25 17:09:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0e0d9216ee6b30cc22072c4c0c5dbb641729836999db290bc3d2789f7b7629e-merged.mount: Deactivated successfully.
Nov 25 17:09:51 compute-0 podman[399073]: 2025-11-25 17:09:51.283620986 +0000 UTC m=+0.105562005 container cleanup 36396cb8c4c59d39d430289b1192a036d13154c2c9b9c1ecde01d12a70459652 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 17:09:51 compute-0 systemd[1]: libpod-conmon-36396cb8c4c59d39d430289b1192a036d13154c2c9b9c1ecde01d12a70459652.scope: Deactivated successfully.
Nov 25 17:09:51 compute-0 podman[399133]: 2025-11-25 17:09:51.364856655 +0000 UTC m=+0.054577498 container remove 36396cb8c4c59d39d430289b1192a036d13154c2c9b9c1ecde01d12a70459652 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 17:09:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:51.372 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9b228835-359f-4a85-935a-416bfe0bd39d]: (4, ('Tue Nov 25 05:09:51 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412 (36396cb8c4c59d39d430289b1192a036d13154c2c9b9c1ecde01d12a70459652)\n36396cb8c4c59d39d430289b1192a036d13154c2c9b9c1ecde01d12a70459652\nTue Nov 25 05:09:51 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412 (36396cb8c4c59d39d430289b1192a036d13154c2c9b9c1ecde01d12a70459652)\n36396cb8c4c59d39d430289b1192a036d13154c2c9b9c1ecde01d12a70459652\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:09:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:51.374 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f79e3c2e-f69b-4b2f-a6e4-240d3afc5339]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:09:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:51.375 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap877a4e79-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:09:51 compute-0 nova_compute[254092]: 2025-11-25 17:09:51.378 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:51 compute-0 kernel: tap877a4e79-00: left promiscuous mode
Nov 25 17:09:51 compute-0 nova_compute[254092]: 2025-11-25 17:09:51.393 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:51.396 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[57f5e0bb-10ae-4f21-aa3d-5c3f9043aab3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:09:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:51.416 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d264846c-3b00-44a5-a09b-e91f721dcbdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:09:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:51.417 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8f592bd0-3e68-4e79-892d-3c9b46ae8a5a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:09:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:51.437 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f4c02a40-9ff7-4775-9688-2b0efc11fb8c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699456, 'reachable_time': 37393, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 399148, 'error': None, 'target': 'ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:09:51 compute-0 systemd[1]: run-netns-ovnmeta\x2d877a4e79\x2d06f0\x2d432b\x2da5f9\x2d1a0277ccd412.mount: Deactivated successfully.
Nov 25 17:09:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:51.442 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-877a4e79-06f0-432b-a5f9-1a0277ccd412 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 17:09:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:09:51.442 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[8efe8411-feb5-458c-8bd5-25f9b15c1e0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:09:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:09:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:09:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:09:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:09:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0013727962004297714 of space, bias 1.0, pg target 0.41183886012893145 quantized to 32 (current 32)
Nov 25 17:09:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:09:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:09:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:09:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:09:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:09:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 17:09:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:09:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:09:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:09:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:09:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:09:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:09:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:09:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:09:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:09:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:09:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:09:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:09:51 compute-0 nova_compute[254092]: 2025-11-25 17:09:51.696 254096 INFO nova.virt.libvirt.driver [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Deleting instance files /var/lib/nova/instances/e9a105a6-90a1-4e21-9296-61a55e2ceec3_del
Nov 25 17:09:51 compute-0 nova_compute[254092]: 2025-11-25 17:09:51.698 254096 INFO nova.virt.libvirt.driver [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Deletion of /var/lib/nova/instances/e9a105a6-90a1-4e21-9296-61a55e2ceec3_del complete
Nov 25 17:09:51 compute-0 nova_compute[254092]: 2025-11-25 17:09:51.751 254096 INFO nova.compute.manager [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Took 0.76 seconds to destroy the instance on the hypervisor.
Nov 25 17:09:51 compute-0 nova_compute[254092]: 2025-11-25 17:09:51.752 254096 DEBUG oslo.service.loopingcall [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 17:09:51 compute-0 nova_compute[254092]: 2025-11-25 17:09:51.752 254096 DEBUG nova.compute.manager [-] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 17:09:51 compute-0 nova_compute[254092]: 2025-11-25 17:09:51.753 254096 DEBUG nova.network.neutron [-] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 17:09:51 compute-0 ceph-mon[74985]: pgmap v2589: 321 pgs: 321 active+clean; 190 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 118 op/s
Nov 25 17:09:52 compute-0 nova_compute[254092]: 2025-11-25 17:09:52.087 254096 DEBUG nova.network.neutron [req-b6cc9f20-3e66-43a5-bbc1-67a000ec266b req-84c41fe3-ea66-42c1-b89e-a5a28ce002f3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Updated VIF entry in instance network info cache for port 9b06b5b4-bc07-48ef-b51a-ecf0abc558ab. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:09:52 compute-0 nova_compute[254092]: 2025-11-25 17:09:52.088 254096 DEBUG nova.network.neutron [req-b6cc9f20-3e66-43a5-bbc1-67a000ec266b req-84c41fe3-ea66-42c1-b89e-a5a28ce002f3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Updating instance_info_cache with network_info: [{"id": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "address": "fa:16:3e:e4:f0:6f", "network": {"id": "877a4e79-06f0-432b-a5f9-1a0277ccd412", "bridge": "br-int", "label": "tempest-network-smoke--1491444670", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b06b5b4-bc", "ovs_interfaceid": "9b06b5b4-bc07-48ef-b51a-ecf0abc558ab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:09:52 compute-0 nova_compute[254092]: 2025-11-25 17:09:52.109 254096 DEBUG oslo_concurrency.lockutils [req-b6cc9f20-3e66-43a5-bbc1-67a000ec266b req-84c41fe3-ea66-42c1-b89e-a5a28ce002f3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-e9a105a6-90a1-4e21-9296-61a55e2ceec3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:09:52 compute-0 nova_compute[254092]: 2025-11-25 17:09:52.228 254096 DEBUG nova.network.neutron [-] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:09:52 compute-0 nova_compute[254092]: 2025-11-25 17:09:52.242 254096 INFO nova.compute.manager [-] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Took 0.49 seconds to deallocate network for instance.
Nov 25 17:09:52 compute-0 nova_compute[254092]: 2025-11-25 17:09:52.283 254096 DEBUG oslo_concurrency.lockutils [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:09:52 compute-0 nova_compute[254092]: 2025-11-25 17:09:52.283 254096 DEBUG oslo_concurrency.lockutils [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:09:52 compute-0 nova_compute[254092]: 2025-11-25 17:09:52.341 254096 DEBUG oslo_concurrency.processutils [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:09:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2590: 321 pgs: 321 active+clean; 167 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 802 KiB/s wr, 114 op/s
Nov 25 17:09:52 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:09:52 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3358946284' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:09:52 compute-0 nova_compute[254092]: 2025-11-25 17:09:52.829 254096 DEBUG oslo_concurrency.processutils [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:09:52 compute-0 nova_compute[254092]: 2025-11-25 17:09:52.838 254096 DEBUG nova.compute.provider_tree [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:09:52 compute-0 nova_compute[254092]: 2025-11-25 17:09:52.855 254096 DEBUG nova.scheduler.client.report [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:09:52 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3358946284' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:09:52 compute-0 nova_compute[254092]: 2025-11-25 17:09:52.878 254096 DEBUG oslo_concurrency.lockutils [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:09:52 compute-0 nova_compute[254092]: 2025-11-25 17:09:52.916 254096 INFO nova.scheduler.client.report [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Deleted allocations for instance e9a105a6-90a1-4e21-9296-61a55e2ceec3
Nov 25 17:09:52 compute-0 nova_compute[254092]: 2025-11-25 17:09:52.982 254096 DEBUG oslo_concurrency.lockutils [None req-47e05db3-77b7-467b-ae04-1df1916630a2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:09:53 compute-0 nova_compute[254092]: 2025-11-25 17:09:53.004 254096 DEBUG nova.compute.manager [req-94953198-1110-4f80-a0a5-61f3f86a12f8 req-c38487bf-541d-400a-87c1-083f6c4a1c91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Received event network-vif-unplugged-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:09:53 compute-0 nova_compute[254092]: 2025-11-25 17:09:53.005 254096 DEBUG oslo_concurrency.lockutils [req-94953198-1110-4f80-a0a5-61f3f86a12f8 req-c38487bf-541d-400a-87c1-083f6c4a1c91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:09:53 compute-0 nova_compute[254092]: 2025-11-25 17:09:53.006 254096 DEBUG oslo_concurrency.lockutils [req-94953198-1110-4f80-a0a5-61f3f86a12f8 req-c38487bf-541d-400a-87c1-083f6c4a1c91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:09:53 compute-0 nova_compute[254092]: 2025-11-25 17:09:53.006 254096 DEBUG oslo_concurrency.lockutils [req-94953198-1110-4f80-a0a5-61f3f86a12f8 req-c38487bf-541d-400a-87c1-083f6c4a1c91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:09:53 compute-0 nova_compute[254092]: 2025-11-25 17:09:53.007 254096 DEBUG nova.compute.manager [req-94953198-1110-4f80-a0a5-61f3f86a12f8 req-c38487bf-541d-400a-87c1-083f6c4a1c91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] No waiting events found dispatching network-vif-unplugged-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:09:53 compute-0 nova_compute[254092]: 2025-11-25 17:09:53.008 254096 WARNING nova.compute.manager [req-94953198-1110-4f80-a0a5-61f3f86a12f8 req-c38487bf-541d-400a-87c1-083f6c4a1c91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Received unexpected event network-vif-unplugged-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab for instance with vm_state deleted and task_state None.
Nov 25 17:09:53 compute-0 nova_compute[254092]: 2025-11-25 17:09:53.008 254096 DEBUG nova.compute.manager [req-94953198-1110-4f80-a0a5-61f3f86a12f8 req-c38487bf-541d-400a-87c1-083f6c4a1c91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Received event network-vif-plugged-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:09:53 compute-0 nova_compute[254092]: 2025-11-25 17:09:53.009 254096 DEBUG oslo_concurrency.lockutils [req-94953198-1110-4f80-a0a5-61f3f86a12f8 req-c38487bf-541d-400a-87c1-083f6c4a1c91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:09:53 compute-0 nova_compute[254092]: 2025-11-25 17:09:53.010 254096 DEBUG oslo_concurrency.lockutils [req-94953198-1110-4f80-a0a5-61f3f86a12f8 req-c38487bf-541d-400a-87c1-083f6c4a1c91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:09:53 compute-0 nova_compute[254092]: 2025-11-25 17:09:53.011 254096 DEBUG oslo_concurrency.lockutils [req-94953198-1110-4f80-a0a5-61f3f86a12f8 req-c38487bf-541d-400a-87c1-083f6c4a1c91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e9a105a6-90a1-4e21-9296-61a55e2ceec3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:09:53 compute-0 nova_compute[254092]: 2025-11-25 17:09:53.011 254096 DEBUG nova.compute.manager [req-94953198-1110-4f80-a0a5-61f3f86a12f8 req-c38487bf-541d-400a-87c1-083f6c4a1c91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] No waiting events found dispatching network-vif-plugged-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:09:53 compute-0 nova_compute[254092]: 2025-11-25 17:09:53.012 254096 WARNING nova.compute.manager [req-94953198-1110-4f80-a0a5-61f3f86a12f8 req-c38487bf-541d-400a-87c1-083f6c4a1c91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Received unexpected event network-vif-plugged-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab for instance with vm_state deleted and task_state None.
Nov 25 17:09:53 compute-0 nova_compute[254092]: 2025-11-25 17:09:53.013 254096 DEBUG nova.compute.manager [req-94953198-1110-4f80-a0a5-61f3f86a12f8 req-c38487bf-541d-400a-87c1-083f6c4a1c91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Received event network-vif-deleted-9b06b5b4-bc07-48ef-b51a-ecf0abc558ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:09:53 compute-0 podman[399173]: 2025-11-25 17:09:53.655676158 +0000 UTC m=+0.062196697 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 17:09:53 compute-0 podman[399172]: 2025-11-25 17:09:53.662914597 +0000 UTC m=+0.075650806 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 17:09:53 compute-0 podman[399174]: 2025-11-25 17:09:53.704450785 +0000 UTC m=+0.104996960 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 25 17:09:53 compute-0 ceph-mon[74985]: pgmap v2590: 321 pgs: 321 active+clean; 167 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 802 KiB/s wr, 114 op/s
Nov 25 17:09:53 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:09:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2591: 321 pgs: 321 active+clean; 167 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 102 op/s
Nov 25 17:09:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:09:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2378382817' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:09:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:09:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2378382817' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:09:55 compute-0 nova_compute[254092]: 2025-11-25 17:09:55.530 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:55 compute-0 ceph-mon[74985]: pgmap v2591: 321 pgs: 321 active+clean; 167 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 102 op/s
Nov 25 17:09:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/2378382817' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:09:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/2378382817' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:09:56 compute-0 nova_compute[254092]: 2025-11-25 17:09:56.243 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2592: 321 pgs: 321 active+clean; 88 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 27 KiB/s wr, 130 op/s
Nov 25 17:09:57 compute-0 ceph-mon[74985]: pgmap v2592: 321 pgs: 321 active+clean; 88 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 27 KiB/s wr, 130 op/s
Nov 25 17:09:58 compute-0 ovn_controller[153477]: 2025-11-25T17:09:58Z|01388|binding|INFO|Releasing lport 40effa5c-1023-4055-bf6e-f3fb50577f32 from this chassis (sb_readonly=0)
Nov 25 17:09:58 compute-0 nova_compute[254092]: 2025-11-25 17:09:58.394 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:09:58 compute-0 ovn_controller[153477]: 2025-11-25T17:09:58Z|00162|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d4:22:50 10.100.0.9
Nov 25 17:09:58 compute-0 ovn_controller[153477]: 2025-11-25T17:09:58Z|00163|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d4:22:50 10.100.0.9
Nov 25 17:09:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2593: 321 pgs: 321 active+clean; 88 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 3.4 KiB/s wr, 88 op/s
Nov 25 17:09:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:09:59 compute-0 ceph-mon[74985]: pgmap v2593: 321 pgs: 321 active+clean; 88 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 3.4 KiB/s wr, 88 op/s
Nov 25 17:10:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2594: 321 pgs: 321 active+clean; 107 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.5 MiB/s wr, 124 op/s
Nov 25 17:10:00 compute-0 nova_compute[254092]: 2025-11-25 17:10:00.531 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:01 compute-0 nova_compute[254092]: 2025-11-25 17:10:01.245 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:01 compute-0 nova_compute[254092]: 2025-11-25 17:10:01.425 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:01 compute-0 nova_compute[254092]: 2025-11-25 17:10:01.767 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090586.766356, e5fb0d68-c20f-4118-96eb-4e6de1db03a1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:10:01 compute-0 nova_compute[254092]: 2025-11-25 17:10:01.767 254096 INFO nova.compute.manager [-] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] VM Stopped (Lifecycle Event)
Nov 25 17:10:01 compute-0 nova_compute[254092]: 2025-11-25 17:10:01.788 254096 DEBUG nova.compute.manager [None req-1bbf74ad-1ee8-49a3-89f0-5dcfa4077cf5 - - - - - -] [instance: e5fb0d68-c20f-4118-96eb-4e6de1db03a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:10:01 compute-0 ceph-mon[74985]: pgmap v2594: 321 pgs: 321 active+clean; 107 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.5 MiB/s wr, 124 op/s
Nov 25 17:10:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2595: 321 pgs: 321 active+clean; 121 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 356 KiB/s rd, 2.1 MiB/s wr, 105 op/s
Nov 25 17:10:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:10:03 compute-0 ceph-mon[74985]: pgmap v2595: 321 pgs: 321 active+clean; 121 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 356 KiB/s rd, 2.1 MiB/s wr, 105 op/s
Nov 25 17:10:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2596: 321 pgs: 321 active+clean; 121 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 347 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Nov 25 17:10:05 compute-0 nova_compute[254092]: 2025-11-25 17:10:05.453 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:05 compute-0 nova_compute[254092]: 2025-11-25 17:10:05.533 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:05 compute-0 ceph-mon[74985]: pgmap v2596: 321 pgs: 321 active+clean; 121 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 347 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Nov 25 17:10:06 compute-0 nova_compute[254092]: 2025-11-25 17:10:06.225 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090591.2246091, e9a105a6-90a1-4e21-9296-61a55e2ceec3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:10:06 compute-0 nova_compute[254092]: 2025-11-25 17:10:06.226 254096 INFO nova.compute.manager [-] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] VM Stopped (Lifecycle Event)
Nov 25 17:10:06 compute-0 nova_compute[254092]: 2025-11-25 17:10:06.247 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:06 compute-0 nova_compute[254092]: 2025-11-25 17:10:06.249 254096 DEBUG nova.compute.manager [None req-8ec635fa-1a86-44d1-9714-d79f4954cf72 - - - - - -] [instance: e9a105a6-90a1-4e21-9296-61a55e2ceec3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:10:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2597: 321 pgs: 321 active+clean; 121 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 347 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Nov 25 17:10:07 compute-0 ceph-mon[74985]: pgmap v2597: 321 pgs: 321 active+clean; 121 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 347 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Nov 25 17:10:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2598: 321 pgs: 321 active+clean; 121 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 17:10:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:10:09 compute-0 ceph-mon[74985]: pgmap v2598: 321 pgs: 321 active+clean; 121 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 17:10:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:10:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:10:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:10:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:10:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:10:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:10:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2599: 321 pgs: 321 active+clean; 121 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 17:10:10 compute-0 nova_compute[254092]: 2025-11-25 17:10:10.563 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:11 compute-0 nova_compute[254092]: 2025-11-25 17:10:11.204 254096 DEBUG oslo_concurrency.lockutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "678bebc8-318d-4332-b89f-f86ac5f187c4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:10:11 compute-0 nova_compute[254092]: 2025-11-25 17:10:11.205 254096 DEBUG oslo_concurrency.lockutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "678bebc8-318d-4332-b89f-f86ac5f187c4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:10:11 compute-0 nova_compute[254092]: 2025-11-25 17:10:11.226 254096 DEBUG nova.compute.manager [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 17:10:11 compute-0 nova_compute[254092]: 2025-11-25 17:10:11.249 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:11 compute-0 nova_compute[254092]: 2025-11-25 17:10:11.339 254096 DEBUG oslo_concurrency.lockutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:10:11 compute-0 nova_compute[254092]: 2025-11-25 17:10:11.340 254096 DEBUG oslo_concurrency.lockutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:10:11 compute-0 nova_compute[254092]: 2025-11-25 17:10:11.351 254096 DEBUG nova.virt.hardware [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 17:10:11 compute-0 nova_compute[254092]: 2025-11-25 17:10:11.352 254096 INFO nova.compute.claims [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Claim successful on node compute-0.ctlplane.example.com
Nov 25 17:10:11 compute-0 nova_compute[254092]: 2025-11-25 17:10:11.472 254096 DEBUG oslo_concurrency.processutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:10:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:10:11 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3713015053' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:10:11 compute-0 nova_compute[254092]: 2025-11-25 17:10:11.897 254096 DEBUG oslo_concurrency.processutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:10:11 compute-0 nova_compute[254092]: 2025-11-25 17:10:11.904 254096 DEBUG nova.compute.provider_tree [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:10:11 compute-0 nova_compute[254092]: 2025-11-25 17:10:11.922 254096 DEBUG nova.scheduler.client.report [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:10:11 compute-0 nova_compute[254092]: 2025-11-25 17:10:11.949 254096 DEBUG oslo_concurrency.lockutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:10:11 compute-0 nova_compute[254092]: 2025-11-25 17:10:11.950 254096 DEBUG nova.compute.manager [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 17:10:11 compute-0 ceph-mon[74985]: pgmap v2599: 321 pgs: 321 active+clean; 121 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 17:10:11 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3713015053' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:10:11 compute-0 nova_compute[254092]: 2025-11-25 17:10:11.998 254096 DEBUG nova.compute.manager [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 17:10:11 compute-0 nova_compute[254092]: 2025-11-25 17:10:11.998 254096 DEBUG nova.network.neutron [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 17:10:12 compute-0 nova_compute[254092]: 2025-11-25 17:10:12.014 254096 INFO nova.virt.libvirt.driver [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 17:10:12 compute-0 nova_compute[254092]: 2025-11-25 17:10:12.037 254096 DEBUG nova.compute.manager [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 17:10:12 compute-0 nova_compute[254092]: 2025-11-25 17:10:12.128 254096 DEBUG nova.compute.manager [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 17:10:12 compute-0 nova_compute[254092]: 2025-11-25 17:10:12.130 254096 DEBUG nova.virt.libvirt.driver [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 17:10:12 compute-0 nova_compute[254092]: 2025-11-25 17:10:12.131 254096 INFO nova.virt.libvirt.driver [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Creating image(s)
Nov 25 17:10:12 compute-0 nova_compute[254092]: 2025-11-25 17:10:12.170 254096 DEBUG nova.storage.rbd_utils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 678bebc8-318d-4332-b89f-f86ac5f187c4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:10:12 compute-0 nova_compute[254092]: 2025-11-25 17:10:12.210 254096 DEBUG nova.storage.rbd_utils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 678bebc8-318d-4332-b89f-f86ac5f187c4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:10:12 compute-0 nova_compute[254092]: 2025-11-25 17:10:12.241 254096 DEBUG nova.storage.rbd_utils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 678bebc8-318d-4332-b89f-f86ac5f187c4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:10:12 compute-0 nova_compute[254092]: 2025-11-25 17:10:12.246 254096 DEBUG oslo_concurrency.processutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:10:12 compute-0 nova_compute[254092]: 2025-11-25 17:10:12.353 254096 DEBUG oslo_concurrency.processutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:10:12 compute-0 nova_compute[254092]: 2025-11-25 17:10:12.355 254096 DEBUG oslo_concurrency.lockutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:10:12 compute-0 nova_compute[254092]: 2025-11-25 17:10:12.356 254096 DEBUG oslo_concurrency.lockutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:10:12 compute-0 nova_compute[254092]: 2025-11-25 17:10:12.357 254096 DEBUG oslo_concurrency.lockutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:10:12 compute-0 nova_compute[254092]: 2025-11-25 17:10:12.398 254096 DEBUG nova.storage.rbd_utils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 678bebc8-318d-4332-b89f-f86ac5f187c4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:10:12 compute-0 nova_compute[254092]: 2025-11-25 17:10:12.403 254096 DEBUG oslo_concurrency.processutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 678bebc8-318d-4332-b89f-f86ac5f187c4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:10:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2600: 321 pgs: 321 active+clean; 121 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 124 KiB/s rd, 673 KiB/s wr, 29 op/s
Nov 25 17:10:12 compute-0 nova_compute[254092]: 2025-11-25 17:10:12.747 254096 DEBUG oslo_concurrency.processutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 678bebc8-318d-4332-b89f-f86ac5f187c4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.343s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:10:12 compute-0 nova_compute[254092]: 2025-11-25 17:10:12.801 254096 DEBUG nova.policy [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e13382ac65e94c7b8a89e67de0e754df', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 17:10:12 compute-0 nova_compute[254092]: 2025-11-25 17:10:12.841 254096 DEBUG nova.storage.rbd_utils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] resizing rbd image 678bebc8-318d-4332-b89f-f86ac5f187c4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 17:10:12 compute-0 nova_compute[254092]: 2025-11-25 17:10:12.958 254096 DEBUG nova.objects.instance [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'migration_context' on Instance uuid 678bebc8-318d-4332-b89f-f86ac5f187c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:10:12 compute-0 nova_compute[254092]: 2025-11-25 17:10:12.971 254096 DEBUG nova.virt.libvirt.driver [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 17:10:12 compute-0 nova_compute[254092]: 2025-11-25 17:10:12.972 254096 DEBUG nova.virt.libvirt.driver [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Ensure instance console log exists: /var/lib/nova/instances/678bebc8-318d-4332-b89f-f86ac5f187c4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 17:10:12 compute-0 nova_compute[254092]: 2025-11-25 17:10:12.973 254096 DEBUG oslo_concurrency.lockutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:10:12 compute-0 nova_compute[254092]: 2025-11-25 17:10:12.973 254096 DEBUG oslo_concurrency.lockutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:10:12 compute-0 nova_compute[254092]: 2025-11-25 17:10:12.974 254096 DEBUG oslo_concurrency.lockutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:10:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:13.648 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:10:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:13.648 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:10:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:13.649 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:10:13 compute-0 nova_compute[254092]: 2025-11-25 17:10:13.898 254096 DEBUG nova.network.neutron [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Successfully created port: 3af510d7-9800-4ba2-9c9f-f8ded924314f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 17:10:13 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:10:13 compute-0 ceph-mon[74985]: pgmap v2600: 321 pgs: 321 active+clean; 121 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 124 KiB/s rd, 673 KiB/s wr, 29 op/s
Nov 25 17:10:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2601: 321 pgs: 321 active+clean; 121 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s wr, 0 op/s
Nov 25 17:10:14 compute-0 nova_compute[254092]: 2025-11-25 17:10:14.983 254096 DEBUG nova.network.neutron [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Successfully updated port: 3af510d7-9800-4ba2-9c9f-f8ded924314f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:10:14 compute-0 nova_compute[254092]: 2025-11-25 17:10:14.997 254096 DEBUG oslo_concurrency.lockutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "refresh_cache-678bebc8-318d-4332-b89f-f86ac5f187c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:10:14 compute-0 nova_compute[254092]: 2025-11-25 17:10:14.998 254096 DEBUG oslo_concurrency.lockutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquired lock "refresh_cache-678bebc8-318d-4332-b89f-f86ac5f187c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:10:14 compute-0 nova_compute[254092]: 2025-11-25 17:10:14.998 254096 DEBUG nova.network.neutron [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 17:10:15 compute-0 nova_compute[254092]: 2025-11-25 17:10:15.078 254096 DEBUG nova.compute.manager [req-44e1e55e-a3cf-494c-b018-016341f82089 req-3b35de70-65d1-46c5-bc73-bc1d9d36474d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Received event network-changed-3af510d7-9800-4ba2-9c9f-f8ded924314f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:10:15 compute-0 nova_compute[254092]: 2025-11-25 17:10:15.078 254096 DEBUG nova.compute.manager [req-44e1e55e-a3cf-494c-b018-016341f82089 req-3b35de70-65d1-46c5-bc73-bc1d9d36474d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Refreshing instance network info cache due to event network-changed-3af510d7-9800-4ba2-9c9f-f8ded924314f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:10:15 compute-0 nova_compute[254092]: 2025-11-25 17:10:15.079 254096 DEBUG oslo_concurrency.lockutils [req-44e1e55e-a3cf-494c-b018-016341f82089 req-3b35de70-65d1-46c5-bc73-bc1d9d36474d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-678bebc8-318d-4332-b89f-f86ac5f187c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:10:15 compute-0 nova_compute[254092]: 2025-11-25 17:10:15.128 254096 DEBUG nova.network.neutron [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:10:15 compute-0 nova_compute[254092]: 2025-11-25 17:10:15.565 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:15 compute-0 sudo[399427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:10:15 compute-0 sudo[399427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:10:15 compute-0 sudo[399427]: pam_unix(sudo:session): session closed for user root
Nov 25 17:10:15 compute-0 sudo[399452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:10:15 compute-0 sudo[399452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:10:15 compute-0 sudo[399452]: pam_unix(sudo:session): session closed for user root
Nov 25 17:10:15 compute-0 sudo[399477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:10:15 compute-0 sudo[399477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:10:15 compute-0 sudo[399477]: pam_unix(sudo:session): session closed for user root
Nov 25 17:10:15 compute-0 nova_compute[254092]: 2025-11-25 17:10:15.912 254096 DEBUG nova.network.neutron [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Updating instance_info_cache with network_info: [{"id": "3af510d7-9800-4ba2-9c9f-f8ded924314f", "address": "fa:16:3e:b7:9a:99", "network": {"id": "169b0886-fc13-49ff-b4f6-0f14f908ad1c", "bridge": "br-int", "label": "tempest-network-smoke--1110521818", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af510d7-98", "ovs_interfaceid": "3af510d7-9800-4ba2-9c9f-f8ded924314f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:10:15 compute-0 sudo[399502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:10:15 compute-0 sudo[399502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:10:15 compute-0 nova_compute[254092]: 2025-11-25 17:10:15.931 254096 DEBUG oslo_concurrency.lockutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Releasing lock "refresh_cache-678bebc8-318d-4332-b89f-f86ac5f187c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:10:15 compute-0 nova_compute[254092]: 2025-11-25 17:10:15.932 254096 DEBUG nova.compute.manager [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Instance network_info: |[{"id": "3af510d7-9800-4ba2-9c9f-f8ded924314f", "address": "fa:16:3e:b7:9a:99", "network": {"id": "169b0886-fc13-49ff-b4f6-0f14f908ad1c", "bridge": "br-int", "label": "tempest-network-smoke--1110521818", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af510d7-98", "ovs_interfaceid": "3af510d7-9800-4ba2-9c9f-f8ded924314f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 17:10:15 compute-0 nova_compute[254092]: 2025-11-25 17:10:15.933 254096 DEBUG oslo_concurrency.lockutils [req-44e1e55e-a3cf-494c-b018-016341f82089 req-3b35de70-65d1-46c5-bc73-bc1d9d36474d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-678bebc8-318d-4332-b89f-f86ac5f187c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:10:15 compute-0 nova_compute[254092]: 2025-11-25 17:10:15.933 254096 DEBUG nova.network.neutron [req-44e1e55e-a3cf-494c-b018-016341f82089 req-3b35de70-65d1-46c5-bc73-bc1d9d36474d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Refreshing network info cache for port 3af510d7-9800-4ba2-9c9f-f8ded924314f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:10:15 compute-0 nova_compute[254092]: 2025-11-25 17:10:15.938 254096 DEBUG nova.virt.libvirt.driver [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Start _get_guest_xml network_info=[{"id": "3af510d7-9800-4ba2-9c9f-f8ded924314f", "address": "fa:16:3e:b7:9a:99", "network": {"id": "169b0886-fc13-49ff-b4f6-0f14f908ad1c", "bridge": "br-int", "label": "tempest-network-smoke--1110521818", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af510d7-98", "ovs_interfaceid": "3af510d7-9800-4ba2-9c9f-f8ded924314f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 17:10:15 compute-0 nova_compute[254092]: 2025-11-25 17:10:15.944 254096 WARNING nova.virt.libvirt.driver [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:10:15 compute-0 nova_compute[254092]: 2025-11-25 17:10:15.949 254096 DEBUG nova.virt.libvirt.host [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 17:10:15 compute-0 nova_compute[254092]: 2025-11-25 17:10:15.950 254096 DEBUG nova.virt.libvirt.host [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 17:10:15 compute-0 nova_compute[254092]: 2025-11-25 17:10:15.953 254096 DEBUG nova.virt.libvirt.host [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 17:10:15 compute-0 nova_compute[254092]: 2025-11-25 17:10:15.954 254096 DEBUG nova.virt.libvirt.host [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 17:10:15 compute-0 nova_compute[254092]: 2025-11-25 17:10:15.954 254096 DEBUG nova.virt.libvirt.driver [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 17:10:15 compute-0 nova_compute[254092]: 2025-11-25 17:10:15.955 254096 DEBUG nova.virt.hardware [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 17:10:15 compute-0 nova_compute[254092]: 2025-11-25 17:10:15.955 254096 DEBUG nova.virt.hardware [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 17:10:15 compute-0 nova_compute[254092]: 2025-11-25 17:10:15.955 254096 DEBUG nova.virt.hardware [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 17:10:15 compute-0 nova_compute[254092]: 2025-11-25 17:10:15.956 254096 DEBUG nova.virt.hardware [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 17:10:15 compute-0 nova_compute[254092]: 2025-11-25 17:10:15.956 254096 DEBUG nova.virt.hardware [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 17:10:15 compute-0 nova_compute[254092]: 2025-11-25 17:10:15.956 254096 DEBUG nova.virt.hardware [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 17:10:15 compute-0 nova_compute[254092]: 2025-11-25 17:10:15.957 254096 DEBUG nova.virt.hardware [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 17:10:15 compute-0 nova_compute[254092]: 2025-11-25 17:10:15.957 254096 DEBUG nova.virt.hardware [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 17:10:15 compute-0 nova_compute[254092]: 2025-11-25 17:10:15.957 254096 DEBUG nova.virt.hardware [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 17:10:15 compute-0 nova_compute[254092]: 2025-11-25 17:10:15.958 254096 DEBUG nova.virt.hardware [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 17:10:15 compute-0 nova_compute[254092]: 2025-11-25 17:10:15.958 254096 DEBUG nova.virt.hardware [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 17:10:15 compute-0 nova_compute[254092]: 2025-11-25 17:10:15.962 254096 DEBUG oslo_concurrency.processutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:10:15 compute-0 ceph-mon[74985]: pgmap v2601: 321 pgs: 321 active+clean; 121 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s wr, 0 op/s
Nov 25 17:10:16 compute-0 nova_compute[254092]: 2025-11-25 17:10:16.252 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:16 compute-0 sudo[399502]: pam_unix(sudo:session): session closed for user root
Nov 25 17:10:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:10:16 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/664822610' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:10:16 compute-0 nova_compute[254092]: 2025-11-25 17:10:16.411 254096 DEBUG oslo_concurrency.processutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:10:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:10:16 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:10:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:10:16 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:10:16 compute-0 nova_compute[254092]: 2025-11-25 17:10:16.467 254096 DEBUG nova.storage.rbd_utils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 678bebc8-318d-4332-b89f-f86ac5f187c4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:10:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:10:16 compute-0 nova_compute[254092]: 2025-11-25 17:10:16.471 254096 DEBUG oslo_concurrency.processutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:10:16 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:10:16 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev c4ee6f04-d0a4-4564-b4fc-bc42d32c4f1e does not exist
Nov 25 17:10:16 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev df6765ca-fa8a-44a7-b90d-19745477817f does not exist
Nov 25 17:10:16 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev b85cc2d4-1e1b-4520-8759-1afd4c8ee327 does not exist
Nov 25 17:10:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:10:16 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:10:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:10:16 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:10:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:10:16 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:10:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2602: 321 pgs: 321 active+clean; 167 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:10:16 compute-0 sudo[399598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:10:16 compute-0 sudo[399598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:10:16 compute-0 sudo[399598]: pam_unix(sudo:session): session closed for user root
Nov 25 17:10:16 compute-0 sudo[399623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:10:16 compute-0 sudo[399623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:10:16 compute-0 sudo[399623]: pam_unix(sudo:session): session closed for user root
Nov 25 17:10:16 compute-0 sudo[399667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:10:16 compute-0 sudo[399667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:10:16 compute-0 sudo[399667]: pam_unix(sudo:session): session closed for user root
Nov 25 17:10:16 compute-0 sudo[399692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:10:16 compute-0 sudo[399692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:10:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:10:16 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3214018862' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:10:16 compute-0 nova_compute[254092]: 2025-11-25 17:10:16.910 254096 DEBUG oslo_concurrency.processutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:10:16 compute-0 nova_compute[254092]: 2025-11-25 17:10:16.912 254096 DEBUG nova.virt.libvirt.vif [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:10:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1329077363',display_name='tempest-TestNetworkBasicOps-server-1329077363',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1329077363',id=134,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLFAq/FVSBvI83QmbRYXeQD5oPtImBiS/J8S4tENTC3HiqmpnLQgZSgQo5Q3eg9KB495TzA7SZganOs6ca4kdDUFsCruPnoCgpH1Af7eq+g3pVeBR/yIGYrZrs0yR9s3UA==',key_name='tempest-TestNetworkBasicOps-801135771',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-k9efxcoe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:10:12Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=678bebc8-318d-4332-b89f-f86ac5f187c4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3af510d7-9800-4ba2-9c9f-f8ded924314f", "address": "fa:16:3e:b7:9a:99", "network": {"id": "169b0886-fc13-49ff-b4f6-0f14f908ad1c", "bridge": "br-int", "label": "tempest-network-smoke--1110521818", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af510d7-98", "ovs_interfaceid": "3af510d7-9800-4ba2-9c9f-f8ded924314f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:10:16 compute-0 nova_compute[254092]: 2025-11-25 17:10:16.912 254096 DEBUG nova.network.os_vif_util [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "3af510d7-9800-4ba2-9c9f-f8ded924314f", "address": "fa:16:3e:b7:9a:99", "network": {"id": "169b0886-fc13-49ff-b4f6-0f14f908ad1c", "bridge": "br-int", "label": "tempest-network-smoke--1110521818", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af510d7-98", "ovs_interfaceid": "3af510d7-9800-4ba2-9c9f-f8ded924314f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:10:16 compute-0 nova_compute[254092]: 2025-11-25 17:10:16.913 254096 DEBUG nova.network.os_vif_util [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:9a:99,bridge_name='br-int',has_traffic_filtering=True,id=3af510d7-9800-4ba2-9c9f-f8ded924314f,network=Network(169b0886-fc13-49ff-b4f6-0f14f908ad1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3af510d7-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:10:16 compute-0 nova_compute[254092]: 2025-11-25 17:10:16.915 254096 DEBUG nova.objects.instance [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'pci_devices' on Instance uuid 678bebc8-318d-4332-b89f-f86ac5f187c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:10:16 compute-0 nova_compute[254092]: 2025-11-25 17:10:16.929 254096 DEBUG nova.virt.libvirt.driver [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] End _get_guest_xml xml=<domain type="kvm">
Nov 25 17:10:16 compute-0 nova_compute[254092]:   <uuid>678bebc8-318d-4332-b89f-f86ac5f187c4</uuid>
Nov 25 17:10:16 compute-0 nova_compute[254092]:   <name>instance-00000086</name>
Nov 25 17:10:16 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 17:10:16 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 17:10:16 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:10:16 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:       <nova:name>tempest-TestNetworkBasicOps-server-1329077363</nova:name>
Nov 25 17:10:16 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 17:10:15</nova:creationTime>
Nov 25 17:10:16 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 17:10:16 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 17:10:16 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 17:10:16 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 17:10:16 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:10:16 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 17:10:16 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 17:10:16 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 17:10:16 compute-0 nova_compute[254092]:         <nova:user uuid="e13382ac65e94c7b8a89e67de0e754df">tempest-TestNetworkBasicOps-901857678-project-member</nova:user>
Nov 25 17:10:16 compute-0 nova_compute[254092]:         <nova:project uuid="31307e1076a54d1da7cf2b1a4e0b0731">tempest-TestNetworkBasicOps-901857678</nova:project>
Nov 25 17:10:16 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 17:10:16 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 17:10:16 compute-0 nova_compute[254092]:         <nova:port uuid="3af510d7-9800-4ba2-9c9f-f8ded924314f">
Nov 25 17:10:16 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 17:10:16 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 17:10:16 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:10:16 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 17:10:16 compute-0 nova_compute[254092]:     <system>
Nov 25 17:10:16 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 17:10:16 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 17:10:16 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:10:16 compute-0 nova_compute[254092]:       <entry name="serial">678bebc8-318d-4332-b89f-f86ac5f187c4</entry>
Nov 25 17:10:16 compute-0 nova_compute[254092]:       <entry name="uuid">678bebc8-318d-4332-b89f-f86ac5f187c4</entry>
Nov 25 17:10:16 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     </system>
Nov 25 17:10:16 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:10:16 compute-0 nova_compute[254092]:   <os>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:   </os>
Nov 25 17:10:16 compute-0 nova_compute[254092]:   <features>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:   </features>
Nov 25 17:10:16 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 17:10:16 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:10:16 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 17:10:16 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:10:16 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 17:10:16 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/678bebc8-318d-4332-b89f-f86ac5f187c4_disk">
Nov 25 17:10:16 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:       </source>
Nov 25 17:10:16 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:10:16 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:10:16 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 17:10:16 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/678bebc8-318d-4332-b89f-f86ac5f187c4_disk.config">
Nov 25 17:10:16 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:       </source>
Nov 25 17:10:16 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:10:16 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:10:16 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 17:10:16 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:b7:9a:99"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:       <target dev="tap3af510d7-98"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 17:10:16 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/678bebc8-318d-4332-b89f-f86ac5f187c4/console.log" append="off"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     <video>
Nov 25 17:10:16 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     </video>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 17:10:16 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 17:10:16 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 17:10:16 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:10:16 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:10:16 compute-0 nova_compute[254092]: </domain>
Nov 25 17:10:16 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 17:10:16 compute-0 nova_compute[254092]: 2025-11-25 17:10:16.931 254096 DEBUG nova.compute.manager [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Preparing to wait for external event network-vif-plugged-3af510d7-9800-4ba2-9c9f-f8ded924314f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 17:10:16 compute-0 nova_compute[254092]: 2025-11-25 17:10:16.931 254096 DEBUG oslo_concurrency.lockutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "678bebc8-318d-4332-b89f-f86ac5f187c4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:10:16 compute-0 nova_compute[254092]: 2025-11-25 17:10:16.932 254096 DEBUG oslo_concurrency.lockutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "678bebc8-318d-4332-b89f-f86ac5f187c4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:10:16 compute-0 nova_compute[254092]: 2025-11-25 17:10:16.932 254096 DEBUG oslo_concurrency.lockutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "678bebc8-318d-4332-b89f-f86ac5f187c4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:10:16 compute-0 nova_compute[254092]: 2025-11-25 17:10:16.933 254096 DEBUG nova.virt.libvirt.vif [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:10:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1329077363',display_name='tempest-TestNetworkBasicOps-server-1329077363',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1329077363',id=134,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLFAq/FVSBvI83QmbRYXeQD5oPtImBiS/J8S4tENTC3HiqmpnLQgZSgQo5Q3eg9KB495TzA7SZganOs6ca4kdDUFsCruPnoCgpH1Af7eq+g3pVeBR/yIGYrZrs0yR9s3UA==',key_name='tempest-TestNetworkBasicOps-801135771',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-k9efxcoe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:10:12Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=678bebc8-318d-4332-b89f-f86ac5f187c4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3af510d7-9800-4ba2-9c9f-f8ded924314f", "address": "fa:16:3e:b7:9a:99", "network": {"id": "169b0886-fc13-49ff-b4f6-0f14f908ad1c", "bridge": "br-int", "label": "tempest-network-smoke--1110521818", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af510d7-98", "ovs_interfaceid": "3af510d7-9800-4ba2-9c9f-f8ded924314f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:10:16 compute-0 nova_compute[254092]: 2025-11-25 17:10:16.933 254096 DEBUG nova.network.os_vif_util [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "3af510d7-9800-4ba2-9c9f-f8ded924314f", "address": "fa:16:3e:b7:9a:99", "network": {"id": "169b0886-fc13-49ff-b4f6-0f14f908ad1c", "bridge": "br-int", "label": "tempest-network-smoke--1110521818", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af510d7-98", "ovs_interfaceid": "3af510d7-9800-4ba2-9c9f-f8ded924314f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:10:16 compute-0 nova_compute[254092]: 2025-11-25 17:10:16.934 254096 DEBUG nova.network.os_vif_util [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:9a:99,bridge_name='br-int',has_traffic_filtering=True,id=3af510d7-9800-4ba2-9c9f-f8ded924314f,network=Network(169b0886-fc13-49ff-b4f6-0f14f908ad1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3af510d7-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:10:16 compute-0 nova_compute[254092]: 2025-11-25 17:10:16.934 254096 DEBUG os_vif [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:9a:99,bridge_name='br-int',has_traffic_filtering=True,id=3af510d7-9800-4ba2-9c9f-f8ded924314f,network=Network(169b0886-fc13-49ff-b4f6-0f14f908ad1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3af510d7-98') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:10:16 compute-0 nova_compute[254092]: 2025-11-25 17:10:16.935 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:16 compute-0 nova_compute[254092]: 2025-11-25 17:10:16.936 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:10:16 compute-0 nova_compute[254092]: 2025-11-25 17:10:16.936 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:10:16 compute-0 nova_compute[254092]: 2025-11-25 17:10:16.941 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:16 compute-0 nova_compute[254092]: 2025-11-25 17:10:16.941 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3af510d7-98, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:10:16 compute-0 nova_compute[254092]: 2025-11-25 17:10:16.942 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3af510d7-98, col_values=(('external_ids', {'iface-id': '3af510d7-9800-4ba2-9c9f-f8ded924314f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b7:9a:99', 'vm-uuid': '678bebc8-318d-4332-b89f-f86ac5f187c4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:10:16 compute-0 nova_compute[254092]: 2025-11-25 17:10:16.943 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:16 compute-0 NetworkManager[48891]: <info>  [1764090616.9445] manager: (tap3af510d7-98): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/569)
Nov 25 17:10:16 compute-0 nova_compute[254092]: 2025-11-25 17:10:16.947 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:10:16 compute-0 nova_compute[254092]: 2025-11-25 17:10:16.951 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:16 compute-0 nova_compute[254092]: 2025-11-25 17:10:16.952 254096 INFO os_vif [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:9a:99,bridge_name='br-int',has_traffic_filtering=True,id=3af510d7-9800-4ba2-9c9f-f8ded924314f,network=Network(169b0886-fc13-49ff-b4f6-0f14f908ad1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3af510d7-98')
Nov 25 17:10:17 compute-0 nova_compute[254092]: 2025-11-25 17:10:17.106 254096 DEBUG nova.virt.libvirt.driver [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:10:17 compute-0 nova_compute[254092]: 2025-11-25 17:10:17.106 254096 DEBUG nova.virt.libvirt.driver [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:10:17 compute-0 nova_compute[254092]: 2025-11-25 17:10:17.106 254096 DEBUG nova.virt.libvirt.driver [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] No VIF found with MAC fa:16:3e:b7:9a:99, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:10:17 compute-0 nova_compute[254092]: 2025-11-25 17:10:17.107 254096 INFO nova.virt.libvirt.driver [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Using config drive
Nov 25 17:10:17 compute-0 nova_compute[254092]: 2025-11-25 17:10:17.129 254096 DEBUG nova.storage.rbd_utils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 678bebc8-318d-4332-b89f-f86ac5f187c4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:10:17 compute-0 podman[399762]: 2025-11-25 17:10:17.110673664 +0000 UTC m=+0.032850942 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:10:17 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/664822610' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:10:17 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:10:17 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:10:17 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:10:17 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:10:17 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:10:17 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:10:17 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3214018862' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:10:17 compute-0 podman[399762]: 2025-11-25 17:10:17.408845392 +0000 UTC m=+0.331022580 container create 8aaebb540299481fd7d137c5e2dea146892c1450f0e1ea5ae9d755e16efa2d57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:10:17 compute-0 systemd[1]: Started libpod-conmon-8aaebb540299481fd7d137c5e2dea146892c1450f0e1ea5ae9d755e16efa2d57.scope.
Nov 25 17:10:17 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:10:17 compute-0 nova_compute[254092]: 2025-11-25 17:10:17.565 254096 INFO nova.virt.libvirt.driver [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Creating config drive at /var/lib/nova/instances/678bebc8-318d-4332-b89f-f86ac5f187c4/disk.config
Nov 25 17:10:17 compute-0 nova_compute[254092]: 2025-11-25 17:10:17.573 254096 DEBUG oslo_concurrency.processutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/678bebc8-318d-4332-b89f-f86ac5f187c4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphufrjpdg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:10:17 compute-0 nova_compute[254092]: 2025-11-25 17:10:17.723 254096 DEBUG oslo_concurrency.processutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/678bebc8-318d-4332-b89f-f86ac5f187c4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphufrjpdg" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:10:17 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #123. Immutable memtables: 0.
Nov 25 17:10:17 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:10:17.726694) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 17:10:17 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 123
Nov 25 17:10:17 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090617726825, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 1007, "num_deletes": 251, "total_data_size": 1375693, "memory_usage": 1393936, "flush_reason": "Manual Compaction"}
Nov 25 17:10:17 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #124: started
Nov 25 17:10:17 compute-0 podman[399762]: 2025-11-25 17:10:17.728706095 +0000 UTC m=+0.650883363 container init 8aaebb540299481fd7d137c5e2dea146892c1450f0e1ea5ae9d755e16efa2d57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_yonath, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:10:17 compute-0 podman[399762]: 2025-11-25 17:10:17.740163109 +0000 UTC m=+0.662340317 container start 8aaebb540299481fd7d137c5e2dea146892c1450f0e1ea5ae9d755e16efa2d57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_yonath, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 17:10:17 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090617744179, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 124, "file_size": 1351050, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 53554, "largest_seqno": 54560, "table_properties": {"data_size": 1346140, "index_size": 2434, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10943, "raw_average_key_size": 19, "raw_value_size": 1336226, "raw_average_value_size": 2420, "num_data_blocks": 109, "num_entries": 552, "num_filter_entries": 552, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764090528, "oldest_key_time": 1764090528, "file_creation_time": 1764090617, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 124, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:10:17 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 17542 microseconds, and 10290 cpu microseconds.
Nov 25 17:10:17 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:10:17 compute-0 podman[399762]: 2025-11-25 17:10:17.745778113 +0000 UTC m=+0.667955301 container attach 8aaebb540299481fd7d137c5e2dea146892c1450f0e1ea5ae9d755e16efa2d57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 17:10:17 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:10:17.744261) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #124: 1351050 bytes OK
Nov 25 17:10:17 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:10:17.744292) [db/memtable_list.cc:519] [default] Level-0 commit table #124 started
Nov 25 17:10:17 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:10:17.746237) [db/memtable_list.cc:722] [default] Level-0 commit table #124: memtable #1 done
Nov 25 17:10:17 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:10:17.746267) EVENT_LOG_v1 {"time_micros": 1764090617746257, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 17:10:17 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:10:17.746292) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 17:10:17 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 1370889, prev total WAL file size 1397763, number of live WAL files 2.
Nov 25 17:10:17 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000120.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:10:17 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:10:17.747553) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Nov 25 17:10:17 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 17:10:17 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [124(1319KB)], [122(8580KB)]
Nov 25 17:10:17 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090617747621, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [124], "files_L6": [122], "score": -1, "input_data_size": 10137369, "oldest_snapshot_seqno": -1}
Nov 25 17:10:17 compute-0 zen_yonath[399797]: 167 167
Nov 25 17:10:17 compute-0 systemd[1]: libpod-8aaebb540299481fd7d137c5e2dea146892c1450f0e1ea5ae9d755e16efa2d57.scope: Deactivated successfully.
Nov 25 17:10:17 compute-0 podman[399762]: 2025-11-25 17:10:17.752457397 +0000 UTC m=+0.674634615 container died 8aaebb540299481fd7d137c5e2dea146892c1450f0e1ea5ae9d755e16efa2d57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_yonath, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:10:17 compute-0 nova_compute[254092]: 2025-11-25 17:10:17.763 254096 DEBUG nova.storage.rbd_utils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] rbd image 678bebc8-318d-4332-b89f-f86ac5f187c4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:10:17 compute-0 nova_compute[254092]: 2025-11-25 17:10:17.773 254096 DEBUG oslo_concurrency.processutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/678bebc8-318d-4332-b89f-f86ac5f187c4/disk.config 678bebc8-318d-4332-b89f-f86ac5f187c4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:10:17 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #125: 7404 keys, 8409268 bytes, temperature: kUnknown
Nov 25 17:10:17 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090617810398, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 125, "file_size": 8409268, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8362928, "index_size": 26708, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18565, "raw_key_size": 194168, "raw_average_key_size": 26, "raw_value_size": 8233604, "raw_average_value_size": 1112, "num_data_blocks": 1034, "num_entries": 7404, "num_filter_entries": 7404, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764090617, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:10:17 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:10:17 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:10:17.810734) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 8409268 bytes
Nov 25 17:10:17 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:10:17.812125) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.4 rd, 133.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 8.4 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(13.7) write-amplify(6.2) OK, records in: 7918, records dropped: 514 output_compression: NoCompression
Nov 25 17:10:17 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:10:17.812144) EVENT_LOG_v1 {"time_micros": 1764090617812134, "job": 74, "event": "compaction_finished", "compaction_time_micros": 62800, "compaction_time_cpu_micros": 25989, "output_level": 6, "num_output_files": 1, "total_output_size": 8409268, "num_input_records": 7918, "num_output_records": 7404, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 17:10:17 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000124.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:10:17 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090617812549, "job": 74, "event": "table_file_deletion", "file_number": 124}
Nov 25 17:10:17 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:10:17 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090617815292, "job": 74, "event": "table_file_deletion", "file_number": 122}
Nov 25 17:10:17 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:10:17.747243) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:10:17 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:10:17.815437) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:10:17 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:10:17.815446) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:10:17 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:10:17.815448) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:10:17 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:10:17.815449) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:10:17 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:10:17.815451) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:10:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-495dcd64fe0b8475991de64555be3d3d2f1c2adabb93c1a00967205bb8046ccd-merged.mount: Deactivated successfully.
Nov 25 17:10:17 compute-0 podman[399762]: 2025-11-25 17:10:17.847056052 +0000 UTC m=+0.769233240 container remove 8aaebb540299481fd7d137c5e2dea146892c1450f0e1ea5ae9d755e16efa2d57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 17:10:17 compute-0 systemd[1]: libpod-conmon-8aaebb540299481fd7d137c5e2dea146892c1450f0e1ea5ae9d755e16efa2d57.scope: Deactivated successfully.
Nov 25 17:10:17 compute-0 nova_compute[254092]: 2025-11-25 17:10:17.966 254096 DEBUG oslo_concurrency.processutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/678bebc8-318d-4332-b89f-f86ac5f187c4/disk.config 678bebc8-318d-4332-b89f-f86ac5f187c4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.193s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:10:17 compute-0 nova_compute[254092]: 2025-11-25 17:10:17.968 254096 INFO nova.virt.libvirt.driver [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Deleting local config drive /var/lib/nova/instances/678bebc8-318d-4332-b89f-f86ac5f187c4/disk.config because it was imported into RBD.
Nov 25 17:10:18 compute-0 kernel: tap3af510d7-98: entered promiscuous mode
Nov 25 17:10:18 compute-0 NetworkManager[48891]: <info>  [1764090618.0202] manager: (tap3af510d7-98): new Tun device (/org/freedesktop/NetworkManager/Devices/570)
Nov 25 17:10:18 compute-0 podman[399862]: 2025-11-25 17:10:18.06250027 +0000 UTC m=+0.082795211 container create 9949995ab3092306394049d32f4a2c04cc370d44d3f06724e342e7fd923939e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_swirles, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:10:18 compute-0 systemd-udevd[399884]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:10:18 compute-0 ovn_controller[153477]: 2025-11-25T17:10:18Z|01389|binding|INFO|Claiming lport 3af510d7-9800-4ba2-9c9f-f8ded924314f for this chassis.
Nov 25 17:10:18 compute-0 ovn_controller[153477]: 2025-11-25T17:10:18Z|01390|binding|INFO|3af510d7-9800-4ba2-9c9f-f8ded924314f: Claiming fa:16:3e:b7:9a:99 10.100.0.4
Nov 25 17:10:18 compute-0 nova_compute[254092]: 2025-11-25 17:10:18.062 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.072 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:9a:99 10.100.0.4'], port_security=['fa:16:3e:b7:9a:99 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '678bebc8-318d-4332-b89f-f86ac5f187c4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-169b0886-fc13-49ff-b4f6-0f14f908ad1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '2', 'neutron:security_group_ids': '58ae50d2-6994-45e4-b2e6-36301d8443e3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=702c597c-9457-430d-aa7a-2d35c3cf306f, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=3af510d7-9800-4ba2-9c9f-f8ded924314f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.073 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 3af510d7-9800-4ba2-9c9f-f8ded924314f in datapath 169b0886-fc13-49ff-b4f6-0f14f908ad1c bound to our chassis
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.075 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 169b0886-fc13-49ff-b4f6-0f14f908ad1c
Nov 25 17:10:18 compute-0 NetworkManager[48891]: <info>  [1764090618.0792] device (tap3af510d7-98): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:10:18 compute-0 NetworkManager[48891]: <info>  [1764090618.0801] device (tap3af510d7-98): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:10:18 compute-0 ovn_controller[153477]: 2025-11-25T17:10:18Z|01391|binding|INFO|Setting lport 3af510d7-9800-4ba2-9c9f-f8ded924314f ovn-installed in OVS
Nov 25 17:10:18 compute-0 ovn_controller[153477]: 2025-11-25T17:10:18Z|01392|binding|INFO|Setting lport 3af510d7-9800-4ba2-9c9f-f8ded924314f up in Southbound
Nov 25 17:10:18 compute-0 nova_compute[254092]: 2025-11-25 17:10:18.083 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.090 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bb8f3677-a6d5-4f1f-ac72-39b5731b5180]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.093 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap169b0886-f1 in ovnmeta-169b0886-fc13-49ff-b4f6-0f14f908ad1c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.094 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap169b0886-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.095 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[405b23d8-b556-42f7-81b6-e2b2e878731d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.097 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c216c58c-b5b6-41de-8a25-0e5486e80e8a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:10:18 compute-0 systemd-machined[216343]: New machine qemu-168-instance-00000086.
Nov 25 17:10:18 compute-0 podman[399862]: 2025-11-25 17:10:18.003808691 +0000 UTC m=+0.024103662 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.108 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[20b32be6-81bb-40a6-bef9-53779502d7ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:10:18 compute-0 systemd[1]: Started libpod-conmon-9949995ab3092306394049d32f4a2c04cc370d44d3f06724e342e7fd923939e9.scope.
Nov 25 17:10:18 compute-0 systemd[1]: Started Virtual Machine qemu-168-instance-00000086.
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.131 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[49f10e76-11d1-4549-88bc-c92d3ee2ee28]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:10:18 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:10:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c007f9aba4306b50fe99bdac2b54e2588f5ab450ff4eda3e9004847d234fdda5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:10:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c007f9aba4306b50fe99bdac2b54e2588f5ab450ff4eda3e9004847d234fdda5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:10:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c007f9aba4306b50fe99bdac2b54e2588f5ab450ff4eda3e9004847d234fdda5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:10:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c007f9aba4306b50fe99bdac2b54e2588f5ab450ff4eda3e9004847d234fdda5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:10:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c007f9aba4306b50fe99bdac2b54e2588f5ab450ff4eda3e9004847d234fdda5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:10:18 compute-0 podman[399862]: 2025-11-25 17:10:18.158494784 +0000 UTC m=+0.178789745 container init 9949995ab3092306394049d32f4a2c04cc370d44d3f06724e342e7fd923939e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_swirles, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:10:18 compute-0 podman[399862]: 2025-11-25 17:10:18.168059866 +0000 UTC m=+0.188354807 container start 9949995ab3092306394049d32f4a2c04cc370d44d3f06724e342e7fd923939e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 17:10:18 compute-0 podman[399862]: 2025-11-25 17:10:18.171089709 +0000 UTC m=+0.191384660 container attach 9949995ab3092306394049d32f4a2c04cc370d44d3f06724e342e7fd923939e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_swirles, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.170 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f37e4aba-e206-419d-bc00-9029675bbf80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:10:18 compute-0 NetworkManager[48891]: <info>  [1764090618.1819] manager: (tap169b0886-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/571)
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.180 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[677feadb-8e49-4f3f-bc9b-29220171f9a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.218 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[7db7069d-9915-4547-9356-ba90538f2360]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.222 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[3ba65125-3325-45d0-bc9c-7e780c3380f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:10:18 compute-0 NetworkManager[48891]: <info>  [1764090618.2473] device (tap169b0886-f0): carrier: link connected
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.255 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e9bd4585-9e5c-4597-bfc5-192a216d68ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.272 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3f21508d-8d18-4fa5-a1ea-bd89162c6dfe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap169b0886-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:0e:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 407], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707583, 'reachable_time': 40120, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 399932, 'error': None, 'target': 'ovnmeta-169b0886-fc13-49ff-b4f6-0f14f908ad1c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:10:18 compute-0 ceph-mon[74985]: pgmap v2602: 321 pgs: 321 active+clean; 167 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.300 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c1e94c2a-6a93-4a67-a0fd-85a661d43ca0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec3:e7a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 707583, 'tstamp': 707583}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 399933, 'error': None, 'target': 'ovnmeta-169b0886-fc13-49ff-b4f6-0f14f908ad1c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.318 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[04c30405-c570-472a-8662-c9969fd1edd9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap169b0886-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:0e:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 407], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707583, 'reachable_time': 40120, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 399949, 'error': None, 'target': 'ovnmeta-169b0886-fc13-49ff-b4f6-0f14f908ad1c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.352 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5f3fff40-89ea-4366-9deb-615befda64fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.406 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c5af6afa-4f94-44a8-a189-3f85b98db01b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.407 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap169b0886-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.407 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.408 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap169b0886-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:10:18 compute-0 NetworkManager[48891]: <info>  [1764090618.4100] manager: (tap169b0886-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/572)
Nov 25 17:10:18 compute-0 nova_compute[254092]: 2025-11-25 17:10:18.410 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:18 compute-0 kernel: tap169b0886-f0: entered promiscuous mode
Nov 25 17:10:18 compute-0 nova_compute[254092]: 2025-11-25 17:10:18.412 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.413 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap169b0886-f0, col_values=(('external_ids', {'iface-id': 'c8371c10-cfa3-4f58-a300-b742148bc7d7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:10:18 compute-0 nova_compute[254092]: 2025-11-25 17:10:18.414 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:18 compute-0 ovn_controller[153477]: 2025-11-25T17:10:18Z|01393|binding|INFO|Releasing lport c8371c10-cfa3-4f58-a300-b742148bc7d7 from this chassis (sb_readonly=0)
Nov 25 17:10:18 compute-0 nova_compute[254092]: 2025-11-25 17:10:18.427 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.429 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/169b0886-fc13-49ff-b4f6-0f14f908ad1c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/169b0886-fc13-49ff-b4f6-0f14f908ad1c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.430 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5539389a-ed76-43d2-a3e2-a63e58ca5127]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.431 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]: global
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-169b0886-fc13-49ff-b4f6-0f14f908ad1c
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/169b0886-fc13-49ff-b4f6-0f14f908ad1c.pid.haproxy
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 169b0886-fc13-49ff-b4f6-0f14f908ad1c
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 17:10:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:18.431 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-169b0886-fc13-49ff-b4f6-0f14f908ad1c', 'env', 'PROCESS_TAG=haproxy-169b0886-fc13-49ff-b4f6-0f14f908ad1c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/169b0886-fc13-49ff-b4f6-0f14f908ad1c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 17:10:18 compute-0 nova_compute[254092]: 2025-11-25 17:10:18.469 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090618.4683244, 678bebc8-318d-4332-b89f-f86ac5f187c4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:10:18 compute-0 nova_compute[254092]: 2025-11-25 17:10:18.469 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] VM Started (Lifecycle Event)
Nov 25 17:10:18 compute-0 nova_compute[254092]: 2025-11-25 17:10:18.503 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:10:18 compute-0 nova_compute[254092]: 2025-11-25 17:10:18.507 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090618.46845, 678bebc8-318d-4332-b89f-f86ac5f187c4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:10:18 compute-0 nova_compute[254092]: 2025-11-25 17:10:18.507 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] VM Paused (Lifecycle Event)
Nov 25 17:10:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2603: 321 pgs: 321 active+clean; 167 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:10:18 compute-0 nova_compute[254092]: 2025-11-25 17:10:18.531 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:10:18 compute-0 nova_compute[254092]: 2025-11-25 17:10:18.535 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:10:18 compute-0 nova_compute[254092]: 2025-11-25 17:10:18.556 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:10:18 compute-0 nova_compute[254092]: 2025-11-25 17:10:18.806 254096 DEBUG nova.network.neutron [req-44e1e55e-a3cf-494c-b018-016341f82089 req-3b35de70-65d1-46c5-bc73-bc1d9d36474d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Updated VIF entry in instance network info cache for port 3af510d7-9800-4ba2-9c9f-f8ded924314f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:10:18 compute-0 nova_compute[254092]: 2025-11-25 17:10:18.806 254096 DEBUG nova.network.neutron [req-44e1e55e-a3cf-494c-b018-016341f82089 req-3b35de70-65d1-46c5-bc73-bc1d9d36474d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Updating instance_info_cache with network_info: [{"id": "3af510d7-9800-4ba2-9c9f-f8ded924314f", "address": "fa:16:3e:b7:9a:99", "network": {"id": "169b0886-fc13-49ff-b4f6-0f14f908ad1c", "bridge": "br-int", "label": "tempest-network-smoke--1110521818", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af510d7-98", "ovs_interfaceid": "3af510d7-9800-4ba2-9c9f-f8ded924314f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:10:18 compute-0 nova_compute[254092]: 2025-11-25 17:10:18.823 254096 DEBUG oslo_concurrency.lockutils [req-44e1e55e-a3cf-494c-b018-016341f82089 req-3b35de70-65d1-46c5-bc73-bc1d9d36474d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-678bebc8-318d-4332-b89f-f86ac5f187c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:10:18 compute-0 podman[400008]: 2025-11-25 17:10:18.831558114 +0000 UTC m=+0.081738162 container create 2166e05adba0cd7f21d11eeee04f0c9e3c7a19a1b0084ed6013fec89a50941a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-169b0886-fc13-49ff-b4f6-0f14f908ad1c, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 25 17:10:18 compute-0 podman[400008]: 2025-11-25 17:10:18.790789556 +0000 UTC m=+0.040969604 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 17:10:18 compute-0 systemd[1]: Started libpod-conmon-2166e05adba0cd7f21d11eeee04f0c9e3c7a19a1b0084ed6013fec89a50941a1.scope.
Nov 25 17:10:18 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:10:18 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:10:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26536350f4636af02cd64a2d8c5eb49f4c32108a7d2a493c1f2754a932dc988a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 17:10:18 compute-0 podman[400008]: 2025-11-25 17:10:18.939981409 +0000 UTC m=+0.190161417 container init 2166e05adba0cd7f21d11eeee04f0c9e3c7a19a1b0084ed6013fec89a50941a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-169b0886-fc13-49ff-b4f6-0f14f908ad1c, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 17:10:18 compute-0 nova_compute[254092]: 2025-11-25 17:10:18.939 254096 DEBUG nova.compute.manager [req-977d2c8d-ef3d-48cc-943d-00ddbffa1c8a req-41612a33-5193-439f-8e7c-be2a5efc8ad5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Received event network-vif-plugged-3af510d7-9800-4ba2-9c9f-f8ded924314f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:10:18 compute-0 nova_compute[254092]: 2025-11-25 17:10:18.940 254096 DEBUG oslo_concurrency.lockutils [req-977d2c8d-ef3d-48cc-943d-00ddbffa1c8a req-41612a33-5193-439f-8e7c-be2a5efc8ad5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "678bebc8-318d-4332-b89f-f86ac5f187c4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:10:18 compute-0 nova_compute[254092]: 2025-11-25 17:10:18.940 254096 DEBUG oslo_concurrency.lockutils [req-977d2c8d-ef3d-48cc-943d-00ddbffa1c8a req-41612a33-5193-439f-8e7c-be2a5efc8ad5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "678bebc8-318d-4332-b89f-f86ac5f187c4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:10:18 compute-0 nova_compute[254092]: 2025-11-25 17:10:18.940 254096 DEBUG oslo_concurrency.lockutils [req-977d2c8d-ef3d-48cc-943d-00ddbffa1c8a req-41612a33-5193-439f-8e7c-be2a5efc8ad5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "678bebc8-318d-4332-b89f-f86ac5f187c4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:10:18 compute-0 nova_compute[254092]: 2025-11-25 17:10:18.941 254096 DEBUG nova.compute.manager [req-977d2c8d-ef3d-48cc-943d-00ddbffa1c8a req-41612a33-5193-439f-8e7c-be2a5efc8ad5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Processing event network-vif-plugged-3af510d7-9800-4ba2-9c9f-f8ded924314f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 17:10:18 compute-0 nova_compute[254092]: 2025-11-25 17:10:18.941 254096 DEBUG nova.compute.manager [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 17:10:18 compute-0 nova_compute[254092]: 2025-11-25 17:10:18.945 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090618.9454968, 678bebc8-318d-4332-b89f-f86ac5f187c4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:10:18 compute-0 nova_compute[254092]: 2025-11-25 17:10:18.946 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] VM Resumed (Lifecycle Event)
Nov 25 17:10:18 compute-0 nova_compute[254092]: 2025-11-25 17:10:18.948 254096 DEBUG nova.virt.libvirt.driver [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 17:10:18 compute-0 podman[400008]: 2025-11-25 17:10:18.9491516 +0000 UTC m=+0.199331628 container start 2166e05adba0cd7f21d11eeee04f0c9e3c7a19a1b0084ed6013fec89a50941a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-169b0886-fc13-49ff-b4f6-0f14f908ad1c, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 25 17:10:18 compute-0 nova_compute[254092]: 2025-11-25 17:10:18.953 254096 INFO nova.virt.libvirt.driver [-] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Instance spawned successfully.
Nov 25 17:10:18 compute-0 nova_compute[254092]: 2025-11-25 17:10:18.954 254096 DEBUG nova.virt.libvirt.driver [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 17:10:18 compute-0 nova_compute[254092]: 2025-11-25 17:10:18.968 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:10:18 compute-0 neutron-haproxy-ovnmeta-169b0886-fc13-49ff-b4f6-0f14f908ad1c[400027]: [NOTICE]   (400035) : New worker (400037) forked
Nov 25 17:10:18 compute-0 neutron-haproxy-ovnmeta-169b0886-fc13-49ff-b4f6-0f14f908ad1c[400027]: [NOTICE]   (400035) : Loading success.
Nov 25 17:10:18 compute-0 nova_compute[254092]: 2025-11-25 17:10:18.976 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:10:18 compute-0 nova_compute[254092]: 2025-11-25 17:10:18.983 254096 DEBUG nova.virt.libvirt.driver [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:10:18 compute-0 nova_compute[254092]: 2025-11-25 17:10:18.983 254096 DEBUG nova.virt.libvirt.driver [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:10:18 compute-0 nova_compute[254092]: 2025-11-25 17:10:18.983 254096 DEBUG nova.virt.libvirt.driver [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:10:18 compute-0 nova_compute[254092]: 2025-11-25 17:10:18.984 254096 DEBUG nova.virt.libvirt.driver [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:10:18 compute-0 nova_compute[254092]: 2025-11-25 17:10:18.984 254096 DEBUG nova.virt.libvirt.driver [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:10:18 compute-0 nova_compute[254092]: 2025-11-25 17:10:18.984 254096 DEBUG nova.virt.libvirt.driver [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:10:19 compute-0 nova_compute[254092]: 2025-11-25 17:10:19.010 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:10:19 compute-0 nova_compute[254092]: 2025-11-25 17:10:19.044 254096 INFO nova.compute.manager [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Took 6.92 seconds to spawn the instance on the hypervisor.
Nov 25 17:10:19 compute-0 nova_compute[254092]: 2025-11-25 17:10:19.044 254096 DEBUG nova.compute.manager [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:10:19 compute-0 nova_compute[254092]: 2025-11-25 17:10:19.100 254096 INFO nova.compute.manager [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Took 7.81 seconds to build instance.
Nov 25 17:10:19 compute-0 nova_compute[254092]: 2025-11-25 17:10:19.115 254096 DEBUG oslo_concurrency.lockutils [None req-c9895af5-d8dd-4f3a-99a5-20480c5570f2 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "678bebc8-318d-4332-b89f-f86ac5f187c4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.909s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:10:19 compute-0 unruffled_swirles[399898]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:10:19 compute-0 unruffled_swirles[399898]: --> relative data size: 1.0
Nov 25 17:10:19 compute-0 unruffled_swirles[399898]: --> All data devices are unavailable
Nov 25 17:10:19 compute-0 systemd[1]: libpod-9949995ab3092306394049d32f4a2c04cc370d44d3f06724e342e7fd923939e9.scope: Deactivated successfully.
Nov 25 17:10:19 compute-0 systemd[1]: libpod-9949995ab3092306394049d32f4a2c04cc370d44d3f06724e342e7fd923939e9.scope: Consumed 1.000s CPU time.
Nov 25 17:10:19 compute-0 conmon[399898]: conmon 9949995ab30923063940 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9949995ab3092306394049d32f4a2c04cc370d44d3f06724e342e7fd923939e9.scope/container/memory.events
Nov 25 17:10:19 compute-0 podman[399862]: 2025-11-25 17:10:19.240731657 +0000 UTC m=+1.261026598 container died 9949995ab3092306394049d32f4a2c04cc370d44d3f06724e342e7fd923939e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 17:10:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-c007f9aba4306b50fe99bdac2b54e2588f5ab450ff4eda3e9004847d234fdda5-merged.mount: Deactivated successfully.
Nov 25 17:10:19 compute-0 podman[399862]: 2025-11-25 17:10:19.321161763 +0000 UTC m=+1.341456704 container remove 9949995ab3092306394049d32f4a2c04cc370d44d3f06724e342e7fd923939e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_swirles, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 17:10:19 compute-0 ceph-mon[74985]: pgmap v2603: 321 pgs: 321 active+clean; 167 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:10:19 compute-0 systemd[1]: libpod-conmon-9949995ab3092306394049d32f4a2c04cc370d44d3f06724e342e7fd923939e9.scope: Deactivated successfully.
Nov 25 17:10:19 compute-0 sudo[399692]: pam_unix(sudo:session): session closed for user root
Nov 25 17:10:19 compute-0 sudo[400074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:10:19 compute-0 sudo[400074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:10:19 compute-0 sudo[400074]: pam_unix(sudo:session): session closed for user root
Nov 25 17:10:19 compute-0 sudo[400099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:10:19 compute-0 sudo[400099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:10:19 compute-0 sudo[400099]: pam_unix(sudo:session): session closed for user root
Nov 25 17:10:19 compute-0 sudo[400124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:10:19 compute-0 sudo[400124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:10:19 compute-0 sudo[400124]: pam_unix(sudo:session): session closed for user root
Nov 25 17:10:19 compute-0 sudo[400149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:10:19 compute-0 sudo[400149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:10:19 compute-0 podman[400214]: 2025-11-25 17:10:19.927976507 +0000 UTC m=+0.034952450 container create 36208602e69236f1db8d940338e95a6aadc24cf1b56cbb86fff7c9252b626fde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 17:10:19 compute-0 systemd[1]: Started libpod-conmon-36208602e69236f1db8d940338e95a6aadc24cf1b56cbb86fff7c9252b626fde.scope.
Nov 25 17:10:19 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:10:20 compute-0 podman[400214]: 2025-11-25 17:10:19.912883773 +0000 UTC m=+0.019859746 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:10:20 compute-0 podman[400214]: 2025-11-25 17:10:20.010980964 +0000 UTC m=+0.117956937 container init 36208602e69236f1db8d940338e95a6aadc24cf1b56cbb86fff7c9252b626fde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mendeleev, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 17:10:20 compute-0 podman[400214]: 2025-11-25 17:10:20.017952455 +0000 UTC m=+0.124928398 container start 36208602e69236f1db8d940338e95a6aadc24cf1b56cbb86fff7c9252b626fde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mendeleev, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:10:20 compute-0 podman[400214]: 2025-11-25 17:10:20.021928564 +0000 UTC m=+0.128904497 container attach 36208602e69236f1db8d940338e95a6aadc24cf1b56cbb86fff7c9252b626fde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mendeleev, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:10:20 compute-0 sharp_mendeleev[400230]: 167 167
Nov 25 17:10:20 compute-0 systemd[1]: libpod-36208602e69236f1db8d940338e95a6aadc24cf1b56cbb86fff7c9252b626fde.scope: Deactivated successfully.
Nov 25 17:10:20 compute-0 conmon[400230]: conmon 36208602e69236f1db8d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-36208602e69236f1db8d940338e95a6aadc24cf1b56cbb86fff7c9252b626fde.scope/container/memory.events
Nov 25 17:10:20 compute-0 podman[400214]: 2025-11-25 17:10:20.024090124 +0000 UTC m=+0.131066067 container died 36208602e69236f1db8d940338e95a6aadc24cf1b56cbb86fff7c9252b626fde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:10:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-96e3c21acd7c2541299f77909fcb526fb1351245feef185e30ee7c2301940b60-merged.mount: Deactivated successfully.
Nov 25 17:10:20 compute-0 podman[400214]: 2025-11-25 17:10:20.059256968 +0000 UTC m=+0.166232911 container remove 36208602e69236f1db8d940338e95a6aadc24cf1b56cbb86fff7c9252b626fde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mendeleev, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 17:10:20 compute-0 systemd[1]: libpod-conmon-36208602e69236f1db8d940338e95a6aadc24cf1b56cbb86fff7c9252b626fde.scope: Deactivated successfully.
Nov 25 17:10:20 compute-0 podman[400255]: 2025-11-25 17:10:20.273311779 +0000 UTC m=+0.052433019 container create b0c7ea0be550749881bfd369c3d015e00c9866781fb47704c474ed25bab349d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:10:20 compute-0 systemd[1]: Started libpod-conmon-b0c7ea0be550749881bfd369c3d015e00c9866781fb47704c474ed25bab349d2.scope.
Nov 25 17:10:20 compute-0 podman[400255]: 2025-11-25 17:10:20.250248486 +0000 UTC m=+0.029369766 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:10:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:10:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0507b2f2a8b7b5d18c11ef34fb0fa627492f36f96fa7fd7cc06974d3ced13913/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:10:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0507b2f2a8b7b5d18c11ef34fb0fa627492f36f96fa7fd7cc06974d3ced13913/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:10:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0507b2f2a8b7b5d18c11ef34fb0fa627492f36f96fa7fd7cc06974d3ced13913/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:10:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0507b2f2a8b7b5d18c11ef34fb0fa627492f36f96fa7fd7cc06974d3ced13913/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:10:20 compute-0 podman[400255]: 2025-11-25 17:10:20.400310392 +0000 UTC m=+0.179431722 container init b0c7ea0be550749881bfd369c3d015e00c9866781fb47704c474ed25bab349d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bell, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 17:10:20 compute-0 podman[400255]: 2025-11-25 17:10:20.41043772 +0000 UTC m=+0.189558990 container start b0c7ea0be550749881bfd369c3d015e00c9866781fb47704c474ed25bab349d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:10:20 compute-0 podman[400255]: 2025-11-25 17:10:20.414531732 +0000 UTC m=+0.193653002 container attach b0c7ea0be550749881bfd369c3d015e00c9866781fb47704c474ed25bab349d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:10:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2604: 321 pgs: 321 active+clean; 167 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 972 KiB/s rd, 1.8 MiB/s wr, 63 op/s
Nov 25 17:10:20 compute-0 nova_compute[254092]: 2025-11-25 17:10:20.567 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:21 compute-0 nova_compute[254092]: 2025-11-25 17:10:21.011 254096 DEBUG nova.compute.manager [req-a9f77148-f3b8-48a1-b5ac-1e568040f897 req-fc14ad13-1854-4d8b-84ad-d49fbef02bbb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Received event network-vif-plugged-3af510d7-9800-4ba2-9c9f-f8ded924314f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:10:21 compute-0 nova_compute[254092]: 2025-11-25 17:10:21.011 254096 DEBUG oslo_concurrency.lockutils [req-a9f77148-f3b8-48a1-b5ac-1e568040f897 req-fc14ad13-1854-4d8b-84ad-d49fbef02bbb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "678bebc8-318d-4332-b89f-f86ac5f187c4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:10:21 compute-0 nova_compute[254092]: 2025-11-25 17:10:21.012 254096 DEBUG oslo_concurrency.lockutils [req-a9f77148-f3b8-48a1-b5ac-1e568040f897 req-fc14ad13-1854-4d8b-84ad-d49fbef02bbb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "678bebc8-318d-4332-b89f-f86ac5f187c4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:10:21 compute-0 nova_compute[254092]: 2025-11-25 17:10:21.012 254096 DEBUG oslo_concurrency.lockutils [req-a9f77148-f3b8-48a1-b5ac-1e568040f897 req-fc14ad13-1854-4d8b-84ad-d49fbef02bbb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "678bebc8-318d-4332-b89f-f86ac5f187c4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:10:21 compute-0 nova_compute[254092]: 2025-11-25 17:10:21.012 254096 DEBUG nova.compute.manager [req-a9f77148-f3b8-48a1-b5ac-1e568040f897 req-fc14ad13-1854-4d8b-84ad-d49fbef02bbb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] No waiting events found dispatching network-vif-plugged-3af510d7-9800-4ba2-9c9f-f8ded924314f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:10:21 compute-0 nova_compute[254092]: 2025-11-25 17:10:21.012 254096 WARNING nova.compute.manager [req-a9f77148-f3b8-48a1-b5ac-1e568040f897 req-fc14ad13-1854-4d8b-84ad-d49fbef02bbb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Received unexpected event network-vif-plugged-3af510d7-9800-4ba2-9c9f-f8ded924314f for instance with vm_state active and task_state None.
Nov 25 17:10:21 compute-0 festive_bell[400272]: {
Nov 25 17:10:21 compute-0 festive_bell[400272]:     "0": [
Nov 25 17:10:21 compute-0 festive_bell[400272]:         {
Nov 25 17:10:21 compute-0 festive_bell[400272]:             "devices": [
Nov 25 17:10:21 compute-0 festive_bell[400272]:                 "/dev/loop3"
Nov 25 17:10:21 compute-0 festive_bell[400272]:             ],
Nov 25 17:10:21 compute-0 festive_bell[400272]:             "lv_name": "ceph_lv0",
Nov 25 17:10:21 compute-0 festive_bell[400272]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:10:21 compute-0 festive_bell[400272]:             "lv_size": "21470642176",
Nov 25 17:10:21 compute-0 festive_bell[400272]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:10:21 compute-0 festive_bell[400272]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:10:21 compute-0 festive_bell[400272]:             "name": "ceph_lv0",
Nov 25 17:10:21 compute-0 festive_bell[400272]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:10:21 compute-0 festive_bell[400272]:             "tags": {
Nov 25 17:10:21 compute-0 festive_bell[400272]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:10:21 compute-0 festive_bell[400272]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:10:21 compute-0 festive_bell[400272]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:10:21 compute-0 festive_bell[400272]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:10:21 compute-0 festive_bell[400272]:                 "ceph.cluster_name": "ceph",
Nov 25 17:10:21 compute-0 festive_bell[400272]:                 "ceph.crush_device_class": "",
Nov 25 17:10:21 compute-0 festive_bell[400272]:                 "ceph.encrypted": "0",
Nov 25 17:10:21 compute-0 festive_bell[400272]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:10:21 compute-0 festive_bell[400272]:                 "ceph.osd_id": "0",
Nov 25 17:10:21 compute-0 festive_bell[400272]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:10:21 compute-0 festive_bell[400272]:                 "ceph.type": "block",
Nov 25 17:10:21 compute-0 festive_bell[400272]:                 "ceph.vdo": "0"
Nov 25 17:10:21 compute-0 festive_bell[400272]:             },
Nov 25 17:10:21 compute-0 festive_bell[400272]:             "type": "block",
Nov 25 17:10:21 compute-0 festive_bell[400272]:             "vg_name": "ceph_vg0"
Nov 25 17:10:21 compute-0 festive_bell[400272]:         }
Nov 25 17:10:21 compute-0 festive_bell[400272]:     ],
Nov 25 17:10:21 compute-0 festive_bell[400272]:     "1": [
Nov 25 17:10:21 compute-0 festive_bell[400272]:         {
Nov 25 17:10:21 compute-0 festive_bell[400272]:             "devices": [
Nov 25 17:10:21 compute-0 festive_bell[400272]:                 "/dev/loop4"
Nov 25 17:10:21 compute-0 festive_bell[400272]:             ],
Nov 25 17:10:21 compute-0 festive_bell[400272]:             "lv_name": "ceph_lv1",
Nov 25 17:10:21 compute-0 festive_bell[400272]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:10:21 compute-0 festive_bell[400272]:             "lv_size": "21470642176",
Nov 25 17:10:21 compute-0 festive_bell[400272]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:10:21 compute-0 festive_bell[400272]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:10:21 compute-0 festive_bell[400272]:             "name": "ceph_lv1",
Nov 25 17:10:21 compute-0 festive_bell[400272]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:10:21 compute-0 festive_bell[400272]:             "tags": {
Nov 25 17:10:21 compute-0 festive_bell[400272]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:10:21 compute-0 festive_bell[400272]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:10:21 compute-0 festive_bell[400272]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:10:21 compute-0 festive_bell[400272]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:10:21 compute-0 festive_bell[400272]:                 "ceph.cluster_name": "ceph",
Nov 25 17:10:21 compute-0 festive_bell[400272]:                 "ceph.crush_device_class": "",
Nov 25 17:10:21 compute-0 festive_bell[400272]:                 "ceph.encrypted": "0",
Nov 25 17:10:21 compute-0 festive_bell[400272]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:10:21 compute-0 festive_bell[400272]:                 "ceph.osd_id": "1",
Nov 25 17:10:21 compute-0 festive_bell[400272]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:10:21 compute-0 festive_bell[400272]:                 "ceph.type": "block",
Nov 25 17:10:21 compute-0 festive_bell[400272]:                 "ceph.vdo": "0"
Nov 25 17:10:21 compute-0 festive_bell[400272]:             },
Nov 25 17:10:21 compute-0 festive_bell[400272]:             "type": "block",
Nov 25 17:10:21 compute-0 festive_bell[400272]:             "vg_name": "ceph_vg1"
Nov 25 17:10:21 compute-0 festive_bell[400272]:         }
Nov 25 17:10:21 compute-0 festive_bell[400272]:     ],
Nov 25 17:10:21 compute-0 festive_bell[400272]:     "2": [
Nov 25 17:10:21 compute-0 festive_bell[400272]:         {
Nov 25 17:10:21 compute-0 festive_bell[400272]:             "devices": [
Nov 25 17:10:21 compute-0 festive_bell[400272]:                 "/dev/loop5"
Nov 25 17:10:21 compute-0 festive_bell[400272]:             ],
Nov 25 17:10:21 compute-0 festive_bell[400272]:             "lv_name": "ceph_lv2",
Nov 25 17:10:21 compute-0 festive_bell[400272]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:10:21 compute-0 festive_bell[400272]:             "lv_size": "21470642176",
Nov 25 17:10:21 compute-0 festive_bell[400272]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:10:21 compute-0 festive_bell[400272]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:10:21 compute-0 festive_bell[400272]:             "name": "ceph_lv2",
Nov 25 17:10:21 compute-0 festive_bell[400272]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:10:21 compute-0 festive_bell[400272]:             "tags": {
Nov 25 17:10:21 compute-0 festive_bell[400272]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:10:21 compute-0 festive_bell[400272]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:10:21 compute-0 festive_bell[400272]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:10:21 compute-0 festive_bell[400272]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:10:21 compute-0 festive_bell[400272]:                 "ceph.cluster_name": "ceph",
Nov 25 17:10:21 compute-0 festive_bell[400272]:                 "ceph.crush_device_class": "",
Nov 25 17:10:21 compute-0 festive_bell[400272]:                 "ceph.encrypted": "0",
Nov 25 17:10:21 compute-0 festive_bell[400272]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:10:21 compute-0 festive_bell[400272]:                 "ceph.osd_id": "2",
Nov 25 17:10:21 compute-0 festive_bell[400272]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:10:21 compute-0 festive_bell[400272]:                 "ceph.type": "block",
Nov 25 17:10:21 compute-0 festive_bell[400272]:                 "ceph.vdo": "0"
Nov 25 17:10:21 compute-0 festive_bell[400272]:             },
Nov 25 17:10:21 compute-0 festive_bell[400272]:             "type": "block",
Nov 25 17:10:21 compute-0 festive_bell[400272]:             "vg_name": "ceph_vg2"
Nov 25 17:10:21 compute-0 festive_bell[400272]:         }
Nov 25 17:10:21 compute-0 festive_bell[400272]:     ]
Nov 25 17:10:21 compute-0 festive_bell[400272]: }
Nov 25 17:10:21 compute-0 systemd[1]: libpod-b0c7ea0be550749881bfd369c3d015e00c9866781fb47704c474ed25bab349d2.scope: Deactivated successfully.
Nov 25 17:10:21 compute-0 podman[400255]: 2025-11-25 17:10:21.196935232 +0000 UTC m=+0.976056522 container died b0c7ea0be550749881bfd369c3d015e00c9866781fb47704c474ed25bab349d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bell, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:10:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-0507b2f2a8b7b5d18c11ef34fb0fa627492f36f96fa7fd7cc06974d3ced13913-merged.mount: Deactivated successfully.
Nov 25 17:10:21 compute-0 podman[400255]: 2025-11-25 17:10:21.265687668 +0000 UTC m=+1.044808908 container remove b0c7ea0be550749881bfd369c3d015e00c9866781fb47704c474ed25bab349d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:10:21 compute-0 systemd[1]: libpod-conmon-b0c7ea0be550749881bfd369c3d015e00c9866781fb47704c474ed25bab349d2.scope: Deactivated successfully.
Nov 25 17:10:21 compute-0 sudo[400149]: pam_unix(sudo:session): session closed for user root
Nov 25 17:10:21 compute-0 sudo[400295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:10:21 compute-0 sudo[400295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:10:21 compute-0 sudo[400295]: pam_unix(sudo:session): session closed for user root
Nov 25 17:10:21 compute-0 sudo[400320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:10:21 compute-0 sudo[400320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:10:21 compute-0 sudo[400320]: pam_unix(sudo:session): session closed for user root
Nov 25 17:10:21 compute-0 sudo[400345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:10:21 compute-0 sudo[400345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:10:21 compute-0 sudo[400345]: pam_unix(sudo:session): session closed for user root
Nov 25 17:10:21 compute-0 sudo[400370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:10:21 compute-0 sudo[400370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:10:21 compute-0 ceph-mon[74985]: pgmap v2604: 321 pgs: 321 active+clean; 167 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 972 KiB/s rd, 1.8 MiB/s wr, 63 op/s
Nov 25 17:10:21 compute-0 podman[400435]: 2025-11-25 17:10:21.877293823 +0000 UTC m=+0.039527215 container create d76f3846861c8531490498c071e85f7dae934d9c8dd37f6eabc58f7da2df7943 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_mestorf, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:10:21 compute-0 systemd[1]: Started libpod-conmon-d76f3846861c8531490498c071e85f7dae934d9c8dd37f6eabc58f7da2df7943.scope.
Nov 25 17:10:21 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:10:21 compute-0 nova_compute[254092]: 2025-11-25 17:10:21.944 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:21 compute-0 podman[400435]: 2025-11-25 17:10:21.859169276 +0000 UTC m=+0.021402668 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:10:21 compute-0 podman[400435]: 2025-11-25 17:10:21.958883291 +0000 UTC m=+0.121116683 container init d76f3846861c8531490498c071e85f7dae934d9c8dd37f6eabc58f7da2df7943 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_mestorf, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 17:10:21 compute-0 podman[400435]: 2025-11-25 17:10:21.966468799 +0000 UTC m=+0.128702171 container start d76f3846861c8531490498c071e85f7dae934d9c8dd37f6eabc58f7da2df7943 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:10:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:21.970 163448 DEBUG eventlet.wsgi.server [-] (163448) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Nov 25 17:10:21 compute-0 naughty_mestorf[400452]: 167 167
Nov 25 17:10:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:21.972 163448 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0
Nov 25 17:10:21 compute-0 ovn_metadata_agent[163333]: Accept: */*
Nov 25 17:10:21 compute-0 ovn_metadata_agent[163333]: Connection: close
Nov 25 17:10:21 compute-0 ovn_metadata_agent[163333]: Content-Type: text/plain
Nov 25 17:10:21 compute-0 ovn_metadata_agent[163333]: Host: 169.254.169.254
Nov 25 17:10:21 compute-0 ovn_metadata_agent[163333]: User-Agent: curl/7.84.0
Nov 25 17:10:21 compute-0 ovn_metadata_agent[163333]: X-Forwarded-For: 10.100.0.9
Nov 25 17:10:21 compute-0 ovn_metadata_agent[163333]: X-Ovn-Network-Id: e024dc03-b986-42e0-ad9c-68e6318af670 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Nov 25 17:10:21 compute-0 podman[400435]: 2025-11-25 17:10:21.969665916 +0000 UTC m=+0.131899318 container attach d76f3846861c8531490498c071e85f7dae934d9c8dd37f6eabc58f7da2df7943 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_mestorf, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 17:10:21 compute-0 systemd[1]: libpod-d76f3846861c8531490498c071e85f7dae934d9c8dd37f6eabc58f7da2df7943.scope: Deactivated successfully.
Nov 25 17:10:22 compute-0 podman[400457]: 2025-11-25 17:10:22.020041448 +0000 UTC m=+0.027461924 container died d76f3846861c8531490498c071e85f7dae934d9c8dd37f6eabc58f7da2df7943 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_mestorf, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:10:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca8f1378b616ccfb0483ed532c65a889285f5d736447ceeb3a41f19a7af31566-merged.mount: Deactivated successfully.
Nov 25 17:10:22 compute-0 podman[400457]: 2025-11-25 17:10:22.058548765 +0000 UTC m=+0.065969231 container remove d76f3846861c8531490498c071e85f7dae934d9c8dd37f6eabc58f7da2df7943 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 17:10:22 compute-0 systemd[1]: libpod-conmon-d76f3846861c8531490498c071e85f7dae934d9c8dd37f6eabc58f7da2df7943.scope: Deactivated successfully.
Nov 25 17:10:22 compute-0 podman[400479]: 2025-11-25 17:10:22.294115226 +0000 UTC m=+0.048601034 container create 789c891f45cb108fb9df79f2dd6c7665b8e5d86b65b55c716884cd3cf233e314 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mahavira, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 17:10:22 compute-0 systemd[1]: Started libpod-conmon-789c891f45cb108fb9df79f2dd6c7665b8e5d86b65b55c716884cd3cf233e314.scope.
Nov 25 17:10:22 compute-0 podman[400479]: 2025-11-25 17:10:22.270304193 +0000 UTC m=+0.024790031 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:10:22 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:10:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b11b7ef4c475f927c222aad80abc4e1529e5345e443440ddb2a5418d6479dc8b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:10:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b11b7ef4c475f927c222aad80abc4e1529e5345e443440ddb2a5418d6479dc8b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:10:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b11b7ef4c475f927c222aad80abc4e1529e5345e443440ddb2a5418d6479dc8b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:10:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b11b7ef4c475f927c222aad80abc4e1529e5345e443440ddb2a5418d6479dc8b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:10:22 compute-0 podman[400479]: 2025-11-25 17:10:22.409583073 +0000 UTC m=+0.164068891 container init 789c891f45cb108fb9df79f2dd6c7665b8e5d86b65b55c716884cd3cf233e314 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mahavira, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 17:10:22 compute-0 podman[400479]: 2025-11-25 17:10:22.423098123 +0000 UTC m=+0.177583951 container start 789c891f45cb108fb9df79f2dd6c7665b8e5d86b65b55c716884cd3cf233e314 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mahavira, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:10:22 compute-0 podman[400479]: 2025-11-25 17:10:22.426280641 +0000 UTC m=+0.180766439 container attach 789c891f45cb108fb9df79f2dd6c7665b8e5d86b65b55c716884cd3cf233e314 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mahavira, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 17:10:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2605: 321 pgs: 321 active+clean; 167 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.8 MiB/s wr, 70 op/s
Nov 25 17:10:23 compute-0 zen_mahavira[400496]: {
Nov 25 17:10:23 compute-0 zen_mahavira[400496]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:10:23 compute-0 zen_mahavira[400496]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:10:23 compute-0 zen_mahavira[400496]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:10:23 compute-0 zen_mahavira[400496]:         "osd_id": 1,
Nov 25 17:10:23 compute-0 zen_mahavira[400496]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:10:23 compute-0 zen_mahavira[400496]:         "type": "bluestore"
Nov 25 17:10:23 compute-0 zen_mahavira[400496]:     },
Nov 25 17:10:23 compute-0 zen_mahavira[400496]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:10:23 compute-0 zen_mahavira[400496]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:10:23 compute-0 zen_mahavira[400496]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:10:23 compute-0 zen_mahavira[400496]:         "osd_id": 2,
Nov 25 17:10:23 compute-0 zen_mahavira[400496]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:10:23 compute-0 zen_mahavira[400496]:         "type": "bluestore"
Nov 25 17:10:23 compute-0 zen_mahavira[400496]:     },
Nov 25 17:10:23 compute-0 zen_mahavira[400496]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:10:23 compute-0 zen_mahavira[400496]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:10:23 compute-0 zen_mahavira[400496]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:10:23 compute-0 zen_mahavira[400496]:         "osd_id": 0,
Nov 25 17:10:23 compute-0 zen_mahavira[400496]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:10:23 compute-0 zen_mahavira[400496]:         "type": "bluestore"
Nov 25 17:10:23 compute-0 zen_mahavira[400496]:     }
Nov 25 17:10:23 compute-0 zen_mahavira[400496]: }
Nov 25 17:10:23 compute-0 podman[400479]: 2025-11-25 17:10:23.501552163 +0000 UTC m=+1.256037981 container died 789c891f45cb108fb9df79f2dd6c7665b8e5d86b65b55c716884cd3cf233e314 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mahavira, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 17:10:23 compute-0 systemd[1]: libpod-789c891f45cb108fb9df79f2dd6c7665b8e5d86b65b55c716884cd3cf233e314.scope: Deactivated successfully.
Nov 25 17:10:23 compute-0 systemd[1]: libpod-789c891f45cb108fb9df79f2dd6c7665b8e5d86b65b55c716884cd3cf233e314.scope: Consumed 1.083s CPU time.
Nov 25 17:10:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-b11b7ef4c475f927c222aad80abc4e1529e5345e443440ddb2a5418d6479dc8b-merged.mount: Deactivated successfully.
Nov 25 17:10:23 compute-0 podman[400479]: 2025-11-25 17:10:23.570971518 +0000 UTC m=+1.325457316 container remove 789c891f45cb108fb9df79f2dd6c7665b8e5d86b65b55c716884cd3cf233e314 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mahavira, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:10:23 compute-0 systemd[1]: libpod-conmon-789c891f45cb108fb9df79f2dd6c7665b8e5d86b65b55c716884cd3cf233e314.scope: Deactivated successfully.
Nov 25 17:10:23 compute-0 ceph-mon[74985]: pgmap v2605: 321 pgs: 321 active+clean; 167 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.8 MiB/s wr, 70 op/s
Nov 25 17:10:23 compute-0 sudo[400370]: pam_unix(sudo:session): session closed for user root
Nov 25 17:10:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:10:23 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:10:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:10:23 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:10:23 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 9dc6e192-2828-4d26-a60a-59ef743e7d2b does not exist
Nov 25 17:10:23 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 519f2a67-9458-49dc-b4c9-ade0c78b9c9d does not exist
Nov 25 17:10:23 compute-0 sudo[400541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:10:23 compute-0 sudo[400541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:10:23 compute-0 sudo[400541]: pam_unix(sudo:session): session closed for user root
Nov 25 17:10:23 compute-0 podman[400566]: 2025-11-25 17:10:23.792608647 +0000 UTC m=+0.074170056 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 17:10:23 compute-0 podman[400565]: 2025-11-25 17:10:23.808543754 +0000 UTC m=+0.087271656 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd)
Nov 25 17:10:23 compute-0 sudo[400589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:10:23 compute-0 sudo[400589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:10:23 compute-0 sudo[400589]: pam_unix(sudo:session): session closed for user root
Nov 25 17:10:23 compute-0 podman[400576]: 2025-11-25 17:10:23.854018871 +0000 UTC m=+0.117174285 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Nov 25 17:10:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:10:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:23.940 163448 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Nov 25 17:10:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:23.942 163448 INFO eventlet.wsgi.server [-] 10.100.0.9,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 1.9699941
Nov 25 17:10:23 compute-0 haproxy-metadata-proxy-e024dc03-b986-42e0-ad9c-68e6318af670[398975]: 10.100.0.9:33136 [25/Nov/2025:17:10:21.968] listener listener/metadata 0/0/0/1973/1973 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Nov 25 17:10:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:24.009 163448 DEBUG eventlet.wsgi.server [-] (163448) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Nov 25 17:10:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:24.011 163448 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0
Nov 25 17:10:24 compute-0 ovn_metadata_agent[163333]: Accept: */*
Nov 25 17:10:24 compute-0 ovn_metadata_agent[163333]: Connection: close
Nov 25 17:10:24 compute-0 ovn_metadata_agent[163333]: Content-Length: 100
Nov 25 17:10:24 compute-0 ovn_metadata_agent[163333]: Content-Type: application/x-www-form-urlencoded
Nov 25 17:10:24 compute-0 ovn_metadata_agent[163333]: Host: 169.254.169.254
Nov 25 17:10:24 compute-0 ovn_metadata_agent[163333]: User-Agent: curl/7.84.0
Nov 25 17:10:24 compute-0 ovn_metadata_agent[163333]: X-Forwarded-For: 10.100.0.9
Nov 25 17:10:24 compute-0 ovn_metadata_agent[163333]: X-Ovn-Network-Id: e024dc03-b986-42e0-ad9c-68e6318af670
Nov 25 17:10:24 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:10:24 compute-0 ovn_metadata_agent[163333]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Nov 25 17:10:24 compute-0 haproxy-metadata-proxy-e024dc03-b986-42e0-ad9c-68e6318af670[398975]: 10.100.0.9:33150 [25/Nov/2025:17:10:24.008] listener listener/metadata 0/0/0/323/323 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Nov 25 17:10:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:24.331 163448 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Nov 25 17:10:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:24.332 163448 INFO eventlet.wsgi.server [-] 10.100.0.9,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.3211155
Nov 25 17:10:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2606: 321 pgs: 321 active+clean; 167 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.8 MiB/s wr, 70 op/s
Nov 25 17:10:24 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:10:24 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:10:25 compute-0 nova_compute[254092]: 2025-11-25 17:10:25.571 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:25 compute-0 ceph-mon[74985]: pgmap v2606: 321 pgs: 321 active+clean; 167 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.8 MiB/s wr, 70 op/s
Nov 25 17:10:25 compute-0 nova_compute[254092]: 2025-11-25 17:10:25.910 254096 DEBUG nova.compute.manager [req-c8034177-19b3-4e03-925e-b7012931ac5a req-c6c3a5ca-2efe-4b8d-86b6-39f2028d5e5d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Received event network-changed-3af510d7-9800-4ba2-9c9f-f8ded924314f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:10:25 compute-0 nova_compute[254092]: 2025-11-25 17:10:25.912 254096 DEBUG nova.compute.manager [req-c8034177-19b3-4e03-925e-b7012931ac5a req-c6c3a5ca-2efe-4b8d-86b6-39f2028d5e5d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Refreshing instance network info cache due to event network-changed-3af510d7-9800-4ba2-9c9f-f8ded924314f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:10:25 compute-0 nova_compute[254092]: 2025-11-25 17:10:25.913 254096 DEBUG oslo_concurrency.lockutils [req-c8034177-19b3-4e03-925e-b7012931ac5a req-c6c3a5ca-2efe-4b8d-86b6-39f2028d5e5d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-678bebc8-318d-4332-b89f-f86ac5f187c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:10:25 compute-0 nova_compute[254092]: 2025-11-25 17:10:25.914 254096 DEBUG oslo_concurrency.lockutils [req-c8034177-19b3-4e03-925e-b7012931ac5a req-c6c3a5ca-2efe-4b8d-86b6-39f2028d5e5d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-678bebc8-318d-4332-b89f-f86ac5f187c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:10:25 compute-0 nova_compute[254092]: 2025-11-25 17:10:25.914 254096 DEBUG nova.network.neutron [req-c8034177-19b3-4e03-925e-b7012931ac5a req-c6c3a5ca-2efe-4b8d-86b6-39f2028d5e5d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Refreshing network info cache for port 3af510d7-9800-4ba2-9c9f-f8ded924314f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:10:26 compute-0 nova_compute[254092]: 2025-11-25 17:10:26.163 254096 DEBUG oslo_concurrency.lockutils [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Acquiring lock "6a4ffa69-afb1-46b7-9109-8edeb9481103" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:10:26 compute-0 nova_compute[254092]: 2025-11-25 17:10:26.164 254096 DEBUG oslo_concurrency.lockutils [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Lock "6a4ffa69-afb1-46b7-9109-8edeb9481103" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:10:26 compute-0 nova_compute[254092]: 2025-11-25 17:10:26.164 254096 DEBUG oslo_concurrency.lockutils [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Acquiring lock "6a4ffa69-afb1-46b7-9109-8edeb9481103-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:10:26 compute-0 nova_compute[254092]: 2025-11-25 17:10:26.165 254096 DEBUG oslo_concurrency.lockutils [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Lock "6a4ffa69-afb1-46b7-9109-8edeb9481103-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:10:26 compute-0 nova_compute[254092]: 2025-11-25 17:10:26.165 254096 DEBUG oslo_concurrency.lockutils [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Lock "6a4ffa69-afb1-46b7-9109-8edeb9481103-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:10:26 compute-0 nova_compute[254092]: 2025-11-25 17:10:26.166 254096 INFO nova.compute.manager [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Terminating instance
Nov 25 17:10:26 compute-0 nova_compute[254092]: 2025-11-25 17:10:26.167 254096 DEBUG nova.compute.manager [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 17:10:26 compute-0 kernel: tap507a4b35-dd (unregistering): left promiscuous mode
Nov 25 17:10:26 compute-0 NetworkManager[48891]: <info>  [1764090626.2563] device (tap507a4b35-dd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:10:26 compute-0 ovn_controller[153477]: 2025-11-25T17:10:26Z|01394|binding|INFO|Releasing lport 507a4b35-dd4f-4777-a88c-c40597fe827b from this chassis (sb_readonly=0)
Nov 25 17:10:26 compute-0 ovn_controller[153477]: 2025-11-25T17:10:26Z|01395|binding|INFO|Setting lport 507a4b35-dd4f-4777-a88c-c40597fe827b down in Southbound
Nov 25 17:10:26 compute-0 ovn_controller[153477]: 2025-11-25T17:10:26Z|01396|binding|INFO|Removing iface tap507a4b35-dd ovn-installed in OVS
Nov 25 17:10:26 compute-0 nova_compute[254092]: 2025-11-25 17:10:26.272 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:26.277 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d4:22:50 10.100.0.9'], port_security=['fa:16:3e:d4:22:50 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '6a4ffa69-afb1-46b7-9109-8edeb9481103', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e024dc03-b986-42e0-ad9c-68e6318af670', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '49df21ca46894c8fb4040c7e9eccaef4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5dda0fb5-abf0-44ce-9142-5535344390ea 94b8f488-8d50-467c-9417-5b43dfa0fc8c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.209'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5edf8fd3-0892-465d-8830-58affb8f0bec, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=507a4b35-dd4f-4777-a88c-c40597fe827b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:10:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:26.278 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 507a4b35-dd4f-4777-a88c-c40597fe827b in datapath e024dc03-b986-42e0-ad9c-68e6318af670 unbound from our chassis
Nov 25 17:10:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:26.279 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e024dc03-b986-42e0-ad9c-68e6318af670, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 17:10:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:26.281 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8b7c1da6-cd1e-4996-bbfe-19986d1cec7b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:10:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:26.281 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e024dc03-b986-42e0-ad9c-68e6318af670 namespace which is not needed anymore
Nov 25 17:10:26 compute-0 nova_compute[254092]: 2025-11-25 17:10:26.299 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:26 compute-0 systemd[1]: machine-qemu\x2d167\x2dinstance\x2d00000085.scope: Deactivated successfully.
Nov 25 17:10:26 compute-0 systemd[1]: machine-qemu\x2d167\x2dinstance\x2d00000085.scope: Consumed 16.129s CPU time.
Nov 25 17:10:26 compute-0 systemd-machined[216343]: Machine qemu-167-instance-00000085 terminated.
Nov 25 17:10:26 compute-0 nova_compute[254092]: 2025-11-25 17:10:26.407 254096 INFO nova.virt.libvirt.driver [-] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Instance destroyed successfully.
Nov 25 17:10:26 compute-0 nova_compute[254092]: 2025-11-25 17:10:26.410 254096 DEBUG nova.objects.instance [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Lazy-loading 'resources' on Instance uuid 6a4ffa69-afb1-46b7-9109-8edeb9481103 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:10:26 compute-0 nova_compute[254092]: 2025-11-25 17:10:26.423 254096 DEBUG nova.virt.libvirt.vif [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:09:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-955582536',display_name='tempest-TestServerBasicOps-server-955582536',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-955582536',id=133,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDdtyaxmMVL8CAc7S++Y8gbOFbk1LCqSryz68UQakJZFbiX886AD33j7kjiNk5kHLWkm6AP4LpqGE2BaOqlADwTjW6lF33LqPINwV3iZ2r2irHV9h1AdPWEego3dYIkYew==',key_name='tempest-TestServerBasicOps-2071633386',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:09:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='49df21ca46894c8fb4040c7e9eccaef4',ramdisk_id='',reservation_id='r-w0qyy7pr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-1965980686',owner_user_name='tempest-TestServerBasicOps-1965980686-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:10:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='24306f395dd542b6a11b3bd0faadd4ad',uuid=6a4ffa69-afb1-46b7-9109-8edeb9481103,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "507a4b35-dd4f-4777-a88c-c40597fe827b", "address": "fa:16:3e:d4:22:50", "network": {"id": "e024dc03-b986-42e0-ad9c-68e6318af670", "bridge": "br-int", "label": "tempest-TestServerBasicOps-808271538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "49df21ca46894c8fb4040c7e9eccaef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap507a4b35-dd", "ovs_interfaceid": "507a4b35-dd4f-4777-a88c-c40597fe827b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:10:26 compute-0 nova_compute[254092]: 2025-11-25 17:10:26.424 254096 DEBUG nova.network.os_vif_util [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Converting VIF {"id": "507a4b35-dd4f-4777-a88c-c40597fe827b", "address": "fa:16:3e:d4:22:50", "network": {"id": "e024dc03-b986-42e0-ad9c-68e6318af670", "bridge": "br-int", "label": "tempest-TestServerBasicOps-808271538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "49df21ca46894c8fb4040c7e9eccaef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap507a4b35-dd", "ovs_interfaceid": "507a4b35-dd4f-4777-a88c-c40597fe827b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:10:26 compute-0 nova_compute[254092]: 2025-11-25 17:10:26.425 254096 DEBUG nova.network.os_vif_util [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d4:22:50,bridge_name='br-int',has_traffic_filtering=True,id=507a4b35-dd4f-4777-a88c-c40597fe827b,network=Network(e024dc03-b986-42e0-ad9c-68e6318af670),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap507a4b35-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:10:26 compute-0 nova_compute[254092]: 2025-11-25 17:10:26.426 254096 DEBUG os_vif [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d4:22:50,bridge_name='br-int',has_traffic_filtering=True,id=507a4b35-dd4f-4777-a88c-c40597fe827b,network=Network(e024dc03-b986-42e0-ad9c-68e6318af670),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap507a4b35-dd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:10:26 compute-0 nova_compute[254092]: 2025-11-25 17:10:26.429 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:26 compute-0 nova_compute[254092]: 2025-11-25 17:10:26.430 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap507a4b35-dd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:10:26 compute-0 nova_compute[254092]: 2025-11-25 17:10:26.432 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:26 compute-0 nova_compute[254092]: 2025-11-25 17:10:26.434 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:10:26 compute-0 nova_compute[254092]: 2025-11-25 17:10:26.436 254096 INFO os_vif [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d4:22:50,bridge_name='br-int',has_traffic_filtering=True,id=507a4b35-dd4f-4777-a88c-c40597fe827b,network=Network(e024dc03-b986-42e0-ad9c-68e6318af670),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap507a4b35-dd')
Nov 25 17:10:26 compute-0 neutron-haproxy-ovnmeta-e024dc03-b986-42e0-ad9c-68e6318af670[398969]: [NOTICE]   (398973) : haproxy version is 2.8.14-c23fe91
Nov 25 17:10:26 compute-0 neutron-haproxy-ovnmeta-e024dc03-b986-42e0-ad9c-68e6318af670[398969]: [NOTICE]   (398973) : path to executable is /usr/sbin/haproxy
Nov 25 17:10:26 compute-0 neutron-haproxy-ovnmeta-e024dc03-b986-42e0-ad9c-68e6318af670[398969]: [WARNING]  (398973) : Exiting Master process...
Nov 25 17:10:26 compute-0 neutron-haproxy-ovnmeta-e024dc03-b986-42e0-ad9c-68e6318af670[398969]: [ALERT]    (398973) : Current worker (398975) exited with code 143 (Terminated)
Nov 25 17:10:26 compute-0 neutron-haproxy-ovnmeta-e024dc03-b986-42e0-ad9c-68e6318af670[398969]: [WARNING]  (398973) : All workers exited. Exiting... (0)
Nov 25 17:10:26 compute-0 systemd[1]: libpod-484a44a437498435fda71d5a3fc29676d159f6f7123ac291143e0a043c30f59b.scope: Deactivated successfully.
Nov 25 17:10:26 compute-0 podman[400677]: 2025-11-25 17:10:26.454020664 +0000 UTC m=+0.061384675 container died 484a44a437498435fda71d5a3fc29676d159f6f7123ac291143e0a043c30f59b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e024dc03-b986-42e0-ad9c-68e6318af670, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 17:10:26 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-484a44a437498435fda71d5a3fc29676d159f6f7123ac291143e0a043c30f59b-userdata-shm.mount: Deactivated successfully.
Nov 25 17:10:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a09b45283fd3fcfbf225fb1458ae704eb97d1c98df77d8854db8ed15a4adda7-merged.mount: Deactivated successfully.
Nov 25 17:10:26 compute-0 podman[400677]: 2025-11-25 17:10:26.497735543 +0000 UTC m=+0.105099534 container cleanup 484a44a437498435fda71d5a3fc29676d159f6f7123ac291143e0a043c30f59b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e024dc03-b986-42e0-ad9c-68e6318af670, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:10:26 compute-0 systemd[1]: libpod-conmon-484a44a437498435fda71d5a3fc29676d159f6f7123ac291143e0a043c30f59b.scope: Deactivated successfully.
Nov 25 17:10:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2607: 321 pgs: 321 active+clean; 167 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Nov 25 17:10:26 compute-0 podman[400736]: 2025-11-25 17:10:26.565773389 +0000 UTC m=+0.047833023 container remove 484a44a437498435fda71d5a3fc29676d159f6f7123ac291143e0a043c30f59b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-e024dc03-b986-42e0-ad9c-68e6318af670, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 25 17:10:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:26.575 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d8192767-ffee-425f-bbca-ac7d9d189577]: (4, ('Tue Nov 25 05:10:26 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e024dc03-b986-42e0-ad9c-68e6318af670 (484a44a437498435fda71d5a3fc29676d159f6f7123ac291143e0a043c30f59b)\n484a44a437498435fda71d5a3fc29676d159f6f7123ac291143e0a043c30f59b\nTue Nov 25 05:10:26 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e024dc03-b986-42e0-ad9c-68e6318af670 (484a44a437498435fda71d5a3fc29676d159f6f7123ac291143e0a043c30f59b)\n484a44a437498435fda71d5a3fc29676d159f6f7123ac291143e0a043c30f59b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:10:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:26.577 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[94a8a112-b150-4f4e-a5dc-c210b76579a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:10:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:26.578 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape024dc03-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:10:26 compute-0 nova_compute[254092]: 2025-11-25 17:10:26.580 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:26 compute-0 kernel: tape024dc03-b0: left promiscuous mode
Nov 25 17:10:26 compute-0 nova_compute[254092]: 2025-11-25 17:10:26.596 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:26.600 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e63515b1-b79c-4289-8dd9-692fd9160421]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:10:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:26.614 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ae50bb73-e2c2-46e5-b132-7dda748217b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:10:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:26.615 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[91d45b3d-563d-477b-9d50-ebacc6999257]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:10:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:26.632 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d50d654c-3801-4dde-a0b0-2b2e72620971]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 704002, 'reachable_time': 16802, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 400752, 'error': None, 'target': 'ovnmeta-e024dc03-b986-42e0-ad9c-68e6318af670', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:10:26 compute-0 systemd[1]: run-netns-ovnmeta\x2de024dc03\x2db986\x2d42e0\x2dad9c\x2d68e6318af670.mount: Deactivated successfully.
Nov 25 17:10:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:26.636 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e024dc03-b986-42e0-ad9c-68e6318af670 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 17:10:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:26.636 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[20e3d3f8-3e05-441c-a5ed-fa16b48d1759]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:10:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:26.777 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:10:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:26.777 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 17:10:26 compute-0 nova_compute[254092]: 2025-11-25 17:10:26.780 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:26 compute-0 nova_compute[254092]: 2025-11-25 17:10:26.878 254096 INFO nova.virt.libvirt.driver [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Deleting instance files /var/lib/nova/instances/6a4ffa69-afb1-46b7-9109-8edeb9481103_del
Nov 25 17:10:26 compute-0 nova_compute[254092]: 2025-11-25 17:10:26.879 254096 INFO nova.virt.libvirt.driver [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Deletion of /var/lib/nova/instances/6a4ffa69-afb1-46b7-9109-8edeb9481103_del complete
Nov 25 17:10:26 compute-0 nova_compute[254092]: 2025-11-25 17:10:26.965 254096 INFO nova.compute.manager [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Took 0.80 seconds to destroy the instance on the hypervisor.
Nov 25 17:10:26 compute-0 nova_compute[254092]: 2025-11-25 17:10:26.966 254096 DEBUG oslo.service.loopingcall [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 17:10:26 compute-0 nova_compute[254092]: 2025-11-25 17:10:26.966 254096 DEBUG nova.compute.manager [-] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 17:10:26 compute-0 nova_compute[254092]: 2025-11-25 17:10:26.966 254096 DEBUG nova.network.neutron [-] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 17:10:27 compute-0 ceph-mon[74985]: pgmap v2607: 321 pgs: 321 active+clean; 167 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Nov 25 17:10:27 compute-0 nova_compute[254092]: 2025-11-25 17:10:27.860 254096 DEBUG nova.network.neutron [req-c8034177-19b3-4e03-925e-b7012931ac5a req-c6c3a5ca-2efe-4b8d-86b6-39f2028d5e5d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Updated VIF entry in instance network info cache for port 3af510d7-9800-4ba2-9c9f-f8ded924314f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:10:27 compute-0 nova_compute[254092]: 2025-11-25 17:10:27.861 254096 DEBUG nova.network.neutron [req-c8034177-19b3-4e03-925e-b7012931ac5a req-c6c3a5ca-2efe-4b8d-86b6-39f2028d5e5d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Updating instance_info_cache with network_info: [{"id": "3af510d7-9800-4ba2-9c9f-f8ded924314f", "address": "fa:16:3e:b7:9a:99", "network": {"id": "169b0886-fc13-49ff-b4f6-0f14f908ad1c", "bridge": "br-int", "label": "tempest-network-smoke--1110521818", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af510d7-98", "ovs_interfaceid": "3af510d7-9800-4ba2-9c9f-f8ded924314f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:10:27 compute-0 nova_compute[254092]: 2025-11-25 17:10:27.882 254096 DEBUG oslo_concurrency.lockutils [req-c8034177-19b3-4e03-925e-b7012931ac5a req-c6c3a5ca-2efe-4b8d-86b6-39f2028d5e5d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-678bebc8-318d-4332-b89f-f86ac5f187c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:10:28 compute-0 nova_compute[254092]: 2025-11-25 17:10:28.007 254096 DEBUG nova.compute.manager [req-ef4d2356-b9ce-4657-90fc-f8189a0f87ab req-43409d1b-9870-4781-baba-8f192f076065 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Received event network-vif-unplugged-507a4b35-dd4f-4777-a88c-c40597fe827b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:10:28 compute-0 nova_compute[254092]: 2025-11-25 17:10:28.008 254096 DEBUG oslo_concurrency.lockutils [req-ef4d2356-b9ce-4657-90fc-f8189a0f87ab req-43409d1b-9870-4781-baba-8f192f076065 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6a4ffa69-afb1-46b7-9109-8edeb9481103-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:10:28 compute-0 nova_compute[254092]: 2025-11-25 17:10:28.008 254096 DEBUG oslo_concurrency.lockutils [req-ef4d2356-b9ce-4657-90fc-f8189a0f87ab req-43409d1b-9870-4781-baba-8f192f076065 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6a4ffa69-afb1-46b7-9109-8edeb9481103-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:10:28 compute-0 nova_compute[254092]: 2025-11-25 17:10:28.009 254096 DEBUG oslo_concurrency.lockutils [req-ef4d2356-b9ce-4657-90fc-f8189a0f87ab req-43409d1b-9870-4781-baba-8f192f076065 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6a4ffa69-afb1-46b7-9109-8edeb9481103-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:10:28 compute-0 nova_compute[254092]: 2025-11-25 17:10:28.009 254096 DEBUG nova.compute.manager [req-ef4d2356-b9ce-4657-90fc-f8189a0f87ab req-43409d1b-9870-4781-baba-8f192f076065 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] No waiting events found dispatching network-vif-unplugged-507a4b35-dd4f-4777-a88c-c40597fe827b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:10:28 compute-0 nova_compute[254092]: 2025-11-25 17:10:28.009 254096 DEBUG nova.compute.manager [req-ef4d2356-b9ce-4657-90fc-f8189a0f87ab req-43409d1b-9870-4781-baba-8f192f076065 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Received event network-vif-unplugged-507a4b35-dd4f-4777-a88c-c40597fe827b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 17:10:28 compute-0 nova_compute[254092]: 2025-11-25 17:10:28.010 254096 DEBUG nova.compute.manager [req-ef4d2356-b9ce-4657-90fc-f8189a0f87ab req-43409d1b-9870-4781-baba-8f192f076065 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Received event network-vif-plugged-507a4b35-dd4f-4777-a88c-c40597fe827b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:10:28 compute-0 nova_compute[254092]: 2025-11-25 17:10:28.010 254096 DEBUG oslo_concurrency.lockutils [req-ef4d2356-b9ce-4657-90fc-f8189a0f87ab req-43409d1b-9870-4781-baba-8f192f076065 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6a4ffa69-afb1-46b7-9109-8edeb9481103-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:10:28 compute-0 nova_compute[254092]: 2025-11-25 17:10:28.010 254096 DEBUG oslo_concurrency.lockutils [req-ef4d2356-b9ce-4657-90fc-f8189a0f87ab req-43409d1b-9870-4781-baba-8f192f076065 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6a4ffa69-afb1-46b7-9109-8edeb9481103-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:10:28 compute-0 nova_compute[254092]: 2025-11-25 17:10:28.010 254096 DEBUG oslo_concurrency.lockutils [req-ef4d2356-b9ce-4657-90fc-f8189a0f87ab req-43409d1b-9870-4781-baba-8f192f076065 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6a4ffa69-afb1-46b7-9109-8edeb9481103-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:10:28 compute-0 nova_compute[254092]: 2025-11-25 17:10:28.011 254096 DEBUG nova.compute.manager [req-ef4d2356-b9ce-4657-90fc-f8189a0f87ab req-43409d1b-9870-4781-baba-8f192f076065 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] No waiting events found dispatching network-vif-plugged-507a4b35-dd4f-4777-a88c-c40597fe827b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:10:28 compute-0 nova_compute[254092]: 2025-11-25 17:10:28.011 254096 WARNING nova.compute.manager [req-ef4d2356-b9ce-4657-90fc-f8189a0f87ab req-43409d1b-9870-4781-baba-8f192f076065 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Received unexpected event network-vif-plugged-507a4b35-dd4f-4777-a88c-c40597fe827b for instance with vm_state active and task_state deleting.
Nov 25 17:10:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2608: 321 pgs: 321 active+clean; 167 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 25 17:10:28 compute-0 nova_compute[254092]: 2025-11-25 17:10:28.873 254096 DEBUG nova.network.neutron [-] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:10:28 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:10:28 compute-0 nova_compute[254092]: 2025-11-25 17:10:28.941 254096 INFO nova.compute.manager [-] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Took 1.97 seconds to deallocate network for instance.
Nov 25 17:10:29 compute-0 nova_compute[254092]: 2025-11-25 17:10:29.128 254096 DEBUG oslo_concurrency.lockutils [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:10:29 compute-0 nova_compute[254092]: 2025-11-25 17:10:29.129 254096 DEBUG oslo_concurrency.lockutils [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:10:29 compute-0 nova_compute[254092]: 2025-11-25 17:10:29.315 254096 DEBUG oslo_concurrency.processutils [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:10:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:10:29 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4175135485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:10:29 compute-0 nova_compute[254092]: 2025-11-25 17:10:29.780 254096 DEBUG oslo_concurrency.processutils [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:10:29 compute-0 nova_compute[254092]: 2025-11-25 17:10:29.787 254096 DEBUG nova.compute.provider_tree [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:10:29 compute-0 nova_compute[254092]: 2025-11-25 17:10:29.812 254096 DEBUG nova.scheduler.client.report [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:10:29 compute-0 nova_compute[254092]: 2025-11-25 17:10:29.834 254096 DEBUG oslo_concurrency.lockutils [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:10:29 compute-0 ceph-mon[74985]: pgmap v2608: 321 pgs: 321 active+clean; 167 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 25 17:10:29 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4175135485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:10:29 compute-0 nova_compute[254092]: 2025-11-25 17:10:29.861 254096 INFO nova.scheduler.client.report [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Deleted allocations for instance 6a4ffa69-afb1-46b7-9109-8edeb9481103
Nov 25 17:10:29 compute-0 nova_compute[254092]: 2025-11-25 17:10:29.918 254096 DEBUG oslo_concurrency.lockutils [None req-a1a3c8d3-088e-4e1d-82dc-5453ec565d05 24306f395dd542b6a11b3bd0faadd4ad 49df21ca46894c8fb4040c7e9eccaef4 - - default default] Lock "6a4ffa69-afb1-46b7-9109-8edeb9481103" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:10:30 compute-0 nova_compute[254092]: 2025-11-25 17:10:30.115 254096 DEBUG nova.compute.manager [req-bac99123-275a-431e-bbd5-a6c8cf408717 req-b0c01ca8-8998-4137-8c54-9b42908083b7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Received event network-vif-deleted-507a4b35-dd4f-4777-a88c-c40597fe827b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:10:30 compute-0 nova_compute[254092]: 2025-11-25 17:10:30.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:10:30 compute-0 nova_compute[254092]: 2025-11-25 17:10:30.599 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2609: 321 pgs: 321 active+clean; 116 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 96 op/s
Nov 25 17:10:31 compute-0 nova_compute[254092]: 2025-11-25 17:10:31.433 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:31.779 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:10:31 compute-0 ceph-mon[74985]: pgmap v2609: 321 pgs: 321 active+clean; 116 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 96 op/s
Nov 25 17:10:32 compute-0 ovn_controller[153477]: 2025-11-25T17:10:32Z|00164|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b7:9a:99 10.100.0.4
Nov 25 17:10:32 compute-0 ovn_controller[153477]: 2025-11-25T17:10:32Z|00165|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b7:9a:99 10.100.0.4
Nov 25 17:10:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2610: 321 pgs: 321 active+clean; 88 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 170 KiB/s wr, 69 op/s
Nov 25 17:10:33 compute-0 nova_compute[254092]: 2025-11-25 17:10:33.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:10:33 compute-0 ceph-mon[74985]: pgmap v2610: 321 pgs: 321 active+clean; 88 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 170 KiB/s wr, 69 op/s
Nov 25 17:10:33 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:10:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2611: 321 pgs: 321 active+clean; 88 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 980 KiB/s rd, 170 KiB/s wr, 63 op/s
Nov 25 17:10:35 compute-0 nova_compute[254092]: 2025-11-25 17:10:35.600 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:35 compute-0 ceph-mon[74985]: pgmap v2611: 321 pgs: 321 active+clean; 88 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 980 KiB/s rd, 170 KiB/s wr, 63 op/s
Nov 25 17:10:36 compute-0 ovn_controller[153477]: 2025-11-25T17:10:36Z|01397|binding|INFO|Releasing lport c8371c10-cfa3-4f58-a300-b742148bc7d7 from this chassis (sb_readonly=0)
Nov 25 17:10:36 compute-0 nova_compute[254092]: 2025-11-25 17:10:36.420 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:36 compute-0 nova_compute[254092]: 2025-11-25 17:10:36.434 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2612: 321 pgs: 321 active+clean; 121 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 123 op/s
Nov 25 17:10:37 compute-0 nova_compute[254092]: 2025-11-25 17:10:37.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:10:37 compute-0 nova_compute[254092]: 2025-11-25 17:10:37.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:10:37 compute-0 ceph-mon[74985]: pgmap v2612: 321 pgs: 321 active+clean; 121 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 123 op/s
Nov 25 17:10:38 compute-0 nova_compute[254092]: 2025-11-25 17:10:38.487 254096 INFO nova.compute.manager [None req-0625a721-3427-40f3-b1d8-3a3ae55ba477 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Get console output
Nov 25 17:10:38 compute-0 nova_compute[254092]: 2025-11-25 17:10:38.493 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 17:10:38 compute-0 nova_compute[254092]: 2025-11-25 17:10:38.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:10:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2613: 321 pgs: 321 active+clean; 121 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 367 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Nov 25 17:10:38 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:10:39 compute-0 ovn_controller[153477]: 2025-11-25T17:10:39Z|01398|binding|INFO|Releasing lport c8371c10-cfa3-4f58-a300-b742148bc7d7 from this chassis (sb_readonly=0)
Nov 25 17:10:39 compute-0 nova_compute[254092]: 2025-11-25 17:10:39.121 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:39 compute-0 ovn_controller[153477]: 2025-11-25T17:10:39Z|01399|binding|INFO|Releasing lport c8371c10-cfa3-4f58-a300-b742148bc7d7 from this chassis (sb_readonly=0)
Nov 25 17:10:39 compute-0 nova_compute[254092]: 2025-11-25 17:10:39.247 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:39 compute-0 ceph-mon[74985]: pgmap v2613: 321 pgs: 321 active+clean; 121 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 367 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Nov 25 17:10:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:10:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:10:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:10:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:10:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:10:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:10:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:10:40
Nov 25 17:10:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:10:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:10:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['vms', 'images', '.mgr', '.rgw.root', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control']
Nov 25 17:10:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:10:40 compute-0 nova_compute[254092]: 2025-11-25 17:10:40.469 254096 INFO nova.compute.manager [None req-4e854509-5189-424c-bd55-0ef957736341 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Get console output
Nov 25 17:10:40 compute-0 nova_compute[254092]: 2025-11-25 17:10:40.473 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 17:10:40 compute-0 nova_compute[254092]: 2025-11-25 17:10:40.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:10:40 compute-0 nova_compute[254092]: 2025-11-25 17:10:40.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:10:40 compute-0 nova_compute[254092]: 2025-11-25 17:10:40.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:10:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:10:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:10:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:10:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:10:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:10:40 compute-0 nova_compute[254092]: 2025-11-25 17:10:40.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:10:40 compute-0 nova_compute[254092]: 2025-11-25 17:10:40.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:10:40 compute-0 nova_compute[254092]: 2025-11-25 17:10:40.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:10:40 compute-0 nova_compute[254092]: 2025-11-25 17:10:40.522 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:10:40 compute-0 nova_compute[254092]: 2025-11-25 17:10:40.522 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:10:40 compute-0 nova_compute[254092]: 2025-11-25 17:10:40.602 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:10:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:10:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:10:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:10:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:10:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2614: 321 pgs: 321 active+clean; 121 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 373 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Nov 25 17:10:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:10:40 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/910119553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:10:41 compute-0 nova_compute[254092]: 2025-11-25 17:10:41.008 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:10:41 compute-0 NetworkManager[48891]: <info>  [1764090641.0690] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/573)
Nov 25 17:10:41 compute-0 NetworkManager[48891]: <info>  [1764090641.0700] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/574)
Nov 25 17:10:41 compute-0 nova_compute[254092]: 2025-11-25 17:10:41.068 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:41 compute-0 nova_compute[254092]: 2025-11-25 17:10:41.080 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000086 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:10:41 compute-0 nova_compute[254092]: 2025-11-25 17:10:41.080 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000086 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:10:41 compute-0 ovn_controller[153477]: 2025-11-25T17:10:41Z|01400|binding|INFO|Releasing lport c8371c10-cfa3-4f58-a300-b742148bc7d7 from this chassis (sb_readonly=0)
Nov 25 17:10:41 compute-0 nova_compute[254092]: 2025-11-25 17:10:41.147 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:41 compute-0 nova_compute[254092]: 2025-11-25 17:10:41.150 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:41 compute-0 nova_compute[254092]: 2025-11-25 17:10:41.224 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:10:41 compute-0 nova_compute[254092]: 2025-11-25 17:10:41.225 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3507MB free_disk=59.942752838134766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:10:41 compute-0 nova_compute[254092]: 2025-11-25 17:10:41.225 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:10:41 compute-0 nova_compute[254092]: 2025-11-25 17:10:41.225 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:10:41 compute-0 nova_compute[254092]: 2025-11-25 17:10:41.305 254096 INFO nova.compute.manager [None req-5d4179ee-086e-4bb9-9905-93683c2c4b0f e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Get console output
Nov 25 17:10:41 compute-0 nova_compute[254092]: 2025-11-25 17:10:41.309 333852 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 17:10:41 compute-0 nova_compute[254092]: 2025-11-25 17:10:41.404 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090626.402938, 6a4ffa69-afb1-46b7-9109-8edeb9481103 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:10:41 compute-0 nova_compute[254092]: 2025-11-25 17:10:41.405 254096 INFO nova.compute.manager [-] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] VM Stopped (Lifecycle Event)
Nov 25 17:10:41 compute-0 nova_compute[254092]: 2025-11-25 17:10:41.421 254096 DEBUG nova.compute.manager [None req-e98c1e4d-db01-45c1-8923-5db874b2e8aa - - - - - -] [instance: 6a4ffa69-afb1-46b7-9109-8edeb9481103] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:10:41 compute-0 nova_compute[254092]: 2025-11-25 17:10:41.435 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:41 compute-0 nova_compute[254092]: 2025-11-25 17:10:41.642 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 678bebc8-318d-4332-b89f-f86ac5f187c4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 17:10:41 compute-0 nova_compute[254092]: 2025-11-25 17:10:41.642 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:10:41 compute-0 nova_compute[254092]: 2025-11-25 17:10:41.643 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:10:41 compute-0 ceph-mon[74985]: pgmap v2614: 321 pgs: 321 active+clean; 121 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 373 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Nov 25 17:10:41 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/910119553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:10:42 compute-0 nova_compute[254092]: 2025-11-25 17:10:42.120 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing inventories for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 17:10:42 compute-0 nova_compute[254092]: 2025-11-25 17:10:42.275 254096 DEBUG nova.compute.manager [req-8340390c-a464-4005-81d9-9726f74bbb00 req-336d9a1e-2d5d-4b96-a3f4-b2aef2370c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Received event network-changed-3af510d7-9800-4ba2-9c9f-f8ded924314f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:10:42 compute-0 nova_compute[254092]: 2025-11-25 17:10:42.275 254096 DEBUG nova.compute.manager [req-8340390c-a464-4005-81d9-9726f74bbb00 req-336d9a1e-2d5d-4b96-a3f4-b2aef2370c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Refreshing instance network info cache due to event network-changed-3af510d7-9800-4ba2-9c9f-f8ded924314f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:10:42 compute-0 nova_compute[254092]: 2025-11-25 17:10:42.276 254096 DEBUG oslo_concurrency.lockutils [req-8340390c-a464-4005-81d9-9726f74bbb00 req-336d9a1e-2d5d-4b96-a3f4-b2aef2370c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-678bebc8-318d-4332-b89f-f86ac5f187c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:10:42 compute-0 nova_compute[254092]: 2025-11-25 17:10:42.276 254096 DEBUG oslo_concurrency.lockutils [req-8340390c-a464-4005-81d9-9726f74bbb00 req-336d9a1e-2d5d-4b96-a3f4-b2aef2370c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-678bebc8-318d-4332-b89f-f86ac5f187c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:10:42 compute-0 nova_compute[254092]: 2025-11-25 17:10:42.276 254096 DEBUG nova.network.neutron [req-8340390c-a464-4005-81d9-9726f74bbb00 req-336d9a1e-2d5d-4b96-a3f4-b2aef2370c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Refreshing network info cache for port 3af510d7-9800-4ba2-9c9f-f8ded924314f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:10:42 compute-0 nova_compute[254092]: 2025-11-25 17:10:42.492 254096 DEBUG oslo_concurrency.lockutils [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "678bebc8-318d-4332-b89f-f86ac5f187c4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:10:42 compute-0 nova_compute[254092]: 2025-11-25 17:10:42.493 254096 DEBUG oslo_concurrency.lockutils [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "678bebc8-318d-4332-b89f-f86ac5f187c4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:10:42 compute-0 nova_compute[254092]: 2025-11-25 17:10:42.493 254096 DEBUG oslo_concurrency.lockutils [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "678bebc8-318d-4332-b89f-f86ac5f187c4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:10:42 compute-0 nova_compute[254092]: 2025-11-25 17:10:42.494 254096 DEBUG oslo_concurrency.lockutils [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "678bebc8-318d-4332-b89f-f86ac5f187c4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:10:42 compute-0 nova_compute[254092]: 2025-11-25 17:10:42.494 254096 DEBUG oslo_concurrency.lockutils [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "678bebc8-318d-4332-b89f-f86ac5f187c4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:10:42 compute-0 nova_compute[254092]: 2025-11-25 17:10:42.496 254096 INFO nova.compute.manager [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Terminating instance
Nov 25 17:10:42 compute-0 nova_compute[254092]: 2025-11-25 17:10:42.498 254096 DEBUG nova.compute.manager [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 17:10:42 compute-0 kernel: tap3af510d7-98 (unregistering): left promiscuous mode
Nov 25 17:10:42 compute-0 NetworkManager[48891]: <info>  [1764090642.5560] device (tap3af510d7-98): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:10:42 compute-0 ovn_controller[153477]: 2025-11-25T17:10:42Z|01401|binding|INFO|Releasing lport 3af510d7-9800-4ba2-9c9f-f8ded924314f from this chassis (sb_readonly=0)
Nov 25 17:10:42 compute-0 ovn_controller[153477]: 2025-11-25T17:10:42Z|01402|binding|INFO|Setting lport 3af510d7-9800-4ba2-9c9f-f8ded924314f down in Southbound
Nov 25 17:10:42 compute-0 ovn_controller[153477]: 2025-11-25T17:10:42Z|01403|binding|INFO|Removing iface tap3af510d7-98 ovn-installed in OVS
Nov 25 17:10:42 compute-0 nova_compute[254092]: 2025-11-25 17:10:42.565 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:42.572 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:9a:99 10.100.0.4'], port_security=['fa:16:3e:b7:9a:99 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '678bebc8-318d-4332-b89f-f86ac5f187c4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-169b0886-fc13-49ff-b4f6-0f14f908ad1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31307e1076a54d1da7cf2b1a4e0b0731', 'neutron:revision_number': '4', 'neutron:security_group_ids': '58ae50d2-6994-45e4-b2e6-36301d8443e3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=702c597c-9457-430d-aa7a-2d35c3cf306f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=3af510d7-9800-4ba2-9c9f-f8ded924314f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:10:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:42.573 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 3af510d7-9800-4ba2-9c9f-f8ded924314f in datapath 169b0886-fc13-49ff-b4f6-0f14f908ad1c unbound from our chassis
Nov 25 17:10:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:42.574 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 169b0886-fc13-49ff-b4f6-0f14f908ad1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 17:10:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:42.576 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bcb2c76a-ff43-40b0-a2c0-5417f7742588]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:10:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:42.576 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-169b0886-fc13-49ff-b4f6-0f14f908ad1c namespace which is not needed anymore
Nov 25 17:10:42 compute-0 nova_compute[254092]: 2025-11-25 17:10:42.582 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:42 compute-0 nova_compute[254092]: 2025-11-25 17:10:42.597 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating ProviderTree inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 17:10:42 compute-0 nova_compute[254092]: 2025-11-25 17:10:42.597 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 17:10:42 compute-0 systemd[1]: machine-qemu\x2d168\x2dinstance\x2d00000086.scope: Deactivated successfully.
Nov 25 17:10:42 compute-0 systemd[1]: machine-qemu\x2d168\x2dinstance\x2d00000086.scope: Consumed 13.280s CPU time.
Nov 25 17:10:42 compute-0 systemd-machined[216343]: Machine qemu-168-instance-00000086 terminated.
Nov 25 17:10:42 compute-0 nova_compute[254092]: 2025-11-25 17:10:42.626 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing aggregate associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 17:10:42 compute-0 nova_compute[254092]: 2025-11-25 17:10:42.656 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing trait associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 17:10:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2615: 321 pgs: 321 active+clean; 121 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Nov 25 17:10:42 compute-0 nova_compute[254092]: 2025-11-25 17:10:42.696 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:10:42 compute-0 nova_compute[254092]: 2025-11-25 17:10:42.753 254096 INFO nova.virt.libvirt.driver [-] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Instance destroyed successfully.
Nov 25 17:10:42 compute-0 nova_compute[254092]: 2025-11-25 17:10:42.755 254096 DEBUG nova.objects.instance [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lazy-loading 'resources' on Instance uuid 678bebc8-318d-4332-b89f-f86ac5f187c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:10:42 compute-0 neutron-haproxy-ovnmeta-169b0886-fc13-49ff-b4f6-0f14f908ad1c[400027]: [NOTICE]   (400035) : haproxy version is 2.8.14-c23fe91
Nov 25 17:10:42 compute-0 neutron-haproxy-ovnmeta-169b0886-fc13-49ff-b4f6-0f14f908ad1c[400027]: [NOTICE]   (400035) : path to executable is /usr/sbin/haproxy
Nov 25 17:10:42 compute-0 neutron-haproxy-ovnmeta-169b0886-fc13-49ff-b4f6-0f14f908ad1c[400027]: [WARNING]  (400035) : Exiting Master process...
Nov 25 17:10:42 compute-0 neutron-haproxy-ovnmeta-169b0886-fc13-49ff-b4f6-0f14f908ad1c[400027]: [ALERT]    (400035) : Current worker (400037) exited with code 143 (Terminated)
Nov 25 17:10:42 compute-0 neutron-haproxy-ovnmeta-169b0886-fc13-49ff-b4f6-0f14f908ad1c[400027]: [WARNING]  (400035) : All workers exited. Exiting... (0)
Nov 25 17:10:42 compute-0 systemd[1]: libpod-2166e05adba0cd7f21d11eeee04f0c9e3c7a19a1b0084ed6013fec89a50941a1.scope: Deactivated successfully.
Nov 25 17:10:42 compute-0 podman[400826]: 2025-11-25 17:10:42.772303402 +0000 UTC m=+0.064276914 container died 2166e05adba0cd7f21d11eeee04f0c9e3c7a19a1b0084ed6013fec89a50941a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-169b0886-fc13-49ff-b4f6-0f14f908ad1c, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 17:10:42 compute-0 nova_compute[254092]: 2025-11-25 17:10:42.780 254096 DEBUG nova.virt.libvirt.vif [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:10:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1329077363',display_name='tempest-TestNetworkBasicOps-server-1329077363',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1329077363',id=134,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLFAq/FVSBvI83QmbRYXeQD5oPtImBiS/J8S4tENTC3HiqmpnLQgZSgQo5Q3eg9KB495TzA7SZganOs6ca4kdDUFsCruPnoCgpH1Af7eq+g3pVeBR/yIGYrZrs0yR9s3UA==',key_name='tempest-TestNetworkBasicOps-801135771',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:10:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='31307e1076a54d1da7cf2b1a4e0b0731',ramdisk_id='',reservation_id='r-k9efxcoe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-901857678',owner_user_name='tempest-TestNetworkBasicOps-901857678-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:10:19Z,user_data=None,user_id='e13382ac65e94c7b8a89e67de0e754df',uuid=678bebc8-318d-4332-b89f-f86ac5f187c4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3af510d7-9800-4ba2-9c9f-f8ded924314f", "address": "fa:16:3e:b7:9a:99", "network": {"id": "169b0886-fc13-49ff-b4f6-0f14f908ad1c", "bridge": "br-int", "label": "tempest-network-smoke--1110521818", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af510d7-98", "ovs_interfaceid": "3af510d7-9800-4ba2-9c9f-f8ded924314f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:10:42 compute-0 nova_compute[254092]: 2025-11-25 17:10:42.784 254096 DEBUG nova.network.os_vif_util [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converting VIF {"id": "3af510d7-9800-4ba2-9c9f-f8ded924314f", "address": "fa:16:3e:b7:9a:99", "network": {"id": "169b0886-fc13-49ff-b4f6-0f14f908ad1c", "bridge": "br-int", "label": "tempest-network-smoke--1110521818", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af510d7-98", "ovs_interfaceid": "3af510d7-9800-4ba2-9c9f-f8ded924314f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:10:42 compute-0 nova_compute[254092]: 2025-11-25 17:10:42.786 254096 DEBUG nova.network.os_vif_util [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b7:9a:99,bridge_name='br-int',has_traffic_filtering=True,id=3af510d7-9800-4ba2-9c9f-f8ded924314f,network=Network(169b0886-fc13-49ff-b4f6-0f14f908ad1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3af510d7-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:10:42 compute-0 nova_compute[254092]: 2025-11-25 17:10:42.787 254096 DEBUG os_vif [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b7:9a:99,bridge_name='br-int',has_traffic_filtering=True,id=3af510d7-9800-4ba2-9c9f-f8ded924314f,network=Network(169b0886-fc13-49ff-b4f6-0f14f908ad1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3af510d7-98') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:10:42 compute-0 nova_compute[254092]: 2025-11-25 17:10:42.793 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:42 compute-0 nova_compute[254092]: 2025-11-25 17:10:42.794 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3af510d7-98, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:10:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2166e05adba0cd7f21d11eeee04f0c9e3c7a19a1b0084ed6013fec89a50941a1-userdata-shm.mount: Deactivated successfully.
Nov 25 17:10:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-26536350f4636af02cd64a2d8c5eb49f4c32108a7d2a493c1f2754a932dc988a-merged.mount: Deactivated successfully.
Nov 25 17:10:42 compute-0 nova_compute[254092]: 2025-11-25 17:10:42.894 254096 DEBUG nova.compute.manager [req-94c5a4fe-4dfe-4416-8c53-1d4a3802f3c2 req-c10ff4b7-dd3e-42df-8099-c813c55935ee a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Received event network-vif-unplugged-3af510d7-9800-4ba2-9c9f-f8ded924314f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:10:42 compute-0 nova_compute[254092]: 2025-11-25 17:10:42.894 254096 DEBUG oslo_concurrency.lockutils [req-94c5a4fe-4dfe-4416-8c53-1d4a3802f3c2 req-c10ff4b7-dd3e-42df-8099-c813c55935ee a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "678bebc8-318d-4332-b89f-f86ac5f187c4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:10:42 compute-0 nova_compute[254092]: 2025-11-25 17:10:42.895 254096 DEBUG oslo_concurrency.lockutils [req-94c5a4fe-4dfe-4416-8c53-1d4a3802f3c2 req-c10ff4b7-dd3e-42df-8099-c813c55935ee a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "678bebc8-318d-4332-b89f-f86ac5f187c4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:10:42 compute-0 nova_compute[254092]: 2025-11-25 17:10:42.895 254096 DEBUG oslo_concurrency.lockutils [req-94c5a4fe-4dfe-4416-8c53-1d4a3802f3c2 req-c10ff4b7-dd3e-42df-8099-c813c55935ee a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "678bebc8-318d-4332-b89f-f86ac5f187c4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:10:42 compute-0 nova_compute[254092]: 2025-11-25 17:10:42.895 254096 DEBUG nova.compute.manager [req-94c5a4fe-4dfe-4416-8c53-1d4a3802f3c2 req-c10ff4b7-dd3e-42df-8099-c813c55935ee a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] No waiting events found dispatching network-vif-unplugged-3af510d7-9800-4ba2-9c9f-f8ded924314f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:10:42 compute-0 nova_compute[254092]: 2025-11-25 17:10:42.895 254096 DEBUG nova.compute.manager [req-94c5a4fe-4dfe-4416-8c53-1d4a3802f3c2 req-c10ff4b7-dd3e-42df-8099-c813c55935ee a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Received event network-vif-unplugged-3af510d7-9800-4ba2-9c9f-f8ded924314f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 17:10:42 compute-0 nova_compute[254092]: 2025-11-25 17:10:42.896 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:42 compute-0 nova_compute[254092]: 2025-11-25 17:10:42.898 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:10:42 compute-0 nova_compute[254092]: 2025-11-25 17:10:42.899 254096 INFO os_vif [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b7:9a:99,bridge_name='br-int',has_traffic_filtering=True,id=3af510d7-9800-4ba2-9c9f-f8ded924314f,network=Network(169b0886-fc13-49ff-b4f6-0f14f908ad1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3af510d7-98')
Nov 25 17:10:42 compute-0 podman[400826]: 2025-11-25 17:10:42.987974697 +0000 UTC m=+0.279948209 container cleanup 2166e05adba0cd7f21d11eeee04f0c9e3c7a19a1b0084ed6013fec89a50941a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-169b0886-fc13-49ff-b4f6-0f14f908ad1c, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 25 17:10:43 compute-0 systemd[1]: libpod-conmon-2166e05adba0cd7f21d11eeee04f0c9e3c7a19a1b0084ed6013fec89a50941a1.scope: Deactivated successfully.
Nov 25 17:10:43 compute-0 podman[400904]: 2025-11-25 17:10:43.071975321 +0000 UTC m=+0.059791800 container remove 2166e05adba0cd7f21d11eeee04f0c9e3c7a19a1b0084ed6013fec89a50941a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-169b0886-fc13-49ff-b4f6-0f14f908ad1c, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 17:10:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:43.080 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3b5d2631-45db-4c12-af44-423d47723aab]: (4, ('Tue Nov 25 05:10:42 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-169b0886-fc13-49ff-b4f6-0f14f908ad1c (2166e05adba0cd7f21d11eeee04f0c9e3c7a19a1b0084ed6013fec89a50941a1)\n2166e05adba0cd7f21d11eeee04f0c9e3c7a19a1b0084ed6013fec89a50941a1\nTue Nov 25 05:10:42 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-169b0886-fc13-49ff-b4f6-0f14f908ad1c (2166e05adba0cd7f21d11eeee04f0c9e3c7a19a1b0084ed6013fec89a50941a1)\n2166e05adba0cd7f21d11eeee04f0c9e3c7a19a1b0084ed6013fec89a50941a1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:10:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:43.081 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[06181a79-c47d-4ddf-bb1b-4d9d370e8fb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:10:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:43.082 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap169b0886-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:10:43 compute-0 kernel: tap169b0886-f0: left promiscuous mode
Nov 25 17:10:43 compute-0 nova_compute[254092]: 2025-11-25 17:10:43.086 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:43 compute-0 nova_compute[254092]: 2025-11-25 17:10:43.097 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:43.100 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[af5117b9-2e74-4223-beaf-934d742673b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:10:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:43.116 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5ecdc5e4-b03c-40a4-b9d4-6e1ff04a301d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:10:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:43.118 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[26549f66-8e1b-4c5d-a119-23e833c4d34f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:10:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:43.150 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ef6e8dcd-70cf-4431-a80a-49c60c82ae23]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707575, 'reachable_time': 37880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 400917, 'error': None, 'target': 'ovnmeta-169b0886-fc13-49ff-b4f6-0f14f908ad1c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:10:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:43.153 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-169b0886-fc13-49ff-b4f6-0f14f908ad1c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 17:10:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:10:43.153 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[db2c3fd8-fc14-43c9-9516-4782b2567669]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:10:43 compute-0 systemd[1]: run-netns-ovnmeta\x2d169b0886\x2dfc13\x2d49ff\x2db4f6\x2d0f14f908ad1c.mount: Deactivated successfully.
Nov 25 17:10:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:10:43 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/574049576' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:10:43 compute-0 nova_compute[254092]: 2025-11-25 17:10:43.264 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:10:43 compute-0 nova_compute[254092]: 2025-11-25 17:10:43.272 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:10:43 compute-0 nova_compute[254092]: 2025-11-25 17:10:43.289 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:10:43 compute-0 nova_compute[254092]: 2025-11-25 17:10:43.320 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:10:43 compute-0 nova_compute[254092]: 2025-11-25 17:10:43.321 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.095s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:10:43 compute-0 nova_compute[254092]: 2025-11-25 17:10:43.376 254096 INFO nova.virt.libvirt.driver [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Deleting instance files /var/lib/nova/instances/678bebc8-318d-4332-b89f-f86ac5f187c4_del
Nov 25 17:10:43 compute-0 nova_compute[254092]: 2025-11-25 17:10:43.379 254096 INFO nova.virt.libvirt.driver [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Deletion of /var/lib/nova/instances/678bebc8-318d-4332-b89f-f86ac5f187c4_del complete
Nov 25 17:10:43 compute-0 nova_compute[254092]: 2025-11-25 17:10:43.432 254096 INFO nova.compute.manager [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Took 0.93 seconds to destroy the instance on the hypervisor.
Nov 25 17:10:43 compute-0 nova_compute[254092]: 2025-11-25 17:10:43.433 254096 DEBUG oslo.service.loopingcall [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 17:10:43 compute-0 nova_compute[254092]: 2025-11-25 17:10:43.434 254096 DEBUG nova.compute.manager [-] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 17:10:43 compute-0 nova_compute[254092]: 2025-11-25 17:10:43.434 254096 DEBUG nova.network.neutron [-] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 17:10:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:10:43 compute-0 nova_compute[254092]: 2025-11-25 17:10:43.984 254096 DEBUG nova.network.neutron [req-8340390c-a464-4005-81d9-9726f74bbb00 req-336d9a1e-2d5d-4b96-a3f4-b2aef2370c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Updated VIF entry in instance network info cache for port 3af510d7-9800-4ba2-9c9f-f8ded924314f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:10:43 compute-0 nova_compute[254092]: 2025-11-25 17:10:43.985 254096 DEBUG nova.network.neutron [req-8340390c-a464-4005-81d9-9726f74bbb00 req-336d9a1e-2d5d-4b96-a3f4-b2aef2370c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Updating instance_info_cache with network_info: [{"id": "3af510d7-9800-4ba2-9c9f-f8ded924314f", "address": "fa:16:3e:b7:9a:99", "network": {"id": "169b0886-fc13-49ff-b4f6-0f14f908ad1c", "bridge": "br-int", "label": "tempest-network-smoke--1110521818", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31307e1076a54d1da7cf2b1a4e0b0731", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af510d7-98", "ovs_interfaceid": "3af510d7-9800-4ba2-9c9f-f8ded924314f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:10:43 compute-0 ceph-mon[74985]: pgmap v2615: 321 pgs: 321 active+clean; 121 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Nov 25 17:10:43 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/574049576' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:10:44 compute-0 nova_compute[254092]: 2025-11-25 17:10:44.007 254096 DEBUG oslo_concurrency.lockutils [req-8340390c-a464-4005-81d9-9726f74bbb00 req-336d9a1e-2d5d-4b96-a3f4-b2aef2370c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-678bebc8-318d-4332-b89f-f86ac5f187c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:10:44 compute-0 nova_compute[254092]: 2025-11-25 17:10:44.199 254096 DEBUG nova.network.neutron [-] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:10:44 compute-0 nova_compute[254092]: 2025-11-25 17:10:44.214 254096 INFO nova.compute.manager [-] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Took 0.78 seconds to deallocate network for instance.
Nov 25 17:10:44 compute-0 nova_compute[254092]: 2025-11-25 17:10:44.250 254096 DEBUG oslo_concurrency.lockutils [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:10:44 compute-0 nova_compute[254092]: 2025-11-25 17:10:44.251 254096 DEBUG oslo_concurrency.lockutils [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:10:44 compute-0 nova_compute[254092]: 2025-11-25 17:10:44.292 254096 DEBUG oslo_concurrency.processutils [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:10:44 compute-0 nova_compute[254092]: 2025-11-25 17:10:44.362 254096 DEBUG nova.compute.manager [req-aeed8a7c-7771-4293-a3f5-1470efce5354 req-c3d1202e-e954-4b26-a021-98e3e9992416 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Received event network-vif-deleted-3af510d7-9800-4ba2-9c9f-f8ded924314f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:10:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2616: 321 pgs: 321 active+clean; 121 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 335 KiB/s rd, 2.0 MiB/s wr, 62 op/s
Nov 25 17:10:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:10:44 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2973931364' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:10:44 compute-0 nova_compute[254092]: 2025-11-25 17:10:44.723 254096 DEBUG oslo_concurrency.processutils [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:10:44 compute-0 nova_compute[254092]: 2025-11-25 17:10:44.730 254096 DEBUG nova.compute.provider_tree [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:10:44 compute-0 nova_compute[254092]: 2025-11-25 17:10:44.752 254096 DEBUG nova.scheduler.client.report [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:10:44 compute-0 nova_compute[254092]: 2025-11-25 17:10:44.776 254096 DEBUG oslo_concurrency.lockutils [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.525s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:10:44 compute-0 nova_compute[254092]: 2025-11-25 17:10:44.817 254096 INFO nova.scheduler.client.report [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Deleted allocations for instance 678bebc8-318d-4332-b89f-f86ac5f187c4
Nov 25 17:10:44 compute-0 nova_compute[254092]: 2025-11-25 17:10:44.869 254096 DEBUG oslo_concurrency.lockutils [None req-96ddc686-c451-448c-bba8-b1cc2a988740 e13382ac65e94c7b8a89e67de0e754df 31307e1076a54d1da7cf2b1a4e0b0731 - - default default] Lock "678bebc8-318d-4332-b89f-f86ac5f187c4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.376s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:10:44 compute-0 nova_compute[254092]: 2025-11-25 17:10:44.967 254096 DEBUG nova.compute.manager [req-49210749-234f-4d5c-89f8-1ab59c49546d req-8d93ac1c-eae5-48d0-9665-19636fccbb1c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Received event network-vif-plugged-3af510d7-9800-4ba2-9c9f-f8ded924314f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:10:44 compute-0 nova_compute[254092]: 2025-11-25 17:10:44.968 254096 DEBUG oslo_concurrency.lockutils [req-49210749-234f-4d5c-89f8-1ab59c49546d req-8d93ac1c-eae5-48d0-9665-19636fccbb1c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "678bebc8-318d-4332-b89f-f86ac5f187c4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:10:44 compute-0 nova_compute[254092]: 2025-11-25 17:10:44.968 254096 DEBUG oslo_concurrency.lockutils [req-49210749-234f-4d5c-89f8-1ab59c49546d req-8d93ac1c-eae5-48d0-9665-19636fccbb1c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "678bebc8-318d-4332-b89f-f86ac5f187c4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:10:44 compute-0 nova_compute[254092]: 2025-11-25 17:10:44.968 254096 DEBUG oslo_concurrency.lockutils [req-49210749-234f-4d5c-89f8-1ab59c49546d req-8d93ac1c-eae5-48d0-9665-19636fccbb1c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "678bebc8-318d-4332-b89f-f86ac5f187c4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:10:44 compute-0 nova_compute[254092]: 2025-11-25 17:10:44.968 254096 DEBUG nova.compute.manager [req-49210749-234f-4d5c-89f8-1ab59c49546d req-8d93ac1c-eae5-48d0-9665-19636fccbb1c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] No waiting events found dispatching network-vif-plugged-3af510d7-9800-4ba2-9c9f-f8ded924314f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:10:44 compute-0 nova_compute[254092]: 2025-11-25 17:10:44.969 254096 WARNING nova.compute.manager [req-49210749-234f-4d5c-89f8-1ab59c49546d req-8d93ac1c-eae5-48d0-9665-19636fccbb1c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Received unexpected event network-vif-plugged-3af510d7-9800-4ba2-9c9f-f8ded924314f for instance with vm_state deleted and task_state None.
Nov 25 17:10:45 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2973931364' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:10:45 compute-0 nova_compute[254092]: 2025-11-25 17:10:45.604 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:46 compute-0 ceph-mon[74985]: pgmap v2616: 321 pgs: 321 active+clean; 121 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 335 KiB/s rd, 2.0 MiB/s wr, 62 op/s
Nov 25 17:10:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2617: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 354 KiB/s rd, 2.0 MiB/s wr, 89 op/s
Nov 25 17:10:47 compute-0 nova_compute[254092]: 2025-11-25 17:10:47.322 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:10:47 compute-0 nova_compute[254092]: 2025-11-25 17:10:47.323 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:10:47 compute-0 nova_compute[254092]: 2025-11-25 17:10:47.361 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 17:10:47 compute-0 nova_compute[254092]: 2025-11-25 17:10:47.362 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:10:47 compute-0 nova_compute[254092]: 2025-11-25 17:10:47.889 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:48 compute-0 ceph-mon[74985]: pgmap v2617: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 354 KiB/s rd, 2.0 MiB/s wr, 89 op/s
Nov 25 17:10:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2618: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Nov 25 17:10:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:10:49 compute-0 nova_compute[254092]: 2025-11-25 17:10:49.196 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:49 compute-0 nova_compute[254092]: 2025-11-25 17:10:49.310 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:50 compute-0 ceph-mon[74985]: pgmap v2618: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Nov 25 17:10:50 compute-0 nova_compute[254092]: 2025-11-25 17:10:50.606 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2619: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Nov 25 17:10:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:10:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:10:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:10:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:10:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 17:10:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:10:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:10:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:10:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:10:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:10:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 17:10:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:10:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:10:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:10:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:10:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:10:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:10:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:10:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:10:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:10:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:10:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:10:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:10:52 compute-0 ceph-mon[74985]: pgmap v2619: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Nov 25 17:10:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2620: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 17:10:52 compute-0 nova_compute[254092]: 2025-11-25 17:10:52.928 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:53 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:10:54 compute-0 ceph-mon[74985]: pgmap v2620: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 17:10:54 compute-0 podman[400945]: 2025-11-25 17:10:54.638974041 +0000 UTC m=+0.048235914 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:10:54 compute-0 podman[400944]: 2025-11-25 17:10:54.64657456 +0000 UTC m=+0.057509939 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 17:10:54 compute-0 podman[400946]: 2025-11-25 17:10:54.686484264 +0000 UTC m=+0.090000799 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller)
Nov 25 17:10:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2621: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 17:10:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:10:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3181114344' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:10:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:10:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3181114344' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:10:55 compute-0 nova_compute[254092]: 2025-11-25 17:10:55.608 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:56 compute-0 ceph-mon[74985]: pgmap v2621: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 17:10:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/3181114344' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:10:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/3181114344' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:10:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2622: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 17:10:57 compute-0 nova_compute[254092]: 2025-11-25 17:10:57.750 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090642.7384002, 678bebc8-318d-4332-b89f-f86ac5f187c4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:10:57 compute-0 nova_compute[254092]: 2025-11-25 17:10:57.751 254096 INFO nova.compute.manager [-] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] VM Stopped (Lifecycle Event)
Nov 25 17:10:57 compute-0 nova_compute[254092]: 2025-11-25 17:10:57.769 254096 DEBUG nova.compute.manager [None req-98fe419f-2440-4f83-bc24-9853e80c65fb - - - - - -] [instance: 678bebc8-318d-4332-b89f-f86ac5f187c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:10:57 compute-0 nova_compute[254092]: 2025-11-25 17:10:57.931 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:10:58 compute-0 ceph-mon[74985]: pgmap v2622: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 17:10:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2623: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:10:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:10:59 compute-0 sshd-session[401007]: Connection closed by authenticating user root 171.244.51.45 port 46740 [preauth]
Nov 25 17:11:00 compute-0 ceph-mon[74985]: pgmap v2623: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:11:00 compute-0 nova_compute[254092]: 2025-11-25 17:11:00.610 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:11:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2624: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:11:02 compute-0 ceph-mon[74985]: pgmap v2624: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:11:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2625: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:11:02 compute-0 nova_compute[254092]: 2025-11-25 17:11:02.971 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:11:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:11:04 compute-0 ceph-mon[74985]: pgmap v2625: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:11:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2626: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:11:05 compute-0 nova_compute[254092]: 2025-11-25 17:11:05.611 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:11:06 compute-0 ceph-mon[74985]: pgmap v2626: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:11:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:11:06.584 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:5c:a4 10.100.0.2 2001:db8::f816:3eff:fee8:5ca4'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fee8:5ca4/64', 'neutron:device_id': 'ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bd20b9f-24bc-4728-90ff-2870bebbf3f9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2394ea31-774c-4cab-8186-73ed87450ca6, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=6c3d14c5-0ace-4cc3-828c-1d4061ad3097) old=Port_Binding(mac=['fa:16:3e:e8:5c:a4 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bd20b9f-24bc-4728-90ff-2870bebbf3f9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:11:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:11:06.585 163338 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 6c3d14c5-0ace-4cc3-828c-1d4061ad3097 in datapath 0bd20b9f-24bc-4728-90ff-2870bebbf3f9 updated
Nov 25 17:11:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:11:06.586 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0bd20b9f-24bc-4728-90ff-2870bebbf3f9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 17:11:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:11:06.587 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f782c710-103d-44e3-8fbb-d924e214427c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:11:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2627: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:11:07 compute-0 nova_compute[254092]: 2025-11-25 17:11:07.974 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:11:08 compute-0 ceph-mon[74985]: pgmap v2627: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:11:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2628: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:11:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:11:09 compute-0 ceph-mon[74985]: pgmap v2628: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:11:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:11:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:11:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:11:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:11:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:11:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:11:10 compute-0 nova_compute[254092]: 2025-11-25 17:11:10.612 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:11:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2629: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:11:11 compute-0 ceph-mon[74985]: pgmap v2629: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:11:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2630: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:11:12 compute-0 nova_compute[254092]: 2025-11-25 17:11:12.978 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:11:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:11:13.649 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:11:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:11:13.650 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:11:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:11:13.650 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:11:13 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:11:14 compute-0 ceph-mon[74985]: pgmap v2630: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:11:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2631: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:11:15 compute-0 nova_compute[254092]: 2025-11-25 17:11:15.390 254096 DEBUG oslo_concurrency.lockutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "2f4d2580-5acd-4693-a158-926565a16fe9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:11:15 compute-0 nova_compute[254092]: 2025-11-25 17:11:15.390 254096 DEBUG oslo_concurrency.lockutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "2f4d2580-5acd-4693-a158-926565a16fe9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:11:15 compute-0 nova_compute[254092]: 2025-11-25 17:11:15.411 254096 DEBUG nova.compute.manager [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 17:11:15 compute-0 nova_compute[254092]: 2025-11-25 17:11:15.492 254096 DEBUG oslo_concurrency.lockutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:11:15 compute-0 nova_compute[254092]: 2025-11-25 17:11:15.493 254096 DEBUG oslo_concurrency.lockutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:11:15 compute-0 nova_compute[254092]: 2025-11-25 17:11:15.504 254096 DEBUG nova.virt.hardware [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 17:11:15 compute-0 nova_compute[254092]: 2025-11-25 17:11:15.505 254096 INFO nova.compute.claims [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Claim successful on node compute-0.ctlplane.example.com
Nov 25 17:11:15 compute-0 nova_compute[254092]: 2025-11-25 17:11:15.602 254096 DEBUG oslo_concurrency.processutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:11:15 compute-0 nova_compute[254092]: 2025-11-25 17:11:15.637 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:11:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:11:16 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4190248708' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:11:16 compute-0 nova_compute[254092]: 2025-11-25 17:11:16.043 254096 DEBUG oslo_concurrency.processutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:11:16 compute-0 nova_compute[254092]: 2025-11-25 17:11:16.049 254096 DEBUG nova.compute.provider_tree [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:11:16 compute-0 nova_compute[254092]: 2025-11-25 17:11:16.139 254096 DEBUG nova.scheduler.client.report [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:11:16 compute-0 ceph-mon[74985]: pgmap v2631: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:11:16 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4190248708' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:11:16 compute-0 nova_compute[254092]: 2025-11-25 17:11:16.230 254096 DEBUG oslo_concurrency.lockutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.737s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:11:16 compute-0 nova_compute[254092]: 2025-11-25 17:11:16.231 254096 DEBUG nova.compute.manager [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 17:11:16 compute-0 nova_compute[254092]: 2025-11-25 17:11:16.272 254096 DEBUG nova.compute.manager [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 17:11:16 compute-0 nova_compute[254092]: 2025-11-25 17:11:16.272 254096 DEBUG nova.network.neutron [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 17:11:16 compute-0 nova_compute[254092]: 2025-11-25 17:11:16.293 254096 INFO nova.virt.libvirt.driver [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 17:11:16 compute-0 nova_compute[254092]: 2025-11-25 17:11:16.308 254096 DEBUG nova.compute.manager [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 17:11:16 compute-0 nova_compute[254092]: 2025-11-25 17:11:16.381 254096 DEBUG nova.compute.manager [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 17:11:16 compute-0 nova_compute[254092]: 2025-11-25 17:11:16.383 254096 DEBUG nova.virt.libvirt.driver [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 17:11:16 compute-0 nova_compute[254092]: 2025-11-25 17:11:16.384 254096 INFO nova.virt.libvirt.driver [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Creating image(s)
Nov 25 17:11:16 compute-0 nova_compute[254092]: 2025-11-25 17:11:16.415 254096 DEBUG nova.storage.rbd_utils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 2f4d2580-5acd-4693-a158-926565a16fe9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:11:16 compute-0 nova_compute[254092]: 2025-11-25 17:11:16.436 254096 DEBUG nova.storage.rbd_utils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 2f4d2580-5acd-4693-a158-926565a16fe9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:11:16 compute-0 nova_compute[254092]: 2025-11-25 17:11:16.460 254096 DEBUG nova.storage.rbd_utils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 2f4d2580-5acd-4693-a158-926565a16fe9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:11:16 compute-0 nova_compute[254092]: 2025-11-25 17:11:16.465 254096 DEBUG oslo_concurrency.processutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:11:16 compute-0 nova_compute[254092]: 2025-11-25 17:11:16.511 254096 DEBUG nova.policy [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a647bb22c9a54e5fa9ddfff588126693', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '67b7f8e8aac6441f992a182f8d893100', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 17:11:16 compute-0 nova_compute[254092]: 2025-11-25 17:11:16.563 254096 DEBUG oslo_concurrency.processutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:11:16 compute-0 nova_compute[254092]: 2025-11-25 17:11:16.564 254096 DEBUG oslo_concurrency.lockutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:11:16 compute-0 nova_compute[254092]: 2025-11-25 17:11:16.565 254096 DEBUG oslo_concurrency.lockutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:11:16 compute-0 nova_compute[254092]: 2025-11-25 17:11:16.565 254096 DEBUG oslo_concurrency.lockutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:11:16 compute-0 nova_compute[254092]: 2025-11-25 17:11:16.594 254096 DEBUG nova.storage.rbd_utils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 2f4d2580-5acd-4693-a158-926565a16fe9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:11:16 compute-0 nova_compute[254092]: 2025-11-25 17:11:16.600 254096 DEBUG oslo_concurrency.processutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 2f4d2580-5acd-4693-a158-926565a16fe9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:11:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2632: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:11:16 compute-0 nova_compute[254092]: 2025-11-25 17:11:16.963 254096 DEBUG oslo_concurrency.processutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 2f4d2580-5acd-4693-a158-926565a16fe9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.363s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:11:17 compute-0 nova_compute[254092]: 2025-11-25 17:11:17.025 254096 DEBUG nova.storage.rbd_utils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] resizing rbd image 2f4d2580-5acd-4693-a158-926565a16fe9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 17:11:17 compute-0 nova_compute[254092]: 2025-11-25 17:11:17.117 254096 DEBUG nova.objects.instance [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'migration_context' on Instance uuid 2f4d2580-5acd-4693-a158-926565a16fe9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:11:17 compute-0 nova_compute[254092]: 2025-11-25 17:11:17.131 254096 DEBUG nova.virt.libvirt.driver [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 17:11:17 compute-0 nova_compute[254092]: 2025-11-25 17:11:17.131 254096 DEBUG nova.virt.libvirt.driver [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Ensure instance console log exists: /var/lib/nova/instances/2f4d2580-5acd-4693-a158-926565a16fe9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 17:11:17 compute-0 nova_compute[254092]: 2025-11-25 17:11:17.132 254096 DEBUG oslo_concurrency.lockutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:11:17 compute-0 nova_compute[254092]: 2025-11-25 17:11:17.132 254096 DEBUG oslo_concurrency.lockutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:11:17 compute-0 nova_compute[254092]: 2025-11-25 17:11:17.133 254096 DEBUG oslo_concurrency.lockutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:11:17 compute-0 nova_compute[254092]: 2025-11-25 17:11:17.754 254096 DEBUG nova.network.neutron [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Successfully created port: 0d520bf2-3b97-484c-9046-b614ea281b88 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 17:11:18 compute-0 nova_compute[254092]: 2025-11-25 17:11:18.028 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:11:18 compute-0 ceph-mon[74985]: pgmap v2632: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:11:18 compute-0 nova_compute[254092]: 2025-11-25 17:11:18.459 254096 DEBUG nova.network.neutron [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Successfully updated port: 0d520bf2-3b97-484c-9046-b614ea281b88 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:11:18 compute-0 nova_compute[254092]: 2025-11-25 17:11:18.472 254096 DEBUG oslo_concurrency.lockutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "refresh_cache-2f4d2580-5acd-4693-a158-926565a16fe9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:11:18 compute-0 nova_compute[254092]: 2025-11-25 17:11:18.472 254096 DEBUG oslo_concurrency.lockutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquired lock "refresh_cache-2f4d2580-5acd-4693-a158-926565a16fe9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:11:18 compute-0 nova_compute[254092]: 2025-11-25 17:11:18.473 254096 DEBUG nova.network.neutron [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 17:11:18 compute-0 nova_compute[254092]: 2025-11-25 17:11:18.571 254096 DEBUG nova.compute.manager [req-550e6dce-294e-4982-b4ae-7b4f1501c235 req-43271f2e-f8ab-42f7-aef7-7fde5b85a01c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Received event network-changed-0d520bf2-3b97-484c-9046-b614ea281b88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:11:18 compute-0 nova_compute[254092]: 2025-11-25 17:11:18.572 254096 DEBUG nova.compute.manager [req-550e6dce-294e-4982-b4ae-7b4f1501c235 req-43271f2e-f8ab-42f7-aef7-7fde5b85a01c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Refreshing instance network info cache due to event network-changed-0d520bf2-3b97-484c-9046-b614ea281b88. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:11:18 compute-0 nova_compute[254092]: 2025-11-25 17:11:18.573 254096 DEBUG oslo_concurrency.lockutils [req-550e6dce-294e-4982-b4ae-7b4f1501c235 req-43271f2e-f8ab-42f7-aef7-7fde5b85a01c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-2f4d2580-5acd-4693-a158-926565a16fe9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:11:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2633: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:11:18 compute-0 nova_compute[254092]: 2025-11-25 17:11:18.759 254096 DEBUG nova.network.neutron [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:11:18 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:11:19 compute-0 nova_compute[254092]: 2025-11-25 17:11:19.760 254096 DEBUG nova.network.neutron [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Updating instance_info_cache with network_info: [{"id": "0d520bf2-3b97-484c-9046-b614ea281b88", "address": "fa:16:3e:fa:3c:b2", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefa:3cb2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d520bf2-3b", "ovs_interfaceid": "0d520bf2-3b97-484c-9046-b614ea281b88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:11:19 compute-0 nova_compute[254092]: 2025-11-25 17:11:19.783 254096 DEBUG oslo_concurrency.lockutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Releasing lock "refresh_cache-2f4d2580-5acd-4693-a158-926565a16fe9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:11:19 compute-0 nova_compute[254092]: 2025-11-25 17:11:19.783 254096 DEBUG nova.compute.manager [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Instance network_info: |[{"id": "0d520bf2-3b97-484c-9046-b614ea281b88", "address": "fa:16:3e:fa:3c:b2", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefa:3cb2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d520bf2-3b", "ovs_interfaceid": "0d520bf2-3b97-484c-9046-b614ea281b88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 17:11:19 compute-0 nova_compute[254092]: 2025-11-25 17:11:19.784 254096 DEBUG oslo_concurrency.lockutils [req-550e6dce-294e-4982-b4ae-7b4f1501c235 req-43271f2e-f8ab-42f7-aef7-7fde5b85a01c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-2f4d2580-5acd-4693-a158-926565a16fe9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:11:19 compute-0 nova_compute[254092]: 2025-11-25 17:11:19.784 254096 DEBUG nova.network.neutron [req-550e6dce-294e-4982-b4ae-7b4f1501c235 req-43271f2e-f8ab-42f7-aef7-7fde5b85a01c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Refreshing network info cache for port 0d520bf2-3b97-484c-9046-b614ea281b88 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:11:19 compute-0 nova_compute[254092]: 2025-11-25 17:11:19.788 254096 DEBUG nova.virt.libvirt.driver [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Start _get_guest_xml network_info=[{"id": "0d520bf2-3b97-484c-9046-b614ea281b88", "address": "fa:16:3e:fa:3c:b2", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefa:3cb2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d520bf2-3b", "ovs_interfaceid": "0d520bf2-3b97-484c-9046-b614ea281b88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 17:11:19 compute-0 nova_compute[254092]: 2025-11-25 17:11:19.792 254096 WARNING nova.virt.libvirt.driver [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:11:19 compute-0 nova_compute[254092]: 2025-11-25 17:11:19.796 254096 DEBUG nova.virt.libvirt.host [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 17:11:19 compute-0 nova_compute[254092]: 2025-11-25 17:11:19.797 254096 DEBUG nova.virt.libvirt.host [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 17:11:19 compute-0 nova_compute[254092]: 2025-11-25 17:11:19.800 254096 DEBUG nova.virt.libvirt.host [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 17:11:19 compute-0 nova_compute[254092]: 2025-11-25 17:11:19.800 254096 DEBUG nova.virt.libvirt.host [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 17:11:19 compute-0 nova_compute[254092]: 2025-11-25 17:11:19.800 254096 DEBUG nova.virt.libvirt.driver [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 17:11:19 compute-0 nova_compute[254092]: 2025-11-25 17:11:19.801 254096 DEBUG nova.virt.hardware [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 17:11:19 compute-0 nova_compute[254092]: 2025-11-25 17:11:19.801 254096 DEBUG nova.virt.hardware [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 17:11:19 compute-0 nova_compute[254092]: 2025-11-25 17:11:19.801 254096 DEBUG nova.virt.hardware [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 17:11:19 compute-0 nova_compute[254092]: 2025-11-25 17:11:19.801 254096 DEBUG nova.virt.hardware [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 17:11:19 compute-0 nova_compute[254092]: 2025-11-25 17:11:19.801 254096 DEBUG nova.virt.hardware [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 17:11:19 compute-0 nova_compute[254092]: 2025-11-25 17:11:19.802 254096 DEBUG nova.virt.hardware [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 17:11:19 compute-0 nova_compute[254092]: 2025-11-25 17:11:19.802 254096 DEBUG nova.virt.hardware [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 17:11:19 compute-0 nova_compute[254092]: 2025-11-25 17:11:19.802 254096 DEBUG nova.virt.hardware [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 17:11:19 compute-0 nova_compute[254092]: 2025-11-25 17:11:19.802 254096 DEBUG nova.virt.hardware [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 17:11:19 compute-0 nova_compute[254092]: 2025-11-25 17:11:19.802 254096 DEBUG nova.virt.hardware [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 17:11:19 compute-0 nova_compute[254092]: 2025-11-25 17:11:19.802 254096 DEBUG nova.virt.hardware [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 17:11:19 compute-0 nova_compute[254092]: 2025-11-25 17:11:19.805 254096 DEBUG oslo_concurrency.processutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:11:20 compute-0 ceph-mon[74985]: pgmap v2633: 321 pgs: 321 active+clean; 41 MiB data, 900 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:11:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:11:20 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/720976347' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:11:20 compute-0 nova_compute[254092]: 2025-11-25 17:11:20.251 254096 DEBUG oslo_concurrency.processutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:11:20 compute-0 nova_compute[254092]: 2025-11-25 17:11:20.277 254096 DEBUG nova.storage.rbd_utils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 2f4d2580-5acd-4693-a158-926565a16fe9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:11:20 compute-0 nova_compute[254092]: 2025-11-25 17:11:20.281 254096 DEBUG oslo_concurrency.processutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:11:20 compute-0 nova_compute[254092]: 2025-11-25 17:11:20.618 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:11:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:11:20 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2822169432' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:11:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2634: 321 pgs: 321 active+clean; 75 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.7 MiB/s wr, 25 op/s
Nov 25 17:11:20 compute-0 nova_compute[254092]: 2025-11-25 17:11:20.722 254096 DEBUG oslo_concurrency.processutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:11:20 compute-0 nova_compute[254092]: 2025-11-25 17:11:20.725 254096 DEBUG nova.virt.libvirt.vif [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:11:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-318572700',display_name='tempest-TestGettingAddress-server-318572700',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-318572700',id=135,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG7thEmoyKozSV70fSmXIKBzdgJECHN3GnSKzpZMf9In3z/LeK0Fm0kaAtX5kv64CasJMe/8BiJ7VhFgmrFP6y7poqA3BuRl/GzlLwom1v+Q2Cb33Se+uUu6cs+H/0PS7w==',key_name='tempest-TestGettingAddress-844897793',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-6ty6vm7r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:11:16Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=2f4d2580-5acd-4693-a158-926565a16fe9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0d520bf2-3b97-484c-9046-b614ea281b88", "address": "fa:16:3e:fa:3c:b2", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefa:3cb2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d520bf2-3b", "ovs_interfaceid": "0d520bf2-3b97-484c-9046-b614ea281b88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:11:20 compute-0 nova_compute[254092]: 2025-11-25 17:11:20.726 254096 DEBUG nova.network.os_vif_util [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "0d520bf2-3b97-484c-9046-b614ea281b88", "address": "fa:16:3e:fa:3c:b2", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefa:3cb2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d520bf2-3b", "ovs_interfaceid": "0d520bf2-3b97-484c-9046-b614ea281b88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:11:20 compute-0 nova_compute[254092]: 2025-11-25 17:11:20.728 254096 DEBUG nova.network.os_vif_util [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fa:3c:b2,bridge_name='br-int',has_traffic_filtering=True,id=0d520bf2-3b97-484c-9046-b614ea281b88,network=Network(0bd20b9f-24bc-4728-90ff-2870bebbf3f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d520bf2-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:11:20 compute-0 nova_compute[254092]: 2025-11-25 17:11:20.731 254096 DEBUG nova.objects.instance [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2f4d2580-5acd-4693-a158-926565a16fe9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:11:20 compute-0 nova_compute[254092]: 2025-11-25 17:11:20.752 254096 DEBUG nova.virt.libvirt.driver [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] End _get_guest_xml xml=<domain type="kvm">
Nov 25 17:11:20 compute-0 nova_compute[254092]:   <uuid>2f4d2580-5acd-4693-a158-926565a16fe9</uuid>
Nov 25 17:11:20 compute-0 nova_compute[254092]:   <name>instance-00000087</name>
Nov 25 17:11:20 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 17:11:20 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 17:11:20 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:11:20 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:       <nova:name>tempest-TestGettingAddress-server-318572700</nova:name>
Nov 25 17:11:20 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 17:11:19</nova:creationTime>
Nov 25 17:11:20 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 17:11:20 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 17:11:20 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 17:11:20 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 17:11:20 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:11:20 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 17:11:20 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 17:11:20 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 17:11:20 compute-0 nova_compute[254092]:         <nova:user uuid="a647bb22c9a54e5fa9ddfff588126693">tempest-TestGettingAddress-207202764-project-member</nova:user>
Nov 25 17:11:20 compute-0 nova_compute[254092]:         <nova:project uuid="67b7f8e8aac6441f992a182f8d893100">tempest-TestGettingAddress-207202764</nova:project>
Nov 25 17:11:20 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 17:11:20 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 17:11:20 compute-0 nova_compute[254092]:         <nova:port uuid="0d520bf2-3b97-484c-9046-b614ea281b88">
Nov 25 17:11:20 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fefa:3cb2" ipVersion="6"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 17:11:20 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 17:11:20 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:11:20 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 17:11:20 compute-0 nova_compute[254092]:     <system>
Nov 25 17:11:20 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 17:11:20 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 17:11:20 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:11:20 compute-0 nova_compute[254092]:       <entry name="serial">2f4d2580-5acd-4693-a158-926565a16fe9</entry>
Nov 25 17:11:20 compute-0 nova_compute[254092]:       <entry name="uuid">2f4d2580-5acd-4693-a158-926565a16fe9</entry>
Nov 25 17:11:20 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     </system>
Nov 25 17:11:20 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:11:20 compute-0 nova_compute[254092]:   <os>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:   </os>
Nov 25 17:11:20 compute-0 nova_compute[254092]:   <features>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:   </features>
Nov 25 17:11:20 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 17:11:20 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:11:20 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 17:11:20 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:11:20 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 17:11:20 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/2f4d2580-5acd-4693-a158-926565a16fe9_disk">
Nov 25 17:11:20 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:       </source>
Nov 25 17:11:20 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:11:20 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:11:20 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 17:11:20 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/2f4d2580-5acd-4693-a158-926565a16fe9_disk.config">
Nov 25 17:11:20 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:       </source>
Nov 25 17:11:20 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:11:20 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:11:20 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 17:11:20 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:fa:3c:b2"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:       <target dev="tap0d520bf2-3b"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 17:11:20 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/2f4d2580-5acd-4693-a158-926565a16fe9/console.log" append="off"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     <video>
Nov 25 17:11:20 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     </video>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 17:11:20 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 17:11:20 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 17:11:20 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:11:20 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:11:20 compute-0 nova_compute[254092]: </domain>
Nov 25 17:11:20 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 17:11:20 compute-0 nova_compute[254092]: 2025-11-25 17:11:20.755 254096 DEBUG nova.compute.manager [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Preparing to wait for external event network-vif-plugged-0d520bf2-3b97-484c-9046-b614ea281b88 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 17:11:20 compute-0 nova_compute[254092]: 2025-11-25 17:11:20.755 254096 DEBUG oslo_concurrency.lockutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "2f4d2580-5acd-4693-a158-926565a16fe9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:11:20 compute-0 nova_compute[254092]: 2025-11-25 17:11:20.755 254096 DEBUG oslo_concurrency.lockutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "2f4d2580-5acd-4693-a158-926565a16fe9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:11:20 compute-0 nova_compute[254092]: 2025-11-25 17:11:20.755 254096 DEBUG oslo_concurrency.lockutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "2f4d2580-5acd-4693-a158-926565a16fe9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:11:20 compute-0 nova_compute[254092]: 2025-11-25 17:11:20.756 254096 DEBUG nova.virt.libvirt.vif [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:11:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-318572700',display_name='tempest-TestGettingAddress-server-318572700',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-318572700',id=135,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG7thEmoyKozSV70fSmXIKBzdgJECHN3GnSKzpZMf9In3z/LeK0Fm0kaAtX5kv64CasJMe/8BiJ7VhFgmrFP6y7poqA3BuRl/GzlLwom1v+Q2Cb33Se+uUu6cs+H/0PS7w==',key_name='tempest-TestGettingAddress-844897793',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-6ty6vm7r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:11:16Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=2f4d2580-5acd-4693-a158-926565a16fe9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0d520bf2-3b97-484c-9046-b614ea281b88", "address": "fa:16:3e:fa:3c:b2", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefa:3cb2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d520bf2-3b", "ovs_interfaceid": "0d520bf2-3b97-484c-9046-b614ea281b88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:11:20 compute-0 nova_compute[254092]: 2025-11-25 17:11:20.756 254096 DEBUG nova.network.os_vif_util [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "0d520bf2-3b97-484c-9046-b614ea281b88", "address": "fa:16:3e:fa:3c:b2", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefa:3cb2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d520bf2-3b", "ovs_interfaceid": "0d520bf2-3b97-484c-9046-b614ea281b88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:11:20 compute-0 nova_compute[254092]: 2025-11-25 17:11:20.757 254096 DEBUG nova.network.os_vif_util [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fa:3c:b2,bridge_name='br-int',has_traffic_filtering=True,id=0d520bf2-3b97-484c-9046-b614ea281b88,network=Network(0bd20b9f-24bc-4728-90ff-2870bebbf3f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d520bf2-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:11:20 compute-0 nova_compute[254092]: 2025-11-25 17:11:20.757 254096 DEBUG os_vif [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:3c:b2,bridge_name='br-int',has_traffic_filtering=True,id=0d520bf2-3b97-484c-9046-b614ea281b88,network=Network(0bd20b9f-24bc-4728-90ff-2870bebbf3f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d520bf2-3b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:11:20 compute-0 nova_compute[254092]: 2025-11-25 17:11:20.757 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:11:20 compute-0 nova_compute[254092]: 2025-11-25 17:11:20.758 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:11:20 compute-0 nova_compute[254092]: 2025-11-25 17:11:20.758 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:11:20 compute-0 nova_compute[254092]: 2025-11-25 17:11:20.762 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:11:20 compute-0 nova_compute[254092]: 2025-11-25 17:11:20.762 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0d520bf2-3b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:11:20 compute-0 nova_compute[254092]: 2025-11-25 17:11:20.763 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0d520bf2-3b, col_values=(('external_ids', {'iface-id': '0d520bf2-3b97-484c-9046-b614ea281b88', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fa:3c:b2', 'vm-uuid': '2f4d2580-5acd-4693-a158-926565a16fe9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:11:20 compute-0 nova_compute[254092]: 2025-11-25 17:11:20.766 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:11:20 compute-0 NetworkManager[48891]: <info>  [1764090680.7677] manager: (tap0d520bf2-3b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/575)
Nov 25 17:11:20 compute-0 nova_compute[254092]: 2025-11-25 17:11:20.770 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:11:20 compute-0 nova_compute[254092]: 2025-11-25 17:11:20.772 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:11:20 compute-0 nova_compute[254092]: 2025-11-25 17:11:20.772 254096 INFO os_vif [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:3c:b2,bridge_name='br-int',has_traffic_filtering=True,id=0d520bf2-3b97-484c-9046-b614ea281b88,network=Network(0bd20b9f-24bc-4728-90ff-2870bebbf3f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d520bf2-3b')
Nov 25 17:11:20 compute-0 nova_compute[254092]: 2025-11-25 17:11:20.822 254096 DEBUG nova.virt.libvirt.driver [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:11:20 compute-0 nova_compute[254092]: 2025-11-25 17:11:20.822 254096 DEBUG nova.virt.libvirt.driver [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:11:20 compute-0 nova_compute[254092]: 2025-11-25 17:11:20.822 254096 DEBUG nova.virt.libvirt.driver [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No VIF found with MAC fa:16:3e:fa:3c:b2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:11:20 compute-0 nova_compute[254092]: 2025-11-25 17:11:20.823 254096 INFO nova.virt.libvirt.driver [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Using config drive
Nov 25 17:11:20 compute-0 nova_compute[254092]: 2025-11-25 17:11:20.848 254096 DEBUG nova.storage.rbd_utils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 2f4d2580-5acd-4693-a158-926565a16fe9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:11:21 compute-0 nova_compute[254092]: 2025-11-25 17:11:21.234 254096 INFO nova.virt.libvirt.driver [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Creating config drive at /var/lib/nova/instances/2f4d2580-5acd-4693-a158-926565a16fe9/disk.config
Nov 25 17:11:21 compute-0 nova_compute[254092]: 2025-11-25 17:11:21.240 254096 DEBUG oslo_concurrency.processutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2f4d2580-5acd-4693-a158-926565a16fe9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzl60rrde execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:11:21 compute-0 nova_compute[254092]: 2025-11-25 17:11:21.280 254096 DEBUG nova.network.neutron [req-550e6dce-294e-4982-b4ae-7b4f1501c235 req-43271f2e-f8ab-42f7-aef7-7fde5b85a01c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Updated VIF entry in instance network info cache for port 0d520bf2-3b97-484c-9046-b614ea281b88. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:11:21 compute-0 nova_compute[254092]: 2025-11-25 17:11:21.281 254096 DEBUG nova.network.neutron [req-550e6dce-294e-4982-b4ae-7b4f1501c235 req-43271f2e-f8ab-42f7-aef7-7fde5b85a01c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Updating instance_info_cache with network_info: [{"id": "0d520bf2-3b97-484c-9046-b614ea281b88", "address": "fa:16:3e:fa:3c:b2", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefa:3cb2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d520bf2-3b", "ovs_interfaceid": "0d520bf2-3b97-484c-9046-b614ea281b88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:11:21 compute-0 nova_compute[254092]: 2025-11-25 17:11:21.303 254096 DEBUG oslo_concurrency.lockutils [req-550e6dce-294e-4982-b4ae-7b4f1501c235 req-43271f2e-f8ab-42f7-aef7-7fde5b85a01c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-2f4d2580-5acd-4693-a158-926565a16fe9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:11:21 compute-0 nova_compute[254092]: 2025-11-25 17:11:21.385 254096 DEBUG oslo_concurrency.processutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2f4d2580-5acd-4693-a158-926565a16fe9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzl60rrde" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:11:21 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/720976347' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:11:21 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2822169432' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:11:21 compute-0 nova_compute[254092]: 2025-11-25 17:11:21.433 254096 DEBUG nova.storage.rbd_utils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 2f4d2580-5acd-4693-a158-926565a16fe9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:11:21 compute-0 nova_compute[254092]: 2025-11-25 17:11:21.438 254096 DEBUG oslo_concurrency.processutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2f4d2580-5acd-4693-a158-926565a16fe9/disk.config 2f4d2580-5acd-4693-a158-926565a16fe9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:11:22 compute-0 ceph-mon[74985]: pgmap v2634: 321 pgs: 321 active+clean; 75 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.7 MiB/s wr, 25 op/s
Nov 25 17:11:22 compute-0 nova_compute[254092]: 2025-11-25 17:11:22.512 254096 DEBUG oslo_concurrency.processutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2f4d2580-5acd-4693-a158-926565a16fe9/disk.config 2f4d2580-5acd-4693-a158-926565a16fe9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:11:22 compute-0 nova_compute[254092]: 2025-11-25 17:11:22.513 254096 INFO nova.virt.libvirt.driver [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Deleting local config drive /var/lib/nova/instances/2f4d2580-5acd-4693-a158-926565a16fe9/disk.config because it was imported into RBD.
Nov 25 17:11:22 compute-0 kernel: tap0d520bf2-3b: entered promiscuous mode
Nov 25 17:11:22 compute-0 NetworkManager[48891]: <info>  [1764090682.5690] manager: (tap0d520bf2-3b): new Tun device (/org/freedesktop/NetworkManager/Devices/576)
Nov 25 17:11:22 compute-0 nova_compute[254092]: 2025-11-25 17:11:22.569 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:11:22 compute-0 ovn_controller[153477]: 2025-11-25T17:11:22Z|01404|binding|INFO|Claiming lport 0d520bf2-3b97-484c-9046-b614ea281b88 for this chassis.
Nov 25 17:11:22 compute-0 ovn_controller[153477]: 2025-11-25T17:11:22Z|01405|binding|INFO|0d520bf2-3b97-484c-9046-b614ea281b88: Claiming fa:16:3e:fa:3c:b2 10.100.0.11 2001:db8::f816:3eff:fefa:3cb2
Nov 25 17:11:22 compute-0 nova_compute[254092]: 2025-11-25 17:11:22.578 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:11:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:11:22.584 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fa:3c:b2 10.100.0.11 2001:db8::f816:3eff:fefa:3cb2'], port_security=['fa:16:3e:fa:3c:b2 10.100.0.11 2001:db8::f816:3eff:fefa:3cb2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28 2001:db8::f816:3eff:fefa:3cb2/64', 'neutron:device_id': '2f4d2580-5acd-4693-a158-926565a16fe9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bd20b9f-24bc-4728-90ff-2870bebbf3f9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1084723a-8c1e-4696-9b12-4faefe06b5c9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2394ea31-774c-4cab-8186-73ed87450ca6, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=0d520bf2-3b97-484c-9046-b614ea281b88) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:11:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:11:22.585 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 0d520bf2-3b97-484c-9046-b614ea281b88 in datapath 0bd20b9f-24bc-4728-90ff-2870bebbf3f9 bound to our chassis
Nov 25 17:11:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:11:22.587 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0bd20b9f-24bc-4728-90ff-2870bebbf3f9
Nov 25 17:11:22 compute-0 systemd-udevd[401332]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:11:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:11:22.600 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[00a511e0-9dd3-46db-a91e-bafe5cd2e4b0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:11:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:11:22.601 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0bd20b9f-21 in ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 17:11:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:11:22.603 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0bd20b9f-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 17:11:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:11:22.603 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a91b4b51-7b39-497f-a9c6-944fbe1e8825]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:11:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:11:22.604 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a8fec58a-376e-47f8-86e5-c2dbb079b27d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:11:22 compute-0 systemd-machined[216343]: New machine qemu-169-instance-00000087.
Nov 25 17:11:22 compute-0 NetworkManager[48891]: <info>  [1764090682.6102] device (tap0d520bf2-3b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:11:22 compute-0 NetworkManager[48891]: <info>  [1764090682.6113] device (tap0d520bf2-3b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:11:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:11:22.616 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[da13c04a-d04c-49ee-a2fa-76d202fc18bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:11:22 compute-0 systemd[1]: Started Virtual Machine qemu-169-instance-00000087.
Nov 25 17:11:22 compute-0 nova_compute[254092]: 2025-11-25 17:11:22.636 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:11:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:11:22.640 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c01c3018-cacf-4ae6-a149-c0290428b95c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:11:22 compute-0 ovn_controller[153477]: 2025-11-25T17:11:22Z|01406|binding|INFO|Setting lport 0d520bf2-3b97-484c-9046-b614ea281b88 ovn-installed in OVS
Nov 25 17:11:22 compute-0 ovn_controller[153477]: 2025-11-25T17:11:22Z|01407|binding|INFO|Setting lport 0d520bf2-3b97-484c-9046-b614ea281b88 up in Southbound
Nov 25 17:11:22 compute-0 nova_compute[254092]: 2025-11-25 17:11:22.643 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:11:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:11:22.670 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[7c9fee1a-a99a-4090-bc30-420678ac1651]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:11:22 compute-0 NetworkManager[48891]: <info>  [1764090682.6787] manager: (tap0bd20b9f-20): new Veth device (/org/freedesktop/NetworkManager/Devices/577)
Nov 25 17:11:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:11:22.677 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cf759df5-cfe7-484d-b982-1c5521de4d44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:11:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2635: 321 pgs: 321 active+clean; 88 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:11:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:11:22.724 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6e2b3b22-7dff-4935-8e5a-89f3eaf8b839]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:11:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:11:22.730 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[dd7e3942-1b5c-4f80-9cbd-39c7fa3a05e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:11:22 compute-0 NetworkManager[48891]: <info>  [1764090682.7683] device (tap0bd20b9f-20): carrier: link connected
Nov 25 17:11:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:11:22.780 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[dec0e29f-6d65-4b76-8f0e-91b580f4909d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:11:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:11:22.801 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[64958853-419c-446c-99e9-b25496cfaf0a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0bd20b9f-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:5c:a4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 411], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 714035, 'reachable_time': 30796, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 401366, 'error': None, 'target': 'ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:11:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:11:22.825 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e492a416-6418-4917-88ed-a88b34d7a2c7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee8:5ca4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 714035, 'tstamp': 714035}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 401367, 'error': None, 'target': 'ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:11:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:11:22.852 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[58b9313f-a5cf-4088-a880-094c56fc2fd3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0bd20b9f-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:5c:a4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 411], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 714035, 'reachable_time': 30796, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 401368, 'error': None, 'target': 'ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:11:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:11:22.898 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[be80ea82-0345-47d8-8a59-7a7668c5550f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:11:22 compute-0 nova_compute[254092]: 2025-11-25 17:11:22.966 254096 DEBUG nova.compute.manager [req-37b8a53f-4544-4a92-a3c2-2f4a55835c4a req-a360de23-2efc-4186-91d8-d4e4da24cdcf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Received event network-vif-plugged-0d520bf2-3b97-484c-9046-b614ea281b88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:11:22 compute-0 nova_compute[254092]: 2025-11-25 17:11:22.966 254096 DEBUG oslo_concurrency.lockutils [req-37b8a53f-4544-4a92-a3c2-2f4a55835c4a req-a360de23-2efc-4186-91d8-d4e4da24cdcf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2f4d2580-5acd-4693-a158-926565a16fe9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:11:22 compute-0 nova_compute[254092]: 2025-11-25 17:11:22.966 254096 DEBUG oslo_concurrency.lockutils [req-37b8a53f-4544-4a92-a3c2-2f4a55835c4a req-a360de23-2efc-4186-91d8-d4e4da24cdcf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2f4d2580-5acd-4693-a158-926565a16fe9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:11:22 compute-0 nova_compute[254092]: 2025-11-25 17:11:22.967 254096 DEBUG oslo_concurrency.lockutils [req-37b8a53f-4544-4a92-a3c2-2f4a55835c4a req-a360de23-2efc-4186-91d8-d4e4da24cdcf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2f4d2580-5acd-4693-a158-926565a16fe9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:11:22 compute-0 nova_compute[254092]: 2025-11-25 17:11:22.967 254096 DEBUG nova.compute.manager [req-37b8a53f-4544-4a92-a3c2-2f4a55835c4a req-a360de23-2efc-4186-91d8-d4e4da24cdcf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Processing event network-vif-plugged-0d520bf2-3b97-484c-9046-b614ea281b88 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 17:11:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:11:22.974 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d256a949-817c-4625-848f-0d3454493b6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:11:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:11:22.977 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0bd20b9f-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:11:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:11:22.977 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:11:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:11:22.978 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0bd20b9f-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:11:22 compute-0 nova_compute[254092]: 2025-11-25 17:11:22.980 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:11:22 compute-0 NetworkManager[48891]: <info>  [1764090682.9812] manager: (tap0bd20b9f-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/578)
Nov 25 17:11:22 compute-0 kernel: tap0bd20b9f-20: entered promiscuous mode
Nov 25 17:11:22 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:11:22.983 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0bd20b9f-20, col_values=(('external_ids', {'iface-id': '6c3d14c5-0ace-4cc3-828c-1d4061ad3097'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:11:22 compute-0 ovn_controller[153477]: 2025-11-25T17:11:22Z|01408|binding|INFO|Releasing lport 6c3d14c5-0ace-4cc3-828c-1d4061ad3097 from this chassis (sb_readonly=0)
Nov 25 17:11:22 compute-0 nova_compute[254092]: 2025-11-25 17:11:22.999 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:11:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:11:23.000 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0bd20b9f-24bc-4728-90ff-2870bebbf3f9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0bd20b9f-24bc-4728-90ff-2870bebbf3f9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 17:11:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:11:23.001 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d8c4daa3-2a00-44bb-8c4a-5d915c7e010d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:11:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:11:23.003 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 17:11:23 compute-0 ovn_metadata_agent[163333]: global
Nov 25 17:11:23 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 17:11:23 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-0bd20b9f-24bc-4728-90ff-2870bebbf3f9
Nov 25 17:11:23 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 17:11:23 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 17:11:23 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 17:11:23 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/0bd20b9f-24bc-4728-90ff-2870bebbf3f9.pid.haproxy
Nov 25 17:11:23 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 17:11:23 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:11:23 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 17:11:23 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 17:11:23 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 17:11:23 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 17:11:23 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 17:11:23 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 17:11:23 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 17:11:23 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 17:11:23 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 17:11:23 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 17:11:23 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 17:11:23 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 17:11:23 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 17:11:23 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:11:23 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:11:23 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 17:11:23 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 17:11:23 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 17:11:23 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 0bd20b9f-24bc-4728-90ff-2870bebbf3f9
Nov 25 17:11:23 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 17:11:23 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:11:23.003 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9', 'env', 'PROCESS_TAG=haproxy-0bd20b9f-24bc-4728-90ff-2870bebbf3f9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0bd20b9f-24bc-4728-90ff-2870bebbf3f9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 17:11:23 compute-0 nova_compute[254092]: 2025-11-25 17:11:23.395 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090683.3952062, 2f4d2580-5acd-4693-a158-926565a16fe9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:11:23 compute-0 nova_compute[254092]: 2025-11-25 17:11:23.396 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] VM Started (Lifecycle Event)
Nov 25 17:11:23 compute-0 nova_compute[254092]: 2025-11-25 17:11:23.398 254096 DEBUG nova.compute.manager [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 17:11:23 compute-0 nova_compute[254092]: 2025-11-25 17:11:23.401 254096 DEBUG nova.virt.libvirt.driver [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 17:11:23 compute-0 nova_compute[254092]: 2025-11-25 17:11:23.405 254096 INFO nova.virt.libvirt.driver [-] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Instance spawned successfully.
Nov 25 17:11:23 compute-0 nova_compute[254092]: 2025-11-25 17:11:23.405 254096 DEBUG nova.virt.libvirt.driver [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 17:11:23 compute-0 nova_compute[254092]: 2025-11-25 17:11:23.416 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:11:23 compute-0 nova_compute[254092]: 2025-11-25 17:11:23.423 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:11:23 compute-0 nova_compute[254092]: 2025-11-25 17:11:23.431 254096 DEBUG nova.virt.libvirt.driver [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:11:23 compute-0 nova_compute[254092]: 2025-11-25 17:11:23.432 254096 DEBUG nova.virt.libvirt.driver [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:11:23 compute-0 nova_compute[254092]: 2025-11-25 17:11:23.433 254096 DEBUG nova.virt.libvirt.driver [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:11:23 compute-0 nova_compute[254092]: 2025-11-25 17:11:23.434 254096 DEBUG nova.virt.libvirt.driver [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:11:23 compute-0 nova_compute[254092]: 2025-11-25 17:11:23.434 254096 DEBUG nova.virt.libvirt.driver [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:11:23 compute-0 nova_compute[254092]: 2025-11-25 17:11:23.435 254096 DEBUG nova.virt.libvirt.driver [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:11:23 compute-0 nova_compute[254092]: 2025-11-25 17:11:23.442 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:11:23 compute-0 nova_compute[254092]: 2025-11-25 17:11:23.443 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090683.3963647, 2f4d2580-5acd-4693-a158-926565a16fe9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:11:23 compute-0 nova_compute[254092]: 2025-11-25 17:11:23.444 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] VM Paused (Lifecycle Event)
Nov 25 17:11:23 compute-0 podman[401441]: 2025-11-25 17:11:23.357455287 +0000 UTC m=+0.023035163 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 17:11:23 compute-0 nova_compute[254092]: 2025-11-25 17:11:23.463 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:11:23 compute-0 nova_compute[254092]: 2025-11-25 17:11:23.466 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090683.4003797, 2f4d2580-5acd-4693-a158-926565a16fe9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:11:23 compute-0 nova_compute[254092]: 2025-11-25 17:11:23.466 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] VM Resumed (Lifecycle Event)
Nov 25 17:11:23 compute-0 nova_compute[254092]: 2025-11-25 17:11:23.477 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:11:23 compute-0 nova_compute[254092]: 2025-11-25 17:11:23.479 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:11:23 compute-0 nova_compute[254092]: 2025-11-25 17:11:23.493 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:11:23 compute-0 nova_compute[254092]: 2025-11-25 17:11:23.559 254096 INFO nova.compute.manager [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Took 7.18 seconds to spawn the instance on the hypervisor.
Nov 25 17:11:23 compute-0 nova_compute[254092]: 2025-11-25 17:11:23.559 254096 DEBUG nova.compute.manager [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:11:23 compute-0 ceph-mon[74985]: pgmap v2635: 321 pgs: 321 active+clean; 88 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:11:23 compute-0 nova_compute[254092]: 2025-11-25 17:11:23.624 254096 INFO nova.compute.manager [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Took 8.16 seconds to build instance.
Nov 25 17:11:23 compute-0 podman[401441]: 2025-11-25 17:11:23.624213854 +0000 UTC m=+0.289793700 container create 3752d9501be308a3d4c6f6e8f97ba158e419318155395cef8a5e28ef5415043d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 17:11:23 compute-0 nova_compute[254092]: 2025-11-25 17:11:23.650 254096 DEBUG oslo_concurrency.lockutils [None req-fc77a2f8-7ea3-440d-b4b9-b0ef7d90a017 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "2f4d2580-5acd-4693-a158-926565a16fe9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.259s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:11:23 compute-0 systemd[1]: Started libpod-conmon-3752d9501be308a3d4c6f6e8f97ba158e419318155395cef8a5e28ef5415043d.scope.
Nov 25 17:11:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:11:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde7a71fe41d6784c5b2181a1c90baa9c83c45304d06a23693e4f44a74edde50/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 17:11:23 compute-0 podman[401441]: 2025-11-25 17:11:23.758823536 +0000 UTC m=+0.424403402 container init 3752d9501be308a3d4c6f6e8f97ba158e419318155395cef8a5e28ef5415043d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 17:11:23 compute-0 podman[401441]: 2025-11-25 17:11:23.766305421 +0000 UTC m=+0.431885267 container start 3752d9501be308a3d4c6f6e8f97ba158e419318155395cef8a5e28ef5415043d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 17:11:23 compute-0 neutron-haproxy-ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9[401458]: [NOTICE]   (401462) : New worker (401464) forked
Nov 25 17:11:23 compute-0 neutron-haproxy-ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9[401458]: [NOTICE]   (401462) : Loading success.
Nov 25 17:11:23 compute-0 sudo[401473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:11:23 compute-0 sudo[401473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:11:23 compute-0 sudo[401473]: pam_unix(sudo:session): session closed for user root
Nov 25 17:11:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:11:23 compute-0 sudo[401498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:11:23 compute-0 sudo[401498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:11:23 compute-0 sudo[401498]: pam_unix(sudo:session): session closed for user root
Nov 25 17:11:24 compute-0 sudo[401523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:11:24 compute-0 sudo[401523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:11:24 compute-0 sudo[401523]: pam_unix(sudo:session): session closed for user root
Nov 25 17:11:24 compute-0 sudo[401548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:11:24 compute-0 sudo[401548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:11:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2636: 321 pgs: 321 active+clean; 88 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:11:25 compute-0 sudo[401548]: pam_unix(sudo:session): session closed for user root
Nov 25 17:11:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:11:25 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:11:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:11:25 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:11:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:11:25 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:11:25 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 2921b2fd-cb0a-4993-bd4c-c4d8bd1e787f does not exist
Nov 25 17:11:25 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev ab607b29-99d7-45cd-acfe-9b955c1ad6ab does not exist
Nov 25 17:11:25 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 21c45035-0c76-4cb3-a241-468701f90808 does not exist
Nov 25 17:11:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:11:25 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:11:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:11:25 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:11:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:11:25 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:11:25 compute-0 nova_compute[254092]: 2025-11-25 17:11:25.101 254096 DEBUG nova.compute.manager [req-8f466618-27a7-40f8-b598-aebca3195a37 req-0509c23d-7a92-4daa-85f7-adc555b7c45a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Received event network-vif-plugged-0d520bf2-3b97-484c-9046-b614ea281b88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:11:25 compute-0 nova_compute[254092]: 2025-11-25 17:11:25.103 254096 DEBUG oslo_concurrency.lockutils [req-8f466618-27a7-40f8-b598-aebca3195a37 req-0509c23d-7a92-4daa-85f7-adc555b7c45a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2f4d2580-5acd-4693-a158-926565a16fe9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:11:25 compute-0 nova_compute[254092]: 2025-11-25 17:11:25.103 254096 DEBUG oslo_concurrency.lockutils [req-8f466618-27a7-40f8-b598-aebca3195a37 req-0509c23d-7a92-4daa-85f7-adc555b7c45a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2f4d2580-5acd-4693-a158-926565a16fe9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:11:25 compute-0 nova_compute[254092]: 2025-11-25 17:11:25.104 254096 DEBUG oslo_concurrency.lockutils [req-8f466618-27a7-40f8-b598-aebca3195a37 req-0509c23d-7a92-4daa-85f7-adc555b7c45a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2f4d2580-5acd-4693-a158-926565a16fe9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:11:25 compute-0 nova_compute[254092]: 2025-11-25 17:11:25.105 254096 DEBUG nova.compute.manager [req-8f466618-27a7-40f8-b598-aebca3195a37 req-0509c23d-7a92-4daa-85f7-adc555b7c45a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] No waiting events found dispatching network-vif-plugged-0d520bf2-3b97-484c-9046-b614ea281b88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:11:25 compute-0 nova_compute[254092]: 2025-11-25 17:11:25.105 254096 WARNING nova.compute.manager [req-8f466618-27a7-40f8-b598-aebca3195a37 req-0509c23d-7a92-4daa-85f7-adc555b7c45a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Received unexpected event network-vif-plugged-0d520bf2-3b97-484c-9046-b614ea281b88 for instance with vm_state active and task_state None.
Nov 25 17:11:25 compute-0 sudo[401604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:11:25 compute-0 sudo[401604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:11:25 compute-0 sudo[401604]: pam_unix(sudo:session): session closed for user root
Nov 25 17:11:25 compute-0 podman[401629]: 2025-11-25 17:11:25.245472902 +0000 UTC m=+0.063082442 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent)
Nov 25 17:11:25 compute-0 sudo[401650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:11:25 compute-0 sudo[401650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:11:25 compute-0 sudo[401650]: pam_unix(sudo:session): session closed for user root
Nov 25 17:11:25 compute-0 podman[401628]: 2025-11-25 17:11:25.281573961 +0000 UTC m=+0.098790190 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3)
Nov 25 17:11:25 compute-0 podman[401630]: 2025-11-25 17:11:25.29023357 +0000 UTC m=+0.102366529 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 25 17:11:25 compute-0 sudo[401710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:11:25 compute-0 sudo[401710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:11:25 compute-0 sudo[401710]: pam_unix(sudo:session): session closed for user root
Nov 25 17:11:25 compute-0 sudo[401739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:11:25 compute-0 sudo[401739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:11:25 compute-0 nova_compute[254092]: 2025-11-25 17:11:25.619 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:11:25 compute-0 nova_compute[254092]: 2025-11-25 17:11:25.766 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:11:25 compute-0 podman[401804]: 2025-11-25 17:11:25.796714221 +0000 UTC m=+0.062909276 container create fddc01681d285a34491b0c815f8edbbc896636cbfe5363684f2b39dc438ca381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:11:25 compute-0 ceph-mon[74985]: pgmap v2636: 321 pgs: 321 active+clean; 88 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:11:25 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:11:25 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:11:25 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:11:25 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:11:25 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:11:25 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:11:25 compute-0 podman[401804]: 2025-11-25 17:11:25.755935223 +0000 UTC m=+0.022130298 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:11:25 compute-0 systemd[1]: Started libpod-conmon-fddc01681d285a34491b0c815f8edbbc896636cbfe5363684f2b39dc438ca381.scope.
Nov 25 17:11:25 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:11:25 compute-0 podman[401804]: 2025-11-25 17:11:25.99539121 +0000 UTC m=+0.261586295 container init fddc01681d285a34491b0c815f8edbbc896636cbfe5363684f2b39dc438ca381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_napier, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:11:26 compute-0 podman[401804]: 2025-11-25 17:11:26.006106494 +0000 UTC m=+0.272301549 container start fddc01681d285a34491b0c815f8edbbc896636cbfe5363684f2b39dc438ca381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_napier, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 17:11:26 compute-0 gifted_napier[401820]: 167 167
Nov 25 17:11:26 compute-0 systemd[1]: libpod-fddc01681d285a34491b0c815f8edbbc896636cbfe5363684f2b39dc438ca381.scope: Deactivated successfully.
Nov 25 17:11:26 compute-0 podman[401804]: 2025-11-25 17:11:26.028288893 +0000 UTC m=+0.294483998 container attach fddc01681d285a34491b0c815f8edbbc896636cbfe5363684f2b39dc438ca381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_napier, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:11:26 compute-0 podman[401804]: 2025-11-25 17:11:26.02929448 +0000 UTC m=+0.295489555 container died fddc01681d285a34491b0c815f8edbbc896636cbfe5363684f2b39dc438ca381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_napier, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 17:11:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-be0216e74757715ccf89ea27e1a32bdd00b715589609e850eb5f9f2f9e6d27d0-merged.mount: Deactivated successfully.
Nov 25 17:11:26 compute-0 podman[401804]: 2025-11-25 17:11:26.219113267 +0000 UTC m=+0.485308332 container remove fddc01681d285a34491b0c815f8edbbc896636cbfe5363684f2b39dc438ca381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_napier, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 17:11:26 compute-0 systemd[1]: libpod-conmon-fddc01681d285a34491b0c815f8edbbc896636cbfe5363684f2b39dc438ca381.scope: Deactivated successfully.
Nov 25 17:11:26 compute-0 podman[401846]: 2025-11-25 17:11:26.370412836 +0000 UTC m=+0.020856762 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:11:26 compute-0 podman[401846]: 2025-11-25 17:11:26.482502341 +0000 UTC m=+0.132946247 container create 75bc67746a2a507c350dbb2f83874abbd75cda591fefdcfafe584d6234191575 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_jackson, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 17:11:26 compute-0 systemd[1]: Started libpod-conmon-75bc67746a2a507c350dbb2f83874abbd75cda591fefdcfafe584d6234191575.scope.
Nov 25 17:11:26 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:11:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54306743130cfb5ab7c1398553b69e8fae0c07d3a3db4ceae6668e1fae52de1d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:11:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54306743130cfb5ab7c1398553b69e8fae0c07d3a3db4ceae6668e1fae52de1d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:11:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54306743130cfb5ab7c1398553b69e8fae0c07d3a3db4ceae6668e1fae52de1d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:11:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54306743130cfb5ab7c1398553b69e8fae0c07d3a3db4ceae6668e1fae52de1d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:11:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54306743130cfb5ab7c1398553b69e8fae0c07d3a3db4ceae6668e1fae52de1d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:11:26 compute-0 podman[401846]: 2025-11-25 17:11:26.593577118 +0000 UTC m=+0.244021054 container init 75bc67746a2a507c350dbb2f83874abbd75cda591fefdcfafe584d6234191575 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:11:26 compute-0 podman[401846]: 2025-11-25 17:11:26.601420473 +0000 UTC m=+0.251864379 container start 75bc67746a2a507c350dbb2f83874abbd75cda591fefdcfafe584d6234191575 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_jackson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 17:11:26 compute-0 podman[401846]: 2025-11-25 17:11:26.612836666 +0000 UTC m=+0.263280592 container attach 75bc67746a2a507c350dbb2f83874abbd75cda591fefdcfafe584d6234191575 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_jackson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:11:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2637: 321 pgs: 321 active+clean; 88 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 17:11:26 compute-0 NetworkManager[48891]: <info>  [1764090686.7659] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/579)
Nov 25 17:11:26 compute-0 nova_compute[254092]: 2025-11-25 17:11:26.765 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:11:26 compute-0 NetworkManager[48891]: <info>  [1764090686.7691] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/580)
Nov 25 17:11:26 compute-0 ovn_controller[153477]: 2025-11-25T17:11:26Z|01409|binding|INFO|Releasing lport 6c3d14c5-0ace-4cc3-828c-1d4061ad3097 from this chassis (sb_readonly=0)
Nov 25 17:11:26 compute-0 nova_compute[254092]: 2025-11-25 17:11:26.792 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:11:26 compute-0 ovn_controller[153477]: 2025-11-25T17:11:26Z|01410|binding|INFO|Releasing lport 6c3d14c5-0ace-4cc3-828c-1d4061ad3097 from this chassis (sb_readonly=0)
Nov 25 17:11:26 compute-0 nova_compute[254092]: 2025-11-25 17:11:26.798 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:11:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:11:27.122 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:11:27 compute-0 nova_compute[254092]: 2025-11-25 17:11:27.123 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:11:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:11:27.124 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 17:11:27 compute-0 nova_compute[254092]: 2025-11-25 17:11:27.327 254096 DEBUG nova.compute.manager [req-d2e02dd4-e276-47c2-bc03-b8c8b2ad9a47 req-bb40e74d-07df-4f28-92d0-d6fefda7132a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Received event network-changed-0d520bf2-3b97-484c-9046-b614ea281b88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:11:27 compute-0 nova_compute[254092]: 2025-11-25 17:11:27.328 254096 DEBUG nova.compute.manager [req-d2e02dd4-e276-47c2-bc03-b8c8b2ad9a47 req-bb40e74d-07df-4f28-92d0-d6fefda7132a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Refreshing instance network info cache due to event network-changed-0d520bf2-3b97-484c-9046-b614ea281b88. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:11:27 compute-0 nova_compute[254092]: 2025-11-25 17:11:27.328 254096 DEBUG oslo_concurrency.lockutils [req-d2e02dd4-e276-47c2-bc03-b8c8b2ad9a47 req-bb40e74d-07df-4f28-92d0-d6fefda7132a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-2f4d2580-5acd-4693-a158-926565a16fe9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:11:27 compute-0 nova_compute[254092]: 2025-11-25 17:11:27.329 254096 DEBUG oslo_concurrency.lockutils [req-d2e02dd4-e276-47c2-bc03-b8c8b2ad9a47 req-bb40e74d-07df-4f28-92d0-d6fefda7132a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-2f4d2580-5acd-4693-a158-926565a16fe9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:11:27 compute-0 nova_compute[254092]: 2025-11-25 17:11:27.329 254096 DEBUG nova.network.neutron [req-d2e02dd4-e276-47c2-bc03-b8c8b2ad9a47 req-bb40e74d-07df-4f28-92d0-d6fefda7132a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Refreshing network info cache for port 0d520bf2-3b97-484c-9046-b614ea281b88 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:11:27 compute-0 sad_jackson[401863]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:11:27 compute-0 sad_jackson[401863]: --> relative data size: 1.0
Nov 25 17:11:27 compute-0 sad_jackson[401863]: --> All data devices are unavailable
Nov 25 17:11:27 compute-0 systemd[1]: libpod-75bc67746a2a507c350dbb2f83874abbd75cda591fefdcfafe584d6234191575.scope: Deactivated successfully.
Nov 25 17:11:27 compute-0 podman[401846]: 2025-11-25 17:11:27.645064018 +0000 UTC m=+1.295507924 container died 75bc67746a2a507c350dbb2f83874abbd75cda591fefdcfafe584d6234191575 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_jackson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:11:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-54306743130cfb5ab7c1398553b69e8fae0c07d3a3db4ceae6668e1fae52de1d-merged.mount: Deactivated successfully.
Nov 25 17:11:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2638: 321 pgs: 321 active+clean; 88 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 17:11:28 compute-0 ceph-mon[74985]: pgmap v2637: 321 pgs: 321 active+clean; 88 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 17:11:28 compute-0 podman[401846]: 2025-11-25 17:11:28.787114642 +0000 UTC m=+2.437558538 container remove 75bc67746a2a507c350dbb2f83874abbd75cda591fefdcfafe584d6234191575 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_jackson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Nov 25 17:11:28 compute-0 sudo[401739]: pam_unix(sudo:session): session closed for user root
Nov 25 17:11:28 compute-0 systemd[1]: libpod-conmon-75bc67746a2a507c350dbb2f83874abbd75cda591fefdcfafe584d6234191575.scope: Deactivated successfully.
Nov 25 17:11:28 compute-0 sudo[401906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:11:28 compute-0 sudo[401906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:11:28 compute-0 sudo[401906]: pam_unix(sudo:session): session closed for user root
Nov 25 17:11:28 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:11:28 compute-0 sudo[401932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:11:28 compute-0 sudo[401932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:11:28 compute-0 sudo[401932]: pam_unix(sudo:session): session closed for user root
Nov 25 17:11:29 compute-0 sudo[401957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:11:29 compute-0 sudo[401957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:11:29 compute-0 sudo[401957]: pam_unix(sudo:session): session closed for user root
Nov 25 17:11:29 compute-0 nova_compute[254092]: 2025-11-25 17:11:29.041 254096 DEBUG nova.network.neutron [req-d2e02dd4-e276-47c2-bc03-b8c8b2ad9a47 req-bb40e74d-07df-4f28-92d0-d6fefda7132a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Updated VIF entry in instance network info cache for port 0d520bf2-3b97-484c-9046-b614ea281b88. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:11:29 compute-0 nova_compute[254092]: 2025-11-25 17:11:29.043 254096 DEBUG nova.network.neutron [req-d2e02dd4-e276-47c2-bc03-b8c8b2ad9a47 req-bb40e74d-07df-4f28-92d0-d6fefda7132a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Updating instance_info_cache with network_info: [{"id": "0d520bf2-3b97-484c-9046-b614ea281b88", "address": "fa:16:3e:fa:3c:b2", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefa:3cb2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d520bf2-3b", "ovs_interfaceid": "0d520bf2-3b97-484c-9046-b614ea281b88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:11:29 compute-0 nova_compute[254092]: 2025-11-25 17:11:29.070 254096 DEBUG oslo_concurrency.lockutils [req-d2e02dd4-e276-47c2-bc03-b8c8b2ad9a47 req-bb40e74d-07df-4f28-92d0-d6fefda7132a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-2f4d2580-5acd-4693-a158-926565a16fe9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:11:29 compute-0 sudo[401982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:11:29 compute-0 sudo[401982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:11:29 compute-0 podman[402047]: 2025-11-25 17:11:29.468688706 +0000 UTC m=+0.047531015 container create c61c298980aac038491912ff06933f87a5b5dab564b3f3bd424acef56d0b009f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True)
Nov 25 17:11:29 compute-0 systemd[1]: Started libpod-conmon-c61c298980aac038491912ff06933f87a5b5dab564b3f3bd424acef56d0b009f.scope.
Nov 25 17:11:29 compute-0 podman[402047]: 2025-11-25 17:11:29.443209417 +0000 UTC m=+0.022051726 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:11:29 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:11:29 compute-0 podman[402047]: 2025-11-25 17:11:29.586512277 +0000 UTC m=+0.165354596 container init c61c298980aac038491912ff06933f87a5b5dab564b3f3bd424acef56d0b009f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_proskuriakova, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Nov 25 17:11:29 compute-0 podman[402047]: 2025-11-25 17:11:29.598164637 +0000 UTC m=+0.177006936 container start c61c298980aac038491912ff06933f87a5b5dab564b3f3bd424acef56d0b009f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 17:11:29 compute-0 great_proskuriakova[402063]: 167 167
Nov 25 17:11:29 compute-0 systemd[1]: libpod-c61c298980aac038491912ff06933f87a5b5dab564b3f3bd424acef56d0b009f.scope: Deactivated successfully.
Nov 25 17:11:29 compute-0 conmon[402063]: conmon c61c298980aac0384919 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c61c298980aac038491912ff06933f87a5b5dab564b3f3bd424acef56d0b009f.scope/container/memory.events
Nov 25 17:11:29 compute-0 podman[402047]: 2025-11-25 17:11:29.696702949 +0000 UTC m=+0.275545248 container attach c61c298980aac038491912ff06933f87a5b5dab564b3f3bd424acef56d0b009f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_proskuriakova, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 17:11:29 compute-0 podman[402047]: 2025-11-25 17:11:29.698358785 +0000 UTC m=+0.277201074 container died c61c298980aac038491912ff06933f87a5b5dab564b3f3bd424acef56d0b009f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_proskuriakova, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 17:11:29 compute-0 ceph-mon[74985]: pgmap v2638: 321 pgs: 321 active+clean; 88 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 17:11:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8e2ed412e7672d150f73c16f262fe6ec7d5ed81b81c0e13d4ca701a77caeea0-merged.mount: Deactivated successfully.
Nov 25 17:11:29 compute-0 podman[402047]: 2025-11-25 17:11:29.98146087 +0000 UTC m=+0.560303179 container remove c61c298980aac038491912ff06933f87a5b5dab564b3f3bd424acef56d0b009f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_proskuriakova, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:11:30 compute-0 systemd[1]: libpod-conmon-c61c298980aac038491912ff06933f87a5b5dab564b3f3bd424acef56d0b009f.scope: Deactivated successfully.
Nov 25 17:11:30 compute-0 podman[402087]: 2025-11-25 17:11:30.248309969 +0000 UTC m=+0.096695123 container create f3bcb7d7c11583806e6e58698cf014e23f443ec1e16c2e173fb456d464391288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lamport, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:11:30 compute-0 podman[402087]: 2025-11-25 17:11:30.18126156 +0000 UTC m=+0.029646674 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:11:30 compute-0 systemd[1]: Started libpod-conmon-f3bcb7d7c11583806e6e58698cf014e23f443ec1e16c2e173fb456d464391288.scope.
Nov 25 17:11:30 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:11:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eb7de072eb82b22a734c9bebe81cf90b5ae72ff9cd80d3903cc964028c6bae6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:11:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eb7de072eb82b22a734c9bebe81cf90b5ae72ff9cd80d3903cc964028c6bae6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:11:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eb7de072eb82b22a734c9bebe81cf90b5ae72ff9cd80d3903cc964028c6bae6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:11:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eb7de072eb82b22a734c9bebe81cf90b5ae72ff9cd80d3903cc964028c6bae6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:11:30 compute-0 podman[402087]: 2025-11-25 17:11:30.446894306 +0000 UTC m=+0.295279440 container init f3bcb7d7c11583806e6e58698cf014e23f443ec1e16c2e173fb456d464391288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lamport, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 17:11:30 compute-0 podman[402087]: 2025-11-25 17:11:30.453925979 +0000 UTC m=+0.302311093 container start f3bcb7d7c11583806e6e58698cf014e23f443ec1e16c2e173fb456d464391288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 17:11:30 compute-0 nova_compute[254092]: 2025-11-25 17:11:30.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:11:30 compute-0 podman[402087]: 2025-11-25 17:11:30.509077751 +0000 UTC m=+0.357462895 container attach f3bcb7d7c11583806e6e58698cf014e23f443ec1e16c2e173fb456d464391288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lamport, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 17:11:30 compute-0 nova_compute[254092]: 2025-11-25 17:11:30.622 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:11:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2639: 321 pgs: 321 active+clean; 88 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 17:11:30 compute-0 nova_compute[254092]: 2025-11-25 17:11:30.768 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:11:31 compute-0 busy_lamport[402104]: {
Nov 25 17:11:31 compute-0 busy_lamport[402104]:     "0": [
Nov 25 17:11:31 compute-0 busy_lamport[402104]:         {
Nov 25 17:11:31 compute-0 busy_lamport[402104]:             "devices": [
Nov 25 17:11:31 compute-0 busy_lamport[402104]:                 "/dev/loop3"
Nov 25 17:11:31 compute-0 busy_lamport[402104]:             ],
Nov 25 17:11:31 compute-0 busy_lamport[402104]:             "lv_name": "ceph_lv0",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:             "lv_size": "21470642176",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:             "name": "ceph_lv0",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:             "tags": {
Nov 25 17:11:31 compute-0 busy_lamport[402104]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:                 "ceph.cluster_name": "ceph",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:                 "ceph.crush_device_class": "",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:                 "ceph.encrypted": "0",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:                 "ceph.osd_id": "0",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:                 "ceph.type": "block",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:                 "ceph.vdo": "0"
Nov 25 17:11:31 compute-0 busy_lamport[402104]:             },
Nov 25 17:11:31 compute-0 busy_lamport[402104]:             "type": "block",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:             "vg_name": "ceph_vg0"
Nov 25 17:11:31 compute-0 busy_lamport[402104]:         }
Nov 25 17:11:31 compute-0 busy_lamport[402104]:     ],
Nov 25 17:11:31 compute-0 busy_lamport[402104]:     "1": [
Nov 25 17:11:31 compute-0 busy_lamport[402104]:         {
Nov 25 17:11:31 compute-0 busy_lamport[402104]:             "devices": [
Nov 25 17:11:31 compute-0 busy_lamport[402104]:                 "/dev/loop4"
Nov 25 17:11:31 compute-0 busy_lamport[402104]:             ],
Nov 25 17:11:31 compute-0 busy_lamport[402104]:             "lv_name": "ceph_lv1",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:             "lv_size": "21470642176",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:             "name": "ceph_lv1",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:             "tags": {
Nov 25 17:11:31 compute-0 busy_lamport[402104]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:                 "ceph.cluster_name": "ceph",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:                 "ceph.crush_device_class": "",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:                 "ceph.encrypted": "0",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:                 "ceph.osd_id": "1",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:                 "ceph.type": "block",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:                 "ceph.vdo": "0"
Nov 25 17:11:31 compute-0 busy_lamport[402104]:             },
Nov 25 17:11:31 compute-0 busy_lamport[402104]:             "type": "block",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:             "vg_name": "ceph_vg1"
Nov 25 17:11:31 compute-0 busy_lamport[402104]:         }
Nov 25 17:11:31 compute-0 busy_lamport[402104]:     ],
Nov 25 17:11:31 compute-0 busy_lamport[402104]:     "2": [
Nov 25 17:11:31 compute-0 busy_lamport[402104]:         {
Nov 25 17:11:31 compute-0 busy_lamport[402104]:             "devices": [
Nov 25 17:11:31 compute-0 busy_lamport[402104]:                 "/dev/loop5"
Nov 25 17:11:31 compute-0 busy_lamport[402104]:             ],
Nov 25 17:11:31 compute-0 busy_lamport[402104]:             "lv_name": "ceph_lv2",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:             "lv_size": "21470642176",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:             "name": "ceph_lv2",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:             "tags": {
Nov 25 17:11:31 compute-0 busy_lamport[402104]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:                 "ceph.cluster_name": "ceph",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:                 "ceph.crush_device_class": "",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:                 "ceph.encrypted": "0",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:                 "ceph.osd_id": "2",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:                 "ceph.type": "block",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:                 "ceph.vdo": "0"
Nov 25 17:11:31 compute-0 busy_lamport[402104]:             },
Nov 25 17:11:31 compute-0 busy_lamport[402104]:             "type": "block",
Nov 25 17:11:31 compute-0 busy_lamport[402104]:             "vg_name": "ceph_vg2"
Nov 25 17:11:31 compute-0 busy_lamport[402104]:         }
Nov 25 17:11:31 compute-0 busy_lamport[402104]:     ]
Nov 25 17:11:31 compute-0 busy_lamport[402104]: }
Nov 25 17:11:31 compute-0 systemd[1]: libpod-f3bcb7d7c11583806e6e58698cf014e23f443ec1e16c2e173fb456d464391288.scope: Deactivated successfully.
Nov 25 17:11:31 compute-0 podman[402087]: 2025-11-25 17:11:31.198228293 +0000 UTC m=+1.046613407 container died f3bcb7d7c11583806e6e58698cf014e23f443ec1e16c2e173fb456d464391288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lamport, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Nov 25 17:11:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-3eb7de072eb82b22a734c9bebe81cf90b5ae72ff9cd80d3903cc964028c6bae6-merged.mount: Deactivated successfully.
Nov 25 17:11:31 compute-0 podman[402087]: 2025-11-25 17:11:31.485304288 +0000 UTC m=+1.333689442 container remove f3bcb7d7c11583806e6e58698cf014e23f443ec1e16c2e173fb456d464391288 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lamport, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:11:31 compute-0 systemd[1]: libpod-conmon-f3bcb7d7c11583806e6e58698cf014e23f443ec1e16c2e173fb456d464391288.scope: Deactivated successfully.
Nov 25 17:11:31 compute-0 sudo[401982]: pam_unix(sudo:session): session closed for user root
Nov 25 17:11:31 compute-0 sudo[402127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:11:31 compute-0 sudo[402127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:11:31 compute-0 sudo[402127]: pam_unix(sudo:session): session closed for user root
Nov 25 17:11:31 compute-0 sudo[402152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:11:31 compute-0 sudo[402152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:11:31 compute-0 sudo[402152]: pam_unix(sudo:session): session closed for user root
Nov 25 17:11:31 compute-0 sudo[402177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:11:31 compute-0 sudo[402177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:11:31 compute-0 sudo[402177]: pam_unix(sudo:session): session closed for user root
Nov 25 17:11:31 compute-0 sudo[402202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:11:31 compute-0 sudo[402202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:11:32 compute-0 ceph-mon[74985]: pgmap v2639: 321 pgs: 321 active+clean; 88 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 17:11:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:11:32.126 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:11:32 compute-0 podman[402268]: 2025-11-25 17:11:32.133782024 +0000 UTC m=+0.079094450 container create 4827954ce971dbac5cc5ca37c5e53dc9223a91ac728f7042b4a518d4c3c081dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:11:32 compute-0 podman[402268]: 2025-11-25 17:11:32.074766215 +0000 UTC m=+0.020078661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:11:32 compute-0 systemd[1]: Started libpod-conmon-4827954ce971dbac5cc5ca37c5e53dc9223a91ac728f7042b4a518d4c3c081dc.scope.
Nov 25 17:11:32 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:11:32 compute-0 podman[402268]: 2025-11-25 17:11:32.296102976 +0000 UTC m=+0.241415442 container init 4827954ce971dbac5cc5ca37c5e53dc9223a91ac728f7042b4a518d4c3c081dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 17:11:32 compute-0 podman[402268]: 2025-11-25 17:11:32.303569221 +0000 UTC m=+0.248881647 container start 4827954ce971dbac5cc5ca37c5e53dc9223a91ac728f7042b4a518d4c3c081dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 17:11:32 compute-0 naughty_dubinsky[402285]: 167 167
Nov 25 17:11:32 compute-0 systemd[1]: libpod-4827954ce971dbac5cc5ca37c5e53dc9223a91ac728f7042b4a518d4c3c081dc.scope: Deactivated successfully.
Nov 25 17:11:32 compute-0 conmon[402285]: conmon 4827954ce971dbac5cc5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4827954ce971dbac5cc5ca37c5e53dc9223a91ac728f7042b4a518d4c3c081dc.scope/container/memory.events
Nov 25 17:11:32 compute-0 podman[402268]: 2025-11-25 17:11:32.4632101 +0000 UTC m=+0.408522546 container attach 4827954ce971dbac5cc5ca37c5e53dc9223a91ac728f7042b4a518d4c3c081dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dubinsky, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:11:32 compute-0 podman[402268]: 2025-11-25 17:11:32.464529416 +0000 UTC m=+0.409841842 container died 4827954ce971dbac5cc5ca37c5e53dc9223a91ac728f7042b4a518d4c3c081dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dubinsky, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:11:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-782353a02f82c42a089144d0c6221e036e2d130c7d22e8a6068d76d656e2a789-merged.mount: Deactivated successfully.
Nov 25 17:11:32 compute-0 podman[402268]: 2025-11-25 17:11:32.685843316 +0000 UTC m=+0.631155742 container remove 4827954ce971dbac5cc5ca37c5e53dc9223a91ac728f7042b4a518d4c3c081dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dubinsky, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 17:11:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2640: 321 pgs: 321 active+clean; 88 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 111 KiB/s wr, 74 op/s
Nov 25 17:11:32 compute-0 systemd[1]: libpod-conmon-4827954ce971dbac5cc5ca37c5e53dc9223a91ac728f7042b4a518d4c3c081dc.scope: Deactivated successfully.
Nov 25 17:11:32 compute-0 podman[402309]: 2025-11-25 17:11:32.841819694 +0000 UTC m=+0.026533938 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:11:32 compute-0 podman[402309]: 2025-11-25 17:11:32.956059238 +0000 UTC m=+0.140773452 container create e5cc8d74269120010b7ca4bbe079d1b05a07a49213b89e3458c1edb4d920c186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_williams, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 17:11:33 compute-0 systemd[1]: Started libpod-conmon-e5cc8d74269120010b7ca4bbe079d1b05a07a49213b89e3458c1edb4d920c186.scope.
Nov 25 17:11:33 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:11:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4f038a3f0ea4b827650e7879ef55be5e15b891a8899bea290af60fd5ccac0c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:11:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4f038a3f0ea4b827650e7879ef55be5e15b891a8899bea290af60fd5ccac0c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:11:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4f038a3f0ea4b827650e7879ef55be5e15b891a8899bea290af60fd5ccac0c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:11:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4f038a3f0ea4b827650e7879ef55be5e15b891a8899bea290af60fd5ccac0c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:11:33 compute-0 podman[402309]: 2025-11-25 17:11:33.110079062 +0000 UTC m=+0.294793276 container init e5cc8d74269120010b7ca4bbe079d1b05a07a49213b89e3458c1edb4d920c186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:11:33 compute-0 podman[402309]: 2025-11-25 17:11:33.120273632 +0000 UTC m=+0.304987856 container start e5cc8d74269120010b7ca4bbe079d1b05a07a49213b89e3458c1edb4d920c186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 17:11:33 compute-0 podman[402309]: 2025-11-25 17:11:33.135488029 +0000 UTC m=+0.320202263 container attach e5cc8d74269120010b7ca4bbe079d1b05a07a49213b89e3458c1edb4d920c186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_williams, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:11:33 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:11:34 compute-0 epic_williams[402327]: {
Nov 25 17:11:34 compute-0 epic_williams[402327]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:11:34 compute-0 epic_williams[402327]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:11:34 compute-0 epic_williams[402327]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:11:34 compute-0 epic_williams[402327]:         "osd_id": 1,
Nov 25 17:11:34 compute-0 epic_williams[402327]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:11:34 compute-0 epic_williams[402327]:         "type": "bluestore"
Nov 25 17:11:34 compute-0 epic_williams[402327]:     },
Nov 25 17:11:34 compute-0 epic_williams[402327]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:11:34 compute-0 epic_williams[402327]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:11:34 compute-0 epic_williams[402327]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:11:34 compute-0 epic_williams[402327]:         "osd_id": 2,
Nov 25 17:11:34 compute-0 epic_williams[402327]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:11:34 compute-0 epic_williams[402327]:         "type": "bluestore"
Nov 25 17:11:34 compute-0 epic_williams[402327]:     },
Nov 25 17:11:34 compute-0 epic_williams[402327]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:11:34 compute-0 epic_williams[402327]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:11:34 compute-0 epic_williams[402327]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:11:34 compute-0 epic_williams[402327]:         "osd_id": 0,
Nov 25 17:11:34 compute-0 epic_williams[402327]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:11:34 compute-0 epic_williams[402327]:         "type": "bluestore"
Nov 25 17:11:34 compute-0 epic_williams[402327]:     }
Nov 25 17:11:34 compute-0 epic_williams[402327]: }
Nov 25 17:11:34 compute-0 systemd[1]: libpod-e5cc8d74269120010b7ca4bbe079d1b05a07a49213b89e3458c1edb4d920c186.scope: Deactivated successfully.
Nov 25 17:11:34 compute-0 podman[402309]: 2025-11-25 17:11:34.154936831 +0000 UTC m=+1.339651045 container died e5cc8d74269120010b7ca4bbe079d1b05a07a49213b89e3458c1edb4d920c186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_williams, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 17:11:34 compute-0 systemd[1]: libpod-e5cc8d74269120010b7ca4bbe079d1b05a07a49213b89e3458c1edb4d920c186.scope: Consumed 1.040s CPU time.
Nov 25 17:11:34 compute-0 ceph-mon[74985]: pgmap v2640: 321 pgs: 321 active+clean; 88 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 111 KiB/s wr, 74 op/s
Nov 25 17:11:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4f038a3f0ea4b827650e7879ef55be5e15b891a8899bea290af60fd5ccac0c5-merged.mount: Deactivated successfully.
Nov 25 17:11:34 compute-0 podman[402309]: 2025-11-25 17:11:34.324676417 +0000 UTC m=+1.509390631 container remove e5cc8d74269120010b7ca4bbe079d1b05a07a49213b89e3458c1edb4d920c186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_williams, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 17:11:34 compute-0 sudo[402202]: pam_unix(sudo:session): session closed for user root
Nov 25 17:11:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:11:34 compute-0 systemd[1]: libpod-conmon-e5cc8d74269120010b7ca4bbe079d1b05a07a49213b89e3458c1edb4d920c186.scope: Deactivated successfully.
Nov 25 17:11:34 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:11:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:11:34 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:11:34 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev bf102b83-5738-44bb-abc9-4eb0cae743b9 does not exist
Nov 25 17:11:34 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev aa095296-2ee5-4337-97c4-843889be8525 does not exist
Nov 25 17:11:34 compute-0 sudo[402372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:11:34 compute-0 sudo[402372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:11:34 compute-0 sudo[402372]: pam_unix(sudo:session): session closed for user root
Nov 25 17:11:34 compute-0 sudo[402397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:11:34 compute-0 sudo[402397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:11:34 compute-0 sudo[402397]: pam_unix(sudo:session): session closed for user root
Nov 25 17:11:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2641: 321 pgs: 321 active+clean; 88 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:11:35 compute-0 nova_compute[254092]: 2025-11-25 17:11:35.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:11:35 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:11:35 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:11:35 compute-0 ceph-mon[74985]: pgmap v2641: 321 pgs: 321 active+clean; 88 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:11:35 compute-0 nova_compute[254092]: 2025-11-25 17:11:35.624 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:11:35 compute-0 nova_compute[254092]: 2025-11-25 17:11:35.772 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:11:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2642: 321 pgs: 321 active+clean; 92 MiB data, 927 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 456 KiB/s wr, 84 op/s
Nov 25 17:11:37 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 17:11:37 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 11K writes, 55K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
                                           Cumulative WAL: 11K writes, 11K syncs, 1.00 writes per sync, written: 0.07 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1360 writes, 6176 keys, 1360 commit groups, 1.0 writes per commit group, ingest: 8.67 MB, 0.01 MB/s
                                           Interval WAL: 1360 writes, 1360 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     29.9      2.15              0.21        37    0.058       0      0       0.0       0.0
                                             L6      1/0    8.02 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   4.5    110.5     92.6      3.12              0.82        36    0.087    220K    20K       0.0       0.0
                                            Sum      1/0    8.02 MB   0.0      0.3     0.1      0.3       0.3      0.1       0.0   5.5     65.5     67.0      5.27              1.03        73    0.072    220K    20K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.6    120.8    121.2      0.43              0.15        10    0.043     39K   2537       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0    110.5     92.6      3.12              0.82        36    0.087    220K    20K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     30.5      2.10              0.21        36    0.058       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.1      0.05              0.00         1    0.047       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.063, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.34 GB write, 0.07 MB/s write, 0.34 GB read, 0.07 MB/s read, 5.3 seconds
                                           Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563cc83831f0#2 capacity: 304.00 MB usage: 40.72 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000381 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2663,39.11 MB,12.8668%) FilterBlock(74,620.42 KB,0.199303%) IndexBlock(74,1.00 MB,0.329073%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 25 17:11:37 compute-0 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Nov 25 17:11:37 compute-0 ceph-mon[74985]: pgmap v2642: 321 pgs: 321 active+clean; 92 MiB data, 927 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 456 KiB/s wr, 84 op/s
Nov 25 17:11:38 compute-0 ovn_controller[153477]: 2025-11-25T17:11:38Z|00166|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fa:3c:b2 10.100.0.11
Nov 25 17:11:38 compute-0 ovn_controller[153477]: 2025-11-25T17:11:38Z|00167|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fa:3c:b2 10.100.0.11
Nov 25 17:11:38 compute-0 nova_compute[254092]: 2025-11-25 17:11:38.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:11:38 compute-0 nova_compute[254092]: 2025-11-25 17:11:38.498 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:11:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2643: 321 pgs: 321 active+clean; 92 MiB data, 927 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 443 KiB/s wr, 10 op/s
Nov 25 17:11:38 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:11:39 compute-0 ceph-mon[74985]: pgmap v2643: 321 pgs: 321 active+clean; 92 MiB data, 927 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 443 KiB/s wr, 10 op/s
Nov 25 17:11:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:11:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:11:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:11:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:11:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:11:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:11:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:11:40
Nov 25 17:11:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:11:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:11:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'default.rgw.log', '.mgr', 'backups', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'volumes']
Nov 25 17:11:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:11:40 compute-0 nova_compute[254092]: 2025-11-25 17:11:40.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:11:40 compute-0 nova_compute[254092]: 2025-11-25 17:11:40.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:11:40 compute-0 nova_compute[254092]: 2025-11-25 17:11:40.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:11:40 compute-0 nova_compute[254092]: 2025-11-25 17:11:40.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:11:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:11:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:11:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:11:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:11:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:11:40 compute-0 nova_compute[254092]: 2025-11-25 17:11:40.529 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:11:40 compute-0 nova_compute[254092]: 2025-11-25 17:11:40.530 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:11:40 compute-0 nova_compute[254092]: 2025-11-25 17:11:40.530 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:11:40 compute-0 nova_compute[254092]: 2025-11-25 17:11:40.530 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:11:40 compute-0 nova_compute[254092]: 2025-11-25 17:11:40.531 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:11:40 compute-0 nova_compute[254092]: 2025-11-25 17:11:40.627 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:11:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:11:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:11:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:11:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:11:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:11:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2644: 321 pgs: 321 active+clean; 119 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 170 KiB/s rd, 2.1 MiB/s wr, 42 op/s
Nov 25 17:11:40 compute-0 nova_compute[254092]: 2025-11-25 17:11:40.774 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:11:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:11:40 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1375603128' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:11:41 compute-0 nova_compute[254092]: 2025-11-25 17:11:41.017 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:11:41 compute-0 nova_compute[254092]: 2025-11-25 17:11:41.117 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:11:41 compute-0 nova_compute[254092]: 2025-11-25 17:11:41.118 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:11:41 compute-0 nova_compute[254092]: 2025-11-25 17:11:41.335 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:11:41 compute-0 nova_compute[254092]: 2025-11-25 17:11:41.336 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3457MB free_disk=59.94306182861328GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:11:41 compute-0 nova_compute[254092]: 2025-11-25 17:11:41.336 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:11:41 compute-0 nova_compute[254092]: 2025-11-25 17:11:41.336 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:11:41 compute-0 nova_compute[254092]: 2025-11-25 17:11:41.415 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 2f4d2580-5acd-4693-a158-926565a16fe9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 17:11:41 compute-0 nova_compute[254092]: 2025-11-25 17:11:41.416 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:11:41 compute-0 nova_compute[254092]: 2025-11-25 17:11:41.416 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:11:41 compute-0 nova_compute[254092]: 2025-11-25 17:11:41.459 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:11:41 compute-0 ceph-mon[74985]: pgmap v2644: 321 pgs: 321 active+clean; 119 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 170 KiB/s rd, 2.1 MiB/s wr, 42 op/s
Nov 25 17:11:41 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1375603128' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:11:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:11:41 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1027102479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:11:41 compute-0 nova_compute[254092]: 2025-11-25 17:11:41.993 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:11:41 compute-0 nova_compute[254092]: 2025-11-25 17:11:41.999 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:11:42 compute-0 nova_compute[254092]: 2025-11-25 17:11:42.014 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:11:42 compute-0 nova_compute[254092]: 2025-11-25 17:11:42.059 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:11:42 compute-0 nova_compute[254092]: 2025-11-25 17:11:42.059 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:11:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2645: 321 pgs: 321 active+clean; 121 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 25 17:11:42 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1027102479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:11:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:11:44 compute-0 ceph-mon[74985]: pgmap v2645: 321 pgs: 321 active+clean; 121 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 25 17:11:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2646: 321 pgs: 321 active+clean; 121 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 25 17:11:45 compute-0 nova_compute[254092]: 2025-11-25 17:11:45.630 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:11:45 compute-0 nova_compute[254092]: 2025-11-25 17:11:45.776 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:11:46 compute-0 ceph-mon[74985]: pgmap v2646: 321 pgs: 321 active+clean; 121 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 25 17:11:46 compute-0 nova_compute[254092]: 2025-11-25 17:11:46.062 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:11:46 compute-0 nova_compute[254092]: 2025-11-25 17:11:46.062 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:11:46 compute-0 nova_compute[254092]: 2025-11-25 17:11:46.062 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:11:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2647: 321 pgs: 321 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 25 17:11:46 compute-0 nova_compute[254092]: 2025-11-25 17:11:46.783 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-2f4d2580-5acd-4693-a158-926565a16fe9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:11:46 compute-0 nova_compute[254092]: 2025-11-25 17:11:46.784 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-2f4d2580-5acd-4693-a158-926565a16fe9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:11:46 compute-0 nova_compute[254092]: 2025-11-25 17:11:46.784 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 17:11:46 compute-0 nova_compute[254092]: 2025-11-25 17:11:46.784 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2f4d2580-5acd-4693-a158-926565a16fe9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:11:48 compute-0 ceph-mon[74985]: pgmap v2647: 321 pgs: 321 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 25 17:11:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2648: 321 pgs: 321 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 240 KiB/s rd, 1.7 MiB/s wr, 49 op/s
Nov 25 17:11:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:11:49 compute-0 nova_compute[254092]: 2025-11-25 17:11:49.156 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Updating instance_info_cache with network_info: [{"id": "0d520bf2-3b97-484c-9046-b614ea281b88", "address": "fa:16:3e:fa:3c:b2", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefa:3cb2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d520bf2-3b", "ovs_interfaceid": "0d520bf2-3b97-484c-9046-b614ea281b88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:11:49 compute-0 nova_compute[254092]: 2025-11-25 17:11:49.172 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-2f4d2580-5acd-4693-a158-926565a16fe9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:11:49 compute-0 nova_compute[254092]: 2025-11-25 17:11:49.172 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 17:11:49 compute-0 nova_compute[254092]: 2025-11-25 17:11:49.173 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:11:50 compute-0 ceph-mon[74985]: pgmap v2648: 321 pgs: 321 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 240 KiB/s rd, 1.7 MiB/s wr, 49 op/s
Nov 25 17:11:50 compute-0 nova_compute[254092]: 2025-11-25 17:11:50.632 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:11:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2649: 321 pgs: 321 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 240 KiB/s rd, 1.7 MiB/s wr, 49 op/s
Nov 25 17:11:50 compute-0 nova_compute[254092]: 2025-11-25 17:11:50.779 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:11:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:11:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:11:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:11:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:11:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007589550978381194 of space, bias 1.0, pg target 0.22768652935143582 quantized to 32 (current 32)
Nov 25 17:11:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:11:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:11:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:11:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:11:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:11:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 17:11:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:11:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:11:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:11:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:11:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:11:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:11:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:11:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:11:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:11:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:11:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:11:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:11:52 compute-0 ceph-mon[74985]: pgmap v2649: 321 pgs: 321 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 240 KiB/s rd, 1.7 MiB/s wr, 49 op/s
Nov 25 17:11:52 compute-0 nova_compute[254092]: 2025-11-25 17:11:52.602 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:11:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2650: 321 pgs: 321 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 99 KiB/s rd, 71 KiB/s wr, 17 op/s
Nov 25 17:11:53 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:11:54 compute-0 ceph-mon[74985]: pgmap v2650: 321 pgs: 321 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 99 KiB/s rd, 71 KiB/s wr, 17 op/s
Nov 25 17:11:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2651: 321 pgs: 321 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Nov 25 17:11:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:11:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3430701093' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:11:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:11:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3430701093' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:11:55 compute-0 nova_compute[254092]: 2025-11-25 17:11:55.634 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:11:55 compute-0 podman[402467]: 2025-11-25 17:11:55.673804876 +0000 UTC m=+0.086376629 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 17:11:55 compute-0 podman[402468]: 2025-11-25 17:11:55.676478179 +0000 UTC m=+0.072479818 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 25 17:11:55 compute-0 ceph-mon[74985]: pgmap v2651: 321 pgs: 321 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Nov 25 17:11:55 compute-0 podman[402469]: 2025-11-25 17:11:55.740736092 +0000 UTC m=+0.140367581 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 25 17:11:55 compute-0 nova_compute[254092]: 2025-11-25 17:11:55.781 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:11:56 compute-0 ovn_controller[153477]: 2025-11-25T17:11:56Z|01411|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 25 17:11:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/3430701093' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:11:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/3430701093' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:11:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2652: 321 pgs: 321 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Nov 25 17:11:57 compute-0 ceph-mon[74985]: pgmap v2652: 321 pgs: 321 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Nov 25 17:11:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2653: 321 pgs: 321 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:11:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:11:59 compute-0 ceph-mon[74985]: pgmap v2653: 321 pgs: 321 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:12:00 compute-0 nova_compute[254092]: 2025-11-25 17:12:00.638 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:12:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2654: 321 pgs: 321 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:12:00 compute-0 nova_compute[254092]: 2025-11-25 17:12:00.783 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:12:02 compute-0 ceph-mon[74985]: pgmap v2654: 321 pgs: 321 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:12:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2655: 321 pgs: 321 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:12:03 compute-0 nova_compute[254092]: 2025-11-25 17:12:03.055 254096 DEBUG oslo_concurrency.lockutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "a8194956-04fe-46d6-9b07-63486afc3c7d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:12:03 compute-0 nova_compute[254092]: 2025-11-25 17:12:03.056 254096 DEBUG oslo_concurrency.lockutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "a8194956-04fe-46d6-9b07-63486afc3c7d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:12:03 compute-0 nova_compute[254092]: 2025-11-25 17:12:03.072 254096 DEBUG nova.compute.manager [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 17:12:03 compute-0 nova_compute[254092]: 2025-11-25 17:12:03.156 254096 DEBUG oslo_concurrency.lockutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:12:03 compute-0 nova_compute[254092]: 2025-11-25 17:12:03.157 254096 DEBUG oslo_concurrency.lockutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:12:03 compute-0 nova_compute[254092]: 2025-11-25 17:12:03.166 254096 DEBUG nova.virt.hardware [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 17:12:03 compute-0 nova_compute[254092]: 2025-11-25 17:12:03.166 254096 INFO nova.compute.claims [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Claim successful on node compute-0.ctlplane.example.com
Nov 25 17:12:03 compute-0 nova_compute[254092]: 2025-11-25 17:12:03.266 254096 DEBUG oslo_concurrency.processutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:12:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:12:03 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/278741482' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:12:03 compute-0 nova_compute[254092]: 2025-11-25 17:12:03.791 254096 DEBUG oslo_concurrency.processutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:12:03 compute-0 nova_compute[254092]: 2025-11-25 17:12:03.799 254096 DEBUG nova.compute.provider_tree [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:12:03 compute-0 nova_compute[254092]: 2025-11-25 17:12:03.813 254096 DEBUG nova.scheduler.client.report [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:12:03 compute-0 nova_compute[254092]: 2025-11-25 17:12:03.834 254096 DEBUG oslo_concurrency.lockutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:12:03 compute-0 nova_compute[254092]: 2025-11-25 17:12:03.835 254096 DEBUG nova.compute.manager [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 17:12:03 compute-0 nova_compute[254092]: 2025-11-25 17:12:03.871 254096 DEBUG nova.compute.manager [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 17:12:03 compute-0 nova_compute[254092]: 2025-11-25 17:12:03.872 254096 DEBUG nova.network.neutron [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 17:12:03 compute-0 nova_compute[254092]: 2025-11-25 17:12:03.886 254096 INFO nova.virt.libvirt.driver [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 17:12:03 compute-0 nova_compute[254092]: 2025-11-25 17:12:03.899 254096 DEBUG nova.compute.manager [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 17:12:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:12:03 compute-0 nova_compute[254092]: 2025-11-25 17:12:03.981 254096 DEBUG nova.compute.manager [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 17:12:03 compute-0 nova_compute[254092]: 2025-11-25 17:12:03.983 254096 DEBUG nova.virt.libvirt.driver [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 17:12:03 compute-0 nova_compute[254092]: 2025-11-25 17:12:03.983 254096 INFO nova.virt.libvirt.driver [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Creating image(s)
Nov 25 17:12:04 compute-0 nova_compute[254092]: 2025-11-25 17:12:04.010 254096 DEBUG nova.storage.rbd_utils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image a8194956-04fe-46d6-9b07-63486afc3c7d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:12:04 compute-0 ceph-mon[74985]: pgmap v2655: 321 pgs: 321 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:12:04 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/278741482' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:12:04 compute-0 nova_compute[254092]: 2025-11-25 17:12:04.035 254096 DEBUG nova.storage.rbd_utils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image a8194956-04fe-46d6-9b07-63486afc3c7d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:12:04 compute-0 nova_compute[254092]: 2025-11-25 17:12:04.058 254096 DEBUG nova.storage.rbd_utils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image a8194956-04fe-46d6-9b07-63486afc3c7d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:12:04 compute-0 nova_compute[254092]: 2025-11-25 17:12:04.062 254096 DEBUG oslo_concurrency.processutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:12:04 compute-0 nova_compute[254092]: 2025-11-25 17:12:04.112 254096 DEBUG nova.policy [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a647bb22c9a54e5fa9ddfff588126693', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '67b7f8e8aac6441f992a182f8d893100', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 17:12:04 compute-0 nova_compute[254092]: 2025-11-25 17:12:04.163 254096 DEBUG oslo_concurrency.processutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:12:04 compute-0 nova_compute[254092]: 2025-11-25 17:12:04.164 254096 DEBUG oslo_concurrency.lockutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:12:04 compute-0 nova_compute[254092]: 2025-11-25 17:12:04.165 254096 DEBUG oslo_concurrency.lockutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:12:04 compute-0 nova_compute[254092]: 2025-11-25 17:12:04.165 254096 DEBUG oslo_concurrency.lockutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:12:04 compute-0 nova_compute[254092]: 2025-11-25 17:12:04.193 254096 DEBUG nova.storage.rbd_utils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image a8194956-04fe-46d6-9b07-63486afc3c7d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:12:04 compute-0 nova_compute[254092]: 2025-11-25 17:12:04.197 254096 DEBUG oslo_concurrency.processutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 a8194956-04fe-46d6-9b07-63486afc3c7d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:12:04 compute-0 nova_compute[254092]: 2025-11-25 17:12:04.502 254096 DEBUG oslo_concurrency.processutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 a8194956-04fe-46d6-9b07-63486afc3c7d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.305s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:12:04 compute-0 nova_compute[254092]: 2025-11-25 17:12:04.572 254096 DEBUG nova.storage.rbd_utils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] resizing rbd image a8194956-04fe-46d6-9b07-63486afc3c7d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 17:12:04 compute-0 nova_compute[254092]: 2025-11-25 17:12:04.672 254096 DEBUG nova.objects.instance [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'migration_context' on Instance uuid a8194956-04fe-46d6-9b07-63486afc3c7d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:12:04 compute-0 nova_compute[254092]: 2025-11-25 17:12:04.701 254096 DEBUG nova.virt.libvirt.driver [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 17:12:04 compute-0 nova_compute[254092]: 2025-11-25 17:12:04.701 254096 DEBUG nova.virt.libvirt.driver [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Ensure instance console log exists: /var/lib/nova/instances/a8194956-04fe-46d6-9b07-63486afc3c7d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 17:12:04 compute-0 nova_compute[254092]: 2025-11-25 17:12:04.702 254096 DEBUG oslo_concurrency.lockutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:12:04 compute-0 nova_compute[254092]: 2025-11-25 17:12:04.702 254096 DEBUG oslo_concurrency.lockutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:12:04 compute-0 nova_compute[254092]: 2025-11-25 17:12:04.703 254096 DEBUG oslo_concurrency.lockutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:12:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2656: 321 pgs: 321 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:12:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:12:04.831 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:12:04 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:12:04.833 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 17:12:04 compute-0 nova_compute[254092]: 2025-11-25 17:12:04.832 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:12:04 compute-0 nova_compute[254092]: 2025-11-25 17:12:04.973 254096 DEBUG nova.network.neutron [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Successfully created port: 44750fec-28de-4cf5-8393-333676bb4fed _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 17:12:05 compute-0 nova_compute[254092]: 2025-11-25 17:12:05.641 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:12:05 compute-0 nova_compute[254092]: 2025-11-25 17:12:05.652 254096 DEBUG nova.network.neutron [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Successfully updated port: 44750fec-28de-4cf5-8393-333676bb4fed _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:12:05 compute-0 nova_compute[254092]: 2025-11-25 17:12:05.670 254096 DEBUG oslo_concurrency.lockutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "refresh_cache-a8194956-04fe-46d6-9b07-63486afc3c7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:12:05 compute-0 nova_compute[254092]: 2025-11-25 17:12:05.670 254096 DEBUG oslo_concurrency.lockutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquired lock "refresh_cache-a8194956-04fe-46d6-9b07-63486afc3c7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:12:05 compute-0 nova_compute[254092]: 2025-11-25 17:12:05.671 254096 DEBUG nova.network.neutron [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 17:12:05 compute-0 nova_compute[254092]: 2025-11-25 17:12:05.761 254096 DEBUG nova.compute.manager [req-960a46e8-5fbe-4dc1-b0aa-b0a6ef97a10d req-fe249309-e084-474c-9a4b-994f990ecb90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Received event network-changed-44750fec-28de-4cf5-8393-333676bb4fed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:12:05 compute-0 nova_compute[254092]: 2025-11-25 17:12:05.762 254096 DEBUG nova.compute.manager [req-960a46e8-5fbe-4dc1-b0aa-b0a6ef97a10d req-fe249309-e084-474c-9a4b-994f990ecb90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Refreshing instance network info cache due to event network-changed-44750fec-28de-4cf5-8393-333676bb4fed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:12:05 compute-0 nova_compute[254092]: 2025-11-25 17:12:05.762 254096 DEBUG oslo_concurrency.lockutils [req-960a46e8-5fbe-4dc1-b0aa-b0a6ef97a10d req-fe249309-e084-474c-9a4b-994f990ecb90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-a8194956-04fe-46d6-9b07-63486afc3c7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:12:05 compute-0 nova_compute[254092]: 2025-11-25 17:12:05.785 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:12:06 compute-0 ceph-mon[74985]: pgmap v2656: 321 pgs: 321 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:12:06 compute-0 nova_compute[254092]: 2025-11-25 17:12:06.256 254096 DEBUG nova.network.neutron [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:12:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2657: 321 pgs: 321 active+clean; 167 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:12:07 compute-0 nova_compute[254092]: 2025-11-25 17:12:07.933 254096 DEBUG nova.network.neutron [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Updating instance_info_cache with network_info: [{"id": "44750fec-28de-4cf5-8393-333676bb4fed", "address": "fa:16:3e:23:af:6f", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe23:af6f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44750fec-28", "ovs_interfaceid": "44750fec-28de-4cf5-8393-333676bb4fed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:12:07 compute-0 nova_compute[254092]: 2025-11-25 17:12:07.954 254096 DEBUG oslo_concurrency.lockutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Releasing lock "refresh_cache-a8194956-04fe-46d6-9b07-63486afc3c7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:12:07 compute-0 nova_compute[254092]: 2025-11-25 17:12:07.954 254096 DEBUG nova.compute.manager [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Instance network_info: |[{"id": "44750fec-28de-4cf5-8393-333676bb4fed", "address": "fa:16:3e:23:af:6f", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe23:af6f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44750fec-28", "ovs_interfaceid": "44750fec-28de-4cf5-8393-333676bb4fed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 17:12:07 compute-0 nova_compute[254092]: 2025-11-25 17:12:07.955 254096 DEBUG oslo_concurrency.lockutils [req-960a46e8-5fbe-4dc1-b0aa-b0a6ef97a10d req-fe249309-e084-474c-9a4b-994f990ecb90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-a8194956-04fe-46d6-9b07-63486afc3c7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:12:07 compute-0 nova_compute[254092]: 2025-11-25 17:12:07.955 254096 DEBUG nova.network.neutron [req-960a46e8-5fbe-4dc1-b0aa-b0a6ef97a10d req-fe249309-e084-474c-9a4b-994f990ecb90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Refreshing network info cache for port 44750fec-28de-4cf5-8393-333676bb4fed _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:12:07 compute-0 nova_compute[254092]: 2025-11-25 17:12:07.959 254096 DEBUG nova.virt.libvirt.driver [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Start _get_guest_xml network_info=[{"id": "44750fec-28de-4cf5-8393-333676bb4fed", "address": "fa:16:3e:23:af:6f", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe23:af6f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44750fec-28", "ovs_interfaceid": "44750fec-28de-4cf5-8393-333676bb4fed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 17:12:07 compute-0 nova_compute[254092]: 2025-11-25 17:12:07.963 254096 WARNING nova.virt.libvirt.driver [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:12:07 compute-0 nova_compute[254092]: 2025-11-25 17:12:07.972 254096 DEBUG nova.virt.libvirt.host [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 17:12:07 compute-0 nova_compute[254092]: 2025-11-25 17:12:07.972 254096 DEBUG nova.virt.libvirt.host [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 17:12:07 compute-0 nova_compute[254092]: 2025-11-25 17:12:07.980 254096 DEBUG nova.virt.libvirt.host [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 17:12:07 compute-0 nova_compute[254092]: 2025-11-25 17:12:07.981 254096 DEBUG nova.virt.libvirt.host [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 17:12:07 compute-0 nova_compute[254092]: 2025-11-25 17:12:07.981 254096 DEBUG nova.virt.libvirt.driver [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 17:12:07 compute-0 nova_compute[254092]: 2025-11-25 17:12:07.981 254096 DEBUG nova.virt.hardware [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 17:12:07 compute-0 nova_compute[254092]: 2025-11-25 17:12:07.982 254096 DEBUG nova.virt.hardware [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 17:12:07 compute-0 nova_compute[254092]: 2025-11-25 17:12:07.982 254096 DEBUG nova.virt.hardware [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 17:12:07 compute-0 nova_compute[254092]: 2025-11-25 17:12:07.983 254096 DEBUG nova.virt.hardware [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 17:12:07 compute-0 nova_compute[254092]: 2025-11-25 17:12:07.983 254096 DEBUG nova.virt.hardware [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 17:12:07 compute-0 nova_compute[254092]: 2025-11-25 17:12:07.983 254096 DEBUG nova.virt.hardware [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 17:12:07 compute-0 nova_compute[254092]: 2025-11-25 17:12:07.984 254096 DEBUG nova.virt.hardware [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 17:12:07 compute-0 nova_compute[254092]: 2025-11-25 17:12:07.984 254096 DEBUG nova.virt.hardware [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 17:12:07 compute-0 nova_compute[254092]: 2025-11-25 17:12:07.984 254096 DEBUG nova.virt.hardware [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 17:12:07 compute-0 nova_compute[254092]: 2025-11-25 17:12:07.985 254096 DEBUG nova.virt.hardware [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 17:12:07 compute-0 nova_compute[254092]: 2025-11-25 17:12:07.985 254096 DEBUG nova.virt.hardware [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 17:12:07 compute-0 nova_compute[254092]: 2025-11-25 17:12:07.988 254096 DEBUG oslo_concurrency.processutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:12:08 compute-0 ceph-mon[74985]: pgmap v2657: 321 pgs: 321 active+clean; 167 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:12:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:12:08 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1789845615' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:12:08 compute-0 nova_compute[254092]: 2025-11-25 17:12:08.459 254096 DEBUG oslo_concurrency.processutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:12:08 compute-0 nova_compute[254092]: 2025-11-25 17:12:08.480 254096 DEBUG nova.storage.rbd_utils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image a8194956-04fe-46d6-9b07-63486afc3c7d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:12:08 compute-0 nova_compute[254092]: 2025-11-25 17:12:08.484 254096 DEBUG oslo_concurrency.processutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:12:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2658: 321 pgs: 321 active+clean; 167 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:12:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:12:08 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/738469575' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:12:08 compute-0 nova_compute[254092]: 2025-11-25 17:12:08.910 254096 DEBUG oslo_concurrency.processutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:12:08 compute-0 nova_compute[254092]: 2025-11-25 17:12:08.911 254096 DEBUG nova.virt.libvirt.vif [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:12:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1415313673',display_name='tempest-TestGettingAddress-server-1415313673',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1415313673',id=136,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG7thEmoyKozSV70fSmXIKBzdgJECHN3GnSKzpZMf9In3z/LeK0Fm0kaAtX5kv64CasJMe/8BiJ7VhFgmrFP6y7poqA3BuRl/GzlLwom1v+Q2Cb33Se+uUu6cs+H/0PS7w==',key_name='tempest-TestGettingAddress-844897793',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-gzl12pdh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:12:03Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=a8194956-04fe-46d6-9b07-63486afc3c7d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "44750fec-28de-4cf5-8393-333676bb4fed", "address": "fa:16:3e:23:af:6f", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe23:af6f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44750fec-28", "ovs_interfaceid": "44750fec-28de-4cf5-8393-333676bb4fed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:12:08 compute-0 nova_compute[254092]: 2025-11-25 17:12:08.912 254096 DEBUG nova.network.os_vif_util [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "44750fec-28de-4cf5-8393-333676bb4fed", "address": "fa:16:3e:23:af:6f", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe23:af6f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44750fec-28", "ovs_interfaceid": "44750fec-28de-4cf5-8393-333676bb4fed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:12:08 compute-0 nova_compute[254092]: 2025-11-25 17:12:08.912 254096 DEBUG nova.network.os_vif_util [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:af:6f,bridge_name='br-int',has_traffic_filtering=True,id=44750fec-28de-4cf5-8393-333676bb4fed,network=Network(0bd20b9f-24bc-4728-90ff-2870bebbf3f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44750fec-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:12:08 compute-0 nova_compute[254092]: 2025-11-25 17:12:08.914 254096 DEBUG nova.objects.instance [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'pci_devices' on Instance uuid a8194956-04fe-46d6-9b07-63486afc3c7d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:12:08 compute-0 nova_compute[254092]: 2025-11-25 17:12:08.925 254096 DEBUG nova.virt.libvirt.driver [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] End _get_guest_xml xml=<domain type="kvm">
Nov 25 17:12:08 compute-0 nova_compute[254092]:   <uuid>a8194956-04fe-46d6-9b07-63486afc3c7d</uuid>
Nov 25 17:12:08 compute-0 nova_compute[254092]:   <name>instance-00000088</name>
Nov 25 17:12:08 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 17:12:08 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 17:12:08 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:12:08 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:       <nova:name>tempest-TestGettingAddress-server-1415313673</nova:name>
Nov 25 17:12:08 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 17:12:07</nova:creationTime>
Nov 25 17:12:08 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 17:12:08 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 17:12:08 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 17:12:08 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 17:12:08 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:12:08 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 17:12:08 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 17:12:08 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 17:12:08 compute-0 nova_compute[254092]:         <nova:user uuid="a647bb22c9a54e5fa9ddfff588126693">tempest-TestGettingAddress-207202764-project-member</nova:user>
Nov 25 17:12:08 compute-0 nova_compute[254092]:         <nova:project uuid="67b7f8e8aac6441f992a182f8d893100">tempest-TestGettingAddress-207202764</nova:project>
Nov 25 17:12:08 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 17:12:08 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 17:12:08 compute-0 nova_compute[254092]:         <nova:port uuid="44750fec-28de-4cf5-8393-333676bb4fed">
Nov 25 17:12:08 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe23:af6f" ipVersion="6"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 17:12:08 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 17:12:08 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:12:08 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 17:12:08 compute-0 nova_compute[254092]:     <system>
Nov 25 17:12:08 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 17:12:08 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 17:12:08 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:12:08 compute-0 nova_compute[254092]:       <entry name="serial">a8194956-04fe-46d6-9b07-63486afc3c7d</entry>
Nov 25 17:12:08 compute-0 nova_compute[254092]:       <entry name="uuid">a8194956-04fe-46d6-9b07-63486afc3c7d</entry>
Nov 25 17:12:08 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     </system>
Nov 25 17:12:08 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:12:08 compute-0 nova_compute[254092]:   <os>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:   </os>
Nov 25 17:12:08 compute-0 nova_compute[254092]:   <features>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:   </features>
Nov 25 17:12:08 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 17:12:08 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:12:08 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 17:12:08 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:12:08 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 17:12:08 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/a8194956-04fe-46d6-9b07-63486afc3c7d_disk">
Nov 25 17:12:08 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:       </source>
Nov 25 17:12:08 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:12:08 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:12:08 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 17:12:08 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/a8194956-04fe-46d6-9b07-63486afc3c7d_disk.config">
Nov 25 17:12:08 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:       </source>
Nov 25 17:12:08 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:12:08 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:12:08 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 17:12:08 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:23:af:6f"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:       <target dev="tap44750fec-28"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 17:12:08 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/a8194956-04fe-46d6-9b07-63486afc3c7d/console.log" append="off"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     <video>
Nov 25 17:12:08 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     </video>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 17:12:08 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 17:12:08 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 17:12:08 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:12:08 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:12:08 compute-0 nova_compute[254092]: </domain>
Nov 25 17:12:08 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 17:12:08 compute-0 nova_compute[254092]: 2025-11-25 17:12:08.926 254096 DEBUG nova.compute.manager [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Preparing to wait for external event network-vif-plugged-44750fec-28de-4cf5-8393-333676bb4fed prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 17:12:08 compute-0 nova_compute[254092]: 2025-11-25 17:12:08.926 254096 DEBUG oslo_concurrency.lockutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "a8194956-04fe-46d6-9b07-63486afc3c7d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:12:08 compute-0 nova_compute[254092]: 2025-11-25 17:12:08.926 254096 DEBUG oslo_concurrency.lockutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "a8194956-04fe-46d6-9b07-63486afc3c7d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:12:08 compute-0 nova_compute[254092]: 2025-11-25 17:12:08.926 254096 DEBUG oslo_concurrency.lockutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "a8194956-04fe-46d6-9b07-63486afc3c7d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:12:08 compute-0 nova_compute[254092]: 2025-11-25 17:12:08.927 254096 DEBUG nova.virt.libvirt.vif [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:12:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1415313673',display_name='tempest-TestGettingAddress-server-1415313673',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1415313673',id=136,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG7thEmoyKozSV70fSmXIKBzdgJECHN3GnSKzpZMf9In3z/LeK0Fm0kaAtX5kv64CasJMe/8BiJ7VhFgmrFP6y7poqA3BuRl/GzlLwom1v+Q2Cb33Se+uUu6cs+H/0PS7w==',key_name='tempest-TestGettingAddress-844897793',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-gzl12pdh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:12:03Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=a8194956-04fe-46d6-9b07-63486afc3c7d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "44750fec-28de-4cf5-8393-333676bb4fed", "address": "fa:16:3e:23:af:6f", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe23:af6f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44750fec-28", "ovs_interfaceid": "44750fec-28de-4cf5-8393-333676bb4fed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:12:08 compute-0 nova_compute[254092]: 2025-11-25 17:12:08.927 254096 DEBUG nova.network.os_vif_util [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "44750fec-28de-4cf5-8393-333676bb4fed", "address": "fa:16:3e:23:af:6f", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe23:af6f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44750fec-28", "ovs_interfaceid": "44750fec-28de-4cf5-8393-333676bb4fed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:12:08 compute-0 nova_compute[254092]: 2025-11-25 17:12:08.928 254096 DEBUG nova.network.os_vif_util [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:af:6f,bridge_name='br-int',has_traffic_filtering=True,id=44750fec-28de-4cf5-8393-333676bb4fed,network=Network(0bd20b9f-24bc-4728-90ff-2870bebbf3f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44750fec-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:12:08 compute-0 nova_compute[254092]: 2025-11-25 17:12:08.928 254096 DEBUG os_vif [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:af:6f,bridge_name='br-int',has_traffic_filtering=True,id=44750fec-28de-4cf5-8393-333676bb4fed,network=Network(0bd20b9f-24bc-4728-90ff-2870bebbf3f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44750fec-28') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:12:08 compute-0 nova_compute[254092]: 2025-11-25 17:12:08.928 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:12:08 compute-0 nova_compute[254092]: 2025-11-25 17:12:08.929 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:12:08 compute-0 nova_compute[254092]: 2025-11-25 17:12:08.929 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:12:08 compute-0 nova_compute[254092]: 2025-11-25 17:12:08.932 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:12:08 compute-0 nova_compute[254092]: 2025-11-25 17:12:08.932 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap44750fec-28, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:12:08 compute-0 nova_compute[254092]: 2025-11-25 17:12:08.932 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap44750fec-28, col_values=(('external_ids', {'iface-id': '44750fec-28de-4cf5-8393-333676bb4fed', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:23:af:6f', 'vm-uuid': 'a8194956-04fe-46d6-9b07-63486afc3c7d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:12:08 compute-0 nova_compute[254092]: 2025-11-25 17:12:08.933 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:12:08 compute-0 NetworkManager[48891]: <info>  [1764090728.9347] manager: (tap44750fec-28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/581)
Nov 25 17:12:08 compute-0 nova_compute[254092]: 2025-11-25 17:12:08.936 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:12:08 compute-0 nova_compute[254092]: 2025-11-25 17:12:08.939 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:12:08 compute-0 nova_compute[254092]: 2025-11-25 17:12:08.940 254096 INFO os_vif [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:af:6f,bridge_name='br-int',has_traffic_filtering=True,id=44750fec-28de-4cf5-8393-333676bb4fed,network=Network(0bd20b9f-24bc-4728-90ff-2870bebbf3f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44750fec-28')
Nov 25 17:12:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:12:08 compute-0 nova_compute[254092]: 2025-11-25 17:12:08.984 254096 DEBUG nova.virt.libvirt.driver [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:12:08 compute-0 nova_compute[254092]: 2025-11-25 17:12:08.984 254096 DEBUG nova.virt.libvirt.driver [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:12:08 compute-0 nova_compute[254092]: 2025-11-25 17:12:08.985 254096 DEBUG nova.virt.libvirt.driver [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No VIF found with MAC fa:16:3e:23:af:6f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:12:08 compute-0 nova_compute[254092]: 2025-11-25 17:12:08.986 254096 INFO nova.virt.libvirt.driver [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Using config drive
Nov 25 17:12:09 compute-0 nova_compute[254092]: 2025-11-25 17:12:09.005 254096 DEBUG nova.storage.rbd_utils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image a8194956-04fe-46d6-9b07-63486afc3c7d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:12:09 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1789845615' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:12:09 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/738469575' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:12:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:12:09.834 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:12:09 compute-0 nova_compute[254092]: 2025-11-25 17:12:09.950 254096 INFO nova.virt.libvirt.driver [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Creating config drive at /var/lib/nova/instances/a8194956-04fe-46d6-9b07-63486afc3c7d/disk.config
Nov 25 17:12:09 compute-0 nova_compute[254092]: 2025-11-25 17:12:09.954 254096 DEBUG oslo_concurrency.processutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a8194956-04fe-46d6-9b07-63486afc3c7d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_xksatra execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:12:10 compute-0 ceph-mon[74985]: pgmap v2658: 321 pgs: 321 active+clean; 167 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:12:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:12:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:12:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:12:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:12:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:12:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:12:10 compute-0 nova_compute[254092]: 2025-11-25 17:12:10.088 254096 DEBUG oslo_concurrency.processutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a8194956-04fe-46d6-9b07-63486afc3c7d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_xksatra" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:12:10 compute-0 nova_compute[254092]: 2025-11-25 17:12:10.110 254096 DEBUG nova.storage.rbd_utils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image a8194956-04fe-46d6-9b07-63486afc3c7d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:12:10 compute-0 nova_compute[254092]: 2025-11-25 17:12:10.114 254096 DEBUG oslo_concurrency.processutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a8194956-04fe-46d6-9b07-63486afc3c7d/disk.config a8194956-04fe-46d6-9b07-63486afc3c7d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:12:10 compute-0 nova_compute[254092]: 2025-11-25 17:12:10.246 254096 DEBUG oslo_concurrency.processutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a8194956-04fe-46d6-9b07-63486afc3c7d/disk.config a8194956-04fe-46d6-9b07-63486afc3c7d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:12:10 compute-0 nova_compute[254092]: 2025-11-25 17:12:10.247 254096 INFO nova.virt.libvirt.driver [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Deleting local config drive /var/lib/nova/instances/a8194956-04fe-46d6-9b07-63486afc3c7d/disk.config because it was imported into RBD.
Nov 25 17:12:10 compute-0 kernel: tap44750fec-28: entered promiscuous mode
Nov 25 17:12:10 compute-0 NetworkManager[48891]: <info>  [1764090730.2918] manager: (tap44750fec-28): new Tun device (/org/freedesktop/NetworkManager/Devices/582)
Nov 25 17:12:10 compute-0 nova_compute[254092]: 2025-11-25 17:12:10.291 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:12:10 compute-0 ovn_controller[153477]: 2025-11-25T17:12:10Z|01412|binding|INFO|Claiming lport 44750fec-28de-4cf5-8393-333676bb4fed for this chassis.
Nov 25 17:12:10 compute-0 ovn_controller[153477]: 2025-11-25T17:12:10Z|01413|binding|INFO|44750fec-28de-4cf5-8393-333676bb4fed: Claiming fa:16:3e:23:af:6f 10.100.0.9 2001:db8::f816:3eff:fe23:af6f
Nov 25 17:12:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:12:10.298 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:af:6f 10.100.0.9 2001:db8::f816:3eff:fe23:af6f'], port_security=['fa:16:3e:23:af:6f 10.100.0.9 2001:db8::f816:3eff:fe23:af6f'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28 2001:db8::f816:3eff:fe23:af6f/64', 'neutron:device_id': 'a8194956-04fe-46d6-9b07-63486afc3c7d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bd20b9f-24bc-4728-90ff-2870bebbf3f9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1084723a-8c1e-4696-9b12-4faefe06b5c9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2394ea31-774c-4cab-8186-73ed87450ca6, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=44750fec-28de-4cf5-8393-333676bb4fed) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:12:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:12:10.299 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 44750fec-28de-4cf5-8393-333676bb4fed in datapath 0bd20b9f-24bc-4728-90ff-2870bebbf3f9 bound to our chassis
Nov 25 17:12:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:12:10.301 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0bd20b9f-24bc-4728-90ff-2870bebbf3f9
Nov 25 17:12:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:12:10.321 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b842008c-edb3-44a3-9025-5966769d7722]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:12:10 compute-0 systemd-udevd[402854]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:12:10 compute-0 ovn_controller[153477]: 2025-11-25T17:12:10Z|01414|binding|INFO|Setting lport 44750fec-28de-4cf5-8393-333676bb4fed ovn-installed in OVS
Nov 25 17:12:10 compute-0 ovn_controller[153477]: 2025-11-25T17:12:10Z|01415|binding|INFO|Setting lport 44750fec-28de-4cf5-8393-333676bb4fed up in Southbound
Nov 25 17:12:10 compute-0 nova_compute[254092]: 2025-11-25 17:12:10.329 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:12:10 compute-0 nova_compute[254092]: 2025-11-25 17:12:10.334 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:12:10 compute-0 NetworkManager[48891]: <info>  [1764090730.3393] device (tap44750fec-28): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:12:10 compute-0 NetworkManager[48891]: <info>  [1764090730.3403] device (tap44750fec-28): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:12:10 compute-0 systemd-machined[216343]: New machine qemu-170-instance-00000088.
Nov 25 17:12:10 compute-0 systemd[1]: Started Virtual Machine qemu-170-instance-00000088.
Nov 25 17:12:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:12:10.356 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[0ec0ea2e-60bb-4441-97e7-fb544a5e3511]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:12:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:12:10.360 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[143ef667-6884-4f44-aa95-a74523d5ad78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:12:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:12:10.391 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[29905af9-eec5-4269-8025-36ef7459ba98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:12:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:12:10.409 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[56eef1d2-cbea-4855-a620-dd51dc40131b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0bd20b9f-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:5c:a4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 20, 'tx_packets': 6, 'rx_bytes': 1656, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 20, 'tx_packets': 6, 'rx_bytes': 1656, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 411], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 714035, 'reachable_time': 31125, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 18, 'inoctets': 1320, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 18, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1320, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 18, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 402863, 'error': None, 'target': 'ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:12:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:12:10.423 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[16e31a5e-2824-429d-b13d-a28ce536dcce]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0bd20b9f-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 714051, 'tstamp': 714051}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402868, 'error': None, 'target': 'ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0bd20b9f-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 714055, 'tstamp': 714055}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402868, 'error': None, 'target': 'ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:12:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:12:10.425 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0bd20b9f-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:12:10 compute-0 nova_compute[254092]: 2025-11-25 17:12:10.427 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:12:10 compute-0 nova_compute[254092]: 2025-11-25 17:12:10.428 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:12:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:12:10.429 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0bd20b9f-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:12:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:12:10.429 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:12:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:12:10.430 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0bd20b9f-20, col_values=(('external_ids', {'iface-id': '6c3d14c5-0ace-4cc3-828c-1d4061ad3097'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:12:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:12:10.430 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:12:10 compute-0 nova_compute[254092]: 2025-11-25 17:12:10.643 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:12:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2659: 321 pgs: 321 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:12:11 compute-0 nova_compute[254092]: 2025-11-25 17:12:11.000 254096 DEBUG nova.compute.manager [req-77d812c7-2ddd-41ee-9cf4-6f271a7b2cfd req-0ccfe83d-1b2d-4181-9d8f-abf912254b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Received event network-vif-plugged-44750fec-28de-4cf5-8393-333676bb4fed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:12:11 compute-0 nova_compute[254092]: 2025-11-25 17:12:11.000 254096 DEBUG oslo_concurrency.lockutils [req-77d812c7-2ddd-41ee-9cf4-6f271a7b2cfd req-0ccfe83d-1b2d-4181-9d8f-abf912254b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a8194956-04fe-46d6-9b07-63486afc3c7d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:12:11 compute-0 nova_compute[254092]: 2025-11-25 17:12:11.002 254096 DEBUG oslo_concurrency.lockutils [req-77d812c7-2ddd-41ee-9cf4-6f271a7b2cfd req-0ccfe83d-1b2d-4181-9d8f-abf912254b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a8194956-04fe-46d6-9b07-63486afc3c7d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:12:11 compute-0 nova_compute[254092]: 2025-11-25 17:12:11.002 254096 DEBUG oslo_concurrency.lockutils [req-77d812c7-2ddd-41ee-9cf4-6f271a7b2cfd req-0ccfe83d-1b2d-4181-9d8f-abf912254b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a8194956-04fe-46d6-9b07-63486afc3c7d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:12:11 compute-0 nova_compute[254092]: 2025-11-25 17:12:11.002 254096 DEBUG nova.compute.manager [req-77d812c7-2ddd-41ee-9cf4-6f271a7b2cfd req-0ccfe83d-1b2d-4181-9d8f-abf912254b9d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Processing event network-vif-plugged-44750fec-28de-4cf5-8393-333676bb4fed _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 17:12:11 compute-0 nova_compute[254092]: 2025-11-25 17:12:11.003 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090730.9963694, a8194956-04fe-46d6-9b07-63486afc3c7d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:12:11 compute-0 nova_compute[254092]: 2025-11-25 17:12:11.004 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] VM Started (Lifecycle Event)
Nov 25 17:12:11 compute-0 nova_compute[254092]: 2025-11-25 17:12:11.007 254096 DEBUG nova.compute.manager [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 17:12:11 compute-0 nova_compute[254092]: 2025-11-25 17:12:11.011 254096 DEBUG nova.virt.libvirt.driver [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 17:12:11 compute-0 nova_compute[254092]: 2025-11-25 17:12:11.015 254096 INFO nova.virt.libvirt.driver [-] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Instance spawned successfully.
Nov 25 17:12:11 compute-0 nova_compute[254092]: 2025-11-25 17:12:11.015 254096 DEBUG nova.virt.libvirt.driver [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 17:12:11 compute-0 nova_compute[254092]: 2025-11-25 17:12:11.021 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:12:11 compute-0 nova_compute[254092]: 2025-11-25 17:12:11.026 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:12:11 compute-0 nova_compute[254092]: 2025-11-25 17:12:11.042 254096 DEBUG nova.virt.libvirt.driver [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:12:11 compute-0 nova_compute[254092]: 2025-11-25 17:12:11.042 254096 DEBUG nova.virt.libvirt.driver [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:12:11 compute-0 nova_compute[254092]: 2025-11-25 17:12:11.043 254096 DEBUG nova.virt.libvirt.driver [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:12:11 compute-0 nova_compute[254092]: 2025-11-25 17:12:11.044 254096 DEBUG nova.virt.libvirt.driver [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:12:11 compute-0 nova_compute[254092]: 2025-11-25 17:12:11.045 254096 DEBUG nova.virt.libvirt.driver [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:12:11 compute-0 nova_compute[254092]: 2025-11-25 17:12:11.045 254096 DEBUG nova.virt.libvirt.driver [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:12:11 compute-0 nova_compute[254092]: 2025-11-25 17:12:11.052 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:12:11 compute-0 nova_compute[254092]: 2025-11-25 17:12:11.053 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090731.0018113, a8194956-04fe-46d6-9b07-63486afc3c7d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:12:11 compute-0 nova_compute[254092]: 2025-11-25 17:12:11.053 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] VM Paused (Lifecycle Event)
Nov 25 17:12:11 compute-0 nova_compute[254092]: 2025-11-25 17:12:11.090 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:12:11 compute-0 nova_compute[254092]: 2025-11-25 17:12:11.093 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090731.0107813, a8194956-04fe-46d6-9b07-63486afc3c7d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:12:11 compute-0 nova_compute[254092]: 2025-11-25 17:12:11.093 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] VM Resumed (Lifecycle Event)
Nov 25 17:12:11 compute-0 nova_compute[254092]: 2025-11-25 17:12:11.112 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:12:11 compute-0 nova_compute[254092]: 2025-11-25 17:12:11.117 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:12:11 compute-0 nova_compute[254092]: 2025-11-25 17:12:11.121 254096 INFO nova.compute.manager [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Took 7.14 seconds to spawn the instance on the hypervisor.
Nov 25 17:12:11 compute-0 nova_compute[254092]: 2025-11-25 17:12:11.121 254096 DEBUG nova.compute.manager [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:12:11 compute-0 nova_compute[254092]: 2025-11-25 17:12:11.141 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:12:11 compute-0 nova_compute[254092]: 2025-11-25 17:12:11.176 254096 INFO nova.compute.manager [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Took 8.05 seconds to build instance.
Nov 25 17:12:11 compute-0 nova_compute[254092]: 2025-11-25 17:12:11.189 254096 DEBUG oslo_concurrency.lockutils [None req-bdae34a5-c47a-4222-905b-f4d4aa479026 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "a8194956-04fe-46d6-9b07-63486afc3c7d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:12:12 compute-0 ceph-mon[74985]: pgmap v2659: 321 pgs: 321 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:12:12 compute-0 nova_compute[254092]: 2025-11-25 17:12:12.074 254096 DEBUG nova.network.neutron [req-960a46e8-5fbe-4dc1-b0aa-b0a6ef97a10d req-fe249309-e084-474c-9a4b-994f990ecb90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Updated VIF entry in instance network info cache for port 44750fec-28de-4cf5-8393-333676bb4fed. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:12:12 compute-0 nova_compute[254092]: 2025-11-25 17:12:12.074 254096 DEBUG nova.network.neutron [req-960a46e8-5fbe-4dc1-b0aa-b0a6ef97a10d req-fe249309-e084-474c-9a4b-994f990ecb90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Updating instance_info_cache with network_info: [{"id": "44750fec-28de-4cf5-8393-333676bb4fed", "address": "fa:16:3e:23:af:6f", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe23:af6f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44750fec-28", "ovs_interfaceid": "44750fec-28de-4cf5-8393-333676bb4fed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:12:12 compute-0 nova_compute[254092]: 2025-11-25 17:12:12.086 254096 DEBUG oslo_concurrency.lockutils [req-960a46e8-5fbe-4dc1-b0aa-b0a6ef97a10d req-fe249309-e084-474c-9a4b-994f990ecb90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-a8194956-04fe-46d6-9b07-63486afc3c7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:12:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2660: 321 pgs: 321 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Nov 25 17:12:13 compute-0 nova_compute[254092]: 2025-11-25 17:12:13.215 254096 DEBUG nova.compute.manager [req-a0ce8cbb-d883-42a3-8223-d74cb597f399 req-00206480-8ecf-4df8-9bcf-15f2de3ade93 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Received event network-vif-plugged-44750fec-28de-4cf5-8393-333676bb4fed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:12:13 compute-0 nova_compute[254092]: 2025-11-25 17:12:13.215 254096 DEBUG oslo_concurrency.lockutils [req-a0ce8cbb-d883-42a3-8223-d74cb597f399 req-00206480-8ecf-4df8-9bcf-15f2de3ade93 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a8194956-04fe-46d6-9b07-63486afc3c7d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:12:13 compute-0 nova_compute[254092]: 2025-11-25 17:12:13.215 254096 DEBUG oslo_concurrency.lockutils [req-a0ce8cbb-d883-42a3-8223-d74cb597f399 req-00206480-8ecf-4df8-9bcf-15f2de3ade93 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a8194956-04fe-46d6-9b07-63486afc3c7d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:12:13 compute-0 nova_compute[254092]: 2025-11-25 17:12:13.215 254096 DEBUG oslo_concurrency.lockutils [req-a0ce8cbb-d883-42a3-8223-d74cb597f399 req-00206480-8ecf-4df8-9bcf-15f2de3ade93 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a8194956-04fe-46d6-9b07-63486afc3c7d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:12:13 compute-0 nova_compute[254092]: 2025-11-25 17:12:13.216 254096 DEBUG nova.compute.manager [req-a0ce8cbb-d883-42a3-8223-d74cb597f399 req-00206480-8ecf-4df8-9bcf-15f2de3ade93 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] No waiting events found dispatching network-vif-plugged-44750fec-28de-4cf5-8393-333676bb4fed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:12:13 compute-0 nova_compute[254092]: 2025-11-25 17:12:13.216 254096 WARNING nova.compute.manager [req-a0ce8cbb-d883-42a3-8223-d74cb597f399 req-00206480-8ecf-4df8-9bcf-15f2de3ade93 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Received unexpected event network-vif-plugged-44750fec-28de-4cf5-8393-333676bb4fed for instance with vm_state active and task_state None.
Nov 25 17:12:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:12:13.650 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:12:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:12:13.651 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:12:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:12:13.651 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:12:13 compute-0 nova_compute[254092]: 2025-11-25 17:12:13.934 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:12:13 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:12:14 compute-0 ceph-mon[74985]: pgmap v2660: 321 pgs: 321 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Nov 25 17:12:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2661: 321 pgs: 321 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Nov 25 17:12:15 compute-0 nova_compute[254092]: 2025-11-25 17:12:15.646 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:12:16 compute-0 nova_compute[254092]: 2025-11-25 17:12:16.016 254096 DEBUG nova.compute.manager [req-979cb7f3-856e-4a06-92f3-a54593dc6d2c req-55976a71-ca60-42a8-82cd-8738e112489b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Received event network-changed-44750fec-28de-4cf5-8393-333676bb4fed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:12:16 compute-0 nova_compute[254092]: 2025-11-25 17:12:16.017 254096 DEBUG nova.compute.manager [req-979cb7f3-856e-4a06-92f3-a54593dc6d2c req-55976a71-ca60-42a8-82cd-8738e112489b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Refreshing instance network info cache due to event network-changed-44750fec-28de-4cf5-8393-333676bb4fed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:12:16 compute-0 nova_compute[254092]: 2025-11-25 17:12:16.017 254096 DEBUG oslo_concurrency.lockutils [req-979cb7f3-856e-4a06-92f3-a54593dc6d2c req-55976a71-ca60-42a8-82cd-8738e112489b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-a8194956-04fe-46d6-9b07-63486afc3c7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:12:16 compute-0 nova_compute[254092]: 2025-11-25 17:12:16.017 254096 DEBUG oslo_concurrency.lockutils [req-979cb7f3-856e-4a06-92f3-a54593dc6d2c req-55976a71-ca60-42a8-82cd-8738e112489b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-a8194956-04fe-46d6-9b07-63486afc3c7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:12:16 compute-0 nova_compute[254092]: 2025-11-25 17:12:16.018 254096 DEBUG nova.network.neutron [req-979cb7f3-856e-4a06-92f3-a54593dc6d2c req-55976a71-ca60-42a8-82cd-8738e112489b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Refreshing network info cache for port 44750fec-28de-4cf5-8393-333676bb4fed _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:12:16 compute-0 ceph-mon[74985]: pgmap v2661: 321 pgs: 321 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Nov 25 17:12:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2662: 321 pgs: 321 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 17:12:17 compute-0 nova_compute[254092]: 2025-11-25 17:12:17.619 254096 DEBUG nova.network.neutron [req-979cb7f3-856e-4a06-92f3-a54593dc6d2c req-55976a71-ca60-42a8-82cd-8738e112489b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Updated VIF entry in instance network info cache for port 44750fec-28de-4cf5-8393-333676bb4fed. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:12:17 compute-0 nova_compute[254092]: 2025-11-25 17:12:17.620 254096 DEBUG nova.network.neutron [req-979cb7f3-856e-4a06-92f3-a54593dc6d2c req-55976a71-ca60-42a8-82cd-8738e112489b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Updating instance_info_cache with network_info: [{"id": "44750fec-28de-4cf5-8393-333676bb4fed", "address": "fa:16:3e:23:af:6f", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe23:af6f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44750fec-28", "ovs_interfaceid": "44750fec-28de-4cf5-8393-333676bb4fed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:12:17 compute-0 nova_compute[254092]: 2025-11-25 17:12:17.640 254096 DEBUG oslo_concurrency.lockutils [req-979cb7f3-856e-4a06-92f3-a54593dc6d2c req-55976a71-ca60-42a8-82cd-8738e112489b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-a8194956-04fe-46d6-9b07-63486afc3c7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:12:18 compute-0 ceph-mon[74985]: pgmap v2662: 321 pgs: 321 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 25 17:12:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2663: 321 pgs: 321 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 25 17:12:18 compute-0 nova_compute[254092]: 2025-11-25 17:12:18.936 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:12:18 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:12:20 compute-0 ceph-mon[74985]: pgmap v2663: 321 pgs: 321 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 25 17:12:20 compute-0 nova_compute[254092]: 2025-11-25 17:12:20.649 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:12:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2664: 321 pgs: 321 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 25 17:12:22 compute-0 ceph-mon[74985]: pgmap v2664: 321 pgs: 321 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 25 17:12:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2665: 321 pgs: 321 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Nov 25 17:12:23 compute-0 ceph-mon[74985]: pgmap v2665: 321 pgs: 321 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Nov 25 17:12:23 compute-0 ovn_controller[153477]: 2025-11-25T17:12:23Z|00168|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:23:af:6f 10.100.0.9
Nov 25 17:12:23 compute-0 ovn_controller[153477]: 2025-11-25T17:12:23Z|00169|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:23:af:6f 10.100.0.9
Nov 25 17:12:23 compute-0 nova_compute[254092]: 2025-11-25 17:12:23.937 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:12:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:12:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2666: 321 pgs: 321 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 65 op/s
Nov 25 17:12:25 compute-0 nova_compute[254092]: 2025-11-25 17:12:25.650 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:12:25 compute-0 ceph-mon[74985]: pgmap v2666: 321 pgs: 321 active+clean; 167 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 65 op/s
Nov 25 17:12:26 compute-0 podman[402912]: 2025-11-25 17:12:26.661669545 +0000 UTC m=+0.066269979 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 25 17:12:26 compute-0 podman[402913]: 2025-11-25 17:12:26.671698631 +0000 UTC m=+0.073722884 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 25 17:12:26 compute-0 podman[402914]: 2025-11-25 17:12:26.744612941 +0000 UTC m=+0.133762271 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Nov 25 17:12:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2667: 321 pgs: 321 active+clean; 200 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Nov 25 17:12:27 compute-0 ceph-mon[74985]: pgmap v2667: 321 pgs: 321 active+clean; 200 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Nov 25 17:12:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2668: 321 pgs: 321 active+clean; 200 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:12:28 compute-0 nova_compute[254092]: 2025-11-25 17:12:28.939 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:12:28 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:12:29 compute-0 ceph-mon[74985]: pgmap v2668: 321 pgs: 321 active+clean; 200 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:12:30 compute-0 nova_compute[254092]: 2025-11-25 17:12:30.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:12:30 compute-0 nova_compute[254092]: 2025-11-25 17:12:30.652 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:12:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2669: 321 pgs: 321 active+clean; 200 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:12:31 compute-0 ceph-mon[74985]: pgmap v2669: 321 pgs: 321 active+clean; 200 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:12:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2670: 321 pgs: 321 active+clean; 200 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:12:33 compute-0 nova_compute[254092]: 2025-11-25 17:12:33.272 254096 DEBUG nova.compute.manager [req-eae9567e-4e9d-4003-b550-7130797af024 req-da3faf94-7962-483c-93eb-57a2825c3632 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Received event network-changed-44750fec-28de-4cf5-8393-333676bb4fed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:12:33 compute-0 nova_compute[254092]: 2025-11-25 17:12:33.272 254096 DEBUG nova.compute.manager [req-eae9567e-4e9d-4003-b550-7130797af024 req-da3faf94-7962-483c-93eb-57a2825c3632 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Refreshing instance network info cache due to event network-changed-44750fec-28de-4cf5-8393-333676bb4fed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:12:33 compute-0 nova_compute[254092]: 2025-11-25 17:12:33.273 254096 DEBUG oslo_concurrency.lockutils [req-eae9567e-4e9d-4003-b550-7130797af024 req-da3faf94-7962-483c-93eb-57a2825c3632 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-a8194956-04fe-46d6-9b07-63486afc3c7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:12:33 compute-0 nova_compute[254092]: 2025-11-25 17:12:33.273 254096 DEBUG oslo_concurrency.lockutils [req-eae9567e-4e9d-4003-b550-7130797af024 req-da3faf94-7962-483c-93eb-57a2825c3632 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-a8194956-04fe-46d6-9b07-63486afc3c7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:12:33 compute-0 nova_compute[254092]: 2025-11-25 17:12:33.273 254096 DEBUG nova.network.neutron [req-eae9567e-4e9d-4003-b550-7130797af024 req-da3faf94-7962-483c-93eb-57a2825c3632 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Refreshing network info cache for port 44750fec-28de-4cf5-8393-333676bb4fed _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:12:33 compute-0 nova_compute[254092]: 2025-11-25 17:12:33.337 254096 DEBUG oslo_concurrency.lockutils [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "a8194956-04fe-46d6-9b07-63486afc3c7d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:12:33 compute-0 nova_compute[254092]: 2025-11-25 17:12:33.338 254096 DEBUG oslo_concurrency.lockutils [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "a8194956-04fe-46d6-9b07-63486afc3c7d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:12:33 compute-0 nova_compute[254092]: 2025-11-25 17:12:33.339 254096 DEBUG oslo_concurrency.lockutils [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "a8194956-04fe-46d6-9b07-63486afc3c7d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:12:33 compute-0 nova_compute[254092]: 2025-11-25 17:12:33.339 254096 DEBUG oslo_concurrency.lockutils [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "a8194956-04fe-46d6-9b07-63486afc3c7d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:12:33 compute-0 nova_compute[254092]: 2025-11-25 17:12:33.339 254096 DEBUG oslo_concurrency.lockutils [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "a8194956-04fe-46d6-9b07-63486afc3c7d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:12:33 compute-0 nova_compute[254092]: 2025-11-25 17:12:33.340 254096 INFO nova.compute.manager [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Terminating instance
Nov 25 17:12:33 compute-0 nova_compute[254092]: 2025-11-25 17:12:33.341 254096 DEBUG nova.compute.manager [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 17:12:33 compute-0 kernel: tap44750fec-28 (unregistering): left promiscuous mode
Nov 25 17:12:33 compute-0 NetworkManager[48891]: <info>  [1764090753.3930] device (tap44750fec-28): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:12:33 compute-0 ovn_controller[153477]: 2025-11-25T17:12:33Z|01416|binding|INFO|Releasing lport 44750fec-28de-4cf5-8393-333676bb4fed from this chassis (sb_readonly=0)
Nov 25 17:12:33 compute-0 nova_compute[254092]: 2025-11-25 17:12:33.464 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:12:33 compute-0 ovn_controller[153477]: 2025-11-25T17:12:33Z|01417|binding|INFO|Setting lport 44750fec-28de-4cf5-8393-333676bb4fed down in Southbound
Nov 25 17:12:33 compute-0 ovn_controller[153477]: 2025-11-25T17:12:33Z|01418|binding|INFO|Removing iface tap44750fec-28 ovn-installed in OVS
Nov 25 17:12:33 compute-0 nova_compute[254092]: 2025-11-25 17:12:33.470 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:12:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:12:33.477 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:af:6f 10.100.0.9 2001:db8::f816:3eff:fe23:af6f'], port_security=['fa:16:3e:23:af:6f 10.100.0.9 2001:db8::f816:3eff:fe23:af6f'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28 2001:db8::f816:3eff:fe23:af6f/64', 'neutron:device_id': 'a8194956-04fe-46d6-9b07-63486afc3c7d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bd20b9f-24bc-4728-90ff-2870bebbf3f9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1084723a-8c1e-4696-9b12-4faefe06b5c9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2394ea31-774c-4cab-8186-73ed87450ca6, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=44750fec-28de-4cf5-8393-333676bb4fed) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:12:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:12:33.478 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 44750fec-28de-4cf5-8393-333676bb4fed in datapath 0bd20b9f-24bc-4728-90ff-2870bebbf3f9 unbound from our chassis
Nov 25 17:12:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:12:33.480 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0bd20b9f-24bc-4728-90ff-2870bebbf3f9
Nov 25 17:12:33 compute-0 nova_compute[254092]: 2025-11-25 17:12:33.481 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:12:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:12:33.497 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[de44ae6a-8949-4b78-a7b1-ec2473c3de47]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:12:33 compute-0 systemd[1]: machine-qemu\x2d170\x2dinstance\x2d00000088.scope: Deactivated successfully.
Nov 25 17:12:33 compute-0 systemd[1]: machine-qemu\x2d170\x2dinstance\x2d00000088.scope: Consumed 13.264s CPU time.
Nov 25 17:12:33 compute-0 systemd-machined[216343]: Machine qemu-170-instance-00000088 terminated.
Nov 25 17:12:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:12:33.532 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[29d8fa50-2d55-48b7-9f41-269c6233db3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:12:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:12:33.535 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[28322817-6007-4bd4-94ec-25c7b3135616]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:12:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:12:33.572 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[48489ac3-917b-4e88-b6f1-cae571d37f3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:12:33 compute-0 nova_compute[254092]: 2025-11-25 17:12:33.578 254096 INFO nova.virt.libvirt.driver [-] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Instance destroyed successfully.
Nov 25 17:12:33 compute-0 nova_compute[254092]: 2025-11-25 17:12:33.578 254096 DEBUG nova.objects.instance [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'resources' on Instance uuid a8194956-04fe-46d6-9b07-63486afc3c7d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:12:33 compute-0 nova_compute[254092]: 2025-11-25 17:12:33.593 254096 DEBUG nova.virt.libvirt.vif [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:12:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1415313673',display_name='tempest-TestGettingAddress-server-1415313673',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1415313673',id=136,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG7thEmoyKozSV70fSmXIKBzdgJECHN3GnSKzpZMf9In3z/LeK0Fm0kaAtX5kv64CasJMe/8BiJ7VhFgmrFP6y7poqA3BuRl/GzlLwom1v+Q2Cb33Se+uUu6cs+H/0PS7w==',key_name='tempest-TestGettingAddress-844897793',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:12:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-gzl12pdh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:12:11Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=a8194956-04fe-46d6-9b07-63486afc3c7d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "44750fec-28de-4cf5-8393-333676bb4fed", "address": "fa:16:3e:23:af:6f", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe23:af6f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44750fec-28", "ovs_interfaceid": "44750fec-28de-4cf5-8393-333676bb4fed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:12:33 compute-0 nova_compute[254092]: 2025-11-25 17:12:33.593 254096 DEBUG nova.network.os_vif_util [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "44750fec-28de-4cf5-8393-333676bb4fed", "address": "fa:16:3e:23:af:6f", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe23:af6f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44750fec-28", "ovs_interfaceid": "44750fec-28de-4cf5-8393-333676bb4fed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:12:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:12:33.594 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c8304611-2812-4700-b2e8-321ee7f82b75]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0bd20b9f-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:5c:a4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 34, 'tx_packets': 8, 'rx_bytes': 2780, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 34, 'tx_packets': 8, 'rx_bytes': 2780, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 411], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 714035, 'reachable_time': 31125, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2192, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2192, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 402997, 'error': None, 'target': 'ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:12:33 compute-0 nova_compute[254092]: 2025-11-25 17:12:33.594 254096 DEBUG nova.network.os_vif_util [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:23:af:6f,bridge_name='br-int',has_traffic_filtering=True,id=44750fec-28de-4cf5-8393-333676bb4fed,network=Network(0bd20b9f-24bc-4728-90ff-2870bebbf3f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44750fec-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:12:33 compute-0 nova_compute[254092]: 2025-11-25 17:12:33.595 254096 DEBUG os_vif [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:23:af:6f,bridge_name='br-int',has_traffic_filtering=True,id=44750fec-28de-4cf5-8393-333676bb4fed,network=Network(0bd20b9f-24bc-4728-90ff-2870bebbf3f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44750fec-28') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:12:33 compute-0 nova_compute[254092]: 2025-11-25 17:12:33.597 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:12:33 compute-0 nova_compute[254092]: 2025-11-25 17:12:33.597 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap44750fec-28, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:12:33 compute-0 nova_compute[254092]: 2025-11-25 17:12:33.601 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:12:33 compute-0 nova_compute[254092]: 2025-11-25 17:12:33.604 254096 INFO os_vif [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:23:af:6f,bridge_name='br-int',has_traffic_filtering=True,id=44750fec-28de-4cf5-8393-333676bb4fed,network=Network(0bd20b9f-24bc-4728-90ff-2870bebbf3f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44750fec-28')
Nov 25 17:12:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:12:33.609 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2e0d2d18-cb3e-486d-a4cc-b58f797990e1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0bd20b9f-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 714051, 'tstamp': 714051}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402998, 'error': None, 'target': 'ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0bd20b9f-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 714055, 'tstamp': 714055}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402998, 'error': None, 'target': 'ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:12:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:12:33.610 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0bd20b9f-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:12:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:12:33.612 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0bd20b9f-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:12:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:12:33.613 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:12:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:12:33.613 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0bd20b9f-20, col_values=(('external_ids', {'iface-id': '6c3d14c5-0ace-4cc3-828c-1d4061ad3097'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:12:33 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:12:33.613 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:12:33 compute-0 nova_compute[254092]: 2025-11-25 17:12:33.621 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:12:33 compute-0 ceph-mon[74985]: pgmap v2670: 321 pgs: 321 active+clean; 200 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:12:33 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:12:34 compute-0 nova_compute[254092]: 2025-11-25 17:12:34.108 254096 INFO nova.virt.libvirt.driver [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Deleting instance files /var/lib/nova/instances/a8194956-04fe-46d6-9b07-63486afc3c7d_del
Nov 25 17:12:34 compute-0 nova_compute[254092]: 2025-11-25 17:12:34.110 254096 INFO nova.virt.libvirt.driver [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Deletion of /var/lib/nova/instances/a8194956-04fe-46d6-9b07-63486afc3c7d_del complete
Nov 25 17:12:34 compute-0 nova_compute[254092]: 2025-11-25 17:12:34.162 254096 INFO nova.compute.manager [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Took 0.82 seconds to destroy the instance on the hypervisor.
Nov 25 17:12:34 compute-0 nova_compute[254092]: 2025-11-25 17:12:34.163 254096 DEBUG oslo.service.loopingcall [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 17:12:34 compute-0 nova_compute[254092]: 2025-11-25 17:12:34.164 254096 DEBUG nova.compute.manager [-] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 17:12:34 compute-0 nova_compute[254092]: 2025-11-25 17:12:34.164 254096 DEBUG nova.network.neutron [-] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 17:12:34 compute-0 nova_compute[254092]: 2025-11-25 17:12:34.614 254096 DEBUG nova.network.neutron [req-eae9567e-4e9d-4003-b550-7130797af024 req-da3faf94-7962-483c-93eb-57a2825c3632 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Updated VIF entry in instance network info cache for port 44750fec-28de-4cf5-8393-333676bb4fed. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:12:34 compute-0 nova_compute[254092]: 2025-11-25 17:12:34.616 254096 DEBUG nova.network.neutron [req-eae9567e-4e9d-4003-b550-7130797af024 req-da3faf94-7962-483c-93eb-57a2825c3632 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Updating instance_info_cache with network_info: [{"id": "44750fec-28de-4cf5-8393-333676bb4fed", "address": "fa:16:3e:23:af:6f", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe23:af6f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44750fec-28", "ovs_interfaceid": "44750fec-28de-4cf5-8393-333676bb4fed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:12:34 compute-0 sudo[403018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:12:34 compute-0 sudo[403018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:12:34 compute-0 sudo[403018]: pam_unix(sudo:session): session closed for user root
Nov 25 17:12:34 compute-0 nova_compute[254092]: 2025-11-25 17:12:34.636 254096 DEBUG oslo_concurrency.lockutils [req-eae9567e-4e9d-4003-b550-7130797af024 req-da3faf94-7962-483c-93eb-57a2825c3632 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-a8194956-04fe-46d6-9b07-63486afc3c7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:12:34 compute-0 sudo[403043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:12:34 compute-0 sudo[403043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:12:34 compute-0 sudo[403043]: pam_unix(sudo:session): session closed for user root
Nov 25 17:12:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2671: 321 pgs: 321 active+clean; 200 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:12:34 compute-0 sudo[403068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:12:34 compute-0 sudo[403068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:12:34 compute-0 sudo[403068]: pam_unix(sudo:session): session closed for user root
Nov 25 17:12:34 compute-0 nova_compute[254092]: 2025-11-25 17:12:34.785 254096 DEBUG nova.network.neutron [-] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:12:34 compute-0 nova_compute[254092]: 2025-11-25 17:12:34.800 254096 INFO nova.compute.manager [-] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Took 0.64 seconds to deallocate network for instance.
Nov 25 17:12:34 compute-0 nova_compute[254092]: 2025-11-25 17:12:34.837 254096 DEBUG oslo_concurrency.lockutils [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:12:34 compute-0 nova_compute[254092]: 2025-11-25 17:12:34.837 254096 DEBUG oslo_concurrency.lockutils [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:12:34 compute-0 sudo[403093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:12:34 compute-0 sudo[403093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:12:34 compute-0 nova_compute[254092]: 2025-11-25 17:12:34.871 254096 DEBUG nova.compute.manager [req-81fba276-ef05-4b67-8926-617dfc488364 req-f9d19b09-749d-4c44-9eec-036f4a0ed425 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Received event network-vif-deleted-44750fec-28de-4cf5-8393-333676bb4fed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:12:34 compute-0 nova_compute[254092]: 2025-11-25 17:12:34.901 254096 DEBUG oslo_concurrency.processutils [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:12:35 compute-0 sudo[403093]: pam_unix(sudo:session): session closed for user root
Nov 25 17:12:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:12:35 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2574910987' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:12:35 compute-0 nova_compute[254092]: 2025-11-25 17:12:35.348 254096 DEBUG nova.compute.manager [req-6b46aa41-5c1a-4fae-a43f-613011ffe802 req-1793d2c4-dfa1-422a-a875-4e5ba1f0d122 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Received event network-vif-unplugged-44750fec-28de-4cf5-8393-333676bb4fed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:12:35 compute-0 nova_compute[254092]: 2025-11-25 17:12:35.349 254096 DEBUG oslo_concurrency.lockutils [req-6b46aa41-5c1a-4fae-a43f-613011ffe802 req-1793d2c4-dfa1-422a-a875-4e5ba1f0d122 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a8194956-04fe-46d6-9b07-63486afc3c7d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:12:35 compute-0 nova_compute[254092]: 2025-11-25 17:12:35.349 254096 DEBUG oslo_concurrency.lockutils [req-6b46aa41-5c1a-4fae-a43f-613011ffe802 req-1793d2c4-dfa1-422a-a875-4e5ba1f0d122 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a8194956-04fe-46d6-9b07-63486afc3c7d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:12:35 compute-0 nova_compute[254092]: 2025-11-25 17:12:35.350 254096 DEBUG oslo_concurrency.lockutils [req-6b46aa41-5c1a-4fae-a43f-613011ffe802 req-1793d2c4-dfa1-422a-a875-4e5ba1f0d122 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a8194956-04fe-46d6-9b07-63486afc3c7d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:12:35 compute-0 nova_compute[254092]: 2025-11-25 17:12:35.350 254096 DEBUG nova.compute.manager [req-6b46aa41-5c1a-4fae-a43f-613011ffe802 req-1793d2c4-dfa1-422a-a875-4e5ba1f0d122 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] No waiting events found dispatching network-vif-unplugged-44750fec-28de-4cf5-8393-333676bb4fed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:12:35 compute-0 nova_compute[254092]: 2025-11-25 17:12:35.350 254096 WARNING nova.compute.manager [req-6b46aa41-5c1a-4fae-a43f-613011ffe802 req-1793d2c4-dfa1-422a-a875-4e5ba1f0d122 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Received unexpected event network-vif-unplugged-44750fec-28de-4cf5-8393-333676bb4fed for instance with vm_state deleted and task_state None.
Nov 25 17:12:35 compute-0 nova_compute[254092]: 2025-11-25 17:12:35.350 254096 DEBUG nova.compute.manager [req-6b46aa41-5c1a-4fae-a43f-613011ffe802 req-1793d2c4-dfa1-422a-a875-4e5ba1f0d122 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Received event network-vif-plugged-44750fec-28de-4cf5-8393-333676bb4fed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:12:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:12:35 compute-0 nova_compute[254092]: 2025-11-25 17:12:35.351 254096 DEBUG oslo_concurrency.lockutils [req-6b46aa41-5c1a-4fae-a43f-613011ffe802 req-1793d2c4-dfa1-422a-a875-4e5ba1f0d122 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a8194956-04fe-46d6-9b07-63486afc3c7d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:12:35 compute-0 nova_compute[254092]: 2025-11-25 17:12:35.351 254096 DEBUG oslo_concurrency.lockutils [req-6b46aa41-5c1a-4fae-a43f-613011ffe802 req-1793d2c4-dfa1-422a-a875-4e5ba1f0d122 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a8194956-04fe-46d6-9b07-63486afc3c7d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:12:35 compute-0 nova_compute[254092]: 2025-11-25 17:12:35.351 254096 DEBUG oslo_concurrency.lockutils [req-6b46aa41-5c1a-4fae-a43f-613011ffe802 req-1793d2c4-dfa1-422a-a875-4e5ba1f0d122 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a8194956-04fe-46d6-9b07-63486afc3c7d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:12:35 compute-0 nova_compute[254092]: 2025-11-25 17:12:35.351 254096 DEBUG nova.compute.manager [req-6b46aa41-5c1a-4fae-a43f-613011ffe802 req-1793d2c4-dfa1-422a-a875-4e5ba1f0d122 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] No waiting events found dispatching network-vif-plugged-44750fec-28de-4cf5-8393-333676bb4fed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:12:35 compute-0 nova_compute[254092]: 2025-11-25 17:12:35.351 254096 WARNING nova.compute.manager [req-6b46aa41-5c1a-4fae-a43f-613011ffe802 req-1793d2c4-dfa1-422a-a875-4e5ba1f0d122 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Received unexpected event network-vif-plugged-44750fec-28de-4cf5-8393-333676bb4fed for instance with vm_state deleted and task_state None.
Nov 25 17:12:35 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:12:35 compute-0 nova_compute[254092]: 2025-11-25 17:12:35.352 254096 DEBUG oslo_concurrency.processutils [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:12:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:12:35 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:12:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:12:35 compute-0 nova_compute[254092]: 2025-11-25 17:12:35.359 254096 DEBUG nova.compute.provider_tree [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:12:35 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:12:35 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev f29818a6-c1d7-430d-be6a-30f7908272a6 does not exist
Nov 25 17:12:35 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 398e8e5e-3072-43b2-96fb-ec3da7f38744 does not exist
Nov 25 17:12:35 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 80bff87b-16b6-4ba6-b8b6-1f09fb9ebe9e does not exist
Nov 25 17:12:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:12:35 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:12:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:12:35 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:12:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:12:35 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:12:35 compute-0 nova_compute[254092]: 2025-11-25 17:12:35.376 254096 DEBUG nova.scheduler.client.report [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:12:35 compute-0 nova_compute[254092]: 2025-11-25 17:12:35.396 254096 DEBUG oslo_concurrency.lockutils [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.559s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:12:35 compute-0 nova_compute[254092]: 2025-11-25 17:12:35.422 254096 INFO nova.scheduler.client.report [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Deleted allocations for instance a8194956-04fe-46d6-9b07-63486afc3c7d
Nov 25 17:12:35 compute-0 sudo[403171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:12:35 compute-0 sudo[403171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:12:35 compute-0 sudo[403171]: pam_unix(sudo:session): session closed for user root
Nov 25 17:12:35 compute-0 nova_compute[254092]: 2025-11-25 17:12:35.474 254096 DEBUG oslo_concurrency.lockutils [None req-84d1dd85-bfdb-4bc4-8d8c-2d4ff12a7ada a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "a8194956-04fe-46d6-9b07-63486afc3c7d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.135s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:12:35 compute-0 sudo[403196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:12:35 compute-0 sudo[403196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:12:35 compute-0 sudo[403196]: pam_unix(sudo:session): session closed for user root
Nov 25 17:12:35 compute-0 sudo[403221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:12:35 compute-0 sudo[403221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:12:35 compute-0 sudo[403221]: pam_unix(sudo:session): session closed for user root
Nov 25 17:12:35 compute-0 nova_compute[254092]: 2025-11-25 17:12:35.655 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:12:35 compute-0 sudo[403246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:12:35 compute-0 sudo[403246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:12:35 compute-0 ceph-mon[74985]: pgmap v2671: 321 pgs: 321 active+clean; 200 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:12:35 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2574910987' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:12:35 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:12:35 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:12:35 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:12:35 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:12:35 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:12:35 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:12:36 compute-0 podman[403312]: 2025-11-25 17:12:36.070341098 +0000 UTC m=+0.065539189 container create 43c37448cbbe80402f97e87c4663691a333402d18627af829b318f36b8f93c50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_jemison, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 17:12:36 compute-0 systemd[1]: Started libpod-conmon-43c37448cbbe80402f97e87c4663691a333402d18627af829b318f36b8f93c50.scope.
Nov 25 17:12:36 compute-0 podman[403312]: 2025-11-25 17:12:36.047229183 +0000 UTC m=+0.042427314 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:12:36 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:12:36 compute-0 podman[403312]: 2025-11-25 17:12:36.181876737 +0000 UTC m=+0.177074858 container init 43c37448cbbe80402f97e87c4663691a333402d18627af829b318f36b8f93c50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_jemison, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:12:36 compute-0 podman[403312]: 2025-11-25 17:12:36.198554534 +0000 UTC m=+0.193752635 container start 43c37448cbbe80402f97e87c4663691a333402d18627af829b318f36b8f93c50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_jemison, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:12:36 compute-0 podman[403312]: 2025-11-25 17:12:36.203722786 +0000 UTC m=+0.198920927 container attach 43c37448cbbe80402f97e87c4663691a333402d18627af829b318f36b8f93c50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_jemison, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 17:12:36 compute-0 crazy_jemison[403329]: 167 167
Nov 25 17:12:36 compute-0 systemd[1]: libpod-43c37448cbbe80402f97e87c4663691a333402d18627af829b318f36b8f93c50.scope: Deactivated successfully.
Nov 25 17:12:36 compute-0 podman[403312]: 2025-11-25 17:12:36.212067585 +0000 UTC m=+0.207265696 container died 43c37448cbbe80402f97e87c4663691a333402d18627af829b318f36b8f93c50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:12:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-6dd6194bad6761cc2f6f3b425de1aa6286d9c4c17e2188b5513f8d22b915e0b8-merged.mount: Deactivated successfully.
Nov 25 17:12:36 compute-0 podman[403312]: 2025-11-25 17:12:36.258376055 +0000 UTC m=+0.253574146 container remove 43c37448cbbe80402f97e87c4663691a333402d18627af829b318f36b8f93c50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_jemison, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 17:12:36 compute-0 systemd[1]: libpod-conmon-43c37448cbbe80402f97e87c4663691a333402d18627af829b318f36b8f93c50.scope: Deactivated successfully.
Nov 25 17:12:36 compute-0 podman[403353]: 2025-11-25 17:12:36.460876249 +0000 UTC m=+0.047411581 container create 5079b0ac985ad6d9281312a54b16edb02572465b8ac11039ce126d53461fe400 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:12:36 compute-0 nova_compute[254092]: 2025-11-25 17:12:36.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:12:36 compute-0 systemd[1]: Started libpod-conmon-5079b0ac985ad6d9281312a54b16edb02572465b8ac11039ce126d53461fe400.scope.
Nov 25 17:12:36 compute-0 podman[403353]: 2025-11-25 17:12:36.442221937 +0000 UTC m=+0.028757269 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:12:36 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:12:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a56e79e15afb00f89347c29f94134a9161e69710e5cfde304785406b99471f1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:12:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a56e79e15afb00f89347c29f94134a9161e69710e5cfde304785406b99471f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:12:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a56e79e15afb00f89347c29f94134a9161e69710e5cfde304785406b99471f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:12:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a56e79e15afb00f89347c29f94134a9161e69710e5cfde304785406b99471f1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:12:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a56e79e15afb00f89347c29f94134a9161e69710e5cfde304785406b99471f1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:12:36 compute-0 podman[403353]: 2025-11-25 17:12:36.565013655 +0000 UTC m=+0.151548997 container init 5079b0ac985ad6d9281312a54b16edb02572465b8ac11039ce126d53461fe400 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_roentgen, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 17:12:36 compute-0 podman[403353]: 2025-11-25 17:12:36.572560363 +0000 UTC m=+0.159095675 container start 5079b0ac985ad6d9281312a54b16edb02572465b8ac11039ce126d53461fe400 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:12:36 compute-0 podman[403353]: 2025-11-25 17:12:36.575670708 +0000 UTC m=+0.162206040 container attach 5079b0ac985ad6d9281312a54b16edb02572465b8ac11039ce126d53461fe400 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:12:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2672: 321 pgs: 321 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 357 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Nov 25 17:12:36 compute-0 nova_compute[254092]: 2025-11-25 17:12:36.838 254096 DEBUG oslo_concurrency.lockutils [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "2f4d2580-5acd-4693-a158-926565a16fe9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:12:36 compute-0 nova_compute[254092]: 2025-11-25 17:12:36.839 254096 DEBUG oslo_concurrency.lockutils [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "2f4d2580-5acd-4693-a158-926565a16fe9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:12:36 compute-0 nova_compute[254092]: 2025-11-25 17:12:36.839 254096 DEBUG oslo_concurrency.lockutils [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "2f4d2580-5acd-4693-a158-926565a16fe9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:12:36 compute-0 nova_compute[254092]: 2025-11-25 17:12:36.840 254096 DEBUG oslo_concurrency.lockutils [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "2f4d2580-5acd-4693-a158-926565a16fe9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:12:36 compute-0 nova_compute[254092]: 2025-11-25 17:12:36.840 254096 DEBUG oslo_concurrency.lockutils [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "2f4d2580-5acd-4693-a158-926565a16fe9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:12:36 compute-0 nova_compute[254092]: 2025-11-25 17:12:36.842 254096 INFO nova.compute.manager [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Terminating instance
Nov 25 17:12:36 compute-0 nova_compute[254092]: 2025-11-25 17:12:36.843 254096 DEBUG nova.compute.manager [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 17:12:36 compute-0 kernel: tap0d520bf2-3b (unregistering): left promiscuous mode
Nov 25 17:12:36 compute-0 NetworkManager[48891]: <info>  [1764090756.9017] device (tap0d520bf2-3b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:12:36 compute-0 nova_compute[254092]: 2025-11-25 17:12:36.928 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:12:36 compute-0 ovn_controller[153477]: 2025-11-25T17:12:36Z|01419|binding|INFO|Releasing lport 0d520bf2-3b97-484c-9046-b614ea281b88 from this chassis (sb_readonly=0)
Nov 25 17:12:36 compute-0 ovn_controller[153477]: 2025-11-25T17:12:36Z|01420|binding|INFO|Setting lport 0d520bf2-3b97-484c-9046-b614ea281b88 down in Southbound
Nov 25 17:12:36 compute-0 ovn_controller[153477]: 2025-11-25T17:12:36Z|01421|binding|INFO|Removing iface tap0d520bf2-3b ovn-installed in OVS
Nov 25 17:12:36 compute-0 nova_compute[254092]: 2025-11-25 17:12:36.943 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:12:36 compute-0 nova_compute[254092]: 2025-11-25 17:12:36.945 254096 DEBUG nova.compute.manager [req-f49faf89-44a1-4b3d-8166-c547fbdc1e9f req-3cbb441a-a564-4919-a1ee-e50af66c90fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Received event network-changed-0d520bf2-3b97-484c-9046-b614ea281b88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:12:36 compute-0 nova_compute[254092]: 2025-11-25 17:12:36.945 254096 DEBUG nova.compute.manager [req-f49faf89-44a1-4b3d-8166-c547fbdc1e9f req-3cbb441a-a564-4919-a1ee-e50af66c90fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Refreshing instance network info cache due to event network-changed-0d520bf2-3b97-484c-9046-b614ea281b88. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:12:36 compute-0 nova_compute[254092]: 2025-11-25 17:12:36.945 254096 DEBUG oslo_concurrency.lockutils [req-f49faf89-44a1-4b3d-8166-c547fbdc1e9f req-3cbb441a-a564-4919-a1ee-e50af66c90fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-2f4d2580-5acd-4693-a158-926565a16fe9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:12:36 compute-0 nova_compute[254092]: 2025-11-25 17:12:36.945 254096 DEBUG oslo_concurrency.lockutils [req-f49faf89-44a1-4b3d-8166-c547fbdc1e9f req-3cbb441a-a564-4919-a1ee-e50af66c90fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-2f4d2580-5acd-4693-a158-926565a16fe9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:12:36 compute-0 nova_compute[254092]: 2025-11-25 17:12:36.945 254096 DEBUG nova.network.neutron [req-f49faf89-44a1-4b3d-8166-c547fbdc1e9f req-3cbb441a-a564-4919-a1ee-e50af66c90fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Refreshing network info cache for port 0d520bf2-3b97-484c-9046-b614ea281b88 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:12:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:12:36.945 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fa:3c:b2 10.100.0.11 2001:db8::f816:3eff:fefa:3cb2'], port_security=['fa:16:3e:fa:3c:b2 10.100.0.11 2001:db8::f816:3eff:fefa:3cb2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28 2001:db8::f816:3eff:fefa:3cb2/64', 'neutron:device_id': '2f4d2580-5acd-4693-a158-926565a16fe9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bd20b9f-24bc-4728-90ff-2870bebbf3f9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1084723a-8c1e-4696-9b12-4faefe06b5c9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2394ea31-774c-4cab-8186-73ed87450ca6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=0d520bf2-3b97-484c-9046-b614ea281b88) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:12:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:12:36.947 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 0d520bf2-3b97-484c-9046-b614ea281b88 in datapath 0bd20b9f-24bc-4728-90ff-2870bebbf3f9 unbound from our chassis
Nov 25 17:12:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:12:36.948 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0bd20b9f-24bc-4728-90ff-2870bebbf3f9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 17:12:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:12:36.952 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[384ccfc8-4c67-4b69-b245-e83c0f9a4ba9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:12:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:12:36.952 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9 namespace which is not needed anymore
Nov 25 17:12:36 compute-0 nova_compute[254092]: 2025-11-25 17:12:36.963 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:12:36 compute-0 systemd[1]: machine-qemu\x2d169\x2dinstance\x2d00000087.scope: Deactivated successfully.
Nov 25 17:12:36 compute-0 systemd[1]: machine-qemu\x2d169\x2dinstance\x2d00000087.scope: Consumed 16.143s CPU time.
Nov 25 17:12:36 compute-0 systemd-machined[216343]: Machine qemu-169-instance-00000087 terminated.
Nov 25 17:12:37 compute-0 nova_compute[254092]: 2025-11-25 17:12:37.069 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:12:37 compute-0 nova_compute[254092]: 2025-11-25 17:12:37.073 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:12:37 compute-0 nova_compute[254092]: 2025-11-25 17:12:37.092 254096 INFO nova.virt.libvirt.driver [-] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Instance destroyed successfully.
Nov 25 17:12:37 compute-0 nova_compute[254092]: 2025-11-25 17:12:37.093 254096 DEBUG nova.objects.instance [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'resources' on Instance uuid 2f4d2580-5acd-4693-a158-926565a16fe9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:12:37 compute-0 nova_compute[254092]: 2025-11-25 17:12:37.105 254096 DEBUG nova.virt.libvirt.vif [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:11:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-318572700',display_name='tempest-TestGettingAddress-server-318572700',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-318572700',id=135,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG7thEmoyKozSV70fSmXIKBzdgJECHN3GnSKzpZMf9In3z/LeK0Fm0kaAtX5kv64CasJMe/8BiJ7VhFgmrFP6y7poqA3BuRl/GzlLwom1v+Q2Cb33Se+uUu6cs+H/0PS7w==',key_name='tempest-TestGettingAddress-844897793',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:11:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-6ty6vm7r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:11:23Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=2f4d2580-5acd-4693-a158-926565a16fe9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0d520bf2-3b97-484c-9046-b614ea281b88", "address": "fa:16:3e:fa:3c:b2", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefa:3cb2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d520bf2-3b", "ovs_interfaceid": "0d520bf2-3b97-484c-9046-b614ea281b88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:12:37 compute-0 nova_compute[254092]: 2025-11-25 17:12:37.105 254096 DEBUG nova.network.os_vif_util [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "0d520bf2-3b97-484c-9046-b614ea281b88", "address": "fa:16:3e:fa:3c:b2", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefa:3cb2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d520bf2-3b", "ovs_interfaceid": "0d520bf2-3b97-484c-9046-b614ea281b88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:12:37 compute-0 nova_compute[254092]: 2025-11-25 17:12:37.106 254096 DEBUG nova.network.os_vif_util [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fa:3c:b2,bridge_name='br-int',has_traffic_filtering=True,id=0d520bf2-3b97-484c-9046-b614ea281b88,network=Network(0bd20b9f-24bc-4728-90ff-2870bebbf3f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d520bf2-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:12:37 compute-0 nova_compute[254092]: 2025-11-25 17:12:37.106 254096 DEBUG os_vif [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fa:3c:b2,bridge_name='br-int',has_traffic_filtering=True,id=0d520bf2-3b97-484c-9046-b614ea281b88,network=Network(0bd20b9f-24bc-4728-90ff-2870bebbf3f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d520bf2-3b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:12:37 compute-0 nova_compute[254092]: 2025-11-25 17:12:37.109 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:12:37 compute-0 nova_compute[254092]: 2025-11-25 17:12:37.109 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0d520bf2-3b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:12:37 compute-0 neutron-haproxy-ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9[401458]: [NOTICE]   (401462) : haproxy version is 2.8.14-c23fe91
Nov 25 17:12:37 compute-0 neutron-haproxy-ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9[401458]: [NOTICE]   (401462) : path to executable is /usr/sbin/haproxy
Nov 25 17:12:37 compute-0 neutron-haproxy-ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9[401458]: [WARNING]  (401462) : Exiting Master process...
Nov 25 17:12:37 compute-0 nova_compute[254092]: 2025-11-25 17:12:37.111 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:12:37 compute-0 nova_compute[254092]: 2025-11-25 17:12:37.113 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:12:37 compute-0 neutron-haproxy-ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9[401458]: [WARNING]  (401462) : Exiting Master process...
Nov 25 17:12:37 compute-0 neutron-haproxy-ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9[401458]: [ALERT]    (401462) : Current worker (401464) exited with code 143 (Terminated)
Nov 25 17:12:37 compute-0 neutron-haproxy-ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9[401458]: [WARNING]  (401462) : All workers exited. Exiting... (0)
Nov 25 17:12:37 compute-0 nova_compute[254092]: 2025-11-25 17:12:37.116 254096 INFO os_vif [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fa:3c:b2,bridge_name='br-int',has_traffic_filtering=True,id=0d520bf2-3b97-484c-9046-b614ea281b88,network=Network(0bd20b9f-24bc-4728-90ff-2870bebbf3f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d520bf2-3b')
Nov 25 17:12:37 compute-0 systemd[1]: libpod-3752d9501be308a3d4c6f6e8f97ba158e419318155395cef8a5e28ef5415043d.scope: Deactivated successfully.
Nov 25 17:12:37 compute-0 podman[403398]: 2025-11-25 17:12:37.123237736 +0000 UTC m=+0.054393572 container died 3752d9501be308a3d4c6f6e8f97ba158e419318155395cef8a5e28ef5415043d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 17:12:37 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3752d9501be308a3d4c6f6e8f97ba158e419318155395cef8a5e28ef5415043d-userdata-shm.mount: Deactivated successfully.
Nov 25 17:12:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-bde7a71fe41d6784c5b2181a1c90baa9c83c45304d06a23693e4f44a74edde50-merged.mount: Deactivated successfully.
Nov 25 17:12:37 compute-0 podman[403398]: 2025-11-25 17:12:37.208900416 +0000 UTC m=+0.140056252 container cleanup 3752d9501be308a3d4c6f6e8f97ba158e419318155395cef8a5e28ef5415043d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:12:37 compute-0 systemd[1]: libpod-conmon-3752d9501be308a3d4c6f6e8f97ba158e419318155395cef8a5e28ef5415043d.scope: Deactivated successfully.
Nov 25 17:12:37 compute-0 podman[403458]: 2025-11-25 17:12:37.305001472 +0000 UTC m=+0.064900601 container remove 3752d9501be308a3d4c6f6e8f97ba158e419318155395cef8a5e28ef5415043d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 25 17:12:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:12:37.311 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4e5a0951-a269-4624-b8b1-cbe6bca75f13]: (4, ('Tue Nov 25 05:12:37 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9 (3752d9501be308a3d4c6f6e8f97ba158e419318155395cef8a5e28ef5415043d)\n3752d9501be308a3d4c6f6e8f97ba158e419318155395cef8a5e28ef5415043d\nTue Nov 25 05:12:37 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9 (3752d9501be308a3d4c6f6e8f97ba158e419318155395cef8a5e28ef5415043d)\n3752d9501be308a3d4c6f6e8f97ba158e419318155395cef8a5e28ef5415043d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:12:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:12:37.314 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ef35d81f-fb7c-4bc6-8d64-6b4bc6530b40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:12:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:12:37.315 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0bd20b9f-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:12:37 compute-0 nova_compute[254092]: 2025-11-25 17:12:37.317 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:12:37 compute-0 kernel: tap0bd20b9f-20: left promiscuous mode
Nov 25 17:12:37 compute-0 nova_compute[254092]: 2025-11-25 17:12:37.331 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:12:37 compute-0 nova_compute[254092]: 2025-11-25 17:12:37.334 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:12:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:12:37.337 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1560f67d-eb00-46ae-ad66-d067b24c2a32]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:12:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:12:37.353 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e5289313-fdc6-4a4b-a311-30ceda63ee9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:12:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:12:37.355 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d0a2dc19-6cb3-44de-8f71-a428c8b1ee4c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:12:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:12:37.375 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c1dbdf4c-c08e-462a-804a-24733ed975f1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 714025, 'reachable_time': 30225, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 403476, 'error': None, 'target': 'ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:12:37 compute-0 systemd[1]: run-netns-ovnmeta\x2d0bd20b9f\x2d24bc\x2d4728\x2d90ff\x2d2870bebbf3f9.mount: Deactivated successfully.
Nov 25 17:12:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:12:37.380 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0bd20b9f-24bc-4728-90ff-2870bebbf3f9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 17:12:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:12:37.380 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[baa12052-2e94-4a42-bfb2-856462ff6152]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:12:37 compute-0 vigilant_roentgen[403369]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:12:37 compute-0 vigilant_roentgen[403369]: --> relative data size: 1.0
Nov 25 17:12:37 compute-0 vigilant_roentgen[403369]: --> All data devices are unavailable
Nov 25 17:12:37 compute-0 systemd[1]: libpod-5079b0ac985ad6d9281312a54b16edb02572465b8ac11039ce126d53461fe400.scope: Deactivated successfully.
Nov 25 17:12:37 compute-0 systemd[1]: libpod-5079b0ac985ad6d9281312a54b16edb02572465b8ac11039ce126d53461fe400.scope: Consumed 1.067s CPU time.
Nov 25 17:12:37 compute-0 podman[403353]: 2025-11-25 17:12:37.752099365 +0000 UTC m=+1.338634697 container died 5079b0ac985ad6d9281312a54b16edb02572465b8ac11039ce126d53461fe400 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:12:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a56e79e15afb00f89347c29f94134a9161e69710e5cfde304785406b99471f1-merged.mount: Deactivated successfully.
Nov 25 17:12:37 compute-0 nova_compute[254092]: 2025-11-25 17:12:37.815 254096 INFO nova.virt.libvirt.driver [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Deleting instance files /var/lib/nova/instances/2f4d2580-5acd-4693-a158-926565a16fe9_del
Nov 25 17:12:37 compute-0 nova_compute[254092]: 2025-11-25 17:12:37.817 254096 INFO nova.virt.libvirt.driver [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Deletion of /var/lib/nova/instances/2f4d2580-5acd-4693-a158-926565a16fe9_del complete
Nov 25 17:12:37 compute-0 podman[403353]: 2025-11-25 17:12:37.857402664 +0000 UTC m=+1.443937976 container remove 5079b0ac985ad6d9281312a54b16edb02572465b8ac11039ce126d53461fe400 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:12:37 compute-0 ceph-mon[74985]: pgmap v2672: 321 pgs: 321 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 357 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Nov 25 17:12:37 compute-0 systemd[1]: libpod-conmon-5079b0ac985ad6d9281312a54b16edb02572465b8ac11039ce126d53461fe400.scope: Deactivated successfully.
Nov 25 17:12:37 compute-0 nova_compute[254092]: 2025-11-25 17:12:37.864 254096 INFO nova.compute.manager [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Took 1.02 seconds to destroy the instance on the hypervisor.
Nov 25 17:12:37 compute-0 nova_compute[254092]: 2025-11-25 17:12:37.865 254096 DEBUG oslo.service.loopingcall [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 17:12:37 compute-0 nova_compute[254092]: 2025-11-25 17:12:37.865 254096 DEBUG nova.compute.manager [-] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 17:12:37 compute-0 nova_compute[254092]: 2025-11-25 17:12:37.865 254096 DEBUG nova.network.neutron [-] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 17:12:37 compute-0 sudo[403246]: pam_unix(sudo:session): session closed for user root
Nov 25 17:12:37 compute-0 sudo[403513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:12:37 compute-0 sudo[403513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:12:37 compute-0 sudo[403513]: pam_unix(sudo:session): session closed for user root
Nov 25 17:12:37 compute-0 sudo[403538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:12:37 compute-0 sudo[403538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:12:38 compute-0 sudo[403538]: pam_unix(sudo:session): session closed for user root
Nov 25 17:12:38 compute-0 sudo[403563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:12:38 compute-0 sudo[403563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:12:38 compute-0 sudo[403563]: pam_unix(sudo:session): session closed for user root
Nov 25 17:12:38 compute-0 sudo[403588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:12:38 compute-0 sudo[403588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:12:38 compute-0 podman[403652]: 2025-11-25 17:12:38.41216955 +0000 UTC m=+0.038734714 container create 6320a25bd8874826ac72e18d0ca5ae3bda13287cb9ca94992eb751b334abcdcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_antonelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 17:12:38 compute-0 systemd[1]: Started libpod-conmon-6320a25bd8874826ac72e18d0ca5ae3bda13287cb9ca94992eb751b334abcdcf.scope.
Nov 25 17:12:38 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:12:38 compute-0 podman[403652]: 2025-11-25 17:12:38.395288077 +0000 UTC m=+0.021853281 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:12:38 compute-0 podman[403652]: 2025-11-25 17:12:38.493233804 +0000 UTC m=+0.119798998 container init 6320a25bd8874826ac72e18d0ca5ae3bda13287cb9ca94992eb751b334abcdcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_antonelli, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:12:38 compute-0 nova_compute[254092]: 2025-11-25 17:12:38.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:12:38 compute-0 nova_compute[254092]: 2025-11-25 17:12:38.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:12:38 compute-0 podman[403652]: 2025-11-25 17:12:38.501406837 +0000 UTC m=+0.127972011 container start 6320a25bd8874826ac72e18d0ca5ae3bda13287cb9ca94992eb751b334abcdcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_antonelli, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 17:12:38 compute-0 podman[403652]: 2025-11-25 17:12:38.504241054 +0000 UTC m=+0.130806228 container attach 6320a25bd8874826ac72e18d0ca5ae3bda13287cb9ca94992eb751b334abcdcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_antonelli, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 17:12:38 compute-0 nice_antonelli[403668]: 167 167
Nov 25 17:12:38 compute-0 systemd[1]: libpod-6320a25bd8874826ac72e18d0ca5ae3bda13287cb9ca94992eb751b334abcdcf.scope: Deactivated successfully.
Nov 25 17:12:38 compute-0 podman[403652]: 2025-11-25 17:12:38.507510255 +0000 UTC m=+0.134075439 container died 6320a25bd8874826ac72e18d0ca5ae3bda13287cb9ca94992eb751b334abcdcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_antonelli, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:12:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b612afd5f55b63cdf076e65c56fe61eae4e06409569c069a496db742b3019fa-merged.mount: Deactivated successfully.
Nov 25 17:12:38 compute-0 podman[403652]: 2025-11-25 17:12:38.541838106 +0000 UTC m=+0.168403280 container remove 6320a25bd8874826ac72e18d0ca5ae3bda13287cb9ca94992eb751b334abcdcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:12:38 compute-0 systemd[1]: libpod-conmon-6320a25bd8874826ac72e18d0ca5ae3bda13287cb9ca94992eb751b334abcdcf.scope: Deactivated successfully.
Nov 25 17:12:38 compute-0 podman[403692]: 2025-11-25 17:12:38.751726733 +0000 UTC m=+0.077406554 container create 4bba791ddd008bd36e7754f4c393d9ad7b55aa2ea8822dc61989b6a7bd9b714b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_nash, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 17:12:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2673: 321 pgs: 321 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 14 KiB/s wr, 29 op/s
Nov 25 17:12:38 compute-0 systemd[1]: Started libpod-conmon-4bba791ddd008bd36e7754f4c393d9ad7b55aa2ea8822dc61989b6a7bd9b714b.scope.
Nov 25 17:12:38 compute-0 podman[403692]: 2025-11-25 17:12:38.723684434 +0000 UTC m=+0.049364355 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:12:38 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:12:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6e0dfa6d800d8ef636d70702086c38c5351a79ebabfdd4d4edfe14814583356/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:12:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6e0dfa6d800d8ef636d70702086c38c5351a79ebabfdd4d4edfe14814583356/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:12:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6e0dfa6d800d8ef636d70702086c38c5351a79ebabfdd4d4edfe14814583356/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:12:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6e0dfa6d800d8ef636d70702086c38c5351a79ebabfdd4d4edfe14814583356/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:12:38 compute-0 podman[403692]: 2025-11-25 17:12:38.887041304 +0000 UTC m=+0.212721155 container init 4bba791ddd008bd36e7754f4c393d9ad7b55aa2ea8822dc61989b6a7bd9b714b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_nash, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 17:12:38 compute-0 podman[403692]: 2025-11-25 17:12:38.900536905 +0000 UTC m=+0.226216816 container start 4bba791ddd008bd36e7754f4c393d9ad7b55aa2ea8822dc61989b6a7bd9b714b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_nash, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 17:12:38 compute-0 podman[403692]: 2025-11-25 17:12:38.906228851 +0000 UTC m=+0.231908682 container attach 4bba791ddd008bd36e7754f4c393d9ad7b55aa2ea8822dc61989b6a7bd9b714b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:12:38 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:12:39 compute-0 nova_compute[254092]: 2025-11-25 17:12:39.048 254096 DEBUG nova.compute.manager [req-4fdb7906-ab46-4328-905e-e93e1c38d162 req-e35d0316-73fa-405d-a212-b8faaef74a91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Received event network-vif-unplugged-0d520bf2-3b97-484c-9046-b614ea281b88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:12:39 compute-0 nova_compute[254092]: 2025-11-25 17:12:39.049 254096 DEBUG oslo_concurrency.lockutils [req-4fdb7906-ab46-4328-905e-e93e1c38d162 req-e35d0316-73fa-405d-a212-b8faaef74a91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2f4d2580-5acd-4693-a158-926565a16fe9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:12:39 compute-0 nova_compute[254092]: 2025-11-25 17:12:39.050 254096 DEBUG oslo_concurrency.lockutils [req-4fdb7906-ab46-4328-905e-e93e1c38d162 req-e35d0316-73fa-405d-a212-b8faaef74a91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2f4d2580-5acd-4693-a158-926565a16fe9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:12:39 compute-0 nova_compute[254092]: 2025-11-25 17:12:39.050 254096 DEBUG oslo_concurrency.lockutils [req-4fdb7906-ab46-4328-905e-e93e1c38d162 req-e35d0316-73fa-405d-a212-b8faaef74a91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2f4d2580-5acd-4693-a158-926565a16fe9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:12:39 compute-0 nova_compute[254092]: 2025-11-25 17:12:39.050 254096 DEBUG nova.compute.manager [req-4fdb7906-ab46-4328-905e-e93e1c38d162 req-e35d0316-73fa-405d-a212-b8faaef74a91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] No waiting events found dispatching network-vif-unplugged-0d520bf2-3b97-484c-9046-b614ea281b88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:12:39 compute-0 nova_compute[254092]: 2025-11-25 17:12:39.050 254096 DEBUG nova.compute.manager [req-4fdb7906-ab46-4328-905e-e93e1c38d162 req-e35d0316-73fa-405d-a212-b8faaef74a91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Received event network-vif-unplugged-0d520bf2-3b97-484c-9046-b614ea281b88 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 17:12:39 compute-0 nova_compute[254092]: 2025-11-25 17:12:39.051 254096 DEBUG nova.compute.manager [req-4fdb7906-ab46-4328-905e-e93e1c38d162 req-e35d0316-73fa-405d-a212-b8faaef74a91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Received event network-vif-plugged-0d520bf2-3b97-484c-9046-b614ea281b88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:12:39 compute-0 nova_compute[254092]: 2025-11-25 17:12:39.051 254096 DEBUG oslo_concurrency.lockutils [req-4fdb7906-ab46-4328-905e-e93e1c38d162 req-e35d0316-73fa-405d-a212-b8faaef74a91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "2f4d2580-5acd-4693-a158-926565a16fe9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:12:39 compute-0 nova_compute[254092]: 2025-11-25 17:12:39.051 254096 DEBUG oslo_concurrency.lockutils [req-4fdb7906-ab46-4328-905e-e93e1c38d162 req-e35d0316-73fa-405d-a212-b8faaef74a91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2f4d2580-5acd-4693-a158-926565a16fe9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:12:39 compute-0 nova_compute[254092]: 2025-11-25 17:12:39.051 254096 DEBUG oslo_concurrency.lockutils [req-4fdb7906-ab46-4328-905e-e93e1c38d162 req-e35d0316-73fa-405d-a212-b8faaef74a91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "2f4d2580-5acd-4693-a158-926565a16fe9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:12:39 compute-0 nova_compute[254092]: 2025-11-25 17:12:39.051 254096 DEBUG nova.compute.manager [req-4fdb7906-ab46-4328-905e-e93e1c38d162 req-e35d0316-73fa-405d-a212-b8faaef74a91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] No waiting events found dispatching network-vif-plugged-0d520bf2-3b97-484c-9046-b614ea281b88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:12:39 compute-0 nova_compute[254092]: 2025-11-25 17:12:39.052 254096 WARNING nova.compute.manager [req-4fdb7906-ab46-4328-905e-e93e1c38d162 req-e35d0316-73fa-405d-a212-b8faaef74a91 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Received unexpected event network-vif-plugged-0d520bf2-3b97-484c-9046-b614ea281b88 for instance with vm_state active and task_state deleting.
Nov 25 17:12:39 compute-0 nova_compute[254092]: 2025-11-25 17:12:39.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:12:39 compute-0 priceless_nash[403708]: {
Nov 25 17:12:39 compute-0 priceless_nash[403708]:     "0": [
Nov 25 17:12:39 compute-0 priceless_nash[403708]:         {
Nov 25 17:12:39 compute-0 priceless_nash[403708]:             "devices": [
Nov 25 17:12:39 compute-0 priceless_nash[403708]:                 "/dev/loop3"
Nov 25 17:12:39 compute-0 priceless_nash[403708]:             ],
Nov 25 17:12:39 compute-0 priceless_nash[403708]:             "lv_name": "ceph_lv0",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:             "lv_size": "21470642176",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:             "name": "ceph_lv0",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:             "tags": {
Nov 25 17:12:39 compute-0 priceless_nash[403708]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:                 "ceph.cluster_name": "ceph",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:                 "ceph.crush_device_class": "",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:                 "ceph.encrypted": "0",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:                 "ceph.osd_id": "0",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:                 "ceph.type": "block",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:                 "ceph.vdo": "0"
Nov 25 17:12:39 compute-0 priceless_nash[403708]:             },
Nov 25 17:12:39 compute-0 priceless_nash[403708]:             "type": "block",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:             "vg_name": "ceph_vg0"
Nov 25 17:12:39 compute-0 priceless_nash[403708]:         }
Nov 25 17:12:39 compute-0 priceless_nash[403708]:     ],
Nov 25 17:12:39 compute-0 priceless_nash[403708]:     "1": [
Nov 25 17:12:39 compute-0 priceless_nash[403708]:         {
Nov 25 17:12:39 compute-0 priceless_nash[403708]:             "devices": [
Nov 25 17:12:39 compute-0 priceless_nash[403708]:                 "/dev/loop4"
Nov 25 17:12:39 compute-0 priceless_nash[403708]:             ],
Nov 25 17:12:39 compute-0 priceless_nash[403708]:             "lv_name": "ceph_lv1",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:             "lv_size": "21470642176",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:             "name": "ceph_lv1",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:             "tags": {
Nov 25 17:12:39 compute-0 priceless_nash[403708]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:                 "ceph.cluster_name": "ceph",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:                 "ceph.crush_device_class": "",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:                 "ceph.encrypted": "0",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:                 "ceph.osd_id": "1",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:                 "ceph.type": "block",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:                 "ceph.vdo": "0"
Nov 25 17:12:39 compute-0 priceless_nash[403708]:             },
Nov 25 17:12:39 compute-0 priceless_nash[403708]:             "type": "block",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:             "vg_name": "ceph_vg1"
Nov 25 17:12:39 compute-0 priceless_nash[403708]:         }
Nov 25 17:12:39 compute-0 priceless_nash[403708]:     ],
Nov 25 17:12:39 compute-0 priceless_nash[403708]:     "2": [
Nov 25 17:12:39 compute-0 priceless_nash[403708]:         {
Nov 25 17:12:39 compute-0 priceless_nash[403708]:             "devices": [
Nov 25 17:12:39 compute-0 priceless_nash[403708]:                 "/dev/loop5"
Nov 25 17:12:39 compute-0 priceless_nash[403708]:             ],
Nov 25 17:12:39 compute-0 priceless_nash[403708]:             "lv_name": "ceph_lv2",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:             "lv_size": "21470642176",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:             "name": "ceph_lv2",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:             "tags": {
Nov 25 17:12:39 compute-0 priceless_nash[403708]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:                 "ceph.cluster_name": "ceph",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:                 "ceph.crush_device_class": "",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:                 "ceph.encrypted": "0",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:                 "ceph.osd_id": "2",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:                 "ceph.type": "block",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:                 "ceph.vdo": "0"
Nov 25 17:12:39 compute-0 priceless_nash[403708]:             },
Nov 25 17:12:39 compute-0 priceless_nash[403708]:             "type": "block",
Nov 25 17:12:39 compute-0 priceless_nash[403708]:             "vg_name": "ceph_vg2"
Nov 25 17:12:39 compute-0 priceless_nash[403708]:         }
Nov 25 17:12:39 compute-0 priceless_nash[403708]:     ]
Nov 25 17:12:39 compute-0 priceless_nash[403708]: }
Nov 25 17:12:39 compute-0 systemd[1]: libpod-4bba791ddd008bd36e7754f4c393d9ad7b55aa2ea8822dc61989b6a7bd9b714b.scope: Deactivated successfully.
Nov 25 17:12:39 compute-0 podman[403692]: 2025-11-25 17:12:39.723286021 +0000 UTC m=+1.048965842 container died 4bba791ddd008bd36e7754f4c393d9ad7b55aa2ea8822dc61989b6a7bd9b714b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_nash, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:12:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6e0dfa6d800d8ef636d70702086c38c5351a79ebabfdd4d4edfe14814583356-merged.mount: Deactivated successfully.
Nov 25 17:12:39 compute-0 podman[403692]: 2025-11-25 17:12:39.808074097 +0000 UTC m=+1.133753928 container remove 4bba791ddd008bd36e7754f4c393d9ad7b55aa2ea8822dc61989b6a7bd9b714b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 17:12:39 compute-0 nova_compute[254092]: 2025-11-25 17:12:39.810 254096 DEBUG nova.network.neutron [-] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:12:39 compute-0 systemd[1]: libpod-conmon-4bba791ddd008bd36e7754f4c393d9ad7b55aa2ea8822dc61989b6a7bd9b714b.scope: Deactivated successfully.
Nov 25 17:12:39 compute-0 nova_compute[254092]: 2025-11-25 17:12:39.831 254096 INFO nova.compute.manager [-] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Took 1.97 seconds to deallocate network for instance.
Nov 25 17:12:39 compute-0 sudo[403588]: pam_unix(sudo:session): session closed for user root
Nov 25 17:12:39 compute-0 ceph-mon[74985]: pgmap v2673: 321 pgs: 321 active+clean; 121 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 14 KiB/s wr, 29 op/s
Nov 25 17:12:39 compute-0 nova_compute[254092]: 2025-11-25 17:12:39.879 254096 DEBUG oslo_concurrency.lockutils [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:12:39 compute-0 nova_compute[254092]: 2025-11-25 17:12:39.879 254096 DEBUG oslo_concurrency.lockutils [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:12:39 compute-0 sudo[403731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:12:39 compute-0 sudo[403731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:12:39 compute-0 nova_compute[254092]: 2025-11-25 17:12:39.914 254096 DEBUG oslo_concurrency.processutils [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:12:39 compute-0 sudo[403731]: pam_unix(sudo:session): session closed for user root
Nov 25 17:12:39 compute-0 sudo[403756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:12:39 compute-0 sudo[403756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:12:39 compute-0 sudo[403756]: pam_unix(sudo:session): session closed for user root
Nov 25 17:12:40 compute-0 sudo[403782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:12:40 compute-0 sudo[403782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:12:40 compute-0 sudo[403782]: pam_unix(sudo:session): session closed for user root
Nov 25 17:12:40 compute-0 nova_compute[254092]: 2025-11-25 17:12:40.074 254096 DEBUG nova.network.neutron [req-f49faf89-44a1-4b3d-8166-c547fbdc1e9f req-3cbb441a-a564-4919-a1ee-e50af66c90fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Updated VIF entry in instance network info cache for port 0d520bf2-3b97-484c-9046-b614ea281b88. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:12:40 compute-0 nova_compute[254092]: 2025-11-25 17:12:40.075 254096 DEBUG nova.network.neutron [req-f49faf89-44a1-4b3d-8166-c547fbdc1e9f req-3cbb441a-a564-4919-a1ee-e50af66c90fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Updating instance_info_cache with network_info: [{"id": "0d520bf2-3b97-484c-9046-b614ea281b88", "address": "fa:16:3e:fa:3c:b2", "network": {"id": "0bd20b9f-24bc-4728-90ff-2870bebbf3f9", "bridge": "br-int", "label": "tempest-network-smoke--1136558495", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fefa:3cb2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d520bf2-3b", "ovs_interfaceid": "0d520bf2-3b97-484c-9046-b614ea281b88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:12:40 compute-0 sudo[403807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:12:40 compute-0 sudo[403807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:12:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:12:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:12:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:12:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:12:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:12:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:12:40 compute-0 nova_compute[254092]: 2025-11-25 17:12:40.092 254096 DEBUG oslo_concurrency.lockutils [req-f49faf89-44a1-4b3d-8166-c547fbdc1e9f req-3cbb441a-a564-4919-a1ee-e50af66c90fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-2f4d2580-5acd-4693-a158-926565a16fe9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:12:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:12:40
Nov 25 17:12:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:12:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:12:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.meta', 'images', 'backups', '.mgr', 'volumes', '.rgw.root', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control']
Nov 25 17:12:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:12:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:12:40 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1306141709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:12:40 compute-0 nova_compute[254092]: 2025-11-25 17:12:40.370 254096 DEBUG oslo_concurrency.processutils [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:12:40 compute-0 nova_compute[254092]: 2025-11-25 17:12:40.376 254096 DEBUG nova.compute.provider_tree [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:12:40 compute-0 nova_compute[254092]: 2025-11-25 17:12:40.391 254096 DEBUG nova.scheduler.client.report [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:12:40 compute-0 nova_compute[254092]: 2025-11-25 17:12:40.408 254096 DEBUG oslo_concurrency.lockutils [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.529s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:12:40 compute-0 podman[403892]: 2025-11-25 17:12:40.419161578 +0000 UTC m=+0.038173709 container create afa908b99b6418df83778f6a0694a70cab3b94648ee6983452ba2de4ee4f40b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kare, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 17:12:40 compute-0 nova_compute[254092]: 2025-11-25 17:12:40.452 254096 INFO nova.scheduler.client.report [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Deleted allocations for instance 2f4d2580-5acd-4693-a158-926565a16fe9
Nov 25 17:12:40 compute-0 systemd[1]: Started libpod-conmon-afa908b99b6418df83778f6a0694a70cab3b94648ee6983452ba2de4ee4f40b0.scope.
Nov 25 17:12:40 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:12:40 compute-0 nova_compute[254092]: 2025-11-25 17:12:40.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:12:40 compute-0 podman[403892]: 2025-11-25 17:12:40.401906034 +0000 UTC m=+0.020918185 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:12:40 compute-0 podman[403892]: 2025-11-25 17:12:40.505199798 +0000 UTC m=+0.124211949 container init afa908b99b6418df83778f6a0694a70cab3b94648ee6983452ba2de4ee4f40b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kare, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:12:40 compute-0 podman[403892]: 2025-11-25 17:12:40.518471851 +0000 UTC m=+0.137483982 container start afa908b99b6418df83778f6a0694a70cab3b94648ee6983452ba2de4ee4f40b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kare, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:12:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:12:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:12:40 compute-0 podman[403892]: 2025-11-25 17:12:40.522630706 +0000 UTC m=+0.141642857 container attach afa908b99b6418df83778f6a0694a70cab3b94648ee6983452ba2de4ee4f40b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kare, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 17:12:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:12:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:12:40 compute-0 pensive_kare[403908]: 167 167
Nov 25 17:12:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:12:40 compute-0 systemd[1]: libpod-afa908b99b6418df83778f6a0694a70cab3b94648ee6983452ba2de4ee4f40b0.scope: Deactivated successfully.
Nov 25 17:12:40 compute-0 podman[403892]: 2025-11-25 17:12:40.525479354 +0000 UTC m=+0.144491485 container died afa908b99b6418df83778f6a0694a70cab3b94648ee6983452ba2de4ee4f40b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:12:40 compute-0 nova_compute[254092]: 2025-11-25 17:12:40.529 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:12:40 compute-0 nova_compute[254092]: 2025-11-25 17:12:40.529 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:12:40 compute-0 nova_compute[254092]: 2025-11-25 17:12:40.530 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:12:40 compute-0 nova_compute[254092]: 2025-11-25 17:12:40.530 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:12:40 compute-0 nova_compute[254092]: 2025-11-25 17:12:40.530 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:12:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-6634ec1057330b0912cf6541a7d94e814f1b36148f4d33e93de9410263ade52c-merged.mount: Deactivated successfully.
Nov 25 17:12:40 compute-0 podman[403892]: 2025-11-25 17:12:40.561735768 +0000 UTC m=+0.180747899 container remove afa908b99b6418df83778f6a0694a70cab3b94648ee6983452ba2de4ee4f40b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kare, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:12:40 compute-0 nova_compute[254092]: 2025-11-25 17:12:40.568 254096 DEBUG oslo_concurrency.lockutils [None req-d6a4040c-05ca-44ce-8a93-a450ebb9ff87 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "2f4d2580-5acd-4693-a158-926565a16fe9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:12:40 compute-0 systemd[1]: libpod-conmon-afa908b99b6418df83778f6a0694a70cab3b94648ee6983452ba2de4ee4f40b0.scope: Deactivated successfully.
Nov 25 17:12:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:12:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:12:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:12:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:12:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:12:40 compute-0 nova_compute[254092]: 2025-11-25 17:12:40.657 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:12:40 compute-0 podman[403952]: 2025-11-25 17:12:40.726115127 +0000 UTC m=+0.043348610 container create 6e85292346d715f2ff5e436e5679e447e47e81a0f397739dbe7b9b256f282aa0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bohr, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:12:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2674: 321 pgs: 321 active+clean; 59 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 15 KiB/s wr, 55 op/s
Nov 25 17:12:40 compute-0 systemd[1]: Started libpod-conmon-6e85292346d715f2ff5e436e5679e447e47e81a0f397739dbe7b9b256f282aa0.scope.
Nov 25 17:12:40 compute-0 podman[403952]: 2025-11-25 17:12:40.706342605 +0000 UTC m=+0.023576108 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:12:40 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:12:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a300d751656ae0a3a0a11315127a9ca7233662013e2536ff42019a03a7042d27/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:12:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a300d751656ae0a3a0a11315127a9ca7233662013e2536ff42019a03a7042d27/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:12:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a300d751656ae0a3a0a11315127a9ca7233662013e2536ff42019a03a7042d27/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:12:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a300d751656ae0a3a0a11315127a9ca7233662013e2536ff42019a03a7042d27/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:12:40 compute-0 podman[403952]: 2025-11-25 17:12:40.826310284 +0000 UTC m=+0.143543797 container init 6e85292346d715f2ff5e436e5679e447e47e81a0f397739dbe7b9b256f282aa0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:12:40 compute-0 podman[403952]: 2025-11-25 17:12:40.83454026 +0000 UTC m=+0.151773743 container start 6e85292346d715f2ff5e436e5679e447e47e81a0f397739dbe7b9b256f282aa0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bohr, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:12:40 compute-0 podman[403952]: 2025-11-25 17:12:40.837726508 +0000 UTC m=+0.154959991 container attach 6e85292346d715f2ff5e436e5679e447e47e81a0f397739dbe7b9b256f282aa0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 17:12:40 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1306141709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:12:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:12:40 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2083821883' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:12:40 compute-0 nova_compute[254092]: 2025-11-25 17:12:40.981 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:12:41 compute-0 nova_compute[254092]: 2025-11-25 17:12:41.129 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:12:41 compute-0 nova_compute[254092]: 2025-11-25 17:12:41.132 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3652MB free_disk=59.98165512084961GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:12:41 compute-0 nova_compute[254092]: 2025-11-25 17:12:41.132 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:12:41 compute-0 nova_compute[254092]: 2025-11-25 17:12:41.133 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:12:41 compute-0 nova_compute[254092]: 2025-11-25 17:12:41.136 254096 DEBUG nova.compute.manager [req-788f0113-368b-4c7f-9537-890b0d7018be req-5927bab7-bb60-4ac4-830f-9344ecfb0059 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Received event network-vif-deleted-0d520bf2-3b97-484c-9046-b614ea281b88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:12:41 compute-0 nova_compute[254092]: 2025-11-25 17:12:41.137 254096 INFO nova.compute.manager [req-788f0113-368b-4c7f-9537-890b0d7018be req-5927bab7-bb60-4ac4-830f-9344ecfb0059 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Neutron deleted interface 0d520bf2-3b97-484c-9046-b614ea281b88; detaching it from the instance and deleting it from the info cache
Nov 25 17:12:41 compute-0 nova_compute[254092]: 2025-11-25 17:12:41.137 254096 DEBUG nova.network.neutron [req-788f0113-368b-4c7f-9537-890b0d7018be req-5927bab7-bb60-4ac4-830f-9344ecfb0059 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Nov 25 17:12:41 compute-0 nova_compute[254092]: 2025-11-25 17:12:41.139 254096 DEBUG nova.compute.manager [req-788f0113-368b-4c7f-9537-890b0d7018be req-5927bab7-bb60-4ac4-830f-9344ecfb0059 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Detach interface failed, port_id=0d520bf2-3b97-484c-9046-b614ea281b88, reason: Instance 2f4d2580-5acd-4693-a158-926565a16fe9 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 25 17:12:41 compute-0 nova_compute[254092]: 2025-11-25 17:12:41.174 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:12:41 compute-0 nova_compute[254092]: 2025-11-25 17:12:41.175 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:12:41 compute-0 nova_compute[254092]: 2025-11-25 17:12:41.188 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:12:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:12:41 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2668610908' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:12:41 compute-0 nova_compute[254092]: 2025-11-25 17:12:41.657 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:12:41 compute-0 nova_compute[254092]: 2025-11-25 17:12:41.664 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:12:41 compute-0 nova_compute[254092]: 2025-11-25 17:12:41.683 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:12:41 compute-0 nova_compute[254092]: 2025-11-25 17:12:41.704 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:12:41 compute-0 nova_compute[254092]: 2025-11-25 17:12:41.705 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:12:41 compute-0 ceph-mon[74985]: pgmap v2674: 321 pgs: 321 active+clean; 59 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 15 KiB/s wr, 55 op/s
Nov 25 17:12:41 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2083821883' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:12:41 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2668610908' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:12:41 compute-0 stoic_bohr[403968]: {
Nov 25 17:12:41 compute-0 stoic_bohr[403968]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:12:41 compute-0 stoic_bohr[403968]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:12:41 compute-0 stoic_bohr[403968]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:12:41 compute-0 stoic_bohr[403968]:         "osd_id": 1,
Nov 25 17:12:41 compute-0 stoic_bohr[403968]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:12:41 compute-0 stoic_bohr[403968]:         "type": "bluestore"
Nov 25 17:12:41 compute-0 stoic_bohr[403968]:     },
Nov 25 17:12:41 compute-0 stoic_bohr[403968]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:12:41 compute-0 stoic_bohr[403968]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:12:41 compute-0 stoic_bohr[403968]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:12:41 compute-0 stoic_bohr[403968]:         "osd_id": 2,
Nov 25 17:12:41 compute-0 stoic_bohr[403968]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:12:41 compute-0 stoic_bohr[403968]:         "type": "bluestore"
Nov 25 17:12:41 compute-0 stoic_bohr[403968]:     },
Nov 25 17:12:41 compute-0 stoic_bohr[403968]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:12:41 compute-0 stoic_bohr[403968]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:12:41 compute-0 stoic_bohr[403968]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:12:41 compute-0 stoic_bohr[403968]:         "osd_id": 0,
Nov 25 17:12:41 compute-0 stoic_bohr[403968]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:12:41 compute-0 stoic_bohr[403968]:         "type": "bluestore"
Nov 25 17:12:41 compute-0 stoic_bohr[403968]:     }
Nov 25 17:12:41 compute-0 stoic_bohr[403968]: }
Nov 25 17:12:41 compute-0 systemd[1]: libpod-6e85292346d715f2ff5e436e5679e447e47e81a0f397739dbe7b9b256f282aa0.scope: Deactivated successfully.
Nov 25 17:12:41 compute-0 podman[403952]: 2025-11-25 17:12:41.932787952 +0000 UTC m=+1.250021435 container died 6e85292346d715f2ff5e436e5679e447e47e81a0f397739dbe7b9b256f282aa0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bohr, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:12:41 compute-0 systemd[1]: libpod-6e85292346d715f2ff5e436e5679e447e47e81a0f397739dbe7b9b256f282aa0.scope: Consumed 1.096s CPU time.
Nov 25 17:12:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-a300d751656ae0a3a0a11315127a9ca7233662013e2536ff42019a03a7042d27-merged.mount: Deactivated successfully.
Nov 25 17:12:41 compute-0 podman[403952]: 2025-11-25 17:12:41.983709007 +0000 UTC m=+1.300942490 container remove 6e85292346d715f2ff5e436e5679e447e47e81a0f397739dbe7b9b256f282aa0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bohr, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:12:41 compute-0 systemd[1]: libpod-conmon-6e85292346d715f2ff5e436e5679e447e47e81a0f397739dbe7b9b256f282aa0.scope: Deactivated successfully.
Nov 25 17:12:42 compute-0 sudo[403807]: pam_unix(sudo:session): session closed for user root
Nov 25 17:12:42 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:12:42 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:12:42 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:12:42 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:12:42 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 95f5f54c-199c-4d78-a036-bb1c0464fcd4 does not exist
Nov 25 17:12:42 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev e0cf41cb-9e2c-45e1-920f-1fa8c9d7ead1 does not exist
Nov 25 17:12:42 compute-0 nova_compute[254092]: 2025-11-25 17:12:42.111 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:12:42 compute-0 sudo[404038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:12:42 compute-0 sudo[404038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:12:42 compute-0 sudo[404038]: pam_unix(sudo:session): session closed for user root
Nov 25 17:12:42 compute-0 sudo[404063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:12:42 compute-0 sudo[404063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:12:42 compute-0 sudo[404063]: pam_unix(sudo:session): session closed for user root
Nov 25 17:12:42 compute-0 nova_compute[254092]: 2025-11-25 17:12:42.702 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:12:42 compute-0 nova_compute[254092]: 2025-11-25 17:12:42.702 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:12:42 compute-0 nova_compute[254092]: 2025-11-25 17:12:42.703 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:12:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2675: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 3.7 KiB/s wr, 56 op/s
Nov 25 17:12:43 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:12:43 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:12:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:12:44 compute-0 ceph-mon[74985]: pgmap v2675: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 3.7 KiB/s wr, 56 op/s
Nov 25 17:12:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2676: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 2.4 KiB/s wr, 56 op/s
Nov 25 17:12:45 compute-0 nova_compute[254092]: 2025-11-25 17:12:45.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:12:45 compute-0 nova_compute[254092]: 2025-11-25 17:12:45.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:12:45 compute-0 nova_compute[254092]: 2025-11-25 17:12:45.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:12:45 compute-0 nova_compute[254092]: 2025-11-25 17:12:45.510 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 17:12:45 compute-0 nova_compute[254092]: 2025-11-25 17:12:45.659 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:12:46 compute-0 ceph-mon[74985]: pgmap v2676: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 2.4 KiB/s wr, 56 op/s
Nov 25 17:12:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2677: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 2.4 KiB/s wr, 56 op/s
Nov 25 17:12:47 compute-0 nova_compute[254092]: 2025-11-25 17:12:47.136 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:12:47 compute-0 nova_compute[254092]: 2025-11-25 17:12:47.488 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:12:47 compute-0 nova_compute[254092]: 2025-11-25 17:12:47.600 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:12:48 compute-0 ceph-mon[74985]: pgmap v2677: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 2.4 KiB/s wr, 56 op/s
Nov 25 17:12:48 compute-0 nova_compute[254092]: 2025-11-25 17:12:48.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:12:48 compute-0 nova_compute[254092]: 2025-11-25 17:12:48.576 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090753.5749605, a8194956-04fe-46d6-9b07-63486afc3c7d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:12:48 compute-0 nova_compute[254092]: 2025-11-25 17:12:48.576 254096 INFO nova.compute.manager [-] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] VM Stopped (Lifecycle Event)
Nov 25 17:12:48 compute-0 nova_compute[254092]: 2025-11-25 17:12:48.592 254096 DEBUG nova.compute.manager [None req-e63dc93c-d523-47fc-a400-dea559a995c2 - - - - - -] [instance: a8194956-04fe-46d6-9b07-63486afc3c7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:12:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2678: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 17:12:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:12:50 compute-0 ceph-mon[74985]: pgmap v2678: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 17:12:50 compute-0 nova_compute[254092]: 2025-11-25 17:12:50.660 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:12:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2679: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 17:12:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:12:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:12:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:12:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:12:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 17:12:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:12:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:12:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:12:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:12:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:12:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 17:12:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:12:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:12:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:12:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:12:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:12:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:12:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:12:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:12:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:12:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:12:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:12:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:12:52 compute-0 nova_compute[254092]: 2025-11-25 17:12:52.089 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090757.088163, 2f4d2580-5acd-4693-a158-926565a16fe9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:12:52 compute-0 nova_compute[254092]: 2025-11-25 17:12:52.090 254096 INFO nova.compute.manager [-] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] VM Stopped (Lifecycle Event)
Nov 25 17:12:52 compute-0 nova_compute[254092]: 2025-11-25 17:12:52.110 254096 DEBUG nova.compute.manager [None req-8846d0b1-5462-4e60-b3a4-8316f5dde1f2 - - - - - -] [instance: 2f4d2580-5acd-4693-a158-926565a16fe9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:12:52 compute-0 nova_compute[254092]: 2025-11-25 17:12:52.138 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:12:52 compute-0 ceph-mon[74985]: pgmap v2679: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 17:12:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2680: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 0 B/s wr, 1 op/s
Nov 25 17:12:53 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:12:54 compute-0 ceph-mon[74985]: pgmap v2680: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 0 B/s wr, 1 op/s
Nov 25 17:12:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2681: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:12:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:12:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1762493568' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:12:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:12:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1762493568' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:12:55 compute-0 nova_compute[254092]: 2025-11-25 17:12:55.663 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:12:56 compute-0 ceph-mon[74985]: pgmap v2681: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:12:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/1762493568' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:12:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/1762493568' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:12:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2682: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:12:57 compute-0 nova_compute[254092]: 2025-11-25 17:12:57.140 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:12:57 compute-0 podman[404091]: 2025-11-25 17:12:57.637309818 +0000 UTC m=+0.053391074 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 25 17:12:57 compute-0 podman[404090]: 2025-11-25 17:12:57.6454658 +0000 UTC m=+0.062219104 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0)
Nov 25 17:12:57 compute-0 podman[404092]: 2025-11-25 17:12:57.697374943 +0000 UTC m=+0.107788335 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118)
Nov 25 17:12:58 compute-0 ceph-mon[74985]: pgmap v2682: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:12:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2683: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:12:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:12:59 compute-0 ceph-mon[74985]: pgmap v2683: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:13:00 compute-0 nova_compute[254092]: 2025-11-25 17:13:00.665 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2684: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:13:01 compute-0 ceph-mon[74985]: pgmap v2684: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:13:02 compute-0 nova_compute[254092]: 2025-11-25 17:13:02.143 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2685: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:13:03 compute-0 ceph-mon[74985]: pgmap v2685: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:13:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:13:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2686: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:13:05 compute-0 nova_compute[254092]: 2025-11-25 17:13:05.668 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:05 compute-0 ceph-mon[74985]: pgmap v2686: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:13:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:06.136 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:13:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:06.137 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 17:13:06 compute-0 nova_compute[254092]: 2025-11-25 17:13:06.137 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2687: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:13:07 compute-0 nova_compute[254092]: 2025-11-25 17:13:07.190 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:07 compute-0 ceph-mon[74985]: pgmap v2687: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:13:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2688: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:13:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:13:09 compute-0 ceph-mon[74985]: pgmap v2688: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:13:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:13:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:13:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:13:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:13:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:13:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:13:10 compute-0 nova_compute[254092]: 2025-11-25 17:13:10.416 254096 DEBUG oslo_concurrency.lockutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "3b797ee6-c82f-4c01-bb54-31de659fcad8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:13:10 compute-0 nova_compute[254092]: 2025-11-25 17:13:10.416 254096 DEBUG oslo_concurrency.lockutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:13:10 compute-0 nova_compute[254092]: 2025-11-25 17:13:10.429 254096 DEBUG nova.compute.manager [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 17:13:10 compute-0 nova_compute[254092]: 2025-11-25 17:13:10.543 254096 DEBUG oslo_concurrency.lockutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:13:10 compute-0 nova_compute[254092]: 2025-11-25 17:13:10.544 254096 DEBUG oslo_concurrency.lockutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:13:10 compute-0 nova_compute[254092]: 2025-11-25 17:13:10.554 254096 DEBUG nova.virt.hardware [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 17:13:10 compute-0 nova_compute[254092]: 2025-11-25 17:13:10.555 254096 INFO nova.compute.claims [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Claim successful on node compute-0.ctlplane.example.com
Nov 25 17:13:10 compute-0 nova_compute[254092]: 2025-11-25 17:13:10.670 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:10 compute-0 nova_compute[254092]: 2025-11-25 17:13:10.706 254096 DEBUG oslo_concurrency.processutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:13:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2689: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:13:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:13:11 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1193490772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:13:11 compute-0 nova_compute[254092]: 2025-11-25 17:13:11.127 254096 DEBUG oslo_concurrency.processutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:13:11 compute-0 nova_compute[254092]: 2025-11-25 17:13:11.133 254096 DEBUG nova.compute.provider_tree [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:13:11 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:11.138 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:13:11 compute-0 nova_compute[254092]: 2025-11-25 17:13:11.145 254096 DEBUG nova.scheduler.client.report [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:13:11 compute-0 nova_compute[254092]: 2025-11-25 17:13:11.161 254096 DEBUG oslo_concurrency.lockutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:13:11 compute-0 nova_compute[254092]: 2025-11-25 17:13:11.162 254096 DEBUG nova.compute.manager [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 17:13:11 compute-0 nova_compute[254092]: 2025-11-25 17:13:11.198 254096 DEBUG nova.compute.manager [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 17:13:11 compute-0 nova_compute[254092]: 2025-11-25 17:13:11.198 254096 DEBUG nova.network.neutron [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 17:13:11 compute-0 nova_compute[254092]: 2025-11-25 17:13:11.216 254096 INFO nova.virt.libvirt.driver [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 17:13:11 compute-0 nova_compute[254092]: 2025-11-25 17:13:11.231 254096 DEBUG nova.compute.manager [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 17:13:11 compute-0 nova_compute[254092]: 2025-11-25 17:13:11.311 254096 DEBUG nova.compute.manager [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 17:13:11 compute-0 nova_compute[254092]: 2025-11-25 17:13:11.312 254096 DEBUG nova.virt.libvirt.driver [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 17:13:11 compute-0 nova_compute[254092]: 2025-11-25 17:13:11.312 254096 INFO nova.virt.libvirt.driver [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Creating image(s)
Nov 25 17:13:11 compute-0 nova_compute[254092]: 2025-11-25 17:13:11.330 254096 DEBUG nova.storage.rbd_utils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 3b797ee6-c82f-4c01-bb54-31de659fcad8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:13:11 compute-0 nova_compute[254092]: 2025-11-25 17:13:11.347 254096 DEBUG nova.storage.rbd_utils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 3b797ee6-c82f-4c01-bb54-31de659fcad8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:13:11 compute-0 nova_compute[254092]: 2025-11-25 17:13:11.364 254096 DEBUG nova.storage.rbd_utils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 3b797ee6-c82f-4c01-bb54-31de659fcad8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:13:11 compute-0 nova_compute[254092]: 2025-11-25 17:13:11.368 254096 DEBUG oslo_concurrency.processutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:13:11 compute-0 nova_compute[254092]: 2025-11-25 17:13:11.439 254096 DEBUG oslo_concurrency.processutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:13:11 compute-0 nova_compute[254092]: 2025-11-25 17:13:11.440 254096 DEBUG oslo_concurrency.lockutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:13:11 compute-0 nova_compute[254092]: 2025-11-25 17:13:11.441 254096 DEBUG oslo_concurrency.lockutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:13:11 compute-0 nova_compute[254092]: 2025-11-25 17:13:11.441 254096 DEBUG oslo_concurrency.lockutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:13:11 compute-0 nova_compute[254092]: 2025-11-25 17:13:11.461 254096 DEBUG nova.storage.rbd_utils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 3b797ee6-c82f-4c01-bb54-31de659fcad8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:13:11 compute-0 nova_compute[254092]: 2025-11-25 17:13:11.463 254096 DEBUG oslo_concurrency.processutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 3b797ee6-c82f-4c01-bb54-31de659fcad8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:13:11 compute-0 nova_compute[254092]: 2025-11-25 17:13:11.732 254096 DEBUG oslo_concurrency.processutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 3b797ee6-c82f-4c01-bb54-31de659fcad8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.268s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:13:11 compute-0 nova_compute[254092]: 2025-11-25 17:13:11.789 254096 DEBUG nova.storage.rbd_utils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] resizing rbd image 3b797ee6-c82f-4c01-bb54-31de659fcad8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 17:13:11 compute-0 nova_compute[254092]: 2025-11-25 17:13:11.840 254096 DEBUG nova.policy [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a647bb22c9a54e5fa9ddfff588126693', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '67b7f8e8aac6441f992a182f8d893100', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 17:13:11 compute-0 nova_compute[254092]: 2025-11-25 17:13:11.877 254096 DEBUG nova.objects.instance [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'migration_context' on Instance uuid 3b797ee6-c82f-4c01-bb54-31de659fcad8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:13:11 compute-0 nova_compute[254092]: 2025-11-25 17:13:11.891 254096 DEBUG nova.virt.libvirt.driver [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 17:13:11 compute-0 nova_compute[254092]: 2025-11-25 17:13:11.892 254096 DEBUG nova.virt.libvirt.driver [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Ensure instance console log exists: /var/lib/nova/instances/3b797ee6-c82f-4c01-bb54-31de659fcad8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 17:13:11 compute-0 nova_compute[254092]: 2025-11-25 17:13:11.892 254096 DEBUG oslo_concurrency.lockutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:13:11 compute-0 nova_compute[254092]: 2025-11-25 17:13:11.893 254096 DEBUG oslo_concurrency.lockutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:13:11 compute-0 nova_compute[254092]: 2025-11-25 17:13:11.893 254096 DEBUG oslo_concurrency.lockutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:13:11 compute-0 ceph-mon[74985]: pgmap v2689: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:13:11 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1193490772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:13:12 compute-0 nova_compute[254092]: 2025-11-25 17:13:12.193 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2690: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:13:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:13.651 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:13:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:13.652 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:13:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:13.652 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:13:13 compute-0 nova_compute[254092]: 2025-11-25 17:13:13.659 254096 DEBUG nova.network.neutron [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Successfully created port: bf0e0412-082f-4b0e-aabe-4e4f0de25b43 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 17:13:13 compute-0 ceph-mon[74985]: pgmap v2690: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:13:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:13:14 compute-0 nova_compute[254092]: 2025-11-25 17:13:14.108 254096 DEBUG nova.network.neutron [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Successfully created port: 831eaa83-55bd-4098-9037-4b628eb8d994 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 17:13:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2691: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:13:15 compute-0 nova_compute[254092]: 2025-11-25 17:13:15.190 254096 DEBUG nova.network.neutron [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Successfully updated port: bf0e0412-082f-4b0e-aabe-4e4f0de25b43 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:13:15 compute-0 nova_compute[254092]: 2025-11-25 17:13:15.320 254096 DEBUG nova.compute.manager [req-98da8bd2-e4fa-4ce7-ac18-7f45bc83097a req-c1ea8aef-3626-4321-9496-92f14ff1fa76 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Received event network-changed-bf0e0412-082f-4b0e-aabe-4e4f0de25b43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:13:15 compute-0 nova_compute[254092]: 2025-11-25 17:13:15.320 254096 DEBUG nova.compute.manager [req-98da8bd2-e4fa-4ce7-ac18-7f45bc83097a req-c1ea8aef-3626-4321-9496-92f14ff1fa76 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Refreshing instance network info cache due to event network-changed-bf0e0412-082f-4b0e-aabe-4e4f0de25b43. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:13:15 compute-0 nova_compute[254092]: 2025-11-25 17:13:15.321 254096 DEBUG oslo_concurrency.lockutils [req-98da8bd2-e4fa-4ce7-ac18-7f45bc83097a req-c1ea8aef-3626-4321-9496-92f14ff1fa76 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-3b797ee6-c82f-4c01-bb54-31de659fcad8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:13:15 compute-0 nova_compute[254092]: 2025-11-25 17:13:15.321 254096 DEBUG oslo_concurrency.lockutils [req-98da8bd2-e4fa-4ce7-ac18-7f45bc83097a req-c1ea8aef-3626-4321-9496-92f14ff1fa76 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-3b797ee6-c82f-4c01-bb54-31de659fcad8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:13:15 compute-0 nova_compute[254092]: 2025-11-25 17:13:15.321 254096 DEBUG nova.network.neutron [req-98da8bd2-e4fa-4ce7-ac18-7f45bc83097a req-c1ea8aef-3626-4321-9496-92f14ff1fa76 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Refreshing network info cache for port bf0e0412-082f-4b0e-aabe-4e4f0de25b43 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:13:15 compute-0 nova_compute[254092]: 2025-11-25 17:13:15.658 254096 DEBUG nova.network.neutron [req-98da8bd2-e4fa-4ce7-ac18-7f45bc83097a req-c1ea8aef-3626-4321-9496-92f14ff1fa76 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:13:15 compute-0 nova_compute[254092]: 2025-11-25 17:13:15.672 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:16 compute-0 nova_compute[254092]: 2025-11-25 17:13:16.029 254096 DEBUG nova.network.neutron [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Successfully updated port: 831eaa83-55bd-4098-9037-4b628eb8d994 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:13:16 compute-0 nova_compute[254092]: 2025-11-25 17:13:16.041 254096 DEBUG oslo_concurrency.lockutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "refresh_cache-3b797ee6-c82f-4c01-bb54-31de659fcad8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:13:16 compute-0 ceph-mon[74985]: pgmap v2691: 321 pgs: 321 active+clean; 41 MiB data, 929 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:13:16 compute-0 nova_compute[254092]: 2025-11-25 17:13:16.102 254096 DEBUG nova.network.neutron [req-98da8bd2-e4fa-4ce7-ac18-7f45bc83097a req-c1ea8aef-3626-4321-9496-92f14ff1fa76 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:13:16 compute-0 nova_compute[254092]: 2025-11-25 17:13:16.122 254096 DEBUG oslo_concurrency.lockutils [req-98da8bd2-e4fa-4ce7-ac18-7f45bc83097a req-c1ea8aef-3626-4321-9496-92f14ff1fa76 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-3b797ee6-c82f-4c01-bb54-31de659fcad8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:13:16 compute-0 nova_compute[254092]: 2025-11-25 17:13:16.122 254096 DEBUG oslo_concurrency.lockutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquired lock "refresh_cache-3b797ee6-c82f-4c01-bb54-31de659fcad8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:13:16 compute-0 nova_compute[254092]: 2025-11-25 17:13:16.123 254096 DEBUG nova.network.neutron [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 17:13:16 compute-0 nova_compute[254092]: 2025-11-25 17:13:16.291 254096 DEBUG nova.network.neutron [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:13:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2692: 321 pgs: 321 active+clean; 88 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:13:17 compute-0 nova_compute[254092]: 2025-11-25 17:13:17.196 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:17 compute-0 nova_compute[254092]: 2025-11-25 17:13:17.435 254096 DEBUG nova.compute.manager [req-641faa51-7a27-4fe1-95ba-45793561a842 req-612d6709-a60a-497b-9044-8f1dcd248546 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Received event network-changed-831eaa83-55bd-4098-9037-4b628eb8d994 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:13:17 compute-0 nova_compute[254092]: 2025-11-25 17:13:17.435 254096 DEBUG nova.compute.manager [req-641faa51-7a27-4fe1-95ba-45793561a842 req-612d6709-a60a-497b-9044-8f1dcd248546 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Refreshing instance network info cache due to event network-changed-831eaa83-55bd-4098-9037-4b628eb8d994. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:13:17 compute-0 nova_compute[254092]: 2025-11-25 17:13:17.436 254096 DEBUG oslo_concurrency.lockutils [req-641faa51-7a27-4fe1-95ba-45793561a842 req-612d6709-a60a-497b-9044-8f1dcd248546 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-3b797ee6-c82f-4c01-bb54-31de659fcad8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:13:18 compute-0 ceph-mon[74985]: pgmap v2692: 321 pgs: 321 active+clean; 88 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:13:18 compute-0 nova_compute[254092]: 2025-11-25 17:13:18.336 254096 DEBUG nova.network.neutron [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Updating instance_info_cache with network_info: [{"id": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "address": "fa:16:3e:3b:43:89", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0e0412-08", "ovs_interfaceid": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "831eaa83-55bd-4098-9037-4b628eb8d994", "address": "fa:16:3e:b0:cd:a4", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb0:cda4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap831eaa83-55", "ovs_interfaceid": "831eaa83-55bd-4098-9037-4b628eb8d994", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:13:18 compute-0 nova_compute[254092]: 2025-11-25 17:13:18.356 254096 DEBUG oslo_concurrency.lockutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Releasing lock "refresh_cache-3b797ee6-c82f-4c01-bb54-31de659fcad8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:13:18 compute-0 nova_compute[254092]: 2025-11-25 17:13:18.356 254096 DEBUG nova.compute.manager [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Instance network_info: |[{"id": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "address": "fa:16:3e:3b:43:89", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0e0412-08", "ovs_interfaceid": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "831eaa83-55bd-4098-9037-4b628eb8d994", "address": "fa:16:3e:b0:cd:a4", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb0:cda4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap831eaa83-55", "ovs_interfaceid": "831eaa83-55bd-4098-9037-4b628eb8d994", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 17:13:18 compute-0 nova_compute[254092]: 2025-11-25 17:13:18.357 254096 DEBUG oslo_concurrency.lockutils [req-641faa51-7a27-4fe1-95ba-45793561a842 req-612d6709-a60a-497b-9044-8f1dcd248546 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-3b797ee6-c82f-4c01-bb54-31de659fcad8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:13:18 compute-0 nova_compute[254092]: 2025-11-25 17:13:18.357 254096 DEBUG nova.network.neutron [req-641faa51-7a27-4fe1-95ba-45793561a842 req-612d6709-a60a-497b-9044-8f1dcd248546 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Refreshing network info cache for port 831eaa83-55bd-4098-9037-4b628eb8d994 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:13:18 compute-0 nova_compute[254092]: 2025-11-25 17:13:18.361 254096 DEBUG nova.virt.libvirt.driver [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Start _get_guest_xml network_info=[{"id": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "address": "fa:16:3e:3b:43:89", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0e0412-08", "ovs_interfaceid": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "831eaa83-55bd-4098-9037-4b628eb8d994", "address": "fa:16:3e:b0:cd:a4", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb0:cda4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap831eaa83-55", "ovs_interfaceid": "831eaa83-55bd-4098-9037-4b628eb8d994", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 17:13:18 compute-0 nova_compute[254092]: 2025-11-25 17:13:18.365 254096 WARNING nova.virt.libvirt.driver [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:13:18 compute-0 nova_compute[254092]: 2025-11-25 17:13:18.370 254096 DEBUG nova.virt.libvirt.host [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 17:13:18 compute-0 nova_compute[254092]: 2025-11-25 17:13:18.370 254096 DEBUG nova.virt.libvirt.host [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 17:13:18 compute-0 nova_compute[254092]: 2025-11-25 17:13:18.376 254096 DEBUG nova.virt.libvirt.host [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 17:13:18 compute-0 nova_compute[254092]: 2025-11-25 17:13:18.376 254096 DEBUG nova.virt.libvirt.host [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 17:13:18 compute-0 nova_compute[254092]: 2025-11-25 17:13:18.377 254096 DEBUG nova.virt.libvirt.driver [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 17:13:18 compute-0 nova_compute[254092]: 2025-11-25 17:13:18.377 254096 DEBUG nova.virt.hardware [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 17:13:18 compute-0 nova_compute[254092]: 2025-11-25 17:13:18.377 254096 DEBUG nova.virt.hardware [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 17:13:18 compute-0 nova_compute[254092]: 2025-11-25 17:13:18.377 254096 DEBUG nova.virt.hardware [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 17:13:18 compute-0 nova_compute[254092]: 2025-11-25 17:13:18.378 254096 DEBUG nova.virt.hardware [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 17:13:18 compute-0 nova_compute[254092]: 2025-11-25 17:13:18.378 254096 DEBUG nova.virt.hardware [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 17:13:18 compute-0 nova_compute[254092]: 2025-11-25 17:13:18.378 254096 DEBUG nova.virt.hardware [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 17:13:18 compute-0 nova_compute[254092]: 2025-11-25 17:13:18.378 254096 DEBUG nova.virt.hardware [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 17:13:18 compute-0 nova_compute[254092]: 2025-11-25 17:13:18.379 254096 DEBUG nova.virt.hardware [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 17:13:18 compute-0 nova_compute[254092]: 2025-11-25 17:13:18.379 254096 DEBUG nova.virt.hardware [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 17:13:18 compute-0 nova_compute[254092]: 2025-11-25 17:13:18.379 254096 DEBUG nova.virt.hardware [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 17:13:18 compute-0 nova_compute[254092]: 2025-11-25 17:13:18.379 254096 DEBUG nova.virt.hardware [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 17:13:18 compute-0 nova_compute[254092]: 2025-11-25 17:13:18.382 254096 DEBUG oslo_concurrency.processutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:13:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2693: 321 pgs: 321 active+clean; 88 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:13:18 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:13:18 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1964183865' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:13:18 compute-0 nova_compute[254092]: 2025-11-25 17:13:18.833 254096 DEBUG oslo_concurrency.processutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:13:18 compute-0 nova_compute[254092]: 2025-11-25 17:13:18.858 254096 DEBUG nova.storage.rbd_utils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 3b797ee6-c82f-4c01-bb54-31de659fcad8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:13:18 compute-0 nova_compute[254092]: 2025-11-25 17:13:18.862 254096 DEBUG oslo_concurrency.processutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:13:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:13:19 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1964183865' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:13:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:13:19 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3451132653' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.284 254096 DEBUG oslo_concurrency.processutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.286 254096 DEBUG nova.virt.libvirt.vif [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:13:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-848969978',display_name='tempest-TestGettingAddress-server-848969978',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-848969978',id=137,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF9z68jccGfyAVxEITitYiuN9vFSdA5Xa/Y4I6Func8CZqbMVn2UqMvh2Pq/mSGYMJkN1SoVPK2VCVvj1pyhy91CkHawiwsrdFCgvBZGIbNZI9aqQaje8pTP0iHBJQrkZQ==',key_name='tempest-TestGettingAddress-1190179439',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-70lki4pm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:13:11Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=3b797ee6-c82f-4c01-bb54-31de659fcad8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "address": "fa:16:3e:3b:43:89", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0e0412-08", "ovs_interfaceid": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.286 254096 DEBUG nova.network.os_vif_util [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "address": "fa:16:3e:3b:43:89", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0e0412-08", "ovs_interfaceid": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.287 254096 DEBUG nova.network.os_vif_util [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3b:43:89,bridge_name='br-int',has_traffic_filtering=True,id=bf0e0412-082f-4b0e-aabe-4e4f0de25b43,network=Network(ddff42cd-c011-4371-96b1-f2bb5093a16d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0e0412-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.288 254096 DEBUG nova.virt.libvirt.vif [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:13:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-848969978',display_name='tempest-TestGettingAddress-server-848969978',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-848969978',id=137,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF9z68jccGfyAVxEITitYiuN9vFSdA5Xa/Y4I6Func8CZqbMVn2UqMvh2Pq/mSGYMJkN1SoVPK2VCVvj1pyhy91CkHawiwsrdFCgvBZGIbNZI9aqQaje8pTP0iHBJQrkZQ==',key_name='tempest-TestGettingAddress-1190179439',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-70lki4pm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:13:11Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=3b797ee6-c82f-4c01-bb54-31de659fcad8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "831eaa83-55bd-4098-9037-4b628eb8d994", "address": "fa:16:3e:b0:cd:a4", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb0:cda4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap831eaa83-55", "ovs_interfaceid": "831eaa83-55bd-4098-9037-4b628eb8d994", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.288 254096 DEBUG nova.network.os_vif_util [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "831eaa83-55bd-4098-9037-4b628eb8d994", "address": "fa:16:3e:b0:cd:a4", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb0:cda4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap831eaa83-55", "ovs_interfaceid": "831eaa83-55bd-4098-9037-4b628eb8d994", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.289 254096 DEBUG nova.network.os_vif_util [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:cd:a4,bridge_name='br-int',has_traffic_filtering=True,id=831eaa83-55bd-4098-9037-4b628eb8d994,network=Network(0063509c-db60-47db-9c49-72faf9b698d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap831eaa83-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.290 254096 DEBUG nova.objects.instance [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3b797ee6-c82f-4c01-bb54-31de659fcad8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.306 254096 DEBUG nova.virt.libvirt.driver [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] End _get_guest_xml xml=<domain type="kvm">
Nov 25 17:13:19 compute-0 nova_compute[254092]:   <uuid>3b797ee6-c82f-4c01-bb54-31de659fcad8</uuid>
Nov 25 17:13:19 compute-0 nova_compute[254092]:   <name>instance-00000089</name>
Nov 25 17:13:19 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 17:13:19 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 17:13:19 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:13:19 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:       <nova:name>tempest-TestGettingAddress-server-848969978</nova:name>
Nov 25 17:13:19 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 17:13:18</nova:creationTime>
Nov 25 17:13:19 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 17:13:19 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 17:13:19 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 17:13:19 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 17:13:19 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:13:19 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 17:13:19 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 17:13:19 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 17:13:19 compute-0 nova_compute[254092]:         <nova:user uuid="a647bb22c9a54e5fa9ddfff588126693">tempest-TestGettingAddress-207202764-project-member</nova:user>
Nov 25 17:13:19 compute-0 nova_compute[254092]:         <nova:project uuid="67b7f8e8aac6441f992a182f8d893100">tempest-TestGettingAddress-207202764</nova:project>
Nov 25 17:13:19 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 17:13:19 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 17:13:19 compute-0 nova_compute[254092]:         <nova:port uuid="bf0e0412-082f-4b0e-aabe-4e4f0de25b43">
Nov 25 17:13:19 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 17:13:19 compute-0 nova_compute[254092]:         <nova:port uuid="831eaa83-55bd-4098-9037-4b628eb8d994">
Nov 25 17:13:19 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:feb0:cda4" ipVersion="6"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 17:13:19 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 17:13:19 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:13:19 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 17:13:19 compute-0 nova_compute[254092]:     <system>
Nov 25 17:13:19 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 17:13:19 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 17:13:19 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:13:19 compute-0 nova_compute[254092]:       <entry name="serial">3b797ee6-c82f-4c01-bb54-31de659fcad8</entry>
Nov 25 17:13:19 compute-0 nova_compute[254092]:       <entry name="uuid">3b797ee6-c82f-4c01-bb54-31de659fcad8</entry>
Nov 25 17:13:19 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     </system>
Nov 25 17:13:19 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:13:19 compute-0 nova_compute[254092]:   <os>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:   </os>
Nov 25 17:13:19 compute-0 nova_compute[254092]:   <features>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:   </features>
Nov 25 17:13:19 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 17:13:19 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:13:19 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 17:13:19 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:13:19 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 17:13:19 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/3b797ee6-c82f-4c01-bb54-31de659fcad8_disk">
Nov 25 17:13:19 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:       </source>
Nov 25 17:13:19 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:13:19 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:13:19 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 17:13:19 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/3b797ee6-c82f-4c01-bb54-31de659fcad8_disk.config">
Nov 25 17:13:19 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:       </source>
Nov 25 17:13:19 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:13:19 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:13:19 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 17:13:19 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:3b:43:89"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:       <target dev="tapbf0e0412-08"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 17:13:19 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:b0:cd:a4"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:       <target dev="tap831eaa83-55"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 17:13:19 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/3b797ee6-c82f-4c01-bb54-31de659fcad8/console.log" append="off"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     <video>
Nov 25 17:13:19 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     </video>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 17:13:19 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 17:13:19 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 17:13:19 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:13:19 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:13:19 compute-0 nova_compute[254092]: </domain>
Nov 25 17:13:19 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.308 254096 DEBUG nova.compute.manager [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Preparing to wait for external event network-vif-plugged-bf0e0412-082f-4b0e-aabe-4e4f0de25b43 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.308 254096 DEBUG oslo_concurrency.lockutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.308 254096 DEBUG oslo_concurrency.lockutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.308 254096 DEBUG oslo_concurrency.lockutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.309 254096 DEBUG nova.compute.manager [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Preparing to wait for external event network-vif-plugged-831eaa83-55bd-4098-9037-4b628eb8d994 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.309 254096 DEBUG oslo_concurrency.lockutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.309 254096 DEBUG oslo_concurrency.lockutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.309 254096 DEBUG oslo_concurrency.lockutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.310 254096 DEBUG nova.virt.libvirt.vif [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:13:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-848969978',display_name='tempest-TestGettingAddress-server-848969978',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-848969978',id=137,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF9z68jccGfyAVxEITitYiuN9vFSdA5Xa/Y4I6Func8CZqbMVn2UqMvh2Pq/mSGYMJkN1SoVPK2VCVvj1pyhy91CkHawiwsrdFCgvBZGIbNZI9aqQaje8pTP0iHBJQrkZQ==',key_name='tempest-TestGettingAddress-1190179439',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-70lki4pm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:13:11Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=3b797ee6-c82f-4c01-bb54-31de659fcad8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "address": "fa:16:3e:3b:43:89", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0e0412-08", "ovs_interfaceid": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.310 254096 DEBUG nova.network.os_vif_util [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "address": "fa:16:3e:3b:43:89", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0e0412-08", "ovs_interfaceid": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.311 254096 DEBUG nova.network.os_vif_util [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3b:43:89,bridge_name='br-int',has_traffic_filtering=True,id=bf0e0412-082f-4b0e-aabe-4e4f0de25b43,network=Network(ddff42cd-c011-4371-96b1-f2bb5093a16d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0e0412-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.311 254096 DEBUG os_vif [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:43:89,bridge_name='br-int',has_traffic_filtering=True,id=bf0e0412-082f-4b0e-aabe-4e4f0de25b43,network=Network(ddff42cd-c011-4371-96b1-f2bb5093a16d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0e0412-08') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.311 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.312 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.312 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.315 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.315 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf0e0412-08, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.316 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbf0e0412-08, col_values=(('external_ids', {'iface-id': 'bf0e0412-082f-4b0e-aabe-4e4f0de25b43', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3b:43:89', 'vm-uuid': '3b797ee6-c82f-4c01-bb54-31de659fcad8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.317 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:19 compute-0 NetworkManager[48891]: <info>  [1764090799.3191] manager: (tapbf0e0412-08): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/583)
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.320 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.323 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.324 254096 INFO os_vif [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:43:89,bridge_name='br-int',has_traffic_filtering=True,id=bf0e0412-082f-4b0e-aabe-4e4f0de25b43,network=Network(ddff42cd-c011-4371-96b1-f2bb5093a16d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0e0412-08')
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.325 254096 DEBUG nova.virt.libvirt.vif [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:13:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-848969978',display_name='tempest-TestGettingAddress-server-848969978',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-848969978',id=137,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF9z68jccGfyAVxEITitYiuN9vFSdA5Xa/Y4I6Func8CZqbMVn2UqMvh2Pq/mSGYMJkN1SoVPK2VCVvj1pyhy91CkHawiwsrdFCgvBZGIbNZI9aqQaje8pTP0iHBJQrkZQ==',key_name='tempest-TestGettingAddress-1190179439',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-70lki4pm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:13:11Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=3b797ee6-c82f-4c01-bb54-31de659fcad8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "831eaa83-55bd-4098-9037-4b628eb8d994", "address": "fa:16:3e:b0:cd:a4", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb0:cda4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap831eaa83-55", "ovs_interfaceid": "831eaa83-55bd-4098-9037-4b628eb8d994", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.325 254096 DEBUG nova.network.os_vif_util [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "831eaa83-55bd-4098-9037-4b628eb8d994", "address": "fa:16:3e:b0:cd:a4", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb0:cda4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap831eaa83-55", "ovs_interfaceid": "831eaa83-55bd-4098-9037-4b628eb8d994", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.326 254096 DEBUG nova.network.os_vif_util [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:cd:a4,bridge_name='br-int',has_traffic_filtering=True,id=831eaa83-55bd-4098-9037-4b628eb8d994,network=Network(0063509c-db60-47db-9c49-72faf9b698d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap831eaa83-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.326 254096 DEBUG os_vif [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:cd:a4,bridge_name='br-int',has_traffic_filtering=True,id=831eaa83-55bd-4098-9037-4b628eb8d994,network=Network(0063509c-db60-47db-9c49-72faf9b698d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap831eaa83-55') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.326 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.327 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.327 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.329 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.329 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap831eaa83-55, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.329 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap831eaa83-55, col_values=(('external_ids', {'iface-id': '831eaa83-55bd-4098-9037-4b628eb8d994', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b0:cd:a4', 'vm-uuid': '3b797ee6-c82f-4c01-bb54-31de659fcad8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.330 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:19 compute-0 NetworkManager[48891]: <info>  [1764090799.3319] manager: (tap831eaa83-55): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/584)
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.333 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.340 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.340 254096 INFO os_vif [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:cd:a4,bridge_name='br-int',has_traffic_filtering=True,id=831eaa83-55bd-4098-9037-4b628eb8d994,network=Network(0063509c-db60-47db-9c49-72faf9b698d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap831eaa83-55')
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.402 254096 DEBUG nova.virt.libvirt.driver [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.403 254096 DEBUG nova.virt.libvirt.driver [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.403 254096 DEBUG nova.virt.libvirt.driver [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No VIF found with MAC fa:16:3e:3b:43:89, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.404 254096 DEBUG nova.virt.libvirt.driver [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No VIF found with MAC fa:16:3e:b0:cd:a4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.404 254096 INFO nova.virt.libvirt.driver [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Using config drive
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.437 254096 DEBUG nova.storage.rbd_utils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 3b797ee6-c82f-4c01-bb54-31de659fcad8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.884 254096 INFO nova.virt.libvirt.driver [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Creating config drive at /var/lib/nova/instances/3b797ee6-c82f-4c01-bb54-31de659fcad8/disk.config
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.895 254096 DEBUG oslo_concurrency.processutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3b797ee6-c82f-4c01-bb54-31de659fcad8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpreeh6xzc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.941 254096 DEBUG nova.network.neutron [req-641faa51-7a27-4fe1-95ba-45793561a842 req-612d6709-a60a-497b-9044-8f1dcd248546 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Updated VIF entry in instance network info cache for port 831eaa83-55bd-4098-9037-4b628eb8d994. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.943 254096 DEBUG nova.network.neutron [req-641faa51-7a27-4fe1-95ba-45793561a842 req-612d6709-a60a-497b-9044-8f1dcd248546 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Updating instance_info_cache with network_info: [{"id": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "address": "fa:16:3e:3b:43:89", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0e0412-08", "ovs_interfaceid": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "831eaa83-55bd-4098-9037-4b628eb8d994", "address": "fa:16:3e:b0:cd:a4", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb0:cda4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap831eaa83-55", "ovs_interfaceid": "831eaa83-55bd-4098-9037-4b628eb8d994", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:13:19 compute-0 nova_compute[254092]: 2025-11-25 17:13:19.957 254096 DEBUG oslo_concurrency.lockutils [req-641faa51-7a27-4fe1-95ba-45793561a842 req-612d6709-a60a-497b-9044-8f1dcd248546 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-3b797ee6-c82f-4c01-bb54-31de659fcad8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:13:20 compute-0 nova_compute[254092]: 2025-11-25 17:13:20.046 254096 DEBUG oslo_concurrency.processutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3b797ee6-c82f-4c01-bb54-31de659fcad8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpreeh6xzc" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:13:20 compute-0 ceph-mon[74985]: pgmap v2693: 321 pgs: 321 active+clean; 88 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:13:20 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3451132653' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:13:20 compute-0 nova_compute[254092]: 2025-11-25 17:13:20.087 254096 DEBUG nova.storage.rbd_utils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 3b797ee6-c82f-4c01-bb54-31de659fcad8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:13:20 compute-0 nova_compute[254092]: 2025-11-25 17:13:20.094 254096 DEBUG oslo_concurrency.processutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3b797ee6-c82f-4c01-bb54-31de659fcad8/disk.config 3b797ee6-c82f-4c01-bb54-31de659fcad8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:13:20 compute-0 nova_compute[254092]: 2025-11-25 17:13:20.368 254096 DEBUG oslo_concurrency.processutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3b797ee6-c82f-4c01-bb54-31de659fcad8/disk.config 3b797ee6-c82f-4c01-bb54-31de659fcad8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.273s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:13:20 compute-0 nova_compute[254092]: 2025-11-25 17:13:20.369 254096 INFO nova.virt.libvirt.driver [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Deleting local config drive /var/lib/nova/instances/3b797ee6-c82f-4c01-bb54-31de659fcad8/disk.config because it was imported into RBD.
Nov 25 17:13:20 compute-0 kernel: tapbf0e0412-08: entered promiscuous mode
Nov 25 17:13:20 compute-0 NetworkManager[48891]: <info>  [1764090800.4463] manager: (tapbf0e0412-08): new Tun device (/org/freedesktop/NetworkManager/Devices/585)
Nov 25 17:13:20 compute-0 ovn_controller[153477]: 2025-11-25T17:13:20Z|01422|binding|INFO|Claiming lport bf0e0412-082f-4b0e-aabe-4e4f0de25b43 for this chassis.
Nov 25 17:13:20 compute-0 ovn_controller[153477]: 2025-11-25T17:13:20Z|01423|binding|INFO|bf0e0412-082f-4b0e-aabe-4e4f0de25b43: Claiming fa:16:3e:3b:43:89 10.100.0.8
Nov 25 17:13:20 compute-0 nova_compute[254092]: 2025-11-25 17:13:20.449 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:20 compute-0 nova_compute[254092]: 2025-11-25 17:13:20.458 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.470 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3b:43:89 10.100.0.8'], port_security=['fa:16:3e:3b:43:89 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '3b797ee6-c82f-4c01-bb54-31de659fcad8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ddff42cd-c011-4371-96b1-f2bb5093a16d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': '28d41b65-ad5d-4095-abc6-8f6f750d16aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05eb2321-7214-473a-ae97-3a16001b4f58, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=bf0e0412-082f-4b0e-aabe-4e4f0de25b43) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.472 163338 INFO neutron.agent.ovn.metadata.agent [-] Port bf0e0412-082f-4b0e-aabe-4e4f0de25b43 in datapath ddff42cd-c011-4371-96b1-f2bb5093a16d bound to our chassis
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.474 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ddff42cd-c011-4371-96b1-f2bb5093a16d
Nov 25 17:13:20 compute-0 NetworkManager[48891]: <info>  [1764090800.4764] manager: (tap831eaa83-55): new Tun device (/org/freedesktop/NetworkManager/Devices/586)
Nov 25 17:13:20 compute-0 systemd-udevd[404483]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.486 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ddc14153-682f-4120-9b66-0bb065fc8bfa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.487 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapddff42cd-c1 in ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 17:13:20 compute-0 systemd-udevd[404484]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.490 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapddff42cd-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.490 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[66ccfbd7-14d7-495a-9ed1-88131abf007f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.492 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f77a98dd-3a84-4570-bf04-d83458b101c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:13:20 compute-0 NetworkManager[48891]: <info>  [1764090800.5020] device (tapbf0e0412-08): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:13:20 compute-0 NetworkManager[48891]: <info>  [1764090800.5031] device (tapbf0e0412-08): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.509 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[c9fe2b23-ade2-47c3-809b-6fc91e5d46db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:13:20 compute-0 systemd-machined[216343]: New machine qemu-171-instance-00000089.
Nov 25 17:13:20 compute-0 kernel: tap831eaa83-55: entered promiscuous mode
Nov 25 17:13:20 compute-0 NetworkManager[48891]: <info>  [1764090800.5349] device (tap831eaa83-55): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:13:20 compute-0 NetworkManager[48891]: <info>  [1764090800.5360] device (tap831eaa83-55): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:13:20 compute-0 nova_compute[254092]: 2025-11-25 17:13:20.536 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:20 compute-0 ovn_controller[153477]: 2025-11-25T17:13:20Z|01424|binding|INFO|Claiming lport 831eaa83-55bd-4098-9037-4b628eb8d994 for this chassis.
Nov 25 17:13:20 compute-0 ovn_controller[153477]: 2025-11-25T17:13:20Z|01425|binding|INFO|831eaa83-55bd-4098-9037-4b628eb8d994: Claiming fa:16:3e:b0:cd:a4 2001:db8::f816:3eff:feb0:cda4
Nov 25 17:13:20 compute-0 systemd[1]: Started Virtual Machine qemu-171-instance-00000089.
Nov 25 17:13:20 compute-0 ovn_controller[153477]: 2025-11-25T17:13:20Z|01426|binding|INFO|Setting lport bf0e0412-082f-4b0e-aabe-4e4f0de25b43 ovn-installed in OVS
Nov 25 17:13:20 compute-0 ovn_controller[153477]: 2025-11-25T17:13:20Z|01427|binding|INFO|Setting lport bf0e0412-082f-4b0e-aabe-4e4f0de25b43 up in Southbound
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.545 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:cd:a4 2001:db8::f816:3eff:feb0:cda4'], port_security=['fa:16:3e:b0:cd:a4 2001:db8::f816:3eff:feb0:cda4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feb0:cda4/64', 'neutron:device_id': '3b797ee6-c82f-4c01-bb54-31de659fcad8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0063509c-db60-47db-9c49-72faf9b698d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': '28d41b65-ad5d-4095-abc6-8f6f750d16aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd325579-1b15-4e26-8600-222e6e6531a9, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=831eaa83-55bd-4098-9037-4b628eb8d994) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:13:20 compute-0 nova_compute[254092]: 2025-11-25 17:13:20.546 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.547 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a9018650-308d-4d26-bd25-54287f53a06d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:13:20 compute-0 ovn_controller[153477]: 2025-11-25T17:13:20Z|01428|binding|INFO|Setting lport 831eaa83-55bd-4098-9037-4b628eb8d994 ovn-installed in OVS
Nov 25 17:13:20 compute-0 ovn_controller[153477]: 2025-11-25T17:13:20Z|01429|binding|INFO|Setting lport 831eaa83-55bd-4098-9037-4b628eb8d994 up in Southbound
Nov 25 17:13:20 compute-0 nova_compute[254092]: 2025-11-25 17:13:20.556 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.577 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[45a32e69-f62e-43c6-9d51-70111efbd2b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.583 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8d82d789-ec85-4e89-b471-7270328e5d1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:13:20 compute-0 NetworkManager[48891]: <info>  [1764090800.5842] manager: (tapddff42cd-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/587)
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.615 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[52a650e3-8b1a-439f-8795-f631992465d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.619 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[27039a28-5864-4f4f-a967-4b8ad3e6b733]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:13:20 compute-0 NetworkManager[48891]: <info>  [1764090800.6526] device (tapddff42cd-c0): carrier: link connected
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.659 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[dd5c5ccc-eab8-486e-996b-700225fb344c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:13:20 compute-0 nova_compute[254092]: 2025-11-25 17:13:20.674 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.682 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a3f5064d-90b1-4d45-8e59-24033c19f3d4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapddff42cd-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:82:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 417], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 725824, 'reachable_time': 16078, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 404519, 'error': None, 'target': 'ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.706 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c7b77c4b-bfcf-43f4-b595-5bc953d75a11]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed2:82d2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 725824, 'tstamp': 725824}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 404520, 'error': None, 'target': 'ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.734 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[67c51a2f-2441-48e2-8c86-5d92e5cd687c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapddff42cd-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:82:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 417], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 725824, 'reachable_time': 16078, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 404521, 'error': None, 'target': 'ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:13:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2694: 321 pgs: 321 active+clean; 88 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:13:20 compute-0 nova_compute[254092]: 2025-11-25 17:13:20.780 254096 DEBUG nova.compute.manager [req-ed07e558-7343-4759-8e3d-fcd3b764d451 req-32417941-ebd6-49d8-a213-fcfb9547234a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Received event network-vif-plugged-bf0e0412-082f-4b0e-aabe-4e4f0de25b43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:13:20 compute-0 nova_compute[254092]: 2025-11-25 17:13:20.781 254096 DEBUG oslo_concurrency.lockutils [req-ed07e558-7343-4759-8e3d-fcd3b764d451 req-32417941-ebd6-49d8-a213-fcfb9547234a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:13:20 compute-0 nova_compute[254092]: 2025-11-25 17:13:20.781 254096 DEBUG oslo_concurrency.lockutils [req-ed07e558-7343-4759-8e3d-fcd3b764d451 req-32417941-ebd6-49d8-a213-fcfb9547234a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:13:20 compute-0 nova_compute[254092]: 2025-11-25 17:13:20.781 254096 DEBUG oslo_concurrency.lockutils [req-ed07e558-7343-4759-8e3d-fcd3b764d451 req-32417941-ebd6-49d8-a213-fcfb9547234a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:13:20 compute-0 nova_compute[254092]: 2025-11-25 17:13:20.781 254096 DEBUG nova.compute.manager [req-ed07e558-7343-4759-8e3d-fcd3b764d451 req-32417941-ebd6-49d8-a213-fcfb9547234a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Processing event network-vif-plugged-bf0e0412-082f-4b0e-aabe-4e4f0de25b43 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.783 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[27967f56-5d11-4493-9e9d-f5ebcd3b9e17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:13:20 compute-0 nova_compute[254092]: 2025-11-25 17:13:20.794 254096 DEBUG nova.compute.manager [req-1f336669-59ea-477f-bbe6-d012a9bb6f01 req-d046f632-f700-455f-bd29-28c46ee37446 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Received event network-vif-plugged-831eaa83-55bd-4098-9037-4b628eb8d994 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:13:20 compute-0 nova_compute[254092]: 2025-11-25 17:13:20.794 254096 DEBUG oslo_concurrency.lockutils [req-1f336669-59ea-477f-bbe6-d012a9bb6f01 req-d046f632-f700-455f-bd29-28c46ee37446 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:13:20 compute-0 nova_compute[254092]: 2025-11-25 17:13:20.795 254096 DEBUG oslo_concurrency.lockutils [req-1f336669-59ea-477f-bbe6-d012a9bb6f01 req-d046f632-f700-455f-bd29-28c46ee37446 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:13:20 compute-0 nova_compute[254092]: 2025-11-25 17:13:20.795 254096 DEBUG oslo_concurrency.lockutils [req-1f336669-59ea-477f-bbe6-d012a9bb6f01 req-d046f632-f700-455f-bd29-28c46ee37446 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:13:20 compute-0 nova_compute[254092]: 2025-11-25 17:13:20.795 254096 DEBUG nova.compute.manager [req-1f336669-59ea-477f-bbe6-d012a9bb6f01 req-d046f632-f700-455f-bd29-28c46ee37446 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Processing event network-vif-plugged-831eaa83-55bd-4098-9037-4b628eb8d994 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.862 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9871c6ec-67c0-4286-9fc9-675a85d0ef41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.863 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapddff42cd-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.864 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.864 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapddff42cd-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:13:20 compute-0 kernel: tapddff42cd-c0: entered promiscuous mode
Nov 25 17:13:20 compute-0 NetworkManager[48891]: <info>  [1764090800.8662] manager: (tapddff42cd-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/588)
Nov 25 17:13:20 compute-0 nova_compute[254092]: 2025-11-25 17:13:20.866 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.868 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapddff42cd-c0, col_values=(('external_ids', {'iface-id': '9e2ffcd6-4edd-4351-a44c-02659b205145'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:13:20 compute-0 ovn_controller[153477]: 2025-11-25T17:13:20Z|01430|binding|INFO|Releasing lport 9e2ffcd6-4edd-4351-a44c-02659b205145 from this chassis (sb_readonly=0)
Nov 25 17:13:20 compute-0 nova_compute[254092]: 2025-11-25 17:13:20.898 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.898 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ddff42cd-c011-4371-96b1-f2bb5093a16d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ddff42cd-c011-4371-96b1-f2bb5093a16d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.900 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[056dd2d7-7a41-46f8-8481-73473608d67a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.902 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]: global
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-ddff42cd-c011-4371-96b1-f2bb5093a16d
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/ddff42cd-c011-4371-96b1-f2bb5093a16d.pid.haproxy
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID ddff42cd-c011-4371-96b1-f2bb5093a16d
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 17:13:20 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:20.903 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d', 'env', 'PROCESS_TAG=haproxy-ddff42cd-c011-4371-96b1-f2bb5093a16d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ddff42cd-c011-4371-96b1-f2bb5093a16d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 17:13:21 compute-0 podman[404553]: 2025-11-25 17:13:21.405055838 +0000 UTC m=+0.090584817 container create a88a0caf9afcc34c59b170fc8790dab9214efc08778e474baaa41ba4bc7b9587 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 25 17:13:21 compute-0 systemd[1]: Started libpod-conmon-a88a0caf9afcc34c59b170fc8790dab9214efc08778e474baaa41ba4bc7b9587.scope.
Nov 25 17:13:21 compute-0 podman[404553]: 2025-11-25 17:13:21.364261697 +0000 UTC m=+0.049790746 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 17:13:21 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:13:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51787ffbad5844d0d6317c349bae98d9600fb763a7a78dde87ceae6df6f2b06d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 17:13:21 compute-0 podman[404553]: 2025-11-25 17:13:21.510807966 +0000 UTC m=+0.196336965 container init a88a0caf9afcc34c59b170fc8790dab9214efc08778e474baaa41ba4bc7b9587 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 17:13:21 compute-0 podman[404553]: 2025-11-25 17:13:21.516831 +0000 UTC m=+0.202359979 container start a88a0caf9afcc34c59b170fc8790dab9214efc08778e474baaa41ba4bc7b9587 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 17:13:21 compute-0 neutron-haproxy-ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d[404568]: [NOTICE]   (404572) : New worker (404574) forked
Nov 25 17:13:21 compute-0 neutron-haproxy-ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d[404568]: [NOTICE]   (404572) : Loading success.
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.576 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 831eaa83-55bd-4098-9037-4b628eb8d994 in datapath 0063509c-db60-47db-9c49-72faf9b698d7 unbound from our chassis
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.579 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0063509c-db60-47db-9c49-72faf9b698d7
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.595 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[86c9ebfd-2713-44ae-8d5c-2ce1ca46a3ba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.596 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0063509c-d1 in ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.598 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0063509c-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.598 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[807a0361-6006-4142-9db7-bd751d7b6305]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.599 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f8e3b3ae-4bb7-455d-b28e-4685061c9e42]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.613 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[c65c5c93-7bc0-4614-8106-06876a9de3bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.637 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8f175f8b-6a3b-477b-8cde-dedd727901a4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.668 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d67e1803-cf72-43a0-a7ab-cbd338a713a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:13:21 compute-0 NetworkManager[48891]: <info>  [1764090801.6784] manager: (tap0063509c-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/589)
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.677 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[03a24bc2-f6f3-4b5e-964a-14913e5a67fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.731 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ddf33da0-0876-4bb7-ba6a-e84d0ed2be83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.735 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e32fe6b1-568f-4c2b-98e6-11b0e4db20ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:13:21 compute-0 NetworkManager[48891]: <info>  [1764090801.7650] device (tap0063509c-d0): carrier: link connected
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.776 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[650c746c-8518-4242-bab7-338be1cae420]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.795 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0e7403eb-09e9-4b2f-8f34-fd66d660594a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0063509c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:49:59'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 418], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 725935, 'reachable_time': 28494, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 404593, 'error': None, 'target': 'ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.827 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[dbfdd642-2765-4d55-bd5e-6e863670daa2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1a:4959'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 725935, 'tstamp': 725935}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 404594, 'error': None, 'target': 'ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.850 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[dc868875-3d3c-40d0-8e77-7b66b693f890]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0063509c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:49:59'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 418], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 725935, 'reachable_time': 28494, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 404595, 'error': None, 'target': 'ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.898 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[920397de-6f1a-4ca3-82c9-5595d1f791f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.937 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[be1c7155-78e5-4475-814b-da04d0e77482]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.940 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0063509c-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.940 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.941 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0063509c-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:13:21 compute-0 NetworkManager[48891]: <info>  [1764090801.9439] manager: (tap0063509c-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/590)
Nov 25 17:13:21 compute-0 nova_compute[254092]: 2025-11-25 17:13:21.943 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:21 compute-0 kernel: tap0063509c-d0: entered promiscuous mode
Nov 25 17:13:21 compute-0 nova_compute[254092]: 2025-11-25 17:13:21.945 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.949 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0063509c-d0, col_values=(('external_ids', {'iface-id': '76c9f96b-1150-4e5a-a9e7-680d9f908998'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:13:21 compute-0 nova_compute[254092]: 2025-11-25 17:13:21.950 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:21 compute-0 ovn_controller[153477]: 2025-11-25T17:13:21Z|01431|binding|INFO|Releasing lport 76c9f96b-1150-4e5a-a9e7-680d9f908998 from this chassis (sb_readonly=0)
Nov 25 17:13:21 compute-0 nova_compute[254092]: 2025-11-25 17:13:21.951 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.951 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0063509c-db60-47db-9c49-72faf9b698d7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0063509c-db60-47db-9c49-72faf9b698d7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.952 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ca7174f2-44ea-43a1-b974-ccf8c5c0aa74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.953 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]: global
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-0063509c-db60-47db-9c49-72faf9b698d7
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/0063509c-db60-47db-9c49-72faf9b698d7.pid.haproxy
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 0063509c-db60-47db-9c49-72faf9b698d7
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 17:13:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:21.954 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7', 'env', 'PROCESS_TAG=haproxy-0063509c-db60-47db-9c49-72faf9b698d7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0063509c-db60-47db-9c49-72faf9b698d7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 17:13:21 compute-0 nova_compute[254092]: 2025-11-25 17:13:21.964 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:22 compute-0 ceph-mon[74985]: pgmap v2694: 321 pgs: 321 active+clean; 88 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:13:22 compute-0 podman[404663]: 2025-11-25 17:13:22.34748454 +0000 UTC m=+0.072342330 container create 4f26dc2efc5c6a2c6c700c9e316dd5a2a0dfaa9847305fe0c2ec7148aa1045d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 17:13:22 compute-0 nova_compute[254092]: 2025-11-25 17:13:22.363 254096 DEBUG nova.compute.manager [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 17:13:22 compute-0 nova_compute[254092]: 2025-11-25 17:13:22.364 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090802.3631496, 3b797ee6-c82f-4c01-bb54-31de659fcad8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:13:22 compute-0 nova_compute[254092]: 2025-11-25 17:13:22.364 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] VM Started (Lifecycle Event)
Nov 25 17:13:22 compute-0 nova_compute[254092]: 2025-11-25 17:13:22.370 254096 DEBUG nova.virt.libvirt.driver [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 17:13:22 compute-0 nova_compute[254092]: 2025-11-25 17:13:22.378 254096 INFO nova.virt.libvirt.driver [-] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Instance spawned successfully.
Nov 25 17:13:22 compute-0 nova_compute[254092]: 2025-11-25 17:13:22.378 254096 DEBUG nova.virt.libvirt.driver [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 17:13:22 compute-0 nova_compute[254092]: 2025-11-25 17:13:22.381 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:13:22 compute-0 systemd[1]: Started libpod-conmon-4f26dc2efc5c6a2c6c700c9e316dd5a2a0dfaa9847305fe0c2ec7148aa1045d0.scope.
Nov 25 17:13:22 compute-0 nova_compute[254092]: 2025-11-25 17:13:22.384 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:13:22 compute-0 nova_compute[254092]: 2025-11-25 17:13:22.394 254096 DEBUG nova.virt.libvirt.driver [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:13:22 compute-0 nova_compute[254092]: 2025-11-25 17:13:22.394 254096 DEBUG nova.virt.libvirt.driver [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:13:22 compute-0 nova_compute[254092]: 2025-11-25 17:13:22.394 254096 DEBUG nova.virt.libvirt.driver [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:13:22 compute-0 nova_compute[254092]: 2025-11-25 17:13:22.395 254096 DEBUG nova.virt.libvirt.driver [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:13:22 compute-0 nova_compute[254092]: 2025-11-25 17:13:22.395 254096 DEBUG nova.virt.libvirt.driver [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:13:22 compute-0 nova_compute[254092]: 2025-11-25 17:13:22.395 254096 DEBUG nova.virt.libvirt.driver [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:13:22 compute-0 nova_compute[254092]: 2025-11-25 17:13:22.398 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:13:22 compute-0 nova_compute[254092]: 2025-11-25 17:13:22.398 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090802.3641853, 3b797ee6-c82f-4c01-bb54-31de659fcad8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:13:22 compute-0 nova_compute[254092]: 2025-11-25 17:13:22.399 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] VM Paused (Lifecycle Event)
Nov 25 17:13:22 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:13:22 compute-0 podman[404663]: 2025-11-25 17:13:22.323148957 +0000 UTC m=+0.048006767 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 17:13:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c4e541847c2aac7dd59889538a8dcfac905bc7101e6c3c35a058f1755af39f3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 17:13:22 compute-0 nova_compute[254092]: 2025-11-25 17:13:22.420 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:13:22 compute-0 nova_compute[254092]: 2025-11-25 17:13:22.422 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090802.3697693, 3b797ee6-c82f-4c01-bb54-31de659fcad8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:13:22 compute-0 nova_compute[254092]: 2025-11-25 17:13:22.423 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] VM Resumed (Lifecycle Event)
Nov 25 17:13:22 compute-0 podman[404663]: 2025-11-25 17:13:22.434418586 +0000 UTC m=+0.159276406 container init 4f26dc2efc5c6a2c6c700c9e316dd5a2a0dfaa9847305fe0c2ec7148aa1045d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Nov 25 17:13:22 compute-0 podman[404663]: 2025-11-25 17:13:22.441856779 +0000 UTC m=+0.166714569 container start 4f26dc2efc5c6a2c6c700c9e316dd5a2a0dfaa9847305fe0c2ec7148aa1045d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 17:13:22 compute-0 nova_compute[254092]: 2025-11-25 17:13:22.445 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:13:22 compute-0 nova_compute[254092]: 2025-11-25 17:13:22.448 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:13:22 compute-0 nova_compute[254092]: 2025-11-25 17:13:22.454 254096 INFO nova.compute.manager [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Took 11.14 seconds to spawn the instance on the hypervisor.
Nov 25 17:13:22 compute-0 nova_compute[254092]: 2025-11-25 17:13:22.455 254096 DEBUG nova.compute.manager [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:13:22 compute-0 nova_compute[254092]: 2025-11-25 17:13:22.463 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:13:22 compute-0 neutron-haproxy-ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7[404684]: [NOTICE]   (404688) : New worker (404690) forked
Nov 25 17:13:22 compute-0 neutron-haproxy-ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7[404684]: [NOTICE]   (404688) : Loading success.
Nov 25 17:13:22 compute-0 nova_compute[254092]: 2025-11-25 17:13:22.507 254096 INFO nova.compute.manager [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Took 12.01 seconds to build instance.
Nov 25 17:13:22 compute-0 nova_compute[254092]: 2025-11-25 17:13:22.519 254096 DEBUG oslo_concurrency.lockutils [None req-e941ce0e-01d2-497c-b134-69856f853e4b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:13:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2695: 321 pgs: 321 active+clean; 88 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:13:22 compute-0 nova_compute[254092]: 2025-11-25 17:13:22.894 254096 DEBUG nova.compute.manager [req-ead1bccd-6a77-4a49-9c0c-61b665087eff req-23d9d6d0-590c-4ae1-9c3b-6db8d41e81c9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Received event network-vif-plugged-bf0e0412-082f-4b0e-aabe-4e4f0de25b43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:13:22 compute-0 nova_compute[254092]: 2025-11-25 17:13:22.894 254096 DEBUG oslo_concurrency.lockutils [req-ead1bccd-6a77-4a49-9c0c-61b665087eff req-23d9d6d0-590c-4ae1-9c3b-6db8d41e81c9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:13:22 compute-0 nova_compute[254092]: 2025-11-25 17:13:22.895 254096 DEBUG oslo_concurrency.lockutils [req-ead1bccd-6a77-4a49-9c0c-61b665087eff req-23d9d6d0-590c-4ae1-9c3b-6db8d41e81c9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:13:22 compute-0 nova_compute[254092]: 2025-11-25 17:13:22.895 254096 DEBUG oslo_concurrency.lockutils [req-ead1bccd-6a77-4a49-9c0c-61b665087eff req-23d9d6d0-590c-4ae1-9c3b-6db8d41e81c9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:13:22 compute-0 nova_compute[254092]: 2025-11-25 17:13:22.895 254096 DEBUG nova.compute.manager [req-ead1bccd-6a77-4a49-9c0c-61b665087eff req-23d9d6d0-590c-4ae1-9c3b-6db8d41e81c9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] No waiting events found dispatching network-vif-plugged-bf0e0412-082f-4b0e-aabe-4e4f0de25b43 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:13:22 compute-0 nova_compute[254092]: 2025-11-25 17:13:22.895 254096 WARNING nova.compute.manager [req-ead1bccd-6a77-4a49-9c0c-61b665087eff req-23d9d6d0-590c-4ae1-9c3b-6db8d41e81c9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Received unexpected event network-vif-plugged-bf0e0412-082f-4b0e-aabe-4e4f0de25b43 for instance with vm_state active and task_state None.
Nov 25 17:13:22 compute-0 nova_compute[254092]: 2025-11-25 17:13:22.969 254096 DEBUG nova.compute.manager [req-b453ef60-9871-4b91-89b0-e17544c2884d req-3784efcb-0b47-48cb-89ec-c4d9f8e60199 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Received event network-vif-plugged-831eaa83-55bd-4098-9037-4b628eb8d994 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:13:22 compute-0 nova_compute[254092]: 2025-11-25 17:13:22.969 254096 DEBUG oslo_concurrency.lockutils [req-b453ef60-9871-4b91-89b0-e17544c2884d req-3784efcb-0b47-48cb-89ec-c4d9f8e60199 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:13:22 compute-0 nova_compute[254092]: 2025-11-25 17:13:22.970 254096 DEBUG oslo_concurrency.lockutils [req-b453ef60-9871-4b91-89b0-e17544c2884d req-3784efcb-0b47-48cb-89ec-c4d9f8e60199 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:13:22 compute-0 nova_compute[254092]: 2025-11-25 17:13:22.970 254096 DEBUG oslo_concurrency.lockutils [req-b453ef60-9871-4b91-89b0-e17544c2884d req-3784efcb-0b47-48cb-89ec-c4d9f8e60199 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:13:22 compute-0 nova_compute[254092]: 2025-11-25 17:13:22.970 254096 DEBUG nova.compute.manager [req-b453ef60-9871-4b91-89b0-e17544c2884d req-3784efcb-0b47-48cb-89ec-c4d9f8e60199 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] No waiting events found dispatching network-vif-plugged-831eaa83-55bd-4098-9037-4b628eb8d994 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:13:22 compute-0 nova_compute[254092]: 2025-11-25 17:13:22.970 254096 WARNING nova.compute.manager [req-b453ef60-9871-4b91-89b0-e17544c2884d req-3784efcb-0b47-48cb-89ec-c4d9f8e60199 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Received unexpected event network-vif-plugged-831eaa83-55bd-4098-9037-4b628eb8d994 for instance with vm_state active and task_state None.
Nov 25 17:13:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:13:24 compute-0 ceph-mon[74985]: pgmap v2695: 321 pgs: 321 active+clean; 88 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:13:24 compute-0 nova_compute[254092]: 2025-11-25 17:13:24.331 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2696: 321 pgs: 321 active+clean; 88 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:13:25 compute-0 nova_compute[254092]: 2025-11-25 17:13:25.676 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:26 compute-0 ceph-mon[74985]: pgmap v2696: 321 pgs: 321 active+clean; 88 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:13:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2697: 321 pgs: 321 active+clean; 88 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 17:13:26 compute-0 ovn_controller[153477]: 2025-11-25T17:13:26Z|01432|binding|INFO|Releasing lport 76c9f96b-1150-4e5a-a9e7-680d9f908998 from this chassis (sb_readonly=0)
Nov 25 17:13:26 compute-0 ovn_controller[153477]: 2025-11-25T17:13:26Z|01433|binding|INFO|Releasing lport 9e2ffcd6-4edd-4351-a44c-02659b205145 from this chassis (sb_readonly=0)
Nov 25 17:13:26 compute-0 NetworkManager[48891]: <info>  [1764090806.8308] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/591)
Nov 25 17:13:26 compute-0 NetworkManager[48891]: <info>  [1764090806.8323] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/592)
Nov 25 17:13:26 compute-0 nova_compute[254092]: 2025-11-25 17:13:26.829 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:26 compute-0 ovn_controller[153477]: 2025-11-25T17:13:26Z|01434|binding|INFO|Releasing lport 76c9f96b-1150-4e5a-a9e7-680d9f908998 from this chassis (sb_readonly=0)
Nov 25 17:13:26 compute-0 ovn_controller[153477]: 2025-11-25T17:13:26Z|01435|binding|INFO|Releasing lport 9e2ffcd6-4edd-4351-a44c-02659b205145 from this chassis (sb_readonly=0)
Nov 25 17:13:26 compute-0 nova_compute[254092]: 2025-11-25 17:13:26.903 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:27 compute-0 nova_compute[254092]: 2025-11-25 17:13:27.160 254096 DEBUG nova.compute.manager [req-42af74e8-fc25-43f0-a2e6-57b9454f77b6 req-b1f85d22-d1ac-4cd6-9743-2391f9d32c7c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Received event network-changed-bf0e0412-082f-4b0e-aabe-4e4f0de25b43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:13:27 compute-0 nova_compute[254092]: 2025-11-25 17:13:27.161 254096 DEBUG nova.compute.manager [req-42af74e8-fc25-43f0-a2e6-57b9454f77b6 req-b1f85d22-d1ac-4cd6-9743-2391f9d32c7c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Refreshing instance network info cache due to event network-changed-bf0e0412-082f-4b0e-aabe-4e4f0de25b43. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:13:27 compute-0 nova_compute[254092]: 2025-11-25 17:13:27.161 254096 DEBUG oslo_concurrency.lockutils [req-42af74e8-fc25-43f0-a2e6-57b9454f77b6 req-b1f85d22-d1ac-4cd6-9743-2391f9d32c7c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-3b797ee6-c82f-4c01-bb54-31de659fcad8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:13:27 compute-0 nova_compute[254092]: 2025-11-25 17:13:27.161 254096 DEBUG oslo_concurrency.lockutils [req-42af74e8-fc25-43f0-a2e6-57b9454f77b6 req-b1f85d22-d1ac-4cd6-9743-2391f9d32c7c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-3b797ee6-c82f-4c01-bb54-31de659fcad8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:13:27 compute-0 nova_compute[254092]: 2025-11-25 17:13:27.162 254096 DEBUG nova.network.neutron [req-42af74e8-fc25-43f0-a2e6-57b9454f77b6 req-b1f85d22-d1ac-4cd6-9743-2391f9d32c7c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Refreshing network info cache for port bf0e0412-082f-4b0e-aabe-4e4f0de25b43 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:13:28 compute-0 ceph-mon[74985]: pgmap v2697: 321 pgs: 321 active+clean; 88 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 17:13:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2698: 321 pgs: 321 active+clean; 88 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:13:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:13:29 compute-0 nova_compute[254092]: 2025-11-25 17:13:29.113 254096 DEBUG nova.network.neutron [req-42af74e8-fc25-43f0-a2e6-57b9454f77b6 req-b1f85d22-d1ac-4cd6-9743-2391f9d32c7c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Updated VIF entry in instance network info cache for port bf0e0412-082f-4b0e-aabe-4e4f0de25b43. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:13:29 compute-0 nova_compute[254092]: 2025-11-25 17:13:29.114 254096 DEBUG nova.network.neutron [req-42af74e8-fc25-43f0-a2e6-57b9454f77b6 req-b1f85d22-d1ac-4cd6-9743-2391f9d32c7c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Updating instance_info_cache with network_info: [{"id": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "address": "fa:16:3e:3b:43:89", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0e0412-08", "ovs_interfaceid": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "831eaa83-55bd-4098-9037-4b628eb8d994", "address": "fa:16:3e:b0:cd:a4", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb0:cda4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap831eaa83-55", "ovs_interfaceid": "831eaa83-55bd-4098-9037-4b628eb8d994", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:13:29 compute-0 nova_compute[254092]: 2025-11-25 17:13:29.133 254096 DEBUG oslo_concurrency.lockutils [req-42af74e8-fc25-43f0-a2e6-57b9454f77b6 req-b1f85d22-d1ac-4cd6-9743-2391f9d32c7c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-3b797ee6-c82f-4c01-bb54-31de659fcad8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:13:29 compute-0 podman[404710]: 2025-11-25 17:13:29.192736923 +0000 UTC m=+0.061362672 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 25 17:13:29 compute-0 podman[404700]: 2025-11-25 17:13:29.199915658 +0000 UTC m=+0.099677624 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 17:13:29 compute-0 podman[404712]: 2025-11-25 17:13:29.239711221 +0000 UTC m=+0.102931742 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2)
Nov 25 17:13:29 compute-0 nova_compute[254092]: 2025-11-25 17:13:29.332 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:30 compute-0 ceph-mon[74985]: pgmap v2698: 321 pgs: 321 active+clean; 88 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:13:30 compute-0 nova_compute[254092]: 2025-11-25 17:13:30.678 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2699: 321 pgs: 321 active+clean; 88 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:13:31 compute-0 nova_compute[254092]: 2025-11-25 17:13:31.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:13:32 compute-0 ceph-mon[74985]: pgmap v2699: 321 pgs: 321 active+clean; 88 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:13:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2700: 321 pgs: 321 active+clean; 88 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:13:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:13:34 compute-0 ceph-mon[74985]: pgmap v2700: 321 pgs: 321 active+clean; 88 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:13:34 compute-0 nova_compute[254092]: 2025-11-25 17:13:34.335 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2701: 321 pgs: 321 active+clean; 88 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Nov 25 17:13:35 compute-0 ovn_controller[153477]: 2025-11-25T17:13:35Z|00170|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3b:43:89 10.100.0.8
Nov 25 17:13:35 compute-0 ovn_controller[153477]: 2025-11-25T17:13:35Z|00171|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3b:43:89 10.100.0.8
Nov 25 17:13:35 compute-0 nova_compute[254092]: 2025-11-25 17:13:35.682 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:36 compute-0 ceph-mon[74985]: pgmap v2701: 321 pgs: 321 active+clean; 88 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Nov 25 17:13:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2702: 321 pgs: 321 active+clean; 118 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 126 op/s
Nov 25 17:13:38 compute-0 ceph-mon[74985]: pgmap v2702: 321 pgs: 321 active+clean; 118 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 126 op/s
Nov 25 17:13:38 compute-0 nova_compute[254092]: 2025-11-25 17:13:38.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:13:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2703: 321 pgs: 321 active+clean; 118 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 315 KiB/s rd, 2.1 MiB/s wr, 53 op/s
Nov 25 17:13:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:13:39 compute-0 nova_compute[254092]: 2025-11-25 17:13:39.338 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:13:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:13:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:13:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:13:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:13:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:13:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:13:40
Nov 25 17:13:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:13:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:13:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['.mgr', 'default.rgw.meta', 'default.rgw.control', 'vms', 'images', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data']
Nov 25 17:13:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:13:40 compute-0 ceph-mon[74985]: pgmap v2703: 321 pgs: 321 active+clean; 118 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 315 KiB/s rd, 2.1 MiB/s wr, 53 op/s
Nov 25 17:13:40 compute-0 nova_compute[254092]: 2025-11-25 17:13:40.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:13:40 compute-0 nova_compute[254092]: 2025-11-25 17:13:40.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:13:40 compute-0 nova_compute[254092]: 2025-11-25 17:13:40.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:13:40 compute-0 nova_compute[254092]: 2025-11-25 17:13:40.524 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:13:40 compute-0 nova_compute[254092]: 2025-11-25 17:13:40.525 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:13:40 compute-0 nova_compute[254092]: 2025-11-25 17:13:40.525 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:13:40 compute-0 nova_compute[254092]: 2025-11-25 17:13:40.525 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:13:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:13:40 compute-0 nova_compute[254092]: 2025-11-25 17:13:40.526 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:13:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:13:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:13:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:13:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:13:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:13:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:13:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:13:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:13:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:13:40 compute-0 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 17:13:40 compute-0 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.2 total, 600.0 interval
                                           Cumulative writes: 39K writes, 162K keys, 39K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s
                                           Cumulative WAL: 39K writes, 14K syncs, 2.84 writes per sync, written: 0.15 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4681 writes, 19K keys, 4681 commit groups, 1.0 writes per commit group, ingest: 20.83 MB, 0.03 MB/s
                                           Interval WAL: 4681 writes, 1788 syncs, 2.62 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 17:13:40 compute-0 nova_compute[254092]: 2025-11-25 17:13:40.685 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2704: 321 pgs: 321 active+clean; 121 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:13:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:13:40 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3777566246' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:13:40 compute-0 nova_compute[254092]: 2025-11-25 17:13:40.961 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:13:41 compute-0 nova_compute[254092]: 2025-11-25 17:13:41.056 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:13:41 compute-0 nova_compute[254092]: 2025-11-25 17:13:41.056 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:13:41 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3777566246' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:13:41 compute-0 nova_compute[254092]: 2025-11-25 17:13:41.275 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:13:41 compute-0 nova_compute[254092]: 2025-11-25 17:13:41.276 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3464MB free_disk=59.94353485107422GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:13:41 compute-0 nova_compute[254092]: 2025-11-25 17:13:41.276 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:13:41 compute-0 nova_compute[254092]: 2025-11-25 17:13:41.276 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:13:41 compute-0 nova_compute[254092]: 2025-11-25 17:13:41.325 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 3b797ee6-c82f-4c01-bb54-31de659fcad8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 17:13:41 compute-0 nova_compute[254092]: 2025-11-25 17:13:41.326 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:13:41 compute-0 nova_compute[254092]: 2025-11-25 17:13:41.326 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:13:41 compute-0 nova_compute[254092]: 2025-11-25 17:13:41.363 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:13:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:13:41 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4000141578' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:13:41 compute-0 nova_compute[254092]: 2025-11-25 17:13:41.769 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:13:41 compute-0 nova_compute[254092]: 2025-11-25 17:13:41.775 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:13:41 compute-0 nova_compute[254092]: 2025-11-25 17:13:41.798 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:13:41 compute-0 nova_compute[254092]: 2025-11-25 17:13:41.831 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:13:41 compute-0 nova_compute[254092]: 2025-11-25 17:13:41.831 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:13:42 compute-0 ceph-mon[74985]: pgmap v2704: 321 pgs: 321 active+clean; 121 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:13:42 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4000141578' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:13:42 compute-0 sudo[404805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:13:42 compute-0 sudo[404805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:13:42 compute-0 sudo[404805]: pam_unix(sudo:session): session closed for user root
Nov 25 17:13:42 compute-0 sudo[404830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:13:42 compute-0 sudo[404830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:13:42 compute-0 sudo[404830]: pam_unix(sudo:session): session closed for user root
Nov 25 17:13:42 compute-0 sudo[404855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:13:42 compute-0 sudo[404855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:13:42 compute-0 sudo[404855]: pam_unix(sudo:session): session closed for user root
Nov 25 17:13:42 compute-0 sudo[404880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:13:42 compute-0 sudo[404880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:13:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2705: 321 pgs: 321 active+clean; 121 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:13:42 compute-0 nova_compute[254092]: 2025-11-25 17:13:42.827 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:13:42 compute-0 nova_compute[254092]: 2025-11-25 17:13:42.829 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:13:42 compute-0 nova_compute[254092]: 2025-11-25 17:13:42.829 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:13:43 compute-0 sudo[404880]: pam_unix(sudo:session): session closed for user root
Nov 25 17:13:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:13:43 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:13:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:13:43 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:13:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:13:43 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:13:43 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 01e06fed-d822-4b7c-9fa4-72bcf23b5bdf does not exist
Nov 25 17:13:43 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev bb345b48-71a5-4c92-857f-3b1c83949a0f does not exist
Nov 25 17:13:43 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 2007107d-0597-4843-aa30-1f934a325df9 does not exist
Nov 25 17:13:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:13:43 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:13:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:13:43 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:13:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:13:43 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:13:43 compute-0 sudo[404936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:13:43 compute-0 sudo[404936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:13:43 compute-0 sudo[404936]: pam_unix(sudo:session): session closed for user root
Nov 25 17:13:43 compute-0 sudo[404961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:13:43 compute-0 sudo[404961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:13:43 compute-0 sudo[404961]: pam_unix(sudo:session): session closed for user root
Nov 25 17:13:43 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:13:43 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:13:43 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:13:43 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:13:43 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:13:43 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:13:43 compute-0 sudo[404986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:13:43 compute-0 sudo[404986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:13:43 compute-0 sudo[404986]: pam_unix(sudo:session): session closed for user root
Nov 25 17:13:43 compute-0 sudo[405011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:13:43 compute-0 sudo[405011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:13:43 compute-0 podman[405077]: 2025-11-25 17:13:43.626044457 +0000 UTC m=+0.040784821 container create 9127fb75ee98ddc9998c8e3bb82cb2849d3164b8a588bbb252aba1bd337f480c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_burnell, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 17:13:43 compute-0 systemd[1]: Started libpod-conmon-9127fb75ee98ddc9998c8e3bb82cb2849d3164b8a588bbb252aba1bd337f480c.scope.
Nov 25 17:13:43 compute-0 podman[405077]: 2025-11-25 17:13:43.606240178 +0000 UTC m=+0.020980552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:13:43 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:13:43 compute-0 podman[405077]: 2025-11-25 17:13:43.739520685 +0000 UTC m=+0.154261089 container init 9127fb75ee98ddc9998c8e3bb82cb2849d3164b8a588bbb252aba1bd337f480c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_burnell, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 17:13:43 compute-0 podman[405077]: 2025-11-25 17:13:43.747831062 +0000 UTC m=+0.162571446 container start 9127fb75ee98ddc9998c8e3bb82cb2849d3164b8a588bbb252aba1bd337f480c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_burnell, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 17:13:43 compute-0 hardcore_burnell[405093]: 167 167
Nov 25 17:13:43 compute-0 systemd[1]: libpod-9127fb75ee98ddc9998c8e3bb82cb2849d3164b8a588bbb252aba1bd337f480c.scope: Deactivated successfully.
Nov 25 17:13:43 compute-0 conmon[405093]: conmon 9127fb75ee98ddc9998c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9127fb75ee98ddc9998c8e3bb82cb2849d3164b8a588bbb252aba1bd337f480c.scope/container/memory.events
Nov 25 17:13:43 compute-0 podman[405077]: 2025-11-25 17:13:43.756375164 +0000 UTC m=+0.171115518 container attach 9127fb75ee98ddc9998c8e3bb82cb2849d3164b8a588bbb252aba1bd337f480c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:13:43 compute-0 podman[405077]: 2025-11-25 17:13:43.757611278 +0000 UTC m=+0.172351672 container died 9127fb75ee98ddc9998c8e3bb82cb2849d3164b8a588bbb252aba1bd337f480c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_burnell, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 17:13:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-509dcad1c59146c290d28ab6d921495739286c56f60efe500719027c89afdc18-merged.mount: Deactivated successfully.
Nov 25 17:13:43 compute-0 podman[405077]: 2025-11-25 17:13:43.836081074 +0000 UTC m=+0.250821438 container remove 9127fb75ee98ddc9998c8e3bb82cb2849d3164b8a588bbb252aba1bd337f480c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_burnell, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 17:13:43 compute-0 systemd[1]: libpod-conmon-9127fb75ee98ddc9998c8e3bb82cb2849d3164b8a588bbb252aba1bd337f480c.scope: Deactivated successfully.
Nov 25 17:13:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:13:44 compute-0 podman[405115]: 2025-11-25 17:13:44.104906572 +0000 UTC m=+0.063776338 container create 89c8471f5dd2c9dffd986e0808170cc162b8cf213a5dbde6f7a471b08c7387e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:13:44 compute-0 systemd[1]: Started libpod-conmon-89c8471f5dd2c9dffd986e0808170cc162b8cf213a5dbde6f7a471b08c7387e4.scope.
Nov 25 17:13:44 compute-0 podman[405115]: 2025-11-25 17:13:44.084106455 +0000 UTC m=+0.042976161 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:13:44 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:13:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ab13b98b47836766437c19ffd9d01ba5f926f81d0f5a1c4b33787769c84a0c7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:13:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ab13b98b47836766437c19ffd9d01ba5f926f81d0f5a1c4b33787769c84a0c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:13:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ab13b98b47836766437c19ffd9d01ba5f926f81d0f5a1c4b33787769c84a0c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:13:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ab13b98b47836766437c19ffd9d01ba5f926f81d0f5a1c4b33787769c84a0c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:13:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ab13b98b47836766437c19ffd9d01ba5f926f81d0f5a1c4b33787769c84a0c7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:13:44 compute-0 podman[405115]: 2025-11-25 17:13:44.212542751 +0000 UTC m=+0.171412457 container init 89c8471f5dd2c9dffd986e0808170cc162b8cf213a5dbde6f7a471b08c7387e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_jang, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:13:44 compute-0 podman[405115]: 2025-11-25 17:13:44.221559706 +0000 UTC m=+0.180429392 container start 89c8471f5dd2c9dffd986e0808170cc162b8cf213a5dbde6f7a471b08c7387e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:13:44 compute-0 podman[405115]: 2025-11-25 17:13:44.225043712 +0000 UTC m=+0.183913398 container attach 89c8471f5dd2c9dffd986e0808170cc162b8cf213a5dbde6f7a471b08c7387e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Nov 25 17:13:44 compute-0 ceph-mon[74985]: pgmap v2705: 321 pgs: 321 active+clean; 121 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:13:44 compute-0 nova_compute[254092]: 2025-11-25 17:13:44.382 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:44 compute-0 nova_compute[254092]: 2025-11-25 17:13:44.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:13:44 compute-0 nova_compute[254092]: 2025-11-25 17:13:44.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 17:13:44 compute-0 nova_compute[254092]: 2025-11-25 17:13:44.508 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 17:13:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2706: 321 pgs: 321 active+clean; 121 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:13:45 compute-0 admiring_jang[405132]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:13:45 compute-0 admiring_jang[405132]: --> relative data size: 1.0
Nov 25 17:13:45 compute-0 admiring_jang[405132]: --> All data devices are unavailable
Nov 25 17:13:45 compute-0 systemd[1]: libpod-89c8471f5dd2c9dffd986e0808170cc162b8cf213a5dbde6f7a471b08c7387e4.scope: Deactivated successfully.
Nov 25 17:13:45 compute-0 systemd[1]: libpod-89c8471f5dd2c9dffd986e0808170cc162b8cf213a5dbde6f7a471b08c7387e4.scope: Consumed 1.023s CPU time.
Nov 25 17:13:45 compute-0 podman[405161]: 2025-11-25 17:13:45.369438171 +0000 UTC m=+0.047256718 container died 89c8471f5dd2c9dffd986e0808170cc162b8cf213a5dbde6f7a471b08c7387e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 17:13:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ab13b98b47836766437c19ffd9d01ba5f926f81d0f5a1c4b33787769c84a0c7-merged.mount: Deactivated successfully.
Nov 25 17:13:45 compute-0 podman[405161]: 2025-11-25 17:13:45.442332815 +0000 UTC m=+0.120151372 container remove 89c8471f5dd2c9dffd986e0808170cc162b8cf213a5dbde6f7a471b08c7387e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_jang, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 17:13:45 compute-0 systemd[1]: libpod-conmon-89c8471f5dd2c9dffd986e0808170cc162b8cf213a5dbde6f7a471b08c7387e4.scope: Deactivated successfully.
Nov 25 17:13:45 compute-0 sudo[405011]: pam_unix(sudo:session): session closed for user root
Nov 25 17:13:45 compute-0 sudo[405176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:13:45 compute-0 sudo[405176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:13:45 compute-0 sudo[405176]: pam_unix(sudo:session): session closed for user root
Nov 25 17:13:45 compute-0 sudo[405201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:13:45 compute-0 sudo[405201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:13:45 compute-0 sudo[405201]: pam_unix(sudo:session): session closed for user root
Nov 25 17:13:45 compute-0 nova_compute[254092]: 2025-11-25 17:13:45.687 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:45 compute-0 sudo[405226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:13:45 compute-0 sudo[405226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:13:45 compute-0 sudo[405226]: pam_unix(sudo:session): session closed for user root
Nov 25 17:13:45 compute-0 sudo[405251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:13:45 compute-0 sudo[405251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:13:46 compute-0 podman[405316]: 2025-11-25 17:13:46.118465098 +0000 UTC m=+0.052463868 container create fcec59e758257e1bde87fdb900f72cfee47ba71b9280c49de8ea094a02bf7a25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_stonebraker, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:13:46 compute-0 systemd[1]: Started libpod-conmon-fcec59e758257e1bde87fdb900f72cfee47ba71b9280c49de8ea094a02bf7a25.scope.
Nov 25 17:13:46 compute-0 podman[405316]: 2025-11-25 17:13:46.096068078 +0000 UTC m=+0.030066898 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:13:46 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:13:46 compute-0 podman[405316]: 2025-11-25 17:13:46.234341093 +0000 UTC m=+0.168339903 container init fcec59e758257e1bde87fdb900f72cfee47ba71b9280c49de8ea094a02bf7a25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 17:13:46 compute-0 podman[405316]: 2025-11-25 17:13:46.244045036 +0000 UTC m=+0.178043806 container start fcec59e758257e1bde87fdb900f72cfee47ba71b9280c49de8ea094a02bf7a25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_stonebraker, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 17:13:46 compute-0 podman[405316]: 2025-11-25 17:13:46.248496788 +0000 UTC m=+0.182495578 container attach fcec59e758257e1bde87fdb900f72cfee47ba71b9280c49de8ea094a02bf7a25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_stonebraker, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 17:13:46 compute-0 stoic_stonebraker[405333]: 167 167
Nov 25 17:13:46 compute-0 systemd[1]: libpod-fcec59e758257e1bde87fdb900f72cfee47ba71b9280c49de8ea094a02bf7a25.scope: Deactivated successfully.
Nov 25 17:13:46 compute-0 ceph-mon[74985]: pgmap v2706: 321 pgs: 321 active+clean; 121 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:13:46 compute-0 podman[405316]: 2025-11-25 17:13:46.253877555 +0000 UTC m=+0.187876345 container died fcec59e758257e1bde87fdb900f72cfee47ba71b9280c49de8ea094a02bf7a25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_stonebraker, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 17:13:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-94397c57a9ffb74739d2f457cc2b11d9187b1aa7c68f306f0192c67c2203ff8b-merged.mount: Deactivated successfully.
Nov 25 17:13:46 compute-0 podman[405316]: 2025-11-25 17:13:46.299389343 +0000 UTC m=+0.233388123 container remove fcec59e758257e1bde87fdb900f72cfee47ba71b9280c49de8ea094a02bf7a25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_stonebraker, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 17:13:46 compute-0 systemd[1]: libpod-conmon-fcec59e758257e1bde87fdb900f72cfee47ba71b9280c49de8ea094a02bf7a25.scope: Deactivated successfully.
Nov 25 17:13:46 compute-0 podman[405357]: 2025-11-25 17:13:46.511689812 +0000 UTC m=+0.058621407 container create 0b715eebf87aa9983201eba7f69b00f2cce03b0b5191ebb139e52c56c5de3347 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 17:13:46 compute-0 nova_compute[254092]: 2025-11-25 17:13:46.509 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:13:46 compute-0 nova_compute[254092]: 2025-11-25 17:13:46.510 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:13:46 compute-0 nova_compute[254092]: 2025-11-25 17:13:46.510 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:13:46 compute-0 systemd[1]: Started libpod-conmon-0b715eebf87aa9983201eba7f69b00f2cce03b0b5191ebb139e52c56c5de3347.scope.
Nov 25 17:13:46 compute-0 podman[405357]: 2025-11-25 17:13:46.491001578 +0000 UTC m=+0.037933213 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:13:46 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:13:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a89e7e47456c0fe4c453e5ec468c70ed8ffe1cd5bf189142e2e58c2c58c37525/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:13:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a89e7e47456c0fe4c453e5ec468c70ed8ffe1cd5bf189142e2e58c2c58c37525/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:13:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a89e7e47456c0fe4c453e5ec468c70ed8ffe1cd5bf189142e2e58c2c58c37525/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:13:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a89e7e47456c0fe4c453e5ec468c70ed8ffe1cd5bf189142e2e58c2c58c37525/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:13:46 compute-0 podman[405357]: 2025-11-25 17:13:46.633528668 +0000 UTC m=+0.180460263 container init 0b715eebf87aa9983201eba7f69b00f2cce03b0b5191ebb139e52c56c5de3347 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_darwin, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:13:46 compute-0 podman[405357]: 2025-11-25 17:13:46.653170253 +0000 UTC m=+0.200101838 container start 0b715eebf87aa9983201eba7f69b00f2cce03b0b5191ebb139e52c56c5de3347 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 17:13:46 compute-0 podman[405357]: 2025-11-25 17:13:46.658248521 +0000 UTC m=+0.205180126 container attach 0b715eebf87aa9983201eba7f69b00f2cce03b0b5191ebb139e52c56c5de3347 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_darwin, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 17:13:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2707: 321 pgs: 321 active+clean; 121 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:13:46 compute-0 nova_compute[254092]: 2025-11-25 17:13:46.829 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-3b797ee6-c82f-4c01-bb54-31de659fcad8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:13:46 compute-0 nova_compute[254092]: 2025-11-25 17:13:46.830 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-3b797ee6-c82f-4c01-bb54-31de659fcad8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:13:46 compute-0 nova_compute[254092]: 2025-11-25 17:13:46.830 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 17:13:46 compute-0 nova_compute[254092]: 2025-11-25 17:13:46.831 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3b797ee6-c82f-4c01-bb54-31de659fcad8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:13:47 compute-0 nova_compute[254092]: 2025-11-25 17:13:47.378 254096 DEBUG oslo_concurrency.lockutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "6157e53b-5ff5-4b55-b71a-125301c5268a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:13:47 compute-0 nova_compute[254092]: 2025-11-25 17:13:47.378 254096 DEBUG oslo_concurrency.lockutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:13:47 compute-0 nova_compute[254092]: 2025-11-25 17:13:47.397 254096 DEBUG nova.compute.manager [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 17:13:47 compute-0 silly_darwin[405373]: {
Nov 25 17:13:47 compute-0 silly_darwin[405373]:     "0": [
Nov 25 17:13:47 compute-0 silly_darwin[405373]:         {
Nov 25 17:13:47 compute-0 silly_darwin[405373]:             "devices": [
Nov 25 17:13:47 compute-0 silly_darwin[405373]:                 "/dev/loop3"
Nov 25 17:13:47 compute-0 silly_darwin[405373]:             ],
Nov 25 17:13:47 compute-0 silly_darwin[405373]:             "lv_name": "ceph_lv0",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:             "lv_size": "21470642176",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:             "name": "ceph_lv0",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:             "tags": {
Nov 25 17:13:47 compute-0 silly_darwin[405373]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:                 "ceph.cluster_name": "ceph",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:                 "ceph.crush_device_class": "",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:                 "ceph.encrypted": "0",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:                 "ceph.osd_id": "0",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:                 "ceph.type": "block",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:                 "ceph.vdo": "0"
Nov 25 17:13:47 compute-0 silly_darwin[405373]:             },
Nov 25 17:13:47 compute-0 silly_darwin[405373]:             "type": "block",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:             "vg_name": "ceph_vg0"
Nov 25 17:13:47 compute-0 silly_darwin[405373]:         }
Nov 25 17:13:47 compute-0 silly_darwin[405373]:     ],
Nov 25 17:13:47 compute-0 silly_darwin[405373]:     "1": [
Nov 25 17:13:47 compute-0 silly_darwin[405373]:         {
Nov 25 17:13:47 compute-0 silly_darwin[405373]:             "devices": [
Nov 25 17:13:47 compute-0 silly_darwin[405373]:                 "/dev/loop4"
Nov 25 17:13:47 compute-0 silly_darwin[405373]:             ],
Nov 25 17:13:47 compute-0 silly_darwin[405373]:             "lv_name": "ceph_lv1",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:             "lv_size": "21470642176",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:             "name": "ceph_lv1",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:             "tags": {
Nov 25 17:13:47 compute-0 silly_darwin[405373]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:                 "ceph.cluster_name": "ceph",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:                 "ceph.crush_device_class": "",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:                 "ceph.encrypted": "0",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:                 "ceph.osd_id": "1",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:                 "ceph.type": "block",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:                 "ceph.vdo": "0"
Nov 25 17:13:47 compute-0 silly_darwin[405373]:             },
Nov 25 17:13:47 compute-0 silly_darwin[405373]:             "type": "block",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:             "vg_name": "ceph_vg1"
Nov 25 17:13:47 compute-0 silly_darwin[405373]:         }
Nov 25 17:13:47 compute-0 silly_darwin[405373]:     ],
Nov 25 17:13:47 compute-0 silly_darwin[405373]:     "2": [
Nov 25 17:13:47 compute-0 silly_darwin[405373]:         {
Nov 25 17:13:47 compute-0 silly_darwin[405373]:             "devices": [
Nov 25 17:13:47 compute-0 silly_darwin[405373]:                 "/dev/loop5"
Nov 25 17:13:47 compute-0 silly_darwin[405373]:             ],
Nov 25 17:13:47 compute-0 silly_darwin[405373]:             "lv_name": "ceph_lv2",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:             "lv_size": "21470642176",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:             "name": "ceph_lv2",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:             "tags": {
Nov 25 17:13:47 compute-0 silly_darwin[405373]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:                 "ceph.cluster_name": "ceph",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:                 "ceph.crush_device_class": "",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:                 "ceph.encrypted": "0",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:                 "ceph.osd_id": "2",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:                 "ceph.type": "block",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:                 "ceph.vdo": "0"
Nov 25 17:13:47 compute-0 silly_darwin[405373]:             },
Nov 25 17:13:47 compute-0 silly_darwin[405373]:             "type": "block",
Nov 25 17:13:47 compute-0 silly_darwin[405373]:             "vg_name": "ceph_vg2"
Nov 25 17:13:47 compute-0 silly_darwin[405373]:         }
Nov 25 17:13:47 compute-0 silly_darwin[405373]:     ]
Nov 25 17:13:47 compute-0 silly_darwin[405373]: }
Nov 25 17:13:47 compute-0 systemd[1]: libpod-0b715eebf87aa9983201eba7f69b00f2cce03b0b5191ebb139e52c56c5de3347.scope: Deactivated successfully.
Nov 25 17:13:47 compute-0 podman[405357]: 2025-11-25 17:13:47.468853325 +0000 UTC m=+1.015784910 container died 0b715eebf87aa9983201eba7f69b00f2cce03b0b5191ebb139e52c56c5de3347 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_darwin, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 17:13:47 compute-0 nova_compute[254092]: 2025-11-25 17:13:47.475 254096 DEBUG oslo_concurrency.lockutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:13:47 compute-0 nova_compute[254092]: 2025-11-25 17:13:47.476 254096 DEBUG oslo_concurrency.lockutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:13:47 compute-0 nova_compute[254092]: 2025-11-25 17:13:47.486 254096 DEBUG nova.virt.hardware [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 17:13:47 compute-0 nova_compute[254092]: 2025-11-25 17:13:47.486 254096 INFO nova.compute.claims [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Claim successful on node compute-0.ctlplane.example.com
Nov 25 17:13:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-a89e7e47456c0fe4c453e5ec468c70ed8ffe1cd5bf189142e2e58c2c58c37525-merged.mount: Deactivated successfully.
Nov 25 17:13:47 compute-0 podman[405357]: 2025-11-25 17:13:47.532270281 +0000 UTC m=+1.079201866 container remove 0b715eebf87aa9983201eba7f69b00f2cce03b0b5191ebb139e52c56c5de3347 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 17:13:47 compute-0 systemd[1]: libpod-conmon-0b715eebf87aa9983201eba7f69b00f2cce03b0b5191ebb139e52c56c5de3347.scope: Deactivated successfully.
Nov 25 17:13:47 compute-0 sudo[405251]: pam_unix(sudo:session): session closed for user root
Nov 25 17:13:47 compute-0 nova_compute[254092]: 2025-11-25 17:13:47.585 254096 DEBUG oslo_concurrency.processutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:13:47 compute-0 sudo[405394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:13:47 compute-0 sudo[405394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:13:47 compute-0 sudo[405394]: pam_unix(sudo:session): session closed for user root
Nov 25 17:13:47 compute-0 sudo[405420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:13:47 compute-0 sudo[405420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:13:47 compute-0 sudo[405420]: pam_unix(sudo:session): session closed for user root
Nov 25 17:13:47 compute-0 sudo[405464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:13:47 compute-0 sudo[405464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:13:47 compute-0 sudo[405464]: pam_unix(sudo:session): session closed for user root
Nov 25 17:13:47 compute-0 sudo[405489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:13:47 compute-0 sudo[405489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:13:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:13:48 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2578496728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:13:48 compute-0 nova_compute[254092]: 2025-11-25 17:13:48.058 254096 DEBUG oslo_concurrency.processutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:13:48 compute-0 nova_compute[254092]: 2025-11-25 17:13:48.065 254096 DEBUG nova.compute.provider_tree [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:13:48 compute-0 nova_compute[254092]: 2025-11-25 17:13:48.077 254096 DEBUG nova.scheduler.client.report [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:13:48 compute-0 nova_compute[254092]: 2025-11-25 17:13:48.094 254096 DEBUG oslo_concurrency.lockutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:13:48 compute-0 nova_compute[254092]: 2025-11-25 17:13:48.094 254096 DEBUG nova.compute.manager [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 17:13:48 compute-0 nova_compute[254092]: 2025-11-25 17:13:48.140 254096 DEBUG nova.compute.manager [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 17:13:48 compute-0 nova_compute[254092]: 2025-11-25 17:13:48.140 254096 DEBUG nova.network.neutron [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 17:13:48 compute-0 nova_compute[254092]: 2025-11-25 17:13:48.154 254096 INFO nova.virt.libvirt.driver [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 17:13:48 compute-0 nova_compute[254092]: 2025-11-25 17:13:48.172 254096 DEBUG nova.compute.manager [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 17:13:48 compute-0 podman[405555]: 2025-11-25 17:13:48.259960538 +0000 UTC m=+0.044181043 container create ed3884c6be18fcd5fc88547528cbfa2db9df2d54ababc5604598da3c09616325 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:13:48 compute-0 nova_compute[254092]: 2025-11-25 17:13:48.261 254096 DEBUG nova.compute.manager [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 17:13:48 compute-0 nova_compute[254092]: 2025-11-25 17:13:48.262 254096 DEBUG nova.virt.libvirt.driver [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 17:13:48 compute-0 nova_compute[254092]: 2025-11-25 17:13:48.263 254096 INFO nova.virt.libvirt.driver [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Creating image(s)
Nov 25 17:13:48 compute-0 ceph-mon[74985]: pgmap v2707: 321 pgs: 321 active+clean; 121 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:13:48 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2578496728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:13:48 compute-0 nova_compute[254092]: 2025-11-25 17:13:48.292 254096 DEBUG nova.storage.rbd_utils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 6157e53b-5ff5-4b55-b71a-125301c5268a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:13:48 compute-0 systemd[1]: Started libpod-conmon-ed3884c6be18fcd5fc88547528cbfa2db9df2d54ababc5604598da3c09616325.scope.
Nov 25 17:13:48 compute-0 nova_compute[254092]: 2025-11-25 17:13:48.322 254096 DEBUG nova.storage.rbd_utils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 6157e53b-5ff5-4b55-b71a-125301c5268a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:13:48 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:13:48 compute-0 podman[405555]: 2025-11-25 17:13:48.237169468 +0000 UTC m=+0.021389993 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:13:48 compute-0 podman[405555]: 2025-11-25 17:13:48.342394902 +0000 UTC m=+0.126615437 container init ed3884c6be18fcd5fc88547528cbfa2db9df2d54ababc5604598da3c09616325 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:13:48 compute-0 nova_compute[254092]: 2025-11-25 17:13:48.348 254096 DEBUG nova.storage.rbd_utils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 6157e53b-5ff5-4b55-b71a-125301c5268a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:13:48 compute-0 podman[405555]: 2025-11-25 17:13:48.35185826 +0000 UTC m=+0.136078765 container start ed3884c6be18fcd5fc88547528cbfa2db9df2d54ababc5604598da3c09616325 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:13:48 compute-0 nova_compute[254092]: 2025-11-25 17:13:48.354 254096 DEBUG oslo_concurrency.processutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:13:48 compute-0 podman[405555]: 2025-11-25 17:13:48.356137106 +0000 UTC m=+0.140357611 container attach ed3884c6be18fcd5fc88547528cbfa2db9df2d54ababc5604598da3c09616325 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:13:48 compute-0 fervent_chaplygin[405590]: 167 167
Nov 25 17:13:48 compute-0 systemd[1]: libpod-ed3884c6be18fcd5fc88547528cbfa2db9df2d54ababc5604598da3c09616325.scope: Deactivated successfully.
Nov 25 17:13:48 compute-0 podman[405555]: 2025-11-25 17:13:48.359400155 +0000 UTC m=+0.143620660 container died ed3884c6be18fcd5fc88547528cbfa2db9df2d54ababc5604598da3c09616325 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_chaplygin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:13:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-bae62e2d7d09056285387f3767efa94af798482c60c58d91549d7dfdf1b23562-merged.mount: Deactivated successfully.
Nov 25 17:13:48 compute-0 podman[405555]: 2025-11-25 17:13:48.394968263 +0000 UTC m=+0.179188768 container remove ed3884c6be18fcd5fc88547528cbfa2db9df2d54ababc5604598da3c09616325 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_chaplygin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Nov 25 17:13:48 compute-0 nova_compute[254092]: 2025-11-25 17:13:48.402 254096 DEBUG nova.policy [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a647bb22c9a54e5fa9ddfff588126693', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '67b7f8e8aac6441f992a182f8d893100', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 17:13:48 compute-0 systemd[1]: libpod-conmon-ed3884c6be18fcd5fc88547528cbfa2db9df2d54ababc5604598da3c09616325.scope: Deactivated successfully.
Nov 25 17:13:48 compute-0 nova_compute[254092]: 2025-11-25 17:13:48.438 254096 DEBUG oslo_concurrency.processutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:13:48 compute-0 nova_compute[254092]: 2025-11-25 17:13:48.439 254096 DEBUG oslo_concurrency.lockutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:13:48 compute-0 nova_compute[254092]: 2025-11-25 17:13:48.440 254096 DEBUG oslo_concurrency.lockutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:13:48 compute-0 nova_compute[254092]: 2025-11-25 17:13:48.440 254096 DEBUG oslo_concurrency.lockutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:13:48 compute-0 nova_compute[254092]: 2025-11-25 17:13:48.462 254096 DEBUG nova.storage.rbd_utils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 6157e53b-5ff5-4b55-b71a-125301c5268a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:13:48 compute-0 nova_compute[254092]: 2025-11-25 17:13:48.467 254096 DEBUG oslo_concurrency.processutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 6157e53b-5ff5-4b55-b71a-125301c5268a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:13:48 compute-0 podman[405672]: 2025-11-25 17:13:48.579527787 +0000 UTC m=+0.043807784 container create 452b4c309cd20be87d9d4e87941e4537d2daf2d5fbaf29dca4cc0b14ec04b5f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hypatia, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 17:13:48 compute-0 systemd[1]: Started libpod-conmon-452b4c309cd20be87d9d4e87941e4537d2daf2d5fbaf29dca4cc0b14ec04b5f1.scope.
Nov 25 17:13:48 compute-0 podman[405672]: 2025-11-25 17:13:48.561820015 +0000 UTC m=+0.026100032 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:13:48 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:13:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22d59edfbd07feb9ce733d11857f106c9ce61fe50f6eaa200b9cba85a0c7958b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:13:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22d59edfbd07feb9ce733d11857f106c9ce61fe50f6eaa200b9cba85a0c7958b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:13:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22d59edfbd07feb9ce733d11857f106c9ce61fe50f6eaa200b9cba85a0c7958b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:13:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22d59edfbd07feb9ce733d11857f106c9ce61fe50f6eaa200b9cba85a0c7958b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:13:48 compute-0 podman[405672]: 2025-11-25 17:13:48.677456642 +0000 UTC m=+0.141736659 container init 452b4c309cd20be87d9d4e87941e4537d2daf2d5fbaf29dca4cc0b14ec04b5f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 17:13:48 compute-0 podman[405672]: 2025-11-25 17:13:48.690413745 +0000 UTC m=+0.154693732 container start 452b4c309cd20be87d9d4e87941e4537d2daf2d5fbaf29dca4cc0b14ec04b5f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hypatia, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 17:13:48 compute-0 podman[405672]: 2025-11-25 17:13:48.700011956 +0000 UTC m=+0.164291953 container attach 452b4c309cd20be87d9d4e87941e4537d2daf2d5fbaf29dca4cc0b14ec04b5f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 17:13:48 compute-0 nova_compute[254092]: 2025-11-25 17:13:48.740 254096 DEBUG oslo_concurrency.processutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 6157e53b-5ff5-4b55-b71a-125301c5268a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.274s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:13:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2708: 321 pgs: 321 active+clean; 121 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 92 KiB/s wr, 10 op/s
Nov 25 17:13:48 compute-0 nova_compute[254092]: 2025-11-25 17:13:48.811 254096 DEBUG nova.storage.rbd_utils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] resizing rbd image 6157e53b-5ff5-4b55-b71a-125301c5268a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 17:13:48 compute-0 nova_compute[254092]: 2025-11-25 17:13:48.918 254096 DEBUG nova.objects.instance [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'migration_context' on Instance uuid 6157e53b-5ff5-4b55-b71a-125301c5268a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:13:48 compute-0 nova_compute[254092]: 2025-11-25 17:13:48.929 254096 DEBUG nova.virt.libvirt.driver [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 17:13:48 compute-0 nova_compute[254092]: 2025-11-25 17:13:48.929 254096 DEBUG nova.virt.libvirt.driver [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Ensure instance console log exists: /var/lib/nova/instances/6157e53b-5ff5-4b55-b71a-125301c5268a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 17:13:48 compute-0 nova_compute[254092]: 2025-11-25 17:13:48.929 254096 DEBUG oslo_concurrency.lockutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:13:48 compute-0 nova_compute[254092]: 2025-11-25 17:13:48.930 254096 DEBUG oslo_concurrency.lockutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:13:48 compute-0 nova_compute[254092]: 2025-11-25 17:13:48.930 254096 DEBUG oslo_concurrency.lockutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:13:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:13:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:49.232 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:13:49 compute-0 nova_compute[254092]: 2025-11-25 17:13:49.233 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:49.235 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 17:13:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:49.237 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:13:49 compute-0 nova_compute[254092]: 2025-11-25 17:13:49.294 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Updating instance_info_cache with network_info: [{"id": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "address": "fa:16:3e:3b:43:89", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0e0412-08", "ovs_interfaceid": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "831eaa83-55bd-4098-9037-4b628eb8d994", "address": "fa:16:3e:b0:cd:a4", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb0:cda4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap831eaa83-55", "ovs_interfaceid": "831eaa83-55bd-4098-9037-4b628eb8d994", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:13:49 compute-0 nova_compute[254092]: 2025-11-25 17:13:49.317 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-3b797ee6-c82f-4c01-bb54-31de659fcad8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:13:49 compute-0 nova_compute[254092]: 2025-11-25 17:13:49.318 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 17:13:49 compute-0 nova_compute[254092]: 2025-11-25 17:13:49.318 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:13:49 compute-0 nova_compute[254092]: 2025-11-25 17:13:49.375 254096 DEBUG nova.network.neutron [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Successfully created port: c7c65059-3fc6-4a84-b8ce-de1306c01e13 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 17:13:49 compute-0 nova_compute[254092]: 2025-11-25 17:13:49.383 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:49 compute-0 festive_hypatia[405707]: {
Nov 25 17:13:49 compute-0 festive_hypatia[405707]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:13:49 compute-0 festive_hypatia[405707]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:13:49 compute-0 festive_hypatia[405707]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:13:49 compute-0 festive_hypatia[405707]:         "osd_id": 1,
Nov 25 17:13:49 compute-0 festive_hypatia[405707]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:13:49 compute-0 festive_hypatia[405707]:         "type": "bluestore"
Nov 25 17:13:49 compute-0 festive_hypatia[405707]:     },
Nov 25 17:13:49 compute-0 festive_hypatia[405707]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:13:49 compute-0 festive_hypatia[405707]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:13:49 compute-0 festive_hypatia[405707]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:13:49 compute-0 festive_hypatia[405707]:         "osd_id": 2,
Nov 25 17:13:49 compute-0 festive_hypatia[405707]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:13:49 compute-0 festive_hypatia[405707]:         "type": "bluestore"
Nov 25 17:13:49 compute-0 festive_hypatia[405707]:     },
Nov 25 17:13:49 compute-0 festive_hypatia[405707]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:13:49 compute-0 festive_hypatia[405707]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:13:49 compute-0 festive_hypatia[405707]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:13:49 compute-0 festive_hypatia[405707]:         "osd_id": 0,
Nov 25 17:13:49 compute-0 festive_hypatia[405707]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:13:49 compute-0 festive_hypatia[405707]:         "type": "bluestore"
Nov 25 17:13:49 compute-0 festive_hypatia[405707]:     }
Nov 25 17:13:49 compute-0 festive_hypatia[405707]: }
Nov 25 17:13:49 compute-0 systemd[1]: libpod-452b4c309cd20be87d9d4e87941e4537d2daf2d5fbaf29dca4cc0b14ec04b5f1.scope: Deactivated successfully.
Nov 25 17:13:49 compute-0 podman[405812]: 2025-11-25 17:13:49.709038701 +0000 UTC m=+0.025668880 container died 452b4c309cd20be87d9d4e87941e4537d2daf2d5fbaf29dca4cc0b14ec04b5f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hypatia, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:13:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-22d59edfbd07feb9ce733d11857f106c9ce61fe50f6eaa200b9cba85a0c7958b-merged.mount: Deactivated successfully.
Nov 25 17:13:49 compute-0 podman[405812]: 2025-11-25 17:13:49.767112812 +0000 UTC m=+0.083742971 container remove 452b4c309cd20be87d9d4e87941e4537d2daf2d5fbaf29dca4cc0b14ec04b5f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hypatia, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:13:49 compute-0 systemd[1]: libpod-conmon-452b4c309cd20be87d9d4e87941e4537d2daf2d5fbaf29dca4cc0b14ec04b5f1.scope: Deactivated successfully.
Nov 25 17:13:49 compute-0 sudo[405489]: pam_unix(sudo:session): session closed for user root
Nov 25 17:13:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:13:49 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:13:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:13:49 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:13:49 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 60bc785f-4397-465b-ace3-cea5fb1ac089 does not exist
Nov 25 17:13:49 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev f8b0ff6c-ab0a-4ddd-a0ba-a9a1998a64de does not exist
Nov 25 17:13:49 compute-0 nova_compute[254092]: 2025-11-25 17:13:49.910 254096 DEBUG nova.network.neutron [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Successfully created port: 05bba2c3-2422-401c-831b-f9af92f47719 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 17:13:49 compute-0 sudo[405827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:13:49 compute-0 sudo[405827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:13:49 compute-0 sudo[405827]: pam_unix(sudo:session): session closed for user root
Nov 25 17:13:49 compute-0 sudo[405852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:13:49 compute-0 sudo[405852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:13:49 compute-0 sudo[405852]: pam_unix(sudo:session): session closed for user root
Nov 25 17:13:50 compute-0 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 17:13:50 compute-0 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4801.2 total, 600.0 interval
                                           Cumulative writes: 42K writes, 165K keys, 42K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.03 MB/s
                                           Cumulative WAL: 42K writes, 14K syncs, 2.83 writes per sync, written: 0.16 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4620 writes, 20K keys, 4620 commit groups, 1.0 writes per commit group, ingest: 23.67 MB, 0.04 MB/s
                                           Interval WAL: 4620 writes, 1779 syncs, 2.60 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 17:13:50 compute-0 ceph-mon[74985]: pgmap v2708: 321 pgs: 321 active+clean; 121 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 92 KiB/s wr, 10 op/s
Nov 25 17:13:50 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:13:50 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:13:50 compute-0 nova_compute[254092]: 2025-11-25 17:13:50.691 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2709: 321 pgs: 321 active+clean; 148 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 982 KiB/s wr, 25 op/s
Nov 25 17:13:50 compute-0 nova_compute[254092]: 2025-11-25 17:13:50.831 254096 DEBUG nova.network.neutron [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Successfully updated port: c7c65059-3fc6-4a84-b8ce-de1306c01e13 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:13:50 compute-0 nova_compute[254092]: 2025-11-25 17:13:50.963 254096 DEBUG nova.compute.manager [req-6bd37856-a31d-463f-8a3b-15d8a871fbff req-de903cf2-5ee2-4ce3-8981-07b47b36dcd7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Received event network-changed-c7c65059-3fc6-4a84-b8ce-de1306c01e13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:13:50 compute-0 nova_compute[254092]: 2025-11-25 17:13:50.963 254096 DEBUG nova.compute.manager [req-6bd37856-a31d-463f-8a3b-15d8a871fbff req-de903cf2-5ee2-4ce3-8981-07b47b36dcd7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Refreshing instance network info cache due to event network-changed-c7c65059-3fc6-4a84-b8ce-de1306c01e13. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:13:50 compute-0 nova_compute[254092]: 2025-11-25 17:13:50.964 254096 DEBUG oslo_concurrency.lockutils [req-6bd37856-a31d-463f-8a3b-15d8a871fbff req-de903cf2-5ee2-4ce3-8981-07b47b36dcd7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-6157e53b-5ff5-4b55-b71a-125301c5268a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:13:50 compute-0 nova_compute[254092]: 2025-11-25 17:13:50.964 254096 DEBUG oslo_concurrency.lockutils [req-6bd37856-a31d-463f-8a3b-15d8a871fbff req-de903cf2-5ee2-4ce3-8981-07b47b36dcd7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-6157e53b-5ff5-4b55-b71a-125301c5268a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:13:50 compute-0 nova_compute[254092]: 2025-11-25 17:13:50.965 254096 DEBUG nova.network.neutron [req-6bd37856-a31d-463f-8a3b-15d8a871fbff req-de903cf2-5ee2-4ce3-8981-07b47b36dcd7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Refreshing network info cache for port c7c65059-3fc6-4a84-b8ce-de1306c01e13 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:13:51 compute-0 nova_compute[254092]: 2025-11-25 17:13:51.196 254096 DEBUG nova.network.neutron [req-6bd37856-a31d-463f-8a3b-15d8a871fbff req-de903cf2-5ee2-4ce3-8981-07b47b36dcd7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:13:51 compute-0 nova_compute[254092]: 2025-11-25 17:13:51.650 254096 DEBUG nova.network.neutron [req-6bd37856-a31d-463f-8a3b-15d8a871fbff req-de903cf2-5ee2-4ce3-8981-07b47b36dcd7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:13:51 compute-0 nova_compute[254092]: 2025-11-25 17:13:51.662 254096 DEBUG oslo_concurrency.lockutils [req-6bd37856-a31d-463f-8a3b-15d8a871fbff req-de903cf2-5ee2-4ce3-8981-07b47b36dcd7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-6157e53b-5ff5-4b55-b71a-125301c5268a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:13:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:13:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:13:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:13:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:13:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000928678697011135 of space, bias 1.0, pg target 0.2786036091033405 quantized to 32 (current 32)
Nov 25 17:13:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:13:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:13:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:13:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:13:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:13:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 17:13:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:13:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:13:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:13:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:13:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:13:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:13:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:13:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:13:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:13:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:13:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:13:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:13:51 compute-0 nova_compute[254092]: 2025-11-25 17:13:51.692 254096 DEBUG nova.network.neutron [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Successfully updated port: 05bba2c3-2422-401c-831b-f9af92f47719 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:13:51 compute-0 nova_compute[254092]: 2025-11-25 17:13:51.708 254096 DEBUG oslo_concurrency.lockutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "refresh_cache-6157e53b-5ff5-4b55-b71a-125301c5268a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:13:51 compute-0 nova_compute[254092]: 2025-11-25 17:13:51.708 254096 DEBUG oslo_concurrency.lockutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquired lock "refresh_cache-6157e53b-5ff5-4b55-b71a-125301c5268a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:13:51 compute-0 nova_compute[254092]: 2025-11-25 17:13:51.709 254096 DEBUG nova.network.neutron [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 17:13:52 compute-0 ceph-mon[74985]: pgmap v2709: 321 pgs: 321 active+clean; 148 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 982 KiB/s wr, 25 op/s
Nov 25 17:13:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2710: 321 pgs: 321 active+clean; 167 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:13:52 compute-0 nova_compute[254092]: 2025-11-25 17:13:52.870 254096 DEBUG nova.network.neutron [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:13:53 compute-0 nova_compute[254092]: 2025-11-25 17:13:53.039 254096 DEBUG nova.compute.manager [req-01864d18-36ec-444e-b0c5-17b078a825df req-87dbc869-466c-4409-b85c-ab97a71f4cef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Received event network-changed-05bba2c3-2422-401c-831b-f9af92f47719 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:13:53 compute-0 nova_compute[254092]: 2025-11-25 17:13:53.039 254096 DEBUG nova.compute.manager [req-01864d18-36ec-444e-b0c5-17b078a825df req-87dbc869-466c-4409-b85c-ab97a71f4cef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Refreshing instance network info cache due to event network-changed-05bba2c3-2422-401c-831b-f9af92f47719. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:13:53 compute-0 nova_compute[254092]: 2025-11-25 17:13:53.040 254096 DEBUG oslo_concurrency.lockutils [req-01864d18-36ec-444e-b0c5-17b078a825df req-87dbc869-466c-4409-b85c-ab97a71f4cef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-6157e53b-5ff5-4b55-b71a-125301c5268a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:13:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:13:54 compute-0 ceph-mon[74985]: pgmap v2710: 321 pgs: 321 active+clean; 167 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:13:54 compute-0 nova_compute[254092]: 2025-11-25 17:13:54.384 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2711: 321 pgs: 321 active+clean; 167 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:13:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:13:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1647477624' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:13:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:13:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1647477624' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:13:55 compute-0 nova_compute[254092]: 2025-11-25 17:13:55.693 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:55 compute-0 nova_compute[254092]: 2025-11-25 17:13:55.882 254096 DEBUG nova.network.neutron [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Updating instance_info_cache with network_info: [{"id": "c7c65059-3fc6-4a84-b8ce-de1306c01e13", "address": "fa:16:3e:16:7a:83", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7c65059-3f", "ovs_interfaceid": "c7c65059-3fc6-4a84-b8ce-de1306c01e13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "05bba2c3-2422-401c-831b-f9af92f47719", "address": "fa:16:3e:47:6a:96", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe47:6a96", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05bba2c3-24", "ovs_interfaceid": "05bba2c3-2422-401c-831b-f9af92f47719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:13:55 compute-0 nova_compute[254092]: 2025-11-25 17:13:55.902 254096 DEBUG oslo_concurrency.lockutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Releasing lock "refresh_cache-6157e53b-5ff5-4b55-b71a-125301c5268a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:13:55 compute-0 nova_compute[254092]: 2025-11-25 17:13:55.903 254096 DEBUG nova.compute.manager [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Instance network_info: |[{"id": "c7c65059-3fc6-4a84-b8ce-de1306c01e13", "address": "fa:16:3e:16:7a:83", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7c65059-3f", "ovs_interfaceid": "c7c65059-3fc6-4a84-b8ce-de1306c01e13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "05bba2c3-2422-401c-831b-f9af92f47719", "address": "fa:16:3e:47:6a:96", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe47:6a96", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05bba2c3-24", "ovs_interfaceid": "05bba2c3-2422-401c-831b-f9af92f47719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 17:13:55 compute-0 nova_compute[254092]: 2025-11-25 17:13:55.904 254096 DEBUG oslo_concurrency.lockutils [req-01864d18-36ec-444e-b0c5-17b078a825df req-87dbc869-466c-4409-b85c-ab97a71f4cef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-6157e53b-5ff5-4b55-b71a-125301c5268a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:13:55 compute-0 nova_compute[254092]: 2025-11-25 17:13:55.904 254096 DEBUG nova.network.neutron [req-01864d18-36ec-444e-b0c5-17b078a825df req-87dbc869-466c-4409-b85c-ab97a71f4cef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Refreshing network info cache for port 05bba2c3-2422-401c-831b-f9af92f47719 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:13:55 compute-0 nova_compute[254092]: 2025-11-25 17:13:55.912 254096 DEBUG nova.virt.libvirt.driver [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Start _get_guest_xml network_info=[{"id": "c7c65059-3fc6-4a84-b8ce-de1306c01e13", "address": "fa:16:3e:16:7a:83", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7c65059-3f", "ovs_interfaceid": "c7c65059-3fc6-4a84-b8ce-de1306c01e13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "05bba2c3-2422-401c-831b-f9af92f47719", "address": "fa:16:3e:47:6a:96", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe47:6a96", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05bba2c3-24", "ovs_interfaceid": "05bba2c3-2422-401c-831b-f9af92f47719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 17:13:55 compute-0 nova_compute[254092]: 2025-11-25 17:13:55.920 254096 WARNING nova.virt.libvirt.driver [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:13:55 compute-0 nova_compute[254092]: 2025-11-25 17:13:55.934 254096 DEBUG nova.virt.libvirt.host [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 17:13:55 compute-0 nova_compute[254092]: 2025-11-25 17:13:55.935 254096 DEBUG nova.virt.libvirt.host [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 17:13:55 compute-0 nova_compute[254092]: 2025-11-25 17:13:55.939 254096 DEBUG nova.virt.libvirt.host [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 17:13:55 compute-0 nova_compute[254092]: 2025-11-25 17:13:55.940 254096 DEBUG nova.virt.libvirt.host [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 17:13:55 compute-0 nova_compute[254092]: 2025-11-25 17:13:55.940 254096 DEBUG nova.virt.libvirt.driver [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 17:13:55 compute-0 nova_compute[254092]: 2025-11-25 17:13:55.941 254096 DEBUG nova.virt.hardware [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 17:13:55 compute-0 nova_compute[254092]: 2025-11-25 17:13:55.942 254096 DEBUG nova.virt.hardware [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 17:13:55 compute-0 nova_compute[254092]: 2025-11-25 17:13:55.942 254096 DEBUG nova.virt.hardware [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 17:13:55 compute-0 nova_compute[254092]: 2025-11-25 17:13:55.943 254096 DEBUG nova.virt.hardware [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 17:13:55 compute-0 nova_compute[254092]: 2025-11-25 17:13:55.943 254096 DEBUG nova.virt.hardware [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 17:13:55 compute-0 nova_compute[254092]: 2025-11-25 17:13:55.944 254096 DEBUG nova.virt.hardware [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 17:13:55 compute-0 nova_compute[254092]: 2025-11-25 17:13:55.944 254096 DEBUG nova.virt.hardware [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 17:13:55 compute-0 nova_compute[254092]: 2025-11-25 17:13:55.945 254096 DEBUG nova.virt.hardware [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 17:13:55 compute-0 nova_compute[254092]: 2025-11-25 17:13:55.946 254096 DEBUG nova.virt.hardware [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 17:13:55 compute-0 nova_compute[254092]: 2025-11-25 17:13:55.947 254096 DEBUG nova.virt.hardware [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 17:13:55 compute-0 nova_compute[254092]: 2025-11-25 17:13:55.947 254096 DEBUG nova.virt.hardware [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 17:13:55 compute-0 nova_compute[254092]: 2025-11-25 17:13:55.952 254096 DEBUG oslo_concurrency.processutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:13:56 compute-0 nova_compute[254092]: 2025-11-25 17:13:56.300 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:13:56 compute-0 ceph-mon[74985]: pgmap v2711: 321 pgs: 321 active+clean; 167 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:13:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/1647477624' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:13:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/1647477624' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:13:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:13:56 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2865905048' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:13:56 compute-0 nova_compute[254092]: 2025-11-25 17:13:56.412 254096 DEBUG oslo_concurrency.processutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:13:56 compute-0 nova_compute[254092]: 2025-11-25 17:13:56.438 254096 DEBUG nova.storage.rbd_utils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 6157e53b-5ff5-4b55-b71a-125301c5268a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:13:56 compute-0 nova_compute[254092]: 2025-11-25 17:13:56.443 254096 DEBUG oslo_concurrency.processutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:13:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2712: 321 pgs: 321 active+clean; 167 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:13:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:13:56 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2506127068' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:13:56 compute-0 nova_compute[254092]: 2025-11-25 17:13:56.908 254096 DEBUG oslo_concurrency.processutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:13:56 compute-0 nova_compute[254092]: 2025-11-25 17:13:56.910 254096 DEBUG nova.virt.libvirt.vif [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:13:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1481975263',display_name='tempest-TestGettingAddress-server-1481975263',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1481975263',id=138,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF9z68jccGfyAVxEITitYiuN9vFSdA5Xa/Y4I6Func8CZqbMVn2UqMvh2Pq/mSGYMJkN1SoVPK2VCVvj1pyhy91CkHawiwsrdFCgvBZGIbNZI9aqQaje8pTP0iHBJQrkZQ==',key_name='tempest-TestGettingAddress-1190179439',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-mml9qjj9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:13:48Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=6157e53b-5ff5-4b55-b71a-125301c5268a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c7c65059-3fc6-4a84-b8ce-de1306c01e13", "address": "fa:16:3e:16:7a:83", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7c65059-3f", "ovs_interfaceid": "c7c65059-3fc6-4a84-b8ce-de1306c01e13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:13:56 compute-0 nova_compute[254092]: 2025-11-25 17:13:56.910 254096 DEBUG nova.network.os_vif_util [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "c7c65059-3fc6-4a84-b8ce-de1306c01e13", "address": "fa:16:3e:16:7a:83", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7c65059-3f", "ovs_interfaceid": "c7c65059-3fc6-4a84-b8ce-de1306c01e13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:13:56 compute-0 nova_compute[254092]: 2025-11-25 17:13:56.911 254096 DEBUG nova.network.os_vif_util [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:7a:83,bridge_name='br-int',has_traffic_filtering=True,id=c7c65059-3fc6-4a84-b8ce-de1306c01e13,network=Network(ddff42cd-c011-4371-96b1-f2bb5093a16d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7c65059-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:13:56 compute-0 nova_compute[254092]: 2025-11-25 17:13:56.912 254096 DEBUG nova.virt.libvirt.vif [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:13:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1481975263',display_name='tempest-TestGettingAddress-server-1481975263',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1481975263',id=138,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF9z68jccGfyAVxEITitYiuN9vFSdA5Xa/Y4I6Func8CZqbMVn2UqMvh2Pq/mSGYMJkN1SoVPK2VCVvj1pyhy91CkHawiwsrdFCgvBZGIbNZI9aqQaje8pTP0iHBJQrkZQ==',key_name='tempest-TestGettingAddress-1190179439',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-mml9qjj9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:13:48Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=6157e53b-5ff5-4b55-b71a-125301c5268a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05bba2c3-2422-401c-831b-f9af92f47719", "address": "fa:16:3e:47:6a:96", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe47:6a96", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05bba2c3-24", "ovs_interfaceid": "05bba2c3-2422-401c-831b-f9af92f47719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:13:56 compute-0 nova_compute[254092]: 2025-11-25 17:13:56.913 254096 DEBUG nova.network.os_vif_util [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "05bba2c3-2422-401c-831b-f9af92f47719", "address": "fa:16:3e:47:6a:96", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe47:6a96", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05bba2c3-24", "ovs_interfaceid": "05bba2c3-2422-401c-831b-f9af92f47719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:13:56 compute-0 nova_compute[254092]: 2025-11-25 17:13:56.913 254096 DEBUG nova.network.os_vif_util [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:6a:96,bridge_name='br-int',has_traffic_filtering=True,id=05bba2c3-2422-401c-831b-f9af92f47719,network=Network(0063509c-db60-47db-9c49-72faf9b698d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05bba2c3-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:13:56 compute-0 nova_compute[254092]: 2025-11-25 17:13:56.914 254096 DEBUG nova.objects.instance [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6157e53b-5ff5-4b55-b71a-125301c5268a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:13:56 compute-0 nova_compute[254092]: 2025-11-25 17:13:56.928 254096 DEBUG nova.virt.libvirt.driver [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] End _get_guest_xml xml=<domain type="kvm">
Nov 25 17:13:56 compute-0 nova_compute[254092]:   <uuid>6157e53b-5ff5-4b55-b71a-125301c5268a</uuid>
Nov 25 17:13:56 compute-0 nova_compute[254092]:   <name>instance-0000008a</name>
Nov 25 17:13:56 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 17:13:56 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 17:13:56 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:13:56 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:       <nova:name>tempest-TestGettingAddress-server-1481975263</nova:name>
Nov 25 17:13:56 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 17:13:55</nova:creationTime>
Nov 25 17:13:56 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 17:13:56 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 17:13:56 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 17:13:56 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 17:13:56 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:13:56 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 17:13:56 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 17:13:56 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 17:13:56 compute-0 nova_compute[254092]:         <nova:user uuid="a647bb22c9a54e5fa9ddfff588126693">tempest-TestGettingAddress-207202764-project-member</nova:user>
Nov 25 17:13:56 compute-0 nova_compute[254092]:         <nova:project uuid="67b7f8e8aac6441f992a182f8d893100">tempest-TestGettingAddress-207202764</nova:project>
Nov 25 17:13:56 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 17:13:56 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 17:13:56 compute-0 nova_compute[254092]:         <nova:port uuid="c7c65059-3fc6-4a84-b8ce-de1306c01e13">
Nov 25 17:13:56 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 17:13:56 compute-0 nova_compute[254092]:         <nova:port uuid="05bba2c3-2422-401c-831b-f9af92f47719">
Nov 25 17:13:56 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe47:6a96" ipVersion="6"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 17:13:56 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 17:13:56 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:13:56 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 17:13:56 compute-0 nova_compute[254092]:     <system>
Nov 25 17:13:56 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 17:13:56 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 17:13:56 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:13:56 compute-0 nova_compute[254092]:       <entry name="serial">6157e53b-5ff5-4b55-b71a-125301c5268a</entry>
Nov 25 17:13:56 compute-0 nova_compute[254092]:       <entry name="uuid">6157e53b-5ff5-4b55-b71a-125301c5268a</entry>
Nov 25 17:13:56 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     </system>
Nov 25 17:13:56 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:13:56 compute-0 nova_compute[254092]:   <os>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:   </os>
Nov 25 17:13:56 compute-0 nova_compute[254092]:   <features>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:   </features>
Nov 25 17:13:56 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 17:13:56 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:13:56 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 17:13:56 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:13:56 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 17:13:56 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/6157e53b-5ff5-4b55-b71a-125301c5268a_disk">
Nov 25 17:13:56 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:       </source>
Nov 25 17:13:56 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:13:56 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:13:56 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 17:13:56 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/6157e53b-5ff5-4b55-b71a-125301c5268a_disk.config">
Nov 25 17:13:56 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:       </source>
Nov 25 17:13:56 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:13:56 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:13:56 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 17:13:56 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:16:7a:83"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:       <target dev="tapc7c65059-3f"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 17:13:56 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:47:6a:96"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:       <target dev="tap05bba2c3-24"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 17:13:56 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/6157e53b-5ff5-4b55-b71a-125301c5268a/console.log" append="off"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     <video>
Nov 25 17:13:56 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     </video>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 17:13:56 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 17:13:56 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 17:13:56 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:13:56 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:13:56 compute-0 nova_compute[254092]: </domain>
Nov 25 17:13:56 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 17:13:56 compute-0 nova_compute[254092]: 2025-11-25 17:13:56.929 254096 DEBUG nova.compute.manager [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Preparing to wait for external event network-vif-plugged-c7c65059-3fc6-4a84-b8ce-de1306c01e13 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 17:13:56 compute-0 nova_compute[254092]: 2025-11-25 17:13:56.930 254096 DEBUG oslo_concurrency.lockutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:13:56 compute-0 nova_compute[254092]: 2025-11-25 17:13:56.930 254096 DEBUG oslo_concurrency.lockutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:13:56 compute-0 nova_compute[254092]: 2025-11-25 17:13:56.930 254096 DEBUG oslo_concurrency.lockutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:13:56 compute-0 nova_compute[254092]: 2025-11-25 17:13:56.930 254096 DEBUG nova.compute.manager [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Preparing to wait for external event network-vif-plugged-05bba2c3-2422-401c-831b-f9af92f47719 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 17:13:56 compute-0 nova_compute[254092]: 2025-11-25 17:13:56.931 254096 DEBUG oslo_concurrency.lockutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:13:56 compute-0 nova_compute[254092]: 2025-11-25 17:13:56.931 254096 DEBUG oslo_concurrency.lockutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:13:56 compute-0 nova_compute[254092]: 2025-11-25 17:13:56.931 254096 DEBUG oslo_concurrency.lockutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:13:56 compute-0 nova_compute[254092]: 2025-11-25 17:13:56.932 254096 DEBUG nova.virt.libvirt.vif [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:13:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1481975263',display_name='tempest-TestGettingAddress-server-1481975263',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1481975263',id=138,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF9z68jccGfyAVxEITitYiuN9vFSdA5Xa/Y4I6Func8CZqbMVn2UqMvh2Pq/mSGYMJkN1SoVPK2VCVvj1pyhy91CkHawiwsrdFCgvBZGIbNZI9aqQaje8pTP0iHBJQrkZQ==',key_name='tempest-TestGettingAddress-1190179439',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-mml9qjj9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:13:48Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=6157e53b-5ff5-4b55-b71a-125301c5268a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c7c65059-3fc6-4a84-b8ce-de1306c01e13", "address": "fa:16:3e:16:7a:83", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7c65059-3f", "ovs_interfaceid": "c7c65059-3fc6-4a84-b8ce-de1306c01e13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:13:56 compute-0 nova_compute[254092]: 2025-11-25 17:13:56.932 254096 DEBUG nova.network.os_vif_util [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "c7c65059-3fc6-4a84-b8ce-de1306c01e13", "address": "fa:16:3e:16:7a:83", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7c65059-3f", "ovs_interfaceid": "c7c65059-3fc6-4a84-b8ce-de1306c01e13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:13:56 compute-0 nova_compute[254092]: 2025-11-25 17:13:56.932 254096 DEBUG nova.network.os_vif_util [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:7a:83,bridge_name='br-int',has_traffic_filtering=True,id=c7c65059-3fc6-4a84-b8ce-de1306c01e13,network=Network(ddff42cd-c011-4371-96b1-f2bb5093a16d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7c65059-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:13:56 compute-0 nova_compute[254092]: 2025-11-25 17:13:56.933 254096 DEBUG os_vif [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:7a:83,bridge_name='br-int',has_traffic_filtering=True,id=c7c65059-3fc6-4a84-b8ce-de1306c01e13,network=Network(ddff42cd-c011-4371-96b1-f2bb5093a16d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7c65059-3f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:13:56 compute-0 nova_compute[254092]: 2025-11-25 17:13:56.933 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:56 compute-0 nova_compute[254092]: 2025-11-25 17:13:56.934 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:13:56 compute-0 nova_compute[254092]: 2025-11-25 17:13:56.934 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:13:56 compute-0 nova_compute[254092]: 2025-11-25 17:13:56.937 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:56 compute-0 nova_compute[254092]: 2025-11-25 17:13:56.937 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc7c65059-3f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:13:56 compute-0 nova_compute[254092]: 2025-11-25 17:13:56.938 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc7c65059-3f, col_values=(('external_ids', {'iface-id': 'c7c65059-3fc6-4a84-b8ce-de1306c01e13', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:16:7a:83', 'vm-uuid': '6157e53b-5ff5-4b55-b71a-125301c5268a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:13:56 compute-0 nova_compute[254092]: 2025-11-25 17:13:56.939 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:56 compute-0 NetworkManager[48891]: <info>  [1764090836.9403] manager: (tapc7c65059-3f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/593)
Nov 25 17:13:56 compute-0 nova_compute[254092]: 2025-11-25 17:13:56.942 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:13:56 compute-0 nova_compute[254092]: 2025-11-25 17:13:56.946 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:56 compute-0 nova_compute[254092]: 2025-11-25 17:13:56.946 254096 INFO os_vif [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:7a:83,bridge_name='br-int',has_traffic_filtering=True,id=c7c65059-3fc6-4a84-b8ce-de1306c01e13,network=Network(ddff42cd-c011-4371-96b1-f2bb5093a16d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7c65059-3f')
Nov 25 17:13:56 compute-0 nova_compute[254092]: 2025-11-25 17:13:56.947 254096 DEBUG nova.virt.libvirt.vif [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:13:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1481975263',display_name='tempest-TestGettingAddress-server-1481975263',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1481975263',id=138,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF9z68jccGfyAVxEITitYiuN9vFSdA5Xa/Y4I6Func8CZqbMVn2UqMvh2Pq/mSGYMJkN1SoVPK2VCVvj1pyhy91CkHawiwsrdFCgvBZGIbNZI9aqQaje8pTP0iHBJQrkZQ==',key_name='tempest-TestGettingAddress-1190179439',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-mml9qjj9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:13:48Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=6157e53b-5ff5-4b55-b71a-125301c5268a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05bba2c3-2422-401c-831b-f9af92f47719", "address": "fa:16:3e:47:6a:96", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe47:6a96", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05bba2c3-24", "ovs_interfaceid": "05bba2c3-2422-401c-831b-f9af92f47719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:13:56 compute-0 nova_compute[254092]: 2025-11-25 17:13:56.947 254096 DEBUG nova.network.os_vif_util [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "05bba2c3-2422-401c-831b-f9af92f47719", "address": "fa:16:3e:47:6a:96", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe47:6a96", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05bba2c3-24", "ovs_interfaceid": "05bba2c3-2422-401c-831b-f9af92f47719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:13:56 compute-0 nova_compute[254092]: 2025-11-25 17:13:56.948 254096 DEBUG nova.network.os_vif_util [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:6a:96,bridge_name='br-int',has_traffic_filtering=True,id=05bba2c3-2422-401c-831b-f9af92f47719,network=Network(0063509c-db60-47db-9c49-72faf9b698d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05bba2c3-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:13:56 compute-0 nova_compute[254092]: 2025-11-25 17:13:56.948 254096 DEBUG os_vif [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:6a:96,bridge_name='br-int',has_traffic_filtering=True,id=05bba2c3-2422-401c-831b-f9af92f47719,network=Network(0063509c-db60-47db-9c49-72faf9b698d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05bba2c3-24') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:13:56 compute-0 nova_compute[254092]: 2025-11-25 17:13:56.948 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:56 compute-0 nova_compute[254092]: 2025-11-25 17:13:56.948 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:13:56 compute-0 nova_compute[254092]: 2025-11-25 17:13:56.949 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:13:56 compute-0 nova_compute[254092]: 2025-11-25 17:13:56.950 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:56 compute-0 nova_compute[254092]: 2025-11-25 17:13:56.950 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap05bba2c3-24, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:13:56 compute-0 nova_compute[254092]: 2025-11-25 17:13:56.951 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap05bba2c3-24, col_values=(('external_ids', {'iface-id': '05bba2c3-2422-401c-831b-f9af92f47719', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:47:6a:96', 'vm-uuid': '6157e53b-5ff5-4b55-b71a-125301c5268a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:13:56 compute-0 nova_compute[254092]: 2025-11-25 17:13:56.952 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:56 compute-0 NetworkManager[48891]: <info>  [1764090836.9526] manager: (tap05bba2c3-24): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/594)
Nov 25 17:13:56 compute-0 nova_compute[254092]: 2025-11-25 17:13:56.954 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:13:56 compute-0 nova_compute[254092]: 2025-11-25 17:13:56.958 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:56 compute-0 nova_compute[254092]: 2025-11-25 17:13:56.959 254096 INFO os_vif [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:6a:96,bridge_name='br-int',has_traffic_filtering=True,id=05bba2c3-2422-401c-831b-f9af92f47719,network=Network(0063509c-db60-47db-9c49-72faf9b698d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05bba2c3-24')
Nov 25 17:13:57 compute-0 nova_compute[254092]: 2025-11-25 17:13:57.006 254096 DEBUG nova.virt.libvirt.driver [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:13:57 compute-0 nova_compute[254092]: 2025-11-25 17:13:57.006 254096 DEBUG nova.virt.libvirt.driver [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:13:57 compute-0 nova_compute[254092]: 2025-11-25 17:13:57.007 254096 DEBUG nova.virt.libvirt.driver [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No VIF found with MAC fa:16:3e:16:7a:83, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:13:57 compute-0 nova_compute[254092]: 2025-11-25 17:13:57.007 254096 DEBUG nova.virt.libvirt.driver [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No VIF found with MAC fa:16:3e:47:6a:96, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:13:57 compute-0 nova_compute[254092]: 2025-11-25 17:13:57.008 254096 INFO nova.virt.libvirt.driver [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Using config drive
Nov 25 17:13:57 compute-0 nova_compute[254092]: 2025-11-25 17:13:57.032 254096 DEBUG nova.storage.rbd_utils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 6157e53b-5ff5-4b55-b71a-125301c5268a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:13:57 compute-0 nova_compute[254092]: 2025-11-25 17:13:57.222 254096 DEBUG nova.network.neutron [req-01864d18-36ec-444e-b0c5-17b078a825df req-87dbc869-466c-4409-b85c-ab97a71f4cef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Updated VIF entry in instance network info cache for port 05bba2c3-2422-401c-831b-f9af92f47719. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:13:57 compute-0 nova_compute[254092]: 2025-11-25 17:13:57.222 254096 DEBUG nova.network.neutron [req-01864d18-36ec-444e-b0c5-17b078a825df req-87dbc869-466c-4409-b85c-ab97a71f4cef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Updating instance_info_cache with network_info: [{"id": "c7c65059-3fc6-4a84-b8ce-de1306c01e13", "address": "fa:16:3e:16:7a:83", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7c65059-3f", "ovs_interfaceid": "c7c65059-3fc6-4a84-b8ce-de1306c01e13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "05bba2c3-2422-401c-831b-f9af92f47719", "address": "fa:16:3e:47:6a:96", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe47:6a96", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05bba2c3-24", "ovs_interfaceid": "05bba2c3-2422-401c-831b-f9af92f47719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:13:57 compute-0 nova_compute[254092]: 2025-11-25 17:13:57.239 254096 DEBUG oslo_concurrency.lockutils [req-01864d18-36ec-444e-b0c5-17b078a825df req-87dbc869-466c-4409-b85c-ab97a71f4cef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-6157e53b-5ff5-4b55-b71a-125301c5268a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:13:57 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2865905048' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:13:57 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2506127068' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:13:57 compute-0 nova_compute[254092]: 2025-11-25 17:13:57.401 254096 INFO nova.virt.libvirt.driver [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Creating config drive at /var/lib/nova/instances/6157e53b-5ff5-4b55-b71a-125301c5268a/disk.config
Nov 25 17:13:57 compute-0 nova_compute[254092]: 2025-11-25 17:13:57.409 254096 DEBUG oslo_concurrency.processutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6157e53b-5ff5-4b55-b71a-125301c5268a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm74k4tiv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:13:57 compute-0 nova_compute[254092]: 2025-11-25 17:13:57.566 254096 DEBUG oslo_concurrency.processutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6157e53b-5ff5-4b55-b71a-125301c5268a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm74k4tiv" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:13:57 compute-0 nova_compute[254092]: 2025-11-25 17:13:57.595 254096 DEBUG nova.storage.rbd_utils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 6157e53b-5ff5-4b55-b71a-125301c5268a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:13:57 compute-0 nova_compute[254092]: 2025-11-25 17:13:57.600 254096 DEBUG oslo_concurrency.processutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6157e53b-5ff5-4b55-b71a-125301c5268a/disk.config 6157e53b-5ff5-4b55-b71a-125301c5268a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:13:57 compute-0 nova_compute[254092]: 2025-11-25 17:13:57.778 254096 DEBUG oslo_concurrency.processutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6157e53b-5ff5-4b55-b71a-125301c5268a/disk.config 6157e53b-5ff5-4b55-b71a-125301c5268a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:13:57 compute-0 nova_compute[254092]: 2025-11-25 17:13:57.779 254096 INFO nova.virt.libvirt.driver [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Deleting local config drive /var/lib/nova/instances/6157e53b-5ff5-4b55-b71a-125301c5268a/disk.config because it was imported into RBD.
Nov 25 17:13:57 compute-0 kernel: tapc7c65059-3f: entered promiscuous mode
Nov 25 17:13:57 compute-0 NetworkManager[48891]: <info>  [1764090837.8321] manager: (tapc7c65059-3f): new Tun device (/org/freedesktop/NetworkManager/Devices/595)
Nov 25 17:13:57 compute-0 ovn_controller[153477]: 2025-11-25T17:13:57Z|01436|binding|INFO|Claiming lport c7c65059-3fc6-4a84-b8ce-de1306c01e13 for this chassis.
Nov 25 17:13:57 compute-0 ovn_controller[153477]: 2025-11-25T17:13:57Z|01437|binding|INFO|c7c65059-3fc6-4a84-b8ce-de1306c01e13: Claiming fa:16:3e:16:7a:83 10.100.0.10
Nov 25 17:13:57 compute-0 nova_compute[254092]: 2025-11-25 17:13:57.835 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:57.844 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:7a:83 10.100.0.10'], port_security=['fa:16:3e:16:7a:83 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '6157e53b-5ff5-4b55-b71a-125301c5268a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ddff42cd-c011-4371-96b1-f2bb5093a16d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': '28d41b65-ad5d-4095-abc6-8f6f750d16aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05eb2321-7214-473a-ae97-3a16001b4f58, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=c7c65059-3fc6-4a84-b8ce-de1306c01e13) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:13:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:57.845 163338 INFO neutron.agent.ovn.metadata.agent [-] Port c7c65059-3fc6-4a84-b8ce-de1306c01e13 in datapath ddff42cd-c011-4371-96b1-f2bb5093a16d bound to our chassis
Nov 25 17:13:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:57.846 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ddff42cd-c011-4371-96b1-f2bb5093a16d
Nov 25 17:13:57 compute-0 NetworkManager[48891]: <info>  [1764090837.8492] manager: (tap05bba2c3-24): new Tun device (/org/freedesktop/NetworkManager/Devices/596)
Nov 25 17:13:57 compute-0 systemd-udevd[406018]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:13:57 compute-0 systemd-udevd[406017]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:13:57 compute-0 kernel: tap05bba2c3-24: entered promiscuous mode
Nov 25 17:13:57 compute-0 ovn_controller[153477]: 2025-11-25T17:13:57Z|01438|binding|INFO|Setting lport c7c65059-3fc6-4a84-b8ce-de1306c01e13 ovn-installed in OVS
Nov 25 17:13:57 compute-0 ovn_controller[153477]: 2025-11-25T17:13:57Z|01439|binding|INFO|Setting lport c7c65059-3fc6-4a84-b8ce-de1306c01e13 up in Southbound
Nov 25 17:13:57 compute-0 nova_compute[254092]: 2025-11-25 17:13:57.865 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:57.865 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f1280f9f-2d55-4fc3-b334-43448abe7250]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:13:57 compute-0 ovn_controller[153477]: 2025-11-25T17:13:57Z|01440|if_status|INFO|Dropped 5 log messages in last 2609 seconds (most recently, 2609 seconds ago) due to excessive rate
Nov 25 17:13:57 compute-0 ovn_controller[153477]: 2025-11-25T17:13:57Z|01441|if_status|INFO|Not updating pb chassis for 05bba2c3-2422-401c-831b-f9af92f47719 now as sb is readonly
Nov 25 17:13:57 compute-0 ovn_controller[153477]: 2025-11-25T17:13:57Z|01442|binding|INFO|Claiming lport 05bba2c3-2422-401c-831b-f9af92f47719 for this chassis.
Nov 25 17:13:57 compute-0 ovn_controller[153477]: 2025-11-25T17:13:57Z|01443|binding|INFO|05bba2c3-2422-401c-831b-f9af92f47719: Claiming fa:16:3e:47:6a:96 2001:db8::f816:3eff:fe47:6a96
Nov 25 17:13:57 compute-0 NetworkManager[48891]: <info>  [1764090837.8810] device (tapc7c65059-3f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:13:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:57.878 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:6a:96 2001:db8::f816:3eff:fe47:6a96'], port_security=['fa:16:3e:47:6a:96 2001:db8::f816:3eff:fe47:6a96'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe47:6a96/64', 'neutron:device_id': '6157e53b-5ff5-4b55-b71a-125301c5268a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0063509c-db60-47db-9c49-72faf9b698d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': '28d41b65-ad5d-4095-abc6-8f6f750d16aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd325579-1b15-4e26-8600-222e6e6531a9, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=05bba2c3-2422-401c-831b-f9af92f47719) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:13:57 compute-0 NetworkManager[48891]: <info>  [1764090837.8820] device (tap05bba2c3-24): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:13:57 compute-0 NetworkManager[48891]: <info>  [1764090837.8831] device (tapc7c65059-3f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:13:57 compute-0 NetworkManager[48891]: <info>  [1764090837.8836] device (tap05bba2c3-24): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:13:57 compute-0 ovn_controller[153477]: 2025-11-25T17:13:57Z|01444|binding|INFO|Setting lport 05bba2c3-2422-401c-831b-f9af92f47719 ovn-installed in OVS
Nov 25 17:13:57 compute-0 ovn_controller[153477]: 2025-11-25T17:13:57Z|01445|binding|INFO|Setting lport 05bba2c3-2422-401c-831b-f9af92f47719 up in Southbound
Nov 25 17:13:57 compute-0 nova_compute[254092]: 2025-11-25 17:13:57.885 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:57 compute-0 systemd-machined[216343]: New machine qemu-172-instance-0000008a.
Nov 25 17:13:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:57.904 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[680d6b72-88aa-40f0-8e9d-9ddd5cf981d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:13:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:57.907 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a97dd5a5-f87e-4a86-874e-657a0d513338]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:13:57 compute-0 systemd[1]: Started Virtual Machine qemu-172-instance-0000008a.
Nov 25 17:13:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:57.938 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[81f97c2d-822a-47a9-9e75-849552493d13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:13:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:57.961 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[208437e2-d84b-4bec-88be-c962f4e8ba5c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapddff42cd-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:82:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 417], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 725824, 'reachable_time': 16078, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 406030, 'error': None, 'target': 'ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:13:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:57.979 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f95aa264-f18f-46e5-b171-63fdc2760040]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapddff42cd-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 725840, 'tstamp': 725840}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 406034, 'error': None, 'target': 'ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapddff42cd-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 725844, 'tstamp': 725844}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 406034, 'error': None, 'target': 'ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:13:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:57.980 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapddff42cd-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:13:57 compute-0 nova_compute[254092]: 2025-11-25 17:13:57.981 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:57 compute-0 nova_compute[254092]: 2025-11-25 17:13:57.982 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:57.983 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapddff42cd-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:13:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:57.983 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:13:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:57.984 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapddff42cd-c0, col_values=(('external_ids', {'iface-id': '9e2ffcd6-4edd-4351-a44c-02659b205145'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:13:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:57.984 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:13:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:57.986 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 05bba2c3-2422-401c-831b-f9af92f47719 in datapath 0063509c-db60-47db-9c49-72faf9b698d7 unbound from our chassis
Nov 25 17:13:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:57.987 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0063509c-db60-47db-9c49-72faf9b698d7
Nov 25 17:13:58 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:58.002 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0f41ab9c-14d3-4540-9aad-2dd2080d45bc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:13:58 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:58.034 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[2f3600d5-743c-45f9-907e-0fd3dec9a004]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:13:58 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:58.036 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[74399148-9709-4825-811f-39ff2749855c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:13:58 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:58.066 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9da8974b-4e90-49b9-8f80-2d5895a3b3ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:13:58 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:58.082 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2ac72200-22e8-4203-aeb7-a4f56dced32d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0063509c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:49:59'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 18, 'tx_packets': 4, 'rx_bytes': 1572, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 18, 'tx_packets': 4, 'rx_bytes': 1572, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 418], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 725935, 'reachable_time': 28494, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 18, 'inoctets': 1320, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 18, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1320, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 18, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 406041, 'error': None, 'target': 'ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:13:58 compute-0 nova_compute[254092]: 2025-11-25 17:13:58.084 254096 DEBUG nova.compute.manager [req-0173aad8-27af-4876-8422-feb95091561a req-f368e29d-cd60-46cd-812a-5f1c036643fe a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Received event network-vif-plugged-c7c65059-3fc6-4a84-b8ce-de1306c01e13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:13:58 compute-0 nova_compute[254092]: 2025-11-25 17:13:58.085 254096 DEBUG oslo_concurrency.lockutils [req-0173aad8-27af-4876-8422-feb95091561a req-f368e29d-cd60-46cd-812a-5f1c036643fe a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:13:58 compute-0 nova_compute[254092]: 2025-11-25 17:13:58.085 254096 DEBUG oslo_concurrency.lockutils [req-0173aad8-27af-4876-8422-feb95091561a req-f368e29d-cd60-46cd-812a-5f1c036643fe a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:13:58 compute-0 nova_compute[254092]: 2025-11-25 17:13:58.086 254096 DEBUG oslo_concurrency.lockutils [req-0173aad8-27af-4876-8422-feb95091561a req-f368e29d-cd60-46cd-812a-5f1c036643fe a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:13:58 compute-0 nova_compute[254092]: 2025-11-25 17:13:58.086 254096 DEBUG nova.compute.manager [req-0173aad8-27af-4876-8422-feb95091561a req-f368e29d-cd60-46cd-812a-5f1c036643fe a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Processing event network-vif-plugged-c7c65059-3fc6-4a84-b8ce-de1306c01e13 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 17:13:58 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:58.100 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ae352293-7314-4b80-9485-fe6ce758bd32]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0063509c-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 725952, 'tstamp': 725952}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 406042, 'error': None, 'target': 'ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:13:58 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:58.101 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0063509c-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:13:58 compute-0 nova_compute[254092]: 2025-11-25 17:13:58.103 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:58 compute-0 nova_compute[254092]: 2025-11-25 17:13:58.104 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:13:58 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:58.105 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0063509c-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:13:58 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:58.105 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:13:58 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:58.105 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0063509c-d0, col_values=(('external_ids', {'iface-id': '76c9f96b-1150-4e5a-a9e7-680d9f908998'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:13:58 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:13:58.106 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:13:58 compute-0 ceph-mon[74985]: pgmap v2712: 321 pgs: 321 active+clean; 167 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:13:58 compute-0 nova_compute[254092]: 2025-11-25 17:13:58.383 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090838.3828897, 6157e53b-5ff5-4b55-b71a-125301c5268a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:13:58 compute-0 nova_compute[254092]: 2025-11-25 17:13:58.384 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] VM Started (Lifecycle Event)
Nov 25 17:13:58 compute-0 nova_compute[254092]: 2025-11-25 17:13:58.402 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:13:58 compute-0 nova_compute[254092]: 2025-11-25 17:13:58.407 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090838.3831444, 6157e53b-5ff5-4b55-b71a-125301c5268a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:13:58 compute-0 nova_compute[254092]: 2025-11-25 17:13:58.407 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] VM Paused (Lifecycle Event)
Nov 25 17:13:58 compute-0 nova_compute[254092]: 2025-11-25 17:13:58.420 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:13:58 compute-0 nova_compute[254092]: 2025-11-25 17:13:58.425 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:13:58 compute-0 nova_compute[254092]: 2025-11-25 17:13:58.444 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:13:58 compute-0 nova_compute[254092]: 2025-11-25 17:13:58.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:13:58 compute-0 nova_compute[254092]: 2025-11-25 17:13:58.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 17:13:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2713: 321 pgs: 321 active+clean; 167 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:13:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:13:59 compute-0 podman[406086]: 2025-11-25 17:13:59.646613774 +0000 UTC m=+0.063444839 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 17:13:59 compute-0 podman[406087]: 2025-11-25 17:13:59.66450403 +0000 UTC m=+0.078519378 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 25 17:13:59 compute-0 podman[406088]: 2025-11-25 17:13:59.675627323 +0000 UTC m=+0.083923755 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 25 17:14:00 compute-0 nova_compute[254092]: 2025-11-25 17:14:00.198 254096 DEBUG nova.compute.manager [req-7c258ba3-b8a3-453c-8489-5b457dea34e8 req-c9e1ed13-79f5-4d8f-9aef-f4ac01cfea14 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Received event network-vif-plugged-c7c65059-3fc6-4a84-b8ce-de1306c01e13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:14:00 compute-0 nova_compute[254092]: 2025-11-25 17:14:00.198 254096 DEBUG oslo_concurrency.lockutils [req-7c258ba3-b8a3-453c-8489-5b457dea34e8 req-c9e1ed13-79f5-4d8f-9aef-f4ac01cfea14 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:14:00 compute-0 nova_compute[254092]: 2025-11-25 17:14:00.199 254096 DEBUG oslo_concurrency.lockutils [req-7c258ba3-b8a3-453c-8489-5b457dea34e8 req-c9e1ed13-79f5-4d8f-9aef-f4ac01cfea14 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:14:00 compute-0 nova_compute[254092]: 2025-11-25 17:14:00.199 254096 DEBUG oslo_concurrency.lockutils [req-7c258ba3-b8a3-453c-8489-5b457dea34e8 req-c9e1ed13-79f5-4d8f-9aef-f4ac01cfea14 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:14:00 compute-0 nova_compute[254092]: 2025-11-25 17:14:00.199 254096 DEBUG nova.compute.manager [req-7c258ba3-b8a3-453c-8489-5b457dea34e8 req-c9e1ed13-79f5-4d8f-9aef-f4ac01cfea14 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] No event matching network-vif-plugged-c7c65059-3fc6-4a84-b8ce-de1306c01e13 in dict_keys([('network-vif-plugged', '05bba2c3-2422-401c-831b-f9af92f47719')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Nov 25 17:14:00 compute-0 nova_compute[254092]: 2025-11-25 17:14:00.200 254096 WARNING nova.compute.manager [req-7c258ba3-b8a3-453c-8489-5b457dea34e8 req-c9e1ed13-79f5-4d8f-9aef-f4ac01cfea14 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Received unexpected event network-vif-plugged-c7c65059-3fc6-4a84-b8ce-de1306c01e13 for instance with vm_state building and task_state spawning.
Nov 25 17:14:00 compute-0 nova_compute[254092]: 2025-11-25 17:14:00.200 254096 DEBUG nova.compute.manager [req-7c258ba3-b8a3-453c-8489-5b457dea34e8 req-c9e1ed13-79f5-4d8f-9aef-f4ac01cfea14 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Received event network-vif-plugged-05bba2c3-2422-401c-831b-f9af92f47719 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:14:00 compute-0 nova_compute[254092]: 2025-11-25 17:14:00.200 254096 DEBUG oslo_concurrency.lockutils [req-7c258ba3-b8a3-453c-8489-5b457dea34e8 req-c9e1ed13-79f5-4d8f-9aef-f4ac01cfea14 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:14:00 compute-0 nova_compute[254092]: 2025-11-25 17:14:00.201 254096 DEBUG oslo_concurrency.lockutils [req-7c258ba3-b8a3-453c-8489-5b457dea34e8 req-c9e1ed13-79f5-4d8f-9aef-f4ac01cfea14 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:14:00 compute-0 nova_compute[254092]: 2025-11-25 17:14:00.201 254096 DEBUG oslo_concurrency.lockutils [req-7c258ba3-b8a3-453c-8489-5b457dea34e8 req-c9e1ed13-79f5-4d8f-9aef-f4ac01cfea14 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:14:00 compute-0 nova_compute[254092]: 2025-11-25 17:14:00.201 254096 DEBUG nova.compute.manager [req-7c258ba3-b8a3-453c-8489-5b457dea34e8 req-c9e1ed13-79f5-4d8f-9aef-f4ac01cfea14 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Processing event network-vif-plugged-05bba2c3-2422-401c-831b-f9af92f47719 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 17:14:00 compute-0 nova_compute[254092]: 2025-11-25 17:14:00.202 254096 DEBUG nova.compute.manager [req-7c258ba3-b8a3-453c-8489-5b457dea34e8 req-c9e1ed13-79f5-4d8f-9aef-f4ac01cfea14 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Received event network-vif-plugged-05bba2c3-2422-401c-831b-f9af92f47719 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:14:00 compute-0 nova_compute[254092]: 2025-11-25 17:14:00.202 254096 DEBUG oslo_concurrency.lockutils [req-7c258ba3-b8a3-453c-8489-5b457dea34e8 req-c9e1ed13-79f5-4d8f-9aef-f4ac01cfea14 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:14:00 compute-0 nova_compute[254092]: 2025-11-25 17:14:00.202 254096 DEBUG oslo_concurrency.lockutils [req-7c258ba3-b8a3-453c-8489-5b457dea34e8 req-c9e1ed13-79f5-4d8f-9aef-f4ac01cfea14 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:14:00 compute-0 nova_compute[254092]: 2025-11-25 17:14:00.203 254096 DEBUG oslo_concurrency.lockutils [req-7c258ba3-b8a3-453c-8489-5b457dea34e8 req-c9e1ed13-79f5-4d8f-9aef-f4ac01cfea14 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:14:00 compute-0 nova_compute[254092]: 2025-11-25 17:14:00.203 254096 DEBUG nova.compute.manager [req-7c258ba3-b8a3-453c-8489-5b457dea34e8 req-c9e1ed13-79f5-4d8f-9aef-f4ac01cfea14 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] No waiting events found dispatching network-vif-plugged-05bba2c3-2422-401c-831b-f9af92f47719 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:14:00 compute-0 nova_compute[254092]: 2025-11-25 17:14:00.203 254096 WARNING nova.compute.manager [req-7c258ba3-b8a3-453c-8489-5b457dea34e8 req-c9e1ed13-79f5-4d8f-9aef-f4ac01cfea14 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Received unexpected event network-vif-plugged-05bba2c3-2422-401c-831b-f9af92f47719 for instance with vm_state building and task_state spawning.
Nov 25 17:14:00 compute-0 nova_compute[254092]: 2025-11-25 17:14:00.204 254096 DEBUG nova.compute.manager [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 17:14:00 compute-0 nova_compute[254092]: 2025-11-25 17:14:00.209 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090840.2086995, 6157e53b-5ff5-4b55-b71a-125301c5268a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:14:00 compute-0 nova_compute[254092]: 2025-11-25 17:14:00.209 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] VM Resumed (Lifecycle Event)
Nov 25 17:14:00 compute-0 nova_compute[254092]: 2025-11-25 17:14:00.211 254096 DEBUG nova.virt.libvirt.driver [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 17:14:00 compute-0 nova_compute[254092]: 2025-11-25 17:14:00.215 254096 INFO nova.virt.libvirt.driver [-] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Instance spawned successfully.
Nov 25 17:14:00 compute-0 nova_compute[254092]: 2025-11-25 17:14:00.216 254096 DEBUG nova.virt.libvirt.driver [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 17:14:00 compute-0 nova_compute[254092]: 2025-11-25 17:14:00.236 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:14:00 compute-0 nova_compute[254092]: 2025-11-25 17:14:00.243 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:14:00 compute-0 nova_compute[254092]: 2025-11-25 17:14:00.248 254096 DEBUG nova.virt.libvirt.driver [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:14:00 compute-0 nova_compute[254092]: 2025-11-25 17:14:00.248 254096 DEBUG nova.virt.libvirt.driver [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:14:00 compute-0 nova_compute[254092]: 2025-11-25 17:14:00.249 254096 DEBUG nova.virt.libvirt.driver [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:14:00 compute-0 nova_compute[254092]: 2025-11-25 17:14:00.250 254096 DEBUG nova.virt.libvirt.driver [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:14:00 compute-0 nova_compute[254092]: 2025-11-25 17:14:00.250 254096 DEBUG nova.virt.libvirt.driver [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:14:00 compute-0 nova_compute[254092]: 2025-11-25 17:14:00.250 254096 DEBUG nova.virt.libvirt.driver [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:14:00 compute-0 nova_compute[254092]: 2025-11-25 17:14:00.274 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:14:00 compute-0 nova_compute[254092]: 2025-11-25 17:14:00.307 254096 INFO nova.compute.manager [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Took 12.04 seconds to spawn the instance on the hypervisor.
Nov 25 17:14:00 compute-0 nova_compute[254092]: 2025-11-25 17:14:00.307 254096 DEBUG nova.compute.manager [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:14:00 compute-0 ceph-mon[74985]: pgmap v2713: 321 pgs: 321 active+clean; 167 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:14:00 compute-0 nova_compute[254092]: 2025-11-25 17:14:00.399 254096 INFO nova.compute.manager [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Took 12.94 seconds to build instance.
Nov 25 17:14:00 compute-0 nova_compute[254092]: 2025-11-25 17:14:00.413 254096 DEBUG oslo_concurrency.lockutils [None req-09c86403-2c2f-43a1-9188-aab1e0331dea a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:14:00 compute-0 nova_compute[254092]: 2025-11-25 17:14:00.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:14:00 compute-0 nova_compute[254092]: 2025-11-25 17:14:00.695 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2714: 321 pgs: 321 active+clean; 167 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Nov 25 17:14:01 compute-0 nova_compute[254092]: 2025-11-25 17:14:01.952 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:02 compute-0 ceph-mon[74985]: pgmap v2714: 321 pgs: 321 active+clean; 167 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Nov 25 17:14:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2715: 321 pgs: 321 active+clean; 167 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 943 KiB/s wr, 22 op/s
Nov 25 17:14:03 compute-0 ceph-mon[74985]: pgmap v2715: 321 pgs: 321 active+clean; 167 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 943 KiB/s wr, 22 op/s
Nov 25 17:14:03 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #126. Immutable memtables: 0.
Nov 25 17:14:03 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:03.350095) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 17:14:03 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 126
Nov 25 17:14:03 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090843350122, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 2059, "num_deletes": 251, "total_data_size": 3438721, "memory_usage": 3485072, "flush_reason": "Manual Compaction"}
Nov 25 17:14:03 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #127: started
Nov 25 17:14:03 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090843374600, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 127, "file_size": 3372636, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 54561, "largest_seqno": 56619, "table_properties": {"data_size": 3363219, "index_size": 5975, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18919, "raw_average_key_size": 20, "raw_value_size": 3344585, "raw_average_value_size": 3561, "num_data_blocks": 265, "num_entries": 939, "num_filter_entries": 939, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764090617, "oldest_key_time": 1764090617, "file_creation_time": 1764090843, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 127, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:14:03 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 25032 microseconds, and 6847 cpu microseconds.
Nov 25 17:14:03 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:14:03 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:03.375118) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #127: 3372636 bytes OK
Nov 25 17:14:03 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:03.375308) [db/memtable_list.cc:519] [default] Level-0 commit table #127 started
Nov 25 17:14:03 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:03.376867) [db/memtable_list.cc:722] [default] Level-0 commit table #127: memtable #1 done
Nov 25 17:14:03 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:03.376892) EVENT_LOG_v1 {"time_micros": 1764090843376884, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 17:14:03 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:03.376916) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 17:14:03 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 3430081, prev total WAL file size 3430081, number of live WAL files 2.
Nov 25 17:14:03 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000123.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:14:03 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:03.379398) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Nov 25 17:14:03 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 17:14:03 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [127(3293KB)], [125(8212KB)]
Nov 25 17:14:03 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090843379464, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [127], "files_L6": [125], "score": -1, "input_data_size": 11781904, "oldest_snapshot_seqno": -1}
Nov 25 17:14:03 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #128: 7829 keys, 10090277 bytes, temperature: kUnknown
Nov 25 17:14:03 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090843438371, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 128, "file_size": 10090277, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10039465, "index_size": 30155, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19589, "raw_key_size": 203660, "raw_average_key_size": 26, "raw_value_size": 9901109, "raw_average_value_size": 1264, "num_data_blocks": 1177, "num_entries": 7829, "num_filter_entries": 7829, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764090843, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:14:03 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:14:03 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:03.438625) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 10090277 bytes
Nov 25 17:14:03 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:03.440231) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 199.8 rd, 171.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 8.0 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(6.5) write-amplify(3.0) OK, records in: 8343, records dropped: 514 output_compression: NoCompression
Nov 25 17:14:03 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:03.440254) EVENT_LOG_v1 {"time_micros": 1764090843440243, "job": 76, "event": "compaction_finished", "compaction_time_micros": 58980, "compaction_time_cpu_micros": 22663, "output_level": 6, "num_output_files": 1, "total_output_size": 10090277, "num_input_records": 8343, "num_output_records": 7829, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 17:14:03 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000127.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:14:03 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090843441035, "job": 76, "event": "table_file_deletion", "file_number": 127}
Nov 25 17:14:03 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:14:03 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090843442815, "job": 76, "event": "table_file_deletion", "file_number": 125}
Nov 25 17:14:03 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:03.379270) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:14:03 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:03.442873) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:14:03 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:03.442880) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:14:03 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:03.442883) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:14:03 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:03.442886) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:14:03 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:03.442889) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:14:03 compute-0 nova_compute[254092]: 2025-11-25 17:14:03.536 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:14:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:14:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2716: 321 pgs: 321 active+clean; 167 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 7.4 KiB/s rd, 13 KiB/s wr, 10 op/s
Nov 25 17:14:05 compute-0 nova_compute[254092]: 2025-11-25 17:14:05.696 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:05 compute-0 ceph-mon[74985]: pgmap v2716: 321 pgs: 321 active+clean; 167 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 7.4 KiB/s rd, 13 KiB/s wr, 10 op/s
Nov 25 17:14:05 compute-0 nova_compute[254092]: 2025-11-25 17:14:05.937 254096 DEBUG nova.compute.manager [req-95a25b71-e142-470e-8edd-78201330e71d req-92965b09-568c-4ea1-a487-770387d981eb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Received event network-changed-c7c65059-3fc6-4a84-b8ce-de1306c01e13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:14:05 compute-0 nova_compute[254092]: 2025-11-25 17:14:05.938 254096 DEBUG nova.compute.manager [req-95a25b71-e142-470e-8edd-78201330e71d req-92965b09-568c-4ea1-a487-770387d981eb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Refreshing instance network info cache due to event network-changed-c7c65059-3fc6-4a84-b8ce-de1306c01e13. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:14:05 compute-0 nova_compute[254092]: 2025-11-25 17:14:05.938 254096 DEBUG oslo_concurrency.lockutils [req-95a25b71-e142-470e-8edd-78201330e71d req-92965b09-568c-4ea1-a487-770387d981eb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-6157e53b-5ff5-4b55-b71a-125301c5268a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:14:05 compute-0 nova_compute[254092]: 2025-11-25 17:14:05.939 254096 DEBUG oslo_concurrency.lockutils [req-95a25b71-e142-470e-8edd-78201330e71d req-92965b09-568c-4ea1-a487-770387d981eb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-6157e53b-5ff5-4b55-b71a-125301c5268a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:14:05 compute-0 nova_compute[254092]: 2025-11-25 17:14:05.939 254096 DEBUG nova.network.neutron [req-95a25b71-e142-470e-8edd-78201330e71d req-92965b09-568c-4ea1-a487-770387d981eb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Refreshing network info cache for port c7c65059-3fc6-4a84-b8ce-de1306c01e13 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:14:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2717: 321 pgs: 321 active+clean; 167 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 25 17:14:06 compute-0 nova_compute[254092]: 2025-11-25 17:14:06.954 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:07 compute-0 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 17:14:07 compute-0 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4801.5 total, 600.0 interval
                                           Cumulative writes: 35K writes, 140K keys, 35K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s
                                           Cumulative WAL: 35K writes, 12K syncs, 2.81 writes per sync, written: 0.14 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3348 writes, 13K keys, 3348 commit groups, 1.0 writes per commit group, ingest: 15.40 MB, 0.03 MB/s
                                           Interval WAL: 3348 writes, 1330 syncs, 2.52 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 17:14:07 compute-0 nova_compute[254092]: 2025-11-25 17:14:07.608 254096 DEBUG nova.network.neutron [req-95a25b71-e142-470e-8edd-78201330e71d req-92965b09-568c-4ea1-a487-770387d981eb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Updated VIF entry in instance network info cache for port c7c65059-3fc6-4a84-b8ce-de1306c01e13. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:14:07 compute-0 nova_compute[254092]: 2025-11-25 17:14:07.608 254096 DEBUG nova.network.neutron [req-95a25b71-e142-470e-8edd-78201330e71d req-92965b09-568c-4ea1-a487-770387d981eb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Updating instance_info_cache with network_info: [{"id": "c7c65059-3fc6-4a84-b8ce-de1306c01e13", "address": "fa:16:3e:16:7a:83", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7c65059-3f", "ovs_interfaceid": "c7c65059-3fc6-4a84-b8ce-de1306c01e13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "05bba2c3-2422-401c-831b-f9af92f47719", "address": "fa:16:3e:47:6a:96", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe47:6a96", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05bba2c3-24", "ovs_interfaceid": "05bba2c3-2422-401c-831b-f9af92f47719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:14:07 compute-0 nova_compute[254092]: 2025-11-25 17:14:07.626 254096 DEBUG oslo_concurrency.lockutils [req-95a25b71-e142-470e-8edd-78201330e71d req-92965b09-568c-4ea1-a487-770387d981eb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-6157e53b-5ff5-4b55-b71a-125301c5268a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:14:07 compute-0 ceph-mon[74985]: pgmap v2717: 321 pgs: 321 active+clean; 167 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 25 17:14:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2718: 321 pgs: 321 active+clean; 167 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Nov 25 17:14:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:14:09 compute-0 ceph-mon[74985]: pgmap v2718: 321 pgs: 321 active+clean; 167 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Nov 25 17:14:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:14:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:14:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:14:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:14:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:14:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:14:10 compute-0 nova_compute[254092]: 2025-11-25 17:14:10.698 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2719: 321 pgs: 321 active+clean; 167 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 25 17:14:11 compute-0 ceph-mon[74985]: pgmap v2719: 321 pgs: 321 active+clean; 167 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 25 17:14:11 compute-0 nova_compute[254092]: 2025-11-25 17:14:11.957 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2720: 321 pgs: 321 active+clean; 167 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 69 op/s
Nov 25 17:14:13 compute-0 ovn_controller[153477]: 2025-11-25T17:14:13Z|00172|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:16:7a:83 10.100.0.10
Nov 25 17:14:13 compute-0 ovn_controller[153477]: 2025-11-25T17:14:13Z|00173|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:16:7a:83 10.100.0.10
Nov 25 17:14:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:13.651 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:14:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:13.652 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:14:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:13.653 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:14:13 compute-0 ceph-mon[74985]: pgmap v2720: 321 pgs: 321 active+clean; 167 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 69 op/s
Nov 25 17:14:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:14:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2721: 321 pgs: 321 active+clean; 167 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 KiB/s wr, 64 op/s
Nov 25 17:14:15 compute-0 nova_compute[254092]: 2025-11-25 17:14:15.700 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:15 compute-0 ceph-mon[74985]: pgmap v2721: 321 pgs: 321 active+clean; 167 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 KiB/s wr, 64 op/s
Nov 25 17:14:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2722: 321 pgs: 321 active+clean; 200 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 122 op/s
Nov 25 17:14:16 compute-0 nova_compute[254092]: 2025-11-25 17:14:16.958 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:17 compute-0 ceph-mon[74985]: pgmap v2722: 321 pgs: 321 active+clean; 200 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 122 op/s
Nov 25 17:14:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2723: 321 pgs: 321 active+clean; 200 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 223 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Nov 25 17:14:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:14:19 compute-0 ceph-mon[74985]: pgmap v2723: 321 pgs: 321 active+clean; 200 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 223 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Nov 25 17:14:20 compute-0 nova_compute[254092]: 2025-11-25 17:14:20.705 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2724: 321 pgs: 321 active+clean; 200 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 223 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 25 17:14:21 compute-0 ceph-mon[74985]: pgmap v2724: 321 pgs: 321 active+clean; 200 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 223 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 25 17:14:21 compute-0 nova_compute[254092]: 2025-11-25 17:14:21.960 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2725: 321 pgs: 321 active+clean; 200 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 223 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 25 17:14:23 compute-0 ceph-mon[74985]: pgmap v2725: 321 pgs: 321 active+clean; 200 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 223 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 25 17:14:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:14:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2726: 321 pgs: 321 active+clean; 200 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 221 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Nov 25 17:14:25 compute-0 ceph-mgr[75280]: [devicehealth INFO root] Check health
Nov 25 17:14:25 compute-0 nova_compute[254092]: 2025-11-25 17:14:25.709 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:25 compute-0 ceph-mon[74985]: pgmap v2726: 321 pgs: 321 active+clean; 200 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 221 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Nov 25 17:14:26 compute-0 nova_compute[254092]: 2025-11-25 17:14:26.458 254096 DEBUG nova.compute.manager [req-4731aa6f-94a4-419f-b467-b73e0ffa13c6 req-9d59abf8-904d-472b-b966-5cf706df505b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Received event network-changed-c7c65059-3fc6-4a84-b8ce-de1306c01e13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:14:26 compute-0 nova_compute[254092]: 2025-11-25 17:14:26.459 254096 DEBUG nova.compute.manager [req-4731aa6f-94a4-419f-b467-b73e0ffa13c6 req-9d59abf8-904d-472b-b966-5cf706df505b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Refreshing instance network info cache due to event network-changed-c7c65059-3fc6-4a84-b8ce-de1306c01e13. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:14:26 compute-0 nova_compute[254092]: 2025-11-25 17:14:26.460 254096 DEBUG oslo_concurrency.lockutils [req-4731aa6f-94a4-419f-b467-b73e0ffa13c6 req-9d59abf8-904d-472b-b966-5cf706df505b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-6157e53b-5ff5-4b55-b71a-125301c5268a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:14:26 compute-0 nova_compute[254092]: 2025-11-25 17:14:26.460 254096 DEBUG oslo_concurrency.lockutils [req-4731aa6f-94a4-419f-b467-b73e0ffa13c6 req-9d59abf8-904d-472b-b966-5cf706df505b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-6157e53b-5ff5-4b55-b71a-125301c5268a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:14:26 compute-0 nova_compute[254092]: 2025-11-25 17:14:26.461 254096 DEBUG nova.network.neutron [req-4731aa6f-94a4-419f-b467-b73e0ffa13c6 req-9d59abf8-904d-472b-b966-5cf706df505b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Refreshing network info cache for port c7c65059-3fc6-4a84-b8ce-de1306c01e13 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:14:26 compute-0 nova_compute[254092]: 2025-11-25 17:14:26.691 254096 DEBUG oslo_concurrency.lockutils [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "6157e53b-5ff5-4b55-b71a-125301c5268a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:14:26 compute-0 nova_compute[254092]: 2025-11-25 17:14:26.692 254096 DEBUG oslo_concurrency.lockutils [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:14:26 compute-0 nova_compute[254092]: 2025-11-25 17:14:26.692 254096 DEBUG oslo_concurrency.lockutils [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:14:26 compute-0 nova_compute[254092]: 2025-11-25 17:14:26.693 254096 DEBUG oslo_concurrency.lockutils [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:14:26 compute-0 nova_compute[254092]: 2025-11-25 17:14:26.693 254096 DEBUG oslo_concurrency.lockutils [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:14:26 compute-0 nova_compute[254092]: 2025-11-25 17:14:26.694 254096 INFO nova.compute.manager [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Terminating instance
Nov 25 17:14:26 compute-0 nova_compute[254092]: 2025-11-25 17:14:26.695 254096 DEBUG nova.compute.manager [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 17:14:26 compute-0 kernel: tapc7c65059-3f (unregistering): left promiscuous mode
Nov 25 17:14:26 compute-0 NetworkManager[48891]: <info>  [1764090866.7553] device (tapc7c65059-3f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:14:26 compute-0 ovn_controller[153477]: 2025-11-25T17:14:26Z|01446|binding|INFO|Releasing lport c7c65059-3fc6-4a84-b8ce-de1306c01e13 from this chassis (sb_readonly=0)
Nov 25 17:14:26 compute-0 ovn_controller[153477]: 2025-11-25T17:14:26Z|01447|binding|INFO|Setting lport c7c65059-3fc6-4a84-b8ce-de1306c01e13 down in Southbound
Nov 25 17:14:26 compute-0 nova_compute[254092]: 2025-11-25 17:14:26.771 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:26 compute-0 ovn_controller[153477]: 2025-11-25T17:14:26Z|01448|binding|INFO|Removing iface tapc7c65059-3f ovn-installed in OVS
Nov 25 17:14:26 compute-0 nova_compute[254092]: 2025-11-25 17:14:26.775 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:26.781 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:7a:83 10.100.0.10'], port_security=['fa:16:3e:16:7a:83 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '6157e53b-5ff5-4b55-b71a-125301c5268a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ddff42cd-c011-4371-96b1-f2bb5093a16d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': '28d41b65-ad5d-4095-abc6-8f6f750d16aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05eb2321-7214-473a-ae97-3a16001b4f58, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=c7c65059-3fc6-4a84-b8ce-de1306c01e13) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:14:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:26.783 163338 INFO neutron.agent.ovn.metadata.agent [-] Port c7c65059-3fc6-4a84-b8ce-de1306c01e13 in datapath ddff42cd-c011-4371-96b1-f2bb5093a16d unbound from our chassis
Nov 25 17:14:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:26.785 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ddff42cd-c011-4371-96b1-f2bb5093a16d
Nov 25 17:14:26 compute-0 kernel: tap05bba2c3-24 (unregistering): left promiscuous mode
Nov 25 17:14:26 compute-0 NetworkManager[48891]: <info>  [1764090866.7911] device (tap05bba2c3-24): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:14:26 compute-0 nova_compute[254092]: 2025-11-25 17:14:26.794 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:26 compute-0 ovn_controller[153477]: 2025-11-25T17:14:26Z|01449|binding|INFO|Releasing lport 05bba2c3-2422-401c-831b-f9af92f47719 from this chassis (sb_readonly=0)
Nov 25 17:14:26 compute-0 ovn_controller[153477]: 2025-11-25T17:14:26Z|01450|binding|INFO|Setting lport 05bba2c3-2422-401c-831b-f9af92f47719 down in Southbound
Nov 25 17:14:26 compute-0 ovn_controller[153477]: 2025-11-25T17:14:26Z|01451|binding|INFO|Removing iface tap05bba2c3-24 ovn-installed in OVS
Nov 25 17:14:26 compute-0 nova_compute[254092]: 2025-11-25 17:14:26.802 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:26 compute-0 nova_compute[254092]: 2025-11-25 17:14:26.804 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2727: 321 pgs: 321 active+clean; 200 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 235 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Nov 25 17:14:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:26.808 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3c261eb8-c702-4e55-b488-4037c998cabc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:14:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:26.811 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:6a:96 2001:db8::f816:3eff:fe47:6a96'], port_security=['fa:16:3e:47:6a:96 2001:db8::f816:3eff:fe47:6a96'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe47:6a96/64', 'neutron:device_id': '6157e53b-5ff5-4b55-b71a-125301c5268a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0063509c-db60-47db-9c49-72faf9b698d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': '28d41b65-ad5d-4095-abc6-8f6f750d16aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd325579-1b15-4e26-8600-222e6e6531a9, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=05bba2c3-2422-401c-831b-f9af92f47719) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:14:26 compute-0 nova_compute[254092]: 2025-11-25 17:14:26.828 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:26.839 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[10a7e7d9-79de-40ae-87b4-3b9bc6e3d8ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:14:26 compute-0 systemd[1]: machine-qemu\x2d172\x2dinstance\x2d0000008a.scope: Deactivated successfully.
Nov 25 17:14:26 compute-0 systemd[1]: machine-qemu\x2d172\x2dinstance\x2d0000008a.scope: Consumed 14.082s CPU time.
Nov 25 17:14:26 compute-0 systemd-machined[216343]: Machine qemu-172-instance-0000008a terminated.
Nov 25 17:14:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:26.847 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[85abba5a-98c1-456e-855f-ec9870ef9703]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:14:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:26.872 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ecca69a1-72ee-40d3-a86f-e661b839fdb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:14:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:26.891 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[616074b6-e2fd-4c27-9aa0-37415021e572]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapddff42cd-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:82:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 417], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 725824, 'reachable_time': 16078, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 406166, 'error': None, 'target': 'ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:14:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:26.911 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ca6f592c-9a8b-4a4a-8809-cba691e3e782]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapddff42cd-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 725840, 'tstamp': 725840}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 406167, 'error': None, 'target': 'ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapddff42cd-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 725844, 'tstamp': 725844}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 406167, 'error': None, 'target': 'ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:14:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:26.913 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapddff42cd-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:14:26 compute-0 nova_compute[254092]: 2025-11-25 17:14:26.914 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:26 compute-0 nova_compute[254092]: 2025-11-25 17:14:26.923 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:26.924 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapddff42cd-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:14:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:26.925 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:14:26 compute-0 NetworkManager[48891]: <info>  [1764090866.9275] manager: (tap05bba2c3-24): new Tun device (/org/freedesktop/NetworkManager/Devices/597)
Nov 25 17:14:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:26.927 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapddff42cd-c0, col_values=(('external_ids', {'iface-id': '9e2ffcd6-4edd-4351-a44c-02659b205145'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:14:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:26.928 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:14:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:26.931 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 05bba2c3-2422-401c-831b-f9af92f47719 in datapath 0063509c-db60-47db-9c49-72faf9b698d7 unbound from our chassis
Nov 25 17:14:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:26.933 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0063509c-db60-47db-9c49-72faf9b698d7
Nov 25 17:14:26 compute-0 nova_compute[254092]: 2025-11-25 17:14:26.945 254096 INFO nova.virt.libvirt.driver [-] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Instance destroyed successfully.
Nov 25 17:14:26 compute-0 nova_compute[254092]: 2025-11-25 17:14:26.945 254096 DEBUG nova.objects.instance [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'resources' on Instance uuid 6157e53b-5ff5-4b55-b71a-125301c5268a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:14:26 compute-0 nova_compute[254092]: 2025-11-25 17:14:26.958 254096 DEBUG nova.virt.libvirt.vif [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:13:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1481975263',display_name='tempest-TestGettingAddress-server-1481975263',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1481975263',id=138,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF9z68jccGfyAVxEITitYiuN9vFSdA5Xa/Y4I6Func8CZqbMVn2UqMvh2Pq/mSGYMJkN1SoVPK2VCVvj1pyhy91CkHawiwsrdFCgvBZGIbNZI9aqQaje8pTP0iHBJQrkZQ==',key_name='tempest-TestGettingAddress-1190179439',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:14:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-mml9qjj9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:14:00Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=6157e53b-5ff5-4b55-b71a-125301c5268a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c7c65059-3fc6-4a84-b8ce-de1306c01e13", "address": "fa:16:3e:16:7a:83", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7c65059-3f", "ovs_interfaceid": "c7c65059-3fc6-4a84-b8ce-de1306c01e13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:14:26 compute-0 nova_compute[254092]: 2025-11-25 17:14:26.958 254096 DEBUG nova.network.os_vif_util [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "c7c65059-3fc6-4a84-b8ce-de1306c01e13", "address": "fa:16:3e:16:7a:83", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7c65059-3f", "ovs_interfaceid": "c7c65059-3fc6-4a84-b8ce-de1306c01e13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:14:26 compute-0 nova_compute[254092]: 2025-11-25 17:14:26.959 254096 DEBUG nova.network.os_vif_util [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:16:7a:83,bridge_name='br-int',has_traffic_filtering=True,id=c7c65059-3fc6-4a84-b8ce-de1306c01e13,network=Network(ddff42cd-c011-4371-96b1-f2bb5093a16d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7c65059-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:14:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:26.956 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[06c80d16-2efd-45b9-b332-d635739500e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:14:26 compute-0 nova_compute[254092]: 2025-11-25 17:14:26.959 254096 DEBUG os_vif [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:16:7a:83,bridge_name='br-int',has_traffic_filtering=True,id=c7c65059-3fc6-4a84-b8ce-de1306c01e13,network=Network(ddff42cd-c011-4371-96b1-f2bb5093a16d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7c65059-3f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:14:26 compute-0 nova_compute[254092]: 2025-11-25 17:14:26.961 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:26 compute-0 nova_compute[254092]: 2025-11-25 17:14:26.961 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc7c65059-3f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:14:26 compute-0 nova_compute[254092]: 2025-11-25 17:14:26.962 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:26 compute-0 nova_compute[254092]: 2025-11-25 17:14:26.964 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:14:26 compute-0 nova_compute[254092]: 2025-11-25 17:14:26.970 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:26 compute-0 nova_compute[254092]: 2025-11-25 17:14:26.974 254096 INFO os_vif [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:16:7a:83,bridge_name='br-int',has_traffic_filtering=True,id=c7c65059-3fc6-4a84-b8ce-de1306c01e13,network=Network(ddff42cd-c011-4371-96b1-f2bb5093a16d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc7c65059-3f')
Nov 25 17:14:26 compute-0 nova_compute[254092]: 2025-11-25 17:14:26.975 254096 DEBUG nova.virt.libvirt.vif [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:13:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1481975263',display_name='tempest-TestGettingAddress-server-1481975263',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1481975263',id=138,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF9z68jccGfyAVxEITitYiuN9vFSdA5Xa/Y4I6Func8CZqbMVn2UqMvh2Pq/mSGYMJkN1SoVPK2VCVvj1pyhy91CkHawiwsrdFCgvBZGIbNZI9aqQaje8pTP0iHBJQrkZQ==',key_name='tempest-TestGettingAddress-1190179439',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:14:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-mml9qjj9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:14:00Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=6157e53b-5ff5-4b55-b71a-125301c5268a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "05bba2c3-2422-401c-831b-f9af92f47719", "address": "fa:16:3e:47:6a:96", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe47:6a96", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05bba2c3-24", "ovs_interfaceid": "05bba2c3-2422-401c-831b-f9af92f47719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:14:26 compute-0 nova_compute[254092]: 2025-11-25 17:14:26.975 254096 DEBUG nova.network.os_vif_util [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "05bba2c3-2422-401c-831b-f9af92f47719", "address": "fa:16:3e:47:6a:96", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe47:6a96", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05bba2c3-24", "ovs_interfaceid": "05bba2c3-2422-401c-831b-f9af92f47719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:14:26 compute-0 nova_compute[254092]: 2025-11-25 17:14:26.976 254096 DEBUG nova.network.os_vif_util [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:6a:96,bridge_name='br-int',has_traffic_filtering=True,id=05bba2c3-2422-401c-831b-f9af92f47719,network=Network(0063509c-db60-47db-9c49-72faf9b698d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05bba2c3-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:14:26 compute-0 nova_compute[254092]: 2025-11-25 17:14:26.976 254096 DEBUG os_vif [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:6a:96,bridge_name='br-int',has_traffic_filtering=True,id=05bba2c3-2422-401c-831b-f9af92f47719,network=Network(0063509c-db60-47db-9c49-72faf9b698d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05bba2c3-24') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:14:26 compute-0 nova_compute[254092]: 2025-11-25 17:14:26.977 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:26 compute-0 nova_compute[254092]: 2025-11-25 17:14:26.978 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05bba2c3-24, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:14:26 compute-0 nova_compute[254092]: 2025-11-25 17:14:26.979 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:26 compute-0 nova_compute[254092]: 2025-11-25 17:14:26.980 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:26 compute-0 nova_compute[254092]: 2025-11-25 17:14:26.982 254096 INFO os_vif [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:6a:96,bridge_name='br-int',has_traffic_filtering=True,id=05bba2c3-2422-401c-831b-f9af92f47719,network=Network(0063509c-db60-47db-9c49-72faf9b698d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05bba2c3-24')
Nov 25 17:14:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:26.989 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[7e6d784d-01c3-45b0-9615-00bbda31f239]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:14:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:26.992 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a0704cc1-36c7-4a1f-ae85-87044f72a0a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:14:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:27.023 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d5922ccf-2d8e-479e-bb01-bf135308ea74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:14:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:27.040 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[225d26c6-9092-4ed1-9394-59453fc69198]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0063509c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:49:59'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 5, 'rx_bytes': 2612, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 5, 'rx_bytes': 2612, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 418], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 725935, 'reachable_time': 28494, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2192, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2192, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 406214, 'error': None, 'target': 'ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:14:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:27.057 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5108a6e0-c50a-448d-81e8-850de92fdc0d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0063509c-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 725952, 'tstamp': 725952}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 406215, 'error': None, 'target': 'ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:14:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:27.059 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0063509c-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:14:27 compute-0 nova_compute[254092]: 2025-11-25 17:14:27.060 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:27 compute-0 nova_compute[254092]: 2025-11-25 17:14:27.061 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:27.062 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0063509c-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:14:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:27.062 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:14:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:27.063 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0063509c-d0, col_values=(('external_ids', {'iface-id': '76c9f96b-1150-4e5a-a9e7-680d9f908998'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:14:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:27.063 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:14:27 compute-0 nova_compute[254092]: 2025-11-25 17:14:27.367 254096 INFO nova.virt.libvirt.driver [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Deleting instance files /var/lib/nova/instances/6157e53b-5ff5-4b55-b71a-125301c5268a_del
Nov 25 17:14:27 compute-0 nova_compute[254092]: 2025-11-25 17:14:27.368 254096 INFO nova.virt.libvirt.driver [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Deletion of /var/lib/nova/instances/6157e53b-5ff5-4b55-b71a-125301c5268a_del complete
Nov 25 17:14:27 compute-0 nova_compute[254092]: 2025-11-25 17:14:27.415 254096 INFO nova.compute.manager [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Took 0.72 seconds to destroy the instance on the hypervisor.
Nov 25 17:14:27 compute-0 nova_compute[254092]: 2025-11-25 17:14:27.415 254096 DEBUG oslo.service.loopingcall [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 17:14:27 compute-0 nova_compute[254092]: 2025-11-25 17:14:27.416 254096 DEBUG nova.compute.manager [-] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 17:14:27 compute-0 nova_compute[254092]: 2025-11-25 17:14:27.416 254096 DEBUG nova.network.neutron [-] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 17:14:27 compute-0 nova_compute[254092]: 2025-11-25 17:14:27.917 254096 DEBUG nova.network.neutron [req-4731aa6f-94a4-419f-b467-b73e0ffa13c6 req-9d59abf8-904d-472b-b966-5cf706df505b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Updated VIF entry in instance network info cache for port c7c65059-3fc6-4a84-b8ce-de1306c01e13. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:14:27 compute-0 nova_compute[254092]: 2025-11-25 17:14:27.918 254096 DEBUG nova.network.neutron [req-4731aa6f-94a4-419f-b467-b73e0ffa13c6 req-9d59abf8-904d-472b-b966-5cf706df505b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Updating instance_info_cache with network_info: [{"id": "c7c65059-3fc6-4a84-b8ce-de1306c01e13", "address": "fa:16:3e:16:7a:83", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7c65059-3f", "ovs_interfaceid": "c7c65059-3fc6-4a84-b8ce-de1306c01e13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "05bba2c3-2422-401c-831b-f9af92f47719", "address": "fa:16:3e:47:6a:96", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe47:6a96", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05bba2c3-24", "ovs_interfaceid": "05bba2c3-2422-401c-831b-f9af92f47719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:14:27 compute-0 nova_compute[254092]: 2025-11-25 17:14:27.937 254096 DEBUG oslo_concurrency.lockutils [req-4731aa6f-94a4-419f-b467-b73e0ffa13c6 req-9d59abf8-904d-472b-b966-5cf706df505b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-6157e53b-5ff5-4b55-b71a-125301c5268a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:14:27 compute-0 ceph-mon[74985]: pgmap v2727: 321 pgs: 321 active+clean; 200 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 235 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Nov 25 17:14:28 compute-0 nova_compute[254092]: 2025-11-25 17:14:28.182 254096 DEBUG nova.compute.manager [req-e299a7a4-5a9a-4620-85c6-7db5db0f7a02 req-65483431-3266-4a3c-8d58-53e1de846248 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Received event network-vif-deleted-c7c65059-3fc6-4a84-b8ce-de1306c01e13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:14:28 compute-0 nova_compute[254092]: 2025-11-25 17:14:28.183 254096 INFO nova.compute.manager [req-e299a7a4-5a9a-4620-85c6-7db5db0f7a02 req-65483431-3266-4a3c-8d58-53e1de846248 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Neutron deleted interface c7c65059-3fc6-4a84-b8ce-de1306c01e13; detaching it from the instance and deleting it from the info cache
Nov 25 17:14:28 compute-0 nova_compute[254092]: 2025-11-25 17:14:28.183 254096 DEBUG nova.network.neutron [req-e299a7a4-5a9a-4620-85c6-7db5db0f7a02 req-65483431-3266-4a3c-8d58-53e1de846248 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Updating instance_info_cache with network_info: [{"id": "05bba2c3-2422-401c-831b-f9af92f47719", "address": "fa:16:3e:47:6a:96", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe47:6a96", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05bba2c3-24", "ovs_interfaceid": "05bba2c3-2422-401c-831b-f9af92f47719", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:14:28 compute-0 nova_compute[254092]: 2025-11-25 17:14:28.210 254096 DEBUG nova.compute.manager [req-e299a7a4-5a9a-4620-85c6-7db5db0f7a02 req-65483431-3266-4a3c-8d58-53e1de846248 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Detach interface failed, port_id=c7c65059-3fc6-4a84-b8ce-de1306c01e13, reason: Instance 6157e53b-5ff5-4b55-b71a-125301c5268a could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 25 17:14:28 compute-0 nova_compute[254092]: 2025-11-25 17:14:28.482 254096 DEBUG nova.network.neutron [-] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:14:28 compute-0 nova_compute[254092]: 2025-11-25 17:14:28.498 254096 INFO nova.compute.manager [-] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Took 1.08 seconds to deallocate network for instance.
Nov 25 17:14:28 compute-0 nova_compute[254092]: 2025-11-25 17:14:28.537 254096 DEBUG oslo_concurrency.lockutils [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:14:28 compute-0 nova_compute[254092]: 2025-11-25 17:14:28.538 254096 DEBUG oslo_concurrency.lockutils [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:14:28 compute-0 nova_compute[254092]: 2025-11-25 17:14:28.565 254096 DEBUG nova.compute.manager [req-29fe348e-ce90-4143-88df-dec66f784339 req-fec39797-b56b-4824-ba54-348ebef04c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Received event network-vif-unplugged-c7c65059-3fc6-4a84-b8ce-de1306c01e13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:14:28 compute-0 nova_compute[254092]: 2025-11-25 17:14:28.565 254096 DEBUG oslo_concurrency.lockutils [req-29fe348e-ce90-4143-88df-dec66f784339 req-fec39797-b56b-4824-ba54-348ebef04c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:14:28 compute-0 nova_compute[254092]: 2025-11-25 17:14:28.566 254096 DEBUG oslo_concurrency.lockutils [req-29fe348e-ce90-4143-88df-dec66f784339 req-fec39797-b56b-4824-ba54-348ebef04c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:14:28 compute-0 nova_compute[254092]: 2025-11-25 17:14:28.566 254096 DEBUG oslo_concurrency.lockutils [req-29fe348e-ce90-4143-88df-dec66f784339 req-fec39797-b56b-4824-ba54-348ebef04c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:14:28 compute-0 nova_compute[254092]: 2025-11-25 17:14:28.566 254096 DEBUG nova.compute.manager [req-29fe348e-ce90-4143-88df-dec66f784339 req-fec39797-b56b-4824-ba54-348ebef04c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] No waiting events found dispatching network-vif-unplugged-c7c65059-3fc6-4a84-b8ce-de1306c01e13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:14:28 compute-0 nova_compute[254092]: 2025-11-25 17:14:28.567 254096 WARNING nova.compute.manager [req-29fe348e-ce90-4143-88df-dec66f784339 req-fec39797-b56b-4824-ba54-348ebef04c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Received unexpected event network-vif-unplugged-c7c65059-3fc6-4a84-b8ce-de1306c01e13 for instance with vm_state deleted and task_state None.
Nov 25 17:14:28 compute-0 nova_compute[254092]: 2025-11-25 17:14:28.567 254096 DEBUG nova.compute.manager [req-29fe348e-ce90-4143-88df-dec66f784339 req-fec39797-b56b-4824-ba54-348ebef04c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Received event network-vif-plugged-c7c65059-3fc6-4a84-b8ce-de1306c01e13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:14:28 compute-0 nova_compute[254092]: 2025-11-25 17:14:28.567 254096 DEBUG oslo_concurrency.lockutils [req-29fe348e-ce90-4143-88df-dec66f784339 req-fec39797-b56b-4824-ba54-348ebef04c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:14:28 compute-0 nova_compute[254092]: 2025-11-25 17:14:28.567 254096 DEBUG oslo_concurrency.lockutils [req-29fe348e-ce90-4143-88df-dec66f784339 req-fec39797-b56b-4824-ba54-348ebef04c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:14:28 compute-0 nova_compute[254092]: 2025-11-25 17:14:28.567 254096 DEBUG oslo_concurrency.lockutils [req-29fe348e-ce90-4143-88df-dec66f784339 req-fec39797-b56b-4824-ba54-348ebef04c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:14:28 compute-0 nova_compute[254092]: 2025-11-25 17:14:28.568 254096 DEBUG nova.compute.manager [req-29fe348e-ce90-4143-88df-dec66f784339 req-fec39797-b56b-4824-ba54-348ebef04c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] No waiting events found dispatching network-vif-plugged-c7c65059-3fc6-4a84-b8ce-de1306c01e13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:14:28 compute-0 nova_compute[254092]: 2025-11-25 17:14:28.568 254096 WARNING nova.compute.manager [req-29fe348e-ce90-4143-88df-dec66f784339 req-fec39797-b56b-4824-ba54-348ebef04c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Received unexpected event network-vif-plugged-c7c65059-3fc6-4a84-b8ce-de1306c01e13 for instance with vm_state deleted and task_state None.
Nov 25 17:14:28 compute-0 nova_compute[254092]: 2025-11-25 17:14:28.568 254096 DEBUG nova.compute.manager [req-29fe348e-ce90-4143-88df-dec66f784339 req-fec39797-b56b-4824-ba54-348ebef04c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Received event network-vif-unplugged-05bba2c3-2422-401c-831b-f9af92f47719 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:14:28 compute-0 nova_compute[254092]: 2025-11-25 17:14:28.568 254096 DEBUG oslo_concurrency.lockutils [req-29fe348e-ce90-4143-88df-dec66f784339 req-fec39797-b56b-4824-ba54-348ebef04c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:14:28 compute-0 nova_compute[254092]: 2025-11-25 17:14:28.569 254096 DEBUG oslo_concurrency.lockutils [req-29fe348e-ce90-4143-88df-dec66f784339 req-fec39797-b56b-4824-ba54-348ebef04c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:14:28 compute-0 nova_compute[254092]: 2025-11-25 17:14:28.569 254096 DEBUG oslo_concurrency.lockutils [req-29fe348e-ce90-4143-88df-dec66f784339 req-fec39797-b56b-4824-ba54-348ebef04c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:14:28 compute-0 nova_compute[254092]: 2025-11-25 17:14:28.569 254096 DEBUG nova.compute.manager [req-29fe348e-ce90-4143-88df-dec66f784339 req-fec39797-b56b-4824-ba54-348ebef04c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] No waiting events found dispatching network-vif-unplugged-05bba2c3-2422-401c-831b-f9af92f47719 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:14:28 compute-0 nova_compute[254092]: 2025-11-25 17:14:28.569 254096 WARNING nova.compute.manager [req-29fe348e-ce90-4143-88df-dec66f784339 req-fec39797-b56b-4824-ba54-348ebef04c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Received unexpected event network-vif-unplugged-05bba2c3-2422-401c-831b-f9af92f47719 for instance with vm_state deleted and task_state None.
Nov 25 17:14:28 compute-0 nova_compute[254092]: 2025-11-25 17:14:28.569 254096 DEBUG nova.compute.manager [req-29fe348e-ce90-4143-88df-dec66f784339 req-fec39797-b56b-4824-ba54-348ebef04c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Received event network-vif-plugged-05bba2c3-2422-401c-831b-f9af92f47719 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:14:28 compute-0 nova_compute[254092]: 2025-11-25 17:14:28.570 254096 DEBUG oslo_concurrency.lockutils [req-29fe348e-ce90-4143-88df-dec66f784339 req-fec39797-b56b-4824-ba54-348ebef04c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:14:28 compute-0 nova_compute[254092]: 2025-11-25 17:14:28.570 254096 DEBUG oslo_concurrency.lockutils [req-29fe348e-ce90-4143-88df-dec66f784339 req-fec39797-b56b-4824-ba54-348ebef04c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:14:28 compute-0 nova_compute[254092]: 2025-11-25 17:14:28.570 254096 DEBUG oslo_concurrency.lockutils [req-29fe348e-ce90-4143-88df-dec66f784339 req-fec39797-b56b-4824-ba54-348ebef04c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:14:28 compute-0 nova_compute[254092]: 2025-11-25 17:14:28.570 254096 DEBUG nova.compute.manager [req-29fe348e-ce90-4143-88df-dec66f784339 req-fec39797-b56b-4824-ba54-348ebef04c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] No waiting events found dispatching network-vif-plugged-05bba2c3-2422-401c-831b-f9af92f47719 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:14:28 compute-0 nova_compute[254092]: 2025-11-25 17:14:28.571 254096 WARNING nova.compute.manager [req-29fe348e-ce90-4143-88df-dec66f784339 req-fec39797-b56b-4824-ba54-348ebef04c07 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Received unexpected event network-vif-plugged-05bba2c3-2422-401c-831b-f9af92f47719 for instance with vm_state deleted and task_state None.
Nov 25 17:14:28 compute-0 nova_compute[254092]: 2025-11-25 17:14:28.618 254096 DEBUG oslo_concurrency.processutils [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:14:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2728: 321 pgs: 321 active+clean; 200 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 12 KiB/s wr, 1 op/s
Nov 25 17:14:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:14:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:14:29 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3426995146' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:14:29 compute-0 nova_compute[254092]: 2025-11-25 17:14:29.189 254096 DEBUG oslo_concurrency.processutils [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:14:29 compute-0 nova_compute[254092]: 2025-11-25 17:14:29.195 254096 DEBUG nova.compute.provider_tree [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:14:29 compute-0 nova_compute[254092]: 2025-11-25 17:14:29.212 254096 DEBUG nova.scheduler.client.report [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:14:29 compute-0 nova_compute[254092]: 2025-11-25 17:14:29.229 254096 DEBUG oslo_concurrency.lockutils [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:14:29 compute-0 nova_compute[254092]: 2025-11-25 17:14:29.251 254096 INFO nova.scheduler.client.report [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Deleted allocations for instance 6157e53b-5ff5-4b55-b71a-125301c5268a
Nov 25 17:14:29 compute-0 nova_compute[254092]: 2025-11-25 17:14:29.308 254096 DEBUG oslo_concurrency.lockutils [None req-1d25a8fe-2396-42e1-8f95-50325b33d74e a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "6157e53b-5ff5-4b55-b71a-125301c5268a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:14:29 compute-0 ceph-mon[74985]: pgmap v2728: 321 pgs: 321 active+clean; 200 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 12 KiB/s wr, 1 op/s
Nov 25 17:14:29 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3426995146' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.274 254096 DEBUG nova.compute.manager [req-794db1f5-2121-4d90-ad2a-9cc1ce78c4da req-10ffab50-dbeb-4a8c-878c-27908dd59927 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Received event network-vif-deleted-05bba2c3-2422-401c-831b-f9af92f47719 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.275 254096 DEBUG nova.compute.manager [req-794db1f5-2121-4d90-ad2a-9cc1ce78c4da req-10ffab50-dbeb-4a8c-878c-27908dd59927 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Received event network-changed-bf0e0412-082f-4b0e-aabe-4e4f0de25b43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.275 254096 DEBUG nova.compute.manager [req-794db1f5-2121-4d90-ad2a-9cc1ce78c4da req-10ffab50-dbeb-4a8c-878c-27908dd59927 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Refreshing instance network info cache due to event network-changed-bf0e0412-082f-4b0e-aabe-4e4f0de25b43. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.275 254096 DEBUG oslo_concurrency.lockutils [req-794db1f5-2121-4d90-ad2a-9cc1ce78c4da req-10ffab50-dbeb-4a8c-878c-27908dd59927 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-3b797ee6-c82f-4c01-bb54-31de659fcad8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.275 254096 DEBUG oslo_concurrency.lockutils [req-794db1f5-2121-4d90-ad2a-9cc1ce78c4da req-10ffab50-dbeb-4a8c-878c-27908dd59927 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-3b797ee6-c82f-4c01-bb54-31de659fcad8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.276 254096 DEBUG nova.network.neutron [req-794db1f5-2121-4d90-ad2a-9cc1ce78c4da req-10ffab50-dbeb-4a8c-878c-27908dd59927 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Refreshing network info cache for port bf0e0412-082f-4b0e-aabe-4e4f0de25b43 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.332 254096 DEBUG oslo_concurrency.lockutils [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "3b797ee6-c82f-4c01-bb54-31de659fcad8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.332 254096 DEBUG oslo_concurrency.lockutils [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.333 254096 DEBUG oslo_concurrency.lockutils [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.333 254096 DEBUG oslo_concurrency.lockutils [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.333 254096 DEBUG oslo_concurrency.lockutils [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.334 254096 INFO nova.compute.manager [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Terminating instance
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.336 254096 DEBUG nova.compute.manager [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 17:14:30 compute-0 kernel: tapbf0e0412-08 (unregistering): left promiscuous mode
Nov 25 17:14:30 compute-0 NetworkManager[48891]: <info>  [1764090870.3908] device (tapbf0e0412-08): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.398 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:30 compute-0 ovn_controller[153477]: 2025-11-25T17:14:30Z|01452|binding|INFO|Releasing lport bf0e0412-082f-4b0e-aabe-4e4f0de25b43 from this chassis (sb_readonly=0)
Nov 25 17:14:30 compute-0 ovn_controller[153477]: 2025-11-25T17:14:30Z|01453|binding|INFO|Setting lport bf0e0412-082f-4b0e-aabe-4e4f0de25b43 down in Southbound
Nov 25 17:14:30 compute-0 ovn_controller[153477]: 2025-11-25T17:14:30Z|01454|binding|INFO|Removing iface tapbf0e0412-08 ovn-installed in OVS
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.401 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:30.413 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3b:43:89 10.100.0.8'], port_security=['fa:16:3e:3b:43:89 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '3b797ee6-c82f-4c01-bb54-31de659fcad8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ddff42cd-c011-4371-96b1-f2bb5093a16d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': '28d41b65-ad5d-4095-abc6-8f6f750d16aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05eb2321-7214-473a-ae97-3a16001b4f58, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=bf0e0412-082f-4b0e-aabe-4e4f0de25b43) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:14:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:30.414 163338 INFO neutron.agent.ovn.metadata.agent [-] Port bf0e0412-082f-4b0e-aabe-4e4f0de25b43 in datapath ddff42cd-c011-4371-96b1-f2bb5093a16d unbound from our chassis
Nov 25 17:14:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:30.415 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ddff42cd-c011-4371-96b1-f2bb5093a16d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 17:14:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:30.416 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ca4bf81b-1213-4bf6-a311-5494fd352113]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:14:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:30.417 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d namespace which is not needed anymore
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.421 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:30 compute-0 kernel: tap831eaa83-55 (unregistering): left promiscuous mode
Nov 25 17:14:30 compute-0 NetworkManager[48891]: <info>  [1764090870.4327] device (tap831eaa83-55): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.437 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:30 compute-0 ovn_controller[153477]: 2025-11-25T17:14:30Z|01455|binding|INFO|Releasing lport 831eaa83-55bd-4098-9037-4b628eb8d994 from this chassis (sb_readonly=0)
Nov 25 17:14:30 compute-0 ovn_controller[153477]: 2025-11-25T17:14:30Z|01456|binding|INFO|Setting lport 831eaa83-55bd-4098-9037-4b628eb8d994 down in Southbound
Nov 25 17:14:30 compute-0 ovn_controller[153477]: 2025-11-25T17:14:30Z|01457|binding|INFO|Removing iface tap831eaa83-55 ovn-installed in OVS
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.451 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.453 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:30.459 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:cd:a4 2001:db8::f816:3eff:feb0:cda4'], port_security=['fa:16:3e:b0:cd:a4 2001:db8::f816:3eff:feb0:cda4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feb0:cda4/64', 'neutron:device_id': '3b797ee6-c82f-4c01-bb54-31de659fcad8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0063509c-db60-47db-9c49-72faf9b698d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': '28d41b65-ad5d-4095-abc6-8f6f750d16aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd325579-1b15-4e26-8600-222e6e6531a9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=831eaa83-55bd-4098-9037-4b628eb8d994) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.473 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:30 compute-0 systemd[1]: machine-qemu\x2d171\x2dinstance\x2d00000089.scope: Deactivated successfully.
Nov 25 17:14:30 compute-0 systemd[1]: machine-qemu\x2d171\x2dinstance\x2d00000089.scope: Consumed 17.348s CPU time.
Nov 25 17:14:30 compute-0 systemd-machined[216343]: Machine qemu-171-instance-00000089 terminated.
Nov 25 17:14:30 compute-0 podman[406242]: 2025-11-25 17:14:30.510608804 +0000 UTC m=+0.071715253 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Nov 25 17:14:30 compute-0 podman[406241]: 2025-11-25 17:14:30.545079222 +0000 UTC m=+0.113646754 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 25 17:14:30 compute-0 podman[406243]: 2025-11-25 17:14:30.548854336 +0000 UTC m=+0.110319115 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=ovn_controller)
Nov 25 17:14:30 compute-0 neutron-haproxy-ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d[404568]: [NOTICE]   (404572) : haproxy version is 2.8.14-c23fe91
Nov 25 17:14:30 compute-0 neutron-haproxy-ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d[404568]: [NOTICE]   (404572) : path to executable is /usr/sbin/haproxy
Nov 25 17:14:30 compute-0 neutron-haproxy-ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d[404568]: [WARNING]  (404572) : Exiting Master process...
Nov 25 17:14:30 compute-0 neutron-haproxy-ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d[404568]: [WARNING]  (404572) : Exiting Master process...
Nov 25 17:14:30 compute-0 neutron-haproxy-ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d[404568]: [ALERT]    (404572) : Current worker (404574) exited with code 143 (Terminated)
Nov 25 17:14:30 compute-0 neutron-haproxy-ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d[404568]: [WARNING]  (404572) : All workers exited. Exiting... (0)
Nov 25 17:14:30 compute-0 systemd[1]: libpod-a88a0caf9afcc34c59b170fc8790dab9214efc08778e474baaa41ba4bc7b9587.scope: Deactivated successfully.
Nov 25 17:14:30 compute-0 NetworkManager[48891]: <info>  [1764090870.5708] manager: (tap831eaa83-55): new Tun device (/org/freedesktop/NetworkManager/Devices/598)
Nov 25 17:14:30 compute-0 podman[406316]: 2025-11-25 17:14:30.574868284 +0000 UTC m=+0.051560146 container died a88a0caf9afcc34c59b170fc8790dab9214efc08778e474baaa41ba4bc7b9587 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.602 254096 INFO nova.virt.libvirt.driver [-] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Instance destroyed successfully.
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.603 254096 DEBUG nova.objects.instance [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'resources' on Instance uuid 3b797ee6-c82f-4c01-bb54-31de659fcad8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:14:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a88a0caf9afcc34c59b170fc8790dab9214efc08778e474baaa41ba4bc7b9587-userdata-shm.mount: Deactivated successfully.
Nov 25 17:14:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-51787ffbad5844d0d6317c349bae98d9600fb763a7a78dde87ceae6df6f2b06d-merged.mount: Deactivated successfully.
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.614 254096 DEBUG nova.virt.libvirt.vif [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:13:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-848969978',display_name='tempest-TestGettingAddress-server-848969978',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-848969978',id=137,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF9z68jccGfyAVxEITitYiuN9vFSdA5Xa/Y4I6Func8CZqbMVn2UqMvh2Pq/mSGYMJkN1SoVPK2VCVvj1pyhy91CkHawiwsrdFCgvBZGIbNZI9aqQaje8pTP0iHBJQrkZQ==',key_name='tempest-TestGettingAddress-1190179439',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:13:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-70lki4pm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:13:22Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=3b797ee6-c82f-4c01-bb54-31de659fcad8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "address": "fa:16:3e:3b:43:89", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0e0412-08", "ovs_interfaceid": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.614 254096 DEBUG nova.network.os_vif_util [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "address": "fa:16:3e:3b:43:89", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0e0412-08", "ovs_interfaceid": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.615 254096 DEBUG nova.network.os_vif_util [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3b:43:89,bridge_name='br-int',has_traffic_filtering=True,id=bf0e0412-082f-4b0e-aabe-4e4f0de25b43,network=Network(ddff42cd-c011-4371-96b1-f2bb5093a16d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0e0412-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.615 254096 DEBUG os_vif [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3b:43:89,bridge_name='br-int',has_traffic_filtering=True,id=bf0e0412-082f-4b0e-aabe-4e4f0de25b43,network=Network(ddff42cd-c011-4371-96b1-f2bb5093a16d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0e0412-08') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.617 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.618 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf0e0412-08, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.619 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.621 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.625 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.628 254096 INFO os_vif [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3b:43:89,bridge_name='br-int',has_traffic_filtering=True,id=bf0e0412-082f-4b0e-aabe-4e4f0de25b43,network=Network(ddff42cd-c011-4371-96b1-f2bb5093a16d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0e0412-08')
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.629 254096 DEBUG nova.virt.libvirt.vif [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:13:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-848969978',display_name='tempest-TestGettingAddress-server-848969978',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-848969978',id=137,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF9z68jccGfyAVxEITitYiuN9vFSdA5Xa/Y4I6Func8CZqbMVn2UqMvh2Pq/mSGYMJkN1SoVPK2VCVvj1pyhy91CkHawiwsrdFCgvBZGIbNZI9aqQaje8pTP0iHBJQrkZQ==',key_name='tempest-TestGettingAddress-1190179439',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:13:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-70lki4pm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:13:22Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=3b797ee6-c82f-4c01-bb54-31de659fcad8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "831eaa83-55bd-4098-9037-4b628eb8d994", "address": "fa:16:3e:b0:cd:a4", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb0:cda4", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap831eaa83-55", "ovs_interfaceid": "831eaa83-55bd-4098-9037-4b628eb8d994", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.629 254096 DEBUG nova.network.os_vif_util [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "831eaa83-55bd-4098-9037-4b628eb8d994", "address": "fa:16:3e:b0:cd:a4", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb0:cda4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap831eaa83-55", "ovs_interfaceid": "831eaa83-55bd-4098-9037-4b628eb8d994", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.630 254096 DEBUG nova.network.os_vif_util [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b0:cd:a4,bridge_name='br-int',has_traffic_filtering=True,id=831eaa83-55bd-4098-9037-4b628eb8d994,network=Network(0063509c-db60-47db-9c49-72faf9b698d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap831eaa83-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.630 254096 DEBUG os_vif [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b0:cd:a4,bridge_name='br-int',has_traffic_filtering=True,id=831eaa83-55bd-4098-9037-4b628eb8d994,network=Network(0063509c-db60-47db-9c49-72faf9b698d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap831eaa83-55') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:14:30 compute-0 podman[406316]: 2025-11-25 17:14:30.630736254 +0000 UTC m=+0.107428116 container cleanup a88a0caf9afcc34c59b170fc8790dab9214efc08778e474baaa41ba4bc7b9587 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.631 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.631 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap831eaa83-55, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.632 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.634 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.634 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.636 254096 INFO os_vif [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b0:cd:a4,bridge_name='br-int',has_traffic_filtering=True,id=831eaa83-55bd-4098-9037-4b628eb8d994,network=Network(0063509c-db60-47db-9c49-72faf9b698d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap831eaa83-55')
Nov 25 17:14:30 compute-0 systemd[1]: libpod-conmon-a88a0caf9afcc34c59b170fc8790dab9214efc08778e474baaa41ba4bc7b9587.scope: Deactivated successfully.
Nov 25 17:14:30 compute-0 podman[406375]: 2025-11-25 17:14:30.703058643 +0000 UTC m=+0.047642558 container remove a88a0caf9afcc34c59b170fc8790dab9214efc08778e474baaa41ba4bc7b9587 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 17:14:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:30.709 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[37ee0cd9-3a10-43a6-900a-b1a7d783b1cf]: (4, ('Tue Nov 25 05:14:30 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d (a88a0caf9afcc34c59b170fc8790dab9214efc08778e474baaa41ba4bc7b9587)\na88a0caf9afcc34c59b170fc8790dab9214efc08778e474baaa41ba4bc7b9587\nTue Nov 25 05:14:30 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d (a88a0caf9afcc34c59b170fc8790dab9214efc08778e474baaa41ba4bc7b9587)\na88a0caf9afcc34c59b170fc8790dab9214efc08778e474baaa41ba4bc7b9587\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:14:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:30.710 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5293cfad-93b8-496e-beca-2e7f313d7fd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:14:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:30.711 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapddff42cd-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.719 254096 DEBUG nova.compute.manager [req-35ed0e8e-d66f-4d6a-9042-5f18b00c9e9e req-5dfb8f3e-85ec-4ee5-9247-e57476263589 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Received event network-vif-unplugged-831eaa83-55bd-4098-9037-4b628eb8d994 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.720 254096 DEBUG oslo_concurrency.lockutils [req-35ed0e8e-d66f-4d6a-9042-5f18b00c9e9e req-5dfb8f3e-85ec-4ee5-9247-e57476263589 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.720 254096 DEBUG oslo_concurrency.lockutils [req-35ed0e8e-d66f-4d6a-9042-5f18b00c9e9e req-5dfb8f3e-85ec-4ee5-9247-e57476263589 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.720 254096 DEBUG oslo_concurrency.lockutils [req-35ed0e8e-d66f-4d6a-9042-5f18b00c9e9e req-5dfb8f3e-85ec-4ee5-9247-e57476263589 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.720 254096 DEBUG nova.compute.manager [req-35ed0e8e-d66f-4d6a-9042-5f18b00c9e9e req-5dfb8f3e-85ec-4ee5-9247-e57476263589 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] No waiting events found dispatching network-vif-unplugged-831eaa83-55bd-4098-9037-4b628eb8d994 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.720 254096 DEBUG nova.compute.manager [req-35ed0e8e-d66f-4d6a-9042-5f18b00c9e9e req-5dfb8f3e-85ec-4ee5-9247-e57476263589 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Received event network-vif-unplugged-831eaa83-55bd-4098-9037-4b628eb8d994 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.754 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:30 compute-0 kernel: tapddff42cd-c0: left promiscuous mode
Nov 25 17:14:30 compute-0 nova_compute[254092]: 2025-11-25 17:14:30.768 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:30.771 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0d144be8-b37f-4793-9c94-1f053619e146]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:14:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:30.788 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e1064013-5c0c-4ecb-b90a-522ef44608d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:14:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:30.789 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4553f39e-116a-4eee-b78c-bfcd6e8d801a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:14:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:30.806 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f3e03a14-e9cf-475d-9071-8a4ec60b9e6e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 725816, 'reachable_time': 37407, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 406408, 'error': None, 'target': 'ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:14:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2729: 321 pgs: 321 active+clean; 152 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 23 KiB/s wr, 17 op/s
Nov 25 17:14:30 compute-0 systemd[1]: run-netns-ovnmeta\x2dddff42cd\x2dc011\x2d4371\x2d96b1\x2df2bb5093a16d.mount: Deactivated successfully.
Nov 25 17:14:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:30.811 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ddff42cd-c011-4371-96b1-f2bb5093a16d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 17:14:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:30.812 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[a5a1d07d-458b-4133-ad94-4b5d13db18b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:14:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:30.813 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 831eaa83-55bd-4098-9037-4b628eb8d994 in datapath 0063509c-db60-47db-9c49-72faf9b698d7 unbound from our chassis
Nov 25 17:14:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:30.814 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0063509c-db60-47db-9c49-72faf9b698d7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 17:14:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:30.815 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a3f92c48-e052-4133-b4fd-2943d59c05bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:14:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:30.816 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7 namespace which is not needed anymore
Nov 25 17:14:30 compute-0 neutron-haproxy-ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7[404684]: [NOTICE]   (404688) : haproxy version is 2.8.14-c23fe91
Nov 25 17:14:30 compute-0 neutron-haproxy-ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7[404684]: [NOTICE]   (404688) : path to executable is /usr/sbin/haproxy
Nov 25 17:14:30 compute-0 neutron-haproxy-ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7[404684]: [WARNING]  (404688) : Exiting Master process...
Nov 25 17:14:30 compute-0 neutron-haproxy-ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7[404684]: [ALERT]    (404688) : Current worker (404690) exited with code 143 (Terminated)
Nov 25 17:14:30 compute-0 neutron-haproxy-ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7[404684]: [WARNING]  (404688) : All workers exited. Exiting... (0)
Nov 25 17:14:30 compute-0 systemd[1]: libpod-4f26dc2efc5c6a2c6c700c9e316dd5a2a0dfaa9847305fe0c2ec7148aa1045d0.scope: Deactivated successfully.
Nov 25 17:14:30 compute-0 podman[406425]: 2025-11-25 17:14:30.965843655 +0000 UTC m=+0.047089682 container died 4f26dc2efc5c6a2c6c700c9e316dd5a2a0dfaa9847305fe0c2ec7148aa1045d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 17:14:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4f26dc2efc5c6a2c6c700c9e316dd5a2a0dfaa9847305fe0c2ec7148aa1045d0-userdata-shm.mount: Deactivated successfully.
Nov 25 17:14:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c4e541847c2aac7dd59889538a8dcfac905bc7101e6c3c35a058f1755af39f3-merged.mount: Deactivated successfully.
Nov 25 17:14:31 compute-0 podman[406425]: 2025-11-25 17:14:31.001376132 +0000 UTC m=+0.082622159 container cleanup 4f26dc2efc5c6a2c6c700c9e316dd5a2a0dfaa9847305fe0c2ec7148aa1045d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 17:14:31 compute-0 systemd[1]: libpod-conmon-4f26dc2efc5c6a2c6c700c9e316dd5a2a0dfaa9847305fe0c2ec7148aa1045d0.scope: Deactivated successfully.
Nov 25 17:14:31 compute-0 nova_compute[254092]: 2025-11-25 17:14:31.047 254096 INFO nova.virt.libvirt.driver [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Deleting instance files /var/lib/nova/instances/3b797ee6-c82f-4c01-bb54-31de659fcad8_del
Nov 25 17:14:31 compute-0 nova_compute[254092]: 2025-11-25 17:14:31.048 254096 INFO nova.virt.libvirt.driver [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Deletion of /var/lib/nova/instances/3b797ee6-c82f-4c01-bb54-31de659fcad8_del complete
Nov 25 17:14:31 compute-0 podman[406456]: 2025-11-25 17:14:31.080175057 +0000 UTC m=+0.049222970 container remove 4f26dc2efc5c6a2c6c700c9e316dd5a2a0dfaa9847305fe0c2ec7148aa1045d0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 17:14:31 compute-0 nova_compute[254092]: 2025-11-25 17:14:31.091 254096 INFO nova.compute.manager [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Took 0.75 seconds to destroy the instance on the hypervisor.
Nov 25 17:14:31 compute-0 nova_compute[254092]: 2025-11-25 17:14:31.092 254096 DEBUG oslo.service.loopingcall [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 17:14:31 compute-0 nova_compute[254092]: 2025-11-25 17:14:31.092 254096 DEBUG nova.compute.manager [-] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 17:14:31 compute-0 nova_compute[254092]: 2025-11-25 17:14:31.092 254096 DEBUG nova.network.neutron [-] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 17:14:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:31.093 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[08d3221b-77c3-46b6-a244-5a5488984789]: (4, ('Tue Nov 25 05:14:30 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7 (4f26dc2efc5c6a2c6c700c9e316dd5a2a0dfaa9847305fe0c2ec7148aa1045d0)\n4f26dc2efc5c6a2c6c700c9e316dd5a2a0dfaa9847305fe0c2ec7148aa1045d0\nTue Nov 25 05:14:31 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7 (4f26dc2efc5c6a2c6c700c9e316dd5a2a0dfaa9847305fe0c2ec7148aa1045d0)\n4f26dc2efc5c6a2c6c700c9e316dd5a2a0dfaa9847305fe0c2ec7148aa1045d0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:14:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:31.094 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2bb860e5-c64b-4b07-a6da-ecf241b8d29c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:14:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:31.094 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0063509c-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:14:31 compute-0 kernel: tap0063509c-d0: left promiscuous mode
Nov 25 17:14:31 compute-0 nova_compute[254092]: 2025-11-25 17:14:31.096 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:31.100 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[38d1796a-1168-4586-a791-d6f1e4454a0b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:14:31 compute-0 nova_compute[254092]: 2025-11-25 17:14:31.109 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:31.119 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7f58f6e4-6c10-4d7c-897d-37603e8d928b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:14:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:31.119 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c05501dd-9f61-459c-95f4-2746581bd5f9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:14:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:31.138 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7acc676a-85c4-43bc-b410-a74c6c6bbeef]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 725925, 'reachable_time': 32404, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 406472, 'error': None, 'target': 'ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:14:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:31.140 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0063509c-db60-47db-9c49-72faf9b698d7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 17:14:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:31.141 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[413d65e5-7724-4b9e-b0fa-6c60de1263f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:14:31 compute-0 systemd[1]: run-netns-ovnmeta\x2d0063509c\x2ddb60\x2d47db\x2d9c49\x2d72faf9b698d7.mount: Deactivated successfully.
Nov 25 17:14:31 compute-0 nova_compute[254092]: 2025-11-25 17:14:31.740 254096 DEBUG nova.network.neutron [req-794db1f5-2121-4d90-ad2a-9cc1ce78c4da req-10ffab50-dbeb-4a8c-878c-27908dd59927 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Updated VIF entry in instance network info cache for port bf0e0412-082f-4b0e-aabe-4e4f0de25b43. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:14:31 compute-0 nova_compute[254092]: 2025-11-25 17:14:31.741 254096 DEBUG nova.network.neutron [req-794db1f5-2121-4d90-ad2a-9cc1ce78c4da req-10ffab50-dbeb-4a8c-878c-27908dd59927 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Updating instance_info_cache with network_info: [{"id": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "address": "fa:16:3e:3b:43:89", "network": {"id": "ddff42cd-c011-4371-96b1-f2bb5093a16d", "bridge": "br-int", "label": "tempest-network-smoke--517875533", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0e0412-08", "ovs_interfaceid": "bf0e0412-082f-4b0e-aabe-4e4f0de25b43", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "831eaa83-55bd-4098-9037-4b628eb8d994", "address": "fa:16:3e:b0:cd:a4", "network": {"id": "0063509c-db60-47db-9c49-72faf9b698d7", "bridge": "br-int", "label": "tempest-network-smoke--117855039", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb0:cda4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap831eaa83-55", "ovs_interfaceid": "831eaa83-55bd-4098-9037-4b628eb8d994", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:14:31 compute-0 nova_compute[254092]: 2025-11-25 17:14:31.763 254096 DEBUG oslo_concurrency.lockutils [req-794db1f5-2121-4d90-ad2a-9cc1ce78c4da req-10ffab50-dbeb-4a8c-878c-27908dd59927 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-3b797ee6-c82f-4c01-bb54-31de659fcad8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:14:31 compute-0 ceph-mon[74985]: pgmap v2729: 321 pgs: 321 active+clean; 152 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 23 KiB/s wr, 17 op/s
Nov 25 17:14:32 compute-0 nova_compute[254092]: 2025-11-25 17:14:32.331 254096 DEBUG nova.network.neutron [-] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:14:32 compute-0 nova_compute[254092]: 2025-11-25 17:14:32.356 254096 INFO nova.compute.manager [-] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Took 1.26 seconds to deallocate network for instance.
Nov 25 17:14:32 compute-0 nova_compute[254092]: 2025-11-25 17:14:32.368 254096 DEBUG nova.compute.manager [req-3251151b-928d-4da9-9be4-d1d8b206276f req-d29c23b7-c265-49a0-991f-cbc1d5ca58e0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Received event network-vif-unplugged-bf0e0412-082f-4b0e-aabe-4e4f0de25b43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:14:32 compute-0 nova_compute[254092]: 2025-11-25 17:14:32.369 254096 DEBUG oslo_concurrency.lockutils [req-3251151b-928d-4da9-9be4-d1d8b206276f req-d29c23b7-c265-49a0-991f-cbc1d5ca58e0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:14:32 compute-0 nova_compute[254092]: 2025-11-25 17:14:32.369 254096 DEBUG oslo_concurrency.lockutils [req-3251151b-928d-4da9-9be4-d1d8b206276f req-d29c23b7-c265-49a0-991f-cbc1d5ca58e0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:14:32 compute-0 nova_compute[254092]: 2025-11-25 17:14:32.369 254096 DEBUG oslo_concurrency.lockutils [req-3251151b-928d-4da9-9be4-d1d8b206276f req-d29c23b7-c265-49a0-991f-cbc1d5ca58e0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:14:32 compute-0 nova_compute[254092]: 2025-11-25 17:14:32.369 254096 DEBUG nova.compute.manager [req-3251151b-928d-4da9-9be4-d1d8b206276f req-d29c23b7-c265-49a0-991f-cbc1d5ca58e0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] No waiting events found dispatching network-vif-unplugged-bf0e0412-082f-4b0e-aabe-4e4f0de25b43 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:14:32 compute-0 nova_compute[254092]: 2025-11-25 17:14:32.370 254096 DEBUG nova.compute.manager [req-3251151b-928d-4da9-9be4-d1d8b206276f req-d29c23b7-c265-49a0-991f-cbc1d5ca58e0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Received event network-vif-unplugged-bf0e0412-082f-4b0e-aabe-4e4f0de25b43 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 17:14:32 compute-0 nova_compute[254092]: 2025-11-25 17:14:32.370 254096 DEBUG nova.compute.manager [req-3251151b-928d-4da9-9be4-d1d8b206276f req-d29c23b7-c265-49a0-991f-cbc1d5ca58e0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Received event network-vif-plugged-bf0e0412-082f-4b0e-aabe-4e4f0de25b43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:14:32 compute-0 nova_compute[254092]: 2025-11-25 17:14:32.370 254096 DEBUG oslo_concurrency.lockutils [req-3251151b-928d-4da9-9be4-d1d8b206276f req-d29c23b7-c265-49a0-991f-cbc1d5ca58e0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:14:32 compute-0 nova_compute[254092]: 2025-11-25 17:14:32.370 254096 DEBUG oslo_concurrency.lockutils [req-3251151b-928d-4da9-9be4-d1d8b206276f req-d29c23b7-c265-49a0-991f-cbc1d5ca58e0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:14:32 compute-0 nova_compute[254092]: 2025-11-25 17:14:32.371 254096 DEBUG oslo_concurrency.lockutils [req-3251151b-928d-4da9-9be4-d1d8b206276f req-d29c23b7-c265-49a0-991f-cbc1d5ca58e0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:14:32 compute-0 nova_compute[254092]: 2025-11-25 17:14:32.371 254096 DEBUG nova.compute.manager [req-3251151b-928d-4da9-9be4-d1d8b206276f req-d29c23b7-c265-49a0-991f-cbc1d5ca58e0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] No waiting events found dispatching network-vif-plugged-bf0e0412-082f-4b0e-aabe-4e4f0de25b43 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:14:32 compute-0 nova_compute[254092]: 2025-11-25 17:14:32.371 254096 WARNING nova.compute.manager [req-3251151b-928d-4da9-9be4-d1d8b206276f req-d29c23b7-c265-49a0-991f-cbc1d5ca58e0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Received unexpected event network-vif-plugged-bf0e0412-082f-4b0e-aabe-4e4f0de25b43 for instance with vm_state active and task_state deleting.
Nov 25 17:14:32 compute-0 nova_compute[254092]: 2025-11-25 17:14:32.372 254096 DEBUG nova.compute.manager [req-3251151b-928d-4da9-9be4-d1d8b206276f req-d29c23b7-c265-49a0-991f-cbc1d5ca58e0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Received event network-vif-deleted-831eaa83-55bd-4098-9037-4b628eb8d994 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:14:32 compute-0 nova_compute[254092]: 2025-11-25 17:14:32.398 254096 DEBUG oslo_concurrency.lockutils [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:14:32 compute-0 nova_compute[254092]: 2025-11-25 17:14:32.399 254096 DEBUG oslo_concurrency.lockutils [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:14:32 compute-0 nova_compute[254092]: 2025-11-25 17:14:32.446 254096 DEBUG oslo_concurrency.processutils [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:14:32 compute-0 nova_compute[254092]: 2025-11-25 17:14:32.522 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:14:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2730: 321 pgs: 321 active+clean; 82 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 11 KiB/s wr, 39 op/s
Nov 25 17:14:32 compute-0 nova_compute[254092]: 2025-11-25 17:14:32.842 254096 DEBUG nova.compute.manager [req-01509943-65e9-44af-8664-fc80fcc17486 req-e0743a03-19c9-4e36-9def-2b8032a0785b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Received event network-vif-plugged-831eaa83-55bd-4098-9037-4b628eb8d994 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:14:32 compute-0 nova_compute[254092]: 2025-11-25 17:14:32.842 254096 DEBUG oslo_concurrency.lockutils [req-01509943-65e9-44af-8664-fc80fcc17486 req-e0743a03-19c9-4e36-9def-2b8032a0785b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:14:32 compute-0 nova_compute[254092]: 2025-11-25 17:14:32.843 254096 DEBUG oslo_concurrency.lockutils [req-01509943-65e9-44af-8664-fc80fcc17486 req-e0743a03-19c9-4e36-9def-2b8032a0785b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:14:32 compute-0 nova_compute[254092]: 2025-11-25 17:14:32.843 254096 DEBUG oslo_concurrency.lockutils [req-01509943-65e9-44af-8664-fc80fcc17486 req-e0743a03-19c9-4e36-9def-2b8032a0785b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:14:32 compute-0 nova_compute[254092]: 2025-11-25 17:14:32.843 254096 DEBUG nova.compute.manager [req-01509943-65e9-44af-8664-fc80fcc17486 req-e0743a03-19c9-4e36-9def-2b8032a0785b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] No waiting events found dispatching network-vif-plugged-831eaa83-55bd-4098-9037-4b628eb8d994 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:14:32 compute-0 nova_compute[254092]: 2025-11-25 17:14:32.843 254096 WARNING nova.compute.manager [req-01509943-65e9-44af-8664-fc80fcc17486 req-e0743a03-19c9-4e36-9def-2b8032a0785b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Received unexpected event network-vif-plugged-831eaa83-55bd-4098-9037-4b628eb8d994 for instance with vm_state deleted and task_state None.
Nov 25 17:14:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:14:32 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3875746165' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:14:32 compute-0 nova_compute[254092]: 2025-11-25 17:14:32.921 254096 DEBUG oslo_concurrency.processutils [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:14:32 compute-0 nova_compute[254092]: 2025-11-25 17:14:32.931 254096 DEBUG nova.compute.provider_tree [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:14:32 compute-0 nova_compute[254092]: 2025-11-25 17:14:32.953 254096 DEBUG nova.scheduler.client.report [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:14:32 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3875746165' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:14:32 compute-0 nova_compute[254092]: 2025-11-25 17:14:32.981 254096 DEBUG oslo_concurrency.lockutils [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:14:33 compute-0 nova_compute[254092]: 2025-11-25 17:14:33.020 254096 INFO nova.scheduler.client.report [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Deleted allocations for instance 3b797ee6-c82f-4c01-bb54-31de659fcad8
Nov 25 17:14:33 compute-0 nova_compute[254092]: 2025-11-25 17:14:33.095 254096 DEBUG oslo_concurrency.lockutils [None req-c815590c-6b61-4b8d-85ac-6908a0cfaac2 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "3b797ee6-c82f-4c01-bb54-31de659fcad8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:14:33 compute-0 ceph-mon[74985]: pgmap v2730: 321 pgs: 321 active+clean; 82 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 11 KiB/s wr, 39 op/s
Nov 25 17:14:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:14:34 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #129. Immutable memtables: 0.
Nov 25 17:14:34 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:34.070176) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 17:14:34 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 129
Nov 25 17:14:34 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090874070240, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 482, "num_deletes": 257, "total_data_size": 436337, "memory_usage": 446840, "flush_reason": "Manual Compaction"}
Nov 25 17:14:34 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #130: started
Nov 25 17:14:34 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090874076050, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 130, "file_size": 432605, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56620, "largest_seqno": 57101, "table_properties": {"data_size": 429858, "index_size": 782, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6259, "raw_average_key_size": 18, "raw_value_size": 424459, "raw_average_value_size": 1226, "num_data_blocks": 35, "num_entries": 346, "num_filter_entries": 346, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764090844, "oldest_key_time": 1764090844, "file_creation_time": 1764090874, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 130, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:14:34 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 5963 microseconds, and 3329 cpu microseconds.
Nov 25 17:14:34 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:14:34 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:34.076131) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #130: 432605 bytes OK
Nov 25 17:14:34 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:34.076169) [db/memtable_list.cc:519] [default] Level-0 commit table #130 started
Nov 25 17:14:34 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:34.077759) [db/memtable_list.cc:722] [default] Level-0 commit table #130: memtable #1 done
Nov 25 17:14:34 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:34.077788) EVENT_LOG_v1 {"time_micros": 1764090874077778, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 17:14:34 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:34.077817) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 17:14:34 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 433474, prev total WAL file size 433474, number of live WAL files 2.
Nov 25 17:14:34 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000126.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:14:34 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:34.078542) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323632' seq:72057594037927935, type:22 .. '6C6F676D0032353135' seq:0, type:0; will stop at (end)
Nov 25 17:14:34 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 17:14:34 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [130(422KB)], [128(9853KB)]
Nov 25 17:14:34 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090874078689, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [130], "files_L6": [128], "score": -1, "input_data_size": 10522882, "oldest_snapshot_seqno": -1}
Nov 25 17:14:34 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #131: 7653 keys, 10421070 bytes, temperature: kUnknown
Nov 25 17:14:34 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090874123179, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 131, "file_size": 10421070, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10370421, "index_size": 30440, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19141, "raw_key_size": 200925, "raw_average_key_size": 26, "raw_value_size": 10234063, "raw_average_value_size": 1337, "num_data_blocks": 1188, "num_entries": 7653, "num_filter_entries": 7653, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764090874, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:14:34 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:14:34 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:34.123555) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 10421070 bytes
Nov 25 17:14:34 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:34.124706) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 235.9 rd, 233.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 9.6 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(48.4) write-amplify(24.1) OK, records in: 8175, records dropped: 522 output_compression: NoCompression
Nov 25 17:14:34 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:34.124728) EVENT_LOG_v1 {"time_micros": 1764090874124717, "job": 78, "event": "compaction_finished", "compaction_time_micros": 44605, "compaction_time_cpu_micros": 23694, "output_level": 6, "num_output_files": 1, "total_output_size": 10421070, "num_input_records": 8175, "num_output_records": 7653, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 17:14:34 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000130.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:14:34 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090874124982, "job": 78, "event": "table_file_deletion", "file_number": 130}
Nov 25 17:14:34 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:14:34 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764090874127310, "job": 78, "event": "table_file_deletion", "file_number": 128}
Nov 25 17:14:34 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:34.078362) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:14:34 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:34.127383) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:14:34 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:34.127389) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:14:34 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:34.127390) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:14:34 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:34.127392) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:14:34 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:14:34.127394) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:14:34 compute-0 nova_compute[254092]: 2025-11-25 17:14:34.459 254096 DEBUG nova.compute.manager [req-b55de48c-d6d1-4152-99c5-dfd054e384f8 req-a9de94ad-fb7b-4eb3-ad48-c3173c12267a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Received event network-vif-deleted-bf0e0412-082f-4b0e-aabe-4e4f0de25b43 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:14:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2731: 321 pgs: 321 active+clean; 82 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 11 KiB/s wr, 39 op/s
Nov 25 17:14:35 compute-0 nova_compute[254092]: 2025-11-25 17:14:35.634 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:35 compute-0 nova_compute[254092]: 2025-11-25 17:14:35.756 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:36 compute-0 ceph-mon[74985]: pgmap v2731: 321 pgs: 321 active+clean; 82 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 11 KiB/s wr, 39 op/s
Nov 25 17:14:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2732: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 12 KiB/s wr, 57 op/s
Nov 25 17:14:36 compute-0 nova_compute[254092]: 2025-11-25 17:14:36.986 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:37 compute-0 nova_compute[254092]: 2025-11-25 17:14:37.078 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:38 compute-0 ceph-mon[74985]: pgmap v2732: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 12 KiB/s wr, 57 op/s
Nov 25 17:14:38 compute-0 nova_compute[254092]: 2025-11-25 17:14:38.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:14:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2733: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 12 KiB/s wr, 56 op/s
Nov 25 17:14:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:14:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:14:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:14:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:14:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:14:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:14:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:14:40 compute-0 ceph-mon[74985]: pgmap v2733: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 12 KiB/s wr, 56 op/s
Nov 25 17:14:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:14:40
Nov 25 17:14:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:14:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:14:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'vms', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', 'backups']
Nov 25 17:14:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:14:40 compute-0 nova_compute[254092]: 2025-11-25 17:14:40.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:14:40 compute-0 nova_compute[254092]: 2025-11-25 17:14:40.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:14:40 compute-0 nova_compute[254092]: 2025-11-25 17:14:40.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:14:40 compute-0 nova_compute[254092]: 2025-11-25 17:14:40.525 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:14:40 compute-0 nova_compute[254092]: 2025-11-25 17:14:40.526 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:14:40 compute-0 nova_compute[254092]: 2025-11-25 17:14:40.526 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:14:40 compute-0 nova_compute[254092]: 2025-11-25 17:14:40.526 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:14:40 compute-0 nova_compute[254092]: 2025-11-25 17:14:40.526 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:14:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:14:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:14:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:14:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:14:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:14:40 compute-0 nova_compute[254092]: 2025-11-25 17:14:40.639 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:14:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:14:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:14:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:14:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:14:40 compute-0 nova_compute[254092]: 2025-11-25 17:14:40.803 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2734: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 12 KiB/s wr, 56 op/s
Nov 25 17:14:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:14:40 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1975670560' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:14:40 compute-0 nova_compute[254092]: 2025-11-25 17:14:40.951 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:14:41 compute-0 nova_compute[254092]: 2025-11-25 17:14:41.096 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:14:41 compute-0 nova_compute[254092]: 2025-11-25 17:14:41.097 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3701MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:14:41 compute-0 nova_compute[254092]: 2025-11-25 17:14:41.097 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:14:41 compute-0 nova_compute[254092]: 2025-11-25 17:14:41.097 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:14:41 compute-0 nova_compute[254092]: 2025-11-25 17:14:41.149 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:14:41 compute-0 nova_compute[254092]: 2025-11-25 17:14:41.149 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:14:41 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1975670560' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:14:41 compute-0 nova_compute[254092]: 2025-11-25 17:14:41.175 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:14:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:14:41 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1273956299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:14:41 compute-0 nova_compute[254092]: 2025-11-25 17:14:41.622 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:14:41 compute-0 nova_compute[254092]: 2025-11-25 17:14:41.628 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:14:41 compute-0 nova_compute[254092]: 2025-11-25 17:14:41.640 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:14:41 compute-0 nova_compute[254092]: 2025-11-25 17:14:41.662 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:14:41 compute-0 nova_compute[254092]: 2025-11-25 17:14:41.662 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:14:41 compute-0 nova_compute[254092]: 2025-11-25 17:14:41.943 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090866.940885, 6157e53b-5ff5-4b55-b71a-125301c5268a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:14:41 compute-0 nova_compute[254092]: 2025-11-25 17:14:41.944 254096 INFO nova.compute.manager [-] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] VM Stopped (Lifecycle Event)
Nov 25 17:14:41 compute-0 nova_compute[254092]: 2025-11-25 17:14:41.963 254096 DEBUG nova.compute.manager [None req-05c568a2-6ab3-4e48-93e8-1470986d2b06 - - - - - -] [instance: 6157e53b-5ff5-4b55-b71a-125301c5268a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:14:42 compute-0 ceph-mon[74985]: pgmap v2734: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 12 KiB/s wr, 56 op/s
Nov 25 17:14:42 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1273956299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:14:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2735: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 1.6 KiB/s wr, 40 op/s
Nov 25 17:14:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:14:44 compute-0 ceph-mon[74985]: pgmap v2735: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 1.6 KiB/s wr, 40 op/s
Nov 25 17:14:44 compute-0 nova_compute[254092]: 2025-11-25 17:14:44.658 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:14:44 compute-0 nova_compute[254092]: 2025-11-25 17:14:44.658 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:14:44 compute-0 nova_compute[254092]: 2025-11-25 17:14:44.658 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:14:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2736: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.2 KiB/s wr, 18 op/s
Nov 25 17:14:45 compute-0 nova_compute[254092]: 2025-11-25 17:14:45.591 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090870.589668, 3b797ee6-c82f-4c01-bb54-31de659fcad8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:14:45 compute-0 nova_compute[254092]: 2025-11-25 17:14:45.592 254096 INFO nova.compute.manager [-] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] VM Stopped (Lifecycle Event)
Nov 25 17:14:45 compute-0 nova_compute[254092]: 2025-11-25 17:14:45.611 254096 DEBUG nova.compute.manager [None req-533ca3a7-0c19-45ef-ab7e-4ca2f7a13658 - - - - - -] [instance: 3b797ee6-c82f-4c01-bb54-31de659fcad8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:14:45 compute-0 nova_compute[254092]: 2025-11-25 17:14:45.642 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:45 compute-0 nova_compute[254092]: 2025-11-25 17:14:45.804 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:46 compute-0 ceph-mon[74985]: pgmap v2736: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.2 KiB/s wr, 18 op/s
Nov 25 17:14:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2737: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.2 KiB/s wr, 18 op/s
Nov 25 17:14:47 compute-0 nova_compute[254092]: 2025-11-25 17:14:47.480 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:14:47 compute-0 nova_compute[254092]: 2025-11-25 17:14:47.510 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:14:47 compute-0 nova_compute[254092]: 2025-11-25 17:14:47.510 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:14:47 compute-0 nova_compute[254092]: 2025-11-25 17:14:47.510 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:14:47 compute-0 nova_compute[254092]: 2025-11-25 17:14:47.522 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 17:14:48 compute-0 ceph-mon[74985]: pgmap v2737: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.2 KiB/s wr, 18 op/s
Nov 25 17:14:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2738: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:14:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:14:50 compute-0 sudo[406541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:14:50 compute-0 sudo[406541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:14:50 compute-0 sudo[406541]: pam_unix(sudo:session): session closed for user root
Nov 25 17:14:50 compute-0 sudo[406566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:14:50 compute-0 sudo[406566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:14:50 compute-0 sudo[406566]: pam_unix(sudo:session): session closed for user root
Nov 25 17:14:50 compute-0 sudo[406591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:14:50 compute-0 sudo[406591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:14:50 compute-0 sudo[406591]: pam_unix(sudo:session): session closed for user root
Nov 25 17:14:50 compute-0 ceph-mon[74985]: pgmap v2738: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:14:50 compute-0 sudo[406616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:14:50 compute-0 sudo[406616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:14:50 compute-0 nova_compute[254092]: 2025-11-25 17:14:50.480 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:50.480 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=53, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=52) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:14:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:50.481 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 17:14:50 compute-0 nova_compute[254092]: 2025-11-25 17:14:50.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:14:50 compute-0 nova_compute[254092]: 2025-11-25 17:14:50.644 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:50 compute-0 sudo[406616]: pam_unix(sudo:session): session closed for user root
Nov 25 17:14:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:14:50 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:14:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:14:50 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:14:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:14:50 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:14:50 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 33e4e4ec-0e5c-460c-a4be-d824ab728a46 does not exist
Nov 25 17:14:50 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 67617a35-083c-4d28-a854-7469971d7d3d does not exist
Nov 25 17:14:50 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 2726a8f7-c13d-41d8-bcc8-05d6017b9f14 does not exist
Nov 25 17:14:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:14:50 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:14:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:14:50 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:14:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:14:50 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:14:50 compute-0 sudo[406673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:14:50 compute-0 sudo[406673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:14:50 compute-0 sudo[406673]: pam_unix(sudo:session): session closed for user root
Nov 25 17:14:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2739: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:14:50 compute-0 nova_compute[254092]: 2025-11-25 17:14:50.842 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:50 compute-0 sudo[406698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:14:50 compute-0 sudo[406698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:14:50 compute-0 sudo[406698]: pam_unix(sudo:session): session closed for user root
Nov 25 17:14:50 compute-0 sudo[406723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:14:50 compute-0 sudo[406723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:14:50 compute-0 sudo[406723]: pam_unix(sudo:session): session closed for user root
Nov 25 17:14:51 compute-0 sudo[406748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:14:51 compute-0 sudo[406748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:14:51 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:14:51 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:14:51 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:14:51 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:14:51 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:14:51 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:14:51 compute-0 podman[406813]: 2025-11-25 17:14:51.311248012 +0000 UTC m=+0.041894501 container create 1a0bd4f7e443131b2c9c71866c450ae8766a4df5b13b9ac3956e39920b3792e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 17:14:51 compute-0 systemd[1]: Started libpod-conmon-1a0bd4f7e443131b2c9c71866c450ae8766a4df5b13b9ac3956e39920b3792e0.scope.
Nov 25 17:14:51 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:14:51 compute-0 podman[406813]: 2025-11-25 17:14:51.379014727 +0000 UTC m=+0.109661186 container init 1a0bd4f7e443131b2c9c71866c450ae8766a4df5b13b9ac3956e39920b3792e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 17:14:51 compute-0 podman[406813]: 2025-11-25 17:14:51.386275035 +0000 UTC m=+0.116921494 container start 1a0bd4f7e443131b2c9c71866c450ae8766a4df5b13b9ac3956e39920b3792e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 17:14:51 compute-0 podman[406813]: 2025-11-25 17:14:51.290861248 +0000 UTC m=+0.021507707 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:14:51 compute-0 podman[406813]: 2025-11-25 17:14:51.389853632 +0000 UTC m=+0.120500091 container attach 1a0bd4f7e443131b2c9c71866c450ae8766a4df5b13b9ac3956e39920b3792e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:14:51 compute-0 objective_hypatia[406830]: 167 167
Nov 25 17:14:51 compute-0 systemd[1]: libpod-1a0bd4f7e443131b2c9c71866c450ae8766a4df5b13b9ac3956e39920b3792e0.scope: Deactivated successfully.
Nov 25 17:14:51 compute-0 podman[406813]: 2025-11-25 17:14:51.394203351 +0000 UTC m=+0.124849850 container died 1a0bd4f7e443131b2c9c71866c450ae8766a4df5b13b9ac3956e39920b3792e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:14:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c04b175fe8b5489cd072a981c0f8520302e335def70614668df541bcc14a64a-merged.mount: Deactivated successfully.
Nov 25 17:14:51 compute-0 podman[406813]: 2025-11-25 17:14:51.435838504 +0000 UTC m=+0.166484963 container remove 1a0bd4f7e443131b2c9c71866c450ae8766a4df5b13b9ac3956e39920b3792e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hypatia, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 17:14:51 compute-0 systemd[1]: libpod-conmon-1a0bd4f7e443131b2c9c71866c450ae8766a4df5b13b9ac3956e39920b3792e0.scope: Deactivated successfully.
Nov 25 17:14:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:14:51.482 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:14:51 compute-0 podman[406854]: 2025-11-25 17:14:51.601690348 +0000 UTC m=+0.045859220 container create f2457b722498db3ffafe2775817111d370eb475a0e59d71c37a34bf932bb4500 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 17:14:51 compute-0 systemd[1]: Started libpod-conmon-f2457b722498db3ffafe2775817111d370eb475a0e59d71c37a34bf932bb4500.scope.
Nov 25 17:14:51 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:14:51 compute-0 podman[406854]: 2025-11-25 17:14:51.582438324 +0000 UTC m=+0.026607216 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:14:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d422450cb8bf9c5be872c9268b6b482dddad0ce480edb562434e01ac73b9abd4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:14:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d422450cb8bf9c5be872c9268b6b482dddad0ce480edb562434e01ac73b9abd4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:14:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d422450cb8bf9c5be872c9268b6b482dddad0ce480edb562434e01ac73b9abd4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:14:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:14:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d422450cb8bf9c5be872c9268b6b482dddad0ce480edb562434e01ac73b9abd4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:14:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d422450cb8bf9c5be872c9268b6b482dddad0ce480edb562434e01ac73b9abd4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:14:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:14:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:14:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:14:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 17:14:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:14:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:14:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:14:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:14:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:14:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 17:14:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:14:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:14:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:14:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:14:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:14:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:14:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:14:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:14:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:14:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:14:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:14:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:14:51 compute-0 podman[406854]: 2025-11-25 17:14:51.69839936 +0000 UTC m=+0.142568232 container init f2457b722498db3ffafe2775817111d370eb475a0e59d71c37a34bf932bb4500 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_ramanujan, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:14:51 compute-0 podman[406854]: 2025-11-25 17:14:51.70721538 +0000 UTC m=+0.151384252 container start f2457b722498db3ffafe2775817111d370eb475a0e59d71c37a34bf932bb4500 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Nov 25 17:14:51 compute-0 podman[406854]: 2025-11-25 17:14:51.710234242 +0000 UTC m=+0.154403114 container attach f2457b722498db3ffafe2775817111d370eb475a0e59d71c37a34bf932bb4500 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:14:52 compute-0 ceph-mon[74985]: pgmap v2739: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:14:52 compute-0 wonderful_ramanujan[406871]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:14:52 compute-0 wonderful_ramanujan[406871]: --> relative data size: 1.0
Nov 25 17:14:52 compute-0 wonderful_ramanujan[406871]: --> All data devices are unavailable
Nov 25 17:14:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2740: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:14:52 compute-0 systemd[1]: libpod-f2457b722498db3ffafe2775817111d370eb475a0e59d71c37a34bf932bb4500.scope: Deactivated successfully.
Nov 25 17:14:52 compute-0 podman[406854]: 2025-11-25 17:14:52.864617004 +0000 UTC m=+1.308785876 container died f2457b722498db3ffafe2775817111d370eb475a0e59d71c37a34bf932bb4500 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_ramanujan, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:14:52 compute-0 systemd[1]: libpod-f2457b722498db3ffafe2775817111d370eb475a0e59d71c37a34bf932bb4500.scope: Consumed 1.083s CPU time.
Nov 25 17:14:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-d422450cb8bf9c5be872c9268b6b482dddad0ce480edb562434e01ac73b9abd4-merged.mount: Deactivated successfully.
Nov 25 17:14:52 compute-0 podman[406854]: 2025-11-25 17:14:52.919251501 +0000 UTC m=+1.363420383 container remove f2457b722498db3ffafe2775817111d370eb475a0e59d71c37a34bf932bb4500 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_ramanujan, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:14:52 compute-0 systemd[1]: libpod-conmon-f2457b722498db3ffafe2775817111d370eb475a0e59d71c37a34bf932bb4500.scope: Deactivated successfully.
Nov 25 17:14:52 compute-0 sudo[406748]: pam_unix(sudo:session): session closed for user root
Nov 25 17:14:53 compute-0 sudo[406912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:14:53 compute-0 sudo[406912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:14:53 compute-0 sudo[406912]: pam_unix(sudo:session): session closed for user root
Nov 25 17:14:53 compute-0 sudo[406937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:14:53 compute-0 sudo[406937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:14:53 compute-0 sudo[406937]: pam_unix(sudo:session): session closed for user root
Nov 25 17:14:53 compute-0 sudo[406962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:14:53 compute-0 sudo[406962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:14:53 compute-0 sudo[406962]: pam_unix(sudo:session): session closed for user root
Nov 25 17:14:53 compute-0 sudo[406987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:14:53 compute-0 sudo[406987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:14:53 compute-0 podman[407053]: 2025-11-25 17:14:53.603874975 +0000 UTC m=+0.042344313 container create c13a3f29d3a6e896230ea090c786cd753f977f37c5091248891e8218d27ee591 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 17:14:53 compute-0 systemd[1]: Started libpod-conmon-c13a3f29d3a6e896230ea090c786cd753f977f37c5091248891e8218d27ee591.scope.
Nov 25 17:14:53 compute-0 podman[407053]: 2025-11-25 17:14:53.585412413 +0000 UTC m=+0.023881741 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:14:53 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:14:53 compute-0 podman[407053]: 2025-11-25 17:14:53.72123852 +0000 UTC m=+0.159707828 container init c13a3f29d3a6e896230ea090c786cd753f977f37c5091248891e8218d27ee591 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_buck, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:14:53 compute-0 podman[407053]: 2025-11-25 17:14:53.733016851 +0000 UTC m=+0.171486169 container start c13a3f29d3a6e896230ea090c786cd753f977f37c5091248891e8218d27ee591 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_buck, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:14:53 compute-0 podman[407053]: 2025-11-25 17:14:53.736949148 +0000 UTC m=+0.175418466 container attach c13a3f29d3a6e896230ea090c786cd753f977f37c5091248891e8218d27ee591 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_buck, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 17:14:53 compute-0 epic_buck[407069]: 167 167
Nov 25 17:14:53 compute-0 systemd[1]: libpod-c13a3f29d3a6e896230ea090c786cd753f977f37c5091248891e8218d27ee591.scope: Deactivated successfully.
Nov 25 17:14:53 compute-0 podman[407053]: 2025-11-25 17:14:53.741101291 +0000 UTC m=+0.179570579 container died c13a3f29d3a6e896230ea090c786cd753f977f37c5091248891e8218d27ee591 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_buck, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 17:14:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-32d15216be5815392119942e440544766e7ceb3456746b6342d09b52696b7639-merged.mount: Deactivated successfully.
Nov 25 17:14:53 compute-0 podman[407053]: 2025-11-25 17:14:53.778371516 +0000 UTC m=+0.216840804 container remove c13a3f29d3a6e896230ea090c786cd753f977f37c5091248891e8218d27ee591 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_buck, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:14:53 compute-0 systemd[1]: libpod-conmon-c13a3f29d3a6e896230ea090c786cd753f977f37c5091248891e8218d27ee591.scope: Deactivated successfully.
Nov 25 17:14:54 compute-0 podman[407093]: 2025-11-25 17:14:54.019025736 +0000 UTC m=+0.062564434 container create b85214c2314aed6687b6bf224da5f6e8943a807af09f9a3be49e6bf3e7362db4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_kilby, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 17:14:54 compute-0 systemd[1]: Started libpod-conmon-b85214c2314aed6687b6bf224da5f6e8943a807af09f9a3be49e6bf3e7362db4.scope.
Nov 25 17:14:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:14:54 compute-0 podman[407093]: 2025-11-25 17:14:53.994590541 +0000 UTC m=+0.038129229 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:14:54 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:14:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e17d0a64ee650acbf1ab0e28fb1cb865ec90e3e11756c7d1a6b90f057e4e785a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:14:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e17d0a64ee650acbf1ab0e28fb1cb865ec90e3e11756c7d1a6b90f057e4e785a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:14:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e17d0a64ee650acbf1ab0e28fb1cb865ec90e3e11756c7d1a6b90f057e4e785a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:14:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e17d0a64ee650acbf1ab0e28fb1cb865ec90e3e11756c7d1a6b90f057e4e785a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:14:54 compute-0 podman[407093]: 2025-11-25 17:14:54.115733638 +0000 UTC m=+0.159272336 container init b85214c2314aed6687b6bf224da5f6e8943a807af09f9a3be49e6bf3e7362db4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_kilby, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:14:54 compute-0 podman[407093]: 2025-11-25 17:14:54.126122451 +0000 UTC m=+0.169661109 container start b85214c2314aed6687b6bf224da5f6e8943a807af09f9a3be49e6bf3e7362db4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_kilby, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:14:54 compute-0 podman[407093]: 2025-11-25 17:14:54.129739749 +0000 UTC m=+0.173278437 container attach b85214c2314aed6687b6bf224da5f6e8943a807af09f9a3be49e6bf3e7362db4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_kilby, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 17:14:54 compute-0 ceph-mon[74985]: pgmap v2740: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:14:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2741: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:14:54 compute-0 serene_kilby[407109]: {
Nov 25 17:14:54 compute-0 serene_kilby[407109]:     "0": [
Nov 25 17:14:54 compute-0 serene_kilby[407109]:         {
Nov 25 17:14:54 compute-0 serene_kilby[407109]:             "devices": [
Nov 25 17:14:54 compute-0 serene_kilby[407109]:                 "/dev/loop3"
Nov 25 17:14:54 compute-0 serene_kilby[407109]:             ],
Nov 25 17:14:54 compute-0 serene_kilby[407109]:             "lv_name": "ceph_lv0",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:             "lv_size": "21470642176",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:             "name": "ceph_lv0",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:             "tags": {
Nov 25 17:14:54 compute-0 serene_kilby[407109]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:                 "ceph.cluster_name": "ceph",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:                 "ceph.crush_device_class": "",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:                 "ceph.encrypted": "0",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:                 "ceph.osd_id": "0",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:                 "ceph.type": "block",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:                 "ceph.vdo": "0"
Nov 25 17:14:54 compute-0 serene_kilby[407109]:             },
Nov 25 17:14:54 compute-0 serene_kilby[407109]:             "type": "block",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:             "vg_name": "ceph_vg0"
Nov 25 17:14:54 compute-0 serene_kilby[407109]:         }
Nov 25 17:14:54 compute-0 serene_kilby[407109]:     ],
Nov 25 17:14:54 compute-0 serene_kilby[407109]:     "1": [
Nov 25 17:14:54 compute-0 serene_kilby[407109]:         {
Nov 25 17:14:54 compute-0 serene_kilby[407109]:             "devices": [
Nov 25 17:14:54 compute-0 serene_kilby[407109]:                 "/dev/loop4"
Nov 25 17:14:54 compute-0 serene_kilby[407109]:             ],
Nov 25 17:14:54 compute-0 serene_kilby[407109]:             "lv_name": "ceph_lv1",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:             "lv_size": "21470642176",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:             "name": "ceph_lv1",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:             "tags": {
Nov 25 17:14:54 compute-0 serene_kilby[407109]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:                 "ceph.cluster_name": "ceph",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:                 "ceph.crush_device_class": "",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:                 "ceph.encrypted": "0",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:                 "ceph.osd_id": "1",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:                 "ceph.type": "block",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:                 "ceph.vdo": "0"
Nov 25 17:14:54 compute-0 serene_kilby[407109]:             },
Nov 25 17:14:54 compute-0 serene_kilby[407109]:             "type": "block",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:             "vg_name": "ceph_vg1"
Nov 25 17:14:54 compute-0 serene_kilby[407109]:         }
Nov 25 17:14:54 compute-0 serene_kilby[407109]:     ],
Nov 25 17:14:54 compute-0 serene_kilby[407109]:     "2": [
Nov 25 17:14:54 compute-0 serene_kilby[407109]:         {
Nov 25 17:14:54 compute-0 serene_kilby[407109]:             "devices": [
Nov 25 17:14:54 compute-0 serene_kilby[407109]:                 "/dev/loop5"
Nov 25 17:14:54 compute-0 serene_kilby[407109]:             ],
Nov 25 17:14:54 compute-0 serene_kilby[407109]:             "lv_name": "ceph_lv2",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:             "lv_size": "21470642176",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:             "name": "ceph_lv2",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:             "tags": {
Nov 25 17:14:54 compute-0 serene_kilby[407109]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:                 "ceph.cluster_name": "ceph",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:                 "ceph.crush_device_class": "",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:                 "ceph.encrypted": "0",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:                 "ceph.osd_id": "2",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:                 "ceph.type": "block",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:                 "ceph.vdo": "0"
Nov 25 17:14:54 compute-0 serene_kilby[407109]:             },
Nov 25 17:14:54 compute-0 serene_kilby[407109]:             "type": "block",
Nov 25 17:14:54 compute-0 serene_kilby[407109]:             "vg_name": "ceph_vg2"
Nov 25 17:14:54 compute-0 serene_kilby[407109]:         }
Nov 25 17:14:54 compute-0 serene_kilby[407109]:     ]
Nov 25 17:14:54 compute-0 serene_kilby[407109]: }
Nov 25 17:14:54 compute-0 systemd[1]: libpod-b85214c2314aed6687b6bf224da5f6e8943a807af09f9a3be49e6bf3e7362db4.scope: Deactivated successfully.
Nov 25 17:14:54 compute-0 podman[407093]: 2025-11-25 17:14:54.934845754 +0000 UTC m=+0.978384442 container died b85214c2314aed6687b6bf224da5f6e8943a807af09f9a3be49e6bf3e7362db4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 17:14:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-e17d0a64ee650acbf1ab0e28fb1cb865ec90e3e11756c7d1a6b90f057e4e785a-merged.mount: Deactivated successfully.
Nov 25 17:14:55 compute-0 podman[407093]: 2025-11-25 17:14:54.999698819 +0000 UTC m=+1.043237487 container remove b85214c2314aed6687b6bf224da5f6e8943a807af09f9a3be49e6bf3e7362db4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_kilby, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:14:55 compute-0 systemd[1]: libpod-conmon-b85214c2314aed6687b6bf224da5f6e8943a807af09f9a3be49e6bf3e7362db4.scope: Deactivated successfully.
Nov 25 17:14:55 compute-0 sudo[406987]: pam_unix(sudo:session): session closed for user root
Nov 25 17:14:55 compute-0 sudo[407131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:14:55 compute-0 sudo[407131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:14:55 compute-0 sudo[407131]: pam_unix(sudo:session): session closed for user root
Nov 25 17:14:55 compute-0 sudo[407156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:14:55 compute-0 sudo[407156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:14:55 compute-0 sudo[407156]: pam_unix(sudo:session): session closed for user root
Nov 25 17:14:55 compute-0 sudo[407181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:14:55 compute-0 sudo[407181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:14:55 compute-0 sudo[407181]: pam_unix(sudo:session): session closed for user root
Nov 25 17:14:55 compute-0 sudo[407206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:14:55 compute-0 sudo[407206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:14:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:14:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2422693041' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:14:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:14:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2422693041' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:14:55 compute-0 nova_compute[254092]: 2025-11-25 17:14:55.646 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:55 compute-0 podman[407270]: 2025-11-25 17:14:55.670520988 +0000 UTC m=+0.049525479 container create 387201c0f189ab7bf04808502af2c5bdd99df6d9ce43eebf4895e0c122165c25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_proskuriakova, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 17:14:55 compute-0 systemd[1]: Started libpod-conmon-387201c0f189ab7bf04808502af2c5bdd99df6d9ce43eebf4895e0c122165c25.scope.
Nov 25 17:14:55 compute-0 podman[407270]: 2025-11-25 17:14:55.649987079 +0000 UTC m=+0.028991530 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:14:55 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:14:55 compute-0 podman[407270]: 2025-11-25 17:14:55.784599383 +0000 UTC m=+0.163603834 container init 387201c0f189ab7bf04808502af2c5bdd99df6d9ce43eebf4895e0c122165c25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:14:55 compute-0 podman[407270]: 2025-11-25 17:14:55.791988845 +0000 UTC m=+0.170993296 container start 387201c0f189ab7bf04808502af2c5bdd99df6d9ce43eebf4895e0c122165c25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_proskuriakova, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 17:14:55 compute-0 podman[407270]: 2025-11-25 17:14:55.797461703 +0000 UTC m=+0.176466234 container attach 387201c0f189ab7bf04808502af2c5bdd99df6d9ce43eebf4895e0c122165c25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_proskuriakova, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:14:55 compute-0 kind_proskuriakova[407286]: 167 167
Nov 25 17:14:55 compute-0 systemd[1]: libpod-387201c0f189ab7bf04808502af2c5bdd99df6d9ce43eebf4895e0c122165c25.scope: Deactivated successfully.
Nov 25 17:14:55 compute-0 podman[407270]: 2025-11-25 17:14:55.798585964 +0000 UTC m=+0.177590415 container died 387201c0f189ab7bf04808502af2c5bdd99df6d9ce43eebf4895e0c122165c25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 17:14:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f2aaa79f28ed069b84a7d07ec9aa3a9955761f20ce788598614997203e54045-merged.mount: Deactivated successfully.
Nov 25 17:14:55 compute-0 podman[407270]: 2025-11-25 17:14:55.837287468 +0000 UTC m=+0.216291959 container remove 387201c0f189ab7bf04808502af2c5bdd99df6d9ce43eebf4895e0c122165c25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:14:55 compute-0 nova_compute[254092]: 2025-11-25 17:14:55.844 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:14:55 compute-0 systemd[1]: libpod-conmon-387201c0f189ab7bf04808502af2c5bdd99df6d9ce43eebf4895e0c122165c25.scope: Deactivated successfully.
Nov 25 17:14:56 compute-0 podman[407310]: 2025-11-25 17:14:56.013831013 +0000 UTC m=+0.041837040 container create e49a88d8ccab52328092457127011133034c670a828a0dcd84edd495114242d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_galileo, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 17:14:56 compute-0 systemd[1]: Started libpod-conmon-e49a88d8ccab52328092457127011133034c670a828a0dcd84edd495114242d9.scope.
Nov 25 17:14:56 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:14:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28fd2794b45f1778316871ea12d1316f6385e91868a7b2a31ff431cba69c6186/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:14:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28fd2794b45f1778316871ea12d1316f6385e91868a7b2a31ff431cba69c6186/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:14:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28fd2794b45f1778316871ea12d1316f6385e91868a7b2a31ff431cba69c6186/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:14:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28fd2794b45f1778316871ea12d1316f6385e91868a7b2a31ff431cba69c6186/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:14:56 compute-0 podman[407310]: 2025-11-25 17:14:55.994871197 +0000 UTC m=+0.022877244 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:14:56 compute-0 podman[407310]: 2025-11-25 17:14:56.098841747 +0000 UTC m=+0.126847794 container init e49a88d8ccab52328092457127011133034c670a828a0dcd84edd495114242d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_galileo, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:14:56 compute-0 podman[407310]: 2025-11-25 17:14:56.105143028 +0000 UTC m=+0.133149055 container start e49a88d8ccab52328092457127011133034c670a828a0dcd84edd495114242d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_galileo, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:14:56 compute-0 podman[407310]: 2025-11-25 17:14:56.10814395 +0000 UTC m=+0.136149977 container attach e49a88d8ccab52328092457127011133034c670a828a0dcd84edd495114242d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_galileo, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 17:14:56 compute-0 ceph-mon[74985]: pgmap v2741: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:14:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/2422693041' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:14:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/2422693041' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:14:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2742: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:14:57 compute-0 youthful_galileo[407327]: {
Nov 25 17:14:57 compute-0 youthful_galileo[407327]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:14:57 compute-0 youthful_galileo[407327]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:14:57 compute-0 youthful_galileo[407327]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:14:57 compute-0 youthful_galileo[407327]:         "osd_id": 1,
Nov 25 17:14:57 compute-0 youthful_galileo[407327]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:14:57 compute-0 youthful_galileo[407327]:         "type": "bluestore"
Nov 25 17:14:57 compute-0 youthful_galileo[407327]:     },
Nov 25 17:14:57 compute-0 youthful_galileo[407327]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:14:57 compute-0 youthful_galileo[407327]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:14:57 compute-0 youthful_galileo[407327]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:14:57 compute-0 youthful_galileo[407327]:         "osd_id": 2,
Nov 25 17:14:57 compute-0 youthful_galileo[407327]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:14:57 compute-0 youthful_galileo[407327]:         "type": "bluestore"
Nov 25 17:14:57 compute-0 youthful_galileo[407327]:     },
Nov 25 17:14:57 compute-0 youthful_galileo[407327]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:14:57 compute-0 youthful_galileo[407327]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:14:57 compute-0 youthful_galileo[407327]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:14:57 compute-0 youthful_galileo[407327]:         "osd_id": 0,
Nov 25 17:14:57 compute-0 youthful_galileo[407327]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:14:57 compute-0 youthful_galileo[407327]:         "type": "bluestore"
Nov 25 17:14:57 compute-0 youthful_galileo[407327]:     }
Nov 25 17:14:57 compute-0 youthful_galileo[407327]: }
Nov 25 17:14:57 compute-0 systemd[1]: libpod-e49a88d8ccab52328092457127011133034c670a828a0dcd84edd495114242d9.scope: Deactivated successfully.
Nov 25 17:14:57 compute-0 podman[407310]: 2025-11-25 17:14:57.056858783 +0000 UTC m=+1.084864820 container died e49a88d8ccab52328092457127011133034c670a828a0dcd84edd495114242d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_galileo, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:14:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-28fd2794b45f1778316871ea12d1316f6385e91868a7b2a31ff431cba69c6186-merged.mount: Deactivated successfully.
Nov 25 17:14:57 compute-0 podman[407310]: 2025-11-25 17:14:57.119383835 +0000 UTC m=+1.147389862 container remove e49a88d8ccab52328092457127011133034c670a828a0dcd84edd495114242d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:14:57 compute-0 systemd[1]: libpod-conmon-e49a88d8ccab52328092457127011133034c670a828a0dcd84edd495114242d9.scope: Deactivated successfully.
Nov 25 17:14:57 compute-0 sudo[407206]: pam_unix(sudo:session): session closed for user root
Nov 25 17:14:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:14:57 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:14:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:14:57 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:14:57 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev b7fae12d-ee77-4666-89e3-1883c03d7f3a does not exist
Nov 25 17:14:57 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 3eb96b36-b7f9-4955-bec8-04e0a67b9ed8 does not exist
Nov 25 17:14:57 compute-0 sudo[407373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:14:57 compute-0 sudo[407373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:14:57 compute-0 sudo[407373]: pam_unix(sudo:session): session closed for user root
Nov 25 17:14:57 compute-0 sudo[407398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:14:57 compute-0 sudo[407398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:14:57 compute-0 sudo[407398]: pam_unix(sudo:session): session closed for user root
Nov 25 17:14:58 compute-0 ceph-mon[74985]: pgmap v2742: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:14:58 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:14:58 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:14:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2743: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:14:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:15:00 compute-0 ceph-mon[74985]: pgmap v2743: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:15:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:00.259 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:72:a6 2001:db8:0:1:f816:3eff:fe7b:72a6 2001:db8::f816:3eff:fe7b:72a6'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe7b:72a6/64 2001:db8::f816:3eff:fe7b:72a6/64', 'neutron:device_id': 'ovnmeta-c57073ad-8c41-459b-9402-c367011860c7', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c57073ad-8c41-459b-9402-c367011860c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6ff1ab3c-191d-40dd-ac45-c3cedb65904e, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=fff6df53-4de9-409a-abf7-032bad835b32) old=Port_Binding(mac=['fa:16:3e:7b:72:a6 2001:db8::f816:3eff:fe7b:72a6'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe7b:72a6/64', 'neutron:device_id': 'ovnmeta-c57073ad-8c41-459b-9402-c367011860c7', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c57073ad-8c41-459b-9402-c367011860c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:15:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:00.261 163338 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port fff6df53-4de9-409a-abf7-032bad835b32 in datapath c57073ad-8c41-459b-9402-c367011860c7 updated
Nov 25 17:15:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:00.262 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c57073ad-8c41-459b-9402-c367011860c7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 17:15:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:00.264 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[efa761a4-2573-425f-aa3f-45a7a5cde85d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:15:00 compute-0 podman[407424]: 2025-11-25 17:15:00.640392834 +0000 UTC m=+0.056761605 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 17:15:00 compute-0 podman[407423]: 2025-11-25 17:15:00.642337138 +0000 UTC m=+0.059240814 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 17:15:00 compute-0 nova_compute[254092]: 2025-11-25 17:15:00.650 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:00 compute-0 podman[407425]: 2025-11-25 17:15:00.674442551 +0000 UTC m=+0.086163946 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 25 17:15:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2744: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:15:00 compute-0 nova_compute[254092]: 2025-11-25 17:15:00.904 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:02 compute-0 ceph-mon[74985]: pgmap v2744: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:15:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2745: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:15:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:15:04 compute-0 ceph-mon[74985]: pgmap v2745: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:15:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2746: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:15:05 compute-0 nova_compute[254092]: 2025-11-25 17:15:05.652 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:05 compute-0 nova_compute[254092]: 2025-11-25 17:15:05.906 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:06 compute-0 ceph-mon[74985]: pgmap v2746: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:15:06 compute-0 nova_compute[254092]: 2025-11-25 17:15:06.795 254096 DEBUG oslo_concurrency.lockutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "cab3a333-1f68-435b-b6cb-a508755c2565" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:15:06 compute-0 nova_compute[254092]: 2025-11-25 17:15:06.795 254096 DEBUG oslo_concurrency.lockutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:15:06 compute-0 nova_compute[254092]: 2025-11-25 17:15:06.808 254096 DEBUG nova.compute.manager [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 17:15:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2747: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:15:06 compute-0 nova_compute[254092]: 2025-11-25 17:15:06.879 254096 DEBUG oslo_concurrency.lockutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:15:06 compute-0 nova_compute[254092]: 2025-11-25 17:15:06.879 254096 DEBUG oslo_concurrency.lockutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:15:06 compute-0 nova_compute[254092]: 2025-11-25 17:15:06.888 254096 DEBUG nova.virt.hardware [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 17:15:06 compute-0 nova_compute[254092]: 2025-11-25 17:15:06.889 254096 INFO nova.compute.claims [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Claim successful on node compute-0.ctlplane.example.com
Nov 25 17:15:06 compute-0 nova_compute[254092]: 2025-11-25 17:15:06.984 254096 DEBUG oslo_concurrency.processutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:15:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:15:07 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4262071285' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:15:07 compute-0 nova_compute[254092]: 2025-11-25 17:15:07.413 254096 DEBUG oslo_concurrency.processutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:15:07 compute-0 nova_compute[254092]: 2025-11-25 17:15:07.422 254096 DEBUG nova.compute.provider_tree [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:15:07 compute-0 nova_compute[254092]: 2025-11-25 17:15:07.439 254096 DEBUG nova.scheduler.client.report [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:15:07 compute-0 nova_compute[254092]: 2025-11-25 17:15:07.461 254096 DEBUG oslo_concurrency.lockutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:15:07 compute-0 nova_compute[254092]: 2025-11-25 17:15:07.462 254096 DEBUG nova.compute.manager [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 17:15:07 compute-0 nova_compute[254092]: 2025-11-25 17:15:07.516 254096 DEBUG nova.compute.manager [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 17:15:07 compute-0 nova_compute[254092]: 2025-11-25 17:15:07.517 254096 DEBUG nova.network.neutron [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 17:15:07 compute-0 nova_compute[254092]: 2025-11-25 17:15:07.535 254096 INFO nova.virt.libvirt.driver [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 17:15:07 compute-0 nova_compute[254092]: 2025-11-25 17:15:07.565 254096 DEBUG nova.compute.manager [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 17:15:07 compute-0 nova_compute[254092]: 2025-11-25 17:15:07.677 254096 DEBUG nova.compute.manager [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 17:15:07 compute-0 nova_compute[254092]: 2025-11-25 17:15:07.678 254096 DEBUG nova.virt.libvirt.driver [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 17:15:07 compute-0 nova_compute[254092]: 2025-11-25 17:15:07.678 254096 INFO nova.virt.libvirt.driver [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Creating image(s)
Nov 25 17:15:07 compute-0 nova_compute[254092]: 2025-11-25 17:15:07.700 254096 DEBUG nova.storage.rbd_utils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image cab3a333-1f68-435b-b6cb-a508755c2565_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:15:07 compute-0 nova_compute[254092]: 2025-11-25 17:15:07.724 254096 DEBUG nova.storage.rbd_utils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image cab3a333-1f68-435b-b6cb-a508755c2565_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:15:07 compute-0 nova_compute[254092]: 2025-11-25 17:15:07.747 254096 DEBUG nova.storage.rbd_utils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image cab3a333-1f68-435b-b6cb-a508755c2565_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:15:07 compute-0 nova_compute[254092]: 2025-11-25 17:15:07.752 254096 DEBUG oslo_concurrency.processutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:15:07 compute-0 nova_compute[254092]: 2025-11-25 17:15:07.789 254096 DEBUG nova.policy [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a647bb22c9a54e5fa9ddfff588126693', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '67b7f8e8aac6441f992a182f8d893100', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 17:15:07 compute-0 nova_compute[254092]: 2025-11-25 17:15:07.828 254096 DEBUG oslo_concurrency.processutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:15:07 compute-0 nova_compute[254092]: 2025-11-25 17:15:07.829 254096 DEBUG oslo_concurrency.lockutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:15:07 compute-0 nova_compute[254092]: 2025-11-25 17:15:07.830 254096 DEBUG oslo_concurrency.lockutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:15:07 compute-0 nova_compute[254092]: 2025-11-25 17:15:07.830 254096 DEBUG oslo_concurrency.lockutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:15:07 compute-0 nova_compute[254092]: 2025-11-25 17:15:07.855 254096 DEBUG nova.storage.rbd_utils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image cab3a333-1f68-435b-b6cb-a508755c2565_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:15:07 compute-0 nova_compute[254092]: 2025-11-25 17:15:07.859 254096 DEBUG oslo_concurrency.processutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 cab3a333-1f68-435b-b6cb-a508755c2565_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:15:08 compute-0 nova_compute[254092]: 2025-11-25 17:15:08.183 254096 DEBUG oslo_concurrency.processutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 cab3a333-1f68-435b-b6cb-a508755c2565_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.324s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:15:08 compute-0 ceph-mon[74985]: pgmap v2747: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:15:08 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4262071285' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:15:08 compute-0 nova_compute[254092]: 2025-11-25 17:15:08.268 254096 DEBUG nova.storage.rbd_utils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] resizing rbd image cab3a333-1f68-435b-b6cb-a508755c2565_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 17:15:08 compute-0 nova_compute[254092]: 2025-11-25 17:15:08.379 254096 DEBUG nova.objects.instance [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'migration_context' on Instance uuid cab3a333-1f68-435b-b6cb-a508755c2565 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:15:08 compute-0 nova_compute[254092]: 2025-11-25 17:15:08.392 254096 DEBUG nova.virt.libvirt.driver [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 17:15:08 compute-0 nova_compute[254092]: 2025-11-25 17:15:08.392 254096 DEBUG nova.virt.libvirt.driver [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Ensure instance console log exists: /var/lib/nova/instances/cab3a333-1f68-435b-b6cb-a508755c2565/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 17:15:08 compute-0 nova_compute[254092]: 2025-11-25 17:15:08.393 254096 DEBUG oslo_concurrency.lockutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:15:08 compute-0 nova_compute[254092]: 2025-11-25 17:15:08.393 254096 DEBUG oslo_concurrency.lockutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:15:08 compute-0 nova_compute[254092]: 2025-11-25 17:15:08.393 254096 DEBUG oslo_concurrency.lockutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:15:08 compute-0 nova_compute[254092]: 2025-11-25 17:15:08.408 254096 DEBUG nova.network.neutron [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Successfully created port: bfd7bca3-f01a-4857-8c51-1085cde3ad00 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 17:15:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2748: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:15:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:15:09 compute-0 nova_compute[254092]: 2025-11-25 17:15:09.229 254096 DEBUG nova.network.neutron [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Successfully created port: cd75615e-b80b-4685-b424-2c54f7fdbde8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 17:15:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:15:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:15:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:15:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:15:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:15:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:15:10 compute-0 nova_compute[254092]: 2025-11-25 17:15:10.148 254096 DEBUG nova.network.neutron [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Successfully updated port: bfd7bca3-f01a-4857-8c51-1085cde3ad00 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:15:10 compute-0 ceph-mon[74985]: pgmap v2748: 321 pgs: 321 active+clean; 41 MiB data, 922 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:15:10 compute-0 nova_compute[254092]: 2025-11-25 17:15:10.258 254096 DEBUG nova.compute.manager [req-74fe77dc-690f-43cf-bd68-f4b1de75b490 req-0cc3ccec-6190-40e4-9f62-d1442d992bae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Received event network-changed-bfd7bca3-f01a-4857-8c51-1085cde3ad00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:15:10 compute-0 nova_compute[254092]: 2025-11-25 17:15:10.259 254096 DEBUG nova.compute.manager [req-74fe77dc-690f-43cf-bd68-f4b1de75b490 req-0cc3ccec-6190-40e4-9f62-d1442d992bae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Refreshing instance network info cache due to event network-changed-bfd7bca3-f01a-4857-8c51-1085cde3ad00. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:15:10 compute-0 nova_compute[254092]: 2025-11-25 17:15:10.259 254096 DEBUG oslo_concurrency.lockutils [req-74fe77dc-690f-43cf-bd68-f4b1de75b490 req-0cc3ccec-6190-40e4-9f62-d1442d992bae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-cab3a333-1f68-435b-b6cb-a508755c2565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:15:10 compute-0 nova_compute[254092]: 2025-11-25 17:15:10.259 254096 DEBUG oslo_concurrency.lockutils [req-74fe77dc-690f-43cf-bd68-f4b1de75b490 req-0cc3ccec-6190-40e4-9f62-d1442d992bae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-cab3a333-1f68-435b-b6cb-a508755c2565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:15:10 compute-0 nova_compute[254092]: 2025-11-25 17:15:10.260 254096 DEBUG nova.network.neutron [req-74fe77dc-690f-43cf-bd68-f4b1de75b490 req-0cc3ccec-6190-40e4-9f62-d1442d992bae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Refreshing network info cache for port bfd7bca3-f01a-4857-8c51-1085cde3ad00 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:15:10 compute-0 nova_compute[254092]: 2025-11-25 17:15:10.407 254096 DEBUG nova.network.neutron [req-74fe77dc-690f-43cf-bd68-f4b1de75b490 req-0cc3ccec-6190-40e4-9f62-d1442d992bae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:15:10 compute-0 nova_compute[254092]: 2025-11-25 17:15:10.655 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:10 compute-0 nova_compute[254092]: 2025-11-25 17:15:10.838 254096 DEBUG nova.network.neutron [req-74fe77dc-690f-43cf-bd68-f4b1de75b490 req-0cc3ccec-6190-40e4-9f62-d1442d992bae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:15:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2749: 321 pgs: 321 active+clean; 75 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.4 MiB/s wr, 25 op/s
Nov 25 17:15:10 compute-0 nova_compute[254092]: 2025-11-25 17:15:10.854 254096 DEBUG oslo_concurrency.lockutils [req-74fe77dc-690f-43cf-bd68-f4b1de75b490 req-0cc3ccec-6190-40e4-9f62-d1442d992bae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-cab3a333-1f68-435b-b6cb-a508755c2565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:15:10 compute-0 nova_compute[254092]: 2025-11-25 17:15:10.909 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:11 compute-0 nova_compute[254092]: 2025-11-25 17:15:11.164 254096 DEBUG nova.network.neutron [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Successfully updated port: cd75615e-b80b-4685-b424-2c54f7fdbde8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:15:11 compute-0 nova_compute[254092]: 2025-11-25 17:15:11.189 254096 DEBUG oslo_concurrency.lockutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "refresh_cache-cab3a333-1f68-435b-b6cb-a508755c2565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:15:11 compute-0 nova_compute[254092]: 2025-11-25 17:15:11.189 254096 DEBUG oslo_concurrency.lockutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquired lock "refresh_cache-cab3a333-1f68-435b-b6cb-a508755c2565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:15:11 compute-0 nova_compute[254092]: 2025-11-25 17:15:11.190 254096 DEBUG nova.network.neutron [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 17:15:11 compute-0 nova_compute[254092]: 2025-11-25 17:15:11.397 254096 DEBUG nova.network.neutron [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:15:12 compute-0 ceph-mon[74985]: pgmap v2749: 321 pgs: 321 active+clean; 75 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.4 MiB/s wr, 25 op/s
Nov 25 17:15:12 compute-0 nova_compute[254092]: 2025-11-25 17:15:12.348 254096 DEBUG nova.compute.manager [req-9a55bbfc-f953-47c6-bd33-c7039db4f853 req-7b02fd24-f174-4fd5-bef0-a0015c5df5cb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Received event network-changed-cd75615e-b80b-4685-b424-2c54f7fdbde8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:15:12 compute-0 nova_compute[254092]: 2025-11-25 17:15:12.348 254096 DEBUG nova.compute.manager [req-9a55bbfc-f953-47c6-bd33-c7039db4f853 req-7b02fd24-f174-4fd5-bef0-a0015c5df5cb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Refreshing instance network info cache due to event network-changed-cd75615e-b80b-4685-b424-2c54f7fdbde8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:15:12 compute-0 nova_compute[254092]: 2025-11-25 17:15:12.348 254096 DEBUG oslo_concurrency.lockutils [req-9a55bbfc-f953-47c6-bd33-c7039db4f853 req-7b02fd24-f174-4fd5-bef0-a0015c5df5cb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-cab3a333-1f68-435b-b6cb-a508755c2565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:15:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2750: 321 pgs: 321 active+clean; 88 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:15:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:13.653 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:15:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:13.654 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:15:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:13.655 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:15:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:15:14 compute-0 ceph-mon[74985]: pgmap v2750: 321 pgs: 321 active+clean; 88 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:15:14 compute-0 nova_compute[254092]: 2025-11-25 17:15:14.426 254096 DEBUG nova.network.neutron [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Updating instance_info_cache with network_info: [{"id": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "address": "fa:16:3e:f9:85:52", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfd7bca3-f0", "ovs_interfaceid": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "address": "fa:16:3e:8f:6d:38", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", 
"type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd75615e-b8", "ovs_interfaceid": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:15:14 compute-0 nova_compute[254092]: 2025-11-25 17:15:14.440 254096 DEBUG oslo_concurrency.lockutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Releasing lock "refresh_cache-cab3a333-1f68-435b-b6cb-a508755c2565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:15:14 compute-0 nova_compute[254092]: 2025-11-25 17:15:14.441 254096 DEBUG nova.compute.manager [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Instance network_info: |[{"id": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "address": "fa:16:3e:f9:85:52", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfd7bca3-f0", "ovs_interfaceid": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "address": "fa:16:3e:8f:6d:38", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", 
"version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd75615e-b8", "ovs_interfaceid": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 17:15:14 compute-0 nova_compute[254092]: 2025-11-25 17:15:14.441 254096 DEBUG oslo_concurrency.lockutils [req-9a55bbfc-f953-47c6-bd33-c7039db4f853 req-7b02fd24-f174-4fd5-bef0-a0015c5df5cb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-cab3a333-1f68-435b-b6cb-a508755c2565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:15:14 compute-0 nova_compute[254092]: 2025-11-25 17:15:14.442 254096 DEBUG nova.network.neutron [req-9a55bbfc-f953-47c6-bd33-c7039db4f853 req-7b02fd24-f174-4fd5-bef0-a0015c5df5cb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Refreshing network info cache for port cd75615e-b80b-4685-b424-2c54f7fdbde8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:15:14 compute-0 nova_compute[254092]: 2025-11-25 17:15:14.445 254096 DEBUG nova.virt.libvirt.driver [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Start _get_guest_xml network_info=[{"id": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "address": "fa:16:3e:f9:85:52", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfd7bca3-f0", "ovs_interfaceid": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "address": "fa:16:3e:8f:6d:38", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd75615e-b8", "ovs_interfaceid": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 17:15:14 compute-0 nova_compute[254092]: 2025-11-25 17:15:14.451 254096 WARNING nova.virt.libvirt.driver [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:15:14 compute-0 nova_compute[254092]: 2025-11-25 17:15:14.454 254096 DEBUG nova.virt.libvirt.host [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 17:15:14 compute-0 nova_compute[254092]: 2025-11-25 17:15:14.455 254096 DEBUG nova.virt.libvirt.host [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 17:15:14 compute-0 nova_compute[254092]: 2025-11-25 17:15:14.458 254096 DEBUG nova.virt.libvirt.host [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 17:15:14 compute-0 nova_compute[254092]: 2025-11-25 17:15:14.458 254096 DEBUG nova.virt.libvirt.host [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 17:15:14 compute-0 nova_compute[254092]: 2025-11-25 17:15:14.459 254096 DEBUG nova.virt.libvirt.driver [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 17:15:14 compute-0 nova_compute[254092]: 2025-11-25 17:15:14.459 254096 DEBUG nova.virt.hardware [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 17:15:14 compute-0 nova_compute[254092]: 2025-11-25 17:15:14.459 254096 DEBUG nova.virt.hardware [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 17:15:14 compute-0 nova_compute[254092]: 2025-11-25 17:15:14.460 254096 DEBUG nova.virt.hardware [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 17:15:14 compute-0 nova_compute[254092]: 2025-11-25 17:15:14.460 254096 DEBUG nova.virt.hardware [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 17:15:14 compute-0 nova_compute[254092]: 2025-11-25 17:15:14.460 254096 DEBUG nova.virt.hardware [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 17:15:14 compute-0 nova_compute[254092]: 2025-11-25 17:15:14.460 254096 DEBUG nova.virt.hardware [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 17:15:14 compute-0 nova_compute[254092]: 2025-11-25 17:15:14.461 254096 DEBUG nova.virt.hardware [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 17:15:14 compute-0 nova_compute[254092]: 2025-11-25 17:15:14.461 254096 DEBUG nova.virt.hardware [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 17:15:14 compute-0 nova_compute[254092]: 2025-11-25 17:15:14.461 254096 DEBUG nova.virt.hardware [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 17:15:14 compute-0 nova_compute[254092]: 2025-11-25 17:15:14.461 254096 DEBUG nova.virt.hardware [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 17:15:14 compute-0 nova_compute[254092]: 2025-11-25 17:15:14.462 254096 DEBUG nova.virt.hardware [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 17:15:14 compute-0 nova_compute[254092]: 2025-11-25 17:15:14.464 254096 DEBUG oslo_concurrency.processutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:15:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2751: 321 pgs: 321 active+clean; 88 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:15:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:15:14 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2717098985' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:15:14 compute-0 nova_compute[254092]: 2025-11-25 17:15:14.895 254096 DEBUG oslo_concurrency.processutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:15:14 compute-0 nova_compute[254092]: 2025-11-25 17:15:14.917 254096 DEBUG nova.storage.rbd_utils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image cab3a333-1f68-435b-b6cb-a508755c2565_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:15:14 compute-0 nova_compute[254092]: 2025-11-25 17:15:14.921 254096 DEBUG oslo_concurrency.processutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:15:15 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2717098985' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:15:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:15:15 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1991302879' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.371 254096 DEBUG oslo_concurrency.processutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.375 254096 DEBUG nova.virt.libvirt.vif [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:15:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1331536480',display_name='tempest-TestGettingAddress-server-1331536480',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1331536480',id=139,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC/niTBX5ZJOBoFwQAnBe0Xx9dFvOVPcybxHEgE/6u4oA8qKJolxGIRVI2nyb8g35YAG8s/hnJ8BpvMG1lA6L621k5AN67EKkdOzTdANkryTGmObdTe8FPKQ0krR2uOZXQ==',key_name='tempest-TestGettingAddress-363937330',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-i4fxutcm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:15:07Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=cab3a333-1f68-435b-b6cb-a508755c2565,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "address": "fa:16:3e:f9:85:52", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfd7bca3-f0", "ovs_interfaceid": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.376 254096 DEBUG nova.network.os_vif_util [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "address": "fa:16:3e:f9:85:52", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfd7bca3-f0", "ovs_interfaceid": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.378 254096 DEBUG nova.network.os_vif_util [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:85:52,bridge_name='br-int',has_traffic_filtering=True,id=bfd7bca3-f01a-4857-8c51-1085cde3ad00,network=Network(8c1b4538-7e1d-41aa-8e91-8a97df87ce48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfd7bca3-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.380 254096 DEBUG nova.virt.libvirt.vif [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:15:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1331536480',display_name='tempest-TestGettingAddress-server-1331536480',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1331536480',id=139,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC/niTBX5ZJOBoFwQAnBe0Xx9dFvOVPcybxHEgE/6u4oA8qKJolxGIRVI2nyb8g35YAG8s/hnJ8BpvMG1lA6L621k5AN67EKkdOzTdANkryTGmObdTe8FPKQ0krR2uOZXQ==',key_name='tempest-TestGettingAddress-363937330',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-i4fxutcm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:15:07Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=cab3a333-1f68-435b-b6cb-a508755c2565,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "address": "fa:16:3e:8f:6d:38", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd75615e-b8", "ovs_interfaceid": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.381 254096 DEBUG nova.network.os_vif_util [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "address": "fa:16:3e:8f:6d:38", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd75615e-b8", "ovs_interfaceid": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.382 254096 DEBUG nova.network.os_vif_util [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:6d:38,bridge_name='br-int',has_traffic_filtering=True,id=cd75615e-b80b-4685-b424-2c54f7fdbde8,network=Network(c57073ad-8c41-459b-9402-c367011860c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd75615e-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.384 254096 DEBUG nova.objects.instance [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'pci_devices' on Instance uuid cab3a333-1f68-435b-b6cb-a508755c2565 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.399 254096 DEBUG nova.virt.libvirt.driver [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] End _get_guest_xml xml=<domain type="kvm">
Nov 25 17:15:15 compute-0 nova_compute[254092]:   <uuid>cab3a333-1f68-435b-b6cb-a508755c2565</uuid>
Nov 25 17:15:15 compute-0 nova_compute[254092]:   <name>instance-0000008b</name>
Nov 25 17:15:15 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 17:15:15 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 17:15:15 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:15:15 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:       <nova:name>tempest-TestGettingAddress-server-1331536480</nova:name>
Nov 25 17:15:15 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 17:15:14</nova:creationTime>
Nov 25 17:15:15 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 17:15:15 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 17:15:15 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 17:15:15 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 17:15:15 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:15:15 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 17:15:15 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 17:15:15 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 17:15:15 compute-0 nova_compute[254092]:         <nova:user uuid="a647bb22c9a54e5fa9ddfff588126693">tempest-TestGettingAddress-207202764-project-member</nova:user>
Nov 25 17:15:15 compute-0 nova_compute[254092]:         <nova:project uuid="67b7f8e8aac6441f992a182f8d893100">tempest-TestGettingAddress-207202764</nova:project>
Nov 25 17:15:15 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 17:15:15 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 17:15:15 compute-0 nova_compute[254092]:         <nova:port uuid="bfd7bca3-f01a-4857-8c51-1085cde3ad00">
Nov 25 17:15:15 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 17:15:15 compute-0 nova_compute[254092]:         <nova:port uuid="cd75615e-b80b-4685-b424-2c54f7fdbde8">
Nov 25 17:15:15 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe8f:6d38" ipVersion="6"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe8f:6d38" ipVersion="6"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 17:15:15 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 17:15:15 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:15:15 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 17:15:15 compute-0 nova_compute[254092]:     <system>
Nov 25 17:15:15 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 17:15:15 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 17:15:15 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:15:15 compute-0 nova_compute[254092]:       <entry name="serial">cab3a333-1f68-435b-b6cb-a508755c2565</entry>
Nov 25 17:15:15 compute-0 nova_compute[254092]:       <entry name="uuid">cab3a333-1f68-435b-b6cb-a508755c2565</entry>
Nov 25 17:15:15 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     </system>
Nov 25 17:15:15 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:15:15 compute-0 nova_compute[254092]:   <os>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:   </os>
Nov 25 17:15:15 compute-0 nova_compute[254092]:   <features>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:   </features>
Nov 25 17:15:15 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 17:15:15 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:15:15 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 17:15:15 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:15:15 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 17:15:15 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/cab3a333-1f68-435b-b6cb-a508755c2565_disk">
Nov 25 17:15:15 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:       </source>
Nov 25 17:15:15 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:15:15 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:15:15 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 17:15:15 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/cab3a333-1f68-435b-b6cb-a508755c2565_disk.config">
Nov 25 17:15:15 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:       </source>
Nov 25 17:15:15 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:15:15 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:15:15 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 17:15:15 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:f9:85:52"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:       <target dev="tapbfd7bca3-f0"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 17:15:15 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:8f:6d:38"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:       <target dev="tapcd75615e-b8"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 17:15:15 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/cab3a333-1f68-435b-b6cb-a508755c2565/console.log" append="off"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     <video>
Nov 25 17:15:15 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     </video>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 17:15:15 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 17:15:15 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 17:15:15 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:15:15 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:15:15 compute-0 nova_compute[254092]: </domain>
Nov 25 17:15:15 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.401 254096 DEBUG nova.compute.manager [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Preparing to wait for external event network-vif-plugged-bfd7bca3-f01a-4857-8c51-1085cde3ad00 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.402 254096 DEBUG oslo_concurrency.lockutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.403 254096 DEBUG oslo_concurrency.lockutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.403 254096 DEBUG oslo_concurrency.lockutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.404 254096 DEBUG nova.compute.manager [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Preparing to wait for external event network-vif-plugged-cd75615e-b80b-4685-b424-2c54f7fdbde8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.404 254096 DEBUG oslo_concurrency.lockutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.404 254096 DEBUG oslo_concurrency.lockutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.405 254096 DEBUG oslo_concurrency.lockutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.406 254096 DEBUG nova.virt.libvirt.vif [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:15:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1331536480',display_name='tempest-TestGettingAddress-server-1331536480',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1331536480',id=139,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC/niTBX5ZJOBoFwQAnBe0Xx9dFvOVPcybxHEgE/6u4oA8qKJolxGIRVI2nyb8g35YAG8s/hnJ8BpvMG1lA6L621k5AN67EKkdOzTdANkryTGmObdTe8FPKQ0krR2uOZXQ==',key_name='tempest-TestGettingAddress-363937330',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-i4fxutcm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:15:07Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=cab3a333-1f68-435b-b6cb-a508755c2565,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "address": "fa:16:3e:f9:85:52", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfd7bca3-f0", "ovs_interfaceid": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.406 254096 DEBUG nova.network.os_vif_util [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "address": "fa:16:3e:f9:85:52", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfd7bca3-f0", "ovs_interfaceid": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.408 254096 DEBUG nova.network.os_vif_util [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:85:52,bridge_name='br-int',has_traffic_filtering=True,id=bfd7bca3-f01a-4857-8c51-1085cde3ad00,network=Network(8c1b4538-7e1d-41aa-8e91-8a97df87ce48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfd7bca3-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.409 254096 DEBUG os_vif [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:85:52,bridge_name='br-int',has_traffic_filtering=True,id=bfd7bca3-f01a-4857-8c51-1085cde3ad00,network=Network(8c1b4538-7e1d-41aa-8e91-8a97df87ce48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfd7bca3-f0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.410 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.411 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.412 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.417 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.418 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbfd7bca3-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.418 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbfd7bca3-f0, col_values=(('external_ids', {'iface-id': 'bfd7bca3-f01a-4857-8c51-1085cde3ad00', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f9:85:52', 'vm-uuid': 'cab3a333-1f68-435b-b6cb-a508755c2565'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.420 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:15 compute-0 NetworkManager[48891]: <info>  [1764090915.4221] manager: (tapbfd7bca3-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/599)
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.424 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.428 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.429 254096 INFO os_vif [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:85:52,bridge_name='br-int',has_traffic_filtering=True,id=bfd7bca3-f01a-4857-8c51-1085cde3ad00,network=Network(8c1b4538-7e1d-41aa-8e91-8a97df87ce48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfd7bca3-f0')
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.430 254096 DEBUG nova.virt.libvirt.vif [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:15:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1331536480',display_name='tempest-TestGettingAddress-server-1331536480',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1331536480',id=139,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC/niTBX5ZJOBoFwQAnBe0Xx9dFvOVPcybxHEgE/6u4oA8qKJolxGIRVI2nyb8g35YAG8s/hnJ8BpvMG1lA6L621k5AN67EKkdOzTdANkryTGmObdTe8FPKQ0krR2uOZXQ==',key_name='tempest-TestGettingAddress-363937330',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-i4fxutcm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:15:07Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=cab3a333-1f68-435b-b6cb-a508755c2565,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "address": "fa:16:3e:8f:6d:38", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd75615e-b8", "ovs_interfaceid": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.430 254096 DEBUG nova.network.os_vif_util [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "address": "fa:16:3e:8f:6d:38", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd75615e-b8", "ovs_interfaceid": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.431 254096 DEBUG nova.network.os_vif_util [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:6d:38,bridge_name='br-int',has_traffic_filtering=True,id=cd75615e-b80b-4685-b424-2c54f7fdbde8,network=Network(c57073ad-8c41-459b-9402-c367011860c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd75615e-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.432 254096 DEBUG os_vif [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:6d:38,bridge_name='br-int',has_traffic_filtering=True,id=cd75615e-b80b-4685-b424-2c54f7fdbde8,network=Network(c57073ad-8c41-459b-9402-c367011860c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd75615e-b8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.432 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.433 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.433 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.436 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.436 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcd75615e-b8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.436 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcd75615e-b8, col_values=(('external_ids', {'iface-id': 'cd75615e-b80b-4685-b424-2c54f7fdbde8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8f:6d:38', 'vm-uuid': 'cab3a333-1f68-435b-b6cb-a508755c2565'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:15:15 compute-0 NetworkManager[48891]: <info>  [1764090915.4391] manager: (tapcd75615e-b8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/600)
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.440 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.444 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.445 254096 INFO os_vif [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:6d:38,bridge_name='br-int',has_traffic_filtering=True,id=cd75615e-b80b-4685-b424-2c54f7fdbde8,network=Network(c57073ad-8c41-459b-9402-c367011860c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd75615e-b8')
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.494 254096 DEBUG nova.virt.libvirt.driver [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.494 254096 DEBUG nova.virt.libvirt.driver [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.495 254096 DEBUG nova.virt.libvirt.driver [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No VIF found with MAC fa:16:3e:f9:85:52, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.495 254096 DEBUG nova.virt.libvirt.driver [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No VIF found with MAC fa:16:3e:8f:6d:38, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.495 254096 INFO nova.virt.libvirt.driver [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Using config drive
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.520 254096 DEBUG nova.storage.rbd_utils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image cab3a333-1f68-435b-b6cb-a508755c2565_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:15:15 compute-0 nova_compute[254092]: 2025-11-25 17:15:15.910 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:16 compute-0 nova_compute[254092]: 2025-11-25 17:15:16.034 254096 INFO nova.virt.libvirt.driver [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Creating config drive at /var/lib/nova/instances/cab3a333-1f68-435b-b6cb-a508755c2565/disk.config
Nov 25 17:15:16 compute-0 nova_compute[254092]: 2025-11-25 17:15:16.041 254096 DEBUG oslo_concurrency.processutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cab3a333-1f68-435b-b6cb-a508755c2565/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4xafk6j2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:15:16 compute-0 nova_compute[254092]: 2025-11-25 17:15:16.192 254096 DEBUG oslo_concurrency.processutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cab3a333-1f68-435b-b6cb-a508755c2565/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4xafk6j2" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:15:16 compute-0 nova_compute[254092]: 2025-11-25 17:15:16.221 254096 DEBUG nova.storage.rbd_utils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image cab3a333-1f68-435b-b6cb-a508755c2565_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:15:16 compute-0 nova_compute[254092]: 2025-11-25 17:15:16.225 254096 DEBUG oslo_concurrency.processutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cab3a333-1f68-435b-b6cb-a508755c2565/disk.config cab3a333-1f68-435b-b6cb-a508755c2565_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:15:16 compute-0 ceph-mon[74985]: pgmap v2751: 321 pgs: 321 active+clean; 88 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:15:16 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1991302879' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:15:16 compute-0 nova_compute[254092]: 2025-11-25 17:15:16.396 254096 DEBUG oslo_concurrency.processutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cab3a333-1f68-435b-b6cb-a508755c2565/disk.config cab3a333-1f68-435b-b6cb-a508755c2565_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:15:16 compute-0 nova_compute[254092]: 2025-11-25 17:15:16.397 254096 INFO nova.virt.libvirt.driver [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Deleting local config drive /var/lib/nova/instances/cab3a333-1f68-435b-b6cb-a508755c2565/disk.config because it was imported into RBD.
Nov 25 17:15:16 compute-0 NetworkManager[48891]: <info>  [1764090916.4721] manager: (tapbfd7bca3-f0): new Tun device (/org/freedesktop/NetworkManager/Devices/601)
Nov 25 17:15:16 compute-0 kernel: tapbfd7bca3-f0: entered promiscuous mode
Nov 25 17:15:16 compute-0 ovn_controller[153477]: 2025-11-25T17:15:16Z|01458|binding|INFO|Claiming lport bfd7bca3-f01a-4857-8c51-1085cde3ad00 for this chassis.
Nov 25 17:15:16 compute-0 ovn_controller[153477]: 2025-11-25T17:15:16Z|01459|binding|INFO|bfd7bca3-f01a-4857-8c51-1085cde3ad00: Claiming fa:16:3e:f9:85:52 10.100.0.12
Nov 25 17:15:16 compute-0 nova_compute[254092]: 2025-11-25 17:15:16.477 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.493 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:85:52 10.100.0.12'], port_security=['fa:16:3e:f9:85:52 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'cab3a333-1f68-435b-b6cb-a508755c2565', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8c1b4538-7e1d-41aa-8e91-8a97df87ce48', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': '27a39082-c6dd-4238-9595-c012a711382c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1bd6efb3-458b-448a-a74a-463b74973f4e, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=bfd7bca3-f01a-4857-8c51-1085cde3ad00) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.495 163338 INFO neutron.agent.ovn.metadata.agent [-] Port bfd7bca3-f01a-4857-8c51-1085cde3ad00 in datapath 8c1b4538-7e1d-41aa-8e91-8a97df87ce48 bound to our chassis
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.496 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8c1b4538-7e1d-41aa-8e91-8a97df87ce48
Nov 25 17:15:16 compute-0 NetworkManager[48891]: <info>  [1764090916.4966] manager: (tapcd75615e-b8): new Tun device (/org/freedesktop/NetworkManager/Devices/602)
Nov 25 17:15:16 compute-0 systemd-udevd[407818]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:15:16 compute-0 systemd-udevd[407817]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.510 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0c0fbba9-5499-43d8-9da4-a0097bed29b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.511 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8c1b4538-71 in ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.513 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8c1b4538-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.513 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a19a3cad-e8e3-4c25-9725-4768d0346b58]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.514 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6a72a95c-c99f-46f5-b057-c5d0af92d21e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.527 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[d66751b3-3b3d-4d26-a82e-b92744031d2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:15:16 compute-0 NetworkManager[48891]: <info>  [1764090916.5308] device (tapbfd7bca3-f0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:15:16 compute-0 NetworkManager[48891]: <info>  [1764090916.5319] device (tapbfd7bca3-f0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:15:16 compute-0 systemd-machined[216343]: New machine qemu-173-instance-0000008b.
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.556 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e380d23c-768c-43e4-be81-bde302736ac2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:15:16 compute-0 systemd[1]: Started Virtual Machine qemu-173-instance-0000008b.
Nov 25 17:15:16 compute-0 nova_compute[254092]: 2025-11-25 17:15:16.565 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:16 compute-0 NetworkManager[48891]: <info>  [1764090916.5670] device (tapcd75615e-b8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:15:16 compute-0 kernel: tapcd75615e-b8: entered promiscuous mode
Nov 25 17:15:16 compute-0 NetworkManager[48891]: <info>  [1764090916.5683] device (tapcd75615e-b8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:15:16 compute-0 nova_compute[254092]: 2025-11-25 17:15:16.570 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:16 compute-0 ovn_controller[153477]: 2025-11-25T17:15:16Z|01460|binding|INFO|Claiming lport cd75615e-b80b-4685-b424-2c54f7fdbde8 for this chassis.
Nov 25 17:15:16 compute-0 ovn_controller[153477]: 2025-11-25T17:15:16Z|01461|binding|INFO|cd75615e-b80b-4685-b424-2c54f7fdbde8: Claiming fa:16:3e:8f:6d:38 2001:db8:0:1:f816:3eff:fe8f:6d38 2001:db8::f816:3eff:fe8f:6d38
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.580 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:6d:38 2001:db8:0:1:f816:3eff:fe8f:6d38 2001:db8::f816:3eff:fe8f:6d38'], port_security=['fa:16:3e:8f:6d:38 2001:db8:0:1:f816:3eff:fe8f:6d38 2001:db8::f816:3eff:fe8f:6d38'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe8f:6d38/64 2001:db8::f816:3eff:fe8f:6d38/64', 'neutron:device_id': 'cab3a333-1f68-435b-b6cb-a508755c2565', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c57073ad-8c41-459b-9402-c367011860c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': '27a39082-c6dd-4238-9595-c012a711382c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6ff1ab3c-191d-40dd-ac45-c3cedb65904e, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=cd75615e-b80b-4685-b424-2c54f7fdbde8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:15:16 compute-0 ovn_controller[153477]: 2025-11-25T17:15:16Z|01462|binding|INFO|Setting lport bfd7bca3-f01a-4857-8c51-1085cde3ad00 ovn-installed in OVS
Nov 25 17:15:16 compute-0 ovn_controller[153477]: 2025-11-25T17:15:16Z|01463|binding|INFO|Setting lport bfd7bca3-f01a-4857-8c51-1085cde3ad00 up in Southbound
Nov 25 17:15:16 compute-0 nova_compute[254092]: 2025-11-25 17:15:16.587 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:16 compute-0 ovn_controller[153477]: 2025-11-25T17:15:16Z|01464|binding|INFO|Setting lport cd75615e-b80b-4685-b424-2c54f7fdbde8 ovn-installed in OVS
Nov 25 17:15:16 compute-0 ovn_controller[153477]: 2025-11-25T17:15:16Z|01465|binding|INFO|Setting lport cd75615e-b80b-4685-b424-2c54f7fdbde8 up in Southbound
Nov 25 17:15:16 compute-0 nova_compute[254092]: 2025-11-25 17:15:16.600 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.605 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8c8098ee-273f-409f-81e3-cd6de020ae7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.609 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[aba5607f-e68a-4f9f-9e34-949682e25add]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:15:16 compute-0 NetworkManager[48891]: <info>  [1764090916.6110] manager: (tap8c1b4538-70): new Veth device (/org/freedesktop/NetworkManager/Devices/603)
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.649 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[57204a17-5468-472c-95dc-8dd7af997779]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.653 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[edfcec99-8f1a-4512-af97-5f707f25d79d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:15:16 compute-0 NetworkManager[48891]: <info>  [1764090916.6793] device (tap8c1b4538-70): carrier: link connected
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.687 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f747dfd0-2871-44fc-8af7-cf917820d62d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.708 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4af7672b-a883-4b48-afe2-06a5c07e9c5c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8c1b4538-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:12:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 427], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 737427, 'reachable_time': 34818, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 407853, 'error': None, 'target': 'ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.727 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4975a0e0-77b2-427d-967e-49207c8cf9ea]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe95:12ee'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 737427, 'tstamp': 737427}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 407854, 'error': None, 'target': 'ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.750 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[df2437a4-7d46-484e-860f-365c347ce2ef]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8c1b4538-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:12:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 427], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 737427, 'reachable_time': 34818, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 407855, 'error': None, 'target': 'ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:15:16 compute-0 nova_compute[254092]: 2025-11-25 17:15:16.777 254096 DEBUG nova.compute.manager [req-1a64c37b-f09c-4e51-914e-1bf137036eec req-40e2c029-6b02-4184-a31a-26aa33dafabb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Received event network-vif-plugged-cd75615e-b80b-4685-b424-2c54f7fdbde8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:15:16 compute-0 nova_compute[254092]: 2025-11-25 17:15:16.778 254096 DEBUG oslo_concurrency.lockutils [req-1a64c37b-f09c-4e51-914e-1bf137036eec req-40e2c029-6b02-4184-a31a-26aa33dafabb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:15:16 compute-0 nova_compute[254092]: 2025-11-25 17:15:16.779 254096 DEBUG oslo_concurrency.lockutils [req-1a64c37b-f09c-4e51-914e-1bf137036eec req-40e2c029-6b02-4184-a31a-26aa33dafabb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:15:16 compute-0 nova_compute[254092]: 2025-11-25 17:15:16.779 254096 DEBUG oslo_concurrency.lockutils [req-1a64c37b-f09c-4e51-914e-1bf137036eec req-40e2c029-6b02-4184-a31a-26aa33dafabb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:15:16 compute-0 nova_compute[254092]: 2025-11-25 17:15:16.780 254096 DEBUG nova.compute.manager [req-1a64c37b-f09c-4e51-914e-1bf137036eec req-40e2c029-6b02-4184-a31a-26aa33dafabb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Processing event network-vif-plugged-cd75615e-b80b-4685-b424-2c54f7fdbde8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 17:15:16 compute-0 nova_compute[254092]: 2025-11-25 17:15:16.784 254096 DEBUG nova.network.neutron [req-9a55bbfc-f953-47c6-bd33-c7039db4f853 req-7b02fd24-f174-4fd5-bef0-a0015c5df5cb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Updated VIF entry in instance network info cache for port cd75615e-b80b-4685-b424-2c54f7fdbde8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:15:16 compute-0 nova_compute[254092]: 2025-11-25 17:15:16.786 254096 DEBUG nova.network.neutron [req-9a55bbfc-f953-47c6-bd33-c7039db4f853 req-7b02fd24-f174-4fd5-bef0-a0015c5df5cb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Updating instance_info_cache with network_info: [{"id": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "address": "fa:16:3e:f9:85:52", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfd7bca3-f0", "ovs_interfaceid": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "address": "fa:16:3e:8f:6d:38", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd75615e-b8", "ovs_interfaceid": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:15:16 compute-0 nova_compute[254092]: 2025-11-25 17:15:16.803 254096 DEBUG oslo_concurrency.lockutils [req-9a55bbfc-f953-47c6-bd33-c7039db4f853 req-7b02fd24-f174-4fd5-bef0-a0015c5df5cb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-cab3a333-1f68-435b-b6cb-a508755c2565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:15:16 compute-0 nova_compute[254092]: 2025-11-25 17:15:16.820 254096 DEBUG nova.compute.manager [req-c74aa027-b9fe-4ad5-b29f-99991f6990d4 req-d37e9d7b-3f6a-4ee2-b603-74df70fcc2b8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Received event network-vif-plugged-bfd7bca3-f01a-4857-8c51-1085cde3ad00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:15:16 compute-0 nova_compute[254092]: 2025-11-25 17:15:16.821 254096 DEBUG oslo_concurrency.lockutils [req-c74aa027-b9fe-4ad5-b29f-99991f6990d4 req-d37e9d7b-3f6a-4ee2-b603-74df70fcc2b8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:15:16 compute-0 nova_compute[254092]: 2025-11-25 17:15:16.823 254096 DEBUG oslo_concurrency.lockutils [req-c74aa027-b9fe-4ad5-b29f-99991f6990d4 req-d37e9d7b-3f6a-4ee2-b603-74df70fcc2b8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:15:16 compute-0 nova_compute[254092]: 2025-11-25 17:15:16.823 254096 DEBUG oslo_concurrency.lockutils [req-c74aa027-b9fe-4ad5-b29f-99991f6990d4 req-d37e9d7b-3f6a-4ee2-b603-74df70fcc2b8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:15:16 compute-0 nova_compute[254092]: 2025-11-25 17:15:16.823 254096 DEBUG nova.compute.manager [req-c74aa027-b9fe-4ad5-b29f-99991f6990d4 req-d37e9d7b-3f6a-4ee2-b603-74df70fcc2b8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Processing event network-vif-plugged-bfd7bca3-f01a-4857-8c51-1085cde3ad00 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.826 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cc345c9b-d14d-48e7-a12e-7ba3f46b6297]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:15:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2752: 321 pgs: 321 active+clean; 88 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.902 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d95e6c1a-bf39-4fb0-ac41-a848f579fe6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.904 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c1b4538-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.904 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.905 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8c1b4538-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:15:16 compute-0 NetworkManager[48891]: <info>  [1764090916.9082] manager: (tap8c1b4538-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/604)
Nov 25 17:15:16 compute-0 kernel: tap8c1b4538-70: entered promiscuous mode
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.912 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8c1b4538-70, col_values=(('external_ids', {'iface-id': '805e6fea-3f01-4342-8f5e-5b75e48ec68e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:15:16 compute-0 nova_compute[254092]: 2025-11-25 17:15:16.909 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:16 compute-0 ovn_controller[153477]: 2025-11-25T17:15:16Z|01466|binding|INFO|Releasing lport 805e6fea-3f01-4342-8f5e-5b75e48ec68e from this chassis (sb_readonly=0)
Nov 25 17:15:16 compute-0 nova_compute[254092]: 2025-11-25 17:15:16.918 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.920 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8c1b4538-7e1d-41aa-8e91-8a97df87ce48.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8c1b4538-7e1d-41aa-8e91-8a97df87ce48.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 17:15:16 compute-0 nova_compute[254092]: 2025-11-25 17:15:16.930 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.929 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[80dbdba2-2e59-4fc5-8c5b-9fbd94bd3300]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.932 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]: global
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-8c1b4538-7e1d-41aa-8e91-8a97df87ce48
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/8c1b4538-7e1d-41aa-8e91-8a97df87ce48.pid.haproxy
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 8c1b4538-7e1d-41aa-8e91-8a97df87ce48
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 17:15:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:16.934 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48', 'env', 'PROCESS_TAG=haproxy-8c1b4538-7e1d-41aa-8e91-8a97df87ce48', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8c1b4538-7e1d-41aa-8e91-8a97df87ce48.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 17:15:17 compute-0 podman[407886]: 2025-11-25 17:15:17.312703453 +0000 UTC m=+0.058107563 container create 069244993be646e62f00596e4839a705e1f3419c7b43dfda7d93c1656f364f42 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 17:15:17 compute-0 systemd[1]: Started libpod-conmon-069244993be646e62f00596e4839a705e1f3419c7b43dfda7d93c1656f364f42.scope.
Nov 25 17:15:17 compute-0 podman[407886]: 2025-11-25 17:15:17.280352932 +0000 UTC m=+0.025757062 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 17:15:17 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:15:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d2a19e2c416b67cb68207b4518ff73fc69af0445413f18a70a9f65c350e3045/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 17:15:17 compute-0 podman[407886]: 2025-11-25 17:15:17.411484761 +0000 UTC m=+0.156888891 container init 069244993be646e62f00596e4839a705e1f3419c7b43dfda7d93c1656f364f42 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 25 17:15:17 compute-0 podman[407886]: 2025-11-25 17:15:17.419948081 +0000 UTC m=+0.165352191 container start 069244993be646e62f00596e4839a705e1f3419c7b43dfda7d93c1656f364f42 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 17:15:17 compute-0 neutron-haproxy-ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48[407908]: [NOTICE]   (407939) : New worker (407943) forked
Nov 25 17:15:17 compute-0 neutron-haproxy-ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48[407908]: [NOTICE]   (407939) : Loading success.
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.486 163338 INFO neutron.agent.ovn.metadata.agent [-] Port cd75615e-b80b-4685-b424-2c54f7fdbde8 in datapath c57073ad-8c41-459b-9402-c367011860c7 unbound from our chassis
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.488 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c57073ad-8c41-459b-9402-c367011860c7
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.502 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7757fdab-8638-42de-9a30-a67a05cec8da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.504 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc57073ad-81 in ovnmeta-c57073ad-8c41-459b-9402-c367011860c7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.506 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc57073ad-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.507 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[05776bd0-f779-4927-899e-21cf0d96f4fc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.508 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6f25af7e-28c7-4dbb-ad48-f79e6e398e85]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.521 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[1d759470-1cf7-4933-bafb-cdc864cebb57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.555 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[50d6d8ea-80ec-4a2c-98fd-4adce82ccfc3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:15:17 compute-0 nova_compute[254092]: 2025-11-25 17:15:17.575 254096 DEBUG nova.compute.manager [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 17:15:17 compute-0 nova_compute[254092]: 2025-11-25 17:15:17.576 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090917.5744035, cab3a333-1f68-435b-b6cb-a508755c2565 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:15:17 compute-0 nova_compute[254092]: 2025-11-25 17:15:17.577 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] VM Started (Lifecycle Event)
Nov 25 17:15:17 compute-0 nova_compute[254092]: 2025-11-25 17:15:17.582 254096 DEBUG nova.virt.libvirt.driver [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 17:15:17 compute-0 nova_compute[254092]: 2025-11-25 17:15:17.586 254096 INFO nova.virt.libvirt.driver [-] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Instance spawned successfully.
Nov 25 17:15:17 compute-0 nova_compute[254092]: 2025-11-25 17:15:17.587 254096 DEBUG nova.virt.libvirt.driver [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 17:15:17 compute-0 nova_compute[254092]: 2025-11-25 17:15:17.609 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.611 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ece2663a-6126-4fdb-81bf-f7991c8bf8a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:15:17 compute-0 nova_compute[254092]: 2025-11-25 17:15:17.615 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:15:17 compute-0 nova_compute[254092]: 2025-11-25 17:15:17.618 254096 DEBUG nova.virt.libvirt.driver [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:15:17 compute-0 nova_compute[254092]: 2025-11-25 17:15:17.618 254096 DEBUG nova.virt.libvirt.driver [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:15:17 compute-0 nova_compute[254092]: 2025-11-25 17:15:17.619 254096 DEBUG nova.virt.libvirt.driver [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:15:17 compute-0 nova_compute[254092]: 2025-11-25 17:15:17.619 254096 DEBUG nova.virt.libvirt.driver [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:15:17 compute-0 nova_compute[254092]: 2025-11-25 17:15:17.619 254096 DEBUG nova.virt.libvirt.driver [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:15:17 compute-0 nova_compute[254092]: 2025-11-25 17:15:17.620 254096 DEBUG nova.virt.libvirt.driver [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.619 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a350f9d6-f57e-4f79-82b7-0d5b8b7915e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:15:17 compute-0 NetworkManager[48891]: <info>  [1764090917.6206] manager: (tapc57073ad-80): new Veth device (/org/freedesktop/NetworkManager/Devices/605)
Nov 25 17:15:17 compute-0 systemd-udevd[407840]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:15:17 compute-0 nova_compute[254092]: 2025-11-25 17:15:17.660 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:15:17 compute-0 nova_compute[254092]: 2025-11-25 17:15:17.661 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090917.5757372, cab3a333-1f68-435b-b6cb-a508755c2565 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:15:17 compute-0 nova_compute[254092]: 2025-11-25 17:15:17.661 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] VM Paused (Lifecycle Event)
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.665 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f08ed9a0-935a-4c4c-905e-6aca0a58204f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.667 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[03c6e658-abc5-4f0b-826d-80092f62e863]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:15:17 compute-0 nova_compute[254092]: 2025-11-25 17:15:17.679 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:15:17 compute-0 nova_compute[254092]: 2025-11-25 17:15:17.683 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090917.5790198, cab3a333-1f68-435b-b6cb-a508755c2565 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:15:17 compute-0 nova_compute[254092]: 2025-11-25 17:15:17.683 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] VM Resumed (Lifecycle Event)
Nov 25 17:15:17 compute-0 NetworkManager[48891]: <info>  [1764090917.6987] device (tapc57073ad-80): carrier: link connected
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.705 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9a747c57-623d-4e32-95e3-53b9abf3fd3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:15:17 compute-0 nova_compute[254092]: 2025-11-25 17:15:17.706 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:15:17 compute-0 nova_compute[254092]: 2025-11-25 17:15:17.709 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:15:17 compute-0 nova_compute[254092]: 2025-11-25 17:15:17.719 254096 INFO nova.compute.manager [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Took 10.04 seconds to spawn the instance on the hypervisor.
Nov 25 17:15:17 compute-0 nova_compute[254092]: 2025-11-25 17:15:17.719 254096 DEBUG nova.compute.manager [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.733 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[635fa236-b40a-40c9-9b86-209e93d3d228]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc57073ad-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:72:a6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 428], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 737528, 'reachable_time': 38152, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 407969, 'error': None, 'target': 'ovnmeta-c57073ad-8c41-459b-9402-c367011860c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:15:17 compute-0 nova_compute[254092]: 2025-11-25 17:15:17.745 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.767 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[899d15f7-ed62-4c0d-8715-f2608be74c3d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7b:72a6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 737528, 'tstamp': 737528}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 407970, 'error': None, 'target': 'ovnmeta-c57073ad-8c41-459b-9402-c367011860c7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:15:17 compute-0 nova_compute[254092]: 2025-11-25 17:15:17.774 254096 INFO nova.compute.manager [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Took 10.92 seconds to build instance.
Nov 25 17:15:17 compute-0 nova_compute[254092]: 2025-11-25 17:15:17.786 254096 DEBUG oslo_concurrency.lockutils [None req-9aeced09-6108-42a1-9b93-6da7dc5566bc a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.990s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.793 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a4c2acdb-ab06-4222-b629-02586fe9f392]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc57073ad-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:72:a6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 428], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 737528, 'reachable_time': 38152, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 407971, 'error': None, 'target': 'ovnmeta-c57073ad-8c41-459b-9402-c367011860c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.837 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4043483a-25c6-48c5-96fe-d175680d9d8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.873 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f59da72a-8fe1-4ee6-bbb8-8658b82e5017]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.874 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc57073ad-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.875 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.875 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc57073ad-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:15:17 compute-0 NetworkManager[48891]: <info>  [1764090917.9097] manager: (tapc57073ad-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/606)
Nov 25 17:15:17 compute-0 kernel: tapc57073ad-80: entered promiscuous mode
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.911 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc57073ad-80, col_values=(('external_ids', {'iface-id': 'fff6df53-4de9-409a-abf7-032bad835b32'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.915 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c57073ad-8c41-459b-9402-c367011860c7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c57073ad-8c41-459b-9402-c367011860c7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 17:15:17 compute-0 ovn_controller[153477]: 2025-11-25T17:15:17Z|01467|binding|INFO|Releasing lport fff6df53-4de9-409a-abf7-032bad835b32 from this chassis (sb_readonly=0)
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.916 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[28487843-bd72-4ce0-a18b-672209105024]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.917 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]: global
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-c57073ad-8c41-459b-9402-c367011860c7
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/c57073ad-8c41-459b-9402-c367011860c7.pid.haproxy
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID c57073ad-8c41-459b-9402-c367011860c7
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 17:15:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:17.919 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c57073ad-8c41-459b-9402-c367011860c7', 'env', 'PROCESS_TAG=haproxy-c57073ad-8c41-459b-9402-c367011860c7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c57073ad-8c41-459b-9402-c367011860c7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 17:15:17 compute-0 nova_compute[254092]: 2025-11-25 17:15:17.917 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:17 compute-0 nova_compute[254092]: 2025-11-25 17:15:17.932 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:18 compute-0 ceph-mon[74985]: pgmap v2752: 321 pgs: 321 active+clean; 88 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:15:18 compute-0 podman[408002]: 2025-11-25 17:15:18.299616645 +0000 UTC m=+0.047236427 container create f960d96920a695446851694b59902b9361bd4703d15f463adc29c170acceb135 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c57073ad-8c41-459b-9402-c367011860c7, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 17:15:18 compute-0 systemd[1]: Started libpod-conmon-f960d96920a695446851694b59902b9361bd4703d15f463adc29c170acceb135.scope.
Nov 25 17:15:18 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:15:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd0817bcd731ce4c188b30a692b0672c775002a3b0968cf2b0f2382fce690850/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 17:15:18 compute-0 podman[408002]: 2025-11-25 17:15:18.277418611 +0000 UTC m=+0.025038393 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 17:15:18 compute-0 podman[408002]: 2025-11-25 17:15:18.378322067 +0000 UTC m=+0.125941869 container init f960d96920a695446851694b59902b9361bd4703d15f463adc29c170acceb135 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c57073ad-8c41-459b-9402-c367011860c7, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:15:18 compute-0 podman[408002]: 2025-11-25 17:15:18.383161479 +0000 UTC m=+0.130781261 container start f960d96920a695446851694b59902b9361bd4703d15f463adc29c170acceb135 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c57073ad-8c41-459b-9402-c367011860c7, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 25 17:15:18 compute-0 neutron-haproxy-ovnmeta-c57073ad-8c41-459b-9402-c367011860c7[408018]: [NOTICE]   (408022) : New worker (408024) forked
Nov 25 17:15:18 compute-0 neutron-haproxy-ovnmeta-c57073ad-8c41-459b-9402-c367011860c7[408018]: [NOTICE]   (408022) : Loading success.
Nov 25 17:15:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2753: 321 pgs: 321 active+clean; 88 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:15:18 compute-0 nova_compute[254092]: 2025-11-25 17:15:18.860 254096 DEBUG nova.compute.manager [req-9779b01e-9e3c-4ee7-9ca4-61e825f07331 req-ef9b7ca4-16e7-4902-b29e-0b5a068ee1c8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Received event network-vif-plugged-cd75615e-b80b-4685-b424-2c54f7fdbde8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:15:18 compute-0 nova_compute[254092]: 2025-11-25 17:15:18.860 254096 DEBUG oslo_concurrency.lockutils [req-9779b01e-9e3c-4ee7-9ca4-61e825f07331 req-ef9b7ca4-16e7-4902-b29e-0b5a068ee1c8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:15:18 compute-0 nova_compute[254092]: 2025-11-25 17:15:18.860 254096 DEBUG oslo_concurrency.lockutils [req-9779b01e-9e3c-4ee7-9ca4-61e825f07331 req-ef9b7ca4-16e7-4902-b29e-0b5a068ee1c8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:15:18 compute-0 nova_compute[254092]: 2025-11-25 17:15:18.860 254096 DEBUG oslo_concurrency.lockutils [req-9779b01e-9e3c-4ee7-9ca4-61e825f07331 req-ef9b7ca4-16e7-4902-b29e-0b5a068ee1c8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:15:18 compute-0 nova_compute[254092]: 2025-11-25 17:15:18.861 254096 DEBUG nova.compute.manager [req-9779b01e-9e3c-4ee7-9ca4-61e825f07331 req-ef9b7ca4-16e7-4902-b29e-0b5a068ee1c8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] No waiting events found dispatching network-vif-plugged-cd75615e-b80b-4685-b424-2c54f7fdbde8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:15:18 compute-0 nova_compute[254092]: 2025-11-25 17:15:18.861 254096 WARNING nova.compute.manager [req-9779b01e-9e3c-4ee7-9ca4-61e825f07331 req-ef9b7ca4-16e7-4902-b29e-0b5a068ee1c8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Received unexpected event network-vif-plugged-cd75615e-b80b-4685-b424-2c54f7fdbde8 for instance with vm_state active and task_state None.
Nov 25 17:15:18 compute-0 nova_compute[254092]: 2025-11-25 17:15:18.896 254096 DEBUG nova.compute.manager [req-124e08e2-6649-4815-97f2-23fccb15a8a0 req-5da0dbdd-af8a-4e2c-b2ba-95f8074e7e69 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Received event network-vif-plugged-bfd7bca3-f01a-4857-8c51-1085cde3ad00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:15:18 compute-0 nova_compute[254092]: 2025-11-25 17:15:18.896 254096 DEBUG oslo_concurrency.lockutils [req-124e08e2-6649-4815-97f2-23fccb15a8a0 req-5da0dbdd-af8a-4e2c-b2ba-95f8074e7e69 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:15:18 compute-0 nova_compute[254092]: 2025-11-25 17:15:18.896 254096 DEBUG oslo_concurrency.lockutils [req-124e08e2-6649-4815-97f2-23fccb15a8a0 req-5da0dbdd-af8a-4e2c-b2ba-95f8074e7e69 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:15:18 compute-0 nova_compute[254092]: 2025-11-25 17:15:18.896 254096 DEBUG oslo_concurrency.lockutils [req-124e08e2-6649-4815-97f2-23fccb15a8a0 req-5da0dbdd-af8a-4e2c-b2ba-95f8074e7e69 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:15:18 compute-0 nova_compute[254092]: 2025-11-25 17:15:18.897 254096 DEBUG nova.compute.manager [req-124e08e2-6649-4815-97f2-23fccb15a8a0 req-5da0dbdd-af8a-4e2c-b2ba-95f8074e7e69 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] No waiting events found dispatching network-vif-plugged-bfd7bca3-f01a-4857-8c51-1085cde3ad00 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:15:18 compute-0 nova_compute[254092]: 2025-11-25 17:15:18.897 254096 WARNING nova.compute.manager [req-124e08e2-6649-4815-97f2-23fccb15a8a0 req-5da0dbdd-af8a-4e2c-b2ba-95f8074e7e69 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Received unexpected event network-vif-plugged-bfd7bca3-f01a-4857-8c51-1085cde3ad00 for instance with vm_state active and task_state None.
Nov 25 17:15:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:15:20 compute-0 ceph-mon[74985]: pgmap v2753: 321 pgs: 321 active+clean; 88 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:15:20 compute-0 nova_compute[254092]: 2025-11-25 17:15:20.439 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2754: 321 pgs: 321 active+clean; 88 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 80 op/s
Nov 25 17:15:20 compute-0 nova_compute[254092]: 2025-11-25 17:15:20.912 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:22 compute-0 ceph-mon[74985]: pgmap v2754: 321 pgs: 321 active+clean; 88 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 80 op/s
Nov 25 17:15:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2755: 321 pgs: 321 active+clean; 88 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 423 KiB/s wr, 75 op/s
Nov 25 17:15:23 compute-0 NetworkManager[48891]: <info>  [1764090923.1242] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/607)
Nov 25 17:15:23 compute-0 NetworkManager[48891]: <info>  [1764090923.1254] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/608)
Nov 25 17:15:23 compute-0 ovn_controller[153477]: 2025-11-25T17:15:23Z|01468|binding|INFO|Releasing lport fff6df53-4de9-409a-abf7-032bad835b32 from this chassis (sb_readonly=0)
Nov 25 17:15:23 compute-0 ovn_controller[153477]: 2025-11-25T17:15:23Z|01469|binding|INFO|Releasing lport 805e6fea-3f01-4342-8f5e-5b75e48ec68e from this chassis (sb_readonly=0)
Nov 25 17:15:23 compute-0 nova_compute[254092]: 2025-11-25 17:15:23.146 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:23 compute-0 ovn_controller[153477]: 2025-11-25T17:15:23Z|01470|binding|INFO|Releasing lport fff6df53-4de9-409a-abf7-032bad835b32 from this chassis (sb_readonly=0)
Nov 25 17:15:23 compute-0 ovn_controller[153477]: 2025-11-25T17:15:23Z|01471|binding|INFO|Releasing lport 805e6fea-3f01-4342-8f5e-5b75e48ec68e from this chassis (sb_readonly=0)
Nov 25 17:15:23 compute-0 nova_compute[254092]: 2025-11-25 17:15:23.151 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:23 compute-0 nova_compute[254092]: 2025-11-25 17:15:23.156 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:24 compute-0 nova_compute[254092]: 2025-11-25 17:15:24.073 254096 DEBUG nova.compute.manager [req-68441519-f6dc-4bd5-a7ce-37b19f48e2a1 req-faeda6f9-e5e0-43c6-90f5-2f71409581b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Received event network-changed-bfd7bca3-f01a-4857-8c51-1085cde3ad00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:15:24 compute-0 nova_compute[254092]: 2025-11-25 17:15:24.073 254096 DEBUG nova.compute.manager [req-68441519-f6dc-4bd5-a7ce-37b19f48e2a1 req-faeda6f9-e5e0-43c6-90f5-2f71409581b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Refreshing instance network info cache due to event network-changed-bfd7bca3-f01a-4857-8c51-1085cde3ad00. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:15:24 compute-0 nova_compute[254092]: 2025-11-25 17:15:24.073 254096 DEBUG oslo_concurrency.lockutils [req-68441519-f6dc-4bd5-a7ce-37b19f48e2a1 req-faeda6f9-e5e0-43c6-90f5-2f71409581b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-cab3a333-1f68-435b-b6cb-a508755c2565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:15:24 compute-0 nova_compute[254092]: 2025-11-25 17:15:24.073 254096 DEBUG oslo_concurrency.lockutils [req-68441519-f6dc-4bd5-a7ce-37b19f48e2a1 req-faeda6f9-e5e0-43c6-90f5-2f71409581b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-cab3a333-1f68-435b-b6cb-a508755c2565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:15:24 compute-0 nova_compute[254092]: 2025-11-25 17:15:24.073 254096 DEBUG nova.network.neutron [req-68441519-f6dc-4bd5-a7ce-37b19f48e2a1 req-faeda6f9-e5e0-43c6-90f5-2f71409581b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Refreshing network info cache for port bfd7bca3-f01a-4857-8c51-1085cde3ad00 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:15:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:15:24 compute-0 ceph-mon[74985]: pgmap v2755: 321 pgs: 321 active+clean; 88 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 423 KiB/s wr, 75 op/s
Nov 25 17:15:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2756: 321 pgs: 321 active+clean; 88 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:15:25 compute-0 nova_compute[254092]: 2025-11-25 17:15:25.392 254096 DEBUG nova.network.neutron [req-68441519-f6dc-4bd5-a7ce-37b19f48e2a1 req-faeda6f9-e5e0-43c6-90f5-2f71409581b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Updated VIF entry in instance network info cache for port bfd7bca3-f01a-4857-8c51-1085cde3ad00. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:15:25 compute-0 nova_compute[254092]: 2025-11-25 17:15:25.393 254096 DEBUG nova.network.neutron [req-68441519-f6dc-4bd5-a7ce-37b19f48e2a1 req-faeda6f9-e5e0-43c6-90f5-2f71409581b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Updating instance_info_cache with network_info: [{"id": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "address": "fa:16:3e:f9:85:52", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfd7bca3-f0", "ovs_interfaceid": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "address": "fa:16:3e:8f:6d:38", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd75615e-b8", "ovs_interfaceid": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:15:25 compute-0 nova_compute[254092]: 2025-11-25 17:15:25.413 254096 DEBUG oslo_concurrency.lockutils [req-68441519-f6dc-4bd5-a7ce-37b19f48e2a1 req-faeda6f9-e5e0-43c6-90f5-2f71409581b1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-cab3a333-1f68-435b-b6cb-a508755c2565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:15:25 compute-0 nova_compute[254092]: 2025-11-25 17:15:25.441 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:25 compute-0 nova_compute[254092]: 2025-11-25 17:15:25.915 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:26 compute-0 ceph-mon[74985]: pgmap v2756: 321 pgs: 321 active+clean; 88 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:15:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2757: 321 pgs: 321 active+clean; 88 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:15:27 compute-0 ceph-mon[74985]: pgmap v2757: 321 pgs: 321 active+clean; 88 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:15:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2758: 321 pgs: 321 active+clean; 88 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:15:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:15:29 compute-0 ovn_controller[153477]: 2025-11-25T17:15:29Z|00174|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f9:85:52 10.100.0.12
Nov 25 17:15:29 compute-0 ovn_controller[153477]: 2025-11-25T17:15:29Z|00175|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f9:85:52 10.100.0.12
Nov 25 17:15:29 compute-0 ceph-mon[74985]: pgmap v2758: 321 pgs: 321 active+clean; 88 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:15:30 compute-0 nova_compute[254092]: 2025-11-25 17:15:30.443 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2759: 321 pgs: 321 active+clean; 105 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Nov 25 17:15:30 compute-0 nova_compute[254092]: 2025-11-25 17:15:30.916 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:31 compute-0 sshd-session[408036]: Received disconnect from 80.94.93.119 port 58455:11:  [preauth]
Nov 25 17:15:31 compute-0 sshd-session[408036]: Disconnected from authenticating user root 80.94.93.119 port 58455 [preauth]
Nov 25 17:15:31 compute-0 podman[408038]: 2025-11-25 17:15:31.645476479 +0000 UTC m=+0.061933507 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd)
Nov 25 17:15:31 compute-0 podman[408039]: 2025-11-25 17:15:31.665427922 +0000 UTC m=+0.081938551 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 25 17:15:31 compute-0 podman[408040]: 2025-11-25 17:15:31.708611288 +0000 UTC m=+0.119606737 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 17:15:31 compute-0 ceph-mon[74985]: pgmap v2759: 321 pgs: 321 active+clean; 105 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Nov 25 17:15:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2760: 321 pgs: 321 active+clean; 112 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 817 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Nov 25 17:15:33 compute-0 ceph-mon[74985]: pgmap v2760: 321 pgs: 321 active+clean; 112 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 817 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Nov 25 17:15:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:15:34 compute-0 nova_compute[254092]: 2025-11-25 17:15:34.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:15:34 compute-0 sshd-session[408100]: Connection closed by authenticating user root 171.244.51.45 port 55464 [preauth]
Nov 25 17:15:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2761: 321 pgs: 321 active+clean; 112 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 261 KiB/s rd, 2.1 MiB/s wr, 49 op/s
Nov 25 17:15:35 compute-0 nova_compute[254092]: 2025-11-25 17:15:35.446 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:35 compute-0 nova_compute[254092]: 2025-11-25 17:15:35.920 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:35 compute-0 ceph-mon[74985]: pgmap v2761: 321 pgs: 321 active+clean; 112 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 261 KiB/s rd, 2.1 MiB/s wr, 49 op/s
Nov 25 17:15:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2762: 321 pgs: 321 active+clean; 121 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:15:37 compute-0 ceph-mon[74985]: pgmap v2762: 321 pgs: 321 active+clean; 121 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:15:38 compute-0 nova_compute[254092]: 2025-11-25 17:15:38.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:15:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2763: 321 pgs: 321 active+clean; 121 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:15:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:15:40 compute-0 ceph-mon[74985]: pgmap v2763: 321 pgs: 321 active+clean; 121 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:15:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:15:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:15:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:15:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:15:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:15:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:15:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:15:40
Nov 25 17:15:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:15:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:15:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'default.rgw.log', 'default.rgw.control', 'images', 'vms']
Nov 25 17:15:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:15:40 compute-0 nova_compute[254092]: 2025-11-25 17:15:40.490 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:40 compute-0 nova_compute[254092]: 2025-11-25 17:15:40.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:15:40 compute-0 nova_compute[254092]: 2025-11-25 17:15:40.516 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:15:40 compute-0 nova_compute[254092]: 2025-11-25 17:15:40.517 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:15:40 compute-0 nova_compute[254092]: 2025-11-25 17:15:40.517 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:15:40 compute-0 nova_compute[254092]: 2025-11-25 17:15:40.517 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:15:40 compute-0 nova_compute[254092]: 2025-11-25 17:15:40.518 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:15:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:15:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:15:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:15:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:15:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:15:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:15:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:15:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:15:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:15:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:15:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2764: 321 pgs: 321 active+clean; 121 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:15:40 compute-0 nova_compute[254092]: 2025-11-25 17:15:40.921 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:15:40 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1793098788' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:15:40 compute-0 nova_compute[254092]: 2025-11-25 17:15:40.959 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:15:41 compute-0 nova_compute[254092]: 2025-11-25 17:15:41.033 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:15:41 compute-0 nova_compute[254092]: 2025-11-25 17:15:41.033 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:15:41 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1793098788' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:15:41 compute-0 nova_compute[254092]: 2025-11-25 17:15:41.213 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:15:41 compute-0 nova_compute[254092]: 2025-11-25 17:15:41.215 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3461MB free_disk=59.942752838134766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:15:41 compute-0 nova_compute[254092]: 2025-11-25 17:15:41.215 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:15:41 compute-0 nova_compute[254092]: 2025-11-25 17:15:41.215 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:15:41 compute-0 nova_compute[254092]: 2025-11-25 17:15:41.288 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance cab3a333-1f68-435b-b6cb-a508755c2565 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 17:15:41 compute-0 nova_compute[254092]: 2025-11-25 17:15:41.288 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:15:41 compute-0 nova_compute[254092]: 2025-11-25 17:15:41.289 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:15:41 compute-0 nova_compute[254092]: 2025-11-25 17:15:41.371 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:15:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:15:41 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/498552321' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:15:41 compute-0 nova_compute[254092]: 2025-11-25 17:15:41.857 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:15:41 compute-0 nova_compute[254092]: 2025-11-25 17:15:41.862 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:15:41 compute-0 nova_compute[254092]: 2025-11-25 17:15:41.892 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:15:41 compute-0 nova_compute[254092]: 2025-11-25 17:15:41.925 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:15:41 compute-0 nova_compute[254092]: 2025-11-25 17:15:41.925 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:15:42 compute-0 ceph-mon[74985]: pgmap v2764: 321 pgs: 321 active+clean; 121 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:15:42 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/498552321' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:15:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2765: 321 pgs: 321 active+clean; 121 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 133 KiB/s rd, 394 KiB/s wr, 30 op/s
Nov 25 17:15:42 compute-0 nova_compute[254092]: 2025-11-25 17:15:42.927 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:15:42 compute-0 nova_compute[254092]: 2025-11-25 17:15:42.928 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:15:42 compute-0 nova_compute[254092]: 2025-11-25 17:15:42.948 254096 DEBUG oslo_concurrency.lockutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:15:42 compute-0 nova_compute[254092]: 2025-11-25 17:15:42.949 254096 DEBUG oslo_concurrency.lockutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:15:42 compute-0 nova_compute[254092]: 2025-11-25 17:15:42.969 254096 DEBUG nova.compute.manager [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 17:15:43 compute-0 nova_compute[254092]: 2025-11-25 17:15:43.119 254096 DEBUG oslo_concurrency.lockutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:15:43 compute-0 nova_compute[254092]: 2025-11-25 17:15:43.120 254096 DEBUG oslo_concurrency.lockutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:15:43 compute-0 nova_compute[254092]: 2025-11-25 17:15:43.130 254096 DEBUG nova.virt.hardware [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 17:15:43 compute-0 nova_compute[254092]: 2025-11-25 17:15:43.131 254096 INFO nova.compute.claims [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Claim successful on node compute-0.ctlplane.example.com
Nov 25 17:15:43 compute-0 nova_compute[254092]: 2025-11-25 17:15:43.251 254096 DEBUG nova.scheduler.client.report [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Refreshing inventories for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 17:15:43 compute-0 nova_compute[254092]: 2025-11-25 17:15:43.269 254096 DEBUG nova.scheduler.client.report [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Updating ProviderTree inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 17:15:43 compute-0 nova_compute[254092]: 2025-11-25 17:15:43.271 254096 DEBUG nova.compute.provider_tree [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 17:15:43 compute-0 nova_compute[254092]: 2025-11-25 17:15:43.285 254096 DEBUG nova.scheduler.client.report [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Refreshing aggregate associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 17:15:43 compute-0 nova_compute[254092]: 2025-11-25 17:15:43.313 254096 DEBUG nova.scheduler.client.report [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Refreshing trait associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 17:15:43 compute-0 nova_compute[254092]: 2025-11-25 17:15:43.361 254096 DEBUG oslo_concurrency.processutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:15:43 compute-0 nova_compute[254092]: 2025-11-25 17:15:43.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:15:43 compute-0 nova_compute[254092]: 2025-11-25 17:15:43.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:15:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:15:43 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2521690084' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:15:43 compute-0 nova_compute[254092]: 2025-11-25 17:15:43.816 254096 DEBUG oslo_concurrency.processutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:15:43 compute-0 nova_compute[254092]: 2025-11-25 17:15:43.824 254096 DEBUG nova.compute.provider_tree [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:15:43 compute-0 nova_compute[254092]: 2025-11-25 17:15:43.878 254096 DEBUG nova.scheduler.client.report [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:15:43 compute-0 nova_compute[254092]: 2025-11-25 17:15:43.899 254096 DEBUG oslo_concurrency.lockutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:15:43 compute-0 nova_compute[254092]: 2025-11-25 17:15:43.900 254096 DEBUG nova.compute.manager [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 17:15:43 compute-0 nova_compute[254092]: 2025-11-25 17:15:43.945 254096 DEBUG nova.compute.manager [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 17:15:43 compute-0 nova_compute[254092]: 2025-11-25 17:15:43.946 254096 DEBUG nova.network.neutron [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 17:15:43 compute-0 nova_compute[254092]: 2025-11-25 17:15:43.984 254096 INFO nova.virt.libvirt.driver [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 17:15:44 compute-0 nova_compute[254092]: 2025-11-25 17:15:44.009 254096 DEBUG nova.compute.manager [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 17:15:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:15:44 compute-0 nova_compute[254092]: 2025-11-25 17:15:44.117 254096 DEBUG nova.compute.manager [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 17:15:44 compute-0 nova_compute[254092]: 2025-11-25 17:15:44.118 254096 DEBUG nova.virt.libvirt.driver [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 17:15:44 compute-0 nova_compute[254092]: 2025-11-25 17:15:44.119 254096 INFO nova.virt.libvirt.driver [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Creating image(s)
Nov 25 17:15:44 compute-0 nova_compute[254092]: 2025-11-25 17:15:44.143 254096 DEBUG nova.storage.rbd_utils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image f0fce250-5e4a-4063-a0c9-a2285f68c22e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:15:44 compute-0 nova_compute[254092]: 2025-11-25 17:15:44.175 254096 DEBUG nova.storage.rbd_utils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image f0fce250-5e4a-4063-a0c9-a2285f68c22e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:15:44 compute-0 ceph-mon[74985]: pgmap v2765: 321 pgs: 321 active+clean; 121 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 133 KiB/s rd, 394 KiB/s wr, 30 op/s
Nov 25 17:15:44 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2521690084' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:15:44 compute-0 nova_compute[254092]: 2025-11-25 17:15:44.217 254096 DEBUG nova.storage.rbd_utils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image f0fce250-5e4a-4063-a0c9-a2285f68c22e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:15:44 compute-0 nova_compute[254092]: 2025-11-25 17:15:44.221 254096 DEBUG oslo_concurrency.processutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:15:44 compute-0 nova_compute[254092]: 2025-11-25 17:15:44.259 254096 DEBUG nova.policy [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a647bb22c9a54e5fa9ddfff588126693', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '67b7f8e8aac6441f992a182f8d893100', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 17:15:44 compute-0 nova_compute[254092]: 2025-11-25 17:15:44.294 254096 DEBUG oslo_concurrency.processutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:15:44 compute-0 nova_compute[254092]: 2025-11-25 17:15:44.295 254096 DEBUG oslo_concurrency.lockutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:15:44 compute-0 nova_compute[254092]: 2025-11-25 17:15:44.296 254096 DEBUG oslo_concurrency.lockutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:15:44 compute-0 nova_compute[254092]: 2025-11-25 17:15:44.296 254096 DEBUG oslo_concurrency.lockutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:15:44 compute-0 nova_compute[254092]: 2025-11-25 17:15:44.315 254096 DEBUG nova.storage.rbd_utils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image f0fce250-5e4a-4063-a0c9-a2285f68c22e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:15:44 compute-0 nova_compute[254092]: 2025-11-25 17:15:44.319 254096 DEBUG oslo_concurrency.processutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 f0fce250-5e4a-4063-a0c9-a2285f68c22e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:15:44 compute-0 nova_compute[254092]: 2025-11-25 17:15:44.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:15:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2766: 321 pgs: 321 active+clean; 121 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 28 KiB/s wr, 14 op/s
Nov 25 17:15:45 compute-0 nova_compute[254092]: 2025-11-25 17:15:45.172 254096 DEBUG oslo_concurrency.processutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 f0fce250-5e4a-4063-a0c9-a2285f68c22e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.853s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:15:45 compute-0 nova_compute[254092]: 2025-11-25 17:15:45.273 254096 DEBUG nova.storage.rbd_utils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] resizing rbd image f0fce250-5e4a-4063-a0c9-a2285f68c22e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 17:15:45 compute-0 nova_compute[254092]: 2025-11-25 17:15:45.492 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:45 compute-0 nova_compute[254092]: 2025-11-25 17:15:45.509 254096 DEBUG nova.network.neutron [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Successfully created port: d4fd6164-a382-44de-8709-0a3941640a9d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 17:15:45 compute-0 nova_compute[254092]: 2025-11-25 17:15:45.636 254096 DEBUG nova.objects.instance [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'migration_context' on Instance uuid f0fce250-5e4a-4063-a0c9-a2285f68c22e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:15:45 compute-0 nova_compute[254092]: 2025-11-25 17:15:45.649 254096 DEBUG nova.virt.libvirt.driver [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 17:15:45 compute-0 nova_compute[254092]: 2025-11-25 17:15:45.649 254096 DEBUG nova.virt.libvirt.driver [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Ensure instance console log exists: /var/lib/nova/instances/f0fce250-5e4a-4063-a0c9-a2285f68c22e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 17:15:45 compute-0 nova_compute[254092]: 2025-11-25 17:15:45.650 254096 DEBUG oslo_concurrency.lockutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:15:45 compute-0 nova_compute[254092]: 2025-11-25 17:15:45.650 254096 DEBUG oslo_concurrency.lockutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:15:45 compute-0 nova_compute[254092]: 2025-11-25 17:15:45.651 254096 DEBUG oslo_concurrency.lockutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:15:45 compute-0 nova_compute[254092]: 2025-11-25 17:15:45.921 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:46 compute-0 ceph-mon[74985]: pgmap v2766: 321 pgs: 321 active+clean; 121 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 28 KiB/s wr, 14 op/s
Nov 25 17:15:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2767: 321 pgs: 321 active+clean; 167 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 81 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Nov 25 17:15:47 compute-0 nova_compute[254092]: 2025-11-25 17:15:47.065 254096 DEBUG nova.network.neutron [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Successfully created port: 5cfdc6b4-a091-429f-a33f-cc941535c221 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 17:15:48 compute-0 nova_compute[254092]: 2025-11-25 17:15:48.170 254096 DEBUG nova.network.neutron [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Successfully updated port: d4fd6164-a382-44de-8709-0a3941640a9d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:15:48 compute-0 nova_compute[254092]: 2025-11-25 17:15:48.260 254096 DEBUG nova.compute.manager [req-5be86411-04f4-468a-a6bc-62fbc6a799ee req-83adbd5d-3634-4124-a11d-0dac8d3b3f81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Received event network-changed-d4fd6164-a382-44de-8709-0a3941640a9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:15:48 compute-0 nova_compute[254092]: 2025-11-25 17:15:48.260 254096 DEBUG nova.compute.manager [req-5be86411-04f4-468a-a6bc-62fbc6a799ee req-83adbd5d-3634-4124-a11d-0dac8d3b3f81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Refreshing instance network info cache due to event network-changed-d4fd6164-a382-44de-8709-0a3941640a9d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:15:48 compute-0 nova_compute[254092]: 2025-11-25 17:15:48.260 254096 DEBUG oslo_concurrency.lockutils [req-5be86411-04f4-468a-a6bc-62fbc6a799ee req-83adbd5d-3634-4124-a11d-0dac8d3b3f81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-f0fce250-5e4a-4063-a0c9-a2285f68c22e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:15:48 compute-0 nova_compute[254092]: 2025-11-25 17:15:48.261 254096 DEBUG oslo_concurrency.lockutils [req-5be86411-04f4-468a-a6bc-62fbc6a799ee req-83adbd5d-3634-4124-a11d-0dac8d3b3f81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-f0fce250-5e4a-4063-a0c9-a2285f68c22e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:15:48 compute-0 nova_compute[254092]: 2025-11-25 17:15:48.261 254096 DEBUG nova.network.neutron [req-5be86411-04f4-468a-a6bc-62fbc6a799ee req-83adbd5d-3634-4124-a11d-0dac8d3b3f81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Refreshing network info cache for port d4fd6164-a382-44de-8709-0a3941640a9d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:15:48 compute-0 ceph-mon[74985]: pgmap v2767: 321 pgs: 321 active+clean; 167 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 81 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Nov 25 17:15:48 compute-0 nova_compute[254092]: 2025-11-25 17:15:48.415 254096 DEBUG nova.network.neutron [req-5be86411-04f4-468a-a6bc-62fbc6a799ee req-83adbd5d-3634-4124-a11d-0dac8d3b3f81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:15:48 compute-0 nova_compute[254092]: 2025-11-25 17:15:48.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:15:48 compute-0 nova_compute[254092]: 2025-11-25 17:15:48.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:15:48 compute-0 nova_compute[254092]: 2025-11-25 17:15:48.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:15:48 compute-0 nova_compute[254092]: 2025-11-25 17:15:48.522 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 25 17:15:48 compute-0 nova_compute[254092]: 2025-11-25 17:15:48.754 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-cab3a333-1f68-435b-b6cb-a508755c2565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:15:48 compute-0 nova_compute[254092]: 2025-11-25 17:15:48.754 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-cab3a333-1f68-435b-b6cb-a508755c2565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:15:48 compute-0 nova_compute[254092]: 2025-11-25 17:15:48.755 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 17:15:48 compute-0 nova_compute[254092]: 2025-11-25 17:15:48.755 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid cab3a333-1f68-435b-b6cb-a508755c2565 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:15:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2768: 321 pgs: 321 active+clean; 167 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Nov 25 17:15:48 compute-0 nova_compute[254092]: 2025-11-25 17:15:48.887 254096 DEBUG nova.network.neutron [req-5be86411-04f4-468a-a6bc-62fbc6a799ee req-83adbd5d-3634-4124-a11d-0dac8d3b3f81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:15:48 compute-0 nova_compute[254092]: 2025-11-25 17:15:48.906 254096 DEBUG oslo_concurrency.lockutils [req-5be86411-04f4-468a-a6bc-62fbc6a799ee req-83adbd5d-3634-4124-a11d-0dac8d3b3f81 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-f0fce250-5e4a-4063-a0c9-a2285f68c22e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:15:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:15:49 compute-0 nova_compute[254092]: 2025-11-25 17:15:49.268 254096 DEBUG nova.network.neutron [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Successfully updated port: 5cfdc6b4-a091-429f-a33f-cc941535c221 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:15:49 compute-0 nova_compute[254092]: 2025-11-25 17:15:49.326 254096 DEBUG oslo_concurrency.lockutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "refresh_cache-f0fce250-5e4a-4063-a0c9-a2285f68c22e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:15:49 compute-0 nova_compute[254092]: 2025-11-25 17:15:49.327 254096 DEBUG oslo_concurrency.lockutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquired lock "refresh_cache-f0fce250-5e4a-4063-a0c9-a2285f68c22e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:15:49 compute-0 nova_compute[254092]: 2025-11-25 17:15:49.327 254096 DEBUG nova.network.neutron [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 17:15:49 compute-0 nova_compute[254092]: 2025-11-25 17:15:49.483 254096 DEBUG nova.network.neutron [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:15:49 compute-0 ceph-mon[74985]: pgmap v2768: 321 pgs: 321 active+clean; 167 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Nov 25 17:15:50 compute-0 nova_compute[254092]: 2025-11-25 17:15:50.350 254096 DEBUG nova.compute.manager [req-b2603582-1e49-45c4-aaf8-bb8f183e981e req-0184b704-ee56-4c81-bc29-f78ce91a922d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Received event network-changed-5cfdc6b4-a091-429f-a33f-cc941535c221 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:15:50 compute-0 nova_compute[254092]: 2025-11-25 17:15:50.350 254096 DEBUG nova.compute.manager [req-b2603582-1e49-45c4-aaf8-bb8f183e981e req-0184b704-ee56-4c81-bc29-f78ce91a922d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Refreshing instance network info cache due to event network-changed-5cfdc6b4-a091-429f-a33f-cc941535c221. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:15:50 compute-0 nova_compute[254092]: 2025-11-25 17:15:50.351 254096 DEBUG oslo_concurrency.lockutils [req-b2603582-1e49-45c4-aaf8-bb8f183e981e req-0184b704-ee56-4c81-bc29-f78ce91a922d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-f0fce250-5e4a-4063-a0c9-a2285f68c22e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:15:50 compute-0 nova_compute[254092]: 2025-11-25 17:15:50.495 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2769: 321 pgs: 321 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:15:50 compute-0 nova_compute[254092]: 2025-11-25 17:15:50.924 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:51 compute-0 nova_compute[254092]: 2025-11-25 17:15:51.064 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Updating instance_info_cache with network_info: [{"id": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "address": "fa:16:3e:f9:85:52", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfd7bca3-f0", "ovs_interfaceid": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "address": "fa:16:3e:8f:6d:38", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", 
"type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd75615e-b8", "ovs_interfaceid": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:15:51 compute-0 nova_compute[254092]: 2025-11-25 17:15:51.088 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-cab3a333-1f68-435b-b6cb-a508755c2565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:15:51 compute-0 nova_compute[254092]: 2025-11-25 17:15:51.088 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 17:15:51 compute-0 nova_compute[254092]: 2025-11-25 17:15:51.251 254096 DEBUG nova.network.neutron [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Updating instance_info_cache with network_info: [{"id": "d4fd6164-a382-44de-8709-0a3941640a9d", "address": "fa:16:3e:e8:fe:9c", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4fd6164-a3", "ovs_interfaceid": "d4fd6164-a382-44de-8709-0a3941640a9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5cfdc6b4-a091-429f-a33f-cc941535c221", "address": "fa:16:3e:a7:61:5b", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea7:615b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", 
"type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fea7:615b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cfdc6b4-a0", "ovs_interfaceid": "5cfdc6b4-a091-429f-a33f-cc941535c221", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:15:51 compute-0 nova_compute[254092]: 2025-11-25 17:15:51.283 254096 DEBUG oslo_concurrency.lockutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Releasing lock "refresh_cache-f0fce250-5e4a-4063-a0c9-a2285f68c22e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:15:51 compute-0 nova_compute[254092]: 2025-11-25 17:15:51.284 254096 DEBUG nova.compute.manager [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Instance network_info: |[{"id": "d4fd6164-a382-44de-8709-0a3941640a9d", "address": "fa:16:3e:e8:fe:9c", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4fd6164-a3", "ovs_interfaceid": "d4fd6164-a382-44de-8709-0a3941640a9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5cfdc6b4-a091-429f-a33f-cc941535c221", "address": "fa:16:3e:a7:61:5b", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea7:615b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", 
"version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fea7:615b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cfdc6b4-a0", "ovs_interfaceid": "5cfdc6b4-a091-429f-a33f-cc941535c221", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 17:15:51 compute-0 nova_compute[254092]: 2025-11-25 17:15:51.285 254096 DEBUG oslo_concurrency.lockutils [req-b2603582-1e49-45c4-aaf8-bb8f183e981e req-0184b704-ee56-4c81-bc29-f78ce91a922d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-f0fce250-5e4a-4063-a0c9-a2285f68c22e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:15:51 compute-0 nova_compute[254092]: 2025-11-25 17:15:51.286 254096 DEBUG nova.network.neutron [req-b2603582-1e49-45c4-aaf8-bb8f183e981e req-0184b704-ee56-4c81-bc29-f78ce91a922d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Refreshing network info cache for port 5cfdc6b4-a091-429f-a33f-cc941535c221 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:15:51 compute-0 nova_compute[254092]: 2025-11-25 17:15:51.296 254096 DEBUG nova.virt.libvirt.driver [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Start _get_guest_xml network_info=[{"id": "d4fd6164-a382-44de-8709-0a3941640a9d", "address": "fa:16:3e:e8:fe:9c", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4fd6164-a3", "ovs_interfaceid": "d4fd6164-a382-44de-8709-0a3941640a9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5cfdc6b4-a091-429f-a33f-cc941535c221", "address": "fa:16:3e:a7:61:5b", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea7:615b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fea7:615b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cfdc6b4-a0", "ovs_interfaceid": "5cfdc6b4-a091-429f-a33f-cc941535c221", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 17:15:51 compute-0 nova_compute[254092]: 2025-11-25 17:15:51.303 254096 WARNING nova.virt.libvirt.driver [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:15:51 compute-0 nova_compute[254092]: 2025-11-25 17:15:51.309 254096 DEBUG nova.virt.libvirt.host [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 17:15:51 compute-0 nova_compute[254092]: 2025-11-25 17:15:51.310 254096 DEBUG nova.virt.libvirt.host [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 17:15:51 compute-0 nova_compute[254092]: 2025-11-25 17:15:51.324 254096 DEBUG nova.virt.libvirt.host [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 17:15:51 compute-0 nova_compute[254092]: 2025-11-25 17:15:51.325 254096 DEBUG nova.virt.libvirt.host [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 17:15:51 compute-0 nova_compute[254092]: 2025-11-25 17:15:51.326 254096 DEBUG nova.virt.libvirt.driver [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 17:15:51 compute-0 nova_compute[254092]: 2025-11-25 17:15:51.326 254096 DEBUG nova.virt.hardware [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 17:15:51 compute-0 nova_compute[254092]: 2025-11-25 17:15:51.327 254096 DEBUG nova.virt.hardware [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 17:15:51 compute-0 nova_compute[254092]: 2025-11-25 17:15:51.327 254096 DEBUG nova.virt.hardware [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 17:15:51 compute-0 nova_compute[254092]: 2025-11-25 17:15:51.328 254096 DEBUG nova.virt.hardware [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 17:15:51 compute-0 nova_compute[254092]: 2025-11-25 17:15:51.328 254096 DEBUG nova.virt.hardware [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 17:15:51 compute-0 nova_compute[254092]: 2025-11-25 17:15:51.328 254096 DEBUG nova.virt.hardware [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 17:15:51 compute-0 nova_compute[254092]: 2025-11-25 17:15:51.329 254096 DEBUG nova.virt.hardware [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 17:15:51 compute-0 nova_compute[254092]: 2025-11-25 17:15:51.329 254096 DEBUG nova.virt.hardware [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 17:15:51 compute-0 nova_compute[254092]: 2025-11-25 17:15:51.330 254096 DEBUG nova.virt.hardware [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 17:15:51 compute-0 nova_compute[254092]: 2025-11-25 17:15:51.330 254096 DEBUG nova.virt.hardware [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 17:15:51 compute-0 nova_compute[254092]: 2025-11-25 17:15:51.330 254096 DEBUG nova.virt.hardware [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 17:15:51 compute-0 nova_compute[254092]: 2025-11-25 17:15:51.336 254096 DEBUG oslo_concurrency.processutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:15:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:15:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:15:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:15:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:15:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011052700926287686 of space, bias 1.0, pg target 0.3315810277886306 quantized to 32 (current 32)
Nov 25 17:15:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:15:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:15:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:15:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:15:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:15:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 17:15:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:15:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:15:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:15:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:15:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:15:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:15:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:15:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:15:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:15:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:15:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:15:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:15:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:15:51 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/391039012' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:15:51 compute-0 nova_compute[254092]: 2025-11-25 17:15:51.856 254096 DEBUG oslo_concurrency.processutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:15:51 compute-0 nova_compute[254092]: 2025-11-25 17:15:51.882 254096 DEBUG nova.storage.rbd_utils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image f0fce250-5e4a-4063-a0c9-a2285f68c22e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:15:51 compute-0 nova_compute[254092]: 2025-11-25 17:15:51.887 254096 DEBUG oslo_concurrency.processutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:15:51 compute-0 ceph-mon[74985]: pgmap v2769: 321 pgs: 321 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:15:51 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/391039012' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:15:52 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:15:52 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1529115130' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.308 254096 DEBUG oslo_concurrency.processutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.309 254096 DEBUG nova.virt.libvirt.vif [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:15:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1931135016',display_name='tempest-TestGettingAddress-server-1931135016',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1931135016',id=140,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC/niTBX5ZJOBoFwQAnBe0Xx9dFvOVPcybxHEgE/6u4oA8qKJolxGIRVI2nyb8g35YAG8s/hnJ8BpvMG1lA6L621k5AN67EKkdOzTdANkryTGmObdTe8FPKQ0krR2uOZXQ==',key_name='tempest-TestGettingAddress-363937330',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-gzae4ci4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:15:44Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=f0fce250-5e4a-4063-a0c9-a2285f68c22e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d4fd6164-a382-44de-8709-0a3941640a9d", "address": "fa:16:3e:e8:fe:9c", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4fd6164-a3", "ovs_interfaceid": "d4fd6164-a382-44de-8709-0a3941640a9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.309 254096 DEBUG nova.network.os_vif_util [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "d4fd6164-a382-44de-8709-0a3941640a9d", "address": "fa:16:3e:e8:fe:9c", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4fd6164-a3", "ovs_interfaceid": "d4fd6164-a382-44de-8709-0a3941640a9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.310 254096 DEBUG nova.network.os_vif_util [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:fe:9c,bridge_name='br-int',has_traffic_filtering=True,id=d4fd6164-a382-44de-8709-0a3941640a9d,network=Network(8c1b4538-7e1d-41aa-8e91-8a97df87ce48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4fd6164-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.311 254096 DEBUG nova.virt.libvirt.vif [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:15:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1931135016',display_name='tempest-TestGettingAddress-server-1931135016',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1931135016',id=140,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC/niTBX5ZJOBoFwQAnBe0Xx9dFvOVPcybxHEgE/6u4oA8qKJolxGIRVI2nyb8g35YAG8s/hnJ8BpvMG1lA6L621k5AN67EKkdOzTdANkryTGmObdTe8FPKQ0krR2uOZXQ==',key_name='tempest-TestGettingAddress-363937330',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-gzae4ci4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:15:44Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=f0fce250-5e4a-4063-a0c9-a2285f68c22e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5cfdc6b4-a091-429f-a33f-cc941535c221", "address": "fa:16:3e:a7:61:5b", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea7:615b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fea7:615b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cfdc6b4-a0", "ovs_interfaceid": "5cfdc6b4-a091-429f-a33f-cc941535c221", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.311 254096 DEBUG nova.network.os_vif_util [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "5cfdc6b4-a091-429f-a33f-cc941535c221", "address": "fa:16:3e:a7:61:5b", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea7:615b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fea7:615b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cfdc6b4-a0", "ovs_interfaceid": "5cfdc6b4-a091-429f-a33f-cc941535c221", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.312 254096 DEBUG nova.network.os_vif_util [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:61:5b,bridge_name='br-int',has_traffic_filtering=True,id=5cfdc6b4-a091-429f-a33f-cc941535c221,network=Network(c57073ad-8c41-459b-9402-c367011860c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cfdc6b4-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.313 254096 DEBUG nova.objects.instance [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'pci_devices' on Instance uuid f0fce250-5e4a-4063-a0c9-a2285f68c22e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.330 254096 DEBUG nova.virt.libvirt.driver [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] End _get_guest_xml xml=<domain type="kvm">
Nov 25 17:15:52 compute-0 nova_compute[254092]:   <uuid>f0fce250-5e4a-4063-a0c9-a2285f68c22e</uuid>
Nov 25 17:15:52 compute-0 nova_compute[254092]:   <name>instance-0000008c</name>
Nov 25 17:15:52 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 17:15:52 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 17:15:52 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:15:52 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:       <nova:name>tempest-TestGettingAddress-server-1931135016</nova:name>
Nov 25 17:15:52 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 17:15:51</nova:creationTime>
Nov 25 17:15:52 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 17:15:52 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 17:15:52 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 17:15:52 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 17:15:52 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:15:52 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 17:15:52 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 17:15:52 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 17:15:52 compute-0 nova_compute[254092]:         <nova:user uuid="a647bb22c9a54e5fa9ddfff588126693">tempest-TestGettingAddress-207202764-project-member</nova:user>
Nov 25 17:15:52 compute-0 nova_compute[254092]:         <nova:project uuid="67b7f8e8aac6441f992a182f8d893100">tempest-TestGettingAddress-207202764</nova:project>
Nov 25 17:15:52 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 17:15:52 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 17:15:52 compute-0 nova_compute[254092]:         <nova:port uuid="d4fd6164-a382-44de-8709-0a3941640a9d">
Nov 25 17:15:52 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 17:15:52 compute-0 nova_compute[254092]:         <nova:port uuid="5cfdc6b4-a091-429f-a33f-cc941535c221">
Nov 25 17:15:52 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fea7:615b" ipVersion="6"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fea7:615b" ipVersion="6"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 17:15:52 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 17:15:52 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:15:52 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 17:15:52 compute-0 nova_compute[254092]:     <system>
Nov 25 17:15:52 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 17:15:52 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 17:15:52 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:15:52 compute-0 nova_compute[254092]:       <entry name="serial">f0fce250-5e4a-4063-a0c9-a2285f68c22e</entry>
Nov 25 17:15:52 compute-0 nova_compute[254092]:       <entry name="uuid">f0fce250-5e4a-4063-a0c9-a2285f68c22e</entry>
Nov 25 17:15:52 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     </system>
Nov 25 17:15:52 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:15:52 compute-0 nova_compute[254092]:   <os>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:   </os>
Nov 25 17:15:52 compute-0 nova_compute[254092]:   <features>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:   </features>
Nov 25 17:15:52 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 17:15:52 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:15:52 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 17:15:52 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:15:52 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 17:15:52 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/f0fce250-5e4a-4063-a0c9-a2285f68c22e_disk">
Nov 25 17:15:52 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:       </source>
Nov 25 17:15:52 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:15:52 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:15:52 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 17:15:52 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/f0fce250-5e4a-4063-a0c9-a2285f68c22e_disk.config">
Nov 25 17:15:52 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:       </source>
Nov 25 17:15:52 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:15:52 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:15:52 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 17:15:52 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:e8:fe:9c"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:       <target dev="tapd4fd6164-a3"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 17:15:52 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:a7:61:5b"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:       <target dev="tap5cfdc6b4-a0"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 17:15:52 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/f0fce250-5e4a-4063-a0c9-a2285f68c22e/console.log" append="off"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     <video>
Nov 25 17:15:52 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     </video>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 17:15:52 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 17:15:52 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 17:15:52 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:15:52 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:15:52 compute-0 nova_compute[254092]: </domain>
Nov 25 17:15:52 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.331 254096 DEBUG nova.compute.manager [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Preparing to wait for external event network-vif-plugged-d4fd6164-a382-44de-8709-0a3941640a9d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.331 254096 DEBUG oslo_concurrency.lockutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.331 254096 DEBUG oslo_concurrency.lockutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.331 254096 DEBUG oslo_concurrency.lockutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.331 254096 DEBUG nova.compute.manager [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Preparing to wait for external event network-vif-plugged-5cfdc6b4-a091-429f-a33f-cc941535c221 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.332 254096 DEBUG oslo_concurrency.lockutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.332 254096 DEBUG oslo_concurrency.lockutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.332 254096 DEBUG oslo_concurrency.lockutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.332 254096 DEBUG nova.virt.libvirt.vif [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:15:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1931135016',display_name='tempest-TestGettingAddress-server-1931135016',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1931135016',id=140,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC/niTBX5ZJOBoFwQAnBe0Xx9dFvOVPcybxHEgE/6u4oA8qKJolxGIRVI2nyb8g35YAG8s/hnJ8BpvMG1lA6L621k5AN67EKkdOzTdANkryTGmObdTe8FPKQ0krR2uOZXQ==',key_name='tempest-TestGettingAddress-363937330',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-gzae4ci4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:15:44Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=f0fce250-5e4a-4063-a0c9-a2285f68c22e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d4fd6164-a382-44de-8709-0a3941640a9d", "address": "fa:16:3e:e8:fe:9c", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4fd6164-a3", "ovs_interfaceid": "d4fd6164-a382-44de-8709-0a3941640a9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.333 254096 DEBUG nova.network.os_vif_util [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "d4fd6164-a382-44de-8709-0a3941640a9d", "address": "fa:16:3e:e8:fe:9c", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4fd6164-a3", "ovs_interfaceid": "d4fd6164-a382-44de-8709-0a3941640a9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.333 254096 DEBUG nova.network.os_vif_util [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:fe:9c,bridge_name='br-int',has_traffic_filtering=True,id=d4fd6164-a382-44de-8709-0a3941640a9d,network=Network(8c1b4538-7e1d-41aa-8e91-8a97df87ce48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4fd6164-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.333 254096 DEBUG os_vif [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:fe:9c,bridge_name='br-int',has_traffic_filtering=True,id=d4fd6164-a382-44de-8709-0a3941640a9d,network=Network(8c1b4538-7e1d-41aa-8e91-8a97df87ce48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4fd6164-a3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.334 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.335 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.335 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.339 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.339 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd4fd6164-a3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.339 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd4fd6164-a3, col_values=(('external_ids', {'iface-id': 'd4fd6164-a382-44de-8709-0a3941640a9d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e8:fe:9c', 'vm-uuid': 'f0fce250-5e4a-4063-a0c9-a2285f68c22e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:15:52 compute-0 NetworkManager[48891]: <info>  [1764090952.4429] manager: (tapd4fd6164-a3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/609)
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.443 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.445 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.448 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.449 254096 INFO os_vif [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:fe:9c,bridge_name='br-int',has_traffic_filtering=True,id=d4fd6164-a382-44de-8709-0a3941640a9d,network=Network(8c1b4538-7e1d-41aa-8e91-8a97df87ce48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4fd6164-a3')
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.450 254096 DEBUG nova.virt.libvirt.vif [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:15:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1931135016',display_name='tempest-TestGettingAddress-server-1931135016',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1931135016',id=140,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC/niTBX5ZJOBoFwQAnBe0Xx9dFvOVPcybxHEgE/6u4oA8qKJolxGIRVI2nyb8g35YAG8s/hnJ8BpvMG1lA6L621k5AN67EKkdOzTdANkryTGmObdTe8FPKQ0krR2uOZXQ==',key_name='tempest-TestGettingAddress-363937330',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-gzae4ci4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:15:44Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=f0fce250-5e4a-4063-a0c9-a2285f68c22e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5cfdc6b4-a091-429f-a33f-cc941535c221", "address": "fa:16:3e:a7:61:5b", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea7:615b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fea7:615b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cfdc6b4-a0", "ovs_interfaceid": "5cfdc6b4-a091-429f-a33f-cc941535c221", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.450 254096 DEBUG nova.network.os_vif_util [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "5cfdc6b4-a091-429f-a33f-cc941535c221", "address": "fa:16:3e:a7:61:5b", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea7:615b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fea7:615b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cfdc6b4-a0", "ovs_interfaceid": "5cfdc6b4-a091-429f-a33f-cc941535c221", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.450 254096 DEBUG nova.network.os_vif_util [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:61:5b,bridge_name='br-int',has_traffic_filtering=True,id=5cfdc6b4-a091-429f-a33f-cc941535c221,network=Network(c57073ad-8c41-459b-9402-c367011860c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cfdc6b4-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.451 254096 DEBUG os_vif [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:61:5b,bridge_name='br-int',has_traffic_filtering=True,id=5cfdc6b4-a091-429f-a33f-cc941535c221,network=Network(c57073ad-8c41-459b-9402-c367011860c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cfdc6b4-a0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.451 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.451 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.451 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.453 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.453 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5cfdc6b4-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.453 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5cfdc6b4-a0, col_values=(('external_ids', {'iface-id': '5cfdc6b4-a091-429f-a33f-cc941535c221', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a7:61:5b', 'vm-uuid': 'f0fce250-5e4a-4063-a0c9-a2285f68c22e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.455 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:52 compute-0 NetworkManager[48891]: <info>  [1764090952.4555] manager: (tap5cfdc6b4-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/610)
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.457 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.460 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.460 254096 INFO os_vif [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:61:5b,bridge_name='br-int',has_traffic_filtering=True,id=5cfdc6b4-a091-429f-a33f-cc941535c221,network=Network(c57073ad-8c41-459b-9402-c367011860c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cfdc6b4-a0')
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.513 254096 DEBUG nova.virt.libvirt.driver [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.513 254096 DEBUG nova.virt.libvirt.driver [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.513 254096 DEBUG nova.virt.libvirt.driver [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No VIF found with MAC fa:16:3e:e8:fe:9c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.513 254096 DEBUG nova.virt.libvirt.driver [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No VIF found with MAC fa:16:3e:a7:61:5b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.514 254096 INFO nova.virt.libvirt.driver [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Using config drive
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.532 254096 DEBUG nova.storage.rbd_utils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image f0fce250-5e4a-4063-a0c9-a2285f68c22e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.671 254096 DEBUG nova.network.neutron [req-b2603582-1e49-45c4-aaf8-bb8f183e981e req-0184b704-ee56-4c81-bc29-f78ce91a922d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Updated VIF entry in instance network info cache for port 5cfdc6b4-a091-429f-a33f-cc941535c221. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.672 254096 DEBUG nova.network.neutron [req-b2603582-1e49-45c4-aaf8-bb8f183e981e req-0184b704-ee56-4c81-bc29-f78ce91a922d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Updating instance_info_cache with network_info: [{"id": "d4fd6164-a382-44de-8709-0a3941640a9d", "address": "fa:16:3e:e8:fe:9c", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4fd6164-a3", "ovs_interfaceid": "d4fd6164-a382-44de-8709-0a3941640a9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5cfdc6b4-a091-429f-a33f-cc941535c221", "address": "fa:16:3e:a7:61:5b", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea7:615b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fea7:615b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cfdc6b4-a0", "ovs_interfaceid": "5cfdc6b4-a091-429f-a33f-cc941535c221", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:15:52 compute-0 nova_compute[254092]: 2025-11-25 17:15:52.690 254096 DEBUG oslo_concurrency.lockutils [req-b2603582-1e49-45c4-aaf8-bb8f183e981e req-0184b704-ee56-4c81-bc29-f78ce91a922d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-f0fce250-5e4a-4063-a0c9-a2285f68c22e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:15:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2770: 321 pgs: 321 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:15:52 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1529115130' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:15:53 compute-0 nova_compute[254092]: 2025-11-25 17:15:53.878 254096 INFO nova.virt.libvirt.driver [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Creating config drive at /var/lib/nova/instances/f0fce250-5e4a-4063-a0c9-a2285f68c22e/disk.config
Nov 25 17:15:53 compute-0 nova_compute[254092]: 2025-11-25 17:15:53.882 254096 DEBUG oslo_concurrency.processutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f0fce250-5e4a-4063-a0c9-a2285f68c22e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2t1rxeqr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:15:53 compute-0 ceph-mon[74985]: pgmap v2770: 321 pgs: 321 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:15:54 compute-0 nova_compute[254092]: 2025-11-25 17:15:54.039 254096 DEBUG oslo_concurrency.processutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f0fce250-5e4a-4063-a0c9-a2285f68c22e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2t1rxeqr" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:15:54 compute-0 nova_compute[254092]: 2025-11-25 17:15:54.075 254096 DEBUG nova.storage.rbd_utils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image f0fce250-5e4a-4063-a0c9-a2285f68c22e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:15:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:15:54 compute-0 nova_compute[254092]: 2025-11-25 17:15:54.083 254096 DEBUG oslo_concurrency.processutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f0fce250-5e4a-4063-a0c9-a2285f68c22e/disk.config f0fce250-5e4a-4063-a0c9-a2285f68c22e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:15:54 compute-0 nova_compute[254092]: 2025-11-25 17:15:54.250 254096 DEBUG oslo_concurrency.processutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f0fce250-5e4a-4063-a0c9-a2285f68c22e/disk.config f0fce250-5e4a-4063-a0c9-a2285f68c22e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:15:54 compute-0 nova_compute[254092]: 2025-11-25 17:15:54.252 254096 INFO nova.virt.libvirt.driver [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Deleting local config drive /var/lib/nova/instances/f0fce250-5e4a-4063-a0c9-a2285f68c22e/disk.config because it was imported into RBD.
Nov 25 17:15:54 compute-0 kernel: tapd4fd6164-a3: entered promiscuous mode
Nov 25 17:15:54 compute-0 NetworkManager[48891]: <info>  [1764090954.3078] manager: (tapd4fd6164-a3): new Tun device (/org/freedesktop/NetworkManager/Devices/611)
Nov 25 17:15:54 compute-0 nova_compute[254092]: 2025-11-25 17:15:54.311 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:54 compute-0 ovn_controller[153477]: 2025-11-25T17:15:54Z|01472|binding|INFO|Claiming lport d4fd6164-a382-44de-8709-0a3941640a9d for this chassis.
Nov 25 17:15:54 compute-0 ovn_controller[153477]: 2025-11-25T17:15:54Z|01473|binding|INFO|d4fd6164-a382-44de-8709-0a3941640a9d: Claiming fa:16:3e:e8:fe:9c 10.100.0.8
Nov 25 17:15:54 compute-0 NetworkManager[48891]: <info>  [1764090954.3289] manager: (tap5cfdc6b4-a0): new Tun device (/org/freedesktop/NetworkManager/Devices/612)
Nov 25 17:15:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.328 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:fe:9c 10.100.0.8'], port_security=['fa:16:3e:e8:fe:9c 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'f0fce250-5e4a-4063-a0c9-a2285f68c22e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8c1b4538-7e1d-41aa-8e91-8a97df87ce48', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': '27a39082-c6dd-4238-9595-c012a711382c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1bd6efb3-458b-448a-a74a-463b74973f4e, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=d4fd6164-a382-44de-8709-0a3941640a9d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:15:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.330 163338 INFO neutron.agent.ovn.metadata.agent [-] Port d4fd6164-a382-44de-8709-0a3941640a9d in datapath 8c1b4538-7e1d-41aa-8e91-8a97df87ce48 bound to our chassis
Nov 25 17:15:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.332 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8c1b4538-7e1d-41aa-8e91-8a97df87ce48
Nov 25 17:15:54 compute-0 ovn_controller[153477]: 2025-11-25T17:15:54Z|01474|binding|INFO|Setting lport d4fd6164-a382-44de-8709-0a3941640a9d ovn-installed in OVS
Nov 25 17:15:54 compute-0 ovn_controller[153477]: 2025-11-25T17:15:54Z|01475|binding|INFO|Setting lport d4fd6164-a382-44de-8709-0a3941640a9d up in Southbound
Nov 25 17:15:54 compute-0 nova_compute[254092]: 2025-11-25 17:15:54.334 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:54 compute-0 kernel: tap5cfdc6b4-a0: entered promiscuous mode
Nov 25 17:15:54 compute-0 nova_compute[254092]: 2025-11-25 17:15:54.336 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:54 compute-0 ovn_controller[153477]: 2025-11-25T17:15:54Z|01476|if_status|INFO|Dropped 2 log messages in last 116 seconds (most recently, 116 seconds ago) due to excessive rate
Nov 25 17:15:54 compute-0 ovn_controller[153477]: 2025-11-25T17:15:54Z|01477|if_status|INFO|Not updating pb chassis for 5cfdc6b4-a091-429f-a33f-cc941535c221 now as sb is readonly
Nov 25 17:15:54 compute-0 ovn_controller[153477]: 2025-11-25T17:15:54Z|01478|binding|INFO|Claiming lport 5cfdc6b4-a091-429f-a33f-cc941535c221 for this chassis.
Nov 25 17:15:54 compute-0 ovn_controller[153477]: 2025-11-25T17:15:54Z|01479|binding|INFO|5cfdc6b4-a091-429f-a33f-cc941535c221: Claiming fa:16:3e:a7:61:5b 2001:db8:0:1:f816:3eff:fea7:615b 2001:db8::f816:3eff:fea7:615b
Nov 25 17:15:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.348 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:61:5b 2001:db8:0:1:f816:3eff:fea7:615b 2001:db8::f816:3eff:fea7:615b'], port_security=['fa:16:3e:a7:61:5b 2001:db8:0:1:f816:3eff:fea7:615b 2001:db8::f816:3eff:fea7:615b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fea7:615b/64 2001:db8::f816:3eff:fea7:615b/64', 'neutron:device_id': 'f0fce250-5e4a-4063-a0c9-a2285f68c22e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c57073ad-8c41-459b-9402-c367011860c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': '27a39082-c6dd-4238-9595-c012a711382c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6ff1ab3c-191d-40dd-ac45-c3cedb65904e, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=5cfdc6b4-a091-429f-a33f-cc941535c221) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:15:54 compute-0 systemd-udevd[408477]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:15:54 compute-0 systemd-udevd[408476]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:15:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.352 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a9c841d3-db14-44c0-865c-934a34e346da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:15:54 compute-0 ovn_controller[153477]: 2025-11-25T17:15:54Z|01480|binding|INFO|Setting lport 5cfdc6b4-a091-429f-a33f-cc941535c221 ovn-installed in OVS
Nov 25 17:15:54 compute-0 ovn_controller[153477]: 2025-11-25T17:15:54Z|01481|binding|INFO|Setting lport 5cfdc6b4-a091-429f-a33f-cc941535c221 up in Southbound
Nov 25 17:15:54 compute-0 nova_compute[254092]: 2025-11-25 17:15:54.356 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:54 compute-0 NetworkManager[48891]: <info>  [1764090954.3661] device (tapd4fd6164-a3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:15:54 compute-0 NetworkManager[48891]: <info>  [1764090954.3671] device (tapd4fd6164-a3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:15:54 compute-0 NetworkManager[48891]: <info>  [1764090954.3675] device (tap5cfdc6b4-a0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:15:54 compute-0 NetworkManager[48891]: <info>  [1764090954.3682] device (tap5cfdc6b4-a0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:15:54 compute-0 systemd-machined[216343]: New machine qemu-174-instance-0000008c.
Nov 25 17:15:54 compute-0 systemd[1]: Started Virtual Machine qemu-174-instance-0000008c.
Nov 25 17:15:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.397 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c494c435-08c7-4a2d-8e67-7fc502097cac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:15:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.401 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[3663c5d2-79e8-49f0-8cd2-de1917feffe9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:15:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.437 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[412c72e9-6c4c-488c-8306-6dd8764dd01c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:15:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.459 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5a0e3c2c-0341-4e00-8608-758f4c27d675]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8c1b4538-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:12:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 427], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 737427, 'reachable_time': 34818, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 408492, 'error': None, 'target': 'ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:15:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.483 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bd46f749-9448-474e-9598-283831499dfa]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8c1b4538-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 737444, 'tstamp': 737444}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 408494, 'error': None, 'target': 'ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8c1b4538-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 737448, 'tstamp': 737448}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 408494, 'error': None, 'target': 'ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:15:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.484 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c1b4538-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:15:54 compute-0 nova_compute[254092]: 2025-11-25 17:15:54.486 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:54 compute-0 nova_compute[254092]: 2025-11-25 17:15:54.488 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.488 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8c1b4538-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:15:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.489 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:15:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.489 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8c1b4538-70, col_values=(('external_ids', {'iface-id': '805e6fea-3f01-4342-8f5e-5b75e48ec68e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:15:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.489 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:15:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.491 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 5cfdc6b4-a091-429f-a33f-cc941535c221 in datapath c57073ad-8c41-459b-9402-c367011860c7 unbound from our chassis
Nov 25 17:15:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.492 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c57073ad-8c41-459b-9402-c367011860c7
Nov 25 17:15:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.509 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f1b46a3e-0184-4fba-ab50-f54236331285]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:15:54 compute-0 nova_compute[254092]: 2025-11-25 17:15:54.533 254096 DEBUG nova.compute.manager [req-80083e71-0473-490a-939d-30af66370c77 req-65a0fddb-6400-4404-bc10-3dad41ee4f8b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Received event network-vif-plugged-d4fd6164-a382-44de-8709-0a3941640a9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:15:54 compute-0 nova_compute[254092]: 2025-11-25 17:15:54.534 254096 DEBUG oslo_concurrency.lockutils [req-80083e71-0473-490a-939d-30af66370c77 req-65a0fddb-6400-4404-bc10-3dad41ee4f8b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:15:54 compute-0 nova_compute[254092]: 2025-11-25 17:15:54.534 254096 DEBUG oslo_concurrency.lockutils [req-80083e71-0473-490a-939d-30af66370c77 req-65a0fddb-6400-4404-bc10-3dad41ee4f8b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:15:54 compute-0 nova_compute[254092]: 2025-11-25 17:15:54.534 254096 DEBUG oslo_concurrency.lockutils [req-80083e71-0473-490a-939d-30af66370c77 req-65a0fddb-6400-4404-bc10-3dad41ee4f8b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:15:54 compute-0 nova_compute[254092]: 2025-11-25 17:15:54.535 254096 DEBUG nova.compute.manager [req-80083e71-0473-490a-939d-30af66370c77 req-65a0fddb-6400-4404-bc10-3dad41ee4f8b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Processing event network-vif-plugged-d4fd6164-a382-44de-8709-0a3941640a9d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 17:15:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.537 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[82d94a94-1c4d-48fd-97b4-147b4223e625]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:15:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.539 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6d4dd88d-36ce-4ac2-984c-0e1cbd724fe6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:15:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.566 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6d28e13a-dbc4-4c4a-b90e-c64982452484]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:15:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.587 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f692c81e-d663-45fe-a3dd-2d1db7f21f8f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc57073ad-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:72:a6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 21, 'tx_packets': 5, 'rx_bytes': 1846, 'tx_bytes': 402, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 21, 'tx_packets': 5, 'rx_bytes': 1846, 'tx_bytes': 402, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 428], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 737528, 'reachable_time': 38152, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 21, 'inoctets': 1552, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 21, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1552, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 21, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 408500, 'error': None, 'target': 'ovnmeta-c57073ad-8c41-459b-9402-c367011860c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:15:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.603 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c7c5a6c3-77f3-4f1a-b8f8-a82fbb8ebf0c]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc57073ad-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 737545, 'tstamp': 737545}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 408516, 'error': None, 'target': 'ovnmeta-c57073ad-8c41-459b-9402-c367011860c7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:15:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.605 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc57073ad-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:15:54 compute-0 nova_compute[254092]: 2025-11-25 17:15:54.607 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.608 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc57073ad-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:15:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.608 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:15:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.609 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc57073ad-80, col_values=(('external_ids', {'iface-id': 'fff6df53-4de9-409a-abf7-032bad835b32'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:15:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.609 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:15:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.615 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=54, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=53) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:15:54 compute-0 nova_compute[254092]: 2025-11-25 17:15:54.615 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:15:54.616 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 17:15:54 compute-0 nova_compute[254092]: 2025-11-25 17:15:54.757 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090954.7570217, f0fce250-5e4a-4063-a0c9-a2285f68c22e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:15:54 compute-0 nova_compute[254092]: 2025-11-25 17:15:54.758 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] VM Started (Lifecycle Event)
Nov 25 17:15:54 compute-0 nova_compute[254092]: 2025-11-25 17:15:54.780 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:15:54 compute-0 nova_compute[254092]: 2025-11-25 17:15:54.786 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090954.7573197, f0fce250-5e4a-4063-a0c9-a2285f68c22e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:15:54 compute-0 nova_compute[254092]: 2025-11-25 17:15:54.786 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] VM Paused (Lifecycle Event)
Nov 25 17:15:54 compute-0 nova_compute[254092]: 2025-11-25 17:15:54.805 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:15:54 compute-0 nova_compute[254092]: 2025-11-25 17:15:54.808 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:15:54 compute-0 nova_compute[254092]: 2025-11-25 17:15:54.827 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:15:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2771: 321 pgs: 321 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:15:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:15:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1543334435' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:15:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:15:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1543334435' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:15:55 compute-0 nova_compute[254092]: 2025-11-25 17:15:55.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:15:55 compute-0 nova_compute[254092]: 2025-11-25 17:15:55.965 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:56 compute-0 ceph-mon[74985]: pgmap v2771: 321 pgs: 321 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:15:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/1543334435' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:15:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/1543334435' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:15:56 compute-0 nova_compute[254092]: 2025-11-25 17:15:56.659 254096 DEBUG nova.compute.manager [req-5a875ca9-2845-4937-ad54-4c14912d46bb req-3be87fb9-f9cf-4905-9541-035fcd6253fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Received event network-vif-plugged-d4fd6164-a382-44de-8709-0a3941640a9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:15:56 compute-0 nova_compute[254092]: 2025-11-25 17:15:56.660 254096 DEBUG oslo_concurrency.lockutils [req-5a875ca9-2845-4937-ad54-4c14912d46bb req-3be87fb9-f9cf-4905-9541-035fcd6253fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:15:56 compute-0 nova_compute[254092]: 2025-11-25 17:15:56.660 254096 DEBUG oslo_concurrency.lockutils [req-5a875ca9-2845-4937-ad54-4c14912d46bb req-3be87fb9-f9cf-4905-9541-035fcd6253fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:15:56 compute-0 nova_compute[254092]: 2025-11-25 17:15:56.660 254096 DEBUG oslo_concurrency.lockutils [req-5a875ca9-2845-4937-ad54-4c14912d46bb req-3be87fb9-f9cf-4905-9541-035fcd6253fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:15:56 compute-0 nova_compute[254092]: 2025-11-25 17:15:56.660 254096 DEBUG nova.compute.manager [req-5a875ca9-2845-4937-ad54-4c14912d46bb req-3be87fb9-f9cf-4905-9541-035fcd6253fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] No event matching network-vif-plugged-d4fd6164-a382-44de-8709-0a3941640a9d in dict_keys([('network-vif-plugged', '5cfdc6b4-a091-429f-a33f-cc941535c221')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Nov 25 17:15:56 compute-0 nova_compute[254092]: 2025-11-25 17:15:56.660 254096 WARNING nova.compute.manager [req-5a875ca9-2845-4937-ad54-4c14912d46bb req-3be87fb9-f9cf-4905-9541-035fcd6253fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Received unexpected event network-vif-plugged-d4fd6164-a382-44de-8709-0a3941640a9d for instance with vm_state building and task_state spawning.
Nov 25 17:15:56 compute-0 nova_compute[254092]: 2025-11-25 17:15:56.661 254096 DEBUG nova.compute.manager [req-5a875ca9-2845-4937-ad54-4c14912d46bb req-3be87fb9-f9cf-4905-9541-035fcd6253fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Received event network-vif-plugged-5cfdc6b4-a091-429f-a33f-cc941535c221 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:15:56 compute-0 nova_compute[254092]: 2025-11-25 17:15:56.661 254096 DEBUG oslo_concurrency.lockutils [req-5a875ca9-2845-4937-ad54-4c14912d46bb req-3be87fb9-f9cf-4905-9541-035fcd6253fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:15:56 compute-0 nova_compute[254092]: 2025-11-25 17:15:56.661 254096 DEBUG oslo_concurrency.lockutils [req-5a875ca9-2845-4937-ad54-4c14912d46bb req-3be87fb9-f9cf-4905-9541-035fcd6253fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:15:56 compute-0 nova_compute[254092]: 2025-11-25 17:15:56.661 254096 DEBUG oslo_concurrency.lockutils [req-5a875ca9-2845-4937-ad54-4c14912d46bb req-3be87fb9-f9cf-4905-9541-035fcd6253fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:15:56 compute-0 nova_compute[254092]: 2025-11-25 17:15:56.661 254096 DEBUG nova.compute.manager [req-5a875ca9-2845-4937-ad54-4c14912d46bb req-3be87fb9-f9cf-4905-9541-035fcd6253fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Processing event network-vif-plugged-5cfdc6b4-a091-429f-a33f-cc941535c221 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 17:15:56 compute-0 nova_compute[254092]: 2025-11-25 17:15:56.661 254096 DEBUG nova.compute.manager [req-5a875ca9-2845-4937-ad54-4c14912d46bb req-3be87fb9-f9cf-4905-9541-035fcd6253fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Received event network-vif-plugged-5cfdc6b4-a091-429f-a33f-cc941535c221 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:15:56 compute-0 nova_compute[254092]: 2025-11-25 17:15:56.662 254096 DEBUG oslo_concurrency.lockutils [req-5a875ca9-2845-4937-ad54-4c14912d46bb req-3be87fb9-f9cf-4905-9541-035fcd6253fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:15:56 compute-0 nova_compute[254092]: 2025-11-25 17:15:56.662 254096 DEBUG oslo_concurrency.lockutils [req-5a875ca9-2845-4937-ad54-4c14912d46bb req-3be87fb9-f9cf-4905-9541-035fcd6253fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:15:56 compute-0 nova_compute[254092]: 2025-11-25 17:15:56.662 254096 DEBUG oslo_concurrency.lockutils [req-5a875ca9-2845-4937-ad54-4c14912d46bb req-3be87fb9-f9cf-4905-9541-035fcd6253fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:15:56 compute-0 nova_compute[254092]: 2025-11-25 17:15:56.662 254096 DEBUG nova.compute.manager [req-5a875ca9-2845-4937-ad54-4c14912d46bb req-3be87fb9-f9cf-4905-9541-035fcd6253fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] No waiting events found dispatching network-vif-plugged-5cfdc6b4-a091-429f-a33f-cc941535c221 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:15:56 compute-0 nova_compute[254092]: 2025-11-25 17:15:56.662 254096 WARNING nova.compute.manager [req-5a875ca9-2845-4937-ad54-4c14912d46bb req-3be87fb9-f9cf-4905-9541-035fcd6253fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Received unexpected event network-vif-plugged-5cfdc6b4-a091-429f-a33f-cc941535c221 for instance with vm_state building and task_state spawning.
Nov 25 17:15:56 compute-0 nova_compute[254092]: 2025-11-25 17:15:56.663 254096 DEBUG nova.compute.manager [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 17:15:56 compute-0 nova_compute[254092]: 2025-11-25 17:15:56.667 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764090956.667536, f0fce250-5e4a-4063-a0c9-a2285f68c22e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:15:56 compute-0 nova_compute[254092]: 2025-11-25 17:15:56.667 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] VM Resumed (Lifecycle Event)
Nov 25 17:15:56 compute-0 nova_compute[254092]: 2025-11-25 17:15:56.669 254096 DEBUG nova.virt.libvirt.driver [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 17:15:56 compute-0 nova_compute[254092]: 2025-11-25 17:15:56.672 254096 INFO nova.virt.libvirt.driver [-] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Instance spawned successfully.
Nov 25 17:15:56 compute-0 nova_compute[254092]: 2025-11-25 17:15:56.673 254096 DEBUG nova.virt.libvirt.driver [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 17:15:56 compute-0 nova_compute[254092]: 2025-11-25 17:15:56.689 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:15:56 compute-0 nova_compute[254092]: 2025-11-25 17:15:56.695 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:15:56 compute-0 nova_compute[254092]: 2025-11-25 17:15:56.699 254096 DEBUG nova.virt.libvirt.driver [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:15:56 compute-0 nova_compute[254092]: 2025-11-25 17:15:56.700 254096 DEBUG nova.virt.libvirt.driver [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:15:56 compute-0 nova_compute[254092]: 2025-11-25 17:15:56.700 254096 DEBUG nova.virt.libvirt.driver [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:15:56 compute-0 nova_compute[254092]: 2025-11-25 17:15:56.701 254096 DEBUG nova.virt.libvirt.driver [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:15:56 compute-0 nova_compute[254092]: 2025-11-25 17:15:56.701 254096 DEBUG nova.virt.libvirt.driver [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:15:56 compute-0 nova_compute[254092]: 2025-11-25 17:15:56.701 254096 DEBUG nova.virt.libvirt.driver [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:15:56 compute-0 nova_compute[254092]: 2025-11-25 17:15:56.729 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:15:56 compute-0 nova_compute[254092]: 2025-11-25 17:15:56.776 254096 INFO nova.compute.manager [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Took 12.66 seconds to spawn the instance on the hypervisor.
Nov 25 17:15:56 compute-0 nova_compute[254092]: 2025-11-25 17:15:56.777 254096 DEBUG nova.compute.manager [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:15:56 compute-0 nova_compute[254092]: 2025-11-25 17:15:56.853 254096 INFO nova.compute.manager [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Took 13.80 seconds to build instance.
Nov 25 17:15:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2772: 321 pgs: 321 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Nov 25 17:15:56 compute-0 nova_compute[254092]: 2025-11-25 17:15:56.874 254096 DEBUG oslo_concurrency.lockutils [None req-ce9cea9e-d63d-4399-bf12-2735a16bd5ce a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.925s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:15:57 compute-0 sudo[408546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:15:57 compute-0 sudo[408546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:15:57 compute-0 sudo[408546]: pam_unix(sudo:session): session closed for user root
Nov 25 17:15:57 compute-0 nova_compute[254092]: 2025-11-25 17:15:57.456 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:15:57 compute-0 sudo[408571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:15:57 compute-0 sudo[408571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:15:57 compute-0 sudo[408571]: pam_unix(sudo:session): session closed for user root
Nov 25 17:15:57 compute-0 sudo[408596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:15:57 compute-0 sudo[408596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:15:57 compute-0 sudo[408596]: pam_unix(sudo:session): session closed for user root
Nov 25 17:15:57 compute-0 sudo[408621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 25 17:15:57 compute-0 sudo[408621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:15:57 compute-0 sudo[408621]: pam_unix(sudo:session): session closed for user root
Nov 25 17:15:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:15:57 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:15:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:15:57 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:15:57 compute-0 sudo[408666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:15:57 compute-0 sudo[408666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:15:57 compute-0 sudo[408666]: pam_unix(sudo:session): session closed for user root
Nov 25 17:15:58 compute-0 ceph-mon[74985]: pgmap v2772: 321 pgs: 321 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Nov 25 17:15:58 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:15:58 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:15:58 compute-0 sudo[408691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:15:58 compute-0 sudo[408691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:15:58 compute-0 sudo[408691]: pam_unix(sudo:session): session closed for user root
Nov 25 17:15:58 compute-0 sudo[408716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:15:58 compute-0 sudo[408716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:15:58 compute-0 sudo[408716]: pam_unix(sudo:session): session closed for user root
Nov 25 17:15:58 compute-0 sudo[408741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:15:58 compute-0 sudo[408741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:15:58 compute-0 sudo[408741]: pam_unix(sudo:session): session closed for user root
Nov 25 17:15:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:15:58 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:15:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:15:58 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:15:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:15:58 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:15:58 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev e4077344-3c54-46d9-a3d3-b7164ede7bbd does not exist
Nov 25 17:15:58 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 11ad4fc0-db03-4d58-8850-934fe2209a13 does not exist
Nov 25 17:15:58 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev cbd18275-97f6-4472-beea-6b7746c180a3 does not exist
Nov 25 17:15:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:15:58 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:15:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:15:58 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:15:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:15:58 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:15:58 compute-0 sudo[408798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:15:58 compute-0 sudo[408798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:15:58 compute-0 sudo[408798]: pam_unix(sudo:session): session closed for user root
Nov 25 17:15:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2773: 321 pgs: 321 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 7.6 KiB/s rd, 12 KiB/s wr, 10 op/s
Nov 25 17:15:58 compute-0 sudo[408823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:15:58 compute-0 sudo[408823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:15:58 compute-0 sudo[408823]: pam_unix(sudo:session): session closed for user root
Nov 25 17:15:59 compute-0 sudo[408848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:15:59 compute-0 sudo[408848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:15:59 compute-0 sudo[408848]: pam_unix(sudo:session): session closed for user root
Nov 25 17:15:59 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:15:59 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:15:59 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:15:59 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:15:59 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:15:59 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:15:59 compute-0 sudo[408873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:15:59 compute-0 sudo[408873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:15:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:15:59 compute-0 podman[408939]: 2025-11-25 17:15:59.4880282 +0000 UTC m=+0.080131122 container create 142b241fcb70b721e1e9e07ed3914d5cd86c2b218f4327ca45c28b15c5950f0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 17:15:59 compute-0 podman[408939]: 2025-11-25 17:15:59.439345875 +0000 UTC m=+0.031448817 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:15:59 compute-0 systemd[1]: Started libpod-conmon-142b241fcb70b721e1e9e07ed3914d5cd86c2b218f4327ca45c28b15c5950f0b.scope.
Nov 25 17:15:59 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:15:59 compute-0 nova_compute[254092]: 2025-11-25 17:15:59.615 254096 DEBUG nova.compute.manager [req-49b259c5-8278-402c-acd8-05e4839f8a65 req-232ca9fb-4440-4940-8d6e-596f76625611 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Received event network-changed-d4fd6164-a382-44de-8709-0a3941640a9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:15:59 compute-0 nova_compute[254092]: 2025-11-25 17:15:59.617 254096 DEBUG nova.compute.manager [req-49b259c5-8278-402c-acd8-05e4839f8a65 req-232ca9fb-4440-4940-8d6e-596f76625611 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Refreshing instance network info cache due to event network-changed-d4fd6164-a382-44de-8709-0a3941640a9d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:15:59 compute-0 nova_compute[254092]: 2025-11-25 17:15:59.617 254096 DEBUG oslo_concurrency.lockutils [req-49b259c5-8278-402c-acd8-05e4839f8a65 req-232ca9fb-4440-4940-8d6e-596f76625611 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-f0fce250-5e4a-4063-a0c9-a2285f68c22e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:15:59 compute-0 nova_compute[254092]: 2025-11-25 17:15:59.617 254096 DEBUG oslo_concurrency.lockutils [req-49b259c5-8278-402c-acd8-05e4839f8a65 req-232ca9fb-4440-4940-8d6e-596f76625611 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-f0fce250-5e4a-4063-a0c9-a2285f68c22e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:15:59 compute-0 nova_compute[254092]: 2025-11-25 17:15:59.617 254096 DEBUG nova.network.neutron [req-49b259c5-8278-402c-acd8-05e4839f8a65 req-232ca9fb-4440-4940-8d6e-596f76625611 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Refreshing network info cache for port d4fd6164-a382-44de-8709-0a3941640a9d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:15:59 compute-0 podman[408939]: 2025-11-25 17:15:59.634831036 +0000 UTC m=+0.226933978 container init 142b241fcb70b721e1e9e07ed3914d5cd86c2b218f4327ca45c28b15c5950f0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:15:59 compute-0 podman[408939]: 2025-11-25 17:15:59.643887012 +0000 UTC m=+0.235989934 container start 142b241fcb70b721e1e9e07ed3914d5cd86c2b218f4327ca45c28b15c5950f0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mccarthy, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:15:59 compute-0 compassionate_mccarthy[408955]: 167 167
Nov 25 17:15:59 compute-0 systemd[1]: libpod-142b241fcb70b721e1e9e07ed3914d5cd86c2b218f4327ca45c28b15c5950f0b.scope: Deactivated successfully.
Nov 25 17:15:59 compute-0 podman[408939]: 2025-11-25 17:15:59.654786199 +0000 UTC m=+0.246889091 container attach 142b241fcb70b721e1e9e07ed3914d5cd86c2b218f4327ca45c28b15c5950f0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:15:59 compute-0 podman[408939]: 2025-11-25 17:15:59.655391296 +0000 UTC m=+0.247494188 container died 142b241fcb70b721e1e9e07ed3914d5cd86c2b218f4327ca45c28b15c5950f0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mccarthy, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 17:15:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-883aa7876024061a26bf8dcfb407ee94c6935f3924593790ec45e7ec46897343-merged.mount: Deactivated successfully.
Nov 25 17:15:59 compute-0 podman[408939]: 2025-11-25 17:15:59.769238535 +0000 UTC m=+0.361341447 container remove 142b241fcb70b721e1e9e07ed3914d5cd86c2b218f4327ca45c28b15c5950f0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 17:15:59 compute-0 systemd[1]: libpod-conmon-142b241fcb70b721e1e9e07ed3914d5cd86c2b218f4327ca45c28b15c5950f0b.scope: Deactivated successfully.
Nov 25 17:15:59 compute-0 podman[408979]: 2025-11-25 17:15:59.98949891 +0000 UTC m=+0.061125205 container create 1533089b003290db1177db77a26bd03f7a26240ecc23c0806e3592c393d7f819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:16:00 compute-0 systemd[1]: Started libpod-conmon-1533089b003290db1177db77a26bd03f7a26240ecc23c0806e3592c393d7f819.scope.
Nov 25 17:16:00 compute-0 podman[408979]: 2025-11-25 17:15:59.953049298 +0000 UTC m=+0.024675613 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:16:00 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:16:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/573c03db46ef3c85f6a2c839d2703d08da4a637f0c4d34939a7a384febeea547/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:16:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/573c03db46ef3c85f6a2c839d2703d08da4a637f0c4d34939a7a384febeea547/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:16:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/573c03db46ef3c85f6a2c839d2703d08da4a637f0c4d34939a7a384febeea547/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:16:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/573c03db46ef3c85f6a2c839d2703d08da4a637f0c4d34939a7a384febeea547/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:16:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/573c03db46ef3c85f6a2c839d2703d08da4a637f0c4d34939a7a384febeea547/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:16:00 compute-0 ceph-mon[74985]: pgmap v2773: 321 pgs: 321 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 7.6 KiB/s rd, 12 KiB/s wr, 10 op/s
Nov 25 17:16:00 compute-0 podman[408979]: 2025-11-25 17:16:00.08978682 +0000 UTC m=+0.161413295 container init 1533089b003290db1177db77a26bd03f7a26240ecc23c0806e3592c393d7f819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_nightingale, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:16:00 compute-0 podman[408979]: 2025-11-25 17:16:00.097831089 +0000 UTC m=+0.169457384 container start 1533089b003290db1177db77a26bd03f7a26240ecc23c0806e3592c393d7f819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_nightingale, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:16:00 compute-0 podman[408979]: 2025-11-25 17:16:00.104028878 +0000 UTC m=+0.175655193 container attach 1533089b003290db1177db77a26bd03f7a26240ecc23c0806e3592c393d7f819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 17:16:00 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:00.618 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '54'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:16:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2774: 321 pgs: 321 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 12 KiB/s wr, 60 op/s
Nov 25 17:16:00 compute-0 nova_compute[254092]: 2025-11-25 17:16:00.966 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:01 compute-0 intelligent_nightingale[408995]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:16:01 compute-0 intelligent_nightingale[408995]: --> relative data size: 1.0
Nov 25 17:16:01 compute-0 intelligent_nightingale[408995]: --> All data devices are unavailable
Nov 25 17:16:01 compute-0 systemd[1]: libpod-1533089b003290db1177db77a26bd03f7a26240ecc23c0806e3592c393d7f819.scope: Deactivated successfully.
Nov 25 17:16:01 compute-0 systemd[1]: libpod-1533089b003290db1177db77a26bd03f7a26240ecc23c0806e3592c393d7f819.scope: Consumed 1.005s CPU time.
Nov 25 17:16:01 compute-0 podman[408979]: 2025-11-25 17:16:01.178629207 +0000 UTC m=+1.250255502 container died 1533089b003290db1177db77a26bd03f7a26240ecc23c0806e3592c393d7f819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 17:16:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-573c03db46ef3c85f6a2c839d2703d08da4a637f0c4d34939a7a384febeea547-merged.mount: Deactivated successfully.
Nov 25 17:16:01 compute-0 podman[408979]: 2025-11-25 17:16:01.341217332 +0000 UTC m=+1.412843637 container remove 1533089b003290db1177db77a26bd03f7a26240ecc23c0806e3592c393d7f819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_nightingale, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 17:16:01 compute-0 systemd[1]: libpod-conmon-1533089b003290db1177db77a26bd03f7a26240ecc23c0806e3592c393d7f819.scope: Deactivated successfully.
Nov 25 17:16:01 compute-0 sudo[408873]: pam_unix(sudo:session): session closed for user root
Nov 25 17:16:01 compute-0 sudo[409039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:16:01 compute-0 sudo[409039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:16:01 compute-0 sudo[409039]: pam_unix(sudo:session): session closed for user root
Nov 25 17:16:01 compute-0 sudo[409064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:16:01 compute-0 sudo[409064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:16:01 compute-0 sudo[409064]: pam_unix(sudo:session): session closed for user root
Nov 25 17:16:01 compute-0 sudo[409089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:16:01 compute-0 sudo[409089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:16:01 compute-0 sudo[409089]: pam_unix(sudo:session): session closed for user root
Nov 25 17:16:01 compute-0 sudo[409114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:16:01 compute-0 sudo[409114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:16:01 compute-0 podman[409139]: 2025-11-25 17:16:01.779531393 +0000 UTC m=+0.055808460 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 17:16:01 compute-0 podman[409138]: 2025-11-25 17:16:01.817583989 +0000 UTC m=+0.094216555 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:16:01 compute-0 podman[409174]: 2025-11-25 17:16:01.901118753 +0000 UTC m=+0.081354636 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 25 17:16:02 compute-0 podman[409235]: 2025-11-25 17:16:02.070704469 +0000 UTC m=+0.045168600 container create 263db5522ff100220b8a79de0e74fa26f83e4ee8cb7fd74fe3cb44592656c188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_brahmagupta, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:16:02 compute-0 systemd[1]: Started libpod-conmon-263db5522ff100220b8a79de0e74fa26f83e4ee8cb7fd74fe3cb44592656c188.scope.
Nov 25 17:16:02 compute-0 ceph-mon[74985]: pgmap v2774: 321 pgs: 321 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 12 KiB/s wr, 60 op/s
Nov 25 17:16:02 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:16:02 compute-0 podman[409235]: 2025-11-25 17:16:02.049964474 +0000 UTC m=+0.024428625 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:16:02 compute-0 podman[409235]: 2025-11-25 17:16:02.194999782 +0000 UTC m=+0.169463933 container init 263db5522ff100220b8a79de0e74fa26f83e4ee8cb7fd74fe3cb44592656c188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_brahmagupta, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 17:16:02 compute-0 podman[409235]: 2025-11-25 17:16:02.200986485 +0000 UTC m=+0.175450616 container start 263db5522ff100220b8a79de0e74fa26f83e4ee8cb7fd74fe3cb44592656c188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_brahmagupta, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 17:16:02 compute-0 affectionate_brahmagupta[409251]: 167 167
Nov 25 17:16:02 compute-0 systemd[1]: libpod-263db5522ff100220b8a79de0e74fa26f83e4ee8cb7fd74fe3cb44592656c188.scope: Deactivated successfully.
Nov 25 17:16:02 compute-0 podman[409235]: 2025-11-25 17:16:02.231324201 +0000 UTC m=+0.205788372 container attach 263db5522ff100220b8a79de0e74fa26f83e4ee8cb7fd74fe3cb44592656c188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_brahmagupta, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:16:02 compute-0 podman[409235]: 2025-11-25 17:16:02.231875795 +0000 UTC m=+0.206339966 container died 263db5522ff100220b8a79de0e74fa26f83e4ee8cb7fd74fe3cb44592656c188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_brahmagupta, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:16:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-be1343b10cae17f6bda8c2b2aed07fb46cda7a844c5b96ac54f4314b9a148261-merged.mount: Deactivated successfully.
Nov 25 17:16:02 compute-0 nova_compute[254092]: 2025-11-25 17:16:02.457 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:02 compute-0 podman[409235]: 2025-11-25 17:16:02.497719622 +0000 UTC m=+0.472183763 container remove 263db5522ff100220b8a79de0e74fa26f83e4ee8cb7fd74fe3cb44592656c188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_brahmagupta, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 17:16:02 compute-0 systemd[1]: libpod-conmon-263db5522ff100220b8a79de0e74fa26f83e4ee8cb7fd74fe3cb44592656c188.scope: Deactivated successfully.
Nov 25 17:16:02 compute-0 podman[409277]: 2025-11-25 17:16:02.731823634 +0000 UTC m=+0.083260697 container create 939d57ffa20ca80819d92dbbcf4b3c8e738232cb1995ad2637d7d747eab612ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 17:16:02 compute-0 podman[409277]: 2025-11-25 17:16:02.683573341 +0000 UTC m=+0.035010424 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:16:02 compute-0 systemd[1]: Started libpod-conmon-939d57ffa20ca80819d92dbbcf4b3c8e738232cb1995ad2637d7d747eab612ee.scope.
Nov 25 17:16:02 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:16:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/041399d4162461a1e89a848638b637147d9b5babb6eb9acd2a58b1e40435b3f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:16:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/041399d4162461a1e89a848638b637147d9b5babb6eb9acd2a58b1e40435b3f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:16:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/041399d4162461a1e89a848638b637147d9b5babb6eb9acd2a58b1e40435b3f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:16:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/041399d4162461a1e89a848638b637147d9b5babb6eb9acd2a58b1e40435b3f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:16:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2775: 321 pgs: 321 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:16:03 compute-0 podman[409277]: 2025-11-25 17:16:03.169197489 +0000 UTC m=+0.520634672 container init 939d57ffa20ca80819d92dbbcf4b3c8e738232cb1995ad2637d7d747eab612ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 17:16:03 compute-0 podman[409277]: 2025-11-25 17:16:03.183734344 +0000 UTC m=+0.535171417 container start 939d57ffa20ca80819d92dbbcf4b3c8e738232cb1995ad2637d7d747eab612ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feynman, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:16:03 compute-0 podman[409277]: 2025-11-25 17:16:03.254823879 +0000 UTC m=+0.606260982 container attach 939d57ffa20ca80819d92dbbcf4b3c8e738232cb1995ad2637d7d747eab612ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feynman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 17:16:03 compute-0 nova_compute[254092]: 2025-11-25 17:16:03.916 254096 DEBUG nova.network.neutron [req-49b259c5-8278-402c-acd8-05e4839f8a65 req-232ca9fb-4440-4940-8d6e-596f76625611 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Updated VIF entry in instance network info cache for port d4fd6164-a382-44de-8709-0a3941640a9d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:16:03 compute-0 nova_compute[254092]: 2025-11-25 17:16:03.920 254096 DEBUG nova.network.neutron [req-49b259c5-8278-402c-acd8-05e4839f8a65 req-232ca9fb-4440-4940-8d6e-596f76625611 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Updating instance_info_cache with network_info: [{"id": "d4fd6164-a382-44de-8709-0a3941640a9d", "address": "fa:16:3e:e8:fe:9c", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4fd6164-a3", "ovs_interfaceid": "d4fd6164-a382-44de-8709-0a3941640a9d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5cfdc6b4-a091-429f-a33f-cc941535c221", "address": "fa:16:3e:a7:61:5b", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea7:615b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fea7:615b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cfdc6b4-a0", "ovs_interfaceid": "5cfdc6b4-a091-429f-a33f-cc941535c221", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:16:03 compute-0 elastic_feynman[409293]: {
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:     "0": [
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:         {
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:             "devices": [
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:                 "/dev/loop3"
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:             ],
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:             "lv_name": "ceph_lv0",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:             "lv_size": "21470642176",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:             "name": "ceph_lv0",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:             "tags": {
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:                 "ceph.cluster_name": "ceph",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:                 "ceph.crush_device_class": "",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:                 "ceph.encrypted": "0",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:                 "ceph.osd_id": "0",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:                 "ceph.type": "block",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:                 "ceph.vdo": "0"
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:             },
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:             "type": "block",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:             "vg_name": "ceph_vg0"
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:         }
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:     ],
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:     "1": [
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:         {
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:             "devices": [
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:                 "/dev/loop4"
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:             ],
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:             "lv_name": "ceph_lv1",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:             "lv_size": "21470642176",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:             "name": "ceph_lv1",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:             "tags": {
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:                 "ceph.cluster_name": "ceph",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:                 "ceph.crush_device_class": "",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:                 "ceph.encrypted": "0",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:                 "ceph.osd_id": "1",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:                 "ceph.type": "block",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:                 "ceph.vdo": "0"
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:             },
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:             "type": "block",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:             "vg_name": "ceph_vg1"
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:         }
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:     ],
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:     "2": [
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:         {
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:             "devices": [
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:                 "/dev/loop5"
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:             ],
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:             "lv_name": "ceph_lv2",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:             "lv_size": "21470642176",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:             "name": "ceph_lv2",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:             "tags": {
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:                 "ceph.cluster_name": "ceph",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:                 "ceph.crush_device_class": "",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:                 "ceph.encrypted": "0",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:                 "ceph.osd_id": "2",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:                 "ceph.type": "block",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:                 "ceph.vdo": "0"
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:             },
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:             "type": "block",
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:             "vg_name": "ceph_vg2"
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:         }
Nov 25 17:16:03 compute-0 elastic_feynman[409293]:     ]
Nov 25 17:16:03 compute-0 elastic_feynman[409293]: }
Nov 25 17:16:04 compute-0 systemd[1]: libpod-939d57ffa20ca80819d92dbbcf4b3c8e738232cb1995ad2637d7d747eab612ee.scope: Deactivated successfully.
Nov 25 17:16:04 compute-0 podman[409277]: 2025-11-25 17:16:04.034157833 +0000 UTC m=+1.385594936 container died 939d57ffa20ca80819d92dbbcf4b3c8e738232cb1995ad2637d7d747eab612ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feynman, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:16:04 compute-0 nova_compute[254092]: 2025-11-25 17:16:04.069 254096 DEBUG oslo_concurrency.lockutils [req-49b259c5-8278-402c-acd8-05e4839f8a65 req-232ca9fb-4440-4940-8d6e-596f76625611 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-f0fce250-5e4a-4063-a0c9-a2285f68c22e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:16:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:16:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-041399d4162461a1e89a848638b637147d9b5babb6eb9acd2a58b1e40435b3f7-merged.mount: Deactivated successfully.
Nov 25 17:16:04 compute-0 ceph-mon[74985]: pgmap v2775: 321 pgs: 321 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:16:04 compute-0 podman[409277]: 2025-11-25 17:16:04.396502825 +0000 UTC m=+1.747939898 container remove 939d57ffa20ca80819d92dbbcf4b3c8e738232cb1995ad2637d7d747eab612ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:16:04 compute-0 systemd[1]: libpod-conmon-939d57ffa20ca80819d92dbbcf4b3c8e738232cb1995ad2637d7d747eab612ee.scope: Deactivated successfully.
Nov 25 17:16:04 compute-0 sudo[409114]: pam_unix(sudo:session): session closed for user root
Nov 25 17:16:04 compute-0 sudo[409314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:16:04 compute-0 sudo[409314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:16:04 compute-0 sudo[409314]: pam_unix(sudo:session): session closed for user root
Nov 25 17:16:04 compute-0 sudo[409339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:16:04 compute-0 sudo[409339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:16:04 compute-0 sudo[409339]: pam_unix(sudo:session): session closed for user root
Nov 25 17:16:04 compute-0 sudo[409364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:16:04 compute-0 sudo[409364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:16:04 compute-0 sudo[409364]: pam_unix(sudo:session): session closed for user root
Nov 25 17:16:04 compute-0 sudo[409389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:16:04 compute-0 sudo[409389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:16:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2776: 321 pgs: 321 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:16:05 compute-0 podman[409456]: 2025-11-25 17:16:05.24645684 +0000 UTC m=+0.029394191 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:16:05 compute-0 podman[409456]: 2025-11-25 17:16:05.393713758 +0000 UTC m=+0.176651089 container create 54351cef21122dffd04cd6b1264faf9fd1cae0b8c192a0cc622a40372b2bb368 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_moore, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:16:05 compute-0 systemd[1]: Started libpod-conmon-54351cef21122dffd04cd6b1264faf9fd1cae0b8c192a0cc622a40372b2bb368.scope.
Nov 25 17:16:05 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:16:05 compute-0 podman[409456]: 2025-11-25 17:16:05.845927837 +0000 UTC m=+0.628865168 container init 54351cef21122dffd04cd6b1264faf9fd1cae0b8c192a0cc622a40372b2bb368 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_moore, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 17:16:05 compute-0 podman[409456]: 2025-11-25 17:16:05.853119593 +0000 UTC m=+0.636056954 container start 54351cef21122dffd04cd6b1264faf9fd1cae0b8c192a0cc622a40372b2bb368 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:16:05 compute-0 wonderful_moore[409473]: 167 167
Nov 25 17:16:05 compute-0 systemd[1]: libpod-54351cef21122dffd04cd6b1264faf9fd1cae0b8c192a0cc622a40372b2bb368.scope: Deactivated successfully.
Nov 25 17:16:05 compute-0 nova_compute[254092]: 2025-11-25 17:16:05.969 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:06 compute-0 podman[409456]: 2025-11-25 17:16:06.082280691 +0000 UTC m=+0.865218062 container attach 54351cef21122dffd04cd6b1264faf9fd1cae0b8c192a0cc622a40372b2bb368 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_moore, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:16:06 compute-0 podman[409456]: 2025-11-25 17:16:06.084016048 +0000 UTC m=+0.866953429 container died 54351cef21122dffd04cd6b1264faf9fd1cae0b8c192a0cc622a40372b2bb368 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_moore, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:16:06 compute-0 ceph-mon[74985]: pgmap v2776: 321 pgs: 321 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:16:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d7b4b84e971e696e208841c13b537eb075e7416c7664951de116f52c2f13d52-merged.mount: Deactivated successfully.
Nov 25 17:16:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2777: 321 pgs: 321 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 86 op/s
Nov 25 17:16:07 compute-0 podman[409456]: 2025-11-25 17:16:07.25240567 +0000 UTC m=+2.035343021 container remove 54351cef21122dffd04cd6b1264faf9fd1cae0b8c192a0cc622a40372b2bb368 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 17:16:07 compute-0 systemd[1]: libpod-conmon-54351cef21122dffd04cd6b1264faf9fd1cae0b8c192a0cc622a40372b2bb368.scope: Deactivated successfully.
Nov 25 17:16:07 compute-0 nova_compute[254092]: 2025-11-25 17:16:07.459 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:07 compute-0 podman[409499]: 2025-11-25 17:16:07.443947284 +0000 UTC m=+0.029656578 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:16:07 compute-0 podman[409499]: 2025-11-25 17:16:07.666010268 +0000 UTC m=+0.251719542 container create d9fb8c72e24ee5dca285dd72da88c683fc04dc1e4f25d0b6447644f2e197453b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 17:16:07 compute-0 ceph-mon[74985]: pgmap v2777: 321 pgs: 321 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 86 op/s
Nov 25 17:16:07 compute-0 systemd[1]: Started libpod-conmon-d9fb8c72e24ee5dca285dd72da88c683fc04dc1e4f25d0b6447644f2e197453b.scope.
Nov 25 17:16:07 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:16:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c12a5a1554f911c876227b067000b1ccb78c17b8e4d960e8d83e7cca26a66e5e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:16:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c12a5a1554f911c876227b067000b1ccb78c17b8e4d960e8d83e7cca26a66e5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:16:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c12a5a1554f911c876227b067000b1ccb78c17b8e4d960e8d83e7cca26a66e5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:16:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c12a5a1554f911c876227b067000b1ccb78c17b8e4d960e8d83e7cca26a66e5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:16:08 compute-0 podman[409499]: 2025-11-25 17:16:08.045632202 +0000 UTC m=+0.631341496 container init d9fb8c72e24ee5dca285dd72da88c683fc04dc1e4f25d0b6447644f2e197453b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 17:16:08 compute-0 podman[409499]: 2025-11-25 17:16:08.054863483 +0000 UTC m=+0.640572757 container start d9fb8c72e24ee5dca285dd72da88c683fc04dc1e4f25d0b6447644f2e197453b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wescoff, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 17:16:08 compute-0 podman[409499]: 2025-11-25 17:16:08.147513114 +0000 UTC m=+0.733222499 container attach d9fb8c72e24ee5dca285dd72da88c683fc04dc1e4f25d0b6447644f2e197453b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wescoff, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:16:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2778: 321 pgs: 321 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 KiB/s wr, 76 op/s
Nov 25 17:16:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:16:09 compute-0 flamboyant_wescoff[409516]: {
Nov 25 17:16:09 compute-0 flamboyant_wescoff[409516]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:16:09 compute-0 flamboyant_wescoff[409516]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:16:09 compute-0 flamboyant_wescoff[409516]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:16:09 compute-0 flamboyant_wescoff[409516]:         "osd_id": 1,
Nov 25 17:16:09 compute-0 flamboyant_wescoff[409516]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:16:09 compute-0 flamboyant_wescoff[409516]:         "type": "bluestore"
Nov 25 17:16:09 compute-0 flamboyant_wescoff[409516]:     },
Nov 25 17:16:09 compute-0 flamboyant_wescoff[409516]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:16:09 compute-0 flamboyant_wescoff[409516]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:16:09 compute-0 flamboyant_wescoff[409516]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:16:09 compute-0 flamboyant_wescoff[409516]:         "osd_id": 2,
Nov 25 17:16:09 compute-0 flamboyant_wescoff[409516]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:16:09 compute-0 flamboyant_wescoff[409516]:         "type": "bluestore"
Nov 25 17:16:09 compute-0 flamboyant_wescoff[409516]:     },
Nov 25 17:16:09 compute-0 flamboyant_wescoff[409516]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:16:09 compute-0 flamboyant_wescoff[409516]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:16:09 compute-0 flamboyant_wescoff[409516]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:16:09 compute-0 flamboyant_wescoff[409516]:         "osd_id": 0,
Nov 25 17:16:09 compute-0 flamboyant_wescoff[409516]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:16:09 compute-0 flamboyant_wescoff[409516]:         "type": "bluestore"
Nov 25 17:16:09 compute-0 flamboyant_wescoff[409516]:     }
Nov 25 17:16:09 compute-0 flamboyant_wescoff[409516]: }
Nov 25 17:16:09 compute-0 systemd[1]: libpod-d9fb8c72e24ee5dca285dd72da88c683fc04dc1e4f25d0b6447644f2e197453b.scope: Deactivated successfully.
Nov 25 17:16:09 compute-0 systemd[1]: libpod-d9fb8c72e24ee5dca285dd72da88c683fc04dc1e4f25d0b6447644f2e197453b.scope: Consumed 1.094s CPU time.
Nov 25 17:16:09 compute-0 podman[409499]: 2025-11-25 17:16:09.156983541 +0000 UTC m=+1.742692815 container died d9fb8c72e24ee5dca285dd72da88c683fc04dc1e4f25d0b6447644f2e197453b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wescoff, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:16:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-c12a5a1554f911c876227b067000b1ccb78c17b8e4d960e8d83e7cca26a66e5e-merged.mount: Deactivated successfully.
Nov 25 17:16:09 compute-0 podman[409499]: 2025-11-25 17:16:09.365424315 +0000 UTC m=+1.951133599 container remove d9fb8c72e24ee5dca285dd72da88c683fc04dc1e4f25d0b6447644f2e197453b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wescoff, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 17:16:09 compute-0 systemd[1]: libpod-conmon-d9fb8c72e24ee5dca285dd72da88c683fc04dc1e4f25d0b6447644f2e197453b.scope: Deactivated successfully.
Nov 25 17:16:09 compute-0 sudo[409389]: pam_unix(sudo:session): session closed for user root
Nov 25 17:16:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:16:09 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:16:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:16:09 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:16:09 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev bb880e81-0749-4248-9eb5-d332f15d340d does not exist
Nov 25 17:16:09 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 9c9c7613-c82a-4460-8405-c3541c3f2abc does not exist
Nov 25 17:16:09 compute-0 sudo[409565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:16:09 compute-0 sudo[409565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:16:09 compute-0 sudo[409565]: pam_unix(sudo:session): session closed for user root
Nov 25 17:16:09 compute-0 sudo[409590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:16:09 compute-0 sudo[409590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:16:09 compute-0 sudo[409590]: pam_unix(sudo:session): session closed for user root
Nov 25 17:16:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:16:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:16:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:16:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:16:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:16:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:16:10 compute-0 ovn_controller[153477]: 2025-11-25T17:16:10Z|00176|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e8:fe:9c 10.100.0.8
Nov 25 17:16:10 compute-0 ovn_controller[153477]: 2025-11-25T17:16:10Z|00177|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e8:fe:9c 10.100.0.8
Nov 25 17:16:10 compute-0 ceph-mon[74985]: pgmap v2778: 321 pgs: 321 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 KiB/s wr, 76 op/s
Nov 25 17:16:10 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:16:10 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:16:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2779: 321 pgs: 321 active+clean; 180 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 111 op/s
Nov 25 17:16:10 compute-0 nova_compute[254092]: 2025-11-25 17:16:10.971 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:11 compute-0 ceph-mon[74985]: pgmap v2779: 321 pgs: 321 active+clean; 180 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 111 op/s
Nov 25 17:16:12 compute-0 nova_compute[254092]: 2025-11-25 17:16:12.462 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2780: 321 pgs: 321 active+clean; 193 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 723 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Nov 25 17:16:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:13.654 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:16:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:13.654 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:16:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:13.655 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:16:14 compute-0 ceph-mon[74985]: pgmap v2780: 321 pgs: 321 active+clean; 193 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 723 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Nov 25 17:16:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:16:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2781: 321 pgs: 321 active+clean; 193 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 292 KiB/s rd, 2.1 MiB/s wr, 77 op/s
Nov 25 17:16:15 compute-0 nova_compute[254092]: 2025-11-25 17:16:15.973 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:16 compute-0 ceph-mon[74985]: pgmap v2781: 321 pgs: 321 active+clean; 193 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 292 KiB/s rd, 2.1 MiB/s wr, 77 op/s
Nov 25 17:16:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2782: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 401 KiB/s rd, 2.1 MiB/s wr, 123 op/s
Nov 25 17:16:17 compute-0 nova_compute[254092]: 2025-11-25 17:16:17.464 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:18 compute-0 ceph-mon[74985]: pgmap v2782: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 401 KiB/s rd, 2.1 MiB/s wr, 123 op/s
Nov 25 17:16:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2783: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 393 KiB/s rd, 2.1 MiB/s wr, 110 op/s
Nov 25 17:16:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:16:20 compute-0 ceph-mon[74985]: pgmap v2783: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 393 KiB/s rd, 2.1 MiB/s wr, 110 op/s
Nov 25 17:16:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2784: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 393 KiB/s rd, 2.1 MiB/s wr, 110 op/s
Nov 25 17:16:20 compute-0 nova_compute[254092]: 2025-11-25 17:16:20.974 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:22 compute-0 ceph-mon[74985]: pgmap v2784: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 393 KiB/s rd, 2.1 MiB/s wr, 110 op/s
Nov 25 17:16:22 compute-0 nova_compute[254092]: 2025-11-25 17:16:22.466 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2785: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 289 KiB/s rd, 827 KiB/s wr, 76 op/s
Nov 25 17:16:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:16:24 compute-0 ceph-mon[74985]: pgmap v2785: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 289 KiB/s rd, 827 KiB/s wr, 76 op/s
Nov 25 17:16:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2786: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 108 KiB/s rd, 83 KiB/s wr, 47 op/s
Nov 25 17:16:25 compute-0 nova_compute[254092]: 2025-11-25 17:16:25.976 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:26 compute-0 ceph-mon[74985]: pgmap v2786: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 108 KiB/s rd, 83 KiB/s wr, 47 op/s
Nov 25 17:16:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2787: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 108 KiB/s rd, 83 KiB/s wr, 47 op/s
Nov 25 17:16:27 compute-0 nova_compute[254092]: 2025-11-25 17:16:27.468 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:27 compute-0 ceph-mon[74985]: pgmap v2787: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 108 KiB/s rd, 83 KiB/s wr, 47 op/s
Nov 25 17:16:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2788: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 21 KiB/s wr, 1 op/s
Nov 25 17:16:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:16:29 compute-0 ovn_controller[153477]: 2025-11-25T17:16:29Z|01482|memory_trim|INFO|Detected inactivity (last active 30017 ms ago): trimming memory
Nov 25 17:16:29 compute-0 ceph-mon[74985]: pgmap v2788: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 21 KiB/s wr, 1 op/s
Nov 25 17:16:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2789: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 22 KiB/s wr, 1 op/s
Nov 25 17:16:31 compute-0 nova_compute[254092]: 2025-11-25 17:16:31.022 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:32 compute-0 ceph-mon[74985]: pgmap v2789: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 22 KiB/s wr, 1 op/s
Nov 25 17:16:32 compute-0 nova_compute[254092]: 2025-11-25 17:16:32.469 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:32 compute-0 podman[409615]: 2025-11-25 17:16:32.664531741 +0000 UTC m=+0.080729549 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 17:16:32 compute-0 podman[409616]: 2025-11-25 17:16:32.701148207 +0000 UTC m=+0.108415002 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 17:16:32 compute-0 podman[409617]: 2025-11-25 17:16:32.728597945 +0000 UTC m=+0.133215498 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 17:16:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2790: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 10 KiB/s wr, 1 op/s
Nov 25 17:16:34 compute-0 ceph-mon[74985]: pgmap v2790: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 10 KiB/s wr, 1 op/s
Nov 25 17:16:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:16:34 compute-0 nova_compute[254092]: 2025-11-25 17:16:34.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:16:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2791: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Nov 25 17:16:35 compute-0 nova_compute[254092]: 2025-11-25 17:16:35.061 254096 DEBUG nova.compute.manager [req-8acd3e17-030e-45db-bb86-038a7379090a req-b223e6a1-05c0-4c25-b0e6-182e7d1891d0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Received event network-changed-d4fd6164-a382-44de-8709-0a3941640a9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:16:35 compute-0 nova_compute[254092]: 2025-11-25 17:16:35.062 254096 DEBUG nova.compute.manager [req-8acd3e17-030e-45db-bb86-038a7379090a req-b223e6a1-05c0-4c25-b0e6-182e7d1891d0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Refreshing instance network info cache due to event network-changed-d4fd6164-a382-44de-8709-0a3941640a9d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:16:35 compute-0 nova_compute[254092]: 2025-11-25 17:16:35.063 254096 DEBUG oslo_concurrency.lockutils [req-8acd3e17-030e-45db-bb86-038a7379090a req-b223e6a1-05c0-4c25-b0e6-182e7d1891d0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-f0fce250-5e4a-4063-a0c9-a2285f68c22e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:16:35 compute-0 nova_compute[254092]: 2025-11-25 17:16:35.063 254096 DEBUG oslo_concurrency.lockutils [req-8acd3e17-030e-45db-bb86-038a7379090a req-b223e6a1-05c0-4c25-b0e6-182e7d1891d0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-f0fce250-5e4a-4063-a0c9-a2285f68c22e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:16:35 compute-0 nova_compute[254092]: 2025-11-25 17:16:35.063 254096 DEBUG nova.network.neutron [req-8acd3e17-030e-45db-bb86-038a7379090a req-b223e6a1-05c0-4c25-b0e6-182e7d1891d0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Refreshing network info cache for port d4fd6164-a382-44de-8709-0a3941640a9d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:16:35 compute-0 nova_compute[254092]: 2025-11-25 17:16:35.167 254096 DEBUG oslo_concurrency.lockutils [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:16:35 compute-0 nova_compute[254092]: 2025-11-25 17:16:35.169 254096 DEBUG oslo_concurrency.lockutils [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:16:35 compute-0 nova_compute[254092]: 2025-11-25 17:16:35.170 254096 DEBUG oslo_concurrency.lockutils [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:16:35 compute-0 nova_compute[254092]: 2025-11-25 17:16:35.170 254096 DEBUG oslo_concurrency.lockutils [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:16:35 compute-0 nova_compute[254092]: 2025-11-25 17:16:35.170 254096 DEBUG oslo_concurrency.lockutils [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:16:35 compute-0 nova_compute[254092]: 2025-11-25 17:16:35.171 254096 INFO nova.compute.manager [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Terminating instance
Nov 25 17:16:35 compute-0 nova_compute[254092]: 2025-11-25 17:16:35.172 254096 DEBUG nova.compute.manager [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 17:16:35 compute-0 kernel: tapd4fd6164-a3 (unregistering): left promiscuous mode
Nov 25 17:16:35 compute-0 NetworkManager[48891]: <info>  [1764090995.3791] device (tapd4fd6164-a3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:16:35 compute-0 ovn_controller[153477]: 2025-11-25T17:16:35Z|01483|binding|INFO|Releasing lport d4fd6164-a382-44de-8709-0a3941640a9d from this chassis (sb_readonly=0)
Nov 25 17:16:35 compute-0 ovn_controller[153477]: 2025-11-25T17:16:35Z|01484|binding|INFO|Setting lport d4fd6164-a382-44de-8709-0a3941640a9d down in Southbound
Nov 25 17:16:35 compute-0 nova_compute[254092]: 2025-11-25 17:16:35.394 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:35 compute-0 ovn_controller[153477]: 2025-11-25T17:16:35Z|01485|binding|INFO|Removing iface tapd4fd6164-a3 ovn-installed in OVS
Nov 25 17:16:35 compute-0 nova_compute[254092]: 2025-11-25 17:16:35.400 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:35 compute-0 nova_compute[254092]: 2025-11-25 17:16:35.411 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:35 compute-0 kernel: tap5cfdc6b4-a0 (unregistering): left promiscuous mode
Nov 25 17:16:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.424 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:fe:9c 10.100.0.8'], port_security=['fa:16:3e:e8:fe:9c 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'f0fce250-5e4a-4063-a0c9-a2285f68c22e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8c1b4538-7e1d-41aa-8e91-8a97df87ce48', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': '27a39082-c6dd-4238-9595-c012a711382c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1bd6efb3-458b-448a-a74a-463b74973f4e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=d4fd6164-a382-44de-8709-0a3941640a9d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:16:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.425 163338 INFO neutron.agent.ovn.metadata.agent [-] Port d4fd6164-a382-44de-8709-0a3941640a9d in datapath 8c1b4538-7e1d-41aa-8e91-8a97df87ce48 unbound from our chassis
Nov 25 17:16:35 compute-0 NetworkManager[48891]: <info>  [1764090995.4267] device (tap5cfdc6b4-a0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:16:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.427 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8c1b4538-7e1d-41aa-8e91-8a97df87ce48
Nov 25 17:16:35 compute-0 ovn_controller[153477]: 2025-11-25T17:16:35Z|01486|binding|INFO|Releasing lport 5cfdc6b4-a091-429f-a33f-cc941535c221 from this chassis (sb_readonly=0)
Nov 25 17:16:35 compute-0 ovn_controller[153477]: 2025-11-25T17:16:35Z|01487|binding|INFO|Setting lport 5cfdc6b4-a091-429f-a33f-cc941535c221 down in Southbound
Nov 25 17:16:35 compute-0 ovn_controller[153477]: 2025-11-25T17:16:35Z|01488|binding|INFO|Removing iface tap5cfdc6b4-a0 ovn-installed in OVS
Nov 25 17:16:35 compute-0 nova_compute[254092]: 2025-11-25 17:16:35.435 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.444 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b518f692-afaa-4963-b0be-c4a22befc222]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:16:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.448 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:61:5b 2001:db8:0:1:f816:3eff:fea7:615b 2001:db8::f816:3eff:fea7:615b'], port_security=['fa:16:3e:a7:61:5b 2001:db8:0:1:f816:3eff:fea7:615b 2001:db8::f816:3eff:fea7:615b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fea7:615b/64 2001:db8::f816:3eff:fea7:615b/64', 'neutron:device_id': 'f0fce250-5e4a-4063-a0c9-a2285f68c22e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c57073ad-8c41-459b-9402-c367011860c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': '27a39082-c6dd-4238-9595-c012a711382c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6ff1ab3c-191d-40dd-ac45-c3cedb65904e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=5cfdc6b4-a091-429f-a33f-cc941535c221) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:16:35 compute-0 nova_compute[254092]: 2025-11-25 17:16:35.451 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.483 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ab6fece7-f073-4f89-9903-7d0461a9229c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:16:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.487 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[3f03c88e-7c0b-4816-8477-a7d0fd2336e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:16:35 compute-0 systemd[1]: machine-qemu\x2d174\x2dinstance\x2d0000008c.scope: Deactivated successfully.
Nov 25 17:16:35 compute-0 systemd[1]: machine-qemu\x2d174\x2dinstance\x2d0000008c.scope: Consumed 14.268s CPU time.
Nov 25 17:16:35 compute-0 systemd-machined[216343]: Machine qemu-174-instance-0000008c terminated.
Nov 25 17:16:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.516 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[864a9ae3-2554-4e5c-8c8a-afea1ab8603f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:16:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.536 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[84fda3c4-9646-449c-90dc-ce2164a4ee80]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8c1b4538-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:12:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 427], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 737427, 'reachable_time': 31371, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 409693, 'error': None, 'target': 'ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:16:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.557 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[33075044-f0d1-4906-92e3-d531312d678a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8c1b4538-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 737444, 'tstamp': 737444}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 409694, 'error': None, 'target': 'ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8c1b4538-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 737448, 'tstamp': 737448}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 409694, 'error': None, 'target': 'ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:16:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.560 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c1b4538-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:16:35 compute-0 nova_compute[254092]: 2025-11-25 17:16:35.562 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:35 compute-0 nova_compute[254092]: 2025-11-25 17:16:35.574 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.575 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8c1b4538-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:16:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.576 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:16:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.576 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8c1b4538-70, col_values=(('external_ids', {'iface-id': '805e6fea-3f01-4342-8f5e-5b75e48ec68e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:16:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.577 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:16:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.579 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 5cfdc6b4-a091-429f-a33f-cc941535c221 in datapath c57073ad-8c41-459b-9402-c367011860c7 unbound from our chassis
Nov 25 17:16:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.581 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c57073ad-8c41-459b-9402-c367011860c7
Nov 25 17:16:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.601 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[deca42b5-fc3b-4da4-bcac-d6dadb89a15e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:16:35 compute-0 NetworkManager[48891]: <info>  [1764090995.6087] manager: (tap5cfdc6b4-a0): new Tun device (/org/freedesktop/NetworkManager/Devices/613)
Nov 25 17:16:35 compute-0 nova_compute[254092]: 2025-11-25 17:16:35.627 254096 INFO nova.virt.libvirt.driver [-] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Instance destroyed successfully.
Nov 25 17:16:35 compute-0 nova_compute[254092]: 2025-11-25 17:16:35.628 254096 DEBUG nova.objects.instance [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'resources' on Instance uuid f0fce250-5e4a-4063-a0c9-a2285f68c22e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:16:35 compute-0 nova_compute[254092]: 2025-11-25 17:16:35.644 254096 DEBUG nova.virt.libvirt.vif [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:15:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1931135016',display_name='tempest-TestGettingAddress-server-1931135016',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1931135016',id=140,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC/niTBX5ZJOBoFwQAnBe0Xx9dFvOVPcybxHEgE/6u4oA8qKJolxGIRVI2nyb8g35YAG8s/hnJ8BpvMG1lA6L621k5AN67EKkdOzTdANkryTGmObdTe8FPKQ0krR2uOZXQ==',key_name='tempest-TestGettingAddress-363937330',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:15:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-gzae4ci4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:15:56Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=f0fce250-5e4a-4063-a0c9-a2285f68c22e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d4fd6164-a382-44de-8709-0a3941640a9d", "address": "fa:16:3e:e8:fe:9c", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4fd6164-a3", "ovs_interfaceid": "d4fd6164-a382-44de-8709-0a3941640a9d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:16:35 compute-0 nova_compute[254092]: 2025-11-25 17:16:35.644 254096 DEBUG nova.network.os_vif_util [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "d4fd6164-a382-44de-8709-0a3941640a9d", "address": "fa:16:3e:e8:fe:9c", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4fd6164-a3", "ovs_interfaceid": "d4fd6164-a382-44de-8709-0a3941640a9d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:16:35 compute-0 nova_compute[254092]: 2025-11-25 17:16:35.645 254096 DEBUG nova.network.os_vif_util [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e8:fe:9c,bridge_name='br-int',has_traffic_filtering=True,id=d4fd6164-a382-44de-8709-0a3941640a9d,network=Network(8c1b4538-7e1d-41aa-8e91-8a97df87ce48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4fd6164-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:16:35 compute-0 nova_compute[254092]: 2025-11-25 17:16:35.646 254096 DEBUG os_vif [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e8:fe:9c,bridge_name='br-int',has_traffic_filtering=True,id=d4fd6164-a382-44de-8709-0a3941640a9d,network=Network(8c1b4538-7e1d-41aa-8e91-8a97df87ce48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4fd6164-a3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:16:35 compute-0 nova_compute[254092]: 2025-11-25 17:16:35.648 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.647 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[30665d06-8bf7-43c0-a09c-d8475fa1ccec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:16:35 compute-0 nova_compute[254092]: 2025-11-25 17:16:35.648 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd4fd6164-a3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:16:35 compute-0 nova_compute[254092]: 2025-11-25 17:16:35.650 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:35 compute-0 nova_compute[254092]: 2025-11-25 17:16:35.652 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:16:35 compute-0 nova_compute[254092]: 2025-11-25 17:16:35.656 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.651 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[5ca66f59-800d-4f24-b4ba-088eafe326c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:16:35 compute-0 nova_compute[254092]: 2025-11-25 17:16:35.659 254096 INFO os_vif [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e8:fe:9c,bridge_name='br-int',has_traffic_filtering=True,id=d4fd6164-a382-44de-8709-0a3941640a9d,network=Network(8c1b4538-7e1d-41aa-8e91-8a97df87ce48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4fd6164-a3')
Nov 25 17:16:35 compute-0 nova_compute[254092]: 2025-11-25 17:16:35.660 254096 DEBUG nova.virt.libvirt.vif [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:15:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1931135016',display_name='tempest-TestGettingAddress-server-1931135016',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1931135016',id=140,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC/niTBX5ZJOBoFwQAnBe0Xx9dFvOVPcybxHEgE/6u4oA8qKJolxGIRVI2nyb8g35YAG8s/hnJ8BpvMG1lA6L621k5AN67EKkdOzTdANkryTGmObdTe8FPKQ0krR2uOZXQ==',key_name='tempest-TestGettingAddress-363937330',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:15:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-gzae4ci4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:15:56Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=f0fce250-5e4a-4063-a0c9-a2285f68c22e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5cfdc6b4-a091-429f-a33f-cc941535c221", "address": "fa:16:3e:a7:61:5b", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea7:615b", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fea7:615b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cfdc6b4-a0", "ovs_interfaceid": "5cfdc6b4-a091-429f-a33f-cc941535c221", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:16:35 compute-0 nova_compute[254092]: 2025-11-25 17:16:35.660 254096 DEBUG nova.network.os_vif_util [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "5cfdc6b4-a091-429f-a33f-cc941535c221", "address": "fa:16:3e:a7:61:5b", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea7:615b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fea7:615b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cfdc6b4-a0", "ovs_interfaceid": "5cfdc6b4-a091-429f-a33f-cc941535c221", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:16:35 compute-0 nova_compute[254092]: 2025-11-25 17:16:35.661 254096 DEBUG nova.network.os_vif_util [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:61:5b,bridge_name='br-int',has_traffic_filtering=True,id=5cfdc6b4-a091-429f-a33f-cc941535c221,network=Network(c57073ad-8c41-459b-9402-c367011860c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cfdc6b4-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:16:35 compute-0 nova_compute[254092]: 2025-11-25 17:16:35.661 254096 DEBUG os_vif [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:61:5b,bridge_name='br-int',has_traffic_filtering=True,id=5cfdc6b4-a091-429f-a33f-cc941535c221,network=Network(c57073ad-8c41-459b-9402-c367011860c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cfdc6b4-a0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:16:35 compute-0 nova_compute[254092]: 2025-11-25 17:16:35.662 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:35 compute-0 nova_compute[254092]: 2025-11-25 17:16:35.662 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5cfdc6b4-a0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:16:35 compute-0 nova_compute[254092]: 2025-11-25 17:16:35.664 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:35 compute-0 nova_compute[254092]: 2025-11-25 17:16:35.665 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:35 compute-0 nova_compute[254092]: 2025-11-25 17:16:35.667 254096 INFO os_vif [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:61:5b,bridge_name='br-int',has_traffic_filtering=True,id=5cfdc6b4-a091-429f-a33f-cc941535c221,network=Network(c57073ad-8c41-459b-9402-c367011860c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cfdc6b4-a0')
Nov 25 17:16:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.688 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[bd547a31-918a-4a5d-ba5b-69964e419481]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:16:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.709 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[84dc7149-02b0-4beb-8c4c-63c133dc96b1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc57073ad-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:72:a6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 36, 'tx_packets': 6, 'rx_bytes': 3160, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 36, 'tx_packets': 6, 'rx_bytes': 3160, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 428], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 737528, 'reachable_time': 23395, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 36, 'inoctets': 2656, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 36, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2656, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 36, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 409739, 'error': None, 'target': 'ovnmeta-c57073ad-8c41-459b-9402-c367011860c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:16:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.733 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5fde957e-dbe4-45a7-bd92-fbc96a7b85f4]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc57073ad-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 737545, 'tstamp': 737545}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 409743, 'error': None, 'target': 'ovnmeta-c57073ad-8c41-459b-9402-c367011860c7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:16:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.734 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc57073ad-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:16:35 compute-0 nova_compute[254092]: 2025-11-25 17:16:35.783 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:35 compute-0 nova_compute[254092]: 2025-11-25 17:16:35.785 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.786 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc57073ad-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:16:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.787 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:16:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.787 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc57073ad-80, col_values=(('external_ids', {'iface-id': 'fff6df53-4de9-409a-abf7-032bad835b32'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:16:35 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:35.787 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:16:36 compute-0 nova_compute[254092]: 2025-11-25 17:16:36.024 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:36 compute-0 ceph-mon[74985]: pgmap v2791: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Nov 25 17:16:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2792: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.1 KiB/s wr, 7 op/s
Nov 25 17:16:37 compute-0 nova_compute[254092]: 2025-11-25 17:16:37.129 254096 DEBUG nova.network.neutron [req-8acd3e17-030e-45db-bb86-038a7379090a req-b223e6a1-05c0-4c25-b0e6-182e7d1891d0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Updated VIF entry in instance network info cache for port d4fd6164-a382-44de-8709-0a3941640a9d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:16:37 compute-0 nova_compute[254092]: 2025-11-25 17:16:37.130 254096 DEBUG nova.network.neutron [req-8acd3e17-030e-45db-bb86-038a7379090a req-b223e6a1-05c0-4c25-b0e6-182e7d1891d0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Updating instance_info_cache with network_info: [{"id": "d4fd6164-a382-44de-8709-0a3941640a9d", "address": "fa:16:3e:e8:fe:9c", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4fd6164-a3", "ovs_interfaceid": "d4fd6164-a382-44de-8709-0a3941640a9d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5cfdc6b4-a091-429f-a33f-cc941535c221", "address": "fa:16:3e:a7:61:5b", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fea7:615b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fea7:615b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cfdc6b4-a0", "ovs_interfaceid": "5cfdc6b4-a091-429f-a33f-cc941535c221", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:16:37 compute-0 nova_compute[254092]: 2025-11-25 17:16:37.163 254096 DEBUG nova.compute.manager [req-d4c4741c-dd8e-40b2-a91e-375613851f80 req-48b7940e-7dab-4767-904c-59cf9a0c9f38 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Received event network-vif-unplugged-d4fd6164-a382-44de-8709-0a3941640a9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:16:37 compute-0 nova_compute[254092]: 2025-11-25 17:16:37.164 254096 DEBUG oslo_concurrency.lockutils [req-d4c4741c-dd8e-40b2-a91e-375613851f80 req-48b7940e-7dab-4767-904c-59cf9a0c9f38 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:16:37 compute-0 nova_compute[254092]: 2025-11-25 17:16:37.164 254096 DEBUG oslo_concurrency.lockutils [req-d4c4741c-dd8e-40b2-a91e-375613851f80 req-48b7940e-7dab-4767-904c-59cf9a0c9f38 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:16:37 compute-0 nova_compute[254092]: 2025-11-25 17:16:37.164 254096 DEBUG oslo_concurrency.lockutils [req-d4c4741c-dd8e-40b2-a91e-375613851f80 req-48b7940e-7dab-4767-904c-59cf9a0c9f38 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:16:37 compute-0 nova_compute[254092]: 2025-11-25 17:16:37.164 254096 DEBUG nova.compute.manager [req-d4c4741c-dd8e-40b2-a91e-375613851f80 req-48b7940e-7dab-4767-904c-59cf9a0c9f38 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] No waiting events found dispatching network-vif-unplugged-d4fd6164-a382-44de-8709-0a3941640a9d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:16:37 compute-0 nova_compute[254092]: 2025-11-25 17:16:37.164 254096 DEBUG nova.compute.manager [req-d4c4741c-dd8e-40b2-a91e-375613851f80 req-48b7940e-7dab-4767-904c-59cf9a0c9f38 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Received event network-vif-unplugged-d4fd6164-a382-44de-8709-0a3941640a9d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 17:16:37 compute-0 nova_compute[254092]: 2025-11-25 17:16:37.165 254096 DEBUG nova.compute.manager [req-d4c4741c-dd8e-40b2-a91e-375613851f80 req-48b7940e-7dab-4767-904c-59cf9a0c9f38 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Received event network-vif-plugged-d4fd6164-a382-44de-8709-0a3941640a9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:16:37 compute-0 nova_compute[254092]: 2025-11-25 17:16:37.165 254096 DEBUG oslo_concurrency.lockutils [req-d4c4741c-dd8e-40b2-a91e-375613851f80 req-48b7940e-7dab-4767-904c-59cf9a0c9f38 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:16:37 compute-0 nova_compute[254092]: 2025-11-25 17:16:37.165 254096 DEBUG oslo_concurrency.lockutils [req-d4c4741c-dd8e-40b2-a91e-375613851f80 req-48b7940e-7dab-4767-904c-59cf9a0c9f38 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:16:37 compute-0 nova_compute[254092]: 2025-11-25 17:16:37.165 254096 DEBUG oslo_concurrency.lockutils [req-d4c4741c-dd8e-40b2-a91e-375613851f80 req-48b7940e-7dab-4767-904c-59cf9a0c9f38 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:16:37 compute-0 nova_compute[254092]: 2025-11-25 17:16:37.165 254096 DEBUG nova.compute.manager [req-d4c4741c-dd8e-40b2-a91e-375613851f80 req-48b7940e-7dab-4767-904c-59cf9a0c9f38 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] No waiting events found dispatching network-vif-plugged-d4fd6164-a382-44de-8709-0a3941640a9d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:16:37 compute-0 nova_compute[254092]: 2025-11-25 17:16:37.165 254096 WARNING nova.compute.manager [req-d4c4741c-dd8e-40b2-a91e-375613851f80 req-48b7940e-7dab-4767-904c-59cf9a0c9f38 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Received unexpected event network-vif-plugged-d4fd6164-a382-44de-8709-0a3941640a9d for instance with vm_state active and task_state deleting.
Nov 25 17:16:37 compute-0 nova_compute[254092]: 2025-11-25 17:16:37.206 254096 DEBUG oslo_concurrency.lockutils [req-8acd3e17-030e-45db-bb86-038a7379090a req-b223e6a1-05c0-4c25-b0e6-182e7d1891d0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-f0fce250-5e4a-4063-a0c9-a2285f68c22e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:16:38 compute-0 nova_compute[254092]: 2025-11-25 17:16:38.329 254096 INFO nova.virt.libvirt.driver [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Deleting instance files /var/lib/nova/instances/f0fce250-5e4a-4063-a0c9-a2285f68c22e_del
Nov 25 17:16:38 compute-0 nova_compute[254092]: 2025-11-25 17:16:38.330 254096 INFO nova.virt.libvirt.driver [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Deletion of /var/lib/nova/instances/f0fce250-5e4a-4063-a0c9-a2285f68c22e_del complete
Nov 25 17:16:38 compute-0 ceph-mon[74985]: pgmap v2792: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.1 KiB/s wr, 7 op/s
Nov 25 17:16:38 compute-0 nova_compute[254092]: 2025-11-25 17:16:38.439 254096 INFO nova.compute.manager [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Took 3.27 seconds to destroy the instance on the hypervisor.
Nov 25 17:16:38 compute-0 nova_compute[254092]: 2025-11-25 17:16:38.440 254096 DEBUG oslo.service.loopingcall [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 17:16:38 compute-0 nova_compute[254092]: 2025-11-25 17:16:38.440 254096 DEBUG nova.compute.manager [-] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 17:16:38 compute-0 nova_compute[254092]: 2025-11-25 17:16:38.440 254096 DEBUG nova.network.neutron [-] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 17:16:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2793: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.1 KiB/s wr, 7 op/s
Nov 25 17:16:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:16:39 compute-0 nova_compute[254092]: 2025-11-25 17:16:39.278 254096 DEBUG nova.compute.manager [req-4a6eb8c9-bc9c-47c1-8094-f7a20b7dd3ef req-3b3fff54-210e-4353-8fdf-32142b5c3f70 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Received event network-vif-unplugged-5cfdc6b4-a091-429f-a33f-cc941535c221 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:16:39 compute-0 nova_compute[254092]: 2025-11-25 17:16:39.279 254096 DEBUG oslo_concurrency.lockutils [req-4a6eb8c9-bc9c-47c1-8094-f7a20b7dd3ef req-3b3fff54-210e-4353-8fdf-32142b5c3f70 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:16:39 compute-0 nova_compute[254092]: 2025-11-25 17:16:39.280 254096 DEBUG oslo_concurrency.lockutils [req-4a6eb8c9-bc9c-47c1-8094-f7a20b7dd3ef req-3b3fff54-210e-4353-8fdf-32142b5c3f70 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:16:39 compute-0 nova_compute[254092]: 2025-11-25 17:16:39.280 254096 DEBUG oslo_concurrency.lockutils [req-4a6eb8c9-bc9c-47c1-8094-f7a20b7dd3ef req-3b3fff54-210e-4353-8fdf-32142b5c3f70 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:16:39 compute-0 nova_compute[254092]: 2025-11-25 17:16:39.280 254096 DEBUG nova.compute.manager [req-4a6eb8c9-bc9c-47c1-8094-f7a20b7dd3ef req-3b3fff54-210e-4353-8fdf-32142b5c3f70 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] No waiting events found dispatching network-vif-unplugged-5cfdc6b4-a091-429f-a33f-cc941535c221 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:16:39 compute-0 nova_compute[254092]: 2025-11-25 17:16:39.280 254096 DEBUG nova.compute.manager [req-4a6eb8c9-bc9c-47c1-8094-f7a20b7dd3ef req-3b3fff54-210e-4353-8fdf-32142b5c3f70 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Received event network-vif-unplugged-5cfdc6b4-a091-429f-a33f-cc941535c221 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 17:16:39 compute-0 nova_compute[254092]: 2025-11-25 17:16:39.281 254096 DEBUG nova.compute.manager [req-4a6eb8c9-bc9c-47c1-8094-f7a20b7dd3ef req-3b3fff54-210e-4353-8fdf-32142b5c3f70 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Received event network-vif-plugged-5cfdc6b4-a091-429f-a33f-cc941535c221 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:16:39 compute-0 nova_compute[254092]: 2025-11-25 17:16:39.281 254096 DEBUG oslo_concurrency.lockutils [req-4a6eb8c9-bc9c-47c1-8094-f7a20b7dd3ef req-3b3fff54-210e-4353-8fdf-32142b5c3f70 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:16:39 compute-0 nova_compute[254092]: 2025-11-25 17:16:39.281 254096 DEBUG oslo_concurrency.lockutils [req-4a6eb8c9-bc9c-47c1-8094-f7a20b7dd3ef req-3b3fff54-210e-4353-8fdf-32142b5c3f70 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:16:39 compute-0 nova_compute[254092]: 2025-11-25 17:16:39.282 254096 DEBUG oslo_concurrency.lockutils [req-4a6eb8c9-bc9c-47c1-8094-f7a20b7dd3ef req-3b3fff54-210e-4353-8fdf-32142b5c3f70 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:16:39 compute-0 nova_compute[254092]: 2025-11-25 17:16:39.282 254096 DEBUG nova.compute.manager [req-4a6eb8c9-bc9c-47c1-8094-f7a20b7dd3ef req-3b3fff54-210e-4353-8fdf-32142b5c3f70 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] No waiting events found dispatching network-vif-plugged-5cfdc6b4-a091-429f-a33f-cc941535c221 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:16:39 compute-0 nova_compute[254092]: 2025-11-25 17:16:39.282 254096 WARNING nova.compute.manager [req-4a6eb8c9-bc9c-47c1-8094-f7a20b7dd3ef req-3b3fff54-210e-4353-8fdf-32142b5c3f70 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Received unexpected event network-vif-plugged-5cfdc6b4-a091-429f-a33f-cc941535c221 for instance with vm_state active and task_state deleting.
Nov 25 17:16:39 compute-0 nova_compute[254092]: 2025-11-25 17:16:39.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:16:39 compute-0 nova_compute[254092]: 2025-11-25 17:16:39.656 254096 DEBUG nova.network.neutron [-] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:16:39 compute-0 nova_compute[254092]: 2025-11-25 17:16:39.694 254096 INFO nova.compute.manager [-] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Took 1.25 seconds to deallocate network for instance.
Nov 25 17:16:39 compute-0 nova_compute[254092]: 2025-11-25 17:16:39.762 254096 DEBUG oslo_concurrency.lockutils [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:16:39 compute-0 nova_compute[254092]: 2025-11-25 17:16:39.763 254096 DEBUG oslo_concurrency.lockutils [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:16:39 compute-0 nova_compute[254092]: 2025-11-25 17:16:39.834 254096 DEBUG oslo_concurrency.processutils [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:16:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:16:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:16:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:16:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:16:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:16:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:16:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:16:40
Nov 25 17:16:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:16:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:16:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', 'images', 'backups', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', '.rgw.root']
Nov 25 17:16:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:16:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:16:40 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1787955388' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:16:40 compute-0 nova_compute[254092]: 2025-11-25 17:16:40.264 254096 DEBUG oslo_concurrency.processutils [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:16:40 compute-0 nova_compute[254092]: 2025-11-25 17:16:40.272 254096 DEBUG nova.compute.provider_tree [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:16:40 compute-0 nova_compute[254092]: 2025-11-25 17:16:40.297 254096 DEBUG nova.scheduler.client.report [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:16:40 compute-0 ceph-mon[74985]: pgmap v2793: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.1 KiB/s wr, 7 op/s
Nov 25 17:16:40 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1787955388' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:16:40 compute-0 nova_compute[254092]: 2025-11-25 17:16:40.534 254096 DEBUG oslo_concurrency.lockutils [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:16:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:16:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:16:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:16:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:16:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:16:40 compute-0 nova_compute[254092]: 2025-11-25 17:16:40.665 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:16:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:16:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:16:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:16:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:16:40 compute-0 nova_compute[254092]: 2025-11-25 17:16:40.734 254096 INFO nova.scheduler.client.report [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Deleted allocations for instance f0fce250-5e4a-4063-a0c9-a2285f68c22e
Nov 25 17:16:40 compute-0 nova_compute[254092]: 2025-11-25 17:16:40.884 254096 DEBUG oslo_concurrency.lockutils [None req-f93b5633-2569-4f3e-8cd1-8ea25fc7d1c6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "f0fce250-5e4a-4063-a0c9-a2285f68c22e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:16:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2794: 321 pgs: 321 active+clean; 144 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.9 KiB/s wr, 22 op/s
Nov 25 17:16:41 compute-0 nova_compute[254092]: 2025-11-25 17:16:41.027 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:41 compute-0 nova_compute[254092]: 2025-11-25 17:16:41.482 254096 DEBUG nova.compute.manager [req-226fec3a-258f-4358-b0d4-47252684fdb1 req-b8cb8d48-33e1-4694-944d-0a4ce40ae755 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Received event network-vif-deleted-d4fd6164-a382-44de-8709-0a3941640a9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:16:41 compute-0 nova_compute[254092]: 2025-11-25 17:16:41.483 254096 DEBUG nova.compute.manager [req-226fec3a-258f-4358-b0d4-47252684fdb1 req-b8cb8d48-33e1-4694-944d-0a4ce40ae755 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Received event network-vif-deleted-5cfdc6b4-a091-429f-a33f-cc941535c221 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:16:41 compute-0 nova_compute[254092]: 2025-11-25 17:16:41.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:16:41 compute-0 nova_compute[254092]: 2025-11-25 17:16:41.510 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:16:41 compute-0 nova_compute[254092]: 2025-11-25 17:16:41.510 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:16:41 compute-0 nova_compute[254092]: 2025-11-25 17:16:41.511 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:16:41 compute-0 nova_compute[254092]: 2025-11-25 17:16:41.511 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:16:41 compute-0 nova_compute[254092]: 2025-11-25 17:16:41.511 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:16:41 compute-0 ceph-mon[74985]: pgmap v2794: 321 pgs: 321 active+clean; 144 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.9 KiB/s wr, 22 op/s
Nov 25 17:16:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:16:41 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2243257493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:16:41 compute-0 nova_compute[254092]: 2025-11-25 17:16:41.990 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.057 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.058 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.119 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:42.119 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=55, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=54) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:16:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:42.120 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.225 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.228 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3424MB free_disk=59.93061828613281GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.228 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.228 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.284 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance cab3a333-1f68-435b-b6cb-a508755c2565 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.284 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.284 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.319 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.397 254096 DEBUG nova.compute.manager [req-a50b6de6-da6d-4adf-af83-8024b48f6882 req-d49cf670-f090-432e-821d-6a738459ef69 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Received event network-changed-bfd7bca3-f01a-4857-8c51-1085cde3ad00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.397 254096 DEBUG nova.compute.manager [req-a50b6de6-da6d-4adf-af83-8024b48f6882 req-d49cf670-f090-432e-821d-6a738459ef69 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Refreshing instance network info cache due to event network-changed-bfd7bca3-f01a-4857-8c51-1085cde3ad00. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.398 254096 DEBUG oslo_concurrency.lockutils [req-a50b6de6-da6d-4adf-af83-8024b48f6882 req-d49cf670-f090-432e-821d-6a738459ef69 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-cab3a333-1f68-435b-b6cb-a508755c2565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.398 254096 DEBUG oslo_concurrency.lockutils [req-a50b6de6-da6d-4adf-af83-8024b48f6882 req-d49cf670-f090-432e-821d-6a738459ef69 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-cab3a333-1f68-435b-b6cb-a508755c2565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.398 254096 DEBUG nova.network.neutron [req-a50b6de6-da6d-4adf-af83-8024b48f6882 req-d49cf670-f090-432e-821d-6a738459ef69 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Refreshing network info cache for port bfd7bca3-f01a-4857-8c51-1085cde3ad00 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.429 254096 DEBUG oslo_concurrency.lockutils [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "cab3a333-1f68-435b-b6cb-a508755c2565" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.429 254096 DEBUG oslo_concurrency.lockutils [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.430 254096 DEBUG oslo_concurrency.lockutils [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.430 254096 DEBUG oslo_concurrency.lockutils [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.430 254096 DEBUG oslo_concurrency.lockutils [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.431 254096 INFO nova.compute.manager [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Terminating instance
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.432 254096 DEBUG nova.compute.manager [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 17:16:42 compute-0 kernel: tapbfd7bca3-f0 (unregistering): left promiscuous mode
Nov 25 17:16:42 compute-0 NetworkManager[48891]: <info>  [1764091002.5074] device (tapbfd7bca3-f0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.512 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:42 compute-0 ovn_controller[153477]: 2025-11-25T17:16:42Z|01489|binding|INFO|Releasing lport bfd7bca3-f01a-4857-8c51-1085cde3ad00 from this chassis (sb_readonly=0)
Nov 25 17:16:42 compute-0 ovn_controller[153477]: 2025-11-25T17:16:42Z|01490|binding|INFO|Setting lport bfd7bca3-f01a-4857-8c51-1085cde3ad00 down in Southbound
Nov 25 17:16:42 compute-0 ovn_controller[153477]: 2025-11-25T17:16:42Z|01491|binding|INFO|Removing iface tapbfd7bca3-f0 ovn-installed in OVS
Nov 25 17:16:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:42.523 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:85:52 10.100.0.12'], port_security=['fa:16:3e:f9:85:52 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'cab3a333-1f68-435b-b6cb-a508755c2565', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8c1b4538-7e1d-41aa-8e91-8a97df87ce48', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': '27a39082-c6dd-4238-9595-c012a711382c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1bd6efb3-458b-448a-a74a-463b74973f4e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=bfd7bca3-f01a-4857-8c51-1085cde3ad00) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:16:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:42.525 163338 INFO neutron.agent.ovn.metadata.agent [-] Port bfd7bca3-f01a-4857-8c51-1085cde3ad00 in datapath 8c1b4538-7e1d-41aa-8e91-8a97df87ce48 unbound from our chassis
Nov 25 17:16:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:42.525 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8c1b4538-7e1d-41aa-8e91-8a97df87ce48, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 17:16:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:42.526 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3194d4ca-f9b9-4c58-87a1-651d70943ad4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:16:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:42.527 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48 namespace which is not needed anymore
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.527 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:42 compute-0 kernel: tapcd75615e-b8 (unregistering): left promiscuous mode
Nov 25 17:16:42 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2243257493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:16:42 compute-0 NetworkManager[48891]: <info>  [1764091002.5480] device (tapcd75615e-b8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:16:42 compute-0 ovn_controller[153477]: 2025-11-25T17:16:42Z|01492|binding|INFO|Releasing lport cd75615e-b80b-4685-b424-2c54f7fdbde8 from this chassis (sb_readonly=0)
Nov 25 17:16:42 compute-0 ovn_controller[153477]: 2025-11-25T17:16:42Z|01493|binding|INFO|Setting lport cd75615e-b80b-4685-b424-2c54f7fdbde8 down in Southbound
Nov 25 17:16:42 compute-0 ovn_controller[153477]: 2025-11-25T17:16:42Z|01494|binding|INFO|Removing iface tapcd75615e-b8 ovn-installed in OVS
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.570 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:42.572 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:6d:38 2001:db8:0:1:f816:3eff:fe8f:6d38 2001:db8::f816:3eff:fe8f:6d38'], port_security=['fa:16:3e:8f:6d:38 2001:db8:0:1:f816:3eff:fe8f:6d38 2001:db8::f816:3eff:fe8f:6d38'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe8f:6d38/64 2001:db8::f816:3eff:fe8f:6d38/64', 'neutron:device_id': 'cab3a333-1f68-435b-b6cb-a508755c2565', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c57073ad-8c41-459b-9402-c367011860c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': '27a39082-c6dd-4238-9595-c012a711382c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6ff1ab3c-191d-40dd-ac45-c3cedb65904e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=cd75615e-b80b-4685-b424-2c54f7fdbde8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.579 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:42 compute-0 systemd[1]: machine-qemu\x2d173\x2dinstance\x2d0000008b.scope: Deactivated successfully.
Nov 25 17:16:42 compute-0 systemd[1]: machine-qemu\x2d173\x2dinstance\x2d0000008b.scope: Consumed 16.919s CPU time.
Nov 25 17:16:42 compute-0 systemd-machined[216343]: Machine qemu-173-instance-0000008b terminated.
Nov 25 17:16:42 compute-0 NetworkManager[48891]: <info>  [1764091002.6628] manager: (tapcd75615e-b8): new Tun device (/org/freedesktop/NetworkManager/Devices/614)
Nov 25 17:16:42 compute-0 neutron-haproxy-ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48[407908]: [NOTICE]   (407939) : haproxy version is 2.8.14-c23fe91
Nov 25 17:16:42 compute-0 neutron-haproxy-ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48[407908]: [NOTICE]   (407939) : path to executable is /usr/sbin/haproxy
Nov 25 17:16:42 compute-0 neutron-haproxy-ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48[407908]: [ALERT]    (407939) : Current worker (407943) exited with code 143 (Terminated)
Nov 25 17:16:42 compute-0 neutron-haproxy-ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48[407908]: [WARNING]  (407939) : All workers exited. Exiting... (0)
Nov 25 17:16:42 compute-0 systemd[1]: libpod-069244993be646e62f00596e4839a705e1f3419c7b43dfda7d93c1656f364f42.scope: Deactivated successfully.
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.689 254096 INFO nova.virt.libvirt.driver [-] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Instance destroyed successfully.
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.689 254096 DEBUG nova.objects.instance [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'resources' on Instance uuid cab3a333-1f68-435b-b6cb-a508755c2565 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:16:42 compute-0 podman[409840]: 2025-11-25 17:16:42.697548691 +0000 UTC m=+0.058370240 container died 069244993be646e62f00596e4839a705e1f3419c7b43dfda7d93c1656f364f42 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.700 254096 DEBUG nova.virt.libvirt.vif [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:15:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1331536480',display_name='tempest-TestGettingAddress-server-1331536480',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1331536480',id=139,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC/niTBX5ZJOBoFwQAnBe0Xx9dFvOVPcybxHEgE/6u4oA8qKJolxGIRVI2nyb8g35YAG8s/hnJ8BpvMG1lA6L621k5AN67EKkdOzTdANkryTGmObdTe8FPKQ0krR2uOZXQ==',key_name='tempest-TestGettingAddress-363937330',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:15:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-i4fxutcm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:15:17Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=cab3a333-1f68-435b-b6cb-a508755c2565,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "address": "fa:16:3e:f9:85:52", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfd7bca3-f0", "ovs_interfaceid": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.700 254096 DEBUG nova.network.os_vif_util [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "address": "fa:16:3e:f9:85:52", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfd7bca3-f0", "ovs_interfaceid": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.701 254096 DEBUG nova.network.os_vif_util [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f9:85:52,bridge_name='br-int',has_traffic_filtering=True,id=bfd7bca3-f01a-4857-8c51-1085cde3ad00,network=Network(8c1b4538-7e1d-41aa-8e91-8a97df87ce48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfd7bca3-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.701 254096 DEBUG os_vif [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f9:85:52,bridge_name='br-int',has_traffic_filtering=True,id=bfd7bca3-f01a-4857-8c51-1085cde3ad00,network=Network(8c1b4538-7e1d-41aa-8e91-8a97df87ce48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfd7bca3-f0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.703 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.703 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbfd7bca3-f0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.705 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.708 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.709 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.711 254096 INFO os_vif [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f9:85:52,bridge_name='br-int',has_traffic_filtering=True,id=bfd7bca3-f01a-4857-8c51-1085cde3ad00,network=Network(8c1b4538-7e1d-41aa-8e91-8a97df87ce48),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbfd7bca3-f0')
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.712 254096 DEBUG nova.virt.libvirt.vif [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:15:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1331536480',display_name='tempest-TestGettingAddress-server-1331536480',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1331536480',id=139,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC/niTBX5ZJOBoFwQAnBe0Xx9dFvOVPcybxHEgE/6u4oA8qKJolxGIRVI2nyb8g35YAG8s/hnJ8BpvMG1lA6L621k5AN67EKkdOzTdANkryTGmObdTe8FPKQ0krR2uOZXQ==',key_name='tempest-TestGettingAddress-363937330',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:15:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-i4fxutcm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:15:17Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=cab3a333-1f68-435b-b6cb-a508755c2565,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "address": "fa:16:3e:8f:6d:38", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd75615e-b8", "ovs_interfaceid": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.713 254096 DEBUG nova.network.os_vif_util [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "address": "fa:16:3e:8f:6d:38", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd75615e-b8", "ovs_interfaceid": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.713 254096 DEBUG nova.network.os_vif_util [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8f:6d:38,bridge_name='br-int',has_traffic_filtering=True,id=cd75615e-b80b-4685-b424-2c54f7fdbde8,network=Network(c57073ad-8c41-459b-9402-c367011860c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd75615e-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.714 254096 DEBUG os_vif [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8f:6d:38,bridge_name='br-int',has_traffic_filtering=True,id=cd75615e-b80b-4685-b424-2c54f7fdbde8,network=Network(c57073ad-8c41-459b-9402-c367011860c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd75615e-b8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.716 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.716 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcd75615e-b8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.718 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.719 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.721 254096 INFO os_vif [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8f:6d:38,bridge_name='br-int',has_traffic_filtering=True,id=cd75615e-b80b-4685-b424-2c54f7fdbde8,network=Network(c57073ad-8c41-459b-9402-c367011860c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcd75615e-b8')
Nov 25 17:16:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-069244993be646e62f00596e4839a705e1f3419c7b43dfda7d93c1656f364f42-userdata-shm.mount: Deactivated successfully.
Nov 25 17:16:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d2a19e2c416b67cb68207b4518ff73fc69af0445413f18a70a9f65c350e3045-merged.mount: Deactivated successfully.
Nov 25 17:16:42 compute-0 podman[409840]: 2025-11-25 17:16:42.736576024 +0000 UTC m=+0.097397583 container cleanup 069244993be646e62f00596e4839a705e1f3419c7b43dfda7d93c1656f364f42 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 17:16:42 compute-0 systemd[1]: libpod-conmon-069244993be646e62f00596e4839a705e1f3419c7b43dfda7d93c1656f364f42.scope: Deactivated successfully.
Nov 25 17:16:42 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:16:42 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3823325371' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.767 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.776 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.793 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:16:42 compute-0 podman[409906]: 2025-11-25 17:16:42.804756169 +0000 UTC m=+0.045992552 container remove 069244993be646e62f00596e4839a705e1f3419c7b43dfda7d93c1656f364f42 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 17:16:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:42.811 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1a6b8776-3357-4325-9c27-9bc90bdc1a3f]: (4, ('Tue Nov 25 05:16:42 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48 (069244993be646e62f00596e4839a705e1f3419c7b43dfda7d93c1656f364f42)\n069244993be646e62f00596e4839a705e1f3419c7b43dfda7d93c1656f364f42\nTue Nov 25 05:16:42 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48 (069244993be646e62f00596e4839a705e1f3419c7b43dfda7d93c1656f364f42)\n069244993be646e62f00596e4839a705e1f3419c7b43dfda7d93c1656f364f42\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:16:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:42.813 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0e002aa4-490e-4fa7-80ec-95bc51f4d451]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:16:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:42.814 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c1b4538-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.815 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:42 compute-0 kernel: tap8c1b4538-70: left promiscuous mode
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.819 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.820 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:16:42 compute-0 nova_compute[254092]: 2025-11-25 17:16:42.828 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:42.832 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1a3ec6ad-f281-4f14-a715-23b6b322e6f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:16:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:42.846 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[adbcf20a-cf57-4af9-8689-dae836a0afe7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:16:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:42.848 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a4b2a7f8-36b9-4ab4-932d-d019c4673028]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:16:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:42.864 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[484c1a23-1d1f-45b3-9f61-052dc0e27545]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 737418, 'reachable_time': 31594, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 409927, 'error': None, 'target': 'ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:16:42 compute-0 systemd[1]: run-netns-ovnmeta\x2d8c1b4538\x2d7e1d\x2d41aa\x2d8e91\x2d8a97df87ce48.mount: Deactivated successfully.
Nov 25 17:16:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:42.869 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8c1b4538-7e1d-41aa-8e91-8a97df87ce48 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 17:16:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:42.869 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[92b87971-082f-46f1-bfd1-d01dcc9ce51e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:16:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:42.872 163338 INFO neutron.agent.ovn.metadata.agent [-] Port cd75615e-b80b-4685-b424-2c54f7fdbde8 in datapath c57073ad-8c41-459b-9402-c367011860c7 unbound from our chassis
Nov 25 17:16:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:42.874 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c57073ad-8c41-459b-9402-c367011860c7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 17:16:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:42.874 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[80c6be45-a3fa-47e2-ba5f-d5c53c1039e9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:16:42 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:42.875 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c57073ad-8c41-459b-9402-c367011860c7 namespace which is not needed anymore
Nov 25 17:16:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2795: 321 pgs: 321 active+clean; 121 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 5.9 KiB/s wr, 29 op/s
Nov 25 17:16:43 compute-0 neutron-haproxy-ovnmeta-c57073ad-8c41-459b-9402-c367011860c7[408018]: [NOTICE]   (408022) : haproxy version is 2.8.14-c23fe91
Nov 25 17:16:43 compute-0 neutron-haproxy-ovnmeta-c57073ad-8c41-459b-9402-c367011860c7[408018]: [NOTICE]   (408022) : path to executable is /usr/sbin/haproxy
Nov 25 17:16:43 compute-0 neutron-haproxy-ovnmeta-c57073ad-8c41-459b-9402-c367011860c7[408018]: [WARNING]  (408022) : Exiting Master process...
Nov 25 17:16:43 compute-0 neutron-haproxy-ovnmeta-c57073ad-8c41-459b-9402-c367011860c7[408018]: [WARNING]  (408022) : Exiting Master process...
Nov 25 17:16:43 compute-0 neutron-haproxy-ovnmeta-c57073ad-8c41-459b-9402-c367011860c7[408018]: [ALERT]    (408022) : Current worker (408024) exited with code 143 (Terminated)
Nov 25 17:16:43 compute-0 neutron-haproxy-ovnmeta-c57073ad-8c41-459b-9402-c367011860c7[408018]: [WARNING]  (408022) : All workers exited. Exiting... (0)
Nov 25 17:16:43 compute-0 systemd[1]: libpod-f960d96920a695446851694b59902b9361bd4703d15f463adc29c170acceb135.scope: Deactivated successfully.
Nov 25 17:16:43 compute-0 nova_compute[254092]: 2025-11-25 17:16:43.039 254096 INFO nova.virt.libvirt.driver [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Deleting instance files /var/lib/nova/instances/cab3a333-1f68-435b-b6cb-a508755c2565_del
Nov 25 17:16:43 compute-0 nova_compute[254092]: 2025-11-25 17:16:43.040 254096 INFO nova.virt.libvirt.driver [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Deletion of /var/lib/nova/instances/cab3a333-1f68-435b-b6cb-a508755c2565_del complete
Nov 25 17:16:43 compute-0 podman[409944]: 2025-11-25 17:16:43.04318149 +0000 UTC m=+0.057439935 container died f960d96920a695446851694b59902b9361bd4703d15f463adc29c170acceb135 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c57073ad-8c41-459b-9402-c367011860c7, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 17:16:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f960d96920a695446851694b59902b9361bd4703d15f463adc29c170acceb135-userdata-shm.mount: Deactivated successfully.
Nov 25 17:16:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd0817bcd731ce4c188b30a692b0672c775002a3b0968cf2b0f2382fce690850-merged.mount: Deactivated successfully.
Nov 25 17:16:43 compute-0 podman[409944]: 2025-11-25 17:16:43.07077976 +0000 UTC m=+0.085038225 container cleanup f960d96920a695446851694b59902b9361bd4703d15f463adc29c170acceb135 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c57073ad-8c41-459b-9402-c367011860c7, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 17:16:43 compute-0 systemd[1]: libpod-conmon-f960d96920a695446851694b59902b9361bd4703d15f463adc29c170acceb135.scope: Deactivated successfully.
Nov 25 17:16:43 compute-0 nova_compute[254092]: 2025-11-25 17:16:43.119 254096 INFO nova.compute.manager [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Took 0.69 seconds to destroy the instance on the hypervisor.
Nov 25 17:16:43 compute-0 nova_compute[254092]: 2025-11-25 17:16:43.120 254096 DEBUG oslo.service.loopingcall [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 17:16:43 compute-0 nova_compute[254092]: 2025-11-25 17:16:43.120 254096 DEBUG nova.compute.manager [-] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 17:16:43 compute-0 nova_compute[254092]: 2025-11-25 17:16:43.120 254096 DEBUG nova.network.neutron [-] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 17:16:43 compute-0 podman[409972]: 2025-11-25 17:16:43.129965282 +0000 UTC m=+0.039888858 container remove f960d96920a695446851694b59902b9361bd4703d15f463adc29c170acceb135 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-c57073ad-8c41-459b-9402-c367011860c7, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 17:16:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:43.135 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[29268928-7bb4-4e95-9737-9e31166e6f39]: (4, ('Tue Nov 25 05:16:42 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c57073ad-8c41-459b-9402-c367011860c7 (f960d96920a695446851694b59902b9361bd4703d15f463adc29c170acceb135)\nf960d96920a695446851694b59902b9361bd4703d15f463adc29c170acceb135\nTue Nov 25 05:16:43 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c57073ad-8c41-459b-9402-c367011860c7 (f960d96920a695446851694b59902b9361bd4703d15f463adc29c170acceb135)\nf960d96920a695446851694b59902b9361bd4703d15f463adc29c170acceb135\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:16:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:43.137 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b34f69cf-233a-43f1-858d-a9f71d070d94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:16:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:43.138 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc57073ad-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:16:43 compute-0 kernel: tapc57073ad-80: left promiscuous mode
Nov 25 17:16:43 compute-0 nova_compute[254092]: 2025-11-25 17:16:43.139 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:43 compute-0 nova_compute[254092]: 2025-11-25 17:16:43.151 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:43.156 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3c5d3774-254d-44eb-bf9f-5e011ad49047]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:16:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:43.175 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2f2780a5-30ae-4157-9225-d861794b6164]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:16:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:43.176 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[78f54752-94eb-43a7-a3d2-e16c1cfd7dc4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:16:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:43.197 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fc462a4d-cde8-4850-a6e7-69054a6ed359]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 737518, 'reachable_time': 27575, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 409987, 'error': None, 'target': 'ovnmeta-c57073ad-8c41-459b-9402-c367011860c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:16:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:43.199 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c57073ad-8c41-459b-9402-c367011860c7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 17:16:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:43.199 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[5f321d4a-72bf-4a6b-a678-4145a67d0c25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:16:43 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3823325371' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:16:43 compute-0 ceph-mon[74985]: pgmap v2795: 321 pgs: 321 active+clean; 121 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 5.9 KiB/s wr, 29 op/s
Nov 25 17:16:43 compute-0 nova_compute[254092]: 2025-11-25 17:16:43.613 254096 DEBUG nova.compute.manager [req-81411098-7916-4947-a7a2-e425b6d276bf req-8d26e008-2ff2-471f-a4d7-8d7115b6d364 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Received event network-vif-unplugged-bfd7bca3-f01a-4857-8c51-1085cde3ad00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:16:43 compute-0 nova_compute[254092]: 2025-11-25 17:16:43.613 254096 DEBUG oslo_concurrency.lockutils [req-81411098-7916-4947-a7a2-e425b6d276bf req-8d26e008-2ff2-471f-a4d7-8d7115b6d364 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:16:43 compute-0 nova_compute[254092]: 2025-11-25 17:16:43.613 254096 DEBUG oslo_concurrency.lockutils [req-81411098-7916-4947-a7a2-e425b6d276bf req-8d26e008-2ff2-471f-a4d7-8d7115b6d364 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:16:43 compute-0 nova_compute[254092]: 2025-11-25 17:16:43.614 254096 DEBUG oslo_concurrency.lockutils [req-81411098-7916-4947-a7a2-e425b6d276bf req-8d26e008-2ff2-471f-a4d7-8d7115b6d364 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:16:43 compute-0 nova_compute[254092]: 2025-11-25 17:16:43.614 254096 DEBUG nova.compute.manager [req-81411098-7916-4947-a7a2-e425b6d276bf req-8d26e008-2ff2-471f-a4d7-8d7115b6d364 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] No waiting events found dispatching network-vif-unplugged-bfd7bca3-f01a-4857-8c51-1085cde3ad00 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:16:43 compute-0 nova_compute[254092]: 2025-11-25 17:16:43.614 254096 DEBUG nova.compute.manager [req-81411098-7916-4947-a7a2-e425b6d276bf req-8d26e008-2ff2-471f-a4d7-8d7115b6d364 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Received event network-vif-unplugged-bfd7bca3-f01a-4857-8c51-1085cde3ad00 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 17:16:43 compute-0 nova_compute[254092]: 2025-11-25 17:16:43.614 254096 DEBUG nova.compute.manager [req-81411098-7916-4947-a7a2-e425b6d276bf req-8d26e008-2ff2-471f-a4d7-8d7115b6d364 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Received event network-vif-plugged-bfd7bca3-f01a-4857-8c51-1085cde3ad00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:16:43 compute-0 nova_compute[254092]: 2025-11-25 17:16:43.615 254096 DEBUG oslo_concurrency.lockutils [req-81411098-7916-4947-a7a2-e425b6d276bf req-8d26e008-2ff2-471f-a4d7-8d7115b6d364 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:16:43 compute-0 nova_compute[254092]: 2025-11-25 17:16:43.615 254096 DEBUG oslo_concurrency.lockutils [req-81411098-7916-4947-a7a2-e425b6d276bf req-8d26e008-2ff2-471f-a4d7-8d7115b6d364 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:16:43 compute-0 nova_compute[254092]: 2025-11-25 17:16:43.615 254096 DEBUG oslo_concurrency.lockutils [req-81411098-7916-4947-a7a2-e425b6d276bf req-8d26e008-2ff2-471f-a4d7-8d7115b6d364 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:16:43 compute-0 nova_compute[254092]: 2025-11-25 17:16:43.615 254096 DEBUG nova.compute.manager [req-81411098-7916-4947-a7a2-e425b6d276bf req-8d26e008-2ff2-471f-a4d7-8d7115b6d364 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] No waiting events found dispatching network-vif-plugged-bfd7bca3-f01a-4857-8c51-1085cde3ad00 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:16:43 compute-0 nova_compute[254092]: 2025-11-25 17:16:43.616 254096 WARNING nova.compute.manager [req-81411098-7916-4947-a7a2-e425b6d276bf req-8d26e008-2ff2-471f-a4d7-8d7115b6d364 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Received unexpected event network-vif-plugged-bfd7bca3-f01a-4857-8c51-1085cde3ad00 for instance with vm_state active and task_state deleting.
Nov 25 17:16:43 compute-0 nova_compute[254092]: 2025-11-25 17:16:43.709 254096 DEBUG nova.network.neutron [req-a50b6de6-da6d-4adf-af83-8024b48f6882 req-d49cf670-f090-432e-821d-6a738459ef69 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Updated VIF entry in instance network info cache for port bfd7bca3-f01a-4857-8c51-1085cde3ad00. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:16:43 compute-0 nova_compute[254092]: 2025-11-25 17:16:43.709 254096 DEBUG nova.network.neutron [req-a50b6de6-da6d-4adf-af83-8024b48f6882 req-d49cf670-f090-432e-821d-6a738459ef69 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Updating instance_info_cache with network_info: [{"id": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "address": "fa:16:3e:f9:85:52", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfd7bca3-f0", "ovs_interfaceid": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "address": "fa:16:3e:8f:6d:38", "network": {"id": "c57073ad-8c41-459b-9402-c367011860c7", "bridge": "br-int", "label": "tempest-network-smoke--937669811", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8f:6d38", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcd75615e-b8", "ovs_interfaceid": "cd75615e-b80b-4685-b424-2c54f7fdbde8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:16:43 compute-0 systemd[1]: run-netns-ovnmeta\x2dc57073ad\x2d8c41\x2d459b\x2d9402\x2dc367011860c7.mount: Deactivated successfully.
Nov 25 17:16:43 compute-0 nova_compute[254092]: 2025-11-25 17:16:43.727 254096 DEBUG oslo_concurrency.lockutils [req-a50b6de6-da6d-4adf-af83-8024b48f6882 req-d49cf670-f090-432e-821d-6a738459ef69 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-cab3a333-1f68-435b-b6cb-a508755c2565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:16:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:16:44 compute-0 nova_compute[254092]: 2025-11-25 17:16:44.488 254096 DEBUG nova.compute.manager [req-76782d9d-c96b-4173-8031-9fca554877e7 req-fe28d0cc-c281-4586-a0a5-b50728361cea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Received event network-vif-unplugged-cd75615e-b80b-4685-b424-2c54f7fdbde8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:16:44 compute-0 nova_compute[254092]: 2025-11-25 17:16:44.488 254096 DEBUG oslo_concurrency.lockutils [req-76782d9d-c96b-4173-8031-9fca554877e7 req-fe28d0cc-c281-4586-a0a5-b50728361cea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:16:44 compute-0 nova_compute[254092]: 2025-11-25 17:16:44.488 254096 DEBUG oslo_concurrency.lockutils [req-76782d9d-c96b-4173-8031-9fca554877e7 req-fe28d0cc-c281-4586-a0a5-b50728361cea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:16:44 compute-0 nova_compute[254092]: 2025-11-25 17:16:44.488 254096 DEBUG oslo_concurrency.lockutils [req-76782d9d-c96b-4173-8031-9fca554877e7 req-fe28d0cc-c281-4586-a0a5-b50728361cea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:16:44 compute-0 nova_compute[254092]: 2025-11-25 17:16:44.489 254096 DEBUG nova.compute.manager [req-76782d9d-c96b-4173-8031-9fca554877e7 req-fe28d0cc-c281-4586-a0a5-b50728361cea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] No waiting events found dispatching network-vif-unplugged-cd75615e-b80b-4685-b424-2c54f7fdbde8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:16:44 compute-0 nova_compute[254092]: 2025-11-25 17:16:44.489 254096 DEBUG nova.compute.manager [req-76782d9d-c96b-4173-8031-9fca554877e7 req-fe28d0cc-c281-4586-a0a5-b50728361cea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Received event network-vif-unplugged-cd75615e-b80b-4685-b424-2c54f7fdbde8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 17:16:44 compute-0 nova_compute[254092]: 2025-11-25 17:16:44.489 254096 DEBUG nova.compute.manager [req-76782d9d-c96b-4173-8031-9fca554877e7 req-fe28d0cc-c281-4586-a0a5-b50728361cea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Received event network-vif-plugged-cd75615e-b80b-4685-b424-2c54f7fdbde8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:16:44 compute-0 nova_compute[254092]: 2025-11-25 17:16:44.489 254096 DEBUG oslo_concurrency.lockutils [req-76782d9d-c96b-4173-8031-9fca554877e7 req-fe28d0cc-c281-4586-a0a5-b50728361cea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:16:44 compute-0 nova_compute[254092]: 2025-11-25 17:16:44.489 254096 DEBUG oslo_concurrency.lockutils [req-76782d9d-c96b-4173-8031-9fca554877e7 req-fe28d0cc-c281-4586-a0a5-b50728361cea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:16:44 compute-0 nova_compute[254092]: 2025-11-25 17:16:44.489 254096 DEBUG oslo_concurrency.lockutils [req-76782d9d-c96b-4173-8031-9fca554877e7 req-fe28d0cc-c281-4586-a0a5-b50728361cea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:16:44 compute-0 nova_compute[254092]: 2025-11-25 17:16:44.490 254096 DEBUG nova.compute.manager [req-76782d9d-c96b-4173-8031-9fca554877e7 req-fe28d0cc-c281-4586-a0a5-b50728361cea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] No waiting events found dispatching network-vif-plugged-cd75615e-b80b-4685-b424-2c54f7fdbde8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:16:44 compute-0 nova_compute[254092]: 2025-11-25 17:16:44.490 254096 WARNING nova.compute.manager [req-76782d9d-c96b-4173-8031-9fca554877e7 req-fe28d0cc-c281-4586-a0a5-b50728361cea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Received unexpected event network-vif-plugged-cd75615e-b80b-4685-b424-2c54f7fdbde8 for instance with vm_state active and task_state deleting.
Nov 25 17:16:44 compute-0 nova_compute[254092]: 2025-11-25 17:16:44.490 254096 DEBUG nova.compute.manager [req-76782d9d-c96b-4173-8031-9fca554877e7 req-fe28d0cc-c281-4586-a0a5-b50728361cea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Received event network-vif-deleted-cd75615e-b80b-4685-b424-2c54f7fdbde8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:16:44 compute-0 nova_compute[254092]: 2025-11-25 17:16:44.490 254096 INFO nova.compute.manager [req-76782d9d-c96b-4173-8031-9fca554877e7 req-fe28d0cc-c281-4586-a0a5-b50728361cea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Neutron deleted interface cd75615e-b80b-4685-b424-2c54f7fdbde8; detaching it from the instance and deleting it from the info cache
Nov 25 17:16:44 compute-0 nova_compute[254092]: 2025-11-25 17:16:44.490 254096 DEBUG nova.network.neutron [req-76782d9d-c96b-4173-8031-9fca554877e7 req-fe28d0cc-c281-4586-a0a5-b50728361cea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Updating instance_info_cache with network_info: [{"id": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "address": "fa:16:3e:f9:85:52", "network": {"id": "8c1b4538-7e1d-41aa-8e91-8a97df87ce48", "bridge": "br-int", "label": "tempest-network-smoke--1895388593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbfd7bca3-f0", "ovs_interfaceid": "bfd7bca3-f01a-4857-8c51-1085cde3ad00", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:16:44 compute-0 nova_compute[254092]: 2025-11-25 17:16:44.512 254096 DEBUG nova.compute.manager [req-76782d9d-c96b-4173-8031-9fca554877e7 req-fe28d0cc-c281-4586-a0a5-b50728361cea a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Detach interface failed, port_id=cd75615e-b80b-4685-b424-2c54f7fdbde8, reason: Instance cab3a333-1f68-435b-b6cb-a508755c2565 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 25 17:16:44 compute-0 nova_compute[254092]: 2025-11-25 17:16:44.573 254096 DEBUG nova.network.neutron [-] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:16:44 compute-0 nova_compute[254092]: 2025-11-25 17:16:44.588 254096 INFO nova.compute.manager [-] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Took 1.47 seconds to deallocate network for instance.
Nov 25 17:16:44 compute-0 nova_compute[254092]: 2025-11-25 17:16:44.638 254096 DEBUG oslo_concurrency.lockutils [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:16:44 compute-0 nova_compute[254092]: 2025-11-25 17:16:44.639 254096 DEBUG oslo_concurrency.lockutils [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:16:44 compute-0 nova_compute[254092]: 2025-11-25 17:16:44.696 254096 DEBUG oslo_concurrency.processutils [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:16:44 compute-0 nova_compute[254092]: 2025-11-25 17:16:44.815 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:16:44 compute-0 nova_compute[254092]: 2025-11-25 17:16:44.816 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:16:44 compute-0 nova_compute[254092]: 2025-11-25 17:16:44.816 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:16:44 compute-0 nova_compute[254092]: 2025-11-25 17:16:44.816 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:16:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2796: 321 pgs: 321 active+clean; 121 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 5.9 KiB/s wr, 29 op/s
Nov 25 17:16:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:16:45 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/854538254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:16:45 compute-0 nova_compute[254092]: 2025-11-25 17:16:45.130 254096 DEBUG oslo_concurrency.processutils [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:16:45 compute-0 nova_compute[254092]: 2025-11-25 17:16:45.137 254096 DEBUG nova.compute.provider_tree [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:16:45 compute-0 nova_compute[254092]: 2025-11-25 17:16:45.160 254096 DEBUG nova.scheduler.client.report [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:16:45 compute-0 nova_compute[254092]: 2025-11-25 17:16:45.179 254096 DEBUG oslo_concurrency.lockutils [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.541s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:16:45 compute-0 nova_compute[254092]: 2025-11-25 17:16:45.205 254096 INFO nova.scheduler.client.report [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Deleted allocations for instance cab3a333-1f68-435b-b6cb-a508755c2565
Nov 25 17:16:45 compute-0 nova_compute[254092]: 2025-11-25 17:16:45.252 254096 DEBUG oslo_concurrency.lockutils [None req-c3a50bba-09ba-4338-85d1-4af81c47d123 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "cab3a333-1f68-435b-b6cb-a508755c2565" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:16:45 compute-0 nova_compute[254092]: 2025-11-25 17:16:45.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:16:45 compute-0 ceph-mon[74985]: pgmap v2796: 321 pgs: 321 active+clean; 121 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 5.9 KiB/s wr, 29 op/s
Nov 25 17:16:45 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/854538254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:16:46 compute-0 nova_compute[254092]: 2025-11-25 17:16:46.029 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:46 compute-0 nova_compute[254092]: 2025-11-25 17:16:46.600 254096 DEBUG nova.compute.manager [req-a72a272b-159a-4a45-b40d-3c8ee2e34359 req-66d0cd3e-8702-425b-a5be-b5b16563f244 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Received event network-vif-deleted-bfd7bca3-f01a-4857-8c51-1085cde3ad00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:16:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2797: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 7.1 KiB/s wr, 57 op/s
Nov 25 17:16:47 compute-0 nova_compute[254092]: 2025-11-25 17:16:47.719 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:47 compute-0 ceph-mon[74985]: pgmap v2797: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 7.1 KiB/s wr, 57 op/s
Nov 25 17:16:48 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:16:48.121 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:16:48 compute-0 nova_compute[254092]: 2025-11-25 17:16:48.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:16:48 compute-0 nova_compute[254092]: 2025-11-25 17:16:48.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:16:48 compute-0 nova_compute[254092]: 2025-11-25 17:16:48.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:16:48 compute-0 nova_compute[254092]: 2025-11-25 17:16:48.512 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 17:16:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2798: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 7.0 KiB/s wr, 50 op/s
Nov 25 17:16:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:16:50 compute-0 ceph-mon[74985]: pgmap v2798: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 7.0 KiB/s wr, 50 op/s
Nov 25 17:16:50 compute-0 nova_compute[254092]: 2025-11-25 17:16:50.625 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764090995.623302, f0fce250-5e4a-4063-a0c9-a2285f68c22e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:16:50 compute-0 nova_compute[254092]: 2025-11-25 17:16:50.625 254096 INFO nova.compute.manager [-] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] VM Stopped (Lifecycle Event)
Nov 25 17:16:50 compute-0 nova_compute[254092]: 2025-11-25 17:16:50.659 254096 DEBUG nova.compute.manager [None req-20073eca-0183-4492-be5f-fcc3ae87ef38 - - - - - -] [instance: f0fce250-5e4a-4063-a0c9-a2285f68c22e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:16:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2799: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 7.0 KiB/s wr, 50 op/s
Nov 25 17:16:51 compute-0 nova_compute[254092]: 2025-11-25 17:16:51.030 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:16:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:16:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:16:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:16:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 17:16:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:16:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:16:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:16:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:16:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:16:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 17:16:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:16:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:16:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:16:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:16:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:16:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:16:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:16:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:16:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:16:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:16:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:16:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:16:52 compute-0 ceph-mon[74985]: pgmap v2799: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 7.0 KiB/s wr, 50 op/s
Nov 25 17:16:52 compute-0 nova_compute[254092]: 2025-11-25 17:16:52.773 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2800: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 6.2 KiB/s wr, 34 op/s
Nov 25 17:16:53 compute-0 nova_compute[254092]: 2025-11-25 17:16:53.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:16:54 compute-0 ceph-mon[74985]: pgmap v2800: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 6.2 KiB/s wr, 34 op/s
Nov 25 17:16:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:16:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2801: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 17:16:55 compute-0 nova_compute[254092]: 2025-11-25 17:16:55.247 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:16:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/396234751' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:16:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:16:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/396234751' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:16:55 compute-0 nova_compute[254092]: 2025-11-25 17:16:55.341 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:56 compute-0 nova_compute[254092]: 2025-11-25 17:16:56.032 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:56 compute-0 ceph-mon[74985]: pgmap v2801: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 17:16:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/396234751' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:16:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/396234751' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:16:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2802: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 17:16:57 compute-0 nova_compute[254092]: 2025-11-25 17:16:57.684 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764091002.6830397, cab3a333-1f68-435b-b6cb-a508755c2565 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:16:57 compute-0 nova_compute[254092]: 2025-11-25 17:16:57.685 254096 INFO nova.compute.manager [-] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] VM Stopped (Lifecycle Event)
Nov 25 17:16:57 compute-0 nova_compute[254092]: 2025-11-25 17:16:57.713 254096 DEBUG nova.compute.manager [None req-451e0bf5-c53b-448b-8a50-8d2dc3cdfad8 - - - - - -] [instance: cab3a333-1f68-435b-b6cb-a508755c2565] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:16:57 compute-0 nova_compute[254092]: 2025-11-25 17:16:57.776 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:16:58 compute-0 ceph-mon[74985]: pgmap v2802: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 17:16:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2803: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:16:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:17:00 compute-0 ceph-mon[74985]: pgmap v2803: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:17:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2804: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:17:01 compute-0 nova_compute[254092]: 2025-11-25 17:17:01.089 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:17:02 compute-0 ceph-mon[74985]: pgmap v2804: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:17:02 compute-0 nova_compute[254092]: 2025-11-25 17:17:02.778 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:17:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2805: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:17:03 compute-0 podman[410012]: 2025-11-25 17:17:03.644395021 +0000 UTC m=+0.054331600 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 17:17:03 compute-0 podman[410011]: 2025-11-25 17:17:03.646673862 +0000 UTC m=+0.062412969 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0)
Nov 25 17:17:03 compute-0 podman[410013]: 2025-11-25 17:17:03.677730078 +0000 UTC m=+0.081554691 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 17:17:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:17:04 compute-0 ceph-mon[74985]: pgmap v2805: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:17:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2806: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:17:06 compute-0 nova_compute[254092]: 2025-11-25 17:17:06.090 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:17:06 compute-0 ceph-mon[74985]: pgmap v2806: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:17:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2807: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:17:07 compute-0 nova_compute[254092]: 2025-11-25 17:17:07.781 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:17:08 compute-0 ceph-mon[74985]: pgmap v2807: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:17:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2808: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:17:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:17:09 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #132. Immutable memtables: 0.
Nov 25 17:17:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:09.123999) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 17:17:09 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 132
Nov 25 17:17:09 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091029124028, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 1471, "num_deletes": 250, "total_data_size": 2338260, "memory_usage": 2371624, "flush_reason": "Manual Compaction"}
Nov 25 17:17:09 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #133: started
Nov 25 17:17:09 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091029133476, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 133, "file_size": 1360802, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 57102, "largest_seqno": 58572, "table_properties": {"data_size": 1355684, "index_size": 2385, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13394, "raw_average_key_size": 20, "raw_value_size": 1344520, "raw_average_value_size": 2081, "num_data_blocks": 109, "num_entries": 646, "num_filter_entries": 646, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764090875, "oldest_key_time": 1764090875, "file_creation_time": 1764091029, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 133, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:17:09 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 9511 microseconds, and 3714 cpu microseconds.
Nov 25 17:17:09 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:17:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:09.133510) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #133: 1360802 bytes OK
Nov 25 17:17:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:09.133524) [db/memtable_list.cc:519] [default] Level-0 commit table #133 started
Nov 25 17:17:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:09.136194) [db/memtable_list.cc:722] [default] Level-0 commit table #133: memtable #1 done
Nov 25 17:17:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:09.136207) EVENT_LOG_v1 {"time_micros": 1764091029136202, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 17:17:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:09.136219) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 17:17:09 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 2331793, prev total WAL file size 2331793, number of live WAL files 2.
Nov 25 17:17:09 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000129.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:17:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:09.136853) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323630' seq:72057594037927935, type:22 .. '6D6772737461740032353131' seq:0, type:0; will stop at (end)
Nov 25 17:17:09 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 17:17:09 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [133(1328KB)], [131(10176KB)]
Nov 25 17:17:09 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091029136917, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [133], "files_L6": [131], "score": -1, "input_data_size": 11781872, "oldest_snapshot_seqno": -1}
Nov 25 17:17:09 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #134: 7858 keys, 9413788 bytes, temperature: kUnknown
Nov 25 17:17:09 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091029192026, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 134, "file_size": 9413788, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9364473, "index_size": 28594, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19653, "raw_key_size": 205346, "raw_average_key_size": 26, "raw_value_size": 9227258, "raw_average_value_size": 1174, "num_data_blocks": 1116, "num_entries": 7858, "num_filter_entries": 7858, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764091029, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:17:09 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:17:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:09.192273) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 9413788 bytes
Nov 25 17:17:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:09.193614) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 213.9 rd, 170.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 9.9 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(15.6) write-amplify(6.9) OK, records in: 8299, records dropped: 441 output_compression: NoCompression
Nov 25 17:17:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:09.193631) EVENT_LOG_v1 {"time_micros": 1764091029193623, "job": 80, "event": "compaction_finished", "compaction_time_micros": 55085, "compaction_time_cpu_micros": 23941, "output_level": 6, "num_output_files": 1, "total_output_size": 9413788, "num_input_records": 8299, "num_output_records": 7858, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 17:17:09 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000133.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:17:09 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091029194190, "job": 80, "event": "table_file_deletion", "file_number": 133}
Nov 25 17:17:09 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:17:09 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091029196314, "job": 80, "event": "table_file_deletion", "file_number": 131}
Nov 25 17:17:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:09.136749) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:17:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:09.196469) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:17:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:09.196478) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:17:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:09.196482) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:17:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:09.196484) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:17:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:09.196488) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:17:09 compute-0 sudo[410073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:17:09 compute-0 sudo[410073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:17:09 compute-0 sudo[410073]: pam_unix(sudo:session): session closed for user root
Nov 25 17:17:09 compute-0 sudo[410098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:17:09 compute-0 sudo[410098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:17:09 compute-0 sudo[410098]: pam_unix(sudo:session): session closed for user root
Nov 25 17:17:09 compute-0 sudo[410123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:17:09 compute-0 sudo[410123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:17:09 compute-0 sudo[410123]: pam_unix(sudo:session): session closed for user root
Nov 25 17:17:09 compute-0 sudo[410148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:17:09 compute-0 sudo[410148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:17:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:17:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:17:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:17:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:17:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:17:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:17:10 compute-0 ceph-mon[74985]: pgmap v2808: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:17:10 compute-0 sudo[410148]: pam_unix(sudo:session): session closed for user root
Nov 25 17:17:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:17:10 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:17:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:17:10 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:17:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:17:10 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:17:10 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev f3a34970-2565-4173-872f-3be1704826b5 does not exist
Nov 25 17:17:10 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 191f69bf-3462-4d21-a7b9-d2652e845f5b does not exist
Nov 25 17:17:10 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 96fd0942-8196-4dda-a999-435ccf4dd088 does not exist
Nov 25 17:17:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:17:10 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:17:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:17:10 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:17:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:17:10 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:17:10 compute-0 sudo[410204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:17:10 compute-0 sudo[410204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:17:10 compute-0 sudo[410204]: pam_unix(sudo:session): session closed for user root
Nov 25 17:17:10 compute-0 sudo[410229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:17:10 compute-0 sudo[410229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:17:10 compute-0 sudo[410229]: pam_unix(sudo:session): session closed for user root
Nov 25 17:17:10 compute-0 sudo[410254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:17:10 compute-0 sudo[410254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:17:10 compute-0 sudo[410254]: pam_unix(sudo:session): session closed for user root
Nov 25 17:17:10 compute-0 sudo[410279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:17:10 compute-0 sudo[410279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:17:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2809: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:17:11 compute-0 nova_compute[254092]: 2025-11-25 17:17:11.092 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:17:11 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:17:11 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:17:11 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:17:11 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:17:11 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:17:11 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:17:11 compute-0 podman[410346]: 2025-11-25 17:17:11.220460777 +0000 UTC m=+0.054343790 container create 4dcdfbf1d91e8227f316ced94875ebe2c4ca9646573c32e86aad2fdfbc39a47a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_easley, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:17:11 compute-0 systemd[1]: Started libpod-conmon-4dcdfbf1d91e8227f316ced94875ebe2c4ca9646573c32e86aad2fdfbc39a47a.scope.
Nov 25 17:17:11 compute-0 podman[410346]: 2025-11-25 17:17:11.191152709 +0000 UTC m=+0.025035742 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:17:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:17:11 compute-0 podman[410346]: 2025-11-25 17:17:11.334578583 +0000 UTC m=+0.168461596 container init 4dcdfbf1d91e8227f316ced94875ebe2c4ca9646573c32e86aad2fdfbc39a47a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:17:11 compute-0 podman[410346]: 2025-11-25 17:17:11.345214552 +0000 UTC m=+0.179097565 container start 4dcdfbf1d91e8227f316ced94875ebe2c4ca9646573c32e86aad2fdfbc39a47a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_easley, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:17:11 compute-0 podman[410346]: 2025-11-25 17:17:11.349098048 +0000 UTC m=+0.182981181 container attach 4dcdfbf1d91e8227f316ced94875ebe2c4ca9646573c32e86aad2fdfbc39a47a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_easley, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 17:17:11 compute-0 clever_easley[410362]: 167 167
Nov 25 17:17:11 compute-0 systemd[1]: libpod-4dcdfbf1d91e8227f316ced94875ebe2c4ca9646573c32e86aad2fdfbc39a47a.scope: Deactivated successfully.
Nov 25 17:17:11 compute-0 podman[410346]: 2025-11-25 17:17:11.353186109 +0000 UTC m=+0.187069132 container died 4dcdfbf1d91e8227f316ced94875ebe2c4ca9646573c32e86aad2fdfbc39a47a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 17:17:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-4098c339792f0097d66fee199086caf1118f264fb5bf58ff80fb64baef081632-merged.mount: Deactivated successfully.
Nov 25 17:17:11 compute-0 podman[410346]: 2025-11-25 17:17:11.402111912 +0000 UTC m=+0.235994925 container remove 4dcdfbf1d91e8227f316ced94875ebe2c4ca9646573c32e86aad2fdfbc39a47a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 17:17:11 compute-0 systemd[1]: libpod-conmon-4dcdfbf1d91e8227f316ced94875ebe2c4ca9646573c32e86aad2fdfbc39a47a.scope: Deactivated successfully.
Nov 25 17:17:11 compute-0 podman[410385]: 2025-11-25 17:17:11.624577797 +0000 UTC m=+0.060087967 container create 231d94553c33cb7c7e6eb42c257e55cbb827e9845c89a77848da40ff74306832 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_heisenberg, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 17:17:11 compute-0 systemd[1]: Started libpod-conmon-231d94553c33cb7c7e6eb42c257e55cbb827e9845c89a77848da40ff74306832.scope.
Nov 25 17:17:11 compute-0 podman[410385]: 2025-11-25 17:17:11.598909318 +0000 UTC m=+0.034419558 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:17:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:17:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba8c0684734379b4da20109081601f760579a29bd1423e6d19eedf659e12729f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:17:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba8c0684734379b4da20109081601f760579a29bd1423e6d19eedf659e12729f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:17:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba8c0684734379b4da20109081601f760579a29bd1423e6d19eedf659e12729f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:17:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba8c0684734379b4da20109081601f760579a29bd1423e6d19eedf659e12729f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:17:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba8c0684734379b4da20109081601f760579a29bd1423e6d19eedf659e12729f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:17:11 compute-0 podman[410385]: 2025-11-25 17:17:11.718148844 +0000 UTC m=+0.153659004 container init 231d94553c33cb7c7e6eb42c257e55cbb827e9845c89a77848da40ff74306832 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 17:17:11 compute-0 podman[410385]: 2025-11-25 17:17:11.727483368 +0000 UTC m=+0.162993498 container start 231d94553c33cb7c7e6eb42c257e55cbb827e9845c89a77848da40ff74306832 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:17:11 compute-0 podman[410385]: 2025-11-25 17:17:11.731179328 +0000 UTC m=+0.166689458 container attach 231d94553c33cb7c7e6eb42c257e55cbb827e9845c89a77848da40ff74306832 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 17:17:12 compute-0 ceph-mon[74985]: pgmap v2809: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:17:12 compute-0 nova_compute[254092]: 2025-11-25 17:17:12.783 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:17:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2810: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:17:12 compute-0 tender_heisenberg[410401]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:17:12 compute-0 tender_heisenberg[410401]: --> relative data size: 1.0
Nov 25 17:17:12 compute-0 tender_heisenberg[410401]: --> All data devices are unavailable
Nov 25 17:17:12 compute-0 systemd[1]: libpod-231d94553c33cb7c7e6eb42c257e55cbb827e9845c89a77848da40ff74306832.scope: Deactivated successfully.
Nov 25 17:17:12 compute-0 podman[410385]: 2025-11-25 17:17:12.956491611 +0000 UTC m=+1.392001751 container died 231d94553c33cb7c7e6eb42c257e55cbb827e9845c89a77848da40ff74306832 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 17:17:12 compute-0 systemd[1]: libpod-231d94553c33cb7c7e6eb42c257e55cbb827e9845c89a77848da40ff74306832.scope: Consumed 1.188s CPU time.
Nov 25 17:17:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba8c0684734379b4da20109081601f760579a29bd1423e6d19eedf659e12729f-merged.mount: Deactivated successfully.
Nov 25 17:17:13 compute-0 podman[410385]: 2025-11-25 17:17:13.522334973 +0000 UTC m=+1.957845103 container remove 231d94553c33cb7c7e6eb42c257e55cbb827e9845c89a77848da40ff74306832 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_heisenberg, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 17:17:13 compute-0 systemd[1]: libpod-conmon-231d94553c33cb7c7e6eb42c257e55cbb827e9845c89a77848da40ff74306832.scope: Deactivated successfully.
Nov 25 17:17:13 compute-0 sudo[410279]: pam_unix(sudo:session): session closed for user root
Nov 25 17:17:13 compute-0 sudo[410444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:17:13 compute-0 sudo[410444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:17:13 compute-0 sudo[410444]: pam_unix(sudo:session): session closed for user root
Nov 25 17:17:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:13.654 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:17:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:13.656 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:17:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:13.656 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:17:13 compute-0 sudo[410469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:17:13 compute-0 sudo[410469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:17:13 compute-0 sudo[410469]: pam_unix(sudo:session): session closed for user root
Nov 25 17:17:13 compute-0 sudo[410494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:17:13 compute-0 sudo[410494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:17:13 compute-0 sudo[410494]: pam_unix(sudo:session): session closed for user root
Nov 25 17:17:13 compute-0 sudo[410519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:17:13 compute-0 sudo[410519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:17:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:17:14 compute-0 podman[410583]: 2025-11-25 17:17:14.334966012 +0000 UTC m=+0.031448488 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:17:14 compute-0 podman[410583]: 2025-11-25 17:17:14.465424932 +0000 UTC m=+0.161907358 container create 895d724313700c38ecbf52b6c6e2241ea1ed66dfe7cb81251540baacf6d1a816 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:17:14 compute-0 ceph-mon[74985]: pgmap v2810: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:17:14 compute-0 systemd[1]: Started libpod-conmon-895d724313700c38ecbf52b6c6e2241ea1ed66dfe7cb81251540baacf6d1a816.scope.
Nov 25 17:17:14 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:17:14 compute-0 podman[410583]: 2025-11-25 17:17:14.582289164 +0000 UTC m=+0.278771690 container init 895d724313700c38ecbf52b6c6e2241ea1ed66dfe7cb81251540baacf6d1a816 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 17:17:14 compute-0 podman[410583]: 2025-11-25 17:17:14.594060614 +0000 UTC m=+0.290543080 container start 895d724313700c38ecbf52b6c6e2241ea1ed66dfe7cb81251540baacf6d1a816 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:17:14 compute-0 beautiful_brahmagupta[410599]: 167 167
Nov 25 17:17:14 compute-0 systemd[1]: libpod-895d724313700c38ecbf52b6c6e2241ea1ed66dfe7cb81251540baacf6d1a816.scope: Deactivated successfully.
Nov 25 17:17:14 compute-0 podman[410583]: 2025-11-25 17:17:14.626318562 +0000 UTC m=+0.322801028 container attach 895d724313700c38ecbf52b6c6e2241ea1ed66dfe7cb81251540baacf6d1a816 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brahmagupta, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:17:14 compute-0 podman[410583]: 2025-11-25 17:17:14.626876467 +0000 UTC m=+0.323358903 container died 895d724313700c38ecbf52b6c6e2241ea1ed66dfe7cb81251540baacf6d1a816 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brahmagupta, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 17:17:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-faa2d5f466162a94993cab67db2b84414dd1e83deba03de099502e750531e56e-merged.mount: Deactivated successfully.
Nov 25 17:17:14 compute-0 podman[410583]: 2025-11-25 17:17:14.770773094 +0000 UTC m=+0.467255540 container remove 895d724313700c38ecbf52b6c6e2241ea1ed66dfe7cb81251540baacf6d1a816 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 17:17:14 compute-0 systemd[1]: libpod-conmon-895d724313700c38ecbf52b6c6e2241ea1ed66dfe7cb81251540baacf6d1a816.scope: Deactivated successfully.
Nov 25 17:17:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2811: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:17:15 compute-0 podman[410626]: 2025-11-25 17:17:15.009671187 +0000 UTC m=+0.097577507 container create 753a193c7055896498d549b55a0addb07766de1de2cb8e9a7509a8b7d2d23a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_yalow, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:17:15 compute-0 podman[410626]: 2025-11-25 17:17:14.983175756 +0000 UTC m=+0.071082116 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:17:15 compute-0 systemd[1]: Started libpod-conmon-753a193c7055896498d549b55a0addb07766de1de2cb8e9a7509a8b7d2d23a41.scope.
Nov 25 17:17:15 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:17:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/202be48555c4b125a224f5882ee74b3f401105365e46bdf6e61a93ebdc84f97f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:17:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/202be48555c4b125a224f5882ee74b3f401105365e46bdf6e61a93ebdc84f97f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:17:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/202be48555c4b125a224f5882ee74b3f401105365e46bdf6e61a93ebdc84f97f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:17:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/202be48555c4b125a224f5882ee74b3f401105365e46bdf6e61a93ebdc84f97f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:17:15 compute-0 podman[410626]: 2025-11-25 17:17:15.160832121 +0000 UTC m=+0.248738501 container init 753a193c7055896498d549b55a0addb07766de1de2cb8e9a7509a8b7d2d23a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_yalow, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:17:15 compute-0 podman[410626]: 2025-11-25 17:17:15.178925464 +0000 UTC m=+0.266831784 container start 753a193c7055896498d549b55a0addb07766de1de2cb8e9a7509a8b7d2d23a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_yalow, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 17:17:15 compute-0 podman[410626]: 2025-11-25 17:17:15.192066932 +0000 UTC m=+0.279973262 container attach 753a193c7055896498d549b55a0addb07766de1de2cb8e9a7509a8b7d2d23a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_yalow, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 17:17:15 compute-0 ceph-mon[74985]: pgmap v2811: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:17:15 compute-0 jovial_yalow[410643]: {
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:     "0": [
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:         {
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:             "devices": [
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:                 "/dev/loop3"
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:             ],
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:             "lv_name": "ceph_lv0",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:             "lv_size": "21470642176",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:             "name": "ceph_lv0",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:             "tags": {
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:                 "ceph.cluster_name": "ceph",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:                 "ceph.crush_device_class": "",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:                 "ceph.encrypted": "0",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:                 "ceph.osd_id": "0",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:                 "ceph.type": "block",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:                 "ceph.vdo": "0"
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:             },
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:             "type": "block",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:             "vg_name": "ceph_vg0"
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:         }
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:     ],
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:     "1": [
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:         {
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:             "devices": [
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:                 "/dev/loop4"
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:             ],
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:             "lv_name": "ceph_lv1",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:             "lv_size": "21470642176",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:             "name": "ceph_lv1",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:             "tags": {
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:                 "ceph.cluster_name": "ceph",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:                 "ceph.crush_device_class": "",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:                 "ceph.encrypted": "0",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:                 "ceph.osd_id": "1",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:                 "ceph.type": "block",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:                 "ceph.vdo": "0"
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:             },
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:             "type": "block",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:             "vg_name": "ceph_vg1"
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:         }
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:     ],
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:     "2": [
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:         {
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:             "devices": [
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:                 "/dev/loop5"
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:             ],
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:             "lv_name": "ceph_lv2",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:             "lv_size": "21470642176",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:             "name": "ceph_lv2",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:             "tags": {
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:                 "ceph.cluster_name": "ceph",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:                 "ceph.crush_device_class": "",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:                 "ceph.encrypted": "0",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:                 "ceph.osd_id": "2",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:                 "ceph.type": "block",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:                 "ceph.vdo": "0"
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:             },
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:             "type": "block",
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:             "vg_name": "ceph_vg2"
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:         }
Nov 25 17:17:15 compute-0 jovial_yalow[410643]:     ]
Nov 25 17:17:15 compute-0 jovial_yalow[410643]: }
Nov 25 17:17:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:15.999 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:01:70 2001:db8:0:1:f816:3eff:fe96:170 2001:db8::f816:3eff:fe96:170'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe96:170/64 2001:db8::f816:3eff:fe96:170/64', 'neutron:device_id': 'ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-51f47401-ed2b-45e2-aea1-5cbbd48e5245', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=945443dd-5eab-4f86-892f-c2db25b0d0db, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=7624b22b-8369-4a12-940b-9f95890a4040) old=Port_Binding(mac=['fa:16:3e:96:01:70 2001:db8::f816:3eff:fe96:170'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe96:170/64', 'neutron:device_id': 'ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-51f47401-ed2b-45e2-aea1-5cbbd48e5245', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:17:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:16.003 163338 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 7624b22b-8369-4a12-940b-9f95890a4040 in datapath 51f47401-ed2b-45e2-aea1-5cbbd48e5245 updated
Nov 25 17:17:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:16.004 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 51f47401-ed2b-45e2-aea1-5cbbd48e5245, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 17:17:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:16.006 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bfc6abeb-72f0-453c-8ac4-4c08a062d4cb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:17:16 compute-0 systemd[1]: libpod-753a193c7055896498d549b55a0addb07766de1de2cb8e9a7509a8b7d2d23a41.scope: Deactivated successfully.
Nov 25 17:17:16 compute-0 podman[410626]: 2025-11-25 17:17:16.01823283 +0000 UTC m=+1.106139150 container died 753a193c7055896498d549b55a0addb07766de1de2cb8e9a7509a8b7d2d23a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:17:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-202be48555c4b125a224f5882ee74b3f401105365e46bdf6e61a93ebdc84f97f-merged.mount: Deactivated successfully.
Nov 25 17:17:16 compute-0 podman[410626]: 2025-11-25 17:17:16.067369686 +0000 UTC m=+1.155276006 container remove 753a193c7055896498d549b55a0addb07766de1de2cb8e9a7509a8b7d2d23a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 17:17:16 compute-0 systemd[1]: libpod-conmon-753a193c7055896498d549b55a0addb07766de1de2cb8e9a7509a8b7d2d23a41.scope: Deactivated successfully.
Nov 25 17:17:16 compute-0 nova_compute[254092]: 2025-11-25 17:17:16.093 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:17:16 compute-0 sudo[410519]: pam_unix(sudo:session): session closed for user root
Nov 25 17:17:16 compute-0 sudo[410666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:17:16 compute-0 sudo[410666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:17:16 compute-0 sudo[410666]: pam_unix(sudo:session): session closed for user root
Nov 25 17:17:16 compute-0 sudo[410691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:17:16 compute-0 sudo[410691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:17:16 compute-0 sudo[410691]: pam_unix(sudo:session): session closed for user root
Nov 25 17:17:16 compute-0 sudo[410716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:17:16 compute-0 sudo[410716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:17:16 compute-0 sudo[410716]: pam_unix(sudo:session): session closed for user root
Nov 25 17:17:16 compute-0 sudo[410741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:17:16 compute-0 sudo[410741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:17:16 compute-0 podman[410808]: 2025-11-25 17:17:16.690007294 +0000 UTC m=+0.025219988 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:17:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2812: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:17:17 compute-0 podman[410808]: 2025-11-25 17:17:17.002592253 +0000 UTC m=+0.337804837 container create 1992988ad09e758b6e8f6933dee79605a23c55457a7c6ef4b522475c017ecb50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_dhawan, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 17:17:17 compute-0 systemd[1]: Started libpod-conmon-1992988ad09e758b6e8f6933dee79605a23c55457a7c6ef4b522475c017ecb50.scope.
Nov 25 17:17:17 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:17:17 compute-0 podman[410808]: 2025-11-25 17:17:17.213295689 +0000 UTC m=+0.548508293 container init 1992988ad09e758b6e8f6933dee79605a23c55457a7c6ef4b522475c017ecb50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_dhawan, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 17:17:17 compute-0 podman[410808]: 2025-11-25 17:17:17.220425263 +0000 UTC m=+0.555637847 container start 1992988ad09e758b6e8f6933dee79605a23c55457a7c6ef4b522475c017ecb50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_dhawan, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 17:17:17 compute-0 zen_dhawan[410824]: 167 167
Nov 25 17:17:17 compute-0 systemd[1]: libpod-1992988ad09e758b6e8f6933dee79605a23c55457a7c6ef4b522475c017ecb50.scope: Deactivated successfully.
Nov 25 17:17:17 compute-0 podman[410808]: 2025-11-25 17:17:17.227765742 +0000 UTC m=+0.562978396 container attach 1992988ad09e758b6e8f6933dee79605a23c55457a7c6ef4b522475c017ecb50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_dhawan, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 17:17:17 compute-0 conmon[410824]: conmon 1992988ad09e758b6e8f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1992988ad09e758b6e8f6933dee79605a23c55457a7c6ef4b522475c017ecb50.scope/container/memory.events
Nov 25 17:17:17 compute-0 podman[410808]: 2025-11-25 17:17:17.22878265 +0000 UTC m=+0.563995234 container died 1992988ad09e758b6e8f6933dee79605a23c55457a7c6ef4b522475c017ecb50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_dhawan, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 17:17:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-84455048f18bf1fbf698ff4f0edef84cff8eafa06f670baee7ec0568107e85cd-merged.mount: Deactivated successfully.
Nov 25 17:17:17 compute-0 podman[410808]: 2025-11-25 17:17:17.263890585 +0000 UTC m=+0.599103169 container remove 1992988ad09e758b6e8f6933dee79605a23c55457a7c6ef4b522475c017ecb50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_dhawan, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 17:17:17 compute-0 systemd[1]: libpod-conmon-1992988ad09e758b6e8f6933dee79605a23c55457a7c6ef4b522475c017ecb50.scope: Deactivated successfully.
Nov 25 17:17:17 compute-0 podman[410847]: 2025-11-25 17:17:17.443835673 +0000 UTC m=+0.051613806 container create f56b356d58ddbf5fa1801b0c63d0d0258f96e619eeba48c6c6e69e5610a97556 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_greider, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:17:17 compute-0 systemd[1]: Started libpod-conmon-f56b356d58ddbf5fa1801b0c63d0d0258f96e619eeba48c6c6e69e5610a97556.scope.
Nov 25 17:17:17 compute-0 podman[410847]: 2025-11-25 17:17:17.417717102 +0000 UTC m=+0.025495255 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:17:17 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:17:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3920dd905294b1db0126e2fcbf03f8ab7a7b96a17ca1fd5ac2d565c7883ed413/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:17:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3920dd905294b1db0126e2fcbf03f8ab7a7b96a17ca1fd5ac2d565c7883ed413/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:17:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3920dd905294b1db0126e2fcbf03f8ab7a7b96a17ca1fd5ac2d565c7883ed413/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:17:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3920dd905294b1db0126e2fcbf03f8ab7a7b96a17ca1fd5ac2d565c7883ed413/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:17:17 compute-0 podman[410847]: 2025-11-25 17:17:17.559450881 +0000 UTC m=+0.167229034 container init f56b356d58ddbf5fa1801b0c63d0d0258f96e619eeba48c6c6e69e5610a97556 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_greider, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:17:17 compute-0 podman[410847]: 2025-11-25 17:17:17.571119208 +0000 UTC m=+0.178897351 container start f56b356d58ddbf5fa1801b0c63d0d0258f96e619eeba48c6c6e69e5610a97556 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_greider, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 17:17:17 compute-0 podman[410847]: 2025-11-25 17:17:17.574542261 +0000 UTC m=+0.182320404 container attach f56b356d58ddbf5fa1801b0c63d0d0258f96e619eeba48c6c6e69e5610a97556 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:17:17 compute-0 nova_compute[254092]: 2025-11-25 17:17:17.788 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:17:18 compute-0 ceph-mon[74985]: pgmap v2812: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:17:18 compute-0 wonderful_greider[410863]: {
Nov 25 17:17:18 compute-0 wonderful_greider[410863]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:17:18 compute-0 wonderful_greider[410863]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:17:18 compute-0 wonderful_greider[410863]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:17:18 compute-0 wonderful_greider[410863]:         "osd_id": 1,
Nov 25 17:17:18 compute-0 wonderful_greider[410863]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:17:18 compute-0 wonderful_greider[410863]:         "type": "bluestore"
Nov 25 17:17:18 compute-0 wonderful_greider[410863]:     },
Nov 25 17:17:18 compute-0 wonderful_greider[410863]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:17:18 compute-0 wonderful_greider[410863]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:17:18 compute-0 wonderful_greider[410863]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:17:18 compute-0 wonderful_greider[410863]:         "osd_id": 2,
Nov 25 17:17:18 compute-0 wonderful_greider[410863]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:17:18 compute-0 wonderful_greider[410863]:         "type": "bluestore"
Nov 25 17:17:18 compute-0 wonderful_greider[410863]:     },
Nov 25 17:17:18 compute-0 wonderful_greider[410863]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:17:18 compute-0 wonderful_greider[410863]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:17:18 compute-0 wonderful_greider[410863]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:17:18 compute-0 wonderful_greider[410863]:         "osd_id": 0,
Nov 25 17:17:18 compute-0 wonderful_greider[410863]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:17:18 compute-0 wonderful_greider[410863]:         "type": "bluestore"
Nov 25 17:17:18 compute-0 wonderful_greider[410863]:     }
Nov 25 17:17:18 compute-0 wonderful_greider[410863]: }
Nov 25 17:17:18 compute-0 systemd[1]: libpod-f56b356d58ddbf5fa1801b0c63d0d0258f96e619eeba48c6c6e69e5610a97556.scope: Deactivated successfully.
Nov 25 17:17:18 compute-0 systemd[1]: libpod-f56b356d58ddbf5fa1801b0c63d0d0258f96e619eeba48c6c6e69e5610a97556.scope: Consumed 1.194s CPU time.
Nov 25 17:17:18 compute-0 podman[410847]: 2025-11-25 17:17:18.757991474 +0000 UTC m=+1.365769657 container died f56b356d58ddbf5fa1801b0c63d0d0258f96e619eeba48c6c6e69e5610a97556 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:17:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-3920dd905294b1db0126e2fcbf03f8ab7a7b96a17ca1fd5ac2d565c7883ed413-merged.mount: Deactivated successfully.
Nov 25 17:17:18 compute-0 podman[410847]: 2025-11-25 17:17:18.831817573 +0000 UTC m=+1.439595716 container remove f56b356d58ddbf5fa1801b0c63d0d0258f96e619eeba48c6c6e69e5610a97556 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 17:17:18 compute-0 systemd[1]: libpod-conmon-f56b356d58ddbf5fa1801b0c63d0d0258f96e619eeba48c6c6e69e5610a97556.scope: Deactivated successfully.
Nov 25 17:17:18 compute-0 sudo[410741]: pam_unix(sudo:session): session closed for user root
Nov 25 17:17:18 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:17:18 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:17:18 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:17:18 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:17:18 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 5e4bb086-ac9c-486b-97de-49e5da7579cd does not exist
Nov 25 17:17:18 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 1cf3d535-71df-4b06-819c-0b1738041a9a does not exist
Nov 25 17:17:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2813: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:17:19 compute-0 sudo[410911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:17:19 compute-0 sudo[410911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:17:19 compute-0 sudo[410911]: pam_unix(sudo:session): session closed for user root
Nov 25 17:17:19 compute-0 sudo[410936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:17:19 compute-0 sudo[410936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:17:19 compute-0 sudo[410936]: pam_unix(sudo:session): session closed for user root
Nov 25 17:17:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:17:19 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:17:19 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:17:19 compute-0 ceph-mon[74985]: pgmap v2813: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:17:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2814: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:17:21 compute-0 nova_compute[254092]: 2025-11-25 17:17:21.095 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:17:21 compute-0 ceph-mon[74985]: pgmap v2814: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:17:22 compute-0 nova_compute[254092]: 2025-11-25 17:17:22.793 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:17:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2815: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:17:23 compute-0 ceph-mon[74985]: pgmap v2815: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:17:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:17:24 compute-0 nova_compute[254092]: 2025-11-25 17:17:24.368 254096 DEBUG oslo_concurrency.lockutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:17:24 compute-0 nova_compute[254092]: 2025-11-25 17:17:24.368 254096 DEBUG oslo_concurrency.lockutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:17:24 compute-0 nova_compute[254092]: 2025-11-25 17:17:24.400 254096 DEBUG nova.compute.manager [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 17:17:24 compute-0 nova_compute[254092]: 2025-11-25 17:17:24.483 254096 DEBUG oslo_concurrency.lockutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:17:24 compute-0 nova_compute[254092]: 2025-11-25 17:17:24.483 254096 DEBUG oslo_concurrency.lockutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:17:24 compute-0 nova_compute[254092]: 2025-11-25 17:17:24.493 254096 DEBUG nova.virt.hardware [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 17:17:24 compute-0 nova_compute[254092]: 2025-11-25 17:17:24.493 254096 INFO nova.compute.claims [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Claim successful on node compute-0.ctlplane.example.com
Nov 25 17:17:24 compute-0 nova_compute[254092]: 2025-11-25 17:17:24.644 254096 DEBUG oslo_concurrency.processutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:17:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2816: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:17:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:17:25 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3545645781' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:17:25 compute-0 nova_compute[254092]: 2025-11-25 17:17:25.067 254096 DEBUG oslo_concurrency.processutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:17:25 compute-0 nova_compute[254092]: 2025-11-25 17:17:25.074 254096 DEBUG nova.compute.provider_tree [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:17:25 compute-0 nova_compute[254092]: 2025-11-25 17:17:25.099 254096 DEBUG nova.scheduler.client.report [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:17:25 compute-0 nova_compute[254092]: 2025-11-25 17:17:25.134 254096 DEBUG oslo_concurrency.lockutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:17:25 compute-0 nova_compute[254092]: 2025-11-25 17:17:25.136 254096 DEBUG nova.compute.manager [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 17:17:25 compute-0 nova_compute[254092]: 2025-11-25 17:17:25.225 254096 DEBUG nova.compute.manager [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 17:17:25 compute-0 nova_compute[254092]: 2025-11-25 17:17:25.226 254096 DEBUG nova.network.neutron [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 17:17:25 compute-0 nova_compute[254092]: 2025-11-25 17:17:25.249 254096 INFO nova.virt.libvirt.driver [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 17:17:25 compute-0 nova_compute[254092]: 2025-11-25 17:17:25.273 254096 DEBUG nova.compute.manager [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 17:17:25 compute-0 nova_compute[254092]: 2025-11-25 17:17:25.391 254096 DEBUG nova.compute.manager [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 17:17:25 compute-0 nova_compute[254092]: 2025-11-25 17:17:25.394 254096 DEBUG nova.virt.libvirt.driver [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 17:17:25 compute-0 nova_compute[254092]: 2025-11-25 17:17:25.394 254096 INFO nova.virt.libvirt.driver [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Creating image(s)
Nov 25 17:17:25 compute-0 nova_compute[254092]: 2025-11-25 17:17:25.424 254096 DEBUG nova.storage.rbd_utils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:17:25 compute-0 nova_compute[254092]: 2025-11-25 17:17:25.448 254096 DEBUG nova.storage.rbd_utils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:17:25 compute-0 nova_compute[254092]: 2025-11-25 17:17:25.477 254096 DEBUG nova.storage.rbd_utils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:17:25 compute-0 nova_compute[254092]: 2025-11-25 17:17:25.482 254096 DEBUG oslo_concurrency.processutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:17:25 compute-0 nova_compute[254092]: 2025-11-25 17:17:25.559 254096 DEBUG oslo_concurrency.processutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:17:25 compute-0 nova_compute[254092]: 2025-11-25 17:17:25.561 254096 DEBUG oslo_concurrency.lockutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:17:25 compute-0 nova_compute[254092]: 2025-11-25 17:17:25.561 254096 DEBUG oslo_concurrency.lockutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:17:25 compute-0 nova_compute[254092]: 2025-11-25 17:17:25.562 254096 DEBUG oslo_concurrency.lockutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:17:25 compute-0 nova_compute[254092]: 2025-11-25 17:17:25.586 254096 DEBUG nova.storage.rbd_utils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:17:25 compute-0 nova_compute[254092]: 2025-11-25 17:17:25.590 254096 DEBUG oslo_concurrency.processutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:17:25 compute-0 nova_compute[254092]: 2025-11-25 17:17:25.936 254096 DEBUG oslo_concurrency.processutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.346s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:17:25 compute-0 nova_compute[254092]: 2025-11-25 17:17:25.970 254096 DEBUG nova.policy [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a647bb22c9a54e5fa9ddfff588126693', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '67b7f8e8aac6441f992a182f8d893100', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 17:17:25 compute-0 ceph-mon[74985]: pgmap v2816: 321 pgs: 321 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:17:25 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3545645781' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:17:26 compute-0 nova_compute[254092]: 2025-11-25 17:17:26.013 254096 DEBUG nova.storage.rbd_utils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] resizing rbd image e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 17:17:26 compute-0 nova_compute[254092]: 2025-11-25 17:17:26.118 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:17:26 compute-0 nova_compute[254092]: 2025-11-25 17:17:26.125 254096 DEBUG nova.objects.instance [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'migration_context' on Instance uuid e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:17:26 compute-0 nova_compute[254092]: 2025-11-25 17:17:26.140 254096 DEBUG nova.virt.libvirt.driver [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 17:17:26 compute-0 nova_compute[254092]: 2025-11-25 17:17:26.141 254096 DEBUG nova.virt.libvirt.driver [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Ensure instance console log exists: /var/lib/nova/instances/e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 17:17:26 compute-0 nova_compute[254092]: 2025-11-25 17:17:26.141 254096 DEBUG oslo_concurrency.lockutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:17:26 compute-0 nova_compute[254092]: 2025-11-25 17:17:26.142 254096 DEBUG oslo_concurrency.lockutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:17:26 compute-0 nova_compute[254092]: 2025-11-25 17:17:26.142 254096 DEBUG oslo_concurrency.lockutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:17:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2817: 321 pgs: 321 active+clean; 57 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 341 B/s rd, 845 KiB/s wr, 1 op/s
Nov 25 17:17:27 compute-0 nova_compute[254092]: 2025-11-25 17:17:27.200 254096 DEBUG nova.network.neutron [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Successfully created port: a3c9174e-c8c3-4b9f-b87f-4d6244324c9b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 17:17:27 compute-0 nova_compute[254092]: 2025-11-25 17:17:27.947 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:17:28 compute-0 ceph-mon[74985]: pgmap v2817: 321 pgs: 321 active+clean; 57 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 341 B/s rd, 845 KiB/s wr, 1 op/s
Nov 25 17:17:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2818: 321 pgs: 321 active+clean; 57 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 341 B/s rd, 845 KiB/s wr, 1 op/s
Nov 25 17:17:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:17:29 compute-0 nova_compute[254092]: 2025-11-25 17:17:29.538 254096 DEBUG nova.network.neutron [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Successfully created port: fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 17:17:30 compute-0 ceph-mon[74985]: pgmap v2818: 321 pgs: 321 active+clean; 57 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 341 B/s rd, 845 KiB/s wr, 1 op/s
Nov 25 17:17:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2819: 321 pgs: 321 active+clean; 88 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.8 MiB/s wr, 25 op/s
Nov 25 17:17:31 compute-0 nova_compute[254092]: 2025-11-25 17:17:31.100 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:17:31 compute-0 nova_compute[254092]: 2025-11-25 17:17:31.128 254096 DEBUG nova.network.neutron [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Successfully updated port: a3c9174e-c8c3-4b9f-b87f-4d6244324c9b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:17:31 compute-0 nova_compute[254092]: 2025-11-25 17:17:31.253 254096 DEBUG nova.compute.manager [req-74b75ece-1dd3-4b0d-b78a-1e9b358b5588 req-faba25e3-77c4-456d-920b-819eaf1534a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Received event network-changed-a3c9174e-c8c3-4b9f-b87f-4d6244324c9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:17:31 compute-0 nova_compute[254092]: 2025-11-25 17:17:31.253 254096 DEBUG nova.compute.manager [req-74b75ece-1dd3-4b0d-b78a-1e9b358b5588 req-faba25e3-77c4-456d-920b-819eaf1534a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Refreshing instance network info cache due to event network-changed-a3c9174e-c8c3-4b9f-b87f-4d6244324c9b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:17:31 compute-0 nova_compute[254092]: 2025-11-25 17:17:31.253 254096 DEBUG oslo_concurrency.lockutils [req-74b75ece-1dd3-4b0d-b78a-1e9b358b5588 req-faba25e3-77c4-456d-920b-819eaf1534a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:17:31 compute-0 nova_compute[254092]: 2025-11-25 17:17:31.254 254096 DEBUG oslo_concurrency.lockutils [req-74b75ece-1dd3-4b0d-b78a-1e9b358b5588 req-faba25e3-77c4-456d-920b-819eaf1534a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:17:31 compute-0 nova_compute[254092]: 2025-11-25 17:17:31.254 254096 DEBUG nova.network.neutron [req-74b75ece-1dd3-4b0d-b78a-1e9b358b5588 req-faba25e3-77c4-456d-920b-819eaf1534a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Refreshing network info cache for port a3c9174e-c8c3-4b9f-b87f-4d6244324c9b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:17:31 compute-0 nova_compute[254092]: 2025-11-25 17:17:31.443 254096 DEBUG nova.network.neutron [req-74b75ece-1dd3-4b0d-b78a-1e9b358b5588 req-faba25e3-77c4-456d-920b-819eaf1534a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:17:31 compute-0 nova_compute[254092]: 2025-11-25 17:17:31.913 254096 DEBUG nova.network.neutron [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Successfully updated port: fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:17:31 compute-0 nova_compute[254092]: 2025-11-25 17:17:31.931 254096 DEBUG oslo_concurrency.lockutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "refresh_cache-e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:17:31 compute-0 nova_compute[254092]: 2025-11-25 17:17:31.998 254096 DEBUG nova.network.neutron [req-74b75ece-1dd3-4b0d-b78a-1e9b358b5588 req-faba25e3-77c4-456d-920b-819eaf1534a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:17:32 compute-0 nova_compute[254092]: 2025-11-25 17:17:32.024 254096 DEBUG oslo_concurrency.lockutils [req-74b75ece-1dd3-4b0d-b78a-1e9b358b5588 req-faba25e3-77c4-456d-920b-819eaf1534a9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:17:32 compute-0 nova_compute[254092]: 2025-11-25 17:17:32.026 254096 DEBUG oslo_concurrency.lockutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquired lock "refresh_cache-e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:17:32 compute-0 nova_compute[254092]: 2025-11-25 17:17:32.026 254096 DEBUG nova.network.neutron [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 17:17:32 compute-0 nova_compute[254092]: 2025-11-25 17:17:32.174 254096 DEBUG nova.network.neutron [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:17:32 compute-0 ceph-mon[74985]: pgmap v2819: 321 pgs: 321 active+clean; 88 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.8 MiB/s wr, 25 op/s
Nov 25 17:17:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2820: 321 pgs: 321 active+clean; 88 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:17:32 compute-0 nova_compute[254092]: 2025-11-25 17:17:32.950 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:17:33 compute-0 nova_compute[254092]: 2025-11-25 17:17:33.358 254096 DEBUG nova.compute.manager [req-cfd55131-4bec-4faa-b06a-6f73cbeaa3ae req-ffd41aff-673b-44f7-9cbd-d7e9fa9eaf24 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Received event network-changed-fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:17:33 compute-0 nova_compute[254092]: 2025-11-25 17:17:33.359 254096 DEBUG nova.compute.manager [req-cfd55131-4bec-4faa-b06a-6f73cbeaa3ae req-ffd41aff-673b-44f7-9cbd-d7e9fa9eaf24 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Refreshing instance network info cache due to event network-changed-fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:17:33 compute-0 nova_compute[254092]: 2025-11-25 17:17:33.359 254096 DEBUG oslo_concurrency.lockutils [req-cfd55131-4bec-4faa-b06a-6f73cbeaa3ae req-ffd41aff-673b-44f7-9cbd-d7e9fa9eaf24 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:17:33 compute-0 ovn_controller[153477]: 2025-11-25T17:17:33Z|01495|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Nov 25 17:17:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:17:34 compute-0 ceph-mon[74985]: pgmap v2820: 321 pgs: 321 active+clean; 88 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:17:34 compute-0 nova_compute[254092]: 2025-11-25 17:17:34.452 254096 DEBUG nova.network.neutron [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Updating instance_info_cache with network_info: [{"id": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "address": "fa:16:3e:ef:d8:27", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3c9174e-c8", "ovs_interfaceid": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "address": "fa:16:3e:ac:9c:d4", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", 
"type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd7f4f6-80", "ovs_interfaceid": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:17:34 compute-0 nova_compute[254092]: 2025-11-25 17:17:34.468 254096 DEBUG oslo_concurrency.lockutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Releasing lock "refresh_cache-e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:17:34 compute-0 nova_compute[254092]: 2025-11-25 17:17:34.468 254096 DEBUG nova.compute.manager [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Instance network_info: |[{"id": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "address": "fa:16:3e:ef:d8:27", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3c9174e-c8", "ovs_interfaceid": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "address": "fa:16:3e:ac:9c:d4", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", 
"version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd7f4f6-80", "ovs_interfaceid": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 17:17:34 compute-0 nova_compute[254092]: 2025-11-25 17:17:34.469 254096 DEBUG oslo_concurrency.lockutils [req-cfd55131-4bec-4faa-b06a-6f73cbeaa3ae req-ffd41aff-673b-44f7-9cbd-d7e9fa9eaf24 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:17:34 compute-0 nova_compute[254092]: 2025-11-25 17:17:34.469 254096 DEBUG nova.network.neutron [req-cfd55131-4bec-4faa-b06a-6f73cbeaa3ae req-ffd41aff-673b-44f7-9cbd-d7e9fa9eaf24 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Refreshing network info cache for port fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:17:34 compute-0 nova_compute[254092]: 2025-11-25 17:17:34.474 254096 DEBUG nova.virt.libvirt.driver [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Start _get_guest_xml network_info=[{"id": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "address": "fa:16:3e:ef:d8:27", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3c9174e-c8", "ovs_interfaceid": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "address": "fa:16:3e:ac:9c:d4", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd7f4f6-80", "ovs_interfaceid": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 17:17:34 compute-0 nova_compute[254092]: 2025-11-25 17:17:34.480 254096 WARNING nova.virt.libvirt.driver [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:17:34 compute-0 nova_compute[254092]: 2025-11-25 17:17:34.489 254096 DEBUG nova.virt.libvirt.host [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 17:17:34 compute-0 nova_compute[254092]: 2025-11-25 17:17:34.490 254096 DEBUG nova.virt.libvirt.host [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 17:17:34 compute-0 nova_compute[254092]: 2025-11-25 17:17:34.493 254096 DEBUG nova.virt.libvirt.host [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 17:17:34 compute-0 nova_compute[254092]: 2025-11-25 17:17:34.494 254096 DEBUG nova.virt.libvirt.host [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 17:17:34 compute-0 nova_compute[254092]: 2025-11-25 17:17:34.494 254096 DEBUG nova.virt.libvirt.driver [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 17:17:34 compute-0 nova_compute[254092]: 2025-11-25 17:17:34.494 254096 DEBUG nova.virt.hardware [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 17:17:34 compute-0 nova_compute[254092]: 2025-11-25 17:17:34.495 254096 DEBUG nova.virt.hardware [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 17:17:34 compute-0 nova_compute[254092]: 2025-11-25 17:17:34.495 254096 DEBUG nova.virt.hardware [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 17:17:34 compute-0 nova_compute[254092]: 2025-11-25 17:17:34.496 254096 DEBUG nova.virt.hardware [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 17:17:34 compute-0 nova_compute[254092]: 2025-11-25 17:17:34.496 254096 DEBUG nova.virt.hardware [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 17:17:34 compute-0 nova_compute[254092]: 2025-11-25 17:17:34.496 254096 DEBUG nova.virt.hardware [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 17:17:34 compute-0 nova_compute[254092]: 2025-11-25 17:17:34.496 254096 DEBUG nova.virt.hardware [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 17:17:34 compute-0 nova_compute[254092]: 2025-11-25 17:17:34.497 254096 DEBUG nova.virt.hardware [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 17:17:34 compute-0 nova_compute[254092]: 2025-11-25 17:17:34.497 254096 DEBUG nova.virt.hardware [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 17:17:34 compute-0 nova_compute[254092]: 2025-11-25 17:17:34.497 254096 DEBUG nova.virt.hardware [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 17:17:34 compute-0 nova_compute[254092]: 2025-11-25 17:17:34.498 254096 DEBUG nova.virt.hardware [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 17:17:34 compute-0 nova_compute[254092]: 2025-11-25 17:17:34.501 254096 DEBUG oslo_concurrency.processutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:17:34 compute-0 podman[411150]: 2025-11-25 17:17:34.679784562 +0000 UTC m=+0.085249731 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 17:17:34 compute-0 podman[411151]: 2025-11-25 17:17:34.687021889 +0000 UTC m=+0.080124682 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Nov 25 17:17:34 compute-0 podman[411152]: 2025-11-25 17:17:34.715386161 +0000 UTC m=+0.111522007 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 17:17:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2821: 321 pgs: 321 active+clean; 88 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:17:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:17:34 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3260520596' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.001 254096 DEBUG oslo_concurrency.processutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.036 254096 DEBUG nova.storage.rbd_utils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.040 254096 DEBUG oslo_concurrency.processutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:17:35 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3260520596' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:17:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:17:35 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/886301348' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.488 254096 DEBUG oslo_concurrency.processutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.491 254096 DEBUG nova.virt.libvirt.vif [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:17:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1474914346',display_name='tempest-TestGettingAddress-server-1474914346',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1474914346',id=141,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ2pcBrDGHvBVYZrwfG+0cOixMoEoWWyZwNBICyplvpWRfO1kJBjvH3LOKdm8vM5yC6I7xSnU6u9yudLUoAragOzDYER6osY0eledXMEDHVj1YkNbsjqQz6dDBKXdmH+YA==',key_name='tempest-TestGettingAddress-309096485',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-107kxyyq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:17:25Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "address": "fa:16:3e:ef:d8:27", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3c9174e-c8", "ovs_interfaceid": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.491 254096 DEBUG nova.network.os_vif_util [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "address": "fa:16:3e:ef:d8:27", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3c9174e-c8", "ovs_interfaceid": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.492 254096 DEBUG nova.network.os_vif_util [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ef:d8:27,bridge_name='br-int',has_traffic_filtering=True,id=a3c9174e-c8c3-4b9f-b87f-4d6244324c9b,network=Network(4a702335-301a-4b90-b82e-e616a31e5b3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3c9174e-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.494 254096 DEBUG nova.virt.libvirt.vif [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:17:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1474914346',display_name='tempest-TestGettingAddress-server-1474914346',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1474914346',id=141,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ2pcBrDGHvBVYZrwfG+0cOixMoEoWWyZwNBICyplvpWRfO1kJBjvH3LOKdm8vM5yC6I7xSnU6u9yudLUoAragOzDYER6osY0eledXMEDHVj1YkNbsjqQz6dDBKXdmH+YA==',key_name='tempest-TestGettingAddress-309096485',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-107kxyyq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:17:25Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "address": "fa:16:3e:ac:9c:d4", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd7f4f6-80", "ovs_interfaceid": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.494 254096 DEBUG nova.network.os_vif_util [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "address": "fa:16:3e:ac:9c:d4", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd7f4f6-80", "ovs_interfaceid": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.495 254096 DEBUG nova.network.os_vif_util [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:9c:d4,bridge_name='br-int',has_traffic_filtering=True,id=fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d,network=Network(51f47401-ed2b-45e2-aea1-5cbbd48e5245),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdd7f4f6-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.496 254096 DEBUG nova.objects.instance [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'pci_devices' on Instance uuid e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.521 254096 DEBUG nova.virt.libvirt.driver [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] End _get_guest_xml xml=<domain type="kvm">
Nov 25 17:17:35 compute-0 nova_compute[254092]:   <uuid>e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac</uuid>
Nov 25 17:17:35 compute-0 nova_compute[254092]:   <name>instance-0000008d</name>
Nov 25 17:17:35 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 17:17:35 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 17:17:35 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:17:35 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:       <nova:name>tempest-TestGettingAddress-server-1474914346</nova:name>
Nov 25 17:17:35 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 17:17:34</nova:creationTime>
Nov 25 17:17:35 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 17:17:35 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 17:17:35 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 17:17:35 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 17:17:35 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:17:35 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 17:17:35 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 17:17:35 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 17:17:35 compute-0 nova_compute[254092]:         <nova:user uuid="a647bb22c9a54e5fa9ddfff588126693">tempest-TestGettingAddress-207202764-project-member</nova:user>
Nov 25 17:17:35 compute-0 nova_compute[254092]:         <nova:project uuid="67b7f8e8aac6441f992a182f8d893100">tempest-TestGettingAddress-207202764</nova:project>
Nov 25 17:17:35 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 17:17:35 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 17:17:35 compute-0 nova_compute[254092]:         <nova:port uuid="a3c9174e-c8c3-4b9f-b87f-4d6244324c9b">
Nov 25 17:17:35 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 17:17:35 compute-0 nova_compute[254092]:         <nova:port uuid="fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d">
Nov 25 17:17:35 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:feac:9cd4" ipVersion="6"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:feac:9cd4" ipVersion="6"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 17:17:35 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 17:17:35 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:17:35 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 17:17:35 compute-0 nova_compute[254092]:     <system>
Nov 25 17:17:35 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 17:17:35 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 17:17:35 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:17:35 compute-0 nova_compute[254092]:       <entry name="serial">e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac</entry>
Nov 25 17:17:35 compute-0 nova_compute[254092]:       <entry name="uuid">e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac</entry>
Nov 25 17:17:35 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     </system>
Nov 25 17:17:35 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:17:35 compute-0 nova_compute[254092]:   <os>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:   </os>
Nov 25 17:17:35 compute-0 nova_compute[254092]:   <features>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:   </features>
Nov 25 17:17:35 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 17:17:35 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:17:35 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 17:17:35 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:17:35 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 17:17:35 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac_disk">
Nov 25 17:17:35 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:       </source>
Nov 25 17:17:35 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:17:35 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:17:35 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 17:17:35 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac_disk.config">
Nov 25 17:17:35 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:       </source>
Nov 25 17:17:35 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:17:35 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:17:35 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 17:17:35 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:ef:d8:27"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:       <target dev="tapa3c9174e-c8"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 17:17:35 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:ac:9c:d4"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:       <target dev="tapfdd7f4f6-80"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 17:17:35 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac/console.log" append="off"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     <video>
Nov 25 17:17:35 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     </video>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 17:17:35 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 17:17:35 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 17:17:35 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:17:35 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:17:35 compute-0 nova_compute[254092]: </domain>
Nov 25 17:17:35 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.524 254096 DEBUG nova.compute.manager [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Preparing to wait for external event network-vif-plugged-a3c9174e-c8c3-4b9f-b87f-4d6244324c9b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.524 254096 DEBUG oslo_concurrency.lockutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.525 254096 DEBUG oslo_concurrency.lockutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.525 254096 DEBUG oslo_concurrency.lockutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.525 254096 DEBUG nova.compute.manager [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Preparing to wait for external event network-vif-plugged-fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.525 254096 DEBUG oslo_concurrency.lockutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.525 254096 DEBUG oslo_concurrency.lockutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.526 254096 DEBUG oslo_concurrency.lockutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.526 254096 DEBUG nova.virt.libvirt.vif [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:17:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1474914346',display_name='tempest-TestGettingAddress-server-1474914346',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1474914346',id=141,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ2pcBrDGHvBVYZrwfG+0cOixMoEoWWyZwNBICyplvpWRfO1kJBjvH3LOKdm8vM5yC6I7xSnU6u9yudLUoAragOzDYER6osY0eledXMEDHVj1YkNbsjqQz6dDBKXdmH+YA==',key_name='tempest-TestGettingAddress-309096485',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-107kxyyq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:17:25Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "address": "fa:16:3e:ef:d8:27", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3c9174e-c8", "ovs_interfaceid": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.526 254096 DEBUG nova.network.os_vif_util [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "address": "fa:16:3e:ef:d8:27", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3c9174e-c8", "ovs_interfaceid": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.527 254096 DEBUG nova.network.os_vif_util [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ef:d8:27,bridge_name='br-int',has_traffic_filtering=True,id=a3c9174e-c8c3-4b9f-b87f-4d6244324c9b,network=Network(4a702335-301a-4b90-b82e-e616a31e5b3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3c9174e-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.528 254096 DEBUG os_vif [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:d8:27,bridge_name='br-int',has_traffic_filtering=True,id=a3c9174e-c8c3-4b9f-b87f-4d6244324c9b,network=Network(4a702335-301a-4b90-b82e-e616a31e5b3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3c9174e-c8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.528 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.529 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.529 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.532 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.533 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3c9174e-c8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.533 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa3c9174e-c8, col_values=(('external_ids', {'iface-id': 'a3c9174e-c8c3-4b9f-b87f-4d6244324c9b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ef:d8:27', 'vm-uuid': 'e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.535 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:17:35 compute-0 NetworkManager[48891]: <info>  [1764091055.5370] manager: (tapa3c9174e-c8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/615)
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.537 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.543 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.545 254096 INFO os_vif [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:d8:27,bridge_name='br-int',has_traffic_filtering=True,id=a3c9174e-c8c3-4b9f-b87f-4d6244324c9b,network=Network(4a702335-301a-4b90-b82e-e616a31e5b3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3c9174e-c8')
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.547 254096 DEBUG nova.virt.libvirt.vif [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:17:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1474914346',display_name='tempest-TestGettingAddress-server-1474914346',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1474914346',id=141,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ2pcBrDGHvBVYZrwfG+0cOixMoEoWWyZwNBICyplvpWRfO1kJBjvH3LOKdm8vM5yC6I7xSnU6u9yudLUoAragOzDYER6osY0eledXMEDHVj1YkNbsjqQz6dDBKXdmH+YA==',key_name='tempest-TestGettingAddress-309096485',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-107kxyyq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:17:25Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "address": "fa:16:3e:ac:9c:d4", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd7f4f6-80", "ovs_interfaceid": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.547 254096 DEBUG nova.network.os_vif_util [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "address": "fa:16:3e:ac:9c:d4", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd7f4f6-80", "ovs_interfaceid": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.549 254096 DEBUG nova.network.os_vif_util [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:9c:d4,bridge_name='br-int',has_traffic_filtering=True,id=fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d,network=Network(51f47401-ed2b-45e2-aea1-5cbbd48e5245),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdd7f4f6-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.549 254096 DEBUG os_vif [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:9c:d4,bridge_name='br-int',has_traffic_filtering=True,id=fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d,network=Network(51f47401-ed2b-45e2-aea1-5cbbd48e5245),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdd7f4f6-80') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.550 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.551 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.551 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.554 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.555 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfdd7f4f6-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.556 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfdd7f4f6-80, col_values=(('external_ids', {'iface-id': 'fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ac:9c:d4', 'vm-uuid': 'e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:17:35 compute-0 NetworkManager[48891]: <info>  [1764091055.5591] manager: (tapfdd7f4f6-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/616)
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.558 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.562 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.565 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.566 254096 INFO os_vif [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:9c:d4,bridge_name='br-int',has_traffic_filtering=True,id=fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d,network=Network(51f47401-ed2b-45e2-aea1-5cbbd48e5245),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdd7f4f6-80')
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.619 254096 DEBUG nova.virt.libvirt.driver [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.620 254096 DEBUG nova.virt.libvirt.driver [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.620 254096 DEBUG nova.virt.libvirt.driver [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No VIF found with MAC fa:16:3e:ef:d8:27, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.620 254096 DEBUG nova.virt.libvirt.driver [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No VIF found with MAC fa:16:3e:ac:9c:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.621 254096 INFO nova.virt.libvirt.driver [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Using config drive
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.642 254096 DEBUG nova.storage.rbd_utils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.923 254096 DEBUG nova.network.neutron [req-cfd55131-4bec-4faa-b06a-6f73cbeaa3ae req-ffd41aff-673b-44f7-9cbd-d7e9fa9eaf24 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Updated VIF entry in instance network info cache for port fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.924 254096 DEBUG nova.network.neutron [req-cfd55131-4bec-4faa-b06a-6f73cbeaa3ae req-ffd41aff-673b-44f7-9cbd-d7e9fa9eaf24 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Updating instance_info_cache with network_info: [{"id": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "address": "fa:16:3e:ef:d8:27", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3c9174e-c8", "ovs_interfaceid": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "address": "fa:16:3e:ac:9c:d4", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd7f4f6-80", "ovs_interfaceid": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:17:35 compute-0 nova_compute[254092]: 2025-11-25 17:17:35.947 254096 DEBUG oslo_concurrency.lockutils [req-cfd55131-4bec-4faa-b06a-6f73cbeaa3ae req-ffd41aff-673b-44f7-9cbd-d7e9fa9eaf24 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:17:36 compute-0 nova_compute[254092]: 2025-11-25 17:17:36.101 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:17:36 compute-0 ceph-mon[74985]: pgmap v2821: 321 pgs: 321 active+clean; 88 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:17:36 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/886301348' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:17:36 compute-0 nova_compute[254092]: 2025-11-25 17:17:36.336 254096 INFO nova.virt.libvirt.driver [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Creating config drive at /var/lib/nova/instances/e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac/disk.config
Nov 25 17:17:36 compute-0 nova_compute[254092]: 2025-11-25 17:17:36.341 254096 DEBUG oslo_concurrency.processutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp80o3bp9w execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:17:36 compute-0 nova_compute[254092]: 2025-11-25 17:17:36.485 254096 DEBUG oslo_concurrency.processutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp80o3bp9w" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:17:36 compute-0 nova_compute[254092]: 2025-11-25 17:17:36.511 254096 DEBUG nova.storage.rbd_utils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:17:36 compute-0 nova_compute[254092]: 2025-11-25 17:17:36.515 254096 DEBUG oslo_concurrency.processutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac/disk.config e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:17:36 compute-0 nova_compute[254092]: 2025-11-25 17:17:36.560 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:17:36 compute-0 nova_compute[254092]: 2025-11-25 17:17:36.760 254096 DEBUG oslo_concurrency.processutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac/disk.config e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.246s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:17:36 compute-0 nova_compute[254092]: 2025-11-25 17:17:36.761 254096 INFO nova.virt.libvirt.driver [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Deleting local config drive /var/lib/nova/instances/e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac/disk.config because it was imported into RBD.
Nov 25 17:17:36 compute-0 NetworkManager[48891]: <info>  [1764091056.8185] manager: (tapa3c9174e-c8): new Tun device (/org/freedesktop/NetworkManager/Devices/617)
Nov 25 17:17:36 compute-0 kernel: tapa3c9174e-c8: entered promiscuous mode
Nov 25 17:17:36 compute-0 ovn_controller[153477]: 2025-11-25T17:17:36Z|01496|binding|INFO|Claiming lport a3c9174e-c8c3-4b9f-b87f-4d6244324c9b for this chassis.
Nov 25 17:17:36 compute-0 nova_compute[254092]: 2025-11-25 17:17:36.824 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:17:36 compute-0 ovn_controller[153477]: 2025-11-25T17:17:36Z|01497|binding|INFO|a3c9174e-c8c3-4b9f-b87f-4d6244324c9b: Claiming fa:16:3e:ef:d8:27 10.100.0.8
Nov 25 17:17:36 compute-0 NetworkManager[48891]: <info>  [1764091056.8354] manager: (tapfdd7f4f6-80): new Tun device (/org/freedesktop/NetworkManager/Devices/618)
Nov 25 17:17:36 compute-0 kernel: tapfdd7f4f6-80: entered promiscuous mode
Nov 25 17:17:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:36.840 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ef:d8:27 10.100.0.8'], port_security=['fa:16:3e:ef:d8:27 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a702335-301a-4b90-b82e-e616a31e5b3e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd7810d87-4bfa-4d99-b3b6-de28ef8547f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0702eb11-7151-4340-a4d9-f0fbc28cb176, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=a3c9174e-c8c3-4b9f-b87f-4d6244324c9b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:17:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:36.842 163338 INFO neutron.agent.ovn.metadata.agent [-] Port a3c9174e-c8c3-4b9f-b87f-4d6244324c9b in datapath 4a702335-301a-4b90-b82e-e616a31e5b3e bound to our chassis
Nov 25 17:17:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:36.843 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4a702335-301a-4b90-b82e-e616a31e5b3e
Nov 25 17:17:36 compute-0 systemd-udevd[411352]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:17:36 compute-0 systemd-udevd[411351]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:17:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:36.856 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[99d34fe8-07fc-4f62-8509-7d55daf49fd3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:17:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:36.857 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4a702335-31 in ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 17:17:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:36.860 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4a702335-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 17:17:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:36.860 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[60e208dd-806c-4834-be66-41d9d70254dd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:17:36 compute-0 NetworkManager[48891]: <info>  [1764091056.8622] device (tapa3c9174e-c8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:17:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:36.861 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f25fcd4e-2408-4009-b986-90967642153f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:17:36 compute-0 NetworkManager[48891]: <info>  [1764091056.8633] device (tapfdd7f4f6-80): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:17:36 compute-0 NetworkManager[48891]: <info>  [1764091056.8640] device (tapa3c9174e-c8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:17:36 compute-0 NetworkManager[48891]: <info>  [1764091056.8644] device (tapfdd7f4f6-80): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:17:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:36.876 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[80a66ade-8862-494f-87d7-05f4055a9d2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:17:36 compute-0 systemd-machined[216343]: New machine qemu-175-instance-0000008d.
Nov 25 17:17:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:36.903 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4ce872b4-9446-45a2-8f1a-0db8ea0e2a84]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:17:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2822: 321 pgs: 321 active+clean; 88 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:17:36 compute-0 ovn_controller[153477]: 2025-11-25T17:17:36Z|01498|binding|INFO|Claiming lport fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d for this chassis.
Nov 25 17:17:36 compute-0 ovn_controller[153477]: 2025-11-25T17:17:36Z|01499|binding|INFO|fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d: Claiming fa:16:3e:ac:9c:d4 2001:db8:0:1:f816:3eff:feac:9cd4 2001:db8::f816:3eff:feac:9cd4
Nov 25 17:17:36 compute-0 nova_compute[254092]: 2025-11-25 17:17:36.919 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:17:36 compute-0 systemd[1]: Started Virtual Machine qemu-175-instance-0000008d.
Nov 25 17:17:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:36.924 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:9c:d4 2001:db8:0:1:f816:3eff:feac:9cd4 2001:db8::f816:3eff:feac:9cd4'], port_security=['fa:16:3e:ac:9c:d4 2001:db8:0:1:f816:3eff:feac:9cd4 2001:db8::f816:3eff:feac:9cd4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:feac:9cd4/64 2001:db8::f816:3eff:feac:9cd4/64', 'neutron:device_id': 'e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-51f47401-ed2b-45e2-aea1-5cbbd48e5245', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd7810d87-4bfa-4d99-b3b6-de28ef8547f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=945443dd-5eab-4f86-892f-c2db25b0d0db, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:17:36 compute-0 nova_compute[254092]: 2025-11-25 17:17:36.924 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:17:36 compute-0 ovn_controller[153477]: 2025-11-25T17:17:36Z|01500|binding|INFO|Setting lport a3c9174e-c8c3-4b9f-b87f-4d6244324c9b ovn-installed in OVS
Nov 25 17:17:36 compute-0 ovn_controller[153477]: 2025-11-25T17:17:36Z|01501|binding|INFO|Setting lport a3c9174e-c8c3-4b9f-b87f-4d6244324c9b up in Southbound
Nov 25 17:17:36 compute-0 nova_compute[254092]: 2025-11-25 17:17:36.929 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:17:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:36.934 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a07f4319-1792-4a22-a86c-af53d36d1b4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:17:36 compute-0 ovn_controller[153477]: 2025-11-25T17:17:36Z|01502|binding|INFO|Setting lport fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d ovn-installed in OVS
Nov 25 17:17:36 compute-0 ovn_controller[153477]: 2025-11-25T17:17:36Z|01503|binding|INFO|Setting lport fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d up in Southbound
Nov 25 17:17:36 compute-0 nova_compute[254092]: 2025-11-25 17:17:36.940 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:17:36 compute-0 NetworkManager[48891]: <info>  [1764091056.9435] manager: (tap4a702335-30): new Veth device (/org/freedesktop/NetworkManager/Devices/619)
Nov 25 17:17:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:36.946 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c4a45092-f6eb-4d13-90ef-9e486e63b902]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:17:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:36.986 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[de7cbe0b-fba0-40d3-8108-866680f20da8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:17:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:36.989 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1af36660-a00f-4881-bbab-a624dcc6a1b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:17:37 compute-0 NetworkManager[48891]: <info>  [1764091057.0131] device (tap4a702335-30): carrier: link connected
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.024 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[c6e8903e-cdd2-4104-8419-0374b4f4adb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.044 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8c19be6d-8948-4273-8ef1-29b2a6c13e0b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4a702335-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:4b:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 437], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 751460, 'reachable_time': 25856, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 411388, 'error': None, 'target': 'ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.059 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7749bd8d-4803-48ce-b165-2ca820e65281]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe73:4bf0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 751460, 'tstamp': 751460}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 411389, 'error': None, 'target': 'ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.081 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[753a4ccb-2f28-43ad-a147-79d8f533db6d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4a702335-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:4b:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 437], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 751460, 'reachable_time': 25856, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 411390, 'error': None, 'target': 'ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.119 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[28dbcc2e-ca68-4c52-bcb4-ecb90a1966b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.192 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6c9b40c6-392b-4ecd-abdb-15d5a87b6dc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.195 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a702335-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.195 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.195 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a702335-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:17:37 compute-0 nova_compute[254092]: 2025-11-25 17:17:37.197 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:17:37 compute-0 NetworkManager[48891]: <info>  [1764091057.1982] manager: (tap4a702335-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/620)
Nov 25 17:17:37 compute-0 kernel: tap4a702335-30: entered promiscuous mode
Nov 25 17:17:37 compute-0 nova_compute[254092]: 2025-11-25 17:17:37.199 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.200 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4a702335-30, col_values=(('external_ids', {'iface-id': '1cef18fb-8ae8-44d1-93c6-659b405ed9b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:17:37 compute-0 nova_compute[254092]: 2025-11-25 17:17:37.201 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:17:37 compute-0 ovn_controller[153477]: 2025-11-25T17:17:37Z|01504|binding|INFO|Releasing lport 1cef18fb-8ae8-44d1-93c6-659b405ed9b8 from this chassis (sb_readonly=0)
Nov 25 17:17:37 compute-0 nova_compute[254092]: 2025-11-25 17:17:37.215 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.217 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4a702335-301a-4b90-b82e-e616a31e5b3e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4a702335-301a-4b90-b82e-e616a31e5b3e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.219 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[21f88d87-bbe2-411f-aa1c-4c8569ecee4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.221 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]: global
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-4a702335-301a-4b90-b82e-e616a31e5b3e
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/4a702335-301a-4b90-b82e-e616a31e5b3e.pid.haproxy
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 4a702335-301a-4b90-b82e-e616a31e5b3e
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.222 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e', 'env', 'PROCESS_TAG=haproxy-4a702335-301a-4b90-b82e-e616a31e5b3e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4a702335-301a-4b90-b82e-e616a31e5b3e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 17:17:37 compute-0 nova_compute[254092]: 2025-11-25 17:17:37.315 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091057.3145525, e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:17:37 compute-0 nova_compute[254092]: 2025-11-25 17:17:37.315 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] VM Started (Lifecycle Event)
Nov 25 17:17:37 compute-0 nova_compute[254092]: 2025-11-25 17:17:37.338 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:17:37 compute-0 nova_compute[254092]: 2025-11-25 17:17:37.343 254096 DEBUG nova.compute.manager [req-7ed1f594-f5bc-4ad7-a6dd-a0f4bf1207fc req-5c810d4a-93e1-440f-96a1-fbe1056c9814 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Received event network-vif-plugged-a3c9174e-c8c3-4b9f-b87f-4d6244324c9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:17:37 compute-0 nova_compute[254092]: 2025-11-25 17:17:37.343 254096 DEBUG oslo_concurrency.lockutils [req-7ed1f594-f5bc-4ad7-a6dd-a0f4bf1207fc req-5c810d4a-93e1-440f-96a1-fbe1056c9814 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:17:37 compute-0 nova_compute[254092]: 2025-11-25 17:17:37.344 254096 DEBUG oslo_concurrency.lockutils [req-7ed1f594-f5bc-4ad7-a6dd-a0f4bf1207fc req-5c810d4a-93e1-440f-96a1-fbe1056c9814 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:17:37 compute-0 nova_compute[254092]: 2025-11-25 17:17:37.344 254096 DEBUG oslo_concurrency.lockutils [req-7ed1f594-f5bc-4ad7-a6dd-a0f4bf1207fc req-5c810d4a-93e1-440f-96a1-fbe1056c9814 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:17:37 compute-0 nova_compute[254092]: 2025-11-25 17:17:37.344 254096 DEBUG nova.compute.manager [req-7ed1f594-f5bc-4ad7-a6dd-a0f4bf1207fc req-5c810d4a-93e1-440f-96a1-fbe1056c9814 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Processing event network-vif-plugged-a3c9174e-c8c3-4b9f-b87f-4d6244324c9b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 17:17:37 compute-0 nova_compute[254092]: 2025-11-25 17:17:37.348 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091057.3153772, e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:17:37 compute-0 nova_compute[254092]: 2025-11-25 17:17:37.349 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] VM Paused (Lifecycle Event)
Nov 25 17:17:37 compute-0 nova_compute[254092]: 2025-11-25 17:17:37.373 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:17:37 compute-0 nova_compute[254092]: 2025-11-25 17:17:37.377 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:17:37 compute-0 nova_compute[254092]: 2025-11-25 17:17:37.398 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:17:37 compute-0 podman[411464]: 2025-11-25 17:17:37.631619099 +0000 UTC m=+0.067100118 container create 405307c19ddb338a1e355ffcb9369ef165c44fe45a6b90f63bc3f887ad57dea0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 17:17:37 compute-0 systemd[1]: Started libpod-conmon-405307c19ddb338a1e355ffcb9369ef165c44fe45a6b90f63bc3f887ad57dea0.scope.
Nov 25 17:17:37 compute-0 podman[411464]: 2025-11-25 17:17:37.595036293 +0000 UTC m=+0.030517412 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 17:17:37 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:17:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cda96b984478ce4719be5239caee54b3e00b48097bfc3855698629f519ccf4eb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 17:17:37 compute-0 podman[411464]: 2025-11-25 17:17:37.743073183 +0000 UTC m=+0.178554232 container init 405307c19ddb338a1e355ffcb9369ef165c44fe45a6b90f63bc3f887ad57dea0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 17:17:37 compute-0 podman[411464]: 2025-11-25 17:17:37.754527284 +0000 UTC m=+0.190008313 container start 405307c19ddb338a1e355ffcb9369ef165c44fe45a6b90f63bc3f887ad57dea0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 17:17:37 compute-0 neutron-haproxy-ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e[411479]: [NOTICE]   (411483) : New worker (411485) forked
Nov 25 17:17:37 compute-0 neutron-haproxy-ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e[411479]: [NOTICE]   (411483) : Loading success.
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.819 163338 INFO neutron.agent.ovn.metadata.agent [-] Port fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d in datapath 51f47401-ed2b-45e2-aea1-5cbbd48e5245 unbound from our chassis
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.821 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 51f47401-ed2b-45e2-aea1-5cbbd48e5245
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.835 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[001ee9be-2a10-40bb-9dc4-7586b892f038]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.836 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap51f47401-e1 in ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.839 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap51f47401-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.839 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c4a9b6b6-8073-412e-9f42-9bd55e154632]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.840 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5b5ab139-6e04-4010-8e4c-d3187fd24e69]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.856 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[9bcc5bb5-ed7d-4d60-8f41-bbc85a21f51c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.873 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fdbc9929-1973-4947-b832-661eea3d770e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.905 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[7f61d342-967f-431f-bdc2-263b91340768]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.911 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9153f449-4ec8-4949-be64-07267f6a714e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:17:37 compute-0 NetworkManager[48891]: <info>  [1764091057.9133] manager: (tap51f47401-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/621)
Nov 25 17:17:37 compute-0 systemd-udevd[411377]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.949 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f8ecb5cb-0de9-4de4-8e0b-6e76d0bd8f8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.953 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[b6ad7c82-01f4-4c18-ba02-a1cb37d6ef96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:17:37 compute-0 NetworkManager[48891]: <info>  [1764091057.9803] device (tap51f47401-e0): carrier: link connected
Nov 25 17:17:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:37.988 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8d80b9d3-1905-4acd-90ed-19274693b534]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:17:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:38.009 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ce18e651-8c9e-451f-a428-5b6370ca21e5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap51f47401-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:01:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 438], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 751557, 'reachable_time': 21561, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 411504, 'error': None, 'target': 'ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:17:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:38.030 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[416f9f31-7a4e-4964-bfd9-148dc83046a1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe96:170'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 751557, 'tstamp': 751557}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 411505, 'error': None, 'target': 'ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:17:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:38.052 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[888a1bb1-a3c0-425e-962a-a919cd7d75af]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap51f47401-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:01:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 438], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 751557, 'reachable_time': 21561, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 411506, 'error': None, 'target': 'ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:17:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:38.090 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fa118529-5608-4f9c-805c-f36acc70f6e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:17:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:38.130 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b223b204-0930-4811-91ef-9f7f0bfb3ab9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:17:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:38.133 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap51f47401-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:17:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:38.133 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:17:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:38.134 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap51f47401-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:17:38 compute-0 nova_compute[254092]: 2025-11-25 17:17:38.137 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:17:38 compute-0 kernel: tap51f47401-e0: entered promiscuous mode
Nov 25 17:17:38 compute-0 NetworkManager[48891]: <info>  [1764091058.1384] manager: (tap51f47401-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/622)
Nov 25 17:17:38 compute-0 nova_compute[254092]: 2025-11-25 17:17:38.140 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:17:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:38.143 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap51f47401-e0, col_values=(('external_ids', {'iface-id': '7624b22b-8369-4a12-940b-9f95890a4040'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:17:38 compute-0 nova_compute[254092]: 2025-11-25 17:17:38.144 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:17:38 compute-0 ovn_controller[153477]: 2025-11-25T17:17:38Z|01505|binding|INFO|Releasing lport 7624b22b-8369-4a12-940b-9f95890a4040 from this chassis (sb_readonly=0)
Nov 25 17:17:38 compute-0 nova_compute[254092]: 2025-11-25 17:17:38.145 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:17:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:38.146 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/51f47401-ed2b-45e2-aea1-5cbbd48e5245.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/51f47401-ed2b-45e2-aea1-5cbbd48e5245.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 17:17:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:38.147 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fa3ae38f-01a3-41d2-919e-3947c8e124d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:17:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:38.148 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 17:17:38 compute-0 ovn_metadata_agent[163333]: global
Nov 25 17:17:38 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 17:17:38 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-51f47401-ed2b-45e2-aea1-5cbbd48e5245
Nov 25 17:17:38 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 17:17:38 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 17:17:38 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 17:17:38 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/51f47401-ed2b-45e2-aea1-5cbbd48e5245.pid.haproxy
Nov 25 17:17:38 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 17:17:38 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:17:38 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 17:17:38 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 17:17:38 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 17:17:38 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 17:17:38 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 17:17:38 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 17:17:38 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 17:17:38 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 17:17:38 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 17:17:38 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 17:17:38 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 17:17:38 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 17:17:38 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 17:17:38 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:17:38 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:17:38 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 17:17:38 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 17:17:38 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 17:17:38 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 51f47401-ed2b-45e2-aea1-5cbbd48e5245
Nov 25 17:17:38 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 17:17:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:38.149 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245', 'env', 'PROCESS_TAG=haproxy-51f47401-ed2b-45e2-aea1-5cbbd48e5245', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/51f47401-ed2b-45e2-aea1-5cbbd48e5245.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 17:17:38 compute-0 nova_compute[254092]: 2025-11-25 17:17:38.159 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:17:38 compute-0 ceph-mon[74985]: pgmap v2822: 321 pgs: 321 active+clean; 88 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:17:38 compute-0 podman[411537]: 2025-11-25 17:17:38.536956571 +0000 UTC m=+0.049497908 container create cdf8e41c5dd43e62f962cb4a1295dcba9c7d152de0f6a8f1abec269112ba09c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true)
Nov 25 17:17:38 compute-0 systemd[1]: Started libpod-conmon-cdf8e41c5dd43e62f962cb4a1295dcba9c7d152de0f6a8f1abec269112ba09c0.scope.
Nov 25 17:17:38 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:17:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f061dc6daf905b054161e2e14d141c86f73e0689fee8c1b141f9292e1f077fc2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 17:17:38 compute-0 podman[411537]: 2025-11-25 17:17:38.608336744 +0000 UTC m=+0.120878111 container init cdf8e41c5dd43e62f962cb4a1295dcba9c7d152de0f6a8f1abec269112ba09c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 25 17:17:38 compute-0 podman[411537]: 2025-11-25 17:17:38.511410656 +0000 UTC m=+0.023952003 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 17:17:38 compute-0 podman[411537]: 2025-11-25 17:17:38.614607795 +0000 UTC m=+0.127149142 container start cdf8e41c5dd43e62f962cb4a1295dcba9c7d152de0f6a8f1abec269112ba09c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 25 17:17:38 compute-0 neutron-haproxy-ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245[411552]: [NOTICE]   (411556) : New worker (411558) forked
Nov 25 17:17:38 compute-0 neutron-haproxy-ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245[411552]: [NOTICE]   (411556) : Loading success.
Nov 25 17:17:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2823: 321 pgs: 321 active+clean; 88 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 970 KiB/s wr, 25 op/s
Nov 25 17:17:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:17:39 compute-0 nova_compute[254092]: 2025-11-25 17:17:39.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:17:39 compute-0 nova_compute[254092]: 2025-11-25 17:17:39.588 254096 DEBUG nova.compute.manager [req-300d573b-4ab0-45d7-8f97-07145b045e7c req-69d4ea55-e1da-4662-8d6b-e224b3248d5a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Received event network-vif-plugged-a3c9174e-c8c3-4b9f-b87f-4d6244324c9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:17:39 compute-0 nova_compute[254092]: 2025-11-25 17:17:39.589 254096 DEBUG oslo_concurrency.lockutils [req-300d573b-4ab0-45d7-8f97-07145b045e7c req-69d4ea55-e1da-4662-8d6b-e224b3248d5a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:17:39 compute-0 nova_compute[254092]: 2025-11-25 17:17:39.590 254096 DEBUG oslo_concurrency.lockutils [req-300d573b-4ab0-45d7-8f97-07145b045e7c req-69d4ea55-e1da-4662-8d6b-e224b3248d5a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:17:39 compute-0 nova_compute[254092]: 2025-11-25 17:17:39.591 254096 DEBUG oslo_concurrency.lockutils [req-300d573b-4ab0-45d7-8f97-07145b045e7c req-69d4ea55-e1da-4662-8d6b-e224b3248d5a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:17:39 compute-0 nova_compute[254092]: 2025-11-25 17:17:39.591 254096 DEBUG nova.compute.manager [req-300d573b-4ab0-45d7-8f97-07145b045e7c req-69d4ea55-e1da-4662-8d6b-e224b3248d5a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] No event matching network-vif-plugged-a3c9174e-c8c3-4b9f-b87f-4d6244324c9b in dict_keys([('network-vif-plugged', 'fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Nov 25 17:17:39 compute-0 nova_compute[254092]: 2025-11-25 17:17:39.592 254096 WARNING nova.compute.manager [req-300d573b-4ab0-45d7-8f97-07145b045e7c req-69d4ea55-e1da-4662-8d6b-e224b3248d5a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Received unexpected event network-vif-plugged-a3c9174e-c8c3-4b9f-b87f-4d6244324c9b for instance with vm_state building and task_state spawning.
Nov 25 17:17:39 compute-0 nova_compute[254092]: 2025-11-25 17:17:39.592 254096 DEBUG nova.compute.manager [req-300d573b-4ab0-45d7-8f97-07145b045e7c req-69d4ea55-e1da-4662-8d6b-e224b3248d5a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Received event network-vif-plugged-fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:17:39 compute-0 nova_compute[254092]: 2025-11-25 17:17:39.593 254096 DEBUG oslo_concurrency.lockutils [req-300d573b-4ab0-45d7-8f97-07145b045e7c req-69d4ea55-e1da-4662-8d6b-e224b3248d5a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:17:39 compute-0 nova_compute[254092]: 2025-11-25 17:17:39.594 254096 DEBUG oslo_concurrency.lockutils [req-300d573b-4ab0-45d7-8f97-07145b045e7c req-69d4ea55-e1da-4662-8d6b-e224b3248d5a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:17:39 compute-0 nova_compute[254092]: 2025-11-25 17:17:39.594 254096 DEBUG oslo_concurrency.lockutils [req-300d573b-4ab0-45d7-8f97-07145b045e7c req-69d4ea55-e1da-4662-8d6b-e224b3248d5a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:17:39 compute-0 nova_compute[254092]: 2025-11-25 17:17:39.594 254096 DEBUG nova.compute.manager [req-300d573b-4ab0-45d7-8f97-07145b045e7c req-69d4ea55-e1da-4662-8d6b-e224b3248d5a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Processing event network-vif-plugged-fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 17:17:39 compute-0 nova_compute[254092]: 2025-11-25 17:17:39.595 254096 DEBUG nova.compute.manager [req-300d573b-4ab0-45d7-8f97-07145b045e7c req-69d4ea55-e1da-4662-8d6b-e224b3248d5a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Received event network-vif-plugged-fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:17:39 compute-0 nova_compute[254092]: 2025-11-25 17:17:39.595 254096 DEBUG oslo_concurrency.lockutils [req-300d573b-4ab0-45d7-8f97-07145b045e7c req-69d4ea55-e1da-4662-8d6b-e224b3248d5a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:17:39 compute-0 nova_compute[254092]: 2025-11-25 17:17:39.595 254096 DEBUG oslo_concurrency.lockutils [req-300d573b-4ab0-45d7-8f97-07145b045e7c req-69d4ea55-e1da-4662-8d6b-e224b3248d5a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:17:39 compute-0 nova_compute[254092]: 2025-11-25 17:17:39.596 254096 DEBUG oslo_concurrency.lockutils [req-300d573b-4ab0-45d7-8f97-07145b045e7c req-69d4ea55-e1da-4662-8d6b-e224b3248d5a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:17:39 compute-0 nova_compute[254092]: 2025-11-25 17:17:39.596 254096 DEBUG nova.compute.manager [req-300d573b-4ab0-45d7-8f97-07145b045e7c req-69d4ea55-e1da-4662-8d6b-e224b3248d5a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] No waiting events found dispatching network-vif-plugged-fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:17:39 compute-0 nova_compute[254092]: 2025-11-25 17:17:39.596 254096 WARNING nova.compute.manager [req-300d573b-4ab0-45d7-8f97-07145b045e7c req-69d4ea55-e1da-4662-8d6b-e224b3248d5a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Received unexpected event network-vif-plugged-fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d for instance with vm_state building and task_state spawning.
Nov 25 17:17:39 compute-0 nova_compute[254092]: 2025-11-25 17:17:39.599 254096 DEBUG nova.compute.manager [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Instance event wait completed in 2 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 17:17:39 compute-0 nova_compute[254092]: 2025-11-25 17:17:39.603 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091059.603369, e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:17:39 compute-0 nova_compute[254092]: 2025-11-25 17:17:39.604 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] VM Resumed (Lifecycle Event)
Nov 25 17:17:39 compute-0 nova_compute[254092]: 2025-11-25 17:17:39.607 254096 DEBUG nova.virt.libvirt.driver [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 17:17:39 compute-0 nova_compute[254092]: 2025-11-25 17:17:39.615 254096 INFO nova.virt.libvirt.driver [-] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Instance spawned successfully.
Nov 25 17:17:39 compute-0 nova_compute[254092]: 2025-11-25 17:17:39.617 254096 DEBUG nova.virt.libvirt.driver [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 17:17:39 compute-0 nova_compute[254092]: 2025-11-25 17:17:39.620 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:17:39 compute-0 nova_compute[254092]: 2025-11-25 17:17:39.625 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:17:39 compute-0 nova_compute[254092]: 2025-11-25 17:17:39.639 254096 DEBUG nova.virt.libvirt.driver [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:17:39 compute-0 nova_compute[254092]: 2025-11-25 17:17:39.640 254096 DEBUG nova.virt.libvirt.driver [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:17:39 compute-0 nova_compute[254092]: 2025-11-25 17:17:39.640 254096 DEBUG nova.virt.libvirt.driver [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:17:39 compute-0 nova_compute[254092]: 2025-11-25 17:17:39.641 254096 DEBUG nova.virt.libvirt.driver [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:17:39 compute-0 nova_compute[254092]: 2025-11-25 17:17:39.642 254096 DEBUG nova.virt.libvirt.driver [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:17:39 compute-0 nova_compute[254092]: 2025-11-25 17:17:39.642 254096 DEBUG nova.virt.libvirt.driver [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:17:39 compute-0 nova_compute[254092]: 2025-11-25 17:17:39.648 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:17:39 compute-0 nova_compute[254092]: 2025-11-25 17:17:39.698 254096 INFO nova.compute.manager [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Took 14.31 seconds to spawn the instance on the hypervisor.
Nov 25 17:17:39 compute-0 nova_compute[254092]: 2025-11-25 17:17:39.699 254096 DEBUG nova.compute.manager [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:17:39 compute-0 nova_compute[254092]: 2025-11-25 17:17:39.757 254096 INFO nova.compute.manager [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Took 15.31 seconds to build instance.
Nov 25 17:17:39 compute-0 nova_compute[254092]: 2025-11-25 17:17:39.770 254096 DEBUG oslo_concurrency.lockutils [None req-2c9b3ce4-175a-4b83-ac7e-4d4fc1357b2c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.401s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:17:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:17:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:17:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:17:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:17:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:17:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:17:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:17:40
Nov 25 17:17:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:17:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:17:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['images', 'default.rgw.control', '.mgr', 'volumes', 'backups', 'default.rgw.log', 'vms', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data']
Nov 25 17:17:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:17:40 compute-0 ceph-mon[74985]: pgmap v2823: 321 pgs: 321 active+clean; 88 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 970 KiB/s wr, 25 op/s
Nov 25 17:17:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:17:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:17:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:17:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:17:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:17:40 compute-0 nova_compute[254092]: 2025-11-25 17:17:40.560 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:17:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:17:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:17:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:17:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:17:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:17:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2824: 321 pgs: 321 active+clean; 88 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 982 KiB/s wr, 34 op/s
Nov 25 17:17:41 compute-0 nova_compute[254092]: 2025-11-25 17:17:41.144 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:17:42 compute-0 ceph-mon[74985]: pgmap v2824: 321 pgs: 321 active+clean; 88 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 982 KiB/s wr, 34 op/s
Nov 25 17:17:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2825: 321 pgs: 321 active+clean; 88 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 839 KiB/s rd, 12 KiB/s wr, 37 op/s
Nov 25 17:17:43 compute-0 nova_compute[254092]: 2025-11-25 17:17:43.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:17:43 compute-0 nova_compute[254092]: 2025-11-25 17:17:43.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:17:43 compute-0 nova_compute[254092]: 2025-11-25 17:17:43.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:17:43 compute-0 nova_compute[254092]: 2025-11-25 17:17:43.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:17:43 compute-0 nova_compute[254092]: 2025-11-25 17:17:43.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:17:43 compute-0 nova_compute[254092]: 2025-11-25 17:17:43.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:17:43 compute-0 nova_compute[254092]: 2025-11-25 17:17:43.523 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:17:43 compute-0 nova_compute[254092]: 2025-11-25 17:17:43.523 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:17:43 compute-0 nova_compute[254092]: 2025-11-25 17:17:43.524 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:17:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:43.949 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=56, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=55) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:17:43 compute-0 nova_compute[254092]: 2025-11-25 17:17:43.951 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:17:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:43.952 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 17:17:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:17:43 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2182407296' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:17:43 compute-0 nova_compute[254092]: 2025-11-25 17:17:43.979 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:17:44 compute-0 nova_compute[254092]: 2025-11-25 17:17:44.050 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:17:44 compute-0 nova_compute[254092]: 2025-11-25 17:17:44.051 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:17:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:17:44 compute-0 ceph-mon[74985]: pgmap v2825: 321 pgs: 321 active+clean; 88 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 839 KiB/s rd, 12 KiB/s wr, 37 op/s
Nov 25 17:17:44 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2182407296' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:17:44 compute-0 nova_compute[254092]: 2025-11-25 17:17:44.306 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:17:44 compute-0 nova_compute[254092]: 2025-11-25 17:17:44.307 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3469MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:17:44 compute-0 nova_compute[254092]: 2025-11-25 17:17:44.308 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:17:44 compute-0 nova_compute[254092]: 2025-11-25 17:17:44.308 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:17:44 compute-0 nova_compute[254092]: 2025-11-25 17:17:44.383 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 17:17:44 compute-0 nova_compute[254092]: 2025-11-25 17:17:44.384 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:17:44 compute-0 nova_compute[254092]: 2025-11-25 17:17:44.384 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:17:44 compute-0 nova_compute[254092]: 2025-11-25 17:17:44.452 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:17:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2826: 321 pgs: 321 active+clean; 88 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 839 KiB/s rd, 12 KiB/s wr, 36 op/s
Nov 25 17:17:44 compute-0 ovn_controller[153477]: 2025-11-25T17:17:44Z|01506|binding|INFO|Releasing lport 1cef18fb-8ae8-44d1-93c6-659b405ed9b8 from this chassis (sb_readonly=0)
Nov 25 17:17:44 compute-0 NetworkManager[48891]: <info>  [1764091064.9232] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/623)
Nov 25 17:17:44 compute-0 NetworkManager[48891]: <info>  [1764091064.9244] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/624)
Nov 25 17:17:44 compute-0 ovn_controller[153477]: 2025-11-25T17:17:44Z|01507|binding|INFO|Releasing lport 7624b22b-8369-4a12-940b-9f95890a4040 from this chassis (sb_readonly=0)
Nov 25 17:17:44 compute-0 nova_compute[254092]: 2025-11-25 17:17:44.945 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:17:44 compute-0 ovn_controller[153477]: 2025-11-25T17:17:44Z|01508|binding|INFO|Releasing lport 1cef18fb-8ae8-44d1-93c6-659b405ed9b8 from this chassis (sb_readonly=0)
Nov 25 17:17:44 compute-0 nova_compute[254092]: 2025-11-25 17:17:44.969 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:17:44 compute-0 ovn_controller[153477]: 2025-11-25T17:17:44Z|01509|binding|INFO|Releasing lport 7624b22b-8369-4a12-940b-9f95890a4040 from this chassis (sb_readonly=0)
Nov 25 17:17:44 compute-0 nova_compute[254092]: 2025-11-25 17:17:44.983 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:17:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:17:44 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1460267635' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:17:45 compute-0 nova_compute[254092]: 2025-11-25 17:17:45.013 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:17:45 compute-0 nova_compute[254092]: 2025-11-25 17:17:45.020 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:17:45 compute-0 nova_compute[254092]: 2025-11-25 17:17:45.035 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:17:45 compute-0 nova_compute[254092]: 2025-11-25 17:17:45.050 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:17:45 compute-0 nova_compute[254092]: 2025-11-25 17:17:45.051 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:17:45 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1460267635' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:17:45 compute-0 nova_compute[254092]: 2025-11-25 17:17:45.287 254096 DEBUG nova.compute.manager [req-95e62c4c-34df-4415-89f2-9a501de45207 req-fb1e5adc-039a-4164-a186-d378b92defc6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Received event network-changed-a3c9174e-c8c3-4b9f-b87f-4d6244324c9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:17:45 compute-0 nova_compute[254092]: 2025-11-25 17:17:45.288 254096 DEBUG nova.compute.manager [req-95e62c4c-34df-4415-89f2-9a501de45207 req-fb1e5adc-039a-4164-a186-d378b92defc6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Refreshing instance network info cache due to event network-changed-a3c9174e-c8c3-4b9f-b87f-4d6244324c9b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:17:45 compute-0 nova_compute[254092]: 2025-11-25 17:17:45.288 254096 DEBUG oslo_concurrency.lockutils [req-95e62c4c-34df-4415-89f2-9a501de45207 req-fb1e5adc-039a-4164-a186-d378b92defc6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:17:45 compute-0 nova_compute[254092]: 2025-11-25 17:17:45.289 254096 DEBUG oslo_concurrency.lockutils [req-95e62c4c-34df-4415-89f2-9a501de45207 req-fb1e5adc-039a-4164-a186-d378b92defc6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:17:45 compute-0 nova_compute[254092]: 2025-11-25 17:17:45.289 254096 DEBUG nova.network.neutron [req-95e62c4c-34df-4415-89f2-9a501de45207 req-fb1e5adc-039a-4164-a186-d378b92defc6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Refreshing network info cache for port a3c9174e-c8c3-4b9f-b87f-4d6244324c9b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:17:45 compute-0 nova_compute[254092]: 2025-11-25 17:17:45.594 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:17:46 compute-0 nova_compute[254092]: 2025-11-25 17:17:46.151 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:17:46 compute-0 ceph-mon[74985]: pgmap v2826: 321 pgs: 321 active+clean; 88 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 839 KiB/s rd, 12 KiB/s wr, 36 op/s
Nov 25 17:17:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2827: 321 pgs: 321 active+clean; 88 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:17:46 compute-0 nova_compute[254092]: 2025-11-25 17:17:46.940 254096 DEBUG nova.network.neutron [req-95e62c4c-34df-4415-89f2-9a501de45207 req-fb1e5adc-039a-4164-a186-d378b92defc6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Updated VIF entry in instance network info cache for port a3c9174e-c8c3-4b9f-b87f-4d6244324c9b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:17:46 compute-0 nova_compute[254092]: 2025-11-25 17:17:46.941 254096 DEBUG nova.network.neutron [req-95e62c4c-34df-4415-89f2-9a501de45207 req-fb1e5adc-039a-4164-a186-d378b92defc6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Updating instance_info_cache with network_info: [{"id": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "address": "fa:16:3e:ef:d8:27", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3c9174e-c8", "ovs_interfaceid": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "address": "fa:16:3e:ac:9c:d4", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd7f4f6-80", "ovs_interfaceid": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:17:46 compute-0 nova_compute[254092]: 2025-11-25 17:17:46.960 254096 DEBUG oslo_concurrency.lockutils [req-95e62c4c-34df-4415-89f2-9a501de45207 req-fb1e5adc-039a-4164-a186-d378b92defc6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:17:47 compute-0 nova_compute[254092]: 2025-11-25 17:17:47.046 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:17:47 compute-0 nova_compute[254092]: 2025-11-25 17:17:47.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:17:48 compute-0 ceph-mon[74985]: pgmap v2827: 321 pgs: 321 active+clean; 88 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:17:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2828: 321 pgs: 321 active+clean; 88 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:17:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:17:49 compute-0 nova_compute[254092]: 2025-11-25 17:17:49.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:17:49 compute-0 nova_compute[254092]: 2025-11-25 17:17:49.498 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:17:49 compute-0 nova_compute[254092]: 2025-11-25 17:17:49.498 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:17:49 compute-0 nova_compute[254092]: 2025-11-25 17:17:49.877 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:17:49 compute-0 nova_compute[254092]: 2025-11-25 17:17:49.878 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:17:49 compute-0 nova_compute[254092]: 2025-11-25 17:17:49.879 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 17:17:49 compute-0 nova_compute[254092]: 2025-11-25 17:17:49.879 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:17:49 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:17:49.955 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '56'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:17:50 compute-0 ceph-mon[74985]: pgmap v2828: 321 pgs: 321 active+clean; 88 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:17:50 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #135. Immutable memtables: 0.
Nov 25 17:17:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:50.302816) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 17:17:50 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 135
Nov 25 17:17:50 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091070302870, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 604, "num_deletes": 251, "total_data_size": 659683, "memory_usage": 672072, "flush_reason": "Manual Compaction"}
Nov 25 17:17:50 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #136: started
Nov 25 17:17:50 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091070311269, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 136, "file_size": 653559, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 58573, "largest_seqno": 59176, "table_properties": {"data_size": 650267, "index_size": 1199, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7643, "raw_average_key_size": 19, "raw_value_size": 643675, "raw_average_value_size": 1629, "num_data_blocks": 53, "num_entries": 395, "num_filter_entries": 395, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764091029, "oldest_key_time": 1764091029, "file_creation_time": 1764091070, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 136, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:17:50 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 8876 microseconds, and 5461 cpu microseconds.
Nov 25 17:17:50 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:17:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:50.311688) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #136: 653559 bytes OK
Nov 25 17:17:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:50.311898) [db/memtable_list.cc:519] [default] Level-0 commit table #136 started
Nov 25 17:17:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:50.313719) [db/memtable_list.cc:722] [default] Level-0 commit table #136: memtable #1 done
Nov 25 17:17:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:50.313746) EVENT_LOG_v1 {"time_micros": 1764091070313738, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 17:17:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:50.313773) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 17:17:50 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 656380, prev total WAL file size 656380, number of live WAL files 2.
Nov 25 17:17:50 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000132.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:17:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:50.315492) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Nov 25 17:17:50 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 17:17:50 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [136(638KB)], [134(9193KB)]
Nov 25 17:17:50 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091070315537, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [136], "files_L6": [134], "score": -1, "input_data_size": 10067347, "oldest_snapshot_seqno": -1}
Nov 25 17:17:50 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #137: 7739 keys, 8389251 bytes, temperature: kUnknown
Nov 25 17:17:50 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091070379589, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 137, "file_size": 8389251, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8341631, "index_size": 27155, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19397, "raw_key_size": 203562, "raw_average_key_size": 26, "raw_value_size": 8207398, "raw_average_value_size": 1060, "num_data_blocks": 1047, "num_entries": 7739, "num_filter_entries": 7739, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764091070, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:17:50 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:17:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:50.379934) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 8389251 bytes
Nov 25 17:17:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:50.381355) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 156.9 rd, 130.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 9.0 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(28.2) write-amplify(12.8) OK, records in: 8253, records dropped: 514 output_compression: NoCompression
Nov 25 17:17:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:50.381380) EVENT_LOG_v1 {"time_micros": 1764091070381368, "job": 82, "event": "compaction_finished", "compaction_time_micros": 64182, "compaction_time_cpu_micros": 22198, "output_level": 6, "num_output_files": 1, "total_output_size": 8389251, "num_input_records": 8253, "num_output_records": 7739, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 17:17:50 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000136.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:17:50 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091070381666, "job": 82, "event": "table_file_deletion", "file_number": 136}
Nov 25 17:17:50 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:17:50 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091070383445, "job": 82, "event": "table_file_deletion", "file_number": 134}
Nov 25 17:17:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:50.315355) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:17:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:50.383534) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:17:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:50.383541) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:17:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:50.383544) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:17:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:50.383547) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:17:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:17:50.383550) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:17:50 compute-0 nova_compute[254092]: 2025-11-25 17:17:50.596 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:17:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2829: 321 pgs: 321 active+clean; 88 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:17:51 compute-0 nova_compute[254092]: 2025-11-25 17:17:51.153 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:17:51 compute-0 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Nov 25 17:17:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:17:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:17:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:17:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:17:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Nov 25 17:17:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:17:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:17:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:17:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:17:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:17:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 17:17:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:17:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:17:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:17:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:17:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:17:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:17:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:17:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:17:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:17:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:17:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:17:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:17:52 compute-0 ceph-mon[74985]: pgmap v2829: 321 pgs: 321 active+clean; 88 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:17:52 compute-0 nova_compute[254092]: 2025-11-25 17:17:52.580 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Updating instance_info_cache with network_info: [{"id": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "address": "fa:16:3e:ef:d8:27", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3c9174e-c8", "ovs_interfaceid": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "address": "fa:16:3e:ac:9c:d4", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", 
"type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd7f4f6-80", "ovs_interfaceid": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:17:52 compute-0 nova_compute[254092]: 2025-11-25 17:17:52.606 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:17:52 compute-0 nova_compute[254092]: 2025-11-25 17:17:52.607 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 17:17:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2830: 321 pgs: 321 active+clean; 88 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Nov 25 17:17:54 compute-0 ovn_controller[153477]: 2025-11-25T17:17:54Z|00178|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ef:d8:27 10.100.0.8
Nov 25 17:17:54 compute-0 ovn_controller[153477]: 2025-11-25T17:17:54Z|00179|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ef:d8:27 10.100.0.8
Nov 25 17:17:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:17:54 compute-0 ceph-mon[74985]: pgmap v2830: 321 pgs: 321 active+clean; 88 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Nov 25 17:17:54 compute-0 nova_compute[254092]: 2025-11-25 17:17:54.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:17:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2831: 321 pgs: 321 active+clean; 88 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 37 op/s
Nov 25 17:17:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:17:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/129759022' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:17:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:17:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/129759022' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:17:55 compute-0 nova_compute[254092]: 2025-11-25 17:17:55.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:17:55 compute-0 nova_compute[254092]: 2025-11-25 17:17:55.600 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:17:56 compute-0 nova_compute[254092]: 2025-11-25 17:17:56.155 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:17:56 compute-0 ceph-mon[74985]: pgmap v2831: 321 pgs: 321 active+clean; 88 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 37 op/s
Nov 25 17:17:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/129759022' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:17:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/129759022' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:17:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2832: 321 pgs: 321 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 101 op/s
Nov 25 17:17:58 compute-0 ceph-mon[74985]: pgmap v2832: 321 pgs: 321 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 101 op/s
Nov 25 17:17:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2833: 321 pgs: 321 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:17:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:18:00 compute-0 ceph-mon[74985]: pgmap v2833: 321 pgs: 321 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:18:00 compute-0 nova_compute[254092]: 2025-11-25 17:18:00.605 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2834: 321 pgs: 321 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:18:01 compute-0 nova_compute[254092]: 2025-11-25 17:18:01.158 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:02 compute-0 ceph-mon[74985]: pgmap v2834: 321 pgs: 321 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:18:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2835: 321 pgs: 321 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:18:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:18:04 compute-0 ceph-mon[74985]: pgmap v2835: 321 pgs: 321 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:18:04 compute-0 nova_compute[254092]: 2025-11-25 17:18:04.451 254096 DEBUG oslo_concurrency.lockutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "1ce224dc-5e5e-4105-bc00-9953c57babd7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:18:04 compute-0 nova_compute[254092]: 2025-11-25 17:18:04.452 254096 DEBUG oslo_concurrency.lockutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:18:04 compute-0 nova_compute[254092]: 2025-11-25 17:18:04.480 254096 DEBUG nova.compute.manager [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 17:18:04 compute-0 nova_compute[254092]: 2025-11-25 17:18:04.561 254096 DEBUG oslo_concurrency.lockutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:18:04 compute-0 nova_compute[254092]: 2025-11-25 17:18:04.561 254096 DEBUG oslo_concurrency.lockutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:18:04 compute-0 nova_compute[254092]: 2025-11-25 17:18:04.574 254096 DEBUG nova.virt.hardware [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 17:18:04 compute-0 nova_compute[254092]: 2025-11-25 17:18:04.574 254096 INFO nova.compute.claims [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Claim successful on node compute-0.ctlplane.example.com
Nov 25 17:18:04 compute-0 nova_compute[254092]: 2025-11-25 17:18:04.696 254096 DEBUG oslo_concurrency.processutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:18:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2836: 321 pgs: 321 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:18:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:18:05 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/492810995' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:18:05 compute-0 nova_compute[254092]: 2025-11-25 17:18:05.204 254096 DEBUG oslo_concurrency.processutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:18:05 compute-0 nova_compute[254092]: 2025-11-25 17:18:05.210 254096 DEBUG nova.compute.provider_tree [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:18:05 compute-0 nova_compute[254092]: 2025-11-25 17:18:05.224 254096 DEBUG nova.scheduler.client.report [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:18:05 compute-0 nova_compute[254092]: 2025-11-25 17:18:05.252 254096 DEBUG oslo_concurrency.lockutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:18:05 compute-0 nova_compute[254092]: 2025-11-25 17:18:05.253 254096 DEBUG nova.compute.manager [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 17:18:05 compute-0 nova_compute[254092]: 2025-11-25 17:18:05.296 254096 DEBUG nova.compute.manager [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 17:18:05 compute-0 nova_compute[254092]: 2025-11-25 17:18:05.297 254096 DEBUG nova.network.neutron [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 17:18:05 compute-0 nova_compute[254092]: 2025-11-25 17:18:05.318 254096 INFO nova.virt.libvirt.driver [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 17:18:05 compute-0 nova_compute[254092]: 2025-11-25 17:18:05.342 254096 DEBUG nova.compute.manager [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 17:18:05 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/492810995' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:18:05 compute-0 nova_compute[254092]: 2025-11-25 17:18:05.428 254096 DEBUG nova.compute.manager [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 17:18:05 compute-0 nova_compute[254092]: 2025-11-25 17:18:05.429 254096 DEBUG nova.virt.libvirt.driver [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 17:18:05 compute-0 nova_compute[254092]: 2025-11-25 17:18:05.430 254096 INFO nova.virt.libvirt.driver [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Creating image(s)
Nov 25 17:18:05 compute-0 nova_compute[254092]: 2025-11-25 17:18:05.463 254096 DEBUG nova.storage.rbd_utils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 1ce224dc-5e5e-4105-bc00-9953c57babd7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:18:05 compute-0 nova_compute[254092]: 2025-11-25 17:18:05.499 254096 DEBUG nova.storage.rbd_utils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 1ce224dc-5e5e-4105-bc00-9953c57babd7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:18:05 compute-0 nova_compute[254092]: 2025-11-25 17:18:05.533 254096 DEBUG nova.storage.rbd_utils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 1ce224dc-5e5e-4105-bc00-9953c57babd7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:18:05 compute-0 nova_compute[254092]: 2025-11-25 17:18:05.538 254096 DEBUG oslo_concurrency.processutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:18:05 compute-0 nova_compute[254092]: 2025-11-25 17:18:05.607 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:05 compute-0 nova_compute[254092]: 2025-11-25 17:18:05.643 254096 DEBUG oslo_concurrency.processutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:18:05 compute-0 nova_compute[254092]: 2025-11-25 17:18:05.644 254096 DEBUG oslo_concurrency.lockutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:18:05 compute-0 nova_compute[254092]: 2025-11-25 17:18:05.645 254096 DEBUG oslo_concurrency.lockutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:18:05 compute-0 nova_compute[254092]: 2025-11-25 17:18:05.645 254096 DEBUG oslo_concurrency.lockutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:18:05 compute-0 podman[411692]: 2025-11-25 17:18:05.666140047 +0000 UTC m=+0.068059943 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 17:18:05 compute-0 nova_compute[254092]: 2025-11-25 17:18:05.682 254096 DEBUG nova.storage.rbd_utils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 1ce224dc-5e5e-4105-bc00-9953c57babd7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:18:05 compute-0 nova_compute[254092]: 2025-11-25 17:18:05.687 254096 DEBUG oslo_concurrency.processutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 1ce224dc-5e5e-4105-bc00-9953c57babd7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:18:05 compute-0 podman[411691]: 2025-11-25 17:18:05.691245231 +0000 UTC m=+0.086301761 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 17:18:05 compute-0 podman[411693]: 2025-11-25 17:18:05.712958042 +0000 UTC m=+0.107538088 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 17:18:05 compute-0 nova_compute[254092]: 2025-11-25 17:18:05.994 254096 DEBUG nova.policy [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a647bb22c9a54e5fa9ddfff588126693', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '67b7f8e8aac6441f992a182f8d893100', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 17:18:06 compute-0 nova_compute[254092]: 2025-11-25 17:18:06.003 254096 DEBUG oslo_concurrency.processutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 1ce224dc-5e5e-4105-bc00-9953c57babd7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.316s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:18:06 compute-0 nova_compute[254092]: 2025-11-25 17:18:06.076 254096 DEBUG nova.storage.rbd_utils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] resizing rbd image 1ce224dc-5e5e-4105-bc00-9953c57babd7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 17:18:06 compute-0 nova_compute[254092]: 2025-11-25 17:18:06.235 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:06 compute-0 nova_compute[254092]: 2025-11-25 17:18:06.244 254096 DEBUG nova.objects.instance [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'migration_context' on Instance uuid 1ce224dc-5e5e-4105-bc00-9953c57babd7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:18:06 compute-0 nova_compute[254092]: 2025-11-25 17:18:06.255 254096 DEBUG nova.virt.libvirt.driver [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 17:18:06 compute-0 nova_compute[254092]: 2025-11-25 17:18:06.256 254096 DEBUG nova.virt.libvirt.driver [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Ensure instance console log exists: /var/lib/nova/instances/1ce224dc-5e5e-4105-bc00-9953c57babd7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 17:18:06 compute-0 nova_compute[254092]: 2025-11-25 17:18:06.256 254096 DEBUG oslo_concurrency.lockutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:18:06 compute-0 nova_compute[254092]: 2025-11-25 17:18:06.256 254096 DEBUG oslo_concurrency.lockutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:18:06 compute-0 nova_compute[254092]: 2025-11-25 17:18:06.257 254096 DEBUG oslo_concurrency.lockutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:18:06 compute-0 ceph-mon[74985]: pgmap v2836: 321 pgs: 321 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:18:06 compute-0 nova_compute[254092]: 2025-11-25 17:18:06.704 254096 DEBUG nova.network.neutron [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Successfully created port: 81fef3aa-29c9-47a1-8cba-c758c43f8e45 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 17:18:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2837: 321 pgs: 321 active+clean; 126 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.5 MiB/s wr, 64 op/s
Nov 25 17:18:07 compute-0 nova_compute[254092]: 2025-11-25 17:18:07.934 254096 DEBUG nova.network.neutron [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Successfully created port: 479e8c0a-f171-45a0-b7de-778cf1b728bb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 17:18:08 compute-0 ceph-mon[74985]: pgmap v2837: 321 pgs: 321 active+clean; 126 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.5 MiB/s wr, 64 op/s
Nov 25 17:18:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2838: 321 pgs: 321 active+clean; 126 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 938 B/s rd, 403 KiB/s wr, 1 op/s
Nov 25 17:18:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:18:09 compute-0 nova_compute[254092]: 2025-11-25 17:18:09.470 254096 DEBUG nova.network.neutron [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Successfully updated port: 81fef3aa-29c9-47a1-8cba-c758c43f8e45 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:18:09 compute-0 nova_compute[254092]: 2025-11-25 17:18:09.587 254096 DEBUG nova.compute.manager [req-d797d463-d566-4c2c-ab2e-1a6199676d9f req-3331d75d-c4fb-4ee8-88d0-472ab50bb2c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Received event network-changed-81fef3aa-29c9-47a1-8cba-c758c43f8e45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:18:09 compute-0 nova_compute[254092]: 2025-11-25 17:18:09.588 254096 DEBUG nova.compute.manager [req-d797d463-d566-4c2c-ab2e-1a6199676d9f req-3331d75d-c4fb-4ee8-88d0-472ab50bb2c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Refreshing instance network info cache due to event network-changed-81fef3aa-29c9-47a1-8cba-c758c43f8e45. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:18:09 compute-0 nova_compute[254092]: 2025-11-25 17:18:09.589 254096 DEBUG oslo_concurrency.lockutils [req-d797d463-d566-4c2c-ab2e-1a6199676d9f req-3331d75d-c4fb-4ee8-88d0-472ab50bb2c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-1ce224dc-5e5e-4105-bc00-9953c57babd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:18:09 compute-0 nova_compute[254092]: 2025-11-25 17:18:09.589 254096 DEBUG oslo_concurrency.lockutils [req-d797d463-d566-4c2c-ab2e-1a6199676d9f req-3331d75d-c4fb-4ee8-88d0-472ab50bb2c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-1ce224dc-5e5e-4105-bc00-9953c57babd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:18:09 compute-0 nova_compute[254092]: 2025-11-25 17:18:09.590 254096 DEBUG nova.network.neutron [req-d797d463-d566-4c2c-ab2e-1a6199676d9f req-3331d75d-c4fb-4ee8-88d0-472ab50bb2c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Refreshing network info cache for port 81fef3aa-29c9-47a1-8cba-c758c43f8e45 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:18:09 compute-0 nova_compute[254092]: 2025-11-25 17:18:09.859 254096 DEBUG nova.network.neutron [req-d797d463-d566-4c2c-ab2e-1a6199676d9f req-3331d75d-c4fb-4ee8-88d0-472ab50bb2c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:18:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:18:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:18:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:18:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:18:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:18:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:18:10 compute-0 ceph-mon[74985]: pgmap v2838: 321 pgs: 321 active+clean; 126 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 938 B/s rd, 403 KiB/s wr, 1 op/s
Nov 25 17:18:10 compute-0 nova_compute[254092]: 2025-11-25 17:18:10.433 254096 DEBUG nova.network.neutron [req-d797d463-d566-4c2c-ab2e-1a6199676d9f req-3331d75d-c4fb-4ee8-88d0-472ab50bb2c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:18:10 compute-0 nova_compute[254092]: 2025-11-25 17:18:10.454 254096 DEBUG oslo_concurrency.lockutils [req-d797d463-d566-4c2c-ab2e-1a6199676d9f req-3331d75d-c4fb-4ee8-88d0-472ab50bb2c5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-1ce224dc-5e5e-4105-bc00-9953c57babd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:18:10 compute-0 nova_compute[254092]: 2025-11-25 17:18:10.609 254096 DEBUG nova.network.neutron [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Successfully updated port: 479e8c0a-f171-45a0-b7de-778cf1b728bb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:18:10 compute-0 nova_compute[254092]: 2025-11-25 17:18:10.611 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:10 compute-0 nova_compute[254092]: 2025-11-25 17:18:10.627 254096 DEBUG oslo_concurrency.lockutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "refresh_cache-1ce224dc-5e5e-4105-bc00-9953c57babd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:18:10 compute-0 nova_compute[254092]: 2025-11-25 17:18:10.628 254096 DEBUG oslo_concurrency.lockutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquired lock "refresh_cache-1ce224dc-5e5e-4105-bc00-9953c57babd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:18:10 compute-0 nova_compute[254092]: 2025-11-25 17:18:10.628 254096 DEBUG nova.network.neutron [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 17:18:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2839: 321 pgs: 321 active+clean; 163 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.5 MiB/s wr, 27 op/s
Nov 25 17:18:10 compute-0 nova_compute[254092]: 2025-11-25 17:18:10.987 254096 DEBUG nova.network.neutron [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:18:11 compute-0 nova_compute[254092]: 2025-11-25 17:18:11.216 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:11 compute-0 nova_compute[254092]: 2025-11-25 17:18:11.704 254096 DEBUG nova.compute.manager [req-8f3d1c13-d1f7-44a5-bd7f-299554d03340 req-310622ed-eff2-414f-bf0b-81ba1de41b0f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Received event network-changed-479e8c0a-f171-45a0-b7de-778cf1b728bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:18:11 compute-0 nova_compute[254092]: 2025-11-25 17:18:11.704 254096 DEBUG nova.compute.manager [req-8f3d1c13-d1f7-44a5-bd7f-299554d03340 req-310622ed-eff2-414f-bf0b-81ba1de41b0f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Refreshing instance network info cache due to event network-changed-479e8c0a-f171-45a0-b7de-778cf1b728bb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:18:11 compute-0 nova_compute[254092]: 2025-11-25 17:18:11.705 254096 DEBUG oslo_concurrency.lockutils [req-8f3d1c13-d1f7-44a5-bd7f-299554d03340 req-310622ed-eff2-414f-bf0b-81ba1de41b0f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-1ce224dc-5e5e-4105-bc00-9953c57babd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:18:12 compute-0 ceph-mon[74985]: pgmap v2839: 321 pgs: 321 active+clean; 163 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.5 MiB/s wr, 27 op/s
Nov 25 17:18:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2840: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:18:13 compute-0 nova_compute[254092]: 2025-11-25 17:18:13.235 254096 DEBUG nova.network.neutron [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Updating instance_info_cache with network_info: [{"id": "81fef3aa-29c9-47a1-8cba-c758c43f8e45", "address": "fa:16:3e:68:79:bf", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81fef3aa-29", "ovs_interfaceid": "81fef3aa-29c9-47a1-8cba-c758c43f8e45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "479e8c0a-f171-45a0-b7de-778cf1b728bb", "address": "fa:16:3e:52:27:71", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:2771", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", 
"type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe52:2771", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479e8c0a-f1", "ovs_interfaceid": "479e8c0a-f171-45a0-b7de-778cf1b728bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:18:13 compute-0 nova_compute[254092]: 2025-11-25 17:18:13.255 254096 DEBUG oslo_concurrency.lockutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Releasing lock "refresh_cache-1ce224dc-5e5e-4105-bc00-9953c57babd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:18:13 compute-0 nova_compute[254092]: 2025-11-25 17:18:13.255 254096 DEBUG nova.compute.manager [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Instance network_info: |[{"id": "81fef3aa-29c9-47a1-8cba-c758c43f8e45", "address": "fa:16:3e:68:79:bf", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81fef3aa-29", "ovs_interfaceid": "81fef3aa-29c9-47a1-8cba-c758c43f8e45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "479e8c0a-f171-45a0-b7de-778cf1b728bb", "address": "fa:16:3e:52:27:71", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:2771", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", 
"version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe52:2771", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479e8c0a-f1", "ovs_interfaceid": "479e8c0a-f171-45a0-b7de-778cf1b728bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 17:18:13 compute-0 nova_compute[254092]: 2025-11-25 17:18:13.256 254096 DEBUG oslo_concurrency.lockutils [req-8f3d1c13-d1f7-44a5-bd7f-299554d03340 req-310622ed-eff2-414f-bf0b-81ba1de41b0f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-1ce224dc-5e5e-4105-bc00-9953c57babd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:18:13 compute-0 nova_compute[254092]: 2025-11-25 17:18:13.256 254096 DEBUG nova.network.neutron [req-8f3d1c13-d1f7-44a5-bd7f-299554d03340 req-310622ed-eff2-414f-bf0b-81ba1de41b0f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Refreshing network info cache for port 479e8c0a-f171-45a0-b7de-778cf1b728bb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:18:13 compute-0 nova_compute[254092]: 2025-11-25 17:18:13.261 254096 DEBUG nova.virt.libvirt.driver [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Start _get_guest_xml network_info=[{"id": "81fef3aa-29c9-47a1-8cba-c758c43f8e45", "address": "fa:16:3e:68:79:bf", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81fef3aa-29", "ovs_interfaceid": "81fef3aa-29c9-47a1-8cba-c758c43f8e45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "479e8c0a-f171-45a0-b7de-778cf1b728bb", "address": "fa:16:3e:52:27:71", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:2771", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe52:2771", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479e8c0a-f1", "ovs_interfaceid": "479e8c0a-f171-45a0-b7de-778cf1b728bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 17:18:13 compute-0 nova_compute[254092]: 2025-11-25 17:18:13.268 254096 WARNING nova.virt.libvirt.driver [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:18:13 compute-0 nova_compute[254092]: 2025-11-25 17:18:13.282 254096 DEBUG nova.virt.libvirt.host [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 17:18:13 compute-0 nova_compute[254092]: 2025-11-25 17:18:13.283 254096 DEBUG nova.virt.libvirt.host [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 17:18:13 compute-0 nova_compute[254092]: 2025-11-25 17:18:13.288 254096 DEBUG nova.virt.libvirt.host [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 17:18:13 compute-0 nova_compute[254092]: 2025-11-25 17:18:13.289 254096 DEBUG nova.virt.libvirt.host [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 17:18:13 compute-0 nova_compute[254092]: 2025-11-25 17:18:13.290 254096 DEBUG nova.virt.libvirt.driver [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 17:18:13 compute-0 nova_compute[254092]: 2025-11-25 17:18:13.291 254096 DEBUG nova.virt.hardware [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 17:18:13 compute-0 nova_compute[254092]: 2025-11-25 17:18:13.292 254096 DEBUG nova.virt.hardware [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 17:18:13 compute-0 nova_compute[254092]: 2025-11-25 17:18:13.293 254096 DEBUG nova.virt.hardware [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 17:18:13 compute-0 nova_compute[254092]: 2025-11-25 17:18:13.294 254096 DEBUG nova.virt.hardware [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 17:18:13 compute-0 nova_compute[254092]: 2025-11-25 17:18:13.294 254096 DEBUG nova.virt.hardware [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 17:18:13 compute-0 nova_compute[254092]: 2025-11-25 17:18:13.295 254096 DEBUG nova.virt.hardware [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 17:18:13 compute-0 nova_compute[254092]: 2025-11-25 17:18:13.296 254096 DEBUG nova.virt.hardware [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 17:18:13 compute-0 nova_compute[254092]: 2025-11-25 17:18:13.296 254096 DEBUG nova.virt.hardware [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 17:18:13 compute-0 nova_compute[254092]: 2025-11-25 17:18:13.297 254096 DEBUG nova.virt.hardware [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 17:18:13 compute-0 nova_compute[254092]: 2025-11-25 17:18:13.298 254096 DEBUG nova.virt.hardware [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 17:18:13 compute-0 nova_compute[254092]: 2025-11-25 17:18:13.298 254096 DEBUG nova.virt.hardware [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 17:18:13 compute-0 nova_compute[254092]: 2025-11-25 17:18:13.305 254096 DEBUG oslo_concurrency.processutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:18:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:13.655 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:18:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:13.656 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:18:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:13.657 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:18:13 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:18:13 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3916719126' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:18:13 compute-0 nova_compute[254092]: 2025-11-25 17:18:13.818 254096 DEBUG oslo_concurrency.processutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:18:13 compute-0 nova_compute[254092]: 2025-11-25 17:18:13.848 254096 DEBUG nova.storage.rbd_utils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 1ce224dc-5e5e-4105-bc00-9953c57babd7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:18:13 compute-0 nova_compute[254092]: 2025-11-25 17:18:13.853 254096 DEBUG oslo_concurrency.processutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:18:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:18:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:18:14 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/40798532' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.335 254096 DEBUG oslo_concurrency.processutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.338 254096 DEBUG nova.virt.libvirt.vif [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:18:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-775791465',display_name='tempest-TestGettingAddress-server-775791465',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-775791465',id=142,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ2pcBrDGHvBVYZrwfG+0cOixMoEoWWyZwNBICyplvpWRfO1kJBjvH3LOKdm8vM5yC6I7xSnU6u9yudLUoAragOzDYER6osY0eledXMEDHVj1YkNbsjqQz6dDBKXdmH+YA==',key_name='tempest-TestGettingAddress-309096485',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-lhk13l9s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:18:05Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=1ce224dc-5e5e-4105-bc00-9953c57babd7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "81fef3aa-29c9-47a1-8cba-c758c43f8e45", "address": "fa:16:3e:68:79:bf", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81fef3aa-29", "ovs_interfaceid": "81fef3aa-29c9-47a1-8cba-c758c43f8e45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.339 254096 DEBUG nova.network.os_vif_util [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "81fef3aa-29c9-47a1-8cba-c758c43f8e45", "address": "fa:16:3e:68:79:bf", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81fef3aa-29", "ovs_interfaceid": "81fef3aa-29c9-47a1-8cba-c758c43f8e45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.341 254096 DEBUG nova.network.os_vif_util [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:79:bf,bridge_name='br-int',has_traffic_filtering=True,id=81fef3aa-29c9-47a1-8cba-c758c43f8e45,network=Network(4a702335-301a-4b90-b82e-e616a31e5b3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81fef3aa-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.342 254096 DEBUG nova.virt.libvirt.vif [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:18:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-775791465',display_name='tempest-TestGettingAddress-server-775791465',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-775791465',id=142,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ2pcBrDGHvBVYZrwfG+0cOixMoEoWWyZwNBICyplvpWRfO1kJBjvH3LOKdm8vM5yC6I7xSnU6u9yudLUoAragOzDYER6osY0eledXMEDHVj1YkNbsjqQz6dDBKXdmH+YA==',key_name='tempest-TestGettingAddress-309096485',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-lhk13l9s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:18:05Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=1ce224dc-5e5e-4105-bc00-9953c57babd7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "479e8c0a-f171-45a0-b7de-778cf1b728bb", "address": "fa:16:3e:52:27:71", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:2771", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe52:2771", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479e8c0a-f1", "ovs_interfaceid": "479e8c0a-f171-45a0-b7de-778cf1b728bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.343 254096 DEBUG nova.network.os_vif_util [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "479e8c0a-f171-45a0-b7de-778cf1b728bb", "address": "fa:16:3e:52:27:71", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:2771", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe52:2771", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479e8c0a-f1", "ovs_interfaceid": "479e8c0a-f171-45a0-b7de-778cf1b728bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.344 254096 DEBUG nova.network.os_vif_util [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:27:71,bridge_name='br-int',has_traffic_filtering=True,id=479e8c0a-f171-45a0-b7de-778cf1b728bb,network=Network(51f47401-ed2b-45e2-aea1-5cbbd48e5245),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap479e8c0a-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.346 254096 DEBUG nova.objects.instance [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1ce224dc-5e5e-4105-bc00-9953c57babd7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:18:14 compute-0 ceph-mon[74985]: pgmap v2840: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:18:14 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3916719126' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:18:14 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/40798532' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.520 254096 DEBUG nova.virt.libvirt.driver [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] End _get_guest_xml xml=<domain type="kvm">
Nov 25 17:18:14 compute-0 nova_compute[254092]:   <uuid>1ce224dc-5e5e-4105-bc00-9953c57babd7</uuid>
Nov 25 17:18:14 compute-0 nova_compute[254092]:   <name>instance-0000008e</name>
Nov 25 17:18:14 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 17:18:14 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 17:18:14 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:18:14 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:       <nova:name>tempest-TestGettingAddress-server-775791465</nova:name>
Nov 25 17:18:14 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 17:18:13</nova:creationTime>
Nov 25 17:18:14 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 17:18:14 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 17:18:14 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 17:18:14 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 17:18:14 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:18:14 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 17:18:14 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 17:18:14 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 17:18:14 compute-0 nova_compute[254092]:         <nova:user uuid="a647bb22c9a54e5fa9ddfff588126693">tempest-TestGettingAddress-207202764-project-member</nova:user>
Nov 25 17:18:14 compute-0 nova_compute[254092]:         <nova:project uuid="67b7f8e8aac6441f992a182f8d893100">tempest-TestGettingAddress-207202764</nova:project>
Nov 25 17:18:14 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 17:18:14 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 17:18:14 compute-0 nova_compute[254092]:         <nova:port uuid="81fef3aa-29c9-47a1-8cba-c758c43f8e45">
Nov 25 17:18:14 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 17:18:14 compute-0 nova_compute[254092]:         <nova:port uuid="479e8c0a-f171-45a0-b7de-778cf1b728bb">
Nov 25 17:18:14 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe52:2771" ipVersion="6"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe52:2771" ipVersion="6"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 17:18:14 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 17:18:14 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:18:14 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 17:18:14 compute-0 nova_compute[254092]:     <system>
Nov 25 17:18:14 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 17:18:14 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 17:18:14 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:18:14 compute-0 nova_compute[254092]:       <entry name="serial">1ce224dc-5e5e-4105-bc00-9953c57babd7</entry>
Nov 25 17:18:14 compute-0 nova_compute[254092]:       <entry name="uuid">1ce224dc-5e5e-4105-bc00-9953c57babd7</entry>
Nov 25 17:18:14 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     </system>
Nov 25 17:18:14 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:18:14 compute-0 nova_compute[254092]:   <os>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:   </os>
Nov 25 17:18:14 compute-0 nova_compute[254092]:   <features>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:   </features>
Nov 25 17:18:14 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 17:18:14 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:18:14 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 17:18:14 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:18:14 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 17:18:14 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/1ce224dc-5e5e-4105-bc00-9953c57babd7_disk">
Nov 25 17:18:14 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:       </source>
Nov 25 17:18:14 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:18:14 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:18:14 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 17:18:14 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/1ce224dc-5e5e-4105-bc00-9953c57babd7_disk.config">
Nov 25 17:18:14 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:       </source>
Nov 25 17:18:14 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:18:14 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:18:14 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 17:18:14 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:68:79:bf"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:       <target dev="tap81fef3aa-29"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 17:18:14 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:52:27:71"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:       <target dev="tap479e8c0a-f1"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 17:18:14 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/1ce224dc-5e5e-4105-bc00-9953c57babd7/console.log" append="off"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     <video>
Nov 25 17:18:14 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     </video>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 17:18:14 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 17:18:14 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 17:18:14 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:18:14 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:18:14 compute-0 nova_compute[254092]: </domain>
Nov 25 17:18:14 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.521 254096 DEBUG nova.compute.manager [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Preparing to wait for external event network-vif-plugged-81fef3aa-29c9-47a1-8cba-c758c43f8e45 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.522 254096 DEBUG oslo_concurrency.lockutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.522 254096 DEBUG oslo_concurrency.lockutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.522 254096 DEBUG oslo_concurrency.lockutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.523 254096 DEBUG nova.compute.manager [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Preparing to wait for external event network-vif-plugged-479e8c0a-f171-45a0-b7de-778cf1b728bb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.523 254096 DEBUG oslo_concurrency.lockutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.523 254096 DEBUG oslo_concurrency.lockutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.524 254096 DEBUG oslo_concurrency.lockutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.525 254096 DEBUG nova.virt.libvirt.vif [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:18:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-775791465',display_name='tempest-TestGettingAddress-server-775791465',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-775791465',id=142,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ2pcBrDGHvBVYZrwfG+0cOixMoEoWWyZwNBICyplvpWRfO1kJBjvH3LOKdm8vM5yC6I7xSnU6u9yudLUoAragOzDYER6osY0eledXMEDHVj1YkNbsjqQz6dDBKXdmH+YA==',key_name='tempest-TestGettingAddress-309096485',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-lhk13l9s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:18:05Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=1ce224dc-5e5e-4105-bc00-9953c57babd7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "81fef3aa-29c9-47a1-8cba-c758c43f8e45", "address": "fa:16:3e:68:79:bf", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81fef3aa-29", "ovs_interfaceid": "81fef3aa-29c9-47a1-8cba-c758c43f8e45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.525 254096 DEBUG nova.network.os_vif_util [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "81fef3aa-29c9-47a1-8cba-c758c43f8e45", "address": "fa:16:3e:68:79:bf", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81fef3aa-29", "ovs_interfaceid": "81fef3aa-29c9-47a1-8cba-c758c43f8e45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.526 254096 DEBUG nova.network.os_vif_util [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:79:bf,bridge_name='br-int',has_traffic_filtering=True,id=81fef3aa-29c9-47a1-8cba-c758c43f8e45,network=Network(4a702335-301a-4b90-b82e-e616a31e5b3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81fef3aa-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.527 254096 DEBUG os_vif [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:79:bf,bridge_name='br-int',has_traffic_filtering=True,id=81fef3aa-29c9-47a1-8cba-c758c43f8e45,network=Network(4a702335-301a-4b90-b82e-e616a31e5b3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81fef3aa-29') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.528 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.529 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.530 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.533 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.534 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap81fef3aa-29, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.535 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap81fef3aa-29, col_values=(('external_ids', {'iface-id': '81fef3aa-29c9-47a1-8cba-c758c43f8e45', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:68:79:bf', 'vm-uuid': '1ce224dc-5e5e-4105-bc00-9953c57babd7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:18:14 compute-0 NetworkManager[48891]: <info>  [1764091094.5393] manager: (tap81fef3aa-29): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/625)
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.538 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.541 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.548 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.551 254096 INFO os_vif [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:79:bf,bridge_name='br-int',has_traffic_filtering=True,id=81fef3aa-29c9-47a1-8cba-c758c43f8e45,network=Network(4a702335-301a-4b90-b82e-e616a31e5b3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81fef3aa-29')
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.552 254096 DEBUG nova.virt.libvirt.vif [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:18:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-775791465',display_name='tempest-TestGettingAddress-server-775791465',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-775791465',id=142,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ2pcBrDGHvBVYZrwfG+0cOixMoEoWWyZwNBICyplvpWRfO1kJBjvH3LOKdm8vM5yC6I7xSnU6u9yudLUoAragOzDYER6osY0eledXMEDHVj1YkNbsjqQz6dDBKXdmH+YA==',key_name='tempest-TestGettingAddress-309096485',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-lhk13l9s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:18:05Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=1ce224dc-5e5e-4105-bc00-9953c57babd7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "479e8c0a-f171-45a0-b7de-778cf1b728bb", "address": "fa:16:3e:52:27:71", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:2771", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe52:2771", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479e8c0a-f1", "ovs_interfaceid": "479e8c0a-f171-45a0-b7de-778cf1b728bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.553 254096 DEBUG nova.network.os_vif_util [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "479e8c0a-f171-45a0-b7de-778cf1b728bb", "address": "fa:16:3e:52:27:71", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:2771", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe52:2771", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479e8c0a-f1", "ovs_interfaceid": "479e8c0a-f171-45a0-b7de-778cf1b728bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.554 254096 DEBUG nova.network.os_vif_util [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:27:71,bridge_name='br-int',has_traffic_filtering=True,id=479e8c0a-f171-45a0-b7de-778cf1b728bb,network=Network(51f47401-ed2b-45e2-aea1-5cbbd48e5245),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap479e8c0a-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.554 254096 DEBUG os_vif [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:27:71,bridge_name='br-int',has_traffic_filtering=True,id=479e8c0a-f171-45a0-b7de-778cf1b728bb,network=Network(51f47401-ed2b-45e2-aea1-5cbbd48e5245),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap479e8c0a-f1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.555 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.555 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.556 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.558 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.558 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap479e8c0a-f1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.559 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap479e8c0a-f1, col_values=(('external_ids', {'iface-id': '479e8c0a-f171-45a0-b7de-778cf1b728bb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:52:27:71', 'vm-uuid': '1ce224dc-5e5e-4105-bc00-9953c57babd7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:18:14 compute-0 NetworkManager[48891]: <info>  [1764091094.5617] manager: (tap479e8c0a-f1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/626)
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.563 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.567 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.568 254096 INFO os_vif [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:27:71,bridge_name='br-int',has_traffic_filtering=True,id=479e8c0a-f171-45a0-b7de-778cf1b728bb,network=Network(51f47401-ed2b-45e2-aea1-5cbbd48e5245),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap479e8c0a-f1')
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.608 254096 DEBUG nova.virt.libvirt.driver [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.609 254096 DEBUG nova.virt.libvirt.driver [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.609 254096 DEBUG nova.virt.libvirt.driver [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No VIF found with MAC fa:16:3e:68:79:bf, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.609 254096 DEBUG nova.virt.libvirt.driver [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No VIF found with MAC fa:16:3e:52:27:71, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.610 254096 INFO nova.virt.libvirt.driver [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Using config drive
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.636 254096 DEBUG nova.storage.rbd_utils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 1ce224dc-5e5e-4105-bc00-9953c57babd7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.907 254096 DEBUG nova.network.neutron [req-8f3d1c13-d1f7-44a5-bd7f-299554d03340 req-310622ed-eff2-414f-bf0b-81ba1de41b0f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Updated VIF entry in instance network info cache for port 479e8c0a-f171-45a0-b7de-778cf1b728bb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.907 254096 DEBUG nova.network.neutron [req-8f3d1c13-d1f7-44a5-bd7f-299554d03340 req-310622ed-eff2-414f-bf0b-81ba1de41b0f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Updating instance_info_cache with network_info: [{"id": "81fef3aa-29c9-47a1-8cba-c758c43f8e45", "address": "fa:16:3e:68:79:bf", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81fef3aa-29", "ovs_interfaceid": "81fef3aa-29c9-47a1-8cba-c758c43f8e45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "479e8c0a-f171-45a0-b7de-778cf1b728bb", "address": "fa:16:3e:52:27:71", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:2771", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe52:2771", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479e8c0a-f1", "ovs_interfaceid": "479e8c0a-f171-45a0-b7de-778cf1b728bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:18:14 compute-0 nova_compute[254092]: 2025-11-25 17:18:14.925 254096 DEBUG oslo_concurrency.lockutils [req-8f3d1c13-d1f7-44a5-bd7f-299554d03340 req-310622ed-eff2-414f-bf0b-81ba1de41b0f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-1ce224dc-5e5e-4105-bc00-9953c57babd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:18:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2841: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:18:16 compute-0 nova_compute[254092]: 2025-11-25 17:18:16.219 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:16 compute-0 nova_compute[254092]: 2025-11-25 17:18:16.366 254096 INFO nova.virt.libvirt.driver [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Creating config drive at /var/lib/nova/instances/1ce224dc-5e5e-4105-bc00-9953c57babd7/disk.config
Nov 25 17:18:16 compute-0 nova_compute[254092]: 2025-11-25 17:18:16.372 254096 DEBUG oslo_concurrency.processutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1ce224dc-5e5e-4105-bc00-9953c57babd7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmj7x_e5n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:18:16 compute-0 ceph-mon[74985]: pgmap v2841: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:18:16 compute-0 nova_compute[254092]: 2025-11-25 17:18:16.543 254096 DEBUG oslo_concurrency.processutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1ce224dc-5e5e-4105-bc00-9953c57babd7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmj7x_e5n" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:18:16 compute-0 nova_compute[254092]: 2025-11-25 17:18:16.581 254096 DEBUG nova.storage.rbd_utils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 1ce224dc-5e5e-4105-bc00-9953c57babd7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:18:16 compute-0 nova_compute[254092]: 2025-11-25 17:18:16.588 254096 DEBUG oslo_concurrency.processutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1ce224dc-5e5e-4105-bc00-9953c57babd7/disk.config 1ce224dc-5e5e-4105-bc00-9953c57babd7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:18:16 compute-0 nova_compute[254092]: 2025-11-25 17:18:16.795 254096 DEBUG oslo_concurrency.processutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1ce224dc-5e5e-4105-bc00-9953c57babd7/disk.config 1ce224dc-5e5e-4105-bc00-9953c57babd7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.206s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:18:16 compute-0 nova_compute[254092]: 2025-11-25 17:18:16.796 254096 INFO nova.virt.libvirt.driver [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Deleting local config drive /var/lib/nova/instances/1ce224dc-5e5e-4105-bc00-9953c57babd7/disk.config because it was imported into RBD.
Nov 25 17:18:16 compute-0 NetworkManager[48891]: <info>  [1764091096.8629] manager: (tap81fef3aa-29): new Tun device (/org/freedesktop/NetworkManager/Devices/627)
Nov 25 17:18:16 compute-0 kernel: tap81fef3aa-29: entered promiscuous mode
Nov 25 17:18:16 compute-0 nova_compute[254092]: 2025-11-25 17:18:16.868 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:16 compute-0 ovn_controller[153477]: 2025-11-25T17:18:16Z|01510|binding|INFO|Claiming lport 81fef3aa-29c9-47a1-8cba-c758c43f8e45 for this chassis.
Nov 25 17:18:16 compute-0 ovn_controller[153477]: 2025-11-25T17:18:16Z|01511|binding|INFO|81fef3aa-29c9-47a1-8cba-c758c43f8e45: Claiming fa:16:3e:68:79:bf 10.100.0.6
Nov 25 17:18:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:16.888 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:79:bf 10.100.0.6'], port_security=['fa:16:3e:68:79:bf 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '1ce224dc-5e5e-4105-bc00-9953c57babd7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a702335-301a-4b90-b82e-e616a31e5b3e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd7810d87-4bfa-4d99-b3b6-de28ef8547f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0702eb11-7151-4340-a4d9-f0fbc28cb176, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=81fef3aa-29c9-47a1-8cba-c758c43f8e45) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:18:16 compute-0 NetworkManager[48891]: <info>  [1764091096.8914] manager: (tap479e8c0a-f1): new Tun device (/org/freedesktop/NetworkManager/Devices/628)
Nov 25 17:18:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:16.890 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 81fef3aa-29c9-47a1-8cba-c758c43f8e45 in datapath 4a702335-301a-4b90-b82e-e616a31e5b3e bound to our chassis
Nov 25 17:18:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:16.892 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4a702335-301a-4b90-b82e-e616a31e5b3e
Nov 25 17:18:16 compute-0 kernel: tap479e8c0a-f1: entered promiscuous mode
Nov 25 17:18:16 compute-0 ovn_controller[153477]: 2025-11-25T17:18:16Z|01512|binding|INFO|Setting lport 81fef3aa-29c9-47a1-8cba-c758c43f8e45 ovn-installed in OVS
Nov 25 17:18:16 compute-0 ovn_controller[153477]: 2025-11-25T17:18:16Z|01513|binding|INFO|Setting lport 81fef3aa-29c9-47a1-8cba-c758c43f8e45 up in Southbound
Nov 25 17:18:16 compute-0 nova_compute[254092]: 2025-11-25 17:18:16.896 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:16 compute-0 ovn_controller[153477]: 2025-11-25T17:18:16Z|01514|if_status|INFO|Not updating pb chassis for 479e8c0a-f171-45a0-b7de-778cf1b728bb now as sb is readonly
Nov 25 17:18:16 compute-0 ovn_controller[153477]: 2025-11-25T17:18:16Z|01515|binding|INFO|Claiming lport 479e8c0a-f171-45a0-b7de-778cf1b728bb for this chassis.
Nov 25 17:18:16 compute-0 ovn_controller[153477]: 2025-11-25T17:18:16Z|01516|binding|INFO|479e8c0a-f171-45a0-b7de-778cf1b728bb: Claiming fa:16:3e:52:27:71 2001:db8:0:1:f816:3eff:fe52:2771 2001:db8::f816:3eff:fe52:2771
Nov 25 17:18:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:16.908 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:27:71 2001:db8:0:1:f816:3eff:fe52:2771 2001:db8::f816:3eff:fe52:2771'], port_security=['fa:16:3e:52:27:71 2001:db8:0:1:f816:3eff:fe52:2771 2001:db8::f816:3eff:fe52:2771'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe52:2771/64 2001:db8::f816:3eff:fe52:2771/64', 'neutron:device_id': '1ce224dc-5e5e-4105-bc00-9953c57babd7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-51f47401-ed2b-45e2-aea1-5cbbd48e5245', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd7810d87-4bfa-4d99-b3b6-de28ef8547f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=945443dd-5eab-4f86-892f-c2db25b0d0db, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=479e8c0a-f171-45a0-b7de-778cf1b728bb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:18:16 compute-0 systemd-udevd[412000]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:18:16 compute-0 systemd-udevd[412001]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:18:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:16.917 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d3044930-72a0-46aa-ba5f-fa12a22ed788]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:18:16 compute-0 ovn_controller[153477]: 2025-11-25T17:18:16Z|01517|binding|INFO|Setting lport 479e8c0a-f171-45a0-b7de-778cf1b728bb up in Southbound
Nov 25 17:18:16 compute-0 ovn_controller[153477]: 2025-11-25T17:18:16Z|01518|binding|INFO|Setting lport 479e8c0a-f171-45a0-b7de-778cf1b728bb ovn-installed in OVS
Nov 25 17:18:16 compute-0 nova_compute[254092]: 2025-11-25 17:18:16.919 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:16 compute-0 nova_compute[254092]: 2025-11-25 17:18:16.927 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:16 compute-0 NetworkManager[48891]: <info>  [1764091096.9299] device (tap81fef3aa-29): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:18:16 compute-0 NetworkManager[48891]: <info>  [1764091096.9310] device (tap81fef3aa-29): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:18:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2842: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:18:16 compute-0 NetworkManager[48891]: <info>  [1764091096.9384] device (tap479e8c0a-f1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:18:16 compute-0 NetworkManager[48891]: <info>  [1764091096.9394] device (tap479e8c0a-f1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:18:16 compute-0 systemd-machined[216343]: New machine qemu-176-instance-0000008e.
Nov 25 17:18:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:16.960 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[437beadc-bcef-46c0-9276-32a9f7602719]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:18:16 compute-0 systemd[1]: Started Virtual Machine qemu-176-instance-0000008e.
Nov 25 17:18:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:16.964 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[2b29a49b-27fb-4d64-bc12-62b3eb45dbd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:18:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:17.002 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[eb8701b2-bc9e-4f48-b38b-d3f52e4fedf1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:18:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:17.022 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[aeab5db7-e43f-4c0e-9fbf-d0e91d48e241]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4a702335-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:4b:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 437], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 751460, 'reachable_time': 25856, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 412015, 'error': None, 'target': 'ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:18:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:17.044 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[13041d0c-b6a1-4b85-9b58-19fa459076ed]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4a702335-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 751474, 'tstamp': 751474}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 412018, 'error': None, 'target': 'ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4a702335-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 751477, 'tstamp': 751477}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 412018, 'error': None, 'target': 'ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:18:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:17.046 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a702335-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:18:17 compute-0 nova_compute[254092]: 2025-11-25 17:18:17.048 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:17 compute-0 nova_compute[254092]: 2025-11-25 17:18:17.050 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:17.050 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a702335-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:18:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:17.050 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:18:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:17.051 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4a702335-30, col_values=(('external_ids', {'iface-id': '1cef18fb-8ae8-44d1-93c6-659b405ed9b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:18:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:17.052 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:18:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:17.053 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 479e8c0a-f171-45a0-b7de-778cf1b728bb in datapath 51f47401-ed2b-45e2-aea1-5cbbd48e5245 unbound from our chassis
Nov 25 17:18:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:17.055 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 51f47401-ed2b-45e2-aea1-5cbbd48e5245
Nov 25 17:18:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:17.074 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[05dfebe8-b397-45c3-9b9c-7f53fd0e7f99]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:18:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:17.121 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[28902694-42de-4742-9488-308312e3505c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:18:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:17.127 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e1463651-9df5-45d0-bce0-c43a2b8b3a99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:18:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:17.165 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d6643c14-1933-4042-857c-53dd73402d2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:18:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:17.192 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[640b0c92-9018-4243-bff9-4197a599ff35]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap51f47401-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:01:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 21, 'tx_packets': 4, 'rx_bytes': 1846, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 21, 'tx_packets': 4, 'rx_bytes': 1846, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 438], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 751557, 'reachable_time': 21561, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 21, 'inoctets': 1552, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 21, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1552, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 21, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 412025, 'error': None, 'target': 'ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:18:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:17.214 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9ae0d58f-87cf-4ede-915d-7a25e2a97f5e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap51f47401-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 751571, 'tstamp': 751571}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 412026, 'error': None, 'target': 'ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:18:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:17.216 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap51f47401-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:18:17 compute-0 nova_compute[254092]: 2025-11-25 17:18:17.217 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:17 compute-0 nova_compute[254092]: 2025-11-25 17:18:17.219 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:17.219 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap51f47401-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:18:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:17.219 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:18:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:17.220 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap51f47401-e0, col_values=(('external_ids', {'iface-id': '7624b22b-8369-4a12-940b-9f95890a4040'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:18:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:17.220 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:18:17 compute-0 nova_compute[254092]: 2025-11-25 17:18:17.366 254096 DEBUG nova.compute.manager [req-eba9e737-0692-4d02-b1bf-7f30f8ac33eb req-df1e701c-8721-4603-9836-de7ad494c59b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Received event network-vif-plugged-479e8c0a-f171-45a0-b7de-778cf1b728bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:18:17 compute-0 nova_compute[254092]: 2025-11-25 17:18:17.367 254096 DEBUG oslo_concurrency.lockutils [req-eba9e737-0692-4d02-b1bf-7f30f8ac33eb req-df1e701c-8721-4603-9836-de7ad494c59b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:18:17 compute-0 nova_compute[254092]: 2025-11-25 17:18:17.368 254096 DEBUG oslo_concurrency.lockutils [req-eba9e737-0692-4d02-b1bf-7f30f8ac33eb req-df1e701c-8721-4603-9836-de7ad494c59b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:18:17 compute-0 nova_compute[254092]: 2025-11-25 17:18:17.368 254096 DEBUG oslo_concurrency.lockutils [req-eba9e737-0692-4d02-b1bf-7f30f8ac33eb req-df1e701c-8721-4603-9836-de7ad494c59b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:18:17 compute-0 nova_compute[254092]: 2025-11-25 17:18:17.368 254096 DEBUG nova.compute.manager [req-eba9e737-0692-4d02-b1bf-7f30f8ac33eb req-df1e701c-8721-4603-9836-de7ad494c59b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Processing event network-vif-plugged-479e8c0a-f171-45a0-b7de-778cf1b728bb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 17:18:17 compute-0 nova_compute[254092]: 2025-11-25 17:18:17.407 254096 DEBUG nova.compute.manager [req-bab82105-b17f-4e66-a3e2-1cbb99d59bde req-65685d32-42de-44e1-b3bd-4e349048ea72 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Received event network-vif-plugged-81fef3aa-29c9-47a1-8cba-c758c43f8e45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:18:17 compute-0 nova_compute[254092]: 2025-11-25 17:18:17.407 254096 DEBUG oslo_concurrency.lockutils [req-bab82105-b17f-4e66-a3e2-1cbb99d59bde req-65685d32-42de-44e1-b3bd-4e349048ea72 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:18:17 compute-0 nova_compute[254092]: 2025-11-25 17:18:17.408 254096 DEBUG oslo_concurrency.lockutils [req-bab82105-b17f-4e66-a3e2-1cbb99d59bde req-65685d32-42de-44e1-b3bd-4e349048ea72 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:18:17 compute-0 nova_compute[254092]: 2025-11-25 17:18:17.408 254096 DEBUG oslo_concurrency.lockutils [req-bab82105-b17f-4e66-a3e2-1cbb99d59bde req-65685d32-42de-44e1-b3bd-4e349048ea72 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:18:17 compute-0 nova_compute[254092]: 2025-11-25 17:18:17.409 254096 DEBUG nova.compute.manager [req-bab82105-b17f-4e66-a3e2-1cbb99d59bde req-65685d32-42de-44e1-b3bd-4e349048ea72 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Processing event network-vif-plugged-81fef3aa-29c9-47a1-8cba-c758c43f8e45 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 17:18:17 compute-0 nova_compute[254092]: 2025-11-25 17:18:17.651 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091097.650954, 1ce224dc-5e5e-4105-bc00-9953c57babd7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:18:17 compute-0 nova_compute[254092]: 2025-11-25 17:18:17.652 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] VM Started (Lifecycle Event)
Nov 25 17:18:17 compute-0 nova_compute[254092]: 2025-11-25 17:18:17.655 254096 DEBUG nova.compute.manager [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 17:18:17 compute-0 nova_compute[254092]: 2025-11-25 17:18:17.660 254096 DEBUG nova.virt.libvirt.driver [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 17:18:17 compute-0 nova_compute[254092]: 2025-11-25 17:18:17.665 254096 INFO nova.virt.libvirt.driver [-] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Instance spawned successfully.
Nov 25 17:18:17 compute-0 nova_compute[254092]: 2025-11-25 17:18:17.665 254096 DEBUG nova.virt.libvirt.driver [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 17:18:17 compute-0 nova_compute[254092]: 2025-11-25 17:18:17.671 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:18:17 compute-0 nova_compute[254092]: 2025-11-25 17:18:17.676 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:18:17 compute-0 nova_compute[254092]: 2025-11-25 17:18:17.692 254096 DEBUG nova.virt.libvirt.driver [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:18:17 compute-0 nova_compute[254092]: 2025-11-25 17:18:17.693 254096 DEBUG nova.virt.libvirt.driver [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:18:17 compute-0 nova_compute[254092]: 2025-11-25 17:18:17.693 254096 DEBUG nova.virt.libvirt.driver [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:18:17 compute-0 nova_compute[254092]: 2025-11-25 17:18:17.693 254096 DEBUG nova.virt.libvirt.driver [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:18:17 compute-0 nova_compute[254092]: 2025-11-25 17:18:17.694 254096 DEBUG nova.virt.libvirt.driver [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:18:17 compute-0 nova_compute[254092]: 2025-11-25 17:18:17.694 254096 DEBUG nova.virt.libvirt.driver [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:18:17 compute-0 nova_compute[254092]: 2025-11-25 17:18:17.698 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:18:17 compute-0 nova_compute[254092]: 2025-11-25 17:18:17.699 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091097.6514113, 1ce224dc-5e5e-4105-bc00-9953c57babd7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:18:17 compute-0 nova_compute[254092]: 2025-11-25 17:18:17.699 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] VM Paused (Lifecycle Event)
Nov 25 17:18:17 compute-0 nova_compute[254092]: 2025-11-25 17:18:17.728 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:18:17 compute-0 nova_compute[254092]: 2025-11-25 17:18:17.732 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091097.6585634, 1ce224dc-5e5e-4105-bc00-9953c57babd7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:18:17 compute-0 nova_compute[254092]: 2025-11-25 17:18:17.733 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] VM Resumed (Lifecycle Event)
Nov 25 17:18:17 compute-0 nova_compute[254092]: 2025-11-25 17:18:17.754 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:18:17 compute-0 nova_compute[254092]: 2025-11-25 17:18:17.761 254096 INFO nova.compute.manager [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Took 12.33 seconds to spawn the instance on the hypervisor.
Nov 25 17:18:17 compute-0 nova_compute[254092]: 2025-11-25 17:18:17.762 254096 DEBUG nova.compute.manager [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:18:17 compute-0 nova_compute[254092]: 2025-11-25 17:18:17.763 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:18:17 compute-0 nova_compute[254092]: 2025-11-25 17:18:17.780 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:18:17 compute-0 nova_compute[254092]: 2025-11-25 17:18:17.837 254096 INFO nova.compute.manager [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Took 13.31 seconds to build instance.
Nov 25 17:18:17 compute-0 nova_compute[254092]: 2025-11-25 17:18:17.853 254096 DEBUG oslo_concurrency.lockutils [None req-2f9b4238-befa-4805-9be1-96b45ada644f a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.401s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:18:18 compute-0 ceph-mon[74985]: pgmap v2842: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:18:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2843: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.4 MiB/s wr, 26 op/s
Nov 25 17:18:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:18:19 compute-0 sudo[412070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:18:19 compute-0 sudo[412070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:18:19 compute-0 sudo[412070]: pam_unix(sudo:session): session closed for user root
Nov 25 17:18:19 compute-0 sudo[412095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:18:19 compute-0 sudo[412095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:18:19 compute-0 sudo[412095]: pam_unix(sudo:session): session closed for user root
Nov 25 17:18:19 compute-0 sudo[412120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:18:19 compute-0 sudo[412120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:18:19 compute-0 sudo[412120]: pam_unix(sudo:session): session closed for user root
Nov 25 17:18:19 compute-0 sudo[412145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:18:19 compute-0 nova_compute[254092]: 2025-11-25 17:18:19.489 254096 DEBUG nova.compute.manager [req-1027aff6-5f5e-46d9-8211-0dcf11230c93 req-eb7023b8-8a2d-4fdd-9f5b-4e22162ed797 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Received event network-vif-plugged-479e8c0a-f171-45a0-b7de-778cf1b728bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:18:19 compute-0 nova_compute[254092]: 2025-11-25 17:18:19.490 254096 DEBUG oslo_concurrency.lockutils [req-1027aff6-5f5e-46d9-8211-0dcf11230c93 req-eb7023b8-8a2d-4fdd-9f5b-4e22162ed797 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:18:19 compute-0 nova_compute[254092]: 2025-11-25 17:18:19.491 254096 DEBUG oslo_concurrency.lockutils [req-1027aff6-5f5e-46d9-8211-0dcf11230c93 req-eb7023b8-8a2d-4fdd-9f5b-4e22162ed797 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:18:19 compute-0 sudo[412145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:18:19 compute-0 nova_compute[254092]: 2025-11-25 17:18:19.491 254096 DEBUG oslo_concurrency.lockutils [req-1027aff6-5f5e-46d9-8211-0dcf11230c93 req-eb7023b8-8a2d-4fdd-9f5b-4e22162ed797 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:18:19 compute-0 nova_compute[254092]: 2025-11-25 17:18:19.492 254096 DEBUG nova.compute.manager [req-1027aff6-5f5e-46d9-8211-0dcf11230c93 req-eb7023b8-8a2d-4fdd-9f5b-4e22162ed797 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] No waiting events found dispatching network-vif-plugged-479e8c0a-f171-45a0-b7de-778cf1b728bb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:18:19 compute-0 nova_compute[254092]: 2025-11-25 17:18:19.492 254096 WARNING nova.compute.manager [req-1027aff6-5f5e-46d9-8211-0dcf11230c93 req-eb7023b8-8a2d-4fdd-9f5b-4e22162ed797 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Received unexpected event network-vif-plugged-479e8c0a-f171-45a0-b7de-778cf1b728bb for instance with vm_state active and task_state None.
Nov 25 17:18:19 compute-0 nova_compute[254092]: 2025-11-25 17:18:19.523 254096 DEBUG nova.compute.manager [req-95c798ce-044b-4ef3-b086-208b8005214e req-f20a37ff-3b12-4a27-813f-c2a01a78d17e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Received event network-vif-plugged-81fef3aa-29c9-47a1-8cba-c758c43f8e45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:18:19 compute-0 nova_compute[254092]: 2025-11-25 17:18:19.524 254096 DEBUG oslo_concurrency.lockutils [req-95c798ce-044b-4ef3-b086-208b8005214e req-f20a37ff-3b12-4a27-813f-c2a01a78d17e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:18:19 compute-0 nova_compute[254092]: 2025-11-25 17:18:19.524 254096 DEBUG oslo_concurrency.lockutils [req-95c798ce-044b-4ef3-b086-208b8005214e req-f20a37ff-3b12-4a27-813f-c2a01a78d17e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:18:19 compute-0 nova_compute[254092]: 2025-11-25 17:18:19.524 254096 DEBUG oslo_concurrency.lockutils [req-95c798ce-044b-4ef3-b086-208b8005214e req-f20a37ff-3b12-4a27-813f-c2a01a78d17e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:18:19 compute-0 nova_compute[254092]: 2025-11-25 17:18:19.525 254096 DEBUG nova.compute.manager [req-95c798ce-044b-4ef3-b086-208b8005214e req-f20a37ff-3b12-4a27-813f-c2a01a78d17e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] No waiting events found dispatching network-vif-plugged-81fef3aa-29c9-47a1-8cba-c758c43f8e45 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:18:19 compute-0 nova_compute[254092]: 2025-11-25 17:18:19.525 254096 WARNING nova.compute.manager [req-95c798ce-044b-4ef3-b086-208b8005214e req-f20a37ff-3b12-4a27-813f-c2a01a78d17e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Received unexpected event network-vif-plugged-81fef3aa-29c9-47a1-8cba-c758c43f8e45 for instance with vm_state active and task_state None.
Nov 25 17:18:19 compute-0 nova_compute[254092]: 2025-11-25 17:18:19.560 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:20 compute-0 sudo[412145]: pam_unix(sudo:session): session closed for user root
Nov 25 17:18:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 17:18:20 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 17:18:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:18:20 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:18:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:18:20 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:18:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:18:20 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:18:20 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev c787c164-d501-4cf8-b6b0-6433377924da does not exist
Nov 25 17:18:20 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 182631f9-589b-4df6-8082-41017ed6c6f8 does not exist
Nov 25 17:18:20 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev e8d90fc7-7fc4-44c8-b3e5-f1da4b10b943 does not exist
Nov 25 17:18:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:18:20 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:18:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:18:20 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:18:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:18:20 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:18:20 compute-0 sudo[412201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:18:20 compute-0 sudo[412201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:18:20 compute-0 sudo[412201]: pam_unix(sudo:session): session closed for user root
Nov 25 17:18:20 compute-0 sudo[412226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:18:20 compute-0 sudo[412226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:18:20 compute-0 sudo[412226]: pam_unix(sudo:session): session closed for user root
Nov 25 17:18:20 compute-0 ceph-mon[74985]: pgmap v2843: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.4 MiB/s wr, 26 op/s
Nov 25 17:18:20 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 17:18:20 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:18:20 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:18:20 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:18:20 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:18:20 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:18:20 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:18:20 compute-0 sudo[412251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:18:20 compute-0 sudo[412251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:18:20 compute-0 sudo[412251]: pam_unix(sudo:session): session closed for user root
Nov 25 17:18:20 compute-0 sudo[412276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:18:20 compute-0 sudo[412276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:18:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2844: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.4 MiB/s wr, 70 op/s
Nov 25 17:18:21 compute-0 podman[412342]: 2025-11-25 17:18:21.045479952 +0000 UTC m=+0.060241211 container create 444e11c0e23d7009a63b84e3c41e77f901ec825b18c68c679256f9f4d5176a3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wilbur, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 17:18:21 compute-0 systemd[1]: Started libpod-conmon-444e11c0e23d7009a63b84e3c41e77f901ec825b18c68c679256f9f4d5176a3f.scope.
Nov 25 17:18:21 compute-0 podman[412342]: 2025-11-25 17:18:21.024725307 +0000 UTC m=+0.039486596 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:18:21 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:18:21 compute-0 podman[412342]: 2025-11-25 17:18:21.145769402 +0000 UTC m=+0.160530681 container init 444e11c0e23d7009a63b84e3c41e77f901ec825b18c68c679256f9f4d5176a3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 17:18:21 compute-0 podman[412342]: 2025-11-25 17:18:21.160264456 +0000 UTC m=+0.175025715 container start 444e11c0e23d7009a63b84e3c41e77f901ec825b18c68c679256f9f4d5176a3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 17:18:21 compute-0 podman[412342]: 2025-11-25 17:18:21.164184902 +0000 UTC m=+0.178946201 container attach 444e11c0e23d7009a63b84e3c41e77f901ec825b18c68c679256f9f4d5176a3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 17:18:21 compute-0 determined_wilbur[412356]: 167 167
Nov 25 17:18:21 compute-0 systemd[1]: libpod-444e11c0e23d7009a63b84e3c41e77f901ec825b18c68c679256f9f4d5176a3f.scope: Deactivated successfully.
Nov 25 17:18:21 compute-0 conmon[412356]: conmon 444e11c0e23d7009a63b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-444e11c0e23d7009a63b84e3c41e77f901ec825b18c68c679256f9f4d5176a3f.scope/container/memory.events
Nov 25 17:18:21 compute-0 podman[412342]: 2025-11-25 17:18:21.172858559 +0000 UTC m=+0.187619828 container died 444e11c0e23d7009a63b84e3c41e77f901ec825b18c68c679256f9f4d5176a3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:18:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a1d8c6a3a8cf66d49859a2bb9da41a9f3b427fc1ffeb9544cc9dd2f2d985d18-merged.mount: Deactivated successfully.
Nov 25 17:18:21 compute-0 nova_compute[254092]: 2025-11-25 17:18:21.222 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:21 compute-0 podman[412342]: 2025-11-25 17:18:21.233831638 +0000 UTC m=+0.248592927 container remove 444e11c0e23d7009a63b84e3c41e77f901ec825b18c68c679256f9f4d5176a3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:18:21 compute-0 systemd[1]: libpod-conmon-444e11c0e23d7009a63b84e3c41e77f901ec825b18c68c679256f9f4d5176a3f.scope: Deactivated successfully.
Nov 25 17:18:21 compute-0 podman[412380]: 2025-11-25 17:18:21.493869266 +0000 UTC m=+0.058130903 container create 18222d45f6b58495a561fea76c0093ec460d23c03e000fe5f3660daf2e2dd521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_heisenberg, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:18:21 compute-0 systemd[1]: Started libpod-conmon-18222d45f6b58495a561fea76c0093ec460d23c03e000fe5f3660daf2e2dd521.scope.
Nov 25 17:18:21 compute-0 podman[412380]: 2025-11-25 17:18:21.470446709 +0000 UTC m=+0.034708366 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:18:21 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:18:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/974a6f9ea06e9ec6181443a5dccf6b284c02d53d9f036d7199b78d486e5626ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:18:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/974a6f9ea06e9ec6181443a5dccf6b284c02d53d9f036d7199b78d486e5626ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:18:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/974a6f9ea06e9ec6181443a5dccf6b284c02d53d9f036d7199b78d486e5626ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:18:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/974a6f9ea06e9ec6181443a5dccf6b284c02d53d9f036d7199b78d486e5626ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:18:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/974a6f9ea06e9ec6181443a5dccf6b284c02d53d9f036d7199b78d486e5626ee/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:18:21 compute-0 podman[412380]: 2025-11-25 17:18:21.614715936 +0000 UTC m=+0.178977613 container init 18222d45f6b58495a561fea76c0093ec460d23c03e000fe5f3660daf2e2dd521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 17:18:21 compute-0 podman[412380]: 2025-11-25 17:18:21.62184483 +0000 UTC m=+0.186106467 container start 18222d45f6b58495a561fea76c0093ec460d23c03e000fe5f3660daf2e2dd521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_heisenberg, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 17:18:21 compute-0 podman[412380]: 2025-11-25 17:18:21.625074568 +0000 UTC m=+0.189336225 container attach 18222d45f6b58495a561fea76c0093ec460d23c03e000fe5f3660daf2e2dd521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 17:18:22 compute-0 ceph-mon[74985]: pgmap v2844: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.4 MiB/s wr, 70 op/s
Nov 25 17:18:22 compute-0 epic_heisenberg[412396]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:18:22 compute-0 epic_heisenberg[412396]: --> relative data size: 1.0
Nov 25 17:18:22 compute-0 epic_heisenberg[412396]: --> All data devices are unavailable
Nov 25 17:18:22 compute-0 systemd[1]: libpod-18222d45f6b58495a561fea76c0093ec460d23c03e000fe5f3660daf2e2dd521.scope: Deactivated successfully.
Nov 25 17:18:22 compute-0 systemd[1]: libpod-18222d45f6b58495a561fea76c0093ec460d23c03e000fe5f3660daf2e2dd521.scope: Consumed 1.134s CPU time.
Nov 25 17:18:22 compute-0 podman[412380]: 2025-11-25 17:18:22.8147506 +0000 UTC m=+1.379012297 container died 18222d45f6b58495a561fea76c0093ec460d23c03e000fe5f3660daf2e2dd521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_heisenberg, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 17:18:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-974a6f9ea06e9ec6181443a5dccf6b284c02d53d9f036d7199b78d486e5626ee-merged.mount: Deactivated successfully.
Nov 25 17:18:22 compute-0 podman[412380]: 2025-11-25 17:18:22.908093611 +0000 UTC m=+1.472355268 container remove 18222d45f6b58495a561fea76c0093ec460d23c03e000fe5f3660daf2e2dd521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_heisenberg, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 17:18:22 compute-0 systemd[1]: libpod-conmon-18222d45f6b58495a561fea76c0093ec460d23c03e000fe5f3660daf2e2dd521.scope: Deactivated successfully.
Nov 25 17:18:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2845: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 355 KiB/s wr, 74 op/s
Nov 25 17:18:22 compute-0 sudo[412276]: pam_unix(sudo:session): session closed for user root
Nov 25 17:18:23 compute-0 sudo[412436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:18:23 compute-0 sudo[412436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:18:23 compute-0 sudo[412436]: pam_unix(sudo:session): session closed for user root
Nov 25 17:18:23 compute-0 sudo[412461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:18:23 compute-0 sudo[412461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:18:23 compute-0 sudo[412461]: pam_unix(sudo:session): session closed for user root
Nov 25 17:18:23 compute-0 sudo[412486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:18:23 compute-0 sudo[412486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:18:23 compute-0 sudo[412486]: pam_unix(sudo:session): session closed for user root
Nov 25 17:18:23 compute-0 sudo[412511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:18:23 compute-0 sudo[412511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:18:23 compute-0 podman[412576]: 2025-11-25 17:18:23.825139412 +0000 UTC m=+0.052535111 container create 1f14765a5cb76ee788551f6ea2072dd5d3addd3813c5f246805f039f12545097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:18:23 compute-0 systemd[1]: Started libpod-conmon-1f14765a5cb76ee788551f6ea2072dd5d3addd3813c5f246805f039f12545097.scope.
Nov 25 17:18:23 compute-0 podman[412576]: 2025-11-25 17:18:23.801614862 +0000 UTC m=+0.029010571 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:18:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:18:23 compute-0 podman[412576]: 2025-11-25 17:18:23.922205724 +0000 UTC m=+0.149601473 container init 1f14765a5cb76ee788551f6ea2072dd5d3addd3813c5f246805f039f12545097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 17:18:23 compute-0 podman[412576]: 2025-11-25 17:18:23.934146159 +0000 UTC m=+0.161541858 container start 1f14765a5cb76ee788551f6ea2072dd5d3addd3813c5f246805f039f12545097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:18:23 compute-0 podman[412576]: 2025-11-25 17:18:23.938253971 +0000 UTC m=+0.165649680 container attach 1f14765a5cb76ee788551f6ea2072dd5d3addd3813c5f246805f039f12545097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_shockley, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 17:18:23 compute-0 lucid_shockley[412592]: 167 167
Nov 25 17:18:23 compute-0 systemd[1]: libpod-1f14765a5cb76ee788551f6ea2072dd5d3addd3813c5f246805f039f12545097.scope: Deactivated successfully.
Nov 25 17:18:23 compute-0 conmon[412592]: conmon 1f14765a5cb76ee78855 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1f14765a5cb76ee788551f6ea2072dd5d3addd3813c5f246805f039f12545097.scope/container/memory.events
Nov 25 17:18:23 compute-0 podman[412576]: 2025-11-25 17:18:23.948736606 +0000 UTC m=+0.176132295 container died 1f14765a5cb76ee788551f6ea2072dd5d3addd3813c5f246805f039f12545097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Nov 25 17:18:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-62cf843cb59d7560c11422677473c5324720294182d3884d89ed2b597286396a-merged.mount: Deactivated successfully.
Nov 25 17:18:24 compute-0 podman[412576]: 2025-11-25 17:18:24.011479284 +0000 UTC m=+0.238875023 container remove 1f14765a5cb76ee788551f6ea2072dd5d3addd3813c5f246805f039f12545097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_shockley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 17:18:24 compute-0 systemd[1]: libpod-conmon-1f14765a5cb76ee788551f6ea2072dd5d3addd3813c5f246805f039f12545097.scope: Deactivated successfully.
Nov 25 17:18:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:18:24 compute-0 podman[412615]: 2025-11-25 17:18:24.282997155 +0000 UTC m=+0.069771911 container create a9db78996bd4c8db0616ef4a3c8293cab12888fe6a93a43acfe070aee90906bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bardeen, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:18:24 compute-0 systemd[1]: Started libpod-conmon-a9db78996bd4c8db0616ef4a3c8293cab12888fe6a93a43acfe070aee90906bc.scope.
Nov 25 17:18:24 compute-0 podman[412615]: 2025-11-25 17:18:24.251841567 +0000 UTC m=+0.038616383 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:18:24 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:18:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a8615d07262c2b58f73180b9cceb5256247cf86db3b94dd6f41608234ce62b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:18:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a8615d07262c2b58f73180b9cceb5256247cf86db3b94dd6f41608234ce62b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:18:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a8615d07262c2b58f73180b9cceb5256247cf86db3b94dd6f41608234ce62b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:18:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a8615d07262c2b58f73180b9cceb5256247cf86db3b94dd6f41608234ce62b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:18:24 compute-0 podman[412615]: 2025-11-25 17:18:24.405029066 +0000 UTC m=+0.191803832 container init a9db78996bd4c8db0616ef4a3c8293cab12888fe6a93a43acfe070aee90906bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 17:18:24 compute-0 podman[412615]: 2025-11-25 17:18:24.413706982 +0000 UTC m=+0.200481728 container start a9db78996bd4c8db0616ef4a3c8293cab12888fe6a93a43acfe070aee90906bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:18:24 compute-0 podman[412615]: 2025-11-25 17:18:24.417856205 +0000 UTC m=+0.204630941 container attach a9db78996bd4c8db0616ef4a3c8293cab12888fe6a93a43acfe070aee90906bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True)
Nov 25 17:18:24 compute-0 ceph-mon[74985]: pgmap v2845: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 355 KiB/s wr, 74 op/s
Nov 25 17:18:24 compute-0 nova_compute[254092]: 2025-11-25 17:18:24.561 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2846: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Nov 25 17:18:24 compute-0 nova_compute[254092]: 2025-11-25 17:18:24.968 254096 DEBUG nova.compute.manager [req-f38a8432-571e-440d-88a9-8626ffe7e550 req-8ef1da12-2306-4c72-95e5-c6fc10620b9b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Received event network-changed-81fef3aa-29c9-47a1-8cba-c758c43f8e45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:18:24 compute-0 nova_compute[254092]: 2025-11-25 17:18:24.970 254096 DEBUG nova.compute.manager [req-f38a8432-571e-440d-88a9-8626ffe7e550 req-8ef1da12-2306-4c72-95e5-c6fc10620b9b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Refreshing instance network info cache due to event network-changed-81fef3aa-29c9-47a1-8cba-c758c43f8e45. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:18:24 compute-0 nova_compute[254092]: 2025-11-25 17:18:24.970 254096 DEBUG oslo_concurrency.lockutils [req-f38a8432-571e-440d-88a9-8626ffe7e550 req-8ef1da12-2306-4c72-95e5-c6fc10620b9b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-1ce224dc-5e5e-4105-bc00-9953c57babd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:18:24 compute-0 nova_compute[254092]: 2025-11-25 17:18:24.970 254096 DEBUG oslo_concurrency.lockutils [req-f38a8432-571e-440d-88a9-8626ffe7e550 req-8ef1da12-2306-4c72-95e5-c6fc10620b9b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-1ce224dc-5e5e-4105-bc00-9953c57babd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:18:24 compute-0 nova_compute[254092]: 2025-11-25 17:18:24.970 254096 DEBUG nova.network.neutron [req-f38a8432-571e-440d-88a9-8626ffe7e550 req-8ef1da12-2306-4c72-95e5-c6fc10620b9b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Refreshing network info cache for port 81fef3aa-29c9-47a1-8cba-c758c43f8e45 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:18:25 compute-0 eager_bardeen[412630]: {
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:     "0": [
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:         {
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:             "devices": [
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:                 "/dev/loop3"
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:             ],
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:             "lv_name": "ceph_lv0",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:             "lv_size": "21470642176",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:             "name": "ceph_lv0",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:             "tags": {
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:                 "ceph.cluster_name": "ceph",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:                 "ceph.crush_device_class": "",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:                 "ceph.encrypted": "0",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:                 "ceph.osd_id": "0",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:                 "ceph.type": "block",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:                 "ceph.vdo": "0"
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:             },
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:             "type": "block",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:             "vg_name": "ceph_vg0"
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:         }
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:     ],
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:     "1": [
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:         {
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:             "devices": [
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:                 "/dev/loop4"
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:             ],
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:             "lv_name": "ceph_lv1",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:             "lv_size": "21470642176",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:             "name": "ceph_lv1",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:             "tags": {
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:                 "ceph.cluster_name": "ceph",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:                 "ceph.crush_device_class": "",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:                 "ceph.encrypted": "0",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:                 "ceph.osd_id": "1",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:                 "ceph.type": "block",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:                 "ceph.vdo": "0"
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:             },
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:             "type": "block",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:             "vg_name": "ceph_vg1"
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:         }
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:     ],
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:     "2": [
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:         {
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:             "devices": [
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:                 "/dev/loop5"
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:             ],
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:             "lv_name": "ceph_lv2",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:             "lv_size": "21470642176",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:             "name": "ceph_lv2",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:             "tags": {
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:                 "ceph.cluster_name": "ceph",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:                 "ceph.crush_device_class": "",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:                 "ceph.encrypted": "0",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:                 "ceph.osd_id": "2",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:                 "ceph.type": "block",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:                 "ceph.vdo": "0"
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:             },
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:             "type": "block",
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:             "vg_name": "ceph_vg2"
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:         }
Nov 25 17:18:25 compute-0 eager_bardeen[412630]:     ]
Nov 25 17:18:25 compute-0 eager_bardeen[412630]: }
Nov 25 17:18:25 compute-0 systemd[1]: libpod-a9db78996bd4c8db0616ef4a3c8293cab12888fe6a93a43acfe070aee90906bc.scope: Deactivated successfully.
Nov 25 17:18:25 compute-0 podman[412615]: 2025-11-25 17:18:25.24465938 +0000 UTC m=+1.031434156 container died a9db78996bd4c8db0616ef4a3c8293cab12888fe6a93a43acfe070aee90906bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bardeen, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:18:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a8615d07262c2b58f73180b9cceb5256247cf86db3b94dd6f41608234ce62b8-merged.mount: Deactivated successfully.
Nov 25 17:18:25 compute-0 podman[412615]: 2025-11-25 17:18:25.325607654 +0000 UTC m=+1.112382380 container remove a9db78996bd4c8db0616ef4a3c8293cab12888fe6a93a43acfe070aee90906bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:18:25 compute-0 systemd[1]: libpod-conmon-a9db78996bd4c8db0616ef4a3c8293cab12888fe6a93a43acfe070aee90906bc.scope: Deactivated successfully.
Nov 25 17:18:25 compute-0 sudo[412511]: pam_unix(sudo:session): session closed for user root
Nov 25 17:18:25 compute-0 sudo[412652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:18:25 compute-0 sudo[412652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:18:25 compute-0 sudo[412652]: pam_unix(sudo:session): session closed for user root
Nov 25 17:18:25 compute-0 ceph-mon[74985]: pgmap v2846: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Nov 25 17:18:25 compute-0 sudo[412677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:18:25 compute-0 sudo[412677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:18:25 compute-0 sudo[412677]: pam_unix(sudo:session): session closed for user root
Nov 25 17:18:25 compute-0 sudo[412702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:18:25 compute-0 sudo[412702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:18:25 compute-0 sudo[412702]: pam_unix(sudo:session): session closed for user root
Nov 25 17:18:25 compute-0 sudo[412727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:18:25 compute-0 sudo[412727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:18:26 compute-0 podman[412796]: 2025-11-25 17:18:26.157678352 +0000 UTC m=+0.043925897 container create 3004a409824c5eb74205e243031f22a17c874bd55d30c1b68ffd2bb83a1b5b2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:18:26 compute-0 systemd[1]: Started libpod-conmon-3004a409824c5eb74205e243031f22a17c874bd55d30c1b68ffd2bb83a1b5b2e.scope.
Nov 25 17:18:26 compute-0 nova_compute[254092]: 2025-11-25 17:18:26.225 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:26 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:18:26 compute-0 podman[412796]: 2025-11-25 17:18:26.140160955 +0000 UTC m=+0.026408520 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:18:26 compute-0 podman[412796]: 2025-11-25 17:18:26.249331797 +0000 UTC m=+0.135579472 container init 3004a409824c5eb74205e243031f22a17c874bd55d30c1b68ffd2bb83a1b5b2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_joliot, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:18:26 compute-0 podman[412796]: 2025-11-25 17:18:26.257073397 +0000 UTC m=+0.143320942 container start 3004a409824c5eb74205e243031f22a17c874bd55d30c1b68ffd2bb83a1b5b2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:18:26 compute-0 podman[412796]: 2025-11-25 17:18:26.260506911 +0000 UTC m=+0.146754466 container attach 3004a409824c5eb74205e243031f22a17c874bd55d30c1b68ffd2bb83a1b5b2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_joliot, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:18:26 compute-0 sweet_joliot[412814]: 167 167
Nov 25 17:18:26 compute-0 systemd[1]: libpod-3004a409824c5eb74205e243031f22a17c874bd55d30c1b68ffd2bb83a1b5b2e.scope: Deactivated successfully.
Nov 25 17:18:26 compute-0 podman[412796]: 2025-11-25 17:18:26.265234499 +0000 UTC m=+0.151482054 container died 3004a409824c5eb74205e243031f22a17c874bd55d30c1b68ffd2bb83a1b5b2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_joliot, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:18:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-829113fe87ec38ebdbd82d8f8d3dc7dd8606cdd333336d5633ff0c5aca92f2c8-merged.mount: Deactivated successfully.
Nov 25 17:18:26 compute-0 podman[412796]: 2025-11-25 17:18:26.312858135 +0000 UTC m=+0.199105670 container remove 3004a409824c5eb74205e243031f22a17c874bd55d30c1b68ffd2bb83a1b5b2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_joliot, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:18:26 compute-0 systemd[1]: libpod-conmon-3004a409824c5eb74205e243031f22a17c874bd55d30c1b68ffd2bb83a1b5b2e.scope: Deactivated successfully.
Nov 25 17:18:26 compute-0 podman[412837]: 2025-11-25 17:18:26.531927379 +0000 UTC m=+0.046620480 container create 161e2a484f3e32a3e4a7c66058def635cd2df655073bdd187c42b6f8eeaba6da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_goodall, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:18:26 compute-0 systemd[1]: Started libpod-conmon-161e2a484f3e32a3e4a7c66058def635cd2df655073bdd187c42b6f8eeaba6da.scope.
Nov 25 17:18:26 compute-0 podman[412837]: 2025-11-25 17:18:26.514200286 +0000 UTC m=+0.028893397 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:18:26 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:18:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c1f7e97a5447b3d85ab2ae6676fba57de14baed3a1d29f317382bf9689c0804/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:18:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c1f7e97a5447b3d85ab2ae6676fba57de14baed3a1d29f317382bf9689c0804/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:18:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c1f7e97a5447b3d85ab2ae6676fba57de14baed3a1d29f317382bf9689c0804/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:18:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c1f7e97a5447b3d85ab2ae6676fba57de14baed3a1d29f317382bf9689c0804/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:18:26 compute-0 podman[412837]: 2025-11-25 17:18:26.644314818 +0000 UTC m=+0.159008019 container init 161e2a484f3e32a3e4a7c66058def635cd2df655073bdd187c42b6f8eeaba6da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_goodall, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 17:18:26 compute-0 podman[412837]: 2025-11-25 17:18:26.657908238 +0000 UTC m=+0.172601329 container start 161e2a484f3e32a3e4a7c66058def635cd2df655073bdd187c42b6f8eeaba6da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:18:26 compute-0 podman[412837]: 2025-11-25 17:18:26.662600705 +0000 UTC m=+0.177293906 container attach 161e2a484f3e32a3e4a7c66058def635cd2df655073bdd187c42b6f8eeaba6da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_goodall, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 17:18:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2847: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 74 op/s
Nov 25 17:18:27 compute-0 nova_compute[254092]: 2025-11-25 17:18:27.032 254096 DEBUG nova.network.neutron [req-f38a8432-571e-440d-88a9-8626ffe7e550 req-8ef1da12-2306-4c72-95e5-c6fc10620b9b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Updated VIF entry in instance network info cache for port 81fef3aa-29c9-47a1-8cba-c758c43f8e45. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:18:27 compute-0 nova_compute[254092]: 2025-11-25 17:18:27.035 254096 DEBUG nova.network.neutron [req-f38a8432-571e-440d-88a9-8626ffe7e550 req-8ef1da12-2306-4c72-95e5-c6fc10620b9b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Updating instance_info_cache with network_info: [{"id": "81fef3aa-29c9-47a1-8cba-c758c43f8e45", "address": "fa:16:3e:68:79:bf", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81fef3aa-29", "ovs_interfaceid": "81fef3aa-29c9-47a1-8cba-c758c43f8e45", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "479e8c0a-f171-45a0-b7de-778cf1b728bb", "address": "fa:16:3e:52:27:71", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:2771", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe52:2771", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479e8c0a-f1", "ovs_interfaceid": "479e8c0a-f171-45a0-b7de-778cf1b728bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:18:27 compute-0 nova_compute[254092]: 2025-11-25 17:18:27.053 254096 DEBUG oslo_concurrency.lockutils [req-f38a8432-571e-440d-88a9-8626ffe7e550 req-8ef1da12-2306-4c72-95e5-c6fc10620b9b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-1ce224dc-5e5e-4105-bc00-9953c57babd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:18:27 compute-0 wonderful_goodall[412854]: {
Nov 25 17:18:27 compute-0 wonderful_goodall[412854]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:18:27 compute-0 wonderful_goodall[412854]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:18:27 compute-0 wonderful_goodall[412854]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:18:27 compute-0 wonderful_goodall[412854]:         "osd_id": 1,
Nov 25 17:18:27 compute-0 wonderful_goodall[412854]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:18:27 compute-0 wonderful_goodall[412854]:         "type": "bluestore"
Nov 25 17:18:27 compute-0 wonderful_goodall[412854]:     },
Nov 25 17:18:27 compute-0 wonderful_goodall[412854]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:18:27 compute-0 wonderful_goodall[412854]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:18:27 compute-0 wonderful_goodall[412854]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:18:27 compute-0 wonderful_goodall[412854]:         "osd_id": 2,
Nov 25 17:18:27 compute-0 wonderful_goodall[412854]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:18:27 compute-0 wonderful_goodall[412854]:         "type": "bluestore"
Nov 25 17:18:27 compute-0 wonderful_goodall[412854]:     },
Nov 25 17:18:27 compute-0 wonderful_goodall[412854]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:18:27 compute-0 wonderful_goodall[412854]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:18:27 compute-0 wonderful_goodall[412854]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:18:27 compute-0 wonderful_goodall[412854]:         "osd_id": 0,
Nov 25 17:18:27 compute-0 wonderful_goodall[412854]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:18:27 compute-0 wonderful_goodall[412854]:         "type": "bluestore"
Nov 25 17:18:27 compute-0 wonderful_goodall[412854]:     }
Nov 25 17:18:27 compute-0 wonderful_goodall[412854]: }
Nov 25 17:18:27 compute-0 systemd[1]: libpod-161e2a484f3e32a3e4a7c66058def635cd2df655073bdd187c42b6f8eeaba6da.scope: Deactivated successfully.
Nov 25 17:18:27 compute-0 systemd[1]: libpod-161e2a484f3e32a3e4a7c66058def635cd2df655073bdd187c42b6f8eeaba6da.scope: Consumed 1.167s CPU time.
Nov 25 17:18:27 compute-0 podman[412887]: 2025-11-25 17:18:27.885701507 +0000 UTC m=+0.030122231 container died 161e2a484f3e32a3e4a7c66058def635cd2df655073bdd187c42b6f8eeaba6da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_goodall, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 17:18:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2848: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 25 17:18:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:18:29 compute-0 ceph-mon[74985]: pgmap v2847: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 74 op/s
Nov 25 17:18:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c1f7e97a5447b3d85ab2ae6676fba57de14baed3a1d29f317382bf9689c0804-merged.mount: Deactivated successfully.
Nov 25 17:18:29 compute-0 podman[412887]: 2025-11-25 17:18:29.507598804 +0000 UTC m=+1.652019468 container remove 161e2a484f3e32a3e4a7c66058def635cd2df655073bdd187c42b6f8eeaba6da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 17:18:29 compute-0 systemd[1]: libpod-conmon-161e2a484f3e32a3e4a7c66058def635cd2df655073bdd187c42b6f8eeaba6da.scope: Deactivated successfully.
Nov 25 17:18:29 compute-0 nova_compute[254092]: 2025-11-25 17:18:29.565 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:29 compute-0 sudo[412727]: pam_unix(sudo:session): session closed for user root
Nov 25 17:18:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:18:29 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:18:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:18:29 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:18:29 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev e2635490-f48b-4fad-9c41-380c8377f53d does not exist
Nov 25 17:18:29 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 53858401-390e-4b65-9bf2-56d2dfabff16 does not exist
Nov 25 17:18:29 compute-0 sudo[412902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:18:29 compute-0 sudo[412902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:18:29 compute-0 sudo[412902]: pam_unix(sudo:session): session closed for user root
Nov 25 17:18:29 compute-0 sudo[412927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:18:29 compute-0 sudo[412927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:18:29 compute-0 sudo[412927]: pam_unix(sudo:session): session closed for user root
Nov 25 17:18:30 compute-0 ceph-mon[74985]: pgmap v2848: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 25 17:18:30 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:18:30 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:18:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2849: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 25 17:18:31 compute-0 nova_compute[254092]: 2025-11-25 17:18:31.228 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:32 compute-0 ceph-mon[74985]: pgmap v2849: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 25 17:18:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2850: 321 pgs: 321 active+clean; 170 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 756 KiB/s rd, 98 KiB/s wr, 30 op/s
Nov 25 17:18:33 compute-0 ovn_controller[153477]: 2025-11-25T17:18:33Z|00180|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:68:79:bf 10.100.0.6
Nov 25 17:18:33 compute-0 ovn_controller[153477]: 2025-11-25T17:18:33Z|00181|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:68:79:bf 10.100.0.6
Nov 25 17:18:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:18:34 compute-0 ceph-mon[74985]: pgmap v2850: 321 pgs: 321 active+clean; 170 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 756 KiB/s rd, 98 KiB/s wr, 30 op/s
Nov 25 17:18:34 compute-0 nova_compute[254092]: 2025-11-25 17:18:34.567 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2851: 321 pgs: 321 active+clean; 170 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 98 KiB/s wr, 0 op/s
Nov 25 17:18:36 compute-0 nova_compute[254092]: 2025-11-25 17:18:36.231 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:36 compute-0 ceph-mon[74985]: pgmap v2851: 321 pgs: 321 active+clean; 170 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 98 KiB/s wr, 0 op/s
Nov 25 17:18:36 compute-0 podman[412953]: 2025-11-25 17:18:36.675769335 +0000 UTC m=+0.079884946 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 25 17:18:36 compute-0 podman[412952]: 2025-11-25 17:18:36.707975401 +0000 UTC m=+0.113569531 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd)
Nov 25 17:18:36 compute-0 podman[412954]: 2025-11-25 17:18:36.740793835 +0000 UTC m=+0.133752752 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 25 17:18:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2852: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:18:38 compute-0 ceph-mon[74985]: pgmap v2852: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:18:38 compute-0 nova_compute[254092]: 2025-11-25 17:18:38.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:18:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2853: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 25 17:18:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:18:39 compute-0 nova_compute[254092]: 2025-11-25 17:18:39.570 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:18:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:18:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:18:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:18:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:18:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:18:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:18:40
Nov 25 17:18:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:18:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:18:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', '.rgw.root', 'backups', 'default.rgw.meta', 'images', 'default.rgw.log', 'default.rgw.control', 'volumes', 'vms', 'cephfs.cephfs.data']
Nov 25 17:18:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:18:40 compute-0 ceph-mon[74985]: pgmap v2853: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 25 17:18:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:18:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:18:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:18:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:18:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:18:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:18:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:18:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:18:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:18:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:18:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2854: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:18:41 compute-0 nova_compute[254092]: 2025-11-25 17:18:41.236 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:41 compute-0 nova_compute[254092]: 2025-11-25 17:18:41.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:18:42 compute-0 ceph-mon[74985]: pgmap v2854: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:18:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2855: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:18:43 compute-0 ceph-mon[74985]: pgmap v2855: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:18:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:18:44 compute-0 nova_compute[254092]: 2025-11-25 17:18:44.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:18:44 compute-0 nova_compute[254092]: 2025-11-25 17:18:44.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:18:44 compute-0 nova_compute[254092]: 2025-11-25 17:18:44.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:18:44 compute-0 nova_compute[254092]: 2025-11-25 17:18:44.573 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:44.716 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=57, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=56) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:18:44 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:44.718 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 17:18:44 compute-0 nova_compute[254092]: 2025-11-25 17:18:44.717 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2856: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.0 MiB/s wr, 62 op/s
Nov 25 17:18:44 compute-0 nova_compute[254092]: 2025-11-25 17:18:44.953 254096 DEBUG nova.compute.manager [req-462575cf-fdcb-4aa9-88c6-1c445c8a3e74 req-ac8b00e0-b7a8-46a9-a216-b875b68c2fa4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Received event network-changed-81fef3aa-29c9-47a1-8cba-c758c43f8e45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:18:44 compute-0 nova_compute[254092]: 2025-11-25 17:18:44.954 254096 DEBUG nova.compute.manager [req-462575cf-fdcb-4aa9-88c6-1c445c8a3e74 req-ac8b00e0-b7a8-46a9-a216-b875b68c2fa4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Refreshing instance network info cache due to event network-changed-81fef3aa-29c9-47a1-8cba-c758c43f8e45. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:18:44 compute-0 nova_compute[254092]: 2025-11-25 17:18:44.954 254096 DEBUG oslo_concurrency.lockutils [req-462575cf-fdcb-4aa9-88c6-1c445c8a3e74 req-ac8b00e0-b7a8-46a9-a216-b875b68c2fa4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-1ce224dc-5e5e-4105-bc00-9953c57babd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:18:44 compute-0 nova_compute[254092]: 2025-11-25 17:18:44.954 254096 DEBUG oslo_concurrency.lockutils [req-462575cf-fdcb-4aa9-88c6-1c445c8a3e74 req-ac8b00e0-b7a8-46a9-a216-b875b68c2fa4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-1ce224dc-5e5e-4105-bc00-9953c57babd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:18:44 compute-0 nova_compute[254092]: 2025-11-25 17:18:44.955 254096 DEBUG nova.network.neutron [req-462575cf-fdcb-4aa9-88c6-1c445c8a3e74 req-ac8b00e0-b7a8-46a9-a216-b875b68c2fa4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Refreshing network info cache for port 81fef3aa-29c9-47a1-8cba-c758c43f8e45 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:18:45 compute-0 nova_compute[254092]: 2025-11-25 17:18:45.011 254096 DEBUG oslo_concurrency.lockutils [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "1ce224dc-5e5e-4105-bc00-9953c57babd7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:18:45 compute-0 nova_compute[254092]: 2025-11-25 17:18:45.012 254096 DEBUG oslo_concurrency.lockutils [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:18:45 compute-0 nova_compute[254092]: 2025-11-25 17:18:45.012 254096 DEBUG oslo_concurrency.lockutils [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:18:45 compute-0 nova_compute[254092]: 2025-11-25 17:18:45.013 254096 DEBUG oslo_concurrency.lockutils [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:18:45 compute-0 nova_compute[254092]: 2025-11-25 17:18:45.013 254096 DEBUG oslo_concurrency.lockutils [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:18:45 compute-0 nova_compute[254092]: 2025-11-25 17:18:45.015 254096 INFO nova.compute.manager [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Terminating instance
Nov 25 17:18:45 compute-0 nova_compute[254092]: 2025-11-25 17:18:45.017 254096 DEBUG nova.compute.manager [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 17:18:45 compute-0 kernel: tap81fef3aa-29 (unregistering): left promiscuous mode
Nov 25 17:18:45 compute-0 NetworkManager[48891]: <info>  [1764091125.0799] device (tap81fef3aa-29): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:18:45 compute-0 nova_compute[254092]: 2025-11-25 17:18:45.086 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:45 compute-0 ovn_controller[153477]: 2025-11-25T17:18:45Z|01519|binding|INFO|Releasing lport 81fef3aa-29c9-47a1-8cba-c758c43f8e45 from this chassis (sb_readonly=0)
Nov 25 17:18:45 compute-0 ovn_controller[153477]: 2025-11-25T17:18:45Z|01520|binding|INFO|Setting lport 81fef3aa-29c9-47a1-8cba-c758c43f8e45 down in Southbound
Nov 25 17:18:45 compute-0 ovn_controller[153477]: 2025-11-25T17:18:45Z|01521|binding|INFO|Removing iface tap81fef3aa-29 ovn-installed in OVS
Nov 25 17:18:45 compute-0 nova_compute[254092]: 2025-11-25 17:18:45.089 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.097 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:79:bf 10.100.0.6'], port_security=['fa:16:3e:68:79:bf 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '1ce224dc-5e5e-4105-bc00-9953c57babd7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a702335-301a-4b90-b82e-e616a31e5b3e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd7810d87-4bfa-4d99-b3b6-de28ef8547f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0702eb11-7151-4340-a4d9-f0fbc28cb176, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=81fef3aa-29c9-47a1-8cba-c758c43f8e45) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:18:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.099 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 81fef3aa-29c9-47a1-8cba-c758c43f8e45 in datapath 4a702335-301a-4b90-b82e-e616a31e5b3e unbound from our chassis
Nov 25 17:18:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.101 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4a702335-301a-4b90-b82e-e616a31e5b3e
Nov 25 17:18:45 compute-0 nova_compute[254092]: 2025-11-25 17:18:45.102 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:45 compute-0 kernel: tap479e8c0a-f1 (unregistering): left promiscuous mode
Nov 25 17:18:45 compute-0 NetworkManager[48891]: <info>  [1764091125.1176] device (tap479e8c0a-f1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:18:45 compute-0 ovn_controller[153477]: 2025-11-25T17:18:45Z|01522|binding|INFO|Releasing lport 479e8c0a-f171-45a0-b7de-778cf1b728bb from this chassis (sb_readonly=0)
Nov 25 17:18:45 compute-0 ovn_controller[153477]: 2025-11-25T17:18:45Z|01523|binding|INFO|Setting lport 479e8c0a-f171-45a0-b7de-778cf1b728bb down in Southbound
Nov 25 17:18:45 compute-0 ovn_controller[153477]: 2025-11-25T17:18:45Z|01524|binding|INFO|Removing iface tap479e8c0a-f1 ovn-installed in OVS
Nov 25 17:18:45 compute-0 nova_compute[254092]: 2025-11-25 17:18:45.131 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.133 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6170a23a-023d-49e1-acc0-506a72352fca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:18:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.137 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:27:71 2001:db8:0:1:f816:3eff:fe52:2771 2001:db8::f816:3eff:fe52:2771'], port_security=['fa:16:3e:52:27:71 2001:db8:0:1:f816:3eff:fe52:2771 2001:db8::f816:3eff:fe52:2771'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe52:2771/64 2001:db8::f816:3eff:fe52:2771/64', 'neutron:device_id': '1ce224dc-5e5e-4105-bc00-9953c57babd7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-51f47401-ed2b-45e2-aea1-5cbbd48e5245', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd7810d87-4bfa-4d99-b3b6-de28ef8547f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=945443dd-5eab-4f86-892f-c2db25b0d0db, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=479e8c0a-f171-45a0-b7de-778cf1b728bb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:18:45 compute-0 nova_compute[254092]: 2025-11-25 17:18:45.159 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:45 compute-0 systemd[1]: machine-qemu\x2d176\x2dinstance\x2d0000008e.scope: Deactivated successfully.
Nov 25 17:18:45 compute-0 systemd[1]: machine-qemu\x2d176\x2dinstance\x2d0000008e.scope: Consumed 15.841s CPU time.
Nov 25 17:18:45 compute-0 systemd-machined[216343]: Machine qemu-176-instance-0000008e terminated.
Nov 25 17:18:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.189 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e1e4fa9d-cf84-4adc-8200-78b0b8577bd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:18:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.195 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8bb2f1a4-11a5-4976-8017-f2329438a8ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:18:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.235 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[ff1711ad-950f-4a8c-93cd-1cf8829fc7d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:18:45 compute-0 NetworkManager[48891]: <info>  [1764091125.2626] manager: (tap479e8c0a-f1): new Tun device (/org/freedesktop/NetworkManager/Devices/629)
Nov 25 17:18:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.265 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bbb96257-6afc-4ea5-b7df-04e8ae3ee972]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4a702335-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:4b:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 437], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 751460, 'reachable_time': 25856, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 413033, 'error': None, 'target': 'ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:18:45 compute-0 nova_compute[254092]: 2025-11-25 17:18:45.285 254096 INFO nova.virt.libvirt.driver [-] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Instance destroyed successfully.
Nov 25 17:18:45 compute-0 nova_compute[254092]: 2025-11-25 17:18:45.286 254096 DEBUG nova.objects.instance [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'resources' on Instance uuid 1ce224dc-5e5e-4105-bc00-9953c57babd7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:18:45 compute-0 nova_compute[254092]: 2025-11-25 17:18:45.298 254096 DEBUG nova.virt.libvirt.vif [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:18:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-775791465',display_name='tempest-TestGettingAddress-server-775791465',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-775791465',id=142,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ2pcBrDGHvBVYZrwfG+0cOixMoEoWWyZwNBICyplvpWRfO1kJBjvH3LOKdm8vM5yC6I7xSnU6u9yudLUoAragOzDYER6osY0eledXMEDHVj1YkNbsjqQz6dDBKXdmH+YA==',key_name='tempest-TestGettingAddress-309096485',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:18:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-lhk13l9s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:18:17Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=1ce224dc-5e5e-4105-bc00-9953c57babd7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "81fef3aa-29c9-47a1-8cba-c758c43f8e45", "address": "fa:16:3e:68:79:bf", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81fef3aa-29", "ovs_interfaceid": "81fef3aa-29c9-47a1-8cba-c758c43f8e45", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:18:45 compute-0 nova_compute[254092]: 2025-11-25 17:18:45.298 254096 DEBUG nova.network.os_vif_util [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "81fef3aa-29c9-47a1-8cba-c758c43f8e45", "address": "fa:16:3e:68:79:bf", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81fef3aa-29", "ovs_interfaceid": "81fef3aa-29c9-47a1-8cba-c758c43f8e45", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:18:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.298 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9ff8fc0b-c2c0-41c6-bc32-739a562140ff]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4a702335-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 751474, 'tstamp': 751474}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 413048, 'error': None, 'target': 'ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4a702335-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 751477, 'tstamp': 751477}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 413048, 'error': None, 'target': 'ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:18:45 compute-0 nova_compute[254092]: 2025-11-25 17:18:45.299 254096 DEBUG nova.network.os_vif_util [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:68:79:bf,bridge_name='br-int',has_traffic_filtering=True,id=81fef3aa-29c9-47a1-8cba-c758c43f8e45,network=Network(4a702335-301a-4b90-b82e-e616a31e5b3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81fef3aa-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:18:45 compute-0 nova_compute[254092]: 2025-11-25 17:18:45.299 254096 DEBUG os_vif [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:68:79:bf,bridge_name='br-int',has_traffic_filtering=True,id=81fef3aa-29c9-47a1-8cba-c758c43f8e45,network=Network(4a702335-301a-4b90-b82e-e616a31e5b3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81fef3aa-29') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:18:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.300 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a702335-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:18:45 compute-0 nova_compute[254092]: 2025-11-25 17:18:45.301 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:45 compute-0 nova_compute[254092]: 2025-11-25 17:18:45.301 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81fef3aa-29, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:18:45 compute-0 nova_compute[254092]: 2025-11-25 17:18:45.302 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:45 compute-0 nova_compute[254092]: 2025-11-25 17:18:45.305 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:18:45 compute-0 nova_compute[254092]: 2025-11-25 17:18:45.310 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.311 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a702335-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:18:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.312 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:18:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.313 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4a702335-30, col_values=(('external_ids', {'iface-id': '1cef18fb-8ae8-44d1-93c6-659b405ed9b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:18:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.313 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:18:45 compute-0 nova_compute[254092]: 2025-11-25 17:18:45.315 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.315 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 479e8c0a-f171-45a0-b7de-778cf1b728bb in datapath 51f47401-ed2b-45e2-aea1-5cbbd48e5245 unbound from our chassis
Nov 25 17:18:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.317 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 51f47401-ed2b-45e2-aea1-5cbbd48e5245
Nov 25 17:18:45 compute-0 nova_compute[254092]: 2025-11-25 17:18:45.318 254096 INFO os_vif [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:68:79:bf,bridge_name='br-int',has_traffic_filtering=True,id=81fef3aa-29c9-47a1-8cba-c758c43f8e45,network=Network(4a702335-301a-4b90-b82e-e616a31e5b3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81fef3aa-29')
Nov 25 17:18:45 compute-0 nova_compute[254092]: 2025-11-25 17:18:45.319 254096 DEBUG nova.virt.libvirt.vif [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:18:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-775791465',display_name='tempest-TestGettingAddress-server-775791465',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-775791465',id=142,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ2pcBrDGHvBVYZrwfG+0cOixMoEoWWyZwNBICyplvpWRfO1kJBjvH3LOKdm8vM5yC6I7xSnU6u9yudLUoAragOzDYER6osY0eledXMEDHVj1YkNbsjqQz6dDBKXdmH+YA==',key_name='tempest-TestGettingAddress-309096485',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:18:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-lhk13l9s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:18:17Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=1ce224dc-5e5e-4105-bc00-9953c57babd7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "479e8c0a-f171-45a0-b7de-778cf1b728bb", "address": "fa:16:3e:52:27:71", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:2771", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe52:2771", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479e8c0a-f1", "ovs_interfaceid": "479e8c0a-f171-45a0-b7de-778cf1b728bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:18:45 compute-0 nova_compute[254092]: 2025-11-25 17:18:45.320 254096 DEBUG nova.network.os_vif_util [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "479e8c0a-f171-45a0-b7de-778cf1b728bb", "address": "fa:16:3e:52:27:71", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:2771", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe52:2771", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479e8c0a-f1", "ovs_interfaceid": "479e8c0a-f171-45a0-b7de-778cf1b728bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:18:45 compute-0 nova_compute[254092]: 2025-11-25 17:18:45.321 254096 DEBUG nova.network.os_vif_util [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:27:71,bridge_name='br-int',has_traffic_filtering=True,id=479e8c0a-f171-45a0-b7de-778cf1b728bb,network=Network(51f47401-ed2b-45e2-aea1-5cbbd48e5245),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap479e8c0a-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:18:45 compute-0 nova_compute[254092]: 2025-11-25 17:18:45.321 254096 DEBUG os_vif [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:27:71,bridge_name='br-int',has_traffic_filtering=True,id=479e8c0a-f171-45a0-b7de-778cf1b728bb,network=Network(51f47401-ed2b-45e2-aea1-5cbbd48e5245),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap479e8c0a-f1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:18:45 compute-0 nova_compute[254092]: 2025-11-25 17:18:45.323 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:45 compute-0 nova_compute[254092]: 2025-11-25 17:18:45.323 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap479e8c0a-f1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:18:45 compute-0 nova_compute[254092]: 2025-11-25 17:18:45.324 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:45 compute-0 nova_compute[254092]: 2025-11-25 17:18:45.326 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:45 compute-0 nova_compute[254092]: 2025-11-25 17:18:45.328 254096 INFO os_vif [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:27:71,bridge_name='br-int',has_traffic_filtering=True,id=479e8c0a-f171-45a0-b7de-778cf1b728bb,network=Network(51f47401-ed2b-45e2-aea1-5cbbd48e5245),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap479e8c0a-f1')
Nov 25 17:18:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.344 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[dd0861b0-829f-4306-822d-eadd94be66a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:18:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.385 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e64b4dbc-eedc-4739-b145-4a7becde26f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:18:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.391 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[2418bc78-92b3-4619-b1e1-4dc66e375a49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:18:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.448 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[0f16a947-d04c-483d-8c5d-718106d0a451]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:18:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.484 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[abf309d2-f420-4f42-8790-a88424d0732c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap51f47401-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:01:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 36, 'tx_packets': 5, 'rx_bytes': 3160, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 36, 'tx_packets': 5, 'rx_bytes': 3160, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 438], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 751557, 'reachable_time': 21561, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 36, 'inoctets': 2656, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 36, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2656, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 36, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 413078, 'error': None, 'target': 'ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:18:45 compute-0 nova_compute[254092]: 2025-11-25 17:18:45.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:18:45 compute-0 nova_compute[254092]: 2025-11-25 17:18:45.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:18:45 compute-0 nova_compute[254092]: 2025-11-25 17:18:45.525 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:18:45 compute-0 nova_compute[254092]: 2025-11-25 17:18:45.526 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:18:45 compute-0 nova_compute[254092]: 2025-11-25 17:18:45.526 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:18:45 compute-0 nova_compute[254092]: 2025-11-25 17:18:45.526 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:18:45 compute-0 nova_compute[254092]: 2025-11-25 17:18:45.527 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:18:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.526 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7daa8bfb-746d-4c71-b545-e63c105ffaa7]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap51f47401-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 751571, 'tstamp': 751571}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 413079, 'error': None, 'target': 'ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:18:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.530 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap51f47401-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:18:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.537 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap51f47401-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:18:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.538 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:18:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.539 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap51f47401-e0, col_values=(('external_ids', {'iface-id': '7624b22b-8369-4a12-940b-9f95890a4040'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:18:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:45.540 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:18:45 compute-0 nova_compute[254092]: 2025-11-25 17:18:45.597 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:45 compute-0 nova_compute[254092]: 2025-11-25 17:18:45.803 254096 INFO nova.virt.libvirt.driver [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Deleting instance files /var/lib/nova/instances/1ce224dc-5e5e-4105-bc00-9953c57babd7_del
Nov 25 17:18:45 compute-0 nova_compute[254092]: 2025-11-25 17:18:45.804 254096 INFO nova.virt.libvirt.driver [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Deletion of /var/lib/nova/instances/1ce224dc-5e5e-4105-bc00-9953c57babd7_del complete
Nov 25 17:18:45 compute-0 nova_compute[254092]: 2025-11-25 17:18:45.858 254096 INFO nova.compute.manager [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Took 0.84 seconds to destroy the instance on the hypervisor.
Nov 25 17:18:45 compute-0 nova_compute[254092]: 2025-11-25 17:18:45.859 254096 DEBUG oslo.service.loopingcall [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 17:18:45 compute-0 nova_compute[254092]: 2025-11-25 17:18:45.859 254096 DEBUG nova.compute.manager [-] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 17:18:45 compute-0 nova_compute[254092]: 2025-11-25 17:18:45.859 254096 DEBUG nova.network.neutron [-] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 17:18:46 compute-0 ceph-mon[74985]: pgmap v2856: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.0 MiB/s wr, 62 op/s
Nov 25 17:18:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:18:46 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2667142588' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:18:46 compute-0 nova_compute[254092]: 2025-11-25 17:18:46.085 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:18:46 compute-0 nova_compute[254092]: 2025-11-25 17:18:46.175 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:18:46 compute-0 nova_compute[254092]: 2025-11-25 17:18:46.175 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:18:46 compute-0 nova_compute[254092]: 2025-11-25 17:18:46.239 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:46 compute-0 nova_compute[254092]: 2025-11-25 17:18:46.431 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:18:46 compute-0 nova_compute[254092]: 2025-11-25 17:18:46.432 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3463MB free_disk=59.89716720581055GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:18:46 compute-0 nova_compute[254092]: 2025-11-25 17:18:46.432 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:18:46 compute-0 nova_compute[254092]: 2025-11-25 17:18:46.433 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:18:46 compute-0 nova_compute[254092]: 2025-11-25 17:18:46.526 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 17:18:46 compute-0 nova_compute[254092]: 2025-11-25 17:18:46.526 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 1ce224dc-5e5e-4105-bc00-9953c57babd7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 17:18:46 compute-0 nova_compute[254092]: 2025-11-25 17:18:46.527 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:18:46 compute-0 nova_compute[254092]: 2025-11-25 17:18:46.527 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:18:46 compute-0 nova_compute[254092]: 2025-11-25 17:18:46.553 254096 DEBUG nova.compute.manager [req-ffa82ac9-b84d-4bf4-a21a-6827b8d12402 req-342b6d7d-6fe6-47d0-aee8-26b793b00f4d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Received event network-vif-unplugged-81fef3aa-29c9-47a1-8cba-c758c43f8e45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:18:46 compute-0 nova_compute[254092]: 2025-11-25 17:18:46.554 254096 DEBUG oslo_concurrency.lockutils [req-ffa82ac9-b84d-4bf4-a21a-6827b8d12402 req-342b6d7d-6fe6-47d0-aee8-26b793b00f4d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:18:46 compute-0 nova_compute[254092]: 2025-11-25 17:18:46.554 254096 DEBUG oslo_concurrency.lockutils [req-ffa82ac9-b84d-4bf4-a21a-6827b8d12402 req-342b6d7d-6fe6-47d0-aee8-26b793b00f4d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:18:46 compute-0 nova_compute[254092]: 2025-11-25 17:18:46.554 254096 DEBUG oslo_concurrency.lockutils [req-ffa82ac9-b84d-4bf4-a21a-6827b8d12402 req-342b6d7d-6fe6-47d0-aee8-26b793b00f4d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:18:46 compute-0 nova_compute[254092]: 2025-11-25 17:18:46.554 254096 DEBUG nova.compute.manager [req-ffa82ac9-b84d-4bf4-a21a-6827b8d12402 req-342b6d7d-6fe6-47d0-aee8-26b793b00f4d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] No waiting events found dispatching network-vif-unplugged-81fef3aa-29c9-47a1-8cba-c758c43f8e45 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:18:46 compute-0 nova_compute[254092]: 2025-11-25 17:18:46.554 254096 DEBUG nova.compute.manager [req-ffa82ac9-b84d-4bf4-a21a-6827b8d12402 req-342b6d7d-6fe6-47d0-aee8-26b793b00f4d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Received event network-vif-unplugged-81fef3aa-29c9-47a1-8cba-c758c43f8e45 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 17:18:46 compute-0 nova_compute[254092]: 2025-11-25 17:18:46.577 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:18:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2857: 321 pgs: 321 active+clean; 183 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 348 KiB/s rd, 2.0 MiB/s wr, 73 op/s
Nov 25 17:18:47 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2667142588' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:18:47 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:18:47 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1487486548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:18:47 compute-0 nova_compute[254092]: 2025-11-25 17:18:47.055 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:18:47 compute-0 nova_compute[254092]: 2025-11-25 17:18:47.062 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:18:47 compute-0 nova_compute[254092]: 2025-11-25 17:18:47.073 254096 DEBUG nova.compute.manager [req-06179671-4236-4209-8a31-de97b437d7fb req-0d642368-7005-43a8-8e12-2605a47ddd0b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Received event network-vif-unplugged-479e8c0a-f171-45a0-b7de-778cf1b728bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:18:47 compute-0 nova_compute[254092]: 2025-11-25 17:18:47.073 254096 DEBUG oslo_concurrency.lockutils [req-06179671-4236-4209-8a31-de97b437d7fb req-0d642368-7005-43a8-8e12-2605a47ddd0b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:18:47 compute-0 nova_compute[254092]: 2025-11-25 17:18:47.073 254096 DEBUG oslo_concurrency.lockutils [req-06179671-4236-4209-8a31-de97b437d7fb req-0d642368-7005-43a8-8e12-2605a47ddd0b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:18:47 compute-0 nova_compute[254092]: 2025-11-25 17:18:47.073 254096 DEBUG oslo_concurrency.lockutils [req-06179671-4236-4209-8a31-de97b437d7fb req-0d642368-7005-43a8-8e12-2605a47ddd0b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:18:47 compute-0 nova_compute[254092]: 2025-11-25 17:18:47.074 254096 DEBUG nova.compute.manager [req-06179671-4236-4209-8a31-de97b437d7fb req-0d642368-7005-43a8-8e12-2605a47ddd0b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] No waiting events found dispatching network-vif-unplugged-479e8c0a-f171-45a0-b7de-778cf1b728bb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:18:47 compute-0 nova_compute[254092]: 2025-11-25 17:18:47.074 254096 DEBUG nova.compute.manager [req-06179671-4236-4209-8a31-de97b437d7fb req-0d642368-7005-43a8-8e12-2605a47ddd0b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Received event network-vif-unplugged-479e8c0a-f171-45a0-b7de-778cf1b728bb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 17:18:47 compute-0 nova_compute[254092]: 2025-11-25 17:18:47.074 254096 DEBUG nova.compute.manager [req-06179671-4236-4209-8a31-de97b437d7fb req-0d642368-7005-43a8-8e12-2605a47ddd0b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Received event network-vif-plugged-479e8c0a-f171-45a0-b7de-778cf1b728bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:18:47 compute-0 nova_compute[254092]: 2025-11-25 17:18:47.074 254096 DEBUG oslo_concurrency.lockutils [req-06179671-4236-4209-8a31-de97b437d7fb req-0d642368-7005-43a8-8e12-2605a47ddd0b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:18:47 compute-0 nova_compute[254092]: 2025-11-25 17:18:47.074 254096 DEBUG oslo_concurrency.lockutils [req-06179671-4236-4209-8a31-de97b437d7fb req-0d642368-7005-43a8-8e12-2605a47ddd0b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:18:47 compute-0 nova_compute[254092]: 2025-11-25 17:18:47.074 254096 DEBUG oslo_concurrency.lockutils [req-06179671-4236-4209-8a31-de97b437d7fb req-0d642368-7005-43a8-8e12-2605a47ddd0b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:18:47 compute-0 nova_compute[254092]: 2025-11-25 17:18:47.074 254096 DEBUG nova.compute.manager [req-06179671-4236-4209-8a31-de97b437d7fb req-0d642368-7005-43a8-8e12-2605a47ddd0b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] No waiting events found dispatching network-vif-plugged-479e8c0a-f171-45a0-b7de-778cf1b728bb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:18:47 compute-0 nova_compute[254092]: 2025-11-25 17:18:47.075 254096 WARNING nova.compute.manager [req-06179671-4236-4209-8a31-de97b437d7fb req-0d642368-7005-43a8-8e12-2605a47ddd0b a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Received unexpected event network-vif-plugged-479e8c0a-f171-45a0-b7de-778cf1b728bb for instance with vm_state active and task_state deleting.
Nov 25 17:18:47 compute-0 nova_compute[254092]: 2025-11-25 17:18:47.078 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:18:47 compute-0 nova_compute[254092]: 2025-11-25 17:18:47.109 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:18:47 compute-0 nova_compute[254092]: 2025-11-25 17:18:47.109 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:18:47 compute-0 nova_compute[254092]: 2025-11-25 17:18:47.192 254096 DEBUG nova.network.neutron [req-462575cf-fdcb-4aa9-88c6-1c445c8a3e74 req-ac8b00e0-b7a8-46a9-a216-b875b68c2fa4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Updated VIF entry in instance network info cache for port 81fef3aa-29c9-47a1-8cba-c758c43f8e45. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:18:47 compute-0 nova_compute[254092]: 2025-11-25 17:18:47.192 254096 DEBUG nova.network.neutron [req-462575cf-fdcb-4aa9-88c6-1c445c8a3e74 req-ac8b00e0-b7a8-46a9-a216-b875b68c2fa4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Updating instance_info_cache with network_info: [{"id": "81fef3aa-29c9-47a1-8cba-c758c43f8e45", "address": "fa:16:3e:68:79:bf", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81fef3aa-29", "ovs_interfaceid": "81fef3aa-29c9-47a1-8cba-c758c43f8e45", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "479e8c0a-f171-45a0-b7de-778cf1b728bb", "address": "fa:16:3e:52:27:71", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:2771", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe52:2771", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap479e8c0a-f1", "ovs_interfaceid": "479e8c0a-f171-45a0-b7de-778cf1b728bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:18:47 compute-0 nova_compute[254092]: 2025-11-25 17:18:47.213 254096 DEBUG oslo_concurrency.lockutils [req-462575cf-fdcb-4aa9-88c6-1c445c8a3e74 req-ac8b00e0-b7a8-46a9-a216-b875b68c2fa4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-1ce224dc-5e5e-4105-bc00-9953c57babd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:18:47 compute-0 nova_compute[254092]: 2025-11-25 17:18:47.472 254096 DEBUG nova.network.neutron [-] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:18:47 compute-0 nova_compute[254092]: 2025-11-25 17:18:47.496 254096 INFO nova.compute.manager [-] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Took 1.64 seconds to deallocate network for instance.
Nov 25 17:18:47 compute-0 nova_compute[254092]: 2025-11-25 17:18:47.550 254096 DEBUG oslo_concurrency.lockutils [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:18:47 compute-0 nova_compute[254092]: 2025-11-25 17:18:47.551 254096 DEBUG oslo_concurrency.lockutils [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:18:47 compute-0 nova_compute[254092]: 2025-11-25 17:18:47.628 254096 DEBUG oslo_concurrency.processutils [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:18:48 compute-0 ceph-mon[74985]: pgmap v2857: 321 pgs: 321 active+clean; 183 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 348 KiB/s rd, 2.0 MiB/s wr, 73 op/s
Nov 25 17:18:48 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1487486548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:18:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:18:48 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/523132620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:18:48 compute-0 nova_compute[254092]: 2025-11-25 17:18:48.118 254096 DEBUG oslo_concurrency.processutils [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:18:48 compute-0 nova_compute[254092]: 2025-11-25 17:18:48.128 254096 DEBUG nova.compute.provider_tree [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:18:48 compute-0 nova_compute[254092]: 2025-11-25 17:18:48.146 254096 DEBUG nova.scheduler.client.report [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:18:48 compute-0 nova_compute[254092]: 2025-11-25 17:18:48.176 254096 DEBUG oslo_concurrency.lockutils [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:18:48 compute-0 nova_compute[254092]: 2025-11-25 17:18:48.217 254096 INFO nova.scheduler.client.report [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Deleted allocations for instance 1ce224dc-5e5e-4105-bc00-9953c57babd7
Nov 25 17:18:48 compute-0 nova_compute[254092]: 2025-11-25 17:18:48.269 254096 DEBUG oslo_concurrency.lockutils [None req-d262f93f-e35b-441d-ac5a-dfa4d2f9bee7 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.257s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:18:48 compute-0 nova_compute[254092]: 2025-11-25 17:18:48.661 254096 DEBUG nova.compute.manager [req-d96e37dc-d404-4d88-a68e-f079bcbaa2f0 req-30a1c750-e0ad-462e-b5dc-5e3ccada0fe3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Received event network-vif-plugged-81fef3aa-29c9-47a1-8cba-c758c43f8e45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:18:48 compute-0 nova_compute[254092]: 2025-11-25 17:18:48.662 254096 DEBUG oslo_concurrency.lockutils [req-d96e37dc-d404-4d88-a68e-f079bcbaa2f0 req-30a1c750-e0ad-462e-b5dc-5e3ccada0fe3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:18:48 compute-0 nova_compute[254092]: 2025-11-25 17:18:48.662 254096 DEBUG oslo_concurrency.lockutils [req-d96e37dc-d404-4d88-a68e-f079bcbaa2f0 req-30a1c750-e0ad-462e-b5dc-5e3ccada0fe3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:18:48 compute-0 nova_compute[254092]: 2025-11-25 17:18:48.662 254096 DEBUG oslo_concurrency.lockutils [req-d96e37dc-d404-4d88-a68e-f079bcbaa2f0 req-30a1c750-e0ad-462e-b5dc-5e3ccada0fe3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "1ce224dc-5e5e-4105-bc00-9953c57babd7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:18:48 compute-0 nova_compute[254092]: 2025-11-25 17:18:48.663 254096 DEBUG nova.compute.manager [req-d96e37dc-d404-4d88-a68e-f079bcbaa2f0 req-30a1c750-e0ad-462e-b5dc-5e3ccada0fe3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] No waiting events found dispatching network-vif-plugged-81fef3aa-29c9-47a1-8cba-c758c43f8e45 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:18:48 compute-0 nova_compute[254092]: 2025-11-25 17:18:48.663 254096 WARNING nova.compute.manager [req-d96e37dc-d404-4d88-a68e-f079bcbaa2f0 req-30a1c750-e0ad-462e-b5dc-5e3ccada0fe3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Received unexpected event network-vif-plugged-81fef3aa-29c9-47a1-8cba-c758c43f8e45 for instance with vm_state deleted and task_state None.
Nov 25 17:18:48 compute-0 nova_compute[254092]: 2025-11-25 17:18:48.663 254096 DEBUG nova.compute.manager [req-d96e37dc-d404-4d88-a68e-f079bcbaa2f0 req-30a1c750-e0ad-462e-b5dc-5e3ccada0fe3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Received event network-vif-deleted-479e8c0a-f171-45a0-b7de-778cf1b728bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:18:48 compute-0 nova_compute[254092]: 2025-11-25 17:18:48.663 254096 DEBUG nova.compute.manager [req-d96e37dc-d404-4d88-a68e-f079bcbaa2f0 req-30a1c750-e0ad-462e-b5dc-5e3ccada0fe3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Received event network-vif-deleted-81fef3aa-29c9-47a1-8cba-c758c43f8e45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:18:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2858: 321 pgs: 321 active+clean; 183 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 12 KiB/s wr, 11 op/s
Nov 25 17:18:49 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/523132620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:18:49 compute-0 nova_compute[254092]: 2025-11-25 17:18:49.110 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:18:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:18:50 compute-0 ceph-mon[74985]: pgmap v2858: 321 pgs: 321 active+clean; 183 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 12 KiB/s wr, 11 op/s
Nov 25 17:18:50 compute-0 nova_compute[254092]: 2025-11-25 17:18:50.384 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:50 compute-0 nova_compute[254092]: 2025-11-25 17:18:50.395 254096 DEBUG oslo_concurrency.lockutils [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:18:50 compute-0 nova_compute[254092]: 2025-11-25 17:18:50.396 254096 DEBUG oslo_concurrency.lockutils [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:18:50 compute-0 nova_compute[254092]: 2025-11-25 17:18:50.396 254096 DEBUG oslo_concurrency.lockutils [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:18:50 compute-0 nova_compute[254092]: 2025-11-25 17:18:50.396 254096 DEBUG oslo_concurrency.lockutils [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:18:50 compute-0 nova_compute[254092]: 2025-11-25 17:18:50.397 254096 DEBUG oslo_concurrency.lockutils [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:18:50 compute-0 nova_compute[254092]: 2025-11-25 17:18:50.398 254096 INFO nova.compute.manager [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Terminating instance
Nov 25 17:18:50 compute-0 nova_compute[254092]: 2025-11-25 17:18:50.400 254096 DEBUG nova.compute.manager [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 17:18:50 compute-0 kernel: tapa3c9174e-c8 (unregistering): left promiscuous mode
Nov 25 17:18:50 compute-0 NetworkManager[48891]: <info>  [1764091130.4674] device (tapa3c9174e-c8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:18:50 compute-0 ovn_controller[153477]: 2025-11-25T17:18:50Z|01525|binding|INFO|Releasing lport a3c9174e-c8c3-4b9f-b87f-4d6244324c9b from this chassis (sb_readonly=0)
Nov 25 17:18:50 compute-0 ovn_controller[153477]: 2025-11-25T17:18:50Z|01526|binding|INFO|Setting lport a3c9174e-c8c3-4b9f-b87f-4d6244324c9b down in Southbound
Nov 25 17:18:50 compute-0 nova_compute[254092]: 2025-11-25 17:18:50.477 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:50 compute-0 ovn_controller[153477]: 2025-11-25T17:18:50Z|01527|binding|INFO|Removing iface tapa3c9174e-c8 ovn-installed in OVS
Nov 25 17:18:50 compute-0 nova_compute[254092]: 2025-11-25 17:18:50.480 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:50.490 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ef:d8:27 10.100.0.8'], port_security=['fa:16:3e:ef:d8:27 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a702335-301a-4b90-b82e-e616a31e5b3e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd7810d87-4bfa-4d99-b3b6-de28ef8547f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0702eb11-7151-4340-a4d9-f0fbc28cb176, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=a3c9174e-c8c3-4b9f-b87f-4d6244324c9b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:18:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:50.491 163338 INFO neutron.agent.ovn.metadata.agent [-] Port a3c9174e-c8c3-4b9f-b87f-4d6244324c9b in datapath 4a702335-301a-4b90-b82e-e616a31e5b3e unbound from our chassis
Nov 25 17:18:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:50.492 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4a702335-301a-4b90-b82e-e616a31e5b3e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 17:18:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:50.493 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ab80ab09-d9d4-4b3a-9490-1f67b0f2c4cd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:18:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:50.493 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e namespace which is not needed anymore
Nov 25 17:18:50 compute-0 kernel: tapfdd7f4f6-80 (unregistering): left promiscuous mode
Nov 25 17:18:50 compute-0 nova_compute[254092]: 2025-11-25 17:18:50.506 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:50 compute-0 NetworkManager[48891]: <info>  [1764091130.5113] device (tapfdd7f4f6-80): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:18:50 compute-0 nova_compute[254092]: 2025-11-25 17:18:50.533 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:50 compute-0 ovn_controller[153477]: 2025-11-25T17:18:50Z|01528|binding|INFO|Releasing lport fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d from this chassis (sb_readonly=0)
Nov 25 17:18:50 compute-0 ovn_controller[153477]: 2025-11-25T17:18:50Z|01529|binding|INFO|Setting lport fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d down in Southbound
Nov 25 17:18:50 compute-0 ovn_controller[153477]: 2025-11-25T17:18:50Z|01530|binding|INFO|Removing iface tapfdd7f4f6-80 ovn-installed in OVS
Nov 25 17:18:50 compute-0 nova_compute[254092]: 2025-11-25 17:18:50.545 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:50.550 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:9c:d4 2001:db8:0:1:f816:3eff:feac:9cd4 2001:db8::f816:3eff:feac:9cd4'], port_security=['fa:16:3e:ac:9c:d4 2001:db8:0:1:f816:3eff:feac:9cd4 2001:db8::f816:3eff:feac:9cd4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:feac:9cd4/64 2001:db8::f816:3eff:feac:9cd4/64', 'neutron:device_id': 'e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-51f47401-ed2b-45e2-aea1-5cbbd48e5245', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd7810d87-4bfa-4d99-b3b6-de28ef8547f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=945443dd-5eab-4f86-892f-c2db25b0d0db, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:18:50 compute-0 nova_compute[254092]: 2025-11-25 17:18:50.568 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:50 compute-0 systemd[1]: machine-qemu\x2d175\x2dinstance\x2d0000008d.scope: Deactivated successfully.
Nov 25 17:18:50 compute-0 systemd[1]: machine-qemu\x2d175\x2dinstance\x2d0000008d.scope: Consumed 16.831s CPU time.
Nov 25 17:18:50 compute-0 systemd-machined[216343]: Machine qemu-175-instance-0000008d terminated.
Nov 25 17:18:50 compute-0 NetworkManager[48891]: <info>  [1764091130.6446] manager: (tapfdd7f4f6-80): new Tun device (/org/freedesktop/NetworkManager/Devices/630)
Nov 25 17:18:50 compute-0 nova_compute[254092]: 2025-11-25 17:18:50.659 254096 INFO nova.virt.libvirt.driver [-] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Instance destroyed successfully.
Nov 25 17:18:50 compute-0 nova_compute[254092]: 2025-11-25 17:18:50.660 254096 DEBUG nova.objects.instance [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'resources' on Instance uuid e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:18:50 compute-0 nova_compute[254092]: 2025-11-25 17:18:50.682 254096 DEBUG nova.virt.libvirt.vif [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:17:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1474914346',display_name='tempest-TestGettingAddress-server-1474914346',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1474914346',id=141,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ2pcBrDGHvBVYZrwfG+0cOixMoEoWWyZwNBICyplvpWRfO1kJBjvH3LOKdm8vM5yC6I7xSnU6u9yudLUoAragOzDYER6osY0eledXMEDHVj1YkNbsjqQz6dDBKXdmH+YA==',key_name='tempest-TestGettingAddress-309096485',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:17:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-107kxyyq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:17:39Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "address": "fa:16:3e:ef:d8:27", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3c9174e-c8", "ovs_interfaceid": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:18:50 compute-0 nova_compute[254092]: 2025-11-25 17:18:50.683 254096 DEBUG nova.network.os_vif_util [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "address": "fa:16:3e:ef:d8:27", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3c9174e-c8", "ovs_interfaceid": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:18:50 compute-0 nova_compute[254092]: 2025-11-25 17:18:50.684 254096 DEBUG nova.network.os_vif_util [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ef:d8:27,bridge_name='br-int',has_traffic_filtering=True,id=a3c9174e-c8c3-4b9f-b87f-4d6244324c9b,network=Network(4a702335-301a-4b90-b82e-e616a31e5b3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3c9174e-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:18:50 compute-0 nova_compute[254092]: 2025-11-25 17:18:50.684 254096 DEBUG os_vif [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ef:d8:27,bridge_name='br-int',has_traffic_filtering=True,id=a3c9174e-c8c3-4b9f-b87f-4d6244324c9b,network=Network(4a702335-301a-4b90-b82e-e616a31e5b3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3c9174e-c8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:18:50 compute-0 nova_compute[254092]: 2025-11-25 17:18:50.687 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:50 compute-0 nova_compute[254092]: 2025-11-25 17:18:50.687 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3c9174e-c8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:18:50 compute-0 nova_compute[254092]: 2025-11-25 17:18:50.689 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:50 compute-0 nova_compute[254092]: 2025-11-25 17:18:50.692 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:18:50 compute-0 nova_compute[254092]: 2025-11-25 17:18:50.700 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:50 compute-0 nova_compute[254092]: 2025-11-25 17:18:50.704 254096 INFO os_vif [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ef:d8:27,bridge_name='br-int',has_traffic_filtering=True,id=a3c9174e-c8c3-4b9f-b87f-4d6244324c9b,network=Network(4a702335-301a-4b90-b82e-e616a31e5b3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3c9174e-c8')
Nov 25 17:18:50 compute-0 nova_compute[254092]: 2025-11-25 17:18:50.704 254096 DEBUG nova.virt.libvirt.vif [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:17:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1474914346',display_name='tempest-TestGettingAddress-server-1474914346',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1474914346',id=141,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ2pcBrDGHvBVYZrwfG+0cOixMoEoWWyZwNBICyplvpWRfO1kJBjvH3LOKdm8vM5yC6I7xSnU6u9yudLUoAragOzDYER6osY0eledXMEDHVj1YkNbsjqQz6dDBKXdmH+YA==',key_name='tempest-TestGettingAddress-309096485',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:17:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-107kxyyq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:17:39Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "address": "fa:16:3e:ac:9c:d4", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feac:9cd4", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd7f4f6-80", "ovs_interfaceid": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:18:50 compute-0 nova_compute[254092]: 2025-11-25 17:18:50.705 254096 DEBUG nova.network.os_vif_util [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "address": "fa:16:3e:ac:9c:d4", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd7f4f6-80", "ovs_interfaceid": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:18:50 compute-0 nova_compute[254092]: 2025-11-25 17:18:50.706 254096 DEBUG nova.network.os_vif_util [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ac:9c:d4,bridge_name='br-int',has_traffic_filtering=True,id=fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d,network=Network(51f47401-ed2b-45e2-aea1-5cbbd48e5245),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdd7f4f6-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:18:50 compute-0 nova_compute[254092]: 2025-11-25 17:18:50.706 254096 DEBUG os_vif [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ac:9c:d4,bridge_name='br-int',has_traffic_filtering=True,id=fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d,network=Network(51f47401-ed2b-45e2-aea1-5cbbd48e5245),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdd7f4f6-80') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:18:50 compute-0 nova_compute[254092]: 2025-11-25 17:18:50.707 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:50 compute-0 nova_compute[254092]: 2025-11-25 17:18:50.708 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfdd7f4f6-80, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:18:50 compute-0 nova_compute[254092]: 2025-11-25 17:18:50.709 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:50 compute-0 nova_compute[254092]: 2025-11-25 17:18:50.713 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:18:50 compute-0 nova_compute[254092]: 2025-11-25 17:18:50.715 254096 INFO os_vif [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ac:9c:d4,bridge_name='br-int',has_traffic_filtering=True,id=fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d,network=Network(51f47401-ed2b-45e2-aea1-5cbbd48e5245),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdd7f4f6-80')
Nov 25 17:18:50 compute-0 neutron-haproxy-ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e[411479]: [NOTICE]   (411483) : haproxy version is 2.8.14-c23fe91
Nov 25 17:18:50 compute-0 neutron-haproxy-ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e[411479]: [NOTICE]   (411483) : path to executable is /usr/sbin/haproxy
Nov 25 17:18:50 compute-0 neutron-haproxy-ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e[411479]: [WARNING]  (411483) : Exiting Master process...
Nov 25 17:18:50 compute-0 neutron-haproxy-ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e[411479]: [WARNING]  (411483) : Exiting Master process...
Nov 25 17:18:50 compute-0 neutron-haproxy-ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e[411479]: [ALERT]    (411483) : Current worker (411485) exited with code 143 (Terminated)
Nov 25 17:18:50 compute-0 neutron-haproxy-ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e[411479]: [WARNING]  (411483) : All workers exited. Exiting... (0)
Nov 25 17:18:50 compute-0 systemd[1]: libpod-405307c19ddb338a1e355ffcb9369ef165c44fe45a6b90f63bc3f887ad57dea0.scope: Deactivated successfully.
Nov 25 17:18:50 compute-0 podman[413191]: 2025-11-25 17:18:50.751798512 +0000 UTC m=+0.074174520 container died 405307c19ddb338a1e355ffcb9369ef165c44fe45a6b90f63bc3f887ad57dea0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 25 17:18:50 compute-0 nova_compute[254092]: 2025-11-25 17:18:50.778 254096 DEBUG nova.compute.manager [req-f9c6c7b0-5be3-432a-99d5-a248043fcc4f req-3cb9baf0-4107-4fc5-887e-6837a4bb4830 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Received event network-vif-unplugged-a3c9174e-c8c3-4b9f-b87f-4d6244324c9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:18:50 compute-0 nova_compute[254092]: 2025-11-25 17:18:50.779 254096 DEBUG oslo_concurrency.lockutils [req-f9c6c7b0-5be3-432a-99d5-a248043fcc4f req-3cb9baf0-4107-4fc5-887e-6837a4bb4830 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:18:50 compute-0 nova_compute[254092]: 2025-11-25 17:18:50.780 254096 DEBUG oslo_concurrency.lockutils [req-f9c6c7b0-5be3-432a-99d5-a248043fcc4f req-3cb9baf0-4107-4fc5-887e-6837a4bb4830 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:18:50 compute-0 nova_compute[254092]: 2025-11-25 17:18:50.780 254096 DEBUG oslo_concurrency.lockutils [req-f9c6c7b0-5be3-432a-99d5-a248043fcc4f req-3cb9baf0-4107-4fc5-887e-6837a4bb4830 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:18:50 compute-0 nova_compute[254092]: 2025-11-25 17:18:50.782 254096 DEBUG nova.compute.manager [req-f9c6c7b0-5be3-432a-99d5-a248043fcc4f req-3cb9baf0-4107-4fc5-887e-6837a4bb4830 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] No waiting events found dispatching network-vif-unplugged-a3c9174e-c8c3-4b9f-b87f-4d6244324c9b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:18:50 compute-0 nova_compute[254092]: 2025-11-25 17:18:50.782 254096 DEBUG nova.compute.manager [req-f9c6c7b0-5be3-432a-99d5-a248043fcc4f req-3cb9baf0-4107-4fc5-887e-6837a4bb4830 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Received event network-vif-unplugged-a3c9174e-c8c3-4b9f-b87f-4d6244324c9b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 17:18:50 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-405307c19ddb338a1e355ffcb9369ef165c44fe45a6b90f63bc3f887ad57dea0-userdata-shm.mount: Deactivated successfully.
Nov 25 17:18:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-cda96b984478ce4719be5239caee54b3e00b48097bfc3855698629f519ccf4eb-merged.mount: Deactivated successfully.
Nov 25 17:18:50 compute-0 podman[413191]: 2025-11-25 17:18:50.801994348 +0000 UTC m=+0.124370346 container cleanup 405307c19ddb338a1e355ffcb9369ef165c44fe45a6b90f63bc3f887ad57dea0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 17:18:50 compute-0 systemd[1]: libpod-conmon-405307c19ddb338a1e355ffcb9369ef165c44fe45a6b90f63bc3f887ad57dea0.scope: Deactivated successfully.
Nov 25 17:18:50 compute-0 nova_compute[254092]: 2025-11-25 17:18:50.862 254096 DEBUG nova.compute.manager [req-4c275c3f-c82c-4308-b268-b99f531b611d req-c9b4270f-b519-434e-8ed1-ccdd1ee5c443 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Received event network-changed-a3c9174e-c8c3-4b9f-b87f-4d6244324c9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:18:50 compute-0 nova_compute[254092]: 2025-11-25 17:18:50.863 254096 DEBUG nova.compute.manager [req-4c275c3f-c82c-4308-b268-b99f531b611d req-c9b4270f-b519-434e-8ed1-ccdd1ee5c443 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Refreshing instance network info cache due to event network-changed-a3c9174e-c8c3-4b9f-b87f-4d6244324c9b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:18:50 compute-0 nova_compute[254092]: 2025-11-25 17:18:50.863 254096 DEBUG oslo_concurrency.lockutils [req-4c275c3f-c82c-4308-b268-b99f531b611d req-c9b4270f-b519-434e-8ed1-ccdd1ee5c443 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:18:50 compute-0 nova_compute[254092]: 2025-11-25 17:18:50.864 254096 DEBUG oslo_concurrency.lockutils [req-4c275c3f-c82c-4308-b268-b99f531b611d req-c9b4270f-b519-434e-8ed1-ccdd1ee5c443 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:18:50 compute-0 nova_compute[254092]: 2025-11-25 17:18:50.864 254096 DEBUG nova.network.neutron [req-4c275c3f-c82c-4308-b268-b99f531b611d req-c9b4270f-b519-434e-8ed1-ccdd1ee5c443 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Refreshing network info cache for port a3c9174e-c8c3-4b9f-b87f-4d6244324c9b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:18:50 compute-0 podman[413246]: 2025-11-25 17:18:50.894899497 +0000 UTC m=+0.059140681 container remove 405307c19ddb338a1e355ffcb9369ef165c44fe45a6b90f63bc3f887ad57dea0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:18:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:50.902 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8a70e6b6-14e3-4b61-8975-dc30594f0b71]: (4, ('Tue Nov 25 05:18:50 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e (405307c19ddb338a1e355ffcb9369ef165c44fe45a6b90f63bc3f887ad57dea0)\n405307c19ddb338a1e355ffcb9369ef165c44fe45a6b90f63bc3f887ad57dea0\nTue Nov 25 05:18:50 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e (405307c19ddb338a1e355ffcb9369ef165c44fe45a6b90f63bc3f887ad57dea0)\n405307c19ddb338a1e355ffcb9369ef165c44fe45a6b90f63bc3f887ad57dea0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:18:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:50.904 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[31d04b17-c4cf-4299-9d56-0ae239410419]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:18:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:50.906 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a702335-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:18:50 compute-0 nova_compute[254092]: 2025-11-25 17:18:50.908 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:50 compute-0 kernel: tap4a702335-30: left promiscuous mode
Nov 25 17:18:50 compute-0 nova_compute[254092]: 2025-11-25 17:18:50.922 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:50.925 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ef13ac5c-e098-4633-96e8-f39a5fa47987]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:18:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:50.946 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0446432b-fc0e-4c0a-a907-99a5b9908546]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:18:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:50.948 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b35a4a90-3c50-46d5-a21a-19b08a921771]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:18:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2859: 321 pgs: 321 active+clean; 103 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 18 KiB/s wr, 31 op/s
Nov 25 17:18:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:50.970 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1972d516-dd16-4a73-9ad7-17d1748702a8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 751452, 'reachable_time': 15175, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 413262, 'error': None, 'target': 'ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:18:50 compute-0 systemd[1]: run-netns-ovnmeta\x2d4a702335\x2d301a\x2d4b90\x2db82e\x2de616a31e5b3e.mount: Deactivated successfully.
Nov 25 17:18:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:50.978 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4a702335-301a-4b90-b82e-e616a31e5b3e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 17:18:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:50.979 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[8419bfef-035e-4305-a4ac-17a36a716756]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:18:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:50.980 163338 INFO neutron.agent.ovn.metadata.agent [-] Port fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d in datapath 51f47401-ed2b-45e2-aea1-5cbbd48e5245 unbound from our chassis
Nov 25 17:18:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:50.981 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 51f47401-ed2b-45e2-aea1-5cbbd48e5245, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 17:18:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:50.982 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e3272c4c-20d1-4262-be2b-9605f8beb182]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:18:50 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:50.982 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245 namespace which is not needed anymore
Nov 25 17:18:51 compute-0 nova_compute[254092]: 2025-11-25 17:18:51.139 254096 INFO nova.virt.libvirt.driver [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Deleting instance files /var/lib/nova/instances/e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac_del
Nov 25 17:18:51 compute-0 nova_compute[254092]: 2025-11-25 17:18:51.140 254096 INFO nova.virt.libvirt.driver [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Deletion of /var/lib/nova/instances/e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac_del complete
Nov 25 17:18:51 compute-0 neutron-haproxy-ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245[411552]: [NOTICE]   (411556) : haproxy version is 2.8.14-c23fe91
Nov 25 17:18:51 compute-0 neutron-haproxy-ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245[411552]: [NOTICE]   (411556) : path to executable is /usr/sbin/haproxy
Nov 25 17:18:51 compute-0 neutron-haproxy-ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245[411552]: [WARNING]  (411556) : Exiting Master process...
Nov 25 17:18:51 compute-0 neutron-haproxy-ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245[411552]: [ALERT]    (411556) : Current worker (411558) exited with code 143 (Terminated)
Nov 25 17:18:51 compute-0 neutron-haproxy-ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245[411552]: [WARNING]  (411556) : All workers exited. Exiting... (0)
Nov 25 17:18:51 compute-0 systemd[1]: libpod-cdf8e41c5dd43e62f962cb4a1295dcba9c7d152de0f6a8f1abec269112ba09c0.scope: Deactivated successfully.
Nov 25 17:18:51 compute-0 podman[413280]: 2025-11-25 17:18:51.157478634 +0000 UTC m=+0.060341273 container died cdf8e41c5dd43e62f962cb4a1295dcba9c7d152de0f6a8f1abec269112ba09c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 17:18:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cdf8e41c5dd43e62f962cb4a1295dcba9c7d152de0f6a8f1abec269112ba09c0-userdata-shm.mount: Deactivated successfully.
Nov 25 17:18:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-f061dc6daf905b054161e2e14d141c86f73e0689fee8c1b141f9292e1f077fc2-merged.mount: Deactivated successfully.
Nov 25 17:18:51 compute-0 podman[413280]: 2025-11-25 17:18:51.199005325 +0000 UTC m=+0.101867964 container cleanup cdf8e41c5dd43e62f962cb4a1295dcba9c7d152de0f6a8f1abec269112ba09c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:18:51 compute-0 nova_compute[254092]: 2025-11-25 17:18:51.216 254096 INFO nova.compute.manager [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Took 0.82 seconds to destroy the instance on the hypervisor.
Nov 25 17:18:51 compute-0 nova_compute[254092]: 2025-11-25 17:18:51.217 254096 DEBUG oslo.service.loopingcall [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 17:18:51 compute-0 nova_compute[254092]: 2025-11-25 17:18:51.217 254096 DEBUG nova.compute.manager [-] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 17:18:51 compute-0 nova_compute[254092]: 2025-11-25 17:18:51.217 254096 DEBUG nova.network.neutron [-] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 17:18:51 compute-0 systemd[1]: libpod-conmon-cdf8e41c5dd43e62f962cb4a1295dcba9c7d152de0f6a8f1abec269112ba09c0.scope: Deactivated successfully.
Nov 25 17:18:51 compute-0 nova_compute[254092]: 2025-11-25 17:18:51.241 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:51 compute-0 podman[413309]: 2025-11-25 17:18:51.275538838 +0000 UTC m=+0.051091872 container remove cdf8e41c5dd43e62f962cb4a1295dcba9c7d152de0f6a8f1abec269112ba09c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 17:18:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:51.284 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7f601f41-59be-4643-b364-1589a51af05d]: (4, ('Tue Nov 25 05:18:51 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245 (cdf8e41c5dd43e62f962cb4a1295dcba9c7d152de0f6a8f1abec269112ba09c0)\ncdf8e41c5dd43e62f962cb4a1295dcba9c7d152de0f6a8f1abec269112ba09c0\nTue Nov 25 05:18:51 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245 (cdf8e41c5dd43e62f962cb4a1295dcba9c7d152de0f6a8f1abec269112ba09c0)\ncdf8e41c5dd43e62f962cb4a1295dcba9c7d152de0f6a8f1abec269112ba09c0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:18:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:51.286 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a93cb4ac-d684-4b67-807b-8ad22a0803f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:18:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:51.287 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap51f47401-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:18:51 compute-0 nova_compute[254092]: 2025-11-25 17:18:51.289 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:51 compute-0 kernel: tap51f47401-e0: left promiscuous mode
Nov 25 17:18:51 compute-0 nova_compute[254092]: 2025-11-25 17:18:51.292 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:51.295 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d2d24506-f903-4d00-8ad8-6b759bc88e33]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:18:51 compute-0 nova_compute[254092]: 2025-11-25 17:18:51.305 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:51.312 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ebdc37f2-e5ed-442e-9ea1-a97ccc6289d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:18:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:51.314 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6676708c-1909-4049-ab8b-604f6c7aa52c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:18:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:51.337 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a65b4a0c-3594-4c4a-9c34-b1ca28934d81]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 751549, 'reachable_time': 22070, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 413325, 'error': None, 'target': 'ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:18:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:51.340 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-51f47401-ed2b-45e2-aea1-5cbbd48e5245 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 17:18:51 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:51.341 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[18f3cd86-afdf-4517-bd11-75f7e5029e57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:18:51 compute-0 nova_compute[254092]: 2025-11-25 17:18:51.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:18:51 compute-0 nova_compute[254092]: 2025-11-25 17:18:51.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:18:51 compute-0 nova_compute[254092]: 2025-11-25 17:18:51.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:18:51 compute-0 nova_compute[254092]: 2025-11-25 17:18:51.532 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Nov 25 17:18:51 compute-0 nova_compute[254092]: 2025-11-25 17:18:51.532 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 17:18:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:18:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:18:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:18:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:18:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005685645186234291 of space, bias 1.0, pg target 0.1705693555870287 quantized to 32 (current 32)
Nov 25 17:18:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:18:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:18:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:18:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:18:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:18:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 17:18:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:18:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:18:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:18:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:18:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:18:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:18:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:18:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:18:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:18:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:18:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:18:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:18:51 compute-0 systemd[1]: run-netns-ovnmeta\x2d51f47401\x2ded2b\x2d45e2\x2daea1\x2d5cbbd48e5245.mount: Deactivated successfully.
Nov 25 17:18:52 compute-0 ceph-mon[74985]: pgmap v2859: 321 pgs: 321 active+clean; 103 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 18 KiB/s wr, 31 op/s
Nov 25 17:18:52 compute-0 nova_compute[254092]: 2025-11-25 17:18:52.884 254096 DEBUG nova.compute.manager [req-bc3521e3-5a1d-45d6-9b87-224dc3037989 req-bf7249f5-1411-40d7-ac80-c6e7d7db4aae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Received event network-vif-plugged-a3c9174e-c8c3-4b9f-b87f-4d6244324c9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:18:52 compute-0 nova_compute[254092]: 2025-11-25 17:18:52.885 254096 DEBUG oslo_concurrency.lockutils [req-bc3521e3-5a1d-45d6-9b87-224dc3037989 req-bf7249f5-1411-40d7-ac80-c6e7d7db4aae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:18:52 compute-0 nova_compute[254092]: 2025-11-25 17:18:52.886 254096 DEBUG oslo_concurrency.lockutils [req-bc3521e3-5a1d-45d6-9b87-224dc3037989 req-bf7249f5-1411-40d7-ac80-c6e7d7db4aae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:18:52 compute-0 nova_compute[254092]: 2025-11-25 17:18:52.886 254096 DEBUG oslo_concurrency.lockutils [req-bc3521e3-5a1d-45d6-9b87-224dc3037989 req-bf7249f5-1411-40d7-ac80-c6e7d7db4aae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:18:52 compute-0 nova_compute[254092]: 2025-11-25 17:18:52.887 254096 DEBUG nova.compute.manager [req-bc3521e3-5a1d-45d6-9b87-224dc3037989 req-bf7249f5-1411-40d7-ac80-c6e7d7db4aae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] No waiting events found dispatching network-vif-plugged-a3c9174e-c8c3-4b9f-b87f-4d6244324c9b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:18:52 compute-0 nova_compute[254092]: 2025-11-25 17:18:52.887 254096 WARNING nova.compute.manager [req-bc3521e3-5a1d-45d6-9b87-224dc3037989 req-bf7249f5-1411-40d7-ac80-c6e7d7db4aae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Received unexpected event network-vif-plugged-a3c9174e-c8c3-4b9f-b87f-4d6244324c9b for instance with vm_state active and task_state deleting.
Nov 25 17:18:52 compute-0 nova_compute[254092]: 2025-11-25 17:18:52.887 254096 DEBUG nova.compute.manager [req-bc3521e3-5a1d-45d6-9b87-224dc3037989 req-bf7249f5-1411-40d7-ac80-c6e7d7db4aae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Received event network-vif-unplugged-fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:18:52 compute-0 nova_compute[254092]: 2025-11-25 17:18:52.888 254096 DEBUG oslo_concurrency.lockutils [req-bc3521e3-5a1d-45d6-9b87-224dc3037989 req-bf7249f5-1411-40d7-ac80-c6e7d7db4aae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:18:52 compute-0 nova_compute[254092]: 2025-11-25 17:18:52.888 254096 DEBUG oslo_concurrency.lockutils [req-bc3521e3-5a1d-45d6-9b87-224dc3037989 req-bf7249f5-1411-40d7-ac80-c6e7d7db4aae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:18:52 compute-0 nova_compute[254092]: 2025-11-25 17:18:52.889 254096 DEBUG oslo_concurrency.lockutils [req-bc3521e3-5a1d-45d6-9b87-224dc3037989 req-bf7249f5-1411-40d7-ac80-c6e7d7db4aae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:18:52 compute-0 nova_compute[254092]: 2025-11-25 17:18:52.889 254096 DEBUG nova.compute.manager [req-bc3521e3-5a1d-45d6-9b87-224dc3037989 req-bf7249f5-1411-40d7-ac80-c6e7d7db4aae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] No waiting events found dispatching network-vif-unplugged-fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:18:52 compute-0 nova_compute[254092]: 2025-11-25 17:18:52.889 254096 DEBUG nova.compute.manager [req-bc3521e3-5a1d-45d6-9b87-224dc3037989 req-bf7249f5-1411-40d7-ac80-c6e7d7db4aae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Received event network-vif-unplugged-fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 17:18:52 compute-0 nova_compute[254092]: 2025-11-25 17:18:52.890 254096 DEBUG nova.compute.manager [req-bc3521e3-5a1d-45d6-9b87-224dc3037989 req-bf7249f5-1411-40d7-ac80-c6e7d7db4aae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Received event network-vif-plugged-fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:18:52 compute-0 nova_compute[254092]: 2025-11-25 17:18:52.890 254096 DEBUG oslo_concurrency.lockutils [req-bc3521e3-5a1d-45d6-9b87-224dc3037989 req-bf7249f5-1411-40d7-ac80-c6e7d7db4aae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:18:52 compute-0 nova_compute[254092]: 2025-11-25 17:18:52.890 254096 DEBUG oslo_concurrency.lockutils [req-bc3521e3-5a1d-45d6-9b87-224dc3037989 req-bf7249f5-1411-40d7-ac80-c6e7d7db4aae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:18:52 compute-0 nova_compute[254092]: 2025-11-25 17:18:52.891 254096 DEBUG oslo_concurrency.lockutils [req-bc3521e3-5a1d-45d6-9b87-224dc3037989 req-bf7249f5-1411-40d7-ac80-c6e7d7db4aae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:18:52 compute-0 nova_compute[254092]: 2025-11-25 17:18:52.891 254096 DEBUG nova.compute.manager [req-bc3521e3-5a1d-45d6-9b87-224dc3037989 req-bf7249f5-1411-40d7-ac80-c6e7d7db4aae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] No waiting events found dispatching network-vif-plugged-fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:18:52 compute-0 nova_compute[254092]: 2025-11-25 17:18:52.891 254096 WARNING nova.compute.manager [req-bc3521e3-5a1d-45d6-9b87-224dc3037989 req-bf7249f5-1411-40d7-ac80-c6e7d7db4aae a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Received unexpected event network-vif-plugged-fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d for instance with vm_state active and task_state deleting.
Nov 25 17:18:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2860: 321 pgs: 321 active+clean; 103 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 7.0 KiB/s wr, 31 op/s
Nov 25 17:18:53 compute-0 nova_compute[254092]: 2025-11-25 17:18:53.250 254096 DEBUG nova.compute.manager [req-8c7971ec-3624-48ae-adc2-0ef9c20a2478 req-bda6c58e-141e-4f16-b532-3631e79a32c7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Received event network-vif-deleted-a3c9174e-c8c3-4b9f-b87f-4d6244324c9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:18:53 compute-0 nova_compute[254092]: 2025-11-25 17:18:53.250 254096 INFO nova.compute.manager [req-8c7971ec-3624-48ae-adc2-0ef9c20a2478 req-bda6c58e-141e-4f16-b532-3631e79a32c7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Neutron deleted interface a3c9174e-c8c3-4b9f-b87f-4d6244324c9b; detaching it from the instance and deleting it from the info cache
Nov 25 17:18:53 compute-0 nova_compute[254092]: 2025-11-25 17:18:53.251 254096 DEBUG nova.network.neutron [req-8c7971ec-3624-48ae-adc2-0ef9c20a2478 req-bda6c58e-141e-4f16-b532-3631e79a32c7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Updating instance_info_cache with network_info: [{"id": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "address": "fa:16:3e:ac:9c:d4", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd7f4f6-80", "ovs_interfaceid": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:18:53 compute-0 nova_compute[254092]: 2025-11-25 17:18:53.271 254096 DEBUG nova.compute.manager [req-8c7971ec-3624-48ae-adc2-0ef9c20a2478 req-bda6c58e-141e-4f16-b532-3631e79a32c7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Detach interface failed, port_id=a3c9174e-c8c3-4b9f-b87f-4d6244324c9b, reason: Instance e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 25 17:18:53 compute-0 nova_compute[254092]: 2025-11-25 17:18:53.290 254096 DEBUG nova.network.neutron [req-4c275c3f-c82c-4308-b268-b99f531b611d req-c9b4270f-b519-434e-8ed1-ccdd1ee5c443 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Updated VIF entry in instance network info cache for port a3c9174e-c8c3-4b9f-b87f-4d6244324c9b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:18:53 compute-0 nova_compute[254092]: 2025-11-25 17:18:53.291 254096 DEBUG nova.network.neutron [req-4c275c3f-c82c-4308-b268-b99f531b611d req-c9b4270f-b519-434e-8ed1-ccdd1ee5c443 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Updating instance_info_cache with network_info: [{"id": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "address": "fa:16:3e:ef:d8:27", "network": {"id": "4a702335-301a-4b90-b82e-e616a31e5b3e", "bridge": "br-int", "label": "tempest-network-smoke--1708184066", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3c9174e-c8", "ovs_interfaceid": "a3c9174e-c8c3-4b9f-b87f-4d6244324c9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "address": "fa:16:3e:ac:9c:d4", "network": {"id": "51f47401-ed2b-45e2-aea1-5cbbd48e5245", "bridge": "br-int", "label": "tempest-network-smoke--1351221980", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feac:9cd4", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd7f4f6-80", "ovs_interfaceid": "fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:18:53 compute-0 nova_compute[254092]: 2025-11-25 17:18:53.434 254096 DEBUG oslo_concurrency.lockutils [req-4c275c3f-c82c-4308-b268-b99f531b611d req-c9b4270f-b519-434e-8ed1-ccdd1ee5c443 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:18:54 compute-0 ceph-mon[74985]: pgmap v2860: 321 pgs: 321 active+clean; 103 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 7.0 KiB/s wr, 31 op/s
Nov 25 17:18:54 compute-0 nova_compute[254092]: 2025-11-25 17:18:54.296 254096 DEBUG nova.network.neutron [-] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:18:54 compute-0 nova_compute[254092]: 2025-11-25 17:18:54.318 254096 INFO nova.compute.manager [-] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Took 3.10 seconds to deallocate network for instance.
Nov 25 17:18:54 compute-0 nova_compute[254092]: 2025-11-25 17:18:54.362 254096 DEBUG oslo_concurrency.lockutils [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:18:54 compute-0 nova_compute[254092]: 2025-11-25 17:18:54.363 254096 DEBUG oslo_concurrency.lockutils [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:18:54 compute-0 nova_compute[254092]: 2025-11-25 17:18:54.425 254096 DEBUG oslo_concurrency.processutils [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:18:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:18:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:18:54.720 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '57'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:18:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:18:54 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4224687489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:18:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2861: 321 pgs: 321 active+clean; 103 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 6.0 KiB/s wr, 30 op/s
Nov 25 17:18:54 compute-0 nova_compute[254092]: 2025-11-25 17:18:54.961 254096 DEBUG oslo_concurrency.processutils [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:18:54 compute-0 nova_compute[254092]: 2025-11-25 17:18:54.971 254096 DEBUG nova.compute.provider_tree [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:18:55 compute-0 nova_compute[254092]: 2025-11-25 17:18:55.001 254096 DEBUG nova.scheduler.client.report [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:18:55 compute-0 nova_compute[254092]: 2025-11-25 17:18:55.033 254096 DEBUG oslo_concurrency.lockutils [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:18:55 compute-0 nova_compute[254092]: 2025-11-25 17:18:55.068 254096 INFO nova.scheduler.client.report [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Deleted allocations for instance e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac
Nov 25 17:18:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4224687489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:18:55 compute-0 nova_compute[254092]: 2025-11-25 17:18:55.154 254096 DEBUG oslo_concurrency.lockutils [None req-762a49d5-3a0c-4fe5-b687-f0292d9e23de a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:18:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:18:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/166770898' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:18:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:18:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/166770898' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:18:55 compute-0 nova_compute[254092]: 2025-11-25 17:18:55.398 254096 DEBUG nova.compute.manager [req-2f76273b-7af5-442b-a10b-9e441c574181 req-ee1de235-8666-4e0a-89dd-f4b1453d3738 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Received event network-vif-deleted-fdd7f4f6-8064-4e70-ac18-8a80d2cbeb2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:18:55 compute-0 nova_compute[254092]: 2025-11-25 17:18:55.710 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:56 compute-0 ceph-mon[74985]: pgmap v2861: 321 pgs: 321 active+clean; 103 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 6.0 KiB/s wr, 30 op/s
Nov 25 17:18:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/166770898' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:18:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/166770898' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:18:56 compute-0 nova_compute[254092]: 2025-11-25 17:18:56.267 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:18:56 compute-0 nova_compute[254092]: 2025-11-25 17:18:56.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:18:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2862: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 7.2 KiB/s wr, 57 op/s
Nov 25 17:18:57 compute-0 nova_compute[254092]: 2025-11-25 17:18:57.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:18:57 compute-0 nova_compute[254092]: 2025-11-25 17:18:57.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 17:18:57 compute-0 nova_compute[254092]: 2025-11-25 17:18:57.515 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 17:18:58 compute-0 ceph-mon[74985]: pgmap v2862: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 7.2 KiB/s wr, 57 op/s
Nov 25 17:18:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2863: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 7.1 KiB/s wr, 47 op/s
Nov 25 17:18:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:19:00 compute-0 ceph-mon[74985]: pgmap v2863: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 7.1 KiB/s wr, 47 op/s
Nov 25 17:19:00 compute-0 nova_compute[254092]: 2025-11-25 17:19:00.283 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764091125.282258, 1ce224dc-5e5e-4105-bc00-9953c57babd7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:19:00 compute-0 nova_compute[254092]: 2025-11-25 17:19:00.284 254096 INFO nova.compute.manager [-] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] VM Stopped (Lifecycle Event)
Nov 25 17:19:00 compute-0 nova_compute[254092]: 2025-11-25 17:19:00.301 254096 DEBUG nova.compute.manager [None req-d63e886b-8eb9-4935-8444-4adb40858532 - - - - - -] [instance: 1ce224dc-5e5e-4105-bc00-9953c57babd7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:19:00 compute-0 nova_compute[254092]: 2025-11-25 17:19:00.714 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:19:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2864: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 7.1 KiB/s wr, 47 op/s
Nov 25 17:19:01 compute-0 nova_compute[254092]: 2025-11-25 17:19:01.271 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:19:01 compute-0 nova_compute[254092]: 2025-11-25 17:19:01.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:19:02 compute-0 nova_compute[254092]: 2025-11-25 17:19:02.108 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:19:02 compute-0 ceph-mon[74985]: pgmap v2864: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 7.1 KiB/s wr, 47 op/s
Nov 25 17:19:02 compute-0 nova_compute[254092]: 2025-11-25 17:19:02.277 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:19:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2865: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Nov 25 17:19:04 compute-0 ceph-mon[74985]: pgmap v2865: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Nov 25 17:19:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:19:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2866: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Nov 25 17:19:05 compute-0 nova_compute[254092]: 2025-11-25 17:19:05.656 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764091130.655965, e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:19:05 compute-0 nova_compute[254092]: 2025-11-25 17:19:05.657 254096 INFO nova.compute.manager [-] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] VM Stopped (Lifecycle Event)
Nov 25 17:19:05 compute-0 nova_compute[254092]: 2025-11-25 17:19:05.678 254096 DEBUG nova.compute.manager [None req-478d371e-f9c9-4339-9c29-912b00c2f278 - - - - - -] [instance: e6b2aa52-7cbc-4d94-8cc0-6d0f12e4c8ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:19:05 compute-0 nova_compute[254092]: 2025-11-25 17:19:05.717 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:19:06 compute-0 ceph-mon[74985]: pgmap v2866: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Nov 25 17:19:06 compute-0 nova_compute[254092]: 2025-11-25 17:19:06.271 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:19:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2867: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Nov 25 17:19:07 compute-0 nova_compute[254092]: 2025-11-25 17:19:07.512 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:19:07 compute-0 nova_compute[254092]: 2025-11-25 17:19:07.512 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 17:19:07 compute-0 podman[413351]: 2025-11-25 17:19:07.698535869 +0000 UTC m=+0.101350750 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Nov 25 17:19:07 compute-0 podman[413350]: 2025-11-25 17:19:07.712935531 +0000 UTC m=+0.114968360 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 17:19:07 compute-0 podman[413352]: 2025-11-25 17:19:07.747999845 +0000 UTC m=+0.137510744 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 25 17:19:08 compute-0 ceph-mon[74985]: pgmap v2867: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Nov 25 17:19:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2868: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:19:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:19:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:19:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:19:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:19:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:19:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:19:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:19:10 compute-0 ceph-mon[74985]: pgmap v2868: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:19:10 compute-0 nova_compute[254092]: 2025-11-25 17:19:10.720 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:19:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2869: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:19:11 compute-0 nova_compute[254092]: 2025-11-25 17:19:11.272 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:19:12 compute-0 ceph-mon[74985]: pgmap v2869: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:19:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2870: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:19:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:13.656 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:19:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:13.657 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:19:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:13.657 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:19:14 compute-0 ceph-mon[74985]: pgmap v2870: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:19:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:19:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2871: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:19:15 compute-0 nova_compute[254092]: 2025-11-25 17:19:15.724 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:19:16 compute-0 ceph-mon[74985]: pgmap v2871: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:19:16 compute-0 nova_compute[254092]: 2025-11-25 17:19:16.275 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:19:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2872: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:19:18 compute-0 ceph-mon[74985]: pgmap v2872: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:19:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2873: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:19:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:19:20 compute-0 ceph-mon[74985]: pgmap v2873: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:19:20 compute-0 nova_compute[254092]: 2025-11-25 17:19:20.726 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:19:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2874: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:19:21 compute-0 nova_compute[254092]: 2025-11-25 17:19:21.277 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:19:22 compute-0 ceph-mon[74985]: pgmap v2874: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:19:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2875: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:19:24 compute-0 ceph-mon[74985]: pgmap v2875: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:19:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:19:24 compute-0 nova_compute[254092]: 2025-11-25 17:19:24.587 254096 DEBUG oslo_concurrency.lockutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "392384a1-1741-4504-b2c2-557420bbbbd0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:19:24 compute-0 nova_compute[254092]: 2025-11-25 17:19:24.588 254096 DEBUG oslo_concurrency.lockutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:19:24 compute-0 nova_compute[254092]: 2025-11-25 17:19:24.604 254096 DEBUG nova.compute.manager [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 17:19:24 compute-0 nova_compute[254092]: 2025-11-25 17:19:24.722 254096 DEBUG oslo_concurrency.lockutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:19:24 compute-0 nova_compute[254092]: 2025-11-25 17:19:24.723 254096 DEBUG oslo_concurrency.lockutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:19:24 compute-0 nova_compute[254092]: 2025-11-25 17:19:24.732 254096 DEBUG nova.virt.hardware [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 17:19:24 compute-0 nova_compute[254092]: 2025-11-25 17:19:24.732 254096 INFO nova.compute.claims [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Claim successful on node compute-0.ctlplane.example.com
Nov 25 17:19:24 compute-0 nova_compute[254092]: 2025-11-25 17:19:24.827 254096 DEBUG oslo_concurrency.processutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:19:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2876: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:19:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:19:25 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4065312171' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:19:25 compute-0 nova_compute[254092]: 2025-11-25 17:19:25.315 254096 DEBUG oslo_concurrency.processutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:19:25 compute-0 nova_compute[254092]: 2025-11-25 17:19:25.322 254096 DEBUG nova.compute.provider_tree [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:19:25 compute-0 nova_compute[254092]: 2025-11-25 17:19:25.340 254096 DEBUG nova.scheduler.client.report [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:19:25 compute-0 nova_compute[254092]: 2025-11-25 17:19:25.358 254096 DEBUG oslo_concurrency.lockutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:19:25 compute-0 nova_compute[254092]: 2025-11-25 17:19:25.359 254096 DEBUG nova.compute.manager [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 17:19:25 compute-0 nova_compute[254092]: 2025-11-25 17:19:25.409 254096 DEBUG nova.compute.manager [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 17:19:25 compute-0 nova_compute[254092]: 2025-11-25 17:19:25.411 254096 DEBUG nova.network.neutron [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 17:19:25 compute-0 nova_compute[254092]: 2025-11-25 17:19:25.432 254096 INFO nova.virt.libvirt.driver [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 17:19:25 compute-0 nova_compute[254092]: 2025-11-25 17:19:25.448 254096 DEBUG nova.compute.manager [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 17:19:25 compute-0 nova_compute[254092]: 2025-11-25 17:19:25.546 254096 DEBUG nova.compute.manager [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 17:19:25 compute-0 nova_compute[254092]: 2025-11-25 17:19:25.548 254096 DEBUG nova.virt.libvirt.driver [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 17:19:25 compute-0 nova_compute[254092]: 2025-11-25 17:19:25.549 254096 INFO nova.virt.libvirt.driver [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Creating image(s)
Nov 25 17:19:25 compute-0 nova_compute[254092]: 2025-11-25 17:19:25.587 254096 DEBUG nova.storage.rbd_utils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 392384a1-1741-4504-b2c2-557420bbbbd0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:19:25 compute-0 nova_compute[254092]: 2025-11-25 17:19:25.611 254096 DEBUG nova.storage.rbd_utils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 392384a1-1741-4504-b2c2-557420bbbbd0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:19:25 compute-0 nova_compute[254092]: 2025-11-25 17:19:25.639 254096 DEBUG nova.storage.rbd_utils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 392384a1-1741-4504-b2c2-557420bbbbd0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:19:25 compute-0 nova_compute[254092]: 2025-11-25 17:19:25.645 254096 DEBUG oslo_concurrency.processutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:19:25 compute-0 nova_compute[254092]: 2025-11-25 17:19:25.729 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:19:25 compute-0 nova_compute[254092]: 2025-11-25 17:19:25.772 254096 DEBUG oslo_concurrency.processutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:19:25 compute-0 nova_compute[254092]: 2025-11-25 17:19:25.773 254096 DEBUG oslo_concurrency.lockutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:19:25 compute-0 nova_compute[254092]: 2025-11-25 17:19:25.774 254096 DEBUG oslo_concurrency.lockutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:19:25 compute-0 nova_compute[254092]: 2025-11-25 17:19:25.775 254096 DEBUG oslo_concurrency.lockutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:19:25 compute-0 nova_compute[254092]: 2025-11-25 17:19:25.804 254096 DEBUG nova.storage.rbd_utils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 392384a1-1741-4504-b2c2-557420bbbbd0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:19:25 compute-0 nova_compute[254092]: 2025-11-25 17:19:25.810 254096 DEBUG oslo_concurrency.processutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 392384a1-1741-4504-b2c2-557420bbbbd0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:19:26 compute-0 nova_compute[254092]: 2025-11-25 17:19:26.037 254096 DEBUG nova.policy [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a647bb22c9a54e5fa9ddfff588126693', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '67b7f8e8aac6441f992a182f8d893100', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 17:19:26 compute-0 nova_compute[254092]: 2025-11-25 17:19:26.184 254096 DEBUG oslo_concurrency.processutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 392384a1-1741-4504-b2c2-557420bbbbd0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.374s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:19:26 compute-0 nova_compute[254092]: 2025-11-25 17:19:26.260 254096 DEBUG nova.storage.rbd_utils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] resizing rbd image 392384a1-1741-4504-b2c2-557420bbbbd0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 17:19:26 compute-0 ceph-mon[74985]: pgmap v2876: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:19:26 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4065312171' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:19:26 compute-0 nova_compute[254092]: 2025-11-25 17:19:26.301 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:19:26 compute-0 nova_compute[254092]: 2025-11-25 17:19:26.380 254096 DEBUG nova.objects.instance [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'migration_context' on Instance uuid 392384a1-1741-4504-b2c2-557420bbbbd0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:19:26 compute-0 nova_compute[254092]: 2025-11-25 17:19:26.396 254096 DEBUG nova.virt.libvirt.driver [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 17:19:26 compute-0 nova_compute[254092]: 2025-11-25 17:19:26.397 254096 DEBUG nova.virt.libvirt.driver [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Ensure instance console log exists: /var/lib/nova/instances/392384a1-1741-4504-b2c2-557420bbbbd0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 17:19:26 compute-0 nova_compute[254092]: 2025-11-25 17:19:26.397 254096 DEBUG oslo_concurrency.lockutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:19:26 compute-0 nova_compute[254092]: 2025-11-25 17:19:26.398 254096 DEBUG oslo_concurrency.lockutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:19:26 compute-0 nova_compute[254092]: 2025-11-25 17:19:26.398 254096 DEBUG oslo_concurrency.lockutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:19:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2877: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 17:19:28 compute-0 ceph-mon[74985]: pgmap v2877: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 17:19:28 compute-0 nova_compute[254092]: 2025-11-25 17:19:28.596 254096 DEBUG nova.network.neutron [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Successfully created port: b342a143-48a8-46f1-90fc-229fadeb167e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 17:19:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2878: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 17:19:29 compute-0 nova_compute[254092]: 2025-11-25 17:19:29.016 254096 DEBUG nova.network.neutron [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Successfully created port: f6eeae44-ea00-4543-a1e0-9ce45fbc399f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 17:19:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:19:29 compute-0 sudo[413602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:19:29 compute-0 sudo[413602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:19:29 compute-0 sudo[413602]: pam_unix(sudo:session): session closed for user root
Nov 25 17:19:29 compute-0 sudo[413627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:19:29 compute-0 sudo[413627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:19:29 compute-0 sudo[413627]: pam_unix(sudo:session): session closed for user root
Nov 25 17:19:30 compute-0 sudo[413652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:19:30 compute-0 sudo[413652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:19:30 compute-0 sudo[413652]: pam_unix(sudo:session): session closed for user root
Nov 25 17:19:30 compute-0 sudo[413677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 25 17:19:30 compute-0 sudo[413677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:19:30 compute-0 ceph-mon[74985]: pgmap v2878: 321 pgs: 321 active+clean; 41 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 17:19:30 compute-0 nova_compute[254092]: 2025-11-25 17:19:30.732 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:19:30 compute-0 podman[413774]: 2025-11-25 17:19:30.847399424 +0000 UTC m=+0.104329490 container exec 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 17:19:30 compute-0 podman[413774]: 2025-11-25 17:19:30.953237955 +0000 UTC m=+0.210168061 container exec_died 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 17:19:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2879: 321 pgs: 321 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:19:31 compute-0 nova_compute[254092]: 2025-11-25 17:19:31.145 254096 DEBUG nova.network.neutron [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Successfully updated port: b342a143-48a8-46f1-90fc-229fadeb167e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:19:31 compute-0 nova_compute[254092]: 2025-11-25 17:19:31.246 254096 DEBUG nova.compute.manager [req-39ac8f2a-552a-4f1f-ad67-dca8206f6836 req-d9159e69-1ea9-43b3-b0bf-945b3079ecc5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Received event network-changed-b342a143-48a8-46f1-90fc-229fadeb167e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:19:31 compute-0 nova_compute[254092]: 2025-11-25 17:19:31.247 254096 DEBUG nova.compute.manager [req-39ac8f2a-552a-4f1f-ad67-dca8206f6836 req-d9159e69-1ea9-43b3-b0bf-945b3079ecc5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Refreshing instance network info cache due to event network-changed-b342a143-48a8-46f1-90fc-229fadeb167e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:19:31 compute-0 nova_compute[254092]: 2025-11-25 17:19:31.247 254096 DEBUG oslo_concurrency.lockutils [req-39ac8f2a-552a-4f1f-ad67-dca8206f6836 req-d9159e69-1ea9-43b3-b0bf-945b3079ecc5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-392384a1-1741-4504-b2c2-557420bbbbd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:19:31 compute-0 nova_compute[254092]: 2025-11-25 17:19:31.247 254096 DEBUG oslo_concurrency.lockutils [req-39ac8f2a-552a-4f1f-ad67-dca8206f6836 req-d9159e69-1ea9-43b3-b0bf-945b3079ecc5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-392384a1-1741-4504-b2c2-557420bbbbd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:19:31 compute-0 nova_compute[254092]: 2025-11-25 17:19:31.248 254096 DEBUG nova.network.neutron [req-39ac8f2a-552a-4f1f-ad67-dca8206f6836 req-d9159e69-1ea9-43b3-b0bf-945b3079ecc5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Refreshing network info cache for port b342a143-48a8-46f1-90fc-229fadeb167e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:19:31 compute-0 nova_compute[254092]: 2025-11-25 17:19:31.282 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:19:31 compute-0 sudo[413677]: pam_unix(sudo:session): session closed for user root
Nov 25 17:19:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:19:31 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:19:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:19:31 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:19:31 compute-0 sudo[413935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:19:32 compute-0 sudo[413935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:19:32 compute-0 sudo[413935]: pam_unix(sudo:session): session closed for user root
Nov 25 17:19:32 compute-0 nova_compute[254092]: 2025-11-25 17:19:32.026 254096 DEBUG nova.network.neutron [req-39ac8f2a-552a-4f1f-ad67-dca8206f6836 req-d9159e69-1ea9-43b3-b0bf-945b3079ecc5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:19:32 compute-0 sudo[413960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:19:32 compute-0 sudo[413960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:19:32 compute-0 sudo[413960]: pam_unix(sudo:session): session closed for user root
Nov 25 17:19:32 compute-0 sudo[413985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:19:32 compute-0 sudo[413985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:19:32 compute-0 sudo[413985]: pam_unix(sudo:session): session closed for user root
Nov 25 17:19:32 compute-0 sudo[414010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:19:32 compute-0 sudo[414010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:19:32 compute-0 ceph-mon[74985]: pgmap v2879: 321 pgs: 321 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:19:32 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:19:32 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:19:32 compute-0 sudo[414010]: pam_unix(sudo:session): session closed for user root
Nov 25 17:19:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:19:32 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:19:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:19:32 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:19:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:19:32 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:19:32 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev f54bc50b-34ef-454b-ae48-cd2cf2fe07ed does not exist
Nov 25 17:19:32 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev f9ae48dd-75f6-4a01-bae4-8db29768eab1 does not exist
Nov 25 17:19:32 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 3191994b-b815-47fc-9102-e195d1380f44 does not exist
Nov 25 17:19:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:19:32 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:19:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:19:32 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:19:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:19:32 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 17:19:32 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:19:32 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 17:19:32 compute-0 nova_compute[254092]: 2025-11-25 17:19:32.885 254096 DEBUG nova.network.neutron [req-39ac8f2a-552a-4f1f-ad67-dca8206f6836 req-d9159e69-1ea9-43b3-b0bf-945b3079ecc5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:19:32 compute-0 nova_compute[254092]: 2025-11-25 17:19:32.897 254096 DEBUG oslo_concurrency.lockutils [req-39ac8f2a-552a-4f1f-ad67-dca8206f6836 req-d9159e69-1ea9-43b3-b0bf-945b3079ecc5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-392384a1-1741-4504-b2c2-557420bbbbd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:19:32 compute-0 sudo[414067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:19:32 compute-0 sudo[414067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:19:32 compute-0 sudo[414067]: pam_unix(sudo:session): session closed for user root
Nov 25 17:19:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2880: 321 pgs: 321 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:19:32 compute-0 nova_compute[254092]: 2025-11-25 17:19:32.984 254096 DEBUG nova.network.neutron [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Successfully updated port: f6eeae44-ea00-4543-a1e0-9ce45fbc399f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:19:33 compute-0 nova_compute[254092]: 2025-11-25 17:19:33.003 254096 DEBUG oslo_concurrency.lockutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "refresh_cache-392384a1-1741-4504-b2c2-557420bbbbd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:19:33 compute-0 nova_compute[254092]: 2025-11-25 17:19:33.003 254096 DEBUG oslo_concurrency.lockutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquired lock "refresh_cache-392384a1-1741-4504-b2c2-557420bbbbd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:19:33 compute-0 nova_compute[254092]: 2025-11-25 17:19:33.003 254096 DEBUG nova.network.neutron [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 17:19:33 compute-0 sudo[414092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:19:33 compute-0 sudo[414092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:19:33 compute-0 sudo[414092]: pam_unix(sudo:session): session closed for user root
Nov 25 17:19:33 compute-0 sudo[414117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:19:33 compute-0 sudo[414117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:19:33 compute-0 sudo[414117]: pam_unix(sudo:session): session closed for user root
Nov 25 17:19:33 compute-0 nova_compute[254092]: 2025-11-25 17:19:33.116 254096 DEBUG nova.network.neutron [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:19:33 compute-0 sudo[414142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:19:33 compute-0 sudo[414142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:19:33 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:19:33 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:19:33 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:19:33 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:19:33 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:19:33 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:19:33 compute-0 nova_compute[254092]: 2025-11-25 17:19:33.338 254096 DEBUG nova.compute.manager [req-d9a4af80-7066-4606-885c-db231b6195fc req-276330f0-d74e-4b84-ab84-270d221c4c96 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Received event network-changed-f6eeae44-ea00-4543-a1e0-9ce45fbc399f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:19:33 compute-0 nova_compute[254092]: 2025-11-25 17:19:33.338 254096 DEBUG nova.compute.manager [req-d9a4af80-7066-4606-885c-db231b6195fc req-276330f0-d74e-4b84-ab84-270d221c4c96 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Refreshing instance network info cache due to event network-changed-f6eeae44-ea00-4543-a1e0-9ce45fbc399f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:19:33 compute-0 nova_compute[254092]: 2025-11-25 17:19:33.338 254096 DEBUG oslo_concurrency.lockutils [req-d9a4af80-7066-4606-885c-db231b6195fc req-276330f0-d74e-4b84-ab84-270d221c4c96 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-392384a1-1741-4504-b2c2-557420bbbbd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:19:33 compute-0 podman[414207]: 2025-11-25 17:19:33.502530066 +0000 UTC m=+0.050546327 container create 482c3de457a644e5d31c609a8dc6b68323c9eef45e5265a215ad2441fa599c4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:19:33 compute-0 systemd[1]: Started libpod-conmon-482c3de457a644e5d31c609a8dc6b68323c9eef45e5265a215ad2441fa599c4b.scope.
Nov 25 17:19:33 compute-0 podman[414207]: 2025-11-25 17:19:33.480247069 +0000 UTC m=+0.028263340 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:19:33 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:19:33 compute-0 podman[414207]: 2025-11-25 17:19:33.618588544 +0000 UTC m=+0.166604845 container init 482c3de457a644e5d31c609a8dc6b68323c9eef45e5265a215ad2441fa599c4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_curran, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:19:33 compute-0 podman[414207]: 2025-11-25 17:19:33.632852643 +0000 UTC m=+0.180868944 container start 482c3de457a644e5d31c609a8dc6b68323c9eef45e5265a215ad2441fa599c4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_curran, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:19:33 compute-0 podman[414207]: 2025-11-25 17:19:33.637821278 +0000 UTC m=+0.185837549 container attach 482c3de457a644e5d31c609a8dc6b68323c9eef45e5265a215ad2441fa599c4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 17:19:33 compute-0 competent_curran[414223]: 167 167
Nov 25 17:19:33 compute-0 systemd[1]: libpod-482c3de457a644e5d31c609a8dc6b68323c9eef45e5265a215ad2441fa599c4b.scope: Deactivated successfully.
Nov 25 17:19:33 compute-0 conmon[414223]: conmon 482c3de457a644e5d31c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-482c3de457a644e5d31c609a8dc6b68323c9eef45e5265a215ad2441fa599c4b.scope/container/memory.events
Nov 25 17:19:33 compute-0 podman[414207]: 2025-11-25 17:19:33.64561339 +0000 UTC m=+0.193629661 container died 482c3de457a644e5d31c609a8dc6b68323c9eef45e5265a215ad2441fa599c4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_curran, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 17:19:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4d01be5cbaf65ba39ec30ba508c3bf34e3b046276aa25039140b71d9c625c1a-merged.mount: Deactivated successfully.
Nov 25 17:19:33 compute-0 podman[414207]: 2025-11-25 17:19:33.691630643 +0000 UTC m=+0.239646914 container remove 482c3de457a644e5d31c609a8dc6b68323c9eef45e5265a215ad2441fa599c4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 17:19:33 compute-0 systemd[1]: libpod-conmon-482c3de457a644e5d31c609a8dc6b68323c9eef45e5265a215ad2441fa599c4b.scope: Deactivated successfully.
Nov 25 17:19:33 compute-0 podman[414246]: 2025-11-25 17:19:33.902264627 +0000 UTC m=+0.067909351 container create 4c413c7d43f8f888e7f9a4ac11a7cc500584ca42fd2fc960061d5c5feb0a2cb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goodall, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:19:33 compute-0 systemd[1]: Started libpod-conmon-4c413c7d43f8f888e7f9a4ac11a7cc500584ca42fd2fc960061d5c5feb0a2cb1.scope.
Nov 25 17:19:33 compute-0 podman[414246]: 2025-11-25 17:19:33.876877285 +0000 UTC m=+0.042522019 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:19:33 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:19:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24654d1ecdf1442d50c78c29a442e2e50463adf8a2196f78dc0d26fdc06ab8b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:19:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24654d1ecdf1442d50c78c29a442e2e50463adf8a2196f78dc0d26fdc06ab8b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:19:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24654d1ecdf1442d50c78c29a442e2e50463adf8a2196f78dc0d26fdc06ab8b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:19:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24654d1ecdf1442d50c78c29a442e2e50463adf8a2196f78dc0d26fdc06ab8b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:19:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24654d1ecdf1442d50c78c29a442e2e50463adf8a2196f78dc0d26fdc06ab8b7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:19:34 compute-0 podman[414246]: 2025-11-25 17:19:34.034913897 +0000 UTC m=+0.200558611 container init 4c413c7d43f8f888e7f9a4ac11a7cc500584ca42fd2fc960061d5c5feb0a2cb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goodall, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 17:19:34 compute-0 podman[414246]: 2025-11-25 17:19:34.048802905 +0000 UTC m=+0.214447639 container start 4c413c7d43f8f888e7f9a4ac11a7cc500584ca42fd2fc960061d5c5feb0a2cb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:19:34 compute-0 podman[414246]: 2025-11-25 17:19:34.053286177 +0000 UTC m=+0.218930871 container attach 4c413c7d43f8f888e7f9a4ac11a7cc500584ca42fd2fc960061d5c5feb0a2cb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 17:19:34 compute-0 ceph-mon[74985]: pgmap v2880: 321 pgs: 321 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:19:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:19:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2881: 321 pgs: 321 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:19:35 compute-0 crazy_goodall[414263]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:19:35 compute-0 crazy_goodall[414263]: --> relative data size: 1.0
Nov 25 17:19:35 compute-0 crazy_goodall[414263]: --> All data devices are unavailable
Nov 25 17:19:35 compute-0 systemd[1]: libpod-4c413c7d43f8f888e7f9a4ac11a7cc500584ca42fd2fc960061d5c5feb0a2cb1.scope: Deactivated successfully.
Nov 25 17:19:35 compute-0 systemd[1]: libpod-4c413c7d43f8f888e7f9a4ac11a7cc500584ca42fd2fc960061d5c5feb0a2cb1.scope: Consumed 1.149s CPU time.
Nov 25 17:19:35 compute-0 podman[414246]: 2025-11-25 17:19:35.239057173 +0000 UTC m=+1.404701857 container died 4c413c7d43f8f888e7f9a4ac11a7cc500584ca42fd2fc960061d5c5feb0a2cb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 17:19:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-24654d1ecdf1442d50c78c29a442e2e50463adf8a2196f78dc0d26fdc06ab8b7-merged.mount: Deactivated successfully.
Nov 25 17:19:35 compute-0 podman[414246]: 2025-11-25 17:19:35.314439035 +0000 UTC m=+1.480083729 container remove 4c413c7d43f8f888e7f9a4ac11a7cc500584ca42fd2fc960061d5c5feb0a2cb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 17:19:35 compute-0 systemd[1]: libpod-conmon-4c413c7d43f8f888e7f9a4ac11a7cc500584ca42fd2fc960061d5c5feb0a2cb1.scope: Deactivated successfully.
Nov 25 17:19:35 compute-0 sudo[414142]: pam_unix(sudo:session): session closed for user root
Nov 25 17:19:35 compute-0 sudo[414304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:19:35 compute-0 sudo[414304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:19:35 compute-0 sudo[414304]: pam_unix(sudo:session): session closed for user root
Nov 25 17:19:35 compute-0 sudo[414329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:19:35 compute-0 sudo[414329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:19:35 compute-0 sudo[414329]: pam_unix(sudo:session): session closed for user root
Nov 25 17:19:35 compute-0 sudo[414354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:19:35 compute-0 sudo[414354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:19:35 compute-0 sudo[414354]: pam_unix(sudo:session): session closed for user root
Nov 25 17:19:35 compute-0 nova_compute[254092]: 2025-11-25 17:19:35.736 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:19:35 compute-0 sudo[414379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:19:35 compute-0 sudo[414379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:19:36 compute-0 podman[414448]: 2025-11-25 17:19:36.232551464 +0000 UTC m=+0.060212049 container create 17478910fc08b38c5efad594682154fe5fe60cdb36d11b45c267c2abc8669b2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 17:19:36 compute-0 systemd[1]: Started libpod-conmon-17478910fc08b38c5efad594682154fe5fe60cdb36d11b45c267c2abc8669b2e.scope.
Nov 25 17:19:36 compute-0 podman[414448]: 2025-11-25 17:19:36.203683759 +0000 UTC m=+0.031344364 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:19:36 compute-0 nova_compute[254092]: 2025-11-25 17:19:36.332 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:19:36 compute-0 ceph-mon[74985]: pgmap v2881: 321 pgs: 321 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:19:36 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:19:36 compute-0 podman[414448]: 2025-11-25 17:19:36.382968429 +0000 UTC m=+0.210629004 container init 17478910fc08b38c5efad594682154fe5fe60cdb36d11b45c267c2abc8669b2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_cohen, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:19:36 compute-0 podman[414448]: 2025-11-25 17:19:36.391419339 +0000 UTC m=+0.219079884 container start 17478910fc08b38c5efad594682154fe5fe60cdb36d11b45c267c2abc8669b2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 17:19:36 compute-0 distracted_cohen[414464]: 167 167
Nov 25 17:19:36 compute-0 podman[414448]: 2025-11-25 17:19:36.396329913 +0000 UTC m=+0.223990458 container attach 17478910fc08b38c5efad594682154fe5fe60cdb36d11b45c267c2abc8669b2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_cohen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:19:36 compute-0 systemd[1]: libpod-17478910fc08b38c5efad594682154fe5fe60cdb36d11b45c267c2abc8669b2e.scope: Deactivated successfully.
Nov 25 17:19:36 compute-0 podman[414448]: 2025-11-25 17:19:36.397619518 +0000 UTC m=+0.225280063 container died 17478910fc08b38c5efad594682154fe5fe60cdb36d11b45c267c2abc8669b2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_cohen, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:19:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ade7895a896e6a7e7bfdcb3e72b13a16988ee8a0c0937391c3a0427c0b21d71-merged.mount: Deactivated successfully.
Nov 25 17:19:36 compute-0 podman[414448]: 2025-11-25 17:19:36.453001085 +0000 UTC m=+0.280661660 container remove 17478910fc08b38c5efad594682154fe5fe60cdb36d11b45c267c2abc8669b2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Nov 25 17:19:36 compute-0 systemd[1]: libpod-conmon-17478910fc08b38c5efad594682154fe5fe60cdb36d11b45c267c2abc8669b2e.scope: Deactivated successfully.
Nov 25 17:19:36 compute-0 nova_compute[254092]: 2025-11-25 17:19:36.483 254096 DEBUG nova.network.neutron [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Updating instance_info_cache with network_info: [{"id": "b342a143-48a8-46f1-90fc-229fadeb167e", "address": "fa:16:3e:48:d2:35", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb342a143-48", "ovs_interfaceid": "b342a143-48a8-46f1-90fc-229fadeb167e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "address": "fa:16:3e:0f:88:e0", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0f:88e0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6eeae44-ea", "ovs_interfaceid": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:19:36 compute-0 nova_compute[254092]: 2025-11-25 17:19:36.507 254096 DEBUG oslo_concurrency.lockutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Releasing lock "refresh_cache-392384a1-1741-4504-b2c2-557420bbbbd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:19:36 compute-0 nova_compute[254092]: 2025-11-25 17:19:36.507 254096 DEBUG nova.compute.manager [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Instance network_info: |[{"id": "b342a143-48a8-46f1-90fc-229fadeb167e", "address": "fa:16:3e:48:d2:35", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb342a143-48", "ovs_interfaceid": "b342a143-48a8-46f1-90fc-229fadeb167e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "address": "fa:16:3e:0f:88:e0", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0f:88e0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6eeae44-ea", "ovs_interfaceid": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 17:19:36 compute-0 nova_compute[254092]: 2025-11-25 17:19:36.508 254096 DEBUG oslo_concurrency.lockutils [req-d9a4af80-7066-4606-885c-db231b6195fc req-276330f0-d74e-4b84-ab84-270d221c4c96 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-392384a1-1741-4504-b2c2-557420bbbbd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:19:36 compute-0 nova_compute[254092]: 2025-11-25 17:19:36.508 254096 DEBUG nova.network.neutron [req-d9a4af80-7066-4606-885c-db231b6195fc req-276330f0-d74e-4b84-ab84-270d221c4c96 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Refreshing network info cache for port f6eeae44-ea00-4543-a1e0-9ce45fbc399f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:19:36 compute-0 nova_compute[254092]: 2025-11-25 17:19:36.511 254096 DEBUG nova.virt.libvirt.driver [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Start _get_guest_xml network_info=[{"id": "b342a143-48a8-46f1-90fc-229fadeb167e", "address": "fa:16:3e:48:d2:35", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb342a143-48", "ovs_interfaceid": "b342a143-48a8-46f1-90fc-229fadeb167e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "address": "fa:16:3e:0f:88:e0", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0f:88e0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6eeae44-ea", "ovs_interfaceid": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 17:19:36 compute-0 nova_compute[254092]: 2025-11-25 17:19:36.517 254096 WARNING nova.virt.libvirt.driver [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:19:36 compute-0 nova_compute[254092]: 2025-11-25 17:19:36.523 254096 DEBUG nova.virt.libvirt.host [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 17:19:36 compute-0 nova_compute[254092]: 2025-11-25 17:19:36.524 254096 DEBUG nova.virt.libvirt.host [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 17:19:36 compute-0 nova_compute[254092]: 2025-11-25 17:19:36.532 254096 DEBUG nova.virt.libvirt.host [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 17:19:36 compute-0 nova_compute[254092]: 2025-11-25 17:19:36.532 254096 DEBUG nova.virt.libvirt.host [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 17:19:36 compute-0 nova_compute[254092]: 2025-11-25 17:19:36.533 254096 DEBUG nova.virt.libvirt.driver [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 17:19:36 compute-0 nova_compute[254092]: 2025-11-25 17:19:36.533 254096 DEBUG nova.virt.hardware [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 17:19:36 compute-0 nova_compute[254092]: 2025-11-25 17:19:36.533 254096 DEBUG nova.virt.hardware [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 17:19:36 compute-0 nova_compute[254092]: 2025-11-25 17:19:36.534 254096 DEBUG nova.virt.hardware [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 17:19:36 compute-0 nova_compute[254092]: 2025-11-25 17:19:36.534 254096 DEBUG nova.virt.hardware [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 17:19:36 compute-0 nova_compute[254092]: 2025-11-25 17:19:36.534 254096 DEBUG nova.virt.hardware [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 17:19:36 compute-0 nova_compute[254092]: 2025-11-25 17:19:36.534 254096 DEBUG nova.virt.hardware [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 17:19:36 compute-0 nova_compute[254092]: 2025-11-25 17:19:36.535 254096 DEBUG nova.virt.hardware [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 17:19:36 compute-0 nova_compute[254092]: 2025-11-25 17:19:36.535 254096 DEBUG nova.virt.hardware [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 17:19:36 compute-0 nova_compute[254092]: 2025-11-25 17:19:36.535 254096 DEBUG nova.virt.hardware [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 17:19:36 compute-0 nova_compute[254092]: 2025-11-25 17:19:36.535 254096 DEBUG nova.virt.hardware [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 17:19:36 compute-0 nova_compute[254092]: 2025-11-25 17:19:36.535 254096 DEBUG nova.virt.hardware [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 17:19:36 compute-0 nova_compute[254092]: 2025-11-25 17:19:36.538 254096 DEBUG oslo_concurrency.processutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:19:36 compute-0 podman[414489]: 2025-11-25 17:19:36.693090251 +0000 UTC m=+0.065617098 container create a0443103ae20310329a0fa7e4363b0b53a540f83eb35703c7cda695bce13e7b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_banach, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:19:36 compute-0 podman[414489]: 2025-11-25 17:19:36.663717201 +0000 UTC m=+0.036244098 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:19:36 compute-0 systemd[1]: Started libpod-conmon-a0443103ae20310329a0fa7e4363b0b53a540f83eb35703c7cda695bce13e7b9.scope.
Nov 25 17:19:36 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:19:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2faddf4721c97530666cb2d771a8756641e64b902bb7a05c7a1ab99cc158165/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:19:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2faddf4721c97530666cb2d771a8756641e64b902bb7a05c7a1ab99cc158165/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:19:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2faddf4721c97530666cb2d771a8756641e64b902bb7a05c7a1ab99cc158165/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:19:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2faddf4721c97530666cb2d771a8756641e64b902bb7a05c7a1ab99cc158165/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:19:36 compute-0 podman[414489]: 2025-11-25 17:19:36.824255361 +0000 UTC m=+0.196782188 container init a0443103ae20310329a0fa7e4363b0b53a540f83eb35703c7cda695bce13e7b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 17:19:36 compute-0 podman[414489]: 2025-11-25 17:19:36.835952429 +0000 UTC m=+0.208479236 container start a0443103ae20310329a0fa7e4363b0b53a540f83eb35703c7cda695bce13e7b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:19:36 compute-0 podman[414489]: 2025-11-25 17:19:36.839198787 +0000 UTC m=+0.211725594 container attach a0443103ae20310329a0fa7e4363b0b53a540f83eb35703c7cda695bce13e7b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_banach, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:19:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2882: 321 pgs: 321 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:19:37 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:19:37 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1143290325' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.034 254096 DEBUG oslo_concurrency.processutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.066 254096 DEBUG nova.storage.rbd_utils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 392384a1-1741-4504-b2c2-557420bbbbd0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.072 254096 DEBUG oslo_concurrency.processutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:19:37 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1143290325' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:19:37 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:19:37 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1478714737' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.594 254096 DEBUG oslo_concurrency.processutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.598 254096 DEBUG nova.virt.libvirt.vif [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:19:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-473687599',display_name='tempest-TestGettingAddress-server-473687599',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-473687599',id=143,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEzqaO2W9qUVAVfLhHnbtZ9O0tM3aVdXiBKXRhRrnEjXChl2Hh7jScqfJN9XRIykUO2wAiKe7A8/q0zxul64iRZVmOh42YGZD0paqormAt73h0roulwz53Jgxaj06Lt+ZQ==',key_name='tempest-TestGettingAddress-737469986',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-avdd1di6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:19:25Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=392384a1-1741-4504-b2c2-557420bbbbd0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b342a143-48a8-46f1-90fc-229fadeb167e", "address": "fa:16:3e:48:d2:35", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb342a143-48", "ovs_interfaceid": "b342a143-48a8-46f1-90fc-229fadeb167e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.598 254096 DEBUG nova.network.os_vif_util [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "b342a143-48a8-46f1-90fc-229fadeb167e", "address": "fa:16:3e:48:d2:35", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb342a143-48", "ovs_interfaceid": "b342a143-48a8-46f1-90fc-229fadeb167e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.599 254096 DEBUG nova.network.os_vif_util [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:d2:35,bridge_name='br-int',has_traffic_filtering=True,id=b342a143-48a8-46f1-90fc-229fadeb167e,network=Network(77970d23-547a-4e3a-bddf-f4770a15bf81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb342a143-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.600 254096 DEBUG nova.virt.libvirt.vif [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:19:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-473687599',display_name='tempest-TestGettingAddress-server-473687599',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-473687599',id=143,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEzqaO2W9qUVAVfLhHnbtZ9O0tM3aVdXiBKXRhRrnEjXChl2Hh7jScqfJN9XRIykUO2wAiKe7A8/q0zxul64iRZVmOh42YGZD0paqormAt73h0roulwz53Jgxaj06Lt+ZQ==',key_name='tempest-TestGettingAddress-737469986',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-avdd1di6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:19:25Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=392384a1-1741-4504-b2c2-557420bbbbd0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "address": "fa:16:3e:0f:88:e0", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0f:88e0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6eeae44-ea", "ovs_interfaceid": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.600 254096 DEBUG nova.network.os_vif_util [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "address": "fa:16:3e:0f:88:e0", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0f:88e0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6eeae44-ea", "ovs_interfaceid": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.600 254096 DEBUG nova.network.os_vif_util [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0f:88:e0,bridge_name='br-int',has_traffic_filtering=True,id=f6eeae44-ea00-4543-a1e0-9ce45fbc399f,network=Network(ad3526c9-ce3b-41ed-ae27-775dca6a1319),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6eeae44-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.602 254096 DEBUG nova.objects.instance [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'pci_devices' on Instance uuid 392384a1-1741-4504-b2c2-557420bbbbd0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.620 254096 DEBUG nova.virt.libvirt.driver [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] End _get_guest_xml xml=<domain type="kvm">
Nov 25 17:19:37 compute-0 nova_compute[254092]:   <uuid>392384a1-1741-4504-b2c2-557420bbbbd0</uuid>
Nov 25 17:19:37 compute-0 nova_compute[254092]:   <name>instance-0000008f</name>
Nov 25 17:19:37 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 17:19:37 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 17:19:37 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:19:37 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:       <nova:name>tempest-TestGettingAddress-server-473687599</nova:name>
Nov 25 17:19:37 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 17:19:36</nova:creationTime>
Nov 25 17:19:37 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 17:19:37 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 17:19:37 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 17:19:37 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 17:19:37 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:19:37 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 17:19:37 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 17:19:37 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 17:19:37 compute-0 nova_compute[254092]:         <nova:user uuid="a647bb22c9a54e5fa9ddfff588126693">tempest-TestGettingAddress-207202764-project-member</nova:user>
Nov 25 17:19:37 compute-0 nova_compute[254092]:         <nova:project uuid="67b7f8e8aac6441f992a182f8d893100">tempest-TestGettingAddress-207202764</nova:project>
Nov 25 17:19:37 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 17:19:37 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 17:19:37 compute-0 nova_compute[254092]:         <nova:port uuid="b342a143-48a8-46f1-90fc-229fadeb167e">
Nov 25 17:19:37 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 17:19:37 compute-0 nova_compute[254092]:         <nova:port uuid="f6eeae44-ea00-4543-a1e0-9ce45fbc399f">
Nov 25 17:19:37 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe0f:88e0" ipVersion="6"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 17:19:37 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 17:19:37 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:19:37 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 17:19:37 compute-0 nova_compute[254092]:     <system>
Nov 25 17:19:37 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 17:19:37 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 17:19:37 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:19:37 compute-0 nova_compute[254092]:       <entry name="serial">392384a1-1741-4504-b2c2-557420bbbbd0</entry>
Nov 25 17:19:37 compute-0 nova_compute[254092]:       <entry name="uuid">392384a1-1741-4504-b2c2-557420bbbbd0</entry>
Nov 25 17:19:37 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     </system>
Nov 25 17:19:37 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:19:37 compute-0 nova_compute[254092]:   <os>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:   </os>
Nov 25 17:19:37 compute-0 nova_compute[254092]:   <features>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:   </features>
Nov 25 17:19:37 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 17:19:37 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:19:37 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 17:19:37 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:19:37 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 17:19:37 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/392384a1-1741-4504-b2c2-557420bbbbd0_disk">
Nov 25 17:19:37 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:       </source>
Nov 25 17:19:37 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:19:37 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:19:37 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 17:19:37 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/392384a1-1741-4504-b2c2-557420bbbbd0_disk.config">
Nov 25 17:19:37 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:       </source>
Nov 25 17:19:37 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:19:37 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:19:37 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 17:19:37 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:48:d2:35"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:       <target dev="tapb342a143-48"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 17:19:37 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:0f:88:e0"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:       <target dev="tapf6eeae44-ea"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 17:19:37 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/392384a1-1741-4504-b2c2-557420bbbbd0/console.log" append="off"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     <video>
Nov 25 17:19:37 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     </video>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 17:19:37 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 17:19:37 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 17:19:37 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:19:37 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:19:37 compute-0 nova_compute[254092]: </domain>
Nov 25 17:19:37 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.620 254096 DEBUG nova.compute.manager [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Preparing to wait for external event network-vif-plugged-b342a143-48a8-46f1-90fc-229fadeb167e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.621 254096 DEBUG oslo_concurrency.lockutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.621 254096 DEBUG oslo_concurrency.lockutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.621 254096 DEBUG oslo_concurrency.lockutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.621 254096 DEBUG nova.compute.manager [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Preparing to wait for external event network-vif-plugged-f6eeae44-ea00-4543-a1e0-9ce45fbc399f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.621 254096 DEBUG oslo_concurrency.lockutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.622 254096 DEBUG oslo_concurrency.lockutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.622 254096 DEBUG oslo_concurrency.lockutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.622 254096 DEBUG nova.virt.libvirt.vif [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:19:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-473687599',display_name='tempest-TestGettingAddress-server-473687599',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-473687599',id=143,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEzqaO2W9qUVAVfLhHnbtZ9O0tM3aVdXiBKXRhRrnEjXChl2Hh7jScqfJN9XRIykUO2wAiKe7A8/q0zxul64iRZVmOh42YGZD0paqormAt73h0roulwz53Jgxaj06Lt+ZQ==',key_name='tempest-TestGettingAddress-737469986',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-avdd1di6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:19:25Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=392384a1-1741-4504-b2c2-557420bbbbd0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b342a143-48a8-46f1-90fc-229fadeb167e", "address": "fa:16:3e:48:d2:35", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb342a143-48", "ovs_interfaceid": "b342a143-48a8-46f1-90fc-229fadeb167e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.622 254096 DEBUG nova.network.os_vif_util [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "b342a143-48a8-46f1-90fc-229fadeb167e", "address": "fa:16:3e:48:d2:35", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb342a143-48", "ovs_interfaceid": "b342a143-48a8-46f1-90fc-229fadeb167e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.623 254096 DEBUG nova.network.os_vif_util [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:d2:35,bridge_name='br-int',has_traffic_filtering=True,id=b342a143-48a8-46f1-90fc-229fadeb167e,network=Network(77970d23-547a-4e3a-bddf-f4770a15bf81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb342a143-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.623 254096 DEBUG os_vif [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:d2:35,bridge_name='br-int',has_traffic_filtering=True,id=b342a143-48a8-46f1-90fc-229fadeb167e,network=Network(77970d23-547a-4e3a-bddf-f4770a15bf81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb342a143-48') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.624 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:19:37 compute-0 kind_banach[414524]: {
Nov 25 17:19:37 compute-0 kind_banach[414524]:     "0": [
Nov 25 17:19:37 compute-0 kind_banach[414524]:         {
Nov 25 17:19:37 compute-0 kind_banach[414524]:             "devices": [
Nov 25 17:19:37 compute-0 kind_banach[414524]:                 "/dev/loop3"
Nov 25 17:19:37 compute-0 kind_banach[414524]:             ],
Nov 25 17:19:37 compute-0 kind_banach[414524]:             "lv_name": "ceph_lv0",
Nov 25 17:19:37 compute-0 kind_banach[414524]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:19:37 compute-0 kind_banach[414524]:             "lv_size": "21470642176",
Nov 25 17:19:37 compute-0 kind_banach[414524]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:19:37 compute-0 kind_banach[414524]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:19:37 compute-0 kind_banach[414524]:             "name": "ceph_lv0",
Nov 25 17:19:37 compute-0 kind_banach[414524]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:19:37 compute-0 kind_banach[414524]:             "tags": {
Nov 25 17:19:37 compute-0 kind_banach[414524]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:19:37 compute-0 kind_banach[414524]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:19:37 compute-0 kind_banach[414524]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:19:37 compute-0 kind_banach[414524]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:19:37 compute-0 kind_banach[414524]:                 "ceph.cluster_name": "ceph",
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.624 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:19:37 compute-0 kind_banach[414524]:                 "ceph.crush_device_class": "",
Nov 25 17:19:37 compute-0 kind_banach[414524]:                 "ceph.encrypted": "0",
Nov 25 17:19:37 compute-0 kind_banach[414524]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:19:37 compute-0 kind_banach[414524]:                 "ceph.osd_id": "0",
Nov 25 17:19:37 compute-0 kind_banach[414524]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:19:37 compute-0 kind_banach[414524]:                 "ceph.type": "block",
Nov 25 17:19:37 compute-0 kind_banach[414524]:                 "ceph.vdo": "0"
Nov 25 17:19:37 compute-0 kind_banach[414524]:             },
Nov 25 17:19:37 compute-0 kind_banach[414524]:             "type": "block",
Nov 25 17:19:37 compute-0 kind_banach[414524]:             "vg_name": "ceph_vg0"
Nov 25 17:19:37 compute-0 kind_banach[414524]:         }
Nov 25 17:19:37 compute-0 kind_banach[414524]:     ],
Nov 25 17:19:37 compute-0 kind_banach[414524]:     "1": [
Nov 25 17:19:37 compute-0 kind_banach[414524]:         {
Nov 25 17:19:37 compute-0 kind_banach[414524]:             "devices": [
Nov 25 17:19:37 compute-0 kind_banach[414524]:                 "/dev/loop4"
Nov 25 17:19:37 compute-0 kind_banach[414524]:             ],
Nov 25 17:19:37 compute-0 kind_banach[414524]:             "lv_name": "ceph_lv1",
Nov 25 17:19:37 compute-0 kind_banach[414524]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:19:37 compute-0 kind_banach[414524]:             "lv_size": "21470642176",
Nov 25 17:19:37 compute-0 kind_banach[414524]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:19:37 compute-0 kind_banach[414524]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:19:37 compute-0 kind_banach[414524]:             "name": "ceph_lv1",
Nov 25 17:19:37 compute-0 kind_banach[414524]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:19:37 compute-0 kind_banach[414524]:             "tags": {
Nov 25 17:19:37 compute-0 kind_banach[414524]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:19:37 compute-0 kind_banach[414524]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:19:37 compute-0 kind_banach[414524]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:19:37 compute-0 kind_banach[414524]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:19:37 compute-0 kind_banach[414524]:                 "ceph.cluster_name": "ceph",
Nov 25 17:19:37 compute-0 kind_banach[414524]:                 "ceph.crush_device_class": "",
Nov 25 17:19:37 compute-0 kind_banach[414524]:                 "ceph.encrypted": "0",
Nov 25 17:19:37 compute-0 kind_banach[414524]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:19:37 compute-0 kind_banach[414524]:                 "ceph.osd_id": "1",
Nov 25 17:19:37 compute-0 kind_banach[414524]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:19:37 compute-0 kind_banach[414524]:                 "ceph.type": "block",
Nov 25 17:19:37 compute-0 kind_banach[414524]:                 "ceph.vdo": "0"
Nov 25 17:19:37 compute-0 kind_banach[414524]:             },
Nov 25 17:19:37 compute-0 kind_banach[414524]:             "type": "block",
Nov 25 17:19:37 compute-0 kind_banach[414524]:             "vg_name": "ceph_vg1"
Nov 25 17:19:37 compute-0 kind_banach[414524]:         }
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.625 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:19:37 compute-0 kind_banach[414524]:     ],
Nov 25 17:19:37 compute-0 kind_banach[414524]:     "2": [
Nov 25 17:19:37 compute-0 kind_banach[414524]:         {
Nov 25 17:19:37 compute-0 kind_banach[414524]:             "devices": [
Nov 25 17:19:37 compute-0 kind_banach[414524]:                 "/dev/loop5"
Nov 25 17:19:37 compute-0 kind_banach[414524]:             ],
Nov 25 17:19:37 compute-0 kind_banach[414524]:             "lv_name": "ceph_lv2",
Nov 25 17:19:37 compute-0 kind_banach[414524]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:19:37 compute-0 kind_banach[414524]:             "lv_size": "21470642176",
Nov 25 17:19:37 compute-0 kind_banach[414524]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:19:37 compute-0 kind_banach[414524]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:19:37 compute-0 kind_banach[414524]:             "name": "ceph_lv2",
Nov 25 17:19:37 compute-0 kind_banach[414524]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:19:37 compute-0 kind_banach[414524]:             "tags": {
Nov 25 17:19:37 compute-0 kind_banach[414524]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:19:37 compute-0 kind_banach[414524]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:19:37 compute-0 kind_banach[414524]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:19:37 compute-0 kind_banach[414524]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:19:37 compute-0 kind_banach[414524]:                 "ceph.cluster_name": "ceph",
Nov 25 17:19:37 compute-0 kind_banach[414524]:                 "ceph.crush_device_class": "",
Nov 25 17:19:37 compute-0 kind_banach[414524]:                 "ceph.encrypted": "0",
Nov 25 17:19:37 compute-0 kind_banach[414524]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:19:37 compute-0 kind_banach[414524]:                 "ceph.osd_id": "2",
Nov 25 17:19:37 compute-0 kind_banach[414524]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:19:37 compute-0 kind_banach[414524]:                 "ceph.type": "block",
Nov 25 17:19:37 compute-0 kind_banach[414524]:                 "ceph.vdo": "0"
Nov 25 17:19:37 compute-0 kind_banach[414524]:             },
Nov 25 17:19:37 compute-0 kind_banach[414524]:             "type": "block",
Nov 25 17:19:37 compute-0 kind_banach[414524]:             "vg_name": "ceph_vg2"
Nov 25 17:19:37 compute-0 kind_banach[414524]:         }
Nov 25 17:19:37 compute-0 kind_banach[414524]:     ]
Nov 25 17:19:37 compute-0 kind_banach[414524]: }
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.629 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.630 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb342a143-48, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.631 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb342a143-48, col_values=(('external_ids', {'iface-id': 'b342a143-48a8-46f1-90fc-229fadeb167e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:48:d2:35', 'vm-uuid': '392384a1-1741-4504-b2c2-557420bbbbd0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:19:37 compute-0 NetworkManager[48891]: <info>  [1764091177.6980] manager: (tapb342a143-48): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/631)
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.697 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.700 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.706 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.707 254096 INFO os_vif [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:d2:35,bridge_name='br-int',has_traffic_filtering=True,id=b342a143-48a8-46f1-90fc-229fadeb167e,network=Network(77970d23-547a-4e3a-bddf-f4770a15bf81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb342a143-48')
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.708 254096 DEBUG nova.virt.libvirt.vif [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:19:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-473687599',display_name='tempest-TestGettingAddress-server-473687599',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-473687599',id=143,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEzqaO2W9qUVAVfLhHnbtZ9O0tM3aVdXiBKXRhRrnEjXChl2Hh7jScqfJN9XRIykUO2wAiKe7A8/q0zxul64iRZVmOh42YGZD0paqormAt73h0roulwz53Jgxaj06Lt+ZQ==',key_name='tempest-TestGettingAddress-737469986',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-avdd1di6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:19:25Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=392384a1-1741-4504-b2c2-557420bbbbd0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "address": "fa:16:3e:0f:88:e0", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0f:88e0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6eeae44-ea", "ovs_interfaceid": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.709 254096 DEBUG nova.network.os_vif_util [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "address": "fa:16:3e:0f:88:e0", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0f:88e0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6eeae44-ea", "ovs_interfaceid": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.709 254096 DEBUG nova.network.os_vif_util [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0f:88:e0,bridge_name='br-int',has_traffic_filtering=True,id=f6eeae44-ea00-4543-a1e0-9ce45fbc399f,network=Network(ad3526c9-ce3b-41ed-ae27-775dca6a1319),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6eeae44-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.710 254096 DEBUG os_vif [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:88:e0,bridge_name='br-int',has_traffic_filtering=True,id=f6eeae44-ea00-4543-a1e0-9ce45fbc399f,network=Network(ad3526c9-ce3b-41ed-ae27-775dca6a1319),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6eeae44-ea') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.710 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.711 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.711 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.713 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.713 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6eeae44-ea, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.714 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf6eeae44-ea, col_values=(('external_ids', {'iface-id': 'f6eeae44-ea00-4543-a1e0-9ce45fbc399f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0f:88:e0', 'vm-uuid': '392384a1-1741-4504-b2c2-557420bbbbd0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.715 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:19:37 compute-0 NetworkManager[48891]: <info>  [1764091177.7162] manager: (tapf6eeae44-ea): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/632)
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.717 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.723 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.724 254096 INFO os_vif [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:88:e0,bridge_name='br-int',has_traffic_filtering=True,id=f6eeae44-ea00-4543-a1e0-9ce45fbc399f,network=Network(ad3526c9-ce3b-41ed-ae27-775dca6a1319),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6eeae44-ea')
Nov 25 17:19:37 compute-0 systemd[1]: libpod-a0443103ae20310329a0fa7e4363b0b53a540f83eb35703c7cda695bce13e7b9.scope: Deactivated successfully.
Nov 25 17:19:37 compute-0 podman[414489]: 2025-11-25 17:19:37.729326698 +0000 UTC m=+1.101853505 container died a0443103ae20310329a0fa7e4363b0b53a540f83eb35703c7cda695bce13e7b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_banach, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 17:19:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-b2faddf4721c97530666cb2d771a8756641e64b902bb7a05c7a1ab99cc158165-merged.mount: Deactivated successfully.
Nov 25 17:19:37 compute-0 podman[414489]: 2025-11-25 17:19:37.804377008 +0000 UTC m=+1.176903825 container remove a0443103ae20310329a0fa7e4363b0b53a540f83eb35703c7cda695bce13e7b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_banach, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:19:37 compute-0 systemd[1]: libpod-conmon-a0443103ae20310329a0fa7e4363b0b53a540f83eb35703c7cda695bce13e7b9.scope: Deactivated successfully.
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.819 254096 DEBUG nova.virt.libvirt.driver [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.821 254096 DEBUG nova.virt.libvirt.driver [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.821 254096 DEBUG nova.virt.libvirt.driver [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No VIF found with MAC fa:16:3e:48:d2:35, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.822 254096 DEBUG nova.virt.libvirt.driver [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No VIF found with MAC fa:16:3e:0f:88:e0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.822 254096 INFO nova.virt.libvirt.driver [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Using config drive
Nov 25 17:19:37 compute-0 sudo[414379]: pam_unix(sudo:session): session closed for user root
Nov 25 17:19:37 compute-0 podman[414579]: 2025-11-25 17:19:37.857275678 +0000 UTC m=+0.096329642 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=multipathd, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 17:19:37 compute-0 podman[414583]: 2025-11-25 17:19:37.86871635 +0000 UTC m=+0.091329957 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 17:19:37 compute-0 nova_compute[254092]: 2025-11-25 17:19:37.871 254096 DEBUG nova.storage.rbd_utils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 392384a1-1741-4504-b2c2-557420bbbbd0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:19:37 compute-0 sudo[414662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:19:37 compute-0 podman[414599]: 2025-11-25 17:19:37.929432423 +0000 UTC m=+0.136093536 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 17:19:37 compute-0 sudo[414662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:19:37 compute-0 sudo[414662]: pam_unix(sudo:session): session closed for user root
Nov 25 17:19:38 compute-0 sudo[414697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:19:38 compute-0 sudo[414697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:19:38 compute-0 sudo[414697]: pam_unix(sudo:session): session closed for user root
Nov 25 17:19:38 compute-0 sudo[414722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:19:38 compute-0 sudo[414722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:19:38 compute-0 sudo[414722]: pam_unix(sudo:session): session closed for user root
Nov 25 17:19:38 compute-0 sudo[414747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:19:38 compute-0 sudo[414747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:19:38 compute-0 nova_compute[254092]: 2025-11-25 17:19:38.173 254096 INFO nova.virt.libvirt.driver [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Creating config drive at /var/lib/nova/instances/392384a1-1741-4504-b2c2-557420bbbbd0/disk.config
Nov 25 17:19:38 compute-0 nova_compute[254092]: 2025-11-25 17:19:38.178 254096 DEBUG oslo_concurrency.processutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/392384a1-1741-4504-b2c2-557420bbbbd0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppjfu4qcn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:19:38 compute-0 nova_compute[254092]: 2025-11-25 17:19:38.259 254096 DEBUG nova.network.neutron [req-d9a4af80-7066-4606-885c-db231b6195fc req-276330f0-d74e-4b84-ab84-270d221c4c96 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Updated VIF entry in instance network info cache for port f6eeae44-ea00-4543-a1e0-9ce45fbc399f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:19:38 compute-0 nova_compute[254092]: 2025-11-25 17:19:38.260 254096 DEBUG nova.network.neutron [req-d9a4af80-7066-4606-885c-db231b6195fc req-276330f0-d74e-4b84-ab84-270d221c4c96 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Updating instance_info_cache with network_info: [{"id": "b342a143-48a8-46f1-90fc-229fadeb167e", "address": "fa:16:3e:48:d2:35", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb342a143-48", "ovs_interfaceid": "b342a143-48a8-46f1-90fc-229fadeb167e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "address": "fa:16:3e:0f:88:e0", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0f:88e0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6eeae44-ea", "ovs_interfaceid": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:19:38 compute-0 nova_compute[254092]: 2025-11-25 17:19:38.275 254096 DEBUG oslo_concurrency.lockutils [req-d9a4af80-7066-4606-885c-db231b6195fc req-276330f0-d74e-4b84-ab84-270d221c4c96 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-392384a1-1741-4504-b2c2-557420bbbbd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:19:38 compute-0 nova_compute[254092]: 2025-11-25 17:19:38.342 254096 DEBUG oslo_concurrency.processutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/392384a1-1741-4504-b2c2-557420bbbbd0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppjfu4qcn" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:19:38 compute-0 ceph-mon[74985]: pgmap v2882: 321 pgs: 321 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:19:38 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1478714737' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:19:38 compute-0 nova_compute[254092]: 2025-11-25 17:19:38.385 254096 DEBUG nova.storage.rbd_utils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 392384a1-1741-4504-b2c2-557420bbbbd0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:19:38 compute-0 nova_compute[254092]: 2025-11-25 17:19:38.391 254096 DEBUG oslo_concurrency.processutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/392384a1-1741-4504-b2c2-557420bbbbd0/disk.config 392384a1-1741-4504-b2c2-557420bbbbd0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:19:38 compute-0 nova_compute[254092]: 2025-11-25 17:19:38.507 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:19:38 compute-0 nova_compute[254092]: 2025-11-25 17:19:38.580 254096 DEBUG oslo_concurrency.processutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/392384a1-1741-4504-b2c2-557420bbbbd0/disk.config 392384a1-1741-4504-b2c2-557420bbbbd0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:19:38 compute-0 nova_compute[254092]: 2025-11-25 17:19:38.581 254096 INFO nova.virt.libvirt.driver [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Deleting local config drive /var/lib/nova/instances/392384a1-1741-4504-b2c2-557420bbbbd0/disk.config because it was imported into RBD.
Nov 25 17:19:38 compute-0 podman[414851]: 2025-11-25 17:19:38.600347745 +0000 UTC m=+0.056076447 container create c37c7a1d6109cce467c0131797bb54a418fc7a3d0cc19638c3b251a6a17f23bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_meninsky, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:19:38 compute-0 systemd[1]: Started libpod-conmon-c37c7a1d6109cce467c0131797bb54a418fc7a3d0cc19638c3b251a6a17f23bc.scope.
Nov 25 17:19:38 compute-0 NetworkManager[48891]: <info>  [1764091178.6590] manager: (tapb342a143-48): new Tun device (/org/freedesktop/NetworkManager/Devices/633)
Nov 25 17:19:38 compute-0 kernel: tapb342a143-48: entered promiscuous mode
Nov 25 17:19:38 compute-0 ovn_controller[153477]: 2025-11-25T17:19:38Z|01531|binding|INFO|Claiming lport b342a143-48a8-46f1-90fc-229fadeb167e for this chassis.
Nov 25 17:19:38 compute-0 nova_compute[254092]: 2025-11-25 17:19:38.664 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:19:38 compute-0 ovn_controller[153477]: 2025-11-25T17:19:38Z|01532|binding|INFO|b342a143-48a8-46f1-90fc-229fadeb167e: Claiming fa:16:3e:48:d2:35 10.100.0.12
Nov 25 17:19:38 compute-0 podman[414851]: 2025-11-25 17:19:38.578199902 +0000 UTC m=+0.033928614 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:19:38 compute-0 nova_compute[254092]: 2025-11-25 17:19:38.672 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:19:38 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:19:38 compute-0 nova_compute[254092]: 2025-11-25 17:19:38.683 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:19:38 compute-0 kernel: tapf6eeae44-ea: entered promiscuous mode
Nov 25 17:19:38 compute-0 NetworkManager[48891]: <info>  [1764091178.6898] manager: (tapf6eeae44-ea): new Tun device (/org/freedesktop/NetworkManager/Devices/634)
Nov 25 17:19:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:38.692 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:d2:35 10.100.0.12'], port_security=['fa:16:3e:48:d2:35 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '392384a1-1741-4504-b2c2-557420bbbbd0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77970d23-547a-4e3a-bddf-f4770a15bf81', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd0d6c638-e8a2-4056-927f-3668170c4e08', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1deee20b-e3d5-4b5e-b63f-386bef188aec, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=b342a143-48a8-46f1-90fc-229fadeb167e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:19:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:38.693 163338 INFO neutron.agent.ovn.metadata.agent [-] Port b342a143-48a8-46f1-90fc-229fadeb167e in datapath 77970d23-547a-4e3a-bddf-f4770a15bf81 bound to our chassis
Nov 25 17:19:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:38.694 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 77970d23-547a-4e3a-bddf-f4770a15bf81
Nov 25 17:19:38 compute-0 podman[414851]: 2025-11-25 17:19:38.703464461 +0000 UTC m=+0.159193173 container init c37c7a1d6109cce467c0131797bb54a418fc7a3d0cc19638c3b251a6a17f23bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_meninsky, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Nov 25 17:19:38 compute-0 systemd-udevd[414891]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:19:38 compute-0 systemd-udevd[414890]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:19:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:38.707 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0df1bc1e-3e26-4b8f-b5d1-89e7c5cfc187]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:19:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:38.708 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap77970d23-51 in ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 17:19:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:38.711 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap77970d23-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 17:19:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:38.711 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bdf3fa46-4953-41fd-984a-933a9eb487c2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:19:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:38.713 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ac069815-3c97-43d4-9644-1bbcc22f5eb4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:19:38 compute-0 NetworkManager[48891]: <info>  [1764091178.7221] device (tapf6eeae44-ea): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:19:38 compute-0 NetworkManager[48891]: <info>  [1764091178.7239] device (tapf6eeae44-ea): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:19:38 compute-0 NetworkManager[48891]: <info>  [1764091178.7294] device (tapb342a143-48): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:19:38 compute-0 NetworkManager[48891]: <info>  [1764091178.7309] device (tapb342a143-48): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:19:38 compute-0 podman[414851]: 2025-11-25 17:19:38.731150035 +0000 UTC m=+0.186878707 container start c37c7a1d6109cce467c0131797bb54a418fc7a3d0cc19638c3b251a6a17f23bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_meninsky, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 17:19:38 compute-0 wizardly_meninsky[414875]: 167 167
Nov 25 17:19:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:38.730 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[9986febd-b0b3-4173-a0d2-f95795378847]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:19:38 compute-0 systemd[1]: libpod-c37c7a1d6109cce467c0131797bb54a418fc7a3d0cc19638c3b251a6a17f23bc.scope: Deactivated successfully.
Nov 25 17:19:38 compute-0 podman[414851]: 2025-11-25 17:19:38.735657778 +0000 UTC m=+0.191386480 container attach c37c7a1d6109cce467c0131797bb54a418fc7a3d0cc19638c3b251a6a17f23bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_meninsky, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:19:38 compute-0 podman[414851]: 2025-11-25 17:19:38.73687438 +0000 UTC m=+0.192603052 container died c37c7a1d6109cce467c0131797bb54a418fc7a3d0cc19638c3b251a6a17f23bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 17:19:38 compute-0 systemd-machined[216343]: New machine qemu-177-instance-0000008f.
Nov 25 17:19:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:38.762 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c85702f0-df2e-468d-8a75-c71d5d3e1788]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:19:38 compute-0 systemd[1]: Started Virtual Machine qemu-177-instance-0000008f.
Nov 25 17:19:38 compute-0 ovn_controller[153477]: 2025-11-25T17:19:38Z|01533|binding|INFO|Setting lport b342a143-48a8-46f1-90fc-229fadeb167e ovn-installed in OVS
Nov 25 17:19:38 compute-0 ovn_controller[153477]: 2025-11-25T17:19:38Z|01534|binding|INFO|Setting lport b342a143-48a8-46f1-90fc-229fadeb167e up in Southbound
Nov 25 17:19:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-d101ee9a56448ffcb8b461e034b2ed6e61c3eaf5823968b5eaa6726218d05e2a-merged.mount: Deactivated successfully.
Nov 25 17:19:38 compute-0 ovn_controller[153477]: 2025-11-25T17:19:38Z|01535|binding|INFO|Claiming lport f6eeae44-ea00-4543-a1e0-9ce45fbc399f for this chassis.
Nov 25 17:19:38 compute-0 ovn_controller[153477]: 2025-11-25T17:19:38Z|01536|binding|INFO|f6eeae44-ea00-4543-a1e0-9ce45fbc399f: Claiming fa:16:3e:0f:88:e0 2001:db8::f816:3eff:fe0f:88e0
Nov 25 17:19:38 compute-0 nova_compute[254092]: 2025-11-25 17:19:38.815 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:19:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:38.816 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[34d1a0e9-8f06-482e-bb95-cf47a2b37703]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:19:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:38.821 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:88:e0 2001:db8::f816:3eff:fe0f:88e0'], port_security=['fa:16:3e:0f:88:e0 2001:db8::f816:3eff:fe0f:88e0'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe0f:88e0/64', 'neutron:device_id': '392384a1-1741-4504-b2c2-557420bbbbd0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ad3526c9-ce3b-41ed-ae27-775dca6a1319', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd0d6c638-e8a2-4056-927f-3668170c4e08', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=40d92b3b-5fa0-43d2-8278-bfe8f143bfed, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=f6eeae44-ea00-4543-a1e0-9ce45fbc399f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:19:38 compute-0 ovn_controller[153477]: 2025-11-25T17:19:38Z|01537|binding|INFO|Setting lport f6eeae44-ea00-4543-a1e0-9ce45fbc399f ovn-installed in OVS
Nov 25 17:19:38 compute-0 ovn_controller[153477]: 2025-11-25T17:19:38Z|01538|binding|INFO|Setting lport f6eeae44-ea00-4543-a1e0-9ce45fbc399f up in Southbound
Nov 25 17:19:38 compute-0 nova_compute[254092]: 2025-11-25 17:19:38.828 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:19:38 compute-0 NetworkManager[48891]: <info>  [1764091178.8310] manager: (tap77970d23-50): new Veth device (/org/freedesktop/NetworkManager/Devices/635)
Nov 25 17:19:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:38.829 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[53c11709-a71c-499e-b959-486fc484d253]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:19:38 compute-0 podman[414851]: 2025-11-25 17:19:38.831235179 +0000 UTC m=+0.286963841 container remove c37c7a1d6109cce467c0131797bb54a418fc7a3d0cc19638c3b251a6a17f23bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_meninsky, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 17:19:38 compute-0 systemd[1]: libpod-conmon-c37c7a1d6109cce467c0131797bb54a418fc7a3d0cc19638c3b251a6a17f23bc.scope: Deactivated successfully.
Nov 25 17:19:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:38.870 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a087b47e-c5a1-496e-8cec-9717ad80bb20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:19:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:38.875 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f402471e-0296-4dfb-a3c9-be98354e7e9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:19:38 compute-0 NetworkManager[48891]: <info>  [1764091178.9031] device (tap77970d23-50): carrier: link connected
Nov 25 17:19:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:38.913 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8e2cc56b-cc94-4580-8074-b922b88a21ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:19:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:38.933 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9d8bacdc-77bf-4e0c-a6fb-095b97c2333e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77970d23-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:82:6c:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 447], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763649, 'reachable_time': 34708, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 414941, 'error': None, 'target': 'ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:19:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:38.951 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[46cc7785-e529-46fc-8b8b-8e8833df7e70]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe82:6c82'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763649, 'tstamp': 763649}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 414942, 'error': None, 'target': 'ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:19:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:38.968 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a01cd9d9-8f45-469c-8a19-9e483010494e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77970d23-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:82:6c:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 180, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 180, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 447], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763649, 'reachable_time': 34708, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 414944, 'error': None, 'target': 'ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:19:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2883: 321 pgs: 321 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:39.013 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[352f47af-29bb-4de6-adaf-2661b7faa53e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:19:39 compute-0 podman[414949]: 2025-11-25 17:19:39.042391236 +0000 UTC m=+0.054100833 container create 8ad630ce479021604911223cbd4b98880e2480acbd525b1f4ca2f044cc25e993 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 17:19:39 compute-0 systemd[1]: Started libpod-conmon-8ad630ce479021604911223cbd4b98880e2480acbd525b1f4ca2f044cc25e993.scope.
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:39.112 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0758d18b-131c-478c-8931-595fec44427a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:39.114 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77970d23-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:39.114 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:39.114 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap77970d23-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:19:39 compute-0 nova_compute[254092]: 2025-11-25 17:19:39.116 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:19:39 compute-0 kernel: tap77970d23-50: entered promiscuous mode
Nov 25 17:19:39 compute-0 NetworkManager[48891]: <info>  [1764091179.1174] manager: (tap77970d23-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/636)
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:39.119 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap77970d23-50, col_values=(('external_ids', {'iface-id': '56e5945a-607b-4b8d-baa7-f3eab82e874d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:19:39 compute-0 ovn_controller[153477]: 2025-11-25T17:19:39Z|01539|binding|INFO|Releasing lport 56e5945a-607b-4b8d-baa7-f3eab82e874d from this chassis (sb_readonly=0)
Nov 25 17:19:39 compute-0 nova_compute[254092]: 2025-11-25 17:19:39.120 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:19:39 compute-0 podman[414949]: 2025-11-25 17:19:39.025186348 +0000 UTC m=+0.036895965 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:19:39 compute-0 nova_compute[254092]: 2025-11-25 17:19:39.135 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:39.137 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/77970d23-547a-4e3a-bddf-f4770a15bf81.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/77970d23-547a-4e3a-bddf-f4770a15bf81.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 17:19:39 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:39.138 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5458c32b-e25d-41a5-8261-37ae8e2cbe9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:39.139 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]: global
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-77970d23-547a-4e3a-bddf-f4770a15bf81
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/77970d23-547a-4e3a-bddf-f4770a15bf81.pid.haproxy
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 77970d23-547a-4e3a-bddf-f4770a15bf81
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:39.139 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81', 'env', 'PROCESS_TAG=haproxy-77970d23-547a-4e3a-bddf-f4770a15bf81', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/77970d23-547a-4e3a-bddf-f4770a15bf81.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 17:19:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffbc0707dc7c2277efddd3337357f6b5c6c2e9d6863a686992defb49a8fb43ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:19:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffbc0707dc7c2277efddd3337357f6b5c6c2e9d6863a686992defb49a8fb43ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:19:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffbc0707dc7c2277efddd3337357f6b5c6c2e9d6863a686992defb49a8fb43ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:19:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffbc0707dc7c2277efddd3337357f6b5c6c2e9d6863a686992defb49a8fb43ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:19:39 compute-0 podman[414949]: 2025-11-25 17:19:39.161096478 +0000 UTC m=+0.172806075 container init 8ad630ce479021604911223cbd4b98880e2480acbd525b1f4ca2f044cc25e993 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mayer, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:19:39 compute-0 podman[414949]: 2025-11-25 17:19:39.17444891 +0000 UTC m=+0.186158507 container start 8ad630ce479021604911223cbd4b98880e2480acbd525b1f4ca2f044cc25e993 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mayer, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 17:19:39 compute-0 podman[414949]: 2025-11-25 17:19:39.177839964 +0000 UTC m=+0.189549611 container attach 8ad630ce479021604911223cbd4b98880e2480acbd525b1f4ca2f044cc25e993 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:19:39 compute-0 nova_compute[254092]: 2025-11-25 17:19:39.233 254096 DEBUG nova.compute.manager [req-0b06b061-321b-40ba-99b0-f576bee1befa req-824f6c48-df20-40c9-bb82-f1462168aa90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Received event network-vif-plugged-b342a143-48a8-46f1-90fc-229fadeb167e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:19:39 compute-0 nova_compute[254092]: 2025-11-25 17:19:39.234 254096 DEBUG oslo_concurrency.lockutils [req-0b06b061-321b-40ba-99b0-f576bee1befa req-824f6c48-df20-40c9-bb82-f1462168aa90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:19:39 compute-0 nova_compute[254092]: 2025-11-25 17:19:39.237 254096 DEBUG oslo_concurrency.lockutils [req-0b06b061-321b-40ba-99b0-f576bee1befa req-824f6c48-df20-40c9-bb82-f1462168aa90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:19:39 compute-0 nova_compute[254092]: 2025-11-25 17:19:39.237 254096 DEBUG oslo_concurrency.lockutils [req-0b06b061-321b-40ba-99b0-f576bee1befa req-824f6c48-df20-40c9-bb82-f1462168aa90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:19:39 compute-0 nova_compute[254092]: 2025-11-25 17:19:39.237 254096 DEBUG nova.compute.manager [req-0b06b061-321b-40ba-99b0-f576bee1befa req-824f6c48-df20-40c9-bb82-f1462168aa90 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Processing event network-vif-plugged-b342a143-48a8-46f1-90fc-229fadeb167e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 17:19:39 compute-0 nova_compute[254092]: 2025-11-25 17:19:39.249 254096 DEBUG nova.compute.manager [req-4d5b85f1-c38a-4516-994e-146a5a79402e req-751aac6e-9a0f-4c28-a39e-ca7b6a42179d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Received event network-vif-plugged-f6eeae44-ea00-4543-a1e0-9ce45fbc399f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:19:39 compute-0 nova_compute[254092]: 2025-11-25 17:19:39.249 254096 DEBUG oslo_concurrency.lockutils [req-4d5b85f1-c38a-4516-994e-146a5a79402e req-751aac6e-9a0f-4c28-a39e-ca7b6a42179d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:19:39 compute-0 nova_compute[254092]: 2025-11-25 17:19:39.250 254096 DEBUG oslo_concurrency.lockutils [req-4d5b85f1-c38a-4516-994e-146a5a79402e req-751aac6e-9a0f-4c28-a39e-ca7b6a42179d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:19:39 compute-0 nova_compute[254092]: 2025-11-25 17:19:39.250 254096 DEBUG oslo_concurrency.lockutils [req-4d5b85f1-c38a-4516-994e-146a5a79402e req-751aac6e-9a0f-4c28-a39e-ca7b6a42179d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:19:39 compute-0 nova_compute[254092]: 2025-11-25 17:19:39.250 254096 DEBUG nova.compute.manager [req-4d5b85f1-c38a-4516-994e-146a5a79402e req-751aac6e-9a0f-4c28-a39e-ca7b6a42179d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Processing event network-vif-plugged-f6eeae44-ea00-4543-a1e0-9ce45fbc399f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 17:19:39 compute-0 nova_compute[254092]: 2025-11-25 17:19:39.290 254096 DEBUG nova.compute.manager [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 17:19:39 compute-0 nova_compute[254092]: 2025-11-25 17:19:39.291 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091179.289546, 392384a1-1741-4504-b2c2-557420bbbbd0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:19:39 compute-0 nova_compute[254092]: 2025-11-25 17:19:39.292 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] VM Started (Lifecycle Event)
Nov 25 17:19:39 compute-0 nova_compute[254092]: 2025-11-25 17:19:39.299 254096 DEBUG nova.virt.libvirt.driver [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 17:19:39 compute-0 nova_compute[254092]: 2025-11-25 17:19:39.302 254096 INFO nova.virt.libvirt.driver [-] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Instance spawned successfully.
Nov 25 17:19:39 compute-0 nova_compute[254092]: 2025-11-25 17:19:39.303 254096 DEBUG nova.virt.libvirt.driver [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 17:19:39 compute-0 nova_compute[254092]: 2025-11-25 17:19:39.353 254096 DEBUG nova.virt.libvirt.driver [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:19:39 compute-0 nova_compute[254092]: 2025-11-25 17:19:39.353 254096 DEBUG nova.virt.libvirt.driver [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:19:39 compute-0 nova_compute[254092]: 2025-11-25 17:19:39.354 254096 DEBUG nova.virt.libvirt.driver [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:19:39 compute-0 nova_compute[254092]: 2025-11-25 17:19:39.354 254096 DEBUG nova.virt.libvirt.driver [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:19:39 compute-0 nova_compute[254092]: 2025-11-25 17:19:39.355 254096 DEBUG nova.virt.libvirt.driver [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:19:39 compute-0 nova_compute[254092]: 2025-11-25 17:19:39.355 254096 DEBUG nova.virt.libvirt.driver [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:19:39 compute-0 nova_compute[254092]: 2025-11-25 17:19:39.359 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:19:39 compute-0 nova_compute[254092]: 2025-11-25 17:19:39.363 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:19:39 compute-0 nova_compute[254092]: 2025-11-25 17:19:39.395 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:19:39 compute-0 nova_compute[254092]: 2025-11-25 17:19:39.395 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091179.2904263, 392384a1-1741-4504-b2c2-557420bbbbd0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:19:39 compute-0 nova_compute[254092]: 2025-11-25 17:19:39.395 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] VM Paused (Lifecycle Event)
Nov 25 17:19:39 compute-0 nova_compute[254092]: 2025-11-25 17:19:39.414 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:19:39 compute-0 nova_compute[254092]: 2025-11-25 17:19:39.417 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091179.2966878, 392384a1-1741-4504-b2c2-557420bbbbd0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:19:39 compute-0 nova_compute[254092]: 2025-11-25 17:19:39.418 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] VM Resumed (Lifecycle Event)
Nov 25 17:19:39 compute-0 nova_compute[254092]: 2025-11-25 17:19:39.427 254096 INFO nova.compute.manager [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Took 13.88 seconds to spawn the instance on the hypervisor.
Nov 25 17:19:39 compute-0 nova_compute[254092]: 2025-11-25 17:19:39.428 254096 DEBUG nova.compute.manager [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:19:39 compute-0 nova_compute[254092]: 2025-11-25 17:19:39.434 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:19:39 compute-0 nova_compute[254092]: 2025-11-25 17:19:39.437 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:19:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:19:39 compute-0 nova_compute[254092]: 2025-11-25 17:19:39.462 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:19:39 compute-0 nova_compute[254092]: 2025-11-25 17:19:39.495 254096 INFO nova.compute.manager [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Took 14.81 seconds to build instance.
Nov 25 17:19:39 compute-0 nova_compute[254092]: 2025-11-25 17:19:39.511 254096 DEBUG oslo_concurrency.lockutils [None req-cafffe71-82fe-427c-9ca4-f264133f8deb a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.923s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:19:39 compute-0 podman[415043]: 2025-11-25 17:19:39.633225538 +0000 UTC m=+0.093000912 container create e9e5d2e4494488d2c7db478ccd3bcc82bb7d52806a882ad409ab63baf126e249 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 17:19:39 compute-0 podman[415043]: 2025-11-25 17:19:39.57928737 +0000 UTC m=+0.039062774 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 17:19:39 compute-0 systemd[1]: Started libpod-conmon-e9e5d2e4494488d2c7db478ccd3bcc82bb7d52806a882ad409ab63baf126e249.scope.
Nov 25 17:19:39 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:19:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5e3d52ff50bf53cf2fb737f96eefa352cab2347302bd90a6108c9336c866d1c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 17:19:39 compute-0 podman[415043]: 2025-11-25 17:19:39.756824072 +0000 UTC m=+0.216599456 container init e9e5d2e4494488d2c7db478ccd3bcc82bb7d52806a882ad409ab63baf126e249 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 17:19:39 compute-0 podman[415043]: 2025-11-25 17:19:39.762127177 +0000 UTC m=+0.221902531 container start e9e5d2e4494488d2c7db478ccd3bcc82bb7d52806a882ad409ab63baf126e249 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 25 17:19:39 compute-0 neutron-haproxy-ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81[415059]: [NOTICE]   (415063) : New worker (415065) forked
Nov 25 17:19:39 compute-0 neutron-haproxy-ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81[415059]: [NOTICE]   (415063) : Loading success.
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:39.826 163338 INFO neutron.agent.ovn.metadata.agent [-] Port f6eeae44-ea00-4543-a1e0-9ce45fbc399f in datapath ad3526c9-ce3b-41ed-ae27-775dca6a1319 unbound from our chassis
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:39.827 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ad3526c9-ce3b-41ed-ae27-775dca6a1319
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:39.840 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[251e1c59-c2e1-42c2-b9ff-59bfc4cdecd2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:39.841 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapad3526c9-c1 in ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:39.842 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapad3526c9-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:39.842 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[542e35a9-7b15-4331-8310-6e626892962f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:39.843 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[25ce0788-6a7e-494c-abea-213ae5f837e5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:39.857 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[83968ab1-05fe-42b7-aa2b-e30ba359badf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:39.882 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3e402245-d78e-4896-948e-fb9432e1e263]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:39.921 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e3ea783e-18a3-4da9-8d54-ee89b6e44e76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:39.929 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a3361739-faad-4f82-83ab-d322014e317c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:19:39 compute-0 NetworkManager[48891]: <info>  [1764091179.9305] manager: (tapad3526c9-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/637)
Nov 25 17:19:39 compute-0 systemd-udevd[414919]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:39.974 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8a160e0a-9d54-4743-9794-fefbdecd43b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:19:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:39.977 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a67e047f-d498-4ed5-ae34-cb43dbe5b5b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:19:40 compute-0 NetworkManager[48891]: <info>  [1764091180.0037] device (tapad3526c9-c0): carrier: link connected
Nov 25 17:19:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:40.014 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a33a1e99-7b6f-4fa5-bcd1-0fe686064041]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:19:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:40.033 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[83061ffa-d3dd-4740-b116-98ed30886670]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapad3526c9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:23:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 448], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763759, 'reachable_time': 26526, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 415099, 'error': None, 'target': 'ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:19:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:40.051 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6759224f-a31e-41f7-8750-2943205c9b9e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe63:2348'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763759, 'tstamp': 763759}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 415101, 'error': None, 'target': 'ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:19:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:40.067 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[50a99359-1c07-4c96-a729-9a80b10ed85b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapad3526c9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:23:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 448], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763759, 'reachable_time': 26526, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 415105, 'error': None, 'target': 'ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:19:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:19:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:19:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:19:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:19:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:19:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:19:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:40.099 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[acf6dd7b-01a1-4ee7-90d0-9bc7ce1cc19f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:19:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:40.136 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[245ef586-7176-47c7-b555-98e04b115be2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:19:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:40.137 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapad3526c9-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:19:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:40.138 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:19:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:40.138 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapad3526c9-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:19:40 compute-0 NetworkManager[48891]: <info>  [1764091180.1412] manager: (tapad3526c9-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/638)
Nov 25 17:19:40 compute-0 kernel: tapad3526c9-c0: entered promiscuous mode
Nov 25 17:19:40 compute-0 nova_compute[254092]: 2025-11-25 17:19:40.141 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:19:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:40.145 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapad3526c9-c0, col_values=(('external_ids', {'iface-id': '824643fe-f8f7-44ad-9711-a38080a49171'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:19:40 compute-0 ovn_controller[153477]: 2025-11-25T17:19:40Z|01540|binding|INFO|Releasing lport 824643fe-f8f7-44ad-9711-a38080a49171 from this chassis (sb_readonly=0)
Nov 25 17:19:40 compute-0 nova_compute[254092]: 2025-11-25 17:19:40.147 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:19:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:40.148 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ad3526c9-ce3b-41ed-ae27-775dca6a1319.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ad3526c9-ce3b-41ed-ae27-775dca6a1319.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 17:19:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:40.149 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[60833f83-3831-43b1-981f-44a536613267]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:19:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:40.150 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 17:19:40 compute-0 ovn_metadata_agent[163333]: global
Nov 25 17:19:40 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 17:19:40 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-ad3526c9-ce3b-41ed-ae27-775dca6a1319
Nov 25 17:19:40 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 17:19:40 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 17:19:40 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 17:19:40 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/ad3526c9-ce3b-41ed-ae27-775dca6a1319.pid.haproxy
Nov 25 17:19:40 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 17:19:40 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:19:40 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 17:19:40 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 17:19:40 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 17:19:40 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 17:19:40 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 17:19:40 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 17:19:40 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 17:19:40 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 17:19:40 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 17:19:40 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 17:19:40 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 17:19:40 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 17:19:40 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 17:19:40 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:19:40 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:19:40 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 17:19:40 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 17:19:40 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 17:19:40 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID ad3526c9-ce3b-41ed-ae27-775dca6a1319
Nov 25 17:19:40 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 17:19:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:40.151 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319', 'env', 'PROCESS_TAG=haproxy-ad3526c9-ce3b-41ed-ae27-775dca6a1319', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ad3526c9-ce3b-41ed-ae27-775dca6a1319.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 17:19:40 compute-0 nova_compute[254092]: 2025-11-25 17:19:40.159 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:19:40 compute-0 trusting_mayer[414990]: {
Nov 25 17:19:40 compute-0 trusting_mayer[414990]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:19:40 compute-0 trusting_mayer[414990]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:19:40 compute-0 trusting_mayer[414990]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:19:40 compute-0 trusting_mayer[414990]:         "osd_id": 1,
Nov 25 17:19:40 compute-0 trusting_mayer[414990]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:19:40 compute-0 trusting_mayer[414990]:         "type": "bluestore"
Nov 25 17:19:40 compute-0 trusting_mayer[414990]:     },
Nov 25 17:19:40 compute-0 trusting_mayer[414990]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:19:40 compute-0 trusting_mayer[414990]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:19:40 compute-0 trusting_mayer[414990]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:19:40 compute-0 trusting_mayer[414990]:         "osd_id": 2,
Nov 25 17:19:40 compute-0 trusting_mayer[414990]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:19:40 compute-0 trusting_mayer[414990]:         "type": "bluestore"
Nov 25 17:19:40 compute-0 trusting_mayer[414990]:     },
Nov 25 17:19:40 compute-0 trusting_mayer[414990]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:19:40 compute-0 trusting_mayer[414990]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:19:40 compute-0 trusting_mayer[414990]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:19:40 compute-0 trusting_mayer[414990]:         "osd_id": 0,
Nov 25 17:19:40 compute-0 trusting_mayer[414990]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:19:40 compute-0 trusting_mayer[414990]:         "type": "bluestore"
Nov 25 17:19:40 compute-0 trusting_mayer[414990]:     }
Nov 25 17:19:40 compute-0 trusting_mayer[414990]: }
Nov 25 17:19:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:19:40
Nov 25 17:19:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:19:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:19:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.log', 'images', '.mgr', 'backups', '.rgw.root', 'default.rgw.meta', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta']
Nov 25 17:19:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:19:40 compute-0 systemd[1]: libpod-8ad630ce479021604911223cbd4b98880e2480acbd525b1f4ca2f044cc25e993.scope: Deactivated successfully.
Nov 25 17:19:40 compute-0 systemd[1]: libpod-8ad630ce479021604911223cbd4b98880e2480acbd525b1f4ca2f044cc25e993.scope: Consumed 1.002s CPU time.
Nov 25 17:19:40 compute-0 podman[414949]: 2025-11-25 17:19:40.221240034 +0000 UTC m=+1.232949631 container died 8ad630ce479021604911223cbd4b98880e2480acbd525b1f4ca2f044cc25e993 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mayer, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 17:19:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-ffbc0707dc7c2277efddd3337357f6b5c6c2e9d6863a686992defb49a8fb43ae-merged.mount: Deactivated successfully.
Nov 25 17:19:40 compute-0 podman[414949]: 2025-11-25 17:19:40.285336698 +0000 UTC m=+1.297046295 container remove 8ad630ce479021604911223cbd4b98880e2480acbd525b1f4ca2f044cc25e993 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mayer, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:19:40 compute-0 systemd[1]: libpod-conmon-8ad630ce479021604911223cbd4b98880e2480acbd525b1f4ca2f044cc25e993.scope: Deactivated successfully.
Nov 25 17:19:40 compute-0 sudo[414747]: pam_unix(sudo:session): session closed for user root
Nov 25 17:19:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:19:40 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:19:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:19:40 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:19:40 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 4d0bf10f-91c6-4d3b-b055-2f24d7a74275 does not exist
Nov 25 17:19:40 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 100e28f3-6f00-44a0-949e-e9098e4b98d3 does not exist
Nov 25 17:19:40 compute-0 ceph-mon[74985]: pgmap v2883: 321 pgs: 321 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Nov 25 17:19:40 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:19:40 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:19:40 compute-0 sudo[415135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:19:40 compute-0 sudo[415135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:19:40 compute-0 sudo[415135]: pam_unix(sudo:session): session closed for user root
Nov 25 17:19:40 compute-0 sudo[415165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:19:40 compute-0 sudo[415165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:19:40 compute-0 sudo[415165]: pam_unix(sudo:session): session closed for user root
Nov 25 17:19:40 compute-0 podman[415204]: 2025-11-25 17:19:40.545670685 +0000 UTC m=+0.052555902 container create 05624e28c214e5d054291b4a01c86bab37b02489ae878b0dcb1cffcde899d1c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 17:19:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:19:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:19:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:19:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:19:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:19:40 compute-0 systemd[1]: Started libpod-conmon-05624e28c214e5d054291b4a01c86bab37b02489ae878b0dcb1cffcde899d1c6.scope.
Nov 25 17:19:40 compute-0 podman[415204]: 2025-11-25 17:19:40.519144003 +0000 UTC m=+0.026029240 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 17:19:40 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:19:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0883bcdd26bb7ed72d7d0ceebbe0bf6761be99f4ffb7fcfd366aae7f03f2ca46/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 17:19:40 compute-0 podman[415204]: 2025-11-25 17:19:40.640723482 +0000 UTC m=+0.147608719 container init 05624e28c214e5d054291b4a01c86bab37b02489ae878b0dcb1cffcde899d1c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 17:19:40 compute-0 podman[415204]: 2025-11-25 17:19:40.649123271 +0000 UTC m=+0.156008488 container start 05624e28c214e5d054291b4a01c86bab37b02489ae878b0dcb1cffcde899d1c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 25 17:19:40 compute-0 neutron-haproxy-ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319[415220]: [NOTICE]   (415224) : New worker (415226) forked
Nov 25 17:19:40 compute-0 neutron-haproxy-ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319[415220]: [NOTICE]   (415224) : Loading success.
Nov 25 17:19:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:19:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:19:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:19:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:19:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:19:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2884: 321 pgs: 321 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 468 KiB/s rd, 1.8 MiB/s wr, 52 op/s
Nov 25 17:19:41 compute-0 nova_compute[254092]: 2025-11-25 17:19:41.332 254096 DEBUG nova.compute.manager [req-15878500-29c0-46d3-acc6-f4ee4d0ad4b3 req-71cd7808-e74d-4905-8a79-5f596ac0e386 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Received event network-vif-plugged-b342a143-48a8-46f1-90fc-229fadeb167e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:19:41 compute-0 nova_compute[254092]: 2025-11-25 17:19:41.333 254096 DEBUG oslo_concurrency.lockutils [req-15878500-29c0-46d3-acc6-f4ee4d0ad4b3 req-71cd7808-e74d-4905-8a79-5f596ac0e386 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:19:41 compute-0 nova_compute[254092]: 2025-11-25 17:19:41.333 254096 DEBUG oslo_concurrency.lockutils [req-15878500-29c0-46d3-acc6-f4ee4d0ad4b3 req-71cd7808-e74d-4905-8a79-5f596ac0e386 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:19:41 compute-0 nova_compute[254092]: 2025-11-25 17:19:41.333 254096 DEBUG oslo_concurrency.lockutils [req-15878500-29c0-46d3-acc6-f4ee4d0ad4b3 req-71cd7808-e74d-4905-8a79-5f596ac0e386 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:19:41 compute-0 nova_compute[254092]: 2025-11-25 17:19:41.333 254096 DEBUG nova.compute.manager [req-15878500-29c0-46d3-acc6-f4ee4d0ad4b3 req-71cd7808-e74d-4905-8a79-5f596ac0e386 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] No waiting events found dispatching network-vif-plugged-b342a143-48a8-46f1-90fc-229fadeb167e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:19:41 compute-0 nova_compute[254092]: 2025-11-25 17:19:41.334 254096 WARNING nova.compute.manager [req-15878500-29c0-46d3-acc6-f4ee4d0ad4b3 req-71cd7808-e74d-4905-8a79-5f596ac0e386 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Received unexpected event network-vif-plugged-b342a143-48a8-46f1-90fc-229fadeb167e for instance with vm_state active and task_state None.
Nov 25 17:19:41 compute-0 nova_compute[254092]: 2025-11-25 17:19:41.336 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:19:41 compute-0 nova_compute[254092]: 2025-11-25 17:19:41.399 254096 DEBUG nova.compute.manager [req-a82b4efd-c610-48bd-97bf-63230a2b63f2 req-34e7c674-2828-4aa4-a259-e474d57539d5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Received event network-vif-plugged-f6eeae44-ea00-4543-a1e0-9ce45fbc399f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:19:41 compute-0 nova_compute[254092]: 2025-11-25 17:19:41.400 254096 DEBUG oslo_concurrency.lockutils [req-a82b4efd-c610-48bd-97bf-63230a2b63f2 req-34e7c674-2828-4aa4-a259-e474d57539d5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:19:41 compute-0 nova_compute[254092]: 2025-11-25 17:19:41.400 254096 DEBUG oslo_concurrency.lockutils [req-a82b4efd-c610-48bd-97bf-63230a2b63f2 req-34e7c674-2828-4aa4-a259-e474d57539d5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:19:41 compute-0 nova_compute[254092]: 2025-11-25 17:19:41.400 254096 DEBUG oslo_concurrency.lockutils [req-a82b4efd-c610-48bd-97bf-63230a2b63f2 req-34e7c674-2828-4aa4-a259-e474d57539d5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:19:41 compute-0 nova_compute[254092]: 2025-11-25 17:19:41.400 254096 DEBUG nova.compute.manager [req-a82b4efd-c610-48bd-97bf-63230a2b63f2 req-34e7c674-2828-4aa4-a259-e474d57539d5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] No waiting events found dispatching network-vif-plugged-f6eeae44-ea00-4543-a1e0-9ce45fbc399f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:19:41 compute-0 nova_compute[254092]: 2025-11-25 17:19:41.400 254096 WARNING nova.compute.manager [req-a82b4efd-c610-48bd-97bf-63230a2b63f2 req-34e7c674-2828-4aa4-a259-e474d57539d5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Received unexpected event network-vif-plugged-f6eeae44-ea00-4543-a1e0-9ce45fbc399f for instance with vm_state active and task_state None.
Nov 25 17:19:42 compute-0 ceph-mon[74985]: pgmap v2884: 321 pgs: 321 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 468 KiB/s rd, 1.8 MiB/s wr, 52 op/s
Nov 25 17:19:42 compute-0 nova_compute[254092]: 2025-11-25 17:19:42.716 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:19:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2885: 321 pgs: 321 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 451 KiB/s rd, 12 KiB/s wr, 25 op/s
Nov 25 17:19:43 compute-0 nova_compute[254092]: 2025-11-25 17:19:43.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:19:44 compute-0 ceph-mon[74985]: pgmap v2885: 321 pgs: 321 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 451 KiB/s rd, 12 KiB/s wr, 25 op/s
Nov 25 17:19:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:19:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2886: 321 pgs: 321 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 451 KiB/s rd, 12 KiB/s wr, 25 op/s
Nov 25 17:19:45 compute-0 nova_compute[254092]: 2025-11-25 17:19:45.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:19:45 compute-0 nova_compute[254092]: 2025-11-25 17:19:45.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:19:45 compute-0 nova_compute[254092]: 2025-11-25 17:19:45.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:19:45 compute-0 nova_compute[254092]: 2025-11-25 17:19:45.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:19:45 compute-0 nova_compute[254092]: 2025-11-25 17:19:45.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:19:45 compute-0 nova_compute[254092]: 2025-11-25 17:19:45.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:19:45 compute-0 nova_compute[254092]: 2025-11-25 17:19:45.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:19:45 compute-0 nova_compute[254092]: 2025-11-25 17:19:45.522 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:19:45 compute-0 nova_compute[254092]: 2025-11-25 17:19:45.522 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:19:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:19:45 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4277260093' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:19:45 compute-0 nova_compute[254092]: 2025-11-25 17:19:45.985 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:19:46 compute-0 nova_compute[254092]: 2025-11-25 17:19:46.049 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:19:46 compute-0 nova_compute[254092]: 2025-11-25 17:19:46.050 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:19:46 compute-0 nova_compute[254092]: 2025-11-25 17:19:46.206 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:19:46 compute-0 nova_compute[254092]: 2025-11-25 17:19:46.207 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3468MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:19:46 compute-0 nova_compute[254092]: 2025-11-25 17:19:46.207 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:19:46 compute-0 nova_compute[254092]: 2025-11-25 17:19:46.207 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:19:46 compute-0 nova_compute[254092]: 2025-11-25 17:19:46.278 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 392384a1-1741-4504-b2c2-557420bbbbd0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 17:19:46 compute-0 nova_compute[254092]: 2025-11-25 17:19:46.279 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:19:46 compute-0 nova_compute[254092]: 2025-11-25 17:19:46.279 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:19:46 compute-0 nova_compute[254092]: 2025-11-25 17:19:46.320 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:19:46 compute-0 nova_compute[254092]: 2025-11-25 17:19:46.357 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:19:46 compute-0 ceph-mon[74985]: pgmap v2886: 321 pgs: 321 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 451 KiB/s rd, 12 KiB/s wr, 25 op/s
Nov 25 17:19:46 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4277260093' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:19:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:19:46 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1050400313' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:19:46 compute-0 nova_compute[254092]: 2025-11-25 17:19:46.757 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:19:46 compute-0 nova_compute[254092]: 2025-11-25 17:19:46.767 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:19:46 compute-0 nova_compute[254092]: 2025-11-25 17:19:46.784 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:19:46 compute-0 nova_compute[254092]: 2025-11-25 17:19:46.811 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:19:46 compute-0 nova_compute[254092]: 2025-11-25 17:19:46.811 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:19:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2887: 321 pgs: 321 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:19:47 compute-0 NetworkManager[48891]: <info>  [1764091187.3749] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/639)
Nov 25 17:19:47 compute-0 ovn_controller[153477]: 2025-11-25T17:19:47Z|01541|binding|INFO|Releasing lport 824643fe-f8f7-44ad-9711-a38080a49171 from this chassis (sb_readonly=0)
Nov 25 17:19:47 compute-0 ovn_controller[153477]: 2025-11-25T17:19:47Z|01542|binding|INFO|Releasing lport 56e5945a-607b-4b8d-baa7-f3eab82e874d from this chassis (sb_readonly=0)
Nov 25 17:19:47 compute-0 NetworkManager[48891]: <info>  [1764091187.3759] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/640)
Nov 25 17:19:47 compute-0 nova_compute[254092]: 2025-11-25 17:19:47.376 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:19:47 compute-0 ovn_controller[153477]: 2025-11-25T17:19:47Z|01543|binding|INFO|Releasing lport 824643fe-f8f7-44ad-9711-a38080a49171 from this chassis (sb_readonly=0)
Nov 25 17:19:47 compute-0 ovn_controller[153477]: 2025-11-25T17:19:47Z|01544|binding|INFO|Releasing lport 56e5945a-607b-4b8d-baa7-f3eab82e874d from this chassis (sb_readonly=0)
Nov 25 17:19:47 compute-0 nova_compute[254092]: 2025-11-25 17:19:47.405 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:19:47 compute-0 nova_compute[254092]: 2025-11-25 17:19:47.413 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:19:47 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1050400313' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:19:47 compute-0 nova_compute[254092]: 2025-11-25 17:19:47.717 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:19:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:47.745 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=58, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=57) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:19:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:47.747 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 17:19:47 compute-0 nova_compute[254092]: 2025-11-25 17:19:47.747 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:19:47 compute-0 nova_compute[254092]: 2025-11-25 17:19:47.956 254096 DEBUG nova.compute.manager [req-0c8ed9f8-f11c-4c0c-a2bc-af80eaed26a6 req-b21661da-205a-4a27-a594-0d01d0b8d5eb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Received event network-changed-b342a143-48a8-46f1-90fc-229fadeb167e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:19:47 compute-0 nova_compute[254092]: 2025-11-25 17:19:47.957 254096 DEBUG nova.compute.manager [req-0c8ed9f8-f11c-4c0c-a2bc-af80eaed26a6 req-b21661da-205a-4a27-a594-0d01d0b8d5eb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Refreshing instance network info cache due to event network-changed-b342a143-48a8-46f1-90fc-229fadeb167e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:19:47 compute-0 nova_compute[254092]: 2025-11-25 17:19:47.958 254096 DEBUG oslo_concurrency.lockutils [req-0c8ed9f8-f11c-4c0c-a2bc-af80eaed26a6 req-b21661da-205a-4a27-a594-0d01d0b8d5eb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-392384a1-1741-4504-b2c2-557420bbbbd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:19:47 compute-0 nova_compute[254092]: 2025-11-25 17:19:47.958 254096 DEBUG oslo_concurrency.lockutils [req-0c8ed9f8-f11c-4c0c-a2bc-af80eaed26a6 req-b21661da-205a-4a27-a594-0d01d0b8d5eb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-392384a1-1741-4504-b2c2-557420bbbbd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:19:47 compute-0 nova_compute[254092]: 2025-11-25 17:19:47.959 254096 DEBUG nova.network.neutron [req-0c8ed9f8-f11c-4c0c-a2bc-af80eaed26a6 req-b21661da-205a-4a27-a594-0d01d0b8d5eb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Refreshing network info cache for port b342a143-48a8-46f1-90fc-229fadeb167e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:19:48 compute-0 ceph-mon[74985]: pgmap v2887: 321 pgs: 321 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:19:48 compute-0 nova_compute[254092]: 2025-11-25 17:19:48.808 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:19:48 compute-0 nova_compute[254092]: 2025-11-25 17:19:48.808 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:19:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2888: 321 pgs: 321 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:19:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:19:49 compute-0 ceph-mon[74985]: pgmap v2888: 321 pgs: 321 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:19:49 compute-0 nova_compute[254092]: 2025-11-25 17:19:49.909 254096 DEBUG nova.network.neutron [req-0c8ed9f8-f11c-4c0c-a2bc-af80eaed26a6 req-b21661da-205a-4a27-a594-0d01d0b8d5eb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Updated VIF entry in instance network info cache for port b342a143-48a8-46f1-90fc-229fadeb167e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:19:49 compute-0 nova_compute[254092]: 2025-11-25 17:19:49.910 254096 DEBUG nova.network.neutron [req-0c8ed9f8-f11c-4c0c-a2bc-af80eaed26a6 req-b21661da-205a-4a27-a594-0d01d0b8d5eb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Updating instance_info_cache with network_info: [{"id": "b342a143-48a8-46f1-90fc-229fadeb167e", "address": "fa:16:3e:48:d2:35", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb342a143-48", "ovs_interfaceid": "b342a143-48a8-46f1-90fc-229fadeb167e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "address": "fa:16:3e:0f:88:e0", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0f:88e0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6eeae44-ea", "ovs_interfaceid": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:19:49 compute-0 nova_compute[254092]: 2025-11-25 17:19:49.929 254096 DEBUG oslo_concurrency.lockutils [req-0c8ed9f8-f11c-4c0c-a2bc-af80eaed26a6 req-b21661da-205a-4a27-a594-0d01d0b8d5eb a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-392384a1-1741-4504-b2c2-557420bbbbd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:19:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2889: 321 pgs: 321 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 25 17:19:51 compute-0 nova_compute[254092]: 2025-11-25 17:19:51.375 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:19:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:19:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:19:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:19:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:19:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Nov 25 17:19:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:19:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:19:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:19:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:19:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:19:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 17:19:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:19:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:19:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:19:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:19:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:19:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:19:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:19:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:19:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:19:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:19:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:19:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:19:52 compute-0 ceph-mon[74985]: pgmap v2889: 321 pgs: 321 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Nov 25 17:19:52 compute-0 nova_compute[254092]: 2025-11-25 17:19:52.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:19:52 compute-0 nova_compute[254092]: 2025-11-25 17:19:52.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:19:52 compute-0 nova_compute[254092]: 2025-11-25 17:19:52.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:19:52 compute-0 ovn_controller[153477]: 2025-11-25T17:19:52Z|00182|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:48:d2:35 10.100.0.12
Nov 25 17:19:52 compute-0 ovn_controller[153477]: 2025-11-25T17:19:52Z|00183|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:48:d2:35 10.100.0.12
Nov 25 17:19:52 compute-0 nova_compute[254092]: 2025-11-25 17:19:52.720 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:19:52 compute-0 nova_compute[254092]: 2025-11-25 17:19:52.904 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-392384a1-1741-4504-b2c2-557420bbbbd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:19:52 compute-0 nova_compute[254092]: 2025-11-25 17:19:52.904 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-392384a1-1741-4504-b2c2-557420bbbbd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:19:52 compute-0 nova_compute[254092]: 2025-11-25 17:19:52.904 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 17:19:52 compute-0 nova_compute[254092]: 2025-11-25 17:19:52.904 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 392384a1-1741-4504-b2c2-557420bbbbd0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:19:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2890: 321 pgs: 321 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 48 op/s
Nov 25 17:19:54 compute-0 ceph-mon[74985]: pgmap v2890: 321 pgs: 321 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 48 op/s
Nov 25 17:19:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:19:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2891: 321 pgs: 321 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 48 op/s
Nov 25 17:19:55 compute-0 nova_compute[254092]: 2025-11-25 17:19:55.269 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Updating instance_info_cache with network_info: [{"id": "b342a143-48a8-46f1-90fc-229fadeb167e", "address": "fa:16:3e:48:d2:35", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb342a143-48", "ovs_interfaceid": "b342a143-48a8-46f1-90fc-229fadeb167e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "address": "fa:16:3e:0f:88:e0", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0f:88e0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6eeae44-ea", "ovs_interfaceid": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:19:55 compute-0 nova_compute[254092]: 2025-11-25 17:19:55.284 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-392384a1-1741-4504-b2c2-557420bbbbd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:19:55 compute-0 nova_compute[254092]: 2025-11-25 17:19:55.285 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 17:19:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:19:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/568605804' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:19:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:19:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/568605804' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:19:56 compute-0 ceph-mon[74985]: pgmap v2891: 321 pgs: 321 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 48 op/s
Nov 25 17:19:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/568605804' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:19:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/568605804' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:19:56 compute-0 nova_compute[254092]: 2025-11-25 17:19:56.379 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:19:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2892: 321 pgs: 321 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 111 op/s
Nov 25 17:19:57 compute-0 nova_compute[254092]: 2025-11-25 17:19:57.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:19:57 compute-0 nova_compute[254092]: 2025-11-25 17:19:57.515 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:19:57 compute-0 nova_compute[254092]: 2025-11-25 17:19:57.723 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:19:57 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:19:57.749 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '58'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:19:58 compute-0 ceph-mon[74985]: pgmap v2892: 321 pgs: 321 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 111 op/s
Nov 25 17:19:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2893: 321 pgs: 321 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:19:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:20:00 compute-0 ceph-mon[74985]: pgmap v2893: 321 pgs: 321 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:20:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2894: 321 pgs: 321 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:20:01 compute-0 nova_compute[254092]: 2025-11-25 17:20:01.382 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:02 compute-0 ceph-mon[74985]: pgmap v2894: 321 pgs: 321 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:20:02 compute-0 nova_compute[254092]: 2025-11-25 17:20:02.726 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2895: 321 pgs: 321 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:20:04 compute-0 ceph-mon[74985]: pgmap v2895: 321 pgs: 321 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:20:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:20:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2896: 321 pgs: 321 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:20:06 compute-0 ceph-mon[74985]: pgmap v2896: 321 pgs: 321 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:20:06 compute-0 nova_compute[254092]: 2025-11-25 17:20:06.383 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2897: 321 pgs: 321 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:20:07 compute-0 nova_compute[254092]: 2025-11-25 17:20:07.729 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:08 compute-0 ceph-mon[74985]: pgmap v2897: 321 pgs: 321 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:20:08 compute-0 nova_compute[254092]: 2025-11-25 17:20:08.196 254096 DEBUG oslo_concurrency.lockutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:20:08 compute-0 nova_compute[254092]: 2025-11-25 17:20:08.196 254096 DEBUG oslo_concurrency.lockutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:20:08 compute-0 nova_compute[254092]: 2025-11-25 17:20:08.212 254096 DEBUG nova.compute.manager [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 17:20:08 compute-0 nova_compute[254092]: 2025-11-25 17:20:08.273 254096 DEBUG oslo_concurrency.lockutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:20:08 compute-0 nova_compute[254092]: 2025-11-25 17:20:08.273 254096 DEBUG oslo_concurrency.lockutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:20:08 compute-0 nova_compute[254092]: 2025-11-25 17:20:08.281 254096 DEBUG nova.virt.hardware [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 17:20:08 compute-0 nova_compute[254092]: 2025-11-25 17:20:08.281 254096 INFO nova.compute.claims [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Claim successful on node compute-0.ctlplane.example.com
Nov 25 17:20:08 compute-0 nova_compute[254092]: 2025-11-25 17:20:08.446 254096 DEBUG oslo_concurrency.processutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:20:08 compute-0 podman[415284]: 2025-11-25 17:20:08.656426151 +0000 UTC m=+0.069627966 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Nov 25 17:20:08 compute-0 podman[415283]: 2025-11-25 17:20:08.669285731 +0000 UTC m=+0.077953753 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 25 17:20:08 compute-0 podman[415285]: 2025-11-25 17:20:08.70086993 +0000 UTC m=+0.109242794 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 17:20:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:20:08 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2247401621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:20:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2898: 321 pgs: 321 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Nov 25 17:20:08 compute-0 nova_compute[254092]: 2025-11-25 17:20:08.992 254096 DEBUG oslo_concurrency.processutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:20:08 compute-0 nova_compute[254092]: 2025-11-25 17:20:08.999 254096 DEBUG nova.compute.provider_tree [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:20:09 compute-0 nova_compute[254092]: 2025-11-25 17:20:09.009 254096 DEBUG nova.scheduler.client.report [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:20:09 compute-0 nova_compute[254092]: 2025-11-25 17:20:09.025 254096 DEBUG oslo_concurrency.lockutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:20:09 compute-0 nova_compute[254092]: 2025-11-25 17:20:09.025 254096 DEBUG nova.compute.manager [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 17:20:09 compute-0 nova_compute[254092]: 2025-11-25 17:20:09.065 254096 DEBUG nova.compute.manager [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 17:20:09 compute-0 nova_compute[254092]: 2025-11-25 17:20:09.065 254096 DEBUG nova.network.neutron [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 17:20:09 compute-0 nova_compute[254092]: 2025-11-25 17:20:09.079 254096 INFO nova.virt.libvirt.driver [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 17:20:09 compute-0 nova_compute[254092]: 2025-11-25 17:20:09.094 254096 DEBUG nova.compute.manager [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 17:20:09 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2247401621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:20:09 compute-0 nova_compute[254092]: 2025-11-25 17:20:09.182 254096 DEBUG nova.compute.manager [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 17:20:09 compute-0 nova_compute[254092]: 2025-11-25 17:20:09.183 254096 DEBUG nova.virt.libvirt.driver [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 17:20:09 compute-0 nova_compute[254092]: 2025-11-25 17:20:09.184 254096 INFO nova.virt.libvirt.driver [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Creating image(s)
Nov 25 17:20:09 compute-0 nova_compute[254092]: 2025-11-25 17:20:09.206 254096 DEBUG nova.storage.rbd_utils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 9be9cbb4-878e-4fce-be7c-44b49480ff0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:20:09 compute-0 nova_compute[254092]: 2025-11-25 17:20:09.229 254096 DEBUG nova.storage.rbd_utils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 9be9cbb4-878e-4fce-be7c-44b49480ff0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:20:09 compute-0 nova_compute[254092]: 2025-11-25 17:20:09.252 254096 DEBUG nova.storage.rbd_utils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 9be9cbb4-878e-4fce-be7c-44b49480ff0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:20:09 compute-0 nova_compute[254092]: 2025-11-25 17:20:09.255 254096 DEBUG oslo_concurrency.processutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:20:09 compute-0 nova_compute[254092]: 2025-11-25 17:20:09.307 254096 DEBUG nova.policy [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a647bb22c9a54e5fa9ddfff588126693', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '67b7f8e8aac6441f992a182f8d893100', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 17:20:09 compute-0 nova_compute[254092]: 2025-11-25 17:20:09.353 254096 DEBUG oslo_concurrency.processutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:20:09 compute-0 nova_compute[254092]: 2025-11-25 17:20:09.355 254096 DEBUG oslo_concurrency.lockutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:20:09 compute-0 nova_compute[254092]: 2025-11-25 17:20:09.356 254096 DEBUG oslo_concurrency.lockutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:20:09 compute-0 nova_compute[254092]: 2025-11-25 17:20:09.356 254096 DEBUG oslo_concurrency.lockutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:20:09 compute-0 nova_compute[254092]: 2025-11-25 17:20:09.376 254096 DEBUG nova.storage.rbd_utils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 9be9cbb4-878e-4fce-be7c-44b49480ff0e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:20:09 compute-0 nova_compute[254092]: 2025-11-25 17:20:09.380 254096 DEBUG oslo_concurrency.processutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 9be9cbb4-878e-4fce-be7c-44b49480ff0e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:20:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:20:09 compute-0 nova_compute[254092]: 2025-11-25 17:20:09.721 254096 DEBUG oslo_concurrency.processutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 9be9cbb4-878e-4fce-be7c-44b49480ff0e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.342s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:20:09 compute-0 nova_compute[254092]: 2025-11-25 17:20:09.781 254096 DEBUG nova.storage.rbd_utils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] resizing rbd image 9be9cbb4-878e-4fce-be7c-44b49480ff0e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 17:20:09 compute-0 nova_compute[254092]: 2025-11-25 17:20:09.890 254096 DEBUG nova.objects.instance [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'migration_context' on Instance uuid 9be9cbb4-878e-4fce-be7c-44b49480ff0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:20:09 compute-0 nova_compute[254092]: 2025-11-25 17:20:09.901 254096 DEBUG nova.virt.libvirt.driver [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 17:20:09 compute-0 nova_compute[254092]: 2025-11-25 17:20:09.901 254096 DEBUG nova.virt.libvirt.driver [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Ensure instance console log exists: /var/lib/nova/instances/9be9cbb4-878e-4fce-be7c-44b49480ff0e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 17:20:09 compute-0 nova_compute[254092]: 2025-11-25 17:20:09.901 254096 DEBUG oslo_concurrency.lockutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:20:09 compute-0 nova_compute[254092]: 2025-11-25 17:20:09.902 254096 DEBUG oslo_concurrency.lockutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:20:09 compute-0 nova_compute[254092]: 2025-11-25 17:20:09.902 254096 DEBUG oslo_concurrency.lockutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:20:10 compute-0 nova_compute[254092]: 2025-11-25 17:20:10.024 254096 DEBUG nova.network.neutron [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Successfully created port: 67a238a8-a6f3-4b0f-b4da-7800dcf79375 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 17:20:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:20:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:20:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:20:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:20:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:20:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:20:10 compute-0 ceph-mon[74985]: pgmap v2898: 321 pgs: 321 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Nov 25 17:20:10 compute-0 nova_compute[254092]: 2025-11-25 17:20:10.508 254096 DEBUG nova.network.neutron [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Successfully created port: 0d7b29be-145f-4598-af6d-8fec1624b66c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 17:20:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2899: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:20:11 compute-0 nova_compute[254092]: 2025-11-25 17:20:11.173 254096 DEBUG nova.network.neutron [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Successfully updated port: 67a238a8-a6f3-4b0f-b4da-7800dcf79375 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:20:11 compute-0 nova_compute[254092]: 2025-11-25 17:20:11.249 254096 DEBUG nova.compute.manager [req-fbd03001-984c-4a2b-9653-6ba01342f00f req-0690a7a0-3945-4d98-9ff3-61dd3bf14c7f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Received event network-changed-67a238a8-a6f3-4b0f-b4da-7800dcf79375 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:20:11 compute-0 nova_compute[254092]: 2025-11-25 17:20:11.249 254096 DEBUG nova.compute.manager [req-fbd03001-984c-4a2b-9653-6ba01342f00f req-0690a7a0-3945-4d98-9ff3-61dd3bf14c7f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Refreshing instance network info cache due to event network-changed-67a238a8-a6f3-4b0f-b4da-7800dcf79375. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:20:11 compute-0 nova_compute[254092]: 2025-11-25 17:20:11.250 254096 DEBUG oslo_concurrency.lockutils [req-fbd03001-984c-4a2b-9653-6ba01342f00f req-0690a7a0-3945-4d98-9ff3-61dd3bf14c7f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-9be9cbb4-878e-4fce-be7c-44b49480ff0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:20:11 compute-0 nova_compute[254092]: 2025-11-25 17:20:11.250 254096 DEBUG oslo_concurrency.lockutils [req-fbd03001-984c-4a2b-9653-6ba01342f00f req-0690a7a0-3945-4d98-9ff3-61dd3bf14c7f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-9be9cbb4-878e-4fce-be7c-44b49480ff0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:20:11 compute-0 nova_compute[254092]: 2025-11-25 17:20:11.251 254096 DEBUG nova.network.neutron [req-fbd03001-984c-4a2b-9653-6ba01342f00f req-0690a7a0-3945-4d98-9ff3-61dd3bf14c7f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Refreshing network info cache for port 67a238a8-a6f3-4b0f-b4da-7800dcf79375 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:20:11 compute-0 nova_compute[254092]: 2025-11-25 17:20:11.419 254096 DEBUG nova.network.neutron [req-fbd03001-984c-4a2b-9653-6ba01342f00f req-0690a7a0-3945-4d98-9ff3-61dd3bf14c7f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:20:11 compute-0 nova_compute[254092]: 2025-11-25 17:20:11.430 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:12 compute-0 nova_compute[254092]: 2025-11-25 17:20:12.005 254096 DEBUG nova.network.neutron [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Successfully updated port: 0d7b29be-145f-4598-af6d-8fec1624b66c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:20:12 compute-0 nova_compute[254092]: 2025-11-25 17:20:12.007 254096 DEBUG nova.network.neutron [req-fbd03001-984c-4a2b-9653-6ba01342f00f req-0690a7a0-3945-4d98-9ff3-61dd3bf14c7f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:20:12 compute-0 nova_compute[254092]: 2025-11-25 17:20:12.016 254096 DEBUG oslo_concurrency.lockutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "refresh_cache-9be9cbb4-878e-4fce-be7c-44b49480ff0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:20:12 compute-0 nova_compute[254092]: 2025-11-25 17:20:12.017 254096 DEBUG oslo_concurrency.lockutils [req-fbd03001-984c-4a2b-9653-6ba01342f00f req-0690a7a0-3945-4d98-9ff3-61dd3bf14c7f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-9be9cbb4-878e-4fce-be7c-44b49480ff0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:20:12 compute-0 nova_compute[254092]: 2025-11-25 17:20:12.017 254096 DEBUG oslo_concurrency.lockutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquired lock "refresh_cache-9be9cbb4-878e-4fce-be7c-44b49480ff0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:20:12 compute-0 nova_compute[254092]: 2025-11-25 17:20:12.017 254096 DEBUG nova.network.neutron [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 17:20:12 compute-0 nova_compute[254092]: 2025-11-25 17:20:12.153 254096 DEBUG nova.network.neutron [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:20:12 compute-0 ceph-mon[74985]: pgmap v2899: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:20:12 compute-0 nova_compute[254092]: 2025-11-25 17:20:12.731 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2900: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:20:13 compute-0 nova_compute[254092]: 2025-11-25 17:20:13.349 254096 DEBUG nova.compute.manager [req-71921be3-fe0e-4ce8-8cec-d06fc830647d req-335beb8b-0df8-4b1a-b6af-9f96cb4779fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Received event network-changed-0d7b29be-145f-4598-af6d-8fec1624b66c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:20:13 compute-0 nova_compute[254092]: 2025-11-25 17:20:13.350 254096 DEBUG nova.compute.manager [req-71921be3-fe0e-4ce8-8cec-d06fc830647d req-335beb8b-0df8-4b1a-b6af-9f96cb4779fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Refreshing instance network info cache due to event network-changed-0d7b29be-145f-4598-af6d-8fec1624b66c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:20:13 compute-0 nova_compute[254092]: 2025-11-25 17:20:13.350 254096 DEBUG oslo_concurrency.lockutils [req-71921be3-fe0e-4ce8-8cec-d06fc830647d req-335beb8b-0df8-4b1a-b6af-9f96cb4779fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-9be9cbb4-878e-4fce-be7c-44b49480ff0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:20:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:13.657 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:20:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:13.657 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:20:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:13.658 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:20:14 compute-0 ceph-mon[74985]: pgmap v2900: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:20:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:20:14 compute-0 nova_compute[254092]: 2025-11-25 17:20:14.942 254096 DEBUG nova.network.neutron [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Updating instance_info_cache with network_info: [{"id": "67a238a8-a6f3-4b0f-b4da-7800dcf79375", "address": "fa:16:3e:dd:c2:78", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67a238a8-a6", "ovs_interfaceid": "67a238a8-a6f3-4b0f-b4da-7800dcf79375", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0d7b29be-145f-4598-af6d-8fec1624b66c", "address": "fa:16:3e:53:c7:1b", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe53:c71b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d7b29be-14", "ovs_interfaceid": "0d7b29be-145f-4598-af6d-8fec1624b66c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:20:14 compute-0 nova_compute[254092]: 2025-11-25 17:20:14.968 254096 DEBUG oslo_concurrency.lockutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Releasing lock "refresh_cache-9be9cbb4-878e-4fce-be7c-44b49480ff0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:20:14 compute-0 nova_compute[254092]: 2025-11-25 17:20:14.969 254096 DEBUG nova.compute.manager [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Instance network_info: |[{"id": "67a238a8-a6f3-4b0f-b4da-7800dcf79375", "address": "fa:16:3e:dd:c2:78", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67a238a8-a6", "ovs_interfaceid": "67a238a8-a6f3-4b0f-b4da-7800dcf79375", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0d7b29be-145f-4598-af6d-8fec1624b66c", "address": "fa:16:3e:53:c7:1b", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe53:c71b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d7b29be-14", "ovs_interfaceid": "0d7b29be-145f-4598-af6d-8fec1624b66c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 17:20:14 compute-0 nova_compute[254092]: 2025-11-25 17:20:14.969 254096 DEBUG oslo_concurrency.lockutils [req-71921be3-fe0e-4ce8-8cec-d06fc830647d req-335beb8b-0df8-4b1a-b6af-9f96cb4779fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-9be9cbb4-878e-4fce-be7c-44b49480ff0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:20:14 compute-0 nova_compute[254092]: 2025-11-25 17:20:14.969 254096 DEBUG nova.network.neutron [req-71921be3-fe0e-4ce8-8cec-d06fc830647d req-335beb8b-0df8-4b1a-b6af-9f96cb4779fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Refreshing network info cache for port 0d7b29be-145f-4598-af6d-8fec1624b66c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:20:14 compute-0 nova_compute[254092]: 2025-11-25 17:20:14.975 254096 DEBUG nova.virt.libvirt.driver [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Start _get_guest_xml network_info=[{"id": "67a238a8-a6f3-4b0f-b4da-7800dcf79375", "address": "fa:16:3e:dd:c2:78", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67a238a8-a6", "ovs_interfaceid": "67a238a8-a6f3-4b0f-b4da-7800dcf79375", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0d7b29be-145f-4598-af6d-8fec1624b66c", "address": "fa:16:3e:53:c7:1b", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe53:c71b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d7b29be-14", "ovs_interfaceid": "0d7b29be-145f-4598-af6d-8fec1624b66c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 17:20:14 compute-0 nova_compute[254092]: 2025-11-25 17:20:14.982 254096 WARNING nova.virt.libvirt.driver [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:20:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2901: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:20:14 compute-0 nova_compute[254092]: 2025-11-25 17:20:14.995 254096 DEBUG nova.virt.libvirt.host [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 17:20:14 compute-0 nova_compute[254092]: 2025-11-25 17:20:14.997 254096 DEBUG nova.virt.libvirt.host [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 17:20:15 compute-0 nova_compute[254092]: 2025-11-25 17:20:15.002 254096 DEBUG nova.virt.libvirt.host [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 17:20:15 compute-0 nova_compute[254092]: 2025-11-25 17:20:15.003 254096 DEBUG nova.virt.libvirt.host [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 17:20:15 compute-0 nova_compute[254092]: 2025-11-25 17:20:15.004 254096 DEBUG nova.virt.libvirt.driver [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 17:20:15 compute-0 nova_compute[254092]: 2025-11-25 17:20:15.004 254096 DEBUG nova.virt.hardware [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 17:20:15 compute-0 nova_compute[254092]: 2025-11-25 17:20:15.005 254096 DEBUG nova.virt.hardware [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 17:20:15 compute-0 nova_compute[254092]: 2025-11-25 17:20:15.006 254096 DEBUG nova.virt.hardware [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 17:20:15 compute-0 nova_compute[254092]: 2025-11-25 17:20:15.006 254096 DEBUG nova.virt.hardware [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 17:20:15 compute-0 nova_compute[254092]: 2025-11-25 17:20:15.006 254096 DEBUG nova.virt.hardware [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 17:20:15 compute-0 nova_compute[254092]: 2025-11-25 17:20:15.007 254096 DEBUG nova.virt.hardware [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 17:20:15 compute-0 nova_compute[254092]: 2025-11-25 17:20:15.007 254096 DEBUG nova.virt.hardware [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 17:20:15 compute-0 nova_compute[254092]: 2025-11-25 17:20:15.007 254096 DEBUG nova.virt.hardware [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 17:20:15 compute-0 nova_compute[254092]: 2025-11-25 17:20:15.007 254096 DEBUG nova.virt.hardware [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 17:20:15 compute-0 nova_compute[254092]: 2025-11-25 17:20:15.008 254096 DEBUG nova.virt.hardware [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 17:20:15 compute-0 nova_compute[254092]: 2025-11-25 17:20:15.008 254096 DEBUG nova.virt.hardware [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 17:20:15 compute-0 nova_compute[254092]: 2025-11-25 17:20:15.011 254096 DEBUG oslo_concurrency.processutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:20:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:20:15 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2769234872' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:20:15 compute-0 nova_compute[254092]: 2025-11-25 17:20:15.484 254096 DEBUG oslo_concurrency.processutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:20:15 compute-0 nova_compute[254092]: 2025-11-25 17:20:15.515 254096 DEBUG nova.storage.rbd_utils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 9be9cbb4-878e-4fce-be7c-44b49480ff0e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:20:15 compute-0 nova_compute[254092]: 2025-11-25 17:20:15.520 254096 DEBUG oslo_concurrency.processutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:20:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:20:15 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1527014632' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:20:15 compute-0 nova_compute[254092]: 2025-11-25 17:20:15.995 254096 DEBUG oslo_concurrency.processutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:20:15 compute-0 nova_compute[254092]: 2025-11-25 17:20:15.997 254096 DEBUG nova.virt.libvirt.vif [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:20:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-247846949',display_name='tempest-TestGettingAddress-server-247846949',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-247846949',id=144,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEzqaO2W9qUVAVfLhHnbtZ9O0tM3aVdXiBKXRhRrnEjXChl2Hh7jScqfJN9XRIykUO2wAiKe7A8/q0zxul64iRZVmOh42YGZD0paqormAt73h0roulwz53Jgxaj06Lt+ZQ==',key_name='tempest-TestGettingAddress-737469986',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-v5mcc982',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:20:09Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=9be9cbb4-878e-4fce-be7c-44b49480ff0e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "67a238a8-a6f3-4b0f-b4da-7800dcf79375", "address": "fa:16:3e:dd:c2:78", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67a238a8-a6", "ovs_interfaceid": "67a238a8-a6f3-4b0f-b4da-7800dcf79375", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:20:15 compute-0 nova_compute[254092]: 2025-11-25 17:20:15.998 254096 DEBUG nova.network.os_vif_util [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "67a238a8-a6f3-4b0f-b4da-7800dcf79375", "address": "fa:16:3e:dd:c2:78", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67a238a8-a6", "ovs_interfaceid": "67a238a8-a6f3-4b0f-b4da-7800dcf79375", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:20:15 compute-0 nova_compute[254092]: 2025-11-25 17:20:15.999 254096 DEBUG nova.network.os_vif_util [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:c2:78,bridge_name='br-int',has_traffic_filtering=True,id=67a238a8-a6f3-4b0f-b4da-7800dcf79375,network=Network(77970d23-547a-4e3a-bddf-f4770a15bf81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67a238a8-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.000 254096 DEBUG nova.virt.libvirt.vif [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:20:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-247846949',display_name='tempest-TestGettingAddress-server-247846949',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-247846949',id=144,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEzqaO2W9qUVAVfLhHnbtZ9O0tM3aVdXiBKXRhRrnEjXChl2Hh7jScqfJN9XRIykUO2wAiKe7A8/q0zxul64iRZVmOh42YGZD0paqormAt73h0roulwz53Jgxaj06Lt+ZQ==',key_name='tempest-TestGettingAddress-737469986',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-v5mcc982',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:20:09Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=9be9cbb4-878e-4fce-be7c-44b49480ff0e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0d7b29be-145f-4598-af6d-8fec1624b66c", "address": "fa:16:3e:53:c7:1b", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe53:c71b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d7b29be-14", "ovs_interfaceid": "0d7b29be-145f-4598-af6d-8fec1624b66c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.000 254096 DEBUG nova.network.os_vif_util [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "0d7b29be-145f-4598-af6d-8fec1624b66c", "address": "fa:16:3e:53:c7:1b", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe53:c71b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d7b29be-14", "ovs_interfaceid": "0d7b29be-145f-4598-af6d-8fec1624b66c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.000 254096 DEBUG nova.network.os_vif_util [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:c7:1b,bridge_name='br-int',has_traffic_filtering=True,id=0d7b29be-145f-4598-af6d-8fec1624b66c,network=Network(ad3526c9-ce3b-41ed-ae27-775dca6a1319),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d7b29be-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.001 254096 DEBUG nova.objects.instance [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9be9cbb4-878e-4fce-be7c-44b49480ff0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.015 254096 DEBUG nova.virt.libvirt.driver [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] End _get_guest_xml xml=<domain type="kvm">
Nov 25 17:20:16 compute-0 nova_compute[254092]:   <uuid>9be9cbb4-878e-4fce-be7c-44b49480ff0e</uuid>
Nov 25 17:20:16 compute-0 nova_compute[254092]:   <name>instance-00000090</name>
Nov 25 17:20:16 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 17:20:16 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 17:20:16 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:20:16 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:       <nova:name>tempest-TestGettingAddress-server-247846949</nova:name>
Nov 25 17:20:16 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 17:20:14</nova:creationTime>
Nov 25 17:20:16 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 17:20:16 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 17:20:16 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 17:20:16 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 17:20:16 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:20:16 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 17:20:16 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 17:20:16 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 17:20:16 compute-0 nova_compute[254092]:         <nova:user uuid="a647bb22c9a54e5fa9ddfff588126693">tempest-TestGettingAddress-207202764-project-member</nova:user>
Nov 25 17:20:16 compute-0 nova_compute[254092]:         <nova:project uuid="67b7f8e8aac6441f992a182f8d893100">tempest-TestGettingAddress-207202764</nova:project>
Nov 25 17:20:16 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 17:20:16 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 17:20:16 compute-0 nova_compute[254092]:         <nova:port uuid="67a238a8-a6f3-4b0f-b4da-7800dcf79375">
Nov 25 17:20:16 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 17:20:16 compute-0 nova_compute[254092]:         <nova:port uuid="0d7b29be-145f-4598-af6d-8fec1624b66c">
Nov 25 17:20:16 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe53:c71b" ipVersion="6"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 17:20:16 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 17:20:16 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:20:16 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 17:20:16 compute-0 nova_compute[254092]:     <system>
Nov 25 17:20:16 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 17:20:16 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 17:20:16 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:20:16 compute-0 nova_compute[254092]:       <entry name="serial">9be9cbb4-878e-4fce-be7c-44b49480ff0e</entry>
Nov 25 17:20:16 compute-0 nova_compute[254092]:       <entry name="uuid">9be9cbb4-878e-4fce-be7c-44b49480ff0e</entry>
Nov 25 17:20:16 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     </system>
Nov 25 17:20:16 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:20:16 compute-0 nova_compute[254092]:   <os>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:   </os>
Nov 25 17:20:16 compute-0 nova_compute[254092]:   <features>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:   </features>
Nov 25 17:20:16 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 17:20:16 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:20:16 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 17:20:16 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:20:16 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 17:20:16 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/9be9cbb4-878e-4fce-be7c-44b49480ff0e_disk">
Nov 25 17:20:16 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:       </source>
Nov 25 17:20:16 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:20:16 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:20:16 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 17:20:16 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/9be9cbb4-878e-4fce-be7c-44b49480ff0e_disk.config">
Nov 25 17:20:16 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:       </source>
Nov 25 17:20:16 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:20:16 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:20:16 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 17:20:16 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:dd:c2:78"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:       <target dev="tap67a238a8-a6"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 17:20:16 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:53:c7:1b"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:       <target dev="tap0d7b29be-14"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 17:20:16 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/9be9cbb4-878e-4fce-be7c-44b49480ff0e/console.log" append="off"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     <video>
Nov 25 17:20:16 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     </video>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 17:20:16 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 17:20:16 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 17:20:16 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:20:16 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:20:16 compute-0 nova_compute[254092]: </domain>
Nov 25 17:20:16 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.016 254096 DEBUG nova.compute.manager [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Preparing to wait for external event network-vif-plugged-67a238a8-a6f3-4b0f-b4da-7800dcf79375 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.017 254096 DEBUG oslo_concurrency.lockutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.017 254096 DEBUG oslo_concurrency.lockutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.017 254096 DEBUG oslo_concurrency.lockutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.018 254096 DEBUG nova.compute.manager [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Preparing to wait for external event network-vif-plugged-0d7b29be-145f-4598-af6d-8fec1624b66c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.018 254096 DEBUG oslo_concurrency.lockutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.018 254096 DEBUG oslo_concurrency.lockutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.018 254096 DEBUG oslo_concurrency.lockutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.019 254096 DEBUG nova.virt.libvirt.vif [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:20:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-247846949',display_name='tempest-TestGettingAddress-server-247846949',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-247846949',id=144,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEzqaO2W9qUVAVfLhHnbtZ9O0tM3aVdXiBKXRhRrnEjXChl2Hh7jScqfJN9XRIykUO2wAiKe7A8/q0zxul64iRZVmOh42YGZD0paqormAt73h0roulwz53Jgxaj06Lt+ZQ==',key_name='tempest-TestGettingAddress-737469986',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-v5mcc982',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:20:09Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=9be9cbb4-878e-4fce-be7c-44b49480ff0e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "67a238a8-a6f3-4b0f-b4da-7800dcf79375", "address": "fa:16:3e:dd:c2:78", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67a238a8-a6", "ovs_interfaceid": "67a238a8-a6f3-4b0f-b4da-7800dcf79375", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.020 254096 DEBUG nova.network.os_vif_util [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "67a238a8-a6f3-4b0f-b4da-7800dcf79375", "address": "fa:16:3e:dd:c2:78", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67a238a8-a6", "ovs_interfaceid": "67a238a8-a6f3-4b0f-b4da-7800dcf79375", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.020 254096 DEBUG nova.network.os_vif_util [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:c2:78,bridge_name='br-int',has_traffic_filtering=True,id=67a238a8-a6f3-4b0f-b4da-7800dcf79375,network=Network(77970d23-547a-4e3a-bddf-f4770a15bf81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67a238a8-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.021 254096 DEBUG os_vif [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:c2:78,bridge_name='br-int',has_traffic_filtering=True,id=67a238a8-a6f3-4b0f-b4da-7800dcf79375,network=Network(77970d23-547a-4e3a-bddf-f4770a15bf81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67a238a8-a6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.021 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.022 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.022 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.026 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.026 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap67a238a8-a6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.027 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap67a238a8-a6, col_values=(('external_ids', {'iface-id': '67a238a8-a6f3-4b0f-b4da-7800dcf79375', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dd:c2:78', 'vm-uuid': '9be9cbb4-878e-4fce-be7c-44b49480ff0e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.028 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:16 compute-0 NetworkManager[48891]: <info>  [1764091216.0298] manager: (tap67a238a8-a6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/641)
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.030 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.037 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.037 254096 INFO os_vif [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:c2:78,bridge_name='br-int',has_traffic_filtering=True,id=67a238a8-a6f3-4b0f-b4da-7800dcf79375,network=Network(77970d23-547a-4e3a-bddf-f4770a15bf81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67a238a8-a6')
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.038 254096 DEBUG nova.virt.libvirt.vif [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:20:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-247846949',display_name='tempest-TestGettingAddress-server-247846949',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-247846949',id=144,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEzqaO2W9qUVAVfLhHnbtZ9O0tM3aVdXiBKXRhRrnEjXChl2Hh7jScqfJN9XRIykUO2wAiKe7A8/q0zxul64iRZVmOh42YGZD0paqormAt73h0roulwz53Jgxaj06Lt+ZQ==',key_name='tempest-TestGettingAddress-737469986',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-v5mcc982',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:20:09Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=9be9cbb4-878e-4fce-be7c-44b49480ff0e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0d7b29be-145f-4598-af6d-8fec1624b66c", "address": "fa:16:3e:53:c7:1b", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe53:c71b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d7b29be-14", "ovs_interfaceid": "0d7b29be-145f-4598-af6d-8fec1624b66c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.039 254096 DEBUG nova.network.os_vif_util [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "0d7b29be-145f-4598-af6d-8fec1624b66c", "address": "fa:16:3e:53:c7:1b", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe53:c71b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d7b29be-14", "ovs_interfaceid": "0d7b29be-145f-4598-af6d-8fec1624b66c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.040 254096 DEBUG nova.network.os_vif_util [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:c7:1b,bridge_name='br-int',has_traffic_filtering=True,id=0d7b29be-145f-4598-af6d-8fec1624b66c,network=Network(ad3526c9-ce3b-41ed-ae27-775dca6a1319),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d7b29be-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.040 254096 DEBUG os_vif [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:c7:1b,bridge_name='br-int',has_traffic_filtering=True,id=0d7b29be-145f-4598-af6d-8fec1624b66c,network=Network(ad3526c9-ce3b-41ed-ae27-775dca6a1319),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d7b29be-14') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.040 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.041 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.041 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.044 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.044 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0d7b29be-14, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.045 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0d7b29be-14, col_values=(('external_ids', {'iface-id': '0d7b29be-145f-4598-af6d-8fec1624b66c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:53:c7:1b', 'vm-uuid': '9be9cbb4-878e-4fce-be7c-44b49480ff0e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.046 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:16 compute-0 NetworkManager[48891]: <info>  [1764091216.0467] manager: (tap0d7b29be-14): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/642)
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.048 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.054 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.054 254096 INFO os_vif [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:c7:1b,bridge_name='br-int',has_traffic_filtering=True,id=0d7b29be-145f-4598-af6d-8fec1624b66c,network=Network(ad3526c9-ce3b-41ed-ae27-775dca6a1319),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d7b29be-14')
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.108 254096 DEBUG nova.virt.libvirt.driver [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.108 254096 DEBUG nova.virt.libvirt.driver [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.109 254096 DEBUG nova.virt.libvirt.driver [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No VIF found with MAC fa:16:3e:dd:c2:78, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.109 254096 DEBUG nova.virt.libvirt.driver [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No VIF found with MAC fa:16:3e:53:c7:1b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.109 254096 INFO nova.virt.libvirt.driver [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Using config drive
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.129 254096 DEBUG nova.storage.rbd_utils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 9be9cbb4-878e-4fce-be7c-44b49480ff0e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:20:16 compute-0 ceph-mon[74985]: pgmap v2901: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:20:16 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2769234872' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:20:16 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1527014632' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.432 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.490 254096 INFO nova.virt.libvirt.driver [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Creating config drive at /var/lib/nova/instances/9be9cbb4-878e-4fce-be7c-44b49480ff0e/disk.config
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.495 254096 DEBUG oslo_concurrency.processutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9be9cbb4-878e-4fce-be7c-44b49480ff0e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuh1f5v0z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.641 254096 DEBUG oslo_concurrency.processutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9be9cbb4-878e-4fce-be7c-44b49480ff0e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuh1f5v0z" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.680 254096 DEBUG nova.storage.rbd_utils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 9be9cbb4-878e-4fce-be7c-44b49480ff0e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.685 254096 DEBUG oslo_concurrency.processutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9be9cbb4-878e-4fce-be7c-44b49480ff0e/disk.config 9be9cbb4-878e-4fce-be7c-44b49480ff0e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.880 254096 DEBUG oslo_concurrency.processutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9be9cbb4-878e-4fce-be7c-44b49480ff0e/disk.config 9be9cbb4-878e-4fce-be7c-44b49480ff0e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.195s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.881 254096 INFO nova.virt.libvirt.driver [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Deleting local config drive /var/lib/nova/instances/9be9cbb4-878e-4fce-be7c-44b49480ff0e/disk.config because it was imported into RBD.
Nov 25 17:20:16 compute-0 NetworkManager[48891]: <info>  [1764091216.9514] manager: (tap67a238a8-a6): new Tun device (/org/freedesktop/NetworkManager/Devices/643)
Nov 25 17:20:16 compute-0 kernel: tap67a238a8-a6: entered promiscuous mode
Nov 25 17:20:16 compute-0 nova_compute[254092]: 2025-11-25 17:20:16.996 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:16 compute-0 systemd-udevd[415669]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:20:16 compute-0 ovn_controller[153477]: 2025-11-25T17:20:16Z|01545|binding|INFO|Claiming lport 67a238a8-a6f3-4b0f-b4da-7800dcf79375 for this chassis.
Nov 25 17:20:16 compute-0 ovn_controller[153477]: 2025-11-25T17:20:16Z|01546|binding|INFO|67a238a8-a6f3-4b0f-b4da-7800dcf79375: Claiming fa:16:3e:dd:c2:78 10.100.0.3
Nov 25 17:20:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2902: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:20:17 compute-0 NetworkManager[48891]: <info>  [1764091217.0025] manager: (tap0d7b29be-14): new Tun device (/org/freedesktop/NetworkManager/Devices/644)
Nov 25 17:20:17 compute-0 systemd-udevd[415674]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:20:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.010 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:c2:78 10.100.0.3'], port_security=['fa:16:3e:dd:c2:78 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9be9cbb4-878e-4fce-be7c-44b49480ff0e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77970d23-547a-4e3a-bddf-f4770a15bf81', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd0d6c638-e8a2-4056-927f-3668170c4e08', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1deee20b-e3d5-4b5e-b63f-386bef188aec, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=67a238a8-a6f3-4b0f-b4da-7800dcf79375) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:20:17 compute-0 kernel: tap0d7b29be-14: entered promiscuous mode
Nov 25 17:20:17 compute-0 ovn_controller[153477]: 2025-11-25T17:20:17Z|01547|binding|INFO|Setting lport 67a238a8-a6f3-4b0f-b4da-7800dcf79375 ovn-installed in OVS
Nov 25 17:20:17 compute-0 ovn_controller[153477]: 2025-11-25T17:20:17Z|01548|binding|INFO|Setting lport 67a238a8-a6f3-4b0f-b4da-7800dcf79375 up in Southbound
Nov 25 17:20:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.012 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 67a238a8-a6f3-4b0f-b4da-7800dcf79375 in datapath 77970d23-547a-4e3a-bddf-f4770a15bf81 bound to our chassis
Nov 25 17:20:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.014 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 77970d23-547a-4e3a-bddf-f4770a15bf81
Nov 25 17:20:17 compute-0 NetworkManager[48891]: <info>  [1764091217.0211] device (tap67a238a8-a6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:20:17 compute-0 nova_compute[254092]: 2025-11-25 17:20:17.020 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:17 compute-0 nova_compute[254092]: 2025-11-25 17:20:17.022 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:17 compute-0 ovn_controller[153477]: 2025-11-25T17:20:17Z|01549|if_status|INFO|Not updating pb chassis for 0d7b29be-145f-4598-af6d-8fec1624b66c now as sb is readonly
Nov 25 17:20:17 compute-0 NetworkManager[48891]: <info>  [1764091217.0240] device (tap0d7b29be-14): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:20:17 compute-0 nova_compute[254092]: 2025-11-25 17:20:17.024 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:17 compute-0 NetworkManager[48891]: <info>  [1764091217.0258] device (tap67a238a8-a6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:20:17 compute-0 NetworkManager[48891]: <info>  [1764091217.0264] device (tap0d7b29be-14): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:20:17 compute-0 nova_compute[254092]: 2025-11-25 17:20:17.038 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.038 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[03ddce97-5b25-4a65-b4ea-d4fbf0b65c11]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:20:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.070 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[dc65fb0e-f369-4d22-9ea0-8e219af82cf8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:20:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.073 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[06ba63c6-629b-46a3-902a-e362405f37e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:20:17 compute-0 ovn_controller[153477]: 2025-11-25T17:20:17Z|01550|binding|INFO|Claiming lport 0d7b29be-145f-4598-af6d-8fec1624b66c for this chassis.
Nov 25 17:20:17 compute-0 ovn_controller[153477]: 2025-11-25T17:20:17Z|01551|binding|INFO|0d7b29be-145f-4598-af6d-8fec1624b66c: Claiming fa:16:3e:53:c7:1b 2001:db8::f816:3eff:fe53:c71b
Nov 25 17:20:17 compute-0 ovn_controller[153477]: 2025-11-25T17:20:17Z|01552|binding|INFO|Setting lport 0d7b29be-145f-4598-af6d-8fec1624b66c ovn-installed in OVS
Nov 25 17:20:17 compute-0 nova_compute[254092]: 2025-11-25 17:20:17.089 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:17 compute-0 ovn_controller[153477]: 2025-11-25T17:20:17Z|01553|binding|INFO|Setting lport 0d7b29be-145f-4598-af6d-8fec1624b66c up in Southbound
Nov 25 17:20:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.096 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:c7:1b 2001:db8::f816:3eff:fe53:c71b'], port_security=['fa:16:3e:53:c7:1b 2001:db8::f816:3eff:fe53:c71b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe53:c71b/64', 'neutron:device_id': '9be9cbb4-878e-4fce-be7c-44b49480ff0e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ad3526c9-ce3b-41ed-ae27-775dca6a1319', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd0d6c638-e8a2-4056-927f-3668170c4e08', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=40d92b3b-5fa0-43d2-8278-bfe8f143bfed, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=0d7b29be-145f-4598-af6d-8fec1624b66c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:20:17 compute-0 systemd-machined[216343]: New machine qemu-178-instance-00000090.
Nov 25 17:20:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.107 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[4624b368-add0-4bfb-9ea7-95dfe3ab0e26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:20:17 compute-0 systemd[1]: Started Virtual Machine qemu-178-instance-00000090.
Nov 25 17:20:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.129 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a5c23d59-2280-4550-b05c-3fd647017908]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77970d23-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:82:6c:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 447], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763649, 'reachable_time': 34708, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 415686, 'error': None, 'target': 'ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:20:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.151 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a79bf11d-79aa-4689-a7c6-36e269df5af2]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap77970d23-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763664, 'tstamp': 763664}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 415689, 'error': None, 'target': 'ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap77970d23-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763669, 'tstamp': 763669}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 415689, 'error': None, 'target': 'ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:20:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.153 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77970d23-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:20:17 compute-0 nova_compute[254092]: 2025-11-25 17:20:17.154 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:17 compute-0 nova_compute[254092]: 2025-11-25 17:20:17.156 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.156 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap77970d23-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:20:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.156 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:20:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.156 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap77970d23-50, col_values=(('external_ids', {'iface-id': '56e5945a-607b-4b8d-baa7-f3eab82e874d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:20:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.157 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:20:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.158 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 0d7b29be-145f-4598-af6d-8fec1624b66c in datapath ad3526c9-ce3b-41ed-ae27-775dca6a1319 unbound from our chassis
Nov 25 17:20:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.159 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ad3526c9-ce3b-41ed-ae27-775dca6a1319
Nov 25 17:20:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.178 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f6e36fa5-6abb-4c6f-810e-df86e4fcb2f4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:20:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.209 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[41af4d10-ab04-4443-90a8-fb31a57b3d1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:20:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.213 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[055d3549-f21a-4fa8-880a-c6c73b9252cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:20:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.245 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[655264e1-e316-4f18-a20d-94028054f83b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:20:17 compute-0 nova_compute[254092]: 2025-11-25 17:20:17.253 254096 DEBUG nova.network.neutron [req-71921be3-fe0e-4ce8-8cec-d06fc830647d req-335beb8b-0df8-4b1a-b6af-9f96cb4779fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Updated VIF entry in instance network info cache for port 0d7b29be-145f-4598-af6d-8fec1624b66c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:20:17 compute-0 nova_compute[254092]: 2025-11-25 17:20:17.254 254096 DEBUG nova.network.neutron [req-71921be3-fe0e-4ce8-8cec-d06fc830647d req-335beb8b-0df8-4b1a-b6af-9f96cb4779fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Updating instance_info_cache with network_info: [{"id": "67a238a8-a6f3-4b0f-b4da-7800dcf79375", "address": "fa:16:3e:dd:c2:78", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67a238a8-a6", "ovs_interfaceid": "67a238a8-a6f3-4b0f-b4da-7800dcf79375", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0d7b29be-145f-4598-af6d-8fec1624b66c", "address": "fa:16:3e:53:c7:1b", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe53:c71b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d7b29be-14", "ovs_interfaceid": "0d7b29be-145f-4598-af6d-8fec1624b66c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:20:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.265 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[88286e49-7d94-496a-9220-fd4baf1ca2df]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapad3526c9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:23:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 18, 'tx_packets': 4, 'rx_bytes': 1572, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 18, 'tx_packets': 4, 'rx_bytes': 1572, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 448], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763759, 'reachable_time': 26526, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 18, 'inoctets': 1320, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 18, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1320, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 18, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 415699, 'error': None, 'target': 'ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:20:17 compute-0 nova_compute[254092]: 2025-11-25 17:20:17.266 254096 DEBUG oslo_concurrency.lockutils [req-71921be3-fe0e-4ce8-8cec-d06fc830647d req-335beb8b-0df8-4b1a-b6af-9f96cb4779fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-9be9cbb4-878e-4fce-be7c-44b49480ff0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:20:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.282 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7de25d3f-4f50-4aa9-9447-fc96fe3632d5]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapad3526c9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763771, 'tstamp': 763771}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 415700, 'error': None, 'target': 'ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:20:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.284 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapad3526c9-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:20:17 compute-0 nova_compute[254092]: 2025-11-25 17:20:17.286 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:17 compute-0 nova_compute[254092]: 2025-11-25 17:20:17.287 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.288 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapad3526c9-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:20:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.288 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:20:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.289 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapad3526c9-c0, col_values=(('external_ids', {'iface-id': '824643fe-f8f7-44ad-9711-a38080a49171'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:20:17 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:17.289 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:20:17 compute-0 nova_compute[254092]: 2025-11-25 17:20:17.616 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091217.6164224, 9be9cbb4-878e-4fce-be7c-44b49480ff0e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:20:17 compute-0 nova_compute[254092]: 2025-11-25 17:20:17.617 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] VM Started (Lifecycle Event)
Nov 25 17:20:17 compute-0 nova_compute[254092]: 2025-11-25 17:20:17.636 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:20:17 compute-0 nova_compute[254092]: 2025-11-25 17:20:17.640 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091217.6166012, 9be9cbb4-878e-4fce-be7c-44b49480ff0e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:20:17 compute-0 nova_compute[254092]: 2025-11-25 17:20:17.641 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] VM Paused (Lifecycle Event)
Nov 25 17:20:17 compute-0 nova_compute[254092]: 2025-11-25 17:20:17.666 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:20:17 compute-0 nova_compute[254092]: 2025-11-25 17:20:17.670 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:20:17 compute-0 nova_compute[254092]: 2025-11-25 17:20:17.688 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:20:17 compute-0 nova_compute[254092]: 2025-11-25 17:20:17.907 254096 DEBUG nova.compute.manager [req-bc1d0ddb-130e-4de1-b450-418fd13aaba5 req-ec8aef37-5219-44d3-a79d-2823e7e80b43 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Received event network-vif-plugged-67a238a8-a6f3-4b0f-b4da-7800dcf79375 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:20:17 compute-0 nova_compute[254092]: 2025-11-25 17:20:17.908 254096 DEBUG oslo_concurrency.lockutils [req-bc1d0ddb-130e-4de1-b450-418fd13aaba5 req-ec8aef37-5219-44d3-a79d-2823e7e80b43 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:20:17 compute-0 nova_compute[254092]: 2025-11-25 17:20:17.908 254096 DEBUG oslo_concurrency.lockutils [req-bc1d0ddb-130e-4de1-b450-418fd13aaba5 req-ec8aef37-5219-44d3-a79d-2823e7e80b43 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:20:17 compute-0 nova_compute[254092]: 2025-11-25 17:20:17.909 254096 DEBUG oslo_concurrency.lockutils [req-bc1d0ddb-130e-4de1-b450-418fd13aaba5 req-ec8aef37-5219-44d3-a79d-2823e7e80b43 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:20:17 compute-0 nova_compute[254092]: 2025-11-25 17:20:17.910 254096 DEBUG nova.compute.manager [req-bc1d0ddb-130e-4de1-b450-418fd13aaba5 req-ec8aef37-5219-44d3-a79d-2823e7e80b43 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Processing event network-vif-plugged-67a238a8-a6f3-4b0f-b4da-7800dcf79375 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 17:20:18 compute-0 ceph-mon[74985]: pgmap v2902: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:20:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2903: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:20:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:20:19 compute-0 nova_compute[254092]: 2025-11-25 17:20:19.997 254096 DEBUG nova.compute.manager [req-3e086e7d-87f1-434c-ba7a-33e3d5b6e1d1 req-febe01e5-861b-45aa-8dbf-93e2ba71f13a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Received event network-vif-plugged-67a238a8-a6f3-4b0f-b4da-7800dcf79375 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:20:19 compute-0 nova_compute[254092]: 2025-11-25 17:20:19.997 254096 DEBUG oslo_concurrency.lockutils [req-3e086e7d-87f1-434c-ba7a-33e3d5b6e1d1 req-febe01e5-861b-45aa-8dbf-93e2ba71f13a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:20:19 compute-0 nova_compute[254092]: 2025-11-25 17:20:19.997 254096 DEBUG oslo_concurrency.lockutils [req-3e086e7d-87f1-434c-ba7a-33e3d5b6e1d1 req-febe01e5-861b-45aa-8dbf-93e2ba71f13a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:20:19 compute-0 nova_compute[254092]: 2025-11-25 17:20:19.997 254096 DEBUG oslo_concurrency.lockutils [req-3e086e7d-87f1-434c-ba7a-33e3d5b6e1d1 req-febe01e5-861b-45aa-8dbf-93e2ba71f13a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:20:19 compute-0 nova_compute[254092]: 2025-11-25 17:20:19.998 254096 DEBUG nova.compute.manager [req-3e086e7d-87f1-434c-ba7a-33e3d5b6e1d1 req-febe01e5-861b-45aa-8dbf-93e2ba71f13a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] No event matching network-vif-plugged-67a238a8-a6f3-4b0f-b4da-7800dcf79375 in dict_keys([('network-vif-plugged', '0d7b29be-145f-4598-af6d-8fec1624b66c')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Nov 25 17:20:19 compute-0 nova_compute[254092]: 2025-11-25 17:20:19.998 254096 WARNING nova.compute.manager [req-3e086e7d-87f1-434c-ba7a-33e3d5b6e1d1 req-febe01e5-861b-45aa-8dbf-93e2ba71f13a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Received unexpected event network-vif-plugged-67a238a8-a6f3-4b0f-b4da-7800dcf79375 for instance with vm_state building and task_state spawning.
Nov 25 17:20:19 compute-0 nova_compute[254092]: 2025-11-25 17:20:19.998 254096 DEBUG nova.compute.manager [req-3e086e7d-87f1-434c-ba7a-33e3d5b6e1d1 req-febe01e5-861b-45aa-8dbf-93e2ba71f13a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Received event network-vif-plugged-0d7b29be-145f-4598-af6d-8fec1624b66c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:20:19 compute-0 nova_compute[254092]: 2025-11-25 17:20:19.998 254096 DEBUG oslo_concurrency.lockutils [req-3e086e7d-87f1-434c-ba7a-33e3d5b6e1d1 req-febe01e5-861b-45aa-8dbf-93e2ba71f13a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:20:19 compute-0 nova_compute[254092]: 2025-11-25 17:20:19.998 254096 DEBUG oslo_concurrency.lockutils [req-3e086e7d-87f1-434c-ba7a-33e3d5b6e1d1 req-febe01e5-861b-45aa-8dbf-93e2ba71f13a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:20:19 compute-0 nova_compute[254092]: 2025-11-25 17:20:19.999 254096 DEBUG oslo_concurrency.lockutils [req-3e086e7d-87f1-434c-ba7a-33e3d5b6e1d1 req-febe01e5-861b-45aa-8dbf-93e2ba71f13a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:20:19 compute-0 nova_compute[254092]: 2025-11-25 17:20:19.999 254096 DEBUG nova.compute.manager [req-3e086e7d-87f1-434c-ba7a-33e3d5b6e1d1 req-febe01e5-861b-45aa-8dbf-93e2ba71f13a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Processing event network-vif-plugged-0d7b29be-145f-4598-af6d-8fec1624b66c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 17:20:19 compute-0 nova_compute[254092]: 2025-11-25 17:20:19.999 254096 DEBUG nova.compute.manager [req-3e086e7d-87f1-434c-ba7a-33e3d5b6e1d1 req-febe01e5-861b-45aa-8dbf-93e2ba71f13a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Received event network-vif-plugged-0d7b29be-145f-4598-af6d-8fec1624b66c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:20:19 compute-0 nova_compute[254092]: 2025-11-25 17:20:19.999 254096 DEBUG oslo_concurrency.lockutils [req-3e086e7d-87f1-434c-ba7a-33e3d5b6e1d1 req-febe01e5-861b-45aa-8dbf-93e2ba71f13a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:20:20 compute-0 nova_compute[254092]: 2025-11-25 17:20:20.000 254096 DEBUG oslo_concurrency.lockutils [req-3e086e7d-87f1-434c-ba7a-33e3d5b6e1d1 req-febe01e5-861b-45aa-8dbf-93e2ba71f13a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:20:20 compute-0 nova_compute[254092]: 2025-11-25 17:20:20.000 254096 DEBUG oslo_concurrency.lockutils [req-3e086e7d-87f1-434c-ba7a-33e3d5b6e1d1 req-febe01e5-861b-45aa-8dbf-93e2ba71f13a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:20:20 compute-0 nova_compute[254092]: 2025-11-25 17:20:20.000 254096 DEBUG nova.compute.manager [req-3e086e7d-87f1-434c-ba7a-33e3d5b6e1d1 req-febe01e5-861b-45aa-8dbf-93e2ba71f13a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] No waiting events found dispatching network-vif-plugged-0d7b29be-145f-4598-af6d-8fec1624b66c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:20:20 compute-0 nova_compute[254092]: 2025-11-25 17:20:20.000 254096 WARNING nova.compute.manager [req-3e086e7d-87f1-434c-ba7a-33e3d5b6e1d1 req-febe01e5-861b-45aa-8dbf-93e2ba71f13a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Received unexpected event network-vif-plugged-0d7b29be-145f-4598-af6d-8fec1624b66c for instance with vm_state building and task_state spawning.
Nov 25 17:20:20 compute-0 nova_compute[254092]: 2025-11-25 17:20:20.001 254096 DEBUG nova.compute.manager [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Instance event wait completed in 2 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 17:20:20 compute-0 nova_compute[254092]: 2025-11-25 17:20:20.005 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091220.0049632, 9be9cbb4-878e-4fce-be7c-44b49480ff0e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:20:20 compute-0 nova_compute[254092]: 2025-11-25 17:20:20.005 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] VM Resumed (Lifecycle Event)
Nov 25 17:20:20 compute-0 nova_compute[254092]: 2025-11-25 17:20:20.008 254096 DEBUG nova.virt.libvirt.driver [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 17:20:20 compute-0 nova_compute[254092]: 2025-11-25 17:20:20.011 254096 INFO nova.virt.libvirt.driver [-] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Instance spawned successfully.
Nov 25 17:20:20 compute-0 nova_compute[254092]: 2025-11-25 17:20:20.011 254096 DEBUG nova.virt.libvirt.driver [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 17:20:20 compute-0 nova_compute[254092]: 2025-11-25 17:20:20.023 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:20:20 compute-0 nova_compute[254092]: 2025-11-25 17:20:20.028 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:20:20 compute-0 nova_compute[254092]: 2025-11-25 17:20:20.033 254096 DEBUG nova.virt.libvirt.driver [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:20:20 compute-0 nova_compute[254092]: 2025-11-25 17:20:20.034 254096 DEBUG nova.virt.libvirt.driver [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:20:20 compute-0 nova_compute[254092]: 2025-11-25 17:20:20.034 254096 DEBUG nova.virt.libvirt.driver [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:20:20 compute-0 nova_compute[254092]: 2025-11-25 17:20:20.034 254096 DEBUG nova.virt.libvirt.driver [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:20:20 compute-0 nova_compute[254092]: 2025-11-25 17:20:20.035 254096 DEBUG nova.virt.libvirt.driver [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:20:20 compute-0 nova_compute[254092]: 2025-11-25 17:20:20.035 254096 DEBUG nova.virt.libvirt.driver [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:20:20 compute-0 nova_compute[254092]: 2025-11-25 17:20:20.061 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:20:20 compute-0 nova_compute[254092]: 2025-11-25 17:20:20.094 254096 INFO nova.compute.manager [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Took 10.91 seconds to spawn the instance on the hypervisor.
Nov 25 17:20:20 compute-0 nova_compute[254092]: 2025-11-25 17:20:20.094 254096 DEBUG nova.compute.manager [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:20:20 compute-0 nova_compute[254092]: 2025-11-25 17:20:20.149 254096 INFO nova.compute.manager [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Took 11.90 seconds to build instance.
Nov 25 17:20:20 compute-0 nova_compute[254092]: 2025-11-25 17:20:20.163 254096 DEBUG oslo_concurrency.lockutils [None req-e1c0a06e-bddf-4fc0-8cba-6a1ecd1e13e5 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.967s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:20:20 compute-0 ceph-mon[74985]: pgmap v2903: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:20:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2904: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 478 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Nov 25 17:20:21 compute-0 nova_compute[254092]: 2025-11-25 17:20:21.048 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:21 compute-0 nova_compute[254092]: 2025-11-25 17:20:21.436 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:22 compute-0 ceph-mon[74985]: pgmap v2904: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 478 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Nov 25 17:20:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2905: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 461 KiB/s rd, 12 KiB/s wr, 24 op/s
Nov 25 17:20:24 compute-0 ceph-mon[74985]: pgmap v2905: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 461 KiB/s rd, 12 KiB/s wr, 24 op/s
Nov 25 17:20:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:20:24 compute-0 nova_compute[254092]: 2025-11-25 17:20:24.658 254096 DEBUG nova.compute.manager [req-24bbaff1-446d-4fc9-ba28-3e23d2ce740f req-9d18af40-6a1e-40e0-9aeb-4db5f8131339 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Received event network-changed-67a238a8-a6f3-4b0f-b4da-7800dcf79375 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:20:24 compute-0 nova_compute[254092]: 2025-11-25 17:20:24.659 254096 DEBUG nova.compute.manager [req-24bbaff1-446d-4fc9-ba28-3e23d2ce740f req-9d18af40-6a1e-40e0-9aeb-4db5f8131339 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Refreshing instance network info cache due to event network-changed-67a238a8-a6f3-4b0f-b4da-7800dcf79375. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:20:24 compute-0 nova_compute[254092]: 2025-11-25 17:20:24.660 254096 DEBUG oslo_concurrency.lockutils [req-24bbaff1-446d-4fc9-ba28-3e23d2ce740f req-9d18af40-6a1e-40e0-9aeb-4db5f8131339 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-9be9cbb4-878e-4fce-be7c-44b49480ff0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:20:24 compute-0 nova_compute[254092]: 2025-11-25 17:20:24.660 254096 DEBUG oslo_concurrency.lockutils [req-24bbaff1-446d-4fc9-ba28-3e23d2ce740f req-9d18af40-6a1e-40e0-9aeb-4db5f8131339 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-9be9cbb4-878e-4fce-be7c-44b49480ff0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:20:24 compute-0 nova_compute[254092]: 2025-11-25 17:20:24.660 254096 DEBUG nova.network.neutron [req-24bbaff1-446d-4fc9-ba28-3e23d2ce740f req-9d18af40-6a1e-40e0-9aeb-4db5f8131339 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Refreshing network info cache for port 67a238a8-a6f3-4b0f-b4da-7800dcf79375 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:20:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2906: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 461 KiB/s rd, 12 KiB/s wr, 24 op/s
Nov 25 17:20:26 compute-0 nova_compute[254092]: 2025-11-25 17:20:26.032 254096 DEBUG nova.network.neutron [req-24bbaff1-446d-4fc9-ba28-3e23d2ce740f req-9d18af40-6a1e-40e0-9aeb-4db5f8131339 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Updated VIF entry in instance network info cache for port 67a238a8-a6f3-4b0f-b4da-7800dcf79375. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:20:26 compute-0 nova_compute[254092]: 2025-11-25 17:20:26.034 254096 DEBUG nova.network.neutron [req-24bbaff1-446d-4fc9-ba28-3e23d2ce740f req-9d18af40-6a1e-40e0-9aeb-4db5f8131339 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Updating instance_info_cache with network_info: [{"id": "67a238a8-a6f3-4b0f-b4da-7800dcf79375", "address": "fa:16:3e:dd:c2:78", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67a238a8-a6", "ovs_interfaceid": "67a238a8-a6f3-4b0f-b4da-7800dcf79375", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0d7b29be-145f-4598-af6d-8fec1624b66c", "address": "fa:16:3e:53:c7:1b", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe53:c71b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d7b29be-14", "ovs_interfaceid": "0d7b29be-145f-4598-af6d-8fec1624b66c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:20:26 compute-0 nova_compute[254092]: 2025-11-25 17:20:26.050 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:26 compute-0 nova_compute[254092]: 2025-11-25 17:20:26.052 254096 DEBUG oslo_concurrency.lockutils [req-24bbaff1-446d-4fc9-ba28-3e23d2ce740f req-9d18af40-6a1e-40e0-9aeb-4db5f8131339 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-9be9cbb4-878e-4fce-be7c-44b49480ff0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:20:26 compute-0 ceph-mon[74985]: pgmap v2906: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 461 KiB/s rd, 12 KiB/s wr, 24 op/s
Nov 25 17:20:26 compute-0 nova_compute[254092]: 2025-11-25 17:20:26.438 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2907: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 25 17:20:28 compute-0 ceph-mon[74985]: pgmap v2907: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 25 17:20:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2908: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 25 17:20:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:20:30 compute-0 ceph-mon[74985]: pgmap v2908: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Nov 25 17:20:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2909: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 75 op/s
Nov 25 17:20:31 compute-0 nova_compute[254092]: 2025-11-25 17:20:31.055 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:31 compute-0 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #51. Immutable memtables: 0.
Nov 25 17:20:31 compute-0 nova_compute[254092]: 2025-11-25 17:20:31.441 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:32 compute-0 ceph-mon[74985]: pgmap v2909: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 75 op/s
Nov 25 17:20:32 compute-0 ovn_controller[153477]: 2025-11-25T17:20:32Z|00184|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:dd:c2:78 10.100.0.3
Nov 25 17:20:32 compute-0 ovn_controller[153477]: 2025-11-25T17:20:32Z|00185|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:dd:c2:78 10.100.0.3
Nov 25 17:20:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2910: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.3 KiB/s wr, 51 op/s
Nov 25 17:20:34 compute-0 ceph-mon[74985]: pgmap v2910: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.3 KiB/s wr, 51 op/s
Nov 25 17:20:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:20:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2911: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.3 KiB/s wr, 51 op/s
Nov 25 17:20:36 compute-0 nova_compute[254092]: 2025-11-25 17:20:36.058 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:36 compute-0 ceph-mon[74985]: pgmap v2911: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.3 KiB/s wr, 51 op/s
Nov 25 17:20:36 compute-0 nova_compute[254092]: 2025-11-25 17:20:36.443 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2912: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 113 op/s
Nov 25 17:20:38 compute-0 ceph-mon[74985]: pgmap v2912: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 113 op/s
Nov 25 17:20:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2913: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:20:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:20:39 compute-0 nova_compute[254092]: 2025-11-25 17:20:39.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:20:39 compute-0 podman[415745]: 2025-11-25 17:20:39.693888325 +0000 UTC m=+0.100577069 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 17:20:39 compute-0 podman[415744]: 2025-11-25 17:20:39.702716915 +0000 UTC m=+0.109300626 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 17:20:39 compute-0 podman[415746]: 2025-11-25 17:20:39.764854076 +0000 UTC m=+0.159730768 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 17:20:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:20:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:20:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:20:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:20:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:20:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:20:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:20:40
Nov 25 17:20:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:20:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:20:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'default.rgw.meta', 'backups', 'default.rgw.log', '.rgw.root', '.mgr', 'cephfs.cephfs.data', 'volumes', 'images', 'default.rgw.control']
Nov 25 17:20:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:20:40 compute-0 ceph-mon[74985]: pgmap v2913: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:20:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:20:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:20:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:20:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:20:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:20:40 compute-0 sudo[415805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:20:40 compute-0 sudo[415805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:20:40 compute-0 sudo[415805]: pam_unix(sudo:session): session closed for user root
Nov 25 17:20:40 compute-0 sudo[415830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:20:40 compute-0 sudo[415830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:20:40 compute-0 sudo[415830]: pam_unix(sudo:session): session closed for user root
Nov 25 17:20:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:20:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:20:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:20:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:20:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:20:40 compute-0 sudo[415855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:20:40 compute-0 sudo[415855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:20:40 compute-0 sudo[415855]: pam_unix(sudo:session): session closed for user root
Nov 25 17:20:40 compute-0 sudo[415880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:20:40 compute-0 sudo[415880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:20:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2914: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:20:41 compute-0 nova_compute[254092]: 2025-11-25 17:20:41.061 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:41 compute-0 sudo[415880]: pam_unix(sudo:session): session closed for user root
Nov 25 17:20:41 compute-0 nova_compute[254092]: 2025-11-25 17:20:41.446 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:20:41 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:20:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:20:41 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:20:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:20:41 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:20:41 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 77d31682-b1a7-4317-9a65-3557fea64c97 does not exist
Nov 25 17:20:41 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 11267756-94fc-450f-aa6c-59f4c5753433 does not exist
Nov 25 17:20:41 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 0bfa43a3-e8ec-429d-9a4d-96f9c3fe2a3e does not exist
Nov 25 17:20:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:20:41 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:20:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:20:41 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:20:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:20:41 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:20:41 compute-0 sudo[415937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:20:41 compute-0 sudo[415937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:20:41 compute-0 sudo[415937]: pam_unix(sudo:session): session closed for user root
Nov 25 17:20:41 compute-0 sudo[415962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:20:41 compute-0 sudo[415962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:20:41 compute-0 sudo[415962]: pam_unix(sudo:session): session closed for user root
Nov 25 17:20:41 compute-0 sudo[415987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:20:41 compute-0 sudo[415987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:20:41 compute-0 sudo[415987]: pam_unix(sudo:session): session closed for user root
Nov 25 17:20:41 compute-0 sudo[416012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:20:41 compute-0 sudo[416012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:20:42 compute-0 podman[416078]: 2025-11-25 17:20:42.264235808 +0000 UTC m=+0.065410261 container create 8e60b9934752ad7bbf0a7db3910b04f71aa093a7c5dd7746acc0343d08f7f47b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jones, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 17:20:42 compute-0 systemd[1]: Started libpod-conmon-8e60b9934752ad7bbf0a7db3910b04f71aa093a7c5dd7746acc0343d08f7f47b.scope.
Nov 25 17:20:42 compute-0 ceph-mon[74985]: pgmap v2914: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:20:42 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:20:42 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:20:42 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:20:42 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:20:42 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:20:42 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:20:42 compute-0 podman[416078]: 2025-11-25 17:20:42.230701345 +0000 UTC m=+0.031875888 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:20:42 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:20:42 compute-0 podman[416078]: 2025-11-25 17:20:42.360437637 +0000 UTC m=+0.161612090 container init 8e60b9934752ad7bbf0a7db3910b04f71aa093a7c5dd7746acc0343d08f7f47b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jones, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:20:42 compute-0 podman[416078]: 2025-11-25 17:20:42.368876276 +0000 UTC m=+0.170050729 container start 8e60b9934752ad7bbf0a7db3910b04f71aa093a7c5dd7746acc0343d08f7f47b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jones, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:20:42 compute-0 podman[416078]: 2025-11-25 17:20:42.372364981 +0000 UTC m=+0.173539474 container attach 8e60b9934752ad7bbf0a7db3910b04f71aa093a7c5dd7746acc0343d08f7f47b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jones, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 17:20:42 compute-0 elegant_jones[416095]: 167 167
Nov 25 17:20:42 compute-0 systemd[1]: libpod-8e60b9934752ad7bbf0a7db3910b04f71aa093a7c5dd7746acc0343d08f7f47b.scope: Deactivated successfully.
Nov 25 17:20:42 compute-0 podman[416078]: 2025-11-25 17:20:42.377357877 +0000 UTC m=+0.178532340 container died 8e60b9934752ad7bbf0a7db3910b04f71aa093a7c5dd7746acc0343d08f7f47b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jones, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 17:20:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-42527bf5a8ed86ecd9dfdaecca4537f1f84fd4ecd82f34e5ecb91ed8c3d0abc2-merged.mount: Deactivated successfully.
Nov 25 17:20:42 compute-0 podman[416078]: 2025-11-25 17:20:42.427064481 +0000 UTC m=+0.228238934 container remove 8e60b9934752ad7bbf0a7db3910b04f71aa093a7c5dd7746acc0343d08f7f47b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:20:42 compute-0 systemd[1]: libpod-conmon-8e60b9934752ad7bbf0a7db3910b04f71aa093a7c5dd7746acc0343d08f7f47b.scope: Deactivated successfully.
Nov 25 17:20:42 compute-0 podman[416120]: 2025-11-25 17:20:42.68683226 +0000 UTC m=+0.072469063 container create fdfed6496f870c34d850db6c4ce3f10aa7200cf9bfd86cc554054bb131b240df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hypatia, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 17:20:42 compute-0 systemd[1]: Started libpod-conmon-fdfed6496f870c34d850db6c4ce3f10aa7200cf9bfd86cc554054bb131b240df.scope.
Nov 25 17:20:42 compute-0 podman[416120]: 2025-11-25 17:20:42.661513351 +0000 UTC m=+0.047150194 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:20:42 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:20:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4279a6dccc8f183bd4c3b79ab01253338275fe72198dabc77cfd5df5a19e9e6c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:20:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4279a6dccc8f183bd4c3b79ab01253338275fe72198dabc77cfd5df5a19e9e6c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:20:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4279a6dccc8f183bd4c3b79ab01253338275fe72198dabc77cfd5df5a19e9e6c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:20:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4279a6dccc8f183bd4c3b79ab01253338275fe72198dabc77cfd5df5a19e9e6c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:20:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4279a6dccc8f183bd4c3b79ab01253338275fe72198dabc77cfd5df5a19e9e6c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:20:42 compute-0 podman[416120]: 2025-11-25 17:20:42.811880295 +0000 UTC m=+0.197517108 container init fdfed6496f870c34d850db6c4ce3f10aa7200cf9bfd86cc554054bb131b240df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hypatia, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 17:20:42 compute-0 podman[416120]: 2025-11-25 17:20:42.827949442 +0000 UTC m=+0.213586275 container start fdfed6496f870c34d850db6c4ce3f10aa7200cf9bfd86cc554054bb131b240df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hypatia, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 17:20:42 compute-0 podman[416120]: 2025-11-25 17:20:42.833838132 +0000 UTC m=+0.219474965 container attach fdfed6496f870c34d850db6c4ce3f10aa7200cf9bfd86cc554054bb131b240df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hypatia, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:20:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2915: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 320 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 25 17:20:43 compute-0 nova_compute[254092]: 2025-11-25 17:20:43.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:20:43 compute-0 sshd-session[416121]: Connection closed by authenticating user root 171.244.51.45 port 50560 [preauth]
Nov 25 17:20:43 compute-0 sad_hypatia[416138]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:20:43 compute-0 sad_hypatia[416138]: --> relative data size: 1.0
Nov 25 17:20:43 compute-0 sad_hypatia[416138]: --> All data devices are unavailable
Nov 25 17:20:43 compute-0 systemd[1]: libpod-fdfed6496f870c34d850db6c4ce3f10aa7200cf9bfd86cc554054bb131b240df.scope: Deactivated successfully.
Nov 25 17:20:43 compute-0 systemd[1]: libpod-fdfed6496f870c34d850db6c4ce3f10aa7200cf9bfd86cc554054bb131b240df.scope: Consumed 1.094s CPU time.
Nov 25 17:20:43 compute-0 conmon[416138]: conmon fdfed6496f870c34d850 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fdfed6496f870c34d850db6c4ce3f10aa7200cf9bfd86cc554054bb131b240df.scope/container/memory.events
Nov 25 17:20:44 compute-0 podman[416167]: 2025-11-25 17:20:44.040862407 +0000 UTC m=+0.029298609 container died fdfed6496f870c34d850db6c4ce3f10aa7200cf9bfd86cc554054bb131b240df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:20:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-4279a6dccc8f183bd4c3b79ab01253338275fe72198dabc77cfd5df5a19e9e6c-merged.mount: Deactivated successfully.
Nov 25 17:20:44 compute-0 podman[416167]: 2025-11-25 17:20:44.104259282 +0000 UTC m=+0.092695484 container remove fdfed6496f870c34d850db6c4ce3f10aa7200cf9bfd86cc554054bb131b240df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:20:44 compute-0 systemd[1]: libpod-conmon-fdfed6496f870c34d850db6c4ce3f10aa7200cf9bfd86cc554054bb131b240df.scope: Deactivated successfully.
Nov 25 17:20:44 compute-0 sudo[416012]: pam_unix(sudo:session): session closed for user root
Nov 25 17:20:44 compute-0 sudo[416182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:20:44 compute-0 sudo[416182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:20:44 compute-0 sudo[416182]: pam_unix(sudo:session): session closed for user root
Nov 25 17:20:44 compute-0 sudo[416207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:20:44 compute-0 sudo[416207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:20:44 compute-0 sudo[416207]: pam_unix(sudo:session): session closed for user root
Nov 25 17:20:44 compute-0 ceph-mon[74985]: pgmap v2915: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 320 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 25 17:20:44 compute-0 sudo[416232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:20:44 compute-0 sudo[416232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:20:44 compute-0 sudo[416232]: pam_unix(sudo:session): session closed for user root
Nov 25 17:20:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:20:44 compute-0 sudo[416257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:20:44 compute-0 sudo[416257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:20:44 compute-0 podman[416323]: 2025-11-25 17:20:44.879061462 +0000 UTC m=+0.059872072 container create 93eb7f6b1ed0f8b6de3443879b1992e9c4b4ec7479f5bc30408fd478af255c38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bassi, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:20:44 compute-0 podman[416323]: 2025-11-25 17:20:44.850300669 +0000 UTC m=+0.031111329 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:20:44 compute-0 systemd[1]: Started libpod-conmon-93eb7f6b1ed0f8b6de3443879b1992e9c4b4ec7479f5bc30408fd478af255c38.scope.
Nov 25 17:20:44 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:20:45 compute-0 podman[416323]: 2025-11-25 17:20:45.00392206 +0000 UTC m=+0.184732660 container init 93eb7f6b1ed0f8b6de3443879b1992e9c4b4ec7479f5bc30408fd478af255c38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 17:20:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2916: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 320 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 25 17:20:45 compute-0 podman[416323]: 2025-11-25 17:20:45.019663548 +0000 UTC m=+0.200474118 container start 93eb7f6b1ed0f8b6de3443879b1992e9c4b4ec7479f5bc30408fd478af255c38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 17:20:45 compute-0 podman[416323]: 2025-11-25 17:20:45.023530433 +0000 UTC m=+0.204341013 container attach 93eb7f6b1ed0f8b6de3443879b1992e9c4b4ec7479f5bc30408fd478af255c38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bassi, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:20:45 compute-0 sweet_bassi[416339]: 167 167
Nov 25 17:20:45 compute-0 systemd[1]: libpod-93eb7f6b1ed0f8b6de3443879b1992e9c4b4ec7479f5bc30408fd478af255c38.scope: Deactivated successfully.
Nov 25 17:20:45 compute-0 podman[416323]: 2025-11-25 17:20:45.031391128 +0000 UTC m=+0.212201698 container died 93eb7f6b1ed0f8b6de3443879b1992e9c4b4ec7479f5bc30408fd478af255c38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 17:20:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6b95701d1e55919532f7d2a6b348e17171362395eae2a0f46c5e0f8a7ffb3c2-merged.mount: Deactivated successfully.
Nov 25 17:20:45 compute-0 podman[416323]: 2025-11-25 17:20:45.090724352 +0000 UTC m=+0.271534962 container remove 93eb7f6b1ed0f8b6de3443879b1992e9c4b4ec7479f5bc30408fd478af255c38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bassi, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 17:20:45 compute-0 systemd[1]: libpod-conmon-93eb7f6b1ed0f8b6de3443879b1992e9c4b4ec7479f5bc30408fd478af255c38.scope: Deactivated successfully.
Nov 25 17:20:45 compute-0 nova_compute[254092]: 2025-11-25 17:20:45.282 254096 DEBUG nova.compute.manager [req-86a35619-5965-4d68-a7e7-ff7f6c1ec3f6 req-1ce9cf46-5b21-4b08-a42c-ef804596c309 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Received event network-changed-67a238a8-a6f3-4b0f-b4da-7800dcf79375 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:20:45 compute-0 nova_compute[254092]: 2025-11-25 17:20:45.285 254096 DEBUG nova.compute.manager [req-86a35619-5965-4d68-a7e7-ff7f6c1ec3f6 req-1ce9cf46-5b21-4b08-a42c-ef804596c309 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Refreshing instance network info cache due to event network-changed-67a238a8-a6f3-4b0f-b4da-7800dcf79375. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:20:45 compute-0 nova_compute[254092]: 2025-11-25 17:20:45.285 254096 DEBUG oslo_concurrency.lockutils [req-86a35619-5965-4d68-a7e7-ff7f6c1ec3f6 req-1ce9cf46-5b21-4b08-a42c-ef804596c309 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-9be9cbb4-878e-4fce-be7c-44b49480ff0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:20:45 compute-0 nova_compute[254092]: 2025-11-25 17:20:45.286 254096 DEBUG oslo_concurrency.lockutils [req-86a35619-5965-4d68-a7e7-ff7f6c1ec3f6 req-1ce9cf46-5b21-4b08-a42c-ef804596c309 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-9be9cbb4-878e-4fce-be7c-44b49480ff0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:20:45 compute-0 nova_compute[254092]: 2025-11-25 17:20:45.286 254096 DEBUG nova.network.neutron [req-86a35619-5965-4d68-a7e7-ff7f6c1ec3f6 req-1ce9cf46-5b21-4b08-a42c-ef804596c309 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Refreshing network info cache for port 67a238a8-a6f3-4b0f-b4da-7800dcf79375 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:20:45 compute-0 nova_compute[254092]: 2025-11-25 17:20:45.328 254096 DEBUG oslo_concurrency.lockutils [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:20:45 compute-0 nova_compute[254092]: 2025-11-25 17:20:45.328 254096 DEBUG oslo_concurrency.lockutils [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:20:45 compute-0 nova_compute[254092]: 2025-11-25 17:20:45.328 254096 DEBUG oslo_concurrency.lockutils [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:20:45 compute-0 nova_compute[254092]: 2025-11-25 17:20:45.329 254096 DEBUG oslo_concurrency.lockutils [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:20:45 compute-0 nova_compute[254092]: 2025-11-25 17:20:45.329 254096 DEBUG oslo_concurrency.lockutils [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:20:45 compute-0 nova_compute[254092]: 2025-11-25 17:20:45.330 254096 INFO nova.compute.manager [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Terminating instance
Nov 25 17:20:45 compute-0 nova_compute[254092]: 2025-11-25 17:20:45.331 254096 DEBUG nova.compute.manager [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 17:20:45 compute-0 podman[416362]: 2025-11-25 17:20:45.332874594 +0000 UTC m=+0.066705427 container create a6d97c6ac8e5277679933f045cb1c5765f1e9b361b6c766f1d1e300c60a6b4f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_joliot, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:20:45 compute-0 systemd[1]: Started libpod-conmon-a6d97c6ac8e5277679933f045cb1c5765f1e9b361b6c766f1d1e300c60a6b4f5.scope.
Nov 25 17:20:45 compute-0 kernel: tap67a238a8-a6 (unregistering): left promiscuous mode
Nov 25 17:20:45 compute-0 podman[416362]: 2025-11-25 17:20:45.302517928 +0000 UTC m=+0.036348631 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:20:45 compute-0 NetworkManager[48891]: <info>  [1764091245.3973] device (tap67a238a8-a6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:20:45 compute-0 ovn_controller[153477]: 2025-11-25T17:20:45Z|01554|binding|INFO|Releasing lport 67a238a8-a6f3-4b0f-b4da-7800dcf79375 from this chassis (sb_readonly=0)
Nov 25 17:20:45 compute-0 ovn_controller[153477]: 2025-11-25T17:20:45Z|01555|binding|INFO|Setting lport 67a238a8-a6f3-4b0f-b4da-7800dcf79375 down in Southbound
Nov 25 17:20:45 compute-0 nova_compute[254092]: 2025-11-25 17:20:45.408 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:45 compute-0 ovn_controller[153477]: 2025-11-25T17:20:45Z|01556|binding|INFO|Removing iface tap67a238a8-a6 ovn-installed in OVS
Nov 25 17:20:45 compute-0 nova_compute[254092]: 2025-11-25 17:20:45.411 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:45 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:20:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.419 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:c2:78 10.100.0.3'], port_security=['fa:16:3e:dd:c2:78 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9be9cbb4-878e-4fce-be7c-44b49480ff0e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77970d23-547a-4e3a-bddf-f4770a15bf81', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd0d6c638-e8a2-4056-927f-3668170c4e08', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1deee20b-e3d5-4b5e-b63f-386bef188aec, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=67a238a8-a6f3-4b0f-b4da-7800dcf79375) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:20:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.420 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 67a238a8-a6f3-4b0f-b4da-7800dcf79375 in datapath 77970d23-547a-4e3a-bddf-f4770a15bf81 unbound from our chassis
Nov 25 17:20:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.421 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 77970d23-547a-4e3a-bddf-f4770a15bf81
Nov 25 17:20:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a8b6628b4e6da74b9243852b604db8b1786fedac131efe87321d0b3a679d31d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:20:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a8b6628b4e6da74b9243852b604db8b1786fedac131efe87321d0b3a679d31d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:20:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a8b6628b4e6da74b9243852b604db8b1786fedac131efe87321d0b3a679d31d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:20:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a8b6628b4e6da74b9243852b604db8b1786fedac131efe87321d0b3a679d31d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:20:45 compute-0 kernel: tap0d7b29be-14 (unregistering): left promiscuous mode
Nov 25 17:20:45 compute-0 podman[416362]: 2025-11-25 17:20:45.4495884 +0000 UTC m=+0.183419033 container init a6d97c6ac8e5277679933f045cb1c5765f1e9b361b6c766f1d1e300c60a6b4f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 17:20:45 compute-0 NetworkManager[48891]: <info>  [1764091245.4515] device (tap0d7b29be-14): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:20:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.456 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cb5c3339-e076-46f7-89fd-d55136acda05]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:20:45 compute-0 ovn_controller[153477]: 2025-11-25T17:20:45Z|01557|binding|INFO|Releasing lport 0d7b29be-145f-4598-af6d-8fec1624b66c from this chassis (sb_readonly=0)
Nov 25 17:20:45 compute-0 ovn_controller[153477]: 2025-11-25T17:20:45Z|01558|binding|INFO|Setting lport 0d7b29be-145f-4598-af6d-8fec1624b66c down in Southbound
Nov 25 17:20:45 compute-0 nova_compute[254092]: 2025-11-25 17:20:45.480 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:45 compute-0 ovn_controller[153477]: 2025-11-25T17:20:45Z|01559|binding|INFO|Removing iface tap0d7b29be-14 ovn-installed in OVS
Nov 25 17:20:45 compute-0 podman[416362]: 2025-11-25 17:20:45.483205005 +0000 UTC m=+0.217035608 container start a6d97c6ac8e5277679933f045cb1c5765f1e9b361b6c766f1d1e300c60a6b4f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_joliot, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:20:45 compute-0 podman[416362]: 2025-11-25 17:20:45.486961388 +0000 UTC m=+0.220792021 container attach a6d97c6ac8e5277679933f045cb1c5765f1e9b361b6c766f1d1e300c60a6b4f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_joliot, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:20:45 compute-0 nova_compute[254092]: 2025-11-25 17:20:45.488 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.489 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:c7:1b 2001:db8::f816:3eff:fe53:c71b'], port_security=['fa:16:3e:53:c7:1b 2001:db8::f816:3eff:fe53:c71b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe53:c71b/64', 'neutron:device_id': '9be9cbb4-878e-4fce-be7c-44b49480ff0e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ad3526c9-ce3b-41ed-ae27-775dca6a1319', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd0d6c638-e8a2-4056-927f-3668170c4e08', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=40d92b3b-5fa0-43d2-8278-bfe8f143bfed, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=0d7b29be-145f-4598-af6d-8fec1624b66c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:20:45 compute-0 nova_compute[254092]: 2025-11-25 17:20:45.493 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:45 compute-0 nova_compute[254092]: 2025-11-25 17:20:45.494 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:20:45 compute-0 nova_compute[254092]: 2025-11-25 17:20:45.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:20:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.495 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[61f1c348-1459-4a9c-b740-31737a2d176f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:20:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.498 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[24b2ec05-1f96-4c62-b6cc-31afb34144c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:20:45 compute-0 nova_compute[254092]: 2025-11-25 17:20:45.516 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:20:45 compute-0 nova_compute[254092]: 2025-11-25 17:20:45.517 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:20:45 compute-0 nova_compute[254092]: 2025-11-25 17:20:45.517 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:20:45 compute-0 nova_compute[254092]: 2025-11-25 17:20:45.517 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:20:45 compute-0 nova_compute[254092]: 2025-11-25 17:20:45.518 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:20:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.531 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[11fbca66-dc82-435b-88df-3fc25195072a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:20:45 compute-0 systemd[1]: machine-qemu\x2d178\x2dinstance\x2d00000090.scope: Deactivated successfully.
Nov 25 17:20:45 compute-0 systemd[1]: machine-qemu\x2d178\x2dinstance\x2d00000090.scope: Consumed 13.969s CPU time.
Nov 25 17:20:45 compute-0 systemd-machined[216343]: Machine qemu-178-instance-00000090 terminated.
Nov 25 17:20:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.561 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a8a61b3b-34f2-45d0-8023-190a3005f5e7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77970d23-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:82:6c:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 447], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763649, 'reachable_time': 34708, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 416400, 'error': None, 'target': 'ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:20:45 compute-0 NetworkManager[48891]: <info>  [1764091245.5783] manager: (tap0d7b29be-14): new Tun device (/org/freedesktop/NetworkManager/Devices/645)
Nov 25 17:20:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.597 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b163837e-5620-4cdb-b4fa-3064676de2f1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap77970d23-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763664, 'tstamp': 763664}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 416409, 'error': None, 'target': 'ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap77970d23-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763669, 'tstamp': 763669}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 416409, 'error': None, 'target': 'ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:20:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.598 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77970d23-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:20:45 compute-0 nova_compute[254092]: 2025-11-25 17:20:45.600 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:45 compute-0 nova_compute[254092]: 2025-11-25 17:20:45.612 254096 INFO nova.virt.libvirt.driver [-] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Instance destroyed successfully.
Nov 25 17:20:45 compute-0 nova_compute[254092]: 2025-11-25 17:20:45.612 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:45 compute-0 nova_compute[254092]: 2025-11-25 17:20:45.613 254096 DEBUG nova.objects.instance [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'resources' on Instance uuid 9be9cbb4-878e-4fce-be7c-44b49480ff0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:20:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.612 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap77970d23-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:20:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.613 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:20:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.613 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap77970d23-50, col_values=(('external_ids', {'iface-id': '56e5945a-607b-4b8d-baa7-f3eab82e874d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:20:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.613 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:20:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.614 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 0d7b29be-145f-4598-af6d-8fec1624b66c in datapath ad3526c9-ce3b-41ed-ae27-775dca6a1319 unbound from our chassis
Nov 25 17:20:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.615 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ad3526c9-ce3b-41ed-ae27-775dca6a1319
Nov 25 17:20:45 compute-0 nova_compute[254092]: 2025-11-25 17:20:45.630 254096 DEBUG nova.virt.libvirt.vif [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:20:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-247846949',display_name='tempest-TestGettingAddress-server-247846949',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-247846949',id=144,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEzqaO2W9qUVAVfLhHnbtZ9O0tM3aVdXiBKXRhRrnEjXChl2Hh7jScqfJN9XRIykUO2wAiKe7A8/q0zxul64iRZVmOh42YGZD0paqormAt73h0roulwz53Jgxaj06Lt+ZQ==',key_name='tempest-TestGettingAddress-737469986',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:20:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-v5mcc982',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:20:20Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=9be9cbb4-878e-4fce-be7c-44b49480ff0e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "67a238a8-a6f3-4b0f-b4da-7800dcf79375", "address": "fa:16:3e:dd:c2:78", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67a238a8-a6", "ovs_interfaceid": "67a238a8-a6f3-4b0f-b4da-7800dcf79375", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:20:45 compute-0 nova_compute[254092]: 2025-11-25 17:20:45.631 254096 DEBUG nova.network.os_vif_util [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "67a238a8-a6f3-4b0f-b4da-7800dcf79375", "address": "fa:16:3e:dd:c2:78", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67a238a8-a6", "ovs_interfaceid": "67a238a8-a6f3-4b0f-b4da-7800dcf79375", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:20:45 compute-0 nova_compute[254092]: 2025-11-25 17:20:45.632 254096 DEBUG nova.network.os_vif_util [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:dd:c2:78,bridge_name='br-int',has_traffic_filtering=True,id=67a238a8-a6f3-4b0f-b4da-7800dcf79375,network=Network(77970d23-547a-4e3a-bddf-f4770a15bf81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67a238a8-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:20:45 compute-0 nova_compute[254092]: 2025-11-25 17:20:45.633 254096 DEBUG os_vif [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:dd:c2:78,bridge_name='br-int',has_traffic_filtering=True,id=67a238a8-a6f3-4b0f-b4da-7800dcf79375,network=Network(77970d23-547a-4e3a-bddf-f4770a15bf81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67a238a8-a6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:20:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.633 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[cf169a03-a012-4042-9012-1052a16cfaf6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:20:45 compute-0 nova_compute[254092]: 2025-11-25 17:20:45.636 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:45 compute-0 nova_compute[254092]: 2025-11-25 17:20:45.636 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67a238a8-a6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:20:45 compute-0 nova_compute[254092]: 2025-11-25 17:20:45.644 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:45 compute-0 nova_compute[254092]: 2025-11-25 17:20:45.648 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:45 compute-0 nova_compute[254092]: 2025-11-25 17:20:45.652 254096 INFO os_vif [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:dd:c2:78,bridge_name='br-int',has_traffic_filtering=True,id=67a238a8-a6f3-4b0f-b4da-7800dcf79375,network=Network(77970d23-547a-4e3a-bddf-f4770a15bf81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67a238a8-a6')
Nov 25 17:20:45 compute-0 nova_compute[254092]: 2025-11-25 17:20:45.653 254096 DEBUG nova.virt.libvirt.vif [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:20:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-247846949',display_name='tempest-TestGettingAddress-server-247846949',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-247846949',id=144,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEzqaO2W9qUVAVfLhHnbtZ9O0tM3aVdXiBKXRhRrnEjXChl2Hh7jScqfJN9XRIykUO2wAiKe7A8/q0zxul64iRZVmOh42YGZD0paqormAt73h0roulwz53Jgxaj06Lt+ZQ==',key_name='tempest-TestGettingAddress-737469986',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:20:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-v5mcc982',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:20:20Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=9be9cbb4-878e-4fce-be7c-44b49480ff0e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0d7b29be-145f-4598-af6d-8fec1624b66c", "address": "fa:16:3e:53:c7:1b", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe53:c71b", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d7b29be-14", "ovs_interfaceid": "0d7b29be-145f-4598-af6d-8fec1624b66c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:20:45 compute-0 nova_compute[254092]: 2025-11-25 17:20:45.654 254096 DEBUG nova.network.os_vif_util [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "0d7b29be-145f-4598-af6d-8fec1624b66c", "address": "fa:16:3e:53:c7:1b", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe53:c71b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d7b29be-14", "ovs_interfaceid": "0d7b29be-145f-4598-af6d-8fec1624b66c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:20:45 compute-0 nova_compute[254092]: 2025-11-25 17:20:45.654 254096 DEBUG nova.network.os_vif_util [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:c7:1b,bridge_name='br-int',has_traffic_filtering=True,id=0d7b29be-145f-4598-af6d-8fec1624b66c,network=Network(ad3526c9-ce3b-41ed-ae27-775dca6a1319),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d7b29be-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:20:45 compute-0 nova_compute[254092]: 2025-11-25 17:20:45.655 254096 DEBUG os_vif [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:c7:1b,bridge_name='br-int',has_traffic_filtering=True,id=0d7b29be-145f-4598-af6d-8fec1624b66c,network=Network(ad3526c9-ce3b-41ed-ae27-775dca6a1319),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d7b29be-14') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:20:45 compute-0 nova_compute[254092]: 2025-11-25 17:20:45.656 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:45 compute-0 nova_compute[254092]: 2025-11-25 17:20:45.656 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0d7b29be-14, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:20:45 compute-0 nova_compute[254092]: 2025-11-25 17:20:45.658 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:45 compute-0 nova_compute[254092]: 2025-11-25 17:20:45.661 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:20:45 compute-0 nova_compute[254092]: 2025-11-25 17:20:45.663 254096 INFO os_vif [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:c7:1b,bridge_name='br-int',has_traffic_filtering=True,id=0d7b29be-145f-4598-af6d-8fec1624b66c,network=Network(ad3526c9-ce3b-41ed-ae27-775dca6a1319),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d7b29be-14')
Nov 25 17:20:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.670 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[0775b075-5077-4379-9f59-3b37b14a2fc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:20:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.675 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[5966ea30-be46-4073-a704-e78a7112480b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:20:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.718 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[79280046-56ba-4cf0-908c-63f793302a8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:20:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.737 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0ba75f6c-1275-4149-9c00-b92e7e93ed45]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapad3526c9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:23:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 5, 'rx_bytes': 2612, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 5, 'rx_bytes': 2612, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 448], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763759, 'reachable_time': 26526, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2192, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2192, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 416467, 'error': None, 'target': 'ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:20:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.756 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[4b3aa770-8b3c-4f32-9541-1ec830c44444]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapad3526c9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763771, 'tstamp': 763771}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 416468, 'error': None, 'target': 'ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:20:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.758 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapad3526c9-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:20:45 compute-0 nova_compute[254092]: 2025-11-25 17:20:45.760 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.761 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapad3526c9-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:20:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.762 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:20:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.762 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapad3526c9-c0, col_values=(('external_ids', {'iface-id': '824643fe-f8f7-44ad-9711-a38080a49171'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:20:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:45.762 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:20:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:20:46 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2249891955' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:20:46 compute-0 nova_compute[254092]: 2025-11-25 17:20:46.088 254096 INFO nova.virt.libvirt.driver [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Deleting instance files /var/lib/nova/instances/9be9cbb4-878e-4fce-be7c-44b49480ff0e_del
Nov 25 17:20:46 compute-0 nova_compute[254092]: 2025-11-25 17:20:46.088 254096 INFO nova.virt.libvirt.driver [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Deletion of /var/lib/nova/instances/9be9cbb4-878e-4fce-be7c-44b49480ff0e_del complete
Nov 25 17:20:46 compute-0 nova_compute[254092]: 2025-11-25 17:20:46.105 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:20:46 compute-0 nova_compute[254092]: 2025-11-25 17:20:46.134 254096 INFO nova.compute.manager [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Took 0.80 seconds to destroy the instance on the hypervisor.
Nov 25 17:20:46 compute-0 nova_compute[254092]: 2025-11-25 17:20:46.135 254096 DEBUG oslo.service.loopingcall [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 17:20:46 compute-0 nova_compute[254092]: 2025-11-25 17:20:46.135 254096 DEBUG nova.compute.manager [-] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 17:20:46 compute-0 nova_compute[254092]: 2025-11-25 17:20:46.135 254096 DEBUG nova.network.neutron [-] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 17:20:46 compute-0 nova_compute[254092]: 2025-11-25 17:20:46.173 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:20:46 compute-0 nova_compute[254092]: 2025-11-25 17:20:46.174 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:20:46 compute-0 nova_compute[254092]: 2025-11-25 17:20:46.334 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:20:46 compute-0 nova_compute[254092]: 2025-11-25 17:20:46.335 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3406MB free_disk=59.897212982177734GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:20:46 compute-0 nova_compute[254092]: 2025-11-25 17:20:46.335 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:20:46 compute-0 nova_compute[254092]: 2025-11-25 17:20:46.335 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:20:46 compute-0 ceph-mon[74985]: pgmap v2916: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 320 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 25 17:20:46 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2249891955' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:20:46 compute-0 gallant_joliot[416378]: {
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:     "0": [
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:         {
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:             "devices": [
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:                 "/dev/loop3"
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:             ],
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:             "lv_name": "ceph_lv0",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:             "lv_size": "21470642176",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:             "name": "ceph_lv0",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:             "tags": {
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:                 "ceph.cluster_name": "ceph",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:                 "ceph.crush_device_class": "",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:                 "ceph.encrypted": "0",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:                 "ceph.osd_id": "0",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:                 "ceph.type": "block",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:                 "ceph.vdo": "0"
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:             },
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:             "type": "block",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:             "vg_name": "ceph_vg0"
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:         }
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:     ],
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:     "1": [
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:         {
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:             "devices": [
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:                 "/dev/loop4"
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:             ],
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:             "lv_name": "ceph_lv1",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:             "lv_size": "21470642176",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:             "name": "ceph_lv1",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:             "tags": {
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:                 "ceph.cluster_name": "ceph",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:                 "ceph.crush_device_class": "",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:                 "ceph.encrypted": "0",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:                 "ceph.osd_id": "1",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:                 "ceph.type": "block",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:                 "ceph.vdo": "0"
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:             },
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:             "type": "block",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:             "vg_name": "ceph_vg1"
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:         }
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:     ],
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:     "2": [
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:         {
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:             "devices": [
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:                 "/dev/loop5"
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:             ],
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:             "lv_name": "ceph_lv2",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:             "lv_size": "21470642176",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:             "name": "ceph_lv2",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:             "tags": {
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:                 "ceph.cluster_name": "ceph",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:                 "ceph.crush_device_class": "",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:                 "ceph.encrypted": "0",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:                 "ceph.osd_id": "2",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:                 "ceph.type": "block",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:                 "ceph.vdo": "0"
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:             },
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:             "type": "block",
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:             "vg_name": "ceph_vg2"
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:         }
Nov 25 17:20:46 compute-0 gallant_joliot[416378]:     ]
Nov 25 17:20:46 compute-0 gallant_joliot[416378]: }
Nov 25 17:20:46 compute-0 systemd[1]: libpod-a6d97c6ac8e5277679933f045cb1c5765f1e9b361b6c766f1d1e300c60a6b4f5.scope: Deactivated successfully.
Nov 25 17:20:46 compute-0 podman[416362]: 2025-11-25 17:20:46.403518927 +0000 UTC m=+1.137349530 container died a6d97c6ac8e5277679933f045cb1c5765f1e9b361b6c766f1d1e300c60a6b4f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 17:20:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a8b6628b4e6da74b9243852b604db8b1786fedac131efe87321d0b3a679d31d-merged.mount: Deactivated successfully.
Nov 25 17:20:46 compute-0 nova_compute[254092]: 2025-11-25 17:20:46.449 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:46 compute-0 podman[416362]: 2025-11-25 17:20:46.461618178 +0000 UTC m=+1.195448781 container remove a6d97c6ac8e5277679933f045cb1c5765f1e9b361b6c766f1d1e300c60a6b4f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 17:20:46 compute-0 systemd[1]: libpod-conmon-a6d97c6ac8e5277679933f045cb1c5765f1e9b361b6c766f1d1e300c60a6b4f5.scope: Deactivated successfully.
Nov 25 17:20:46 compute-0 sudo[416257]: pam_unix(sudo:session): session closed for user root
Nov 25 17:20:46 compute-0 nova_compute[254092]: 2025-11-25 17:20:46.507 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 392384a1-1741-4504-b2c2-557420bbbbd0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 17:20:46 compute-0 nova_compute[254092]: 2025-11-25 17:20:46.508 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 9be9cbb4-878e-4fce-be7c-44b49480ff0e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 17:20:46 compute-0 nova_compute[254092]: 2025-11-25 17:20:46.509 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:20:46 compute-0 nova_compute[254092]: 2025-11-25 17:20:46.509 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:20:46 compute-0 nova_compute[254092]: 2025-11-25 17:20:46.574 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing inventories for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 17:20:46 compute-0 sudo[416489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:20:46 compute-0 sudo[416489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:20:46 compute-0 sudo[416489]: pam_unix(sudo:session): session closed for user root
Nov 25 17:20:46 compute-0 nova_compute[254092]: 2025-11-25 17:20:46.621 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating ProviderTree inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 17:20:46 compute-0 nova_compute[254092]: 2025-11-25 17:20:46.622 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 17:20:46 compute-0 nova_compute[254092]: 2025-11-25 17:20:46.641 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing aggregate associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 17:20:46 compute-0 sudo[416514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:20:46 compute-0 sudo[416514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:20:46 compute-0 sudo[416514]: pam_unix(sudo:session): session closed for user root
Nov 25 17:20:46 compute-0 nova_compute[254092]: 2025-11-25 17:20:46.661 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing trait associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 17:20:46 compute-0 nova_compute[254092]: 2025-11-25 17:20:46.702 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:20:46 compute-0 sudo[416539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:20:46 compute-0 sudo[416539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:20:46 compute-0 sudo[416539]: pam_unix(sudo:session): session closed for user root
Nov 25 17:20:46 compute-0 sudo[416565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:20:46 compute-0 sudo[416565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:20:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2917: 321 pgs: 321 active+clean; 170 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 337 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Nov 25 17:20:47 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:20:47 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1165480102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:20:47 compute-0 podman[416649]: 2025-11-25 17:20:47.174990535 +0000 UTC m=+0.045209041 container create 17e1c4f5ac8733278c489acc7affe77c57d03b302c5cf561a2bbce7cc3c99d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hofstadter, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:20:47 compute-0 nova_compute[254092]: 2025-11-25 17:20:47.181 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:20:47 compute-0 nova_compute[254092]: 2025-11-25 17:20:47.188 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:20:47 compute-0 nova_compute[254092]: 2025-11-25 17:20:47.205 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:20:47 compute-0 systemd[1]: Started libpod-conmon-17e1c4f5ac8733278c489acc7affe77c57d03b302c5cf561a2bbce7cc3c99d82.scope.
Nov 25 17:20:47 compute-0 nova_compute[254092]: 2025-11-25 17:20:47.230 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:20:47 compute-0 nova_compute[254092]: 2025-11-25 17:20:47.230 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.895s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:20:47 compute-0 podman[416649]: 2025-11-25 17:20:47.158065745 +0000 UTC m=+0.028284241 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:20:47 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:20:47 compute-0 podman[416649]: 2025-11-25 17:20:47.272853929 +0000 UTC m=+0.143072425 container init 17e1c4f5ac8733278c489acc7affe77c57d03b302c5cf561a2bbce7cc3c99d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hofstadter, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:20:47 compute-0 podman[416649]: 2025-11-25 17:20:47.287041975 +0000 UTC m=+0.157260491 container start 17e1c4f5ac8733278c489acc7affe77c57d03b302c5cf561a2bbce7cc3c99d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hofstadter, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:20:47 compute-0 podman[416649]: 2025-11-25 17:20:47.292193126 +0000 UTC m=+0.162411652 container attach 17e1c4f5ac8733278c489acc7affe77c57d03b302c5cf561a2bbce7cc3c99d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:20:47 compute-0 focused_hofstadter[416667]: 167 167
Nov 25 17:20:47 compute-0 systemd[1]: libpod-17e1c4f5ac8733278c489acc7affe77c57d03b302c5cf561a2bbce7cc3c99d82.scope: Deactivated successfully.
Nov 25 17:20:47 compute-0 podman[416649]: 2025-11-25 17:20:47.293129942 +0000 UTC m=+0.163348418 container died 17e1c4f5ac8733278c489acc7affe77c57d03b302c5cf561a2bbce7cc3c99d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hofstadter, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:20:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-7eef56225a18b4c2d1c6a8e11865821a46bc3c7d4229e6fe9f31c3efdf8a478b-merged.mount: Deactivated successfully.
Nov 25 17:20:47 compute-0 podman[416649]: 2025-11-25 17:20:47.34050838 +0000 UTC m=+0.210726856 container remove 17e1c4f5ac8733278c489acc7affe77c57d03b302c5cf561a2bbce7cc3c99d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hofstadter, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:20:47 compute-0 systemd[1]: libpod-conmon-17e1c4f5ac8733278c489acc7affe77c57d03b302c5cf561a2bbce7cc3c99d82.scope: Deactivated successfully.
Nov 25 17:20:47 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1165480102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:20:47 compute-0 nova_compute[254092]: 2025-11-25 17:20:47.411 254096 DEBUG nova.compute.manager [req-6e7a583b-afe9-4560-8e5f-49900d3e18d4 req-8674b38c-f9bc-47e9-88d9-d1ce02a61613 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Received event network-vif-unplugged-67a238a8-a6f3-4b0f-b4da-7800dcf79375 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:20:47 compute-0 nova_compute[254092]: 2025-11-25 17:20:47.412 254096 DEBUG oslo_concurrency.lockutils [req-6e7a583b-afe9-4560-8e5f-49900d3e18d4 req-8674b38c-f9bc-47e9-88d9-d1ce02a61613 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:20:47 compute-0 nova_compute[254092]: 2025-11-25 17:20:47.412 254096 DEBUG oslo_concurrency.lockutils [req-6e7a583b-afe9-4560-8e5f-49900d3e18d4 req-8674b38c-f9bc-47e9-88d9-d1ce02a61613 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:20:47 compute-0 nova_compute[254092]: 2025-11-25 17:20:47.412 254096 DEBUG oslo_concurrency.lockutils [req-6e7a583b-afe9-4560-8e5f-49900d3e18d4 req-8674b38c-f9bc-47e9-88d9-d1ce02a61613 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:20:47 compute-0 nova_compute[254092]: 2025-11-25 17:20:47.413 254096 DEBUG nova.compute.manager [req-6e7a583b-afe9-4560-8e5f-49900d3e18d4 req-8674b38c-f9bc-47e9-88d9-d1ce02a61613 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] No waiting events found dispatching network-vif-unplugged-67a238a8-a6f3-4b0f-b4da-7800dcf79375 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:20:47 compute-0 nova_compute[254092]: 2025-11-25 17:20:47.413 254096 DEBUG nova.compute.manager [req-6e7a583b-afe9-4560-8e5f-49900d3e18d4 req-8674b38c-f9bc-47e9-88d9-d1ce02a61613 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Received event network-vif-unplugged-67a238a8-a6f3-4b0f-b4da-7800dcf79375 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 17:20:47 compute-0 nova_compute[254092]: 2025-11-25 17:20:47.413 254096 DEBUG nova.compute.manager [req-6e7a583b-afe9-4560-8e5f-49900d3e18d4 req-8674b38c-f9bc-47e9-88d9-d1ce02a61613 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Received event network-vif-plugged-67a238a8-a6f3-4b0f-b4da-7800dcf79375 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:20:47 compute-0 nova_compute[254092]: 2025-11-25 17:20:47.414 254096 DEBUG oslo_concurrency.lockutils [req-6e7a583b-afe9-4560-8e5f-49900d3e18d4 req-8674b38c-f9bc-47e9-88d9-d1ce02a61613 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:20:47 compute-0 nova_compute[254092]: 2025-11-25 17:20:47.414 254096 DEBUG oslo_concurrency.lockutils [req-6e7a583b-afe9-4560-8e5f-49900d3e18d4 req-8674b38c-f9bc-47e9-88d9-d1ce02a61613 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:20:47 compute-0 nova_compute[254092]: 2025-11-25 17:20:47.414 254096 DEBUG oslo_concurrency.lockutils [req-6e7a583b-afe9-4560-8e5f-49900d3e18d4 req-8674b38c-f9bc-47e9-88d9-d1ce02a61613 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:20:47 compute-0 nova_compute[254092]: 2025-11-25 17:20:47.415 254096 DEBUG nova.compute.manager [req-6e7a583b-afe9-4560-8e5f-49900d3e18d4 req-8674b38c-f9bc-47e9-88d9-d1ce02a61613 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] No waiting events found dispatching network-vif-plugged-67a238a8-a6f3-4b0f-b4da-7800dcf79375 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:20:47 compute-0 nova_compute[254092]: 2025-11-25 17:20:47.415 254096 WARNING nova.compute.manager [req-6e7a583b-afe9-4560-8e5f-49900d3e18d4 req-8674b38c-f9bc-47e9-88d9-d1ce02a61613 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Received unexpected event network-vif-plugged-67a238a8-a6f3-4b0f-b4da-7800dcf79375 for instance with vm_state active and task_state deleting.
Nov 25 17:20:47 compute-0 nova_compute[254092]: 2025-11-25 17:20:47.415 254096 DEBUG nova.compute.manager [req-6e7a583b-afe9-4560-8e5f-49900d3e18d4 req-8674b38c-f9bc-47e9-88d9-d1ce02a61613 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Received event network-vif-unplugged-0d7b29be-145f-4598-af6d-8fec1624b66c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:20:47 compute-0 nova_compute[254092]: 2025-11-25 17:20:47.415 254096 DEBUG oslo_concurrency.lockutils [req-6e7a583b-afe9-4560-8e5f-49900d3e18d4 req-8674b38c-f9bc-47e9-88d9-d1ce02a61613 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:20:47 compute-0 nova_compute[254092]: 2025-11-25 17:20:47.416 254096 DEBUG oslo_concurrency.lockutils [req-6e7a583b-afe9-4560-8e5f-49900d3e18d4 req-8674b38c-f9bc-47e9-88d9-d1ce02a61613 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:20:47 compute-0 nova_compute[254092]: 2025-11-25 17:20:47.416 254096 DEBUG oslo_concurrency.lockutils [req-6e7a583b-afe9-4560-8e5f-49900d3e18d4 req-8674b38c-f9bc-47e9-88d9-d1ce02a61613 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:20:47 compute-0 nova_compute[254092]: 2025-11-25 17:20:47.416 254096 DEBUG nova.compute.manager [req-6e7a583b-afe9-4560-8e5f-49900d3e18d4 req-8674b38c-f9bc-47e9-88d9-d1ce02a61613 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] No waiting events found dispatching network-vif-unplugged-0d7b29be-145f-4598-af6d-8fec1624b66c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:20:47 compute-0 nova_compute[254092]: 2025-11-25 17:20:47.417 254096 DEBUG nova.compute.manager [req-6e7a583b-afe9-4560-8e5f-49900d3e18d4 req-8674b38c-f9bc-47e9-88d9-d1ce02a61613 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Received event network-vif-unplugged-0d7b29be-145f-4598-af6d-8fec1624b66c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 17:20:47 compute-0 nova_compute[254092]: 2025-11-25 17:20:47.417 254096 DEBUG nova.compute.manager [req-6e7a583b-afe9-4560-8e5f-49900d3e18d4 req-8674b38c-f9bc-47e9-88d9-d1ce02a61613 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Received event network-vif-plugged-0d7b29be-145f-4598-af6d-8fec1624b66c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:20:47 compute-0 nova_compute[254092]: 2025-11-25 17:20:47.417 254096 DEBUG oslo_concurrency.lockutils [req-6e7a583b-afe9-4560-8e5f-49900d3e18d4 req-8674b38c-f9bc-47e9-88d9-d1ce02a61613 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:20:47 compute-0 nova_compute[254092]: 2025-11-25 17:20:47.417 254096 DEBUG oslo_concurrency.lockutils [req-6e7a583b-afe9-4560-8e5f-49900d3e18d4 req-8674b38c-f9bc-47e9-88d9-d1ce02a61613 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:20:47 compute-0 nova_compute[254092]: 2025-11-25 17:20:47.418 254096 DEBUG oslo_concurrency.lockutils [req-6e7a583b-afe9-4560-8e5f-49900d3e18d4 req-8674b38c-f9bc-47e9-88d9-d1ce02a61613 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:20:47 compute-0 nova_compute[254092]: 2025-11-25 17:20:47.418 254096 DEBUG nova.compute.manager [req-6e7a583b-afe9-4560-8e5f-49900d3e18d4 req-8674b38c-f9bc-47e9-88d9-d1ce02a61613 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] No waiting events found dispatching network-vif-plugged-0d7b29be-145f-4598-af6d-8fec1624b66c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:20:47 compute-0 nova_compute[254092]: 2025-11-25 17:20:47.420 254096 WARNING nova.compute.manager [req-6e7a583b-afe9-4560-8e5f-49900d3e18d4 req-8674b38c-f9bc-47e9-88d9-d1ce02a61613 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Received unexpected event network-vif-plugged-0d7b29be-145f-4598-af6d-8fec1624b66c for instance with vm_state active and task_state deleting.
Nov 25 17:20:47 compute-0 nova_compute[254092]: 2025-11-25 17:20:47.543 254096 DEBUG nova.network.neutron [req-86a35619-5965-4d68-a7e7-ff7f6c1ec3f6 req-1ce9cf46-5b21-4b08-a42c-ef804596c309 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Updated VIF entry in instance network info cache for port 67a238a8-a6f3-4b0f-b4da-7800dcf79375. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:20:47 compute-0 nova_compute[254092]: 2025-11-25 17:20:47.544 254096 DEBUG nova.network.neutron [req-86a35619-5965-4d68-a7e7-ff7f6c1ec3f6 req-1ce9cf46-5b21-4b08-a42c-ef804596c309 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Updating instance_info_cache with network_info: [{"id": "67a238a8-a6f3-4b0f-b4da-7800dcf79375", "address": "fa:16:3e:dd:c2:78", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67a238a8-a6", "ovs_interfaceid": "67a238a8-a6f3-4b0f-b4da-7800dcf79375", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0d7b29be-145f-4598-af6d-8fec1624b66c", "address": "fa:16:3e:53:c7:1b", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe53:c71b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d7b29be-14", "ovs_interfaceid": "0d7b29be-145f-4598-af6d-8fec1624b66c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:20:47 compute-0 nova_compute[254092]: 2025-11-25 17:20:47.571 254096 DEBUG oslo_concurrency.lockutils [req-86a35619-5965-4d68-a7e7-ff7f6c1ec3f6 req-1ce9cf46-5b21-4b08-a42c-ef804596c309 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-9be9cbb4-878e-4fce-be7c-44b49480ff0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:20:47 compute-0 podman[416692]: 2025-11-25 17:20:47.599081079 +0000 UTC m=+0.070186451 container create 575fcf302e90d13ebb9e4b272d4be64d45f0888c4d3f05057b5ed97bae197377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:20:47 compute-0 systemd[1]: Started libpod-conmon-575fcf302e90d13ebb9e4b272d4be64d45f0888c4d3f05057b5ed97bae197377.scope.
Nov 25 17:20:47 compute-0 podman[416692]: 2025-11-25 17:20:47.579884786 +0000 UTC m=+0.050990178 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:20:47 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:20:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5c788ac5961507011194c2597659d1d93af16da00ba890977ad284e8035d699/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:20:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5c788ac5961507011194c2597659d1d93af16da00ba890977ad284e8035d699/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:20:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5c788ac5961507011194c2597659d1d93af16da00ba890977ad284e8035d699/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:20:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5c788ac5961507011194c2597659d1d93af16da00ba890977ad284e8035d699/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:20:47 compute-0 podman[416692]: 2025-11-25 17:20:47.693777357 +0000 UTC m=+0.164882739 container init 575fcf302e90d13ebb9e4b272d4be64d45f0888c4d3f05057b5ed97bae197377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 17:20:47 compute-0 podman[416692]: 2025-11-25 17:20:47.699332278 +0000 UTC m=+0.170437640 container start 575fcf302e90d13ebb9e4b272d4be64d45f0888c4d3f05057b5ed97bae197377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_booth, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:20:47 compute-0 podman[416692]: 2025-11-25 17:20:47.702822713 +0000 UTC m=+0.173928105 container attach 575fcf302e90d13ebb9e4b272d4be64d45f0888c4d3f05057b5ed97bae197377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_booth, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:20:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:47.889 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=59, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=58) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:20:47 compute-0 nova_compute[254092]: 2025-11-25 17:20:47.890 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:47 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:47.891 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 17:20:47 compute-0 nova_compute[254092]: 2025-11-25 17:20:47.914 254096 DEBUG nova.network.neutron [-] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:20:47 compute-0 nova_compute[254092]: 2025-11-25 17:20:47.926 254096 INFO nova.compute.manager [-] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Took 1.79 seconds to deallocate network for instance.
Nov 25 17:20:47 compute-0 nova_compute[254092]: 2025-11-25 17:20:47.963 254096 DEBUG oslo_concurrency.lockutils [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:20:47 compute-0 nova_compute[254092]: 2025-11-25 17:20:47.964 254096 DEBUG oslo_concurrency.lockutils [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:20:48 compute-0 nova_compute[254092]: 2025-11-25 17:20:48.037 254096 DEBUG oslo_concurrency.processutils [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:20:48 compute-0 ceph-mon[74985]: pgmap v2917: 321 pgs: 321 active+clean; 170 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 337 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Nov 25 17:20:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:20:48 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1529710750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:20:48 compute-0 nova_compute[254092]: 2025-11-25 17:20:48.483 254096 DEBUG oslo_concurrency.processutils [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:20:48 compute-0 nova_compute[254092]: 2025-11-25 17:20:48.492 254096 DEBUG nova.compute.provider_tree [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:20:48 compute-0 nova_compute[254092]: 2025-11-25 17:20:48.515 254096 DEBUG nova.scheduler.client.report [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:20:48 compute-0 nova_compute[254092]: 2025-11-25 17:20:48.540 254096 DEBUG oslo_concurrency.lockutils [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:20:48 compute-0 nova_compute[254092]: 2025-11-25 17:20:48.577 254096 INFO nova.scheduler.client.report [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Deleted allocations for instance 9be9cbb4-878e-4fce-be7c-44b49480ff0e
Nov 25 17:20:48 compute-0 nova_compute[254092]: 2025-11-25 17:20:48.643 254096 DEBUG oslo_concurrency.lockutils [None req-a25cb005-9251-4183-9ff5-1f6de76ee8e6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9be9cbb4-878e-4fce-be7c-44b49480ff0e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.315s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:20:48 compute-0 cranky_booth[416707]: {
Nov 25 17:20:48 compute-0 cranky_booth[416707]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:20:48 compute-0 cranky_booth[416707]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:20:48 compute-0 cranky_booth[416707]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:20:48 compute-0 cranky_booth[416707]:         "osd_id": 1,
Nov 25 17:20:48 compute-0 cranky_booth[416707]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:20:48 compute-0 cranky_booth[416707]:         "type": "bluestore"
Nov 25 17:20:48 compute-0 cranky_booth[416707]:     },
Nov 25 17:20:48 compute-0 cranky_booth[416707]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:20:48 compute-0 cranky_booth[416707]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:20:48 compute-0 cranky_booth[416707]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:20:48 compute-0 cranky_booth[416707]:         "osd_id": 2,
Nov 25 17:20:48 compute-0 cranky_booth[416707]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:20:48 compute-0 cranky_booth[416707]:         "type": "bluestore"
Nov 25 17:20:48 compute-0 cranky_booth[416707]:     },
Nov 25 17:20:48 compute-0 cranky_booth[416707]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:20:48 compute-0 cranky_booth[416707]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:20:48 compute-0 cranky_booth[416707]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:20:48 compute-0 cranky_booth[416707]:         "osd_id": 0,
Nov 25 17:20:48 compute-0 cranky_booth[416707]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:20:48 compute-0 cranky_booth[416707]:         "type": "bluestore"
Nov 25 17:20:48 compute-0 cranky_booth[416707]:     }
Nov 25 17:20:48 compute-0 cranky_booth[416707]: }
Nov 25 17:20:48 compute-0 systemd[1]: libpod-575fcf302e90d13ebb9e4b272d4be64d45f0888c4d3f05057b5ed97bae197377.scope: Deactivated successfully.
Nov 25 17:20:48 compute-0 systemd[1]: libpod-575fcf302e90d13ebb9e4b272d4be64d45f0888c4d3f05057b5ed97bae197377.scope: Consumed 1.106s CPU time.
Nov 25 17:20:48 compute-0 podman[416692]: 2025-11-25 17:20:48.803748679 +0000 UTC m=+1.274854041 container died 575fcf302e90d13ebb9e4b272d4be64d45f0888c4d3f05057b5ed97bae197377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_booth, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 17:20:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5c788ac5961507011194c2597659d1d93af16da00ba890977ad284e8035d699-merged.mount: Deactivated successfully.
Nov 25 17:20:48 compute-0 podman[416692]: 2025-11-25 17:20:48.853275197 +0000 UTC m=+1.324380559 container remove 575fcf302e90d13ebb9e4b272d4be64d45f0888c4d3f05057b5ed97bae197377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:20:48 compute-0 systemd[1]: libpod-conmon-575fcf302e90d13ebb9e4b272d4be64d45f0888c4d3f05057b5ed97bae197377.scope: Deactivated successfully.
Nov 25 17:20:48 compute-0 sudo[416565]: pam_unix(sudo:session): session closed for user root
Nov 25 17:20:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:20:48 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:20:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:20:48 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:20:48 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev d68ee3f2-6583-4ceb-a270-37a6e457aad0 does not exist
Nov 25 17:20:48 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev a9ede9d2-45df-4942-9960-9968761e3c39 does not exist
Nov 25 17:20:49 compute-0 sudo[416773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:20:49 compute-0 sudo[416773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:20:49 compute-0 sudo[416773]: pam_unix(sudo:session): session closed for user root
Nov 25 17:20:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2918: 321 pgs: 321 active+clean; 170 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 18 KiB/s wr, 6 op/s
Nov 25 17:20:49 compute-0 sudo[416798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:20:49 compute-0 sudo[416798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:20:49 compute-0 sudo[416798]: pam_unix(sudo:session): session closed for user root
Nov 25 17:20:49 compute-0 nova_compute[254092]: 2025-11-25 17:20:49.230 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:20:49 compute-0 nova_compute[254092]: 2025-11-25 17:20:49.231 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:20:49 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1529710750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:20:49 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:20:49 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:20:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:20:49 compute-0 nova_compute[254092]: 2025-11-25 17:20:49.490 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:20:49 compute-0 nova_compute[254092]: 2025-11-25 17:20:49.522 254096 DEBUG nova.compute.manager [req-b50f1c42-ed9f-4ecf-a361-0b978ecba198 req-099a841f-9daf-4e3c-814c-71ef74b79285 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Received event network-vif-deleted-0d7b29be-145f-4598-af6d-8fec1624b66c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:20:49 compute-0 nova_compute[254092]: 2025-11-25 17:20:49.523 254096 DEBUG nova.compute.manager [req-b50f1c42-ed9f-4ecf-a361-0b978ecba198 req-099a841f-9daf-4e3c-814c-71ef74b79285 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Received event network-vif-deleted-67a238a8-a6f3-4b0f-b4da-7800dcf79375 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:20:50 compute-0 ceph-mon[74985]: pgmap v2918: 321 pgs: 321 active+clean; 170 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 18 KiB/s wr, 6 op/s
Nov 25 17:20:50 compute-0 nova_compute[254092]: 2025-11-25 17:20:50.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:20:50 compute-0 nova_compute[254092]: 2025-11-25 17:20:50.659 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2919: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 23 KiB/s wr, 30 op/s
Nov 25 17:20:51 compute-0 nova_compute[254092]: 2025-11-25 17:20:51.451 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:20:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:20:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:20:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:20:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007602269119945302 of space, bias 1.0, pg target 0.22806807359835904 quantized to 32 (current 32)
Nov 25 17:20:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:20:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:20:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:20:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:20:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:20:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 17:20:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:20:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:20:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:20:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:20:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:20:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:20:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:20:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:20:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:20:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:20:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:20:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:20:52 compute-0 ceph-mon[74985]: pgmap v2919: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 23 KiB/s wr, 30 op/s
Nov 25 17:20:52 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:52.894 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '59'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:20:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2920: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 12 KiB/s wr, 30 op/s
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.053 254096 DEBUG nova.compute.manager [req-f49f661a-a3e6-4aff-b11e-e32f7396a149 req-56b13252-9e58-482c-a6ad-b2a09cb0f6d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Received event network-changed-b342a143-48a8-46f1-90fc-229fadeb167e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.054 254096 DEBUG nova.compute.manager [req-f49f661a-a3e6-4aff-b11e-e32f7396a149 req-56b13252-9e58-482c-a6ad-b2a09cb0f6d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Refreshing instance network info cache due to event network-changed-b342a143-48a8-46f1-90fc-229fadeb167e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.054 254096 DEBUG oslo_concurrency.lockutils [req-f49f661a-a3e6-4aff-b11e-e32f7396a149 req-56b13252-9e58-482c-a6ad-b2a09cb0f6d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-392384a1-1741-4504-b2c2-557420bbbbd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.054 254096 DEBUG oslo_concurrency.lockutils [req-f49f661a-a3e6-4aff-b11e-e32f7396a149 req-56b13252-9e58-482c-a6ad-b2a09cb0f6d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-392384a1-1741-4504-b2c2-557420bbbbd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.054 254096 DEBUG nova.network.neutron [req-f49f661a-a3e6-4aff-b11e-e32f7396a149 req-56b13252-9e58-482c-a6ad-b2a09cb0f6d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Refreshing network info cache for port b342a143-48a8-46f1-90fc-229fadeb167e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.117 254096 DEBUG oslo_concurrency.lockutils [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "392384a1-1741-4504-b2c2-557420bbbbd0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.118 254096 DEBUG oslo_concurrency.lockutils [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.118 254096 DEBUG oslo_concurrency.lockutils [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.118 254096 DEBUG oslo_concurrency.lockutils [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.119 254096 DEBUG oslo_concurrency.lockutils [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.120 254096 INFO nova.compute.manager [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Terminating instance
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.120 254096 DEBUG nova.compute.manager [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 17:20:53 compute-0 kernel: tapb342a143-48 (unregistering): left promiscuous mode
Nov 25 17:20:53 compute-0 NetworkManager[48891]: <info>  [1764091253.1817] device (tapb342a143-48): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:20:53 compute-0 ovn_controller[153477]: 2025-11-25T17:20:53Z|01560|binding|INFO|Releasing lport b342a143-48a8-46f1-90fc-229fadeb167e from this chassis (sb_readonly=0)
Nov 25 17:20:53 compute-0 ovn_controller[153477]: 2025-11-25T17:20:53Z|01561|binding|INFO|Setting lport b342a143-48a8-46f1-90fc-229fadeb167e down in Southbound
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.245 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:53 compute-0 ovn_controller[153477]: 2025-11-25T17:20:53Z|01562|binding|INFO|Removing iface tapb342a143-48 ovn-installed in OVS
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.247 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:53.255 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:d2:35 10.100.0.12'], port_security=['fa:16:3e:48:d2:35 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '392384a1-1741-4504-b2c2-557420bbbbd0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77970d23-547a-4e3a-bddf-f4770a15bf81', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd0d6c638-e8a2-4056-927f-3668170c4e08', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1deee20b-e3d5-4b5e-b63f-386bef188aec, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=b342a143-48a8-46f1-90fc-229fadeb167e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:20:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:53.257 163338 INFO neutron.agent.ovn.metadata.agent [-] Port b342a143-48a8-46f1-90fc-229fadeb167e in datapath 77970d23-547a-4e3a-bddf-f4770a15bf81 unbound from our chassis
Nov 25 17:20:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:53.259 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 77970d23-547a-4e3a-bddf-f4770a15bf81, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 17:20:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:53.260 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7807719c-f33e-4635-a26b-68ee8618254b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:20:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:53.261 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81 namespace which is not needed anymore
Nov 25 17:20:53 compute-0 kernel: tapf6eeae44-ea (unregistering): left promiscuous mode
Nov 25 17:20:53 compute-0 NetworkManager[48891]: <info>  [1764091253.2694] device (tapf6eeae44-ea): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.272 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:53 compute-0 ovn_controller[153477]: 2025-11-25T17:20:53Z|01563|binding|INFO|Releasing lport f6eeae44-ea00-4543-a1e0-9ce45fbc399f from this chassis (sb_readonly=0)
Nov 25 17:20:53 compute-0 ovn_controller[153477]: 2025-11-25T17:20:53Z|01564|binding|INFO|Setting lport f6eeae44-ea00-4543-a1e0-9ce45fbc399f down in Southbound
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.281 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:53 compute-0 ovn_controller[153477]: 2025-11-25T17:20:53Z|01565|binding|INFO|Removing iface tapf6eeae44-ea ovn-installed in OVS
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.284 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:53.291 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:88:e0 2001:db8::f816:3eff:fe0f:88e0'], port_security=['fa:16:3e:0f:88:e0 2001:db8::f816:3eff:fe0f:88e0'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe0f:88e0/64', 'neutron:device_id': '392384a1-1741-4504-b2c2-557420bbbbd0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ad3526c9-ce3b-41ed-ae27-775dca6a1319', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd0d6c638-e8a2-4056-927f-3668170c4e08', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=40d92b3b-5fa0-43d2-8278-bfe8f143bfed, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=f6eeae44-ea00-4543-a1e0-9ce45fbc399f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.311 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:53 compute-0 systemd[1]: machine-qemu\x2d177\x2dinstance\x2d0000008f.scope: Deactivated successfully.
Nov 25 17:20:53 compute-0 systemd[1]: machine-qemu\x2d177\x2dinstance\x2d0000008f.scope: Consumed 16.563s CPU time.
Nov 25 17:20:53 compute-0 systemd-machined[216343]: Machine qemu-177-instance-0000008f terminated.
Nov 25 17:20:53 compute-0 NetworkManager[48891]: <info>  [1764091253.3464] manager: (tapb342a143-48): new Tun device (/org/freedesktop/NetworkManager/Devices/646)
Nov 25 17:20:53 compute-0 NetworkManager[48891]: <info>  [1764091253.3577] manager: (tapf6eeae44-ea): new Tun device (/org/freedesktop/NetworkManager/Devices/647)
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.375 254096 INFO nova.virt.libvirt.driver [-] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Instance destroyed successfully.
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.376 254096 DEBUG nova.objects.instance [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'resources' on Instance uuid 392384a1-1741-4504-b2c2-557420bbbbd0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.388 254096 DEBUG nova.virt.libvirt.vif [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:19:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-473687599',display_name='tempest-TestGettingAddress-server-473687599',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-473687599',id=143,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEzqaO2W9qUVAVfLhHnbtZ9O0tM3aVdXiBKXRhRrnEjXChl2Hh7jScqfJN9XRIykUO2wAiKe7A8/q0zxul64iRZVmOh42YGZD0paqormAt73h0roulwz53Jgxaj06Lt+ZQ==',key_name='tempest-TestGettingAddress-737469986',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:19:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-avdd1di6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:19:39Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=392384a1-1741-4504-b2c2-557420bbbbd0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b342a143-48a8-46f1-90fc-229fadeb167e", "address": "fa:16:3e:48:d2:35", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb342a143-48", "ovs_interfaceid": "b342a143-48a8-46f1-90fc-229fadeb167e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.389 254096 DEBUG nova.network.os_vif_util [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "b342a143-48a8-46f1-90fc-229fadeb167e", "address": "fa:16:3e:48:d2:35", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb342a143-48", "ovs_interfaceid": "b342a143-48a8-46f1-90fc-229fadeb167e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.391 254096 DEBUG nova.network.os_vif_util [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:48:d2:35,bridge_name='br-int',has_traffic_filtering=True,id=b342a143-48a8-46f1-90fc-229fadeb167e,network=Network(77970d23-547a-4e3a-bddf-f4770a15bf81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb342a143-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.391 254096 DEBUG os_vif [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:48:d2:35,bridge_name='br-int',has_traffic_filtering=True,id=b342a143-48a8-46f1-90fc-229fadeb167e,network=Network(77970d23-547a-4e3a-bddf-f4770a15bf81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb342a143-48') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.394 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.394 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb342a143-48, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.396 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.399 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.403 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.407 254096 INFO os_vif [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:48:d2:35,bridge_name='br-int',has_traffic_filtering=True,id=b342a143-48a8-46f1-90fc-229fadeb167e,network=Network(77970d23-547a-4e3a-bddf-f4770a15bf81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb342a143-48')
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.409 254096 DEBUG nova.virt.libvirt.vif [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:19:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-473687599',display_name='tempest-TestGettingAddress-server-473687599',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-473687599',id=143,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEzqaO2W9qUVAVfLhHnbtZ9O0tM3aVdXiBKXRhRrnEjXChl2Hh7jScqfJN9XRIykUO2wAiKe7A8/q0zxul64iRZVmOh42YGZD0paqormAt73h0roulwz53Jgxaj06Lt+ZQ==',key_name='tempest-TestGettingAddress-737469986',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:19:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-avdd1di6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:19:39Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=392384a1-1741-4504-b2c2-557420bbbbd0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "address": "fa:16:3e:0f:88:e0", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0f:88e0", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6eeae44-ea", "ovs_interfaceid": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.409 254096 DEBUG nova.network.os_vif_util [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "address": "fa:16:3e:0f:88:e0", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0f:88e0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6eeae44-ea", "ovs_interfaceid": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.410 254096 DEBUG nova.network.os_vif_util [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0f:88:e0,bridge_name='br-int',has_traffic_filtering=True,id=f6eeae44-ea00-4543-a1e0-9ce45fbc399f,network=Network(ad3526c9-ce3b-41ed-ae27-775dca6a1319),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6eeae44-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.411 254096 DEBUG os_vif [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0f:88:e0,bridge_name='br-int',has_traffic_filtering=True,id=f6eeae44-ea00-4543-a1e0-9ce45fbc399f,network=Network(ad3526c9-ce3b-41ed-ae27-775dca6a1319),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6eeae44-ea') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.413 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.414 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6eeae44-ea, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.417 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.419 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.422 254096 INFO os_vif [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0f:88:e0,bridge_name='br-int',has_traffic_filtering=True,id=f6eeae44-ea00-4543-a1e0-9ce45fbc399f,network=Network(ad3526c9-ce3b-41ed-ae27-775dca6a1319),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6eeae44-ea')
Nov 25 17:20:53 compute-0 neutron-haproxy-ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81[415059]: [NOTICE]   (415063) : haproxy version is 2.8.14-c23fe91
Nov 25 17:20:53 compute-0 neutron-haproxy-ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81[415059]: [NOTICE]   (415063) : path to executable is /usr/sbin/haproxy
Nov 25 17:20:53 compute-0 neutron-haproxy-ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81[415059]: [WARNING]  (415063) : Exiting Master process...
Nov 25 17:20:53 compute-0 systemd[1]: libpod-e9e5d2e4494488d2c7db478ccd3bcc82bb7d52806a882ad409ab63baf126e249.scope: Deactivated successfully.
Nov 25 17:20:53 compute-0 neutron-haproxy-ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81[415059]: [ALERT]    (415063) : Current worker (415065) exited with code 143 (Terminated)
Nov 25 17:20:53 compute-0 neutron-haproxy-ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81[415059]: [WARNING]  (415063) : All workers exited. Exiting... (0)
Nov 25 17:20:53 compute-0 conmon[415059]: conmon e9e5d2e4494488d2c7db <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e9e5d2e4494488d2c7db478ccd3bcc82bb7d52806a882ad409ab63baf126e249.scope/container/memory.events
Nov 25 17:20:53 compute-0 podman[416874]: 2025-11-25 17:20:53.456894995 +0000 UTC m=+0.057909528 container died e9e5d2e4494488d2c7db478ccd3bcc82bb7d52806a882ad409ab63baf126e249 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.480 254096 DEBUG nova.compute.manager [req-06a260ff-e2d1-44de-906d-25384323288d req-b459810f-893b-4b70-9781-a65596b7da95 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Received event network-vif-unplugged-b342a143-48a8-46f1-90fc-229fadeb167e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.481 254096 DEBUG oslo_concurrency.lockutils [req-06a260ff-e2d1-44de-906d-25384323288d req-b459810f-893b-4b70-9781-a65596b7da95 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.481 254096 DEBUG oslo_concurrency.lockutils [req-06a260ff-e2d1-44de-906d-25384323288d req-b459810f-893b-4b70-9781-a65596b7da95 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.481 254096 DEBUG oslo_concurrency.lockutils [req-06a260ff-e2d1-44de-906d-25384323288d req-b459810f-893b-4b70-9781-a65596b7da95 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.481 254096 DEBUG nova.compute.manager [req-06a260ff-e2d1-44de-906d-25384323288d req-b459810f-893b-4b70-9781-a65596b7da95 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] No waiting events found dispatching network-vif-unplugged-b342a143-48a8-46f1-90fc-229fadeb167e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.482 254096 DEBUG nova.compute.manager [req-06a260ff-e2d1-44de-906d-25384323288d req-b459810f-893b-4b70-9781-a65596b7da95 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Received event network-vif-unplugged-b342a143-48a8-46f1-90fc-229fadeb167e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 17:20:53 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e9e5d2e4494488d2c7db478ccd3bcc82bb7d52806a882ad409ab63baf126e249-userdata-shm.mount: Deactivated successfully.
Nov 25 17:20:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5e3d52ff50bf53cf2fb737f96eefa352cab2347302bd90a6108c9336c866d1c-merged.mount: Deactivated successfully.
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:20:53 compute-0 podman[416874]: 2025-11-25 17:20:53.501993673 +0000 UTC m=+0.103008206 container cleanup e9e5d2e4494488d2c7db478ccd3bcc82bb7d52806a882ad409ab63baf126e249 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.510 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.511 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 17:20:53 compute-0 systemd[1]: libpod-conmon-e9e5d2e4494488d2c7db478ccd3bcc82bb7d52806a882ad409ab63baf126e249.scope: Deactivated successfully.
Nov 25 17:20:53 compute-0 podman[416924]: 2025-11-25 17:20:53.605733936 +0000 UTC m=+0.068261508 container remove e9e5d2e4494488d2c7db478ccd3bcc82bb7d52806a882ad409ab63baf126e249 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 17:20:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:53.612 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ae4b02f0-6a4e-41c9-b14f-3571d9592509]: (4, ('Tue Nov 25 05:20:53 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81 (e9e5d2e4494488d2c7db478ccd3bcc82bb7d52806a882ad409ab63baf126e249)\ne9e5d2e4494488d2c7db478ccd3bcc82bb7d52806a882ad409ab63baf126e249\nTue Nov 25 05:20:53 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81 (e9e5d2e4494488d2c7db478ccd3bcc82bb7d52806a882ad409ab63baf126e249)\ne9e5d2e4494488d2c7db478ccd3bcc82bb7d52806a882ad409ab63baf126e249\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:20:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:53.615 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[53ad154f-40d6-4fe0-b731-cfdc25147aae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:20:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:53.615 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77970d23-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.617 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:53 compute-0 kernel: tap77970d23-50: left promiscuous mode
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.632 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:53.636 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f79c9e7b-cab8-478f-9e24-d164ffaa6550]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:20:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:53.654 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[82cdab4a-d972-4712-925b-657246cce9b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:20:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:53.656 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fb09ab5d-2eca-48ef-bd43-8a46075009c3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:20:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:53.677 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[6ab63046-0abf-449d-96fa-3a2764610d0b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763639, 'reachable_time': 22799, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 416940, 'error': None, 'target': 'ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:20:53 compute-0 systemd[1]: run-netns-ovnmeta\x2d77970d23\x2d547a\x2d4e3a\x2dbddf\x2df4770a15bf81.mount: Deactivated successfully.
Nov 25 17:20:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:53.682 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-77970d23-547a-4e3a-bddf-f4770a15bf81 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 17:20:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:53.682 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[df6aeafa-7fdc-4478-b252-618a4798f31e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:20:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:53.683 163338 INFO neutron.agent.ovn.metadata.agent [-] Port f6eeae44-ea00-4543-a1e0-9ce45fbc399f in datapath ad3526c9-ce3b-41ed-ae27-775dca6a1319 unbound from our chassis
Nov 25 17:20:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:53.684 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ad3526c9-ce3b-41ed-ae27-775dca6a1319, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 17:20:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:53.685 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0d36d086-9fa2-49d7-8941-81b500e32d14]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:20:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:53.685 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319 namespace which is not needed anymore
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.829 254096 INFO nova.virt.libvirt.driver [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Deleting instance files /var/lib/nova/instances/392384a1-1741-4504-b2c2-557420bbbbd0_del
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.831 254096 INFO nova.virt.libvirt.driver [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Deletion of /var/lib/nova/instances/392384a1-1741-4504-b2c2-557420bbbbd0_del complete
Nov 25 17:20:53 compute-0 neutron-haproxy-ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319[415220]: [NOTICE]   (415224) : haproxy version is 2.8.14-c23fe91
Nov 25 17:20:53 compute-0 neutron-haproxy-ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319[415220]: [NOTICE]   (415224) : path to executable is /usr/sbin/haproxy
Nov 25 17:20:53 compute-0 neutron-haproxy-ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319[415220]: [WARNING]  (415224) : Exiting Master process...
Nov 25 17:20:53 compute-0 neutron-haproxy-ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319[415220]: [ALERT]    (415224) : Current worker (415226) exited with code 143 (Terminated)
Nov 25 17:20:53 compute-0 neutron-haproxy-ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319[415220]: [WARNING]  (415224) : All workers exited. Exiting... (0)
Nov 25 17:20:53 compute-0 systemd[1]: libpod-05624e28c214e5d054291b4a01c86bab37b02489ae878b0dcb1cffcde899d1c6.scope: Deactivated successfully.
Nov 25 17:20:53 compute-0 podman[416958]: 2025-11-25 17:20:53.847019034 +0000 UTC m=+0.065283268 container died 05624e28c214e5d054291b4a01c86bab37b02489ae878b0dcb1cffcde899d1c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 17:20:53 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-05624e28c214e5d054291b4a01c86bab37b02489ae878b0dcb1cffcde899d1c6-userdata-shm.mount: Deactivated successfully.
Nov 25 17:20:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-0883bcdd26bb7ed72d7d0ceebbe0bf6761be99f4ffb7fcfd366aae7f03f2ca46-merged.mount: Deactivated successfully.
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.890 254096 INFO nova.compute.manager [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Took 0.77 seconds to destroy the instance on the hypervisor.
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.891 254096 DEBUG oslo.service.loopingcall [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.891 254096 DEBUG nova.compute.manager [-] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 17:20:53 compute-0 nova_compute[254092]: 2025-11-25 17:20:53.891 254096 DEBUG nova.network.neutron [-] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 17:20:53 compute-0 podman[416958]: 2025-11-25 17:20:53.895925875 +0000 UTC m=+0.114190109 container cleanup 05624e28c214e5d054291b4a01c86bab37b02489ae878b0dcb1cffcde899d1c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 25 17:20:53 compute-0 systemd[1]: libpod-conmon-05624e28c214e5d054291b4a01c86bab37b02489ae878b0dcb1cffcde899d1c6.scope: Deactivated successfully.
Nov 25 17:20:54 compute-0 podman[416987]: 2025-11-25 17:20:54.00302556 +0000 UTC m=+0.071660622 container remove 05624e28c214e5d054291b4a01c86bab37b02489ae878b0dcb1cffcde899d1c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 17:20:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:54.009 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b7576db8-aebe-4b58-89b2-a540abc1b68e]: (4, ('Tue Nov 25 05:20:53 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319 (05624e28c214e5d054291b4a01c86bab37b02489ae878b0dcb1cffcde899d1c6)\n05624e28c214e5d054291b4a01c86bab37b02489ae878b0dcb1cffcde899d1c6\nTue Nov 25 05:20:53 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319 (05624e28c214e5d054291b4a01c86bab37b02489ae878b0dcb1cffcde899d1c6)\n05624e28c214e5d054291b4a01c86bab37b02489ae878b0dcb1cffcde899d1c6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:20:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:54.011 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[375a2b59-f84f-424a-a0aa-3550785cfcde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:20:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:54.012 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapad3526c9-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:20:54 compute-0 nova_compute[254092]: 2025-11-25 17:20:54.014 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:54 compute-0 kernel: tapad3526c9-c0: left promiscuous mode
Nov 25 17:20:54 compute-0 nova_compute[254092]: 2025-11-25 17:20:54.025 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:54.029 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[dfc1b319-a1a6-4193-9f08-44eefb464d60]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:20:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:54.064 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9866a294-69bd-447e-97ee-96b7ccdd8338]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:20:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:54.066 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d664e360-2487-4959-8419-f7b21ae1ceb6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:20:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:54.091 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[dc6e7010-e7c3-49b3-953b-00a5d382a668]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763750, 'reachable_time': 17365, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 417002, 'error': None, 'target': 'ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:20:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:54.094 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ad3526c9-ce3b-41ed-ae27-775dca6a1319 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 17:20:54 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:20:54.094 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[423172fd-25c4-44f9-905d-c780b9d057ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:20:54 compute-0 ceph-mon[74985]: pgmap v2920: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 12 KiB/s wr, 30 op/s
Nov 25 17:20:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:20:54 compute-0 systemd[1]: run-netns-ovnmeta\x2dad3526c9\x2dce3b\x2d41ed\x2dae27\x2d775dca6a1319.mount: Deactivated successfully.
Nov 25 17:20:54 compute-0 nova_compute[254092]: 2025-11-25 17:20:54.646 254096 DEBUG nova.network.neutron [req-f49f661a-a3e6-4aff-b11e-e32f7396a149 req-56b13252-9e58-482c-a6ad-b2a09cb0f6d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Updated VIF entry in instance network info cache for port b342a143-48a8-46f1-90fc-229fadeb167e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:20:54 compute-0 nova_compute[254092]: 2025-11-25 17:20:54.646 254096 DEBUG nova.network.neutron [req-f49f661a-a3e6-4aff-b11e-e32f7396a149 req-56b13252-9e58-482c-a6ad-b2a09cb0f6d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Updating instance_info_cache with network_info: [{"id": "b342a143-48a8-46f1-90fc-229fadeb167e", "address": "fa:16:3e:48:d2:35", "network": {"id": "77970d23-547a-4e3a-bddf-f4770a15bf81", "bridge": "br-int", "label": "tempest-network-smoke--617030050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb342a143-48", "ovs_interfaceid": "b342a143-48a8-46f1-90fc-229fadeb167e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "address": "fa:16:3e:0f:88:e0", "network": {"id": "ad3526c9-ce3b-41ed-ae27-775dca6a1319", "bridge": "br-int", "label": "tempest-network-smoke--1601683367", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe0f:88e0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6eeae44-ea", "ovs_interfaceid": "f6eeae44-ea00-4543-a1e0-9ce45fbc399f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:20:54 compute-0 nova_compute[254092]: 2025-11-25 17:20:54.661 254096 DEBUG oslo_concurrency.lockutils [req-f49f661a-a3e6-4aff-b11e-e32f7396a149 req-56b13252-9e58-482c-a6ad-b2a09cb0f6d8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-392384a1-1741-4504-b2c2-557420bbbbd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:20:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2921: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 12 KiB/s wr, 30 op/s
Nov 25 17:20:55 compute-0 nova_compute[254092]: 2025-11-25 17:20:55.097 254096 DEBUG nova.network.neutron [-] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:20:55 compute-0 nova_compute[254092]: 2025-11-25 17:20:55.116 254096 INFO nova.compute.manager [-] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Took 1.22 seconds to deallocate network for instance.
Nov 25 17:20:55 compute-0 nova_compute[254092]: 2025-11-25 17:20:55.158 254096 DEBUG oslo_concurrency.lockutils [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:20:55 compute-0 nova_compute[254092]: 2025-11-25 17:20:55.158 254096 DEBUG oslo_concurrency.lockutils [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:20:55 compute-0 nova_compute[254092]: 2025-11-25 17:20:55.187 254096 DEBUG nova.compute.manager [req-0176a870-d225-41cc-8ec0-487f7d64748f req-1ad1de2f-1c27-480d-a914-ce83fecce839 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Received event network-vif-unplugged-f6eeae44-ea00-4543-a1e0-9ce45fbc399f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:20:55 compute-0 nova_compute[254092]: 2025-11-25 17:20:55.188 254096 DEBUG oslo_concurrency.lockutils [req-0176a870-d225-41cc-8ec0-487f7d64748f req-1ad1de2f-1c27-480d-a914-ce83fecce839 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:20:55 compute-0 nova_compute[254092]: 2025-11-25 17:20:55.188 254096 DEBUG oslo_concurrency.lockutils [req-0176a870-d225-41cc-8ec0-487f7d64748f req-1ad1de2f-1c27-480d-a914-ce83fecce839 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:20:55 compute-0 nova_compute[254092]: 2025-11-25 17:20:55.188 254096 DEBUG oslo_concurrency.lockutils [req-0176a870-d225-41cc-8ec0-487f7d64748f req-1ad1de2f-1c27-480d-a914-ce83fecce839 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:20:55 compute-0 nova_compute[254092]: 2025-11-25 17:20:55.188 254096 DEBUG nova.compute.manager [req-0176a870-d225-41cc-8ec0-487f7d64748f req-1ad1de2f-1c27-480d-a914-ce83fecce839 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] No waiting events found dispatching network-vif-unplugged-f6eeae44-ea00-4543-a1e0-9ce45fbc399f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:20:55 compute-0 nova_compute[254092]: 2025-11-25 17:20:55.188 254096 WARNING nova.compute.manager [req-0176a870-d225-41cc-8ec0-487f7d64748f req-1ad1de2f-1c27-480d-a914-ce83fecce839 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Received unexpected event network-vif-unplugged-f6eeae44-ea00-4543-a1e0-9ce45fbc399f for instance with vm_state deleted and task_state None.
Nov 25 17:20:55 compute-0 nova_compute[254092]: 2025-11-25 17:20:55.189 254096 DEBUG nova.compute.manager [req-0176a870-d225-41cc-8ec0-487f7d64748f req-1ad1de2f-1c27-480d-a914-ce83fecce839 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Received event network-vif-plugged-f6eeae44-ea00-4543-a1e0-9ce45fbc399f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:20:55 compute-0 nova_compute[254092]: 2025-11-25 17:20:55.189 254096 DEBUG oslo_concurrency.lockutils [req-0176a870-d225-41cc-8ec0-487f7d64748f req-1ad1de2f-1c27-480d-a914-ce83fecce839 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:20:55 compute-0 nova_compute[254092]: 2025-11-25 17:20:55.189 254096 DEBUG oslo_concurrency.lockutils [req-0176a870-d225-41cc-8ec0-487f7d64748f req-1ad1de2f-1c27-480d-a914-ce83fecce839 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:20:55 compute-0 nova_compute[254092]: 2025-11-25 17:20:55.189 254096 DEBUG oslo_concurrency.lockutils [req-0176a870-d225-41cc-8ec0-487f7d64748f req-1ad1de2f-1c27-480d-a914-ce83fecce839 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:20:55 compute-0 nova_compute[254092]: 2025-11-25 17:20:55.189 254096 DEBUG nova.compute.manager [req-0176a870-d225-41cc-8ec0-487f7d64748f req-1ad1de2f-1c27-480d-a914-ce83fecce839 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] No waiting events found dispatching network-vif-plugged-f6eeae44-ea00-4543-a1e0-9ce45fbc399f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:20:55 compute-0 nova_compute[254092]: 2025-11-25 17:20:55.190 254096 WARNING nova.compute.manager [req-0176a870-d225-41cc-8ec0-487f7d64748f req-1ad1de2f-1c27-480d-a914-ce83fecce839 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Received unexpected event network-vif-plugged-f6eeae44-ea00-4543-a1e0-9ce45fbc399f for instance with vm_state deleted and task_state None.
Nov 25 17:20:55 compute-0 nova_compute[254092]: 2025-11-25 17:20:55.224 254096 DEBUG oslo_concurrency.processutils [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:20:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:20:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2775007000' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:20:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:20:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2775007000' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:20:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/2775007000' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:20:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/2775007000' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:20:55 compute-0 nova_compute[254092]: 2025-11-25 17:20:55.580 254096 DEBUG nova.compute.manager [req-e7431bf6-dba8-4940-8367-86e6ccbdbe11 req-0197a96a-aa1a-47b4-9ffa-18a78069f21e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Received event network-vif-plugged-b342a143-48a8-46f1-90fc-229fadeb167e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:20:55 compute-0 nova_compute[254092]: 2025-11-25 17:20:55.582 254096 DEBUG oslo_concurrency.lockutils [req-e7431bf6-dba8-4940-8367-86e6ccbdbe11 req-0197a96a-aa1a-47b4-9ffa-18a78069f21e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:20:55 compute-0 nova_compute[254092]: 2025-11-25 17:20:55.582 254096 DEBUG oslo_concurrency.lockutils [req-e7431bf6-dba8-4940-8367-86e6ccbdbe11 req-0197a96a-aa1a-47b4-9ffa-18a78069f21e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:20:55 compute-0 nova_compute[254092]: 2025-11-25 17:20:55.583 254096 DEBUG oslo_concurrency.lockutils [req-e7431bf6-dba8-4940-8367-86e6ccbdbe11 req-0197a96a-aa1a-47b4-9ffa-18a78069f21e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:20:55 compute-0 nova_compute[254092]: 2025-11-25 17:20:55.583 254096 DEBUG nova.compute.manager [req-e7431bf6-dba8-4940-8367-86e6ccbdbe11 req-0197a96a-aa1a-47b4-9ffa-18a78069f21e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] No waiting events found dispatching network-vif-plugged-b342a143-48a8-46f1-90fc-229fadeb167e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:20:55 compute-0 nova_compute[254092]: 2025-11-25 17:20:55.583 254096 WARNING nova.compute.manager [req-e7431bf6-dba8-4940-8367-86e6ccbdbe11 req-0197a96a-aa1a-47b4-9ffa-18a78069f21e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Received unexpected event network-vif-plugged-b342a143-48a8-46f1-90fc-229fadeb167e for instance with vm_state deleted and task_state None.
Nov 25 17:20:55 compute-0 nova_compute[254092]: 2025-11-25 17:20:55.583 254096 DEBUG nova.compute.manager [req-e7431bf6-dba8-4940-8367-86e6ccbdbe11 req-0197a96a-aa1a-47b4-9ffa-18a78069f21e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Received event network-vif-deleted-f6eeae44-ea00-4543-a1e0-9ce45fbc399f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:20:55 compute-0 nova_compute[254092]: 2025-11-25 17:20:55.584 254096 DEBUG nova.compute.manager [req-e7431bf6-dba8-4940-8367-86e6ccbdbe11 req-0197a96a-aa1a-47b4-9ffa-18a78069f21e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Received event network-vif-deleted-b342a143-48a8-46f1-90fc-229fadeb167e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:20:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:20:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2085277808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:20:55 compute-0 nova_compute[254092]: 2025-11-25 17:20:55.728 254096 DEBUG oslo_concurrency.processutils [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:20:55 compute-0 nova_compute[254092]: 2025-11-25 17:20:55.738 254096 DEBUG nova.compute.provider_tree [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:20:55 compute-0 nova_compute[254092]: 2025-11-25 17:20:55.753 254096 DEBUG nova.scheduler.client.report [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:20:55 compute-0 nova_compute[254092]: 2025-11-25 17:20:55.770 254096 DEBUG oslo_concurrency.lockutils [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:20:55 compute-0 nova_compute[254092]: 2025-11-25 17:20:55.801 254096 INFO nova.scheduler.client.report [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Deleted allocations for instance 392384a1-1741-4504-b2c2-557420bbbbd0
Nov 25 17:20:55 compute-0 nova_compute[254092]: 2025-11-25 17:20:55.858 254096 DEBUG oslo_concurrency.lockutils [None req-33ece471-c896-47a0-8cc2-15a66db9d9ca a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "392384a1-1741-4504-b2c2-557420bbbbd0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:20:56 compute-0 ceph-mon[74985]: pgmap v2921: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 12 KiB/s wr, 30 op/s
Nov 25 17:20:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2085277808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:20:56 compute-0 nova_compute[254092]: 2025-11-25 17:20:56.453 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2922: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 13 KiB/s wr, 57 op/s
Nov 25 17:20:58 compute-0 nova_compute[254092]: 2025-11-25 17:20:58.417 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:20:58 compute-0 ceph-mon[74985]: pgmap v2922: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 13 KiB/s wr, 57 op/s
Nov 25 17:20:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2923: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 6.4 KiB/s wr, 51 op/s
Nov 25 17:20:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:20:59 compute-0 nova_compute[254092]: 2025-11-25 17:20:59.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:21:00 compute-0 ceph-mon[74985]: pgmap v2923: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 6.4 KiB/s wr, 51 op/s
Nov 25 17:21:00 compute-0 nova_compute[254092]: 2025-11-25 17:21:00.595 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764091245.593738, 9be9cbb4-878e-4fce-be7c-44b49480ff0e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:21:00 compute-0 nova_compute[254092]: 2025-11-25 17:21:00.596 254096 INFO nova.compute.manager [-] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] VM Stopped (Lifecycle Event)
Nov 25 17:21:00 compute-0 nova_compute[254092]: 2025-11-25 17:21:00.623 254096 DEBUG nova.compute.manager [None req-0ef22437-c9bd-44c2-9bb7-b63dbdd7bede - - - - - -] [instance: 9be9cbb4-878e-4fce-be7c-44b49480ff0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:21:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2924: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 6.4 KiB/s wr, 51 op/s
Nov 25 17:21:01 compute-0 nova_compute[254092]: 2025-11-25 17:21:01.455 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:21:02 compute-0 ceph-mon[74985]: pgmap v2924: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 6.4 KiB/s wr, 51 op/s
Nov 25 17:21:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2925: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 17:21:03 compute-0 nova_compute[254092]: 2025-11-25 17:21:03.452 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:21:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:21:04 compute-0 ceph-mon[74985]: pgmap v2925: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 17:21:04 compute-0 nova_compute[254092]: 2025-11-25 17:21:04.828 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:21:04 compute-0 nova_compute[254092]: 2025-11-25 17:21:04.933 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:21:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2926: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 17:21:06 compute-0 nova_compute[254092]: 2025-11-25 17:21:06.458 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:21:06 compute-0 ceph-mon[74985]: pgmap v2926: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 17:21:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2927: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 17:21:08 compute-0 nova_compute[254092]: 2025-11-25 17:21:08.374 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764091253.3736198, 392384a1-1741-4504-b2c2-557420bbbbd0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:21:08 compute-0 nova_compute[254092]: 2025-11-25 17:21:08.375 254096 INFO nova.compute.manager [-] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] VM Stopped (Lifecycle Event)
Nov 25 17:21:08 compute-0 nova_compute[254092]: 2025-11-25 17:21:08.401 254096 DEBUG nova.compute.manager [None req-15471841-dae6-40ca-9669-ba9d8703fc9f - - - - - -] [instance: 392384a1-1741-4504-b2c2-557420bbbbd0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:21:08 compute-0 nova_compute[254092]: 2025-11-25 17:21:08.455 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:21:08 compute-0 ceph-mon[74985]: pgmap v2927: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Nov 25 17:21:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2928: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:21:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:21:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:21:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:21:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:21:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:21:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:21:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:21:10 compute-0 ceph-mon[74985]: pgmap v2928: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:21:10 compute-0 podman[417028]: 2025-11-25 17:21:10.665500262 +0000 UTC m=+0.070528160 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes 
Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 17:21:10 compute-0 podman[417027]: 2025-11-25 17:21:10.666201731 +0000 UTC m=+0.077463260 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 17:21:10 compute-0 podman[417029]: 2025-11-25 17:21:10.703617179 +0000 UTC m=+0.111231528 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 17:21:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2929: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:21:11 compute-0 nova_compute[254092]: 2025-11-25 17:21:11.497 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:21:12 compute-0 ceph-mon[74985]: pgmap v2929: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:21:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2930: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:21:13 compute-0 nova_compute[254092]: 2025-11-25 17:21:13.458 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:21:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:21:13.658 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:21:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:21:13.659 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:21:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:21:13.659 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:21:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:21:14 compute-0 ceph-mon[74985]: pgmap v2930: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:21:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2931: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:21:15 compute-0 ceph-mon[74985]: pgmap v2931: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:21:16 compute-0 nova_compute[254092]: 2025-11-25 17:21:16.499 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:21:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2932: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:21:18 compute-0 ceph-mon[74985]: pgmap v2932: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:21:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:21:18.383 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:fa:d8 10.100.0.2 2001:db8::f816:3eff:fe42:fad8'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe42:fad8/64', 'neutron:device_id': 'ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84058e12-5c2c-4ee6-a8bb-052eff4cc252', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4dc08b4-4174-48fd-8366-1bed885d47d6, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=f8afc421-e45b-4911-af18-dd32853c6b8c) old=Port_Binding(mac=['fa:16:3e:42:fa:d8 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84058e12-5c2c-4ee6-a8bb-052eff4cc252', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:21:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:21:18.385 163338 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port f8afc421-e45b-4911-af18-dd32853c6b8c in datapath 84058e12-5c2c-4ee6-a8bb-052eff4cc252 updated
Nov 25 17:21:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:21:18.385 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 84058e12-5c2c-4ee6-a8bb-052eff4cc252, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 17:21:18 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:21:18.386 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[dad5144a-5390-460f-b1b5-e63cc14f060d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:21:18 compute-0 nova_compute[254092]: 2025-11-25 17:21:18.503 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:21:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2933: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:21:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:21:20 compute-0 ceph-mon[74985]: pgmap v2933: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:21:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2934: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:21:21 compute-0 nova_compute[254092]: 2025-11-25 17:21:21.501 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:21:22 compute-0 ceph-mon[74985]: pgmap v2934: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:21:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2935: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:21:23 compute-0 nova_compute[254092]: 2025-11-25 17:21:23.506 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:21:24 compute-0 ceph-mon[74985]: pgmap v2935: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:21:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:21:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2936: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:21:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:21:25.224 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:fa:d8 10.100.0.2 2001:db8:0:1:f816:3eff:fe42:fad8 2001:db8::f816:3eff:fe42:fad8'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:fe42:fad8/64 2001:db8::f816:3eff:fe42:fad8/64', 'neutron:device_id': 'ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84058e12-5c2c-4ee6-a8bb-052eff4cc252', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4dc08b4-4174-48fd-8366-1bed885d47d6, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=f8afc421-e45b-4911-af18-dd32853c6b8c) old=Port_Binding(mac=['fa:16:3e:42:fa:d8 10.100.0.2 2001:db8::f816:3eff:fe42:fad8'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe42:fad8/64', 'neutron:device_id': 'ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84058e12-5c2c-4ee6-a8bb-052eff4cc252', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:21:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:21:25.226 163338 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port f8afc421-e45b-4911-af18-dd32853c6b8c in datapath 84058e12-5c2c-4ee6-a8bb-052eff4cc252 updated
Nov 25 17:21:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:21:25.228 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 84058e12-5c2c-4ee6-a8bb-052eff4cc252, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 17:21:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:21:25.230 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1925bc6c-9bbe-4c38-b0f4-fed27e4cd590]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:21:26 compute-0 ceph-mon[74985]: pgmap v2936: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:21:26 compute-0 nova_compute[254092]: 2025-11-25 17:21:26.503 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:21:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2937: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:21:28 compute-0 ceph-mon[74985]: pgmap v2937: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:21:28 compute-0 nova_compute[254092]: 2025-11-25 17:21:28.509 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:21:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2938: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:21:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:21:30 compute-0 ceph-mon[74985]: pgmap v2938: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:21:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2939: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:21:31 compute-0 nova_compute[254092]: 2025-11-25 17:21:31.505 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:21:32 compute-0 ceph-mon[74985]: pgmap v2939: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:21:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2940: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:21:33 compute-0 nova_compute[254092]: 2025-11-25 17:21:33.253 254096 DEBUG oslo_concurrency.lockutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "36b839e5-d6db-406a-ab95-bbdcd48c531d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:21:33 compute-0 nova_compute[254092]: 2025-11-25 17:21:33.254 254096 DEBUG oslo_concurrency.lockutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "36b839e5-d6db-406a-ab95-bbdcd48c531d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:21:33 compute-0 nova_compute[254092]: 2025-11-25 17:21:33.269 254096 DEBUG nova.compute.manager [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 17:21:33 compute-0 nova_compute[254092]: 2025-11-25 17:21:33.365 254096 DEBUG oslo_concurrency.lockutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:21:33 compute-0 nova_compute[254092]: 2025-11-25 17:21:33.366 254096 DEBUG oslo_concurrency.lockutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:21:33 compute-0 nova_compute[254092]: 2025-11-25 17:21:33.379 254096 DEBUG nova.virt.hardware [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 17:21:33 compute-0 nova_compute[254092]: 2025-11-25 17:21:33.380 254096 INFO nova.compute.claims [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Claim successful on node compute-0.ctlplane.example.com
Nov 25 17:21:33 compute-0 nova_compute[254092]: 2025-11-25 17:21:33.512 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:21:33 compute-0 nova_compute[254092]: 2025-11-25 17:21:33.519 254096 DEBUG oslo_concurrency.processutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:21:33 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:21:33 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3517089168' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:21:34 compute-0 nova_compute[254092]: 2025-11-25 17:21:34.000 254096 DEBUG oslo_concurrency.processutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:21:34 compute-0 nova_compute[254092]: 2025-11-25 17:21:34.007 254096 DEBUG nova.compute.provider_tree [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:21:34 compute-0 nova_compute[254092]: 2025-11-25 17:21:34.026 254096 DEBUG nova.scheduler.client.report [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:21:34 compute-0 nova_compute[254092]: 2025-11-25 17:21:34.052 254096 DEBUG oslo_concurrency.lockutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:21:34 compute-0 nova_compute[254092]: 2025-11-25 17:21:34.053 254096 DEBUG nova.compute.manager [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 17:21:34 compute-0 nova_compute[254092]: 2025-11-25 17:21:34.121 254096 DEBUG nova.compute.manager [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 17:21:34 compute-0 nova_compute[254092]: 2025-11-25 17:21:34.122 254096 DEBUG nova.network.neutron [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 17:21:34 compute-0 nova_compute[254092]: 2025-11-25 17:21:34.138 254096 INFO nova.virt.libvirt.driver [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 17:21:34 compute-0 nova_compute[254092]: 2025-11-25 17:21:34.153 254096 DEBUG nova.compute.manager [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 17:21:34 compute-0 nova_compute[254092]: 2025-11-25 17:21:34.231 254096 DEBUG nova.compute.manager [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 17:21:34 compute-0 nova_compute[254092]: 2025-11-25 17:21:34.232 254096 DEBUG nova.virt.libvirt.driver [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 17:21:34 compute-0 nova_compute[254092]: 2025-11-25 17:21:34.232 254096 INFO nova.virt.libvirt.driver [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Creating image(s)
Nov 25 17:21:34 compute-0 nova_compute[254092]: 2025-11-25 17:21:34.258 254096 DEBUG nova.storage.rbd_utils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 36b839e5-d6db-406a-ab95-bbdcd48c531d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:21:34 compute-0 ceph-mon[74985]: pgmap v2940: 321 pgs: 321 active+clean; 41 MiB data, 983 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:21:34 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3517089168' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:21:34 compute-0 nova_compute[254092]: 2025-11-25 17:21:34.283 254096 DEBUG nova.storage.rbd_utils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 36b839e5-d6db-406a-ab95-bbdcd48c531d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:21:34 compute-0 nova_compute[254092]: 2025-11-25 17:21:34.307 254096 DEBUG nova.storage.rbd_utils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 36b839e5-d6db-406a-ab95-bbdcd48c531d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:21:34 compute-0 nova_compute[254092]: 2025-11-25 17:21:34.312 254096 DEBUG oslo_concurrency.processutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:21:34 compute-0 nova_compute[254092]: 2025-11-25 17:21:34.419 254096 DEBUG oslo_concurrency.processutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:21:34 compute-0 nova_compute[254092]: 2025-11-25 17:21:34.420 254096 DEBUG oslo_concurrency.lockutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:21:34 compute-0 nova_compute[254092]: 2025-11-25 17:21:34.421 254096 DEBUG oslo_concurrency.lockutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:21:34 compute-0 nova_compute[254092]: 2025-11-25 17:21:34.421 254096 DEBUG oslo_concurrency.lockutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:21:34 compute-0 nova_compute[254092]: 2025-11-25 17:21:34.445 254096 DEBUG nova.storage.rbd_utils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 36b839e5-d6db-406a-ab95-bbdcd48c531d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:21:34 compute-0 nova_compute[254092]: 2025-11-25 17:21:34.449 254096 DEBUG oslo_concurrency.processutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 36b839e5-d6db-406a-ab95-bbdcd48c531d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:21:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:21:34 compute-0 nova_compute[254092]: 2025-11-25 17:21:34.504 254096 DEBUG nova.policy [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a647bb22c9a54e5fa9ddfff588126693', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '67b7f8e8aac6441f992a182f8d893100', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 17:21:34 compute-0 nova_compute[254092]: 2025-11-25 17:21:34.812 254096 DEBUG oslo_concurrency.processutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 36b839e5-d6db-406a-ab95-bbdcd48c531d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.362s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:21:34 compute-0 nova_compute[254092]: 2025-11-25 17:21:34.900 254096 DEBUG nova.storage.rbd_utils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] resizing rbd image 36b839e5-d6db-406a-ab95-bbdcd48c531d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 17:21:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2941: 321 pgs: 321 active+clean; 75 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 511 B/s rd, 1.2 MiB/s wr, 3 op/s
Nov 25 17:21:35 compute-0 nova_compute[254092]: 2025-11-25 17:21:35.089 254096 DEBUG nova.objects.instance [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'migration_context' on Instance uuid 36b839e5-d6db-406a-ab95-bbdcd48c531d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:21:35 compute-0 nova_compute[254092]: 2025-11-25 17:21:35.101 254096 DEBUG nova.virt.libvirt.driver [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 17:21:35 compute-0 nova_compute[254092]: 2025-11-25 17:21:35.102 254096 DEBUG nova.virt.libvirt.driver [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Ensure instance console log exists: /var/lib/nova/instances/36b839e5-d6db-406a-ab95-bbdcd48c531d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 17:21:35 compute-0 nova_compute[254092]: 2025-11-25 17:21:35.102 254096 DEBUG oslo_concurrency.lockutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:21:35 compute-0 nova_compute[254092]: 2025-11-25 17:21:35.103 254096 DEBUG oslo_concurrency.lockutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:21:35 compute-0 nova_compute[254092]: 2025-11-25 17:21:35.103 254096 DEBUG oslo_concurrency.lockutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:21:35 compute-0 nova_compute[254092]: 2025-11-25 17:21:35.167 254096 DEBUG nova.network.neutron [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Successfully created port: c1ba1b56-3c61-42fa-b23d-44349357a11a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 17:21:36 compute-0 ceph-mon[74985]: pgmap v2941: 321 pgs: 321 active+clean; 75 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 511 B/s rd, 1.2 MiB/s wr, 3 op/s
Nov 25 17:21:36 compute-0 nova_compute[254092]: 2025-11-25 17:21:36.507 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:21:36 compute-0 nova_compute[254092]: 2025-11-25 17:21:36.690 254096 DEBUG nova.network.neutron [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Successfully updated port: c1ba1b56-3c61-42fa-b23d-44349357a11a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:21:36 compute-0 nova_compute[254092]: 2025-11-25 17:21:36.705 254096 DEBUG oslo_concurrency.lockutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "refresh_cache-36b839e5-d6db-406a-ab95-bbdcd48c531d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:21:36 compute-0 nova_compute[254092]: 2025-11-25 17:21:36.706 254096 DEBUG oslo_concurrency.lockutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquired lock "refresh_cache-36b839e5-d6db-406a-ab95-bbdcd48c531d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:21:36 compute-0 nova_compute[254092]: 2025-11-25 17:21:36.706 254096 DEBUG nova.network.neutron [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 17:21:36 compute-0 nova_compute[254092]: 2025-11-25 17:21:36.818 254096 DEBUG nova.compute.manager [req-794b1ea0-0282-4382-97f2-201b69c784c5 req-e659c6ae-b200-4779-9c39-8d9d5d95fefd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Received event network-changed-c1ba1b56-3c61-42fa-b23d-44349357a11a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:21:36 compute-0 nova_compute[254092]: 2025-11-25 17:21:36.819 254096 DEBUG nova.compute.manager [req-794b1ea0-0282-4382-97f2-201b69c784c5 req-e659c6ae-b200-4779-9c39-8d9d5d95fefd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Refreshing instance network info cache due to event network-changed-c1ba1b56-3c61-42fa-b23d-44349357a11a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:21:36 compute-0 nova_compute[254092]: 2025-11-25 17:21:36.819 254096 DEBUG oslo_concurrency.lockutils [req-794b1ea0-0282-4382-97f2-201b69c784c5 req-e659c6ae-b200-4779-9c39-8d9d5d95fefd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-36b839e5-d6db-406a-ab95-bbdcd48c531d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:21:36 compute-0 nova_compute[254092]: 2025-11-25 17:21:36.890 254096 DEBUG nova.network.neutron [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:21:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2942: 321 pgs: 321 active+clean; 88 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Nov 25 17:21:37 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 17:21:37 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 13K writes, 60K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.02 MB/s
                                           Cumulative WAL: 13K writes, 13K syncs, 1.00 writes per sync, written: 0.08 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1322 writes, 5759 keys, 1322 commit groups, 1.0 writes per commit group, ingest: 8.57 MB, 0.01 MB/s
                                           Interval WAL: 1322 writes, 1322 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     31.7      2.20              0.23        41    0.054       0      0       0.0       0.0
                                             L6      1/0    8.00 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   4.7    115.7     97.3      3.35              0.91        40    0.084    253K    22K       0.0       0.0
                                            Sum      1/0    8.00 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.7     69.9     71.3      5.54              1.15        81    0.068    253K    22K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   7.6    154.7    154.6      0.27              0.11         8    0.034     33K   1991       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0    115.7     97.3      3.35              0.91        40    0.084    253K    22K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     32.4      2.15              0.23        40    0.054       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.1      0.05              0.00         1    0.047       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.068, interval 0.005
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.39 GB write, 0.07 MB/s write, 0.38 GB read, 0.07 MB/s read, 5.5 seconds
                                           Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563cc83831f0#2 capacity: 304.00 MB usage: 45.41 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000589 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2956,43.58 MB,14.3359%) FilterBlock(82,707.05 KB,0.22713%) IndexBlock(82,1.14 MB,0.373805%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 25 17:21:38 compute-0 ceph-mon[74985]: pgmap v2942: 321 pgs: 321 active+clean; 88 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Nov 25 17:21:38 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #138. Immutable memtables: 0.
Nov 25 17:21:38 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:21:38.290152) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 17:21:38 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 138
Nov 25 17:21:38 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091298290189, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 2054, "num_deletes": 251, "total_data_size": 3433366, "memory_usage": 3500712, "flush_reason": "Manual Compaction"}
Nov 25 17:21:38 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #139: started
Nov 25 17:21:38 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091298329204, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 139, "file_size": 3367376, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 59177, "largest_seqno": 61230, "table_properties": {"data_size": 3357947, "index_size": 5986, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18821, "raw_average_key_size": 20, "raw_value_size": 3339374, "raw_average_value_size": 3571, "num_data_blocks": 265, "num_entries": 935, "num_filter_entries": 935, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764091071, "oldest_key_time": 1764091071, "file_creation_time": 1764091298, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 139, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:21:38 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 39107 microseconds, and 10593 cpu microseconds.
Nov 25 17:21:38 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:21:38 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:21:38.329256) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #139: 3367376 bytes OK
Nov 25 17:21:38 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:21:38.329280) [db/memtable_list.cc:519] [default] Level-0 commit table #139 started
Nov 25 17:21:38 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:21:38.331482) [db/memtable_list.cc:722] [default] Level-0 commit table #139: memtable #1 done
Nov 25 17:21:38 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:21:38.331499) EVENT_LOG_v1 {"time_micros": 1764091298331493, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 17:21:38 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:21:38.331520) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 17:21:38 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 3424761, prev total WAL file size 3424761, number of live WAL files 2.
Nov 25 17:21:38 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000135.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:21:38 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:21:38.332748) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Nov 25 17:21:38 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 17:21:38 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [139(3288KB)], [137(8192KB)]
Nov 25 17:21:38 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091298332790, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [139], "files_L6": [137], "score": -1, "input_data_size": 11756627, "oldest_snapshot_seqno": -1}
Nov 25 17:21:38 compute-0 nova_compute[254092]: 2025-11-25 17:21:38.381 254096 DEBUG nova.network.neutron [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Updating instance_info_cache with network_info: [{"id": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "address": "fa:16:3e:77:8a:a5", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ba1b56-3c", "ovs_interfaceid": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:21:38 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #140: 8160 keys, 10018119 bytes, temperature: kUnknown
Nov 25 17:21:38 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091298422860, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 140, "file_size": 10018119, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9966072, "index_size": 30558, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20421, "raw_key_size": 212956, "raw_average_key_size": 26, "raw_value_size": 9822893, "raw_average_value_size": 1203, "num_data_blocks": 1188, "num_entries": 8160, "num_filter_entries": 8160, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764091298, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:21:38 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:21:38 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:21:38.423174) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 10018119 bytes
Nov 25 17:21:38 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:21:38.424534) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 130.4 rd, 111.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 8.0 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(6.5) write-amplify(3.0) OK, records in: 8674, records dropped: 514 output_compression: NoCompression
Nov 25 17:21:38 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:21:38.424555) EVENT_LOG_v1 {"time_micros": 1764091298424544, "job": 84, "event": "compaction_finished", "compaction_time_micros": 90167, "compaction_time_cpu_micros": 32381, "output_level": 6, "num_output_files": 1, "total_output_size": 10018119, "num_input_records": 8674, "num_output_records": 8160, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 17:21:38 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000139.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:21:38 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091298425472, "job": 84, "event": "table_file_deletion", "file_number": 139}
Nov 25 17:21:38 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:21:38 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091298427424, "job": 84, "event": "table_file_deletion", "file_number": 137}
Nov 25 17:21:38 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:21:38.332614) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:21:38 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:21:38.427524) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:21:38 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:21:38.427532) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:21:38 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:21:38.427538) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:21:38 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:21:38.427543) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:21:38 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:21:38.427547) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:21:38 compute-0 nova_compute[254092]: 2025-11-25 17:21:38.432 254096 DEBUG oslo_concurrency.lockutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Releasing lock "refresh_cache-36b839e5-d6db-406a-ab95-bbdcd48c531d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:21:38 compute-0 nova_compute[254092]: 2025-11-25 17:21:38.433 254096 DEBUG nova.compute.manager [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Instance network_info: |[{"id": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "address": "fa:16:3e:77:8a:a5", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ba1b56-3c", "ovs_interfaceid": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, 
"meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 17:21:38 compute-0 nova_compute[254092]: 2025-11-25 17:21:38.433 254096 DEBUG oslo_concurrency.lockutils [req-794b1ea0-0282-4382-97f2-201b69c784c5 req-e659c6ae-b200-4779-9c39-8d9d5d95fefd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-36b839e5-d6db-406a-ab95-bbdcd48c531d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:21:38 compute-0 nova_compute[254092]: 2025-11-25 17:21:38.433 254096 DEBUG nova.network.neutron [req-794b1ea0-0282-4382-97f2-201b69c784c5 req-e659c6ae-b200-4779-9c39-8d9d5d95fefd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Refreshing network info cache for port c1ba1b56-3c61-42fa-b23d-44349357a11a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:21:38 compute-0 nova_compute[254092]: 2025-11-25 17:21:38.436 254096 DEBUG nova.virt.libvirt.driver [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Start _get_guest_xml network_info=[{"id": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "address": "fa:16:3e:77:8a:a5", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ba1b56-3c", "ovs_interfaceid": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 17:21:38 compute-0 nova_compute[254092]: 2025-11-25 17:21:38.440 254096 WARNING nova.virt.libvirt.driver [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:21:38 compute-0 nova_compute[254092]: 2025-11-25 17:21:38.445 254096 DEBUG nova.virt.libvirt.host [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 17:21:38 compute-0 nova_compute[254092]: 2025-11-25 17:21:38.445 254096 DEBUG nova.virt.libvirt.host [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 17:21:38 compute-0 nova_compute[254092]: 2025-11-25 17:21:38.448 254096 DEBUG nova.virt.libvirt.host [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 17:21:38 compute-0 nova_compute[254092]: 2025-11-25 17:21:38.448 254096 DEBUG nova.virt.libvirt.host [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 17:21:38 compute-0 nova_compute[254092]: 2025-11-25 17:21:38.449 254096 DEBUG nova.virt.libvirt.driver [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 17:21:38 compute-0 nova_compute[254092]: 2025-11-25 17:21:38.449 254096 DEBUG nova.virt.hardware [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 17:21:38 compute-0 nova_compute[254092]: 2025-11-25 17:21:38.449 254096 DEBUG nova.virt.hardware [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 17:21:38 compute-0 nova_compute[254092]: 2025-11-25 17:21:38.449 254096 DEBUG nova.virt.hardware [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 17:21:38 compute-0 nova_compute[254092]: 2025-11-25 17:21:38.450 254096 DEBUG nova.virt.hardware [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 17:21:38 compute-0 nova_compute[254092]: 2025-11-25 17:21:38.450 254096 DEBUG nova.virt.hardware [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 17:21:38 compute-0 nova_compute[254092]: 2025-11-25 17:21:38.450 254096 DEBUG nova.virt.hardware [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 17:21:38 compute-0 nova_compute[254092]: 2025-11-25 17:21:38.450 254096 DEBUG nova.virt.hardware [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 17:21:38 compute-0 nova_compute[254092]: 2025-11-25 17:21:38.450 254096 DEBUG nova.virt.hardware [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 17:21:38 compute-0 nova_compute[254092]: 2025-11-25 17:21:38.450 254096 DEBUG nova.virt.hardware [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 17:21:38 compute-0 nova_compute[254092]: 2025-11-25 17:21:38.450 254096 DEBUG nova.virt.hardware [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 17:21:38 compute-0 nova_compute[254092]: 2025-11-25 17:21:38.451 254096 DEBUG nova.virt.hardware [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 17:21:38 compute-0 nova_compute[254092]: 2025-11-25 17:21:38.453 254096 DEBUG oslo_concurrency.processutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:21:38 compute-0 nova_compute[254092]: 2025-11-25 17:21:38.514 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:21:38 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:21:38 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3765263425' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:21:38 compute-0 nova_compute[254092]: 2025-11-25 17:21:38.884 254096 DEBUG oslo_concurrency.processutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:21:38 compute-0 nova_compute[254092]: 2025-11-25 17:21:38.918 254096 DEBUG nova.storage.rbd_utils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 36b839e5-d6db-406a-ab95-bbdcd48c531d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:21:38 compute-0 nova_compute[254092]: 2025-11-25 17:21:38.923 254096 DEBUG oslo_concurrency.processutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:21:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2943: 321 pgs: 321 active+clean; 88 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Nov 25 17:21:39 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3765263425' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:21:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:21:39 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3854290813' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:21:39 compute-0 nova_compute[254092]: 2025-11-25 17:21:39.383 254096 DEBUG oslo_concurrency.processutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:21:39 compute-0 nova_compute[254092]: 2025-11-25 17:21:39.385 254096 DEBUG nova.virt.libvirt.vif [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:21:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1075663045',display_name='tempest-TestGettingAddress-server-1075663045',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1075663045',id=145,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIRhAmR3DOLtGAzZ0xNgg5fZOngalELipYPxditOta+vsl4USGsDMHdwACzWpYO9hL+8kf+OvOhS+QLbsXFPxV626VeNASOG72UHufjWxf78foL9bNoVwzZLv4H5DTuJTA==',key_name='tempest-TestGettingAddress-910447973',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-0xnzfd4s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:21:34Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=36b839e5-d6db-406a-ab95-bbdcd48c531d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "address": "fa:16:3e:77:8a:a5", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ba1b56-3c", "ovs_interfaceid": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:21:39 compute-0 nova_compute[254092]: 2025-11-25 17:21:39.385 254096 DEBUG nova.network.os_vif_util [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "address": "fa:16:3e:77:8a:a5", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ba1b56-3c", "ovs_interfaceid": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:21:39 compute-0 nova_compute[254092]: 2025-11-25 17:21:39.386 254096 DEBUG nova.network.os_vif_util [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:8a:a5,bridge_name='br-int',has_traffic_filtering=True,id=c1ba1b56-3c61-42fa-b23d-44349357a11a,network=Network(84058e12-5c2c-4ee6-a8bb-052eff4cc252),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ba1b56-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:21:39 compute-0 nova_compute[254092]: 2025-11-25 17:21:39.387 254096 DEBUG nova.objects.instance [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'pci_devices' on Instance uuid 36b839e5-d6db-406a-ab95-bbdcd48c531d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:21:39 compute-0 nova_compute[254092]: 2025-11-25 17:21:39.415 254096 DEBUG nova.virt.libvirt.driver [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] End _get_guest_xml xml=<domain type="kvm">
Nov 25 17:21:39 compute-0 nova_compute[254092]:   <uuid>36b839e5-d6db-406a-ab95-bbdcd48c531d</uuid>
Nov 25 17:21:39 compute-0 nova_compute[254092]:   <name>instance-00000091</name>
Nov 25 17:21:39 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 17:21:39 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 17:21:39 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:21:39 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:       <nova:name>tempest-TestGettingAddress-server-1075663045</nova:name>
Nov 25 17:21:39 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 17:21:38</nova:creationTime>
Nov 25 17:21:39 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 17:21:39 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 17:21:39 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 17:21:39 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 17:21:39 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:21:39 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 17:21:39 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 17:21:39 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 17:21:39 compute-0 nova_compute[254092]:         <nova:user uuid="a647bb22c9a54e5fa9ddfff588126693">tempest-TestGettingAddress-207202764-project-member</nova:user>
Nov 25 17:21:39 compute-0 nova_compute[254092]:         <nova:project uuid="67b7f8e8aac6441f992a182f8d893100">tempest-TestGettingAddress-207202764</nova:project>
Nov 25 17:21:39 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 17:21:39 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 17:21:39 compute-0 nova_compute[254092]:         <nova:port uuid="c1ba1b56-3c61-42fa-b23d-44349357a11a">
Nov 25 17:21:39 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe77:8aa5" ipVersion="6"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe77:8aa5" ipVersion="6"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 17:21:39 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 17:21:39 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:21:39 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 17:21:39 compute-0 nova_compute[254092]:     <system>
Nov 25 17:21:39 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 17:21:39 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 17:21:39 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:21:39 compute-0 nova_compute[254092]:       <entry name="serial">36b839e5-d6db-406a-ab95-bbdcd48c531d</entry>
Nov 25 17:21:39 compute-0 nova_compute[254092]:       <entry name="uuid">36b839e5-d6db-406a-ab95-bbdcd48c531d</entry>
Nov 25 17:21:39 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     </system>
Nov 25 17:21:39 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:21:39 compute-0 nova_compute[254092]:   <os>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:   </os>
Nov 25 17:21:39 compute-0 nova_compute[254092]:   <features>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:   </features>
Nov 25 17:21:39 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 17:21:39 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:21:39 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 17:21:39 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:21:39 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 17:21:39 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/36b839e5-d6db-406a-ab95-bbdcd48c531d_disk">
Nov 25 17:21:39 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:       </source>
Nov 25 17:21:39 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:21:39 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:21:39 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 17:21:39 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/36b839e5-d6db-406a-ab95-bbdcd48c531d_disk.config">
Nov 25 17:21:39 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:       </source>
Nov 25 17:21:39 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:21:39 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:21:39 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 17:21:39 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:77:8a:a5"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:       <target dev="tapc1ba1b56-3c"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 17:21:39 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/36b839e5-d6db-406a-ab95-bbdcd48c531d/console.log" append="off"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     <video>
Nov 25 17:21:39 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     </video>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 17:21:39 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 17:21:39 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 17:21:39 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:21:39 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:21:39 compute-0 nova_compute[254092]: </domain>
Nov 25 17:21:39 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 17:21:39 compute-0 nova_compute[254092]: 2025-11-25 17:21:39.416 254096 DEBUG nova.compute.manager [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Preparing to wait for external event network-vif-plugged-c1ba1b56-3c61-42fa-b23d-44349357a11a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 17:21:39 compute-0 nova_compute[254092]: 2025-11-25 17:21:39.417 254096 DEBUG oslo_concurrency.lockutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "36b839e5-d6db-406a-ab95-bbdcd48c531d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:21:39 compute-0 nova_compute[254092]: 2025-11-25 17:21:39.417 254096 DEBUG oslo_concurrency.lockutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "36b839e5-d6db-406a-ab95-bbdcd48c531d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:21:39 compute-0 nova_compute[254092]: 2025-11-25 17:21:39.417 254096 DEBUG oslo_concurrency.lockutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "36b839e5-d6db-406a-ab95-bbdcd48c531d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:21:39 compute-0 nova_compute[254092]: 2025-11-25 17:21:39.418 254096 DEBUG nova.virt.libvirt.vif [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:21:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1075663045',display_name='tempest-TestGettingAddress-server-1075663045',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1075663045',id=145,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIRhAmR3DOLtGAzZ0xNgg5fZOngalELipYPxditOta+vsl4USGsDMHdwACzWpYO9hL+8kf+OvOhS+QLbsXFPxV626VeNASOG72UHufjWxf78foL9bNoVwzZLv4H5DTuJTA==',key_name='tempest-TestGettingAddress-910447973',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-0xnzfd4s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:21:34Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=36b839e5-d6db-406a-ab95-bbdcd48c531d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "address": "fa:16:3e:77:8a:a5", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ba1b56-3c", "ovs_interfaceid": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:21:39 compute-0 nova_compute[254092]: 2025-11-25 17:21:39.419 254096 DEBUG nova.network.os_vif_util [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "address": "fa:16:3e:77:8a:a5", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ba1b56-3c", "ovs_interfaceid": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:21:39 compute-0 nova_compute[254092]: 2025-11-25 17:21:39.420 254096 DEBUG nova.network.os_vif_util [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:8a:a5,bridge_name='br-int',has_traffic_filtering=True,id=c1ba1b56-3c61-42fa-b23d-44349357a11a,network=Network(84058e12-5c2c-4ee6-a8bb-052eff4cc252),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ba1b56-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:21:39 compute-0 nova_compute[254092]: 2025-11-25 17:21:39.420 254096 DEBUG os_vif [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:8a:a5,bridge_name='br-int',has_traffic_filtering=True,id=c1ba1b56-3c61-42fa-b23d-44349357a11a,network=Network(84058e12-5c2c-4ee6-a8bb-052eff4cc252),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ba1b56-3c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:21:39 compute-0 nova_compute[254092]: 2025-11-25 17:21:39.421 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:21:39 compute-0 nova_compute[254092]: 2025-11-25 17:21:39.421 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:21:39 compute-0 nova_compute[254092]: 2025-11-25 17:21:39.422 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:21:39 compute-0 nova_compute[254092]: 2025-11-25 17:21:39.429 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:21:39 compute-0 nova_compute[254092]: 2025-11-25 17:21:39.430 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc1ba1b56-3c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:21:39 compute-0 nova_compute[254092]: 2025-11-25 17:21:39.430 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc1ba1b56-3c, col_values=(('external_ids', {'iface-id': 'c1ba1b56-3c61-42fa-b23d-44349357a11a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:77:8a:a5', 'vm-uuid': '36b839e5-d6db-406a-ab95-bbdcd48c531d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:21:39 compute-0 nova_compute[254092]: 2025-11-25 17:21:39.431 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:21:39 compute-0 NetworkManager[48891]: <info>  [1764091299.4333] manager: (tapc1ba1b56-3c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/648)
Nov 25 17:21:39 compute-0 nova_compute[254092]: 2025-11-25 17:21:39.433 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:21:39 compute-0 nova_compute[254092]: 2025-11-25 17:21:39.437 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:21:39 compute-0 nova_compute[254092]: 2025-11-25 17:21:39.438 254096 INFO os_vif [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:8a:a5,bridge_name='br-int',has_traffic_filtering=True,id=c1ba1b56-3c61-42fa-b23d-44349357a11a,network=Network(84058e12-5c2c-4ee6-a8bb-052eff4cc252),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ba1b56-3c')
Nov 25 17:21:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:21:39 compute-0 nova_compute[254092]: 2025-11-25 17:21:39.493 254096 DEBUG nova.virt.libvirt.driver [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:21:39 compute-0 nova_compute[254092]: 2025-11-25 17:21:39.494 254096 DEBUG nova.virt.libvirt.driver [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:21:39 compute-0 nova_compute[254092]: 2025-11-25 17:21:39.495 254096 DEBUG nova.virt.libvirt.driver [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No VIF found with MAC fa:16:3e:77:8a:a5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:21:39 compute-0 nova_compute[254092]: 2025-11-25 17:21:39.496 254096 INFO nova.virt.libvirt.driver [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Using config drive
Nov 25 17:21:39 compute-0 nova_compute[254092]: 2025-11-25 17:21:39.525 254096 DEBUG nova.storage.rbd_utils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 36b839e5-d6db-406a-ab95-bbdcd48c531d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:21:40 compute-0 nova_compute[254092]: 2025-11-25 17:21:40.014 254096 INFO nova.virt.libvirt.driver [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Creating config drive at /var/lib/nova/instances/36b839e5-d6db-406a-ab95-bbdcd48c531d/disk.config
Nov 25 17:21:40 compute-0 nova_compute[254092]: 2025-11-25 17:21:40.019 254096 DEBUG oslo_concurrency.processutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/36b839e5-d6db-406a-ab95-bbdcd48c531d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplg17ba1i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:21:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:21:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:21:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:21:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:21:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:21:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:21:40 compute-0 nova_compute[254092]: 2025-11-25 17:21:40.170 254096 DEBUG oslo_concurrency.processutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/36b839e5-d6db-406a-ab95-bbdcd48c531d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplg17ba1i" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:21:40 compute-0 nova_compute[254092]: 2025-11-25 17:21:40.194 254096 DEBUG nova.storage.rbd_utils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 36b839e5-d6db-406a-ab95-bbdcd48c531d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:21:40 compute-0 nova_compute[254092]: 2025-11-25 17:21:40.199 254096 DEBUG oslo_concurrency.processutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/36b839e5-d6db-406a-ab95-bbdcd48c531d/disk.config 36b839e5-d6db-406a-ab95-bbdcd48c531d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:21:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:21:40
Nov 25 17:21:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:21:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:21:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['images', 'default.rgw.control', 'default.rgw.log', 'volumes', 'vms', '.rgw.root', 'backups', 'default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data']
Nov 25 17:21:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:21:40 compute-0 ceph-mon[74985]: pgmap v2943: 321 pgs: 321 active+clean; 88 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Nov 25 17:21:40 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3854290813' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:21:40 compute-0 nova_compute[254092]: 2025-11-25 17:21:40.361 254096 DEBUG oslo_concurrency.processutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/36b839e5-d6db-406a-ab95-bbdcd48c531d/disk.config 36b839e5-d6db-406a-ab95-bbdcd48c531d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:21:40 compute-0 nova_compute[254092]: 2025-11-25 17:21:40.362 254096 INFO nova.virt.libvirt.driver [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Deleting local config drive /var/lib/nova/instances/36b839e5-d6db-406a-ab95-bbdcd48c531d/disk.config because it was imported into RBD.
Nov 25 17:21:40 compute-0 kernel: tapc1ba1b56-3c: entered promiscuous mode
Nov 25 17:21:40 compute-0 NetworkManager[48891]: <info>  [1764091300.4433] manager: (tapc1ba1b56-3c): new Tun device (/org/freedesktop/NetworkManager/Devices/649)
Nov 25 17:21:40 compute-0 ovn_controller[153477]: 2025-11-25T17:21:40Z|01566|binding|INFO|Claiming lport c1ba1b56-3c61-42fa-b23d-44349357a11a for this chassis.
Nov 25 17:21:40 compute-0 ovn_controller[153477]: 2025-11-25T17:21:40Z|01567|binding|INFO|c1ba1b56-3c61-42fa-b23d-44349357a11a: Claiming fa:16:3e:77:8a:a5 10.100.0.3 2001:db8:0:1:f816:3eff:fe77:8aa5 2001:db8::f816:3eff:fe77:8aa5
Nov 25 17:21:40 compute-0 nova_compute[254092]: 2025-11-25 17:21:40.446 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:21:40 compute-0 nova_compute[254092]: 2025-11-25 17:21:40.451 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.466 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:8a:a5 10.100.0.3 2001:db8:0:1:f816:3eff:fe77:8aa5 2001:db8::f816:3eff:fe77:8aa5'], port_security=['fa:16:3e:77:8a:a5 10.100.0.3 2001:db8:0:1:f816:3eff:fe77:8aa5 2001:db8::f816:3eff:fe77:8aa5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8:0:1:f816:3eff:fe77:8aa5/64 2001:db8::f816:3eff:fe77:8aa5/64', 'neutron:device_id': '36b839e5-d6db-406a-ab95-bbdcd48c531d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84058e12-5c2c-4ee6-a8bb-052eff4cc252', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fb9b5bad-6662-478f-ad54-33baab50ad17', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4dc08b4-4174-48fd-8366-1bed885d47d6, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=c1ba1b56-3c61-42fa-b23d-44349357a11a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.469 163338 INFO neutron.agent.ovn.metadata.agent [-] Port c1ba1b56-3c61-42fa-b23d-44349357a11a in datapath 84058e12-5c2c-4ee6-a8bb-052eff4cc252 bound to our chassis
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.470 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 84058e12-5c2c-4ee6-a8bb-052eff4cc252
Nov 25 17:21:40 compute-0 systemd-udevd[417415]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:21:40 compute-0 nova_compute[254092]: 2025-11-25 17:21:40.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.488 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5e1791ae-3f33-4c70-9573-a24057cb22e3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.498 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap84058e12-51 in ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 17:21:40 compute-0 NetworkManager[48891]: <info>  [1764091300.5000] device (tapc1ba1b56-3c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:21:40 compute-0 NetworkManager[48891]: <info>  [1764091300.5009] device (tapc1ba1b56-3c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:21:40 compute-0 systemd-machined[216343]: New machine qemu-179-instance-00000091.
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.500 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap84058e12-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.500 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bda60cd8-9372-43cb-8561-d5202b9b00a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.504 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[eb44e547-68e9-47eb-bf06-71a4adca7ca3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.519 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[a116e07e-a18d-423a-b552-673ed478ff9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:21:40 compute-0 systemd[1]: Started Virtual Machine qemu-179-instance-00000091.
Nov 25 17:21:40 compute-0 nova_compute[254092]: 2025-11-25 17:21:40.536 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:21:40 compute-0 ovn_controller[153477]: 2025-11-25T17:21:40Z|01568|binding|INFO|Setting lport c1ba1b56-3c61-42fa-b23d-44349357a11a ovn-installed in OVS
Nov 25 17:21:40 compute-0 ovn_controller[153477]: 2025-11-25T17:21:40Z|01569|binding|INFO|Setting lport c1ba1b56-3c61-42fa-b23d-44349357a11a up in Southbound
Nov 25 17:21:40 compute-0 nova_compute[254092]: 2025-11-25 17:21:40.542 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.549 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b06037ed-d158-488c-bde5-a412564e1e20]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:21:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:21:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:21:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:21:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:21:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.588 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[2493ece1-ba7d-4b16-ba32-5d10eba2af02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.597 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e160b306-ba4e-4a75-88ff-1de8d4db2b78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:21:40 compute-0 NetworkManager[48891]: <info>  [1764091300.5985] manager: (tap84058e12-50): new Veth device (/org/freedesktop/NetworkManager/Devices/650)
Nov 25 17:21:40 compute-0 systemd-udevd[417421]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.639 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[440968ad-cb36-4fd0-9cd4-f49eb30eb369]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.642 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8f17b8b1-f98c-4f00-8d50-d98b42d755fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:21:40 compute-0 NetworkManager[48891]: <info>  [1764091300.6703] device (tap84058e12-50): carrier: link connected
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.677 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[4b00e507-210e-48e0-a845-952d953fbf1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:21:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:21:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:21:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:21:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.698 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[865e69e5-402f-4b4d-9979-5b535c82d2e7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84058e12-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:fa:d8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 456], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 775826, 'reachable_time': 37573, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 417453, 'error': None, 'target': 'ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:21:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:21:40 compute-0 nova_compute[254092]: 2025-11-25 17:21:40.713 254096 DEBUG nova.network.neutron [req-794b1ea0-0282-4382-97f2-201b69c784c5 req-e659c6ae-b200-4779-9c39-8d9d5d95fefd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Updated VIF entry in instance network info cache for port c1ba1b56-3c61-42fa-b23d-44349357a11a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:21:40 compute-0 nova_compute[254092]: 2025-11-25 17:21:40.714 254096 DEBUG nova.network.neutron [req-794b1ea0-0282-4382-97f2-201b69c784c5 req-e659c6ae-b200-4779-9c39-8d9d5d95fefd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Updating instance_info_cache with network_info: [{"id": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "address": "fa:16:3e:77:8a:a5", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ba1b56-3c", "ovs_interfaceid": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.718 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e540a337-df1b-4b81-98d0-232314ecb719]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe42:fad8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 775826, 'tstamp': 775826}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 417454, 'error': None, 'target': 'ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:21:40 compute-0 nova_compute[254092]: 2025-11-25 17:21:40.730 254096 DEBUG oslo_concurrency.lockutils [req-794b1ea0-0282-4382-97f2-201b69c784c5 req-e659c6ae-b200-4779-9c39-8d9d5d95fefd a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-36b839e5-d6db-406a-ab95-bbdcd48c531d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.738 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7a2956a2-d8eb-4847-9b51-eeffd4bf1eda]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84058e12-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:fa:d8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 456], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 775826, 'reachable_time': 37573, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 417455, 'error': None, 'target': 'ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.776 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b5e8085b-1bba-4908-a8f0-afbcfce580a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.863 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2f4c3b4a-1890-4309-9438-9fa9381f0e0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.865 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84058e12-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.866 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.867 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84058e12-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:21:40 compute-0 kernel: tap84058e12-50: entered promiscuous mode
Nov 25 17:21:40 compute-0 NetworkManager[48891]: <info>  [1764091300.8705] manager: (tap84058e12-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/651)
Nov 25 17:21:40 compute-0 nova_compute[254092]: 2025-11-25 17:21:40.869 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:21:40 compute-0 nova_compute[254092]: 2025-11-25 17:21:40.872 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.878 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap84058e12-50, col_values=(('external_ids', {'iface-id': 'f8afc421-e45b-4911-af18-dd32853c6b8c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:21:40 compute-0 ovn_controller[153477]: 2025-11-25T17:21:40Z|01570|binding|INFO|Releasing lport f8afc421-e45b-4911-af18-dd32853c6b8c from this chassis (sb_readonly=0)
Nov 25 17:21:40 compute-0 nova_compute[254092]: 2025-11-25 17:21:40.880 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.884 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/84058e12-5c2c-4ee6-a8bb-052eff4cc252.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/84058e12-5c2c-4ee6-a8bb-052eff4cc252.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.885 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8ab9ec2e-0f69-4acf-9a00-8c63a33cede6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.886 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]: global
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-84058e12-5c2c-4ee6-a8bb-052eff4cc252
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/84058e12-5c2c-4ee6-a8bb-052eff4cc252.pid.haproxy
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 84058e12-5c2c-4ee6-a8bb-052eff4cc252
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 17:21:40 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:21:40.887 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252', 'env', 'PROCESS_TAG=haproxy-84058e12-5c2c-4ee6-a8bb-052eff4cc252', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/84058e12-5c2c-4ee6-a8bb-052eff4cc252.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 17:21:40 compute-0 nova_compute[254092]: 2025-11-25 17:21:40.894 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:21:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2944: 321 pgs: 321 active+clean; 88 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 17:21:41 compute-0 nova_compute[254092]: 2025-11-25 17:21:41.147 254096 DEBUG nova.compute.manager [req-615b0354-dfb6-4650-bd6f-1c094e38bbb5 req-8c986974-2c86-43eb-93eb-0af0f9dd4395 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Received event network-vif-plugged-c1ba1b56-3c61-42fa-b23d-44349357a11a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:21:41 compute-0 nova_compute[254092]: 2025-11-25 17:21:41.148 254096 DEBUG oslo_concurrency.lockutils [req-615b0354-dfb6-4650-bd6f-1c094e38bbb5 req-8c986974-2c86-43eb-93eb-0af0f9dd4395 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "36b839e5-d6db-406a-ab95-bbdcd48c531d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:21:41 compute-0 nova_compute[254092]: 2025-11-25 17:21:41.148 254096 DEBUG oslo_concurrency.lockutils [req-615b0354-dfb6-4650-bd6f-1c094e38bbb5 req-8c986974-2c86-43eb-93eb-0af0f9dd4395 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "36b839e5-d6db-406a-ab95-bbdcd48c531d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:21:41 compute-0 nova_compute[254092]: 2025-11-25 17:21:41.148 254096 DEBUG oslo_concurrency.lockutils [req-615b0354-dfb6-4650-bd6f-1c094e38bbb5 req-8c986974-2c86-43eb-93eb-0af0f9dd4395 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "36b839e5-d6db-406a-ab95-bbdcd48c531d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:21:41 compute-0 nova_compute[254092]: 2025-11-25 17:21:41.148 254096 DEBUG nova.compute.manager [req-615b0354-dfb6-4650-bd6f-1c094e38bbb5 req-8c986974-2c86-43eb-93eb-0af0f9dd4395 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Processing event network-vif-plugged-c1ba1b56-3c61-42fa-b23d-44349357a11a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 17:21:41 compute-0 podman[417524]: 2025-11-25 17:21:41.274038323 +0000 UTC m=+0.055976205 container create 7c13e2ed5d8ee54b2a99610badde6c4e9d1779148bac578c849e2c08efd5cb55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 17:21:41 compute-0 nova_compute[254092]: 2025-11-25 17:21:41.304 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091301.303422, 36b839e5-d6db-406a-ab95-bbdcd48c531d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:21:41 compute-0 nova_compute[254092]: 2025-11-25 17:21:41.305 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] VM Started (Lifecycle Event)
Nov 25 17:21:41 compute-0 systemd[1]: Started libpod-conmon-7c13e2ed5d8ee54b2a99610badde6c4e9d1779148bac578c849e2c08efd5cb55.scope.
Nov 25 17:21:41 compute-0 nova_compute[254092]: 2025-11-25 17:21:41.310 254096 DEBUG nova.compute.manager [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 17:21:41 compute-0 nova_compute[254092]: 2025-11-25 17:21:41.317 254096 DEBUG nova.virt.libvirt.driver [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 17:21:41 compute-0 nova_compute[254092]: 2025-11-25 17:21:41.321 254096 INFO nova.virt.libvirt.driver [-] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Instance spawned successfully.
Nov 25 17:21:41 compute-0 nova_compute[254092]: 2025-11-25 17:21:41.321 254096 DEBUG nova.virt.libvirt.driver [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 17:21:41 compute-0 nova_compute[254092]: 2025-11-25 17:21:41.324 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:21:41 compute-0 nova_compute[254092]: 2025-11-25 17:21:41.330 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:21:41 compute-0 podman[417524]: 2025-11-25 17:21:41.244501758 +0000 UTC m=+0.026439700 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 17:21:41 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:21:41 compute-0 nova_compute[254092]: 2025-11-25 17:21:41.340 254096 DEBUG nova.virt.libvirt.driver [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:21:41 compute-0 nova_compute[254092]: 2025-11-25 17:21:41.341 254096 DEBUG nova.virt.libvirt.driver [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:21:41 compute-0 nova_compute[254092]: 2025-11-25 17:21:41.341 254096 DEBUG nova.virt.libvirt.driver [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:21:41 compute-0 nova_compute[254092]: 2025-11-25 17:21:41.342 254096 DEBUG nova.virt.libvirt.driver [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:21:41 compute-0 nova_compute[254092]: 2025-11-25 17:21:41.342 254096 DEBUG nova.virt.libvirt.driver [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:21:41 compute-0 nova_compute[254092]: 2025-11-25 17:21:41.342 254096 DEBUG nova.virt.libvirt.driver [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:21:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8a1850d0a22111fadf1ed03693f6973e2c59b4c9475f9c71c2ca4d4e5534c52/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 17:21:41 compute-0 nova_compute[254092]: 2025-11-25 17:21:41.357 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:21:41 compute-0 nova_compute[254092]: 2025-11-25 17:21:41.357 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091301.3035274, 36b839e5-d6db-406a-ab95-bbdcd48c531d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:21:41 compute-0 nova_compute[254092]: 2025-11-25 17:21:41.358 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] VM Paused (Lifecycle Event)
Nov 25 17:21:41 compute-0 podman[417524]: 2025-11-25 17:21:41.365942704 +0000 UTC m=+0.147880606 container init 7c13e2ed5d8ee54b2a99610badde6c4e9d1779148bac578c849e2c08efd5cb55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 17:21:41 compute-0 podman[417524]: 2025-11-25 17:21:41.371726662 +0000 UTC m=+0.153664534 container start 7c13e2ed5d8ee54b2a99610badde6c4e9d1779148bac578c849e2c08efd5cb55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 17:21:41 compute-0 podman[417544]: 2025-11-25 17:21:41.381263141 +0000 UTC m=+0.069612156 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 17:21:41 compute-0 nova_compute[254092]: 2025-11-25 17:21:41.381 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:21:41 compute-0 nova_compute[254092]: 2025-11-25 17:21:41.388 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091301.317223, 36b839e5-d6db-406a-ab95-bbdcd48c531d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:21:41 compute-0 nova_compute[254092]: 2025-11-25 17:21:41.388 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] VM Resumed (Lifecycle Event)
Nov 25 17:21:41 compute-0 neutron-haproxy-ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252[417552]: [NOTICE]   (417594) : New worker (417604) forked
Nov 25 17:21:41 compute-0 neutron-haproxy-ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252[417552]: [NOTICE]   (417594) : Loading success.
Nov 25 17:21:41 compute-0 nova_compute[254092]: 2025-11-25 17:21:41.404 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:21:41 compute-0 nova_compute[254092]: 2025-11-25 17:21:41.410 254096 INFO nova.compute.manager [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Took 7.18 seconds to spawn the instance on the hypervisor.
Nov 25 17:21:41 compute-0 nova_compute[254092]: 2025-11-25 17:21:41.410 254096 DEBUG nova.compute.manager [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:21:41 compute-0 nova_compute[254092]: 2025-11-25 17:21:41.411 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:21:41 compute-0 nova_compute[254092]: 2025-11-25 17:21:41.434 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:21:41 compute-0 podman[417547]: 2025-11-25 17:21:41.466778119 +0000 UTC m=+0.148517254 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 25 17:21:41 compute-0 nova_compute[254092]: 2025-11-25 17:21:41.470 254096 INFO nova.compute.manager [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Took 8.15 seconds to build instance.
Nov 25 17:21:41 compute-0 podman[417546]: 2025-11-25 17:21:41.472371191 +0000 UTC m=+0.155610677 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 25 17:21:41 compute-0 nova_compute[254092]: 2025-11-25 17:21:41.485 254096 DEBUG oslo_concurrency.lockutils [None req-f5f4dd19-6e6a-4ac2-bfff-a32690de75f6 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "36b839e5-d6db-406a-ab95-bbdcd48c531d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.231s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:21:41 compute-0 nova_compute[254092]: 2025-11-25 17:21:41.509 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:21:42 compute-0 ceph-mon[74985]: pgmap v2944: 321 pgs: 321 active+clean; 88 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 17:21:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2945: 321 pgs: 321 active+clean; 88 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 17:21:43 compute-0 nova_compute[254092]: 2025-11-25 17:21:43.250 254096 DEBUG nova.compute.manager [req-a7bc2717-132b-466d-90d0-bf74f6548743 req-7b330f5f-5bba-4c53-b5e2-ac2a7d01c692 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Received event network-vif-plugged-c1ba1b56-3c61-42fa-b23d-44349357a11a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:21:43 compute-0 nova_compute[254092]: 2025-11-25 17:21:43.250 254096 DEBUG oslo_concurrency.lockutils [req-a7bc2717-132b-466d-90d0-bf74f6548743 req-7b330f5f-5bba-4c53-b5e2-ac2a7d01c692 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "36b839e5-d6db-406a-ab95-bbdcd48c531d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:21:43 compute-0 nova_compute[254092]: 2025-11-25 17:21:43.250 254096 DEBUG oslo_concurrency.lockutils [req-a7bc2717-132b-466d-90d0-bf74f6548743 req-7b330f5f-5bba-4c53-b5e2-ac2a7d01c692 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "36b839e5-d6db-406a-ab95-bbdcd48c531d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:21:43 compute-0 nova_compute[254092]: 2025-11-25 17:21:43.250 254096 DEBUG oslo_concurrency.lockutils [req-a7bc2717-132b-466d-90d0-bf74f6548743 req-7b330f5f-5bba-4c53-b5e2-ac2a7d01c692 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "36b839e5-d6db-406a-ab95-bbdcd48c531d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:21:43 compute-0 nova_compute[254092]: 2025-11-25 17:21:43.251 254096 DEBUG nova.compute.manager [req-a7bc2717-132b-466d-90d0-bf74f6548743 req-7b330f5f-5bba-4c53-b5e2-ac2a7d01c692 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] No waiting events found dispatching network-vif-plugged-c1ba1b56-3c61-42fa-b23d-44349357a11a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:21:43 compute-0 nova_compute[254092]: 2025-11-25 17:21:43.251 254096 WARNING nova.compute.manager [req-a7bc2717-132b-466d-90d0-bf74f6548743 req-7b330f5f-5bba-4c53-b5e2-ac2a7d01c692 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Received unexpected event network-vif-plugged-c1ba1b56-3c61-42fa-b23d-44349357a11a for instance with vm_state active and task_state None.
Nov 25 17:21:44 compute-0 ceph-mon[74985]: pgmap v2945: 321 pgs: 321 active+clean; 88 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 25 17:21:44 compute-0 nova_compute[254092]: 2025-11-25 17:21:44.434 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:21:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:21:44 compute-0 nova_compute[254092]: 2025-11-25 17:21:44.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:21:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2946: 321 pgs: 321 active+clean; 88 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 78 op/s
Nov 25 17:21:45 compute-0 nova_compute[254092]: 2025-11-25 17:21:45.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:21:45 compute-0 nova_compute[254092]: 2025-11-25 17:21:45.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:21:45 compute-0 nova_compute[254092]: 2025-11-25 17:21:45.529 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:21:45 compute-0 nova_compute[254092]: 2025-11-25 17:21:45.530 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:21:45 compute-0 nova_compute[254092]: 2025-11-25 17:21:45.530 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:21:45 compute-0 nova_compute[254092]: 2025-11-25 17:21:45.530 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:21:45 compute-0 nova_compute[254092]: 2025-11-25 17:21:45.531 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:21:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:21:46 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2385882845' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:21:46 compute-0 nova_compute[254092]: 2025-11-25 17:21:46.079 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:21:46 compute-0 nova_compute[254092]: 2025-11-25 17:21:46.176 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:21:46 compute-0 nova_compute[254092]: 2025-11-25 17:21:46.176 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:21:46 compute-0 ceph-mon[74985]: pgmap v2946: 321 pgs: 321 active+clean; 88 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 78 op/s
Nov 25 17:21:46 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2385882845' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:21:46 compute-0 ovn_controller[153477]: 2025-11-25T17:21:46Z|01571|binding|INFO|Releasing lport f8afc421-e45b-4911-af18-dd32853c6b8c from this chassis (sb_readonly=0)
Nov 25 17:21:46 compute-0 nova_compute[254092]: 2025-11-25 17:21:46.330 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:21:46 compute-0 NetworkManager[48891]: <info>  [1764091306.3315] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/652)
Nov 25 17:21:46 compute-0 NetworkManager[48891]: <info>  [1764091306.3327] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/653)
Nov 25 17:21:46 compute-0 ovn_controller[153477]: 2025-11-25T17:21:46Z|01572|binding|INFO|Releasing lport f8afc421-e45b-4911-af18-dd32853c6b8c from this chassis (sb_readonly=0)
Nov 25 17:21:46 compute-0 nova_compute[254092]: 2025-11-25 17:21:46.365 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:21:46 compute-0 nova_compute[254092]: 2025-11-25 17:21:46.372 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:21:46 compute-0 nova_compute[254092]: 2025-11-25 17:21:46.373 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3467MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:21:46 compute-0 nova_compute[254092]: 2025-11-25 17:21:46.374 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:21:46 compute-0 nova_compute[254092]: 2025-11-25 17:21:46.374 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:21:46 compute-0 nova_compute[254092]: 2025-11-25 17:21:46.375 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:21:46 compute-0 nova_compute[254092]: 2025-11-25 17:21:46.437 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 36b839e5-d6db-406a-ab95-bbdcd48c531d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 17:21:46 compute-0 nova_compute[254092]: 2025-11-25 17:21:46.438 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:21:46 compute-0 nova_compute[254092]: 2025-11-25 17:21:46.438 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:21:46 compute-0 nova_compute[254092]: 2025-11-25 17:21:46.488 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:21:46 compute-0 nova_compute[254092]: 2025-11-25 17:21:46.547 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:21:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:21:46 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1172299829' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:21:46 compute-0 nova_compute[254092]: 2025-11-25 17:21:46.966 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:21:46 compute-0 nova_compute[254092]: 2025-11-25 17:21:46.972 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:21:46 compute-0 nova_compute[254092]: 2025-11-25 17:21:46.987 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:21:47 compute-0 nova_compute[254092]: 2025-11-25 17:21:47.007 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:21:47 compute-0 nova_compute[254092]: 2025-11-25 17:21:47.008 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:21:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2947: 321 pgs: 321 active+clean; 88 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 606 KiB/s wr, 97 op/s
Nov 25 17:21:47 compute-0 nova_compute[254092]: 2025-11-25 17:21:47.262 254096 DEBUG nova.compute.manager [req-cb71219e-4052-4913-9056-bfd8b3a9100b req-60c2fab3-24c6-4c14-b7b1-ceda370dca26 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Received event network-changed-c1ba1b56-3c61-42fa-b23d-44349357a11a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:21:47 compute-0 nova_compute[254092]: 2025-11-25 17:21:47.263 254096 DEBUG nova.compute.manager [req-cb71219e-4052-4913-9056-bfd8b3a9100b req-60c2fab3-24c6-4c14-b7b1-ceda370dca26 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Refreshing instance network info cache due to event network-changed-c1ba1b56-3c61-42fa-b23d-44349357a11a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:21:47 compute-0 nova_compute[254092]: 2025-11-25 17:21:47.264 254096 DEBUG oslo_concurrency.lockutils [req-cb71219e-4052-4913-9056-bfd8b3a9100b req-60c2fab3-24c6-4c14-b7b1-ceda370dca26 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-36b839e5-d6db-406a-ab95-bbdcd48c531d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:21:47 compute-0 nova_compute[254092]: 2025-11-25 17:21:47.264 254096 DEBUG oslo_concurrency.lockutils [req-cb71219e-4052-4913-9056-bfd8b3a9100b req-60c2fab3-24c6-4c14-b7b1-ceda370dca26 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-36b839e5-d6db-406a-ab95-bbdcd48c531d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:21:47 compute-0 nova_compute[254092]: 2025-11-25 17:21:47.265 254096 DEBUG nova.network.neutron [req-cb71219e-4052-4913-9056-bfd8b3a9100b req-60c2fab3-24c6-4c14-b7b1-ceda370dca26 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Refreshing network info cache for port c1ba1b56-3c61-42fa-b23d-44349357a11a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:21:47 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1172299829' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:21:48 compute-0 ceph-mon[74985]: pgmap v2947: 321 pgs: 321 active+clean; 88 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 606 KiB/s wr, 97 op/s
Nov 25 17:21:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2948: 321 pgs: 321 active+clean; 88 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:21:49 compute-0 sudo[417666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:21:49 compute-0 sudo[417666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:21:49 compute-0 sudo[417666]: pam_unix(sudo:session): session closed for user root
Nov 25 17:21:49 compute-0 sudo[417691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:21:49 compute-0 sudo[417691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:21:49 compute-0 sudo[417691]: pam_unix(sudo:session): session closed for user root
Nov 25 17:21:49 compute-0 sudo[417716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:21:49 compute-0 sudo[417716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:21:49 compute-0 sudo[417716]: pam_unix(sudo:session): session closed for user root
Nov 25 17:21:49 compute-0 sudo[417741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:21:49 compute-0 sudo[417741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:21:49 compute-0 nova_compute[254092]: 2025-11-25 17:21:49.437 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:21:49 compute-0 nova_compute[254092]: 2025-11-25 17:21:49.442 254096 DEBUG nova.network.neutron [req-cb71219e-4052-4913-9056-bfd8b3a9100b req-60c2fab3-24c6-4c14-b7b1-ceda370dca26 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Updated VIF entry in instance network info cache for port c1ba1b56-3c61-42fa-b23d-44349357a11a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:21:49 compute-0 nova_compute[254092]: 2025-11-25 17:21:49.443 254096 DEBUG nova.network.neutron [req-cb71219e-4052-4913-9056-bfd8b3a9100b req-60c2fab3-24c6-4c14-b7b1-ceda370dca26 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Updating instance_info_cache with network_info: [{"id": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "address": "fa:16:3e:77:8a:a5", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ba1b56-3c", "ovs_interfaceid": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:21:49 compute-0 nova_compute[254092]: 2025-11-25 17:21:49.463 254096 DEBUG oslo_concurrency.lockutils [req-cb71219e-4052-4913-9056-bfd8b3a9100b req-60c2fab3-24c6-4c14-b7b1-ceda370dca26 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-36b839e5-d6db-406a-ab95-bbdcd48c531d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:21:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:21:50 compute-0 sudo[417741]: pam_unix(sudo:session): session closed for user root
Nov 25 17:21:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:21:50 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:21:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:21:50 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:21:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:21:50 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:21:50 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 0add22ef-98d6-47e6-ac32-751a39afb176 does not exist
Nov 25 17:21:50 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 4a22fd6a-19d1-47de-a5ca-f68d6e0cc77e does not exist
Nov 25 17:21:50 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev a65cfcfb-80e1-432c-8745-ff3e7b913c39 does not exist
Nov 25 17:21:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:21:50 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:21:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:21:50 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:21:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:21:50 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:21:50 compute-0 sudo[417796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:21:50 compute-0 sudo[417796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:21:50 compute-0 sudo[417796]: pam_unix(sudo:session): session closed for user root
Nov 25 17:21:50 compute-0 sudo[417821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:21:50 compute-0 sudo[417821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:21:50 compute-0 sudo[417821]: pam_unix(sudo:session): session closed for user root
Nov 25 17:21:50 compute-0 ceph-mon[74985]: pgmap v2948: 321 pgs: 321 active+clean; 88 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:21:50 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:21:50 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:21:50 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:21:50 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:21:50 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:21:50 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:21:50 compute-0 sudo[417846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:21:50 compute-0 sudo[417846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:21:50 compute-0 sudo[417846]: pam_unix(sudo:session): session closed for user root
Nov 25 17:21:50 compute-0 sudo[417871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:21:50 compute-0 sudo[417871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:21:51 compute-0 podman[417936]: 2025-11-25 17:21:51.001904157 +0000 UTC m=+0.084504091 container create d3645e96b950b4609db56b47a62ffbc19feaba4035cfe3857feef8afbb6d7557 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_solomon, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:21:51 compute-0 nova_compute[254092]: 2025-11-25 17:21:51.004 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:21:51 compute-0 nova_compute[254092]: 2025-11-25 17:21:51.005 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:21:51 compute-0 nova_compute[254092]: 2025-11-25 17:21:51.005 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:21:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2949: 321 pgs: 321 active+clean; 88 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:21:51 compute-0 systemd[1]: Started libpod-conmon-d3645e96b950b4609db56b47a62ffbc19feaba4035cfe3857feef8afbb6d7557.scope.
Nov 25 17:21:51 compute-0 podman[417936]: 2025-11-25 17:21:50.966540435 +0000 UTC m=+0.049140459 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:21:51 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:21:51 compute-0 podman[417936]: 2025-11-25 17:21:51.089750258 +0000 UTC m=+0.172350222 container init d3645e96b950b4609db56b47a62ffbc19feaba4035cfe3857feef8afbb6d7557 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:21:51 compute-0 podman[417936]: 2025-11-25 17:21:51.097461229 +0000 UTC m=+0.180061153 container start d3645e96b950b4609db56b47a62ffbc19feaba4035cfe3857feef8afbb6d7557 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_solomon, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:21:51 compute-0 podman[417936]: 2025-11-25 17:21:51.101367065 +0000 UTC m=+0.183966999 container attach d3645e96b950b4609db56b47a62ffbc19feaba4035cfe3857feef8afbb6d7557 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_solomon, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 17:21:51 compute-0 affectionate_solomon[417954]: 167 167
Nov 25 17:21:51 compute-0 systemd[1]: libpod-d3645e96b950b4609db56b47a62ffbc19feaba4035cfe3857feef8afbb6d7557.scope: Deactivated successfully.
Nov 25 17:21:51 compute-0 conmon[417954]: conmon d3645e96b950b4609db5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d3645e96b950b4609db56b47a62ffbc19feaba4035cfe3857feef8afbb6d7557.scope/container/memory.events
Nov 25 17:21:51 compute-0 podman[417936]: 2025-11-25 17:21:51.105901778 +0000 UTC m=+0.188501712 container died d3645e96b950b4609db56b47a62ffbc19feaba4035cfe3857feef8afbb6d7557 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_solomon, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:21:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-629f43ccc21f6f9b10d442d631691b756d692b9f22c80b602f0bfbd0537e1ba3-merged.mount: Deactivated successfully.
Nov 25 17:21:51 compute-0 podman[417936]: 2025-11-25 17:21:51.151199931 +0000 UTC m=+0.233799875 container remove d3645e96b950b4609db56b47a62ffbc19feaba4035cfe3857feef8afbb6d7557 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_solomon, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:21:51 compute-0 systemd[1]: libpod-conmon-d3645e96b950b4609db56b47a62ffbc19feaba4035cfe3857feef8afbb6d7557.scope: Deactivated successfully.
Nov 25 17:21:51 compute-0 podman[417977]: 2025-11-25 17:21:51.378439406 +0000 UTC m=+0.072330010 container create 333acb782d1ffd55c942643dc32ad82e77fb577e1b29f8d1513a9743e69bcd01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 17:21:51 compute-0 podman[417977]: 2025-11-25 17:21:51.349255641 +0000 UTC m=+0.043146285 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:21:51 compute-0 systemd[1]: Started libpod-conmon-333acb782d1ffd55c942643dc32ad82e77fb577e1b29f8d1513a9743e69bcd01.scope.
Nov 25 17:21:51 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:21:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f26e3d6c67818119cc54f22661aa64417320376b36732e40bd8c70430bef751/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:21:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f26e3d6c67818119cc54f22661aa64417320376b36732e40bd8c70430bef751/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:21:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f26e3d6c67818119cc54f22661aa64417320376b36732e40bd8c70430bef751/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:21:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f26e3d6c67818119cc54f22661aa64417320376b36732e40bd8c70430bef751/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:21:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f26e3d6c67818119cc54f22661aa64417320376b36732e40bd8c70430bef751/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:21:51 compute-0 podman[417977]: 2025-11-25 17:21:51.503404597 +0000 UTC m=+0.197295281 container init 333acb782d1ffd55c942643dc32ad82e77fb577e1b29f8d1513a9743e69bcd01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_goldstine, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 17:21:51 compute-0 podman[417977]: 2025-11-25 17:21:51.51413996 +0000 UTC m=+0.208030544 container start 333acb782d1ffd55c942643dc32ad82e77fb577e1b29f8d1513a9743e69bcd01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_goldstine, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Nov 25 17:21:51 compute-0 podman[417977]: 2025-11-25 17:21:51.517697727 +0000 UTC m=+0.211588411 container attach 333acb782d1ffd55c942643dc32ad82e77fb577e1b29f8d1513a9743e69bcd01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 17:21:51 compute-0 nova_compute[254092]: 2025-11-25 17:21:51.515 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:21:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:21:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:21:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:21:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:21:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Nov 25 17:21:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:21:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:21:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:21:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:21:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:21:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 17:21:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:21:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:21:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:21:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:21:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:21:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:21:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:21:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:21:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:21:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:21:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:21:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:21:52 compute-0 ceph-mon[74985]: pgmap v2949: 321 pgs: 321 active+clean; 88 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:21:52 compute-0 nova_compute[254092]: 2025-11-25 17:21:52.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:21:52 compute-0 unruffled_goldstine[417993]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:21:52 compute-0 unruffled_goldstine[417993]: --> relative data size: 1.0
Nov 25 17:21:52 compute-0 unruffled_goldstine[417993]: --> All data devices are unavailable
Nov 25 17:21:52 compute-0 systemd[1]: libpod-333acb782d1ffd55c942643dc32ad82e77fb577e1b29f8d1513a9743e69bcd01.scope: Deactivated successfully.
Nov 25 17:21:52 compute-0 systemd[1]: libpod-333acb782d1ffd55c942643dc32ad82e77fb577e1b29f8d1513a9743e69bcd01.scope: Consumed 1.136s CPU time.
Nov 25 17:21:52 compute-0 podman[417977]: 2025-11-25 17:21:52.70852837 +0000 UTC m=+1.402418964 container died 333acb782d1ffd55c942643dc32ad82e77fb577e1b29f8d1513a9743e69bcd01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 17:21:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f26e3d6c67818119cc54f22661aa64417320376b36732e40bd8c70430bef751-merged.mount: Deactivated successfully.
Nov 25 17:21:52 compute-0 podman[417977]: 2025-11-25 17:21:52.780043507 +0000 UTC m=+1.473934111 container remove 333acb782d1ffd55c942643dc32ad82e77fb577e1b29f8d1513a9743e69bcd01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:21:52 compute-0 systemd[1]: libpod-conmon-333acb782d1ffd55c942643dc32ad82e77fb577e1b29f8d1513a9743e69bcd01.scope: Deactivated successfully.
Nov 25 17:21:52 compute-0 sudo[417871]: pam_unix(sudo:session): session closed for user root
Nov 25 17:21:52 compute-0 sudo[418035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:21:52 compute-0 sudo[418035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:21:52 compute-0 sudo[418035]: pam_unix(sudo:session): session closed for user root
Nov 25 17:21:52 compute-0 sudo[418060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:21:52 compute-0 sudo[418060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:21:52 compute-0 sudo[418060]: pam_unix(sudo:session): session closed for user root
Nov 25 17:21:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2950: 321 pgs: 321 active+clean; 88 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Nov 25 17:21:53 compute-0 sudo[418085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:21:53 compute-0 sudo[418085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:21:53 compute-0 sudo[418085]: pam_unix(sudo:session): session closed for user root
Nov 25 17:21:53 compute-0 sudo[418110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:21:53 compute-0 sudo[418110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:21:53 compute-0 podman[418177]: 2025-11-25 17:21:53.579818726 +0000 UTC m=+0.062466382 container create 84b008ea76e2fec95020c8cc6acb59ac4804806c379349a979c02bb8a2a5524b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_yonath, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:21:53 compute-0 systemd[1]: Started libpod-conmon-84b008ea76e2fec95020c8cc6acb59ac4804806c379349a979c02bb8a2a5524b.scope.
Nov 25 17:21:53 compute-0 podman[418177]: 2025-11-25 17:21:53.5549604 +0000 UTC m=+0.037608076 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:21:53 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:21:53 compute-0 podman[418177]: 2025-11-25 17:21:53.673701741 +0000 UTC m=+0.156349417 container init 84b008ea76e2fec95020c8cc6acb59ac4804806c379349a979c02bb8a2a5524b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:21:53 compute-0 podman[418177]: 2025-11-25 17:21:53.681501764 +0000 UTC m=+0.164149410 container start 84b008ea76e2fec95020c8cc6acb59ac4804806c379349a979c02bb8a2a5524b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_yonath, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 17:21:53 compute-0 podman[418177]: 2025-11-25 17:21:53.685154563 +0000 UTC m=+0.167802229 container attach 84b008ea76e2fec95020c8cc6acb59ac4804806c379349a979c02bb8a2a5524b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 17:21:53 compute-0 quizzical_yonath[418194]: 167 167
Nov 25 17:21:53 compute-0 systemd[1]: libpod-84b008ea76e2fec95020c8cc6acb59ac4804806c379349a979c02bb8a2a5524b.scope: Deactivated successfully.
Nov 25 17:21:53 compute-0 podman[418177]: 2025-11-25 17:21:53.691279769 +0000 UTC m=+0.173927415 container died 84b008ea76e2fec95020c8cc6acb59ac4804806c379349a979c02bb8a2a5524b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:21:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-4069d982c1e13df212c966c5032461abc40ecbc7166043e6e4c8160348cdb910-merged.mount: Deactivated successfully.
Nov 25 17:21:53 compute-0 podman[418177]: 2025-11-25 17:21:53.730566979 +0000 UTC m=+0.213214625 container remove 84b008ea76e2fec95020c8cc6acb59ac4804806c379349a979c02bb8a2a5524b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 17:21:53 compute-0 systemd[1]: libpod-conmon-84b008ea76e2fec95020c8cc6acb59ac4804806c379349a979c02bb8a2a5524b.scope: Deactivated successfully.
Nov 25 17:21:53 compute-0 podman[418219]: 2025-11-25 17:21:53.967064807 +0000 UTC m=+0.069815962 container create cab4c191244c5f8ba3f6cdafe86f0f51471bfb1bddff78a623e33074db0595c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_solomon, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 17:21:54 compute-0 podman[418219]: 2025-11-25 17:21:53.933514993 +0000 UTC m=+0.036266198 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:21:54 compute-0 systemd[1]: Started libpod-conmon-cab4c191244c5f8ba3f6cdafe86f0f51471bfb1bddff78a623e33074db0595c1.scope.
Nov 25 17:21:54 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:21:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a993e60732e86686467072d4c94bf650cd3f9190ec88c2d426311ea7ca9f054/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:21:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a993e60732e86686467072d4c94bf650cd3f9190ec88c2d426311ea7ca9f054/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:21:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a993e60732e86686467072d4c94bf650cd3f9190ec88c2d426311ea7ca9f054/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:21:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a993e60732e86686467072d4c94bf650cd3f9190ec88c2d426311ea7ca9f054/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:21:54 compute-0 podman[418219]: 2025-11-25 17:21:54.073764361 +0000 UTC m=+0.176515586 container init cab4c191244c5f8ba3f6cdafe86f0f51471bfb1bddff78a623e33074db0595c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 17:21:54 compute-0 podman[418219]: 2025-11-25 17:21:54.08513585 +0000 UTC m=+0.187887015 container start cab4c191244c5f8ba3f6cdafe86f0f51471bfb1bddff78a623e33074db0595c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_solomon, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 17:21:54 compute-0 podman[418219]: 2025-11-25 17:21:54.0902448 +0000 UTC m=+0.192996005 container attach cab4c191244c5f8ba3f6cdafe86f0f51471bfb1bddff78a623e33074db0595c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_solomon, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:21:54 compute-0 ceph-mon[74985]: pgmap v2950: 321 pgs: 321 active+clean; 88 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Nov 25 17:21:54 compute-0 nova_compute[254092]: 2025-11-25 17:21:54.443 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:21:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:21:54 compute-0 ovn_controller[153477]: 2025-11-25T17:21:54Z|00186|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:77:8a:a5 10.100.0.3
Nov 25 17:21:54 compute-0 ovn_controller[153477]: 2025-11-25T17:21:54Z|00187|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:77:8a:a5 10.100.0.3
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]: {
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:     "0": [
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:         {
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:             "devices": [
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:                 "/dev/loop3"
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:             ],
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:             "lv_name": "ceph_lv0",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:             "lv_size": "21470642176",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:             "name": "ceph_lv0",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:             "tags": {
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:                 "ceph.cluster_name": "ceph",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:                 "ceph.crush_device_class": "",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:                 "ceph.encrypted": "0",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:                 "ceph.osd_id": "0",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:                 "ceph.type": "block",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:                 "ceph.vdo": "0"
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:             },
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:             "type": "block",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:             "vg_name": "ceph_vg0"
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:         }
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:     ],
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:     "1": [
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:         {
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:             "devices": [
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:                 "/dev/loop4"
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:             ],
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:             "lv_name": "ceph_lv1",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:             "lv_size": "21470642176",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:             "name": "ceph_lv1",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:             "tags": {
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:                 "ceph.cluster_name": "ceph",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:                 "ceph.crush_device_class": "",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:                 "ceph.encrypted": "0",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:                 "ceph.osd_id": "1",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:                 "ceph.type": "block",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:                 "ceph.vdo": "0"
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:             },
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:             "type": "block",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:             "vg_name": "ceph_vg1"
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:         }
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:     ],
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:     "2": [
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:         {
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:             "devices": [
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:                 "/dev/loop5"
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:             ],
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:             "lv_name": "ceph_lv2",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:             "lv_size": "21470642176",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:             "name": "ceph_lv2",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:             "tags": {
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:                 "ceph.cluster_name": "ceph",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:                 "ceph.crush_device_class": "",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:                 "ceph.encrypted": "0",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:                 "ceph.osd_id": "2",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:                 "ceph.type": "block",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:                 "ceph.vdo": "0"
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:             },
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:             "type": "block",
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:             "vg_name": "ceph_vg2"
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:         }
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]:     ]
Nov 25 17:21:54 compute-0 wonderful_solomon[418236]: }
Nov 25 17:21:55 compute-0 systemd[1]: libpod-cab4c191244c5f8ba3f6cdafe86f0f51471bfb1bddff78a623e33074db0595c1.scope: Deactivated successfully.
Nov 25 17:21:55 compute-0 podman[418219]: 2025-11-25 17:21:55.012504733 +0000 UTC m=+1.115255888 container died cab4c191244c5f8ba3f6cdafe86f0f51471bfb1bddff78a623e33074db0595c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_solomon, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 17:21:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2951: 321 pgs: 321 active+clean; 117 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Nov 25 17:21:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a993e60732e86686467072d4c94bf650cd3f9190ec88c2d426311ea7ca9f054-merged.mount: Deactivated successfully.
Nov 25 17:21:55 compute-0 podman[418219]: 2025-11-25 17:21:55.092432809 +0000 UTC m=+1.195183934 container remove cab4c191244c5f8ba3f6cdafe86f0f51471bfb1bddff78a623e33074db0595c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_solomon, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:21:55 compute-0 systemd[1]: libpod-conmon-cab4c191244c5f8ba3f6cdafe86f0f51471bfb1bddff78a623e33074db0595c1.scope: Deactivated successfully.
Nov 25 17:21:55 compute-0 sudo[418110]: pam_unix(sudo:session): session closed for user root
Nov 25 17:21:55 compute-0 sudo[418259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:21:55 compute-0 sudo[418259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:21:55 compute-0 sudo[418259]: pam_unix(sudo:session): session closed for user root
Nov 25 17:21:55 compute-0 sudo[418284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:21:55 compute-0 sudo[418284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:21:55 compute-0 sudo[418284]: pam_unix(sudo:session): session closed for user root
Nov 25 17:21:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:21:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2929909979' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:21:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:21:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2929909979' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:21:55 compute-0 sudo[418309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:21:55 compute-0 sudo[418309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:21:55 compute-0 sudo[418309]: pam_unix(sudo:session): session closed for user root
Nov 25 17:21:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/2929909979' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:21:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/2929909979' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:21:55 compute-0 sudo[418334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:21:55 compute-0 sudo[418334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:21:55 compute-0 nova_compute[254092]: 2025-11-25 17:21:55.499 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:21:55 compute-0 nova_compute[254092]: 2025-11-25 17:21:55.499 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:21:55 compute-0 nova_compute[254092]: 2025-11-25 17:21:55.499 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:21:55 compute-0 podman[418400]: 2025-11-25 17:21:55.86620635 +0000 UTC m=+0.057453085 container create bc5d6d92ec3a0a2b5c390a178319ee43d6f37c175665a8521a34909d2b50192a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_archimedes, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 17:21:55 compute-0 podman[418400]: 2025-11-25 17:21:55.841949179 +0000 UTC m=+0.033195914 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:21:55 compute-0 systemd[1]: Started libpod-conmon-bc5d6d92ec3a0a2b5c390a178319ee43d6f37c175665a8521a34909d2b50192a.scope.
Nov 25 17:21:55 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:21:56 compute-0 podman[418400]: 2025-11-25 17:21:56.019865433 +0000 UTC m=+0.211112198 container init bc5d6d92ec3a0a2b5c390a178319ee43d6f37c175665a8521a34909d2b50192a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_archimedes, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 17:21:56 compute-0 podman[418400]: 2025-11-25 17:21:56.028181099 +0000 UTC m=+0.219427834 container start bc5d6d92ec3a0a2b5c390a178319ee43d6f37c175665a8521a34909d2b50192a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 17:21:56 compute-0 podman[418400]: 2025-11-25 17:21:56.033552415 +0000 UTC m=+0.224799210 container attach bc5d6d92ec3a0a2b5c390a178319ee43d6f37c175665a8521a34909d2b50192a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:21:56 compute-0 awesome_archimedes[418416]: 167 167
Nov 25 17:21:56 compute-0 systemd[1]: libpod-bc5d6d92ec3a0a2b5c390a178319ee43d6f37c175665a8521a34909d2b50192a.scope: Deactivated successfully.
Nov 25 17:21:56 compute-0 podman[418400]: 2025-11-25 17:21:56.035269841 +0000 UTC m=+0.226516576 container died bc5d6d92ec3a0a2b5c390a178319ee43d6f37c175665a8521a34909d2b50192a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:21:56 compute-0 nova_compute[254092]: 2025-11-25 17:21:56.038 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-36b839e5-d6db-406a-ab95-bbdcd48c531d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:21:56 compute-0 nova_compute[254092]: 2025-11-25 17:21:56.040 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-36b839e5-d6db-406a-ab95-bbdcd48c531d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:21:56 compute-0 nova_compute[254092]: 2025-11-25 17:21:56.041 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 17:21:56 compute-0 nova_compute[254092]: 2025-11-25 17:21:56.041 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 36b839e5-d6db-406a-ab95-bbdcd48c531d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:21:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-92dae0f95f62a01c91f3ac4362ae96f5e6b737730e5bdda49616dcc5476a8033-merged.mount: Deactivated successfully.
Nov 25 17:21:56 compute-0 podman[418400]: 2025-11-25 17:21:56.083050812 +0000 UTC m=+0.274297507 container remove bc5d6d92ec3a0a2b5c390a178319ee43d6f37c175665a8521a34909d2b50192a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_archimedes, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:21:56 compute-0 systemd[1]: libpod-conmon-bc5d6d92ec3a0a2b5c390a178319ee43d6f37c175665a8521a34909d2b50192a.scope: Deactivated successfully.
Nov 25 17:21:56 compute-0 podman[418440]: 2025-11-25 17:21:56.293421268 +0000 UTC m=+0.061620438 container create 7c4d910f0adf015c8ec05d74faf5ad780ce536bd874a8b3c91ed6a336eab06fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_knuth, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 17:21:56 compute-0 systemd[1]: Started libpod-conmon-7c4d910f0adf015c8ec05d74faf5ad780ce536bd874a8b3c91ed6a336eab06fb.scope.
Nov 25 17:21:56 compute-0 podman[418440]: 2025-11-25 17:21:56.265297972 +0000 UTC m=+0.033497142 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:21:56 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:21:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4672b5c93b33313f82d9f902e284ac89e4338181dae7ec4b52e10c1e0255ec96/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:21:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4672b5c93b33313f82d9f902e284ac89e4338181dae7ec4b52e10c1e0255ec96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:21:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4672b5c93b33313f82d9f902e284ac89e4338181dae7ec4b52e10c1e0255ec96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:21:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4672b5c93b33313f82d9f902e284ac89e4338181dae7ec4b52e10c1e0255ec96/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:21:56 compute-0 podman[418440]: 2025-11-25 17:21:56.389566766 +0000 UTC m=+0.157765926 container init 7c4d910f0adf015c8ec05d74faf5ad780ce536bd874a8b3c91ed6a336eab06fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:21:56 compute-0 podman[418440]: 2025-11-25 17:21:56.398025386 +0000 UTC m=+0.166224546 container start 7c4d910f0adf015c8ec05d74faf5ad780ce536bd874a8b3c91ed6a336eab06fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_knuth, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 17:21:56 compute-0 podman[418440]: 2025-11-25 17:21:56.402703362 +0000 UTC m=+0.170902552 container attach 7c4d910f0adf015c8ec05d74faf5ad780ce536bd874a8b3c91ed6a336eab06fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_knuth, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:21:56 compute-0 ceph-mon[74985]: pgmap v2951: 321 pgs: 321 active+clean; 117 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Nov 25 17:21:56 compute-0 nova_compute[254092]: 2025-11-25 17:21:56.517 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:21:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2952: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 952 KiB/s rd, 2.1 MiB/s wr, 86 op/s
Nov 25 17:21:57 compute-0 inspiring_knuth[418456]: {
Nov 25 17:21:57 compute-0 inspiring_knuth[418456]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:21:57 compute-0 inspiring_knuth[418456]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:21:57 compute-0 inspiring_knuth[418456]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:21:57 compute-0 inspiring_knuth[418456]:         "osd_id": 1,
Nov 25 17:21:57 compute-0 inspiring_knuth[418456]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:21:57 compute-0 inspiring_knuth[418456]:         "type": "bluestore"
Nov 25 17:21:57 compute-0 inspiring_knuth[418456]:     },
Nov 25 17:21:57 compute-0 inspiring_knuth[418456]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:21:57 compute-0 inspiring_knuth[418456]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:21:57 compute-0 inspiring_knuth[418456]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:21:57 compute-0 inspiring_knuth[418456]:         "osd_id": 2,
Nov 25 17:21:57 compute-0 inspiring_knuth[418456]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:21:57 compute-0 inspiring_knuth[418456]:         "type": "bluestore"
Nov 25 17:21:57 compute-0 inspiring_knuth[418456]:     },
Nov 25 17:21:57 compute-0 inspiring_knuth[418456]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:21:57 compute-0 inspiring_knuth[418456]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:21:57 compute-0 inspiring_knuth[418456]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:21:57 compute-0 inspiring_knuth[418456]:         "osd_id": 0,
Nov 25 17:21:57 compute-0 inspiring_knuth[418456]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:21:57 compute-0 inspiring_knuth[418456]:         "type": "bluestore"
Nov 25 17:21:57 compute-0 inspiring_knuth[418456]:     }
Nov 25 17:21:57 compute-0 inspiring_knuth[418456]: }
Nov 25 17:21:57 compute-0 systemd[1]: libpod-7c4d910f0adf015c8ec05d74faf5ad780ce536bd874a8b3c91ed6a336eab06fb.scope: Deactivated successfully.
Nov 25 17:21:57 compute-0 systemd[1]: libpod-7c4d910f0adf015c8ec05d74faf5ad780ce536bd874a8b3c91ed6a336eab06fb.scope: Consumed 1.168s CPU time.
Nov 25 17:21:57 compute-0 conmon[418456]: conmon 7c4d910f0adf015c8ec0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7c4d910f0adf015c8ec05d74faf5ad780ce536bd874a8b3c91ed6a336eab06fb.scope/container/memory.events
Nov 25 17:21:57 compute-0 podman[418440]: 2025-11-25 17:21:57.564867686 +0000 UTC m=+1.333066866 container died 7c4d910f0adf015c8ec05d74faf5ad780ce536bd874a8b3c91ed6a336eab06fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 17:21:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-4672b5c93b33313f82d9f902e284ac89e4338181dae7ec4b52e10c1e0255ec96-merged.mount: Deactivated successfully.
Nov 25 17:21:57 compute-0 podman[418440]: 2025-11-25 17:21:57.637925144 +0000 UTC m=+1.406124284 container remove 7c4d910f0adf015c8ec05d74faf5ad780ce536bd874a8b3c91ed6a336eab06fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:21:57 compute-0 systemd[1]: libpod-conmon-7c4d910f0adf015c8ec05d74faf5ad780ce536bd874a8b3c91ed6a336eab06fb.scope: Deactivated successfully.
Nov 25 17:21:57 compute-0 sudo[418334]: pam_unix(sudo:session): session closed for user root
Nov 25 17:21:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:21:57 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:21:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:21:57 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:21:57 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev d6e90702-0f81-4bd7-8634-dd170be90ab5 does not exist
Nov 25 17:21:57 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 93360f85-3cd0-4faf-8092-c6dddb29d6b8 does not exist
Nov 25 17:21:57 compute-0 sudo[418501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:21:57 compute-0 sudo[418501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:21:57 compute-0 sudo[418501]: pam_unix(sudo:session): session closed for user root
Nov 25 17:21:57 compute-0 sudo[418526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:21:57 compute-0 sudo[418526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:21:57 compute-0 sudo[418526]: pam_unix(sudo:session): session closed for user root
Nov 25 17:21:58 compute-0 ceph-mon[74985]: pgmap v2952: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 952 KiB/s rd, 2.1 MiB/s wr, 86 op/s
Nov 25 17:21:58 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:21:58 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:21:58 compute-0 nova_compute[254092]: 2025-11-25 17:21:58.458 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Updating instance_info_cache with network_info: [{"id": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "address": "fa:16:3e:77:8a:a5", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ba1b56-3c", "ovs_interfaceid": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:21:58 compute-0 nova_compute[254092]: 2025-11-25 17:21:58.474 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-36b839e5-d6db-406a-ab95-bbdcd48c531d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:21:58 compute-0 nova_compute[254092]: 2025-11-25 17:21:58.474 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 17:21:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2953: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:21:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:21:59 compute-0 nova_compute[254092]: 2025-11-25 17:21:59.472 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:21:59 compute-0 nova_compute[254092]: 2025-11-25 17:21:59.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:21:59 compute-0 nova_compute[254092]: 2025-11-25 17:21:59.518 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:22:00 compute-0 ceph-mon[74985]: pgmap v2953: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:22:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2954: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:22:01 compute-0 nova_compute[254092]: 2025-11-25 17:22:01.521 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:02 compute-0 ceph-mon[74985]: pgmap v2954: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:22:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2955: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:22:04 compute-0 ceph-mon[74985]: pgmap v2955: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:22:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:22:04 compute-0 nova_compute[254092]: 2025-11-25 17:22:04.476 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2956: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:22:06 compute-0 ceph-mon[74985]: pgmap v2956: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:22:06 compute-0 nova_compute[254092]: 2025-11-25 17:22:06.560 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:06 compute-0 nova_compute[254092]: 2025-11-25 17:22:06.694 254096 DEBUG oslo_concurrency.lockutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "fc92e0f7-adfa-4591-bb62-8e875c423b6e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:22:06 compute-0 nova_compute[254092]: 2025-11-25 17:22:06.694 254096 DEBUG oslo_concurrency.lockutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "fc92e0f7-adfa-4591-bb62-8e875c423b6e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:22:06 compute-0 nova_compute[254092]: 2025-11-25 17:22:06.709 254096 DEBUG nova.compute.manager [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 17:22:06 compute-0 nova_compute[254092]: 2025-11-25 17:22:06.795 254096 DEBUG oslo_concurrency.lockutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:22:06 compute-0 nova_compute[254092]: 2025-11-25 17:22:06.796 254096 DEBUG oslo_concurrency.lockutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:22:06 compute-0 nova_compute[254092]: 2025-11-25 17:22:06.807 254096 DEBUG nova.virt.hardware [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 17:22:06 compute-0 nova_compute[254092]: 2025-11-25 17:22:06.807 254096 INFO nova.compute.claims [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Claim successful on node compute-0.ctlplane.example.com
Nov 25 17:22:06 compute-0 nova_compute[254092]: 2025-11-25 17:22:06.924 254096 DEBUG oslo_concurrency.processutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:22:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2957: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 160 KiB/s rd, 396 KiB/s wr, 32 op/s
Nov 25 17:22:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:22:07 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/855715510' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:22:07 compute-0 nova_compute[254092]: 2025-11-25 17:22:07.490 254096 DEBUG oslo_concurrency.processutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:22:07 compute-0 nova_compute[254092]: 2025-11-25 17:22:07.497 254096 DEBUG nova.compute.provider_tree [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:22:07 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/855715510' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:22:07 compute-0 nova_compute[254092]: 2025-11-25 17:22:07.513 254096 DEBUG nova.scheduler.client.report [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:22:07 compute-0 nova_compute[254092]: 2025-11-25 17:22:07.531 254096 DEBUG oslo_concurrency.lockutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:22:07 compute-0 nova_compute[254092]: 2025-11-25 17:22:07.532 254096 DEBUG nova.compute.manager [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 17:22:07 compute-0 nova_compute[254092]: 2025-11-25 17:22:07.594 254096 DEBUG nova.compute.manager [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 17:22:07 compute-0 nova_compute[254092]: 2025-11-25 17:22:07.594 254096 DEBUG nova.network.neutron [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 17:22:07 compute-0 nova_compute[254092]: 2025-11-25 17:22:07.614 254096 INFO nova.virt.libvirt.driver [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 17:22:07 compute-0 nova_compute[254092]: 2025-11-25 17:22:07.626 254096 DEBUG nova.compute.manager [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 17:22:07 compute-0 nova_compute[254092]: 2025-11-25 17:22:07.738 254096 DEBUG nova.compute.manager [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 17:22:07 compute-0 nova_compute[254092]: 2025-11-25 17:22:07.739 254096 DEBUG nova.virt.libvirt.driver [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 17:22:07 compute-0 nova_compute[254092]: 2025-11-25 17:22:07.740 254096 INFO nova.virt.libvirt.driver [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Creating image(s)
Nov 25 17:22:07 compute-0 nova_compute[254092]: 2025-11-25 17:22:07.773 254096 DEBUG nova.storage.rbd_utils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image fc92e0f7-adfa-4591-bb62-8e875c423b6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:22:07 compute-0 nova_compute[254092]: 2025-11-25 17:22:07.806 254096 DEBUG nova.storage.rbd_utils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image fc92e0f7-adfa-4591-bb62-8e875c423b6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:22:07 compute-0 nova_compute[254092]: 2025-11-25 17:22:07.838 254096 DEBUG nova.storage.rbd_utils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image fc92e0f7-adfa-4591-bb62-8e875c423b6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:22:07 compute-0 nova_compute[254092]: 2025-11-25 17:22:07.842 254096 DEBUG oslo_concurrency.processutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:22:07 compute-0 nova_compute[254092]: 2025-11-25 17:22:07.901 254096 DEBUG nova.policy [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a647bb22c9a54e5fa9ddfff588126693', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '67b7f8e8aac6441f992a182f8d893100', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 17:22:07 compute-0 nova_compute[254092]: 2025-11-25 17:22:07.945 254096 DEBUG oslo_concurrency.processutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:22:07 compute-0 nova_compute[254092]: 2025-11-25 17:22:07.946 254096 DEBUG oslo_concurrency.lockutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:22:07 compute-0 nova_compute[254092]: 2025-11-25 17:22:07.947 254096 DEBUG oslo_concurrency.lockutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:22:07 compute-0 nova_compute[254092]: 2025-11-25 17:22:07.947 254096 DEBUG oslo_concurrency.lockutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:22:07 compute-0 nova_compute[254092]: 2025-11-25 17:22:07.970 254096 DEBUG nova.storage.rbd_utils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image fc92e0f7-adfa-4591-bb62-8e875c423b6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:22:07 compute-0 nova_compute[254092]: 2025-11-25 17:22:07.974 254096 DEBUG oslo_concurrency.processutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 fc92e0f7-adfa-4591-bb62-8e875c423b6e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:22:08 compute-0 nova_compute[254092]: 2025-11-25 17:22:08.466 254096 DEBUG oslo_concurrency.processutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 fc92e0f7-adfa-4591-bb62-8e875c423b6e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:22:08 compute-0 ceph-mon[74985]: pgmap v2957: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 160 KiB/s rd, 396 KiB/s wr, 32 op/s
Nov 25 17:22:08 compute-0 nova_compute[254092]: 2025-11-25 17:22:08.587 254096 DEBUG nova.storage.rbd_utils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] resizing rbd image fc92e0f7-adfa-4591-bb62-8e875c423b6e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 17:22:08 compute-0 nova_compute[254092]: 2025-11-25 17:22:08.707 254096 DEBUG nova.objects.instance [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'migration_context' on Instance uuid fc92e0f7-adfa-4591-bb62-8e875c423b6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:22:08 compute-0 nova_compute[254092]: 2025-11-25 17:22:08.726 254096 DEBUG nova.virt.libvirt.driver [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 17:22:08 compute-0 nova_compute[254092]: 2025-11-25 17:22:08.727 254096 DEBUG nova.virt.libvirt.driver [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Ensure instance console log exists: /var/lib/nova/instances/fc92e0f7-adfa-4591-bb62-8e875c423b6e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 17:22:08 compute-0 nova_compute[254092]: 2025-11-25 17:22:08.728 254096 DEBUG oslo_concurrency.lockutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:22:08 compute-0 nova_compute[254092]: 2025-11-25 17:22:08.728 254096 DEBUG oslo_concurrency.lockutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:22:08 compute-0 nova_compute[254092]: 2025-11-25 17:22:08.728 254096 DEBUG oslo_concurrency.lockutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:22:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2958: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.3 KiB/s rd, 12 KiB/s wr, 0 op/s
Nov 25 17:22:09 compute-0 nova_compute[254092]: 2025-11-25 17:22:09.188 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:22:09.187 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=60, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=59) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:22:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:22:09.191 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 17:22:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:22:09.192 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '60'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:22:09 compute-0 nova_compute[254092]: 2025-11-25 17:22:09.311 254096 DEBUG nova.network.neutron [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Successfully created port: 6f0f388f-a1e7-4172-912a-ee02487d9833 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 17:22:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:22:09 compute-0 nova_compute[254092]: 2025-11-25 17:22:09.479 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:09 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #141. Immutable memtables: 0.
Nov 25 17:22:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:22:09.480004) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 17:22:09 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 141
Nov 25 17:22:09 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091329480072, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 534, "num_deletes": 257, "total_data_size": 496584, "memory_usage": 508232, "flush_reason": "Manual Compaction"}
Nov 25 17:22:09 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #142: started
Nov 25 17:22:09 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091329485806, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 142, "file_size": 492068, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61231, "largest_seqno": 61764, "table_properties": {"data_size": 489104, "index_size": 935, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6995, "raw_average_key_size": 18, "raw_value_size": 483074, "raw_average_value_size": 1288, "num_data_blocks": 41, "num_entries": 375, "num_filter_entries": 375, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764091299, "oldest_key_time": 1764091299, "file_creation_time": 1764091329, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 142, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:22:09 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 5840 microseconds, and 3344 cpu microseconds.
Nov 25 17:22:09 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:22:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:22:09.485855) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #142: 492068 bytes OK
Nov 25 17:22:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:22:09.485877) [db/memtable_list.cc:519] [default] Level-0 commit table #142 started
Nov 25 17:22:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:22:09.487423) [db/memtable_list.cc:722] [default] Level-0 commit table #142: memtable #1 done
Nov 25 17:22:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:22:09.487443) EVENT_LOG_v1 {"time_micros": 1764091329487436, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 17:22:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:22:09.487468) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 17:22:09 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 493501, prev total WAL file size 493501, number of live WAL files 2.
Nov 25 17:22:09 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000138.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:22:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:22:09.488097) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353134' seq:72057594037927935, type:22 .. '6C6F676D0032373637' seq:0, type:0; will stop at (end)
Nov 25 17:22:09 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 17:22:09 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [142(480KB)], [140(9783KB)]
Nov 25 17:22:09 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091329488141, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [142], "files_L6": [140], "score": -1, "input_data_size": 10510187, "oldest_snapshot_seqno": -1}
Nov 25 17:22:09 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #143: 8009 keys, 10392619 bytes, temperature: kUnknown
Nov 25 17:22:09 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091329568921, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 143, "file_size": 10392619, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10340659, "index_size": 30855, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20037, "raw_key_size": 210773, "raw_average_key_size": 26, "raw_value_size": 10199108, "raw_average_value_size": 1273, "num_data_blocks": 1199, "num_entries": 8009, "num_filter_entries": 8009, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764091329, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:22:09 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:22:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:22:09.569235) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 10392619 bytes
Nov 25 17:22:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:22:09.570561) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 129.9 rd, 128.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 9.6 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(42.5) write-amplify(21.1) OK, records in: 8535, records dropped: 526 output_compression: NoCompression
Nov 25 17:22:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:22:09.570589) EVENT_LOG_v1 {"time_micros": 1764091329570576, "job": 86, "event": "compaction_finished", "compaction_time_micros": 80893, "compaction_time_cpu_micros": 46704, "output_level": 6, "num_output_files": 1, "total_output_size": 10392619, "num_input_records": 8535, "num_output_records": 8009, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 17:22:09 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000142.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:22:09 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091329571131, "job": 86, "event": "table_file_deletion", "file_number": 142}
Nov 25 17:22:09 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:22:09 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091329574230, "job": 86, "event": "table_file_deletion", "file_number": 140}
Nov 25 17:22:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:22:09.487974) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:22:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:22:09.574297) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:22:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:22:09.574304) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:22:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:22:09.574306) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:22:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:22:09.574309) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:22:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:22:09.574311) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:22:10 compute-0 nova_compute[254092]: 2025-11-25 17:22:10.039 254096 DEBUG nova.network.neutron [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Successfully updated port: 6f0f388f-a1e7-4172-912a-ee02487d9833 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:22:10 compute-0 nova_compute[254092]: 2025-11-25 17:22:10.054 254096 DEBUG oslo_concurrency.lockutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "refresh_cache-fc92e0f7-adfa-4591-bb62-8e875c423b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:22:10 compute-0 nova_compute[254092]: 2025-11-25 17:22:10.055 254096 DEBUG oslo_concurrency.lockutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquired lock "refresh_cache-fc92e0f7-adfa-4591-bb62-8e875c423b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:22:10 compute-0 nova_compute[254092]: 2025-11-25 17:22:10.055 254096 DEBUG nova.network.neutron [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 17:22:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:22:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:22:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:22:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:22:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:22:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:22:10 compute-0 nova_compute[254092]: 2025-11-25 17:22:10.136 254096 DEBUG nova.compute.manager [req-4ff2f2d5-ce1c-4051-ae57-7f6a520570c2 req-7d580b38-9361-40a3-b243-d3e135673ade a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Received event network-changed-6f0f388f-a1e7-4172-912a-ee02487d9833 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:22:10 compute-0 nova_compute[254092]: 2025-11-25 17:22:10.137 254096 DEBUG nova.compute.manager [req-4ff2f2d5-ce1c-4051-ae57-7f6a520570c2 req-7d580b38-9361-40a3-b243-d3e135673ade a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Refreshing instance network info cache due to event network-changed-6f0f388f-a1e7-4172-912a-ee02487d9833. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:22:10 compute-0 nova_compute[254092]: 2025-11-25 17:22:10.137 254096 DEBUG oslo_concurrency.lockutils [req-4ff2f2d5-ce1c-4051-ae57-7f6a520570c2 req-7d580b38-9361-40a3-b243-d3e135673ade a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-fc92e0f7-adfa-4591-bb62-8e875c423b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:22:10 compute-0 nova_compute[254092]: 2025-11-25 17:22:10.237 254096 DEBUG nova.network.neutron [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:22:10 compute-0 ceph-mon[74985]: pgmap v2958: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.3 KiB/s rd, 12 KiB/s wr, 0 op/s
Nov 25 17:22:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2959: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:22:11 compute-0 nova_compute[254092]: 2025-11-25 17:22:11.562 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:11 compute-0 podman[418740]: 2025-11-25 17:22:11.668699079 +0000 UTC m=+0.074476448 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 17:22:11 compute-0 podman[418741]: 2025-11-25 17:22:11.714809084 +0000 UTC m=+0.117564031 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 17:22:11 compute-0 podman[418739]: 2025-11-25 17:22:11.715989817 +0000 UTC m=+0.122536387 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible)
Nov 25 17:22:11 compute-0 nova_compute[254092]: 2025-11-25 17:22:11.810 254096 DEBUG nova.network.neutron [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Updating instance_info_cache with network_info: [{"id": "6f0f388f-a1e7-4172-912a-ee02487d9833", "address": "fa:16:3e:49:53:45", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe49:5345", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe49:5345", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f0f388f-a1", "ovs_interfaceid": "6f0f388f-a1e7-4172-912a-ee02487d9833", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:22:11 compute-0 nova_compute[254092]: 2025-11-25 17:22:11.825 254096 DEBUG oslo_concurrency.lockutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Releasing lock "refresh_cache-fc92e0f7-adfa-4591-bb62-8e875c423b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:22:11 compute-0 nova_compute[254092]: 2025-11-25 17:22:11.826 254096 DEBUG nova.compute.manager [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Instance network_info: |[{"id": "6f0f388f-a1e7-4172-912a-ee02487d9833", "address": "fa:16:3e:49:53:45", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe49:5345", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe49:5345", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f0f388f-a1", "ovs_interfaceid": "6f0f388f-a1e7-4172-912a-ee02487d9833", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 17:22:11 compute-0 nova_compute[254092]: 2025-11-25 17:22:11.826 254096 DEBUG oslo_concurrency.lockutils [req-4ff2f2d5-ce1c-4051-ae57-7f6a520570c2 req-7d580b38-9361-40a3-b243-d3e135673ade a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-fc92e0f7-adfa-4591-bb62-8e875c423b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:22:11 compute-0 nova_compute[254092]: 2025-11-25 17:22:11.826 254096 DEBUG nova.network.neutron [req-4ff2f2d5-ce1c-4051-ae57-7f6a520570c2 req-7d580b38-9361-40a3-b243-d3e135673ade a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Refreshing network info cache for port 6f0f388f-a1e7-4172-912a-ee02487d9833 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:22:11 compute-0 nova_compute[254092]: 2025-11-25 17:22:11.829 254096 DEBUG nova.virt.libvirt.driver [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Start _get_guest_xml network_info=[{"id": "6f0f388f-a1e7-4172-912a-ee02487d9833", "address": "fa:16:3e:49:53:45", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe49:5345", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe49:5345", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f0f388f-a1", "ovs_interfaceid": "6f0f388f-a1e7-4172-912a-ee02487d9833", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 17:22:11 compute-0 nova_compute[254092]: 2025-11-25 17:22:11.834 254096 WARNING nova.virt.libvirt.driver [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:22:11 compute-0 nova_compute[254092]: 2025-11-25 17:22:11.843 254096 DEBUG nova.virt.libvirt.host [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 17:22:11 compute-0 nova_compute[254092]: 2025-11-25 17:22:11.844 254096 DEBUG nova.virt.libvirt.host [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 17:22:11 compute-0 nova_compute[254092]: 2025-11-25 17:22:11.847 254096 DEBUG nova.virt.libvirt.host [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 17:22:11 compute-0 nova_compute[254092]: 2025-11-25 17:22:11.848 254096 DEBUG nova.virt.libvirt.host [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 17:22:11 compute-0 nova_compute[254092]: 2025-11-25 17:22:11.849 254096 DEBUG nova.virt.libvirt.driver [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 17:22:11 compute-0 nova_compute[254092]: 2025-11-25 17:22:11.849 254096 DEBUG nova.virt.hardware [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 17:22:11 compute-0 nova_compute[254092]: 2025-11-25 17:22:11.850 254096 DEBUG nova.virt.hardware [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 17:22:11 compute-0 nova_compute[254092]: 2025-11-25 17:22:11.850 254096 DEBUG nova.virt.hardware [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 17:22:11 compute-0 nova_compute[254092]: 2025-11-25 17:22:11.850 254096 DEBUG nova.virt.hardware [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 17:22:11 compute-0 nova_compute[254092]: 2025-11-25 17:22:11.851 254096 DEBUG nova.virt.hardware [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 17:22:11 compute-0 nova_compute[254092]: 2025-11-25 17:22:11.851 254096 DEBUG nova.virt.hardware [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 17:22:11 compute-0 nova_compute[254092]: 2025-11-25 17:22:11.851 254096 DEBUG nova.virt.hardware [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 17:22:11 compute-0 nova_compute[254092]: 2025-11-25 17:22:11.851 254096 DEBUG nova.virt.hardware [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 17:22:11 compute-0 nova_compute[254092]: 2025-11-25 17:22:11.852 254096 DEBUG nova.virt.hardware [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 17:22:11 compute-0 nova_compute[254092]: 2025-11-25 17:22:11.852 254096 DEBUG nova.virt.hardware [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 17:22:11 compute-0 nova_compute[254092]: 2025-11-25 17:22:11.852 254096 DEBUG nova.virt.hardware [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 17:22:11 compute-0 nova_compute[254092]: 2025-11-25 17:22:11.856 254096 DEBUG oslo_concurrency.processutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:22:12 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:22:12 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3248647259' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:22:12 compute-0 nova_compute[254092]: 2025-11-25 17:22:12.296 254096 DEBUG oslo_concurrency.processutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:22:12 compute-0 nova_compute[254092]: 2025-11-25 17:22:12.331 254096 DEBUG nova.storage.rbd_utils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image fc92e0f7-adfa-4591-bb62-8e875c423b6e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:22:12 compute-0 nova_compute[254092]: 2025-11-25 17:22:12.337 254096 DEBUG oslo_concurrency.processutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:22:12 compute-0 ceph-mon[74985]: pgmap v2959: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:22:12 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3248647259' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:22:12 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:22:12 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2238586271' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:22:12 compute-0 nova_compute[254092]: 2025-11-25 17:22:12.763 254096 DEBUG oslo_concurrency.processutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:22:12 compute-0 nova_compute[254092]: 2025-11-25 17:22:12.767 254096 DEBUG nova.virt.libvirt.vif [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:22:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1446051764',display_name='tempest-TestGettingAddress-server-1446051764',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1446051764',id=146,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIRhAmR3DOLtGAzZ0xNgg5fZOngalELipYPxditOta+vsl4USGsDMHdwACzWpYO9hL+8kf+OvOhS+QLbsXFPxV626VeNASOG72UHufjWxf78foL9bNoVwzZLv4H5DTuJTA==',key_name='tempest-TestGettingAddress-910447973',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-m99ax5ho',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:22:07Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=fc92e0f7-adfa-4591-bb62-8e875c423b6e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6f0f388f-a1e7-4172-912a-ee02487d9833", "address": "fa:16:3e:49:53:45", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe49:5345", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe49:5345", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f0f388f-a1", "ovs_interfaceid": "6f0f388f-a1e7-4172-912a-ee02487d9833", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:22:12 compute-0 nova_compute[254092]: 2025-11-25 17:22:12.768 254096 DEBUG nova.network.os_vif_util [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "6f0f388f-a1e7-4172-912a-ee02487d9833", "address": "fa:16:3e:49:53:45", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe49:5345", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe49:5345", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f0f388f-a1", "ovs_interfaceid": "6f0f388f-a1e7-4172-912a-ee02487d9833", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:22:12 compute-0 nova_compute[254092]: 2025-11-25 17:22:12.770 254096 DEBUG nova.network.os_vif_util [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:53:45,bridge_name='br-int',has_traffic_filtering=True,id=6f0f388f-a1e7-4172-912a-ee02487d9833,network=Network(84058e12-5c2c-4ee6-a8bb-052eff4cc252),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f0f388f-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:22:12 compute-0 nova_compute[254092]: 2025-11-25 17:22:12.773 254096 DEBUG nova.objects.instance [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'pci_devices' on Instance uuid fc92e0f7-adfa-4591-bb62-8e875c423b6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:22:12 compute-0 nova_compute[254092]: 2025-11-25 17:22:12.790 254096 DEBUG nova.virt.libvirt.driver [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] End _get_guest_xml xml=<domain type="kvm">
Nov 25 17:22:12 compute-0 nova_compute[254092]:   <uuid>fc92e0f7-adfa-4591-bb62-8e875c423b6e</uuid>
Nov 25 17:22:12 compute-0 nova_compute[254092]:   <name>instance-00000092</name>
Nov 25 17:22:12 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 17:22:12 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 17:22:12 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:22:12 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:       <nova:name>tempest-TestGettingAddress-server-1446051764</nova:name>
Nov 25 17:22:12 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 17:22:11</nova:creationTime>
Nov 25 17:22:12 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 17:22:12 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 17:22:12 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 17:22:12 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 17:22:12 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:22:12 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 17:22:12 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 17:22:12 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 17:22:12 compute-0 nova_compute[254092]:         <nova:user uuid="a647bb22c9a54e5fa9ddfff588126693">tempest-TestGettingAddress-207202764-project-member</nova:user>
Nov 25 17:22:12 compute-0 nova_compute[254092]:         <nova:project uuid="67b7f8e8aac6441f992a182f8d893100">tempest-TestGettingAddress-207202764</nova:project>
Nov 25 17:22:12 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 17:22:12 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 17:22:12 compute-0 nova_compute[254092]:         <nova:port uuid="6f0f388f-a1e7-4172-912a-ee02487d9833">
Nov 25 17:22:12 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe49:5345" ipVersion="6"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe49:5345" ipVersion="6"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 17:22:12 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 17:22:12 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:22:12 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 17:22:12 compute-0 nova_compute[254092]:     <system>
Nov 25 17:22:12 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 17:22:12 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 17:22:12 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:22:12 compute-0 nova_compute[254092]:       <entry name="serial">fc92e0f7-adfa-4591-bb62-8e875c423b6e</entry>
Nov 25 17:22:12 compute-0 nova_compute[254092]:       <entry name="uuid">fc92e0f7-adfa-4591-bb62-8e875c423b6e</entry>
Nov 25 17:22:12 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     </system>
Nov 25 17:22:12 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:22:12 compute-0 nova_compute[254092]:   <os>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:   </os>
Nov 25 17:22:12 compute-0 nova_compute[254092]:   <features>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:   </features>
Nov 25 17:22:12 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 17:22:12 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:22:12 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 17:22:12 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:22:12 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 17:22:12 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/fc92e0f7-adfa-4591-bb62-8e875c423b6e_disk">
Nov 25 17:22:12 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:       </source>
Nov 25 17:22:12 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:22:12 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:22:12 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 17:22:12 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/fc92e0f7-adfa-4591-bb62-8e875c423b6e_disk.config">
Nov 25 17:22:12 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:       </source>
Nov 25 17:22:12 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:22:12 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:22:12 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 17:22:12 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:49:53:45"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:       <target dev="tap6f0f388f-a1"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 17:22:12 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/fc92e0f7-adfa-4591-bb62-8e875c423b6e/console.log" append="off"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     <video>
Nov 25 17:22:12 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     </video>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 17:22:12 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 17:22:12 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 17:22:12 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:22:12 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:22:12 compute-0 nova_compute[254092]: </domain>
Nov 25 17:22:12 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 17:22:12 compute-0 nova_compute[254092]: 2025-11-25 17:22:12.793 254096 DEBUG nova.compute.manager [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Preparing to wait for external event network-vif-plugged-6f0f388f-a1e7-4172-912a-ee02487d9833 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 17:22:12 compute-0 nova_compute[254092]: 2025-11-25 17:22:12.794 254096 DEBUG oslo_concurrency.lockutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "fc92e0f7-adfa-4591-bb62-8e875c423b6e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:22:12 compute-0 nova_compute[254092]: 2025-11-25 17:22:12.794 254096 DEBUG oslo_concurrency.lockutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "fc92e0f7-adfa-4591-bb62-8e875c423b6e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:22:12 compute-0 nova_compute[254092]: 2025-11-25 17:22:12.795 254096 DEBUG oslo_concurrency.lockutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "fc92e0f7-adfa-4591-bb62-8e875c423b6e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:22:12 compute-0 nova_compute[254092]: 2025-11-25 17:22:12.796 254096 DEBUG nova.virt.libvirt.vif [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:22:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1446051764',display_name='tempest-TestGettingAddress-server-1446051764',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1446051764',id=146,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIRhAmR3DOLtGAzZ0xNgg5fZOngalELipYPxditOta+vsl4USGsDMHdwACzWpYO9hL+8kf+OvOhS+QLbsXFPxV626VeNASOG72UHufjWxf78foL9bNoVwzZLv4H5DTuJTA==',key_name='tempest-TestGettingAddress-910447973',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-m99ax5ho',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:22:07Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=fc92e0f7-adfa-4591-bb62-8e875c423b6e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6f0f388f-a1e7-4172-912a-ee02487d9833", "address": "fa:16:3e:49:53:45", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe49:5345", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe49:5345", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f0f388f-a1", "ovs_interfaceid": "6f0f388f-a1e7-4172-912a-ee02487d9833", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:22:12 compute-0 nova_compute[254092]: 2025-11-25 17:22:12.797 254096 DEBUG nova.network.os_vif_util [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "6f0f388f-a1e7-4172-912a-ee02487d9833", "address": "fa:16:3e:49:53:45", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe49:5345", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe49:5345", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f0f388f-a1", "ovs_interfaceid": "6f0f388f-a1e7-4172-912a-ee02487d9833", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:22:12 compute-0 nova_compute[254092]: 2025-11-25 17:22:12.799 254096 DEBUG nova.network.os_vif_util [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:53:45,bridge_name='br-int',has_traffic_filtering=True,id=6f0f388f-a1e7-4172-912a-ee02487d9833,network=Network(84058e12-5c2c-4ee6-a8bb-052eff4cc252),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f0f388f-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:22:12 compute-0 nova_compute[254092]: 2025-11-25 17:22:12.800 254096 DEBUG os_vif [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:53:45,bridge_name='br-int',has_traffic_filtering=True,id=6f0f388f-a1e7-4172-912a-ee02487d9833,network=Network(84058e12-5c2c-4ee6-a8bb-052eff4cc252),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f0f388f-a1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:22:12 compute-0 nova_compute[254092]: 2025-11-25 17:22:12.801 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:12 compute-0 nova_compute[254092]: 2025-11-25 17:22:12.802 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:22:12 compute-0 nova_compute[254092]: 2025-11-25 17:22:12.803 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:22:12 compute-0 nova_compute[254092]: 2025-11-25 17:22:12.810 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:12 compute-0 nova_compute[254092]: 2025-11-25 17:22:12.810 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6f0f388f-a1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:22:12 compute-0 nova_compute[254092]: 2025-11-25 17:22:12.811 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6f0f388f-a1, col_values=(('external_ids', {'iface-id': '6f0f388f-a1e7-4172-912a-ee02487d9833', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:49:53:45', 'vm-uuid': 'fc92e0f7-adfa-4591-bb62-8e875c423b6e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:22:12 compute-0 NetworkManager[48891]: <info>  [1764091332.8154] manager: (tap6f0f388f-a1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/654)
Nov 25 17:22:12 compute-0 nova_compute[254092]: 2025-11-25 17:22:12.816 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:12 compute-0 nova_compute[254092]: 2025-11-25 17:22:12.825 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:12 compute-0 nova_compute[254092]: 2025-11-25 17:22:12.827 254096 INFO os_vif [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:53:45,bridge_name='br-int',has_traffic_filtering=True,id=6f0f388f-a1e7-4172-912a-ee02487d9833,network=Network(84058e12-5c2c-4ee6-a8bb-052eff4cc252),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f0f388f-a1')
Nov 25 17:22:12 compute-0 nova_compute[254092]: 2025-11-25 17:22:12.876 254096 DEBUG nova.virt.libvirt.driver [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:22:12 compute-0 nova_compute[254092]: 2025-11-25 17:22:12.877 254096 DEBUG nova.virt.libvirt.driver [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:22:12 compute-0 nova_compute[254092]: 2025-11-25 17:22:12.877 254096 DEBUG nova.virt.libvirt.driver [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No VIF found with MAC fa:16:3e:49:53:45, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:22:12 compute-0 nova_compute[254092]: 2025-11-25 17:22:12.878 254096 INFO nova.virt.libvirt.driver [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Using config drive
Nov 25 17:22:12 compute-0 nova_compute[254092]: 2025-11-25 17:22:12.911 254096 DEBUG nova.storage.rbd_utils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image fc92e0f7-adfa-4591-bb62-8e875c423b6e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:22:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2960: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:22:13 compute-0 nova_compute[254092]: 2025-11-25 17:22:13.366 254096 INFO nova.virt.libvirt.driver [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Creating config drive at /var/lib/nova/instances/fc92e0f7-adfa-4591-bb62-8e875c423b6e/disk.config
Nov 25 17:22:13 compute-0 nova_compute[254092]: 2025-11-25 17:22:13.376 254096 DEBUG oslo_concurrency.processutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fc92e0f7-adfa-4591-bb62-8e875c423b6e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyjm6nzjm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:22:13 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2238586271' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:22:13 compute-0 nova_compute[254092]: 2025-11-25 17:22:13.533 254096 DEBUG oslo_concurrency.processutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fc92e0f7-adfa-4591-bb62-8e875c423b6e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyjm6nzjm" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:22:13 compute-0 nova_compute[254092]: 2025-11-25 17:22:13.570 254096 DEBUG nova.storage.rbd_utils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image fc92e0f7-adfa-4591-bb62-8e875c423b6e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:22:13 compute-0 nova_compute[254092]: 2025-11-25 17:22:13.575 254096 DEBUG oslo_concurrency.processutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fc92e0f7-adfa-4591-bb62-8e875c423b6e/disk.config fc92e0f7-adfa-4591-bb62-8e875c423b6e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:22:13 compute-0 nova_compute[254092]: 2025-11-25 17:22:13.642 254096 DEBUG nova.network.neutron [req-4ff2f2d5-ce1c-4051-ae57-7f6a520570c2 req-7d580b38-9361-40a3-b243-d3e135673ade a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Updated VIF entry in instance network info cache for port 6f0f388f-a1e7-4172-912a-ee02487d9833. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:22:13 compute-0 nova_compute[254092]: 2025-11-25 17:22:13.643 254096 DEBUG nova.network.neutron [req-4ff2f2d5-ce1c-4051-ae57-7f6a520570c2 req-7d580b38-9361-40a3-b243-d3e135673ade a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Updating instance_info_cache with network_info: [{"id": "6f0f388f-a1e7-4172-912a-ee02487d9833", "address": "fa:16:3e:49:53:45", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe49:5345", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe49:5345", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f0f388f-a1", "ovs_interfaceid": "6f0f388f-a1e7-4172-912a-ee02487d9833", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:22:13 compute-0 nova_compute[254092]: 2025-11-25 17:22:13.657 254096 DEBUG oslo_concurrency.lockutils [req-4ff2f2d5-ce1c-4051-ae57-7f6a520570c2 req-7d580b38-9361-40a3-b243-d3e135673ade a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-fc92e0f7-adfa-4591-bb62-8e875c423b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:22:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:22:13.660 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:22:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:22:13.661 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:22:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:22:13.662 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:22:13 compute-0 nova_compute[254092]: 2025-11-25 17:22:13.754 254096 DEBUG oslo_concurrency.processutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fc92e0f7-adfa-4591-bb62-8e875c423b6e/disk.config fc92e0f7-adfa-4591-bb62-8e875c423b6e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:22:13 compute-0 nova_compute[254092]: 2025-11-25 17:22:13.755 254096 INFO nova.virt.libvirt.driver [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Deleting local config drive /var/lib/nova/instances/fc92e0f7-adfa-4591-bb62-8e875c423b6e/disk.config because it was imported into RBD.
Nov 25 17:22:13 compute-0 kernel: tap6f0f388f-a1: entered promiscuous mode
Nov 25 17:22:13 compute-0 NetworkManager[48891]: <info>  [1764091333.8439] manager: (tap6f0f388f-a1): new Tun device (/org/freedesktop/NetworkManager/Devices/655)
Nov 25 17:22:13 compute-0 ovn_controller[153477]: 2025-11-25T17:22:13Z|01573|binding|INFO|Claiming lport 6f0f388f-a1e7-4172-912a-ee02487d9833 for this chassis.
Nov 25 17:22:13 compute-0 ovn_controller[153477]: 2025-11-25T17:22:13Z|01574|binding|INFO|6f0f388f-a1e7-4172-912a-ee02487d9833: Claiming fa:16:3e:49:53:45 10.100.0.6 2001:db8:0:1:f816:3eff:fe49:5345 2001:db8::f816:3eff:fe49:5345
Nov 25 17:22:13 compute-0 nova_compute[254092]: 2025-11-25 17:22:13.891 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:13 compute-0 systemd-udevd[418931]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:22:13 compute-0 nova_compute[254092]: 2025-11-25 17:22:13.915 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:13 compute-0 ovn_controller[153477]: 2025-11-25T17:22:13Z|01575|binding|INFO|Setting lport 6f0f388f-a1e7-4172-912a-ee02487d9833 ovn-installed in OVS
Nov 25 17:22:13 compute-0 ovn_controller[153477]: 2025-11-25T17:22:13Z|01576|binding|INFO|Setting lport 6f0f388f-a1e7-4172-912a-ee02487d9833 up in Southbound
Nov 25 17:22:13 compute-0 nova_compute[254092]: 2025-11-25 17:22:13.917 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:13 compute-0 NetworkManager[48891]: <info>  [1764091333.9190] device (tap6f0f388f-a1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:22:13 compute-0 NetworkManager[48891]: <info>  [1764091333.9204] device (tap6f0f388f-a1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:22:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:22:13.918 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:53:45 10.100.0.6 2001:db8:0:1:f816:3eff:fe49:5345 2001:db8::f816:3eff:fe49:5345'], port_security=['fa:16:3e:49:53:45 10.100.0.6 2001:db8:0:1:f816:3eff:fe49:5345 2001:db8::f816:3eff:fe49:5345'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28 2001:db8:0:1:f816:3eff:fe49:5345/64 2001:db8::f816:3eff:fe49:5345/64', 'neutron:device_id': 'fc92e0f7-adfa-4591-bb62-8e875c423b6e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84058e12-5c2c-4ee6-a8bb-052eff4cc252', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fb9b5bad-6662-478f-ad54-33baab50ad17', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4dc08b4-4174-48fd-8366-1bed885d47d6, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=6f0f388f-a1e7-4172-912a-ee02487d9833) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:22:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:22:13.921 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 6f0f388f-a1e7-4172-912a-ee02487d9833 in datapath 84058e12-5c2c-4ee6-a8bb-052eff4cc252 bound to our chassis
Nov 25 17:22:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:22:13.923 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 84058e12-5c2c-4ee6-a8bb-052eff4cc252
Nov 25 17:22:13 compute-0 systemd-machined[216343]: New machine qemu-180-instance-00000092.
Nov 25 17:22:13 compute-0 systemd[1]: Started Virtual Machine qemu-180-instance-00000092.
Nov 25 17:22:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:22:13.947 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3814c4cb-c0d1-4d06-ad27-fed1092483e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:22:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:22:14.004 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e054af91-8d78-4981-9f88-02b0ef08d2eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:22:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:22:14.009 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6000075d-e0be-491c-ab0c-380f3f7dbb43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:22:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:22:14.059 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[799a39e3-645f-4dc6-8ac6-0be9e7027a3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:22:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:22:14.081 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f96c3dff-b5d4-47d9-804a-e32a73c5b101]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84058e12-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:fa:d8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 23, 'tx_packets': 5, 'rx_bytes': 1930, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 23, 'tx_packets': 5, 'rx_bytes': 1930, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 456], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 775826, 'reachable_time': 37573, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 21, 'inoctets': 1552, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 21, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1552, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 21, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 418948, 'error': None, 'target': 'ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:22:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:22:14.109 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[38bef246-25d1-4502-9b8a-7c254d564490]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap84058e12-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 775839, 'tstamp': 775839}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 418949, 'error': None, 'target': 'ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap84058e12-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 775844, 'tstamp': 775844}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 418949, 'error': None, 'target': 'ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:22:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:22:14.112 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84058e12-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:22:14 compute-0 nova_compute[254092]: 2025-11-25 17:22:14.114 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:14 compute-0 nova_compute[254092]: 2025-11-25 17:22:14.116 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:22:14.119 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84058e12-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:22:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:22:14.120 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:22:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:22:14.121 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap84058e12-50, col_values=(('external_ids', {'iface-id': 'f8afc421-e45b-4911-af18-dd32853c6b8c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:22:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:22:14.122 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:22:14 compute-0 nova_compute[254092]: 2025-11-25 17:22:14.123 254096 DEBUG nova.compute.manager [req-43d7eb4a-7c4b-4be9-89e9-5855749a0509 req-8a656e54-6949-42a3-8598-28ba386a649a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Received event network-vif-plugged-6f0f388f-a1e7-4172-912a-ee02487d9833 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:22:14 compute-0 nova_compute[254092]: 2025-11-25 17:22:14.124 254096 DEBUG oslo_concurrency.lockutils [req-43d7eb4a-7c4b-4be9-89e9-5855749a0509 req-8a656e54-6949-42a3-8598-28ba386a649a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "fc92e0f7-adfa-4591-bb62-8e875c423b6e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:22:14 compute-0 nova_compute[254092]: 2025-11-25 17:22:14.125 254096 DEBUG oslo_concurrency.lockutils [req-43d7eb4a-7c4b-4be9-89e9-5855749a0509 req-8a656e54-6949-42a3-8598-28ba386a649a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fc92e0f7-adfa-4591-bb62-8e875c423b6e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:22:14 compute-0 nova_compute[254092]: 2025-11-25 17:22:14.125 254096 DEBUG oslo_concurrency.lockutils [req-43d7eb4a-7c4b-4be9-89e9-5855749a0509 req-8a656e54-6949-42a3-8598-28ba386a649a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fc92e0f7-adfa-4591-bb62-8e875c423b6e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:22:14 compute-0 nova_compute[254092]: 2025-11-25 17:22:14.126 254096 DEBUG nova.compute.manager [req-43d7eb4a-7c4b-4be9-89e9-5855749a0509 req-8a656e54-6949-42a3-8598-28ba386a649a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Processing event network-vif-plugged-6f0f388f-a1e7-4172-912a-ee02487d9833 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 17:22:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:22:14 compute-0 ceph-mon[74985]: pgmap v2960: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:22:15 compute-0 nova_compute[254092]: 2025-11-25 17:22:15.055 254096 DEBUG nova.compute.manager [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 17:22:15 compute-0 nova_compute[254092]: 2025-11-25 17:22:15.056 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091335.0545356, fc92e0f7-adfa-4591-bb62-8e875c423b6e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:22:15 compute-0 nova_compute[254092]: 2025-11-25 17:22:15.057 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] VM Started (Lifecycle Event)
Nov 25 17:22:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2961: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Nov 25 17:22:15 compute-0 nova_compute[254092]: 2025-11-25 17:22:15.064 254096 DEBUG nova.virt.libvirt.driver [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 17:22:15 compute-0 nova_compute[254092]: 2025-11-25 17:22:15.069 254096 INFO nova.virt.libvirt.driver [-] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Instance spawned successfully.
Nov 25 17:22:15 compute-0 nova_compute[254092]: 2025-11-25 17:22:15.069 254096 DEBUG nova.virt.libvirt.driver [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 17:22:15 compute-0 nova_compute[254092]: 2025-11-25 17:22:15.083 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:22:15 compute-0 nova_compute[254092]: 2025-11-25 17:22:15.096 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:22:15 compute-0 nova_compute[254092]: 2025-11-25 17:22:15.104 254096 DEBUG nova.virt.libvirt.driver [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:22:15 compute-0 nova_compute[254092]: 2025-11-25 17:22:15.104 254096 DEBUG nova.virt.libvirt.driver [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:22:15 compute-0 nova_compute[254092]: 2025-11-25 17:22:15.105 254096 DEBUG nova.virt.libvirt.driver [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:22:15 compute-0 nova_compute[254092]: 2025-11-25 17:22:15.106 254096 DEBUG nova.virt.libvirt.driver [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:22:15 compute-0 nova_compute[254092]: 2025-11-25 17:22:15.106 254096 DEBUG nova.virt.libvirt.driver [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:22:15 compute-0 nova_compute[254092]: 2025-11-25 17:22:15.107 254096 DEBUG nova.virt.libvirt.driver [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:22:15 compute-0 nova_compute[254092]: 2025-11-25 17:22:15.122 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:22:15 compute-0 nova_compute[254092]: 2025-11-25 17:22:15.122 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091335.0604644, fc92e0f7-adfa-4591-bb62-8e875c423b6e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:22:15 compute-0 nova_compute[254092]: 2025-11-25 17:22:15.123 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] VM Paused (Lifecycle Event)
Nov 25 17:22:15 compute-0 nova_compute[254092]: 2025-11-25 17:22:15.158 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:22:15 compute-0 nova_compute[254092]: 2025-11-25 17:22:15.162 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091335.0635834, fc92e0f7-adfa-4591-bb62-8e875c423b6e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:22:15 compute-0 nova_compute[254092]: 2025-11-25 17:22:15.163 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] VM Resumed (Lifecycle Event)
Nov 25 17:22:15 compute-0 nova_compute[254092]: 2025-11-25 17:22:15.197 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:22:15 compute-0 nova_compute[254092]: 2025-11-25 17:22:15.201 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:22:15 compute-0 nova_compute[254092]: 2025-11-25 17:22:15.207 254096 INFO nova.compute.manager [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Took 7.47 seconds to spawn the instance on the hypervisor.
Nov 25 17:22:15 compute-0 nova_compute[254092]: 2025-11-25 17:22:15.207 254096 DEBUG nova.compute.manager [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:22:15 compute-0 nova_compute[254092]: 2025-11-25 17:22:15.218 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:22:15 compute-0 nova_compute[254092]: 2025-11-25 17:22:15.276 254096 INFO nova.compute.manager [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Took 8.51 seconds to build instance.
Nov 25 17:22:15 compute-0 nova_compute[254092]: 2025-11-25 17:22:15.292 254096 DEBUG oslo_concurrency.lockutils [None req-9528bab2-e38f-4f75-8863-406e2692dcb0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "fc92e0f7-adfa-4591-bb62-8e875c423b6e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:22:16 compute-0 nova_compute[254092]: 2025-11-25 17:22:16.235 254096 DEBUG nova.compute.manager [req-525f861f-541a-47b0-8f21-ce82126ac301 req-55217e88-0325-44fa-8a26-e9f8cb8f3ee4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Received event network-vif-plugged-6f0f388f-a1e7-4172-912a-ee02487d9833 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:22:16 compute-0 nova_compute[254092]: 2025-11-25 17:22:16.235 254096 DEBUG oslo_concurrency.lockutils [req-525f861f-541a-47b0-8f21-ce82126ac301 req-55217e88-0325-44fa-8a26-e9f8cb8f3ee4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "fc92e0f7-adfa-4591-bb62-8e875c423b6e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:22:16 compute-0 nova_compute[254092]: 2025-11-25 17:22:16.236 254096 DEBUG oslo_concurrency.lockutils [req-525f861f-541a-47b0-8f21-ce82126ac301 req-55217e88-0325-44fa-8a26-e9f8cb8f3ee4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fc92e0f7-adfa-4591-bb62-8e875c423b6e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:22:16 compute-0 nova_compute[254092]: 2025-11-25 17:22:16.236 254096 DEBUG oslo_concurrency.lockutils [req-525f861f-541a-47b0-8f21-ce82126ac301 req-55217e88-0325-44fa-8a26-e9f8cb8f3ee4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fc92e0f7-adfa-4591-bb62-8e875c423b6e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:22:16 compute-0 nova_compute[254092]: 2025-11-25 17:22:16.236 254096 DEBUG nova.compute.manager [req-525f861f-541a-47b0-8f21-ce82126ac301 req-55217e88-0325-44fa-8a26-e9f8cb8f3ee4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] No waiting events found dispatching network-vif-plugged-6f0f388f-a1e7-4172-912a-ee02487d9833 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:22:16 compute-0 nova_compute[254092]: 2025-11-25 17:22:16.236 254096 WARNING nova.compute.manager [req-525f861f-541a-47b0-8f21-ce82126ac301 req-55217e88-0325-44fa-8a26-e9f8cb8f3ee4 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Received unexpected event network-vif-plugged-6f0f388f-a1e7-4172-912a-ee02487d9833 for instance with vm_state active and task_state None.
Nov 25 17:22:16 compute-0 ceph-mon[74985]: pgmap v2961: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Nov 25 17:22:16 compute-0 nova_compute[254092]: 2025-11-25 17:22:16.565 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2962: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 253 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Nov 25 17:22:17 compute-0 nova_compute[254092]: 2025-11-25 17:22:17.814 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:18 compute-0 ceph-mon[74985]: pgmap v2962: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 253 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Nov 25 17:22:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2963: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 252 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Nov 25 17:22:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:22:20 compute-0 nova_compute[254092]: 2025-11-25 17:22:20.035 254096 DEBUG nova.compute.manager [req-9c3194a2-2f21-4498-8288-73dde41f7d52 req-90387fb5-0bbe-453e-b812-996031865503 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Received event network-changed-6f0f388f-a1e7-4172-912a-ee02487d9833 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:22:20 compute-0 nova_compute[254092]: 2025-11-25 17:22:20.036 254096 DEBUG nova.compute.manager [req-9c3194a2-2f21-4498-8288-73dde41f7d52 req-90387fb5-0bbe-453e-b812-996031865503 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Refreshing instance network info cache due to event network-changed-6f0f388f-a1e7-4172-912a-ee02487d9833. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:22:20 compute-0 nova_compute[254092]: 2025-11-25 17:22:20.036 254096 DEBUG oslo_concurrency.lockutils [req-9c3194a2-2f21-4498-8288-73dde41f7d52 req-90387fb5-0bbe-453e-b812-996031865503 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-fc92e0f7-adfa-4591-bb62-8e875c423b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:22:20 compute-0 nova_compute[254092]: 2025-11-25 17:22:20.037 254096 DEBUG oslo_concurrency.lockutils [req-9c3194a2-2f21-4498-8288-73dde41f7d52 req-90387fb5-0bbe-453e-b812-996031865503 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-fc92e0f7-adfa-4591-bb62-8e875c423b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:22:20 compute-0 nova_compute[254092]: 2025-11-25 17:22:20.038 254096 DEBUG nova.network.neutron [req-9c3194a2-2f21-4498-8288-73dde41f7d52 req-90387fb5-0bbe-453e-b812-996031865503 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Refreshing network info cache for port 6f0f388f-a1e7-4172-912a-ee02487d9833 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:22:20 compute-0 ceph-mon[74985]: pgmap v2963: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 252 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Nov 25 17:22:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2964: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 17:22:21 compute-0 nova_compute[254092]: 2025-11-25 17:22:21.567 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:22 compute-0 ceph-mon[74985]: pgmap v2964: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 17:22:22 compute-0 nova_compute[254092]: 2025-11-25 17:22:22.819 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2965: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:22:23 compute-0 nova_compute[254092]: 2025-11-25 17:22:23.499 254096 DEBUG nova.network.neutron [req-9c3194a2-2f21-4498-8288-73dde41f7d52 req-90387fb5-0bbe-453e-b812-996031865503 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Updated VIF entry in instance network info cache for port 6f0f388f-a1e7-4172-912a-ee02487d9833. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:22:23 compute-0 nova_compute[254092]: 2025-11-25 17:22:23.500 254096 DEBUG nova.network.neutron [req-9c3194a2-2f21-4498-8288-73dde41f7d52 req-90387fb5-0bbe-453e-b812-996031865503 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Updating instance_info_cache with network_info: [{"id": "6f0f388f-a1e7-4172-912a-ee02487d9833", "address": "fa:16:3e:49:53:45", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe49:5345", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe49:5345", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f0f388f-a1", "ovs_interfaceid": "6f0f388f-a1e7-4172-912a-ee02487d9833", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:22:23 compute-0 nova_compute[254092]: 2025-11-25 17:22:23.518 254096 DEBUG oslo_concurrency.lockutils [req-9c3194a2-2f21-4498-8288-73dde41f7d52 req-90387fb5-0bbe-453e-b812-996031865503 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-fc92e0f7-adfa-4591-bb62-8e875c423b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:22:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:22:24 compute-0 ceph-mon[74985]: pgmap v2965: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:22:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2966: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:22:26 compute-0 ceph-mon[74985]: pgmap v2966: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:22:26 compute-0 nova_compute[254092]: 2025-11-25 17:22:26.570 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2967: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Nov 25 17:22:27 compute-0 ceph-mon[74985]: pgmap v2967: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Nov 25 17:22:27 compute-0 nova_compute[254092]: 2025-11-25 17:22:27.821 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:28 compute-0 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Nov 25 17:22:28 compute-0 ovn_controller[153477]: 2025-11-25T17:22:28Z|00188|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:49:53:45 10.100.0.6
Nov 25 17:22:28 compute-0 ovn_controller[153477]: 2025-11-25T17:22:28Z|00189|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:49:53:45 10.100.0.6
Nov 25 17:22:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2968: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 56 op/s
Nov 25 17:22:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:22:30 compute-0 ceph-mon[74985]: pgmap v2968: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 56 op/s
Nov 25 17:22:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2969: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 120 op/s
Nov 25 17:22:31 compute-0 nova_compute[254092]: 2025-11-25 17:22:31.573 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:32 compute-0 ceph-mon[74985]: pgmap v2969: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 120 op/s
Nov 25 17:22:32 compute-0 nova_compute[254092]: 2025-11-25 17:22:32.861 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2970: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:22:34 compute-0 ceph-mon[74985]: pgmap v2970: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:22:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:22:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2971: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:22:36 compute-0 ceph-mon[74985]: pgmap v2971: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:22:36 compute-0 nova_compute[254092]: 2025-11-25 17:22:36.575 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2972: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:22:37 compute-0 nova_compute[254092]: 2025-11-25 17:22:37.863 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:38 compute-0 ceph-mon[74985]: pgmap v2972: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:22:38 compute-0 nova_compute[254092]: 2025-11-25 17:22:38.254 254096 DEBUG nova.compute.manager [req-006ec7bd-a1c6-4c7f-bc04-66b0c1530eb7 req-61c15ea1-f562-416c-9acb-75034e710054 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Received event network-changed-6f0f388f-a1e7-4172-912a-ee02487d9833 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:22:38 compute-0 nova_compute[254092]: 2025-11-25 17:22:38.255 254096 DEBUG nova.compute.manager [req-006ec7bd-a1c6-4c7f-bc04-66b0c1530eb7 req-61c15ea1-f562-416c-9acb-75034e710054 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Refreshing instance network info cache due to event network-changed-6f0f388f-a1e7-4172-912a-ee02487d9833. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:22:38 compute-0 nova_compute[254092]: 2025-11-25 17:22:38.255 254096 DEBUG oslo_concurrency.lockutils [req-006ec7bd-a1c6-4c7f-bc04-66b0c1530eb7 req-61c15ea1-f562-416c-9acb-75034e710054 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-fc92e0f7-adfa-4591-bb62-8e875c423b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:22:38 compute-0 nova_compute[254092]: 2025-11-25 17:22:38.256 254096 DEBUG oslo_concurrency.lockutils [req-006ec7bd-a1c6-4c7f-bc04-66b0c1530eb7 req-61c15ea1-f562-416c-9acb-75034e710054 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-fc92e0f7-adfa-4591-bb62-8e875c423b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:22:38 compute-0 nova_compute[254092]: 2025-11-25 17:22:38.256 254096 DEBUG nova.network.neutron [req-006ec7bd-a1c6-4c7f-bc04-66b0c1530eb7 req-61c15ea1-f562-416c-9acb-75034e710054 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Refreshing network info cache for port 6f0f388f-a1e7-4172-912a-ee02487d9833 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:22:38 compute-0 nova_compute[254092]: 2025-11-25 17:22:38.338 254096 DEBUG oslo_concurrency.lockutils [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "fc92e0f7-adfa-4591-bb62-8e875c423b6e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:22:38 compute-0 nova_compute[254092]: 2025-11-25 17:22:38.338 254096 DEBUG oslo_concurrency.lockutils [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "fc92e0f7-adfa-4591-bb62-8e875c423b6e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:22:38 compute-0 nova_compute[254092]: 2025-11-25 17:22:38.338 254096 DEBUG oslo_concurrency.lockutils [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "fc92e0f7-adfa-4591-bb62-8e875c423b6e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:22:38 compute-0 nova_compute[254092]: 2025-11-25 17:22:38.339 254096 DEBUG oslo_concurrency.lockutils [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "fc92e0f7-adfa-4591-bb62-8e875c423b6e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:22:38 compute-0 nova_compute[254092]: 2025-11-25 17:22:38.339 254096 DEBUG oslo_concurrency.lockutils [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "fc92e0f7-adfa-4591-bb62-8e875c423b6e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:22:38 compute-0 nova_compute[254092]: 2025-11-25 17:22:38.340 254096 INFO nova.compute.manager [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Terminating instance
Nov 25 17:22:38 compute-0 nova_compute[254092]: 2025-11-25 17:22:38.341 254096 DEBUG nova.compute.manager [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 17:22:38 compute-0 kernel: tap6f0f388f-a1 (unregistering): left promiscuous mode
Nov 25 17:22:38 compute-0 NetworkManager[48891]: <info>  [1764091358.4065] device (tap6f0f388f-a1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:22:38 compute-0 nova_compute[254092]: 2025-11-25 17:22:38.422 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:38 compute-0 ovn_controller[153477]: 2025-11-25T17:22:38Z|01577|binding|INFO|Releasing lport 6f0f388f-a1e7-4172-912a-ee02487d9833 from this chassis (sb_readonly=0)
Nov 25 17:22:38 compute-0 ovn_controller[153477]: 2025-11-25T17:22:38Z|01578|binding|INFO|Setting lport 6f0f388f-a1e7-4172-912a-ee02487d9833 down in Southbound
Nov 25 17:22:38 compute-0 ovn_controller[153477]: 2025-11-25T17:22:38Z|01579|binding|INFO|Removing iface tap6f0f388f-a1 ovn-installed in OVS
Nov 25 17:22:38 compute-0 nova_compute[254092]: 2025-11-25 17:22:38.428 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:22:38.444 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:53:45 10.100.0.6 2001:db8:0:1:f816:3eff:fe49:5345 2001:db8::f816:3eff:fe49:5345'], port_security=['fa:16:3e:49:53:45 10.100.0.6 2001:db8:0:1:f816:3eff:fe49:5345 2001:db8::f816:3eff:fe49:5345'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28 2001:db8:0:1:f816:3eff:fe49:5345/64 2001:db8::f816:3eff:fe49:5345/64', 'neutron:device_id': 'fc92e0f7-adfa-4591-bb62-8e875c423b6e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84058e12-5c2c-4ee6-a8bb-052eff4cc252', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fb9b5bad-6662-478f-ad54-33baab50ad17', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4dc08b4-4174-48fd-8366-1bed885d47d6, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=6f0f388f-a1e7-4172-912a-ee02487d9833) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:22:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:22:38.447 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 6f0f388f-a1e7-4172-912a-ee02487d9833 in datapath 84058e12-5c2c-4ee6-a8bb-052eff4cc252 unbound from our chassis
Nov 25 17:22:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:22:38.449 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 84058e12-5c2c-4ee6-a8bb-052eff4cc252
Nov 25 17:22:38 compute-0 nova_compute[254092]: 2025-11-25 17:22:38.456 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:22:38.480 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ac17a4a3-e66b-4393-bb20-b17293ef4fe1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:22:38 compute-0 systemd[1]: machine-qemu\x2d180\x2dinstance\x2d00000092.scope: Deactivated successfully.
Nov 25 17:22:38 compute-0 systemd[1]: machine-qemu\x2d180\x2dinstance\x2d00000092.scope: Consumed 15.139s CPU time.
Nov 25 17:22:38 compute-0 systemd-machined[216343]: Machine qemu-180-instance-00000092 terminated.
Nov 25 17:22:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:22:38.536 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[18dd4376-4e16-4844-a217-0a75315416cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:22:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:22:38.540 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[0c2033b0-712d-4532-9e8f-f4bf03aa19b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:22:38 compute-0 nova_compute[254092]: 2025-11-25 17:22:38.605 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:38 compute-0 nova_compute[254092]: 2025-11-25 17:22:38.611 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:22:38.612 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[10577990-a28f-4495-b370-867f55e9eb86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:22:38 compute-0 nova_compute[254092]: 2025-11-25 17:22:38.619 254096 INFO nova.virt.libvirt.driver [-] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Instance destroyed successfully.
Nov 25 17:22:38 compute-0 nova_compute[254092]: 2025-11-25 17:22:38.620 254096 DEBUG nova.objects.instance [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'resources' on Instance uuid fc92e0f7-adfa-4591-bb62-8e875c423b6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:22:38 compute-0 nova_compute[254092]: 2025-11-25 17:22:38.629 254096 DEBUG nova.virt.libvirt.vif [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:22:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1446051764',display_name='tempest-TestGettingAddress-server-1446051764',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1446051764',id=146,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIRhAmR3DOLtGAzZ0xNgg5fZOngalELipYPxditOta+vsl4USGsDMHdwACzWpYO9hL+8kf+OvOhS+QLbsXFPxV626VeNASOG72UHufjWxf78foL9bNoVwzZLv4H5DTuJTA==',key_name='tempest-TestGettingAddress-910447973',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:22:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-m99ax5ho',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:22:15Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=fc92e0f7-adfa-4591-bb62-8e875c423b6e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6f0f388f-a1e7-4172-912a-ee02487d9833", "address": "fa:16:3e:49:53:45", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe49:5345", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe49:5345", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f0f388f-a1", "ovs_interfaceid": "6f0f388f-a1e7-4172-912a-ee02487d9833", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:22:38 compute-0 nova_compute[254092]: 2025-11-25 17:22:38.629 254096 DEBUG nova.network.os_vif_util [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "6f0f388f-a1e7-4172-912a-ee02487d9833", "address": "fa:16:3e:49:53:45", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe49:5345", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe49:5345", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f0f388f-a1", "ovs_interfaceid": "6f0f388f-a1e7-4172-912a-ee02487d9833", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:22:38 compute-0 nova_compute[254092]: 2025-11-25 17:22:38.630 254096 DEBUG nova.network.os_vif_util [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:49:53:45,bridge_name='br-int',has_traffic_filtering=True,id=6f0f388f-a1e7-4172-912a-ee02487d9833,network=Network(84058e12-5c2c-4ee6-a8bb-052eff4cc252),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f0f388f-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:22:38 compute-0 nova_compute[254092]: 2025-11-25 17:22:38.631 254096 DEBUG os_vif [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:49:53:45,bridge_name='br-int',has_traffic_filtering=True,id=6f0f388f-a1e7-4172-912a-ee02487d9833,network=Network(84058e12-5c2c-4ee6-a8bb-052eff4cc252),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f0f388f-a1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:22:38 compute-0 nova_compute[254092]: 2025-11-25 17:22:38.632 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:38 compute-0 nova_compute[254092]: 2025-11-25 17:22:38.633 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6f0f388f-a1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:22:38 compute-0 nova_compute[254092]: 2025-11-25 17:22:38.634 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:38 compute-0 nova_compute[254092]: 2025-11-25 17:22:38.636 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:22:38 compute-0 nova_compute[254092]: 2025-11-25 17:22:38.639 254096 INFO os_vif [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:49:53:45,bridge_name='br-int',has_traffic_filtering=True,id=6f0f388f-a1e7-4172-912a-ee02487d9833,network=Network(84058e12-5c2c-4ee6-a8bb-052eff4cc252),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f0f388f-a1')
Nov 25 17:22:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:22:38.640 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a0cf405a-d221-4ea9-ae10-314ab19af596]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84058e12-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:fa:d8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 3328, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 3328, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 456], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 775826, 'reachable_time': 37573, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 36, 'inoctets': 2656, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 36, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2656, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 36, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 419014, 'error': None, 'target': 'ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:22:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:22:38.669 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b081e38d-65cc-4957-a056-d0f9bfadf150]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap84058e12-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 775839, 'tstamp': 775839}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 419024, 'error': None, 'target': 'ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap84058e12-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 775844, 'tstamp': 775844}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 419024, 'error': None, 'target': 'ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:22:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:22:38.675 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84058e12-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:22:38 compute-0 nova_compute[254092]: 2025-11-25 17:22:38.677 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:38 compute-0 nova_compute[254092]: 2025-11-25 17:22:38.679 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:22:38.682 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84058e12-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:22:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:22:38.683 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:22:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:22:38.683 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap84058e12-50, col_values=(('external_ids', {'iface-id': 'f8afc421-e45b-4911-af18-dd32853c6b8c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:22:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:22:38.684 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:22:39 compute-0 nova_compute[254092]: 2025-11-25 17:22:39.010 254096 INFO nova.virt.libvirt.driver [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Deleting instance files /var/lib/nova/instances/fc92e0f7-adfa-4591-bb62-8e875c423b6e_del
Nov 25 17:22:39 compute-0 nova_compute[254092]: 2025-11-25 17:22:39.011 254096 INFO nova.virt.libvirt.driver [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Deletion of /var/lib/nova/instances/fc92e0f7-adfa-4591-bb62-8e875c423b6e_del complete
Nov 25 17:22:39 compute-0 nova_compute[254092]: 2025-11-25 17:22:39.086 254096 INFO nova.compute.manager [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Took 0.74 seconds to destroy the instance on the hypervisor.
Nov 25 17:22:39 compute-0 nova_compute[254092]: 2025-11-25 17:22:39.086 254096 DEBUG oslo.service.loopingcall [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 17:22:39 compute-0 nova_compute[254092]: 2025-11-25 17:22:39.086 254096 DEBUG nova.compute.manager [-] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 17:22:39 compute-0 nova_compute[254092]: 2025-11-25 17:22:39.087 254096 DEBUG nova.network.neutron [-] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 17:22:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2973: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:22:39 compute-0 nova_compute[254092]: 2025-11-25 17:22:39.249 254096 DEBUG nova.compute.manager [req-c9a25336-b97a-48fb-9180-1f70b73dba82 req-e1f4896c-02c5-4d99-bb13-840d408592ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Received event network-vif-unplugged-6f0f388f-a1e7-4172-912a-ee02487d9833 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:22:39 compute-0 nova_compute[254092]: 2025-11-25 17:22:39.249 254096 DEBUG oslo_concurrency.lockutils [req-c9a25336-b97a-48fb-9180-1f70b73dba82 req-e1f4896c-02c5-4d99-bb13-840d408592ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "fc92e0f7-adfa-4591-bb62-8e875c423b6e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:22:39 compute-0 nova_compute[254092]: 2025-11-25 17:22:39.250 254096 DEBUG oslo_concurrency.lockutils [req-c9a25336-b97a-48fb-9180-1f70b73dba82 req-e1f4896c-02c5-4d99-bb13-840d408592ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fc92e0f7-adfa-4591-bb62-8e875c423b6e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:22:39 compute-0 nova_compute[254092]: 2025-11-25 17:22:39.250 254096 DEBUG oslo_concurrency.lockutils [req-c9a25336-b97a-48fb-9180-1f70b73dba82 req-e1f4896c-02c5-4d99-bb13-840d408592ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fc92e0f7-adfa-4591-bb62-8e875c423b6e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:22:39 compute-0 nova_compute[254092]: 2025-11-25 17:22:39.250 254096 DEBUG nova.compute.manager [req-c9a25336-b97a-48fb-9180-1f70b73dba82 req-e1f4896c-02c5-4d99-bb13-840d408592ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] No waiting events found dispatching network-vif-unplugged-6f0f388f-a1e7-4172-912a-ee02487d9833 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:22:39 compute-0 nova_compute[254092]: 2025-11-25 17:22:39.250 254096 DEBUG nova.compute.manager [req-c9a25336-b97a-48fb-9180-1f70b73dba82 req-e1f4896c-02c5-4d99-bb13-840d408592ec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Received event network-vif-unplugged-6f0f388f-a1e7-4172-912a-ee02487d9833 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 17:22:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:22:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:22:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:22:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:22:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:22:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:22:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:22:40 compute-0 ceph-mon[74985]: pgmap v2973: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:22:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:22:40
Nov 25 17:22:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:22:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:22:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['vms', 'backups', 'default.rgw.meta', 'images', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr']
Nov 25 17:22:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:22:40 compute-0 nova_compute[254092]: 2025-11-25 17:22:40.261 254096 DEBUG nova.network.neutron [-] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:22:40 compute-0 nova_compute[254092]: 2025-11-25 17:22:40.283 254096 INFO nova.compute.manager [-] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Took 1.20 seconds to deallocate network for instance.
Nov 25 17:22:40 compute-0 nova_compute[254092]: 2025-11-25 17:22:40.319 254096 DEBUG nova.compute.manager [req-5f08b3f8-2f17-4ea1-9b97-bb25aa34c6a7 req-cef8b54e-00a4-40c5-b5ce-78ad0d3fcf36 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Received event network-vif-deleted-6f0f388f-a1e7-4172-912a-ee02487d9833 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:22:40 compute-0 nova_compute[254092]: 2025-11-25 17:22:40.347 254096 DEBUG oslo_concurrency.lockutils [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:22:40 compute-0 nova_compute[254092]: 2025-11-25 17:22:40.348 254096 DEBUG oslo_concurrency.lockutils [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:22:40 compute-0 nova_compute[254092]: 2025-11-25 17:22:40.417 254096 DEBUG oslo_concurrency.processutils [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:22:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:22:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:22:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:22:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:22:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:22:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:22:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:22:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:22:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:22:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:22:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:22:40 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/454553442' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:22:40 compute-0 nova_compute[254092]: 2025-11-25 17:22:40.950 254096 DEBUG oslo_concurrency.processutils [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:22:40 compute-0 nova_compute[254092]: 2025-11-25 17:22:40.958 254096 DEBUG nova.compute.provider_tree [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:22:40 compute-0 nova_compute[254092]: 2025-11-25 17:22:40.976 254096 DEBUG nova.scheduler.client.report [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:22:41 compute-0 nova_compute[254092]: 2025-11-25 17:22:41.005 254096 DEBUG oslo_concurrency.lockutils [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:22:41 compute-0 nova_compute[254092]: 2025-11-25 17:22:41.043 254096 INFO nova.scheduler.client.report [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Deleted allocations for instance fc92e0f7-adfa-4591-bb62-8e875c423b6e
Nov 25 17:22:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2974: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 373 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Nov 25 17:22:41 compute-0 nova_compute[254092]: 2025-11-25 17:22:41.131 254096 DEBUG oslo_concurrency.lockutils [None req-2167d335-1e1c-422e-b502-c1219ae9fe11 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "fc92e0f7-adfa-4591-bb62-8e875c423b6e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:22:41 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/454553442' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:22:41 compute-0 nova_compute[254092]: 2025-11-25 17:22:41.454 254096 DEBUG nova.compute.manager [req-afab0bd0-5784-4db4-a280-313960b55c39 req-bac27810-6c7b-40a7-b386-2c347e86372a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Received event network-vif-plugged-6f0f388f-a1e7-4172-912a-ee02487d9833 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:22:41 compute-0 nova_compute[254092]: 2025-11-25 17:22:41.455 254096 DEBUG oslo_concurrency.lockutils [req-afab0bd0-5784-4db4-a280-313960b55c39 req-bac27810-6c7b-40a7-b386-2c347e86372a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "fc92e0f7-adfa-4591-bb62-8e875c423b6e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:22:41 compute-0 nova_compute[254092]: 2025-11-25 17:22:41.456 254096 DEBUG oslo_concurrency.lockutils [req-afab0bd0-5784-4db4-a280-313960b55c39 req-bac27810-6c7b-40a7-b386-2c347e86372a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fc92e0f7-adfa-4591-bb62-8e875c423b6e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:22:41 compute-0 nova_compute[254092]: 2025-11-25 17:22:41.457 254096 DEBUG oslo_concurrency.lockutils [req-afab0bd0-5784-4db4-a280-313960b55c39 req-bac27810-6c7b-40a7-b386-2c347e86372a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "fc92e0f7-adfa-4591-bb62-8e875c423b6e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:22:41 compute-0 nova_compute[254092]: 2025-11-25 17:22:41.457 254096 DEBUG nova.compute.manager [req-afab0bd0-5784-4db4-a280-313960b55c39 req-bac27810-6c7b-40a7-b386-2c347e86372a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] No waiting events found dispatching network-vif-plugged-6f0f388f-a1e7-4172-912a-ee02487d9833 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:22:41 compute-0 nova_compute[254092]: 2025-11-25 17:22:41.457 254096 WARNING nova.compute.manager [req-afab0bd0-5784-4db4-a280-313960b55c39 req-bac27810-6c7b-40a7-b386-2c347e86372a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Received unexpected event network-vif-plugged-6f0f388f-a1e7-4172-912a-ee02487d9833 for instance with vm_state deleted and task_state None.
Nov 25 17:22:41 compute-0 nova_compute[254092]: 2025-11-25 17:22:41.531 254096 DEBUG nova.network.neutron [req-006ec7bd-a1c6-4c7f-bc04-66b0c1530eb7 req-61c15ea1-f562-416c-9acb-75034e710054 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Updated VIF entry in instance network info cache for port 6f0f388f-a1e7-4172-912a-ee02487d9833. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:22:41 compute-0 nova_compute[254092]: 2025-11-25 17:22:41.532 254096 DEBUG nova.network.neutron [req-006ec7bd-a1c6-4c7f-bc04-66b0c1530eb7 req-61c15ea1-f562-416c-9acb-75034e710054 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Updating instance_info_cache with network_info: [{"id": "6f0f388f-a1e7-4172-912a-ee02487d9833", "address": "fa:16:3e:49:53:45", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe49:5345", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe49:5345", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f0f388f-a1", "ovs_interfaceid": "6f0f388f-a1e7-4172-912a-ee02487d9833", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:22:41 compute-0 nova_compute[254092]: 2025-11-25 17:22:41.549 254096 DEBUG oslo_concurrency.lockutils [req-006ec7bd-a1c6-4c7f-bc04-66b0c1530eb7 req-61c15ea1-f562-416c-9acb-75034e710054 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-fc92e0f7-adfa-4591-bb62-8e875c423b6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:22:41 compute-0 nova_compute[254092]: 2025-11-25 17:22:41.578 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:42 compute-0 ceph-mon[74985]: pgmap v2974: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 373 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Nov 25 17:22:42 compute-0 nova_compute[254092]: 2025-11-25 17:22:42.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:22:42 compute-0 podman[419060]: 2025-11-25 17:22:42.662272448 +0000 UTC m=+0.075079723 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 17:22:42 compute-0 podman[419061]: 2025-11-25 17:22:42.667802379 +0000 UTC m=+0.073303126 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 17:22:42 compute-0 podman[419062]: 2025-11-25 17:22:42.729008885 +0000 UTC m=+0.129670960 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 17:22:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2975: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 14 KiB/s wr, 29 op/s
Nov 25 17:22:43 compute-0 nova_compute[254092]: 2025-11-25 17:22:43.401 254096 DEBUG nova.compute.manager [req-abc3b12f-12e1-4040-8c37-f5043189868c req-e54371c8-b2e4-479d-9c8c-a7c87489b545 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Received event network-changed-c1ba1b56-3c61-42fa-b23d-44349357a11a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:22:43 compute-0 nova_compute[254092]: 2025-11-25 17:22:43.401 254096 DEBUG nova.compute.manager [req-abc3b12f-12e1-4040-8c37-f5043189868c req-e54371c8-b2e4-479d-9c8c-a7c87489b545 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Refreshing instance network info cache due to event network-changed-c1ba1b56-3c61-42fa-b23d-44349357a11a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:22:43 compute-0 nova_compute[254092]: 2025-11-25 17:22:43.401 254096 DEBUG oslo_concurrency.lockutils [req-abc3b12f-12e1-4040-8c37-f5043189868c req-e54371c8-b2e4-479d-9c8c-a7c87489b545 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-36b839e5-d6db-406a-ab95-bbdcd48c531d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:22:43 compute-0 nova_compute[254092]: 2025-11-25 17:22:43.402 254096 DEBUG oslo_concurrency.lockutils [req-abc3b12f-12e1-4040-8c37-f5043189868c req-e54371c8-b2e4-479d-9c8c-a7c87489b545 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-36b839e5-d6db-406a-ab95-bbdcd48c531d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:22:43 compute-0 nova_compute[254092]: 2025-11-25 17:22:43.402 254096 DEBUG nova.network.neutron [req-abc3b12f-12e1-4040-8c37-f5043189868c req-e54371c8-b2e4-479d-9c8c-a7c87489b545 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Refreshing network info cache for port c1ba1b56-3c61-42fa-b23d-44349357a11a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:22:43 compute-0 nova_compute[254092]: 2025-11-25 17:22:43.513 254096 DEBUG oslo_concurrency.lockutils [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "36b839e5-d6db-406a-ab95-bbdcd48c531d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:22:43 compute-0 nova_compute[254092]: 2025-11-25 17:22:43.513 254096 DEBUG oslo_concurrency.lockutils [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "36b839e5-d6db-406a-ab95-bbdcd48c531d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:22:43 compute-0 nova_compute[254092]: 2025-11-25 17:22:43.514 254096 DEBUG oslo_concurrency.lockutils [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "36b839e5-d6db-406a-ab95-bbdcd48c531d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:22:43 compute-0 nova_compute[254092]: 2025-11-25 17:22:43.514 254096 DEBUG oslo_concurrency.lockutils [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "36b839e5-d6db-406a-ab95-bbdcd48c531d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:22:43 compute-0 nova_compute[254092]: 2025-11-25 17:22:43.514 254096 DEBUG oslo_concurrency.lockutils [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "36b839e5-d6db-406a-ab95-bbdcd48c531d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:22:43 compute-0 nova_compute[254092]: 2025-11-25 17:22:43.516 254096 INFO nova.compute.manager [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Terminating instance
Nov 25 17:22:43 compute-0 nova_compute[254092]: 2025-11-25 17:22:43.517 254096 DEBUG nova.compute.manager [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 17:22:43 compute-0 kernel: tapc1ba1b56-3c (unregistering): left promiscuous mode
Nov 25 17:22:43 compute-0 NetworkManager[48891]: <info>  [1764091363.5639] device (tapc1ba1b56-3c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:22:43 compute-0 ovn_controller[153477]: 2025-11-25T17:22:43Z|01580|binding|INFO|Releasing lport c1ba1b56-3c61-42fa-b23d-44349357a11a from this chassis (sb_readonly=0)
Nov 25 17:22:43 compute-0 ovn_controller[153477]: 2025-11-25T17:22:43Z|01581|binding|INFO|Setting lport c1ba1b56-3c61-42fa-b23d-44349357a11a down in Southbound
Nov 25 17:22:43 compute-0 ovn_controller[153477]: 2025-11-25T17:22:43Z|01582|binding|INFO|Removing iface tapc1ba1b56-3c ovn-installed in OVS
Nov 25 17:22:43 compute-0 nova_compute[254092]: 2025-11-25 17:22:43.576 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:43 compute-0 nova_compute[254092]: 2025-11-25 17:22:43.578 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:22:43.582 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:8a:a5 10.100.0.3 2001:db8:0:1:f816:3eff:fe77:8aa5 2001:db8::f816:3eff:fe77:8aa5'], port_security=['fa:16:3e:77:8a:a5 10.100.0.3 2001:db8:0:1:f816:3eff:fe77:8aa5 2001:db8::f816:3eff:fe77:8aa5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8:0:1:f816:3eff:fe77:8aa5/64 2001:db8::f816:3eff:fe77:8aa5/64', 'neutron:device_id': '36b839e5-d6db-406a-ab95-bbdcd48c531d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84058e12-5c2c-4ee6-a8bb-052eff4cc252', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fb9b5bad-6662-478f-ad54-33baab50ad17', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4dc08b4-4174-48fd-8366-1bed885d47d6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=c1ba1b56-3c61-42fa-b23d-44349357a11a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:22:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:22:43.583 163338 INFO neutron.agent.ovn.metadata.agent [-] Port c1ba1b56-3c61-42fa-b23d-44349357a11a in datapath 84058e12-5c2c-4ee6-a8bb-052eff4cc252 unbound from our chassis
Nov 25 17:22:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:22:43.584 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 84058e12-5c2c-4ee6-a8bb-052eff4cc252, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 17:22:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:22:43.585 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fbefbc2d-83b6-4d3a-8833-5dda31a98c55]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:22:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:22:43.586 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252 namespace which is not needed anymore
Nov 25 17:22:43 compute-0 nova_compute[254092]: 2025-11-25 17:22:43.602 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:43 compute-0 nova_compute[254092]: 2025-11-25 17:22:43.635 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:43 compute-0 systemd[1]: machine-qemu\x2d179\x2dinstance\x2d00000091.scope: Deactivated successfully.
Nov 25 17:22:43 compute-0 systemd[1]: machine-qemu\x2d179\x2dinstance\x2d00000091.scope: Consumed 16.485s CPU time.
Nov 25 17:22:43 compute-0 systemd-machined[216343]: Machine qemu-179-instance-00000091 terminated.
Nov 25 17:22:43 compute-0 neutron-haproxy-ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252[417552]: [NOTICE]   (417594) : haproxy version is 2.8.14-c23fe91
Nov 25 17:22:43 compute-0 neutron-haproxy-ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252[417552]: [NOTICE]   (417594) : path to executable is /usr/sbin/haproxy
Nov 25 17:22:43 compute-0 neutron-haproxy-ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252[417552]: [WARNING]  (417594) : Exiting Master process...
Nov 25 17:22:43 compute-0 neutron-haproxy-ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252[417552]: [WARNING]  (417594) : Exiting Master process...
Nov 25 17:22:43 compute-0 neutron-haproxy-ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252[417552]: [ALERT]    (417594) : Current worker (417604) exited with code 143 (Terminated)
Nov 25 17:22:43 compute-0 neutron-haproxy-ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252[417552]: [WARNING]  (417594) : All workers exited. Exiting... (0)
Nov 25 17:22:43 compute-0 systemd[1]: libpod-7c13e2ed5d8ee54b2a99610badde6c4e9d1779148bac578c849e2c08efd5cb55.scope: Deactivated successfully.
Nov 25 17:22:43 compute-0 nova_compute[254092]: 2025-11-25 17:22:43.762 254096 INFO nova.virt.libvirt.driver [-] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Instance destroyed successfully.
Nov 25 17:22:43 compute-0 nova_compute[254092]: 2025-11-25 17:22:43.763 254096 DEBUG nova.objects.instance [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'resources' on Instance uuid 36b839e5-d6db-406a-ab95-bbdcd48c531d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:22:43 compute-0 podman[419143]: 2025-11-25 17:22:43.763517614 +0000 UTC m=+0.064066585 container died 7c13e2ed5d8ee54b2a99610badde6c4e9d1779148bac578c849e2c08efd5cb55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 17:22:43 compute-0 nova_compute[254092]: 2025-11-25 17:22:43.774 254096 DEBUG nova.virt.libvirt.vif [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:21:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1075663045',display_name='tempest-TestGettingAddress-server-1075663045',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1075663045',id=145,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIRhAmR3DOLtGAzZ0xNgg5fZOngalELipYPxditOta+vsl4USGsDMHdwACzWpYO9hL+8kf+OvOhS+QLbsXFPxV626VeNASOG72UHufjWxf78foL9bNoVwzZLv4H5DTuJTA==',key_name='tempest-TestGettingAddress-910447973',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:21:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-0xnzfd4s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:21:41Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=36b839e5-d6db-406a-ab95-bbdcd48c531d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "address": "fa:16:3e:77:8a:a5", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe77:8aa5", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ba1b56-3c", "ovs_interfaceid": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:22:43 compute-0 nova_compute[254092]: 2025-11-25 17:22:43.774 254096 DEBUG nova.network.os_vif_util [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "address": "fa:16:3e:77:8a:a5", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ba1b56-3c", "ovs_interfaceid": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:22:43 compute-0 nova_compute[254092]: 2025-11-25 17:22:43.775 254096 DEBUG nova.network.os_vif_util [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:77:8a:a5,bridge_name='br-int',has_traffic_filtering=True,id=c1ba1b56-3c61-42fa-b23d-44349357a11a,network=Network(84058e12-5c2c-4ee6-a8bb-052eff4cc252),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ba1b56-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:22:43 compute-0 nova_compute[254092]: 2025-11-25 17:22:43.775 254096 DEBUG os_vif [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:77:8a:a5,bridge_name='br-int',has_traffic_filtering=True,id=c1ba1b56-3c61-42fa-b23d-44349357a11a,network=Network(84058e12-5c2c-4ee6-a8bb-052eff4cc252),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ba1b56-3c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:22:43 compute-0 nova_compute[254092]: 2025-11-25 17:22:43.778 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:43 compute-0 nova_compute[254092]: 2025-11-25 17:22:43.779 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc1ba1b56-3c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:22:43 compute-0 nova_compute[254092]: 2025-11-25 17:22:43.780 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:43 compute-0 nova_compute[254092]: 2025-11-25 17:22:43.782 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:22:43 compute-0 nova_compute[254092]: 2025-11-25 17:22:43.784 254096 INFO os_vif [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:77:8a:a5,bridge_name='br-int',has_traffic_filtering=True,id=c1ba1b56-3c61-42fa-b23d-44349357a11a,network=Network(84058e12-5c2c-4ee6-a8bb-052eff4cc252),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ba1b56-3c')
Nov 25 17:22:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7c13e2ed5d8ee54b2a99610badde6c4e9d1779148bac578c849e2c08efd5cb55-userdata-shm.mount: Deactivated successfully.
Nov 25 17:22:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8a1850d0a22111fadf1ed03693f6973e2c59b4c9475f9c71c2ca4d4e5534c52-merged.mount: Deactivated successfully.
Nov 25 17:22:43 compute-0 podman[419143]: 2025-11-25 17:22:43.824163445 +0000 UTC m=+0.124712426 container cleanup 7c13e2ed5d8ee54b2a99610badde6c4e9d1779148bac578c849e2c08efd5cb55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 17:22:43 compute-0 systemd[1]: libpod-conmon-7c13e2ed5d8ee54b2a99610badde6c4e9d1779148bac578c849e2c08efd5cb55.scope: Deactivated successfully.
Nov 25 17:22:43 compute-0 podman[419200]: 2025-11-25 17:22:43.897941413 +0000 UTC m=+0.049183210 container remove 7c13e2ed5d8ee54b2a99610badde6c4e9d1779148bac578c849e2c08efd5cb55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 17:22:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:22:43.908 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[19e39e3b-f5c4-4c54-996d-58e581c43f86]: (4, ('Tue Nov 25 05:22:43 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252 (7c13e2ed5d8ee54b2a99610badde6c4e9d1779148bac578c849e2c08efd5cb55)\n7c13e2ed5d8ee54b2a99610badde6c4e9d1779148bac578c849e2c08efd5cb55\nTue Nov 25 05:22:43 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252 (7c13e2ed5d8ee54b2a99610badde6c4e9d1779148bac578c849e2c08efd5cb55)\n7c13e2ed5d8ee54b2a99610badde6c4e9d1779148bac578c849e2c08efd5cb55\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:22:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:22:43.910 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[293b18f3-3e8f-49f6-8dd3-3a351f00c6c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:22:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:22:43.911 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84058e12-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:22:43 compute-0 nova_compute[254092]: 2025-11-25 17:22:43.913 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:43 compute-0 kernel: tap84058e12-50: left promiscuous mode
Nov 25 17:22:43 compute-0 nova_compute[254092]: 2025-11-25 17:22:43.932 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:22:43.938 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c9d1dd40-6b18-4d2d-a412-343fc2239fb9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:22:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:22:43.954 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[34d57346-52a3-4f5c-ac0c-5d2def994ad2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:22:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:22:43.955 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[e2e14acd-f62d-4717-b9bf-aeebad79fcd0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:22:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:22:43.977 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[10a8e00b-b8f9-4575-96bb-06f12e0116ee]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 775817, 'reachable_time': 36428, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 419216, 'error': None, 'target': 'ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:22:43 compute-0 systemd[1]: run-netns-ovnmeta\x2d84058e12\x2d5c2c\x2d4ee6\x2da8bb\x2d052eff4cc252.mount: Deactivated successfully.
Nov 25 17:22:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:22:43.983 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-84058e12-5c2c-4ee6-a8bb-052eff4cc252 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 17:22:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:22:43.983 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[26edf965-37b5-441c-abf0-4d4c27abe2a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:22:44 compute-0 nova_compute[254092]: 2025-11-25 17:22:44.160 254096 INFO nova.virt.libvirt.driver [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Deleting instance files /var/lib/nova/instances/36b839e5-d6db-406a-ab95-bbdcd48c531d_del
Nov 25 17:22:44 compute-0 nova_compute[254092]: 2025-11-25 17:22:44.161 254096 INFO nova.virt.libvirt.driver [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Deletion of /var/lib/nova/instances/36b839e5-d6db-406a-ab95-bbdcd48c531d_del complete
Nov 25 17:22:44 compute-0 ceph-mon[74985]: pgmap v2975: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 14 KiB/s wr, 29 op/s
Nov 25 17:22:44 compute-0 nova_compute[254092]: 2025-11-25 17:22:44.217 254096 INFO nova.compute.manager [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Took 0.70 seconds to destroy the instance on the hypervisor.
Nov 25 17:22:44 compute-0 nova_compute[254092]: 2025-11-25 17:22:44.218 254096 DEBUG oslo.service.loopingcall [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 17:22:44 compute-0 nova_compute[254092]: 2025-11-25 17:22:44.218 254096 DEBUG nova.compute.manager [-] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 17:22:44 compute-0 nova_compute[254092]: 2025-11-25 17:22:44.218 254096 DEBUG nova.network.neutron [-] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 17:22:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:22:44 compute-0 nova_compute[254092]: 2025-11-25 17:22:44.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:22:45 compute-0 nova_compute[254092]: 2025-11-25 17:22:45.073 254096 DEBUG nova.network.neutron [-] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:22:45 compute-0 nova_compute[254092]: 2025-11-25 17:22:45.087 254096 INFO nova.compute.manager [-] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Took 0.87 seconds to deallocate network for instance.
Nov 25 17:22:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2976: 321 pgs: 321 active+clean; 58 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 17 KiB/s wr, 42 op/s
Nov 25 17:22:45 compute-0 nova_compute[254092]: 2025-11-25 17:22:45.151 254096 DEBUG nova.network.neutron [req-abc3b12f-12e1-4040-8c37-f5043189868c req-e54371c8-b2e4-479d-9c8c-a7c87489b545 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Updated VIF entry in instance network info cache for port c1ba1b56-3c61-42fa-b23d-44349357a11a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:22:45 compute-0 nova_compute[254092]: 2025-11-25 17:22:45.151 254096 DEBUG nova.network.neutron [req-abc3b12f-12e1-4040-8c37-f5043189868c req-e54371c8-b2e4-479d-9c8c-a7c87489b545 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Updating instance_info_cache with network_info: [{"id": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "address": "fa:16:3e:77:8a:a5", "network": {"id": "84058e12-5c2c-4ee6-a8bb-052eff4cc252", "bridge": "br-int", "label": "tempest-network-smoke--7131343", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe77:8aa5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ba1b56-3c", "ovs_interfaceid": "c1ba1b56-3c61-42fa-b23d-44349357a11a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:22:45 compute-0 nova_compute[254092]: 2025-11-25 17:22:45.168 254096 DEBUG oslo_concurrency.lockutils [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:22:45 compute-0 nova_compute[254092]: 2025-11-25 17:22:45.169 254096 DEBUG oslo_concurrency.lockutils [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:22:45 compute-0 nova_compute[254092]: 2025-11-25 17:22:45.184 254096 DEBUG oslo_concurrency.lockutils [req-abc3b12f-12e1-4040-8c37-f5043189868c req-e54371c8-b2e4-479d-9c8c-a7c87489b545 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-36b839e5-d6db-406a-ab95-bbdcd48c531d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:22:45 compute-0 nova_compute[254092]: 2025-11-25 17:22:45.224 254096 DEBUG oslo_concurrency.processutils [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:22:45 compute-0 nova_compute[254092]: 2025-11-25 17:22:45.478 254096 DEBUG nova.compute.manager [req-329beb3c-c2fc-42bb-a564-35191604caff req-452c8425-4c40-4b80-8207-9963a7b6d117 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Received event network-vif-unplugged-c1ba1b56-3c61-42fa-b23d-44349357a11a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:22:45 compute-0 nova_compute[254092]: 2025-11-25 17:22:45.479 254096 DEBUG oslo_concurrency.lockutils [req-329beb3c-c2fc-42bb-a564-35191604caff req-452c8425-4c40-4b80-8207-9963a7b6d117 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "36b839e5-d6db-406a-ab95-bbdcd48c531d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:22:45 compute-0 nova_compute[254092]: 2025-11-25 17:22:45.479 254096 DEBUG oslo_concurrency.lockutils [req-329beb3c-c2fc-42bb-a564-35191604caff req-452c8425-4c40-4b80-8207-9963a7b6d117 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "36b839e5-d6db-406a-ab95-bbdcd48c531d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:22:45 compute-0 nova_compute[254092]: 2025-11-25 17:22:45.479 254096 DEBUG oslo_concurrency.lockutils [req-329beb3c-c2fc-42bb-a564-35191604caff req-452c8425-4c40-4b80-8207-9963a7b6d117 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "36b839e5-d6db-406a-ab95-bbdcd48c531d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:22:45 compute-0 nova_compute[254092]: 2025-11-25 17:22:45.479 254096 DEBUG nova.compute.manager [req-329beb3c-c2fc-42bb-a564-35191604caff req-452c8425-4c40-4b80-8207-9963a7b6d117 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] No waiting events found dispatching network-vif-unplugged-c1ba1b56-3c61-42fa-b23d-44349357a11a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:22:45 compute-0 nova_compute[254092]: 2025-11-25 17:22:45.480 254096 WARNING nova.compute.manager [req-329beb3c-c2fc-42bb-a564-35191604caff req-452c8425-4c40-4b80-8207-9963a7b6d117 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Received unexpected event network-vif-unplugged-c1ba1b56-3c61-42fa-b23d-44349357a11a for instance with vm_state deleted and task_state None.
Nov 25 17:22:45 compute-0 nova_compute[254092]: 2025-11-25 17:22:45.480 254096 DEBUG nova.compute.manager [req-329beb3c-c2fc-42bb-a564-35191604caff req-452c8425-4c40-4b80-8207-9963a7b6d117 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Received event network-vif-plugged-c1ba1b56-3c61-42fa-b23d-44349357a11a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:22:45 compute-0 nova_compute[254092]: 2025-11-25 17:22:45.480 254096 DEBUG oslo_concurrency.lockutils [req-329beb3c-c2fc-42bb-a564-35191604caff req-452c8425-4c40-4b80-8207-9963a7b6d117 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "36b839e5-d6db-406a-ab95-bbdcd48c531d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:22:45 compute-0 nova_compute[254092]: 2025-11-25 17:22:45.480 254096 DEBUG oslo_concurrency.lockutils [req-329beb3c-c2fc-42bb-a564-35191604caff req-452c8425-4c40-4b80-8207-9963a7b6d117 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "36b839e5-d6db-406a-ab95-bbdcd48c531d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:22:45 compute-0 nova_compute[254092]: 2025-11-25 17:22:45.480 254096 DEBUG oslo_concurrency.lockutils [req-329beb3c-c2fc-42bb-a564-35191604caff req-452c8425-4c40-4b80-8207-9963a7b6d117 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "36b839e5-d6db-406a-ab95-bbdcd48c531d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:22:45 compute-0 nova_compute[254092]: 2025-11-25 17:22:45.480 254096 DEBUG nova.compute.manager [req-329beb3c-c2fc-42bb-a564-35191604caff req-452c8425-4c40-4b80-8207-9963a7b6d117 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] No waiting events found dispatching network-vif-plugged-c1ba1b56-3c61-42fa-b23d-44349357a11a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:22:45 compute-0 nova_compute[254092]: 2025-11-25 17:22:45.480 254096 WARNING nova.compute.manager [req-329beb3c-c2fc-42bb-a564-35191604caff req-452c8425-4c40-4b80-8207-9963a7b6d117 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Received unexpected event network-vif-plugged-c1ba1b56-3c61-42fa-b23d-44349357a11a for instance with vm_state deleted and task_state None.
Nov 25 17:22:45 compute-0 nova_compute[254092]: 2025-11-25 17:22:45.481 254096 DEBUG nova.compute.manager [req-329beb3c-c2fc-42bb-a564-35191604caff req-452c8425-4c40-4b80-8207-9963a7b6d117 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Received event network-vif-deleted-c1ba1b56-3c61-42fa-b23d-44349357a11a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:22:45 compute-0 nova_compute[254092]: 2025-11-25 17:22:45.481 254096 INFO nova.compute.manager [req-329beb3c-c2fc-42bb-a564-35191604caff req-452c8425-4c40-4b80-8207-9963a7b6d117 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Neutron deleted interface c1ba1b56-3c61-42fa-b23d-44349357a11a; detaching it from the instance and deleting it from the info cache
Nov 25 17:22:45 compute-0 nova_compute[254092]: 2025-11-25 17:22:45.481 254096 DEBUG nova.network.neutron [req-329beb3c-c2fc-42bb-a564-35191604caff req-452c8425-4c40-4b80-8207-9963a7b6d117 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:22:45 compute-0 nova_compute[254092]: 2025-11-25 17:22:45.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:22:45 compute-0 nova_compute[254092]: 2025-11-25 17:22:45.504 254096 DEBUG nova.compute.manager [req-329beb3c-c2fc-42bb-a564-35191604caff req-452c8425-4c40-4b80-8207-9963a7b6d117 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Detach interface failed, port_id=c1ba1b56-3c61-42fa-b23d-44349357a11a, reason: Instance 36b839e5-d6db-406a-ab95-bbdcd48c531d could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 25 17:22:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:22:45 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3033778206' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:22:45 compute-0 nova_compute[254092]: 2025-11-25 17:22:45.691 254096 DEBUG oslo_concurrency.processutils [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:22:45 compute-0 nova_compute[254092]: 2025-11-25 17:22:45.697 254096 DEBUG nova.compute.provider_tree [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:22:45 compute-0 nova_compute[254092]: 2025-11-25 17:22:45.710 254096 DEBUG nova.scheduler.client.report [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:22:45 compute-0 nova_compute[254092]: 2025-11-25 17:22:45.725 254096 DEBUG oslo_concurrency.lockutils [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:22:45 compute-0 nova_compute[254092]: 2025-11-25 17:22:45.749 254096 INFO nova.scheduler.client.report [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Deleted allocations for instance 36b839e5-d6db-406a-ab95-bbdcd48c531d
Nov 25 17:22:45 compute-0 nova_compute[254092]: 2025-11-25 17:22:45.794 254096 DEBUG oslo_concurrency.lockutils [None req-b8df7008-cb33-468c-91ca-35534ed0b7f1 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "36b839e5-d6db-406a-ab95-bbdcd48c531d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.281s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:22:46 compute-0 ceph-mon[74985]: pgmap v2976: 321 pgs: 321 active+clean; 58 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 17 KiB/s wr, 42 op/s
Nov 25 17:22:46 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3033778206' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:22:46 compute-0 nova_compute[254092]: 2025-11-25 17:22:46.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:22:46 compute-0 nova_compute[254092]: 2025-11-25 17:22:46.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:22:46 compute-0 nova_compute[254092]: 2025-11-25 17:22:46.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:22:46 compute-0 nova_compute[254092]: 2025-11-25 17:22:46.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:22:46 compute-0 nova_compute[254092]: 2025-11-25 17:22:46.522 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:22:46 compute-0 nova_compute[254092]: 2025-11-25 17:22:46.522 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:22:46 compute-0 nova_compute[254092]: 2025-11-25 17:22:46.580 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:22:46 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2942597773' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:22:46 compute-0 nova_compute[254092]: 2025-11-25 17:22:46.972 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:22:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2977: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 5.5 KiB/s wr, 56 op/s
Nov 25 17:22:47 compute-0 nova_compute[254092]: 2025-11-25 17:22:47.158 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:22:47 compute-0 nova_compute[254092]: 2025-11-25 17:22:47.159 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3632MB free_disk=59.9774169921875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:22:47 compute-0 nova_compute[254092]: 2025-11-25 17:22:47.159 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:22:47 compute-0 nova_compute[254092]: 2025-11-25 17:22:47.159 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:22:47 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2942597773' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:22:47 compute-0 nova_compute[254092]: 2025-11-25 17:22:47.219 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:22:47 compute-0 nova_compute[254092]: 2025-11-25 17:22:47.220 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:22:47 compute-0 nova_compute[254092]: 2025-11-25 17:22:47.253 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:22:47 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:22:47 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/453444774' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:22:47 compute-0 nova_compute[254092]: 2025-11-25 17:22:47.702 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:22:47 compute-0 nova_compute[254092]: 2025-11-25 17:22:47.710 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:22:47 compute-0 nova_compute[254092]: 2025-11-25 17:22:47.727 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:22:47 compute-0 nova_compute[254092]: 2025-11-25 17:22:47.746 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:22:47 compute-0 nova_compute[254092]: 2025-11-25 17:22:47.747 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:22:48 compute-0 ceph-mon[74985]: pgmap v2977: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 5.5 KiB/s wr, 56 op/s
Nov 25 17:22:48 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/453444774' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:22:48 compute-0 nova_compute[254092]: 2025-11-25 17:22:48.782 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2978: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 5.2 KiB/s wr, 56 op/s
Nov 25 17:22:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:22:50 compute-0 ceph-mon[74985]: pgmap v2978: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 5.2 KiB/s wr, 56 op/s
Nov 25 17:22:50 compute-0 nova_compute[254092]: 2025-11-25 17:22:50.742 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:22:50 compute-0 nova_compute[254092]: 2025-11-25 17:22:50.743 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:22:50 compute-0 nova_compute[254092]: 2025-11-25 17:22:50.744 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:22:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2979: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 5.2 KiB/s wr, 56 op/s
Nov 25 17:22:51 compute-0 nova_compute[254092]: 2025-11-25 17:22:51.582 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:22:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:22:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:22:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:22:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 17:22:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:22:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:22:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:22:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:22:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:22:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 17:22:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:22:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:22:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:22:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:22:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:22:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:22:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:22:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:22:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:22:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:22:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:22:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:22:52 compute-0 ceph-mon[74985]: pgmap v2979: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 5.2 KiB/s wr, 56 op/s
Nov 25 17:22:52 compute-0 nova_compute[254092]: 2025-11-25 17:22:52.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:22:52 compute-0 nova_compute[254092]: 2025-11-25 17:22:52.644 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:52 compute-0 nova_compute[254092]: 2025-11-25 17:22:52.737 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2980: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.8 KiB/s wr, 27 op/s
Nov 25 17:22:53 compute-0 nova_compute[254092]: 2025-11-25 17:22:53.619 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764091358.6175413, fc92e0f7-adfa-4591-bb62-8e875c423b6e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:22:53 compute-0 nova_compute[254092]: 2025-11-25 17:22:53.619 254096 INFO nova.compute.manager [-] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] VM Stopped (Lifecycle Event)
Nov 25 17:22:53 compute-0 nova_compute[254092]: 2025-11-25 17:22:53.642 254096 DEBUG nova.compute.manager [None req-456ad93e-5af9-4513-982b-76fb0faafd8c - - - - - -] [instance: fc92e0f7-adfa-4591-bb62-8e875c423b6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:22:53 compute-0 nova_compute[254092]: 2025-11-25 17:22:53.828 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:54 compute-0 ceph-mon[74985]: pgmap v2980: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.8 KiB/s wr, 27 op/s
Nov 25 17:22:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:22:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2981: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.8 KiB/s wr, 27 op/s
Nov 25 17:22:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:22:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/138076452' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:22:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:22:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/138076452' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:22:55 compute-0 nova_compute[254092]: 2025-11-25 17:22:55.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:22:55 compute-0 nova_compute[254092]: 2025-11-25 17:22:55.498 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:22:55 compute-0 nova_compute[254092]: 2025-11-25 17:22:55.499 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:22:55 compute-0 nova_compute[254092]: 2025-11-25 17:22:55.511 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 17:22:56 compute-0 ceph-mon[74985]: pgmap v2981: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.8 KiB/s wr, 27 op/s
Nov 25 17:22:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/138076452' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:22:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/138076452' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:22:56 compute-0 nova_compute[254092]: 2025-11-25 17:22:56.584 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2982: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 682 B/s wr, 14 op/s
Nov 25 17:22:57 compute-0 sudo[419287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:22:57 compute-0 sudo[419287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:22:57 compute-0 sudo[419287]: pam_unix(sudo:session): session closed for user root
Nov 25 17:22:58 compute-0 sudo[419312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:22:58 compute-0 sudo[419312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:22:58 compute-0 sudo[419312]: pam_unix(sudo:session): session closed for user root
Nov 25 17:22:58 compute-0 sudo[419337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:22:58 compute-0 sudo[419337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:22:58 compute-0 sudo[419337]: pam_unix(sudo:session): session closed for user root
Nov 25 17:22:58 compute-0 ceph-mon[74985]: pgmap v2982: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 682 B/s wr, 14 op/s
Nov 25 17:22:58 compute-0 sudo[419362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:22:58 compute-0 sudo[419362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:22:58 compute-0 nova_compute[254092]: 2025-11-25 17:22:58.761 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764091363.759886, 36b839e5-d6db-406a-ab95-bbdcd48c531d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:22:58 compute-0 nova_compute[254092]: 2025-11-25 17:22:58.762 254096 INFO nova.compute.manager [-] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] VM Stopped (Lifecycle Event)
Nov 25 17:22:58 compute-0 nova_compute[254092]: 2025-11-25 17:22:58.804 254096 DEBUG nova.compute.manager [None req-60e15fcd-4cfd-43f5-9e91-37c8dc169ac4 - - - - - -] [instance: 36b839e5-d6db-406a-ab95-bbdcd48c531d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:22:58 compute-0 nova_compute[254092]: 2025-11-25 17:22:58.830 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:22:58 compute-0 sudo[419362]: pam_unix(sudo:session): session closed for user root
Nov 25 17:22:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:22:58 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:22:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:22:58 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:22:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:22:58 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:22:58 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev d468b05a-32c1-4df1-b31b-7a4b431c2714 does not exist
Nov 25 17:22:58 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 2ffc52ef-405e-4642-88b9-80b53e2f2c48 does not exist
Nov 25 17:22:58 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev a0fce6ca-9192-4f27-ad55-4ab4928ad93e does not exist
Nov 25 17:22:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:22:58 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:22:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:22:58 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:22:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:22:58 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:22:59 compute-0 sudo[419418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:22:59 compute-0 sudo[419418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:22:59 compute-0 sudo[419418]: pam_unix(sudo:session): session closed for user root
Nov 25 17:22:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2983: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:22:59 compute-0 sudo[419443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:22:59 compute-0 sudo[419443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:22:59 compute-0 sudo[419443]: pam_unix(sudo:session): session closed for user root
Nov 25 17:22:59 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:22:59 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:22:59 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:22:59 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:22:59 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:22:59 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:22:59 compute-0 sudo[419468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:22:59 compute-0 sudo[419468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:22:59 compute-0 sudo[419468]: pam_unix(sudo:session): session closed for user root
Nov 25 17:22:59 compute-0 sudo[419493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:22:59 compute-0 sudo[419493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:22:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:22:59 compute-0 nova_compute[254092]: 2025-11-25 17:22:59.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:22:59 compute-0 podman[419558]: 2025-11-25 17:22:59.869837246 +0000 UTC m=+0.058718479 container create c1b9a27f39eeaa8aaab152b7b51644afbd45f10a37491e6af3e04759f9b4f167 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_stonebraker, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:22:59 compute-0 podman[419558]: 2025-11-25 17:22:59.840034914 +0000 UTC m=+0.028916187 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:22:59 compute-0 systemd[1]: Started libpod-conmon-c1b9a27f39eeaa8aaab152b7b51644afbd45f10a37491e6af3e04759f9b4f167.scope.
Nov 25 17:22:59 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:23:00 compute-0 podman[419558]: 2025-11-25 17:23:00.013114986 +0000 UTC m=+0.201996209 container init c1b9a27f39eeaa8aaab152b7b51644afbd45f10a37491e6af3e04759f9b4f167 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_stonebraker, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Nov 25 17:23:00 compute-0 podman[419558]: 2025-11-25 17:23:00.025368469 +0000 UTC m=+0.214249702 container start c1b9a27f39eeaa8aaab152b7b51644afbd45f10a37491e6af3e04759f9b4f167 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_stonebraker, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:23:00 compute-0 podman[419558]: 2025-11-25 17:23:00.030128459 +0000 UTC m=+0.219009692 container attach c1b9a27f39eeaa8aaab152b7b51644afbd45f10a37491e6af3e04759f9b4f167 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_stonebraker, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:23:00 compute-0 competent_stonebraker[419574]: 167 167
Nov 25 17:23:00 compute-0 systemd[1]: libpod-c1b9a27f39eeaa8aaab152b7b51644afbd45f10a37491e6af3e04759f9b4f167.scope: Deactivated successfully.
Nov 25 17:23:00 compute-0 podman[419558]: 2025-11-25 17:23:00.039069502 +0000 UTC m=+0.227950735 container died c1b9a27f39eeaa8aaab152b7b51644afbd45f10a37491e6af3e04759f9b4f167 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 17:23:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c2759a5c1cfc5e91f6e0185399c9381ef5dd0155efc1116c327aa7e42452283-merged.mount: Deactivated successfully.
Nov 25 17:23:00 compute-0 podman[419558]: 2025-11-25 17:23:00.107404163 +0000 UTC m=+0.296285396 container remove c1b9a27f39eeaa8aaab152b7b51644afbd45f10a37491e6af3e04759f9b4f167 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_stonebraker, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 17:23:00 compute-0 systemd[1]: libpod-conmon-c1b9a27f39eeaa8aaab152b7b51644afbd45f10a37491e6af3e04759f9b4f167.scope: Deactivated successfully.
Nov 25 17:23:00 compute-0 ceph-mon[74985]: pgmap v2983: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:23:00 compute-0 podman[419597]: 2025-11-25 17:23:00.298569056 +0000 UTC m=+0.056208381 container create 1e30565856c5d85ae354de3e3dadbd14e8939c587e2a778fb0157cf56b05bdd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 17:23:00 compute-0 systemd[1]: Started libpod-conmon-1e30565856c5d85ae354de3e3dadbd14e8939c587e2a778fb0157cf56b05bdd7.scope.
Nov 25 17:23:00 compute-0 podman[419597]: 2025-11-25 17:23:00.271897959 +0000 UTC m=+0.029537354 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:23:00 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:23:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f55c36318f57a012f47d8f1463c69bd9d34a57d26b07c486be3238c202bc369d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:23:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f55c36318f57a012f47d8f1463c69bd9d34a57d26b07c486be3238c202bc369d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:23:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f55c36318f57a012f47d8f1463c69bd9d34a57d26b07c486be3238c202bc369d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:23:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f55c36318f57a012f47d8f1463c69bd9d34a57d26b07c486be3238c202bc369d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:23:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f55c36318f57a012f47d8f1463c69bd9d34a57d26b07c486be3238c202bc369d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:23:00 compute-0 podman[419597]: 2025-11-25 17:23:00.434850226 +0000 UTC m=+0.192489641 container init 1e30565856c5d85ae354de3e3dadbd14e8939c587e2a778fb0157cf56b05bdd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_tu, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 17:23:00 compute-0 podman[419597]: 2025-11-25 17:23:00.448776795 +0000 UTC m=+0.206416150 container start 1e30565856c5d85ae354de3e3dadbd14e8939c587e2a778fb0157cf56b05bdd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 17:23:00 compute-0 podman[419597]: 2025-11-25 17:23:00.453108482 +0000 UTC m=+0.210747837 container attach 1e30565856c5d85ae354de3e3dadbd14e8939c587e2a778fb0157cf56b05bdd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_tu, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 17:23:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2984: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:23:01 compute-0 sad_tu[419614]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:23:01 compute-0 sad_tu[419614]: --> relative data size: 1.0
Nov 25 17:23:01 compute-0 sad_tu[419614]: --> All data devices are unavailable
Nov 25 17:23:01 compute-0 nova_compute[254092]: 2025-11-25 17:23:01.587 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:01 compute-0 systemd[1]: libpod-1e30565856c5d85ae354de3e3dadbd14e8939c587e2a778fb0157cf56b05bdd7.scope: Deactivated successfully.
Nov 25 17:23:01 compute-0 systemd[1]: libpod-1e30565856c5d85ae354de3e3dadbd14e8939c587e2a778fb0157cf56b05bdd7.scope: Consumed 1.147s CPU time.
Nov 25 17:23:01 compute-0 podman[419597]: 2025-11-25 17:23:01.639747362 +0000 UTC m=+1.397386707 container died 1e30565856c5d85ae354de3e3dadbd14e8939c587e2a778fb0157cf56b05bdd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_tu, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 17:23:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-f55c36318f57a012f47d8f1463c69bd9d34a57d26b07c486be3238c202bc369d-merged.mount: Deactivated successfully.
Nov 25 17:23:01 compute-0 podman[419597]: 2025-11-25 17:23:01.70947077 +0000 UTC m=+1.467110075 container remove 1e30565856c5d85ae354de3e3dadbd14e8939c587e2a778fb0157cf56b05bdd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 17:23:01 compute-0 systemd[1]: libpod-conmon-1e30565856c5d85ae354de3e3dadbd14e8939c587e2a778fb0157cf56b05bdd7.scope: Deactivated successfully.
Nov 25 17:23:01 compute-0 sudo[419493]: pam_unix(sudo:session): session closed for user root
Nov 25 17:23:01 compute-0 sudo[419656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:23:01 compute-0 sudo[419656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:23:01 compute-0 sudo[419656]: pam_unix(sudo:session): session closed for user root
Nov 25 17:23:01 compute-0 sudo[419681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:23:01 compute-0 sudo[419681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:23:01 compute-0 sudo[419681]: pam_unix(sudo:session): session closed for user root
Nov 25 17:23:02 compute-0 sudo[419706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:23:02 compute-0 sudo[419706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:23:02 compute-0 sudo[419706]: pam_unix(sudo:session): session closed for user root
Nov 25 17:23:02 compute-0 sudo[419731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:23:02 compute-0 sudo[419731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:23:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:02.273 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:79:ce 10.100.0.2 2001:db8::f816:3eff:fec7:79ce'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fec7:79ce/64', 'neutron:device_id': 'ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-285f996f-d0be-4c9e-9d0c-b8d730990ab6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9ece7728-71e8-4a85-9646-5e1c30bdfaf7, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=63e3c701-69aa-46d9-a2c2-91ded4242c02) old=Port_Binding(mac=['fa:16:3e:c7:79:ce 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-285f996f-d0be-4c9e-9d0c-b8d730990ab6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:23:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:02.275 163338 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 63e3c701-69aa-46d9-a2c2-91ded4242c02 in datapath 285f996f-d0be-4c9e-9d0c-b8d730990ab6 updated
Nov 25 17:23:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:02.277 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 285f996f-d0be-4c9e-9d0c-b8d730990ab6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 17:23:02 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:02.279 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3d8d7384-6e7a-481a-aea6-f9a0c5f82e7a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:23:02 compute-0 ceph-mon[74985]: pgmap v2984: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:23:02 compute-0 podman[419796]: 2025-11-25 17:23:02.597389838 +0000 UTC m=+0.037985755 container create 76399aa395cdbf96bb04d1df11b7c6ca6a24ece89d4afb4839020b26c2dc9ff2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_heisenberg, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:23:02 compute-0 systemd[1]: Started libpod-conmon-76399aa395cdbf96bb04d1df11b7c6ca6a24ece89d4afb4839020b26c2dc9ff2.scope.
Nov 25 17:23:02 compute-0 podman[419796]: 2025-11-25 17:23:02.582942404 +0000 UTC m=+0.023538351 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:23:02 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:23:02 compute-0 podman[419796]: 2025-11-25 17:23:02.696609949 +0000 UTC m=+0.137205916 container init 76399aa395cdbf96bb04d1df11b7c6ca6a24ece89d4afb4839020b26c2dc9ff2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_heisenberg, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 17:23:02 compute-0 podman[419796]: 2025-11-25 17:23:02.708339738 +0000 UTC m=+0.148935695 container start 76399aa395cdbf96bb04d1df11b7c6ca6a24ece89d4afb4839020b26c2dc9ff2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:23:02 compute-0 podman[419796]: 2025-11-25 17:23:02.712240434 +0000 UTC m=+0.152836441 container attach 76399aa395cdbf96bb04d1df11b7c6ca6a24ece89d4afb4839020b26c2dc9ff2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_heisenberg, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:23:02 compute-0 agitated_heisenberg[419812]: 167 167
Nov 25 17:23:02 compute-0 systemd[1]: libpod-76399aa395cdbf96bb04d1df11b7c6ca6a24ece89d4afb4839020b26c2dc9ff2.scope: Deactivated successfully.
Nov 25 17:23:02 compute-0 podman[419796]: 2025-11-25 17:23:02.715349909 +0000 UTC m=+0.155945876 container died 76399aa395cdbf96bb04d1df11b7c6ca6a24ece89d4afb4839020b26c2dc9ff2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_heisenberg, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 17:23:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-69e273e3d311f4a64250df1c40ee2ee2730fa19de282e83393c1a039b412c0ee-merged.mount: Deactivated successfully.
Nov 25 17:23:02 compute-0 podman[419796]: 2025-11-25 17:23:02.76976886 +0000 UTC m=+0.210364827 container remove 76399aa395cdbf96bb04d1df11b7c6ca6a24ece89d4afb4839020b26c2dc9ff2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 17:23:02 compute-0 systemd[1]: libpod-conmon-76399aa395cdbf96bb04d1df11b7c6ca6a24ece89d4afb4839020b26c2dc9ff2.scope: Deactivated successfully.
Nov 25 17:23:02 compute-0 podman[419838]: 2025-11-25 17:23:02.984285739 +0000 UTC m=+0.058012760 container create 8b9e586c7b8da4baec7f307ba7e879d3fd6a3f8fc56c6c1be37d49dfa1bd34b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 17:23:03 compute-0 systemd[1]: Started libpod-conmon-8b9e586c7b8da4baec7f307ba7e879d3fd6a3f8fc56c6c1be37d49dfa1bd34b7.scope.
Nov 25 17:23:03 compute-0 podman[419838]: 2025-11-25 17:23:02.955190977 +0000 UTC m=+0.028918058 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:23:03 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:23:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0ce4e6f7c49b478552de5bdf645698503f679a3922f64d360726cd2bc891af9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:23:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0ce4e6f7c49b478552de5bdf645698503f679a3922f64d360726cd2bc891af9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:23:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0ce4e6f7c49b478552de5bdf645698503f679a3922f64d360726cd2bc891af9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:23:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0ce4e6f7c49b478552de5bdf645698503f679a3922f64d360726cd2bc891af9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:23:03 compute-0 podman[419838]: 2025-11-25 17:23:03.094209751 +0000 UTC m=+0.167936832 container init 8b9e586c7b8da4baec7f307ba7e879d3fd6a3f8fc56c6c1be37d49dfa1bd34b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kepler, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:23:03 compute-0 podman[419838]: 2025-11-25 17:23:03.106257709 +0000 UTC m=+0.179984740 container start 8b9e586c7b8da4baec7f307ba7e879d3fd6a3f8fc56c6c1be37d49dfa1bd34b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kepler, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 17:23:03 compute-0 podman[419838]: 2025-11-25 17:23:03.110523035 +0000 UTC m=+0.184250106 container attach 8b9e586c7b8da4baec7f307ba7e879d3fd6a3f8fc56c6c1be37d49dfa1bd34b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kepler, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 17:23:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2985: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:23:03 compute-0 nova_compute[254092]: 2025-11-25 17:23:03.833 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]: {
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:     "0": [
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:         {
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:             "devices": [
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:                 "/dev/loop3"
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:             ],
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:             "lv_name": "ceph_lv0",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:             "lv_size": "21470642176",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:             "name": "ceph_lv0",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:             "tags": {
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:                 "ceph.cluster_name": "ceph",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:                 "ceph.crush_device_class": "",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:                 "ceph.encrypted": "0",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:                 "ceph.osd_id": "0",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:                 "ceph.type": "block",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:                 "ceph.vdo": "0"
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:             },
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:             "type": "block",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:             "vg_name": "ceph_vg0"
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:         }
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:     ],
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:     "1": [
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:         {
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:             "devices": [
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:                 "/dev/loop4"
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:             ],
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:             "lv_name": "ceph_lv1",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:             "lv_size": "21470642176",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:             "name": "ceph_lv1",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:             "tags": {
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:                 "ceph.cluster_name": "ceph",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:                 "ceph.crush_device_class": "",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:                 "ceph.encrypted": "0",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:                 "ceph.osd_id": "1",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:                 "ceph.type": "block",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:                 "ceph.vdo": "0"
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:             },
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:             "type": "block",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:             "vg_name": "ceph_vg1"
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:         }
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:     ],
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:     "2": [
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:         {
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:             "devices": [
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:                 "/dev/loop5"
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:             ],
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:             "lv_name": "ceph_lv2",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:             "lv_size": "21470642176",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:             "name": "ceph_lv2",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:             "tags": {
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:                 "ceph.cluster_name": "ceph",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:                 "ceph.crush_device_class": "",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:                 "ceph.encrypted": "0",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:                 "ceph.osd_id": "2",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:                 "ceph.type": "block",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:                 "ceph.vdo": "0"
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:             },
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:             "type": "block",
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:             "vg_name": "ceph_vg2"
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:         }
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]:     ]
Nov 25 17:23:03 compute-0 quizzical_kepler[419854]: }
Nov 25 17:23:03 compute-0 systemd[1]: libpod-8b9e586c7b8da4baec7f307ba7e879d3fd6a3f8fc56c6c1be37d49dfa1bd34b7.scope: Deactivated successfully.
Nov 25 17:23:03 compute-0 podman[419838]: 2025-11-25 17:23:03.989602163 +0000 UTC m=+1.063329194 container died 8b9e586c7b8da4baec7f307ba7e879d3fd6a3f8fc56c6c1be37d49dfa1bd34b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kepler, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 17:23:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0ce4e6f7c49b478552de5bdf645698503f679a3922f64d360726cd2bc891af9-merged.mount: Deactivated successfully.
Nov 25 17:23:04 compute-0 podman[419838]: 2025-11-25 17:23:04.087411426 +0000 UTC m=+1.161138457 container remove 8b9e586c7b8da4baec7f307ba7e879d3fd6a3f8fc56c6c1be37d49dfa1bd34b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 17:23:04 compute-0 systemd[1]: libpod-conmon-8b9e586c7b8da4baec7f307ba7e879d3fd6a3f8fc56c6c1be37d49dfa1bd34b7.scope: Deactivated successfully.
Nov 25 17:23:04 compute-0 sudo[419731]: pam_unix(sudo:session): session closed for user root
Nov 25 17:23:04 compute-0 sudo[419874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:23:04 compute-0 sudo[419874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:23:04 compute-0 sudo[419874]: pam_unix(sudo:session): session closed for user root
Nov 25 17:23:04 compute-0 ceph-mon[74985]: pgmap v2985: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:23:04 compute-0 sudo[419899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:23:04 compute-0 sudo[419899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:23:04 compute-0 sudo[419899]: pam_unix(sudo:session): session closed for user root
Nov 25 17:23:04 compute-0 sudo[419924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:23:04 compute-0 sudo[419924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:23:04 compute-0 sudo[419924]: pam_unix(sudo:session): session closed for user root
Nov 25 17:23:04 compute-0 sudo[419949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:23:04 compute-0 sudo[419949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:23:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:23:04 compute-0 podman[420013]: 2025-11-25 17:23:04.886154477 +0000 UTC m=+0.054302080 container create 86ea0823c8c9b38435da03d0c46a2dc8a9201daaa88fd858883e9a7383758b55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 17:23:04 compute-0 systemd[1]: Started libpod-conmon-86ea0823c8c9b38435da03d0c46a2dc8a9201daaa88fd858883e9a7383758b55.scope.
Nov 25 17:23:04 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:23:04 compute-0 podman[420013]: 2025-11-25 17:23:04.859459939 +0000 UTC m=+0.027607602 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:23:04 compute-0 podman[420013]: 2025-11-25 17:23:04.971188371 +0000 UTC m=+0.139336024 container init 86ea0823c8c9b38435da03d0c46a2dc8a9201daaa88fd858883e9a7383758b55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Nov 25 17:23:04 compute-0 podman[420013]: 2025-11-25 17:23:04.978944522 +0000 UTC m=+0.147092135 container start 86ea0823c8c9b38435da03d0c46a2dc8a9201daaa88fd858883e9a7383758b55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Nov 25 17:23:04 compute-0 podman[420013]: 2025-11-25 17:23:04.98329081 +0000 UTC m=+0.151438573 container attach 86ea0823c8c9b38435da03d0c46a2dc8a9201daaa88fd858883e9a7383758b55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 17:23:04 compute-0 adoring_ishizaka[420030]: 167 167
Nov 25 17:23:04 compute-0 systemd[1]: libpod-86ea0823c8c9b38435da03d0c46a2dc8a9201daaa88fd858883e9a7383758b55.scope: Deactivated successfully.
Nov 25 17:23:04 compute-0 podman[420013]: 2025-11-25 17:23:04.987892846 +0000 UTC m=+0.156040429 container died 86ea0823c8c9b38435da03d0c46a2dc8a9201daaa88fd858883e9a7383758b55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ishizaka, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 17:23:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-94780c40ff144a02e697b24b3db591ab700c51ee95c3029f069bb61886eb768e-merged.mount: Deactivated successfully.
Nov 25 17:23:05 compute-0 podman[420013]: 2025-11-25 17:23:05.036693113 +0000 UTC m=+0.204840686 container remove 86ea0823c8c9b38435da03d0c46a2dc8a9201daaa88fd858883e9a7383758b55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ishizaka, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Nov 25 17:23:05 compute-0 systemd[1]: libpod-conmon-86ea0823c8c9b38435da03d0c46a2dc8a9201daaa88fd858883e9a7383758b55.scope: Deactivated successfully.
Nov 25 17:23:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2986: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:23:05 compute-0 podman[420054]: 2025-11-25 17:23:05.242336411 +0000 UTC m=+0.071655711 container create 829c449466cf180234930f59bc60d007d584dd9b9ca1454ca728d09272a35c9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_nightingale, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:23:05 compute-0 systemd[1]: Started libpod-conmon-829c449466cf180234930f59bc60d007d584dd9b9ca1454ca728d09272a35c9e.scope.
Nov 25 17:23:05 compute-0 podman[420054]: 2025-11-25 17:23:05.21437232 +0000 UTC m=+0.043691670 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:23:05 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:23:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/240c3a9bf9f13f666e853cc7c101c88ad544085774e617691edd097119abd084/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:23:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/240c3a9bf9f13f666e853cc7c101c88ad544085774e617691edd097119abd084/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:23:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/240c3a9bf9f13f666e853cc7c101c88ad544085774e617691edd097119abd084/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:23:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/240c3a9bf9f13f666e853cc7c101c88ad544085774e617691edd097119abd084/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:23:05 compute-0 podman[420054]: 2025-11-25 17:23:05.351321058 +0000 UTC m=+0.180640408 container init 829c449466cf180234930f59bc60d007d584dd9b9ca1454ca728d09272a35c9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_nightingale, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 17:23:05 compute-0 podman[420054]: 2025-11-25 17:23:05.373932543 +0000 UTC m=+0.203251853 container start 829c449466cf180234930f59bc60d007d584dd9b9ca1454ca728d09272a35c9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 17:23:05 compute-0 podman[420054]: 2025-11-25 17:23:05.378314512 +0000 UTC m=+0.207633822 container attach 829c449466cf180234930f59bc60d007d584dd9b9ca1454ca728d09272a35c9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_nightingale, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 17:23:06 compute-0 ceph-mon[74985]: pgmap v2986: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:23:06 compute-0 vigilant_nightingale[420071]: {
Nov 25 17:23:06 compute-0 vigilant_nightingale[420071]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:23:06 compute-0 vigilant_nightingale[420071]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:23:06 compute-0 vigilant_nightingale[420071]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:23:06 compute-0 vigilant_nightingale[420071]:         "osd_id": 1,
Nov 25 17:23:06 compute-0 vigilant_nightingale[420071]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:23:06 compute-0 vigilant_nightingale[420071]:         "type": "bluestore"
Nov 25 17:23:06 compute-0 vigilant_nightingale[420071]:     },
Nov 25 17:23:06 compute-0 vigilant_nightingale[420071]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:23:06 compute-0 vigilant_nightingale[420071]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:23:06 compute-0 vigilant_nightingale[420071]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:23:06 compute-0 vigilant_nightingale[420071]:         "osd_id": 2,
Nov 25 17:23:06 compute-0 vigilant_nightingale[420071]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:23:06 compute-0 vigilant_nightingale[420071]:         "type": "bluestore"
Nov 25 17:23:06 compute-0 vigilant_nightingale[420071]:     },
Nov 25 17:23:06 compute-0 vigilant_nightingale[420071]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:23:06 compute-0 vigilant_nightingale[420071]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:23:06 compute-0 vigilant_nightingale[420071]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:23:06 compute-0 vigilant_nightingale[420071]:         "osd_id": 0,
Nov 25 17:23:06 compute-0 vigilant_nightingale[420071]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:23:06 compute-0 vigilant_nightingale[420071]:         "type": "bluestore"
Nov 25 17:23:06 compute-0 vigilant_nightingale[420071]:     }
Nov 25 17:23:06 compute-0 vigilant_nightingale[420071]: }
Nov 25 17:23:06 compute-0 systemd[1]: libpod-829c449466cf180234930f59bc60d007d584dd9b9ca1454ca728d09272a35c9e.scope: Deactivated successfully.
Nov 25 17:23:06 compute-0 podman[420054]: 2025-11-25 17:23:06.494367271 +0000 UTC m=+1.323686581 container died 829c449466cf180234930f59bc60d007d584dd9b9ca1454ca728d09272a35c9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:23:06 compute-0 systemd[1]: libpod-829c449466cf180234930f59bc60d007d584dd9b9ca1454ca728d09272a35c9e.scope: Consumed 1.129s CPU time.
Nov 25 17:23:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-240c3a9bf9f13f666e853cc7c101c88ad544085774e617691edd097119abd084-merged.mount: Deactivated successfully.
Nov 25 17:23:06 compute-0 podman[420054]: 2025-11-25 17:23:06.563605015 +0000 UTC m=+1.392924285 container remove 829c449466cf180234930f59bc60d007d584dd9b9ca1454ca728d09272a35c9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_nightingale, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 17:23:06 compute-0 systemd[1]: libpod-conmon-829c449466cf180234930f59bc60d007d584dd9b9ca1454ca728d09272a35c9e.scope: Deactivated successfully.
Nov 25 17:23:06 compute-0 nova_compute[254092]: 2025-11-25 17:23:06.589 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:06 compute-0 sudo[419949]: pam_unix(sudo:session): session closed for user root
Nov 25 17:23:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:23:06 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:23:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:23:06 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:23:06 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 7780156a-c6c8-4537-a679-7f9422bc6bde does not exist
Nov 25 17:23:06 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev b6721c70-3c10-44e5-bee8-64e9c4023f91 does not exist
Nov 25 17:23:06 compute-0 sudo[420115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:23:06 compute-0 sudo[420115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:23:06 compute-0 sudo[420115]: pam_unix(sudo:session): session closed for user root
Nov 25 17:23:06 compute-0 sudo[420140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:23:06 compute-0 sudo[420140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:23:06 compute-0 sudo[420140]: pam_unix(sudo:session): session closed for user root
Nov 25 17:23:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2987: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:23:07 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:23:07 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:23:07 compute-0 ceph-mon[74985]: pgmap v2987: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:23:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:07.804 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:79:ce 10.100.0.2 2001:db8:0:1:f816:3eff:fec7:79ce 2001:db8::f816:3eff:fec7:79ce'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:fec7:79ce/64 2001:db8::f816:3eff:fec7:79ce/64', 'neutron:device_id': 'ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-285f996f-d0be-4c9e-9d0c-b8d730990ab6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9ece7728-71e8-4a85-9646-5e1c30bdfaf7, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=63e3c701-69aa-46d9-a2c2-91ded4242c02) old=Port_Binding(mac=['fa:16:3e:c7:79:ce 10.100.0.2 2001:db8::f816:3eff:fec7:79ce'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fec7:79ce/64', 'neutron:device_id': 'ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-285f996f-d0be-4c9e-9d0c-b8d730990ab6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:23:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:07.806 163338 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 63e3c701-69aa-46d9-a2c2-91ded4242c02 in datapath 285f996f-d0be-4c9e-9d0c-b8d730990ab6 updated
Nov 25 17:23:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:07.807 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 285f996f-d0be-4c9e-9d0c-b8d730990ab6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 17:23:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:07.809 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0d303963-aaa6-4156-a409-c1f731d42c11]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:23:08 compute-0 nova_compute[254092]: 2025-11-25 17:23:08.839 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2988: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:23:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:23:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:23:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:23:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:23:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:23:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:23:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:23:10 compute-0 ceph-mon[74985]: pgmap v2988: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:23:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2989: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:23:11 compute-0 nova_compute[254092]: 2025-11-25 17:23:11.591 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:12 compute-0 ceph-mon[74985]: pgmap v2989: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:23:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2990: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:23:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:13.660 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:23:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:13.661 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:23:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:13.661 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:23:13 compute-0 podman[420165]: 2025-11-25 17:23:13.665180603 +0000 UTC m=+0.075612619 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 17:23:13 compute-0 podman[420166]: 2025-11-25 17:23:13.69774959 +0000 UTC m=+0.097728121 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 25 17:23:13 compute-0 podman[420167]: 2025-11-25 17:23:13.766855081 +0000 UTC m=+0.162067082 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 17:23:13 compute-0 nova_compute[254092]: 2025-11-25 17:23:13.843 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:14 compute-0 ceph-mon[74985]: pgmap v2990: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:23:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:23:14 compute-0 nova_compute[254092]: 2025-11-25 17:23:14.661 254096 DEBUG oslo_concurrency.lockutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "287c21fa-3b34-448c-ba84-c777124fbb3d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:23:14 compute-0 nova_compute[254092]: 2025-11-25 17:23:14.661 254096 DEBUG oslo_concurrency.lockutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "287c21fa-3b34-448c-ba84-c777124fbb3d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:23:14 compute-0 nova_compute[254092]: 2025-11-25 17:23:14.684 254096 DEBUG nova.compute.manager [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 17:23:14 compute-0 nova_compute[254092]: 2025-11-25 17:23:14.790 254096 DEBUG oslo_concurrency.lockutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:23:14 compute-0 nova_compute[254092]: 2025-11-25 17:23:14.791 254096 DEBUG oslo_concurrency.lockutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:23:14 compute-0 nova_compute[254092]: 2025-11-25 17:23:14.804 254096 DEBUG nova.virt.hardware [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 17:23:14 compute-0 nova_compute[254092]: 2025-11-25 17:23:14.805 254096 INFO nova.compute.claims [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Claim successful on node compute-0.ctlplane.example.com
Nov 25 17:23:14 compute-0 nova_compute[254092]: 2025-11-25 17:23:14.923 254096 DEBUG oslo_concurrency.processutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:23:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2991: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:23:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:23:15 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4113816730' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:23:15 compute-0 nova_compute[254092]: 2025-11-25 17:23:15.429 254096 DEBUG oslo_concurrency.processutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:23:15 compute-0 nova_compute[254092]: 2025-11-25 17:23:15.436 254096 DEBUG nova.compute.provider_tree [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:23:15 compute-0 nova_compute[254092]: 2025-11-25 17:23:15.451 254096 DEBUG nova.scheduler.client.report [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:23:15 compute-0 nova_compute[254092]: 2025-11-25 17:23:15.483 254096 DEBUG oslo_concurrency.lockutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:23:15 compute-0 nova_compute[254092]: 2025-11-25 17:23:15.484 254096 DEBUG nova.compute.manager [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 17:23:15 compute-0 nova_compute[254092]: 2025-11-25 17:23:15.535 254096 DEBUG nova.compute.manager [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 17:23:15 compute-0 nova_compute[254092]: 2025-11-25 17:23:15.535 254096 DEBUG nova.network.neutron [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 17:23:15 compute-0 nova_compute[254092]: 2025-11-25 17:23:15.555 254096 INFO nova.virt.libvirt.driver [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 17:23:15 compute-0 nova_compute[254092]: 2025-11-25 17:23:15.577 254096 DEBUG nova.compute.manager [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 17:23:15 compute-0 nova_compute[254092]: 2025-11-25 17:23:15.670 254096 DEBUG nova.compute.manager [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 17:23:15 compute-0 nova_compute[254092]: 2025-11-25 17:23:15.673 254096 DEBUG nova.virt.libvirt.driver [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 17:23:15 compute-0 nova_compute[254092]: 2025-11-25 17:23:15.673 254096 INFO nova.virt.libvirt.driver [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Creating image(s)
Nov 25 17:23:15 compute-0 nova_compute[254092]: 2025-11-25 17:23:15.707 254096 DEBUG nova.storage.rbd_utils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 287c21fa-3b34-448c-ba84-c777124fbb3d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:23:15 compute-0 nova_compute[254092]: 2025-11-25 17:23:15.733 254096 DEBUG nova.storage.rbd_utils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 287c21fa-3b34-448c-ba84-c777124fbb3d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:23:15 compute-0 nova_compute[254092]: 2025-11-25 17:23:15.761 254096 DEBUG nova.storage.rbd_utils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 287c21fa-3b34-448c-ba84-c777124fbb3d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:23:15 compute-0 nova_compute[254092]: 2025-11-25 17:23:15.766 254096 DEBUG oslo_concurrency.processutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:23:15 compute-0 nova_compute[254092]: 2025-11-25 17:23:15.848 254096 DEBUG oslo_concurrency.processutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:23:15 compute-0 nova_compute[254092]: 2025-11-25 17:23:15.850 254096 DEBUG oslo_concurrency.lockutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:23:15 compute-0 nova_compute[254092]: 2025-11-25 17:23:15.851 254096 DEBUG oslo_concurrency.lockutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:23:15 compute-0 nova_compute[254092]: 2025-11-25 17:23:15.851 254096 DEBUG oslo_concurrency.lockutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:23:15 compute-0 nova_compute[254092]: 2025-11-25 17:23:15.876 254096 DEBUG nova.storage.rbd_utils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 287c21fa-3b34-448c-ba84-c777124fbb3d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:23:15 compute-0 nova_compute[254092]: 2025-11-25 17:23:15.881 254096 DEBUG oslo_concurrency.processutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 287c21fa-3b34-448c-ba84-c777124fbb3d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:23:16 compute-0 nova_compute[254092]: 2025-11-25 17:23:16.195 254096 DEBUG oslo_concurrency.processutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 287c21fa-3b34-448c-ba84-c777124fbb3d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.313s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:23:16 compute-0 ceph-mon[74985]: pgmap v2991: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:23:16 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4113816730' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:23:16 compute-0 nova_compute[254092]: 2025-11-25 17:23:16.246 254096 DEBUG nova.policy [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a647bb22c9a54e5fa9ddfff588126693', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '67b7f8e8aac6441f992a182f8d893100', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 17:23:16 compute-0 nova_compute[254092]: 2025-11-25 17:23:16.295 254096 DEBUG nova.storage.rbd_utils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] resizing rbd image 287c21fa-3b34-448c-ba84-c777124fbb3d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 17:23:16 compute-0 nova_compute[254092]: 2025-11-25 17:23:16.414 254096 DEBUG nova.objects.instance [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'migration_context' on Instance uuid 287c21fa-3b34-448c-ba84-c777124fbb3d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:23:16 compute-0 nova_compute[254092]: 2025-11-25 17:23:16.544 254096 DEBUG nova.virt.libvirt.driver [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 17:23:16 compute-0 nova_compute[254092]: 2025-11-25 17:23:16.545 254096 DEBUG nova.virt.libvirt.driver [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Ensure instance console log exists: /var/lib/nova/instances/287c21fa-3b34-448c-ba84-c777124fbb3d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 17:23:16 compute-0 nova_compute[254092]: 2025-11-25 17:23:16.546 254096 DEBUG oslo_concurrency.lockutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:23:16 compute-0 nova_compute[254092]: 2025-11-25 17:23:16.547 254096 DEBUG oslo_concurrency.lockutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:23:16 compute-0 nova_compute[254092]: 2025-11-25 17:23:16.548 254096 DEBUG oslo_concurrency.lockutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:23:16 compute-0 nova_compute[254092]: 2025-11-25 17:23:16.593 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2992: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 17:23:18 compute-0 ceph-mon[74985]: pgmap v2992: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 17:23:18 compute-0 nova_compute[254092]: 2025-11-25 17:23:18.364 254096 DEBUG nova.network.neutron [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Successfully created port: e7f2fe0c-53c0-4be7-8362-47c92514001c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 17:23:18 compute-0 nova_compute[254092]: 2025-11-25 17:23:18.847 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2993: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 17:23:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:23:19 compute-0 nova_compute[254092]: 2025-11-25 17:23:19.737 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:19 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:19.738 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=61, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=60) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:23:19 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:19.739 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 17:23:19 compute-0 nova_compute[254092]: 2025-11-25 17:23:19.895 254096 DEBUG nova.network.neutron [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Successfully updated port: e7f2fe0c-53c0-4be7-8362-47c92514001c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:23:20 compute-0 nova_compute[254092]: 2025-11-25 17:23:20.005 254096 DEBUG oslo_concurrency.lockutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "refresh_cache-287c21fa-3b34-448c-ba84-c777124fbb3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:23:20 compute-0 nova_compute[254092]: 2025-11-25 17:23:20.005 254096 DEBUG oslo_concurrency.lockutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquired lock "refresh_cache-287c21fa-3b34-448c-ba84-c777124fbb3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:23:20 compute-0 nova_compute[254092]: 2025-11-25 17:23:20.005 254096 DEBUG nova.network.neutron [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 17:23:20 compute-0 ceph-mon[74985]: pgmap v2993: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Nov 25 17:23:21 compute-0 nova_compute[254092]: 2025-11-25 17:23:21.070 254096 DEBUG nova.compute.manager [req-a7755765-5d32-40eb-9cdb-8c64e340afb6 req-71ca0f59-5ccf-425d-a593-3ced2e76edc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Received event network-changed-e7f2fe0c-53c0-4be7-8362-47c92514001c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:23:21 compute-0 nova_compute[254092]: 2025-11-25 17:23:21.070 254096 DEBUG nova.compute.manager [req-a7755765-5d32-40eb-9cdb-8c64e340afb6 req-71ca0f59-5ccf-425d-a593-3ced2e76edc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Refreshing instance network info cache due to event network-changed-e7f2fe0c-53c0-4be7-8362-47c92514001c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:23:21 compute-0 nova_compute[254092]: 2025-11-25 17:23:21.070 254096 DEBUG oslo_concurrency.lockutils [req-a7755765-5d32-40eb-9cdb-8c64e340afb6 req-71ca0f59-5ccf-425d-a593-3ced2e76edc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-287c21fa-3b34-448c-ba84-c777124fbb3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:23:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2994: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:23:21 compute-0 nova_compute[254092]: 2025-11-25 17:23:21.247 254096 DEBUG nova.network.neutron [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:23:21 compute-0 nova_compute[254092]: 2025-11-25 17:23:21.630 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:21 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:21.741 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '61'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:23:22 compute-0 ceph-mon[74985]: pgmap v2994: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:23:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2995: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:23:23 compute-0 nova_compute[254092]: 2025-11-25 17:23:23.891 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:24 compute-0 nova_compute[254092]: 2025-11-25 17:23:24.252 254096 DEBUG nova.network.neutron [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Updating instance_info_cache with network_info: [{"id": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "address": "fa:16:3e:41:f7:b1", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7f2fe0c-53", "ovs_interfaceid": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:23:24 compute-0 ceph-mon[74985]: pgmap v2995: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:23:24 compute-0 nova_compute[254092]: 2025-11-25 17:23:24.280 254096 DEBUG oslo_concurrency.lockutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Releasing lock "refresh_cache-287c21fa-3b34-448c-ba84-c777124fbb3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:23:24 compute-0 nova_compute[254092]: 2025-11-25 17:23:24.281 254096 DEBUG nova.compute.manager [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Instance network_info: |[{"id": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "address": "fa:16:3e:41:f7:b1", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7f2fe0c-53", "ovs_interfaceid": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 17:23:24 compute-0 nova_compute[254092]: 2025-11-25 17:23:24.283 254096 DEBUG oslo_concurrency.lockutils [req-a7755765-5d32-40eb-9cdb-8c64e340afb6 req-71ca0f59-5ccf-425d-a593-3ced2e76edc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-287c21fa-3b34-448c-ba84-c777124fbb3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:23:24 compute-0 nova_compute[254092]: 2025-11-25 17:23:24.283 254096 DEBUG nova.network.neutron [req-a7755765-5d32-40eb-9cdb-8c64e340afb6 req-71ca0f59-5ccf-425d-a593-3ced2e76edc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Refreshing network info cache for port e7f2fe0c-53c0-4be7-8362-47c92514001c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:23:24 compute-0 nova_compute[254092]: 2025-11-25 17:23:24.293 254096 DEBUG nova.virt.libvirt.driver [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Start _get_guest_xml network_info=[{"id": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "address": "fa:16:3e:41:f7:b1", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7f2fe0c-53", "ovs_interfaceid": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 17:23:24 compute-0 nova_compute[254092]: 2025-11-25 17:23:24.304 254096 WARNING nova.virt.libvirt.driver [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:23:24 compute-0 nova_compute[254092]: 2025-11-25 17:23:24.312 254096 DEBUG nova.virt.libvirt.host [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 17:23:24 compute-0 nova_compute[254092]: 2025-11-25 17:23:24.313 254096 DEBUG nova.virt.libvirt.host [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 17:23:24 compute-0 nova_compute[254092]: 2025-11-25 17:23:24.325 254096 DEBUG nova.virt.libvirt.host [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 17:23:24 compute-0 nova_compute[254092]: 2025-11-25 17:23:24.326 254096 DEBUG nova.virt.libvirt.host [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 17:23:24 compute-0 nova_compute[254092]: 2025-11-25 17:23:24.327 254096 DEBUG nova.virt.libvirt.driver [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 17:23:24 compute-0 nova_compute[254092]: 2025-11-25 17:23:24.328 254096 DEBUG nova.virt.hardware [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 17:23:24 compute-0 nova_compute[254092]: 2025-11-25 17:23:24.329 254096 DEBUG nova.virt.hardware [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 17:23:24 compute-0 nova_compute[254092]: 2025-11-25 17:23:24.330 254096 DEBUG nova.virt.hardware [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 17:23:24 compute-0 nova_compute[254092]: 2025-11-25 17:23:24.331 254096 DEBUG nova.virt.hardware [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 17:23:24 compute-0 nova_compute[254092]: 2025-11-25 17:23:24.331 254096 DEBUG nova.virt.hardware [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 17:23:24 compute-0 nova_compute[254092]: 2025-11-25 17:23:24.332 254096 DEBUG nova.virt.hardware [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 17:23:24 compute-0 nova_compute[254092]: 2025-11-25 17:23:24.332 254096 DEBUG nova.virt.hardware [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 17:23:24 compute-0 nova_compute[254092]: 2025-11-25 17:23:24.333 254096 DEBUG nova.virt.hardware [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 17:23:24 compute-0 nova_compute[254092]: 2025-11-25 17:23:24.334 254096 DEBUG nova.virt.hardware [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 17:23:24 compute-0 nova_compute[254092]: 2025-11-25 17:23:24.334 254096 DEBUG nova.virt.hardware [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 17:23:24 compute-0 nova_compute[254092]: 2025-11-25 17:23:24.335 254096 DEBUG nova.virt.hardware [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 17:23:24 compute-0 nova_compute[254092]: 2025-11-25 17:23:24.343 254096 DEBUG oslo_concurrency.processutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:23:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:23:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:23:24 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1492400474' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:23:24 compute-0 nova_compute[254092]: 2025-11-25 17:23:24.813 254096 DEBUG oslo_concurrency.processutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:23:24 compute-0 nova_compute[254092]: 2025-11-25 17:23:24.845 254096 DEBUG nova.storage.rbd_utils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 287c21fa-3b34-448c-ba84-c777124fbb3d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:23:24 compute-0 nova_compute[254092]: 2025-11-25 17:23:24.851 254096 DEBUG oslo_concurrency.processutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:23:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2996: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:23:25 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1492400474' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:23:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:23:25 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/306123827' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:23:25 compute-0 nova_compute[254092]: 2025-11-25 17:23:25.350 254096 DEBUG oslo_concurrency.processutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:23:25 compute-0 nova_compute[254092]: 2025-11-25 17:23:25.354 254096 DEBUG nova.virt.libvirt.vif [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:23:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1628099618',display_name='tempest-TestGettingAddress-server-1628099618',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1628099618',id=147,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEZ6smT8tsN0VvN7F7g1rHxOYiNqIkE8IzwqVs2aHnroQdRZ2PL6s9ZrD+ZO9sn2Dm0U7Nqx1ksvxRYiufxUI7TiN4d7vuuX9VAm+c8Urdvu83TEqeMhlMk1VfLezHQlfw==',key_name='tempest-TestGettingAddress-1206328727',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-7s8020gn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:23:15Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=287c21fa-3b34-448c-ba84-c777124fbb3d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "address": "fa:16:3e:41:f7:b1", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7f2fe0c-53", "ovs_interfaceid": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:23:25 compute-0 nova_compute[254092]: 2025-11-25 17:23:25.354 254096 DEBUG nova.network.os_vif_util [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "address": "fa:16:3e:41:f7:b1", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7f2fe0c-53", "ovs_interfaceid": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:23:25 compute-0 nova_compute[254092]: 2025-11-25 17:23:25.356 254096 DEBUG nova.network.os_vif_util [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:f7:b1,bridge_name='br-int',has_traffic_filtering=True,id=e7f2fe0c-53c0-4be7-8362-47c92514001c,network=Network(285f996f-d0be-4c9e-9d0c-b8d730990ab6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7f2fe0c-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:23:25 compute-0 nova_compute[254092]: 2025-11-25 17:23:25.358 254096 DEBUG nova.objects.instance [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'pci_devices' on Instance uuid 287c21fa-3b34-448c-ba84-c777124fbb3d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:23:25 compute-0 nova_compute[254092]: 2025-11-25 17:23:25.374 254096 DEBUG nova.virt.libvirt.driver [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] End _get_guest_xml xml=<domain type="kvm">
Nov 25 17:23:25 compute-0 nova_compute[254092]:   <uuid>287c21fa-3b34-448c-ba84-c777124fbb3d</uuid>
Nov 25 17:23:25 compute-0 nova_compute[254092]:   <name>instance-00000093</name>
Nov 25 17:23:25 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 17:23:25 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 17:23:25 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:23:25 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:       <nova:name>tempest-TestGettingAddress-server-1628099618</nova:name>
Nov 25 17:23:25 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 17:23:24</nova:creationTime>
Nov 25 17:23:25 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 17:23:25 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 17:23:25 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 17:23:25 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 17:23:25 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:23:25 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 17:23:25 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 17:23:25 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 17:23:25 compute-0 nova_compute[254092]:         <nova:user uuid="a647bb22c9a54e5fa9ddfff588126693">tempest-TestGettingAddress-207202764-project-member</nova:user>
Nov 25 17:23:25 compute-0 nova_compute[254092]:         <nova:project uuid="67b7f8e8aac6441f992a182f8d893100">tempest-TestGettingAddress-207202764</nova:project>
Nov 25 17:23:25 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 17:23:25 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 17:23:25 compute-0 nova_compute[254092]:         <nova:port uuid="e7f2fe0c-53c0-4be7-8362-47c92514001c">
Nov 25 17:23:25 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe41:f7b1" ipVersion="6"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe41:f7b1" ipVersion="6"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 17:23:25 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 17:23:25 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:23:25 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 17:23:25 compute-0 nova_compute[254092]:     <system>
Nov 25 17:23:25 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 17:23:25 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 17:23:25 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:23:25 compute-0 nova_compute[254092]:       <entry name="serial">287c21fa-3b34-448c-ba84-c777124fbb3d</entry>
Nov 25 17:23:25 compute-0 nova_compute[254092]:       <entry name="uuid">287c21fa-3b34-448c-ba84-c777124fbb3d</entry>
Nov 25 17:23:25 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     </system>
Nov 25 17:23:25 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:23:25 compute-0 nova_compute[254092]:   <os>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:   </os>
Nov 25 17:23:25 compute-0 nova_compute[254092]:   <features>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:   </features>
Nov 25 17:23:25 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 17:23:25 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:23:25 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 17:23:25 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:23:25 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 17:23:25 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/287c21fa-3b34-448c-ba84-c777124fbb3d_disk">
Nov 25 17:23:25 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:       </source>
Nov 25 17:23:25 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:23:25 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:23:25 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 17:23:25 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/287c21fa-3b34-448c-ba84-c777124fbb3d_disk.config">
Nov 25 17:23:25 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:       </source>
Nov 25 17:23:25 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:23:25 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:23:25 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 17:23:25 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:41:f7:b1"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:       <target dev="tape7f2fe0c-53"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 17:23:25 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/287c21fa-3b34-448c-ba84-c777124fbb3d/console.log" append="off"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     <video>
Nov 25 17:23:25 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     </video>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 17:23:25 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 17:23:25 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 17:23:25 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:23:25 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:23:25 compute-0 nova_compute[254092]: </domain>
Nov 25 17:23:25 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 17:23:25 compute-0 nova_compute[254092]: 2025-11-25 17:23:25.375 254096 DEBUG nova.compute.manager [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Preparing to wait for external event network-vif-plugged-e7f2fe0c-53c0-4be7-8362-47c92514001c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 17:23:25 compute-0 nova_compute[254092]: 2025-11-25 17:23:25.377 254096 DEBUG oslo_concurrency.lockutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "287c21fa-3b34-448c-ba84-c777124fbb3d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:23:25 compute-0 nova_compute[254092]: 2025-11-25 17:23:25.377 254096 DEBUG oslo_concurrency.lockutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "287c21fa-3b34-448c-ba84-c777124fbb3d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:23:25 compute-0 nova_compute[254092]: 2025-11-25 17:23:25.377 254096 DEBUG oslo_concurrency.lockutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "287c21fa-3b34-448c-ba84-c777124fbb3d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:23:25 compute-0 nova_compute[254092]: 2025-11-25 17:23:25.378 254096 DEBUG nova.virt.libvirt.vif [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:23:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1628099618',display_name='tempest-TestGettingAddress-server-1628099618',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1628099618',id=147,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEZ6smT8tsN0VvN7F7g1rHxOYiNqIkE8IzwqVs2aHnroQdRZ2PL6s9ZrD+ZO9sn2Dm0U7Nqx1ksvxRYiufxUI7TiN4d7vuuX9VAm+c8Urdvu83TEqeMhlMk1VfLezHQlfw==',key_name='tempest-TestGettingAddress-1206328727',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-7s8020gn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:23:15Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=287c21fa-3b34-448c-ba84-c777124fbb3d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "address": "fa:16:3e:41:f7:b1", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7f2fe0c-53", "ovs_interfaceid": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:23:25 compute-0 nova_compute[254092]: 2025-11-25 17:23:25.378 254096 DEBUG nova.network.os_vif_util [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "address": "fa:16:3e:41:f7:b1", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7f2fe0c-53", "ovs_interfaceid": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:23:25 compute-0 nova_compute[254092]: 2025-11-25 17:23:25.379 254096 DEBUG nova.network.os_vif_util [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:f7:b1,bridge_name='br-int',has_traffic_filtering=True,id=e7f2fe0c-53c0-4be7-8362-47c92514001c,network=Network(285f996f-d0be-4c9e-9d0c-b8d730990ab6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7f2fe0c-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:23:25 compute-0 nova_compute[254092]: 2025-11-25 17:23:25.379 254096 DEBUG os_vif [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:f7:b1,bridge_name='br-int',has_traffic_filtering=True,id=e7f2fe0c-53c0-4be7-8362-47c92514001c,network=Network(285f996f-d0be-4c9e-9d0c-b8d730990ab6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7f2fe0c-53') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:23:25 compute-0 nova_compute[254092]: 2025-11-25 17:23:25.380 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:25 compute-0 nova_compute[254092]: 2025-11-25 17:23:25.380 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:23:25 compute-0 nova_compute[254092]: 2025-11-25 17:23:25.381 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:23:25 compute-0 nova_compute[254092]: 2025-11-25 17:23:25.388 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:25 compute-0 nova_compute[254092]: 2025-11-25 17:23:25.388 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape7f2fe0c-53, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:23:25 compute-0 nova_compute[254092]: 2025-11-25 17:23:25.389 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape7f2fe0c-53, col_values=(('external_ids', {'iface-id': 'e7f2fe0c-53c0-4be7-8362-47c92514001c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:41:f7:b1', 'vm-uuid': '287c21fa-3b34-448c-ba84-c777124fbb3d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:23:25 compute-0 nova_compute[254092]: 2025-11-25 17:23:25.391 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:25 compute-0 nova_compute[254092]: 2025-11-25 17:23:25.392 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:23:25 compute-0 NetworkManager[48891]: <info>  [1764091405.3923] manager: (tape7f2fe0c-53): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/656)
Nov 25 17:23:25 compute-0 nova_compute[254092]: 2025-11-25 17:23:25.405 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:25 compute-0 nova_compute[254092]: 2025-11-25 17:23:25.406 254096 INFO os_vif [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:f7:b1,bridge_name='br-int',has_traffic_filtering=True,id=e7f2fe0c-53c0-4be7-8362-47c92514001c,network=Network(285f996f-d0be-4c9e-9d0c-b8d730990ab6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7f2fe0c-53')
Nov 25 17:23:25 compute-0 nova_compute[254092]: 2025-11-25 17:23:25.468 254096 DEBUG nova.virt.libvirt.driver [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:23:25 compute-0 nova_compute[254092]: 2025-11-25 17:23:25.468 254096 DEBUG nova.virt.libvirt.driver [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:23:25 compute-0 nova_compute[254092]: 2025-11-25 17:23:25.469 254096 DEBUG nova.virt.libvirt.driver [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No VIF found with MAC fa:16:3e:41:f7:b1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:23:25 compute-0 nova_compute[254092]: 2025-11-25 17:23:25.470 254096 INFO nova.virt.libvirt.driver [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Using config drive
Nov 25 17:23:25 compute-0 nova_compute[254092]: 2025-11-25 17:23:25.509 254096 DEBUG nova.storage.rbd_utils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 287c21fa-3b34-448c-ba84-c777124fbb3d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:23:26 compute-0 ceph-mon[74985]: pgmap v2996: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:23:26 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/306123827' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:23:26 compute-0 nova_compute[254092]: 2025-11-25 17:23:26.293 254096 INFO nova.virt.libvirt.driver [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Creating config drive at /var/lib/nova/instances/287c21fa-3b34-448c-ba84-c777124fbb3d/disk.config
Nov 25 17:23:26 compute-0 nova_compute[254092]: 2025-11-25 17:23:26.297 254096 DEBUG oslo_concurrency.processutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/287c21fa-3b34-448c-ba84-c777124fbb3d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvles7kle execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:23:26 compute-0 nova_compute[254092]: 2025-11-25 17:23:26.454 254096 DEBUG oslo_concurrency.processutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/287c21fa-3b34-448c-ba84-c777124fbb3d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvles7kle" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:23:26 compute-0 nova_compute[254092]: 2025-11-25 17:23:26.490 254096 DEBUG nova.storage.rbd_utils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 287c21fa-3b34-448c-ba84-c777124fbb3d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:23:26 compute-0 nova_compute[254092]: 2025-11-25 17:23:26.495 254096 DEBUG oslo_concurrency.processutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/287c21fa-3b34-448c-ba84-c777124fbb3d/disk.config 287c21fa-3b34-448c-ba84-c777124fbb3d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:23:26 compute-0 nova_compute[254092]: 2025-11-25 17:23:26.633 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:26 compute-0 nova_compute[254092]: 2025-11-25 17:23:26.689 254096 DEBUG oslo_concurrency.processutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/287c21fa-3b34-448c-ba84-c777124fbb3d/disk.config 287c21fa-3b34-448c-ba84-c777124fbb3d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:23:26 compute-0 nova_compute[254092]: 2025-11-25 17:23:26.690 254096 INFO nova.virt.libvirt.driver [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Deleting local config drive /var/lib/nova/instances/287c21fa-3b34-448c-ba84-c777124fbb3d/disk.config because it was imported into RBD.
Nov 25 17:23:26 compute-0 kernel: tape7f2fe0c-53: entered promiscuous mode
Nov 25 17:23:26 compute-0 NetworkManager[48891]: <info>  [1764091406.7787] manager: (tape7f2fe0c-53): new Tun device (/org/freedesktop/NetworkManager/Devices/657)
Nov 25 17:23:26 compute-0 nova_compute[254092]: 2025-11-25 17:23:26.777 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:26 compute-0 nova_compute[254092]: 2025-11-25 17:23:26.779 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:26 compute-0 ovn_controller[153477]: 2025-11-25T17:23:26Z|01583|binding|INFO|Claiming lport e7f2fe0c-53c0-4be7-8362-47c92514001c for this chassis.
Nov 25 17:23:26 compute-0 ovn_controller[153477]: 2025-11-25T17:23:26Z|01584|binding|INFO|e7f2fe0c-53c0-4be7-8362-47c92514001c: Claiming fa:16:3e:41:f7:b1 10.100.0.4 2001:db8:0:1:f816:3eff:fe41:f7b1 2001:db8::f816:3eff:fe41:f7b1
Nov 25 17:23:26 compute-0 nova_compute[254092]: 2025-11-25 17:23:26.784 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:26 compute-0 nova_compute[254092]: 2025-11-25 17:23:26.789 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:26.798 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:f7:b1 10.100.0.4 2001:db8:0:1:f816:3eff:fe41:f7b1 2001:db8::f816:3eff:fe41:f7b1'], port_security=['fa:16:3e:41:f7:b1 10.100.0.4 2001:db8:0:1:f816:3eff:fe41:f7b1 2001:db8::f816:3eff:fe41:f7b1'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28 2001:db8:0:1:f816:3eff:fe41:f7b1/64 2001:db8::f816:3eff:fe41:f7b1/64', 'neutron:device_id': '287c21fa-3b34-448c-ba84-c777124fbb3d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-285f996f-d0be-4c9e-9d0c-b8d730990ab6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': '52a2959e-419d-400c-8c05-829f5b8d02c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9ece7728-71e8-4a85-9646-5e1c30bdfaf7, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=e7f2fe0c-53c0-4be7-8362-47c92514001c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:23:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:26.799 163338 INFO neutron.agent.ovn.metadata.agent [-] Port e7f2fe0c-53c0-4be7-8362-47c92514001c in datapath 285f996f-d0be-4c9e-9d0c-b8d730990ab6 bound to our chassis
Nov 25 17:23:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:26.800 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 285f996f-d0be-4c9e-9d0c-b8d730990ab6
Nov 25 17:23:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:26.815 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a9646b86-2a64-4342-8aa3-28897948075b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:23:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:26.816 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap285f996f-d1 in ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 17:23:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:26.819 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap285f996f-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 17:23:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:26.819 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a4ce39c3-c30e-4436-8f2e-1266f7dedb2c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:23:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:26.820 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5d39c576-9644-4d1c-81b8-1d8bdb617dd1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:23:26 compute-0 systemd-machined[216343]: New machine qemu-181-instance-00000093.
Nov 25 17:23:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:26.837 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[b6ebe701-4c11-4b90-903a-35ee46409343]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:23:26 compute-0 nova_compute[254092]: 2025-11-25 17:23:26.850 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:26 compute-0 systemd[1]: Started Virtual Machine qemu-181-instance-00000093.
Nov 25 17:23:26 compute-0 ovn_controller[153477]: 2025-11-25T17:23:26Z|01585|binding|INFO|Setting lport e7f2fe0c-53c0-4be7-8362-47c92514001c ovn-installed in OVS
Nov 25 17:23:26 compute-0 ovn_controller[153477]: 2025-11-25T17:23:26Z|01586|binding|INFO|Setting lport e7f2fe0c-53c0-4be7-8362-47c92514001c up in Southbound
Nov 25 17:23:26 compute-0 nova_compute[254092]: 2025-11-25 17:23:26.858 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:26 compute-0 systemd-udevd[420555]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:23:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:26.872 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1a755382-3028-4857-af96-00552e08cbf6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:23:26 compute-0 NetworkManager[48891]: <info>  [1764091406.8844] device (tape7f2fe0c-53): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:23:26 compute-0 NetworkManager[48891]: <info>  [1764091406.8870] device (tape7f2fe0c-53): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:23:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:26.915 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[0a28a28d-fca0-4e19-b110-fa0439fab627]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:23:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:26.923 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ffc771a7-9d12-49ec-a817-f4323ed8a1be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:23:26 compute-0 NetworkManager[48891]: <info>  [1764091406.9249] manager: (tap285f996f-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/658)
Nov 25 17:23:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:26.966 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1e043c45-bb00-4f05-910b-b6252301d40d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:23:26 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:26.974 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[548b555e-6202-41a0-82db-7759ee1eb24d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:23:27 compute-0 NetworkManager[48891]: <info>  [1764091407.0039] device (tap285f996f-d0): carrier: link connected
Nov 25 17:23:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:27.012 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[0e291a0e-e7c3-493a-b05f-40d49b2b3f42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:23:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:27.038 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8a25e878-2e18-435a-94e9-d263e1b3c30b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap285f996f-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c7:79:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 461], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 786459, 'reachable_time': 24898, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 420587, 'error': None, 'target': 'ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:23:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:27.059 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[26879668-2d60-43e6-9c3c-7b8b1afa095e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec7:79ce'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 786459, 'tstamp': 786459}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 420588, 'error': None, 'target': 'ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:23:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:27.087 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2b76e746-2a56-42da-8802-db0668fbe11f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap285f996f-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c7:79:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 461], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 786459, 'reachable_time': 24898, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 420589, 'error': None, 'target': 'ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:23:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:27.134 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bf823cf3-22a9-4613-a778-2686e5cab2b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:23:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2997: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:23:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:27.244 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2b797d6a-63c4-43d0-ba7f-6db003a09f93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:23:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:27.246 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap285f996f-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:23:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:27.246 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:23:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:27.247 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap285f996f-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:23:27 compute-0 kernel: tap285f996f-d0: entered promiscuous mode
Nov 25 17:23:27 compute-0 NetworkManager[48891]: <info>  [1764091407.2516] manager: (tap285f996f-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/659)
Nov 25 17:23:27 compute-0 nova_compute[254092]: 2025-11-25 17:23:27.251 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:27 compute-0 nova_compute[254092]: 2025-11-25 17:23:27.254 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:27.255 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap285f996f-d0, col_values=(('external_ids', {'iface-id': '63e3c701-69aa-46d9-a2c2-91ded4242c02'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:23:27 compute-0 ovn_controller[153477]: 2025-11-25T17:23:27Z|01587|binding|INFO|Releasing lport 63e3c701-69aa-46d9-a2c2-91ded4242c02 from this chassis (sb_readonly=0)
Nov 25 17:23:27 compute-0 nova_compute[254092]: 2025-11-25 17:23:27.257 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:27 compute-0 nova_compute[254092]: 2025-11-25 17:23:27.259 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:27.260 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/285f996f-d0be-4c9e-9d0c-b8d730990ab6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/285f996f-d0be-4c9e-9d0c-b8d730990ab6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 17:23:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:27.262 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f98a2763-20f6-4f32-9fa8-680439e02085]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:23:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:27.263 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 17:23:27 compute-0 ovn_metadata_agent[163333]: global
Nov 25 17:23:27 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 17:23:27 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-285f996f-d0be-4c9e-9d0c-b8d730990ab6
Nov 25 17:23:27 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 17:23:27 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 17:23:27 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 17:23:27 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/285f996f-d0be-4c9e-9d0c-b8d730990ab6.pid.haproxy
Nov 25 17:23:27 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 17:23:27 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:23:27 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 17:23:27 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 17:23:27 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 17:23:27 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 17:23:27 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 17:23:27 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 17:23:27 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 17:23:27 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 17:23:27 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 17:23:27 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 17:23:27 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 17:23:27 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 17:23:27 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 17:23:27 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:23:27 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:23:27 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 17:23:27 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 17:23:27 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 17:23:27 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 285f996f-d0be-4c9e-9d0c-b8d730990ab6
Nov 25 17:23:27 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 17:23:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:27.264 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6', 'env', 'PROCESS_TAG=haproxy-285f996f-d0be-4c9e-9d0c-b8d730990ab6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/285f996f-d0be-4c9e-9d0c-b8d730990ab6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 17:23:27 compute-0 nova_compute[254092]: 2025-11-25 17:23:27.276 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:27 compute-0 nova_compute[254092]: 2025-11-25 17:23:27.413 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091407.4130409, 287c21fa-3b34-448c-ba84-c777124fbb3d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:23:27 compute-0 nova_compute[254092]: 2025-11-25 17:23:27.414 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] VM Started (Lifecycle Event)
Nov 25 17:23:27 compute-0 nova_compute[254092]: 2025-11-25 17:23:27.442 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:23:27 compute-0 nova_compute[254092]: 2025-11-25 17:23:27.448 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091407.4162164, 287c21fa-3b34-448c-ba84-c777124fbb3d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:23:27 compute-0 nova_compute[254092]: 2025-11-25 17:23:27.448 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] VM Paused (Lifecycle Event)
Nov 25 17:23:27 compute-0 nova_compute[254092]: 2025-11-25 17:23:27.462 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:23:27 compute-0 nova_compute[254092]: 2025-11-25 17:23:27.467 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:23:27 compute-0 nova_compute[254092]: 2025-11-25 17:23:27.495 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:23:27 compute-0 nova_compute[254092]: 2025-11-25 17:23:27.590 254096 DEBUG nova.compute.manager [req-1f2044b6-0866-4247-aadc-b492b33bed73 req-1be09dc7-c3c3-4d11-b3a3-f46a5a51eedf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Received event network-vif-plugged-e7f2fe0c-53c0-4be7-8362-47c92514001c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:23:27 compute-0 nova_compute[254092]: 2025-11-25 17:23:27.591 254096 DEBUG oslo_concurrency.lockutils [req-1f2044b6-0866-4247-aadc-b492b33bed73 req-1be09dc7-c3c3-4d11-b3a3-f46a5a51eedf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "287c21fa-3b34-448c-ba84-c777124fbb3d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:23:27 compute-0 nova_compute[254092]: 2025-11-25 17:23:27.591 254096 DEBUG oslo_concurrency.lockutils [req-1f2044b6-0866-4247-aadc-b492b33bed73 req-1be09dc7-c3c3-4d11-b3a3-f46a5a51eedf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "287c21fa-3b34-448c-ba84-c777124fbb3d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:23:27 compute-0 nova_compute[254092]: 2025-11-25 17:23:27.591 254096 DEBUG oslo_concurrency.lockutils [req-1f2044b6-0866-4247-aadc-b492b33bed73 req-1be09dc7-c3c3-4d11-b3a3-f46a5a51eedf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "287c21fa-3b34-448c-ba84-c777124fbb3d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:23:27 compute-0 nova_compute[254092]: 2025-11-25 17:23:27.591 254096 DEBUG nova.compute.manager [req-1f2044b6-0866-4247-aadc-b492b33bed73 req-1be09dc7-c3c3-4d11-b3a3-f46a5a51eedf a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Processing event network-vif-plugged-e7f2fe0c-53c0-4be7-8362-47c92514001c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 17:23:27 compute-0 nova_compute[254092]: 2025-11-25 17:23:27.592 254096 DEBUG nova.compute.manager [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 17:23:27 compute-0 nova_compute[254092]: 2025-11-25 17:23:27.596 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091407.5960097, 287c21fa-3b34-448c-ba84-c777124fbb3d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:23:27 compute-0 nova_compute[254092]: 2025-11-25 17:23:27.597 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] VM Resumed (Lifecycle Event)
Nov 25 17:23:27 compute-0 nova_compute[254092]: 2025-11-25 17:23:27.601 254096 DEBUG nova.virt.libvirt.driver [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 17:23:27 compute-0 nova_compute[254092]: 2025-11-25 17:23:27.604 254096 DEBUG nova.network.neutron [req-a7755765-5d32-40eb-9cdb-8c64e340afb6 req-71ca0f59-5ccf-425d-a593-3ced2e76edc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Updated VIF entry in instance network info cache for port e7f2fe0c-53c0-4be7-8362-47c92514001c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:23:27 compute-0 nova_compute[254092]: 2025-11-25 17:23:27.605 254096 DEBUG nova.network.neutron [req-a7755765-5d32-40eb-9cdb-8c64e340afb6 req-71ca0f59-5ccf-425d-a593-3ced2e76edc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Updating instance_info_cache with network_info: [{"id": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "address": "fa:16:3e:41:f7:b1", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7f2fe0c-53", "ovs_interfaceid": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:23:27 compute-0 nova_compute[254092]: 2025-11-25 17:23:27.611 254096 INFO nova.virt.libvirt.driver [-] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Instance spawned successfully.
Nov 25 17:23:27 compute-0 nova_compute[254092]: 2025-11-25 17:23:27.612 254096 DEBUG nova.virt.libvirt.driver [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 17:23:27 compute-0 nova_compute[254092]: 2025-11-25 17:23:27.634 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:23:27 compute-0 nova_compute[254092]: 2025-11-25 17:23:27.640 254096 DEBUG oslo_concurrency.lockutils [req-a7755765-5d32-40eb-9cdb-8c64e340afb6 req-71ca0f59-5ccf-425d-a593-3ced2e76edc1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-287c21fa-3b34-448c-ba84-c777124fbb3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:23:27 compute-0 nova_compute[254092]: 2025-11-25 17:23:27.642 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:23:27 compute-0 nova_compute[254092]: 2025-11-25 17:23:27.646 254096 DEBUG nova.virt.libvirt.driver [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:23:27 compute-0 nova_compute[254092]: 2025-11-25 17:23:27.646 254096 DEBUG nova.virt.libvirt.driver [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:23:27 compute-0 nova_compute[254092]: 2025-11-25 17:23:27.647 254096 DEBUG nova.virt.libvirt.driver [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:23:27 compute-0 nova_compute[254092]: 2025-11-25 17:23:27.647 254096 DEBUG nova.virt.libvirt.driver [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:23:27 compute-0 nova_compute[254092]: 2025-11-25 17:23:27.647 254096 DEBUG nova.virt.libvirt.driver [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:23:27 compute-0 nova_compute[254092]: 2025-11-25 17:23:27.648 254096 DEBUG nova.virt.libvirt.driver [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:23:27 compute-0 nova_compute[254092]: 2025-11-25 17:23:27.681 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:23:27 compute-0 nova_compute[254092]: 2025-11-25 17:23:27.707 254096 INFO nova.compute.manager [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Took 12.04 seconds to spawn the instance on the hypervisor.
Nov 25 17:23:27 compute-0 nova_compute[254092]: 2025-11-25 17:23:27.708 254096 DEBUG nova.compute.manager [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:23:27 compute-0 nova_compute[254092]: 2025-11-25 17:23:27.772 254096 INFO nova.compute.manager [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Took 13.03 seconds to build instance.
Nov 25 17:23:27 compute-0 podman[420663]: 2025-11-25 17:23:27.790081916 +0000 UTC m=+0.078124268 container create 73773450f7fa2f9883a6f5bdeabebbefa01940be8ab1dd81f9957470a3a9b5d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 25 17:23:27 compute-0 nova_compute[254092]: 2025-11-25 17:23:27.788 254096 DEBUG oslo_concurrency.lockutils [None req-72ef2d4a-45ab-4be4-810f-60f7b52a5115 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "287c21fa-3b34-448c-ba84-c777124fbb3d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.127s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:23:27 compute-0 podman[420663]: 2025-11-25 17:23:27.744467373 +0000 UTC m=+0.032509745 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 17:23:27 compute-0 systemd[1]: Started libpod-conmon-73773450f7fa2f9883a6f5bdeabebbefa01940be8ab1dd81f9957470a3a9b5d1.scope.
Nov 25 17:23:27 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:23:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b97e2b4204e8786f8aea331eed1d7b42c5b5b099c4f387b397a3ecb178dc7bf5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 17:23:27 compute-0 podman[420663]: 2025-11-25 17:23:27.910423251 +0000 UTC m=+0.198465623 container init 73773450f7fa2f9883a6f5bdeabebbefa01940be8ab1dd81f9957470a3a9b5d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 25 17:23:27 compute-0 podman[420663]: 2025-11-25 17:23:27.925050069 +0000 UTC m=+0.213092411 container start 73773450f7fa2f9883a6f5bdeabebbefa01940be8ab1dd81f9957470a3a9b5d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 25 17:23:27 compute-0 neutron-haproxy-ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6[420678]: [NOTICE]   (420682) : New worker (420684) forked
Nov 25 17:23:27 compute-0 neutron-haproxy-ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6[420678]: [NOTICE]   (420682) : Loading success.
Nov 25 17:23:28 compute-0 ceph-mon[74985]: pgmap v2997: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:23:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2998: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Nov 25 17:23:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:23:29 compute-0 nova_compute[254092]: 2025-11-25 17:23:29.666 254096 DEBUG nova.compute.manager [req-e4307d83-7e54-4022-aec8-a2db9f7999e9 req-49d016be-6f10-4b38-a16a-b0e63a4dafb0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Received event network-vif-plugged-e7f2fe0c-53c0-4be7-8362-47c92514001c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:23:29 compute-0 nova_compute[254092]: 2025-11-25 17:23:29.667 254096 DEBUG oslo_concurrency.lockutils [req-e4307d83-7e54-4022-aec8-a2db9f7999e9 req-49d016be-6f10-4b38-a16a-b0e63a4dafb0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "287c21fa-3b34-448c-ba84-c777124fbb3d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:23:29 compute-0 nova_compute[254092]: 2025-11-25 17:23:29.667 254096 DEBUG oslo_concurrency.lockutils [req-e4307d83-7e54-4022-aec8-a2db9f7999e9 req-49d016be-6f10-4b38-a16a-b0e63a4dafb0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "287c21fa-3b34-448c-ba84-c777124fbb3d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:23:29 compute-0 nova_compute[254092]: 2025-11-25 17:23:29.668 254096 DEBUG oslo_concurrency.lockutils [req-e4307d83-7e54-4022-aec8-a2db9f7999e9 req-49d016be-6f10-4b38-a16a-b0e63a4dafb0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "287c21fa-3b34-448c-ba84-c777124fbb3d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:23:29 compute-0 nova_compute[254092]: 2025-11-25 17:23:29.668 254096 DEBUG nova.compute.manager [req-e4307d83-7e54-4022-aec8-a2db9f7999e9 req-49d016be-6f10-4b38-a16a-b0e63a4dafb0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] No waiting events found dispatching network-vif-plugged-e7f2fe0c-53c0-4be7-8362-47c92514001c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:23:29 compute-0 nova_compute[254092]: 2025-11-25 17:23:29.668 254096 WARNING nova.compute.manager [req-e4307d83-7e54-4022-aec8-a2db9f7999e9 req-49d016be-6f10-4b38-a16a-b0e63a4dafb0 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Received unexpected event network-vif-plugged-e7f2fe0c-53c0-4be7-8362-47c92514001c for instance with vm_state active and task_state None.
Nov 25 17:23:30 compute-0 nova_compute[254092]: 2025-11-25 17:23:30.391 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:30 compute-0 ceph-mon[74985]: pgmap v2998: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Nov 25 17:23:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v2999: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 17:23:31 compute-0 nova_compute[254092]: 2025-11-25 17:23:31.638 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:32 compute-0 ceph-mon[74985]: pgmap v2999: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 17:23:32 compute-0 ovn_controller[153477]: 2025-11-25T17:23:32Z|01588|binding|INFO|Releasing lport 63e3c701-69aa-46d9-a2c2-91ded4242c02 from this chassis (sb_readonly=0)
Nov 25 17:23:32 compute-0 NetworkManager[48891]: <info>  [1764091412.4913] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/660)
Nov 25 17:23:32 compute-0 nova_compute[254092]: 2025-11-25 17:23:32.489 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:32 compute-0 NetworkManager[48891]: <info>  [1764091412.4933] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/661)
Nov 25 17:23:32 compute-0 ovn_controller[153477]: 2025-11-25T17:23:32Z|01589|binding|INFO|Releasing lport 63e3c701-69aa-46d9-a2c2-91ded4242c02 from this chassis (sb_readonly=0)
Nov 25 17:23:32 compute-0 nova_compute[254092]: 2025-11-25 17:23:32.554 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:32 compute-0 nova_compute[254092]: 2025-11-25 17:23:32.565 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:33 compute-0 nova_compute[254092]: 2025-11-25 17:23:33.097 254096 DEBUG nova.compute.manager [req-7d488563-d4d7-498a-ab80-9eac7244bf87 req-345d0cf3-f8fe-4e6d-80df-16d7e962f2b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Received event network-changed-e7f2fe0c-53c0-4be7-8362-47c92514001c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:23:33 compute-0 nova_compute[254092]: 2025-11-25 17:23:33.098 254096 DEBUG nova.compute.manager [req-7d488563-d4d7-498a-ab80-9eac7244bf87 req-345d0cf3-f8fe-4e6d-80df-16d7e962f2b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Refreshing instance network info cache due to event network-changed-e7f2fe0c-53c0-4be7-8362-47c92514001c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:23:33 compute-0 nova_compute[254092]: 2025-11-25 17:23:33.099 254096 DEBUG oslo_concurrency.lockutils [req-7d488563-d4d7-498a-ab80-9eac7244bf87 req-345d0cf3-f8fe-4e6d-80df-16d7e962f2b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-287c21fa-3b34-448c-ba84-c777124fbb3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:23:33 compute-0 nova_compute[254092]: 2025-11-25 17:23:33.099 254096 DEBUG oslo_concurrency.lockutils [req-7d488563-d4d7-498a-ab80-9eac7244bf87 req-345d0cf3-f8fe-4e6d-80df-16d7e962f2b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-287c21fa-3b34-448c-ba84-c777124fbb3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:23:33 compute-0 nova_compute[254092]: 2025-11-25 17:23:33.099 254096 DEBUG nova.network.neutron [req-7d488563-d4d7-498a-ab80-9eac7244bf87 req-345d0cf3-f8fe-4e6d-80df-16d7e962f2b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Refreshing network info cache for port e7f2fe0c-53c0-4be7-8362-47c92514001c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:23:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3000: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:23:34 compute-0 ceph-mon[74985]: pgmap v3000: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:23:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:23:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3001: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:23:35 compute-0 nova_compute[254092]: 2025-11-25 17:23:35.392 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:35 compute-0 nova_compute[254092]: 2025-11-25 17:23:35.718 254096 DEBUG nova.network.neutron [req-7d488563-d4d7-498a-ab80-9eac7244bf87 req-345d0cf3-f8fe-4e6d-80df-16d7e962f2b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Updated VIF entry in instance network info cache for port e7f2fe0c-53c0-4be7-8362-47c92514001c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:23:35 compute-0 nova_compute[254092]: 2025-11-25 17:23:35.719 254096 DEBUG nova.network.neutron [req-7d488563-d4d7-498a-ab80-9eac7244bf87 req-345d0cf3-f8fe-4e6d-80df-16d7e962f2b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Updating instance_info_cache with network_info: [{"id": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "address": "fa:16:3e:41:f7:b1", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7f2fe0c-53", "ovs_interfaceid": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:23:35 compute-0 nova_compute[254092]: 2025-11-25 17:23:35.739 254096 DEBUG oslo_concurrency.lockutils [req-7d488563-d4d7-498a-ab80-9eac7244bf87 req-345d0cf3-f8fe-4e6d-80df-16d7e962f2b9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-287c21fa-3b34-448c-ba84-c777124fbb3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:23:36 compute-0 ceph-mon[74985]: pgmap v3001: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:23:36 compute-0 nova_compute[254092]: 2025-11-25 17:23:36.639 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3002: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:23:38 compute-0 ceph-mon[74985]: pgmap v3002: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:23:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3003: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:23:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:23:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:23:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:23:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:23:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:23:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:23:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:23:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:23:40
Nov 25 17:23:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:23:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:23:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.log', 'images', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'volumes', 'default.rgw.meta', 'vms', '.rgw.root', 'cephfs.cephfs.meta', 'backups']
Nov 25 17:23:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:23:40 compute-0 nova_compute[254092]: 2025-11-25 17:23:40.394 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:40 compute-0 ceph-mon[74985]: pgmap v3003: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:23:40 compute-0 ovn_controller[153477]: 2025-11-25T17:23:40Z|00190|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:41:f7:b1 10.100.0.4
Nov 25 17:23:40 compute-0 ovn_controller[153477]: 2025-11-25T17:23:40Z|00191|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:41:f7:b1 10.100.0.4
Nov 25 17:23:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:23:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:23:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:23:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:23:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:23:40 compute-0 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 17:23:40 compute-0 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.2 total, 600.0 interval
                                           Cumulative writes: 42K writes, 173K keys, 42K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s
                                           Cumulative WAL: 42K writes, 15K syncs, 2.82 writes per sync, written: 0.17 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2774 writes, 10K keys, 2774 commit groups, 1.0 writes per commit group, ingest: 12.22 MB, 0.02 MB/s
                                           Interval WAL: 2774 writes, 1128 syncs, 2.46 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 17:23:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:23:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:23:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:23:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:23:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:23:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3004: 321 pgs: 321 active+clean; 110 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 120 op/s
Nov 25 17:23:41 compute-0 nova_compute[254092]: 2025-11-25 17:23:41.642 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:42 compute-0 ceph-mon[74985]: pgmap v3004: 321 pgs: 321 active+clean; 110 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 120 op/s
Nov 25 17:23:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3005: 321 pgs: 321 active+clean; 110 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 255 KiB/s rd, 2.0 MiB/s wr, 46 op/s
Nov 25 17:23:44 compute-0 ceph-mon[74985]: pgmap v3005: 321 pgs: 321 active+clean; 110 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 255 KiB/s rd, 2.0 MiB/s wr, 46 op/s
Nov 25 17:23:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:23:44 compute-0 nova_compute[254092]: 2025-11-25 17:23:44.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:23:44 compute-0 podman[420694]: 2025-11-25 17:23:44.679181082 +0000 UTC m=+0.088463988 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 17:23:44 compute-0 podman[420695]: 2025-11-25 17:23:44.686721888 +0000 UTC m=+0.097343671 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 17:23:44 compute-0 podman[420696]: 2025-11-25 17:23:44.714419872 +0000 UTC m=+0.118988161 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 17:23:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3006: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:23:45 compute-0 nova_compute[254092]: 2025-11-25 17:23:45.398 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:45 compute-0 nova_compute[254092]: 2025-11-25 17:23:45.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:23:46 compute-0 ceph-mon[74985]: pgmap v3006: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:23:46 compute-0 nova_compute[254092]: 2025-11-25 17:23:46.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:23:46 compute-0 nova_compute[254092]: 2025-11-25 17:23:46.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:23:46 compute-0 nova_compute[254092]: 2025-11-25 17:23:46.524 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:23:46 compute-0 nova_compute[254092]: 2025-11-25 17:23:46.524 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:23:46 compute-0 nova_compute[254092]: 2025-11-25 17:23:46.525 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:23:46 compute-0 nova_compute[254092]: 2025-11-25 17:23:46.525 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:23:46 compute-0 nova_compute[254092]: 2025-11-25 17:23:46.526 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:23:46 compute-0 nova_compute[254092]: 2025-11-25 17:23:46.643 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:47 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:23:47 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3073357996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:23:47 compute-0 nova_compute[254092]: 2025-11-25 17:23:47.047 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:23:47 compute-0 nova_compute[254092]: 2025-11-25 17:23:47.138 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:23:47 compute-0 nova_compute[254092]: 2025-11-25 17:23:47.138 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:23:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3007: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:23:47 compute-0 nova_compute[254092]: 2025-11-25 17:23:47.409 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:23:47 compute-0 nova_compute[254092]: 2025-11-25 17:23:47.412 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3448MB free_disk=59.94289016723633GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:23:47 compute-0 nova_compute[254092]: 2025-11-25 17:23:47.413 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:23:47 compute-0 nova_compute[254092]: 2025-11-25 17:23:47.413 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:23:47 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3073357996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:23:47 compute-0 nova_compute[254092]: 2025-11-25 17:23:47.501 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 287c21fa-3b34-448c-ba84-c777124fbb3d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 17:23:47 compute-0 nova_compute[254092]: 2025-11-25 17:23:47.501 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:23:47 compute-0 nova_compute[254092]: 2025-11-25 17:23:47.502 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:23:47 compute-0 nova_compute[254092]: 2025-11-25 17:23:47.561 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:23:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:23:48 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/375878292' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:23:48 compute-0 nova_compute[254092]: 2025-11-25 17:23:48.056 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:23:48 compute-0 nova_compute[254092]: 2025-11-25 17:23:48.063 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:23:48 compute-0 nova_compute[254092]: 2025-11-25 17:23:48.080 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:23:48 compute-0 nova_compute[254092]: 2025-11-25 17:23:48.105 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:23:48 compute-0 nova_compute[254092]: 2025-11-25 17:23:48.106 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:23:48 compute-0 ceph-mon[74985]: pgmap v3007: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:23:48 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/375878292' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:23:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3008: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:23:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:23:50 compute-0 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 17:23:50 compute-0 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5401.2 total, 600.0 interval
                                           Cumulative writes: 44K writes, 176K keys, 44K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 44K writes, 15K syncs, 2.82 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2510 writes, 10K keys, 2510 commit groups, 1.0 writes per commit group, ingest: 12.82 MB, 0.02 MB/s
                                           Interval WAL: 2510 writes, 964 syncs, 2.60 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 17:23:50 compute-0 nova_compute[254092]: 2025-11-25 17:23:50.401 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:50 compute-0 ceph-mon[74985]: pgmap v3008: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:23:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3009: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:23:51 compute-0 nova_compute[254092]: 2025-11-25 17:23:51.645 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:23:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:23:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:23:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:23:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007589550978381194 of space, bias 1.0, pg target 0.22768652935143582 quantized to 32 (current 32)
Nov 25 17:23:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:23:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:23:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:23:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:23:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:23:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 17:23:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:23:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:23:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:23:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:23:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:23:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:23:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:23:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:23:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:23:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:23:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:23:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:23:51 compute-0 nova_compute[254092]: 2025-11-25 17:23:51.987 254096 DEBUG oslo_concurrency.lockutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "a54a9759-b1e7-4cbe-87b4-97878794f76e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:23:51 compute-0 nova_compute[254092]: 2025-11-25 17:23:51.988 254096 DEBUG oslo_concurrency.lockutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "a54a9759-b1e7-4cbe-87b4-97878794f76e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:23:52 compute-0 nova_compute[254092]: 2025-11-25 17:23:52.005 254096 DEBUG nova.compute.manager [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 17:23:52 compute-0 nova_compute[254092]: 2025-11-25 17:23:52.073 254096 DEBUG oslo_concurrency.lockutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:23:52 compute-0 nova_compute[254092]: 2025-11-25 17:23:52.074 254096 DEBUG oslo_concurrency.lockutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:23:52 compute-0 nova_compute[254092]: 2025-11-25 17:23:52.083 254096 DEBUG nova.virt.hardware [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 17:23:52 compute-0 nova_compute[254092]: 2025-11-25 17:23:52.084 254096 INFO nova.compute.claims [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Claim successful on node compute-0.ctlplane.example.com
Nov 25 17:23:52 compute-0 nova_compute[254092]: 2025-11-25 17:23:52.224 254096 DEBUG oslo_concurrency.processutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:23:52 compute-0 ceph-mon[74985]: pgmap v3009: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:23:52 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:23:52 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1121381838' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:23:52 compute-0 nova_compute[254092]: 2025-11-25 17:23:52.688 254096 DEBUG oslo_concurrency.processutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:23:52 compute-0 nova_compute[254092]: 2025-11-25 17:23:52.698 254096 DEBUG nova.compute.provider_tree [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:23:52 compute-0 nova_compute[254092]: 2025-11-25 17:23:52.718 254096 DEBUG nova.scheduler.client.report [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:23:52 compute-0 nova_compute[254092]: 2025-11-25 17:23:52.738 254096 DEBUG oslo_concurrency.lockutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:23:52 compute-0 nova_compute[254092]: 2025-11-25 17:23:52.739 254096 DEBUG nova.compute.manager [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 17:23:52 compute-0 nova_compute[254092]: 2025-11-25 17:23:52.785 254096 DEBUG nova.compute.manager [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 17:23:52 compute-0 nova_compute[254092]: 2025-11-25 17:23:52.786 254096 DEBUG nova.network.neutron [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 17:23:52 compute-0 nova_compute[254092]: 2025-11-25 17:23:52.802 254096 INFO nova.virt.libvirt.driver [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 17:23:52 compute-0 nova_compute[254092]: 2025-11-25 17:23:52.819 254096 DEBUG nova.compute.manager [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 17:23:52 compute-0 nova_compute[254092]: 2025-11-25 17:23:52.896 254096 DEBUG nova.compute.manager [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 17:23:52 compute-0 nova_compute[254092]: 2025-11-25 17:23:52.897 254096 DEBUG nova.virt.libvirt.driver [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 17:23:52 compute-0 nova_compute[254092]: 2025-11-25 17:23:52.897 254096 INFO nova.virt.libvirt.driver [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Creating image(s)
Nov 25 17:23:52 compute-0 nova_compute[254092]: 2025-11-25 17:23:52.924 254096 DEBUG nova.storage.rbd_utils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image a54a9759-b1e7-4cbe-87b4-97878794f76e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:23:52 compute-0 nova_compute[254092]: 2025-11-25 17:23:52.951 254096 DEBUG nova.storage.rbd_utils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image a54a9759-b1e7-4cbe-87b4-97878794f76e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:23:52 compute-0 nova_compute[254092]: 2025-11-25 17:23:52.976 254096 DEBUG nova.storage.rbd_utils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image a54a9759-b1e7-4cbe-87b4-97878794f76e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:23:52 compute-0 nova_compute[254092]: 2025-11-25 17:23:52.980 254096 DEBUG oslo_concurrency.processutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:23:53 compute-0 nova_compute[254092]: 2025-11-25 17:23:53.033 254096 DEBUG nova.policy [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a647bb22c9a54e5fa9ddfff588126693', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '67b7f8e8aac6441f992a182f8d893100', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 17:23:53 compute-0 nova_compute[254092]: 2025-11-25 17:23:53.086 254096 DEBUG oslo_concurrency.processutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:23:53 compute-0 nova_compute[254092]: 2025-11-25 17:23:53.088 254096 DEBUG oslo_concurrency.lockutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:23:53 compute-0 nova_compute[254092]: 2025-11-25 17:23:53.089 254096 DEBUG oslo_concurrency.lockutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:23:53 compute-0 nova_compute[254092]: 2025-11-25 17:23:53.090 254096 DEBUG oslo_concurrency.lockutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:23:53 compute-0 nova_compute[254092]: 2025-11-25 17:23:53.121 254096 DEBUG nova.storage.rbd_utils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image a54a9759-b1e7-4cbe-87b4-97878794f76e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:23:53 compute-0 nova_compute[254092]: 2025-11-25 17:23:53.125 254096 DEBUG oslo_concurrency.processutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 a54a9759-b1e7-4cbe-87b4-97878794f76e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:23:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3010: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 71 KiB/s rd, 95 KiB/s wr, 17 op/s
Nov 25 17:23:53 compute-0 nova_compute[254092]: 2025-11-25 17:23:53.166 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:23:53 compute-0 nova_compute[254092]: 2025-11-25 17:23:53.168 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:23:53 compute-0 nova_compute[254092]: 2025-11-25 17:23:53.169 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:23:53 compute-0 nova_compute[254092]: 2025-11-25 17:23:53.169 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:23:53 compute-0 nova_compute[254092]: 2025-11-25 17:23:53.425 254096 DEBUG oslo_concurrency.processutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 a54a9759-b1e7-4cbe-87b4-97878794f76e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.300s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:23:53 compute-0 nova_compute[254092]: 2025-11-25 17:23:53.502 254096 DEBUG nova.storage.rbd_utils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] resizing rbd image a54a9759-b1e7-4cbe-87b4-97878794f76e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 17:23:53 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1121381838' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:23:53 compute-0 nova_compute[254092]: 2025-11-25 17:23:53.627 254096 DEBUG nova.objects.instance [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'migration_context' on Instance uuid a54a9759-b1e7-4cbe-87b4-97878794f76e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:23:53 compute-0 nova_compute[254092]: 2025-11-25 17:23:53.640 254096 DEBUG nova.virt.libvirt.driver [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 17:23:53 compute-0 nova_compute[254092]: 2025-11-25 17:23:53.641 254096 DEBUG nova.virt.libvirt.driver [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Ensure instance console log exists: /var/lib/nova/instances/a54a9759-b1e7-4cbe-87b4-97878794f76e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 17:23:53 compute-0 nova_compute[254092]: 2025-11-25 17:23:53.641 254096 DEBUG oslo_concurrency.lockutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:23:53 compute-0 nova_compute[254092]: 2025-11-25 17:23:53.642 254096 DEBUG oslo_concurrency.lockutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:23:53 compute-0 nova_compute[254092]: 2025-11-25 17:23:53.643 254096 DEBUG oslo_concurrency.lockutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:23:54 compute-0 nova_compute[254092]: 2025-11-25 17:23:54.235 254096 DEBUG nova.network.neutron [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Successfully created port: e68f7346-b984-4fd0-9e4a-3d8c74e511fd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 17:23:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:23:54 compute-0 ceph-mon[74985]: pgmap v3010: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 71 KiB/s rd, 95 KiB/s wr, 17 op/s
Nov 25 17:23:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3011: 321 pgs: 321 active+clean; 152 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 81 KiB/s rd, 1.4 MiB/s wr, 32 op/s
Nov 25 17:23:55 compute-0 nova_compute[254092]: 2025-11-25 17:23:55.316 254096 DEBUG nova.network.neutron [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Successfully updated port: e68f7346-b984-4fd0-9e4a-3d8c74e511fd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:23:55 compute-0 nova_compute[254092]: 2025-11-25 17:23:55.329 254096 DEBUG oslo_concurrency.lockutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "refresh_cache-a54a9759-b1e7-4cbe-87b4-97878794f76e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:23:55 compute-0 nova_compute[254092]: 2025-11-25 17:23:55.329 254096 DEBUG oslo_concurrency.lockutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquired lock "refresh_cache-a54a9759-b1e7-4cbe-87b4-97878794f76e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:23:55 compute-0 nova_compute[254092]: 2025-11-25 17:23:55.330 254096 DEBUG nova.network.neutron [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 17:23:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:23:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4088015571' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:23:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:23:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4088015571' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:23:55 compute-0 nova_compute[254092]: 2025-11-25 17:23:55.403 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:55 compute-0 nova_compute[254092]: 2025-11-25 17:23:55.426 254096 DEBUG nova.compute.manager [req-563107f1-1d11-4d84-b65c-4db10e649710 req-cef2febb-f939-493a-b99f-1cd8b475085a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Received event network-changed-e68f7346-b984-4fd0-9e4a-3d8c74e511fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:23:55 compute-0 nova_compute[254092]: 2025-11-25 17:23:55.427 254096 DEBUG nova.compute.manager [req-563107f1-1d11-4d84-b65c-4db10e649710 req-cef2febb-f939-493a-b99f-1cd8b475085a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Refreshing instance network info cache due to event network-changed-e68f7346-b984-4fd0-9e4a-3d8c74e511fd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:23:55 compute-0 nova_compute[254092]: 2025-11-25 17:23:55.427 254096 DEBUG oslo_concurrency.lockutils [req-563107f1-1d11-4d84-b65c-4db10e649710 req-cef2febb-f939-493a-b99f-1cd8b475085a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-a54a9759-b1e7-4cbe-87b4-97878794f76e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:23:55 compute-0 nova_compute[254092]: 2025-11-25 17:23:55.493 254096 DEBUG nova.network.neutron [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:23:55 compute-0 nova_compute[254092]: 2025-11-25 17:23:55.499 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:23:55 compute-0 nova_compute[254092]: 2025-11-25 17:23:55.499 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:23:55 compute-0 nova_compute[254092]: 2025-11-25 17:23:55.500 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:23:55 compute-0 nova_compute[254092]: 2025-11-25 17:23:55.520 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 25 17:23:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/4088015571' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:23:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/4088015571' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:23:55 compute-0 nova_compute[254092]: 2025-11-25 17:23:55.675 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-287c21fa-3b34-448c-ba84-c777124fbb3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:23:55 compute-0 nova_compute[254092]: 2025-11-25 17:23:55.675 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-287c21fa-3b34-448c-ba84-c777124fbb3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:23:55 compute-0 nova_compute[254092]: 2025-11-25 17:23:55.676 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 17:23:55 compute-0 nova_compute[254092]: 2025-11-25 17:23:55.676 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 287c21fa-3b34-448c-ba84-c777124fbb3d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:23:56 compute-0 ceph-mon[74985]: pgmap v3011: 321 pgs: 321 active+clean; 152 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 81 KiB/s rd, 1.4 MiB/s wr, 32 op/s
Nov 25 17:23:56 compute-0 nova_compute[254092]: 2025-11-25 17:23:56.647 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3012: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:23:57 compute-0 nova_compute[254092]: 2025-11-25 17:23:57.271 254096 DEBUG nova.network.neutron [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Updating instance_info_cache with network_info: [{"id": "e68f7346-b984-4fd0-9e4a-3d8c74e511fd", "address": "fa:16:3e:b9:c1:87", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb9:c187", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb9:c187", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape68f7346-b9", "ovs_interfaceid": "e68f7346-b984-4fd0-9e4a-3d8c74e511fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:23:57 compute-0 nova_compute[254092]: 2025-11-25 17:23:57.291 254096 DEBUG oslo_concurrency.lockutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Releasing lock "refresh_cache-a54a9759-b1e7-4cbe-87b4-97878794f76e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:23:57 compute-0 nova_compute[254092]: 2025-11-25 17:23:57.292 254096 DEBUG nova.compute.manager [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Instance network_info: |[{"id": "e68f7346-b984-4fd0-9e4a-3d8c74e511fd", "address": "fa:16:3e:b9:c1:87", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb9:c187", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb9:c187", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape68f7346-b9", "ovs_interfaceid": "e68f7346-b984-4fd0-9e4a-3d8c74e511fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 17:23:57 compute-0 nova_compute[254092]: 2025-11-25 17:23:57.292 254096 DEBUG oslo_concurrency.lockutils [req-563107f1-1d11-4d84-b65c-4db10e649710 req-cef2febb-f939-493a-b99f-1cd8b475085a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-a54a9759-b1e7-4cbe-87b4-97878794f76e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:23:57 compute-0 nova_compute[254092]: 2025-11-25 17:23:57.292 254096 DEBUG nova.network.neutron [req-563107f1-1d11-4d84-b65c-4db10e649710 req-cef2febb-f939-493a-b99f-1cd8b475085a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Refreshing network info cache for port e68f7346-b984-4fd0-9e4a-3d8c74e511fd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:23:57 compute-0 nova_compute[254092]: 2025-11-25 17:23:57.295 254096 DEBUG nova.virt.libvirt.driver [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Start _get_guest_xml network_info=[{"id": "e68f7346-b984-4fd0-9e4a-3d8c74e511fd", "address": "fa:16:3e:b9:c1:87", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb9:c187", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb9:c187", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape68f7346-b9", "ovs_interfaceid": "e68f7346-b984-4fd0-9e4a-3d8c74e511fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 17:23:57 compute-0 nova_compute[254092]: 2025-11-25 17:23:57.301 254096 WARNING nova.virt.libvirt.driver [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:23:57 compute-0 nova_compute[254092]: 2025-11-25 17:23:57.306 254096 DEBUG nova.virt.libvirt.host [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 17:23:57 compute-0 nova_compute[254092]: 2025-11-25 17:23:57.306 254096 DEBUG nova.virt.libvirt.host [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 17:23:57 compute-0 nova_compute[254092]: 2025-11-25 17:23:57.315 254096 DEBUG nova.virt.libvirt.host [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 17:23:57 compute-0 nova_compute[254092]: 2025-11-25 17:23:57.316 254096 DEBUG nova.virt.libvirt.host [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 17:23:57 compute-0 nova_compute[254092]: 2025-11-25 17:23:57.317 254096 DEBUG nova.virt.libvirt.driver [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 17:23:57 compute-0 nova_compute[254092]: 2025-11-25 17:23:57.317 254096 DEBUG nova.virt.hardware [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 17:23:57 compute-0 nova_compute[254092]: 2025-11-25 17:23:57.318 254096 DEBUG nova.virt.hardware [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 17:23:57 compute-0 nova_compute[254092]: 2025-11-25 17:23:57.318 254096 DEBUG nova.virt.hardware [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 17:23:57 compute-0 nova_compute[254092]: 2025-11-25 17:23:57.318 254096 DEBUG nova.virt.hardware [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 17:23:57 compute-0 nova_compute[254092]: 2025-11-25 17:23:57.319 254096 DEBUG nova.virt.hardware [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 17:23:57 compute-0 nova_compute[254092]: 2025-11-25 17:23:57.319 254096 DEBUG nova.virt.hardware [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 17:23:57 compute-0 nova_compute[254092]: 2025-11-25 17:23:57.319 254096 DEBUG nova.virt.hardware [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 17:23:57 compute-0 nova_compute[254092]: 2025-11-25 17:23:57.319 254096 DEBUG nova.virt.hardware [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 17:23:57 compute-0 nova_compute[254092]: 2025-11-25 17:23:57.319 254096 DEBUG nova.virt.hardware [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 17:23:57 compute-0 nova_compute[254092]: 2025-11-25 17:23:57.320 254096 DEBUG nova.virt.hardware [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 17:23:57 compute-0 nova_compute[254092]: 2025-11-25 17:23:57.320 254096 DEBUG nova.virt.hardware [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 17:23:57 compute-0 nova_compute[254092]: 2025-11-25 17:23:57.324 254096 DEBUG oslo_concurrency.processutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:23:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:23:57 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2441355535' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:23:57 compute-0 nova_compute[254092]: 2025-11-25 17:23:57.801 254096 DEBUG oslo_concurrency.processutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:23:57 compute-0 nova_compute[254092]: 2025-11-25 17:23:57.836 254096 DEBUG nova.storage.rbd_utils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image a54a9759-b1e7-4cbe-87b4-97878794f76e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:23:57 compute-0 nova_compute[254092]: 2025-11-25 17:23:57.841 254096 DEBUG oslo_concurrency.processutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:23:58 compute-0 nova_compute[254092]: 2025-11-25 17:23:58.259 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Updating instance_info_cache with network_info: [{"id": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "address": "fa:16:3e:41:f7:b1", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7f2fe0c-53", "ovs_interfaceid": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": 
false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:23:58 compute-0 nova_compute[254092]: 2025-11-25 17:23:58.285 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-287c21fa-3b34-448c-ba84-c777124fbb3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:23:58 compute-0 nova_compute[254092]: 2025-11-25 17:23:58.286 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 17:23:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:23:58 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1925252480' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:23:58 compute-0 nova_compute[254092]: 2025-11-25 17:23:58.314 254096 DEBUG oslo_concurrency.processutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:23:58 compute-0 nova_compute[254092]: 2025-11-25 17:23:58.317 254096 DEBUG nova.virt.libvirt.vif [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:23:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1586289454',display_name='tempest-TestGettingAddress-server-1586289454',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1586289454',id=148,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEZ6smT8tsN0VvN7F7g1rHxOYiNqIkE8IzwqVs2aHnroQdRZ2PL6s9ZrD+ZO9sn2Dm0U7Nqx1ksvxRYiufxUI7TiN4d7vuuX9VAm+c8Urdvu83TEqeMhlMk1VfLezHQlfw==',key_name='tempest-TestGettingAddress-1206328727',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-vp2qu6ef',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:23:52Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=a54a9759-b1e7-4cbe-87b4-97878794f76e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e68f7346-b984-4fd0-9e4a-3d8c74e511fd", "address": "fa:16:3e:b9:c1:87", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb9:c187", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb9:c187", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape68f7346-b9", "ovs_interfaceid": "e68f7346-b984-4fd0-9e4a-3d8c74e511fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:23:58 compute-0 nova_compute[254092]: 2025-11-25 17:23:58.318 254096 DEBUG nova.network.os_vif_util [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "e68f7346-b984-4fd0-9e4a-3d8c74e511fd", "address": "fa:16:3e:b9:c1:87", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb9:c187", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb9:c187", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape68f7346-b9", "ovs_interfaceid": "e68f7346-b984-4fd0-9e4a-3d8c74e511fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:23:58 compute-0 nova_compute[254092]: 2025-11-25 17:23:58.320 254096 DEBUG nova.network.os_vif_util [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:c1:87,bridge_name='br-int',has_traffic_filtering=True,id=e68f7346-b984-4fd0-9e4a-3d8c74e511fd,network=Network(285f996f-d0be-4c9e-9d0c-b8d730990ab6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape68f7346-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:23:58 compute-0 nova_compute[254092]: 2025-11-25 17:23:58.322 254096 DEBUG nova.objects.instance [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'pci_devices' on Instance uuid a54a9759-b1e7-4cbe-87b4-97878794f76e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:23:58 compute-0 nova_compute[254092]: 2025-11-25 17:23:58.343 254096 DEBUG nova.virt.libvirt.driver [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] End _get_guest_xml xml=<domain type="kvm">
Nov 25 17:23:58 compute-0 nova_compute[254092]:   <uuid>a54a9759-b1e7-4cbe-87b4-97878794f76e</uuid>
Nov 25 17:23:58 compute-0 nova_compute[254092]:   <name>instance-00000094</name>
Nov 25 17:23:58 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 17:23:58 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 17:23:58 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:23:58 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:       <nova:name>tempest-TestGettingAddress-server-1586289454</nova:name>
Nov 25 17:23:58 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 17:23:57</nova:creationTime>
Nov 25 17:23:58 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 17:23:58 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 17:23:58 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 17:23:58 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 17:23:58 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:23:58 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 17:23:58 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 17:23:58 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 17:23:58 compute-0 nova_compute[254092]:         <nova:user uuid="a647bb22c9a54e5fa9ddfff588126693">tempest-TestGettingAddress-207202764-project-member</nova:user>
Nov 25 17:23:58 compute-0 nova_compute[254092]:         <nova:project uuid="67b7f8e8aac6441f992a182f8d893100">tempest-TestGettingAddress-207202764</nova:project>
Nov 25 17:23:58 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 17:23:58 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 17:23:58 compute-0 nova_compute[254092]:         <nova:port uuid="e68f7346-b984-4fd0-9e4a-3d8c74e511fd">
Nov 25 17:23:58 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:feb9:c187" ipVersion="6"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:feb9:c187" ipVersion="6"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 17:23:58 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 17:23:58 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:23:58 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 17:23:58 compute-0 nova_compute[254092]:     <system>
Nov 25 17:23:58 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 17:23:58 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 17:23:58 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:23:58 compute-0 nova_compute[254092]:       <entry name="serial">a54a9759-b1e7-4cbe-87b4-97878794f76e</entry>
Nov 25 17:23:58 compute-0 nova_compute[254092]:       <entry name="uuid">a54a9759-b1e7-4cbe-87b4-97878794f76e</entry>
Nov 25 17:23:58 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     </system>
Nov 25 17:23:58 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:23:58 compute-0 nova_compute[254092]:   <os>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:   </os>
Nov 25 17:23:58 compute-0 nova_compute[254092]:   <features>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:   </features>
Nov 25 17:23:58 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 17:23:58 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:23:58 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 17:23:58 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:23:58 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 17:23:58 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/a54a9759-b1e7-4cbe-87b4-97878794f76e_disk">
Nov 25 17:23:58 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:       </source>
Nov 25 17:23:58 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:23:58 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:23:58 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 17:23:58 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/a54a9759-b1e7-4cbe-87b4-97878794f76e_disk.config">
Nov 25 17:23:58 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:       </source>
Nov 25 17:23:58 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:23:58 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:23:58 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 17:23:58 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:b9:c1:87"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:       <target dev="tape68f7346-b9"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 17:23:58 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/a54a9759-b1e7-4cbe-87b4-97878794f76e/console.log" append="off"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     <video>
Nov 25 17:23:58 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     </video>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 17:23:58 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 17:23:58 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 17:23:58 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:23:58 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:23:58 compute-0 nova_compute[254092]: </domain>
Nov 25 17:23:58 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 17:23:58 compute-0 nova_compute[254092]: 2025-11-25 17:23:58.344 254096 DEBUG nova.compute.manager [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Preparing to wait for external event network-vif-plugged-e68f7346-b984-4fd0-9e4a-3d8c74e511fd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 17:23:58 compute-0 nova_compute[254092]: 2025-11-25 17:23:58.345 254096 DEBUG oslo_concurrency.lockutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "a54a9759-b1e7-4cbe-87b4-97878794f76e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:23:58 compute-0 nova_compute[254092]: 2025-11-25 17:23:58.345 254096 DEBUG oslo_concurrency.lockutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "a54a9759-b1e7-4cbe-87b4-97878794f76e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:23:58 compute-0 nova_compute[254092]: 2025-11-25 17:23:58.346 254096 DEBUG oslo_concurrency.lockutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "a54a9759-b1e7-4cbe-87b4-97878794f76e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:23:58 compute-0 nova_compute[254092]: 2025-11-25 17:23:58.347 254096 DEBUG nova.virt.libvirt.vif [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:23:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1586289454',display_name='tempest-TestGettingAddress-server-1586289454',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1586289454',id=148,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEZ6smT8tsN0VvN7F7g1rHxOYiNqIkE8IzwqVs2aHnroQdRZ2PL6s9ZrD+ZO9sn2Dm0U7Nqx1ksvxRYiufxUI7TiN4d7vuuX9VAm+c8Urdvu83TEqeMhlMk1VfLezHQlfw==',key_name='tempest-TestGettingAddress-1206328727',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-vp2qu6ef',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:23:52Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=a54a9759-b1e7-4cbe-87b4-97878794f76e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e68f7346-b984-4fd0-9e4a-3d8c74e511fd", "address": "fa:16:3e:b9:c1:87", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb9:c187", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb9:c187", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape68f7346-b9", "ovs_interfaceid": "e68f7346-b984-4fd0-9e4a-3d8c74e511fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:23:58 compute-0 nova_compute[254092]: 2025-11-25 17:23:58.347 254096 DEBUG nova.network.os_vif_util [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "e68f7346-b984-4fd0-9e4a-3d8c74e511fd", "address": "fa:16:3e:b9:c1:87", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb9:c187", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb9:c187", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape68f7346-b9", "ovs_interfaceid": "e68f7346-b984-4fd0-9e4a-3d8c74e511fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:23:58 compute-0 nova_compute[254092]: 2025-11-25 17:23:58.349 254096 DEBUG nova.network.os_vif_util [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:c1:87,bridge_name='br-int',has_traffic_filtering=True,id=e68f7346-b984-4fd0-9e4a-3d8c74e511fd,network=Network(285f996f-d0be-4c9e-9d0c-b8d730990ab6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape68f7346-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:23:58 compute-0 nova_compute[254092]: 2025-11-25 17:23:58.350 254096 DEBUG os_vif [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:c1:87,bridge_name='br-int',has_traffic_filtering=True,id=e68f7346-b984-4fd0-9e4a-3d8c74e511fd,network=Network(285f996f-d0be-4c9e-9d0c-b8d730990ab6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape68f7346-b9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:23:58 compute-0 nova_compute[254092]: 2025-11-25 17:23:58.351 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:58 compute-0 nova_compute[254092]: 2025-11-25 17:23:58.352 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:23:58 compute-0 nova_compute[254092]: 2025-11-25 17:23:58.353 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:23:58 compute-0 nova_compute[254092]: 2025-11-25 17:23:58.360 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:58 compute-0 nova_compute[254092]: 2025-11-25 17:23:58.361 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape68f7346-b9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:23:58 compute-0 nova_compute[254092]: 2025-11-25 17:23:58.361 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape68f7346-b9, col_values=(('external_ids', {'iface-id': 'e68f7346-b984-4fd0-9e4a-3d8c74e511fd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b9:c1:87', 'vm-uuid': 'a54a9759-b1e7-4cbe-87b4-97878794f76e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:23:58 compute-0 nova_compute[254092]: 2025-11-25 17:23:58.364 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:58 compute-0 NetworkManager[48891]: <info>  [1764091438.3650] manager: (tape68f7346-b9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/662)
Nov 25 17:23:58 compute-0 nova_compute[254092]: 2025-11-25 17:23:58.366 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:23:58 compute-0 nova_compute[254092]: 2025-11-25 17:23:58.373 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:58 compute-0 nova_compute[254092]: 2025-11-25 17:23:58.376 254096 INFO os_vif [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:c1:87,bridge_name='br-int',has_traffic_filtering=True,id=e68f7346-b984-4fd0-9e4a-3d8c74e511fd,network=Network(285f996f-d0be-4c9e-9d0c-b8d730990ab6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape68f7346-b9')
Nov 25 17:23:58 compute-0 nova_compute[254092]: 2025-11-25 17:23:58.429 254096 DEBUG nova.virt.libvirt.driver [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:23:58 compute-0 nova_compute[254092]: 2025-11-25 17:23:58.429 254096 DEBUG nova.virt.libvirt.driver [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:23:58 compute-0 nova_compute[254092]: 2025-11-25 17:23:58.429 254096 DEBUG nova.virt.libvirt.driver [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No VIF found with MAC fa:16:3e:b9:c1:87, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:23:58 compute-0 nova_compute[254092]: 2025-11-25 17:23:58.430 254096 INFO nova.virt.libvirt.driver [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Using config drive
Nov 25 17:23:58 compute-0 nova_compute[254092]: 2025-11-25 17:23:58.457 254096 DEBUG nova.storage.rbd_utils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image a54a9759-b1e7-4cbe-87b4-97878794f76e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:23:58 compute-0 ceph-mon[74985]: pgmap v3012: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:23:58 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2441355535' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:23:58 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1925252480' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:23:58 compute-0 nova_compute[254092]: 2025-11-25 17:23:58.718 254096 INFO nova.virt.libvirt.driver [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Creating config drive at /var/lib/nova/instances/a54a9759-b1e7-4cbe-87b4-97878794f76e/disk.config
Nov 25 17:23:58 compute-0 nova_compute[254092]: 2025-11-25 17:23:58.728 254096 DEBUG oslo_concurrency.processutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a54a9759-b1e7-4cbe-87b4-97878794f76e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp3r6fnge execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:23:58 compute-0 nova_compute[254092]: 2025-11-25 17:23:58.881 254096 DEBUG oslo_concurrency.processutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a54a9759-b1e7-4cbe-87b4-97878794f76e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp3r6fnge" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:23:58 compute-0 nova_compute[254092]: 2025-11-25 17:23:58.914 254096 DEBUG nova.storage.rbd_utils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image a54a9759-b1e7-4cbe-87b4-97878794f76e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:23:58 compute-0 nova_compute[254092]: 2025-11-25 17:23:58.919 254096 DEBUG oslo_concurrency.processutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a54a9759-b1e7-4cbe-87b4-97878794f76e/disk.config a54a9759-b1e7-4cbe-87b4-97878794f76e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:23:59 compute-0 nova_compute[254092]: 2025-11-25 17:23:59.112 254096 DEBUG oslo_concurrency.processutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a54a9759-b1e7-4cbe-87b4-97878794f76e/disk.config a54a9759-b1e7-4cbe-87b4-97878794f76e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.193s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:23:59 compute-0 nova_compute[254092]: 2025-11-25 17:23:59.113 254096 INFO nova.virt.libvirt.driver [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Deleting local config drive /var/lib/nova/instances/a54a9759-b1e7-4cbe-87b4-97878794f76e/disk.config because it was imported into RBD.
Nov 25 17:23:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3013: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:23:59 compute-0 kernel: tape68f7346-b9: entered promiscuous mode
Nov 25 17:23:59 compute-0 NetworkManager[48891]: <info>  [1764091439.1746] manager: (tape68f7346-b9): new Tun device (/org/freedesktop/NetworkManager/Devices/663)
Nov 25 17:23:59 compute-0 ovn_controller[153477]: 2025-11-25T17:23:59Z|01590|binding|INFO|Claiming lport e68f7346-b984-4fd0-9e4a-3d8c74e511fd for this chassis.
Nov 25 17:23:59 compute-0 ovn_controller[153477]: 2025-11-25T17:23:59Z|01591|binding|INFO|e68f7346-b984-4fd0-9e4a-3d8c74e511fd: Claiming fa:16:3e:b9:c1:87 10.100.0.6 2001:db8:0:1:f816:3eff:feb9:c187 2001:db8::f816:3eff:feb9:c187
Nov 25 17:23:59 compute-0 nova_compute[254092]: 2025-11-25 17:23:59.175 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:59 compute-0 ovn_controller[153477]: 2025-11-25T17:23:59Z|01592|binding|INFO|Setting lport e68f7346-b984-4fd0-9e4a-3d8c74e511fd ovn-installed in OVS
Nov 25 17:23:59 compute-0 ovn_controller[153477]: 2025-11-25T17:23:59Z|01593|binding|INFO|Setting lport e68f7346-b984-4fd0-9e4a-3d8c74e511fd up in Southbound
Nov 25 17:23:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:59.189 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:c1:87 10.100.0.6 2001:db8:0:1:f816:3eff:feb9:c187 2001:db8::f816:3eff:feb9:c187'], port_security=['fa:16:3e:b9:c1:87 10.100.0.6 2001:db8:0:1:f816:3eff:feb9:c187 2001:db8::f816:3eff:feb9:c187'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28 2001:db8:0:1:f816:3eff:feb9:c187/64 2001:db8::f816:3eff:feb9:c187/64', 'neutron:device_id': 'a54a9759-b1e7-4cbe-87b4-97878794f76e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-285f996f-d0be-4c9e-9d0c-b8d730990ab6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': '52a2959e-419d-400c-8c05-829f5b8d02c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9ece7728-71e8-4a85-9646-5e1c30bdfaf7, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=e68f7346-b984-4fd0-9e4a-3d8c74e511fd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:23:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:59.190 163338 INFO neutron.agent.ovn.metadata.agent [-] Port e68f7346-b984-4fd0-9e4a-3d8c74e511fd in datapath 285f996f-d0be-4c9e-9d0c-b8d730990ab6 bound to our chassis
Nov 25 17:23:59 compute-0 nova_compute[254092]: 2025-11-25 17:23:59.191 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:59.191 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 285f996f-d0be-4c9e-9d0c-b8d730990ab6
Nov 25 17:23:59 compute-0 nova_compute[254092]: 2025-11-25 17:23:59.195 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:59 compute-0 systemd-udevd[421126]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:23:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:59.211 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ba52d0f2-9ee7-4bbd-94ec-508cd0eca59c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:23:59 compute-0 systemd-machined[216343]: New machine qemu-182-instance-00000094.
Nov 25 17:23:59 compute-0 NetworkManager[48891]: <info>  [1764091439.2285] device (tape68f7346-b9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:23:59 compute-0 NetworkManager[48891]: <info>  [1764091439.2294] device (tape68f7346-b9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:23:59 compute-0 systemd[1]: Started Virtual Machine qemu-182-instance-00000094.
Nov 25 17:23:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:59.246 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f2956179-876e-4e5c-8a29-d179d5ffbbfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:23:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:59.249 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[64796761-b524-44e0-93c4-c3b3b330dd36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:23:59 compute-0 nova_compute[254092]: 2025-11-25 17:23:59.269 254096 DEBUG nova.network.neutron [req-563107f1-1d11-4d84-b65c-4db10e649710 req-cef2febb-f939-493a-b99f-1cd8b475085a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Updated VIF entry in instance network info cache for port e68f7346-b984-4fd0-9e4a-3d8c74e511fd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:23:59 compute-0 nova_compute[254092]: 2025-11-25 17:23:59.269 254096 DEBUG nova.network.neutron [req-563107f1-1d11-4d84-b65c-4db10e649710 req-cef2febb-f939-493a-b99f-1cd8b475085a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Updating instance_info_cache with network_info: [{"id": "e68f7346-b984-4fd0-9e4a-3d8c74e511fd", "address": "fa:16:3e:b9:c1:87", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb9:c187", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb9:c187", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape68f7346-b9", "ovs_interfaceid": "e68f7346-b984-4fd0-9e4a-3d8c74e511fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:23:59 compute-0 nova_compute[254092]: 2025-11-25 17:23:59.283 254096 DEBUG oslo_concurrency.lockutils [req-563107f1-1d11-4d84-b65c-4db10e649710 req-cef2febb-f939-493a-b99f-1cd8b475085a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-a54a9759-b1e7-4cbe-87b4-97878794f76e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:23:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:59.283 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[223604d4-f32f-4a51-9d6b-76fb1e4727a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:23:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:59.309 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3624e202-ed3f-4ade-b9d2-9783837d590f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap285f996f-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c7:79:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 24, 'tx_packets': 5, 'rx_bytes': 2000, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 24, 'tx_packets': 5, 'rx_bytes': 2000, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 461], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 786459, 'reachable_time': 24898, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 22, 'inoctets': 1608, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 22, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1608, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 22, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 421138, 'error': None, 'target': 'ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:23:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:59.337 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7744b8da-e396-46cc-92d2-683cb6119ddc]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap285f996f-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 786477, 'tstamp': 786477}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 421141, 'error': None, 'target': 'ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap285f996f-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 786482, 'tstamp': 786482}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 421141, 'error': None, 'target': 'ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:23:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:59.339 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap285f996f-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:23:59 compute-0 nova_compute[254092]: 2025-11-25 17:23:59.342 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:59.344 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap285f996f-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:23:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:59.344 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:23:59 compute-0 nova_compute[254092]: 2025-11-25 17:23:59.344 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:23:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:59.344 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap285f996f-d0, col_values=(('external_ids', {'iface-id': '63e3c701-69aa-46d9-a2c2-91ded4242c02'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:23:59 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:23:59.345 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:23:59 compute-0 nova_compute[254092]: 2025-11-25 17:23:59.411 254096 DEBUG nova.compute.manager [req-b017389d-9382-4107-bfcf-9f84da222231 req-b32aa0fe-a121-4cb2-87cc-594d68c27c2e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Received event network-vif-plugged-e68f7346-b984-4fd0-9e4a-3d8c74e511fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:23:59 compute-0 nova_compute[254092]: 2025-11-25 17:23:59.412 254096 DEBUG oslo_concurrency.lockutils [req-b017389d-9382-4107-bfcf-9f84da222231 req-b32aa0fe-a121-4cb2-87cc-594d68c27c2e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a54a9759-b1e7-4cbe-87b4-97878794f76e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:23:59 compute-0 nova_compute[254092]: 2025-11-25 17:23:59.412 254096 DEBUG oslo_concurrency.lockutils [req-b017389d-9382-4107-bfcf-9f84da222231 req-b32aa0fe-a121-4cb2-87cc-594d68c27c2e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a54a9759-b1e7-4cbe-87b4-97878794f76e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:23:59 compute-0 nova_compute[254092]: 2025-11-25 17:23:59.413 254096 DEBUG oslo_concurrency.lockutils [req-b017389d-9382-4107-bfcf-9f84da222231 req-b32aa0fe-a121-4cb2-87cc-594d68c27c2e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a54a9759-b1e7-4cbe-87b4-97878794f76e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:23:59 compute-0 nova_compute[254092]: 2025-11-25 17:23:59.413 254096 DEBUG nova.compute.manager [req-b017389d-9382-4107-bfcf-9f84da222231 req-b32aa0fe-a121-4cb2-87cc-594d68c27c2e a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Processing event network-vif-plugged-e68f7346-b984-4fd0-9e4a-3d8c74e511fd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 17:23:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:23:59 compute-0 nova_compute[254092]: 2025-11-25 17:23:59.603 254096 DEBUG nova.compute.manager [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 17:23:59 compute-0 nova_compute[254092]: 2025-11-25 17:23:59.604 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091439.6029212, a54a9759-b1e7-4cbe-87b4-97878794f76e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:23:59 compute-0 nova_compute[254092]: 2025-11-25 17:23:59.605 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] VM Started (Lifecycle Event)
Nov 25 17:23:59 compute-0 nova_compute[254092]: 2025-11-25 17:23:59.609 254096 DEBUG nova.virt.libvirt.driver [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 17:23:59 compute-0 nova_compute[254092]: 2025-11-25 17:23:59.612 254096 INFO nova.virt.libvirt.driver [-] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Instance spawned successfully.
Nov 25 17:23:59 compute-0 nova_compute[254092]: 2025-11-25 17:23:59.612 254096 DEBUG nova.virt.libvirt.driver [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 17:23:59 compute-0 nova_compute[254092]: 2025-11-25 17:23:59.633 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:23:59 compute-0 nova_compute[254092]: 2025-11-25 17:23:59.643 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:23:59 compute-0 nova_compute[254092]: 2025-11-25 17:23:59.650 254096 DEBUG nova.virt.libvirt.driver [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:23:59 compute-0 nova_compute[254092]: 2025-11-25 17:23:59.651 254096 DEBUG nova.virt.libvirt.driver [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:23:59 compute-0 nova_compute[254092]: 2025-11-25 17:23:59.652 254096 DEBUG nova.virt.libvirt.driver [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:23:59 compute-0 nova_compute[254092]: 2025-11-25 17:23:59.653 254096 DEBUG nova.virt.libvirt.driver [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:23:59 compute-0 nova_compute[254092]: 2025-11-25 17:23:59.654 254096 DEBUG nova.virt.libvirt.driver [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:23:59 compute-0 nova_compute[254092]: 2025-11-25 17:23:59.655 254096 DEBUG nova.virt.libvirt.driver [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:23:59 compute-0 nova_compute[254092]: 2025-11-25 17:23:59.663 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:23:59 compute-0 nova_compute[254092]: 2025-11-25 17:23:59.664 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091439.6032248, a54a9759-b1e7-4cbe-87b4-97878794f76e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:23:59 compute-0 nova_compute[254092]: 2025-11-25 17:23:59.664 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] VM Paused (Lifecycle Event)
Nov 25 17:23:59 compute-0 nova_compute[254092]: 2025-11-25 17:23:59.701 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:23:59 compute-0 nova_compute[254092]: 2025-11-25 17:23:59.706 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091439.6092396, a54a9759-b1e7-4cbe-87b4-97878794f76e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:23:59 compute-0 nova_compute[254092]: 2025-11-25 17:23:59.706 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] VM Resumed (Lifecycle Event)
Nov 25 17:23:59 compute-0 nova_compute[254092]: 2025-11-25 17:23:59.731 254096 INFO nova.compute.manager [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Took 6.83 seconds to spawn the instance on the hypervisor.
Nov 25 17:23:59 compute-0 nova_compute[254092]: 2025-11-25 17:23:59.731 254096 DEBUG nova.compute.manager [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:23:59 compute-0 nova_compute[254092]: 2025-11-25 17:23:59.733 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:23:59 compute-0 nova_compute[254092]: 2025-11-25 17:23:59.740 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:23:59 compute-0 nova_compute[254092]: 2025-11-25 17:23:59.777 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:23:59 compute-0 nova_compute[254092]: 2025-11-25 17:23:59.799 254096 INFO nova.compute.manager [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Took 7.74 seconds to build instance.
Nov 25 17:23:59 compute-0 nova_compute[254092]: 2025-11-25 17:23:59.812 254096 DEBUG oslo_concurrency.lockutils [None req-aa9cdac5-5524-4326-8d4d-65ed6fb78a3c a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "a54a9759-b1e7-4cbe-87b4-97878794f76e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.824s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:24:00 compute-0 nova_compute[254092]: 2025-11-25 17:24:00.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:24:00 compute-0 nova_compute[254092]: 2025-11-25 17:24:00.531 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:24:00 compute-0 ceph-mon[74985]: pgmap v3013: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:24:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3014: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 567 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Nov 25 17:24:01 compute-0 nova_compute[254092]: 2025-11-25 17:24:01.480 254096 DEBUG nova.compute.manager [req-f04a7fe0-a23b-4761-9373-a8ceca7ab2d4 req-8384ed47-2d85-410f-b13a-22a44de6d067 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Received event network-vif-plugged-e68f7346-b984-4fd0-9e4a-3d8c74e511fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:24:01 compute-0 nova_compute[254092]: 2025-11-25 17:24:01.480 254096 DEBUG oslo_concurrency.lockutils [req-f04a7fe0-a23b-4761-9373-a8ceca7ab2d4 req-8384ed47-2d85-410f-b13a-22a44de6d067 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a54a9759-b1e7-4cbe-87b4-97878794f76e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:24:01 compute-0 nova_compute[254092]: 2025-11-25 17:24:01.481 254096 DEBUG oslo_concurrency.lockutils [req-f04a7fe0-a23b-4761-9373-a8ceca7ab2d4 req-8384ed47-2d85-410f-b13a-22a44de6d067 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a54a9759-b1e7-4cbe-87b4-97878794f76e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:24:01 compute-0 nova_compute[254092]: 2025-11-25 17:24:01.481 254096 DEBUG oslo_concurrency.lockutils [req-f04a7fe0-a23b-4761-9373-a8ceca7ab2d4 req-8384ed47-2d85-410f-b13a-22a44de6d067 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a54a9759-b1e7-4cbe-87b4-97878794f76e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:24:01 compute-0 nova_compute[254092]: 2025-11-25 17:24:01.481 254096 DEBUG nova.compute.manager [req-f04a7fe0-a23b-4761-9373-a8ceca7ab2d4 req-8384ed47-2d85-410f-b13a-22a44de6d067 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] No waiting events found dispatching network-vif-plugged-e68f7346-b984-4fd0-9e4a-3d8c74e511fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:24:01 compute-0 nova_compute[254092]: 2025-11-25 17:24:01.481 254096 WARNING nova.compute.manager [req-f04a7fe0-a23b-4761-9373-a8ceca7ab2d4 req-8384ed47-2d85-410f-b13a-22a44de6d067 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Received unexpected event network-vif-plugged-e68f7346-b984-4fd0-9e4a-3d8c74e511fd for instance with vm_state active and task_state None.
Nov 25 17:24:01 compute-0 nova_compute[254092]: 2025-11-25 17:24:01.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:24:01 compute-0 nova_compute[254092]: 2025-11-25 17:24:01.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 17:24:01 compute-0 nova_compute[254092]: 2025-11-25 17:24:01.511 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 17:24:01 compute-0 ceph-mon[74985]: pgmap v3014: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 567 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Nov 25 17:24:01 compute-0 nova_compute[254092]: 2025-11-25 17:24:01.649 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:24:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3015: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 566 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Nov 25 17:24:03 compute-0 nova_compute[254092]: 2025-11-25 17:24:03.364 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:24:04 compute-0 ceph-mon[74985]: pgmap v3015: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 566 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Nov 25 17:24:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:24:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3016: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 17:24:06 compute-0 ceph-mon[74985]: pgmap v3016: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 17:24:06 compute-0 nova_compute[254092]: 2025-11-25 17:24:06.355 254096 DEBUG nova.compute.manager [req-1bf77e10-967a-4ac0-9955-cd3ca427a1dc req-b0da9e00-932f-448d-9564-abfc68cb4ea6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Received event network-changed-e68f7346-b984-4fd0-9e4a-3d8c74e511fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:24:06 compute-0 nova_compute[254092]: 2025-11-25 17:24:06.355 254096 DEBUG nova.compute.manager [req-1bf77e10-967a-4ac0-9955-cd3ca427a1dc req-b0da9e00-932f-448d-9564-abfc68cb4ea6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Refreshing instance network info cache due to event network-changed-e68f7346-b984-4fd0-9e4a-3d8c74e511fd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:24:06 compute-0 nova_compute[254092]: 2025-11-25 17:24:06.356 254096 DEBUG oslo_concurrency.lockutils [req-1bf77e10-967a-4ac0-9955-cd3ca427a1dc req-b0da9e00-932f-448d-9564-abfc68cb4ea6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-a54a9759-b1e7-4cbe-87b4-97878794f76e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:24:06 compute-0 nova_compute[254092]: 2025-11-25 17:24:06.356 254096 DEBUG oslo_concurrency.lockutils [req-1bf77e10-967a-4ac0-9955-cd3ca427a1dc req-b0da9e00-932f-448d-9564-abfc68cb4ea6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-a54a9759-b1e7-4cbe-87b4-97878794f76e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:24:06 compute-0 nova_compute[254092]: 2025-11-25 17:24:06.357 254096 DEBUG nova.network.neutron [req-1bf77e10-967a-4ac0-9955-cd3ca427a1dc req-b0da9e00-932f-448d-9564-abfc68cb4ea6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Refreshing network info cache for port e68f7346-b984-4fd0-9e4a-3d8c74e511fd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:24:06 compute-0 nova_compute[254092]: 2025-11-25 17:24:06.652 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:24:06 compute-0 sudo[421184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:24:06 compute-0 sudo[421184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:24:06 compute-0 sudo[421184]: pam_unix(sudo:session): session closed for user root
Nov 25 17:24:06 compute-0 sudo[421209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:24:06 compute-0 sudo[421209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:24:06 compute-0 sudo[421209]: pam_unix(sudo:session): session closed for user root
Nov 25 17:24:07 compute-0 sudo[421234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:24:07 compute-0 sudo[421234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:24:07 compute-0 sudo[421234]: pam_unix(sudo:session): session closed for user root
Nov 25 17:24:07 compute-0 sudo[421259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:24:07 compute-0 sudo[421259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:24:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3017: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 476 KiB/s wr, 86 op/s
Nov 25 17:24:07 compute-0 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 17:24:07 compute-0 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5401.5 total, 600.0 interval
                                           Cumulative writes: 37K writes, 149K keys, 37K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s
                                           Cumulative WAL: 37K writes, 13K syncs, 2.80 writes per sync, written: 0.15 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2136 writes, 8742 keys, 2136 commit groups, 1.0 writes per commit group, ingest: 10.69 MB, 0.02 MB/s
                                           Interval WAL: 2136 writes, 831 syncs, 2.57 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 17:24:07 compute-0 sudo[421259]: pam_unix(sudo:session): session closed for user root
Nov 25 17:24:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:24:07 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:24:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:24:07 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:24:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:24:07 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:24:07 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev d5524e19-2d43-409a-9094-82f4123f7ca9 does not exist
Nov 25 17:24:07 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev f8fec3ff-4c02-4f7b-901c-fc5a8ae6487b does not exist
Nov 25 17:24:07 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 11a9a55c-5b04-494d-9dc3-b494320b8eb4 does not exist
Nov 25 17:24:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:24:07 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:24:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:24:07 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:24:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:24:07 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:24:07 compute-0 sudo[421313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:24:07 compute-0 sudo[421313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:24:07 compute-0 sudo[421313]: pam_unix(sudo:session): session closed for user root
Nov 25 17:24:07 compute-0 sudo[421338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:24:07 compute-0 sudo[421338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:24:07 compute-0 sudo[421338]: pam_unix(sudo:session): session closed for user root
Nov 25 17:24:07 compute-0 sudo[421363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:24:07 compute-0 sudo[421363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:24:07 compute-0 sudo[421363]: pam_unix(sudo:session): session closed for user root
Nov 25 17:24:08 compute-0 sudo[421388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:24:08 compute-0 sudo[421388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:24:08 compute-0 ceph-mon[74985]: pgmap v3017: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 476 KiB/s wr, 86 op/s
Nov 25 17:24:08 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:24:08 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:24:08 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:24:08 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:24:08 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:24:08 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:24:08 compute-0 nova_compute[254092]: 2025-11-25 17:24:08.367 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:24:08 compute-0 nova_compute[254092]: 2025-11-25 17:24:08.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:24:08 compute-0 nova_compute[254092]: 2025-11-25 17:24:08.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 17:24:08 compute-0 podman[421451]: 2025-11-25 17:24:08.583314053 +0000 UTC m=+0.054448883 container create 536528a6a72e508137b66b80e05837c659d9a93aece8eef830bbba3bba72f72d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_proskuriakova, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 17:24:08 compute-0 systemd[1]: Started libpod-conmon-536528a6a72e508137b66b80e05837c659d9a93aece8eef830bbba3bba72f72d.scope.
Nov 25 17:24:08 compute-0 podman[421451]: 2025-11-25 17:24:08.559866675 +0000 UTC m=+0.031001515 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:24:08 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:24:08 compute-0 podman[421451]: 2025-11-25 17:24:08.692784043 +0000 UTC m=+0.163918913 container init 536528a6a72e508137b66b80e05837c659d9a93aece8eef830bbba3bba72f72d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:24:08 compute-0 podman[421451]: 2025-11-25 17:24:08.703107674 +0000 UTC m=+0.174242504 container start 536528a6a72e508137b66b80e05837c659d9a93aece8eef830bbba3bba72f72d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_proskuriakova, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 17:24:08 compute-0 podman[421451]: 2025-11-25 17:24:08.707624017 +0000 UTC m=+0.178758867 container attach 536528a6a72e508137b66b80e05837c659d9a93aece8eef830bbba3bba72f72d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_proskuriakova, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 17:24:08 compute-0 festive_proskuriakova[421467]: 167 167
Nov 25 17:24:08 compute-0 systemd[1]: libpod-536528a6a72e508137b66b80e05837c659d9a93aece8eef830bbba3bba72f72d.scope: Deactivated successfully.
Nov 25 17:24:08 compute-0 conmon[421467]: conmon 536528a6a72e508137b6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-536528a6a72e508137b66b80e05837c659d9a93aece8eef830bbba3bba72f72d.scope/container/memory.events
Nov 25 17:24:08 compute-0 podman[421451]: 2025-11-25 17:24:08.713762594 +0000 UTC m=+0.184897454 container died 536528a6a72e508137b66b80e05837c659d9a93aece8eef830bbba3bba72f72d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:24:08 compute-0 nova_compute[254092]: 2025-11-25 17:24:08.739 254096 DEBUG nova.network.neutron [req-1bf77e10-967a-4ac0-9955-cd3ca427a1dc req-b0da9e00-932f-448d-9564-abfc68cb4ea6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Updated VIF entry in instance network info cache for port e68f7346-b984-4fd0-9e4a-3d8c74e511fd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:24:08 compute-0 nova_compute[254092]: 2025-11-25 17:24:08.741 254096 DEBUG nova.network.neutron [req-1bf77e10-967a-4ac0-9955-cd3ca427a1dc req-b0da9e00-932f-448d-9564-abfc68cb4ea6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Updating instance_info_cache with network_info: [{"id": "e68f7346-b984-4fd0-9e4a-3d8c74e511fd", "address": "fa:16:3e:b9:c1:87", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb9:c187", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb9:c187", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape68f7346-b9", "ovs_interfaceid": "e68f7346-b984-4fd0-9e4a-3d8c74e511fd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:24:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e223e0d48bbb5959f97c5f005fe09c02503d2d1afa71671b6ae6a57b62bbf34-merged.mount: Deactivated successfully.
Nov 25 17:24:08 compute-0 podman[421451]: 2025-11-25 17:24:08.768495273 +0000 UTC m=+0.239630133 container remove 536528a6a72e508137b66b80e05837c659d9a93aece8eef830bbba3bba72f72d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_proskuriakova, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:24:08 compute-0 nova_compute[254092]: 2025-11-25 17:24:08.776 254096 DEBUG oslo_concurrency.lockutils [req-1bf77e10-967a-4ac0-9955-cd3ca427a1dc req-b0da9e00-932f-448d-9564-abfc68cb4ea6 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-a54a9759-b1e7-4cbe-87b4-97878794f76e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:24:08 compute-0 systemd[1]: libpod-conmon-536528a6a72e508137b66b80e05837c659d9a93aece8eef830bbba3bba72f72d.scope: Deactivated successfully.
Nov 25 17:24:08 compute-0 podman[421490]: 2025-11-25 17:24:08.9976185 +0000 UTC m=+0.051104092 container create cb2891d75555bbdd0871028bbe0dc9f54f83bb80f1a2f66a48d71dda9ec2c385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:24:09 compute-0 systemd[1]: Started libpod-conmon-cb2891d75555bbdd0871028bbe0dc9f54f83bb80f1a2f66a48d71dda9ec2c385.scope.
Nov 25 17:24:09 compute-0 podman[421490]: 2025-11-25 17:24:08.97998395 +0000 UTC m=+0.033469562 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:24:09 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:24:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d3dde66906270e32c934a5944e65c4ba1f7e748ab9fd4103db209d779c8c367/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:24:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d3dde66906270e32c934a5944e65c4ba1f7e748ab9fd4103db209d779c8c367/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:24:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d3dde66906270e32c934a5944e65c4ba1f7e748ab9fd4103db209d779c8c367/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:24:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d3dde66906270e32c934a5944e65c4ba1f7e748ab9fd4103db209d779c8c367/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:24:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d3dde66906270e32c934a5944e65c4ba1f7e748ab9fd4103db209d779c8c367/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:24:09 compute-0 podman[421490]: 2025-11-25 17:24:09.127243079 +0000 UTC m=+0.180728771 container init cb2891d75555bbdd0871028bbe0dc9f54f83bb80f1a2f66a48d71dda9ec2c385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hofstadter, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:24:09 compute-0 podman[421490]: 2025-11-25 17:24:09.137931969 +0000 UTC m=+0.191417561 container start cb2891d75555bbdd0871028bbe0dc9f54f83bb80f1a2f66a48d71dda9ec2c385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:24:09 compute-0 podman[421490]: 2025-11-25 17:24:09.143567303 +0000 UTC m=+0.197052905 container attach cb2891d75555bbdd0871028bbe0dc9f54f83bb80f1a2f66a48d71dda9ec2c385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hofstadter, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:24:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3018: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:24:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:24:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:24:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:24:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:24:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:24:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:24:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:24:10 compute-0 ceph-mon[74985]: pgmap v3018: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:24:10 compute-0 musing_hofstadter[421506]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:24:10 compute-0 musing_hofstadter[421506]: --> relative data size: 1.0
Nov 25 17:24:10 compute-0 musing_hofstadter[421506]: --> All data devices are unavailable
Nov 25 17:24:10 compute-0 systemd[1]: libpod-cb2891d75555bbdd0871028bbe0dc9f54f83bb80f1a2f66a48d71dda9ec2c385.scope: Deactivated successfully.
Nov 25 17:24:10 compute-0 podman[421490]: 2025-11-25 17:24:10.419517163 +0000 UTC m=+1.473002785 container died cb2891d75555bbdd0871028bbe0dc9f54f83bb80f1a2f66a48d71dda9ec2c385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:24:10 compute-0 systemd[1]: libpod-cb2891d75555bbdd0871028bbe0dc9f54f83bb80f1a2f66a48d71dda9ec2c385.scope: Consumed 1.209s CPU time.
Nov 25 17:24:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d3dde66906270e32c934a5944e65c4ba1f7e748ab9fd4103db209d779c8c367-merged.mount: Deactivated successfully.
Nov 25 17:24:10 compute-0 podman[421490]: 2025-11-25 17:24:10.492364696 +0000 UTC m=+1.545850288 container remove cb2891d75555bbdd0871028bbe0dc9f54f83bb80f1a2f66a48d71dda9ec2c385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hofstadter, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:24:10 compute-0 nova_compute[254092]: 2025-11-25 17:24:10.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:24:10 compute-0 systemd[1]: libpod-conmon-cb2891d75555bbdd0871028bbe0dc9f54f83bb80f1a2f66a48d71dda9ec2c385.scope: Deactivated successfully.
Nov 25 17:24:10 compute-0 sudo[421388]: pam_unix(sudo:session): session closed for user root
Nov 25 17:24:10 compute-0 sudo[421547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:24:10 compute-0 sudo[421547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:24:10 compute-0 sudo[421547]: pam_unix(sudo:session): session closed for user root
Nov 25 17:24:10 compute-0 sudo[421572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:24:10 compute-0 sudo[421572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:24:10 compute-0 sudo[421572]: pam_unix(sudo:session): session closed for user root
Nov 25 17:24:10 compute-0 sudo[421597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:24:10 compute-0 sudo[421597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:24:10 compute-0 sudo[421597]: pam_unix(sudo:session): session closed for user root
Nov 25 17:24:10 compute-0 sudo[421622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:24:10 compute-0 sudo[421622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:24:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3019: 321 pgs: 321 active+clean; 168 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 335 KiB/s wr, 79 op/s
Nov 25 17:24:11 compute-0 podman[421685]: 2025-11-25 17:24:11.233946042 +0000 UTC m=+0.058005260 container create cce81e6b8a7efdcb12cf3b835e41ef1ee7ad29b137c316e3a40568c2e29356b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 17:24:11 compute-0 systemd[1]: Started libpod-conmon-cce81e6b8a7efdcb12cf3b835e41ef1ee7ad29b137c316e3a40568c2e29356b0.scope.
Nov 25 17:24:11 compute-0 podman[421685]: 2025-11-25 17:24:11.207949043 +0000 UTC m=+0.032008281 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:24:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:24:11 compute-0 podman[421685]: 2025-11-25 17:24:11.324119496 +0000 UTC m=+0.148178714 container init cce81e6b8a7efdcb12cf3b835e41ef1ee7ad29b137c316e3a40568c2e29356b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jackson, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 17:24:11 compute-0 podman[421685]: 2025-11-25 17:24:11.333410959 +0000 UTC m=+0.157470177 container start cce81e6b8a7efdcb12cf3b835e41ef1ee7ad29b137c316e3a40568c2e29356b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:24:11 compute-0 podman[421685]: 2025-11-25 17:24:11.337082988 +0000 UTC m=+0.161142206 container attach cce81e6b8a7efdcb12cf3b835e41ef1ee7ad29b137c316e3a40568c2e29356b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jackson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:24:11 compute-0 charming_jackson[421701]: 167 167
Nov 25 17:24:11 compute-0 systemd[1]: libpod-cce81e6b8a7efdcb12cf3b835e41ef1ee7ad29b137c316e3a40568c2e29356b0.scope: Deactivated successfully.
Nov 25 17:24:11 compute-0 podman[421685]: 2025-11-25 17:24:11.34042781 +0000 UTC m=+0.164487058 container died cce81e6b8a7efdcb12cf3b835e41ef1ee7ad29b137c316e3a40568c2e29356b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jackson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:24:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-ecca809090520be2f9f4f49afabd370a85ca8080d88bd7a6c4284ea4c8757319-merged.mount: Deactivated successfully.
Nov 25 17:24:11 compute-0 podman[421685]: 2025-11-25 17:24:11.383309517 +0000 UTC m=+0.207368755 container remove cce81e6b8a7efdcb12cf3b835e41ef1ee7ad29b137c316e3a40568c2e29356b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:24:11 compute-0 systemd[1]: libpod-conmon-cce81e6b8a7efdcb12cf3b835e41ef1ee7ad29b137c316e3a40568c2e29356b0.scope: Deactivated successfully.
Nov 25 17:24:11 compute-0 podman[421725]: 2025-11-25 17:24:11.60311269 +0000 UTC m=+0.067020306 container create d3fcbe4a6e3f0aee5e93772305451fc24eb6db516e9b1edfd2e502f0094627d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_swanson, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:24:11 compute-0 nova_compute[254092]: 2025-11-25 17:24:11.655 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:24:11 compute-0 systemd[1]: Started libpod-conmon-d3fcbe4a6e3f0aee5e93772305451fc24eb6db516e9b1edfd2e502f0094627d4.scope.
Nov 25 17:24:11 compute-0 podman[421725]: 2025-11-25 17:24:11.575556289 +0000 UTC m=+0.039463895 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:24:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:24:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd8906731c61cf45b5d146c6dc461fe162d6b43ec152459826875024ffe1efd3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:24:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd8906731c61cf45b5d146c6dc461fe162d6b43ec152459826875024ffe1efd3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:24:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd8906731c61cf45b5d146c6dc461fe162d6b43ec152459826875024ffe1efd3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:24:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd8906731c61cf45b5d146c6dc461fe162d6b43ec152459826875024ffe1efd3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:24:11 compute-0 podman[421725]: 2025-11-25 17:24:11.729017677 +0000 UTC m=+0.192925293 container init d3fcbe4a6e3f0aee5e93772305451fc24eb6db516e9b1edfd2e502f0094627d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 17:24:11 compute-0 podman[421725]: 2025-11-25 17:24:11.74124957 +0000 UTC m=+0.205157196 container start d3fcbe4a6e3f0aee5e93772305451fc24eb6db516e9b1edfd2e502f0094627d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_swanson, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Nov 25 17:24:11 compute-0 podman[421725]: 2025-11-25 17:24:11.745791103 +0000 UTC m=+0.209698719 container attach d3fcbe4a6e3f0aee5e93772305451fc24eb6db516e9b1edfd2e502f0094627d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_swanson, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 17:24:11 compute-0 ovn_controller[153477]: 2025-11-25T17:24:11Z|00192|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b9:c1:87 10.100.0.6
Nov 25 17:24:11 compute-0 ovn_controller[153477]: 2025-11-25T17:24:11Z|00193|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b9:c1:87 10.100.0.6
Nov 25 17:24:12 compute-0 ceph-mon[74985]: pgmap v3019: 321 pgs: 321 active+clean; 168 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 335 KiB/s wr, 79 op/s
Nov 25 17:24:12 compute-0 boring_swanson[421741]: {
Nov 25 17:24:12 compute-0 boring_swanson[421741]:     "0": [
Nov 25 17:24:12 compute-0 boring_swanson[421741]:         {
Nov 25 17:24:12 compute-0 boring_swanson[421741]:             "devices": [
Nov 25 17:24:12 compute-0 boring_swanson[421741]:                 "/dev/loop3"
Nov 25 17:24:12 compute-0 boring_swanson[421741]:             ],
Nov 25 17:24:12 compute-0 boring_swanson[421741]:             "lv_name": "ceph_lv0",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:             "lv_size": "21470642176",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:             "name": "ceph_lv0",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:             "tags": {
Nov 25 17:24:12 compute-0 boring_swanson[421741]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:                 "ceph.cluster_name": "ceph",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:                 "ceph.crush_device_class": "",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:                 "ceph.encrypted": "0",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:                 "ceph.osd_id": "0",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:                 "ceph.type": "block",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:                 "ceph.vdo": "0"
Nov 25 17:24:12 compute-0 boring_swanson[421741]:             },
Nov 25 17:24:12 compute-0 boring_swanson[421741]:             "type": "block",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:             "vg_name": "ceph_vg0"
Nov 25 17:24:12 compute-0 boring_swanson[421741]:         }
Nov 25 17:24:12 compute-0 boring_swanson[421741]:     ],
Nov 25 17:24:12 compute-0 boring_swanson[421741]:     "1": [
Nov 25 17:24:12 compute-0 boring_swanson[421741]:         {
Nov 25 17:24:12 compute-0 boring_swanson[421741]:             "devices": [
Nov 25 17:24:12 compute-0 boring_swanson[421741]:                 "/dev/loop4"
Nov 25 17:24:12 compute-0 boring_swanson[421741]:             ],
Nov 25 17:24:12 compute-0 boring_swanson[421741]:             "lv_name": "ceph_lv1",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:             "lv_size": "21470642176",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:             "name": "ceph_lv1",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:             "tags": {
Nov 25 17:24:12 compute-0 boring_swanson[421741]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:                 "ceph.cluster_name": "ceph",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:                 "ceph.crush_device_class": "",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:                 "ceph.encrypted": "0",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:                 "ceph.osd_id": "1",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:                 "ceph.type": "block",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:                 "ceph.vdo": "0"
Nov 25 17:24:12 compute-0 boring_swanson[421741]:             },
Nov 25 17:24:12 compute-0 boring_swanson[421741]:             "type": "block",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:             "vg_name": "ceph_vg1"
Nov 25 17:24:12 compute-0 boring_swanson[421741]:         }
Nov 25 17:24:12 compute-0 boring_swanson[421741]:     ],
Nov 25 17:24:12 compute-0 boring_swanson[421741]:     "2": [
Nov 25 17:24:12 compute-0 boring_swanson[421741]:         {
Nov 25 17:24:12 compute-0 boring_swanson[421741]:             "devices": [
Nov 25 17:24:12 compute-0 boring_swanson[421741]:                 "/dev/loop5"
Nov 25 17:24:12 compute-0 boring_swanson[421741]:             ],
Nov 25 17:24:12 compute-0 boring_swanson[421741]:             "lv_name": "ceph_lv2",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:             "lv_size": "21470642176",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:             "name": "ceph_lv2",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:             "tags": {
Nov 25 17:24:12 compute-0 boring_swanson[421741]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:                 "ceph.cluster_name": "ceph",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:                 "ceph.crush_device_class": "",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:                 "ceph.encrypted": "0",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:                 "ceph.osd_id": "2",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:                 "ceph.type": "block",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:                 "ceph.vdo": "0"
Nov 25 17:24:12 compute-0 boring_swanson[421741]:             },
Nov 25 17:24:12 compute-0 boring_swanson[421741]:             "type": "block",
Nov 25 17:24:12 compute-0 boring_swanson[421741]:             "vg_name": "ceph_vg2"
Nov 25 17:24:12 compute-0 boring_swanson[421741]:         }
Nov 25 17:24:12 compute-0 boring_swanson[421741]:     ]
Nov 25 17:24:12 compute-0 boring_swanson[421741]: }
Nov 25 17:24:12 compute-0 systemd[1]: libpod-d3fcbe4a6e3f0aee5e93772305451fc24eb6db516e9b1edfd2e502f0094627d4.scope: Deactivated successfully.
Nov 25 17:24:12 compute-0 podman[421725]: 2025-11-25 17:24:12.596572481 +0000 UTC m=+1.060480077 container died d3fcbe4a6e3f0aee5e93772305451fc24eb6db516e9b1edfd2e502f0094627d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_swanson, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:24:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd8906731c61cf45b5d146c6dc461fe162d6b43ec152459826875024ffe1efd3-merged.mount: Deactivated successfully.
Nov 25 17:24:12 compute-0 podman[421725]: 2025-11-25 17:24:12.663065241 +0000 UTC m=+1.126972827 container remove d3fcbe4a6e3f0aee5e93772305451fc24eb6db516e9b1edfd2e502f0094627d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_swanson, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 17:24:12 compute-0 systemd[1]: libpod-conmon-d3fcbe4a6e3f0aee5e93772305451fc24eb6db516e9b1edfd2e502f0094627d4.scope: Deactivated successfully.
Nov 25 17:24:12 compute-0 sudo[421622]: pam_unix(sudo:session): session closed for user root
Nov 25 17:24:12 compute-0 sudo[421762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:24:12 compute-0 sudo[421762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:24:12 compute-0 sudo[421762]: pam_unix(sudo:session): session closed for user root
Nov 25 17:24:12 compute-0 sudo[421787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:24:12 compute-0 sudo[421787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:24:12 compute-0 sudo[421787]: pam_unix(sudo:session): session closed for user root
Nov 25 17:24:12 compute-0 sudo[421812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:24:12 compute-0 sudo[421812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:24:12 compute-0 sudo[421812]: pam_unix(sudo:session): session closed for user root
Nov 25 17:24:13 compute-0 sudo[421837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:24:13 compute-0 sudo[421837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:24:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3020: 321 pgs: 321 active+clean; 168 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 323 KiB/s wr, 50 op/s
Nov 25 17:24:13 compute-0 nova_compute[254092]: 2025-11-25 17:24:13.370 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:24:13 compute-0 podman[421903]: 2025-11-25 17:24:13.521589449 +0000 UTC m=+0.065611586 container create d1ca9b6b0785ae0ae07a0303c1d531aec782282486d403eb030929f430c8950d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 17:24:13 compute-0 systemd[1]: Started libpod-conmon-d1ca9b6b0785ae0ae07a0303c1d531aec782282486d403eb030929f430c8950d.scope.
Nov 25 17:24:13 compute-0 podman[421903]: 2025-11-25 17:24:13.49039544 +0000 UTC m=+0.034417627 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:24:13 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:24:13 compute-0 podman[421903]: 2025-11-25 17:24:13.638591924 +0000 UTC m=+0.182614111 container init d1ca9b6b0785ae0ae07a0303c1d531aec782282486d403eb030929f430c8950d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_almeida, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 17:24:13 compute-0 podman[421903]: 2025-11-25 17:24:13.649196823 +0000 UTC m=+0.193218930 container start d1ca9b6b0785ae0ae07a0303c1d531aec782282486d403eb030929f430c8950d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:24:13 compute-0 podman[421903]: 2025-11-25 17:24:13.653436038 +0000 UTC m=+0.197458175 container attach d1ca9b6b0785ae0ae07a0303c1d531aec782282486d403eb030929f430c8950d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 17:24:13 compute-0 hardcore_almeida[421919]: 167 167
Nov 25 17:24:13 compute-0 systemd[1]: libpod-d1ca9b6b0785ae0ae07a0303c1d531aec782282486d403eb030929f430c8950d.scope: Deactivated successfully.
Nov 25 17:24:13 compute-0 podman[421903]: 2025-11-25 17:24:13.658862256 +0000 UTC m=+0.202884403 container died d1ca9b6b0785ae0ae07a0303c1d531aec782282486d403eb030929f430c8950d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_almeida, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 17:24:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:24:13.661 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:24:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:24:13.664 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:24:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:24:13.666 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:24:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-e35157267a79c246699a0d7e67b39f02fbee09a0c444ab656233678a34cebf9a-merged.mount: Deactivated successfully.
Nov 25 17:24:13 compute-0 podman[421903]: 2025-11-25 17:24:13.703906332 +0000 UTC m=+0.247928449 container remove d1ca9b6b0785ae0ae07a0303c1d531aec782282486d403eb030929f430c8950d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 17:24:13 compute-0 systemd[1]: libpod-conmon-d1ca9b6b0785ae0ae07a0303c1d531aec782282486d403eb030929f430c8950d.scope: Deactivated successfully.
Nov 25 17:24:13 compute-0 podman[421942]: 2025-11-25 17:24:13.973285104 +0000 UTC m=+0.078218770 container create 5630133b6c43703397c14795d6083e43b14fe5faa814b52caf85b8416a77c227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:24:14 compute-0 systemd[1]: Started libpod-conmon-5630133b6c43703397c14795d6083e43b14fe5faa814b52caf85b8416a77c227.scope.
Nov 25 17:24:14 compute-0 podman[421942]: 2025-11-25 17:24:13.940177233 +0000 UTC m=+0.045110939 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:24:14 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:24:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d76dd1a35594cc62443ac97fa36a74b26e0a81c8f65e6e45e94f7956f86c77f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:24:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d76dd1a35594cc62443ac97fa36a74b26e0a81c8f65e6e45e94f7956f86c77f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:24:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d76dd1a35594cc62443ac97fa36a74b26e0a81c8f65e6e45e94f7956f86c77f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:24:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d76dd1a35594cc62443ac97fa36a74b26e0a81c8f65e6e45e94f7956f86c77f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:24:14 compute-0 podman[421942]: 2025-11-25 17:24:14.07930374 +0000 UTC m=+0.184237456 container init 5630133b6c43703397c14795d6083e43b14fe5faa814b52caf85b8416a77c227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_carson, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 17:24:14 compute-0 podman[421942]: 2025-11-25 17:24:14.087729569 +0000 UTC m=+0.192663275 container start 5630133b6c43703397c14795d6083e43b14fe5faa814b52caf85b8416a77c227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 17:24:14 compute-0 podman[421942]: 2025-11-25 17:24:14.092369606 +0000 UTC m=+0.197303312 container attach 5630133b6c43703397c14795d6083e43b14fe5faa814b52caf85b8416a77c227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:24:14 compute-0 ceph-mon[74985]: pgmap v3020: 321 pgs: 321 active+clean; 168 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 323 KiB/s wr, 50 op/s
Nov 25 17:24:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:24:15 compute-0 objective_carson[421959]: {
Nov 25 17:24:15 compute-0 objective_carson[421959]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:24:15 compute-0 objective_carson[421959]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:24:15 compute-0 objective_carson[421959]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:24:15 compute-0 objective_carson[421959]:         "osd_id": 1,
Nov 25 17:24:15 compute-0 objective_carson[421959]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:24:15 compute-0 objective_carson[421959]:         "type": "bluestore"
Nov 25 17:24:15 compute-0 objective_carson[421959]:     },
Nov 25 17:24:15 compute-0 objective_carson[421959]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:24:15 compute-0 objective_carson[421959]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:24:15 compute-0 objective_carson[421959]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:24:15 compute-0 objective_carson[421959]:         "osd_id": 2,
Nov 25 17:24:15 compute-0 objective_carson[421959]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:24:15 compute-0 objective_carson[421959]:         "type": "bluestore"
Nov 25 17:24:15 compute-0 objective_carson[421959]:     },
Nov 25 17:24:15 compute-0 objective_carson[421959]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:24:15 compute-0 objective_carson[421959]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:24:15 compute-0 objective_carson[421959]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:24:15 compute-0 objective_carson[421959]:         "osd_id": 0,
Nov 25 17:24:15 compute-0 objective_carson[421959]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:24:15 compute-0 objective_carson[421959]:         "type": "bluestore"
Nov 25 17:24:15 compute-0 objective_carson[421959]:     }
Nov 25 17:24:15 compute-0 objective_carson[421959]: }
Nov 25 17:24:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3021: 321 pgs: 321 active+clean; 198 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.0 MiB/s wr, 87 op/s
Nov 25 17:24:15 compute-0 systemd[1]: libpod-5630133b6c43703397c14795d6083e43b14fe5faa814b52caf85b8416a77c227.scope: Deactivated successfully.
Nov 25 17:24:15 compute-0 systemd[1]: libpod-5630133b6c43703397c14795d6083e43b14fe5faa814b52caf85b8416a77c227.scope: Consumed 1.132s CPU time.
Nov 25 17:24:15 compute-0 podman[421993]: 2025-11-25 17:24:15.306206155 +0000 UTC m=+0.068797883 container died 5630133b6c43703397c14795d6083e43b14fe5faa814b52caf85b8416a77c227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:24:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-d76dd1a35594cc62443ac97fa36a74b26e0a81c8f65e6e45e94f7956f86c77f2-merged.mount: Deactivated successfully.
Nov 25 17:24:15 compute-0 podman[421995]: 2025-11-25 17:24:15.406383142 +0000 UTC m=+0.159339968 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 25 17:24:15 compute-0 podman[421993]: 2025-11-25 17:24:15.425561044 +0000 UTC m=+0.188152792 container remove 5630133b6c43703397c14795d6083e43b14fe5faa814b52caf85b8416a77c227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_carson, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 17:24:15 compute-0 podman[421992]: 2025-11-25 17:24:15.433722126 +0000 UTC m=+0.188472681 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 17:24:15 compute-0 systemd[1]: libpod-conmon-5630133b6c43703397c14795d6083e43b14fe5faa814b52caf85b8416a77c227.scope: Deactivated successfully.
Nov 25 17:24:15 compute-0 sudo[421837]: pam_unix(sudo:session): session closed for user root
Nov 25 17:24:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:24:15 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:24:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:24:15 compute-0 podman[421996]: 2025-11-25 17:24:15.521180717 +0000 UTC m=+0.270296259 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 25 17:24:15 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:24:15 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 3b9e0868-960b-43c7-9b13-656be2fa11cc does not exist
Nov 25 17:24:15 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev b6afd4b7-c854-4cca-b42b-3ccceb2f423d does not exist
Nov 25 17:24:15 compute-0 sudo[422070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:24:15 compute-0 sudo[422070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:24:15 compute-0 sudo[422070]: pam_unix(sudo:session): session closed for user root
Nov 25 17:24:15 compute-0 sudo[422095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:24:15 compute-0 sudo[422095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:24:15 compute-0 sudo[422095]: pam_unix(sudo:session): session closed for user root
Nov 25 17:24:16 compute-0 ceph-mon[74985]: pgmap v3021: 321 pgs: 321 active+clean; 198 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.0 MiB/s wr, 87 op/s
Nov 25 17:24:16 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:24:16 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:24:16 compute-0 nova_compute[254092]: 2025-11-25 17:24:16.659 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:24:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3022: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 334 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:24:18 compute-0 nova_compute[254092]: 2025-11-25 17:24:18.372 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:24:18 compute-0 ceph-mon[74985]: pgmap v3022: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 334 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:24:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3023: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 334 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:24:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:24:20 compute-0 ceph-mon[74985]: pgmap v3023: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 334 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:24:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3024: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 334 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 17:24:21 compute-0 nova_compute[254092]: 2025-11-25 17:24:21.660 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:24:22 compute-0 ceph-mon[74985]: pgmap v3024: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 334 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 17:24:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3025: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Nov 25 17:24:23 compute-0 nova_compute[254092]: 2025-11-25 17:24:23.375 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:24:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:24:24 compute-0 ceph-mon[74985]: pgmap v3025: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Nov 25 17:24:24 compute-0 nova_compute[254092]: 2025-11-25 17:24:24.561 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:24:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:24:24.562 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=62, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=61) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:24:24 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:24:24.563 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 17:24:24 compute-0 nova_compute[254092]: 2025-11-25 17:24:24.840 254096 DEBUG nova.compute.manager [req-bc6ed57c-11a0-42e3-931c-447e92d1e9d2 req-743c4713-45ab-48d3-b503-9b40bcf2475f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Received event network-changed-e68f7346-b984-4fd0-9e4a-3d8c74e511fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:24:24 compute-0 nova_compute[254092]: 2025-11-25 17:24:24.840 254096 DEBUG nova.compute.manager [req-bc6ed57c-11a0-42e3-931c-447e92d1e9d2 req-743c4713-45ab-48d3-b503-9b40bcf2475f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Refreshing instance network info cache due to event network-changed-e68f7346-b984-4fd0-9e4a-3d8c74e511fd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:24:24 compute-0 nova_compute[254092]: 2025-11-25 17:24:24.841 254096 DEBUG oslo_concurrency.lockutils [req-bc6ed57c-11a0-42e3-931c-447e92d1e9d2 req-743c4713-45ab-48d3-b503-9b40bcf2475f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-a54a9759-b1e7-4cbe-87b4-97878794f76e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:24:24 compute-0 nova_compute[254092]: 2025-11-25 17:24:24.842 254096 DEBUG oslo_concurrency.lockutils [req-bc6ed57c-11a0-42e3-931c-447e92d1e9d2 req-743c4713-45ab-48d3-b503-9b40bcf2475f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-a54a9759-b1e7-4cbe-87b4-97878794f76e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:24:24 compute-0 nova_compute[254092]: 2025-11-25 17:24:24.842 254096 DEBUG nova.network.neutron [req-bc6ed57c-11a0-42e3-931c-447e92d1e9d2 req-743c4713-45ab-48d3-b503-9b40bcf2475f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Refreshing network info cache for port e68f7346-b984-4fd0-9e4a-3d8c74e511fd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:24:24 compute-0 nova_compute[254092]: 2025-11-25 17:24:24.892 254096 DEBUG oslo_concurrency.lockutils [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "a54a9759-b1e7-4cbe-87b4-97878794f76e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:24:24 compute-0 nova_compute[254092]: 2025-11-25 17:24:24.893 254096 DEBUG oslo_concurrency.lockutils [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "a54a9759-b1e7-4cbe-87b4-97878794f76e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:24:24 compute-0 nova_compute[254092]: 2025-11-25 17:24:24.894 254096 DEBUG oslo_concurrency.lockutils [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "a54a9759-b1e7-4cbe-87b4-97878794f76e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:24:24 compute-0 nova_compute[254092]: 2025-11-25 17:24:24.894 254096 DEBUG oslo_concurrency.lockutils [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "a54a9759-b1e7-4cbe-87b4-97878794f76e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:24:24 compute-0 nova_compute[254092]: 2025-11-25 17:24:24.895 254096 DEBUG oslo_concurrency.lockutils [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "a54a9759-b1e7-4cbe-87b4-97878794f76e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:24:24 compute-0 nova_compute[254092]: 2025-11-25 17:24:24.897 254096 INFO nova.compute.manager [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Terminating instance
Nov 25 17:24:24 compute-0 nova_compute[254092]: 2025-11-25 17:24:24.899 254096 DEBUG nova.compute.manager [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 17:24:24 compute-0 kernel: tape68f7346-b9 (unregistering): left promiscuous mode
Nov 25 17:24:24 compute-0 NetworkManager[48891]: <info>  [1764091464.9639] device (tape68f7346-b9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:24:25 compute-0 ovn_controller[153477]: 2025-11-25T17:24:25Z|01594|binding|INFO|Releasing lport e68f7346-b984-4fd0-9e4a-3d8c74e511fd from this chassis (sb_readonly=0)
Nov 25 17:24:25 compute-0 ovn_controller[153477]: 2025-11-25T17:24:25Z|01595|binding|INFO|Setting lport e68f7346-b984-4fd0-9e4a-3d8c74e511fd down in Southbound
Nov 25 17:24:25 compute-0 nova_compute[254092]: 2025-11-25 17:24:25.028 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:24:25 compute-0 ovn_controller[153477]: 2025-11-25T17:24:25Z|01596|binding|INFO|Removing iface tape68f7346-b9 ovn-installed in OVS
Nov 25 17:24:25 compute-0 nova_compute[254092]: 2025-11-25 17:24:25.031 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:24:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:24:25.035 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:c1:87 10.100.0.6 2001:db8:0:1:f816:3eff:feb9:c187 2001:db8::f816:3eff:feb9:c187'], port_security=['fa:16:3e:b9:c1:87 10.100.0.6 2001:db8:0:1:f816:3eff:feb9:c187 2001:db8::f816:3eff:feb9:c187'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28 2001:db8:0:1:f816:3eff:feb9:c187/64 2001:db8::f816:3eff:feb9:c187/64', 'neutron:device_id': 'a54a9759-b1e7-4cbe-87b4-97878794f76e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-285f996f-d0be-4c9e-9d0c-b8d730990ab6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': '52a2959e-419d-400c-8c05-829f5b8d02c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9ece7728-71e8-4a85-9646-5e1c30bdfaf7, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=e68f7346-b984-4fd0-9e4a-3d8c74e511fd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:24:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:24:25.036 163338 INFO neutron.agent.ovn.metadata.agent [-] Port e68f7346-b984-4fd0-9e4a-3d8c74e511fd in datapath 285f996f-d0be-4c9e-9d0c-b8d730990ab6 unbound from our chassis
Nov 25 17:24:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:24:25.037 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 285f996f-d0be-4c9e-9d0c-b8d730990ab6
Nov 25 17:24:25 compute-0 nova_compute[254092]: 2025-11-25 17:24:25.043 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:24:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:24:25.062 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7652e7c5-fc5e-49e3-b6e7-5f0219b8cb46]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:24:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:24:25.106 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[5e792bb2-6d05-42ee-ad8a-3ac9b1ce10b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:24:25 compute-0 systemd[1]: machine-qemu\x2d182\x2dinstance\x2d00000094.scope: Deactivated successfully.
Nov 25 17:24:25 compute-0 systemd[1]: machine-qemu\x2d182\x2dinstance\x2d00000094.scope: Consumed 14.037s CPU time.
Nov 25 17:24:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:24:25.113 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[b899dfa9-0e2b-438d-b305-8ff7b77d7f81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:24:25 compute-0 systemd-machined[216343]: Machine qemu-182-instance-00000094 terminated.
Nov 25 17:24:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:24:25.161 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[30582e10-4181-4d81-a89a-05c34a7a3ac9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:24:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3026: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 335 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Nov 25 17:24:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:24:25.187 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c84c0f47-e321-4648-b871-a248d014b97d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap285f996f-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c7:79:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 42, 'tx_packets': 7, 'rx_bytes': 3468, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 42, 'tx_packets': 7, 'rx_bytes': 3468, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 461], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 786459, 'reachable_time': 24898, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2768, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2768, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 422131, 'error': None, 'target': 'ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:24:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:24:25.210 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[fbe3fbfb-f219-41da-ba4e-8999e85fee58]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap285f996f-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 786477, 'tstamp': 786477}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 422132, 'error': None, 'target': 'ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap285f996f-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 786482, 'tstamp': 786482}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 422132, 'error': None, 'target': 'ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:24:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:24:25.212 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap285f996f-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:24:25 compute-0 nova_compute[254092]: 2025-11-25 17:24:25.214 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:24:25 compute-0 nova_compute[254092]: 2025-11-25 17:24:25.220 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:24:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:24:25.220 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap285f996f-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:24:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:24:25.220 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:24:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:24:25.220 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap285f996f-d0, col_values=(('external_ids', {'iface-id': '63e3c701-69aa-46d9-a2c2-91ded4242c02'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:24:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:24:25.221 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:24:25 compute-0 nova_compute[254092]: 2025-11-25 17:24:25.346 254096 INFO nova.virt.libvirt.driver [-] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Instance destroyed successfully.
Nov 25 17:24:25 compute-0 nova_compute[254092]: 2025-11-25 17:24:25.347 254096 DEBUG nova.objects.instance [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'resources' on Instance uuid a54a9759-b1e7-4cbe-87b4-97878794f76e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:24:25 compute-0 nova_compute[254092]: 2025-11-25 17:24:25.358 254096 DEBUG nova.virt.libvirt.vif [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:23:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1586289454',display_name='tempest-TestGettingAddress-server-1586289454',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1586289454',id=148,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEZ6smT8tsN0VvN7F7g1rHxOYiNqIkE8IzwqVs2aHnroQdRZ2PL6s9ZrD+ZO9sn2Dm0U7Nqx1ksvxRYiufxUI7TiN4d7vuuX9VAm+c8Urdvu83TEqeMhlMk1VfLezHQlfw==',key_name='tempest-TestGettingAddress-1206328727',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:23:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-vp2qu6ef',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:23:59Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=a54a9759-b1e7-4cbe-87b4-97878794f76e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e68f7346-b984-4fd0-9e4a-3d8c74e511fd", "address": "fa:16:3e:b9:c1:87", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": 
"2001:db8:0:1:f816:3eff:feb9:c187", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb9:c187", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape68f7346-b9", "ovs_interfaceid": "e68f7346-b984-4fd0-9e4a-3d8c74e511fd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:24:25 compute-0 nova_compute[254092]: 2025-11-25 17:24:25.359 254096 DEBUG nova.network.os_vif_util [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "e68f7346-b984-4fd0-9e4a-3d8c74e511fd", "address": "fa:16:3e:b9:c1:87", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb9:c187", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb9:c187", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape68f7346-b9", "ovs_interfaceid": "e68f7346-b984-4fd0-9e4a-3d8c74e511fd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:24:25 compute-0 nova_compute[254092]: 2025-11-25 17:24:25.360 254096 DEBUG nova.network.os_vif_util [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b9:c1:87,bridge_name='br-int',has_traffic_filtering=True,id=e68f7346-b984-4fd0-9e4a-3d8c74e511fd,network=Network(285f996f-d0be-4c9e-9d0c-b8d730990ab6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape68f7346-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:24:25 compute-0 nova_compute[254092]: 2025-11-25 17:24:25.361 254096 DEBUG os_vif [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b9:c1:87,bridge_name='br-int',has_traffic_filtering=True,id=e68f7346-b984-4fd0-9e4a-3d8c74e511fd,network=Network(285f996f-d0be-4c9e-9d0c-b8d730990ab6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape68f7346-b9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:24:25 compute-0 nova_compute[254092]: 2025-11-25 17:24:25.364 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:24:25 compute-0 nova_compute[254092]: 2025-11-25 17:24:25.365 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape68f7346-b9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:24:25 compute-0 ceph-mgr[75280]: [devicehealth INFO root] Check health
Nov 25 17:24:25 compute-0 nova_compute[254092]: 2025-11-25 17:24:25.367 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:24:25 compute-0 nova_compute[254092]: 2025-11-25 17:24:25.369 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:24:25 compute-0 nova_compute[254092]: 2025-11-25 17:24:25.388 254096 INFO os_vif [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b9:c1:87,bridge_name='br-int',has_traffic_filtering=True,id=e68f7346-b984-4fd0-9e4a-3d8c74e511fd,network=Network(285f996f-d0be-4c9e-9d0c-b8d730990ab6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape68f7346-b9')
Nov 25 17:24:25 compute-0 nova_compute[254092]: 2025-11-25 17:24:25.859 254096 INFO nova.virt.libvirt.driver [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Deleting instance files /var/lib/nova/instances/a54a9759-b1e7-4cbe-87b4-97878794f76e_del
Nov 25 17:24:25 compute-0 nova_compute[254092]: 2025-11-25 17:24:25.860 254096 INFO nova.virt.libvirt.driver [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Deletion of /var/lib/nova/instances/a54a9759-b1e7-4cbe-87b4-97878794f76e_del complete
Nov 25 17:24:25 compute-0 nova_compute[254092]: 2025-11-25 17:24:25.912 254096 INFO nova.compute.manager [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Took 1.01 seconds to destroy the instance on the hypervisor.
Nov 25 17:24:25 compute-0 nova_compute[254092]: 2025-11-25 17:24:25.913 254096 DEBUG oslo.service.loopingcall [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 17:24:25 compute-0 nova_compute[254092]: 2025-11-25 17:24:25.913 254096 DEBUG nova.compute.manager [-] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 17:24:25 compute-0 nova_compute[254092]: 2025-11-25 17:24:25.913 254096 DEBUG nova.network.neutron [-] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 17:24:26 compute-0 ceph-mon[74985]: pgmap v3026: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 335 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Nov 25 17:24:26 compute-0 nova_compute[254092]: 2025-11-25 17:24:26.665 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:24:26 compute-0 nova_compute[254092]: 2025-11-25 17:24:26.964 254096 DEBUG nova.compute.manager [req-d9d7cfa1-2693-4309-a88e-df2fd2c3d934 req-40243a3a-15b7-4536-acfb-f1a720287841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Received event network-vif-unplugged-e68f7346-b984-4fd0-9e4a-3d8c74e511fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:24:26 compute-0 nova_compute[254092]: 2025-11-25 17:24:26.964 254096 DEBUG oslo_concurrency.lockutils [req-d9d7cfa1-2693-4309-a88e-df2fd2c3d934 req-40243a3a-15b7-4536-acfb-f1a720287841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a54a9759-b1e7-4cbe-87b4-97878794f76e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:24:26 compute-0 nova_compute[254092]: 2025-11-25 17:24:26.965 254096 DEBUG oslo_concurrency.lockutils [req-d9d7cfa1-2693-4309-a88e-df2fd2c3d934 req-40243a3a-15b7-4536-acfb-f1a720287841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a54a9759-b1e7-4cbe-87b4-97878794f76e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:24:26 compute-0 nova_compute[254092]: 2025-11-25 17:24:26.965 254096 DEBUG oslo_concurrency.lockutils [req-d9d7cfa1-2693-4309-a88e-df2fd2c3d934 req-40243a3a-15b7-4536-acfb-f1a720287841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a54a9759-b1e7-4cbe-87b4-97878794f76e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:24:26 compute-0 nova_compute[254092]: 2025-11-25 17:24:26.965 254096 DEBUG nova.compute.manager [req-d9d7cfa1-2693-4309-a88e-df2fd2c3d934 req-40243a3a-15b7-4536-acfb-f1a720287841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] No waiting events found dispatching network-vif-unplugged-e68f7346-b984-4fd0-9e4a-3d8c74e511fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:24:26 compute-0 nova_compute[254092]: 2025-11-25 17:24:26.966 254096 DEBUG nova.compute.manager [req-d9d7cfa1-2693-4309-a88e-df2fd2c3d934 req-40243a3a-15b7-4536-acfb-f1a720287841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Received event network-vif-unplugged-e68f7346-b984-4fd0-9e4a-3d8c74e511fd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 17:24:26 compute-0 nova_compute[254092]: 2025-11-25 17:24:26.966 254096 DEBUG nova.compute.manager [req-d9d7cfa1-2693-4309-a88e-df2fd2c3d934 req-40243a3a-15b7-4536-acfb-f1a720287841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Received event network-vif-plugged-e68f7346-b984-4fd0-9e4a-3d8c74e511fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:24:26 compute-0 nova_compute[254092]: 2025-11-25 17:24:26.966 254096 DEBUG oslo_concurrency.lockutils [req-d9d7cfa1-2693-4309-a88e-df2fd2c3d934 req-40243a3a-15b7-4536-acfb-f1a720287841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "a54a9759-b1e7-4cbe-87b4-97878794f76e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:24:26 compute-0 nova_compute[254092]: 2025-11-25 17:24:26.967 254096 DEBUG oslo_concurrency.lockutils [req-d9d7cfa1-2693-4309-a88e-df2fd2c3d934 req-40243a3a-15b7-4536-acfb-f1a720287841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a54a9759-b1e7-4cbe-87b4-97878794f76e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:24:26 compute-0 nova_compute[254092]: 2025-11-25 17:24:26.967 254096 DEBUG oslo_concurrency.lockutils [req-d9d7cfa1-2693-4309-a88e-df2fd2c3d934 req-40243a3a-15b7-4536-acfb-f1a720287841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "a54a9759-b1e7-4cbe-87b4-97878794f76e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:24:26 compute-0 nova_compute[254092]: 2025-11-25 17:24:26.967 254096 DEBUG nova.compute.manager [req-d9d7cfa1-2693-4309-a88e-df2fd2c3d934 req-40243a3a-15b7-4536-acfb-f1a720287841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] No waiting events found dispatching network-vif-plugged-e68f7346-b984-4fd0-9e4a-3d8c74e511fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:24:26 compute-0 nova_compute[254092]: 2025-11-25 17:24:26.968 254096 WARNING nova.compute.manager [req-d9d7cfa1-2693-4309-a88e-df2fd2c3d934 req-40243a3a-15b7-4536-acfb-f1a720287841 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Received unexpected event network-vif-plugged-e68f7346-b984-4fd0-9e4a-3d8c74e511fd for instance with vm_state active and task_state deleting.
Nov 25 17:24:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3027: 321 pgs: 321 active+clean; 182 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 160 KiB/s rd, 165 KiB/s wr, 35 op/s
Nov 25 17:24:27 compute-0 nova_compute[254092]: 2025-11-25 17:24:27.732 254096 DEBUG nova.network.neutron [-] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:24:27 compute-0 nova_compute[254092]: 2025-11-25 17:24:27.734 254096 DEBUG nova.network.neutron [req-bc6ed57c-11a0-42e3-931c-447e92d1e9d2 req-743c4713-45ab-48d3-b503-9b40bcf2475f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Updated VIF entry in instance network info cache for port e68f7346-b984-4fd0-9e4a-3d8c74e511fd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:24:27 compute-0 nova_compute[254092]: 2025-11-25 17:24:27.735 254096 DEBUG nova.network.neutron [req-bc6ed57c-11a0-42e3-931c-447e92d1e9d2 req-743c4713-45ab-48d3-b503-9b40bcf2475f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Updating instance_info_cache with network_info: [{"id": "e68f7346-b984-4fd0-9e4a-3d8c74e511fd", "address": "fa:16:3e:b9:c1:87", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb9:c187", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb9:c187", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape68f7346-b9", "ovs_interfaceid": "e68f7346-b984-4fd0-9e4a-3d8c74e511fd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:24:27 compute-0 nova_compute[254092]: 2025-11-25 17:24:27.752 254096 DEBUG oslo_concurrency.lockutils [req-bc6ed57c-11a0-42e3-931c-447e92d1e9d2 req-743c4713-45ab-48d3-b503-9b40bcf2475f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-a54a9759-b1e7-4cbe-87b4-97878794f76e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:24:27 compute-0 nova_compute[254092]: 2025-11-25 17:24:27.754 254096 INFO nova.compute.manager [-] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Took 1.84 seconds to deallocate network for instance.
Nov 25 17:24:27 compute-0 nova_compute[254092]: 2025-11-25 17:24:27.791 254096 DEBUG oslo_concurrency.lockutils [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:24:27 compute-0 nova_compute[254092]: 2025-11-25 17:24:27.792 254096 DEBUG oslo_concurrency.lockutils [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:24:27 compute-0 nova_compute[254092]: 2025-11-25 17:24:27.879 254096 DEBUG oslo_concurrency.processutils [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:24:28 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:24:28 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/497703530' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:24:28 compute-0 nova_compute[254092]: 2025-11-25 17:24:28.334 254096 DEBUG oslo_concurrency.processutils [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:24:28 compute-0 nova_compute[254092]: 2025-11-25 17:24:28.346 254096 DEBUG nova.compute.provider_tree [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:24:28 compute-0 nova_compute[254092]: 2025-11-25 17:24:28.372 254096 DEBUG nova.scheduler.client.report [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:24:28 compute-0 nova_compute[254092]: 2025-11-25 17:24:28.422 254096 DEBUG oslo_concurrency.lockutils [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:24:28 compute-0 nova_compute[254092]: 2025-11-25 17:24:28.451 254096 INFO nova.scheduler.client.report [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Deleted allocations for instance a54a9759-b1e7-4cbe-87b4-97878794f76e
Nov 25 17:24:28 compute-0 nova_compute[254092]: 2025-11-25 17:24:28.524 254096 DEBUG oslo_concurrency.lockutils [None req-2111f30f-af34-4f8f-8535-d3189dcd08e0 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "a54a9759-b1e7-4cbe-87b4-97878794f76e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:24:28 compute-0 ceph-mon[74985]: pgmap v3027: 321 pgs: 321 active+clean; 182 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 160 KiB/s rd, 165 KiB/s wr, 35 op/s
Nov 25 17:24:28 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/497703530' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:24:29 compute-0 nova_compute[254092]: 2025-11-25 17:24:29.084 254096 DEBUG nova.compute.manager [req-4f85f826-dae4-4501-b239-f5a6f4829437 req-365d071b-a1cc-4152-beb2-9a6def57582a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Received event network-vif-deleted-e68f7346-b984-4fd0-9e4a-3d8c74e511fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:24:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3028: 321 pgs: 321 active+clean; 182 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 13 KiB/s wr, 13 op/s
Nov 25 17:24:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:24:29 compute-0 nova_compute[254092]: 2025-11-25 17:24:29.627 254096 DEBUG oslo_concurrency.lockutils [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "287c21fa-3b34-448c-ba84-c777124fbb3d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:24:29 compute-0 nova_compute[254092]: 2025-11-25 17:24:29.628 254096 DEBUG oslo_concurrency.lockutils [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "287c21fa-3b34-448c-ba84-c777124fbb3d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:24:29 compute-0 nova_compute[254092]: 2025-11-25 17:24:29.629 254096 DEBUG oslo_concurrency.lockutils [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "287c21fa-3b34-448c-ba84-c777124fbb3d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:24:29 compute-0 nova_compute[254092]: 2025-11-25 17:24:29.629 254096 DEBUG oslo_concurrency.lockutils [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "287c21fa-3b34-448c-ba84-c777124fbb3d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:24:29 compute-0 nova_compute[254092]: 2025-11-25 17:24:29.630 254096 DEBUG oslo_concurrency.lockutils [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "287c21fa-3b34-448c-ba84-c777124fbb3d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:24:29 compute-0 nova_compute[254092]: 2025-11-25 17:24:29.632 254096 INFO nova.compute.manager [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Terminating instance
Nov 25 17:24:29 compute-0 nova_compute[254092]: 2025-11-25 17:24:29.634 254096 DEBUG nova.compute.manager [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 17:24:29 compute-0 kernel: tape7f2fe0c-53 (unregistering): left promiscuous mode
Nov 25 17:24:29 compute-0 NetworkManager[48891]: <info>  [1764091469.7343] device (tape7f2fe0c-53): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:24:29 compute-0 ovn_controller[153477]: 2025-11-25T17:24:29Z|01597|binding|INFO|Releasing lport e7f2fe0c-53c0-4be7-8362-47c92514001c from this chassis (sb_readonly=0)
Nov 25 17:24:29 compute-0 nova_compute[254092]: 2025-11-25 17:24:29.745 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:24:29 compute-0 ovn_controller[153477]: 2025-11-25T17:24:29Z|01598|binding|INFO|Setting lport e7f2fe0c-53c0-4be7-8362-47c92514001c down in Southbound
Nov 25 17:24:29 compute-0 ovn_controller[153477]: 2025-11-25T17:24:29Z|01599|binding|INFO|Removing iface tape7f2fe0c-53 ovn-installed in OVS
Nov 25 17:24:29 compute-0 nova_compute[254092]: 2025-11-25 17:24:29.748 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:24:29 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:24:29.754 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:f7:b1 10.100.0.4 2001:db8:0:1:f816:3eff:fe41:f7b1 2001:db8::f816:3eff:fe41:f7b1'], port_security=['fa:16:3e:41:f7:b1 10.100.0.4 2001:db8:0:1:f816:3eff:fe41:f7b1 2001:db8::f816:3eff:fe41:f7b1'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28 2001:db8:0:1:f816:3eff:fe41:f7b1/64 2001:db8::f816:3eff:fe41:f7b1/64', 'neutron:device_id': '287c21fa-3b34-448c-ba84-c777124fbb3d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-285f996f-d0be-4c9e-9d0c-b8d730990ab6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': '52a2959e-419d-400c-8c05-829f5b8d02c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9ece7728-71e8-4a85-9646-5e1c30bdfaf7, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=e7f2fe0c-53c0-4be7-8362-47c92514001c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:24:29 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:24:29.755 163338 INFO neutron.agent.ovn.metadata.agent [-] Port e7f2fe0c-53c0-4be7-8362-47c92514001c in datapath 285f996f-d0be-4c9e-9d0c-b8d730990ab6 unbound from our chassis
Nov 25 17:24:29 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:24:29.756 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 285f996f-d0be-4c9e-9d0c-b8d730990ab6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 17:24:29 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:24:29.757 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[157b6e13-dece-46df-bc3e-f25f25bb94f1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:24:29 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:24:29.758 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6 namespace which is not needed anymore
Nov 25 17:24:29 compute-0 nova_compute[254092]: 2025-11-25 17:24:29.779 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:24:29 compute-0 systemd[1]: machine-qemu\x2d181\x2dinstance\x2d00000093.scope: Deactivated successfully.
Nov 25 17:24:29 compute-0 systemd[1]: machine-qemu\x2d181\x2dinstance\x2d00000093.scope: Consumed 16.490s CPU time.
Nov 25 17:24:29 compute-0 systemd-machined[216343]: Machine qemu-181-instance-00000093 terminated.
Nov 25 17:24:29 compute-0 nova_compute[254092]: 2025-11-25 17:24:29.867 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:24:29 compute-0 nova_compute[254092]: 2025-11-25 17:24:29.872 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:24:29 compute-0 nova_compute[254092]: 2025-11-25 17:24:29.883 254096 INFO nova.virt.libvirt.driver [-] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Instance destroyed successfully.
Nov 25 17:24:29 compute-0 nova_compute[254092]: 2025-11-25 17:24:29.884 254096 DEBUG nova.objects.instance [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'resources' on Instance uuid 287c21fa-3b34-448c-ba84-c777124fbb3d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:24:29 compute-0 nova_compute[254092]: 2025-11-25 17:24:29.895 254096 DEBUG nova.virt.libvirt.vif [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:23:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1628099618',display_name='tempest-TestGettingAddress-server-1628099618',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1628099618',id=147,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEZ6smT8tsN0VvN7F7g1rHxOYiNqIkE8IzwqVs2aHnroQdRZ2PL6s9ZrD+ZO9sn2Dm0U7Nqx1ksvxRYiufxUI7TiN4d7vuuX9VAm+c8Urdvu83TEqeMhlMk1VfLezHQlfw==',key_name='tempest-TestGettingAddress-1206328727',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:23:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-7s8020gn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:23:27Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=287c21fa-3b34-448c-ba84-c777124fbb3d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "address": "fa:16:3e:41:f7:b1", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": 
"2001:db8:0:1:f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7f2fe0c-53", "ovs_interfaceid": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:24:29 compute-0 nova_compute[254092]: 2025-11-25 17:24:29.896 254096 DEBUG nova.network.os_vif_util [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "address": "fa:16:3e:41:f7:b1", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7f2fe0c-53", "ovs_interfaceid": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:24:29 compute-0 nova_compute[254092]: 2025-11-25 17:24:29.898 254096 DEBUG nova.network.os_vif_util [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:41:f7:b1,bridge_name='br-int',has_traffic_filtering=True,id=e7f2fe0c-53c0-4be7-8362-47c92514001c,network=Network(285f996f-d0be-4c9e-9d0c-b8d730990ab6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7f2fe0c-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:24:29 compute-0 nova_compute[254092]: 2025-11-25 17:24:29.899 254096 DEBUG os_vif [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:41:f7:b1,bridge_name='br-int',has_traffic_filtering=True,id=e7f2fe0c-53c0-4be7-8362-47c92514001c,network=Network(285f996f-d0be-4c9e-9d0c-b8d730990ab6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7f2fe0c-53') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:24:29 compute-0 nova_compute[254092]: 2025-11-25 17:24:29.901 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:24:29 compute-0 nova_compute[254092]: 2025-11-25 17:24:29.901 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape7f2fe0c-53, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:24:29 compute-0 nova_compute[254092]: 2025-11-25 17:24:29.904 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:24:29 compute-0 nova_compute[254092]: 2025-11-25 17:24:29.905 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:24:29 compute-0 nova_compute[254092]: 2025-11-25 17:24:29.910 254096 INFO os_vif [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:41:f7:b1,bridge_name='br-int',has_traffic_filtering=True,id=e7f2fe0c-53c0-4be7-8362-47c92514001c,network=Network(285f996f-d0be-4c9e-9d0c-b8d730990ab6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7f2fe0c-53')
Nov 25 17:24:29 compute-0 neutron-haproxy-ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6[420678]: [NOTICE]   (420682) : haproxy version is 2.8.14-c23fe91
Nov 25 17:24:29 compute-0 neutron-haproxy-ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6[420678]: [NOTICE]   (420682) : path to executable is /usr/sbin/haproxy
Nov 25 17:24:29 compute-0 neutron-haproxy-ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6[420678]: [WARNING]  (420682) : Exiting Master process...
Nov 25 17:24:29 compute-0 neutron-haproxy-ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6[420678]: [WARNING]  (420682) : Exiting Master process...
Nov 25 17:24:29 compute-0 neutron-haproxy-ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6[420678]: [ALERT]    (420682) : Current worker (420684) exited with code 143 (Terminated)
Nov 25 17:24:29 compute-0 neutron-haproxy-ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6[420678]: [WARNING]  (420682) : All workers exited. Exiting... (0)
Nov 25 17:24:29 compute-0 systemd[1]: libpod-73773450f7fa2f9883a6f5bdeabebbefa01940be8ab1dd81f9957470a3a9b5d1.scope: Deactivated successfully.
Nov 25 17:24:29 compute-0 podman[422219]: 2025-11-25 17:24:29.981856428 +0000 UTC m=+0.061333710 container died 73773450f7fa2f9883a6f5bdeabebbefa01940be8ab1dd81f9957470a3a9b5d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 17:24:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-73773450f7fa2f9883a6f5bdeabebbefa01940be8ab1dd81f9957470a3a9b5d1-userdata-shm.mount: Deactivated successfully.
Nov 25 17:24:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-b97e2b4204e8786f8aea331eed1d7b42c5b5b099c4f387b397a3ecb178dc7bf5-merged.mount: Deactivated successfully.
Nov 25 17:24:30 compute-0 podman[422219]: 2025-11-25 17:24:30.038231123 +0000 UTC m=+0.117708415 container cleanup 73773450f7fa2f9883a6f5bdeabebbefa01940be8ab1dd81f9957470a3a9b5d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 17:24:30 compute-0 systemd[1]: libpod-conmon-73773450f7fa2f9883a6f5bdeabebbefa01940be8ab1dd81f9957470a3a9b5d1.scope: Deactivated successfully.
Nov 25 17:24:30 compute-0 podman[422264]: 2025-11-25 17:24:30.141985277 +0000 UTC m=+0.071292971 container remove 73773450f7fa2f9883a6f5bdeabebbefa01940be8ab1dd81f9957470a3a9b5d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 17:24:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:24:30.160 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[aadff6e1-ee84-482c-bdfa-98a0fad06887]: (4, ('Tue Nov 25 05:24:29 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6 (73773450f7fa2f9883a6f5bdeabebbefa01940be8ab1dd81f9957470a3a9b5d1)\n73773450f7fa2f9883a6f5bdeabebbefa01940be8ab1dd81f9957470a3a9b5d1\nTue Nov 25 05:24:30 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6 (73773450f7fa2f9883a6f5bdeabebbefa01940be8ab1dd81f9957470a3a9b5d1)\n73773450f7fa2f9883a6f5bdeabebbefa01940be8ab1dd81f9957470a3a9b5d1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:24:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:24:30.163 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d2fc6848-f389-4392-8dc9-7ed37bdc96c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:24:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:24:30.164 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap285f996f-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:24:30 compute-0 nova_compute[254092]: 2025-11-25 17:24:30.167 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:24:30 compute-0 kernel: tap285f996f-d0: left promiscuous mode
Nov 25 17:24:30 compute-0 nova_compute[254092]: 2025-11-25 17:24:30.196 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:24:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:24:30.200 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8cc09232-f7e9-4279-aad7-8c9eee235b66]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:24:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:24:30.216 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b4a794f0-b764-4ac3-b931-c54ad0b1fe5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:24:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:24:30.217 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[765042e7-5e9f-425d-b756-1bd77a1cf8b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:24:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:24:30.246 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[33c48aeb-6a31-4e92-8190-98756e34c16e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 786449, 'reachable_time': 40400, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 422283, 'error': None, 'target': 'ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:24:30 compute-0 systemd[1]: run-netns-ovnmeta\x2d285f996f\x2dd0be\x2d4c9e\x2d9d0c\x2db8d730990ab6.mount: Deactivated successfully.
Nov 25 17:24:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:24:30.252 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-285f996f-d0be-4c9e-9d0c-b8d730990ab6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 17:24:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:24:30.253 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[05469f18-e78a-4d66-bff7-80f2637aa70b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:24:30 compute-0 nova_compute[254092]: 2025-11-25 17:24:30.324 254096 INFO nova.virt.libvirt.driver [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Deleting instance files /var/lib/nova/instances/287c21fa-3b34-448c-ba84-c777124fbb3d_del
Nov 25 17:24:30 compute-0 nova_compute[254092]: 2025-11-25 17:24:30.324 254096 INFO nova.virt.libvirt.driver [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Deletion of /var/lib/nova/instances/287c21fa-3b34-448c-ba84-c777124fbb3d_del complete
Nov 25 17:24:30 compute-0 nova_compute[254092]: 2025-11-25 17:24:30.401 254096 INFO nova.compute.manager [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Took 0.77 seconds to destroy the instance on the hypervisor.
Nov 25 17:24:30 compute-0 nova_compute[254092]: 2025-11-25 17:24:30.402 254096 DEBUG oslo.service.loopingcall [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 17:24:30 compute-0 nova_compute[254092]: 2025-11-25 17:24:30.403 254096 DEBUG nova.compute.manager [-] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 17:24:30 compute-0 nova_compute[254092]: 2025-11-25 17:24:30.403 254096 DEBUG nova.network.neutron [-] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 17:24:30 compute-0 ceph-mon[74985]: pgmap v3028: 321 pgs: 321 active+clean; 182 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 13 KiB/s wr, 13 op/s
Nov 25 17:24:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3029: 321 pgs: 321 active+clean; 96 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 24 KiB/s wr, 33 op/s
Nov 25 17:24:31 compute-0 nova_compute[254092]: 2025-11-25 17:24:31.230 254096 DEBUG nova.compute.manager [req-1918ddfb-9da1-4efb-a877-e5746f6f0ec5 req-f7f96e3e-7561-4c82-8d65-ba5035057128 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Received event network-changed-e7f2fe0c-53c0-4be7-8362-47c92514001c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:24:31 compute-0 nova_compute[254092]: 2025-11-25 17:24:31.231 254096 DEBUG nova.compute.manager [req-1918ddfb-9da1-4efb-a877-e5746f6f0ec5 req-f7f96e3e-7561-4c82-8d65-ba5035057128 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Refreshing instance network info cache due to event network-changed-e7f2fe0c-53c0-4be7-8362-47c92514001c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:24:31 compute-0 nova_compute[254092]: 2025-11-25 17:24:31.231 254096 DEBUG oslo_concurrency.lockutils [req-1918ddfb-9da1-4efb-a877-e5746f6f0ec5 req-f7f96e3e-7561-4c82-8d65-ba5035057128 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-287c21fa-3b34-448c-ba84-c777124fbb3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:24:31 compute-0 nova_compute[254092]: 2025-11-25 17:24:31.231 254096 DEBUG oslo_concurrency.lockutils [req-1918ddfb-9da1-4efb-a877-e5746f6f0ec5 req-f7f96e3e-7561-4c82-8d65-ba5035057128 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-287c21fa-3b34-448c-ba84-c777124fbb3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:24:31 compute-0 nova_compute[254092]: 2025-11-25 17:24:31.231 254096 DEBUG nova.network.neutron [req-1918ddfb-9da1-4efb-a877-e5746f6f0ec5 req-f7f96e3e-7561-4c82-8d65-ba5035057128 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Refreshing network info cache for port e7f2fe0c-53c0-4be7-8362-47c92514001c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:24:31 compute-0 nova_compute[254092]: 2025-11-25 17:24:31.613 254096 DEBUG nova.network.neutron [-] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:24:31 compute-0 ceph-mon[74985]: pgmap v3029: 321 pgs: 321 active+clean; 96 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 24 KiB/s wr, 33 op/s
Nov 25 17:24:31 compute-0 nova_compute[254092]: 2025-11-25 17:24:31.639 254096 INFO nova.compute.manager [-] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Took 1.24 seconds to deallocate network for instance.
Nov 25 17:24:31 compute-0 nova_compute[254092]: 2025-11-25 17:24:31.690 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:24:31 compute-0 nova_compute[254092]: 2025-11-25 17:24:31.695 254096 DEBUG oslo_concurrency.lockutils [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:24:31 compute-0 nova_compute[254092]: 2025-11-25 17:24:31.696 254096 DEBUG oslo_concurrency.lockutils [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:24:31 compute-0 nova_compute[254092]: 2025-11-25 17:24:31.772 254096 DEBUG oslo_concurrency.processutils [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:24:32 compute-0 nova_compute[254092]: 2025-11-25 17:24:32.052 254096 DEBUG nova.compute.manager [req-a0e1b1f8-59d9-4ad8-be0a-079c0ff0b47e req-a1b95170-e697-4276-be51-35b53894123d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Received event network-vif-deleted-e7f2fe0c-53c0-4be7-8362-47c92514001c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:24:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:24:32 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/775001318' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:24:32 compute-0 nova_compute[254092]: 2025-11-25 17:24:32.301 254096 DEBUG oslo_concurrency.processutils [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:24:32 compute-0 nova_compute[254092]: 2025-11-25 17:24:32.311 254096 DEBUG nova.compute.provider_tree [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:24:32 compute-0 nova_compute[254092]: 2025-11-25 17:24:32.338 254096 DEBUG nova.scheduler.client.report [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:24:32 compute-0 nova_compute[254092]: 2025-11-25 17:24:32.363 254096 DEBUG oslo_concurrency.lockutils [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:24:32 compute-0 nova_compute[254092]: 2025-11-25 17:24:32.407 254096 INFO nova.scheduler.client.report [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Deleted allocations for instance 287c21fa-3b34-448c-ba84-c777124fbb3d
Nov 25 17:24:32 compute-0 nova_compute[254092]: 2025-11-25 17:24:32.465 254096 DEBUG oslo_concurrency.lockutils [None req-883c7950-a930-4e92-adc0-0ed631748e2b a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "287c21fa-3b34-448c-ba84-c777124fbb3d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:24:32 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:24:32.565 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '62'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:24:32 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/775001318' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:24:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3030: 321 pgs: 321 active+clean; 96 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 12 KiB/s wr, 32 op/s
Nov 25 17:24:33 compute-0 ceph-mon[74985]: pgmap v3030: 321 pgs: 321 active+clean; 96 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 12 KiB/s wr, 32 op/s
Nov 25 17:24:33 compute-0 nova_compute[254092]: 2025-11-25 17:24:33.915 254096 DEBUG nova.network.neutron [req-1918ddfb-9da1-4efb-a877-e5746f6f0ec5 req-f7f96e3e-7561-4c82-8d65-ba5035057128 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Updated VIF entry in instance network info cache for port e7f2fe0c-53c0-4be7-8362-47c92514001c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:24:33 compute-0 nova_compute[254092]: 2025-11-25 17:24:33.917 254096 DEBUG nova.network.neutron [req-1918ddfb-9da1-4efb-a877-e5746f6f0ec5 req-f7f96e3e-7561-4c82-8d65-ba5035057128 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Updating instance_info_cache with network_info: [{"id": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "address": "fa:16:3e:41:f7:b1", "network": {"id": "285f996f-d0be-4c9e-9d0c-b8d730990ab6", "bridge": "br-int", "label": "tempest-network-smoke--1382409704", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe41:f7b1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7f2fe0c-53", "ovs_interfaceid": "e7f2fe0c-53c0-4be7-8362-47c92514001c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:24:33 compute-0 nova_compute[254092]: 2025-11-25 17:24:33.940 254096 DEBUG oslo_concurrency.lockutils [req-1918ddfb-9da1-4efb-a877-e5746f6f0ec5 req-f7f96e3e-7561-4c82-8d65-ba5035057128 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-287c21fa-3b34-448c-ba84-c777124fbb3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:24:33 compute-0 nova_compute[254092]: 2025-11-25 17:24:33.941 254096 DEBUG nova.compute.manager [req-1918ddfb-9da1-4efb-a877-e5746f6f0ec5 req-f7f96e3e-7561-4c82-8d65-ba5035057128 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Received event network-vif-unplugged-e7f2fe0c-53c0-4be7-8362-47c92514001c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:24:33 compute-0 nova_compute[254092]: 2025-11-25 17:24:33.942 254096 DEBUG oslo_concurrency.lockutils [req-1918ddfb-9da1-4efb-a877-e5746f6f0ec5 req-f7f96e3e-7561-4c82-8d65-ba5035057128 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "287c21fa-3b34-448c-ba84-c777124fbb3d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:24:33 compute-0 nova_compute[254092]: 2025-11-25 17:24:33.942 254096 DEBUG oslo_concurrency.lockutils [req-1918ddfb-9da1-4efb-a877-e5746f6f0ec5 req-f7f96e3e-7561-4c82-8d65-ba5035057128 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "287c21fa-3b34-448c-ba84-c777124fbb3d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:24:33 compute-0 nova_compute[254092]: 2025-11-25 17:24:33.943 254096 DEBUG oslo_concurrency.lockutils [req-1918ddfb-9da1-4efb-a877-e5746f6f0ec5 req-f7f96e3e-7561-4c82-8d65-ba5035057128 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "287c21fa-3b34-448c-ba84-c777124fbb3d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:24:33 compute-0 nova_compute[254092]: 2025-11-25 17:24:33.943 254096 DEBUG nova.compute.manager [req-1918ddfb-9da1-4efb-a877-e5746f6f0ec5 req-f7f96e3e-7561-4c82-8d65-ba5035057128 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] No waiting events found dispatching network-vif-unplugged-e7f2fe0c-53c0-4be7-8362-47c92514001c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:24:33 compute-0 nova_compute[254092]: 2025-11-25 17:24:33.944 254096 DEBUG nova.compute.manager [req-1918ddfb-9da1-4efb-a877-e5746f6f0ec5 req-f7f96e3e-7561-4c82-8d65-ba5035057128 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Received event network-vif-unplugged-e7f2fe0c-53c0-4be7-8362-47c92514001c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 17:24:33 compute-0 nova_compute[254092]: 2025-11-25 17:24:33.944 254096 DEBUG nova.compute.manager [req-1918ddfb-9da1-4efb-a877-e5746f6f0ec5 req-f7f96e3e-7561-4c82-8d65-ba5035057128 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Received event network-vif-plugged-e7f2fe0c-53c0-4be7-8362-47c92514001c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:24:33 compute-0 nova_compute[254092]: 2025-11-25 17:24:33.945 254096 DEBUG oslo_concurrency.lockutils [req-1918ddfb-9da1-4efb-a877-e5746f6f0ec5 req-f7f96e3e-7561-4c82-8d65-ba5035057128 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "287c21fa-3b34-448c-ba84-c777124fbb3d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:24:33 compute-0 nova_compute[254092]: 2025-11-25 17:24:33.945 254096 DEBUG oslo_concurrency.lockutils [req-1918ddfb-9da1-4efb-a877-e5746f6f0ec5 req-f7f96e3e-7561-4c82-8d65-ba5035057128 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "287c21fa-3b34-448c-ba84-c777124fbb3d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:24:33 compute-0 nova_compute[254092]: 2025-11-25 17:24:33.946 254096 DEBUG oslo_concurrency.lockutils [req-1918ddfb-9da1-4efb-a877-e5746f6f0ec5 req-f7f96e3e-7561-4c82-8d65-ba5035057128 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "287c21fa-3b34-448c-ba84-c777124fbb3d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:24:33 compute-0 nova_compute[254092]: 2025-11-25 17:24:33.946 254096 DEBUG nova.compute.manager [req-1918ddfb-9da1-4efb-a877-e5746f6f0ec5 req-f7f96e3e-7561-4c82-8d65-ba5035057128 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] No waiting events found dispatching network-vif-plugged-e7f2fe0c-53c0-4be7-8362-47c92514001c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:24:33 compute-0 nova_compute[254092]: 2025-11-25 17:24:33.947 254096 WARNING nova.compute.manager [req-1918ddfb-9da1-4efb-a877-e5746f6f0ec5 req-f7f96e3e-7561-4c82-8d65-ba5035057128 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Received unexpected event network-vif-plugged-e7f2fe0c-53c0-4be7-8362-47c92514001c for instance with vm_state active and task_state deleting.
Nov 25 17:24:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:24:34 compute-0 nova_compute[254092]: 2025-11-25 17:24:34.905 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:24:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3031: 321 pgs: 321 active+clean; 41 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 12 KiB/s wr, 57 op/s
Nov 25 17:24:36 compute-0 ceph-mon[74985]: pgmap v3031: 321 pgs: 321 active+clean; 41 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 12 KiB/s wr, 57 op/s
Nov 25 17:24:36 compute-0 nova_compute[254092]: 2025-11-25 17:24:36.693 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:24:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3032: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 12 KiB/s wr, 57 op/s
Nov 25 17:24:38 compute-0 ceph-mon[74985]: pgmap v3032: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 12 KiB/s wr, 57 op/s
Nov 25 17:24:38 compute-0 nova_compute[254092]: 2025-11-25 17:24:38.305 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:24:38 compute-0 nova_compute[254092]: 2025-11-25 17:24:38.402 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:24:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3033: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 12 KiB/s wr, 45 op/s
Nov 25 17:24:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:24:39 compute-0 nova_compute[254092]: 2025-11-25 17:24:39.909 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:24:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:24:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:24:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:24:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:24:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:24:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:24:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:24:40
Nov 25 17:24:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:24:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:24:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.control', 'backups', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes', 'default.rgw.meta', 'images', '.rgw.root', 'cephfs.cephfs.meta', 'vms']
Nov 25 17:24:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:24:40 compute-0 ceph-mon[74985]: pgmap v3033: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 12 KiB/s wr, 45 op/s
Nov 25 17:24:40 compute-0 nova_compute[254092]: 2025-11-25 17:24:40.344 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764091465.342227, a54a9759-b1e7-4cbe-87b4-97878794f76e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:24:40 compute-0 nova_compute[254092]: 2025-11-25 17:24:40.344 254096 INFO nova.compute.manager [-] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] VM Stopped (Lifecycle Event)
Nov 25 17:24:40 compute-0 nova_compute[254092]: 2025-11-25 17:24:40.368 254096 DEBUG nova.compute.manager [None req-9276b327-c376-43c0-a400-015fba9dd344 - - - - - -] [instance: a54a9759-b1e7-4cbe-87b4-97878794f76e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:24:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:24:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:24:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:24:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:24:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:24:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:24:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:24:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:24:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:24:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:24:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3034: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 12 KiB/s wr, 45 op/s
Nov 25 17:24:41 compute-0 nova_compute[254092]: 2025-11-25 17:24:41.695 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:24:42 compute-0 ceph-mon[74985]: pgmap v3034: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 12 KiB/s wr, 45 op/s
Nov 25 17:24:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3035: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 938 B/s wr, 25 op/s
Nov 25 17:24:44 compute-0 ceph-mon[74985]: pgmap v3035: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 938 B/s wr, 25 op/s
Nov 25 17:24:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:24:44 compute-0 nova_compute[254092]: 2025-11-25 17:24:44.880 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764091469.879128, 287c21fa-3b34-448c-ba84-c777124fbb3d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:24:44 compute-0 nova_compute[254092]: 2025-11-25 17:24:44.881 254096 INFO nova.compute.manager [-] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] VM Stopped (Lifecycle Event)
Nov 25 17:24:44 compute-0 nova_compute[254092]: 2025-11-25 17:24:44.897 254096 DEBUG nova.compute.manager [None req-8e9de573-e3b3-401b-9535-28eb81215ca6 - - - - - -] [instance: 287c21fa-3b34-448c-ba84-c777124fbb3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:24:44 compute-0 nova_compute[254092]: 2025-11-25 17:24:44.913 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:24:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3036: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 938 B/s wr, 25 op/s
Nov 25 17:24:45 compute-0 podman[422307]: 2025-11-25 17:24:45.662935247 +0000 UTC m=+0.071051996 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Nov 25 17:24:45 compute-0 podman[422308]: 2025-11-25 17:24:45.694618069 +0000 UTC m=+0.096472268 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 17:24:45 compute-0 podman[422309]: 2025-11-25 17:24:45.700194251 +0000 UTC m=+0.108418893 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller)
Nov 25 17:24:46 compute-0 ceph-mon[74985]: pgmap v3036: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 938 B/s wr, 25 op/s
Nov 25 17:24:46 compute-0 nova_compute[254092]: 2025-11-25 17:24:46.514 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:24:46 compute-0 nova_compute[254092]: 2025-11-25 17:24:46.514 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:24:46 compute-0 nova_compute[254092]: 2025-11-25 17:24:46.698 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:24:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3037: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:24:48 compute-0 ceph-mon[74985]: pgmap v3037: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:24:48 compute-0 nova_compute[254092]: 2025-11-25 17:24:48.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:24:48 compute-0 nova_compute[254092]: 2025-11-25 17:24:48.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:24:48 compute-0 nova_compute[254092]: 2025-11-25 17:24:48.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:24:48 compute-0 nova_compute[254092]: 2025-11-25 17:24:48.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:24:48 compute-0 nova_compute[254092]: 2025-11-25 17:24:48.523 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:24:48 compute-0 nova_compute[254092]: 2025-11-25 17:24:48.523 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:24:48 compute-0 nova_compute[254092]: 2025-11-25 17:24:48.524 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:24:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:24:49 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3393384912' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:24:49 compute-0 nova_compute[254092]: 2025-11-25 17:24:49.030 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:24:49 compute-0 nova_compute[254092]: 2025-11-25 17:24:49.181 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:24:49 compute-0 nova_compute[254092]: 2025-11-25 17:24:49.182 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3665MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:24:49 compute-0 nova_compute[254092]: 2025-11-25 17:24:49.183 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:24:49 compute-0 nova_compute[254092]: 2025-11-25 17:24:49.183 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:24:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3038: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:24:49 compute-0 nova_compute[254092]: 2025-11-25 17:24:49.236 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:24:49 compute-0 nova_compute[254092]: 2025-11-25 17:24:49.237 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:24:49 compute-0 nova_compute[254092]: 2025-11-25 17:24:49.250 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:24:49 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3393384912' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:24:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:24:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:24:49 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3865904016' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:24:49 compute-0 nova_compute[254092]: 2025-11-25 17:24:49.743 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:24:49 compute-0 nova_compute[254092]: 2025-11-25 17:24:49.750 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:24:49 compute-0 nova_compute[254092]: 2025-11-25 17:24:49.764 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:24:49 compute-0 nova_compute[254092]: 2025-11-25 17:24:49.782 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:24:49 compute-0 nova_compute[254092]: 2025-11-25 17:24:49.783 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.600s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:24:49 compute-0 nova_compute[254092]: 2025-11-25 17:24:49.917 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:24:50 compute-0 ceph-mon[74985]: pgmap v3038: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:24:50 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3865904016' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:24:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3039: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:24:51 compute-0 nova_compute[254092]: 2025-11-25 17:24:51.736 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:24:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:24:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:24:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:24:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:24:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 17:24:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:24:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:24:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:24:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:24:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:24:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 17:24:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:24:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:24:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:24:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:24:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:24:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:24:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:24:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:24:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:24:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:24:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:24:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:24:52 compute-0 ceph-mon[74985]: pgmap v3039: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:24:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3040: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:24:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:24:53.624 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:e9:6e 10.100.0.2 2001:db8::f816:3eff:fed7:e96e'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fed7:e96e/64', 'neutron:device_id': 'ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-88c42593-de32-4e23-b7f7-5cac507fe68d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bc6b1ea5-74e3-4908-b1a6-823fc7f5175e, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=10a05078-1fb1-4ddb-9ad4-b52f3a95063f) old=Port_Binding(mac=['fa:16:3e:d7:e9:6e 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-88c42593-de32-4e23-b7f7-5cac507fe68d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:24:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:24:53.626 163338 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 10a05078-1fb1-4ddb-9ad4-b52f3a95063f in datapath 88c42593-de32-4e23-b7f7-5cac507fe68d updated
Nov 25 17:24:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:24:53.628 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 88c42593-de32-4e23-b7f7-5cac507fe68d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 17:24:53 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:24:53.629 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d49b7e5a-8f8e-4113-a037-76d32b0e6008]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:24:53 compute-0 nova_compute[254092]: 2025-11-25 17:24:53.782 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:24:53 compute-0 nova_compute[254092]: 2025-11-25 17:24:53.782 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:24:53 compute-0 nova_compute[254092]: 2025-11-25 17:24:53.783 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:24:53 compute-0 nova_compute[254092]: 2025-11-25 17:24:53.783 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:24:54 compute-0 ceph-mon[74985]: pgmap v3040: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:24:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:24:54 compute-0 nova_compute[254092]: 2025-11-25 17:24:54.921 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:24:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3041: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:24:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:24:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3521902727' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:24:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:24:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3521902727' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:24:56 compute-0 ceph-mon[74985]: pgmap v3041: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:24:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/3521902727' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:24:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/3521902727' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:24:56 compute-0 nova_compute[254092]: 2025-11-25 17:24:56.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:24:56 compute-0 nova_compute[254092]: 2025-11-25 17:24:56.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:24:56 compute-0 nova_compute[254092]: 2025-11-25 17:24:56.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:24:56 compute-0 nova_compute[254092]: 2025-11-25 17:24:56.509 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 17:24:56 compute-0 nova_compute[254092]: 2025-11-25 17:24:56.739 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:24:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3042: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:24:58 compute-0 ceph-mon[74985]: pgmap v3042: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:24:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3043: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:24:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:24:59 compute-0 nova_compute[254092]: 2025-11-25 17:24:59.923 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:25:00 compute-0 nova_compute[254092]: 2025-11-25 17:25:00.152 254096 DEBUG oslo_concurrency.lockutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:25:00 compute-0 nova_compute[254092]: 2025-11-25 17:25:00.153 254096 DEBUG oslo_concurrency.lockutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:25:00 compute-0 nova_compute[254092]: 2025-11-25 17:25:00.170 254096 DEBUG nova.compute.manager [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 17:25:00 compute-0 nova_compute[254092]: 2025-11-25 17:25:00.244 254096 DEBUG oslo_concurrency.lockutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:25:00 compute-0 nova_compute[254092]: 2025-11-25 17:25:00.245 254096 DEBUG oslo_concurrency.lockutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:25:00 compute-0 nova_compute[254092]: 2025-11-25 17:25:00.255 254096 DEBUG nova.virt.hardware [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 17:25:00 compute-0 nova_compute[254092]: 2025-11-25 17:25:00.256 254096 INFO nova.compute.claims [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Claim successful on node compute-0.ctlplane.example.com
Nov 25 17:25:00 compute-0 ceph-mon[74985]: pgmap v3043: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:25:00 compute-0 nova_compute[254092]: 2025-11-25 17:25:00.378 254096 DEBUG oslo_concurrency.processutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:25:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:25:00 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/398225616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:25:00 compute-0 nova_compute[254092]: 2025-11-25 17:25:00.871 254096 DEBUG oslo_concurrency.processutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:25:00 compute-0 nova_compute[254092]: 2025-11-25 17:25:00.879 254096 DEBUG nova.compute.provider_tree [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:25:00 compute-0 nova_compute[254092]: 2025-11-25 17:25:00.893 254096 DEBUG nova.scheduler.client.report [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:25:00 compute-0 nova_compute[254092]: 2025-11-25 17:25:00.908 254096 DEBUG oslo_concurrency.lockutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:25:00 compute-0 nova_compute[254092]: 2025-11-25 17:25:00.909 254096 DEBUG nova.compute.manager [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 17:25:00 compute-0 nova_compute[254092]: 2025-11-25 17:25:00.948 254096 DEBUG nova.compute.manager [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 17:25:00 compute-0 nova_compute[254092]: 2025-11-25 17:25:00.948 254096 DEBUG nova.network.neutron [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 17:25:00 compute-0 nova_compute[254092]: 2025-11-25 17:25:00.971 254096 INFO nova.virt.libvirt.driver [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 17:25:00 compute-0 nova_compute[254092]: 2025-11-25 17:25:00.985 254096 DEBUG nova.compute.manager [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 17:25:01 compute-0 nova_compute[254092]: 2025-11-25 17:25:01.071 254096 DEBUG nova.compute.manager [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 17:25:01 compute-0 nova_compute[254092]: 2025-11-25 17:25:01.073 254096 DEBUG nova.virt.libvirt.driver [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 17:25:01 compute-0 nova_compute[254092]: 2025-11-25 17:25:01.073 254096 INFO nova.virt.libvirt.driver [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Creating image(s)
Nov 25 17:25:01 compute-0 nova_compute[254092]: 2025-11-25 17:25:01.117 254096 DEBUG nova.storage.rbd_utils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 03d40f4f-1bf3-4e1d-8844-bae4b32443cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:25:01 compute-0 nova_compute[254092]: 2025-11-25 17:25:01.150 254096 DEBUG nova.storage.rbd_utils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 03d40f4f-1bf3-4e1d-8844-bae4b32443cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:25:01 compute-0 nova_compute[254092]: 2025-11-25 17:25:01.179 254096 DEBUG nova.storage.rbd_utils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 03d40f4f-1bf3-4e1d-8844-bae4b32443cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:25:01 compute-0 nova_compute[254092]: 2025-11-25 17:25:01.184 254096 DEBUG oslo_concurrency.processutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:25:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3044: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:25:01 compute-0 nova_compute[254092]: 2025-11-25 17:25:01.298 254096 DEBUG oslo_concurrency.processutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.115s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:25:01 compute-0 nova_compute[254092]: 2025-11-25 17:25:01.299 254096 DEBUG oslo_concurrency.lockutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:25:01 compute-0 nova_compute[254092]: 2025-11-25 17:25:01.300 254096 DEBUG oslo_concurrency.lockutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:25:01 compute-0 nova_compute[254092]: 2025-11-25 17:25:01.301 254096 DEBUG oslo_concurrency.lockutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:25:01 compute-0 nova_compute[254092]: 2025-11-25 17:25:01.323 254096 DEBUG nova.storage.rbd_utils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 03d40f4f-1bf3-4e1d-8844-bae4b32443cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:25:01 compute-0 nova_compute[254092]: 2025-11-25 17:25:01.327 254096 DEBUG oslo_concurrency.processutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 03d40f4f-1bf3-4e1d-8844-bae4b32443cc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:25:01 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/398225616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:25:01 compute-0 nova_compute[254092]: 2025-11-25 17:25:01.385 254096 DEBUG nova.policy [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a647bb22c9a54e5fa9ddfff588126693', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '67b7f8e8aac6441f992a182f8d893100', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 17:25:01 compute-0 nova_compute[254092]: 2025-11-25 17:25:01.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:25:01 compute-0 nova_compute[254092]: 2025-11-25 17:25:01.631 254096 DEBUG oslo_concurrency.processutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 03d40f4f-1bf3-4e1d-8844-bae4b32443cc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.304s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:25:01 compute-0 nova_compute[254092]: 2025-11-25 17:25:01.688 254096 DEBUG nova.storage.rbd_utils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] resizing rbd image 03d40f4f-1bf3-4e1d-8844-bae4b32443cc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 17:25:01 compute-0 nova_compute[254092]: 2025-11-25 17:25:01.771 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:25:01 compute-0 nova_compute[254092]: 2025-11-25 17:25:01.817 254096 DEBUG nova.objects.instance [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'migration_context' on Instance uuid 03d40f4f-1bf3-4e1d-8844-bae4b32443cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:25:01 compute-0 nova_compute[254092]: 2025-11-25 17:25:01.847 254096 DEBUG nova.virt.libvirt.driver [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 17:25:01 compute-0 nova_compute[254092]: 2025-11-25 17:25:01.848 254096 DEBUG nova.virt.libvirt.driver [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Ensure instance console log exists: /var/lib/nova/instances/03d40f4f-1bf3-4e1d-8844-bae4b32443cc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 17:25:01 compute-0 nova_compute[254092]: 2025-11-25 17:25:01.849 254096 DEBUG oslo_concurrency.lockutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:25:01 compute-0 nova_compute[254092]: 2025-11-25 17:25:01.849 254096 DEBUG oslo_concurrency.lockutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:25:01 compute-0 nova_compute[254092]: 2025-11-25 17:25:01.849 254096 DEBUG oslo_concurrency.lockutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:25:02 compute-0 ceph-mon[74985]: pgmap v3044: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:25:02 compute-0 nova_compute[254092]: 2025-11-25 17:25:02.710 254096 DEBUG nova.network.neutron [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Successfully created port: 329205f7-aac9-4c77-b1eb-b7af04c34038 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 17:25:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3045: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:25:03 compute-0 nova_compute[254092]: 2025-11-25 17:25:03.548 254096 DEBUG nova.network.neutron [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Successfully updated port: 329205f7-aac9-4c77-b1eb-b7af04c34038 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:25:03 compute-0 nova_compute[254092]: 2025-11-25 17:25:03.566 254096 DEBUG oslo_concurrency.lockutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "refresh_cache-03d40f4f-1bf3-4e1d-8844-bae4b32443cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:25:03 compute-0 nova_compute[254092]: 2025-11-25 17:25:03.567 254096 DEBUG oslo_concurrency.lockutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquired lock "refresh_cache-03d40f4f-1bf3-4e1d-8844-bae4b32443cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:25:03 compute-0 nova_compute[254092]: 2025-11-25 17:25:03.567 254096 DEBUG nova.network.neutron [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 17:25:03 compute-0 nova_compute[254092]: 2025-11-25 17:25:03.667 254096 DEBUG nova.compute.manager [req-746388d1-42be-4767-88ee-c1b6babafdee req-8b529b46-023e-4ff2-b16c-2f527d4b08fa a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Received event network-changed-329205f7-aac9-4c77-b1eb-b7af04c34038 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:25:03 compute-0 nova_compute[254092]: 2025-11-25 17:25:03.668 254096 DEBUG nova.compute.manager [req-746388d1-42be-4767-88ee-c1b6babafdee req-8b529b46-023e-4ff2-b16c-2f527d4b08fa a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Refreshing instance network info cache due to event network-changed-329205f7-aac9-4c77-b1eb-b7af04c34038. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:25:03 compute-0 nova_compute[254092]: 2025-11-25 17:25:03.668 254096 DEBUG oslo_concurrency.lockutils [req-746388d1-42be-4767-88ee-c1b6babafdee req-8b529b46-023e-4ff2-b16c-2f527d4b08fa a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-03d40f4f-1bf3-4e1d-8844-bae4b32443cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:25:03 compute-0 nova_compute[254092]: 2025-11-25 17:25:03.716 254096 DEBUG nova.network.neutron [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:25:04 compute-0 ceph-mon[74985]: pgmap v3045: 321 pgs: 321 active+clean; 41 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:25:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:25:04 compute-0 nova_compute[254092]: 2025-11-25 17:25:04.862 254096 DEBUG nova.network.neutron [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Updating instance_info_cache with network_info: [{"id": "329205f7-aac9-4c77-b1eb-b7af04c34038", "address": "fa:16:3e:8d:69:2e", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:692e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap329205f7-aa", "ovs_interfaceid": "329205f7-aac9-4c77-b1eb-b7af04c34038", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:25:04 compute-0 nova_compute[254092]: 2025-11-25 17:25:04.883 254096 DEBUG oslo_concurrency.lockutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Releasing lock "refresh_cache-03d40f4f-1bf3-4e1d-8844-bae4b32443cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:25:04 compute-0 nova_compute[254092]: 2025-11-25 17:25:04.884 254096 DEBUG nova.compute.manager [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Instance network_info: |[{"id": "329205f7-aac9-4c77-b1eb-b7af04c34038", "address": "fa:16:3e:8d:69:2e", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:692e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap329205f7-aa", "ovs_interfaceid": "329205f7-aac9-4c77-b1eb-b7af04c34038", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 17:25:04 compute-0 nova_compute[254092]: 2025-11-25 17:25:04.885 254096 DEBUG oslo_concurrency.lockutils [req-746388d1-42be-4767-88ee-c1b6babafdee req-8b529b46-023e-4ff2-b16c-2f527d4b08fa a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-03d40f4f-1bf3-4e1d-8844-bae4b32443cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:25:04 compute-0 nova_compute[254092]: 2025-11-25 17:25:04.885 254096 DEBUG nova.network.neutron [req-746388d1-42be-4767-88ee-c1b6babafdee req-8b529b46-023e-4ff2-b16c-2f527d4b08fa a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Refreshing network info cache for port 329205f7-aac9-4c77-b1eb-b7af04c34038 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:25:04 compute-0 nova_compute[254092]: 2025-11-25 17:25:04.888 254096 DEBUG nova.virt.libvirt.driver [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Start _get_guest_xml network_info=[{"id": "329205f7-aac9-4c77-b1eb-b7af04c34038", "address": "fa:16:3e:8d:69:2e", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:692e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap329205f7-aa", "ovs_interfaceid": "329205f7-aac9-4c77-b1eb-b7af04c34038", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 17:25:04 compute-0 nova_compute[254092]: 2025-11-25 17:25:04.894 254096 WARNING nova.virt.libvirt.driver [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:25:04 compute-0 nova_compute[254092]: 2025-11-25 17:25:04.900 254096 DEBUG nova.virt.libvirt.host [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 17:25:04 compute-0 nova_compute[254092]: 2025-11-25 17:25:04.901 254096 DEBUG nova.virt.libvirt.host [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 17:25:04 compute-0 nova_compute[254092]: 2025-11-25 17:25:04.910 254096 DEBUG nova.virt.libvirt.host [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 17:25:04 compute-0 nova_compute[254092]: 2025-11-25 17:25:04.911 254096 DEBUG nova.virt.libvirt.host [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 17:25:04 compute-0 nova_compute[254092]: 2025-11-25 17:25:04.911 254096 DEBUG nova.virt.libvirt.driver [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 17:25:04 compute-0 nova_compute[254092]: 2025-11-25 17:25:04.912 254096 DEBUG nova.virt.hardware [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 17:25:04 compute-0 nova_compute[254092]: 2025-11-25 17:25:04.912 254096 DEBUG nova.virt.hardware [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 17:25:04 compute-0 nova_compute[254092]: 2025-11-25 17:25:04.912 254096 DEBUG nova.virt.hardware [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 17:25:04 compute-0 nova_compute[254092]: 2025-11-25 17:25:04.912 254096 DEBUG nova.virt.hardware [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 17:25:04 compute-0 nova_compute[254092]: 2025-11-25 17:25:04.913 254096 DEBUG nova.virt.hardware [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 17:25:04 compute-0 nova_compute[254092]: 2025-11-25 17:25:04.913 254096 DEBUG nova.virt.hardware [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 17:25:04 compute-0 nova_compute[254092]: 2025-11-25 17:25:04.913 254096 DEBUG nova.virt.hardware [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 17:25:04 compute-0 nova_compute[254092]: 2025-11-25 17:25:04.913 254096 DEBUG nova.virt.hardware [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 17:25:04 compute-0 nova_compute[254092]: 2025-11-25 17:25:04.914 254096 DEBUG nova.virt.hardware [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 17:25:04 compute-0 nova_compute[254092]: 2025-11-25 17:25:04.914 254096 DEBUG nova.virt.hardware [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 17:25:04 compute-0 nova_compute[254092]: 2025-11-25 17:25:04.914 254096 DEBUG nova.virt.hardware [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 17:25:04 compute-0 nova_compute[254092]: 2025-11-25 17:25:04.917 254096 DEBUG oslo_concurrency.processutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:25:04 compute-0 nova_compute[254092]: 2025-11-25 17:25:04.953 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:25:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3046: 321 pgs: 321 active+clean; 74 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.4 MiB/s wr, 25 op/s
Nov 25 17:25:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:25:05 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3325347158' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:25:05 compute-0 nova_compute[254092]: 2025-11-25 17:25:05.435 254096 DEBUG oslo_concurrency.processutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:25:05 compute-0 nova_compute[254092]: 2025-11-25 17:25:05.475 254096 DEBUG nova.storage.rbd_utils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 03d40f4f-1bf3-4e1d-8844-bae4b32443cc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:25:05 compute-0 nova_compute[254092]: 2025-11-25 17:25:05.481 254096 DEBUG oslo_concurrency.processutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:25:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:25:05 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1931644791' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:25:05 compute-0 nova_compute[254092]: 2025-11-25 17:25:05.983 254096 DEBUG oslo_concurrency.processutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:25:05 compute-0 nova_compute[254092]: 2025-11-25 17:25:05.985 254096 DEBUG nova.virt.libvirt.vif [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:24:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1562740359',display_name='tempest-TestGettingAddress-server-1562740359',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1562740359',id=149,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIGApEL69TBJfF0aCv+AtYr+DxsRqmx1WU3nexhWfLc3znEGVl+tJi462cWrzK0K7v6BMW214uJp8PUOC2Xg8eQ3E5Du3KhcTVIPqLHPMIkpL2RlGG9q2oDwpns6tg2WOA==',key_name='tempest-TestGettingAddress-1324577306',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-01dpdy1y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:25:01Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=03d40f4f-1bf3-4e1d-8844-bae4b32443cc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "329205f7-aac9-4c77-b1eb-b7af04c34038", "address": "fa:16:3e:8d:69:2e", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:692e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap329205f7-aa", "ovs_interfaceid": "329205f7-aac9-4c77-b1eb-b7af04c34038", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:25:05 compute-0 nova_compute[254092]: 2025-11-25 17:25:05.986 254096 DEBUG nova.network.os_vif_util [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "329205f7-aac9-4c77-b1eb-b7af04c34038", "address": "fa:16:3e:8d:69:2e", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:692e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap329205f7-aa", "ovs_interfaceid": "329205f7-aac9-4c77-b1eb-b7af04c34038", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:25:05 compute-0 nova_compute[254092]: 2025-11-25 17:25:05.987 254096 DEBUG nova.network.os_vif_util [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:69:2e,bridge_name='br-int',has_traffic_filtering=True,id=329205f7-aac9-4c77-b1eb-b7af04c34038,network=Network(88c42593-de32-4e23-b7f7-5cac507fe68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap329205f7-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:25:05 compute-0 nova_compute[254092]: 2025-11-25 17:25:05.989 254096 DEBUG nova.objects.instance [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'pci_devices' on Instance uuid 03d40f4f-1bf3-4e1d-8844-bae4b32443cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:25:06 compute-0 nova_compute[254092]: 2025-11-25 17:25:06.002 254096 DEBUG nova.virt.libvirt.driver [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] End _get_guest_xml xml=<domain type="kvm">
Nov 25 17:25:06 compute-0 nova_compute[254092]:   <uuid>03d40f4f-1bf3-4e1d-8844-bae4b32443cc</uuid>
Nov 25 17:25:06 compute-0 nova_compute[254092]:   <name>instance-00000095</name>
Nov 25 17:25:06 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 17:25:06 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 17:25:06 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:25:06 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:       <nova:name>tempest-TestGettingAddress-server-1562740359</nova:name>
Nov 25 17:25:06 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 17:25:04</nova:creationTime>
Nov 25 17:25:06 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 17:25:06 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 17:25:06 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 17:25:06 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 17:25:06 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:25:06 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 17:25:06 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 17:25:06 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 17:25:06 compute-0 nova_compute[254092]:         <nova:user uuid="a647bb22c9a54e5fa9ddfff588126693">tempest-TestGettingAddress-207202764-project-member</nova:user>
Nov 25 17:25:06 compute-0 nova_compute[254092]:         <nova:project uuid="67b7f8e8aac6441f992a182f8d893100">tempest-TestGettingAddress-207202764</nova:project>
Nov 25 17:25:06 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 17:25:06 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 17:25:06 compute-0 nova_compute[254092]:         <nova:port uuid="329205f7-aac9-4c77-b1eb-b7af04c34038">
Nov 25 17:25:06 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe8d:692e" ipVersion="6"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 17:25:06 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 17:25:06 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:25:06 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 17:25:06 compute-0 nova_compute[254092]:     <system>
Nov 25 17:25:06 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 17:25:06 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 17:25:06 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:25:06 compute-0 nova_compute[254092]:       <entry name="serial">03d40f4f-1bf3-4e1d-8844-bae4b32443cc</entry>
Nov 25 17:25:06 compute-0 nova_compute[254092]:       <entry name="uuid">03d40f4f-1bf3-4e1d-8844-bae4b32443cc</entry>
Nov 25 17:25:06 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     </system>
Nov 25 17:25:06 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:25:06 compute-0 nova_compute[254092]:   <os>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:   </os>
Nov 25 17:25:06 compute-0 nova_compute[254092]:   <features>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:   </features>
Nov 25 17:25:06 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 17:25:06 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:25:06 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 17:25:06 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:25:06 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 17:25:06 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/03d40f4f-1bf3-4e1d-8844-bae4b32443cc_disk">
Nov 25 17:25:06 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:       </source>
Nov 25 17:25:06 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:25:06 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:25:06 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 17:25:06 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/03d40f4f-1bf3-4e1d-8844-bae4b32443cc_disk.config">
Nov 25 17:25:06 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:       </source>
Nov 25 17:25:06 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:25:06 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:25:06 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 17:25:06 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:8d:69:2e"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:       <target dev="tap329205f7-aa"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 17:25:06 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/03d40f4f-1bf3-4e1d-8844-bae4b32443cc/console.log" append="off"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     <video>
Nov 25 17:25:06 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     </video>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 17:25:06 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 17:25:06 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 17:25:06 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:25:06 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:25:06 compute-0 nova_compute[254092]: </domain>
Nov 25 17:25:06 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 17:25:06 compute-0 nova_compute[254092]: 2025-11-25 17:25:06.004 254096 DEBUG nova.compute.manager [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Preparing to wait for external event network-vif-plugged-329205f7-aac9-4c77-b1eb-b7af04c34038 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 17:25:06 compute-0 nova_compute[254092]: 2025-11-25 17:25:06.005 254096 DEBUG oslo_concurrency.lockutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:25:06 compute-0 nova_compute[254092]: 2025-11-25 17:25:06.006 254096 DEBUG oslo_concurrency.lockutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:25:06 compute-0 nova_compute[254092]: 2025-11-25 17:25:06.006 254096 DEBUG oslo_concurrency.lockutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:25:06 compute-0 nova_compute[254092]: 2025-11-25 17:25:06.007 254096 DEBUG nova.virt.libvirt.vif [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:24:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1562740359',display_name='tempest-TestGettingAddress-server-1562740359',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1562740359',id=149,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIGApEL69TBJfF0aCv+AtYr+DxsRqmx1WU3nexhWfLc3znEGVl+tJi462cWrzK0K7v6BMW214uJp8PUOC2Xg8eQ3E5Du3KhcTVIPqLHPMIkpL2RlGG9q2oDwpns6tg2WOA==',key_name='tempest-TestGettingAddress-1324577306',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-01dpdy1y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:25:01Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=03d40f4f-1bf3-4e1d-8844-bae4b32443cc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "329205f7-aac9-4c77-b1eb-b7af04c34038", "address": "fa:16:3e:8d:69:2e", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:692e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap329205f7-aa", "ovs_interfaceid": "329205f7-aac9-4c77-b1eb-b7af04c34038", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:25:06 compute-0 nova_compute[254092]: 2025-11-25 17:25:06.008 254096 DEBUG nova.network.os_vif_util [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "329205f7-aac9-4c77-b1eb-b7af04c34038", "address": "fa:16:3e:8d:69:2e", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:692e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap329205f7-aa", "ovs_interfaceid": "329205f7-aac9-4c77-b1eb-b7af04c34038", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:25:06 compute-0 nova_compute[254092]: 2025-11-25 17:25:06.009 254096 DEBUG nova.network.os_vif_util [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:69:2e,bridge_name='br-int',has_traffic_filtering=True,id=329205f7-aac9-4c77-b1eb-b7af04c34038,network=Network(88c42593-de32-4e23-b7f7-5cac507fe68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap329205f7-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:25:06 compute-0 nova_compute[254092]: 2025-11-25 17:25:06.009 254096 DEBUG os_vif [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:69:2e,bridge_name='br-int',has_traffic_filtering=True,id=329205f7-aac9-4c77-b1eb-b7af04c34038,network=Network(88c42593-de32-4e23-b7f7-5cac507fe68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap329205f7-aa') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:25:06 compute-0 nova_compute[254092]: 2025-11-25 17:25:06.010 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:25:06 compute-0 nova_compute[254092]: 2025-11-25 17:25:06.011 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:25:06 compute-0 nova_compute[254092]: 2025-11-25 17:25:06.011 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:25:06 compute-0 nova_compute[254092]: 2025-11-25 17:25:06.016 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:25:06 compute-0 nova_compute[254092]: 2025-11-25 17:25:06.016 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap329205f7-aa, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:25:06 compute-0 nova_compute[254092]: 2025-11-25 17:25:06.017 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap329205f7-aa, col_values=(('external_ids', {'iface-id': '329205f7-aac9-4c77-b1eb-b7af04c34038', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8d:69:2e', 'vm-uuid': '03d40f4f-1bf3-4e1d-8844-bae4b32443cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:25:06 compute-0 nova_compute[254092]: 2025-11-25 17:25:06.019 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:25:06 compute-0 NetworkManager[48891]: <info>  [1764091506.0223] manager: (tap329205f7-aa): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/664)
Nov 25 17:25:06 compute-0 nova_compute[254092]: 2025-11-25 17:25:06.023 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:25:06 compute-0 nova_compute[254092]: 2025-11-25 17:25:06.028 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:25:06 compute-0 nova_compute[254092]: 2025-11-25 17:25:06.029 254096 INFO os_vif [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:69:2e,bridge_name='br-int',has_traffic_filtering=True,id=329205f7-aac9-4c77-b1eb-b7af04c34038,network=Network(88c42593-de32-4e23-b7f7-5cac507fe68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap329205f7-aa')
Nov 25 17:25:06 compute-0 nova_compute[254092]: 2025-11-25 17:25:06.069 254096 DEBUG nova.virt.libvirt.driver [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:25:06 compute-0 nova_compute[254092]: 2025-11-25 17:25:06.070 254096 DEBUG nova.virt.libvirt.driver [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:25:06 compute-0 nova_compute[254092]: 2025-11-25 17:25:06.070 254096 DEBUG nova.virt.libvirt.driver [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No VIF found with MAC fa:16:3e:8d:69:2e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:25:06 compute-0 nova_compute[254092]: 2025-11-25 17:25:06.071 254096 INFO nova.virt.libvirt.driver [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Using config drive
Nov 25 17:25:06 compute-0 nova_compute[254092]: 2025-11-25 17:25:06.097 254096 DEBUG nova.storage.rbd_utils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 03d40f4f-1bf3-4e1d-8844-bae4b32443cc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:25:06 compute-0 ceph-mon[74985]: pgmap v3046: 321 pgs: 321 active+clean; 74 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.4 MiB/s wr, 25 op/s
Nov 25 17:25:06 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3325347158' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:25:06 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1931644791' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:25:06 compute-0 nova_compute[254092]: 2025-11-25 17:25:06.773 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:25:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3047: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:25:07 compute-0 nova_compute[254092]: 2025-11-25 17:25:07.421 254096 INFO nova.virt.libvirt.driver [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Creating config drive at /var/lib/nova/instances/03d40f4f-1bf3-4e1d-8844-bae4b32443cc/disk.config
Nov 25 17:25:07 compute-0 nova_compute[254092]: 2025-11-25 17:25:07.427 254096 DEBUG oslo_concurrency.processutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/03d40f4f-1bf3-4e1d-8844-bae4b32443cc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp98_qw4mz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:25:07 compute-0 nova_compute[254092]: 2025-11-25 17:25:07.599 254096 DEBUG oslo_concurrency.processutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/03d40f4f-1bf3-4e1d-8844-bae4b32443cc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp98_qw4mz" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:25:07 compute-0 nova_compute[254092]: 2025-11-25 17:25:07.644 254096 DEBUG nova.storage.rbd_utils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 03d40f4f-1bf3-4e1d-8844-bae4b32443cc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:25:07 compute-0 nova_compute[254092]: 2025-11-25 17:25:07.650 254096 DEBUG oslo_concurrency.processutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/03d40f4f-1bf3-4e1d-8844-bae4b32443cc/disk.config 03d40f4f-1bf3-4e1d-8844-bae4b32443cc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:25:07 compute-0 nova_compute[254092]: 2025-11-25 17:25:07.816 254096 DEBUG oslo_concurrency.processutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/03d40f4f-1bf3-4e1d-8844-bae4b32443cc/disk.config 03d40f4f-1bf3-4e1d-8844-bae4b32443cc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:25:07 compute-0 nova_compute[254092]: 2025-11-25 17:25:07.818 254096 INFO nova.virt.libvirt.driver [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Deleting local config drive /var/lib/nova/instances/03d40f4f-1bf3-4e1d-8844-bae4b32443cc/disk.config because it was imported into RBD.
Nov 25 17:25:07 compute-0 kernel: tap329205f7-aa: entered promiscuous mode
Nov 25 17:25:07 compute-0 NetworkManager[48891]: <info>  [1764091507.8767] manager: (tap329205f7-aa): new Tun device (/org/freedesktop/NetworkManager/Devices/665)
Nov 25 17:25:07 compute-0 ovn_controller[153477]: 2025-11-25T17:25:07Z|01600|binding|INFO|Claiming lport 329205f7-aac9-4c77-b1eb-b7af04c34038 for this chassis.
Nov 25 17:25:07 compute-0 ovn_controller[153477]: 2025-11-25T17:25:07Z|01601|binding|INFO|329205f7-aac9-4c77-b1eb-b7af04c34038: Claiming fa:16:3e:8d:69:2e 10.100.0.13 2001:db8::f816:3eff:fe8d:692e
Nov 25 17:25:07 compute-0 nova_compute[254092]: 2025-11-25 17:25:07.888 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:25:07 compute-0 nova_compute[254092]: 2025-11-25 17:25:07.894 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:25:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:25:07.912 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:69:2e 10.100.0.13 2001:db8::f816:3eff:fe8d:692e'], port_security=['fa:16:3e:8d:69:2e 10.100.0.13 2001:db8::f816:3eff:fe8d:692e'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28 2001:db8::f816:3eff:fe8d:692e/64', 'neutron:device_id': '03d40f4f-1bf3-4e1d-8844-bae4b32443cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-88c42593-de32-4e23-b7f7-5cac507fe68d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ae658bb9-0cc7-4dfb-9d61-46336c3a204c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bc6b1ea5-74e3-4908-b1a6-823fc7f5175e, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=329205f7-aac9-4c77-b1eb-b7af04c34038) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:25:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:25:07.914 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 329205f7-aac9-4c77-b1eb-b7af04c34038 in datapath 88c42593-de32-4e23-b7f7-5cac507fe68d bound to our chassis
Nov 25 17:25:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:25:07.915 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 88c42593-de32-4e23-b7f7-5cac507fe68d
Nov 25 17:25:07 compute-0 systemd-machined[216343]: New machine qemu-183-instance-00000095.
Nov 25 17:25:07 compute-0 systemd-udevd[422738]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:25:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:25:07.928 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[53afe16e-3504-4a80-b5e0-6b6349a9c54c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:25:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:25:07.929 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap88c42593-d1 in ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 17:25:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:25:07.931 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap88c42593-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 17:25:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:25:07.931 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ddbb61cc-04fd-4cbd-afb9-63cc2a24b759]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:25:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:25:07.932 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[8a6327ce-2f34-48d9-8718-be67d4c8a93f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:25:07 compute-0 systemd[1]: Started Virtual Machine qemu-183-instance-00000095.
Nov 25 17:25:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:25:07.942 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[8fe575e9-4cec-475b-9837-f7e6bf30231b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:25:07 compute-0 NetworkManager[48891]: <info>  [1764091507.9523] device (tap329205f7-aa): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:25:07 compute-0 NetworkManager[48891]: <info>  [1764091507.9538] device (tap329205f7-aa): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:25:07 compute-0 ovn_controller[153477]: 2025-11-25T17:25:07Z|01602|binding|INFO|Setting lport 329205f7-aac9-4c77-b1eb-b7af04c34038 ovn-installed in OVS
Nov 25 17:25:07 compute-0 ovn_controller[153477]: 2025-11-25T17:25:07Z|01603|binding|INFO|Setting lport 329205f7-aac9-4c77-b1eb-b7af04c34038 up in Southbound
Nov 25 17:25:07 compute-0 nova_compute[254092]: 2025-11-25 17:25:07.964 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:25:07 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:25:07.969 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[577eaad2-b5b9-4e0f-a832-6211993ece3e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:25:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:25:08.005 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[e082f5dd-b413-465e-9940-64b1a09d506f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:25:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:25:08.013 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0a45030d-9cdb-4eec-8c26-4699b31dcac9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:25:08 compute-0 NetworkManager[48891]: <info>  [1764091508.0151] manager: (tap88c42593-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/666)
Nov 25 17:25:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:25:08.054 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[99ad78ab-682a-4cbe-981e-b84e1f6edbe0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:25:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:25:08.058 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[909e30b8-7d9f-474d-b653-23a5dd24a5eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:25:08 compute-0 NetworkManager[48891]: <info>  [1764091508.0905] device (tap88c42593-d0): carrier: link connected
Nov 25 17:25:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:25:08.098 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[2fb8b48a-72ba-42a5-bb8a-8f65a43bb7dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:25:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:25:08.116 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[18f7ed6b-1588-4837-b48a-9addb1b07f07]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap88c42593-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d7:e9:6e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 466], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 796568, 'reachable_time': 41537, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 422770, 'error': None, 'target': 'ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:25:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:25:08.136 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[63198f45-81a4-4433-9a9e-5016074b811a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed7:e96e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 796568, 'tstamp': 796568}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 422771, 'error': None, 'target': 'ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:25:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:25:08.162 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f6accc09-8b11-4442-a91b-076e1cfbd1b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap88c42593-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d7:e9:6e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 466], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 796568, 'reachable_time': 41537, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 422772, 'error': None, 'target': 'ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:25:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:25:08.208 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[20474430-9f3f-45b2-91ca-ece60439f454]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:25:08 compute-0 nova_compute[254092]: 2025-11-25 17:25:08.214 254096 DEBUG nova.network.neutron [req-746388d1-42be-4767-88ee-c1b6babafdee req-8b529b46-023e-4ff2-b16c-2f527d4b08fa a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Updated VIF entry in instance network info cache for port 329205f7-aac9-4c77-b1eb-b7af04c34038. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:25:08 compute-0 nova_compute[254092]: 2025-11-25 17:25:08.215 254096 DEBUG nova.network.neutron [req-746388d1-42be-4767-88ee-c1b6babafdee req-8b529b46-023e-4ff2-b16c-2f527d4b08fa a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Updating instance_info_cache with network_info: [{"id": "329205f7-aac9-4c77-b1eb-b7af04c34038", "address": "fa:16:3e:8d:69:2e", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:692e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap329205f7-aa", "ovs_interfaceid": "329205f7-aac9-4c77-b1eb-b7af04c34038", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:25:08 compute-0 nova_compute[254092]: 2025-11-25 17:25:08.226 254096 DEBUG nova.compute.manager [req-ac92e3e9-7448-4b99-b283-2f7213d74af4 req-518f88d9-fd3b-479e-9c95-ea254c61de0f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Received event network-vif-plugged-329205f7-aac9-4c77-b1eb-b7af04c34038 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:25:08 compute-0 nova_compute[254092]: 2025-11-25 17:25:08.227 254096 DEBUG oslo_concurrency.lockutils [req-ac92e3e9-7448-4b99-b283-2f7213d74af4 req-518f88d9-fd3b-479e-9c95-ea254c61de0f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:25:08 compute-0 nova_compute[254092]: 2025-11-25 17:25:08.227 254096 DEBUG oslo_concurrency.lockutils [req-ac92e3e9-7448-4b99-b283-2f7213d74af4 req-518f88d9-fd3b-479e-9c95-ea254c61de0f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:25:08 compute-0 nova_compute[254092]: 2025-11-25 17:25:08.229 254096 DEBUG oslo_concurrency.lockutils [req-ac92e3e9-7448-4b99-b283-2f7213d74af4 req-518f88d9-fd3b-479e-9c95-ea254c61de0f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:25:08 compute-0 nova_compute[254092]: 2025-11-25 17:25:08.230 254096 DEBUG nova.compute.manager [req-ac92e3e9-7448-4b99-b283-2f7213d74af4 req-518f88d9-fd3b-479e-9c95-ea254c61de0f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Processing event network-vif-plugged-329205f7-aac9-4c77-b1eb-b7af04c34038 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 17:25:08 compute-0 nova_compute[254092]: 2025-11-25 17:25:08.233 254096 DEBUG oslo_concurrency.lockutils [req-746388d1-42be-4767-88ee-c1b6babafdee req-8b529b46-023e-4ff2-b16c-2f527d4b08fa a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-03d40f4f-1bf3-4e1d-8844-bae4b32443cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:25:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:25:08.288 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1a7299f0-e640-4a35-8347-90ed4bc932ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:25:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:25:08.289 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88c42593-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:25:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:25:08.290 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:25:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:25:08.290 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap88c42593-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:25:08 compute-0 nova_compute[254092]: 2025-11-25 17:25:08.293 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:25:08 compute-0 NetworkManager[48891]: <info>  [1764091508.2944] manager: (tap88c42593-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/667)
Nov 25 17:25:08 compute-0 kernel: tap88c42593-d0: entered promiscuous mode
Nov 25 17:25:08 compute-0 nova_compute[254092]: 2025-11-25 17:25:08.295 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:25:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:25:08.296 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap88c42593-d0, col_values=(('external_ids', {'iface-id': '10a05078-1fb1-4ddb-9ad4-b52f3a95063f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:25:08 compute-0 nova_compute[254092]: 2025-11-25 17:25:08.297 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:25:08 compute-0 ovn_controller[153477]: 2025-11-25T17:25:08Z|01604|binding|INFO|Releasing lport 10a05078-1fb1-4ddb-9ad4-b52f3a95063f from this chassis (sb_readonly=0)
Nov 25 17:25:08 compute-0 nova_compute[254092]: 2025-11-25 17:25:08.300 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:25:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:25:08.300 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/88c42593-de32-4e23-b7f7-5cac507fe68d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/88c42593-de32-4e23-b7f7-5cac507fe68d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 17:25:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:25:08.301 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a802b99d-2778-4008-bb50-c0380a2c2ab8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:25:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:25:08.302 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 17:25:08 compute-0 ovn_metadata_agent[163333]: global
Nov 25 17:25:08 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 17:25:08 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-88c42593-de32-4e23-b7f7-5cac507fe68d
Nov 25 17:25:08 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 17:25:08 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 17:25:08 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 17:25:08 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/88c42593-de32-4e23-b7f7-5cac507fe68d.pid.haproxy
Nov 25 17:25:08 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 17:25:08 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:25:08 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 17:25:08 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 17:25:08 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 17:25:08 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 17:25:08 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 17:25:08 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 17:25:08 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 17:25:08 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 17:25:08 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 17:25:08 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 17:25:08 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 17:25:08 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 17:25:08 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 17:25:08 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:25:08 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:25:08 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 17:25:08 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 17:25:08 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 17:25:08 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 88c42593-de32-4e23-b7f7-5cac507fe68d
Nov 25 17:25:08 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 17:25:08 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:25:08.303 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d', 'env', 'PROCESS_TAG=haproxy-88c42593-de32-4e23-b7f7-5cac507fe68d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/88c42593-de32-4e23-b7f7-5cac507fe68d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 17:25:08 compute-0 nova_compute[254092]: 2025-11-25 17:25:08.313 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:25:08 compute-0 ceph-mon[74985]: pgmap v3047: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:25:08 compute-0 nova_compute[254092]: 2025-11-25 17:25:08.731 254096 DEBUG nova.compute.manager [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 17:25:08 compute-0 nova_compute[254092]: 2025-11-25 17:25:08.732 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091508.7321813, 03d40f4f-1bf3-4e1d-8844-bae4b32443cc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:25:08 compute-0 nova_compute[254092]: 2025-11-25 17:25:08.732 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] VM Started (Lifecycle Event)
Nov 25 17:25:08 compute-0 nova_compute[254092]: 2025-11-25 17:25:08.736 254096 DEBUG nova.virt.libvirt.driver [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 17:25:08 compute-0 nova_compute[254092]: 2025-11-25 17:25:08.740 254096 INFO nova.virt.libvirt.driver [-] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Instance spawned successfully.
Nov 25 17:25:08 compute-0 nova_compute[254092]: 2025-11-25 17:25:08.740 254096 DEBUG nova.virt.libvirt.driver [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 17:25:08 compute-0 nova_compute[254092]: 2025-11-25 17:25:08.757 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:25:08 compute-0 podman[422846]: 2025-11-25 17:25:08.758275035 +0000 UTC m=+0.061049373 container create 5bd12a84d113877a5a62a11c95c94e61a229075b07ccac357f07c84b7520dfbb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:25:08 compute-0 nova_compute[254092]: 2025-11-25 17:25:08.771 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:25:08 compute-0 nova_compute[254092]: 2025-11-25 17:25:08.777 254096 DEBUG nova.virt.libvirt.driver [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:25:08 compute-0 nova_compute[254092]: 2025-11-25 17:25:08.778 254096 DEBUG nova.virt.libvirt.driver [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:25:08 compute-0 nova_compute[254092]: 2025-11-25 17:25:08.779 254096 DEBUG nova.virt.libvirt.driver [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:25:08 compute-0 nova_compute[254092]: 2025-11-25 17:25:08.779 254096 DEBUG nova.virt.libvirt.driver [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:25:08 compute-0 nova_compute[254092]: 2025-11-25 17:25:08.779 254096 DEBUG nova.virt.libvirt.driver [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:25:08 compute-0 nova_compute[254092]: 2025-11-25 17:25:08.780 254096 DEBUG nova.virt.libvirt.driver [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:25:08 compute-0 nova_compute[254092]: 2025-11-25 17:25:08.810 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:25:08 compute-0 nova_compute[254092]: 2025-11-25 17:25:08.811 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091508.7328572, 03d40f4f-1bf3-4e1d-8844-bae4b32443cc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:25:08 compute-0 nova_compute[254092]: 2025-11-25 17:25:08.811 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] VM Paused (Lifecycle Event)
Nov 25 17:25:08 compute-0 systemd[1]: Started libpod-conmon-5bd12a84d113877a5a62a11c95c94e61a229075b07ccac357f07c84b7520dfbb.scope.
Nov 25 17:25:08 compute-0 podman[422846]: 2025-11-25 17:25:08.72427627 +0000 UTC m=+0.027050648 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 17:25:08 compute-0 nova_compute[254092]: 2025-11-25 17:25:08.846 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:25:08 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:25:08 compute-0 nova_compute[254092]: 2025-11-25 17:25:08.853 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091508.736073, 03d40f4f-1bf3-4e1d-8844-bae4b32443cc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:25:08 compute-0 nova_compute[254092]: 2025-11-25 17:25:08.853 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] VM Resumed (Lifecycle Event)
Nov 25 17:25:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a2aaa9402eb79ede98028373e6a92c83002a08f5e7749bcbd352c3d9be74b73/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 17:25:08 compute-0 nova_compute[254092]: 2025-11-25 17:25:08.859 254096 INFO nova.compute.manager [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Took 7.79 seconds to spawn the instance on the hypervisor.
Nov 25 17:25:08 compute-0 nova_compute[254092]: 2025-11-25 17:25:08.860 254096 DEBUG nova.compute.manager [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:25:08 compute-0 podman[422846]: 2025-11-25 17:25:08.870424688 +0000 UTC m=+0.173199046 container init 5bd12a84d113877a5a62a11c95c94e61a229075b07ccac357f07c84b7520dfbb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 17:25:08 compute-0 nova_compute[254092]: 2025-11-25 17:25:08.874 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:25:08 compute-0 podman[422846]: 2025-11-25 17:25:08.878531989 +0000 UTC m=+0.181306347 container start 5bd12a84d113877a5a62a11c95c94e61a229075b07ccac357f07c84b7520dfbb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 25 17:25:08 compute-0 nova_compute[254092]: 2025-11-25 17:25:08.879 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:25:08 compute-0 neutron-haproxy-ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d[422861]: [NOTICE]   (422865) : New worker (422867) forked
Nov 25 17:25:08 compute-0 neutron-haproxy-ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d[422861]: [NOTICE]   (422865) : Loading success.
Nov 25 17:25:08 compute-0 nova_compute[254092]: 2025-11-25 17:25:08.905 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:25:08 compute-0 nova_compute[254092]: 2025-11-25 17:25:08.945 254096 INFO nova.compute.manager [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Took 8.74 seconds to build instance.
Nov 25 17:25:08 compute-0 nova_compute[254092]: 2025-11-25 17:25:08.964 254096 DEBUG oslo_concurrency.lockutils [None req-f0dae703-7df7-4c60-b1f3-4e5878e263af a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.812s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:25:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3048: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:25:09 compute-0 nova_compute[254092]: 2025-11-25 17:25:09.480 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:25:09 compute-0 nova_compute[254092]: 2025-11-25 17:25:09.500 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Triggering sync for uuid 03d40f4f-1bf3-4e1d-8844-bae4b32443cc _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 25 17:25:09 compute-0 nova_compute[254092]: 2025-11-25 17:25:09.501 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:25:09 compute-0 nova_compute[254092]: 2025-11-25 17:25:09.502 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:25:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:25:09 compute-0 nova_compute[254092]: 2025-11-25 17:25:09.537 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:25:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:25:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:25:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:25:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:25:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:25:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:25:10 compute-0 nova_compute[254092]: 2025-11-25 17:25:10.318 254096 DEBUG nova.compute.manager [req-8d3e3b34-95fb-463a-8ff2-fca34f0b6c1a req-080f2b44-6abc-4053-a9b0-037402ff603f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Received event network-vif-plugged-329205f7-aac9-4c77-b1eb-b7af04c34038 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:25:10 compute-0 nova_compute[254092]: 2025-11-25 17:25:10.319 254096 DEBUG oslo_concurrency.lockutils [req-8d3e3b34-95fb-463a-8ff2-fca34f0b6c1a req-080f2b44-6abc-4053-a9b0-037402ff603f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:25:10 compute-0 nova_compute[254092]: 2025-11-25 17:25:10.319 254096 DEBUG oslo_concurrency.lockutils [req-8d3e3b34-95fb-463a-8ff2-fca34f0b6c1a req-080f2b44-6abc-4053-a9b0-037402ff603f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:25:10 compute-0 nova_compute[254092]: 2025-11-25 17:25:10.319 254096 DEBUG oslo_concurrency.lockutils [req-8d3e3b34-95fb-463a-8ff2-fca34f0b6c1a req-080f2b44-6abc-4053-a9b0-037402ff603f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:25:10 compute-0 nova_compute[254092]: 2025-11-25 17:25:10.319 254096 DEBUG nova.compute.manager [req-8d3e3b34-95fb-463a-8ff2-fca34f0b6c1a req-080f2b44-6abc-4053-a9b0-037402ff603f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] No waiting events found dispatching network-vif-plugged-329205f7-aac9-4c77-b1eb-b7af04c34038 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:25:10 compute-0 nova_compute[254092]: 2025-11-25 17:25:10.319 254096 WARNING nova.compute.manager [req-8d3e3b34-95fb-463a-8ff2-fca34f0b6c1a req-080f2b44-6abc-4053-a9b0-037402ff603f a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Received unexpected event network-vif-plugged-329205f7-aac9-4c77-b1eb-b7af04c34038 for instance with vm_state active and task_state None.
Nov 25 17:25:10 compute-0 ceph-mon[74985]: pgmap v3048: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:25:11 compute-0 nova_compute[254092]: 2025-11-25 17:25:11.020 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:25:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3049: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 81 op/s
Nov 25 17:25:11 compute-0 nova_compute[254092]: 2025-11-25 17:25:11.775 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:25:12 compute-0 ceph-mon[74985]: pgmap v3049: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 81 op/s
Nov 25 17:25:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3050: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 81 op/s
Nov 25 17:25:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:25:13.663 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:25:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:25:13.664 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:25:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:25:13.665 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:25:14 compute-0 ceph-mon[74985]: pgmap v3050: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 81 op/s
Nov 25 17:25:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:25:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3051: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 17:25:15 compute-0 nova_compute[254092]: 2025-11-25 17:25:15.474 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:25:15 compute-0 ovn_controller[153477]: 2025-11-25T17:25:15Z|01605|binding|INFO|Releasing lport 10a05078-1fb1-4ddb-9ad4-b52f3a95063f from this chassis (sb_readonly=0)
Nov 25 17:25:15 compute-0 NetworkManager[48891]: <info>  [1764091515.4802] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/668)
Nov 25 17:25:15 compute-0 NetworkManager[48891]: <info>  [1764091515.4811] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/669)
Nov 25 17:25:15 compute-0 ovn_controller[153477]: 2025-11-25T17:25:15Z|01606|binding|INFO|Releasing lport 10a05078-1fb1-4ddb-9ad4-b52f3a95063f from this chassis (sb_readonly=0)
Nov 25 17:25:15 compute-0 nova_compute[254092]: 2025-11-25 17:25:15.517 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:25:15 compute-0 nova_compute[254092]: 2025-11-25 17:25:15.731 254096 DEBUG nova.compute.manager [req-baace0b7-d5d1-49be-a9e1-27d78800471b req-e34533cd-ad24-472b-8122-24c60ffa59c7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Received event network-changed-329205f7-aac9-4c77-b1eb-b7af04c34038 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:25:15 compute-0 nova_compute[254092]: 2025-11-25 17:25:15.732 254096 DEBUG nova.compute.manager [req-baace0b7-d5d1-49be-a9e1-27d78800471b req-e34533cd-ad24-472b-8122-24c60ffa59c7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Refreshing instance network info cache due to event network-changed-329205f7-aac9-4c77-b1eb-b7af04c34038. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:25:15 compute-0 nova_compute[254092]: 2025-11-25 17:25:15.733 254096 DEBUG oslo_concurrency.lockutils [req-baace0b7-d5d1-49be-a9e1-27d78800471b req-e34533cd-ad24-472b-8122-24c60ffa59c7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-03d40f4f-1bf3-4e1d-8844-bae4b32443cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:25:15 compute-0 nova_compute[254092]: 2025-11-25 17:25:15.734 254096 DEBUG oslo_concurrency.lockutils [req-baace0b7-d5d1-49be-a9e1-27d78800471b req-e34533cd-ad24-472b-8122-24c60ffa59c7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-03d40f4f-1bf3-4e1d-8844-bae4b32443cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:25:15 compute-0 nova_compute[254092]: 2025-11-25 17:25:15.734 254096 DEBUG nova.network.neutron [req-baace0b7-d5d1-49be-a9e1-27d78800471b req-e34533cd-ad24-472b-8122-24c60ffa59c7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Refreshing network info cache for port 329205f7-aac9-4c77-b1eb-b7af04c34038 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:25:15 compute-0 sudo[422877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:25:15 compute-0 sudo[422877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:25:15 compute-0 sudo[422877]: pam_unix(sudo:session): session closed for user root
Nov 25 17:25:15 compute-0 podman[422902]: 2025-11-25 17:25:15.869459036 +0000 UTC m=+0.060951651 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 17:25:15 compute-0 sudo[422920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:25:15 compute-0 podman[422901]: 2025-11-25 17:25:15.879216032 +0000 UTC m=+0.071363384 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 17:25:15 compute-0 sudo[422920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:25:15 compute-0 sudo[422920]: pam_unix(sudo:session): session closed for user root
Nov 25 17:25:15 compute-0 podman[422903]: 2025-11-25 17:25:15.911630023 +0000 UTC m=+0.095236233 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 17:25:15 compute-0 sudo[422986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:25:15 compute-0 sudo[422986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:25:15 compute-0 sudo[422986]: pam_unix(sudo:session): session closed for user root
Nov 25 17:25:15 compute-0 sudo[423014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:25:15 compute-0 sudo[423014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:25:16 compute-0 nova_compute[254092]: 2025-11-25 17:25:16.022 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:25:16 compute-0 ceph-mon[74985]: pgmap v3051: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 17:25:16 compute-0 sudo[423014]: pam_unix(sudo:session): session closed for user root
Nov 25 17:25:16 compute-0 sudo[423070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:25:16 compute-0 sudo[423070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:25:16 compute-0 sudo[423070]: pam_unix(sudo:session): session closed for user root
Nov 25 17:25:16 compute-0 sudo[423095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:25:16 compute-0 sudo[423095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:25:16 compute-0 sudo[423095]: pam_unix(sudo:session): session closed for user root
Nov 25 17:25:16 compute-0 nova_compute[254092]: 2025-11-25 17:25:16.778 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:25:16 compute-0 sudo[423120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:25:16 compute-0 sudo[423120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:25:16 compute-0 sudo[423120]: pam_unix(sudo:session): session closed for user root
Nov 25 17:25:16 compute-0 sudo[423145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Nov 25 17:25:16 compute-0 sudo[423145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:25:17 compute-0 sudo[423145]: pam_unix(sudo:session): session closed for user root
Nov 25 17:25:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3052: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 394 KiB/s wr, 75 op/s
Nov 25 17:25:17 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:25:17 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:25:17 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:25:17 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:25:17 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:25:17 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:25:17 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:25:17 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:25:17 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:25:17 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:25:17 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 62726802-90c4-4322-b3b1-ac4710ef7ed2 does not exist
Nov 25 17:25:17 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev da264df7-a29b-4ba8-b19e-42288d873774 does not exist
Nov 25 17:25:17 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 76787b66-8543-4100-a347-f8b8099f673a does not exist
Nov 25 17:25:17 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:25:17 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:25:17 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:25:17 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:25:17 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:25:17 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:25:17 compute-0 sudo[423186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:25:17 compute-0 sudo[423186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:25:17 compute-0 sudo[423186]: pam_unix(sudo:session): session closed for user root
Nov 25 17:25:17 compute-0 sudo[423211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:25:17 compute-0 sudo[423211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:25:17 compute-0 sudo[423211]: pam_unix(sudo:session): session closed for user root
Nov 25 17:25:17 compute-0 sudo[423236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:25:17 compute-0 sudo[423236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:25:17 compute-0 sudo[423236]: pam_unix(sudo:session): session closed for user root
Nov 25 17:25:17 compute-0 sudo[423261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:25:17 compute-0 sudo[423261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:25:17 compute-0 podman[423326]: 2025-11-25 17:25:17.984600588 +0000 UTC m=+0.049200720 container create 2b93f973026daab90b32f3921a1e611664d5e70ef1cf6a031f9c89e315bd191e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_villani, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 17:25:18 compute-0 systemd[1]: Started libpod-conmon-2b93f973026daab90b32f3921a1e611664d5e70ef1cf6a031f9c89e315bd191e.scope.
Nov 25 17:25:18 compute-0 podman[423326]: 2025-11-25 17:25:17.958729614 +0000 UTC m=+0.023329766 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:25:18 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:25:18 compute-0 podman[423326]: 2025-11-25 17:25:18.089758111 +0000 UTC m=+0.154358263 container init 2b93f973026daab90b32f3921a1e611664d5e70ef1cf6a031f9c89e315bd191e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_villani, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:25:18 compute-0 podman[423326]: 2025-11-25 17:25:18.099178987 +0000 UTC m=+0.163779119 container start 2b93f973026daab90b32f3921a1e611664d5e70ef1cf6a031f9c89e315bd191e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 17:25:18 compute-0 podman[423326]: 2025-11-25 17:25:18.102650011 +0000 UTC m=+0.167250173 container attach 2b93f973026daab90b32f3921a1e611664d5e70ef1cf6a031f9c89e315bd191e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 17:25:18 compute-0 adoring_villani[423342]: 167 167
Nov 25 17:25:18 compute-0 systemd[1]: libpod-2b93f973026daab90b32f3921a1e611664d5e70ef1cf6a031f9c89e315bd191e.scope: Deactivated successfully.
Nov 25 17:25:18 compute-0 conmon[423342]: conmon 2b93f973026daab90b32 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2b93f973026daab90b32f3921a1e611664d5e70ef1cf6a031f9c89e315bd191e.scope/container/memory.events
Nov 25 17:25:18 compute-0 podman[423326]: 2025-11-25 17:25:18.111188403 +0000 UTC m=+0.175788545 container died 2b93f973026daab90b32f3921a1e611664d5e70ef1cf6a031f9c89e315bd191e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 17:25:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-dfff5c4a80ff1f2e265c2c5a9df8674c06ce95e21780ea9b64e5b1c3c9a46dc4-merged.mount: Deactivated successfully.
Nov 25 17:25:18 compute-0 podman[423326]: 2025-11-25 17:25:18.15439613 +0000 UTC m=+0.218996272 container remove 2b93f973026daab90b32f3921a1e611664d5e70ef1cf6a031f9c89e315bd191e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 17:25:18 compute-0 systemd[1]: libpod-conmon-2b93f973026daab90b32f3921a1e611664d5e70ef1cf6a031f9c89e315bd191e.scope: Deactivated successfully.
Nov 25 17:25:18 compute-0 ceph-mon[74985]: pgmap v3052: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 394 KiB/s wr, 75 op/s
Nov 25 17:25:18 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:25:18 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:25:18 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:25:18 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:25:18 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:25:18 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:25:18 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:25:18 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:25:18 compute-0 nova_compute[254092]: 2025-11-25 17:25:18.303 254096 DEBUG nova.network.neutron [req-baace0b7-d5d1-49be-a9e1-27d78800471b req-e34533cd-ad24-472b-8122-24c60ffa59c7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Updated VIF entry in instance network info cache for port 329205f7-aac9-4c77-b1eb-b7af04c34038. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:25:18 compute-0 nova_compute[254092]: 2025-11-25 17:25:18.303 254096 DEBUG nova.network.neutron [req-baace0b7-d5d1-49be-a9e1-27d78800471b req-e34533cd-ad24-472b-8122-24c60ffa59c7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Updating instance_info_cache with network_info: [{"id": "329205f7-aac9-4c77-b1eb-b7af04c34038", "address": "fa:16:3e:8d:69:2e", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:692e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap329205f7-aa", "ovs_interfaceid": "329205f7-aac9-4c77-b1eb-b7af04c34038", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:25:18 compute-0 nova_compute[254092]: 2025-11-25 17:25:18.319 254096 DEBUG oslo_concurrency.lockutils [req-baace0b7-d5d1-49be-a9e1-27d78800471b req-e34533cd-ad24-472b-8122-24c60ffa59c7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-03d40f4f-1bf3-4e1d-8844-bae4b32443cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:25:18 compute-0 podman[423364]: 2025-11-25 17:25:18.338316157 +0000 UTC m=+0.043787374 container create d590baa82a8271dcaecba9644f9a949060652227fa50e546c7de58605b7c9031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jang, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 17:25:18 compute-0 systemd[1]: Started libpod-conmon-d590baa82a8271dcaecba9644f9a949060652227fa50e546c7de58605b7c9031.scope.
Nov 25 17:25:18 compute-0 podman[423364]: 2025-11-25 17:25:18.321381935 +0000 UTC m=+0.026853172 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:25:18 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:25:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e320070d2435c74c2cbb4e03953c388009a47c2075627dacf79d17f44948caa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:25:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e320070d2435c74c2cbb4e03953c388009a47c2075627dacf79d17f44948caa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:25:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e320070d2435c74c2cbb4e03953c388009a47c2075627dacf79d17f44948caa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:25:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e320070d2435c74c2cbb4e03953c388009a47c2075627dacf79d17f44948caa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:25:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e320070d2435c74c2cbb4e03953c388009a47c2075627dacf79d17f44948caa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:25:18 compute-0 podman[423364]: 2025-11-25 17:25:18.454330454 +0000 UTC m=+0.159801701 container init d590baa82a8271dcaecba9644f9a949060652227fa50e546c7de58605b7c9031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 17:25:18 compute-0 podman[423364]: 2025-11-25 17:25:18.463568375 +0000 UTC m=+0.169039612 container start d590baa82a8271dcaecba9644f9a949060652227fa50e546c7de58605b7c9031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jang, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:25:18 compute-0 podman[423364]: 2025-11-25 17:25:18.467496203 +0000 UTC m=+0.172967440 container attach d590baa82a8271dcaecba9644f9a949060652227fa50e546c7de58605b7c9031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:25:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3053: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:25:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:25:19 compute-0 charming_jang[423380]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:25:19 compute-0 charming_jang[423380]: --> relative data size: 1.0
Nov 25 17:25:19 compute-0 charming_jang[423380]: --> All data devices are unavailable
Nov 25 17:25:19 compute-0 systemd[1]: libpod-d590baa82a8271dcaecba9644f9a949060652227fa50e546c7de58605b7c9031.scope: Deactivated successfully.
Nov 25 17:25:19 compute-0 podman[423364]: 2025-11-25 17:25:19.586751137 +0000 UTC m=+1.292222424 container died d590baa82a8271dcaecba9644f9a949060652227fa50e546c7de58605b7c9031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jang, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 17:25:19 compute-0 systemd[1]: libpod-d590baa82a8271dcaecba9644f9a949060652227fa50e546c7de58605b7c9031.scope: Consumed 1.041s CPU time.
Nov 25 17:25:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e320070d2435c74c2cbb4e03953c388009a47c2075627dacf79d17f44948caa-merged.mount: Deactivated successfully.
Nov 25 17:25:19 compute-0 podman[423364]: 2025-11-25 17:25:19.665163552 +0000 UTC m=+1.370634799 container remove d590baa82a8271dcaecba9644f9a949060652227fa50e546c7de58605b7c9031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jang, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 17:25:19 compute-0 systemd[1]: libpod-conmon-d590baa82a8271dcaecba9644f9a949060652227fa50e546c7de58605b7c9031.scope: Deactivated successfully.
Nov 25 17:25:19 compute-0 sudo[423261]: pam_unix(sudo:session): session closed for user root
Nov 25 17:25:19 compute-0 sudo[423422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:25:19 compute-0 sudo[423422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:25:19 compute-0 sudo[423422]: pam_unix(sudo:session): session closed for user root
Nov 25 17:25:19 compute-0 sudo[423447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:25:19 compute-0 sudo[423447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:25:19 compute-0 sudo[423447]: pam_unix(sudo:session): session closed for user root
Nov 25 17:25:19 compute-0 sudo[423472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:25:19 compute-0 sudo[423472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:25:19 compute-0 sudo[423472]: pam_unix(sudo:session): session closed for user root
Nov 25 17:25:20 compute-0 sudo[423497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:25:20 compute-0 sudo[423497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:25:20 compute-0 ceph-mon[74985]: pgmap v3053: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:25:20 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #144. Immutable memtables: 0.
Nov 25 17:25:20 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:20.247925) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 17:25:20 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 144
Nov 25 17:25:20 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091520248038, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 1784, "num_deletes": 251, "total_data_size": 2885057, "memory_usage": 2930352, "flush_reason": "Manual Compaction"}
Nov 25 17:25:20 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #145: started
Nov 25 17:25:20 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091520268332, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 145, "file_size": 2834984, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61765, "largest_seqno": 63548, "table_properties": {"data_size": 2826763, "index_size": 5034, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 16813, "raw_average_key_size": 20, "raw_value_size": 2810406, "raw_average_value_size": 3357, "num_data_blocks": 224, "num_entries": 837, "num_filter_entries": 837, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764091329, "oldest_key_time": 1764091329, "file_creation_time": 1764091520, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 145, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:25:20 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 20464 microseconds, and 12047 cpu microseconds.
Nov 25 17:25:20 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:25:20 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:20.268402) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #145: 2834984 bytes OK
Nov 25 17:25:20 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:20.268435) [db/memtable_list.cc:519] [default] Level-0 commit table #145 started
Nov 25 17:25:20 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:20.270240) [db/memtable_list.cc:722] [default] Level-0 commit table #145: memtable #1 done
Nov 25 17:25:20 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:20.270267) EVENT_LOG_v1 {"time_micros": 1764091520270259, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 17:25:20 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:20.270297) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 17:25:20 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 2877430, prev total WAL file size 2877430, number of live WAL files 2.
Nov 25 17:25:20 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000141.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:25:20 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:20.272165) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Nov 25 17:25:20 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 17:25:20 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [145(2768KB)], [143(10149KB)]
Nov 25 17:25:20 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091520272224, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [145], "files_L6": [143], "score": -1, "input_data_size": 13227603, "oldest_snapshot_seqno": -1}
Nov 25 17:25:20 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #146: 8332 keys, 11488264 bytes, temperature: kUnknown
Nov 25 17:25:20 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091520345142, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 146, "file_size": 11488264, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11433003, "index_size": 33324, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20869, "raw_key_size": 218159, "raw_average_key_size": 26, "raw_value_size": 11284780, "raw_average_value_size": 1354, "num_data_blocks": 1299, "num_entries": 8332, "num_filter_entries": 8332, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764091520, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:25:20 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:25:20 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:20.345578) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 11488264 bytes
Nov 25 17:25:20 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:20.346910) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 181.1 rd, 157.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 9.9 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(8.7) write-amplify(4.1) OK, records in: 8846, records dropped: 514 output_compression: NoCompression
Nov 25 17:25:20 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:20.346970) EVENT_LOG_v1 {"time_micros": 1764091520346951, "job": 88, "event": "compaction_finished", "compaction_time_micros": 73049, "compaction_time_cpu_micros": 45288, "output_level": 6, "num_output_files": 1, "total_output_size": 11488264, "num_input_records": 8846, "num_output_records": 8332, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 17:25:20 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000145.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:25:20 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091520347721, "job": 88, "event": "table_file_deletion", "file_number": 145}
Nov 25 17:25:20 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:25:20 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091520349265, "job": 88, "event": "table_file_deletion", "file_number": 143}
Nov 25 17:25:20 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:20.272016) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:25:20 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:20.349330) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:25:20 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:20.349337) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:25:20 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:20.349338) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:25:20 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:20.349340) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:25:20 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:20.349342) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:25:20 compute-0 podman[423561]: 2025-11-25 17:25:20.454921388 +0000 UTC m=+0.054966737 container create 1d58b3bdd83345022d8ba0462df4c638458fd4ce1bca54a5fe5a64f54ab8be0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mahavira, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 17:25:20 compute-0 systemd[1]: Started libpod-conmon-1d58b3bdd83345022d8ba0462df4c638458fd4ce1bca54a5fe5a64f54ab8be0b.scope.
Nov 25 17:25:20 compute-0 podman[423561]: 2025-11-25 17:25:20.429399174 +0000 UTC m=+0.029444553 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:25:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:25:20 compute-0 podman[423561]: 2025-11-25 17:25:20.547767585 +0000 UTC m=+0.147812944 container init 1d58b3bdd83345022d8ba0462df4c638458fd4ce1bca54a5fe5a64f54ab8be0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mahavira, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:25:20 compute-0 podman[423561]: 2025-11-25 17:25:20.556971987 +0000 UTC m=+0.157017336 container start 1d58b3bdd83345022d8ba0462df4c638458fd4ce1bca54a5fe5a64f54ab8be0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mahavira, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:25:20 compute-0 podman[423561]: 2025-11-25 17:25:20.560214895 +0000 UTC m=+0.160260244 container attach 1d58b3bdd83345022d8ba0462df4c638458fd4ce1bca54a5fe5a64f54ab8be0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mahavira, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 17:25:20 compute-0 quirky_mahavira[423578]: 167 167
Nov 25 17:25:20 compute-0 systemd[1]: libpod-1d58b3bdd83345022d8ba0462df4c638458fd4ce1bca54a5fe5a64f54ab8be0b.scope: Deactivated successfully.
Nov 25 17:25:20 compute-0 podman[423561]: 2025-11-25 17:25:20.562671142 +0000 UTC m=+0.162716481 container died 1d58b3bdd83345022d8ba0462df4c638458fd4ce1bca54a5fe5a64f54ab8be0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mahavira, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 17:25:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-df3e55b07c47c0aa3a58accee35009433053e48301956abd7e5eee6c6e446f91-merged.mount: Deactivated successfully.
Nov 25 17:25:20 compute-0 podman[423561]: 2025-11-25 17:25:20.603232835 +0000 UTC m=+0.203278194 container remove 1d58b3bdd83345022d8ba0462df4c638458fd4ce1bca54a5fe5a64f54ab8be0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:25:20 compute-0 systemd[1]: libpod-conmon-1d58b3bdd83345022d8ba0462df4c638458fd4ce1bca54a5fe5a64f54ab8be0b.scope: Deactivated successfully.
Nov 25 17:25:20 compute-0 podman[423603]: 2025-11-25 17:25:20.772778211 +0000 UTC m=+0.044259426 container create 24c8a8d8f10b4a38ead4d58d18b67ddeb5c530fc0dff9c22d7b501565fb032ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:25:20 compute-0 systemd[1]: Started libpod-conmon-24c8a8d8f10b4a38ead4d58d18b67ddeb5c530fc0dff9c22d7b501565fb032ee.scope.
Nov 25 17:25:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:25:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/654d055b7373c2d1be6b6154af992807e80674a9a68be616f6bfd2b775f2a678/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:25:20 compute-0 podman[423603]: 2025-11-25 17:25:20.754801381 +0000 UTC m=+0.026282626 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:25:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/654d055b7373c2d1be6b6154af992807e80674a9a68be616f6bfd2b775f2a678/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:25:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/654d055b7373c2d1be6b6154af992807e80674a9a68be616f6bfd2b775f2a678/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:25:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/654d055b7373c2d1be6b6154af992807e80674a9a68be616f6bfd2b775f2a678/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:25:20 compute-0 podman[423603]: 2025-11-25 17:25:20.86387645 +0000 UTC m=+0.135357695 container init 24c8a8d8f10b4a38ead4d58d18b67ddeb5c530fc0dff9c22d7b501565fb032ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:25:20 compute-0 podman[423603]: 2025-11-25 17:25:20.872130034 +0000 UTC m=+0.143611249 container start 24c8a8d8f10b4a38ead4d58d18b67ddeb5c530fc0dff9c22d7b501565fb032ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 17:25:20 compute-0 podman[423603]: 2025-11-25 17:25:20.876593606 +0000 UTC m=+0.148074821 container attach 24c8a8d8f10b4a38ead4d58d18b67ddeb5c530fc0dff9c22d7b501565fb032ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_blackburn, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 17:25:21 compute-0 nova_compute[254092]: 2025-11-25 17:25:21.025 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:25:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3054: 321 pgs: 321 active+clean; 94 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 422 KiB/s wr, 81 op/s
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]: {
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:     "0": [
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:         {
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:             "devices": [
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:                 "/dev/loop3"
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:             ],
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:             "lv_name": "ceph_lv0",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:             "lv_size": "21470642176",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:             "name": "ceph_lv0",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:             "tags": {
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:                 "ceph.cluster_name": "ceph",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:                 "ceph.crush_device_class": "",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:                 "ceph.encrypted": "0",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:                 "ceph.osd_id": "0",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:                 "ceph.type": "block",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:                 "ceph.vdo": "0"
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:             },
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:             "type": "block",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:             "vg_name": "ceph_vg0"
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:         }
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:     ],
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:     "1": [
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:         {
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:             "devices": [
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:                 "/dev/loop4"
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:             ],
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:             "lv_name": "ceph_lv1",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:             "lv_size": "21470642176",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:             "name": "ceph_lv1",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:             "tags": {
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:                 "ceph.cluster_name": "ceph",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:                 "ceph.crush_device_class": "",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:                 "ceph.encrypted": "0",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:                 "ceph.osd_id": "1",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:                 "ceph.type": "block",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:                 "ceph.vdo": "0"
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:             },
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:             "type": "block",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:             "vg_name": "ceph_vg1"
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:         }
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:     ],
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:     "2": [
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:         {
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:             "devices": [
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:                 "/dev/loop5"
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:             ],
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:             "lv_name": "ceph_lv2",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:             "lv_size": "21470642176",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:             "name": "ceph_lv2",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:             "tags": {
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:                 "ceph.cluster_name": "ceph",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:                 "ceph.crush_device_class": "",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:                 "ceph.encrypted": "0",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:                 "ceph.osd_id": "2",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:                 "ceph.type": "block",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:                 "ceph.vdo": "0"
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:             },
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:             "type": "block",
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:             "vg_name": "ceph_vg2"
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:         }
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]:     ]
Nov 25 17:25:21 compute-0 lucid_blackburn[423620]: }
Nov 25 17:25:21 compute-0 systemd[1]: libpod-24c8a8d8f10b4a38ead4d58d18b67ddeb5c530fc0dff9c22d7b501565fb032ee.scope: Deactivated successfully.
Nov 25 17:25:21 compute-0 podman[423603]: 2025-11-25 17:25:21.712322034 +0000 UTC m=+0.983803289 container died 24c8a8d8f10b4a38ead4d58d18b67ddeb5c530fc0dff9c22d7b501565fb032ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_blackburn, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:25:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-654d055b7373c2d1be6b6154af992807e80674a9a68be616f6bfd2b775f2a678-merged.mount: Deactivated successfully.
Nov 25 17:25:21 compute-0 nova_compute[254092]: 2025-11-25 17:25:21.786 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:25:21 compute-0 podman[423603]: 2025-11-25 17:25:21.793078372 +0000 UTC m=+1.064559597 container remove 24c8a8d8f10b4a38ead4d58d18b67ddeb5c530fc0dff9c22d7b501565fb032ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_blackburn, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:25:21 compute-0 systemd[1]: libpod-conmon-24c8a8d8f10b4a38ead4d58d18b67ddeb5c530fc0dff9c22d7b501565fb032ee.scope: Deactivated successfully.
Nov 25 17:25:21 compute-0 ovn_controller[153477]: 2025-11-25T17:25:21Z|00194|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8d:69:2e 10.100.0.13
Nov 25 17:25:21 compute-0 ovn_controller[153477]: 2025-11-25T17:25:21Z|00195|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8d:69:2e 10.100.0.13
Nov 25 17:25:21 compute-0 sudo[423497]: pam_unix(sudo:session): session closed for user root
Nov 25 17:25:21 compute-0 sudo[423644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:25:21 compute-0 sudo[423644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:25:21 compute-0 sudo[423644]: pam_unix(sudo:session): session closed for user root
Nov 25 17:25:22 compute-0 sudo[423669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:25:22 compute-0 sudo[423669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:25:22 compute-0 sudo[423669]: pam_unix(sudo:session): session closed for user root
Nov 25 17:25:22 compute-0 sudo[423694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:25:22 compute-0 sudo[423694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:25:22 compute-0 sudo[423694]: pam_unix(sudo:session): session closed for user root
Nov 25 17:25:22 compute-0 sudo[423719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:25:22 compute-0 sudo[423719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:25:22 compute-0 ceph-mon[74985]: pgmap v3054: 321 pgs: 321 active+clean; 94 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 422 KiB/s wr, 81 op/s
Nov 25 17:25:22 compute-0 podman[423784]: 2025-11-25 17:25:22.643487009 +0000 UTC m=+0.064298231 container create 6d6436ede7eac6b4c3104b4b2976beec23383195298a809070931f9bab08fea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_chatterjee, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:25:22 compute-0 systemd[1]: Started libpod-conmon-6d6436ede7eac6b4c3104b4b2976beec23383195298a809070931f9bab08fea3.scope.
Nov 25 17:25:22 compute-0 podman[423784]: 2025-11-25 17:25:22.612742422 +0000 UTC m=+0.033553704 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:25:22 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:25:22 compute-0 podman[423784]: 2025-11-25 17:25:22.742042763 +0000 UTC m=+0.162853985 container init 6d6436ede7eac6b4c3104b4b2976beec23383195298a809070931f9bab08fea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_chatterjee, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 17:25:22 compute-0 podman[423784]: 2025-11-25 17:25:22.755139028 +0000 UTC m=+0.175950220 container start 6d6436ede7eac6b4c3104b4b2976beec23383195298a809070931f9bab08fea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_chatterjee, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:25:22 compute-0 podman[423784]: 2025-11-25 17:25:22.759173119 +0000 UTC m=+0.179984351 container attach 6d6436ede7eac6b4c3104b4b2976beec23383195298a809070931f9bab08fea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_chatterjee, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 17:25:22 compute-0 nostalgic_chatterjee[423800]: 167 167
Nov 25 17:25:22 compute-0 systemd[1]: libpod-6d6436ede7eac6b4c3104b4b2976beec23383195298a809070931f9bab08fea3.scope: Deactivated successfully.
Nov 25 17:25:22 compute-0 conmon[423800]: conmon 6d6436ede7eac6b4c310 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6d6436ede7eac6b4c3104b4b2976beec23383195298a809070931f9bab08fea3.scope/container/memory.events
Nov 25 17:25:22 compute-0 podman[423805]: 2025-11-25 17:25:22.828631948 +0000 UTC m=+0.046094465 container died 6d6436ede7eac6b4c3104b4b2976beec23383195298a809070931f9bab08fea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 17:25:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-93d50950902a9e0a3d5de0d32e5128817906869367e20d3cab4a4345aa52a8ea-merged.mount: Deactivated successfully.
Nov 25 17:25:22 compute-0 podman[423805]: 2025-11-25 17:25:22.884370716 +0000 UTC m=+0.101833173 container remove 6d6436ede7eac6b4c3104b4b2976beec23383195298a809070931f9bab08fea3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_chatterjee, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:25:22 compute-0 systemd[1]: libpod-conmon-6d6436ede7eac6b4c3104b4b2976beec23383195298a809070931f9bab08fea3.scope: Deactivated successfully.
Nov 25 17:25:23 compute-0 podman[423827]: 2025-11-25 17:25:23.139883081 +0000 UTC m=+0.058635637 container create 682add130897af54bae644e58c99e8b4ade46d914df1db8a0660b8eedd8df34f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 17:25:23 compute-0 systemd[1]: Started libpod-conmon-682add130897af54bae644e58c99e8b4ade46d914df1db8a0660b8eedd8df34f.scope.
Nov 25 17:25:23 compute-0 podman[423827]: 2025-11-25 17:25:23.111052916 +0000 UTC m=+0.029805532 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:25:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3055: 321 pgs: 321 active+clean; 94 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 615 KiB/s rd, 410 KiB/s wr, 26 op/s
Nov 25 17:25:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:25:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe9b41a9ab2bea7a21dde6253daa2b262481bf18a458b1b95ba6f1ac63ac5863/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:25:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe9b41a9ab2bea7a21dde6253daa2b262481bf18a458b1b95ba6f1ac63ac5863/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:25:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe9b41a9ab2bea7a21dde6253daa2b262481bf18a458b1b95ba6f1ac63ac5863/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:25:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe9b41a9ab2bea7a21dde6253daa2b262481bf18a458b1b95ba6f1ac63ac5863/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:25:23 compute-0 podman[423827]: 2025-11-25 17:25:23.241878207 +0000 UTC m=+0.160630763 container init 682add130897af54bae644e58c99e8b4ade46d914df1db8a0660b8eedd8df34f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_neumann, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 17:25:23 compute-0 podman[423827]: 2025-11-25 17:25:23.252120616 +0000 UTC m=+0.170873152 container start 682add130897af54bae644e58c99e8b4ade46d914df1db8a0660b8eedd8df34f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_neumann, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:25:23 compute-0 podman[423827]: 2025-11-25 17:25:23.255511488 +0000 UTC m=+0.174264004 container attach 682add130897af54bae644e58c99e8b4ade46d914df1db8a0660b8eedd8df34f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_neumann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 17:25:24 compute-0 ceph-mon[74985]: pgmap v3055: 321 pgs: 321 active+clean; 94 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 615 KiB/s rd, 410 KiB/s wr, 26 op/s
Nov 25 17:25:24 compute-0 ecstatic_neumann[423843]: {
Nov 25 17:25:24 compute-0 ecstatic_neumann[423843]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:25:24 compute-0 ecstatic_neumann[423843]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:25:24 compute-0 ecstatic_neumann[423843]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:25:24 compute-0 ecstatic_neumann[423843]:         "osd_id": 1,
Nov 25 17:25:24 compute-0 ecstatic_neumann[423843]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:25:24 compute-0 ecstatic_neumann[423843]:         "type": "bluestore"
Nov 25 17:25:24 compute-0 ecstatic_neumann[423843]:     },
Nov 25 17:25:24 compute-0 ecstatic_neumann[423843]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:25:24 compute-0 ecstatic_neumann[423843]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:25:24 compute-0 ecstatic_neumann[423843]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:25:24 compute-0 ecstatic_neumann[423843]:         "osd_id": 2,
Nov 25 17:25:24 compute-0 ecstatic_neumann[423843]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:25:24 compute-0 ecstatic_neumann[423843]:         "type": "bluestore"
Nov 25 17:25:24 compute-0 ecstatic_neumann[423843]:     },
Nov 25 17:25:24 compute-0 ecstatic_neumann[423843]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:25:24 compute-0 ecstatic_neumann[423843]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:25:24 compute-0 ecstatic_neumann[423843]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:25:24 compute-0 ecstatic_neumann[423843]:         "osd_id": 0,
Nov 25 17:25:24 compute-0 ecstatic_neumann[423843]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:25:24 compute-0 ecstatic_neumann[423843]:         "type": "bluestore"
Nov 25 17:25:24 compute-0 ecstatic_neumann[423843]:     }
Nov 25 17:25:24 compute-0 ecstatic_neumann[423843]: }
Nov 25 17:25:24 compute-0 systemd[1]: libpod-682add130897af54bae644e58c99e8b4ade46d914df1db8a0660b8eedd8df34f.scope: Deactivated successfully.
Nov 25 17:25:24 compute-0 systemd[1]: libpod-682add130897af54bae644e58c99e8b4ade46d914df1db8a0660b8eedd8df34f.scope: Consumed 1.160s CPU time.
Nov 25 17:25:24 compute-0 podman[423827]: 2025-11-25 17:25:24.405372906 +0000 UTC m=+1.324125442 container died 682add130897af54bae644e58c99e8b4ade46d914df1db8a0660b8eedd8df34f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_neumann, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 17:25:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe9b41a9ab2bea7a21dde6253daa2b262481bf18a458b1b95ba6f1ac63ac5863-merged.mount: Deactivated successfully.
Nov 25 17:25:24 compute-0 podman[423827]: 2025-11-25 17:25:24.46943963 +0000 UTC m=+1.388192156 container remove 682add130897af54bae644e58c99e8b4ade46d914df1db8a0660b8eedd8df34f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_neumann, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 17:25:24 compute-0 systemd[1]: libpod-conmon-682add130897af54bae644e58c99e8b4ade46d914df1db8a0660b8eedd8df34f.scope: Deactivated successfully.
Nov 25 17:25:24 compute-0 sudo[423719]: pam_unix(sudo:session): session closed for user root
Nov 25 17:25:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:25:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:25:24 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:25:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:25:24 compute-0 sshd-session[423846]: Connection closed by authenticating user root 171.244.51.45 port 57230 [preauth]
Nov 25 17:25:24 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:25:24 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev fc391700-7c3c-4f5f-85be-7edc32d6d5dd does not exist
Nov 25 17:25:24 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 7276d3de-8df8-4ecf-9911-be8efac50051 does not exist
Nov 25 17:25:24 compute-0 sudo[423891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:25:24 compute-0 sudo[423891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:25:24 compute-0 sudo[423891]: pam_unix(sudo:session): session closed for user root
Nov 25 17:25:24 compute-0 sudo[423916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:25:24 compute-0 sudo[423916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:25:24 compute-0 sudo[423916]: pam_unix(sudo:session): session closed for user root
Nov 25 17:25:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3056: 321 pgs: 321 active+clean; 119 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 929 KiB/s rd, 2.1 MiB/s wr, 78 op/s
Nov 25 17:25:25 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:25:25 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:25:26 compute-0 nova_compute[254092]: 2025-11-25 17:25:26.030 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:25:26 compute-0 ceph-mon[74985]: pgmap v3056: 321 pgs: 321 active+clean; 119 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 929 KiB/s rd, 2.1 MiB/s wr, 78 op/s
Nov 25 17:25:26 compute-0 nova_compute[254092]: 2025-11-25 17:25:26.787 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:25:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3057: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 17:25:28 compute-0 ceph-mon[74985]: pgmap v3057: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 17:25:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3058: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 17:25:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:25:29 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #147. Immutable memtables: 0.
Nov 25 17:25:29 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:29.525188) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 17:25:29 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 147
Nov 25 17:25:29 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091529525259, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 333, "num_deletes": 250, "total_data_size": 173340, "memory_usage": 180392, "flush_reason": "Manual Compaction"}
Nov 25 17:25:29 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #148: started
Nov 25 17:25:29 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091529528459, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 148, "file_size": 171718, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 63549, "largest_seqno": 63881, "table_properties": {"data_size": 169548, "index_size": 333, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5897, "raw_average_key_size": 20, "raw_value_size": 165273, "raw_average_value_size": 571, "num_data_blocks": 14, "num_entries": 289, "num_filter_entries": 289, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764091521, "oldest_key_time": 1764091521, "file_creation_time": 1764091529, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 148, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:25:29 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 3302 microseconds, and 1213 cpu microseconds.
Nov 25 17:25:29 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:25:29 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:29.528501) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #148: 171718 bytes OK
Nov 25 17:25:29 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:29.528520) [db/memtable_list.cc:519] [default] Level-0 commit table #148 started
Nov 25 17:25:29 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:29.530007) [db/memtable_list.cc:722] [default] Level-0 commit table #148: memtable #1 done
Nov 25 17:25:29 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:29.530030) EVENT_LOG_v1 {"time_micros": 1764091529530022, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 17:25:29 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:29.530054) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 17:25:29 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 171047, prev total WAL file size 171047, number of live WAL files 2.
Nov 25 17:25:29 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000144.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:25:29 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:29.530556) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353130' seq:72057594037927935, type:22 .. '6D6772737461740032373631' seq:0, type:0; will stop at (end)
Nov 25 17:25:29 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 17:25:29 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [148(167KB)], [146(10MB)]
Nov 25 17:25:29 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091529530587, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [148], "files_L6": [146], "score": -1, "input_data_size": 11659982, "oldest_snapshot_seqno": -1}
Nov 25 17:25:29 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #149: 8111 keys, 8360841 bytes, temperature: kUnknown
Nov 25 17:25:29 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091529569122, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 149, "file_size": 8360841, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8311883, "index_size": 27597, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20293, "raw_key_size": 213738, "raw_average_key_size": 26, "raw_value_size": 8172268, "raw_average_value_size": 1007, "num_data_blocks": 1060, "num_entries": 8111, "num_filter_entries": 8111, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764091529, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:25:29 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:25:29 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:29.569465) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 8360841 bytes
Nov 25 17:25:29 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:29.570948) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 301.5 rd, 216.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 11.0 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(116.6) write-amplify(48.7) OK, records in: 8621, records dropped: 510 output_compression: NoCompression
Nov 25 17:25:29 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:29.570982) EVENT_LOG_v1 {"time_micros": 1764091529570968, "job": 90, "event": "compaction_finished", "compaction_time_micros": 38678, "compaction_time_cpu_micros": 24565, "output_level": 6, "num_output_files": 1, "total_output_size": 8360841, "num_input_records": 8621, "num_output_records": 8111, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 17:25:29 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000148.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:25:29 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091529571467, "job": 90, "event": "table_file_deletion", "file_number": 148}
Nov 25 17:25:29 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:25:29 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091529575794, "job": 90, "event": "table_file_deletion", "file_number": 146}
Nov 25 17:25:29 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:29.530476) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:25:29 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:29.575970) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:25:29 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:29.575977) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:25:29 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:29.575980) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:25:29 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:29.575983) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:25:29 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:25:29.575985) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:25:30 compute-0 ceph-mon[74985]: pgmap v3058: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 17:25:31 compute-0 nova_compute[254092]: 2025-11-25 17:25:31.033 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:25:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3059: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 17:25:31 compute-0 nova_compute[254092]: 2025-11-25 17:25:31.790 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:25:32 compute-0 ceph-mon[74985]: pgmap v3059: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Nov 25 17:25:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3060: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 366 KiB/s rd, 1.7 MiB/s wr, 58 op/s
Nov 25 17:25:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:25:34 compute-0 ceph-mon[74985]: pgmap v3060: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 366 KiB/s rd, 1.7 MiB/s wr, 58 op/s
Nov 25 17:25:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3061: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 366 KiB/s rd, 1.7 MiB/s wr, 58 op/s
Nov 25 17:25:36 compute-0 nova_compute[254092]: 2025-11-25 17:25:36.035 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:25:36 compute-0 nova_compute[254092]: 2025-11-25 17:25:36.551 254096 DEBUG oslo_concurrency.lockutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "88446199-25eb-4303-8df1-334acb721afc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:25:36 compute-0 nova_compute[254092]: 2025-11-25 17:25:36.551 254096 DEBUG oslo_concurrency.lockutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "88446199-25eb-4303-8df1-334acb721afc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:25:36 compute-0 nova_compute[254092]: 2025-11-25 17:25:36.564 254096 DEBUG nova.compute.manager [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 17:25:36 compute-0 ceph-mon[74985]: pgmap v3061: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 366 KiB/s rd, 1.7 MiB/s wr, 58 op/s
Nov 25 17:25:36 compute-0 nova_compute[254092]: 2025-11-25 17:25:36.627 254096 DEBUG oslo_concurrency.lockutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:25:36 compute-0 nova_compute[254092]: 2025-11-25 17:25:36.628 254096 DEBUG oslo_concurrency.lockutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:25:36 compute-0 nova_compute[254092]: 2025-11-25 17:25:36.636 254096 DEBUG nova.virt.hardware [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 17:25:36 compute-0 nova_compute[254092]: 2025-11-25 17:25:36.637 254096 INFO nova.compute.claims [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Claim successful on node compute-0.ctlplane.example.com
Nov 25 17:25:36 compute-0 nova_compute[254092]: 2025-11-25 17:25:36.762 254096 DEBUG oslo_concurrency.processutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:25:36 compute-0 nova_compute[254092]: 2025-11-25 17:25:36.811 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:25:37 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:25:37 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3706073263' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:25:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3062: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 59 KiB/s wr, 6 op/s
Nov 25 17:25:37 compute-0 nova_compute[254092]: 2025-11-25 17:25:37.222 254096 DEBUG oslo_concurrency.processutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:25:37 compute-0 nova_compute[254092]: 2025-11-25 17:25:37.230 254096 DEBUG nova.compute.provider_tree [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:25:37 compute-0 nova_compute[254092]: 2025-11-25 17:25:37.245 254096 DEBUG nova.scheduler.client.report [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:25:37 compute-0 nova_compute[254092]: 2025-11-25 17:25:37.267 254096 DEBUG oslo_concurrency.lockutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:25:37 compute-0 nova_compute[254092]: 2025-11-25 17:25:37.268 254096 DEBUG nova.compute.manager [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 17:25:37 compute-0 nova_compute[254092]: 2025-11-25 17:25:37.316 254096 DEBUG nova.compute.manager [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 17:25:37 compute-0 nova_compute[254092]: 2025-11-25 17:25:37.317 254096 DEBUG nova.network.neutron [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 17:25:37 compute-0 nova_compute[254092]: 2025-11-25 17:25:37.338 254096 INFO nova.virt.libvirt.driver [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 17:25:37 compute-0 nova_compute[254092]: 2025-11-25 17:25:37.368 254096 DEBUG nova.compute.manager [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 17:25:37 compute-0 nova_compute[254092]: 2025-11-25 17:25:37.478 254096 DEBUG nova.compute.manager [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 17:25:37 compute-0 nova_compute[254092]: 2025-11-25 17:25:37.480 254096 DEBUG nova.virt.libvirt.driver [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 17:25:37 compute-0 nova_compute[254092]: 2025-11-25 17:25:37.481 254096 INFO nova.virt.libvirt.driver [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Creating image(s)
Nov 25 17:25:37 compute-0 nova_compute[254092]: 2025-11-25 17:25:37.516 254096 DEBUG nova.storage.rbd_utils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 88446199-25eb-4303-8df1-334acb721afc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:25:37 compute-0 nova_compute[254092]: 2025-11-25 17:25:37.561 254096 DEBUG nova.storage.rbd_utils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 88446199-25eb-4303-8df1-334acb721afc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:25:37 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3706073263' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:25:37 compute-0 nova_compute[254092]: 2025-11-25 17:25:37.620 254096 DEBUG nova.storage.rbd_utils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 88446199-25eb-4303-8df1-334acb721afc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:25:37 compute-0 nova_compute[254092]: 2025-11-25 17:25:37.629 254096 DEBUG oslo_concurrency.processutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:25:37 compute-0 nova_compute[254092]: 2025-11-25 17:25:37.752 254096 DEBUG oslo_concurrency.processutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:25:37 compute-0 nova_compute[254092]: 2025-11-25 17:25:37.754 254096 DEBUG oslo_concurrency.lockutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:25:37 compute-0 nova_compute[254092]: 2025-11-25 17:25:37.755 254096 DEBUG oslo_concurrency.lockutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:25:37 compute-0 nova_compute[254092]: 2025-11-25 17:25:37.756 254096 DEBUG oslo_concurrency.lockutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:25:37 compute-0 nova_compute[254092]: 2025-11-25 17:25:37.793 254096 DEBUG nova.storage.rbd_utils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 88446199-25eb-4303-8df1-334acb721afc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:25:37 compute-0 nova_compute[254092]: 2025-11-25 17:25:37.798 254096 DEBUG oslo_concurrency.processutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 88446199-25eb-4303-8df1-334acb721afc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:25:37 compute-0 nova_compute[254092]: 2025-11-25 17:25:37.851 254096 DEBUG nova.policy [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a647bb22c9a54e5fa9ddfff588126693', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '67b7f8e8aac6441f992a182f8d893100', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 17:25:38 compute-0 nova_compute[254092]: 2025-11-25 17:25:38.156 254096 DEBUG oslo_concurrency.processutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 88446199-25eb-4303-8df1-334acb721afc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.358s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:25:38 compute-0 nova_compute[254092]: 2025-11-25 17:25:38.240 254096 DEBUG nova.storage.rbd_utils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] resizing rbd image 88446199-25eb-4303-8df1-334acb721afc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 17:25:38 compute-0 nova_compute[254092]: 2025-11-25 17:25:38.376 254096 DEBUG nova.objects.instance [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'migration_context' on Instance uuid 88446199-25eb-4303-8df1-334acb721afc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:25:38 compute-0 nova_compute[254092]: 2025-11-25 17:25:38.391 254096 DEBUG nova.virt.libvirt.driver [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 17:25:38 compute-0 nova_compute[254092]: 2025-11-25 17:25:38.391 254096 DEBUG nova.virt.libvirt.driver [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Ensure instance console log exists: /var/lib/nova/instances/88446199-25eb-4303-8df1-334acb721afc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 17:25:38 compute-0 nova_compute[254092]: 2025-11-25 17:25:38.392 254096 DEBUG oslo_concurrency.lockutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:25:38 compute-0 nova_compute[254092]: 2025-11-25 17:25:38.392 254096 DEBUG oslo_concurrency.lockutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:25:38 compute-0 nova_compute[254092]: 2025-11-25 17:25:38.392 254096 DEBUG oslo_concurrency.lockutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:25:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:25:38.549 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=63, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=62) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:25:38 compute-0 nova_compute[254092]: 2025-11-25 17:25:38.550 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:25:38 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:25:38.551 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 17:25:38 compute-0 ceph-mon[74985]: pgmap v3062: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 59 KiB/s wr, 6 op/s
Nov 25 17:25:38 compute-0 nova_compute[254092]: 2025-11-25 17:25:38.644 254096 DEBUG nova.network.neutron [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Successfully created port: e42bce82-3ee1-4272-b032-6bf70412fe94 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 17:25:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3063: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Nov 25 17:25:39 compute-0 nova_compute[254092]: 2025-11-25 17:25:39.409 254096 DEBUG nova.network.neutron [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Successfully updated port: e42bce82-3ee1-4272-b032-6bf70412fe94 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:25:39 compute-0 nova_compute[254092]: 2025-11-25 17:25:39.426 254096 DEBUG oslo_concurrency.lockutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "refresh_cache-88446199-25eb-4303-8df1-334acb721afc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:25:39 compute-0 nova_compute[254092]: 2025-11-25 17:25:39.427 254096 DEBUG oslo_concurrency.lockutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquired lock "refresh_cache-88446199-25eb-4303-8df1-334acb721afc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:25:39 compute-0 nova_compute[254092]: 2025-11-25 17:25:39.427 254096 DEBUG nova.network.neutron [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 17:25:39 compute-0 nova_compute[254092]: 2025-11-25 17:25:39.503 254096 DEBUG nova.compute.manager [req-3fb4df58-cb43-4a8a-b4a3-4479c8f0ec0d req-c850cea5-4547-40f6-a875-a21c4d601eec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Received event network-changed-e42bce82-3ee1-4272-b032-6bf70412fe94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:25:39 compute-0 nova_compute[254092]: 2025-11-25 17:25:39.504 254096 DEBUG nova.compute.manager [req-3fb4df58-cb43-4a8a-b4a3-4479c8f0ec0d req-c850cea5-4547-40f6-a875-a21c4d601eec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Refreshing instance network info cache due to event network-changed-e42bce82-3ee1-4272-b032-6bf70412fe94. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:25:39 compute-0 nova_compute[254092]: 2025-11-25 17:25:39.505 254096 DEBUG oslo_concurrency.lockutils [req-3fb4df58-cb43-4a8a-b4a3-4479c8f0ec0d req-c850cea5-4547-40f6-a875-a21c4d601eec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-88446199-25eb-4303-8df1-334acb721afc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:25:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:25:39 compute-0 ceph-mon[74985]: pgmap v3063: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Nov 25 17:25:39 compute-0 nova_compute[254092]: 2025-11-25 17:25:39.834 254096 DEBUG nova.network.neutron [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:25:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:25:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:25:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:25:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:25:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:25:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:25:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:25:40
Nov 25 17:25:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:25:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:25:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['volumes', '.mgr', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', 'images', '.rgw.root', 'default.rgw.control', 'backups']
Nov 25 17:25:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:25:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:25:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:25:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:25:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:25:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:25:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:25:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:25:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:25:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:25:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:25:41 compute-0 nova_compute[254092]: 2025-11-25 17:25:41.037 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:25:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3064: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:25:41 compute-0 nova_compute[254092]: 2025-11-25 17:25:41.550 254096 DEBUG nova.network.neutron [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Updating instance_info_cache with network_info: [{"id": "e42bce82-3ee1-4272-b032-6bf70412fe94", "address": "fa:16:3e:2d:49:1a", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2d:491a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape42bce82-3e", "ovs_interfaceid": "e42bce82-3ee1-4272-b032-6bf70412fe94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:25:41 compute-0 nova_compute[254092]: 2025-11-25 17:25:41.569 254096 DEBUG oslo_concurrency.lockutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Releasing lock "refresh_cache-88446199-25eb-4303-8df1-334acb721afc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:25:41 compute-0 nova_compute[254092]: 2025-11-25 17:25:41.569 254096 DEBUG nova.compute.manager [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Instance network_info: |[{"id": "e42bce82-3ee1-4272-b032-6bf70412fe94", "address": "fa:16:3e:2d:49:1a", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2d:491a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape42bce82-3e", "ovs_interfaceid": "e42bce82-3ee1-4272-b032-6bf70412fe94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 17:25:41 compute-0 nova_compute[254092]: 2025-11-25 17:25:41.570 254096 DEBUG oslo_concurrency.lockutils [req-3fb4df58-cb43-4a8a-b4a3-4479c8f0ec0d req-c850cea5-4547-40f6-a875-a21c4d601eec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-88446199-25eb-4303-8df1-334acb721afc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:25:41 compute-0 nova_compute[254092]: 2025-11-25 17:25:41.570 254096 DEBUG nova.network.neutron [req-3fb4df58-cb43-4a8a-b4a3-4479c8f0ec0d req-c850cea5-4547-40f6-a875-a21c4d601eec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Refreshing network info cache for port e42bce82-3ee1-4272-b032-6bf70412fe94 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:25:41 compute-0 nova_compute[254092]: 2025-11-25 17:25:41.574 254096 DEBUG nova.virt.libvirt.driver [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Start _get_guest_xml network_info=[{"id": "e42bce82-3ee1-4272-b032-6bf70412fe94", "address": "fa:16:3e:2d:49:1a", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2d:491a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape42bce82-3e", "ovs_interfaceid": "e42bce82-3ee1-4272-b032-6bf70412fe94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 17:25:41 compute-0 nova_compute[254092]: 2025-11-25 17:25:41.579 254096 WARNING nova.virt.libvirt.driver [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:25:41 compute-0 nova_compute[254092]: 2025-11-25 17:25:41.585 254096 DEBUG nova.virt.libvirt.host [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 17:25:41 compute-0 nova_compute[254092]: 2025-11-25 17:25:41.586 254096 DEBUG nova.virt.libvirt.host [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 17:25:41 compute-0 nova_compute[254092]: 2025-11-25 17:25:41.593 254096 DEBUG nova.virt.libvirt.host [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 17:25:41 compute-0 nova_compute[254092]: 2025-11-25 17:25:41.594 254096 DEBUG nova.virt.libvirt.host [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 17:25:41 compute-0 nova_compute[254092]: 2025-11-25 17:25:41.595 254096 DEBUG nova.virt.libvirt.driver [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 17:25:41 compute-0 nova_compute[254092]: 2025-11-25 17:25:41.595 254096 DEBUG nova.virt.hardware [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 17:25:41 compute-0 nova_compute[254092]: 2025-11-25 17:25:41.595 254096 DEBUG nova.virt.hardware [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 17:25:41 compute-0 nova_compute[254092]: 2025-11-25 17:25:41.596 254096 DEBUG nova.virt.hardware [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 17:25:41 compute-0 nova_compute[254092]: 2025-11-25 17:25:41.596 254096 DEBUG nova.virt.hardware [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 17:25:41 compute-0 nova_compute[254092]: 2025-11-25 17:25:41.596 254096 DEBUG nova.virt.hardware [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 17:25:41 compute-0 nova_compute[254092]: 2025-11-25 17:25:41.596 254096 DEBUG nova.virt.hardware [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 17:25:41 compute-0 nova_compute[254092]: 2025-11-25 17:25:41.597 254096 DEBUG nova.virt.hardware [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 17:25:41 compute-0 nova_compute[254092]: 2025-11-25 17:25:41.597 254096 DEBUG nova.virt.hardware [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 17:25:41 compute-0 nova_compute[254092]: 2025-11-25 17:25:41.597 254096 DEBUG nova.virt.hardware [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 17:25:41 compute-0 nova_compute[254092]: 2025-11-25 17:25:41.598 254096 DEBUG nova.virt.hardware [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 17:25:41 compute-0 nova_compute[254092]: 2025-11-25 17:25:41.598 254096 DEBUG nova.virt.hardware [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 17:25:41 compute-0 nova_compute[254092]: 2025-11-25 17:25:41.601 254096 DEBUG oslo_concurrency.processutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:25:41 compute-0 nova_compute[254092]: 2025-11-25 17:25:41.794 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:25:42 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:25:42 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/780226783' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:25:42 compute-0 nova_compute[254092]: 2025-11-25 17:25:42.048 254096 DEBUG oslo_concurrency.processutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:25:42 compute-0 nova_compute[254092]: 2025-11-25 17:25:42.088 254096 DEBUG nova.storage.rbd_utils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 88446199-25eb-4303-8df1-334acb721afc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:25:42 compute-0 nova_compute[254092]: 2025-11-25 17:25:42.095 254096 DEBUG oslo_concurrency.processutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:25:42 compute-0 ceph-mon[74985]: pgmap v3064: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:25:42 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/780226783' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:25:42 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:25:42 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3304960193' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:25:42 compute-0 nova_compute[254092]: 2025-11-25 17:25:42.660 254096 DEBUG oslo_concurrency.processutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:25:42 compute-0 nova_compute[254092]: 2025-11-25 17:25:42.663 254096 DEBUG nova.virt.libvirt.vif [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:25:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1061700885',display_name='tempest-TestGettingAddress-server-1061700885',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1061700885',id=150,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIGApEL69TBJfF0aCv+AtYr+DxsRqmx1WU3nexhWfLc3znEGVl+tJi462cWrzK0K7v6BMW214uJp8PUOC2Xg8eQ3E5Du3KhcTVIPqLHPMIkpL2RlGG9q2oDwpns6tg2WOA==',key_name='tempest-TestGettingAddress-1324577306',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-d90lz9sr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:25:37Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=88446199-25eb-4303-8df1-334acb721afc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e42bce82-3ee1-4272-b032-6bf70412fe94", "address": "fa:16:3e:2d:49:1a", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2d:491a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape42bce82-3e", "ovs_interfaceid": "e42bce82-3ee1-4272-b032-6bf70412fe94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:25:42 compute-0 nova_compute[254092]: 2025-11-25 17:25:42.664 254096 DEBUG nova.network.os_vif_util [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "e42bce82-3ee1-4272-b032-6bf70412fe94", "address": "fa:16:3e:2d:49:1a", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2d:491a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape42bce82-3e", "ovs_interfaceid": "e42bce82-3ee1-4272-b032-6bf70412fe94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:25:42 compute-0 nova_compute[254092]: 2025-11-25 17:25:42.666 254096 DEBUG nova.network.os_vif_util [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:49:1a,bridge_name='br-int',has_traffic_filtering=True,id=e42bce82-3ee1-4272-b032-6bf70412fe94,network=Network(88c42593-de32-4e23-b7f7-5cac507fe68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape42bce82-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:25:42 compute-0 nova_compute[254092]: 2025-11-25 17:25:42.668 254096 DEBUG nova.objects.instance [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'pci_devices' on Instance uuid 88446199-25eb-4303-8df1-334acb721afc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:25:42 compute-0 nova_compute[254092]: 2025-11-25 17:25:42.695 254096 DEBUG nova.virt.libvirt.driver [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] End _get_guest_xml xml=<domain type="kvm">
Nov 25 17:25:42 compute-0 nova_compute[254092]:   <uuid>88446199-25eb-4303-8df1-334acb721afc</uuid>
Nov 25 17:25:42 compute-0 nova_compute[254092]:   <name>instance-00000096</name>
Nov 25 17:25:42 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 17:25:42 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 17:25:42 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:25:42 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:       <nova:name>tempest-TestGettingAddress-server-1061700885</nova:name>
Nov 25 17:25:42 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 17:25:41</nova:creationTime>
Nov 25 17:25:42 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 17:25:42 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 17:25:42 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 17:25:42 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 17:25:42 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:25:42 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 17:25:42 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 17:25:42 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 17:25:42 compute-0 nova_compute[254092]:         <nova:user uuid="a647bb22c9a54e5fa9ddfff588126693">tempest-TestGettingAddress-207202764-project-member</nova:user>
Nov 25 17:25:42 compute-0 nova_compute[254092]:         <nova:project uuid="67b7f8e8aac6441f992a182f8d893100">tempest-TestGettingAddress-207202764</nova:project>
Nov 25 17:25:42 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 17:25:42 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 17:25:42 compute-0 nova_compute[254092]:         <nova:port uuid="e42bce82-3ee1-4272-b032-6bf70412fe94">
Nov 25 17:25:42 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe2d:491a" ipVersion="6"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 17:25:42 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 17:25:42 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:25:42 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 17:25:42 compute-0 nova_compute[254092]:     <system>
Nov 25 17:25:42 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 17:25:42 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 17:25:42 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:25:42 compute-0 nova_compute[254092]:       <entry name="serial">88446199-25eb-4303-8df1-334acb721afc</entry>
Nov 25 17:25:42 compute-0 nova_compute[254092]:       <entry name="uuid">88446199-25eb-4303-8df1-334acb721afc</entry>
Nov 25 17:25:42 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     </system>
Nov 25 17:25:42 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:25:42 compute-0 nova_compute[254092]:   <os>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:   </os>
Nov 25 17:25:42 compute-0 nova_compute[254092]:   <features>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:   </features>
Nov 25 17:25:42 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 17:25:42 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:25:42 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 17:25:42 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:25:42 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 17:25:42 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/88446199-25eb-4303-8df1-334acb721afc_disk">
Nov 25 17:25:42 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:       </source>
Nov 25 17:25:42 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:25:42 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:25:42 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 17:25:42 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/88446199-25eb-4303-8df1-334acb721afc_disk.config">
Nov 25 17:25:42 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:       </source>
Nov 25 17:25:42 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:25:42 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:25:42 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 17:25:42 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:2d:49:1a"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:       <target dev="tape42bce82-3e"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 17:25:42 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/88446199-25eb-4303-8df1-334acb721afc/console.log" append="off"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     <video>
Nov 25 17:25:42 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     </video>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 17:25:42 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 17:25:42 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 17:25:42 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:25:42 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:25:42 compute-0 nova_compute[254092]: </domain>
Nov 25 17:25:42 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 17:25:42 compute-0 nova_compute[254092]: 2025-11-25 17:25:42.696 254096 DEBUG nova.compute.manager [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Preparing to wait for external event network-vif-plugged-e42bce82-3ee1-4272-b032-6bf70412fe94 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 17:25:42 compute-0 nova_compute[254092]: 2025-11-25 17:25:42.697 254096 DEBUG oslo_concurrency.lockutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "88446199-25eb-4303-8df1-334acb721afc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:25:42 compute-0 nova_compute[254092]: 2025-11-25 17:25:42.698 254096 DEBUG oslo_concurrency.lockutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "88446199-25eb-4303-8df1-334acb721afc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:25:42 compute-0 nova_compute[254092]: 2025-11-25 17:25:42.698 254096 DEBUG oslo_concurrency.lockutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "88446199-25eb-4303-8df1-334acb721afc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:25:42 compute-0 nova_compute[254092]: 2025-11-25 17:25:42.699 254096 DEBUG nova.virt.libvirt.vif [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:25:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1061700885',display_name='tempest-TestGettingAddress-server-1061700885',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1061700885',id=150,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIGApEL69TBJfF0aCv+AtYr+DxsRqmx1WU3nexhWfLc3znEGVl+tJi462cWrzK0K7v6BMW214uJp8PUOC2Xg8eQ3E5Du3KhcTVIPqLHPMIkpL2RlGG9q2oDwpns6tg2WOA==',key_name='tempest-TestGettingAddress-1324577306',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-d90lz9sr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:25:37Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=88446199-25eb-4303-8df1-334acb721afc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e42bce82-3ee1-4272-b032-6bf70412fe94", "address": "fa:16:3e:2d:49:1a", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2d:491a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape42bce82-3e", "ovs_interfaceid": "e42bce82-3ee1-4272-b032-6bf70412fe94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:25:42 compute-0 nova_compute[254092]: 2025-11-25 17:25:42.700 254096 DEBUG nova.network.os_vif_util [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "e42bce82-3ee1-4272-b032-6bf70412fe94", "address": "fa:16:3e:2d:49:1a", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2d:491a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape42bce82-3e", "ovs_interfaceid": "e42bce82-3ee1-4272-b032-6bf70412fe94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:25:42 compute-0 nova_compute[254092]: 2025-11-25 17:25:42.701 254096 DEBUG nova.network.os_vif_util [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:49:1a,bridge_name='br-int',has_traffic_filtering=True,id=e42bce82-3ee1-4272-b032-6bf70412fe94,network=Network(88c42593-de32-4e23-b7f7-5cac507fe68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape42bce82-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:25:42 compute-0 nova_compute[254092]: 2025-11-25 17:25:42.702 254096 DEBUG os_vif [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:49:1a,bridge_name='br-int',has_traffic_filtering=True,id=e42bce82-3ee1-4272-b032-6bf70412fe94,network=Network(88c42593-de32-4e23-b7f7-5cac507fe68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape42bce82-3e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:25:42 compute-0 nova_compute[254092]: 2025-11-25 17:25:42.703 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:25:42 compute-0 nova_compute[254092]: 2025-11-25 17:25:42.704 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:25:42 compute-0 nova_compute[254092]: 2025-11-25 17:25:42.704 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:25:42 compute-0 nova_compute[254092]: 2025-11-25 17:25:42.709 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:25:42 compute-0 nova_compute[254092]: 2025-11-25 17:25:42.709 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape42bce82-3e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:25:42 compute-0 nova_compute[254092]: 2025-11-25 17:25:42.710 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape42bce82-3e, col_values=(('external_ids', {'iface-id': 'e42bce82-3ee1-4272-b032-6bf70412fe94', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2d:49:1a', 'vm-uuid': '88446199-25eb-4303-8df1-334acb721afc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:25:42 compute-0 nova_compute[254092]: 2025-11-25 17:25:42.764 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:25:42 compute-0 NetworkManager[48891]: <info>  [1764091542.7651] manager: (tape42bce82-3e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/670)
Nov 25 17:25:42 compute-0 nova_compute[254092]: 2025-11-25 17:25:42.768 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:25:42 compute-0 nova_compute[254092]: 2025-11-25 17:25:42.773 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:25:42 compute-0 nova_compute[254092]: 2025-11-25 17:25:42.774 254096 INFO os_vif [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:49:1a,bridge_name='br-int',has_traffic_filtering=True,id=e42bce82-3ee1-4272-b032-6bf70412fe94,network=Network(88c42593-de32-4e23-b7f7-5cac507fe68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape42bce82-3e')
Nov 25 17:25:42 compute-0 nova_compute[254092]: 2025-11-25 17:25:42.848 254096 DEBUG nova.virt.libvirt.driver [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:25:42 compute-0 nova_compute[254092]: 2025-11-25 17:25:42.849 254096 DEBUG nova.virt.libvirt.driver [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:25:42 compute-0 nova_compute[254092]: 2025-11-25 17:25:42.851 254096 DEBUG nova.virt.libvirt.driver [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] No VIF found with MAC fa:16:3e:2d:49:1a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:25:42 compute-0 nova_compute[254092]: 2025-11-25 17:25:42.852 254096 INFO nova.virt.libvirt.driver [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Using config drive
Nov 25 17:25:42 compute-0 nova_compute[254092]: 2025-11-25 17:25:42.890 254096 DEBUG nova.storage.rbd_utils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 88446199-25eb-4303-8df1-334acb721afc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:25:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3065: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:25:43 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3304960193' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:25:43 compute-0 nova_compute[254092]: 2025-11-25 17:25:43.343 254096 INFO nova.virt.libvirt.driver [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Creating config drive at /var/lib/nova/instances/88446199-25eb-4303-8df1-334acb721afc/disk.config
Nov 25 17:25:43 compute-0 nova_compute[254092]: 2025-11-25 17:25:43.354 254096 DEBUG oslo_concurrency.processutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/88446199-25eb-4303-8df1-334acb721afc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsra3kles execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:25:43 compute-0 nova_compute[254092]: 2025-11-25 17:25:43.510 254096 DEBUG oslo_concurrency.processutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/88446199-25eb-4303-8df1-334acb721afc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsra3kles" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:25:43 compute-0 nova_compute[254092]: 2025-11-25 17:25:43.555 254096 DEBUG nova.storage.rbd_utils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] rbd image 88446199-25eb-4303-8df1-334acb721afc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:25:43 compute-0 nova_compute[254092]: 2025-11-25 17:25:43.558 254096 DEBUG oslo_concurrency.processutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/88446199-25eb-4303-8df1-334acb721afc/disk.config 88446199-25eb-4303-8df1-334acb721afc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:25:43 compute-0 nova_compute[254092]: 2025-11-25 17:25:43.729 254096 DEBUG oslo_concurrency.processutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/88446199-25eb-4303-8df1-334acb721afc/disk.config 88446199-25eb-4303-8df1-334acb721afc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:25:43 compute-0 nova_compute[254092]: 2025-11-25 17:25:43.730 254096 INFO nova.virt.libvirt.driver [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Deleting local config drive /var/lib/nova/instances/88446199-25eb-4303-8df1-334acb721afc/disk.config because it was imported into RBD.
Nov 25 17:25:43 compute-0 kernel: tape42bce82-3e: entered promiscuous mode
Nov 25 17:25:43 compute-0 NetworkManager[48891]: <info>  [1764091543.8044] manager: (tape42bce82-3e): new Tun device (/org/freedesktop/NetworkManager/Devices/671)
Nov 25 17:25:43 compute-0 nova_compute[254092]: 2025-11-25 17:25:43.823 254096 DEBUG nova.network.neutron [req-3fb4df58-cb43-4a8a-b4a3-4479c8f0ec0d req-c850cea5-4547-40f6-a875-a21c4d601eec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Updated VIF entry in instance network info cache for port e42bce82-3ee1-4272-b032-6bf70412fe94. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:25:43 compute-0 nova_compute[254092]: 2025-11-25 17:25:43.825 254096 DEBUG nova.network.neutron [req-3fb4df58-cb43-4a8a-b4a3-4479c8f0ec0d req-c850cea5-4547-40f6-a875-a21c4d601eec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Updating instance_info_cache with network_info: [{"id": "e42bce82-3ee1-4272-b032-6bf70412fe94", "address": "fa:16:3e:2d:49:1a", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2d:491a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape42bce82-3e", "ovs_interfaceid": "e42bce82-3ee1-4272-b032-6bf70412fe94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:25:43 compute-0 nova_compute[254092]: 2025-11-25 17:25:43.836 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:25:43 compute-0 ovn_controller[153477]: 2025-11-25T17:25:43Z|01607|binding|INFO|Claiming lport e42bce82-3ee1-4272-b032-6bf70412fe94 for this chassis.
Nov 25 17:25:43 compute-0 ovn_controller[153477]: 2025-11-25T17:25:43Z|01608|binding|INFO|e42bce82-3ee1-4272-b032-6bf70412fe94: Claiming fa:16:3e:2d:49:1a 10.100.0.3 2001:db8::f816:3eff:fe2d:491a
Nov 25 17:25:43 compute-0 systemd-udevd[424263]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:25:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:25:43.847 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:49:1a 10.100.0.3 2001:db8::f816:3eff:fe2d:491a'], port_security=['fa:16:3e:2d:49:1a 10.100.0.3 2001:db8::f816:3eff:fe2d:491a'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:fe2d:491a/64', 'neutron:device_id': '88446199-25eb-4303-8df1-334acb721afc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-88c42593-de32-4e23-b7f7-5cac507fe68d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ae658bb9-0cc7-4dfb-9d61-46336c3a204c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bc6b1ea5-74e3-4908-b1a6-823fc7f5175e, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=e42bce82-3ee1-4272-b032-6bf70412fe94) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:25:43 compute-0 nova_compute[254092]: 2025-11-25 17:25:43.849 254096 DEBUG oslo_concurrency.lockutils [req-3fb4df58-cb43-4a8a-b4a3-4479c8f0ec0d req-c850cea5-4547-40f6-a875-a21c4d601eec a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-88446199-25eb-4303-8df1-334acb721afc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:25:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:25:43.848 163338 INFO neutron.agent.ovn.metadata.agent [-] Port e42bce82-3ee1-4272-b032-6bf70412fe94 in datapath 88c42593-de32-4e23-b7f7-5cac507fe68d bound to our chassis
Nov 25 17:25:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:25:43.850 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 88c42593-de32-4e23-b7f7-5cac507fe68d
Nov 25 17:25:43 compute-0 ovn_controller[153477]: 2025-11-25T17:25:43Z|01609|binding|INFO|Setting lport e42bce82-3ee1-4272-b032-6bf70412fe94 ovn-installed in OVS
Nov 25 17:25:43 compute-0 ovn_controller[153477]: 2025-11-25T17:25:43Z|01610|binding|INFO|Setting lport e42bce82-3ee1-4272-b032-6bf70412fe94 up in Southbound
Nov 25 17:25:43 compute-0 nova_compute[254092]: 2025-11-25 17:25:43.859 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:25:43 compute-0 NetworkManager[48891]: <info>  [1764091543.8645] device (tape42bce82-3e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:25:43 compute-0 NetworkManager[48891]: <info>  [1764091543.8656] device (tape42bce82-3e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:25:43 compute-0 systemd-machined[216343]: New machine qemu-184-instance-00000096.
Nov 25 17:25:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:25:43.870 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[39a33e3a-cd1d-4e7d-a641-b6a4fe82d4a1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:25:43 compute-0 systemd[1]: Started Virtual Machine qemu-184-instance-00000096.
Nov 25 17:25:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:25:43.906 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[3ccc9397-fda4-4478-a489-5f58371571e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:25:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:25:43.910 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[d03f937b-fb4c-45a8-9797-c94490975a70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:25:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:25:43.935 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[7200018d-8ac2-457f-ad38-e15d07a59c4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:25:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:25:43.956 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[83d57320-9405-4643-b327-d90f3c2ea229]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap88c42593-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d7:e9:6e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 19, 'tx_packets': 5, 'rx_bytes': 1586, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 19, 'tx_packets': 5, 'rx_bytes': 1586, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 466], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 796568, 'reachable_time': 41537, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 17, 'inoctets': 1264, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 17, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1264, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 17, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 424279, 'error': None, 'target': 'ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:25:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:25:43.974 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[ddab87c1-2fa9-45e7-83e5-37756076fcbf]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap88c42593-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 796583, 'tstamp': 796583}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 424281, 'error': None, 'target': 'ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap88c42593-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 796586, 'tstamp': 796586}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 424281, 'error': None, 'target': 'ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:25:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:25:43.976 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88c42593-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:25:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:25:43.979 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap88c42593-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:25:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:25:43.979 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:25:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:25:43.979 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap88c42593-d0, col_values=(('external_ids', {'iface-id': '10a05078-1fb1-4ddb-9ad4-b52f3a95063f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:25:43 compute-0 nova_compute[254092]: 2025-11-25 17:25:43.978 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:25:43 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:25:43.980 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:25:44 compute-0 ceph-mon[74985]: pgmap v3065: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:25:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:25:44 compute-0 nova_compute[254092]: 2025-11-25 17:25:44.572 254096 DEBUG nova.compute.manager [req-f605e023-d5c7-430e-9483-34860fbf76d2 req-8f8d1e5b-da03-47c6-8666-05f53f160b75 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Received event network-vif-plugged-e42bce82-3ee1-4272-b032-6bf70412fe94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:25:44 compute-0 nova_compute[254092]: 2025-11-25 17:25:44.572 254096 DEBUG oslo_concurrency.lockutils [req-f605e023-d5c7-430e-9483-34860fbf76d2 req-8f8d1e5b-da03-47c6-8666-05f53f160b75 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "88446199-25eb-4303-8df1-334acb721afc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:25:44 compute-0 nova_compute[254092]: 2025-11-25 17:25:44.573 254096 DEBUG oslo_concurrency.lockutils [req-f605e023-d5c7-430e-9483-34860fbf76d2 req-8f8d1e5b-da03-47c6-8666-05f53f160b75 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "88446199-25eb-4303-8df1-334acb721afc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:25:44 compute-0 nova_compute[254092]: 2025-11-25 17:25:44.573 254096 DEBUG oslo_concurrency.lockutils [req-f605e023-d5c7-430e-9483-34860fbf76d2 req-8f8d1e5b-da03-47c6-8666-05f53f160b75 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "88446199-25eb-4303-8df1-334acb721afc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:25:44 compute-0 nova_compute[254092]: 2025-11-25 17:25:44.574 254096 DEBUG nova.compute.manager [req-f605e023-d5c7-430e-9483-34860fbf76d2 req-8f8d1e5b-da03-47c6-8666-05f53f160b75 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Processing event network-vif-plugged-e42bce82-3ee1-4272-b032-6bf70412fe94 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 17:25:44 compute-0 nova_compute[254092]: 2025-11-25 17:25:44.649 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091544.648669, 88446199-25eb-4303-8df1-334acb721afc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:25:44 compute-0 nova_compute[254092]: 2025-11-25 17:25:44.649 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 88446199-25eb-4303-8df1-334acb721afc] VM Started (Lifecycle Event)
Nov 25 17:25:44 compute-0 nova_compute[254092]: 2025-11-25 17:25:44.651 254096 DEBUG nova.compute.manager [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 17:25:44 compute-0 nova_compute[254092]: 2025-11-25 17:25:44.655 254096 DEBUG nova.virt.libvirt.driver [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 17:25:44 compute-0 nova_compute[254092]: 2025-11-25 17:25:44.658 254096 INFO nova.virt.libvirt.driver [-] [instance: 88446199-25eb-4303-8df1-334acb721afc] Instance spawned successfully.
Nov 25 17:25:44 compute-0 nova_compute[254092]: 2025-11-25 17:25:44.658 254096 DEBUG nova.virt.libvirt.driver [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 17:25:44 compute-0 nova_compute[254092]: 2025-11-25 17:25:44.680 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 88446199-25eb-4303-8df1-334acb721afc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:25:44 compute-0 nova_compute[254092]: 2025-11-25 17:25:44.684 254096 DEBUG nova.virt.libvirt.driver [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:25:44 compute-0 nova_compute[254092]: 2025-11-25 17:25:44.685 254096 DEBUG nova.virt.libvirt.driver [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:25:44 compute-0 nova_compute[254092]: 2025-11-25 17:25:44.685 254096 DEBUG nova.virt.libvirt.driver [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:25:44 compute-0 nova_compute[254092]: 2025-11-25 17:25:44.686 254096 DEBUG nova.virt.libvirt.driver [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:25:44 compute-0 nova_compute[254092]: 2025-11-25 17:25:44.686 254096 DEBUG nova.virt.libvirt.driver [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:25:44 compute-0 nova_compute[254092]: 2025-11-25 17:25:44.686 254096 DEBUG nova.virt.libvirt.driver [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:25:44 compute-0 nova_compute[254092]: 2025-11-25 17:25:44.691 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 88446199-25eb-4303-8df1-334acb721afc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:25:44 compute-0 nova_compute[254092]: 2025-11-25 17:25:44.713 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 88446199-25eb-4303-8df1-334acb721afc] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:25:44 compute-0 nova_compute[254092]: 2025-11-25 17:25:44.713 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091544.6495414, 88446199-25eb-4303-8df1-334acb721afc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:25:44 compute-0 nova_compute[254092]: 2025-11-25 17:25:44.714 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 88446199-25eb-4303-8df1-334acb721afc] VM Paused (Lifecycle Event)
Nov 25 17:25:44 compute-0 nova_compute[254092]: 2025-11-25 17:25:44.737 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 88446199-25eb-4303-8df1-334acb721afc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:25:44 compute-0 nova_compute[254092]: 2025-11-25 17:25:44.741 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091544.654381, 88446199-25eb-4303-8df1-334acb721afc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:25:44 compute-0 nova_compute[254092]: 2025-11-25 17:25:44.741 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 88446199-25eb-4303-8df1-334acb721afc] VM Resumed (Lifecycle Event)
Nov 25 17:25:44 compute-0 nova_compute[254092]: 2025-11-25 17:25:44.746 254096 INFO nova.compute.manager [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Took 7.27 seconds to spawn the instance on the hypervisor.
Nov 25 17:25:44 compute-0 nova_compute[254092]: 2025-11-25 17:25:44.747 254096 DEBUG nova.compute.manager [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:25:44 compute-0 nova_compute[254092]: 2025-11-25 17:25:44.756 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 88446199-25eb-4303-8df1-334acb721afc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:25:44 compute-0 nova_compute[254092]: 2025-11-25 17:25:44.759 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 88446199-25eb-4303-8df1-334acb721afc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:25:44 compute-0 nova_compute[254092]: 2025-11-25 17:25:44.779 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 88446199-25eb-4303-8df1-334acb721afc] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:25:44 compute-0 nova_compute[254092]: 2025-11-25 17:25:44.805 254096 INFO nova.compute.manager [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Took 8.20 seconds to build instance.
Nov 25 17:25:44 compute-0 nova_compute[254092]: 2025-11-25 17:25:44.821 254096 DEBUG oslo_concurrency.lockutils [None req-3a357188-3e31-440d-8f54-64d0a5bfa04a a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "88446199-25eb-4303-8df1-334acb721afc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.269s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:25:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3066: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Nov 25 17:25:46 compute-0 ceph-mon[74985]: pgmap v3066: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Nov 25 17:25:46 compute-0 nova_compute[254092]: 2025-11-25 17:25:46.518 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:25:46 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:25:46.554 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '63'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:25:46 compute-0 nova_compute[254092]: 2025-11-25 17:25:46.635 254096 DEBUG nova.compute.manager [req-78d2000b-7f13-4b75-87c1-1e9cfc6b25ef req-fbce2ed0-9b32-4918-8310-00025b865ed8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Received event network-vif-plugged-e42bce82-3ee1-4272-b032-6bf70412fe94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:25:46 compute-0 nova_compute[254092]: 2025-11-25 17:25:46.635 254096 DEBUG oslo_concurrency.lockutils [req-78d2000b-7f13-4b75-87c1-1e9cfc6b25ef req-fbce2ed0-9b32-4918-8310-00025b865ed8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "88446199-25eb-4303-8df1-334acb721afc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:25:46 compute-0 nova_compute[254092]: 2025-11-25 17:25:46.636 254096 DEBUG oslo_concurrency.lockutils [req-78d2000b-7f13-4b75-87c1-1e9cfc6b25ef req-fbce2ed0-9b32-4918-8310-00025b865ed8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "88446199-25eb-4303-8df1-334acb721afc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:25:46 compute-0 nova_compute[254092]: 2025-11-25 17:25:46.636 254096 DEBUG oslo_concurrency.lockutils [req-78d2000b-7f13-4b75-87c1-1e9cfc6b25ef req-fbce2ed0-9b32-4918-8310-00025b865ed8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "88446199-25eb-4303-8df1-334acb721afc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:25:46 compute-0 nova_compute[254092]: 2025-11-25 17:25:46.636 254096 DEBUG nova.compute.manager [req-78d2000b-7f13-4b75-87c1-1e9cfc6b25ef req-fbce2ed0-9b32-4918-8310-00025b865ed8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] No waiting events found dispatching network-vif-plugged-e42bce82-3ee1-4272-b032-6bf70412fe94 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:25:46 compute-0 nova_compute[254092]: 2025-11-25 17:25:46.636 254096 WARNING nova.compute.manager [req-78d2000b-7f13-4b75-87c1-1e9cfc6b25ef req-fbce2ed0-9b32-4918-8310-00025b865ed8 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Received unexpected event network-vif-plugged-e42bce82-3ee1-4272-b032-6bf70412fe94 for instance with vm_state active and task_state None.
Nov 25 17:25:46 compute-0 podman[424325]: 2025-11-25 17:25:46.659531205 +0000 UTC m=+0.063168490 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 17:25:46 compute-0 podman[424324]: 2025-11-25 17:25:46.665745834 +0000 UTC m=+0.071841636 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 25 17:25:46 compute-0 podman[424326]: 2025-11-25 17:25:46.694582369 +0000 UTC m=+0.096683173 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 17:25:46 compute-0 nova_compute[254092]: 2025-11-25 17:25:46.796 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:25:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3067: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 110 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Nov 25 17:25:47 compute-0 nova_compute[254092]: 2025-11-25 17:25:47.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:25:47 compute-0 nova_compute[254092]: 2025-11-25 17:25:47.764 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:25:48 compute-0 ceph-mon[74985]: pgmap v3067: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 110 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Nov 25 17:25:48 compute-0 nova_compute[254092]: 2025-11-25 17:25:48.748 254096 DEBUG nova.compute.manager [req-c8154016-4905-4733-8573-eabb8170ac52 req-8d040ad3-2049-40b7-b8cc-20cba2aa02b2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Received event network-changed-e42bce82-3ee1-4272-b032-6bf70412fe94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:25:48 compute-0 nova_compute[254092]: 2025-11-25 17:25:48.748 254096 DEBUG nova.compute.manager [req-c8154016-4905-4733-8573-eabb8170ac52 req-8d040ad3-2049-40b7-b8cc-20cba2aa02b2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Refreshing instance network info cache due to event network-changed-e42bce82-3ee1-4272-b032-6bf70412fe94. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:25:48 compute-0 nova_compute[254092]: 2025-11-25 17:25:48.749 254096 DEBUG oslo_concurrency.lockutils [req-c8154016-4905-4733-8573-eabb8170ac52 req-8d040ad3-2049-40b7-b8cc-20cba2aa02b2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-88446199-25eb-4303-8df1-334acb721afc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:25:48 compute-0 nova_compute[254092]: 2025-11-25 17:25:48.749 254096 DEBUG oslo_concurrency.lockutils [req-c8154016-4905-4733-8573-eabb8170ac52 req-8d040ad3-2049-40b7-b8cc-20cba2aa02b2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-88446199-25eb-4303-8df1-334acb721afc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:25:48 compute-0 nova_compute[254092]: 2025-11-25 17:25:48.749 254096 DEBUG nova.network.neutron [req-c8154016-4905-4733-8573-eabb8170ac52 req-8d040ad3-2049-40b7-b8cc-20cba2aa02b2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Refreshing network info cache for port e42bce82-3ee1-4272-b032-6bf70412fe94 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:25:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3068: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 110 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Nov 25 17:25:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:25:50 compute-0 ceph-mon[74985]: pgmap v3068: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 110 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Nov 25 17:25:50 compute-0 nova_compute[254092]: 2025-11-25 17:25:50.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:25:50 compute-0 nova_compute[254092]: 2025-11-25 17:25:50.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:25:50 compute-0 nova_compute[254092]: 2025-11-25 17:25:50.560 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:25:50 compute-0 nova_compute[254092]: 2025-11-25 17:25:50.561 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:25:50 compute-0 nova_compute[254092]: 2025-11-25 17:25:50.561 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:25:50 compute-0 nova_compute[254092]: 2025-11-25 17:25:50.562 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:25:50 compute-0 nova_compute[254092]: 2025-11-25 17:25:50.562 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:25:50 compute-0 nova_compute[254092]: 2025-11-25 17:25:50.620 254096 DEBUG nova.network.neutron [req-c8154016-4905-4733-8573-eabb8170ac52 req-8d040ad3-2049-40b7-b8cc-20cba2aa02b2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Updated VIF entry in instance network info cache for port e42bce82-3ee1-4272-b032-6bf70412fe94. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:25:50 compute-0 nova_compute[254092]: 2025-11-25 17:25:50.621 254096 DEBUG nova.network.neutron [req-c8154016-4905-4733-8573-eabb8170ac52 req-8d040ad3-2049-40b7-b8cc-20cba2aa02b2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Updating instance_info_cache with network_info: [{"id": "e42bce82-3ee1-4272-b032-6bf70412fe94", "address": "fa:16:3e:2d:49:1a", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2d:491a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape42bce82-3e", "ovs_interfaceid": "e42bce82-3ee1-4272-b032-6bf70412fe94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:25:50 compute-0 nova_compute[254092]: 2025-11-25 17:25:50.650 254096 DEBUG oslo_concurrency.lockutils [req-c8154016-4905-4733-8573-eabb8170ac52 req-8d040ad3-2049-40b7-b8cc-20cba2aa02b2 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-88446199-25eb-4303-8df1-334acb721afc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:25:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:25:51 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3809576172' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:25:51 compute-0 nova_compute[254092]: 2025-11-25 17:25:51.061 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:25:51 compute-0 nova_compute[254092]: 2025-11-25 17:25:51.147 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:25:51 compute-0 nova_compute[254092]: 2025-11-25 17:25:51.148 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:25:51 compute-0 nova_compute[254092]: 2025-11-25 17:25:51.154 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:25:51 compute-0 nova_compute[254092]: 2025-11-25 17:25:51.154 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:25:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3069: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 17:25:51 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3809576172' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:25:51 compute-0 nova_compute[254092]: 2025-11-25 17:25:51.328 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:25:51 compute-0 nova_compute[254092]: 2025-11-25 17:25:51.329 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3247MB free_disk=59.921836853027344GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:25:51 compute-0 nova_compute[254092]: 2025-11-25 17:25:51.329 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:25:51 compute-0 nova_compute[254092]: 2025-11-25 17:25:51.330 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:25:51 compute-0 nova_compute[254092]: 2025-11-25 17:25:51.527 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 03d40f4f-1bf3-4e1d-8844-bae4b32443cc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 17:25:51 compute-0 nova_compute[254092]: 2025-11-25 17:25:51.530 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 88446199-25eb-4303-8df1-334acb721afc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 17:25:51 compute-0 nova_compute[254092]: 2025-11-25 17:25:51.531 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:25:51 compute-0 nova_compute[254092]: 2025-11-25 17:25:51.532 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:25:51 compute-0 nova_compute[254092]: 2025-11-25 17:25:51.621 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing inventories for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 17:25:51 compute-0 nova_compute[254092]: 2025-11-25 17:25:51.691 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating ProviderTree inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 17:25:51 compute-0 nova_compute[254092]: 2025-11-25 17:25:51.692 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 17:25:51 compute-0 nova_compute[254092]: 2025-11-25 17:25:51.709 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing aggregate associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 17:25:51 compute-0 nova_compute[254092]: 2025-11-25 17:25:51.733 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing trait associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 17:25:51 compute-0 nova_compute[254092]: 2025-11-25 17:25:51.799 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:25:51 compute-0 nova_compute[254092]: 2025-11-25 17:25:51.810 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:25:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:25:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:25:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:25:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:25:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011076229488181286 of space, bias 1.0, pg target 0.3322868846454386 quantized to 32 (current 32)
Nov 25 17:25:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:25:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:25:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:25:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:25:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:25:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 17:25:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:25:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:25:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:25:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:25:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:25:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:25:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:25:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:25:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:25:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:25:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:25:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:25:52 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:25:52 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/520429479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:25:52 compute-0 nova_compute[254092]: 2025-11-25 17:25:52.250 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:25:52 compute-0 nova_compute[254092]: 2025-11-25 17:25:52.260 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:25:52 compute-0 nova_compute[254092]: 2025-11-25 17:25:52.280 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:25:52 compute-0 nova_compute[254092]: 2025-11-25 17:25:52.300 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:25:52 compute-0 nova_compute[254092]: 2025-11-25 17:25:52.301 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.971s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:25:52 compute-0 ceph-mon[74985]: pgmap v3069: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 25 17:25:52 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/520429479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:25:52 compute-0 nova_compute[254092]: 2025-11-25 17:25:52.767 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:25:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3070: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:25:54 compute-0 ceph-mon[74985]: pgmap v3070: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:25:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:25:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3071: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Nov 25 17:25:55 compute-0 nova_compute[254092]: 2025-11-25 17:25:55.300 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:25:55 compute-0 nova_compute[254092]: 2025-11-25 17:25:55.302 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:25:55 compute-0 nova_compute[254092]: 2025-11-25 17:25:55.302 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:25:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:25:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4266833940' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:25:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:25:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4266833940' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:25:55 compute-0 nova_compute[254092]: 2025-11-25 17:25:55.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:25:56 compute-0 ceph-mon[74985]: pgmap v3071: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Nov 25 17:25:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/4266833940' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:25:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/4266833940' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:25:56 compute-0 nova_compute[254092]: 2025-11-25 17:25:56.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:25:56 compute-0 nova_compute[254092]: 2025-11-25 17:25:56.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:25:56 compute-0 nova_compute[254092]: 2025-11-25 17:25:56.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:25:56 compute-0 nova_compute[254092]: 2025-11-25 17:25:56.802 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:25:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3072: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.6 KiB/s wr, 66 op/s
Nov 25 17:25:57 compute-0 nova_compute[254092]: 2025-11-25 17:25:57.280 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-03d40f4f-1bf3-4e1d-8844-bae4b32443cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:25:57 compute-0 nova_compute[254092]: 2025-11-25 17:25:57.280 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-03d40f4f-1bf3-4e1d-8844-bae4b32443cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:25:57 compute-0 nova_compute[254092]: 2025-11-25 17:25:57.281 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 17:25:57 compute-0 nova_compute[254092]: 2025-11-25 17:25:57.281 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 03d40f4f-1bf3-4e1d-8844-bae4b32443cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:25:57 compute-0 nova_compute[254092]: 2025-11-25 17:25:57.771 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:25:58 compute-0 ceph-mon[74985]: pgmap v3072: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.6 KiB/s wr, 66 op/s
Nov 25 17:25:58 compute-0 ovn_controller[153477]: 2025-11-25T17:25:58Z|00196|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2d:49:1a 10.100.0.3
Nov 25 17:25:58 compute-0 ovn_controller[153477]: 2025-11-25T17:25:58Z|00197|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2d:49:1a 10.100.0.3
Nov 25 17:25:58 compute-0 nova_compute[254092]: 2025-11-25 17:25:58.706 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Updating instance_info_cache with network_info: [{"id": "329205f7-aac9-4c77-b1eb-b7af04c34038", "address": "fa:16:3e:8d:69:2e", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:692e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap329205f7-aa", "ovs_interfaceid": "329205f7-aac9-4c77-b1eb-b7af04c34038", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:25:58 compute-0 nova_compute[254092]: 2025-11-25 17:25:58.725 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-03d40f4f-1bf3-4e1d-8844-bae4b32443cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:25:58 compute-0 nova_compute[254092]: 2025-11-25 17:25:58.725 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 17:25:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3073: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.3 KiB/s wr, 60 op/s
Nov 25 17:25:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:26:00 compute-0 ceph-mon[74985]: pgmap v3073: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.3 KiB/s wr, 60 op/s
Nov 25 17:26:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3074: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 123 op/s
Nov 25 17:26:01 compute-0 nova_compute[254092]: 2025-11-25 17:26:01.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:26:01 compute-0 nova_compute[254092]: 2025-11-25 17:26:01.523 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:26:01 compute-0 nova_compute[254092]: 2025-11-25 17:26:01.807 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:26:02 compute-0 ceph-mon[74985]: pgmap v3074: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 123 op/s
Nov 25 17:26:02 compute-0 nova_compute[254092]: 2025-11-25 17:26:02.775 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:26:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3075: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:26:04 compute-0 ceph-mon[74985]: pgmap v3075: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:26:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:26:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3076: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 353 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Nov 25 17:26:06 compute-0 nova_compute[254092]: 2025-11-25 17:26:06.050 254096 DEBUG nova.compute.manager [req-20b083f7-b9ff-4964-9c0a-6d5016c6c51c req-a1814a42-3b8f-4fc2-a6a7-74d5bc09a8e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Received event network-changed-e42bce82-3ee1-4272-b032-6bf70412fe94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:26:06 compute-0 nova_compute[254092]: 2025-11-25 17:26:06.050 254096 DEBUG nova.compute.manager [req-20b083f7-b9ff-4964-9c0a-6d5016c6c51c req-a1814a42-3b8f-4fc2-a6a7-74d5bc09a8e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Refreshing instance network info cache due to event network-changed-e42bce82-3ee1-4272-b032-6bf70412fe94. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:26:06 compute-0 nova_compute[254092]: 2025-11-25 17:26:06.050 254096 DEBUG oslo_concurrency.lockutils [req-20b083f7-b9ff-4964-9c0a-6d5016c6c51c req-a1814a42-3b8f-4fc2-a6a7-74d5bc09a8e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-88446199-25eb-4303-8df1-334acb721afc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:26:06 compute-0 nova_compute[254092]: 2025-11-25 17:26:06.050 254096 DEBUG oslo_concurrency.lockutils [req-20b083f7-b9ff-4964-9c0a-6d5016c6c51c req-a1814a42-3b8f-4fc2-a6a7-74d5bc09a8e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-88446199-25eb-4303-8df1-334acb721afc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:26:06 compute-0 nova_compute[254092]: 2025-11-25 17:26:06.051 254096 DEBUG nova.network.neutron [req-20b083f7-b9ff-4964-9c0a-6d5016c6c51c req-a1814a42-3b8f-4fc2-a6a7-74d5bc09a8e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Refreshing network info cache for port e42bce82-3ee1-4272-b032-6bf70412fe94 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:26:06 compute-0 nova_compute[254092]: 2025-11-25 17:26:06.122 254096 DEBUG oslo_concurrency.lockutils [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "88446199-25eb-4303-8df1-334acb721afc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:26:06 compute-0 nova_compute[254092]: 2025-11-25 17:26:06.123 254096 DEBUG oslo_concurrency.lockutils [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "88446199-25eb-4303-8df1-334acb721afc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:26:06 compute-0 nova_compute[254092]: 2025-11-25 17:26:06.124 254096 DEBUG oslo_concurrency.lockutils [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "88446199-25eb-4303-8df1-334acb721afc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:26:06 compute-0 nova_compute[254092]: 2025-11-25 17:26:06.124 254096 DEBUG oslo_concurrency.lockutils [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "88446199-25eb-4303-8df1-334acb721afc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:26:06 compute-0 nova_compute[254092]: 2025-11-25 17:26:06.125 254096 DEBUG oslo_concurrency.lockutils [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "88446199-25eb-4303-8df1-334acb721afc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:26:06 compute-0 nova_compute[254092]: 2025-11-25 17:26:06.127 254096 INFO nova.compute.manager [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Terminating instance
Nov 25 17:26:06 compute-0 nova_compute[254092]: 2025-11-25 17:26:06.130 254096 DEBUG nova.compute.manager [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 17:26:06 compute-0 kernel: tape42bce82-3e (unregistering): left promiscuous mode
Nov 25 17:26:06 compute-0 NetworkManager[48891]: <info>  [1764091566.1908] device (tape42bce82-3e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:26:06 compute-0 ovn_controller[153477]: 2025-11-25T17:26:06Z|01611|binding|INFO|Releasing lport e42bce82-3ee1-4272-b032-6bf70412fe94 from this chassis (sb_readonly=0)
Nov 25 17:26:06 compute-0 nova_compute[254092]: 2025-11-25 17:26:06.198 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:26:06 compute-0 ovn_controller[153477]: 2025-11-25T17:26:06Z|01612|binding|INFO|Setting lport e42bce82-3ee1-4272-b032-6bf70412fe94 down in Southbound
Nov 25 17:26:06 compute-0 ovn_controller[153477]: 2025-11-25T17:26:06Z|01613|binding|INFO|Removing iface tape42bce82-3e ovn-installed in OVS
Nov 25 17:26:06 compute-0 nova_compute[254092]: 2025-11-25 17:26:06.201 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:26:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:26:06.208 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:49:1a 10.100.0.3 2001:db8::f816:3eff:fe2d:491a'], port_security=['fa:16:3e:2d:49:1a 10.100.0.3 2001:db8::f816:3eff:fe2d:491a'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:fe2d:491a/64', 'neutron:device_id': '88446199-25eb-4303-8df1-334acb721afc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-88c42593-de32-4e23-b7f7-5cac507fe68d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ae658bb9-0cc7-4dfb-9d61-46336c3a204c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bc6b1ea5-74e3-4908-b1a6-823fc7f5175e, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=e42bce82-3ee1-4272-b032-6bf70412fe94) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:26:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:26:06.210 163338 INFO neutron.agent.ovn.metadata.agent [-] Port e42bce82-3ee1-4272-b032-6bf70412fe94 in datapath 88c42593-de32-4e23-b7f7-5cac507fe68d unbound from our chassis
Nov 25 17:26:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:26:06.210 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 88c42593-de32-4e23-b7f7-5cac507fe68d
Nov 25 17:26:06 compute-0 nova_compute[254092]: 2025-11-25 17:26:06.216 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:26:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:26:06.231 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[70e6b545-42dd-4380-918c-693cfe1c4368]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:26:06 compute-0 systemd[1]: machine-qemu\x2d184\x2dinstance\x2d00000096.scope: Deactivated successfully.
Nov 25 17:26:06 compute-0 systemd[1]: machine-qemu\x2d184\x2dinstance\x2d00000096.scope: Consumed 14.231s CPU time.
Nov 25 17:26:06 compute-0 systemd-machined[216343]: Machine qemu-184-instance-00000096 terminated.
Nov 25 17:26:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:26:06.266 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[abda95db-9aaf-468a-83e8-e14a3e7f1254]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:26:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:26:06.272 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[0c293111-6757-4b9f-996e-73090b60586f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:26:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:26:06.309 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[7e55a6c5-2915-4216-b2b2-b449ecc18bf0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:26:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:26:06.329 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f6f39e5b-d3c2-4e50-a86d-cc1572b8d885]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap88c42593-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d7:e9:6e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 32, 'tx_packets': 7, 'rx_bytes': 2640, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 32, 'tx_packets': 7, 'rx_bytes': 2640, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 466], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 796568, 'reachable_time': 41537, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 28, 'inoctets': 2080, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 28, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2080, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 28, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 424444, 'error': None, 'target': 'ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:26:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:26:06.350 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b514f167-5b3b-4a2a-84a3-9e0b4db54e1d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap88c42593-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 796583, 'tstamp': 796583}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 424445, 'error': None, 'target': 'ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap88c42593-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 796586, 'tstamp': 796586}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 424445, 'error': None, 'target': 'ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:26:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:26:06.354 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88c42593-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:26:06 compute-0 nova_compute[254092]: 2025-11-25 17:26:06.375 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:26:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:26:06.379 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap88c42593-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:26:06 compute-0 nova_compute[254092]: 2025-11-25 17:26:06.380 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:26:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:26:06.380 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:26:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:26:06.380 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap88c42593-d0, col_values=(('external_ids', {'iface-id': '10a05078-1fb1-4ddb-9ad4-b52f3a95063f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:26:06 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:26:06.380 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:26:06 compute-0 ceph-mon[74985]: pgmap v3076: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 353 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Nov 25 17:26:06 compute-0 nova_compute[254092]: 2025-11-25 17:26:06.393 254096 INFO nova.virt.libvirt.driver [-] [instance: 88446199-25eb-4303-8df1-334acb721afc] Instance destroyed successfully.
Nov 25 17:26:06 compute-0 nova_compute[254092]: 2025-11-25 17:26:06.394 254096 DEBUG nova.objects.instance [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'resources' on Instance uuid 88446199-25eb-4303-8df1-334acb721afc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:26:06 compute-0 nova_compute[254092]: 2025-11-25 17:26:06.404 254096 DEBUG nova.virt.libvirt.vif [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:25:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1061700885',display_name='tempest-TestGettingAddress-server-1061700885',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1061700885',id=150,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIGApEL69TBJfF0aCv+AtYr+DxsRqmx1WU3nexhWfLc3znEGVl+tJi462cWrzK0K7v6BMW214uJp8PUOC2Xg8eQ3E5Du3KhcTVIPqLHPMIkpL2RlGG9q2oDwpns6tg2WOA==',key_name='tempest-TestGettingAddress-1324577306',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:25:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-d90lz9sr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:25:44Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=88446199-25eb-4303-8df1-334acb721afc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e42bce82-3ee1-4272-b032-6bf70412fe94", "address": "fa:16:3e:2d:49:1a", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2d:491a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape42bce82-3e", "ovs_interfaceid": "e42bce82-3ee1-4272-b032-6bf70412fe94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:26:06 compute-0 nova_compute[254092]: 2025-11-25 17:26:06.404 254096 DEBUG nova.network.os_vif_util [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "e42bce82-3ee1-4272-b032-6bf70412fe94", "address": "fa:16:3e:2d:49:1a", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2d:491a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape42bce82-3e", "ovs_interfaceid": "e42bce82-3ee1-4272-b032-6bf70412fe94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:26:06 compute-0 nova_compute[254092]: 2025-11-25 17:26:06.406 254096 DEBUG nova.network.os_vif_util [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2d:49:1a,bridge_name='br-int',has_traffic_filtering=True,id=e42bce82-3ee1-4272-b032-6bf70412fe94,network=Network(88c42593-de32-4e23-b7f7-5cac507fe68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape42bce82-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:26:06 compute-0 nova_compute[254092]: 2025-11-25 17:26:06.407 254096 DEBUG os_vif [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2d:49:1a,bridge_name='br-int',has_traffic_filtering=True,id=e42bce82-3ee1-4272-b032-6bf70412fe94,network=Network(88c42593-de32-4e23-b7f7-5cac507fe68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape42bce82-3e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:26:06 compute-0 nova_compute[254092]: 2025-11-25 17:26:06.409 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:26:06 compute-0 nova_compute[254092]: 2025-11-25 17:26:06.409 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape42bce82-3e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:26:06 compute-0 nova_compute[254092]: 2025-11-25 17:26:06.411 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:26:06 compute-0 nova_compute[254092]: 2025-11-25 17:26:06.412 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:26:06 compute-0 nova_compute[254092]: 2025-11-25 17:26:06.415 254096 INFO os_vif [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2d:49:1a,bridge_name='br-int',has_traffic_filtering=True,id=e42bce82-3ee1-4272-b032-6bf70412fe94,network=Network(88c42593-de32-4e23-b7f7-5cac507fe68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape42bce82-3e')
Nov 25 17:26:06 compute-0 nova_compute[254092]: 2025-11-25 17:26:06.804 254096 INFO nova.virt.libvirt.driver [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Deleting instance files /var/lib/nova/instances/88446199-25eb-4303-8df1-334acb721afc_del
Nov 25 17:26:06 compute-0 nova_compute[254092]: 2025-11-25 17:26:06.806 254096 INFO nova.virt.libvirt.driver [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Deletion of /var/lib/nova/instances/88446199-25eb-4303-8df1-334acb721afc_del complete
Nov 25 17:26:06 compute-0 nova_compute[254092]: 2025-11-25 17:26:06.811 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:26:06 compute-0 nova_compute[254092]: 2025-11-25 17:26:06.882 254096 INFO nova.compute.manager [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Took 0.75 seconds to destroy the instance on the hypervisor.
Nov 25 17:26:06 compute-0 nova_compute[254092]: 2025-11-25 17:26:06.884 254096 DEBUG oslo.service.loopingcall [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 17:26:06 compute-0 nova_compute[254092]: 2025-11-25 17:26:06.884 254096 DEBUG nova.compute.manager [-] [instance: 88446199-25eb-4303-8df1-334acb721afc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 17:26:06 compute-0 nova_compute[254092]: 2025-11-25 17:26:06.885 254096 DEBUG nova.network.neutron [-] [instance: 88446199-25eb-4303-8df1-334acb721afc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 17:26:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3077: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 364 KiB/s rd, 2.1 MiB/s wr, 112 op/s
Nov 25 17:26:07 compute-0 nova_compute[254092]: 2025-11-25 17:26:07.519 254096 DEBUG nova.network.neutron [req-20b083f7-b9ff-4964-9c0a-6d5016c6c51c req-a1814a42-3b8f-4fc2-a6a7-74d5bc09a8e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Updated VIF entry in instance network info cache for port e42bce82-3ee1-4272-b032-6bf70412fe94. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:26:07 compute-0 nova_compute[254092]: 2025-11-25 17:26:07.520 254096 DEBUG nova.network.neutron [req-20b083f7-b9ff-4964-9c0a-6d5016c6c51c req-a1814a42-3b8f-4fc2-a6a7-74d5bc09a8e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Updating instance_info_cache with network_info: [{"id": "e42bce82-3ee1-4272-b032-6bf70412fe94", "address": "fa:16:3e:2d:49:1a", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2d:491a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape42bce82-3e", "ovs_interfaceid": "e42bce82-3ee1-4272-b032-6bf70412fe94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:26:07 compute-0 nova_compute[254092]: 2025-11-25 17:26:07.539 254096 DEBUG nova.network.neutron [-] [instance: 88446199-25eb-4303-8df1-334acb721afc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:26:07 compute-0 nova_compute[254092]: 2025-11-25 17:26:07.542 254096 DEBUG oslo_concurrency.lockutils [req-20b083f7-b9ff-4964-9c0a-6d5016c6c51c req-a1814a42-3b8f-4fc2-a6a7-74d5bc09a8e3 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-88446199-25eb-4303-8df1-334acb721afc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:26:07 compute-0 nova_compute[254092]: 2025-11-25 17:26:07.563 254096 INFO nova.compute.manager [-] [instance: 88446199-25eb-4303-8df1-334acb721afc] Took 0.68 seconds to deallocate network for instance.
Nov 25 17:26:07 compute-0 nova_compute[254092]: 2025-11-25 17:26:07.596 254096 DEBUG nova.compute.manager [req-85d1dc0b-ffa7-480a-9988-72eb835818fc req-4339aea6-0e7d-4416-99a4-6f67e7e79be1 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Received event network-vif-deleted-e42bce82-3ee1-4272-b032-6bf70412fe94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:26:07 compute-0 nova_compute[254092]: 2025-11-25 17:26:07.638 254096 DEBUG oslo_concurrency.lockutils [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:26:07 compute-0 nova_compute[254092]: 2025-11-25 17:26:07.639 254096 DEBUG oslo_concurrency.lockutils [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:26:07 compute-0 nova_compute[254092]: 2025-11-25 17:26:07.729 254096 DEBUG oslo_concurrency.processutils [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:26:08 compute-0 nova_compute[254092]: 2025-11-25 17:26:08.124 254096 DEBUG nova.compute.manager [req-60357a2f-399f-4918-b8f3-caaf99bca0d2 req-83459f80-b482-4a74-b029-bb47c831244d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Received event network-vif-unplugged-e42bce82-3ee1-4272-b032-6bf70412fe94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:26:08 compute-0 nova_compute[254092]: 2025-11-25 17:26:08.125 254096 DEBUG oslo_concurrency.lockutils [req-60357a2f-399f-4918-b8f3-caaf99bca0d2 req-83459f80-b482-4a74-b029-bb47c831244d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "88446199-25eb-4303-8df1-334acb721afc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:26:08 compute-0 nova_compute[254092]: 2025-11-25 17:26:08.126 254096 DEBUG oslo_concurrency.lockutils [req-60357a2f-399f-4918-b8f3-caaf99bca0d2 req-83459f80-b482-4a74-b029-bb47c831244d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "88446199-25eb-4303-8df1-334acb721afc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:26:08 compute-0 nova_compute[254092]: 2025-11-25 17:26:08.127 254096 DEBUG oslo_concurrency.lockutils [req-60357a2f-399f-4918-b8f3-caaf99bca0d2 req-83459f80-b482-4a74-b029-bb47c831244d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "88446199-25eb-4303-8df1-334acb721afc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:26:08 compute-0 nova_compute[254092]: 2025-11-25 17:26:08.127 254096 DEBUG nova.compute.manager [req-60357a2f-399f-4918-b8f3-caaf99bca0d2 req-83459f80-b482-4a74-b029-bb47c831244d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] No waiting events found dispatching network-vif-unplugged-e42bce82-3ee1-4272-b032-6bf70412fe94 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:26:08 compute-0 nova_compute[254092]: 2025-11-25 17:26:08.127 254096 WARNING nova.compute.manager [req-60357a2f-399f-4918-b8f3-caaf99bca0d2 req-83459f80-b482-4a74-b029-bb47c831244d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Received unexpected event network-vif-unplugged-e42bce82-3ee1-4272-b032-6bf70412fe94 for instance with vm_state deleted and task_state None.
Nov 25 17:26:08 compute-0 nova_compute[254092]: 2025-11-25 17:26:08.128 254096 DEBUG nova.compute.manager [req-60357a2f-399f-4918-b8f3-caaf99bca0d2 req-83459f80-b482-4a74-b029-bb47c831244d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Received event network-vif-plugged-e42bce82-3ee1-4272-b032-6bf70412fe94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:26:08 compute-0 nova_compute[254092]: 2025-11-25 17:26:08.128 254096 DEBUG oslo_concurrency.lockutils [req-60357a2f-399f-4918-b8f3-caaf99bca0d2 req-83459f80-b482-4a74-b029-bb47c831244d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "88446199-25eb-4303-8df1-334acb721afc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:26:08 compute-0 nova_compute[254092]: 2025-11-25 17:26:08.129 254096 DEBUG oslo_concurrency.lockutils [req-60357a2f-399f-4918-b8f3-caaf99bca0d2 req-83459f80-b482-4a74-b029-bb47c831244d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "88446199-25eb-4303-8df1-334acb721afc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:26:08 compute-0 nova_compute[254092]: 2025-11-25 17:26:08.129 254096 DEBUG oslo_concurrency.lockutils [req-60357a2f-399f-4918-b8f3-caaf99bca0d2 req-83459f80-b482-4a74-b029-bb47c831244d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "88446199-25eb-4303-8df1-334acb721afc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:26:08 compute-0 nova_compute[254092]: 2025-11-25 17:26:08.129 254096 DEBUG nova.compute.manager [req-60357a2f-399f-4918-b8f3-caaf99bca0d2 req-83459f80-b482-4a74-b029-bb47c831244d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] No waiting events found dispatching network-vif-plugged-e42bce82-3ee1-4272-b032-6bf70412fe94 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:26:08 compute-0 nova_compute[254092]: 2025-11-25 17:26:08.130 254096 WARNING nova.compute.manager [req-60357a2f-399f-4918-b8f3-caaf99bca0d2 req-83459f80-b482-4a74-b029-bb47c831244d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 88446199-25eb-4303-8df1-334acb721afc] Received unexpected event network-vif-plugged-e42bce82-3ee1-4272-b032-6bf70412fe94 for instance with vm_state deleted and task_state None.
Nov 25 17:26:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:26:08 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1900996606' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:26:08 compute-0 nova_compute[254092]: 2025-11-25 17:26:08.204 254096 DEBUG oslo_concurrency.processutils [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:26:08 compute-0 nova_compute[254092]: 2025-11-25 17:26:08.214 254096 DEBUG nova.compute.provider_tree [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:26:08 compute-0 nova_compute[254092]: 2025-11-25 17:26:08.239 254096 DEBUG nova.scheduler.client.report [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:26:08 compute-0 nova_compute[254092]: 2025-11-25 17:26:08.266 254096 DEBUG oslo_concurrency.lockutils [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:26:08 compute-0 nova_compute[254092]: 2025-11-25 17:26:08.296 254096 INFO nova.scheduler.client.report [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Deleted allocations for instance 88446199-25eb-4303-8df1-334acb721afc
Nov 25 17:26:08 compute-0 nova_compute[254092]: 2025-11-25 17:26:08.365 254096 DEBUG oslo_concurrency.lockutils [None req-3008d156-a48b-4318-b138-2b3d4769b309 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "88446199-25eb-4303-8df1-334acb721afc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.242s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:26:08 compute-0 ceph-mon[74985]: pgmap v3077: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 364 KiB/s rd, 2.1 MiB/s wr, 112 op/s
Nov 25 17:26:08 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1900996606' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:26:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3078: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 363 KiB/s rd, 2.1 MiB/s wr, 111 op/s
Nov 25 17:26:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:26:09 compute-0 nova_compute[254092]: 2025-11-25 17:26:09.857 254096 DEBUG oslo_concurrency.lockutils [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:26:09 compute-0 nova_compute[254092]: 2025-11-25 17:26:09.857 254096 DEBUG oslo_concurrency.lockutils [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:26:09 compute-0 nova_compute[254092]: 2025-11-25 17:26:09.858 254096 DEBUG oslo_concurrency.lockutils [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:26:09 compute-0 nova_compute[254092]: 2025-11-25 17:26:09.858 254096 DEBUG oslo_concurrency.lockutils [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:26:09 compute-0 nova_compute[254092]: 2025-11-25 17:26:09.858 254096 DEBUG oslo_concurrency.lockutils [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:26:09 compute-0 nova_compute[254092]: 2025-11-25 17:26:09.860 254096 INFO nova.compute.manager [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Terminating instance
Nov 25 17:26:09 compute-0 nova_compute[254092]: 2025-11-25 17:26:09.861 254096 DEBUG nova.compute.manager [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 17:26:09 compute-0 kernel: tap329205f7-aa (unregistering): left promiscuous mode
Nov 25 17:26:09 compute-0 NetworkManager[48891]: <info>  [1764091569.9232] device (tap329205f7-aa): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:26:09 compute-0 nova_compute[254092]: 2025-11-25 17:26:09.966 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:26:09 compute-0 ovn_controller[153477]: 2025-11-25T17:26:09Z|01614|binding|INFO|Releasing lport 329205f7-aac9-4c77-b1eb-b7af04c34038 from this chassis (sb_readonly=0)
Nov 25 17:26:09 compute-0 ovn_controller[153477]: 2025-11-25T17:26:09Z|01615|binding|INFO|Setting lport 329205f7-aac9-4c77-b1eb-b7af04c34038 down in Southbound
Nov 25 17:26:09 compute-0 ovn_controller[153477]: 2025-11-25T17:26:09Z|01616|binding|INFO|Removing iface tap329205f7-aa ovn-installed in OVS
Nov 25 17:26:09 compute-0 nova_compute[254092]: 2025-11-25 17:26:09.969 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:26:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:26:09.973 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:69:2e 10.100.0.13 2001:db8::f816:3eff:fe8d:692e'], port_security=['fa:16:3e:8d:69:2e 10.100.0.13 2001:db8::f816:3eff:fe8d:692e'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28 2001:db8::f816:3eff:fe8d:692e/64', 'neutron:device_id': '03d40f4f-1bf3-4e1d-8844-bae4b32443cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-88c42593-de32-4e23-b7f7-5cac507fe68d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67b7f8e8aac6441f992a182f8d893100', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ae658bb9-0cc7-4dfb-9d61-46336c3a204c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bc6b1ea5-74e3-4908-b1a6-823fc7f5175e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=329205f7-aac9-4c77-b1eb-b7af04c34038) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:26:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:26:09.975 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 329205f7-aac9-4c77-b1eb-b7af04c34038 in datapath 88c42593-de32-4e23-b7f7-5cac507fe68d unbound from our chassis
Nov 25 17:26:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:26:09.977 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 88c42593-de32-4e23-b7f7-5cac507fe68d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 17:26:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:26:09.978 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f78e335b-7213-4047-9bd0-72e3007359a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:26:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:26:09.979 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d namespace which is not needed anymore
Nov 25 17:26:09 compute-0 nova_compute[254092]: 2025-11-25 17:26:09.982 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:26:10 compute-0 systemd[1]: machine-qemu\x2d183\x2dinstance\x2d00000095.scope: Deactivated successfully.
Nov 25 17:26:10 compute-0 systemd[1]: machine-qemu\x2d183\x2dinstance\x2d00000095.scope: Consumed 16.582s CPU time.
Nov 25 17:26:10 compute-0 systemd-machined[216343]: Machine qemu-183-instance-00000095 terminated.
Nov 25 17:26:10 compute-0 kernel: tap329205f7-aa: entered promiscuous mode
Nov 25 17:26:10 compute-0 kernel: tap329205f7-aa (unregistering): left promiscuous mode
Nov 25 17:26:10 compute-0 nova_compute[254092]: 2025-11-25 17:26:10.089 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:26:10 compute-0 nova_compute[254092]: 2025-11-25 17:26:10.099 254096 INFO nova.virt.libvirt.driver [-] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Instance destroyed successfully.
Nov 25 17:26:10 compute-0 nova_compute[254092]: 2025-11-25 17:26:10.100 254096 DEBUG nova.objects.instance [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lazy-loading 'resources' on Instance uuid 03d40f4f-1bf3-4e1d-8844-bae4b32443cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:26:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:26:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:26:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:26:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:26:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:26:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:26:10 compute-0 nova_compute[254092]: 2025-11-25 17:26:10.116 254096 DEBUG nova.virt.libvirt.vif [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:24:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1562740359',display_name='tempest-TestGettingAddress-server-1562740359',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1562740359',id=149,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIGApEL69TBJfF0aCv+AtYr+DxsRqmx1WU3nexhWfLc3znEGVl+tJi462cWrzK0K7v6BMW214uJp8PUOC2Xg8eQ3E5Du3KhcTVIPqLHPMIkpL2RlGG9q2oDwpns6tg2WOA==',key_name='tempest-TestGettingAddress-1324577306',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:25:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67b7f8e8aac6441f992a182f8d893100',ramdisk_id='',reservation_id='r-01dpdy1y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-207202764',owner_user_name='tempest-TestGettingAddress-207202764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:25:08Z,user_data=None,user_id='a647bb22c9a54e5fa9ddfff588126693',uuid=03d40f4f-1bf3-4e1d-8844-bae4b32443cc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "329205f7-aac9-4c77-b1eb-b7af04c34038", "address": "fa:16:3e:8d:69:2e", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:692e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap329205f7-aa", "ovs_interfaceid": "329205f7-aac9-4c77-b1eb-b7af04c34038", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:26:10 compute-0 nova_compute[254092]: 2025-11-25 17:26:10.116 254096 DEBUG nova.network.os_vif_util [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converting VIF {"id": "329205f7-aac9-4c77-b1eb-b7af04c34038", "address": "fa:16:3e:8d:69:2e", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:692e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap329205f7-aa", "ovs_interfaceid": "329205f7-aac9-4c77-b1eb-b7af04c34038", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:26:10 compute-0 nova_compute[254092]: 2025-11-25 17:26:10.118 254096 DEBUG nova.network.os_vif_util [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8d:69:2e,bridge_name='br-int',has_traffic_filtering=True,id=329205f7-aac9-4c77-b1eb-b7af04c34038,network=Network(88c42593-de32-4e23-b7f7-5cac507fe68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap329205f7-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:26:10 compute-0 nova_compute[254092]: 2025-11-25 17:26:10.118 254096 DEBUG os_vif [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8d:69:2e,bridge_name='br-int',has_traffic_filtering=True,id=329205f7-aac9-4c77-b1eb-b7af04c34038,network=Network(88c42593-de32-4e23-b7f7-5cac507fe68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap329205f7-aa') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:26:10 compute-0 nova_compute[254092]: 2025-11-25 17:26:10.121 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:26:10 compute-0 nova_compute[254092]: 2025-11-25 17:26:10.122 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap329205f7-aa, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:26:10 compute-0 nova_compute[254092]: 2025-11-25 17:26:10.124 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:26:10 compute-0 nova_compute[254092]: 2025-11-25 17:26:10.126 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:26:10 compute-0 nova_compute[254092]: 2025-11-25 17:26:10.129 254096 INFO os_vif [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8d:69:2e,bridge_name='br-int',has_traffic_filtering=True,id=329205f7-aac9-4c77-b1eb-b7af04c34038,network=Network(88c42593-de32-4e23-b7f7-5cac507fe68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap329205f7-aa')
Nov 25 17:26:10 compute-0 neutron-haproxy-ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d[422861]: [NOTICE]   (422865) : haproxy version is 2.8.14-c23fe91
Nov 25 17:26:10 compute-0 neutron-haproxy-ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d[422861]: [NOTICE]   (422865) : path to executable is /usr/sbin/haproxy
Nov 25 17:26:10 compute-0 neutron-haproxy-ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d[422861]: [WARNING]  (422865) : Exiting Master process...
Nov 25 17:26:10 compute-0 neutron-haproxy-ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d[422861]: [WARNING]  (422865) : Exiting Master process...
Nov 25 17:26:10 compute-0 neutron-haproxy-ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d[422861]: [ALERT]    (422865) : Current worker (422867) exited with code 143 (Terminated)
Nov 25 17:26:10 compute-0 neutron-haproxy-ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d[422861]: [WARNING]  (422865) : All workers exited. Exiting... (0)
Nov 25 17:26:10 compute-0 systemd[1]: libpod-5bd12a84d113877a5a62a11c95c94e61a229075b07ccac357f07c84b7520dfbb.scope: Deactivated successfully.
Nov 25 17:26:10 compute-0 podman[424533]: 2025-11-25 17:26:10.170522557 +0000 UTC m=+0.056071697 container died 5bd12a84d113877a5a62a11c95c94e61a229075b07ccac357f07c84b7520dfbb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 17:26:10 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5bd12a84d113877a5a62a11c95c94e61a229075b07ccac357f07c84b7520dfbb-userdata-shm.mount: Deactivated successfully.
Nov 25 17:26:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a2aaa9402eb79ede98028373e6a92c83002a08f5e7749bcbd352c3d9be74b73-merged.mount: Deactivated successfully.
Nov 25 17:26:10 compute-0 podman[424533]: 2025-11-25 17:26:10.214811373 +0000 UTC m=+0.100360513 container cleanup 5bd12a84d113877a5a62a11c95c94e61a229075b07ccac357f07c84b7520dfbb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 17:26:10 compute-0 nova_compute[254092]: 2025-11-25 17:26:10.230 254096 DEBUG nova.compute.manager [req-7892449f-ef35-47b4-9a45-50a9ea86c998 req-d23d9af0-e48c-4e0f-8c75-0d8451fbe483 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Received event network-vif-unplugged-329205f7-aac9-4c77-b1eb-b7af04c34038 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:26:10 compute-0 systemd[1]: libpod-conmon-5bd12a84d113877a5a62a11c95c94e61a229075b07ccac357f07c84b7520dfbb.scope: Deactivated successfully.
Nov 25 17:26:10 compute-0 nova_compute[254092]: 2025-11-25 17:26:10.230 254096 DEBUG oslo_concurrency.lockutils [req-7892449f-ef35-47b4-9a45-50a9ea86c998 req-d23d9af0-e48c-4e0f-8c75-0d8451fbe483 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:26:10 compute-0 nova_compute[254092]: 2025-11-25 17:26:10.230 254096 DEBUG oslo_concurrency.lockutils [req-7892449f-ef35-47b4-9a45-50a9ea86c998 req-d23d9af0-e48c-4e0f-8c75-0d8451fbe483 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:26:10 compute-0 nova_compute[254092]: 2025-11-25 17:26:10.231 254096 DEBUG oslo_concurrency.lockutils [req-7892449f-ef35-47b4-9a45-50a9ea86c998 req-d23d9af0-e48c-4e0f-8c75-0d8451fbe483 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:26:10 compute-0 nova_compute[254092]: 2025-11-25 17:26:10.231 254096 DEBUG nova.compute.manager [req-7892449f-ef35-47b4-9a45-50a9ea86c998 req-d23d9af0-e48c-4e0f-8c75-0d8451fbe483 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] No waiting events found dispatching network-vif-unplugged-329205f7-aac9-4c77-b1eb-b7af04c34038 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:26:10 compute-0 nova_compute[254092]: 2025-11-25 17:26:10.231 254096 DEBUG nova.compute.manager [req-7892449f-ef35-47b4-9a45-50a9ea86c998 req-d23d9af0-e48c-4e0f-8c75-0d8451fbe483 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Received event network-vif-unplugged-329205f7-aac9-4c77-b1eb-b7af04c34038 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 17:26:10 compute-0 podman[424579]: 2025-11-25 17:26:10.291592533 +0000 UTC m=+0.048378588 container remove 5bd12a84d113877a5a62a11c95c94e61a229075b07ccac357f07c84b7520dfbb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 17:26:10 compute-0 nova_compute[254092]: 2025-11-25 17:26:10.303 254096 DEBUG nova.compute.manager [req-5f133421-1974-4faf-bd8f-cc0f3e78b33d req-cd9e5151-7b71-4af4-97f8-64bbb2b706e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Received event network-changed-329205f7-aac9-4c77-b1eb-b7af04c34038 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:26:10 compute-0 nova_compute[254092]: 2025-11-25 17:26:10.304 254096 DEBUG nova.compute.manager [req-5f133421-1974-4faf-bd8f-cc0f3e78b33d req-cd9e5151-7b71-4af4-97f8-64bbb2b706e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Refreshing instance network info cache due to event network-changed-329205f7-aac9-4c77-b1eb-b7af04c34038. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:26:10 compute-0 nova_compute[254092]: 2025-11-25 17:26:10.304 254096 DEBUG oslo_concurrency.lockutils [req-5f133421-1974-4faf-bd8f-cc0f3e78b33d req-cd9e5151-7b71-4af4-97f8-64bbb2b706e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-03d40f4f-1bf3-4e1d-8844-bae4b32443cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:26:10 compute-0 nova_compute[254092]: 2025-11-25 17:26:10.304 254096 DEBUG oslo_concurrency.lockutils [req-5f133421-1974-4faf-bd8f-cc0f3e78b33d req-cd9e5151-7b71-4af4-97f8-64bbb2b706e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-03d40f4f-1bf3-4e1d-8844-bae4b32443cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:26:10 compute-0 nova_compute[254092]: 2025-11-25 17:26:10.304 254096 DEBUG nova.network.neutron [req-5f133421-1974-4faf-bd8f-cc0f3e78b33d req-cd9e5151-7b71-4af4-97f8-64bbb2b706e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Refreshing network info cache for port 329205f7-aac9-4c77-b1eb-b7af04c34038 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:26:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:26:10.304 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3c565c72-93b6-4fe4-91bf-01b249e4847b]: (4, ('Tue Nov 25 05:26:10 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d (5bd12a84d113877a5a62a11c95c94e61a229075b07ccac357f07c84b7520dfbb)\n5bd12a84d113877a5a62a11c95c94e61a229075b07ccac357f07c84b7520dfbb\nTue Nov 25 05:26:10 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d (5bd12a84d113877a5a62a11c95c94e61a229075b07ccac357f07c84b7520dfbb)\n5bd12a84d113877a5a62a11c95c94e61a229075b07ccac357f07c84b7520dfbb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:26:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:26:10.307 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[146d4061-c60b-4579-8e57-ed3377b49ddd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:26:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:26:10.308 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88c42593-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:26:10 compute-0 kernel: tap88c42593-d0: left promiscuous mode
Nov 25 17:26:10 compute-0 nova_compute[254092]: 2025-11-25 17:26:10.312 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:26:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:26:10.316 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[26baf4db-aaec-4d32-9939-a76c60718166]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:26:10 compute-0 nova_compute[254092]: 2025-11-25 17:26:10.327 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:26:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:26:10.333 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[de68d10d-694e-4525-98c6-25c7d9242eb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:26:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:26:10.334 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a2443b04-e27f-42b3-8435-6cb4755ab0d8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:26:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:26:10.356 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[f7440897-987f-4d20-864c-5d1a98941836]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 796558, 'reachable_time': 20320, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 424594, 'error': None, 'target': 'ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:26:10 compute-0 systemd[1]: run-netns-ovnmeta\x2d88c42593\x2dde32\x2d4e23\x2db7f7\x2d5cac507fe68d.mount: Deactivated successfully.
Nov 25 17:26:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:26:10.361 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-88c42593-de32-4e23-b7f7-5cac507fe68d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 17:26:10 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:26:10.362 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[ed5abdcd-92d5-4fce-abdf-ffc234a1ca74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:26:10 compute-0 ceph-mon[74985]: pgmap v3078: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 363 KiB/s rd, 2.1 MiB/s wr, 111 op/s
Nov 25 17:26:10 compute-0 nova_compute[254092]: 2025-11-25 17:26:10.558 254096 INFO nova.virt.libvirt.driver [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Deleting instance files /var/lib/nova/instances/03d40f4f-1bf3-4e1d-8844-bae4b32443cc_del
Nov 25 17:26:10 compute-0 nova_compute[254092]: 2025-11-25 17:26:10.559 254096 INFO nova.virt.libvirt.driver [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Deletion of /var/lib/nova/instances/03d40f4f-1bf3-4e1d-8844-bae4b32443cc_del complete
Nov 25 17:26:10 compute-0 nova_compute[254092]: 2025-11-25 17:26:10.650 254096 INFO nova.compute.manager [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Took 0.79 seconds to destroy the instance on the hypervisor.
Nov 25 17:26:10 compute-0 nova_compute[254092]: 2025-11-25 17:26:10.651 254096 DEBUG oslo.service.loopingcall [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 17:26:10 compute-0 nova_compute[254092]: 2025-11-25 17:26:10.652 254096 DEBUG nova.compute.manager [-] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 17:26:10 compute-0 nova_compute[254092]: 2025-11-25 17:26:10.652 254096 DEBUG nova.network.neutron [-] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 17:26:11 compute-0 nova_compute[254092]: 2025-11-25 17:26:11.192 254096 DEBUG nova.network.neutron [-] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:26:11 compute-0 nova_compute[254092]: 2025-11-25 17:26:11.208 254096 INFO nova.compute.manager [-] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Took 0.56 seconds to deallocate network for instance.
Nov 25 17:26:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3079: 321 pgs: 321 active+clean; 99 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.2 MiB/s wr, 154 op/s
Nov 25 17:26:11 compute-0 nova_compute[254092]: 2025-11-25 17:26:11.256 254096 DEBUG oslo_concurrency.lockutils [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:26:11 compute-0 nova_compute[254092]: 2025-11-25 17:26:11.257 254096 DEBUG oslo_concurrency.lockutils [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:26:11 compute-0 nova_compute[254092]: 2025-11-25 17:26:11.309 254096 DEBUG oslo_concurrency.processutils [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:26:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:26:11 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3583867834' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:26:11 compute-0 nova_compute[254092]: 2025-11-25 17:26:11.801 254096 DEBUG oslo_concurrency.processutils [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:26:11 compute-0 nova_compute[254092]: 2025-11-25 17:26:11.810 254096 DEBUG nova.compute.provider_tree [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:26:11 compute-0 nova_compute[254092]: 2025-11-25 17:26:11.813 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:26:11 compute-0 nova_compute[254092]: 2025-11-25 17:26:11.834 254096 DEBUG nova.scheduler.client.report [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:26:11 compute-0 nova_compute[254092]: 2025-11-25 17:26:11.868 254096 DEBUG oslo_concurrency.lockutils [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:26:11 compute-0 nova_compute[254092]: 2025-11-25 17:26:11.905 254096 INFO nova.scheduler.client.report [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Deleted allocations for instance 03d40f4f-1bf3-4e1d-8844-bae4b32443cc
Nov 25 17:26:11 compute-0 nova_compute[254092]: 2025-11-25 17:26:11.973 254096 DEBUG oslo_concurrency.lockutils [None req-3f0a5e0f-3683-4eb7-bfd2-e440cf243034 a647bb22c9a54e5fa9ddfff588126693 67b7f8e8aac6441f992a182f8d893100 - - default default] Lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.116s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:26:12 compute-0 nova_compute[254092]: 2025-11-25 17:26:12.203 254096 DEBUG nova.network.neutron [req-5f133421-1974-4faf-bd8f-cc0f3e78b33d req-cd9e5151-7b71-4af4-97f8-64bbb2b706e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Updated VIF entry in instance network info cache for port 329205f7-aac9-4c77-b1eb-b7af04c34038. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:26:12 compute-0 nova_compute[254092]: 2025-11-25 17:26:12.205 254096 DEBUG nova.network.neutron [req-5f133421-1974-4faf-bd8f-cc0f3e78b33d req-cd9e5151-7b71-4af4-97f8-64bbb2b706e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Updating instance_info_cache with network_info: [{"id": "329205f7-aac9-4c77-b1eb-b7af04c34038", "address": "fa:16:3e:8d:69:2e", "network": {"id": "88c42593-de32-4e23-b7f7-5cac507fe68d", "bridge": "br-int", "label": "tempest-network-smoke--1758391990", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:692e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "67b7f8e8aac6441f992a182f8d893100", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap329205f7-aa", "ovs_interfaceid": "329205f7-aac9-4c77-b1eb-b7af04c34038", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:26:12 compute-0 nova_compute[254092]: 2025-11-25 17:26:12.237 254096 DEBUG oslo_concurrency.lockutils [req-5f133421-1974-4faf-bd8f-cc0f3e78b33d req-cd9e5151-7b71-4af4-97f8-64bbb2b706e7 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-03d40f4f-1bf3-4e1d-8844-bae4b32443cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:26:12 compute-0 nova_compute[254092]: 2025-11-25 17:26:12.327 254096 DEBUG nova.compute.manager [req-afa9b9c2-69b5-4ece-bf69-bfdb02bf4ded req-b6579bc0-c7a9-40af-a826-514e408200f9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Received event network-vif-plugged-329205f7-aac9-4c77-b1eb-b7af04c34038 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:26:12 compute-0 nova_compute[254092]: 2025-11-25 17:26:12.328 254096 DEBUG oslo_concurrency.lockutils [req-afa9b9c2-69b5-4ece-bf69-bfdb02bf4ded req-b6579bc0-c7a9-40af-a826-514e408200f9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:26:12 compute-0 nova_compute[254092]: 2025-11-25 17:26:12.329 254096 DEBUG oslo_concurrency.lockutils [req-afa9b9c2-69b5-4ece-bf69-bfdb02bf4ded req-b6579bc0-c7a9-40af-a826-514e408200f9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:26:12 compute-0 nova_compute[254092]: 2025-11-25 17:26:12.330 254096 DEBUG oslo_concurrency.lockutils [req-afa9b9c2-69b5-4ece-bf69-bfdb02bf4ded req-b6579bc0-c7a9-40af-a826-514e408200f9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "03d40f4f-1bf3-4e1d-8844-bae4b32443cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:26:12 compute-0 nova_compute[254092]: 2025-11-25 17:26:12.330 254096 DEBUG nova.compute.manager [req-afa9b9c2-69b5-4ece-bf69-bfdb02bf4ded req-b6579bc0-c7a9-40af-a826-514e408200f9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] No waiting events found dispatching network-vif-plugged-329205f7-aac9-4c77-b1eb-b7af04c34038 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:26:12 compute-0 nova_compute[254092]: 2025-11-25 17:26:12.331 254096 WARNING nova.compute.manager [req-afa9b9c2-69b5-4ece-bf69-bfdb02bf4ded req-b6579bc0-c7a9-40af-a826-514e408200f9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Received unexpected event network-vif-plugged-329205f7-aac9-4c77-b1eb-b7af04c34038 for instance with vm_state deleted and task_state None.
Nov 25 17:26:12 compute-0 nova_compute[254092]: 2025-11-25 17:26:12.331 254096 DEBUG nova.compute.manager [req-afa9b9c2-69b5-4ece-bf69-bfdb02bf4ded req-b6579bc0-c7a9-40af-a826-514e408200f9 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Received event network-vif-deleted-329205f7-aac9-4c77-b1eb-b7af04c34038 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:26:12 compute-0 ceph-mon[74985]: pgmap v3079: 321 pgs: 321 active+clean; 99 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.2 MiB/s wr, 154 op/s
Nov 25 17:26:12 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3583867834' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:26:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3080: 321 pgs: 321 active+clean; 99 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 24 KiB/s wr, 91 op/s
Nov 25 17:26:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:26:13.665 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:26:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:26:13.665 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:26:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:26:13.666 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:26:14 compute-0 ceph-mon[74985]: pgmap v3080: 321 pgs: 321 active+clean; 99 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 24 KiB/s wr, 91 op/s
Nov 25 17:26:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:26:15 compute-0 nova_compute[254092]: 2025-11-25 17:26:15.128 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:26:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3081: 321 pgs: 321 active+clean; 41 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 84 KiB/s rd, 25 KiB/s wr, 118 op/s
Nov 25 17:26:16 compute-0 nova_compute[254092]: 2025-11-25 17:26:16.027 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:26:16 compute-0 nova_compute[254092]: 2025-11-25 17:26:16.122 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:26:16 compute-0 ceph-mon[74985]: pgmap v3081: 321 pgs: 321 active+clean; 41 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 84 KiB/s rd, 25 KiB/s wr, 118 op/s
Nov 25 17:26:16 compute-0 nova_compute[254092]: 2025-11-25 17:26:16.855 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:26:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3082: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 23 KiB/s wr, 88 op/s
Nov 25 17:26:17 compute-0 podman[424619]: 2025-11-25 17:26:17.70132793 +0000 UTC m=+0.104299780 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 17:26:17 compute-0 podman[424620]: 2025-11-25 17:26:17.717106349 +0000 UTC m=+0.115923747 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 25 17:26:17 compute-0 podman[424621]: 2025-11-25 17:26:17.737866244 +0000 UTC m=+0.135392196 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 17:26:18 compute-0 ceph-mon[74985]: pgmap v3082: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 23 KiB/s wr, 88 op/s
Nov 25 17:26:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3083: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 12 KiB/s wr, 69 op/s
Nov 25 17:26:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:26:20 compute-0 nova_compute[254092]: 2025-11-25 17:26:20.133 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:26:20 compute-0 ceph-mon[74985]: pgmap v3083: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 12 KiB/s wr, 69 op/s
Nov 25 17:26:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3084: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 12 KiB/s wr, 69 op/s
Nov 25 17:26:21 compute-0 nova_compute[254092]: 2025-11-25 17:26:21.390 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764091566.388775, 88446199-25eb-4303-8df1-334acb721afc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:26:21 compute-0 nova_compute[254092]: 2025-11-25 17:26:21.390 254096 INFO nova.compute.manager [-] [instance: 88446199-25eb-4303-8df1-334acb721afc] VM Stopped (Lifecycle Event)
Nov 25 17:26:21 compute-0 nova_compute[254092]: 2025-11-25 17:26:21.415 254096 DEBUG nova.compute.manager [None req-e867a435-bd72-4c7a-9452-8adfc35fcb1b - - - - - -] [instance: 88446199-25eb-4303-8df1-334acb721afc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:26:21 compute-0 nova_compute[254092]: 2025-11-25 17:26:21.858 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:26:22 compute-0 ceph-mon[74985]: pgmap v3084: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 12 KiB/s wr, 69 op/s
Nov 25 17:26:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3085: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Nov 25 17:26:24 compute-0 ceph-mon[74985]: pgmap v3085: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Nov 25 17:26:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:26:24 compute-0 sudo[424680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:26:24 compute-0 sudo[424680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:26:24 compute-0 sudo[424680]: pam_unix(sudo:session): session closed for user root
Nov 25 17:26:24 compute-0 sudo[424705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:26:24 compute-0 sudo[424705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:26:24 compute-0 sudo[424705]: pam_unix(sudo:session): session closed for user root
Nov 25 17:26:24 compute-0 sudo[424730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:26:24 compute-0 sudo[424730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:26:24 compute-0 sudo[424730]: pam_unix(sudo:session): session closed for user root
Nov 25 17:26:25 compute-0 sudo[424755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 25 17:26:25 compute-0 sudo[424755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:26:25 compute-0 nova_compute[254092]: 2025-11-25 17:26:25.097 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764091570.096119, 03d40f4f-1bf3-4e1d-8844-bae4b32443cc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:26:25 compute-0 nova_compute[254092]: 2025-11-25 17:26:25.098 254096 INFO nova.compute.manager [-] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] VM Stopped (Lifecycle Event)
Nov 25 17:26:25 compute-0 nova_compute[254092]: 2025-11-25 17:26:25.118 254096 DEBUG nova.compute.manager [None req-b9f7e2fe-6e4a-4058-8c02-634a77cf18fd - - - - - -] [instance: 03d40f4f-1bf3-4e1d-8844-bae4b32443cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:26:25 compute-0 nova_compute[254092]: 2025-11-25 17:26:25.136 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:26:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3086: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Nov 25 17:26:25 compute-0 sudo[424755]: pam_unix(sudo:session): session closed for user root
Nov 25 17:26:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:26:25 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:26:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:26:25 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:26:25 compute-0 sudo[424800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:26:25 compute-0 sudo[424800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:26:25 compute-0 sudo[424800]: pam_unix(sudo:session): session closed for user root
Nov 25 17:26:25 compute-0 sudo[424825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:26:25 compute-0 sudo[424825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:26:25 compute-0 sudo[424825]: pam_unix(sudo:session): session closed for user root
Nov 25 17:26:25 compute-0 sudo[424850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:26:25 compute-0 sudo[424850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:26:25 compute-0 sudo[424850]: pam_unix(sudo:session): session closed for user root
Nov 25 17:26:25 compute-0 sudo[424875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:26:25 compute-0 sudo[424875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:26:26 compute-0 sudo[424875]: pam_unix(sudo:session): session closed for user root
Nov 25 17:26:26 compute-0 sudo[424931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:26:26 compute-0 sudo[424931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:26:26 compute-0 sudo[424931]: pam_unix(sudo:session): session closed for user root
Nov 25 17:26:26 compute-0 ceph-mon[74985]: pgmap v3086: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Nov 25 17:26:26 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:26:26 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:26:26 compute-0 sudo[424956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:26:26 compute-0 sudo[424956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:26:26 compute-0 sudo[424956]: pam_unix(sudo:session): session closed for user root
Nov 25 17:26:26 compute-0 sudo[424981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:26:26 compute-0 sudo[424981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:26:26 compute-0 sudo[424981]: pam_unix(sudo:session): session closed for user root
Nov 25 17:26:26 compute-0 sudo[425006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- inventory --format=json-pretty --filter-for-batch
Nov 25 17:26:26 compute-0 sudo[425006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:26:26 compute-0 nova_compute[254092]: 2025-11-25 17:26:26.861 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:26:27 compute-0 podman[425070]: 2025-11-25 17:26:27.039674354 +0000 UTC m=+0.064315552 container create 35eaa49a9671d1ba87cf366b153f2c64aad6a27a5c18221d4905d8315abbf00f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_noyce, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:26:27 compute-0 systemd[1]: Started libpod-conmon-35eaa49a9671d1ba87cf366b153f2c64aad6a27a5c18221d4905d8315abbf00f.scope.
Nov 25 17:26:27 compute-0 podman[425070]: 2025-11-25 17:26:27.015471075 +0000 UTC m=+0.040112273 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:26:27 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:26:27 compute-0 podman[425070]: 2025-11-25 17:26:27.152207437 +0000 UTC m=+0.176848635 container init 35eaa49a9671d1ba87cf366b153f2c64aad6a27a5c18221d4905d8315abbf00f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_noyce, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 17:26:27 compute-0 podman[425070]: 2025-11-25 17:26:27.162439845 +0000 UTC m=+0.187081003 container start 35eaa49a9671d1ba87cf366b153f2c64aad6a27a5c18221d4905d8315abbf00f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_noyce, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:26:27 compute-0 podman[425070]: 2025-11-25 17:26:27.166532347 +0000 UTC m=+0.191173505 container attach 35eaa49a9671d1ba87cf366b153f2c64aad6a27a5c18221d4905d8315abbf00f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_noyce, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:26:27 compute-0 systemd[1]: libpod-35eaa49a9671d1ba87cf366b153f2c64aad6a27a5c18221d4905d8315abbf00f.scope: Deactivated successfully.
Nov 25 17:26:27 compute-0 modest_noyce[425086]: 167 167
Nov 25 17:26:27 compute-0 conmon[425086]: conmon 35eaa49a9671d1ba87cf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-35eaa49a9671d1ba87cf366b153f2c64aad6a27a5c18221d4905d8315abbf00f.scope/container/memory.events
Nov 25 17:26:27 compute-0 podman[425070]: 2025-11-25 17:26:27.175560572 +0000 UTC m=+0.200201730 container died 35eaa49a9671d1ba87cf366b153f2c64aad6a27a5c18221d4905d8315abbf00f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_noyce, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Nov 25 17:26:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1702c71715a6ce3cd8fee68b53ca713d20ce5ae5352ffcc4984dea94ee2edca-merged.mount: Deactivated successfully.
Nov 25 17:26:27 compute-0 podman[425070]: 2025-11-25 17:26:27.211750588 +0000 UTC m=+0.236391766 container remove 35eaa49a9671d1ba87cf366b153f2c64aad6a27a5c18221d4905d8315abbf00f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 17:26:27 compute-0 systemd[1]: libpod-conmon-35eaa49a9671d1ba87cf366b153f2c64aad6a27a5c18221d4905d8315abbf00f.scope: Deactivated successfully.
Nov 25 17:26:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3087: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:26:27 compute-0 podman[425109]: 2025-11-25 17:26:27.395615123 +0000 UTC m=+0.053817667 container create 61e8ad7d0ec1ecb7ae854c2bb00178535dfabb01be8346d426581e1aa61446db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:26:27 compute-0 systemd[1]: Started libpod-conmon-61e8ad7d0ec1ecb7ae854c2bb00178535dfabb01be8346d426581e1aa61446db.scope.
Nov 25 17:26:27 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:26:27 compute-0 podman[425109]: 2025-11-25 17:26:27.367259411 +0000 UTC m=+0.025461935 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:26:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85d46575a9c15a5792d3b8956fb6166bc9af88cc0a3fb2ac57f0d85fa045d9e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:26:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85d46575a9c15a5792d3b8956fb6166bc9af88cc0a3fb2ac57f0d85fa045d9e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:26:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85d46575a9c15a5792d3b8956fb6166bc9af88cc0a3fb2ac57f0d85fa045d9e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:26:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85d46575a9c15a5792d3b8956fb6166bc9af88cc0a3fb2ac57f0d85fa045d9e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:26:27 compute-0 podman[425109]: 2025-11-25 17:26:27.48995457 +0000 UTC m=+0.148157174 container init 61e8ad7d0ec1ecb7ae854c2bb00178535dfabb01be8346d426581e1aa61446db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 17:26:27 compute-0 podman[425109]: 2025-11-25 17:26:27.501126504 +0000 UTC m=+0.159329018 container start 61e8ad7d0ec1ecb7ae854c2bb00178535dfabb01be8346d426581e1aa61446db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:26:27 compute-0 podman[425109]: 2025-11-25 17:26:27.50539221 +0000 UTC m=+0.163594744 container attach 61e8ad7d0ec1ecb7ae854c2bb00178535dfabb01be8346d426581e1aa61446db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:26:28 compute-0 ceph-mon[74985]: pgmap v3087: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:26:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3088: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:26:29 compute-0 magical_mestorf[425125]: [
Nov 25 17:26:29 compute-0 magical_mestorf[425125]:     {
Nov 25 17:26:29 compute-0 magical_mestorf[425125]:         "available": false,
Nov 25 17:26:29 compute-0 magical_mestorf[425125]:         "ceph_device": false,
Nov 25 17:26:29 compute-0 magical_mestorf[425125]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 25 17:26:29 compute-0 magical_mestorf[425125]:         "lsm_data": {},
Nov 25 17:26:29 compute-0 magical_mestorf[425125]:         "lvs": [],
Nov 25 17:26:29 compute-0 magical_mestorf[425125]:         "path": "/dev/sr0",
Nov 25 17:26:29 compute-0 magical_mestorf[425125]:         "rejected_reasons": [
Nov 25 17:26:29 compute-0 magical_mestorf[425125]:             "Has a FileSystem",
Nov 25 17:26:29 compute-0 magical_mestorf[425125]:             "Insufficient space (<5GB)"
Nov 25 17:26:29 compute-0 magical_mestorf[425125]:         ],
Nov 25 17:26:29 compute-0 magical_mestorf[425125]:         "sys_api": {
Nov 25 17:26:29 compute-0 magical_mestorf[425125]:             "actuators": null,
Nov 25 17:26:29 compute-0 magical_mestorf[425125]:             "device_nodes": "sr0",
Nov 25 17:26:29 compute-0 magical_mestorf[425125]:             "devname": "sr0",
Nov 25 17:26:29 compute-0 magical_mestorf[425125]:             "human_readable_size": "482.00 KB",
Nov 25 17:26:29 compute-0 magical_mestorf[425125]:             "id_bus": "ata",
Nov 25 17:26:29 compute-0 magical_mestorf[425125]:             "model": "QEMU DVD-ROM",
Nov 25 17:26:29 compute-0 magical_mestorf[425125]:             "nr_requests": "2",
Nov 25 17:26:29 compute-0 magical_mestorf[425125]:             "parent": "/dev/sr0",
Nov 25 17:26:29 compute-0 magical_mestorf[425125]:             "partitions": {},
Nov 25 17:26:29 compute-0 magical_mestorf[425125]:             "path": "/dev/sr0",
Nov 25 17:26:29 compute-0 magical_mestorf[425125]:             "removable": "1",
Nov 25 17:26:29 compute-0 magical_mestorf[425125]:             "rev": "2.5+",
Nov 25 17:26:29 compute-0 magical_mestorf[425125]:             "ro": "0",
Nov 25 17:26:29 compute-0 magical_mestorf[425125]:             "rotational": "1",
Nov 25 17:26:29 compute-0 magical_mestorf[425125]:             "sas_address": "",
Nov 25 17:26:29 compute-0 magical_mestorf[425125]:             "sas_device_handle": "",
Nov 25 17:26:29 compute-0 magical_mestorf[425125]:             "scheduler_mode": "mq-deadline",
Nov 25 17:26:29 compute-0 magical_mestorf[425125]:             "sectors": 0,
Nov 25 17:26:29 compute-0 magical_mestorf[425125]:             "sectorsize": "2048",
Nov 25 17:26:29 compute-0 magical_mestorf[425125]:             "size": 493568.0,
Nov 25 17:26:29 compute-0 magical_mestorf[425125]:             "support_discard": "2048",
Nov 25 17:26:29 compute-0 magical_mestorf[425125]:             "type": "disk",
Nov 25 17:26:29 compute-0 magical_mestorf[425125]:             "vendor": "QEMU"
Nov 25 17:26:29 compute-0 magical_mestorf[425125]:         }
Nov 25 17:26:29 compute-0 magical_mestorf[425125]:     }
Nov 25 17:26:29 compute-0 magical_mestorf[425125]: ]
Nov 25 17:26:29 compute-0 systemd[1]: libpod-61e8ad7d0ec1ecb7ae854c2bb00178535dfabb01be8346d426581e1aa61446db.scope: Deactivated successfully.
Nov 25 17:26:29 compute-0 podman[425109]: 2025-11-25 17:26:29.435789065 +0000 UTC m=+2.093991569 container died 61e8ad7d0ec1ecb7ae854c2bb00178535dfabb01be8346d426581e1aa61446db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mestorf, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Nov 25 17:26:29 compute-0 systemd[1]: libpod-61e8ad7d0ec1ecb7ae854c2bb00178535dfabb01be8346d426581e1aa61446db.scope: Consumed 2.017s CPU time.
Nov 25 17:26:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-85d46575a9c15a5792d3b8956fb6166bc9af88cc0a3fb2ac57f0d85fa045d9e2-merged.mount: Deactivated successfully.
Nov 25 17:26:29 compute-0 podman[425109]: 2025-11-25 17:26:29.49659923 +0000 UTC m=+2.154801724 container remove 61e8ad7d0ec1ecb7ae854c2bb00178535dfabb01be8346d426581e1aa61446db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:26:29 compute-0 systemd[1]: libpod-conmon-61e8ad7d0ec1ecb7ae854c2bb00178535dfabb01be8346d426581e1aa61446db.scope: Deactivated successfully.
Nov 25 17:26:29 compute-0 sudo[425006]: pam_unix(sudo:session): session closed for user root
Nov 25 17:26:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:26:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:26:29 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:26:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:26:29 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:26:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:26:29 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:26:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:26:29 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:26:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:26:29 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:26:29 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev b5eb80e1-e79c-43e7-99d2-9f0d15d969bc does not exist
Nov 25 17:26:29 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 36232c1e-7870-487c-9b46-8a89dff5876c does not exist
Nov 25 17:26:29 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev cbfed74a-6692-48fe-b73d-cca45bd837d9 does not exist
Nov 25 17:26:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:26:29 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:26:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:26:29 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:26:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:26:29 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:26:29 compute-0 sudo[427305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:26:29 compute-0 sudo[427305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:26:29 compute-0 sudo[427305]: pam_unix(sudo:session): session closed for user root
Nov 25 17:26:29 compute-0 sudo[427330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:26:29 compute-0 sudo[427330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:26:29 compute-0 sudo[427330]: pam_unix(sudo:session): session closed for user root
Nov 25 17:26:29 compute-0 sudo[427355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:26:29 compute-0 sudo[427355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:26:29 compute-0 sudo[427355]: pam_unix(sudo:session): session closed for user root
Nov 25 17:26:29 compute-0 sudo[427380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:26:29 compute-0 sudo[427380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:26:30 compute-0 nova_compute[254092]: 2025-11-25 17:26:30.140 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:26:30 compute-0 podman[427445]: 2025-11-25 17:26:30.263579976 +0000 UTC m=+0.050214527 container create cceb7066ed13ee1eae3fc0c189829a421c27194e6d196f99426d962c9d3deb3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 17:26:30 compute-0 systemd[1]: Started libpod-conmon-cceb7066ed13ee1eae3fc0c189829a421c27194e6d196f99426d962c9d3deb3a.scope.
Nov 25 17:26:30 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:26:30 compute-0 podman[427445]: 2025-11-25 17:26:30.24166157 +0000 UTC m=+0.028296131 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:26:30 compute-0 podman[427445]: 2025-11-25 17:26:30.34491545 +0000 UTC m=+0.131549991 container init cceb7066ed13ee1eae3fc0c189829a421c27194e6d196f99426d962c9d3deb3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:26:30 compute-0 podman[427445]: 2025-11-25 17:26:30.353400751 +0000 UTC m=+0.140035252 container start cceb7066ed13ee1eae3fc0c189829a421c27194e6d196f99426d962c9d3deb3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:26:30 compute-0 podman[427445]: 2025-11-25 17:26:30.357253716 +0000 UTC m=+0.143888277 container attach cceb7066ed13ee1eae3fc0c189829a421c27194e6d196f99426d962c9d3deb3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kowalevski, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 17:26:30 compute-0 distracted_kowalevski[427461]: 167 167
Nov 25 17:26:30 compute-0 systemd[1]: libpod-cceb7066ed13ee1eae3fc0c189829a421c27194e6d196f99426d962c9d3deb3a.scope: Deactivated successfully.
Nov 25 17:26:30 compute-0 podman[427445]: 2025-11-25 17:26:30.361325107 +0000 UTC m=+0.147959648 container died cceb7066ed13ee1eae3fc0c189829a421c27194e6d196f99426d962c9d3deb3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kowalevski, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 17:26:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-a826506abb3e61211dd27a28915cd8c6ad4d0d0ac711234302088d21bd36154c-merged.mount: Deactivated successfully.
Nov 25 17:26:30 compute-0 podman[427445]: 2025-11-25 17:26:30.402887718 +0000 UTC m=+0.189522249 container remove cceb7066ed13ee1eae3fc0c189829a421c27194e6d196f99426d962c9d3deb3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kowalevski, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 17:26:30 compute-0 systemd[1]: libpod-conmon-cceb7066ed13ee1eae3fc0c189829a421c27194e6d196f99426d962c9d3deb3a.scope: Deactivated successfully.
Nov 25 17:26:30 compute-0 ceph-mon[74985]: pgmap v3088: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:26:30 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:26:30 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:26:30 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:26:30 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:26:30 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:26:30 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:26:30 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:26:30 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:26:30 compute-0 podman[427486]: 2025-11-25 17:26:30.587561815 +0000 UTC m=+0.055698527 container create 4c4786442844c6c8b1030709a721ca1d18c86589f09907e72cefcdbe10d1ed58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 17:26:30 compute-0 systemd[1]: Started libpod-conmon-4c4786442844c6c8b1030709a721ca1d18c86589f09907e72cefcdbe10d1ed58.scope.
Nov 25 17:26:30 compute-0 podman[427486]: 2025-11-25 17:26:30.559372738 +0000 UTC m=+0.027509530 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:26:30 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:26:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d84fcc01ecafdab63ca18f595bdb7b973143daa2aed3dee1451fec5ba2ae661e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:26:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d84fcc01ecafdab63ca18f595bdb7b973143daa2aed3dee1451fec5ba2ae661e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:26:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d84fcc01ecafdab63ca18f595bdb7b973143daa2aed3dee1451fec5ba2ae661e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:26:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d84fcc01ecafdab63ca18f595bdb7b973143daa2aed3dee1451fec5ba2ae661e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:26:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d84fcc01ecafdab63ca18f595bdb7b973143daa2aed3dee1451fec5ba2ae661e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:26:30 compute-0 podman[427486]: 2025-11-25 17:26:30.689122079 +0000 UTC m=+0.157258841 container init 4c4786442844c6c8b1030709a721ca1d18c86589f09907e72cefcdbe10d1ed58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_liskov, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Nov 25 17:26:30 compute-0 podman[427486]: 2025-11-25 17:26:30.699658826 +0000 UTC m=+0.167795548 container start 4c4786442844c6c8b1030709a721ca1d18c86589f09907e72cefcdbe10d1ed58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:26:30 compute-0 podman[427486]: 2025-11-25 17:26:30.703691326 +0000 UTC m=+0.171828128 container attach 4c4786442844c6c8b1030709a721ca1d18c86589f09907e72cefcdbe10d1ed58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_liskov, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 17:26:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3089: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:26:31 compute-0 eloquent_liskov[427502]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:26:31 compute-0 eloquent_liskov[427502]: --> relative data size: 1.0
Nov 25 17:26:31 compute-0 eloquent_liskov[427502]: --> All data devices are unavailable
Nov 25 17:26:31 compute-0 nova_compute[254092]: 2025-11-25 17:26:31.863 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:26:31 compute-0 systemd[1]: libpod-4c4786442844c6c8b1030709a721ca1d18c86589f09907e72cefcdbe10d1ed58.scope: Deactivated successfully.
Nov 25 17:26:31 compute-0 systemd[1]: libpod-4c4786442844c6c8b1030709a721ca1d18c86589f09907e72cefcdbe10d1ed58.scope: Consumed 1.141s CPU time.
Nov 25 17:26:31 compute-0 conmon[427502]: conmon 4c4786442844c6c8b103 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4c4786442844c6c8b1030709a721ca1d18c86589f09907e72cefcdbe10d1ed58.scope/container/memory.events
Nov 25 17:26:31 compute-0 podman[427486]: 2025-11-25 17:26:31.897818939 +0000 UTC m=+1.365955691 container died 4c4786442844c6c8b1030709a721ca1d18c86589f09907e72cefcdbe10d1ed58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_liskov, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:26:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-d84fcc01ecafdab63ca18f595bdb7b973143daa2aed3dee1451fec5ba2ae661e-merged.mount: Deactivated successfully.
Nov 25 17:26:31 compute-0 podman[427486]: 2025-11-25 17:26:31.975682619 +0000 UTC m=+1.443819341 container remove 4c4786442844c6c8b1030709a721ca1d18c86589f09907e72cefcdbe10d1ed58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_liskov, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 17:26:31 compute-0 systemd[1]: libpod-conmon-4c4786442844c6c8b1030709a721ca1d18c86589f09907e72cefcdbe10d1ed58.scope: Deactivated successfully.
Nov 25 17:26:32 compute-0 sudo[427380]: pam_unix(sudo:session): session closed for user root
Nov 25 17:26:32 compute-0 sudo[427542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:26:32 compute-0 sudo[427542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:26:32 compute-0 sudo[427542]: pam_unix(sudo:session): session closed for user root
Nov 25 17:26:32 compute-0 sudo[427567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:26:32 compute-0 sudo[427567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:26:32 compute-0 sudo[427567]: pam_unix(sudo:session): session closed for user root
Nov 25 17:26:32 compute-0 sudo[427592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:26:32 compute-0 sudo[427592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:26:32 compute-0 sudo[427592]: pam_unix(sudo:session): session closed for user root
Nov 25 17:26:32 compute-0 sudo[427617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:26:32 compute-0 sudo[427617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:26:32 compute-0 ceph-mon[74985]: pgmap v3089: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:26:32 compute-0 podman[427680]: 2025-11-25 17:26:32.874866164 +0000 UTC m=+0.051991297 container create d2f7e308d2549ed9de139fd46596bedcbf132148153c9132cfa1aed0a1f30bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:26:32 compute-0 systemd[1]: Started libpod-conmon-d2f7e308d2549ed9de139fd46596bedcbf132148153c9132cfa1aed0a1f30bb0.scope.
Nov 25 17:26:32 compute-0 podman[427680]: 2025-11-25 17:26:32.856055672 +0000 UTC m=+0.033180825 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:26:32 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:26:32 compute-0 podman[427680]: 2025-11-25 17:26:32.973821237 +0000 UTC m=+0.150946360 container init d2f7e308d2549ed9de139fd46596bedcbf132148153c9132cfa1aed0a1f30bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 17:26:32 compute-0 podman[427680]: 2025-11-25 17:26:32.984919369 +0000 UTC m=+0.162044492 container start d2f7e308d2549ed9de139fd46596bedcbf132148153c9132cfa1aed0a1f30bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_nobel, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:26:32 compute-0 podman[427680]: 2025-11-25 17:26:32.988935348 +0000 UTC m=+0.166060511 container attach d2f7e308d2549ed9de139fd46596bedcbf132148153c9132cfa1aed0a1f30bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 17:26:32 compute-0 practical_nobel[427696]: 167 167
Nov 25 17:26:32 compute-0 systemd[1]: libpod-d2f7e308d2549ed9de139fd46596bedcbf132148153c9132cfa1aed0a1f30bb0.scope: Deactivated successfully.
Nov 25 17:26:32 compute-0 podman[427680]: 2025-11-25 17:26:32.995269101 +0000 UTC m=+0.172394254 container died d2f7e308d2549ed9de139fd46596bedcbf132148153c9132cfa1aed0a1f30bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_nobel, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 17:26:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc25570abb5f6776ce301496b22c21939a3f4658a09c2f32922f2c7713bbe727-merged.mount: Deactivated successfully.
Nov 25 17:26:33 compute-0 podman[427680]: 2025-11-25 17:26:33.041786517 +0000 UTC m=+0.218911640 container remove d2f7e308d2549ed9de139fd46596bedcbf132148153c9132cfa1aed0a1f30bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_nobel, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 17:26:33 compute-0 systemd[1]: libpod-conmon-d2f7e308d2549ed9de139fd46596bedcbf132148153c9132cfa1aed0a1f30bb0.scope: Deactivated successfully.
Nov 25 17:26:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3090: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:26:33 compute-0 podman[427718]: 2025-11-25 17:26:33.263779449 +0000 UTC m=+0.062523293 container create 97153d1e57cf7a86a34ffcb76b2093dcff7f391e85fcc2ef456ec6a1154fc3dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_snyder, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 17:26:33 compute-0 systemd[1]: Started libpod-conmon-97153d1e57cf7a86a34ffcb76b2093dcff7f391e85fcc2ef456ec6a1154fc3dd.scope.
Nov 25 17:26:33 compute-0 podman[427718]: 2025-11-25 17:26:33.233577978 +0000 UTC m=+0.032321872 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:26:33 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:26:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b17e113c03c25a4a78bbaaee19eaae9cd2a97c645d9b8e6fdfedfcbbe76967b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:26:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b17e113c03c25a4a78bbaaee19eaae9cd2a97c645d9b8e6fdfedfcbbe76967b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:26:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b17e113c03c25a4a78bbaaee19eaae9cd2a97c645d9b8e6fdfedfcbbe76967b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:26:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b17e113c03c25a4a78bbaaee19eaae9cd2a97c645d9b8e6fdfedfcbbe76967b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:26:33 compute-0 podman[427718]: 2025-11-25 17:26:33.381072762 +0000 UTC m=+0.179816656 container init 97153d1e57cf7a86a34ffcb76b2093dcff7f391e85fcc2ef456ec6a1154fc3dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_snyder, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:26:33 compute-0 podman[427718]: 2025-11-25 17:26:33.392102232 +0000 UTC m=+0.190846046 container start 97153d1e57cf7a86a34ffcb76b2093dcff7f391e85fcc2ef456ec6a1154fc3dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_snyder, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:26:33 compute-0 podman[427718]: 2025-11-25 17:26:33.395814694 +0000 UTC m=+0.194558528 container attach 97153d1e57cf7a86a34ffcb76b2093dcff7f391e85fcc2ef456ec6a1154fc3dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_snyder, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]: {
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:     "0": [
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:         {
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:             "devices": [
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:                 "/dev/loop3"
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:             ],
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:             "lv_name": "ceph_lv0",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:             "lv_size": "21470642176",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:             "name": "ceph_lv0",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:             "tags": {
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:                 "ceph.cluster_name": "ceph",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:                 "ceph.crush_device_class": "",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:                 "ceph.encrypted": "0",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:                 "ceph.osd_id": "0",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:                 "ceph.type": "block",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:                 "ceph.vdo": "0"
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:             },
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:             "type": "block",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:             "vg_name": "ceph_vg0"
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:         }
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:     ],
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:     "1": [
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:         {
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:             "devices": [
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:                 "/dev/loop4"
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:             ],
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:             "lv_name": "ceph_lv1",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:             "lv_size": "21470642176",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:             "name": "ceph_lv1",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:             "tags": {
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:                 "ceph.cluster_name": "ceph",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:                 "ceph.crush_device_class": "",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:                 "ceph.encrypted": "0",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:                 "ceph.osd_id": "1",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:                 "ceph.type": "block",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:                 "ceph.vdo": "0"
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:             },
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:             "type": "block",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:             "vg_name": "ceph_vg1"
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:         }
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:     ],
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:     "2": [
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:         {
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:             "devices": [
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:                 "/dev/loop5"
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:             ],
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:             "lv_name": "ceph_lv2",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:             "lv_size": "21470642176",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:             "name": "ceph_lv2",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:             "tags": {
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:                 "ceph.cluster_name": "ceph",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:                 "ceph.crush_device_class": "",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:                 "ceph.encrypted": "0",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:                 "ceph.osd_id": "2",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:                 "ceph.type": "block",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:                 "ceph.vdo": "0"
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:             },
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:             "type": "block",
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:             "vg_name": "ceph_vg2"
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:         }
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]:     ]
Nov 25 17:26:34 compute-0 relaxed_snyder[427734]: }
Nov 25 17:26:34 compute-0 systemd[1]: libpod-97153d1e57cf7a86a34ffcb76b2093dcff7f391e85fcc2ef456ec6a1154fc3dd.scope: Deactivated successfully.
Nov 25 17:26:34 compute-0 podman[427718]: 2025-11-25 17:26:34.264521779 +0000 UTC m=+1.063265583 container died 97153d1e57cf7a86a34ffcb76b2093dcff7f391e85fcc2ef456ec6a1154fc3dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_snyder, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:26:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b17e113c03c25a4a78bbaaee19eaae9cd2a97c645d9b8e6fdfedfcbbe76967b-merged.mount: Deactivated successfully.
Nov 25 17:26:34 compute-0 podman[427718]: 2025-11-25 17:26:34.340543378 +0000 UTC m=+1.139287192 container remove 97153d1e57cf7a86a34ffcb76b2093dcff7f391e85fcc2ef456ec6a1154fc3dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_snyder, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 17:26:34 compute-0 systemd[1]: libpod-conmon-97153d1e57cf7a86a34ffcb76b2093dcff7f391e85fcc2ef456ec6a1154fc3dd.scope: Deactivated successfully.
Nov 25 17:26:34 compute-0 sudo[427617]: pam_unix(sudo:session): session closed for user root
Nov 25 17:26:34 compute-0 sudo[427756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:26:34 compute-0 sudo[427756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:26:34 compute-0 sudo[427756]: pam_unix(sudo:session): session closed for user root
Nov 25 17:26:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:26:34 compute-0 ceph-mon[74985]: pgmap v3090: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:26:34 compute-0 sudo[427781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:26:34 compute-0 sudo[427781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:26:34 compute-0 sudo[427781]: pam_unix(sudo:session): session closed for user root
Nov 25 17:26:34 compute-0 sudo[427806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:26:34 compute-0 sudo[427806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:26:34 compute-0 sudo[427806]: pam_unix(sudo:session): session closed for user root
Nov 25 17:26:34 compute-0 sudo[427831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:26:34 compute-0 sudo[427831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:26:35 compute-0 nova_compute[254092]: 2025-11-25 17:26:35.144 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:26:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3091: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:26:35 compute-0 podman[427898]: 2025-11-25 17:26:35.257034414 +0000 UTC m=+0.059537681 container create 15111bdc3d8ac45854ddb64fdee5c585ccc88b163ff39470c982c7988788f7fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_merkle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 17:26:35 compute-0 systemd[1]: Started libpod-conmon-15111bdc3d8ac45854ddb64fdee5c585ccc88b163ff39470c982c7988788f7fa.scope.
Nov 25 17:26:35 compute-0 podman[427898]: 2025-11-25 17:26:35.23522484 +0000 UTC m=+0.037728137 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:26:35 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:26:35 compute-0 podman[427898]: 2025-11-25 17:26:35.365114367 +0000 UTC m=+0.167617704 container init 15111bdc3d8ac45854ddb64fdee5c585ccc88b163ff39470c982c7988788f7fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_merkle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 17:26:35 compute-0 podman[427898]: 2025-11-25 17:26:35.37519447 +0000 UTC m=+0.177697777 container start 15111bdc3d8ac45854ddb64fdee5c585ccc88b163ff39470c982c7988788f7fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_merkle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:26:35 compute-0 podman[427898]: 2025-11-25 17:26:35.379634872 +0000 UTC m=+0.182138189 container attach 15111bdc3d8ac45854ddb64fdee5c585ccc88b163ff39470c982c7988788f7fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_merkle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 17:26:35 compute-0 bold_merkle[427915]: 167 167
Nov 25 17:26:35 compute-0 systemd[1]: libpod-15111bdc3d8ac45854ddb64fdee5c585ccc88b163ff39470c982c7988788f7fa.scope: Deactivated successfully.
Nov 25 17:26:35 compute-0 conmon[427915]: conmon 15111bdc3d8ac45854dd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-15111bdc3d8ac45854ddb64fdee5c585ccc88b163ff39470c982c7988788f7fa.scope/container/memory.events
Nov 25 17:26:35 compute-0 podman[427898]: 2025-11-25 17:26:35.384780662 +0000 UTC m=+0.187283939 container died 15111bdc3d8ac45854ddb64fdee5c585ccc88b163ff39470c982c7988788f7fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 17:26:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f8789c09ab604725dca5c6e4cf67d697f87085f5ae762be989f40cd7653e5a9-merged.mount: Deactivated successfully.
Nov 25 17:26:35 compute-0 podman[427898]: 2025-11-25 17:26:35.43432629 +0000 UTC m=+0.236829567 container remove 15111bdc3d8ac45854ddb64fdee5c585ccc88b163ff39470c982c7988788f7fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 17:26:35 compute-0 systemd[1]: libpod-conmon-15111bdc3d8ac45854ddb64fdee5c585ccc88b163ff39470c982c7988788f7fa.scope: Deactivated successfully.
Nov 25 17:26:35 compute-0 podman[427939]: 2025-11-25 17:26:35.651153822 +0000 UTC m=+0.045550161 container create 6c23723d0509142533834ceaf2350442951bec6e15299df4c747358a26051eba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 17:26:35 compute-0 systemd[1]: Started libpod-conmon-6c23723d0509142533834ceaf2350442951bec6e15299df4c747358a26051eba.scope.
Nov 25 17:26:35 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:26:35 compute-0 podman[427939]: 2025-11-25 17:26:35.633068329 +0000 UTC m=+0.027464648 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:26:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c517ba97216eb311c5c8ae5612a9b214143288c7c7e0235902bf3e2c8f6d7b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:26:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c517ba97216eb311c5c8ae5612a9b214143288c7c7e0235902bf3e2c8f6d7b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:26:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c517ba97216eb311c5c8ae5612a9b214143288c7c7e0235902bf3e2c8f6d7b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:26:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c517ba97216eb311c5c8ae5612a9b214143288c7c7e0235902bf3e2c8f6d7b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:26:35 compute-0 podman[427939]: 2025-11-25 17:26:35.74438959 +0000 UTC m=+0.138785889 container init 6c23723d0509142533834ceaf2350442951bec6e15299df4c747358a26051eba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_feistel, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 17:26:35 compute-0 podman[427939]: 2025-11-25 17:26:35.75836479 +0000 UTC m=+0.152761099 container start 6c23723d0509142533834ceaf2350442951bec6e15299df4c747358a26051eba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 17:26:35 compute-0 podman[427939]: 2025-11-25 17:26:35.761895456 +0000 UTC m=+0.156291755 container attach 6c23723d0509142533834ceaf2350442951bec6e15299df4c747358a26051eba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_feistel, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 17:26:36 compute-0 ceph-mon[74985]: pgmap v3091: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:26:36 compute-0 strange_feistel[427954]: {
Nov 25 17:26:36 compute-0 strange_feistel[427954]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:26:36 compute-0 strange_feistel[427954]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:26:36 compute-0 strange_feistel[427954]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:26:36 compute-0 strange_feistel[427954]:         "osd_id": 1,
Nov 25 17:26:36 compute-0 strange_feistel[427954]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:26:36 compute-0 strange_feistel[427954]:         "type": "bluestore"
Nov 25 17:26:36 compute-0 strange_feistel[427954]:     },
Nov 25 17:26:36 compute-0 strange_feistel[427954]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:26:36 compute-0 strange_feistel[427954]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:26:36 compute-0 strange_feistel[427954]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:26:36 compute-0 strange_feistel[427954]:         "osd_id": 2,
Nov 25 17:26:36 compute-0 strange_feistel[427954]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:26:36 compute-0 strange_feistel[427954]:         "type": "bluestore"
Nov 25 17:26:36 compute-0 strange_feistel[427954]:     },
Nov 25 17:26:36 compute-0 strange_feistel[427954]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:26:36 compute-0 strange_feistel[427954]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:26:36 compute-0 strange_feistel[427954]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:26:36 compute-0 strange_feistel[427954]:         "osd_id": 0,
Nov 25 17:26:36 compute-0 strange_feistel[427954]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:26:36 compute-0 strange_feistel[427954]:         "type": "bluestore"
Nov 25 17:26:36 compute-0 strange_feistel[427954]:     }
Nov 25 17:26:36 compute-0 strange_feistel[427954]: }
Nov 25 17:26:36 compute-0 systemd[1]: libpod-6c23723d0509142533834ceaf2350442951bec6e15299df4c747358a26051eba.scope: Deactivated successfully.
Nov 25 17:26:36 compute-0 systemd[1]: libpod-6c23723d0509142533834ceaf2350442951bec6e15299df4c747358a26051eba.scope: Consumed 1.026s CPU time.
Nov 25 17:26:36 compute-0 podman[427939]: 2025-11-25 17:26:36.775391633 +0000 UTC m=+1.169787982 container died 6c23723d0509142533834ceaf2350442951bec6e15299df4c747358a26051eba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_feistel, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 17:26:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c517ba97216eb311c5c8ae5612a9b214143288c7c7e0235902bf3e2c8f6d7b1-merged.mount: Deactivated successfully.
Nov 25 17:26:36 compute-0 podman[427939]: 2025-11-25 17:26:36.843004273 +0000 UTC m=+1.237400582 container remove 6c23723d0509142533834ceaf2350442951bec6e15299df4c747358a26051eba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:26:36 compute-0 systemd[1]: libpod-conmon-6c23723d0509142533834ceaf2350442951bec6e15299df4c747358a26051eba.scope: Deactivated successfully.
Nov 25 17:26:36 compute-0 sudo[427831]: pam_unix(sudo:session): session closed for user root
Nov 25 17:26:36 compute-0 nova_compute[254092]: 2025-11-25 17:26:36.909 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:26:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:26:36 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:26:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:26:36 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:26:36 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 7bdfde25-ac8d-4822-a8ea-959246ebe259 does not exist
Nov 25 17:26:36 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev a96007a6-b467-45d6-81b1-4e857bc609e1 does not exist
Nov 25 17:26:37 compute-0 sudo[428001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:26:37 compute-0 sudo[428001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:26:37 compute-0 sudo[428001]: pam_unix(sudo:session): session closed for user root
Nov 25 17:26:37 compute-0 sudo[428026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:26:37 compute-0 sudo[428026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:26:37 compute-0 sudo[428026]: pam_unix(sudo:session): session closed for user root
Nov 25 17:26:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3092: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:26:37 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:26:37 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:26:37 compute-0 ceph-mon[74985]: pgmap v3092: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:26:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3093: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:26:39 compute-0 nova_compute[254092]: 2025-11-25 17:26:39.254 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:26:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:26:39.254 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=64, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=63) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:26:39 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:26:39.255 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 17:26:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:26:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:26:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:26:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:26:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:26:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:26:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:26:40 compute-0 nova_compute[254092]: 2025-11-25 17:26:40.148 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:26:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:26:40
Nov 25 17:26:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:26:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:26:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['images', 'default.rgw.meta', '.mgr', 'volumes', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta', 'vms']
Nov 25 17:26:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:26:40 compute-0 ceph-mon[74985]: pgmap v3093: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:26:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:26:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:26:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:26:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:26:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:26:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:26:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:26:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:26:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:26:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:26:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3094: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:26:41 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:26:41.257 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '64'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:26:41 compute-0 nova_compute[254092]: 2025-11-25 17:26:41.910 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:26:42 compute-0 ceph-mon[74985]: pgmap v3094: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:26:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3095: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:26:44 compute-0 ceph-mon[74985]: pgmap v3095: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:26:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:26:45 compute-0 nova_compute[254092]: 2025-11-25 17:26:45.151 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:26:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3096: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:26:46 compute-0 ceph-mon[74985]: pgmap v3096: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:26:46 compute-0 nova_compute[254092]: 2025-11-25 17:26:46.912 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:26:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3097: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:26:47 compute-0 nova_compute[254092]: 2025-11-25 17:26:47.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:26:48 compute-0 ceph-mon[74985]: pgmap v3097: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:26:48 compute-0 nova_compute[254092]: 2025-11-25 17:26:48.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:26:48 compute-0 podman[428052]: 2025-11-25 17:26:48.653206167 +0000 UTC m=+0.066629925 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 17:26:48 compute-0 podman[428051]: 2025-11-25 17:26:48.65919616 +0000 UTC m=+0.072626578 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 17:26:48 compute-0 podman[428053]: 2025-11-25 17:26:48.695663583 +0000 UTC m=+0.102176492 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:26:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3098: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:26:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:26:50 compute-0 nova_compute[254092]: 2025-11-25 17:26:50.155 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:26:50 compute-0 ceph-mon[74985]: pgmap v3098: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:26:50 compute-0 nova_compute[254092]: 2025-11-25 17:26:50.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:26:50 compute-0 nova_compute[254092]: 2025-11-25 17:26:50.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:26:50 compute-0 nova_compute[254092]: 2025-11-25 17:26:50.523 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:26:50 compute-0 nova_compute[254092]: 2025-11-25 17:26:50.524 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:26:50 compute-0 nova_compute[254092]: 2025-11-25 17:26:50.524 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:26:50 compute-0 nova_compute[254092]: 2025-11-25 17:26:50.525 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:26:50 compute-0 nova_compute[254092]: 2025-11-25 17:26:50.525 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:26:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:26:50 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/173695511' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:26:50 compute-0 nova_compute[254092]: 2025-11-25 17:26:50.990 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:26:51 compute-0 nova_compute[254092]: 2025-11-25 17:26:51.219 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:26:51 compute-0 nova_compute[254092]: 2025-11-25 17:26:51.222 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3621MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:26:51 compute-0 nova_compute[254092]: 2025-11-25 17:26:51.222 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:26:51 compute-0 nova_compute[254092]: 2025-11-25 17:26:51.222 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:26:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3099: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:26:51 compute-0 nova_compute[254092]: 2025-11-25 17:26:51.318 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:26:51 compute-0 nova_compute[254092]: 2025-11-25 17:26:51.319 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:26:51 compute-0 nova_compute[254092]: 2025-11-25 17:26:51.345 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:26:51 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/173695511' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:26:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:26:51 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3717067192' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:26:51 compute-0 nova_compute[254092]: 2025-11-25 17:26:51.839 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:26:51 compute-0 nova_compute[254092]: 2025-11-25 17:26:51.848 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:26:51 compute-0 nova_compute[254092]: 2025-11-25 17:26:51.867 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:26:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:26:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:26:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:26:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:26:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 17:26:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:26:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:26:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:26:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:26:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:26:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 17:26:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:26:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:26:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:26:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:26:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:26:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:26:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:26:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:26:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:26:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:26:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:26:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:26:51 compute-0 nova_compute[254092]: 2025-11-25 17:26:51.899 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:26:51 compute-0 nova_compute[254092]: 2025-11-25 17:26:51.900 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:26:51 compute-0 nova_compute[254092]: 2025-11-25 17:26:51.914 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:26:52 compute-0 ceph-mon[74985]: pgmap v3099: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:26:52 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3717067192' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:26:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3100: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:26:54 compute-0 ceph-mon[74985]: pgmap v3100: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:26:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:26:54 compute-0 nova_compute[254092]: 2025-11-25 17:26:54.900 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:26:54 compute-0 nova_compute[254092]: 2025-11-25 17:26:54.901 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:26:54 compute-0 nova_compute[254092]: 2025-11-25 17:26:54.901 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:26:55 compute-0 nova_compute[254092]: 2025-11-25 17:26:55.159 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:26:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3101: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:26:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:26:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/420285285' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:26:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:26:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/420285285' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:26:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/420285285' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:26:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/420285285' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:26:56 compute-0 ceph-mon[74985]: pgmap v3101: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:26:56 compute-0 nova_compute[254092]: 2025-11-25 17:26:56.959 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:26:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3102: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:26:57 compute-0 nova_compute[254092]: 2025-11-25 17:26:57.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:26:58 compute-0 ceph-mon[74985]: pgmap v3102: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:26:58 compute-0 nova_compute[254092]: 2025-11-25 17:26:58.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:26:58 compute-0 nova_compute[254092]: 2025-11-25 17:26:58.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:26:58 compute-0 nova_compute[254092]: 2025-11-25 17:26:58.519 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 17:26:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3103: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:26:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:27:00 compute-0 nova_compute[254092]: 2025-11-25 17:27:00.160 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:27:00 compute-0 ceph-mon[74985]: pgmap v3103: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:27:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3104: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:27:01 compute-0 nova_compute[254092]: 2025-11-25 17:27:01.960 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:27:02 compute-0 ceph-mon[74985]: pgmap v3104: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:27:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3105: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:27:03 compute-0 nova_compute[254092]: 2025-11-25 17:27:03.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:27:04 compute-0 ceph-mon[74985]: pgmap v3105: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:27:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:27:04 compute-0 ovn_controller[153477]: 2025-11-25T17:27:04Z|01617|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Nov 25 17:27:05 compute-0 nova_compute[254092]: 2025-11-25 17:27:05.164 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:27:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3106: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:27:06 compute-0 ceph-mon[74985]: pgmap v3106: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:27:07 compute-0 nova_compute[254092]: 2025-11-25 17:27:07.003 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:27:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3107: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:27:08 compute-0 ceph-mon[74985]: pgmap v3107: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:27:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3108: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:27:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:27:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:27:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:27:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:27:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:27:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:27:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:27:10 compute-0 nova_compute[254092]: 2025-11-25 17:27:10.169 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:27:10 compute-0 ceph-mon[74985]: pgmap v3108: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:27:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3109: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:27:12 compute-0 nova_compute[254092]: 2025-11-25 17:27:12.005 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:27:12 compute-0 ceph-mon[74985]: pgmap v3109: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:27:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3110: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:27:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:27:13.666 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:27:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:27:13.667 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:27:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:27:13.667 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:27:14 compute-0 ceph-mon[74985]: pgmap v3110: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:27:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:27:15 compute-0 nova_compute[254092]: 2025-11-25 17:27:15.173 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:27:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3111: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:27:16 compute-0 ceph-mon[74985]: pgmap v3111: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:27:17 compute-0 nova_compute[254092]: 2025-11-25 17:27:17.007 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:27:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3112: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:27:18 compute-0 ceph-mon[74985]: pgmap v3112: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:27:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3113: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:27:19 compute-0 nova_compute[254092]: 2025-11-25 17:27:19.509 254096 DEBUG oslo_concurrency.lockutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Acquiring lock "7d3f09ec-6bad-4674-ab8b-907560448ab0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:27:19 compute-0 nova_compute[254092]: 2025-11-25 17:27:19.509 254096 DEBUG oslo_concurrency.lockutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "7d3f09ec-6bad-4674-ab8b-907560448ab0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:27:19 compute-0 nova_compute[254092]: 2025-11-25 17:27:19.526 254096 DEBUG nova.compute.manager [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 17:27:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:27:19 compute-0 nova_compute[254092]: 2025-11-25 17:27:19.612 254096 DEBUG oslo_concurrency.lockutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:27:19 compute-0 nova_compute[254092]: 2025-11-25 17:27:19.613 254096 DEBUG oslo_concurrency.lockutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:27:19 compute-0 nova_compute[254092]: 2025-11-25 17:27:19.624 254096 DEBUG nova.virt.hardware [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 17:27:19 compute-0 nova_compute[254092]: 2025-11-25 17:27:19.625 254096 INFO nova.compute.claims [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Claim successful on node compute-0.ctlplane.example.com
Nov 25 17:27:19 compute-0 podman[428160]: 2025-11-25 17:27:19.676280865 +0000 UTC m=+0.075893196 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 17:27:19 compute-0 podman[428159]: 2025-11-25 17:27:19.698231133 +0000 UTC m=+0.105601056 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 17:27:19 compute-0 podman[428167]: 2025-11-25 17:27:19.755381738 +0000 UTC m=+0.135128559 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 17:27:19 compute-0 nova_compute[254092]: 2025-11-25 17:27:19.798 254096 DEBUG oslo_concurrency.processutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:27:20 compute-0 nova_compute[254092]: 2025-11-25 17:27:20.210 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:27:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:27:20 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1500796303' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:27:20 compute-0 nova_compute[254092]: 2025-11-25 17:27:20.340 254096 DEBUG oslo_concurrency.processutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:27:20 compute-0 nova_compute[254092]: 2025-11-25 17:27:20.348 254096 DEBUG nova.compute.provider_tree [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:27:20 compute-0 nova_compute[254092]: 2025-11-25 17:27:20.365 254096 DEBUG nova.scheduler.client.report [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:27:20 compute-0 nova_compute[254092]: 2025-11-25 17:27:20.396 254096 DEBUG oslo_concurrency.lockutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:27:20 compute-0 nova_compute[254092]: 2025-11-25 17:27:20.398 254096 DEBUG nova.compute.manager [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 17:27:20 compute-0 nova_compute[254092]: 2025-11-25 17:27:20.472 254096 DEBUG nova.compute.manager [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 17:27:20 compute-0 nova_compute[254092]: 2025-11-25 17:27:20.473 254096 DEBUG nova.network.neutron [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 17:27:20 compute-0 nova_compute[254092]: 2025-11-25 17:27:20.504 254096 INFO nova.virt.libvirt.driver [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 17:27:20 compute-0 ceph-mon[74985]: pgmap v3113: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:27:20 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1500796303' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:27:20 compute-0 nova_compute[254092]: 2025-11-25 17:27:20.533 254096 DEBUG nova.compute.manager [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 17:27:20 compute-0 nova_compute[254092]: 2025-11-25 17:27:20.637 254096 DEBUG nova.compute.manager [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 17:27:20 compute-0 nova_compute[254092]: 2025-11-25 17:27:20.639 254096 DEBUG nova.virt.libvirt.driver [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 17:27:20 compute-0 nova_compute[254092]: 2025-11-25 17:27:20.640 254096 INFO nova.virt.libvirt.driver [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Creating image(s)
Nov 25 17:27:20 compute-0 nova_compute[254092]: 2025-11-25 17:27:20.679 254096 DEBUG nova.storage.rbd_utils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] rbd image 7d3f09ec-6bad-4674-ab8b-907560448ab0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:27:20 compute-0 nova_compute[254092]: 2025-11-25 17:27:20.720 254096 DEBUG nova.storage.rbd_utils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] rbd image 7d3f09ec-6bad-4674-ab8b-907560448ab0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:27:20 compute-0 nova_compute[254092]: 2025-11-25 17:27:20.749 254096 DEBUG nova.storage.rbd_utils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] rbd image 7d3f09ec-6bad-4674-ab8b-907560448ab0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:27:20 compute-0 nova_compute[254092]: 2025-11-25 17:27:20.755 254096 DEBUG oslo_concurrency.processutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:27:20 compute-0 nova_compute[254092]: 2025-11-25 17:27:20.851 254096 DEBUG oslo_concurrency.processutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:27:20 compute-0 nova_compute[254092]: 2025-11-25 17:27:20.852 254096 DEBUG oslo_concurrency.lockutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:27:20 compute-0 nova_compute[254092]: 2025-11-25 17:27:20.853 254096 DEBUG oslo_concurrency.lockutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:27:20 compute-0 nova_compute[254092]: 2025-11-25 17:27:20.853 254096 DEBUG oslo_concurrency.lockutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:27:20 compute-0 nova_compute[254092]: 2025-11-25 17:27:20.881 254096 DEBUG nova.storage.rbd_utils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] rbd image 7d3f09ec-6bad-4674-ab8b-907560448ab0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:27:20 compute-0 nova_compute[254092]: 2025-11-25 17:27:20.885 254096 DEBUG oslo_concurrency.processutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 7d3f09ec-6bad-4674-ab8b-907560448ab0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:27:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3114: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:27:21 compute-0 nova_compute[254092]: 2025-11-25 17:27:21.373 254096 DEBUG nova.policy [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e6ab922303af4fc0a70862a72b3ea9c8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd295e1cfcd234c4391fda20fc4264d70', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 17:27:21 compute-0 nova_compute[254092]: 2025-11-25 17:27:21.636 254096 DEBUG oslo_concurrency.processutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 7d3f09ec-6bad-4674-ab8b-907560448ab0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.751s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:27:21 compute-0 nova_compute[254092]: 2025-11-25 17:27:21.710 254096 DEBUG nova.storage.rbd_utils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] resizing rbd image 7d3f09ec-6bad-4674-ab8b-907560448ab0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 17:27:21 compute-0 nova_compute[254092]: 2025-11-25 17:27:21.811 254096 DEBUG nova.objects.instance [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lazy-loading 'migration_context' on Instance uuid 7d3f09ec-6bad-4674-ab8b-907560448ab0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:27:21 compute-0 nova_compute[254092]: 2025-11-25 17:27:21.825 254096 DEBUG nova.virt.libvirt.driver [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 17:27:21 compute-0 nova_compute[254092]: 2025-11-25 17:27:21.826 254096 DEBUG nova.virt.libvirt.driver [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Ensure instance console log exists: /var/lib/nova/instances/7d3f09ec-6bad-4674-ab8b-907560448ab0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 17:27:21 compute-0 nova_compute[254092]: 2025-11-25 17:27:21.827 254096 DEBUG oslo_concurrency.lockutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:27:21 compute-0 nova_compute[254092]: 2025-11-25 17:27:21.827 254096 DEBUG oslo_concurrency.lockutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:27:21 compute-0 nova_compute[254092]: 2025-11-25 17:27:21.827 254096 DEBUG oslo_concurrency.lockutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:27:22 compute-0 nova_compute[254092]: 2025-11-25 17:27:22.009 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:27:22 compute-0 ceph-mon[74985]: pgmap v3114: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:27:23 compute-0 nova_compute[254092]: 2025-11-25 17:27:23.230 254096 DEBUG nova.network.neutron [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Successfully created port: 4c9f1115-1d14-4772-b092-e842077e160a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 17:27:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3115: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:27:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:27:24 compute-0 ceph-mon[74985]: pgmap v3115: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:27:25 compute-0 nova_compute[254092]: 2025-11-25 17:27:25.215 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:27:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3116: 321 pgs: 321 active+clean; 72 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.3 MiB/s wr, 25 op/s
Nov 25 17:27:25 compute-0 nova_compute[254092]: 2025-11-25 17:27:25.371 254096 DEBUG nova.network.neutron [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Successfully updated port: 4c9f1115-1d14-4772-b092-e842077e160a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:27:25 compute-0 nova_compute[254092]: 2025-11-25 17:27:25.388 254096 DEBUG oslo_concurrency.lockutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Acquiring lock "refresh_cache-7d3f09ec-6bad-4674-ab8b-907560448ab0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:27:25 compute-0 nova_compute[254092]: 2025-11-25 17:27:25.388 254096 DEBUG oslo_concurrency.lockutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Acquired lock "refresh_cache-7d3f09ec-6bad-4674-ab8b-907560448ab0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:27:25 compute-0 nova_compute[254092]: 2025-11-25 17:27:25.388 254096 DEBUG nova.network.neutron [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 17:27:25 compute-0 nova_compute[254092]: 2025-11-25 17:27:25.528 254096 DEBUG nova.compute.manager [req-b8a594c8-96e3-4aa2-befa-e0055cecbbc3 req-080ef3c3-c9c3-497a-a236-980f188e7788 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Received event network-changed-4c9f1115-1d14-4772-b092-e842077e160a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:27:25 compute-0 nova_compute[254092]: 2025-11-25 17:27:25.528 254096 DEBUG nova.compute.manager [req-b8a594c8-96e3-4aa2-befa-e0055cecbbc3 req-080ef3c3-c9c3-497a-a236-980f188e7788 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Refreshing instance network info cache due to event network-changed-4c9f1115-1d14-4772-b092-e842077e160a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:27:25 compute-0 nova_compute[254092]: 2025-11-25 17:27:25.529 254096 DEBUG oslo_concurrency.lockutils [req-b8a594c8-96e3-4aa2-befa-e0055cecbbc3 req-080ef3c3-c9c3-497a-a236-980f188e7788 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-7d3f09ec-6bad-4674-ab8b-907560448ab0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:27:25 compute-0 nova_compute[254092]: 2025-11-25 17:27:25.562 254096 DEBUG nova.network.neutron [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:27:26 compute-0 ceph-mon[74985]: pgmap v3116: 321 pgs: 321 active+clean; 72 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.3 MiB/s wr, 25 op/s
Nov 25 17:27:27 compute-0 nova_compute[254092]: 2025-11-25 17:27:27.011 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:27:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3117: 321 pgs: 321 active+clean; 88 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:27:27 compute-0 ceph-mon[74985]: pgmap v3117: 321 pgs: 321 active+clean; 88 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:27:27 compute-0 nova_compute[254092]: 2025-11-25 17:27:27.982 254096 DEBUG nova.network.neutron [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Updating instance_info_cache with network_info: [{"id": "4c9f1115-1d14-4772-b092-e842077e160a", "address": "fa:16:3e:dc:a2:36", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c9f1115-1d", "ovs_interfaceid": "4c9f1115-1d14-4772-b092-e842077e160a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:27:28 compute-0 nova_compute[254092]: 2025-11-25 17:27:28.001 254096 DEBUG oslo_concurrency.lockutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Releasing lock "refresh_cache-7d3f09ec-6bad-4674-ab8b-907560448ab0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:27:28 compute-0 nova_compute[254092]: 2025-11-25 17:27:28.001 254096 DEBUG nova.compute.manager [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Instance network_info: |[{"id": "4c9f1115-1d14-4772-b092-e842077e160a", "address": "fa:16:3e:dc:a2:36", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c9f1115-1d", "ovs_interfaceid": "4c9f1115-1d14-4772-b092-e842077e160a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 17:27:28 compute-0 nova_compute[254092]: 2025-11-25 17:27:28.001 254096 DEBUG oslo_concurrency.lockutils [req-b8a594c8-96e3-4aa2-befa-e0055cecbbc3 req-080ef3c3-c9c3-497a-a236-980f188e7788 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-7d3f09ec-6bad-4674-ab8b-907560448ab0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:27:28 compute-0 nova_compute[254092]: 2025-11-25 17:27:28.002 254096 DEBUG nova.network.neutron [req-b8a594c8-96e3-4aa2-befa-e0055cecbbc3 req-080ef3c3-c9c3-497a-a236-980f188e7788 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Refreshing network info cache for port 4c9f1115-1d14-4772-b092-e842077e160a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:27:28 compute-0 nova_compute[254092]: 2025-11-25 17:27:28.004 254096 DEBUG nova.virt.libvirt.driver [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Start _get_guest_xml network_info=[{"id": "4c9f1115-1d14-4772-b092-e842077e160a", "address": "fa:16:3e:dc:a2:36", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c9f1115-1d", "ovs_interfaceid": "4c9f1115-1d14-4772-b092-e842077e160a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 17:27:28 compute-0 nova_compute[254092]: 2025-11-25 17:27:28.008 254096 WARNING nova.virt.libvirt.driver [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:27:28 compute-0 nova_compute[254092]: 2025-11-25 17:27:28.011 254096 DEBUG nova.virt.libvirt.host [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 17:27:28 compute-0 nova_compute[254092]: 2025-11-25 17:27:28.012 254096 DEBUG nova.virt.libvirt.host [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 17:27:28 compute-0 nova_compute[254092]: 2025-11-25 17:27:28.018 254096 DEBUG nova.virt.libvirt.host [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 17:27:28 compute-0 nova_compute[254092]: 2025-11-25 17:27:28.018 254096 DEBUG nova.virt.libvirt.host [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 17:27:28 compute-0 nova_compute[254092]: 2025-11-25 17:27:28.019 254096 DEBUG nova.virt.libvirt.driver [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 17:27:28 compute-0 nova_compute[254092]: 2025-11-25 17:27:28.019 254096 DEBUG nova.virt.hardware [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 17:27:28 compute-0 nova_compute[254092]: 2025-11-25 17:27:28.019 254096 DEBUG nova.virt.hardware [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 17:27:28 compute-0 nova_compute[254092]: 2025-11-25 17:27:28.019 254096 DEBUG nova.virt.hardware [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 17:27:28 compute-0 nova_compute[254092]: 2025-11-25 17:27:28.020 254096 DEBUG nova.virt.hardware [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 17:27:28 compute-0 nova_compute[254092]: 2025-11-25 17:27:28.020 254096 DEBUG nova.virt.hardware [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 17:27:28 compute-0 nova_compute[254092]: 2025-11-25 17:27:28.020 254096 DEBUG nova.virt.hardware [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 17:27:28 compute-0 nova_compute[254092]: 2025-11-25 17:27:28.020 254096 DEBUG nova.virt.hardware [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 17:27:28 compute-0 nova_compute[254092]: 2025-11-25 17:27:28.020 254096 DEBUG nova.virt.hardware [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 17:27:28 compute-0 nova_compute[254092]: 2025-11-25 17:27:28.021 254096 DEBUG nova.virt.hardware [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 17:27:28 compute-0 nova_compute[254092]: 2025-11-25 17:27:28.021 254096 DEBUG nova.virt.hardware [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 17:27:28 compute-0 nova_compute[254092]: 2025-11-25 17:27:28.021 254096 DEBUG nova.virt.hardware [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 17:27:28 compute-0 nova_compute[254092]: 2025-11-25 17:27:28.024 254096 DEBUG oslo_concurrency.processutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:27:28 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:27:28 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2385939879' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:27:28 compute-0 nova_compute[254092]: 2025-11-25 17:27:28.523 254096 DEBUG oslo_concurrency.processutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:27:28 compute-0 nova_compute[254092]: 2025-11-25 17:27:28.546 254096 DEBUG nova.storage.rbd_utils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] rbd image 7d3f09ec-6bad-4674-ab8b-907560448ab0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:27:28 compute-0 nova_compute[254092]: 2025-11-25 17:27:28.550 254096 DEBUG oslo_concurrency.processutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:27:28 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2385939879' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:27:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:27:29 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/265014662' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:27:29 compute-0 nova_compute[254092]: 2025-11-25 17:27:29.025 254096 DEBUG oslo_concurrency.processutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:27:29 compute-0 nova_compute[254092]: 2025-11-25 17:27:29.027 254096 DEBUG nova.virt.libvirt.vif [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:27:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-142394671',display_name='tempest-TestSnapshotPattern-server-142394671',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-142394671',id=151,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPdD8/FH5EbDI3xVD9XL5KNVQ4Lzs3xHz1AUjnnFvQmxyC/VbF+auKCZxkKmIeFDv9Sg6Emei+wcjh3M01sOiN6AXBLXE6uzmUge5MlLrHdGnnVCH6sch0v/Lk4Ix4P1ww==',key_name='tempest-TestSnapshotPattern-1261516286',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d295e1cfcd234c4391fda20fc4264d70',ramdisk_id='',reservation_id='r-pb0kf19x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1072505445',owner_user_name='tempest-TestSnapshotPattern-1072505445-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:27:20Z,user_data=None,user_id='e6ab922303af4fc0a70862a72b3ea9c8',uuid=7d3f09ec-6bad-4674-ab8b-907560448ab0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4c9f1115-1d14-4772-b092-e842077e160a", "address": "fa:16:3e:dc:a2:36", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c9f1115-1d", "ovs_interfaceid": "4c9f1115-1d14-4772-b092-e842077e160a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:27:29 compute-0 nova_compute[254092]: 2025-11-25 17:27:29.027 254096 DEBUG nova.network.os_vif_util [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Converting VIF {"id": "4c9f1115-1d14-4772-b092-e842077e160a", "address": "fa:16:3e:dc:a2:36", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c9f1115-1d", "ovs_interfaceid": "4c9f1115-1d14-4772-b092-e842077e160a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:27:29 compute-0 nova_compute[254092]: 2025-11-25 17:27:29.028 254096 DEBUG nova.network.os_vif_util [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:a2:36,bridge_name='br-int',has_traffic_filtering=True,id=4c9f1115-1d14-4772-b092-e842077e160a,network=Network(7a1a00fe-6b82-48c5-a534-9040cbe84499),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c9f1115-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:27:29 compute-0 nova_compute[254092]: 2025-11-25 17:27:29.029 254096 DEBUG nova.objects.instance [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7d3f09ec-6bad-4674-ab8b-907560448ab0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:27:29 compute-0 nova_compute[254092]: 2025-11-25 17:27:29.041 254096 DEBUG nova.virt.libvirt.driver [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] End _get_guest_xml xml=<domain type="kvm">
Nov 25 17:27:29 compute-0 nova_compute[254092]:   <uuid>7d3f09ec-6bad-4674-ab8b-907560448ab0</uuid>
Nov 25 17:27:29 compute-0 nova_compute[254092]:   <name>instance-00000097</name>
Nov 25 17:27:29 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 17:27:29 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 17:27:29 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:27:29 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:       <nova:name>tempest-TestSnapshotPattern-server-142394671</nova:name>
Nov 25 17:27:29 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 17:27:28</nova:creationTime>
Nov 25 17:27:29 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 17:27:29 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 17:27:29 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 17:27:29 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 17:27:29 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:27:29 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 17:27:29 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 17:27:29 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 17:27:29 compute-0 nova_compute[254092]:         <nova:user uuid="e6ab922303af4fc0a70862a72b3ea9c8">tempest-TestSnapshotPattern-1072505445-project-member</nova:user>
Nov 25 17:27:29 compute-0 nova_compute[254092]:         <nova:project uuid="d295e1cfcd234c4391fda20fc4264d70">tempest-TestSnapshotPattern-1072505445</nova:project>
Nov 25 17:27:29 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 17:27:29 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 17:27:29 compute-0 nova_compute[254092]:         <nova:port uuid="4c9f1115-1d14-4772-b092-e842077e160a">
Nov 25 17:27:29 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 17:27:29 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 17:27:29 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:27:29 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 17:27:29 compute-0 nova_compute[254092]:     <system>
Nov 25 17:27:29 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 17:27:29 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 17:27:29 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:27:29 compute-0 nova_compute[254092]:       <entry name="serial">7d3f09ec-6bad-4674-ab8b-907560448ab0</entry>
Nov 25 17:27:29 compute-0 nova_compute[254092]:       <entry name="uuid">7d3f09ec-6bad-4674-ab8b-907560448ab0</entry>
Nov 25 17:27:29 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     </system>
Nov 25 17:27:29 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:27:29 compute-0 nova_compute[254092]:   <os>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:   </os>
Nov 25 17:27:29 compute-0 nova_compute[254092]:   <features>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:   </features>
Nov 25 17:27:29 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 17:27:29 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:27:29 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 17:27:29 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:27:29 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 17:27:29 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/7d3f09ec-6bad-4674-ab8b-907560448ab0_disk">
Nov 25 17:27:29 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:       </source>
Nov 25 17:27:29 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:27:29 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:27:29 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 17:27:29 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/7d3f09ec-6bad-4674-ab8b-907560448ab0_disk.config">
Nov 25 17:27:29 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:       </source>
Nov 25 17:27:29 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:27:29 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:27:29 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 17:27:29 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:dc:a2:36"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:       <target dev="tap4c9f1115-1d"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 17:27:29 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/7d3f09ec-6bad-4674-ab8b-907560448ab0/console.log" append="off"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     <video>
Nov 25 17:27:29 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     </video>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 17:27:29 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 17:27:29 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 17:27:29 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:27:29 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:27:29 compute-0 nova_compute[254092]: </domain>
Nov 25 17:27:29 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 17:27:29 compute-0 nova_compute[254092]: 2025-11-25 17:27:29.043 254096 DEBUG nova.compute.manager [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Preparing to wait for external event network-vif-plugged-4c9f1115-1d14-4772-b092-e842077e160a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 17:27:29 compute-0 nova_compute[254092]: 2025-11-25 17:27:29.043 254096 DEBUG oslo_concurrency.lockutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Acquiring lock "7d3f09ec-6bad-4674-ab8b-907560448ab0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:27:29 compute-0 nova_compute[254092]: 2025-11-25 17:27:29.043 254096 DEBUG oslo_concurrency.lockutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "7d3f09ec-6bad-4674-ab8b-907560448ab0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:27:29 compute-0 nova_compute[254092]: 2025-11-25 17:27:29.044 254096 DEBUG oslo_concurrency.lockutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "7d3f09ec-6bad-4674-ab8b-907560448ab0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:27:29 compute-0 nova_compute[254092]: 2025-11-25 17:27:29.044 254096 DEBUG nova.virt.libvirt.vif [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:27:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-142394671',display_name='tempest-TestSnapshotPattern-server-142394671',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-142394671',id=151,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPdD8/FH5EbDI3xVD9XL5KNVQ4Lzs3xHz1AUjnnFvQmxyC/VbF+auKCZxkKmIeFDv9Sg6Emei+wcjh3M01sOiN6AXBLXE6uzmUge5MlLrHdGnnVCH6sch0v/Lk4Ix4P1ww==',key_name='tempest-TestSnapshotPattern-1261516286',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d295e1cfcd234c4391fda20fc4264d70',ramdisk_id='',reservation_id='r-pb0kf19x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1072505445',owner_user_name='tempest-TestSnapshotPattern-1072505445-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:27:20Z,user_data=None,user_id='e6ab922303af4fc0a70862a72b3ea9c8',uuid=7d3f09ec-6bad-4674-ab8b-907560448ab0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4c9f1115-1d14-4772-b092-e842077e160a", "address": "fa:16:3e:dc:a2:36", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c9f1115-1d", "ovs_interfaceid": "4c9f1115-1d14-4772-b092-e842077e160a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:27:29 compute-0 nova_compute[254092]: 2025-11-25 17:27:29.045 254096 DEBUG nova.network.os_vif_util [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Converting VIF {"id": "4c9f1115-1d14-4772-b092-e842077e160a", "address": "fa:16:3e:dc:a2:36", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c9f1115-1d", "ovs_interfaceid": "4c9f1115-1d14-4772-b092-e842077e160a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:27:29 compute-0 nova_compute[254092]: 2025-11-25 17:27:29.045 254096 DEBUG nova.network.os_vif_util [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:a2:36,bridge_name='br-int',has_traffic_filtering=True,id=4c9f1115-1d14-4772-b092-e842077e160a,network=Network(7a1a00fe-6b82-48c5-a534-9040cbe84499),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c9f1115-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:27:29 compute-0 nova_compute[254092]: 2025-11-25 17:27:29.046 254096 DEBUG os_vif [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:a2:36,bridge_name='br-int',has_traffic_filtering=True,id=4c9f1115-1d14-4772-b092-e842077e160a,network=Network(7a1a00fe-6b82-48c5-a534-9040cbe84499),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c9f1115-1d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:27:29 compute-0 nova_compute[254092]: 2025-11-25 17:27:29.046 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:27:29 compute-0 nova_compute[254092]: 2025-11-25 17:27:29.047 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:27:29 compute-0 nova_compute[254092]: 2025-11-25 17:27:29.047 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:27:29 compute-0 nova_compute[254092]: 2025-11-25 17:27:29.050 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:27:29 compute-0 nova_compute[254092]: 2025-11-25 17:27:29.050 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4c9f1115-1d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:27:29 compute-0 nova_compute[254092]: 2025-11-25 17:27:29.051 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4c9f1115-1d, col_values=(('external_ids', {'iface-id': '4c9f1115-1d14-4772-b092-e842077e160a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dc:a2:36', 'vm-uuid': '7d3f09ec-6bad-4674-ab8b-907560448ab0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:27:29 compute-0 NetworkManager[48891]: <info>  [1764091649.0534] manager: (tap4c9f1115-1d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/672)
Nov 25 17:27:29 compute-0 nova_compute[254092]: 2025-11-25 17:27:29.055 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:27:29 compute-0 nova_compute[254092]: 2025-11-25 17:27:29.062 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:27:29 compute-0 nova_compute[254092]: 2025-11-25 17:27:29.063 254096 INFO os_vif [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:a2:36,bridge_name='br-int',has_traffic_filtering=True,id=4c9f1115-1d14-4772-b092-e842077e160a,network=Network(7a1a00fe-6b82-48c5-a534-9040cbe84499),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c9f1115-1d')
Nov 25 17:27:29 compute-0 nova_compute[254092]: 2025-11-25 17:27:29.111 254096 DEBUG nova.virt.libvirt.driver [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:27:29 compute-0 nova_compute[254092]: 2025-11-25 17:27:29.112 254096 DEBUG nova.virt.libvirt.driver [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:27:29 compute-0 nova_compute[254092]: 2025-11-25 17:27:29.112 254096 DEBUG nova.virt.libvirt.driver [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] No VIF found with MAC fa:16:3e:dc:a2:36, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:27:29 compute-0 nova_compute[254092]: 2025-11-25 17:27:29.113 254096 INFO nova.virt.libvirt.driver [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Using config drive
Nov 25 17:27:29 compute-0 nova_compute[254092]: 2025-11-25 17:27:29.132 254096 DEBUG nova.storage.rbd_utils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] rbd image 7d3f09ec-6bad-4674-ab8b-907560448ab0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:27:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3118: 321 pgs: 321 active+clean; 88 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:27:29 compute-0 nova_compute[254092]: 2025-11-25 17:27:29.386 254096 DEBUG nova.network.neutron [req-b8a594c8-96e3-4aa2-befa-e0055cecbbc3 req-080ef3c3-c9c3-497a-a236-980f188e7788 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Updated VIF entry in instance network info cache for port 4c9f1115-1d14-4772-b092-e842077e160a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:27:29 compute-0 nova_compute[254092]: 2025-11-25 17:27:29.386 254096 DEBUG nova.network.neutron [req-b8a594c8-96e3-4aa2-befa-e0055cecbbc3 req-080ef3c3-c9c3-497a-a236-980f188e7788 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Updating instance_info_cache with network_info: [{"id": "4c9f1115-1d14-4772-b092-e842077e160a", "address": "fa:16:3e:dc:a2:36", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c9f1115-1d", "ovs_interfaceid": "4c9f1115-1d14-4772-b092-e842077e160a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:27:29 compute-0 nova_compute[254092]: 2025-11-25 17:27:29.398 254096 DEBUG oslo_concurrency.lockutils [req-b8a594c8-96e3-4aa2-befa-e0055cecbbc3 req-080ef3c3-c9c3-497a-a236-980f188e7788 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-7d3f09ec-6bad-4674-ab8b-907560448ab0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:27:29 compute-0 nova_compute[254092]: 2025-11-25 17:27:29.459 254096 INFO nova.virt.libvirt.driver [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Creating config drive at /var/lib/nova/instances/7d3f09ec-6bad-4674-ab8b-907560448ab0/disk.config
Nov 25 17:27:29 compute-0 nova_compute[254092]: 2025-11-25 17:27:29.463 254096 DEBUG oslo_concurrency.processutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7d3f09ec-6bad-4674-ab8b-907560448ab0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpct9ky4_c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:27:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:27:29 compute-0 nova_compute[254092]: 2025-11-25 17:27:29.620 254096 DEBUG oslo_concurrency.processutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7d3f09ec-6bad-4674-ab8b-907560448ab0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpct9ky4_c" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:27:29 compute-0 nova_compute[254092]: 2025-11-25 17:27:29.648 254096 DEBUG nova.storage.rbd_utils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] rbd image 7d3f09ec-6bad-4674-ab8b-907560448ab0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:27:29 compute-0 nova_compute[254092]: 2025-11-25 17:27:29.653 254096 DEBUG oslo_concurrency.processutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7d3f09ec-6bad-4674-ab8b-907560448ab0/disk.config 7d3f09ec-6bad-4674-ab8b-907560448ab0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:27:29 compute-0 nova_compute[254092]: 2025-11-25 17:27:29.835 254096 DEBUG oslo_concurrency.processutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7d3f09ec-6bad-4674-ab8b-907560448ab0/disk.config 7d3f09ec-6bad-4674-ab8b-907560448ab0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:27:29 compute-0 nova_compute[254092]: 2025-11-25 17:27:29.837 254096 INFO nova.virt.libvirt.driver [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Deleting local config drive /var/lib/nova/instances/7d3f09ec-6bad-4674-ab8b-907560448ab0/disk.config because it was imported into RBD.
Nov 25 17:27:29 compute-0 kernel: tap4c9f1115-1d: entered promiscuous mode
Nov 25 17:27:29 compute-0 NetworkManager[48891]: <info>  [1764091649.9056] manager: (tap4c9f1115-1d): new Tun device (/org/freedesktop/NetworkManager/Devices/673)
Nov 25 17:27:29 compute-0 ovn_controller[153477]: 2025-11-25T17:27:29Z|01618|binding|INFO|Claiming lport 4c9f1115-1d14-4772-b092-e842077e160a for this chassis.
Nov 25 17:27:29 compute-0 ovn_controller[153477]: 2025-11-25T17:27:29Z|01619|binding|INFO|4c9f1115-1d14-4772-b092-e842077e160a: Claiming fa:16:3e:dc:a2:36 10.100.0.14
Nov 25 17:27:29 compute-0 nova_compute[254092]: 2025-11-25 17:27:29.907 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:27:29 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/265014662' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:27:29 compute-0 ceph-mon[74985]: pgmap v3118: 321 pgs: 321 active+clean; 88 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 25 17:27:29 compute-0 nova_compute[254092]: 2025-11-25 17:27:29.912 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:27:29 compute-0 nova_compute[254092]: 2025-11-25 17:27:29.917 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:27:29 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:27:29.925 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:a2:36 10.100.0.14'], port_security=['fa:16:3e:dc:a2:36 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '7d3f09ec-6bad-4674-ab8b-907560448ab0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7a1a00fe-6b82-48c5-a534-9040cbe84499', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd295e1cfcd234c4391fda20fc4264d70', 'neutron:revision_number': '2', 'neutron:security_group_ids': '549c359f-07b7-49ee-bcd4-0d9c2b40c1ca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e580454e-cbcb-4071-803c-15d1a2e87179, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=4c9f1115-1d14-4772-b092-e842077e160a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:27:29 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:27:29.926 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 4c9f1115-1d14-4772-b092-e842077e160a in datapath 7a1a00fe-6b82-48c5-a534-9040cbe84499 bound to our chassis
Nov 25 17:27:29 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:27:29.927 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7a1a00fe-6b82-48c5-a534-9040cbe84499
Nov 25 17:27:29 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:27:29.941 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[c5cd7cbe-a594-45ba-b0be-98d3385ba55d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:27:29 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:27:29.942 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7a1a00fe-61 in ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 17:27:29 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:27:29.944 270486 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7a1a00fe-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 17:27:29 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:27:29.944 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[1a7bc51e-cfbe-4fb4-a3c9-d8556d0347fb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:27:29 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:27:29.945 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[af139fda-c795-4337-a81a-05124ced9e96]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:27:29 compute-0 systemd-udevd[428548]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:27:29 compute-0 systemd-machined[216343]: New machine qemu-185-instance-00000097.
Nov 25 17:27:29 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:27:29.958 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[57a72459-add4-4a97-90a5-f6e2f8712833]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:27:29 compute-0 NetworkManager[48891]: <info>  [1764091649.9759] device (tap4c9f1115-1d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:27:29 compute-0 NetworkManager[48891]: <info>  [1764091649.9767] device (tap4c9f1115-1d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:27:29 compute-0 systemd[1]: Started Virtual Machine qemu-185-instance-00000097.
Nov 25 17:27:29 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:27:29.988 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[01ab046c-0f3d-4387-ad91-5cb733befe9b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:27:29 compute-0 nova_compute[254092]: 2025-11-25 17:27:29.990 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:27:29 compute-0 ovn_controller[153477]: 2025-11-25T17:27:29Z|01620|binding|INFO|Setting lport 4c9f1115-1d14-4772-b092-e842077e160a ovn-installed in OVS
Nov 25 17:27:29 compute-0 ovn_controller[153477]: 2025-11-25T17:27:29Z|01621|binding|INFO|Setting lport 4c9f1115-1d14-4772-b092-e842077e160a up in Southbound
Nov 25 17:27:29 compute-0 nova_compute[254092]: 2025-11-25 17:27:29.995 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:27:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:27:30.032 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[8f014617-26c4-4706-bda6-4d5f4e60fc01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:27:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:27:30.037 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[54c86242-f0cb-4ee8-a0d8-99747d059da1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:27:30 compute-0 NetworkManager[48891]: <info>  [1764091650.0391] manager: (tap7a1a00fe-60): new Veth device (/org/freedesktop/NetworkManager/Devices/674)
Nov 25 17:27:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:27:30.093 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[9b6c31f4-fced-4e8b-a03c-76deee77681d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:27:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:27:30.098 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6da27bfb-0d00-4f83-a78f-b486b2783349]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:27:30 compute-0 NetworkManager[48891]: <info>  [1764091650.1299] device (tap7a1a00fe-60): carrier: link connected
Nov 25 17:27:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:27:30.140 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[1a4f1b69-8250-4041-9d80-a2aa549c71c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:27:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:27:30.166 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[2b1e9fb0-6d73-412c-b993-7fed167457c5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7a1a00fe-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:48:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 471], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 810772, 'reachable_time': 37101, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 428580, 'error': None, 'target': 'ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:27:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:27:30.188 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[0120d931-d4be-47f7-a2c1-1fdd58f9f98e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7a:4820'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 810772, 'tstamp': 810772}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 428581, 'error': None, 'target': 'ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:27:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:27:30.212 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[50004794-c253-4feb-9118-54735feca32d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7a1a00fe-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:48:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 471], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 810772, 'reachable_time': 37101, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 428582, 'error': None, 'target': 'ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:27:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:27:30.249 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[66dcd667-7768-47f0-b9a8-66cabbad3433]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:27:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:27:30.323 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[5117b06a-7eec-44e0-a9ef-04e148dffa31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:27:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:27:30.325 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7a1a00fe-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:27:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:27:30.325 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:27:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:27:30.326 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7a1a00fe-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:27:30 compute-0 NetworkManager[48891]: <info>  [1764091650.3284] manager: (tap7a1a00fe-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/675)
Nov 25 17:27:30 compute-0 kernel: tap7a1a00fe-60: entered promiscuous mode
Nov 25 17:27:30 compute-0 nova_compute[254092]: 2025-11-25 17:27:30.327 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:27:30 compute-0 nova_compute[254092]: 2025-11-25 17:27:30.330 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:27:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:27:30.332 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7a1a00fe-60, col_values=(('external_ids', {'iface-id': '0a114fd0-0e8c-4ae1-8b45-56c99d3c790e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:27:30 compute-0 nova_compute[254092]: 2025-11-25 17:27:30.332 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:27:30 compute-0 ovn_controller[153477]: 2025-11-25T17:27:30Z|01622|binding|INFO|Releasing lport 0a114fd0-0e8c-4ae1-8b45-56c99d3c790e from this chassis (sb_readonly=0)
Nov 25 17:27:30 compute-0 nova_compute[254092]: 2025-11-25 17:27:30.354 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:27:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:27:30.355 163338 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7a1a00fe-6b82-48c5-a534-9040cbe84499.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7a1a00fe-6b82-48c5-a534-9040cbe84499.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 17:27:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:27:30.356 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[a4bcfb81-ebdf-495b-81c9-fba26129f4ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:27:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:27:30.357 163338 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 17:27:30 compute-0 ovn_metadata_agent[163333]: global
Nov 25 17:27:30 compute-0 ovn_metadata_agent[163333]:     log         /dev/log local0 debug
Nov 25 17:27:30 compute-0 ovn_metadata_agent[163333]:     log-tag     haproxy-metadata-proxy-7a1a00fe-6b82-48c5-a534-9040cbe84499
Nov 25 17:27:30 compute-0 ovn_metadata_agent[163333]:     user        root
Nov 25 17:27:30 compute-0 ovn_metadata_agent[163333]:     group       root
Nov 25 17:27:30 compute-0 ovn_metadata_agent[163333]:     maxconn     1024
Nov 25 17:27:30 compute-0 ovn_metadata_agent[163333]:     pidfile     /var/lib/neutron/external/pids/7a1a00fe-6b82-48c5-a534-9040cbe84499.pid.haproxy
Nov 25 17:27:30 compute-0 ovn_metadata_agent[163333]:     daemon
Nov 25 17:27:30 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:27:30 compute-0 ovn_metadata_agent[163333]: defaults
Nov 25 17:27:30 compute-0 ovn_metadata_agent[163333]:     log global
Nov 25 17:27:30 compute-0 ovn_metadata_agent[163333]:     mode http
Nov 25 17:27:30 compute-0 ovn_metadata_agent[163333]:     option httplog
Nov 25 17:27:30 compute-0 ovn_metadata_agent[163333]:     option dontlognull
Nov 25 17:27:30 compute-0 ovn_metadata_agent[163333]:     option http-server-close
Nov 25 17:27:30 compute-0 ovn_metadata_agent[163333]:     option forwardfor
Nov 25 17:27:30 compute-0 ovn_metadata_agent[163333]:     retries                 3
Nov 25 17:27:30 compute-0 ovn_metadata_agent[163333]:     timeout http-request    30s
Nov 25 17:27:30 compute-0 ovn_metadata_agent[163333]:     timeout connect         30s
Nov 25 17:27:30 compute-0 ovn_metadata_agent[163333]:     timeout client          32s
Nov 25 17:27:30 compute-0 ovn_metadata_agent[163333]:     timeout server          32s
Nov 25 17:27:30 compute-0 ovn_metadata_agent[163333]:     timeout http-keep-alive 30s
Nov 25 17:27:30 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:27:30 compute-0 ovn_metadata_agent[163333]: 
Nov 25 17:27:30 compute-0 ovn_metadata_agent[163333]: listen listener
Nov 25 17:27:30 compute-0 ovn_metadata_agent[163333]:     bind 169.254.169.254:80
Nov 25 17:27:30 compute-0 ovn_metadata_agent[163333]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 17:27:30 compute-0 ovn_metadata_agent[163333]:     http-request add-header X-OVN-Network-ID 7a1a00fe-6b82-48c5-a534-9040cbe84499
Nov 25 17:27:30 compute-0 ovn_metadata_agent[163333]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 17:27:30 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:27:30.358 163338 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499', 'env', 'PROCESS_TAG=haproxy-7a1a00fe-6b82-48c5-a534-9040cbe84499', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7a1a00fe-6b82-48c5-a534-9040cbe84499.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 17:27:30 compute-0 podman[428651]: 2025-11-25 17:27:30.736726672 +0000 UTC m=+0.048933902 container create 251698afd09a6ee9e733792ce41610f2c56a9f0833c08a2fae7d6e44bde5b81b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 25 17:27:30 compute-0 systemd[1]: Started libpod-conmon-251698afd09a6ee9e733792ce41610f2c56a9f0833c08a2fae7d6e44bde5b81b.scope.
Nov 25 17:27:30 compute-0 nova_compute[254092]: 2025-11-25 17:27:30.777 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091650.7769167, 7d3f09ec-6bad-4674-ab8b-907560448ab0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:27:30 compute-0 nova_compute[254092]: 2025-11-25 17:27:30.778 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] VM Started (Lifecycle Event)
Nov 25 17:27:30 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:27:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bdd69f08ecf9e368f25903ee5b299b22e79808015eca7d99f37cf6be366f960/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 17:27:30 compute-0 podman[428651]: 2025-11-25 17:27:30.709848831 +0000 UTC m=+0.022056091 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 17:27:30 compute-0 podman[428651]: 2025-11-25 17:27:30.81084988 +0000 UTC m=+0.123057130 container init 251698afd09a6ee9e733792ce41610f2c56a9f0833c08a2fae7d6e44bde5b81b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 17:27:30 compute-0 podman[428651]: 2025-11-25 17:27:30.816104393 +0000 UTC m=+0.128311613 container start 251698afd09a6ee9e733792ce41610f2c56a9f0833c08a2fae7d6e44bde5b81b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 25 17:27:30 compute-0 nova_compute[254092]: 2025-11-25 17:27:30.819 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:27:30 compute-0 nova_compute[254092]: 2025-11-25 17:27:30.823 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091650.7771146, 7d3f09ec-6bad-4674-ab8b-907560448ab0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:27:30 compute-0 nova_compute[254092]: 2025-11-25 17:27:30.823 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] VM Paused (Lifecycle Event)
Nov 25 17:27:30 compute-0 neutron-haproxy-ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499[428671]: [NOTICE]   (428675) : New worker (428677) forked
Nov 25 17:27:30 compute-0 neutron-haproxy-ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499[428671]: [NOTICE]   (428675) : Loading success.
Nov 25 17:27:30 compute-0 nova_compute[254092]: 2025-11-25 17:27:30.860 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:27:30 compute-0 nova_compute[254092]: 2025-11-25 17:27:30.864 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:27:30 compute-0 nova_compute[254092]: 2025-11-25 17:27:30.879 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:27:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3119: 321 pgs: 321 active+clean; 88 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Nov 25 17:27:31 compute-0 nova_compute[254092]: 2025-11-25 17:27:31.503 254096 DEBUG nova.compute.manager [req-e3d1ae62-bc6d-4f89-b680-517a790e781f req-5b40d46b-906e-4bdd-a8d1-ea2ed9d9aae5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Received event network-vif-plugged-4c9f1115-1d14-4772-b092-e842077e160a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:27:31 compute-0 nova_compute[254092]: 2025-11-25 17:27:31.505 254096 DEBUG oslo_concurrency.lockutils [req-e3d1ae62-bc6d-4f89-b680-517a790e781f req-5b40d46b-906e-4bdd-a8d1-ea2ed9d9aae5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7d3f09ec-6bad-4674-ab8b-907560448ab0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:27:31 compute-0 nova_compute[254092]: 2025-11-25 17:27:31.505 254096 DEBUG oslo_concurrency.lockutils [req-e3d1ae62-bc6d-4f89-b680-517a790e781f req-5b40d46b-906e-4bdd-a8d1-ea2ed9d9aae5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7d3f09ec-6bad-4674-ab8b-907560448ab0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:27:31 compute-0 nova_compute[254092]: 2025-11-25 17:27:31.506 254096 DEBUG oslo_concurrency.lockutils [req-e3d1ae62-bc6d-4f89-b680-517a790e781f req-5b40d46b-906e-4bdd-a8d1-ea2ed9d9aae5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7d3f09ec-6bad-4674-ab8b-907560448ab0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:27:31 compute-0 nova_compute[254092]: 2025-11-25 17:27:31.506 254096 DEBUG nova.compute.manager [req-e3d1ae62-bc6d-4f89-b680-517a790e781f req-5b40d46b-906e-4bdd-a8d1-ea2ed9d9aae5 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Processing event network-vif-plugged-4c9f1115-1d14-4772-b092-e842077e160a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 17:27:31 compute-0 nova_compute[254092]: 2025-11-25 17:27:31.507 254096 DEBUG nova.compute.manager [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 17:27:31 compute-0 nova_compute[254092]: 2025-11-25 17:27:31.511 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091651.5108566, 7d3f09ec-6bad-4674-ab8b-907560448ab0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:27:31 compute-0 nova_compute[254092]: 2025-11-25 17:27:31.511 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] VM Resumed (Lifecycle Event)
Nov 25 17:27:31 compute-0 nova_compute[254092]: 2025-11-25 17:27:31.513 254096 DEBUG nova.virt.libvirt.driver [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 17:27:31 compute-0 nova_compute[254092]: 2025-11-25 17:27:31.515 254096 INFO nova.virt.libvirt.driver [-] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Instance spawned successfully.
Nov 25 17:27:31 compute-0 nova_compute[254092]: 2025-11-25 17:27:31.515 254096 DEBUG nova.virt.libvirt.driver [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 17:27:31 compute-0 nova_compute[254092]: 2025-11-25 17:27:31.528 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:27:31 compute-0 nova_compute[254092]: 2025-11-25 17:27:31.532 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:27:31 compute-0 nova_compute[254092]: 2025-11-25 17:27:31.544 254096 DEBUG nova.virt.libvirt.driver [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:27:31 compute-0 nova_compute[254092]: 2025-11-25 17:27:31.545 254096 DEBUG nova.virt.libvirt.driver [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:27:31 compute-0 nova_compute[254092]: 2025-11-25 17:27:31.546 254096 DEBUG nova.virt.libvirt.driver [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:27:31 compute-0 nova_compute[254092]: 2025-11-25 17:27:31.546 254096 DEBUG nova.virt.libvirt.driver [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:27:31 compute-0 nova_compute[254092]: 2025-11-25 17:27:31.547 254096 DEBUG nova.virt.libvirt.driver [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:27:31 compute-0 nova_compute[254092]: 2025-11-25 17:27:31.548 254096 DEBUG nova.virt.libvirt.driver [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:27:31 compute-0 nova_compute[254092]: 2025-11-25 17:27:31.554 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:27:31 compute-0 nova_compute[254092]: 2025-11-25 17:27:31.600 254096 INFO nova.compute.manager [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Took 10.96 seconds to spawn the instance on the hypervisor.
Nov 25 17:27:31 compute-0 nova_compute[254092]: 2025-11-25 17:27:31.600 254096 DEBUG nova.compute.manager [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:27:31 compute-0 nova_compute[254092]: 2025-11-25 17:27:31.654 254096 INFO nova.compute.manager [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Took 12.08 seconds to build instance.
Nov 25 17:27:31 compute-0 nova_compute[254092]: 2025-11-25 17:27:31.670 254096 DEBUG oslo_concurrency.lockutils [None req-5c05b06b-4849-4c9f-8418-572496a66b94 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "7d3f09ec-6bad-4674-ab8b-907560448ab0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.161s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:27:32 compute-0 nova_compute[254092]: 2025-11-25 17:27:32.065 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:27:32 compute-0 ceph-mon[74985]: pgmap v3119: 321 pgs: 321 active+clean; 88 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Nov 25 17:27:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3120: 321 pgs: 321 active+clean; 88 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Nov 25 17:27:33 compute-0 nova_compute[254092]: 2025-11-25 17:27:33.610 254096 DEBUG nova.compute.manager [req-15b255f1-c4ff-41e9-9712-11d3b8951faf req-b6e0e572-b13a-4c80-aa23-9da118bf9729 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Received event network-vif-plugged-4c9f1115-1d14-4772-b092-e842077e160a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:27:33 compute-0 nova_compute[254092]: 2025-11-25 17:27:33.610 254096 DEBUG oslo_concurrency.lockutils [req-15b255f1-c4ff-41e9-9712-11d3b8951faf req-b6e0e572-b13a-4c80-aa23-9da118bf9729 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7d3f09ec-6bad-4674-ab8b-907560448ab0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:27:33 compute-0 nova_compute[254092]: 2025-11-25 17:27:33.610 254096 DEBUG oslo_concurrency.lockutils [req-15b255f1-c4ff-41e9-9712-11d3b8951faf req-b6e0e572-b13a-4c80-aa23-9da118bf9729 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7d3f09ec-6bad-4674-ab8b-907560448ab0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:27:33 compute-0 nova_compute[254092]: 2025-11-25 17:27:33.611 254096 DEBUG oslo_concurrency.lockutils [req-15b255f1-c4ff-41e9-9712-11d3b8951faf req-b6e0e572-b13a-4c80-aa23-9da118bf9729 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7d3f09ec-6bad-4674-ab8b-907560448ab0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:27:33 compute-0 nova_compute[254092]: 2025-11-25 17:27:33.611 254096 DEBUG nova.compute.manager [req-15b255f1-c4ff-41e9-9712-11d3b8951faf req-b6e0e572-b13a-4c80-aa23-9da118bf9729 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] No waiting events found dispatching network-vif-plugged-4c9f1115-1d14-4772-b092-e842077e160a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:27:33 compute-0 nova_compute[254092]: 2025-11-25 17:27:33.611 254096 WARNING nova.compute.manager [req-15b255f1-c4ff-41e9-9712-11d3b8951faf req-b6e0e572-b13a-4c80-aa23-9da118bf9729 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Received unexpected event network-vif-plugged-4c9f1115-1d14-4772-b092-e842077e160a for instance with vm_state active and task_state None.
Nov 25 17:27:34 compute-0 nova_compute[254092]: 2025-11-25 17:27:34.053 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:27:34 compute-0 ceph-mon[74985]: pgmap v3120: 321 pgs: 321 active+clean; 88 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Nov 25 17:27:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:27:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3121: 321 pgs: 321 active+clean; 88 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 83 op/s
Nov 25 17:27:36 compute-0 ceph-mon[74985]: pgmap v3121: 321 pgs: 321 active+clean; 88 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 83 op/s
Nov 25 17:27:37 compute-0 nova_compute[254092]: 2025-11-25 17:27:37.086 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:27:37 compute-0 sudo[428686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:27:37 compute-0 sudo[428686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:27:37 compute-0 sudo[428686]: pam_unix(sudo:session): session closed for user root
Nov 25 17:27:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3122: 321 pgs: 321 active+clean; 88 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 464 KiB/s wr, 75 op/s
Nov 25 17:27:37 compute-0 sudo[428711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:27:37 compute-0 sudo[428711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:27:37 compute-0 sudo[428711]: pam_unix(sudo:session): session closed for user root
Nov 25 17:27:37 compute-0 sudo[428736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:27:37 compute-0 sudo[428736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:27:37 compute-0 sudo[428736]: pam_unix(sudo:session): session closed for user root
Nov 25 17:27:37 compute-0 sudo[428761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:27:37 compute-0 sudo[428761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:27:37 compute-0 NetworkManager[48891]: <info>  [1764091657.7180] manager: (patch-br-int-to-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/676)
Nov 25 17:27:37 compute-0 NetworkManager[48891]: <info>  [1764091657.7196] manager: (patch-provnet-8e08dbfe-c57b-4927-a85b-ecf0c6cb1c28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/677)
Nov 25 17:27:37 compute-0 nova_compute[254092]: 2025-11-25 17:27:37.718 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:27:37 compute-0 nova_compute[254092]: 2025-11-25 17:27:37.824 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:27:37 compute-0 ovn_controller[153477]: 2025-11-25T17:27:37Z|01623|binding|INFO|Releasing lport 0a114fd0-0e8c-4ae1-8b45-56c99d3c790e from this chassis (sb_readonly=0)
Nov 25 17:27:37 compute-0 nova_compute[254092]: 2025-11-25 17:27:37.834 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:27:38 compute-0 sudo[428761]: pam_unix(sudo:session): session closed for user root
Nov 25 17:27:38 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:27:38 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:27:38 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:27:38 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:27:38 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:27:38 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:27:38 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev fb92a4ae-d742-4141-9f4c-0826ec552bbb does not exist
Nov 25 17:27:38 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 15019d22-2e8e-46d9-9126-f9651f047b3c does not exist
Nov 25 17:27:38 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev a6b0a232-6f22-4666-8246-a19a9e19ff24 does not exist
Nov 25 17:27:38 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:27:38 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:27:38 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:27:38 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:27:38 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:27:38 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:27:38 compute-0 sudo[428819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:27:38 compute-0 sudo[428819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:27:38 compute-0 sudo[428819]: pam_unix(sudo:session): session closed for user root
Nov 25 17:27:38 compute-0 sudo[428844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:27:38 compute-0 sudo[428844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:27:38 compute-0 sudo[428844]: pam_unix(sudo:session): session closed for user root
Nov 25 17:27:38 compute-0 ceph-mon[74985]: pgmap v3122: 321 pgs: 321 active+clean; 88 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 464 KiB/s wr, 75 op/s
Nov 25 17:27:38 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:27:38 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:27:38 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:27:38 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:27:38 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:27:38 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:27:38 compute-0 sudo[428869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:27:38 compute-0 sudo[428869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:27:38 compute-0 sudo[428869]: pam_unix(sudo:session): session closed for user root
Nov 25 17:27:38 compute-0 sudo[428894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:27:38 compute-0 sudo[428894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:27:38 compute-0 nova_compute[254092]: 2025-11-25 17:27:38.511 254096 DEBUG nova.compute.manager [req-659dfbfb-7124-4c4d-9791-0fc485d2f899 req-1af996c5-6761-4b9a-9046-a36abe5c37cc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Received event network-changed-4c9f1115-1d14-4772-b092-e842077e160a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:27:38 compute-0 nova_compute[254092]: 2025-11-25 17:27:38.512 254096 DEBUG nova.compute.manager [req-659dfbfb-7124-4c4d-9791-0fc485d2f899 req-1af996c5-6761-4b9a-9046-a36abe5c37cc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Refreshing instance network info cache due to event network-changed-4c9f1115-1d14-4772-b092-e842077e160a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:27:38 compute-0 nova_compute[254092]: 2025-11-25 17:27:38.512 254096 DEBUG oslo_concurrency.lockutils [req-659dfbfb-7124-4c4d-9791-0fc485d2f899 req-1af996c5-6761-4b9a-9046-a36abe5c37cc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-7d3f09ec-6bad-4674-ab8b-907560448ab0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:27:38 compute-0 nova_compute[254092]: 2025-11-25 17:27:38.513 254096 DEBUG oslo_concurrency.lockutils [req-659dfbfb-7124-4c4d-9791-0fc485d2f899 req-1af996c5-6761-4b9a-9046-a36abe5c37cc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-7d3f09ec-6bad-4674-ab8b-907560448ab0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:27:38 compute-0 nova_compute[254092]: 2025-11-25 17:27:38.513 254096 DEBUG nova.network.neutron [req-659dfbfb-7124-4c4d-9791-0fc485d2f899 req-1af996c5-6761-4b9a-9046-a36abe5c37cc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Refreshing network info cache for port 4c9f1115-1d14-4772-b092-e842077e160a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:27:38 compute-0 podman[428959]: 2025-11-25 17:27:38.813480665 +0000 UTC m=+0.055014257 container create a275309ccd5f2f044997e65258d64905c05b5846ca76364d1a0e2b1d35c357bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 17:27:38 compute-0 systemd[1]: Started libpod-conmon-a275309ccd5f2f044997e65258d64905c05b5846ca76364d1a0e2b1d35c357bc.scope.
Nov 25 17:27:38 compute-0 podman[428959]: 2025-11-25 17:27:38.785194476 +0000 UTC m=+0.026728078 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:27:38 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:27:38 compute-0 podman[428959]: 2025-11-25 17:27:38.904897584 +0000 UTC m=+0.146431176 container init a275309ccd5f2f044997e65258d64905c05b5846ca76364d1a0e2b1d35c357bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:27:38 compute-0 podman[428959]: 2025-11-25 17:27:38.911805453 +0000 UTC m=+0.153339025 container start a275309ccd5f2f044997e65258d64905c05b5846ca76364d1a0e2b1d35c357bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 17:27:38 compute-0 podman[428959]: 2025-11-25 17:27:38.915499853 +0000 UTC m=+0.157033415 container attach a275309ccd5f2f044997e65258d64905c05b5846ca76364d1a0e2b1d35c357bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_gagarin, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 17:27:38 compute-0 vigorous_gagarin[428975]: 167 167
Nov 25 17:27:38 compute-0 systemd[1]: libpod-a275309ccd5f2f044997e65258d64905c05b5846ca76364d1a0e2b1d35c357bc.scope: Deactivated successfully.
Nov 25 17:27:38 compute-0 conmon[428975]: conmon a275309ccd5f2f044997 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a275309ccd5f2f044997e65258d64905c05b5846ca76364d1a0e2b1d35c357bc.scope/container/memory.events
Nov 25 17:27:38 compute-0 podman[428959]: 2025-11-25 17:27:38.922689748 +0000 UTC m=+0.164223300 container died a275309ccd5f2f044997e65258d64905c05b5846ca76364d1a0e2b1d35c357bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_gagarin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 17:27:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-15c09957d2981aed2073673d5fce38c66f508f0078d4c8ad7b3a665396e93480-merged.mount: Deactivated successfully.
Nov 25 17:27:38 compute-0 podman[428959]: 2025-11-25 17:27:38.970240522 +0000 UTC m=+0.211774084 container remove a275309ccd5f2f044997e65258d64905c05b5846ca76364d1a0e2b1d35c357bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_gagarin, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 17:27:38 compute-0 systemd[1]: libpod-conmon-a275309ccd5f2f044997e65258d64905c05b5846ca76364d1a0e2b1d35c357bc.scope: Deactivated successfully.
Nov 25 17:27:39 compute-0 nova_compute[254092]: 2025-11-25 17:27:39.056 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:27:39 compute-0 podman[428999]: 2025-11-25 17:27:39.189351117 +0000 UTC m=+0.061408572 container create b147051caefbd86c9fa8963e047872b76d338d654cee0e01832d71057db97f46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Nov 25 17:27:39 compute-0 systemd[1]: Started libpod-conmon-b147051caefbd86c9fa8963e047872b76d338d654cee0e01832d71057db97f46.scope.
Nov 25 17:27:39 compute-0 podman[428999]: 2025-11-25 17:27:39.161130389 +0000 UTC m=+0.033187904 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:27:39 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:27:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9f7f2f96ac4804f5980c55a578b7fe572f56de0828da75844748402e25627d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:27:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9f7f2f96ac4804f5980c55a578b7fe572f56de0828da75844748402e25627d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:27:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9f7f2f96ac4804f5980c55a578b7fe572f56de0828da75844748402e25627d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:27:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9f7f2f96ac4804f5980c55a578b7fe572f56de0828da75844748402e25627d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:27:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9f7f2f96ac4804f5980c55a578b7fe572f56de0828da75844748402e25627d2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:27:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3123: 321 pgs: 321 active+clean; 88 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:27:39 compute-0 podman[428999]: 2025-11-25 17:27:39.322909602 +0000 UTC m=+0.194967107 container init b147051caefbd86c9fa8963e047872b76d338d654cee0e01832d71057db97f46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_benz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 17:27:39 compute-0 podman[428999]: 2025-11-25 17:27:39.335250898 +0000 UTC m=+0.207308353 container start b147051caefbd86c9fa8963e047872b76d338d654cee0e01832d71057db97f46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 17:27:39 compute-0 podman[428999]: 2025-11-25 17:27:39.341053226 +0000 UTC m=+0.213110721 container attach b147051caefbd86c9fa8963e047872b76d338d654cee0e01832d71057db97f46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_benz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 17:27:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:27:39 compute-0 nova_compute[254092]: 2025-11-25 17:27:39.891 254096 DEBUG nova.network.neutron [req-659dfbfb-7124-4c4d-9791-0fc485d2f899 req-1af996c5-6761-4b9a-9046-a36abe5c37cc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Updated VIF entry in instance network info cache for port 4c9f1115-1d14-4772-b092-e842077e160a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:27:39 compute-0 nova_compute[254092]: 2025-11-25 17:27:39.892 254096 DEBUG nova.network.neutron [req-659dfbfb-7124-4c4d-9791-0fc485d2f899 req-1af996c5-6761-4b9a-9046-a36abe5c37cc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Updating instance_info_cache with network_info: [{"id": "4c9f1115-1d14-4772-b092-e842077e160a", "address": "fa:16:3e:dc:a2:36", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c9f1115-1d", "ovs_interfaceid": "4c9f1115-1d14-4772-b092-e842077e160a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:27:39 compute-0 nova_compute[254092]: 2025-11-25 17:27:39.915 254096 DEBUG oslo_concurrency.lockutils [req-659dfbfb-7124-4c4d-9791-0fc485d2f899 req-1af996c5-6761-4b9a-9046-a36abe5c37cc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-7d3f09ec-6bad-4674-ab8b-907560448ab0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:27:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:27:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:27:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:27:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:27:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:27:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:27:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:27:40
Nov 25 17:27:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:27:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:27:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'vms', '.mgr', 'backups', 'images', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control']
Nov 25 17:27:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:27:40 compute-0 ceph-mon[74985]: pgmap v3123: 321 pgs: 321 active+clean; 88 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:27:40 compute-0 friendly_benz[429015]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:27:40 compute-0 friendly_benz[429015]: --> relative data size: 1.0
Nov 25 17:27:40 compute-0 friendly_benz[429015]: --> All data devices are unavailable
Nov 25 17:27:40 compute-0 systemd[1]: libpod-b147051caefbd86c9fa8963e047872b76d338d654cee0e01832d71057db97f46.scope: Deactivated successfully.
Nov 25 17:27:40 compute-0 systemd[1]: libpod-b147051caefbd86c9fa8963e047872b76d338d654cee0e01832d71057db97f46.scope: Consumed 1.182s CPU time.
Nov 25 17:27:40 compute-0 podman[428999]: 2025-11-25 17:27:40.579038663 +0000 UTC m=+1.451096108 container died b147051caefbd86c9fa8963e047872b76d338d654cee0e01832d71057db97f46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_benz, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:27:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:27:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:27:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:27:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:27:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:27:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9f7f2f96ac4804f5980c55a578b7fe572f56de0828da75844748402e25627d2-merged.mount: Deactivated successfully.
Nov 25 17:27:40 compute-0 podman[428999]: 2025-11-25 17:27:40.658592108 +0000 UTC m=+1.530649523 container remove b147051caefbd86c9fa8963e047872b76d338d654cee0e01832d71057db97f46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_benz, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 17:27:40 compute-0 systemd[1]: libpod-conmon-b147051caefbd86c9fa8963e047872b76d338d654cee0e01832d71057db97f46.scope: Deactivated successfully.
Nov 25 17:27:40 compute-0 sudo[428894]: pam_unix(sudo:session): session closed for user root
Nov 25 17:27:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:27:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:27:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:27:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:27:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:27:40 compute-0 sudo[429058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:27:40 compute-0 sudo[429058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:27:40 compute-0 sudo[429058]: pam_unix(sudo:session): session closed for user root
Nov 25 17:27:40 compute-0 sudo[429083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:27:40 compute-0 sudo[429083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:27:40 compute-0 sudo[429083]: pam_unix(sudo:session): session closed for user root
Nov 25 17:27:40 compute-0 sudo[429108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:27:40 compute-0 sudo[429108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:27:40 compute-0 sudo[429108]: pam_unix(sudo:session): session closed for user root
Nov 25 17:27:41 compute-0 sudo[429133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:27:41 compute-0 sudo[429133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:27:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3124: 321 pgs: 321 active+clean; 88 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:27:41 compute-0 podman[429196]: 2025-11-25 17:27:41.437081968 +0000 UTC m=+0.043984019 container create 317468a05e301d55919b1c594d945bb550d3a18ec669e8b3c43176b883205669 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 17:27:41 compute-0 systemd[1]: Started libpod-conmon-317468a05e301d55919b1c594d945bb550d3a18ec669e8b3c43176b883205669.scope.
Nov 25 17:27:41 compute-0 podman[429196]: 2025-11-25 17:27:41.41548234 +0000 UTC m=+0.022384411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:27:41 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:27:41 compute-0 podman[429196]: 2025-11-25 17:27:41.536500314 +0000 UTC m=+0.143402375 container init 317468a05e301d55919b1c594d945bb550d3a18ec669e8b3c43176b883205669 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_brattain, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 17:27:41 compute-0 podman[429196]: 2025-11-25 17:27:41.543307419 +0000 UTC m=+0.150209460 container start 317468a05e301d55919b1c594d945bb550d3a18ec669e8b3c43176b883205669 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 17:27:41 compute-0 podman[429196]: 2025-11-25 17:27:41.546681851 +0000 UTC m=+0.153583922 container attach 317468a05e301d55919b1c594d945bb550d3a18ec669e8b3c43176b883205669 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_brattain, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:27:41 compute-0 angry_brattain[429212]: 167 167
Nov 25 17:27:41 compute-0 systemd[1]: libpod-317468a05e301d55919b1c594d945bb550d3a18ec669e8b3c43176b883205669.scope: Deactivated successfully.
Nov 25 17:27:41 compute-0 podman[429217]: 2025-11-25 17:27:41.592317813 +0000 UTC m=+0.030374097 container died 317468a05e301d55919b1c594d945bb550d3a18ec669e8b3c43176b883205669 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_brattain, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:27:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-19a0f1e498a38357313d3aa1f7e3b07686691bc655e6dab15bf01040713d9fdf-merged.mount: Deactivated successfully.
Nov 25 17:27:41 compute-0 podman[429217]: 2025-11-25 17:27:41.626542095 +0000 UTC m=+0.064598349 container remove 317468a05e301d55919b1c594d945bb550d3a18ec669e8b3c43176b883205669 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_brattain, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:27:41 compute-0 systemd[1]: libpod-conmon-317468a05e301d55919b1c594d945bb550d3a18ec669e8b3c43176b883205669.scope: Deactivated successfully.
Nov 25 17:27:41 compute-0 podman[429236]: 2025-11-25 17:27:41.853273497 +0000 UTC m=+0.060828267 container create 7ba9e6224bcfe8434c373fca15090488a4194967abcc66b52d581d0fdd1a1c56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_meitner, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 17:27:41 compute-0 systemd[1]: Started libpod-conmon-7ba9e6224bcfe8434c373fca15090488a4194967abcc66b52d581d0fdd1a1c56.scope.
Nov 25 17:27:41 compute-0 podman[429236]: 2025-11-25 17:27:41.829659493 +0000 UTC m=+0.037214293 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:27:41 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:27:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/110b78c8d176bb1236f1da4763a423180a184792fac9f4c60c51dff41df81152/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:27:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/110b78c8d176bb1236f1da4763a423180a184792fac9f4c60c51dff41df81152/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:27:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/110b78c8d176bb1236f1da4763a423180a184792fac9f4c60c51dff41df81152/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:27:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/110b78c8d176bb1236f1da4763a423180a184792fac9f4c60c51dff41df81152/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:27:41 compute-0 podman[429236]: 2025-11-25 17:27:41.965198372 +0000 UTC m=+0.172753162 container init 7ba9e6224bcfe8434c373fca15090488a4194967abcc66b52d581d0fdd1a1c56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_meitner, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 17:27:41 compute-0 podman[429236]: 2025-11-25 17:27:41.973687154 +0000 UTC m=+0.181241924 container start 7ba9e6224bcfe8434c373fca15090488a4194967abcc66b52d581d0fdd1a1c56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_meitner, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:27:42 compute-0 podman[429236]: 2025-11-25 17:27:42.067441466 +0000 UTC m=+0.274996236 container attach 7ba9e6224bcfe8434c373fca15090488a4194967abcc66b52d581d0fdd1a1c56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:27:42 compute-0 nova_compute[254092]: 2025-11-25 17:27:42.125 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:27:42 compute-0 ceph-mon[74985]: pgmap v3124: 321 pgs: 321 active+clean; 88 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 25 17:27:42 compute-0 elated_meitner[429250]: {
Nov 25 17:27:42 compute-0 elated_meitner[429250]:     "0": [
Nov 25 17:27:42 compute-0 elated_meitner[429250]:         {
Nov 25 17:27:42 compute-0 elated_meitner[429250]:             "devices": [
Nov 25 17:27:42 compute-0 elated_meitner[429250]:                 "/dev/loop3"
Nov 25 17:27:42 compute-0 elated_meitner[429250]:             ],
Nov 25 17:27:42 compute-0 elated_meitner[429250]:             "lv_name": "ceph_lv0",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:             "lv_size": "21470642176",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:             "name": "ceph_lv0",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:             "tags": {
Nov 25 17:27:42 compute-0 elated_meitner[429250]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:                 "ceph.cluster_name": "ceph",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:                 "ceph.crush_device_class": "",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:                 "ceph.encrypted": "0",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:                 "ceph.osd_id": "0",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:                 "ceph.type": "block",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:                 "ceph.vdo": "0"
Nov 25 17:27:42 compute-0 elated_meitner[429250]:             },
Nov 25 17:27:42 compute-0 elated_meitner[429250]:             "type": "block",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:             "vg_name": "ceph_vg0"
Nov 25 17:27:42 compute-0 elated_meitner[429250]:         }
Nov 25 17:27:42 compute-0 elated_meitner[429250]:     ],
Nov 25 17:27:42 compute-0 elated_meitner[429250]:     "1": [
Nov 25 17:27:42 compute-0 elated_meitner[429250]:         {
Nov 25 17:27:42 compute-0 elated_meitner[429250]:             "devices": [
Nov 25 17:27:42 compute-0 elated_meitner[429250]:                 "/dev/loop4"
Nov 25 17:27:42 compute-0 elated_meitner[429250]:             ],
Nov 25 17:27:42 compute-0 elated_meitner[429250]:             "lv_name": "ceph_lv1",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:             "lv_size": "21470642176",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:             "name": "ceph_lv1",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:             "tags": {
Nov 25 17:27:42 compute-0 elated_meitner[429250]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:                 "ceph.cluster_name": "ceph",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:                 "ceph.crush_device_class": "",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:                 "ceph.encrypted": "0",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:                 "ceph.osd_id": "1",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:                 "ceph.type": "block",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:                 "ceph.vdo": "0"
Nov 25 17:27:42 compute-0 elated_meitner[429250]:             },
Nov 25 17:27:42 compute-0 elated_meitner[429250]:             "type": "block",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:             "vg_name": "ceph_vg1"
Nov 25 17:27:42 compute-0 elated_meitner[429250]:         }
Nov 25 17:27:42 compute-0 elated_meitner[429250]:     ],
Nov 25 17:27:42 compute-0 elated_meitner[429250]:     "2": [
Nov 25 17:27:42 compute-0 elated_meitner[429250]:         {
Nov 25 17:27:42 compute-0 elated_meitner[429250]:             "devices": [
Nov 25 17:27:42 compute-0 elated_meitner[429250]:                 "/dev/loop5"
Nov 25 17:27:42 compute-0 elated_meitner[429250]:             ],
Nov 25 17:27:42 compute-0 elated_meitner[429250]:             "lv_name": "ceph_lv2",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:             "lv_size": "21470642176",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:             "name": "ceph_lv2",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:             "tags": {
Nov 25 17:27:42 compute-0 elated_meitner[429250]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:                 "ceph.cluster_name": "ceph",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:                 "ceph.crush_device_class": "",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:                 "ceph.encrypted": "0",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:                 "ceph.osd_id": "2",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:                 "ceph.type": "block",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:                 "ceph.vdo": "0"
Nov 25 17:27:42 compute-0 elated_meitner[429250]:             },
Nov 25 17:27:42 compute-0 elated_meitner[429250]:             "type": "block",
Nov 25 17:27:42 compute-0 elated_meitner[429250]:             "vg_name": "ceph_vg2"
Nov 25 17:27:42 compute-0 elated_meitner[429250]:         }
Nov 25 17:27:42 compute-0 elated_meitner[429250]:     ]
Nov 25 17:27:42 compute-0 elated_meitner[429250]: }
Nov 25 17:27:42 compute-0 systemd[1]: libpod-7ba9e6224bcfe8434c373fca15090488a4194967abcc66b52d581d0fdd1a1c56.scope: Deactivated successfully.
Nov 25 17:27:42 compute-0 podman[429236]: 2025-11-25 17:27:42.824233105 +0000 UTC m=+1.031787885 container died 7ba9e6224bcfe8434c373fca15090488a4194967abcc66b52d581d0fdd1a1c56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 17:27:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-110b78c8d176bb1236f1da4763a423180a184792fac9f4c60c51dff41df81152-merged.mount: Deactivated successfully.
Nov 25 17:27:42 compute-0 podman[429236]: 2025-11-25 17:27:42.893896531 +0000 UTC m=+1.101451301 container remove 7ba9e6224bcfe8434c373fca15090488a4194967abcc66b52d581d0fdd1a1c56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_meitner, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 17:27:42 compute-0 systemd[1]: libpod-conmon-7ba9e6224bcfe8434c373fca15090488a4194967abcc66b52d581d0fdd1a1c56.scope: Deactivated successfully.
Nov 25 17:27:42 compute-0 sudo[429133]: pam_unix(sudo:session): session closed for user root
Nov 25 17:27:43 compute-0 sudo[429275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:27:43 compute-0 sudo[429275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:27:43 compute-0 sudo[429275]: pam_unix(sudo:session): session closed for user root
Nov 25 17:27:43 compute-0 sudo[429300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:27:43 compute-0 sudo[429300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:27:43 compute-0 sudo[429300]: pam_unix(sudo:session): session closed for user root
Nov 25 17:27:43 compute-0 sudo[429325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:27:43 compute-0 sudo[429325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:27:43 compute-0 sudo[429325]: pam_unix(sudo:session): session closed for user root
Nov 25 17:27:43 compute-0 sudo[429350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:27:43 compute-0 sudo[429350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:27:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3125: 321 pgs: 321 active+clean; 88 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Nov 25 17:27:43 compute-0 podman[429415]: 2025-11-25 17:27:43.595752016 +0000 UTC m=+0.044459492 container create 8e9d69af6b6e6aa445bd31844be78c2b3425e4167e700092abb0b27d59e4add2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Nov 25 17:27:43 compute-0 systemd[1]: Started libpod-conmon-8e9d69af6b6e6aa445bd31844be78c2b3425e4167e700092abb0b27d59e4add2.scope.
Nov 25 17:27:43 compute-0 podman[429415]: 2025-11-25 17:27:43.577245472 +0000 UTC m=+0.025952968 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:27:43 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:27:43 compute-0 podman[429415]: 2025-11-25 17:27:43.708902645 +0000 UTC m=+0.157610211 container init 8e9d69af6b6e6aa445bd31844be78c2b3425e4167e700092abb0b27d59e4add2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 17:27:43 compute-0 podman[429415]: 2025-11-25 17:27:43.724498419 +0000 UTC m=+0.173205935 container start 8e9d69af6b6e6aa445bd31844be78c2b3425e4167e700092abb0b27d59e4add2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_matsumoto, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:27:43 compute-0 podman[429415]: 2025-11-25 17:27:43.729276089 +0000 UTC m=+0.177983795 container attach 8e9d69af6b6e6aa445bd31844be78c2b3425e4167e700092abb0b27d59e4add2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_matsumoto, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 17:27:43 compute-0 flamboyant_matsumoto[429431]: 167 167
Nov 25 17:27:43 compute-0 systemd[1]: libpod-8e9d69af6b6e6aa445bd31844be78c2b3425e4167e700092abb0b27d59e4add2.scope: Deactivated successfully.
Nov 25 17:27:43 compute-0 podman[429415]: 2025-11-25 17:27:43.737883634 +0000 UTC m=+0.186591180 container died 8e9d69af6b6e6aa445bd31844be78c2b3425e4167e700092abb0b27d59e4add2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_matsumoto, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:27:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-0506b0cc3545dbd2e64915f3b0184445da0995b0524b9dfe00dcecf4e9ee0e8c-merged.mount: Deactivated successfully.
Nov 25 17:27:43 compute-0 podman[429415]: 2025-11-25 17:27:43.803225703 +0000 UTC m=+0.251933179 container remove 8e9d69af6b6e6aa445bd31844be78c2b3425e4167e700092abb0b27d59e4add2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:27:43 compute-0 systemd[1]: libpod-conmon-8e9d69af6b6e6aa445bd31844be78c2b3425e4167e700092abb0b27d59e4add2.scope: Deactivated successfully.
Nov 25 17:27:44 compute-0 podman[429455]: 2025-11-25 17:27:44.063305751 +0000 UTC m=+0.085793255 container create d9babc6768545da79cf87e0b890a7adc4821f77875e944d61992eb39c86e3337 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_rubin, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:27:44 compute-0 nova_compute[254092]: 2025-11-25 17:27:44.061 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:27:44 compute-0 podman[429455]: 2025-11-25 17:27:44.025938154 +0000 UTC m=+0.048425708 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:27:44 compute-0 systemd[1]: Started libpod-conmon-d9babc6768545da79cf87e0b890a7adc4821f77875e944d61992eb39c86e3337.scope.
Nov 25 17:27:44 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:27:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5670150dca8d523e04e1ed6f13a5d4ced316d227e1426b1581754da3b50a774a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:27:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5670150dca8d523e04e1ed6f13a5d4ced316d227e1426b1581754da3b50a774a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:27:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5670150dca8d523e04e1ed6f13a5d4ced316d227e1426b1581754da3b50a774a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:27:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5670150dca8d523e04e1ed6f13a5d4ced316d227e1426b1581754da3b50a774a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:27:44 compute-0 podman[429455]: 2025-11-25 17:27:44.212988316 +0000 UTC m=+0.235475860 container init d9babc6768545da79cf87e0b890a7adc4821f77875e944d61992eb39c86e3337 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_rubin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 17:27:44 compute-0 podman[429455]: 2025-11-25 17:27:44.220519021 +0000 UTC m=+0.243006495 container start d9babc6768545da79cf87e0b890a7adc4821f77875e944d61992eb39c86e3337 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_rubin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 17:27:44 compute-0 podman[429455]: 2025-11-25 17:27:44.224632343 +0000 UTC m=+0.247119857 container attach d9babc6768545da79cf87e0b890a7adc4821f77875e944d61992eb39c86e3337 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 17:27:44 compute-0 ceph-mon[74985]: pgmap v3125: 321 pgs: 321 active+clean; 88 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Nov 25 17:27:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:27:45 compute-0 ovn_controller[153477]: 2025-11-25T17:27:45Z|00198|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:dc:a2:36 10.100.0.14
Nov 25 17:27:45 compute-0 ovn_controller[153477]: 2025-11-25T17:27:45Z|00199|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:dc:a2:36 10.100.0.14
Nov 25 17:27:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3126: 321 pgs: 321 active+clean; 118 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Nov 25 17:27:45 compute-0 festive_rubin[429471]: {
Nov 25 17:27:45 compute-0 festive_rubin[429471]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:27:45 compute-0 festive_rubin[429471]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:27:45 compute-0 festive_rubin[429471]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:27:45 compute-0 festive_rubin[429471]:         "osd_id": 1,
Nov 25 17:27:45 compute-0 festive_rubin[429471]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:27:45 compute-0 festive_rubin[429471]:         "type": "bluestore"
Nov 25 17:27:45 compute-0 festive_rubin[429471]:     },
Nov 25 17:27:45 compute-0 festive_rubin[429471]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:27:45 compute-0 festive_rubin[429471]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:27:45 compute-0 festive_rubin[429471]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:27:45 compute-0 festive_rubin[429471]:         "osd_id": 2,
Nov 25 17:27:45 compute-0 festive_rubin[429471]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:27:45 compute-0 festive_rubin[429471]:         "type": "bluestore"
Nov 25 17:27:45 compute-0 festive_rubin[429471]:     },
Nov 25 17:27:45 compute-0 festive_rubin[429471]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:27:45 compute-0 festive_rubin[429471]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:27:45 compute-0 festive_rubin[429471]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:27:45 compute-0 festive_rubin[429471]:         "osd_id": 0,
Nov 25 17:27:45 compute-0 festive_rubin[429471]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:27:45 compute-0 festive_rubin[429471]:         "type": "bluestore"
Nov 25 17:27:45 compute-0 festive_rubin[429471]:     }
Nov 25 17:27:45 compute-0 festive_rubin[429471]: }
Nov 25 17:27:45 compute-0 podman[429455]: 2025-11-25 17:27:45.367986634 +0000 UTC m=+1.390474098 container died d9babc6768545da79cf87e0b890a7adc4821f77875e944d61992eb39c86e3337 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:27:45 compute-0 systemd[1]: libpod-d9babc6768545da79cf87e0b890a7adc4821f77875e944d61992eb39c86e3337.scope: Deactivated successfully.
Nov 25 17:27:45 compute-0 systemd[1]: libpod-d9babc6768545da79cf87e0b890a7adc4821f77875e944d61992eb39c86e3337.scope: Consumed 1.157s CPU time.
Nov 25 17:27:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-5670150dca8d523e04e1ed6f13a5d4ced316d227e1426b1581754da3b50a774a-merged.mount: Deactivated successfully.
Nov 25 17:27:45 compute-0 podman[429455]: 2025-11-25 17:27:45.677797097 +0000 UTC m=+1.700284561 container remove d9babc6768545da79cf87e0b890a7adc4821f77875e944d61992eb39c86e3337 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 17:27:45 compute-0 systemd[1]: libpod-conmon-d9babc6768545da79cf87e0b890a7adc4821f77875e944d61992eb39c86e3337.scope: Deactivated successfully.
Nov 25 17:27:45 compute-0 sudo[429350]: pam_unix(sudo:session): session closed for user root
Nov 25 17:27:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:27:45 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:27:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:27:45 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:27:45 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev ccd406ff-bfba-4a4b-8d27-e626b7a64848 does not exist
Nov 25 17:27:45 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 2d12a761-2798-44d2-b389-7e67b2757320 does not exist
Nov 25 17:27:45 compute-0 sudo[429518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:27:45 compute-0 sudo[429518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:27:45 compute-0 sudo[429518]: pam_unix(sudo:session): session closed for user root
Nov 25 17:27:46 compute-0 sudo[429543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:27:46 compute-0 sudo[429543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:27:46 compute-0 sudo[429543]: pam_unix(sudo:session): session closed for user root
Nov 25 17:27:46 compute-0 ceph-mon[74985]: pgmap v3126: 321 pgs: 321 active+clean; 118 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Nov 25 17:27:46 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:27:46 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:27:47 compute-0 nova_compute[254092]: 2025-11-25 17:27:47.129 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:27:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3127: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 840 KiB/s rd, 2.1 MiB/s wr, 81 op/s
Nov 25 17:27:48 compute-0 ceph-mon[74985]: pgmap v3127: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 840 KiB/s rd, 2.1 MiB/s wr, 81 op/s
Nov 25 17:27:49 compute-0 nova_compute[254092]: 2025-11-25 17:27:49.094 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:27:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3128: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:27:49 compute-0 nova_compute[254092]: 2025-11-25 17:27:49.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:27:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:27:50 compute-0 ceph-mon[74985]: pgmap v3128: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 25 17:27:50 compute-0 nova_compute[254092]: 2025-11-25 17:27:50.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:27:50 compute-0 nova_compute[254092]: 2025-11-25 17:27:50.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:27:50 compute-0 nova_compute[254092]: 2025-11-25 17:27:50.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:27:50 compute-0 nova_compute[254092]: 2025-11-25 17:27:50.548 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:27:50 compute-0 nova_compute[254092]: 2025-11-25 17:27:50.548 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:27:50 compute-0 nova_compute[254092]: 2025-11-25 17:27:50.548 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:27:50 compute-0 nova_compute[254092]: 2025-11-25 17:27:50.549 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:27:50 compute-0 nova_compute[254092]: 2025-11-25 17:27:50.549 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:27:50 compute-0 podman[429570]: 2025-11-25 17:27:50.692282046 +0000 UTC m=+0.097431133 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Nov 25 17:27:50 compute-0 podman[429568]: 2025-11-25 17:27:50.693870409 +0000 UTC m=+0.098774539 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:27:50 compute-0 podman[429571]: 2025-11-25 17:27:50.701595759 +0000 UTC m=+0.106836579 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 17:27:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:27:51 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3319224096' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:27:51 compute-0 nova_compute[254092]: 2025-11-25 17:27:51.050 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:27:51 compute-0 nova_compute[254092]: 2025-11-25 17:27:51.144 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:27:51 compute-0 nova_compute[254092]: 2025-11-25 17:27:51.145 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:27:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3129: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:27:51 compute-0 nova_compute[254092]: 2025-11-25 17:27:51.366 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:27:51 compute-0 nova_compute[254092]: 2025-11-25 17:27:51.367 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3418MB free_disk=59.94288635253906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:27:51 compute-0 nova_compute[254092]: 2025-11-25 17:27:51.368 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:27:51 compute-0 nova_compute[254092]: 2025-11-25 17:27:51.368 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:27:51 compute-0 nova_compute[254092]: 2025-11-25 17:27:51.458 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 7d3f09ec-6bad-4674-ab8b-907560448ab0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 17:27:51 compute-0 nova_compute[254092]: 2025-11-25 17:27:51.458 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:27:51 compute-0 nova_compute[254092]: 2025-11-25 17:27:51.459 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:27:51 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3319224096' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:27:51 compute-0 nova_compute[254092]: 2025-11-25 17:27:51.506 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:27:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:27:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:27:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:27:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:27:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007588279164224784 of space, bias 1.0, pg target 0.2276483749267435 quantized to 32 (current 32)
Nov 25 17:27:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:27:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:27:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:27:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:27:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:27:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 17:27:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:27:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:27:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:27:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:27:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:27:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:27:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:27:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:27:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:27:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:27:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:27:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:27:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:27:51 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/259980534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:27:52 compute-0 nova_compute[254092]: 2025-11-25 17:27:52.009 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:27:52 compute-0 nova_compute[254092]: 2025-11-25 17:27:52.015 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:27:52 compute-0 nova_compute[254092]: 2025-11-25 17:27:52.034 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:27:52 compute-0 nova_compute[254092]: 2025-11-25 17:27:52.085 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:27:52 compute-0 nova_compute[254092]: 2025-11-25 17:27:52.086 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:27:52 compute-0 nova_compute[254092]: 2025-11-25 17:27:52.131 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:27:52 compute-0 ceph-mon[74985]: pgmap v3129: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:27:52 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/259980534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:27:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3130: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:27:54 compute-0 nova_compute[254092]: 2025-11-25 17:27:54.097 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:27:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:27:54 compute-0 ceph-mon[74985]: pgmap v3130: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 25 17:27:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3131: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Nov 25 17:27:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:27:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/154453331' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:27:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:27:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/154453331' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:27:55 compute-0 nova_compute[254092]: 2025-11-25 17:27:55.737 254096 DEBUG nova.compute.manager [None req-7825becb-54fd-4abb-b014-b2da0f70e818 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:27:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/154453331' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:27:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/154453331' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:27:55 compute-0 nova_compute[254092]: 2025-11-25 17:27:55.773 254096 INFO nova.compute.manager [None req-7825becb-54fd-4abb-b014-b2da0f70e818 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] instance snapshotting
Nov 25 17:27:56 compute-0 nova_compute[254092]: 2025-11-25 17:27:56.001 254096 INFO nova.virt.libvirt.driver [None req-7825becb-54fd-4abb-b014-b2da0f70e818 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Beginning live snapshot process
Nov 25 17:27:56 compute-0 nova_compute[254092]: 2025-11-25 17:27:56.087 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:27:56 compute-0 nova_compute[254092]: 2025-11-25 17:27:56.088 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:27:56 compute-0 nova_compute[254092]: 2025-11-25 17:27:56.088 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:27:56 compute-0 nova_compute[254092]: 2025-11-25 17:27:56.320 254096 DEBUG nova.virt.libvirt.imagebackend [None req-7825becb-54fd-4abb-b014-b2da0f70e818 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] No parent info for 8b512c8e-2281-41de-a668-eb983e174ba0; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 25 17:27:56 compute-0 nova_compute[254092]: 2025-11-25 17:27:56.551 254096 DEBUG nova.storage.rbd_utils [None req-7825becb-54fd-4abb-b014-b2da0f70e818 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] creating snapshot(69e781c0eef74d57bbbfc30df767b991) on rbd image(7d3f09ec-6bad-4674-ab8b-907560448ab0_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 17:27:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e284 do_prune osdmap full prune enabled
Nov 25 17:27:56 compute-0 ceph-mon[74985]: pgmap v3131: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Nov 25 17:27:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e285 e285: 3 total, 3 up, 3 in
Nov 25 17:27:57 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e285: 3 total, 3 up, 3 in
Nov 25 17:27:57 compute-0 nova_compute[254092]: 2025-11-25 17:27:57.133 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:27:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3133: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.6 KiB/s rd, 36 KiB/s wr, 3 op/s
Nov 25 17:27:57 compute-0 nova_compute[254092]: 2025-11-25 17:27:57.960 254096 DEBUG nova.storage.rbd_utils [None req-7825becb-54fd-4abb-b014-b2da0f70e818 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] cloning vms/7d3f09ec-6bad-4674-ab8b-907560448ab0_disk@69e781c0eef74d57bbbfc30df767b991 to images/eaaf0c90-8ebf-4675-ac5d-b65d63e9cd82 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 17:27:58 compute-0 ceph-mon[74985]: osdmap e285: 3 total, 3 up, 3 in
Nov 25 17:27:58 compute-0 ceph-mon[74985]: pgmap v3133: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.6 KiB/s rd, 36 KiB/s wr, 3 op/s
Nov 25 17:27:58 compute-0 nova_compute[254092]: 2025-11-25 17:27:58.447 254096 DEBUG nova.storage.rbd_utils [None req-7825becb-54fd-4abb-b014-b2da0f70e818 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] flattening images/eaaf0c90-8ebf-4675-ac5d-b65d63e9cd82 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 17:27:58 compute-0 nova_compute[254092]: 2025-11-25 17:27:58.550 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:27:59 compute-0 nova_compute[254092]: 2025-11-25 17:27:59.163 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:27:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3134: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.6 KiB/s rd, 36 KiB/s wr, 3 op/s
Nov 25 17:27:59 compute-0 nova_compute[254092]: 2025-11-25 17:27:59.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:27:59 compute-0 nova_compute[254092]: 2025-11-25 17:27:59.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:27:59 compute-0 nova_compute[254092]: 2025-11-25 17:27:59.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:27:59 compute-0 nova_compute[254092]: 2025-11-25 17:27:59.516 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-7d3f09ec-6bad-4674-ab8b-907560448ab0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:27:59 compute-0 nova_compute[254092]: 2025-11-25 17:27:59.517 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-7d3f09ec-6bad-4674-ab8b-907560448ab0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:27:59 compute-0 nova_compute[254092]: 2025-11-25 17:27:59.517 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 17:27:59 compute-0 nova_compute[254092]: 2025-11-25 17:27:59.517 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7d3f09ec-6bad-4674-ab8b-907560448ab0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:27:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:28:00 compute-0 ceph-mon[74985]: pgmap v3134: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.6 KiB/s rd, 36 KiB/s wr, 3 op/s
Nov 25 17:28:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3135: 321 pgs: 321 active+clean; 180 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 4.2 MiB/s wr, 66 op/s
Nov 25 17:28:01 compute-0 nova_compute[254092]: 2025-11-25 17:28:01.457 254096 DEBUG nova.storage.rbd_utils [None req-7825becb-54fd-4abb-b014-b2da0f70e818 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] removing snapshot(69e781c0eef74d57bbbfc30df767b991) on rbd image(7d3f09ec-6bad-4674-ab8b-907560448ab0_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 17:28:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e285 do_prune osdmap full prune enabled
Nov 25 17:28:01 compute-0 nova_compute[254092]: 2025-11-25 17:28:01.583 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Updating instance_info_cache with network_info: [{"id": "4c9f1115-1d14-4772-b092-e842077e160a", "address": "fa:16:3e:dc:a2:36", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c9f1115-1d", "ovs_interfaceid": "4c9f1115-1d14-4772-b092-e842077e160a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:28:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e286 e286: 3 total, 3 up, 3 in
Nov 25 17:28:01 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e286: 3 total, 3 up, 3 in
Nov 25 17:28:01 compute-0 nova_compute[254092]: 2025-11-25 17:28:01.600 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-7d3f09ec-6bad-4674-ab8b-907560448ab0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:28:01 compute-0 nova_compute[254092]: 2025-11-25 17:28:01.600 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 17:28:01 compute-0 nova_compute[254092]: 2025-11-25 17:28:01.623 254096 DEBUG nova.storage.rbd_utils [None req-7825becb-54fd-4abb-b014-b2da0f70e818 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] creating snapshot(snap) on rbd image(eaaf0c90-8ebf-4675-ac5d-b65d63e9cd82) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 17:28:02 compute-0 nova_compute[254092]: 2025-11-25 17:28:02.137 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:28:02 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e286 do_prune osdmap full prune enabled
Nov 25 17:28:02 compute-0 ceph-mon[74985]: pgmap v3135: 321 pgs: 321 active+clean; 180 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 4.2 MiB/s wr, 66 op/s
Nov 25 17:28:02 compute-0 ceph-mon[74985]: osdmap e286: 3 total, 3 up, 3 in
Nov 25 17:28:02 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e287 e287: 3 total, 3 up, 3 in
Nov 25 17:28:02 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e287: 3 total, 3 up, 3 in
Nov 25 17:28:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3138: 321 pgs: 321 active+clean; 180 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 6.7 MiB/s wr, 101 op/s
Nov 25 17:28:03 compute-0 nova_compute[254092]: 2025-11-25 17:28:03.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:28:03 compute-0 nova_compute[254092]: 2025-11-25 17:28:03.515 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:28:03 compute-0 ceph-mon[74985]: osdmap e287: 3 total, 3 up, 3 in
Nov 25 17:28:03 compute-0 nova_compute[254092]: 2025-11-25 17:28:03.943 254096 INFO nova.virt.libvirt.driver [None req-7825becb-54fd-4abb-b014-b2da0f70e818 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Snapshot image upload complete
Nov 25 17:28:03 compute-0 nova_compute[254092]: 2025-11-25 17:28:03.945 254096 INFO nova.compute.manager [None req-7825becb-54fd-4abb-b014-b2da0f70e818 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Took 8.17 seconds to snapshot the instance on the hypervisor.
Nov 25 17:28:04 compute-0 nova_compute[254092]: 2025-11-25 17:28:04.206 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:28:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:28:04 compute-0 ceph-mon[74985]: pgmap v3138: 321 pgs: 321 active+clean; 180 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 6.7 MiB/s wr, 101 op/s
Nov 25 17:28:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3139: 321 pgs: 321 active+clean; 193 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 109 op/s
Nov 25 17:28:06 compute-0 ceph-mon[74985]: pgmap v3139: 321 pgs: 321 active+clean; 193 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 109 op/s
Nov 25 17:28:07 compute-0 nova_compute[254092]: 2025-11-25 17:28:07.142 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:28:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3140: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 134 op/s
Nov 25 17:28:07 compute-0 nova_compute[254092]: 2025-11-25 17:28:07.421 254096 DEBUG oslo_concurrency.lockutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Acquiring lock "c983f16d-bf50-4a2c-aa05-213890fb387a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:28:07 compute-0 nova_compute[254092]: 2025-11-25 17:28:07.422 254096 DEBUG oslo_concurrency.lockutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "c983f16d-bf50-4a2c-aa05-213890fb387a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:28:07 compute-0 nova_compute[254092]: 2025-11-25 17:28:07.442 254096 DEBUG nova.compute.manager [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 17:28:07 compute-0 nova_compute[254092]: 2025-11-25 17:28:07.524 254096 DEBUG oslo_concurrency.lockutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:28:07 compute-0 nova_compute[254092]: 2025-11-25 17:28:07.525 254096 DEBUG oslo_concurrency.lockutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:28:07 compute-0 nova_compute[254092]: 2025-11-25 17:28:07.537 254096 DEBUG nova.virt.hardware [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 17:28:07 compute-0 nova_compute[254092]: 2025-11-25 17:28:07.538 254096 INFO nova.compute.claims [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Claim successful on node compute-0.ctlplane.example.com
Nov 25 17:28:07 compute-0 nova_compute[254092]: 2025-11-25 17:28:07.697 254096 DEBUG oslo_concurrency.processutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:28:07 compute-0 ovn_controller[153477]: 2025-11-25T17:28:07Z|01624|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 25 17:28:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:28:08 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/222383044' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:28:08 compute-0 nova_compute[254092]: 2025-11-25 17:28:08.244 254096 DEBUG oslo_concurrency.processutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:28:08 compute-0 nova_compute[254092]: 2025-11-25 17:28:08.253 254096 DEBUG nova.compute.provider_tree [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:28:08 compute-0 nova_compute[254092]: 2025-11-25 17:28:08.274 254096 DEBUG nova.scheduler.client.report [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:28:08 compute-0 nova_compute[254092]: 2025-11-25 17:28:08.304 254096 DEBUG oslo_concurrency.lockutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:28:08 compute-0 nova_compute[254092]: 2025-11-25 17:28:08.305 254096 DEBUG nova.compute.manager [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 17:28:08 compute-0 nova_compute[254092]: 2025-11-25 17:28:08.378 254096 DEBUG nova.compute.manager [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 17:28:08 compute-0 nova_compute[254092]: 2025-11-25 17:28:08.378 254096 DEBUG nova.network.neutron [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 17:28:08 compute-0 nova_compute[254092]: 2025-11-25 17:28:08.401 254096 INFO nova.virt.libvirt.driver [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 17:28:08 compute-0 nova_compute[254092]: 2025-11-25 17:28:08.425 254096 DEBUG nova.compute.manager [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 17:28:08 compute-0 nova_compute[254092]: 2025-11-25 17:28:08.536 254096 DEBUG nova.compute.manager [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 17:28:08 compute-0 nova_compute[254092]: 2025-11-25 17:28:08.538 254096 DEBUG nova.virt.libvirt.driver [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 17:28:08 compute-0 nova_compute[254092]: 2025-11-25 17:28:08.538 254096 INFO nova.virt.libvirt.driver [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Creating image(s)
Nov 25 17:28:08 compute-0 nova_compute[254092]: 2025-11-25 17:28:08.559 254096 DEBUG nova.storage.rbd_utils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] rbd image c983f16d-bf50-4a2c-aa05-213890fb387a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:28:08 compute-0 nova_compute[254092]: 2025-11-25 17:28:08.582 254096 DEBUG nova.storage.rbd_utils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] rbd image c983f16d-bf50-4a2c-aa05-213890fb387a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:28:08 compute-0 nova_compute[254092]: 2025-11-25 17:28:08.601 254096 DEBUG nova.storage.rbd_utils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] rbd image c983f16d-bf50-4a2c-aa05-213890fb387a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:28:08 compute-0 nova_compute[254092]: 2025-11-25 17:28:08.604 254096 DEBUG oslo_concurrency.lockutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Acquiring lock "86dd2f95414ac23cbfb3f0889876ddcf1ef4f38f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:28:08 compute-0 nova_compute[254092]: 2025-11-25 17:28:08.605 254096 DEBUG oslo_concurrency.lockutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "86dd2f95414ac23cbfb3f0889876ddcf1ef4f38f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:28:08 compute-0 nova_compute[254092]: 2025-11-25 17:28:08.610 254096 DEBUG nova.policy [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e6ab922303af4fc0a70862a72b3ea9c8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd295e1cfcd234c4391fda20fc4264d70', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 17:28:08 compute-0 ceph-mon[74985]: pgmap v3140: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 134 op/s
Nov 25 17:28:08 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/222383044' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:28:08 compute-0 nova_compute[254092]: 2025-11-25 17:28:08.850 254096 DEBUG nova.virt.libvirt.imagebackend [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Image locations are: [{'url': 'rbd://d82baeae-c742-50a4-b8f6-b5257c8a2c92/images/eaaf0c90-8ebf-4675-ac5d-b65d63e9cd82/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://d82baeae-c742-50a4-b8f6-b5257c8a2c92/images/eaaf0c90-8ebf-4675-ac5d-b65d63e9cd82/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 25 17:28:08 compute-0 nova_compute[254092]: 2025-11-25 17:28:08.916 254096 DEBUG nova.virt.libvirt.imagebackend [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Selected location: {'url': 'rbd://d82baeae-c742-50a4-b8f6-b5257c8a2c92/images/eaaf0c90-8ebf-4675-ac5d-b65d63e9cd82/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Nov 25 17:28:08 compute-0 nova_compute[254092]: 2025-11-25 17:28:08.918 254096 DEBUG nova.storage.rbd_utils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] cloning images/eaaf0c90-8ebf-4675-ac5d-b65d63e9cd82@snap to None/c983f16d-bf50-4a2c-aa05-213890fb387a_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 17:28:09 compute-0 nova_compute[254092]: 2025-11-25 17:28:09.063 254096 DEBUG oslo_concurrency.lockutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "86dd2f95414ac23cbfb3f0889876ddcf1ef4f38f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.458s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:28:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:28:09.181 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=65, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=64) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:28:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:28:09.184 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 17:28:09 compute-0 nova_compute[254092]: 2025-11-25 17:28:09.201 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:28:09 compute-0 nova_compute[254092]: 2025-11-25 17:28:09.254 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:28:09 compute-0 nova_compute[254092]: 2025-11-25 17:28:09.261 254096 DEBUG nova.objects.instance [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lazy-loading 'migration_context' on Instance uuid c983f16d-bf50-4a2c-aa05-213890fb387a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:28:09 compute-0 nova_compute[254092]: 2025-11-25 17:28:09.284 254096 DEBUG nova.virt.libvirt.driver [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 17:28:09 compute-0 nova_compute[254092]: 2025-11-25 17:28:09.285 254096 DEBUG nova.virt.libvirt.driver [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Ensure instance console log exists: /var/lib/nova/instances/c983f16d-bf50-4a2c-aa05-213890fb387a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 17:28:09 compute-0 nova_compute[254092]: 2025-11-25 17:28:09.285 254096 DEBUG oslo_concurrency.lockutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:28:09 compute-0 nova_compute[254092]: 2025-11-25 17:28:09.285 254096 DEBUG oslo_concurrency.lockutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:28:09 compute-0 nova_compute[254092]: 2025-11-25 17:28:09.286 254096 DEBUG oslo_concurrency.lockutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:28:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3141: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 582 KiB/s wr, 55 op/s
Nov 25 17:28:09 compute-0 nova_compute[254092]: 2025-11-25 17:28:09.473 254096 DEBUG nova.network.neutron [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Successfully created port: 78592e85-0e7a-4c36-bf1d-981efc74361b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 17:28:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:28:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e287 do_prune osdmap full prune enabled
Nov 25 17:28:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e288 e288: 3 total, 3 up, 3 in
Nov 25 17:28:09 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e288: 3 total, 3 up, 3 in
Nov 25 17:28:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:28:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:28:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:28:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:28:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:28:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:28:10 compute-0 nova_compute[254092]: 2025-11-25 17:28:10.164 254096 DEBUG nova.network.neutron [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Successfully updated port: 78592e85-0e7a-4c36-bf1d-981efc74361b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 17:28:10 compute-0 nova_compute[254092]: 2025-11-25 17:28:10.186 254096 DEBUG oslo_concurrency.lockutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Acquiring lock "refresh_cache-c983f16d-bf50-4a2c-aa05-213890fb387a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:28:10 compute-0 nova_compute[254092]: 2025-11-25 17:28:10.187 254096 DEBUG oslo_concurrency.lockutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Acquired lock "refresh_cache-c983f16d-bf50-4a2c-aa05-213890fb387a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:28:10 compute-0 nova_compute[254092]: 2025-11-25 17:28:10.188 254096 DEBUG nova.network.neutron [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 17:28:10 compute-0 nova_compute[254092]: 2025-11-25 17:28:10.267 254096 DEBUG nova.compute.manager [req-a242f40f-ba88-4da6-bc3e-fdd4f465f4ab req-8dad7072-9f00-4fee-918f-33cc418a994c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Received event network-changed-78592e85-0e7a-4c36-bf1d-981efc74361b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:28:10 compute-0 nova_compute[254092]: 2025-11-25 17:28:10.267 254096 DEBUG nova.compute.manager [req-a242f40f-ba88-4da6-bc3e-fdd4f465f4ab req-8dad7072-9f00-4fee-918f-33cc418a994c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Refreshing instance network info cache due to event network-changed-78592e85-0e7a-4c36-bf1d-981efc74361b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:28:10 compute-0 nova_compute[254092]: 2025-11-25 17:28:10.268 254096 DEBUG oslo_concurrency.lockutils [req-a242f40f-ba88-4da6-bc3e-fdd4f465f4ab req-8dad7072-9f00-4fee-918f-33cc418a994c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-c983f16d-bf50-4a2c-aa05-213890fb387a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:28:10 compute-0 ceph-mon[74985]: pgmap v3141: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 582 KiB/s wr, 55 op/s
Nov 25 17:28:10 compute-0 ceph-mon[74985]: osdmap e288: 3 total, 3 up, 3 in
Nov 25 17:28:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3143: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 71 KiB/s rd, 537 KiB/s wr, 88 op/s
Nov 25 17:28:11 compute-0 nova_compute[254092]: 2025-11-25 17:28:11.373 254096 DEBUG nova.network.neutron [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:28:12 compute-0 nova_compute[254092]: 2025-11-25 17:28:12.144 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:28:12 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:28:12.186 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '65'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:28:12 compute-0 ceph-mon[74985]: pgmap v3143: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 71 KiB/s rd, 537 KiB/s wr, 88 op/s
Nov 25 17:28:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3144: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 466 KiB/s wr, 76 op/s
Nov 25 17:28:13 compute-0 nova_compute[254092]: 2025-11-25 17:28:13.474 254096 DEBUG nova.network.neutron [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Updating instance_info_cache with network_info: [{"id": "78592e85-0e7a-4c36-bf1d-981efc74361b", "address": "fa:16:3e:f0:b0:be", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78592e85-0e", "ovs_interfaceid": "78592e85-0e7a-4c36-bf1d-981efc74361b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:28:13 compute-0 nova_compute[254092]: 2025-11-25 17:28:13.490 254096 DEBUG oslo_concurrency.lockutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Releasing lock "refresh_cache-c983f16d-bf50-4a2c-aa05-213890fb387a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:28:13 compute-0 nova_compute[254092]: 2025-11-25 17:28:13.491 254096 DEBUG nova.compute.manager [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Instance network_info: |[{"id": "78592e85-0e7a-4c36-bf1d-981efc74361b", "address": "fa:16:3e:f0:b0:be", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78592e85-0e", "ovs_interfaceid": "78592e85-0e7a-4c36-bf1d-981efc74361b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 17:28:13 compute-0 nova_compute[254092]: 2025-11-25 17:28:13.491 254096 DEBUG oslo_concurrency.lockutils [req-a242f40f-ba88-4da6-bc3e-fdd4f465f4ab req-8dad7072-9f00-4fee-918f-33cc418a994c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-c983f16d-bf50-4a2c-aa05-213890fb387a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:28:13 compute-0 nova_compute[254092]: 2025-11-25 17:28:13.492 254096 DEBUG nova.network.neutron [req-a242f40f-ba88-4da6-bc3e-fdd4f465f4ab req-8dad7072-9f00-4fee-918f-33cc418a994c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Refreshing network info cache for port 78592e85-0e7a-4c36-bf1d-981efc74361b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:28:13 compute-0 nova_compute[254092]: 2025-11-25 17:28:13.494 254096 DEBUG nova.virt.libvirt.driver [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Start _get_guest_xml network_info=[{"id": "78592e85-0e7a-4c36-bf1d-981efc74361b", "address": "fa:16:3e:f0:b0:be", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78592e85-0e", "ovs_interfaceid": "78592e85-0e7a-4c36-bf1d-981efc74361b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-25T17:27:54Z,direct_url=<?>,disk_format='raw',id=eaaf0c90-8ebf-4675-ac5d-b65d63e9cd82,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-2017666674',owner='d295e1cfcd234c4391fda20fc4264d70',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-25T17:28:03Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': 'eaaf0c90-8ebf-4675-ac5d-b65d63e9cd82'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 17:28:13 compute-0 nova_compute[254092]: 2025-11-25 17:28:13.500 254096 WARNING nova.virt.libvirt.driver [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:28:13 compute-0 nova_compute[254092]: 2025-11-25 17:28:13.508 254096 DEBUG nova.virt.libvirt.host [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 17:28:13 compute-0 nova_compute[254092]: 2025-11-25 17:28:13.509 254096 DEBUG nova.virt.libvirt.host [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 17:28:13 compute-0 nova_compute[254092]: 2025-11-25 17:28:13.518 254096 DEBUG nova.virt.libvirt.host [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 17:28:13 compute-0 nova_compute[254092]: 2025-11-25 17:28:13.519 254096 DEBUG nova.virt.libvirt.host [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 17:28:13 compute-0 nova_compute[254092]: 2025-11-25 17:28:13.520 254096 DEBUG nova.virt.libvirt.driver [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 17:28:13 compute-0 nova_compute[254092]: 2025-11-25 17:28:13.520 254096 DEBUG nova.virt.hardware [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-25T17:27:54Z,direct_url=<?>,disk_format='raw',id=eaaf0c90-8ebf-4675-ac5d-b65d63e9cd82,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-2017666674',owner='d295e1cfcd234c4391fda20fc4264d70',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-25T17:28:03Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 17:28:13 compute-0 nova_compute[254092]: 2025-11-25 17:28:13.521 254096 DEBUG nova.virt.hardware [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 17:28:13 compute-0 nova_compute[254092]: 2025-11-25 17:28:13.521 254096 DEBUG nova.virt.hardware [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 17:28:13 compute-0 nova_compute[254092]: 2025-11-25 17:28:13.522 254096 DEBUG nova.virt.hardware [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 17:28:13 compute-0 nova_compute[254092]: 2025-11-25 17:28:13.522 254096 DEBUG nova.virt.hardware [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 17:28:13 compute-0 nova_compute[254092]: 2025-11-25 17:28:13.523 254096 DEBUG nova.virt.hardware [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 17:28:13 compute-0 nova_compute[254092]: 2025-11-25 17:28:13.523 254096 DEBUG nova.virt.hardware [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 17:28:13 compute-0 nova_compute[254092]: 2025-11-25 17:28:13.523 254096 DEBUG nova.virt.hardware [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 17:28:13 compute-0 nova_compute[254092]: 2025-11-25 17:28:13.524 254096 DEBUG nova.virt.hardware [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 17:28:13 compute-0 nova_compute[254092]: 2025-11-25 17:28:13.524 254096 DEBUG nova.virt.hardware [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 17:28:13 compute-0 nova_compute[254092]: 2025-11-25 17:28:13.524 254096 DEBUG nova.virt.hardware [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 17:28:13 compute-0 nova_compute[254092]: 2025-11-25 17:28:13.529 254096 DEBUG oslo_concurrency.processutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:28:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:28:13.668 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:28:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:28:13.668 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:28:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:28:13.669 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:28:13 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:28:13 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/11281694' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:28:13 compute-0 nova_compute[254092]: 2025-11-25 17:28:13.989 254096 DEBUG oslo_concurrency.processutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:28:14 compute-0 nova_compute[254092]: 2025-11-25 17:28:14.019 254096 DEBUG nova.storage.rbd_utils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] rbd image c983f16d-bf50-4a2c-aa05-213890fb387a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:28:14 compute-0 nova_compute[254092]: 2025-11-25 17:28:14.025 254096 DEBUG oslo_concurrency.processutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:28:14 compute-0 nova_compute[254092]: 2025-11-25 17:28:14.256 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:28:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:28:14 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3260961190' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:28:14 compute-0 nova_compute[254092]: 2025-11-25 17:28:14.462 254096 DEBUG oslo_concurrency.processutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:28:14 compute-0 nova_compute[254092]: 2025-11-25 17:28:14.464 254096 DEBUG nova.virt.libvirt.vif [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:28:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-934976337',display_name='tempest-TestSnapshotPattern-server-934976337',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-934976337',id=152,image_ref='eaaf0c90-8ebf-4675-ac5d-b65d63e9cd82',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPdD8/FH5EbDI3xVD9XL5KNVQ4Lzs3xHz1AUjnnFvQmxyC/VbF+auKCZxkKmIeFDv9Sg6Emei+wcjh3M01sOiN6AXBLXE6uzmUge5MlLrHdGnnVCH6sch0v/Lk4Ix4P1ww==',key_name='tempest-TestSnapshotPattern-1261516286',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d295e1cfcd234c4391fda20fc4264d70',ramdisk_id='',reservation_id='r-ryy3wl7e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='7d3f09ec-6bad-4674-ab8b-907560448ab0',image_min_disk='1',image_min_ram='0',image_owner_id='d295e1cfcd234c4391fda20fc4264d70',image_owner_project_name='tempest-TestSnapshotPattern-1072505445',image_owner_user_name='tempest-TestSnapshotPattern-1072505445-project-member',image_user_id='e6ab922303af4fc0a70862a72b3ea9c8',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1072505445',owner_user_name='tempest-TestSnapshotPattern-1072505445-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:28:08Z,user_data=None,user_id='e6ab922303af4fc0a70862a72b3ea9c8',uuid=c983f16d-bf50-4a2c-aa05-213890fb387a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "78592e85-0e7a-4c36-bf1d-981efc74361b", "address": "fa:16:3e:f0:b0:be", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78592e85-0e", "ovs_interfaceid": "78592e85-0e7a-4c36-bf1d-981efc74361b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 17:28:14 compute-0 nova_compute[254092]: 2025-11-25 17:28:14.465 254096 DEBUG nova.network.os_vif_util [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Converting VIF {"id": "78592e85-0e7a-4c36-bf1d-981efc74361b", "address": "fa:16:3e:f0:b0:be", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78592e85-0e", "ovs_interfaceid": "78592e85-0e7a-4c36-bf1d-981efc74361b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:28:14 compute-0 nova_compute[254092]: 2025-11-25 17:28:14.466 254096 DEBUG nova.network.os_vif_util [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f0:b0:be,bridge_name='br-int',has_traffic_filtering=True,id=78592e85-0e7a-4c36-bf1d-981efc74361b,network=Network(7a1a00fe-6b82-48c5-a534-9040cbe84499),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78592e85-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:28:14 compute-0 nova_compute[254092]: 2025-11-25 17:28:14.468 254096 DEBUG nova.objects.instance [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lazy-loading 'pci_devices' on Instance uuid c983f16d-bf50-4a2c-aa05-213890fb387a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:28:14 compute-0 nova_compute[254092]: 2025-11-25 17:28:14.486 254096 DEBUG nova.virt.libvirt.driver [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] End _get_guest_xml xml=<domain type="kvm">
Nov 25 17:28:14 compute-0 nova_compute[254092]:   <uuid>c983f16d-bf50-4a2c-aa05-213890fb387a</uuid>
Nov 25 17:28:14 compute-0 nova_compute[254092]:   <name>instance-00000098</name>
Nov 25 17:28:14 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 17:28:14 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 17:28:14 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:28:14 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:       <nova:name>tempest-TestSnapshotPattern-server-934976337</nova:name>
Nov 25 17:28:14 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 17:28:13</nova:creationTime>
Nov 25 17:28:14 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 17:28:14 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 17:28:14 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 17:28:14 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 17:28:14 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:28:14 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 17:28:14 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 17:28:14 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 17:28:14 compute-0 nova_compute[254092]:         <nova:user uuid="e6ab922303af4fc0a70862a72b3ea9c8">tempest-TestSnapshotPattern-1072505445-project-member</nova:user>
Nov 25 17:28:14 compute-0 nova_compute[254092]:         <nova:project uuid="d295e1cfcd234c4391fda20fc4264d70">tempest-TestSnapshotPattern-1072505445</nova:project>
Nov 25 17:28:14 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 17:28:14 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="eaaf0c90-8ebf-4675-ac5d-b65d63e9cd82"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:       <nova:ports>
Nov 25 17:28:14 compute-0 nova_compute[254092]:         <nova:port uuid="78592e85-0e7a-4c36-bf1d-981efc74361b">
Nov 25 17:28:14 compute-0 nova_compute[254092]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:         </nova:port>
Nov 25 17:28:14 compute-0 nova_compute[254092]:       </nova:ports>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 17:28:14 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:28:14 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 17:28:14 compute-0 nova_compute[254092]:     <system>
Nov 25 17:28:14 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 17:28:14 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 17:28:14 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:28:14 compute-0 nova_compute[254092]:       <entry name="serial">c983f16d-bf50-4a2c-aa05-213890fb387a</entry>
Nov 25 17:28:14 compute-0 nova_compute[254092]:       <entry name="uuid">c983f16d-bf50-4a2c-aa05-213890fb387a</entry>
Nov 25 17:28:14 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     </system>
Nov 25 17:28:14 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:28:14 compute-0 nova_compute[254092]:   <os>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:   </os>
Nov 25 17:28:14 compute-0 nova_compute[254092]:   <features>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:   </features>
Nov 25 17:28:14 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 17:28:14 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:28:14 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 17:28:14 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:28:14 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 17:28:14 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/c983f16d-bf50-4a2c-aa05-213890fb387a_disk">
Nov 25 17:28:14 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:       </source>
Nov 25 17:28:14 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:28:14 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:28:14 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 17:28:14 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/c983f16d-bf50-4a2c-aa05-213890fb387a_disk.config">
Nov 25 17:28:14 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:       </source>
Nov 25 17:28:14 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:28:14 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:28:14 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     <interface type="ethernet">
Nov 25 17:28:14 compute-0 nova_compute[254092]:       <mac address="fa:16:3e:f0:b0:be"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:       <mtu size="1442"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:       <target dev="tap78592e85-0e"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     </interface>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 17:28:14 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/c983f16d-bf50-4a2c-aa05-213890fb387a/console.log" append="off"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     <video>
Nov 25 17:28:14 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     </video>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     <input type="keyboard" bus="usb"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 17:28:14 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 17:28:14 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 17:28:14 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:28:14 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:28:14 compute-0 nova_compute[254092]: </domain>
Nov 25 17:28:14 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 17:28:14 compute-0 nova_compute[254092]: 2025-11-25 17:28:14.488 254096 DEBUG nova.compute.manager [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Preparing to wait for external event network-vif-plugged-78592e85-0e7a-4c36-bf1d-981efc74361b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 17:28:14 compute-0 nova_compute[254092]: 2025-11-25 17:28:14.489 254096 DEBUG oslo_concurrency.lockutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Acquiring lock "c983f16d-bf50-4a2c-aa05-213890fb387a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:28:14 compute-0 nova_compute[254092]: 2025-11-25 17:28:14.489 254096 DEBUG oslo_concurrency.lockutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "c983f16d-bf50-4a2c-aa05-213890fb387a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:28:14 compute-0 nova_compute[254092]: 2025-11-25 17:28:14.489 254096 DEBUG oslo_concurrency.lockutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "c983f16d-bf50-4a2c-aa05-213890fb387a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:28:14 compute-0 nova_compute[254092]: 2025-11-25 17:28:14.491 254096 DEBUG nova.virt.libvirt.vif [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T17:28:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-934976337',display_name='tempest-TestSnapshotPattern-server-934976337',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-934976337',id=152,image_ref='eaaf0c90-8ebf-4675-ac5d-b65d63e9cd82',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPdD8/FH5EbDI3xVD9XL5KNVQ4Lzs3xHz1AUjnnFvQmxyC/VbF+auKCZxkKmIeFDv9Sg6Emei+wcjh3M01sOiN6AXBLXE6uzmUge5MlLrHdGnnVCH6sch0v/Lk4Ix4P1ww==',key_name='tempest-TestSnapshotPattern-1261516286',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d295e1cfcd234c4391fda20fc4264d70',ramdisk_id='',reservation_id='r-ryy3wl7e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='7d3f09ec-6bad-4674-ab8b-907560448ab0',image_min_disk='1',image_min_ram='0',image_owner_id='d295e1cfcd234c4391fda20fc4264d70',image_owner_project_name='tempest-TestSnapshotPattern-1072505445',image_owner_user_name='tempest-TestSnapshotPattern-1072505445-project-member',image_user_id='e6ab922303af4fc0a70862a72b3ea9c8',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1072505445',owner_user_name='tempest-TestSnapshotPattern-1072505445-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T17:28:08Z,user_data=None,user_id='e6ab922303af4fc0a70862a72b3ea9c8',uuid=c983f16d-bf50-4a2c-aa05-213890fb387a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "78592e85-0e7a-4c36-bf1d-981efc74361b", "address": "fa:16:3e:f0:b0:be", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78592e85-0e", "ovs_interfaceid": "78592e85-0e7a-4c36-bf1d-981efc74361b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 17:28:14 compute-0 nova_compute[254092]: 2025-11-25 17:28:14.491 254096 DEBUG nova.network.os_vif_util [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Converting VIF {"id": "78592e85-0e7a-4c36-bf1d-981efc74361b", "address": "fa:16:3e:f0:b0:be", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78592e85-0e", "ovs_interfaceid": "78592e85-0e7a-4c36-bf1d-981efc74361b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:28:14 compute-0 nova_compute[254092]: 2025-11-25 17:28:14.492 254096 DEBUG nova.network.os_vif_util [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f0:b0:be,bridge_name='br-int',has_traffic_filtering=True,id=78592e85-0e7a-4c36-bf1d-981efc74361b,network=Network(7a1a00fe-6b82-48c5-a534-9040cbe84499),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78592e85-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:28:14 compute-0 nova_compute[254092]: 2025-11-25 17:28:14.492 254096 DEBUG os_vif [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:b0:be,bridge_name='br-int',has_traffic_filtering=True,id=78592e85-0e7a-4c36-bf1d-981efc74361b,network=Network(7a1a00fe-6b82-48c5-a534-9040cbe84499),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78592e85-0e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 17:28:14 compute-0 nova_compute[254092]: 2025-11-25 17:28:14.493 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:28:14 compute-0 nova_compute[254092]: 2025-11-25 17:28:14.494 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:28:14 compute-0 nova_compute[254092]: 2025-11-25 17:28:14.494 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:28:14 compute-0 nova_compute[254092]: 2025-11-25 17:28:14.500 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:28:14 compute-0 nova_compute[254092]: 2025-11-25 17:28:14.500 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap78592e85-0e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:28:14 compute-0 nova_compute[254092]: 2025-11-25 17:28:14.501 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap78592e85-0e, col_values=(('external_ids', {'iface-id': '78592e85-0e7a-4c36-bf1d-981efc74361b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f0:b0:be', 'vm-uuid': 'c983f16d-bf50-4a2c-aa05-213890fb387a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:28:14 compute-0 nova_compute[254092]: 2025-11-25 17:28:14.503 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:28:14 compute-0 NetworkManager[48891]: <info>  [1764091694.5045] manager: (tap78592e85-0e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/678)
Nov 25 17:28:14 compute-0 nova_compute[254092]: 2025-11-25 17:28:14.507 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:28:14 compute-0 nova_compute[254092]: 2025-11-25 17:28:14.514 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:28:14 compute-0 nova_compute[254092]: 2025-11-25 17:28:14.516 254096 INFO os_vif [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:b0:be,bridge_name='br-int',has_traffic_filtering=True,id=78592e85-0e7a-4c36-bf1d-981efc74361b,network=Network(7a1a00fe-6b82-48c5-a534-9040cbe84499),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78592e85-0e')
Nov 25 17:28:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:28:14 compute-0 nova_compute[254092]: 2025-11-25 17:28:14.578 254096 DEBUG nova.virt.libvirt.driver [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:28:14 compute-0 nova_compute[254092]: 2025-11-25 17:28:14.578 254096 DEBUG nova.virt.libvirt.driver [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:28:14 compute-0 nova_compute[254092]: 2025-11-25 17:28:14.579 254096 DEBUG nova.virt.libvirt.driver [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] No VIF found with MAC fa:16:3e:f0:b0:be, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 17:28:14 compute-0 nova_compute[254092]: 2025-11-25 17:28:14.579 254096 INFO nova.virt.libvirt.driver [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Using config drive
Nov 25 17:28:14 compute-0 ceph-mon[74985]: pgmap v3144: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 466 KiB/s wr, 76 op/s
Nov 25 17:28:14 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/11281694' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:28:14 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3260961190' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:28:14 compute-0 nova_compute[254092]: 2025-11-25 17:28:14.600 254096 DEBUG nova.storage.rbd_utils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] rbd image c983f16d-bf50-4a2c-aa05-213890fb387a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:28:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3145: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 8.2 KiB/s wr, 52 op/s
Nov 25 17:28:15 compute-0 nova_compute[254092]: 2025-11-25 17:28:15.589 254096 INFO nova.virt.libvirt.driver [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Creating config drive at /var/lib/nova/instances/c983f16d-bf50-4a2c-aa05-213890fb387a/disk.config
Nov 25 17:28:15 compute-0 nova_compute[254092]: 2025-11-25 17:28:15.596 254096 DEBUG oslo_concurrency.processutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c983f16d-bf50-4a2c-aa05-213890fb387a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm3_i7t41 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:28:15 compute-0 nova_compute[254092]: 2025-11-25 17:28:15.746 254096 DEBUG oslo_concurrency.processutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c983f16d-bf50-4a2c-aa05-213890fb387a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm3_i7t41" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:28:15 compute-0 nova_compute[254092]: 2025-11-25 17:28:15.780 254096 DEBUG nova.storage.rbd_utils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] rbd image c983f16d-bf50-4a2c-aa05-213890fb387a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:28:15 compute-0 nova_compute[254092]: 2025-11-25 17:28:15.784 254096 DEBUG oslo_concurrency.processutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c983f16d-bf50-4a2c-aa05-213890fb387a/disk.config c983f16d-bf50-4a2c-aa05-213890fb387a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:28:15 compute-0 nova_compute[254092]: 2025-11-25 17:28:15.979 254096 DEBUG oslo_concurrency.processutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c983f16d-bf50-4a2c-aa05-213890fb387a/disk.config c983f16d-bf50-4a2c-aa05-213890fb387a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.195s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:28:15 compute-0 nova_compute[254092]: 2025-11-25 17:28:15.981 254096 INFO nova.virt.libvirt.driver [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Deleting local config drive /var/lib/nova/instances/c983f16d-bf50-4a2c-aa05-213890fb387a/disk.config because it was imported into RBD.
Nov 25 17:28:16 compute-0 kernel: tap78592e85-0e: entered promiscuous mode
Nov 25 17:28:16 compute-0 NetworkManager[48891]: <info>  [1764091696.0799] manager: (tap78592e85-0e): new Tun device (/org/freedesktop/NetworkManager/Devices/679)
Nov 25 17:28:16 compute-0 nova_compute[254092]: 2025-11-25 17:28:16.082 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:28:16 compute-0 ovn_controller[153477]: 2025-11-25T17:28:16Z|01625|binding|INFO|Claiming lport 78592e85-0e7a-4c36-bf1d-981efc74361b for this chassis.
Nov 25 17:28:16 compute-0 ovn_controller[153477]: 2025-11-25T17:28:16Z|01626|binding|INFO|78592e85-0e7a-4c36-bf1d-981efc74361b: Claiming fa:16:3e:f0:b0:be 10.100.0.12
Nov 25 17:28:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:28:16.097 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:b0:be 10.100.0.12'], port_security=['fa:16:3e:f0:b0:be 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'c983f16d-bf50-4a2c-aa05-213890fb387a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7a1a00fe-6b82-48c5-a534-9040cbe84499', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd295e1cfcd234c4391fda20fc4264d70', 'neutron:revision_number': '2', 'neutron:security_group_ids': '549c359f-07b7-49ee-bcd4-0d9c2b40c1ca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e580454e-cbcb-4071-803c-15d1a2e87179, chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=78592e85-0e7a-4c36-bf1d-981efc74361b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:28:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:28:16.098 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 78592e85-0e7a-4c36-bf1d-981efc74361b in datapath 7a1a00fe-6b82-48c5-a534-9040cbe84499 bound to our chassis
Nov 25 17:28:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:28:16.099 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7a1a00fe-6b82-48c5-a534-9040cbe84499
Nov 25 17:28:16 compute-0 ovn_controller[153477]: 2025-11-25T17:28:16Z|01627|binding|INFO|Setting lport 78592e85-0e7a-4c36-bf1d-981efc74361b ovn-installed in OVS
Nov 25 17:28:16 compute-0 ovn_controller[153477]: 2025-11-25T17:28:16Z|01628|binding|INFO|Setting lport 78592e85-0e7a-4c36-bf1d-981efc74361b up in Southbound
Nov 25 17:28:16 compute-0 nova_compute[254092]: 2025-11-25 17:28:16.117 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:28:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:28:16.125 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9b59d864-c7af-4171-b5b1-25fd3dc6948a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:28:16 compute-0 systemd-machined[216343]: New machine qemu-186-instance-00000098.
Nov 25 17:28:16 compute-0 systemd-udevd[430156]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 17:28:16 compute-0 NetworkManager[48891]: <info>  [1764091696.1541] device (tap78592e85-0e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 17:28:16 compute-0 systemd[1]: Started Virtual Machine qemu-186-instance-00000098.
Nov 25 17:28:16 compute-0 NetworkManager[48891]: <info>  [1764091696.1553] device (tap78592e85-0e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 17:28:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:28:16.186 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[f4b4023f-b4f1-47ce-87d4-92ad1de34b72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:28:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:28:16.192 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[293519c1-e9b3-4927-bb97-b69a3693411f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:28:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:28:16.242 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[6f668f3f-8035-4190-b122-493aa3721fcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:28:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:28:16.271 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[bf757338-6a79-4d5d-b157-1be6792c71bb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7a1a00fe-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:48:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 471], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 810772, 'reachable_time': 37101, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 430169, 'error': None, 'target': 'ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:28:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:28:16.294 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d282adc2-3e2e-42b6-b701-0e59e1e04421]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7a1a00fe-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 810787, 'tstamp': 810787}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 430170, 'error': None, 'target': 'ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7a1a00fe-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 810790, 'tstamp': 810790}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 430170, 'error': None, 'target': 'ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:28:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:28:16.297 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7a1a00fe-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:28:16 compute-0 nova_compute[254092]: 2025-11-25 17:28:16.299 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:28:16 compute-0 nova_compute[254092]: 2025-11-25 17:28:16.301 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:28:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:28:16.303 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7a1a00fe-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:28:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:28:16.303 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:28:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:28:16.304 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7a1a00fe-60, col_values=(('external_ids', {'iface-id': '0a114fd0-0e8c-4ae1-8b45-56c99d3c790e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:28:16 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:28:16.304 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:28:16 compute-0 nova_compute[254092]: 2025-11-25 17:28:16.559 254096 DEBUG nova.compute.manager [req-139c4062-2985-4f7c-a147-2b65e9dd2fad req-9c909f95-fa93-45e2-9a96-7bc3e3ca5fab a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Received event network-vif-plugged-78592e85-0e7a-4c36-bf1d-981efc74361b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:28:16 compute-0 nova_compute[254092]: 2025-11-25 17:28:16.560 254096 DEBUG oslo_concurrency.lockutils [req-139c4062-2985-4f7c-a147-2b65e9dd2fad req-9c909f95-fa93-45e2-9a96-7bc3e3ca5fab a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "c983f16d-bf50-4a2c-aa05-213890fb387a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:28:16 compute-0 nova_compute[254092]: 2025-11-25 17:28:16.561 254096 DEBUG oslo_concurrency.lockutils [req-139c4062-2985-4f7c-a147-2b65e9dd2fad req-9c909f95-fa93-45e2-9a96-7bc3e3ca5fab a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "c983f16d-bf50-4a2c-aa05-213890fb387a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:28:16 compute-0 nova_compute[254092]: 2025-11-25 17:28:16.561 254096 DEBUG oslo_concurrency.lockutils [req-139c4062-2985-4f7c-a147-2b65e9dd2fad req-9c909f95-fa93-45e2-9a96-7bc3e3ca5fab a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "c983f16d-bf50-4a2c-aa05-213890fb387a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:28:16 compute-0 nova_compute[254092]: 2025-11-25 17:28:16.562 254096 DEBUG nova.compute.manager [req-139c4062-2985-4f7c-a147-2b65e9dd2fad req-9c909f95-fa93-45e2-9a96-7bc3e3ca5fab a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Processing event network-vif-plugged-78592e85-0e7a-4c36-bf1d-981efc74361b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 17:28:16 compute-0 ceph-mon[74985]: pgmap v3145: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 8.2 KiB/s wr, 52 op/s
Nov 25 17:28:16 compute-0 nova_compute[254092]: 2025-11-25 17:28:16.682 254096 DEBUG nova.network.neutron [req-a242f40f-ba88-4da6-bc3e-fdd4f465f4ab req-8dad7072-9f00-4fee-918f-33cc418a994c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Updated VIF entry in instance network info cache for port 78592e85-0e7a-4c36-bf1d-981efc74361b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:28:16 compute-0 nova_compute[254092]: 2025-11-25 17:28:16.683 254096 DEBUG nova.network.neutron [req-a242f40f-ba88-4da6-bc3e-fdd4f465f4ab req-8dad7072-9f00-4fee-918f-33cc418a994c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Updating instance_info_cache with network_info: [{"id": "78592e85-0e7a-4c36-bf1d-981efc74361b", "address": "fa:16:3e:f0:b0:be", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78592e85-0e", "ovs_interfaceid": "78592e85-0e7a-4c36-bf1d-981efc74361b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:28:16 compute-0 nova_compute[254092]: 2025-11-25 17:28:16.696 254096 DEBUG oslo_concurrency.lockutils [req-a242f40f-ba88-4da6-bc3e-fdd4f465f4ab req-8dad7072-9f00-4fee-918f-33cc418a994c a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-c983f16d-bf50-4a2c-aa05-213890fb387a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:28:17 compute-0 nova_compute[254092]: 2025-11-25 17:28:17.146 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:28:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3146: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.1 KiB/s wr, 32 op/s
Nov 25 17:28:17 compute-0 nova_compute[254092]: 2025-11-25 17:28:17.313 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091697.3121057, c983f16d-bf50-4a2c-aa05-213890fb387a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:28:17 compute-0 nova_compute[254092]: 2025-11-25 17:28:17.314 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] VM Started (Lifecycle Event)
Nov 25 17:28:17 compute-0 nova_compute[254092]: 2025-11-25 17:28:17.317 254096 DEBUG nova.compute.manager [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 17:28:17 compute-0 nova_compute[254092]: 2025-11-25 17:28:17.322 254096 DEBUG nova.virt.libvirt.driver [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 17:28:17 compute-0 nova_compute[254092]: 2025-11-25 17:28:17.326 254096 INFO nova.virt.libvirt.driver [-] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Instance spawned successfully.
Nov 25 17:28:17 compute-0 nova_compute[254092]: 2025-11-25 17:28:17.326 254096 INFO nova.compute.manager [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Took 8.79 seconds to spawn the instance on the hypervisor.
Nov 25 17:28:17 compute-0 nova_compute[254092]: 2025-11-25 17:28:17.327 254096 DEBUG nova.compute.manager [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:28:17 compute-0 nova_compute[254092]: 2025-11-25 17:28:17.335 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:28:17 compute-0 nova_compute[254092]: 2025-11-25 17:28:17.338 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:28:17 compute-0 nova_compute[254092]: 2025-11-25 17:28:17.361 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:28:17 compute-0 nova_compute[254092]: 2025-11-25 17:28:17.361 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091697.316541, c983f16d-bf50-4a2c-aa05-213890fb387a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:28:17 compute-0 nova_compute[254092]: 2025-11-25 17:28:17.361 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] VM Paused (Lifecycle Event)
Nov 25 17:28:17 compute-0 nova_compute[254092]: 2025-11-25 17:28:17.401 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:28:17 compute-0 nova_compute[254092]: 2025-11-25 17:28:17.413 254096 INFO nova.compute.manager [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Took 9.93 seconds to build instance.
Nov 25 17:28:17 compute-0 nova_compute[254092]: 2025-11-25 17:28:17.417 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091697.3210852, c983f16d-bf50-4a2c-aa05-213890fb387a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:28:17 compute-0 nova_compute[254092]: 2025-11-25 17:28:17.417 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] VM Resumed (Lifecycle Event)
Nov 25 17:28:17 compute-0 nova_compute[254092]: 2025-11-25 17:28:17.438 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:28:17 compute-0 nova_compute[254092]: 2025-11-25 17:28:17.441 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:28:17 compute-0 nova_compute[254092]: 2025-11-25 17:28:17.443 254096 DEBUG oslo_concurrency.lockutils [None req-bdb97a25-5b7c-4aff-8e6f-287c98749353 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "c983f16d-bf50-4a2c-aa05-213890fb387a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.021s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:28:18 compute-0 ceph-mon[74985]: pgmap v3146: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.1 KiB/s wr, 32 op/s
Nov 25 17:28:18 compute-0 nova_compute[254092]: 2025-11-25 17:28:18.622 254096 DEBUG nova.compute.manager [req-6c4ba88f-3b37-4ba4-a105-7b6177d7b057 req-fa18cd2f-9528-4e8c-92da-6a83a0ed15fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Received event network-vif-plugged-78592e85-0e7a-4c36-bf1d-981efc74361b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:28:18 compute-0 nova_compute[254092]: 2025-11-25 17:28:18.623 254096 DEBUG oslo_concurrency.lockutils [req-6c4ba88f-3b37-4ba4-a105-7b6177d7b057 req-fa18cd2f-9528-4e8c-92da-6a83a0ed15fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "c983f16d-bf50-4a2c-aa05-213890fb387a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:28:18 compute-0 nova_compute[254092]: 2025-11-25 17:28:18.623 254096 DEBUG oslo_concurrency.lockutils [req-6c4ba88f-3b37-4ba4-a105-7b6177d7b057 req-fa18cd2f-9528-4e8c-92da-6a83a0ed15fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "c983f16d-bf50-4a2c-aa05-213890fb387a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:28:18 compute-0 nova_compute[254092]: 2025-11-25 17:28:18.624 254096 DEBUG oslo_concurrency.lockutils [req-6c4ba88f-3b37-4ba4-a105-7b6177d7b057 req-fa18cd2f-9528-4e8c-92da-6a83a0ed15fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "c983f16d-bf50-4a2c-aa05-213890fb387a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:28:18 compute-0 nova_compute[254092]: 2025-11-25 17:28:18.624 254096 DEBUG nova.compute.manager [req-6c4ba88f-3b37-4ba4-a105-7b6177d7b057 req-fa18cd2f-9528-4e8c-92da-6a83a0ed15fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] No waiting events found dispatching network-vif-plugged-78592e85-0e7a-4c36-bf1d-981efc74361b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:28:18 compute-0 nova_compute[254092]: 2025-11-25 17:28:18.625 254096 WARNING nova.compute.manager [req-6c4ba88f-3b37-4ba4-a105-7b6177d7b057 req-fa18cd2f-9528-4e8c-92da-6a83a0ed15fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Received unexpected event network-vif-plugged-78592e85-0e7a-4c36-bf1d-981efc74361b for instance with vm_state active and task_state None.
Nov 25 17:28:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3147: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.1 KiB/s wr, 32 op/s
Nov 25 17:28:19 compute-0 nova_compute[254092]: 2025-11-25 17:28:19.503 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:28:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:28:20 compute-0 ceph-mon[74985]: pgmap v3147: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.1 KiB/s wr, 32 op/s
Nov 25 17:28:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3148: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 14 KiB/s wr, 106 op/s
Nov 25 17:28:21 compute-0 podman[430213]: 2025-11-25 17:28:21.672100287 +0000 UTC m=+0.079790423 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 17:28:21 compute-0 podman[430214]: 2025-11-25 17:28:21.673466024 +0000 UTC m=+0.075276060 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 17:28:21 compute-0 podman[430215]: 2025-11-25 17:28:21.703958834 +0000 UTC m=+0.101518034 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118)
Nov 25 17:28:22 compute-0 nova_compute[254092]: 2025-11-25 17:28:22.148 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:28:22 compute-0 ceph-mon[74985]: pgmap v3148: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 14 KiB/s wr, 106 op/s
Nov 25 17:28:23 compute-0 nova_compute[254092]: 2025-11-25 17:28:23.112 254096 DEBUG nova.compute.manager [req-5e337862-7c18-48f2-9df4-246c9bcf413b req-8046358a-9781-4f8c-b9c8-6d2391fa12fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Received event network-changed-78592e85-0e7a-4c36-bf1d-981efc74361b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:28:23 compute-0 nova_compute[254092]: 2025-11-25 17:28:23.112 254096 DEBUG nova.compute.manager [req-5e337862-7c18-48f2-9df4-246c9bcf413b req-8046358a-9781-4f8c-b9c8-6d2391fa12fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Refreshing instance network info cache due to event network-changed-78592e85-0e7a-4c36-bf1d-981efc74361b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:28:23 compute-0 nova_compute[254092]: 2025-11-25 17:28:23.113 254096 DEBUG oslo_concurrency.lockutils [req-5e337862-7c18-48f2-9df4-246c9bcf413b req-8046358a-9781-4f8c-b9c8-6d2391fa12fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-c983f16d-bf50-4a2c-aa05-213890fb387a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:28:23 compute-0 nova_compute[254092]: 2025-11-25 17:28:23.113 254096 DEBUG oslo_concurrency.lockutils [req-5e337862-7c18-48f2-9df4-246c9bcf413b req-8046358a-9781-4f8c-b9c8-6d2391fa12fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-c983f16d-bf50-4a2c-aa05-213890fb387a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:28:23 compute-0 nova_compute[254092]: 2025-11-25 17:28:23.113 254096 DEBUG nova.network.neutron [req-5e337862-7c18-48f2-9df4-246c9bcf413b req-8046358a-9781-4f8c-b9c8-6d2391fa12fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Refreshing network info cache for port 78592e85-0e7a-4c36-bf1d-981efc74361b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:28:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3149: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Nov 25 17:28:23 compute-0 ceph-mon[74985]: pgmap v3149: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Nov 25 17:28:24 compute-0 nova_compute[254092]: 2025-11-25 17:28:24.253 254096 DEBUG nova.network.neutron [req-5e337862-7c18-48f2-9df4-246c9bcf413b req-8046358a-9781-4f8c-b9c8-6d2391fa12fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Updated VIF entry in instance network info cache for port 78592e85-0e7a-4c36-bf1d-981efc74361b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:28:24 compute-0 nova_compute[254092]: 2025-11-25 17:28:24.254 254096 DEBUG nova.network.neutron [req-5e337862-7c18-48f2-9df4-246c9bcf413b req-8046358a-9781-4f8c-b9c8-6d2391fa12fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Updating instance_info_cache with network_info: [{"id": "78592e85-0e7a-4c36-bf1d-981efc74361b", "address": "fa:16:3e:f0:b0:be", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78592e85-0e", "ovs_interfaceid": "78592e85-0e7a-4c36-bf1d-981efc74361b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:28:24 compute-0 nova_compute[254092]: 2025-11-25 17:28:24.273 254096 DEBUG oslo_concurrency.lockutils [req-5e337862-7c18-48f2-9df4-246c9bcf413b req-8046358a-9781-4f8c-b9c8-6d2391fa12fc a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-c983f16d-bf50-4a2c-aa05-213890fb387a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:28:24 compute-0 nova_compute[254092]: 2025-11-25 17:28:24.509 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:28:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:28:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3150: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 78 op/s
Nov 25 17:28:26 compute-0 ceph-mon[74985]: pgmap v3150: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 78 op/s
Nov 25 17:28:27 compute-0 nova_compute[254092]: 2025-11-25 17:28:27.150 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:28:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3151: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 78 op/s
Nov 25 17:28:28 compute-0 ceph-mon[74985]: pgmap v3151: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 78 op/s
Nov 25 17:28:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3152: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 78 op/s
Nov 25 17:28:29 compute-0 nova_compute[254092]: 2025-11-25 17:28:29.512 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:28:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:28:29 compute-0 ovn_controller[153477]: 2025-11-25T17:28:29Z|00200|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.14 does not match offer 10.100.0.12
Nov 25 17:28:29 compute-0 ovn_controller[153477]: 2025-11-25T17:28:29Z|00201|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:f0:b0:be 10.100.0.12
Nov 25 17:28:30 compute-0 ceph-mon[74985]: pgmap v3152: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 78 op/s
Nov 25 17:28:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3153: 321 pgs: 321 active+clean; 214 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 512 KiB/s wr, 130 op/s
Nov 25 17:28:32 compute-0 nova_compute[254092]: 2025-11-25 17:28:32.153 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:28:32 compute-0 ceph-mon[74985]: pgmap v3153: 321 pgs: 321 active+clean; 214 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 512 KiB/s wr, 130 op/s
Nov 25 17:28:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3154: 321 pgs: 321 active+clean; 214 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 500 KiB/s wr, 53 op/s
Nov 25 17:28:34 compute-0 ovn_controller[153477]: 2025-11-25T17:28:34Z|00202|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.14 does not match offer 10.100.0.12
Nov 25 17:28:34 compute-0 ovn_controller[153477]: 2025-11-25T17:28:34Z|00203|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:f0:b0:be 10.100.0.12
Nov 25 17:28:34 compute-0 ceph-mon[74985]: pgmap v3154: 321 pgs: 321 active+clean; 214 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 500 KiB/s wr, 53 op/s
Nov 25 17:28:34 compute-0 nova_compute[254092]: 2025-11-25 17:28:34.516 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:28:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:28:34 compute-0 ovn_controller[153477]: 2025-11-25T17:28:34Z|00204|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f0:b0:be 10.100.0.12
Nov 25 17:28:34 compute-0 ovn_controller[153477]: 2025-11-25T17:28:34Z|00205|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f0:b0:be 10.100.0.12
Nov 25 17:28:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3155: 321 pgs: 321 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 536 KiB/s wr, 54 op/s
Nov 25 17:28:36 compute-0 ceph-mon[74985]: pgmap v3155: 321 pgs: 321 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 536 KiB/s wr, 54 op/s
Nov 25 17:28:37 compute-0 nova_compute[254092]: 2025-11-25 17:28:37.156 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:28:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3156: 321 pgs: 321 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 546 KiB/s wr, 54 op/s
Nov 25 17:28:38 compute-0 ceph-mon[74985]: pgmap v3156: 321 pgs: 321 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 546 KiB/s wr, 54 op/s
Nov 25 17:28:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3157: 321 pgs: 321 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 546 KiB/s wr, 54 op/s
Nov 25 17:28:39 compute-0 nova_compute[254092]: 2025-11-25 17:28:39.530 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:28:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:28:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:28:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:28:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:28:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:28:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:28:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:28:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:28:40
Nov 25 17:28:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:28:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:28:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['images', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', '.rgw.root', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'volumes']
Nov 25 17:28:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:28:40 compute-0 ceph-mon[74985]: pgmap v3157: 321 pgs: 321 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 546 KiB/s wr, 54 op/s
Nov 25 17:28:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:28:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:28:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:28:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:28:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:28:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:28:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:28:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:28:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:28:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:28:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3158: 321 pgs: 321 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 546 KiB/s wr, 54 op/s
Nov 25 17:28:42 compute-0 nova_compute[254092]: 2025-11-25 17:28:42.159 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:28:42 compute-0 ceph-mon[74985]: pgmap v3158: 321 pgs: 321 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 546 KiB/s wr, 54 op/s
Nov 25 17:28:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3159: 321 pgs: 321 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 47 KiB/s wr, 2 op/s
Nov 25 17:28:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:28:44 compute-0 ceph-mon[74985]: pgmap v3159: 321 pgs: 321 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 47 KiB/s wr, 2 op/s
Nov 25 17:28:44 compute-0 nova_compute[254092]: 2025-11-25 17:28:44.580 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:28:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3160: 321 pgs: 321 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 47 KiB/s wr, 2 op/s
Nov 25 17:28:46 compute-0 sudo[430276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:28:46 compute-0 sudo[430276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:28:46 compute-0 sudo[430276]: pam_unix(sudo:session): session closed for user root
Nov 25 17:28:46 compute-0 sudo[430301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:28:46 compute-0 sudo[430301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:28:46 compute-0 sudo[430301]: pam_unix(sudo:session): session closed for user root
Nov 25 17:28:46 compute-0 sudo[430326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:28:46 compute-0 sudo[430326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:28:46 compute-0 sudo[430326]: pam_unix(sudo:session): session closed for user root
Nov 25 17:28:46 compute-0 sudo[430351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:28:46 compute-0 sudo[430351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:28:46 compute-0 ceph-mon[74985]: pgmap v3160: 321 pgs: 321 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 47 KiB/s wr, 2 op/s
Nov 25 17:28:46 compute-0 sudo[430351]: pam_unix(sudo:session): session closed for user root
Nov 25 17:28:47 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 17:28:47 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 17:28:47 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:28:47 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:28:47 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:28:47 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:28:47 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:28:47 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:28:47 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev be3a758c-5b78-4165-95ac-ac25709a50f8 does not exist
Nov 25 17:28:47 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 3748cdcf-4796-4dfa-8139-8556d5c5b7e4 does not exist
Nov 25 17:28:47 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev bdfbd9ab-5fae-4a39-854b-6d874168acaa does not exist
Nov 25 17:28:47 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:28:47 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:28:47 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:28:47 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:28:47 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:28:47 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:28:47 compute-0 sudo[430407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:28:47 compute-0 sudo[430407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:28:47 compute-0 sudo[430407]: pam_unix(sudo:session): session closed for user root
Nov 25 17:28:47 compute-0 nova_compute[254092]: 2025-11-25 17:28:47.161 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:28:47 compute-0 sudo[430432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:28:47 compute-0 sudo[430432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:28:47 compute-0 sudo[430432]: pam_unix(sudo:session): session closed for user root
Nov 25 17:28:47 compute-0 sudo[430457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:28:47 compute-0 sudo[430457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:28:47 compute-0 sudo[430457]: pam_unix(sudo:session): session closed for user root
Nov 25 17:28:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3161: 321 pgs: 321 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 767 B/s rd, 14 KiB/s wr, 1 op/s
Nov 25 17:28:47 compute-0 sudo[430482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:28:47 compute-0 sudo[430482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:28:47 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 17:28:47 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:28:47 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:28:47 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:28:47 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:28:47 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:28:47 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:28:47 compute-0 podman[430547]: 2025-11-25 17:28:47.880366447 +0000 UTC m=+0.086057664 container create e3c9bf9c51df3f9b8957c03a6f3531b17a85c454eafbe1ae90f2adfcd5cf7772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 17:28:47 compute-0 podman[430547]: 2025-11-25 17:28:47.844274414 +0000 UTC m=+0.049965681 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:28:47 compute-0 systemd[1]: Started libpod-conmon-e3c9bf9c51df3f9b8957c03a6f3531b17a85c454eafbe1ae90f2adfcd5cf7772.scope.
Nov 25 17:28:48 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:28:48 compute-0 podman[430547]: 2025-11-25 17:28:48.035763996 +0000 UTC m=+0.241455203 container init e3c9bf9c51df3f9b8957c03a6f3531b17a85c454eafbe1ae90f2adfcd5cf7772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 17:28:48 compute-0 podman[430547]: 2025-11-25 17:28:48.050758565 +0000 UTC m=+0.256449792 container start e3c9bf9c51df3f9b8957c03a6f3531b17a85c454eafbe1ae90f2adfcd5cf7772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_murdock, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 17:28:48 compute-0 crazy_murdock[430564]: 167 167
Nov 25 17:28:48 compute-0 systemd[1]: libpod-e3c9bf9c51df3f9b8957c03a6f3531b17a85c454eafbe1ae90f2adfcd5cf7772.scope: Deactivated successfully.
Nov 25 17:28:48 compute-0 conmon[430564]: conmon e3c9bf9c51df3f9b8957 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e3c9bf9c51df3f9b8957c03a6f3531b17a85c454eafbe1ae90f2adfcd5cf7772.scope/container/memory.events
Nov 25 17:28:48 compute-0 podman[430547]: 2025-11-25 17:28:48.077297737 +0000 UTC m=+0.282989014 container attach e3c9bf9c51df3f9b8957c03a6f3531b17a85c454eafbe1ae90f2adfcd5cf7772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_murdock, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:28:48 compute-0 podman[430547]: 2025-11-25 17:28:48.079450396 +0000 UTC m=+0.285141633 container died e3c9bf9c51df3f9b8957c03a6f3531b17a85c454eafbe1ae90f2adfcd5cf7772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_murdock, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:28:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a1ab75f076945080c0416fa3013f445bf747952b6c47798cda6845fa3ccfd57-merged.mount: Deactivated successfully.
Nov 25 17:28:48 compute-0 podman[430547]: 2025-11-25 17:28:48.148041893 +0000 UTC m=+0.353733120 container remove e3c9bf9c51df3f9b8957c03a6f3531b17a85c454eafbe1ae90f2adfcd5cf7772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_murdock, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:28:48 compute-0 systemd[1]: libpod-conmon-e3c9bf9c51df3f9b8957c03a6f3531b17a85c454eafbe1ae90f2adfcd5cf7772.scope: Deactivated successfully.
Nov 25 17:28:48 compute-0 podman[430589]: 2025-11-25 17:28:48.39592879 +0000 UTC m=+0.058097283 container create 239d2793c9e00f969ba11bfdcc6bd394475010bbd5e9e9106071b280d1f9a368 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_turing, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 17:28:48 compute-0 systemd[1]: Started libpod-conmon-239d2793c9e00f969ba11bfdcc6bd394475010bbd5e9e9106071b280d1f9a368.scope.
Nov 25 17:28:48 compute-0 podman[430589]: 2025-11-25 17:28:48.374582559 +0000 UTC m=+0.036751082 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:28:48 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:28:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5e5fe7cdab272bc2e901ed6938160f29c32757e551d3de3c856c61732c6db9c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:28:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5e5fe7cdab272bc2e901ed6938160f29c32757e551d3de3c856c61732c6db9c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:28:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5e5fe7cdab272bc2e901ed6938160f29c32757e551d3de3c856c61732c6db9c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:28:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5e5fe7cdab272bc2e901ed6938160f29c32757e551d3de3c856c61732c6db9c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:28:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5e5fe7cdab272bc2e901ed6938160f29c32757e551d3de3c856c61732c6db9c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:28:48 compute-0 podman[430589]: 2025-11-25 17:28:48.506901971 +0000 UTC m=+0.169070534 container init 239d2793c9e00f969ba11bfdcc6bd394475010bbd5e9e9106071b280d1f9a368 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_turing, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 17:28:48 compute-0 podman[430589]: 2025-11-25 17:28:48.518782614 +0000 UTC m=+0.180951137 container start 239d2793c9e00f969ba11bfdcc6bd394475010bbd5e9e9106071b280d1f9a368 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 17:28:48 compute-0 podman[430589]: 2025-11-25 17:28:48.524271704 +0000 UTC m=+0.186440237 container attach 239d2793c9e00f969ba11bfdcc6bd394475010bbd5e9e9106071b280d1f9a368 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:28:48 compute-0 ceph-mon[74985]: pgmap v3161: 321 pgs: 321 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 767 B/s rd, 14 KiB/s wr, 1 op/s
Nov 25 17:28:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3162: 321 pgs: 321 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 3.3 KiB/s wr, 0 op/s
Nov 25 17:28:49 compute-0 nova_compute[254092]: 2025-11-25 17:28:49.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:28:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:28:49 compute-0 nova_compute[254092]: 2025-11-25 17:28:49.583 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:28:49 compute-0 interesting_turing[430606]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:28:49 compute-0 interesting_turing[430606]: --> relative data size: 1.0
Nov 25 17:28:49 compute-0 interesting_turing[430606]: --> All data devices are unavailable
Nov 25 17:28:49 compute-0 systemd[1]: libpod-239d2793c9e00f969ba11bfdcc6bd394475010bbd5e9e9106071b280d1f9a368.scope: Deactivated successfully.
Nov 25 17:28:49 compute-0 systemd[1]: libpod-239d2793c9e00f969ba11bfdcc6bd394475010bbd5e9e9106071b280d1f9a368.scope: Consumed 1.150s CPU time.
Nov 25 17:28:49 compute-0 podman[430589]: 2025-11-25 17:28:49.735205025 +0000 UTC m=+1.397373548 container died 239d2793c9e00f969ba11bfdcc6bd394475010bbd5e9e9106071b280d1f9a368 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 17:28:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5e5fe7cdab272bc2e901ed6938160f29c32757e551d3de3c856c61732c6db9c-merged.mount: Deactivated successfully.
Nov 25 17:28:49 compute-0 podman[430589]: 2025-11-25 17:28:49.803533834 +0000 UTC m=+1.465702327 container remove 239d2793c9e00f969ba11bfdcc6bd394475010bbd5e9e9106071b280d1f9a368 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 17:28:49 compute-0 systemd[1]: libpod-conmon-239d2793c9e00f969ba11bfdcc6bd394475010bbd5e9e9106071b280d1f9a368.scope: Deactivated successfully.
Nov 25 17:28:49 compute-0 sudo[430482]: pam_unix(sudo:session): session closed for user root
Nov 25 17:28:49 compute-0 sudo[430649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:28:49 compute-0 sudo[430649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:28:49 compute-0 sudo[430649]: pam_unix(sudo:session): session closed for user root
Nov 25 17:28:50 compute-0 sudo[430674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:28:50 compute-0 sudo[430674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:28:50 compute-0 sudo[430674]: pam_unix(sudo:session): session closed for user root
Nov 25 17:28:50 compute-0 sudo[430699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:28:50 compute-0 sudo[430699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:28:50 compute-0 sudo[430699]: pam_unix(sudo:session): session closed for user root
Nov 25 17:28:50 compute-0 sudo[430724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:28:50 compute-0 sudo[430724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:28:50 compute-0 ceph-mon[74985]: pgmap v3162: 321 pgs: 321 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 3.3 KiB/s wr, 0 op/s
Nov 25 17:28:50 compute-0 podman[430790]: 2025-11-25 17:28:50.65531692 +0000 UTC m=+0.047766912 container create 7b36b3f1e26694e2db60a7067041bfe3a936e95b72bb5445ebec31d591befa8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_blackwell, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 17:28:50 compute-0 systemd[1]: Started libpod-conmon-7b36b3f1e26694e2db60a7067041bfe3a936e95b72bb5445ebec31d591befa8e.scope.
Nov 25 17:28:50 compute-0 podman[430790]: 2025-11-25 17:28:50.6365927 +0000 UTC m=+0.029042712 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:28:50 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:28:50 compute-0 podman[430790]: 2025-11-25 17:28:50.753501643 +0000 UTC m=+0.145951655 container init 7b36b3f1e26694e2db60a7067041bfe3a936e95b72bb5445ebec31d591befa8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_blackwell, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 17:28:50 compute-0 podman[430790]: 2025-11-25 17:28:50.759712521 +0000 UTC m=+0.152162523 container start 7b36b3f1e26694e2db60a7067041bfe3a936e95b72bb5445ebec31d591befa8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_blackwell, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:28:50 compute-0 podman[430790]: 2025-11-25 17:28:50.76409753 +0000 UTC m=+0.156547552 container attach 7b36b3f1e26694e2db60a7067041bfe3a936e95b72bb5445ebec31d591befa8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 17:28:50 compute-0 pensive_blackwell[430806]: 167 167
Nov 25 17:28:50 compute-0 systemd[1]: libpod-7b36b3f1e26694e2db60a7067041bfe3a936e95b72bb5445ebec31d591befa8e.scope: Deactivated successfully.
Nov 25 17:28:50 compute-0 podman[430790]: 2025-11-25 17:28:50.767763591 +0000 UTC m=+0.160213583 container died 7b36b3f1e26694e2db60a7067041bfe3a936e95b72bb5445ebec31d591befa8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_blackwell, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 17:28:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-199518b486907e55b5e950abfa182e36c059ad05382cc3bcc5127d64f8f8a69a-merged.mount: Deactivated successfully.
Nov 25 17:28:50 compute-0 podman[430790]: 2025-11-25 17:28:50.821614456 +0000 UTC m=+0.214064468 container remove 7b36b3f1e26694e2db60a7067041bfe3a936e95b72bb5445ebec31d591befa8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_blackwell, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:28:50 compute-0 systemd[1]: libpod-conmon-7b36b3f1e26694e2db60a7067041bfe3a936e95b72bb5445ebec31d591befa8e.scope: Deactivated successfully.
Nov 25 17:28:51 compute-0 podman[430829]: 2025-11-25 17:28:51.072075123 +0000 UTC m=+0.062960744 container create 4dc855ca8c230bdee2d8e51ab34ebac806841600875305502cb31a75786a5d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:28:51 compute-0 systemd[1]: Started libpod-conmon-4dc855ca8c230bdee2d8e51ab34ebac806841600875305502cb31a75786a5d36.scope.
Nov 25 17:28:51 compute-0 podman[430829]: 2025-11-25 17:28:51.050138906 +0000 UTC m=+0.041024577 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:28:51 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:28:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c8f6be2e059cfca1a5840da27c5bb7970f8ada6c700f680d0b50787e182f95a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:28:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c8f6be2e059cfca1a5840da27c5bb7970f8ada6c700f680d0b50787e182f95a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:28:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c8f6be2e059cfca1a5840da27c5bb7970f8ada6c700f680d0b50787e182f95a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:28:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c8f6be2e059cfca1a5840da27c5bb7970f8ada6c700f680d0b50787e182f95a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:28:51 compute-0 podman[430829]: 2025-11-25 17:28:51.187710511 +0000 UTC m=+0.178596192 container init 4dc855ca8c230bdee2d8e51ab34ebac806841600875305502cb31a75786a5d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_haslett, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:28:51 compute-0 podman[430829]: 2025-11-25 17:28:51.196330446 +0000 UTC m=+0.187216067 container start 4dc855ca8c230bdee2d8e51ab34ebac806841600875305502cb31a75786a5d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_haslett, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:28:51 compute-0 podman[430829]: 2025-11-25 17:28:51.198974908 +0000 UTC m=+0.189860579 container attach 4dc855ca8c230bdee2d8e51ab34ebac806841600875305502cb31a75786a5d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_haslett, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 17:28:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3163: 321 pgs: 321 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 3.3 KiB/s wr, 0 op/s
Nov 25 17:28:51 compute-0 nova_compute[254092]: 2025-11-25 17:28:51.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:28:51 compute-0 nova_compute[254092]: 2025-11-25 17:28:51.498 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:28:51 compute-0 nova_compute[254092]: 2025-11-25 17:28:51.524 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:28:51 compute-0 nova_compute[254092]: 2025-11-25 17:28:51.525 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:28:51 compute-0 nova_compute[254092]: 2025-11-25 17:28:51.525 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:28:51 compute-0 nova_compute[254092]: 2025-11-25 17:28:51.525 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:28:51 compute-0 nova_compute[254092]: 2025-11-25 17:28:51.526 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:28:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:28:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:28:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:28:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:28:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008648972170671355 of space, bias 1.0, pg target 0.2594691651201407 quantized to 32 (current 32)
Nov 25 17:28:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:28:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:28:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:28:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:28:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:28:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014241139016409456 of space, bias 1.0, pg target 0.4272341704922837 quantized to 32 (current 32)
Nov 25 17:28:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:28:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:28:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:28:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:28:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:28:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:28:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:28:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:28:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:28:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:28:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:28:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:28:52 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:28:52 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3867709054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:28:52 compute-0 nova_compute[254092]: 2025-11-25 17:28:52.028 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:28:52 compute-0 sweet_haslett[430847]: {
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:     "0": [
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:         {
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:             "devices": [
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:                 "/dev/loop3"
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:             ],
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:             "lv_name": "ceph_lv0",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:             "lv_size": "21470642176",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:             "name": "ceph_lv0",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:             "tags": {
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:                 "ceph.cluster_name": "ceph",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:                 "ceph.crush_device_class": "",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:                 "ceph.encrypted": "0",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:                 "ceph.osd_id": "0",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:                 "ceph.type": "block",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:                 "ceph.vdo": "0"
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:             },
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:             "type": "block",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:             "vg_name": "ceph_vg0"
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:         }
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:     ],
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:     "1": [
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:         {
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:             "devices": [
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:                 "/dev/loop4"
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:             ],
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:             "lv_name": "ceph_lv1",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:             "lv_size": "21470642176",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:             "name": "ceph_lv1",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:             "tags": {
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:                 "ceph.cluster_name": "ceph",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:                 "ceph.crush_device_class": "",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:                 "ceph.encrypted": "0",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:                 "ceph.osd_id": "1",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:                 "ceph.type": "block",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:                 "ceph.vdo": "0"
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:             },
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:             "type": "block",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:             "vg_name": "ceph_vg1"
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:         }
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:     ],
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:     "2": [
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:         {
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:             "devices": [
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:                 "/dev/loop5"
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:             ],
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:             "lv_name": "ceph_lv2",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:             "lv_size": "21470642176",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:             "name": "ceph_lv2",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:             "tags": {
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:                 "ceph.cluster_name": "ceph",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:                 "ceph.crush_device_class": "",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:                 "ceph.encrypted": "0",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:                 "ceph.osd_id": "2",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:                 "ceph.type": "block",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:                 "ceph.vdo": "0"
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:             },
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:             "type": "block",
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:             "vg_name": "ceph_vg2"
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:         }
Nov 25 17:28:52 compute-0 sweet_haslett[430847]:     ]
Nov 25 17:28:52 compute-0 sweet_haslett[430847]: }
Nov 25 17:28:52 compute-0 ovn_controller[153477]: 2025-11-25T17:28:52Z|01629|memory_trim|INFO|Detected inactivity (last active 30030 ms ago): trimming memory
Nov 25 17:28:52 compute-0 systemd[1]: libpod-4dc855ca8c230bdee2d8e51ab34ebac806841600875305502cb31a75786a5d36.scope: Deactivated successfully.
Nov 25 17:28:52 compute-0 podman[430829]: 2025-11-25 17:28:52.094539704 +0000 UTC m=+1.085425365 container died 4dc855ca8c230bdee2d8e51ab34ebac806841600875305502cb31a75786a5d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:28:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c8f6be2e059cfca1a5840da27c5bb7970f8ada6c700f680d0b50787e182f95a-merged.mount: Deactivated successfully.
Nov 25 17:28:52 compute-0 nova_compute[254092]: 2025-11-25 17:28:52.146 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:28:52 compute-0 nova_compute[254092]: 2025-11-25 17:28:52.147 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:28:52 compute-0 nova_compute[254092]: 2025-11-25 17:28:52.150 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000098 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:28:52 compute-0 nova_compute[254092]: 2025-11-25 17:28:52.151 254096 DEBUG nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] skipping disk for instance-00000098 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 17:28:52 compute-0 nova_compute[254092]: 2025-11-25 17:28:52.163 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:28:52 compute-0 podman[430829]: 2025-11-25 17:28:52.1722598 +0000 UTC m=+1.163145421 container remove 4dc855ca8c230bdee2d8e51ab34ebac806841600875305502cb31a75786a5d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_haslett, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 17:28:52 compute-0 systemd[1]: libpod-conmon-4dc855ca8c230bdee2d8e51ab34ebac806841600875305502cb31a75786a5d36.scope: Deactivated successfully.
Nov 25 17:28:52 compute-0 podman[430879]: 2025-11-25 17:28:52.201058613 +0000 UTC m=+0.107445385 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:28:52 compute-0 sudo[430724]: pam_unix(sudo:session): session closed for user root
Nov 25 17:28:52 compute-0 podman[430880]: 2025-11-25 17:28:52.220896783 +0000 UTC m=+0.119550324 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 25 17:28:52 compute-0 podman[430881]: 2025-11-25 17:28:52.232147599 +0000 UTC m=+0.138325965 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 17:28:52 compute-0 sudo[430950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:28:52 compute-0 sudo[430950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:28:52 compute-0 sudo[430950]: pam_unix(sudo:session): session closed for user root
Nov 25 17:28:52 compute-0 sudo[430975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:28:52 compute-0 sudo[430975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:28:52 compute-0 sudo[430975]: pam_unix(sudo:session): session closed for user root
Nov 25 17:28:52 compute-0 nova_compute[254092]: 2025-11-25 17:28:52.380 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:28:52 compute-0 nova_compute[254092]: 2025-11-25 17:28:52.383 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3154MB free_disk=59.936397552490234GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:28:52 compute-0 nova_compute[254092]: 2025-11-25 17:28:52.383 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:28:52 compute-0 nova_compute[254092]: 2025-11-25 17:28:52.384 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:28:52 compute-0 sudo[431000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:28:52 compute-0 sudo[431000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:28:52 compute-0 sudo[431000]: pam_unix(sudo:session): session closed for user root
Nov 25 17:28:52 compute-0 nova_compute[254092]: 2025-11-25 17:28:52.471 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance 7d3f09ec-6bad-4674-ab8b-907560448ab0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 17:28:52 compute-0 nova_compute[254092]: 2025-11-25 17:28:52.471 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Instance c983f16d-bf50-4a2c-aa05-213890fb387a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 17:28:52 compute-0 nova_compute[254092]: 2025-11-25 17:28:52.471 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:28:52 compute-0 nova_compute[254092]: 2025-11-25 17:28:52.472 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:28:52 compute-0 sudo[431025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:28:52 compute-0 sudo[431025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:28:52 compute-0 nova_compute[254092]: 2025-11-25 17:28:52.539 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:28:52 compute-0 ceph-mon[74985]: pgmap v3163: 321 pgs: 321 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 3.3 KiB/s wr, 0 op/s
Nov 25 17:28:52 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3867709054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:28:52 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:28:52 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2123940767' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:28:53 compute-0 nova_compute[254092]: 2025-11-25 17:28:53.002 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:28:53 compute-0 nova_compute[254092]: 2025-11-25 17:28:53.012 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:28:53 compute-0 nova_compute[254092]: 2025-11-25 17:28:53.036 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:28:53 compute-0 podman[431108]: 2025-11-25 17:28:53.051388369 +0000 UTC m=+0.088468429 container create 5171f3eaa32bed9ded4cc27ff7d3fb6a3f688d74467a17e8d04586cc6de3608c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_leavitt, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 17:28:53 compute-0 nova_compute[254092]: 2025-11-25 17:28:53.067 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:28:53 compute-0 nova_compute[254092]: 2025-11-25 17:28:53.067 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:28:53 compute-0 systemd[1]: Started libpod-conmon-5171f3eaa32bed9ded4cc27ff7d3fb6a3f688d74467a17e8d04586cc6de3608c.scope.
Nov 25 17:28:53 compute-0 podman[431108]: 2025-11-25 17:28:53.013837856 +0000 UTC m=+0.050917976 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:28:53 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:28:53 compute-0 podman[431108]: 2025-11-25 17:28:53.150918598 +0000 UTC m=+0.187998708 container init 5171f3eaa32bed9ded4cc27ff7d3fb6a3f688d74467a17e8d04586cc6de3608c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_leavitt, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 17:28:53 compute-0 podman[431108]: 2025-11-25 17:28:53.163774978 +0000 UTC m=+0.200855038 container start 5171f3eaa32bed9ded4cc27ff7d3fb6a3f688d74467a17e8d04586cc6de3608c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_leavitt, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 17:28:53 compute-0 podman[431108]: 2025-11-25 17:28:53.168732873 +0000 UTC m=+0.205813003 container attach 5171f3eaa32bed9ded4cc27ff7d3fb6a3f688d74467a17e8d04586cc6de3608c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_leavitt, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 17:28:53 compute-0 relaxed_leavitt[431126]: 167 167
Nov 25 17:28:53 compute-0 systemd[1]: libpod-5171f3eaa32bed9ded4cc27ff7d3fb6a3f688d74467a17e8d04586cc6de3608c.scope: Deactivated successfully.
Nov 25 17:28:53 compute-0 podman[431108]: 2025-11-25 17:28:53.174416598 +0000 UTC m=+0.211496698 container died 5171f3eaa32bed9ded4cc27ff7d3fb6a3f688d74467a17e8d04586cc6de3608c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_leavitt, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:28:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-0409e7a7a222f7077d6284cf1c7e2f6fb8cc3b0dcc5909b5bf928274bc0e9ba0-merged.mount: Deactivated successfully.
Nov 25 17:28:53 compute-0 podman[431108]: 2025-11-25 17:28:53.228631343 +0000 UTC m=+0.265711413 container remove 5171f3eaa32bed9ded4cc27ff7d3fb6a3f688d74467a17e8d04586cc6de3608c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_leavitt, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:28:53 compute-0 systemd[1]: libpod-conmon-5171f3eaa32bed9ded4cc27ff7d3fb6a3f688d74467a17e8d04586cc6de3608c.scope: Deactivated successfully.
Nov 25 17:28:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3164: 321 pgs: 321 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 3.3 KiB/s wr, 0 op/s
Nov 25 17:28:53 compute-0 podman[431148]: 2025-11-25 17:28:53.453712619 +0000 UTC m=+0.057260149 container create 2a8e9df723a9fddde62af794429d09ba243b1ff6a59e732b145cbb2513b8ba70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_brown, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 17:28:53 compute-0 systemd[1]: Started libpod-conmon-2a8e9df723a9fddde62af794429d09ba243b1ff6a59e732b145cbb2513b8ba70.scope.
Nov 25 17:28:53 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:28:53 compute-0 podman[431148]: 2025-11-25 17:28:53.432058471 +0000 UTC m=+0.035606041 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:28:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0edc2449c1a47846aaf5d85aa69151ac3867c69a9a413603a89eae1a9b0e853f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:28:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0edc2449c1a47846aaf5d85aa69151ac3867c69a9a413603a89eae1a9b0e853f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:28:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0edc2449c1a47846aaf5d85aa69151ac3867c69a9a413603a89eae1a9b0e853f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:28:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0edc2449c1a47846aaf5d85aa69151ac3867c69a9a413603a89eae1a9b0e853f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:28:53 compute-0 podman[431148]: 2025-11-25 17:28:53.547277257 +0000 UTC m=+0.150824807 container init 2a8e9df723a9fddde62af794429d09ba243b1ff6a59e732b145cbb2513b8ba70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 17:28:53 compute-0 podman[431148]: 2025-11-25 17:28:53.556613281 +0000 UTC m=+0.160160821 container start 2a8e9df723a9fddde62af794429d09ba243b1ff6a59e732b145cbb2513b8ba70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:28:53 compute-0 podman[431148]: 2025-11-25 17:28:53.560346563 +0000 UTC m=+0.163894143 container attach 2a8e9df723a9fddde62af794429d09ba243b1ff6a59e732b145cbb2513b8ba70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:28:53 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2123940767' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:28:54 compute-0 nova_compute[254092]: 2025-11-25 17:28:54.065 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:28:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:28:54 compute-0 trusting_brown[431165]: {
Nov 25 17:28:54 compute-0 trusting_brown[431165]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:28:54 compute-0 trusting_brown[431165]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:28:54 compute-0 trusting_brown[431165]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:28:54 compute-0 trusting_brown[431165]:         "osd_id": 1,
Nov 25 17:28:54 compute-0 trusting_brown[431165]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:28:54 compute-0 trusting_brown[431165]:         "type": "bluestore"
Nov 25 17:28:54 compute-0 trusting_brown[431165]:     },
Nov 25 17:28:54 compute-0 trusting_brown[431165]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:28:54 compute-0 trusting_brown[431165]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:28:54 compute-0 trusting_brown[431165]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:28:54 compute-0 trusting_brown[431165]:         "osd_id": 2,
Nov 25 17:28:54 compute-0 trusting_brown[431165]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:28:54 compute-0 trusting_brown[431165]:         "type": "bluestore"
Nov 25 17:28:54 compute-0 trusting_brown[431165]:     },
Nov 25 17:28:54 compute-0 trusting_brown[431165]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:28:54 compute-0 trusting_brown[431165]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:28:54 compute-0 trusting_brown[431165]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:28:54 compute-0 trusting_brown[431165]:         "osd_id": 0,
Nov 25 17:28:54 compute-0 trusting_brown[431165]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:28:54 compute-0 trusting_brown[431165]:         "type": "bluestore"
Nov 25 17:28:54 compute-0 trusting_brown[431165]:     }
Nov 25 17:28:54 compute-0 trusting_brown[431165]: }
Nov 25 17:28:54 compute-0 nova_compute[254092]: 2025-11-25 17:28:54.586 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:28:54 compute-0 systemd[1]: libpod-2a8e9df723a9fddde62af794429d09ba243b1ff6a59e732b145cbb2513b8ba70.scope: Deactivated successfully.
Nov 25 17:28:54 compute-0 systemd[1]: libpod-2a8e9df723a9fddde62af794429d09ba243b1ff6a59e732b145cbb2513b8ba70.scope: Consumed 1.053s CPU time.
Nov 25 17:28:54 compute-0 conmon[431165]: conmon 2a8e9df723a9fddde62a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2a8e9df723a9fddde62af794429d09ba243b1ff6a59e732b145cbb2513b8ba70.scope/container/memory.events
Nov 25 17:28:54 compute-0 podman[431148]: 2025-11-25 17:28:54.600063743 +0000 UTC m=+1.203611273 container died 2a8e9df723a9fddde62af794429d09ba243b1ff6a59e732b145cbb2513b8ba70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_brown, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 17:28:54 compute-0 ceph-mon[74985]: pgmap v3164: 321 pgs: 321 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 3.3 KiB/s wr, 0 op/s
Nov 25 17:28:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-0edc2449c1a47846aaf5d85aa69151ac3867c69a9a413603a89eae1a9b0e853f-merged.mount: Deactivated successfully.
Nov 25 17:28:54 compute-0 podman[431148]: 2025-11-25 17:28:54.664364183 +0000 UTC m=+1.267911703 container remove 2a8e9df723a9fddde62af794429d09ba243b1ff6a59e732b145cbb2513b8ba70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_brown, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Nov 25 17:28:54 compute-0 systemd[1]: libpod-conmon-2a8e9df723a9fddde62af794429d09ba243b1ff6a59e732b145cbb2513b8ba70.scope: Deactivated successfully.
Nov 25 17:28:54 compute-0 sudo[431025]: pam_unix(sudo:session): session closed for user root
Nov 25 17:28:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:28:54 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:28:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:28:54 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:28:54 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev e7d64f74-97b1-4bf7-9329-2ed56ea96738 does not exist
Nov 25 17:28:54 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev a115b60d-1ed1-41cb-bbfa-fc7f85455915 does not exist
Nov 25 17:28:54 compute-0 sudo[431213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:28:54 compute-0 sudo[431213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:28:54 compute-0 sudo[431213]: pam_unix(sudo:session): session closed for user root
Nov 25 17:28:54 compute-0 sudo[431238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:28:54 compute-0 sudo[431238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:28:54 compute-0 sudo[431238]: pam_unix(sudo:session): session closed for user root
Nov 25 17:28:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3165: 321 pgs: 321 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 97 KiB/s rd, 3.3 KiB/s wr, 2 op/s
Nov 25 17:28:55 compute-0 nova_compute[254092]: 2025-11-25 17:28:55.381 254096 DEBUG nova.compute.manager [None req-f592d871-06b5-4c82-b15c-fd2f045cb7a7 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:28:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:28:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/686158814' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:28:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:28:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/686158814' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:28:55 compute-0 nova_compute[254092]: 2025-11-25 17:28:55.421 254096 INFO nova.compute.manager [None req-f592d871-06b5-4c82-b15c-fd2f045cb7a7 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] instance snapshotting
Nov 25 17:28:55 compute-0 nova_compute[254092]: 2025-11-25 17:28:55.683 254096 INFO nova.virt.libvirt.driver [None req-f592d871-06b5-4c82-b15c-fd2f045cb7a7 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Beginning live snapshot process
Nov 25 17:28:55 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:28:55 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:28:55 compute-0 ceph-mon[74985]: pgmap v3165: 321 pgs: 321 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 97 KiB/s rd, 3.3 KiB/s wr, 2 op/s
Nov 25 17:28:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/686158814' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:28:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/686158814' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:28:55 compute-0 nova_compute[254092]: 2025-11-25 17:28:55.854 254096 DEBUG nova.storage.rbd_utils [None req-f592d871-06b5-4c82-b15c-fd2f045cb7a7 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] creating snapshot(e24d3925bfa3487ab0d43d6acc87a7fd) on rbd image(c983f16d-bf50-4a2c-aa05-213890fb387a_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 17:28:56 compute-0 nova_compute[254092]: 2025-11-25 17:28:56.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:28:56 compute-0 nova_compute[254092]: 2025-11-25 17:28:56.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:28:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e288 do_prune osdmap full prune enabled
Nov 25 17:28:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e289 e289: 3 total, 3 up, 3 in
Nov 25 17:28:56 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e289: 3 total, 3 up, 3 in
Nov 25 17:28:56 compute-0 nova_compute[254092]: 2025-11-25 17:28:56.832 254096 DEBUG nova.storage.rbd_utils [None req-f592d871-06b5-4c82-b15c-fd2f045cb7a7 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] cloning vms/c983f16d-bf50-4a2c-aa05-213890fb387a_disk@e24d3925bfa3487ab0d43d6acc87a7fd to images/b74fa3ec-ffe6-4470-93cb-d345ffbadb0e clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 17:28:56 compute-0 nova_compute[254092]: 2025-11-25 17:28:56.963 254096 DEBUG nova.storage.rbd_utils [None req-f592d871-06b5-4c82-b15c-fd2f045cb7a7 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] flattening images/b74fa3ec-ffe6-4470-93cb-d345ffbadb0e flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 17:28:57 compute-0 nova_compute[254092]: 2025-11-25 17:28:57.167 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:28:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3167: 321 pgs: 321 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 253 KiB/s rd, 6 op/s
Nov 25 17:28:57 compute-0 nova_compute[254092]: 2025-11-25 17:28:57.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:28:57 compute-0 nova_compute[254092]: 2025-11-25 17:28:57.589 254096 DEBUG nova.storage.rbd_utils [None req-f592d871-06b5-4c82-b15c-fd2f045cb7a7 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] removing snapshot(e24d3925bfa3487ab0d43d6acc87a7fd) on rbd image(c983f16d-bf50-4a2c-aa05-213890fb387a_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 17:28:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e289 do_prune osdmap full prune enabled
Nov 25 17:28:57 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #150. Immutable memtables: 0.
Nov 25 17:28:57 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:28:57.769104) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 17:28:57 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 150
Nov 25 17:28:57 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091737769155, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 2004, "num_deletes": 252, "total_data_size": 3323455, "memory_usage": 3374968, "flush_reason": "Manual Compaction"}
Nov 25 17:28:57 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #151: started
Nov 25 17:28:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e290 e290: 3 total, 3 up, 3 in
Nov 25 17:28:57 compute-0 ceph-mon[74985]: osdmap e289: 3 total, 3 up, 3 in
Nov 25 17:28:57 compute-0 ceph-mon[74985]: pgmap v3167: 321 pgs: 321 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 253 KiB/s rd, 6 op/s
Nov 25 17:28:57 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e290: 3 total, 3 up, 3 in
Nov 25 17:28:57 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091737789757, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 151, "file_size": 3246834, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 63882, "largest_seqno": 65885, "table_properties": {"data_size": 3237545, "index_size": 5846, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18744, "raw_average_key_size": 20, "raw_value_size": 3219111, "raw_average_value_size": 3483, "num_data_blocks": 259, "num_entries": 924, "num_filter_entries": 924, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764091529, "oldest_key_time": 1764091529, "file_creation_time": 1764091737, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 151, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:28:57 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 20732 microseconds, and 10943 cpu microseconds.
Nov 25 17:28:57 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:28:57 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:28:57.789833) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #151: 3246834 bytes OK
Nov 25 17:28:57 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:28:57.789867) [db/memtable_list.cc:519] [default] Level-0 commit table #151 started
Nov 25 17:28:57 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:28:57.793936) [db/memtable_list.cc:722] [default] Level-0 commit table #151: memtable #1 done
Nov 25 17:28:57 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:28:57.793967) EVENT_LOG_v1 {"time_micros": 1764091737793957, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 17:28:57 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:28:57.793995) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 17:28:57 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 3314971, prev total WAL file size 3315012, number of live WAL files 2.
Nov 25 17:28:57 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000147.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:28:57 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:28:57.795757) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Nov 25 17:28:57 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 17:28:57 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [151(3170KB)], [149(8164KB)]
Nov 25 17:28:57 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091737795800, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [151], "files_L6": [149], "score": -1, "input_data_size": 11607675, "oldest_snapshot_seqno": -1}
Nov 25 17:28:57 compute-0 nova_compute[254092]: 2025-11-25 17:28:57.815 254096 DEBUG nova.storage.rbd_utils [None req-f592d871-06b5-4c82-b15c-fd2f045cb7a7 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] creating snapshot(snap) on rbd image(b74fa3ec-ffe6-4470-93cb-d345ffbadb0e) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 17:28:57 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #152: 8515 keys, 9871488 bytes, temperature: kUnknown
Nov 25 17:28:57 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091737839932, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 152, "file_size": 9871488, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9818208, "index_size": 30895, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21317, "raw_key_size": 222853, "raw_average_key_size": 26, "raw_value_size": 9669951, "raw_average_value_size": 1135, "num_data_blocks": 1195, "num_entries": 8515, "num_filter_entries": 8515, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764091737, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:28:57 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:28:57 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:28:57.840594) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 9871488 bytes
Nov 25 17:28:57 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:28:57.842576) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 262.4 rd, 223.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 8.0 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(6.6) write-amplify(3.0) OK, records in: 9035, records dropped: 520 output_compression: NoCompression
Nov 25 17:28:57 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:28:57.842607) EVENT_LOG_v1 {"time_micros": 1764091737842593, "job": 92, "event": "compaction_finished", "compaction_time_micros": 44232, "compaction_time_cpu_micros": 24105, "output_level": 6, "num_output_files": 1, "total_output_size": 9871488, "num_input_records": 9035, "num_output_records": 8515, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 17:28:57 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000151.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:28:57 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091737844140, "job": 92, "event": "table_file_deletion", "file_number": 151}
Nov 25 17:28:57 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:28:57 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091737847262, "job": 92, "event": "table_file_deletion", "file_number": 149}
Nov 25 17:28:57 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:28:57.795622) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:28:57 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:28:57.847341) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:28:57 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:28:57.847351) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:28:57 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:28:57.847354) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:28:57 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:28:57.847358) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:28:57 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:28:57.847361) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:28:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e290 do_prune osdmap full prune enabled
Nov 25 17:28:58 compute-0 ceph-mon[74985]: osdmap e290: 3 total, 3 up, 3 in
Nov 25 17:28:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e291 e291: 3 total, 3 up, 3 in
Nov 25 17:28:58 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e291: 3 total, 3 up, 3 in
Nov 25 17:28:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3170: 321 pgs: 321 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 421 KiB/s rd, 11 op/s
Nov 25 17:28:59 compute-0 nova_compute[254092]: 2025-11-25 17:28:59.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:28:59 compute-0 nova_compute[254092]: 2025-11-25 17:28:59.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:28:59 compute-0 nova_compute[254092]: 2025-11-25 17:28:59.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:28:59 compute-0 nova_compute[254092]: 2025-11-25 17:28:59.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:28:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:28:59 compute-0 nova_compute[254092]: 2025-11-25 17:28:59.590 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:28:59 compute-0 ceph-mon[74985]: osdmap e291: 3 total, 3 up, 3 in
Nov 25 17:28:59 compute-0 ceph-mon[74985]: pgmap v3170: 321 pgs: 321 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 421 KiB/s rd, 11 op/s
Nov 25 17:28:59 compute-0 nova_compute[254092]: 2025-11-25 17:28:59.863 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "refresh_cache-7d3f09ec-6bad-4674-ab8b-907560448ab0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:28:59 compute-0 nova_compute[254092]: 2025-11-25 17:28:59.864 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquired lock "refresh_cache-7d3f09ec-6bad-4674-ab8b-907560448ab0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:28:59 compute-0 nova_compute[254092]: 2025-11-25 17:28:59.864 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 17:28:59 compute-0 nova_compute[254092]: 2025-11-25 17:28:59.864 254096 DEBUG nova.objects.instance [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7d3f09ec-6bad-4674-ab8b-907560448ab0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:29:00 compute-0 nova_compute[254092]: 2025-11-25 17:29:00.291 254096 INFO nova.virt.libvirt.driver [None req-f592d871-06b5-4c82-b15c-fd2f045cb7a7 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Snapshot image upload complete
Nov 25 17:29:00 compute-0 nova_compute[254092]: 2025-11-25 17:29:00.291 254096 INFO nova.compute.manager [None req-f592d871-06b5-4c82-b15c-fd2f045cb7a7 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Took 4.87 seconds to snapshot the instance on the hypervisor.
Nov 25 17:29:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e291 do_prune osdmap full prune enabled
Nov 25 17:29:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e292 e292: 3 total, 3 up, 3 in
Nov 25 17:29:01 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e292: 3 total, 3 up, 3 in
Nov 25 17:29:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3172: 321 pgs: 321 active+clean; 323 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 20 MiB/s wr, 257 op/s
Nov 25 17:29:01 compute-0 nova_compute[254092]: 2025-11-25 17:29:01.636 254096 DEBUG nova.network.neutron [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Updating instance_info_cache with network_info: [{"id": "4c9f1115-1d14-4772-b092-e842077e160a", "address": "fa:16:3e:dc:a2:36", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c9f1115-1d", "ovs_interfaceid": "4c9f1115-1d14-4772-b092-e842077e160a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:29:01 compute-0 nova_compute[254092]: 2025-11-25 17:29:01.651 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Releasing lock "refresh_cache-7d3f09ec-6bad-4674-ab8b-907560448ab0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:29:01 compute-0 nova_compute[254092]: 2025-11-25 17:29:01.651 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 17:29:02 compute-0 nova_compute[254092]: 2025-11-25 17:29:02.170 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:29:02 compute-0 ceph-mon[74985]: osdmap e292: 3 total, 3 up, 3 in
Nov 25 17:29:02 compute-0 ceph-mon[74985]: pgmap v3172: 321 pgs: 321 active+clean; 323 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 20 MiB/s wr, 257 op/s
Nov 25 17:29:03 compute-0 nova_compute[254092]: 2025-11-25 17:29:03.207 254096 DEBUG nova.compute.manager [req-0ad97433-51ee-40b9-80f3-b246f34b2ced req-afaee2f2-98c3-45c3-bd60-97937de0e8fe a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Received event network-changed-78592e85-0e7a-4c36-bf1d-981efc74361b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:29:03 compute-0 nova_compute[254092]: 2025-11-25 17:29:03.208 254096 DEBUG nova.compute.manager [req-0ad97433-51ee-40b9-80f3-b246f34b2ced req-afaee2f2-98c3-45c3-bd60-97937de0e8fe a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Refreshing instance network info cache due to event network-changed-78592e85-0e7a-4c36-bf1d-981efc74361b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:29:03 compute-0 nova_compute[254092]: 2025-11-25 17:29:03.209 254096 DEBUG oslo_concurrency.lockutils [req-0ad97433-51ee-40b9-80f3-b246f34b2ced req-afaee2f2-98c3-45c3-bd60-97937de0e8fe a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-c983f16d-bf50-4a2c-aa05-213890fb387a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:29:03 compute-0 nova_compute[254092]: 2025-11-25 17:29:03.209 254096 DEBUG oslo_concurrency.lockutils [req-0ad97433-51ee-40b9-80f3-b246f34b2ced req-afaee2f2-98c3-45c3-bd60-97937de0e8fe a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-c983f16d-bf50-4a2c-aa05-213890fb387a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:29:03 compute-0 nova_compute[254092]: 2025-11-25 17:29:03.210 254096 DEBUG nova.network.neutron [req-0ad97433-51ee-40b9-80f3-b246f34b2ced req-afaee2f2-98c3-45c3-bd60-97937de0e8fe a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Refreshing network info cache for port 78592e85-0e7a-4c36-bf1d-981efc74361b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:29:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3173: 321 pgs: 321 active+clean; 323 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 8.2 MiB/s rd, 15 MiB/s wr, 196 op/s
Nov 25 17:29:03 compute-0 nova_compute[254092]: 2025-11-25 17:29:03.402 254096 DEBUG oslo_concurrency.lockutils [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Acquiring lock "c983f16d-bf50-4a2c-aa05-213890fb387a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:29:03 compute-0 nova_compute[254092]: 2025-11-25 17:29:03.403 254096 DEBUG oslo_concurrency.lockutils [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "c983f16d-bf50-4a2c-aa05-213890fb387a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:29:03 compute-0 nova_compute[254092]: 2025-11-25 17:29:03.403 254096 DEBUG oslo_concurrency.lockutils [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Acquiring lock "c983f16d-bf50-4a2c-aa05-213890fb387a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:29:03 compute-0 nova_compute[254092]: 2025-11-25 17:29:03.404 254096 DEBUG oslo_concurrency.lockutils [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "c983f16d-bf50-4a2c-aa05-213890fb387a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:29:03 compute-0 nova_compute[254092]: 2025-11-25 17:29:03.404 254096 DEBUG oslo_concurrency.lockutils [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "c983f16d-bf50-4a2c-aa05-213890fb387a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:29:03 compute-0 nova_compute[254092]: 2025-11-25 17:29:03.406 254096 INFO nova.compute.manager [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Terminating instance
Nov 25 17:29:03 compute-0 nova_compute[254092]: 2025-11-25 17:29:03.409 254096 DEBUG nova.compute.manager [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 17:29:03 compute-0 kernel: tap78592e85-0e (unregistering): left promiscuous mode
Nov 25 17:29:03 compute-0 NetworkManager[48891]: <info>  [1764091743.6723] device (tap78592e85-0e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:29:03 compute-0 ovn_controller[153477]: 2025-11-25T17:29:03Z|01630|binding|INFO|Releasing lport 78592e85-0e7a-4c36-bf1d-981efc74361b from this chassis (sb_readonly=0)
Nov 25 17:29:03 compute-0 ovn_controller[153477]: 2025-11-25T17:29:03Z|01631|binding|INFO|Setting lport 78592e85-0e7a-4c36-bf1d-981efc74361b down in Southbound
Nov 25 17:29:03 compute-0 ovn_controller[153477]: 2025-11-25T17:29:03Z|01632|binding|INFO|Removing iface tap78592e85-0e ovn-installed in OVS
Nov 25 17:29:03 compute-0 nova_compute[254092]: 2025-11-25 17:29:03.691 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:29:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:29:03.694 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:b0:be 10.100.0.12'], port_security=['fa:16:3e:f0:b0:be 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'c983f16d-bf50-4a2c-aa05-213890fb387a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7a1a00fe-6b82-48c5-a534-9040cbe84499', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd295e1cfcd234c4391fda20fc4264d70', 'neutron:revision_number': '4', 'neutron:security_group_ids': '549c359f-07b7-49ee-bcd4-0d9c2b40c1ca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e580454e-cbcb-4071-803c-15d1a2e87179, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=78592e85-0e7a-4c36-bf1d-981efc74361b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:29:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:29:03.697 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 78592e85-0e7a-4c36-bf1d-981efc74361b in datapath 7a1a00fe-6b82-48c5-a534-9040cbe84499 unbound from our chassis
Nov 25 17:29:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:29:03.698 163338 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7a1a00fe-6b82-48c5-a534-9040cbe84499
Nov 25 17:29:03 compute-0 nova_compute[254092]: 2025-11-25 17:29:03.718 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:29:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:29:03.730 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[35ba5d6e-e2b9-49d1-aab1-f3e35a747fab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:29:03 compute-0 systemd[1]: machine-qemu\x2d186\x2dinstance\x2d00000098.scope: Deactivated successfully.
Nov 25 17:29:03 compute-0 systemd[1]: machine-qemu\x2d186\x2dinstance\x2d00000098.scope: Consumed 16.074s CPU time.
Nov 25 17:29:03 compute-0 systemd-machined[216343]: Machine qemu-186-instance-00000098 terminated.
Nov 25 17:29:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:29:03.804 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[82770369-fca3-4482-91cf-be4d6914e533]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:29:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:29:03.810 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[a9632087-5fb5-427d-b497-36396c556e69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:29:03 compute-0 nova_compute[254092]: 2025-11-25 17:29:03.843 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:29:03 compute-0 nova_compute[254092]: 2025-11-25 17:29:03.850 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:29:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:29:03.861 270879 DEBUG oslo.privsep.daemon [-] privsep: reply[3941c028-ba8a-4113-980c-1547848b0e59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:29:03 compute-0 nova_compute[254092]: 2025-11-25 17:29:03.866 254096 INFO nova.virt.libvirt.driver [-] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Instance destroyed successfully.
Nov 25 17:29:03 compute-0 nova_compute[254092]: 2025-11-25 17:29:03.867 254096 DEBUG nova.objects.instance [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lazy-loading 'resources' on Instance uuid c983f16d-bf50-4a2c-aa05-213890fb387a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:29:03 compute-0 nova_compute[254092]: 2025-11-25 17:29:03.881 254096 DEBUG nova.virt.libvirt.vif [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:28:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-934976337',display_name='tempest-TestSnapshotPattern-server-934976337',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-934976337',id=152,image_ref='eaaf0c90-8ebf-4675-ac5d-b65d63e9cd82',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPdD8/FH5EbDI3xVD9XL5KNVQ4Lzs3xHz1AUjnnFvQmxyC/VbF+auKCZxkKmIeFDv9Sg6Emei+wcjh3M01sOiN6AXBLXE6uzmUge5MlLrHdGnnVCH6sch0v/Lk4Ix4P1ww==',key_name='tempest-TestSnapshotPattern-1261516286',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:28:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d295e1cfcd234c4391fda20fc4264d70',ramdisk_id='',reservation_id='r-ryy3wl7e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='7d3f09ec-6bad-4674-ab8b-907560448ab0',image_min_disk='1',image_min_ram='0',image_owner_id='d295e1cfcd234c4391fda20fc4264d70',image_owner_project_name='tempest-TestSnapshotPattern-1072505445',image_owner_user_name='tempest-TestSnapshotPattern-1072505445-project-member',image_user_id='e6ab922303af4fc0a70862a72b3ea9c8',image_version='8.0',owner_project_name='tempest-TestSnapshotPattern-1072505445',owner_user_name='tempest-TestSnapshotPattern-1072505445-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:29:00Z,user_data=None,user_id='e6ab922303af4fc0a70862a72b3ea9c8',uuid=c983f16d-bf50-4a2c-aa05-213890fb387a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "78592e85-0e7a-4c36-bf1d-981efc74361b", "address": "fa:16:3e:f0:b0:be", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78592e85-0e", "ovs_interfaceid": "78592e85-0e7a-4c36-bf1d-981efc74361b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:29:03 compute-0 nova_compute[254092]: 2025-11-25 17:29:03.882 254096 DEBUG nova.network.os_vif_util [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Converting VIF {"id": "78592e85-0e7a-4c36-bf1d-981efc74361b", "address": "fa:16:3e:f0:b0:be", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78592e85-0e", "ovs_interfaceid": "78592e85-0e7a-4c36-bf1d-981efc74361b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:29:03 compute-0 nova_compute[254092]: 2025-11-25 17:29:03.883 254096 DEBUG nova.network.os_vif_util [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f0:b0:be,bridge_name='br-int',has_traffic_filtering=True,id=78592e85-0e7a-4c36-bf1d-981efc74361b,network=Network(7a1a00fe-6b82-48c5-a534-9040cbe84499),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78592e85-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:29:03 compute-0 nova_compute[254092]: 2025-11-25 17:29:03.884 254096 DEBUG os_vif [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f0:b0:be,bridge_name='br-int',has_traffic_filtering=True,id=78592e85-0e7a-4c36-bf1d-981efc74361b,network=Network(7a1a00fe-6b82-48c5-a534-9040cbe84499),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78592e85-0e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:29:03 compute-0 nova_compute[254092]: 2025-11-25 17:29:03.887 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:29:03 compute-0 nova_compute[254092]: 2025-11-25 17:29:03.887 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap78592e85-0e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:29:03 compute-0 nova_compute[254092]: 2025-11-25 17:29:03.890 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:29:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:29:03.893 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[7f51aa23-ad01-4952-8954-059cacc73731]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7a1a00fe-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:48:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 471], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 810772, 'reachable_time': 37101, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 431426, 'error': None, 'target': 'ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:29:03 compute-0 nova_compute[254092]: 2025-11-25 17:29:03.894 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 17:29:03 compute-0 nova_compute[254092]: 2025-11-25 17:29:03.900 254096 INFO os_vif [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f0:b0:be,bridge_name='br-int',has_traffic_filtering=True,id=78592e85-0e7a-4c36-bf1d-981efc74361b,network=Network(7a1a00fe-6b82-48c5-a534-9040cbe84499),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78592e85-0e')
Nov 25 17:29:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:29:03.922 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[765ec5c0-c85b-4779-b5df-1c6499074cd1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7a1a00fe-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 810787, 'tstamp': 810787}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 431428, 'error': None, 'target': 'ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7a1a00fe-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 810790, 'tstamp': 810790}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 431428, 'error': None, 'target': 'ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:29:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:29:03.924 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7a1a00fe-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:29:03 compute-0 nova_compute[254092]: 2025-11-25 17:29:03.927 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:29:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:29:03.930 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7a1a00fe-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:29:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:29:03.930 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:29:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:29:03.931 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7a1a00fe-60, col_values=(('external_ids', {'iface-id': '0a114fd0-0e8c-4ae1-8b45-56c99d3c790e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:29:03 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:29:03.931 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 17:29:03 compute-0 nova_compute[254092]: 2025-11-25 17:29:03.934 254096 DEBUG nova.compute.manager [req-2a9c166d-e27b-4934-bf62-8bfeb794e05d req-aedee8b0-ce62-424a-b62c-10d0b4e1a549 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Received event network-vif-unplugged-78592e85-0e7a-4c36-bf1d-981efc74361b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:29:03 compute-0 nova_compute[254092]: 2025-11-25 17:29:03.934 254096 DEBUG oslo_concurrency.lockutils [req-2a9c166d-e27b-4934-bf62-8bfeb794e05d req-aedee8b0-ce62-424a-b62c-10d0b4e1a549 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "c983f16d-bf50-4a2c-aa05-213890fb387a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:29:03 compute-0 nova_compute[254092]: 2025-11-25 17:29:03.934 254096 DEBUG oslo_concurrency.lockutils [req-2a9c166d-e27b-4934-bf62-8bfeb794e05d req-aedee8b0-ce62-424a-b62c-10d0b4e1a549 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "c983f16d-bf50-4a2c-aa05-213890fb387a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:29:03 compute-0 nova_compute[254092]: 2025-11-25 17:29:03.935 254096 DEBUG oslo_concurrency.lockutils [req-2a9c166d-e27b-4934-bf62-8bfeb794e05d req-aedee8b0-ce62-424a-b62c-10d0b4e1a549 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "c983f16d-bf50-4a2c-aa05-213890fb387a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:29:03 compute-0 nova_compute[254092]: 2025-11-25 17:29:03.935 254096 DEBUG nova.compute.manager [req-2a9c166d-e27b-4934-bf62-8bfeb794e05d req-aedee8b0-ce62-424a-b62c-10d0b4e1a549 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] No waiting events found dispatching network-vif-unplugged-78592e85-0e7a-4c36-bf1d-981efc74361b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:29:03 compute-0 nova_compute[254092]: 2025-11-25 17:29:03.935 254096 DEBUG nova.compute.manager [req-2a9c166d-e27b-4934-bf62-8bfeb794e05d req-aedee8b0-ce62-424a-b62c-10d0b4e1a549 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Received event network-vif-unplugged-78592e85-0e7a-4c36-bf1d-981efc74361b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 17:29:03 compute-0 nova_compute[254092]: 2025-11-25 17:29:03.936 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:29:04 compute-0 nova_compute[254092]: 2025-11-25 17:29:04.329 254096 INFO nova.virt.libvirt.driver [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Deleting instance files /var/lib/nova/instances/c983f16d-bf50-4a2c-aa05-213890fb387a_del
Nov 25 17:29:04 compute-0 nova_compute[254092]: 2025-11-25 17:29:04.330 254096 INFO nova.virt.libvirt.driver [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Deletion of /var/lib/nova/instances/c983f16d-bf50-4a2c-aa05-213890fb387a_del complete
Nov 25 17:29:04 compute-0 nova_compute[254092]: 2025-11-25 17:29:04.421 254096 INFO nova.compute.manager [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Took 1.01 seconds to destroy the instance on the hypervisor.
Nov 25 17:29:04 compute-0 nova_compute[254092]: 2025-11-25 17:29:04.422 254096 DEBUG oslo.service.loopingcall [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 17:29:04 compute-0 nova_compute[254092]: 2025-11-25 17:29:04.422 254096 DEBUG nova.compute.manager [-] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 17:29:04 compute-0 nova_compute[254092]: 2025-11-25 17:29:04.423 254096 DEBUG nova.network.neutron [-] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 17:29:04 compute-0 nova_compute[254092]: 2025-11-25 17:29:04.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:29:04 compute-0 ceph-mon[74985]: pgmap v3173: 321 pgs: 321 active+clean; 323 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 8.2 MiB/s rd, 15 MiB/s wr, 196 op/s
Nov 25 17:29:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:29:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e292 do_prune osdmap full prune enabled
Nov 25 17:29:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e293 e293: 3 total, 3 up, 3 in
Nov 25 17:29:04 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e293: 3 total, 3 up, 3 in
Nov 25 17:29:04 compute-0 nova_compute[254092]: 2025-11-25 17:29:04.915 254096 DEBUG nova.network.neutron [req-0ad97433-51ee-40b9-80f3-b246f34b2ced req-afaee2f2-98c3-45c3-bd60-97937de0e8fe a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Updated VIF entry in instance network info cache for port 78592e85-0e7a-4c36-bf1d-981efc74361b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:29:04 compute-0 nova_compute[254092]: 2025-11-25 17:29:04.916 254096 DEBUG nova.network.neutron [req-0ad97433-51ee-40b9-80f3-b246f34b2ced req-afaee2f2-98c3-45c3-bd60-97937de0e8fe a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Updating instance_info_cache with network_info: [{"id": "78592e85-0e7a-4c36-bf1d-981efc74361b", "address": "fa:16:3e:f0:b0:be", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78592e85-0e", "ovs_interfaceid": "78592e85-0e7a-4c36-bf1d-981efc74361b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:29:04 compute-0 nova_compute[254092]: 2025-11-25 17:29:04.942 254096 DEBUG oslo_concurrency.lockutils [req-0ad97433-51ee-40b9-80f3-b246f34b2ced req-afaee2f2-98c3-45c3-bd60-97937de0e8fe a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-c983f16d-bf50-4a2c-aa05-213890fb387a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:29:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3175: 321 pgs: 321 active+clean; 266 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.6 MiB/s rd, 14 MiB/s wr, 201 op/s
Nov 25 17:29:05 compute-0 nova_compute[254092]: 2025-11-25 17:29:05.394 254096 DEBUG nova.network.neutron [-] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:29:05 compute-0 nova_compute[254092]: 2025-11-25 17:29:05.408 254096 INFO nova.compute.manager [-] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Took 0.99 seconds to deallocate network for instance.
Nov 25 17:29:05 compute-0 nova_compute[254092]: 2025-11-25 17:29:05.449 254096 DEBUG oslo_concurrency.lockutils [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:29:05 compute-0 nova_compute[254092]: 2025-11-25 17:29:05.450 254096 DEBUG oslo_concurrency.lockutils [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:29:05 compute-0 nova_compute[254092]: 2025-11-25 17:29:05.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:29:05 compute-0 nova_compute[254092]: 2025-11-25 17:29:05.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 17:29:05 compute-0 nova_compute[254092]: 2025-11-25 17:29:05.512 254096 DEBUG nova.compute.manager [req-7023a8cc-8127-4d87-945f-95252fc20864 req-7a46357f-63f6-4de1-aa96-e0f384e4d58a a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Received event network-vif-deleted-78592e85-0e7a-4c36-bf1d-981efc74361b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:29:05 compute-0 nova_compute[254092]: 2025-11-25 17:29:05.518 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 17:29:05 compute-0 nova_compute[254092]: 2025-11-25 17:29:05.544 254096 DEBUG oslo_concurrency.processutils [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:29:05 compute-0 ceph-mon[74985]: osdmap e293: 3 total, 3 up, 3 in
Nov 25 17:29:05 compute-0 nova_compute[254092]: 2025-11-25 17:29:05.986 254096 DEBUG nova.compute.manager [req-ebca9615-85e8-4224-9c58-4bfc41b9e99a req-154bf0f0-14ad-4da2-a408-2dc9ee5d5e4d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Received event network-vif-plugged-78592e85-0e7a-4c36-bf1d-981efc74361b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:29:05 compute-0 nova_compute[254092]: 2025-11-25 17:29:05.988 254096 DEBUG oslo_concurrency.lockutils [req-ebca9615-85e8-4224-9c58-4bfc41b9e99a req-154bf0f0-14ad-4da2-a408-2dc9ee5d5e4d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "c983f16d-bf50-4a2c-aa05-213890fb387a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:29:05 compute-0 nova_compute[254092]: 2025-11-25 17:29:05.989 254096 DEBUG oslo_concurrency.lockutils [req-ebca9615-85e8-4224-9c58-4bfc41b9e99a req-154bf0f0-14ad-4da2-a408-2dc9ee5d5e4d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "c983f16d-bf50-4a2c-aa05-213890fb387a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:29:05 compute-0 nova_compute[254092]: 2025-11-25 17:29:05.991 254096 DEBUG oslo_concurrency.lockutils [req-ebca9615-85e8-4224-9c58-4bfc41b9e99a req-154bf0f0-14ad-4da2-a408-2dc9ee5d5e4d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "c983f16d-bf50-4a2c-aa05-213890fb387a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:29:05 compute-0 nova_compute[254092]: 2025-11-25 17:29:05.992 254096 DEBUG nova.compute.manager [req-ebca9615-85e8-4224-9c58-4bfc41b9e99a req-154bf0f0-14ad-4da2-a408-2dc9ee5d5e4d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] No waiting events found dispatching network-vif-plugged-78592e85-0e7a-4c36-bf1d-981efc74361b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:29:05 compute-0 nova_compute[254092]: 2025-11-25 17:29:05.992 254096 WARNING nova.compute.manager [req-ebca9615-85e8-4224-9c58-4bfc41b9e99a req-154bf0f0-14ad-4da2-a408-2dc9ee5d5e4d a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Received unexpected event network-vif-plugged-78592e85-0e7a-4c36-bf1d-981efc74361b for instance with vm_state deleted and task_state None.
Nov 25 17:29:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:29:05 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3359872651' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:29:06 compute-0 nova_compute[254092]: 2025-11-25 17:29:06.015 254096 DEBUG oslo_concurrency.processutils [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:29:06 compute-0 nova_compute[254092]: 2025-11-25 17:29:06.022 254096 DEBUG nova.compute.provider_tree [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:29:06 compute-0 nova_compute[254092]: 2025-11-25 17:29:06.042 254096 DEBUG nova.scheduler.client.report [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:29:06 compute-0 nova_compute[254092]: 2025-11-25 17:29:06.060 254096 DEBUG oslo_concurrency.lockutils [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:29:06 compute-0 nova_compute[254092]: 2025-11-25 17:29:06.085 254096 INFO nova.scheduler.client.report [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Deleted allocations for instance c983f16d-bf50-4a2c-aa05-213890fb387a
Nov 25 17:29:06 compute-0 nova_compute[254092]: 2025-11-25 17:29:06.133 254096 DEBUG oslo_concurrency.lockutils [None req-cddb7fcc-3270-41d2-9e69-08c0d30ce3a9 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "c983f16d-bf50-4a2c-aa05-213890fb387a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:29:06 compute-0 ceph-mon[74985]: pgmap v3175: 321 pgs: 321 active+clean; 266 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.6 MiB/s rd, 14 MiB/s wr, 201 op/s
Nov 25 17:29:06 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3359872651' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:29:07 compute-0 nova_compute[254092]: 2025-11-25 17:29:07.173 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:29:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3176: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 12 MiB/s wr, 221 op/s
Nov 25 17:29:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e293 do_prune osdmap full prune enabled
Nov 25 17:29:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e294 e294: 3 total, 3 up, 3 in
Nov 25 17:29:07 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e294: 3 total, 3 up, 3 in
Nov 25 17:29:08 compute-0 nova_compute[254092]: 2025-11-25 17:29:08.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:29:08 compute-0 nova_compute[254092]: 2025-11-25 17:29:08.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 17:29:08 compute-0 ceph-mon[74985]: pgmap v3176: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 12 MiB/s wr, 221 op/s
Nov 25 17:29:08 compute-0 ceph-mon[74985]: osdmap e294: 3 total, 3 up, 3 in
Nov 25 17:29:08 compute-0 nova_compute[254092]: 2025-11-25 17:29:08.879 254096 DEBUG nova.compute.manager [req-2885d7fd-faa3-48fe-8eef-e85ae60760c9 req-1abfd873-848c-493f-9f7b-95014a9a4482 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Received event network-changed-4c9f1115-1d14-4772-b092-e842077e160a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:29:08 compute-0 nova_compute[254092]: 2025-11-25 17:29:08.880 254096 DEBUG nova.compute.manager [req-2885d7fd-faa3-48fe-8eef-e85ae60760c9 req-1abfd873-848c-493f-9f7b-95014a9a4482 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Refreshing instance network info cache due to event network-changed-4c9f1115-1d14-4772-b092-e842077e160a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 17:29:08 compute-0 nova_compute[254092]: 2025-11-25 17:29:08.880 254096 DEBUG oslo_concurrency.lockutils [req-2885d7fd-faa3-48fe-8eef-e85ae60760c9 req-1abfd873-848c-493f-9f7b-95014a9a4482 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "refresh_cache-7d3f09ec-6bad-4674-ab8b-907560448ab0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:29:08 compute-0 nova_compute[254092]: 2025-11-25 17:29:08.881 254096 DEBUG oslo_concurrency.lockutils [req-2885d7fd-faa3-48fe-8eef-e85ae60760c9 req-1abfd873-848c-493f-9f7b-95014a9a4482 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquired lock "refresh_cache-7d3f09ec-6bad-4674-ab8b-907560448ab0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:29:08 compute-0 nova_compute[254092]: 2025-11-25 17:29:08.881 254096 DEBUG nova.network.neutron [req-2885d7fd-faa3-48fe-8eef-e85ae60760c9 req-1abfd873-848c-493f-9f7b-95014a9a4482 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Refreshing network info cache for port 4c9f1115-1d14-4772-b092-e842077e160a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 17:29:08 compute-0 nova_compute[254092]: 2025-11-25 17:29:08.890 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:29:08 compute-0 nova_compute[254092]: 2025-11-25 17:29:08.905 254096 DEBUG oslo_concurrency.lockutils [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Acquiring lock "7d3f09ec-6bad-4674-ab8b-907560448ab0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:29:08 compute-0 nova_compute[254092]: 2025-11-25 17:29:08.906 254096 DEBUG oslo_concurrency.lockutils [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "7d3f09ec-6bad-4674-ab8b-907560448ab0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:29:08 compute-0 nova_compute[254092]: 2025-11-25 17:29:08.906 254096 DEBUG oslo_concurrency.lockutils [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Acquiring lock "7d3f09ec-6bad-4674-ab8b-907560448ab0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:29:08 compute-0 nova_compute[254092]: 2025-11-25 17:29:08.907 254096 DEBUG oslo_concurrency.lockutils [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "7d3f09ec-6bad-4674-ab8b-907560448ab0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:29:08 compute-0 nova_compute[254092]: 2025-11-25 17:29:08.907 254096 DEBUG oslo_concurrency.lockutils [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "7d3f09ec-6bad-4674-ab8b-907560448ab0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:29:08 compute-0 nova_compute[254092]: 2025-11-25 17:29:08.909 254096 INFO nova.compute.manager [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Terminating instance
Nov 25 17:29:08 compute-0 nova_compute[254092]: 2025-11-25 17:29:08.910 254096 DEBUG nova.compute.manager [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 17:29:08 compute-0 kernel: tap4c9f1115-1d (unregistering): left promiscuous mode
Nov 25 17:29:08 compute-0 NetworkManager[48891]: <info>  [1764091748.9920] device (tap4c9f1115-1d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 17:29:09 compute-0 ovn_controller[153477]: 2025-11-25T17:29:08Z|01633|binding|INFO|Releasing lport 4c9f1115-1d14-4772-b092-e842077e160a from this chassis (sb_readonly=0)
Nov 25 17:29:09 compute-0 nova_compute[254092]: 2025-11-25 17:29:08.999 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:29:09 compute-0 ovn_controller[153477]: 2025-11-25T17:29:08Z|01634|binding|INFO|Setting lport 4c9f1115-1d14-4772-b092-e842077e160a down in Southbound
Nov 25 17:29:09 compute-0 ovn_controller[153477]: 2025-11-25T17:29:09Z|01635|binding|INFO|Removing iface tap4c9f1115-1d ovn-installed in OVS
Nov 25 17:29:09 compute-0 nova_compute[254092]: 2025-11-25 17:29:09.006 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:29:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:29:09.010 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:a2:36 10.100.0.14'], port_security=['fa:16:3e:dc:a2:36 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '7d3f09ec-6bad-4674-ab8b-907560448ab0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7a1a00fe-6b82-48c5-a534-9040cbe84499', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd295e1cfcd234c4391fda20fc4264d70', 'neutron:revision_number': '4', 'neutron:security_group_ids': '549c359f-07b7-49ee-bcd4-0d9c2b40c1ca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e580454e-cbcb-4071-803c-15d1a2e87179, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>], logical_port=4c9f1115-1d14-4772-b092-e842077e160a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0ed0a7af0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:29:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:29:09.011 163338 INFO neutron.agent.ovn.metadata.agent [-] Port 4c9f1115-1d14-4772-b092-e842077e160a in datapath 7a1a00fe-6b82-48c5-a534-9040cbe84499 unbound from our chassis
Nov 25 17:29:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:29:09.013 163338 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7a1a00fe-6b82-48c5-a534-9040cbe84499, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 17:29:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:29:09.014 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[9b9611eb-f1af-4676-abc3-d5221a42006f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:29:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:29:09.015 163338 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499 namespace which is not needed anymore
Nov 25 17:29:09 compute-0 nova_compute[254092]: 2025-11-25 17:29:09.038 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:29:09 compute-0 systemd[1]: machine-qemu\x2d185\x2dinstance\x2d00000097.scope: Deactivated successfully.
Nov 25 17:29:09 compute-0 systemd[1]: machine-qemu\x2d185\x2dinstance\x2d00000097.scope: Consumed 17.324s CPU time.
Nov 25 17:29:09 compute-0 systemd-machined[216343]: Machine qemu-185-instance-00000097 terminated.
Nov 25 17:29:09 compute-0 nova_compute[254092]: 2025-11-25 17:29:09.159 254096 INFO nova.virt.libvirt.driver [-] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Instance destroyed successfully.
Nov 25 17:29:09 compute-0 nova_compute[254092]: 2025-11-25 17:29:09.160 254096 DEBUG nova.objects.instance [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lazy-loading 'resources' on Instance uuid 7d3f09ec-6bad-4674-ab8b-907560448ab0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:29:09 compute-0 nova_compute[254092]: 2025-11-25 17:29:09.173 254096 DEBUG nova.virt.libvirt.vif [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T17:27:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-142394671',display_name='tempest-TestSnapshotPattern-server-142394671',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-142394671',id=151,image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPdD8/FH5EbDI3xVD9XL5KNVQ4Lzs3xHz1AUjnnFvQmxyC/VbF+auKCZxkKmIeFDv9Sg6Emei+wcjh3M01sOiN6AXBLXE6uzmUge5MlLrHdGnnVCH6sch0v/Lk4Ix4P1ww==',key_name='tempest-TestSnapshotPattern-1261516286',keypairs=<?>,launch_index=0,launched_at=2025-11-25T17:27:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d295e1cfcd234c4391fda20fc4264d70',ramdisk_id='',reservation_id='r-pb0kf19x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='8b512c8e-2281-41de-a668-eb983e174ba0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSnapshotPattern-1072505445',owner_user_name='tempest-TestSnapshotPattern-1072505445-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T17:28:03Z,user_data=None,user_id='e6ab922303af4fc0a70862a72b3ea9c8',uuid=7d3f09ec-6bad-4674-ab8b-907560448ab0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4c9f1115-1d14-4772-b092-e842077e160a", "address": "fa:16:3e:dc:a2:36", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c9f1115-1d", "ovs_interfaceid": "4c9f1115-1d14-4772-b092-e842077e160a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 17:29:09 compute-0 nova_compute[254092]: 2025-11-25 17:29:09.174 254096 DEBUG nova.network.os_vif_util [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Converting VIF {"id": "4c9f1115-1d14-4772-b092-e842077e160a", "address": "fa:16:3e:dc:a2:36", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c9f1115-1d", "ovs_interfaceid": "4c9f1115-1d14-4772-b092-e842077e160a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 17:29:09 compute-0 nova_compute[254092]: 2025-11-25 17:29:09.175 254096 DEBUG nova.network.os_vif_util [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:dc:a2:36,bridge_name='br-int',has_traffic_filtering=True,id=4c9f1115-1d14-4772-b092-e842077e160a,network=Network(7a1a00fe-6b82-48c5-a534-9040cbe84499),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c9f1115-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 17:29:09 compute-0 nova_compute[254092]: 2025-11-25 17:29:09.175 254096 DEBUG os_vif [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:dc:a2:36,bridge_name='br-int',has_traffic_filtering=True,id=4c9f1115-1d14-4772-b092-e842077e160a,network=Network(7a1a00fe-6b82-48c5-a534-9040cbe84499),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c9f1115-1d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 17:29:09 compute-0 nova_compute[254092]: 2025-11-25 17:29:09.177 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:29:09 compute-0 nova_compute[254092]: 2025-11-25 17:29:09.177 254096 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c9f1115-1d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:29:09 compute-0 nova_compute[254092]: 2025-11-25 17:29:09.179 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:29:09 compute-0 nova_compute[254092]: 2025-11-25 17:29:09.181 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:29:09 compute-0 nova_compute[254092]: 2025-11-25 17:29:09.187 254096 INFO os_vif [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:dc:a2:36,bridge_name='br-int',has_traffic_filtering=True,id=4c9f1115-1d14-4772-b092-e842077e160a,network=Network(7a1a00fe-6b82-48c5-a534-9040cbe84499),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c9f1115-1d')
Nov 25 17:29:09 compute-0 neutron-haproxy-ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499[428671]: [NOTICE]   (428675) : haproxy version is 2.8.14-c23fe91
Nov 25 17:29:09 compute-0 neutron-haproxy-ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499[428671]: [NOTICE]   (428675) : path to executable is /usr/sbin/haproxy
Nov 25 17:29:09 compute-0 neutron-haproxy-ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499[428671]: [WARNING]  (428675) : Exiting Master process...
Nov 25 17:29:09 compute-0 neutron-haproxy-ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499[428671]: [WARNING]  (428675) : Exiting Master process...
Nov 25 17:29:09 compute-0 neutron-haproxy-ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499[428671]: [ALERT]    (428675) : Current worker (428677) exited with code 143 (Terminated)
Nov 25 17:29:09 compute-0 neutron-haproxy-ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499[428671]: [WARNING]  (428675) : All workers exited. Exiting... (0)
Nov 25 17:29:09 compute-0 systemd[1]: libpod-251698afd09a6ee9e733792ce41610f2c56a9f0833c08a2fae7d6e44bde5b81b.scope: Deactivated successfully.
Nov 25 17:29:09 compute-0 podman[431500]: 2025-11-25 17:29:09.238952354 +0000 UTC m=+0.070062198 container died 251698afd09a6ee9e733792ce41610f2c56a9f0833c08a2fae7d6e44bde5b81b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 17:29:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-9bdd69f08ecf9e368f25903ee5b299b22e79808015eca7d99f37cf6be366f960-merged.mount: Deactivated successfully.
Nov 25 17:29:09 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-251698afd09a6ee9e733792ce41610f2c56a9f0833c08a2fae7d6e44bde5b81b-userdata-shm.mount: Deactivated successfully.
Nov 25 17:29:09 compute-0 podman[431500]: 2025-11-25 17:29:09.308541678 +0000 UTC m=+0.139651552 container cleanup 251698afd09a6ee9e733792ce41610f2c56a9f0833c08a2fae7d6e44bde5b81b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 17:29:09 compute-0 systemd[1]: libpod-conmon-251698afd09a6ee9e733792ce41610f2c56a9f0833c08a2fae7d6e44bde5b81b.scope: Deactivated successfully.
Nov 25 17:29:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3178: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 3.7 KiB/s wr, 74 op/s
Nov 25 17:29:09 compute-0 podman[431548]: 2025-11-25 17:29:09.401364834 +0000 UTC m=+0.056232821 container remove 251698afd09a6ee9e733792ce41610f2c56a9f0833c08a2fae7d6e44bde5b81b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:29:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:29:09.407 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[b223ca17-465b-4584-b6b8-4f02f88b1e1e]: (4, ('Tue Nov 25 05:29:09 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499 (251698afd09a6ee9e733792ce41610f2c56a9f0833c08a2fae7d6e44bde5b81b)\n251698afd09a6ee9e733792ce41610f2c56a9f0833c08a2fae7d6e44bde5b81b\nTue Nov 25 05:29:09 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499 (251698afd09a6ee9e733792ce41610f2c56a9f0833c08a2fae7d6e44bde5b81b)\n251698afd09a6ee9e733792ce41610f2c56a9f0833c08a2fae7d6e44bde5b81b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:29:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:29:09.408 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[3af4bb45-b4f8-4d08-80a9-faee206853d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:29:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:29:09.409 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7a1a00fe-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:29:09 compute-0 nova_compute[254092]: 2025-11-25 17:29:09.454 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:29:09 compute-0 kernel: tap7a1a00fe-60: left promiscuous mode
Nov 25 17:29:09 compute-0 nova_compute[254092]: 2025-11-25 17:29:09.468 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:29:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:29:09.471 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d2a76c88-6c4a-4418-a380-31d6efa71c22]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:29:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:29:09.494 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[62348a13-91d6-44ff-9604-f751c264e591]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:29:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:29:09.495 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[d1f65cb0-b178-44c3-8560-06a458c693ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:29:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:29:09.510 270486 DEBUG oslo.privsep.daemon [-] privsep: reply[725bc97a-9f65-41cd-ab3e-166d639c2a21]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 810761, 'reachable_time': 27080, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 431564, 'error': None, 'target': 'ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:29:09 compute-0 systemd[1]: run-netns-ovnmeta\x2d7a1a00fe\x2d6b82\x2d48c5\x2da534\x2d9040cbe84499.mount: Deactivated successfully.
Nov 25 17:29:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:29:09.514 163453 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7a1a00fe-6b82-48c5-a534-9040cbe84499 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 17:29:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:29:09.514 163453 DEBUG oslo.privsep.daemon [-] privsep: reply[42f7f3c1-680a-49ff-b5bb-bbfb84aefabb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 17:29:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:29:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e294 do_prune osdmap full prune enabled
Nov 25 17:29:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e295 e295: 3 total, 3 up, 3 in
Nov 25 17:29:09 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e295: 3 total, 3 up, 3 in
Nov 25 17:29:09 compute-0 nova_compute[254092]: 2025-11-25 17:29:09.635 254096 INFO nova.virt.libvirt.driver [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Deleting instance files /var/lib/nova/instances/7d3f09ec-6bad-4674-ab8b-907560448ab0_del
Nov 25 17:29:09 compute-0 nova_compute[254092]: 2025-11-25 17:29:09.636 254096 INFO nova.virt.libvirt.driver [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Deletion of /var/lib/nova/instances/7d3f09ec-6bad-4674-ab8b-907560448ab0_del complete
Nov 25 17:29:09 compute-0 nova_compute[254092]: 2025-11-25 17:29:09.691 254096 INFO nova.compute.manager [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Took 0.78 seconds to destroy the instance on the hypervisor.
Nov 25 17:29:09 compute-0 nova_compute[254092]: 2025-11-25 17:29:09.692 254096 DEBUG oslo.service.loopingcall [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 17:29:09 compute-0 nova_compute[254092]: 2025-11-25 17:29:09.692 254096 DEBUG nova.compute.manager [-] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 17:29:09 compute-0 nova_compute[254092]: 2025-11-25 17:29:09.692 254096 DEBUG nova.network.neutron [-] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 17:29:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:29:09.724 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=66, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=65) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:29:09 compute-0 nova_compute[254092]: 2025-11-25 17:29:09.724 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:29:09 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:29:09.725 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 17:29:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:29:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:29:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:29:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:29:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:29:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:29:10 compute-0 ceph-mon[74985]: pgmap v3178: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 3.7 KiB/s wr, 74 op/s
Nov 25 17:29:10 compute-0 ceph-mon[74985]: osdmap e295: 3 total, 3 up, 3 in
Nov 25 17:29:10 compute-0 nova_compute[254092]: 2025-11-25 17:29:10.978 254096 DEBUG nova.compute.manager [req-5a76d6f2-b9e8-4761-ad8c-2aa54be2704c req-c0bff231-dd08-4447-9f77-6459d44cfbef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Received event network-vif-unplugged-4c9f1115-1d14-4772-b092-e842077e160a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:29:10 compute-0 nova_compute[254092]: 2025-11-25 17:29:10.979 254096 DEBUG oslo_concurrency.lockutils [req-5a76d6f2-b9e8-4761-ad8c-2aa54be2704c req-c0bff231-dd08-4447-9f77-6459d44cfbef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7d3f09ec-6bad-4674-ab8b-907560448ab0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:29:10 compute-0 nova_compute[254092]: 2025-11-25 17:29:10.980 254096 DEBUG oslo_concurrency.lockutils [req-5a76d6f2-b9e8-4761-ad8c-2aa54be2704c req-c0bff231-dd08-4447-9f77-6459d44cfbef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7d3f09ec-6bad-4674-ab8b-907560448ab0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:29:10 compute-0 nova_compute[254092]: 2025-11-25 17:29:10.980 254096 DEBUG oslo_concurrency.lockutils [req-5a76d6f2-b9e8-4761-ad8c-2aa54be2704c req-c0bff231-dd08-4447-9f77-6459d44cfbef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7d3f09ec-6bad-4674-ab8b-907560448ab0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:29:10 compute-0 nova_compute[254092]: 2025-11-25 17:29:10.981 254096 DEBUG nova.compute.manager [req-5a76d6f2-b9e8-4761-ad8c-2aa54be2704c req-c0bff231-dd08-4447-9f77-6459d44cfbef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] No waiting events found dispatching network-vif-unplugged-4c9f1115-1d14-4772-b092-e842077e160a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:29:10 compute-0 nova_compute[254092]: 2025-11-25 17:29:10.981 254096 DEBUG nova.compute.manager [req-5a76d6f2-b9e8-4761-ad8c-2aa54be2704c req-c0bff231-dd08-4447-9f77-6459d44cfbef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Received event network-vif-unplugged-4c9f1115-1d14-4772-b092-e842077e160a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 17:29:10 compute-0 nova_compute[254092]: 2025-11-25 17:29:10.982 254096 DEBUG nova.compute.manager [req-5a76d6f2-b9e8-4761-ad8c-2aa54be2704c req-c0bff231-dd08-4447-9f77-6459d44cfbef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Received event network-vif-plugged-4c9f1115-1d14-4772-b092-e842077e160a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:29:10 compute-0 nova_compute[254092]: 2025-11-25 17:29:10.982 254096 DEBUG oslo_concurrency.lockutils [req-5a76d6f2-b9e8-4761-ad8c-2aa54be2704c req-c0bff231-dd08-4447-9f77-6459d44cfbef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Acquiring lock "7d3f09ec-6bad-4674-ab8b-907560448ab0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:29:10 compute-0 nova_compute[254092]: 2025-11-25 17:29:10.983 254096 DEBUG oslo_concurrency.lockutils [req-5a76d6f2-b9e8-4761-ad8c-2aa54be2704c req-c0bff231-dd08-4447-9f77-6459d44cfbef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7d3f09ec-6bad-4674-ab8b-907560448ab0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:29:10 compute-0 nova_compute[254092]: 2025-11-25 17:29:10.983 254096 DEBUG oslo_concurrency.lockutils [req-5a76d6f2-b9e8-4761-ad8c-2aa54be2704c req-c0bff231-dd08-4447-9f77-6459d44cfbef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Lock "7d3f09ec-6bad-4674-ab8b-907560448ab0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:29:10 compute-0 nova_compute[254092]: 2025-11-25 17:29:10.984 254096 DEBUG nova.compute.manager [req-5a76d6f2-b9e8-4761-ad8c-2aa54be2704c req-c0bff231-dd08-4447-9f77-6459d44cfbef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] No waiting events found dispatching network-vif-plugged-4c9f1115-1d14-4772-b092-e842077e160a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 17:29:10 compute-0 nova_compute[254092]: 2025-11-25 17:29:10.984 254096 WARNING nova.compute.manager [req-5a76d6f2-b9e8-4761-ad8c-2aa54be2704c req-c0bff231-dd08-4447-9f77-6459d44cfbef a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Received unexpected event network-vif-plugged-4c9f1115-1d14-4772-b092-e842077e160a for instance with vm_state active and task_state deleting.
Nov 25 17:29:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3180: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 122 KiB/s rd, 8.6 KiB/s wr, 178 op/s
Nov 25 17:29:11 compute-0 nova_compute[254092]: 2025-11-25 17:29:11.449 254096 DEBUG nova.network.neutron [req-2885d7fd-faa3-48fe-8eef-e85ae60760c9 req-1abfd873-848c-493f-9f7b-95014a9a4482 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Updated VIF entry in instance network info cache for port 4c9f1115-1d14-4772-b092-e842077e160a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 17:29:11 compute-0 nova_compute[254092]: 2025-11-25 17:29:11.450 254096 DEBUG nova.network.neutron [req-2885d7fd-faa3-48fe-8eef-e85ae60760c9 req-1abfd873-848c-493f-9f7b-95014a9a4482 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Updating instance_info_cache with network_info: [{"id": "4c9f1115-1d14-4772-b092-e842077e160a", "address": "fa:16:3e:dc:a2:36", "network": {"id": "7a1a00fe-6b82-48c5-a534-9040cbe84499", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-324533992-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d295e1cfcd234c4391fda20fc4264d70", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c9f1115-1d", "ovs_interfaceid": "4c9f1115-1d14-4772-b092-e842077e160a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:29:11 compute-0 nova_compute[254092]: 2025-11-25 17:29:11.475 254096 DEBUG oslo_concurrency.lockutils [req-2885d7fd-faa3-48fe-8eef-e85ae60760c9 req-1abfd873-848c-493f-9f7b-95014a9a4482 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] Releasing lock "refresh_cache-7d3f09ec-6bad-4674-ab8b-907560448ab0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:29:11 compute-0 nova_compute[254092]: 2025-11-25 17:29:11.476 254096 DEBUG nova.network.neutron [-] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:29:11 compute-0 nova_compute[254092]: 2025-11-25 17:29:11.487 254096 INFO nova.compute.manager [-] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Took 1.80 seconds to deallocate network for instance.
Nov 25 17:29:11 compute-0 nova_compute[254092]: 2025-11-25 17:29:11.528 254096 DEBUG oslo_concurrency.lockutils [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:29:11 compute-0 nova_compute[254092]: 2025-11-25 17:29:11.529 254096 DEBUG oslo_concurrency.lockutils [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:29:11 compute-0 nova_compute[254092]: 2025-11-25 17:29:11.581 254096 DEBUG oslo_concurrency.processutils [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:29:12 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:29:12 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2163608115' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:29:12 compute-0 nova_compute[254092]: 2025-11-25 17:29:12.026 254096 DEBUG oslo_concurrency.processutils [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:29:12 compute-0 nova_compute[254092]: 2025-11-25 17:29:12.038 254096 DEBUG nova.compute.provider_tree [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:29:12 compute-0 nova_compute[254092]: 2025-11-25 17:29:12.065 254096 DEBUG nova.scheduler.client.report [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:29:12 compute-0 nova_compute[254092]: 2025-11-25 17:29:12.091 254096 DEBUG oslo_concurrency.lockutils [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:29:12 compute-0 nova_compute[254092]: 2025-11-25 17:29:12.123 254096 INFO nova.scheduler.client.report [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Deleted allocations for instance 7d3f09ec-6bad-4674-ab8b-907560448ab0
Nov 25 17:29:12 compute-0 nova_compute[254092]: 2025-11-25 17:29:12.177 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:29:12 compute-0 nova_compute[254092]: 2025-11-25 17:29:12.215 254096 DEBUG oslo_concurrency.lockutils [None req-d0d42a36-883a-4b27-9288-dbf22010acc0 e6ab922303af4fc0a70862a72b3ea9c8 d295e1cfcd234c4391fda20fc4264d70 - - default default] Lock "7d3f09ec-6bad-4674-ab8b-907560448ab0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.309s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:29:12 compute-0 ceph-mon[74985]: pgmap v3180: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 122 KiB/s rd, 8.6 KiB/s wr, 178 op/s
Nov 25 17:29:12 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2163608115' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:29:13 compute-0 nova_compute[254092]: 2025-11-25 17:29:13.087 254096 DEBUG nova.compute.manager [req-0fb21cb3-9df6-41be-8873-77e04dc75d93 req-2fc0bacc-4d8e-408b-aece-c8fe348cea82 a343892051c54822bb8c05f4e83cc635 8bfdcec9b8724aecb51627572571e95d - - default default] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Received event network-vif-deleted-4c9f1115-1d14-4772-b092-e842077e160a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 17:29:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3181: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 90 KiB/s rd, 6.5 KiB/s wr, 133 op/s
Nov 25 17:29:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:29:13.669 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:29:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:29:13.669 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:29:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:29:13.670 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:29:14 compute-0 nova_compute[254092]: 2025-11-25 17:29:14.224 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:29:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:29:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e295 do_prune osdmap full prune enabled
Nov 25 17:29:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e296 e296: 3 total, 3 up, 3 in
Nov 25 17:29:14 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e296: 3 total, 3 up, 3 in
Nov 25 17:29:14 compute-0 ceph-mon[74985]: pgmap v3181: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 90 KiB/s rd, 6.5 KiB/s wr, 133 op/s
Nov 25 17:29:14 compute-0 ceph-mon[74985]: osdmap e296: 3 total, 3 up, 3 in
Nov 25 17:29:14 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:29:14.727 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '66'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:29:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3183: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 3.6 KiB/s wr, 78 op/s
Nov 25 17:29:16 compute-0 ceph-mon[74985]: pgmap v3183: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 3.6 KiB/s wr, 78 op/s
Nov 25 17:29:17 compute-0 nova_compute[254092]: 2025-11-25 17:29:17.201 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:29:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3184: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 3.5 KiB/s wr, 76 op/s
Nov 25 17:29:17 compute-0 nova_compute[254092]: 2025-11-25 17:29:17.748 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:29:17 compute-0 nova_compute[254092]: 2025-11-25 17:29:17.874 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:29:18 compute-0 ceph-mon[74985]: pgmap v3184: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 3.5 KiB/s wr, 76 op/s
Nov 25 17:29:18 compute-0 nova_compute[254092]: 2025-11-25 17:29:18.863 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764091743.861149, c983f16d-bf50-4a2c-aa05-213890fb387a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:29:18 compute-0 nova_compute[254092]: 2025-11-25 17:29:18.863 254096 INFO nova.compute.manager [-] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] VM Stopped (Lifecycle Event)
Nov 25 17:29:18 compute-0 nova_compute[254092]: 2025-11-25 17:29:18.901 254096 DEBUG nova.compute.manager [None req-fabedd34-61f2-4cdf-a63c-5e0a3a69495f - - - - - -] [instance: c983f16d-bf50-4a2c-aa05-213890fb387a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:29:19 compute-0 nova_compute[254092]: 2025-11-25 17:29:19.226 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:29:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3185: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 2.9 KiB/s wr, 62 op/s
Nov 25 17:29:19 compute-0 nova_compute[254092]: 2025-11-25 17:29:19.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:29:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:29:20 compute-0 ceph-mon[74985]: pgmap v3185: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 2.9 KiB/s wr, 62 op/s
Nov 25 17:29:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3186: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:29:22 compute-0 nova_compute[254092]: 2025-11-25 17:29:22.204 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:29:22 compute-0 ceph-mon[74985]: pgmap v3186: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:29:22 compute-0 podman[431589]: 2025-11-25 17:29:22.649523658 +0000 UTC m=+0.062824530 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Nov 25 17:29:22 compute-0 podman[431588]: 2025-11-25 17:29:22.660077066 +0000 UTC m=+0.076018750 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.build-date=20251118)
Nov 25 17:29:22 compute-0 podman[431590]: 2025-11-25 17:29:22.696084746 +0000 UTC m=+0.107549908 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 25 17:29:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3187: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:29:24 compute-0 nova_compute[254092]: 2025-11-25 17:29:24.157 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764091749.1553006, 7d3f09ec-6bad-4674-ab8b-907560448ab0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:29:24 compute-0 nova_compute[254092]: 2025-11-25 17:29:24.157 254096 INFO nova.compute.manager [-] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] VM Stopped (Lifecycle Event)
Nov 25 17:29:24 compute-0 nova_compute[254092]: 2025-11-25 17:29:24.183 254096 DEBUG nova.compute.manager [None req-2599a26e-1353-4e55-a5ff-af6b3e64d7c2 - - - - - -] [instance: 7d3f09ec-6bad-4674-ab8b-907560448ab0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:29:24 compute-0 nova_compute[254092]: 2025-11-25 17:29:24.266 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:29:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:29:24 compute-0 ceph-mon[74985]: pgmap v3187: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:29:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3188: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:29:26 compute-0 ceph-mon[74985]: pgmap v3188: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:29:27 compute-0 nova_compute[254092]: 2025-11-25 17:29:27.206 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:29:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3189: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:29:28 compute-0 ceph-mon[74985]: pgmap v3189: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:29:29 compute-0 nova_compute[254092]: 2025-11-25 17:29:29.269 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:29:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3190: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:29:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:29:30 compute-0 ceph-mon[74985]: pgmap v3190: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:29:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3191: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:29:32 compute-0 nova_compute[254092]: 2025-11-25 17:29:32.115 254096 DEBUG oslo_concurrency.lockutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Acquiring lock "93a89288-892f-44cf-8e55-4b1f01a9bbb5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:29:32 compute-0 nova_compute[254092]: 2025-11-25 17:29:32.116 254096 DEBUG oslo_concurrency.lockutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Lock "93a89288-892f-44cf-8e55-4b1f01a9bbb5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:29:32 compute-0 nova_compute[254092]: 2025-11-25 17:29:32.129 254096 DEBUG nova.compute.manager [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 17:29:32 compute-0 nova_compute[254092]: 2025-11-25 17:29:32.203 254096 DEBUG oslo_concurrency.lockutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:29:32 compute-0 nova_compute[254092]: 2025-11-25 17:29:32.204 254096 DEBUG oslo_concurrency.lockutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:29:32 compute-0 nova_compute[254092]: 2025-11-25 17:29:32.208 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:29:32 compute-0 nova_compute[254092]: 2025-11-25 17:29:32.215 254096 DEBUG nova.virt.hardware [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 17:29:32 compute-0 nova_compute[254092]: 2025-11-25 17:29:32.216 254096 INFO nova.compute.claims [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Claim successful on node compute-0.ctlplane.example.com
Nov 25 17:29:32 compute-0 nova_compute[254092]: 2025-11-25 17:29:32.324 254096 DEBUG oslo_concurrency.processutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:29:32 compute-0 ceph-mon[74985]: pgmap v3191: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:29:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:29:32 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2261067995' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:29:32 compute-0 nova_compute[254092]: 2025-11-25 17:29:32.796 254096 DEBUG oslo_concurrency.processutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:29:32 compute-0 nova_compute[254092]: 2025-11-25 17:29:32.802 254096 DEBUG nova.compute.provider_tree [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:29:32 compute-0 nova_compute[254092]: 2025-11-25 17:29:32.815 254096 DEBUG nova.scheduler.client.report [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:29:32 compute-0 nova_compute[254092]: 2025-11-25 17:29:32.834 254096 DEBUG oslo_concurrency.lockutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:29:32 compute-0 nova_compute[254092]: 2025-11-25 17:29:32.835 254096 DEBUG nova.compute.manager [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 17:29:32 compute-0 nova_compute[254092]: 2025-11-25 17:29:32.869 254096 DEBUG nova.compute.manager [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 17:29:32 compute-0 nova_compute[254092]: 2025-11-25 17:29:32.870 254096 DEBUG nova.network.neutron [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 17:29:32 compute-0 nova_compute[254092]: 2025-11-25 17:29:32.885 254096 INFO nova.virt.libvirt.driver [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 17:29:32 compute-0 nova_compute[254092]: 2025-11-25 17:29:32.899 254096 DEBUG nova.compute.manager [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 17:29:32 compute-0 nova_compute[254092]: 2025-11-25 17:29:32.982 254096 DEBUG nova.compute.manager [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 17:29:32 compute-0 nova_compute[254092]: 2025-11-25 17:29:32.984 254096 DEBUG nova.virt.libvirt.driver [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 17:29:32 compute-0 nova_compute[254092]: 2025-11-25 17:29:32.985 254096 INFO nova.virt.libvirt.driver [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Creating image(s)
Nov 25 17:29:33 compute-0 nova_compute[254092]: 2025-11-25 17:29:33.026 254096 DEBUG nova.storage.rbd_utils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] rbd image 93a89288-892f-44cf-8e55-4b1f01a9bbb5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:29:33 compute-0 nova_compute[254092]: 2025-11-25 17:29:33.069 254096 DEBUG nova.storage.rbd_utils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] rbd image 93a89288-892f-44cf-8e55-4b1f01a9bbb5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:29:33 compute-0 nova_compute[254092]: 2025-11-25 17:29:33.100 254096 DEBUG nova.storage.rbd_utils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] rbd image 93a89288-892f-44cf-8e55-4b1f01a9bbb5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:29:33 compute-0 nova_compute[254092]: 2025-11-25 17:29:33.105 254096 DEBUG oslo_concurrency.processutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:29:33 compute-0 nova_compute[254092]: 2025-11-25 17:29:33.191 254096 DEBUG oslo_concurrency.processutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:29:33 compute-0 nova_compute[254092]: 2025-11-25 17:29:33.192 254096 DEBUG oslo_concurrency.lockutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Acquiring lock "9e29bca11122733e2b34fccd9459097794a3a169" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:29:33 compute-0 nova_compute[254092]: 2025-11-25 17:29:33.193 254096 DEBUG oslo_concurrency.lockutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:29:33 compute-0 nova_compute[254092]: 2025-11-25 17:29:33.193 254096 DEBUG oslo_concurrency.lockutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Lock "9e29bca11122733e2b34fccd9459097794a3a169" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:29:33 compute-0 nova_compute[254092]: 2025-11-25 17:29:33.219 254096 DEBUG nova.storage.rbd_utils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] rbd image 93a89288-892f-44cf-8e55-4b1f01a9bbb5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:29:33 compute-0 nova_compute[254092]: 2025-11-25 17:29:33.224 254096 DEBUG oslo_concurrency.processutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 93a89288-892f-44cf-8e55-4b1f01a9bbb5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:29:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3192: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:29:33 compute-0 nova_compute[254092]: 2025-11-25 17:29:33.363 254096 DEBUG nova.network.neutron [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 25 17:29:33 compute-0 nova_compute[254092]: 2025-11-25 17:29:33.364 254096 DEBUG nova.compute.manager [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 17:29:33 compute-0 nova_compute[254092]: 2025-11-25 17:29:33.493 254096 DEBUG oslo_concurrency.processutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 93a89288-892f-44cf-8e55-4b1f01a9bbb5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.268s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:29:33 compute-0 nova_compute[254092]: 2025-11-25 17:29:33.575 254096 DEBUG nova.storage.rbd_utils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] resizing rbd image 93a89288-892f-44cf-8e55-4b1f01a9bbb5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 17:29:33 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2261067995' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:29:33 compute-0 ceph-mon[74985]: pgmap v3192: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:29:33 compute-0 nova_compute[254092]: 2025-11-25 17:29:33.687 254096 DEBUG nova.objects.instance [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Lazy-loading 'migration_context' on Instance uuid 93a89288-892f-44cf-8e55-4b1f01a9bbb5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:29:33 compute-0 nova_compute[254092]: 2025-11-25 17:29:33.704 254096 DEBUG nova.virt.libvirt.driver [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 17:29:33 compute-0 nova_compute[254092]: 2025-11-25 17:29:33.704 254096 DEBUG nova.virt.libvirt.driver [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Ensure instance console log exists: /var/lib/nova/instances/93a89288-892f-44cf-8e55-4b1f01a9bbb5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 17:29:33 compute-0 nova_compute[254092]: 2025-11-25 17:29:33.705 254096 DEBUG oslo_concurrency.lockutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:29:33 compute-0 nova_compute[254092]: 2025-11-25 17:29:33.705 254096 DEBUG oslo_concurrency.lockutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:29:33 compute-0 nova_compute[254092]: 2025-11-25 17:29:33.706 254096 DEBUG oslo_concurrency.lockutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:29:33 compute-0 nova_compute[254092]: 2025-11-25 17:29:33.708 254096 DEBUG nova.virt.libvirt.driver [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'image_id': '8b512c8e-2281-41de-a668-eb983e174ba0'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 17:29:33 compute-0 nova_compute[254092]: 2025-11-25 17:29:33.714 254096 WARNING nova.virt.libvirt.driver [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:29:33 compute-0 nova_compute[254092]: 2025-11-25 17:29:33.720 254096 DEBUG nova.virt.libvirt.host [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 17:29:33 compute-0 nova_compute[254092]: 2025-11-25 17:29:33.720 254096 DEBUG nova.virt.libvirt.host [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 17:29:33 compute-0 nova_compute[254092]: 2025-11-25 17:29:33.724 254096 DEBUG nova.virt.libvirt.host [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 17:29:33 compute-0 nova_compute[254092]: 2025-11-25 17:29:33.725 254096 DEBUG nova.virt.libvirt.host [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 17:29:33 compute-0 nova_compute[254092]: 2025-11-25 17:29:33.725 254096 DEBUG nova.virt.libvirt.driver [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 17:29:33 compute-0 nova_compute[254092]: 2025-11-25 17:29:33.725 254096 DEBUG nova.virt.hardware [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T16:23:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fcdb6923-b47d-40c5-8958-abad57255484',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T16:23:34Z,direct_url=<?>,disk_format='qcow2',id=8b512c8e-2281-41de-a668-eb983e174ba0,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e74a6356cdd84eed896b6568945cecc5',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T16:23:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 17:29:33 compute-0 nova_compute[254092]: 2025-11-25 17:29:33.726 254096 DEBUG nova.virt.hardware [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 17:29:33 compute-0 nova_compute[254092]: 2025-11-25 17:29:33.726 254096 DEBUG nova.virt.hardware [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 17:29:33 compute-0 nova_compute[254092]: 2025-11-25 17:29:33.727 254096 DEBUG nova.virt.hardware [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 17:29:33 compute-0 nova_compute[254092]: 2025-11-25 17:29:33.727 254096 DEBUG nova.virt.hardware [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 17:29:33 compute-0 nova_compute[254092]: 2025-11-25 17:29:33.727 254096 DEBUG nova.virt.hardware [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 17:29:33 compute-0 nova_compute[254092]: 2025-11-25 17:29:33.727 254096 DEBUG nova.virt.hardware [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 17:29:33 compute-0 nova_compute[254092]: 2025-11-25 17:29:33.728 254096 DEBUG nova.virt.hardware [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 17:29:33 compute-0 nova_compute[254092]: 2025-11-25 17:29:33.728 254096 DEBUG nova.virt.hardware [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 17:29:33 compute-0 nova_compute[254092]: 2025-11-25 17:29:33.728 254096 DEBUG nova.virt.hardware [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 17:29:33 compute-0 nova_compute[254092]: 2025-11-25 17:29:33.729 254096 DEBUG nova.virt.hardware [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 17:29:33 compute-0 nova_compute[254092]: 2025-11-25 17:29:33.732 254096 DEBUG oslo_concurrency.processutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:29:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:29:34 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2724810595' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:29:34 compute-0 nova_compute[254092]: 2025-11-25 17:29:34.235 254096 DEBUG oslo_concurrency.processutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:29:34 compute-0 nova_compute[254092]: 2025-11-25 17:29:34.282 254096 DEBUG nova.storage.rbd_utils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] rbd image 93a89288-892f-44cf-8e55-4b1f01a9bbb5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:29:34 compute-0 nova_compute[254092]: 2025-11-25 17:29:34.290 254096 DEBUG oslo_concurrency.processutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:29:34 compute-0 nova_compute[254092]: 2025-11-25 17:29:34.340 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:29:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:29:34 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2724810595' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:29:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 17:29:34 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2028815429' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:29:34 compute-0 nova_compute[254092]: 2025-11-25 17:29:34.809 254096 DEBUG oslo_concurrency.processutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:29:34 compute-0 nova_compute[254092]: 2025-11-25 17:29:34.813 254096 DEBUG nova.objects.instance [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Lazy-loading 'pci_devices' on Instance uuid 93a89288-892f-44cf-8e55-4b1f01a9bbb5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:29:34 compute-0 nova_compute[254092]: 2025-11-25 17:29:34.842 254096 DEBUG nova.virt.libvirt.driver [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] End _get_guest_xml xml=<domain type="kvm">
Nov 25 17:29:34 compute-0 nova_compute[254092]:   <uuid>93a89288-892f-44cf-8e55-4b1f01a9bbb5</uuid>
Nov 25 17:29:34 compute-0 nova_compute[254092]:   <name>instance-00000099</name>
Nov 25 17:29:34 compute-0 nova_compute[254092]:   <memory>131072</memory>
Nov 25 17:29:34 compute-0 nova_compute[254092]:   <vcpu>1</vcpu>
Nov 25 17:29:34 compute-0 nova_compute[254092]:   <metadata>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 17:29:34 compute-0 nova_compute[254092]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:       <nova:name>tempest-AggregatesAdminTestJSON-server-1301247845</nova:name>
Nov 25 17:29:34 compute-0 nova_compute[254092]:       <nova:creationTime>2025-11-25 17:29:33</nova:creationTime>
Nov 25 17:29:34 compute-0 nova_compute[254092]:       <nova:flavor name="m1.nano">
Nov 25 17:29:34 compute-0 nova_compute[254092]:         <nova:memory>128</nova:memory>
Nov 25 17:29:34 compute-0 nova_compute[254092]:         <nova:disk>1</nova:disk>
Nov 25 17:29:34 compute-0 nova_compute[254092]:         <nova:swap>0</nova:swap>
Nov 25 17:29:34 compute-0 nova_compute[254092]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 17:29:34 compute-0 nova_compute[254092]:         <nova:vcpus>1</nova:vcpus>
Nov 25 17:29:34 compute-0 nova_compute[254092]:       </nova:flavor>
Nov 25 17:29:34 compute-0 nova_compute[254092]:       <nova:owner>
Nov 25 17:29:34 compute-0 nova_compute[254092]:         <nova:user uuid="1a8467fdfffc42839788565288728335">tempest-AggregatesAdminTestJSON-1160613454-project-member</nova:user>
Nov 25 17:29:34 compute-0 nova_compute[254092]:         <nova:project uuid="8d10d0047c0e447bb9993376e290a416">tempest-AggregatesAdminTestJSON-1160613454</nova:project>
Nov 25 17:29:34 compute-0 nova_compute[254092]:       </nova:owner>
Nov 25 17:29:34 compute-0 nova_compute[254092]:       <nova:root type="image" uuid="8b512c8e-2281-41de-a668-eb983e174ba0"/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:       <nova:ports/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     </nova:instance>
Nov 25 17:29:34 compute-0 nova_compute[254092]:   </metadata>
Nov 25 17:29:34 compute-0 nova_compute[254092]:   <sysinfo type="smbios">
Nov 25 17:29:34 compute-0 nova_compute[254092]:     <system>
Nov 25 17:29:34 compute-0 nova_compute[254092]:       <entry name="manufacturer">RDO</entry>
Nov 25 17:29:34 compute-0 nova_compute[254092]:       <entry name="product">OpenStack Compute</entry>
Nov 25 17:29:34 compute-0 nova_compute[254092]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 17:29:34 compute-0 nova_compute[254092]:       <entry name="serial">93a89288-892f-44cf-8e55-4b1f01a9bbb5</entry>
Nov 25 17:29:34 compute-0 nova_compute[254092]:       <entry name="uuid">93a89288-892f-44cf-8e55-4b1f01a9bbb5</entry>
Nov 25 17:29:34 compute-0 nova_compute[254092]:       <entry name="family">Virtual Machine</entry>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     </system>
Nov 25 17:29:34 compute-0 nova_compute[254092]:   </sysinfo>
Nov 25 17:29:34 compute-0 nova_compute[254092]:   <os>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     <boot dev="hd"/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     <smbios mode="sysinfo"/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:   </os>
Nov 25 17:29:34 compute-0 nova_compute[254092]:   <features>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     <acpi/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     <apic/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     <vmcoreinfo/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:   </features>
Nov 25 17:29:34 compute-0 nova_compute[254092]:   <clock offset="utc">
Nov 25 17:29:34 compute-0 nova_compute[254092]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     <timer name="hpet" present="no"/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:   </clock>
Nov 25 17:29:34 compute-0 nova_compute[254092]:   <cpu mode="host-model" match="exact">
Nov 25 17:29:34 compute-0 nova_compute[254092]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:   </cpu>
Nov 25 17:29:34 compute-0 nova_compute[254092]:   <devices>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     <disk type="network" device="disk">
Nov 25 17:29:34 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/93a89288-892f-44cf-8e55-4b1f01a9bbb5_disk">
Nov 25 17:29:34 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:       </source>
Nov 25 17:29:34 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:29:34 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:29:34 compute-0 nova_compute[254092]:       <target dev="vda" bus="virtio"/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     <disk type="network" device="cdrom">
Nov 25 17:29:34 compute-0 nova_compute[254092]:       <driver type="raw" cache="none"/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:       <source protocol="rbd" name="vms/93a89288-892f-44cf-8e55-4b1f01a9bbb5_disk.config">
Nov 25 17:29:34 compute-0 nova_compute[254092]:         <host name="192.168.122.100" port="6789"/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:       </source>
Nov 25 17:29:34 compute-0 nova_compute[254092]:       <auth username="openstack">
Nov 25 17:29:34 compute-0 nova_compute[254092]:         <secret type="ceph" uuid="d82baeae-c742-50a4-b8f6-b5257c8a2c92"/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:       </auth>
Nov 25 17:29:34 compute-0 nova_compute[254092]:       <target dev="sda" bus="sata"/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     </disk>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     <serial type="pty">
Nov 25 17:29:34 compute-0 nova_compute[254092]:       <log file="/var/lib/nova/instances/93a89288-892f-44cf-8e55-4b1f01a9bbb5/console.log" append="off"/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     </serial>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     <video>
Nov 25 17:29:34 compute-0 nova_compute[254092]:       <model type="virtio"/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     </video>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     <input type="tablet" bus="usb"/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     <rng model="virtio">
Nov 25 17:29:34 compute-0 nova_compute[254092]:       <backend model="random">/dev/urandom</backend>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     </rng>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root"/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     <controller type="usb" index="0"/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     <memballoon model="virtio">
Nov 25 17:29:34 compute-0 nova_compute[254092]:       <stats period="10"/>
Nov 25 17:29:34 compute-0 nova_compute[254092]:     </memballoon>
Nov 25 17:29:34 compute-0 nova_compute[254092]:   </devices>
Nov 25 17:29:34 compute-0 nova_compute[254092]: </domain>
Nov 25 17:29:34 compute-0 nova_compute[254092]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 17:29:34 compute-0 systemd[1]: Starting dnf makecache...
Nov 25 17:29:34 compute-0 nova_compute[254092]: 2025-11-25 17:29:34.903 254096 DEBUG nova.virt.libvirt.driver [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:29:34 compute-0 nova_compute[254092]: 2025-11-25 17:29:34.903 254096 DEBUG nova.virt.libvirt.driver [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 17:29:34 compute-0 nova_compute[254092]: 2025-11-25 17:29:34.904 254096 INFO nova.virt.libvirt.driver [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Using config drive
Nov 25 17:29:34 compute-0 nova_compute[254092]: 2025-11-25 17:29:34.927 254096 DEBUG nova.storage.rbd_utils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] rbd image 93a89288-892f-44cf-8e55-4b1f01a9bbb5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:29:35 compute-0 dnf[431901]: Metadata cache refreshed recently.
Nov 25 17:29:35 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 25 17:29:35 compute-0 systemd[1]: Finished dnf makecache.
Nov 25 17:29:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3193: 321 pgs: 321 active+clean; 72 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.6 MiB/s wr, 25 op/s
Nov 25 17:29:35 compute-0 nova_compute[254092]: 2025-11-25 17:29:35.364 254096 INFO nova.virt.libvirt.driver [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Creating config drive at /var/lib/nova/instances/93a89288-892f-44cf-8e55-4b1f01a9bbb5/disk.config
Nov 25 17:29:35 compute-0 nova_compute[254092]: 2025-11-25 17:29:35.369 254096 DEBUG oslo_concurrency.processutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/93a89288-892f-44cf-8e55-4b1f01a9bbb5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5kvwtnlz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:29:35 compute-0 nova_compute[254092]: 2025-11-25 17:29:35.512 254096 DEBUG oslo_concurrency.processutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/93a89288-892f-44cf-8e55-4b1f01a9bbb5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5kvwtnlz" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:29:35 compute-0 nova_compute[254092]: 2025-11-25 17:29:35.543 254096 DEBUG nova.storage.rbd_utils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] rbd image 93a89288-892f-44cf-8e55-4b1f01a9bbb5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 17:29:35 compute-0 nova_compute[254092]: 2025-11-25 17:29:35.548 254096 DEBUG oslo_concurrency.processutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/93a89288-892f-44cf-8e55-4b1f01a9bbb5/disk.config 93a89288-892f-44cf-8e55-4b1f01a9bbb5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:29:35 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2028815429' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 17:29:35 compute-0 ceph-mon[74985]: pgmap v3193: 321 pgs: 321 active+clean; 72 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.6 MiB/s wr, 25 op/s
Nov 25 17:29:35 compute-0 nova_compute[254092]: 2025-11-25 17:29:35.761 254096 DEBUG oslo_concurrency.processutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/93a89288-892f-44cf-8e55-4b1f01a9bbb5/disk.config 93a89288-892f-44cf-8e55-4b1f01a9bbb5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.213s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:29:35 compute-0 nova_compute[254092]: 2025-11-25 17:29:35.763 254096 INFO nova.virt.libvirt.driver [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Deleting local config drive /var/lib/nova/instances/93a89288-892f-44cf-8e55-4b1f01a9bbb5/disk.config because it was imported into RBD.
Nov 25 17:29:35 compute-0 systemd-machined[216343]: New machine qemu-187-instance-00000099.
Nov 25 17:29:35 compute-0 systemd[1]: Started Virtual Machine qemu-187-instance-00000099.
Nov 25 17:29:36 compute-0 nova_compute[254092]: 2025-11-25 17:29:36.337 254096 DEBUG nova.compute.manager [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 17:29:36 compute-0 nova_compute[254092]: 2025-11-25 17:29:36.341 254096 DEBUG nova.virt.libvirt.driver [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 17:29:36 compute-0 nova_compute[254092]: 2025-11-25 17:29:36.342 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091776.3362472, 93a89288-892f-44cf-8e55-4b1f01a9bbb5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:29:36 compute-0 nova_compute[254092]: 2025-11-25 17:29:36.342 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] VM Resumed (Lifecycle Event)
Nov 25 17:29:36 compute-0 nova_compute[254092]: 2025-11-25 17:29:36.352 254096 INFO nova.virt.libvirt.driver [-] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Instance spawned successfully.
Nov 25 17:29:36 compute-0 nova_compute[254092]: 2025-11-25 17:29:36.353 254096 DEBUG nova.virt.libvirt.driver [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 17:29:36 compute-0 nova_compute[254092]: 2025-11-25 17:29:36.363 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:29:36 compute-0 nova_compute[254092]: 2025-11-25 17:29:36.370 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:29:36 compute-0 nova_compute[254092]: 2025-11-25 17:29:36.375 254096 DEBUG nova.virt.libvirt.driver [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:29:36 compute-0 nova_compute[254092]: 2025-11-25 17:29:36.375 254096 DEBUG nova.virt.libvirt.driver [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:29:36 compute-0 nova_compute[254092]: 2025-11-25 17:29:36.376 254096 DEBUG nova.virt.libvirt.driver [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:29:36 compute-0 nova_compute[254092]: 2025-11-25 17:29:36.376 254096 DEBUG nova.virt.libvirt.driver [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:29:36 compute-0 nova_compute[254092]: 2025-11-25 17:29:36.377 254096 DEBUG nova.virt.libvirt.driver [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:29:36 compute-0 nova_compute[254092]: 2025-11-25 17:29:36.378 254096 DEBUG nova.virt.libvirt.driver [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 17:29:36 compute-0 nova_compute[254092]: 2025-11-25 17:29:36.407 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:29:36 compute-0 nova_compute[254092]: 2025-11-25 17:29:36.407 254096 DEBUG nova.virt.driver [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] Emitting event <LifecycleEvent: 1764091776.3366756, 93a89288-892f-44cf-8e55-4b1f01a9bbb5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:29:36 compute-0 nova_compute[254092]: 2025-11-25 17:29:36.407 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] VM Started (Lifecycle Event)
Nov 25 17:29:36 compute-0 nova_compute[254092]: 2025-11-25 17:29:36.429 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:29:36 compute-0 nova_compute[254092]: 2025-11-25 17:29:36.434 254096 DEBUG nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 17:29:36 compute-0 nova_compute[254092]: 2025-11-25 17:29:36.441 254096 INFO nova.compute.manager [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Took 3.46 seconds to spawn the instance on the hypervisor.
Nov 25 17:29:36 compute-0 nova_compute[254092]: 2025-11-25 17:29:36.441 254096 DEBUG nova.compute.manager [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:29:36 compute-0 nova_compute[254092]: 2025-11-25 17:29:36.453 254096 INFO nova.compute.manager [None req-dcbb9bdc-1592-41c1-b184-75afd35e7ff2 - - - - - -] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 17:29:36 compute-0 nova_compute[254092]: 2025-11-25 17:29:36.496 254096 INFO nova.compute.manager [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Took 4.33 seconds to build instance.
Nov 25 17:29:36 compute-0 nova_compute[254092]: 2025-11-25 17:29:36.510 254096 DEBUG oslo_concurrency.lockutils [None req-1dd1e74d-4375-4e44-bfd0-74e727c8cc31 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Lock "93a89288-892f-44cf-8e55-4b1f01a9bbb5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.394s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:29:37 compute-0 nova_compute[254092]: 2025-11-25 17:29:37.240 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:29:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3194: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Nov 25 17:29:38 compute-0 ceph-mon[74985]: pgmap v3194: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Nov 25 17:29:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3195: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Nov 25 17:29:39 compute-0 nova_compute[254092]: 2025-11-25 17:29:39.345 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:29:39 compute-0 nova_compute[254092]: 2025-11-25 17:29:39.567 254096 DEBUG oslo_concurrency.lockutils [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Acquiring lock "93a89288-892f-44cf-8e55-4b1f01a9bbb5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:29:39 compute-0 nova_compute[254092]: 2025-11-25 17:29:39.568 254096 DEBUG oslo_concurrency.lockutils [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Lock "93a89288-892f-44cf-8e55-4b1f01a9bbb5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:29:39 compute-0 nova_compute[254092]: 2025-11-25 17:29:39.568 254096 DEBUG oslo_concurrency.lockutils [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Acquiring lock "93a89288-892f-44cf-8e55-4b1f01a9bbb5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:29:39 compute-0 nova_compute[254092]: 2025-11-25 17:29:39.568 254096 DEBUG oslo_concurrency.lockutils [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Lock "93a89288-892f-44cf-8e55-4b1f01a9bbb5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:29:39 compute-0 nova_compute[254092]: 2025-11-25 17:29:39.568 254096 DEBUG oslo_concurrency.lockutils [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Lock "93a89288-892f-44cf-8e55-4b1f01a9bbb5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:29:39 compute-0 nova_compute[254092]: 2025-11-25 17:29:39.569 254096 INFO nova.compute.manager [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Terminating instance
Nov 25 17:29:39 compute-0 nova_compute[254092]: 2025-11-25 17:29:39.570 254096 DEBUG oslo_concurrency.lockutils [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Acquiring lock "refresh_cache-93a89288-892f-44cf-8e55-4b1f01a9bbb5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 17:29:39 compute-0 nova_compute[254092]: 2025-11-25 17:29:39.570 254096 DEBUG oslo_concurrency.lockutils [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Acquired lock "refresh_cache-93a89288-892f-44cf-8e55-4b1f01a9bbb5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 17:29:39 compute-0 nova_compute[254092]: 2025-11-25 17:29:39.570 254096 DEBUG nova.network.neutron [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 17:29:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:29:39 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #153. Immutable memtables: 0.
Nov 25 17:29:39 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:29:39.585833) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 17:29:39 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 153
Nov 25 17:29:39 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091779585869, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 650, "num_deletes": 259, "total_data_size": 697120, "memory_usage": 709560, "flush_reason": "Manual Compaction"}
Nov 25 17:29:39 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #154: started
Nov 25 17:29:39 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091779596256, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 154, "file_size": 689736, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65887, "largest_seqno": 66535, "table_properties": {"data_size": 686234, "index_size": 1345, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7909, "raw_average_key_size": 19, "raw_value_size": 679197, "raw_average_value_size": 1640, "num_data_blocks": 60, "num_entries": 414, "num_filter_entries": 414, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764091737, "oldest_key_time": 1764091737, "file_creation_time": 1764091779, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 154, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:29:39 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 10481 microseconds, and 3119 cpu microseconds.
Nov 25 17:29:39 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:29:39 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:29:39.596313) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #154: 689736 bytes OK
Nov 25 17:29:39 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:29:39.596333) [db/memtable_list.cc:519] [default] Level-0 commit table #154 started
Nov 25 17:29:39 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:29:39.597860) [db/memtable_list.cc:722] [default] Level-0 commit table #154: memtable #1 done
Nov 25 17:29:39 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:29:39.597872) EVENT_LOG_v1 {"time_micros": 1764091779597868, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 17:29:39 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:29:39.597889) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 17:29:39 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 693619, prev total WAL file size 693619, number of live WAL files 2.
Nov 25 17:29:39 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000150.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:29:39 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:29:39.598441) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373636' seq:72057594037927935, type:22 .. '6C6F676D0033303138' seq:0, type:0; will stop at (end)
Nov 25 17:29:39 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 17:29:39 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [154(673KB)], [152(9640KB)]
Nov 25 17:29:39 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091779598514, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [154], "files_L6": [152], "score": -1, "input_data_size": 10561224, "oldest_snapshot_seqno": -1}
Nov 25 17:29:39 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #155: 8400 keys, 10457021 bytes, temperature: kUnknown
Nov 25 17:29:39 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091779660970, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 155, "file_size": 10457021, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10403088, "index_size": 31804, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21061, "raw_key_size": 221436, "raw_average_key_size": 26, "raw_value_size": 10255322, "raw_average_value_size": 1220, "num_data_blocks": 1232, "num_entries": 8400, "num_filter_entries": 8400, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764091779, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:29:39 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:29:39 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:29:39.661515) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 10457021 bytes
Nov 25 17:29:39 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:29:39.663502) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 168.3 rd, 166.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 9.4 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(30.5) write-amplify(15.2) OK, records in: 8929, records dropped: 529 output_compression: NoCompression
Nov 25 17:29:39 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:29:39.663536) EVENT_LOG_v1 {"time_micros": 1764091779663521, "job": 94, "event": "compaction_finished", "compaction_time_micros": 62752, "compaction_time_cpu_micros": 37665, "output_level": 6, "num_output_files": 1, "total_output_size": 10457021, "num_input_records": 8929, "num_output_records": 8400, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 17:29:39 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000154.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:29:39 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091779664421, "job": 94, "event": "table_file_deletion", "file_number": 154}
Nov 25 17:29:39 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:29:39 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091779666819, "job": 94, "event": "table_file_deletion", "file_number": 152}
Nov 25 17:29:39 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:29:39.598291) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:29:39 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:29:39.666896) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:29:39 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:29:39.666904) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:29:39 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:29:39.666907) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:29:39 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:29:39.666909) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:29:39 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:29:39.666911) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:29:39 compute-0 nova_compute[254092]: 2025-11-25 17:29:39.848 254096 DEBUG nova.network.neutron [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:29:40 compute-0 nova_compute[254092]: 2025-11-25 17:29:40.060 254096 DEBUG nova.network.neutron [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:29:40 compute-0 nova_compute[254092]: 2025-11-25 17:29:40.074 254096 DEBUG oslo_concurrency.lockutils [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Releasing lock "refresh_cache-93a89288-892f-44cf-8e55-4b1f01a9bbb5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 17:29:40 compute-0 nova_compute[254092]: 2025-11-25 17:29:40.075 254096 DEBUG nova.compute.manager [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 17:29:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:29:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:29:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:29:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:29:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:29:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:29:40 compute-0 systemd[1]: machine-qemu\x2d187\x2dinstance\x2d00000099.scope: Deactivated successfully.
Nov 25 17:29:40 compute-0 systemd[1]: machine-qemu\x2d187\x2dinstance\x2d00000099.scope: Consumed 4.367s CPU time.
Nov 25 17:29:40 compute-0 systemd-machined[216343]: Machine qemu-187-instance-00000099 terminated.
Nov 25 17:29:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:29:40
Nov 25 17:29:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:29:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:29:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', 'backups', '.mgr', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', 'volumes', 'default.rgw.log', 'vms']
Nov 25 17:29:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:29:40 compute-0 nova_compute[254092]: 2025-11-25 17:29:40.303 254096 INFO nova.virt.libvirt.driver [-] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Instance destroyed successfully.
Nov 25 17:29:40 compute-0 nova_compute[254092]: 2025-11-25 17:29:40.303 254096 DEBUG nova.objects.instance [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Lazy-loading 'resources' on Instance uuid 93a89288-892f-44cf-8e55-4b1f01a9bbb5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 17:29:40 compute-0 ceph-mon[74985]: pgmap v3195: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Nov 25 17:29:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:29:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:29:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:29:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:29:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:29:40 compute-0 nova_compute[254092]: 2025-11-25 17:29:40.687 254096 INFO nova.virt.libvirt.driver [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Deleting instance files /var/lib/nova/instances/93a89288-892f-44cf-8e55-4b1f01a9bbb5_del
Nov 25 17:29:40 compute-0 nova_compute[254092]: 2025-11-25 17:29:40.688 254096 INFO nova.virt.libvirt.driver [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Deletion of /var/lib/nova/instances/93a89288-892f-44cf-8e55-4b1f01a9bbb5_del complete
Nov 25 17:29:40 compute-0 nova_compute[254092]: 2025-11-25 17:29:40.730 254096 INFO nova.compute.manager [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Took 0.65 seconds to destroy the instance on the hypervisor.
Nov 25 17:29:40 compute-0 nova_compute[254092]: 2025-11-25 17:29:40.731 254096 DEBUG oslo.service.loopingcall [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 17:29:40 compute-0 nova_compute[254092]: 2025-11-25 17:29:40.731 254096 DEBUG nova.compute.manager [-] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 17:29:40 compute-0 nova_compute[254092]: 2025-11-25 17:29:40.732 254096 DEBUG nova.network.neutron [-] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 17:29:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:29:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:29:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:29:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:29:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:29:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3196: 321 pgs: 321 active+clean; 72 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Nov 25 17:29:41 compute-0 nova_compute[254092]: 2025-11-25 17:29:41.379 254096 DEBUG nova.network.neutron [-] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 17:29:41 compute-0 nova_compute[254092]: 2025-11-25 17:29:41.392 254096 DEBUG nova.network.neutron [-] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 17:29:41 compute-0 nova_compute[254092]: 2025-11-25 17:29:41.404 254096 INFO nova.compute.manager [-] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Took 0.67 seconds to deallocate network for instance.
Nov 25 17:29:41 compute-0 nova_compute[254092]: 2025-11-25 17:29:41.468 254096 DEBUG oslo_concurrency.lockutils [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:29:41 compute-0 nova_compute[254092]: 2025-11-25 17:29:41.468 254096 DEBUG oslo_concurrency.lockutils [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:29:41 compute-0 nova_compute[254092]: 2025-11-25 17:29:41.536 254096 DEBUG oslo_concurrency.processutils [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:29:42 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:29:42 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1617938552' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:29:42 compute-0 nova_compute[254092]: 2025-11-25 17:29:42.027 254096 DEBUG oslo_concurrency.processutils [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:29:42 compute-0 nova_compute[254092]: 2025-11-25 17:29:42.034 254096 DEBUG nova.compute.provider_tree [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:29:42 compute-0 nova_compute[254092]: 2025-11-25 17:29:42.053 254096 DEBUG nova.scheduler.client.report [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:29:42 compute-0 nova_compute[254092]: 2025-11-25 17:29:42.080 254096 DEBUG oslo_concurrency.lockutils [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:29:42 compute-0 nova_compute[254092]: 2025-11-25 17:29:42.110 254096 INFO nova.scheduler.client.report [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Deleted allocations for instance 93a89288-892f-44cf-8e55-4b1f01a9bbb5
Nov 25 17:29:42 compute-0 nova_compute[254092]: 2025-11-25 17:29:42.156 254096 DEBUG oslo_concurrency.lockutils [None req-a97fb133-7529-440d-b0b7-2357aa8b68cf 1a8467fdfffc42839788565288728335 8d10d0047c0e447bb9993376e290a416 - - default default] Lock "93a89288-892f-44cf-8e55-4b1f01a9bbb5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:29:42 compute-0 nova_compute[254092]: 2025-11-25 17:29:42.241 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:29:42 compute-0 ceph-mon[74985]: pgmap v3196: 321 pgs: 321 active+clean; 72 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Nov 25 17:29:42 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1617938552' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:29:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3197: 321 pgs: 321 active+clean; 72 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Nov 25 17:29:44 compute-0 nova_compute[254092]: 2025-11-25 17:29:44.389 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:29:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:29:44 compute-0 ceph-mon[74985]: pgmap v3197: 321 pgs: 321 active+clean; 72 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Nov 25 17:29:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3198: 321 pgs: 321 active+clean; 41 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Nov 25 17:29:46 compute-0 ceph-mon[74985]: pgmap v3198: 321 pgs: 321 active+clean; 41 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Nov 25 17:29:47 compute-0 nova_compute[254092]: 2025-11-25 17:29:47.297 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:29:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3199: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 214 KiB/s wr, 101 op/s
Nov 25 17:29:48 compute-0 ceph-mon[74985]: pgmap v3199: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 214 KiB/s wr, 101 op/s
Nov 25 17:29:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3200: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 96 op/s
Nov 25 17:29:49 compute-0 nova_compute[254092]: 2025-11-25 17:29:49.392 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:29:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:29:50 compute-0 nova_compute[254092]: 2025-11-25 17:29:50.513 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:29:50 compute-0 ceph-mon[74985]: pgmap v3200: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 96 op/s
Nov 25 17:29:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3201: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 96 op/s
Nov 25 17:29:51 compute-0 nova_compute[254092]: 2025-11-25 17:29:51.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:29:51 compute-0 nova_compute[254092]: 2025-11-25 17:29:51.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:29:51 compute-0 nova_compute[254092]: 2025-11-25 17:29:51.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:29:51 compute-0 nova_compute[254092]: 2025-11-25 17:29:51.523 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:29:51 compute-0 nova_compute[254092]: 2025-11-25 17:29:51.523 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:29:51 compute-0 nova_compute[254092]: 2025-11-25 17:29:51.523 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:29:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:29:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:29:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:29:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:29:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 17:29:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:29:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:29:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:29:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:29:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:29:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Nov 25 17:29:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:29:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:29:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:29:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:29:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:29:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:29:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:29:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:29:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:29:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:29:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:29:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:29:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:29:51 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2002245299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:29:51 compute-0 nova_compute[254092]: 2025-11-25 17:29:51.967 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:29:52 compute-0 ovn_controller[153477]: 2025-11-25T17:29:52Z|01636|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Nov 25 17:29:52 compute-0 nova_compute[254092]: 2025-11-25 17:29:52.182 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:29:52 compute-0 nova_compute[254092]: 2025-11-25 17:29:52.183 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3581MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:29:52 compute-0 nova_compute[254092]: 2025-11-25 17:29:52.183 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:29:52 compute-0 nova_compute[254092]: 2025-11-25 17:29:52.183 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:29:52 compute-0 nova_compute[254092]: 2025-11-25 17:29:52.242 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:29:52 compute-0 nova_compute[254092]: 2025-11-25 17:29:52.242 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:29:52 compute-0 nova_compute[254092]: 2025-11-25 17:29:52.260 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:29:52 compute-0 nova_compute[254092]: 2025-11-25 17:29:52.297 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:29:52 compute-0 ceph-mon[74985]: pgmap v3201: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 96 op/s
Nov 25 17:29:52 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2002245299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:29:52 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:29:52 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/308748755' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:29:52 compute-0 nova_compute[254092]: 2025-11-25 17:29:52.684 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:29:52 compute-0 nova_compute[254092]: 2025-11-25 17:29:52.690 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:29:52 compute-0 nova_compute[254092]: 2025-11-25 17:29:52.708 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:29:52 compute-0 nova_compute[254092]: 2025-11-25 17:29:52.726 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:29:52 compute-0 nova_compute[254092]: 2025-11-25 17:29:52.727 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.543s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:29:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3202: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1.2 KiB/s wr, 17 op/s
Nov 25 17:29:53 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/308748755' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:29:53 compute-0 podman[432107]: 2025-11-25 17:29:53.676578571 +0000 UTC m=+0.077193372 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 17:29:53 compute-0 podman[432106]: 2025-11-25 17:29:53.715790299 +0000 UTC m=+0.116600696 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 17:29:53 compute-0 nova_compute[254092]: 2025-11-25 17:29:53.727 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:29:53 compute-0 nova_compute[254092]: 2025-11-25 17:29:53.728 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:29:53 compute-0 podman[432108]: 2025-11-25 17:29:53.729519612 +0000 UTC m=+0.128230421 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Nov 25 17:29:54 compute-0 nova_compute[254092]: 2025-11-25 17:29:54.449 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:29:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:29:54 compute-0 ceph-mon[74985]: pgmap v3202: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1.2 KiB/s wr, 17 op/s
Nov 25 17:29:54 compute-0 sudo[432173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:29:54 compute-0 sudo[432173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:29:54 compute-0 sudo[432173]: pam_unix(sudo:session): session closed for user root
Nov 25 17:29:55 compute-0 sudo[432198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:29:55 compute-0 sudo[432198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:29:55 compute-0 sudo[432198]: pam_unix(sudo:session): session closed for user root
Nov 25 17:29:55 compute-0 sudo[432223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:29:55 compute-0 sudo[432223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:29:55 compute-0 sudo[432223]: pam_unix(sudo:session): session closed for user root
Nov 25 17:29:55 compute-0 sudo[432248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 25 17:29:55 compute-0 sudo[432248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:29:55 compute-0 nova_compute[254092]: 2025-11-25 17:29:55.302 254096 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764091780.3008893, 93a89288-892f-44cf-8e55-4b1f01a9bbb5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 17:29:55 compute-0 nova_compute[254092]: 2025-11-25 17:29:55.303 254096 INFO nova.compute.manager [-] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] VM Stopped (Lifecycle Event)
Nov 25 17:29:55 compute-0 nova_compute[254092]: 2025-11-25 17:29:55.322 254096 DEBUG nova.compute.manager [None req-1913e5e1-d16e-4252-93ff-0af95a0043f2 - - - - - -] [instance: 93a89288-892f-44cf-8e55-4b1f01a9bbb5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 17:29:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3203: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1.2 KiB/s wr, 17 op/s
Nov 25 17:29:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:29:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2687164341' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:29:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:29:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2687164341' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:29:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/2687164341' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:29:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/2687164341' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:29:55 compute-0 podman[432345]: 2025-11-25 17:29:55.681791202 +0000 UTC m=+0.074890130 container exec 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Nov 25 17:29:55 compute-0 podman[432345]: 2025-11-25 17:29:55.802031614 +0000 UTC m=+0.195130542 container exec_died 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:29:56 compute-0 sudo[432248]: pam_unix(sudo:session): session closed for user root
Nov 25 17:29:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:29:56 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:29:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:29:56 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:29:56 compute-0 sudo[432508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:29:56 compute-0 sudo[432508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:29:56 compute-0 sudo[432508]: pam_unix(sudo:session): session closed for user root
Nov 25 17:29:56 compute-0 sudo[432533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:29:56 compute-0 sudo[432533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:29:56 compute-0 sudo[432533]: pam_unix(sudo:session): session closed for user root
Nov 25 17:29:56 compute-0 sudo[432558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:29:56 compute-0 sudo[432558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:29:56 compute-0 sudo[432558]: pam_unix(sudo:session): session closed for user root
Nov 25 17:29:56 compute-0 sudo[432583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:29:56 compute-0 sudo[432583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:29:56 compute-0 ceph-mon[74985]: pgmap v3203: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1.2 KiB/s wr, 17 op/s
Nov 25 17:29:56 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:29:56 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:29:57 compute-0 sudo[432583]: pam_unix(sudo:session): session closed for user root
Nov 25 17:29:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:29:57 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:29:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:29:57 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:29:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:29:57 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:29:57 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev b4e2a07c-82e1-4687-be96-0007e7ed57cc does not exist
Nov 25 17:29:57 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 817bd08e-0a7b-4cb0-8350-5952278f3435 does not exist
Nov 25 17:29:57 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 8e78218d-dd64-4d28-b473-9bca3b5eb555 does not exist
Nov 25 17:29:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:29:57 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:29:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:29:57 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:29:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:29:57 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:29:57 compute-0 sudo[432640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:29:57 compute-0 sudo[432640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:29:57 compute-0 sudo[432640]: pam_unix(sudo:session): session closed for user root
Nov 25 17:29:57 compute-0 sudo[432665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:29:57 compute-0 sudo[432665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:29:57 compute-0 sudo[432665]: pam_unix(sudo:session): session closed for user root
Nov 25 17:29:57 compute-0 nova_compute[254092]: 2025-11-25 17:29:57.299 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:29:57 compute-0 sudo[432690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:29:57 compute-0 sudo[432690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:29:57 compute-0 sudo[432690]: pam_unix(sudo:session): session closed for user root
Nov 25 17:29:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3204: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:29:57 compute-0 sudo[432715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:29:57 compute-0 sudo[432715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:29:57 compute-0 nova_compute[254092]: 2025-11-25 17:29:57.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:29:57 compute-0 nova_compute[254092]: 2025-11-25 17:29:57.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:29:57 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:29:57 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:29:57 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:29:57 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:29:57 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:29:57 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:29:57 compute-0 podman[432778]: 2025-11-25 17:29:57.754293974 +0000 UTC m=+0.050262469 container create 5073e799c383212ab14d4783ca623dfd279eab054b8efd3835e9ea039d6f2200 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_beaver, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:29:57 compute-0 systemd[1]: Started libpod-conmon-5073e799c383212ab14d4783ca623dfd279eab054b8efd3835e9ea039d6f2200.scope.
Nov 25 17:29:57 compute-0 podman[432778]: 2025-11-25 17:29:57.734268068 +0000 UTC m=+0.030236593 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:29:57 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:29:57 compute-0 podman[432778]: 2025-11-25 17:29:57.867814353 +0000 UTC m=+0.163782938 container init 5073e799c383212ab14d4783ca623dfd279eab054b8efd3835e9ea039d6f2200 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_beaver, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:29:57 compute-0 podman[432778]: 2025-11-25 17:29:57.882874934 +0000 UTC m=+0.178843469 container start 5073e799c383212ab14d4783ca623dfd279eab054b8efd3835e9ea039d6f2200 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_beaver, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 17:29:57 compute-0 podman[432778]: 2025-11-25 17:29:57.88752578 +0000 UTC m=+0.183494385 container attach 5073e799c383212ab14d4783ca623dfd279eab054b8efd3835e9ea039d6f2200 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_beaver, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:29:57 compute-0 stupefied_beaver[432795]: 167 167
Nov 25 17:29:57 compute-0 systemd[1]: libpod-5073e799c383212ab14d4783ca623dfd279eab054b8efd3835e9ea039d6f2200.scope: Deactivated successfully.
Nov 25 17:29:57 compute-0 podman[432778]: 2025-11-25 17:29:57.894386237 +0000 UTC m=+0.190354742 container died 5073e799c383212ab14d4783ca623dfd279eab054b8efd3835e9ea039d6f2200 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_beaver, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 17:29:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-6323b20d2db43485b88fd39d56eb3387641afe3dabbf39302103a07e1fb2c267-merged.mount: Deactivated successfully.
Nov 25 17:29:57 compute-0 podman[432778]: 2025-11-25 17:29:57.943298358 +0000 UTC m=+0.239266863 container remove 5073e799c383212ab14d4783ca623dfd279eab054b8efd3835e9ea039d6f2200 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_beaver, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:29:57 compute-0 systemd[1]: libpod-conmon-5073e799c383212ab14d4783ca623dfd279eab054b8efd3835e9ea039d6f2200.scope: Deactivated successfully.
Nov 25 17:29:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e296 do_prune osdmap full prune enabled
Nov 25 17:29:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e297 e297: 3 total, 3 up, 3 in
Nov 25 17:29:58 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e297: 3 total, 3 up, 3 in
Nov 25 17:29:58 compute-0 podman[432817]: 2025-11-25 17:29:58.181878632 +0000 UTC m=+0.069931715 container create fdbacb7a15e3f2ea882f6ae1a66f88a230ed18735741515339017929d5f0c947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:29:58 compute-0 systemd[1]: Started libpod-conmon-fdbacb7a15e3f2ea882f6ae1a66f88a230ed18735741515339017929d5f0c947.scope.
Nov 25 17:29:58 compute-0 podman[432817]: 2025-11-25 17:29:58.144253158 +0000 UTC m=+0.032306281 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:29:58 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:29:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86ee39a126062c3d0dde2bdca7f02a4e7801182310e90428d4411fd94b2d189a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:29:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86ee39a126062c3d0dde2bdca7f02a4e7801182310e90428d4411fd94b2d189a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:29:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86ee39a126062c3d0dde2bdca7f02a4e7801182310e90428d4411fd94b2d189a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:29:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86ee39a126062c3d0dde2bdca7f02a4e7801182310e90428d4411fd94b2d189a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:29:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86ee39a126062c3d0dde2bdca7f02a4e7801182310e90428d4411fd94b2d189a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:29:58 compute-0 podman[432817]: 2025-11-25 17:29:58.306420292 +0000 UTC m=+0.194473395 container init fdbacb7a15e3f2ea882f6ae1a66f88a230ed18735741515339017929d5f0c947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:29:58 compute-0 podman[432817]: 2025-11-25 17:29:58.319105907 +0000 UTC m=+0.207158990 container start fdbacb7a15e3f2ea882f6ae1a66f88a230ed18735741515339017929d5f0c947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 17:29:58 compute-0 podman[432817]: 2025-11-25 17:29:58.323145337 +0000 UTC m=+0.211198460 container attach fdbacb7a15e3f2ea882f6ae1a66f88a230ed18735741515339017929d5f0c947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:29:58 compute-0 ceph-mon[74985]: pgmap v3204: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:29:58 compute-0 ceph-mon[74985]: osdmap e297: 3 total, 3 up, 3 in
Nov 25 17:29:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3206: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:29:59 compute-0 nova_compute[254092]: 2025-11-25 17:29:59.489 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:29:59 compute-0 nova_compute[254092]: 2025-11-25 17:29:59.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:29:59 compute-0 gifted_hertz[432834]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:29:59 compute-0 gifted_hertz[432834]: --> relative data size: 1.0
Nov 25 17:29:59 compute-0 gifted_hertz[432834]: --> All data devices are unavailable
Nov 25 17:29:59 compute-0 nova_compute[254092]: 2025-11-25 17:29:59.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:29:59 compute-0 nova_compute[254092]: 2025-11-25 17:29:59.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:29:59 compute-0 nova_compute[254092]: 2025-11-25 17:29:59.514 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 17:29:59 compute-0 nova_compute[254092]: 2025-11-25 17:29:59.514 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:29:59 compute-0 systemd[1]: libpod-fdbacb7a15e3f2ea882f6ae1a66f88a230ed18735741515339017929d5f0c947.scope: Deactivated successfully.
Nov 25 17:29:59 compute-0 systemd[1]: libpod-fdbacb7a15e3f2ea882f6ae1a66f88a230ed18735741515339017929d5f0c947.scope: Consumed 1.116s CPU time.
Nov 25 17:29:59 compute-0 podman[432817]: 2025-11-25 17:29:59.521720992 +0000 UTC m=+1.409774105 container died fdbacb7a15e3f2ea882f6ae1a66f88a230ed18735741515339017929d5f0c947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_hertz, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:29:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-86ee39a126062c3d0dde2bdca7f02a4e7801182310e90428d4411fd94b2d189a-merged.mount: Deactivated successfully.
Nov 25 17:29:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:29:59 compute-0 podman[432817]: 2025-11-25 17:29:59.589547128 +0000 UTC m=+1.477600201 container remove fdbacb7a15e3f2ea882f6ae1a66f88a230ed18735741515339017929d5f0c947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_hertz, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 17:29:59 compute-0 systemd[1]: libpod-conmon-fdbacb7a15e3f2ea882f6ae1a66f88a230ed18735741515339017929d5f0c947.scope: Deactivated successfully.
Nov 25 17:29:59 compute-0 sudo[432715]: pam_unix(sudo:session): session closed for user root
Nov 25 17:29:59 compute-0 ceph-mon[74985]: pgmap v3206: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:29:59 compute-0 sudo[432874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:29:59 compute-0 sudo[432874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:29:59 compute-0 sudo[432874]: pam_unix(sudo:session): session closed for user root
Nov 25 17:29:59 compute-0 sudo[432899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:29:59 compute-0 sudo[432899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:29:59 compute-0 sudo[432899]: pam_unix(sudo:session): session closed for user root
Nov 25 17:29:59 compute-0 sudo[432924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:29:59 compute-0 sudo[432924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:29:59 compute-0 sudo[432924]: pam_unix(sudo:session): session closed for user root
Nov 25 17:29:59 compute-0 sudo[432949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:29:59 compute-0 sudo[432949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:30:00 compute-0 podman[433016]: 2025-11-25 17:30:00.388966947 +0000 UTC m=+0.041007328 container create 92f8bf0bb65f18eb2667418a376bbf6d52f0d84cc0b0d66649a8a8c07386b156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 17:30:00 compute-0 systemd[1]: Started libpod-conmon-92f8bf0bb65f18eb2667418a376bbf6d52f0d84cc0b0d66649a8a8c07386b156.scope.
Nov 25 17:30:00 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:30:00 compute-0 podman[433016]: 2025-11-25 17:30:00.374858923 +0000 UTC m=+0.026899324 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:30:00 compute-0 podman[433016]: 2025-11-25 17:30:00.479926983 +0000 UTC m=+0.131967374 container init 92f8bf0bb65f18eb2667418a376bbf6d52f0d84cc0b0d66649a8a8c07386b156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:30:00 compute-0 podman[433016]: 2025-11-25 17:30:00.487630292 +0000 UTC m=+0.139670673 container start 92f8bf0bb65f18eb2667418a376bbf6d52f0d84cc0b0d66649a8a8c07386b156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_cerf, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:30:00 compute-0 podman[433016]: 2025-11-25 17:30:00.490797568 +0000 UTC m=+0.142837969 container attach 92f8bf0bb65f18eb2667418a376bbf6d52f0d84cc0b0d66649a8a8c07386b156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_cerf, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:30:00 compute-0 jovial_cerf[433031]: 167 167
Nov 25 17:30:00 compute-0 systemd[1]: libpod-92f8bf0bb65f18eb2667418a376bbf6d52f0d84cc0b0d66649a8a8c07386b156.scope: Deactivated successfully.
Nov 25 17:30:00 compute-0 podman[433016]: 2025-11-25 17:30:00.498808787 +0000 UTC m=+0.150849218 container died 92f8bf0bb65f18eb2667418a376bbf6d52f0d84cc0b0d66649a8a8c07386b156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Nov 25 17:30:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-e51e4ca6d5c21b47159c306af04fac18fdef4e72c1d0456c5eceeb1cd535aaed-merged.mount: Deactivated successfully.
Nov 25 17:30:00 compute-0 podman[433016]: 2025-11-25 17:30:00.541312113 +0000 UTC m=+0.193352484 container remove 92f8bf0bb65f18eb2667418a376bbf6d52f0d84cc0b0d66649a8a8c07386b156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_cerf, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 17:30:00 compute-0 systemd[1]: libpod-conmon-92f8bf0bb65f18eb2667418a376bbf6d52f0d84cc0b0d66649a8a8c07386b156.scope: Deactivated successfully.
Nov 25 17:30:00 compute-0 podman[433055]: 2025-11-25 17:30:00.701842754 +0000 UTC m=+0.043666130 container create 850a03dbf0ac0d6918193dec79467b1e2f7a8d606e46aebf9dedac2b2d4267d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ishizaka, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:30:00 compute-0 systemd[1]: Started libpod-conmon-850a03dbf0ac0d6918193dec79467b1e2f7a8d606e46aebf9dedac2b2d4267d5.scope.
Nov 25 17:30:00 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:30:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a9560531dd9cd62bc878e0beb8c28e6c3845641aafee744fbf6663341321f33/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:30:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a9560531dd9cd62bc878e0beb8c28e6c3845641aafee744fbf6663341321f33/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:30:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a9560531dd9cd62bc878e0beb8c28e6c3845641aafee744fbf6663341321f33/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:30:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a9560531dd9cd62bc878e0beb8c28e6c3845641aafee744fbf6663341321f33/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:30:00 compute-0 podman[433055]: 2025-11-25 17:30:00.682789255 +0000 UTC m=+0.024612631 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:30:00 compute-0 podman[433055]: 2025-11-25 17:30:00.789865899 +0000 UTC m=+0.131689285 container init 850a03dbf0ac0d6918193dec79467b1e2f7a8d606e46aebf9dedac2b2d4267d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:30:00 compute-0 podman[433055]: 2025-11-25 17:30:00.796706776 +0000 UTC m=+0.138530132 container start 850a03dbf0ac0d6918193dec79467b1e2f7a8d606e46aebf9dedac2b2d4267d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 17:30:00 compute-0 podman[433055]: 2025-11-25 17:30:00.801038833 +0000 UTC m=+0.142862209 container attach 850a03dbf0ac0d6918193dec79467b1e2f7a8d606e46aebf9dedac2b2d4267d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ishizaka, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:30:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3207: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]: {
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:     "0": [
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:         {
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:             "devices": [
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:                 "/dev/loop3"
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:             ],
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:             "lv_name": "ceph_lv0",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:             "lv_size": "21470642176",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:             "name": "ceph_lv0",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:             "tags": {
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:                 "ceph.cluster_name": "ceph",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:                 "ceph.crush_device_class": "",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:                 "ceph.encrypted": "0",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:                 "ceph.osd_id": "0",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:                 "ceph.type": "block",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:                 "ceph.vdo": "0"
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:             },
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:             "type": "block",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:             "vg_name": "ceph_vg0"
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:         }
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:     ],
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:     "1": [
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:         {
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:             "devices": [
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:                 "/dev/loop4"
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:             ],
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:             "lv_name": "ceph_lv1",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:             "lv_size": "21470642176",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:             "name": "ceph_lv1",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:             "tags": {
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:                 "ceph.cluster_name": "ceph",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:                 "ceph.crush_device_class": "",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:                 "ceph.encrypted": "0",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:                 "ceph.osd_id": "1",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:                 "ceph.type": "block",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:                 "ceph.vdo": "0"
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:             },
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:             "type": "block",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:             "vg_name": "ceph_vg1"
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:         }
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:     ],
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:     "2": [
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:         {
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:             "devices": [
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:                 "/dev/loop5"
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:             ],
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:             "lv_name": "ceph_lv2",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:             "lv_size": "21470642176",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:             "name": "ceph_lv2",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:             "tags": {
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:                 "ceph.cluster_name": "ceph",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:                 "ceph.crush_device_class": "",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:                 "ceph.encrypted": "0",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:                 "ceph.osd_id": "2",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:                 "ceph.type": "block",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:                 "ceph.vdo": "0"
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:             },
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:             "type": "block",
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:             "vg_name": "ceph_vg2"
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:         }
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]:     ]
Nov 25 17:30:01 compute-0 reverent_ishizaka[433072]: }
Nov 25 17:30:01 compute-0 systemd[1]: libpod-850a03dbf0ac0d6918193dec79467b1e2f7a8d606e46aebf9dedac2b2d4267d5.scope: Deactivated successfully.
Nov 25 17:30:01 compute-0 podman[433055]: 2025-11-25 17:30:01.556556318 +0000 UTC m=+0.898379694 container died 850a03dbf0ac0d6918193dec79467b1e2f7a8d606e46aebf9dedac2b2d4267d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 17:30:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a9560531dd9cd62bc878e0beb8c28e6c3845641aafee744fbf6663341321f33-merged.mount: Deactivated successfully.
Nov 25 17:30:01 compute-0 podman[433055]: 2025-11-25 17:30:01.616843469 +0000 UTC m=+0.958666825 container remove 850a03dbf0ac0d6918193dec79467b1e2f7a8d606e46aebf9dedac2b2d4267d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ishizaka, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:30:01 compute-0 systemd[1]: libpod-conmon-850a03dbf0ac0d6918193dec79467b1e2f7a8d606e46aebf9dedac2b2d4267d5.scope: Deactivated successfully.
Nov 25 17:30:01 compute-0 sudo[432949]: pam_unix(sudo:session): session closed for user root
Nov 25 17:30:01 compute-0 sudo[433094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:30:01 compute-0 sudo[433094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:30:01 compute-0 sudo[433094]: pam_unix(sudo:session): session closed for user root
Nov 25 17:30:01 compute-0 sudo[433119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:30:01 compute-0 sudo[433119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:30:01 compute-0 sudo[433119]: pam_unix(sudo:session): session closed for user root
Nov 25 17:30:01 compute-0 sudo[433144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:30:01 compute-0 sudo[433144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:30:01 compute-0 sudo[433144]: pam_unix(sudo:session): session closed for user root
Nov 25 17:30:01 compute-0 sudo[433169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:30:01 compute-0 sudo[433169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:30:02 compute-0 nova_compute[254092]: 2025-11-25 17:30:02.302 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:30:02 compute-0 podman[433234]: 2025-11-25 17:30:02.415509369 +0000 UTC m=+0.069742320 container create f459c9082ae698e4f85a0c5f758c622634e36820d1a2e310587d61857036910a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 17:30:02 compute-0 ceph-mon[74985]: pgmap v3207: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Nov 25 17:30:02 compute-0 systemd[1]: Started libpod-conmon-f459c9082ae698e4f85a0c5f758c622634e36820d1a2e310587d61857036910a.scope.
Nov 25 17:30:02 compute-0 podman[433234]: 2025-11-25 17:30:02.39208691 +0000 UTC m=+0.046319891 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:30:02 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:30:02 compute-0 podman[433234]: 2025-11-25 17:30:02.525121792 +0000 UTC m=+0.179355353 container init f459c9082ae698e4f85a0c5f758c622634e36820d1a2e310587d61857036910a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 17:30:02 compute-0 podman[433234]: 2025-11-25 17:30:02.53349348 +0000 UTC m=+0.187726441 container start f459c9082ae698e4f85a0c5f758c622634e36820d1a2e310587d61857036910a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:30:02 compute-0 podman[433234]: 2025-11-25 17:30:02.537447387 +0000 UTC m=+0.191680368 container attach f459c9082ae698e4f85a0c5f758c622634e36820d1a2e310587d61857036910a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:30:02 compute-0 modest_khayyam[433250]: 167 167
Nov 25 17:30:02 compute-0 podman[433234]: 2025-11-25 17:30:02.541461526 +0000 UTC m=+0.195694507 container died f459c9082ae698e4f85a0c5f758c622634e36820d1a2e310587d61857036910a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:30:02 compute-0 systemd[1]: libpod-f459c9082ae698e4f85a0c5f758c622634e36820d1a2e310587d61857036910a.scope: Deactivated successfully.
Nov 25 17:30:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-d967cef0342e2fe356fb55e271ad141c67696e9bd1551a48bb77fb5f2cce5613-merged.mount: Deactivated successfully.
Nov 25 17:30:02 compute-0 podman[433234]: 2025-11-25 17:30:02.592418664 +0000 UTC m=+0.246651615 container remove f459c9082ae698e4f85a0c5f758c622634e36820d1a2e310587d61857036910a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_khayyam, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:30:02 compute-0 systemd[1]: libpod-conmon-f459c9082ae698e4f85a0c5f758c622634e36820d1a2e310587d61857036910a.scope: Deactivated successfully.
Nov 25 17:30:02 compute-0 podman[433274]: 2025-11-25 17:30:02.848844963 +0000 UTC m=+0.076630077 container create ef6f3bbdc2658a8f7edb0e14e9711f41fb295308eaf7b0129eef4d0cccc136e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_wing, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:30:02 compute-0 podman[433274]: 2025-11-25 17:30:02.820596854 +0000 UTC m=+0.048382058 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:30:02 compute-0 systemd[1]: Started libpod-conmon-ef6f3bbdc2658a8f7edb0e14e9711f41fb295308eaf7b0129eef4d0cccc136e2.scope.
Nov 25 17:30:02 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:30:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bef915fc6b54603b62b6514fa1e595408fc168e5afefd0d6c260fef2ec045339/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:30:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bef915fc6b54603b62b6514fa1e595408fc168e5afefd0d6c260fef2ec045339/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:30:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bef915fc6b54603b62b6514fa1e595408fc168e5afefd0d6c260fef2ec045339/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:30:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bef915fc6b54603b62b6514fa1e595408fc168e5afefd0d6c260fef2ec045339/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:30:02 compute-0 podman[433274]: 2025-11-25 17:30:02.976329833 +0000 UTC m=+0.204114997 container init ef6f3bbdc2658a8f7edb0e14e9711f41fb295308eaf7b0129eef4d0cccc136e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_wing, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 17:30:02 compute-0 podman[433274]: 2025-11-25 17:30:02.987091147 +0000 UTC m=+0.214876251 container start ef6f3bbdc2658a8f7edb0e14e9711f41fb295308eaf7b0129eef4d0cccc136e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 17:30:02 compute-0 podman[433274]: 2025-11-25 17:30:02.990423497 +0000 UTC m=+0.218208681 container attach ef6f3bbdc2658a8f7edb0e14e9711f41fb295308eaf7b0129eef4d0cccc136e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_wing, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 17:30:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3208: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Nov 25 17:30:03 compute-0 nova_compute[254092]: 2025-11-25 17:30:03.511 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:30:04 compute-0 modest_wing[433291]: {
Nov 25 17:30:04 compute-0 modest_wing[433291]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:30:04 compute-0 modest_wing[433291]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:30:04 compute-0 modest_wing[433291]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:30:04 compute-0 modest_wing[433291]:         "osd_id": 1,
Nov 25 17:30:04 compute-0 modest_wing[433291]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:30:04 compute-0 modest_wing[433291]:         "type": "bluestore"
Nov 25 17:30:04 compute-0 modest_wing[433291]:     },
Nov 25 17:30:04 compute-0 modest_wing[433291]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:30:04 compute-0 modest_wing[433291]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:30:04 compute-0 modest_wing[433291]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:30:04 compute-0 modest_wing[433291]:         "osd_id": 2,
Nov 25 17:30:04 compute-0 modest_wing[433291]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:30:04 compute-0 modest_wing[433291]:         "type": "bluestore"
Nov 25 17:30:04 compute-0 modest_wing[433291]:     },
Nov 25 17:30:04 compute-0 modest_wing[433291]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:30:04 compute-0 modest_wing[433291]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:30:04 compute-0 modest_wing[433291]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:30:04 compute-0 modest_wing[433291]:         "osd_id": 0,
Nov 25 17:30:04 compute-0 modest_wing[433291]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:30:04 compute-0 modest_wing[433291]:         "type": "bluestore"
Nov 25 17:30:04 compute-0 modest_wing[433291]:     }
Nov 25 17:30:04 compute-0 modest_wing[433291]: }
Nov 25 17:30:04 compute-0 systemd[1]: libpod-ef6f3bbdc2658a8f7edb0e14e9711f41fb295308eaf7b0129eef4d0cccc136e2.scope: Deactivated successfully.
Nov 25 17:30:04 compute-0 systemd[1]: libpod-ef6f3bbdc2658a8f7edb0e14e9711f41fb295308eaf7b0129eef4d0cccc136e2.scope: Consumed 1.090s CPU time.
Nov 25 17:30:04 compute-0 podman[433274]: 2025-11-25 17:30:04.06735108 +0000 UTC m=+1.295136194 container died ef6f3bbdc2658a8f7edb0e14e9711f41fb295308eaf7b0129eef4d0cccc136e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 17:30:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-bef915fc6b54603b62b6514fa1e595408fc168e5afefd0d6c260fef2ec045339-merged.mount: Deactivated successfully.
Nov 25 17:30:04 compute-0 podman[433274]: 2025-11-25 17:30:04.151404718 +0000 UTC m=+1.379189832 container remove ef6f3bbdc2658a8f7edb0e14e9711f41fb295308eaf7b0129eef4d0cccc136e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_wing, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:30:04 compute-0 systemd[1]: libpod-conmon-ef6f3bbdc2658a8f7edb0e14e9711f41fb295308eaf7b0129eef4d0cccc136e2.scope: Deactivated successfully.
Nov 25 17:30:04 compute-0 sudo[433169]: pam_unix(sudo:session): session closed for user root
Nov 25 17:30:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:30:04 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:30:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:30:04 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:30:04 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev e2f53b5a-94b2-4cea-8a5e-826a1bf8273c does not exist
Nov 25 17:30:04 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev d7a9b2b1-a8fc-4894-99aa-85e93284cff4 does not exist
Nov 25 17:30:04 compute-0 sudo[433336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:30:04 compute-0 sudo[433336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:30:04 compute-0 sudo[433336]: pam_unix(sudo:session): session closed for user root
Nov 25 17:30:04 compute-0 sudo[433361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:30:04 compute-0 sudo[433361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:30:04 compute-0 sudo[433361]: pam_unix(sudo:session): session closed for user root
Nov 25 17:30:04 compute-0 ceph-mon[74985]: pgmap v3208: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Nov 25 17:30:04 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:30:04 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:30:04 compute-0 nova_compute[254092]: 2025-11-25 17:30:04.492 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:30:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:30:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e297 do_prune osdmap full prune enabled
Nov 25 17:30:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 e298: 3 total, 3 up, 3 in
Nov 25 17:30:04 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e298: 3 total, 3 up, 3 in
Nov 25 17:30:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3210: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Nov 25 17:30:05 compute-0 nova_compute[254092]: 2025-11-25 17:30:05.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:30:05 compute-0 ceph-mon[74985]: osdmap e298: 3 total, 3 up, 3 in
Nov 25 17:30:06 compute-0 ceph-mon[74985]: pgmap v3210: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Nov 25 17:30:07 compute-0 nova_compute[254092]: 2025-11-25 17:30:07.303 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:30:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3211: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.5 KiB/s wr, 27 op/s
Nov 25 17:30:08 compute-0 sshd-session[433386]: Connection closed by authenticating user root 171.244.51.45 port 37178 [preauth]
Nov 25 17:30:08 compute-0 ceph-mon[74985]: pgmap v3211: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.5 KiB/s wr, 27 op/s
Nov 25 17:30:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3212: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Nov 25 17:30:09 compute-0 nova_compute[254092]: 2025-11-25 17:30:09.495 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:30:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:30:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:30:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:30:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:30:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:30:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:30:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:30:10 compute-0 ceph-mon[74985]: pgmap v3212: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Nov 25 17:30:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3213: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:30:12 compute-0 nova_compute[254092]: 2025-11-25 17:30:12.304 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:30:12 compute-0 ceph-mon[74985]: pgmap v3213: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:30:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3214: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:30:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:30:13.670 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:30:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:30:13.670 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:30:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:30:13.670 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:30:14 compute-0 nova_compute[254092]: 2025-11-25 17:30:14.534 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:30:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:30:14 compute-0 ceph-mon[74985]: pgmap v3214: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:30:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3215: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:30:16 compute-0 ceph-mon[74985]: pgmap v3215: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:30:17 compute-0 nova_compute[254092]: 2025-11-25 17:30:17.306 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:30:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3216: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:30:17 compute-0 ceph-mon[74985]: pgmap v3216: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:30:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3217: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:30:19 compute-0 nova_compute[254092]: 2025-11-25 17:30:19.536 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:30:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:30:20 compute-0 ceph-mon[74985]: pgmap v3217: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:30:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3218: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:30:22 compute-0 nova_compute[254092]: 2025-11-25 17:30:22.308 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:30:22 compute-0 ceph-mon[74985]: pgmap v3218: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:30:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3219: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:30:24 compute-0 ceph-mon[74985]: pgmap v3219: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:30:24 compute-0 nova_compute[254092]: 2025-11-25 17:30:24.539 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:30:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:30:24 compute-0 podman[433389]: 2025-11-25 17:30:24.71771284 +0000 UTC m=+0.116913113 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 17:30:24 compute-0 podman[433388]: 2025-11-25 17:30:24.729599904 +0000 UTC m=+0.134701948 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 17:30:24 compute-0 podman[433390]: 2025-11-25 17:30:24.776558962 +0000 UTC m=+0.172072295 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Nov 25 17:30:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3220: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:30:26 compute-0 ceph-mon[74985]: pgmap v3220: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:30:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3221: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:30:27 compute-0 nova_compute[254092]: 2025-11-25 17:30:27.647 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:30:28 compute-0 ceph-mon[74985]: pgmap v3221: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:30:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3222: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:30:29 compute-0 nova_compute[254092]: 2025-11-25 17:30:29.541 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:30:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:30:30 compute-0 ceph-mon[74985]: pgmap v3222: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:30:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3223: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:30:31 compute-0 ceph-mon[74985]: pgmap v3223: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:30:32 compute-0 nova_compute[254092]: 2025-11-25 17:30:32.650 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:30:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3224: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:30:34 compute-0 ceph-mon[74985]: pgmap v3224: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:30:34 compute-0 nova_compute[254092]: 2025-11-25 17:30:34.568 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:30:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:30:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3225: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:30:36 compute-0 ceph-mon[74985]: pgmap v3225: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:30:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3226: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:30:37 compute-0 nova_compute[254092]: 2025-11-25 17:30:37.651 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:30:38 compute-0 ceph-mon[74985]: pgmap v3226: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:30:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3227: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:30:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:30:39 compute-0 nova_compute[254092]: 2025-11-25 17:30:39.606 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:30:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:30:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:30:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:30:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:30:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:30:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:30:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:30:40
Nov 25 17:30:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:30:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:30:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['vms', 'backups', '.rgw.root', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'images']
Nov 25 17:30:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:30:40 compute-0 ceph-mon[74985]: pgmap v3227: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:30:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:30:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:30:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:30:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:30:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:30:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:30:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:30:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:30:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:30:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:30:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3228: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:30:42 compute-0 ceph-mon[74985]: pgmap v3228: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:30:42 compute-0 nova_compute[254092]: 2025-11-25 17:30:42.654 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:30:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3229: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:30:44 compute-0 ceph-mon[74985]: pgmap v3229: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:30:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:30:44 compute-0 nova_compute[254092]: 2025-11-25 17:30:44.642 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:30:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3230: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:30:46 compute-0 ceph-mon[74985]: pgmap v3230: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:30:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3231: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:30:47 compute-0 nova_compute[254092]: 2025-11-25 17:30:47.658 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:30:48 compute-0 ceph-mon[74985]: pgmap v3231: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:30:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3232: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:30:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:30:49 compute-0 nova_compute[254092]: 2025-11-25 17:30:49.689 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:30:50 compute-0 ceph-mon[74985]: pgmap v3232: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:30:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3233: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:30:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:30:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:30:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:30:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:30:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 17:30:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:30:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:30:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:30:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:30:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:30:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 17:30:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:30:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:30:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:30:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:30:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:30:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:30:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:30:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:30:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:30:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:30:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:30:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:30:52 compute-0 nova_compute[254092]: 2025-11-25 17:30:52.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:30:52 compute-0 ceph-mon[74985]: pgmap v3233: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:30:52 compute-0 nova_compute[254092]: 2025-11-25 17:30:52.660 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:30:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3234: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:30:53 compute-0 nova_compute[254092]: 2025-11-25 17:30:53.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:30:53 compute-0 nova_compute[254092]: 2025-11-25 17:30:53.527 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:30:53 compute-0 nova_compute[254092]: 2025-11-25 17:30:53.528 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:30:53 compute-0 nova_compute[254092]: 2025-11-25 17:30:53.528 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:30:53 compute-0 nova_compute[254092]: 2025-11-25 17:30:53.528 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:30:53 compute-0 nova_compute[254092]: 2025-11-25 17:30:53.529 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:30:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:30:54 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3336401234' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:30:54 compute-0 nova_compute[254092]: 2025-11-25 17:30:54.037 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:30:54 compute-0 nova_compute[254092]: 2025-11-25 17:30:54.314 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:30:54 compute-0 nova_compute[254092]: 2025-11-25 17:30:54.316 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3609MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:30:54 compute-0 nova_compute[254092]: 2025-11-25 17:30:54.316 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:30:54 compute-0 nova_compute[254092]: 2025-11-25 17:30:54.316 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:30:54 compute-0 ceph-mon[74985]: pgmap v3234: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:30:54 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3336401234' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:30:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:30:54 compute-0 nova_compute[254092]: 2025-11-25 17:30:54.603 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:30:54 compute-0 nova_compute[254092]: 2025-11-25 17:30:54.604 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:30:54 compute-0 nova_compute[254092]: 2025-11-25 17:30:54.749 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing inventories for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 17:30:54 compute-0 nova_compute[254092]: 2025-11-25 17:30:54.752 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:30:54 compute-0 nova_compute[254092]: 2025-11-25 17:30:54.900 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating ProviderTree inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 17:30:54 compute-0 nova_compute[254092]: 2025-11-25 17:30:54.900 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 17:30:54 compute-0 nova_compute[254092]: 2025-11-25 17:30:54.923 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing aggregate associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 17:30:54 compute-0 nova_compute[254092]: 2025-11-25 17:30:54.946 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing trait associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 17:30:54 compute-0 nova_compute[254092]: 2025-11-25 17:30:54.965 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:30:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3235: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:30:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:30:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/11922419' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:30:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:30:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/11922419' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:30:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:30:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/717963504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:30:55 compute-0 nova_compute[254092]: 2025-11-25 17:30:55.481 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:30:55 compute-0 nova_compute[254092]: 2025-11-25 17:30:55.489 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:30:55 compute-0 nova_compute[254092]: 2025-11-25 17:30:55.508 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:30:55 compute-0 nova_compute[254092]: 2025-11-25 17:30:55.510 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:30:55 compute-0 nova_compute[254092]: 2025-11-25 17:30:55.510 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.194s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:30:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/11922419' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:30:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/11922419' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:30:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/717963504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:30:55 compute-0 podman[433496]: 2025-11-25 17:30:55.68366179 +0000 UTC m=+0.079474304 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 17:30:55 compute-0 podman[433495]: 2025-11-25 17:30:55.692051109 +0000 UTC m=+0.091998595 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible)
Nov 25 17:30:55 compute-0 podman[433497]: 2025-11-25 17:30:55.714938582 +0000 UTC m=+0.104340331 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 17:30:56 compute-0 nova_compute[254092]: 2025-11-25 17:30:56.510 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:30:56 compute-0 nova_compute[254092]: 2025-11-25 17:30:56.510 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:30:56 compute-0 ceph-mon[74985]: pgmap v3235: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:30:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3236: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:30:57 compute-0 nova_compute[254092]: 2025-11-25 17:30:57.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:30:57 compute-0 nova_compute[254092]: 2025-11-25 17:30:57.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:30:57 compute-0 nova_compute[254092]: 2025-11-25 17:30:57.663 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:30:58 compute-0 ceph-mon[74985]: pgmap v3236: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:30:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3237: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:30:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:30:59 compute-0 nova_compute[254092]: 2025-11-25 17:30:59.753 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:31:00 compute-0 nova_compute[254092]: 2025-11-25 17:31:00.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:31:00 compute-0 ceph-mon[74985]: pgmap v3237: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3238: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:01 compute-0 nova_compute[254092]: 2025-11-25 17:31:01.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:31:01 compute-0 nova_compute[254092]: 2025-11-25 17:31:01.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:31:01 compute-0 nova_compute[254092]: 2025-11-25 17:31:01.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:31:01 compute-0 nova_compute[254092]: 2025-11-25 17:31:01.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:31:01 compute-0 nova_compute[254092]: 2025-11-25 17:31:01.512 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 17:31:02 compute-0 ceph-mon[74985]: pgmap v3238: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:02 compute-0 nova_compute[254092]: 2025-11-25 17:31:02.665 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:31:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3239: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:04 compute-0 sudo[433558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:31:04 compute-0 sudo[433558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:31:04 compute-0 sudo[433558]: pam_unix(sudo:session): session closed for user root
Nov 25 17:31:04 compute-0 ceph-mon[74985]: pgmap v3239: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:31:04 compute-0 sudo[433583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:31:04 compute-0 sudo[433583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:31:04 compute-0 sudo[433583]: pam_unix(sudo:session): session closed for user root
Nov 25 17:31:04 compute-0 sudo[433608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:31:04 compute-0 sudo[433608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:31:04 compute-0 sudo[433608]: pam_unix(sudo:session): session closed for user root
Nov 25 17:31:04 compute-0 nova_compute[254092]: 2025-11-25 17:31:04.755 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:31:04 compute-0 sudo[433633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:31:04 compute-0 sudo[433633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:31:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3240: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:05 compute-0 sudo[433633]: pam_unix(sudo:session): session closed for user root
Nov 25 17:31:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:31:05 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:31:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:31:05 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:31:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:31:05 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:31:05 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev c42811f7-e9f8-461c-a505-fab691245d6b does not exist
Nov 25 17:31:05 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 501ee60f-edb5-4ca9-949f-b705dea6d416 does not exist
Nov 25 17:31:05 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev d3832470-cbdd-4cbf-b531-c3a38b9a7faf does not exist
Nov 25 17:31:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:31:05 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:31:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:31:05 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:31:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:31:05 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:31:05 compute-0 sudo[433691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:31:05 compute-0 sudo[433691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:31:05 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:31:05 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:31:05 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:31:05 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:31:05 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:31:05 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:31:05 compute-0 sudo[433691]: pam_unix(sudo:session): session closed for user root
Nov 25 17:31:05 compute-0 sudo[433716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:31:05 compute-0 sudo[433716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:31:05 compute-0 sudo[433716]: pam_unix(sudo:session): session closed for user root
Nov 25 17:31:05 compute-0 sudo[433741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:31:05 compute-0 sudo[433741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:31:05 compute-0 sudo[433741]: pam_unix(sudo:session): session closed for user root
Nov 25 17:31:05 compute-0 sudo[433766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:31:05 compute-0 sudo[433766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:31:06 compute-0 podman[433831]: 2025-11-25 17:31:06.339090391 +0000 UTC m=+0.067797856 container create 7c0a577a7f520032e2ae5045f4a9f21c0b8aece1cd4f53ff450348e49fb94c7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_leakey, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:31:06 compute-0 systemd[1]: Started libpod-conmon-7c0a577a7f520032e2ae5045f4a9f21c0b8aece1cd4f53ff450348e49fb94c7d.scope.
Nov 25 17:31:06 compute-0 podman[433831]: 2025-11-25 17:31:06.308795077 +0000 UTC m=+0.037502602 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:31:06 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:31:06 compute-0 podman[433831]: 2025-11-25 17:31:06.448195371 +0000 UTC m=+0.176902886 container init 7c0a577a7f520032e2ae5045f4a9f21c0b8aece1cd4f53ff450348e49fb94c7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 17:31:06 compute-0 podman[433831]: 2025-11-25 17:31:06.461863383 +0000 UTC m=+0.190570838 container start 7c0a577a7f520032e2ae5045f4a9f21c0b8aece1cd4f53ff450348e49fb94c7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:31:06 compute-0 podman[433831]: 2025-11-25 17:31:06.466188121 +0000 UTC m=+0.194895596 container attach 7c0a577a7f520032e2ae5045f4a9f21c0b8aece1cd4f53ff450348e49fb94c7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_leakey, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:31:06 compute-0 quirky_leakey[433847]: 167 167
Nov 25 17:31:06 compute-0 systemd[1]: libpod-7c0a577a7f520032e2ae5045f4a9f21c0b8aece1cd4f53ff450348e49fb94c7d.scope: Deactivated successfully.
Nov 25 17:31:06 compute-0 podman[433831]: 2025-11-25 17:31:06.471554597 +0000 UTC m=+0.200262102 container died 7c0a577a7f520032e2ae5045f4a9f21c0b8aece1cd4f53ff450348e49fb94c7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_leakey, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 17:31:06 compute-0 nova_compute[254092]: 2025-11-25 17:31:06.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:31:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c667413d1bb319dc28b41a4989350a538e72a7ea1600870fd7c55a7d6ce2e0e-merged.mount: Deactivated successfully.
Nov 25 17:31:06 compute-0 podman[433831]: 2025-11-25 17:31:06.535480347 +0000 UTC m=+0.264187812 container remove 7c0a577a7f520032e2ae5045f4a9f21c0b8aece1cd4f53ff450348e49fb94c7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 17:31:06 compute-0 systemd[1]: libpod-conmon-7c0a577a7f520032e2ae5045f4a9f21c0b8aece1cd4f53ff450348e49fb94c7d.scope: Deactivated successfully.
Nov 25 17:31:06 compute-0 ceph-mon[74985]: pgmap v3240: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:06 compute-0 podman[433869]: 2025-11-25 17:31:06.80929438 +0000 UTC m=+0.073857591 container create b0a2e85abc502120ad97b6bac708752df9dfd7720d6702d46c9d64991c56398e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_raman, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 17:31:06 compute-0 systemd[1]: Started libpod-conmon-b0a2e85abc502120ad97b6bac708752df9dfd7720d6702d46c9d64991c56398e.scope.
Nov 25 17:31:06 compute-0 podman[433869]: 2025-11-25 17:31:06.782400998 +0000 UTC m=+0.046964259 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:31:06 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:31:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6027b5e9cd547ab2a37b743be6adf30d4d102425d8deed6103d301d1784bd9d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:31:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6027b5e9cd547ab2a37b743be6adf30d4d102425d8deed6103d301d1784bd9d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:31:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6027b5e9cd547ab2a37b743be6adf30d4d102425d8deed6103d301d1784bd9d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:31:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6027b5e9cd547ab2a37b743be6adf30d4d102425d8deed6103d301d1784bd9d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:31:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6027b5e9cd547ab2a37b743be6adf30d4d102425d8deed6103d301d1784bd9d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:31:06 compute-0 podman[433869]: 2025-11-25 17:31:06.943250836 +0000 UTC m=+0.207814087 container init b0a2e85abc502120ad97b6bac708752df9dfd7720d6702d46c9d64991c56398e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_raman, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 17:31:06 compute-0 podman[433869]: 2025-11-25 17:31:06.964243777 +0000 UTC m=+0.228806998 container start b0a2e85abc502120ad97b6bac708752df9dfd7720d6702d46c9d64991c56398e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_raman, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 17:31:06 compute-0 podman[433869]: 2025-11-25 17:31:06.968819522 +0000 UTC m=+0.233382743 container attach b0a2e85abc502120ad97b6bac708752df9dfd7720d6702d46c9d64991c56398e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:31:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3241: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:07 compute-0 nova_compute[254092]: 2025-11-25 17:31:07.669 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:31:08 compute-0 blissful_raman[433885]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:31:08 compute-0 blissful_raman[433885]: --> relative data size: 1.0
Nov 25 17:31:08 compute-0 blissful_raman[433885]: --> All data devices are unavailable
Nov 25 17:31:08 compute-0 systemd[1]: libpod-b0a2e85abc502120ad97b6bac708752df9dfd7720d6702d46c9d64991c56398e.scope: Deactivated successfully.
Nov 25 17:31:08 compute-0 systemd[1]: libpod-b0a2e85abc502120ad97b6bac708752df9dfd7720d6702d46c9d64991c56398e.scope: Consumed 1.231s CPU time.
Nov 25 17:31:08 compute-0 podman[433869]: 2025-11-25 17:31:08.230207596 +0000 UTC m=+1.494770807 container died b0a2e85abc502120ad97b6bac708752df9dfd7720d6702d46c9d64991c56398e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_raman, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:31:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6027b5e9cd547ab2a37b743be6adf30d4d102425d8deed6103d301d1784bd9d-merged.mount: Deactivated successfully.
Nov 25 17:31:08 compute-0 podman[433869]: 2025-11-25 17:31:08.312338201 +0000 UTC m=+1.576901392 container remove b0a2e85abc502120ad97b6bac708752df9dfd7720d6702d46c9d64991c56398e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_raman, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 17:31:08 compute-0 systemd[1]: libpod-conmon-b0a2e85abc502120ad97b6bac708752df9dfd7720d6702d46c9d64991c56398e.scope: Deactivated successfully.
Nov 25 17:31:08 compute-0 sudo[433766]: pam_unix(sudo:session): session closed for user root
Nov 25 17:31:08 compute-0 sudo[433925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:31:08 compute-0 sudo[433925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:31:08 compute-0 sudo[433925]: pam_unix(sudo:session): session closed for user root
Nov 25 17:31:08 compute-0 sudo[433950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:31:08 compute-0 sudo[433950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:31:08 compute-0 sudo[433950]: pam_unix(sudo:session): session closed for user root
Nov 25 17:31:08 compute-0 ceph-mon[74985]: pgmap v3241: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:08 compute-0 sudo[433975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:31:08 compute-0 sudo[433975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:31:08 compute-0 sudo[433975]: pam_unix(sudo:session): session closed for user root
Nov 25 17:31:08 compute-0 sudo[434000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:31:08 compute-0 sudo[434000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:31:09 compute-0 podman[434066]: 2025-11-25 17:31:09.249181552 +0000 UTC m=+0.070309865 container create f6b6a8e6242a1daf6cc9c8bb9e65244c2094d25a59674b2fba48fd543953c979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_chebyshev, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Nov 25 17:31:09 compute-0 systemd[1]: Started libpod-conmon-f6b6a8e6242a1daf6cc9c8bb9e65244c2094d25a59674b2fba48fd543953c979.scope.
Nov 25 17:31:09 compute-0 podman[434066]: 2025-11-25 17:31:09.22300891 +0000 UTC m=+0.044137293 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:31:09 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:31:09 compute-0 podman[434066]: 2025-11-25 17:31:09.355926307 +0000 UTC m=+0.177054640 container init f6b6a8e6242a1daf6cc9c8bb9e65244c2094d25a59674b2fba48fd543953c979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_chebyshev, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Nov 25 17:31:09 compute-0 podman[434066]: 2025-11-25 17:31:09.367076341 +0000 UTC m=+0.188204664 container start f6b6a8e6242a1daf6cc9c8bb9e65244c2094d25a59674b2fba48fd543953c979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 17:31:09 compute-0 podman[434066]: 2025-11-25 17:31:09.372711044 +0000 UTC m=+0.193839367 container attach f6b6a8e6242a1daf6cc9c8bb9e65244c2094d25a59674b2fba48fd543953c979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_chebyshev, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:31:09 compute-0 elastic_chebyshev[434082]: 167 167
Nov 25 17:31:09 compute-0 systemd[1]: libpod-f6b6a8e6242a1daf6cc9c8bb9e65244c2094d25a59674b2fba48fd543953c979.scope: Deactivated successfully.
Nov 25 17:31:09 compute-0 podman[434066]: 2025-11-25 17:31:09.376049995 +0000 UTC m=+0.197178318 container died f6b6a8e6242a1daf6cc9c8bb9e65244c2094d25a59674b2fba48fd543953c979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_chebyshev, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 17:31:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3242: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-39913a41e03ae972fc0f4796ecdcba03d52c45b54517cb395f499ce27d33acf1-merged.mount: Deactivated successfully.
Nov 25 17:31:09 compute-0 podman[434066]: 2025-11-25 17:31:09.438566716 +0000 UTC m=+0.259695029 container remove f6b6a8e6242a1daf6cc9c8bb9e65244c2094d25a59674b2fba48fd543953c979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_chebyshev, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:31:09 compute-0 systemd[1]: libpod-conmon-f6b6a8e6242a1daf6cc9c8bb9e65244c2094d25a59674b2fba48fd543953c979.scope: Deactivated successfully.
Nov 25 17:31:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:31:09 compute-0 podman[434107]: 2025-11-25 17:31:09.701905064 +0000 UTC m=+0.084436589 container create fc4f3e2d1f0d2cce82a550321162afc0548bb6b7e4cbdfd5c9566eaff3ce00df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_sanderson, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:31:09 compute-0 podman[434107]: 2025-11-25 17:31:09.669938885 +0000 UTC m=+0.052470470 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:31:09 compute-0 nova_compute[254092]: 2025-11-25 17:31:09.758 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:31:09 compute-0 systemd[1]: Started libpod-conmon-fc4f3e2d1f0d2cce82a550321162afc0548bb6b7e4cbdfd5c9566eaff3ce00df.scope.
Nov 25 17:31:09 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:31:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caf63b24a5ec219d864f7927ef9c769f28ab3b2e555f0fc044731cb1505aff12/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:31:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caf63b24a5ec219d864f7927ef9c769f28ab3b2e555f0fc044731cb1505aff12/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:31:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caf63b24a5ec219d864f7927ef9c769f28ab3b2e555f0fc044731cb1505aff12/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:31:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caf63b24a5ec219d864f7927ef9c769f28ab3b2e555f0fc044731cb1505aff12/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:31:09 compute-0 podman[434107]: 2025-11-25 17:31:09.840198439 +0000 UTC m=+0.222729994 container init fc4f3e2d1f0d2cce82a550321162afc0548bb6b7e4cbdfd5c9566eaff3ce00df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:31:09 compute-0 podman[434107]: 2025-11-25 17:31:09.851523707 +0000 UTC m=+0.234055222 container start fc4f3e2d1f0d2cce82a550321162afc0548bb6b7e4cbdfd5c9566eaff3ce00df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_sanderson, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 17:31:09 compute-0 podman[434107]: 2025-11-25 17:31:09.855993278 +0000 UTC m=+0.238524863 container attach fc4f3e2d1f0d2cce82a550321162afc0548bb6b7e4cbdfd5c9566eaff3ce00df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 17:31:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:31:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:31:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:31:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:31:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:31:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:31:10 compute-0 ceph-mon[74985]: pgmap v3242: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]: {
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:     "0": [
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:         {
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:             "devices": [
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:                 "/dev/loop3"
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:             ],
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:             "lv_name": "ceph_lv0",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:             "lv_size": "21470642176",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:             "name": "ceph_lv0",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:             "tags": {
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:                 "ceph.cluster_name": "ceph",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:                 "ceph.crush_device_class": "",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:                 "ceph.encrypted": "0",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:                 "ceph.osd_id": "0",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:                 "ceph.type": "block",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:                 "ceph.vdo": "0"
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:             },
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:             "type": "block",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:             "vg_name": "ceph_vg0"
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:         }
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:     ],
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:     "1": [
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:         {
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:             "devices": [
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:                 "/dev/loop4"
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:             ],
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:             "lv_name": "ceph_lv1",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:             "lv_size": "21470642176",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:             "name": "ceph_lv1",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:             "tags": {
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:                 "ceph.cluster_name": "ceph",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:                 "ceph.crush_device_class": "",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:                 "ceph.encrypted": "0",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:                 "ceph.osd_id": "1",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:                 "ceph.type": "block",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:                 "ceph.vdo": "0"
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:             },
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:             "type": "block",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:             "vg_name": "ceph_vg1"
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:         }
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:     ],
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:     "2": [
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:         {
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:             "devices": [
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:                 "/dev/loop5"
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:             ],
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:             "lv_name": "ceph_lv2",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:             "lv_size": "21470642176",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:             "name": "ceph_lv2",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:             "tags": {
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:                 "ceph.cluster_name": "ceph",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:                 "ceph.crush_device_class": "",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:                 "ceph.encrypted": "0",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:                 "ceph.osd_id": "2",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:                 "ceph.type": "block",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:                 "ceph.vdo": "0"
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:             },
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:             "type": "block",
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:             "vg_name": "ceph_vg2"
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:         }
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]:     ]
Nov 25 17:31:10 compute-0 hopeful_sanderson[434124]: }
Nov 25 17:31:10 compute-0 systemd[1]: libpod-fc4f3e2d1f0d2cce82a550321162afc0548bb6b7e4cbdfd5c9566eaff3ce00df.scope: Deactivated successfully.
Nov 25 17:31:10 compute-0 podman[434107]: 2025-11-25 17:31:10.678681602 +0000 UTC m=+1.061213167 container died fc4f3e2d1f0d2cce82a550321162afc0548bb6b7e4cbdfd5c9566eaff3ce00df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:31:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-caf63b24a5ec219d864f7927ef9c769f28ab3b2e555f0fc044731cb1505aff12-merged.mount: Deactivated successfully.
Nov 25 17:31:10 compute-0 podman[434107]: 2025-11-25 17:31:10.761321551 +0000 UTC m=+1.143853036 container remove fc4f3e2d1f0d2cce82a550321162afc0548bb6b7e4cbdfd5c9566eaff3ce00df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_sanderson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:31:10 compute-0 systemd[1]: libpod-conmon-fc4f3e2d1f0d2cce82a550321162afc0548bb6b7e4cbdfd5c9566eaff3ce00df.scope: Deactivated successfully.
Nov 25 17:31:10 compute-0 sudo[434000]: pam_unix(sudo:session): session closed for user root
Nov 25 17:31:10 compute-0 sudo[434147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:31:10 compute-0 sudo[434147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:31:10 compute-0 sudo[434147]: pam_unix(sudo:session): session closed for user root
Nov 25 17:31:10 compute-0 sudo[434172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:31:10 compute-0 sudo[434172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:31:10 compute-0 sudo[434172]: pam_unix(sudo:session): session closed for user root
Nov 25 17:31:11 compute-0 sudo[434197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:31:11 compute-0 sudo[434197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:31:11 compute-0 sudo[434197]: pam_unix(sudo:session): session closed for user root
Nov 25 17:31:11 compute-0 sudo[434222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:31:11 compute-0 sudo[434222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:31:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3243: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:11 compute-0 podman[434288]: 2025-11-25 17:31:11.586726148 +0000 UTC m=+0.060271391 container create 86fbd737f2a913c70e042b4b4d0e7f63152213f08562e58d12707a99f5486638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:31:11 compute-0 systemd[1]: Started libpod-conmon-86fbd737f2a913c70e042b4b4d0e7f63152213f08562e58d12707a99f5486638.scope.
Nov 25 17:31:11 compute-0 podman[434288]: 2025-11-25 17:31:11.562880729 +0000 UTC m=+0.036426082 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:31:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:31:11 compute-0 podman[434288]: 2025-11-25 17:31:11.696392483 +0000 UTC m=+0.169937766 container init 86fbd737f2a913c70e042b4b4d0e7f63152213f08562e58d12707a99f5486638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:31:11 compute-0 podman[434288]: 2025-11-25 17:31:11.70729618 +0000 UTC m=+0.180841423 container start 86fbd737f2a913c70e042b4b4d0e7f63152213f08562e58d12707a99f5486638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_proskuriakova, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 17:31:11 compute-0 podman[434288]: 2025-11-25 17:31:11.711934166 +0000 UTC m=+0.185479479 container attach 86fbd737f2a913c70e042b4b4d0e7f63152213f08562e58d12707a99f5486638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_proskuriakova, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:31:11 compute-0 affectionate_proskuriakova[434305]: 167 167
Nov 25 17:31:11 compute-0 systemd[1]: libpod-86fbd737f2a913c70e042b4b4d0e7f63152213f08562e58d12707a99f5486638.scope: Deactivated successfully.
Nov 25 17:31:11 compute-0 podman[434288]: 2025-11-25 17:31:11.716773468 +0000 UTC m=+0.190318771 container died 86fbd737f2a913c70e042b4b4d0e7f63152213f08562e58d12707a99f5486638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 17:31:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-6de9c67cc3ca49a5d97f31047f6e5edd01f83218c1251a704f0dc61838b49862-merged.mount: Deactivated successfully.
Nov 25 17:31:11 compute-0 podman[434288]: 2025-11-25 17:31:11.760029905 +0000 UTC m=+0.233575148 container remove 86fbd737f2a913c70e042b4b4d0e7f63152213f08562e58d12707a99f5486638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_proskuriakova, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 17:31:11 compute-0 systemd[1]: libpod-conmon-86fbd737f2a913c70e042b4b4d0e7f63152213f08562e58d12707a99f5486638.scope: Deactivated successfully.
Nov 25 17:31:12 compute-0 podman[434329]: 2025-11-25 17:31:12.017615526 +0000 UTC m=+0.088672425 container create e936d91f531f259739ba60ba2f451a24bed1fed995bc80a8320776b387e3e339 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mirzakhani, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:31:12 compute-0 podman[434329]: 2025-11-25 17:31:11.975244923 +0000 UTC m=+0.046301862 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:31:12 compute-0 systemd[1]: Started libpod-conmon-e936d91f531f259739ba60ba2f451a24bed1fed995bc80a8320776b387e3e339.scope.
Nov 25 17:31:12 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:31:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2ed53a01c2db07260fd85b178d26083506ab9f8ed59bb3e9754b376697d28d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:31:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2ed53a01c2db07260fd85b178d26083506ab9f8ed59bb3e9754b376697d28d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:31:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2ed53a01c2db07260fd85b178d26083506ab9f8ed59bb3e9754b376697d28d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:31:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2ed53a01c2db07260fd85b178d26083506ab9f8ed59bb3e9754b376697d28d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:31:12 compute-0 podman[434329]: 2025-11-25 17:31:12.150785212 +0000 UTC m=+0.221842171 container init e936d91f531f259739ba60ba2f451a24bed1fed995bc80a8320776b387e3e339 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mirzakhani, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:31:12 compute-0 podman[434329]: 2025-11-25 17:31:12.167351393 +0000 UTC m=+0.238408292 container start e936d91f531f259739ba60ba2f451a24bed1fed995bc80a8320776b387e3e339 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mirzakhani, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:31:12 compute-0 podman[434329]: 2025-11-25 17:31:12.182762062 +0000 UTC m=+0.253818941 container attach e936d91f531f259739ba60ba2f451a24bed1fed995bc80a8320776b387e3e339 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:31:12 compute-0 ceph-mon[74985]: pgmap v3243: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:12 compute-0 nova_compute[254092]: 2025-11-25 17:31:12.670 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:31:13 compute-0 pensive_mirzakhani[434346]: {
Nov 25 17:31:13 compute-0 pensive_mirzakhani[434346]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:31:13 compute-0 pensive_mirzakhani[434346]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:31:13 compute-0 pensive_mirzakhani[434346]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:31:13 compute-0 pensive_mirzakhani[434346]:         "osd_id": 1,
Nov 25 17:31:13 compute-0 pensive_mirzakhani[434346]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:31:13 compute-0 pensive_mirzakhani[434346]:         "type": "bluestore"
Nov 25 17:31:13 compute-0 pensive_mirzakhani[434346]:     },
Nov 25 17:31:13 compute-0 pensive_mirzakhani[434346]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:31:13 compute-0 pensive_mirzakhani[434346]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:31:13 compute-0 pensive_mirzakhani[434346]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:31:13 compute-0 pensive_mirzakhani[434346]:         "osd_id": 2,
Nov 25 17:31:13 compute-0 pensive_mirzakhani[434346]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:31:13 compute-0 pensive_mirzakhani[434346]:         "type": "bluestore"
Nov 25 17:31:13 compute-0 pensive_mirzakhani[434346]:     },
Nov 25 17:31:13 compute-0 pensive_mirzakhani[434346]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:31:13 compute-0 pensive_mirzakhani[434346]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:31:13 compute-0 pensive_mirzakhani[434346]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:31:13 compute-0 pensive_mirzakhani[434346]:         "osd_id": 0,
Nov 25 17:31:13 compute-0 pensive_mirzakhani[434346]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:31:13 compute-0 pensive_mirzakhani[434346]:         "type": "bluestore"
Nov 25 17:31:13 compute-0 pensive_mirzakhani[434346]:     }
Nov 25 17:31:13 compute-0 pensive_mirzakhani[434346]: }
Nov 25 17:31:13 compute-0 systemd[1]: libpod-e936d91f531f259739ba60ba2f451a24bed1fed995bc80a8320776b387e3e339.scope: Deactivated successfully.
Nov 25 17:31:13 compute-0 systemd[1]: libpod-e936d91f531f259739ba60ba2f451a24bed1fed995bc80a8320776b387e3e339.scope: Consumed 1.206s CPU time.
Nov 25 17:31:13 compute-0 podman[434329]: 2025-11-25 17:31:13.377396349 +0000 UTC m=+1.448453248 container died e936d91f531f259739ba60ba2f451a24bed1fed995bc80a8320776b387e3e339 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mirzakhani, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Nov 25 17:31:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3244: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2ed53a01c2db07260fd85b178d26083506ab9f8ed59bb3e9754b376697d28d1-merged.mount: Deactivated successfully.
Nov 25 17:31:13 compute-0 podman[434329]: 2025-11-25 17:31:13.463153323 +0000 UTC m=+1.534210222 container remove e936d91f531f259739ba60ba2f451a24bed1fed995bc80a8320776b387e3e339 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mirzakhani, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 17:31:13 compute-0 systemd[1]: libpod-conmon-e936d91f531f259739ba60ba2f451a24bed1fed995bc80a8320776b387e3e339.scope: Deactivated successfully.
Nov 25 17:31:13 compute-0 sudo[434222]: pam_unix(sudo:session): session closed for user root
Nov 25 17:31:13 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:31:13 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:31:13 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:31:13 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:31:13 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev d7a00c30-e265-462d-a410-88d1001ef831 does not exist
Nov 25 17:31:13 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 2b6ba937-46c8-41d6-a807-41512cccd881 does not exist
Nov 25 17:31:13 compute-0 sudo[434392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:31:13 compute-0 sudo[434392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:31:13 compute-0 sudo[434392]: pam_unix(sudo:session): session closed for user root
Nov 25 17:31:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:31:13.672 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:31:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:31:13.673 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:31:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:31:13.673 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:31:13 compute-0 sudo[434417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:31:13 compute-0 sudo[434417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:31:13 compute-0 sudo[434417]: pam_unix(sudo:session): session closed for user root
Nov 25 17:31:14 compute-0 ceph-mon[74985]: pgmap v3244: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:14 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:31:14 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:31:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:31:14 compute-0 nova_compute[254092]: 2025-11-25 17:31:14.802 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:31:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3245: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:16 compute-0 ceph-mon[74985]: pgmap v3245: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3246: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:17 compute-0 nova_compute[254092]: 2025-11-25 17:31:17.673 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:31:18 compute-0 ceph-mon[74985]: pgmap v3246: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3247: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:31:19 compute-0 nova_compute[254092]: 2025-11-25 17:31:19.804 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:31:20 compute-0 ceph-mon[74985]: pgmap v3247: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3248: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:22 compute-0 ceph-mon[74985]: pgmap v3248: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:22 compute-0 nova_compute[254092]: 2025-11-25 17:31:22.675 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:31:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3249: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:24 compute-0 ceph-mon[74985]: pgmap v3249: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:31:24 compute-0 nova_compute[254092]: 2025-11-25 17:31:24.807 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:31:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3250: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:26 compute-0 ceph-mon[74985]: pgmap v3250: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:26 compute-0 podman[434443]: 2025-11-25 17:31:26.690598981 +0000 UTC m=+0.088129140 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 17:31:26 compute-0 podman[434442]: 2025-11-25 17:31:26.727509166 +0000 UTC m=+0.123739419 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 17:31:26 compute-0 podman[434444]: 2025-11-25 17:31:26.776629573 +0000 UTC m=+0.160933582 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 25 17:31:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3251: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:27 compute-0 nova_compute[254092]: 2025-11-25 17:31:27.679 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:31:28 compute-0 ceph-mon[74985]: pgmap v3251: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3252: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:31:29 compute-0 nova_compute[254092]: 2025-11-25 17:31:29.809 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:31:30 compute-0 ceph-mon[74985]: pgmap v3252: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3253: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:32 compute-0 ceph-mon[74985]: pgmap v3253: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:32 compute-0 nova_compute[254092]: 2025-11-25 17:31:32.681 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:31:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3254: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:31:34 compute-0 ceph-mon[74985]: pgmap v3254: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:34 compute-0 nova_compute[254092]: 2025-11-25 17:31:34.812 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:31:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3255: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:35 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 25 17:31:36 compute-0 ceph-mon[74985]: pgmap v3255: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:37 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 17:31:37 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 14K writes, 67K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.02 MB/s
                                           Cumulative WAL: 14K writes, 14K syncs, 1.00 writes per sync, written: 0.09 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1392 writes, 6544 keys, 1392 commit groups, 1.0 writes per commit group, ingest: 8.96 MB, 0.01 MB/s
                                           Interval WAL: 1392 writes, 1392 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     34.8      2.30              0.27        47    0.049       0      0       0.0       0.0
                                             L6      1/0    9.97 MB   0.0      0.4     0.1      0.4       0.4      0.0       0.0   4.8    121.4    102.6      3.74              1.12        46    0.081    306K    25K       0.0       0.0
                                            Sum      1/0    9.97 MB   0.0      0.4     0.1      0.4       0.5      0.1       0.0   5.8     75.2     76.8      6.03              1.40        93    0.065    306K    25K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.6    135.0    139.0      0.49              0.25        12    0.041     52K   3113       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.4     0.1      0.4       0.4      0.0       0.0   0.0    121.4    102.6      3.74              1.12        46    0.081    306K    25K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     35.5      2.25              0.27        46    0.049       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.1      0.05              0.00         1    0.047       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.078, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.45 GB write, 0.08 MB/s write, 0.44 GB read, 0.08 MB/s read, 6.0 seconds
                                           Interval compaction: 0.07 GB write, 0.11 MB/s write, 0.06 GB read, 0.11 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563cc83831f0#2 capacity: 304.00 MB usage: 52.71 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.000353 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3412,50.53 MB,16.6222%) FilterBlock(94,849.67 KB,0.272947%) IndexBlock(94,1.35 MB,0.444688%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 25 17:31:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3256: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:37 compute-0 nova_compute[254092]: 2025-11-25 17:31:37.683 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:31:38 compute-0 ceph-mon[74985]: pgmap v3256: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3257: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:31:39 compute-0 nova_compute[254092]: 2025-11-25 17:31:39.858 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:31:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:31:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:31:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:31:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:31:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:31:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:31:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:31:40
Nov 25 17:31:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:31:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:31:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['images', '.mgr', 'default.rgw.meta', '.rgw.root', 'volumes', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', 'default.rgw.log', 'default.rgw.control']
Nov 25 17:31:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:31:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:31:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:31:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:31:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:31:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:31:40 compute-0 ceph-mon[74985]: pgmap v3257: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:31:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:31:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:31:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:31:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:31:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3258: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:42 compute-0 nova_compute[254092]: 2025-11-25 17:31:42.686 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:31:42 compute-0 ceph-mon[74985]: pgmap v3258: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3259: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:31:44 compute-0 ceph-mon[74985]: pgmap v3259: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:44 compute-0 nova_compute[254092]: 2025-11-25 17:31:44.923 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:31:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3260: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:46 compute-0 ceph-mon[74985]: pgmap v3260: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3261: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:47 compute-0 nova_compute[254092]: 2025-11-25 17:31:47.688 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:31:48 compute-0 ceph-mon[74985]: pgmap v3261: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3262: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:31:49 compute-0 nova_compute[254092]: 2025-11-25 17:31:49.960 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:31:50 compute-0 ceph-mon[74985]: pgmap v3262: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3263: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:51 compute-0 ceph-mon[74985]: pgmap v3263: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:31:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:31:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:31:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:31:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 17:31:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:31:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:31:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:31:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:31:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:31:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 17:31:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:31:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:31:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:31:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:31:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:31:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:31:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:31:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:31:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:31:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:31:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:31:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:31:52 compute-0 nova_compute[254092]: 2025-11-25 17:31:52.692 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:31:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3264: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:53 compute-0 nova_compute[254092]: 2025-11-25 17:31:53.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:31:53 compute-0 nova_compute[254092]: 2025-11-25 17:31:53.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:31:53 compute-0 nova_compute[254092]: 2025-11-25 17:31:53.708 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:31:53 compute-0 nova_compute[254092]: 2025-11-25 17:31:53.708 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:31:53 compute-0 nova_compute[254092]: 2025-11-25 17:31:53.709 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:31:53 compute-0 nova_compute[254092]: 2025-11-25 17:31:53.709 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:31:53 compute-0 nova_compute[254092]: 2025-11-25 17:31:53.710 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:31:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:31:54 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1723494631' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:31:54 compute-0 nova_compute[254092]: 2025-11-25 17:31:54.184 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:31:54 compute-0 nova_compute[254092]: 2025-11-25 17:31:54.413 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:31:54 compute-0 nova_compute[254092]: 2025-11-25 17:31:54.415 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3642MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:31:54 compute-0 nova_compute[254092]: 2025-11-25 17:31:54.415 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:31:54 compute-0 nova_compute[254092]: 2025-11-25 17:31:54.415 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:31:54 compute-0 nova_compute[254092]: 2025-11-25 17:31:54.511 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:31:54 compute-0 nova_compute[254092]: 2025-11-25 17:31:54.512 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:31:54 compute-0 nova_compute[254092]: 2025-11-25 17:31:54.527 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:31:54 compute-0 ceph-mon[74985]: pgmap v3264: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:31:54 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1723494631' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:31:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:31:54 compute-0 nova_compute[254092]: 2025-11-25 17:31:54.963 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:31:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:31:54 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1956116899' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:31:55 compute-0 nova_compute[254092]: 2025-11-25 17:31:55.009 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:31:55 compute-0 nova_compute[254092]: 2025-11-25 17:31:55.015 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:31:55 compute-0 nova_compute[254092]: 2025-11-25 17:31:55.034 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:31:55 compute-0 nova_compute[254092]: 2025-11-25 17:31:55.036 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:31:55 compute-0 nova_compute[254092]: 2025-11-25 17:31:55.036 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:31:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3265: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Nov 25 17:31:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:31:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2117747339' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:31:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:31:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2117747339' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:31:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1956116899' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:31:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/2117747339' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:31:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/2117747339' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:31:56 compute-0 nova_compute[254092]: 2025-11-25 17:31:56.037 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:31:56 compute-0 nova_compute[254092]: 2025-11-25 17:31:56.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:31:56 compute-0 ceph-mon[74985]: pgmap v3265: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Nov 25 17:31:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3266: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Nov 25 17:31:57 compute-0 nova_compute[254092]: 2025-11-25 17:31:57.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:31:57 compute-0 nova_compute[254092]: 2025-11-25 17:31:57.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:31:57 compute-0 podman[434550]: 2025-11-25 17:31:57.659869306 +0000 UTC m=+0.075420724 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 17:31:57 compute-0 podman[434551]: 2025-11-25 17:31:57.661261924 +0000 UTC m=+0.072661130 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 25 17:31:57 compute-0 nova_compute[254092]: 2025-11-25 17:31:57.694 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:31:57 compute-0 podman[434552]: 2025-11-25 17:31:57.720732283 +0000 UTC m=+0.131958344 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Nov 25 17:31:58 compute-0 ceph-mon[74985]: pgmap v3266: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Nov 25 17:31:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3267: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Nov 25 17:31:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:31:59 compute-0 nova_compute[254092]: 2025-11-25 17:31:59.966 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:32:00 compute-0 nova_compute[254092]: 2025-11-25 17:32:00.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:32:00 compute-0 ceph-mon[74985]: pgmap v3267: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Nov 25 17:32:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3268: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Nov 25 17:32:02 compute-0 nova_compute[254092]: 2025-11-25 17:32:02.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:32:02 compute-0 nova_compute[254092]: 2025-11-25 17:32:02.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:32:02 compute-0 nova_compute[254092]: 2025-11-25 17:32:02.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:32:02 compute-0 nova_compute[254092]: 2025-11-25 17:32:02.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:32:02 compute-0 nova_compute[254092]: 2025-11-25 17:32:02.512 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 17:32:02 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e298 do_prune osdmap full prune enabled
Nov 25 17:32:02 compute-0 ceph-mon[74985]: pgmap v3268: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Nov 25 17:32:02 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e299 e299: 3 total, 3 up, 3 in
Nov 25 17:32:02 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e299: 3 total, 3 up, 3 in
Nov 25 17:32:02 compute-0 nova_compute[254092]: 2025-11-25 17:32:02.695 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:32:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3270: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 102 B/s wr, 8 op/s
Nov 25 17:32:03 compute-0 ceph-mon[74985]: osdmap e299: 3 total, 3 up, 3 in
Nov 25 17:32:04 compute-0 ceph-mon[74985]: pgmap v3270: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 102 B/s wr, 8 op/s
Nov 25 17:32:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:32:05 compute-0 nova_compute[254092]: 2025-11-25 17:32:05.001 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:32:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3271: 321 pgs: 321 active+clean; 21 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 921 B/s wr, 19 op/s
Nov 25 17:32:06 compute-0 nova_compute[254092]: 2025-11-25 17:32:06.509 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:32:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e299 do_prune osdmap full prune enabled
Nov 25 17:32:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e300 e300: 3 total, 3 up, 3 in
Nov 25 17:32:06 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e300: 3 total, 3 up, 3 in
Nov 25 17:32:06 compute-0 ceph-mon[74985]: pgmap v3271: 321 pgs: 321 active+clean; 21 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 921 B/s wr, 19 op/s
Nov 25 17:32:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3273: 321 pgs: 321 active+clean; 21 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.9 KiB/s wr, 32 op/s
Nov 25 17:32:07 compute-0 ceph-mon[74985]: osdmap e300: 3 total, 3 up, 3 in
Nov 25 17:32:07 compute-0 nova_compute[254092]: 2025-11-25 17:32:07.738 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:32:08 compute-0 nova_compute[254092]: 2025-11-25 17:32:08.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:32:08 compute-0 ceph-mon[74985]: pgmap v3273: 321 pgs: 321 active+clean; 21 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.9 KiB/s wr, 32 op/s
Nov 25 17:32:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3274: 321 pgs: 321 active+clean; 21 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.9 KiB/s wr, 32 op/s
Nov 25 17:32:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:32:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e300 do_prune osdmap full prune enabled
Nov 25 17:32:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e301 e301: 3 total, 3 up, 3 in
Nov 25 17:32:09 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e301: 3 total, 3 up, 3 in
Nov 25 17:32:10 compute-0 nova_compute[254092]: 2025-11-25 17:32:10.054 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:32:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:32:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:32:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:32:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:32:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:32:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:32:10 compute-0 ceph-mon[74985]: pgmap v3274: 321 pgs: 321 active+clean; 21 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.9 KiB/s wr, 32 op/s
Nov 25 17:32:10 compute-0 ceph-mon[74985]: osdmap e301: 3 total, 3 up, 3 in
Nov 25 17:32:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3276: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 62 op/s
Nov 25 17:32:12 compute-0 ceph-mon[74985]: pgmap v3276: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 62 op/s
Nov 25 17:32:12 compute-0 nova_compute[254092]: 2025-11-25 17:32:12.740 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:32:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3277: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 2.4 KiB/s wr, 37 op/s
Nov 25 17:32:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:32:13.672 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:32:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:32:13.673 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:32:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:32:13.674 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:32:13 compute-0 sudo[434611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:32:13 compute-0 sudo[434611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:32:13 compute-0 sudo[434611]: pam_unix(sudo:session): session closed for user root
Nov 25 17:32:13 compute-0 sudo[434636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:32:13 compute-0 sudo[434636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:32:13 compute-0 sudo[434636]: pam_unix(sudo:session): session closed for user root
Nov 25 17:32:14 compute-0 sudo[434661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:32:14 compute-0 sudo[434661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:32:14 compute-0 sudo[434661]: pam_unix(sudo:session): session closed for user root
Nov 25 17:32:14 compute-0 sudo[434686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:32:14 compute-0 sudo[434686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:32:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:32:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e301 do_prune osdmap full prune enabled
Nov 25 17:32:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e302 e302: 3 total, 3 up, 3 in
Nov 25 17:32:14 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e302: 3 total, 3 up, 3 in
Nov 25 17:32:14 compute-0 ceph-mon[74985]: pgmap v3277: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 2.4 KiB/s wr, 37 op/s
Nov 25 17:32:14 compute-0 ceph-mon[74985]: osdmap e302: 3 total, 3 up, 3 in
Nov 25 17:32:14 compute-0 sudo[434686]: pam_unix(sudo:session): session closed for user root
Nov 25 17:32:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:32:14 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:32:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:32:14 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:32:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:32:14 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:32:14 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev b4c8bc2d-ae81-49c5-80d7-4e3c6d92247f does not exist
Nov 25 17:32:14 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 8a97e76b-5fba-49a4-b865-a328be8c0d9a does not exist
Nov 25 17:32:14 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 53fffaab-87d6-4c85-a9a7-e365d792d32e does not exist
Nov 25 17:32:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:32:14 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:32:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:32:14 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:32:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:32:14 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:32:14 compute-0 sudo[434743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:32:14 compute-0 sudo[434743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:32:14 compute-0 sudo[434743]: pam_unix(sudo:session): session closed for user root
Nov 25 17:32:14 compute-0 sudo[434768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:32:14 compute-0 sudo[434768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:32:14 compute-0 sudo[434768]: pam_unix(sudo:session): session closed for user root
Nov 25 17:32:15 compute-0 sudo[434793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:32:15 compute-0 sudo[434793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:32:15 compute-0 sudo[434793]: pam_unix(sudo:session): session closed for user root
Nov 25 17:32:15 compute-0 nova_compute[254092]: 2025-11-25 17:32:15.056 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:32:15 compute-0 sudo[434818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:32:15 compute-0 sudo[434818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:32:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3279: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.6 KiB/s wr, 30 op/s
Nov 25 17:32:15 compute-0 podman[434884]: 2025-11-25 17:32:15.561393234 +0000 UTC m=+0.064390124 container create 5d4c7cee68e2c97a7ea8e9ff468b463ef8823402860fd4d63d329f77d23d8c0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:32:15 compute-0 systemd[1]: Started libpod-conmon-5d4c7cee68e2c97a7ea8e9ff468b463ef8823402860fd4d63d329f77d23d8c0b.scope.
Nov 25 17:32:15 compute-0 podman[434884]: 2025-11-25 17:32:15.530680268 +0000 UTC m=+0.033677208 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:32:15 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:32:15 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:32:15 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:32:15 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:32:15 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:32:15 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:32:15 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:32:15 compute-0 podman[434884]: 2025-11-25 17:32:15.686023326 +0000 UTC m=+0.189020226 container init 5d4c7cee68e2c97a7ea8e9ff468b463ef8823402860fd4d63d329f77d23d8c0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_bohr, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:32:15 compute-0 podman[434884]: 2025-11-25 17:32:15.699740049 +0000 UTC m=+0.202736919 container start 5d4c7cee68e2c97a7ea8e9ff468b463ef8823402860fd4d63d329f77d23d8c0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_bohr, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 17:32:15 compute-0 podman[434884]: 2025-11-25 17:32:15.704744925 +0000 UTC m=+0.207741825 container attach 5d4c7cee68e2c97a7ea8e9ff468b463ef8823402860fd4d63d329f77d23d8c0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 17:32:15 compute-0 cranky_bohr[434900]: 167 167
Nov 25 17:32:15 compute-0 systemd[1]: libpod-5d4c7cee68e2c97a7ea8e9ff468b463ef8823402860fd4d63d329f77d23d8c0b.scope: Deactivated successfully.
Nov 25 17:32:15 compute-0 podman[434884]: 2025-11-25 17:32:15.711531021 +0000 UTC m=+0.214527921 container died 5d4c7cee68e2c97a7ea8e9ff468b463ef8823402860fd4d63d329f77d23d8c0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_bohr, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 17:32:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce18828d000e6817007b550ed20b0c479c1145c00ec0999c7b4e5148f1ae3145-merged.mount: Deactivated successfully.
Nov 25 17:32:15 compute-0 podman[434884]: 2025-11-25 17:32:15.773315412 +0000 UTC m=+0.276312312 container remove 5d4c7cee68e2c97a7ea8e9ff468b463ef8823402860fd4d63d329f77d23d8c0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 17:32:15 compute-0 systemd[1]: libpod-conmon-5d4c7cee68e2c97a7ea8e9ff468b463ef8823402860fd4d63d329f77d23d8c0b.scope: Deactivated successfully.
Nov 25 17:32:16 compute-0 podman[434924]: 2025-11-25 17:32:16.014261811 +0000 UTC m=+0.054031683 container create 8a7d531fee9d43634d2d092b64cb4fce8beef3a04e4e05eb2f16f98c957da346 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 17:32:16 compute-0 systemd[1]: Started libpod-conmon-8a7d531fee9d43634d2d092b64cb4fce8beef3a04e4e05eb2f16f98c957da346.scope.
Nov 25 17:32:16 compute-0 podman[434924]: 2025-11-25 17:32:15.988003115 +0000 UTC m=+0.027772957 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:32:16 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:32:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea2c3324a0e405073ae23e4e9d0a570ec5238e425ef068b0a2dd3670f33d8824/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:32:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea2c3324a0e405073ae23e4e9d0a570ec5238e425ef068b0a2dd3670f33d8824/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:32:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea2c3324a0e405073ae23e4e9d0a570ec5238e425ef068b0a2dd3670f33d8824/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:32:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea2c3324a0e405073ae23e4e9d0a570ec5238e425ef068b0a2dd3670f33d8824/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:32:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea2c3324a0e405073ae23e4e9d0a570ec5238e425ef068b0a2dd3670f33d8824/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:32:16 compute-0 podman[434924]: 2025-11-25 17:32:16.143087277 +0000 UTC m=+0.182857119 container init 8a7d531fee9d43634d2d092b64cb4fce8beef3a04e4e05eb2f16f98c957da346 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_jennings, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:32:16 compute-0 podman[434924]: 2025-11-25 17:32:16.155903295 +0000 UTC m=+0.195673137 container start 8a7d531fee9d43634d2d092b64cb4fce8beef3a04e4e05eb2f16f98c957da346 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 17:32:16 compute-0 podman[434924]: 2025-11-25 17:32:16.160183752 +0000 UTC m=+0.199953594 container attach 8a7d531fee9d43634d2d092b64cb4fce8beef3a04e4e05eb2f16f98c957da346 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Nov 25 17:32:16 compute-0 ceph-mon[74985]: pgmap v3279: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.6 KiB/s wr, 30 op/s
Nov 25 17:32:17 compute-0 youthful_jennings[434940]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:32:17 compute-0 youthful_jennings[434940]: --> relative data size: 1.0
Nov 25 17:32:17 compute-0 youthful_jennings[434940]: --> All data devices are unavailable
Nov 25 17:32:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3280: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.6 KiB/s wr, 30 op/s
Nov 25 17:32:17 compute-0 systemd[1]: libpod-8a7d531fee9d43634d2d092b64cb4fce8beef3a04e4e05eb2f16f98c957da346.scope: Deactivated successfully.
Nov 25 17:32:17 compute-0 systemd[1]: libpod-8a7d531fee9d43634d2d092b64cb4fce8beef3a04e4e05eb2f16f98c957da346.scope: Consumed 1.257s CPU time.
Nov 25 17:32:17 compute-0 podman[434924]: 2025-11-25 17:32:17.453088634 +0000 UTC m=+1.492858486 container died 8a7d531fee9d43634d2d092b64cb4fce8beef3a04e4e05eb2f16f98c957da346 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 17:32:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea2c3324a0e405073ae23e4e9d0a570ec5238e425ef068b0a2dd3670f33d8824-merged.mount: Deactivated successfully.
Nov 25 17:32:17 compute-0 podman[434924]: 2025-11-25 17:32:17.542850047 +0000 UTC m=+1.582619899 container remove 8a7d531fee9d43634d2d092b64cb4fce8beef3a04e4e05eb2f16f98c957da346 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_jennings, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:32:17 compute-0 systemd[1]: libpod-conmon-8a7d531fee9d43634d2d092b64cb4fce8beef3a04e4e05eb2f16f98c957da346.scope: Deactivated successfully.
Nov 25 17:32:17 compute-0 sudo[434818]: pam_unix(sudo:session): session closed for user root
Nov 25 17:32:17 compute-0 sudo[434981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:32:17 compute-0 sudo[434981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:32:17 compute-0 sudo[434981]: pam_unix(sudo:session): session closed for user root
Nov 25 17:32:17 compute-0 nova_compute[254092]: 2025-11-25 17:32:17.743 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:32:17 compute-0 sudo[435006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:32:17 compute-0 sudo[435006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:32:17 compute-0 sudo[435006]: pam_unix(sudo:session): session closed for user root
Nov 25 17:32:17 compute-0 sudo[435031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:32:17 compute-0 sudo[435031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:32:17 compute-0 sudo[435031]: pam_unix(sudo:session): session closed for user root
Nov 25 17:32:17 compute-0 sudo[435056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:32:17 compute-0 sudo[435056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:32:18 compute-0 podman[435122]: 2025-11-25 17:32:18.352979139 +0000 UTC m=+0.052567912 container create 5f8f1f80ba8caec70346349e38c4c152376cc69bc4cfd7f246b9b662ad68ae5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 17:32:18 compute-0 systemd[1]: Started libpod-conmon-5f8f1f80ba8caec70346349e38c4c152376cc69bc4cfd7f246b9b662ad68ae5f.scope.
Nov 25 17:32:18 compute-0 podman[435122]: 2025-11-25 17:32:18.327866555 +0000 UTC m=+0.027455328 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:32:18 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:32:18 compute-0 podman[435122]: 2025-11-25 17:32:18.464508984 +0000 UTC m=+0.164097797 container init 5f8f1f80ba8caec70346349e38c4c152376cc69bc4cfd7f246b9b662ad68ae5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_blackwell, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 17:32:18 compute-0 podman[435122]: 2025-11-25 17:32:18.476720317 +0000 UTC m=+0.176309070 container start 5f8f1f80ba8caec70346349e38c4c152376cc69bc4cfd7f246b9b662ad68ae5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_blackwell, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 17:32:18 compute-0 podman[435122]: 2025-11-25 17:32:18.481776895 +0000 UTC m=+0.181365668 container attach 5f8f1f80ba8caec70346349e38c4c152376cc69bc4cfd7f246b9b662ad68ae5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_blackwell, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 17:32:18 compute-0 infallible_blackwell[435138]: 167 167
Nov 25 17:32:18 compute-0 systemd[1]: libpod-5f8f1f80ba8caec70346349e38c4c152376cc69bc4cfd7f246b9b662ad68ae5f.scope: Deactivated successfully.
Nov 25 17:32:18 compute-0 conmon[435138]: conmon 5f8f1f80ba8caec70346 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5f8f1f80ba8caec70346349e38c4c152376cc69bc4cfd7f246b9b662ad68ae5f.scope/container/memory.events
Nov 25 17:32:18 compute-0 podman[435122]: 2025-11-25 17:32:18.488258291 +0000 UTC m=+0.187847074 container died 5f8f1f80ba8caec70346349e38c4c152376cc69bc4cfd7f246b9b662ad68ae5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_blackwell, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:32:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-8796a0a6bffe97d77cb1f211750de705d7d8ef058b3c2cb9fb1482985ae7d5c0-merged.mount: Deactivated successfully.
Nov 25 17:32:18 compute-0 podman[435122]: 2025-11-25 17:32:18.541306325 +0000 UTC m=+0.240895068 container remove 5f8f1f80ba8caec70346349e38c4c152376cc69bc4cfd7f246b9b662ad68ae5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_blackwell, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:32:18 compute-0 systemd[1]: libpod-conmon-5f8f1f80ba8caec70346349e38c4c152376cc69bc4cfd7f246b9b662ad68ae5f.scope: Deactivated successfully.
Nov 25 17:32:18 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e302 do_prune osdmap full prune enabled
Nov 25 17:32:18 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e303 e303: 3 total, 3 up, 3 in
Nov 25 17:32:18 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e303: 3 total, 3 up, 3 in
Nov 25 17:32:18 compute-0 ceph-mon[74985]: pgmap v3280: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.6 KiB/s wr, 30 op/s
Nov 25 17:32:18 compute-0 podman[435162]: 2025-11-25 17:32:18.754532708 +0000 UTC m=+0.075635859 container create 56b1094af973e28e3332cb85eaeb460424bee995e76db41904a9b0ae68f04646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_yalow, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:32:18 compute-0 podman[435162]: 2025-11-25 17:32:18.725055266 +0000 UTC m=+0.046158427 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:32:18 compute-0 systemd[1]: Started libpod-conmon-56b1094af973e28e3332cb85eaeb460424bee995e76db41904a9b0ae68f04646.scope.
Nov 25 17:32:18 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:32:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27189c9cd551335ee14e130265ad99e5d58b9cc954bbe9ec6a476123a60f9667/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:32:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27189c9cd551335ee14e130265ad99e5d58b9cc954bbe9ec6a476123a60f9667/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:32:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27189c9cd551335ee14e130265ad99e5d58b9cc954bbe9ec6a476123a60f9667/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:32:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27189c9cd551335ee14e130265ad99e5d58b9cc954bbe9ec6a476123a60f9667/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:32:18 compute-0 podman[435162]: 2025-11-25 17:32:18.897958892 +0000 UTC m=+0.219062033 container init 56b1094af973e28e3332cb85eaeb460424bee995e76db41904a9b0ae68f04646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 17:32:18 compute-0 podman[435162]: 2025-11-25 17:32:18.911299876 +0000 UTC m=+0.232402977 container start 56b1094af973e28e3332cb85eaeb460424bee995e76db41904a9b0ae68f04646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_yalow, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 17:32:18 compute-0 podman[435162]: 2025-11-25 17:32:18.915283514 +0000 UTC m=+0.236386645 container attach 56b1094af973e28e3332cb85eaeb460424bee995e76db41904a9b0ae68f04646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_yalow, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:32:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3282: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:32:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:32:19 compute-0 practical_yalow[435178]: {
Nov 25 17:32:19 compute-0 practical_yalow[435178]:     "0": [
Nov 25 17:32:19 compute-0 practical_yalow[435178]:         {
Nov 25 17:32:19 compute-0 practical_yalow[435178]:             "devices": [
Nov 25 17:32:19 compute-0 practical_yalow[435178]:                 "/dev/loop3"
Nov 25 17:32:19 compute-0 practical_yalow[435178]:             ],
Nov 25 17:32:19 compute-0 practical_yalow[435178]:             "lv_name": "ceph_lv0",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:             "lv_size": "21470642176",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:             "name": "ceph_lv0",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:             "tags": {
Nov 25 17:32:19 compute-0 practical_yalow[435178]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:                 "ceph.cluster_name": "ceph",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:                 "ceph.crush_device_class": "",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:                 "ceph.encrypted": "0",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:                 "ceph.osd_id": "0",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:                 "ceph.type": "block",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:                 "ceph.vdo": "0"
Nov 25 17:32:19 compute-0 practical_yalow[435178]:             },
Nov 25 17:32:19 compute-0 practical_yalow[435178]:             "type": "block",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:             "vg_name": "ceph_vg0"
Nov 25 17:32:19 compute-0 practical_yalow[435178]:         }
Nov 25 17:32:19 compute-0 practical_yalow[435178]:     ],
Nov 25 17:32:19 compute-0 practical_yalow[435178]:     "1": [
Nov 25 17:32:19 compute-0 practical_yalow[435178]:         {
Nov 25 17:32:19 compute-0 practical_yalow[435178]:             "devices": [
Nov 25 17:32:19 compute-0 practical_yalow[435178]:                 "/dev/loop4"
Nov 25 17:32:19 compute-0 practical_yalow[435178]:             ],
Nov 25 17:32:19 compute-0 practical_yalow[435178]:             "lv_name": "ceph_lv1",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:             "lv_size": "21470642176",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:             "name": "ceph_lv1",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:             "tags": {
Nov 25 17:32:19 compute-0 practical_yalow[435178]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:                 "ceph.cluster_name": "ceph",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:                 "ceph.crush_device_class": "",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:                 "ceph.encrypted": "0",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:                 "ceph.osd_id": "1",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:                 "ceph.type": "block",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:                 "ceph.vdo": "0"
Nov 25 17:32:19 compute-0 practical_yalow[435178]:             },
Nov 25 17:32:19 compute-0 practical_yalow[435178]:             "type": "block",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:             "vg_name": "ceph_vg1"
Nov 25 17:32:19 compute-0 practical_yalow[435178]:         }
Nov 25 17:32:19 compute-0 practical_yalow[435178]:     ],
Nov 25 17:32:19 compute-0 practical_yalow[435178]:     "2": [
Nov 25 17:32:19 compute-0 practical_yalow[435178]:         {
Nov 25 17:32:19 compute-0 practical_yalow[435178]:             "devices": [
Nov 25 17:32:19 compute-0 practical_yalow[435178]:                 "/dev/loop5"
Nov 25 17:32:19 compute-0 practical_yalow[435178]:             ],
Nov 25 17:32:19 compute-0 practical_yalow[435178]:             "lv_name": "ceph_lv2",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:             "lv_size": "21470642176",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:             "name": "ceph_lv2",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:             "tags": {
Nov 25 17:32:19 compute-0 practical_yalow[435178]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:                 "ceph.cluster_name": "ceph",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:                 "ceph.crush_device_class": "",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:                 "ceph.encrypted": "0",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:                 "ceph.osd_id": "2",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:                 "ceph.type": "block",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:                 "ceph.vdo": "0"
Nov 25 17:32:19 compute-0 practical_yalow[435178]:             },
Nov 25 17:32:19 compute-0 practical_yalow[435178]:             "type": "block",
Nov 25 17:32:19 compute-0 practical_yalow[435178]:             "vg_name": "ceph_vg2"
Nov 25 17:32:19 compute-0 practical_yalow[435178]:         }
Nov 25 17:32:19 compute-0 practical_yalow[435178]:     ]
Nov 25 17:32:19 compute-0 practical_yalow[435178]: }
Nov 25 17:32:19 compute-0 ceph-mon[74985]: osdmap e303: 3 total, 3 up, 3 in
Nov 25 17:32:19 compute-0 systemd[1]: libpod-56b1094af973e28e3332cb85eaeb460424bee995e76db41904a9b0ae68f04646.scope: Deactivated successfully.
Nov 25 17:32:19 compute-0 conmon[435178]: conmon 56b1094af973e28e3332 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-56b1094af973e28e3332cb85eaeb460424bee995e76db41904a9b0ae68f04646.scope/container/memory.events
Nov 25 17:32:19 compute-0 podman[435162]: 2025-11-25 17:32:19.707417065 +0000 UTC m=+1.028520216 container died 56b1094af973e28e3332cb85eaeb460424bee995e76db41904a9b0ae68f04646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:32:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-27189c9cd551335ee14e130265ad99e5d58b9cc954bbe9ec6a476123a60f9667-merged.mount: Deactivated successfully.
Nov 25 17:32:19 compute-0 podman[435162]: 2025-11-25 17:32:19.784769601 +0000 UTC m=+1.105872722 container remove 56b1094af973e28e3332cb85eaeb460424bee995e76db41904a9b0ae68f04646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 17:32:19 compute-0 systemd[1]: libpod-conmon-56b1094af973e28e3332cb85eaeb460424bee995e76db41904a9b0ae68f04646.scope: Deactivated successfully.
Nov 25 17:32:19 compute-0 sudo[435056]: pam_unix(sudo:session): session closed for user root
Nov 25 17:32:19 compute-0 sudo[435199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:32:19 compute-0 sudo[435199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:32:19 compute-0 sudo[435199]: pam_unix(sudo:session): session closed for user root
Nov 25 17:32:20 compute-0 sudo[435224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:32:20 compute-0 sudo[435224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:32:20 compute-0 sudo[435224]: pam_unix(sudo:session): session closed for user root
Nov 25 17:32:20 compute-0 nova_compute[254092]: 2025-11-25 17:32:20.059 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:32:20 compute-0 sudo[435249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:32:20 compute-0 sudo[435249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:32:20 compute-0 sudo[435249]: pam_unix(sudo:session): session closed for user root
Nov 25 17:32:20 compute-0 sudo[435274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:32:20 compute-0 sudo[435274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:32:20 compute-0 podman[435340]: 2025-11-25 17:32:20.617522688 +0000 UTC m=+0.053830186 container create 8eb14b3e8e8259fa013bbe267e1a04104288c472adb70155a30470736afa2c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_swirles, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:32:20 compute-0 systemd[1]: Started libpod-conmon-8eb14b3e8e8259fa013bbe267e1a04104288c472adb70155a30470736afa2c35.scope.
Nov 25 17:32:20 compute-0 podman[435340]: 2025-11-25 17:32:20.592085855 +0000 UTC m=+0.028393193 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:32:20 compute-0 ceph-mon[74985]: pgmap v3282: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:32:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:32:20 compute-0 podman[435340]: 2025-11-25 17:32:20.724503659 +0000 UTC m=+0.160811017 container init 8eb14b3e8e8259fa013bbe267e1a04104288c472adb70155a30470736afa2c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_swirles, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 17:32:20 compute-0 podman[435340]: 2025-11-25 17:32:20.738906741 +0000 UTC m=+0.175213989 container start 8eb14b3e8e8259fa013bbe267e1a04104288c472adb70155a30470736afa2c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_swirles, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:32:20 compute-0 podman[435340]: 2025-11-25 17:32:20.742235912 +0000 UTC m=+0.178543270 container attach 8eb14b3e8e8259fa013bbe267e1a04104288c472adb70155a30470736afa2c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 17:32:20 compute-0 practical_swirles[435356]: 167 167
Nov 25 17:32:20 compute-0 systemd[1]: libpod-8eb14b3e8e8259fa013bbe267e1a04104288c472adb70155a30470736afa2c35.scope: Deactivated successfully.
Nov 25 17:32:20 compute-0 podman[435340]: 2025-11-25 17:32:20.74838958 +0000 UTC m=+0.184696898 container died 8eb14b3e8e8259fa013bbe267e1a04104288c472adb70155a30470736afa2c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_swirles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Nov 25 17:32:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-dfd8fcec1ba7707b2e8543ca108cc973efb64ca4d82f730544d60276199485de-merged.mount: Deactivated successfully.
Nov 25 17:32:20 compute-0 podman[435340]: 2025-11-25 17:32:20.795851652 +0000 UTC m=+0.232158920 container remove 8eb14b3e8e8259fa013bbe267e1a04104288c472adb70155a30470736afa2c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_swirles, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:32:20 compute-0 systemd[1]: libpod-conmon-8eb14b3e8e8259fa013bbe267e1a04104288c472adb70155a30470736afa2c35.scope: Deactivated successfully.
Nov 25 17:32:21 compute-0 podman[435381]: 2025-11-25 17:32:21.054713138 +0000 UTC m=+0.066389949 container create 67655b86df0a8b661c2eab5116a93963f336a0e28b81147af9f7d85aa6cd72bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_perlman, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 17:32:21 compute-0 systemd[1]: Started libpod-conmon-67655b86df0a8b661c2eab5116a93963f336a0e28b81147af9f7d85aa6cd72bb.scope.
Nov 25 17:32:21 compute-0 podman[435381]: 2025-11-25 17:32:21.035523705 +0000 UTC m=+0.047200486 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:32:21 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:32:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae8d9a70438712ae368c39298a63e458c0b18683e8f2ff43cfc197c5c34f6e15/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:32:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae8d9a70438712ae368c39298a63e458c0b18683e8f2ff43cfc197c5c34f6e15/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:32:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae8d9a70438712ae368c39298a63e458c0b18683e8f2ff43cfc197c5c34f6e15/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:32:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae8d9a70438712ae368c39298a63e458c0b18683e8f2ff43cfc197c5c34f6e15/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:32:21 compute-0 podman[435381]: 2025-11-25 17:32:21.160740404 +0000 UTC m=+0.172417245 container init 67655b86df0a8b661c2eab5116a93963f336a0e28b81147af9f7d85aa6cd72bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_perlman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:32:21 compute-0 podman[435381]: 2025-11-25 17:32:21.17456859 +0000 UTC m=+0.186245381 container start 67655b86df0a8b661c2eab5116a93963f336a0e28b81147af9f7d85aa6cd72bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:32:21 compute-0 podman[435381]: 2025-11-25 17:32:21.178898437 +0000 UTC m=+0.190575288 container attach 67655b86df0a8b661c2eab5116a93963f336a0e28b81147af9f7d85aa6cd72bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_perlman, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:32:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3283: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 KiB/s wr, 18 op/s
Nov 25 17:32:22 compute-0 stupefied_perlman[435397]: {
Nov 25 17:32:22 compute-0 stupefied_perlman[435397]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:32:22 compute-0 stupefied_perlman[435397]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:32:22 compute-0 stupefied_perlman[435397]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:32:22 compute-0 stupefied_perlman[435397]:         "osd_id": 1,
Nov 25 17:32:22 compute-0 stupefied_perlman[435397]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:32:22 compute-0 stupefied_perlman[435397]:         "type": "bluestore"
Nov 25 17:32:22 compute-0 stupefied_perlman[435397]:     },
Nov 25 17:32:22 compute-0 stupefied_perlman[435397]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:32:22 compute-0 stupefied_perlman[435397]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:32:22 compute-0 stupefied_perlman[435397]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:32:22 compute-0 stupefied_perlman[435397]:         "osd_id": 2,
Nov 25 17:32:22 compute-0 stupefied_perlman[435397]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:32:22 compute-0 stupefied_perlman[435397]:         "type": "bluestore"
Nov 25 17:32:22 compute-0 stupefied_perlman[435397]:     },
Nov 25 17:32:22 compute-0 stupefied_perlman[435397]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:32:22 compute-0 stupefied_perlman[435397]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:32:22 compute-0 stupefied_perlman[435397]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:32:22 compute-0 stupefied_perlman[435397]:         "osd_id": 0,
Nov 25 17:32:22 compute-0 stupefied_perlman[435397]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:32:22 compute-0 stupefied_perlman[435397]:         "type": "bluestore"
Nov 25 17:32:22 compute-0 stupefied_perlman[435397]:     }
Nov 25 17:32:22 compute-0 stupefied_perlman[435397]: }
Nov 25 17:32:22 compute-0 systemd[1]: libpod-67655b86df0a8b661c2eab5116a93963f336a0e28b81147af9f7d85aa6cd72bb.scope: Deactivated successfully.
Nov 25 17:32:22 compute-0 systemd[1]: libpod-67655b86df0a8b661c2eab5116a93963f336a0e28b81147af9f7d85aa6cd72bb.scope: Consumed 1.143s CPU time.
Nov 25 17:32:22 compute-0 podman[435381]: 2025-11-25 17:32:22.308108334 +0000 UTC m=+1.319785165 container died 67655b86df0a8b661c2eab5116a93963f336a0e28b81147af9f7d85aa6cd72bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_perlman, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:32:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae8d9a70438712ae368c39298a63e458c0b18683e8f2ff43cfc197c5c34f6e15-merged.mount: Deactivated successfully.
Nov 25 17:32:22 compute-0 podman[435381]: 2025-11-25 17:32:22.386995142 +0000 UTC m=+1.398671903 container remove 67655b86df0a8b661c2eab5116a93963f336a0e28b81147af9f7d85aa6cd72bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_perlman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 17:32:22 compute-0 systemd[1]: libpod-conmon-67655b86df0a8b661c2eab5116a93963f336a0e28b81147af9f7d85aa6cd72bb.scope: Deactivated successfully.
Nov 25 17:32:22 compute-0 sudo[435274]: pam_unix(sudo:session): session closed for user root
Nov 25 17:32:22 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:32:22 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:32:22 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:32:22 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:32:22 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev e980db8a-84ff-4c8b-9233-729382f9adf0 does not exist
Nov 25 17:32:22 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev c1d47fbf-40ed-4f99-8218-820b5e4ebf0c does not exist
Nov 25 17:32:22 compute-0 sudo[435444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:32:22 compute-0 sudo[435444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:32:22 compute-0 sudo[435444]: pam_unix(sudo:session): session closed for user root
Nov 25 17:32:22 compute-0 sudo[435469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:32:22 compute-0 sudo[435469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:32:22 compute-0 sudo[435469]: pam_unix(sudo:session): session closed for user root
Nov 25 17:32:22 compute-0 ceph-mon[74985]: pgmap v3283: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 KiB/s wr, 18 op/s
Nov 25 17:32:22 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:32:22 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:32:22 compute-0 nova_compute[254092]: 2025-11-25 17:32:22.745 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:32:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3284: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.8 KiB/s wr, 17 op/s
Nov 25 17:32:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:32:24 compute-0 ceph-mon[74985]: pgmap v3284: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.8 KiB/s wr, 17 op/s
Nov 25 17:32:25 compute-0 nova_compute[254092]: 2025-11-25 17:32:25.101 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:32:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3285: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 25 17:32:26 compute-0 ceph-mon[74985]: pgmap v3285: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 25 17:32:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3286: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 25 17:32:27 compute-0 nova_compute[254092]: 2025-11-25 17:32:27.749 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:32:28 compute-0 podman[435495]: 2025-11-25 17:32:28.70447481 +0000 UTC m=+0.104192327 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 25 17:32:28 compute-0 podman[435494]: 2025-11-25 17:32:28.718968154 +0000 UTC m=+0.118169277 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team)
Nov 25 17:32:28 compute-0 ceph-mon[74985]: pgmap v3286: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Nov 25 17:32:28 compute-0 podman[435496]: 2025-11-25 17:32:28.7706302 +0000 UTC m=+0.164160759 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 17:32:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3287: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 1.5 KiB/s wr, 13 op/s
Nov 25 17:32:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:32:29 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #156. Immutable memtables: 0.
Nov 25 17:32:29 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:32:29.750026) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 17:32:29 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 156
Nov 25 17:32:29 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091949750139, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 1724, "num_deletes": 257, "total_data_size": 2715082, "memory_usage": 2760592, "flush_reason": "Manual Compaction"}
Nov 25 17:32:29 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #157: started
Nov 25 17:32:29 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091949771737, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 157, "file_size": 2654235, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 66536, "largest_seqno": 68259, "table_properties": {"data_size": 2646235, "index_size": 4877, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16645, "raw_average_key_size": 20, "raw_value_size": 2630127, "raw_average_value_size": 3219, "num_data_blocks": 217, "num_entries": 817, "num_filter_entries": 817, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764091780, "oldest_key_time": 1764091780, "file_creation_time": 1764091949, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 157, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:32:29 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 21771 microseconds, and 12314 cpu microseconds.
Nov 25 17:32:29 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:32:29 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:32:29.771808) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #157: 2654235 bytes OK
Nov 25 17:32:29 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:32:29.771840) [db/memtable_list.cc:519] [default] Level-0 commit table #157 started
Nov 25 17:32:29 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:32:29.774144) [db/memtable_list.cc:722] [default] Level-0 commit table #157: memtable #1 done
Nov 25 17:32:29 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:32:29.774172) EVENT_LOG_v1 {"time_micros": 1764091949774162, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 17:32:29 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:32:29.774206) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 17:32:29 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 2707641, prev total WAL file size 2707641, number of live WAL files 2.
Nov 25 17:32:29 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000153.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:32:29 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:32:29.776102) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Nov 25 17:32:29 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 17:32:29 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [157(2592KB)], [155(10211KB)]
Nov 25 17:32:29 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091949776187, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [157], "files_L6": [155], "score": -1, "input_data_size": 13111256, "oldest_snapshot_seqno": -1}
Nov 25 17:32:29 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #158: 8690 keys, 11399065 bytes, temperature: kUnknown
Nov 25 17:32:29 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091949853571, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 158, "file_size": 11399065, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11342108, "index_size": 34124, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21765, "raw_key_size": 228123, "raw_average_key_size": 26, "raw_value_size": 11188223, "raw_average_value_size": 1287, "num_data_blocks": 1326, "num_entries": 8690, "num_filter_entries": 8690, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764091949, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:32:29 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:32:29 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:32:29.854019) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 11399065 bytes
Nov 25 17:32:29 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:32:29.855718) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.0 rd, 147.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 10.0 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(9.2) write-amplify(4.3) OK, records in: 9217, records dropped: 527 output_compression: NoCompression
Nov 25 17:32:29 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:32:29.855749) EVENT_LOG_v1 {"time_micros": 1764091949855734, "job": 96, "event": "compaction_finished", "compaction_time_micros": 77560, "compaction_time_cpu_micros": 53034, "output_level": 6, "num_output_files": 1, "total_output_size": 11399065, "num_input_records": 9217, "num_output_records": 8690, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 17:32:29 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000157.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:32:29 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091949856931, "job": 96, "event": "table_file_deletion", "file_number": 157}
Nov 25 17:32:29 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:32:29 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764091949860935, "job": 96, "event": "table_file_deletion", "file_number": 155}
Nov 25 17:32:29 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:32:29.775891) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:32:29 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:32:29.861182) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:32:29 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:32:29.861192) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:32:29 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:32:29.861194) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:32:29 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:32:29.861197) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:32:29 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:32:29.861201) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:32:30 compute-0 nova_compute[254092]: 2025-11-25 17:32:30.103 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:32:30 compute-0 ceph-mon[74985]: pgmap v3287: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 1.5 KiB/s wr, 13 op/s
Nov 25 17:32:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3288: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail; 8.7 KiB/s rd, 1.3 KiB/s wr, 12 op/s
Nov 25 17:32:32 compute-0 nova_compute[254092]: 2025-11-25 17:32:32.752 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:32:32 compute-0 ceph-mon[74985]: pgmap v3288: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail; 8.7 KiB/s rd, 1.3 KiB/s wr, 12 op/s
Nov 25 17:32:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3289: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:32:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:32:34 compute-0 ceph-mon[74985]: pgmap v3289: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:32:35 compute-0 nova_compute[254092]: 2025-11-25 17:32:35.107 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:32:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3290: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:32:36 compute-0 ceph-mon[74985]: pgmap v3290: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:32:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3291: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:32:37 compute-0 nova_compute[254092]: 2025-11-25 17:32:37.754 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:32:37 compute-0 ceph-mon[74985]: pgmap v3291: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:32:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3292: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:32:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:32:40 compute-0 nova_compute[254092]: 2025-11-25 17:32:40.110 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:32:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:32:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:32:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:32:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:32:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:32:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:32:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:32:40
Nov 25 17:32:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:32:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:32:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['.rgw.root', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', 'backups', 'volumes', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log']
Nov 25 17:32:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:32:40 compute-0 ceph-mon[74985]: pgmap v3292: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:32:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:32:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:32:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:32:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:32:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:32:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:32:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:32:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:32:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:32:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:32:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3293: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:32:42 compute-0 ceph-mon[74985]: pgmap v3293: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:32:42 compute-0 nova_compute[254092]: 2025-11-25 17:32:42.757 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:32:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3294: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:32:44 compute-0 ceph-mon[74985]: pgmap v3294: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:32:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:32:45 compute-0 nova_compute[254092]: 2025-11-25 17:32:45.135 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:32:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3295: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:32:46 compute-0 ceph-mon[74985]: pgmap v3295: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:32:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3296: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:32:47 compute-0 nova_compute[254092]: 2025-11-25 17:32:47.759 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:32:48 compute-0 ceph-mon[74985]: pgmap v3296: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:32:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3297: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:32:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:32:50 compute-0 nova_compute[254092]: 2025-11-25 17:32:50.138 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:32:50 compute-0 ceph-mon[74985]: pgmap v3297: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:32:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3298: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:32:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:32:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:32:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:32:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:32:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 17:32:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:32:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:32:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:32:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:32:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:32:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.4513495474376506e-07 of space, bias 1.0, pg target 0.00013354048642312953 quantized to 32 (current 32)
Nov 25 17:32:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:32:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:32:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:32:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:32:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:32:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:32:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:32:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:32:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:32:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:32:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:32:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:32:52 compute-0 ceph-mon[74985]: pgmap v3298: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:32:52 compute-0 nova_compute[254092]: 2025-11-25 17:32:52.763 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:32:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3299: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:32:54 compute-0 ceph-mon[74985]: pgmap v3299: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:32:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:32:55 compute-0 nova_compute[254092]: 2025-11-25 17:32:55.140 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:32:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:32:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2829513400' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:32:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:32:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2829513400' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:32:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3300: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:32:55 compute-0 nova_compute[254092]: 2025-11-25 17:32:55.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:32:55 compute-0 nova_compute[254092]: 2025-11-25 17:32:55.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:32:55 compute-0 nova_compute[254092]: 2025-11-25 17:32:55.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:32:55 compute-0 nova_compute[254092]: 2025-11-25 17:32:55.520 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:32:55 compute-0 nova_compute[254092]: 2025-11-25 17:32:55.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:32:55 compute-0 nova_compute[254092]: 2025-11-25 17:32:55.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:32:55 compute-0 nova_compute[254092]: 2025-11-25 17:32:55.521 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:32:55 compute-0 nova_compute[254092]: 2025-11-25 17:32:55.521 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:32:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/2829513400' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:32:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/2829513400' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:32:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:32:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1892037524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:32:55 compute-0 nova_compute[254092]: 2025-11-25 17:32:55.977 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:32:56 compute-0 nova_compute[254092]: 2025-11-25 17:32:56.141 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:32:56 compute-0 nova_compute[254092]: 2025-11-25 17:32:56.142 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3638MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:32:56 compute-0 nova_compute[254092]: 2025-11-25 17:32:56.143 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:32:56 compute-0 nova_compute[254092]: 2025-11-25 17:32:56.143 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:32:56 compute-0 nova_compute[254092]: 2025-11-25 17:32:56.199 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:32:56 compute-0 nova_compute[254092]: 2025-11-25 17:32:56.199 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:32:56 compute-0 nova_compute[254092]: 2025-11-25 17:32:56.213 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:32:56 compute-0 ceph-mon[74985]: pgmap v3300: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:32:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1892037524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:32:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:32:56 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3597018792' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:32:56 compute-0 nova_compute[254092]: 2025-11-25 17:32:56.660 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:32:56 compute-0 nova_compute[254092]: 2025-11-25 17:32:56.665 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:32:56 compute-0 nova_compute[254092]: 2025-11-25 17:32:56.682 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:32:56 compute-0 nova_compute[254092]: 2025-11-25 17:32:56.683 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:32:56 compute-0 nova_compute[254092]: 2025-11-25 17:32:56.684 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.541s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:32:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3301: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:32:57 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3597018792' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:32:57 compute-0 nova_compute[254092]: 2025-11-25 17:32:57.683 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:32:57 compute-0 nova_compute[254092]: 2025-11-25 17:32:57.764 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:32:58 compute-0 ceph-mon[74985]: pgmap v3301: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:32:59 compute-0 sshd-session[435600]: Received disconnect from 80.94.93.233 port 46141:11:  [preauth]
Nov 25 17:32:59 compute-0 sshd-session[435600]: Disconnected from authenticating user root 80.94.93.233 port 46141 [preauth]
Nov 25 17:32:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3302: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:32:59 compute-0 nova_compute[254092]: 2025-11-25 17:32:59.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:32:59 compute-0 nova_compute[254092]: 2025-11-25 17:32:59.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:32:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:32:59 compute-0 podman[435603]: 2025-11-25 17:32:59.66134254 +0000 UTC m=+0.062895883 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 17:32:59 compute-0 podman[435602]: 2025-11-25 17:32:59.681887699 +0000 UTC m=+0.084899352 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd)
Nov 25 17:32:59 compute-0 podman[435604]: 2025-11-25 17:32:59.723823161 +0000 UTC m=+0.108223867 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 17:33:00 compute-0 nova_compute[254092]: 2025-11-25 17:33:00.142 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:33:00 compute-0 ceph-mon[74985]: pgmap v3302: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3303: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:02 compute-0 nova_compute[254092]: 2025-11-25 17:33:02.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:33:02 compute-0 ceph-mon[74985]: pgmap v3303: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:02 compute-0 nova_compute[254092]: 2025-11-25 17:33:02.765 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:33:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3304: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:03 compute-0 nova_compute[254092]: 2025-11-25 17:33:03.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:33:03 compute-0 nova_compute[254092]: 2025-11-25 17:33:03.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:33:03 compute-0 nova_compute[254092]: 2025-11-25 17:33:03.498 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:33:03 compute-0 nova_compute[254092]: 2025-11-25 17:33:03.516 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 17:33:04 compute-0 nova_compute[254092]: 2025-11-25 17:33:04.510 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:33:04 compute-0 ceph-mon[74985]: pgmap v3304: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:33:05 compute-0 nova_compute[254092]: 2025-11-25 17:33:05.146 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:33:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3305: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:06 compute-0 ceph-mon[74985]: pgmap v3305: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3306: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:07 compute-0 nova_compute[254092]: 2025-11-25 17:33:07.767 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:33:08 compute-0 ceph-mon[74985]: pgmap v3306: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3307: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:33:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:33:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:33:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:33:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:33:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:33:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:33:10 compute-0 nova_compute[254092]: 2025-11-25 17:33:10.187 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:33:10 compute-0 nova_compute[254092]: 2025-11-25 17:33:10.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:33:10 compute-0 ceph-mon[74985]: pgmap v3307: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3308: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:12 compute-0 ceph-mon[74985]: pgmap v3308: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:12 compute-0 nova_compute[254092]: 2025-11-25 17:33:12.771 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:33:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3309: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:33:13.673 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:33:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:33:13.674 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:33:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:33:13.674 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:33:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:33:14 compute-0 ceph-mon[74985]: pgmap v3309: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:15 compute-0 nova_compute[254092]: 2025-11-25 17:33:15.191 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:33:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3310: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:16 compute-0 ceph-mon[74985]: pgmap v3310: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3311: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:17 compute-0 nova_compute[254092]: 2025-11-25 17:33:17.774 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:33:18 compute-0 ceph-mon[74985]: pgmap v3311: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3312: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:33:20 compute-0 nova_compute[254092]: 2025-11-25 17:33:20.234 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:33:20 compute-0 ceph-mon[74985]: pgmap v3312: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3313: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:22 compute-0 ceph-mon[74985]: pgmap v3313: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:22 compute-0 sudo[435667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:33:22 compute-0 sudo[435667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:33:22 compute-0 nova_compute[254092]: 2025-11-25 17:33:22.780 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:33:22 compute-0 sudo[435667]: pam_unix(sudo:session): session closed for user root
Nov 25 17:33:22 compute-0 sudo[435692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:33:22 compute-0 sudo[435692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:33:22 compute-0 sudo[435692]: pam_unix(sudo:session): session closed for user root
Nov 25 17:33:22 compute-0 sudo[435717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:33:22 compute-0 sudo[435717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:33:22 compute-0 sudo[435717]: pam_unix(sudo:session): session closed for user root
Nov 25 17:33:23 compute-0 sudo[435742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:33:23 compute-0 sudo[435742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:33:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3314: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:23 compute-0 sudo[435742]: pam_unix(sudo:session): session closed for user root
Nov 25 17:33:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:33:23 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:33:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:33:23 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:33:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:33:23 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:33:23 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 68223c93-765c-4941-bfa8-6a2813cd0b73 does not exist
Nov 25 17:33:23 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 75113497-9b25-4e3f-922e-b65fa090f56d does not exist
Nov 25 17:33:23 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 10cf8c44-9c41-404d-be3c-dfd8e15db1a5 does not exist
Nov 25 17:33:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:33:23 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:33:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:33:23 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:33:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:33:23 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:33:23 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:33:23 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:33:23 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:33:23 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:33:23 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:33:23 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:33:23 compute-0 sudo[435798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:33:23 compute-0 sudo[435798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:33:23 compute-0 sudo[435798]: pam_unix(sudo:session): session closed for user root
Nov 25 17:33:23 compute-0 sudo[435823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:33:23 compute-0 sudo[435823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:33:23 compute-0 sudo[435823]: pam_unix(sudo:session): session closed for user root
Nov 25 17:33:23 compute-0 sudo[435848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:33:23 compute-0 sudo[435848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:33:23 compute-0 sudo[435848]: pam_unix(sudo:session): session closed for user root
Nov 25 17:33:23 compute-0 sudo[435873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:33:23 compute-0 sudo[435873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:33:24 compute-0 podman[435939]: 2025-11-25 17:33:24.340915239 +0000 UTC m=+0.041381598 container create 325ff3ae55c24666a8d590336cc6b3a55e090947d35a3937cc3833adb34097e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_sutherland, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:33:24 compute-0 systemd[1]: Started libpod-conmon-325ff3ae55c24666a8d590336cc6b3a55e090947d35a3937cc3833adb34097e3.scope.
Nov 25 17:33:24 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:33:24 compute-0 podman[435939]: 2025-11-25 17:33:24.322789716 +0000 UTC m=+0.023256105 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:33:24 compute-0 podman[435939]: 2025-11-25 17:33:24.435961246 +0000 UTC m=+0.136427665 container init 325ff3ae55c24666a8d590336cc6b3a55e090947d35a3937cc3833adb34097e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_sutherland, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 17:33:24 compute-0 podman[435939]: 2025-11-25 17:33:24.447584112 +0000 UTC m=+0.148050471 container start 325ff3ae55c24666a8d590336cc6b3a55e090947d35a3937cc3833adb34097e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_sutherland, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 17:33:24 compute-0 charming_sutherland[435955]: 167 167
Nov 25 17:33:24 compute-0 podman[435939]: 2025-11-25 17:33:24.453524334 +0000 UTC m=+0.153990723 container attach 325ff3ae55c24666a8d590336cc6b3a55e090947d35a3937cc3833adb34097e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 17:33:24 compute-0 systemd[1]: libpod-325ff3ae55c24666a8d590336cc6b3a55e090947d35a3937cc3833adb34097e3.scope: Deactivated successfully.
Nov 25 17:33:24 compute-0 podman[435939]: 2025-11-25 17:33:24.454415259 +0000 UTC m=+0.154881658 container died 325ff3ae55c24666a8d590336cc6b3a55e090947d35a3937cc3833adb34097e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_sutherland, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:33:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-7eb6dc0c778a1b9fee46e781b3bbd8b29fde3a60da06a2d7cfac8fd9f3e2d3cb-merged.mount: Deactivated successfully.
Nov 25 17:33:24 compute-0 podman[435939]: 2025-11-25 17:33:24.500084242 +0000 UTC m=+0.200550601 container remove 325ff3ae55c24666a8d590336cc6b3a55e090947d35a3937cc3833adb34097e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_sutherland, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:33:24 compute-0 systemd[1]: libpod-conmon-325ff3ae55c24666a8d590336cc6b3a55e090947d35a3937cc3833adb34097e3.scope: Deactivated successfully.
Nov 25 17:33:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:33:24 compute-0 podman[435980]: 2025-11-25 17:33:24.702260125 +0000 UTC m=+0.067484738 container create 287381bd77e93da871541f5efb844770fb721989e6f25be108103369f68f1ab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:33:24 compute-0 ceph-mon[74985]: pgmap v3314: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:24 compute-0 systemd[1]: Started libpod-conmon-287381bd77e93da871541f5efb844770fb721989e6f25be108103369f68f1ab9.scope.
Nov 25 17:33:24 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:33:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d6a3f53481b1c6a56c8cb2d5e540a6f761f5ea86dafbf47fe5f215e9f1d32b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:33:24 compute-0 podman[435980]: 2025-11-25 17:33:24.684610694 +0000 UTC m=+0.049835377 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:33:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d6a3f53481b1c6a56c8cb2d5e540a6f761f5ea86dafbf47fe5f215e9f1d32b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:33:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d6a3f53481b1c6a56c8cb2d5e540a6f761f5ea86dafbf47fe5f215e9f1d32b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:33:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d6a3f53481b1c6a56c8cb2d5e540a6f761f5ea86dafbf47fe5f215e9f1d32b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:33:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d6a3f53481b1c6a56c8cb2d5e540a6f761f5ea86dafbf47fe5f215e9f1d32b6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:33:24 compute-0 podman[435980]: 2025-11-25 17:33:24.798983768 +0000 UTC m=+0.164208481 container init 287381bd77e93da871541f5efb844770fb721989e6f25be108103369f68f1ab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:33:24 compute-0 podman[435980]: 2025-11-25 17:33:24.806136112 +0000 UTC m=+0.171360765 container start 287381bd77e93da871541f5efb844770fb721989e6f25be108103369f68f1ab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kapitsa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:33:24 compute-0 podman[435980]: 2025-11-25 17:33:24.809689048 +0000 UTC m=+0.174913711 container attach 287381bd77e93da871541f5efb844770fb721989e6f25be108103369f68f1ab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kapitsa, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 17:33:25 compute-0 nova_compute[254092]: 2025-11-25 17:33:25.236 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:33:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3315: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:25 compute-0 xenodochial_kapitsa[435996]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:33:25 compute-0 xenodochial_kapitsa[435996]: --> relative data size: 1.0
Nov 25 17:33:25 compute-0 xenodochial_kapitsa[435996]: --> All data devices are unavailable
Nov 25 17:33:25 compute-0 systemd[1]: libpod-287381bd77e93da871541f5efb844770fb721989e6f25be108103369f68f1ab9.scope: Deactivated successfully.
Nov 25 17:33:25 compute-0 podman[435980]: 2025-11-25 17:33:25.898200957 +0000 UTC m=+1.263425580 container died 287381bd77e93da871541f5efb844770fb721989e6f25be108103369f68f1ab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kapitsa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 17:33:25 compute-0 systemd[1]: libpod-287381bd77e93da871541f5efb844770fb721989e6f25be108103369f68f1ab9.scope: Consumed 1.043s CPU time.
Nov 25 17:33:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d6a3f53481b1c6a56c8cb2d5e540a6f761f5ea86dafbf47fe5f215e9f1d32b6-merged.mount: Deactivated successfully.
Nov 25 17:33:25 compute-0 podman[435980]: 2025-11-25 17:33:25.954681474 +0000 UTC m=+1.319906137 container remove 287381bd77e93da871541f5efb844770fb721989e6f25be108103369f68f1ab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:33:25 compute-0 systemd[1]: libpod-conmon-287381bd77e93da871541f5efb844770fb721989e6f25be108103369f68f1ab9.scope: Deactivated successfully.
Nov 25 17:33:26 compute-0 sudo[435873]: pam_unix(sudo:session): session closed for user root
Nov 25 17:33:26 compute-0 sudo[436037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:33:26 compute-0 sudo[436037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:33:26 compute-0 sudo[436037]: pam_unix(sudo:session): session closed for user root
Nov 25 17:33:26 compute-0 sudo[436062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:33:26 compute-0 sudo[436062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:33:26 compute-0 sudo[436062]: pam_unix(sudo:session): session closed for user root
Nov 25 17:33:26 compute-0 sudo[436087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:33:26 compute-0 sudo[436087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:33:26 compute-0 sudo[436087]: pam_unix(sudo:session): session closed for user root
Nov 25 17:33:26 compute-0 sudo[436112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:33:26 compute-0 sudo[436112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:33:26 compute-0 ceph-mon[74985]: pgmap v3315: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:26 compute-0 podman[436180]: 2025-11-25 17:33:26.747080003 +0000 UTC m=+0.055274796 container create 30639007eb68575f312c7b43510359ad7a24eb03ea5631e43e5470bb0f911ad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 17:33:26 compute-0 systemd[1]: Started libpod-conmon-30639007eb68575f312c7b43510359ad7a24eb03ea5631e43e5470bb0f911ad1.scope.
Nov 25 17:33:26 compute-0 podman[436180]: 2025-11-25 17:33:26.719526213 +0000 UTC m=+0.027721066 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:33:26 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:33:26 compute-0 podman[436180]: 2025-11-25 17:33:26.843796425 +0000 UTC m=+0.151991218 container init 30639007eb68575f312c7b43510359ad7a24eb03ea5631e43e5470bb0f911ad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:33:26 compute-0 podman[436180]: 2025-11-25 17:33:26.851296349 +0000 UTC m=+0.159491122 container start 30639007eb68575f312c7b43510359ad7a24eb03ea5631e43e5470bb0f911ad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wilbur, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 17:33:26 compute-0 podman[436180]: 2025-11-25 17:33:26.854837976 +0000 UTC m=+0.163032769 container attach 30639007eb68575f312c7b43510359ad7a24eb03ea5631e43e5470bb0f911ad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wilbur, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:33:26 compute-0 objective_wilbur[436196]: 167 167
Nov 25 17:33:26 compute-0 systemd[1]: libpod-30639007eb68575f312c7b43510359ad7a24eb03ea5631e43e5470bb0f911ad1.scope: Deactivated successfully.
Nov 25 17:33:26 compute-0 podman[436180]: 2025-11-25 17:33:26.860039017 +0000 UTC m=+0.168233810 container died 30639007eb68575f312c7b43510359ad7a24eb03ea5631e43e5470bb0f911ad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wilbur, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 17:33:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b818e21b04b0f46d924697cb0fe9a6d891cfbab4876bd5a289cb2a2ec6f03b0-merged.mount: Deactivated successfully.
Nov 25 17:33:26 compute-0 podman[436180]: 2025-11-25 17:33:26.899745708 +0000 UTC m=+0.207940471 container remove 30639007eb68575f312c7b43510359ad7a24eb03ea5631e43e5470bb0f911ad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wilbur, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:33:26 compute-0 systemd[1]: libpod-conmon-30639007eb68575f312c7b43510359ad7a24eb03ea5631e43e5470bb0f911ad1.scope: Deactivated successfully.
Nov 25 17:33:27 compute-0 podman[436220]: 2025-11-25 17:33:27.109162898 +0000 UTC m=+0.047357610 container create 893e5d0ab5e3a8022e01dd0cebfbe6ca467f92e496066899a91219aaa163c8ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_bose, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:33:27 compute-0 systemd[1]: Started libpod-conmon-893e5d0ab5e3a8022e01dd0cebfbe6ca467f92e496066899a91219aaa163c8ac.scope.
Nov 25 17:33:27 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c04af2aa501a1d3fd24ede200654a2500bcdca36b8dc708a63d032dd2a290798/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c04af2aa501a1d3fd24ede200654a2500bcdca36b8dc708a63d032dd2a290798/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c04af2aa501a1d3fd24ede200654a2500bcdca36b8dc708a63d032dd2a290798/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c04af2aa501a1d3fd24ede200654a2500bcdca36b8dc708a63d032dd2a290798/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:33:27 compute-0 podman[436220]: 2025-11-25 17:33:27.088733403 +0000 UTC m=+0.026928145 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:33:27 compute-0 podman[436220]: 2025-11-25 17:33:27.194590344 +0000 UTC m=+0.132785086 container init 893e5d0ab5e3a8022e01dd0cebfbe6ca467f92e496066899a91219aaa163c8ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_bose, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:33:27 compute-0 podman[436220]: 2025-11-25 17:33:27.211453673 +0000 UTC m=+0.149648395 container start 893e5d0ab5e3a8022e01dd0cebfbe6ca467f92e496066899a91219aaa163c8ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_bose, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:33:27 compute-0 podman[436220]: 2025-11-25 17:33:27.215093192 +0000 UTC m=+0.153287914 container attach 893e5d0ab5e3a8022e01dd0cebfbe6ca467f92e496066899a91219aaa163c8ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_bose, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:33:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3316: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:27 compute-0 nova_compute[254092]: 2025-11-25 17:33:27.781 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:33:27 compute-0 nervous_bose[436236]: {
Nov 25 17:33:27 compute-0 nervous_bose[436236]:     "0": [
Nov 25 17:33:27 compute-0 nervous_bose[436236]:         {
Nov 25 17:33:27 compute-0 nervous_bose[436236]:             "devices": [
Nov 25 17:33:27 compute-0 nervous_bose[436236]:                 "/dev/loop3"
Nov 25 17:33:27 compute-0 nervous_bose[436236]:             ],
Nov 25 17:33:27 compute-0 nervous_bose[436236]:             "lv_name": "ceph_lv0",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:             "lv_size": "21470642176",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:             "name": "ceph_lv0",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:             "tags": {
Nov 25 17:33:27 compute-0 nervous_bose[436236]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:                 "ceph.cluster_name": "ceph",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:                 "ceph.crush_device_class": "",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:                 "ceph.encrypted": "0",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:                 "ceph.osd_id": "0",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:                 "ceph.type": "block",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:                 "ceph.vdo": "0"
Nov 25 17:33:27 compute-0 nervous_bose[436236]:             },
Nov 25 17:33:27 compute-0 nervous_bose[436236]:             "type": "block",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:             "vg_name": "ceph_vg0"
Nov 25 17:33:27 compute-0 nervous_bose[436236]:         }
Nov 25 17:33:27 compute-0 nervous_bose[436236]:     ],
Nov 25 17:33:27 compute-0 nervous_bose[436236]:     "1": [
Nov 25 17:33:27 compute-0 nervous_bose[436236]:         {
Nov 25 17:33:27 compute-0 nervous_bose[436236]:             "devices": [
Nov 25 17:33:27 compute-0 nervous_bose[436236]:                 "/dev/loop4"
Nov 25 17:33:27 compute-0 nervous_bose[436236]:             ],
Nov 25 17:33:27 compute-0 nervous_bose[436236]:             "lv_name": "ceph_lv1",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:             "lv_size": "21470642176",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:             "name": "ceph_lv1",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:             "tags": {
Nov 25 17:33:27 compute-0 nervous_bose[436236]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:                 "ceph.cluster_name": "ceph",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:                 "ceph.crush_device_class": "",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:                 "ceph.encrypted": "0",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:                 "ceph.osd_id": "1",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:                 "ceph.type": "block",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:                 "ceph.vdo": "0"
Nov 25 17:33:27 compute-0 nervous_bose[436236]:             },
Nov 25 17:33:27 compute-0 nervous_bose[436236]:             "type": "block",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:             "vg_name": "ceph_vg1"
Nov 25 17:33:27 compute-0 nervous_bose[436236]:         }
Nov 25 17:33:27 compute-0 nervous_bose[436236]:     ],
Nov 25 17:33:27 compute-0 nervous_bose[436236]:     "2": [
Nov 25 17:33:27 compute-0 nervous_bose[436236]:         {
Nov 25 17:33:27 compute-0 nervous_bose[436236]:             "devices": [
Nov 25 17:33:27 compute-0 nervous_bose[436236]:                 "/dev/loop5"
Nov 25 17:33:27 compute-0 nervous_bose[436236]:             ],
Nov 25 17:33:27 compute-0 nervous_bose[436236]:             "lv_name": "ceph_lv2",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:             "lv_size": "21470642176",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:             "name": "ceph_lv2",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:             "tags": {
Nov 25 17:33:27 compute-0 nervous_bose[436236]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:                 "ceph.cluster_name": "ceph",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:                 "ceph.crush_device_class": "",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:                 "ceph.encrypted": "0",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:                 "ceph.osd_id": "2",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:                 "ceph.type": "block",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:                 "ceph.vdo": "0"
Nov 25 17:33:27 compute-0 nervous_bose[436236]:             },
Nov 25 17:33:27 compute-0 nervous_bose[436236]:             "type": "block",
Nov 25 17:33:27 compute-0 nervous_bose[436236]:             "vg_name": "ceph_vg2"
Nov 25 17:33:27 compute-0 nervous_bose[436236]:         }
Nov 25 17:33:27 compute-0 nervous_bose[436236]:     ]
Nov 25 17:33:27 compute-0 nervous_bose[436236]: }
Nov 25 17:33:28 compute-0 systemd[1]: libpod-893e5d0ab5e3a8022e01dd0cebfbe6ca467f92e496066899a91219aaa163c8ac.scope: Deactivated successfully.
Nov 25 17:33:28 compute-0 podman[436220]: 2025-11-25 17:33:28.027785213 +0000 UTC m=+0.965979935 container died 893e5d0ab5e3a8022e01dd0cebfbe6ca467f92e496066899a91219aaa163c8ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_bose, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 17:33:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-c04af2aa501a1d3fd24ede200654a2500bcdca36b8dc708a63d032dd2a290798-merged.mount: Deactivated successfully.
Nov 25 17:33:28 compute-0 podman[436220]: 2025-11-25 17:33:28.086077569 +0000 UTC m=+1.024272281 container remove 893e5d0ab5e3a8022e01dd0cebfbe6ca467f92e496066899a91219aaa163c8ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_bose, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:33:28 compute-0 systemd[1]: libpod-conmon-893e5d0ab5e3a8022e01dd0cebfbe6ca467f92e496066899a91219aaa163c8ac.scope: Deactivated successfully.
Nov 25 17:33:28 compute-0 sudo[436112]: pam_unix(sudo:session): session closed for user root
Nov 25 17:33:28 compute-0 sudo[436259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:33:28 compute-0 sudo[436259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:33:28 compute-0 sudo[436259]: pam_unix(sudo:session): session closed for user root
Nov 25 17:33:28 compute-0 sudo[436284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:33:28 compute-0 sudo[436284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:33:28 compute-0 sudo[436284]: pam_unix(sudo:session): session closed for user root
Nov 25 17:33:28 compute-0 sudo[436309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:33:28 compute-0 sudo[436309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:33:28 compute-0 sudo[436309]: pam_unix(sudo:session): session closed for user root
Nov 25 17:33:28 compute-0 sudo[436335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:33:28 compute-0 sudo[436335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:33:28 compute-0 ceph-mon[74985]: pgmap v3316: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:28 compute-0 podman[436402]: 2025-11-25 17:33:28.79956394 +0000 UTC m=+0.042145868 container create b8c962e85a15e04a2fdb627881f81f02f9b2345443f807a56842e1bc9720e04b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_benz, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:33:28 compute-0 systemd[1]: Started libpod-conmon-b8c962e85a15e04a2fdb627881f81f02f9b2345443f807a56842e1bc9720e04b.scope.
Nov 25 17:33:28 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:33:28 compute-0 podman[436402]: 2025-11-25 17:33:28.784086489 +0000 UTC m=+0.026668397 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:33:28 compute-0 podman[436402]: 2025-11-25 17:33:28.888905832 +0000 UTC m=+0.131487820 container init b8c962e85a15e04a2fdb627881f81f02f9b2345443f807a56842e1bc9720e04b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 17:33:28 compute-0 podman[436402]: 2025-11-25 17:33:28.898740269 +0000 UTC m=+0.141322217 container start b8c962e85a15e04a2fdb627881f81f02f9b2345443f807a56842e1bc9720e04b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_benz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:33:28 compute-0 podman[436402]: 2025-11-25 17:33:28.902834481 +0000 UTC m=+0.145416389 container attach b8c962e85a15e04a2fdb627881f81f02f9b2345443f807a56842e1bc9720e04b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_benz, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 17:33:28 compute-0 unruffled_benz[436418]: 167 167
Nov 25 17:33:28 compute-0 systemd[1]: libpod-b8c962e85a15e04a2fdb627881f81f02f9b2345443f807a56842e1bc9720e04b.scope: Deactivated successfully.
Nov 25 17:33:28 compute-0 podman[436402]: 2025-11-25 17:33:28.906134461 +0000 UTC m=+0.148716379 container died b8c962e85a15e04a2fdb627881f81f02f9b2345443f807a56842e1bc9720e04b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 17:33:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a709fbf8284229747b81f75a75b01fd79cb0d91f6afc66564323c7f5ece038f-merged.mount: Deactivated successfully.
Nov 25 17:33:28 compute-0 podman[436402]: 2025-11-25 17:33:28.945463701 +0000 UTC m=+0.188045619 container remove b8c962e85a15e04a2fdb627881f81f02f9b2345443f807a56842e1bc9720e04b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_benz, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:33:28 compute-0 systemd[1]: libpod-conmon-b8c962e85a15e04a2fdb627881f81f02f9b2345443f807a56842e1bc9720e04b.scope: Deactivated successfully.
Nov 25 17:33:29 compute-0 podman[436442]: 2025-11-25 17:33:29.185593907 +0000 UTC m=+0.070724076 container create 85d784858d442f4afdb359ea704c806a3631dbdf0f80393811fa27ade3a2703c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 17:33:29 compute-0 systemd[1]: Started libpod-conmon-85d784858d442f4afdb359ea704c806a3631dbdf0f80393811fa27ade3a2703c.scope.
Nov 25 17:33:29 compute-0 podman[436442]: 2025-11-25 17:33:29.16217554 +0000 UTC m=+0.047305679 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:33:29 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:33:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6664f5343d5d88d955e5641c532a0bc0041c368fff5839ec3ad53582592f5548/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:33:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6664f5343d5d88d955e5641c532a0bc0041c368fff5839ec3ad53582592f5548/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:33:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6664f5343d5d88d955e5641c532a0bc0041c368fff5839ec3ad53582592f5548/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:33:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6664f5343d5d88d955e5641c532a0bc0041c368fff5839ec3ad53582592f5548/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:33:29 compute-0 podman[436442]: 2025-11-25 17:33:29.300620449 +0000 UTC m=+0.185750608 container init 85d784858d442f4afdb359ea704c806a3631dbdf0f80393811fa27ade3a2703c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 17:33:29 compute-0 podman[436442]: 2025-11-25 17:33:29.316932032 +0000 UTC m=+0.202062201 container start 85d784858d442f4afdb359ea704c806a3631dbdf0f80393811fa27ade3a2703c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mclaren, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:33:29 compute-0 podman[436442]: 2025-11-25 17:33:29.321533778 +0000 UTC m=+0.206663937 container attach 85d784858d442f4afdb359ea704c806a3631dbdf0f80393811fa27ade3a2703c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 17:33:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3317: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:33:30 compute-0 nova_compute[254092]: 2025-11-25 17:33:30.239 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:33:30 compute-0 vibrant_mclaren[436458]: {
Nov 25 17:33:30 compute-0 vibrant_mclaren[436458]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:33:30 compute-0 vibrant_mclaren[436458]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:33:30 compute-0 vibrant_mclaren[436458]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:33:30 compute-0 vibrant_mclaren[436458]:         "osd_id": 1,
Nov 25 17:33:30 compute-0 vibrant_mclaren[436458]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:33:30 compute-0 vibrant_mclaren[436458]:         "type": "bluestore"
Nov 25 17:33:30 compute-0 vibrant_mclaren[436458]:     },
Nov 25 17:33:30 compute-0 vibrant_mclaren[436458]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:33:30 compute-0 vibrant_mclaren[436458]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:33:30 compute-0 vibrant_mclaren[436458]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:33:30 compute-0 vibrant_mclaren[436458]:         "osd_id": 2,
Nov 25 17:33:30 compute-0 vibrant_mclaren[436458]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:33:30 compute-0 vibrant_mclaren[436458]:         "type": "bluestore"
Nov 25 17:33:30 compute-0 vibrant_mclaren[436458]:     },
Nov 25 17:33:30 compute-0 vibrant_mclaren[436458]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:33:30 compute-0 vibrant_mclaren[436458]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:33:30 compute-0 vibrant_mclaren[436458]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:33:30 compute-0 vibrant_mclaren[436458]:         "osd_id": 0,
Nov 25 17:33:30 compute-0 vibrant_mclaren[436458]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:33:30 compute-0 vibrant_mclaren[436458]:         "type": "bluestore"
Nov 25 17:33:30 compute-0 vibrant_mclaren[436458]:     }
Nov 25 17:33:30 compute-0 vibrant_mclaren[436458]: }
Nov 25 17:33:30 compute-0 systemd[1]: libpod-85d784858d442f4afdb359ea704c806a3631dbdf0f80393811fa27ade3a2703c.scope: Deactivated successfully.
Nov 25 17:33:30 compute-0 systemd[1]: libpod-85d784858d442f4afdb359ea704c806a3631dbdf0f80393811fa27ade3a2703c.scope: Consumed 1.034s CPU time.
Nov 25 17:33:30 compute-0 podman[436442]: 2025-11-25 17:33:30.335013874 +0000 UTC m=+1.220144003 container died 85d784858d442f4afdb359ea704c806a3631dbdf0f80393811fa27ade3a2703c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mclaren, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 17:33:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-6664f5343d5d88d955e5641c532a0bc0041c368fff5839ec3ad53582592f5548-merged.mount: Deactivated successfully.
Nov 25 17:33:30 compute-0 podman[436442]: 2025-11-25 17:33:30.42783001 +0000 UTC m=+1.312960149 container remove 85d784858d442f4afdb359ea704c806a3631dbdf0f80393811fa27ade3a2703c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 17:33:30 compute-0 systemd[1]: libpod-conmon-85d784858d442f4afdb359ea704c806a3631dbdf0f80393811fa27ade3a2703c.scope: Deactivated successfully.
Nov 25 17:33:30 compute-0 podman[436500]: 2025-11-25 17:33:30.48771896 +0000 UTC m=+0.103498048 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 25 17:33:30 compute-0 podman[436492]: 2025-11-25 17:33:30.489460088 +0000 UTC m=+0.103726805 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 17:33:30 compute-0 sudo[436335]: pam_unix(sudo:session): session closed for user root
Nov 25 17:33:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:33:30 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:33:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:33:30 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:33:30 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 65176985-d690-4942-8598-3cc82828e14d does not exist
Nov 25 17:33:30 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 34efc1e6-d172-40f0-873d-38896587622a does not exist
Nov 25 17:33:30 compute-0 podman[436501]: 2025-11-25 17:33:30.533348373 +0000 UTC m=+0.138203744 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 17:33:30 compute-0 sudo[436567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:33:30 compute-0 sudo[436567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:33:30 compute-0 sudo[436567]: pam_unix(sudo:session): session closed for user root
Nov 25 17:33:30 compute-0 sudo[436592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:33:30 compute-0 sudo[436592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:33:30 compute-0 sudo[436592]: pam_unix(sudo:session): session closed for user root
Nov 25 17:33:30 compute-0 ceph-mon[74985]: pgmap v3317: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:30 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:33:30 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:33:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3318: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:32 compute-0 ceph-mon[74985]: pgmap v3318: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:32 compute-0 nova_compute[254092]: 2025-11-25 17:33:32.784 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:33:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3319: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:33:34 compute-0 ceph-mon[74985]: pgmap v3319: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:35 compute-0 nova_compute[254092]: 2025-11-25 17:33:35.242 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:33:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3320: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:36 compute-0 ceph-mon[74985]: pgmap v3320: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3321: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:37 compute-0 nova_compute[254092]: 2025-11-25 17:33:37.786 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:33:38 compute-0 ceph-mon[74985]: pgmap v3321: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3322: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:33:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:33:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:33:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:33:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:33:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:33:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:33:40 compute-0 nova_compute[254092]: 2025-11-25 17:33:40.245 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:33:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:33:40
Nov 25 17:33:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:33:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:33:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', 'vms', '.mgr', 'images', 'default.rgw.log', 'volumes', '.rgw.root', 'backups', 'cephfs.cephfs.data']
Nov 25 17:33:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:33:40 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 17:33:40 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 17:33:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:33:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:33:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:33:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:33:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:33:40 compute-0 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 17:33:40 compute-0 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.2 total, 600.0 interval
                                           Cumulative writes: 44K writes, 181K keys, 44K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s
                                           Cumulative WAL: 44K writes, 16K syncs, 2.80 writes per sync, written: 0.17 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2211 writes, 8310 keys, 2211 commit groups, 1.0 writes per commit group, ingest: 8.47 MB, 0.01 MB/s
                                           Interval WAL: 2211 writes, 889 syncs, 2.49 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f6487b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f6487b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f6487b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f6487b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f6487b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f6487b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f6487b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f6487b090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f6487b090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f6487b090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f6487b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563f6487b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 17:33:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:33:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:33:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:33:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:33:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:33:40 compute-0 ceph-mon[74985]: pgmap v3322: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3323: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:41 compute-0 nova_compute[254092]: 2025-11-25 17:33:41.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:33:41 compute-0 nova_compute[254092]: 2025-11-25 17:33:41.496 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:33:41 compute-0 nova_compute[254092]: 2025-11-25 17:33:41.497 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:33:41 compute-0 nova_compute[254092]: 2025-11-25 17:33:41.498 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:33:41 compute-0 nova_compute[254092]: 2025-11-25 17:33:41.498 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:33:41 compute-0 nova_compute[254092]: 2025-11-25 17:33:41.498 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:33:41 compute-0 nova_compute[254092]: 2025-11-25 17:33:41.498 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:33:41 compute-0 nova_compute[254092]: 2025-11-25 17:33:41.510 254096 DEBUG nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Nov 25 17:33:41 compute-0 nova_compute[254092]: 2025-11-25 17:33:41.516 254096 DEBUG nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Nov 25 17:33:41 compute-0 nova_compute[254092]: 2025-11-25 17:33:41.516 254096 WARNING nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169
Nov 25 17:33:41 compute-0 nova_compute[254092]: 2025-11-25 17:33:41.517 254096 WARNING nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6
Nov 25 17:33:41 compute-0 nova_compute[254092]: 2025-11-25 17:33:41.517 254096 INFO nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Removable base files: /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6
Nov 25 17:33:41 compute-0 nova_compute[254092]: 2025-11-25 17:33:41.517 254096 INFO nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169
Nov 25 17:33:41 compute-0 nova_compute[254092]: 2025-11-25 17:33:41.517 254096 INFO nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6
Nov 25 17:33:41 compute-0 nova_compute[254092]: 2025-11-25 17:33:41.517 254096 DEBUG nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Nov 25 17:33:41 compute-0 nova_compute[254092]: 2025-11-25 17:33:41.517 254096 DEBUG nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Nov 25 17:33:41 compute-0 nova_compute[254092]: 2025-11-25 17:33:41.518 254096 DEBUG nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Nov 25 17:33:41 compute-0 nova_compute[254092]: 2025-11-25 17:33:41.518 254096 INFO nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Nov 25 17:33:41 compute-0 ceph-mon[74985]: pgmap v3323: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:42 compute-0 nova_compute[254092]: 2025-11-25 17:33:42.788 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:33:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3324: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:44 compute-0 ceph-mon[74985]: pgmap v3324: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:33:45 compute-0 nova_compute[254092]: 2025-11-25 17:33:45.248 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:33:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3325: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:46 compute-0 ceph-mon[74985]: pgmap v3325: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3326: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:47 compute-0 nova_compute[254092]: 2025-11-25 17:33:47.790 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:33:48 compute-0 ceph-mon[74985]: pgmap v3326: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3327: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:33:49 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #159. Immutable memtables: 0.
Nov 25 17:33:49 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:33:49.658005) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 17:33:49 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 159
Nov 25 17:33:49 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092029658045, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 869, "num_deletes": 251, "total_data_size": 1185047, "memory_usage": 1205936, "flush_reason": "Manual Compaction"}
Nov 25 17:33:49 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #160: started
Nov 25 17:33:49 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092029664082, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 160, "file_size": 727050, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 68260, "largest_seqno": 69128, "table_properties": {"data_size": 723532, "index_size": 1297, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 9378, "raw_average_key_size": 20, "raw_value_size": 715999, "raw_average_value_size": 1573, "num_data_blocks": 59, "num_entries": 455, "num_filter_entries": 455, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764091950, "oldest_key_time": 1764091950, "file_creation_time": 1764092029, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 160, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:33:49 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 6109 microseconds, and 2682 cpu microseconds.
Nov 25 17:33:49 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:33:49 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:33:49.664117) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #160: 727050 bytes OK
Nov 25 17:33:49 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:33:49.664132) [db/memtable_list.cc:519] [default] Level-0 commit table #160 started
Nov 25 17:33:49 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:33:49.665963) [db/memtable_list.cc:722] [default] Level-0 commit table #160: memtable #1 done
Nov 25 17:33:49 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:33:49.665977) EVENT_LOG_v1 {"time_micros": 1764092029665972, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 17:33:49 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:33:49.665993) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 17:33:49 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 1180801, prev total WAL file size 1180801, number of live WAL files 2.
Nov 25 17:33:49 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000156.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:33:49 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:33:49.666692) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373630' seq:72057594037927935, type:22 .. '6D6772737461740033303132' seq:0, type:0; will stop at (end)
Nov 25 17:33:49 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 17:33:49 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [160(710KB)], [158(10MB)]
Nov 25 17:33:49 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092029666775, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [160], "files_L6": [158], "score": -1, "input_data_size": 12126115, "oldest_snapshot_seqno": -1}
Nov 25 17:33:49 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #161: 8665 keys, 9259099 bytes, temperature: kUnknown
Nov 25 17:33:49 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092029728250, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 161, "file_size": 9259099, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9206149, "index_size": 30181, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21701, "raw_key_size": 227748, "raw_average_key_size": 26, "raw_value_size": 9056441, "raw_average_value_size": 1045, "num_data_blocks": 1165, "num_entries": 8665, "num_filter_entries": 8665, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764092029, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:33:49 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:33:49 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:33:49.728482) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 9259099 bytes
Nov 25 17:33:49 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:33:49.732975) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 197.1 rd, 150.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 10.9 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(29.4) write-amplify(12.7) OK, records in: 9145, records dropped: 480 output_compression: NoCompression
Nov 25 17:33:49 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:33:49.732991) EVENT_LOG_v1 {"time_micros": 1764092029732983, "job": 98, "event": "compaction_finished", "compaction_time_micros": 61536, "compaction_time_cpu_micros": 40591, "output_level": 6, "num_output_files": 1, "total_output_size": 9259099, "num_input_records": 9145, "num_output_records": 8665, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 17:33:49 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000160.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:33:49 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092029733937, "job": 98, "event": "table_file_deletion", "file_number": 160}
Nov 25 17:33:49 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:33:49 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092029735860, "job": 98, "event": "table_file_deletion", "file_number": 158}
Nov 25 17:33:49 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:33:49.666498) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:33:49 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:33:49.736050) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:33:49 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:33:49.736054) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:33:49 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:33:49.736056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:33:49 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:33:49.736057) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:33:49 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:33:49.736059) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:33:50 compute-0 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 17:33:50 compute-0 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6001.2 total, 600.0 interval
                                           Cumulative writes: 46K writes, 184K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 46K writes, 16K syncs, 2.80 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2173 writes, 8480 keys, 2173 commit groups, 1.0 writes per commit group, ingest: 7.93 MB, 0.01 MB/s
                                           Interval WAL: 2173 writes, 862 syncs, 2.52 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.16              0.00         1    0.155       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.16              0.00         1    0.155       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.16              0.00         1    0.155       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5618da7511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5618da7511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5618da7511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5618da7511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.19              0.00         1    0.193       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.19              0.00         1    0.193       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.19              0.00         1    0.193       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5618da7511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5618da7511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5618da7511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5618da751090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5618da751090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.20              0.00         1    0.200       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.20              0.00         1    0.200       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.20              0.00         1    0.200       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5618da751090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.10              0.00         1    0.100       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.10              0.00         1    0.100       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.10              0.00         1    0.100       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5618da7511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5618da7511f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 17:33:50 compute-0 nova_compute[254092]: 2025-11-25 17:33:50.299 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:33:50 compute-0 ceph-mon[74985]: pgmap v3327: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3328: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:51 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:33:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:33:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:33:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:33:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 17:33:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:33:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:33:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:33:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:33:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:33:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.4513495474376506e-07 of space, bias 1.0, pg target 0.00013354048642312953 quantized to 32 (current 32)
Nov 25 17:33:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:33:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:33:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:33:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:33:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:33:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:33:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:33:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:33:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:33:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:33:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:33:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:33:52 compute-0 ceph-mon[74985]: pgmap v3328: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:52 compute-0 nova_compute[254092]: 2025-11-25 17:33:52.792 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:33:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3329: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:33:54 compute-0 ceph-mon[74985]: pgmap v3329: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:55 compute-0 nova_compute[254092]: 2025-11-25 17:33:55.301 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:33:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:33:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1635799409' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:33:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:33:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1635799409' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:33:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3330: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:55 compute-0 nova_compute[254092]: 2025-11-25 17:33:55.518 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:33:55 compute-0 nova_compute[254092]: 2025-11-25 17:33:55.518 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:33:55 compute-0 nova_compute[254092]: 2025-11-25 17:33:55.547 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:33:55 compute-0 nova_compute[254092]: 2025-11-25 17:33:55.548 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:33:55 compute-0 nova_compute[254092]: 2025-11-25 17:33:55.548 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:33:55 compute-0 nova_compute[254092]: 2025-11-25 17:33:55.548 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:33:55 compute-0 nova_compute[254092]: 2025-11-25 17:33:55.548 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:33:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/1635799409' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:33:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/1635799409' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:33:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:33:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1453461597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:33:55 compute-0 nova_compute[254092]: 2025-11-25 17:33:55.968 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:33:56 compute-0 nova_compute[254092]: 2025-11-25 17:33:56.123 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:33:56 compute-0 nova_compute[254092]: 2025-11-25 17:33:56.125 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3618MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:33:56 compute-0 nova_compute[254092]: 2025-11-25 17:33:56.125 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:33:56 compute-0 nova_compute[254092]: 2025-11-25 17:33:56.126 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:33:56 compute-0 nova_compute[254092]: 2025-11-25 17:33:56.219 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:33:56 compute-0 nova_compute[254092]: 2025-11-25 17:33:56.220 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:33:56 compute-0 nova_compute[254092]: 2025-11-25 17:33:56.250 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:33:56 compute-0 ceph-mon[74985]: pgmap v3330: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1453461597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:33:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:33:56 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2179969112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:33:56 compute-0 nova_compute[254092]: 2025-11-25 17:33:56.732 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:33:56 compute-0 nova_compute[254092]: 2025-11-25 17:33:56.738 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:33:56 compute-0 nova_compute[254092]: 2025-11-25 17:33:56.751 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:33:56 compute-0 nova_compute[254092]: 2025-11-25 17:33:56.752 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:33:56 compute-0 nova_compute[254092]: 2025-11-25 17:33:56.753 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:33:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3331: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:57 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2179969112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:33:57 compute-0 nova_compute[254092]: 2025-11-25 17:33:57.731 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:33:57 compute-0 nova_compute[254092]: 2025-11-25 17:33:57.731 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:33:57 compute-0 nova_compute[254092]: 2025-11-25 17:33:57.794 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:33:58 compute-0 ceph-mon[74985]: pgmap v3331: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3332: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:33:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:34:00 compute-0 nova_compute[254092]: 2025-11-25 17:34:00.304 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:34:00 compute-0 podman[436664]: 2025-11-25 17:34:00.671935139 +0000 UTC m=+0.072732821 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 25 17:34:00 compute-0 podman[436663]: 2025-11-25 17:34:00.680514432 +0000 UTC m=+0.084496581 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 17:34:00 compute-0 ceph-mon[74985]: pgmap v3332: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:34:00 compute-0 podman[436665]: 2025-11-25 17:34:00.747944338 +0000 UTC m=+0.140376952 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 17:34:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3333: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:34:01 compute-0 nova_compute[254092]: 2025-11-25 17:34:01.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:34:01 compute-0 nova_compute[254092]: 2025-11-25 17:34:01.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:34:02 compute-0 ceph-mon[74985]: pgmap v3333: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:34:02 compute-0 nova_compute[254092]: 2025-11-25 17:34:02.795 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:34:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3334: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:34:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e303 do_prune osdmap full prune enabled
Nov 25 17:34:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e304 e304: 3 total, 3 up, 3 in
Nov 25 17:34:03 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e304: 3 total, 3 up, 3 in
Nov 25 17:34:04 compute-0 nova_compute[254092]: 2025-11-25 17:34:04.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:34:04 compute-0 nova_compute[254092]: 2025-11-25 17:34:04.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:34:04 compute-0 nova_compute[254092]: 2025-11-25 17:34:04.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:34:04 compute-0 nova_compute[254092]: 2025-11-25 17:34:04.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:34:04 compute-0 nova_compute[254092]: 2025-11-25 17:34:04.516 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 17:34:04 compute-0 nova_compute[254092]: 2025-11-25 17:34:04.517 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:34:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:34:04 compute-0 ceph-mon[74985]: pgmap v3334: 321 pgs: 321 active+clean; 457 KiB data, 978 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:34:04 compute-0 ceph-mon[74985]: osdmap e304: 3 total, 3 up, 3 in
Nov 25 17:34:05 compute-0 nova_compute[254092]: 2025-11-25 17:34:05.306 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:34:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3336: 321 pgs: 321 active+clean; 29 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 6.2 KiB/s rd, 2.8 MiB/s wr, 11 op/s
Nov 25 17:34:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e304 do_prune osdmap full prune enabled
Nov 25 17:34:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e305 e305: 3 total, 3 up, 3 in
Nov 25 17:34:05 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e305: 3 total, 3 up, 3 in
Nov 25 17:34:06 compute-0 ceph-mon[74985]: pgmap v3336: 321 pgs: 321 active+clean; 29 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 6.2 KiB/s rd, 2.8 MiB/s wr, 11 op/s
Nov 25 17:34:06 compute-0 ceph-mon[74985]: osdmap e305: 3 total, 3 up, 3 in
Nov 25 17:34:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3338: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 5.1 MiB/s wr, 46 op/s
Nov 25 17:34:07 compute-0 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 17:34:07 compute-0 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6001.5 total, 600.0 interval
                                           Cumulative writes: 39K writes, 154K keys, 39K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s
                                           Cumulative WAL: 39K writes, 14K syncs, 2.78 writes per sync, written: 0.15 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1570 writes, 5388 keys, 1570 commit groups, 1.0 writes per commit group, ingest: 4.15 MB, 0.01 MB/s
                                           Interval WAL: 1570 writes, 670 syncs, 2.34 writes per sync, written: 0.00 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.10              0.00         1    0.097       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.10              0.00         1    0.097       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.10              0.00         1    0.097       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55750a5b2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55750a5b2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55750a5b2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55750a5b2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55750a5b2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55750a5b2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55750a5b2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55750a5b2430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55750a5b2430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.12              0.00         1    0.118       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.12              0.00         1    0.118       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.12              0.00         1    0.118       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55750a5b2430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.15              0.00         1    0.146       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.15              0.00         1    0.146       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.15              0.00         1    0.146       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55750a5b2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6001.5 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55750a5b2dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 17:34:07 compute-0 nova_compute[254092]: 2025-11-25 17:34:07.799 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:34:08 compute-0 nova_compute[254092]: 2025-11-25 17:34:08.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:34:08 compute-0 nova_compute[254092]: 2025-11-25 17:34:08.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 17:34:08 compute-0 nova_compute[254092]: 2025-11-25 17:34:08.525 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 17:34:08 compute-0 ceph-mon[74985]: pgmap v3338: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 5.1 MiB/s wr, 46 op/s
Nov 25 17:34:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3339: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 5.1 MiB/s wr, 46 op/s
Nov 25 17:34:09 compute-0 nova_compute[254092]: 2025-11-25 17:34:09.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:34:09 compute-0 nova_compute[254092]: 2025-11-25 17:34:09.514 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:34:09 compute-0 nova_compute[254092]: 2025-11-25 17:34:09.514 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 17:34:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:34:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:34:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:34:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:34:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:34:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:34:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:34:10 compute-0 nova_compute[254092]: 2025-11-25 17:34:10.339 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:34:10 compute-0 ceph-mon[74985]: pgmap v3339: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 5.1 MiB/s wr, 46 op/s
Nov 25 17:34:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3340: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Nov 25 17:34:11 compute-0 nova_compute[254092]: 2025-11-25 17:34:11.509 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:34:12 compute-0 ceph-mon[74985]: pgmap v3340: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Nov 25 17:34:12 compute-0 nova_compute[254092]: 2025-11-25 17:34:12.800 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:34:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3341: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 4.2 MiB/s wr, 39 op/s
Nov 25 17:34:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:34:13.674 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:34:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:34:13.675 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:34:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:34:13.675 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:34:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:34:14 compute-0 ceph-mon[74985]: pgmap v3341: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 4.2 MiB/s wr, 39 op/s
Nov 25 17:34:15 compute-0 nova_compute[254092]: 2025-11-25 17:34:15.341 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:34:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3342: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 MiB/s wr, 26 op/s
Nov 25 17:34:15 compute-0 ceph-mon[74985]: pgmap v3342: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 MiB/s wr, 26 op/s
Nov 25 17:34:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3343: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.1 MiB/s wr, 22 op/s
Nov 25 17:34:17 compute-0 nova_compute[254092]: 2025-11-25 17:34:17.854 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:34:18 compute-0 ceph-mon[74985]: pgmap v3343: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.1 MiB/s wr, 22 op/s
Nov 25 17:34:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3344: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 426 B/s rd, 170 B/s wr, 0 op/s
Nov 25 17:34:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:34:20 compute-0 nova_compute[254092]: 2025-11-25 17:34:20.369 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:34:20 compute-0 ceph-mon[74985]: pgmap v3344: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 426 B/s rd, 170 B/s wr, 0 op/s
Nov 25 17:34:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3345: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 426 B/s rd, 170 B/s wr, 0 op/s
Nov 25 17:34:22 compute-0 ceph-mon[74985]: pgmap v3345: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 426 B/s rd, 170 B/s wr, 0 op/s
Nov 25 17:34:22 compute-0 nova_compute[254092]: 2025-11-25 17:34:22.856 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:34:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3346: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:34:24 compute-0 ceph-mon[74985]: pgmap v3346: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:34:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:34:25 compute-0 nova_compute[254092]: 2025-11-25 17:34:25.372 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:34:25 compute-0 ceph-mgr[75280]: [devicehealth INFO root] Check health
Nov 25 17:34:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3347: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:34:26 compute-0 ceph-mon[74985]: pgmap v3347: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:34:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3348: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:34:27 compute-0 nova_compute[254092]: 2025-11-25 17:34:27.859 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:34:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:34:27.872 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=67, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=66) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:34:27 compute-0 nova_compute[254092]: 2025-11-25 17:34:27.872 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:34:27 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:34:27.874 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 17:34:28 compute-0 ceph-mon[74985]: pgmap v3348: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:34:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3349: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:34:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:34:30 compute-0 nova_compute[254092]: 2025-11-25 17:34:30.375 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:34:30 compute-0 nova_compute[254092]: 2025-11-25 17:34:30.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:34:30 compute-0 ceph-mon[74985]: pgmap v3349: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:34:30 compute-0 sudo[436726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:34:30 compute-0 sudo[436726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:34:30 compute-0 sudo[436726]: pam_unix(sudo:session): session closed for user root
Nov 25 17:34:30 compute-0 sudo[436776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:34:30 compute-0 podman[436751]: 2025-11-25 17:34:30.891887384 +0000 UTC m=+0.081486040 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 25 17:34:30 compute-0 sudo[436776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:34:30 compute-0 sudo[436776]: pam_unix(sudo:session): session closed for user root
Nov 25 17:34:30 compute-0 podman[436750]: 2025-11-25 17:34:30.900885759 +0000 UTC m=+0.097569677 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 17:34:30 compute-0 sudo[436832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:34:30 compute-0 sudo[436832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:34:30 compute-0 sudo[436832]: pam_unix(sudo:session): session closed for user root
Nov 25 17:34:30 compute-0 podman[436752]: 2025-11-25 17:34:30.963927495 +0000 UTC m=+0.142344096 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:34:31 compute-0 sudo[436863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:34:31 compute-0 sudo[436863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:34:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3350: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:34:31 compute-0 sudo[436863]: pam_unix(sudo:session): session closed for user root
Nov 25 17:34:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:34:31 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:34:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:34:31 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:34:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:34:31 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:34:31 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 1e95d0de-81f3-42b0-8efc-c863d48bc6fd does not exist
Nov 25 17:34:31 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev c2fe32bc-ce43-4859-850d-7325e2008fac does not exist
Nov 25 17:34:31 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 831910be-5c10-4f6d-af71-df1191a23dde does not exist
Nov 25 17:34:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:34:31 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:34:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:34:31 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:34:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:34:31 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:34:31 compute-0 sudo[436920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:34:31 compute-0 sudo[436920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:34:31 compute-0 sudo[436920]: pam_unix(sudo:session): session closed for user root
Nov 25 17:34:31 compute-0 sudo[436945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:34:31 compute-0 sudo[436945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:34:31 compute-0 sudo[436945]: pam_unix(sudo:session): session closed for user root
Nov 25 17:34:31 compute-0 sudo[436970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:34:31 compute-0 sudo[436970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:34:31 compute-0 sudo[436970]: pam_unix(sudo:session): session closed for user root
Nov 25 17:34:32 compute-0 sudo[436995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:34:32 compute-0 sudo[436995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:34:32 compute-0 podman[437059]: 2025-11-25 17:34:32.4623181 +0000 UTC m=+0.051063201 container create b1b8da9cfbaa16d4cba7d8e17212f0d0cfacc587e12f9cf4dab654adff72c0a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 17:34:32 compute-0 systemd[1]: Started libpod-conmon-b1b8da9cfbaa16d4cba7d8e17212f0d0cfacc587e12f9cf4dab654adff72c0a7.scope.
Nov 25 17:34:32 compute-0 podman[437059]: 2025-11-25 17:34:32.44212602 +0000 UTC m=+0.030871121 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:34:32 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:34:32 compute-0 podman[437059]: 2025-11-25 17:34:32.572588701 +0000 UTC m=+0.161333872 container init b1b8da9cfbaa16d4cba7d8e17212f0d0cfacc587e12f9cf4dab654adff72c0a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 17:34:32 compute-0 podman[437059]: 2025-11-25 17:34:32.586482829 +0000 UTC m=+0.175227940 container start b1b8da9cfbaa16d4cba7d8e17212f0d0cfacc587e12f9cf4dab654adff72c0a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 17:34:32 compute-0 podman[437059]: 2025-11-25 17:34:32.590670473 +0000 UTC m=+0.179415664 container attach b1b8da9cfbaa16d4cba7d8e17212f0d0cfacc587e12f9cf4dab654adff72c0a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_davinci, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 17:34:32 compute-0 admiring_davinci[437075]: 167 167
Nov 25 17:34:32 compute-0 systemd[1]: libpod-b1b8da9cfbaa16d4cba7d8e17212f0d0cfacc587e12f9cf4dab654adff72c0a7.scope: Deactivated successfully.
Nov 25 17:34:32 compute-0 podman[437059]: 2025-11-25 17:34:32.594300742 +0000 UTC m=+0.183045863 container died b1b8da9cfbaa16d4cba7d8e17212f0d0cfacc587e12f9cf4dab654adff72c0a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_davinci, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:34:32 compute-0 ceph-mon[74985]: pgmap v3350: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:34:32 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:34:32 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:34:32 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:34:32 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:34:32 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:34:32 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:34:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-d89cb80d1df7ff6bae566a4dc8fd3a0a2de032d54125b544289356d657e6c484-merged.mount: Deactivated successfully.
Nov 25 17:34:32 compute-0 podman[437059]: 2025-11-25 17:34:32.64749376 +0000 UTC m=+0.236238841 container remove b1b8da9cfbaa16d4cba7d8e17212f0d0cfacc587e12f9cf4dab654adff72c0a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 17:34:32 compute-0 systemd[1]: libpod-conmon-b1b8da9cfbaa16d4cba7d8e17212f0d0cfacc587e12f9cf4dab654adff72c0a7.scope: Deactivated successfully.
Nov 25 17:34:32 compute-0 podman[437099]: 2025-11-25 17:34:32.856845269 +0000 UTC m=+0.059811389 container create f9ee759260be6c3b0a6cc7baaab239e59965b29ce7dede1a79aeb3594cbd65a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_almeida, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 17:34:32 compute-0 nova_compute[254092]: 2025-11-25 17:34:32.894 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:34:32 compute-0 podman[437099]: 2025-11-25 17:34:32.82603264 +0000 UTC m=+0.028998770 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:34:32 compute-0 systemd[1]: Started libpod-conmon-f9ee759260be6c3b0a6cc7baaab239e59965b29ce7dede1a79aeb3594cbd65a0.scope.
Nov 25 17:34:32 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:34:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c98f2558897039ec497996c12108792734ac2377f35a0a860fcf30e5ded4cb6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:34:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c98f2558897039ec497996c12108792734ac2377f35a0a860fcf30e5ded4cb6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:34:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c98f2558897039ec497996c12108792734ac2377f35a0a860fcf30e5ded4cb6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:34:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c98f2558897039ec497996c12108792734ac2377f35a0a860fcf30e5ded4cb6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:34:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c98f2558897039ec497996c12108792734ac2377f35a0a860fcf30e5ded4cb6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:34:32 compute-0 podman[437099]: 2025-11-25 17:34:32.985544842 +0000 UTC m=+0.188511042 container init f9ee759260be6c3b0a6cc7baaab239e59965b29ce7dede1a79aeb3594cbd65a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 17:34:33 compute-0 podman[437099]: 2025-11-25 17:34:33.001498946 +0000 UTC m=+0.204465096 container start f9ee759260be6c3b0a6cc7baaab239e59965b29ce7dede1a79aeb3594cbd65a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_almeida, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Nov 25 17:34:33 compute-0 podman[437099]: 2025-11-25 17:34:33.005865665 +0000 UTC m=+0.208831785 container attach f9ee759260be6c3b0a6cc7baaab239e59965b29ce7dede1a79aeb3594cbd65a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 17:34:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3351: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:34:34 compute-0 boring_almeida[437116]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:34:34 compute-0 boring_almeida[437116]: --> relative data size: 1.0
Nov 25 17:34:34 compute-0 boring_almeida[437116]: --> All data devices are unavailable
Nov 25 17:34:34 compute-0 systemd[1]: libpod-f9ee759260be6c3b0a6cc7baaab239e59965b29ce7dede1a79aeb3594cbd65a0.scope: Deactivated successfully.
Nov 25 17:34:34 compute-0 systemd[1]: libpod-f9ee759260be6c3b0a6cc7baaab239e59965b29ce7dede1a79aeb3594cbd65a0.scope: Consumed 1.156s CPU time.
Nov 25 17:34:34 compute-0 podman[437099]: 2025-11-25 17:34:34.214590846 +0000 UTC m=+1.417556946 container died f9ee759260be6c3b0a6cc7baaab239e59965b29ce7dede1a79aeb3594cbd65a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_almeida, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:34:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c98f2558897039ec497996c12108792734ac2377f35a0a860fcf30e5ded4cb6-merged.mount: Deactivated successfully.
Nov 25 17:34:34 compute-0 podman[437099]: 2025-11-25 17:34:34.310751163 +0000 UTC m=+1.513717263 container remove f9ee759260be6c3b0a6cc7baaab239e59965b29ce7dede1a79aeb3594cbd65a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_almeida, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 17:34:34 compute-0 systemd[1]: libpod-conmon-f9ee759260be6c3b0a6cc7baaab239e59965b29ce7dede1a79aeb3594cbd65a0.scope: Deactivated successfully.
Nov 25 17:34:34 compute-0 sudo[436995]: pam_unix(sudo:session): session closed for user root
Nov 25 17:34:34 compute-0 sudo[437158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:34:34 compute-0 sudo[437158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:34:34 compute-0 sudo[437158]: pam_unix(sudo:session): session closed for user root
Nov 25 17:34:34 compute-0 sudo[437183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:34:34 compute-0 sudo[437183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:34:34 compute-0 sudo[437183]: pam_unix(sudo:session): session closed for user root
Nov 25 17:34:34 compute-0 ceph-mon[74985]: pgmap v3351: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:34:34 compute-0 sudo[437208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:34:34 compute-0 sudo[437208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:34:34 compute-0 sudo[437208]: pam_unix(sudo:session): session closed for user root
Nov 25 17:34:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:34:34 compute-0 sudo[437233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:34:34 compute-0 sudo[437233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:34:35 compute-0 podman[437301]: 2025-11-25 17:34:35.159246628 +0000 UTC m=+0.061882415 container create 7b7b09840edfed0c0ca9847b43ca66470c9ebf63e46a0709542b73feb8dfdf50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_thompson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:34:35 compute-0 systemd[1]: Started libpod-conmon-7b7b09840edfed0c0ca9847b43ca66470c9ebf63e46a0709542b73feb8dfdf50.scope.
Nov 25 17:34:35 compute-0 podman[437301]: 2025-11-25 17:34:35.139169902 +0000 UTC m=+0.041805699 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:34:35 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:34:35 compute-0 podman[437301]: 2025-11-25 17:34:35.276286144 +0000 UTC m=+0.178921951 container init 7b7b09840edfed0c0ca9847b43ca66470c9ebf63e46a0709542b73feb8dfdf50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_thompson, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 17:34:35 compute-0 podman[437301]: 2025-11-25 17:34:35.290112421 +0000 UTC m=+0.192748188 container start 7b7b09840edfed0c0ca9847b43ca66470c9ebf63e46a0709542b73feb8dfdf50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:34:35 compute-0 podman[437301]: 2025-11-25 17:34:35.3000437 +0000 UTC m=+0.202679587 container attach 7b7b09840edfed0c0ca9847b43ca66470c9ebf63e46a0709542b73feb8dfdf50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_thompson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 17:34:35 compute-0 flamboyant_thompson[437317]: 167 167
Nov 25 17:34:35 compute-0 systemd[1]: libpod-7b7b09840edfed0c0ca9847b43ca66470c9ebf63e46a0709542b73feb8dfdf50.scope: Deactivated successfully.
Nov 25 17:34:35 compute-0 podman[437301]: 2025-11-25 17:34:35.302542819 +0000 UTC m=+0.205178586 container died 7b7b09840edfed0c0ca9847b43ca66470c9ebf63e46a0709542b73feb8dfdf50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_thompson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 17:34:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-c62b5edf2f0834719a21b5da5b15356328648d8faea16d3efac7946075499ae7-merged.mount: Deactivated successfully.
Nov 25 17:34:35 compute-0 podman[437301]: 2025-11-25 17:34:35.351157611 +0000 UTC m=+0.253793408 container remove 7b7b09840edfed0c0ca9847b43ca66470c9ebf63e46a0709542b73feb8dfdf50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:34:35 compute-0 systemd[1]: libpod-conmon-7b7b09840edfed0c0ca9847b43ca66470c9ebf63e46a0709542b73feb8dfdf50.scope: Deactivated successfully.
Nov 25 17:34:35 compute-0 nova_compute[254092]: 2025-11-25 17:34:35.396 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:34:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3352: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 5.3 KiB/s rd, 511 B/s wr, 7 op/s
Nov 25 17:34:35 compute-0 podman[437341]: 2025-11-25 17:34:35.604131438 +0000 UTC m=+0.061561357 container create 5886020f1c8032560d657f2dca7f65f49650b3465084092e628c15eaa89b659e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_sutherland, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 17:34:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e305 do_prune osdmap full prune enabled
Nov 25 17:34:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e306 e306: 3 total, 3 up, 3 in
Nov 25 17:34:35 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e306: 3 total, 3 up, 3 in
Nov 25 17:34:35 compute-0 systemd[1]: Started libpod-conmon-5886020f1c8032560d657f2dca7f65f49650b3465084092e628c15eaa89b659e.scope.
Nov 25 17:34:35 compute-0 podman[437341]: 2025-11-25 17:34:35.58255682 +0000 UTC m=+0.039986689 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:34:35 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:34:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/091550d42135b8a5faceff83205d19709d3de6d1df2af3cca3001b54ee851368/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:34:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/091550d42135b8a5faceff83205d19709d3de6d1df2af3cca3001b54ee851368/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:34:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/091550d42135b8a5faceff83205d19709d3de6d1df2af3cca3001b54ee851368/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:34:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/091550d42135b8a5faceff83205d19709d3de6d1df2af3cca3001b54ee851368/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:34:35 compute-0 podman[437341]: 2025-11-25 17:34:35.722671244 +0000 UTC m=+0.180101143 container init 5886020f1c8032560d657f2dca7f65f49650b3465084092e628c15eaa89b659e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_sutherland, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:34:35 compute-0 podman[437341]: 2025-11-25 17:34:35.731884235 +0000 UTC m=+0.189314144 container start 5886020f1c8032560d657f2dca7f65f49650b3465084092e628c15eaa89b659e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_sutherland, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 17:34:35 compute-0 podman[437341]: 2025-11-25 17:34:35.735673228 +0000 UTC m=+0.193103117 container attach 5886020f1c8032560d657f2dca7f65f49650b3465084092e628c15eaa89b659e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_sutherland, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]: {
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:     "0": [
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:         {
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:             "devices": [
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:                 "/dev/loop3"
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:             ],
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:             "lv_name": "ceph_lv0",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:             "lv_size": "21470642176",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:             "name": "ceph_lv0",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:             "tags": {
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:                 "ceph.cluster_name": "ceph",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:                 "ceph.crush_device_class": "",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:                 "ceph.encrypted": "0",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:                 "ceph.osd_id": "0",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:                 "ceph.type": "block",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:                 "ceph.vdo": "0"
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:             },
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:             "type": "block",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:             "vg_name": "ceph_vg0"
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:         }
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:     ],
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:     "1": [
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:         {
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:             "devices": [
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:                 "/dev/loop4"
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:             ],
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:             "lv_name": "ceph_lv1",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:             "lv_size": "21470642176",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:             "name": "ceph_lv1",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:             "tags": {
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:                 "ceph.cluster_name": "ceph",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:                 "ceph.crush_device_class": "",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:                 "ceph.encrypted": "0",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:                 "ceph.osd_id": "1",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:                 "ceph.type": "block",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:                 "ceph.vdo": "0"
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:             },
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:             "type": "block",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:             "vg_name": "ceph_vg1"
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:         }
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:     ],
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:     "2": [
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:         {
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:             "devices": [
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:                 "/dev/loop5"
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:             ],
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:             "lv_name": "ceph_lv2",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:             "lv_size": "21470642176",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:             "name": "ceph_lv2",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:             "tags": {
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:                 "ceph.cluster_name": "ceph",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:                 "ceph.crush_device_class": "",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:                 "ceph.encrypted": "0",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:                 "ceph.osd_id": "2",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:                 "ceph.type": "block",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:                 "ceph.vdo": "0"
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:             },
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:             "type": "block",
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:             "vg_name": "ceph_vg2"
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:         }
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]:     ]
Nov 25 17:34:36 compute-0 naughty_sutherland[437357]: }
Nov 25 17:34:36 compute-0 systemd[1]: libpod-5886020f1c8032560d657f2dca7f65f49650b3465084092e628c15eaa89b659e.scope: Deactivated successfully.
Nov 25 17:34:36 compute-0 podman[437341]: 2025-11-25 17:34:36.540471494 +0000 UTC m=+0.997901383 container died 5886020f1c8032560d657f2dca7f65f49650b3465084092e628c15eaa89b659e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_sutherland, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 17:34:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-091550d42135b8a5faceff83205d19709d3de6d1df2af3cca3001b54ee851368-merged.mount: Deactivated successfully.
Nov 25 17:34:36 compute-0 podman[437341]: 2025-11-25 17:34:36.610919521 +0000 UTC m=+1.068349420 container remove 5886020f1c8032560d657f2dca7f65f49650b3465084092e628c15eaa89b659e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_sutherland, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 17:34:36 compute-0 systemd[1]: libpod-conmon-5886020f1c8032560d657f2dca7f65f49650b3465084092e628c15eaa89b659e.scope: Deactivated successfully.
Nov 25 17:34:36 compute-0 ceph-mon[74985]: pgmap v3352: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 5.3 KiB/s rd, 511 B/s wr, 7 op/s
Nov 25 17:34:36 compute-0 ceph-mon[74985]: osdmap e306: 3 total, 3 up, 3 in
Nov 25 17:34:36 compute-0 sudo[437233]: pam_unix(sudo:session): session closed for user root
Nov 25 17:34:36 compute-0 sudo[437379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:34:36 compute-0 sudo[437379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:34:36 compute-0 sudo[437379]: pam_unix(sudo:session): session closed for user root
Nov 25 17:34:36 compute-0 sudo[437404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:34:36 compute-0 sudo[437404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:34:36 compute-0 sudo[437404]: pam_unix(sudo:session): session closed for user root
Nov 25 17:34:36 compute-0 sudo[437429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:34:36 compute-0 sudo[437429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:34:36 compute-0 sudo[437429]: pam_unix(sudo:session): session closed for user root
Nov 25 17:34:37 compute-0 sudo[437454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:34:37 compute-0 sudo[437454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:34:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3354: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 6.9 KiB/s rd, 614 B/s wr, 9 op/s
Nov 25 17:34:37 compute-0 podman[437520]: 2025-11-25 17:34:37.5370441 +0000 UTC m=+0.070207412 container create ddff14649db7cc631522110af0694a0860f21bc56403ec83876e9ee473ae5d0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:34:37 compute-0 systemd[1]: Started libpod-conmon-ddff14649db7cc631522110af0694a0860f21bc56403ec83876e9ee473ae5d0b.scope.
Nov 25 17:34:37 compute-0 podman[437520]: 2025-11-25 17:34:37.509745587 +0000 UTC m=+0.042908949 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:34:37 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:34:37 compute-0 podman[437520]: 2025-11-25 17:34:37.660816369 +0000 UTC m=+0.193979741 container init ddff14649db7cc631522110af0694a0860f21bc56403ec83876e9ee473ae5d0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_northcutt, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 17:34:37 compute-0 podman[437520]: 2025-11-25 17:34:37.676520486 +0000 UTC m=+0.209683808 container start ddff14649db7cc631522110af0694a0860f21bc56403ec83876e9ee473ae5d0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_northcutt, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 17:34:37 compute-0 podman[437520]: 2025-11-25 17:34:37.681885602 +0000 UTC m=+0.215048974 container attach ddff14649db7cc631522110af0694a0860f21bc56403ec83876e9ee473ae5d0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_northcutt, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 17:34:37 compute-0 crazy_northcutt[437536]: 167 167
Nov 25 17:34:37 compute-0 systemd[1]: libpod-ddff14649db7cc631522110af0694a0860f21bc56403ec83876e9ee473ae5d0b.scope: Deactivated successfully.
Nov 25 17:34:37 compute-0 podman[437520]: 2025-11-25 17:34:37.687879135 +0000 UTC m=+0.221042467 container died ddff14649db7cc631522110af0694a0860f21bc56403ec83876e9ee473ae5d0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:34:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-905d3921c6acd4ecb271d3bdc8a824213cbc25a9f50473b3f3c9d98582920c95-merged.mount: Deactivated successfully.
Nov 25 17:34:37 compute-0 podman[437520]: 2025-11-25 17:34:37.749500752 +0000 UTC m=+0.282664074 container remove ddff14649db7cc631522110af0694a0860f21bc56403ec83876e9ee473ae5d0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_northcutt, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Nov 25 17:34:37 compute-0 systemd[1]: libpod-conmon-ddff14649db7cc631522110af0694a0860f21bc56403ec83876e9ee473ae5d0b.scope: Deactivated successfully.
Nov 25 17:34:37 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:34:37.876 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '67'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:34:37 compute-0 nova_compute[254092]: 2025-11-25 17:34:37.931 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:34:38 compute-0 podman[437561]: 2025-11-25 17:34:38.01134016 +0000 UTC m=+0.076351049 container create eb77077fb29e86d04fdb7e194de46e2a944a5cbfd8c84cf854bccbc60cafb12b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 17:34:38 compute-0 podman[437561]: 2025-11-25 17:34:37.981262031 +0000 UTC m=+0.046272990 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:34:38 compute-0 systemd[1]: Started libpod-conmon-eb77077fb29e86d04fdb7e194de46e2a944a5cbfd8c84cf854bccbc60cafb12b.scope.
Nov 25 17:34:38 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:34:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70f20e6e05f2eab8de54119a229c59505659d09829b0d3198b8c4ea639e6c965/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:34:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70f20e6e05f2eab8de54119a229c59505659d09829b0d3198b8c4ea639e6c965/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:34:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70f20e6e05f2eab8de54119a229c59505659d09829b0d3198b8c4ea639e6c965/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:34:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70f20e6e05f2eab8de54119a229c59505659d09829b0d3198b8c4ea639e6c965/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:34:38 compute-0 podman[437561]: 2025-11-25 17:34:38.140413443 +0000 UTC m=+0.205424312 container init eb77077fb29e86d04fdb7e194de46e2a944a5cbfd8c84cf854bccbc60cafb12b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:34:38 compute-0 podman[437561]: 2025-11-25 17:34:38.16196666 +0000 UTC m=+0.226977529 container start eb77077fb29e86d04fdb7e194de46e2a944a5cbfd8c84cf854bccbc60cafb12b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_liskov, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:34:38 compute-0 podman[437561]: 2025-11-25 17:34:38.166104942 +0000 UTC m=+0.231115811 container attach eb77077fb29e86d04fdb7e194de46e2a944a5cbfd8c84cf854bccbc60cafb12b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 17:34:38 compute-0 ceph-mon[74985]: pgmap v3354: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 6.9 KiB/s rd, 614 B/s wr, 9 op/s
Nov 25 17:34:39 compute-0 peaceful_liskov[437577]: {
Nov 25 17:34:39 compute-0 peaceful_liskov[437577]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:34:39 compute-0 peaceful_liskov[437577]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:34:39 compute-0 peaceful_liskov[437577]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:34:39 compute-0 peaceful_liskov[437577]:         "osd_id": 1,
Nov 25 17:34:39 compute-0 peaceful_liskov[437577]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:34:39 compute-0 peaceful_liskov[437577]:         "type": "bluestore"
Nov 25 17:34:39 compute-0 peaceful_liskov[437577]:     },
Nov 25 17:34:39 compute-0 peaceful_liskov[437577]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:34:39 compute-0 peaceful_liskov[437577]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:34:39 compute-0 peaceful_liskov[437577]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:34:39 compute-0 peaceful_liskov[437577]:         "osd_id": 2,
Nov 25 17:34:39 compute-0 peaceful_liskov[437577]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:34:39 compute-0 peaceful_liskov[437577]:         "type": "bluestore"
Nov 25 17:34:39 compute-0 peaceful_liskov[437577]:     },
Nov 25 17:34:39 compute-0 peaceful_liskov[437577]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:34:39 compute-0 peaceful_liskov[437577]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:34:39 compute-0 peaceful_liskov[437577]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:34:39 compute-0 peaceful_liskov[437577]:         "osd_id": 0,
Nov 25 17:34:39 compute-0 peaceful_liskov[437577]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:34:39 compute-0 peaceful_liskov[437577]:         "type": "bluestore"
Nov 25 17:34:39 compute-0 peaceful_liskov[437577]:     }
Nov 25 17:34:39 compute-0 peaceful_liskov[437577]: }
Nov 25 17:34:39 compute-0 systemd[1]: libpod-eb77077fb29e86d04fdb7e194de46e2a944a5cbfd8c84cf854bccbc60cafb12b.scope: Deactivated successfully.
Nov 25 17:34:39 compute-0 systemd[1]: libpod-eb77077fb29e86d04fdb7e194de46e2a944a5cbfd8c84cf854bccbc60cafb12b.scope: Consumed 1.251s CPU time.
Nov 25 17:34:39 compute-0 podman[437611]: 2025-11-25 17:34:39.490968144 +0000 UTC m=+0.052448009 container died eb77077fb29e86d04fdb7e194de46e2a944a5cbfd8c84cf854bccbc60cafb12b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_liskov, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 17:34:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3355: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 6.9 KiB/s rd, 614 B/s wr, 9 op/s
Nov 25 17:34:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-70f20e6e05f2eab8de54119a229c59505659d09829b0d3198b8c4ea639e6c965-merged.mount: Deactivated successfully.
Nov 25 17:34:39 compute-0 podman[437611]: 2025-11-25 17:34:39.574521349 +0000 UTC m=+0.136001184 container remove eb77077fb29e86d04fdb7e194de46e2a944a5cbfd8c84cf854bccbc60cafb12b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_liskov, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 17:34:39 compute-0 systemd[1]: libpod-conmon-eb77077fb29e86d04fdb7e194de46e2a944a5cbfd8c84cf854bccbc60cafb12b.scope: Deactivated successfully.
Nov 25 17:34:39 compute-0 sudo[437454]: pam_unix(sudo:session): session closed for user root
Nov 25 17:34:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:34:39 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:34:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:34:39 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:34:39 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev d181ffa5-d3ef-4a0c-b27d-86b07695ca1a does not exist
Nov 25 17:34:39 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 8eb25c9f-0574-478d-9184-476883d5abce does not exist
Nov 25 17:34:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:34:39 compute-0 sudo[437626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:34:39 compute-0 sudo[437626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:34:39 compute-0 sudo[437626]: pam_unix(sudo:session): session closed for user root
Nov 25 17:34:39 compute-0 sudo[437651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:34:39 compute-0 sudo[437651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:34:39 compute-0 sudo[437651]: pam_unix(sudo:session): session closed for user root
Nov 25 17:34:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:34:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:34:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:34:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:34:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:34:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:34:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:34:40
Nov 25 17:34:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:34:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:34:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'vms', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'default.rgw.log', '.rgw.root', 'volumes', 'default.rgw.control']
Nov 25 17:34:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:34:40 compute-0 nova_compute[254092]: 2025-11-25 17:34:40.430 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:34:40 compute-0 ceph-mon[74985]: pgmap v3355: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 6.9 KiB/s rd, 614 B/s wr, 9 op/s
Nov 25 17:34:40 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:34:40 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:34:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:34:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:34:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:34:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:34:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:34:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:34:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:34:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:34:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:34:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:34:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3356: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Nov 25 17:34:42 compute-0 ceph-mon[74985]: pgmap v3356: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Nov 25 17:34:42 compute-0 nova_compute[254092]: 2025-11-25 17:34:42.963 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:34:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3357: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Nov 25 17:34:44 compute-0 ceph-mon[74985]: pgmap v3357: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Nov 25 17:34:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:34:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e306 do_prune osdmap full prune enabled
Nov 25 17:34:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 e307: 3 total, 3 up, 3 in
Nov 25 17:34:44 compute-0 ceph-mon[74985]: log_channel(cluster) log [DBG] : osdmap e307: 3 total, 3 up, 3 in
Nov 25 17:34:45 compute-0 nova_compute[254092]: 2025-11-25 17:34:45.461 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:34:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3359: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 829 B/s wr, 16 op/s
Nov 25 17:34:45 compute-0 ceph-mon[74985]: osdmap e307: 3 total, 3 up, 3 in
Nov 25 17:34:46 compute-0 ceph-mon[74985]: pgmap v3359: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 829 B/s wr, 16 op/s
Nov 25 17:34:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3360: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 818 B/s wr, 15 op/s
Nov 25 17:34:48 compute-0 nova_compute[254092]: 2025-11-25 17:34:48.012 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:34:48 compute-0 ceph-mon[74985]: pgmap v3360: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 818 B/s wr, 15 op/s
Nov 25 17:34:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3361: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 818 B/s wr, 15 op/s
Nov 25 17:34:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:34:50 compute-0 nova_compute[254092]: 2025-11-25 17:34:50.506 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:34:50 compute-0 ceph-mon[74985]: pgmap v3361: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 818 B/s wr, 15 op/s
Nov 25 17:34:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3362: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:34:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:34:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:34:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:34:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:34:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 17:34:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:34:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:34:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:34:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:34:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:34:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 17:34:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:34:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:34:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:34:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:34:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:34:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:34:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:34:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:34:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:34:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:34:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:34:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:34:52 compute-0 ceph-mon[74985]: pgmap v3362: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:34:53 compute-0 nova_compute[254092]: 2025-11-25 17:34:53.065 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:34:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3363: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:34:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:34:54 compute-0 ceph-mon[74985]: pgmap v3363: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:34:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:34:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2871584981' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:34:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:34:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2871584981' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:34:55 compute-0 nova_compute[254092]: 2025-11-25 17:34:55.509 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:34:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3364: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:34:55 compute-0 nova_compute[254092]: 2025-11-25 17:34:55.555 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:34:55 compute-0 nova_compute[254092]: 2025-11-25 17:34:55.577 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:34:55 compute-0 nova_compute[254092]: 2025-11-25 17:34:55.578 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:34:55 compute-0 nova_compute[254092]: 2025-11-25 17:34:55.578 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:34:55 compute-0 nova_compute[254092]: 2025-11-25 17:34:55.578 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:34:55 compute-0 nova_compute[254092]: 2025-11-25 17:34:55.578 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:34:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/2871584981' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:34:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/2871584981' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:34:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:34:56 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1394049784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:34:56 compute-0 nova_compute[254092]: 2025-11-25 17:34:56.064 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:34:56 compute-0 nova_compute[254092]: 2025-11-25 17:34:56.251 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:34:56 compute-0 nova_compute[254092]: 2025-11-25 17:34:56.253 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3614MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:34:56 compute-0 nova_compute[254092]: 2025-11-25 17:34:56.253 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:34:56 compute-0 nova_compute[254092]: 2025-11-25 17:34:56.254 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:34:56 compute-0 nova_compute[254092]: 2025-11-25 17:34:56.343 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:34:56 compute-0 nova_compute[254092]: 2025-11-25 17:34:56.344 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:34:56 compute-0 nova_compute[254092]: 2025-11-25 17:34:56.360 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:34:56 compute-0 ceph-mon[74985]: pgmap v3364: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:34:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1394049784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:34:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:34:56 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1702891268' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:34:56 compute-0 nova_compute[254092]: 2025-11-25 17:34:56.958 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:34:56 compute-0 nova_compute[254092]: 2025-11-25 17:34:56.969 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:34:56 compute-0 nova_compute[254092]: 2025-11-25 17:34:56.985 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:34:56 compute-0 nova_compute[254092]: 2025-11-25 17:34:56.988 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:34:56 compute-0 nova_compute[254092]: 2025-11-25 17:34:56.989 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:34:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3365: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:34:57 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1702891268' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:34:57 compute-0 nova_compute[254092]: 2025-11-25 17:34:57.976 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:34:57 compute-0 nova_compute[254092]: 2025-11-25 17:34:57.977 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:34:57 compute-0 nova_compute[254092]: 2025-11-25 17:34:57.977 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:34:58 compute-0 nova_compute[254092]: 2025-11-25 17:34:58.084 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:34:58 compute-0 ceph-mon[74985]: pgmap v3365: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:34:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3366: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:34:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:35:00 compute-0 nova_compute[254092]: 2025-11-25 17:35:00.557 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:35:00 compute-0 ceph-mon[74985]: pgmap v3366: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:01 compute-0 nova_compute[254092]: 2025-11-25 17:35:01.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:35:01 compute-0 nova_compute[254092]: 2025-11-25 17:35:01.498 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:35:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3367: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:01 compute-0 podman[437721]: 2025-11-25 17:35:01.68875949 +0000 UTC m=+0.084531752 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 25 17:35:01 compute-0 podman[437720]: 2025-11-25 17:35:01.726988641 +0000 UTC m=+0.130356100 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 17:35:01 compute-0 podman[437722]: 2025-11-25 17:35:01.737218099 +0000 UTC m=+0.128705415 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Nov 25 17:35:02 compute-0 ceph-mon[74985]: pgmap v3367: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:03 compute-0 nova_compute[254092]: 2025-11-25 17:35:03.085 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:35:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3368: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:04 compute-0 nova_compute[254092]: 2025-11-25 17:35:04.499 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:35:04 compute-0 nova_compute[254092]: 2025-11-25 17:35:04.499 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:35:04 compute-0 nova_compute[254092]: 2025-11-25 17:35:04.500 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:35:04 compute-0 nova_compute[254092]: 2025-11-25 17:35:04.520 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 17:35:04 compute-0 nova_compute[254092]: 2025-11-25 17:35:04.521 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:35:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:35:04 compute-0 ceph-mon[74985]: pgmap v3368: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3369: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:05 compute-0 nova_compute[254092]: 2025-11-25 17:35:05.561 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:35:06 compute-0 nova_compute[254092]: 2025-11-25 17:35:06.513 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:35:06 compute-0 ceph-mon[74985]: pgmap v3369: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3370: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:08 compute-0 nova_compute[254092]: 2025-11-25 17:35:08.088 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:35:08 compute-0 ceph-mon[74985]: pgmap v3370: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3371: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:35:09 compute-0 ceph-mon[74985]: pgmap v3371: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:35:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:35:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:35:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:35:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:35:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:35:10 compute-0 nova_compute[254092]: 2025-11-25 17:35:10.563 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:35:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3372: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:12 compute-0 nova_compute[254092]: 2025-11-25 17:35:12.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:35:12 compute-0 ceph-mon[74985]: pgmap v3372: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:13 compute-0 nova_compute[254092]: 2025-11-25 17:35:13.132 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:35:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3373: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:35:13.675 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:35:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:35:13.676 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:35:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:35:13.676 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:35:14 compute-0 ceph-mon[74985]: pgmap v3373: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:35:14 compute-0 sshd-session[437781]: Connection closed by authenticating user root 171.244.51.45 port 49406 [preauth]
Nov 25 17:35:15 compute-0 nova_compute[254092]: 2025-11-25 17:35:15.480 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:35:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3374: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:15 compute-0 nova_compute[254092]: 2025-11-25 17:35:15.566 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:35:16 compute-0 ceph-mon[74985]: pgmap v3374: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3375: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:18 compute-0 nova_compute[254092]: 2025-11-25 17:35:18.136 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:35:18 compute-0 ceph-mon[74985]: pgmap v3375: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3376: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:35:20 compute-0 nova_compute[254092]: 2025-11-25 17:35:20.569 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:35:20 compute-0 ceph-mon[74985]: pgmap v3376: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3377: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:22 compute-0 ceph-mon[74985]: pgmap v3377: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:23 compute-0 nova_compute[254092]: 2025-11-25 17:35:23.137 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:35:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3378: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:24 compute-0 ceph-mon[74985]: pgmap v3378: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:35:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3379: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:25 compute-0 nova_compute[254092]: 2025-11-25 17:35:25.571 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:35:26 compute-0 ceph-mon[74985]: pgmap v3379: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3380: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:28 compute-0 nova_compute[254092]: 2025-11-25 17:35:28.140 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:35:28 compute-0 ceph-mon[74985]: pgmap v3380: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3381: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:35:30 compute-0 nova_compute[254092]: 2025-11-25 17:35:30.575 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:35:30 compute-0 ceph-mon[74985]: pgmap v3381: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3382: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:32 compute-0 podman[437784]: 2025-11-25 17:35:32.646136357 +0000 UTC m=+0.064940898 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible)
Nov 25 17:35:32 compute-0 podman[437785]: 2025-11-25 17:35:32.665764301 +0000 UTC m=+0.070075578 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Nov 25 17:35:32 compute-0 ceph-mon[74985]: pgmap v3382: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:32 compute-0 podman[437786]: 2025-11-25 17:35:32.727956664 +0000 UTC m=+0.136679991 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 17:35:33 compute-0 nova_compute[254092]: 2025-11-25 17:35:33.143 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:35:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3383: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:35:34 compute-0 ceph-mon[74985]: pgmap v3383: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3384: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:35 compute-0 nova_compute[254092]: 2025-11-25 17:35:35.604 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:35:36 compute-0 ceph-mon[74985]: pgmap v3384: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3385: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:38 compute-0 nova_compute[254092]: 2025-11-25 17:35:38.146 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:35:38 compute-0 ceph-mon[74985]: pgmap v3385: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3386: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:35:39 compute-0 sudo[437847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:35:39 compute-0 sudo[437847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:35:39 compute-0 sudo[437847]: pam_unix(sudo:session): session closed for user root
Nov 25 17:35:39 compute-0 sudo[437872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:35:40 compute-0 sudo[437872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:35:40 compute-0 sudo[437872]: pam_unix(sudo:session): session closed for user root
Nov 25 17:35:40 compute-0 sudo[437897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:35:40 compute-0 sudo[437897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:35:40 compute-0 sudo[437897]: pam_unix(sudo:session): session closed for user root
Nov 25 17:35:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:35:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:35:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:35:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:35:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:35:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:35:40 compute-0 sudo[437922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:35:40 compute-0 sudo[437922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:35:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:35:40
Nov 25 17:35:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:35:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:35:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', 'vms', '.rgw.root', '.mgr', 'backups']
Nov 25 17:35:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:35:40 compute-0 nova_compute[254092]: 2025-11-25 17:35:40.607 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:35:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:35:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:35:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:35:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:35:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:35:40 compute-0 sudo[437922]: pam_unix(sudo:session): session closed for user root
Nov 25 17:35:40 compute-0 ceph-mon[74985]: pgmap v3386: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:35:40 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:35:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:35:40 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:35:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:35:40 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:35:40 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 86bbf3ab-00be-4bc6-957d-2fd5c3428cbb does not exist
Nov 25 17:35:40 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 93f8f155-a37d-4a5f-9aa8-86191c7c6407 does not exist
Nov 25 17:35:40 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 58a5d2a9-3cc0-467b-a973-7249307bce5f does not exist
Nov 25 17:35:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:35:40 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:35:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:35:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:35:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:35:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:35:40 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:35:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:35:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:35:40 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:35:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:35:40 compute-0 sudo[437979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:35:40 compute-0 sudo[437979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:35:40 compute-0 sudo[437979]: pam_unix(sudo:session): session closed for user root
Nov 25 17:35:40 compute-0 sudo[438004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:35:40 compute-0 sudo[438004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:35:40 compute-0 sudo[438004]: pam_unix(sudo:session): session closed for user root
Nov 25 17:35:41 compute-0 sudo[438029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:35:41 compute-0 sudo[438029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:35:41 compute-0 sudo[438029]: pam_unix(sudo:session): session closed for user root
Nov 25 17:35:41 compute-0 sudo[438054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:35:41 compute-0 sudo[438054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:35:41 compute-0 podman[438119]: 2025-11-25 17:35:41.539446975 +0000 UTC m=+0.046534177 container create 6d986aa3831e802c0402b201e3c5153362a5216da492693f7ce06e176c2fa373 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Nov 25 17:35:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3387: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:41 compute-0 systemd[1]: Started libpod-conmon-6d986aa3831e802c0402b201e3c5153362a5216da492693f7ce06e176c2fa373.scope.
Nov 25 17:35:41 compute-0 podman[438119]: 2025-11-25 17:35:41.520261242 +0000 UTC m=+0.027348524 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:35:41 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:35:41 compute-0 podman[438119]: 2025-11-25 17:35:41.668628901 +0000 UTC m=+0.175716153 container init 6d986aa3831e802c0402b201e3c5153362a5216da492693f7ce06e176c2fa373 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_rhodes, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 17:35:41 compute-0 podman[438119]: 2025-11-25 17:35:41.677825692 +0000 UTC m=+0.184912924 container start 6d986aa3831e802c0402b201e3c5153362a5216da492693f7ce06e176c2fa373 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_rhodes, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 17:35:41 compute-0 podman[438119]: 2025-11-25 17:35:41.684495323 +0000 UTC m=+0.191582575 container attach 6d986aa3831e802c0402b201e3c5153362a5216da492693f7ce06e176c2fa373 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:35:41 compute-0 sharp_rhodes[438135]: 167 167
Nov 25 17:35:41 compute-0 systemd[1]: libpod-6d986aa3831e802c0402b201e3c5153362a5216da492693f7ce06e176c2fa373.scope: Deactivated successfully.
Nov 25 17:35:41 compute-0 podman[438119]: 2025-11-25 17:35:41.69172655 +0000 UTC m=+0.198813762 container died 6d986aa3831e802c0402b201e3c5153362a5216da492693f7ce06e176c2fa373 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:35:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-18e34b47b7b1f9a7d94f5f6fff758edd9979f68fe861afc30918531e6b7c8877-merged.mount: Deactivated successfully.
Nov 25 17:35:41 compute-0 podman[438119]: 2025-11-25 17:35:41.734760771 +0000 UTC m=+0.241847973 container remove 6d986aa3831e802c0402b201e3c5153362a5216da492693f7ce06e176c2fa373 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:35:41 compute-0 systemd[1]: libpod-conmon-6d986aa3831e802c0402b201e3c5153362a5216da492693f7ce06e176c2fa373.scope: Deactivated successfully.
Nov 25 17:35:41 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:35:41 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:35:41 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:35:41 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:35:41 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:35:41 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:35:41 compute-0 podman[438159]: 2025-11-25 17:35:41.951052398 +0000 UTC m=+0.066149581 container create a0ae887da1de0cbe49d7d1ed6e6602cd6f088b39a2d86fc7e380d48777d412db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 17:35:42 compute-0 systemd[1]: Started libpod-conmon-a0ae887da1de0cbe49d7d1ed6e6602cd6f088b39a2d86fc7e380d48777d412db.scope.
Nov 25 17:35:42 compute-0 podman[438159]: 2025-11-25 17:35:41.929571824 +0000 UTC m=+0.044669047 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:35:42 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:35:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1697db60e054a5955b2dc0f7a212dba96a85ef1c0f291511b827bfdde840d109/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:35:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1697db60e054a5955b2dc0f7a212dba96a85ef1c0f291511b827bfdde840d109/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:35:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1697db60e054a5955b2dc0f7a212dba96a85ef1c0f291511b827bfdde840d109/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:35:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1697db60e054a5955b2dc0f7a212dba96a85ef1c0f291511b827bfdde840d109/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:35:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1697db60e054a5955b2dc0f7a212dba96a85ef1c0f291511b827bfdde840d109/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:35:42 compute-0 podman[438159]: 2025-11-25 17:35:42.051210294 +0000 UTC m=+0.166307567 container init a0ae887da1de0cbe49d7d1ed6e6602cd6f088b39a2d86fc7e380d48777d412db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_knuth, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:35:42 compute-0 podman[438159]: 2025-11-25 17:35:42.063918961 +0000 UTC m=+0.179016154 container start a0ae887da1de0cbe49d7d1ed6e6602cd6f088b39a2d86fc7e380d48777d412db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:35:42 compute-0 podman[438159]: 2025-11-25 17:35:42.068234888 +0000 UTC m=+0.183332091 container attach a0ae887da1de0cbe49d7d1ed6e6602cd6f088b39a2d86fc7e380d48777d412db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_knuth, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 17:35:42 compute-0 ceph-mon[74985]: pgmap v3387: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:43 compute-0 nova_compute[254092]: 2025-11-25 17:35:43.147 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:35:43 compute-0 peaceful_knuth[438175]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:35:43 compute-0 peaceful_knuth[438175]: --> relative data size: 1.0
Nov 25 17:35:43 compute-0 peaceful_knuth[438175]: --> All data devices are unavailable
Nov 25 17:35:43 compute-0 systemd[1]: libpod-a0ae887da1de0cbe49d7d1ed6e6602cd6f088b39a2d86fc7e380d48777d412db.scope: Deactivated successfully.
Nov 25 17:35:43 compute-0 systemd[1]: libpod-a0ae887da1de0cbe49d7d1ed6e6602cd6f088b39a2d86fc7e380d48777d412db.scope: Consumed 1.068s CPU time.
Nov 25 17:35:43 compute-0 podman[438159]: 2025-11-25 17:35:43.178245161 +0000 UTC m=+1.293342354 container died a0ae887da1de0cbe49d7d1ed6e6602cd6f088b39a2d86fc7e380d48777d412db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:35:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-1697db60e054a5955b2dc0f7a212dba96a85ef1c0f291511b827bfdde840d109-merged.mount: Deactivated successfully.
Nov 25 17:35:43 compute-0 podman[438159]: 2025-11-25 17:35:43.282589362 +0000 UTC m=+1.397686625 container remove a0ae887da1de0cbe49d7d1ed6e6602cd6f088b39a2d86fc7e380d48777d412db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_knuth, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 17:35:43 compute-0 systemd[1]: libpod-conmon-a0ae887da1de0cbe49d7d1ed6e6602cd6f088b39a2d86fc7e380d48777d412db.scope: Deactivated successfully.
Nov 25 17:35:43 compute-0 sudo[438054]: pam_unix(sudo:session): session closed for user root
Nov 25 17:35:43 compute-0 sudo[438218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:35:43 compute-0 sudo[438218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:35:43 compute-0 sudo[438218]: pam_unix(sudo:session): session closed for user root
Nov 25 17:35:43 compute-0 sudo[438243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:35:43 compute-0 sudo[438243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:35:43 compute-0 sudo[438243]: pam_unix(sudo:session): session closed for user root
Nov 25 17:35:43 compute-0 sudo[438268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:35:43 compute-0 sudo[438268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:35:43 compute-0 sudo[438268]: pam_unix(sudo:session): session closed for user root
Nov 25 17:35:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3388: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:43 compute-0 sudo[438293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:35:43 compute-0 sudo[438293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:35:43 compute-0 podman[438358]: 2025-11-25 17:35:43.917438222 +0000 UTC m=+0.054207876 container create e5f1342158b4af6b42d224f20befb8c3140240148b3190563e65ca6f65c455a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 17:35:43 compute-0 systemd[1]: Started libpod-conmon-e5f1342158b4af6b42d224f20befb8c3140240148b3190563e65ca6f65c455a7.scope.
Nov 25 17:35:43 compute-0 podman[438358]: 2025-11-25 17:35:43.88688148 +0000 UTC m=+0.023651214 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:35:43 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:35:44 compute-0 podman[438358]: 2025-11-25 17:35:44.010073113 +0000 UTC m=+0.146842837 container init e5f1342158b4af6b42d224f20befb8c3140240148b3190563e65ca6f65c455a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_shockley, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:35:44 compute-0 podman[438358]: 2025-11-25 17:35:44.017002362 +0000 UTC m=+0.153772016 container start e5f1342158b4af6b42d224f20befb8c3140240148b3190563e65ca6f65c455a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 17:35:44 compute-0 podman[438358]: 2025-11-25 17:35:44.021616657 +0000 UTC m=+0.158386381 container attach e5f1342158b4af6b42d224f20befb8c3140240148b3190563e65ca6f65c455a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 17:35:44 compute-0 agitated_shockley[438374]: 167 167
Nov 25 17:35:44 compute-0 systemd[1]: libpod-e5f1342158b4af6b42d224f20befb8c3140240148b3190563e65ca6f65c455a7.scope: Deactivated successfully.
Nov 25 17:35:44 compute-0 podman[438358]: 2025-11-25 17:35:44.025144544 +0000 UTC m=+0.161914198 container died e5f1342158b4af6b42d224f20befb8c3140240148b3190563e65ca6f65c455a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_shockley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:35:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-24fee3c7070f2c40bee1150a903a04321e31e3bf5c7fa9a49a0f626b328c7063-merged.mount: Deactivated successfully.
Nov 25 17:35:44 compute-0 podman[438358]: 2025-11-25 17:35:44.064365062 +0000 UTC m=+0.201134726 container remove e5f1342158b4af6b42d224f20befb8c3140240148b3190563e65ca6f65c455a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Nov 25 17:35:44 compute-0 systemd[1]: libpod-conmon-e5f1342158b4af6b42d224f20befb8c3140240148b3190563e65ca6f65c455a7.scope: Deactivated successfully.
Nov 25 17:35:44 compute-0 podman[438398]: 2025-11-25 17:35:44.247431984 +0000 UTC m=+0.045048277 container create 6c4a0e62bd53d31c7f1a7cf949fa00dcd0e3b52cb1685de8fb77bd781ff98625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_moser, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 17:35:44 compute-0 systemd[1]: Started libpod-conmon-6c4a0e62bd53d31c7f1a7cf949fa00dcd0e3b52cb1685de8fb77bd781ff98625.scope.
Nov 25 17:35:44 compute-0 podman[438398]: 2025-11-25 17:35:44.225332453 +0000 UTC m=+0.022948656 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:35:44 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:35:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f794b2d08fb31529b5e84b98bfc568cc6e5d8993f7a4b415f12adda06ea21ea2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:35:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f794b2d08fb31529b5e84b98bfc568cc6e5d8993f7a4b415f12adda06ea21ea2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:35:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f794b2d08fb31529b5e84b98bfc568cc6e5d8993f7a4b415f12adda06ea21ea2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:35:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f794b2d08fb31529b5e84b98bfc568cc6e5d8993f7a4b415f12adda06ea21ea2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:35:44 compute-0 podman[438398]: 2025-11-25 17:35:44.357183431 +0000 UTC m=+0.154799674 container init 6c4a0e62bd53d31c7f1a7cf949fa00dcd0e3b52cb1685de8fb77bd781ff98625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:35:44 compute-0 podman[438398]: 2025-11-25 17:35:44.367338568 +0000 UTC m=+0.164954751 container start 6c4a0e62bd53d31c7f1a7cf949fa00dcd0e3b52cb1685de8fb77bd781ff98625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:35:44 compute-0 podman[438398]: 2025-11-25 17:35:44.373986419 +0000 UTC m=+0.171602652 container attach 6c4a0e62bd53d31c7f1a7cf949fa00dcd0e3b52cb1685de8fb77bd781ff98625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_moser, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:35:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:35:44 compute-0 ceph-mon[74985]: pgmap v3388: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:45 compute-0 distracted_moser[438414]: {
Nov 25 17:35:45 compute-0 distracted_moser[438414]:     "0": [
Nov 25 17:35:45 compute-0 distracted_moser[438414]:         {
Nov 25 17:35:45 compute-0 distracted_moser[438414]:             "devices": [
Nov 25 17:35:45 compute-0 distracted_moser[438414]:                 "/dev/loop3"
Nov 25 17:35:45 compute-0 distracted_moser[438414]:             ],
Nov 25 17:35:45 compute-0 distracted_moser[438414]:             "lv_name": "ceph_lv0",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:             "lv_size": "21470642176",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:             "name": "ceph_lv0",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:             "tags": {
Nov 25 17:35:45 compute-0 distracted_moser[438414]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:                 "ceph.cluster_name": "ceph",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:                 "ceph.crush_device_class": "",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:                 "ceph.encrypted": "0",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:                 "ceph.osd_id": "0",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:                 "ceph.type": "block",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:                 "ceph.vdo": "0"
Nov 25 17:35:45 compute-0 distracted_moser[438414]:             },
Nov 25 17:35:45 compute-0 distracted_moser[438414]:             "type": "block",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:             "vg_name": "ceph_vg0"
Nov 25 17:35:45 compute-0 distracted_moser[438414]:         }
Nov 25 17:35:45 compute-0 distracted_moser[438414]:     ],
Nov 25 17:35:45 compute-0 distracted_moser[438414]:     "1": [
Nov 25 17:35:45 compute-0 distracted_moser[438414]:         {
Nov 25 17:35:45 compute-0 distracted_moser[438414]:             "devices": [
Nov 25 17:35:45 compute-0 distracted_moser[438414]:                 "/dev/loop4"
Nov 25 17:35:45 compute-0 distracted_moser[438414]:             ],
Nov 25 17:35:45 compute-0 distracted_moser[438414]:             "lv_name": "ceph_lv1",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:             "lv_size": "21470642176",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:             "name": "ceph_lv1",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:             "tags": {
Nov 25 17:35:45 compute-0 distracted_moser[438414]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:                 "ceph.cluster_name": "ceph",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:                 "ceph.crush_device_class": "",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:                 "ceph.encrypted": "0",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:                 "ceph.osd_id": "1",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:                 "ceph.type": "block",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:                 "ceph.vdo": "0"
Nov 25 17:35:45 compute-0 distracted_moser[438414]:             },
Nov 25 17:35:45 compute-0 distracted_moser[438414]:             "type": "block",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:             "vg_name": "ceph_vg1"
Nov 25 17:35:45 compute-0 distracted_moser[438414]:         }
Nov 25 17:35:45 compute-0 distracted_moser[438414]:     ],
Nov 25 17:35:45 compute-0 distracted_moser[438414]:     "2": [
Nov 25 17:35:45 compute-0 distracted_moser[438414]:         {
Nov 25 17:35:45 compute-0 distracted_moser[438414]:             "devices": [
Nov 25 17:35:45 compute-0 distracted_moser[438414]:                 "/dev/loop5"
Nov 25 17:35:45 compute-0 distracted_moser[438414]:             ],
Nov 25 17:35:45 compute-0 distracted_moser[438414]:             "lv_name": "ceph_lv2",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:             "lv_size": "21470642176",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:             "name": "ceph_lv2",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:             "tags": {
Nov 25 17:35:45 compute-0 distracted_moser[438414]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:                 "ceph.cluster_name": "ceph",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:                 "ceph.crush_device_class": "",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:                 "ceph.encrypted": "0",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:                 "ceph.osd_id": "2",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:                 "ceph.type": "block",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:                 "ceph.vdo": "0"
Nov 25 17:35:45 compute-0 distracted_moser[438414]:             },
Nov 25 17:35:45 compute-0 distracted_moser[438414]:             "type": "block",
Nov 25 17:35:45 compute-0 distracted_moser[438414]:             "vg_name": "ceph_vg2"
Nov 25 17:35:45 compute-0 distracted_moser[438414]:         }
Nov 25 17:35:45 compute-0 distracted_moser[438414]:     ]
Nov 25 17:35:45 compute-0 distracted_moser[438414]: }
Nov 25 17:35:45 compute-0 systemd[1]: libpod-6c4a0e62bd53d31c7f1a7cf949fa00dcd0e3b52cb1685de8fb77bd781ff98625.scope: Deactivated successfully.
Nov 25 17:35:45 compute-0 podman[438398]: 2025-11-25 17:35:45.278633663 +0000 UTC m=+1.076249886 container died 6c4a0e62bd53d31c7f1a7cf949fa00dcd0e3b52cb1685de8fb77bd781ff98625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_moser, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 17:35:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-f794b2d08fb31529b5e84b98bfc568cc6e5d8993f7a4b415f12adda06ea21ea2-merged.mount: Deactivated successfully.
Nov 25 17:35:45 compute-0 podman[438398]: 2025-11-25 17:35:45.344014632 +0000 UTC m=+1.141630815 container remove 6c4a0e62bd53d31c7f1a7cf949fa00dcd0e3b52cb1685de8fb77bd781ff98625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 17:35:45 compute-0 systemd[1]: libpod-conmon-6c4a0e62bd53d31c7f1a7cf949fa00dcd0e3b52cb1685de8fb77bd781ff98625.scope: Deactivated successfully.
Nov 25 17:35:45 compute-0 sudo[438293]: pam_unix(sudo:session): session closed for user root
Nov 25 17:35:45 compute-0 sudo[438435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:35:45 compute-0 sudo[438435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:35:45 compute-0 sudo[438435]: pam_unix(sudo:session): session closed for user root
Nov 25 17:35:45 compute-0 sudo[438460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:35:45 compute-0 sudo[438460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:35:45 compute-0 sudo[438460]: pam_unix(sudo:session): session closed for user root
Nov 25 17:35:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3389: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:45 compute-0 nova_compute[254092]: 2025-11-25 17:35:45.611 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:35:45 compute-0 sudo[438485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:35:45 compute-0 sudo[438485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:35:45 compute-0 sudo[438485]: pam_unix(sudo:session): session closed for user root
Nov 25 17:35:45 compute-0 sudo[438510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:35:45 compute-0 sudo[438510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:35:46 compute-0 podman[438575]: 2025-11-25 17:35:46.030178249 +0000 UTC m=+0.044671247 container create 62c358f21ebc5b594533d1f38894d02e104b4236d8039485b6dc8d49a7c5f10e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_swartz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:35:46 compute-0 systemd[1]: Started libpod-conmon-62c358f21ebc5b594533d1f38894d02e104b4236d8039485b6dc8d49a7c5f10e.scope.
Nov 25 17:35:46 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:35:46 compute-0 podman[438575]: 2025-11-25 17:35:46.010793401 +0000 UTC m=+0.025286449 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:35:46 compute-0 podman[438575]: 2025-11-25 17:35:46.113892028 +0000 UTC m=+0.128385026 container init 62c358f21ebc5b594533d1f38894d02e104b4236d8039485b6dc8d49a7c5f10e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:35:46 compute-0 podman[438575]: 2025-11-25 17:35:46.119659684 +0000 UTC m=+0.134152732 container start 62c358f21ebc5b594533d1f38894d02e104b4236d8039485b6dc8d49a7c5f10e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_swartz, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:35:46 compute-0 podman[438575]: 2025-11-25 17:35:46.123483239 +0000 UTC m=+0.137976257 container attach 62c358f21ebc5b594533d1f38894d02e104b4236d8039485b6dc8d49a7c5f10e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 17:35:46 compute-0 condescending_swartz[438591]: 167 167
Nov 25 17:35:46 compute-0 systemd[1]: libpod-62c358f21ebc5b594533d1f38894d02e104b4236d8039485b6dc8d49a7c5f10e.scope: Deactivated successfully.
Nov 25 17:35:46 compute-0 podman[438575]: 2025-11-25 17:35:46.125780771 +0000 UTC m=+0.140273779 container died 62c358f21ebc5b594533d1f38894d02e104b4236d8039485b6dc8d49a7c5f10e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_swartz, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 17:35:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd4a3a7d5b8ff599d3d8db73a9f401ff96a31600c62ead7eafdb81c418aa4087-merged.mount: Deactivated successfully.
Nov 25 17:35:46 compute-0 podman[438575]: 2025-11-25 17:35:46.200802773 +0000 UTC m=+0.215295811 container remove 62c358f21ebc5b594533d1f38894d02e104b4236d8039485b6dc8d49a7c5f10e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_swartz, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 17:35:46 compute-0 systemd[1]: libpod-conmon-62c358f21ebc5b594533d1f38894d02e104b4236d8039485b6dc8d49a7c5f10e.scope: Deactivated successfully.
Nov 25 17:35:46 compute-0 podman[438616]: 2025-11-25 17:35:46.369791964 +0000 UTC m=+0.047900886 container create 5d8e143d2fe27a19b113be3eb9311524b551d644d3cd1283d18d7a7b6a3beb78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_swirles, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 17:35:46 compute-0 systemd[1]: Started libpod-conmon-5d8e143d2fe27a19b113be3eb9311524b551d644d3cd1283d18d7a7b6a3beb78.scope.
Nov 25 17:35:46 compute-0 podman[438616]: 2025-11-25 17:35:46.346633252 +0000 UTC m=+0.024742265 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:35:46 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:35:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f12bdb7f80cedf27fb3c0afaedfd5c242e55da20cc748d4aeebb63d0ff66c4e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:35:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f12bdb7f80cedf27fb3c0afaedfd5c242e55da20cc748d4aeebb63d0ff66c4e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:35:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f12bdb7f80cedf27fb3c0afaedfd5c242e55da20cc748d4aeebb63d0ff66c4e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:35:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f12bdb7f80cedf27fb3c0afaedfd5c242e55da20cc748d4aeebb63d0ff66c4e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:35:46 compute-0 podman[438616]: 2025-11-25 17:35:46.496331328 +0000 UTC m=+0.174440280 container init 5d8e143d2fe27a19b113be3eb9311524b551d644d3cd1283d18d7a7b6a3beb78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_swirles, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:35:46 compute-0 podman[438616]: 2025-11-25 17:35:46.504417798 +0000 UTC m=+0.182526750 container start 5d8e143d2fe27a19b113be3eb9311524b551d644d3cd1283d18d7a7b6a3beb78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_swirles, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 17:35:46 compute-0 podman[438616]: 2025-11-25 17:35:46.510079982 +0000 UTC m=+0.188188884 container attach 5d8e143d2fe27a19b113be3eb9311524b551d644d3cd1283d18d7a7b6a3beb78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 17:35:46 compute-0 ceph-mon[74985]: pgmap v3389: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3390: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:47 compute-0 jovial_swirles[438632]: {
Nov 25 17:35:47 compute-0 jovial_swirles[438632]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:35:47 compute-0 jovial_swirles[438632]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:35:47 compute-0 jovial_swirles[438632]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:35:47 compute-0 jovial_swirles[438632]:         "osd_id": 1,
Nov 25 17:35:47 compute-0 jovial_swirles[438632]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:35:47 compute-0 jovial_swirles[438632]:         "type": "bluestore"
Nov 25 17:35:47 compute-0 jovial_swirles[438632]:     },
Nov 25 17:35:47 compute-0 jovial_swirles[438632]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:35:47 compute-0 jovial_swirles[438632]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:35:47 compute-0 jovial_swirles[438632]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:35:47 compute-0 jovial_swirles[438632]:         "osd_id": 2,
Nov 25 17:35:47 compute-0 jovial_swirles[438632]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:35:47 compute-0 jovial_swirles[438632]:         "type": "bluestore"
Nov 25 17:35:47 compute-0 jovial_swirles[438632]:     },
Nov 25 17:35:47 compute-0 jovial_swirles[438632]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:35:47 compute-0 jovial_swirles[438632]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:35:47 compute-0 jovial_swirles[438632]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:35:47 compute-0 jovial_swirles[438632]:         "osd_id": 0,
Nov 25 17:35:47 compute-0 jovial_swirles[438632]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:35:47 compute-0 jovial_swirles[438632]:         "type": "bluestore"
Nov 25 17:35:47 compute-0 jovial_swirles[438632]:     }
Nov 25 17:35:47 compute-0 jovial_swirles[438632]: }
Nov 25 17:35:47 compute-0 systemd[1]: libpod-5d8e143d2fe27a19b113be3eb9311524b551d644d3cd1283d18d7a7b6a3beb78.scope: Deactivated successfully.
Nov 25 17:35:47 compute-0 podman[438616]: 2025-11-25 17:35:47.686542505 +0000 UTC m=+1.364651447 container died 5d8e143d2fe27a19b113be3eb9311524b551d644d3cd1283d18d7a7b6a3beb78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:35:47 compute-0 systemd[1]: libpod-5d8e143d2fe27a19b113be3eb9311524b551d644d3cd1283d18d7a7b6a3beb78.scope: Consumed 1.189s CPU time.
Nov 25 17:35:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f12bdb7f80cedf27fb3c0afaedfd5c242e55da20cc748d4aeebb63d0ff66c4e-merged.mount: Deactivated successfully.
Nov 25 17:35:47 compute-0 podman[438616]: 2025-11-25 17:35:47.760249971 +0000 UTC m=+1.438358903 container remove 5d8e143d2fe27a19b113be3eb9311524b551d644d3cd1283d18d7a7b6a3beb78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 17:35:47 compute-0 systemd[1]: libpod-conmon-5d8e143d2fe27a19b113be3eb9311524b551d644d3cd1283d18d7a7b6a3beb78.scope: Deactivated successfully.
Nov 25 17:35:47 compute-0 sudo[438510]: pam_unix(sudo:session): session closed for user root
Nov 25 17:35:47 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:35:47 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:35:47 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:35:47 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:35:47 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 444caea5-b475-4157-92c3-6c975755088a does not exist
Nov 25 17:35:47 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 80fe0c97-4267-4d88-9648-8f7fecd1b5b4 does not exist
Nov 25 17:35:47 compute-0 ceph-mon[74985]: pgmap v3390: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:47 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:35:47 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:35:47 compute-0 sudo[438679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:35:47 compute-0 sudo[438679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:35:47 compute-0 sudo[438679]: pam_unix(sudo:session): session closed for user root
Nov 25 17:35:48 compute-0 sudo[438704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:35:48 compute-0 sudo[438704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:35:48 compute-0 sudo[438704]: pam_unix(sudo:session): session closed for user root
Nov 25 17:35:48 compute-0 nova_compute[254092]: 2025-11-25 17:35:48.150 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:35:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3391: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:35:50 compute-0 ceph-mon[74985]: pgmap v3391: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:50 compute-0 nova_compute[254092]: 2025-11-25 17:35:50.641 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:35:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3392: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:35:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:35:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:35:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:35:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 17:35:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:35:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:35:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:35:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:35:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:35:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 17:35:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:35:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:35:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:35:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:35:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:35:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:35:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:35:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:35:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:35:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:35:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:35:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:35:52 compute-0 ceph-mon[74985]: pgmap v3392: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:53 compute-0 nova_compute[254092]: 2025-11-25 17:35:53.153 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:35:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3393: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:54 compute-0 ceph-mon[74985]: pgmap v3393: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:35:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:35:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1678690733' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:35:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:35:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1678690733' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:35:55 compute-0 nova_compute[254092]: 2025-11-25 17:35:55.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:35:55 compute-0 nova_compute[254092]: 2025-11-25 17:35:55.530 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:35:55 compute-0 nova_compute[254092]: 2025-11-25 17:35:55.530 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:35:55 compute-0 nova_compute[254092]: 2025-11-25 17:35:55.531 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:35:55 compute-0 nova_compute[254092]: 2025-11-25 17:35:55.531 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:35:55 compute-0 nova_compute[254092]: 2025-11-25 17:35:55.531 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:35:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3394: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:55 compute-0 nova_compute[254092]: 2025-11-25 17:35:55.645 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:35:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/1678690733' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:35:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/1678690733' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:35:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:35:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2939635113' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:35:55 compute-0 nova_compute[254092]: 2025-11-25 17:35:55.991 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:35:56 compute-0 nova_compute[254092]: 2025-11-25 17:35:56.153 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:35:56 compute-0 nova_compute[254092]: 2025-11-25 17:35:56.155 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3607MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:35:56 compute-0 nova_compute[254092]: 2025-11-25 17:35:56.155 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:35:56 compute-0 nova_compute[254092]: 2025-11-25 17:35:56.155 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:35:56 compute-0 nova_compute[254092]: 2025-11-25 17:35:56.397 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:35:56 compute-0 nova_compute[254092]: 2025-11-25 17:35:56.398 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:35:56 compute-0 nova_compute[254092]: 2025-11-25 17:35:56.477 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing inventories for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 17:35:56 compute-0 nova_compute[254092]: 2025-11-25 17:35:56.572 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating ProviderTree inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 17:35:56 compute-0 nova_compute[254092]: 2025-11-25 17:35:56.572 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 17:35:56 compute-0 nova_compute[254092]: 2025-11-25 17:35:56.584 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing aggregate associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 17:35:56 compute-0 nova_compute[254092]: 2025-11-25 17:35:56.603 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing trait associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 17:35:56 compute-0 nova_compute[254092]: 2025-11-25 17:35:56.620 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:35:56 compute-0 ceph-mon[74985]: pgmap v3394: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2939635113' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:35:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:35:57 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3918263461' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:35:57 compute-0 nova_compute[254092]: 2025-11-25 17:35:57.109 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:35:57 compute-0 nova_compute[254092]: 2025-11-25 17:35:57.116 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:35:57 compute-0 nova_compute[254092]: 2025-11-25 17:35:57.129 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:35:57 compute-0 nova_compute[254092]: 2025-11-25 17:35:57.130 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:35:57 compute-0 nova_compute[254092]: 2025-11-25 17:35:57.130 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.975s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:35:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3395: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:57 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3918263461' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:35:58 compute-0 nova_compute[254092]: 2025-11-25 17:35:58.154 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:35:58 compute-0 ceph-mon[74985]: pgmap v3395: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3396: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:35:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:36:00 compute-0 nova_compute[254092]: 2025-11-25 17:36:00.130 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:36:00 compute-0 nova_compute[254092]: 2025-11-25 17:36:00.131 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:36:00 compute-0 nova_compute[254092]: 2025-11-25 17:36:00.131 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:36:00 compute-0 nova_compute[254092]: 2025-11-25 17:36:00.647 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:36:00 compute-0 ceph-mon[74985]: pgmap v3396: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:36:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3397: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:36:02 compute-0 nova_compute[254092]: 2025-11-25 17:36:02.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:36:02 compute-0 nova_compute[254092]: 2025-11-25 17:36:02.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:36:02 compute-0 ceph-mon[74985]: pgmap v3397: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:36:03 compute-0 nova_compute[254092]: 2025-11-25 17:36:03.155 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:36:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3398: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:36:03 compute-0 podman[438774]: 2025-11-25 17:36:03.697838191 +0000 UTC m=+0.092665623 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 25 17:36:03 compute-0 podman[438773]: 2025-11-25 17:36:03.713163929 +0000 UTC m=+0.108475824 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:36:03 compute-0 podman[438775]: 2025-11-25 17:36:03.782886656 +0000 UTC m=+0.178265113 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 25 17:36:04 compute-0 nova_compute[254092]: 2025-11-25 17:36:04.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:36:04 compute-0 nova_compute[254092]: 2025-11-25 17:36:04.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:36:04 compute-0 nova_compute[254092]: 2025-11-25 17:36:04.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:36:04 compute-0 nova_compute[254092]: 2025-11-25 17:36:04.520 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 17:36:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:36:04 compute-0 ceph-mon[74985]: pgmap v3398: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:36:05 compute-0 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #51. Immutable memtables: 0.
Nov 25 17:36:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3399: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 24 op/s
Nov 25 17:36:05 compute-0 nova_compute[254092]: 2025-11-25 17:36:05.649 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:36:06 compute-0 nova_compute[254092]: 2025-11-25 17:36:06.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:36:06 compute-0 nova_compute[254092]: 2025-11-25 17:36:06.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:36:06 compute-0 ceph-mon[74985]: pgmap v3399: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 24 op/s
Nov 25 17:36:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3400: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 43 op/s
Nov 25 17:36:08 compute-0 nova_compute[254092]: 2025-11-25 17:36:08.158 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:36:08 compute-0 ceph-mon[74985]: pgmap v3400: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 43 op/s
Nov 25 17:36:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3401: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 43 op/s
Nov 25 17:36:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:36:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:36:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:36:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:36:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:36:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:36:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:36:10 compute-0 nova_compute[254092]: 2025-11-25 17:36:10.652 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:36:10 compute-0 ceph-mon[74985]: pgmap v3401: 321 pgs: 321 active+clean; 41 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 43 op/s
Nov 25 17:36:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3402: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 17:36:11 compute-0 ceph-mon[74985]: pgmap v3402: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 17:36:12 compute-0 nova_compute[254092]: 2025-11-25 17:36:12.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:36:13 compute-0 nova_compute[254092]: 2025-11-25 17:36:13.161 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:36:13 compute-0 nova_compute[254092]: 2025-11-25 17:36:13.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:36:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3403: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 17:36:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:36:13.676 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:36:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:36:13.677 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:36:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:36:13.677 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:36:14 compute-0 ceph-mon[74985]: pgmap v3403: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 17:36:14 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #162. Immutable memtables: 0.
Nov 25 17:36:14 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:36:14.626854) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 17:36:14 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 162
Nov 25 17:36:14 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092174626905, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 1441, "num_deletes": 252, "total_data_size": 2204578, "memory_usage": 2243624, "flush_reason": "Manual Compaction"}
Nov 25 17:36:14 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #163: started
Nov 25 17:36:14 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092174646413, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 163, "file_size": 2171544, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 69129, "largest_seqno": 70569, "table_properties": {"data_size": 2164759, "index_size": 3919, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14246, "raw_average_key_size": 20, "raw_value_size": 2151051, "raw_average_value_size": 3033, "num_data_blocks": 175, "num_entries": 709, "num_filter_entries": 709, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764092030, "oldest_key_time": 1764092030, "file_creation_time": 1764092174, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 163, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:36:14 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 19651 microseconds, and 7280 cpu microseconds.
Nov 25 17:36:14 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:36:14 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:36:14.646503) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #163: 2171544 bytes OK
Nov 25 17:36:14 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:36:14.646533) [db/memtable_list.cc:519] [default] Level-0 commit table #163 started
Nov 25 17:36:14 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:36:14.647658) [db/memtable_list.cc:722] [default] Level-0 commit table #163: memtable #1 done
Nov 25 17:36:14 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:36:14.647675) EVENT_LOG_v1 {"time_micros": 1764092174647669, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 17:36:14 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:36:14.647700) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 17:36:14 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 2198223, prev total WAL file size 2198223, number of live WAL files 2.
Nov 25 17:36:14 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000159.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:36:14 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:36:14.648524) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Nov 25 17:36:14 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 17:36:14 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [163(2120KB)], [161(9042KB)]
Nov 25 17:36:14 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092174648567, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [163], "files_L6": [161], "score": -1, "input_data_size": 11430643, "oldest_snapshot_seqno": -1}
Nov 25 17:36:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:36:14 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #164: 8854 keys, 9661007 bytes, temperature: kUnknown
Nov 25 17:36:14 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092174709444, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 164, "file_size": 9661007, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9606298, "index_size": 31492, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22149, "raw_key_size": 232365, "raw_average_key_size": 26, "raw_value_size": 9452737, "raw_average_value_size": 1067, "num_data_blocks": 1213, "num_entries": 8854, "num_filter_entries": 8854, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764092174, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:36:14 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:36:14 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:36:14.709940) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 9661007 bytes
Nov 25 17:36:14 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:36:14.711307) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 187.3 rd, 158.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 8.8 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(9.7) write-amplify(4.4) OK, records in: 9374, records dropped: 520 output_compression: NoCompression
Nov 25 17:36:14 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:36:14.711346) EVENT_LOG_v1 {"time_micros": 1764092174711326, "job": 100, "event": "compaction_finished", "compaction_time_micros": 61030, "compaction_time_cpu_micros": 24417, "output_level": 6, "num_output_files": 1, "total_output_size": 9661007, "num_input_records": 9374, "num_output_records": 8854, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 17:36:14 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000163.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:36:14 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092174712503, "job": 100, "event": "table_file_deletion", "file_number": 163}
Nov 25 17:36:14 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:36:14 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092174716126, "job": 100, "event": "table_file_deletion", "file_number": 161}
Nov 25 17:36:14 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:36:14.648459) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:36:14 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:36:14.716268) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:36:14 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:36:14.716278) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:36:14 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:36:14.716281) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:36:14 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:36:14.716284) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:36:14 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:36:14.716287) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:36:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3404: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 17:36:15 compute-0 nova_compute[254092]: 2025-11-25 17:36:15.655 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:36:16 compute-0 ceph-mon[74985]: pgmap v3404: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 17:36:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3405: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 34 op/s
Nov 25 17:36:18 compute-0 nova_compute[254092]: 2025-11-25 17:36:18.202 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:36:18 compute-0 ceph-mon[74985]: pgmap v3405: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 34 op/s
Nov 25 17:36:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3406: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.5 KiB/s rd, 0 B/s wr, 15 op/s
Nov 25 17:36:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:36:20 compute-0 ceph-mon[74985]: pgmap v3406: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.5 KiB/s rd, 0 B/s wr, 15 op/s
Nov 25 17:36:20 compute-0 nova_compute[254092]: 2025-11-25 17:36:20.657 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:36:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3407: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.5 KiB/s rd, 0 B/s wr, 15 op/s
Nov 25 17:36:22 compute-0 sshd-session[438837]: Accepted publickey for zuul from 192.168.122.30 port 53554 ssh2: ECDSA SHA256:9KqzpXmppnMwGwVHF2wOKwwhXNcutlJnRXXU19Lreu4
Nov 25 17:36:22 compute-0 systemd-logind[791]: New session 52 of user zuul.
Nov 25 17:36:22 compute-0 systemd[1]: Started Session 52 of User zuul.
Nov 25 17:36:22 compute-0 sshd-session[438837]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 17:36:22 compute-0 ceph-mon[74985]: pgmap v3407: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.5 KiB/s rd, 0 B/s wr, 15 op/s
Nov 25 17:36:23 compute-0 nova_compute[254092]: 2025-11-25 17:36:23.242 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:36:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3408: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:36:23 compute-0 ceph-mon[74985]: pgmap v3408: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:36:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:36:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3409: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:36:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:36:25.575 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=68, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=67) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:36:25 compute-0 nova_compute[254092]: 2025-11-25 17:36:25.575 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:36:25 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:36:25.576 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 17:36:25 compute-0 nova_compute[254092]: 2025-11-25 17:36:25.659 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:36:26 compute-0 ceph-mon[74985]: pgmap v3409: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:36:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3410: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:36:28 compute-0 ceph-mon[74985]: pgmap v3410: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:36:28 compute-0 nova_compute[254092]: 2025-11-25 17:36:28.321 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:36:29 compute-0 sshd-session[438840]: Connection closed by 192.168.122.30 port 53554
Nov 25 17:36:29 compute-0 sshd-session[438837]: pam_unix(sshd:session): session closed for user zuul
Nov 25 17:36:29 compute-0 systemd[1]: session-52.scope: Deactivated successfully.
Nov 25 17:36:29 compute-0 systemd-logind[791]: Session 52 logged out. Waiting for processes to exit.
Nov 25 17:36:29 compute-0 systemd-logind[791]: Removed session 52.
Nov 25 17:36:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3411: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:36:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:36:30 compute-0 ceph-mon[74985]: pgmap v3411: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:36:30 compute-0 nova_compute[254092]: 2025-11-25 17:36:30.662 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:36:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3412: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:36:31 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:36:31.579 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '68'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:36:32 compute-0 ceph-mon[74985]: pgmap v3412: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:36:33 compute-0 nova_compute[254092]: 2025-11-25 17:36:33.387 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:36:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3413: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:36:34 compute-0 ceph-mon[74985]: pgmap v3413: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:36:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:36:34 compute-0 podman[439095]: 2025-11-25 17:36:34.753847181 +0000 UTC m=+0.150395645 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Nov 25 17:36:34 compute-0 podman[439094]: 2025-11-25 17:36:34.7927306 +0000 UTC m=+0.189281664 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251118)
Nov 25 17:36:34 compute-0 podman[439096]: 2025-11-25 17:36:34.81368305 +0000 UTC m=+0.208808834 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 17:36:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3414: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:36:35 compute-0 nova_compute[254092]: 2025-11-25 17:36:35.666 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:36:36 compute-0 ceph-mon[74985]: pgmap v3414: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:36:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3415: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:36:37 compute-0 ceph-mon[74985]: pgmap v3415: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:36:38 compute-0 nova_compute[254092]: 2025-11-25 17:36:38.433 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:36:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3416: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:36:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:36:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:36:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:36:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:36:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:36:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:36:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:36:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:36:40
Nov 25 17:36:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:36:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:36:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', '.mgr', 'images', 'volumes']
Nov 25 17:36:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:36:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:36:40 compute-0 nova_compute[254092]: 2025-11-25 17:36:40.670 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:36:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:36:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:36:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:36:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:36:40 compute-0 ceph-mon[74985]: pgmap v3416: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:36:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:36:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:36:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:36:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:36:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:36:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3417: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:36:42 compute-0 ceph-mon[74985]: pgmap v3417: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:36:43 compute-0 nova_compute[254092]: 2025-11-25 17:36:43.481 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:36:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3418: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:36:44 compute-0 ceph-mon[74985]: pgmap v3418: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:36:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:36:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3419: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:36:45 compute-0 nova_compute[254092]: 2025-11-25 17:36:45.671 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:36:46 compute-0 ceph-mon[74985]: pgmap v3419: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:36:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3420: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:36:48 compute-0 sudo[439156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:36:48 compute-0 sudo[439156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:36:48 compute-0 sudo[439156]: pam_unix(sudo:session): session closed for user root
Nov 25 17:36:48 compute-0 sudo[439181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:36:48 compute-0 sudo[439181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:36:48 compute-0 sudo[439181]: pam_unix(sudo:session): session closed for user root
Nov 25 17:36:48 compute-0 sudo[439206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:36:48 compute-0 sudo[439206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:36:48 compute-0 sudo[439206]: pam_unix(sudo:session): session closed for user root
Nov 25 17:36:48 compute-0 sudo[439231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 25 17:36:48 compute-0 sudo[439231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:36:48 compute-0 nova_compute[254092]: 2025-11-25 17:36:48.516 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:36:48 compute-0 sudo[439231]: pam_unix(sudo:session): session closed for user root
Nov 25 17:36:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:36:48 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:36:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:36:48 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:36:48 compute-0 sudo[439276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:36:48 compute-0 sudo[439276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:36:48 compute-0 sudo[439276]: pam_unix(sudo:session): session closed for user root
Nov 25 17:36:48 compute-0 sudo[439301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:36:48 compute-0 sudo[439301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:36:48 compute-0 sudo[439301]: pam_unix(sudo:session): session closed for user root
Nov 25 17:36:48 compute-0 ceph-mon[74985]: pgmap v3420: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:36:48 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:36:48 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:36:48 compute-0 sudo[439326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:36:48 compute-0 sudo[439326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:36:48 compute-0 sudo[439326]: pam_unix(sudo:session): session closed for user root
Nov 25 17:36:48 compute-0 sudo[439351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:36:48 compute-0 sudo[439351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:36:49 compute-0 sudo[439351]: pam_unix(sudo:session): session closed for user root
Nov 25 17:36:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:36:49 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:36:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:36:49 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:36:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:36:49 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:36:49 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 52902d94-519a-4ea1-b8a3-2a6a0a1a8a81 does not exist
Nov 25 17:36:49 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 30966eff-ac50-4586-82b3-80c084e5b896 does not exist
Nov 25 17:36:49 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 39ddf466-7e27-4d64-bdde-1d4c27b9a175 does not exist
Nov 25 17:36:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:36:49 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:36:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:36:49 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:36:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:36:49 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:36:49 compute-0 sudo[439407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:36:49 compute-0 sudo[439407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:36:49 compute-0 sudo[439407]: pam_unix(sudo:session): session closed for user root
Nov 25 17:36:49 compute-0 sudo[439432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:36:49 compute-0 sudo[439432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:36:49 compute-0 sudo[439432]: pam_unix(sudo:session): session closed for user root
Nov 25 17:36:49 compute-0 sudo[439457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:36:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3421: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:36:49 compute-0 sudo[439457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:36:49 compute-0 sudo[439457]: pam_unix(sudo:session): session closed for user root
Nov 25 17:36:49 compute-0 sudo[439482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:36:49 compute-0 sudo[439482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:36:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:36:49 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:36:49 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:36:49 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:36:49 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:36:49 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:36:49 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:36:50 compute-0 podman[439548]: 2025-11-25 17:36:50.021702007 +0000 UTC m=+0.057318411 container create 3d04a252cc5e5528c28c114a4577ea415259f833fc07b1bae4990a86086769c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_fermat, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 17:36:50 compute-0 systemd[1]: Started libpod-conmon-3d04a252cc5e5528c28c114a4577ea415259f833fc07b1bae4990a86086769c6.scope.
Nov 25 17:36:50 compute-0 podman[439548]: 2025-11-25 17:36:49.996514961 +0000 UTC m=+0.032131375 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:36:50 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:36:50 compute-0 podman[439548]: 2025-11-25 17:36:50.132516044 +0000 UTC m=+0.168132498 container init 3d04a252cc5e5528c28c114a4577ea415259f833fc07b1bae4990a86086769c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:36:50 compute-0 podman[439548]: 2025-11-25 17:36:50.143010029 +0000 UTC m=+0.178626433 container start 3d04a252cc5e5528c28c114a4577ea415259f833fc07b1bae4990a86086769c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 17:36:50 compute-0 podman[439548]: 2025-11-25 17:36:50.146703161 +0000 UTC m=+0.182319615 container attach 3d04a252cc5e5528c28c114a4577ea415259f833fc07b1bae4990a86086769c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_fermat, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 17:36:50 compute-0 xenodochial_fermat[439564]: 167 167
Nov 25 17:36:50 compute-0 systemd[1]: libpod-3d04a252cc5e5528c28c114a4577ea415259f833fc07b1bae4990a86086769c6.scope: Deactivated successfully.
Nov 25 17:36:50 compute-0 podman[439548]: 2025-11-25 17:36:50.152733775 +0000 UTC m=+0.188350149 container died 3d04a252cc5e5528c28c114a4577ea415259f833fc07b1bae4990a86086769c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 17:36:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e1b89a7546aaed64659f5c15a7bc41fb8ba71ae5f50df4f3f9c208004c7dd45-merged.mount: Deactivated successfully.
Nov 25 17:36:50 compute-0 podman[439548]: 2025-11-25 17:36:50.205284945 +0000 UTC m=+0.240901319 container remove 3d04a252cc5e5528c28c114a4577ea415259f833fc07b1bae4990a86086769c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 17:36:50 compute-0 systemd[1]: libpod-conmon-3d04a252cc5e5528c28c114a4577ea415259f833fc07b1bae4990a86086769c6.scope: Deactivated successfully.
Nov 25 17:36:50 compute-0 podman[439590]: 2025-11-25 17:36:50.407298555 +0000 UTC m=+0.054750722 container create f64edb45abd2eff220801deee1346ba56c46a5818be45fe96d4d2de960e5e3ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 17:36:50 compute-0 systemd[1]: Started libpod-conmon-f64edb45abd2eff220801deee1346ba56c46a5818be45fe96d4d2de960e5e3ba.scope.
Nov 25 17:36:50 compute-0 podman[439590]: 2025-11-25 17:36:50.374505392 +0000 UTC m=+0.021957539 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:36:50 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:36:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35b2ad55b82d80f47ad783b281a6e99df12b5b0c1e28be6c79593a853953c51a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:36:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35b2ad55b82d80f47ad783b281a6e99df12b5b0c1e28be6c79593a853953c51a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:36:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35b2ad55b82d80f47ad783b281a6e99df12b5b0c1e28be6c79593a853953c51a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:36:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35b2ad55b82d80f47ad783b281a6e99df12b5b0c1e28be6c79593a853953c51a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:36:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35b2ad55b82d80f47ad783b281a6e99df12b5b0c1e28be6c79593a853953c51a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:36:50 compute-0 podman[439590]: 2025-11-25 17:36:50.509678292 +0000 UTC m=+0.157130439 container init f64edb45abd2eff220801deee1346ba56c46a5818be45fe96d4d2de960e5e3ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_grothendieck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 17:36:50 compute-0 podman[439590]: 2025-11-25 17:36:50.523676653 +0000 UTC m=+0.171128800 container start f64edb45abd2eff220801deee1346ba56c46a5818be45fe96d4d2de960e5e3ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:36:50 compute-0 podman[439590]: 2025-11-25 17:36:50.526851789 +0000 UTC m=+0.174303936 container attach f64edb45abd2eff220801deee1346ba56c46a5818be45fe96d4d2de960e5e3ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_grothendieck, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:36:50 compute-0 nova_compute[254092]: 2025-11-25 17:36:50.673 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:36:50 compute-0 ceph-mon[74985]: pgmap v3421: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:36:51 compute-0 elastic_grothendieck[439607]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:36:51 compute-0 elastic_grothendieck[439607]: --> relative data size: 1.0
Nov 25 17:36:51 compute-0 elastic_grothendieck[439607]: --> All data devices are unavailable
Nov 25 17:36:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3422: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:36:51 compute-0 podman[439590]: 2025-11-25 17:36:51.600522969 +0000 UTC m=+1.247975096 container died f64edb45abd2eff220801deee1346ba56c46a5818be45fe96d4d2de960e5e3ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:36:51 compute-0 systemd[1]: libpod-f64edb45abd2eff220801deee1346ba56c46a5818be45fe96d4d2de960e5e3ba.scope: Deactivated successfully.
Nov 25 17:36:51 compute-0 systemd[1]: libpod-f64edb45abd2eff220801deee1346ba56c46a5818be45fe96d4d2de960e5e3ba.scope: Consumed 1.014s CPU time.
Nov 25 17:36:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-35b2ad55b82d80f47ad783b281a6e99df12b5b0c1e28be6c79593a853953c51a-merged.mount: Deactivated successfully.
Nov 25 17:36:51 compute-0 podman[439590]: 2025-11-25 17:36:51.675545411 +0000 UTC m=+1.322997568 container remove f64edb45abd2eff220801deee1346ba56c46a5818be45fe96d4d2de960e5e3ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 17:36:51 compute-0 systemd[1]: libpod-conmon-f64edb45abd2eff220801deee1346ba56c46a5818be45fe96d4d2de960e5e3ba.scope: Deactivated successfully.
Nov 25 17:36:51 compute-0 sudo[439482]: pam_unix(sudo:session): session closed for user root
Nov 25 17:36:51 compute-0 sudo[439647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:36:51 compute-0 sudo[439647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:36:51 compute-0 sudo[439647]: pam_unix(sudo:session): session closed for user root
Nov 25 17:36:51 compute-0 ceph-mon[74985]: pgmap v3422: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:36:51 compute-0 sudo[439672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:36:51 compute-0 sudo[439672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:36:51 compute-0 sudo[439672]: pam_unix(sudo:session): session closed for user root
Nov 25 17:36:51 compute-0 sudo[439697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:36:51 compute-0 sudo[439697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:36:51 compute-0 sudo[439697]: pam_unix(sudo:session): session closed for user root
Nov 25 17:36:52 compute-0 sudo[439722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:36:52 compute-0 sudo[439722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:36:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:36:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:36:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:36:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:36:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 17:36:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:36:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:36:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:36:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:36:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:36:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 17:36:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:36:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:36:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:36:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:36:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:36:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:36:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:36:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:36:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:36:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:36:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:36:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:36:52 compute-0 podman[439789]: 2025-11-25 17:36:52.540693933 +0000 UTC m=+0.060367494 container create d030ce6d23d3463717221e7f4da409271334dd82840dd38a3dbccb900db5f990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_diffie, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 17:36:52 compute-0 systemd[1]: Started libpod-conmon-d030ce6d23d3463717221e7f4da409271334dd82840dd38a3dbccb900db5f990.scope.
Nov 25 17:36:52 compute-0 podman[439789]: 2025-11-25 17:36:52.511514899 +0000 UTC m=+0.031188550 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:36:52 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:36:52 compute-0 podman[439789]: 2025-11-25 17:36:52.643842391 +0000 UTC m=+0.163515992 container init d030ce6d23d3463717221e7f4da409271334dd82840dd38a3dbccb900db5f990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:36:52 compute-0 podman[439789]: 2025-11-25 17:36:52.653374591 +0000 UTC m=+0.173048192 container start d030ce6d23d3463717221e7f4da409271334dd82840dd38a3dbccb900db5f990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_diffie, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 17:36:52 compute-0 podman[439789]: 2025-11-25 17:36:52.657330739 +0000 UTC m=+0.177004330 container attach d030ce6d23d3463717221e7f4da409271334dd82840dd38a3dbccb900db5f990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_diffie, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:36:52 compute-0 epic_diffie[439806]: 167 167
Nov 25 17:36:52 compute-0 systemd[1]: libpod-d030ce6d23d3463717221e7f4da409271334dd82840dd38a3dbccb900db5f990.scope: Deactivated successfully.
Nov 25 17:36:52 compute-0 podman[439789]: 2025-11-25 17:36:52.659552949 +0000 UTC m=+0.179226510 container died d030ce6d23d3463717221e7f4da409271334dd82840dd38a3dbccb900db5f990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_diffie, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 17:36:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd84b395dbfb62b8cd6367f7cb09d59329a1e438140da89d568e1603bb8bbd06-merged.mount: Deactivated successfully.
Nov 25 17:36:52 compute-0 podman[439789]: 2025-11-25 17:36:52.706467207 +0000 UTC m=+0.226140768 container remove d030ce6d23d3463717221e7f4da409271334dd82840dd38a3dbccb900db5f990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 17:36:52 compute-0 systemd[1]: libpod-conmon-d030ce6d23d3463717221e7f4da409271334dd82840dd38a3dbccb900db5f990.scope: Deactivated successfully.
Nov 25 17:36:52 compute-0 podman[439828]: 2025-11-25 17:36:52.929542549 +0000 UTC m=+0.059090490 container create a863de7200d0b5f0b5109ef5f1e5a30b3c4cc593683949165bae9ce22011a1f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_rubin, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:36:52 compute-0 systemd[1]: Started libpod-conmon-a863de7200d0b5f0b5109ef5f1e5a30b3c4cc593683949165bae9ce22011a1f9.scope.
Nov 25 17:36:53 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:36:53 compute-0 podman[439828]: 2025-11-25 17:36:52.91231868 +0000 UTC m=+0.041866641 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:36:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46e9a3a069da839e2c53e57f697ff7ce9de3a8b4c1cbe15d265fca8208537b36/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:36:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46e9a3a069da839e2c53e57f697ff7ce9de3a8b4c1cbe15d265fca8208537b36/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:36:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46e9a3a069da839e2c53e57f697ff7ce9de3a8b4c1cbe15d265fca8208537b36/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:36:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46e9a3a069da839e2c53e57f697ff7ce9de3a8b4c1cbe15d265fca8208537b36/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:36:53 compute-0 podman[439828]: 2025-11-25 17:36:53.017822263 +0000 UTC m=+0.147370214 container init a863de7200d0b5f0b5109ef5f1e5a30b3c4cc593683949165bae9ce22011a1f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_rubin, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 17:36:53 compute-0 podman[439828]: 2025-11-25 17:36:53.030444666 +0000 UTC m=+0.159992607 container start a863de7200d0b5f0b5109ef5f1e5a30b3c4cc593683949165bae9ce22011a1f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 17:36:53 compute-0 podman[439828]: 2025-11-25 17:36:53.034166257 +0000 UTC m=+0.163714248 container attach a863de7200d0b5f0b5109ef5f1e5a30b3c4cc593683949165bae9ce22011a1f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 17:36:53 compute-0 nova_compute[254092]: 2025-11-25 17:36:53.553 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:36:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3423: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:36:53 compute-0 elated_rubin[439844]: {
Nov 25 17:36:53 compute-0 elated_rubin[439844]:     "0": [
Nov 25 17:36:53 compute-0 elated_rubin[439844]:         {
Nov 25 17:36:53 compute-0 elated_rubin[439844]:             "devices": [
Nov 25 17:36:53 compute-0 elated_rubin[439844]:                 "/dev/loop3"
Nov 25 17:36:53 compute-0 elated_rubin[439844]:             ],
Nov 25 17:36:53 compute-0 elated_rubin[439844]:             "lv_name": "ceph_lv0",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:             "lv_size": "21470642176",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:             "name": "ceph_lv0",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:             "tags": {
Nov 25 17:36:53 compute-0 elated_rubin[439844]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:                 "ceph.cluster_name": "ceph",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:                 "ceph.crush_device_class": "",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:                 "ceph.encrypted": "0",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:                 "ceph.osd_id": "0",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:                 "ceph.type": "block",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:                 "ceph.vdo": "0"
Nov 25 17:36:53 compute-0 elated_rubin[439844]:             },
Nov 25 17:36:53 compute-0 elated_rubin[439844]:             "type": "block",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:             "vg_name": "ceph_vg0"
Nov 25 17:36:53 compute-0 elated_rubin[439844]:         }
Nov 25 17:36:53 compute-0 elated_rubin[439844]:     ],
Nov 25 17:36:53 compute-0 elated_rubin[439844]:     "1": [
Nov 25 17:36:53 compute-0 elated_rubin[439844]:         {
Nov 25 17:36:53 compute-0 elated_rubin[439844]:             "devices": [
Nov 25 17:36:53 compute-0 elated_rubin[439844]:                 "/dev/loop4"
Nov 25 17:36:53 compute-0 elated_rubin[439844]:             ],
Nov 25 17:36:53 compute-0 elated_rubin[439844]:             "lv_name": "ceph_lv1",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:             "lv_size": "21470642176",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:             "name": "ceph_lv1",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:             "tags": {
Nov 25 17:36:53 compute-0 elated_rubin[439844]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:                 "ceph.cluster_name": "ceph",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:                 "ceph.crush_device_class": "",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:                 "ceph.encrypted": "0",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:                 "ceph.osd_id": "1",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:                 "ceph.type": "block",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:                 "ceph.vdo": "0"
Nov 25 17:36:53 compute-0 elated_rubin[439844]:             },
Nov 25 17:36:53 compute-0 elated_rubin[439844]:             "type": "block",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:             "vg_name": "ceph_vg1"
Nov 25 17:36:53 compute-0 elated_rubin[439844]:         }
Nov 25 17:36:53 compute-0 elated_rubin[439844]:     ],
Nov 25 17:36:53 compute-0 elated_rubin[439844]:     "2": [
Nov 25 17:36:53 compute-0 elated_rubin[439844]:         {
Nov 25 17:36:53 compute-0 elated_rubin[439844]:             "devices": [
Nov 25 17:36:53 compute-0 elated_rubin[439844]:                 "/dev/loop5"
Nov 25 17:36:53 compute-0 elated_rubin[439844]:             ],
Nov 25 17:36:53 compute-0 elated_rubin[439844]:             "lv_name": "ceph_lv2",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:             "lv_size": "21470642176",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:             "name": "ceph_lv2",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:             "tags": {
Nov 25 17:36:53 compute-0 elated_rubin[439844]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:                 "ceph.cluster_name": "ceph",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:                 "ceph.crush_device_class": "",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:                 "ceph.encrypted": "0",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:                 "ceph.osd_id": "2",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:                 "ceph.type": "block",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:                 "ceph.vdo": "0"
Nov 25 17:36:53 compute-0 elated_rubin[439844]:             },
Nov 25 17:36:53 compute-0 elated_rubin[439844]:             "type": "block",
Nov 25 17:36:53 compute-0 elated_rubin[439844]:             "vg_name": "ceph_vg2"
Nov 25 17:36:53 compute-0 elated_rubin[439844]:         }
Nov 25 17:36:53 compute-0 elated_rubin[439844]:     ]
Nov 25 17:36:53 compute-0 elated_rubin[439844]: }
Nov 25 17:36:53 compute-0 systemd[1]: libpod-a863de7200d0b5f0b5109ef5f1e5a30b3c4cc593683949165bae9ce22011a1f9.scope: Deactivated successfully.
Nov 25 17:36:53 compute-0 podman[439828]: 2025-11-25 17:36:53.868357817 +0000 UTC m=+0.997905788 container died a863de7200d0b5f0b5109ef5f1e5a30b3c4cc593683949165bae9ce22011a1f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_rubin, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 17:36:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-46e9a3a069da839e2c53e57f697ff7ce9de3a8b4c1cbe15d265fca8208537b36-merged.mount: Deactivated successfully.
Nov 25 17:36:53 compute-0 podman[439828]: 2025-11-25 17:36:53.941228371 +0000 UTC m=+1.070776342 container remove a863de7200d0b5f0b5109ef5f1e5a30b3c4cc593683949165bae9ce22011a1f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_rubin, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 17:36:53 compute-0 systemd[1]: libpod-conmon-a863de7200d0b5f0b5109ef5f1e5a30b3c4cc593683949165bae9ce22011a1f9.scope: Deactivated successfully.
Nov 25 17:36:54 compute-0 sudo[439722]: pam_unix(sudo:session): session closed for user root
Nov 25 17:36:54 compute-0 sudo[439864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:36:54 compute-0 sudo[439864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:36:54 compute-0 sudo[439864]: pam_unix(sudo:session): session closed for user root
Nov 25 17:36:54 compute-0 sudo[439889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:36:54 compute-0 sudo[439889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:36:54 compute-0 sudo[439889]: pam_unix(sudo:session): session closed for user root
Nov 25 17:36:54 compute-0 sudo[439914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:36:54 compute-0 sudo[439914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:36:54 compute-0 sudo[439914]: pam_unix(sudo:session): session closed for user root
Nov 25 17:36:54 compute-0 sudo[439939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:36:54 compute-0 sudo[439939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:36:54 compute-0 ceph-mon[74985]: pgmap v3423: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:36:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:36:54 compute-0 podman[440006]: 2025-11-25 17:36:54.847144434 +0000 UTC m=+0.061844114 container create 365cebd120d83943357c56cf80129b5b655fa46034f1c2b139a5cc2c8d1f79c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Nov 25 17:36:54 compute-0 systemd[1]: Started libpod-conmon-365cebd120d83943357c56cf80129b5b655fa46034f1c2b139a5cc2c8d1f79c4.scope.
Nov 25 17:36:54 compute-0 podman[440006]: 2025-11-25 17:36:54.818727971 +0000 UTC m=+0.033427701 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:36:54 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:36:54 compute-0 podman[440006]: 2025-11-25 17:36:54.956289785 +0000 UTC m=+0.170989535 container init 365cebd120d83943357c56cf80129b5b655fa46034f1c2b139a5cc2c8d1f79c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 17:36:54 compute-0 podman[440006]: 2025-11-25 17:36:54.969561356 +0000 UTC m=+0.184261016 container start 365cebd120d83943357c56cf80129b5b655fa46034f1c2b139a5cc2c8d1f79c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 17:36:54 compute-0 podman[440006]: 2025-11-25 17:36:54.974129651 +0000 UTC m=+0.188829351 container attach 365cebd120d83943357c56cf80129b5b655fa46034f1c2b139a5cc2c8d1f79c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_pasteur, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 17:36:54 compute-0 elastic_pasteur[440022]: 167 167
Nov 25 17:36:54 compute-0 systemd[1]: libpod-365cebd120d83943357c56cf80129b5b655fa46034f1c2b139a5cc2c8d1f79c4.scope: Deactivated successfully.
Nov 25 17:36:54 compute-0 podman[440006]: 2025-11-25 17:36:54.979278571 +0000 UTC m=+0.193978261 container died 365cebd120d83943357c56cf80129b5b655fa46034f1c2b139a5cc2c8d1f79c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_pasteur, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 17:36:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e4228cc62361bc74e4b1724bb2b5ce0a292480286a85d48516fcea08e2bc883-merged.mount: Deactivated successfully.
Nov 25 17:36:55 compute-0 podman[440006]: 2025-11-25 17:36:55.033189118 +0000 UTC m=+0.247888808 container remove 365cebd120d83943357c56cf80129b5b655fa46034f1c2b139a5cc2c8d1f79c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_pasteur, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:36:55 compute-0 systemd[1]: libpod-conmon-365cebd120d83943357c56cf80129b5b655fa46034f1c2b139a5cc2c8d1f79c4.scope: Deactivated successfully.
Nov 25 17:36:55 compute-0 podman[440048]: 2025-11-25 17:36:55.297541955 +0000 UTC m=+0.078627232 container create f0df43c9239cf68bca7191dd1d120f86af3e0b10e6dc0baf00de73f753d6bf5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:36:55 compute-0 systemd[1]: Started libpod-conmon-f0df43c9239cf68bca7191dd1d120f86af3e0b10e6dc0baf00de73f753d6bf5e.scope.
Nov 25 17:36:55 compute-0 podman[440048]: 2025-11-25 17:36:55.267129948 +0000 UTC m=+0.048215285 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:36:55 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:36:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a85e212de2c44a899545dd335d3a2b07415a2f8e68a5706e0c8637e656d44c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:36:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a85e212de2c44a899545dd335d3a2b07415a2f8e68a5706e0c8637e656d44c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:36:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a85e212de2c44a899545dd335d3a2b07415a2f8e68a5706e0c8637e656d44c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:36:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a85e212de2c44a899545dd335d3a2b07415a2f8e68a5706e0c8637e656d44c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:36:55 compute-0 podman[440048]: 2025-11-25 17:36:55.391119123 +0000 UTC m=+0.172204390 container init f0df43c9239cf68bca7191dd1d120f86af3e0b10e6dc0baf00de73f753d6bf5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:36:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:36:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4153111508' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:36:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:36:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4153111508' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:36:55 compute-0 podman[440048]: 2025-11-25 17:36:55.408419844 +0000 UTC m=+0.189505131 container start f0df43c9239cf68bca7191dd1d120f86af3e0b10e6dc0baf00de73f753d6bf5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 17:36:55 compute-0 podman[440048]: 2025-11-25 17:36:55.412548206 +0000 UTC m=+0.193633493 container attach f0df43c9239cf68bca7191dd1d120f86af3e0b10e6dc0baf00de73f753d6bf5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 17:36:55 compute-0 nova_compute[254092]: 2025-11-25 17:36:55.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:36:55 compute-0 nova_compute[254092]: 2025-11-25 17:36:55.523 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:36:55 compute-0 nova_compute[254092]: 2025-11-25 17:36:55.524 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:36:55 compute-0 nova_compute[254092]: 2025-11-25 17:36:55.524 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:36:55 compute-0 nova_compute[254092]: 2025-11-25 17:36:55.524 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:36:55 compute-0 nova_compute[254092]: 2025-11-25 17:36:55.524 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:36:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3424: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:36:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/4153111508' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:36:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/4153111508' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:36:55 compute-0 nova_compute[254092]: 2025-11-25 17:36:55.677 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:36:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:36:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/423538353' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:36:55 compute-0 nova_compute[254092]: 2025-11-25 17:36:55.974 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:36:56 compute-0 nova_compute[254092]: 2025-11-25 17:36:56.124 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:36:56 compute-0 nova_compute[254092]: 2025-11-25 17:36:56.125 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3588MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:36:56 compute-0 nova_compute[254092]: 2025-11-25 17:36:56.125 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:36:56 compute-0 nova_compute[254092]: 2025-11-25 17:36:56.125 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:36:56 compute-0 nova_compute[254092]: 2025-11-25 17:36:56.194 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:36:56 compute-0 nova_compute[254092]: 2025-11-25 17:36:56.194 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:36:56 compute-0 nova_compute[254092]: 2025-11-25 17:36:56.213 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:36:56 compute-0 elastic_haslett[440065]: {
Nov 25 17:36:56 compute-0 elastic_haslett[440065]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:36:56 compute-0 elastic_haslett[440065]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:36:56 compute-0 elastic_haslett[440065]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:36:56 compute-0 elastic_haslett[440065]:         "osd_id": 1,
Nov 25 17:36:56 compute-0 elastic_haslett[440065]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:36:56 compute-0 elastic_haslett[440065]:         "type": "bluestore"
Nov 25 17:36:56 compute-0 elastic_haslett[440065]:     },
Nov 25 17:36:56 compute-0 elastic_haslett[440065]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:36:56 compute-0 elastic_haslett[440065]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:36:56 compute-0 elastic_haslett[440065]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:36:56 compute-0 elastic_haslett[440065]:         "osd_id": 2,
Nov 25 17:36:56 compute-0 elastic_haslett[440065]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:36:56 compute-0 elastic_haslett[440065]:         "type": "bluestore"
Nov 25 17:36:56 compute-0 elastic_haslett[440065]:     },
Nov 25 17:36:56 compute-0 elastic_haslett[440065]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:36:56 compute-0 elastic_haslett[440065]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:36:56 compute-0 elastic_haslett[440065]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:36:56 compute-0 elastic_haslett[440065]:         "osd_id": 0,
Nov 25 17:36:56 compute-0 elastic_haslett[440065]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:36:56 compute-0 elastic_haslett[440065]:         "type": "bluestore"
Nov 25 17:36:56 compute-0 elastic_haslett[440065]:     }
Nov 25 17:36:56 compute-0 elastic_haslett[440065]: }
Nov 25 17:36:56 compute-0 systemd[1]: libpod-f0df43c9239cf68bca7191dd1d120f86af3e0b10e6dc0baf00de73f753d6bf5e.scope: Deactivated successfully.
Nov 25 17:36:56 compute-0 systemd[1]: libpod-f0df43c9239cf68bca7191dd1d120f86af3e0b10e6dc0baf00de73f753d6bf5e.scope: Consumed 1.144s CPU time.
Nov 25 17:36:56 compute-0 podman[440140]: 2025-11-25 17:36:56.605998987 +0000 UTC m=+0.031390837 container died f0df43c9239cf68bca7191dd1d120f86af3e0b10e6dc0baf00de73f753d6bf5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 17:36:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a85e212de2c44a899545dd335d3a2b07415a2f8e68a5706e0c8637e656d44c0-merged.mount: Deactivated successfully.
Nov 25 17:36:56 compute-0 podman[440140]: 2025-11-25 17:36:56.655678668 +0000 UTC m=+0.081070518 container remove f0df43c9239cf68bca7191dd1d120f86af3e0b10e6dc0baf00de73f753d6bf5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_haslett, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 17:36:56 compute-0 ceph-mon[74985]: pgmap v3424: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:36:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/423538353' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:36:56 compute-0 systemd[1]: libpod-conmon-f0df43c9239cf68bca7191dd1d120f86af3e0b10e6dc0baf00de73f753d6bf5e.scope: Deactivated successfully.
Nov 25 17:36:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:36:56 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1796075242' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:36:56 compute-0 nova_compute[254092]: 2025-11-25 17:36:56.690 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:36:56 compute-0 sudo[439939]: pam_unix(sudo:session): session closed for user root
Nov 25 17:36:56 compute-0 nova_compute[254092]: 2025-11-25 17:36:56.698 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:36:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:36:56 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:36:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:36:56 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:36:56 compute-0 nova_compute[254092]: 2025-11-25 17:36:56.720 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:36:56 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev c616c376-5dbb-4aca-89d4-91ed880c745f does not exist
Nov 25 17:36:56 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev d3f7b4c3-67dc-4aed-9ebe-4230a3fae304 does not exist
Nov 25 17:36:56 compute-0 nova_compute[254092]: 2025-11-25 17:36:56.722 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:36:56 compute-0 nova_compute[254092]: 2025-11-25 17:36:56.722 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:36:56 compute-0 sudo[440157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:36:56 compute-0 sudo[440157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:36:56 compute-0 sudo[440157]: pam_unix(sudo:session): session closed for user root
Nov 25 17:36:56 compute-0 sudo[440182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:36:56 compute-0 sudo[440182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:36:56 compute-0 sudo[440182]: pam_unix(sudo:session): session closed for user root
Nov 25 17:36:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3425: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:36:57 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1796075242' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:36:57 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:36:57 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:36:58 compute-0 nova_compute[254092]: 2025-11-25 17:36:58.554 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:36:58 compute-0 ceph-mon[74985]: pgmap v3425: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:36:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3426: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:36:59 compute-0 nova_compute[254092]: 2025-11-25 17:36:59.724 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:36:59 compute-0 nova_compute[254092]: 2025-11-25 17:36:59.725 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:36:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:37:00 compute-0 nova_compute[254092]: 2025-11-25 17:37:00.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:37:00 compute-0 nova_compute[254092]: 2025-11-25 17:37:00.678 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:37:00 compute-0 ceph-mon[74985]: pgmap v3426: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3427: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:02 compute-0 ceph-mon[74985]: pgmap v3427: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:03 compute-0 nova_compute[254092]: 2025-11-25 17:37:03.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:37:03 compute-0 nova_compute[254092]: 2025-11-25 17:37:03.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:37:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3428: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:03 compute-0 nova_compute[254092]: 2025-11-25 17:37:03.596 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:37:04 compute-0 ceph-mon[74985]: pgmap v3428: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:37:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3429: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:05 compute-0 podman[440208]: 2025-11-25 17:37:05.665881089 +0000 UTC m=+0.071960840 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 17:37:05 compute-0 nova_compute[254092]: 2025-11-25 17:37:05.679 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:37:05 compute-0 podman[440207]: 2025-11-25 17:37:05.687656931 +0000 UTC m=+0.102862200 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=multipathd)
Nov 25 17:37:05 compute-0 podman[440209]: 2025-11-25 17:37:05.690685074 +0000 UTC m=+0.103429936 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 25 17:37:06 compute-0 nova_compute[254092]: 2025-11-25 17:37:06.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:37:06 compute-0 nova_compute[254092]: 2025-11-25 17:37:06.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:37:06 compute-0 nova_compute[254092]: 2025-11-25 17:37:06.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:37:06 compute-0 nova_compute[254092]: 2025-11-25 17:37:06.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:37:06 compute-0 nova_compute[254092]: 2025-11-25 17:37:06.518 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 17:37:06 compute-0 nova_compute[254092]: 2025-11-25 17:37:06.519 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:37:06 compute-0 ceph-mon[74985]: pgmap v3429: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3430: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:08 compute-0 nova_compute[254092]: 2025-11-25 17:37:08.627 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:37:08 compute-0 ceph-mon[74985]: pgmap v3430: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3431: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:37:09 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #165. Immutable memtables: 0.
Nov 25 17:37:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:37:09.762949) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 17:37:09 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 165
Nov 25 17:37:09 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092229763019, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 705, "num_deletes": 255, "total_data_size": 867604, "memory_usage": 881704, "flush_reason": "Manual Compaction"}
Nov 25 17:37:09 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #166: started
Nov 25 17:37:09 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092229770779, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 166, "file_size": 848800, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70570, "largest_seqno": 71274, "table_properties": {"data_size": 845114, "index_size": 1529, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8141, "raw_average_key_size": 18, "raw_value_size": 837709, "raw_average_value_size": 1939, "num_data_blocks": 68, "num_entries": 432, "num_filter_entries": 432, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764092175, "oldest_key_time": 1764092175, "file_creation_time": 1764092229, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 166, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:37:09 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 7842 microseconds, and 2947 cpu microseconds.
Nov 25 17:37:09 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:37:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:37:09.770808) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #166: 848800 bytes OK
Nov 25 17:37:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:37:09.770822) [db/memtable_list.cc:519] [default] Level-0 commit table #166 started
Nov 25 17:37:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:37:09.772187) [db/memtable_list.cc:722] [default] Level-0 commit table #166: memtable #1 done
Nov 25 17:37:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:37:09.772200) EVENT_LOG_v1 {"time_micros": 1764092229772195, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 17:37:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:37:09.772213) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 17:37:09 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 863926, prev total WAL file size 890414, number of live WAL files 2.
Nov 25 17:37:09 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000162.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:37:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:37:09.772809) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303137' seq:72057594037927935, type:22 .. '6C6F676D0033323638' seq:0, type:0; will stop at (end)
Nov 25 17:37:09 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 17:37:09 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [166(828KB)], [164(9434KB)]
Nov 25 17:37:09 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092229772850, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [166], "files_L6": [164], "score": -1, "input_data_size": 10509807, "oldest_snapshot_seqno": -1}
Nov 25 17:37:09 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #167: 8765 keys, 10396532 bytes, temperature: kUnknown
Nov 25 17:37:09 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092229840275, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 167, "file_size": 10396532, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10341020, "index_size": 32487, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21957, "raw_key_size": 231420, "raw_average_key_size": 26, "raw_value_size": 10187550, "raw_average_value_size": 1162, "num_data_blocks": 1256, "num_entries": 8765, "num_filter_entries": 8765, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764092229, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:37:09 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:37:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:37:09.840851) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 10396532 bytes
Nov 25 17:37:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:37:09.843687) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.5 rd, 153.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 9.2 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(24.6) write-amplify(12.2) OK, records in: 9286, records dropped: 521 output_compression: NoCompression
Nov 25 17:37:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:37:09.843728) EVENT_LOG_v1 {"time_micros": 1764092229843710, "job": 102, "event": "compaction_finished", "compaction_time_micros": 67573, "compaction_time_cpu_micros": 36972, "output_level": 6, "num_output_files": 1, "total_output_size": 10396532, "num_input_records": 9286, "num_output_records": 8765, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 17:37:09 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000166.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:37:09 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092229844259, "job": 102, "event": "table_file_deletion", "file_number": 166}
Nov 25 17:37:09 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:37:09 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092229847111, "job": 102, "event": "table_file_deletion", "file_number": 164}
Nov 25 17:37:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:37:09.772721) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:37:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:37:09.847160) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:37:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:37:09.847167) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:37:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:37:09.847169) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:37:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:37:09.847171) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:37:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:37:09.847173) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:37:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:37:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:37:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:37:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:37:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:37:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:37:10 compute-0 nova_compute[254092]: 2025-11-25 17:37:10.681 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:37:10 compute-0 ceph-mon[74985]: pgmap v3431: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3432: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:12 compute-0 ceph-mon[74985]: pgmap v3432: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3433: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:13 compute-0 nova_compute[254092]: 2025-11-25 17:37:13.627 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:37:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:37:13.677 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:37:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:37:13.678 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:37:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:37:13.678 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:37:14 compute-0 nova_compute[254092]: 2025-11-25 17:37:14.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:37:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:37:14 compute-0 ceph-mon[74985]: pgmap v3433: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3434: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:15 compute-0 nova_compute[254092]: 2025-11-25 17:37:15.684 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:37:16 compute-0 ceph-mon[74985]: pgmap v3434: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3435: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:18 compute-0 nova_compute[254092]: 2025-11-25 17:37:18.650 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:37:18 compute-0 ceph-mon[74985]: pgmap v3435: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3436: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:37:20 compute-0 nova_compute[254092]: 2025-11-25 17:37:20.687 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:37:20 compute-0 ceph-mon[74985]: pgmap v3436: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3437: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:22 compute-0 ceph-mon[74985]: pgmap v3437: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3438: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:23 compute-0 nova_compute[254092]: 2025-11-25 17:37:23.678 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:37:23 compute-0 ceph-mon[74985]: pgmap v3438: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:37:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3439: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:25 compute-0 nova_compute[254092]: 2025-11-25 17:37:25.689 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:37:26 compute-0 ceph-mon[74985]: pgmap v3439: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3440: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:27 compute-0 ceph-mon[74985]: pgmap v3440: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:28 compute-0 nova_compute[254092]: 2025-11-25 17:37:28.681 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:37:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3441: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:37:30 compute-0 ceph-mon[74985]: pgmap v3441: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:30 compute-0 nova_compute[254092]: 2025-11-25 17:37:30.691 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:37:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3442: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:32 compute-0 ceph-mon[74985]: pgmap v3442: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3443: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:33 compute-0 nova_compute[254092]: 2025-11-25 17:37:33.682 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:37:34 compute-0 ceph-mon[74985]: pgmap v3443: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:37:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3444: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:35 compute-0 nova_compute[254092]: 2025-11-25 17:37:35.693 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:37:36 compute-0 nova_compute[254092]: 2025-11-25 17:37:36.673 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:37:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:37:36.672 163338 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=69, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '2e:00:2c', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:25:c2:56:d5:a2'}, ipsec=False) old=SB_Global(nb_cfg=68) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 17:37:36 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:37:36.674 163338 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 17:37:36 compute-0 podman[440271]: 2025-11-25 17:37:36.676066647 +0000 UTC m=+0.074221061 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:37:36 compute-0 podman[440272]: 2025-11-25 17:37:36.694033707 +0000 UTC m=+0.087810942 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 25 17:37:36 compute-0 ceph-mon[74985]: pgmap v3444: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:36 compute-0 podman[440273]: 2025-11-25 17:37:36.730937022 +0000 UTC m=+0.121848279 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 17:37:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3445: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:38 compute-0 nova_compute[254092]: 2025-11-25 17:37:38.685 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:37:38 compute-0 ceph-mon[74985]: pgmap v3445: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3446: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:37:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:37:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:37:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:37:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:37:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:37:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:37:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:37:40
Nov 25 17:37:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:37:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:37:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['backups', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'volumes', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'vms', 'images']
Nov 25 17:37:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:37:40 compute-0 sshd-session[440336]: Accepted publickey for zuul from 192.168.122.30 port 54712 ssh2: ECDSA SHA256:9KqzpXmppnMwGwVHF2wOKwwhXNcutlJnRXXU19Lreu4
Nov 25 17:37:40 compute-0 systemd-logind[791]: New session 53 of user zuul.
Nov 25 17:37:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:37:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:37:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:37:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:37:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:37:40 compute-0 systemd[1]: Started Session 53 of User zuul.
Nov 25 17:37:40 compute-0 nova_compute[254092]: 2025-11-25 17:37:40.695 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:37:40 compute-0 sshd-session[440336]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 17:37:40 compute-0 ceph-mon[74985]: pgmap v3446: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:37:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:37:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:37:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:37:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:37:40 compute-0 ceph-mgr[75280]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1812945073
Nov 25 17:37:41 compute-0 sudo[440432]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/test -f /var/podman_client_access_setup
Nov 25 17:37:41 compute-0 sudo[440432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 17:37:41 compute-0 sudo[440432]: pam_unix(sudo:session): session closed for user root
Nov 25 17:37:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3447: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:41 compute-0 sudo[440458]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/groupadd -f podman
Nov 25 17:37:41 compute-0 sudo[440458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 17:37:41 compute-0 groupadd[440460]: group added to /etc/group: name=podman, GID=42479
Nov 25 17:37:41 compute-0 groupadd[440460]: group added to /etc/gshadow: name=podman
Nov 25 17:37:41 compute-0 groupadd[440460]: new group: name=podman, GID=42479
Nov 25 17:37:41 compute-0 sudo[440458]: pam_unix(sudo:session): session closed for user root
Nov 25 17:37:41 compute-0 ceph-mon[74985]: pgmap v3447: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:41 compute-0 sudo[440466]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/usermod -a -G podman zuul
Nov 25 17:37:41 compute-0 sudo[440466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 17:37:42 compute-0 usermod[440468]: add 'zuul' to group 'podman'
Nov 25 17:37:42 compute-0 usermod[440468]: add 'zuul' to shadow group 'podman'
Nov 25 17:37:42 compute-0 sudo[440466]: pam_unix(sudo:session): session closed for user root
Nov 25 17:37:42 compute-0 sudo[440475]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod -R o=wxr /etc/tmpfiles.d
Nov 25 17:37:42 compute-0 sudo[440475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 17:37:42 compute-0 sudo[440475]: pam_unix(sudo:session): session closed for user root
Nov 25 17:37:42 compute-0 sudo[440478]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/echo 'd /run/podman 0770 root zuul'
Nov 25 17:37:42 compute-0 sudo[440478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 17:37:42 compute-0 sudo[440478]: pam_unix(sudo:session): session closed for user root
Nov 25 17:37:42 compute-0 sudo[440481]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cp /lib/systemd/system/podman.socket /etc/systemd/system/podman.socket
Nov 25 17:37:42 compute-0 sudo[440481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 17:37:42 compute-0 sudo[440481]: pam_unix(sudo:session): session closed for user root
Nov 25 17:37:42 compute-0 sudo[440484]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/crudini --set /etc/systemd/system/podman.socket Socket SocketMode 0660
Nov 25 17:37:42 compute-0 sudo[440484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 17:37:42 compute-0 sudo[440484]: pam_unix(sudo:session): session closed for user root
Nov 25 17:37:42 compute-0 sudo[440487]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/crudini --set /etc/systemd/system/podman.socket Socket SocketGroup podman
Nov 25 17:37:42 compute-0 sudo[440487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 17:37:42 compute-0 sudo[440487]: pam_unix(sudo:session): session closed for user root
Nov 25 17:37:42 compute-0 sudo[440490]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl daemon-reload
Nov 25 17:37:42 compute-0 sudo[440490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 17:37:42 compute-0 systemd[1]: Reloading.
Nov 25 17:37:42 compute-0 systemd-rc-local-generator[440518]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 17:37:42 compute-0 systemd-sysv-generator[440524]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 17:37:43 compute-0 sudo[440490]: pam_unix(sudo:session): session closed for user root
Nov 25 17:37:43 compute-0 sudo[440528]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemd-tmpfiles --create
Nov 25 17:37:43 compute-0 sudo[440528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 17:37:43 compute-0 sudo[440528]: pam_unix(sudo:session): session closed for user root
Nov 25 17:37:43 compute-0 sudo[440531]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl enable --now podman.socket
Nov 25 17:37:43 compute-0 sudo[440531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 17:37:43 compute-0 systemd[1]: Reloading.
Nov 25 17:37:43 compute-0 systemd-sysv-generator[440564]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 17:37:43 compute-0 systemd-rc-local-generator[440561]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 17:37:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3448: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:43 compute-0 nova_compute[254092]: 2025-11-25 17:37:43.688 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:37:43 compute-0 systemd[1]: Starting Podman API Socket...
Nov 25 17:37:43 compute-0 systemd[1]: Listening on Podman API Socket.
Nov 25 17:37:43 compute-0 sudo[440531]: pam_unix(sudo:session): session closed for user root
Nov 25 17:37:43 compute-0 sudo[440570]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod 777 /run/podman
Nov 25 17:37:43 compute-0 sudo[440570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 17:37:43 compute-0 sudo[440570]: pam_unix(sudo:session): session closed for user root
Nov 25 17:37:43 compute-0 sudo[440573]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chown -R root: /run/podman
Nov 25 17:37:43 compute-0 sudo[440573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 17:37:43 compute-0 sudo[440573]: pam_unix(sudo:session): session closed for user root
Nov 25 17:37:43 compute-0 sudo[440576]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod g+rw /run/podman/podman.sock
Nov 25 17:37:43 compute-0 sudo[440576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 17:37:43 compute-0 sudo[440576]: pam_unix(sudo:session): session closed for user root
Nov 25 17:37:43 compute-0 sudo[440579]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod 777 /run/podman/podman.sock
Nov 25 17:37:43 compute-0 sudo[440579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 17:37:43 compute-0 sudo[440579]: pam_unix(sudo:session): session closed for user root
Nov 25 17:37:43 compute-0 sudo[440582]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/setenforce 0
Nov 25 17:37:43 compute-0 sudo[440582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 17:37:43 compute-0 sudo[440582]: pam_unix(sudo:session): session closed for user root
Nov 25 17:37:44 compute-0 sudo[440585]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl restart podman.socket
Nov 25 17:37:44 compute-0 dbus-broker-launch[775]: avc:  op=setenforce lsm=selinux enforcing=0 res=1
Nov 25 17:37:44 compute-0 sudo[440585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 17:37:44 compute-0 systemd[1]: podman.socket: Deactivated successfully.
Nov 25 17:37:44 compute-0 systemd[1]: Closed Podman API Socket.
Nov 25 17:37:44 compute-0 systemd[1]: Stopping Podman API Socket...
Nov 25 17:37:44 compute-0 systemd[1]: Starting Podman API Socket...
Nov 25 17:37:44 compute-0 systemd[1]: Listening on Podman API Socket.
Nov 25 17:37:44 compute-0 sudo[440585]: pam_unix(sudo:session): session closed for user root
Nov 25 17:37:44 compute-0 sudo[440435]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/touch /var/podman_client_access_setup
Nov 25 17:37:44 compute-0 sudo[440435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 17:37:44 compute-0 sudo[440435]: pam_unix(sudo:session): session closed for user root
Nov 25 17:37:44 compute-0 sshd-session[440591]: Accepted publickey for zuul from 192.168.122.30 port 56624 ssh2: ECDSA SHA256:9KqzpXmppnMwGwVHF2wOKwwhXNcutlJnRXXU19Lreu4
Nov 25 17:37:44 compute-0 systemd-logind[791]: New session 54 of user zuul.
Nov 25 17:37:44 compute-0 systemd[1]: Started Session 54 of User zuul.
Nov 25 17:37:44 compute-0 sshd-session[440591]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 17:37:44 compute-0 systemd[1]: Starting Podman API Service...
Nov 25 17:37:44 compute-0 systemd[1]: Started Podman API Service.
Nov 25 17:37:44 compute-0 podman[440595]: time="2025-11-25T17:37:44Z" level=info msg="/usr/bin/podman filtering at log level info"
Nov 25 17:37:44 compute-0 podman[440595]: time="2025-11-25T17:37:44Z" level=info msg="Setting parallel job count to 25"
Nov 25 17:37:44 compute-0 podman[440595]: time="2025-11-25T17:37:44Z" level=info msg="Using sqlite as database backend"
Nov 25 17:37:44 compute-0 podman[440595]: time="2025-11-25T17:37:44Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Nov 25 17:37:44 compute-0 podman[440595]: time="2025-11-25T17:37:44Z" level=info msg="Using systemd socket activation to determine API endpoint"
Nov 25 17:37:44 compute-0 podman[440595]: time="2025-11-25T17:37:44Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Nov 25 17:37:44 compute-0 podman[440595]: @ - - [25/Nov/2025:17:37:44 +0000] "HEAD /v4.7.0/libpod/_ping HTTP/1.1" 200 0 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Nov 25 17:37:44 compute-0 podman[440595]: @ - - [25/Nov/2025:17:37:44 +0000] "GET /v4.7.0/libpod/containers/json HTTP/1.1" 200 24899 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Nov 25 17:37:44 compute-0 ceph-mon[74985]: pgmap v3448: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:37:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3449: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:45 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:37:45.676 163338 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=124ba75b-cc77-4d76-b66e-16ed1bbb2181, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '69'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 17:37:45 compute-0 nova_compute[254092]: 2025-11-25 17:37:45.697 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:37:46 compute-0 ceph-mon[74985]: pgmap v3449: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3450: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:48 compute-0 ceph-mon[74985]: pgmap v3450: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:48 compute-0 nova_compute[254092]: 2025-11-25 17:37:48.689 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:37:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3451: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:37:50 compute-0 ceph-mon[74985]: pgmap v3451: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:50 compute-0 nova_compute[254092]: 2025-11-25 17:37:50.698 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:37:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3452: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:37:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:37:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:37:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:37:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 17:37:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:37:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:37:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:37:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:37:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:37:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 17:37:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:37:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:37:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:37:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:37:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:37:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:37:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:37:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:37:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:37:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:37:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:37:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:37:52 compute-0 ceph-mon[74985]: pgmap v3452: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3453: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:53 compute-0 nova_compute[254092]: 2025-11-25 17:37:53.700 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:37:54 compute-0 ceph-mon[74985]: pgmap v3453: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:37:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:37:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3528567440' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:37:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:37:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3528567440' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:37:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3454: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:55 compute-0 nova_compute[254092]: 2025-11-25 17:37:55.701 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:37:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/3528567440' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:37:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/3528567440' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:37:56 compute-0 ceph-mon[74985]: pgmap v3454: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:57 compute-0 sudo[440609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:37:57 compute-0 sudo[440609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:37:57 compute-0 sudo[440609]: pam_unix(sudo:session): session closed for user root
Nov 25 17:37:57 compute-0 sudo[440634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:37:57 compute-0 sudo[440634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:37:57 compute-0 sudo[440634]: pam_unix(sudo:session): session closed for user root
Nov 25 17:37:57 compute-0 sudo[440659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:37:57 compute-0 sudo[440659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:37:57 compute-0 sudo[440659]: pam_unix(sudo:session): session closed for user root
Nov 25 17:37:57 compute-0 sudo[440684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:37:57 compute-0 sudo[440684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:37:57 compute-0 nova_compute[254092]: 2025-11-25 17:37:57.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:37:57 compute-0 nova_compute[254092]: 2025-11-25 17:37:57.524 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:37:57 compute-0 nova_compute[254092]: 2025-11-25 17:37:57.524 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:37:57 compute-0 nova_compute[254092]: 2025-11-25 17:37:57.524 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:37:57 compute-0 nova_compute[254092]: 2025-11-25 17:37:57.525 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:37:57 compute-0 nova_compute[254092]: 2025-11-25 17:37:57.525 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:37:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3455: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:57 compute-0 sudo[440684]: pam_unix(sudo:session): session closed for user root
Nov 25 17:37:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:37:57 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:37:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:37:57 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:37:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:37:57 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:37:57 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev e22c35af-c143-4887-89ea-b10563f13b84 does not exist
Nov 25 17:37:57 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev fd86b1f6-8764-4f48-8191-c0dc37a75412 does not exist
Nov 25 17:37:57 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev a80a8bd1-72ab-4977-97c4-f68de8672e2e does not exist
Nov 25 17:37:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:37:57 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:37:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:37:57 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:37:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:37:57 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:37:57 compute-0 sudo[440760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:37:57 compute-0 sudo[440760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:37:57 compute-0 sudo[440760]: pam_unix(sudo:session): session closed for user root
Nov 25 17:37:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:37:57 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/7614586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:37:58 compute-0 sudo[440785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:37:58 compute-0 nova_compute[254092]: 2025-11-25 17:37:58.009 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:37:58 compute-0 sudo[440785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:37:58 compute-0 sudo[440785]: pam_unix(sudo:session): session closed for user root
Nov 25 17:37:58 compute-0 sudo[440812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:37:58 compute-0 sudo[440812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:37:58 compute-0 sudo[440812]: pam_unix(sudo:session): session closed for user root
Nov 25 17:37:58 compute-0 sudo[440838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:37:58 compute-0 sudo[440838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:37:58 compute-0 nova_compute[254092]: 2025-11-25 17:37:58.158 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:37:58 compute-0 nova_compute[254092]: 2025-11-25 17:37:58.160 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3604MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:37:58 compute-0 nova_compute[254092]: 2025-11-25 17:37:58.160 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:37:58 compute-0 nova_compute[254092]: 2025-11-25 17:37:58.160 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:37:58 compute-0 nova_compute[254092]: 2025-11-25 17:37:58.229 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:37:58 compute-0 nova_compute[254092]: 2025-11-25 17:37:58.229 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:37:58 compute-0 nova_compute[254092]: 2025-11-25 17:37:58.254 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:37:58 compute-0 podman[440922]: 2025-11-25 17:37:58.475987851 +0000 UTC m=+0.041214503 container create 290d0a7e190fc35e80ca3a83d173e99b52132203441da3e296ac7307b781a0fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:37:58 compute-0 systemd[1]: Started libpod-conmon-290d0a7e190fc35e80ca3a83d173e99b52132203441da3e296ac7307b781a0fd.scope.
Nov 25 17:37:58 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:37:58 compute-0 podman[440922]: 2025-11-25 17:37:58.459596925 +0000 UTC m=+0.024823587 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:37:58 compute-0 podman[440922]: 2025-11-25 17:37:58.566162596 +0000 UTC m=+0.131389238 container init 290d0a7e190fc35e80ca3a83d173e99b52132203441da3e296ac7307b781a0fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_euler, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:37:58 compute-0 podman[440922]: 2025-11-25 17:37:58.578330478 +0000 UTC m=+0.143557130 container start 290d0a7e190fc35e80ca3a83d173e99b52132203441da3e296ac7307b781a0fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_euler, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 17:37:58 compute-0 podman[440922]: 2025-11-25 17:37:58.58134331 +0000 UTC m=+0.146569982 container attach 290d0a7e190fc35e80ca3a83d173e99b52132203441da3e296ac7307b781a0fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 17:37:58 compute-0 zealous_euler[440938]: 167 167
Nov 25 17:37:58 compute-0 systemd[1]: libpod-290d0a7e190fc35e80ca3a83d173e99b52132203441da3e296ac7307b781a0fd.scope: Deactivated successfully.
Nov 25 17:37:58 compute-0 podman[440922]: 2025-11-25 17:37:58.585423991 +0000 UTC m=+0.150650673 container died 290d0a7e190fc35e80ca3a83d173e99b52132203441da3e296ac7307b781a0fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:37:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-d810108bbb9960d985fbace81c7b86b75b749265bfe3bd88225c5ae645fa3070-merged.mount: Deactivated successfully.
Nov 25 17:37:58 compute-0 podman[440922]: 2025-11-25 17:37:58.630983571 +0000 UTC m=+0.196210223 container remove 290d0a7e190fc35e80ca3a83d173e99b52132203441da3e296ac7307b781a0fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_euler, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 17:37:58 compute-0 systemd[1]: libpod-conmon-290d0a7e190fc35e80ca3a83d173e99b52132203441da3e296ac7307b781a0fd.scope: Deactivated successfully.
Nov 25 17:37:58 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:37:58 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1683460061' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:37:58 compute-0 nova_compute[254092]: 2025-11-25 17:37:58.702 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:37:58 compute-0 nova_compute[254092]: 2025-11-25 17:37:58.704 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:37:58 compute-0 nova_compute[254092]: 2025-11-25 17:37:58.709 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:37:58 compute-0 nova_compute[254092]: 2025-11-25 17:37:58.722 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:37:58 compute-0 nova_compute[254092]: 2025-11-25 17:37:58.723 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:37:58 compute-0 nova_compute[254092]: 2025-11-25 17:37:58.724 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:37:58 compute-0 ceph-mon[74985]: pgmap v3455: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:58 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:37:58 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:37:58 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:37:58 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:37:58 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:37:58 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:37:58 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/7614586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:37:58 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1683460061' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:37:58 compute-0 podman[440964]: 2025-11-25 17:37:58.801120833 +0000 UTC m=+0.041356368 container create 125f141361c288136edaa6d75e9bf0f55ad585b673c8e4370dd1074097a37d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_brahmagupta, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:37:58 compute-0 systemd[1]: Started libpod-conmon-125f141361c288136edaa6d75e9bf0f55ad585b673c8e4370dd1074097a37d14.scope.
Nov 25 17:37:58 compute-0 podman[440964]: 2025-11-25 17:37:58.785273561 +0000 UTC m=+0.025509126 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:37:58 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:37:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13f6941f0314687138a411841ee7d9ce38db873a078b51d4114c2cb32cc9bd54/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:37:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13f6941f0314687138a411841ee7d9ce38db873a078b51d4114c2cb32cc9bd54/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:37:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13f6941f0314687138a411841ee7d9ce38db873a078b51d4114c2cb32cc9bd54/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:37:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13f6941f0314687138a411841ee7d9ce38db873a078b51d4114c2cb32cc9bd54/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:37:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13f6941f0314687138a411841ee7d9ce38db873a078b51d4114c2cb32cc9bd54/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:37:58 compute-0 podman[440964]: 2025-11-25 17:37:58.909148174 +0000 UTC m=+0.149383829 container init 125f141361c288136edaa6d75e9bf0f55ad585b673c8e4370dd1074097a37d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:37:58 compute-0 podman[440964]: 2025-11-25 17:37:58.918724824 +0000 UTC m=+0.158960359 container start 125f141361c288136edaa6d75e9bf0f55ad585b673c8e4370dd1074097a37d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_brahmagupta, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Nov 25 17:37:58 compute-0 podman[440964]: 2025-11-25 17:37:58.922023324 +0000 UTC m=+0.162258899 container attach 125f141361c288136edaa6d75e9bf0f55ad585b673c8e4370dd1074097a37d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_brahmagupta, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:37:59 compute-0 podman[440595]: time="2025-11-25T17:37:59Z" level=info msg="Received shutdown.Stop(), terminating!" PID=440595
Nov 25 17:37:59 compute-0 systemd[1]: podman.service: Deactivated successfully.
Nov 25 17:37:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3456: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:37:59 compute-0 nova_compute[254092]: 2025-11-25 17:37:59.724 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:37:59 compute-0 nova_compute[254092]: 2025-11-25 17:37:59.727 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:37:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:38:00 compute-0 sad_brahmagupta[440981]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:38:00 compute-0 sad_brahmagupta[440981]: --> relative data size: 1.0
Nov 25 17:38:00 compute-0 sad_brahmagupta[440981]: --> All data devices are unavailable
Nov 25 17:38:00 compute-0 systemd[1]: libpod-125f141361c288136edaa6d75e9bf0f55ad585b673c8e4370dd1074097a37d14.scope: Deactivated successfully.
Nov 25 17:38:00 compute-0 systemd[1]: libpod-125f141361c288136edaa6d75e9bf0f55ad585b673c8e4370dd1074097a37d14.scope: Consumed 1.088s CPU time.
Nov 25 17:38:00 compute-0 podman[440964]: 2025-11-25 17:38:00.076857992 +0000 UTC m=+1.317093527 container died 125f141361c288136edaa6d75e9bf0f55ad585b673c8e4370dd1074097a37d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_brahmagupta, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 17:38:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-13f6941f0314687138a411841ee7d9ce38db873a078b51d4114c2cb32cc9bd54-merged.mount: Deactivated successfully.
Nov 25 17:38:00 compute-0 podman[440964]: 2025-11-25 17:38:00.190586899 +0000 UTC m=+1.430822444 container remove 125f141361c288136edaa6d75e9bf0f55ad585b673c8e4370dd1074097a37d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_brahmagupta, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:38:00 compute-0 systemd[1]: libpod-conmon-125f141361c288136edaa6d75e9bf0f55ad585b673c8e4370dd1074097a37d14.scope: Deactivated successfully.
Nov 25 17:38:00 compute-0 sudo[440838]: pam_unix(sudo:session): session closed for user root
Nov 25 17:38:00 compute-0 sudo[441024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:38:00 compute-0 sudo[441024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:38:00 compute-0 sudo[441024]: pam_unix(sudo:session): session closed for user root
Nov 25 17:38:00 compute-0 sudo[441049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:38:00 compute-0 sudo[441049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:38:00 compute-0 sudo[441049]: pam_unix(sudo:session): session closed for user root
Nov 25 17:38:00 compute-0 sudo[441074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:38:00 compute-0 sudo[441074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:38:00 compute-0 sudo[441074]: pam_unix(sudo:session): session closed for user root
Nov 25 17:38:00 compute-0 nova_compute[254092]: 2025-11-25 17:38:00.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:38:00 compute-0 sudo[441099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:38:00 compute-0 sudo[441099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:38:00 compute-0 nova_compute[254092]: 2025-11-25 17:38:00.702 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:38:00 compute-0 ceph-mon[74985]: pgmap v3456: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:00 compute-0 podman[441162]: 2025-11-25 17:38:00.854496213 +0000 UTC m=+0.039469806 container create 6ec38816c8e411fcd48e862b7f51271c6f3a598c24f6db9b100d6f8772640c6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bell, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:38:00 compute-0 systemd[1]: Started libpod-conmon-6ec38816c8e411fcd48e862b7f51271c6f3a598c24f6db9b100d6f8772640c6a.scope.
Nov 25 17:38:00 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:38:00 compute-0 podman[441162]: 2025-11-25 17:38:00.835722211 +0000 UTC m=+0.020695814 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:38:00 compute-0 podman[441162]: 2025-11-25 17:38:00.946806696 +0000 UTC m=+0.131780309 container init 6ec38816c8e411fcd48e862b7f51271c6f3a598c24f6db9b100d6f8772640c6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bell, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:38:00 compute-0 podman[441162]: 2025-11-25 17:38:00.952988304 +0000 UTC m=+0.137961917 container start 6ec38816c8e411fcd48e862b7f51271c6f3a598c24f6db9b100d6f8772640c6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bell, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:38:00 compute-0 goofy_bell[441179]: 167 167
Nov 25 17:38:00 compute-0 systemd[1]: libpod-6ec38816c8e411fcd48e862b7f51271c6f3a598c24f6db9b100d6f8772640c6a.scope: Deactivated successfully.
Nov 25 17:38:00 compute-0 podman[441162]: 2025-11-25 17:38:00.957039095 +0000 UTC m=+0.142012698 container attach 6ec38816c8e411fcd48e862b7f51271c6f3a598c24f6db9b100d6f8772640c6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bell, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 17:38:00 compute-0 podman[441162]: 2025-11-25 17:38:00.957587269 +0000 UTC m=+0.142560852 container died 6ec38816c8e411fcd48e862b7f51271c6f3a598c24f6db9b100d6f8772640c6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bell, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:38:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-c14b868327b4a186f835ec36ac2ab1539405de6a97bf53860867e4f306856f14-merged.mount: Deactivated successfully.
Nov 25 17:38:00 compute-0 podman[441162]: 2025-11-25 17:38:00.991613286 +0000 UTC m=+0.176586859 container remove 6ec38816c8e411fcd48e862b7f51271c6f3a598c24f6db9b100d6f8772640c6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 17:38:01 compute-0 systemd[1]: libpod-conmon-6ec38816c8e411fcd48e862b7f51271c6f3a598c24f6db9b100d6f8772640c6a.scope: Deactivated successfully.
Nov 25 17:38:01 compute-0 podman[441203]: 2025-11-25 17:38:01.159251989 +0000 UTC m=+0.041460569 container create dd0941d94c321ecd9c516d4303545576c066d7fcdea23e2ed7aa744b6eb4b54b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 17:38:01 compute-0 systemd[1]: Started libpod-conmon-dd0941d94c321ecd9c516d4303545576c066d7fcdea23e2ed7aa744b6eb4b54b.scope.
Nov 25 17:38:01 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:38:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ee7020ccc5bc75e3cf24a6e1c1087ae8d50b2320a0bd17cdbc6102533ae84be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:38:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ee7020ccc5bc75e3cf24a6e1c1087ae8d50b2320a0bd17cdbc6102533ae84be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:38:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ee7020ccc5bc75e3cf24a6e1c1087ae8d50b2320a0bd17cdbc6102533ae84be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:38:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ee7020ccc5bc75e3cf24a6e1c1087ae8d50b2320a0bd17cdbc6102533ae84be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:38:01 compute-0 podman[441203]: 2025-11-25 17:38:01.145105984 +0000 UTC m=+0.027314584 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:38:01 compute-0 podman[441203]: 2025-11-25 17:38:01.244577712 +0000 UTC m=+0.126786332 container init dd0941d94c321ecd9c516d4303545576c066d7fcdea23e2ed7aa744b6eb4b54b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_ganguly, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 17:38:01 compute-0 podman[441203]: 2025-11-25 17:38:01.251970474 +0000 UTC m=+0.134179064 container start dd0941d94c321ecd9c516d4303545576c066d7fcdea23e2ed7aa744b6eb4b54b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:38:01 compute-0 podman[441203]: 2025-11-25 17:38:01.25478769 +0000 UTC m=+0.136996290 container attach dd0941d94c321ecd9c516d4303545576c066d7fcdea23e2ed7aa744b6eb4b54b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_ganguly, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:38:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3457: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]: {
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:     "0": [
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:         {
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:             "devices": [
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:                 "/dev/loop3"
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:             ],
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:             "lv_name": "ceph_lv0",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:             "lv_size": "21470642176",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:             "name": "ceph_lv0",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:             "tags": {
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:                 "ceph.cluster_name": "ceph",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:                 "ceph.crush_device_class": "",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:                 "ceph.encrypted": "0",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:                 "ceph.osd_id": "0",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:                 "ceph.type": "block",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:                 "ceph.vdo": "0"
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:             },
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:             "type": "block",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:             "vg_name": "ceph_vg0"
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:         }
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:     ],
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:     "1": [
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:         {
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:             "devices": [
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:                 "/dev/loop4"
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:             ],
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:             "lv_name": "ceph_lv1",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:             "lv_size": "21470642176",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:             "name": "ceph_lv1",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:             "tags": {
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:                 "ceph.cluster_name": "ceph",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:                 "ceph.crush_device_class": "",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:                 "ceph.encrypted": "0",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:                 "ceph.osd_id": "1",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:                 "ceph.type": "block",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:                 "ceph.vdo": "0"
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:             },
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:             "type": "block",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:             "vg_name": "ceph_vg1"
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:         }
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:     ],
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:     "2": [
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:         {
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:             "devices": [
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:                 "/dev/loop5"
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:             ],
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:             "lv_name": "ceph_lv2",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:             "lv_size": "21470642176",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:             "name": "ceph_lv2",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:             "tags": {
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:                 "ceph.cluster_name": "ceph",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:                 "ceph.crush_device_class": "",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:                 "ceph.encrypted": "0",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:                 "ceph.osd_id": "2",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:                 "ceph.type": "block",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:                 "ceph.vdo": "0"
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:             },
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:             "type": "block",
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:             "vg_name": "ceph_vg2"
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:         }
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]:     ]
Nov 25 17:38:01 compute-0 hardcore_ganguly[441219]: }
Nov 25 17:38:02 compute-0 systemd[1]: libpod-dd0941d94c321ecd9c516d4303545576c066d7fcdea23e2ed7aa744b6eb4b54b.scope: Deactivated successfully.
Nov 25 17:38:02 compute-0 podman[441203]: 2025-11-25 17:38:02.013213858 +0000 UTC m=+0.895422448 container died dd0941d94c321ecd9c516d4303545576c066d7fcdea23e2ed7aa744b6eb4b54b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:38:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ee7020ccc5bc75e3cf24a6e1c1087ae8d50b2320a0bd17cdbc6102533ae84be-merged.mount: Deactivated successfully.
Nov 25 17:38:02 compute-0 podman[441203]: 2025-11-25 17:38:02.069622543 +0000 UTC m=+0.951831133 container remove dd0941d94c321ecd9c516d4303545576c066d7fcdea23e2ed7aa744b6eb4b54b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:38:02 compute-0 systemd[1]: libpod-conmon-dd0941d94c321ecd9c516d4303545576c066d7fcdea23e2ed7aa744b6eb4b54b.scope: Deactivated successfully.
Nov 25 17:38:02 compute-0 sudo[441099]: pam_unix(sudo:session): session closed for user root
Nov 25 17:38:02 compute-0 sudo[441242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:38:02 compute-0 sudo[441242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:38:02 compute-0 sudo[441242]: pam_unix(sudo:session): session closed for user root
Nov 25 17:38:02 compute-0 sudo[441267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:38:02 compute-0 sudo[441267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:38:02 compute-0 sudo[441267]: pam_unix(sudo:session): session closed for user root
Nov 25 17:38:02 compute-0 sudo[441292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:38:02 compute-0 sudo[441292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:38:02 compute-0 sudo[441292]: pam_unix(sudo:session): session closed for user root
Nov 25 17:38:02 compute-0 sudo[441317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:38:02 compute-0 sudo[441317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:38:02 compute-0 podman[441383]: 2025-11-25 17:38:02.751150036 +0000 UTC m=+0.059200802 container create 66b5a3642d4f680edca95bfa6f333b6a6e1356ace53cdb549c9e9ce2690241c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:38:02 compute-0 ceph-mon[74985]: pgmap v3457: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:02 compute-0 systemd[1]: Started libpod-conmon-66b5a3642d4f680edca95bfa6f333b6a6e1356ace53cdb549c9e9ce2690241c3.scope.
Nov 25 17:38:02 compute-0 podman[441383]: 2025-11-25 17:38:02.721345695 +0000 UTC m=+0.029396261 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:38:02 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:38:02 compute-0 podman[441383]: 2025-11-25 17:38:02.864475022 +0000 UTC m=+0.172525618 container init 66b5a3642d4f680edca95bfa6f333b6a6e1356ace53cdb549c9e9ce2690241c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_jackson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 17:38:02 compute-0 podman[441383]: 2025-11-25 17:38:02.873011574 +0000 UTC m=+0.181062090 container start 66b5a3642d4f680edca95bfa6f333b6a6e1356ace53cdb549c9e9ce2690241c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_jackson, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:38:02 compute-0 podman[441383]: 2025-11-25 17:38:02.877033673 +0000 UTC m=+0.185084229 container attach 66b5a3642d4f680edca95bfa6f333b6a6e1356ace53cdb549c9e9ce2690241c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_jackson, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:38:02 compute-0 peaceful_jackson[441400]: 167 167
Nov 25 17:38:02 compute-0 systemd[1]: libpod-66b5a3642d4f680edca95bfa6f333b6a6e1356ace53cdb549c9e9ce2690241c3.scope: Deactivated successfully.
Nov 25 17:38:02 compute-0 podman[441383]: 2025-11-25 17:38:02.880746104 +0000 UTC m=+0.188796650 container died 66b5a3642d4f680edca95bfa6f333b6a6e1356ace53cdb549c9e9ce2690241c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_jackson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 17:38:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-12841acdc9f6ff0b914ee087c30d649b9671f2625d433742338cbbc5883c52fa-merged.mount: Deactivated successfully.
Nov 25 17:38:02 compute-0 podman[441383]: 2025-11-25 17:38:02.932128904 +0000 UTC m=+0.240179450 container remove 66b5a3642d4f680edca95bfa6f333b6a6e1356ace53cdb549c9e9ce2690241c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_jackson, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:38:02 compute-0 systemd[1]: libpod-conmon-66b5a3642d4f680edca95bfa6f333b6a6e1356ace53cdb549c9e9ce2690241c3.scope: Deactivated successfully.
Nov 25 17:38:03 compute-0 podman[441423]: 2025-11-25 17:38:03.148763351 +0000 UTC m=+0.050988869 container create a579177db60144bd61e3a0ab1d91bd1b00995ff40c0be06814a15b3d9f07e2cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 17:38:03 compute-0 systemd[1]: Started libpod-conmon-a579177db60144bd61e3a0ab1d91bd1b00995ff40c0be06814a15b3d9f07e2cd.scope.
Nov 25 17:38:03 compute-0 podman[441423]: 2025-11-25 17:38:03.123839842 +0000 UTC m=+0.026065370 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:38:03 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:38:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67d60561a01913edfa240bdd49b6634352947a9c628a7846c9c03c431aef1cce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:38:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67d60561a01913edfa240bdd49b6634352947a9c628a7846c9c03c431aef1cce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:38:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67d60561a01913edfa240bdd49b6634352947a9c628a7846c9c03c431aef1cce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:38:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67d60561a01913edfa240bdd49b6634352947a9c628a7846c9c03c431aef1cce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:38:03 compute-0 podman[441423]: 2025-11-25 17:38:03.254929921 +0000 UTC m=+0.157155439 container init a579177db60144bd61e3a0ab1d91bd1b00995ff40c0be06814a15b3d9f07e2cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_gates, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 17:38:03 compute-0 podman[441423]: 2025-11-25 17:38:03.268279375 +0000 UTC m=+0.170504883 container start a579177db60144bd61e3a0ab1d91bd1b00995ff40c0be06814a15b3d9f07e2cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:38:03 compute-0 podman[441423]: 2025-11-25 17:38:03.273511907 +0000 UTC m=+0.175737495 container attach a579177db60144bd61e3a0ab1d91bd1b00995ff40c0be06814a15b3d9f07e2cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 17:38:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3458: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:03 compute-0 nova_compute[254092]: 2025-11-25 17:38:03.742 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:38:04 compute-0 determined_gates[441440]: {
Nov 25 17:38:04 compute-0 determined_gates[441440]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:38:04 compute-0 determined_gates[441440]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:38:04 compute-0 determined_gates[441440]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:38:04 compute-0 determined_gates[441440]:         "osd_id": 1,
Nov 25 17:38:04 compute-0 determined_gates[441440]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:38:04 compute-0 determined_gates[441440]:         "type": "bluestore"
Nov 25 17:38:04 compute-0 determined_gates[441440]:     },
Nov 25 17:38:04 compute-0 determined_gates[441440]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:38:04 compute-0 determined_gates[441440]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:38:04 compute-0 determined_gates[441440]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:38:04 compute-0 determined_gates[441440]:         "osd_id": 2,
Nov 25 17:38:04 compute-0 determined_gates[441440]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:38:04 compute-0 determined_gates[441440]:         "type": "bluestore"
Nov 25 17:38:04 compute-0 determined_gates[441440]:     },
Nov 25 17:38:04 compute-0 determined_gates[441440]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:38:04 compute-0 determined_gates[441440]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:38:04 compute-0 determined_gates[441440]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:38:04 compute-0 determined_gates[441440]:         "osd_id": 0,
Nov 25 17:38:04 compute-0 determined_gates[441440]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:38:04 compute-0 determined_gates[441440]:         "type": "bluestore"
Nov 25 17:38:04 compute-0 determined_gates[441440]:     }
Nov 25 17:38:04 compute-0 determined_gates[441440]: }
Nov 25 17:38:04 compute-0 systemd[1]: libpod-a579177db60144bd61e3a0ab1d91bd1b00995ff40c0be06814a15b3d9f07e2cd.scope: Deactivated successfully.
Nov 25 17:38:04 compute-0 podman[441423]: 2025-11-25 17:38:04.463686607 +0000 UTC m=+1.365912095 container died a579177db60144bd61e3a0ab1d91bd1b00995ff40c0be06814a15b3d9f07e2cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 17:38:04 compute-0 systemd[1]: libpod-a579177db60144bd61e3a0ab1d91bd1b00995ff40c0be06814a15b3d9f07e2cd.scope: Consumed 1.211s CPU time.
Nov 25 17:38:04 compute-0 nova_compute[254092]: 2025-11-25 17:38:04.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:38:04 compute-0 nova_compute[254092]: 2025-11-25 17:38:04.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:38:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-67d60561a01913edfa240bdd49b6634352947a9c628a7846c9c03c431aef1cce-merged.mount: Deactivated successfully.
Nov 25 17:38:04 compute-0 podman[441423]: 2025-11-25 17:38:04.553559975 +0000 UTC m=+1.455785493 container remove a579177db60144bd61e3a0ab1d91bd1b00995ff40c0be06814a15b3d9f07e2cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_gates, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 17:38:04 compute-0 systemd[1]: libpod-conmon-a579177db60144bd61e3a0ab1d91bd1b00995ff40c0be06814a15b3d9f07e2cd.scope: Deactivated successfully.
Nov 25 17:38:04 compute-0 sudo[441317]: pam_unix(sudo:session): session closed for user root
Nov 25 17:38:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:38:04 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:38:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:38:04 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:38:04 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 28754b26-1130-455e-b794-eaa71f2a8d34 does not exist
Nov 25 17:38:04 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev f86d45ee-fd2f-4309-a6a3-2e4e947edd03 does not exist
Nov 25 17:38:04 compute-0 sudo[441485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:38:04 compute-0 ceph-mon[74985]: pgmap v3458: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:04 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:38:04 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:38:04 compute-0 sudo[441485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:38:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:38:04 compute-0 sudo[441485]: pam_unix(sudo:session): session closed for user root
Nov 25 17:38:04 compute-0 sudo[441510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:38:04 compute-0 sudo[441510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:38:04 compute-0 sudo[441510]: pam_unix(sudo:session): session closed for user root
Nov 25 17:38:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3459: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:05 compute-0 nova_compute[254092]: 2025-11-25 17:38:05.704 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:38:06 compute-0 nova_compute[254092]: 2025-11-25 17:38:06.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:38:06 compute-0 nova_compute[254092]: 2025-11-25 17:38:06.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:38:06 compute-0 sudo[441535]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/ip --brief address list
Nov 25 17:38:06 compute-0 sudo[441535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 17:38:06 compute-0 sudo[441535]: pam_unix(sudo:session): session closed for user root
Nov 25 17:38:06 compute-0 ceph-mon[74985]: pgmap v3459: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:06 compute-0 sudo[441560]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/ip -o netns list
Nov 25 17:38:06 compute-0 sudo[441560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 17:38:06 compute-0 sudo[441560]: pam_unix(sudo:session): session closed for user root
Nov 25 17:38:06 compute-0 podman[441585]: 2025-11-25 17:38:06.898154851 +0000 UTC m=+0.090802312 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 17:38:06 compute-0 podman[441584]: 2025-11-25 17:38:06.911108484 +0000 UTC m=+0.103549060 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:38:06 compute-0 sshd-session[440339]: Connection closed by 192.168.122.30 port 54712
Nov 25 17:38:06 compute-0 sshd-session[440336]: pam_unix(sshd:session): session closed for user zuul
Nov 25 17:38:06 compute-0 systemd[1]: session-53.scope: Deactivated successfully.
Nov 25 17:38:06 compute-0 systemd[1]: session-53.scope: Consumed 1.709s CPU time.
Nov 25 17:38:06 compute-0 systemd-logind[791]: Session 53 logged out. Waiting for processes to exit.
Nov 25 17:38:06 compute-0 systemd-logind[791]: Removed session 53.
Nov 25 17:38:06 compute-0 podman[441586]: 2025-11-25 17:38:06.951757791 +0000 UTC m=+0.136759164 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Nov 25 17:38:07 compute-0 nova_compute[254092]: 2025-11-25 17:38:07.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:38:07 compute-0 nova_compute[254092]: 2025-11-25 17:38:07.498 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:38:07 compute-0 nova_compute[254092]: 2025-11-25 17:38:07.498 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:38:07 compute-0 nova_compute[254092]: 2025-11-25 17:38:07.530 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 17:38:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3460: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:07 compute-0 sshd-session[440594]: Connection closed by 192.168.122.30 port 56624
Nov 25 17:38:07 compute-0 sshd-session[440591]: pam_unix(sshd:session): session closed for user zuul
Nov 25 17:38:07 compute-0 systemd[1]: session-54.scope: Deactivated successfully.
Nov 25 17:38:07 compute-0 systemd-logind[791]: Session 54 logged out. Waiting for processes to exit.
Nov 25 17:38:07 compute-0 systemd-logind[791]: Removed session 54.
Nov 25 17:38:08 compute-0 nova_compute[254092]: 2025-11-25 17:38:08.786 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:38:08 compute-0 ceph-mon[74985]: pgmap v3460: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3461: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:38:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:38:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:38:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:38:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:38:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:38:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:38:10 compute-0 nova_compute[254092]: 2025-11-25 17:38:10.706 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:38:10 compute-0 ceph-mon[74985]: pgmap v3461: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3462: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:12 compute-0 ceph-mon[74985]: pgmap v3462: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3463: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:38:13.677 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:38:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:38:13.678 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:38:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:38:13.678 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:38:13 compute-0 nova_compute[254092]: 2025-11-25 17:38:13.789 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:38:13 compute-0 ceph-mon[74985]: pgmap v3463: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:14 compute-0 nova_compute[254092]: 2025-11-25 17:38:14.524 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:38:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:38:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3464: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:15 compute-0 nova_compute[254092]: 2025-11-25 17:38:15.709 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:38:16 compute-0 nova_compute[254092]: 2025-11-25 17:38:16.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:38:16 compute-0 ceph-mon[74985]: pgmap v3464: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3465: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:18 compute-0 ceph-mon[74985]: pgmap v3465: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:18 compute-0 nova_compute[254092]: 2025-11-25 17:38:18.791 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:38:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3466: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:38:20 compute-0 ceph-mon[74985]: pgmap v3466: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:20 compute-0 nova_compute[254092]: 2025-11-25 17:38:20.746 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:38:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3467: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:22 compute-0 ceph-mon[74985]: pgmap v3467: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3468: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:23 compute-0 nova_compute[254092]: 2025-11-25 17:38:23.793 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:38:24 compute-0 ceph-mon[74985]: pgmap v3468: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:38:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3469: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:25 compute-0 nova_compute[254092]: 2025-11-25 17:38:25.748 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:38:26 compute-0 ceph-mon[74985]: pgmap v3469: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3470: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:28 compute-0 ceph-mon[74985]: pgmap v3470: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:28 compute-0 nova_compute[254092]: 2025-11-25 17:38:28.796 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:38:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3471: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:38:30 compute-0 ceph-mon[74985]: pgmap v3471: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:30 compute-0 nova_compute[254092]: 2025-11-25 17:38:30.750 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:38:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3472: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:32 compute-0 ceph-mon[74985]: pgmap v3472: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3473: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:33 compute-0 nova_compute[254092]: 2025-11-25 17:38:33.800 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:38:34 compute-0 ceph-mon[74985]: pgmap v3473: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:38:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3474: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:35 compute-0 nova_compute[254092]: 2025-11-25 17:38:35.754 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:38:36 compute-0 ceph-mon[74985]: pgmap v3474: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3475: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:37 compute-0 podman[441653]: 2025-11-25 17:38:37.733539675 +0000 UTC m=+0.089080786 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Nov 25 17:38:37 compute-0 podman[441652]: 2025-11-25 17:38:37.745915012 +0000 UTC m=+0.108989958 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:38:37 compute-0 podman[441654]: 2025-11-25 17:38:37.787021271 +0000 UTC m=+0.136802955 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 17:38:38 compute-0 ceph-mon[74985]: pgmap v3475: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:38 compute-0 nova_compute[254092]: 2025-11-25 17:38:38.836 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:38:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3476: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:38:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:38:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:38:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:38:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:38:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:38:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:38:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:38:40
Nov 25 17:38:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:38:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:38:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['.mgr', 'images', '.rgw.root', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data']
Nov 25 17:38:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:38:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:38:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:38:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:38:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:38:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:38:40 compute-0 nova_compute[254092]: 2025-11-25 17:38:40.757 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:38:40 compute-0 ceph-mon[74985]: pgmap v3476: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:38:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:38:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:38:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:38:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:38:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3477: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:42 compute-0 ceph-mon[74985]: pgmap v3477: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3478: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:43 compute-0 nova_compute[254092]: 2025-11-25 17:38:43.839 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:38:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:38:44 compute-0 ceph-mon[74985]: pgmap v3478: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3479: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:45 compute-0 nova_compute[254092]: 2025-11-25 17:38:45.760 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:38:46 compute-0 ceph-mon[74985]: pgmap v3479: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3480: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:48 compute-0 ceph-mon[74985]: pgmap v3480: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:48 compute-0 nova_compute[254092]: 2025-11-25 17:38:48.840 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:38:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3481: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:38:50 compute-0 ceph-mon[74985]: pgmap v3481: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:50 compute-0 nova_compute[254092]: 2025-11-25 17:38:50.764 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:38:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3482: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:38:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:38:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:38:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:38:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 17:38:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:38:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:38:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:38:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:38:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:38:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 17:38:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:38:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:38:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:38:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:38:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:38:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:38:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:38:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:38:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:38:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:38:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:38:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:38:52 compute-0 ceph-mon[74985]: pgmap v3482: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3483: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:53 compute-0 nova_compute[254092]: 2025-11-25 17:38:53.842 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:38:54 compute-0 ceph-mon[74985]: pgmap v3483: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:38:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:38:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/438611634' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:38:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:38:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/438611634' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:38:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3484: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/438611634' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:38:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/438611634' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:38:55 compute-0 nova_compute[254092]: 2025-11-25 17:38:55.766 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:38:56 compute-0 ceph-mon[74985]: pgmap v3484: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3485: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:58 compute-0 ceph-mon[74985]: pgmap v3485: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:58 compute-0 nova_compute[254092]: 2025-11-25 17:38:58.843 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:38:59 compute-0 nova_compute[254092]: 2025-11-25 17:38:59.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:38:59 compute-0 nova_compute[254092]: 2025-11-25 17:38:59.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:38:59 compute-0 nova_compute[254092]: 2025-11-25 17:38:59.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:38:59 compute-0 nova_compute[254092]: 2025-11-25 17:38:59.539 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:38:59 compute-0 nova_compute[254092]: 2025-11-25 17:38:59.540 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:38:59 compute-0 nova_compute[254092]: 2025-11-25 17:38:59.540 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:38:59 compute-0 nova_compute[254092]: 2025-11-25 17:38:59.541 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:38:59 compute-0 nova_compute[254092]: 2025-11-25 17:38:59.541 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:38:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3486: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:38:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:38:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:38:59 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2122273370' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:38:59 compute-0 nova_compute[254092]: 2025-11-25 17:38:59.980 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:39:00 compute-0 nova_compute[254092]: 2025-11-25 17:39:00.236 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:39:00 compute-0 nova_compute[254092]: 2025-11-25 17:39:00.238 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3650MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:39:00 compute-0 nova_compute[254092]: 2025-11-25 17:39:00.238 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:39:00 compute-0 nova_compute[254092]: 2025-11-25 17:39:00.238 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:39:00 compute-0 nova_compute[254092]: 2025-11-25 17:39:00.315 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:39:00 compute-0 nova_compute[254092]: 2025-11-25 17:39:00.315 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:39:00 compute-0 nova_compute[254092]: 2025-11-25 17:39:00.335 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:39:00 compute-0 nova_compute[254092]: 2025-11-25 17:39:00.767 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:39:00 compute-0 ceph-mon[74985]: pgmap v3486: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:00 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2122273370' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:39:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:39:00 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2087428060' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:39:00 compute-0 nova_compute[254092]: 2025-11-25 17:39:00.889 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:39:00 compute-0 nova_compute[254092]: 2025-11-25 17:39:00.896 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:39:00 compute-0 nova_compute[254092]: 2025-11-25 17:39:00.913 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:39:00 compute-0 nova_compute[254092]: 2025-11-25 17:39:00.915 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:39:00 compute-0 nova_compute[254092]: 2025-11-25 17:39:00.916 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:39:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3487: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:01 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2087428060' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:39:02 compute-0 ceph-mon[74985]: pgmap v3487: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:02 compute-0 nova_compute[254092]: 2025-11-25 17:39:02.917 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:39:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3488: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:03 compute-0 nova_compute[254092]: 2025-11-25 17:39:03.845 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:39:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:39:04 compute-0 ceph-mon[74985]: pgmap v3488: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:04 compute-0 sudo[441759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:39:04 compute-0 sudo[441759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:39:04 compute-0 sudo[441759]: pam_unix(sudo:session): session closed for user root
Nov 25 17:39:05 compute-0 sudo[441784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:39:05 compute-0 sudo[441784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:39:05 compute-0 sudo[441784]: pam_unix(sudo:session): session closed for user root
Nov 25 17:39:05 compute-0 sudo[441809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:39:05 compute-0 sudo[441809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:39:05 compute-0 sudo[441809]: pam_unix(sudo:session): session closed for user root
Nov 25 17:39:05 compute-0 sudo[441834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:39:05 compute-0 sudo[441834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:39:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3489: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:05 compute-0 sudo[441834]: pam_unix(sudo:session): session closed for user root
Nov 25 17:39:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 17:39:05 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 17:39:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:39:05 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:39:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:39:05 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:39:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:39:05 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:39:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:39:05 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:39:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:39:05 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:39:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:39:05 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:39:05 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 4e87997d-1d25-4e7c-902e-f005f91b2dc0 does not exist
Nov 25 17:39:05 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev b3f2b56c-5c45-4458-8bfa-ad609700c756 does not exist
Nov 25 17:39:05 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 07fcd31c-98ec-49be-8e6a-1eb20552b708 does not exist
Nov 25 17:39:05 compute-0 nova_compute[254092]: 2025-11-25 17:39:05.810 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:39:05 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 17:39:05 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:39:05 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:39:05 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:39:05 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:39:05 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:39:05 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:39:05 compute-0 sudo[441890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:39:05 compute-0 sudo[441890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:39:05 compute-0 sudo[441890]: pam_unix(sudo:session): session closed for user root
Nov 25 17:39:05 compute-0 sudo[441915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:39:05 compute-0 sudo[441915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:39:05 compute-0 sudo[441915]: pam_unix(sudo:session): session closed for user root
Nov 25 17:39:06 compute-0 sudo[441940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:39:06 compute-0 sudo[441940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:39:06 compute-0 sudo[441940]: pam_unix(sudo:session): session closed for user root
Nov 25 17:39:06 compute-0 sudo[441965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:39:06 compute-0 sudo[441965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:39:06 compute-0 podman[442030]: 2025-11-25 17:39:06.418707572 +0000 UTC m=+0.042608400 container create 90177fe2be0b7f1a8cf189f722f230808a5e8619adaecc0ec654786bcc7eaf03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 17:39:06 compute-0 systemd[1]: Started libpod-conmon-90177fe2be0b7f1a8cf189f722f230808a5e8619adaecc0ec654786bcc7eaf03.scope.
Nov 25 17:39:06 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:39:06 compute-0 nova_compute[254092]: 2025-11-25 17:39:06.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:39:06 compute-0 podman[442030]: 2025-11-25 17:39:06.494011162 +0000 UTC m=+0.117912010 container init 90177fe2be0b7f1a8cf189f722f230808a5e8619adaecc0ec654786bcc7eaf03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_shannon, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 17:39:06 compute-0 podman[442030]: 2025-11-25 17:39:06.400351943 +0000 UTC m=+0.024252811 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:39:06 compute-0 nova_compute[254092]: 2025-11-25 17:39:06.494 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:39:06 compute-0 nova_compute[254092]: 2025-11-25 17:39:06.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:39:06 compute-0 podman[442030]: 2025-11-25 17:39:06.501287301 +0000 UTC m=+0.125188129 container start 90177fe2be0b7f1a8cf189f722f230808a5e8619adaecc0ec654786bcc7eaf03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_shannon, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:39:06 compute-0 podman[442030]: 2025-11-25 17:39:06.504494828 +0000 UTC m=+0.128395656 container attach 90177fe2be0b7f1a8cf189f722f230808a5e8619adaecc0ec654786bcc7eaf03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 17:39:06 compute-0 vibrant_shannon[442046]: 167 167
Nov 25 17:39:06 compute-0 systemd[1]: libpod-90177fe2be0b7f1a8cf189f722f230808a5e8619adaecc0ec654786bcc7eaf03.scope: Deactivated successfully.
Nov 25 17:39:06 compute-0 conmon[442046]: conmon 90177fe2be0b7f1a8cf1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-90177fe2be0b7f1a8cf189f722f230808a5e8619adaecc0ec654786bcc7eaf03.scope/container/memory.events
Nov 25 17:39:06 compute-0 podman[442030]: 2025-11-25 17:39:06.508793165 +0000 UTC m=+0.132694023 container died 90177fe2be0b7f1a8cf189f722f230808a5e8619adaecc0ec654786bcc7eaf03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_shannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Nov 25 17:39:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-02db7752988c1d92774d568ef3285969c14a6fafe4ab80b2641b2c11f8b1b785-merged.mount: Deactivated successfully.
Nov 25 17:39:06 compute-0 podman[442030]: 2025-11-25 17:39:06.559158886 +0000 UTC m=+0.183059714 container remove 90177fe2be0b7f1a8cf189f722f230808a5e8619adaecc0ec654786bcc7eaf03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_shannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 17:39:06 compute-0 systemd[1]: libpod-conmon-90177fe2be0b7f1a8cf189f722f230808a5e8619adaecc0ec654786bcc7eaf03.scope: Deactivated successfully.
Nov 25 17:39:06 compute-0 podman[442069]: 2025-11-25 17:39:06.724160788 +0000 UTC m=+0.041258514 container create 3649ab2718a286b1108e3dbaf1e043b15825f70b2c18b758a3ccdfc8de694fee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mcclintock, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:39:06 compute-0 systemd[1]: Started libpod-conmon-3649ab2718a286b1108e3dbaf1e043b15825f70b2c18b758a3ccdfc8de694fee.scope.
Nov 25 17:39:06 compute-0 podman[442069]: 2025-11-25 17:39:06.707195686 +0000 UTC m=+0.024293432 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:39:06 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:39:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08d756f515e5de12fa9c73c41a257d6c37490eab1038ff33dfab1bcac8f9076b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:39:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08d756f515e5de12fa9c73c41a257d6c37490eab1038ff33dfab1bcac8f9076b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:39:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08d756f515e5de12fa9c73c41a257d6c37490eab1038ff33dfab1bcac8f9076b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:39:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08d756f515e5de12fa9c73c41a257d6c37490eab1038ff33dfab1bcac8f9076b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:39:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08d756f515e5de12fa9c73c41a257d6c37490eab1038ff33dfab1bcac8f9076b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:39:06 compute-0 podman[442069]: 2025-11-25 17:39:06.823678787 +0000 UTC m=+0.140776543 container init 3649ab2718a286b1108e3dbaf1e043b15825f70b2c18b758a3ccdfc8de694fee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mcclintock, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 17:39:06 compute-0 podman[442069]: 2025-11-25 17:39:06.831527791 +0000 UTC m=+0.148625517 container start 3649ab2718a286b1108e3dbaf1e043b15825f70b2c18b758a3ccdfc8de694fee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mcclintock, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 17:39:06 compute-0 podman[442069]: 2025-11-25 17:39:06.834697027 +0000 UTC m=+0.151794753 container attach 3649ab2718a286b1108e3dbaf1e043b15825f70b2c18b758a3ccdfc8de694fee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mcclintock, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 17:39:06 compute-0 ceph-mon[74985]: pgmap v3489: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:07 compute-0 nova_compute[254092]: 2025-11-25 17:39:07.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:39:07 compute-0 nova_compute[254092]: 2025-11-25 17:39:07.498 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:39:07 compute-0 nova_compute[254092]: 2025-11-25 17:39:07.498 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:39:07 compute-0 nova_compute[254092]: 2025-11-25 17:39:07.514 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 17:39:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3490: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:07 compute-0 ceph-mon[74985]: pgmap v3490: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:08 compute-0 ecstatic_mcclintock[442086]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:39:08 compute-0 ecstatic_mcclintock[442086]: --> relative data size: 1.0
Nov 25 17:39:08 compute-0 ecstatic_mcclintock[442086]: --> All data devices are unavailable
Nov 25 17:39:08 compute-0 systemd[1]: libpod-3649ab2718a286b1108e3dbaf1e043b15825f70b2c18b758a3ccdfc8de694fee.scope: Deactivated successfully.
Nov 25 17:39:08 compute-0 systemd[1]: libpod-3649ab2718a286b1108e3dbaf1e043b15825f70b2c18b758a3ccdfc8de694fee.scope: Consumed 1.167s CPU time.
Nov 25 17:39:08 compute-0 podman[442069]: 2025-11-25 17:39:08.053463327 +0000 UTC m=+1.370561083 container died 3649ab2718a286b1108e3dbaf1e043b15825f70b2c18b758a3ccdfc8de694fee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 17:39:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-08d756f515e5de12fa9c73c41a257d6c37490eab1038ff33dfab1bcac8f9076b-merged.mount: Deactivated successfully.
Nov 25 17:39:08 compute-0 podman[442069]: 2025-11-25 17:39:08.148415282 +0000 UTC m=+1.465513018 container remove 3649ab2718a286b1108e3dbaf1e043b15825f70b2c18b758a3ccdfc8de694fee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 17:39:08 compute-0 systemd[1]: libpod-conmon-3649ab2718a286b1108e3dbaf1e043b15825f70b2c18b758a3ccdfc8de694fee.scope: Deactivated successfully.
Nov 25 17:39:08 compute-0 sudo[441965]: pam_unix(sudo:session): session closed for user root
Nov 25 17:39:08 compute-0 podman[442130]: 2025-11-25 17:39:08.194984919 +0000 UTC m=+0.081643584 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 17:39:08 compute-0 podman[442116]: 2025-11-25 17:39:08.221595074 +0000 UTC m=+0.114731205 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 17:39:08 compute-0 podman[442131]: 2025-11-25 17:39:08.228430309 +0000 UTC m=+0.122042812 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 25 17:39:08 compute-0 sudo[442190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:39:08 compute-0 sudo[442190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:39:08 compute-0 sudo[442190]: pam_unix(sudo:session): session closed for user root
Nov 25 17:39:08 compute-0 sudo[442216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:39:08 compute-0 sudo[442216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:39:08 compute-0 sudo[442216]: pam_unix(sudo:session): session closed for user root
Nov 25 17:39:08 compute-0 sudo[442241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:39:08 compute-0 sudo[442241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:39:08 compute-0 sudo[442241]: pam_unix(sudo:session): session closed for user root
Nov 25 17:39:08 compute-0 sudo[442266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:39:08 compute-0 sudo[442266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:39:08 compute-0 nova_compute[254092]: 2025-11-25 17:39:08.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:39:08 compute-0 podman[442331]: 2025-11-25 17:39:08.769795578 +0000 UTC m=+0.054101314 container create f859571eb8f84b7b8999fba97a6e8cc989125e186d81a305b731e9739c9a1aa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_brahmagupta, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 17:39:08 compute-0 systemd[1]: Started libpod-conmon-f859571eb8f84b7b8999fba97a6e8cc989125e186d81a305b731e9739c9a1aa2.scope.
Nov 25 17:39:08 compute-0 podman[442331]: 2025-11-25 17:39:08.738844425 +0000 UTC m=+0.023150171 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:39:08 compute-0 nova_compute[254092]: 2025-11-25 17:39:08.847 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:39:08 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:39:08 compute-0 podman[442331]: 2025-11-25 17:39:08.891510501 +0000 UTC m=+0.175816287 container init f859571eb8f84b7b8999fba97a6e8cc989125e186d81a305b731e9739c9a1aa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_brahmagupta, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:39:08 compute-0 podman[442331]: 2025-11-25 17:39:08.904017791 +0000 UTC m=+0.188323527 container start f859571eb8f84b7b8999fba97a6e8cc989125e186d81a305b731e9739c9a1aa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 17:39:08 compute-0 podman[442331]: 2025-11-25 17:39:08.91017527 +0000 UTC m=+0.194481006 container attach f859571eb8f84b7b8999fba97a6e8cc989125e186d81a305b731e9739c9a1aa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_brahmagupta, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:39:08 compute-0 systemd[1]: libpod-f859571eb8f84b7b8999fba97a6e8cc989125e186d81a305b731e9739c9a1aa2.scope: Deactivated successfully.
Nov 25 17:39:08 compute-0 amazing_brahmagupta[442348]: 167 167
Nov 25 17:39:08 compute-0 conmon[442348]: conmon f859571eb8f84b7b8999 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f859571eb8f84b7b8999fba97a6e8cc989125e186d81a305b731e9739c9a1aa2.scope/container/memory.events
Nov 25 17:39:08 compute-0 podman[442331]: 2025-11-25 17:39:08.915522635 +0000 UTC m=+0.199828371 container died f859571eb8f84b7b8999fba97a6e8cc989125e186d81a305b731e9739c9a1aa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_brahmagupta, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:39:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-af7e5ed73d35694b3061cb98a0d57b205a0e0b3b3025b2865c12838145e6dd3d-merged.mount: Deactivated successfully.
Nov 25 17:39:08 compute-0 podman[442331]: 2025-11-25 17:39:08.982277922 +0000 UTC m=+0.266583628 container remove f859571eb8f84b7b8999fba97a6e8cc989125e186d81a305b731e9739c9a1aa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_brahmagupta, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:39:09 compute-0 systemd[1]: libpod-conmon-f859571eb8f84b7b8999fba97a6e8cc989125e186d81a305b731e9739c9a1aa2.scope: Deactivated successfully.
Nov 25 17:39:09 compute-0 podman[442371]: 2025-11-25 17:39:09.20260677 +0000 UTC m=+0.048087790 container create 0db8099ffacd6b387b2f9d9223d9de0d8ff1f8d7e4fdc830dadd0aecc3757f63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_goldberg, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:39:09 compute-0 systemd[1]: Started libpod-conmon-0db8099ffacd6b387b2f9d9223d9de0d8ff1f8d7e4fdc830dadd0aecc3757f63.scope.
Nov 25 17:39:09 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:39:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3290c8d254e30fe8a6c7cee3c72438272b64fdb049ed58b7fca2b6c63d26999b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:39:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3290c8d254e30fe8a6c7cee3c72438272b64fdb049ed58b7fca2b6c63d26999b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:39:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3290c8d254e30fe8a6c7cee3c72438272b64fdb049ed58b7fca2b6c63d26999b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:39:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3290c8d254e30fe8a6c7cee3c72438272b64fdb049ed58b7fca2b6c63d26999b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:39:09 compute-0 podman[442371]: 2025-11-25 17:39:09.182906704 +0000 UTC m=+0.028387764 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:39:09 compute-0 podman[442371]: 2025-11-25 17:39:09.290701028 +0000 UTC m=+0.136182088 container init 0db8099ffacd6b387b2f9d9223d9de0d8ff1f8d7e4fdc830dadd0aecc3757f63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_goldberg, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 17:39:09 compute-0 podman[442371]: 2025-11-25 17:39:09.298686156 +0000 UTC m=+0.144167166 container start 0db8099ffacd6b387b2f9d9223d9de0d8ff1f8d7e4fdc830dadd0aecc3757f63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 17:39:09 compute-0 podman[442371]: 2025-11-25 17:39:09.306473618 +0000 UTC m=+0.151954678 container attach 0db8099ffacd6b387b2f9d9223d9de0d8ff1f8d7e4fdc830dadd0aecc3757f63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_goldberg, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Nov 25 17:39:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3491: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:39:10 compute-0 magical_goldberg[442387]: {
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:     "0": [
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:         {
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:             "devices": [
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:                 "/dev/loop3"
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:             ],
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:             "lv_name": "ceph_lv0",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:             "lv_size": "21470642176",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:             "name": "ceph_lv0",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:             "tags": {
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:                 "ceph.cluster_name": "ceph",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:                 "ceph.crush_device_class": "",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:                 "ceph.encrypted": "0",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:                 "ceph.osd_id": "0",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:                 "ceph.type": "block",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:                 "ceph.vdo": "0"
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:             },
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:             "type": "block",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:             "vg_name": "ceph_vg0"
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:         }
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:     ],
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:     "1": [
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:         {
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:             "devices": [
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:                 "/dev/loop4"
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:             ],
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:             "lv_name": "ceph_lv1",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:             "lv_size": "21470642176",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:             "name": "ceph_lv1",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:             "tags": {
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:                 "ceph.cluster_name": "ceph",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:                 "ceph.crush_device_class": "",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:                 "ceph.encrypted": "0",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:                 "ceph.osd_id": "1",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:                 "ceph.type": "block",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:                 "ceph.vdo": "0"
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:             },
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:             "type": "block",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:             "vg_name": "ceph_vg1"
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:         }
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:     ],
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:     "2": [
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:         {
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:             "devices": [
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:                 "/dev/loop5"
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:             ],
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:             "lv_name": "ceph_lv2",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:             "lv_size": "21470642176",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:             "name": "ceph_lv2",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:             "tags": {
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:                 "ceph.cluster_name": "ceph",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:                 "ceph.crush_device_class": "",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:                 "ceph.encrypted": "0",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:                 "ceph.osd_id": "2",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:                 "ceph.type": "block",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:                 "ceph.vdo": "0"
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:             },
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:             "type": "block",
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:             "vg_name": "ceph_vg2"
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:         }
Nov 25 17:39:10 compute-0 magical_goldberg[442387]:     ]
Nov 25 17:39:10 compute-0 magical_goldberg[442387]: }
Nov 25 17:39:10 compute-0 systemd[1]: libpod-0db8099ffacd6b387b2f9d9223d9de0d8ff1f8d7e4fdc830dadd0aecc3757f63.scope: Deactivated successfully.
Nov 25 17:39:10 compute-0 podman[442371]: 2025-11-25 17:39:10.080457318 +0000 UTC m=+0.925938338 container died 0db8099ffacd6b387b2f9d9223d9de0d8ff1f8d7e4fdc830dadd0aecc3757f63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:39:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:39:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:39:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:39:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:39:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:39:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:39:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-3290c8d254e30fe8a6c7cee3c72438272b64fdb049ed58b7fca2b6c63d26999b-merged.mount: Deactivated successfully.
Nov 25 17:39:10 compute-0 podman[442371]: 2025-11-25 17:39:10.36149878 +0000 UTC m=+1.206979800 container remove 0db8099ffacd6b387b2f9d9223d9de0d8ff1f8d7e4fdc830dadd0aecc3757f63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:39:10 compute-0 systemd[1]: libpod-conmon-0db8099ffacd6b387b2f9d9223d9de0d8ff1f8d7e4fdc830dadd0aecc3757f63.scope: Deactivated successfully.
Nov 25 17:39:10 compute-0 sudo[442266]: pam_unix(sudo:session): session closed for user root
Nov 25 17:39:10 compute-0 sudo[442410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:39:10 compute-0 sudo[442410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:39:10 compute-0 sudo[442410]: pam_unix(sudo:session): session closed for user root
Nov 25 17:39:10 compute-0 sudo[442435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:39:10 compute-0 sudo[442435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:39:10 compute-0 sudo[442435]: pam_unix(sudo:session): session closed for user root
Nov 25 17:39:10 compute-0 sudo[442460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:39:10 compute-0 sudo[442460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:39:10 compute-0 sudo[442460]: pam_unix(sudo:session): session closed for user root
Nov 25 17:39:10 compute-0 sudo[442485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:39:10 compute-0 sudo[442485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:39:10 compute-0 ceph-mon[74985]: pgmap v3491: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:10 compute-0 nova_compute[254092]: 2025-11-25 17:39:10.813 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:39:11 compute-0 podman[442550]: 2025-11-25 17:39:11.13253489 +0000 UTC m=+0.089545369 container create 8d160eb83e242ede2ec32b3081c80740970b137713521d438777683ce3d348b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_chatterjee, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:39:11 compute-0 podman[442550]: 2025-11-25 17:39:11.064284142 +0000 UTC m=+0.021294641 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:39:11 compute-0 systemd[1]: Started libpod-conmon-8d160eb83e242ede2ec32b3081c80740970b137713521d438777683ce3d348b6.scope.
Nov 25 17:39:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:39:11 compute-0 podman[442550]: 2025-11-25 17:39:11.350791842 +0000 UTC m=+0.307802321 container init 8d160eb83e242ede2ec32b3081c80740970b137713521d438777683ce3d348b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_chatterjee, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 17:39:11 compute-0 podman[442550]: 2025-11-25 17:39:11.361226136 +0000 UTC m=+0.318236655 container start 8d160eb83e242ede2ec32b3081c80740970b137713521d438777683ce3d348b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_chatterjee, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:39:11 compute-0 dreamy_chatterjee[442566]: 167 167
Nov 25 17:39:11 compute-0 systemd[1]: libpod-8d160eb83e242ede2ec32b3081c80740970b137713521d438777683ce3d348b6.scope: Deactivated successfully.
Nov 25 17:39:11 compute-0 conmon[442566]: conmon 8d160eb83e242ede2ec3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8d160eb83e242ede2ec32b3081c80740970b137713521d438777683ce3d348b6.scope/container/memory.events
Nov 25 17:39:11 compute-0 podman[442550]: 2025-11-25 17:39:11.438009066 +0000 UTC m=+0.395019585 container attach 8d160eb83e242ede2ec32b3081c80740970b137713521d438777683ce3d348b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_chatterjee, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 17:39:11 compute-0 podman[442550]: 2025-11-25 17:39:11.438613162 +0000 UTC m=+0.395623681 container died 8d160eb83e242ede2ec32b3081c80740970b137713521d438777683ce3d348b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 17:39:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3492: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e592d8ea9847b47b87b04a061b70d718b501a3d3b2eec9b14812d47c8bb700d-merged.mount: Deactivated successfully.
Nov 25 17:39:12 compute-0 podman[442550]: 2025-11-25 17:39:12.066889327 +0000 UTC m=+1.023899836 container remove 8d160eb83e242ede2ec32b3081c80740970b137713521d438777683ce3d348b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_chatterjee, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 17:39:12 compute-0 ceph-mon[74985]: pgmap v3492: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:12 compute-0 systemd[1]: libpod-conmon-8d160eb83e242ede2ec32b3081c80740970b137713521d438777683ce3d348b6.scope: Deactivated successfully.
Nov 25 17:39:12 compute-0 podman[442589]: 2025-11-25 17:39:12.37667218 +0000 UTC m=+0.114625381 container create 7bee5cee9996a95c9604728ccd5fe66a7adc9c4639d6f81f66a350aa16716a4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:39:12 compute-0 podman[442589]: 2025-11-25 17:39:12.305552295 +0000 UTC m=+0.043505516 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:39:12 compute-0 systemd[1]: Started libpod-conmon-7bee5cee9996a95c9604728ccd5fe66a7adc9c4639d6f81f66a350aa16716a4d.scope.
Nov 25 17:39:12 compute-0 nova_compute[254092]: 2025-11-25 17:39:12.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:39:12 compute-0 nova_compute[254092]: 2025-11-25 17:39:12.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 17:39:12 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:39:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/331ffd9d42e045042c52b8a0c8e786a97441362d49e3f4918f9abb717e057a77/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:39:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/331ffd9d42e045042c52b8a0c8e786a97441362d49e3f4918f9abb717e057a77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:39:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/331ffd9d42e045042c52b8a0c8e786a97441362d49e3f4918f9abb717e057a77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:39:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/331ffd9d42e045042c52b8a0c8e786a97441362d49e3f4918f9abb717e057a77/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:39:12 compute-0 podman[442589]: 2025-11-25 17:39:12.550618236 +0000 UTC m=+0.288571467 container init 7bee5cee9996a95c9604728ccd5fe66a7adc9c4639d6f81f66a350aa16716a4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:39:12 compute-0 podman[442589]: 2025-11-25 17:39:12.563144337 +0000 UTC m=+0.301097558 container start 7bee5cee9996a95c9604728ccd5fe66a7adc9c4639d6f81f66a350aa16716a4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chandrasekhar, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 17:39:12 compute-0 podman[442589]: 2025-11-25 17:39:12.625202656 +0000 UTC m=+0.363156147 container attach 7bee5cee9996a95c9604728ccd5fe66a7adc9c4639d6f81f66a350aa16716a4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 17:39:13 compute-0 charming_chandrasekhar[442606]: {
Nov 25 17:39:13 compute-0 charming_chandrasekhar[442606]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:39:13 compute-0 charming_chandrasekhar[442606]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:39:13 compute-0 charming_chandrasekhar[442606]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:39:13 compute-0 charming_chandrasekhar[442606]:         "osd_id": 1,
Nov 25 17:39:13 compute-0 charming_chandrasekhar[442606]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:39:13 compute-0 charming_chandrasekhar[442606]:         "type": "bluestore"
Nov 25 17:39:13 compute-0 charming_chandrasekhar[442606]:     },
Nov 25 17:39:13 compute-0 charming_chandrasekhar[442606]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:39:13 compute-0 charming_chandrasekhar[442606]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:39:13 compute-0 charming_chandrasekhar[442606]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:39:13 compute-0 charming_chandrasekhar[442606]:         "osd_id": 2,
Nov 25 17:39:13 compute-0 charming_chandrasekhar[442606]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:39:13 compute-0 charming_chandrasekhar[442606]:         "type": "bluestore"
Nov 25 17:39:13 compute-0 charming_chandrasekhar[442606]:     },
Nov 25 17:39:13 compute-0 charming_chandrasekhar[442606]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:39:13 compute-0 charming_chandrasekhar[442606]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:39:13 compute-0 charming_chandrasekhar[442606]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:39:13 compute-0 charming_chandrasekhar[442606]:         "osd_id": 0,
Nov 25 17:39:13 compute-0 charming_chandrasekhar[442606]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:39:13 compute-0 charming_chandrasekhar[442606]:         "type": "bluestore"
Nov 25 17:39:13 compute-0 charming_chandrasekhar[442606]:     }
Nov 25 17:39:13 compute-0 charming_chandrasekhar[442606]: }
Nov 25 17:39:13 compute-0 systemd[1]: libpod-7bee5cee9996a95c9604728ccd5fe66a7adc9c4639d6f81f66a350aa16716a4d.scope: Deactivated successfully.
Nov 25 17:39:13 compute-0 podman[442589]: 2025-11-25 17:39:13.567267153 +0000 UTC m=+1.305220354 container died 7bee5cee9996a95c9604728ccd5fe66a7adc9c4639d6f81f66a350aa16716a4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chandrasekhar, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:39:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3493: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:39:13.679 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:39:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:39:13.680 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:39:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:39:13.680 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:39:13 compute-0 nova_compute[254092]: 2025-11-25 17:39:13.849 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:39:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-331ffd9d42e045042c52b8a0c8e786a97441362d49e3f4918f9abb717e057a77-merged.mount: Deactivated successfully.
Nov 25 17:39:14 compute-0 podman[442589]: 2025-11-25 17:39:14.280387067 +0000 UTC m=+2.018340268 container remove 7bee5cee9996a95c9604728ccd5fe66a7adc9c4639d6f81f66a350aa16716a4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chandrasekhar, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:39:14 compute-0 sudo[442485]: pam_unix(sudo:session): session closed for user root
Nov 25 17:39:14 compute-0 systemd[1]: libpod-conmon-7bee5cee9996a95c9604728ccd5fe66a7adc9c4639d6f81f66a350aa16716a4d.scope: Deactivated successfully.
Nov 25 17:39:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:39:14 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:39:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:39:14 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:39:14 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 5c4caf6e-2f8b-4365-b92c-880a73e06c40 does not exist
Nov 25 17:39:14 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 0a5758d5-bdd4-4429-a74b-906eade08a53 does not exist
Nov 25 17:39:14 compute-0 sudo[442651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:39:14 compute-0 sudo[442651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:39:14 compute-0 sudo[442651]: pam_unix(sudo:session): session closed for user root
Nov 25 17:39:14 compute-0 sudo[442676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:39:14 compute-0 sudo[442676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:39:14 compute-0 sudo[442676]: pam_unix(sudo:session): session closed for user root
Nov 25 17:39:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:39:14 compute-0 ceph-mon[74985]: pgmap v3493: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:14 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:39:14 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:39:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3494: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:15 compute-0 nova_compute[254092]: 2025-11-25 17:39:15.817 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:39:15 compute-0 ceph-mon[74985]: pgmap v3494: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:16 compute-0 nova_compute[254092]: 2025-11-25 17:39:16.518 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:39:16 compute-0 nova_compute[254092]: 2025-11-25 17:39:16.518 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 17:39:16 compute-0 nova_compute[254092]: 2025-11-25 17:39:16.542 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 17:39:17 compute-0 nova_compute[254092]: 2025-11-25 17:39:17.519 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:39:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3495: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:18 compute-0 nova_compute[254092]: 2025-11-25 17:39:18.853 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:39:18 compute-0 ceph-mon[74985]: pgmap v3495: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3496: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:39:20 compute-0 ceph-mon[74985]: pgmap v3496: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:20 compute-0 nova_compute[254092]: 2025-11-25 17:39:20.819 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:39:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3497: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:22 compute-0 ceph-mon[74985]: pgmap v3497: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3498: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:23 compute-0 nova_compute[254092]: 2025-11-25 17:39:23.855 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:39:24 compute-0 ceph-mon[74985]: pgmap v3498: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:39:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3499: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:25 compute-0 nova_compute[254092]: 2025-11-25 17:39:25.820 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:39:26 compute-0 ceph-mon[74985]: pgmap v3499: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3500: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:28 compute-0 ceph-mon[74985]: pgmap v3500: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:28 compute-0 nova_compute[254092]: 2025-11-25 17:39:28.896 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:39:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3501: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:39:30 compute-0 nova_compute[254092]: 2025-11-25 17:39:30.822 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:39:30 compute-0 ceph-mon[74985]: pgmap v3501: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:31 compute-0 nova_compute[254092]: 2025-11-25 17:39:31.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:39:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3502: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:32 compute-0 ceph-mon[74985]: pgmap v3502: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3503: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:33 compute-0 nova_compute[254092]: 2025-11-25 17:39:33.898 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:39:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:39:34 compute-0 ceph-mon[74985]: pgmap v3503: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3504: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:35 compute-0 nova_compute[254092]: 2025-11-25 17:39:35.823 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:39:36 compute-0 ceph-mon[74985]: pgmap v3504: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3505: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:37 compute-0 ceph-mon[74985]: pgmap v3505: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:38 compute-0 podman[442702]: 2025-11-25 17:39:38.690993521 +0000 UTC m=+0.094911304 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 25 17:39:38 compute-0 podman[442701]: 2025-11-25 17:39:38.703471291 +0000 UTC m=+0.108057803 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:39:38 compute-0 podman[442703]: 2025-11-25 17:39:38.744613851 +0000 UTC m=+0.145004518 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251118)
Nov 25 17:39:38 compute-0 nova_compute[254092]: 2025-11-25 17:39:38.922 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:39:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3506: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:39:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:39:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:39:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:39:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:39:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:39:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:39:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:39:40
Nov 25 17:39:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:39:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:39:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'default.rgw.log', 'default.rgw.control', 'vms', 'volumes', '.mgr', 'backups', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta']
Nov 25 17:39:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:39:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:39:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:39:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:39:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:39:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:39:40 compute-0 ceph-mon[74985]: pgmap v3506: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:39:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:39:40 compute-0 nova_compute[254092]: 2025-11-25 17:39:40.826 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:39:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:39:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:39:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:39:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3507: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:42 compute-0 ceph-mon[74985]: pgmap v3507: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3508: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:43 compute-0 nova_compute[254092]: 2025-11-25 17:39:43.923 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:39:44 compute-0 ceph-mon[74985]: pgmap v3508: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:39:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3509: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:45 compute-0 nova_compute[254092]: 2025-11-25 17:39:45.827 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:39:46 compute-0 ceph-mon[74985]: pgmap v3509: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3510: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:48 compute-0 ceph-mon[74985]: pgmap v3510: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:48 compute-0 nova_compute[254092]: 2025-11-25 17:39:48.981 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:39:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3511: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:39:50 compute-0 ceph-mon[74985]: pgmap v3511: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:50 compute-0 nova_compute[254092]: 2025-11-25 17:39:50.828 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:39:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3512: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:39:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:39:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:39:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:39:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 17:39:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:39:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:39:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:39:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:39:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:39:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 17:39:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:39:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:39:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:39:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:39:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:39:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:39:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:39:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:39:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:39:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:39:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:39:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:39:52 compute-0 ceph-mon[74985]: pgmap v3512: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3513: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:53 compute-0 nova_compute[254092]: 2025-11-25 17:39:53.982 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:39:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:39:54 compute-0 ceph-mon[74985]: pgmap v3513: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:39:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3355613458' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:39:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:39:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3355613458' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:39:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3514: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:55 compute-0 nova_compute[254092]: 2025-11-25 17:39:55.855 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:39:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/3355613458' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:39:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/3355613458' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:39:56 compute-0 ceph-mon[74985]: pgmap v3514: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3515: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:57 compute-0 ceph-mon[74985]: pgmap v3515: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:58 compute-0 nova_compute[254092]: 2025-11-25 17:39:58.984 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:39:59 compute-0 nova_compute[254092]: 2025-11-25 17:39:59.511 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:39:59 compute-0 nova_compute[254092]: 2025-11-25 17:39:59.512 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:39:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3516: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:39:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:39:59 compute-0 ceph-mon[74985]: pgmap v3516: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:00 compute-0 nova_compute[254092]: 2025-11-25 17:40:00.935 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:40:01 compute-0 nova_compute[254092]: 2025-11-25 17:40:01.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:40:01 compute-0 nova_compute[254092]: 2025-11-25 17:40:01.524 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:40:01 compute-0 nova_compute[254092]: 2025-11-25 17:40:01.524 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:40:01 compute-0 nova_compute[254092]: 2025-11-25 17:40:01.524 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:40:01 compute-0 nova_compute[254092]: 2025-11-25 17:40:01.524 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:40:01 compute-0 nova_compute[254092]: 2025-11-25 17:40:01.525 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:40:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3517: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:40:01 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2769921081' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:40:01 compute-0 nova_compute[254092]: 2025-11-25 17:40:01.975 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:40:02 compute-0 nova_compute[254092]: 2025-11-25 17:40:02.165 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:40:02 compute-0 nova_compute[254092]: 2025-11-25 17:40:02.167 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3636MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:40:02 compute-0 nova_compute[254092]: 2025-11-25 17:40:02.167 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:40:02 compute-0 nova_compute[254092]: 2025-11-25 17:40:02.168 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:40:02 compute-0 nova_compute[254092]: 2025-11-25 17:40:02.253 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:40:02 compute-0 nova_compute[254092]: 2025-11-25 17:40:02.253 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:40:02 compute-0 nova_compute[254092]: 2025-11-25 17:40:02.296 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:40:02 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:40:02 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2543408170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:40:02 compute-0 nova_compute[254092]: 2025-11-25 17:40:02.747 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:40:02 compute-0 nova_compute[254092]: 2025-11-25 17:40:02.752 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:40:02 compute-0 nova_compute[254092]: 2025-11-25 17:40:02.763 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:40:02 compute-0 nova_compute[254092]: 2025-11-25 17:40:02.765 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:40:02 compute-0 nova_compute[254092]: 2025-11-25 17:40:02.766 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:40:02 compute-0 ceph-mon[74985]: pgmap v3517: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:02 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2769921081' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:40:02 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2543408170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:40:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3518: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:03 compute-0 nova_compute[254092]: 2025-11-25 17:40:03.987 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:40:04 compute-0 nova_compute[254092]: 2025-11-25 17:40:04.767 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:40:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:40:04 compute-0 ceph-mon[74985]: pgmap v3518: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:04 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #168. Immutable memtables: 0.
Nov 25 17:40:04 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:40:04.939234) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 17:40:04 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 168
Nov 25 17:40:04 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092404939275, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 1593, "num_deletes": 251, "total_data_size": 2580828, "memory_usage": 2624384, "flush_reason": "Manual Compaction"}
Nov 25 17:40:04 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #169: started
Nov 25 17:40:04 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092404959140, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 169, "file_size": 2545220, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 71275, "largest_seqno": 72867, "table_properties": {"data_size": 2537733, "index_size": 4493, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15097, "raw_average_key_size": 19, "raw_value_size": 2522906, "raw_average_value_size": 3332, "num_data_blocks": 200, "num_entries": 757, "num_filter_entries": 757, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764092229, "oldest_key_time": 1764092229, "file_creation_time": 1764092404, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 169, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:40:04 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 19952 microseconds, and 10254 cpu microseconds.
Nov 25 17:40:04 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:40:04 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:40:04.959185) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #169: 2545220 bytes OK
Nov 25 17:40:04 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:40:04.959206) [db/memtable_list.cc:519] [default] Level-0 commit table #169 started
Nov 25 17:40:04 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:40:04.960933) [db/memtable_list.cc:722] [default] Level-0 commit table #169: memtable #1 done
Nov 25 17:40:04 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:40:04.960946) EVENT_LOG_v1 {"time_micros": 1764092404960942, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 17:40:04 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:40:04.960962) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 17:40:04 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 2573938, prev total WAL file size 2573938, number of live WAL files 2.
Nov 25 17:40:04 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000165.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:40:04 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:40:04.961711) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037303238' seq:72057594037927935, type:22 .. '7061786F730037323830' seq:0, type:0; will stop at (end)
Nov 25 17:40:04 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 17:40:04 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [169(2485KB)], [167(10152KB)]
Nov 25 17:40:04 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092404961773, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [169], "files_L6": [167], "score": -1, "input_data_size": 12941752, "oldest_snapshot_seqno": -1}
Nov 25 17:40:05 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #170: 9008 keys, 11214481 bytes, temperature: kUnknown
Nov 25 17:40:05 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092405027176, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 170, "file_size": 11214481, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11156630, "index_size": 34250, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22533, "raw_key_size": 237090, "raw_average_key_size": 26, "raw_value_size": 10998145, "raw_average_value_size": 1220, "num_data_blocks": 1324, "num_entries": 9008, "num_filter_entries": 9008, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764092404, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:40:05 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:40:05 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:40:05.027425) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 11214481 bytes
Nov 25 17:40:05 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:40:05.028679) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 197.7 rd, 171.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 9.9 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(9.5) write-amplify(4.4) OK, records in: 9522, records dropped: 514 output_compression: NoCompression
Nov 25 17:40:05 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:40:05.028699) EVENT_LOG_v1 {"time_micros": 1764092405028689, "job": 104, "event": "compaction_finished", "compaction_time_micros": 65470, "compaction_time_cpu_micros": 29115, "output_level": 6, "num_output_files": 1, "total_output_size": 11214481, "num_input_records": 9522, "num_output_records": 9008, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 17:40:05 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000169.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:40:05 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092405029331, "job": 104, "event": "table_file_deletion", "file_number": 169}
Nov 25 17:40:05 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:40:05 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092405031510, "job": 104, "event": "table_file_deletion", "file_number": 167}
Nov 25 17:40:05 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:40:04.961591) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:40:05 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:40:05.031624) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:40:05 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:40:05.031630) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:40:05 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:40:05.031653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:40:05 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:40:05.031655) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:40:05 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:40:05.031657) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:40:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3519: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:05 compute-0 nova_compute[254092]: 2025-11-25 17:40:05.937 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:40:05 compute-0 ceph-mon[74985]: pgmap v3519: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:06 compute-0 nova_compute[254092]: 2025-11-25 17:40:06.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:40:07 compute-0 nova_compute[254092]: 2025-11-25 17:40:07.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:40:07 compute-0 nova_compute[254092]: 2025-11-25 17:40:07.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:40:07 compute-0 nova_compute[254092]: 2025-11-25 17:40:07.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:40:07 compute-0 nova_compute[254092]: 2025-11-25 17:40:07.514 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 17:40:07 compute-0 nova_compute[254092]: 2025-11-25 17:40:07.515 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:40:07 compute-0 nova_compute[254092]: 2025-11-25 17:40:07.515 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:40:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3520: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:08 compute-0 ceph-mon[74985]: pgmap v3520: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:08 compute-0 nova_compute[254092]: 2025-11-25 17:40:08.989 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:40:09 compute-0 nova_compute[254092]: 2025-11-25 17:40:09.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:40:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3521: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:09 compute-0 podman[442809]: 2025-11-25 17:40:09.687615041 +0000 UTC m=+0.093310741 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 17:40:09 compute-0 podman[442810]: 2025-11-25 17:40:09.688784513 +0000 UTC m=+0.088765958 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Nov 25 17:40:09 compute-0 podman[442811]: 2025-11-25 17:40:09.737715405 +0000 UTC m=+0.140351242 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 17:40:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:40:10 compute-0 ceph-mon[74985]: pgmap v3521: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:40:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:40:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:40:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:40:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:40:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:40:10 compute-0 nova_compute[254092]: 2025-11-25 17:40:10.939 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:40:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3522: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:12 compute-0 ceph-mon[74985]: pgmap v3522: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:13 compute-0 sshd-session[442874]: Connection closed by authenticating user root 171.244.51.45 port 37394 [preauth]
Nov 25 17:40:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3523: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:40:13.681 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:40:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:40:13.682 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:40:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:40:13.682 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:40:14 compute-0 nova_compute[254092]: 2025-11-25 17:40:14.030 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:40:14 compute-0 sudo[442876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:40:14 compute-0 sudo[442876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:40:14 compute-0 sudo[442876]: pam_unix(sudo:session): session closed for user root
Nov 25 17:40:14 compute-0 ceph-mon[74985]: pgmap v3523: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:14 compute-0 sudo[442901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:40:14 compute-0 sudo[442901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:40:14 compute-0 sudo[442901]: pam_unix(sudo:session): session closed for user root
Nov 25 17:40:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:40:14 compute-0 sudo[442926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:40:14 compute-0 sudo[442926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:40:14 compute-0 sudo[442926]: pam_unix(sudo:session): session closed for user root
Nov 25 17:40:14 compute-0 sudo[442951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 25 17:40:14 compute-0 sudo[442951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:40:15 compute-0 podman[443049]: 2025-11-25 17:40:15.424240153 +0000 UTC m=+0.062926695 container exec 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:40:15 compute-0 podman[443049]: 2025-11-25 17:40:15.539241503 +0000 UTC m=+0.177928025 container exec_died 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:40:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3524: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:15 compute-0 nova_compute[254092]: 2025-11-25 17:40:15.940 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:40:16 compute-0 sudo[442951]: pam_unix(sudo:session): session closed for user root
Nov 25 17:40:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:40:16 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:40:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:40:16 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:40:16 compute-0 sudo[443204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:40:16 compute-0 sudo[443204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:40:16 compute-0 sudo[443204]: pam_unix(sudo:session): session closed for user root
Nov 25 17:40:16 compute-0 sudo[443229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:40:16 compute-0 sudo[443229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:40:16 compute-0 sudo[443229]: pam_unix(sudo:session): session closed for user root
Nov 25 17:40:16 compute-0 sudo[443254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:40:16 compute-0 sudo[443254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:40:16 compute-0 sudo[443254]: pam_unix(sudo:session): session closed for user root
Nov 25 17:40:16 compute-0 sudo[443279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:40:16 compute-0 sudo[443279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:40:16 compute-0 ceph-mon[74985]: pgmap v3524: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:16 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:40:16 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:40:17 compute-0 sudo[443279]: pam_unix(sudo:session): session closed for user root
Nov 25 17:40:17 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:40:17 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:40:17 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:40:17 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:40:17 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:40:17 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:40:17 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 93c65c62-ac1c-4266-b62d-cfe844726382 does not exist
Nov 25 17:40:17 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 89099479-2f0f-4907-aaf4-8bd6366e7a96 does not exist
Nov 25 17:40:17 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 93bf0c59-554c-4c46-b91b-e5e35fa8f057 does not exist
Nov 25 17:40:17 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:40:17 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:40:17 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:40:17 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:40:17 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:40:17 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:40:17 compute-0 sudo[443335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:40:17 compute-0 sudo[443335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:40:17 compute-0 sudo[443335]: pam_unix(sudo:session): session closed for user root
Nov 25 17:40:17 compute-0 sudo[443360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:40:17 compute-0 sudo[443360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:40:17 compute-0 sudo[443360]: pam_unix(sudo:session): session closed for user root
Nov 25 17:40:17 compute-0 sudo[443385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:40:17 compute-0 sudo[443385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:40:17 compute-0 sudo[443385]: pam_unix(sudo:session): session closed for user root
Nov 25 17:40:17 compute-0 nova_compute[254092]: 2025-11-25 17:40:17.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:40:17 compute-0 sudo[443410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:40:17 compute-0 sudo[443410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:40:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3525: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:17 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:40:17 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:40:17 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:40:17 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:40:17 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:40:17 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:40:17 compute-0 podman[443475]: 2025-11-25 17:40:17.849283261 +0000 UTC m=+0.052479019 container create acef1a26b8b38fa9939cc90b34e4a2a358366f29971aa0b5ff7ae108bcb9588e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_black, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 17:40:17 compute-0 systemd[1]: Started libpod-conmon-acef1a26b8b38fa9939cc90b34e4a2a358366f29971aa0b5ff7ae108bcb9588e.scope.
Nov 25 17:40:17 compute-0 podman[443475]: 2025-11-25 17:40:17.821103054 +0000 UTC m=+0.024298852 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:40:17 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:40:18 compute-0 podman[443475]: 2025-11-25 17:40:18.000273841 +0000 UTC m=+0.203469609 container init acef1a26b8b38fa9939cc90b34e4a2a358366f29971aa0b5ff7ae108bcb9588e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_black, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:40:18 compute-0 podman[443475]: 2025-11-25 17:40:18.008089434 +0000 UTC m=+0.211285192 container start acef1a26b8b38fa9939cc90b34e4a2a358366f29971aa0b5ff7ae108bcb9588e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:40:18 compute-0 podman[443475]: 2025-11-25 17:40:18.011275781 +0000 UTC m=+0.214471539 container attach acef1a26b8b38fa9939cc90b34e4a2a358366f29971aa0b5ff7ae108bcb9588e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 17:40:18 compute-0 optimistic_black[443491]: 167 167
Nov 25 17:40:18 compute-0 systemd[1]: libpod-acef1a26b8b38fa9939cc90b34e4a2a358366f29971aa0b5ff7ae108bcb9588e.scope: Deactivated successfully.
Nov 25 17:40:18 compute-0 podman[443475]: 2025-11-25 17:40:18.016711649 +0000 UTC m=+0.219907437 container died acef1a26b8b38fa9939cc90b34e4a2a358366f29971aa0b5ff7ae108bcb9588e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 17:40:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5e04389922ea911b2381835f99eef2d5a9d0e11033204723007a20d921a9696-merged.mount: Deactivated successfully.
Nov 25 17:40:18 compute-0 podman[443475]: 2025-11-25 17:40:18.0619253 +0000 UTC m=+0.265121058 container remove acef1a26b8b38fa9939cc90b34e4a2a358366f29971aa0b5ff7ae108bcb9588e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_black, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 17:40:18 compute-0 systemd[1]: libpod-conmon-acef1a26b8b38fa9939cc90b34e4a2a358366f29971aa0b5ff7ae108bcb9588e.scope: Deactivated successfully.
Nov 25 17:40:18 compute-0 podman[443514]: 2025-11-25 17:40:18.276113061 +0000 UTC m=+0.063518710 container create 53e53bf6053832a09dec39c7dbcc61f8114731867f5569dcbb3a235583c4f821 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_noether, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 17:40:18 compute-0 systemd[1]: Started libpod-conmon-53e53bf6053832a09dec39c7dbcc61f8114731867f5569dcbb3a235583c4f821.scope.
Nov 25 17:40:18 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:40:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec6199c02be017506e8061f4b3ee50fde616856b0cf99888ecd8b0c363fcb038/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:40:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec6199c02be017506e8061f4b3ee50fde616856b0cf99888ecd8b0c363fcb038/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:40:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec6199c02be017506e8061f4b3ee50fde616856b0cf99888ecd8b0c363fcb038/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:40:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec6199c02be017506e8061f4b3ee50fde616856b0cf99888ecd8b0c363fcb038/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:40:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec6199c02be017506e8061f4b3ee50fde616856b0cf99888ecd8b0c363fcb038/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:40:18 compute-0 podman[443514]: 2025-11-25 17:40:18.256694862 +0000 UTC m=+0.044100541 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:40:18 compute-0 podman[443514]: 2025-11-25 17:40:18.355479381 +0000 UTC m=+0.142885030 container init 53e53bf6053832a09dec39c7dbcc61f8114731867f5569dcbb3a235583c4f821 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_noether, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Nov 25 17:40:18 compute-0 podman[443514]: 2025-11-25 17:40:18.369139443 +0000 UTC m=+0.156545092 container start 53e53bf6053832a09dec39c7dbcc61f8114731867f5569dcbb3a235583c4f821 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_noether, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 17:40:18 compute-0 podman[443514]: 2025-11-25 17:40:18.372089934 +0000 UTC m=+0.159495603 container attach 53e53bf6053832a09dec39c7dbcc61f8114731867f5569dcbb3a235583c4f821 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_noether, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 17:40:18 compute-0 ceph-mon[74985]: pgmap v3525: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:19 compute-0 nova_compute[254092]: 2025-11-25 17:40:19.088 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:40:19 compute-0 recursing_noether[443530]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:40:19 compute-0 recursing_noether[443530]: --> relative data size: 1.0
Nov 25 17:40:19 compute-0 recursing_noether[443530]: --> All data devices are unavailable
Nov 25 17:40:19 compute-0 systemd[1]: libpod-53e53bf6053832a09dec39c7dbcc61f8114731867f5569dcbb3a235583c4f821.scope: Deactivated successfully.
Nov 25 17:40:19 compute-0 systemd[1]: libpod-53e53bf6053832a09dec39c7dbcc61f8114731867f5569dcbb3a235583c4f821.scope: Consumed 1.015s CPU time.
Nov 25 17:40:19 compute-0 nova_compute[254092]: 2025-11-25 17:40:19.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:40:19 compute-0 podman[443559]: 2025-11-25 17:40:19.516763656 +0000 UTC m=+0.049399206 container died 53e53bf6053832a09dec39c7dbcc61f8114731867f5569dcbb3a235583c4f821 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_noether, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Nov 25 17:40:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec6199c02be017506e8061f4b3ee50fde616856b0cf99888ecd8b0c363fcb038-merged.mount: Deactivated successfully.
Nov 25 17:40:19 compute-0 podman[443559]: 2025-11-25 17:40:19.62122895 +0000 UTC m=+0.153864440 container remove 53e53bf6053832a09dec39c7dbcc61f8114731867f5569dcbb3a235583c4f821 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 17:40:19 compute-0 systemd[1]: libpod-conmon-53e53bf6053832a09dec39c7dbcc61f8114731867f5569dcbb3a235583c4f821.scope: Deactivated successfully.
Nov 25 17:40:19 compute-0 sudo[443410]: pam_unix(sudo:session): session closed for user root
Nov 25 17:40:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3526: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:19 compute-0 sudo[443574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:40:19 compute-0 sudo[443574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:40:19 compute-0 sudo[443574]: pam_unix(sudo:session): session closed for user root
Nov 25 17:40:19 compute-0 sudo[443599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:40:19 compute-0 sudo[443599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:40:19 compute-0 sudo[443599]: pam_unix(sudo:session): session closed for user root
Nov 25 17:40:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:40:19 compute-0 sudo[443624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:40:19 compute-0 sudo[443624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:40:19 compute-0 sudo[443624]: pam_unix(sudo:session): session closed for user root
Nov 25 17:40:19 compute-0 sudo[443649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:40:19 compute-0 sudo[443649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:40:20 compute-0 podman[443714]: 2025-11-25 17:40:20.335118395 +0000 UTC m=+0.044787151 container create 7fe337d4750bf927fc190a49715252f89f59bffd8fa4e330506e568e5ba45416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 17:40:20 compute-0 systemd[1]: Started libpod-conmon-7fe337d4750bf927fc190a49715252f89f59bffd8fa4e330506e568e5ba45416.scope.
Nov 25 17:40:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:40:20 compute-0 podman[443714]: 2025-11-25 17:40:20.409385806 +0000 UTC m=+0.119054612 container init 7fe337d4750bf927fc190a49715252f89f59bffd8fa4e330506e568e5ba45416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:40:20 compute-0 podman[443714]: 2025-11-25 17:40:20.318484292 +0000 UTC m=+0.028153068 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:40:20 compute-0 podman[443714]: 2025-11-25 17:40:20.421815185 +0000 UTC m=+0.131483951 container start 7fe337d4750bf927fc190a49715252f89f59bffd8fa4e330506e568e5ba45416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ride, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Nov 25 17:40:20 compute-0 podman[443714]: 2025-11-25 17:40:20.425208827 +0000 UTC m=+0.134877593 container attach 7fe337d4750bf927fc190a49715252f89f59bffd8fa4e330506e568e5ba45416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ride, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:40:20 compute-0 intelligent_ride[443731]: 167 167
Nov 25 17:40:20 compute-0 systemd[1]: libpod-7fe337d4750bf927fc190a49715252f89f59bffd8fa4e330506e568e5ba45416.scope: Deactivated successfully.
Nov 25 17:40:20 compute-0 podman[443714]: 2025-11-25 17:40:20.428215089 +0000 UTC m=+0.137883875 container died 7fe337d4750bf927fc190a49715252f89f59bffd8fa4e330506e568e5ba45416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ride, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 17:40:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-c75429d6f1f23969e3973ab35fbc6093cfad6d20f3381b04f2a5e2aad257ee27-merged.mount: Deactivated successfully.
Nov 25 17:40:20 compute-0 podman[443714]: 2025-11-25 17:40:20.484933333 +0000 UTC m=+0.194602119 container remove 7fe337d4750bf927fc190a49715252f89f59bffd8fa4e330506e568e5ba45416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ride, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:40:20 compute-0 systemd[1]: libpod-conmon-7fe337d4750bf927fc190a49715252f89f59bffd8fa4e330506e568e5ba45416.scope: Deactivated successfully.
Nov 25 17:40:20 compute-0 podman[443755]: 2025-11-25 17:40:20.741346264 +0000 UTC m=+0.073640066 container create e67b202cf1a77a2d23fa32277ea77005acf39655fe3111e48c883925a74506d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 25 17:40:20 compute-0 ceph-mon[74985]: pgmap v3526: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:20 compute-0 systemd[1]: Started libpod-conmon-e67b202cf1a77a2d23fa32277ea77005acf39655fe3111e48c883925a74506d7.scope.
Nov 25 17:40:20 compute-0 podman[443755]: 2025-11-25 17:40:20.715437198 +0000 UTC m=+0.047730980 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:40:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:40:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f9cf9e8f24542523aa64e3df688c86f2042969fda97828a4596f5ffe21a9382/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:40:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f9cf9e8f24542523aa64e3df688c86f2042969fda97828a4596f5ffe21a9382/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:40:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f9cf9e8f24542523aa64e3df688c86f2042969fda97828a4596f5ffe21a9382/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:40:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f9cf9e8f24542523aa64e3df688c86f2042969fda97828a4596f5ffe21a9382/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:40:20 compute-0 podman[443755]: 2025-11-25 17:40:20.857575137 +0000 UTC m=+0.189868939 container init e67b202cf1a77a2d23fa32277ea77005acf39655fe3111e48c883925a74506d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:40:20 compute-0 podman[443755]: 2025-11-25 17:40:20.870822288 +0000 UTC m=+0.203116090 container start e67b202cf1a77a2d23fa32277ea77005acf39655fe3111e48c883925a74506d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_boyd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 17:40:20 compute-0 podman[443755]: 2025-11-25 17:40:20.875601479 +0000 UTC m=+0.207895281 container attach e67b202cf1a77a2d23fa32277ea77005acf39655fe3111e48c883925a74506d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_boyd, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 17:40:20 compute-0 nova_compute[254092]: 2025-11-25 17:40:20.943 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:40:21 compute-0 pensive_boyd[443771]: {
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:     "0": [
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:         {
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:             "devices": [
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:                 "/dev/loop3"
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:             ],
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:             "lv_name": "ceph_lv0",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:             "lv_size": "21470642176",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:             "name": "ceph_lv0",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:             "tags": {
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:                 "ceph.cluster_name": "ceph",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:                 "ceph.crush_device_class": "",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:                 "ceph.encrypted": "0",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:                 "ceph.osd_id": "0",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:                 "ceph.type": "block",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:                 "ceph.vdo": "0"
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:             },
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:             "type": "block",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:             "vg_name": "ceph_vg0"
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:         }
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:     ],
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:     "1": [
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:         {
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:             "devices": [
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:                 "/dev/loop4"
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:             ],
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:             "lv_name": "ceph_lv1",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:             "lv_size": "21470642176",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:             "name": "ceph_lv1",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:             "tags": {
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:                 "ceph.cluster_name": "ceph",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:                 "ceph.crush_device_class": "",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:                 "ceph.encrypted": "0",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:                 "ceph.osd_id": "1",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:                 "ceph.type": "block",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:                 "ceph.vdo": "0"
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:             },
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:             "type": "block",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:             "vg_name": "ceph_vg1"
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:         }
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:     ],
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:     "2": [
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:         {
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:             "devices": [
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:                 "/dev/loop5"
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:             ],
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:             "lv_name": "ceph_lv2",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:             "lv_size": "21470642176",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:             "name": "ceph_lv2",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:             "tags": {
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:                 "ceph.cluster_name": "ceph",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:                 "ceph.crush_device_class": "",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:                 "ceph.encrypted": "0",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:                 "ceph.osd_id": "2",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:                 "ceph.type": "block",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:                 "ceph.vdo": "0"
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:             },
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:             "type": "block",
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:             "vg_name": "ceph_vg2"
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:         }
Nov 25 17:40:21 compute-0 pensive_boyd[443771]:     ]
Nov 25 17:40:21 compute-0 pensive_boyd[443771]: }
Nov 25 17:40:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3527: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:21 compute-0 systemd[1]: libpod-e67b202cf1a77a2d23fa32277ea77005acf39655fe3111e48c883925a74506d7.scope: Deactivated successfully.
Nov 25 17:40:21 compute-0 podman[443755]: 2025-11-25 17:40:21.68598562 +0000 UTC m=+1.018279422 container died e67b202cf1a77a2d23fa32277ea77005acf39655fe3111e48c883925a74506d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_boyd, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 17:40:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f9cf9e8f24542523aa64e3df688c86f2042969fda97828a4596f5ffe21a9382-merged.mount: Deactivated successfully.
Nov 25 17:40:21 compute-0 podman[443755]: 2025-11-25 17:40:21.759153942 +0000 UTC m=+1.091447704 container remove e67b202cf1a77a2d23fa32277ea77005acf39655fe3111e48c883925a74506d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_boyd, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 17:40:21 compute-0 systemd[1]: libpod-conmon-e67b202cf1a77a2d23fa32277ea77005acf39655fe3111e48c883925a74506d7.scope: Deactivated successfully.
Nov 25 17:40:21 compute-0 sudo[443649]: pam_unix(sudo:session): session closed for user root
Nov 25 17:40:21 compute-0 sudo[443795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:40:21 compute-0 sudo[443795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:40:21 compute-0 sudo[443795]: pam_unix(sudo:session): session closed for user root
Nov 25 17:40:21 compute-0 sudo[443820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:40:21 compute-0 sudo[443820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:40:21 compute-0 sudo[443820]: pam_unix(sudo:session): session closed for user root
Nov 25 17:40:22 compute-0 sudo[443845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:40:22 compute-0 sudo[443845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:40:22 compute-0 sudo[443845]: pam_unix(sudo:session): session closed for user root
Nov 25 17:40:22 compute-0 sudo[443870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:40:22 compute-0 sudo[443870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:40:22 compute-0 podman[443935]: 2025-11-25 17:40:22.489546036 +0000 UTC m=+0.047684059 container create d7e1480a529a643d0a454a9fe2bd051493848217f6dc993d93155fb654ae107a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bhaskara, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:40:22 compute-0 systemd[1]: Started libpod-conmon-d7e1480a529a643d0a454a9fe2bd051493848217f6dc993d93155fb654ae107a.scope.
Nov 25 17:40:22 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:40:22 compute-0 podman[443935]: 2025-11-25 17:40:22.463692962 +0000 UTC m=+0.021831035 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:40:22 compute-0 podman[443935]: 2025-11-25 17:40:22.568070693 +0000 UTC m=+0.126208706 container init d7e1480a529a643d0a454a9fe2bd051493848217f6dc993d93155fb654ae107a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Nov 25 17:40:22 compute-0 podman[443935]: 2025-11-25 17:40:22.575869906 +0000 UTC m=+0.134007909 container start d7e1480a529a643d0a454a9fe2bd051493848217f6dc993d93155fb654ae107a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:40:22 compute-0 podman[443935]: 2025-11-25 17:40:22.57932021 +0000 UTC m=+0.137458203 container attach d7e1480a529a643d0a454a9fe2bd051493848217f6dc993d93155fb654ae107a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bhaskara, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 17:40:22 compute-0 fervent_bhaskara[443952]: 167 167
Nov 25 17:40:22 compute-0 systemd[1]: libpod-d7e1480a529a643d0a454a9fe2bd051493848217f6dc993d93155fb654ae107a.scope: Deactivated successfully.
Nov 25 17:40:22 compute-0 conmon[443952]: conmon d7e1480a529a643d0a45 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d7e1480a529a643d0a454a9fe2bd051493848217f6dc993d93155fb654ae107a.scope/container/memory.events
Nov 25 17:40:22 compute-0 podman[443935]: 2025-11-25 17:40:22.587608685 +0000 UTC m=+0.145746678 container died d7e1480a529a643d0a454a9fe2bd051493848217f6dc993d93155fb654ae107a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 17:40:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e15deaa9fe9fa526399079d2caca684e7895b7f3ccc0f6ca02041984fb97c27-merged.mount: Deactivated successfully.
Nov 25 17:40:22 compute-0 podman[443935]: 2025-11-25 17:40:22.634226425 +0000 UTC m=+0.192364418 container remove d7e1480a529a643d0a454a9fe2bd051493848217f6dc993d93155fb654ae107a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bhaskara, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:40:22 compute-0 systemd[1]: libpod-conmon-d7e1480a529a643d0a454a9fe2bd051493848217f6dc993d93155fb654ae107a.scope: Deactivated successfully.
Nov 25 17:40:22 compute-0 podman[443976]: 2025-11-25 17:40:22.797712235 +0000 UTC m=+0.046976040 container create 8fcbd13cf7754d04334ce14005eb86d6861273c24432784f1768dcbce5e7d2d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 17:40:22 compute-0 ceph-mon[74985]: pgmap v3527: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:22 compute-0 systemd[1]: Started libpod-conmon-8fcbd13cf7754d04334ce14005eb86d6861273c24432784f1768dcbce5e7d2d8.scope.
Nov 25 17:40:22 compute-0 podman[443976]: 2025-11-25 17:40:22.778529473 +0000 UTC m=+0.027793298 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:40:22 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:40:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a6d52ed4e664a2b6f7f27ccb595a05498c23fc076b022e2c2526bf72991f202/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:40:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a6d52ed4e664a2b6f7f27ccb595a05498c23fc076b022e2c2526bf72991f202/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:40:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a6d52ed4e664a2b6f7f27ccb595a05498c23fc076b022e2c2526bf72991f202/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:40:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a6d52ed4e664a2b6f7f27ccb595a05498c23fc076b022e2c2526bf72991f202/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:40:22 compute-0 podman[443976]: 2025-11-25 17:40:22.921106924 +0000 UTC m=+0.170370749 container init 8fcbd13cf7754d04334ce14005eb86d6861273c24432784f1768dcbce5e7d2d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 17:40:22 compute-0 podman[443976]: 2025-11-25 17:40:22.940496162 +0000 UTC m=+0.189759967 container start 8fcbd13cf7754d04334ce14005eb86d6861273c24432784f1768dcbce5e7d2d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:40:22 compute-0 podman[443976]: 2025-11-25 17:40:22.944230194 +0000 UTC m=+0.193494029 container attach 8fcbd13cf7754d04334ce14005eb86d6861273c24432784f1768dcbce5e7d2d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_ganguly, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 17:40:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3528: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:24 compute-0 serene_ganguly[443992]: {
Nov 25 17:40:24 compute-0 serene_ganguly[443992]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:40:24 compute-0 serene_ganguly[443992]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:40:24 compute-0 serene_ganguly[443992]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:40:24 compute-0 serene_ganguly[443992]:         "osd_id": 1,
Nov 25 17:40:24 compute-0 serene_ganguly[443992]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:40:24 compute-0 serene_ganguly[443992]:         "type": "bluestore"
Nov 25 17:40:24 compute-0 serene_ganguly[443992]:     },
Nov 25 17:40:24 compute-0 serene_ganguly[443992]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:40:24 compute-0 serene_ganguly[443992]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:40:24 compute-0 serene_ganguly[443992]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:40:24 compute-0 serene_ganguly[443992]:         "osd_id": 2,
Nov 25 17:40:24 compute-0 serene_ganguly[443992]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:40:24 compute-0 serene_ganguly[443992]:         "type": "bluestore"
Nov 25 17:40:24 compute-0 serene_ganguly[443992]:     },
Nov 25 17:40:24 compute-0 serene_ganguly[443992]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:40:24 compute-0 serene_ganguly[443992]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:40:24 compute-0 serene_ganguly[443992]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:40:24 compute-0 serene_ganguly[443992]:         "osd_id": 0,
Nov 25 17:40:24 compute-0 serene_ganguly[443992]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:40:24 compute-0 serene_ganguly[443992]:         "type": "bluestore"
Nov 25 17:40:24 compute-0 serene_ganguly[443992]:     }
Nov 25 17:40:24 compute-0 serene_ganguly[443992]: }
Nov 25 17:40:24 compute-0 systemd[1]: libpod-8fcbd13cf7754d04334ce14005eb86d6861273c24432784f1768dcbce5e7d2d8.scope: Deactivated successfully.
Nov 25 17:40:24 compute-0 podman[443976]: 2025-11-25 17:40:24.073599859 +0000 UTC m=+1.322863664 container died 8fcbd13cf7754d04334ce14005eb86d6861273c24432784f1768dcbce5e7d2d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_ganguly, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 17:40:24 compute-0 systemd[1]: libpod-8fcbd13cf7754d04334ce14005eb86d6861273c24432784f1768dcbce5e7d2d8.scope: Consumed 1.143s CPU time.
Nov 25 17:40:24 compute-0 nova_compute[254092]: 2025-11-25 17:40:24.089 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:40:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a6d52ed4e664a2b6f7f27ccb595a05498c23fc076b022e2c2526bf72991f202-merged.mount: Deactivated successfully.
Nov 25 17:40:24 compute-0 podman[443976]: 2025-11-25 17:40:24.154721627 +0000 UTC m=+1.403985442 container remove 8fcbd13cf7754d04334ce14005eb86d6861273c24432784f1768dcbce5e7d2d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 17:40:24 compute-0 systemd[1]: libpod-conmon-8fcbd13cf7754d04334ce14005eb86d6861273c24432784f1768dcbce5e7d2d8.scope: Deactivated successfully.
Nov 25 17:40:24 compute-0 sudo[443870]: pam_unix(sudo:session): session closed for user root
Nov 25 17:40:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:40:24 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:40:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:40:24 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:40:24 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev cc7092b0-6279-429b-9cd0-4462426d49b9 does not exist
Nov 25 17:40:24 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 66aa4a81-168e-49fe-86e2-d5d60175d329 does not exist
Nov 25 17:40:24 compute-0 sudo[444038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:40:24 compute-0 sudo[444038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:40:24 compute-0 sudo[444038]: pam_unix(sudo:session): session closed for user root
Nov 25 17:40:24 compute-0 sudo[444063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:40:24 compute-0 sudo[444063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:40:24 compute-0 sudo[444063]: pam_unix(sudo:session): session closed for user root
Nov 25 17:40:24 compute-0 ceph-mon[74985]: pgmap v3528: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:24 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:40:24 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:40:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:40:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3529: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:25 compute-0 nova_compute[254092]: 2025-11-25 17:40:25.946 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:40:26 compute-0 ceph-mon[74985]: pgmap v3529: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3530: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:28 compute-0 ceph-mon[74985]: pgmap v3530: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:29 compute-0 nova_compute[254092]: 2025-11-25 17:40:29.091 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:40:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3531: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:40:30 compute-0 ceph-mon[74985]: pgmap v3531: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:30 compute-0 nova_compute[254092]: 2025-11-25 17:40:30.979 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:40:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3532: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:32 compute-0 ceph-mon[74985]: pgmap v3532: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3533: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:34 compute-0 nova_compute[254092]: 2025-11-25 17:40:34.094 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:40:34 compute-0 ceph-mon[74985]: pgmap v3533: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:40:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3534: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:35 compute-0 nova_compute[254092]: 2025-11-25 17:40:35.981 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:40:36 compute-0 ceph-mon[74985]: pgmap v3534: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3535: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:38 compute-0 ceph-mon[74985]: pgmap v3535: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:39 compute-0 nova_compute[254092]: 2025-11-25 17:40:39.098 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:40:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3536: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:40:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:40:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:40:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:40:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:40:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:40:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:40:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:40:40
Nov 25 17:40:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:40:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:40:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['volumes', '.mgr', 'backups', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'vms', 'default.rgw.control', 'images', 'default.rgw.log', 'cephfs.cephfs.meta']
Nov 25 17:40:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:40:40 compute-0 podman[444089]: 2025-11-25 17:40:40.696943809 +0000 UTC m=+0.097409952 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 25 17:40:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:40:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:40:40 compute-0 podman[444088]: 2025-11-25 17:40:40.702284564 +0000 UTC m=+0.104946467 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 25 17:40:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:40:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:40:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:40:40 compute-0 podman[444090]: 2025-11-25 17:40:40.746751055 +0000 UTC m=+0.138172442 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Nov 25 17:40:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:40:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:40:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:40:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:40:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:40:40 compute-0 ceph-mon[74985]: pgmap v3536: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:41 compute-0 nova_compute[254092]: 2025-11-25 17:40:41.026 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:40:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3537: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:42 compute-0 ceph-mon[74985]: pgmap v3537: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3538: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:44 compute-0 nova_compute[254092]: 2025-11-25 17:40:44.100 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:40:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:40:44 compute-0 ceph-mon[74985]: pgmap v3538: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3539: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:46 compute-0 nova_compute[254092]: 2025-11-25 17:40:46.028 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:40:46 compute-0 ceph-mon[74985]: pgmap v3539: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3540: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:48 compute-0 ceph-mon[74985]: pgmap v3540: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:49 compute-0 nova_compute[254092]: 2025-11-25 17:40:49.145 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:40:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3541: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:40:49 compute-0 ceph-mon[74985]: pgmap v3541: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:51 compute-0 nova_compute[254092]: 2025-11-25 17:40:51.030 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:40:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3542: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:40:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:40:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:40:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:40:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 17:40:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:40:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:40:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:40:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:40:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:40:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 17:40:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:40:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:40:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:40:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:40:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:40:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:40:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:40:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:40:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:40:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:40:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:40:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:40:52 compute-0 ceph-mon[74985]: pgmap v3542: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3543: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:54 compute-0 nova_compute[254092]: 2025-11-25 17:40:54.196 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:40:54 compute-0 ceph-mon[74985]: pgmap v3543: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:40:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:40:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3508926618' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:40:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:40:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3508926618' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:40:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3544: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/3508926618' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:40:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/3508926618' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:40:56 compute-0 nova_compute[254092]: 2025-11-25 17:40:56.033 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:40:56 compute-0 ceph-mon[74985]: pgmap v3544: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3545: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:58 compute-0 ceph-mon[74985]: pgmap v3545: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:59 compute-0 nova_compute[254092]: 2025-11-25 17:40:59.232 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:40:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3546: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:40:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:41:00 compute-0 nova_compute[254092]: 2025-11-25 17:41:00.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:41:00 compute-0 ceph-mon[74985]: pgmap v3546: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:00 compute-0 sshd-session[444146]: Invalid user admin from 2.57.121.112 port 10319
Nov 25 17:41:01 compute-0 nova_compute[254092]: 2025-11-25 17:41:01.035 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:41:01 compute-0 sshd-session[444146]: Received disconnect from 2.57.121.112 port 10319:11: Bye [preauth]
Nov 25 17:41:01 compute-0 sshd-session[444146]: Disconnected from invalid user admin 2.57.121.112 port 10319 [preauth]
Nov 25 17:41:01 compute-0 nova_compute[254092]: 2025-11-25 17:41:01.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:41:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3547: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:02 compute-0 nova_compute[254092]: 2025-11-25 17:41:02.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:41:02 compute-0 nova_compute[254092]: 2025-11-25 17:41:02.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:41:02 compute-0 nova_compute[254092]: 2025-11-25 17:41:02.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:41:02 compute-0 nova_compute[254092]: 2025-11-25 17:41:02.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:41:02 compute-0 nova_compute[254092]: 2025-11-25 17:41:02.522 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:41:02 compute-0 nova_compute[254092]: 2025-11-25 17:41:02.523 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:41:02 compute-0 ceph-mon[74985]: pgmap v3547: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:02 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:41:02 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2258336635' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:41:02 compute-0 nova_compute[254092]: 2025-11-25 17:41:02.967 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:41:03 compute-0 nova_compute[254092]: 2025-11-25 17:41:03.129 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:41:03 compute-0 nova_compute[254092]: 2025-11-25 17:41:03.130 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3631MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:41:03 compute-0 nova_compute[254092]: 2025-11-25 17:41:03.131 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:41:03 compute-0 nova_compute[254092]: 2025-11-25 17:41:03.131 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:41:03 compute-0 nova_compute[254092]: 2025-11-25 17:41:03.278 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:41:03 compute-0 nova_compute[254092]: 2025-11-25 17:41:03.278 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:41:03 compute-0 nova_compute[254092]: 2025-11-25 17:41:03.396 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing inventories for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 17:41:03 compute-0 nova_compute[254092]: 2025-11-25 17:41:03.502 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating ProviderTree inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 17:41:03 compute-0 nova_compute[254092]: 2025-11-25 17:41:03.502 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 17:41:03 compute-0 nova_compute[254092]: 2025-11-25 17:41:03.518 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing aggregate associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 17:41:03 compute-0 nova_compute[254092]: 2025-11-25 17:41:03.555 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing trait associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 17:41:03 compute-0 nova_compute[254092]: 2025-11-25 17:41:03.570 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:41:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3548: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:03 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2258336635' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:41:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:41:03 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2491037390' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:41:04 compute-0 nova_compute[254092]: 2025-11-25 17:41:04.014 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:41:04 compute-0 nova_compute[254092]: 2025-11-25 17:41:04.020 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:41:04 compute-0 nova_compute[254092]: 2025-11-25 17:41:04.034 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:41:04 compute-0 nova_compute[254092]: 2025-11-25 17:41:04.035 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:41:04 compute-0 nova_compute[254092]: 2025-11-25 17:41:04.035 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.905s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:41:04 compute-0 nova_compute[254092]: 2025-11-25 17:41:04.284 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:41:04 compute-0 ceph-mon[74985]: pgmap v3548: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:04 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2491037390' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:41:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:41:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3549: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:06 compute-0 nova_compute[254092]: 2025-11-25 17:41:06.036 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:41:06 compute-0 nova_compute[254092]: 2025-11-25 17:41:06.038 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:41:06 compute-0 ceph-mon[74985]: pgmap v3549: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:07 compute-0 nova_compute[254092]: 2025-11-25 17:41:07.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:41:07 compute-0 nova_compute[254092]: 2025-11-25 17:41:07.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:41:07 compute-0 nova_compute[254092]: 2025-11-25 17:41:07.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:41:07 compute-0 nova_compute[254092]: 2025-11-25 17:41:07.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:41:07 compute-0 nova_compute[254092]: 2025-11-25 17:41:07.514 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 17:41:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3550: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:08 compute-0 ceph-mon[74985]: pgmap v3550: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:09 compute-0 nova_compute[254092]: 2025-11-25 17:41:09.287 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:41:09 compute-0 nova_compute[254092]: 2025-11-25 17:41:09.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:41:09 compute-0 nova_compute[254092]: 2025-11-25 17:41:09.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:41:09 compute-0 nova_compute[254092]: 2025-11-25 17:41:09.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:41:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3551: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:41:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:41:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:41:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:41:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:41:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:41:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:41:10 compute-0 ceph-mon[74985]: pgmap v3551: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:11 compute-0 nova_compute[254092]: 2025-11-25 17:41:11.040 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:41:11 compute-0 podman[444192]: 2025-11-25 17:41:11.655573296 +0000 UTC m=+0.058548985 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 17:41:11 compute-0 podman[444193]: 2025-11-25 17:41:11.677280586 +0000 UTC m=+0.079189696 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 25 17:41:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3552: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:11 compute-0 podman[444194]: 2025-11-25 17:41:11.76848602 +0000 UTC m=+0.166618208 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller)
Nov 25 17:41:12 compute-0 ceph-mon[74985]: pgmap v3552: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:41:13.683 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:41:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:41:13.684 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:41:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:41:13.684 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:41:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3553: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:14 compute-0 nova_compute[254092]: 2025-11-25 17:41:14.328 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:41:14 compute-0 ceph-mon[74985]: pgmap v3553: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:41:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3554: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:16 compute-0 nova_compute[254092]: 2025-11-25 17:41:16.042 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:41:16 compute-0 ceph-mon[74985]: pgmap v3554: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3555: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:18 compute-0 ceph-mon[74985]: pgmap v3555: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:19 compute-0 nova_compute[254092]: 2025-11-25 17:41:19.330 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:41:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3556: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:41:20 compute-0 ceph-mon[74985]: pgmap v3556: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:21 compute-0 nova_compute[254092]: 2025-11-25 17:41:21.044 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:41:21 compute-0 nova_compute[254092]: 2025-11-25 17:41:21.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:41:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3557: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:22 compute-0 ceph-mon[74985]: pgmap v3557: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3558: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:24 compute-0 nova_compute[254092]: 2025-11-25 17:41:24.331 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:41:24 compute-0 sudo[444255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:41:24 compute-0 sudo[444255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:41:24 compute-0 sudo[444255]: pam_unix(sudo:session): session closed for user root
Nov 25 17:41:24 compute-0 sudo[444280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:41:24 compute-0 sudo[444280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:41:24 compute-0 sudo[444280]: pam_unix(sudo:session): session closed for user root
Nov 25 17:41:24 compute-0 sudo[444305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:41:24 compute-0 sudo[444305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:41:24 compute-0 sudo[444305]: pam_unix(sudo:session): session closed for user root
Nov 25 17:41:24 compute-0 sudo[444330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:41:24 compute-0 sudo[444330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:41:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:41:24 compute-0 ceph-mon[74985]: pgmap v3558: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:25 compute-0 sudo[444330]: pam_unix(sudo:session): session closed for user root
Nov 25 17:41:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:41:25 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:41:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:41:25 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:41:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:41:25 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:41:25 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 739cb68b-3ced-4df6-95b1-18b0d8d27e4c does not exist
Nov 25 17:41:25 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev a2cc8955-3feb-421e-89df-68b782729b4e does not exist
Nov 25 17:41:25 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev e7786b4c-6df7-4682-98a9-169083e46e57 does not exist
Nov 25 17:41:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:41:25 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:41:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:41:25 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:41:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:41:25 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:41:25 compute-0 sudo[444387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:41:25 compute-0 sudo[444387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:41:25 compute-0 sudo[444387]: pam_unix(sudo:session): session closed for user root
Nov 25 17:41:25 compute-0 sudo[444412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:41:25 compute-0 sudo[444412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:41:25 compute-0 sudo[444412]: pam_unix(sudo:session): session closed for user root
Nov 25 17:41:25 compute-0 sudo[444437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:41:25 compute-0 sudo[444437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:41:25 compute-0 sudo[444437]: pam_unix(sudo:session): session closed for user root
Nov 25 17:41:25 compute-0 sudo[444462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:41:25 compute-0 sudo[444462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:41:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3559: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:25 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:41:25 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:41:25 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:41:25 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:41:25 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:41:25 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:41:25 compute-0 podman[444524]: 2025-11-25 17:41:25.904879584 +0000 UTC m=+0.041310786 container create c2aac74a0c84c7a2d884438852d93599adc7a727f0a179aa0fdf97d4102149ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_napier, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 17:41:25 compute-0 systemd[1]: Started libpod-conmon-c2aac74a0c84c7a2d884438852d93599adc7a727f0a179aa0fdf97d4102149ef.scope.
Nov 25 17:41:25 compute-0 podman[444524]: 2025-11-25 17:41:25.888147208 +0000 UTC m=+0.024578390 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:41:25 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:41:26 compute-0 podman[444524]: 2025-11-25 17:41:26.019038262 +0000 UTC m=+0.155469504 container init c2aac74a0c84c7a2d884438852d93599adc7a727f0a179aa0fdf97d4102149ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_napier, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 17:41:26 compute-0 podman[444524]: 2025-11-25 17:41:26.033623239 +0000 UTC m=+0.170054401 container start c2aac74a0c84c7a2d884438852d93599adc7a727f0a179aa0fdf97d4102149ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_napier, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:41:26 compute-0 podman[444524]: 2025-11-25 17:41:26.037158695 +0000 UTC m=+0.173589887 container attach c2aac74a0c84c7a2d884438852d93599adc7a727f0a179aa0fdf97d4102149ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:41:26 compute-0 cranky_napier[444540]: 167 167
Nov 25 17:41:26 compute-0 systemd[1]: libpod-c2aac74a0c84c7a2d884438852d93599adc7a727f0a179aa0fdf97d4102149ef.scope: Deactivated successfully.
Nov 25 17:41:26 compute-0 podman[444524]: 2025-11-25 17:41:26.042601173 +0000 UTC m=+0.179032335 container died c2aac74a0c84c7a2d884438852d93599adc7a727f0a179aa0fdf97d4102149ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_napier, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:41:26 compute-0 nova_compute[254092]: 2025-11-25 17:41:26.046 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:41:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a09bb60cc1efe9f08c3004f4cf75a6deb8c3af6b006926e1a64569115353976-merged.mount: Deactivated successfully.
Nov 25 17:41:26 compute-0 podman[444524]: 2025-11-25 17:41:26.089847169 +0000 UTC m=+0.226278331 container remove c2aac74a0c84c7a2d884438852d93599adc7a727f0a179aa0fdf97d4102149ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_napier, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:41:26 compute-0 systemd[1]: libpod-conmon-c2aac74a0c84c7a2d884438852d93599adc7a727f0a179aa0fdf97d4102149ef.scope: Deactivated successfully.
Nov 25 17:41:26 compute-0 podman[444564]: 2025-11-25 17:41:26.276927352 +0000 UTC m=+0.048871642 container create f842aba5d4a4211716b5fec1f46db69d47a7f4b67b66652f6bd94502e7982a07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_margulis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:41:26 compute-0 systemd[1]: Started libpod-conmon-f842aba5d4a4211716b5fec1f46db69d47a7f4b67b66652f6bd94502e7982a07.scope.
Nov 25 17:41:26 compute-0 podman[444564]: 2025-11-25 17:41:26.253099513 +0000 UTC m=+0.025043823 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:41:26 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:41:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d567e2a2277653c44343b210cfdec0a18e4eb15af274233dcfa8e934a30675e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:41:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d567e2a2277653c44343b210cfdec0a18e4eb15af274233dcfa8e934a30675e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:41:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d567e2a2277653c44343b210cfdec0a18e4eb15af274233dcfa8e934a30675e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:41:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d567e2a2277653c44343b210cfdec0a18e4eb15af274233dcfa8e934a30675e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:41:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d567e2a2277653c44343b210cfdec0a18e4eb15af274233dcfa8e934a30675e8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:41:26 compute-0 podman[444564]: 2025-11-25 17:41:26.372004871 +0000 UTC m=+0.143949201 container init f842aba5d4a4211716b5fec1f46db69d47a7f4b67b66652f6bd94502e7982a07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:41:26 compute-0 podman[444564]: 2025-11-25 17:41:26.389490757 +0000 UTC m=+0.161435037 container start f842aba5d4a4211716b5fec1f46db69d47a7f4b67b66652f6bd94502e7982a07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_margulis, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:41:26 compute-0 podman[444564]: 2025-11-25 17:41:26.393693592 +0000 UTC m=+0.165637872 container attach f842aba5d4a4211716b5fec1f46db69d47a7f4b67b66652f6bd94502e7982a07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_margulis, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 17:41:26 compute-0 ceph-mon[74985]: pgmap v3559: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:27 compute-0 hungry_margulis[444581]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:41:27 compute-0 hungry_margulis[444581]: --> relative data size: 1.0
Nov 25 17:41:27 compute-0 hungry_margulis[444581]: --> All data devices are unavailable
Nov 25 17:41:27 compute-0 systemd[1]: libpod-f842aba5d4a4211716b5fec1f46db69d47a7f4b67b66652f6bd94502e7982a07.scope: Deactivated successfully.
Nov 25 17:41:27 compute-0 systemd[1]: libpod-f842aba5d4a4211716b5fec1f46db69d47a7f4b67b66652f6bd94502e7982a07.scope: Consumed 1.051s CPU time.
Nov 25 17:41:27 compute-0 podman[444610]: 2025-11-25 17:41:27.521443932 +0000 UTC m=+0.021384503 container died f842aba5d4a4211716b5fec1f46db69d47a7f4b67b66652f6bd94502e7982a07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 17:41:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-d567e2a2277653c44343b210cfdec0a18e4eb15af274233dcfa8e934a30675e8-merged.mount: Deactivated successfully.
Nov 25 17:41:27 compute-0 podman[444610]: 2025-11-25 17:41:27.573511 +0000 UTC m=+0.073451540 container remove f842aba5d4a4211716b5fec1f46db69d47a7f4b67b66652f6bd94502e7982a07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_margulis, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 17:41:27 compute-0 systemd[1]: libpod-conmon-f842aba5d4a4211716b5fec1f46db69d47a7f4b67b66652f6bd94502e7982a07.scope: Deactivated successfully.
Nov 25 17:41:27 compute-0 sudo[444462]: pam_unix(sudo:session): session closed for user root
Nov 25 17:41:27 compute-0 sudo[444625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:41:27 compute-0 sudo[444625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:41:27 compute-0 sudo[444625]: pam_unix(sudo:session): session closed for user root
Nov 25 17:41:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3560: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:27 compute-0 sudo[444650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:41:27 compute-0 sudo[444650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:41:27 compute-0 sudo[444650]: pam_unix(sudo:session): session closed for user root
Nov 25 17:41:27 compute-0 sudo[444675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:41:27 compute-0 sudo[444675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:41:27 compute-0 sudo[444675]: pam_unix(sudo:session): session closed for user root
Nov 25 17:41:27 compute-0 sudo[444700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:41:27 compute-0 sudo[444700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:41:29 compute-0 nova_compute[254092]: 2025-11-25 17:41:29.334 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:41:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3561: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:41:29 compute-0 ceph-mon[74985]: pgmap v3560: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:29 compute-0 podman[444766]: 2025-11-25 17:41:29.985397361 +0000 UTC m=+0.062798511 container create 822434eb3e017f84fc17f547a1d509971d7670276465faaf2422b9f6bb75e622 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 17:41:30 compute-0 systemd[1]: Started libpod-conmon-822434eb3e017f84fc17f547a1d509971d7670276465faaf2422b9f6bb75e622.scope.
Nov 25 17:41:30 compute-0 podman[444766]: 2025-11-25 17:41:29.951375185 +0000 UTC m=+0.028776385 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:41:30 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:41:30 compute-0 podman[444766]: 2025-11-25 17:41:30.075319049 +0000 UTC m=+0.152720159 container init 822434eb3e017f84fc17f547a1d509971d7670276465faaf2422b9f6bb75e622 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lehmann, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 17:41:30 compute-0 podman[444766]: 2025-11-25 17:41:30.089749141 +0000 UTC m=+0.167150261 container start 822434eb3e017f84fc17f547a1d509971d7670276465faaf2422b9f6bb75e622 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 17:41:30 compute-0 podman[444766]: 2025-11-25 17:41:30.094304396 +0000 UTC m=+0.171705526 container attach 822434eb3e017f84fc17f547a1d509971d7670276465faaf2422b9f6bb75e622 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lehmann, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 17:41:30 compute-0 youthful_lehmann[444782]: 167 167
Nov 25 17:41:30 compute-0 systemd[1]: libpod-822434eb3e017f84fc17f547a1d509971d7670276465faaf2422b9f6bb75e622.scope: Deactivated successfully.
Nov 25 17:41:30 compute-0 podman[444766]: 2025-11-25 17:41:30.100749121 +0000 UTC m=+0.178150281 container died 822434eb3e017f84fc17f547a1d509971d7670276465faaf2422b9f6bb75e622 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 17:41:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-0348457f91cc1f73452aef8a90366c99b2f793d12f6ff57aa1d150c1031475cb-merged.mount: Deactivated successfully.
Nov 25 17:41:30 compute-0 podman[444766]: 2025-11-25 17:41:30.153122317 +0000 UTC m=+0.230523467 container remove 822434eb3e017f84fc17f547a1d509971d7670276465faaf2422b9f6bb75e622 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lehmann, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:41:30 compute-0 systemd[1]: libpod-conmon-822434eb3e017f84fc17f547a1d509971d7670276465faaf2422b9f6bb75e622.scope: Deactivated successfully.
Nov 25 17:41:30 compute-0 podman[444805]: 2025-11-25 17:41:30.404696895 +0000 UTC m=+0.077928362 container create 01fca88a6cbac0395e3d223674269be6505ec5477217eab3e08348bae8294d0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_khayyam, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 17:41:30 compute-0 podman[444805]: 2025-11-25 17:41:30.363244627 +0000 UTC m=+0.036476144 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:41:30 compute-0 systemd[1]: Started libpod-conmon-01fca88a6cbac0395e3d223674269be6505ec5477217eab3e08348bae8294d0f.scope.
Nov 25 17:41:30 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:41:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bb68b1b03db0c508b66d7c62768471e1991fd696f47b9c919d93b3a0b72b574/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:41:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bb68b1b03db0c508b66d7c62768471e1991fd696f47b9c919d93b3a0b72b574/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:41:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bb68b1b03db0c508b66d7c62768471e1991fd696f47b9c919d93b3a0b72b574/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:41:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bb68b1b03db0c508b66d7c62768471e1991fd696f47b9c919d93b3a0b72b574/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:41:30 compute-0 podman[444805]: 2025-11-25 17:41:30.510347861 +0000 UTC m=+0.183579378 container init 01fca88a6cbac0395e3d223674269be6505ec5477217eab3e08348bae8294d0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 17:41:30 compute-0 podman[444805]: 2025-11-25 17:41:30.51982951 +0000 UTC m=+0.193060977 container start 01fca88a6cbac0395e3d223674269be6505ec5477217eab3e08348bae8294d0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_khayyam, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:41:30 compute-0 podman[444805]: 2025-11-25 17:41:30.524336622 +0000 UTC m=+0.197568089 container attach 01fca88a6cbac0395e3d223674269be6505ec5477217eab3e08348bae8294d0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:41:30 compute-0 ceph-mon[74985]: pgmap v3561: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:31 compute-0 nova_compute[254092]: 2025-11-25 17:41:31.049 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]: {
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:     "0": [
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:         {
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:             "devices": [
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:                 "/dev/loop3"
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:             ],
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:             "lv_name": "ceph_lv0",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:             "lv_size": "21470642176",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:             "name": "ceph_lv0",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:             "tags": {
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:                 "ceph.cluster_name": "ceph",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:                 "ceph.crush_device_class": "",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:                 "ceph.encrypted": "0",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:                 "ceph.osd_id": "0",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:                 "ceph.type": "block",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:                 "ceph.vdo": "0"
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:             },
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:             "type": "block",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:             "vg_name": "ceph_vg0"
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:         }
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:     ],
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:     "1": [
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:         {
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:             "devices": [
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:                 "/dev/loop4"
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:             ],
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:             "lv_name": "ceph_lv1",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:             "lv_size": "21470642176",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:             "name": "ceph_lv1",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:             "tags": {
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:                 "ceph.cluster_name": "ceph",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:                 "ceph.crush_device_class": "",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:                 "ceph.encrypted": "0",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:                 "ceph.osd_id": "1",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:                 "ceph.type": "block",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:                 "ceph.vdo": "0"
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:             },
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:             "type": "block",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:             "vg_name": "ceph_vg1"
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:         }
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:     ],
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:     "2": [
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:         {
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:             "devices": [
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:                 "/dev/loop5"
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:             ],
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:             "lv_name": "ceph_lv2",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:             "lv_size": "21470642176",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:             "name": "ceph_lv2",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:             "tags": {
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:                 "ceph.cluster_name": "ceph",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:                 "ceph.crush_device_class": "",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:                 "ceph.encrypted": "0",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:                 "ceph.osd_id": "2",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:                 "ceph.type": "block",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:                 "ceph.vdo": "0"
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:             },
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:             "type": "block",
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:             "vg_name": "ceph_vg2"
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:         }
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]:     ]
Nov 25 17:41:31 compute-0 naughty_khayyam[444821]: }
Nov 25 17:41:31 compute-0 systemd[1]: libpod-01fca88a6cbac0395e3d223674269be6505ec5477217eab3e08348bae8294d0f.scope: Deactivated successfully.
Nov 25 17:41:31 compute-0 podman[444805]: 2025-11-25 17:41:31.448221134 +0000 UTC m=+1.121452561 container died 01fca88a6cbac0395e3d223674269be6505ec5477217eab3e08348bae8294d0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 17:41:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-7bb68b1b03db0c508b66d7c62768471e1991fd696f47b9c919d93b3a0b72b574-merged.mount: Deactivated successfully.
Nov 25 17:41:31 compute-0 podman[444805]: 2025-11-25 17:41:31.518059495 +0000 UTC m=+1.191290922 container remove 01fca88a6cbac0395e3d223674269be6505ec5477217eab3e08348bae8294d0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 17:41:31 compute-0 systemd[1]: libpod-conmon-01fca88a6cbac0395e3d223674269be6505ec5477217eab3e08348bae8294d0f.scope: Deactivated successfully.
Nov 25 17:41:31 compute-0 sudo[444700]: pam_unix(sudo:session): session closed for user root
Nov 25 17:41:31 compute-0 sudo[444843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:41:31 compute-0 sudo[444843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:41:31 compute-0 sudo[444843]: pam_unix(sudo:session): session closed for user root
Nov 25 17:41:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3562: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:31 compute-0 sudo[444868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:41:31 compute-0 sudo[444868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:41:31 compute-0 sudo[444868]: pam_unix(sudo:session): session closed for user root
Nov 25 17:41:31 compute-0 sudo[444893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:41:31 compute-0 sudo[444893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:41:31 compute-0 sudo[444893]: pam_unix(sudo:session): session closed for user root
Nov 25 17:41:31 compute-0 sudo[444918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:41:31 compute-0 sudo[444918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:41:32 compute-0 podman[444984]: 2025-11-25 17:41:32.361410184 +0000 UTC m=+0.063141460 container create fafda3e329cb905a93b920360f484d02cece43116ca17b313a71799196808d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_cerf, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 17:41:32 compute-0 systemd[1]: Started libpod-conmon-fafda3e329cb905a93b920360f484d02cece43116ca17b313a71799196808d36.scope.
Nov 25 17:41:32 compute-0 podman[444984]: 2025-11-25 17:41:32.339495797 +0000 UTC m=+0.041227063 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:41:32 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:41:32 compute-0 podman[444984]: 2025-11-25 17:41:32.465164989 +0000 UTC m=+0.166896265 container init fafda3e329cb905a93b920360f484d02cece43116ca17b313a71799196808d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_cerf, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:41:32 compute-0 podman[444984]: 2025-11-25 17:41:32.476140798 +0000 UTC m=+0.177872074 container start fafda3e329cb905a93b920360f484d02cece43116ca17b313a71799196808d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_cerf, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 17:41:32 compute-0 podman[444984]: 2025-11-25 17:41:32.480329391 +0000 UTC m=+0.182060677 container attach fafda3e329cb905a93b920360f484d02cece43116ca17b313a71799196808d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_cerf, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 17:41:32 compute-0 jolly_cerf[445000]: 167 167
Nov 25 17:41:32 compute-0 systemd[1]: libpod-fafda3e329cb905a93b920360f484d02cece43116ca17b313a71799196808d36.scope: Deactivated successfully.
Nov 25 17:41:32 compute-0 podman[444984]: 2025-11-25 17:41:32.486146229 +0000 UTC m=+0.187877495 container died fafda3e329cb905a93b920360f484d02cece43116ca17b313a71799196808d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Nov 25 17:41:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-da1cb7399f8e9329939b1ea30cf8e6bcd6fbab99a0d2849f8cbd9f3f7c3e81c5-merged.mount: Deactivated successfully.
Nov 25 17:41:32 compute-0 podman[444984]: 2025-11-25 17:41:32.531345401 +0000 UTC m=+0.233076677 container remove fafda3e329cb905a93b920360f484d02cece43116ca17b313a71799196808d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 17:41:32 compute-0 systemd[1]: libpod-conmon-fafda3e329cb905a93b920360f484d02cece43116ca17b313a71799196808d36.scope: Deactivated successfully.
Nov 25 17:41:32 compute-0 podman[445024]: 2025-11-25 17:41:32.785713855 +0000 UTC m=+0.069030770 container create ddfe38a8b38910a93e4434bc2708692d50450a1a37d9a6a9b6db0e3d13df704a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 17:41:32 compute-0 systemd[1]: Started libpod-conmon-ddfe38a8b38910a93e4434bc2708692d50450a1a37d9a6a9b6db0e3d13df704a.scope.
Nov 25 17:41:32 compute-0 podman[445024]: 2025-11-25 17:41:32.759192553 +0000 UTC m=+0.042509448 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:41:32 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:41:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acff5953f57f8b436de62f1c2f5309941f6ad2ccffd67b414c0d90ce2aa8f1c3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:41:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acff5953f57f8b436de62f1c2f5309941f6ad2ccffd67b414c0d90ce2aa8f1c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:41:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acff5953f57f8b436de62f1c2f5309941f6ad2ccffd67b414c0d90ce2aa8f1c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:41:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acff5953f57f8b436de62f1c2f5309941f6ad2ccffd67b414c0d90ce2aa8f1c3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:41:32 compute-0 podman[445024]: 2025-11-25 17:41:32.892122172 +0000 UTC m=+0.175439067 container init ddfe38a8b38910a93e4434bc2708692d50450a1a37d9a6a9b6db0e3d13df704a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_nash, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 17:41:32 compute-0 podman[445024]: 2025-11-25 17:41:32.903948084 +0000 UTC m=+0.187264989 container start ddfe38a8b38910a93e4434bc2708692d50450a1a37d9a6a9b6db0e3d13df704a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:41:32 compute-0 podman[445024]: 2025-11-25 17:41:32.908251201 +0000 UTC m=+0.191568096 container attach ddfe38a8b38910a93e4434bc2708692d50450a1a37d9a6a9b6db0e3d13df704a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_nash, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:41:32 compute-0 ceph-mon[74985]: pgmap v3562: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3563: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:34 compute-0 gracious_nash[445042]: {
Nov 25 17:41:34 compute-0 gracious_nash[445042]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:41:34 compute-0 gracious_nash[445042]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:41:34 compute-0 gracious_nash[445042]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:41:34 compute-0 gracious_nash[445042]:         "osd_id": 1,
Nov 25 17:41:34 compute-0 gracious_nash[445042]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:41:34 compute-0 gracious_nash[445042]:         "type": "bluestore"
Nov 25 17:41:34 compute-0 gracious_nash[445042]:     },
Nov 25 17:41:34 compute-0 gracious_nash[445042]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:41:34 compute-0 gracious_nash[445042]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:41:34 compute-0 gracious_nash[445042]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:41:34 compute-0 gracious_nash[445042]:         "osd_id": 2,
Nov 25 17:41:34 compute-0 gracious_nash[445042]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:41:34 compute-0 gracious_nash[445042]:         "type": "bluestore"
Nov 25 17:41:34 compute-0 gracious_nash[445042]:     },
Nov 25 17:41:34 compute-0 gracious_nash[445042]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:41:34 compute-0 gracious_nash[445042]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:41:34 compute-0 gracious_nash[445042]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:41:34 compute-0 gracious_nash[445042]:         "osd_id": 0,
Nov 25 17:41:34 compute-0 gracious_nash[445042]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:41:34 compute-0 gracious_nash[445042]:         "type": "bluestore"
Nov 25 17:41:34 compute-0 gracious_nash[445042]:     }
Nov 25 17:41:34 compute-0 gracious_nash[445042]: }
Nov 25 17:41:34 compute-0 systemd[1]: libpod-ddfe38a8b38910a93e4434bc2708692d50450a1a37d9a6a9b6db0e3d13df704a.scope: Deactivated successfully.
Nov 25 17:41:34 compute-0 systemd[1]: libpod-ddfe38a8b38910a93e4434bc2708692d50450a1a37d9a6a9b6db0e3d13df704a.scope: Consumed 1.227s CPU time.
Nov 25 17:41:34 compute-0 conmon[445042]: conmon ddfe38a8b38910a93e44 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ddfe38a8b38910a93e4434bc2708692d50450a1a37d9a6a9b6db0e3d13df704a.scope/container/memory.events
Nov 25 17:41:34 compute-0 podman[445024]: 2025-11-25 17:41:34.126743003 +0000 UTC m=+1.410059928 container died ddfe38a8b38910a93e4434bc2708692d50450a1a37d9a6a9b6db0e3d13df704a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_nash, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:41:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-acff5953f57f8b436de62f1c2f5309941f6ad2ccffd67b414c0d90ce2aa8f1c3-merged.mount: Deactivated successfully.
Nov 25 17:41:34 compute-0 podman[445024]: 2025-11-25 17:41:34.211194882 +0000 UTC m=+1.494511787 container remove ddfe38a8b38910a93e4434bc2708692d50450a1a37d9a6a9b6db0e3d13df704a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_nash, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 17:41:34 compute-0 systemd[1]: libpod-conmon-ddfe38a8b38910a93e4434bc2708692d50450a1a37d9a6a9b6db0e3d13df704a.scope: Deactivated successfully.
Nov 25 17:41:34 compute-0 sudo[444918]: pam_unix(sudo:session): session closed for user root
Nov 25 17:41:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:41:34 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:41:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:41:34 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:41:34 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev af0f5bfd-7dad-492d-969f-f9557dc4c52e does not exist
Nov 25 17:41:34 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 245025a6-7ad5-449c-b68e-2461cc619c86 does not exist
Nov 25 17:41:34 compute-0 nova_compute[254092]: 2025-11-25 17:41:34.337 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:41:34 compute-0 sudo[445089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:41:34 compute-0 sudo[445089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:41:34 compute-0 sudo[445089]: pam_unix(sudo:session): session closed for user root
Nov 25 17:41:34 compute-0 sudo[445114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:41:34 compute-0 sudo[445114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:41:34 compute-0 sudo[445114]: pam_unix(sudo:session): session closed for user root
Nov 25 17:41:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:41:35 compute-0 ceph-mon[74985]: pgmap v3563: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:35 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:41:35 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:41:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3564: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:36 compute-0 nova_compute[254092]: 2025-11-25 17:41:36.052 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:41:36 compute-0 ceph-mon[74985]: pgmap v3564: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:37 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 17:41:37 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.1 total, 600.0 interval
                                           Cumulative writes: 16K writes, 73K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.02 MB/s
                                           Cumulative WAL: 16K writes, 16K syncs, 1.00 writes per sync, written: 0.10 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1355 writes, 6111 keys, 1355 commit groups, 1.0 writes per commit group, ingest: 8.73 MB, 0.01 MB/s
                                           Interval WAL: 1355 writes, 1355 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     37.3      2.37              0.31        52    0.046       0      0       0.0       0.0
                                             L6      1/0   10.69 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   4.9    125.5    106.4      4.07              1.31        51    0.080    352K    27K       0.0       0.0
                                            Sum      1/0   10.69 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   5.9     79.3     81.0      6.44              1.62       103    0.063    352K    27K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.8    140.4    142.1      0.41              0.22        10    0.041     46K   2562       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0    125.5    106.4      4.07              1.31        51    0.080    352K    27K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     38.0      2.32              0.31        51    0.046       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.1      0.05              0.00         1    0.047       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.086, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.51 GB write, 0.08 MB/s write, 0.50 GB read, 0.08 MB/s read, 6.4 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563cc83831f0#2 capacity: 304.00 MB usage: 59.76 MB table_size: 0 occupancy: 18446744073709551615 collections: 12 last_copies: 0 last_secs: 0.000827 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3858,57.26 MB,18.8366%) FilterBlock(104,978.61 KB,0.314366%) IndexBlock(104,1.54 MB,0.506737%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 25 17:41:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3565: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:38 compute-0 ceph-mon[74985]: pgmap v3565: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:39 compute-0 nova_compute[254092]: 2025-11-25 17:41:39.338 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:41:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3566: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:41:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:41:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:41:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:41:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:41:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:41:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:41:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:41:40
Nov 25 17:41:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:41:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:41:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', 'volumes', 'default.rgw.meta', 'backups', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data', 'images']
Nov 25 17:41:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:41:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:41:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:41:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:41:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:41:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:41:40 compute-0 ceph-mon[74985]: pgmap v3566: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:41:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:41:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:41:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:41:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:41:41 compute-0 nova_compute[254092]: 2025-11-25 17:41:41.053 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:41:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3567: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:42 compute-0 podman[445140]: 2025-11-25 17:41:42.656123643 +0000 UTC m=+0.068812425 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 17:41:42 compute-0 podman[445139]: 2025-11-25 17:41:42.666547396 +0000 UTC m=+0.079268679 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 17:41:42 compute-0 podman[445141]: 2025-11-25 17:41:42.694378594 +0000 UTC m=+0.107100337 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:41:42 compute-0 ceph-mon[74985]: pgmap v3567: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3568: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:44 compute-0 nova_compute[254092]: 2025-11-25 17:41:44.378 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:41:44 compute-0 ceph-mon[74985]: pgmap v3568: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:41:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3569: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:46 compute-0 nova_compute[254092]: 2025-11-25 17:41:46.055 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:41:46 compute-0 ceph-mon[74985]: pgmap v3569: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3570: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:48 compute-0 ceph-mon[74985]: pgmap v3570: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:49 compute-0 nova_compute[254092]: 2025-11-25 17:41:49.427 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:41:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3571: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:41:50 compute-0 ceph-mon[74985]: pgmap v3571: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:51 compute-0 nova_compute[254092]: 2025-11-25 17:41:51.057 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:41:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3572: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:41:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:41:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:41:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:41:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 17:41:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:41:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:41:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:41:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:41:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:41:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 17:41:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:41:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:41:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:41:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:41:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:41:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:41:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:41:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:41:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:41:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:41:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:41:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:41:52 compute-0 ceph-mon[74985]: pgmap v3572: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3573: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:54 compute-0 nova_compute[254092]: 2025-11-25 17:41:54.471 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:41:54 compute-0 ceph-mon[74985]: pgmap v3573: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:41:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:41:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/972227040' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:41:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:41:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/972227040' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:41:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3574: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/972227040' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:41:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/972227040' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:41:56 compute-0 nova_compute[254092]: 2025-11-25 17:41:56.059 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:41:56 compute-0 ceph-mon[74985]: pgmap v3574: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3575: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:58 compute-0 ceph-mon[74985]: pgmap v3575: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:59 compute-0 nova_compute[254092]: 2025-11-25 17:41:59.523 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:41:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3576: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:41:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:42:00 compute-0 ceph-mon[74985]: pgmap v3576: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:00 compute-0 nova_compute[254092]: 2025-11-25 17:42:00.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:42:01 compute-0 nova_compute[254092]: 2025-11-25 17:42:01.061 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:42:01 compute-0 nova_compute[254092]: 2025-11-25 17:42:01.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:42:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3577: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:02 compute-0 nova_compute[254092]: 2025-11-25 17:42:02.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:42:02 compute-0 nova_compute[254092]: 2025-11-25 17:42:02.518 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:42:02 compute-0 nova_compute[254092]: 2025-11-25 17:42:02.519 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:42:02 compute-0 nova_compute[254092]: 2025-11-25 17:42:02.519 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:42:02 compute-0 nova_compute[254092]: 2025-11-25 17:42:02.520 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:42:02 compute-0 nova_compute[254092]: 2025-11-25 17:42:02.520 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:42:02 compute-0 ceph-mon[74985]: pgmap v3577: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:02 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:42:02 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1716502427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:42:03 compute-0 nova_compute[254092]: 2025-11-25 17:42:03.007 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:42:03 compute-0 nova_compute[254092]: 2025-11-25 17:42:03.237 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:42:03 compute-0 nova_compute[254092]: 2025-11-25 17:42:03.239 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3637MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:42:03 compute-0 nova_compute[254092]: 2025-11-25 17:42:03.239 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:42:03 compute-0 nova_compute[254092]: 2025-11-25 17:42:03.239 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:42:03 compute-0 nova_compute[254092]: 2025-11-25 17:42:03.316 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:42:03 compute-0 nova_compute[254092]: 2025-11-25 17:42:03.317 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:42:03 compute-0 nova_compute[254092]: 2025-11-25 17:42:03.532 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:42:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3578: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:03 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1716502427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:42:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:42:03 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1427291694' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:42:03 compute-0 nova_compute[254092]: 2025-11-25 17:42:03.970 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:42:03 compute-0 nova_compute[254092]: 2025-11-25 17:42:03.977 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:42:03 compute-0 nova_compute[254092]: 2025-11-25 17:42:03.995 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:42:03 compute-0 nova_compute[254092]: 2025-11-25 17:42:03.998 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:42:03 compute-0 nova_compute[254092]: 2025-11-25 17:42:03.998 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:42:04 compute-0 nova_compute[254092]: 2025-11-25 17:42:04.559 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:42:04 compute-0 ceph-mon[74985]: pgmap v3578: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:04 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1427291694' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:42:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:42:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3579: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:06 compute-0 nova_compute[254092]: 2025-11-25 17:42:06.062 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:42:06 compute-0 ceph-mon[74985]: pgmap v3579: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3580: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:08 compute-0 nova_compute[254092]: 2025-11-25 17:42:08.000 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:42:08 compute-0 nova_compute[254092]: 2025-11-25 17:42:08.001 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:42:08 compute-0 nova_compute[254092]: 2025-11-25 17:42:08.001 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:42:08 compute-0 nova_compute[254092]: 2025-11-25 17:42:08.001 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:42:08 compute-0 nova_compute[254092]: 2025-11-25 17:42:08.021 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 17:42:08 compute-0 nova_compute[254092]: 2025-11-25 17:42:08.021 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:42:08 compute-0 ceph-mon[74985]: pgmap v3580: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:09 compute-0 nova_compute[254092]: 2025-11-25 17:42:09.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:42:09 compute-0 nova_compute[254092]: 2025-11-25 17:42:09.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:42:09 compute-0 nova_compute[254092]: 2025-11-25 17:42:09.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:42:09 compute-0 nova_compute[254092]: 2025-11-25 17:42:09.604 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:42:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3581: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:42:09 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #171. Immutable memtables: 0.
Nov 25 17:42:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:42:09.933457) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 17:42:09 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 105] Flushing memtable with next log file: 171
Nov 25 17:42:09 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092529933515, "job": 105, "event": "flush_started", "num_memtables": 1, "num_entries": 1252, "num_deletes": 250, "total_data_size": 1931572, "memory_usage": 1954920, "flush_reason": "Manual Compaction"}
Nov 25 17:42:09 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 105] Level-0 flush table #172: started
Nov 25 17:42:09 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092529941227, "cf_name": "default", "job": 105, "event": "table_file_creation", "file_number": 172, "file_size": 1141350, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 72868, "largest_seqno": 74119, "table_properties": {"data_size": 1136815, "index_size": 1994, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11845, "raw_average_key_size": 20, "raw_value_size": 1126971, "raw_average_value_size": 1963, "num_data_blocks": 91, "num_entries": 574, "num_filter_entries": 574, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764092405, "oldest_key_time": 1764092405, "file_creation_time": 1764092529, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 172, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:42:09 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 105] Flush lasted 7803 microseconds, and 3546 cpu microseconds.
Nov 25 17:42:09 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:42:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:42:09.941266) [db/flush_job.cc:967] [default] [JOB 105] Level-0 flush table #172: 1141350 bytes OK
Nov 25 17:42:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:42:09.941287) [db/memtable_list.cc:519] [default] Level-0 commit table #172 started
Nov 25 17:42:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:42:09.942461) [db/memtable_list.cc:722] [default] Level-0 commit table #172: memtable #1 done
Nov 25 17:42:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:42:09.942473) EVENT_LOG_v1 {"time_micros": 1764092529942469, "job": 105, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 17:42:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:42:09.942492) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 17:42:09 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 105] Try to delete WAL files size 1925918, prev total WAL file size 1925918, number of live WAL files 2.
Nov 25 17:42:09 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000168.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:42:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:42:09.943203) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033303131' seq:72057594037927935, type:22 .. '6D6772737461740033323632' seq:0, type:0; will stop at (end)
Nov 25 17:42:09 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 106] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 17:42:09 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 105 Base level 0, inputs: [172(1114KB)], [170(10MB)]
Nov 25 17:42:09 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092529943236, "job": 106, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [172], "files_L6": [170], "score": -1, "input_data_size": 12355831, "oldest_snapshot_seqno": -1}
Nov 25 17:42:09 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 106] Generated table #173: 9127 keys, 9805417 bytes, temperature: kUnknown
Nov 25 17:42:09 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092529995384, "cf_name": "default", "job": 106, "event": "table_file_creation", "file_number": 173, "file_size": 9805417, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9749810, "index_size": 31686, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22853, "raw_key_size": 239682, "raw_average_key_size": 26, "raw_value_size": 9592262, "raw_average_value_size": 1050, "num_data_blocks": 1221, "num_entries": 9127, "num_filter_entries": 9127, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764092529, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 173, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:42:09 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:42:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:42:09.995923) [db/compaction/compaction_job.cc:1663] [default] [JOB 106] Compacted 1@0 + 1@6 files to L6 => 9805417 bytes
Nov 25 17:42:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:42:09.997181) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 236.0 rd, 187.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 10.7 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(19.4) write-amplify(8.6) OK, records in: 9582, records dropped: 455 output_compression: NoCompression
Nov 25 17:42:09 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:42:09.997214) EVENT_LOG_v1 {"time_micros": 1764092529997197, "job": 106, "event": "compaction_finished", "compaction_time_micros": 52350, "compaction_time_cpu_micros": 23547, "output_level": 6, "num_output_files": 1, "total_output_size": 9805417, "num_input_records": 9582, "num_output_records": 9127, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 17:42:09 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000172.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:42:09 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092529997788, "job": 106, "event": "table_file_deletion", "file_number": 172}
Nov 25 17:42:10 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000170.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:42:10 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092530001785, "job": 106, "event": "table_file_deletion", "file_number": 170}
Nov 25 17:42:10 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:42:09.943088) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:42:10 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:42:10.001834) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:42:10 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:42:10.001842) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:42:10 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:42:10.001846) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:42:10 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:42:10.001849) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:42:10 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:42:10.001852) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:42:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:42:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:42:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:42:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:42:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:42:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:42:10 compute-0 ceph-mon[74985]: pgmap v3581: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:11 compute-0 nova_compute[254092]: 2025-11-25 17:42:11.064 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:42:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3582: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:12 compute-0 ceph-mon[74985]: pgmap v3582: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:42:13.684 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:42:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:42:13.685 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:42:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:42:13.685 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:42:13 compute-0 podman[445246]: 2025-11-25 17:42:13.715865541 +0000 UTC m=+0.103432617 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 17:42:13 compute-0 podman[445245]: 2025-11-25 17:42:13.727717973 +0000 UTC m=+0.121469858 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd)
Nov 25 17:42:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3583: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:13 compute-0 podman[445247]: 2025-11-25 17:42:13.790922824 +0000 UTC m=+0.171736796 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:42:14 compute-0 nova_compute[254092]: 2025-11-25 17:42:14.663 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:42:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:42:14 compute-0 ceph-mon[74985]: pgmap v3583: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3584: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:16 compute-0 nova_compute[254092]: 2025-11-25 17:42:16.066 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:42:16 compute-0 ceph-mon[74985]: pgmap v3584: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3585: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:18 compute-0 ceph-mon[74985]: pgmap v3585: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:19 compute-0 nova_compute[254092]: 2025-11-25 17:42:19.665 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:42:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3586: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:42:19 compute-0 ceph-mon[74985]: pgmap v3586: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:21 compute-0 nova_compute[254092]: 2025-11-25 17:42:21.068 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:42:21 compute-0 nova_compute[254092]: 2025-11-25 17:42:21.493 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:42:21 compute-0 nova_compute[254092]: 2025-11-25 17:42:21.508 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:42:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3587: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:22 compute-0 ceph-mon[74985]: pgmap v3587: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3588: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:24 compute-0 nova_compute[254092]: 2025-11-25 17:42:24.713 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:42:24 compute-0 ceph-mon[74985]: pgmap v3588: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:42:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3589: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:26 compute-0 nova_compute[254092]: 2025-11-25 17:42:26.070 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:42:26 compute-0 ceph-mon[74985]: pgmap v3589: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3590: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:28 compute-0 ceph-mon[74985]: pgmap v3590: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:29 compute-0 nova_compute[254092]: 2025-11-25 17:42:29.715 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:42:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3591: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:42:30 compute-0 ceph-mon[74985]: pgmap v3591: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:31 compute-0 nova_compute[254092]: 2025-11-25 17:42:31.072 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:42:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3592: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:32 compute-0 ceph-mon[74985]: pgmap v3592: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3593: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:34 compute-0 sudo[445305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:42:34 compute-0 sudo[445305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:42:34 compute-0 sudo[445305]: pam_unix(sudo:session): session closed for user root
Nov 25 17:42:34 compute-0 sudo[445330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:42:34 compute-0 sudo[445330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:42:34 compute-0 sudo[445330]: pam_unix(sudo:session): session closed for user root
Nov 25 17:42:34 compute-0 nova_compute[254092]: 2025-11-25 17:42:34.741 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:42:34 compute-0 sudo[445355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:42:34 compute-0 sudo[445355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:42:34 compute-0 sudo[445355]: pam_unix(sudo:session): session closed for user root
Nov 25 17:42:34 compute-0 ceph-mon[74985]: pgmap v3593: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:34 compute-0 sudo[445380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:42:34 compute-0 sudo[445380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:42:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:42:35 compute-0 sudo[445380]: pam_unix(sudo:session): session closed for user root
Nov 25 17:42:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:42:35 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:42:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:42:35 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:42:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:42:35 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:42:35 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 89c7d221-3a4f-4446-82d1-d4bb78e92d0b does not exist
Nov 25 17:42:35 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev fc7bc80e-d1a8-4fbf-8c84-50a93599a769 does not exist
Nov 25 17:42:35 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 4cb79e3f-4e14-4f71-9028-d4f0c77c0497 does not exist
Nov 25 17:42:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:42:35 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:42:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:42:35 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:42:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:42:35 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:42:35 compute-0 sudo[445435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:42:35 compute-0 sudo[445435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:42:35 compute-0 sudo[445435]: pam_unix(sudo:session): session closed for user root
Nov 25 17:42:35 compute-0 sudo[445460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:42:35 compute-0 sudo[445460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:42:35 compute-0 sudo[445460]: pam_unix(sudo:session): session closed for user root
Nov 25 17:42:35 compute-0 sudo[445485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:42:35 compute-0 sudo[445485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:42:35 compute-0 sudo[445485]: pam_unix(sudo:session): session closed for user root
Nov 25 17:42:35 compute-0 sudo[445510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:42:35 compute-0 sudo[445510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:42:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3594: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:35 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:42:35 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:42:35 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:42:35 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:42:35 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:42:35 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:42:36 compute-0 podman[445575]: 2025-11-25 17:42:36.050430885 +0000 UTC m=+0.061516695 container create c4db5ec616b15bf5884d6ed8172645a960a37644139947b9a757e1820451cb0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_villani, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:42:36 compute-0 nova_compute[254092]: 2025-11-25 17:42:36.073 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:42:36 compute-0 systemd[1]: Started libpod-conmon-c4db5ec616b15bf5884d6ed8172645a960a37644139947b9a757e1820451cb0f.scope.
Nov 25 17:42:36 compute-0 podman[445575]: 2025-11-25 17:42:36.028875748 +0000 UTC m=+0.039961588 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:42:36 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:42:36 compute-0 podman[445575]: 2025-11-25 17:42:36.16596421 +0000 UTC m=+0.177050100 container init c4db5ec616b15bf5884d6ed8172645a960a37644139947b9a757e1820451cb0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 17:42:36 compute-0 podman[445575]: 2025-11-25 17:42:36.177454223 +0000 UTC m=+0.188540033 container start c4db5ec616b15bf5884d6ed8172645a960a37644139947b9a757e1820451cb0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_villani, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:42:36 compute-0 podman[445575]: 2025-11-25 17:42:36.180606899 +0000 UTC m=+0.191692739 container attach c4db5ec616b15bf5884d6ed8172645a960a37644139947b9a757e1820451cb0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_villani, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 17:42:36 compute-0 intelligent_villani[445591]: 167 167
Nov 25 17:42:36 compute-0 systemd[1]: libpod-c4db5ec616b15bf5884d6ed8172645a960a37644139947b9a757e1820451cb0f.scope: Deactivated successfully.
Nov 25 17:42:36 compute-0 podman[445575]: 2025-11-25 17:42:36.186769156 +0000 UTC m=+0.197854966 container died c4db5ec616b15bf5884d6ed8172645a960a37644139947b9a757e1820451cb0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_villani, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:42:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a028c1e582b8fa4003ebeda27832eccaf9849e51608d90de9670b3615be09e1-merged.mount: Deactivated successfully.
Nov 25 17:42:36 compute-0 podman[445575]: 2025-11-25 17:42:36.235709829 +0000 UTC m=+0.246795669 container remove c4db5ec616b15bf5884d6ed8172645a960a37644139947b9a757e1820451cb0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_villani, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 17:42:36 compute-0 systemd[1]: libpod-conmon-c4db5ec616b15bf5884d6ed8172645a960a37644139947b9a757e1820451cb0f.scope: Deactivated successfully.
Nov 25 17:42:36 compute-0 podman[445616]: 2025-11-25 17:42:36.482187809 +0000 UTC m=+0.065925246 container create a19d5b517ffd1568058ab85af7f39008bf7ec7a6e284a1bc248561df3762f06e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_tharp, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:42:36 compute-0 systemd[1]: Started libpod-conmon-a19d5b517ffd1568058ab85af7f39008bf7ec7a6e284a1bc248561df3762f06e.scope.
Nov 25 17:42:36 compute-0 podman[445616]: 2025-11-25 17:42:36.45837146 +0000 UTC m=+0.042108987 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:42:36 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:42:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a53458a7b4de0509809f1416768f3e0b58a6f7656e627a680f288171b4ef22af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:42:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a53458a7b4de0509809f1416768f3e0b58a6f7656e627a680f288171b4ef22af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:42:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a53458a7b4de0509809f1416768f3e0b58a6f7656e627a680f288171b4ef22af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:42:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a53458a7b4de0509809f1416768f3e0b58a6f7656e627a680f288171b4ef22af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:42:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a53458a7b4de0509809f1416768f3e0b58a6f7656e627a680f288171b4ef22af/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:42:36 compute-0 podman[445616]: 2025-11-25 17:42:36.580246499 +0000 UTC m=+0.163984016 container init a19d5b517ffd1568058ab85af7f39008bf7ec7a6e284a1bc248561df3762f06e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_tharp, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:42:36 compute-0 podman[445616]: 2025-11-25 17:42:36.592166393 +0000 UTC m=+0.175903870 container start a19d5b517ffd1568058ab85af7f39008bf7ec7a6e284a1bc248561df3762f06e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 17:42:36 compute-0 podman[445616]: 2025-11-25 17:42:36.596324307 +0000 UTC m=+0.180061774 container attach a19d5b517ffd1568058ab85af7f39008bf7ec7a6e284a1bc248561df3762f06e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:42:36 compute-0 ceph-mon[74985]: pgmap v3594: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:37 compute-0 hungry_tharp[445633]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:42:37 compute-0 hungry_tharp[445633]: --> relative data size: 1.0
Nov 25 17:42:37 compute-0 hungry_tharp[445633]: --> All data devices are unavailable
Nov 25 17:42:37 compute-0 systemd[1]: libpod-a19d5b517ffd1568058ab85af7f39008bf7ec7a6e284a1bc248561df3762f06e.scope: Deactivated successfully.
Nov 25 17:42:37 compute-0 podman[445616]: 2025-11-25 17:42:37.737096073 +0000 UTC m=+1.320833590 container died a19d5b517ffd1568058ab85af7f39008bf7ec7a6e284a1bc248561df3762f06e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_tharp, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:42:37 compute-0 systemd[1]: libpod-a19d5b517ffd1568058ab85af7f39008bf7ec7a6e284a1bc248561df3762f06e.scope: Consumed 1.097s CPU time.
Nov 25 17:42:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3595: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-a53458a7b4de0509809f1416768f3e0b58a6f7656e627a680f288171b4ef22af-merged.mount: Deactivated successfully.
Nov 25 17:42:37 compute-0 podman[445616]: 2025-11-25 17:42:37.972281335 +0000 UTC m=+1.556018782 container remove a19d5b517ffd1568058ab85af7f39008bf7ec7a6e284a1bc248561df3762f06e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_tharp, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 17:42:37 compute-0 systemd[1]: libpod-conmon-a19d5b517ffd1568058ab85af7f39008bf7ec7a6e284a1bc248561df3762f06e.scope: Deactivated successfully.
Nov 25 17:42:38 compute-0 sudo[445510]: pam_unix(sudo:session): session closed for user root
Nov 25 17:42:38 compute-0 sudo[445676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:42:38 compute-0 sudo[445676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:42:38 compute-0 sudo[445676]: pam_unix(sudo:session): session closed for user root
Nov 25 17:42:38 compute-0 sudo[445701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:42:38 compute-0 sudo[445701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:42:38 compute-0 sudo[445701]: pam_unix(sudo:session): session closed for user root
Nov 25 17:42:38 compute-0 sudo[445726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:42:38 compute-0 sudo[445726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:42:38 compute-0 sudo[445726]: pam_unix(sudo:session): session closed for user root
Nov 25 17:42:38 compute-0 sudo[445751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:42:38 compute-0 sudo[445751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:42:38 compute-0 podman[445817]: 2025-11-25 17:42:38.74053695 +0000 UTC m=+0.057026584 container create 1450897e3a9a98215ea294083c799c021b68c1d86f29ddba77295df4788e4b4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 17:42:38 compute-0 systemd[1]: Started libpod-conmon-1450897e3a9a98215ea294083c799c021b68c1d86f29ddba77295df4788e4b4b.scope.
Nov 25 17:42:38 compute-0 podman[445817]: 2025-11-25 17:42:38.714721727 +0000 UTC m=+0.031211471 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:42:38 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:42:38 compute-0 podman[445817]: 2025-11-25 17:42:38.836972584 +0000 UTC m=+0.153462208 container init 1450897e3a9a98215ea294083c799c021b68c1d86f29ddba77295df4788e4b4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 17:42:38 compute-0 podman[445817]: 2025-11-25 17:42:38.851333316 +0000 UTC m=+0.167822960 container start 1450897e3a9a98215ea294083c799c021b68c1d86f29ddba77295df4788e4b4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_poincare, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:42:38 compute-0 podman[445817]: 2025-11-25 17:42:38.854675167 +0000 UTC m=+0.171164811 container attach 1450897e3a9a98215ea294083c799c021b68c1d86f29ddba77295df4788e4b4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 17:42:38 compute-0 mystifying_poincare[445834]: 167 167
Nov 25 17:42:38 compute-0 systemd[1]: libpod-1450897e3a9a98215ea294083c799c021b68c1d86f29ddba77295df4788e4b4b.scope: Deactivated successfully.
Nov 25 17:42:38 compute-0 conmon[445834]: conmon 1450897e3a9a98215ea2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1450897e3a9a98215ea294083c799c021b68c1d86f29ddba77295df4788e4b4b.scope/container/memory.events
Nov 25 17:42:38 compute-0 podman[445817]: 2025-11-25 17:42:38.861500472 +0000 UTC m=+0.177990116 container died 1450897e3a9a98215ea294083c799c021b68c1d86f29ddba77295df4788e4b4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 17:42:38 compute-0 ceph-mon[74985]: pgmap v3595: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-578b42a2abf39c16195dfa0f0ac0a58372c8286582727137e13975b8bb3f8cbb-merged.mount: Deactivated successfully.
Nov 25 17:42:38 compute-0 podman[445817]: 2025-11-25 17:42:38.913707724 +0000 UTC m=+0.230197378 container remove 1450897e3a9a98215ea294083c799c021b68c1d86f29ddba77295df4788e4b4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:42:38 compute-0 systemd[1]: libpod-conmon-1450897e3a9a98215ea294083c799c021b68c1d86f29ddba77295df4788e4b4b.scope: Deactivated successfully.
Nov 25 17:42:39 compute-0 podman[445857]: 2025-11-25 17:42:39.098229378 +0000 UTC m=+0.056260403 container create 77510073fbacf1e917cc9f02feb945679bf48c3b80e68f325f54fa5cec73cf11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_chandrasekhar, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:42:39 compute-0 systemd[1]: Started libpod-conmon-77510073fbacf1e917cc9f02feb945679bf48c3b80e68f325f54fa5cec73cf11.scope.
Nov 25 17:42:39 compute-0 podman[445857]: 2025-11-25 17:42:39.069587817 +0000 UTC m=+0.027618932 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:42:39 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:42:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/095170ba18e7093f1ea0229e4995b9c45f842a55957007a0af709380cbaf9422/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:42:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/095170ba18e7093f1ea0229e4995b9c45f842a55957007a0af709380cbaf9422/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:42:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/095170ba18e7093f1ea0229e4995b9c45f842a55957007a0af709380cbaf9422/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:42:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/095170ba18e7093f1ea0229e4995b9c45f842a55957007a0af709380cbaf9422/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:42:39 compute-0 podman[445857]: 2025-11-25 17:42:39.230211291 +0000 UTC m=+0.188242316 container init 77510073fbacf1e917cc9f02feb945679bf48c3b80e68f325f54fa5cec73cf11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_chandrasekhar, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 17:42:39 compute-0 podman[445857]: 2025-11-25 17:42:39.237435297 +0000 UTC m=+0.195466302 container start 77510073fbacf1e917cc9f02feb945679bf48c3b80e68f325f54fa5cec73cf11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 17:42:39 compute-0 podman[445857]: 2025-11-25 17:42:39.240705426 +0000 UTC m=+0.198736441 container attach 77510073fbacf1e917cc9f02feb945679bf48c3b80e68f325f54fa5cec73cf11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 17:42:39 compute-0 nova_compute[254092]: 2025-11-25 17:42:39.743 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:42:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3596: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]: {
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:     "0": [
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:         {
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:             "devices": [
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:                 "/dev/loop3"
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:             ],
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:             "lv_name": "ceph_lv0",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:             "lv_size": "21470642176",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:             "name": "ceph_lv0",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:             "tags": {
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:                 "ceph.cluster_name": "ceph",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:                 "ceph.crush_device_class": "",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:                 "ceph.encrypted": "0",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:                 "ceph.osd_id": "0",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:                 "ceph.type": "block",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:                 "ceph.vdo": "0"
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:             },
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:             "type": "block",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:             "vg_name": "ceph_vg0"
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:         }
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:     ],
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:     "1": [
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:         {
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:             "devices": [
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:                 "/dev/loop4"
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:             ],
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:             "lv_name": "ceph_lv1",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:             "lv_size": "21470642176",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:             "name": "ceph_lv1",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:             "tags": {
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:                 "ceph.cluster_name": "ceph",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:                 "ceph.crush_device_class": "",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:                 "ceph.encrypted": "0",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:                 "ceph.osd_id": "1",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:                 "ceph.type": "block",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:                 "ceph.vdo": "0"
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:             },
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:             "type": "block",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:             "vg_name": "ceph_vg1"
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:         }
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:     ],
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:     "2": [
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:         {
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:             "devices": [
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:                 "/dev/loop5"
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:             ],
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:             "lv_name": "ceph_lv2",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:             "lv_size": "21470642176",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:             "name": "ceph_lv2",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:             "tags": {
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:                 "ceph.cluster_name": "ceph",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:                 "ceph.crush_device_class": "",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:                 "ceph.encrypted": "0",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:                 "ceph.osd_id": "2",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:                 "ceph.type": "block",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:                 "ceph.vdo": "0"
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:             },
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:             "type": "block",
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:             "vg_name": "ceph_vg2"
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:         }
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]:     ]
Nov 25 17:42:40 compute-0 confident_chandrasekhar[445873]: }
Nov 25 17:42:40 compute-0 podman[445857]: 2025-11-25 17:42:40.046371739 +0000 UTC m=+1.004402754 container died 77510073fbacf1e917cc9f02feb945679bf48c3b80e68f325f54fa5cec73cf11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_chandrasekhar, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 17:42:40 compute-0 systemd[1]: libpod-77510073fbacf1e917cc9f02feb945679bf48c3b80e68f325f54fa5cec73cf11.scope: Deactivated successfully.
Nov 25 17:42:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-095170ba18e7093f1ea0229e4995b9c45f842a55957007a0af709380cbaf9422-merged.mount: Deactivated successfully.
Nov 25 17:42:40 compute-0 podman[445857]: 2025-11-25 17:42:40.11951134 +0000 UTC m=+1.077542355 container remove 77510073fbacf1e917cc9f02feb945679bf48c3b80e68f325f54fa5cec73cf11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:42:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:42:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:42:40 compute-0 systemd[1]: libpod-conmon-77510073fbacf1e917cc9f02feb945679bf48c3b80e68f325f54fa5cec73cf11.scope: Deactivated successfully.
Nov 25 17:42:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:42:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:42:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:42:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:42:40 compute-0 sudo[445751]: pam_unix(sudo:session): session closed for user root
Nov 25 17:42:40 compute-0 sudo[445896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:42:40 compute-0 sudo[445896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:42:40 compute-0 sudo[445896]: pam_unix(sudo:session): session closed for user root
Nov 25 17:42:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:42:40
Nov 25 17:42:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:42:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:42:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['volumes', 'default.rgw.log', '.mgr', 'images', 'cephfs.cephfs.data', '.rgw.root', 'backups', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms']
Nov 25 17:42:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:42:40 compute-0 sudo[445921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:42:40 compute-0 sudo[445921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:42:40 compute-0 sudo[445921]: pam_unix(sudo:session): session closed for user root
Nov 25 17:42:40 compute-0 sudo[445946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:42:40 compute-0 sudo[445946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:42:40 compute-0 sudo[445946]: pam_unix(sudo:session): session closed for user root
Nov 25 17:42:40 compute-0 sudo[445971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:42:40 compute-0 sudo[445971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:42:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:42:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:42:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:42:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:42:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:42:40 compute-0 podman[446036]: 2025-11-25 17:42:40.753658924 +0000 UTC m=+0.041665355 container create c6647a7cb2fff15f9bd3398c65be35078802f6cfc78572df29a2899de584aa39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_galois, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:42:40 compute-0 systemd[1]: Started libpod-conmon-c6647a7cb2fff15f9bd3398c65be35078802f6cfc78572df29a2899de584aa39.scope.
Nov 25 17:42:40 compute-0 podman[446036]: 2025-11-25 17:42:40.737909125 +0000 UTC m=+0.025915576 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:42:40 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:42:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:42:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:42:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:42:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:42:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:42:40 compute-0 podman[446036]: 2025-11-25 17:42:40.85050242 +0000 UTC m=+0.138508881 container init c6647a7cb2fff15f9bd3398c65be35078802f6cfc78572df29a2899de584aa39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_galois, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:42:40 compute-0 podman[446036]: 2025-11-25 17:42:40.860580955 +0000 UTC m=+0.148587386 container start c6647a7cb2fff15f9bd3398c65be35078802f6cfc78572df29a2899de584aa39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 17:42:40 compute-0 podman[446036]: 2025-11-25 17:42:40.864015538 +0000 UTC m=+0.152021979 container attach c6647a7cb2fff15f9bd3398c65be35078802f6cfc78572df29a2899de584aa39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_galois, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 17:42:40 compute-0 magical_galois[446052]: 167 167
Nov 25 17:42:40 compute-0 systemd[1]: libpod-c6647a7cb2fff15f9bd3398c65be35078802f6cfc78572df29a2899de584aa39.scope: Deactivated successfully.
Nov 25 17:42:40 compute-0 conmon[446052]: conmon c6647a7cb2fff15f9bd3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c6647a7cb2fff15f9bd3398c65be35078802f6cfc78572df29a2899de584aa39.scope/container/memory.events
Nov 25 17:42:40 compute-0 podman[446036]: 2025-11-25 17:42:40.868010667 +0000 UTC m=+0.156017088 container died c6647a7cb2fff15f9bd3398c65be35078802f6cfc78572df29a2899de584aa39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_galois, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 17:42:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e53bb27308a75dbf2d2c54ef4431854f28e0b6ac2e63d1cac41e8ab56cda547-merged.mount: Deactivated successfully.
Nov 25 17:42:40 compute-0 ceph-mon[74985]: pgmap v3596: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:40 compute-0 podman[446036]: 2025-11-25 17:42:40.905506668 +0000 UTC m=+0.193513099 container remove c6647a7cb2fff15f9bd3398c65be35078802f6cfc78572df29a2899de584aa39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_galois, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:42:40 compute-0 systemd[1]: libpod-conmon-c6647a7cb2fff15f9bd3398c65be35078802f6cfc78572df29a2899de584aa39.scope: Deactivated successfully.
Nov 25 17:42:41 compute-0 nova_compute[254092]: 2025-11-25 17:42:41.076 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:42:41 compute-0 podman[446075]: 2025-11-25 17:42:41.092470988 +0000 UTC m=+0.053138228 container create 9e72cb5c93daee0ad716b4423589e6542ba7ff71bdf6529a325630856cf563e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 17:42:41 compute-0 systemd[1]: Started libpod-conmon-9e72cb5c93daee0ad716b4423589e6542ba7ff71bdf6529a325630856cf563e0.scope.
Nov 25 17:42:41 compute-0 podman[446075]: 2025-11-25 17:42:41.067334834 +0000 UTC m=+0.028002154 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:42:41 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:42:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a10a4636b3832bf8803dd9f51bff6475230b261ae9054daf5c1d3fd25401bc5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:42:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a10a4636b3832bf8803dd9f51bff6475230b261ae9054daf5c1d3fd25401bc5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:42:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a10a4636b3832bf8803dd9f51bff6475230b261ae9054daf5c1d3fd25401bc5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:42:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a10a4636b3832bf8803dd9f51bff6475230b261ae9054daf5c1d3fd25401bc5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:42:41 compute-0 podman[446075]: 2025-11-25 17:42:41.191040911 +0000 UTC m=+0.151708181 container init 9e72cb5c93daee0ad716b4423589e6542ba7ff71bdf6529a325630856cf563e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mclean, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:42:41 compute-0 podman[446075]: 2025-11-25 17:42:41.208612349 +0000 UTC m=+0.169279609 container start 9e72cb5c93daee0ad716b4423589e6542ba7ff71bdf6529a325630856cf563e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mclean, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:42:41 compute-0 podman[446075]: 2025-11-25 17:42:41.212583897 +0000 UTC m=+0.173251127 container attach 9e72cb5c93daee0ad716b4423589e6542ba7ff71bdf6529a325630856cf563e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mclean, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 17:42:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3597: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:42 compute-0 condescending_mclean[446091]: {
Nov 25 17:42:42 compute-0 condescending_mclean[446091]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:42:42 compute-0 condescending_mclean[446091]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:42:42 compute-0 condescending_mclean[446091]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:42:42 compute-0 condescending_mclean[446091]:         "osd_id": 1,
Nov 25 17:42:42 compute-0 condescending_mclean[446091]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:42:42 compute-0 condescending_mclean[446091]:         "type": "bluestore"
Nov 25 17:42:42 compute-0 condescending_mclean[446091]:     },
Nov 25 17:42:42 compute-0 condescending_mclean[446091]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:42:42 compute-0 condescending_mclean[446091]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:42:42 compute-0 condescending_mclean[446091]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:42:42 compute-0 condescending_mclean[446091]:         "osd_id": 2,
Nov 25 17:42:42 compute-0 condescending_mclean[446091]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:42:42 compute-0 condescending_mclean[446091]:         "type": "bluestore"
Nov 25 17:42:42 compute-0 condescending_mclean[446091]:     },
Nov 25 17:42:42 compute-0 condescending_mclean[446091]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:42:42 compute-0 condescending_mclean[446091]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:42:42 compute-0 condescending_mclean[446091]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:42:42 compute-0 condescending_mclean[446091]:         "osd_id": 0,
Nov 25 17:42:42 compute-0 condescending_mclean[446091]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:42:42 compute-0 condescending_mclean[446091]:         "type": "bluestore"
Nov 25 17:42:42 compute-0 condescending_mclean[446091]:     }
Nov 25 17:42:42 compute-0 condescending_mclean[446091]: }
Nov 25 17:42:42 compute-0 systemd[1]: libpod-9e72cb5c93daee0ad716b4423589e6542ba7ff71bdf6529a325630856cf563e0.scope: Deactivated successfully.
Nov 25 17:42:42 compute-0 systemd[1]: libpod-9e72cb5c93daee0ad716b4423589e6542ba7ff71bdf6529a325630856cf563e0.scope: Consumed 1.149s CPU time.
Nov 25 17:42:42 compute-0 podman[446075]: 2025-11-25 17:42:42.345775087 +0000 UTC m=+1.306442317 container died 9e72cb5c93daee0ad716b4423589e6542ba7ff71bdf6529a325630856cf563e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mclean, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 17:42:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a10a4636b3832bf8803dd9f51bff6475230b261ae9054daf5c1d3fd25401bc5-merged.mount: Deactivated successfully.
Nov 25 17:42:42 compute-0 podman[446075]: 2025-11-25 17:42:42.411812535 +0000 UTC m=+1.372479765 container remove 9e72cb5c93daee0ad716b4423589e6542ba7ff71bdf6529a325630856cf563e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mclean, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:42:42 compute-0 systemd[1]: libpod-conmon-9e72cb5c93daee0ad716b4423589e6542ba7ff71bdf6529a325630856cf563e0.scope: Deactivated successfully.
Nov 25 17:42:42 compute-0 sudo[445971]: pam_unix(sudo:session): session closed for user root
Nov 25 17:42:42 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:42:42 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:42:42 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:42:42 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:42:42 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 7efaf783-1abb-4fad-a49a-8ebdd9d2e0cd does not exist
Nov 25 17:42:42 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 22605c26-2e55-42a6-8b7e-d6600952a2d7 does not exist
Nov 25 17:42:42 compute-0 sudo[446139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:42:42 compute-0 sudo[446139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:42:42 compute-0 sudo[446139]: pam_unix(sudo:session): session closed for user root
Nov 25 17:42:42 compute-0 sudo[446164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:42:42 compute-0 sudo[446164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:42:42 compute-0 sudo[446164]: pam_unix(sudo:session): session closed for user root
Nov 25 17:42:42 compute-0 ceph-mon[74985]: pgmap v3597: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:42 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:42:42 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:42:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3598: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:44 compute-0 podman[446190]: 2025-11-25 17:42:44.65047362 +0000 UTC m=+0.059713738 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 25 17:42:44 compute-0 podman[446189]: 2025-11-25 17:42:44.664062239 +0000 UTC m=+0.082439735 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 17:42:44 compute-0 podman[446191]: 2025-11-25 17:42:44.685429011 +0000 UTC m=+0.093723032 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 17:42:44 compute-0 nova_compute[254092]: 2025-11-25 17:42:44.744 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:42:44 compute-0 ceph-mon[74985]: pgmap v3598: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:42:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3599: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:46 compute-0 nova_compute[254092]: 2025-11-25 17:42:46.079 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:42:46 compute-0 ceph-mon[74985]: pgmap v3599: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3600: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:48 compute-0 ceph-mon[74985]: pgmap v3600: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:49 compute-0 nova_compute[254092]: 2025-11-25 17:42:49.746 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:42:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3601: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:42:50 compute-0 ceph-mon[74985]: pgmap v3601: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:51 compute-0 nova_compute[254092]: 2025-11-25 17:42:51.080 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:42:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3602: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:42:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:42:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:42:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:42:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 17:42:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:42:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:42:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:42:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:42:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:42:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 17:42:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:42:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:42:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:42:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:42:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:42:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:42:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:42:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:42:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:42:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:42:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:42:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:42:52 compute-0 ceph-mon[74985]: pgmap v3602: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3603: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:54 compute-0 nova_compute[254092]: 2025-11-25 17:42:54.749 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:42:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:42:54 compute-0 ceph-mon[74985]: pgmap v3603: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:42:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/653637048' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:42:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:42:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/653637048' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:42:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3604: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/653637048' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:42:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/653637048' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:42:56 compute-0 nova_compute[254092]: 2025-11-25 17:42:56.082 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:42:56 compute-0 ceph-mon[74985]: pgmap v3604: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3605: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:57 compute-0 ceph-mon[74985]: pgmap v3605: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:59 compute-0 nova_compute[254092]: 2025-11-25 17:42:59.753 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:42:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3606: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:42:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:43:00 compute-0 ceph-mon[74985]: pgmap v3606: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:01 compute-0 nova_compute[254092]: 2025-11-25 17:43:01.084 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:43:01 compute-0 nova_compute[254092]: 2025-11-25 17:43:01.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:43:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3607: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:02 compute-0 nova_compute[254092]: 2025-11-25 17:43:02.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:43:02 compute-0 ceph-mon[74985]: pgmap v3607: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3608: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:04 compute-0 nova_compute[254092]: 2025-11-25 17:43:04.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:43:04 compute-0 nova_compute[254092]: 2025-11-25 17:43:04.538 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:43:04 compute-0 nova_compute[254092]: 2025-11-25 17:43:04.539 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:43:04 compute-0 nova_compute[254092]: 2025-11-25 17:43:04.539 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:43:04 compute-0 nova_compute[254092]: 2025-11-25 17:43:04.539 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:43:04 compute-0 nova_compute[254092]: 2025-11-25 17:43:04.540 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:43:04 compute-0 nova_compute[254092]: 2025-11-25 17:43:04.754 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:43:04 compute-0 ceph-mon[74985]: pgmap v3608: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:43:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:43:04 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1852068403' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:43:04 compute-0 nova_compute[254092]: 2025-11-25 17:43:04.980 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:43:05 compute-0 nova_compute[254092]: 2025-11-25 17:43:05.174 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:43:05 compute-0 nova_compute[254092]: 2025-11-25 17:43:05.175 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3622MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:43:05 compute-0 nova_compute[254092]: 2025-11-25 17:43:05.176 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:43:05 compute-0 nova_compute[254092]: 2025-11-25 17:43:05.176 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:43:05 compute-0 nova_compute[254092]: 2025-11-25 17:43:05.268 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:43:05 compute-0 nova_compute[254092]: 2025-11-25 17:43:05.268 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:43:05 compute-0 nova_compute[254092]: 2025-11-25 17:43:05.308 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:43:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:43:05 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3948473468' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:43:05 compute-0 nova_compute[254092]: 2025-11-25 17:43:05.743 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:43:05 compute-0 nova_compute[254092]: 2025-11-25 17:43:05.751 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:43:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3609: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:05 compute-0 nova_compute[254092]: 2025-11-25 17:43:05.767 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:43:05 compute-0 nova_compute[254092]: 2025-11-25 17:43:05.770 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:43:05 compute-0 nova_compute[254092]: 2025-11-25 17:43:05.771 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:43:05 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1852068403' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:43:05 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3948473468' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:43:06 compute-0 nova_compute[254092]: 2025-11-25 17:43:06.087 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:43:06 compute-0 ceph-mon[74985]: pgmap v3609: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3610: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:08 compute-0 nova_compute[254092]: 2025-11-25 17:43:08.772 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:43:08 compute-0 nova_compute[254092]: 2025-11-25 17:43:08.773 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:43:08 compute-0 nova_compute[254092]: 2025-11-25 17:43:08.773 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:43:08 compute-0 nova_compute[254092]: 2025-11-25 17:43:08.774 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:43:08 compute-0 nova_compute[254092]: 2025-11-25 17:43:08.789 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 17:43:08 compute-0 nova_compute[254092]: 2025-11-25 17:43:08.789 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:43:08 compute-0 ceph-mon[74985]: pgmap v3610: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:09 compute-0 nova_compute[254092]: 2025-11-25 17:43:09.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:43:09 compute-0 nova_compute[254092]: 2025-11-25 17:43:09.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:43:09 compute-0 nova_compute[254092]: 2025-11-25 17:43:09.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:43:09 compute-0 nova_compute[254092]: 2025-11-25 17:43:09.756 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:43:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3611: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:43:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:43:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:43:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:43:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:43:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:43:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:43:10 compute-0 ceph-mon[74985]: pgmap v3611: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:11 compute-0 nova_compute[254092]: 2025-11-25 17:43:11.091 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:43:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3612: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:12 compute-0 ceph-mon[74985]: pgmap v3612: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:43:13.685 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:43:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:43:13.686 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:43:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:43:13.686 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:43:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3613: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:14 compute-0 nova_compute[254092]: 2025-11-25 17:43:14.801 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:43:14 compute-0 ceph-mon[74985]: pgmap v3613: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:43:15 compute-0 podman[446299]: 2025-11-25 17:43:15.6787647 +0000 UTC m=+0.085317414 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, config_id=multipathd)
Nov 25 17:43:15 compute-0 podman[446300]: 2025-11-25 17:43:15.703494414 +0000 UTC m=+0.101375952 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 17:43:15 compute-0 podman[446301]: 2025-11-25 17:43:15.717399511 +0000 UTC m=+0.111391772 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Nov 25 17:43:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3614: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:16 compute-0 nova_compute[254092]: 2025-11-25 17:43:16.093 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:43:16 compute-0 ceph-mon[74985]: pgmap v3614: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3615: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:18 compute-0 ceph-mon[74985]: pgmap v3615: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3616: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:19 compute-0 nova_compute[254092]: 2025-11-25 17:43:19.805 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:43:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:43:20 compute-0 ceph-mon[74985]: pgmap v3616: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:21 compute-0 nova_compute[254092]: 2025-11-25 17:43:21.095 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:43:21 compute-0 nova_compute[254092]: 2025-11-25 17:43:21.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:43:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3617: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:22 compute-0 ceph-mon[74985]: pgmap v3617: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3618: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:24 compute-0 nova_compute[254092]: 2025-11-25 17:43:24.807 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:43:24 compute-0 ceph-mon[74985]: pgmap v3618: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:43:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3619: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:26 compute-0 nova_compute[254092]: 2025-11-25 17:43:26.098 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:43:26 compute-0 ceph-mon[74985]: pgmap v3619: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3620: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:28 compute-0 ceph-mon[74985]: pgmap v3620: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3621: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:29 compute-0 nova_compute[254092]: 2025-11-25 17:43:29.810 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:43:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:43:30 compute-0 ceph-mon[74985]: pgmap v3621: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:31 compute-0 nova_compute[254092]: 2025-11-25 17:43:31.100 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:43:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3622: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:32 compute-0 ceph-mon[74985]: pgmap v3622: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3623: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:34 compute-0 ceph-mon[74985]: pgmap v3623: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:34 compute-0 nova_compute[254092]: 2025-11-25 17:43:34.841 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:43:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:43:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3624: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:36 compute-0 nova_compute[254092]: 2025-11-25 17:43:36.113 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:43:36 compute-0 ceph-mon[74985]: pgmap v3624: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3625: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:38 compute-0 ceph-mon[74985]: pgmap v3625: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3626: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:39 compute-0 nova_compute[254092]: 2025-11-25 17:43:39.889 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:43:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:43:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:43:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:43:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:43:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:43:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:43:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:43:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:43:40
Nov 25 17:43:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:43:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:43:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['images', 'backups', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'volumes', 'vms', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta']
Nov 25 17:43:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:43:40 compute-0 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 17:43:40 compute-0 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.2 total, 600.0 interval
                                           Cumulative writes: 45K writes, 182K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s
                                           Cumulative WAL: 45K writes, 16K syncs, 2.79 writes per sync, written: 0.17 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 307 writes, 622 keys, 307 commit groups, 1.0 writes per commit group, ingest: 0.27 MB, 0.00 MB/s
                                           Interval WAL: 307 writes, 149 syncs, 2.06 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 17:43:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:43:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:43:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:43:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:43:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:43:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:43:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:43:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:43:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:43:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:43:40 compute-0 ceph-mon[74985]: pgmap v3626: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:41 compute-0 nova_compute[254092]: 2025-11-25 17:43:41.155 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:43:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3627: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:42 compute-0 sudo[446360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:43:42 compute-0 sudo[446360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:43:42 compute-0 sudo[446360]: pam_unix(sudo:session): session closed for user root
Nov 25 17:43:42 compute-0 sudo[446385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:43:42 compute-0 sudo[446385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:43:42 compute-0 sudo[446385]: pam_unix(sudo:session): session closed for user root
Nov 25 17:43:42 compute-0 sudo[446410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:43:42 compute-0 sudo[446410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:43:42 compute-0 sudo[446410]: pam_unix(sudo:session): session closed for user root
Nov 25 17:43:42 compute-0 ceph-mon[74985]: pgmap v3627: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:42 compute-0 sudo[446435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:43:42 compute-0 sudo[446435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:43:43 compute-0 sudo[446435]: pam_unix(sudo:session): session closed for user root
Nov 25 17:43:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:43:43 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:43:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:43:43 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:43:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:43:43 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:43:43 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 208f4db1-f3c0-4d93-b824-b74749aac47e does not exist
Nov 25 17:43:43 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev e9e459e3-d8b9-4d78-8be2-224a4b503e2c does not exist
Nov 25 17:43:43 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 3f48eb4e-0b74-4505-9455-3ab589a294e8 does not exist
Nov 25 17:43:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:43:43 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:43:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:43:43 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:43:43 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:43:43 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:43:43 compute-0 sudo[446492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:43:43 compute-0 sudo[446492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:43:43 compute-0 sudo[446492]: pam_unix(sudo:session): session closed for user root
Nov 25 17:43:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3628: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:43 compute-0 sudo[446517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:43:43 compute-0 sudo[446517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:43:43 compute-0 sudo[446517]: pam_unix(sudo:session): session closed for user root
Nov 25 17:43:43 compute-0 sudo[446542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:43:43 compute-0 sudo[446542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:43:43 compute-0 sudo[446542]: pam_unix(sudo:session): session closed for user root
Nov 25 17:43:43 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:43:43 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:43:43 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:43:43 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:43:43 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:43:43 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:43:43 compute-0 sudo[446567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:43:43 compute-0 sudo[446567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:43:44 compute-0 podman[446632]: 2025-11-25 17:43:44.445568085 +0000 UTC m=+0.061250188 container create 4ee08558c99e00fa82011d91f54e7a5d1f904baf36f17b39f37ef34d504cc03b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bose, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:43:44 compute-0 systemd[1]: Started libpod-conmon-4ee08558c99e00fa82011d91f54e7a5d1f904baf36f17b39f37ef34d504cc03b.scope.
Nov 25 17:43:44 compute-0 podman[446632]: 2025-11-25 17:43:44.422424045 +0000 UTC m=+0.038106168 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:43:44 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:43:44 compute-0 podman[446632]: 2025-11-25 17:43:44.538947888 +0000 UTC m=+0.154630071 container init 4ee08558c99e00fa82011d91f54e7a5d1f904baf36f17b39f37ef34d504cc03b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Nov 25 17:43:44 compute-0 podman[446632]: 2025-11-25 17:43:44.553051801 +0000 UTC m=+0.168733934 container start 4ee08558c99e00fa82011d91f54e7a5d1f904baf36f17b39f37ef34d504cc03b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bose, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:43:44 compute-0 podman[446632]: 2025-11-25 17:43:44.556986428 +0000 UTC m=+0.172668621 container attach 4ee08558c99e00fa82011d91f54e7a5d1f904baf36f17b39f37ef34d504cc03b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bose, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 17:43:44 compute-0 optimistic_bose[446649]: 167 167
Nov 25 17:43:44 compute-0 systemd[1]: libpod-4ee08558c99e00fa82011d91f54e7a5d1f904baf36f17b39f37ef34d504cc03b.scope: Deactivated successfully.
Nov 25 17:43:44 compute-0 conmon[446649]: conmon 4ee08558c99e00fa8201 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4ee08558c99e00fa82011d91f54e7a5d1f904baf36f17b39f37ef34d504cc03b.scope/container/memory.events
Nov 25 17:43:44 compute-0 podman[446632]: 2025-11-25 17:43:44.563993809 +0000 UTC m=+0.179675942 container died 4ee08558c99e00fa82011d91f54e7a5d1f904baf36f17b39f37ef34d504cc03b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:43:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-780f73d29e875c9b6423ff32980167577dcdce7ea8d3d85d17073131581fb0d5-merged.mount: Deactivated successfully.
Nov 25 17:43:44 compute-0 podman[446632]: 2025-11-25 17:43:44.619586673 +0000 UTC m=+0.235268766 container remove 4ee08558c99e00fa82011d91f54e7a5d1f904baf36f17b39f37ef34d504cc03b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 17:43:44 compute-0 systemd[1]: libpod-conmon-4ee08558c99e00fa82011d91f54e7a5d1f904baf36f17b39f37ef34d504cc03b.scope: Deactivated successfully.
Nov 25 17:43:44 compute-0 podman[446672]: 2025-11-25 17:43:44.864689405 +0000 UTC m=+0.076822422 container create 509bea22dbed73e5296c3faa98fb7261d15e1cd8c917b8381962223a93240573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_taussig, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 17:43:44 compute-0 nova_compute[254092]: 2025-11-25 17:43:44.891 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:43:44 compute-0 systemd[1]: Started libpod-conmon-509bea22dbed73e5296c3faa98fb7261d15e1cd8c917b8381962223a93240573.scope.
Nov 25 17:43:44 compute-0 podman[446672]: 2025-11-25 17:43:44.835382918 +0000 UTC m=+0.047515945 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:43:44 compute-0 ceph-mon[74985]: pgmap v3628: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:44 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:43:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/638fc3170aae96acba769f3ed4aaad7cdeb2315ad9bcff5763cc7099442c3edb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:43:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/638fc3170aae96acba769f3ed4aaad7cdeb2315ad9bcff5763cc7099442c3edb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:43:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:43:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/638fc3170aae96acba769f3ed4aaad7cdeb2315ad9bcff5763cc7099442c3edb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:43:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/638fc3170aae96acba769f3ed4aaad7cdeb2315ad9bcff5763cc7099442c3edb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:43:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/638fc3170aae96acba769f3ed4aaad7cdeb2315ad9bcff5763cc7099442c3edb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:43:44 compute-0 podman[446672]: 2025-11-25 17:43:44.974231237 +0000 UTC m=+0.186364334 container init 509bea22dbed73e5296c3faa98fb7261d15e1cd8c917b8381962223a93240573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_taussig, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 17:43:44 compute-0 podman[446672]: 2025-11-25 17:43:44.983363675 +0000 UTC m=+0.195496732 container start 509bea22dbed73e5296c3faa98fb7261d15e1cd8c917b8381962223a93240573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 17:43:44 compute-0 podman[446672]: 2025-11-25 17:43:44.987978731 +0000 UTC m=+0.200111838 container attach 509bea22dbed73e5296c3faa98fb7261d15e1cd8c917b8381962223a93240573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:43:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3629: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:46 compute-0 crazy_taussig[446689]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:43:46 compute-0 crazy_taussig[446689]: --> relative data size: 1.0
Nov 25 17:43:46 compute-0 crazy_taussig[446689]: --> All data devices are unavailable
Nov 25 17:43:46 compute-0 systemd[1]: libpod-509bea22dbed73e5296c3faa98fb7261d15e1cd8c917b8381962223a93240573.scope: Deactivated successfully.
Nov 25 17:43:46 compute-0 systemd[1]: libpod-509bea22dbed73e5296c3faa98fb7261d15e1cd8c917b8381962223a93240573.scope: Consumed 1.039s CPU time.
Nov 25 17:43:46 compute-0 podman[446672]: 2025-11-25 17:43:46.075009404 +0000 UTC m=+1.287142421 container died 509bea22dbed73e5296c3faa98fb7261d15e1cd8c917b8381962223a93240573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:43:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-638fc3170aae96acba769f3ed4aaad7cdeb2315ad9bcff5763cc7099442c3edb-merged.mount: Deactivated successfully.
Nov 25 17:43:46 compute-0 nova_compute[254092]: 2025-11-25 17:43:46.189 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:43:46 compute-0 podman[446672]: 2025-11-25 17:43:46.22250866 +0000 UTC m=+1.434641677 container remove 509bea22dbed73e5296c3faa98fb7261d15e1cd8c917b8381962223a93240573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_taussig, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 17:43:46 compute-0 podman[446719]: 2025-11-25 17:43:46.227949838 +0000 UTC m=+0.112889615 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 25 17:43:46 compute-0 systemd[1]: libpod-conmon-509bea22dbed73e5296c3faa98fb7261d15e1cd8c917b8381962223a93240573.scope: Deactivated successfully.
Nov 25 17:43:46 compute-0 sudo[446567]: pam_unix(sudo:session): session closed for user root
Nov 25 17:43:46 compute-0 podman[446729]: 2025-11-25 17:43:46.272435559 +0000 UTC m=+0.156943433 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 25 17:43:46 compute-0 podman[446730]: 2025-11-25 17:43:46.308771438 +0000 UTC m=+0.182552711 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 25 17:43:46 compute-0 sudo[446789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:43:46 compute-0 sudo[446789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:43:46 compute-0 sudo[446789]: pam_unix(sudo:session): session closed for user root
Nov 25 17:43:46 compute-0 sudo[446821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:43:46 compute-0 sudo[446821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:43:46 compute-0 sudo[446821]: pam_unix(sudo:session): session closed for user root
Nov 25 17:43:46 compute-0 sudo[446846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:43:46 compute-0 sudo[446846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:43:46 compute-0 sudo[446846]: pam_unix(sudo:session): session closed for user root
Nov 25 17:43:46 compute-0 sudo[446871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:43:46 compute-0 sudo[446871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:43:46 compute-0 ceph-mon[74985]: pgmap v3629: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:47 compute-0 podman[446938]: 2025-11-25 17:43:47.001463136 +0000 UTC m=+0.062127642 container create aac2158ebf8e1894495157514a1c32c619c7ea0c925ef0a54561dd8b7828c125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_brown, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 17:43:47 compute-0 systemd[1]: Started libpod-conmon-aac2158ebf8e1894495157514a1c32c619c7ea0c925ef0a54561dd8b7828c125.scope.
Nov 25 17:43:47 compute-0 podman[446938]: 2025-11-25 17:43:46.981200314 +0000 UTC m=+0.041864850 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:43:47 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:43:47 compute-0 podman[446938]: 2025-11-25 17:43:47.109920269 +0000 UTC m=+0.170584815 container init aac2158ebf8e1894495157514a1c32c619c7ea0c925ef0a54561dd8b7828c125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_brown, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:43:47 compute-0 podman[446938]: 2025-11-25 17:43:47.121057512 +0000 UTC m=+0.181722018 container start aac2158ebf8e1894495157514a1c32c619c7ea0c925ef0a54561dd8b7828c125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_brown, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:43:47 compute-0 podman[446938]: 2025-11-25 17:43:47.125232235 +0000 UTC m=+0.185896791 container attach aac2158ebf8e1894495157514a1c32c619c7ea0c925ef0a54561dd8b7828c125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_brown, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 17:43:47 compute-0 agitated_brown[446954]: 167 167
Nov 25 17:43:47 compute-0 systemd[1]: libpod-aac2158ebf8e1894495157514a1c32c619c7ea0c925ef0a54561dd8b7828c125.scope: Deactivated successfully.
Nov 25 17:43:47 compute-0 podman[446938]: 2025-11-25 17:43:47.129839161 +0000 UTC m=+0.190503687 container died aac2158ebf8e1894495157514a1c32c619c7ea0c925ef0a54561dd8b7828c125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_brown, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 17:43:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-71dca24b45f1536aa7786ac7204965d1b767e1fc0761e7f17d93ebbfbfcd6258-merged.mount: Deactivated successfully.
Nov 25 17:43:47 compute-0 podman[446938]: 2025-11-25 17:43:47.18124114 +0000 UTC m=+0.241905666 container remove aac2158ebf8e1894495157514a1c32c619c7ea0c925ef0a54561dd8b7828c125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:43:47 compute-0 systemd[1]: libpod-conmon-aac2158ebf8e1894495157514a1c32c619c7ea0c925ef0a54561dd8b7828c125.scope: Deactivated successfully.
Nov 25 17:43:47 compute-0 podman[446979]: 2025-11-25 17:43:47.441476544 +0000 UTC m=+0.074872099 container create 00fe45ca76af90eeb40ed12002aa934f2455b720cbe1d53ed8017de86c436bc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_chebyshev, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:43:47 compute-0 systemd[1]: Started libpod-conmon-00fe45ca76af90eeb40ed12002aa934f2455b720cbe1d53ed8017de86c436bc9.scope.
Nov 25 17:43:47 compute-0 podman[446979]: 2025-11-25 17:43:47.405978418 +0000 UTC m=+0.039374083 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:43:47 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:43:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dda703218e75c00e3a349ca8ff8cf1c49fe4d508c424dac0eecbdf5c512f2bb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:43:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dda703218e75c00e3a349ca8ff8cf1c49fe4d508c424dac0eecbdf5c512f2bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:43:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dda703218e75c00e3a349ca8ff8cf1c49fe4d508c424dac0eecbdf5c512f2bb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:43:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dda703218e75c00e3a349ca8ff8cf1c49fe4d508c424dac0eecbdf5c512f2bb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:43:47 compute-0 podman[446979]: 2025-11-25 17:43:47.549600138 +0000 UTC m=+0.182995793 container init 00fe45ca76af90eeb40ed12002aa934f2455b720cbe1d53ed8017de86c436bc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Nov 25 17:43:47 compute-0 podman[446979]: 2025-11-25 17:43:47.563888697 +0000 UTC m=+0.197284282 container start 00fe45ca76af90eeb40ed12002aa934f2455b720cbe1d53ed8017de86c436bc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_chebyshev, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 17:43:47 compute-0 podman[446979]: 2025-11-25 17:43:47.568337828 +0000 UTC m=+0.201733413 container attach 00fe45ca76af90eeb40ed12002aa934f2455b720cbe1d53ed8017de86c436bc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_chebyshev, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:43:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3630: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]: {
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:     "0": [
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:         {
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:             "devices": [
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:                 "/dev/loop3"
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:             ],
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:             "lv_name": "ceph_lv0",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:             "lv_size": "21470642176",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:             "name": "ceph_lv0",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:             "tags": {
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:                 "ceph.cluster_name": "ceph",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:                 "ceph.crush_device_class": "",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:                 "ceph.encrypted": "0",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:                 "ceph.osd_id": "0",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:                 "ceph.type": "block",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:                 "ceph.vdo": "0"
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:             },
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:             "type": "block",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:             "vg_name": "ceph_vg0"
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:         }
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:     ],
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:     "1": [
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:         {
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:             "devices": [
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:                 "/dev/loop4"
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:             ],
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:             "lv_name": "ceph_lv1",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:             "lv_size": "21470642176",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:             "name": "ceph_lv1",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:             "tags": {
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:                 "ceph.cluster_name": "ceph",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:                 "ceph.crush_device_class": "",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:                 "ceph.encrypted": "0",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:                 "ceph.osd_id": "1",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:                 "ceph.type": "block",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:                 "ceph.vdo": "0"
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:             },
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:             "type": "block",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:             "vg_name": "ceph_vg1"
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:         }
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:     ],
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:     "2": [
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:         {
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:             "devices": [
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:                 "/dev/loop5"
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:             ],
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:             "lv_name": "ceph_lv2",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:             "lv_size": "21470642176",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:             "name": "ceph_lv2",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:             "tags": {
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:                 "ceph.cluster_name": "ceph",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:                 "ceph.crush_device_class": "",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:                 "ceph.encrypted": "0",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:                 "ceph.osd_id": "2",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:                 "ceph.type": "block",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:                 "ceph.vdo": "0"
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:             },
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:             "type": "block",
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:             "vg_name": "ceph_vg2"
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:         }
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]:     ]
Nov 25 17:43:48 compute-0 trusting_chebyshev[446995]: }
Nov 25 17:43:48 compute-0 systemd[1]: libpod-00fe45ca76af90eeb40ed12002aa934f2455b720cbe1d53ed8017de86c436bc9.scope: Deactivated successfully.
Nov 25 17:43:48 compute-0 podman[446979]: 2025-11-25 17:43:48.380322573 +0000 UTC m=+1.013718128 container died 00fe45ca76af90eeb40ed12002aa934f2455b720cbe1d53ed8017de86c436bc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_chebyshev, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:43:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-9dda703218e75c00e3a349ca8ff8cf1c49fe4d508c424dac0eecbdf5c512f2bb-merged.mount: Deactivated successfully.
Nov 25 17:43:48 compute-0 podman[446979]: 2025-11-25 17:43:48.441829768 +0000 UTC m=+1.075225353 container remove 00fe45ca76af90eeb40ed12002aa934f2455b720cbe1d53ed8017de86c436bc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 17:43:48 compute-0 systemd[1]: libpod-conmon-00fe45ca76af90eeb40ed12002aa934f2455b720cbe1d53ed8017de86c436bc9.scope: Deactivated successfully.
Nov 25 17:43:48 compute-0 sudo[446871]: pam_unix(sudo:session): session closed for user root
Nov 25 17:43:48 compute-0 sudo[447015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:43:48 compute-0 sudo[447015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:43:48 compute-0 sudo[447015]: pam_unix(sudo:session): session closed for user root
Nov 25 17:43:48 compute-0 sudo[447040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:43:48 compute-0 sudo[447040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:43:48 compute-0 sudo[447040]: pam_unix(sudo:session): session closed for user root
Nov 25 17:43:48 compute-0 sudo[447065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:43:48 compute-0 sudo[447065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:43:48 compute-0 sudo[447065]: pam_unix(sudo:session): session closed for user root
Nov 25 17:43:48 compute-0 sudo[447090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:43:48 compute-0 sudo[447090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:43:48 compute-0 ceph-mon[74985]: pgmap v3630: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:49 compute-0 podman[447157]: 2025-11-25 17:43:49.228048432 +0000 UTC m=+0.050924458 container create 767c3a9c33420972014ed8806cca144db3eb61ac8686c562637603d1da70b69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_edison, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 17:43:49 compute-0 systemd[1]: Started libpod-conmon-767c3a9c33420972014ed8806cca144db3eb61ac8686c562637603d1da70b69d.scope.
Nov 25 17:43:49 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:43:49 compute-0 podman[447157]: 2025-11-25 17:43:49.207877772 +0000 UTC m=+0.030753848 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:43:49 compute-0 podman[447157]: 2025-11-25 17:43:49.315840161 +0000 UTC m=+0.138716237 container init 767c3a9c33420972014ed8806cca144db3eb61ac8686c562637603d1da70b69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_edison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 17:43:49 compute-0 podman[447157]: 2025-11-25 17:43:49.322332808 +0000 UTC m=+0.145208834 container start 767c3a9c33420972014ed8806cca144db3eb61ac8686c562637603d1da70b69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:43:49 compute-0 keen_edison[447173]: 167 167
Nov 25 17:43:49 compute-0 podman[447157]: 2025-11-25 17:43:49.326416129 +0000 UTC m=+0.149292205 container attach 767c3a9c33420972014ed8806cca144db3eb61ac8686c562637603d1da70b69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Nov 25 17:43:49 compute-0 systemd[1]: libpod-767c3a9c33420972014ed8806cca144db3eb61ac8686c562637603d1da70b69d.scope: Deactivated successfully.
Nov 25 17:43:49 compute-0 conmon[447173]: conmon 767c3a9c33420972014e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-767c3a9c33420972014ed8806cca144db3eb61ac8686c562637603d1da70b69d.scope/container/memory.events
Nov 25 17:43:49 compute-0 podman[447157]: 2025-11-25 17:43:49.328097775 +0000 UTC m=+0.150973831 container died 767c3a9c33420972014ed8806cca144db3eb61ac8686c562637603d1da70b69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_edison, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 17:43:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-48e950f39815ae0280b782457ac2b597297036cea13f5f10ef8f113056a6c3b7-merged.mount: Deactivated successfully.
Nov 25 17:43:49 compute-0 podman[447157]: 2025-11-25 17:43:49.369184514 +0000 UTC m=+0.192060580 container remove 767c3a9c33420972014ed8806cca144db3eb61ac8686c562637603d1da70b69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_edison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 17:43:49 compute-0 systemd[1]: libpod-conmon-767c3a9c33420972014ed8806cca144db3eb61ac8686c562637603d1da70b69d.scope: Deactivated successfully.
Nov 25 17:43:49 compute-0 podman[447197]: 2025-11-25 17:43:49.583527979 +0000 UTC m=+0.062386540 container create bbf8803df4e42fa2784902c8d835d332abb3e656200b44415b4c27041b98c35b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bose, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:43:49 compute-0 systemd[1]: Started libpod-conmon-bbf8803df4e42fa2784902c8d835d332abb3e656200b44415b4c27041b98c35b.scope.
Nov 25 17:43:49 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:43:49 compute-0 podman[447197]: 2025-11-25 17:43:49.562587589 +0000 UTC m=+0.041446150 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:43:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8f12309023f6eda9ac40f99b12a207cce7dc182d19cc9ff15ae315ad231a95b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:43:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8f12309023f6eda9ac40f99b12a207cce7dc182d19cc9ff15ae315ad231a95b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:43:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8f12309023f6eda9ac40f99b12a207cce7dc182d19cc9ff15ae315ad231a95b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:43:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8f12309023f6eda9ac40f99b12a207cce7dc182d19cc9ff15ae315ad231a95b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:43:49 compute-0 podman[447197]: 2025-11-25 17:43:49.676945952 +0000 UTC m=+0.155804493 container init bbf8803df4e42fa2784902c8d835d332abb3e656200b44415b4c27041b98c35b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bose, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 17:43:49 compute-0 podman[447197]: 2025-11-25 17:43:49.688051265 +0000 UTC m=+0.166909796 container start bbf8803df4e42fa2784902c8d835d332abb3e656200b44415b4c27041b98c35b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 17:43:49 compute-0 podman[447197]: 2025-11-25 17:43:49.692535436 +0000 UTC m=+0.171393997 container attach bbf8803df4e42fa2784902c8d835d332abb3e656200b44415b4c27041b98c35b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bose, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Nov 25 17:43:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3631: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:49 compute-0 nova_compute[254092]: 2025-11-25 17:43:49.893 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:43:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:43:50 compute-0 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 17:43:50 compute-0 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6601.2 total, 600.0 interval
                                           Cumulative writes: 47K writes, 185K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 47K writes, 16K syncs, 2.80 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 300 writes, 695 keys, 300 commit groups, 1.0 writes per commit group, ingest: 0.39 MB, 0.00 MB/s
                                           Interval WAL: 300 writes, 143 syncs, 2.10 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 17:43:50 compute-0 eager_bose[447213]: {
Nov 25 17:43:50 compute-0 eager_bose[447213]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:43:50 compute-0 eager_bose[447213]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:43:50 compute-0 eager_bose[447213]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:43:50 compute-0 eager_bose[447213]:         "osd_id": 1,
Nov 25 17:43:50 compute-0 eager_bose[447213]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:43:50 compute-0 eager_bose[447213]:         "type": "bluestore"
Nov 25 17:43:50 compute-0 eager_bose[447213]:     },
Nov 25 17:43:50 compute-0 eager_bose[447213]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:43:50 compute-0 eager_bose[447213]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:43:50 compute-0 eager_bose[447213]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:43:50 compute-0 eager_bose[447213]:         "osd_id": 2,
Nov 25 17:43:50 compute-0 eager_bose[447213]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:43:50 compute-0 eager_bose[447213]:         "type": "bluestore"
Nov 25 17:43:50 compute-0 eager_bose[447213]:     },
Nov 25 17:43:50 compute-0 eager_bose[447213]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:43:50 compute-0 eager_bose[447213]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:43:50 compute-0 eager_bose[447213]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:43:50 compute-0 eager_bose[447213]:         "osd_id": 0,
Nov 25 17:43:50 compute-0 eager_bose[447213]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:43:50 compute-0 eager_bose[447213]:         "type": "bluestore"
Nov 25 17:43:50 compute-0 eager_bose[447213]:     }
Nov 25 17:43:50 compute-0 eager_bose[447213]: }
Nov 25 17:43:50 compute-0 systemd[1]: libpod-bbf8803df4e42fa2784902c8d835d332abb3e656200b44415b4c27041b98c35b.scope: Deactivated successfully.
Nov 25 17:43:50 compute-0 podman[447197]: 2025-11-25 17:43:50.777171064 +0000 UTC m=+1.256029595 container died bbf8803df4e42fa2784902c8d835d332abb3e656200b44415b4c27041b98c35b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bose, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:43:50 compute-0 systemd[1]: libpod-bbf8803df4e42fa2784902c8d835d332abb3e656200b44415b4c27041b98c35b.scope: Consumed 1.100s CPU time.
Nov 25 17:43:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8f12309023f6eda9ac40f99b12a207cce7dc182d19cc9ff15ae315ad231a95b-merged.mount: Deactivated successfully.
Nov 25 17:43:50 compute-0 podman[447197]: 2025-11-25 17:43:50.84094976 +0000 UTC m=+1.319808311 container remove bbf8803df4e42fa2784902c8d835d332abb3e656200b44415b4c27041b98c35b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bose, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 17:43:50 compute-0 systemd[1]: libpod-conmon-bbf8803df4e42fa2784902c8d835d332abb3e656200b44415b4c27041b98c35b.scope: Deactivated successfully.
Nov 25 17:43:50 compute-0 sudo[447090]: pam_unix(sudo:session): session closed for user root
Nov 25 17:43:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:43:50 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:43:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:43:50 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:43:50 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 9d8d7507-6072-4d7f-b255-a7cf8d1d3a3b does not exist
Nov 25 17:43:50 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 723ecaec-4780-4d5d-be81-344ffec3b8c9 does not exist
Nov 25 17:43:50 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #174. Immutable memtables: 0.
Nov 25 17:43:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:43:50.909000) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 17:43:50 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 107] Flushing memtable with next log file: 174
Nov 25 17:43:50 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092630909052, "job": 107, "event": "flush_started", "num_memtables": 1, "num_entries": 1042, "num_deletes": 251, "total_data_size": 1512987, "memory_usage": 1541584, "flush_reason": "Manual Compaction"}
Nov 25 17:43:50 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 107] Level-0 flush table #175: started
Nov 25 17:43:50 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092630917080, "cf_name": "default", "job": 107, "event": "table_file_creation", "file_number": 175, "file_size": 1487572, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 74120, "largest_seqno": 75161, "table_properties": {"data_size": 1482474, "index_size": 2621, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10867, "raw_average_key_size": 19, "raw_value_size": 1472281, "raw_average_value_size": 2662, "num_data_blocks": 118, "num_entries": 553, "num_filter_entries": 553, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764092530, "oldest_key_time": 1764092530, "file_creation_time": 1764092630, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 175, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:43:50 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 107] Flush lasted 8119 microseconds, and 3664 cpu microseconds.
Nov 25 17:43:50 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:43:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:43:50.917129) [db/flush_job.cc:967] [default] [JOB 107] Level-0 flush table #175: 1487572 bytes OK
Nov 25 17:43:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:43:50.917145) [db/memtable_list.cc:519] [default] Level-0 commit table #175 started
Nov 25 17:43:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:43:50.918193) [db/memtable_list.cc:722] [default] Level-0 commit table #175: memtable #1 done
Nov 25 17:43:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:43:50.918203) EVENT_LOG_v1 {"time_micros": 1764092630918200, "job": 107, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 17:43:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:43:50.918218) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 17:43:50 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 107] Try to delete WAL files size 1508106, prev total WAL file size 1508106, number of live WAL files 2.
Nov 25 17:43:50 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000171.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:43:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:43:50.918787) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037323739' seq:72057594037927935, type:22 .. '7061786F730037353331' seq:0, type:0; will stop at (end)
Nov 25 17:43:50 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 108] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 17:43:50 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 107 Base level 0, inputs: [175(1452KB)], [173(9575KB)]
Nov 25 17:43:50 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092630918854, "job": 108, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [175], "files_L6": [173], "score": -1, "input_data_size": 11292989, "oldest_snapshot_seqno": -1}
Nov 25 17:43:50 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 108] Generated table #176: 9166 keys, 9577325 bytes, temperature: kUnknown
Nov 25 17:43:50 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092630978467, "cf_name": "default", "job": 108, "event": "table_file_creation", "file_number": 176, "file_size": 9577325, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9521681, "index_size": 31595, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22981, "raw_key_size": 241122, "raw_average_key_size": 26, "raw_value_size": 9363653, "raw_average_value_size": 1021, "num_data_blocks": 1211, "num_entries": 9166, "num_filter_entries": 9166, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764092630, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 176, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:43:50 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:43:50 compute-0 ceph-mon[74985]: pgmap v3631: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:50 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:43:50 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:43:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:43:50.978953) [db/compaction/compaction_job.cc:1663] [default] [JOB 108] Compacted 1@0 + 1@6 files to L6 => 9577325 bytes
Nov 25 17:43:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:43:50.980344) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 189.0 rd, 160.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 9.4 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(14.0) write-amplify(6.4) OK, records in: 9680, records dropped: 514 output_compression: NoCompression
Nov 25 17:43:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:43:50.980377) EVENT_LOG_v1 {"time_micros": 1764092630980360, "job": 108, "event": "compaction_finished", "compaction_time_micros": 59745, "compaction_time_cpu_micros": 40873, "output_level": 6, "num_output_files": 1, "total_output_size": 9577325, "num_input_records": 9680, "num_output_records": 9166, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 17:43:50 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000175.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:43:50 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092630981216, "job": 108, "event": "table_file_deletion", "file_number": 175}
Nov 25 17:43:50 compute-0 sudo[447256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:43:50 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000173.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:43:50 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092630985195, "job": 108, "event": "table_file_deletion", "file_number": 173}
Nov 25 17:43:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:43:50.918710) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:43:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:43:50.985300) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:43:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:43:50.985307) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:43:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:43:50.985309) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:43:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:43:50.985311) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:43:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:43:50.985313) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:43:50 compute-0 sudo[447256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:43:50 compute-0 sudo[447256]: pam_unix(sudo:session): session closed for user root
Nov 25 17:43:51 compute-0 sudo[447281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:43:51 compute-0 sudo[447281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:43:51 compute-0 sudo[447281]: pam_unix(sudo:session): session closed for user root
Nov 25 17:43:51 compute-0 nova_compute[254092]: 2025-11-25 17:43:51.192 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:43:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3632: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:52 compute-0 ceph-mon[74985]: pgmap v3632: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:43:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:43:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:43:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:43:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 17:43:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:43:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:43:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:43:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:43:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:43:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 17:43:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:43:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:43:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:43:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:43:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:43:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:43:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:43:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:43:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:43:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:43:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:43:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:43:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3633: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:54 compute-0 nova_compute[254092]: 2025-11-25 17:43:54.894 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:43:54 compute-0 ceph-mon[74985]: pgmap v3633: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:43:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:43:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4138638566' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:43:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:43:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4138638566' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:43:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3634: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/4138638566' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:43:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/4138638566' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:43:56 compute-0 nova_compute[254092]: 2025-11-25 17:43:56.193 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:43:56 compute-0 ceph-mon[74985]: pgmap v3634: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3635: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:58 compute-0 ceph-mon[74985]: pgmap v3635: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3636: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:43:59 compute-0 nova_compute[254092]: 2025-11-25 17:43:59.897 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:43:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:44:00 compute-0 ceph-mon[74985]: pgmap v3636: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:01 compute-0 nova_compute[254092]: 2025-11-25 17:44:01.196 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:44:01 compute-0 nova_compute[254092]: 2025-11-25 17:44:01.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:44:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3637: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:02 compute-0 ceph-mon[74985]: pgmap v3637: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:02 compute-0 nova_compute[254092]: 2025-11-25 17:44:02.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:44:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3638: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:04 compute-0 nova_compute[254092]: 2025-11-25 17:44:04.899 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:44:04 compute-0 ceph-mon[74985]: pgmap v3638: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:44:05 compute-0 nova_compute[254092]: 2025-11-25 17:44:05.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:44:05 compute-0 nova_compute[254092]: 2025-11-25 17:44:05.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:44:05 compute-0 nova_compute[254092]: 2025-11-25 17:44:05.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:44:05 compute-0 nova_compute[254092]: 2025-11-25 17:44:05.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:44:05 compute-0 nova_compute[254092]: 2025-11-25 17:44:05.522 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:44:05 compute-0 nova_compute[254092]: 2025-11-25 17:44:05.522 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:44:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3639: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:44:05 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3617392977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:44:05 compute-0 nova_compute[254092]: 2025-11-25 17:44:05.987 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:44:06 compute-0 ceph-mon[74985]: pgmap v3639: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:06 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3617392977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:44:06 compute-0 nova_compute[254092]: 2025-11-25 17:44:06.158 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:44:06 compute-0 nova_compute[254092]: 2025-11-25 17:44:06.159 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3614MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:44:06 compute-0 nova_compute[254092]: 2025-11-25 17:44:06.160 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:44:06 compute-0 nova_compute[254092]: 2025-11-25 17:44:06.160 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:44:06 compute-0 nova_compute[254092]: 2025-11-25 17:44:06.253 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:44:06 compute-0 nova_compute[254092]: 2025-11-25 17:44:06.287 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:44:06 compute-0 nova_compute[254092]: 2025-11-25 17:44:06.287 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:44:06 compute-0 nova_compute[254092]: 2025-11-25 17:44:06.309 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:44:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:44:06 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2451724122' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:44:06 compute-0 nova_compute[254092]: 2025-11-25 17:44:06.757 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:44:06 compute-0 nova_compute[254092]: 2025-11-25 17:44:06.763 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:44:06 compute-0 nova_compute[254092]: 2025-11-25 17:44:06.794 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:44:06 compute-0 nova_compute[254092]: 2025-11-25 17:44:06.796 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:44:06 compute-0 nova_compute[254092]: 2025-11-25 17:44:06.796 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:44:07 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2451724122' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:44:07 compute-0 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 17:44:07 compute-0 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6601.5 total, 600.0 interval
                                           Cumulative writes: 39K writes, 155K keys, 39K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s
                                           Cumulative WAL: 39K writes, 14K syncs, 2.77 writes per sync, written: 0.15 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 230 writes, 420 keys, 230 commit groups, 1.0 writes per commit group, ingest: 0.17 MB, 0.00 MB/s
                                           Interval WAL: 230 writes, 108 syncs, 2.13 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 17:44:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3640: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:08 compute-0 ceph-mon[74985]: pgmap v3640: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3641: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:09 compute-0 nova_compute[254092]: 2025-11-25 17:44:09.791 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:44:09 compute-0 nova_compute[254092]: 2025-11-25 17:44:09.791 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:44:09 compute-0 nova_compute[254092]: 2025-11-25 17:44:09.792 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:44:09 compute-0 nova_compute[254092]: 2025-11-25 17:44:09.792 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:44:09 compute-0 nova_compute[254092]: 2025-11-25 17:44:09.811 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 17:44:09 compute-0 nova_compute[254092]: 2025-11-25 17:44:09.811 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:44:09 compute-0 nova_compute[254092]: 2025-11-25 17:44:09.811 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:44:09 compute-0 nova_compute[254092]: 2025-11-25 17:44:09.811 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:44:09 compute-0 nova_compute[254092]: 2025-11-25 17:44:09.900 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:44:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:44:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:44:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:44:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:44:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:44:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:44:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:44:10 compute-0 ceph-mon[74985]: pgmap v3641: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:11 compute-0 nova_compute[254092]: 2025-11-25 17:44:11.255 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:44:11 compute-0 nova_compute[254092]: 2025-11-25 17:44:11.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:44:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3642: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:12 compute-0 ceph-mon[74985]: pgmap v3642: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:44:13.687 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:44:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:44:13.688 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:44:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:44:13.688 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:44:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3643: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:14 compute-0 ceph-mon[74985]: pgmap v3643: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:14 compute-0 nova_compute[254092]: 2025-11-25 17:44:14.942 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:44:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:44:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3644: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:16 compute-0 nova_compute[254092]: 2025-11-25 17:44:16.306 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:44:16 compute-0 podman[447350]: 2025-11-25 17:44:16.663209187 +0000 UTC m=+0.072176526 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Nov 25 17:44:16 compute-0 podman[447351]: 2025-11-25 17:44:16.678938025 +0000 UTC m=+0.088088749 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 25 17:44:16 compute-0 podman[447352]: 2025-11-25 17:44:16.69236052 +0000 UTC m=+0.101067792 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 17:44:16 compute-0 ceph-mon[74985]: pgmap v3644: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3645: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:18 compute-0 ceph-mon[74985]: pgmap v3645: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3646: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:19 compute-0 nova_compute[254092]: 2025-11-25 17:44:19.946 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:44:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:44:20 compute-0 nova_compute[254092]: 2025-11-25 17:44:20.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:44:20 compute-0 nova_compute[254092]: 2025-11-25 17:44:20.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 17:44:20 compute-0 ceph-mon[74985]: pgmap v3646: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:21 compute-0 nova_compute[254092]: 2025-11-25 17:44:21.309 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:44:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3647: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:22 compute-0 nova_compute[254092]: 2025-11-25 17:44:22.509 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:44:22 compute-0 nova_compute[254092]: 2025-11-25 17:44:22.525 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:44:22 compute-0 nova_compute[254092]: 2025-11-25 17:44:22.525 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 17:44:22 compute-0 nova_compute[254092]: 2025-11-25 17:44:22.535 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 17:44:22 compute-0 ceph-mon[74985]: pgmap v3647: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:23 compute-0 nova_compute[254092]: 2025-11-25 17:44:23.505 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:44:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3648: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:24 compute-0 ceph-mon[74985]: pgmap v3648: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:24 compute-0 nova_compute[254092]: 2025-11-25 17:44:24.948 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:44:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:44:25 compute-0 ceph-mgr[75280]: [devicehealth INFO root] Check health
Nov 25 17:44:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3649: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:26 compute-0 nova_compute[254092]: 2025-11-25 17:44:26.313 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:44:26 compute-0 ceph-mon[74985]: pgmap v3649: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3650: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:28 compute-0 ceph-mon[74985]: pgmap v3650: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3651: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:29 compute-0 nova_compute[254092]: 2025-11-25 17:44:29.951 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:44:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:44:30 compute-0 ceph-mon[74985]: pgmap v3651: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:31 compute-0 nova_compute[254092]: 2025-11-25 17:44:31.315 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:44:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3652: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:32 compute-0 ceph-mon[74985]: pgmap v3652: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3653: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:34 compute-0 ceph-mon[74985]: pgmap v3653: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:34 compute-0 nova_compute[254092]: 2025-11-25 17:44:34.953 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:44:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:44:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3654: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:36 compute-0 nova_compute[254092]: 2025-11-25 17:44:36.326 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:44:36 compute-0 ceph-mon[74985]: pgmap v3654: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3655: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:38 compute-0 ceph-mon[74985]: pgmap v3655: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3656: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:44:39 compute-0 nova_compute[254092]: 2025-11-25 17:44:39.994 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:44:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:44:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:44:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:44:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:44:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:44:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:44:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:44:40
Nov 25 17:44:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:44:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:44:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.log', '.mgr', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'images', 'backups', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta']
Nov 25 17:44:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:44:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:44:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:44:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:44:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:44:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:44:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:44:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:44:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:44:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:44:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:44:40 compute-0 ceph-mon[74985]: pgmap v3656: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:41 compute-0 nova_compute[254092]: 2025-11-25 17:44:41.380 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:44:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3657: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:43 compute-0 ceph-mon[74985]: pgmap v3657: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:43 compute-0 nova_compute[254092]: 2025-11-25 17:44:43.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:44:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3658: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:44 compute-0 ceph-mon[74985]: pgmap v3658: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:44 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:44:44 compute-0 nova_compute[254092]: 2025-11-25 17:44:44.994 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:44:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3659: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:46 compute-0 nova_compute[254092]: 2025-11-25 17:44:46.383 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:44:46 compute-0 ceph-mon[74985]: pgmap v3659: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:47 compute-0 podman[447412]: 2025-11-25 17:44:47.679321628 +0000 UTC m=+0.089540459 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:44:47 compute-0 podman[447411]: 2025-11-25 17:44:47.688160579 +0000 UTC m=+0.105964266 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 17:44:47 compute-0 podman[447413]: 2025-11-25 17:44:47.703419884 +0000 UTC m=+0.107827186 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 17:44:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3660: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:48 compute-0 ceph-mon[74985]: pgmap v3660: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3661: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:44:49 compute-0 nova_compute[254092]: 2025-11-25 17:44:49.997 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:44:51 compute-0 ceph-mon[74985]: pgmap v3661: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:51 compute-0 sudo[447477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:44:51 compute-0 sudo[447477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:44:51 compute-0 sudo[447477]: pam_unix(sudo:session): session closed for user root
Nov 25 17:44:51 compute-0 sudo[447502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:44:51 compute-0 sudo[447502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:44:51 compute-0 sudo[447502]: pam_unix(sudo:session): session closed for user root
Nov 25 17:44:51 compute-0 sudo[447527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:44:51 compute-0 sudo[447527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:44:51 compute-0 sudo[447527]: pam_unix(sudo:session): session closed for user root
Nov 25 17:44:51 compute-0 sudo[447552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:44:51 compute-0 sudo[447552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:44:51 compute-0 nova_compute[254092]: 2025-11-25 17:44:51.385 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:44:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3662: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:51 compute-0 sudo[447552]: pam_unix(sudo:session): session closed for user root
Nov 25 17:44:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:44:51 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:44:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:44:51 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:44:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:44:52 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:44:52 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 9e656d33-ef8a-41d1-929f-5ce50c3a5e8b does not exist
Nov 25 17:44:52 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev e81ff122-c5d9-46c6-b58e-dc17b050aaac does not exist
Nov 25 17:44:52 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 3f5fe6f9-ff37-4191-901c-564b49a77c7e does not exist
Nov 25 17:44:52 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:44:52 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:44:52 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:44:52 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:44:52 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:44:52 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:44:52 compute-0 sudo[447609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:44:52 compute-0 sudo[447609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:44:52 compute-0 sudo[447609]: pam_unix(sudo:session): session closed for user root
Nov 25 17:44:52 compute-0 sudo[447634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:44:52 compute-0 sudo[447634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:44:52 compute-0 sudo[447634]: pam_unix(sudo:session): session closed for user root
Nov 25 17:44:52 compute-0 ceph-mon[74985]: pgmap v3662: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:52 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:44:52 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:44:52 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:44:52 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:44:52 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:44:52 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:44:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:44:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:44:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:44:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:44:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 17:44:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:44:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:44:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:44:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:44:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:44:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 17:44:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:44:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:44:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:44:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:44:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:44:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:44:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:44:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:44:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:44:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:44:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:44:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:44:52 compute-0 sudo[447659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:44:52 compute-0 sudo[447659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:44:52 compute-0 sudo[447659]: pam_unix(sudo:session): session closed for user root
Nov 25 17:44:52 compute-0 sudo[447684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:44:52 compute-0 sudo[447684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:44:52 compute-0 podman[447748]: 2025-11-25 17:44:52.81725428 +0000 UTC m=+0.111322461 container create eb9c4fcd1b23441e60e50ba5b106739b9d65bc11137bffa68c6941cc84cb709a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 17:44:52 compute-0 podman[447748]: 2025-11-25 17:44:52.735197807 +0000 UTC m=+0.029266018 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:44:52 compute-0 systemd[1]: Started libpod-conmon-eb9c4fcd1b23441e60e50ba5b106739b9d65bc11137bffa68c6941cc84cb709a.scope.
Nov 25 17:44:52 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:44:52 compute-0 podman[447748]: 2025-11-25 17:44:52.981260055 +0000 UTC m=+0.275328316 container init eb9c4fcd1b23441e60e50ba5b106739b9d65bc11137bffa68c6941cc84cb709a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:44:52 compute-0 podman[447748]: 2025-11-25 17:44:52.992055139 +0000 UTC m=+0.286123350 container start eb9c4fcd1b23441e60e50ba5b106739b9d65bc11137bffa68c6941cc84cb709a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lewin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:44:52 compute-0 elastic_lewin[447764]: 167 167
Nov 25 17:44:52 compute-0 systemd[1]: libpod-eb9c4fcd1b23441e60e50ba5b106739b9d65bc11137bffa68c6941cc84cb709a.scope: Deactivated successfully.
Nov 25 17:44:53 compute-0 conmon[447764]: conmon eb9c4fcd1b23441e60e5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eb9c4fcd1b23441e60e50ba5b106739b9d65bc11137bffa68c6941cc84cb709a.scope/container/memory.events
Nov 25 17:44:53 compute-0 podman[447748]: 2025-11-25 17:44:53.123850737 +0000 UTC m=+0.417919008 container attach eb9c4fcd1b23441e60e50ba5b106739b9d65bc11137bffa68c6941cc84cb709a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lewin, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 17:44:53 compute-0 podman[447748]: 2025-11-25 17:44:53.125849991 +0000 UTC m=+0.419918202 container died eb9c4fcd1b23441e60e50ba5b106739b9d65bc11137bffa68c6941cc84cb709a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 17:44:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a8672e82e180c0e7261c0ebee47cb1c345c24f877bdb30880b4841c71bb6631-merged.mount: Deactivated successfully.
Nov 25 17:44:53 compute-0 podman[447748]: 2025-11-25 17:44:53.554625995 +0000 UTC m=+0.848694196 container remove eb9c4fcd1b23441e60e50ba5b106739b9d65bc11137bffa68c6941cc84cb709a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:44:53 compute-0 systemd[1]: libpod-conmon-eb9c4fcd1b23441e60e50ba5b106739b9d65bc11137bffa68c6941cc84cb709a.scope: Deactivated successfully.
Nov 25 17:44:53 compute-0 podman[447788]: 2025-11-25 17:44:53.800359084 +0000 UTC m=+0.086063134 container create f223c704ee4e1e4992f8bbb33970e3b2071059aba0570120cb339f4e7322fce7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_nobel, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:44:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3663: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:53 compute-0 podman[447788]: 2025-11-25 17:44:53.738066819 +0000 UTC m=+0.023770849 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:44:53 compute-0 systemd[1]: Started libpod-conmon-f223c704ee4e1e4992f8bbb33970e3b2071059aba0570120cb339f4e7322fce7.scope.
Nov 25 17:44:53 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:44:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/122bf3bb467252ef08220327e5043388971f66c7b08c84fef6436aefe7e9cebd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:44:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/122bf3bb467252ef08220327e5043388971f66c7b08c84fef6436aefe7e9cebd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:44:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/122bf3bb467252ef08220327e5043388971f66c7b08c84fef6436aefe7e9cebd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:44:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/122bf3bb467252ef08220327e5043388971f66c7b08c84fef6436aefe7e9cebd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:44:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/122bf3bb467252ef08220327e5043388971f66c7b08c84fef6436aefe7e9cebd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:44:54 compute-0 podman[447788]: 2025-11-25 17:44:54.02424117 +0000 UTC m=+0.309945220 container init f223c704ee4e1e4992f8bbb33970e3b2071059aba0570120cb339f4e7322fce7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_nobel, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 17:44:54 compute-0 podman[447788]: 2025-11-25 17:44:54.031570049 +0000 UTC m=+0.317274099 container start f223c704ee4e1e4992f8bbb33970e3b2071059aba0570120cb339f4e7322fce7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:44:54 compute-0 podman[447788]: 2025-11-25 17:44:54.134202973 +0000 UTC m=+0.419907063 container attach f223c704ee4e1e4992f8bbb33970e3b2071059aba0570120cb339f4e7322fce7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_nobel, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 17:44:54 compute-0 nice_nobel[447803]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:44:54 compute-0 nice_nobel[447803]: --> relative data size: 1.0
Nov 25 17:44:54 compute-0 nice_nobel[447803]: --> All data devices are unavailable
Nov 25 17:44:55 compute-0 nova_compute[254092]: 2025-11-25 17:44:54.999 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:44:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:44:55 compute-0 systemd[1]: libpod-f223c704ee4e1e4992f8bbb33970e3b2071059aba0570120cb339f4e7322fce7.scope: Deactivated successfully.
Nov 25 17:44:55 compute-0 podman[447788]: 2025-11-25 17:44:55.023800631 +0000 UTC m=+1.309504651 container died f223c704ee4e1e4992f8bbb33970e3b2071059aba0570120cb339f4e7322fce7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:44:55 compute-0 ceph-mon[74985]: pgmap v3663: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-122bf3bb467252ef08220327e5043388971f66c7b08c84fef6436aefe7e9cebd-merged.mount: Deactivated successfully.
Nov 25 17:44:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:44:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/898474702' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:44:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:44:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/898474702' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:44:55 compute-0 podman[447788]: 2025-11-25 17:44:55.482279973 +0000 UTC m=+1.767983983 container remove f223c704ee4e1e4992f8bbb33970e3b2071059aba0570120cb339f4e7322fce7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 17:44:55 compute-0 sudo[447684]: pam_unix(sudo:session): session closed for user root
Nov 25 17:44:55 compute-0 sudo[447842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:44:55 compute-0 sudo[447842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:44:55 compute-0 sudo[447842]: pam_unix(sudo:session): session closed for user root
Nov 25 17:44:55 compute-0 systemd[1]: libpod-conmon-f223c704ee4e1e4992f8bbb33970e3b2071059aba0570120cb339f4e7322fce7.scope: Deactivated successfully.
Nov 25 17:44:55 compute-0 sudo[447867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:44:55 compute-0 sudo[447867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:44:55 compute-0 sudo[447867]: pam_unix(sudo:session): session closed for user root
Nov 25 17:44:55 compute-0 sudo[447892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:44:55 compute-0 sudo[447892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:44:55 compute-0 sudo[447892]: pam_unix(sudo:session): session closed for user root
Nov 25 17:44:55 compute-0 sudo[447917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:44:55 compute-0 sudo[447917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:44:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3664: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/898474702' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:44:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/898474702' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:44:56 compute-0 ceph-mon[74985]: pgmap v3664: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:56 compute-0 podman[447982]: 2025-11-25 17:44:56.087566981 +0000 UTC m=+0.040486363 container create 1a507544fcaa671db8bcd6153b44f710813a534226f3e8e296daaca9f09d13f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 17:44:56 compute-0 systemd[1]: Started libpod-conmon-1a507544fcaa671db8bcd6153b44f710813a534226f3e8e296daaca9f09d13f5.scope.
Nov 25 17:44:56 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:44:56 compute-0 podman[447982]: 2025-11-25 17:44:56.069004906 +0000 UTC m=+0.021924328 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:44:56 compute-0 podman[447982]: 2025-11-25 17:44:56.170405236 +0000 UTC m=+0.123324638 container init 1a507544fcaa671db8bcd6153b44f710813a534226f3e8e296daaca9f09d13f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bartik, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 17:44:56 compute-0 podman[447982]: 2025-11-25 17:44:56.176624155 +0000 UTC m=+0.129543577 container start 1a507544fcaa671db8bcd6153b44f710813a534226f3e8e296daaca9f09d13f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 17:44:56 compute-0 podman[447982]: 2025-11-25 17:44:56.180160861 +0000 UTC m=+0.133080273 container attach 1a507544fcaa671db8bcd6153b44f710813a534226f3e8e296daaca9f09d13f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bartik, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 17:44:56 compute-0 happy_bartik[447999]: 167 167
Nov 25 17:44:56 compute-0 systemd[1]: libpod-1a507544fcaa671db8bcd6153b44f710813a534226f3e8e296daaca9f09d13f5.scope: Deactivated successfully.
Nov 25 17:44:56 compute-0 podman[447982]: 2025-11-25 17:44:56.182557717 +0000 UTC m=+0.135477099 container died 1a507544fcaa671db8bcd6153b44f710813a534226f3e8e296daaca9f09d13f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bartik, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:44:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3d11aa6a4423eaf0e26514e8344cf6c74e301e204f3bb8176da1e22710f3e30-merged.mount: Deactivated successfully.
Nov 25 17:44:56 compute-0 podman[447982]: 2025-11-25 17:44:56.218002412 +0000 UTC m=+0.170921784 container remove 1a507544fcaa671db8bcd6153b44f710813a534226f3e8e296daaca9f09d13f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 17:44:56 compute-0 systemd[1]: libpod-conmon-1a507544fcaa671db8bcd6153b44f710813a534226f3e8e296daaca9f09d13f5.scope: Deactivated successfully.
Nov 25 17:44:56 compute-0 podman[448022]: 2025-11-25 17:44:56.40748217 +0000 UTC m=+0.049615311 container create 392a00429266d61ee25cb74ec21ef03175d1f687aa216a225cd4bbcdc69bc181 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 17:44:56 compute-0 nova_compute[254092]: 2025-11-25 17:44:56.428 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:44:56 compute-0 systemd[1]: Started libpod-conmon-392a00429266d61ee25cb74ec21ef03175d1f687aa216a225cd4bbcdc69bc181.scope.
Nov 25 17:44:56 compute-0 podman[448022]: 2025-11-25 17:44:56.384749571 +0000 UTC m=+0.026882762 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:44:56 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:44:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48778f121e24d5c2876f51c83225aed2e41842ddf26aab07f3ce57ceadf1bb5c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:44:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48778f121e24d5c2876f51c83225aed2e41842ddf26aab07f3ce57ceadf1bb5c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:44:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48778f121e24d5c2876f51c83225aed2e41842ddf26aab07f3ce57ceadf1bb5c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:44:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48778f121e24d5c2876f51c83225aed2e41842ddf26aab07f3ce57ceadf1bb5c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:44:56 compute-0 podman[448022]: 2025-11-25 17:44:56.492808113 +0000 UTC m=+0.134941254 container init 392a00429266d61ee25cb74ec21ef03175d1f687aa216a225cd4bbcdc69bc181 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 17:44:56 compute-0 podman[448022]: 2025-11-25 17:44:56.501975863 +0000 UTC m=+0.144108994 container start 392a00429266d61ee25cb74ec21ef03175d1f687aa216a225cd4bbcdc69bc181 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Nov 25 17:44:56 compute-0 podman[448022]: 2025-11-25 17:44:56.50520943 +0000 UTC m=+0.147342571 container attach 392a00429266d61ee25cb74ec21ef03175d1f687aa216a225cd4bbcdc69bc181 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_elgamal, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]: {
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:     "0": [
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:         {
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:             "devices": [
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:                 "/dev/loop3"
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:             ],
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:             "lv_name": "ceph_lv0",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:             "lv_size": "21470642176",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:             "name": "ceph_lv0",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:             "tags": {
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:                 "ceph.cluster_name": "ceph",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:                 "ceph.crush_device_class": "",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:                 "ceph.encrypted": "0",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:                 "ceph.osd_id": "0",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:                 "ceph.type": "block",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:                 "ceph.vdo": "0"
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:             },
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:             "type": "block",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:             "vg_name": "ceph_vg0"
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:         }
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:     ],
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:     "1": [
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:         {
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:             "devices": [
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:                 "/dev/loop4"
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:             ],
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:             "lv_name": "ceph_lv1",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:             "lv_size": "21470642176",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:             "name": "ceph_lv1",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:             "tags": {
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:                 "ceph.cluster_name": "ceph",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:                 "ceph.crush_device_class": "",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:                 "ceph.encrypted": "0",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:                 "ceph.osd_id": "1",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:                 "ceph.type": "block",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:                 "ceph.vdo": "0"
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:             },
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:             "type": "block",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:             "vg_name": "ceph_vg1"
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:         }
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:     ],
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:     "2": [
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:         {
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:             "devices": [
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:                 "/dev/loop5"
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:             ],
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:             "lv_name": "ceph_lv2",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:             "lv_size": "21470642176",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:             "name": "ceph_lv2",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:             "tags": {
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:                 "ceph.cluster_name": "ceph",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:                 "ceph.crush_device_class": "",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:                 "ceph.encrypted": "0",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:                 "ceph.osd_id": "2",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:                 "ceph.type": "block",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:                 "ceph.vdo": "0"
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:             },
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:             "type": "block",
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:             "vg_name": "ceph_vg2"
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:         }
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]:     ]
Nov 25 17:44:57 compute-0 awesome_elgamal[448038]: }
Nov 25 17:44:57 compute-0 systemd[1]: libpod-392a00429266d61ee25cb74ec21ef03175d1f687aa216a225cd4bbcdc69bc181.scope: Deactivated successfully.
Nov 25 17:44:57 compute-0 conmon[448038]: conmon 392a00429266d61ee25c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-392a00429266d61ee25cb74ec21ef03175d1f687aa216a225cd4bbcdc69bc181.scope/container/memory.events
Nov 25 17:44:57 compute-0 podman[448022]: 2025-11-25 17:44:57.311068639 +0000 UTC m=+0.953201770 container died 392a00429266d61ee25cb74ec21ef03175d1f687aa216a225cd4bbcdc69bc181 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:44:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-48778f121e24d5c2876f51c83225aed2e41842ddf26aab07f3ce57ceadf1bb5c-merged.mount: Deactivated successfully.
Nov 25 17:44:57 compute-0 podman[448022]: 2025-11-25 17:44:57.357058801 +0000 UTC m=+0.999191932 container remove 392a00429266d61ee25cb74ec21ef03175d1f687aa216a225cd4bbcdc69bc181 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 17:44:57 compute-0 systemd[1]: libpod-conmon-392a00429266d61ee25cb74ec21ef03175d1f687aa216a225cd4bbcdc69bc181.scope: Deactivated successfully.
Nov 25 17:44:57 compute-0 sudo[447917]: pam_unix(sudo:session): session closed for user root
Nov 25 17:44:57 compute-0 sudo[448057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:44:57 compute-0 sudo[448057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:44:57 compute-0 sudo[448057]: pam_unix(sudo:session): session closed for user root
Nov 25 17:44:57 compute-0 sudo[448082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:44:57 compute-0 sudo[448082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:44:57 compute-0 sudo[448082]: pam_unix(sudo:session): session closed for user root
Nov 25 17:44:57 compute-0 sudo[448107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:44:57 compute-0 sudo[448107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:44:57 compute-0 sudo[448107]: pam_unix(sudo:session): session closed for user root
Nov 25 17:44:57 compute-0 sudo[448132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:44:57 compute-0 sudo[448132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:44:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3665: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:57 compute-0 podman[448197]: 2025-11-25 17:44:57.928291382 +0000 UTC m=+0.044127721 container create 2593a422c7066f2421745076a01cc8e87ef5e8b48ec01c6ff69a2aa6c9bb1e00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_panini, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Nov 25 17:44:57 compute-0 systemd[1]: Started libpod-conmon-2593a422c7066f2421745076a01cc8e87ef5e8b48ec01c6ff69a2aa6c9bb1e00.scope.
Nov 25 17:44:57 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:44:58 compute-0 podman[448197]: 2025-11-25 17:44:57.909248995 +0000 UTC m=+0.025085194 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:44:58 compute-0 podman[448197]: 2025-11-25 17:44:58.008334212 +0000 UTC m=+0.124170421 container init 2593a422c7066f2421745076a01cc8e87ef5e8b48ec01c6ff69a2aa6c9bb1e00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_panini, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 17:44:58 compute-0 podman[448197]: 2025-11-25 17:44:58.016963017 +0000 UTC m=+0.132799196 container start 2593a422c7066f2421745076a01cc8e87ef5e8b48ec01c6ff69a2aa6c9bb1e00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_panini, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 17:44:58 compute-0 podman[448197]: 2025-11-25 17:44:58.021108889 +0000 UTC m=+0.136945068 container attach 2593a422c7066f2421745076a01cc8e87ef5e8b48ec01c6ff69a2aa6c9bb1e00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_panini, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:44:58 compute-0 brave_panini[448213]: 167 167
Nov 25 17:44:58 compute-0 systemd[1]: libpod-2593a422c7066f2421745076a01cc8e87ef5e8b48ec01c6ff69a2aa6c9bb1e00.scope: Deactivated successfully.
Nov 25 17:44:58 compute-0 conmon[448213]: conmon 2593a422c7066f242174 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2593a422c7066f2421745076a01cc8e87ef5e8b48ec01c6ff69a2aa6c9bb1e00.scope/container/memory.events
Nov 25 17:44:58 compute-0 podman[448197]: 2025-11-25 17:44:58.024620756 +0000 UTC m=+0.140456935 container died 2593a422c7066f2421745076a01cc8e87ef5e8b48ec01c6ff69a2aa6c9bb1e00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_panini, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:44:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-edfe1615ba6439a1fc6e7f2cd208bf68645044be5ef1975e01cb872be7ae97bc-merged.mount: Deactivated successfully.
Nov 25 17:44:58 compute-0 podman[448197]: 2025-11-25 17:44:58.059066643 +0000 UTC m=+0.174902822 container remove 2593a422c7066f2421745076a01cc8e87ef5e8b48ec01c6ff69a2aa6c9bb1e00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:44:58 compute-0 systemd[1]: libpod-conmon-2593a422c7066f2421745076a01cc8e87ef5e8b48ec01c6ff69a2aa6c9bb1e00.scope: Deactivated successfully.
Nov 25 17:44:58 compute-0 podman[448237]: 2025-11-25 17:44:58.209880119 +0000 UTC m=+0.037929144 container create d076e92dc6c615e15da2fca657e17233fd3f9c56a1e2c87f281d8be355b1aaa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:44:58 compute-0 systemd[1]: Started libpod-conmon-d076e92dc6c615e15da2fca657e17233fd3f9c56a1e2c87f281d8be355b1aaa8.scope.
Nov 25 17:44:58 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:44:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0b7f2333425fff7fcdb87cb13dddeb26ec4504d75e272bdbae437bd1895b8aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:44:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0b7f2333425fff7fcdb87cb13dddeb26ec4504d75e272bdbae437bd1895b8aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:44:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0b7f2333425fff7fcdb87cb13dddeb26ec4504d75e272bdbae437bd1895b8aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:44:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0b7f2333425fff7fcdb87cb13dddeb26ec4504d75e272bdbae437bd1895b8aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:44:58 compute-0 podman[448237]: 2025-11-25 17:44:58.19339377 +0000 UTC m=+0.021442815 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:44:58 compute-0 podman[448237]: 2025-11-25 17:44:58.294430951 +0000 UTC m=+0.122480006 container init d076e92dc6c615e15da2fca657e17233fd3f9c56a1e2c87f281d8be355b1aaa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_feynman, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:44:58 compute-0 podman[448237]: 2025-11-25 17:44:58.300065204 +0000 UTC m=+0.128114229 container start d076e92dc6c615e15da2fca657e17233fd3f9c56a1e2c87f281d8be355b1aaa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_feynman, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:44:58 compute-0 podman[448237]: 2025-11-25 17:44:58.303811576 +0000 UTC m=+0.131860621 container attach d076e92dc6c615e15da2fca657e17233fd3f9c56a1e2c87f281d8be355b1aaa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_feynman, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 17:44:58 compute-0 ceph-mon[74985]: pgmap v3665: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:44:59 compute-0 pedantic_feynman[448253]: {
Nov 25 17:44:59 compute-0 pedantic_feynman[448253]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:44:59 compute-0 pedantic_feynman[448253]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:44:59 compute-0 pedantic_feynman[448253]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:44:59 compute-0 pedantic_feynman[448253]:         "osd_id": 1,
Nov 25 17:44:59 compute-0 pedantic_feynman[448253]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:44:59 compute-0 pedantic_feynman[448253]:         "type": "bluestore"
Nov 25 17:44:59 compute-0 pedantic_feynman[448253]:     },
Nov 25 17:44:59 compute-0 pedantic_feynman[448253]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:44:59 compute-0 pedantic_feynman[448253]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:44:59 compute-0 pedantic_feynman[448253]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:44:59 compute-0 pedantic_feynman[448253]:         "osd_id": 2,
Nov 25 17:44:59 compute-0 pedantic_feynman[448253]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:44:59 compute-0 pedantic_feynman[448253]:         "type": "bluestore"
Nov 25 17:44:59 compute-0 pedantic_feynman[448253]:     },
Nov 25 17:44:59 compute-0 pedantic_feynman[448253]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:44:59 compute-0 pedantic_feynman[448253]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:44:59 compute-0 pedantic_feynman[448253]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:44:59 compute-0 pedantic_feynman[448253]:         "osd_id": 0,
Nov 25 17:44:59 compute-0 pedantic_feynman[448253]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:44:59 compute-0 pedantic_feynman[448253]:         "type": "bluestore"
Nov 25 17:44:59 compute-0 pedantic_feynman[448253]:     }
Nov 25 17:44:59 compute-0 pedantic_feynman[448253]: }
Nov 25 17:44:59 compute-0 systemd[1]: libpod-d076e92dc6c615e15da2fca657e17233fd3f9c56a1e2c87f281d8be355b1aaa8.scope: Deactivated successfully.
Nov 25 17:44:59 compute-0 podman[448237]: 2025-11-25 17:44:59.230858933 +0000 UTC m=+1.058908028 container died d076e92dc6c615e15da2fca657e17233fd3f9c56a1e2c87f281d8be355b1aaa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 17:44:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0b7f2333425fff7fcdb87cb13dddeb26ec4504d75e272bdbae437bd1895b8aa-merged.mount: Deactivated successfully.
Nov 25 17:44:59 compute-0 podman[448237]: 2025-11-25 17:44:59.29350841 +0000 UTC m=+1.121557475 container remove d076e92dc6c615e15da2fca657e17233fd3f9c56a1e2c87f281d8be355b1aaa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_feynman, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:44:59 compute-0 systemd[1]: libpod-conmon-d076e92dc6c615e15da2fca657e17233fd3f9c56a1e2c87f281d8be355b1aaa8.scope: Deactivated successfully.
Nov 25 17:44:59 compute-0 sudo[448132]: pam_unix(sudo:session): session closed for user root
Nov 25 17:44:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:44:59 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:44:59 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:44:59 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:44:59 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 2aa09f5b-4d81-41c4-a3c0-2d34b19e2dfb does not exist
Nov 25 17:44:59 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 6212ca05-2313-4833-a25c-0061e28f7dc3 does not exist
Nov 25 17:44:59 compute-0 sudo[448302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:44:59 compute-0 sudo[448302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:44:59 compute-0 sudo[448302]: pam_unix(sudo:session): session closed for user root
Nov 25 17:44:59 compute-0 sudo[448327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:44:59 compute-0 sudo[448327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:44:59 compute-0 sudo[448327]: pam_unix(sudo:session): session closed for user root
Nov 25 17:44:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3666: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:00 compute-0 nova_compute[254092]: 2025-11-25 17:45:00.001 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:45:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:45:00 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #177. Immutable memtables: 0.
Nov 25 17:45:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:45:00.020560) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 17:45:00 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 109] Flushing memtable with next log file: 177
Nov 25 17:45:00 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092700020589, "job": 109, "event": "flush_started", "num_memtables": 1, "num_entries": 792, "num_deletes": 257, "total_data_size": 1017571, "memory_usage": 1033296, "flush_reason": "Manual Compaction"}
Nov 25 17:45:00 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 109] Level-0 flush table #178: started
Nov 25 17:45:00 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092700028541, "cf_name": "default", "job": 109, "event": "table_file_creation", "file_number": 178, "file_size": 1008201, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75162, "largest_seqno": 75953, "table_properties": {"data_size": 1004124, "index_size": 1792, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 8854, "raw_average_key_size": 19, "raw_value_size": 995971, "raw_average_value_size": 2137, "num_data_blocks": 80, "num_entries": 466, "num_filter_entries": 466, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764092630, "oldest_key_time": 1764092630, "file_creation_time": 1764092700, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 178, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:45:00 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 109] Flush lasted 8023 microseconds, and 3787 cpu microseconds.
Nov 25 17:45:00 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:45:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:45:00.028581) [db/flush_job.cc:967] [default] [JOB 109] Level-0 flush table #178: 1008201 bytes OK
Nov 25 17:45:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:45:00.028598) [db/memtable_list.cc:519] [default] Level-0 commit table #178 started
Nov 25 17:45:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:45:00.030043) [db/memtable_list.cc:722] [default] Level-0 commit table #178: memtable #1 done
Nov 25 17:45:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:45:00.030058) EVENT_LOG_v1 {"time_micros": 1764092700030054, "job": 109, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 17:45:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:45:00.030074) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 17:45:00 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 109] Try to delete WAL files size 1013570, prev total WAL file size 1013570, number of live WAL files 2.
Nov 25 17:45:00 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000174.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:45:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:45:00.030747) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033323637' seq:72057594037927935, type:22 .. '6C6F676D0033353230' seq:0, type:0; will stop at (end)
Nov 25 17:45:00 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 110] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 17:45:00 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 109 Base level 0, inputs: [178(984KB)], [176(9352KB)]
Nov 25 17:45:00 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092700030775, "job": 110, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [178], "files_L6": [176], "score": -1, "input_data_size": 10585526, "oldest_snapshot_seqno": -1}
Nov 25 17:45:00 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 110] Generated table #179: 9107 keys, 10471828 bytes, temperature: kUnknown
Nov 25 17:45:00 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092700100746, "cf_name": "default", "job": 110, "event": "table_file_creation", "file_number": 179, "file_size": 10471828, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10415099, "index_size": 32872, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22789, "raw_key_size": 240822, "raw_average_key_size": 26, "raw_value_size": 10256460, "raw_average_value_size": 1126, "num_data_blocks": 1266, "num_entries": 9107, "num_filter_entries": 9107, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764092700, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 179, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:45:00 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:45:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:45:00.101062) [db/compaction/compaction_job.cc:1663] [default] [JOB 110] Compacted 1@0 + 1@6 files to L6 => 10471828 bytes
Nov 25 17:45:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:45:00.103019) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 151.0 rd, 149.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 9.1 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(20.9) write-amplify(10.4) OK, records in: 9632, records dropped: 525 output_compression: NoCompression
Nov 25 17:45:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:45:00.103041) EVENT_LOG_v1 {"time_micros": 1764092700103031, "job": 110, "event": "compaction_finished", "compaction_time_micros": 70089, "compaction_time_cpu_micros": 31274, "output_level": 6, "num_output_files": 1, "total_output_size": 10471828, "num_input_records": 9632, "num_output_records": 9107, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 17:45:00 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000178.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:45:00 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092700103410, "job": 110, "event": "table_file_deletion", "file_number": 178}
Nov 25 17:45:00 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000176.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:45:00 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092700105629, "job": 110, "event": "table_file_deletion", "file_number": 176}
Nov 25 17:45:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:45:00.030604) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:45:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:45:00.105784) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:45:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:45:00.105790) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:45:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:45:00.105793) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:45:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:45:00.105797) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:45:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:45:00.105800) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:45:00 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:45:00 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:45:00 compute-0 ceph-mon[74985]: pgmap v3666: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:01 compute-0 nova_compute[254092]: 2025-11-25 17:45:01.431 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:45:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3667: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:02 compute-0 ceph-mon[74985]: pgmap v3667: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:03 compute-0 nova_compute[254092]: 2025-11-25 17:45:03.519 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:45:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3668: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:04 compute-0 nova_compute[254092]: 2025-11-25 17:45:04.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:45:04 compute-0 ceph-mon[74985]: pgmap v3668: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:05 compute-0 nova_compute[254092]: 2025-11-25 17:45:05.003 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:45:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:45:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3669: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:06 compute-0 nova_compute[254092]: 2025-11-25 17:45:06.433 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:45:06 compute-0 ceph-mon[74985]: pgmap v3669: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:07 compute-0 nova_compute[254092]: 2025-11-25 17:45:07.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:45:07 compute-0 nova_compute[254092]: 2025-11-25 17:45:07.527 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:45:07 compute-0 nova_compute[254092]: 2025-11-25 17:45:07.528 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:45:07 compute-0 nova_compute[254092]: 2025-11-25 17:45:07.528 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:45:07 compute-0 nova_compute[254092]: 2025-11-25 17:45:07.528 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:45:07 compute-0 nova_compute[254092]: 2025-11-25 17:45:07.528 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:45:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3670: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:45:07 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4219470700' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:45:07 compute-0 nova_compute[254092]: 2025-11-25 17:45:07.992 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:45:08 compute-0 nova_compute[254092]: 2025-11-25 17:45:08.131 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:45:08 compute-0 nova_compute[254092]: 2025-11-25 17:45:08.132 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3610MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:45:08 compute-0 nova_compute[254092]: 2025-11-25 17:45:08.132 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:45:08 compute-0 nova_compute[254092]: 2025-11-25 17:45:08.133 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:45:08 compute-0 nova_compute[254092]: 2025-11-25 17:45:08.205 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:45:08 compute-0 nova_compute[254092]: 2025-11-25 17:45:08.206 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:45:08 compute-0 nova_compute[254092]: 2025-11-25 17:45:08.234 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:45:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:45:08 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3580759221' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:45:08 compute-0 nova_compute[254092]: 2025-11-25 17:45:08.686 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:45:08 compute-0 nova_compute[254092]: 2025-11-25 17:45:08.692 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:45:08 compute-0 nova_compute[254092]: 2025-11-25 17:45:08.708 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:45:08 compute-0 nova_compute[254092]: 2025-11-25 17:45:08.709 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:45:08 compute-0 nova_compute[254092]: 2025-11-25 17:45:08.710 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:45:08 compute-0 ceph-mon[74985]: pgmap v3670: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:08 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4219470700' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:45:08 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3580759221' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:45:09 compute-0 nova_compute[254092]: 2025-11-25 17:45:09.710 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:45:09 compute-0 nova_compute[254092]: 2025-11-25 17:45:09.711 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:45:09 compute-0 nova_compute[254092]: 2025-11-25 17:45:09.711 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:45:09 compute-0 nova_compute[254092]: 2025-11-25 17:45:09.711 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:45:09 compute-0 nova_compute[254092]: 2025-11-25 17:45:09.733 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 17:45:09 compute-0 nova_compute[254092]: 2025-11-25 17:45:09.733 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:45:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3671: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:10 compute-0 nova_compute[254092]: 2025-11-25 17:45:10.005 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:45:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:45:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:45:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:45:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:45:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:45:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:45:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:45:10 compute-0 nova_compute[254092]: 2025-11-25 17:45:10.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:45:10 compute-0 nova_compute[254092]: 2025-11-25 17:45:10.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:45:10 compute-0 ceph-mon[74985]: pgmap v3671: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:11 compute-0 nova_compute[254092]: 2025-11-25 17:45:11.435 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:45:11 compute-0 nova_compute[254092]: 2025-11-25 17:45:11.509 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:45:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3672: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:12 compute-0 nova_compute[254092]: 2025-11-25 17:45:12.499 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:45:12 compute-0 sshd-session[448396]: Connection closed by authenticating user root 171.244.51.45 port 43068 [preauth]
Nov 25 17:45:12 compute-0 ceph-mon[74985]: pgmap v3672: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:45:13.688 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:45:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:45:13.689 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:45:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:45:13.689 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:45:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3673: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:14 compute-0 ceph-mon[74985]: pgmap v3673: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:45:15 compute-0 nova_compute[254092]: 2025-11-25 17:45:15.046 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:45:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3674: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:16 compute-0 nova_compute[254092]: 2025-11-25 17:45:16.437 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:45:16 compute-0 ceph-mon[74985]: pgmap v3674: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3675: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:18 compute-0 podman[448399]: 2025-11-25 17:45:18.653812134 +0000 UTC m=+0.061531096 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 25 17:45:18 compute-0 podman[448398]: 2025-11-25 17:45:18.657163355 +0000 UTC m=+0.068703751 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 25 17:45:18 compute-0 podman[448400]: 2025-11-25 17:45:18.724508309 +0000 UTC m=+0.129078255 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller)
Nov 25 17:45:18 compute-0 ceph-mon[74985]: pgmap v3675: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3676: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:45:20 compute-0 nova_compute[254092]: 2025-11-25 17:45:20.048 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:45:20 compute-0 ceph-mon[74985]: pgmap v3676: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:21 compute-0 nova_compute[254092]: 2025-11-25 17:45:21.440 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:45:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3677: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:22 compute-0 ceph-mon[74985]: pgmap v3677: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3678: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:24 compute-0 nova_compute[254092]: 2025-11-25 17:45:24.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:45:24 compute-0 ceph-mon[74985]: pgmap v3678: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:45:25 compute-0 nova_compute[254092]: 2025-11-25 17:45:25.050 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:45:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3679: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:26 compute-0 nova_compute[254092]: 2025-11-25 17:45:26.444 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:45:26 compute-0 ceph-mon[74985]: pgmap v3679: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3680: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:29 compute-0 ceph-mon[74985]: pgmap v3680: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3681: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:45:30 compute-0 nova_compute[254092]: 2025-11-25 17:45:30.052 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:45:30 compute-0 ceph-mon[74985]: pgmap v3681: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:31 compute-0 nova_compute[254092]: 2025-11-25 17:45:31.446 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:45:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3682: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:32 compute-0 ceph-mon[74985]: pgmap v3682: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3683: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:45:35 compute-0 nova_compute[254092]: 2025-11-25 17:45:35.054 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:45:35 compute-0 ceph-mon[74985]: pgmap v3683: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3684: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:36 compute-0 ceph-mon[74985]: pgmap v3684: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:36 compute-0 nova_compute[254092]: 2025-11-25 17:45:36.449 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:45:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3685: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:39 compute-0 ceph-mon[74985]: pgmap v3685: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3686: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:45:40 compute-0 nova_compute[254092]: 2025-11-25 17:45:40.056 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:45:40 compute-0 ceph-mon[74985]: pgmap v3686: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:45:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:45:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:45:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:45:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:45:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:45:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:45:40
Nov 25 17:45:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:45:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:45:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['backups', '.rgw.root', 'default.rgw.control', 'vms', 'volumes', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta']
Nov 25 17:45:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:45:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:45:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:45:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:45:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:45:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:45:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:45:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:45:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:45:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:45:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:45:41 compute-0 nova_compute[254092]: 2025-11-25 17:45:41.481 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:45:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3687: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:42 compute-0 nova_compute[254092]: 2025-11-25 17:45:42.481 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:45:42 compute-0 ceph-mon[74985]: pgmap v3687: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3688: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:44 compute-0 ceph-mon[74985]: pgmap v3688: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:45:45 compute-0 nova_compute[254092]: 2025-11-25 17:45:45.110 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:45:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3689: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:46 compute-0 nova_compute[254092]: 2025-11-25 17:45:46.484 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:45:46 compute-0 ceph-mon[74985]: pgmap v3689: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3690: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:48 compute-0 ceph-mon[74985]: pgmap v3690: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:49 compute-0 podman[448465]: 2025-11-25 17:45:49.662149814 +0000 UTC m=+0.068476056 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 17:45:49 compute-0 podman[448464]: 2025-11-25 17:45:49.677475221 +0000 UTC m=+0.080314678 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 17:45:49 compute-0 podman[448466]: 2025-11-25 17:45:49.693168048 +0000 UTC m=+0.089969670 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller)
Nov 25 17:45:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3691: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:45:50 compute-0 nova_compute[254092]: 2025-11-25 17:45:50.112 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:45:51 compute-0 ceph-mon[74985]: pgmap v3691: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:51 compute-0 nova_compute[254092]: 2025-11-25 17:45:51.487 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:45:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3692: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:45:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:45:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:45:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:45:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 17:45:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:45:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:45:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:45:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:45:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:45:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 17:45:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:45:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:45:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:45:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:45:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:45:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:45:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:45:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:45:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:45:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:45:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:45:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:45:53 compute-0 ceph-mon[74985]: pgmap v3692: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3693: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:54 compute-0 ceph-mon[74985]: pgmap v3693: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:45:55 compute-0 nova_compute[254092]: 2025-11-25 17:45:55.113 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:45:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:45:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1288394970' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:45:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:45:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1288394970' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:45:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/1288394970' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:45:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/1288394970' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:45:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3694: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:56 compute-0 nova_compute[254092]: 2025-11-25 17:45:56.490 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:45:56 compute-0 ceph-mon[74985]: pgmap v3694: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3695: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:58 compute-0 ceph-mon[74985]: pgmap v3695: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:59 compute-0 sudo[448526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:45:59 compute-0 sudo[448526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:45:59 compute-0 sudo[448526]: pam_unix(sudo:session): session closed for user root
Nov 25 17:45:59 compute-0 sudo[448551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:45:59 compute-0 sudo[448551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:45:59 compute-0 sudo[448551]: pam_unix(sudo:session): session closed for user root
Nov 25 17:45:59 compute-0 sudo[448576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:45:59 compute-0 sudo[448576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:45:59 compute-0 sudo[448576]: pam_unix(sudo:session): session closed for user root
Nov 25 17:45:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3696: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:45:59 compute-0 sudo[448601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:45:59 compute-0 sudo[448601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:46:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:46:00 compute-0 nova_compute[254092]: 2025-11-25 17:46:00.114 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:46:00 compute-0 sudo[448601]: pam_unix(sudo:session): session closed for user root
Nov 25 17:46:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:46:00 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:46:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:46:00 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:46:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:46:00 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:46:00 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev f1e2b986-f3a7-4f63-be78-035002d465cd does not exist
Nov 25 17:46:00 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 541fcc6d-1422-4491-9444-844ddaa6517b does not exist
Nov 25 17:46:00 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev b6d82f61-5418-4bc5-a9e0-3f88d539f797 does not exist
Nov 25 17:46:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:46:00 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:46:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:46:00 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:46:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:46:00 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:46:00 compute-0 sudo[448656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:46:00 compute-0 sudo[448656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:46:00 compute-0 sudo[448656]: pam_unix(sudo:session): session closed for user root
Nov 25 17:46:00 compute-0 sudo[448681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:46:00 compute-0 sudo[448681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:46:00 compute-0 sudo[448681]: pam_unix(sudo:session): session closed for user root
Nov 25 17:46:00 compute-0 sudo[448706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:46:00 compute-0 sudo[448706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:46:00 compute-0 sudo[448706]: pam_unix(sudo:session): session closed for user root
Nov 25 17:46:00 compute-0 sudo[448731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:46:00 compute-0 sudo[448731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:46:00 compute-0 ceph-mon[74985]: pgmap v3696: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:46:00 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:46:00 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:46:00 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:46:00 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:46:00 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:46:00 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:46:01 compute-0 podman[448799]: 2025-11-25 17:46:01.194737812 +0000 UTC m=+0.062295707 container create a171f1838b5f4b16ff2c917e73d6d6fecda08b303872890b040f8889c436754a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cohen, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:46:01 compute-0 systemd[1]: Started libpod-conmon-a171f1838b5f4b16ff2c917e73d6d6fecda08b303872890b040f8889c436754a.scope.
Nov 25 17:46:01 compute-0 podman[448799]: 2025-11-25 17:46:01.165699252 +0000 UTC m=+0.033257177 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:46:01 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:46:01 compute-0 podman[448799]: 2025-11-25 17:46:01.295899106 +0000 UTC m=+0.163457061 container init a171f1838b5f4b16ff2c917e73d6d6fecda08b303872890b040f8889c436754a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 17:46:01 compute-0 podman[448799]: 2025-11-25 17:46:01.305060886 +0000 UTC m=+0.172618751 container start a171f1838b5f4b16ff2c917e73d6d6fecda08b303872890b040f8889c436754a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cohen, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:46:01 compute-0 podman[448799]: 2025-11-25 17:46:01.31001827 +0000 UTC m=+0.177576185 container attach a171f1838b5f4b16ff2c917e73d6d6fecda08b303872890b040f8889c436754a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cohen, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:46:01 compute-0 sad_cohen[448816]: 167 167
Nov 25 17:46:01 compute-0 systemd[1]: libpod-a171f1838b5f4b16ff2c917e73d6d6fecda08b303872890b040f8889c436754a.scope: Deactivated successfully.
Nov 25 17:46:01 compute-0 podman[448821]: 2025-11-25 17:46:01.365932672 +0000 UTC m=+0.035500417 container died a171f1838b5f4b16ff2c917e73d6d6fecda08b303872890b040f8889c436754a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cohen, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 17:46:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-080eaaa242137c6ff50ea1aa64b081653f61341fd35f14dd9088a56d53efe11a-merged.mount: Deactivated successfully.
Nov 25 17:46:01 compute-0 podman[448821]: 2025-11-25 17:46:01.417196008 +0000 UTC m=+0.086763703 container remove a171f1838b5f4b16ff2c917e73d6d6fecda08b303872890b040f8889c436754a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cohen, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 17:46:01 compute-0 systemd[1]: libpod-conmon-a171f1838b5f4b16ff2c917e73d6d6fecda08b303872890b040f8889c436754a.scope: Deactivated successfully.
Nov 25 17:46:01 compute-0 nova_compute[254092]: 2025-11-25 17:46:01.491 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:46:01 compute-0 podman[448843]: 2025-11-25 17:46:01.662128526 +0000 UTC m=+0.066022948 container create 8205b94e3da0899ae99923b11139c9b865d567f10d64f537333a0b1d9590862b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lalande, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:46:01 compute-0 systemd[1]: Started libpod-conmon-8205b94e3da0899ae99923b11139c9b865d567f10d64f537333a0b1d9590862b.scope.
Nov 25 17:46:01 compute-0 podman[448843]: 2025-11-25 17:46:01.639692835 +0000 UTC m=+0.043587237 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:46:01 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:46:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1057bb36dfff132d0176a68639b034edb014db264240341e242545df82280272/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:46:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1057bb36dfff132d0176a68639b034edb014db264240341e242545df82280272/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:46:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1057bb36dfff132d0176a68639b034edb014db264240341e242545df82280272/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:46:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1057bb36dfff132d0176a68639b034edb014db264240341e242545df82280272/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:46:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1057bb36dfff132d0176a68639b034edb014db264240341e242545df82280272/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:46:01 compute-0 podman[448843]: 2025-11-25 17:46:01.765178172 +0000 UTC m=+0.169072574 container init 8205b94e3da0899ae99923b11139c9b865d567f10d64f537333a0b1d9590862b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:46:01 compute-0 podman[448843]: 2025-11-25 17:46:01.772787619 +0000 UTC m=+0.176682001 container start 8205b94e3da0899ae99923b11139c9b865d567f10d64f537333a0b1d9590862b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 17:46:01 compute-0 podman[448843]: 2025-11-25 17:46:01.77576217 +0000 UTC m=+0.179656572 container attach 8205b94e3da0899ae99923b11139c9b865d567f10d64f537333a0b1d9590862b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:46:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3697: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:46:02 compute-0 romantic_lalande[448860]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:46:02 compute-0 romantic_lalande[448860]: --> relative data size: 1.0
Nov 25 17:46:02 compute-0 romantic_lalande[448860]: --> All data devices are unavailable
Nov 25 17:46:02 compute-0 systemd[1]: libpod-8205b94e3da0899ae99923b11139c9b865d567f10d64f537333a0b1d9590862b.scope: Deactivated successfully.
Nov 25 17:46:02 compute-0 podman[448889]: 2025-11-25 17:46:02.842602473 +0000 UTC m=+0.026220465 container died 8205b94e3da0899ae99923b11139c9b865d567f10d64f537333a0b1d9590862b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lalande, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:46:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-1057bb36dfff132d0176a68639b034edb014db264240341e242545df82280272-merged.mount: Deactivated successfully.
Nov 25 17:46:02 compute-0 podman[448889]: 2025-11-25 17:46:02.889681464 +0000 UTC m=+0.073299456 container remove 8205b94e3da0899ae99923b11139c9b865d567f10d64f537333a0b1d9590862b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lalande, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 17:46:02 compute-0 systemd[1]: libpod-conmon-8205b94e3da0899ae99923b11139c9b865d567f10d64f537333a0b1d9590862b.scope: Deactivated successfully.
Nov 25 17:46:02 compute-0 ceph-mon[74985]: pgmap v3697: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:46:02 compute-0 sudo[448731]: pam_unix(sudo:session): session closed for user root
Nov 25 17:46:02 compute-0 sudo[448904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:46:02 compute-0 sudo[448904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:46:02 compute-0 sudo[448904]: pam_unix(sudo:session): session closed for user root
Nov 25 17:46:03 compute-0 sudo[448929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:46:03 compute-0 sudo[448929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:46:03 compute-0 sudo[448929]: pam_unix(sudo:session): session closed for user root
Nov 25 17:46:03 compute-0 sudo[448954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:46:03 compute-0 sudo[448954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:46:03 compute-0 sudo[448954]: pam_unix(sudo:session): session closed for user root
Nov 25 17:46:03 compute-0 sudo[448979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:46:03 compute-0 sudo[448979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:46:03 compute-0 podman[449046]: 2025-11-25 17:46:03.498563681 +0000 UTC m=+0.034057028 container create 27ee88dc328ef006dd0c4c01e48bb37729f65ac2be756de90d1932ebd91ed860 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shockley, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 17:46:03 compute-0 systemd[1]: Started libpod-conmon-27ee88dc328ef006dd0c4c01e48bb37729f65ac2be756de90d1932ebd91ed860.scope.
Nov 25 17:46:03 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:46:03 compute-0 podman[449046]: 2025-11-25 17:46:03.57276296 +0000 UTC m=+0.108256337 container init 27ee88dc328ef006dd0c4c01e48bb37729f65ac2be756de90d1932ebd91ed860 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shockley, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 17:46:03 compute-0 podman[449046]: 2025-11-25 17:46:03.578660781 +0000 UTC m=+0.114154128 container start 27ee88dc328ef006dd0c4c01e48bb37729f65ac2be756de90d1932ebd91ed860 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:46:03 compute-0 podman[449046]: 2025-11-25 17:46:03.484516568 +0000 UTC m=+0.020009935 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:46:03 compute-0 podman[449046]: 2025-11-25 17:46:03.582202768 +0000 UTC m=+0.117696145 container attach 27ee88dc328ef006dd0c4c01e48bb37729f65ac2be756de90d1932ebd91ed860 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 17:46:03 compute-0 practical_shockley[449062]: 167 167
Nov 25 17:46:03 compute-0 systemd[1]: libpod-27ee88dc328ef006dd0c4c01e48bb37729f65ac2be756de90d1932ebd91ed860.scope: Deactivated successfully.
Nov 25 17:46:03 compute-0 podman[449046]: 2025-11-25 17:46:03.587767039 +0000 UTC m=+0.123260386 container died 27ee88dc328ef006dd0c4c01e48bb37729f65ac2be756de90d1932ebd91ed860 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shockley, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:46:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec403ad57cbdb1156dd9238050310db55016b60c1d0d5d24e3e888a21c39a708-merged.mount: Deactivated successfully.
Nov 25 17:46:03 compute-0 podman[449046]: 2025-11-25 17:46:03.620363666 +0000 UTC m=+0.155857013 container remove 27ee88dc328ef006dd0c4c01e48bb37729f65ac2be756de90d1932ebd91ed860 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shockley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:46:03 compute-0 systemd[1]: libpod-conmon-27ee88dc328ef006dd0c4c01e48bb37729f65ac2be756de90d1932ebd91ed860.scope: Deactivated successfully.
Nov 25 17:46:03 compute-0 podman[449087]: 2025-11-25 17:46:03.775293534 +0000 UTC m=+0.044505272 container create cb0f6b6dd83ffd2f7ec0949e834acb334f4db7fc79fe313fb77b432ccf6d2af9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mestorf, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 17:46:03 compute-0 systemd[1]: Started libpod-conmon-cb0f6b6dd83ffd2f7ec0949e834acb334f4db7fc79fe313fb77b432ccf6d2af9.scope.
Nov 25 17:46:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3698: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:46:03 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:46:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fbfc97bc9fdc668b098b73e3320507c7a984f1e42c565f8178783e797d64927/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:46:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fbfc97bc9fdc668b098b73e3320507c7a984f1e42c565f8178783e797d64927/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:46:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fbfc97bc9fdc668b098b73e3320507c7a984f1e42c565f8178783e797d64927/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:46:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fbfc97bc9fdc668b098b73e3320507c7a984f1e42c565f8178783e797d64927/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:46:03 compute-0 podman[449087]: 2025-11-25 17:46:03.757567451 +0000 UTC m=+0.026779219 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:46:03 compute-0 podman[449087]: 2025-11-25 17:46:03.866116037 +0000 UTC m=+0.135327805 container init cb0f6b6dd83ffd2f7ec0949e834acb334f4db7fc79fe313fb77b432ccf6d2af9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 17:46:03 compute-0 podman[449087]: 2025-11-25 17:46:03.873169928 +0000 UTC m=+0.142381656 container start cb0f6b6dd83ffd2f7ec0949e834acb334f4db7fc79fe313fb77b432ccf6d2af9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:46:03 compute-0 podman[449087]: 2025-11-25 17:46:03.876415077 +0000 UTC m=+0.145626845 container attach cb0f6b6dd83ffd2f7ec0949e834acb334f4db7fc79fe313fb77b432ccf6d2af9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 17:46:04 compute-0 nova_compute[254092]: 2025-11-25 17:46:04.512 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:46:04 compute-0 nova_compute[254092]: 2025-11-25 17:46:04.513 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]: {
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:     "0": [
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:         {
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:             "devices": [
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:                 "/dev/loop3"
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:             ],
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:             "lv_name": "ceph_lv0",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:             "lv_size": "21470642176",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:             "name": "ceph_lv0",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:             "tags": {
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:                 "ceph.cluster_name": "ceph",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:                 "ceph.crush_device_class": "",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:                 "ceph.encrypted": "0",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:                 "ceph.osd_id": "0",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:                 "ceph.type": "block",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:                 "ceph.vdo": "0"
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:             },
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:             "type": "block",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:             "vg_name": "ceph_vg0"
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:         }
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:     ],
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:     "1": [
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:         {
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:             "devices": [
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:                 "/dev/loop4"
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:             ],
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:             "lv_name": "ceph_lv1",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:             "lv_size": "21470642176",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:             "name": "ceph_lv1",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:             "tags": {
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:                 "ceph.cluster_name": "ceph",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:                 "ceph.crush_device_class": "",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:                 "ceph.encrypted": "0",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:                 "ceph.osd_id": "1",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:                 "ceph.type": "block",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:                 "ceph.vdo": "0"
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:             },
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:             "type": "block",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:             "vg_name": "ceph_vg1"
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:         }
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:     ],
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:     "2": [
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:         {
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:             "devices": [
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:                 "/dev/loop5"
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:             ],
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:             "lv_name": "ceph_lv2",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:             "lv_size": "21470642176",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:             "name": "ceph_lv2",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:             "tags": {
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:                 "ceph.cluster_name": "ceph",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:                 "ceph.crush_device_class": "",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:                 "ceph.encrypted": "0",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:                 "ceph.osd_id": "2",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:                 "ceph.type": "block",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:                 "ceph.vdo": "0"
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:             },
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:             "type": "block",
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:             "vg_name": "ceph_vg2"
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:         }
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]:     ]
Nov 25 17:46:04 compute-0 pedantic_mestorf[449104]: }
Nov 25 17:46:04 compute-0 systemd[1]: libpod-cb0f6b6dd83ffd2f7ec0949e834acb334f4db7fc79fe313fb77b432ccf6d2af9.scope: Deactivated successfully.
Nov 25 17:46:04 compute-0 podman[449087]: 2025-11-25 17:46:04.664253235 +0000 UTC m=+0.933464973 container died cb0f6b6dd83ffd2f7ec0949e834acb334f4db7fc79fe313fb77b432ccf6d2af9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mestorf, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 17:46:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-8fbfc97bc9fdc668b098b73e3320507c7a984f1e42c565f8178783e797d64927-merged.mount: Deactivated successfully.
Nov 25 17:46:04 compute-0 podman[449087]: 2025-11-25 17:46:04.73019335 +0000 UTC m=+0.999405078 container remove cb0f6b6dd83ffd2f7ec0949e834acb334f4db7fc79fe313fb77b432ccf6d2af9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mestorf, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:46:04 compute-0 systemd[1]: libpod-conmon-cb0f6b6dd83ffd2f7ec0949e834acb334f4db7fc79fe313fb77b432ccf6d2af9.scope: Deactivated successfully.
Nov 25 17:46:04 compute-0 sudo[448979]: pam_unix(sudo:session): session closed for user root
Nov 25 17:46:04 compute-0 sudo[449124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:46:04 compute-0 sudo[449124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:46:04 compute-0 sudo[449124]: pam_unix(sudo:session): session closed for user root
Nov 25 17:46:04 compute-0 sudo[449149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:46:04 compute-0 sudo[449149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:46:04 compute-0 sudo[449149]: pam_unix(sudo:session): session closed for user root
Nov 25 17:46:04 compute-0 ceph-mon[74985]: pgmap v3698: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:46:04 compute-0 sudo[449174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:46:04 compute-0 sudo[449174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:46:04 compute-0 sudo[449174]: pam_unix(sudo:session): session closed for user root
Nov 25 17:46:05 compute-0 sudo[449199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:46:05 compute-0 sudo[449199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:46:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:46:05 compute-0 nova_compute[254092]: 2025-11-25 17:46:05.116 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:46:05 compute-0 podman[449264]: 2025-11-25 17:46:05.344565606 +0000 UTC m=+0.039397474 container create dce7574adc67969111d23a979d88b5c08b1d67a1cfb8b131c3728335121f9041 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_allen, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 17:46:05 compute-0 systemd[1]: Started libpod-conmon-dce7574adc67969111d23a979d88b5c08b1d67a1cfb8b131c3728335121f9041.scope.
Nov 25 17:46:05 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:46:05 compute-0 podman[449264]: 2025-11-25 17:46:05.327023507 +0000 UTC m=+0.021855385 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:46:05 compute-0 podman[449264]: 2025-11-25 17:46:05.429782565 +0000 UTC m=+0.124614443 container init dce7574adc67969111d23a979d88b5c08b1d67a1cfb8b131c3728335121f9041 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_allen, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:46:05 compute-0 podman[449264]: 2025-11-25 17:46:05.443112478 +0000 UTC m=+0.137944336 container start dce7574adc67969111d23a979d88b5c08b1d67a1cfb8b131c3728335121f9041 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 17:46:05 compute-0 podman[449264]: 2025-11-25 17:46:05.44611591 +0000 UTC m=+0.140947798 container attach dce7574adc67969111d23a979d88b5c08b1d67a1cfb8b131c3728335121f9041 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_allen, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:46:05 compute-0 keen_allen[449281]: 167 167
Nov 25 17:46:05 compute-0 systemd[1]: libpod-dce7574adc67969111d23a979d88b5c08b1d67a1cfb8b131c3728335121f9041.scope: Deactivated successfully.
Nov 25 17:46:05 compute-0 podman[449264]: 2025-11-25 17:46:05.451797975 +0000 UTC m=+0.146629833 container died dce7574adc67969111d23a979d88b5c08b1d67a1cfb8b131c3728335121f9041 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_allen, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:46:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-03b3c634f2e529c501e6072f9d14cafd97886b0df736d95b375dd48689bf3d18-merged.mount: Deactivated successfully.
Nov 25 17:46:05 compute-0 podman[449264]: 2025-11-25 17:46:05.484933637 +0000 UTC m=+0.179765495 container remove dce7574adc67969111d23a979d88b5c08b1d67a1cfb8b131c3728335121f9041 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 17:46:05 compute-0 systemd[1]: libpod-conmon-dce7574adc67969111d23a979d88b5c08b1d67a1cfb8b131c3728335121f9041.scope: Deactivated successfully.
Nov 25 17:46:05 compute-0 podman[449306]: 2025-11-25 17:46:05.642786264 +0000 UTC m=+0.038834509 container create c18dd3f858f0c88c5f2572900a75dc23fc6c30ad091c5475c1638a312ef898a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_darwin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 17:46:05 compute-0 systemd[1]: Started libpod-conmon-c18dd3f858f0c88c5f2572900a75dc23fc6c30ad091c5475c1638a312ef898a9.scope.
Nov 25 17:46:05 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:46:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64d6e91e22206cddb7c7e80a3abd03cc2dd99de85b19aa7e9ac19d384c6da29b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:46:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64d6e91e22206cddb7c7e80a3abd03cc2dd99de85b19aa7e9ac19d384c6da29b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:46:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64d6e91e22206cddb7c7e80a3abd03cc2dd99de85b19aa7e9ac19d384c6da29b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:46:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64d6e91e22206cddb7c7e80a3abd03cc2dd99de85b19aa7e9ac19d384c6da29b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:46:05 compute-0 podman[449306]: 2025-11-25 17:46:05.720006766 +0000 UTC m=+0.116055051 container init c18dd3f858f0c88c5f2572900a75dc23fc6c30ad091c5475c1638a312ef898a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_darwin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 17:46:05 compute-0 podman[449306]: 2025-11-25 17:46:05.626941932 +0000 UTC m=+0.022990207 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:46:05 compute-0 podman[449306]: 2025-11-25 17:46:05.726791161 +0000 UTC m=+0.122839406 container start c18dd3f858f0c88c5f2572900a75dc23fc6c30ad091c5475c1638a312ef898a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 17:46:05 compute-0 podman[449306]: 2025-11-25 17:46:05.730355727 +0000 UTC m=+0.126403972 container attach c18dd3f858f0c88c5f2572900a75dc23fc6c30ad091c5475c1638a312ef898a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_darwin, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:46:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3699: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 7.0 KiB/s rd, 0 B/s wr, 11 op/s
Nov 25 17:46:06 compute-0 nova_compute[254092]: 2025-11-25 17:46:06.496 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:46:06 compute-0 distracted_darwin[449323]: {
Nov 25 17:46:06 compute-0 distracted_darwin[449323]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:46:06 compute-0 distracted_darwin[449323]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:46:06 compute-0 distracted_darwin[449323]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:46:06 compute-0 distracted_darwin[449323]:         "osd_id": 1,
Nov 25 17:46:06 compute-0 distracted_darwin[449323]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:46:06 compute-0 distracted_darwin[449323]:         "type": "bluestore"
Nov 25 17:46:06 compute-0 distracted_darwin[449323]:     },
Nov 25 17:46:06 compute-0 distracted_darwin[449323]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:46:06 compute-0 distracted_darwin[449323]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:46:06 compute-0 distracted_darwin[449323]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:46:06 compute-0 distracted_darwin[449323]:         "osd_id": 2,
Nov 25 17:46:06 compute-0 distracted_darwin[449323]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:46:06 compute-0 distracted_darwin[449323]:         "type": "bluestore"
Nov 25 17:46:06 compute-0 distracted_darwin[449323]:     },
Nov 25 17:46:06 compute-0 distracted_darwin[449323]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:46:06 compute-0 distracted_darwin[449323]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:46:06 compute-0 distracted_darwin[449323]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:46:06 compute-0 distracted_darwin[449323]:         "osd_id": 0,
Nov 25 17:46:06 compute-0 distracted_darwin[449323]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:46:06 compute-0 distracted_darwin[449323]:         "type": "bluestore"
Nov 25 17:46:06 compute-0 distracted_darwin[449323]:     }
Nov 25 17:46:06 compute-0 distracted_darwin[449323]: }
Nov 25 17:46:06 compute-0 systemd[1]: libpod-c18dd3f858f0c88c5f2572900a75dc23fc6c30ad091c5475c1638a312ef898a9.scope: Deactivated successfully.
Nov 25 17:46:06 compute-0 conmon[449323]: conmon c18dd3f858f0c88c5f25 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c18dd3f858f0c88c5f2572900a75dc23fc6c30ad091c5475c1638a312ef898a9.scope/container/memory.events
Nov 25 17:46:06 compute-0 podman[449306]: 2025-11-25 17:46:06.690926358 +0000 UTC m=+1.086974623 container died c18dd3f858f0c88c5f2572900a75dc23fc6c30ad091c5475c1638a312ef898a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 17:46:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-64d6e91e22206cddb7c7e80a3abd03cc2dd99de85b19aa7e9ac19d384c6da29b-merged.mount: Deactivated successfully.
Nov 25 17:46:06 compute-0 podman[449306]: 2025-11-25 17:46:06.750942912 +0000 UTC m=+1.146991167 container remove c18dd3f858f0c88c5f2572900a75dc23fc6c30ad091c5475c1638a312ef898a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:46:06 compute-0 systemd[1]: libpod-conmon-c18dd3f858f0c88c5f2572900a75dc23fc6c30ad091c5475c1638a312ef898a9.scope: Deactivated successfully.
Nov 25 17:46:06 compute-0 sudo[449199]: pam_unix(sudo:session): session closed for user root
Nov 25 17:46:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:46:06 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:46:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:46:06 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:46:06 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev b61c370f-ccf2-42d8-a0ea-1afaa1e6fca3 does not exist
Nov 25 17:46:06 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev cc0f6696-ad5e-4615-8473-e9dcd1dba29b does not exist
Nov 25 17:46:06 compute-0 sudo[449368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:46:06 compute-0 sudo[449368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:46:06 compute-0 sudo[449368]: pam_unix(sudo:session): session closed for user root
Nov 25 17:46:06 compute-0 sudo[449393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:46:06 compute-0 sudo[449393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:46:06 compute-0 sudo[449393]: pam_unix(sudo:session): session closed for user root
Nov 25 17:46:06 compute-0 ceph-mon[74985]: pgmap v3699: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 7.0 KiB/s rd, 0 B/s wr, 11 op/s
Nov 25 17:46:06 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:46:06 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:46:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3700: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Nov 25 17:46:08 compute-0 nova_compute[254092]: 2025-11-25 17:46:08.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:46:08 compute-0 ceph-mon[74985]: pgmap v3700: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Nov 25 17:46:09 compute-0 nova_compute[254092]: 2025-11-25 17:46:09.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:46:09 compute-0 nova_compute[254092]: 2025-11-25 17:46:09.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:46:09 compute-0 nova_compute[254092]: 2025-11-25 17:46:09.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:46:09 compute-0 nova_compute[254092]: 2025-11-25 17:46:09.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:46:09 compute-0 nova_compute[254092]: 2025-11-25 17:46:09.523 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:46:09 compute-0 nova_compute[254092]: 2025-11-25 17:46:09.523 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:46:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3701: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Nov 25 17:46:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:46:09 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2297102051' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:46:09 compute-0 nova_compute[254092]: 2025-11-25 17:46:09.992 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:46:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:46:10 compute-0 nova_compute[254092]: 2025-11-25 17:46:10.117 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:46:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:46:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:46:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:46:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:46:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:46:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:46:10 compute-0 nova_compute[254092]: 2025-11-25 17:46:10.163 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:46:10 compute-0 nova_compute[254092]: 2025-11-25 17:46:10.164 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3567MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:46:10 compute-0 nova_compute[254092]: 2025-11-25 17:46:10.165 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:46:10 compute-0 nova_compute[254092]: 2025-11-25 17:46:10.165 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:46:10 compute-0 nova_compute[254092]: 2025-11-25 17:46:10.568 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:46:10 compute-0 nova_compute[254092]: 2025-11-25 17:46:10.569 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:46:10 compute-0 nova_compute[254092]: 2025-11-25 17:46:10.712 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing inventories for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 17:46:10 compute-0 nova_compute[254092]: 2025-11-25 17:46:10.813 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating ProviderTree inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 17:46:10 compute-0 nova_compute[254092]: 2025-11-25 17:46:10.814 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 17:46:10 compute-0 nova_compute[254092]: 2025-11-25 17:46:10.829 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing aggregate associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 17:46:10 compute-0 nova_compute[254092]: 2025-11-25 17:46:10.873 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing trait associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 17:46:10 compute-0 nova_compute[254092]: 2025-11-25 17:46:10.890 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:46:10 compute-0 ceph-mon[74985]: pgmap v3701: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Nov 25 17:46:10 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2297102051' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:46:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:46:11 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3469955822' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:46:11 compute-0 nova_compute[254092]: 2025-11-25 17:46:11.304 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:46:11 compute-0 nova_compute[254092]: 2025-11-25 17:46:11.312 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:46:11 compute-0 nova_compute[254092]: 2025-11-25 17:46:11.332 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:46:11 compute-0 nova_compute[254092]: 2025-11-25 17:46:11.335 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:46:11 compute-0 nova_compute[254092]: 2025-11-25 17:46:11.336 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.171s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:46:11 compute-0 nova_compute[254092]: 2025-11-25 17:46:11.539 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:46:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3702: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 17:46:11 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3469955822' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:46:12 compute-0 ceph-mon[74985]: pgmap v3702: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 17:46:13 compute-0 nova_compute[254092]: 2025-11-25 17:46:13.332 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:46:13 compute-0 nova_compute[254092]: 2025-11-25 17:46:13.332 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:46:13 compute-0 nova_compute[254092]: 2025-11-25 17:46:13.332 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:46:13 compute-0 nova_compute[254092]: 2025-11-25 17:46:13.332 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:46:13 compute-0 nova_compute[254092]: 2025-11-25 17:46:13.349 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 17:46:13 compute-0 nova_compute[254092]: 2025-11-25 17:46:13.349 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:46:13 compute-0 nova_compute[254092]: 2025-11-25 17:46:13.349 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:46:13 compute-0 nova_compute[254092]: 2025-11-25 17:46:13.349 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:46:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:46:13.690 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:46:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:46:13.691 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:46:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:46:13.691 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:46:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3703: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 17:46:14 compute-0 ceph-mon[74985]: pgmap v3703: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 17:46:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:46:15 compute-0 nova_compute[254092]: 2025-11-25 17:46:15.119 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:46:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3704: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 17:46:16 compute-0 nova_compute[254092]: 2025-11-25 17:46:16.542 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:46:17 compute-0 ceph-mon[74985]: pgmap v3704: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 17:46:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3705: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 47 op/s
Nov 25 17:46:19 compute-0 ceph-mon[74985]: pgmap v3705: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 47 op/s
Nov 25 17:46:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3706: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 29 op/s
Nov 25 17:46:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:46:20 compute-0 nova_compute[254092]: 2025-11-25 17:46:20.121 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:46:20 compute-0 podman[449463]: 2025-11-25 17:46:20.703373969 +0000 UTC m=+0.110447538 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 25 17:46:20 compute-0 podman[449462]: 2025-11-25 17:46:20.706184185 +0000 UTC m=+0.120047449 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:46:20 compute-0 podman[449464]: 2025-11-25 17:46:20.721058811 +0000 UTC m=+0.131493361 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 25 17:46:21 compute-0 ceph-mon[74985]: pgmap v3706: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 29 op/s
Nov 25 17:46:21 compute-0 nova_compute[254092]: 2025-11-25 17:46:21.545 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:46:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3707: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 29 op/s
Nov 25 17:46:22 compute-0 ceph-mon[74985]: pgmap v3707: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 29 op/s
Nov 25 17:46:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3708: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 17:46:24 compute-0 nova_compute[254092]: 2025-11-25 17:46:24.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:46:24 compute-0 nova_compute[254092]: 2025-11-25 17:46:24.517 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:46:24 compute-0 ceph-mon[74985]: pgmap v3708: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 17:46:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:46:25 compute-0 nova_compute[254092]: 2025-11-25 17:46:25.122 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:46:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3709: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 17:46:26 compute-0 nova_compute[254092]: 2025-11-25 17:46:26.547 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:46:26 compute-0 ceph-mon[74985]: pgmap v3709: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 17:46:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3710: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 17:46:28 compute-0 ceph-mon[74985]: pgmap v3710: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 17:46:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3711: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 17:46:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:46:30 compute-0 nova_compute[254092]: 2025-11-25 17:46:30.125 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:46:30 compute-0 ceph-mon[74985]: pgmap v3711: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 17:46:31 compute-0 nova_compute[254092]: 2025-11-25 17:46:31.551 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:46:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3712: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 17:46:32 compute-0 ceph-mon[74985]: pgmap v3712: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Nov 25 17:46:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3713: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:46:34 compute-0 ceph-mon[74985]: pgmap v3713: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:46:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:46:35 compute-0 nova_compute[254092]: 2025-11-25 17:46:35.137 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:46:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3714: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:46:36 compute-0 nova_compute[254092]: 2025-11-25 17:46:36.588 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:46:36 compute-0 ceph-mon[74985]: pgmap v3714: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:46:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3715: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:46:38 compute-0 ceph-mon[74985]: pgmap v3715: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:46:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3716: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:46:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:46:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:46:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:46:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:46:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:46:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:46:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:46:40 compute-0 nova_compute[254092]: 2025-11-25 17:46:40.139 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:46:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:46:40
Nov 25 17:46:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:46:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:46:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['.mgr', 'backups', 'cephfs.cephfs.data', 'default.rgw.log', 'vms', 'default.rgw.meta', 'images', 'default.rgw.control', 'volumes', '.rgw.root', 'cephfs.cephfs.meta']
Nov 25 17:46:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:46:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:46:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:46:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:46:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:46:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:46:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:46:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:46:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:46:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:46:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:46:40 compute-0 ceph-mon[74985]: pgmap v3716: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:46:41 compute-0 nova_compute[254092]: 2025-11-25 17:46:41.591 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:46:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3717: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:46:42 compute-0 ceph-mon[74985]: pgmap v3717: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:46:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3718: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:46:44 compute-0 ceph-mon[74985]: pgmap v3718: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:46:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:46:45 compute-0 nova_compute[254092]: 2025-11-25 17:46:45.140 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:46:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3719: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:46:46 compute-0 nova_compute[254092]: 2025-11-25 17:46:46.595 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:46:46 compute-0 ceph-mon[74985]: pgmap v3719: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:46:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3720: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:46:49 compute-0 ceph-mon[74985]: pgmap v3720: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:46:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3721: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:46:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:46:50 compute-0 nova_compute[254092]: 2025-11-25 17:46:50.141 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:46:51 compute-0 ceph-mon[74985]: pgmap v3721: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:46:51 compute-0 nova_compute[254092]: 2025-11-25 17:46:51.596 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:46:51 compute-0 podman[449528]: 2025-11-25 17:46:51.630316424 +0000 UTC m=+0.047828963 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 17:46:51 compute-0 podman[449527]: 2025-11-25 17:46:51.641663203 +0000 UTC m=+0.058638587 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251118)
Nov 25 17:46:51 compute-0 podman[449529]: 2025-11-25 17:46:51.662520301 +0000 UTC m=+0.077347157 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 17:46:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3722: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:46:52 compute-0 ceph-mon[74985]: pgmap v3722: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:46:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:46:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:46:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:46:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:46:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 17:46:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:46:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:46:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:46:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:46:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:46:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 17:46:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:46:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:46:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:46:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:46:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:46:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:46:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:46:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:46:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:46:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:46:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:46:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:46:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3723: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:46:54 compute-0 ceph-mon[74985]: pgmap v3723: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:46:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:46:55 compute-0 nova_compute[254092]: 2025-11-25 17:46:55.143 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:46:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:46:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/611584250' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:46:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:46:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/611584250' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:46:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3724: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:46:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/611584250' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:46:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/611584250' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:46:56 compute-0 nova_compute[254092]: 2025-11-25 17:46:56.598 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:46:56 compute-0 ceph-mon[74985]: pgmap v3724: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:46:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3725: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:46:58 compute-0 ceph-mon[74985]: pgmap v3725: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:46:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3726: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:47:00 compute-0 nova_compute[254092]: 2025-11-25 17:47:00.145 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:47:00 compute-0 ceph-mon[74985]: pgmap v3726: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:01 compute-0 nova_compute[254092]: 2025-11-25 17:47:01.601 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:47:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3727: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:02 compute-0 ceph-mon[74985]: pgmap v3727: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3728: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:04 compute-0 ceph-mon[74985]: pgmap v3728: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:47:05 compute-0 nova_compute[254092]: 2025-11-25 17:47:05.147 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:47:05 compute-0 nova_compute[254092]: 2025-11-25 17:47:05.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:47:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3729: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:06 compute-0 nova_compute[254092]: 2025-11-25 17:47:06.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:47:06 compute-0 nova_compute[254092]: 2025-11-25 17:47:06.604 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:47:06 compute-0 ceph-mon[74985]: pgmap v3729: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:07 compute-0 sudo[449592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:47:07 compute-0 sudo[449592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:47:07 compute-0 sudo[449592]: pam_unix(sudo:session): session closed for user root
Nov 25 17:47:07 compute-0 sudo[449617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:47:07 compute-0 sudo[449617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:47:07 compute-0 sudo[449617]: pam_unix(sudo:session): session closed for user root
Nov 25 17:47:07 compute-0 sudo[449642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:47:07 compute-0 sudo[449642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:47:07 compute-0 sudo[449642]: pam_unix(sudo:session): session closed for user root
Nov 25 17:47:07 compute-0 sudo[449667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 25 17:47:07 compute-0 sudo[449667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:47:07 compute-0 sudo[449667]: pam_unix(sudo:session): session closed for user root
Nov 25 17:47:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:47:07 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:47:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:47:07 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:47:07 compute-0 sudo[449711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:47:07 compute-0 sudo[449711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:47:07 compute-0 sudo[449711]: pam_unix(sudo:session): session closed for user root
Nov 25 17:47:07 compute-0 sudo[449736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:47:07 compute-0 sudo[449736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:47:07 compute-0 sudo[449736]: pam_unix(sudo:session): session closed for user root
Nov 25 17:47:07 compute-0 sudo[449761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:47:07 compute-0 sudo[449761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:47:07 compute-0 sudo[449761]: pam_unix(sudo:session): session closed for user root
Nov 25 17:47:07 compute-0 sudo[449786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:47:07 compute-0 sudo[449786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:47:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3730: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:08 compute-0 sudo[449786]: pam_unix(sudo:session): session closed for user root
Nov 25 17:47:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:47:08 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:47:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:47:08 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:47:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:47:08 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:47:08 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev b1db504f-7a6d-4470-a14e-85ebc878885a does not exist
Nov 25 17:47:08 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 7ac9cd6d-da4d-41db-9e2a-4e9c869e908d does not exist
Nov 25 17:47:08 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 7e2f61e3-f1fc-473b-bc1f-3d11e973cf1b does not exist
Nov 25 17:47:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:47:08 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:47:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:47:08 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:47:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:47:08 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:47:08 compute-0 sudo[449842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:47:08 compute-0 sudo[449842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:47:08 compute-0 sudo[449842]: pam_unix(sudo:session): session closed for user root
Nov 25 17:47:08 compute-0 sudo[449867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:47:08 compute-0 sudo[449867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:47:08 compute-0 sudo[449867]: pam_unix(sudo:session): session closed for user root
Nov 25 17:47:08 compute-0 sudo[449892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:47:08 compute-0 sudo[449892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:47:08 compute-0 sudo[449892]: pam_unix(sudo:session): session closed for user root
Nov 25 17:47:08 compute-0 sudo[449917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:47:08 compute-0 sudo[449917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:47:08 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:47:08 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:47:08 compute-0 ceph-mon[74985]: pgmap v3730: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:08 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:47:08 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:47:08 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:47:08 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:47:08 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:47:08 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:47:08 compute-0 nova_compute[254092]: 2025-11-25 17:47:08.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:47:08 compute-0 podman[449983]: 2025-11-25 17:47:08.718150617 +0000 UTC m=+0.037338457 container create f825dda09e11a840b870bf3a323d78091245c0a39ad0dfcae467e994b081fafb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_shamir, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 17:47:08 compute-0 systemd[1]: Started libpod-conmon-f825dda09e11a840b870bf3a323d78091245c0a39ad0dfcae467e994b081fafb.scope.
Nov 25 17:47:08 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:47:08 compute-0 podman[449983]: 2025-11-25 17:47:08.702074479 +0000 UTC m=+0.021262329 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:47:08 compute-0 podman[449983]: 2025-11-25 17:47:08.801939818 +0000 UTC m=+0.121127678 container init f825dda09e11a840b870bf3a323d78091245c0a39ad0dfcae467e994b081fafb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_shamir, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 17:47:08 compute-0 podman[449983]: 2025-11-25 17:47:08.810144622 +0000 UTC m=+0.129332452 container start f825dda09e11a840b870bf3a323d78091245c0a39ad0dfcae467e994b081fafb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:47:08 compute-0 podman[449983]: 2025-11-25 17:47:08.813161374 +0000 UTC m=+0.132349234 container attach f825dda09e11a840b870bf3a323d78091245c0a39ad0dfcae467e994b081fafb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_shamir, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 17:47:08 compute-0 keen_shamir[449999]: 167 167
Nov 25 17:47:08 compute-0 systemd[1]: libpod-f825dda09e11a840b870bf3a323d78091245c0a39ad0dfcae467e994b081fafb.scope: Deactivated successfully.
Nov 25 17:47:08 compute-0 podman[449983]: 2025-11-25 17:47:08.817114071 +0000 UTC m=+0.136301901 container died f825dda09e11a840b870bf3a323d78091245c0a39ad0dfcae467e994b081fafb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_shamir, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:47:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-91df52b1c3815759c4233039899493e820976199120ba73d7398cf89a505a487-merged.mount: Deactivated successfully.
Nov 25 17:47:08 compute-0 podman[449983]: 2025-11-25 17:47:08.860085742 +0000 UTC m=+0.179273572 container remove f825dda09e11a840b870bf3a323d78091245c0a39ad0dfcae467e994b081fafb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_shamir, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:47:08 compute-0 systemd[1]: libpod-conmon-f825dda09e11a840b870bf3a323d78091245c0a39ad0dfcae467e994b081fafb.scope: Deactivated successfully.
Nov 25 17:47:09 compute-0 podman[450022]: 2025-11-25 17:47:09.088631723 +0000 UTC m=+0.064548468 container create 989b5dc03e49a88243739d89c46881594e10358ebb5b918ffe9ea69e776571ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bell, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Nov 25 17:47:09 compute-0 systemd[1]: Started libpod-conmon-989b5dc03e49a88243739d89c46881594e10358ebb5b918ffe9ea69e776571ca.scope.
Nov 25 17:47:09 compute-0 podman[450022]: 2025-11-25 17:47:09.060322132 +0000 UTC m=+0.036238897 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:47:09 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:47:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/746c7972067a87dd8a29407b13cf0570713ae01d0bd3e7264de3e18a3bf67a80/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:47:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/746c7972067a87dd8a29407b13cf0570713ae01d0bd3e7264de3e18a3bf67a80/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:47:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/746c7972067a87dd8a29407b13cf0570713ae01d0bd3e7264de3e18a3bf67a80/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:47:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/746c7972067a87dd8a29407b13cf0570713ae01d0bd3e7264de3e18a3bf67a80/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:47:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/746c7972067a87dd8a29407b13cf0570713ae01d0bd3e7264de3e18a3bf67a80/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:47:09 compute-0 podman[450022]: 2025-11-25 17:47:09.200589051 +0000 UTC m=+0.176505766 container init 989b5dc03e49a88243739d89c46881594e10358ebb5b918ffe9ea69e776571ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bell, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:47:09 compute-0 podman[450022]: 2025-11-25 17:47:09.213559004 +0000 UTC m=+0.189475719 container start 989b5dc03e49a88243739d89c46881594e10358ebb5b918ffe9ea69e776571ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 17:47:09 compute-0 podman[450022]: 2025-11-25 17:47:09.216090013 +0000 UTC m=+0.192006738 container attach 989b5dc03e49a88243739d89c46881594e10358ebb5b918ffe9ea69e776571ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Nov 25 17:47:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3731: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:47:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:47:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:47:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:47:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:47:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:47:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:47:10 compute-0 nova_compute[254092]: 2025-11-25 17:47:10.149 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:47:10 compute-0 gifted_bell[450039]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:47:10 compute-0 gifted_bell[450039]: --> relative data size: 1.0
Nov 25 17:47:10 compute-0 gifted_bell[450039]: --> All data devices are unavailable
Nov 25 17:47:10 compute-0 systemd[1]: libpod-989b5dc03e49a88243739d89c46881594e10358ebb5b918ffe9ea69e776571ca.scope: Deactivated successfully.
Nov 25 17:47:10 compute-0 systemd[1]: libpod-989b5dc03e49a88243739d89c46881594e10358ebb5b918ffe9ea69e776571ca.scope: Consumed 1.083s CPU time.
Nov 25 17:47:10 compute-0 podman[450022]: 2025-11-25 17:47:10.345061158 +0000 UTC m=+1.320977883 container died 989b5dc03e49a88243739d89c46881594e10358ebb5b918ffe9ea69e776571ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 17:47:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-746c7972067a87dd8a29407b13cf0570713ae01d0bd3e7264de3e18a3bf67a80-merged.mount: Deactivated successfully.
Nov 25 17:47:10 compute-0 podman[450022]: 2025-11-25 17:47:10.408458143 +0000 UTC m=+1.384374868 container remove 989b5dc03e49a88243739d89c46881594e10358ebb5b918ffe9ea69e776571ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bell, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:47:10 compute-0 systemd[1]: libpod-conmon-989b5dc03e49a88243739d89c46881594e10358ebb5b918ffe9ea69e776571ca.scope: Deactivated successfully.
Nov 25 17:47:10 compute-0 sudo[449917]: pam_unix(sudo:session): session closed for user root
Nov 25 17:47:10 compute-0 nova_compute[254092]: 2025-11-25 17:47:10.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:47:10 compute-0 sudo[450080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:47:10 compute-0 sudo[450080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:47:10 compute-0 nova_compute[254092]: 2025-11-25 17:47:10.514 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:47:10 compute-0 nova_compute[254092]: 2025-11-25 17:47:10.515 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:47:10 compute-0 nova_compute[254092]: 2025-11-25 17:47:10.515 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:47:10 compute-0 nova_compute[254092]: 2025-11-25 17:47:10.515 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:47:10 compute-0 nova_compute[254092]: 2025-11-25 17:47:10.516 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:47:10 compute-0 sudo[450080]: pam_unix(sudo:session): session closed for user root
Nov 25 17:47:10 compute-0 sudo[450105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:47:10 compute-0 sudo[450105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:47:10 compute-0 sudo[450105]: pam_unix(sudo:session): session closed for user root
Nov 25 17:47:10 compute-0 sudo[450131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:47:10 compute-0 sudo[450131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:47:10 compute-0 sudo[450131]: pam_unix(sudo:session): session closed for user root
Nov 25 17:47:10 compute-0 sudo[450175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:47:10 compute-0 sudo[450175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:47:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:47:10 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3997106820' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:47:10 compute-0 ceph-mon[74985]: pgmap v3731: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:10 compute-0 nova_compute[254092]: 2025-11-25 17:47:10.950 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:47:11 compute-0 podman[450244]: 2025-11-25 17:47:11.027201908 +0000 UTC m=+0.041215464 container create a41a5ad84c17ff1b815775825394a117e6e595d30b6e2e4d1ede4ad44b8abaa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 17:47:11 compute-0 systemd[1]: Started libpod-conmon-a41a5ad84c17ff1b815775825394a117e6e595d30b6e2e4d1ede4ad44b8abaa2.scope.
Nov 25 17:47:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:47:11 compute-0 podman[450244]: 2025-11-25 17:47:11.097975704 +0000 UTC m=+0.111989280 container init a41a5ad84c17ff1b815775825394a117e6e595d30b6e2e4d1ede4ad44b8abaa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:47:11 compute-0 podman[450244]: 2025-11-25 17:47:11.104608115 +0000 UTC m=+0.118621671 container start a41a5ad84c17ff1b815775825394a117e6e595d30b6e2e4d1ede4ad44b8abaa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:47:11 compute-0 podman[450244]: 2025-11-25 17:47:11.009931128 +0000 UTC m=+0.023944704 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:47:11 compute-0 podman[450244]: 2025-11-25 17:47:11.107433062 +0000 UTC m=+0.121446638 container attach a41a5ad84c17ff1b815775825394a117e6e595d30b6e2e4d1ede4ad44b8abaa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_shaw, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:47:11 compute-0 sharp_shaw[450260]: 167 167
Nov 25 17:47:11 compute-0 systemd[1]: libpod-a41a5ad84c17ff1b815775825394a117e6e595d30b6e2e4d1ede4ad44b8abaa2.scope: Deactivated successfully.
Nov 25 17:47:11 compute-0 podman[450244]: 2025-11-25 17:47:11.110455714 +0000 UTC m=+0.124469270 container died a41a5ad84c17ff1b815775825394a117e6e595d30b6e2e4d1ede4ad44b8abaa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:47:11 compute-0 nova_compute[254092]: 2025-11-25 17:47:11.118 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:47:11 compute-0 nova_compute[254092]: 2025-11-25 17:47:11.119 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3569MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:47:11 compute-0 nova_compute[254092]: 2025-11-25 17:47:11.119 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:47:11 compute-0 nova_compute[254092]: 2025-11-25 17:47:11.119 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:47:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8be0076bf57f2497660d676c85e141b1ee5ba13fb42c65195e90d347a2c16be-merged.mount: Deactivated successfully.
Nov 25 17:47:11 compute-0 podman[450244]: 2025-11-25 17:47:11.144333487 +0000 UTC m=+0.158347033 container remove a41a5ad84c17ff1b815775825394a117e6e595d30b6e2e4d1ede4ad44b8abaa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_shaw, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 17:47:11 compute-0 systemd[1]: libpod-conmon-a41a5ad84c17ff1b815775825394a117e6e595d30b6e2e4d1ede4ad44b8abaa2.scope: Deactivated successfully.
Nov 25 17:47:11 compute-0 nova_compute[254092]: 2025-11-25 17:47:11.200 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:47:11 compute-0 nova_compute[254092]: 2025-11-25 17:47:11.201 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:47:11 compute-0 nova_compute[254092]: 2025-11-25 17:47:11.228 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:47:11 compute-0 podman[450285]: 2025-11-25 17:47:11.295574334 +0000 UTC m=+0.038893470 container create 77da2b4ec27fd9c8ca13313533bce9faaa8f03a81a069e6e188908b793764367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hopper, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 17:47:11 compute-0 systemd[1]: Started libpod-conmon-77da2b4ec27fd9c8ca13313533bce9faaa8f03a81a069e6e188908b793764367.scope.
Nov 25 17:47:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:47:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a57a2bda6d3844bf6ec32c40f3becdc04de0707580470eb20a46bfbaabc7dda/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:47:11 compute-0 podman[450285]: 2025-11-25 17:47:11.280479943 +0000 UTC m=+0.023799099 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:47:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a57a2bda6d3844bf6ec32c40f3becdc04de0707580470eb20a46bfbaabc7dda/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:47:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a57a2bda6d3844bf6ec32c40f3becdc04de0707580470eb20a46bfbaabc7dda/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:47:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a57a2bda6d3844bf6ec32c40f3becdc04de0707580470eb20a46bfbaabc7dda/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:47:11 compute-0 podman[450285]: 2025-11-25 17:47:11.385227315 +0000 UTC m=+0.128546461 container init 77da2b4ec27fd9c8ca13313533bce9faaa8f03a81a069e6e188908b793764367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 17:47:11 compute-0 podman[450285]: 2025-11-25 17:47:11.391978548 +0000 UTC m=+0.135297684 container start 77da2b4ec27fd9c8ca13313533bce9faaa8f03a81a069e6e188908b793764367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hopper, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:47:11 compute-0 podman[450285]: 2025-11-25 17:47:11.395037021 +0000 UTC m=+0.138356157 container attach 77da2b4ec27fd9c8ca13313533bce9faaa8f03a81a069e6e188908b793764367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 17:47:11 compute-0 nova_compute[254092]: 2025-11-25 17:47:11.606 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:47:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:47:11 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1326731643' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:47:11 compute-0 nova_compute[254092]: 2025-11-25 17:47:11.695 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:47:11 compute-0 nova_compute[254092]: 2025-11-25 17:47:11.702 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:47:11 compute-0 nova_compute[254092]: 2025-11-25 17:47:11.723 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:47:11 compute-0 nova_compute[254092]: 2025-11-25 17:47:11.726 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:47:11 compute-0 nova_compute[254092]: 2025-11-25 17:47:11.726 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:47:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3732: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:11 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3997106820' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:47:11 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1326731643' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:47:12 compute-0 lucid_hopper[450301]: {
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:     "0": [
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:         {
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:             "devices": [
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:                 "/dev/loop3"
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:             ],
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:             "lv_name": "ceph_lv0",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:             "lv_size": "21470642176",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:             "name": "ceph_lv0",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:             "tags": {
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:                 "ceph.cluster_name": "ceph",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:                 "ceph.crush_device_class": "",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:                 "ceph.encrypted": "0",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:                 "ceph.osd_id": "0",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:                 "ceph.type": "block",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:                 "ceph.vdo": "0"
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:             },
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:             "type": "block",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:             "vg_name": "ceph_vg0"
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:         }
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:     ],
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:     "1": [
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:         {
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:             "devices": [
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:                 "/dev/loop4"
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:             ],
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:             "lv_name": "ceph_lv1",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:             "lv_size": "21470642176",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:             "name": "ceph_lv1",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:             "tags": {
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:                 "ceph.cluster_name": "ceph",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:                 "ceph.crush_device_class": "",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:                 "ceph.encrypted": "0",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:                 "ceph.osd_id": "1",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:                 "ceph.type": "block",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:                 "ceph.vdo": "0"
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:             },
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:             "type": "block",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:             "vg_name": "ceph_vg1"
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:         }
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:     ],
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:     "2": [
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:         {
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:             "devices": [
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:                 "/dev/loop5"
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:             ],
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:             "lv_name": "ceph_lv2",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:             "lv_size": "21470642176",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:             "name": "ceph_lv2",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:             "tags": {
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:                 "ceph.cluster_name": "ceph",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:                 "ceph.crush_device_class": "",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:                 "ceph.encrypted": "0",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:                 "ceph.osd_id": "2",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:                 "ceph.type": "block",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:                 "ceph.vdo": "0"
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:             },
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:             "type": "block",
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:             "vg_name": "ceph_vg2"
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:         }
Nov 25 17:47:12 compute-0 lucid_hopper[450301]:     ]
Nov 25 17:47:12 compute-0 lucid_hopper[450301]: }
Nov 25 17:47:12 compute-0 systemd[1]: libpod-77da2b4ec27fd9c8ca13313533bce9faaa8f03a81a069e6e188908b793764367.scope: Deactivated successfully.
Nov 25 17:47:12 compute-0 podman[450285]: 2025-11-25 17:47:12.160058549 +0000 UTC m=+0.903377725 container died 77da2b4ec27fd9c8ca13313533bce9faaa8f03a81a069e6e188908b793764367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 17:47:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a57a2bda6d3844bf6ec32c40f3becdc04de0707580470eb20a46bfbaabc7dda-merged.mount: Deactivated successfully.
Nov 25 17:47:12 compute-0 podman[450285]: 2025-11-25 17:47:12.223174776 +0000 UTC m=+0.966493922 container remove 77da2b4ec27fd9c8ca13313533bce9faaa8f03a81a069e6e188908b793764367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hopper, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 17:47:12 compute-0 systemd[1]: libpod-conmon-77da2b4ec27fd9c8ca13313533bce9faaa8f03a81a069e6e188908b793764367.scope: Deactivated successfully.
Nov 25 17:47:12 compute-0 sudo[450175]: pam_unix(sudo:session): session closed for user root
Nov 25 17:47:12 compute-0 sudo[450346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:47:12 compute-0 sudo[450346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:47:12 compute-0 sudo[450346]: pam_unix(sudo:session): session closed for user root
Nov 25 17:47:12 compute-0 sudo[450371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:47:12 compute-0 sudo[450371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:47:12 compute-0 sudo[450371]: pam_unix(sudo:session): session closed for user root
Nov 25 17:47:12 compute-0 sudo[450396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:47:12 compute-0 sudo[450396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:47:12 compute-0 sudo[450396]: pam_unix(sudo:session): session closed for user root
Nov 25 17:47:12 compute-0 sudo[450421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:47:12 compute-0 sudo[450421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:47:12 compute-0 ceph-mon[74985]: pgmap v3732: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:12 compute-0 podman[450489]: 2025-11-25 17:47:12.99321826 +0000 UTC m=+0.063495530 container create 0efa5c9aed8eafb869f2a5c29135cc24040ea24a994edd8334dd2614f878757d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jones, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:47:13 compute-0 systemd[1]: Started libpod-conmon-0efa5c9aed8eafb869f2a5c29135cc24040ea24a994edd8334dd2614f878757d.scope.
Nov 25 17:47:13 compute-0 podman[450489]: 2025-11-25 17:47:12.963756997 +0000 UTC m=+0.034034307 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:47:13 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:47:13 compute-0 podman[450489]: 2025-11-25 17:47:13.097957831 +0000 UTC m=+0.168235091 container init 0efa5c9aed8eafb869f2a5c29135cc24040ea24a994edd8334dd2614f878757d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 17:47:13 compute-0 podman[450489]: 2025-11-25 17:47:13.111601472 +0000 UTC m=+0.181878712 container start 0efa5c9aed8eafb869f2a5c29135cc24040ea24a994edd8334dd2614f878757d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jones, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:47:13 compute-0 podman[450489]: 2025-11-25 17:47:13.114902972 +0000 UTC m=+0.185180272 container attach 0efa5c9aed8eafb869f2a5c29135cc24040ea24a994edd8334dd2614f878757d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jones, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 17:47:13 compute-0 compassionate_jones[450505]: 167 167
Nov 25 17:47:13 compute-0 systemd[1]: libpod-0efa5c9aed8eafb869f2a5c29135cc24040ea24a994edd8334dd2614f878757d.scope: Deactivated successfully.
Nov 25 17:47:13 compute-0 podman[450489]: 2025-11-25 17:47:13.121439731 +0000 UTC m=+0.191717061 container died 0efa5c9aed8eafb869f2a5c29135cc24040ea24a994edd8334dd2614f878757d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jones, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 17:47:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e8188b24c276f86b20bb3224f9bf5c976c9e28691d471b6e6187502fa15c49a-merged.mount: Deactivated successfully.
Nov 25 17:47:13 compute-0 podman[450489]: 2025-11-25 17:47:13.158610392 +0000 UTC m=+0.228887632 container remove 0efa5c9aed8eafb869f2a5c29135cc24040ea24a994edd8334dd2614f878757d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 17:47:13 compute-0 systemd[1]: libpod-conmon-0efa5c9aed8eafb869f2a5c29135cc24040ea24a994edd8334dd2614f878757d.scope: Deactivated successfully.
Nov 25 17:47:13 compute-0 podman[450528]: 2025-11-25 17:47:13.31869451 +0000 UTC m=+0.045510349 container create df3e5e95ade9fcaee05747700ed2b5b5a69c70d6debb93889928effd62c1098d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:47:13 compute-0 systemd[1]: Started libpod-conmon-df3e5e95ade9fcaee05747700ed2b5b5a69c70d6debb93889928effd62c1098d.scope.
Nov 25 17:47:13 compute-0 podman[450528]: 2025-11-25 17:47:13.296668151 +0000 UTC m=+0.023484000 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:47:13 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:47:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e02e77d7436c947a35a1c20ce117b3fdc16682576454842446af4fe467ce1595/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:47:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e02e77d7436c947a35a1c20ce117b3fdc16682576454842446af4fe467ce1595/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:47:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e02e77d7436c947a35a1c20ce117b3fdc16682576454842446af4fe467ce1595/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:47:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e02e77d7436c947a35a1c20ce117b3fdc16682576454842446af4fe467ce1595/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:47:13 compute-0 podman[450528]: 2025-11-25 17:47:13.414311993 +0000 UTC m=+0.141127822 container init df3e5e95ade9fcaee05747700ed2b5b5a69c70d6debb93889928effd62c1098d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_gates, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:47:13 compute-0 podman[450528]: 2025-11-25 17:47:13.422202028 +0000 UTC m=+0.149017877 container start df3e5e95ade9fcaee05747700ed2b5b5a69c70d6debb93889928effd62c1098d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:47:13 compute-0 podman[450528]: 2025-11-25 17:47:13.426215888 +0000 UTC m=+0.153031727 container attach df3e5e95ade9fcaee05747700ed2b5b5a69c70d6debb93889928effd62c1098d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_gates, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 17:47:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:47:13.691 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:47:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:47:13.692 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:47:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:47:13.692 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:47:13 compute-0 nova_compute[254092]: 2025-11-25 17:47:13.727 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:47:13 compute-0 nova_compute[254092]: 2025-11-25 17:47:13.727 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:47:13 compute-0 nova_compute[254092]: 2025-11-25 17:47:13.727 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:47:13 compute-0 nova_compute[254092]: 2025-11-25 17:47:13.728 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:47:13 compute-0 nova_compute[254092]: 2025-11-25 17:47:13.744 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 17:47:13 compute-0 nova_compute[254092]: 2025-11-25 17:47:13.744 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:47:13 compute-0 nova_compute[254092]: 2025-11-25 17:47:13.745 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:47:13 compute-0 nova_compute[254092]: 2025-11-25 17:47:13.745 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:47:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3733: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:14 compute-0 competent_gates[450545]: {
Nov 25 17:47:14 compute-0 competent_gates[450545]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:47:14 compute-0 competent_gates[450545]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:47:14 compute-0 competent_gates[450545]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:47:14 compute-0 competent_gates[450545]:         "osd_id": 1,
Nov 25 17:47:14 compute-0 competent_gates[450545]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:47:14 compute-0 competent_gates[450545]:         "type": "bluestore"
Nov 25 17:47:14 compute-0 competent_gates[450545]:     },
Nov 25 17:47:14 compute-0 competent_gates[450545]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:47:14 compute-0 competent_gates[450545]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:47:14 compute-0 competent_gates[450545]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:47:14 compute-0 competent_gates[450545]:         "osd_id": 2,
Nov 25 17:47:14 compute-0 competent_gates[450545]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:47:14 compute-0 competent_gates[450545]:         "type": "bluestore"
Nov 25 17:47:14 compute-0 competent_gates[450545]:     },
Nov 25 17:47:14 compute-0 competent_gates[450545]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:47:14 compute-0 competent_gates[450545]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:47:14 compute-0 competent_gates[450545]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:47:14 compute-0 competent_gates[450545]:         "osd_id": 0,
Nov 25 17:47:14 compute-0 competent_gates[450545]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:47:14 compute-0 competent_gates[450545]:         "type": "bluestore"
Nov 25 17:47:14 compute-0 competent_gates[450545]:     }
Nov 25 17:47:14 compute-0 competent_gates[450545]: }
Nov 25 17:47:14 compute-0 systemd[1]: libpod-df3e5e95ade9fcaee05747700ed2b5b5a69c70d6debb93889928effd62c1098d.scope: Deactivated successfully.
Nov 25 17:47:14 compute-0 systemd[1]: libpod-df3e5e95ade9fcaee05747700ed2b5b5a69c70d6debb93889928effd62c1098d.scope: Consumed 1.116s CPU time.
Nov 25 17:47:14 compute-0 podman[450528]: 2025-11-25 17:47:14.531437956 +0000 UTC m=+1.258253765 container died df3e5e95ade9fcaee05747700ed2b5b5a69c70d6debb93889928effd62c1098d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_gates, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:47:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-e02e77d7436c947a35a1c20ce117b3fdc16682576454842446af4fe467ce1595-merged.mount: Deactivated successfully.
Nov 25 17:47:14 compute-0 podman[450528]: 2025-11-25 17:47:14.589486846 +0000 UTC m=+1.316302655 container remove df3e5e95ade9fcaee05747700ed2b5b5a69c70d6debb93889928effd62c1098d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_gates, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 17:47:14 compute-0 systemd[1]: libpod-conmon-df3e5e95ade9fcaee05747700ed2b5b5a69c70d6debb93889928effd62c1098d.scope: Deactivated successfully.
Nov 25 17:47:14 compute-0 sudo[450421]: pam_unix(sudo:session): session closed for user root
Nov 25 17:47:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:47:14 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:47:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:47:14 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:47:14 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev e1415b1f-4488-4985-a6c3-429f2157204b does not exist
Nov 25 17:47:14 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 575208ef-5f7f-4f47-8d0b-2663958eb69d does not exist
Nov 25 17:47:14 compute-0 sudo[450591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:47:14 compute-0 sudo[450591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:47:14 compute-0 sudo[450591]: pam_unix(sudo:session): session closed for user root
Nov 25 17:47:14 compute-0 sudo[450616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:47:14 compute-0 sudo[450616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:47:14 compute-0 sudo[450616]: pam_unix(sudo:session): session closed for user root
Nov 25 17:47:14 compute-0 ceph-mon[74985]: pgmap v3733: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:14 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:47:14 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:47:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:47:15 compute-0 nova_compute[254092]: 2025-11-25 17:47:15.187 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:47:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3734: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:16 compute-0 nova_compute[254092]: 2025-11-25 17:47:16.609 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:47:16 compute-0 ceph-mon[74985]: pgmap v3734: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3735: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:18 compute-0 ceph-mon[74985]: pgmap v3735: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3736: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:47:20 compute-0 nova_compute[254092]: 2025-11-25 17:47:20.224 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:47:20 compute-0 ceph-mon[74985]: pgmap v3736: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:21 compute-0 nova_compute[254092]: 2025-11-25 17:47:21.613 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:47:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3737: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:22 compute-0 podman[450642]: 2025-11-25 17:47:22.696133009 +0000 UTC m=+0.094662959 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 25 17:47:22 compute-0 podman[450641]: 2025-11-25 17:47:22.734197074 +0000 UTC m=+0.132332963 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2)
Nov 25 17:47:22 compute-0 podman[450643]: 2025-11-25 17:47:22.781123022 +0000 UTC m=+0.170350539 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 17:47:23 compute-0 ceph-mon[74985]: pgmap v3737: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3738: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:24 compute-0 nova_compute[254092]: 2025-11-25 17:47:24.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:47:25 compute-0 ceph-mon[74985]: pgmap v3738: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:47:25 compute-0 nova_compute[254092]: 2025-11-25 17:47:25.225 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:47:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3739: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:26 compute-0 nova_compute[254092]: 2025-11-25 17:47:26.616 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:47:27 compute-0 ceph-mon[74985]: pgmap v3739: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3740: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:29 compute-0 ceph-mon[74985]: pgmap v3740: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3741: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:47:30 compute-0 nova_compute[254092]: 2025-11-25 17:47:30.283 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:47:31 compute-0 ceph-mon[74985]: pgmap v3741: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:31 compute-0 nova_compute[254092]: 2025-11-25 17:47:31.618 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:47:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3742: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:33 compute-0 ceph-mon[74985]: pgmap v3742: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3743: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:34 compute-0 ceph-mon[74985]: pgmap v3743: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:47:35 compute-0 nova_compute[254092]: 2025-11-25 17:47:35.336 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:47:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3744: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:36 compute-0 nova_compute[254092]: 2025-11-25 17:47:36.620 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:47:36 compute-0 ceph-mon[74985]: pgmap v3744: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3745: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:38 compute-0 ceph-mon[74985]: pgmap v3745: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3746: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:47:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:47:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:47:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:47:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:47:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:47:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:47:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:47:40
Nov 25 17:47:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:47:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:47:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', 'vms', 'volumes', 'images', 'backups', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root']
Nov 25 17:47:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:47:40 compute-0 nova_compute[254092]: 2025-11-25 17:47:40.388 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:47:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:47:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:47:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:47:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:47:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:47:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:47:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:47:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:47:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:47:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:47:40 compute-0 ceph-mon[74985]: pgmap v3746: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:41 compute-0 nova_compute[254092]: 2025-11-25 17:47:41.623 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:47:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3747: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:41 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #180. Immutable memtables: 0.
Nov 25 17:47:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:47:41.965890) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 17:47:41 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 111] Flushing memtable with next log file: 180
Nov 25 17:47:41 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092861965915, "job": 111, "event": "flush_started", "num_memtables": 1, "num_entries": 1511, "num_deletes": 251, "total_data_size": 2459817, "memory_usage": 2506864, "flush_reason": "Manual Compaction"}
Nov 25 17:47:41 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 111] Level-0 flush table #181: started
Nov 25 17:47:41 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092861986725, "cf_name": "default", "job": 111, "event": "table_file_creation", "file_number": 181, "file_size": 2414762, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75954, "largest_seqno": 77464, "table_properties": {"data_size": 2407609, "index_size": 4223, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14477, "raw_average_key_size": 19, "raw_value_size": 2393460, "raw_average_value_size": 3292, "num_data_blocks": 189, "num_entries": 727, "num_filter_entries": 727, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764092700, "oldest_key_time": 1764092700, "file_creation_time": 1764092861, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 181, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:47:41 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 111] Flush lasted 20866 microseconds, and 4863 cpu microseconds.
Nov 25 17:47:41 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:47:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:47:41.986754) [db/flush_job.cc:967] [default] [JOB 111] Level-0 flush table #181: 2414762 bytes OK
Nov 25 17:47:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:47:41.986770) [db/memtable_list.cc:519] [default] Level-0 commit table #181 started
Nov 25 17:47:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:47:41.988330) [db/memtable_list.cc:722] [default] Level-0 commit table #181: memtable #1 done
Nov 25 17:47:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:47:41.988378) EVENT_LOG_v1 {"time_micros": 1764092861988368, "job": 111, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 17:47:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:47:41.988405) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 17:47:41 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 111] Try to delete WAL files size 2453213, prev total WAL file size 2453213, number of live WAL files 2.
Nov 25 17:47:41 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000177.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:47:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:47:41.989173) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037353330' seq:72057594037927935, type:22 .. '7061786F730037373832' seq:0, type:0; will stop at (end)
Nov 25 17:47:41 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 112] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 17:47:41 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 111 Base level 0, inputs: [181(2358KB)], [179(10226KB)]
Nov 25 17:47:41 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092861989202, "job": 112, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [181], "files_L6": [179], "score": -1, "input_data_size": 12886590, "oldest_snapshot_seqno": -1}
Nov 25 17:47:42 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 112] Generated table #182: 9320 keys, 11141598 bytes, temperature: kUnknown
Nov 25 17:47:42 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092862058402, "cf_name": "default", "job": 112, "event": "table_file_creation", "file_number": 182, "file_size": 11141598, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11082692, "index_size": 34473, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23365, "raw_key_size": 245872, "raw_average_key_size": 26, "raw_value_size": 10919605, "raw_average_value_size": 1171, "num_data_blocks": 1328, "num_entries": 9320, "num_filter_entries": 9320, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764092861, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 182, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:47:42 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:47:42 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:47:42.058717) [db/compaction/compaction_job.cc:1663] [default] [JOB 112] Compacted 1@0 + 1@6 files to L6 => 11141598 bytes
Nov 25 17:47:42 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:47:42.059824) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 186.0 rd, 160.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 10.0 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(10.0) write-amplify(4.6) OK, records in: 9834, records dropped: 514 output_compression: NoCompression
Nov 25 17:47:42 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:47:42.059844) EVENT_LOG_v1 {"time_micros": 1764092862059834, "job": 112, "event": "compaction_finished", "compaction_time_micros": 69283, "compaction_time_cpu_micros": 24344, "output_level": 6, "num_output_files": 1, "total_output_size": 11141598, "num_input_records": 9834, "num_output_records": 9320, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 17:47:42 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000181.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:47:42 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092862060441, "job": 112, "event": "table_file_deletion", "file_number": 181}
Nov 25 17:47:42 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000179.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:47:42 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764092862062946, "job": 112, "event": "table_file_deletion", "file_number": 179}
Nov 25 17:47:42 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:47:41.989114) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:47:42 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:47:42.062982) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:47:42 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:47:42.062986) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:47:42 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:47:42.062988) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:47:42 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:47:42.062990) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:47:42 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:47:42.062992) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:47:42 compute-0 ceph-mon[74985]: pgmap v3747: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3748: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:44 compute-0 ceph-mon[74985]: pgmap v3748: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:47:45 compute-0 nova_compute[254092]: 2025-11-25 17:47:45.390 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:47:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3749: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:46 compute-0 nova_compute[254092]: 2025-11-25 17:47:46.626 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:47:46 compute-0 ceph-mon[74985]: pgmap v3749: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3750: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:48 compute-0 ceph-mon[74985]: pgmap v3750: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3751: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:47:50 compute-0 nova_compute[254092]: 2025-11-25 17:47:50.392 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:47:50 compute-0 ceph-mon[74985]: pgmap v3751: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:51 compute-0 nova_compute[254092]: 2025-11-25 17:47:51.628 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:47:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3752: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:47:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:47:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:47:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:47:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 17:47:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:47:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:47:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:47:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:47:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:47:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 17:47:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:47:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:47:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:47:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:47:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:47:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:47:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:47:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:47:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:47:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:47:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:47:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:47:53 compute-0 ceph-mon[74985]: pgmap v3752: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:53 compute-0 podman[450705]: 2025-11-25 17:47:53.627821062 +0000 UTC m=+0.050124446 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 17:47:53 compute-0 podman[450706]: 2025-11-25 17:47:53.653779208 +0000 UTC m=+0.071798475 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 17:47:53 compute-0 podman[450704]: 2025-11-25 17:47:53.658306172 +0000 UTC m=+0.081219912 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 25 17:47:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3753: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:55 compute-0 ceph-mon[74985]: pgmap v3753: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:47:55 compute-0 nova_compute[254092]: 2025-11-25 17:47:55.394 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:47:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:47:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/165510286' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:47:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:47:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/165510286' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:47:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3754: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/165510286' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:47:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/165510286' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:47:56 compute-0 nova_compute[254092]: 2025-11-25 17:47:56.653 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:47:57 compute-0 ceph-mon[74985]: pgmap v3754: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3755: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:59 compute-0 ceph-mon[74985]: pgmap v3755: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:47:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3756: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:48:00 compute-0 nova_compute[254092]: 2025-11-25 17:48:00.431 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:48:01 compute-0 ceph-mon[74985]: pgmap v3756: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:01 compute-0 nova_compute[254092]: 2025-11-25 17:48:01.698 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:48:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3757: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:03 compute-0 ceph-mon[74985]: pgmap v3757: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3758: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:04 compute-0 ceph-mon[74985]: pgmap v3758: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:48:05 compute-0 nova_compute[254092]: 2025-11-25 17:48:05.433 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:48:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3759: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:06 compute-0 nova_compute[254092]: 2025-11-25 17:48:06.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:48:06 compute-0 nova_compute[254092]: 2025-11-25 17:48:06.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:48:06 compute-0 nova_compute[254092]: 2025-11-25 17:48:06.717 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:48:06 compute-0 ceph-mon[74985]: pgmap v3759: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3760: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:08 compute-0 nova_compute[254092]: 2025-11-25 17:48:08.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:48:08 compute-0 ceph-mon[74985]: pgmap v3760: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3761: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:48:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:48:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:48:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:48:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:48:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:48:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:48:10 compute-0 nova_compute[254092]: 2025-11-25 17:48:10.478 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:48:11 compute-0 ceph-mon[74985]: pgmap v3761: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:11 compute-0 nova_compute[254092]: 2025-11-25 17:48:11.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:48:11 compute-0 nova_compute[254092]: 2025-11-25 17:48:11.525 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:48:11 compute-0 nova_compute[254092]: 2025-11-25 17:48:11.525 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:48:11 compute-0 nova_compute[254092]: 2025-11-25 17:48:11.525 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:48:11 compute-0 nova_compute[254092]: 2025-11-25 17:48:11.526 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:48:11 compute-0 nova_compute[254092]: 2025-11-25 17:48:11.526 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:48:11 compute-0 nova_compute[254092]: 2025-11-25 17:48:11.719 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:48:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3762: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:48:11 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2970592892' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:48:11 compute-0 nova_compute[254092]: 2025-11-25 17:48:11.956 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:48:12 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2970592892' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:48:12 compute-0 nova_compute[254092]: 2025-11-25 17:48:12.107 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:48:12 compute-0 nova_compute[254092]: 2025-11-25 17:48:12.109 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3634MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:48:12 compute-0 nova_compute[254092]: 2025-11-25 17:48:12.109 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:48:12 compute-0 nova_compute[254092]: 2025-11-25 17:48:12.109 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:48:12 compute-0 nova_compute[254092]: 2025-11-25 17:48:12.179 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:48:12 compute-0 nova_compute[254092]: 2025-11-25 17:48:12.180 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:48:12 compute-0 nova_compute[254092]: 2025-11-25 17:48:12.204 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:48:12 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:48:12 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2145851242' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:48:12 compute-0 nova_compute[254092]: 2025-11-25 17:48:12.645 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:48:12 compute-0 nova_compute[254092]: 2025-11-25 17:48:12.650 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:48:12 compute-0 nova_compute[254092]: 2025-11-25 17:48:12.666 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:48:12 compute-0 nova_compute[254092]: 2025-11-25 17:48:12.668 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:48:12 compute-0 nova_compute[254092]: 2025-11-25 17:48:12.668 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:48:13 compute-0 ceph-mon[74985]: pgmap v3762: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:13 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2145851242' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:48:13 compute-0 nova_compute[254092]: 2025-11-25 17:48:13.671 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:48:13 compute-0 nova_compute[254092]: 2025-11-25 17:48:13.674 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:48:13 compute-0 nova_compute[254092]: 2025-11-25 17:48:13.674 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:48:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:48:13.692 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:48:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:48:13.692 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:48:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:48:13.693 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:48:13 compute-0 nova_compute[254092]: 2025-11-25 17:48:13.693 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 17:48:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3763: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:14 compute-0 ceph-mon[74985]: pgmap v3763: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:14 compute-0 nova_compute[254092]: 2025-11-25 17:48:14.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:48:14 compute-0 nova_compute[254092]: 2025-11-25 17:48:14.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:48:14 compute-0 sudo[450812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:48:14 compute-0 sudo[450812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:48:14 compute-0 sudo[450812]: pam_unix(sudo:session): session closed for user root
Nov 25 17:48:14 compute-0 sudo[450837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:48:14 compute-0 sudo[450837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:48:14 compute-0 sudo[450837]: pam_unix(sudo:session): session closed for user root
Nov 25 17:48:14 compute-0 sudo[450862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:48:14 compute-0 sudo[450862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:48:14 compute-0 sudo[450862]: pam_unix(sudo:session): session closed for user root
Nov 25 17:48:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:48:15 compute-0 sudo[450887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:48:15 compute-0 sudo[450887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:48:15 compute-0 sudo[450887]: pam_unix(sudo:session): session closed for user root
Nov 25 17:48:15 compute-0 nova_compute[254092]: 2025-11-25 17:48:15.478 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:48:15 compute-0 nova_compute[254092]: 2025-11-25 17:48:15.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:48:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:48:15 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:48:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:48:15 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:48:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:48:15 compute-0 nova_compute[254092]: 2025-11-25 17:48:15.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:48:15 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:48:15 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev cab5ad44-2f94-46eb-8701-da449b9fc31d does not exist
Nov 25 17:48:15 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 146c542f-cccc-4589-88d5-49e3d914613a does not exist
Nov 25 17:48:15 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev b94e63db-20cb-4aa5-90a9-70d5767d94be does not exist
Nov 25 17:48:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:48:15 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:48:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:48:15 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:48:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:48:15 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:48:15 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:48:15 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:48:15 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:48:15 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:48:15 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:48:15 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:48:15 compute-0 sudo[450943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:48:15 compute-0 sudo[450943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:48:15 compute-0 sudo[450943]: pam_unix(sudo:session): session closed for user root
Nov 25 17:48:15 compute-0 sudo[450968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:48:15 compute-0 sudo[450968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:48:15 compute-0 sudo[450968]: pam_unix(sudo:session): session closed for user root
Nov 25 17:48:15 compute-0 sudo[450993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:48:15 compute-0 sudo[450993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:48:15 compute-0 sudo[450993]: pam_unix(sudo:session): session closed for user root
Nov 25 17:48:15 compute-0 sudo[451018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:48:15 compute-0 sudo[451018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:48:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3764: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:16 compute-0 podman[451083]: 2025-11-25 17:48:16.010269505 +0000 UTC m=+0.038722455 container create 44d2cc8649d1d1c6ecf43b3d2eda2bbec1a7abd975fb223323fa6f100334a540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_elgamal, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:48:16 compute-0 systemd[1]: Started libpod-conmon-44d2cc8649d1d1c6ecf43b3d2eda2bbec1a7abd975fb223323fa6f100334a540.scope.
Nov 25 17:48:16 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:48:16 compute-0 podman[451083]: 2025-11-25 17:48:15.992837031 +0000 UTC m=+0.021290001 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:48:16 compute-0 podman[451083]: 2025-11-25 17:48:16.090461819 +0000 UTC m=+0.118914789 container init 44d2cc8649d1d1c6ecf43b3d2eda2bbec1a7abd975fb223323fa6f100334a540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 17:48:16 compute-0 podman[451083]: 2025-11-25 17:48:16.098450795 +0000 UTC m=+0.126903745 container start 44d2cc8649d1d1c6ecf43b3d2eda2bbec1a7abd975fb223323fa6f100334a540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 17:48:16 compute-0 podman[451083]: 2025-11-25 17:48:16.101683754 +0000 UTC m=+0.130136704 container attach 44d2cc8649d1d1c6ecf43b3d2eda2bbec1a7abd975fb223323fa6f100334a540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:48:16 compute-0 sweet_elgamal[451099]: 167 167
Nov 25 17:48:16 compute-0 systemd[1]: libpod-44d2cc8649d1d1c6ecf43b3d2eda2bbec1a7abd975fb223323fa6f100334a540.scope: Deactivated successfully.
Nov 25 17:48:16 compute-0 podman[451083]: 2025-11-25 17:48:16.103045221 +0000 UTC m=+0.131498171 container died 44d2cc8649d1d1c6ecf43b3d2eda2bbec1a7abd975fb223323fa6f100334a540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_elgamal, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:48:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-52f2693ab2d10e43b121ac6a83abf5546d507783652d991f55e84fda94ef3eec-merged.mount: Deactivated successfully.
Nov 25 17:48:16 compute-0 podman[451083]: 2025-11-25 17:48:16.148009825 +0000 UTC m=+0.176462775 container remove 44d2cc8649d1d1c6ecf43b3d2eda2bbec1a7abd975fb223323fa6f100334a540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_elgamal, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:48:16 compute-0 systemd[1]: libpod-conmon-44d2cc8649d1d1c6ecf43b3d2eda2bbec1a7abd975fb223323fa6f100334a540.scope: Deactivated successfully.
Nov 25 17:48:16 compute-0 podman[451123]: 2025-11-25 17:48:16.314534359 +0000 UTC m=+0.044559544 container create b58af92ded4d515def7953dff79cdf7cd60aab11591ff899b869804c6bc0daec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_euler, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:48:16 compute-0 systemd[1]: Started libpod-conmon-b58af92ded4d515def7953dff79cdf7cd60aab11591ff899b869804c6bc0daec.scope.
Nov 25 17:48:16 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:48:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31a7194021f0e88e9b9b2aeb4ef7119585c4d9d8e6ec9390ce37e554fe022e22/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:48:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31a7194021f0e88e9b9b2aeb4ef7119585c4d9d8e6ec9390ce37e554fe022e22/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:48:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31a7194021f0e88e9b9b2aeb4ef7119585c4d9d8e6ec9390ce37e554fe022e22/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:48:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31a7194021f0e88e9b9b2aeb4ef7119585c4d9d8e6ec9390ce37e554fe022e22/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:48:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31a7194021f0e88e9b9b2aeb4ef7119585c4d9d8e6ec9390ce37e554fe022e22/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:48:16 compute-0 podman[451123]: 2025-11-25 17:48:16.297831794 +0000 UTC m=+0.027856999 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:48:16 compute-0 podman[451123]: 2025-11-25 17:48:16.395111352 +0000 UTC m=+0.125136537 container init b58af92ded4d515def7953dff79cdf7cd60aab11591ff899b869804c6bc0daec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 17:48:16 compute-0 podman[451123]: 2025-11-25 17:48:16.403002777 +0000 UTC m=+0.133027962 container start b58af92ded4d515def7953dff79cdf7cd60aab11591ff899b869804c6bc0daec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_euler, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 17:48:16 compute-0 podman[451123]: 2025-11-25 17:48:16.406286596 +0000 UTC m=+0.136311781 container attach b58af92ded4d515def7953dff79cdf7cd60aab11591ff899b869804c6bc0daec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 17:48:16 compute-0 ceph-mon[74985]: pgmap v3764: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:16 compute-0 nova_compute[254092]: 2025-11-25 17:48:16.721 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:48:17 compute-0 zen_euler[451139]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:48:17 compute-0 zen_euler[451139]: --> relative data size: 1.0
Nov 25 17:48:17 compute-0 zen_euler[451139]: --> All data devices are unavailable
Nov 25 17:48:17 compute-0 systemd[1]: libpod-b58af92ded4d515def7953dff79cdf7cd60aab11591ff899b869804c6bc0daec.scope: Deactivated successfully.
Nov 25 17:48:17 compute-0 systemd[1]: libpod-b58af92ded4d515def7953dff79cdf7cd60aab11591ff899b869804c6bc0daec.scope: Consumed 1.010s CPU time.
Nov 25 17:48:17 compute-0 podman[451123]: 2025-11-25 17:48:17.463909229 +0000 UTC m=+1.193934404 container died b58af92ded4d515def7953dff79cdf7cd60aab11591ff899b869804c6bc0daec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_euler, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:48:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-31a7194021f0e88e9b9b2aeb4ef7119585c4d9d8e6ec9390ce37e554fe022e22-merged.mount: Deactivated successfully.
Nov 25 17:48:17 compute-0 podman[451123]: 2025-11-25 17:48:17.522216296 +0000 UTC m=+1.252241481 container remove b58af92ded4d515def7953dff79cdf7cd60aab11591ff899b869804c6bc0daec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_euler, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 17:48:17 compute-0 systemd[1]: libpod-conmon-b58af92ded4d515def7953dff79cdf7cd60aab11591ff899b869804c6bc0daec.scope: Deactivated successfully.
Nov 25 17:48:17 compute-0 sudo[451018]: pam_unix(sudo:session): session closed for user root
Nov 25 17:48:17 compute-0 sudo[451181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:48:17 compute-0 sudo[451181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:48:17 compute-0 sudo[451181]: pam_unix(sudo:session): session closed for user root
Nov 25 17:48:17 compute-0 sudo[451206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:48:17 compute-0 sudo[451206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:48:17 compute-0 sudo[451206]: pam_unix(sudo:session): session closed for user root
Nov 25 17:48:17 compute-0 sudo[451231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:48:17 compute-0 sudo[451231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:48:17 compute-0 sudo[451231]: pam_unix(sudo:session): session closed for user root
Nov 25 17:48:17 compute-0 sudo[451256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:48:17 compute-0 sudo[451256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:48:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3765: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:18 compute-0 podman[451319]: 2025-11-25 17:48:18.072457235 +0000 UTC m=+0.038861039 container create 95ce9186a431a847ca23c85b0ac0eb1f4bb3316b54ac8e47da2fb5f4f9a52378 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_montalcini, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:48:18 compute-0 systemd[1]: Started libpod-conmon-95ce9186a431a847ca23c85b0ac0eb1f4bb3316b54ac8e47da2fb5f4f9a52378.scope.
Nov 25 17:48:18 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:48:18 compute-0 podman[451319]: 2025-11-25 17:48:18.057304823 +0000 UTC m=+0.023708647 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:48:18 compute-0 podman[451319]: 2025-11-25 17:48:18.153565473 +0000 UTC m=+0.119969297 container init 95ce9186a431a847ca23c85b0ac0eb1f4bb3316b54ac8e47da2fb5f4f9a52378 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_montalcini, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:48:18 compute-0 podman[451319]: 2025-11-25 17:48:18.160374488 +0000 UTC m=+0.126778292 container start 95ce9186a431a847ca23c85b0ac0eb1f4bb3316b54ac8e47da2fb5f4f9a52378 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_montalcini, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:48:18 compute-0 podman[451319]: 2025-11-25 17:48:18.164211983 +0000 UTC m=+0.130615787 container attach 95ce9186a431a847ca23c85b0ac0eb1f4bb3316b54ac8e47da2fb5f4f9a52378 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 17:48:18 compute-0 intelligent_montalcini[451335]: 167 167
Nov 25 17:48:18 compute-0 systemd[1]: libpod-95ce9186a431a847ca23c85b0ac0eb1f4bb3316b54ac8e47da2fb5f4f9a52378.scope: Deactivated successfully.
Nov 25 17:48:18 compute-0 podman[451319]: 2025-11-25 17:48:18.165056496 +0000 UTC m=+0.131460300 container died 95ce9186a431a847ca23c85b0ac0eb1f4bb3316b54ac8e47da2fb5f4f9a52378 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_montalcini, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 17:48:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-1437d73c5febeca8b2a1922bb5256341cbabc0a2dbc66a06915b3e3d3e29dc70-merged.mount: Deactivated successfully.
Nov 25 17:48:18 compute-0 podman[451319]: 2025-11-25 17:48:18.199576426 +0000 UTC m=+0.165980230 container remove 95ce9186a431a847ca23c85b0ac0eb1f4bb3316b54ac8e47da2fb5f4f9a52378 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 17:48:18 compute-0 systemd[1]: libpod-conmon-95ce9186a431a847ca23c85b0ac0eb1f4bb3316b54ac8e47da2fb5f4f9a52378.scope: Deactivated successfully.
Nov 25 17:48:18 compute-0 podman[451359]: 2025-11-25 17:48:18.344170012 +0000 UTC m=+0.037599484 container create a62f4ad919d9925824a719071ad598de222a71e1ae19b517efaaeb2180555e19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_margulis, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:48:18 compute-0 systemd[1]: Started libpod-conmon-a62f4ad919d9925824a719071ad598de222a71e1ae19b517efaaeb2180555e19.scope.
Nov 25 17:48:18 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:48:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f030569d2e76b62b04a735abaa631157a3908b5c05d514368b4300abb89799a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:48:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f030569d2e76b62b04a735abaa631157a3908b5c05d514368b4300abb89799a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:48:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f030569d2e76b62b04a735abaa631157a3908b5c05d514368b4300abb89799a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:48:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f030569d2e76b62b04a735abaa631157a3908b5c05d514368b4300abb89799a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:48:18 compute-0 podman[451359]: 2025-11-25 17:48:18.328394692 +0000 UTC m=+0.021824214 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:48:18 compute-0 podman[451359]: 2025-11-25 17:48:18.437892813 +0000 UTC m=+0.131322345 container init a62f4ad919d9925824a719071ad598de222a71e1ae19b517efaaeb2180555e19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_margulis, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 17:48:18 compute-0 podman[451359]: 2025-11-25 17:48:18.450274001 +0000 UTC m=+0.143703483 container start a62f4ad919d9925824a719071ad598de222a71e1ae19b517efaaeb2180555e19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_margulis, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:48:18 compute-0 podman[451359]: 2025-11-25 17:48:18.453842258 +0000 UTC m=+0.147271730 container attach a62f4ad919d9925824a719071ad598de222a71e1ae19b517efaaeb2180555e19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_margulis, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:48:18 compute-0 ceph-mon[74985]: pgmap v3765: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:19 compute-0 competent_margulis[451375]: {
Nov 25 17:48:19 compute-0 competent_margulis[451375]:     "0": [
Nov 25 17:48:19 compute-0 competent_margulis[451375]:         {
Nov 25 17:48:19 compute-0 competent_margulis[451375]:             "devices": [
Nov 25 17:48:19 compute-0 competent_margulis[451375]:                 "/dev/loop3"
Nov 25 17:48:19 compute-0 competent_margulis[451375]:             ],
Nov 25 17:48:19 compute-0 competent_margulis[451375]:             "lv_name": "ceph_lv0",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:             "lv_size": "21470642176",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:             "name": "ceph_lv0",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:             "tags": {
Nov 25 17:48:19 compute-0 competent_margulis[451375]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:                 "ceph.cluster_name": "ceph",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:                 "ceph.crush_device_class": "",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:                 "ceph.encrypted": "0",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:                 "ceph.osd_id": "0",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:                 "ceph.type": "block",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:                 "ceph.vdo": "0"
Nov 25 17:48:19 compute-0 competent_margulis[451375]:             },
Nov 25 17:48:19 compute-0 competent_margulis[451375]:             "type": "block",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:             "vg_name": "ceph_vg0"
Nov 25 17:48:19 compute-0 competent_margulis[451375]:         }
Nov 25 17:48:19 compute-0 competent_margulis[451375]:     ],
Nov 25 17:48:19 compute-0 competent_margulis[451375]:     "1": [
Nov 25 17:48:19 compute-0 competent_margulis[451375]:         {
Nov 25 17:48:19 compute-0 competent_margulis[451375]:             "devices": [
Nov 25 17:48:19 compute-0 competent_margulis[451375]:                 "/dev/loop4"
Nov 25 17:48:19 compute-0 competent_margulis[451375]:             ],
Nov 25 17:48:19 compute-0 competent_margulis[451375]:             "lv_name": "ceph_lv1",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:             "lv_size": "21470642176",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:             "name": "ceph_lv1",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:             "tags": {
Nov 25 17:48:19 compute-0 competent_margulis[451375]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:                 "ceph.cluster_name": "ceph",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:                 "ceph.crush_device_class": "",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:                 "ceph.encrypted": "0",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:                 "ceph.osd_id": "1",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:                 "ceph.type": "block",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:                 "ceph.vdo": "0"
Nov 25 17:48:19 compute-0 competent_margulis[451375]:             },
Nov 25 17:48:19 compute-0 competent_margulis[451375]:             "type": "block",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:             "vg_name": "ceph_vg1"
Nov 25 17:48:19 compute-0 competent_margulis[451375]:         }
Nov 25 17:48:19 compute-0 competent_margulis[451375]:     ],
Nov 25 17:48:19 compute-0 competent_margulis[451375]:     "2": [
Nov 25 17:48:19 compute-0 competent_margulis[451375]:         {
Nov 25 17:48:19 compute-0 competent_margulis[451375]:             "devices": [
Nov 25 17:48:19 compute-0 competent_margulis[451375]:                 "/dev/loop5"
Nov 25 17:48:19 compute-0 competent_margulis[451375]:             ],
Nov 25 17:48:19 compute-0 competent_margulis[451375]:             "lv_name": "ceph_lv2",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:             "lv_size": "21470642176",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:             "name": "ceph_lv2",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:             "tags": {
Nov 25 17:48:19 compute-0 competent_margulis[451375]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:                 "ceph.cluster_name": "ceph",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:                 "ceph.crush_device_class": "",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:                 "ceph.encrypted": "0",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:                 "ceph.osd_id": "2",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:                 "ceph.type": "block",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:                 "ceph.vdo": "0"
Nov 25 17:48:19 compute-0 competent_margulis[451375]:             },
Nov 25 17:48:19 compute-0 competent_margulis[451375]:             "type": "block",
Nov 25 17:48:19 compute-0 competent_margulis[451375]:             "vg_name": "ceph_vg2"
Nov 25 17:48:19 compute-0 competent_margulis[451375]:         }
Nov 25 17:48:19 compute-0 competent_margulis[451375]:     ]
Nov 25 17:48:19 compute-0 competent_margulis[451375]: }
Nov 25 17:48:19 compute-0 systemd[1]: libpod-a62f4ad919d9925824a719071ad598de222a71e1ae19b517efaaeb2180555e19.scope: Deactivated successfully.
Nov 25 17:48:19 compute-0 podman[451359]: 2025-11-25 17:48:19.215782531 +0000 UTC m=+0.909212033 container died a62f4ad919d9925824a719071ad598de222a71e1ae19b517efaaeb2180555e19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_margulis, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 17:48:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f030569d2e76b62b04a735abaa631157a3908b5c05d514368b4300abb89799a-merged.mount: Deactivated successfully.
Nov 25 17:48:19 compute-0 podman[451359]: 2025-11-25 17:48:19.272551136 +0000 UTC m=+0.965980608 container remove a62f4ad919d9925824a719071ad598de222a71e1ae19b517efaaeb2180555e19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_margulis, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:48:19 compute-0 systemd[1]: libpod-conmon-a62f4ad919d9925824a719071ad598de222a71e1ae19b517efaaeb2180555e19.scope: Deactivated successfully.
Nov 25 17:48:19 compute-0 sudo[451256]: pam_unix(sudo:session): session closed for user root
Nov 25 17:48:19 compute-0 sudo[451397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:48:19 compute-0 sudo[451397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:48:19 compute-0 sudo[451397]: pam_unix(sudo:session): session closed for user root
Nov 25 17:48:19 compute-0 sudo[451422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:48:19 compute-0 sudo[451422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:48:19 compute-0 sudo[451422]: pam_unix(sudo:session): session closed for user root
Nov 25 17:48:19 compute-0 sudo[451447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:48:19 compute-0 sudo[451447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:48:19 compute-0 sudo[451447]: pam_unix(sudo:session): session closed for user root
Nov 25 17:48:19 compute-0 sudo[451472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:48:19 compute-0 sudo[451472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:48:19 compute-0 podman[451537]: 2025-11-25 17:48:19.895705621 +0000 UTC m=+0.047640328 container create dcf5552e540a028a794ff5721ad0bf862376c76fabdeea80d78fa9cd0d087165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 17:48:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3766: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:19 compute-0 systemd[1]: Started libpod-conmon-dcf5552e540a028a794ff5721ad0bf862376c76fabdeea80d78fa9cd0d087165.scope.
Nov 25 17:48:19 compute-0 podman[451537]: 2025-11-25 17:48:19.87475778 +0000 UTC m=+0.026692467 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:48:19 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:48:19 compute-0 podman[451537]: 2025-11-25 17:48:19.990036579 +0000 UTC m=+0.141971276 container init dcf5552e540a028a794ff5721ad0bf862376c76fabdeea80d78fa9cd0d087165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 17:48:19 compute-0 podman[451537]: 2025-11-25 17:48:19.998116998 +0000 UTC m=+0.150051675 container start dcf5552e540a028a794ff5721ad0bf862376c76fabdeea80d78fa9cd0d087165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_elbakyan, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:48:20 compute-0 podman[451537]: 2025-11-25 17:48:20.001853901 +0000 UTC m=+0.153788618 container attach dcf5552e540a028a794ff5721ad0bf862376c76fabdeea80d78fa9cd0d087165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 17:48:20 compute-0 systemd[1]: libpod-dcf5552e540a028a794ff5721ad0bf862376c76fabdeea80d78fa9cd0d087165.scope: Deactivated successfully.
Nov 25 17:48:20 compute-0 dazzling_elbakyan[451553]: 167 167
Nov 25 17:48:20 compute-0 conmon[451553]: conmon dcf5552e540a028a794f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dcf5552e540a028a794ff5721ad0bf862376c76fabdeea80d78fa9cd0d087165.scope/container/memory.events
Nov 25 17:48:20 compute-0 podman[451537]: 2025-11-25 17:48:20.008024808 +0000 UTC m=+0.159959485 container died dcf5552e540a028a794ff5721ad0bf862376c76fabdeea80d78fa9cd0d087165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_elbakyan, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 17:48:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-868c3ece7813b0669a93c40ba0f410a3d7bfb470534d92a4718ec568b92cb60d-merged.mount: Deactivated successfully.
Nov 25 17:48:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:48:20 compute-0 podman[451537]: 2025-11-25 17:48:20.054105743 +0000 UTC m=+0.206040410 container remove dcf5552e540a028a794ff5721ad0bf862376c76fabdeea80d78fa9cd0d087165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_elbakyan, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 17:48:20 compute-0 systemd[1]: libpod-conmon-dcf5552e540a028a794ff5721ad0bf862376c76fabdeea80d78fa9cd0d087165.scope: Deactivated successfully.
Nov 25 17:48:20 compute-0 podman[451578]: 2025-11-25 17:48:20.261038616 +0000 UTC m=+0.068018662 container create a79d61ad102df0cba0f34aa4a1829f4e7d323d30313e7a38575b9f00a06c6c15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:48:20 compute-0 systemd[1]: Started libpod-conmon-a79d61ad102df0cba0f34aa4a1829f4e7d323d30313e7a38575b9f00a06c6c15.scope.
Nov 25 17:48:20 compute-0 podman[451578]: 2025-11-25 17:48:20.233042294 +0000 UTC m=+0.040022390 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:48:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:48:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6536586f43bedabd4f71b3f365b1a718990081f35a7f1a51894e47daa784979f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:48:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6536586f43bedabd4f71b3f365b1a718990081f35a7f1a51894e47daa784979f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:48:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6536586f43bedabd4f71b3f365b1a718990081f35a7f1a51894e47daa784979f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:48:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6536586f43bedabd4f71b3f365b1a718990081f35a7f1a51894e47daa784979f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:48:20 compute-0 podman[451578]: 2025-11-25 17:48:20.356876125 +0000 UTC m=+0.163856201 container init a79d61ad102df0cba0f34aa4a1829f4e7d323d30313e7a38575b9f00a06c6c15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_volhard, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:48:20 compute-0 podman[451578]: 2025-11-25 17:48:20.370556018 +0000 UTC m=+0.177536024 container start a79d61ad102df0cba0f34aa4a1829f4e7d323d30313e7a38575b9f00a06c6c15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_volhard, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 17:48:20 compute-0 podman[451578]: 2025-11-25 17:48:20.373885759 +0000 UTC m=+0.180865865 container attach a79d61ad102df0cba0f34aa4a1829f4e7d323d30313e7a38575b9f00a06c6c15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_volhard, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:48:20 compute-0 nova_compute[254092]: 2025-11-25 17:48:20.479 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:48:20 compute-0 ceph-mon[74985]: pgmap v3766: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:21 compute-0 elated_volhard[451595]: {
Nov 25 17:48:21 compute-0 elated_volhard[451595]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:48:21 compute-0 elated_volhard[451595]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:48:21 compute-0 elated_volhard[451595]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:48:21 compute-0 elated_volhard[451595]:         "osd_id": 1,
Nov 25 17:48:21 compute-0 elated_volhard[451595]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:48:21 compute-0 elated_volhard[451595]:         "type": "bluestore"
Nov 25 17:48:21 compute-0 elated_volhard[451595]:     },
Nov 25 17:48:21 compute-0 elated_volhard[451595]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:48:21 compute-0 elated_volhard[451595]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:48:21 compute-0 elated_volhard[451595]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:48:21 compute-0 elated_volhard[451595]:         "osd_id": 2,
Nov 25 17:48:21 compute-0 elated_volhard[451595]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:48:21 compute-0 elated_volhard[451595]:         "type": "bluestore"
Nov 25 17:48:21 compute-0 elated_volhard[451595]:     },
Nov 25 17:48:21 compute-0 elated_volhard[451595]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:48:21 compute-0 elated_volhard[451595]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:48:21 compute-0 elated_volhard[451595]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:48:21 compute-0 elated_volhard[451595]:         "osd_id": 0,
Nov 25 17:48:21 compute-0 elated_volhard[451595]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:48:21 compute-0 elated_volhard[451595]:         "type": "bluestore"
Nov 25 17:48:21 compute-0 elated_volhard[451595]:     }
Nov 25 17:48:21 compute-0 elated_volhard[451595]: }
Nov 25 17:48:21 compute-0 systemd[1]: libpod-a79d61ad102df0cba0f34aa4a1829f4e7d323d30313e7a38575b9f00a06c6c15.scope: Deactivated successfully.
Nov 25 17:48:21 compute-0 podman[451578]: 2025-11-25 17:48:21.330287555 +0000 UTC m=+1.137267601 container died a79d61ad102df0cba0f34aa4a1829f4e7d323d30313e7a38575b9f00a06c6c15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_volhard, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 17:48:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-6536586f43bedabd4f71b3f365b1a718990081f35a7f1a51894e47daa784979f-merged.mount: Deactivated successfully.
Nov 25 17:48:21 compute-0 podman[451578]: 2025-11-25 17:48:21.380190854 +0000 UTC m=+1.187170870 container remove a79d61ad102df0cba0f34aa4a1829f4e7d323d30313e7a38575b9f00a06c6c15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_volhard, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 17:48:21 compute-0 systemd[1]: libpod-conmon-a79d61ad102df0cba0f34aa4a1829f4e7d323d30313e7a38575b9f00a06c6c15.scope: Deactivated successfully.
Nov 25 17:48:21 compute-0 sudo[451472]: pam_unix(sudo:session): session closed for user root
Nov 25 17:48:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:48:21 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:48:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:48:21 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:48:21 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 8a834df9-42b7-4951-b0aa-4838ce7c4ba1 does not exist
Nov 25 17:48:21 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 5f9716f7-2ac0-4114-bf7a-ae88ab43ed04 does not exist
Nov 25 17:48:21 compute-0 sudo[451642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:48:21 compute-0 sudo[451642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:48:21 compute-0 sudo[451642]: pam_unix(sudo:session): session closed for user root
Nov 25 17:48:21 compute-0 sudo[451667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:48:21 compute-0 sudo[451667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:48:21 compute-0 sudo[451667]: pam_unix(sudo:session): session closed for user root
Nov 25 17:48:21 compute-0 nova_compute[254092]: 2025-11-25 17:48:21.724 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:48:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3767: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:22 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:48:22 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:48:22 compute-0 ceph-mon[74985]: pgmap v3767: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3768: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:24 compute-0 podman[451693]: 2025-11-25 17:48:24.660702661 +0000 UTC m=+0.073906513 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:48:24 compute-0 podman[451692]: 2025-11-25 17:48:24.697688048 +0000 UTC m=+0.109136822 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd)
Nov 25 17:48:24 compute-0 podman[451694]: 2025-11-25 17:48:24.717469336 +0000 UTC m=+0.130675218 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 17:48:24 compute-0 ceph-mon[74985]: pgmap v3768: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:48:25 compute-0 nova_compute[254092]: 2025-11-25 17:48:25.480 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:48:25 compute-0 nova_compute[254092]: 2025-11-25 17:48:25.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:48:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3769: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:26 compute-0 nova_compute[254092]: 2025-11-25 17:48:26.726 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:48:26 compute-0 ceph-mon[74985]: pgmap v3769: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3770: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:28 compute-0 ceph-mon[74985]: pgmap v3770: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:29 compute-0 nova_compute[254092]: 2025-11-25 17:48:29.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:48:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3771: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:48:30 compute-0 nova_compute[254092]: 2025-11-25 17:48:30.486 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:48:31 compute-0 ceph-mon[74985]: pgmap v3771: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:31 compute-0 nova_compute[254092]: 2025-11-25 17:48:31.729 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:48:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3772: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:32 compute-0 ceph-mon[74985]: pgmap v3772: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3773: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:34 compute-0 ceph-mon[74985]: pgmap v3773: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:48:35 compute-0 nova_compute[254092]: 2025-11-25 17:48:35.486 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:48:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3774: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:36 compute-0 nova_compute[254092]: 2025-11-25 17:48:36.732 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:48:36 compute-0 ceph-mon[74985]: pgmap v3774: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3775: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:38 compute-0 ceph-mon[74985]: pgmap v3775: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3776: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:48:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:48:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:48:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:48:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:48:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:48:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:48:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:48:40
Nov 25 17:48:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:48:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:48:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', 'volumes', '.rgw.root', '.mgr', 'images']
Nov 25 17:48:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:48:40 compute-0 nova_compute[254092]: 2025-11-25 17:48:40.489 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:48:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:48:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:48:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:48:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:48:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:48:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:48:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:48:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:48:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:48:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:48:40 compute-0 ceph-mon[74985]: pgmap v3776: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:41 compute-0 nova_compute[254092]: 2025-11-25 17:48:41.734 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:48:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3777: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:42 compute-0 ceph-mon[74985]: pgmap v3777: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3778: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:44 compute-0 ceph-mon[74985]: pgmap v3778: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:48:45 compute-0 nova_compute[254092]: 2025-11-25 17:48:45.491 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:48:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3779: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:46 compute-0 nova_compute[254092]: 2025-11-25 17:48:46.735 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:48:47 compute-0 ceph-mon[74985]: pgmap v3779: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3780: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:49 compute-0 ceph-mon[74985]: pgmap v3780: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3781: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:48:50 compute-0 nova_compute[254092]: 2025-11-25 17:48:50.492 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:48:51 compute-0 ceph-mon[74985]: pgmap v3781: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:51 compute-0 nova_compute[254092]: 2025-11-25 17:48:51.737 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:48:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3782: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:48:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:48:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:48:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:48:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 17:48:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:48:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:48:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:48:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:48:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:48:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 17:48:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:48:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:48:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:48:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:48:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:48:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:48:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:48:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:48:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:48:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:48:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:48:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:48:53 compute-0 ceph-mon[74985]: pgmap v3782: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3783: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:55 compute-0 ceph-mon[74985]: pgmap v3783: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:48:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:48:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1860690747' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:48:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:48:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1860690747' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:48:55 compute-0 nova_compute[254092]: 2025-11-25 17:48:55.494 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:48:55 compute-0 podman[451752]: 2025-11-25 17:48:55.636882275 +0000 UTC m=+0.049546399 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 17:48:55 compute-0 podman[451751]: 2025-11-25 17:48:55.643430784 +0000 UTC m=+0.057592259 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3)
Nov 25 17:48:55 compute-0 podman[451753]: 2025-11-25 17:48:55.682552079 +0000 UTC m=+0.091592044 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:48:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3784: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/1860690747' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:48:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/1860690747' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:48:56 compute-0 nova_compute[254092]: 2025-11-25 17:48:56.740 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:48:57 compute-0 ceph-mon[74985]: pgmap v3784: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3785: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:58 compute-0 ceph-mon[74985]: pgmap v3785: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:48:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3786: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:49:00 compute-0 nova_compute[254092]: 2025-11-25 17:49:00.539 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:49:00 compute-0 ceph-mon[74985]: pgmap v3786: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:01 compute-0 nova_compute[254092]: 2025-11-25 17:49:01.743 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:49:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3787: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:02 compute-0 ceph-mon[74985]: pgmap v3787: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3788: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:04 compute-0 ceph-mon[74985]: pgmap v3788: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:49:05 compute-0 nova_compute[254092]: 2025-11-25 17:49:05.540 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:49:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3789: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:06 compute-0 nova_compute[254092]: 2025-11-25 17:49:06.764 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:49:06 compute-0 ceph-mon[74985]: pgmap v3789: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:07 compute-0 nova_compute[254092]: 2025-11-25 17:49:07.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:49:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3790: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:08 compute-0 nova_compute[254092]: 2025-11-25 17:49:08.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:49:09 compute-0 ceph-mon[74985]: pgmap v3790: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:09 compute-0 nova_compute[254092]: 2025-11-25 17:49:09.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:49:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3791: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:49:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:49:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:49:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:49:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:49:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:49:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:49:10 compute-0 nova_compute[254092]: 2025-11-25 17:49:10.542 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:49:11 compute-0 ceph-mon[74985]: pgmap v3791: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:11 compute-0 nova_compute[254092]: 2025-11-25 17:49:11.766 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:49:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3792: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:12 compute-0 nova_compute[254092]: 2025-11-25 17:49:12.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:49:12 compute-0 nova_compute[254092]: 2025-11-25 17:49:12.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:49:12 compute-0 nova_compute[254092]: 2025-11-25 17:49:12.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:49:12 compute-0 nova_compute[254092]: 2025-11-25 17:49:12.512 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 17:49:13 compute-0 ceph-mon[74985]: pgmap v3792: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:13 compute-0 nova_compute[254092]: 2025-11-25 17:49:13.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:49:13 compute-0 nova_compute[254092]: 2025-11-25 17:49:13.536 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:49:13 compute-0 nova_compute[254092]: 2025-11-25 17:49:13.537 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:49:13 compute-0 nova_compute[254092]: 2025-11-25 17:49:13.538 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:49:13 compute-0 nova_compute[254092]: 2025-11-25 17:49:13.538 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:49:13 compute-0 nova_compute[254092]: 2025-11-25 17:49:13.539 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:49:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:49:13.693 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:49:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:49:13.694 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:49:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:49:13.695 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:49:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3793: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:49:14 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1120976132' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:49:14 compute-0 nova_compute[254092]: 2025-11-25 17:49:14.065 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:49:14 compute-0 nova_compute[254092]: 2025-11-25 17:49:14.337 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:49:14 compute-0 nova_compute[254092]: 2025-11-25 17:49:14.339 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3625MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:49:14 compute-0 nova_compute[254092]: 2025-11-25 17:49:14.340 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:49:14 compute-0 nova_compute[254092]: 2025-11-25 17:49:14.340 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:49:14 compute-0 nova_compute[254092]: 2025-11-25 17:49:14.425 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:49:14 compute-0 nova_compute[254092]: 2025-11-25 17:49:14.426 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:49:14 compute-0 nova_compute[254092]: 2025-11-25 17:49:14.457 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:49:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:49:14 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3053958096' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:49:14 compute-0 nova_compute[254092]: 2025-11-25 17:49:14.937 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:49:14 compute-0 nova_compute[254092]: 2025-11-25 17:49:14.945 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:49:14 compute-0 nova_compute[254092]: 2025-11-25 17:49:14.960 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:49:14 compute-0 nova_compute[254092]: 2025-11-25 17:49:14.963 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:49:14 compute-0 nova_compute[254092]: 2025-11-25 17:49:14.964 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:49:15 compute-0 ceph-mon[74985]: pgmap v3793: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:15 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1120976132' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:49:15 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3053958096' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:49:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:49:15 compute-0 nova_compute[254092]: 2025-11-25 17:49:15.544 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:49:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3794: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:16 compute-0 nova_compute[254092]: 2025-11-25 17:49:16.820 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:49:16 compute-0 nova_compute[254092]: 2025-11-25 17:49:16.964 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:49:16 compute-0 nova_compute[254092]: 2025-11-25 17:49:16.965 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:49:16 compute-0 nova_compute[254092]: 2025-11-25 17:49:16.965 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:49:17 compute-0 ceph-mon[74985]: pgmap v3794: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:17 compute-0 nova_compute[254092]: 2025-11-25 17:49:17.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:49:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3795: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:19 compute-0 ceph-mon[74985]: pgmap v3795: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3796: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:49:20 compute-0 ceph-mon[74985]: pgmap v3796: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:20 compute-0 nova_compute[254092]: 2025-11-25 17:49:20.544 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:49:21 compute-0 sudo[451857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:49:21 compute-0 sudo[451857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:49:21 compute-0 sudo[451857]: pam_unix(sudo:session): session closed for user root
Nov 25 17:49:21 compute-0 sudo[451882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:49:21 compute-0 sudo[451882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:49:21 compute-0 sudo[451882]: pam_unix(sudo:session): session closed for user root
Nov 25 17:49:21 compute-0 sudo[451907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:49:21 compute-0 sudo[451907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:49:21 compute-0 sudo[451907]: pam_unix(sudo:session): session closed for user root
Nov 25 17:49:21 compute-0 nova_compute[254092]: 2025-11-25 17:49:21.822 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:49:21 compute-0 sudo[451932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:49:21 compute-0 sudo[451932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:49:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3797: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:22 compute-0 sudo[451932]: pam_unix(sudo:session): session closed for user root
Nov 25 17:49:22 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 17:49:22 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 17:49:22 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:49:22 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:49:22 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:49:22 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:49:22 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:49:22 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:49:22 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 32bf1ed7-d3f9-4d27-a410-812de383399d does not exist
Nov 25 17:49:22 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 178a01c1-b04d-4ded-ad03-eca2fedbfd6e does not exist
Nov 25 17:49:22 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev e1aafb24-ebad-447b-9d0a-c8befa3616a1 does not exist
Nov 25 17:49:22 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:49:22 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:49:22 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:49:22 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:49:22 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:49:22 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:49:22 compute-0 sudo[451988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:49:22 compute-0 sudo[451988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:49:22 compute-0 sudo[451988]: pam_unix(sudo:session): session closed for user root
Nov 25 17:49:22 compute-0 sudo[452013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:49:22 compute-0 sudo[452013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:49:22 compute-0 sudo[452013]: pam_unix(sudo:session): session closed for user root
Nov 25 17:49:22 compute-0 sudo[452038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:49:22 compute-0 sudo[452038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:49:22 compute-0 sudo[452038]: pam_unix(sudo:session): session closed for user root
Nov 25 17:49:22 compute-0 sudo[452063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:49:22 compute-0 sudo[452063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:49:22 compute-0 ceph-mon[74985]: pgmap v3797: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:22 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 17:49:22 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:49:22 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:49:22 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:49:22 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:49:22 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:49:22 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:49:23 compute-0 podman[452130]: 2025-11-25 17:49:23.3814207 +0000 UTC m=+0.075748323 container create 46ae5d335ce15be8c74f870735a6009771842abd3331dc3a1937edd609f68389 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 17:49:23 compute-0 systemd[1]: Started libpod-conmon-46ae5d335ce15be8c74f870735a6009771842abd3331dc3a1937edd609f68389.scope.
Nov 25 17:49:23 compute-0 podman[452130]: 2025-11-25 17:49:23.351708462 +0000 UTC m=+0.046036135 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:49:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:49:23 compute-0 nova_compute[254092]: 2025-11-25 17:49:23.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:49:23 compute-0 nova_compute[254092]: 2025-11-25 17:49:23.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 17:49:23 compute-0 podman[452130]: 2025-11-25 17:49:23.502890707 +0000 UTC m=+0.197218330 container init 46ae5d335ce15be8c74f870735a6009771842abd3331dc3a1937edd609f68389 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_joliot, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:49:23 compute-0 podman[452130]: 2025-11-25 17:49:23.516758684 +0000 UTC m=+0.211086267 container start 46ae5d335ce15be8c74f870735a6009771842abd3331dc3a1937edd609f68389 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_joliot, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 17:49:23 compute-0 podman[452130]: 2025-11-25 17:49:23.521491473 +0000 UTC m=+0.215819146 container attach 46ae5d335ce15be8c74f870735a6009771842abd3331dc3a1937edd609f68389 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_joliot, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 17:49:23 compute-0 brave_joliot[452146]: 167 167
Nov 25 17:49:23 compute-0 systemd[1]: libpod-46ae5d335ce15be8c74f870735a6009771842abd3331dc3a1937edd609f68389.scope: Deactivated successfully.
Nov 25 17:49:23 compute-0 conmon[452146]: conmon 46ae5d335ce15be8c74f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-46ae5d335ce15be8c74f870735a6009771842abd3331dc3a1937edd609f68389.scope/container/memory.events
Nov 25 17:49:23 compute-0 podman[452130]: 2025-11-25 17:49:23.52830896 +0000 UTC m=+0.222636583 container died 46ae5d335ce15be8c74f870735a6009771842abd3331dc3a1937edd609f68389 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_joliot, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:49:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ee1f939f567cd04c1fc160f10be22753be2f0fb1e13a39031cff1584e650dd1-merged.mount: Deactivated successfully.
Nov 25 17:49:23 compute-0 podman[452130]: 2025-11-25 17:49:23.589064963 +0000 UTC m=+0.283392586 container remove 46ae5d335ce15be8c74f870735a6009771842abd3331dc3a1937edd609f68389 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:49:23 compute-0 systemd[1]: libpod-conmon-46ae5d335ce15be8c74f870735a6009771842abd3331dc3a1937edd609f68389.scope: Deactivated successfully.
Nov 25 17:49:23 compute-0 podman[452170]: 2025-11-25 17:49:23.801854836 +0000 UTC m=+0.070146161 container create 21e41765bb31ed8dd5b4762eec413f62b19805173db5ba8a0ad7ed3dbb43bb84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 17:49:23 compute-0 systemd[1]: Started libpod-conmon-21e41765bb31ed8dd5b4762eec413f62b19805173db5ba8a0ad7ed3dbb43bb84.scope.
Nov 25 17:49:23 compute-0 podman[452170]: 2025-11-25 17:49:23.781265656 +0000 UTC m=+0.049557011 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:49:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:49:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4164e92019bb76dde919e8ac0360e758f371a22e6b0e3155065d4258372cb9a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:49:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4164e92019bb76dde919e8ac0360e758f371a22e6b0e3155065d4258372cb9a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:49:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4164e92019bb76dde919e8ac0360e758f371a22e6b0e3155065d4258372cb9a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:49:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4164e92019bb76dde919e8ac0360e758f371a22e6b0e3155065d4258372cb9a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:49:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4164e92019bb76dde919e8ac0360e758f371a22e6b0e3155065d4258372cb9a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:49:23 compute-0 podman[452170]: 2025-11-25 17:49:23.918298366 +0000 UTC m=+0.186589731 container init 21e41765bb31ed8dd5b4762eec413f62b19805173db5ba8a0ad7ed3dbb43bb84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_einstein, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 17:49:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3798: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:23 compute-0 podman[452170]: 2025-11-25 17:49:23.931757813 +0000 UTC m=+0.200049178 container start 21e41765bb31ed8dd5b4762eec413f62b19805173db5ba8a0ad7ed3dbb43bb84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_einstein, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 17:49:23 compute-0 podman[452170]: 2025-11-25 17:49:23.936601454 +0000 UTC m=+0.204892809 container attach 21e41765bb31ed8dd5b4762eec413f62b19805173db5ba8a0ad7ed3dbb43bb84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:49:24 compute-0 kind_einstein[452187]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:49:24 compute-0 kind_einstein[452187]: --> relative data size: 1.0
Nov 25 17:49:24 compute-0 kind_einstein[452187]: --> All data devices are unavailable
Nov 25 17:49:24 compute-0 ceph-mon[74985]: pgmap v3798: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:25 compute-0 systemd[1]: libpod-21e41765bb31ed8dd5b4762eec413f62b19805173db5ba8a0ad7ed3dbb43bb84.scope: Deactivated successfully.
Nov 25 17:49:25 compute-0 systemd[1]: libpod-21e41765bb31ed8dd5b4762eec413f62b19805173db5ba8a0ad7ed3dbb43bb84.scope: Consumed 1.022s CPU time.
Nov 25 17:49:25 compute-0 podman[452170]: 2025-11-25 17:49:25.004521897 +0000 UTC m=+1.272813242 container died 21e41765bb31ed8dd5b4762eec413f62b19805173db5ba8a0ad7ed3dbb43bb84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_einstein, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 17:49:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4164e92019bb76dde919e8ac0360e758f371a22e6b0e3155065d4258372cb9a-merged.mount: Deactivated successfully.
Nov 25 17:49:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:49:25 compute-0 podman[452170]: 2025-11-25 17:49:25.081924494 +0000 UTC m=+1.350215859 container remove 21e41765bb31ed8dd5b4762eec413f62b19805173db5ba8a0ad7ed3dbb43bb84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:49:25 compute-0 systemd[1]: libpod-conmon-21e41765bb31ed8dd5b4762eec413f62b19805173db5ba8a0ad7ed3dbb43bb84.scope: Deactivated successfully.
Nov 25 17:49:25 compute-0 sudo[452063]: pam_unix(sudo:session): session closed for user root
Nov 25 17:49:25 compute-0 sudo[452227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:49:25 compute-0 sudo[452227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:49:25 compute-0 sudo[452227]: pam_unix(sudo:session): session closed for user root
Nov 25 17:49:25 compute-0 sudo[452252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:49:25 compute-0 sudo[452252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:49:25 compute-0 sudo[452252]: pam_unix(sudo:session): session closed for user root
Nov 25 17:49:25 compute-0 sudo[452277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:49:25 compute-0 sudo[452277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:49:25 compute-0 sudo[452277]: pam_unix(sudo:session): session closed for user root
Nov 25 17:49:25 compute-0 sudo[452302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:49:25 compute-0 sudo[452302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:49:25 compute-0 nova_compute[254092]: 2025-11-25 17:49:25.546 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:49:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3799: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:25 compute-0 podman[452367]: 2025-11-25 17:49:25.960143923 +0000 UTC m=+0.051051162 container create eb4af4c6d6ad28132a46b3ce6e80cb14c3516fda46186ee97c235e3f01c2f122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_ishizaka, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 17:49:26 compute-0 systemd[1]: Started libpod-conmon-eb4af4c6d6ad28132a46b3ce6e80cb14c3516fda46186ee97c235e3f01c2f122.scope.
Nov 25 17:49:26 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:49:26 compute-0 podman[452367]: 2025-11-25 17:49:25.93836809 +0000 UTC m=+0.029275379 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:49:26 compute-0 podman[452367]: 2025-11-25 17:49:26.049870065 +0000 UTC m=+0.140777394 container init eb4af4c6d6ad28132a46b3ce6e80cb14c3516fda46186ee97c235e3f01c2f122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:49:26 compute-0 podman[452367]: 2025-11-25 17:49:26.061606645 +0000 UTC m=+0.152513924 container start eb4af4c6d6ad28132a46b3ce6e80cb14c3516fda46186ee97c235e3f01c2f122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:49:26 compute-0 podman[452367]: 2025-11-25 17:49:26.06587505 +0000 UTC m=+0.156782319 container attach eb4af4c6d6ad28132a46b3ce6e80cb14c3516fda46186ee97c235e3f01c2f122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_ishizaka, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:49:26 compute-0 lucid_ishizaka[452387]: 167 167
Nov 25 17:49:26 compute-0 systemd[1]: libpod-eb4af4c6d6ad28132a46b3ce6e80cb14c3516fda46186ee97c235e3f01c2f122.scope: Deactivated successfully.
Nov 25 17:49:26 compute-0 podman[452367]: 2025-11-25 17:49:26.071894965 +0000 UTC m=+0.162802234 container died eb4af4c6d6ad28132a46b3ce6e80cb14c3516fda46186ee97c235e3f01c2f122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 17:49:26 compute-0 podman[452385]: 2025-11-25 17:49:26.085084983 +0000 UTC m=+0.077286244 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 17:49:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d42334ea497758cec7857125ab6ec112e93960aaad8a55955d0a6cdd7ed01db-merged.mount: Deactivated successfully.
Nov 25 17:49:26 compute-0 podman[452367]: 2025-11-25 17:49:26.122479712 +0000 UTC m=+0.213386981 container remove eb4af4c6d6ad28132a46b3ce6e80cb14c3516fda46186ee97c235e3f01c2f122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_ishizaka, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:49:26 compute-0 podman[452381]: 2025-11-25 17:49:26.12721561 +0000 UTC m=+0.113348436 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=multipathd, org.label-schema.vendor=CentOS)
Nov 25 17:49:26 compute-0 systemd[1]: libpod-conmon-eb4af4c6d6ad28132a46b3ce6e80cb14c3516fda46186ee97c235e3f01c2f122.scope: Deactivated successfully.
Nov 25 17:49:26 compute-0 podman[452386]: 2025-11-25 17:49:26.141494749 +0000 UTC m=+0.131718767 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 25 17:49:26 compute-0 podman[452472]: 2025-11-25 17:49:26.343110378 +0000 UTC m=+0.052276154 container create 269c8c34c5d76025bd5a5a97957168d775ee7fd102699d5a41333b97f51732e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_borg, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:49:26 compute-0 systemd[1]: Started libpod-conmon-269c8c34c5d76025bd5a5a97957168d775ee7fd102699d5a41333b97f51732e1.scope.
Nov 25 17:49:26 compute-0 podman[452472]: 2025-11-25 17:49:26.321333536 +0000 UTC m=+0.030499362 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:49:26 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:49:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2616ab5fa76ec72be050fa5376cec464718586404ed41852b57dec7b072dea93/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:49:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2616ab5fa76ec72be050fa5376cec464718586404ed41852b57dec7b072dea93/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:49:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2616ab5fa76ec72be050fa5376cec464718586404ed41852b57dec7b072dea93/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:49:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2616ab5fa76ec72be050fa5376cec464718586404ed41852b57dec7b072dea93/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:49:26 compute-0 podman[452472]: 2025-11-25 17:49:26.450013889 +0000 UTC m=+0.159179695 container init 269c8c34c5d76025bd5a5a97957168d775ee7fd102699d5a41333b97f51732e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 17:49:26 compute-0 podman[452472]: 2025-11-25 17:49:26.460385551 +0000 UTC m=+0.169551337 container start 269c8c34c5d76025bd5a5a97957168d775ee7fd102699d5a41333b97f51732e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_borg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 17:49:26 compute-0 podman[452472]: 2025-11-25 17:49:26.463619418 +0000 UTC m=+0.172785244 container attach 269c8c34c5d76025bd5a5a97957168d775ee7fd102699d5a41333b97f51732e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 17:49:26 compute-0 nova_compute[254092]: 2025-11-25 17:49:26.507 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:49:26 compute-0 nova_compute[254092]: 2025-11-25 17:49:26.824 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:49:27 compute-0 ceph-mon[74985]: pgmap v3799: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:27 compute-0 agitated_borg[452489]: {
Nov 25 17:49:27 compute-0 agitated_borg[452489]:     "0": [
Nov 25 17:49:27 compute-0 agitated_borg[452489]:         {
Nov 25 17:49:27 compute-0 agitated_borg[452489]:             "devices": [
Nov 25 17:49:27 compute-0 agitated_borg[452489]:                 "/dev/loop3"
Nov 25 17:49:27 compute-0 agitated_borg[452489]:             ],
Nov 25 17:49:27 compute-0 agitated_borg[452489]:             "lv_name": "ceph_lv0",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:             "lv_size": "21470642176",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:             "name": "ceph_lv0",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:             "tags": {
Nov 25 17:49:27 compute-0 agitated_borg[452489]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:                 "ceph.cluster_name": "ceph",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:                 "ceph.crush_device_class": "",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:                 "ceph.encrypted": "0",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:                 "ceph.osd_id": "0",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:                 "ceph.type": "block",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:                 "ceph.vdo": "0"
Nov 25 17:49:27 compute-0 agitated_borg[452489]:             },
Nov 25 17:49:27 compute-0 agitated_borg[452489]:             "type": "block",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:             "vg_name": "ceph_vg0"
Nov 25 17:49:27 compute-0 agitated_borg[452489]:         }
Nov 25 17:49:27 compute-0 agitated_borg[452489]:     ],
Nov 25 17:49:27 compute-0 agitated_borg[452489]:     "1": [
Nov 25 17:49:27 compute-0 agitated_borg[452489]:         {
Nov 25 17:49:27 compute-0 agitated_borg[452489]:             "devices": [
Nov 25 17:49:27 compute-0 agitated_borg[452489]:                 "/dev/loop4"
Nov 25 17:49:27 compute-0 agitated_borg[452489]:             ],
Nov 25 17:49:27 compute-0 agitated_borg[452489]:             "lv_name": "ceph_lv1",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:             "lv_size": "21470642176",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:             "name": "ceph_lv1",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:             "tags": {
Nov 25 17:49:27 compute-0 agitated_borg[452489]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:                 "ceph.cluster_name": "ceph",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:                 "ceph.crush_device_class": "",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:                 "ceph.encrypted": "0",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:                 "ceph.osd_id": "1",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:                 "ceph.type": "block",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:                 "ceph.vdo": "0"
Nov 25 17:49:27 compute-0 agitated_borg[452489]:             },
Nov 25 17:49:27 compute-0 agitated_borg[452489]:             "type": "block",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:             "vg_name": "ceph_vg1"
Nov 25 17:49:27 compute-0 agitated_borg[452489]:         }
Nov 25 17:49:27 compute-0 agitated_borg[452489]:     ],
Nov 25 17:49:27 compute-0 agitated_borg[452489]:     "2": [
Nov 25 17:49:27 compute-0 agitated_borg[452489]:         {
Nov 25 17:49:27 compute-0 agitated_borg[452489]:             "devices": [
Nov 25 17:49:27 compute-0 agitated_borg[452489]:                 "/dev/loop5"
Nov 25 17:49:27 compute-0 agitated_borg[452489]:             ],
Nov 25 17:49:27 compute-0 agitated_borg[452489]:             "lv_name": "ceph_lv2",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:             "lv_size": "21470642176",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:             "name": "ceph_lv2",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:             "tags": {
Nov 25 17:49:27 compute-0 agitated_borg[452489]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:                 "ceph.cluster_name": "ceph",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:                 "ceph.crush_device_class": "",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:                 "ceph.encrypted": "0",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:                 "ceph.osd_id": "2",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:                 "ceph.type": "block",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:                 "ceph.vdo": "0"
Nov 25 17:49:27 compute-0 agitated_borg[452489]:             },
Nov 25 17:49:27 compute-0 agitated_borg[452489]:             "type": "block",
Nov 25 17:49:27 compute-0 agitated_borg[452489]:             "vg_name": "ceph_vg2"
Nov 25 17:49:27 compute-0 agitated_borg[452489]:         }
Nov 25 17:49:27 compute-0 agitated_borg[452489]:     ]
Nov 25 17:49:27 compute-0 agitated_borg[452489]: }
Nov 25 17:49:27 compute-0 systemd[1]: libpod-269c8c34c5d76025bd5a5a97957168d775ee7fd102699d5a41333b97f51732e1.scope: Deactivated successfully.
Nov 25 17:49:27 compute-0 podman[452472]: 2025-11-25 17:49:27.371889846 +0000 UTC m=+1.081055632 container died 269c8c34c5d76025bd5a5a97957168d775ee7fd102699d5a41333b97f51732e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_borg, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:49:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-2616ab5fa76ec72be050fa5376cec464718586404ed41852b57dec7b072dea93-merged.mount: Deactivated successfully.
Nov 25 17:49:27 compute-0 podman[452472]: 2025-11-25 17:49:27.435111596 +0000 UTC m=+1.144277372 container remove 269c8c34c5d76025bd5a5a97957168d775ee7fd102699d5a41333b97f51732e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_borg, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:49:27 compute-0 systemd[1]: libpod-conmon-269c8c34c5d76025bd5a5a97957168d775ee7fd102699d5a41333b97f51732e1.scope: Deactivated successfully.
Nov 25 17:49:27 compute-0 sudo[452302]: pam_unix(sudo:session): session closed for user root
Nov 25 17:49:27 compute-0 sudo[452509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:49:27 compute-0 sudo[452509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:49:27 compute-0 sudo[452509]: pam_unix(sudo:session): session closed for user root
Nov 25 17:49:27 compute-0 sudo[452534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:49:27 compute-0 sudo[452534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:49:27 compute-0 sudo[452534]: pam_unix(sudo:session): session closed for user root
Nov 25 17:49:27 compute-0 sudo[452559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:49:27 compute-0 sudo[452559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:49:27 compute-0 sudo[452559]: pam_unix(sudo:session): session closed for user root
Nov 25 17:49:27 compute-0 sudo[452584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:49:27 compute-0 sudo[452584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:49:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3800: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:28 compute-0 podman[452650]: 2025-11-25 17:49:28.150029129 +0000 UTC m=+0.049683323 container create 1d09af9279e813a0c20d42abd6c7b77da6139ebec6a9425ac34583ff0f149b5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_aryabhata, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 17:49:28 compute-0 systemd[1]: Started libpod-conmon-1d09af9279e813a0c20d42abd6c7b77da6139ebec6a9425ac34583ff0f149b5a.scope.
Nov 25 17:49:28 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:49:28 compute-0 podman[452650]: 2025-11-25 17:49:28.134224969 +0000 UTC m=+0.033879193 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:49:28 compute-0 podman[452650]: 2025-11-25 17:49:28.243199696 +0000 UTC m=+0.142853920 container init 1d09af9279e813a0c20d42abd6c7b77da6139ebec6a9425ac34583ff0f149b5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 17:49:28 compute-0 podman[452650]: 2025-11-25 17:49:28.251863142 +0000 UTC m=+0.151517336 container start 1d09af9279e813a0c20d42abd6c7b77da6139ebec6a9425ac34583ff0f149b5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_aryabhata, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:49:28 compute-0 podman[452650]: 2025-11-25 17:49:28.255789548 +0000 UTC m=+0.155443782 container attach 1d09af9279e813a0c20d42abd6c7b77da6139ebec6a9425ac34583ff0f149b5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:49:28 compute-0 jovial_aryabhata[452666]: 167 167
Nov 25 17:49:28 compute-0 systemd[1]: libpod-1d09af9279e813a0c20d42abd6c7b77da6139ebec6a9425ac34583ff0f149b5a.scope: Deactivated successfully.
Nov 25 17:49:28 compute-0 podman[452650]: 2025-11-25 17:49:28.260727343 +0000 UTC m=+0.160381537 container died 1d09af9279e813a0c20d42abd6c7b77da6139ebec6a9425ac34583ff0f149b5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_aryabhata, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 17:49:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-194c8f934ce7b239597908d30e79017011c87fa01f4c3d77ad9be632ec504f0a-merged.mount: Deactivated successfully.
Nov 25 17:49:28 compute-0 podman[452650]: 2025-11-25 17:49:28.298194632 +0000 UTC m=+0.197848866 container remove 1d09af9279e813a0c20d42abd6c7b77da6139ebec6a9425ac34583ff0f149b5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_aryabhata, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:49:28 compute-0 systemd[1]: libpod-conmon-1d09af9279e813a0c20d42abd6c7b77da6139ebec6a9425ac34583ff0f149b5a.scope: Deactivated successfully.
Nov 25 17:49:28 compute-0 podman[452689]: 2025-11-25 17:49:28.484231717 +0000 UTC m=+0.057716692 container create 05e7e6f5fc41fd71def593b8142dd0c57f493515c5b6c80d242ba24a9cf36e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 17:49:28 compute-0 systemd[1]: Started libpod-conmon-05e7e6f5fc41fd71def593b8142dd0c57f493515c5b6c80d242ba24a9cf36e41.scope.
Nov 25 17:49:28 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:49:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3b9ec3408a978d545328ce22d1c1b10af8a1ddae8fdcdcc113458096c61c912/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:49:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3b9ec3408a978d545328ce22d1c1b10af8a1ddae8fdcdcc113458096c61c912/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:49:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3b9ec3408a978d545328ce22d1c1b10af8a1ddae8fdcdcc113458096c61c912/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:49:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3b9ec3408a978d545328ce22d1c1b10af8a1ddae8fdcdcc113458096c61c912/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:49:28 compute-0 podman[452689]: 2025-11-25 17:49:28.463095832 +0000 UTC m=+0.036580857 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:49:28 compute-0 podman[452689]: 2025-11-25 17:49:28.563265809 +0000 UTC m=+0.136750794 container init 05e7e6f5fc41fd71def593b8142dd0c57f493515c5b6c80d242ba24a9cf36e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_goodall, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:49:28 compute-0 podman[452689]: 2025-11-25 17:49:28.571412021 +0000 UTC m=+0.144896996 container start 05e7e6f5fc41fd71def593b8142dd0c57f493515c5b6c80d242ba24a9cf36e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 17:49:28 compute-0 podman[452689]: 2025-11-25 17:49:28.574759892 +0000 UTC m=+0.148244877 container attach 05e7e6f5fc41fd71def593b8142dd0c57f493515c5b6c80d242ba24a9cf36e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_goodall, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:49:29 compute-0 ceph-mon[74985]: pgmap v3800: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:29 compute-0 confident_goodall[452706]: {
Nov 25 17:49:29 compute-0 confident_goodall[452706]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:49:29 compute-0 confident_goodall[452706]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:49:29 compute-0 confident_goodall[452706]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:49:29 compute-0 confident_goodall[452706]:         "osd_id": 1,
Nov 25 17:49:29 compute-0 confident_goodall[452706]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:49:29 compute-0 confident_goodall[452706]:         "type": "bluestore"
Nov 25 17:49:29 compute-0 confident_goodall[452706]:     },
Nov 25 17:49:29 compute-0 confident_goodall[452706]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:49:29 compute-0 confident_goodall[452706]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:49:29 compute-0 confident_goodall[452706]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:49:29 compute-0 confident_goodall[452706]:         "osd_id": 2,
Nov 25 17:49:29 compute-0 confident_goodall[452706]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:49:29 compute-0 confident_goodall[452706]:         "type": "bluestore"
Nov 25 17:49:29 compute-0 confident_goodall[452706]:     },
Nov 25 17:49:29 compute-0 confident_goodall[452706]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:49:29 compute-0 confident_goodall[452706]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:49:29 compute-0 confident_goodall[452706]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:49:29 compute-0 confident_goodall[452706]:         "osd_id": 0,
Nov 25 17:49:29 compute-0 confident_goodall[452706]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:49:29 compute-0 confident_goodall[452706]:         "type": "bluestore"
Nov 25 17:49:29 compute-0 confident_goodall[452706]:     }
Nov 25 17:49:29 compute-0 confident_goodall[452706]: }
Nov 25 17:49:29 compute-0 systemd[1]: libpod-05e7e6f5fc41fd71def593b8142dd0c57f493515c5b6c80d242ba24a9cf36e41.scope: Deactivated successfully.
Nov 25 17:49:29 compute-0 systemd[1]: libpod-05e7e6f5fc41fd71def593b8142dd0c57f493515c5b6c80d242ba24a9cf36e41.scope: Consumed 1.283s CPU time.
Nov 25 17:49:29 compute-0 podman[452739]: 2025-11-25 17:49:29.88824954 +0000 UTC m=+0.026746000 container died 05e7e6f5fc41fd71def593b8142dd0c57f493515c5b6c80d242ba24a9cf36e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:49:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3b9ec3408a978d545328ce22d1c1b10af8a1ddae8fdcdcc113458096c61c912-merged.mount: Deactivated successfully.
Nov 25 17:49:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3801: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:29 compute-0 podman[452739]: 2025-11-25 17:49:29.953403743 +0000 UTC m=+0.091900193 container remove 05e7e6f5fc41fd71def593b8142dd0c57f493515c5b6c80d242ba24a9cf36e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 17:49:29 compute-0 systemd[1]: libpod-conmon-05e7e6f5fc41fd71def593b8142dd0c57f493515c5b6c80d242ba24a9cf36e41.scope: Deactivated successfully.
Nov 25 17:49:30 compute-0 sudo[452584]: pam_unix(sudo:session): session closed for user root
Nov 25 17:49:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:49:30 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:49:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:49:30 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:49:30 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev a63e0cbd-a2a5-41f4-968f-9938eed75f3e does not exist
Nov 25 17:49:30 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 38e5c5df-e6b5-44a9-8ed2-2468333826df does not exist
Nov 25 17:49:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:49:30 compute-0 sudo[452753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:49:30 compute-0 sudo[452753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:49:30 compute-0 sudo[452753]: pam_unix(sudo:session): session closed for user root
Nov 25 17:49:30 compute-0 sudo[452778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:49:30 compute-0 sudo[452778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:49:30 compute-0 sudo[452778]: pam_unix(sudo:session): session closed for user root
Nov 25 17:49:30 compute-0 nova_compute[254092]: 2025-11-25 17:49:30.549 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:49:31 compute-0 ceph-mon[74985]: pgmap v3801: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:31 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:49:31 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:49:31 compute-0 nova_compute[254092]: 2025-11-25 17:49:31.826 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:49:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3802: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:33 compute-0 ceph-mon[74985]: pgmap v3802: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:33 compute-0 nova_compute[254092]: 2025-11-25 17:49:33.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:49:33 compute-0 nova_compute[254092]: 2025-11-25 17:49:33.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 17:49:33 compute-0 nova_compute[254092]: 2025-11-25 17:49:33.515 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 17:49:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3803: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:35 compute-0 ceph-mon[74985]: pgmap v3803: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:49:35 compute-0 nova_compute[254092]: 2025-11-25 17:49:35.550 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:49:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3804: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:36 compute-0 nova_compute[254092]: 2025-11-25 17:49:36.862 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:49:37 compute-0 ceph-mon[74985]: pgmap v3804: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3805: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:38 compute-0 ceph-mon[74985]: pgmap v3805: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3806: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:49:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:49:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:49:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:49:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:49:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:49:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:49:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:49:40
Nov 25 17:49:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:49:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:49:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'volumes', '.mgr', 'images', 'backups', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log']
Nov 25 17:49:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:49:40 compute-0 nova_compute[254092]: 2025-11-25 17:49:40.552 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:49:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:49:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:49:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:49:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:49:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:49:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:49:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:49:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:49:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:49:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:49:40 compute-0 ceph-mon[74985]: pgmap v3806: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:41 compute-0 nova_compute[254092]: 2025-11-25 17:49:41.864 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:49:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3807: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:42 compute-0 ceph-mon[74985]: pgmap v3807: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3808: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:45 compute-0 ceph-mon[74985]: pgmap v3808: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:49:45 compute-0 nova_compute[254092]: 2025-11-25 17:49:45.555 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:49:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3809: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:46 compute-0 nova_compute[254092]: 2025-11-25 17:49:46.927 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:49:47 compute-0 ceph-mon[74985]: pgmap v3809: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:47 compute-0 nova_compute[254092]: 2025-11-25 17:49:47.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:49:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3810: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:49 compute-0 ceph-mon[74985]: pgmap v3810: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3811: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:49:50 compute-0 nova_compute[254092]: 2025-11-25 17:49:50.594 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:49:51 compute-0 ceph-mon[74985]: pgmap v3811: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:51 compute-0 nova_compute[254092]: 2025-11-25 17:49:51.929 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:49:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3812: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:49:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:49:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:49:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:49:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 17:49:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:49:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:49:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:49:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:49:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:49:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 17:49:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:49:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:49:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:49:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:49:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:49:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:49:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:49:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:49:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:49:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:49:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:49:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:49:53 compute-0 ceph-mon[74985]: pgmap v3812: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3813: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:55 compute-0 ceph-mon[74985]: pgmap v3813: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:49:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:49:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/328540502' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:49:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:49:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/328540502' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:49:55 compute-0 nova_compute[254092]: 2025-11-25 17:49:55.595 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:49:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3814: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/328540502' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:49:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/328540502' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:49:56 compute-0 podman[452804]: 2025-11-25 17:49:56.633786959 +0000 UTC m=+0.049460008 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible)
Nov 25 17:49:56 compute-0 podman[452803]: 2025-11-25 17:49:56.643938226 +0000 UTC m=+0.059616925 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 17:49:56 compute-0 podman[452805]: 2025-11-25 17:49:56.721726083 +0000 UTC m=+0.136276700 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 17:49:56 compute-0 nova_compute[254092]: 2025-11-25 17:49:56.931 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:49:57 compute-0 ceph-mon[74985]: pgmap v3814: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3815: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:59 compute-0 ceph-mon[74985]: pgmap v3815: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:49:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3816: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:50:00 compute-0 nova_compute[254092]: 2025-11-25 17:50:00.599 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:50:01 compute-0 ceph-mon[74985]: pgmap v3816: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:01 compute-0 sshd-session[452867]: Connection closed by authenticating user root 171.244.51.45 port 48310 [preauth]
Nov 25 17:50:01 compute-0 nova_compute[254092]: 2025-11-25 17:50:01.934 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:50:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3817: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:02 compute-0 ceph-mon[74985]: pgmap v3817: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3818: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:05 compute-0 ceph-mon[74985]: pgmap v3818: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:50:05 compute-0 nova_compute[254092]: 2025-11-25 17:50:05.599 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:50:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3819: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:06 compute-0 nova_compute[254092]: 2025-11-25 17:50:06.936 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:50:07 compute-0 ceph-mon[74985]: pgmap v3819: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3820: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:09 compute-0 ceph-mon[74985]: pgmap v3820: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:09 compute-0 nova_compute[254092]: 2025-11-25 17:50:09.505 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:50:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3821: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:50:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:50:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:50:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:50:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:50:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:50:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:50:10 compute-0 nova_compute[254092]: 2025-11-25 17:50:10.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:50:10 compute-0 nova_compute[254092]: 2025-11-25 17:50:10.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:50:10 compute-0 nova_compute[254092]: 2025-11-25 17:50:10.601 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:50:11 compute-0 ceph-mon[74985]: pgmap v3821: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:11 compute-0 nova_compute[254092]: 2025-11-25 17:50:11.938 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:50:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3822: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:13 compute-0 ceph-mon[74985]: pgmap v3822: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:13 compute-0 nova_compute[254092]: 2025-11-25 17:50:13.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:50:13 compute-0 nova_compute[254092]: 2025-11-25 17:50:13.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:50:13 compute-0 nova_compute[254092]: 2025-11-25 17:50:13.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:50:13 compute-0 nova_compute[254092]: 2025-11-25 17:50:13.519 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 17:50:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:50:13.694 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:50:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:50:13.695 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:50:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:50:13.695 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:50:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3823: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:15 compute-0 ceph-mon[74985]: pgmap v3823: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:50:15 compute-0 nova_compute[254092]: 2025-11-25 17:50:15.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:50:15 compute-0 nova_compute[254092]: 2025-11-25 17:50:15.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:50:15 compute-0 nova_compute[254092]: 2025-11-25 17:50:15.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:50:15 compute-0 nova_compute[254092]: 2025-11-25 17:50:15.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:50:15 compute-0 nova_compute[254092]: 2025-11-25 17:50:15.523 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:50:15 compute-0 nova_compute[254092]: 2025-11-25 17:50:15.523 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:50:15 compute-0 nova_compute[254092]: 2025-11-25 17:50:15.603 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:50:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:50:15 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2527462887' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:50:15 compute-0 nova_compute[254092]: 2025-11-25 17:50:15.943 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:50:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3824: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:16 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2527462887' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:50:16 compute-0 nova_compute[254092]: 2025-11-25 17:50:16.114 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:50:16 compute-0 nova_compute[254092]: 2025-11-25 17:50:16.115 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3624MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:50:16 compute-0 nova_compute[254092]: 2025-11-25 17:50:16.116 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:50:16 compute-0 nova_compute[254092]: 2025-11-25 17:50:16.116 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:50:16 compute-0 nova_compute[254092]: 2025-11-25 17:50:16.200 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:50:16 compute-0 nova_compute[254092]: 2025-11-25 17:50:16.201 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:50:16 compute-0 nova_compute[254092]: 2025-11-25 17:50:16.215 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:50:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:50:16 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1141294128' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:50:16 compute-0 nova_compute[254092]: 2025-11-25 17:50:16.687 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:50:16 compute-0 nova_compute[254092]: 2025-11-25 17:50:16.693 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:50:16 compute-0 nova_compute[254092]: 2025-11-25 17:50:16.707 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:50:16 compute-0 nova_compute[254092]: 2025-11-25 17:50:16.709 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:50:16 compute-0 nova_compute[254092]: 2025-11-25 17:50:16.709 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:50:16 compute-0 nova_compute[254092]: 2025-11-25 17:50:16.941 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:50:17 compute-0 ceph-mon[74985]: pgmap v3824: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:17 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1141294128' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:50:17 compute-0 nova_compute[254092]: 2025-11-25 17:50:17.709 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:50:17 compute-0 nova_compute[254092]: 2025-11-25 17:50:17.710 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:50:17 compute-0 nova_compute[254092]: 2025-11-25 17:50:17.710 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:50:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3825: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:18 compute-0 nova_compute[254092]: 2025-11-25 17:50:18.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:50:19 compute-0 ceph-mon[74985]: pgmap v3825: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3826: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:50:20 compute-0 nova_compute[254092]: 2025-11-25 17:50:20.605 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:50:21 compute-0 ceph-mon[74985]: pgmap v3826: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:21 compute-0 nova_compute[254092]: 2025-11-25 17:50:21.942 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:50:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3827: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:23 compute-0 ceph-mon[74985]: pgmap v3827: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:23 compute-0 sshd-session[452913]: Connection closed by 152.32.206.83 port 34500
Nov 25 17:50:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3828: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:24 compute-0 sshd-session[452914]: Connection closed by 152.32.206.83 port 34692 [preauth]
Nov 25 17:50:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:50:25 compute-0 ceph-mon[74985]: pgmap v3828: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:25 compute-0 sshd-session[452916]: Unable to negotiate with 152.32.206.83 port 35066: no matching host key type found. Their offer: ssh-rsa [preauth]
Nov 25 17:50:25 compute-0 nova_compute[254092]: 2025-11-25 17:50:25.608 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:50:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3829: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:26 compute-0 ceph-mon[74985]: pgmap v3829: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:26 compute-0 nova_compute[254092]: 2025-11-25 17:50:26.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:50:26 compute-0 nova_compute[254092]: 2025-11-25 17:50:26.945 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:50:27 compute-0 podman[452919]: 2025-11-25 17:50:27.630665454 +0000 UTC m=+0.049176190 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 25 17:50:27 compute-0 podman[452918]: 2025-11-25 17:50:27.661426541 +0000 UTC m=+0.081353716 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 17:50:27 compute-0 podman[452920]: 2025-11-25 17:50:27.684704135 +0000 UTC m=+0.098149013 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 17:50:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3830: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:28 compute-0 ceph-mon[74985]: pgmap v3830: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3831: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:50:30 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #183. Immutable memtables: 0.
Nov 25 17:50:30 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:50:30.097071) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 17:50:30 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 113] Flushing memtable with next log file: 183
Nov 25 17:50:30 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093030097279, "job": 113, "event": "flush_started", "num_memtables": 1, "num_entries": 1542, "num_deletes": 250, "total_data_size": 2471697, "memory_usage": 2512664, "flush_reason": "Manual Compaction"}
Nov 25 17:50:30 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 113] Level-0 flush table #184: started
Nov 25 17:50:30 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093030111881, "cf_name": "default", "job": 113, "event": "table_file_creation", "file_number": 184, "file_size": 1412433, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 77465, "largest_seqno": 79006, "table_properties": {"data_size": 1407242, "index_size": 2458, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13648, "raw_average_key_size": 20, "raw_value_size": 1395714, "raw_average_value_size": 2114, "num_data_blocks": 113, "num_entries": 660, "num_filter_entries": 660, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764092862, "oldest_key_time": 1764092862, "file_creation_time": 1764093030, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 184, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:50:30 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 113] Flush lasted 14671 microseconds, and 3785 cpu microseconds.
Nov 25 17:50:30 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:50:30 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:50:30.111921) [db/flush_job.cc:967] [default] [JOB 113] Level-0 flush table #184: 1412433 bytes OK
Nov 25 17:50:30 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:50:30.111938) [db/memtable_list.cc:519] [default] Level-0 commit table #184 started
Nov 25 17:50:30 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:50:30.115563) [db/memtable_list.cc:722] [default] Level-0 commit table #184: memtable #1 done
Nov 25 17:50:30 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:50:30.115577) EVENT_LOG_v1 {"time_micros": 1764093030115572, "job": 113, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 17:50:30 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:50:30.115595) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 17:50:30 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 113] Try to delete WAL files size 2464997, prev total WAL file size 2464997, number of live WAL files 2.
Nov 25 17:50:30 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000180.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:50:30 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:50:30.116179) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033323631' seq:72057594037927935, type:22 .. '6D6772737461740033353132' seq:0, type:0; will stop at (end)
Nov 25 17:50:30 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 114] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 17:50:30 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 113 Base level 0, inputs: [184(1379KB)], [182(10MB)]
Nov 25 17:50:30 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093030116203, "job": 114, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [184], "files_L6": [182], "score": -1, "input_data_size": 12554031, "oldest_snapshot_seqno": -1}
Nov 25 17:50:30 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 114] Generated table #185: 9546 keys, 10279279 bytes, temperature: kUnknown
Nov 25 17:50:30 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093030160894, "cf_name": "default", "job": 114, "event": "table_file_creation", "file_number": 185, "file_size": 10279279, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10221475, "index_size": 32859, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23877, "raw_key_size": 250687, "raw_average_key_size": 26, "raw_value_size": 10056931, "raw_average_value_size": 1053, "num_data_blocks": 1266, "num_entries": 9546, "num_filter_entries": 9546, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764093030, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 185, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:50:30 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:50:30 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:50:30.161108) [db/compaction/compaction_job.cc:1663] [default] [JOB 114] Compacted 1@0 + 1@6 files to L6 => 10279279 bytes
Nov 25 17:50:30 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:50:30.162599) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 280.5 rd, 229.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 10.6 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(16.2) write-amplify(7.3) OK, records in: 9980, records dropped: 434 output_compression: NoCompression
Nov 25 17:50:30 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:50:30.162614) EVENT_LOG_v1 {"time_micros": 1764093030162607, "job": 114, "event": "compaction_finished", "compaction_time_micros": 44758, "compaction_time_cpu_micros": 24842, "output_level": 6, "num_output_files": 1, "total_output_size": 10279279, "num_input_records": 9980, "num_output_records": 9546, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 17:50:30 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000184.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:50:30 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093030162948, "job": 114, "event": "table_file_deletion", "file_number": 184}
Nov 25 17:50:30 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000182.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:50:30 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093030165103, "job": 114, "event": "table_file_deletion", "file_number": 182}
Nov 25 17:50:30 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:50:30.116138) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:50:30 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:50:30.165498) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:50:30 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:50:30.165507) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:50:30 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:50:30.165509) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:50:30 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:50:30.165510) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:50:30 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:50:30.165512) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:50:30 compute-0 sudo[452979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:50:30 compute-0 sudo[452979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:50:30 compute-0 sudo[452979]: pam_unix(sudo:session): session closed for user root
Nov 25 17:50:30 compute-0 sudo[453004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:50:30 compute-0 sudo[453004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:50:30 compute-0 sudo[453004]: pam_unix(sudo:session): session closed for user root
Nov 25 17:50:30 compute-0 sudo[453029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:50:30 compute-0 sudo[453029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:50:30 compute-0 sudo[453029]: pam_unix(sudo:session): session closed for user root
Nov 25 17:50:30 compute-0 sudo[453054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 25 17:50:30 compute-0 sudo[453054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:50:30 compute-0 nova_compute[254092]: 2025-11-25 17:50:30.656 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:50:30 compute-0 podman[453151]: 2025-11-25 17:50:30.942121274 +0000 UTC m=+0.071313582 container exec 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:50:31 compute-0 ceph-mon[74985]: pgmap v3831: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:31 compute-0 podman[453151]: 2025-11-25 17:50:31.032136054 +0000 UTC m=+0.161328362 container exec_died 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 17:50:31 compute-0 sudo[453054]: pam_unix(sudo:session): session closed for user root
Nov 25 17:50:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:50:31 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:50:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:50:31 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:50:31 compute-0 sudo[453310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:50:31 compute-0 sudo[453310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:50:31 compute-0 sudo[453310]: pam_unix(sudo:session): session closed for user root
Nov 25 17:50:31 compute-0 sudo[453335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:50:31 compute-0 sudo[453335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:50:31 compute-0 sudo[453335]: pam_unix(sudo:session): session closed for user root
Nov 25 17:50:31 compute-0 sudo[453360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:50:31 compute-0 sudo[453360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:50:31 compute-0 sudo[453360]: pam_unix(sudo:session): session closed for user root
Nov 25 17:50:31 compute-0 nova_compute[254092]: 2025-11-25 17:50:31.947 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:50:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3832: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:32 compute-0 sudo[453385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:50:32 compute-0 sudo[453385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:50:32 compute-0 sudo[453385]: pam_unix(sudo:session): session closed for user root
Nov 25 17:50:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:50:32 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:50:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:50:32 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:50:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:50:32 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:50:32 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 13cba712-1017-492e-aafd-43db4268005a does not exist
Nov 25 17:50:32 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev afa8a022-da52-4465-9316-5c32a3aec5ae does not exist
Nov 25 17:50:32 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 98af20ff-adc6-42ac-81e2-590c5f351d2c does not exist
Nov 25 17:50:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:50:32 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:50:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:50:32 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:50:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:50:32 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:50:32 compute-0 sudo[453441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:50:32 compute-0 sudo[453441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:50:32 compute-0 sudo[453441]: pam_unix(sudo:session): session closed for user root
Nov 25 17:50:32 compute-0 sudo[453466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:50:32 compute-0 sudo[453466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:50:32 compute-0 sudo[453466]: pam_unix(sudo:session): session closed for user root
Nov 25 17:50:32 compute-0 sudo[453491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:50:32 compute-0 sudo[453491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:50:32 compute-0 sudo[453491]: pam_unix(sudo:session): session closed for user root
Nov 25 17:50:32 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:50:32 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:50:32 compute-0 ceph-mon[74985]: pgmap v3832: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:32 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:50:32 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:50:32 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:50:32 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:50:32 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:50:32 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:50:32 compute-0 sudo[453516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:50:32 compute-0 sudo[453516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:50:33 compute-0 podman[453581]: 2025-11-25 17:50:33.06032701 +0000 UTC m=+0.039090535 container create 1e5750c1453aa5a7a10765f36966086fa4bf21b029ef24a78e3b1491f0d1e25c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_fermat, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:50:33 compute-0 systemd[1]: Started libpod-conmon-1e5750c1453aa5a7a10765f36966086fa4bf21b029ef24a78e3b1491f0d1e25c.scope.
Nov 25 17:50:33 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:50:33 compute-0 podman[453581]: 2025-11-25 17:50:33.132781843 +0000 UTC m=+0.111545448 container init 1e5750c1453aa5a7a10765f36966086fa4bf21b029ef24a78e3b1491f0d1e25c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:50:33 compute-0 podman[453581]: 2025-11-25 17:50:33.138780886 +0000 UTC m=+0.117544401 container start 1e5750c1453aa5a7a10765f36966086fa4bf21b029ef24a78e3b1491f0d1e25c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_fermat, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:50:33 compute-0 podman[453581]: 2025-11-25 17:50:33.041368244 +0000 UTC m=+0.020131789 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:50:33 compute-0 podman[453581]: 2025-11-25 17:50:33.143064843 +0000 UTC m=+0.121828398 container attach 1e5750c1453aa5a7a10765f36966086fa4bf21b029ef24a78e3b1491f0d1e25c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:50:33 compute-0 happy_fermat[453597]: 167 167
Nov 25 17:50:33 compute-0 systemd[1]: libpod-1e5750c1453aa5a7a10765f36966086fa4bf21b029ef24a78e3b1491f0d1e25c.scope: Deactivated successfully.
Nov 25 17:50:33 compute-0 podman[453581]: 2025-11-25 17:50:33.145212441 +0000 UTC m=+0.123975956 container died 1e5750c1453aa5a7a10765f36966086fa4bf21b029ef24a78e3b1491f0d1e25c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:50:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-c514a65e41eb2381a42ec38f7733b77f3996e849b6087bdf3bf91fdd8575989d-merged.mount: Deactivated successfully.
Nov 25 17:50:33 compute-0 podman[453581]: 2025-11-25 17:50:33.181204341 +0000 UTC m=+0.159967856 container remove 1e5750c1453aa5a7a10765f36966086fa4bf21b029ef24a78e3b1491f0d1e25c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 17:50:33 compute-0 systemd[1]: libpod-conmon-1e5750c1453aa5a7a10765f36966086fa4bf21b029ef24a78e3b1491f0d1e25c.scope: Deactivated successfully.
Nov 25 17:50:33 compute-0 podman[453622]: 2025-11-25 17:50:33.343074888 +0000 UTC m=+0.050351482 container create 541fd8247572000d1644e08f490476a7c6c3c1475b534347c6939ffcdacd80f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 17:50:33 compute-0 systemd[1]: Started libpod-conmon-541fd8247572000d1644e08f490476a7c6c3c1475b534347c6939ffcdacd80f9.scope.
Nov 25 17:50:33 compute-0 podman[453622]: 2025-11-25 17:50:33.319872796 +0000 UTC m=+0.027149410 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:50:33 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:50:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b77c141607fe01fd5a9798a1748f00d98305e60e544c5dd38eea3badcd8f8e42/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:50:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b77c141607fe01fd5a9798a1748f00d98305e60e544c5dd38eea3badcd8f8e42/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:50:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b77c141607fe01fd5a9798a1748f00d98305e60e544c5dd38eea3badcd8f8e42/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:50:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b77c141607fe01fd5a9798a1748f00d98305e60e544c5dd38eea3badcd8f8e42/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:50:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b77c141607fe01fd5a9798a1748f00d98305e60e544c5dd38eea3badcd8f8e42/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:50:33 compute-0 podman[453622]: 2025-11-25 17:50:33.473512459 +0000 UTC m=+0.180789053 container init 541fd8247572000d1644e08f490476a7c6c3c1475b534347c6939ffcdacd80f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hodgkin, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Nov 25 17:50:33 compute-0 podman[453622]: 2025-11-25 17:50:33.486590145 +0000 UTC m=+0.193866749 container start 541fd8247572000d1644e08f490476a7c6c3c1475b534347c6939ffcdacd80f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hodgkin, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:50:33 compute-0 podman[453622]: 2025-11-25 17:50:33.492772484 +0000 UTC m=+0.200049118 container attach 541fd8247572000d1644e08f490476a7c6c3c1475b534347c6939ffcdacd80f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hodgkin, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:50:33 compute-0 nova_compute[254092]: 2025-11-25 17:50:33.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:50:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3833: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:34 compute-0 laughing_hodgkin[453638]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:50:34 compute-0 laughing_hodgkin[453638]: --> relative data size: 1.0
Nov 25 17:50:34 compute-0 laughing_hodgkin[453638]: --> All data devices are unavailable
Nov 25 17:50:34 compute-0 systemd[1]: libpod-541fd8247572000d1644e08f490476a7c6c3c1475b534347c6939ffcdacd80f9.scope: Deactivated successfully.
Nov 25 17:50:34 compute-0 systemd[1]: libpod-541fd8247572000d1644e08f490476a7c6c3c1475b534347c6939ffcdacd80f9.scope: Consumed 1.014s CPU time.
Nov 25 17:50:34 compute-0 podman[453622]: 2025-11-25 17:50:34.563168064 +0000 UTC m=+1.270444658 container died 541fd8247572000d1644e08f490476a7c6c3c1475b534347c6939ffcdacd80f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:50:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-b77c141607fe01fd5a9798a1748f00d98305e60e544c5dd38eea3badcd8f8e42-merged.mount: Deactivated successfully.
Nov 25 17:50:34 compute-0 podman[453622]: 2025-11-25 17:50:34.622460088 +0000 UTC m=+1.329736682 container remove 541fd8247572000d1644e08f490476a7c6c3c1475b534347c6939ffcdacd80f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hodgkin, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:50:34 compute-0 systemd[1]: libpod-conmon-541fd8247572000d1644e08f490476a7c6c3c1475b534347c6939ffcdacd80f9.scope: Deactivated successfully.
Nov 25 17:50:34 compute-0 sudo[453516]: pam_unix(sudo:session): session closed for user root
Nov 25 17:50:34 compute-0 sudo[453677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:50:34 compute-0 sudo[453677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:50:34 compute-0 sudo[453677]: pam_unix(sudo:session): session closed for user root
Nov 25 17:50:34 compute-0 sudo[453702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:50:34 compute-0 sudo[453702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:50:34 compute-0 sudo[453702]: pam_unix(sudo:session): session closed for user root
Nov 25 17:50:34 compute-0 sudo[453727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:50:34 compute-0 sudo[453727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:50:34 compute-0 sudo[453727]: pam_unix(sudo:session): session closed for user root
Nov 25 17:50:34 compute-0 sudo[453752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:50:34 compute-0 sudo[453752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:50:35 compute-0 ceph-mon[74985]: pgmap v3833: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:50:35 compute-0 podman[453817]: 2025-11-25 17:50:35.404405305 +0000 UTC m=+0.082039934 container create 6c5235ad6e742a31b16622795d3069df1f655eb05fa2afd441dfc71026e1d7a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_perlman, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 17:50:35 compute-0 podman[453817]: 2025-11-25 17:50:35.348131493 +0000 UTC m=+0.025766212 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:50:35 compute-0 systemd[1]: Started libpod-conmon-6c5235ad6e742a31b16622795d3069df1f655eb05fa2afd441dfc71026e1d7a6.scope.
Nov 25 17:50:35 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:50:35 compute-0 podman[453817]: 2025-11-25 17:50:35.522476239 +0000 UTC m=+0.200110878 container init 6c5235ad6e742a31b16622795d3069df1f655eb05fa2afd441dfc71026e1d7a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_perlman, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 17:50:35 compute-0 podman[453817]: 2025-11-25 17:50:35.535179005 +0000 UTC m=+0.212813634 container start 6c5235ad6e742a31b16622795d3069df1f655eb05fa2afd441dfc71026e1d7a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 17:50:35 compute-0 exciting_perlman[453833]: 167 167
Nov 25 17:50:35 compute-0 systemd[1]: libpod-6c5235ad6e742a31b16622795d3069df1f655eb05fa2afd441dfc71026e1d7a6.scope: Deactivated successfully.
Nov 25 17:50:35 compute-0 podman[453817]: 2025-11-25 17:50:35.572398388 +0000 UTC m=+0.250033047 container attach 6c5235ad6e742a31b16622795d3069df1f655eb05fa2afd441dfc71026e1d7a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:50:35 compute-0 podman[453817]: 2025-11-25 17:50:35.572904042 +0000 UTC m=+0.250538671 container died 6c5235ad6e742a31b16622795d3069df1f655eb05fa2afd441dfc71026e1d7a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_perlman, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:50:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-55db5ace3432b8d7f6f7b6160f8d2f4356acafdec8d3dbcb7e1cd6190da25971-merged.mount: Deactivated successfully.
Nov 25 17:50:35 compute-0 podman[453817]: 2025-11-25 17:50:35.62537274 +0000 UTC m=+0.303007409 container remove 6c5235ad6e742a31b16622795d3069df1f655eb05fa2afd441dfc71026e1d7a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 17:50:35 compute-0 systemd[1]: libpod-conmon-6c5235ad6e742a31b16622795d3069df1f655eb05fa2afd441dfc71026e1d7a6.scope: Deactivated successfully.
Nov 25 17:50:35 compute-0 nova_compute[254092]: 2025-11-25 17:50:35.659 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:50:35 compute-0 podman[453861]: 2025-11-25 17:50:35.843098817 +0000 UTC m=+0.050322860 container create b940a0f69cf05989ac430669625694c1cbbd690c2228cf25499a2b3f78e213bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_moore, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 17:50:35 compute-0 systemd[1]: Started libpod-conmon-b940a0f69cf05989ac430669625694c1cbbd690c2228cf25499a2b3f78e213bc.scope.
Nov 25 17:50:35 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:50:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e14aabecf56d275d52bfbfc0a06fd73212f5bab21a610996244225d6bc8645f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:50:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e14aabecf56d275d52bfbfc0a06fd73212f5bab21a610996244225d6bc8645f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:50:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e14aabecf56d275d52bfbfc0a06fd73212f5bab21a610996244225d6bc8645f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:50:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e14aabecf56d275d52bfbfc0a06fd73212f5bab21a610996244225d6bc8645f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:50:35 compute-0 podman[453861]: 2025-11-25 17:50:35.820179423 +0000 UTC m=+0.027403516 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:50:35 compute-0 podman[453861]: 2025-11-25 17:50:35.921939114 +0000 UTC m=+0.129163177 container init b940a0f69cf05989ac430669625694c1cbbd690c2228cf25499a2b3f78e213bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 17:50:35 compute-0 podman[453861]: 2025-11-25 17:50:35.941406044 +0000 UTC m=+0.148630087 container start b940a0f69cf05989ac430669625694c1cbbd690c2228cf25499a2b3f78e213bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_moore, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:50:35 compute-0 podman[453861]: 2025-11-25 17:50:35.944497328 +0000 UTC m=+0.151721381 container attach b940a0f69cf05989ac430669625694c1cbbd690c2228cf25499a2b3f78e213bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_moore, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 17:50:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3834: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:36 compute-0 beautiful_moore[453877]: {
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:     "0": [
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:         {
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:             "devices": [
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:                 "/dev/loop3"
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:             ],
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:             "lv_name": "ceph_lv0",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:             "lv_size": "21470642176",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:             "name": "ceph_lv0",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:             "tags": {
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:                 "ceph.cluster_name": "ceph",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:                 "ceph.crush_device_class": "",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:                 "ceph.encrypted": "0",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:                 "ceph.osd_id": "0",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:                 "ceph.type": "block",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:                 "ceph.vdo": "0"
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:             },
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:             "type": "block",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:             "vg_name": "ceph_vg0"
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:         }
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:     ],
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:     "1": [
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:         {
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:             "devices": [
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:                 "/dev/loop4"
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:             ],
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:             "lv_name": "ceph_lv1",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:             "lv_size": "21470642176",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:             "name": "ceph_lv1",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:             "tags": {
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:                 "ceph.cluster_name": "ceph",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:                 "ceph.crush_device_class": "",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:                 "ceph.encrypted": "0",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:                 "ceph.osd_id": "1",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:                 "ceph.type": "block",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:                 "ceph.vdo": "0"
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:             },
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:             "type": "block",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:             "vg_name": "ceph_vg1"
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:         }
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:     ],
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:     "2": [
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:         {
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:             "devices": [
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:                 "/dev/loop5"
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:             ],
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:             "lv_name": "ceph_lv2",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:             "lv_size": "21470642176",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:             "name": "ceph_lv2",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:             "tags": {
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:                 "ceph.cluster_name": "ceph",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:                 "ceph.crush_device_class": "",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:                 "ceph.encrypted": "0",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:                 "ceph.osd_id": "2",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:                 "ceph.type": "block",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:                 "ceph.vdo": "0"
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:             },
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:             "type": "block",
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:             "vg_name": "ceph_vg2"
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:         }
Nov 25 17:50:36 compute-0 beautiful_moore[453877]:     ]
Nov 25 17:50:36 compute-0 beautiful_moore[453877]: }
Nov 25 17:50:36 compute-0 systemd[1]: libpod-b940a0f69cf05989ac430669625694c1cbbd690c2228cf25499a2b3f78e213bc.scope: Deactivated successfully.
Nov 25 17:50:36 compute-0 podman[453861]: 2025-11-25 17:50:36.720231086 +0000 UTC m=+0.927455129 container died b940a0f69cf05989ac430669625694c1cbbd690c2228cf25499a2b3f78e213bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:50:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e14aabecf56d275d52bfbfc0a06fd73212f5bab21a610996244225d6bc8645f-merged.mount: Deactivated successfully.
Nov 25 17:50:36 compute-0 podman[453861]: 2025-11-25 17:50:36.780585449 +0000 UTC m=+0.987809492 container remove b940a0f69cf05989ac430669625694c1cbbd690c2228cf25499a2b3f78e213bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_moore, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:50:36 compute-0 systemd[1]: libpod-conmon-b940a0f69cf05989ac430669625694c1cbbd690c2228cf25499a2b3f78e213bc.scope: Deactivated successfully.
Nov 25 17:50:36 compute-0 sudo[453752]: pam_unix(sudo:session): session closed for user root
Nov 25 17:50:36 compute-0 sudo[453900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:50:36 compute-0 sudo[453900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:50:36 compute-0 sudo[453900]: pam_unix(sudo:session): session closed for user root
Nov 25 17:50:36 compute-0 nova_compute[254092]: 2025-11-25 17:50:36.984 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:50:37 compute-0 sudo[453925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:50:37 compute-0 sudo[453925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:50:37 compute-0 sudo[453925]: pam_unix(sudo:session): session closed for user root
Nov 25 17:50:37 compute-0 sshd-session[453882]: Received disconnect from 193.46.255.99 port 42002:11:  [preauth]
Nov 25 17:50:37 compute-0 sshd-session[453882]: Disconnected from authenticating user root 193.46.255.99 port 42002 [preauth]
Nov 25 17:50:37 compute-0 ceph-mon[74985]: pgmap v3834: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:37 compute-0 sudo[453950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:50:37 compute-0 sudo[453950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:50:37 compute-0 sudo[453950]: pam_unix(sudo:session): session closed for user root
Nov 25 17:50:37 compute-0 sudo[453975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:50:37 compute-0 sudo[453975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:50:37 compute-0 podman[454042]: 2025-11-25 17:50:37.491653287 +0000 UTC m=+0.044762569 container create aeccb46769799235bb21bf60fbf088cad34856e0c9afff9aea0725316c4c5ab5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hypatia, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:50:37 compute-0 systemd[1]: Started libpod-conmon-aeccb46769799235bb21bf60fbf088cad34856e0c9afff9aea0725316c4c5ab5.scope.
Nov 25 17:50:37 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:50:37 compute-0 podman[454042]: 2025-11-25 17:50:37.471726955 +0000 UTC m=+0.024836257 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:50:37 compute-0 podman[454042]: 2025-11-25 17:50:37.579036276 +0000 UTC m=+0.132145578 container init aeccb46769799235bb21bf60fbf088cad34856e0c9afff9aea0725316c4c5ab5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hypatia, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 17:50:37 compute-0 podman[454042]: 2025-11-25 17:50:37.588313508 +0000 UTC m=+0.141422790 container start aeccb46769799235bb21bf60fbf088cad34856e0c9afff9aea0725316c4c5ab5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hypatia, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 17:50:37 compute-0 podman[454042]: 2025-11-25 17:50:37.591424884 +0000 UTC m=+0.144534166 container attach aeccb46769799235bb21bf60fbf088cad34856e0c9afff9aea0725316c4c5ab5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 17:50:37 compute-0 nostalgic_hypatia[454058]: 167 167
Nov 25 17:50:37 compute-0 systemd[1]: libpod-aeccb46769799235bb21bf60fbf088cad34856e0c9afff9aea0725316c4c5ab5.scope: Deactivated successfully.
Nov 25 17:50:37 compute-0 conmon[454058]: conmon aeccb46769799235bb21 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aeccb46769799235bb21bf60fbf088cad34856e0c9afff9aea0725316c4c5ab5.scope/container/memory.events
Nov 25 17:50:37 compute-0 podman[454042]: 2025-11-25 17:50:37.596932264 +0000 UTC m=+0.150041576 container died aeccb46769799235bb21bf60fbf088cad34856e0c9afff9aea0725316c4c5ab5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hypatia, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:50:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-d09580fde88e75b202a4de61dbc3c2c516ad1a922b9cabb43aeec449c0132e9f-merged.mount: Deactivated successfully.
Nov 25 17:50:37 compute-0 podman[454042]: 2025-11-25 17:50:37.640831908 +0000 UTC m=+0.193941210 container remove aeccb46769799235bb21bf60fbf088cad34856e0c9afff9aea0725316c4c5ab5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hypatia, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:50:37 compute-0 systemd[1]: libpod-conmon-aeccb46769799235bb21bf60fbf088cad34856e0c9afff9aea0725316c4c5ab5.scope: Deactivated successfully.
Nov 25 17:50:37 compute-0 podman[454083]: 2025-11-25 17:50:37.852435169 +0000 UTC m=+0.060289853 container create 260aec9e69d13a64c137832523a42fd80e9ac36045990725c72c06eaf5a7c5ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 17:50:37 compute-0 systemd[1]: Started libpod-conmon-260aec9e69d13a64c137832523a42fd80e9ac36045990725c72c06eaf5a7c5ba.scope.
Nov 25 17:50:37 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:50:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a340bb265fc90df7caf84d11e90e216de5d369720b42f35a52dfbfa468374ea6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:50:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a340bb265fc90df7caf84d11e90e216de5d369720b42f35a52dfbfa468374ea6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:50:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a340bb265fc90df7caf84d11e90e216de5d369720b42f35a52dfbfa468374ea6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:50:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a340bb265fc90df7caf84d11e90e216de5d369720b42f35a52dfbfa468374ea6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:50:37 compute-0 podman[454083]: 2025-11-25 17:50:37.836053673 +0000 UTC m=+0.043908377 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:50:37 compute-0 podman[454083]: 2025-11-25 17:50:37.945355989 +0000 UTC m=+0.153210713 container init 260aec9e69d13a64c137832523a42fd80e9ac36045990725c72c06eaf5a7c5ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 17:50:37 compute-0 podman[454083]: 2025-11-25 17:50:37.954885168 +0000 UTC m=+0.162739852 container start 260aec9e69d13a64c137832523a42fd80e9ac36045990725c72c06eaf5a7c5ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hermann, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 17:50:37 compute-0 podman[454083]: 2025-11-25 17:50:37.958445775 +0000 UTC m=+0.166300459 container attach 260aec9e69d13a64c137832523a42fd80e9ac36045990725c72c06eaf5a7c5ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 17:50:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3835: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:38 compute-0 hopeful_hermann[454099]: {
Nov 25 17:50:38 compute-0 hopeful_hermann[454099]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:50:38 compute-0 hopeful_hermann[454099]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:50:38 compute-0 hopeful_hermann[454099]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:50:38 compute-0 hopeful_hermann[454099]:         "osd_id": 1,
Nov 25 17:50:38 compute-0 hopeful_hermann[454099]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:50:38 compute-0 hopeful_hermann[454099]:         "type": "bluestore"
Nov 25 17:50:38 compute-0 hopeful_hermann[454099]:     },
Nov 25 17:50:38 compute-0 hopeful_hermann[454099]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:50:38 compute-0 hopeful_hermann[454099]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:50:38 compute-0 hopeful_hermann[454099]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:50:38 compute-0 hopeful_hermann[454099]:         "osd_id": 2,
Nov 25 17:50:38 compute-0 hopeful_hermann[454099]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:50:38 compute-0 hopeful_hermann[454099]:         "type": "bluestore"
Nov 25 17:50:38 compute-0 hopeful_hermann[454099]:     },
Nov 25 17:50:38 compute-0 hopeful_hermann[454099]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:50:38 compute-0 hopeful_hermann[454099]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:50:38 compute-0 hopeful_hermann[454099]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:50:38 compute-0 hopeful_hermann[454099]:         "osd_id": 0,
Nov 25 17:50:38 compute-0 hopeful_hermann[454099]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:50:38 compute-0 hopeful_hermann[454099]:         "type": "bluestore"
Nov 25 17:50:38 compute-0 hopeful_hermann[454099]:     }
Nov 25 17:50:38 compute-0 hopeful_hermann[454099]: }
Nov 25 17:50:38 compute-0 systemd[1]: libpod-260aec9e69d13a64c137832523a42fd80e9ac36045990725c72c06eaf5a7c5ba.scope: Deactivated successfully.
Nov 25 17:50:38 compute-0 podman[454083]: 2025-11-25 17:50:38.898959009 +0000 UTC m=+1.106813693 container died 260aec9e69d13a64c137832523a42fd80e9ac36045990725c72c06eaf5a7c5ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hermann, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:50:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-a340bb265fc90df7caf84d11e90e216de5d369720b42f35a52dfbfa468374ea6-merged.mount: Deactivated successfully.
Nov 25 17:50:38 compute-0 podman[454083]: 2025-11-25 17:50:38.955051186 +0000 UTC m=+1.162905870 container remove 260aec9e69d13a64c137832523a42fd80e9ac36045990725c72c06eaf5a7c5ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 17:50:38 compute-0 systemd[1]: libpod-conmon-260aec9e69d13a64c137832523a42fd80e9ac36045990725c72c06eaf5a7c5ba.scope: Deactivated successfully.
Nov 25 17:50:38 compute-0 sudo[453975]: pam_unix(sudo:session): session closed for user root
Nov 25 17:50:38 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:50:39 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:50:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:50:39 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:50:39 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 1891583c-dd36-4336-b19f-ca50420b6da6 does not exist
Nov 25 17:50:39 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 5922e36e-c6e4-4114-98a7-9c119a26e481 does not exist
Nov 25 17:50:39 compute-0 ceph-mon[74985]: pgmap v3835: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:39 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:50:39 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:50:39 compute-0 sudo[454145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:50:39 compute-0 sudo[454145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:50:39 compute-0 sudo[454145]: pam_unix(sudo:session): session closed for user root
Nov 25 17:50:39 compute-0 sudo[454170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:50:39 compute-0 sudo[454170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:50:39 compute-0 sudo[454170]: pam_unix(sudo:session): session closed for user root
Nov 25 17:50:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3836: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:50:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:50:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:50:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:50:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:50:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:50:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:50:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:50:40
Nov 25 17:50:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:50:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:50:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'images', 'default.rgw.meta', 'backups', 'volumes', 'vms', 'default.rgw.control', 'default.rgw.log']
Nov 25 17:50:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:50:40 compute-0 nova_compute[254092]: 2025-11-25 17:50:40.661 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:50:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:50:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:50:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:50:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:50:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:50:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:50:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:50:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:50:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:50:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:50:41 compute-0 ceph-mon[74985]: pgmap v3836: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3837: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:41 compute-0 nova_compute[254092]: 2025-11-25 17:50:41.986 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:50:43 compute-0 ceph-mon[74985]: pgmap v3837: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3838: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:45 compute-0 ceph-mon[74985]: pgmap v3838: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:50:45 compute-0 nova_compute[254092]: 2025-11-25 17:50:45.662 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:50:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3839: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:46 compute-0 ceph-mon[74985]: pgmap v3839: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:47 compute-0 nova_compute[254092]: 2025-11-25 17:50:47.038 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:50:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3840: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:49 compute-0 ceph-mon[74985]: pgmap v3840: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3841: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:50:50 compute-0 nova_compute[254092]: 2025-11-25 17:50:50.664 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:50:51 compute-0 ceph-mon[74985]: pgmap v3841: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3842: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:52 compute-0 nova_compute[254092]: 2025-11-25 17:50:52.101 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:50:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:50:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:50:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:50:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:50:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 17:50:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:50:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:50:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:50:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:50:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:50:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 17:50:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:50:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:50:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:50:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:50:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:50:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:50:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:50:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:50:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:50:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:50:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:50:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:50:53 compute-0 ceph-mon[74985]: pgmap v3842: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:53 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3843: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:55 compute-0 ceph-mon[74985]: pgmap v3843: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:50:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:50:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/229368862' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:50:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:50:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/229368862' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:50:55 compute-0 nova_compute[254092]: 2025-11-25 17:50:55.666 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:50:55 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3844: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/229368862' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:50:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/229368862' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:50:57 compute-0 ceph-mon[74985]: pgmap v3844: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:57 compute-0 nova_compute[254092]: 2025-11-25 17:50:57.141 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:50:57 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3845: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:58 compute-0 podman[454196]: 2025-11-25 17:50:58.680032183 +0000 UTC m=+0.078577441 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 17:50:58 compute-0 podman[454195]: 2025-11-25 17:50:58.695191145 +0000 UTC m=+0.093656751 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 25 17:50:58 compute-0 podman[454197]: 2025-11-25 17:50:58.70714239 +0000 UTC m=+0.107326512 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Nov 25 17:50:59 compute-0 ceph-mon[74985]: pgmap v3845: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:50:59 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3846: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:51:00 compute-0 nova_compute[254092]: 2025-11-25 17:51:00.668 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:51:01 compute-0 ceph-mon[74985]: pgmap v3846: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:01 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3847: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:02 compute-0 nova_compute[254092]: 2025-11-25 17:51:02.168 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:51:03 compute-0 ceph-mon[74985]: pgmap v3847: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:03 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3848: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:51:05 compute-0 ceph-mon[74985]: pgmap v3848: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:05 compute-0 nova_compute[254092]: 2025-11-25 17:51:05.669 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:51:05 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3849: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:07 compute-0 ceph-mon[74985]: pgmap v3849: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:07 compute-0 nova_compute[254092]: 2025-11-25 17:51:07.171 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:51:07 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3850: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:08 compute-0 ceph-mon[74985]: pgmap v3850: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:09 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3851: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:51:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:51:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:51:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:51:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:51:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:51:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:51:10 compute-0 nova_compute[254092]: 2025-11-25 17:51:10.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:51:10 compute-0 nova_compute[254092]: 2025-11-25 17:51:10.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:51:10 compute-0 nova_compute[254092]: 2025-11-25 17:51:10.695 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:51:11 compute-0 ceph-mon[74985]: pgmap v3851: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:11 compute-0 nova_compute[254092]: 2025-11-25 17:51:11.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:51:11 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3852: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:12 compute-0 nova_compute[254092]: 2025-11-25 17:51:12.221 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:51:13 compute-0 ceph-mon[74985]: pgmap v3852: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:51:13.696 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:51:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:51:13.696 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:51:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:51:13.696 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:51:13 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3853: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:15 compute-0 ceph-mon[74985]: pgmap v3853: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:51:15 compute-0 nova_compute[254092]: 2025-11-25 17:51:15.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:51:15 compute-0 nova_compute[254092]: 2025-11-25 17:51:15.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:51:15 compute-0 nova_compute[254092]: 2025-11-25 17:51:15.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:51:15 compute-0 nova_compute[254092]: 2025-11-25 17:51:15.509 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 17:51:15 compute-0 nova_compute[254092]: 2025-11-25 17:51:15.695 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:51:15 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3854: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:17 compute-0 ceph-mon[74985]: pgmap v3854: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:17 compute-0 nova_compute[254092]: 2025-11-25 17:51:17.278 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:51:17 compute-0 nova_compute[254092]: 2025-11-25 17:51:17.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:51:17 compute-0 nova_compute[254092]: 2025-11-25 17:51:17.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:51:17 compute-0 nova_compute[254092]: 2025-11-25 17:51:17.538 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:51:17 compute-0 nova_compute[254092]: 2025-11-25 17:51:17.539 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:51:17 compute-0 nova_compute[254092]: 2025-11-25 17:51:17.539 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:51:17 compute-0 nova_compute[254092]: 2025-11-25 17:51:17.540 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:51:17 compute-0 nova_compute[254092]: 2025-11-25 17:51:17.540 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:51:17 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3855: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:17 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:51:17 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/350217163' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:51:18 compute-0 nova_compute[254092]: 2025-11-25 17:51:18.007 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:51:18 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/350217163' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:51:18 compute-0 nova_compute[254092]: 2025-11-25 17:51:18.231 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:51:18 compute-0 nova_compute[254092]: 2025-11-25 17:51:18.233 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3618MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:51:18 compute-0 nova_compute[254092]: 2025-11-25 17:51:18.234 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:51:18 compute-0 nova_compute[254092]: 2025-11-25 17:51:18.235 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:51:18 compute-0 nova_compute[254092]: 2025-11-25 17:51:18.406 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:51:18 compute-0 nova_compute[254092]: 2025-11-25 17:51:18.407 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:51:18 compute-0 nova_compute[254092]: 2025-11-25 17:51:18.599 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing inventories for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 17:51:18 compute-0 nova_compute[254092]: 2025-11-25 17:51:18.700 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating ProviderTree inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 17:51:18 compute-0 nova_compute[254092]: 2025-11-25 17:51:18.701 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 17:51:18 compute-0 nova_compute[254092]: 2025-11-25 17:51:18.717 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing aggregate associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 17:51:18 compute-0 nova_compute[254092]: 2025-11-25 17:51:18.755 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing trait associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 17:51:18 compute-0 nova_compute[254092]: 2025-11-25 17:51:18.774 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:51:19 compute-0 ceph-mon[74985]: pgmap v3855: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:51:19 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/680698078' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:51:19 compute-0 nova_compute[254092]: 2025-11-25 17:51:19.246 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:51:19 compute-0 nova_compute[254092]: 2025-11-25 17:51:19.252 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:51:19 compute-0 nova_compute[254092]: 2025-11-25 17:51:19.265 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:51:19 compute-0 nova_compute[254092]: 2025-11-25 17:51:19.266 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:51:19 compute-0 nova_compute[254092]: 2025-11-25 17:51:19.266 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.032s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:51:19 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3856: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:20 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/680698078' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:51:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:51:20 compute-0 nova_compute[254092]: 2025-11-25 17:51:20.266 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:51:20 compute-0 nova_compute[254092]: 2025-11-25 17:51:20.267 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:51:20 compute-0 nova_compute[254092]: 2025-11-25 17:51:20.267 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:51:20 compute-0 nova_compute[254092]: 2025-11-25 17:51:20.749 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:51:21 compute-0 ceph-mon[74985]: pgmap v3856: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:21 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3857: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:22 compute-0 nova_compute[254092]: 2025-11-25 17:51:22.279 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:51:23 compute-0 ceph-mon[74985]: pgmap v3857: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:23 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3858: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:51:25 compute-0 ceph-mon[74985]: pgmap v3858: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:25 compute-0 nova_compute[254092]: 2025-11-25 17:51:25.750 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:51:25 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3859: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:27 compute-0 ceph-mon[74985]: pgmap v3859: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:27 compute-0 nova_compute[254092]: 2025-11-25 17:51:27.282 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:51:27 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3860: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:28 compute-0 ceph-mon[74985]: pgmap v3860: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:28 compute-0 nova_compute[254092]: 2025-11-25 17:51:28.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:51:29 compute-0 podman[454304]: 2025-11-25 17:51:29.679543669 +0000 UTC m=+0.084622634 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 25 17:51:29 compute-0 podman[454305]: 2025-11-25 17:51:29.681962185 +0000 UTC m=+0.085348374 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 17:51:29 compute-0 podman[454306]: 2025-11-25 17:51:29.696496941 +0000 UTC m=+0.104138556 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 17:51:29 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3861: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:51:30 compute-0 nova_compute[254092]: 2025-11-25 17:51:30.752 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:51:31 compute-0 ceph-mon[74985]: pgmap v3861: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:31 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #186. Immutable memtables: 0.
Nov 25 17:51:31 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:51:31.057352) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 17:51:31 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 115] Flushing memtable with next log file: 186
Nov 25 17:51:31 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093091057417, "job": 115, "event": "flush_started", "num_memtables": 1, "num_entries": 750, "num_deletes": 251, "total_data_size": 951689, "memory_usage": 965608, "flush_reason": "Manual Compaction"}
Nov 25 17:51:31 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 115] Level-0 flush table #187: started
Nov 25 17:51:31 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093091067895, "cf_name": "default", "job": 115, "event": "table_file_creation", "file_number": 187, "file_size": 931728, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 79007, "largest_seqno": 79756, "table_properties": {"data_size": 927844, "index_size": 1663, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8670, "raw_average_key_size": 19, "raw_value_size": 920112, "raw_average_value_size": 2063, "num_data_blocks": 74, "num_entries": 446, "num_filter_entries": 446, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764093031, "oldest_key_time": 1764093031, "file_creation_time": 1764093091, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 187, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:51:31 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 115] Flush lasted 10614 microseconds, and 6407 cpu microseconds.
Nov 25 17:51:31 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:51:31 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:51:31.067962) [db/flush_job.cc:967] [default] [JOB 115] Level-0 flush table #187: 931728 bytes OK
Nov 25 17:51:31 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:51:31.068001) [db/memtable_list.cc:519] [default] Level-0 commit table #187 started
Nov 25 17:51:31 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:51:31.069398) [db/memtable_list.cc:722] [default] Level-0 commit table #187: memtable #1 done
Nov 25 17:51:31 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:51:31.069426) EVENT_LOG_v1 {"time_micros": 1764093091069417, "job": 115, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 17:51:31 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:51:31.069458) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 17:51:31 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 115] Try to delete WAL files size 947868, prev total WAL file size 947868, number of live WAL files 2.
Nov 25 17:51:31 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000183.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:51:31 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:51:31.070500) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037373831' seq:72057594037927935, type:22 .. '7061786F730038303333' seq:0, type:0; will stop at (end)
Nov 25 17:51:31 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 116] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 17:51:31 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 115 Base level 0, inputs: [187(909KB)], [185(10038KB)]
Nov 25 17:51:31 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093091070562, "job": 116, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [187], "files_L6": [185], "score": -1, "input_data_size": 11211007, "oldest_snapshot_seqno": -1}
Nov 25 17:51:31 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 116] Generated table #188: 9479 keys, 9467891 bytes, temperature: kUnknown
Nov 25 17:51:31 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093091142424, "cf_name": "default", "job": 116, "event": "table_file_creation", "file_number": 188, "file_size": 9467891, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9411331, "index_size": 31743, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23749, "raw_key_size": 249954, "raw_average_key_size": 26, "raw_value_size": 9248694, "raw_average_value_size": 975, "num_data_blocks": 1213, "num_entries": 9479, "num_filter_entries": 9479, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764093091, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 188, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:51:31 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:51:31 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:51:31.143687) [db/compaction/compaction_job.cc:1663] [default] [JOB 116] Compacted 1@0 + 1@6 files to L6 => 9467891 bytes
Nov 25 17:51:31 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:51:31.145372) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 154.4 rd, 130.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 9.8 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(22.2) write-amplify(10.2) OK, records in: 9992, records dropped: 513 output_compression: NoCompression
Nov 25 17:51:31 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:51:31.145406) EVENT_LOG_v1 {"time_micros": 1764093091145390, "job": 116, "event": "compaction_finished", "compaction_time_micros": 72606, "compaction_time_cpu_micros": 54641, "output_level": 6, "num_output_files": 1, "total_output_size": 9467891, "num_input_records": 9992, "num_output_records": 9479, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 17:51:31 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000187.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:51:31 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093091145997, "job": 116, "event": "table_file_deletion", "file_number": 187}
Nov 25 17:51:31 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000185.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:51:31 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093091149483, "job": 116, "event": "table_file_deletion", "file_number": 185}
Nov 25 17:51:31 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:51:31.070313) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:51:31 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:51:31.149547) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:51:31 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:51:31.149556) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:51:31 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:51:31.149560) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:51:31 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:51:31.149563) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:51:31 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:51:31.149567) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:51:31 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3862: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:32 compute-0 nova_compute[254092]: 2025-11-25 17:51:32.322 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:51:33 compute-0 ceph-mon[74985]: pgmap v3862: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:33 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3863: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:35 compute-0 ceph-mon[74985]: pgmap v3863: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:51:35 compute-0 nova_compute[254092]: 2025-11-25 17:51:35.755 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:51:35 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3864: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:37 compute-0 ceph-mon[74985]: pgmap v3864: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:37 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 17:51:37 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.1 total, 600.0 interval
                                           Cumulative writes: 17K writes, 79K keys, 17K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.02 MB/s
                                           Cumulative WAL: 17K writes, 17K syncs, 1.00 writes per sync, written: 0.11 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1312 writes, 6191 keys, 1312 commit groups, 1.0 writes per commit group, ingest: 8.55 MB, 0.01 MB/s
                                           Interval WAL: 1312 writes, 1312 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     39.5      2.44              0.34        58    0.042       0      0       0.0       0.0
                                             L6      1/0    9.03 MB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   5.1    130.3    110.6      4.44              1.51        57    0.078    411K    30K       0.0       0.0
                                            Sum      1/0    9.03 MB   0.0      0.6     0.1      0.5       0.6      0.1       0.0   6.1     84.1     85.4      6.88              1.84       115    0.060    411K    30K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.2    154.0    150.2      0.44              0.23        12    0.037     58K   2955       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   0.0    130.3    110.6      4.44              1.51        57    0.078    411K    30K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     40.3      2.39              0.34        57    0.042       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.1      0.05              0.00         1    0.047       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 7200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.094, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.57 GB write, 0.08 MB/s write, 0.56 GB read, 0.08 MB/s read, 6.9 seconds
                                           Interval compaction: 0.06 GB write, 0.11 MB/s write, 0.07 GB read, 0.11 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563cc83831f0#2 capacity: 304.00 MB usage: 67.15 MB table_size: 0 occupancy: 18446744073709551615 collections: 13 last_copies: 0 last_secs: 0.000713 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(4320,64.28 MB,21.1438%) FilterBlock(116,1.11 MB,0.363716%) IndexBlock(116,1.77 MB,0.581325%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 25 17:51:37 compute-0 nova_compute[254092]: 2025-11-25 17:51:37.358 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:51:37 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3865: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:38 compute-0 ceph-mon[74985]: pgmap v3865: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:39 compute-0 sudo[454368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:51:39 compute-0 sudo[454368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:51:39 compute-0 sudo[454368]: pam_unix(sudo:session): session closed for user root
Nov 25 17:51:39 compute-0 sudo[454393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:51:39 compute-0 sudo[454393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:51:39 compute-0 sudo[454393]: pam_unix(sudo:session): session closed for user root
Nov 25 17:51:39 compute-0 sudo[454418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:51:39 compute-0 sudo[454418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:51:39 compute-0 sudo[454418]: pam_unix(sudo:session): session closed for user root
Nov 25 17:51:39 compute-0 sudo[454443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:51:39 compute-0 sudo[454443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:51:39 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3866: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:40 compute-0 sudo[454443]: pam_unix(sudo:session): session closed for user root
Nov 25 17:51:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:51:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:51:40 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:51:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:51:40 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:51:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:51:40 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:51:40 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 31867035-aa9c-43a0-80f0-39baf80704ea does not exist
Nov 25 17:51:40 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev b9f131e5-af65-4c53-a051-89896c3ed3c5 does not exist
Nov 25 17:51:40 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 0639f5ff-b1d5-4f2f-8137-025aae8a19ca does not exist
Nov 25 17:51:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:51:40 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:51:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:51:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:51:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:51:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:51:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:51:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:51:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:51:40 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:51:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:51:40 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:51:40 compute-0 sudo[454499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:51:40 compute-0 sudo[454499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:51:40 compute-0 sudo[454499]: pam_unix(sudo:session): session closed for user root
Nov 25 17:51:40 compute-0 sudo[454524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:51:40 compute-0 sudo[454524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:51:40 compute-0 sudo[454524]: pam_unix(sudo:session): session closed for user root
Nov 25 17:51:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:51:40
Nov 25 17:51:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:51:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:51:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'vms', '.mgr', 'volumes', 'images', '.rgw.root', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data']
Nov 25 17:51:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:51:40 compute-0 sudo[454549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:51:40 compute-0 sudo[454549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:51:40 compute-0 sudo[454549]: pam_unix(sudo:session): session closed for user root
Nov 25 17:51:40 compute-0 sudo[454574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:51:40 compute-0 sudo[454574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:51:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:51:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:51:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:51:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:51:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:51:40 compute-0 nova_compute[254092]: 2025-11-25 17:51:40.799 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:51:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:51:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:51:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:51:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:51:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:51:40 compute-0 podman[454640]: 2025-11-25 17:51:40.888531448 +0000 UTC m=+0.058816872 container create a955d0a2e418e41b9a9347fa7c7fdd8f358a7a10bf04eb6bd77211ec515414c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mendeleev, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:51:40 compute-0 systemd[1]: Started libpod-conmon-a955d0a2e418e41b9a9347fa7c7fdd8f358a7a10bf04eb6bd77211ec515414c7.scope.
Nov 25 17:51:40 compute-0 podman[454640]: 2025-11-25 17:51:40.863780934 +0000 UTC m=+0.034066438 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:51:40 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:51:40 compute-0 podman[454640]: 2025-11-25 17:51:40.990536874 +0000 UTC m=+0.160822338 container init a955d0a2e418e41b9a9347fa7c7fdd8f358a7a10bf04eb6bd77211ec515414c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mendeleev, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 17:51:41 compute-0 podman[454640]: 2025-11-25 17:51:41.000928648 +0000 UTC m=+0.171214082 container start a955d0a2e418e41b9a9347fa7c7fdd8f358a7a10bf04eb6bd77211ec515414c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mendeleev, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 17:51:41 compute-0 podman[454640]: 2025-11-25 17:51:41.004954657 +0000 UTC m=+0.175240171 container attach a955d0a2e418e41b9a9347fa7c7fdd8f358a7a10bf04eb6bd77211ec515414c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mendeleev, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:51:41 compute-0 distracted_mendeleev[454656]: 167 167
Nov 25 17:51:41 compute-0 systemd[1]: libpod-a955d0a2e418e41b9a9347fa7c7fdd8f358a7a10bf04eb6bd77211ec515414c7.scope: Deactivated successfully.
Nov 25 17:51:41 compute-0 conmon[454656]: conmon a955d0a2e418e41b9a93 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a955d0a2e418e41b9a9347fa7c7fdd8f358a7a10bf04eb6bd77211ec515414c7.scope/container/memory.events
Nov 25 17:51:41 compute-0 podman[454640]: 2025-11-25 17:51:41.012332088 +0000 UTC m=+0.182617542 container died a955d0a2e418e41b9a9347fa7c7fdd8f358a7a10bf04eb6bd77211ec515414c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:51:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-da09fd61977de1f7b9bc95c3b6461950d7f04722415b4c48a9f0b1a2709532be-merged.mount: Deactivated successfully.
Nov 25 17:51:41 compute-0 ceph-mon[74985]: pgmap v3866: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:41 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:51:41 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:51:41 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:51:41 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:51:41 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:51:41 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:51:41 compute-0 podman[454640]: 2025-11-25 17:51:41.068078126 +0000 UTC m=+0.238363570 container remove a955d0a2e418e41b9a9347fa7c7fdd8f358a7a10bf04eb6bd77211ec515414c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mendeleev, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:51:41 compute-0 systemd[1]: libpod-conmon-a955d0a2e418e41b9a9347fa7c7fdd8f358a7a10bf04eb6bd77211ec515414c7.scope: Deactivated successfully.
Nov 25 17:51:41 compute-0 podman[454680]: 2025-11-25 17:51:41.298600731 +0000 UTC m=+0.054569917 container create 969e28ded4cde688a80d9d34e034c49b7716c69893bddbedccdc2af792b58f98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_borg, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 17:51:41 compute-0 systemd[1]: Started libpod-conmon-969e28ded4cde688a80d9d34e034c49b7716c69893bddbedccdc2af792b58f98.scope.
Nov 25 17:51:41 compute-0 podman[454680]: 2025-11-25 17:51:41.276423867 +0000 UTC m=+0.032393073 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:51:41 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:51:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f4a0d533081c76de4e4e457aea8be06a8bf4f6c1c1500d4651f02c91c520ff9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:51:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f4a0d533081c76de4e4e457aea8be06a8bf4f6c1c1500d4651f02c91c520ff9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:51:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f4a0d533081c76de4e4e457aea8be06a8bf4f6c1c1500d4651f02c91c520ff9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:51:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f4a0d533081c76de4e4e457aea8be06a8bf4f6c1c1500d4651f02c91c520ff9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:51:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f4a0d533081c76de4e4e457aea8be06a8bf4f6c1c1500d4651f02c91c520ff9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:51:41 compute-0 podman[454680]: 2025-11-25 17:51:41.416779568 +0000 UTC m=+0.172748784 container init 969e28ded4cde688a80d9d34e034c49b7716c69893bddbedccdc2af792b58f98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_borg, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:51:41 compute-0 podman[454680]: 2025-11-25 17:51:41.431389376 +0000 UTC m=+0.187358552 container start 969e28ded4cde688a80d9d34e034c49b7716c69893bddbedccdc2af792b58f98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 17:51:41 compute-0 podman[454680]: 2025-11-25 17:51:41.434874091 +0000 UTC m=+0.190843287 container attach 969e28ded4cde688a80d9d34e034c49b7716c69893bddbedccdc2af792b58f98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_borg, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:51:41 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3867: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:42 compute-0 nova_compute[254092]: 2025-11-25 17:51:42.361 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:51:42 compute-0 mystifying_borg[454697]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:51:42 compute-0 mystifying_borg[454697]: --> relative data size: 1.0
Nov 25 17:51:42 compute-0 mystifying_borg[454697]: --> All data devices are unavailable
Nov 25 17:51:42 compute-0 systemd[1]: libpod-969e28ded4cde688a80d9d34e034c49b7716c69893bddbedccdc2af792b58f98.scope: Deactivated successfully.
Nov 25 17:51:42 compute-0 podman[454680]: 2025-11-25 17:51:42.573530869 +0000 UTC m=+1.329500065 container died 969e28ded4cde688a80d9d34e034c49b7716c69893bddbedccdc2af792b58f98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_borg, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 17:51:42 compute-0 systemd[1]: libpod-969e28ded4cde688a80d9d34e034c49b7716c69893bddbedccdc2af792b58f98.scope: Consumed 1.092s CPU time.
Nov 25 17:51:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f4a0d533081c76de4e4e457aea8be06a8bf4f6c1c1500d4651f02c91c520ff9-merged.mount: Deactivated successfully.
Nov 25 17:51:42 compute-0 podman[454680]: 2025-11-25 17:51:42.662622955 +0000 UTC m=+1.418592121 container remove 969e28ded4cde688a80d9d34e034c49b7716c69893bddbedccdc2af792b58f98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 17:51:42 compute-0 systemd[1]: libpod-conmon-969e28ded4cde688a80d9d34e034c49b7716c69893bddbedccdc2af792b58f98.scope: Deactivated successfully.
Nov 25 17:51:42 compute-0 sudo[454574]: pam_unix(sudo:session): session closed for user root
Nov 25 17:51:42 compute-0 sudo[454738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:51:42 compute-0 sudo[454738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:51:42 compute-0 sudo[454738]: pam_unix(sudo:session): session closed for user root
Nov 25 17:51:42 compute-0 sudo[454763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:51:42 compute-0 sudo[454763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:51:42 compute-0 sudo[454763]: pam_unix(sudo:session): session closed for user root
Nov 25 17:51:42 compute-0 sudo[454788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:51:42 compute-0 sudo[454788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:51:42 compute-0 sudo[454788]: pam_unix(sudo:session): session closed for user root
Nov 25 17:51:43 compute-0 ceph-mon[74985]: pgmap v3867: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:43 compute-0 sudo[454813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:51:43 compute-0 sudo[454813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:51:43 compute-0 podman[454880]: 2025-11-25 17:51:43.51247896 +0000 UTC m=+0.050112085 container create 4523815b3d9c2604e57da7424fc2b552e44cd5c86502d57c289662e845fe8068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_cray, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:51:43 compute-0 systemd[1]: Started libpod-conmon-4523815b3d9c2604e57da7424fc2b552e44cd5c86502d57c289662e845fe8068.scope.
Nov 25 17:51:43 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:51:43 compute-0 podman[454880]: 2025-11-25 17:51:43.492568829 +0000 UTC m=+0.030202004 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:51:43 compute-0 podman[454880]: 2025-11-25 17:51:43.609615195 +0000 UTC m=+0.147248360 container init 4523815b3d9c2604e57da7424fc2b552e44cd5c86502d57c289662e845fe8068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 17:51:43 compute-0 podman[454880]: 2025-11-25 17:51:43.6204361 +0000 UTC m=+0.158069225 container start 4523815b3d9c2604e57da7424fc2b552e44cd5c86502d57c289662e845fe8068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_cray, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 17:51:43 compute-0 podman[454880]: 2025-11-25 17:51:43.623464302 +0000 UTC m=+0.161097467 container attach 4523815b3d9c2604e57da7424fc2b552e44cd5c86502d57c289662e845fe8068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_cray, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 17:51:43 compute-0 objective_cray[454896]: 167 167
Nov 25 17:51:43 compute-0 systemd[1]: libpod-4523815b3d9c2604e57da7424fc2b552e44cd5c86502d57c289662e845fe8068.scope: Deactivated successfully.
Nov 25 17:51:43 compute-0 podman[454901]: 2025-11-25 17:51:43.677406671 +0000 UTC m=+0.033632107 container died 4523815b3d9c2604e57da7424fc2b552e44cd5c86502d57c289662e845fe8068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_cray, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 17:51:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-a55993b95c074cde371746373e1453f4f1e0697bc7a83d86116a3b67bd43781b-merged.mount: Deactivated successfully.
Nov 25 17:51:43 compute-0 podman[454901]: 2025-11-25 17:51:43.736431947 +0000 UTC m=+0.092657323 container remove 4523815b3d9c2604e57da7424fc2b552e44cd5c86502d57c289662e845fe8068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_cray, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:51:43 compute-0 systemd[1]: libpod-conmon-4523815b3d9c2604e57da7424fc2b552e44cd5c86502d57c289662e845fe8068.scope: Deactivated successfully.
Nov 25 17:51:43 compute-0 podman[454923]: 2025-11-25 17:51:43.95839459 +0000 UTC m=+0.053923859 container create f96a153411736f6d32f64488d525967ee6283db12046c87e6a278958f9a50a87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:51:43 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3868: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:43 compute-0 systemd[1]: Started libpod-conmon-f96a153411736f6d32f64488d525967ee6283db12046c87e6a278958f9a50a87.scope.
Nov 25 17:51:44 compute-0 podman[454923]: 2025-11-25 17:51:43.938663683 +0000 UTC m=+0.034192982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:51:44 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:51:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47a04cd98123c2a4badb2cf2d578f5cc47d960432ede9563e9a68cbf58f24544/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:51:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47a04cd98123c2a4badb2cf2d578f5cc47d960432ede9563e9a68cbf58f24544/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:51:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47a04cd98123c2a4badb2cf2d578f5cc47d960432ede9563e9a68cbf58f24544/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:51:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47a04cd98123c2a4badb2cf2d578f5cc47d960432ede9563e9a68cbf58f24544/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:51:44 compute-0 podman[454923]: 2025-11-25 17:51:44.061062145 +0000 UTC m=+0.156591444 container init f96a153411736f6d32f64488d525967ee6283db12046c87e6a278958f9a50a87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 17:51:44 compute-0 podman[454923]: 2025-11-25 17:51:44.067785848 +0000 UTC m=+0.163315127 container start f96a153411736f6d32f64488d525967ee6283db12046c87e6a278958f9a50a87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 17:51:44 compute-0 podman[454923]: 2025-11-25 17:51:44.074198573 +0000 UTC m=+0.169727842 container attach f96a153411736f6d32f64488d525967ee6283db12046c87e6a278958f9a50a87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]: {
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:     "0": [
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:         {
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:             "devices": [
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:                 "/dev/loop3"
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:             ],
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:             "lv_name": "ceph_lv0",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:             "lv_size": "21470642176",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:             "name": "ceph_lv0",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:             "tags": {
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:                 "ceph.cluster_name": "ceph",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:                 "ceph.crush_device_class": "",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:                 "ceph.encrypted": "0",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:                 "ceph.osd_id": "0",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:                 "ceph.type": "block",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:                 "ceph.vdo": "0"
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:             },
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:             "type": "block",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:             "vg_name": "ceph_vg0"
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:         }
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:     ],
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:     "1": [
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:         {
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:             "devices": [
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:                 "/dev/loop4"
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:             ],
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:             "lv_name": "ceph_lv1",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:             "lv_size": "21470642176",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:             "name": "ceph_lv1",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:             "tags": {
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:                 "ceph.cluster_name": "ceph",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:                 "ceph.crush_device_class": "",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:                 "ceph.encrypted": "0",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:                 "ceph.osd_id": "1",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:                 "ceph.type": "block",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:                 "ceph.vdo": "0"
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:             },
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:             "type": "block",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:             "vg_name": "ceph_vg1"
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:         }
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:     ],
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:     "2": [
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:         {
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:             "devices": [
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:                 "/dev/loop5"
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:             ],
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:             "lv_name": "ceph_lv2",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:             "lv_size": "21470642176",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:             "name": "ceph_lv2",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:             "tags": {
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:                 "ceph.cluster_name": "ceph",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:                 "ceph.crush_device_class": "",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:                 "ceph.encrypted": "0",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:                 "ceph.osd_id": "2",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:                 "ceph.type": "block",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:                 "ceph.vdo": "0"
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:             },
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:             "type": "block",
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:             "vg_name": "ceph_vg2"
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:         }
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]:     ]
Nov 25 17:51:44 compute-0 unruffled_blackwell[454939]: }
Nov 25 17:51:44 compute-0 systemd[1]: libpod-f96a153411736f6d32f64488d525967ee6283db12046c87e6a278958f9a50a87.scope: Deactivated successfully.
Nov 25 17:51:44 compute-0 podman[454923]: 2025-11-25 17:51:44.902198164 +0000 UTC m=+0.997727523 container died f96a153411736f6d32f64488d525967ee6283db12046c87e6a278958f9a50a87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 17:51:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-47a04cd98123c2a4badb2cf2d578f5cc47d960432ede9563e9a68cbf58f24544-merged.mount: Deactivated successfully.
Nov 25 17:51:44 compute-0 podman[454923]: 2025-11-25 17:51:44.963585315 +0000 UTC m=+1.059114584 container remove f96a153411736f6d32f64488d525967ee6283db12046c87e6a278958f9a50a87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_blackwell, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 17:51:44 compute-0 systemd[1]: libpod-conmon-f96a153411736f6d32f64488d525967ee6283db12046c87e6a278958f9a50a87.scope: Deactivated successfully.
Nov 25 17:51:45 compute-0 sudo[454813]: pam_unix(sudo:session): session closed for user root
Nov 25 17:51:45 compute-0 ceph-mon[74985]: pgmap v3868: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:51:45 compute-0 sudo[454961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:51:45 compute-0 sudo[454961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:51:45 compute-0 sudo[454961]: pam_unix(sudo:session): session closed for user root
Nov 25 17:51:45 compute-0 sudo[454986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:51:45 compute-0 sudo[454986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:51:45 compute-0 sudo[454986]: pam_unix(sudo:session): session closed for user root
Nov 25 17:51:45 compute-0 sudo[455011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:51:45 compute-0 sudo[455011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:51:45 compute-0 sudo[455011]: pam_unix(sudo:session): session closed for user root
Nov 25 17:51:45 compute-0 sudo[455036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:51:45 compute-0 sudo[455036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:51:45 compute-0 nova_compute[254092]: 2025-11-25 17:51:45.801 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:51:45 compute-0 podman[455101]: 2025-11-25 17:51:45.860600125 +0000 UTC m=+0.060030845 container create 8b6999763d4bca9da036dd4cbc926c137ad626e54f02abcf208d67a26a50c8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:51:45 compute-0 systemd[1]: Started libpod-conmon-8b6999763d4bca9da036dd4cbc926c137ad626e54f02abcf208d67a26a50c8aa.scope.
Nov 25 17:51:45 compute-0 podman[455101]: 2025-11-25 17:51:45.835199173 +0000 UTC m=+0.034629883 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:51:45 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:51:45 compute-0 podman[455101]: 2025-11-25 17:51:45.961017479 +0000 UTC m=+0.160448209 container init 8b6999763d4bca9da036dd4cbc926c137ad626e54f02abcf208d67a26a50c8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_davinci, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:51:45 compute-0 podman[455101]: 2025-11-25 17:51:45.970690722 +0000 UTC m=+0.170121442 container start 8b6999763d4bca9da036dd4cbc926c137ad626e54f02abcf208d67a26a50c8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_davinci, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 17:51:45 compute-0 podman[455101]: 2025-11-25 17:51:45.97541583 +0000 UTC m=+0.174846570 container attach 8b6999763d4bca9da036dd4cbc926c137ad626e54f02abcf208d67a26a50c8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_davinci, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:51:45 compute-0 musing_davinci[455118]: 167 167
Nov 25 17:51:45 compute-0 systemd[1]: libpod-8b6999763d4bca9da036dd4cbc926c137ad626e54f02abcf208d67a26a50c8aa.scope: Deactivated successfully.
Nov 25 17:51:45 compute-0 podman[455101]: 2025-11-25 17:51:45.978865485 +0000 UTC m=+0.178296215 container died 8b6999763d4bca9da036dd4cbc926c137ad626e54f02abcf208d67a26a50c8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 17:51:45 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3869: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f7c184eab5a43cf268f16e90352d2ed20123f05cb93084c7aa991683c4c545f-merged.mount: Deactivated successfully.
Nov 25 17:51:46 compute-0 podman[455101]: 2025-11-25 17:51:46.032160696 +0000 UTC m=+0.231591396 container remove 8b6999763d4bca9da036dd4cbc926c137ad626e54f02abcf208d67a26a50c8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_davinci, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:51:46 compute-0 systemd[1]: libpod-conmon-8b6999763d4bca9da036dd4cbc926c137ad626e54f02abcf208d67a26a50c8aa.scope: Deactivated successfully.
Nov 25 17:51:46 compute-0 podman[455141]: 2025-11-25 17:51:46.248418583 +0000 UTC m=+0.044017030 container create 3f4ce54d764ce79148a84bf2e7888a4b0d4bf291ef7b846c2398966b0903fdbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:51:46 compute-0 systemd[1]: Started libpod-conmon-3f4ce54d764ce79148a84bf2e7888a4b0d4bf291ef7b846c2398966b0903fdbd.scope.
Nov 25 17:51:46 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:51:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7769418098ca96cf3d42825dd370d81816a611a9fb5178d6bb0b45d43fe2c1f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:51:46 compute-0 podman[455141]: 2025-11-25 17:51:46.233798595 +0000 UTC m=+0.029397062 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:51:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7769418098ca96cf3d42825dd370d81816a611a9fb5178d6bb0b45d43fe2c1f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:51:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7769418098ca96cf3d42825dd370d81816a611a9fb5178d6bb0b45d43fe2c1f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:51:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7769418098ca96cf3d42825dd370d81816a611a9fb5178d6bb0b45d43fe2c1f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:51:46 compute-0 podman[455141]: 2025-11-25 17:51:46.345722922 +0000 UTC m=+0.141321379 container init 3f4ce54d764ce79148a84bf2e7888a4b0d4bf291ef7b846c2398966b0903fdbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 17:51:46 compute-0 podman[455141]: 2025-11-25 17:51:46.354301676 +0000 UTC m=+0.149900133 container start 3f4ce54d764ce79148a84bf2e7888a4b0d4bf291ef7b846c2398966b0903fdbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lederberg, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Nov 25 17:51:46 compute-0 podman[455141]: 2025-11-25 17:51:46.357152463 +0000 UTC m=+0.152750920 container attach 3f4ce54d764ce79148a84bf2e7888a4b0d4bf291ef7b846c2398966b0903fdbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lederberg, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 17:51:47 compute-0 ceph-mon[74985]: pgmap v3869: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:47 compute-0 nova_compute[254092]: 2025-11-25 17:51:47.396 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:51:47 compute-0 brave_lederberg[455158]: {
Nov 25 17:51:47 compute-0 brave_lederberg[455158]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:51:47 compute-0 brave_lederberg[455158]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:51:47 compute-0 brave_lederberg[455158]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:51:47 compute-0 brave_lederberg[455158]:         "osd_id": 1,
Nov 25 17:51:47 compute-0 brave_lederberg[455158]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:51:47 compute-0 brave_lederberg[455158]:         "type": "bluestore"
Nov 25 17:51:47 compute-0 brave_lederberg[455158]:     },
Nov 25 17:51:47 compute-0 brave_lederberg[455158]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:51:47 compute-0 brave_lederberg[455158]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:51:47 compute-0 brave_lederberg[455158]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:51:47 compute-0 brave_lederberg[455158]:         "osd_id": 2,
Nov 25 17:51:47 compute-0 brave_lederberg[455158]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:51:47 compute-0 brave_lederberg[455158]:         "type": "bluestore"
Nov 25 17:51:47 compute-0 brave_lederberg[455158]:     },
Nov 25 17:51:47 compute-0 brave_lederberg[455158]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:51:47 compute-0 brave_lederberg[455158]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:51:47 compute-0 brave_lederberg[455158]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:51:47 compute-0 brave_lederberg[455158]:         "osd_id": 0,
Nov 25 17:51:47 compute-0 brave_lederberg[455158]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:51:47 compute-0 brave_lederberg[455158]:         "type": "bluestore"
Nov 25 17:51:47 compute-0 brave_lederberg[455158]:     }
Nov 25 17:51:47 compute-0 brave_lederberg[455158]: }
Nov 25 17:51:47 compute-0 systemd[1]: libpod-3f4ce54d764ce79148a84bf2e7888a4b0d4bf291ef7b846c2398966b0903fdbd.scope: Deactivated successfully.
Nov 25 17:51:47 compute-0 systemd[1]: libpod-3f4ce54d764ce79148a84bf2e7888a4b0d4bf291ef7b846c2398966b0903fdbd.scope: Consumed 1.224s CPU time.
Nov 25 17:51:47 compute-0 podman[455141]: 2025-11-25 17:51:47.571678296 +0000 UTC m=+1.367276773 container died 3f4ce54d764ce79148a84bf2e7888a4b0d4bf291ef7b846c2398966b0903fdbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lederberg, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 17:51:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7769418098ca96cf3d42825dd370d81816a611a9fb5178d6bb0b45d43fe2c1f-merged.mount: Deactivated successfully.
Nov 25 17:51:47 compute-0 podman[455141]: 2025-11-25 17:51:47.657278307 +0000 UTC m=+1.452876754 container remove 3f4ce54d764ce79148a84bf2e7888a4b0d4bf291ef7b846c2398966b0903fdbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lederberg, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:51:47 compute-0 systemd[1]: libpod-conmon-3f4ce54d764ce79148a84bf2e7888a4b0d4bf291ef7b846c2398966b0903fdbd.scope: Deactivated successfully.
Nov 25 17:51:47 compute-0 sudo[455036]: pam_unix(sudo:session): session closed for user root
Nov 25 17:51:47 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:51:47 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:51:47 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:51:47 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:51:47 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 966f6b15-0798-4064-9f59-986d8d24e34d does not exist
Nov 25 17:51:47 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 9a4109b7-42a7-4bf1-97a6-9f44e04c56e7 does not exist
Nov 25 17:51:47 compute-0 sudo[455205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:51:47 compute-0 sudo[455205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:51:47 compute-0 sudo[455205]: pam_unix(sudo:session): session closed for user root
Nov 25 17:51:47 compute-0 sudo[455230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:51:47 compute-0 sudo[455230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:51:47 compute-0 sudo[455230]: pam_unix(sudo:session): session closed for user root
Nov 25 17:51:47 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3870: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:48 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:51:48 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:51:48 compute-0 ceph-mon[74985]: pgmap v3870: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:49 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3871: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:51:50 compute-0 nova_compute[254092]: 2025-11-25 17:51:50.803 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:51:51 compute-0 ceph-mon[74985]: pgmap v3871: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:51 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3872: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:51:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:51:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:51:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:51:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 17:51:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:51:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:51:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:51:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:51:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:51:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 17:51:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:51:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:51:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:51:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:51:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:51:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:51:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:51:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:51:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:51:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:51:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:51:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:51:52 compute-0 nova_compute[254092]: 2025-11-25 17:51:52.401 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:51:53 compute-0 ceph-mon[74985]: pgmap v3872: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3873: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:55 compute-0 ceph-mon[74985]: pgmap v3873: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:51:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:51:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3391902824' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:51:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:51:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3391902824' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:51:55 compute-0 nova_compute[254092]: 2025-11-25 17:51:55.805 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:51:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3874: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/3391902824' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:51:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/3391902824' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:51:57 compute-0 ceph-mon[74985]: pgmap v3874: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:57 compute-0 nova_compute[254092]: 2025-11-25 17:51:57.403 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:51:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3875: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:51:59 compute-0 ceph-mon[74985]: pgmap v3875: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3876: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:52:00 compute-0 podman[455256]: 2025-11-25 17:52:00.674013302 +0000 UTC m=+0.081463389 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:52:00 compute-0 podman[455255]: 2025-11-25 17:52:00.716627042 +0000 UTC m=+0.128108488 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 17:52:00 compute-0 podman[455257]: 2025-11-25 17:52:00.725673299 +0000 UTC m=+0.131723067 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 17:52:00 compute-0 nova_compute[254092]: 2025-11-25 17:52:00.847 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:52:01 compute-0 ceph-mon[74985]: pgmap v3876: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3877: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:02 compute-0 ceph-mon[74985]: pgmap v3877: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:02 compute-0 nova_compute[254092]: 2025-11-25 17:52:02.406 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:52:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3878: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:05 compute-0 ceph-mon[74985]: pgmap v3878: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:52:05 compute-0 nova_compute[254092]: 2025-11-25 17:52:05.850 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:52:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3879: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:07 compute-0 ceph-mon[74985]: pgmap v3879: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:07 compute-0 nova_compute[254092]: 2025-11-25 17:52:07.443 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:52:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3880: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:09 compute-0 ceph-mon[74985]: pgmap v3880: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3881: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:52:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:52:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:52:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:52:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:52:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:52:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:52:10 compute-0 nova_compute[254092]: 2025-11-25 17:52:10.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:52:10 compute-0 nova_compute[254092]: 2025-11-25 17:52:10.851 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:52:11 compute-0 ceph-mon[74985]: pgmap v3881: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:11 compute-0 nova_compute[254092]: 2025-11-25 17:52:11.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:52:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3882: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:12 compute-0 nova_compute[254092]: 2025-11-25 17:52:12.448 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:52:12 compute-0 nova_compute[254092]: 2025-11-25 17:52:12.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:52:13 compute-0 ceph-mon[74985]: pgmap v3882: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:52:13.697 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:52:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:52:13.698 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:52:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:52:13.698 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:52:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3883: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:52:15 compute-0 ceph-mon[74985]: pgmap v3883: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:15 compute-0 nova_compute[254092]: 2025-11-25 17:52:15.853 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:52:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3884: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:17 compute-0 ceph-mon[74985]: pgmap v3884: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:17 compute-0 nova_compute[254092]: 2025-11-25 17:52:17.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:52:17 compute-0 nova_compute[254092]: 2025-11-25 17:52:17.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:52:17 compute-0 nova_compute[254092]: 2025-11-25 17:52:17.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:52:17 compute-0 nova_compute[254092]: 2025-11-25 17:52:17.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:52:17 compute-0 nova_compute[254092]: 2025-11-25 17:52:17.502 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:52:17 compute-0 nova_compute[254092]: 2025-11-25 17:52:17.661 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 17:52:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3885: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:18 compute-0 ceph-mon[74985]: pgmap v3885: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:19 compute-0 nova_compute[254092]: 2025-11-25 17:52:19.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:52:19 compute-0 nova_compute[254092]: 2025-11-25 17:52:19.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:52:19 compute-0 nova_compute[254092]: 2025-11-25 17:52:19.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:52:19 compute-0 nova_compute[254092]: 2025-11-25 17:52:19.541 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:52:19 compute-0 nova_compute[254092]: 2025-11-25 17:52:19.542 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:52:19 compute-0 nova_compute[254092]: 2025-11-25 17:52:19.542 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:52:19 compute-0 nova_compute[254092]: 2025-11-25 17:52:19.542 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:52:19 compute-0 nova_compute[254092]: 2025-11-25 17:52:19.543 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:52:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:52:19 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2514194135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:52:19 compute-0 nova_compute[254092]: 2025-11-25 17:52:19.987 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:52:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3886: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:20 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2514194135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:52:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:52:20 compute-0 nova_compute[254092]: 2025-11-25 17:52:20.176 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:52:20 compute-0 nova_compute[254092]: 2025-11-25 17:52:20.178 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3613MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:52:20 compute-0 nova_compute[254092]: 2025-11-25 17:52:20.178 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:52:20 compute-0 nova_compute[254092]: 2025-11-25 17:52:20.178 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:52:20 compute-0 nova_compute[254092]: 2025-11-25 17:52:20.248 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:52:20 compute-0 nova_compute[254092]: 2025-11-25 17:52:20.249 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:52:20 compute-0 nova_compute[254092]: 2025-11-25 17:52:20.273 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:52:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:52:20 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3419287978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:52:20 compute-0 nova_compute[254092]: 2025-11-25 17:52:20.764 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:52:20 compute-0 nova_compute[254092]: 2025-11-25 17:52:20.776 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:52:20 compute-0 nova_compute[254092]: 2025-11-25 17:52:20.797 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:52:20 compute-0 nova_compute[254092]: 2025-11-25 17:52:20.800 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:52:20 compute-0 nova_compute[254092]: 2025-11-25 17:52:20.801 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:52:20 compute-0 nova_compute[254092]: 2025-11-25 17:52:20.856 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:52:21 compute-0 ceph-mon[74985]: pgmap v3886: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:21 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3419287978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:52:21 compute-0 nova_compute[254092]: 2025-11-25 17:52:21.803 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:52:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3887: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:22 compute-0 nova_compute[254092]: 2025-11-25 17:52:22.505 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:52:23 compute-0 ceph-mon[74985]: pgmap v3887: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3888: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:25 compute-0 ceph-mon[74985]: pgmap v3888: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:52:25 compute-0 nova_compute[254092]: 2025-11-25 17:52:25.859 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:52:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3889: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:27 compute-0 ceph-mon[74985]: pgmap v3889: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:27 compute-0 nova_compute[254092]: 2025-11-25 17:52:27.506 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:52:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3890: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:28 compute-0 ceph-mon[74985]: pgmap v3890: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3891: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:52:30 compute-0 nova_compute[254092]: 2025-11-25 17:52:30.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:52:30 compute-0 nova_compute[254092]: 2025-11-25 17:52:30.861 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:52:31 compute-0 ceph-mon[74985]: pgmap v3891: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:31 compute-0 podman[455363]: 2025-11-25 17:52:31.680755779 +0000 UTC m=+0.082933169 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 17:52:31 compute-0 podman[455362]: 2025-11-25 17:52:31.692739115 +0000 UTC m=+0.095435859 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 17:52:31 compute-0 podman[455364]: 2025-11-25 17:52:31.728507689 +0000 UTC m=+0.125078687 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 17:52:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3892: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:32 compute-0 nova_compute[254092]: 2025-11-25 17:52:32.510 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:52:33 compute-0 ceph-mon[74985]: pgmap v3892: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:33 compute-0 nova_compute[254092]: 2025-11-25 17:52:33.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:52:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3893: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:35 compute-0 ceph-mon[74985]: pgmap v3893: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:52:35 compute-0 nova_compute[254092]: 2025-11-25 17:52:35.863 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:52:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3894: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:37 compute-0 ceph-mon[74985]: pgmap v3894: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:37 compute-0 nova_compute[254092]: 2025-11-25 17:52:37.543 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:52:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3895: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:39 compute-0 ceph-mon[74985]: pgmap v3895: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3896: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:52:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:52:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:52:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:52:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:52:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:52:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:52:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:52:40
Nov 25 17:52:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:52:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:52:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['vms', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'default.rgw.control', 'volumes', 'images', '.mgr', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data']
Nov 25 17:52:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:52:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:52:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:52:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:52:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:52:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:52:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:52:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:52:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:52:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:52:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:52:40 compute-0 nova_compute[254092]: 2025-11-25 17:52:40.899 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:52:40 compute-0 ceph-mgr[75280]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1812945073
Nov 25 17:52:41 compute-0 ceph-mon[74985]: pgmap v3896: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3897: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:42 compute-0 nova_compute[254092]: 2025-11-25 17:52:42.547 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:52:43 compute-0 ceph-mon[74985]: pgmap v3897: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3898: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:44 compute-0 ceph-mon[74985]: pgmap v3898: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:52:45 compute-0 nova_compute[254092]: 2025-11-25 17:52:45.901 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:52:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3899: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:47 compute-0 ceph-mon[74985]: pgmap v3899: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:47 compute-0 nova_compute[254092]: 2025-11-25 17:52:47.550 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:52:48 compute-0 sudo[455430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:52:48 compute-0 sudo[455430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:52:48 compute-0 sudo[455430]: pam_unix(sudo:session): session closed for user root
Nov 25 17:52:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3900: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:48 compute-0 sudo[455455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:52:48 compute-0 sudo[455455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:52:48 compute-0 sudo[455455]: pam_unix(sudo:session): session closed for user root
Nov 25 17:52:48 compute-0 sudo[455480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:52:48 compute-0 sudo[455480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:52:48 compute-0 sudo[455480]: pam_unix(sudo:session): session closed for user root
Nov 25 17:52:48 compute-0 sudo[455505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:52:48 compute-0 sudo[455505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:52:48 compute-0 sudo[455505]: pam_unix(sudo:session): session closed for user root
Nov 25 17:52:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:52:48 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:52:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:52:48 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:52:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:52:48 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:52:48 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev a0fa81ab-1844-4dd5-92f6-4f100751e09f does not exist
Nov 25 17:52:48 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 4cc3f2f0-14bd-4dd5-b533-4227dbb0755d does not exist
Nov 25 17:52:48 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 8382a0f2-8d6e-4f70-8e14-ff280ed7712d does not exist
Nov 25 17:52:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:52:48 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:52:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:52:48 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:52:48 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:52:48 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:52:49 compute-0 sudo[455561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:52:49 compute-0 sudo[455561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:52:49 compute-0 sudo[455561]: pam_unix(sudo:session): session closed for user root
Nov 25 17:52:49 compute-0 sudo[455586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:52:49 compute-0 sudo[455586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:52:49 compute-0 ceph-mon[74985]: pgmap v3900: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:49 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:52:49 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:52:49 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:52:49 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:52:49 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:52:49 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:52:49 compute-0 sudo[455586]: pam_unix(sudo:session): session closed for user root
Nov 25 17:52:49 compute-0 sudo[455611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:52:49 compute-0 sudo[455611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:52:49 compute-0 sudo[455611]: pam_unix(sudo:session): session closed for user root
Nov 25 17:52:49 compute-0 sudo[455636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:52:49 compute-0 sudo[455636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:52:49 compute-0 podman[455700]: 2025-11-25 17:52:49.711561891 +0000 UTC m=+0.058256427 container create a7ee95eeddfdfa1de295737a45c196789f188c81fdcf53aa3142b74d62174dba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:52:49 compute-0 systemd[1]: Started libpod-conmon-a7ee95eeddfdfa1de295737a45c196789f188c81fdcf53aa3142b74d62174dba.scope.
Nov 25 17:52:49 compute-0 podman[455700]: 2025-11-25 17:52:49.681458862 +0000 UTC m=+0.028153448 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:52:49 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:52:49 compute-0 podman[455700]: 2025-11-25 17:52:49.82430407 +0000 UTC m=+0.170998666 container init a7ee95eeddfdfa1de295737a45c196789f188c81fdcf53aa3142b74d62174dba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 17:52:49 compute-0 podman[455700]: 2025-11-25 17:52:49.833716537 +0000 UTC m=+0.180411083 container start a7ee95eeddfdfa1de295737a45c196789f188c81fdcf53aa3142b74d62174dba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_engelbart, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 17:52:49 compute-0 podman[455700]: 2025-11-25 17:52:49.837922311 +0000 UTC m=+0.184616847 container attach a7ee95eeddfdfa1de295737a45c196789f188c81fdcf53aa3142b74d62174dba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_engelbart, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:52:49 compute-0 ecstatic_engelbart[455716]: 167 167
Nov 25 17:52:49 compute-0 systemd[1]: libpod-a7ee95eeddfdfa1de295737a45c196789f188c81fdcf53aa3142b74d62174dba.scope: Deactivated successfully.
Nov 25 17:52:49 compute-0 conmon[455716]: conmon a7ee95eeddfdfa1de295 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a7ee95eeddfdfa1de295737a45c196789f188c81fdcf53aa3142b74d62174dba.scope/container/memory.events
Nov 25 17:52:49 compute-0 podman[455700]: 2025-11-25 17:52:49.843193065 +0000 UTC m=+0.189887611 container died a7ee95eeddfdfa1de295737a45c196789f188c81fdcf53aa3142b74d62174dba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_engelbart, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:52:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-8cf7bf8594c7de5fb4fc9b3634c66a8901dbe3cb351e481e3a3217c4af0eb4b9-merged.mount: Deactivated successfully.
Nov 25 17:52:49 compute-0 podman[455700]: 2025-11-25 17:52:49.9036221 +0000 UTC m=+0.250316616 container remove a7ee95eeddfdfa1de295737a45c196789f188c81fdcf53aa3142b74d62174dba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_engelbart, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:52:49 compute-0 systemd[1]: libpod-conmon-a7ee95eeddfdfa1de295737a45c196789f188c81fdcf53aa3142b74d62174dba.scope: Deactivated successfully.
Nov 25 17:52:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3901: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:50 compute-0 podman[455740]: 2025-11-25 17:52:50.109854554 +0000 UTC m=+0.069676848 container create 551b3d5b0f98a207fc27189d8d6cf19108721a88fb14474c983a4962662f6d00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 17:52:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:52:50 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #189. Immutable memtables: 0.
Nov 25 17:52:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:52:50.122188) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 17:52:50 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 117] Flushing memtable with next log file: 189
Nov 25 17:52:50 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093170122259, "job": 117, "event": "flush_started", "num_memtables": 1, "num_entries": 873, "num_deletes": 254, "total_data_size": 1164844, "memory_usage": 1187336, "flush_reason": "Manual Compaction"}
Nov 25 17:52:50 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 117] Level-0 flush table #190: started
Nov 25 17:52:50 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093170135463, "cf_name": "default", "job": 117, "event": "table_file_creation", "file_number": 190, "file_size": 1153987, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 79757, "largest_seqno": 80629, "table_properties": {"data_size": 1149585, "index_size": 2053, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9536, "raw_average_key_size": 19, "raw_value_size": 1140705, "raw_average_value_size": 2290, "num_data_blocks": 92, "num_entries": 498, "num_filter_entries": 498, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764093092, "oldest_key_time": 1764093092, "file_creation_time": 1764093170, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 190, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:52:50 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 117] Flush lasted 13333 microseconds, and 9329 cpu microseconds.
Nov 25 17:52:50 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:52:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:52:50.135525) [db/flush_job.cc:967] [default] [JOB 117] Level-0 flush table #190: 1153987 bytes OK
Nov 25 17:52:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:52:50.135553) [db/memtable_list.cc:519] [default] Level-0 commit table #190 started
Nov 25 17:52:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:52:50.136848) [db/memtable_list.cc:722] [default] Level-0 commit table #190: memtable #1 done
Nov 25 17:52:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:52:50.136866) EVENT_LOG_v1 {"time_micros": 1764093170136859, "job": 117, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 17:52:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:52:50.136889) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 17:52:50 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 117] Try to delete WAL files size 1160543, prev total WAL file size 1187031, number of live WAL files 2.
Nov 25 17:52:50 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000186.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:52:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:52:50.137565) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353139' seq:72057594037927935, type:22 .. '6C6F676D0033373730' seq:0, type:0; will stop at (end)
Nov 25 17:52:50 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 118] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 17:52:50 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 117 Base level 0, inputs: [190(1126KB)], [188(9245KB)]
Nov 25 17:52:50 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093170137691, "job": 118, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [190], "files_L6": [188], "score": -1, "input_data_size": 10621878, "oldest_snapshot_seqno": -1}
Nov 25 17:52:50 compute-0 systemd[1]: Started libpod-conmon-551b3d5b0f98a207fc27189d8d6cf19108721a88fb14474c983a4962662f6d00.scope.
Nov 25 17:52:50 compute-0 podman[455740]: 2025-11-25 17:52:50.086450137 +0000 UTC m=+0.046272461 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:52:50 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:52:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56ec0334d60d50171d7ae99a0fdfa6bfe226e9265a715e3795f2d0e47a8f1945/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:52:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56ec0334d60d50171d7ae99a0fdfa6bfe226e9265a715e3795f2d0e47a8f1945/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:52:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56ec0334d60d50171d7ae99a0fdfa6bfe226e9265a715e3795f2d0e47a8f1945/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:52:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56ec0334d60d50171d7ae99a0fdfa6bfe226e9265a715e3795f2d0e47a8f1945/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:52:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56ec0334d60d50171d7ae99a0fdfa6bfe226e9265a715e3795f2d0e47a8f1945/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:52:50 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 118] Generated table #191: 9454 keys, 10519778 bytes, temperature: kUnknown
Nov 25 17:52:50 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093170221893, "cf_name": "default", "job": 118, "event": "table_file_creation", "file_number": 191, "file_size": 10519778, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10461573, "index_size": 33452, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23685, "raw_key_size": 250351, "raw_average_key_size": 26, "raw_value_size": 10297591, "raw_average_value_size": 1089, "num_data_blocks": 1284, "num_entries": 9454, "num_filter_entries": 9454, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764093170, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 191, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:52:50 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:52:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:52:50.222190) [db/compaction/compaction_job.cc:1663] [default] [JOB 118] Compacted 1@0 + 1@6 files to L6 => 10519778 bytes
Nov 25 17:52:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:52:50.223693) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 126.0 rd, 124.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 9.0 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(18.3) write-amplify(9.1) OK, records in: 9977, records dropped: 523 output_compression: NoCompression
Nov 25 17:52:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:52:50.223721) EVENT_LOG_v1 {"time_micros": 1764093170223711, "job": 118, "event": "compaction_finished", "compaction_time_micros": 84276, "compaction_time_cpu_micros": 44572, "output_level": 6, "num_output_files": 1, "total_output_size": 10519778, "num_input_records": 9977, "num_output_records": 9454, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 17:52:50 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000190.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:52:50 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093170224023, "job": 118, "event": "table_file_deletion", "file_number": 190}
Nov 25 17:52:50 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000188.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:52:50 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093170225774, "job": 118, "event": "table_file_deletion", "file_number": 188}
Nov 25 17:52:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:52:50.137462) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:52:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:52:50.225901) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:52:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:52:50.225911) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:52:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:52:50.225914) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:52:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:52:50.225917) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:52:50 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:52:50.225921) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:52:50 compute-0 podman[455740]: 2025-11-25 17:52:50.231956089 +0000 UTC m=+0.191778453 container init 551b3d5b0f98a207fc27189d8d6cf19108721a88fb14474c983a4962662f6d00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 17:52:50 compute-0 podman[455740]: 2025-11-25 17:52:50.250743559 +0000 UTC m=+0.210565883 container start 551b3d5b0f98a207fc27189d8d6cf19108721a88fb14474c983a4962662f6d00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_colden, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 17:52:50 compute-0 podman[455740]: 2025-11-25 17:52:50.255196621 +0000 UTC m=+0.215018935 container attach 551b3d5b0f98a207fc27189d8d6cf19108721a88fb14474c983a4962662f6d00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:52:50 compute-0 nova_compute[254092]: 2025-11-25 17:52:50.905 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:52:51 compute-0 ceph-mon[74985]: pgmap v3901: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:51 compute-0 objective_colden[455756]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:52:51 compute-0 objective_colden[455756]: --> relative data size: 1.0
Nov 25 17:52:51 compute-0 objective_colden[455756]: --> All data devices are unavailable
Nov 25 17:52:51 compute-0 systemd[1]: libpod-551b3d5b0f98a207fc27189d8d6cf19108721a88fb14474c983a4962662f6d00.scope: Deactivated successfully.
Nov 25 17:52:51 compute-0 systemd[1]: libpod-551b3d5b0f98a207fc27189d8d6cf19108721a88fb14474c983a4962662f6d00.scope: Consumed 1.318s CPU time.
Nov 25 17:52:51 compute-0 podman[455740]: 2025-11-25 17:52:51.621731133 +0000 UTC m=+1.581553457 container died 551b3d5b0f98a207fc27189d8d6cf19108721a88fb14474c983a4962662f6d00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_colden, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 17:52:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-56ec0334d60d50171d7ae99a0fdfa6bfe226e9265a715e3795f2d0e47a8f1945-merged.mount: Deactivated successfully.
Nov 25 17:52:51 compute-0 podman[455740]: 2025-11-25 17:52:51.705454843 +0000 UTC m=+1.665277167 container remove 551b3d5b0f98a207fc27189d8d6cf19108721a88fb14474c983a4962662f6d00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:52:51 compute-0 systemd[1]: libpod-conmon-551b3d5b0f98a207fc27189d8d6cf19108721a88fb14474c983a4962662f6d00.scope: Deactivated successfully.
Nov 25 17:52:51 compute-0 sudo[455636]: pam_unix(sudo:session): session closed for user root
Nov 25 17:52:51 compute-0 sudo[455800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:52:51 compute-0 sudo[455800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:52:51 compute-0 sudo[455800]: pam_unix(sudo:session): session closed for user root
Nov 25 17:52:51 compute-0 sudo[455825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:52:51 compute-0 sudo[455825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:52:51 compute-0 sudo[455825]: pam_unix(sudo:session): session closed for user root
Nov 25 17:52:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3902: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:52 compute-0 sudo[455850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:52:52 compute-0 sudo[455850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:52:52 compute-0 sudo[455850]: pam_unix(sudo:session): session closed for user root
Nov 25 17:52:52 compute-0 ceph-mon[74985]: pgmap v3902: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:52 compute-0 sudo[455875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:52:52 compute-0 sudo[455875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:52:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:52:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:52:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:52:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:52:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 17:52:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:52:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:52:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:52:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:52:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:52:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 17:52:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:52:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:52:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:52:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:52:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:52:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:52:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:52:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:52:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:52:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:52:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:52:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:52:52 compute-0 nova_compute[254092]: 2025-11-25 17:52:52.594 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:52:52 compute-0 podman[455940]: 2025-11-25 17:52:52.712867717 +0000 UTC m=+0.054557365 container create d6f025da437cea82c0884619424d7de98c17378b7ad4830133638f882631d4c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kirch, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:52:52 compute-0 systemd[1]: Started libpod-conmon-d6f025da437cea82c0884619424d7de98c17378b7ad4830133638f882631d4c9.scope.
Nov 25 17:52:52 compute-0 podman[455940]: 2025-11-25 17:52:52.683858588 +0000 UTC m=+0.025548246 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:52:52 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:52:52 compute-0 podman[455940]: 2025-11-25 17:52:52.828495626 +0000 UTC m=+0.170185284 container init d6f025da437cea82c0884619424d7de98c17378b7ad4830133638f882631d4c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:52:52 compute-0 podman[455940]: 2025-11-25 17:52:52.843998327 +0000 UTC m=+0.185687975 container start d6f025da437cea82c0884619424d7de98c17378b7ad4830133638f882631d4c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kirch, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 17:52:52 compute-0 podman[455940]: 2025-11-25 17:52:52.848309075 +0000 UTC m=+0.189998933 container attach d6f025da437cea82c0884619424d7de98c17378b7ad4830133638f882631d4c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kirch, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 17:52:52 compute-0 dazzling_kirch[455957]: 167 167
Nov 25 17:52:52 compute-0 systemd[1]: libpod-d6f025da437cea82c0884619424d7de98c17378b7ad4830133638f882631d4c9.scope: Deactivated successfully.
Nov 25 17:52:52 compute-0 podman[455940]: 2025-11-25 17:52:52.852498719 +0000 UTC m=+0.194188357 container died d6f025da437cea82c0884619424d7de98c17378b7ad4830133638f882631d4c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 17:52:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-050116956e7cd819ad3874478d6c98a1ea01125551ce43662ae4367bc213f188-merged.mount: Deactivated successfully.
Nov 25 17:52:52 compute-0 podman[455940]: 2025-11-25 17:52:52.909008837 +0000 UTC m=+0.250698475 container remove d6f025da437cea82c0884619424d7de98c17378b7ad4830133638f882631d4c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kirch, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 17:52:52 compute-0 systemd[1]: libpod-conmon-d6f025da437cea82c0884619424d7de98c17378b7ad4830133638f882631d4c9.scope: Deactivated successfully.
Nov 25 17:52:53 compute-0 podman[455981]: 2025-11-25 17:52:53.167457854 +0000 UTC m=+0.072722771 container create 48112e6de286d766900056f555f84b56de61c24a7225625cb7f153a7d9c5ca58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 17:52:53 compute-0 podman[455981]: 2025-11-25 17:52:53.135828092 +0000 UTC m=+0.041093019 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:52:53 compute-0 systemd[1]: Started libpod-conmon-48112e6de286d766900056f555f84b56de61c24a7225625cb7f153a7d9c5ca58.scope.
Nov 25 17:52:53 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:52:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42d3b20b421163644ef5c99671ad44f2795e2f59e535e8dbce0c2ac13700df51/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:52:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42d3b20b421163644ef5c99671ad44f2795e2f59e535e8dbce0c2ac13700df51/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:52:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42d3b20b421163644ef5c99671ad44f2795e2f59e535e8dbce0c2ac13700df51/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:52:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42d3b20b421163644ef5c99671ad44f2795e2f59e535e8dbce0c2ac13700df51/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:52:53 compute-0 podman[455981]: 2025-11-25 17:52:53.307987719 +0000 UTC m=+0.213252676 container init 48112e6de286d766900056f555f84b56de61c24a7225625cb7f153a7d9c5ca58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_zhukovsky, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 17:52:53 compute-0 podman[455981]: 2025-11-25 17:52:53.320218472 +0000 UTC m=+0.225483389 container start 48112e6de286d766900056f555f84b56de61c24a7225625cb7f153a7d9c5ca58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_zhukovsky, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 17:52:53 compute-0 podman[455981]: 2025-11-25 17:52:53.325835025 +0000 UTC m=+0.231099952 container attach 48112e6de286d766900056f555f84b56de61c24a7225625cb7f153a7d9c5ca58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_zhukovsky, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 17:52:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3903: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]: {
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:     "0": [
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:         {
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:             "devices": [
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:                 "/dev/loop3"
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:             ],
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:             "lv_name": "ceph_lv0",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:             "lv_size": "21470642176",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:             "name": "ceph_lv0",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:             "tags": {
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:                 "ceph.cluster_name": "ceph",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:                 "ceph.crush_device_class": "",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:                 "ceph.encrypted": "0",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:                 "ceph.osd_id": "0",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:                 "ceph.type": "block",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:                 "ceph.vdo": "0"
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:             },
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:             "type": "block",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:             "vg_name": "ceph_vg0"
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:         }
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:     ],
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:     "1": [
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:         {
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:             "devices": [
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:                 "/dev/loop4"
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:             ],
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:             "lv_name": "ceph_lv1",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:             "lv_size": "21470642176",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:             "name": "ceph_lv1",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:             "tags": {
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:                 "ceph.cluster_name": "ceph",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:                 "ceph.crush_device_class": "",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:                 "ceph.encrypted": "0",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:                 "ceph.osd_id": "1",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:                 "ceph.type": "block",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:                 "ceph.vdo": "0"
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:             },
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:             "type": "block",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:             "vg_name": "ceph_vg1"
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:         }
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:     ],
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:     "2": [
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:         {
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:             "devices": [
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:                 "/dev/loop5"
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:             ],
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:             "lv_name": "ceph_lv2",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:             "lv_size": "21470642176",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:             "name": "ceph_lv2",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:             "tags": {
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:                 "ceph.cluster_name": "ceph",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:                 "ceph.crush_device_class": "",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:                 "ceph.encrypted": "0",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:                 "ceph.osd_id": "2",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:                 "ceph.type": "block",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:                 "ceph.vdo": "0"
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:             },
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:             "type": "block",
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:             "vg_name": "ceph_vg2"
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:         }
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]:     ]
Nov 25 17:52:54 compute-0 kind_zhukovsky[455998]: }
Nov 25 17:52:54 compute-0 systemd[1]: libpod-48112e6de286d766900056f555f84b56de61c24a7225625cb7f153a7d9c5ca58.scope: Deactivated successfully.
Nov 25 17:52:54 compute-0 podman[455981]: 2025-11-25 17:52:54.143169666 +0000 UTC m=+1.048434593 container died 48112e6de286d766900056f555f84b56de61c24a7225625cb7f153a7d9c5ca58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_zhukovsky, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:52:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-42d3b20b421163644ef5c99671ad44f2795e2f59e535e8dbce0c2ac13700df51-merged.mount: Deactivated successfully.
Nov 25 17:52:54 compute-0 podman[455981]: 2025-11-25 17:52:54.218540858 +0000 UTC m=+1.123805745 container remove 48112e6de286d766900056f555f84b56de61c24a7225625cb7f153a7d9c5ca58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:52:54 compute-0 systemd[1]: libpod-conmon-48112e6de286d766900056f555f84b56de61c24a7225625cb7f153a7d9c5ca58.scope: Deactivated successfully.
Nov 25 17:52:54 compute-0 sudo[455875]: pam_unix(sudo:session): session closed for user root
Nov 25 17:52:54 compute-0 sudo[456021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:52:54 compute-0 sudo[456021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:52:54 compute-0 sudo[456021]: pam_unix(sudo:session): session closed for user root
Nov 25 17:52:54 compute-0 sudo[456046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:52:54 compute-0 sudo[456046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:52:54 compute-0 sudo[456046]: pam_unix(sudo:session): session closed for user root
Nov 25 17:52:54 compute-0 sudo[456071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:52:54 compute-0 sudo[456071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:52:54 compute-0 sudo[456071]: pam_unix(sudo:session): session closed for user root
Nov 25 17:52:54 compute-0 sudo[456096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:52:54 compute-0 sudo[456096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:52:54 compute-0 podman[456159]: 2025-11-25 17:52:54.931005653 +0000 UTC m=+0.052231702 container create 23c52e1c999d53c0828edb0caca78c7322549998a09d5b7d0d00ad15ecf8cb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_pare, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:52:54 compute-0 systemd[1]: Started libpod-conmon-23c52e1c999d53c0828edb0caca78c7322549998a09d5b7d0d00ad15ecf8cb8c.scope.
Nov 25 17:52:54 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:52:54 compute-0 podman[456159]: 2025-11-25 17:52:54.90441314 +0000 UTC m=+0.025639279 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:52:55 compute-0 podman[456159]: 2025-11-25 17:52:55.012511632 +0000 UTC m=+0.133737771 container init 23c52e1c999d53c0828edb0caca78c7322549998a09d5b7d0d00ad15ecf8cb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_pare, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:52:55 compute-0 podman[456159]: 2025-11-25 17:52:55.021791695 +0000 UTC m=+0.143017744 container start 23c52e1c999d53c0828edb0caca78c7322549998a09d5b7d0d00ad15ecf8cb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:52:55 compute-0 podman[456159]: 2025-11-25 17:52:55.025174907 +0000 UTC m=+0.146401056 container attach 23c52e1c999d53c0828edb0caca78c7322549998a09d5b7d0d00ad15ecf8cb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_pare, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:52:55 compute-0 systemd[1]: libpod-23c52e1c999d53c0828edb0caca78c7322549998a09d5b7d0d00ad15ecf8cb8c.scope: Deactivated successfully.
Nov 25 17:52:55 compute-0 distracted_pare[456175]: 167 167
Nov 25 17:52:55 compute-0 conmon[456175]: conmon 23c52e1c999d53c0828e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-23c52e1c999d53c0828edb0caca78c7322549998a09d5b7d0d00ad15ecf8cb8c.scope/container/memory.events
Nov 25 17:52:55 compute-0 podman[456159]: 2025-11-25 17:52:55.029961238 +0000 UTC m=+0.151187327 container died 23c52e1c999d53c0828edb0caca78c7322549998a09d5b7d0d00ad15ecf8cb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_pare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:52:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9c1147e321e5bd6f7509fb6afb78bb84aad29379a6883656609e28a092722ed-merged.mount: Deactivated successfully.
Nov 25 17:52:55 compute-0 podman[456159]: 2025-11-25 17:52:55.086502497 +0000 UTC m=+0.207728546 container remove 23c52e1c999d53c0828edb0caca78c7322549998a09d5b7d0d00ad15ecf8cb8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_pare, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 17:52:55 compute-0 systemd[1]: libpod-conmon-23c52e1c999d53c0828edb0caca78c7322549998a09d5b7d0d00ad15ecf8cb8c.scope: Deactivated successfully.
Nov 25 17:52:55 compute-0 ceph-mon[74985]: pgmap v3903: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:52:55 compute-0 podman[456201]: 2025-11-25 17:52:55.334594 +0000 UTC m=+0.071548438 container create 1f45b2c727e1313d662240a61a0c067336936a2661eea0a3f8579d93e6029c4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_pike, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:52:55 compute-0 systemd[1]: Started libpod-conmon-1f45b2c727e1313d662240a61a0c067336936a2661eea0a3f8579d93e6029c4d.scope.
Nov 25 17:52:55 compute-0 podman[456201]: 2025-11-25 17:52:55.310439683 +0000 UTC m=+0.047394191 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:52:55 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:52:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea91d88f5b5f380bdce5c452cda93ca48b39800e6cd8a75acc74ab1455cb7fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:52:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea91d88f5b5f380bdce5c452cda93ca48b39800e6cd8a75acc74ab1455cb7fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:52:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea91d88f5b5f380bdce5c452cda93ca48b39800e6cd8a75acc74ab1455cb7fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:52:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea91d88f5b5f380bdce5c452cda93ca48b39800e6cd8a75acc74ab1455cb7fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:52:55 compute-0 podman[456201]: 2025-11-25 17:52:55.446015834 +0000 UTC m=+0.182970292 container init 1f45b2c727e1313d662240a61a0c067336936a2661eea0a3f8579d93e6029c4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_pike, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Nov 25 17:52:55 compute-0 podman[456201]: 2025-11-25 17:52:55.461258028 +0000 UTC m=+0.198212466 container start 1f45b2c727e1313d662240a61a0c067336936a2661eea0a3f8579d93e6029c4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_pike, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 17:52:55 compute-0 podman[456201]: 2025-11-25 17:52:55.465466783 +0000 UTC m=+0.202421231 container attach 1f45b2c727e1313d662240a61a0c067336936a2661eea0a3f8579d93e6029c4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_pike, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:52:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:52:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/34646103' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:52:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:52:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/34646103' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:52:55 compute-0 nova_compute[254092]: 2025-11-25 17:52:55.907 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:52:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3904: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/34646103' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:52:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/34646103' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:52:56 compute-0 jovial_pike[456219]: {
Nov 25 17:52:56 compute-0 jovial_pike[456219]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:52:56 compute-0 jovial_pike[456219]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:52:56 compute-0 jovial_pike[456219]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:52:56 compute-0 jovial_pike[456219]:         "osd_id": 1,
Nov 25 17:52:56 compute-0 jovial_pike[456219]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:52:56 compute-0 jovial_pike[456219]:         "type": "bluestore"
Nov 25 17:52:56 compute-0 jovial_pike[456219]:     },
Nov 25 17:52:56 compute-0 jovial_pike[456219]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:52:56 compute-0 jovial_pike[456219]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:52:56 compute-0 jovial_pike[456219]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:52:56 compute-0 jovial_pike[456219]:         "osd_id": 2,
Nov 25 17:52:56 compute-0 jovial_pike[456219]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:52:56 compute-0 jovial_pike[456219]:         "type": "bluestore"
Nov 25 17:52:56 compute-0 jovial_pike[456219]:     },
Nov 25 17:52:56 compute-0 jovial_pike[456219]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:52:56 compute-0 jovial_pike[456219]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:52:56 compute-0 jovial_pike[456219]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:52:56 compute-0 jovial_pike[456219]:         "osd_id": 0,
Nov 25 17:52:56 compute-0 jovial_pike[456219]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:52:56 compute-0 jovial_pike[456219]:         "type": "bluestore"
Nov 25 17:52:56 compute-0 jovial_pike[456219]:     }
Nov 25 17:52:56 compute-0 jovial_pike[456219]: }
Nov 25 17:52:56 compute-0 systemd[1]: libpod-1f45b2c727e1313d662240a61a0c067336936a2661eea0a3f8579d93e6029c4d.scope: Deactivated successfully.
Nov 25 17:52:56 compute-0 podman[456201]: 2025-11-25 17:52:56.635588028 +0000 UTC m=+1.372542456 container died 1f45b2c727e1313d662240a61a0c067336936a2661eea0a3f8579d93e6029c4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_pike, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:52:56 compute-0 systemd[1]: libpod-1f45b2c727e1313d662240a61a0c067336936a2661eea0a3f8579d93e6029c4d.scope: Consumed 1.182s CPU time.
Nov 25 17:52:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ea91d88f5b5f380bdce5c452cda93ca48b39800e6cd8a75acc74ab1455cb7fe-merged.mount: Deactivated successfully.
Nov 25 17:52:56 compute-0 podman[456201]: 2025-11-25 17:52:56.697442042 +0000 UTC m=+1.434396470 container remove 1f45b2c727e1313d662240a61a0c067336936a2661eea0a3f8579d93e6029c4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_pike, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 17:52:56 compute-0 systemd[1]: libpod-conmon-1f45b2c727e1313d662240a61a0c067336936a2661eea0a3f8579d93e6029c4d.scope: Deactivated successfully.
Nov 25 17:52:56 compute-0 sudo[456096]: pam_unix(sudo:session): session closed for user root
Nov 25 17:52:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:52:56 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:52:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:52:56 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:52:56 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 9bffd952-46c8-41cf-9b1e-8e8e77866204 does not exist
Nov 25 17:52:56 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 501a36e1-1941-4fb4-9186-4e51fc2913ed does not exist
Nov 25 17:52:56 compute-0 sudo[456266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:52:56 compute-0 sudo[456266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:52:56 compute-0 sudo[456266]: pam_unix(sudo:session): session closed for user root
Nov 25 17:52:57 compute-0 sudo[456291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:52:57 compute-0 sudo[456291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:52:57 compute-0 sudo[456291]: pam_unix(sudo:session): session closed for user root
Nov 25 17:52:57 compute-0 ceph-mon[74985]: pgmap v3904: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:57 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:52:57 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:52:57 compute-0 nova_compute[254092]: 2025-11-25 17:52:57.598 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:52:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3905: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:52:59 compute-0 ceph-mon[74985]: pgmap v3905: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3906: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:53:00 compute-0 ceph-mon[74985]: pgmap v3906: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:00 compute-0 nova_compute[254092]: 2025-11-25 17:53:00.909 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:53:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3907: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:02 compute-0 nova_compute[254092]: 2025-11-25 17:53:02.601 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:53:02 compute-0 podman[456316]: 2025-11-25 17:53:02.704118037 +0000 UTC m=+0.102075481 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 17:53:02 compute-0 podman[456317]: 2025-11-25 17:53:02.724704017 +0000 UTC m=+0.120011759 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 25 17:53:02 compute-0 podman[456318]: 2025-11-25 17:53:02.759725811 +0000 UTC m=+0.156825951 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 25 17:53:03 compute-0 ceph-mon[74985]: pgmap v3907: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3908: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:53:05 compute-0 ceph-mon[74985]: pgmap v3908: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:05 compute-0 nova_compute[254092]: 2025-11-25 17:53:05.913 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:53:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3909: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:06 compute-0 ceph-mon[74985]: pgmap v3909: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:07 compute-0 nova_compute[254092]: 2025-11-25 17:53:07.604 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:53:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3910: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:09 compute-0 ceph-mon[74985]: pgmap v3910: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3911: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:53:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:53:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:53:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:53:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:53:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:53:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:53:10 compute-0 nova_compute[254092]: 2025-11-25 17:53:10.914 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:53:11 compute-0 ceph-mon[74985]: pgmap v3911: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:11 compute-0 nova_compute[254092]: 2025-11-25 17:53:11.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:53:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3912: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:12 compute-0 ceph-mon[74985]: pgmap v3912: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:12 compute-0 nova_compute[254092]: 2025-11-25 17:53:12.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:53:12 compute-0 nova_compute[254092]: 2025-11-25 17:53:12.607 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:53:13 compute-0 nova_compute[254092]: 2025-11-25 17:53:13.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:53:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:53:13.699 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:53:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:53:13.699 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:53:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:53:13.699 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:53:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3913: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:15 compute-0 ceph-mon[74985]: pgmap v3913: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:53:15 compute-0 nova_compute[254092]: 2025-11-25 17:53:15.917 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:53:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3914: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:17 compute-0 ceph-mon[74985]: pgmap v3914: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:17 compute-0 nova_compute[254092]: 2025-11-25 17:53:17.643 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:53:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3915: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:18 compute-0 nova_compute[254092]: 2025-11-25 17:53:18.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:53:18 compute-0 nova_compute[254092]: 2025-11-25 17:53:18.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:53:18 compute-0 nova_compute[254092]: 2025-11-25 17:53:18.498 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:53:18 compute-0 nova_compute[254092]: 2025-11-25 17:53:18.519 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 17:53:19 compute-0 ceph-mon[74985]: pgmap v3915: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:19 compute-0 nova_compute[254092]: 2025-11-25 17:53:19.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:53:19 compute-0 nova_compute[254092]: 2025-11-25 17:53:19.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:53:19 compute-0 nova_compute[254092]: 2025-11-25 17:53:19.527 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:53:19 compute-0 nova_compute[254092]: 2025-11-25 17:53:19.527 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:53:19 compute-0 nova_compute[254092]: 2025-11-25 17:53:19.527 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:53:19 compute-0 nova_compute[254092]: 2025-11-25 17:53:19.527 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:53:19 compute-0 nova_compute[254092]: 2025-11-25 17:53:19.528 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:53:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:53:19 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1147502024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:53:19 compute-0 nova_compute[254092]: 2025-11-25 17:53:19.989 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:53:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3916: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:20 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1147502024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:53:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:53:20 compute-0 nova_compute[254092]: 2025-11-25 17:53:20.190 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:53:20 compute-0 nova_compute[254092]: 2025-11-25 17:53:20.192 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3623MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:53:20 compute-0 nova_compute[254092]: 2025-11-25 17:53:20.192 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:53:20 compute-0 nova_compute[254092]: 2025-11-25 17:53:20.192 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:53:20 compute-0 nova_compute[254092]: 2025-11-25 17:53:20.260 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:53:20 compute-0 nova_compute[254092]: 2025-11-25 17:53:20.260 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:53:20 compute-0 nova_compute[254092]: 2025-11-25 17:53:20.285 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:53:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:53:20 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3623878651' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:53:20 compute-0 nova_compute[254092]: 2025-11-25 17:53:20.728 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:53:20 compute-0 nova_compute[254092]: 2025-11-25 17:53:20.735 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:53:20 compute-0 nova_compute[254092]: 2025-11-25 17:53:20.747 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:53:20 compute-0 nova_compute[254092]: 2025-11-25 17:53:20.749 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:53:20 compute-0 nova_compute[254092]: 2025-11-25 17:53:20.749 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:53:20 compute-0 nova_compute[254092]: 2025-11-25 17:53:20.919 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:53:21 compute-0 ceph-mon[74985]: pgmap v3916: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:21 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3623878651' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:53:21 compute-0 nova_compute[254092]: 2025-11-25 17:53:21.750 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:53:21 compute-0 nova_compute[254092]: 2025-11-25 17:53:21.751 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:53:21 compute-0 nova_compute[254092]: 2025-11-25 17:53:21.751 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:53:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3917: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:22 compute-0 nova_compute[254092]: 2025-11-25 17:53:22.646 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:53:23 compute-0 ceph-mon[74985]: pgmap v3917: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3918: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:24 compute-0 ceph-mon[74985]: pgmap v3918: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:53:25 compute-0 nova_compute[254092]: 2025-11-25 17:53:25.922 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:53:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3919: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:27 compute-0 ceph-mon[74985]: pgmap v3919: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:27 compute-0 nova_compute[254092]: 2025-11-25 17:53:27.649 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:53:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3920: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:29 compute-0 ceph-mon[74985]: pgmap v3920: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3921: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:53:30 compute-0 nova_compute[254092]: 2025-11-25 17:53:30.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:53:30 compute-0 nova_compute[254092]: 2025-11-25 17:53:30.926 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:53:31 compute-0 ceph-mon[74985]: pgmap v3921: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3922: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:32 compute-0 nova_compute[254092]: 2025-11-25 17:53:32.651 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:53:33 compute-0 ceph-mon[74985]: pgmap v3922: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:33 compute-0 podman[456421]: 2025-11-25 17:53:33.662554087 +0000 UTC m=+0.064832165 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 17:53:33 compute-0 podman[456420]: 2025-11-25 17:53:33.688921495 +0000 UTC m=+0.089793456 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:53:33 compute-0 podman[456422]: 2025-11-25 17:53:33.708010645 +0000 UTC m=+0.101668479 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 17:53:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3923: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:53:35 compute-0 ceph-mon[74985]: pgmap v3923: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:35 compute-0 nova_compute[254092]: 2025-11-25 17:53:35.928 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:53:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3924: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:36 compute-0 ceph-mon[74985]: pgmap v3924: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:37 compute-0 nova_compute[254092]: 2025-11-25 17:53:37.686 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:53:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3925: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:39 compute-0 ceph-mon[74985]: pgmap v3925: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3926: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:53:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:53:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:53:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:53:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:53:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:53:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:53:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:53:40
Nov 25 17:53:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:53:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:53:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'backups', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', '.mgr']
Nov 25 17:53:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:53:40 compute-0 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 17:53:40 compute-0 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.2 total, 600.0 interval
                                           Cumulative writes: 45K writes, 182K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.02 MB/s
                                           Cumulative WAL: 45K writes, 16K syncs, 2.79 writes per sync, written: 0.17 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.10 MB, 0.00 MB/s
                                           Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 17:53:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:53:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:53:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:53:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:53:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:53:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:53:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:53:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:53:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:53:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:53:40 compute-0 nova_compute[254092]: 2025-11-25 17:53:40.929 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:53:41 compute-0 ceph-mon[74985]: pgmap v3926: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3927: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:42 compute-0 nova_compute[254092]: 2025-11-25 17:53:42.721 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:53:43 compute-0 ceph-mon[74985]: pgmap v3927: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3928: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:53:45 compute-0 ceph-mon[74985]: pgmap v3928: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:45 compute-0 nova_compute[254092]: 2025-11-25 17:53:45.934 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:53:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3929: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:46 compute-0 ceph-mon[74985]: pgmap v3929: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:47 compute-0 nova_compute[254092]: 2025-11-25 17:53:47.754 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:53:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3930: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:49 compute-0 ceph-mon[74985]: pgmap v3930: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3931: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:50 compute-0 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 17:53:50 compute-0 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7201.2 total, 600.0 interval
                                           Cumulative writes: 47K writes, 185K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 47K writes, 16K syncs, 2.79 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 17:53:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:53:50 compute-0 nova_compute[254092]: 2025-11-25 17:53:50.935 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:53:51 compute-0 ceph-mon[74985]: pgmap v3931: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3932: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:53:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:53:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:53:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:53:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 17:53:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:53:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:53:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:53:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:53:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:53:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 17:53:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:53:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:53:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:53:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:53:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:53:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:53:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:53:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:53:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:53:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:53:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:53:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:53:52 compute-0 nova_compute[254092]: 2025-11-25 17:53:52.792 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:53:53 compute-0 ceph-mon[74985]: pgmap v3932: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3933: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:54 compute-0 ceph-mon[74985]: pgmap v3933: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:53:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:53:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3982988324' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:53:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:53:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3982988324' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:53:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/3982988324' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:53:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/3982988324' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:53:55 compute-0 nova_compute[254092]: 2025-11-25 17:53:55.937 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:53:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3934: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:56 compute-0 ceph-mon[74985]: pgmap v3934: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:57 compute-0 sudo[456481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:53:57 compute-0 sudo[456481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:53:57 compute-0 sudo[456481]: pam_unix(sudo:session): session closed for user root
Nov 25 17:53:57 compute-0 sudo[456506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:53:57 compute-0 sudo[456506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:53:57 compute-0 sudo[456506]: pam_unix(sudo:session): session closed for user root
Nov 25 17:53:57 compute-0 sudo[456531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:53:57 compute-0 sudo[456531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:53:57 compute-0 sudo[456531]: pam_unix(sudo:session): session closed for user root
Nov 25 17:53:57 compute-0 sudo[456556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:53:57 compute-0 sudo[456556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:53:57 compute-0 sudo[456556]: pam_unix(sudo:session): session closed for user root
Nov 25 17:53:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:53:57 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:53:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:53:57 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:53:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:53:57 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:53:57 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 62cb84fc-1604-4bb8-a272-2570eaf7ecef does not exist
Nov 25 17:53:57 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 265d3ce0-41dc-4382-a931-cfd77324b3f3 does not exist
Nov 25 17:53:57 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev b3a8c510-d8f3-400d-938b-ea64f9ba8cbf does not exist
Nov 25 17:53:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:53:57 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:53:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:53:57 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:53:57 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:53:57 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:53:57 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:53:57 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:53:57 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:53:57 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:53:57 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:53:57 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:53:57 compute-0 nova_compute[254092]: 2025-11-25 17:53:57.840 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:53:57 compute-0 sudo[456612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:53:57 compute-0 sudo[456612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:53:57 compute-0 sudo[456612]: pam_unix(sudo:session): session closed for user root
Nov 25 17:53:57 compute-0 sudo[456637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:53:57 compute-0 sudo[456637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:53:57 compute-0 sudo[456637]: pam_unix(sudo:session): session closed for user root
Nov 25 17:53:58 compute-0 sudo[456662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:53:58 compute-0 sudo[456662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:53:58 compute-0 sudo[456662]: pam_unix(sudo:session): session closed for user root
Nov 25 17:53:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3935: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:58 compute-0 sudo[456687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:53:58 compute-0 sudo[456687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:53:58 compute-0 podman[456752]: 2025-11-25 17:53:58.443599199 +0000 UTC m=+0.039402450 container create 2f81bbc4e386d6ceae07d059bc113a1d416987befdab76b5c117524d5f629406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 17:53:58 compute-0 systemd[1]: Started libpod-conmon-2f81bbc4e386d6ceae07d059bc113a1d416987befdab76b5c117524d5f629406.scope.
Nov 25 17:53:58 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:53:58 compute-0 podman[456752]: 2025-11-25 17:53:58.425587961 +0000 UTC m=+0.021391232 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:53:58 compute-0 podman[456752]: 2025-11-25 17:53:58.523275521 +0000 UTC m=+0.119078782 container init 2f81bbc4e386d6ceae07d059bc113a1d416987befdab76b5c117524d5f629406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:53:58 compute-0 podman[456752]: 2025-11-25 17:53:58.536429448 +0000 UTC m=+0.132232689 container start 2f81bbc4e386d6ceae07d059bc113a1d416987befdab76b5c117524d5f629406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_almeida, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 17:53:58 compute-0 podman[456752]: 2025-11-25 17:53:58.541725182 +0000 UTC m=+0.137528443 container attach 2f81bbc4e386d6ceae07d059bc113a1d416987befdab76b5c117524d5f629406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 17:53:58 compute-0 frosty_almeida[456769]: 167 167
Nov 25 17:53:58 compute-0 systemd[1]: libpod-2f81bbc4e386d6ceae07d059bc113a1d416987befdab76b5c117524d5f629406.scope: Deactivated successfully.
Nov 25 17:53:58 compute-0 podman[456752]: 2025-11-25 17:53:58.544122677 +0000 UTC m=+0.139925928 container died 2f81bbc4e386d6ceae07d059bc113a1d416987befdab76b5c117524d5f629406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 17:53:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a17e9a14b1ed0007aac89e941baecba1cf53b22578b218afbed794313aaef79-merged.mount: Deactivated successfully.
Nov 25 17:53:58 compute-0 podman[456752]: 2025-11-25 17:53:58.587820493 +0000 UTC m=+0.183623744 container remove 2f81bbc4e386d6ceae07d059bc113a1d416987befdab76b5c117524d5f629406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_almeida, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:53:58 compute-0 systemd[1]: libpod-conmon-2f81bbc4e386d6ceae07d059bc113a1d416987befdab76b5c117524d5f629406.scope: Deactivated successfully.
Nov 25 17:53:58 compute-0 ceph-mon[74985]: pgmap v3935: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:53:58 compute-0 podman[456795]: 2025-11-25 17:53:58.841137526 +0000 UTC m=+0.062432145 container create aafb56c685a8d4016f41ec91978c136c6e01668f42c0ee821aa88c0346468cb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_khorana, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:53:58 compute-0 systemd[1]: Started libpod-conmon-aafb56c685a8d4016f41ec91978c136c6e01668f42c0ee821aa88c0346468cb2.scope.
Nov 25 17:53:58 compute-0 podman[456795]: 2025-11-25 17:53:58.819896499 +0000 UTC m=+0.041191128 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:53:58 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:53:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a06255f5f9ecd4869d4757d2e995cb2da2681fe0945d68259a7da30acb7dbeb2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:53:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a06255f5f9ecd4869d4757d2e995cb2da2681fe0945d68259a7da30acb7dbeb2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:53:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a06255f5f9ecd4869d4757d2e995cb2da2681fe0945d68259a7da30acb7dbeb2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:53:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a06255f5f9ecd4869d4757d2e995cb2da2681fe0945d68259a7da30acb7dbeb2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:53:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a06255f5f9ecd4869d4757d2e995cb2da2681fe0945d68259a7da30acb7dbeb2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:53:58 compute-0 podman[456795]: 2025-11-25 17:53:58.9481602 +0000 UTC m=+0.169454799 container init aafb56c685a8d4016f41ec91978c136c6e01668f42c0ee821aa88c0346468cb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_khorana, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 17:53:58 compute-0 podman[456795]: 2025-11-25 17:53:58.963405654 +0000 UTC m=+0.184700253 container start aafb56c685a8d4016f41ec91978c136c6e01668f42c0ee821aa88c0346468cb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 17:53:58 compute-0 podman[456795]: 2025-11-25 17:53:58.966829627 +0000 UTC m=+0.188124216 container attach aafb56c685a8d4016f41ec91978c136c6e01668f42c0ee821aa88c0346468cb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:54:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3936: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:00 compute-0 gallant_khorana[456812]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:54:00 compute-0 gallant_khorana[456812]: --> relative data size: 1.0
Nov 25 17:54:00 compute-0 gallant_khorana[456812]: --> All data devices are unavailable
Nov 25 17:54:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:54:00 compute-0 systemd[1]: libpod-aafb56c685a8d4016f41ec91978c136c6e01668f42c0ee821aa88c0346468cb2.scope: Deactivated successfully.
Nov 25 17:54:00 compute-0 podman[456795]: 2025-11-25 17:54:00.178756231 +0000 UTC m=+1.400050900 container died aafb56c685a8d4016f41ec91978c136c6e01668f42c0ee821aa88c0346468cb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:54:00 compute-0 systemd[1]: libpod-aafb56c685a8d4016f41ec91978c136c6e01668f42c0ee821aa88c0346468cb2.scope: Consumed 1.168s CPU time.
Nov 25 17:54:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-a06255f5f9ecd4869d4757d2e995cb2da2681fe0945d68259a7da30acb7dbeb2-merged.mount: Deactivated successfully.
Nov 25 17:54:00 compute-0 podman[456795]: 2025-11-25 17:54:00.2693639 +0000 UTC m=+1.490658509 container remove aafb56c685a8d4016f41ec91978c136c6e01668f42c0ee821aa88c0346468cb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_khorana, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 17:54:00 compute-0 systemd[1]: libpod-conmon-aafb56c685a8d4016f41ec91978c136c6e01668f42c0ee821aa88c0346468cb2.scope: Deactivated successfully.
Nov 25 17:54:00 compute-0 sudo[456687]: pam_unix(sudo:session): session closed for user root
Nov 25 17:54:00 compute-0 sudo[456855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:54:00 compute-0 sudo[456855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:54:00 compute-0 sudo[456855]: pam_unix(sudo:session): session closed for user root
Nov 25 17:54:00 compute-0 sudo[456880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:54:00 compute-0 sudo[456880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:54:00 compute-0 sudo[456880]: pam_unix(sudo:session): session closed for user root
Nov 25 17:54:00 compute-0 sudo[456905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:54:00 compute-0 sudo[456905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:54:00 compute-0 sudo[456905]: pam_unix(sudo:session): session closed for user root
Nov 25 17:54:00 compute-0 sudo[456930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:54:00 compute-0 sudo[456930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:54:00 compute-0 nova_compute[254092]: 2025-11-25 17:54:00.939 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:54:01 compute-0 podman[456995]: 2025-11-25 17:54:01.0591314 +0000 UTC m=+0.052558427 container create 82b9bcef644d15a380177725bd2841cda0f28f1aa4291fc2a99ad26496765cd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_fermi, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 17:54:01 compute-0 systemd[1]: Started libpod-conmon-82b9bcef644d15a380177725bd2841cda0f28f1aa4291fc2a99ad26496765cd9.scope.
Nov 25 17:54:01 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:54:01 compute-0 podman[456995]: 2025-11-25 17:54:01.039341323 +0000 UTC m=+0.032768330 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:54:01 compute-0 ceph-mon[74985]: pgmap v3936: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:01 compute-0 podman[456995]: 2025-11-25 17:54:01.153673134 +0000 UTC m=+0.147100201 container init 82b9bcef644d15a380177725bd2841cda0f28f1aa4291fc2a99ad26496765cd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_fermi, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 17:54:01 compute-0 podman[456995]: 2025-11-25 17:54:01.160610073 +0000 UTC m=+0.154037090 container start 82b9bcef644d15a380177725bd2841cda0f28f1aa4291fc2a99ad26496765cd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_fermi, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 17:54:01 compute-0 podman[456995]: 2025-11-25 17:54:01.167910642 +0000 UTC m=+0.161337669 container attach 82b9bcef644d15a380177725bd2841cda0f28f1aa4291fc2a99ad26496765cd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_fermi, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:54:01 compute-0 admiring_fermi[457012]: 167 167
Nov 25 17:54:01 compute-0 systemd[1]: libpod-82b9bcef644d15a380177725bd2841cda0f28f1aa4291fc2a99ad26496765cd9.scope: Deactivated successfully.
Nov 25 17:54:01 compute-0 conmon[457012]: conmon 82b9bcef644d15a38017 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-82b9bcef644d15a380177725bd2841cda0f28f1aa4291fc2a99ad26496765cd9.scope/container/memory.events
Nov 25 17:54:01 compute-0 podman[456995]: 2025-11-25 17:54:01.171464808 +0000 UTC m=+0.164891805 container died 82b9bcef644d15a380177725bd2841cda0f28f1aa4291fc2a99ad26496765cd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:54:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c1152bea6dcb4e1f4ce5114eecd8193713508195d4b8474df440d86b55a01d6-merged.mount: Deactivated successfully.
Nov 25 17:54:01 compute-0 podman[456995]: 2025-11-25 17:54:01.212247144 +0000 UTC m=+0.205674131 container remove 82b9bcef644d15a380177725bd2841cda0f28f1aa4291fc2a99ad26496765cd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_fermi, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:54:01 compute-0 systemd[1]: libpod-conmon-82b9bcef644d15a380177725bd2841cda0f28f1aa4291fc2a99ad26496765cd9.scope: Deactivated successfully.
Nov 25 17:54:01 compute-0 podman[457037]: 2025-11-25 17:54:01.43911317 +0000 UTC m=+0.071784388 container create 82f22e4c8033df77152a07e61cc6b203ce4c88506b7ed589ed8a097a2584e106 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 17:54:01 compute-0 systemd[1]: Started libpod-conmon-82f22e4c8033df77152a07e61cc6b203ce4c88506b7ed589ed8a097a2584e106.scope.
Nov 25 17:54:01 compute-0 podman[457037]: 2025-11-25 17:54:01.411044718 +0000 UTC m=+0.043715986 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:54:01 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:54:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b88a4af4d799e20746b634014944cd9338f673f7251a960b49c547f325ee8eb7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:54:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b88a4af4d799e20746b634014944cd9338f673f7251a960b49c547f325ee8eb7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:54:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b88a4af4d799e20746b634014944cd9338f673f7251a960b49c547f325ee8eb7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:54:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b88a4af4d799e20746b634014944cd9338f673f7251a960b49c547f325ee8eb7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:54:01 compute-0 podman[457037]: 2025-11-25 17:54:01.538188289 +0000 UTC m=+0.170859557 container init 82f22e4c8033df77152a07e61cc6b203ce4c88506b7ed589ed8a097a2584e106 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 17:54:01 compute-0 podman[457037]: 2025-11-25 17:54:01.553337649 +0000 UTC m=+0.186008847 container start 82f22e4c8033df77152a07e61cc6b203ce4c88506b7ed589ed8a097a2584e106 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_gauss, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Nov 25 17:54:01 compute-0 podman[457037]: 2025-11-25 17:54:01.557494873 +0000 UTC m=+0.190166141 container attach 82f22e4c8033df77152a07e61cc6b203ce4c88506b7ed589ed8a097a2584e106 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 17:54:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3937: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:02 compute-0 zealous_gauss[457053]: {
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:     "0": [
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:         {
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:             "devices": [
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:                 "/dev/loop3"
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:             ],
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:             "lv_name": "ceph_lv0",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:             "lv_size": "21470642176",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:             "name": "ceph_lv0",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:             "tags": {
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:                 "ceph.cluster_name": "ceph",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:                 "ceph.crush_device_class": "",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:                 "ceph.encrypted": "0",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:                 "ceph.osd_id": "0",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:                 "ceph.type": "block",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:                 "ceph.vdo": "0"
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:             },
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:             "type": "block",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:             "vg_name": "ceph_vg0"
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:         }
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:     ],
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:     "1": [
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:         {
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:             "devices": [
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:                 "/dev/loop4"
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:             ],
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:             "lv_name": "ceph_lv1",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:             "lv_size": "21470642176",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:             "name": "ceph_lv1",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:             "tags": {
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:                 "ceph.cluster_name": "ceph",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:                 "ceph.crush_device_class": "",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:                 "ceph.encrypted": "0",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:                 "ceph.osd_id": "1",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:                 "ceph.type": "block",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:                 "ceph.vdo": "0"
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:             },
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:             "type": "block",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:             "vg_name": "ceph_vg1"
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:         }
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:     ],
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:     "2": [
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:         {
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:             "devices": [
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:                 "/dev/loop5"
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:             ],
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:             "lv_name": "ceph_lv2",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:             "lv_size": "21470642176",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:             "name": "ceph_lv2",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:             "tags": {
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:                 "ceph.cluster_name": "ceph",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:                 "ceph.crush_device_class": "",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:                 "ceph.encrypted": "0",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:                 "ceph.osd_id": "2",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:                 "ceph.type": "block",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:                 "ceph.vdo": "0"
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:             },
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:             "type": "block",
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:             "vg_name": "ceph_vg2"
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:         }
Nov 25 17:54:02 compute-0 zealous_gauss[457053]:     ]
Nov 25 17:54:02 compute-0 zealous_gauss[457053]: }
Nov 25 17:54:02 compute-0 systemd[1]: libpod-82f22e4c8033df77152a07e61cc6b203ce4c88506b7ed589ed8a097a2584e106.scope: Deactivated successfully.
Nov 25 17:54:02 compute-0 podman[457037]: 2025-11-25 17:54:02.357707505 +0000 UTC m=+0.990378703 container died 82f22e4c8033df77152a07e61cc6b203ce4c88506b7ed589ed8a097a2584e106 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 17:54:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-b88a4af4d799e20746b634014944cd9338f673f7251a960b49c547f325ee8eb7-merged.mount: Deactivated successfully.
Nov 25 17:54:02 compute-0 podman[457037]: 2025-11-25 17:54:02.419713488 +0000 UTC m=+1.052384676 container remove 82f22e4c8033df77152a07e61cc6b203ce4c88506b7ed589ed8a097a2584e106 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_gauss, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:54:02 compute-0 systemd[1]: libpod-conmon-82f22e4c8033df77152a07e61cc6b203ce4c88506b7ed589ed8a097a2584e106.scope: Deactivated successfully.
Nov 25 17:54:02 compute-0 sudo[456930]: pam_unix(sudo:session): session closed for user root
Nov 25 17:54:02 compute-0 sudo[457073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:54:02 compute-0 sudo[457073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:54:02 compute-0 sudo[457073]: pam_unix(sudo:session): session closed for user root
Nov 25 17:54:02 compute-0 sudo[457098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:54:02 compute-0 sudo[457098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:54:02 compute-0 sudo[457098]: pam_unix(sudo:session): session closed for user root
Nov 25 17:54:02 compute-0 sudo[457123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:54:02 compute-0 sudo[457123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:54:02 compute-0 sudo[457123]: pam_unix(sudo:session): session closed for user root
Nov 25 17:54:02 compute-0 sudo[457148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:54:02 compute-0 sudo[457148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:54:02 compute-0 nova_compute[254092]: 2025-11-25 17:54:02.843 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:54:03 compute-0 podman[457214]: 2025-11-25 17:54:03.038756065 +0000 UTC m=+0.044726325 container create e7f0a05eb6c907385718a6bfea3790e639f13bab30d84eb3235c889c539162fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 17:54:03 compute-0 systemd[1]: Started libpod-conmon-e7f0a05eb6c907385718a6bfea3790e639f13bab30d84eb3235c889c539162fb.scope.
Nov 25 17:54:03 compute-0 podman[457214]: 2025-11-25 17:54:03.021059065 +0000 UTC m=+0.027029335 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:54:03 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:54:03 compute-0 podman[457214]: 2025-11-25 17:54:03.144693179 +0000 UTC m=+0.150663499 container init e7f0a05eb6c907385718a6bfea3790e639f13bab30d84eb3235c889c539162fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:54:03 compute-0 podman[457214]: 2025-11-25 17:54:03.15246117 +0000 UTC m=+0.158431450 container start e7f0a05eb6c907385718a6bfea3790e639f13bab30d84eb3235c889c539162fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_blackwell, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 17:54:03 compute-0 ceph-mon[74985]: pgmap v3937: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:03 compute-0 podman[457214]: 2025-11-25 17:54:03.156198652 +0000 UTC m=+0.162168912 container attach e7f0a05eb6c907385718a6bfea3790e639f13bab30d84eb3235c889c539162fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_blackwell, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:54:03 compute-0 friendly_blackwell[457230]: 167 167
Nov 25 17:54:03 compute-0 systemd[1]: libpod-e7f0a05eb6c907385718a6bfea3790e639f13bab30d84eb3235c889c539162fb.scope: Deactivated successfully.
Nov 25 17:54:03 compute-0 podman[457214]: 2025-11-25 17:54:03.164061075 +0000 UTC m=+0.170031375 container died e7f0a05eb6c907385718a6bfea3790e639f13bab30d84eb3235c889c539162fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 17:54:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ae33584bb82086cbaebcc64dd7a5e02203614bc904af549444cbdc1a308a244-merged.mount: Deactivated successfully.
Nov 25 17:54:03 compute-0 podman[457214]: 2025-11-25 17:54:03.208938933 +0000 UTC m=+0.214909233 container remove e7f0a05eb6c907385718a6bfea3790e639f13bab30d84eb3235c889c539162fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 17:54:03 compute-0 systemd[1]: libpod-conmon-e7f0a05eb6c907385718a6bfea3790e639f13bab30d84eb3235c889c539162fb.scope: Deactivated successfully.
Nov 25 17:54:03 compute-0 podman[457254]: 2025-11-25 17:54:03.451731611 +0000 UTC m=+0.071115661 container create 26781447c1ba12dff13f6b2946bd84dd9e83a67f5f99fa7c4e71ff10715b9fcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 17:54:03 compute-0 systemd[1]: Started libpod-conmon-26781447c1ba12dff13f6b2946bd84dd9e83a67f5f99fa7c4e71ff10715b9fcc.scope.
Nov 25 17:54:03 compute-0 podman[457254]: 2025-11-25 17:54:03.430508985 +0000 UTC m=+0.049893065 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:54:03 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:54:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aea83991d9f366a81845dee22c68f75e57264fe6cdfbb5c4cc2c1cdeb5be488d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:54:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aea83991d9f366a81845dee22c68f75e57264fe6cdfbb5c4cc2c1cdeb5be488d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:54:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aea83991d9f366a81845dee22c68f75e57264fe6cdfbb5c4cc2c1cdeb5be488d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:54:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aea83991d9f366a81845dee22c68f75e57264fe6cdfbb5c4cc2c1cdeb5be488d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:54:03 compute-0 podman[457254]: 2025-11-25 17:54:03.570426502 +0000 UTC m=+0.189810622 container init 26781447c1ba12dff13f6b2946bd84dd9e83a67f5f99fa7c4e71ff10715b9fcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_perlman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:54:03 compute-0 podman[457254]: 2025-11-25 17:54:03.58621937 +0000 UTC m=+0.205603460 container start 26781447c1ba12dff13f6b2946bd84dd9e83a67f5f99fa7c4e71ff10715b9fcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:54:03 compute-0 podman[457254]: 2025-11-25 17:54:03.590066084 +0000 UTC m=+0.209450184 container attach 26781447c1ba12dff13f6b2946bd84dd9e83a67f5f99fa7c4e71ff10715b9fcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_perlman, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 17:54:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3938: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:04 compute-0 ceph-mon[74985]: pgmap v3938: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:04 compute-0 podman[457297]: 2025-11-25 17:54:04.660966812 +0000 UTC m=+0.071131151 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 25 17:54:04 compute-0 eager_perlman[457271]: {
Nov 25 17:54:04 compute-0 eager_perlman[457271]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:54:04 compute-0 eager_perlman[457271]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:54:04 compute-0 eager_perlman[457271]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:54:04 compute-0 eager_perlman[457271]:         "osd_id": 1,
Nov 25 17:54:04 compute-0 eager_perlman[457271]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:54:04 compute-0 eager_perlman[457271]:         "type": "bluestore"
Nov 25 17:54:04 compute-0 eager_perlman[457271]:     },
Nov 25 17:54:04 compute-0 eager_perlman[457271]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:54:04 compute-0 eager_perlman[457271]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:54:04 compute-0 eager_perlman[457271]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:54:04 compute-0 eager_perlman[457271]:         "osd_id": 2,
Nov 25 17:54:04 compute-0 eager_perlman[457271]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:54:04 compute-0 eager_perlman[457271]:         "type": "bluestore"
Nov 25 17:54:04 compute-0 eager_perlman[457271]:     },
Nov 25 17:54:04 compute-0 eager_perlman[457271]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:54:04 compute-0 eager_perlman[457271]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:54:04 compute-0 eager_perlman[457271]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:54:04 compute-0 eager_perlman[457271]:         "osd_id": 0,
Nov 25 17:54:04 compute-0 eager_perlman[457271]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:54:04 compute-0 eager_perlman[457271]:         "type": "bluestore"
Nov 25 17:54:04 compute-0 eager_perlman[457271]:     }
Nov 25 17:54:04 compute-0 eager_perlman[457271]: }
Nov 25 17:54:04 compute-0 podman[457296]: 2025-11-25 17:54:04.67452567 +0000 UTC m=+0.084941225 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 25 17:54:04 compute-0 systemd[1]: libpod-26781447c1ba12dff13f6b2946bd84dd9e83a67f5f99fa7c4e71ff10715b9fcc.scope: Deactivated successfully.
Nov 25 17:54:04 compute-0 systemd[1]: libpod-26781447c1ba12dff13f6b2946bd84dd9e83a67f5f99fa7c4e71ff10715b9fcc.scope: Consumed 1.115s CPU time.
Nov 25 17:54:04 compute-0 podman[457300]: 2025-11-25 17:54:04.733522561 +0000 UTC m=+0.139238398 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 17:54:04 compute-0 podman[457366]: 2025-11-25 17:54:04.756779173 +0000 UTC m=+0.033615753 container died 26781447c1ba12dff13f6b2946bd84dd9e83a67f5f99fa7c4e71ff10715b9fcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 17:54:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-aea83991d9f366a81845dee22c68f75e57264fe6cdfbb5c4cc2c1cdeb5be488d-merged.mount: Deactivated successfully.
Nov 25 17:54:04 compute-0 podman[457366]: 2025-11-25 17:54:04.815019342 +0000 UTC m=+0.091855942 container remove 26781447c1ba12dff13f6b2946bd84dd9e83a67f5f99fa7c4e71ff10715b9fcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_perlman, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 17:54:04 compute-0 systemd[1]: libpod-conmon-26781447c1ba12dff13f6b2946bd84dd9e83a67f5f99fa7c4e71ff10715b9fcc.scope: Deactivated successfully.
Nov 25 17:54:04 compute-0 sudo[457148]: pam_unix(sudo:session): session closed for user root
Nov 25 17:54:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:54:04 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:54:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:54:04 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:54:04 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 2f5fcc3e-9b86-4589-a2f2-a45fe259260a does not exist
Nov 25 17:54:04 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev b9410ac0-5321-4fe0-9df5-401ba8989727 does not exist
Nov 25 17:54:04 compute-0 sudo[457382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:54:04 compute-0 sudo[457382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:54:04 compute-0 sudo[457382]: pam_unix(sudo:session): session closed for user root
Nov 25 17:54:05 compute-0 sudo[457407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:54:05 compute-0 sudo[457407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:54:05 compute-0 sudo[457407]: pam_unix(sudo:session): session closed for user root
Nov 25 17:54:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:54:05 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:54:05 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:54:05 compute-0 nova_compute[254092]: 2025-11-25 17:54:05.942 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:54:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3939: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:06 compute-0 ceph-mon[74985]: pgmap v3939: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:07 compute-0 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 17:54:07 compute-0 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7201.5 total, 600.0 interval
                                           Cumulative writes: 39K writes, 155K keys, 39K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s
                                           Cumulative WAL: 39K writes, 14K syncs, 2.77 writes per sync, written: 0.15 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 17:54:07 compute-0 nova_compute[254092]: 2025-11-25 17:54:07.846 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:54:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3940: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:09 compute-0 ceph-mon[74985]: pgmap v3940: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3941: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:54:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:54:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:54:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:54:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:54:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:54:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:54:10 compute-0 nova_compute[254092]: 2025-11-25 17:54:10.942 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:54:11 compute-0 ceph-mon[74985]: pgmap v3941: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3942: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:12 compute-0 nova_compute[254092]: 2025-11-25 17:54:12.887 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:54:13 compute-0 ceph-mon[74985]: pgmap v3942: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:13 compute-0 nova_compute[254092]: 2025-11-25 17:54:13.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:54:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:54:13.700 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:54:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:54:13.701 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:54:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:54:13.701 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:54:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3943: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:14 compute-0 ceph-mon[74985]: pgmap v3943: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:14 compute-0 nova_compute[254092]: 2025-11-25 17:54:14.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:54:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:54:15 compute-0 nova_compute[254092]: 2025-11-25 17:54:15.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:54:15 compute-0 nova_compute[254092]: 2025-11-25 17:54:15.944 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:54:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3944: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:17 compute-0 ceph-mon[74985]: pgmap v3944: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:17 compute-0 nova_compute[254092]: 2025-11-25 17:54:17.919 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:54:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3945: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:19 compute-0 ceph-mon[74985]: pgmap v3945: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:19 compute-0 nova_compute[254092]: 2025-11-25 17:54:19.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:54:19 compute-0 nova_compute[254092]: 2025-11-25 17:54:19.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:54:19 compute-0 nova_compute[254092]: 2025-11-25 17:54:19.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:54:19 compute-0 nova_compute[254092]: 2025-11-25 17:54:19.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:54:19 compute-0 nova_compute[254092]: 2025-11-25 17:54:19.522 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 17:54:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3946: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:54:20 compute-0 ceph-mon[74985]: pgmap v3946: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:20 compute-0 nova_compute[254092]: 2025-11-25 17:54:20.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:54:20 compute-0 nova_compute[254092]: 2025-11-25 17:54:20.566 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:54:20 compute-0 nova_compute[254092]: 2025-11-25 17:54:20.567 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:54:20 compute-0 nova_compute[254092]: 2025-11-25 17:54:20.567 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:54:20 compute-0 nova_compute[254092]: 2025-11-25 17:54:20.567 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:54:20 compute-0 nova_compute[254092]: 2025-11-25 17:54:20.567 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:54:20 compute-0 nova_compute[254092]: 2025-11-25 17:54:20.947 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:54:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:54:21 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2942230637' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:54:21 compute-0 nova_compute[254092]: 2025-11-25 17:54:21.034 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:54:21 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2942230637' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:54:21 compute-0 nova_compute[254092]: 2025-11-25 17:54:21.257 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:54:21 compute-0 nova_compute[254092]: 2025-11-25 17:54:21.258 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3608MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:54:21 compute-0 nova_compute[254092]: 2025-11-25 17:54:21.258 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:54:21 compute-0 nova_compute[254092]: 2025-11-25 17:54:21.259 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:54:21 compute-0 nova_compute[254092]: 2025-11-25 17:54:21.328 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:54:21 compute-0 nova_compute[254092]: 2025-11-25 17:54:21.329 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:54:21 compute-0 nova_compute[254092]: 2025-11-25 17:54:21.349 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:54:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:54:21 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2874217884' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:54:21 compute-0 nova_compute[254092]: 2025-11-25 17:54:21.773 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:54:21 compute-0 nova_compute[254092]: 2025-11-25 17:54:21.783 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:54:21 compute-0 nova_compute[254092]: 2025-11-25 17:54:21.813 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:54:21 compute-0 nova_compute[254092]: 2025-11-25 17:54:21.818 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:54:21 compute-0 nova_compute[254092]: 2025-11-25 17:54:21.818 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.559s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:54:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3947: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:22 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2874217884' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:54:22 compute-0 ceph-mon[74985]: pgmap v3947: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:22 compute-0 nova_compute[254092]: 2025-11-25 17:54:22.970 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:54:23 compute-0 nova_compute[254092]: 2025-11-25 17:54:23.820 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:54:23 compute-0 nova_compute[254092]: 2025-11-25 17:54:23.821 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:54:23 compute-0 nova_compute[254092]: 2025-11-25 17:54:23.821 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:54:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3948: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:25 compute-0 ceph-mon[74985]: pgmap v3948: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:54:25 compute-0 ceph-mgr[75280]: [devicehealth INFO root] Check health
Nov 25 17:54:25 compute-0 nova_compute[254092]: 2025-11-25 17:54:25.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:54:25 compute-0 nova_compute[254092]: 2025-11-25 17:54:25.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 17:54:25 compute-0 nova_compute[254092]: 2025-11-25 17:54:25.948 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:54:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3949: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:27 compute-0 ceph-mon[74985]: pgmap v3949: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:28 compute-0 nova_compute[254092]: 2025-11-25 17:54:28.002 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:54:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3950: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:28 compute-0 ceph-mon[74985]: pgmap v3950: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3951: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:54:30 compute-0 nova_compute[254092]: 2025-11-25 17:54:30.950 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:54:31 compute-0 ceph-mon[74985]: pgmap v3951: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3952: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:32 compute-0 ceph-mon[74985]: pgmap v3952: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:32 compute-0 nova_compute[254092]: 2025-11-25 17:54:32.522 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:54:33 compute-0 nova_compute[254092]: 2025-11-25 17:54:33.006 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:54:33 compute-0 nova_compute[254092]: 2025-11-25 17:54:33.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:54:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3953: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:35 compute-0 ceph-mon[74985]: pgmap v3953: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:54:35 compute-0 podman[457478]: 2025-11-25 17:54:35.650557131 +0000 UTC m=+0.067877903 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 17:54:35 compute-0 podman[457480]: 2025-11-25 17:54:35.672445084 +0000 UTC m=+0.083165167 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 25 17:54:35 compute-0 podman[457479]: 2025-11-25 17:54:35.681126231 +0000 UTC m=+0.087795704 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 17:54:35 compute-0 nova_compute[254092]: 2025-11-25 17:54:35.953 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:54:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3954: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:36 compute-0 nova_compute[254092]: 2025-11-25 17:54:36.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:54:36 compute-0 nova_compute[254092]: 2025-11-25 17:54:36.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 17:54:36 compute-0 nova_compute[254092]: 2025-11-25 17:54:36.512 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 17:54:36 compute-0 sshd-session[457476]: Connection closed by authenticating user root 171.244.51.45 port 33124 [preauth]
Nov 25 17:54:37 compute-0 ceph-mon[74985]: pgmap v3954: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:38 compute-0 nova_compute[254092]: 2025-11-25 17:54:38.049 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:54:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3955: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:38 compute-0 ceph-mon[74985]: pgmap v3955: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3956: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:54:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:54:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:54:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:54:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:54:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:54:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:54:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:54:40
Nov 25 17:54:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:54:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:54:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'backups', '.mgr', 'vms', 'images', 'default.rgw.meta', 'volumes']
Nov 25 17:54:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:54:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:54:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:54:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:54:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:54:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:54:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:54:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:54:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:54:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:54:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:54:40 compute-0 nova_compute[254092]: 2025-11-25 17:54:40.954 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:54:41 compute-0 ceph-mon[74985]: pgmap v3956: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3957: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:42 compute-0 ceph-mon[74985]: pgmap v3957: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:43 compute-0 nova_compute[254092]: 2025-11-25 17:54:43.081 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:54:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3958: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:45 compute-0 ceph-mon[74985]: pgmap v3958: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:54:45 compute-0 nova_compute[254092]: 2025-11-25 17:54:45.956 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:54:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3959: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:47 compute-0 ceph-mon[74985]: pgmap v3959: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:47 compute-0 nova_compute[254092]: 2025-11-25 17:54:47.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:54:48 compute-0 nova_compute[254092]: 2025-11-25 17:54:48.085 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:54:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3960: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:48 compute-0 ceph-mon[74985]: pgmap v3960: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3961: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:54:50 compute-0 nova_compute[254092]: 2025-11-25 17:54:50.957 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:54:51 compute-0 ceph-mon[74985]: pgmap v3961: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3962: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:54:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:54:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:54:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:54:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 17:54:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:54:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:54:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:54:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:54:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:54:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 17:54:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:54:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:54:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:54:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:54:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:54:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:54:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:54:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:54:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:54:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:54:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:54:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:54:53 compute-0 nova_compute[254092]: 2025-11-25 17:54:53.090 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:54:53 compute-0 ceph-mon[74985]: pgmap v3962: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3963: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:54 compute-0 ceph-mon[74985]: pgmap v3963: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:54:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:54:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1299631687' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:54:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:54:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1299631687' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:54:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/1299631687' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:54:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/1299631687' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:54:55 compute-0 nova_compute[254092]: 2025-11-25 17:54:55.961 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:54:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3964: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:56 compute-0 ceph-mon[74985]: pgmap v3964: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:58 compute-0 nova_compute[254092]: 2025-11-25 17:54:58.094 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:54:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3965: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:54:59 compute-0 ceph-mon[74985]: pgmap v3965: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3966: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:55:00 compute-0 ceph-mon[74985]: pgmap v3966: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:00 compute-0 nova_compute[254092]: 2025-11-25 17:55:00.965 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:55:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3967: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:03 compute-0 nova_compute[254092]: 2025-11-25 17:55:03.124 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:55:03 compute-0 ceph-mon[74985]: pgmap v3967: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3968: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:04 compute-0 ceph-mon[74985]: pgmap v3968: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:05 compute-0 sudo[457546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:55:05 compute-0 sudo[457546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:55:05 compute-0 sudo[457546]: pam_unix(sudo:session): session closed for user root
Nov 25 17:55:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:55:05 compute-0 sudo[457571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:55:05 compute-0 sudo[457571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:55:05 compute-0 sudo[457571]: pam_unix(sudo:session): session closed for user root
Nov 25 17:55:05 compute-0 sudo[457596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:55:05 compute-0 sudo[457596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:55:05 compute-0 sudo[457596]: pam_unix(sudo:session): session closed for user root
Nov 25 17:55:05 compute-0 sudo[457621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:55:05 compute-0 sudo[457621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:55:05 compute-0 sudo[457621]: pam_unix(sudo:session): session closed for user root
Nov 25 17:55:05 compute-0 nova_compute[254092]: 2025-11-25 17:55:05.965 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:55:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:55:05 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:55:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:55:05 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:55:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:55:06 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:55:06 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev f5956c67-45df-45de-aa6f-f28ba6312299 does not exist
Nov 25 17:55:06 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev ae69cf78-d17e-4865-b027-ed5457fe5b95 does not exist
Nov 25 17:55:06 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 8b2fea4a-6c0b-4ec2-933b-b57664a45961 does not exist
Nov 25 17:55:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:55:06 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:55:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:55:06 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:55:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:55:06 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:55:06 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:55:06 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:55:06 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:55:06 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:55:06 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:55:06 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:55:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3969: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:06 compute-0 sudo[457678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:55:06 compute-0 sudo[457678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:55:06 compute-0 sudo[457678]: pam_unix(sudo:session): session closed for user root
Nov 25 17:55:06 compute-0 sudo[457721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:55:06 compute-0 sudo[457721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:55:06 compute-0 podman[457703]: 2025-11-25 17:55:06.1928828 +0000 UTC m=+0.061333635 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 25 17:55:06 compute-0 sudo[457721]: pam_unix(sudo:session): session closed for user root
Nov 25 17:55:06 compute-0 podman[457702]: 2025-11-25 17:55:06.210856448 +0000 UTC m=+0.085556973 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible)
Nov 25 17:55:06 compute-0 podman[457704]: 2025-11-25 17:55:06.232443613 +0000 UTC m=+0.104193348 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 25 17:55:06 compute-0 sudo[457784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:55:06 compute-0 sudo[457784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:55:06 compute-0 sudo[457784]: pam_unix(sudo:session): session closed for user root
Nov 25 17:55:06 compute-0 sudo[457815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:55:06 compute-0 sudo[457815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:55:06 compute-0 podman[457880]: 2025-11-25 17:55:06.687244375 +0000 UTC m=+0.042778413 container create 135e00fac1613096951f9ef42bef8a8cabec94a6fbbbd3b8f3d956d126abab51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kilby, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:55:06 compute-0 systemd[1]: Started libpod-conmon-135e00fac1613096951f9ef42bef8a8cabec94a6fbbbd3b8f3d956d126abab51.scope.
Nov 25 17:55:06 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:55:06 compute-0 podman[457880]: 2025-11-25 17:55:06.671375294 +0000 UTC m=+0.026909352 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:55:06 compute-0 podman[457880]: 2025-11-25 17:55:06.772549529 +0000 UTC m=+0.128083597 container init 135e00fac1613096951f9ef42bef8a8cabec94a6fbbbd3b8f3d956d126abab51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 17:55:06 compute-0 podman[457880]: 2025-11-25 17:55:06.781403769 +0000 UTC m=+0.136937807 container start 135e00fac1613096951f9ef42bef8a8cabec94a6fbbbd3b8f3d956d126abab51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:55:06 compute-0 podman[457880]: 2025-11-25 17:55:06.784760971 +0000 UTC m=+0.140295009 container attach 135e00fac1613096951f9ef42bef8a8cabec94a6fbbbd3b8f3d956d126abab51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kilby, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:55:06 compute-0 nostalgic_kilby[457897]: 167 167
Nov 25 17:55:06 compute-0 systemd[1]: libpod-135e00fac1613096951f9ef42bef8a8cabec94a6fbbbd3b8f3d956d126abab51.scope: Deactivated successfully.
Nov 25 17:55:06 compute-0 conmon[457897]: conmon 135e00fac1613096951f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-135e00fac1613096951f9ef42bef8a8cabec94a6fbbbd3b8f3d956d126abab51.scope/container/memory.events
Nov 25 17:55:06 compute-0 podman[457880]: 2025-11-25 17:55:06.789705404 +0000 UTC m=+0.145239442 container died 135e00fac1613096951f9ef42bef8a8cabec94a6fbbbd3b8f3d956d126abab51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kilby, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 17:55:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-3185c74086fcc2745072874c91e1d428a7e2ac9fb1c95e90a4438165c679ec9a-merged.mount: Deactivated successfully.
Nov 25 17:55:06 compute-0 podman[457880]: 2025-11-25 17:55:06.833497363 +0000 UTC m=+0.189031401 container remove 135e00fac1613096951f9ef42bef8a8cabec94a6fbbbd3b8f3d956d126abab51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 17:55:06 compute-0 systemd[1]: libpod-conmon-135e00fac1613096951f9ef42bef8a8cabec94a6fbbbd3b8f3d956d126abab51.scope: Deactivated successfully.
Nov 25 17:55:07 compute-0 podman[457920]: 2025-11-25 17:55:07.058349164 +0000 UTC m=+0.047778748 container create b51146943fe00784818d3a885c66d9473e6f10c253a33e47ba2ac3e232ae7a10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_galileo, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 17:55:07 compute-0 ceph-mon[74985]: pgmap v3969: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:07 compute-0 systemd[1]: Started libpod-conmon-b51146943fe00784818d3a885c66d9473e6f10c253a33e47ba2ac3e232ae7a10.scope.
Nov 25 17:55:07 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:55:07 compute-0 podman[457920]: 2025-11-25 17:55:07.04124749 +0000 UTC m=+0.030677104 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:55:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1000ad7f323cf09794afbf0098eaf481c16087c8ec1fc3e452b8ee4abe00d8fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:55:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1000ad7f323cf09794afbf0098eaf481c16087c8ec1fc3e452b8ee4abe00d8fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:55:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1000ad7f323cf09794afbf0098eaf481c16087c8ec1fc3e452b8ee4abe00d8fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:55:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1000ad7f323cf09794afbf0098eaf481c16087c8ec1fc3e452b8ee4abe00d8fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:55:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1000ad7f323cf09794afbf0098eaf481c16087c8ec1fc3e452b8ee4abe00d8fb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:55:07 compute-0 podman[457920]: 2025-11-25 17:55:07.156076456 +0000 UTC m=+0.145506060 container init b51146943fe00784818d3a885c66d9473e6f10c253a33e47ba2ac3e232ae7a10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_galileo, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 17:55:07 compute-0 podman[457920]: 2025-11-25 17:55:07.163334143 +0000 UTC m=+0.152763727 container start b51146943fe00784818d3a885c66d9473e6f10c253a33e47ba2ac3e232ae7a10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_galileo, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 17:55:07 compute-0 podman[457920]: 2025-11-25 17:55:07.166366695 +0000 UTC m=+0.155796299 container attach b51146943fe00784818d3a885c66d9473e6f10c253a33e47ba2ac3e232ae7a10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_galileo, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 17:55:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3970: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:08 compute-0 nova_compute[254092]: 2025-11-25 17:55:08.184 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:55:08 compute-0 ceph-mon[74985]: pgmap v3970: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:08 compute-0 friendly_galileo[457936]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:55:08 compute-0 friendly_galileo[457936]: --> relative data size: 1.0
Nov 25 17:55:08 compute-0 friendly_galileo[457936]: --> All data devices are unavailable
Nov 25 17:55:08 compute-0 systemd[1]: libpod-b51146943fe00784818d3a885c66d9473e6f10c253a33e47ba2ac3e232ae7a10.scope: Deactivated successfully.
Nov 25 17:55:08 compute-0 systemd[1]: libpod-b51146943fe00784818d3a885c66d9473e6f10c253a33e47ba2ac3e232ae7a10.scope: Consumed 1.134s CPU time.
Nov 25 17:55:08 compute-0 podman[457920]: 2025-11-25 17:55:08.416411594 +0000 UTC m=+1.405841178 container died b51146943fe00784818d3a885c66d9473e6f10c253a33e47ba2ac3e232ae7a10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_galileo, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 17:55:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-1000ad7f323cf09794afbf0098eaf481c16087c8ec1fc3e452b8ee4abe00d8fb-merged.mount: Deactivated successfully.
Nov 25 17:55:08 compute-0 podman[457920]: 2025-11-25 17:55:08.479958299 +0000 UTC m=+1.469387883 container remove b51146943fe00784818d3a885c66d9473e6f10c253a33e47ba2ac3e232ae7a10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_galileo, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Nov 25 17:55:08 compute-0 systemd[1]: libpod-conmon-b51146943fe00784818d3a885c66d9473e6f10c253a33e47ba2ac3e232ae7a10.scope: Deactivated successfully.
Nov 25 17:55:08 compute-0 sudo[457815]: pam_unix(sudo:session): session closed for user root
Nov 25 17:55:08 compute-0 sudo[457977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:55:08 compute-0 sudo[457977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:55:08 compute-0 sudo[457977]: pam_unix(sudo:session): session closed for user root
Nov 25 17:55:08 compute-0 sudo[458002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:55:08 compute-0 sudo[458002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:55:08 compute-0 sudo[458002]: pam_unix(sudo:session): session closed for user root
Nov 25 17:55:08 compute-0 sudo[458027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:55:08 compute-0 sudo[458027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:55:08 compute-0 sudo[458027]: pam_unix(sudo:session): session closed for user root
Nov 25 17:55:08 compute-0 sudo[458052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:55:08 compute-0 sudo[458052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:55:09 compute-0 podman[458118]: 2025-11-25 17:55:09.284139979 +0000 UTC m=+0.061019147 container create fab854771068471fb9cfff816bb02af57fe1c2bc114bd4ee32fc51302348b7c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_goldstine, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 17:55:09 compute-0 systemd[1]: Started libpod-conmon-fab854771068471fb9cfff816bb02af57fe1c2bc114bd4ee32fc51302348b7c1.scope.
Nov 25 17:55:09 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:55:09 compute-0 podman[458118]: 2025-11-25 17:55:09.266599973 +0000 UTC m=+0.043479131 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:55:09 compute-0 podman[458118]: 2025-11-25 17:55:09.375724254 +0000 UTC m=+0.152603462 container init fab854771068471fb9cfff816bb02af57fe1c2bc114bd4ee32fc51302348b7c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_goldstine, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:55:09 compute-0 podman[458118]: 2025-11-25 17:55:09.383764942 +0000 UTC m=+0.160644120 container start fab854771068471fb9cfff816bb02af57fe1c2bc114bd4ee32fc51302348b7c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 17:55:09 compute-0 podman[458118]: 2025-11-25 17:55:09.386460095 +0000 UTC m=+0.163339273 container attach fab854771068471fb9cfff816bb02af57fe1c2bc114bd4ee32fc51302348b7c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_goldstine, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:55:09 compute-0 silly_goldstine[458134]: 167 167
Nov 25 17:55:09 compute-0 systemd[1]: libpod-fab854771068471fb9cfff816bb02af57fe1c2bc114bd4ee32fc51302348b7c1.scope: Deactivated successfully.
Nov 25 17:55:09 compute-0 podman[458118]: 2025-11-25 17:55:09.3914175 +0000 UTC m=+0.168296658 container died fab854771068471fb9cfff816bb02af57fe1c2bc114bd4ee32fc51302348b7c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_goldstine, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:55:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed1d316139c266c1e9885b9065292368505a723594df76ffaaea78f00fd2f90e-merged.mount: Deactivated successfully.
Nov 25 17:55:09 compute-0 podman[458118]: 2025-11-25 17:55:09.431027214 +0000 UTC m=+0.207906412 container remove fab854771068471fb9cfff816bb02af57fe1c2bc114bd4ee32fc51302348b7c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_goldstine, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:55:09 compute-0 systemd[1]: libpod-conmon-fab854771068471fb9cfff816bb02af57fe1c2bc114bd4ee32fc51302348b7c1.scope: Deactivated successfully.
Nov 25 17:55:09 compute-0 podman[458158]: 2025-11-25 17:55:09.656087711 +0000 UTC m=+0.045428983 container create e8c41f78f1332d60bfb4acf9cd82b8e51dec67a7276cae820639b05634e8898e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wu, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 17:55:09 compute-0 systemd[1]: Started libpod-conmon-e8c41f78f1332d60bfb4acf9cd82b8e51dec67a7276cae820639b05634e8898e.scope.
Nov 25 17:55:09 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:55:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed87417a508f6366f154519a0a184b28d7f93f0ca280627a1c3511971fe02263/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:55:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed87417a508f6366f154519a0a184b28d7f93f0ca280627a1c3511971fe02263/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:55:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed87417a508f6366f154519a0a184b28d7f93f0ca280627a1c3511971fe02263/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:55:09 compute-0 podman[458158]: 2025-11-25 17:55:09.636677295 +0000 UTC m=+0.026018557 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:55:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed87417a508f6366f154519a0a184b28d7f93f0ca280627a1c3511971fe02263/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:55:09 compute-0 podman[458158]: 2025-11-25 17:55:09.744668935 +0000 UTC m=+0.134010227 container init e8c41f78f1332d60bfb4acf9cd82b8e51dec67a7276cae820639b05634e8898e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wu, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 17:55:09 compute-0 podman[458158]: 2025-11-25 17:55:09.755610572 +0000 UTC m=+0.144951814 container start e8c41f78f1332d60bfb4acf9cd82b8e51dec67a7276cae820639b05634e8898e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:55:09 compute-0 podman[458158]: 2025-11-25 17:55:09.758779568 +0000 UTC m=+0.148120800 container attach e8c41f78f1332d60bfb4acf9cd82b8e51dec67a7276cae820639b05634e8898e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wu, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:55:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3971: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:55:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:55:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:55:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:55:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:55:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:55:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:55:10 compute-0 sleepy_wu[458174]: {
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:     "0": [
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:         {
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:             "devices": [
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:                 "/dev/loop3"
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:             ],
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:             "lv_name": "ceph_lv0",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:             "lv_size": "21470642176",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:             "name": "ceph_lv0",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:             "tags": {
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:                 "ceph.cluster_name": "ceph",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:                 "ceph.crush_device_class": "",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:                 "ceph.encrypted": "0",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:                 "ceph.osd_id": "0",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:                 "ceph.type": "block",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:                 "ceph.vdo": "0"
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:             },
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:             "type": "block",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:             "vg_name": "ceph_vg0"
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:         }
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:     ],
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:     "1": [
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:         {
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:             "devices": [
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:                 "/dev/loop4"
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:             ],
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:             "lv_name": "ceph_lv1",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:             "lv_size": "21470642176",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:             "name": "ceph_lv1",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:             "tags": {
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:                 "ceph.cluster_name": "ceph",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:                 "ceph.crush_device_class": "",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:                 "ceph.encrypted": "0",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:                 "ceph.osd_id": "1",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:                 "ceph.type": "block",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:                 "ceph.vdo": "0"
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:             },
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:             "type": "block",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:             "vg_name": "ceph_vg1"
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:         }
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:     ],
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:     "2": [
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:         {
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:             "devices": [
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:                 "/dev/loop5"
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:             ],
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:             "lv_name": "ceph_lv2",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:             "lv_size": "21470642176",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:             "name": "ceph_lv2",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:             "tags": {
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:                 "ceph.cluster_name": "ceph",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:                 "ceph.crush_device_class": "",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:                 "ceph.encrypted": "0",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:                 "ceph.osd_id": "2",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:                 "ceph.type": "block",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:                 "ceph.vdo": "0"
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:             },
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:             "type": "block",
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:             "vg_name": "ceph_vg2"
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:         }
Nov 25 17:55:10 compute-0 sleepy_wu[458174]:     ]
Nov 25 17:55:10 compute-0 sleepy_wu[458174]: }
Nov 25 17:55:10 compute-0 systemd[1]: libpod-e8c41f78f1332d60bfb4acf9cd82b8e51dec67a7276cae820639b05634e8898e.scope: Deactivated successfully.
Nov 25 17:55:10 compute-0 podman[458158]: 2025-11-25 17:55:10.522533632 +0000 UTC m=+0.911874944 container died e8c41f78f1332d60bfb4acf9cd82b8e51dec67a7276cae820639b05634e8898e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wu, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 17:55:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed87417a508f6366f154519a0a184b28d7f93f0ca280627a1c3511971fe02263-merged.mount: Deactivated successfully.
Nov 25 17:55:10 compute-0 podman[458158]: 2025-11-25 17:55:10.59656184 +0000 UTC m=+0.985903082 container remove e8c41f78f1332d60bfb4acf9cd82b8e51dec67a7276cae820639b05634e8898e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_wu, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:55:10 compute-0 systemd[1]: libpod-conmon-e8c41f78f1332d60bfb4acf9cd82b8e51dec67a7276cae820639b05634e8898e.scope: Deactivated successfully.
Nov 25 17:55:10 compute-0 sudo[458052]: pam_unix(sudo:session): session closed for user root
Nov 25 17:55:10 compute-0 sudo[458196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:55:10 compute-0 sudo[458196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:55:10 compute-0 sudo[458196]: pam_unix(sudo:session): session closed for user root
Nov 25 17:55:10 compute-0 sudo[458221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:55:10 compute-0 sudo[458221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:55:10 compute-0 sudo[458221]: pam_unix(sudo:session): session closed for user root
Nov 25 17:55:10 compute-0 sudo[458246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:55:10 compute-0 sudo[458246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:55:10 compute-0 sudo[458246]: pam_unix(sudo:session): session closed for user root
Nov 25 17:55:10 compute-0 sudo[458271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:55:10 compute-0 sudo[458271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:55:10 compute-0 nova_compute[254092]: 2025-11-25 17:55:10.967 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:55:11 compute-0 ceph-mon[74985]: pgmap v3971: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:11 compute-0 podman[458335]: 2025-11-25 17:55:11.272896303 +0000 UTC m=+0.050614845 container create 15a747d72de8f890a29b0388f106566572ced6fe4e2e5d1dd48984f1d4eaf570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:55:11 compute-0 systemd[1]: Started libpod-conmon-15a747d72de8f890a29b0388f106566572ced6fe4e2e5d1dd48984f1d4eaf570.scope.
Nov 25 17:55:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:55:11 compute-0 podman[458335]: 2025-11-25 17:55:11.348397171 +0000 UTC m=+0.126115733 container init 15a747d72de8f890a29b0388f106566572ced6fe4e2e5d1dd48984f1d4eaf570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 17:55:11 compute-0 podman[458335]: 2025-11-25 17:55:11.257194826 +0000 UTC m=+0.034913378 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:55:11 compute-0 podman[458335]: 2025-11-25 17:55:11.357173789 +0000 UTC m=+0.134892331 container start 15a747d72de8f890a29b0388f106566572ced6fe4e2e5d1dd48984f1d4eaf570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_pasteur, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 17:55:11 compute-0 podman[458335]: 2025-11-25 17:55:11.360067428 +0000 UTC m=+0.137785970 container attach 15a747d72de8f890a29b0388f106566572ced6fe4e2e5d1dd48984f1d4eaf570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_pasteur, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Nov 25 17:55:11 compute-0 goofy_pasteur[458351]: 167 167
Nov 25 17:55:11 compute-0 systemd[1]: libpod-15a747d72de8f890a29b0388f106566572ced6fe4e2e5d1dd48984f1d4eaf570.scope: Deactivated successfully.
Nov 25 17:55:11 compute-0 podman[458356]: 2025-11-25 17:55:11.401591755 +0000 UTC m=+0.024968739 container died 15a747d72de8f890a29b0388f106566572ced6fe4e2e5d1dd48984f1d4eaf570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_pasteur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 17:55:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5b3ffd6b166281217861ef5db44936535f05f606da6e4644cc5e68580e56be8-merged.mount: Deactivated successfully.
Nov 25 17:55:11 compute-0 podman[458356]: 2025-11-25 17:55:11.428469034 +0000 UTC m=+0.051845998 container remove 15a747d72de8f890a29b0388f106566572ced6fe4e2e5d1dd48984f1d4eaf570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_pasteur, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 17:55:11 compute-0 systemd[1]: libpod-conmon-15a747d72de8f890a29b0388f106566572ced6fe4e2e5d1dd48984f1d4eaf570.scope: Deactivated successfully.
Nov 25 17:55:11 compute-0 podman[458378]: 2025-11-25 17:55:11.592554896 +0000 UTC m=+0.041898798 container create 538a23a802095f1b0d9c819e5184563ce7405d804c60d41e58ddded319d4f107 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_banach, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:55:11 compute-0 systemd[1]: Started libpod-conmon-538a23a802095f1b0d9c819e5184563ce7405d804c60d41e58ddded319d4f107.scope.
Nov 25 17:55:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:55:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6894a6e11d31605111763d4b0bb2c521322a861a74a05ed102e5836911f0f694/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:55:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6894a6e11d31605111763d4b0bb2c521322a861a74a05ed102e5836911f0f694/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:55:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6894a6e11d31605111763d4b0bb2c521322a861a74a05ed102e5836911f0f694/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:55:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6894a6e11d31605111763d4b0bb2c521322a861a74a05ed102e5836911f0f694/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:55:11 compute-0 podman[458378]: 2025-11-25 17:55:11.575932825 +0000 UTC m=+0.025276757 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:55:11 compute-0 podman[458378]: 2025-11-25 17:55:11.682672961 +0000 UTC m=+0.132016883 container init 538a23a802095f1b0d9c819e5184563ce7405d804c60d41e58ddded319d4f107 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_banach, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 17:55:11 compute-0 podman[458378]: 2025-11-25 17:55:11.688203241 +0000 UTC m=+0.137547143 container start 538a23a802095f1b0d9c819e5184563ce7405d804c60d41e58ddded319d4f107 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 17:55:11 compute-0 podman[458378]: 2025-11-25 17:55:11.691567283 +0000 UTC m=+0.140911215 container attach 538a23a802095f1b0d9c819e5184563ce7405d804c60d41e58ddded319d4f107 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 17:55:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3972: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:12 compute-0 competent_banach[458395]: {
Nov 25 17:55:12 compute-0 competent_banach[458395]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:55:12 compute-0 competent_banach[458395]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:55:12 compute-0 competent_banach[458395]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:55:12 compute-0 competent_banach[458395]:         "osd_id": 1,
Nov 25 17:55:12 compute-0 competent_banach[458395]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:55:12 compute-0 competent_banach[458395]:         "type": "bluestore"
Nov 25 17:55:12 compute-0 competent_banach[458395]:     },
Nov 25 17:55:12 compute-0 competent_banach[458395]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:55:12 compute-0 competent_banach[458395]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:55:12 compute-0 competent_banach[458395]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:55:12 compute-0 competent_banach[458395]:         "osd_id": 2,
Nov 25 17:55:12 compute-0 competent_banach[458395]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:55:12 compute-0 competent_banach[458395]:         "type": "bluestore"
Nov 25 17:55:12 compute-0 competent_banach[458395]:     },
Nov 25 17:55:12 compute-0 competent_banach[458395]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:55:12 compute-0 competent_banach[458395]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:55:12 compute-0 competent_banach[458395]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:55:12 compute-0 competent_banach[458395]:         "osd_id": 0,
Nov 25 17:55:12 compute-0 competent_banach[458395]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:55:12 compute-0 competent_banach[458395]:         "type": "bluestore"
Nov 25 17:55:12 compute-0 competent_banach[458395]:     }
Nov 25 17:55:12 compute-0 competent_banach[458395]: }
Nov 25 17:55:12 compute-0 systemd[1]: libpod-538a23a802095f1b0d9c819e5184563ce7405d804c60d41e58ddded319d4f107.scope: Deactivated successfully.
Nov 25 17:55:12 compute-0 podman[458378]: 2025-11-25 17:55:12.614463665 +0000 UTC m=+1.063807557 container died 538a23a802095f1b0d9c819e5184563ce7405d804c60d41e58ddded319d4f107 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:55:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-6894a6e11d31605111763d4b0bb2c521322a861a74a05ed102e5836911f0f694-merged.mount: Deactivated successfully.
Nov 25 17:55:12 compute-0 podman[458378]: 2025-11-25 17:55:12.662385676 +0000 UTC m=+1.111729578 container remove 538a23a802095f1b0d9c819e5184563ce7405d804c60d41e58ddded319d4f107 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_banach, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:55:12 compute-0 systemd[1]: libpod-conmon-538a23a802095f1b0d9c819e5184563ce7405d804c60d41e58ddded319d4f107.scope: Deactivated successfully.
Nov 25 17:55:12 compute-0 sudo[458271]: pam_unix(sudo:session): session closed for user root
Nov 25 17:55:12 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:55:12 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:55:12 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:55:12 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:55:12 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev fda29980-9a99-45d6-801f-3ea2b4160525 does not exist
Nov 25 17:55:12 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 41f1299e-0a84-4c29-a311-031cfce62a25 does not exist
Nov 25 17:55:12 compute-0 sudo[458440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:55:12 compute-0 sudo[458440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:55:12 compute-0 sudo[458440]: pam_unix(sudo:session): session closed for user root
Nov 25 17:55:12 compute-0 sudo[458465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:55:12 compute-0 sudo[458465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:55:12 compute-0 sudo[458465]: pam_unix(sudo:session): session closed for user root
Nov 25 17:55:13 compute-0 ceph-mon[74985]: pgmap v3972: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:13 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:55:13 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:55:13 compute-0 nova_compute[254092]: 2025-11-25 17:55:13.220 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:55:13 compute-0 nova_compute[254092]: 2025-11-25 17:55:13.509 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:55:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:55:13.701 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:55:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:55:13.702 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:55:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:55:13.702 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:55:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3973: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:14 compute-0 ceph-mon[74985]: pgmap v3973: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:14 compute-0 nova_compute[254092]: 2025-11-25 17:55:14.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:55:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:55:15 compute-0 nova_compute[254092]: 2025-11-25 17:55:15.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:55:15 compute-0 nova_compute[254092]: 2025-11-25 17:55:15.968 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:55:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3974: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:17 compute-0 ceph-mon[74985]: pgmap v3974: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:17 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #192. Immutable memtables: 0.
Nov 25 17:55:17 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:55:17.188915) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 17:55:17 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 119] Flushing memtable with next log file: 192
Nov 25 17:55:17 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093317188958, "job": 119, "event": "flush_started", "num_memtables": 1, "num_entries": 1428, "num_deletes": 251, "total_data_size": 2237469, "memory_usage": 2275544, "flush_reason": "Manual Compaction"}
Nov 25 17:55:17 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 119] Level-0 flush table #193: started
Nov 25 17:55:17 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093317204734, "cf_name": "default", "job": 119, "event": "table_file_creation", "file_number": 193, "file_size": 2193753, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 80630, "largest_seqno": 82057, "table_properties": {"data_size": 2187069, "index_size": 3818, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13811, "raw_average_key_size": 19, "raw_value_size": 2173729, "raw_average_value_size": 3123, "num_data_blocks": 172, "num_entries": 696, "num_filter_entries": 696, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764093170, "oldest_key_time": 1764093170, "file_creation_time": 1764093317, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 193, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:55:17 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 119] Flush lasted 15934 microseconds, and 6662 cpu microseconds.
Nov 25 17:55:17 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:55:17 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:55:17.204778) [db/flush_job.cc:967] [default] [JOB 119] Level-0 flush table #193: 2193753 bytes OK
Nov 25 17:55:17 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:55:17.204867) [db/memtable_list.cc:519] [default] Level-0 commit table #193 started
Nov 25 17:55:17 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:55:17.206134) [db/memtable_list.cc:722] [default] Level-0 commit table #193: memtable #1 done
Nov 25 17:55:17 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:55:17.206148) EVENT_LOG_v1 {"time_micros": 1764093317206143, "job": 119, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 17:55:17 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:55:17.206169) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 17:55:17 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 119] Try to delete WAL files size 2231182, prev total WAL file size 2231182, number of live WAL files 2.
Nov 25 17:55:17 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000189.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:55:17 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:55:17.207037) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038303332' seq:72057594037927935, type:22 .. '7061786F730038323834' seq:0, type:0; will stop at (end)
Nov 25 17:55:17 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 120] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 17:55:17 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 119 Base level 0, inputs: [193(2142KB)], [191(10MB)]
Nov 25 17:55:17 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093317207150, "job": 120, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [193], "files_L6": [191], "score": -1, "input_data_size": 12713531, "oldest_snapshot_seqno": -1}
Nov 25 17:55:17 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 120] Generated table #194: 9636 keys, 10954011 bytes, temperature: kUnknown
Nov 25 17:55:17 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093317283413, "cf_name": "default", "job": 120, "event": "table_file_creation", "file_number": 194, "file_size": 10954011, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10894175, "index_size": 34635, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24133, "raw_key_size": 254735, "raw_average_key_size": 26, "raw_value_size": 10726498, "raw_average_value_size": 1113, "num_data_blocks": 1329, "num_entries": 9636, "num_filter_entries": 9636, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764093317, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 194, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:55:17 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:55:17 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:55:17.283872) [db/compaction/compaction_job.cc:1663] [default] [JOB 120] Compacted 1@0 + 1@6 files to L6 => 10954011 bytes
Nov 25 17:55:17 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:55:17.285443) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 166.5 rd, 143.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 10.0 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(10.8) write-amplify(5.0) OK, records in: 10150, records dropped: 514 output_compression: NoCompression
Nov 25 17:55:17 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:55:17.285476) EVENT_LOG_v1 {"time_micros": 1764093317285461, "job": 120, "event": "compaction_finished", "compaction_time_micros": 76369, "compaction_time_cpu_micros": 52918, "output_level": 6, "num_output_files": 1, "total_output_size": 10954011, "num_input_records": 10150, "num_output_records": 9636, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 17:55:17 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000193.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:55:17 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093317286562, "job": 120, "event": "table_file_deletion", "file_number": 193}
Nov 25 17:55:17 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000191.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:55:17 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093317291373, "job": 120, "event": "table_file_deletion", "file_number": 191}
Nov 25 17:55:17 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:55:17.206828) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:55:17 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:55:17.291523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:55:17 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:55:17.291531) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:55:17 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:55:17.291533) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:55:17 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:55:17.291535) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:55:17 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:55:17.291537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:55:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3975: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:18 compute-0 ceph-mon[74985]: pgmap v3975: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:18 compute-0 nova_compute[254092]: 2025-11-25 17:55:18.280 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:55:19 compute-0 nova_compute[254092]: 2025-11-25 17:55:19.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:55:19 compute-0 nova_compute[254092]: 2025-11-25 17:55:19.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:55:19 compute-0 nova_compute[254092]: 2025-11-25 17:55:19.498 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:55:19 compute-0 nova_compute[254092]: 2025-11-25 17:55:19.516 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 17:55:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3976: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:55:20 compute-0 nova_compute[254092]: 2025-11-25 17:55:20.511 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:55:20 compute-0 nova_compute[254092]: 2025-11-25 17:55:20.970 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:55:21 compute-0 ceph-mon[74985]: pgmap v3976: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3977: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:22 compute-0 nova_compute[254092]: 2025-11-25 17:55:22.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:55:22 compute-0 nova_compute[254092]: 2025-11-25 17:55:22.538 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:55:22 compute-0 nova_compute[254092]: 2025-11-25 17:55:22.538 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:55:22 compute-0 nova_compute[254092]: 2025-11-25 17:55:22.539 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:55:22 compute-0 nova_compute[254092]: 2025-11-25 17:55:22.539 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:55:22 compute-0 nova_compute[254092]: 2025-11-25 17:55:22.540 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:55:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:55:23 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1794403191' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:55:23 compute-0 nova_compute[254092]: 2025-11-25 17:55:23.049 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:55:23 compute-0 ceph-mon[74985]: pgmap v3977: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:23 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1794403191' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:55:23 compute-0 nova_compute[254092]: 2025-11-25 17:55:23.311 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:55:23 compute-0 nova_compute[254092]: 2025-11-25 17:55:23.319 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:55:23 compute-0 nova_compute[254092]: 2025-11-25 17:55:23.321 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3581MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:55:23 compute-0 nova_compute[254092]: 2025-11-25 17:55:23.322 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:55:23 compute-0 nova_compute[254092]: 2025-11-25 17:55:23.322 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:55:23 compute-0 nova_compute[254092]: 2025-11-25 17:55:23.407 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:55:23 compute-0 nova_compute[254092]: 2025-11-25 17:55:23.407 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:55:23 compute-0 nova_compute[254092]: 2025-11-25 17:55:23.427 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:55:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:55:23 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2639957549' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:55:23 compute-0 nova_compute[254092]: 2025-11-25 17:55:23.886 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:55:23 compute-0 nova_compute[254092]: 2025-11-25 17:55:23.893 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:55:23 compute-0 nova_compute[254092]: 2025-11-25 17:55:23.909 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:55:23 compute-0 nova_compute[254092]: 2025-11-25 17:55:23.911 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:55:23 compute-0 nova_compute[254092]: 2025-11-25 17:55:23.911 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:55:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3978: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:24 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2639957549' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:55:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:55:25 compute-0 ceph-mon[74985]: pgmap v3978: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:25 compute-0 nova_compute[254092]: 2025-11-25 17:55:25.911 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:55:25 compute-0 nova_compute[254092]: 2025-11-25 17:55:25.912 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:55:25 compute-0 nova_compute[254092]: 2025-11-25 17:55:25.912 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:55:25 compute-0 nova_compute[254092]: 2025-11-25 17:55:25.972 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:55:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3979: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:26 compute-0 ceph-mon[74985]: pgmap v3979: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3980: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:28 compute-0 nova_compute[254092]: 2025-11-25 17:55:28.355 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:55:29 compute-0 ceph-mon[74985]: pgmap v3980: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3981: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:55:30 compute-0 nova_compute[254092]: 2025-11-25 17:55:30.974 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:55:31 compute-0 ceph-mon[74985]: pgmap v3981: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3982: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:32 compute-0 ceph-mon[74985]: pgmap v3982: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:33 compute-0 nova_compute[254092]: 2025-11-25 17:55:33.402 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:55:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3983: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:34 compute-0 nova_compute[254092]: 2025-11-25 17:55:34.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:55:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:55:35 compute-0 ceph-mon[74985]: pgmap v3983: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:35 compute-0 nova_compute[254092]: 2025-11-25 17:55:35.976 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:55:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3984: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:36 compute-0 podman[458535]: 2025-11-25 17:55:36.697140858 +0000 UTC m=+0.095739449 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 17:55:36 compute-0 podman[458534]: 2025-11-25 17:55:36.704152579 +0000 UTC m=+0.106995294 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 25 17:55:36 compute-0 podman[458536]: 2025-11-25 17:55:36.769344037 +0000 UTC m=+0.165437080 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 17:55:37 compute-0 ceph-mon[74985]: pgmap v3984: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3985: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:38 compute-0 ceph-mon[74985]: pgmap v3985: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:38 compute-0 nova_compute[254092]: 2025-11-25 17:55:38.451 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:55:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3986: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:55:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:55:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:55:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:55:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:55:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:55:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:55:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:55:40
Nov 25 17:55:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:55:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:55:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta', 'volumes', 'images', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups', '.rgw.root']
Nov 25 17:55:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:55:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:55:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:55:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:55:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:55:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:55:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:55:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:55:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:55:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:55:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:55:40 compute-0 nova_compute[254092]: 2025-11-25 17:55:40.979 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:55:41 compute-0 ceph-mon[74985]: pgmap v3986: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3987: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:43 compute-0 ceph-mon[74985]: pgmap v3987: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:43 compute-0 nova_compute[254092]: 2025-11-25 17:55:43.507 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:55:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3988: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:44 compute-0 ceph-mon[74985]: pgmap v3988: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:55:45 compute-0 nova_compute[254092]: 2025-11-25 17:55:45.980 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:55:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3989: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:47 compute-0 ceph-mon[74985]: pgmap v3989: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3990: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:48 compute-0 nova_compute[254092]: 2025-11-25 17:55:48.566 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:55:49 compute-0 ceph-mon[74985]: pgmap v3990: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3991: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:55:50 compute-0 ceph-mon[74985]: pgmap v3991: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:50 compute-0 nova_compute[254092]: 2025-11-25 17:55:50.980 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:55:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3992: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:55:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:55:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:55:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:55:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 17:55:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:55:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:55:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:55:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:55:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:55:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 17:55:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:55:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:55:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:55:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:55:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:55:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:55:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:55:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:55:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:55:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:55:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:55:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:55:53 compute-0 ceph-mon[74985]: pgmap v3992: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:53 compute-0 nova_compute[254092]: 2025-11-25 17:55:53.570 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:55:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3993: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:55:55 compute-0 ceph-mon[74985]: pgmap v3993: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:55:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/217114339' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:55:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:55:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/217114339' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:55:55 compute-0 nova_compute[254092]: 2025-11-25 17:55:55.983 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:55:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3994: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/217114339' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:55:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/217114339' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:55:56 compute-0 ceph-mon[74985]: pgmap v3994: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3995: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:55:58 compute-0 nova_compute[254092]: 2025-11-25 17:55:58.573 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:55:59 compute-0 ceph-mon[74985]: pgmap v3995: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:56:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3996: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:56:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:56:00 compute-0 ceph-mon[74985]: pgmap v3996: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail
Nov 25 17:56:00 compute-0 nova_compute[254092]: 2025-11-25 17:56:00.987 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:56:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3997: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Nov 25 17:56:03 compute-0 ceph-mon[74985]: pgmap v3997: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Nov 25 17:56:03 compute-0 nova_compute[254092]: 2025-11-25 17:56:03.480 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:56:03 compute-0 nova_compute[254092]: 2025-11-25 17:56:03.577 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:56:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3998: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Nov 25 17:56:04 compute-0 ceph-mon[74985]: pgmap v3998: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Nov 25 17:56:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:56:05 compute-0 nova_compute[254092]: 2025-11-25 17:56:05.990 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:56:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v3999: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 24 op/s
Nov 25 17:56:07 compute-0 ceph-mon[74985]: pgmap v3999: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 24 op/s
Nov 25 17:56:07 compute-0 podman[458600]: 2025-11-25 17:56:07.683205598 +0000 UTC m=+0.077895604 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, 
org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:56:07 compute-0 podman[458599]: 2025-11-25 17:56:07.692568043 +0000 UTC m=+0.090680052 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 17:56:07 compute-0 podman[458601]: 2025-11-25 17:56:07.715829784 +0000 UTC m=+0.111581079 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 17:56:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4000: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 24 op/s
Nov 25 17:56:08 compute-0 ceph-mon[74985]: pgmap v4000: 321 pgs: 321 active+clean; 41 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 24 op/s
Nov 25 17:56:08 compute-0 nova_compute[254092]: 2025-11-25 17:56:08.578 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:56:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4001: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 60 op/s
Nov 25 17:56:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:56:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:56:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:56:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:56:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:56:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:56:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:56:10 compute-0 nova_compute[254092]: 2025-11-25 17:56:10.992 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:56:11 compute-0 ceph-mon[74985]: pgmap v4001: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 60 op/s
Nov 25 17:56:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4002: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Nov 25 17:56:12 compute-0 ceph-mon[74985]: pgmap v4002: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Nov 25 17:56:12 compute-0 sudo[458663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:56:12 compute-0 sudo[458663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:56:12 compute-0 sudo[458663]: pam_unix(sudo:session): session closed for user root
Nov 25 17:56:13 compute-0 sudo[458688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:56:13 compute-0 sudo[458688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:56:13 compute-0 sudo[458688]: pam_unix(sudo:session): session closed for user root
Nov 25 17:56:13 compute-0 sudo[458713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:56:13 compute-0 sudo[458713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:56:13 compute-0 sudo[458713]: pam_unix(sudo:session): session closed for user root
Nov 25 17:56:13 compute-0 sudo[458738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:56:13 compute-0 sudo[458738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:56:13 compute-0 nova_compute[254092]: 2025-11-25 17:56:13.580 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:56:13 compute-0 sudo[458738]: pam_unix(sudo:session): session closed for user root
Nov 25 17:56:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:56:13.702 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:56:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:56:13.703 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:56:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:56:13.703 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:56:13 compute-0 sudo[458796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:56:13 compute-0 sudo[458796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:56:13 compute-0 sudo[458796]: pam_unix(sudo:session): session closed for user root
Nov 25 17:56:13 compute-0 sudo[458821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:56:13 compute-0 sudo[458821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:56:13 compute-0 sudo[458821]: pam_unix(sudo:session): session closed for user root
Nov 25 17:56:13 compute-0 sudo[458846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:56:13 compute-0 sudo[458846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:56:13 compute-0 sudo[458846]: pam_unix(sudo:session): session closed for user root
Nov 25 17:56:13 compute-0 sudo[458871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Nov 25 17:56:13 compute-0 sudo[458871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:56:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4003: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 0 B/s wr, 69 op/s
Nov 25 17:56:14 compute-0 sudo[458871]: pam_unix(sudo:session): session closed for user root
Nov 25 17:56:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:56:14 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:56:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:56:14 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:56:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:56:14 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:56:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:56:14 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:56:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:56:14 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:56:14 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 30559ecc-4f10-49dc-ae64-fb2fb2bb6863 does not exist
Nov 25 17:56:14 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 7896aede-c725-4cbd-b63d-00c1fa524874 does not exist
Nov 25 17:56:14 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev a98d9ab2-e97e-43bc-9f7a-de7a91e687cd does not exist
Nov 25 17:56:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:56:14 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:56:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:56:14 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:56:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:56:14 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:56:14 compute-0 sudo[458914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:56:14 compute-0 sudo[458914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:56:14 compute-0 sudo[458914]: pam_unix(sudo:session): session closed for user root
Nov 25 17:56:14 compute-0 sudo[458939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:56:14 compute-0 sudo[458939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:56:14 compute-0 sudo[458939]: pam_unix(sudo:session): session closed for user root
Nov 25 17:56:14 compute-0 sudo[458964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:56:14 compute-0 sudo[458964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:56:14 compute-0 sudo[458964]: pam_unix(sudo:session): session closed for user root
Nov 25 17:56:14 compute-0 sudo[458989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:56:14 compute-0 sudo[458989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:56:14 compute-0 nova_compute[254092]: 2025-11-25 17:56:14.510 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:56:14 compute-0 podman[459056]: 2025-11-25 17:56:14.818079668 +0000 UTC m=+0.050346027 container create bb8be36881b84396297abc644d32f7f8d314ddd10bb65842ac471f6f1ba19a4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_colden, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 17:56:14 compute-0 systemd[1]: Started libpod-conmon-bb8be36881b84396297abc644d32f7f8d314ddd10bb65842ac471f6f1ba19a4f.scope.
Nov 25 17:56:14 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:56:14 compute-0 podman[459056]: 2025-11-25 17:56:14.794035746 +0000 UTC m=+0.026302195 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:56:14 compute-0 podman[459056]: 2025-11-25 17:56:14.902277913 +0000 UTC m=+0.134544282 container init bb8be36881b84396297abc644d32f7f8d314ddd10bb65842ac471f6f1ba19a4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_colden, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 17:56:14 compute-0 podman[459056]: 2025-11-25 17:56:14.913459466 +0000 UTC m=+0.145725845 container start bb8be36881b84396297abc644d32f7f8d314ddd10bb65842ac471f6f1ba19a4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 17:56:14 compute-0 podman[459056]: 2025-11-25 17:56:14.916841408 +0000 UTC m=+0.149107767 container attach bb8be36881b84396297abc644d32f7f8d314ddd10bb65842ac471f6f1ba19a4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 17:56:14 compute-0 crazy_colden[459073]: 167 167
Nov 25 17:56:14 compute-0 systemd[1]: libpod-bb8be36881b84396297abc644d32f7f8d314ddd10bb65842ac471f6f1ba19a4f.scope: Deactivated successfully.
Nov 25 17:56:14 compute-0 conmon[459073]: conmon bb8be36881b84396297a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bb8be36881b84396297abc644d32f7f8d314ddd10bb65842ac471f6f1ba19a4f.scope/container/memory.events
Nov 25 17:56:14 compute-0 podman[459056]: 2025-11-25 17:56:14.923477758 +0000 UTC m=+0.155744107 container died bb8be36881b84396297abc644d32f7f8d314ddd10bb65842ac471f6f1ba19a4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:56:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-471c2c0c53ac4c9df553ab1f62092575f8c967a5a0b89db00481b900d393e49f-merged.mount: Deactivated successfully.
Nov 25 17:56:14 compute-0 podman[459056]: 2025-11-25 17:56:14.962305242 +0000 UTC m=+0.194571611 container remove bb8be36881b84396297abc644d32f7f8d314ddd10bb65842ac471f6f1ba19a4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 17:56:14 compute-0 systemd[1]: libpod-conmon-bb8be36881b84396297abc644d32f7f8d314ddd10bb65842ac471f6f1ba19a4f.scope: Deactivated successfully.
Nov 25 17:56:15 compute-0 podman[459096]: 2025-11-25 17:56:15.147673142 +0000 UTC m=+0.051984302 container create fa77c1d289d5f4d0ba389b5811ec6460b5013864c9f3ee9cfe426ac9753d9b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 17:56:15 compute-0 ceph-mon[74985]: pgmap v4003: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 0 B/s wr, 69 op/s
Nov 25 17:56:15 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:56:15 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:56:15 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:56:15 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:56:15 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:56:15 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:56:15 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:56:15 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:56:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:56:15 compute-0 systemd[1]: Started libpod-conmon-fa77c1d289d5f4d0ba389b5811ec6460b5013864c9f3ee9cfe426ac9753d9b2f.scope.
Nov 25 17:56:15 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:56:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00a88779507d11f6145a01536fd401cf1e2b9b54be3f035482f19b9569825fb4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:56:15 compute-0 podman[459096]: 2025-11-25 17:56:15.131333348 +0000 UTC m=+0.035644528 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:56:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00a88779507d11f6145a01536fd401cf1e2b9b54be3f035482f19b9569825fb4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:56:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00a88779507d11f6145a01536fd401cf1e2b9b54be3f035482f19b9569825fb4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:56:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00a88779507d11f6145a01536fd401cf1e2b9b54be3f035482f19b9569825fb4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:56:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00a88779507d11f6145a01536fd401cf1e2b9b54be3f035482f19b9569825fb4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:56:15 compute-0 podman[459096]: 2025-11-25 17:56:15.240535921 +0000 UTC m=+0.144847081 container init fa77c1d289d5f4d0ba389b5811ec6460b5013864c9f3ee9cfe426ac9753d9b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_beaver, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:56:15 compute-0 podman[459096]: 2025-11-25 17:56:15.251808527 +0000 UTC m=+0.156119677 container start fa77c1d289d5f4d0ba389b5811ec6460b5013864c9f3ee9cfe426ac9753d9b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_beaver, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:56:15 compute-0 podman[459096]: 2025-11-25 17:56:15.255483897 +0000 UTC m=+0.159795067 container attach fa77c1d289d5f4d0ba389b5811ec6460b5013864c9f3ee9cfe426ac9753d9b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_beaver, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 17:56:15 compute-0 nova_compute[254092]: 2025-11-25 17:56:15.994 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:56:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4004: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 0 B/s wr, 69 op/s
Nov 25 17:56:16 compute-0 bold_beaver[459112]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:56:16 compute-0 bold_beaver[459112]: --> relative data size: 1.0
Nov 25 17:56:16 compute-0 bold_beaver[459112]: --> All data devices are unavailable
Nov 25 17:56:16 compute-0 systemd[1]: libpod-fa77c1d289d5f4d0ba389b5811ec6460b5013864c9f3ee9cfe426ac9753d9b2f.scope: Deactivated successfully.
Nov 25 17:56:16 compute-0 systemd[1]: libpod-fa77c1d289d5f4d0ba389b5811ec6460b5013864c9f3ee9cfe426ac9753d9b2f.scope: Consumed 1.065s CPU time.
Nov 25 17:56:16 compute-0 podman[459096]: 2025-11-25 17:56:16.365692312 +0000 UTC m=+1.270003482 container died fa77c1d289d5f4d0ba389b5811ec6460b5013864c9f3ee9cfe426ac9753d9b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 17:56:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-00a88779507d11f6145a01536fd401cf1e2b9b54be3f035482f19b9569825fb4-merged.mount: Deactivated successfully.
Nov 25 17:56:16 compute-0 podman[459096]: 2025-11-25 17:56:16.415054871 +0000 UTC m=+1.319366011 container remove fa77c1d289d5f4d0ba389b5811ec6460b5013864c9f3ee9cfe426ac9753d9b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 17:56:16 compute-0 systemd[1]: libpod-conmon-fa77c1d289d5f4d0ba389b5811ec6460b5013864c9f3ee9cfe426ac9753d9b2f.scope: Deactivated successfully.
Nov 25 17:56:16 compute-0 sudo[458989]: pam_unix(sudo:session): session closed for user root
Nov 25 17:56:16 compute-0 nova_compute[254092]: 2025-11-25 17:56:16.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:56:16 compute-0 nova_compute[254092]: 2025-11-25 17:56:16.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:56:16 compute-0 sudo[459153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:56:16 compute-0 sudo[459153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:56:16 compute-0 sudo[459153]: pam_unix(sudo:session): session closed for user root
Nov 25 17:56:16 compute-0 sudo[459178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:56:16 compute-0 sudo[459178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:56:16 compute-0 sudo[459178]: pam_unix(sudo:session): session closed for user root
Nov 25 17:56:16 compute-0 sudo[459203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:56:16 compute-0 sudo[459203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:56:16 compute-0 sudo[459203]: pam_unix(sudo:session): session closed for user root
Nov 25 17:56:16 compute-0 sudo[459228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:56:16 compute-0 sudo[459228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:56:17 compute-0 podman[459294]: 2025-11-25 17:56:17.112124705 +0000 UTC m=+0.056924935 container create 2cf69a9fdc231a090674d478d049b1eb51aa179be9870e18918d537e8e629b62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hawking, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:56:17 compute-0 systemd[1]: Started libpod-conmon-2cf69a9fdc231a090674d478d049b1eb51aa179be9870e18918d537e8e629b62.scope.
Nov 25 17:56:17 compute-0 ceph-mon[74985]: pgmap v4004: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 0 B/s wr, 69 op/s
Nov 25 17:56:17 compute-0 podman[459294]: 2025-11-25 17:56:17.080512908 +0000 UTC m=+0.025313208 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:56:17 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:56:17 compute-0 podman[459294]: 2025-11-25 17:56:17.218489632 +0000 UTC m=+0.163289872 container init 2cf69a9fdc231a090674d478d049b1eb51aa179be9870e18918d537e8e629b62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 17:56:17 compute-0 podman[459294]: 2025-11-25 17:56:17.231485035 +0000 UTC m=+0.176285225 container start 2cf69a9fdc231a090674d478d049b1eb51aa179be9870e18918d537e8e629b62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:56:17 compute-0 podman[459294]: 2025-11-25 17:56:17.235181634 +0000 UTC m=+0.179981844 container attach 2cf69a9fdc231a090674d478d049b1eb51aa179be9870e18918d537e8e629b62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 17:56:17 compute-0 romantic_hawking[459311]: 167 167
Nov 25 17:56:17 compute-0 systemd[1]: libpod-2cf69a9fdc231a090674d478d049b1eb51aa179be9870e18918d537e8e629b62.scope: Deactivated successfully.
Nov 25 17:56:17 compute-0 podman[459294]: 2025-11-25 17:56:17.24167889 +0000 UTC m=+0.186479081 container died 2cf69a9fdc231a090674d478d049b1eb51aa179be9870e18918d537e8e629b62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hawking, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 17:56:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-efdb87e44ddd92b026e5109c1cc7238479c4f1fe3ee3f0f342bfb8e1a980d8e9-merged.mount: Deactivated successfully.
Nov 25 17:56:17 compute-0 podman[459294]: 2025-11-25 17:56:17.282341104 +0000 UTC m=+0.227141294 container remove 2cf69a9fdc231a090674d478d049b1eb51aa179be9870e18918d537e8e629b62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:56:17 compute-0 systemd[1]: libpod-conmon-2cf69a9fdc231a090674d478d049b1eb51aa179be9870e18918d537e8e629b62.scope: Deactivated successfully.
Nov 25 17:56:17 compute-0 podman[459334]: 2025-11-25 17:56:17.534410904 +0000 UTC m=+0.077789462 container create b098cd4c33d84789d54885be47e4b74c9e06a768481be44dfcddeef712f894fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shamir, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 17:56:17 compute-0 systemd[1]: Started libpod-conmon-b098cd4c33d84789d54885be47e4b74c9e06a768481be44dfcddeef712f894fb.scope.
Nov 25 17:56:17 compute-0 podman[459334]: 2025-11-25 17:56:17.504623855 +0000 UTC m=+0.048002463 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:56:17 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:56:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8e969ddba0757c65516fb87f5f3f44a5304002c3f8acf329b676799250dceb0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:56:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8e969ddba0757c65516fb87f5f3f44a5304002c3f8acf329b676799250dceb0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:56:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8e969ddba0757c65516fb87f5f3f44a5304002c3f8acf329b676799250dceb0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:56:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8e969ddba0757c65516fb87f5f3f44a5304002c3f8acf329b676799250dceb0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:56:17 compute-0 podman[459334]: 2025-11-25 17:56:17.644572663 +0000 UTC m=+0.187951231 container init b098cd4c33d84789d54885be47e4b74c9e06a768481be44dfcddeef712f894fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 17:56:17 compute-0 podman[459334]: 2025-11-25 17:56:17.659090487 +0000 UTC m=+0.202469035 container start b098cd4c33d84789d54885be47e4b74c9e06a768481be44dfcddeef712f894fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shamir, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 17:56:17 compute-0 podman[459334]: 2025-11-25 17:56:17.663334242 +0000 UTC m=+0.206712840 container attach b098cd4c33d84789d54885be47e4b74c9e06a768481be44dfcddeef712f894fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shamir, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 17:56:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4005: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 48 op/s
Nov 25 17:56:18 compute-0 competent_shamir[459350]: {
Nov 25 17:56:18 compute-0 competent_shamir[459350]:     "0": [
Nov 25 17:56:18 compute-0 competent_shamir[459350]:         {
Nov 25 17:56:18 compute-0 competent_shamir[459350]:             "devices": [
Nov 25 17:56:18 compute-0 competent_shamir[459350]:                 "/dev/loop3"
Nov 25 17:56:18 compute-0 competent_shamir[459350]:             ],
Nov 25 17:56:18 compute-0 competent_shamir[459350]:             "lv_name": "ceph_lv0",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:             "lv_size": "21470642176",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:             "name": "ceph_lv0",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:             "tags": {
Nov 25 17:56:18 compute-0 competent_shamir[459350]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:                 "ceph.cluster_name": "ceph",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:                 "ceph.crush_device_class": "",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:                 "ceph.encrypted": "0",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:                 "ceph.osd_id": "0",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:                 "ceph.type": "block",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:                 "ceph.vdo": "0"
Nov 25 17:56:18 compute-0 competent_shamir[459350]:             },
Nov 25 17:56:18 compute-0 competent_shamir[459350]:             "type": "block",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:             "vg_name": "ceph_vg0"
Nov 25 17:56:18 compute-0 competent_shamir[459350]:         }
Nov 25 17:56:18 compute-0 competent_shamir[459350]:     ],
Nov 25 17:56:18 compute-0 competent_shamir[459350]:     "1": [
Nov 25 17:56:18 compute-0 competent_shamir[459350]:         {
Nov 25 17:56:18 compute-0 competent_shamir[459350]:             "devices": [
Nov 25 17:56:18 compute-0 competent_shamir[459350]:                 "/dev/loop4"
Nov 25 17:56:18 compute-0 competent_shamir[459350]:             ],
Nov 25 17:56:18 compute-0 competent_shamir[459350]:             "lv_name": "ceph_lv1",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:             "lv_size": "21470642176",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:             "name": "ceph_lv1",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:             "tags": {
Nov 25 17:56:18 compute-0 competent_shamir[459350]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:                 "ceph.cluster_name": "ceph",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:                 "ceph.crush_device_class": "",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:                 "ceph.encrypted": "0",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:                 "ceph.osd_id": "1",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:                 "ceph.type": "block",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:                 "ceph.vdo": "0"
Nov 25 17:56:18 compute-0 competent_shamir[459350]:             },
Nov 25 17:56:18 compute-0 competent_shamir[459350]:             "type": "block",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:             "vg_name": "ceph_vg1"
Nov 25 17:56:18 compute-0 competent_shamir[459350]:         }
Nov 25 17:56:18 compute-0 competent_shamir[459350]:     ],
Nov 25 17:56:18 compute-0 competent_shamir[459350]:     "2": [
Nov 25 17:56:18 compute-0 competent_shamir[459350]:         {
Nov 25 17:56:18 compute-0 competent_shamir[459350]:             "devices": [
Nov 25 17:56:18 compute-0 competent_shamir[459350]:                 "/dev/loop5"
Nov 25 17:56:18 compute-0 competent_shamir[459350]:             ],
Nov 25 17:56:18 compute-0 competent_shamir[459350]:             "lv_name": "ceph_lv2",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:             "lv_size": "21470642176",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:             "name": "ceph_lv2",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:             "tags": {
Nov 25 17:56:18 compute-0 competent_shamir[459350]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:                 "ceph.cluster_name": "ceph",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:                 "ceph.crush_device_class": "",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:                 "ceph.encrypted": "0",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:                 "ceph.osd_id": "2",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:                 "ceph.type": "block",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:                 "ceph.vdo": "0"
Nov 25 17:56:18 compute-0 competent_shamir[459350]:             },
Nov 25 17:56:18 compute-0 competent_shamir[459350]:             "type": "block",
Nov 25 17:56:18 compute-0 competent_shamir[459350]:             "vg_name": "ceph_vg2"
Nov 25 17:56:18 compute-0 competent_shamir[459350]:         }
Nov 25 17:56:18 compute-0 competent_shamir[459350]:     ]
Nov 25 17:56:18 compute-0 competent_shamir[459350]: }
Nov 25 17:56:18 compute-0 podman[459334]: 2025-11-25 17:56:18.497420315 +0000 UTC m=+1.040798833 container died b098cd4c33d84789d54885be47e4b74c9e06a768481be44dfcddeef712f894fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shamir, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:56:18 compute-0 systemd[1]: libpod-b098cd4c33d84789d54885be47e4b74c9e06a768481be44dfcddeef712f894fb.scope: Deactivated successfully.
Nov 25 17:56:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8e969ddba0757c65516fb87f5f3f44a5304002c3f8acf329b676799250dceb0-merged.mount: Deactivated successfully.
Nov 25 17:56:18 compute-0 podman[459334]: 2025-11-25 17:56:18.561896864 +0000 UTC m=+1.105275392 container remove b098cd4c33d84789d54885be47e4b74c9e06a768481be44dfcddeef712f894fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shamir, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 17:56:18 compute-0 systemd[1]: libpod-conmon-b098cd4c33d84789d54885be47e4b74c9e06a768481be44dfcddeef712f894fb.scope: Deactivated successfully.
Nov 25 17:56:18 compute-0 nova_compute[254092]: 2025-11-25 17:56:18.583 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:56:18 compute-0 sudo[459228]: pam_unix(sudo:session): session closed for user root
Nov 25 17:56:18 compute-0 sudo[459373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:56:18 compute-0 sudo[459373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:56:18 compute-0 sudo[459373]: pam_unix(sudo:session): session closed for user root
Nov 25 17:56:18 compute-0 sudo[459398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:56:18 compute-0 sudo[459398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:56:18 compute-0 sudo[459398]: pam_unix(sudo:session): session closed for user root
Nov 25 17:56:18 compute-0 sudo[459423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:56:18 compute-0 sudo[459423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:56:18 compute-0 sudo[459423]: pam_unix(sudo:session): session closed for user root
Nov 25 17:56:18 compute-0 sudo[459448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:56:18 compute-0 sudo[459448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:56:19 compute-0 ceph-mon[74985]: pgmap v4005: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 48 op/s
Nov 25 17:56:19 compute-0 podman[459514]: 2025-11-25 17:56:19.292018435 +0000 UTC m=+0.047527191 container create a01c027e47b176dac0c1559125a0cf0717510bcfbeb4d87d607496b5a762c9f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_heyrovsky, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 17:56:19 compute-0 systemd[1]: Started libpod-conmon-a01c027e47b176dac0c1559125a0cf0717510bcfbeb4d87d607496b5a762c9f9.scope.
Nov 25 17:56:19 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:56:19 compute-0 podman[459514]: 2025-11-25 17:56:19.273852103 +0000 UTC m=+0.029360869 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:56:19 compute-0 podman[459514]: 2025-11-25 17:56:19.371608705 +0000 UTC m=+0.127117541 container init a01c027e47b176dac0c1559125a0cf0717510bcfbeb4d87d607496b5a762c9f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 17:56:19 compute-0 podman[459514]: 2025-11-25 17:56:19.378852111 +0000 UTC m=+0.134360887 container start a01c027e47b176dac0c1559125a0cf0717510bcfbeb4d87d607496b5a762c9f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_heyrovsky, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 17:56:19 compute-0 podman[459514]: 2025-11-25 17:56:19.382478959 +0000 UTC m=+0.137987715 container attach a01c027e47b176dac0c1559125a0cf0717510bcfbeb4d87d607496b5a762c9f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 17:56:19 compute-0 angry_heyrovsky[459530]: 167 167
Nov 25 17:56:19 compute-0 systemd[1]: libpod-a01c027e47b176dac0c1559125a0cf0717510bcfbeb4d87d607496b5a762c9f9.scope: Deactivated successfully.
Nov 25 17:56:19 compute-0 conmon[459530]: conmon a01c027e47b176dac0c1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a01c027e47b176dac0c1559125a0cf0717510bcfbeb4d87d607496b5a762c9f9.scope/container/memory.events
Nov 25 17:56:19 compute-0 podman[459514]: 2025-11-25 17:56:19.386079038 +0000 UTC m=+0.141587794 container died a01c027e47b176dac0c1559125a0cf0717510bcfbeb4d87d607496b5a762c9f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Nov 25 17:56:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-1fb1e4e24e1a885b7d50ecc36d818b36b07a50d8c987ff180fa6655629009721-merged.mount: Deactivated successfully.
Nov 25 17:56:19 compute-0 podman[459514]: 2025-11-25 17:56:19.428788546 +0000 UTC m=+0.184297302 container remove a01c027e47b176dac0c1559125a0cf0717510bcfbeb4d87d607496b5a762c9f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:56:19 compute-0 systemd[1]: libpod-conmon-a01c027e47b176dac0c1559125a0cf0717510bcfbeb4d87d607496b5a762c9f9.scope: Deactivated successfully.
Nov 25 17:56:19 compute-0 podman[459554]: 2025-11-25 17:56:19.624459736 +0000 UTC m=+0.058685714 container create 8f60442a1c39d63e6cf06691b1da3c813a55f30449d4fa551f094490776877a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bohr, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 17:56:19 compute-0 systemd[1]: Started libpod-conmon-8f60442a1c39d63e6cf06691b1da3c813a55f30449d4fa551f094490776877a2.scope.
Nov 25 17:56:19 compute-0 podman[459554]: 2025-11-25 17:56:19.607486215 +0000 UTC m=+0.041712213 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:56:19 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:56:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e93148b6aee1dcda55c0a2e2766baeafd5b32a7bfd62e5ec1ceb8e5e5144bea8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:56:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e93148b6aee1dcda55c0a2e2766baeafd5b32a7bfd62e5ec1ceb8e5e5144bea8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:56:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e93148b6aee1dcda55c0a2e2766baeafd5b32a7bfd62e5ec1ceb8e5e5144bea8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:56:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e93148b6aee1dcda55c0a2e2766baeafd5b32a7bfd62e5ec1ceb8e5e5144bea8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:56:19 compute-0 podman[459554]: 2025-11-25 17:56:19.730272127 +0000 UTC m=+0.164498105 container init 8f60442a1c39d63e6cf06691b1da3c813a55f30449d4fa551f094490776877a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 17:56:19 compute-0 podman[459554]: 2025-11-25 17:56:19.742272763 +0000 UTC m=+0.176498771 container start 8f60442a1c39d63e6cf06691b1da3c813a55f30449d4fa551f094490776877a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:56:19 compute-0 podman[459554]: 2025-11-25 17:56:19.746493477 +0000 UTC m=+0.180719475 container attach 8f60442a1c39d63e6cf06691b1da3c813a55f30449d4fa551f094490776877a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bohr, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 17:56:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4006: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 48 op/s
Nov 25 17:56:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:56:20 compute-0 heuristic_bohr[459571]: {
Nov 25 17:56:20 compute-0 heuristic_bohr[459571]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:56:20 compute-0 heuristic_bohr[459571]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:56:20 compute-0 heuristic_bohr[459571]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:56:20 compute-0 heuristic_bohr[459571]:         "osd_id": 1,
Nov 25 17:56:20 compute-0 heuristic_bohr[459571]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:56:20 compute-0 heuristic_bohr[459571]:         "type": "bluestore"
Nov 25 17:56:20 compute-0 heuristic_bohr[459571]:     },
Nov 25 17:56:20 compute-0 heuristic_bohr[459571]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:56:20 compute-0 heuristic_bohr[459571]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:56:20 compute-0 heuristic_bohr[459571]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:56:20 compute-0 heuristic_bohr[459571]:         "osd_id": 2,
Nov 25 17:56:20 compute-0 heuristic_bohr[459571]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:56:20 compute-0 heuristic_bohr[459571]:         "type": "bluestore"
Nov 25 17:56:20 compute-0 heuristic_bohr[459571]:     },
Nov 25 17:56:20 compute-0 heuristic_bohr[459571]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:56:20 compute-0 heuristic_bohr[459571]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:56:20 compute-0 heuristic_bohr[459571]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:56:20 compute-0 heuristic_bohr[459571]:         "osd_id": 0,
Nov 25 17:56:20 compute-0 heuristic_bohr[459571]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:56:20 compute-0 heuristic_bohr[459571]:         "type": "bluestore"
Nov 25 17:56:20 compute-0 heuristic_bohr[459571]:     }
Nov 25 17:56:20 compute-0 heuristic_bohr[459571]: }
Nov 25 17:56:20 compute-0 systemd[1]: libpod-8f60442a1c39d63e6cf06691b1da3c813a55f30449d4fa551f094490776877a2.scope: Deactivated successfully.
Nov 25 17:56:20 compute-0 systemd[1]: libpod-8f60442a1c39d63e6cf06691b1da3c813a55f30449d4fa551f094490776877a2.scope: Consumed 1.053s CPU time.
Nov 25 17:56:20 compute-0 podman[459604]: 2025-11-25 17:56:20.849611429 +0000 UTC m=+0.042140144 container died 8f60442a1c39d63e6cf06691b1da3c813a55f30449d4fa551f094490776877a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:56:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-e93148b6aee1dcda55c0a2e2766baeafd5b32a7bfd62e5ec1ceb8e5e5144bea8-merged.mount: Deactivated successfully.
Nov 25 17:56:20 compute-0 podman[459604]: 2025-11-25 17:56:20.930167715 +0000 UTC m=+0.122696340 container remove 8f60442a1c39d63e6cf06691b1da3c813a55f30449d4fa551f094490776877a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bohr, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 17:56:20 compute-0 systemd[1]: libpod-conmon-8f60442a1c39d63e6cf06691b1da3c813a55f30449d4fa551f094490776877a2.scope: Deactivated successfully.
Nov 25 17:56:20 compute-0 sudo[459448]: pam_unix(sudo:session): session closed for user root
Nov 25 17:56:21 compute-0 nova_compute[254092]: 2025-11-25 17:56:21.019 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:56:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:56:21 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:56:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:56:21 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:56:21 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev f34c8fe6-844d-42d0-9019-ef60f42c0d02 does not exist
Nov 25 17:56:21 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 16363266-8559-4e5b-82ff-1b3f33928d1a does not exist
Nov 25 17:56:21 compute-0 sudo[459619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:56:21 compute-0 sudo[459619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:56:21 compute-0 sudo[459619]: pam_unix(sudo:session): session closed for user root
Nov 25 17:56:21 compute-0 ceph-mon[74985]: pgmap v4006: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 48 op/s
Nov 25 17:56:21 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:56:21 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:56:21 compute-0 sudo[459644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:56:21 compute-0 sudo[459644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:56:21 compute-0 sudo[459644]: pam_unix(sudo:session): session closed for user root
Nov 25 17:56:21 compute-0 nova_compute[254092]: 2025-11-25 17:56:21.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:56:21 compute-0 nova_compute[254092]: 2025-11-25 17:56:21.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:56:21 compute-0 nova_compute[254092]: 2025-11-25 17:56:21.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:56:21 compute-0 nova_compute[254092]: 2025-11-25 17:56:21.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:56:21 compute-0 nova_compute[254092]: 2025-11-25 17:56:21.511 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 17:56:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4007: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.3 KiB/s rd, 0 B/s wr, 12 op/s
Nov 25 17:56:22 compute-0 nova_compute[254092]: 2025-11-25 17:56:22.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:56:22 compute-0 nova_compute[254092]: 2025-11-25 17:56:22.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:56:22 compute-0 nova_compute[254092]: 2025-11-25 17:56:22.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:56:22 compute-0 nova_compute[254092]: 2025-11-25 17:56:22.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:56:22 compute-0 nova_compute[254092]: 2025-11-25 17:56:22.522 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:56:22 compute-0 nova_compute[254092]: 2025-11-25 17:56:22.522 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:56:22 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:56:22 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3105391424' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:56:22 compute-0 nova_compute[254092]: 2025-11-25 17:56:22.951 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:56:23 compute-0 nova_compute[254092]: 2025-11-25 17:56:23.197 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:56:23 compute-0 nova_compute[254092]: 2025-11-25 17:56:23.200 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3598MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:56:23 compute-0 nova_compute[254092]: 2025-11-25 17:56:23.201 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:56:23 compute-0 ceph-mon[74985]: pgmap v4007: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.3 KiB/s rd, 0 B/s wr, 12 op/s
Nov 25 17:56:23 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3105391424' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:56:23 compute-0 nova_compute[254092]: 2025-11-25 17:56:23.202 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:56:23 compute-0 nova_compute[254092]: 2025-11-25 17:56:23.584 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:56:23 compute-0 nova_compute[254092]: 2025-11-25 17:56:23.585 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:56:23 compute-0 nova_compute[254092]: 2025-11-25 17:56:23.590 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:56:23 compute-0 nova_compute[254092]: 2025-11-25 17:56:23.718 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing inventories for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 17:56:23 compute-0 nova_compute[254092]: 2025-11-25 17:56:23.872 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating ProviderTree inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 17:56:23 compute-0 nova_compute[254092]: 2025-11-25 17:56:23.873 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 17:56:23 compute-0 nova_compute[254092]: 2025-11-25 17:56:23.887 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing aggregate associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 17:56:23 compute-0 nova_compute[254092]: 2025-11-25 17:56:23.917 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing trait associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 17:56:23 compute-0 nova_compute[254092]: 2025-11-25 17:56:23.999 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:56:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4008: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:56:24 compute-0 ceph-mon[74985]: pgmap v4008: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:56:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:56:24 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1190681390' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:56:24 compute-0 nova_compute[254092]: 2025-11-25 17:56:24.442 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:56:24 compute-0 nova_compute[254092]: 2025-11-25 17:56:24.450 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:56:24 compute-0 nova_compute[254092]: 2025-11-25 17:56:24.486 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:56:24 compute-0 nova_compute[254092]: 2025-11-25 17:56:24.489 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:56:24 compute-0 nova_compute[254092]: 2025-11-25 17:56:24.489 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.288s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:56:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:56:25 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1190681390' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:56:25 compute-0 sshd-session[459713]: Connection closed by authenticating user root 134.199.144.204 port 53288 [preauth]
Nov 25 17:56:26 compute-0 nova_compute[254092]: 2025-11-25 17:56:26.022 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:56:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4009: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:56:26 compute-0 ceph-mon[74985]: pgmap v4009: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:56:26 compute-0 nova_compute[254092]: 2025-11-25 17:56:26.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:56:26 compute-0 nova_compute[254092]: 2025-11-25 17:56:26.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:56:26 compute-0 nova_compute[254092]: 2025-11-25 17:56:26.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:56:27 compute-0 sshd-session[459715]: Connection closed by authenticating user root 134.199.144.204 port 53304 [preauth]
Nov 25 17:56:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4010: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:56:28 compute-0 nova_compute[254092]: 2025-11-25 17:56:28.593 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:56:28 compute-0 ceph-mon[74985]: pgmap v4010: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:56:29 compute-0 sshd-session[459717]: Connection closed by authenticating user root 134.199.144.204 port 53318 [preauth]
Nov 25 17:56:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4011: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:56:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:56:30 compute-0 ceph-mon[74985]: pgmap v4011: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:56:30 compute-0 sshd-session[459719]: Connection closed by authenticating user root 134.199.144.204 port 53332 [preauth]
Nov 25 17:56:31 compute-0 nova_compute[254092]: 2025-11-25 17:56:31.024 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:56:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4012: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:56:32 compute-0 ceph-mon[74985]: pgmap v4012: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:56:32 compute-0 sshd-session[459722]: Connection closed by authenticating user root 134.199.144.204 port 53336 [preauth]
Nov 25 17:56:32 compute-0 sshd[190203]: drop connection #0 from [134.199.144.204]:53352 on [38.102.83.64]:22 penalty: failed authentication
Nov 25 17:56:32 compute-0 sshd[190203]: drop connection #0 from [134.199.144.204]:53368 on [38.102.83.64]:22 penalty: failed authentication
Nov 25 17:56:33 compute-0 sshd[190203]: drop connection #0 from [134.199.144.204]:53374 on [38.102.83.64]:22 penalty: failed authentication
Nov 25 17:56:33 compute-0 nova_compute[254092]: 2025-11-25 17:56:33.595 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:56:33 compute-0 sshd[190203]: drop connection #0 from [134.199.144.204]:53380 on [38.102.83.64]:22 penalty: failed authentication
Nov 25 17:56:34 compute-0 sshd[190203]: drop connection #0 from [134.199.144.204]:36070 on [38.102.83.64]:22 penalty: failed authentication
Nov 25 17:56:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4013: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:56:34 compute-0 ceph-mon[74985]: pgmap v4013: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:56:34 compute-0 sshd[190203]: drop connection #0 from [134.199.144.204]:36082 on [38.102.83.64]:22 penalty: failed authentication
Nov 25 17:56:34 compute-0 sshd[190203]: drop connection #0 from [134.199.144.204]:36096 on [38.102.83.64]:22 penalty: failed authentication
Nov 25 17:56:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:56:35 compute-0 sshd[190203]: drop connection #0 from [134.199.144.204]:36108 on [38.102.83.64]:22 penalty: failed authentication
Nov 25 17:56:35 compute-0 nova_compute[254092]: 2025-11-25 17:56:35.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:56:35 compute-0 sshd[190203]: drop connection #0 from [134.199.144.204]:36122 on [38.102.83.64]:22 penalty: failed authentication
Nov 25 17:56:36 compute-0 nova_compute[254092]: 2025-11-25 17:56:36.024 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:56:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4014: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:56:36 compute-0 ceph-mon[74985]: pgmap v4014: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:56:36 compute-0 nova_compute[254092]: 2025-11-25 17:56:36.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:56:37 compute-0 sshd[190203]: drop connection #0 from [134.199.144.204]:36134 on [38.102.83.64]:22 penalty: failed authentication
Nov 25 17:56:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4015: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:56:38 compute-0 ceph-mon[74985]: pgmap v4015: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:56:38 compute-0 nova_compute[254092]: 2025-11-25 17:56:38.596 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:56:38 compute-0 podman[459724]: 2025-11-25 17:56:38.668859861 +0000 UTC m=+0.084375120 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 17:56:38 compute-0 podman[459725]: 2025-11-25 17:56:38.677251119 +0000 UTC m=+0.078729698 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Nov 25 17:56:38 compute-0 podman[459726]: 2025-11-25 17:56:38.736495856 +0000 UTC m=+0.132137576 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 25 17:56:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:56:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:56:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:56:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:56:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:56:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:56:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4016: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:56:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:56:40 compute-0 ceph-mon[74985]: pgmap v4016: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:56:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:56:40
Nov 25 17:56:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:56:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:56:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.log', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', '.mgr', 'images']
Nov 25 17:56:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:56:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:56:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:56:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:56:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:56:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:56:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:56:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:56:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:56:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:56:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:56:41 compute-0 nova_compute[254092]: 2025-11-25 17:56:41.029 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:56:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4017: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:56:42 compute-0 ceph-mon[74985]: pgmap v4017: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:56:43 compute-0 nova_compute[254092]: 2025-11-25 17:56:43.598 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:56:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4018: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:56:44 compute-0 ceph-mon[74985]: pgmap v4018: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:56:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:56:46 compute-0 nova_compute[254092]: 2025-11-25 17:56:46.031 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:56:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4019: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:56:46 compute-0 ceph-mon[74985]: pgmap v4019: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:56:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4020: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:56:48 compute-0 ceph-mon[74985]: pgmap v4020: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:56:48 compute-0 nova_compute[254092]: 2025-11-25 17:56:48.602 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:56:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4021: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:56:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:56:50 compute-0 ceph-mon[74985]: pgmap v4021: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:56:51 compute-0 nova_compute[254092]: 2025-11-25 17:56:51.089 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:56:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4022: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:56:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:56:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:56:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:56:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:56:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 17:56:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:56:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:56:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:56:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:56:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:56:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 17:56:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:56:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:56:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:56:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:56:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:56:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:56:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:56:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:56:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:56:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:56:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:56:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:56:52 compute-0 ceph-mon[74985]: pgmap v4022: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:56:53 compute-0 nova_compute[254092]: 2025-11-25 17:56:53.606 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:56:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4023: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:56:54 compute-0 ceph-mon[74985]: pgmap v4023: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:56:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:56:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:56:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/533605188' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:56:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:56:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/533605188' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:56:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/533605188' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:56:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/533605188' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:56:56 compute-0 nova_compute[254092]: 2025-11-25 17:56:56.090 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:56:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4024: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:56:56 compute-0 ceph-mon[74985]: pgmap v4024: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:56:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4025: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:56:58 compute-0 nova_compute[254092]: 2025-11-25 17:56:58.609 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:57:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4026: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:01 compute-0 nova_compute[254092]: 2025-11-25 17:57:01.094 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:57:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4027: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:03 compute-0 ceph-mds[102090]: mds.beacon.cephfs.compute-0.aidjys missed beacon ack from the monitors
Nov 25 17:57:03 compute-0 nova_compute[254092]: 2025-11-25 17:57:03.655 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:57:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4028: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:06 compute-0 nova_compute[254092]: 2025-11-25 17:57:06.161 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:57:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4029: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:07 compute-0 ceph-mds[102090]: mds.beacon.cephfs.compute-0.aidjys missed beacon ack from the monitors
Nov 25 17:57:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4030: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:08 compute-0 nova_compute[254092]: 2025-11-25 17:57:08.659 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:57:09 compute-0 podman[459790]: 2025-11-25 17:57:09.689139241 +0000 UTC m=+0.095893432 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 17:57:09 compute-0 podman[459791]: 2025-11-25 17:57:09.704379895 +0000 UTC m=+0.104339503 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:57:09 compute-0 podman[459792]: 2025-11-25 17:57:09.741875532 +0000 UTC m=+0.138788786 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 17:57:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:57:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:57:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:57:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:57:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:57:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:57:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4031: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).mds e5 check_health: resetting beacon timeouts due to mon delay (slow election?) of 15.6366 seconds
Nov 25 17:57:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:57:11 compute-0 ceph-mon[74985]: pgmap v4025: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:11 compute-0 nova_compute[254092]: 2025-11-25 17:57:11.208 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:57:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4032: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:12 compute-0 ceph-mon[74985]: pgmap v4026: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:12 compute-0 ceph-mon[74985]: pgmap v4027: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:12 compute-0 ceph-mon[74985]: pgmap v4028: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:12 compute-0 ceph-mon[74985]: pgmap v4029: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:12 compute-0 ceph-mon[74985]: pgmap v4030: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:12 compute-0 ceph-mon[74985]: pgmap v4031: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:13 compute-0 nova_compute[254092]: 2025-11-25 17:57:13.663 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:57:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:57:13.703 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:57:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:57:13.703 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:57:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:57:13.704 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:57:13 compute-0 ceph-mon[74985]: pgmap v4032: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4033: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:15 compute-0 ceph-mon[74985]: pgmap v4033: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:15 compute-0 nova_compute[254092]: 2025-11-25 17:57:15.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:57:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:57:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4034: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:16 compute-0 nova_compute[254092]: 2025-11-25 17:57:16.210 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:57:16 compute-0 nova_compute[254092]: 2025-11-25 17:57:16.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:57:16 compute-0 ceph-mon[74985]: pgmap v4034: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4035: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:18 compute-0 ceph-mon[74985]: pgmap v4035: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:18 compute-0 nova_compute[254092]: 2025-11-25 17:57:18.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:57:18 compute-0 nova_compute[254092]: 2025-11-25 17:57:18.667 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:57:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4036: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:20 compute-0 ceph-mon[74985]: pgmap v4036: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:57:21 compute-0 nova_compute[254092]: 2025-11-25 17:57:21.211 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:57:21 compute-0 sudo[459850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:57:21 compute-0 sudo[459850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:57:21 compute-0 sudo[459850]: pam_unix(sudo:session): session closed for user root
Nov 25 17:57:21 compute-0 sudo[459875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:57:21 compute-0 sudo[459875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:57:21 compute-0 sudo[459875]: pam_unix(sudo:session): session closed for user root
Nov 25 17:57:21 compute-0 sudo[459900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:57:21 compute-0 sudo[459900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:57:21 compute-0 sudo[459900]: pam_unix(sudo:session): session closed for user root
Nov 25 17:57:21 compute-0 nova_compute[254092]: 2025-11-25 17:57:21.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:57:21 compute-0 nova_compute[254092]: 2025-11-25 17:57:21.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:57:21 compute-0 nova_compute[254092]: 2025-11-25 17:57:21.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:57:21 compute-0 nova_compute[254092]: 2025-11-25 17:57:21.512 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 17:57:21 compute-0 sudo[459925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 25 17:57:21 compute-0 sudo[459925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:57:21 compute-0 sudo[459925]: pam_unix(sudo:session): session closed for user root
Nov 25 17:57:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:57:21 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:57:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:57:22 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:57:22 compute-0 sudo[459970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:57:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4037: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:22 compute-0 sudo[459970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:57:22 compute-0 sudo[459970]: pam_unix(sudo:session): session closed for user root
Nov 25 17:57:22 compute-0 sudo[459995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:57:22 compute-0 sudo[459995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:57:22 compute-0 sudo[459995]: pam_unix(sudo:session): session closed for user root
Nov 25 17:57:22 compute-0 sudo[460020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:57:22 compute-0 sudo[460020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:57:22 compute-0 sudo[460020]: pam_unix(sudo:session): session closed for user root
Nov 25 17:57:22 compute-0 sudo[460045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:57:22 compute-0 sudo[460045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:57:22 compute-0 nova_compute[254092]: 2025-11-25 17:57:22.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:57:22 compute-0 nova_compute[254092]: 2025-11-25 17:57:22.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:57:22 compute-0 nova_compute[254092]: 2025-11-25 17:57:22.525 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:57:22 compute-0 nova_compute[254092]: 2025-11-25 17:57:22.526 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:57:22 compute-0 nova_compute[254092]: 2025-11-25 17:57:22.527 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:57:22 compute-0 nova_compute[254092]: 2025-11-25 17:57:22.527 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:57:22 compute-0 nova_compute[254092]: 2025-11-25 17:57:22.527 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:57:22 compute-0 sudo[460045]: pam_unix(sudo:session): session closed for user root
Nov 25 17:57:22 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:57:22 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3568836969' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:57:22 compute-0 nova_compute[254092]: 2025-11-25 17:57:22.959 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:57:23 compute-0 sudo[460124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:57:23 compute-0 sudo[460124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:57:23 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:57:23 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:57:23 compute-0 ceph-mon[74985]: pgmap v4037: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:23 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3568836969' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:57:23 compute-0 sudo[460124]: pam_unix(sudo:session): session closed for user root
Nov 25 17:57:23 compute-0 nova_compute[254092]: 2025-11-25 17:57:23.141 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:57:23 compute-0 nova_compute[254092]: 2025-11-25 17:57:23.142 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3620MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:57:23 compute-0 nova_compute[254092]: 2025-11-25 17:57:23.142 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:57:23 compute-0 nova_compute[254092]: 2025-11-25 17:57:23.143 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:57:23 compute-0 sudo[460149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:57:23 compute-0 sudo[460149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:57:23 compute-0 sudo[460149]: pam_unix(sudo:session): session closed for user root
Nov 25 17:57:23 compute-0 nova_compute[254092]: 2025-11-25 17:57:23.216 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:57:23 compute-0 nova_compute[254092]: 2025-11-25 17:57:23.217 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:57:23 compute-0 sudo[460174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:57:23 compute-0 sudo[460174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:57:23 compute-0 nova_compute[254092]: 2025-11-25 17:57:23.237 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:57:23 compute-0 sudo[460174]: pam_unix(sudo:session): session closed for user root
Nov 25 17:57:23 compute-0 sudo[460200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- inventory --format=json-pretty --filter-for-batch
Nov 25 17:57:23 compute-0 sudo[460200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:57:23 compute-0 nova_compute[254092]: 2025-11-25 17:57:23.671 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:57:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:57:23 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/219603912' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:57:23 compute-0 nova_compute[254092]: 2025-11-25 17:57:23.700 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:57:23 compute-0 nova_compute[254092]: 2025-11-25 17:57:23.709 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:57:23 compute-0 nova_compute[254092]: 2025-11-25 17:57:23.729 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:57:23 compute-0 nova_compute[254092]: 2025-11-25 17:57:23.731 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:57:23 compute-0 nova_compute[254092]: 2025-11-25 17:57:23.731 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:57:23 compute-0 podman[460287]: 2025-11-25 17:57:23.736345034 +0000 UTC m=+0.035112364 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:57:23 compute-0 podman[460287]: 2025-11-25 17:57:23.932403764 +0000 UTC m=+0.231170994 container create ea685f454e0ffc17060d1dff14e2fe3f35718e7dd4eb770b6d75267725e852ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:57:24 compute-0 systemd[1]: Started libpod-conmon-ea685f454e0ffc17060d1dff14e2fe3f35718e7dd4eb770b6d75267725e852ea.scope.
Nov 25 17:57:24 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:57:24 compute-0 podman[460287]: 2025-11-25 17:57:24.137416106 +0000 UTC m=+0.436183356 container init ea685f454e0ffc17060d1dff14e2fe3f35718e7dd4eb770b6d75267725e852ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:57:24 compute-0 podman[460287]: 2025-11-25 17:57:24.153116612 +0000 UTC m=+0.451883842 container start ea685f454e0ffc17060d1dff14e2fe3f35718e7dd4eb770b6d75267725e852ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Nov 25 17:57:24 compute-0 silly_chaum[460304]: 167 167
Nov 25 17:57:24 compute-0 systemd[1]: libpod-ea685f454e0ffc17060d1dff14e2fe3f35718e7dd4eb770b6d75267725e852ea.scope: Deactivated successfully.
Nov 25 17:57:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4038: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:24 compute-0 podman[460287]: 2025-11-25 17:57:24.224197571 +0000 UTC m=+0.522964841 container attach ea685f454e0ffc17060d1dff14e2fe3f35718e7dd4eb770b6d75267725e852ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 17:57:24 compute-0 podman[460287]: 2025-11-25 17:57:24.224717125 +0000 UTC m=+0.523484365 container died ea685f454e0ffc17060d1dff14e2fe3f35718e7dd4eb770b6d75267725e852ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chaum, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:57:24 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/219603912' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:57:24 compute-0 ceph-mon[74985]: pgmap v4038: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3ff0d2bb1c1cf4b7239d0c7cce2017c24cd411e462e4048b1b56ade89265efa-merged.mount: Deactivated successfully.
Nov 25 17:57:24 compute-0 podman[460287]: 2025-11-25 17:57:24.972903137 +0000 UTC m=+1.271670387 container remove ea685f454e0ffc17060d1dff14e2fe3f35718e7dd4eb770b6d75267725e852ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chaum, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:57:25 compute-0 systemd[1]: libpod-conmon-ea685f454e0ffc17060d1dff14e2fe3f35718e7dd4eb770b6d75267725e852ea.scope: Deactivated successfully.
Nov 25 17:57:25 compute-0 podman[460330]: 2025-11-25 17:57:25.209013493 +0000 UTC m=+0.037917990 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:57:25 compute-0 podman[460330]: 2025-11-25 17:57:25.342947627 +0000 UTC m=+0.171852064 container create bb6f915aa1552e11b2d2c8ed5c5dc0a2cb971651f975ad239c5eaafc8be6e16a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:57:25 compute-0 systemd[1]: Started libpod-conmon-bb6f915aa1552e11b2d2c8ed5c5dc0a2cb971651f975ad239c5eaafc8be6e16a.scope.
Nov 25 17:57:25 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:57:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49f765b0b527bf5d3d79f490a446b07748a2cfdf16bc88287b0aaa994ffcfca3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:57:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49f765b0b527bf5d3d79f490a446b07748a2cfdf16bc88287b0aaa994ffcfca3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:57:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49f765b0b527bf5d3d79f490a446b07748a2cfdf16bc88287b0aaa994ffcfca3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:57:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49f765b0b527bf5d3d79f490a446b07748a2cfdf16bc88287b0aaa994ffcfca3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:57:25 compute-0 podman[460330]: 2025-11-25 17:57:25.532373628 +0000 UTC m=+0.361278055 container init bb6f915aa1552e11b2d2c8ed5c5dc0a2cb971651f975ad239c5eaafc8be6e16a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_haibt, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:57:25 compute-0 podman[460330]: 2025-11-25 17:57:25.545212705 +0000 UTC m=+0.374117142 container start bb6f915aa1552e11b2d2c8ed5c5dc0a2cb971651f975ad239c5eaafc8be6e16a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:57:25 compute-0 podman[460330]: 2025-11-25 17:57:25.571584382 +0000 UTC m=+0.400488829 container attach bb6f915aa1552e11b2d2c8ed5c5dc0a2cb971651f975ad239c5eaafc8be6e16a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_haibt, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Nov 25 17:57:25 compute-0 nova_compute[254092]: 2025-11-25 17:57:25.731 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:57:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:57:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4039: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:26 compute-0 nova_compute[254092]: 2025-11-25 17:57:26.212 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:57:26 compute-0 ceph-mon[74985]: pgmap v4039: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:27 compute-0 zealous_haibt[460347]: [
Nov 25 17:57:27 compute-0 zealous_haibt[460347]:     {
Nov 25 17:57:27 compute-0 zealous_haibt[460347]:         "available": false,
Nov 25 17:57:27 compute-0 zealous_haibt[460347]:         "ceph_device": false,
Nov 25 17:57:27 compute-0 zealous_haibt[460347]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 25 17:57:27 compute-0 zealous_haibt[460347]:         "lsm_data": {},
Nov 25 17:57:27 compute-0 zealous_haibt[460347]:         "lvs": [],
Nov 25 17:57:27 compute-0 zealous_haibt[460347]:         "path": "/dev/sr0",
Nov 25 17:57:27 compute-0 zealous_haibt[460347]:         "rejected_reasons": [
Nov 25 17:57:27 compute-0 zealous_haibt[460347]:             "Has a FileSystem",
Nov 25 17:57:27 compute-0 zealous_haibt[460347]:             "Insufficient space (<5GB)"
Nov 25 17:57:27 compute-0 zealous_haibt[460347]:         ],
Nov 25 17:57:27 compute-0 zealous_haibt[460347]:         "sys_api": {
Nov 25 17:57:27 compute-0 zealous_haibt[460347]:             "actuators": null,
Nov 25 17:57:27 compute-0 zealous_haibt[460347]:             "device_nodes": "sr0",
Nov 25 17:57:27 compute-0 zealous_haibt[460347]:             "devname": "sr0",
Nov 25 17:57:27 compute-0 zealous_haibt[460347]:             "human_readable_size": "482.00 KB",
Nov 25 17:57:27 compute-0 zealous_haibt[460347]:             "id_bus": "ata",
Nov 25 17:57:27 compute-0 zealous_haibt[460347]:             "model": "QEMU DVD-ROM",
Nov 25 17:57:27 compute-0 zealous_haibt[460347]:             "nr_requests": "2",
Nov 25 17:57:27 compute-0 zealous_haibt[460347]:             "parent": "/dev/sr0",
Nov 25 17:57:27 compute-0 zealous_haibt[460347]:             "partitions": {},
Nov 25 17:57:27 compute-0 zealous_haibt[460347]:             "path": "/dev/sr0",
Nov 25 17:57:27 compute-0 zealous_haibt[460347]:             "removable": "1",
Nov 25 17:57:27 compute-0 zealous_haibt[460347]:             "rev": "2.5+",
Nov 25 17:57:27 compute-0 zealous_haibt[460347]:             "ro": "0",
Nov 25 17:57:27 compute-0 zealous_haibt[460347]:             "rotational": "1",
Nov 25 17:57:27 compute-0 zealous_haibt[460347]:             "sas_address": "",
Nov 25 17:57:27 compute-0 zealous_haibt[460347]:             "sas_device_handle": "",
Nov 25 17:57:27 compute-0 zealous_haibt[460347]:             "scheduler_mode": "mq-deadline",
Nov 25 17:57:27 compute-0 zealous_haibt[460347]:             "sectors": 0,
Nov 25 17:57:27 compute-0 zealous_haibt[460347]:             "sectorsize": "2048",
Nov 25 17:57:27 compute-0 zealous_haibt[460347]:             "size": 493568.0,
Nov 25 17:57:27 compute-0 zealous_haibt[460347]:             "support_discard": "2048",
Nov 25 17:57:27 compute-0 zealous_haibt[460347]:             "type": "disk",
Nov 25 17:57:27 compute-0 zealous_haibt[460347]:             "vendor": "QEMU"
Nov 25 17:57:27 compute-0 zealous_haibt[460347]:         }
Nov 25 17:57:27 compute-0 zealous_haibt[460347]:     }
Nov 25 17:57:27 compute-0 zealous_haibt[460347]: ]
Nov 25 17:57:27 compute-0 systemd[1]: libpod-bb6f915aa1552e11b2d2c8ed5c5dc0a2cb971651f975ad239c5eaafc8be6e16a.scope: Deactivated successfully.
Nov 25 17:57:27 compute-0 systemd[1]: libpod-bb6f915aa1552e11b2d2c8ed5c5dc0a2cb971651f975ad239c5eaafc8be6e16a.scope: Consumed 1.444s CPU time.
Nov 25 17:57:27 compute-0 podman[462175]: 2025-11-25 17:57:27.332559273 +0000 UTC m=+0.023138548 container died bb6f915aa1552e11b2d2c8ed5c5dc0a2cb971651f975ad239c5eaafc8be6e16a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:57:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-49f765b0b527bf5d3d79f490a446b07748a2cfdf16bc88287b0aaa994ffcfca3-merged.mount: Deactivated successfully.
Nov 25 17:57:27 compute-0 podman[462175]: 2025-11-25 17:57:27.757970617 +0000 UTC m=+0.448549872 container remove bb6f915aa1552e11b2d2c8ed5c5dc0a2cb971651f975ad239c5eaafc8be6e16a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 17:57:27 compute-0 systemd[1]: libpod-conmon-bb6f915aa1552e11b2d2c8ed5c5dc0a2cb971651f975ad239c5eaafc8be6e16a.scope: Deactivated successfully.
Nov 25 17:57:27 compute-0 sudo[460200]: pam_unix(sudo:session): session closed for user root
Nov 25 17:57:27 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:57:27 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:57:27 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:57:27 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:57:27 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:57:27 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:57:27 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:57:27 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:57:27 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:57:27 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:57:27 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev c29fe0a0-e8fa-492a-8b04-d8764ccbc928 does not exist
Nov 25 17:57:27 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 234d231f-e9ea-4d4d-92a8-1ae69a2a8f0e does not exist
Nov 25 17:57:27 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 70293527-044f-460e-bb46-b1d1edb06486 does not exist
Nov 25 17:57:27 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:57:27 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:57:27 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:57:27 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:57:27 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:57:27 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:57:27 compute-0 sudo[462190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:57:27 compute-0 sudo[462190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:57:27 compute-0 sudo[462190]: pam_unix(sudo:session): session closed for user root
Nov 25 17:57:28 compute-0 sudo[462215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:57:28 compute-0 sudo[462215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:57:28 compute-0 sudo[462215]: pam_unix(sudo:session): session closed for user root
Nov 25 17:57:28 compute-0 sudo[462240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:57:28 compute-0 sudo[462240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:57:28 compute-0 sudo[462240]: pam_unix(sudo:session): session closed for user root
Nov 25 17:57:28 compute-0 sudo[462265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:57:28 compute-0 sudo[462265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:57:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4040: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:28 compute-0 podman[462332]: 2025-11-25 17:57:28.485396335 +0000 UTC m=+0.066284559 container create 29e63d5fbea978e5ff727e166be49c7a6cf2818d68fd7994893a0e0af1a21b54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bell, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 17:57:28 compute-0 nova_compute[254092]: 2025-11-25 17:57:28.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:57:28 compute-0 nova_compute[254092]: 2025-11-25 17:57:28.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:57:28 compute-0 podman[462332]: 2025-11-25 17:57:28.439782998 +0000 UTC m=+0.020671232 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:57:28 compute-0 systemd[1]: Started libpod-conmon-29e63d5fbea978e5ff727e166be49c7a6cf2818d68fd7994893a0e0af1a21b54.scope.
Nov 25 17:57:28 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:57:28 compute-0 podman[462332]: 2025-11-25 17:57:28.615086044 +0000 UTC m=+0.195974288 container init 29e63d5fbea978e5ff727e166be49c7a6cf2818d68fd7994893a0e0af1a21b54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 17:57:28 compute-0 podman[462332]: 2025-11-25 17:57:28.622712152 +0000 UTC m=+0.203600376 container start 29e63d5fbea978e5ff727e166be49c7a6cf2818d68fd7994893a0e0af1a21b54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bell, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:57:28 compute-0 trusting_bell[462348]: 167 167
Nov 25 17:57:28 compute-0 systemd[1]: libpod-29e63d5fbea978e5ff727e166be49c7a6cf2818d68fd7994893a0e0af1a21b54.scope: Deactivated successfully.
Nov 25 17:57:28 compute-0 podman[462332]: 2025-11-25 17:57:28.652088168 +0000 UTC m=+0.232976412 container attach 29e63d5fbea978e5ff727e166be49c7a6cf2818d68fd7994893a0e0af1a21b54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bell, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:57:28 compute-0 podman[462332]: 2025-11-25 17:57:28.652407727 +0000 UTC m=+0.233295951 container died 29e63d5fbea978e5ff727e166be49c7a6cf2818d68fd7994893a0e0af1a21b54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 17:57:28 compute-0 nova_compute[254092]: 2025-11-25 17:57:28.675 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:57:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e29febedfe4ee59dbaf6d75cfe8515697486b4d21ab78b2fee77323b36d7d5e-merged.mount: Deactivated successfully.
Nov 25 17:57:28 compute-0 podman[462332]: 2025-11-25 17:57:28.804728591 +0000 UTC m=+0.385616815 container remove 29e63d5fbea978e5ff727e166be49c7a6cf2818d68fd7994893a0e0af1a21b54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bell, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 17:57:28 compute-0 systemd[1]: libpod-conmon-29e63d5fbea978e5ff727e166be49c7a6cf2818d68fd7994893a0e0af1a21b54.scope: Deactivated successfully.
Nov 25 17:57:28 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:57:28 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:57:28 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:57:28 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:57:28 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:57:28 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:57:28 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:57:28 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:57:28 compute-0 ceph-mon[74985]: pgmap v4040: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:29 compute-0 podman[462373]: 2025-11-25 17:57:29.002438795 +0000 UTC m=+0.065575810 container create 7f9b3fafa5a190b709e4ad37240fb331298c4d5fa476bee5cab07f7591cc50f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:57:29 compute-0 podman[462373]: 2025-11-25 17:57:28.965146983 +0000 UTC m=+0.028284018 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:57:29 compute-0 systemd[1]: Started libpod-conmon-7f9b3fafa5a190b709e4ad37240fb331298c4d5fa476bee5cab07f7591cc50f1.scope.
Nov 25 17:57:29 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 17:57:29 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 17:57:29 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 17:57:29 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 17:57:29 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:57:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5778adb9e6bf5dcb33fea0cdca9fad51eefd9a38e2d92b63448d8326de3d94ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:57:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5778adb9e6bf5dcb33fea0cdca9fad51eefd9a38e2d92b63448d8326de3d94ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:57:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5778adb9e6bf5dcb33fea0cdca9fad51eefd9a38e2d92b63448d8326de3d94ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:57:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5778adb9e6bf5dcb33fea0cdca9fad51eefd9a38e2d92b63448d8326de3d94ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:57:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5778adb9e6bf5dcb33fea0cdca9fad51eefd9a38e2d92b63448d8326de3d94ba/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:57:29 compute-0 podman[462373]: 2025-11-25 17:57:29.110311962 +0000 UTC m=+0.173448977 container init 7f9b3fafa5a190b709e4ad37240fb331298c4d5fa476bee5cab07f7591cc50f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mahavira, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 17:57:29 compute-0 podman[462373]: 2025-11-25 17:57:29.118502415 +0000 UTC m=+0.181639430 container start 7f9b3fafa5a190b709e4ad37240fb331298c4d5fa476bee5cab07f7591cc50f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mahavira, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:57:29 compute-0 podman[462373]: 2025-11-25 17:57:29.132120734 +0000 UTC m=+0.195257749 container attach 7f9b3fafa5a190b709e4ad37240fb331298c4d5fa476bee5cab07f7591cc50f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 17:57:30 compute-0 serene_mahavira[462390]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:57:30 compute-0 serene_mahavira[462390]: --> relative data size: 1.0
Nov 25 17:57:30 compute-0 serene_mahavira[462390]: --> All data devices are unavailable
Nov 25 17:57:30 compute-0 systemd[1]: libpod-7f9b3fafa5a190b709e4ad37240fb331298c4d5fa476bee5cab07f7591cc50f1.scope: Deactivated successfully.
Nov 25 17:57:30 compute-0 conmon[462390]: conmon 7f9b3fafa5a190b709e4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7f9b3fafa5a190b709e4ad37240fb331298c4d5fa476bee5cab07f7591cc50f1.scope/container/memory.events
Nov 25 17:57:30 compute-0 podman[462373]: 2025-11-25 17:57:30.139305683 +0000 UTC m=+1.202442698 container died 7f9b3fafa5a190b709e4ad37240fb331298c4d5fa476bee5cab07f7591cc50f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:57:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-5778adb9e6bf5dcb33fea0cdca9fad51eefd9a38e2d92b63448d8326de3d94ba-merged.mount: Deactivated successfully.
Nov 25 17:57:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4041: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:30 compute-0 podman[462373]: 2025-11-25 17:57:30.199366443 +0000 UTC m=+1.262503468 container remove 7f9b3fafa5a190b709e4ad37240fb331298c4d5fa476bee5cab07f7591cc50f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mahavira, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 17:57:30 compute-0 systemd[1]: libpod-conmon-7f9b3fafa5a190b709e4ad37240fb331298c4d5fa476bee5cab07f7591cc50f1.scope: Deactivated successfully.
Nov 25 17:57:30 compute-0 sudo[462265]: pam_unix(sudo:session): session closed for user root
Nov 25 17:57:30 compute-0 ceph-mon[74985]: pgmap v4041: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:30 compute-0 sudo[462432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:57:30 compute-0 sudo[462432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:57:30 compute-0 sudo[462432]: pam_unix(sudo:session): session closed for user root
Nov 25 17:57:30 compute-0 sudo[462457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:57:30 compute-0 sudo[462457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:57:30 compute-0 sudo[462457]: pam_unix(sudo:session): session closed for user root
Nov 25 17:57:30 compute-0 sudo[462482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:57:30 compute-0 sudo[462482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:57:30 compute-0 sudo[462482]: pam_unix(sudo:session): session closed for user root
Nov 25 17:57:30 compute-0 sudo[462507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:57:30 compute-0 sudo[462507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:57:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:57:30 compute-0 podman[462573]: 2025-11-25 17:57:30.853734738 +0000 UTC m=+0.040545110 container create c8ac86d411eb55d0eb08412a02d66ef777028128bb4fa6fc9ef046d7f73de618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_hypatia, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 17:57:30 compute-0 systemd[1]: Started libpod-conmon-c8ac86d411eb55d0eb08412a02d66ef777028128bb4fa6fc9ef046d7f73de618.scope.
Nov 25 17:57:30 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:57:30 compute-0 podman[462573]: 2025-11-25 17:57:30.836283405 +0000 UTC m=+0.023093807 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:57:30 compute-0 podman[462573]: 2025-11-25 17:57:30.943035561 +0000 UTC m=+0.129845953 container init c8ac86d411eb55d0eb08412a02d66ef777028128bb4fa6fc9ef046d7f73de618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 17:57:30 compute-0 podman[462573]: 2025-11-25 17:57:30.951995314 +0000 UTC m=+0.138805686 container start c8ac86d411eb55d0eb08412a02d66ef777028128bb4fa6fc9ef046d7f73de618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Nov 25 17:57:30 compute-0 podman[462573]: 2025-11-25 17:57:30.955561402 +0000 UTC m=+0.142371794 container attach c8ac86d411eb55d0eb08412a02d66ef777028128bb4fa6fc9ef046d7f73de618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_hypatia, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 17:57:30 compute-0 flamboyant_hypatia[462590]: 167 167
Nov 25 17:57:30 compute-0 systemd[1]: libpod-c8ac86d411eb55d0eb08412a02d66ef777028128bb4fa6fc9ef046d7f73de618.scope: Deactivated successfully.
Nov 25 17:57:30 compute-0 podman[462573]: 2025-11-25 17:57:30.959219961 +0000 UTC m=+0.146030333 container died c8ac86d411eb55d0eb08412a02d66ef777028128bb4fa6fc9ef046d7f73de618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_hypatia, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:57:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-54d9cd6f09fde9f6d6404aaaf0d46e7b7c816ec030db12bdd02e49c442e89e96-merged.mount: Deactivated successfully.
Nov 25 17:57:30 compute-0 podman[462573]: 2025-11-25 17:57:30.993516981 +0000 UTC m=+0.180327353 container remove c8ac86d411eb55d0eb08412a02d66ef777028128bb4fa6fc9ef046d7f73de618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:57:31 compute-0 systemd[1]: libpod-conmon-c8ac86d411eb55d0eb08412a02d66ef777028128bb4fa6fc9ef046d7f73de618.scope: Deactivated successfully.
Nov 25 17:57:31 compute-0 podman[462612]: 2025-11-25 17:57:31.151905609 +0000 UTC m=+0.043530192 container create 0fea1f15c1e2d4416976dd0af0083623478e89606ec1c52af9c82e3a2361130d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_saha, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 17:57:31 compute-0 systemd[1]: Started libpod-conmon-0fea1f15c1e2d4416976dd0af0083623478e89606ec1c52af9c82e3a2361130d.scope.
Nov 25 17:57:31 compute-0 nova_compute[254092]: 2025-11-25 17:57:31.213 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:57:31 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:57:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c323947b7e35dc4a0d5a4df27dd920c0da0fa8eceec249f0353f03374af09a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:57:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c323947b7e35dc4a0d5a4df27dd920c0da0fa8eceec249f0353f03374af09a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:57:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c323947b7e35dc4a0d5a4df27dd920c0da0fa8eceec249f0353f03374af09a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:57:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c323947b7e35dc4a0d5a4df27dd920c0da0fa8eceec249f0353f03374af09a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:57:31 compute-0 podman[462612]: 2025-11-25 17:57:31.134482606 +0000 UTC m=+0.026107209 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:57:31 compute-0 podman[462612]: 2025-11-25 17:57:31.241003107 +0000 UTC m=+0.132627710 container init 0fea1f15c1e2d4416976dd0af0083623478e89606ec1c52af9c82e3a2361130d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_saha, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:57:31 compute-0 podman[462612]: 2025-11-25 17:57:31.249679672 +0000 UTC m=+0.141304255 container start 0fea1f15c1e2d4416976dd0af0083623478e89606ec1c52af9c82e3a2361130d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_saha, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 17:57:31 compute-0 podman[462612]: 2025-11-25 17:57:31.253480265 +0000 UTC m=+0.145104868 container attach 0fea1f15c1e2d4416976dd0af0083623478e89606ec1c52af9c82e3a2361130d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_saha, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:57:32 compute-0 stupefied_saha[462628]: {
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:     "0": [
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:         {
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:             "devices": [
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:                 "/dev/loop3"
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:             ],
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:             "lv_name": "ceph_lv0",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:             "lv_size": "21470642176",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:             "name": "ceph_lv0",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:             "tags": {
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:                 "ceph.cluster_name": "ceph",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:                 "ceph.crush_device_class": "",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:                 "ceph.encrypted": "0",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:                 "ceph.osd_id": "0",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:                 "ceph.type": "block",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:                 "ceph.vdo": "0"
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:             },
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:             "type": "block",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:             "vg_name": "ceph_vg0"
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:         }
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:     ],
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:     "1": [
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:         {
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:             "devices": [
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:                 "/dev/loop4"
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:             ],
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:             "lv_name": "ceph_lv1",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:             "lv_size": "21470642176",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:             "name": "ceph_lv1",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:             "tags": {
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:                 "ceph.cluster_name": "ceph",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:                 "ceph.crush_device_class": "",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:                 "ceph.encrypted": "0",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:                 "ceph.osd_id": "1",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:                 "ceph.type": "block",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:                 "ceph.vdo": "0"
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:             },
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:             "type": "block",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:             "vg_name": "ceph_vg1"
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:         }
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:     ],
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:     "2": [
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:         {
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:             "devices": [
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:                 "/dev/loop5"
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:             ],
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:             "lv_name": "ceph_lv2",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:             "lv_size": "21470642176",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:             "name": "ceph_lv2",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:             "tags": {
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:                 "ceph.cluster_name": "ceph",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:                 "ceph.crush_device_class": "",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:                 "ceph.encrypted": "0",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:                 "ceph.osd_id": "2",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:                 "ceph.type": "block",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:                 "ceph.vdo": "0"
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:             },
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:             "type": "block",
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:             "vg_name": "ceph_vg2"
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:         }
Nov 25 17:57:32 compute-0 stupefied_saha[462628]:     ]
Nov 25 17:57:32 compute-0 stupefied_saha[462628]: }
Nov 25 17:57:32 compute-0 systemd[1]: libpod-0fea1f15c1e2d4416976dd0af0083623478e89606ec1c52af9c82e3a2361130d.scope: Deactivated successfully.
Nov 25 17:57:32 compute-0 conmon[462628]: conmon 0fea1f15c1e2d4416976 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0fea1f15c1e2d4416976dd0af0083623478e89606ec1c52af9c82e3a2361130d.scope/container/memory.events
Nov 25 17:57:32 compute-0 podman[462612]: 2025-11-25 17:57:32.064587685 +0000 UTC m=+0.956212268 container died 0fea1f15c1e2d4416976dd0af0083623478e89606ec1c52af9c82e3a2361130d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:57:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c323947b7e35dc4a0d5a4df27dd920c0da0fa8eceec249f0353f03374af09a3-merged.mount: Deactivated successfully.
Nov 25 17:57:32 compute-0 podman[462612]: 2025-11-25 17:57:32.121047976 +0000 UTC m=+1.012672579 container remove 0fea1f15c1e2d4416976dd0af0083623478e89606ec1c52af9c82e3a2361130d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_saha, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:57:32 compute-0 systemd[1]: libpod-conmon-0fea1f15c1e2d4416976dd0af0083623478e89606ec1c52af9c82e3a2361130d.scope: Deactivated successfully.
Nov 25 17:57:32 compute-0 sudo[462507]: pam_unix(sudo:session): session closed for user root
Nov 25 17:57:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4042: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:32 compute-0 sudo[462649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:57:32 compute-0 sudo[462649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:57:32 compute-0 sudo[462649]: pam_unix(sudo:session): session closed for user root
Nov 25 17:57:32 compute-0 ceph-mon[74985]: pgmap v4042: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:32 compute-0 sudo[462674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:57:32 compute-0 sudo[462674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:57:32 compute-0 sudo[462674]: pam_unix(sudo:session): session closed for user root
Nov 25 17:57:32 compute-0 sudo[462699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:57:32 compute-0 sudo[462699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:57:32 compute-0 sudo[462699]: pam_unix(sudo:session): session closed for user root
Nov 25 17:57:32 compute-0 sudo[462724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:57:32 compute-0 sudo[462724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:57:32 compute-0 podman[462788]: 2025-11-25 17:57:32.717007496 +0000 UTC m=+0.042464273 container create e88c24c7597abf36ceca4a35e34b787d6b6167d7413dabc4f81875ea42e58da5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 17:57:32 compute-0 systemd[1]: Started libpod-conmon-e88c24c7597abf36ceca4a35e34b787d6b6167d7413dabc4f81875ea42e58da5.scope.
Nov 25 17:57:32 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:57:32 compute-0 podman[462788]: 2025-11-25 17:57:32.696563522 +0000 UTC m=+0.022020319 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:57:32 compute-0 podman[462788]: 2025-11-25 17:57:32.795291661 +0000 UTC m=+0.120748458 container init e88c24c7597abf36ceca4a35e34b787d6b6167d7413dabc4f81875ea42e58da5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_swanson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 17:57:32 compute-0 podman[462788]: 2025-11-25 17:57:32.801770207 +0000 UTC m=+0.127226984 container start e88c24c7597abf36ceca4a35e34b787d6b6167d7413dabc4f81875ea42e58da5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_swanson, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:57:32 compute-0 podman[462788]: 2025-11-25 17:57:32.804290626 +0000 UTC m=+0.129747583 container attach e88c24c7597abf36ceca4a35e34b787d6b6167d7413dabc4f81875ea42e58da5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 17:57:32 compute-0 musing_swanson[462804]: 167 167
Nov 25 17:57:32 compute-0 systemd[1]: libpod-e88c24c7597abf36ceca4a35e34b787d6b6167d7413dabc4f81875ea42e58da5.scope: Deactivated successfully.
Nov 25 17:57:32 compute-0 conmon[462804]: conmon e88c24c7597abf36ceca <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e88c24c7597abf36ceca4a35e34b787d6b6167d7413dabc4f81875ea42e58da5.scope/container/memory.events
Nov 25 17:57:32 compute-0 podman[462788]: 2025-11-25 17:57:32.808478049 +0000 UTC m=+0.133934826 container died e88c24c7597abf36ceca4a35e34b787d6b6167d7413dabc4f81875ea42e58da5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 17:57:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-0bb7ecc7213a6441faf053cbf5e873df1fe3e1e041cc05776b112a1fcf8f933b-merged.mount: Deactivated successfully.
Nov 25 17:57:32 compute-0 podman[462788]: 2025-11-25 17:57:32.84094351 +0000 UTC m=+0.166400287 container remove e88c24c7597abf36ceca4a35e34b787d6b6167d7413dabc4f81875ea42e58da5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_swanson, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:57:32 compute-0 systemd[1]: libpod-conmon-e88c24c7597abf36ceca4a35e34b787d6b6167d7413dabc4f81875ea42e58da5.scope: Deactivated successfully.
Nov 25 17:57:32 compute-0 podman[462827]: 2025-11-25 17:57:32.98541531 +0000 UTC m=+0.040568111 container create 8c338728a860a20c8866932e033a0a1433a449af41f330473094977d6471903e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lewin, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 17:57:33 compute-0 systemd[1]: Started libpod-conmon-8c338728a860a20c8866932e033a0a1433a449af41f330473094977d6471903e.scope.
Nov 25 17:57:33 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:57:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d82e8b220b3b99b42a558fe653dfdb42db97bb45e9d401387807e7eb9d93be9a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:57:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d82e8b220b3b99b42a558fe653dfdb42db97bb45e9d401387807e7eb9d93be9a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:57:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d82e8b220b3b99b42a558fe653dfdb42db97bb45e9d401387807e7eb9d93be9a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:57:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d82e8b220b3b99b42a558fe653dfdb42db97bb45e9d401387807e7eb9d93be9a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:57:33 compute-0 podman[462827]: 2025-11-25 17:57:32.966791755 +0000 UTC m=+0.021944576 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:57:33 compute-0 podman[462827]: 2025-11-25 17:57:33.069747298 +0000 UTC m=+0.124900109 container init 8c338728a860a20c8866932e033a0a1433a449af41f330473094977d6471903e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lewin, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 17:57:33 compute-0 podman[462827]: 2025-11-25 17:57:33.07498373 +0000 UTC m=+0.130136531 container start 8c338728a860a20c8866932e033a0a1433a449af41f330473094977d6471903e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lewin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 17:57:33 compute-0 podman[462827]: 2025-11-25 17:57:33.077806837 +0000 UTC m=+0.132959668 container attach 8c338728a860a20c8866932e033a0a1433a449af41f330473094977d6471903e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lewin, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:57:33 compute-0 nova_compute[254092]: 2025-11-25 17:57:33.679 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:57:34 compute-0 competent_lewin[462844]: {
Nov 25 17:57:34 compute-0 competent_lewin[462844]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:57:34 compute-0 competent_lewin[462844]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:57:34 compute-0 competent_lewin[462844]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:57:34 compute-0 competent_lewin[462844]:         "osd_id": 1,
Nov 25 17:57:34 compute-0 competent_lewin[462844]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:57:34 compute-0 competent_lewin[462844]:         "type": "bluestore"
Nov 25 17:57:34 compute-0 competent_lewin[462844]:     },
Nov 25 17:57:34 compute-0 competent_lewin[462844]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:57:34 compute-0 competent_lewin[462844]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:57:34 compute-0 competent_lewin[462844]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:57:34 compute-0 competent_lewin[462844]:         "osd_id": 2,
Nov 25 17:57:34 compute-0 competent_lewin[462844]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:57:34 compute-0 competent_lewin[462844]:         "type": "bluestore"
Nov 25 17:57:34 compute-0 competent_lewin[462844]:     },
Nov 25 17:57:34 compute-0 competent_lewin[462844]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:57:34 compute-0 competent_lewin[462844]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:57:34 compute-0 competent_lewin[462844]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:57:34 compute-0 competent_lewin[462844]:         "osd_id": 0,
Nov 25 17:57:34 compute-0 competent_lewin[462844]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:57:34 compute-0 competent_lewin[462844]:         "type": "bluestore"
Nov 25 17:57:34 compute-0 competent_lewin[462844]:     }
Nov 25 17:57:34 compute-0 competent_lewin[462844]: }
Nov 25 17:57:34 compute-0 systemd[1]: libpod-8c338728a860a20c8866932e033a0a1433a449af41f330473094977d6471903e.scope: Deactivated successfully.
Nov 25 17:57:34 compute-0 systemd[1]: libpod-8c338728a860a20c8866932e033a0a1433a449af41f330473094977d6471903e.scope: Consumed 1.012s CPU time.
Nov 25 17:57:34 compute-0 podman[462827]: 2025-11-25 17:57:34.079362784 +0000 UTC m=+1.134515585 container died 8c338728a860a20c8866932e033a0a1433a449af41f330473094977d6471903e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:57:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-d82e8b220b3b99b42a558fe653dfdb42db97bb45e9d401387807e7eb9d93be9a-merged.mount: Deactivated successfully.
Nov 25 17:57:34 compute-0 podman[462827]: 2025-11-25 17:57:34.138114628 +0000 UTC m=+1.193267439 container remove 8c338728a860a20c8866932e033a0a1433a449af41f330473094977d6471903e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_lewin, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 17:57:34 compute-0 systemd[1]: libpod-conmon-8c338728a860a20c8866932e033a0a1433a449af41f330473094977d6471903e.scope: Deactivated successfully.
Nov 25 17:57:34 compute-0 sudo[462724]: pam_unix(sudo:session): session closed for user root
Nov 25 17:57:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:57:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4043: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:34 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:57:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:57:34 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:57:34 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 721df93d-5682-4df7-a7a8-63bfda36596f does not exist
Nov 25 17:57:34 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 12e03706-c366-4f70-bd65-b391e5762f92 does not exist
Nov 25 17:57:34 compute-0 sudo[462890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:57:34 compute-0 sudo[462890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:57:34 compute-0 sudo[462890]: pam_unix(sudo:session): session closed for user root
Nov 25 17:57:34 compute-0 sudo[462915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:57:34 compute-0 sudo[462915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:57:34 compute-0 sudo[462915]: pam_unix(sudo:session): session closed for user root
Nov 25 17:57:35 compute-0 ceph-mon[74985]: pgmap v4043: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:35 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:57:35 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:57:35 compute-0 nova_compute[254092]: 2025-11-25 17:57:35.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:57:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:57:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4044: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:36 compute-0 nova_compute[254092]: 2025-11-25 17:57:36.214 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:57:37 compute-0 ceph-mon[74985]: pgmap v4044: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4045: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:38 compute-0 nova_compute[254092]: 2025-11-25 17:57:38.683 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:57:39 compute-0 ceph-mon[74985]: pgmap v4045: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:57:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:57:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:57:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:57:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:57:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:57:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4046: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:57:40
Nov 25 17:57:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:57:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:57:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.control', 'images', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'volumes', 'vms', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta']
Nov 25 17:57:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:57:40 compute-0 podman[462940]: 2025-11-25 17:57:40.641779591 +0000 UTC m=+0.061540451 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd)
Nov 25 17:57:40 compute-0 podman[462941]: 2025-11-25 17:57:40.663775808 +0000 UTC m=+0.083480087 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 17:57:40 compute-0 podman[462942]: 2025-11-25 17:57:40.675834295 +0000 UTC m=+0.089948412 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 17:57:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:57:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:57:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:57:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:57:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:57:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:57:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:57:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:57:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:57:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:57:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:57:41 compute-0 nova_compute[254092]: 2025-11-25 17:57:41.216 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:57:41 compute-0 ceph-mon[74985]: pgmap v4046: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4047: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:42 compute-0 ceph-mon[74985]: pgmap v4047: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:43 compute-0 nova_compute[254092]: 2025-11-25 17:57:43.687 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:57:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4048: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:44 compute-0 ceph-mon[74985]: pgmap v4048: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:57:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4049: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:46 compute-0 nova_compute[254092]: 2025-11-25 17:57:46.219 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:57:46 compute-0 ceph-mon[74985]: pgmap v4049: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4050: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:48 compute-0 ceph-mon[74985]: pgmap v4050: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:48 compute-0 nova_compute[254092]: 2025-11-25 17:57:48.692 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:57:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4051: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:50 compute-0 ceph-mon[74985]: pgmap v4051: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:57:51 compute-0 nova_compute[254092]: 2025-11-25 17:57:51.221 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:57:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4052: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:57:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:57:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:57:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:57:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 17:57:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:57:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:57:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:57:52 compute-0 ceph-mon[74985]: pgmap v4052: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:57:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:57:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 17:57:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:57:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:57:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:57:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:57:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:57:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:57:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:57:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:57:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:57:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:57:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:57:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:57:53 compute-0 nova_compute[254092]: 2025-11-25 17:57:53.695 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:57:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4053: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:54 compute-0 ceph-mon[74985]: pgmap v4053: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:57:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2559460647' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:57:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:57:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2559460647' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:57:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/2559460647' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:57:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/2559460647' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:57:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:57:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4054: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:56 compute-0 nova_compute[254092]: 2025-11-25 17:57:56.224 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:57:56 compute-0 ceph-mon[74985]: pgmap v4054: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4055: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:58 compute-0 ceph-mon[74985]: pgmap v4055: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:57:58 compute-0 nova_compute[254092]: 2025-11-25 17:57:58.699 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:58:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4056: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:00 compute-0 ceph-mon[74985]: pgmap v4056: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:58:01 compute-0 nova_compute[254092]: 2025-11-25 17:58:01.226 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:58:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4057: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:02 compute-0 ceph-mon[74985]: pgmap v4057: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:03 compute-0 nova_compute[254092]: 2025-11-25 17:58:03.715 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:58:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4058: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:04 compute-0 ceph-mon[74985]: pgmap v4058: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:58:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4059: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:06 compute-0 nova_compute[254092]: 2025-11-25 17:58:06.269 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:58:06 compute-0 ceph-mon[74985]: pgmap v4059: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4060: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:08 compute-0 ceph-mon[74985]: pgmap v4060: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:08 compute-0 nova_compute[254092]: 2025-11-25 17:58:08.719 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:58:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:58:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:58:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:58:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:58:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:58:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:58:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4061: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:10 compute-0 ceph-mon[74985]: pgmap v4061: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:58:11 compute-0 nova_compute[254092]: 2025-11-25 17:58:11.271 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:58:11 compute-0 podman[463004]: 2025-11-25 17:58:11.638092826 +0000 UTC m=+0.059789613 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Nov 25 17:58:11 compute-0 podman[463005]: 2025-11-25 17:58:11.654291736 +0000 UTC m=+0.064718097 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 17:58:11 compute-0 podman[463006]: 2025-11-25 17:58:11.667693869 +0000 UTC m=+0.082574601 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 17:58:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4062: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:12 compute-0 ceph-mon[74985]: pgmap v4062: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:58:13.704 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:58:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:58:13.704 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:58:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:58:13.704 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:58:13 compute-0 nova_compute[254092]: 2025-11-25 17:58:13.723 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:58:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4063: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:14 compute-0 ceph-mon[74985]: pgmap v4063: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:15 compute-0 nova_compute[254092]: 2025-11-25 17:58:15.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:58:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:58:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4064: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:16 compute-0 nova_compute[254092]: 2025-11-25 17:58:16.273 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:58:16 compute-0 ceph-mon[74985]: pgmap v4064: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:16 compute-0 nova_compute[254092]: 2025-11-25 17:58:16.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:58:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4065: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:18 compute-0 ceph-mon[74985]: pgmap v4065: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:18 compute-0 nova_compute[254092]: 2025-11-25 17:58:18.727 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:58:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4066: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:20 compute-0 nova_compute[254092]: 2025-11-25 17:58:20.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:58:20 compute-0 ceph-mon[74985]: pgmap v4066: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:58:21 compute-0 nova_compute[254092]: 2025-11-25 17:58:21.277 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:58:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4067: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:22 compute-0 nova_compute[254092]: 2025-11-25 17:58:22.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:58:22 compute-0 nova_compute[254092]: 2025-11-25 17:58:22.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:58:22 compute-0 nova_compute[254092]: 2025-11-25 17:58:22.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:58:22 compute-0 ceph-mon[74985]: pgmap v4067: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:22 compute-0 nova_compute[254092]: 2025-11-25 17:58:22.516 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 17:58:23 compute-0 nova_compute[254092]: 2025-11-25 17:58:23.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:58:23 compute-0 nova_compute[254092]: 2025-11-25 17:58:23.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:58:23 compute-0 nova_compute[254092]: 2025-11-25 17:58:23.550 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:58:23 compute-0 nova_compute[254092]: 2025-11-25 17:58:23.551 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:58:23 compute-0 nova_compute[254092]: 2025-11-25 17:58:23.551 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:58:23 compute-0 nova_compute[254092]: 2025-11-25 17:58:23.552 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:58:23 compute-0 nova_compute[254092]: 2025-11-25 17:58:23.552 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:58:23 compute-0 nova_compute[254092]: 2025-11-25 17:58:23.785 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:58:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:58:23 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3518698529' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:58:23 compute-0 nova_compute[254092]: 2025-11-25 17:58:23.985 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:58:24 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3518698529' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:58:24 compute-0 nova_compute[254092]: 2025-11-25 17:58:24.200 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:58:24 compute-0 nova_compute[254092]: 2025-11-25 17:58:24.201 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3607MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:58:24 compute-0 nova_compute[254092]: 2025-11-25 17:58:24.202 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:58:24 compute-0 nova_compute[254092]: 2025-11-25 17:58:24.202 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:58:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4068: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:24 compute-0 nova_compute[254092]: 2025-11-25 17:58:24.286 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:58:24 compute-0 nova_compute[254092]: 2025-11-25 17:58:24.287 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:58:24 compute-0 nova_compute[254092]: 2025-11-25 17:58:24.315 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:58:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:58:24 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/719363109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:58:24 compute-0 nova_compute[254092]: 2025-11-25 17:58:24.750 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:58:24 compute-0 nova_compute[254092]: 2025-11-25 17:58:24.759 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:58:24 compute-0 nova_compute[254092]: 2025-11-25 17:58:24.789 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:58:24 compute-0 nova_compute[254092]: 2025-11-25 17:58:24.792 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:58:24 compute-0 nova_compute[254092]: 2025-11-25 17:58:24.792 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:58:25 compute-0 ceph-mon[74985]: pgmap v4068: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:25 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/719363109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:58:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:58:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4069: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:26 compute-0 nova_compute[254092]: 2025-11-25 17:58:26.278 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:58:26 compute-0 ceph-mon[74985]: pgmap v4069: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:27 compute-0 nova_compute[254092]: 2025-11-25 17:58:27.794 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:58:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4070: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:28 compute-0 ceph-mon[74985]: pgmap v4070: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:28 compute-0 nova_compute[254092]: 2025-11-25 17:58:28.788 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:58:29 compute-0 nova_compute[254092]: 2025-11-25 17:58:29.550 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:58:29 compute-0 nova_compute[254092]: 2025-11-25 17:58:29.551 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:58:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4071: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:30 compute-0 ceph-mon[74985]: pgmap v4071: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:58:31 compute-0 nova_compute[254092]: 2025-11-25 17:58:31.281 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:58:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4072: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:32 compute-0 ceph-mon[74985]: pgmap v4072: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:33 compute-0 nova_compute[254092]: 2025-11-25 17:58:33.791 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:58:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4073: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:34 compute-0 ceph-mon[74985]: pgmap v4073: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:34 compute-0 sudo[463111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:58:34 compute-0 sudo[463111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:58:34 compute-0 sudo[463111]: pam_unix(sudo:session): session closed for user root
Nov 25 17:58:34 compute-0 sudo[463136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:58:34 compute-0 sudo[463136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:58:34 compute-0 sudo[463136]: pam_unix(sudo:session): session closed for user root
Nov 25 17:58:34 compute-0 sudo[463161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:58:34 compute-0 sudo[463161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:58:34 compute-0 sudo[463161]: pam_unix(sudo:session): session closed for user root
Nov 25 17:58:34 compute-0 sudo[463186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:58:34 compute-0 sudo[463186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:58:35 compute-0 sudo[463186]: pam_unix(sudo:session): session closed for user root
Nov 25 17:58:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:58:35 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:58:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:58:35 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:58:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:58:35 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:58:35 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 2b4cff6d-4b63-436a-a749-0db7e53274f8 does not exist
Nov 25 17:58:35 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 899231f5-8cba-44ee-b567-db6d231c0ea3 does not exist
Nov 25 17:58:35 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 915e68b5-eeac-4357-981f-1ec968792294 does not exist
Nov 25 17:58:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:58:35 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:58:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:58:35 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:58:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:58:35 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:58:35 compute-0 sudo[463244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:58:35 compute-0 sudo[463244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:58:35 compute-0 sudo[463244]: pam_unix(sudo:session): session closed for user root
Nov 25 17:58:35 compute-0 sudo[463269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:58:35 compute-0 sudo[463269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:58:35 compute-0 sudo[463269]: pam_unix(sudo:session): session closed for user root
Nov 25 17:58:35 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:58:35 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:58:35 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:58:35 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:58:35 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:58:35 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:58:35 compute-0 sudo[463294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:58:35 compute-0 sudo[463294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:58:35 compute-0 sudo[463294]: pam_unix(sudo:session): session closed for user root
Nov 25 17:58:35 compute-0 sudo[463319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:58:35 compute-0 sudo[463319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:58:35 compute-0 podman[463384]: 2025-11-25 17:58:35.78347181 +0000 UTC m=+0.049685639 container create c7e3f7b3c05629e64b7c578d0a58a27532a52334c637c73b4dba70fc03be0412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:58:35 compute-0 systemd[1]: Started libpod-conmon-c7e3f7b3c05629e64b7c578d0a58a27532a52334c637c73b4dba70fc03be0412.scope.
Nov 25 17:58:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:58:35 compute-0 podman[463384]: 2025-11-25 17:58:35.757876156 +0000 UTC m=+0.024090075 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:58:35 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:58:35 compute-0 podman[463384]: 2025-11-25 17:58:35.888588663 +0000 UTC m=+0.154802552 container init c7e3f7b3c05629e64b7c578d0a58a27532a52334c637c73b4dba70fc03be0412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:58:35 compute-0 podman[463384]: 2025-11-25 17:58:35.897498995 +0000 UTC m=+0.163712814 container start c7e3f7b3c05629e64b7c578d0a58a27532a52334c637c73b4dba70fc03be0412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_robinson, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:58:35 compute-0 podman[463384]: 2025-11-25 17:58:35.900422824 +0000 UTC m=+0.166636743 container attach c7e3f7b3c05629e64b7c578d0a58a27532a52334c637c73b4dba70fc03be0412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 17:58:35 compute-0 stoic_robinson[463401]: 167 167
Nov 25 17:58:35 compute-0 systemd[1]: libpod-c7e3f7b3c05629e64b7c578d0a58a27532a52334c637c73b4dba70fc03be0412.scope: Deactivated successfully.
Nov 25 17:58:35 compute-0 conmon[463401]: conmon c7e3f7b3c05629e64b7c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c7e3f7b3c05629e64b7c578d0a58a27532a52334c637c73b4dba70fc03be0412.scope/container/memory.events
Nov 25 17:58:35 compute-0 podman[463384]: 2025-11-25 17:58:35.905243894 +0000 UTC m=+0.171457763 container died c7e3f7b3c05629e64b7c578d0a58a27532a52334c637c73b4dba70fc03be0412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 17:58:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-2001923bd13ee96e4aa38b8b68c6465852a6b7f31ce659e2d548e945c23e1791-merged.mount: Deactivated successfully.
Nov 25 17:58:35 compute-0 podman[463384]: 2025-11-25 17:58:35.955772776 +0000 UTC m=+0.221986605 container remove c7e3f7b3c05629e64b7c578d0a58a27532a52334c637c73b4dba70fc03be0412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_robinson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 17:58:35 compute-0 systemd[1]: libpod-conmon-c7e3f7b3c05629e64b7c578d0a58a27532a52334c637c73b4dba70fc03be0412.scope: Deactivated successfully.
Nov 25 17:58:36 compute-0 podman[463425]: 2025-11-25 17:58:36.171948602 +0000 UTC m=+0.060795091 container create 9c870e3fd9bb174934c39afffc9eadc9fd8bcef6430df6674dcc2eaad6754e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:58:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4074: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:36 compute-0 systemd[1]: Started libpod-conmon-9c870e3fd9bb174934c39afffc9eadc9fd8bcef6430df6674dcc2eaad6754e8b.scope.
Nov 25 17:58:36 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:58:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/081712b0d91fc17d68c3bde0ae5459c6d04f54b9b6b5c2504aebba420d521b25/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:58:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/081712b0d91fc17d68c3bde0ae5459c6d04f54b9b6b5c2504aebba420d521b25/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:58:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/081712b0d91fc17d68c3bde0ae5459c6d04f54b9b6b5c2504aebba420d521b25/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:58:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/081712b0d91fc17d68c3bde0ae5459c6d04f54b9b6b5c2504aebba420d521b25/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:58:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/081712b0d91fc17d68c3bde0ae5459c6d04f54b9b6b5c2504aebba420d521b25/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:58:36 compute-0 podman[463425]: 2025-11-25 17:58:36.154350553 +0000 UTC m=+0.043197052 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:58:36 compute-0 podman[463425]: 2025-11-25 17:58:36.260949336 +0000 UTC m=+0.149795825 container init 9c870e3fd9bb174934c39afffc9eadc9fd8bcef6430df6674dcc2eaad6754e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_swirles, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:58:36 compute-0 podman[463425]: 2025-11-25 17:58:36.268307786 +0000 UTC m=+0.157154275 container start 9c870e3fd9bb174934c39afffc9eadc9fd8bcef6430df6674dcc2eaad6754e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_swirles, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 17:58:36 compute-0 podman[463425]: 2025-11-25 17:58:36.271114312 +0000 UTC m=+0.159960801 container attach 9c870e3fd9bb174934c39afffc9eadc9fd8bcef6430df6674dcc2eaad6754e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_swirles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Nov 25 17:58:36 compute-0 nova_compute[254092]: 2025-11-25 17:58:36.283 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:58:36 compute-0 ceph-mon[74985]: pgmap v4074: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:36 compute-0 nova_compute[254092]: 2025-11-25 17:58:36.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:58:37 compute-0 friendly_swirles[463442]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:58:37 compute-0 friendly_swirles[463442]: --> relative data size: 1.0
Nov 25 17:58:37 compute-0 friendly_swirles[463442]: --> All data devices are unavailable
Nov 25 17:58:37 compute-0 systemd[1]: libpod-9c870e3fd9bb174934c39afffc9eadc9fd8bcef6430df6674dcc2eaad6754e8b.scope: Deactivated successfully.
Nov 25 17:58:37 compute-0 systemd[1]: libpod-9c870e3fd9bb174934c39afffc9eadc9fd8bcef6430df6674dcc2eaad6754e8b.scope: Consumed 1.104s CPU time.
Nov 25 17:58:37 compute-0 podman[463425]: 2025-11-25 17:58:37.423998435 +0000 UTC m=+1.312844964 container died 9c870e3fd9bb174934c39afffc9eadc9fd8bcef6430df6674dcc2eaad6754e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 17:58:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-081712b0d91fc17d68c3bde0ae5459c6d04f54b9b6b5c2504aebba420d521b25-merged.mount: Deactivated successfully.
Nov 25 17:58:37 compute-0 podman[463425]: 2025-11-25 17:58:37.50379076 +0000 UTC m=+1.392637259 container remove 9c870e3fd9bb174934c39afffc9eadc9fd8bcef6430df6674dcc2eaad6754e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 17:58:37 compute-0 systemd[1]: libpod-conmon-9c870e3fd9bb174934c39afffc9eadc9fd8bcef6430df6674dcc2eaad6754e8b.scope: Deactivated successfully.
Nov 25 17:58:37 compute-0 sudo[463319]: pam_unix(sudo:session): session closed for user root
Nov 25 17:58:37 compute-0 sudo[463483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:58:37 compute-0 sudo[463483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:58:37 compute-0 sudo[463483]: pam_unix(sudo:session): session closed for user root
Nov 25 17:58:37 compute-0 sudo[463508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:58:37 compute-0 sudo[463508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:58:37 compute-0 sudo[463508]: pam_unix(sudo:session): session closed for user root
Nov 25 17:58:37 compute-0 sudo[463533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:58:37 compute-0 sudo[463533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:58:37 compute-0 sudo[463533]: pam_unix(sudo:session): session closed for user root
Nov 25 17:58:37 compute-0 sudo[463558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:58:37 compute-0 sudo[463558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:58:38 compute-0 podman[463622]: 2025-11-25 17:58:38.196018102 +0000 UTC m=+0.056261188 container create 4d67324e27481d51ad4fe4b00c2cf41ac23e54088625c54ab366316466a616e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Nov 25 17:58:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4075: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:38 compute-0 systemd[1]: Started libpod-conmon-4d67324e27481d51ad4fe4b00c2cf41ac23e54088625c54ab366316466a616e3.scope.
Nov 25 17:58:38 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:58:38 compute-0 podman[463622]: 2025-11-25 17:58:38.168186036 +0000 UTC m=+0.028429162 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:58:38 compute-0 podman[463622]: 2025-11-25 17:58:38.273468044 +0000 UTC m=+0.133711180 container init 4d67324e27481d51ad4fe4b00c2cf41ac23e54088625c54ab366316466a616e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_neumann, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 17:58:38 compute-0 podman[463622]: 2025-11-25 17:58:38.282801276 +0000 UTC m=+0.143044362 container start 4d67324e27481d51ad4fe4b00c2cf41ac23e54088625c54ab366316466a616e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:58:38 compute-0 podman[463622]: 2025-11-25 17:58:38.285833889 +0000 UTC m=+0.146076995 container attach 4d67324e27481d51ad4fe4b00c2cf41ac23e54088625c54ab366316466a616e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_neumann, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 17:58:38 compute-0 ceph-mon[74985]: pgmap v4075: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:38 compute-0 serene_neumann[463638]: 167 167
Nov 25 17:58:38 compute-0 systemd[1]: libpod-4d67324e27481d51ad4fe4b00c2cf41ac23e54088625c54ab366316466a616e3.scope: Deactivated successfully.
Nov 25 17:58:38 compute-0 podman[463622]: 2025-11-25 17:58:38.288854011 +0000 UTC m=+0.149097117 container died 4d67324e27481d51ad4fe4b00c2cf41ac23e54088625c54ab366316466a616e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 17:58:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c54f84bb7002f4e6014f04760bfa47d4c3b26c87c4be5b66c28b2681b84655e-merged.mount: Deactivated successfully.
Nov 25 17:58:38 compute-0 podman[463622]: 2025-11-25 17:58:38.326063501 +0000 UTC m=+0.186306587 container remove 4d67324e27481d51ad4fe4b00c2cf41ac23e54088625c54ab366316466a616e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 17:58:38 compute-0 systemd[1]: libpod-conmon-4d67324e27481d51ad4fe4b00c2cf41ac23e54088625c54ab366316466a616e3.scope: Deactivated successfully.
Nov 25 17:58:38 compute-0 podman[463662]: 2025-11-25 17:58:38.516307403 +0000 UTC m=+0.040157521 container create c3c95ff25ec01a1c64bff1ed26e863cb1580847a1603a7e1294b9ee25aec95ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 17:58:38 compute-0 systemd[1]: Started libpod-conmon-c3c95ff25ec01a1c64bff1ed26e863cb1580847a1603a7e1294b9ee25aec95ff.scope.
Nov 25 17:58:38 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:58:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0e64e051d551ac1549eb6b0eef2bddc1c8245406a4b9c4675f0979362a111d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:58:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0e64e051d551ac1549eb6b0eef2bddc1c8245406a4b9c4675f0979362a111d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:58:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0e64e051d551ac1549eb6b0eef2bddc1c8245406a4b9c4675f0979362a111d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:58:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0e64e051d551ac1549eb6b0eef2bddc1c8245406a4b9c4675f0979362a111d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:58:38 compute-0 podman[463662]: 2025-11-25 17:58:38.499514377 +0000 UTC m=+0.023364505 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:58:38 compute-0 podman[463662]: 2025-11-25 17:58:38.594983718 +0000 UTC m=+0.118833856 container init c3c95ff25ec01a1c64bff1ed26e863cb1580847a1603a7e1294b9ee25aec95ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 17:58:38 compute-0 podman[463662]: 2025-11-25 17:58:38.602324136 +0000 UTC m=+0.126174264 container start c3c95ff25ec01a1c64bff1ed26e863cb1580847a1603a7e1294b9ee25aec95ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 17:58:38 compute-0 podman[463662]: 2025-11-25 17:58:38.605352789 +0000 UTC m=+0.129202907 container attach c3c95ff25ec01a1c64bff1ed26e863cb1580847a1603a7e1294b9ee25aec95ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 17:58:38 compute-0 nova_compute[254092]: 2025-11-25 17:58:38.794 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]: {
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:     "0": [
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:         {
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:             "devices": [
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:                 "/dev/loop3"
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:             ],
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:             "lv_name": "ceph_lv0",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:             "lv_size": "21470642176",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:             "name": "ceph_lv0",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:             "tags": {
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:                 "ceph.cluster_name": "ceph",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:                 "ceph.crush_device_class": "",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:                 "ceph.encrypted": "0",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:                 "ceph.osd_id": "0",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:                 "ceph.type": "block",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:                 "ceph.vdo": "0"
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:             },
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:             "type": "block",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:             "vg_name": "ceph_vg0"
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:         }
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:     ],
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:     "1": [
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:         {
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:             "devices": [
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:                 "/dev/loop4"
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:             ],
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:             "lv_name": "ceph_lv1",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:             "lv_size": "21470642176",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:             "name": "ceph_lv1",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:             "tags": {
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:                 "ceph.cluster_name": "ceph",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:                 "ceph.crush_device_class": "",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:                 "ceph.encrypted": "0",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:                 "ceph.osd_id": "1",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:                 "ceph.type": "block",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:                 "ceph.vdo": "0"
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:             },
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:             "type": "block",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:             "vg_name": "ceph_vg1"
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:         }
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:     ],
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:     "2": [
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:         {
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:             "devices": [
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:                 "/dev/loop5"
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:             ],
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:             "lv_name": "ceph_lv2",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:             "lv_size": "21470642176",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:             "name": "ceph_lv2",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:             "tags": {
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:                 "ceph.cluster_name": "ceph",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:                 "ceph.crush_device_class": "",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:                 "ceph.encrypted": "0",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:                 "ceph.osd_id": "2",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:                 "ceph.type": "block",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:                 "ceph.vdo": "0"
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:             },
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:             "type": "block",
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:             "vg_name": "ceph_vg2"
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:         }
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]:     ]
Nov 25 17:58:39 compute-0 festive_hofstadter[463678]: }
Nov 25 17:58:39 compute-0 systemd[1]: libpod-c3c95ff25ec01a1c64bff1ed26e863cb1580847a1603a7e1294b9ee25aec95ff.scope: Deactivated successfully.
Nov 25 17:58:39 compute-0 podman[463662]: 2025-11-25 17:58:39.382771764 +0000 UTC m=+0.906621892 container died c3c95ff25ec01a1c64bff1ed26e863cb1580847a1603a7e1294b9ee25aec95ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hofstadter, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Nov 25 17:58:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0e64e051d551ac1549eb6b0eef2bddc1c8245406a4b9c4675f0979362a111d7-merged.mount: Deactivated successfully.
Nov 25 17:58:39 compute-0 podman[463662]: 2025-11-25 17:58:39.449017241 +0000 UTC m=+0.972867389 container remove c3c95ff25ec01a1c64bff1ed26e863cb1580847a1603a7e1294b9ee25aec95ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hofstadter, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:58:39 compute-0 systemd[1]: libpod-conmon-c3c95ff25ec01a1c64bff1ed26e863cb1580847a1603a7e1294b9ee25aec95ff.scope: Deactivated successfully.
Nov 25 17:58:39 compute-0 sudo[463558]: pam_unix(sudo:session): session closed for user root
Nov 25 17:58:39 compute-0 sudo[463698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:58:39 compute-0 sudo[463698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:58:39 compute-0 sudo[463698]: pam_unix(sudo:session): session closed for user root
Nov 25 17:58:39 compute-0 sudo[463723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:58:39 compute-0 sudo[463723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:58:39 compute-0 sudo[463723]: pam_unix(sudo:session): session closed for user root
Nov 25 17:58:39 compute-0 sudo[463748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:58:39 compute-0 sudo[463748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:58:39 compute-0 sudo[463748]: pam_unix(sudo:session): session closed for user root
Nov 25 17:58:39 compute-0 sudo[463773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:58:39 compute-0 sudo[463773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:58:40 compute-0 podman[463839]: 2025-11-25 17:58:40.097488697 +0000 UTC m=+0.057590364 container create 52527121f331c27ef9c84f9b614b0716e9d6a127473224c1d0d56bc88d2c2daf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_meitner, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 17:58:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:58:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:58:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:58:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:58:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:58:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:58:40 compute-0 systemd[1]: Started libpod-conmon-52527121f331c27ef9c84f9b614b0716e9d6a127473224c1d0d56bc88d2c2daf.scope.
Nov 25 17:58:40 compute-0 podman[463839]: 2025-11-25 17:58:40.073889487 +0000 UTC m=+0.033991244 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:58:40 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:58:40 compute-0 podman[463839]: 2025-11-25 17:58:40.208628273 +0000 UTC m=+0.168730030 container init 52527121f331c27ef9c84f9b614b0716e9d6a127473224c1d0d56bc88d2c2daf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_meitner, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 17:58:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4076: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:40 compute-0 podman[463839]: 2025-11-25 17:58:40.217517954 +0000 UTC m=+0.177619621 container start 52527121f331c27ef9c84f9b614b0716e9d6a127473224c1d0d56bc88d2c2daf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:58:40 compute-0 podman[463839]: 2025-11-25 17:58:40.220915946 +0000 UTC m=+0.181017703 container attach 52527121f331c27ef9c84f9b614b0716e9d6a127473224c1d0d56bc88d2c2daf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 17:58:40 compute-0 thirsty_meitner[463855]: 167 167
Nov 25 17:58:40 compute-0 systemd[1]: libpod-52527121f331c27ef9c84f9b614b0716e9d6a127473224c1d0d56bc88d2c2daf.scope: Deactivated successfully.
Nov 25 17:58:40 compute-0 podman[463839]: 2025-11-25 17:58:40.224512683 +0000 UTC m=+0.184614350 container died 52527121f331c27ef9c84f9b614b0716e9d6a127473224c1d0d56bc88d2c2daf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_meitner, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 17:58:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0abbd84eb03d0666ea3a870cbb3e9638c51d6a62cc515ba92714814080df2ea-merged.mount: Deactivated successfully.
Nov 25 17:58:40 compute-0 podman[463839]: 2025-11-25 17:58:40.262145465 +0000 UTC m=+0.222247132 container remove 52527121f331c27ef9c84f9b614b0716e9d6a127473224c1d0d56bc88d2c2daf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:58:40 compute-0 systemd[1]: libpod-conmon-52527121f331c27ef9c84f9b614b0716e9d6a127473224c1d0d56bc88d2c2daf.scope: Deactivated successfully.
Nov 25 17:58:40 compute-0 ceph-mon[74985]: pgmap v4076: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:58:40
Nov 25 17:58:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:58:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:58:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['volumes', '.mgr', 'default.rgw.control', '.rgw.root', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta', 'images', 'cephfs.cephfs.data', 'backups', 'default.rgw.log']
Nov 25 17:58:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:58:40 compute-0 podman[463880]: 2025-11-25 17:58:40.482872445 +0000 UTC m=+0.084664300 container create cc95ab5ef958dd706e5ed966b94ae7f2429d23b0e5819c6216b89b3a028dafd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 17:58:40 compute-0 nova_compute[254092]: 2025-11-25 17:58:40.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:58:40 compute-0 podman[463880]: 2025-11-25 17:58:40.425469476 +0000 UTC m=+0.027261411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:58:40 compute-0 systemd[1]: Started libpod-conmon-cc95ab5ef958dd706e5ed966b94ae7f2429d23b0e5819c6216b89b3a028dafd0.scope.
Nov 25 17:58:40 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:58:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8685aa150edcba1714bdbcbefb7e09cb6fa2d033ee328584258edafb83f31fd3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:58:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8685aa150edcba1714bdbcbefb7e09cb6fa2d033ee328584258edafb83f31fd3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:58:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8685aa150edcba1714bdbcbefb7e09cb6fa2d033ee328584258edafb83f31fd3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:58:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8685aa150edcba1714bdbcbefb7e09cb6fa2d033ee328584258edafb83f31fd3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:58:40 compute-0 podman[463880]: 2025-11-25 17:58:40.573906494 +0000 UTC m=+0.175698369 container init cc95ab5ef958dd706e5ed966b94ae7f2429d23b0e5819c6216b89b3a028dafd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 17:58:40 compute-0 podman[463880]: 2025-11-25 17:58:40.584528552 +0000 UTC m=+0.186320437 container start cc95ab5ef958dd706e5ed966b94ae7f2429d23b0e5819c6216b89b3a028dafd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shirley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 17:58:40 compute-0 podman[463880]: 2025-11-25 17:58:40.589102496 +0000 UTC m=+0.190894391 container attach cc95ab5ef958dd706e5ed966b94ae7f2429d23b0e5819c6216b89b3a028dafd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shirley, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:58:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:58:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:58:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:58:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:58:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:58:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:58:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:58:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:58:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:58:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:58:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:58:41 compute-0 nova_compute[254092]: 2025-11-25 17:58:41.314 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:58:41 compute-0 competent_shirley[463896]: {
Nov 25 17:58:41 compute-0 competent_shirley[463896]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:58:41 compute-0 competent_shirley[463896]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:58:41 compute-0 competent_shirley[463896]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:58:41 compute-0 competent_shirley[463896]:         "osd_id": 1,
Nov 25 17:58:41 compute-0 competent_shirley[463896]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:58:41 compute-0 competent_shirley[463896]:         "type": "bluestore"
Nov 25 17:58:41 compute-0 competent_shirley[463896]:     },
Nov 25 17:58:41 compute-0 competent_shirley[463896]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:58:41 compute-0 competent_shirley[463896]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:58:41 compute-0 competent_shirley[463896]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:58:41 compute-0 competent_shirley[463896]:         "osd_id": 2,
Nov 25 17:58:41 compute-0 competent_shirley[463896]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:58:41 compute-0 competent_shirley[463896]:         "type": "bluestore"
Nov 25 17:58:41 compute-0 competent_shirley[463896]:     },
Nov 25 17:58:41 compute-0 competent_shirley[463896]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:58:41 compute-0 competent_shirley[463896]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:58:41 compute-0 competent_shirley[463896]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:58:41 compute-0 competent_shirley[463896]:         "osd_id": 0,
Nov 25 17:58:41 compute-0 competent_shirley[463896]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:58:41 compute-0 competent_shirley[463896]:         "type": "bluestore"
Nov 25 17:58:41 compute-0 competent_shirley[463896]:     }
Nov 25 17:58:41 compute-0 competent_shirley[463896]: }
Nov 25 17:58:41 compute-0 systemd[1]: libpod-cc95ab5ef958dd706e5ed966b94ae7f2429d23b0e5819c6216b89b3a028dafd0.scope: Deactivated successfully.
Nov 25 17:58:41 compute-0 podman[463880]: 2025-11-25 17:58:41.667079346 +0000 UTC m=+1.268871201 container died cc95ab5ef958dd706e5ed966b94ae7f2429d23b0e5819c6216b89b3a028dafd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:58:41 compute-0 systemd[1]: libpod-cc95ab5ef958dd706e5ed966b94ae7f2429d23b0e5819c6216b89b3a028dafd0.scope: Consumed 1.082s CPU time.
Nov 25 17:58:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-8685aa150edcba1714bdbcbefb7e09cb6fa2d033ee328584258edafb83f31fd3-merged.mount: Deactivated successfully.
Nov 25 17:58:41 compute-0 podman[463880]: 2025-11-25 17:58:41.778274053 +0000 UTC m=+1.380065898 container remove cc95ab5ef958dd706e5ed966b94ae7f2429d23b0e5819c6216b89b3a028dafd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shirley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 17:58:41 compute-0 systemd[1]: libpod-conmon-cc95ab5ef958dd706e5ed966b94ae7f2429d23b0e5819c6216b89b3a028dafd0.scope: Deactivated successfully.
Nov 25 17:58:41 compute-0 sudo[463773]: pam_unix(sudo:session): session closed for user root
Nov 25 17:58:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:58:41 compute-0 podman[463932]: 2025-11-25 17:58:41.817617331 +0000 UTC m=+0.072759415 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 17:58:41 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:58:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:58:41 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:58:41 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev e1a4889d-9700-402e-8eb6-37a7e0946881 does not exist
Nov 25 17:58:41 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev a64333cc-9fce-4bb2-b2c7-c0affcaa51bf does not exist
Nov 25 17:58:41 compute-0 podman[463940]: 2025-11-25 17:58:41.83747143 +0000 UTC m=+0.092851421 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Nov 25 17:58:41 compute-0 podman[463941]: 2025-11-25 17:58:41.840338457 +0000 UTC m=+0.093257351 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 17:58:41 compute-0 sudo[463998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:58:41 compute-0 sudo[463998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:58:41 compute-0 sudo[463998]: pam_unix(sudo:session): session closed for user root
Nov 25 17:58:41 compute-0 sudo[464023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:58:41 compute-0 sudo[464023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:58:41 compute-0 sudo[464023]: pam_unix(sudo:session): session closed for user root
Nov 25 17:58:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4077: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:42 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:58:42 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:58:42 compute-0 ceph-mon[74985]: pgmap v4077: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:43 compute-0 nova_compute[254092]: 2025-11-25 17:58:43.798 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:58:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4078: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:44 compute-0 ceph-mon[74985]: pgmap v4078: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:58:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4079: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:46 compute-0 ceph-mon[74985]: pgmap v4079: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:46 compute-0 nova_compute[254092]: 2025-11-25 17:58:46.315 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:58:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4080: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:48 compute-0 ceph-mon[74985]: pgmap v4080: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:48 compute-0 nova_compute[254092]: 2025-11-25 17:58:48.801 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:58:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4081: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:50 compute-0 ceph-mon[74985]: pgmap v4081: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:58:51 compute-0 nova_compute[254092]: 2025-11-25 17:58:51.316 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:58:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4082: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:52 compute-0 ceph-mon[74985]: pgmap v4082: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:58:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:58:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:58:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:58:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 17:58:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:58:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:58:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:58:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:58:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:58:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 17:58:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:58:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:58:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:58:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:58:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:58:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:58:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:58:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:58:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:58:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:58:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:58:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:58:53 compute-0 nova_compute[254092]: 2025-11-25 17:58:53.837 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:58:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4083: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:54 compute-0 ceph-mon[74985]: pgmap v4083: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:58:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2738161178' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:58:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:58:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2738161178' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:58:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/2738161178' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:58:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/2738161178' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:58:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:58:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4084: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:56 compute-0 nova_compute[254092]: 2025-11-25 17:58:56.359 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:58:56 compute-0 ceph-mon[74985]: pgmap v4084: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4085: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:58 compute-0 ceph-mon[74985]: pgmap v4085: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:58:58 compute-0 nova_compute[254092]: 2025-11-25 17:58:58.840 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:59:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4086: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:00 compute-0 ceph-mon[74985]: pgmap v4086: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:59:00 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #195. Immutable memtables: 0.
Nov 25 17:59:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:00.848511) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 17:59:00 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 121] Flushing memtable with next log file: 195
Nov 25 17:59:00 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093540848534, "job": 121, "event": "flush_started", "num_memtables": 1, "num_entries": 1955, "num_deletes": 250, "total_data_size": 3309509, "memory_usage": 3364008, "flush_reason": "Manual Compaction"}
Nov 25 17:59:00 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 121] Level-0 flush table #196: started
Nov 25 17:59:00 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093540863633, "cf_name": "default", "job": 121, "event": "table_file_creation", "file_number": 196, "file_size": 1897128, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 82058, "largest_seqno": 84012, "table_properties": {"data_size": 1890749, "index_size": 3260, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16461, "raw_average_key_size": 20, "raw_value_size": 1876615, "raw_average_value_size": 2363, "num_data_blocks": 150, "num_entries": 794, "num_filter_entries": 794, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764093318, "oldest_key_time": 1764093318, "file_creation_time": 1764093540, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 196, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:59:00 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 121] Flush lasted 15211 microseconds, and 7414 cpu microseconds.
Nov 25 17:59:00 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:59:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:00.863716) [db/flush_job.cc:967] [default] [JOB 121] Level-0 flush table #196: 1897128 bytes OK
Nov 25 17:59:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:00.863738) [db/memtable_list.cc:519] [default] Level-0 commit table #196 started
Nov 25 17:59:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:00.865180) [db/memtable_list.cc:722] [default] Level-0 commit table #196: memtable #1 done
Nov 25 17:59:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:00.865194) EVENT_LOG_v1 {"time_micros": 1764093540865189, "job": 121, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 17:59:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:00.865214) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 17:59:00 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 121] Try to delete WAL files size 3301262, prev total WAL file size 3301262, number of live WAL files 2.
Nov 25 17:59:00 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000192.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:59:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:00.866211) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033353131' seq:72057594037927935, type:22 .. '6D6772737461740033373632' seq:0, type:0; will stop at (end)
Nov 25 17:59:00 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 122] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 17:59:00 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 121 Base level 0, inputs: [196(1852KB)], [194(10MB)]
Nov 25 17:59:00 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093540866236, "job": 122, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [196], "files_L6": [194], "score": -1, "input_data_size": 12851139, "oldest_snapshot_seqno": -1}
Nov 25 17:59:00 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 122] Generated table #197: 10019 keys, 10878473 bytes, temperature: kUnknown
Nov 25 17:59:00 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093540931996, "cf_name": "default", "job": 122, "event": "table_file_creation", "file_number": 197, "file_size": 10878473, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10817903, "index_size": 34409, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25093, "raw_key_size": 262823, "raw_average_key_size": 26, "raw_value_size": 10645243, "raw_average_value_size": 1062, "num_data_blocks": 1328, "num_entries": 10019, "num_filter_entries": 10019, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764093540, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 197, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:59:00 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:59:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:00.932265) [db/compaction/compaction_job.cc:1663] [default] [JOB 122] Compacted 1@0 + 1@6 files to L6 => 10878473 bytes
Nov 25 17:59:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:00.933707) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 195.2 rd, 165.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 10.4 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(12.5) write-amplify(5.7) OK, records in: 10430, records dropped: 411 output_compression: NoCompression
Nov 25 17:59:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:00.933726) EVENT_LOG_v1 {"time_micros": 1764093540933716, "job": 122, "event": "compaction_finished", "compaction_time_micros": 65843, "compaction_time_cpu_micros": 27992, "output_level": 6, "num_output_files": 1, "total_output_size": 10878473, "num_input_records": 10430, "num_output_records": 10019, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 17:59:00 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000196.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:59:00 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093540934198, "job": 122, "event": "table_file_deletion", "file_number": 196}
Nov 25 17:59:00 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000194.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:59:00 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093540936378, "job": 122, "event": "table_file_deletion", "file_number": 194}
Nov 25 17:59:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:00.866148) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:59:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:00.936475) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:59:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:00.936483) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:59:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:00.936485) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:59:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:00.936486) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:59:00 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:00.936488) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:59:01 compute-0 nova_compute[254092]: 2025-11-25 17:59:01.359 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:59:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4087: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:02 compute-0 ceph-mon[74985]: pgmap v4087: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:03 compute-0 nova_compute[254092]: 2025-11-25 17:59:03.844 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:59:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4088: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:04 compute-0 ceph-mon[74985]: pgmap v4088: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:59:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4089: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:06 compute-0 ceph-mon[74985]: pgmap v4089: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:06 compute-0 nova_compute[254092]: 2025-11-25 17:59:06.406 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:59:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4090: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:08 compute-0 ceph-mon[74985]: pgmap v4090: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:08 compute-0 nova_compute[254092]: 2025-11-25 17:59:08.848 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:59:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:59:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:59:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:59:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:59:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:59:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:59:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4091: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:10 compute-0 ceph-mon[74985]: pgmap v4091: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:59:11 compute-0 nova_compute[254092]: 2025-11-25 17:59:11.408 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:59:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4092: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:12 compute-0 ceph-mon[74985]: pgmap v4092: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:12 compute-0 podman[464049]: 2025-11-25 17:59:12.670713296 +0000 UTC m=+0.071244735 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 17:59:12 compute-0 podman[464048]: 2025-11-25 17:59:12.676484412 +0000 UTC m=+0.079690354 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3)
Nov 25 17:59:12 compute-0 podman[464050]: 2025-11-25 17:59:12.70771211 +0000 UTC m=+0.107323124 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 17:59:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:59:13.704 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:59:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:59:13.705 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:59:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 17:59:13.705 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:59:13 compute-0 nova_compute[254092]: 2025-11-25 17:59:13.850 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:59:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4093: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:14 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #198. Immutable memtables: 0.
Nov 25 17:59:14 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:14.291907) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 17:59:14 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 123] Flushing memtable with next log file: 198
Nov 25 17:59:14 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093554291943, "job": 123, "event": "flush_started", "num_memtables": 1, "num_entries": 347, "num_deletes": 251, "total_data_size": 206013, "memory_usage": 213720, "flush_reason": "Manual Compaction"}
Nov 25 17:59:14 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 123] Level-0 flush table #199: started
Nov 25 17:59:14 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093554294849, "cf_name": "default", "job": 123, "event": "table_file_creation", "file_number": 199, "file_size": 204447, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 84013, "largest_seqno": 84359, "table_properties": {"data_size": 202256, "index_size": 354, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5369, "raw_average_key_size": 18, "raw_value_size": 198035, "raw_average_value_size": 680, "num_data_blocks": 16, "num_entries": 291, "num_filter_entries": 291, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764093542, "oldest_key_time": 1764093542, "file_creation_time": 1764093554, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 199, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:59:14 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 123] Flush lasted 2984 microseconds, and 1100 cpu microseconds.
Nov 25 17:59:14 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:59:14 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:14.294887) [db/flush_job.cc:967] [default] [JOB 123] Level-0 flush table #199: 204447 bytes OK
Nov 25 17:59:14 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:14.294906) [db/memtable_list.cc:519] [default] Level-0 commit table #199 started
Nov 25 17:59:14 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:14.296069) [db/memtable_list.cc:722] [default] Level-0 commit table #199: memtable #1 done
Nov 25 17:59:14 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:14.296085) EVENT_LOG_v1 {"time_micros": 1764093554296080, "job": 123, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 17:59:14 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:14.296103) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 17:59:14 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 123] Try to delete WAL files size 203669, prev total WAL file size 203669, number of live WAL files 2.
Nov 25 17:59:14 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000195.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:59:14 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:14.296493) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038323833' seq:72057594037927935, type:22 .. '7061786F730038353335' seq:0, type:0; will stop at (end)
Nov 25 17:59:14 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 124] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 17:59:14 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 123 Base level 0, inputs: [199(199KB)], [197(10MB)]
Nov 25 17:59:14 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093554296535, "job": 124, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [199], "files_L6": [197], "score": -1, "input_data_size": 11082920, "oldest_snapshot_seqno": -1}
Nov 25 17:59:14 compute-0 ceph-mon[74985]: pgmap v4093: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:14 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 124] Generated table #200: 9801 keys, 9361557 bytes, temperature: kUnknown
Nov 25 17:59:14 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093554367037, "cf_name": "default", "job": 124, "event": "table_file_creation", "file_number": 200, "file_size": 9361557, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9303898, "index_size": 32074, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24517, "raw_key_size": 258973, "raw_average_key_size": 26, "raw_value_size": 9136497, "raw_average_value_size": 932, "num_data_blocks": 1221, "num_entries": 9801, "num_filter_entries": 9801, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764093554, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 200, "seqno_to_time_mapping": "N/A"}}
Nov 25 17:59:14 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 17:59:14 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:14.367287) [db/compaction/compaction_job.cc:1663] [default] [JOB 124] Compacted 1@0 + 1@6 files to L6 => 9361557 bytes
Nov 25 17:59:14 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:14.368874) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 157.0 rd, 132.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 10.4 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(100.0) write-amplify(45.8) OK, records in: 10310, records dropped: 509 output_compression: NoCompression
Nov 25 17:59:14 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:14.368894) EVENT_LOG_v1 {"time_micros": 1764093554368885, "job": 124, "event": "compaction_finished", "compaction_time_micros": 70581, "compaction_time_cpu_micros": 32398, "output_level": 6, "num_output_files": 1, "total_output_size": 9361557, "num_input_records": 10310, "num_output_records": 9801, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 17:59:14 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000199.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:59:14 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093554369043, "job": 124, "event": "table_file_deletion", "file_number": 199}
Nov 25 17:59:14 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000197.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 17:59:14 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093554370840, "job": 124, "event": "table_file_deletion", "file_number": 197}
Nov 25 17:59:14 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:14.296366) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:59:14 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:14.370909) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:59:14 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:14.370914) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:59:14 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:14.370915) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:59:14 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:14.370917) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:59:14 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-17:59:14.370918) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 17:59:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:59:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4094: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:16 compute-0 ceph-mon[74985]: pgmap v4094: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:16 compute-0 nova_compute[254092]: 2025-11-25 17:59:16.410 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:59:16 compute-0 nova_compute[254092]: 2025-11-25 17:59:16.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:59:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4095: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:18 compute-0 ceph-mon[74985]: pgmap v4095: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:18 compute-0 nova_compute[254092]: 2025-11-25 17:59:18.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:59:18 compute-0 nova_compute[254092]: 2025-11-25 17:59:18.855 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:59:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4096: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:20 compute-0 ceph-mon[74985]: pgmap v4096: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:20 compute-0 nova_compute[254092]: 2025-11-25 17:59:20.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:59:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:59:21 compute-0 nova_compute[254092]: 2025-11-25 17:59:21.436 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:59:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4097: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:22 compute-0 ceph-mon[74985]: pgmap v4097: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:23 compute-0 nova_compute[254092]: 2025-11-25 17:59:23.493 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:59:23 compute-0 nova_compute[254092]: 2025-11-25 17:59:23.858 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:59:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4098: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:24 compute-0 ceph-mon[74985]: pgmap v4098: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:24 compute-0 nova_compute[254092]: 2025-11-25 17:59:24.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:59:24 compute-0 nova_compute[254092]: 2025-11-25 17:59:24.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 17:59:24 compute-0 nova_compute[254092]: 2025-11-25 17:59:24.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 17:59:24 compute-0 nova_compute[254092]: 2025-11-25 17:59:24.516 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 17:59:24 compute-0 nova_compute[254092]: 2025-11-25 17:59:24.516 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:59:24 compute-0 nova_compute[254092]: 2025-11-25 17:59:24.548 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:59:24 compute-0 nova_compute[254092]: 2025-11-25 17:59:24.548 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:59:24 compute-0 nova_compute[254092]: 2025-11-25 17:59:24.549 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:59:24 compute-0 nova_compute[254092]: 2025-11-25 17:59:24.549 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 17:59:24 compute-0 nova_compute[254092]: 2025-11-25 17:59:24.549 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:59:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:59:25 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/672255322' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:59:25 compute-0 nova_compute[254092]: 2025-11-25 17:59:25.040 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:59:25 compute-0 nova_compute[254092]: 2025-11-25 17:59:25.265 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 17:59:25 compute-0 nova_compute[254092]: 2025-11-25 17:59:25.267 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3624MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 17:59:25 compute-0 nova_compute[254092]: 2025-11-25 17:59:25.268 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 17:59:25 compute-0 nova_compute[254092]: 2025-11-25 17:59:25.269 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 17:59:25 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/672255322' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:59:25 compute-0 nova_compute[254092]: 2025-11-25 17:59:25.346 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 17:59:25 compute-0 nova_compute[254092]: 2025-11-25 17:59:25.347 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 17:59:25 compute-0 nova_compute[254092]: 2025-11-25 17:59:25.372 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 17:59:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 17:59:25 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4251580964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:59:25 compute-0 nova_compute[254092]: 2025-11-25 17:59:25.801 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 17:59:25 compute-0 nova_compute[254092]: 2025-11-25 17:59:25.808 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 17:59:25 compute-0 nova_compute[254092]: 2025-11-25 17:59:25.823 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 17:59:25 compute-0 nova_compute[254092]: 2025-11-25 17:59:25.825 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 17:59:25 compute-0 nova_compute[254092]: 2025-11-25 17:59:25.826 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 17:59:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:59:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4099: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:26 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4251580964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 17:59:26 compute-0 ceph-mon[74985]: pgmap v4099: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:26 compute-0 nova_compute[254092]: 2025-11-25 17:59:26.436 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:59:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4100: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:28 compute-0 ceph-mon[74985]: pgmap v4100: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:28 compute-0 nova_compute[254092]: 2025-11-25 17:59:28.861 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:59:29 compute-0 nova_compute[254092]: 2025-11-25 17:59:29.813 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:59:29 compute-0 nova_compute[254092]: 2025-11-25 17:59:29.815 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:59:29 compute-0 nova_compute[254092]: 2025-11-25 17:59:29.815 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 17:59:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4101: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:30 compute-0 ceph-mon[74985]: pgmap v4101: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:59:31 compute-0 nova_compute[254092]: 2025-11-25 17:59:31.438 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:59:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4102: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:32 compute-0 ceph-mon[74985]: pgmap v4102: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:33 compute-0 nova_compute[254092]: 2025-11-25 17:59:33.865 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:59:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4103: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:34 compute-0 ceph-mon[74985]: pgmap v4103: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:34 compute-0 nova_compute[254092]: 2025-11-25 17:59:34.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:59:34 compute-0 nova_compute[254092]: 2025-11-25 17:59:34.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 17:59:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:59:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4104: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:36 compute-0 ceph-mon[74985]: pgmap v4104: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:36 compute-0 nova_compute[254092]: 2025-11-25 17:59:36.439 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:59:37 compute-0 nova_compute[254092]: 2025-11-25 17:59:37.517 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:59:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4105: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:38 compute-0 ceph-mon[74985]: pgmap v4105: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:38 compute-0 nova_compute[254092]: 2025-11-25 17:59:38.869 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:59:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:59:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:59:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:59:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:59:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 17:59:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 17:59:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4106: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_17:59:40
Nov 25 17:59:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 17:59:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 17:59:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'default.rgw.log', 'volumes', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'backups', '.rgw.root', '.mgr']
Nov 25 17:59:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 17:59:40 compute-0 ceph-mon[74985]: pgmap v4106: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 17:59:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:59:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:59:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:59:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:59:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:59:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 17:59:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 17:59:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 17:59:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 17:59:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 17:59:41 compute-0 nova_compute[254092]: 2025-11-25 17:59:41.442 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:59:42 compute-0 sudo[464155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:59:42 compute-0 sudo[464155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:59:42 compute-0 sudo[464155]: pam_unix(sudo:session): session closed for user root
Nov 25 17:59:42 compute-0 sudo[464180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:59:42 compute-0 sudo[464180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:59:42 compute-0 sudo[464180]: pam_unix(sudo:session): session closed for user root
Nov 25 17:59:42 compute-0 sudo[464205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:59:42 compute-0 sudo[464205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:59:42 compute-0 sudo[464205]: pam_unix(sudo:session): session closed for user root
Nov 25 17:59:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4107: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:42 compute-0 sudo[464230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 17:59:42 compute-0 sudo[464230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:59:42 compute-0 ceph-mon[74985]: pgmap v4107: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:42 compute-0 sudo[464230]: pam_unix(sudo:session): session closed for user root
Nov 25 17:59:42 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 17:59:42 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 17:59:42 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:59:42 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:59:42 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 17:59:42 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:59:42 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 17:59:42 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:59:42 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 043169e6-03c9-41f2-8f3b-3fdb5a0c52f3 does not exist
Nov 25 17:59:42 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 2f06f259-f02e-43d9-a607-fc7827b5a53d does not exist
Nov 25 17:59:42 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 2eeaaed8-b6e5-4062-838b-4534866f5909 does not exist
Nov 25 17:59:42 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 17:59:42 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:59:42 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 17:59:42 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:59:42 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 17:59:42 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:59:42 compute-0 sudo[464286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:59:42 compute-0 sudo[464286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:59:42 compute-0 sudo[464286]: pam_unix(sudo:session): session closed for user root
Nov 25 17:59:42 compute-0 podman[464311]: 2025-11-25 17:59:42.972499071 +0000 UTC m=+0.076143457 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 17:59:42 compute-0 sudo[464329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:59:42 compute-0 sudo[464329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:59:42 compute-0 sudo[464329]: pam_unix(sudo:session): session closed for user root
Nov 25 17:59:42 compute-0 podman[464310]: 2025-11-25 17:59:42.990619563 +0000 UTC m=+0.093397916 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 17:59:43 compute-0 podman[464312]: 2025-11-25 17:59:43.006312898 +0000 UTC m=+0.108171486 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller)
Nov 25 17:59:43 compute-0 sudo[464394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:59:43 compute-0 sudo[464394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:59:43 compute-0 sudo[464394]: pam_unix(sudo:session): session closed for user root
Nov 25 17:59:43 compute-0 sudo[464421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 17:59:43 compute-0 sudo[464421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:59:43 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 17:59:43 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:59:43 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 17:59:43 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:59:43 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 17:59:43 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 17:59:43 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 17:59:43 compute-0 podman[464486]: 2025-11-25 17:59:43.49138522 +0000 UTC m=+0.038622918 container create 5c9c71aff816ae3d6ef2fba2318f5ce98bbd431c0960e7e4c8012dc5b7bba830 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_euler, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:59:43 compute-0 systemd[1]: Started libpod-conmon-5c9c71aff816ae3d6ef2fba2318f5ce98bbd431c0960e7e4c8012dc5b7bba830.scope.
Nov 25 17:59:43 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:59:43 compute-0 podman[464486]: 2025-11-25 17:59:43.473901526 +0000 UTC m=+0.021139244 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:59:43 compute-0 podman[464486]: 2025-11-25 17:59:43.586590214 +0000 UTC m=+0.133827952 container init 5c9c71aff816ae3d6ef2fba2318f5ce98bbd431c0960e7e4c8012dc5b7bba830 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_euler, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:59:43 compute-0 podman[464486]: 2025-11-25 17:59:43.593918473 +0000 UTC m=+0.141156171 container start 5c9c71aff816ae3d6ef2fba2318f5ce98bbd431c0960e7e4c8012dc5b7bba830 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_euler, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 17:59:43 compute-0 podman[464486]: 2025-11-25 17:59:43.597298864 +0000 UTC m=+0.144536602 container attach 5c9c71aff816ae3d6ef2fba2318f5ce98bbd431c0960e7e4c8012dc5b7bba830 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_euler, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 17:59:43 compute-0 condescending_euler[464502]: 167 167
Nov 25 17:59:43 compute-0 systemd[1]: libpod-5c9c71aff816ae3d6ef2fba2318f5ce98bbd431c0960e7e4c8012dc5b7bba830.scope: Deactivated successfully.
Nov 25 17:59:43 compute-0 conmon[464502]: conmon 5c9c71aff816ae3d6ef2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5c9c71aff816ae3d6ef2fba2318f5ce98bbd431c0960e7e4c8012dc5b7bba830.scope/container/memory.events
Nov 25 17:59:43 compute-0 podman[464486]: 2025-11-25 17:59:43.602855885 +0000 UTC m=+0.150093583 container died 5c9c71aff816ae3d6ef2fba2318f5ce98bbd431c0960e7e4c8012dc5b7bba830 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_euler, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:59:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-0edf2a1332a5a9c336c55154110dc8b1f7ad1001ce2644f196070f9f19f259c6-merged.mount: Deactivated successfully.
Nov 25 17:59:43 compute-0 podman[464486]: 2025-11-25 17:59:43.647534628 +0000 UTC m=+0.194772326 container remove 5c9c71aff816ae3d6ef2fba2318f5ce98bbd431c0960e7e4c8012dc5b7bba830 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:59:43 compute-0 systemd[1]: libpod-conmon-5c9c71aff816ae3d6ef2fba2318f5ce98bbd431c0960e7e4c8012dc5b7bba830.scope: Deactivated successfully.
Nov 25 17:59:43 compute-0 podman[464526]: 2025-11-25 17:59:43.805878375 +0000 UTC m=+0.044450008 container create 629287cbe609991c9a4fe8f08ed2a2210c002efa5d20ea69687b7a1df6d7789e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_noether, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:59:43 compute-0 systemd[1]: Started libpod-conmon-629287cbe609991c9a4fe8f08ed2a2210c002efa5d20ea69687b7a1df6d7789e.scope.
Nov 25 17:59:43 compute-0 nova_compute[254092]: 2025-11-25 17:59:43.872 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:59:43 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:59:43 compute-0 podman[464526]: 2025-11-25 17:59:43.78580665 +0000 UTC m=+0.024378303 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:59:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/096acaeaed2f906bd7a891d28b2522e8fcf7f4bbbdc3ce5a4fa54c5d09957caf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:59:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/096acaeaed2f906bd7a891d28b2522e8fcf7f4bbbdc3ce5a4fa54c5d09957caf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:59:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/096acaeaed2f906bd7a891d28b2522e8fcf7f4bbbdc3ce5a4fa54c5d09957caf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:59:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/096acaeaed2f906bd7a891d28b2522e8fcf7f4bbbdc3ce5a4fa54c5d09957caf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:59:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/096acaeaed2f906bd7a891d28b2522e8fcf7f4bbbdc3ce5a4fa54c5d09957caf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 17:59:43 compute-0 podman[464526]: 2025-11-25 17:59:43.89380518 +0000 UTC m=+0.132376843 container init 629287cbe609991c9a4fe8f08ed2a2210c002efa5d20ea69687b7a1df6d7789e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_noether, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 17:59:43 compute-0 podman[464526]: 2025-11-25 17:59:43.901501359 +0000 UTC m=+0.140072992 container start 629287cbe609991c9a4fe8f08ed2a2210c002efa5d20ea69687b7a1df6d7789e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 17:59:43 compute-0 podman[464526]: 2025-11-25 17:59:43.904388047 +0000 UTC m=+0.142959680 container attach 629287cbe609991c9a4fe8f08ed2a2210c002efa5d20ea69687b7a1df6d7789e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_noether, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:59:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4108: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:44 compute-0 ceph-mon[74985]: pgmap v4108: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:44 compute-0 gracious_noether[464542]: --> passed data devices: 0 physical, 3 LVM
Nov 25 17:59:44 compute-0 gracious_noether[464542]: --> relative data size: 1.0
Nov 25 17:59:44 compute-0 gracious_noether[464542]: --> All data devices are unavailable
Nov 25 17:59:44 compute-0 systemd[1]: libpod-629287cbe609991c9a4fe8f08ed2a2210c002efa5d20ea69687b7a1df6d7789e.scope: Deactivated successfully.
Nov 25 17:59:44 compute-0 podman[464526]: 2025-11-25 17:59:44.926948184 +0000 UTC m=+1.165519827 container died 629287cbe609991c9a4fe8f08ed2a2210c002efa5d20ea69687b7a1df6d7789e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_noether, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:59:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-096acaeaed2f906bd7a891d28b2522e8fcf7f4bbbdc3ce5a4fa54c5d09957caf-merged.mount: Deactivated successfully.
Nov 25 17:59:44 compute-0 podman[464526]: 2025-11-25 17:59:44.982525392 +0000 UTC m=+1.221097025 container remove 629287cbe609991c9a4fe8f08ed2a2210c002efa5d20ea69687b7a1df6d7789e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_noether, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 17:59:44 compute-0 systemd[1]: libpod-conmon-629287cbe609991c9a4fe8f08ed2a2210c002efa5d20ea69687b7a1df6d7789e.scope: Deactivated successfully.
Nov 25 17:59:45 compute-0 sudo[464421]: pam_unix(sudo:session): session closed for user root
Nov 25 17:59:45 compute-0 sudo[464584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:59:45 compute-0 sudo[464584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:59:45 compute-0 sudo[464584]: pam_unix(sudo:session): session closed for user root
Nov 25 17:59:45 compute-0 sudo[464609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:59:45 compute-0 sudo[464609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:59:45 compute-0 sudo[464609]: pam_unix(sudo:session): session closed for user root
Nov 25 17:59:45 compute-0 sudo[464634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:59:45 compute-0 sudo[464634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:59:45 compute-0 sudo[464634]: pam_unix(sudo:session): session closed for user root
Nov 25 17:59:45 compute-0 sudo[464659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 17:59:45 compute-0 sudo[464659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:59:45 compute-0 podman[464724]: 2025-11-25 17:59:45.575704987 +0000 UTC m=+0.042672139 container create 8d96007fccf2611f6c2a67cfb77d241d216acea53d6563626134efb98e881c3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 17:59:45 compute-0 systemd[1]: Started libpod-conmon-8d96007fccf2611f6c2a67cfb77d241d216acea53d6563626134efb98e881c3d.scope.
Nov 25 17:59:45 compute-0 podman[464724]: 2025-11-25 17:59:45.555826558 +0000 UTC m=+0.022793770 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:59:45 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:59:45 compute-0 podman[464724]: 2025-11-25 17:59:45.665886115 +0000 UTC m=+0.132853277 container init 8d96007fccf2611f6c2a67cfb77d241d216acea53d6563626134efb98e881c3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 17:59:45 compute-0 podman[464724]: 2025-11-25 17:59:45.672881064 +0000 UTC m=+0.139848226 container start 8d96007fccf2611f6c2a67cfb77d241d216acea53d6563626134efb98e881c3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mendeleev, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 17:59:45 compute-0 podman[464724]: 2025-11-25 17:59:45.676000769 +0000 UTC m=+0.142967931 container attach 8d96007fccf2611f6c2a67cfb77d241d216acea53d6563626134efb98e881c3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 17:59:45 compute-0 mystifying_mendeleev[464741]: 167 167
Nov 25 17:59:45 compute-0 systemd[1]: libpod-8d96007fccf2611f6c2a67cfb77d241d216acea53d6563626134efb98e881c3d.scope: Deactivated successfully.
Nov 25 17:59:45 compute-0 podman[464724]: 2025-11-25 17:59:45.678143227 +0000 UTC m=+0.145110389 container died 8d96007fccf2611f6c2a67cfb77d241d216acea53d6563626134efb98e881c3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mendeleev, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 17:59:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d89c354e42c66b347609f34995d65435dfd84c455e89ad083d775cb0ee8c9aa-merged.mount: Deactivated successfully.
Nov 25 17:59:45 compute-0 podman[464724]: 2025-11-25 17:59:45.718389999 +0000 UTC m=+0.185357161 container remove 8d96007fccf2611f6c2a67cfb77d241d216acea53d6563626134efb98e881c3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 17:59:45 compute-0 systemd[1]: libpod-conmon-8d96007fccf2611f6c2a67cfb77d241d216acea53d6563626134efb98e881c3d.scope: Deactivated successfully.
Nov 25 17:59:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:59:45 compute-0 podman[464765]: 2025-11-25 17:59:45.874290739 +0000 UTC m=+0.039368699 container create 22c61a35979d76ebe58a1ccac0be76e7ed69250067b9f7d4a23b898f02f7579d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 17:59:45 compute-0 systemd[1]: Started libpod-conmon-22c61a35979d76ebe58a1ccac0be76e7ed69250067b9f7d4a23b898f02f7579d.scope.
Nov 25 17:59:45 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:59:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61d5da5616b79b4b5baeab86cf324e73a670617dcbfc6c3653373d8eca0f830f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:59:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61d5da5616b79b4b5baeab86cf324e73a670617dcbfc6c3653373d8eca0f830f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:59:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61d5da5616b79b4b5baeab86cf324e73a670617dcbfc6c3653373d8eca0f830f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:59:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61d5da5616b79b4b5baeab86cf324e73a670617dcbfc6c3653373d8eca0f830f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:59:45 compute-0 podman[464765]: 2025-11-25 17:59:45.952418679 +0000 UTC m=+0.117496659 container init 22c61a35979d76ebe58a1ccac0be76e7ed69250067b9f7d4a23b898f02f7579d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:59:45 compute-0 podman[464765]: 2025-11-25 17:59:45.858091069 +0000 UTC m=+0.023169039 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:59:45 compute-0 podman[464765]: 2025-11-25 17:59:45.959984525 +0000 UTC m=+0.125062485 container start 22c61a35979d76ebe58a1ccac0be76e7ed69250067b9f7d4a23b898f02f7579d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_stonebraker, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:59:45 compute-0 podman[464765]: 2025-11-25 17:59:45.962686268 +0000 UTC m=+0.127764228 container attach 22c61a35979d76ebe58a1ccac0be76e7ed69250067b9f7d4a23b898f02f7579d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 17:59:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4109: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:46 compute-0 ceph-mon[74985]: pgmap v4109: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:46 compute-0 nova_compute[254092]: 2025-11-25 17:59:46.443 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]: {
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:     "0": [
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:         {
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:             "devices": [
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:                 "/dev/loop3"
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:             ],
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:             "lv_name": "ceph_lv0",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:             "lv_size": "21470642176",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:             "name": "ceph_lv0",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:             "tags": {
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:                 "ceph.cluster_name": "ceph",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:                 "ceph.crush_device_class": "",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:                 "ceph.encrypted": "0",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:                 "ceph.osd_id": "0",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:                 "ceph.type": "block",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:                 "ceph.vdo": "0"
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:             },
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:             "type": "block",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:             "vg_name": "ceph_vg0"
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:         }
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:     ],
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:     "1": [
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:         {
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:             "devices": [
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:                 "/dev/loop4"
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:             ],
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:             "lv_name": "ceph_lv1",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:             "lv_size": "21470642176",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:             "name": "ceph_lv1",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:             "tags": {
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:                 "ceph.cluster_name": "ceph",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:                 "ceph.crush_device_class": "",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:                 "ceph.encrypted": "0",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:                 "ceph.osd_id": "1",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:                 "ceph.type": "block",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:                 "ceph.vdo": "0"
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:             },
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:             "type": "block",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:             "vg_name": "ceph_vg1"
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:         }
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:     ],
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:     "2": [
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:         {
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:             "devices": [
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:                 "/dev/loop5"
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:             ],
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:             "lv_name": "ceph_lv2",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:             "lv_size": "21470642176",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:             "name": "ceph_lv2",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:             "tags": {
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:                 "ceph.cluster_name": "ceph",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:                 "ceph.crush_device_class": "",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:                 "ceph.encrypted": "0",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:                 "ceph.osd_id": "2",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:                 "ceph.type": "block",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:                 "ceph.vdo": "0"
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:             },
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:             "type": "block",
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:             "vg_name": "ceph_vg2"
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:         }
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]:     ]
Nov 25 17:59:46 compute-0 dreamy_stonebraker[464782]: }
Nov 25 17:59:46 compute-0 systemd[1]: libpod-22c61a35979d76ebe58a1ccac0be76e7ed69250067b9f7d4a23b898f02f7579d.scope: Deactivated successfully.
Nov 25 17:59:46 compute-0 podman[464765]: 2025-11-25 17:59:46.720692486 +0000 UTC m=+0.885770446 container died 22c61a35979d76ebe58a1ccac0be76e7ed69250067b9f7d4a23b898f02f7579d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:59:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-61d5da5616b79b4b5baeab86cf324e73a670617dcbfc6c3653373d8eca0f830f-merged.mount: Deactivated successfully.
Nov 25 17:59:46 compute-0 podman[464765]: 2025-11-25 17:59:46.772355708 +0000 UTC m=+0.937433668 container remove 22c61a35979d76ebe58a1ccac0be76e7ed69250067b9f7d4a23b898f02f7579d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 17:59:46 compute-0 systemd[1]: libpod-conmon-22c61a35979d76ebe58a1ccac0be76e7ed69250067b9f7d4a23b898f02f7579d.scope: Deactivated successfully.
Nov 25 17:59:46 compute-0 sudo[464659]: pam_unix(sudo:session): session closed for user root
Nov 25 17:59:46 compute-0 sudo[464805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:59:46 compute-0 sudo[464805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:59:46 compute-0 sudo[464805]: pam_unix(sudo:session): session closed for user root
Nov 25 17:59:46 compute-0 sudo[464830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 17:59:46 compute-0 sudo[464830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:59:46 compute-0 sudo[464830]: pam_unix(sudo:session): session closed for user root
Nov 25 17:59:47 compute-0 sudo[464855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:59:47 compute-0 sudo[464855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:59:47 compute-0 sudo[464855]: pam_unix(sudo:session): session closed for user root
Nov 25 17:59:47 compute-0 sudo[464880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 17:59:47 compute-0 sudo[464880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:59:47 compute-0 podman[464945]: 2025-11-25 17:59:47.440580159 +0000 UTC m=+0.047412497 container create aa6f9312714aad22020781fd0361109573929ab6b09fd718a08732fbc26d8f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:59:47 compute-0 systemd[1]: Started libpod-conmon-aa6f9312714aad22020781fd0361109573929ab6b09fd718a08732fbc26d8f26.scope.
Nov 25 17:59:47 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:59:47 compute-0 podman[464945]: 2025-11-25 17:59:47.421575253 +0000 UTC m=+0.028407571 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:59:47 compute-0 podman[464945]: 2025-11-25 17:59:47.528194356 +0000 UTC m=+0.135026654 container init aa6f9312714aad22020781fd0361109573929ab6b09fd718a08732fbc26d8f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 17:59:47 compute-0 podman[464945]: 2025-11-25 17:59:47.535799553 +0000 UTC m=+0.142631881 container start aa6f9312714aad22020781fd0361109573929ab6b09fd718a08732fbc26d8f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_goodall, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 17:59:47 compute-0 podman[464945]: 2025-11-25 17:59:47.539476282 +0000 UTC m=+0.146308600 container attach aa6f9312714aad22020781fd0361109573929ab6b09fd718a08732fbc26d8f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_goodall, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 17:59:47 compute-0 nice_goodall[464961]: 167 167
Nov 25 17:59:47 compute-0 systemd[1]: libpod-aa6f9312714aad22020781fd0361109573929ab6b09fd718a08732fbc26d8f26.scope: Deactivated successfully.
Nov 25 17:59:47 compute-0 conmon[464961]: conmon aa6f9312714aad220207 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aa6f9312714aad22020781fd0361109573929ab6b09fd718a08732fbc26d8f26.scope/container/memory.events
Nov 25 17:59:47 compute-0 podman[464945]: 2025-11-25 17:59:47.544937411 +0000 UTC m=+0.151769729 container died aa6f9312714aad22020781fd0361109573929ab6b09fd718a08732fbc26d8f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_goodall, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 17:59:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-882c959bb73b8af3ac7f47a872ec394082dcca84288800df7918bff07f85ece1-merged.mount: Deactivated successfully.
Nov 25 17:59:47 compute-0 podman[464945]: 2025-11-25 17:59:47.589077159 +0000 UTC m=+0.195909457 container remove aa6f9312714aad22020781fd0361109573929ab6b09fd718a08732fbc26d8f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_goodall, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 17:59:47 compute-0 systemd[1]: libpod-conmon-aa6f9312714aad22020781fd0361109573929ab6b09fd718a08732fbc26d8f26.scope: Deactivated successfully.
Nov 25 17:59:47 compute-0 podman[464983]: 2025-11-25 17:59:47.818044862 +0000 UTC m=+0.057901713 container create 0866a3c73a9eeadeb902ee0f8b9adff788ff152eaf50a49be3ee8775e0ffd5c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_margulis, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 17:59:47 compute-0 systemd[1]: Started libpod-conmon-0866a3c73a9eeadeb902ee0f8b9adff788ff152eaf50a49be3ee8775e0ffd5c6.scope.
Nov 25 17:59:47 compute-0 systemd[1]: Started libcrun container.
Nov 25 17:59:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c53f6ed0d95b3456e85092984950ca68a10e614d4863e6af175d300af1ebab9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 17:59:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c53f6ed0d95b3456e85092984950ca68a10e614d4863e6af175d300af1ebab9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 17:59:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c53f6ed0d95b3456e85092984950ca68a10e614d4863e6af175d300af1ebab9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 17:59:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c53f6ed0d95b3456e85092984950ca68a10e614d4863e6af175d300af1ebab9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 17:59:47 compute-0 podman[464983]: 2025-11-25 17:59:47.799665913 +0000 UTC m=+0.039522824 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 17:59:47 compute-0 podman[464983]: 2025-11-25 17:59:47.898400192 +0000 UTC m=+0.138257073 container init 0866a3c73a9eeadeb902ee0f8b9adff788ff152eaf50a49be3ee8775e0ffd5c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_margulis, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 17:59:47 compute-0 podman[464983]: 2025-11-25 17:59:47.904777335 +0000 UTC m=+0.144634186 container start 0866a3c73a9eeadeb902ee0f8b9adff788ff152eaf50a49be3ee8775e0ffd5c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_margulis, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 17:59:47 compute-0 podman[464983]: 2025-11-25 17:59:47.907957521 +0000 UTC m=+0.147814382 container attach 0866a3c73a9eeadeb902ee0f8b9adff788ff152eaf50a49be3ee8775e0ffd5c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_margulis, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 17:59:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4110: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:48 compute-0 ceph-mon[74985]: pgmap v4110: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:48 compute-0 nova_compute[254092]: 2025-11-25 17:59:48.875 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:59:48 compute-0 cool_margulis[464999]: {
Nov 25 17:59:48 compute-0 cool_margulis[464999]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 17:59:48 compute-0 cool_margulis[464999]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:59:48 compute-0 cool_margulis[464999]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 17:59:48 compute-0 cool_margulis[464999]:         "osd_id": 1,
Nov 25 17:59:48 compute-0 cool_margulis[464999]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 17:59:48 compute-0 cool_margulis[464999]:         "type": "bluestore"
Nov 25 17:59:48 compute-0 cool_margulis[464999]:     },
Nov 25 17:59:48 compute-0 cool_margulis[464999]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 17:59:48 compute-0 cool_margulis[464999]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:59:48 compute-0 cool_margulis[464999]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 17:59:48 compute-0 cool_margulis[464999]:         "osd_id": 2,
Nov 25 17:59:48 compute-0 cool_margulis[464999]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 17:59:48 compute-0 cool_margulis[464999]:         "type": "bluestore"
Nov 25 17:59:48 compute-0 cool_margulis[464999]:     },
Nov 25 17:59:48 compute-0 cool_margulis[464999]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 17:59:48 compute-0 cool_margulis[464999]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 17:59:48 compute-0 cool_margulis[464999]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 17:59:48 compute-0 cool_margulis[464999]:         "osd_id": 0,
Nov 25 17:59:48 compute-0 cool_margulis[464999]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 17:59:48 compute-0 cool_margulis[464999]:         "type": "bluestore"
Nov 25 17:59:48 compute-0 cool_margulis[464999]:     }
Nov 25 17:59:48 compute-0 cool_margulis[464999]: }
Nov 25 17:59:49 compute-0 systemd[1]: libpod-0866a3c73a9eeadeb902ee0f8b9adff788ff152eaf50a49be3ee8775e0ffd5c6.scope: Deactivated successfully.
Nov 25 17:59:49 compute-0 systemd[1]: libpod-0866a3c73a9eeadeb902ee0f8b9adff788ff152eaf50a49be3ee8775e0ffd5c6.scope: Consumed 1.108s CPU time.
Nov 25 17:59:49 compute-0 conmon[464999]: conmon 0866a3c73a9eeadeb902 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0866a3c73a9eeadeb902ee0f8b9adff788ff152eaf50a49be3ee8775e0ffd5c6.scope/container/memory.events
Nov 25 17:59:49 compute-0 podman[464983]: 2025-11-25 17:59:49.009334266 +0000 UTC m=+1.249191137 container died 0866a3c73a9eeadeb902ee0f8b9adff788ff152eaf50a49be3ee8775e0ffd5c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 17:59:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c53f6ed0d95b3456e85092984950ca68a10e614d4863e6af175d300af1ebab9-merged.mount: Deactivated successfully.
Nov 25 17:59:49 compute-0 podman[464983]: 2025-11-25 17:59:49.076224771 +0000 UTC m=+1.316081632 container remove 0866a3c73a9eeadeb902ee0f8b9adff788ff152eaf50a49be3ee8775e0ffd5c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_margulis, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 17:59:49 compute-0 systemd[1]: libpod-conmon-0866a3c73a9eeadeb902ee0f8b9adff788ff152eaf50a49be3ee8775e0ffd5c6.scope: Deactivated successfully.
Nov 25 17:59:49 compute-0 sudo[464880]: pam_unix(sudo:session): session closed for user root
Nov 25 17:59:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 17:59:49 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:59:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 17:59:49 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:59:49 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev e2d9804f-719f-4bf6-92da-03a7660c6e43 does not exist
Nov 25 17:59:49 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 4dc6530f-9d34-4b55-ab6f-b073f8be3cd7 does not exist
Nov 25 17:59:49 compute-0 sudo[465044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 17:59:49 compute-0 sudo[465044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:59:49 compute-0 sudo[465044]: pam_unix(sudo:session): session closed for user root
Nov 25 17:59:49 compute-0 sudo[465069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 17:59:49 compute-0 sudo[465069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 17:59:49 compute-0 sudo[465069]: pam_unix(sudo:session): session closed for user root
Nov 25 17:59:49 compute-0 nova_compute[254092]: 2025-11-25 17:59:49.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:59:50 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:59:50 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 17:59:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4111: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:50 compute-0 nova_compute[254092]: 2025-11-25 17:59:50.507 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 17:59:50 compute-0 nova_compute[254092]: 2025-11-25 17:59:50.508 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 17:59:50 compute-0 nova_compute[254092]: 2025-11-25 17:59:50.537 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 17:59:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:59:51 compute-0 ceph-mon[74985]: pgmap v4111: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:51 compute-0 nova_compute[254092]: 2025-11-25 17:59:51.447 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:59:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4112: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 17:59:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:59:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 17:59:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:59:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 17:59:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:59:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:59:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:59:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:59:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:59:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 17:59:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:59:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 17:59:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:59:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:59:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:59:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 17:59:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:59:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 17:59:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:59:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 17:59:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 17:59:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 17:59:53 compute-0 ceph-mon[74985]: pgmap v4112: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:53 compute-0 nova_compute[254092]: 2025-11-25 17:59:53.887 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:59:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4113: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:55 compute-0 ceph-mon[74985]: pgmap v4113: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 17:59:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2940417034' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:59:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 17:59:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2940417034' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:59:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 17:59:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4114: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:56 compute-0 nova_compute[254092]: 2025-11-25 17:59:56.485 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:59:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/2940417034' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 17:59:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/2940417034' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 17:59:57 compute-0 ceph-mon[74985]: pgmap v4114: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4115: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 17:59:58 compute-0 nova_compute[254092]: 2025-11-25 17:59:58.894 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 17:59:59 compute-0 ceph-mon[74985]: pgmap v4115: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4116: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:00:01 compute-0 nova_compute[254092]: 2025-11-25 18:00:01.488 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:00:01 compute-0 ceph-mon[74985]: pgmap v4116: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4117: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:03 compute-0 ceph-mon[74985]: pgmap v4117: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:03 compute-0 nova_compute[254092]: 2025-11-25 18:00:03.897 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:00:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4118: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:00:06 compute-0 ceph-mon[74985]: pgmap v4118: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4119: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:06 compute-0 nova_compute[254092]: 2025-11-25 18:00:06.490 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:00:08 compute-0 ceph-mon[74985]: pgmap v4119: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4120: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:08 compute-0 nova_compute[254092]: 2025-11-25 18:00:08.916 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:00:10 compute-0 ceph-mon[74985]: pgmap v4120: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:00:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:00:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:00:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:00:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:00:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:00:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4121: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:00:11 compute-0 nova_compute[254092]: 2025-11-25 18:00:11.492 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:00:12 compute-0 ceph-mon[74985]: pgmap v4121: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4122: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:13 compute-0 podman[465095]: 2025-11-25 18:00:13.685572095 +0000 UTC m=+0.088782150 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 18:00:13 compute-0 podman[465094]: 2025-11-25 18:00:13.694312063 +0000 UTC m=+0.107607682 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd)
Nov 25 18:00:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 18:00:13.705 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 18:00:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 18:00:13.706 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 18:00:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 18:00:13.706 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:00:13 compute-0 podman[465096]: 2025-11-25 18:00:13.769088412 +0000 UTC m=+0.165667307 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 18:00:13 compute-0 nova_compute[254092]: 2025-11-25 18:00:13.918 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:00:14 compute-0 ceph-mon[74985]: pgmap v4122: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4123: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:00:16 compute-0 ceph-mon[74985]: pgmap v4123: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4124: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:16 compute-0 nova_compute[254092]: 2025-11-25 18:00:16.494 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:00:18 compute-0 ceph-mon[74985]: pgmap v4124: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4125: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:18 compute-0 nova_compute[254092]: 2025-11-25 18:00:18.526 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:00:18 compute-0 nova_compute[254092]: 2025-11-25 18:00:18.974 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:00:19 compute-0 nova_compute[254092]: 2025-11-25 18:00:19.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:00:19 compute-0 sshd-session[465159]: Connection closed by authenticating user root 171.244.51.45 port 41548 [preauth]
Nov 25 18:00:20 compute-0 ceph-mon[74985]: pgmap v4125: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4126: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:00:21 compute-0 ceph-mon[74985]: pgmap v4126: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:21 compute-0 nova_compute[254092]: 2025-11-25 18:00:21.495 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:00:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4127: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:22 compute-0 nova_compute[254092]: 2025-11-25 18:00:22.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:00:23 compute-0 ceph-mon[74985]: pgmap v4127: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:23 compute-0 nova_compute[254092]: 2025-11-25 18:00:23.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:00:23 compute-0 nova_compute[254092]: 2025-11-25 18:00:23.978 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:00:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4128: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:25 compute-0 ceph-mon[74985]: pgmap v4128: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:00:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4129: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:26 compute-0 nova_compute[254092]: 2025-11-25 18:00:26.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:00:26 compute-0 nova_compute[254092]: 2025-11-25 18:00:26.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 18:00:26 compute-0 nova_compute[254092]: 2025-11-25 18:00:26.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 18:00:26 compute-0 nova_compute[254092]: 2025-11-25 18:00:26.498 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:00:26 compute-0 nova_compute[254092]: 2025-11-25 18:00:26.523 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 18:00:26 compute-0 nova_compute[254092]: 2025-11-25 18:00:26.524 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:00:26 compute-0 nova_compute[254092]: 2025-11-25 18:00:26.549 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 18:00:26 compute-0 nova_compute[254092]: 2025-11-25 18:00:26.550 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 18:00:26 compute-0 nova_compute[254092]: 2025-11-25 18:00:26.550 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:00:26 compute-0 nova_compute[254092]: 2025-11-25 18:00:26.550 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 18:00:26 compute-0 nova_compute[254092]: 2025-11-25 18:00:26.550 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 18:00:27 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 18:00:27 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3024088845' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:00:27 compute-0 nova_compute[254092]: 2025-11-25 18:00:27.563 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 18:00:27 compute-0 ceph-mon[74985]: pgmap v4129: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:27 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3024088845' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:00:27 compute-0 nova_compute[254092]: 2025-11-25 18:00:27.732 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 18:00:27 compute-0 nova_compute[254092]: 2025-11-25 18:00:27.734 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3616MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 18:00:27 compute-0 nova_compute[254092]: 2025-11-25 18:00:27.734 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 18:00:27 compute-0 nova_compute[254092]: 2025-11-25 18:00:27.734 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 18:00:27 compute-0 nova_compute[254092]: 2025-11-25 18:00:27.795 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 18:00:27 compute-0 nova_compute[254092]: 2025-11-25 18:00:27.796 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 18:00:27 compute-0 nova_compute[254092]: 2025-11-25 18:00:27.812 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 18:00:28 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 18:00:28 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3080625504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:00:28 compute-0 nova_compute[254092]: 2025-11-25 18:00:28.232 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 18:00:28 compute-0 nova_compute[254092]: 2025-11-25 18:00:28.239 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 18:00:28 compute-0 nova_compute[254092]: 2025-11-25 18:00:28.254 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 18:00:28 compute-0 nova_compute[254092]: 2025-11-25 18:00:28.255 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 18:00:28 compute-0 nova_compute[254092]: 2025-11-25 18:00:28.255 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.521s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:00:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4130: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:28 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3080625504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:00:28 compute-0 nova_compute[254092]: 2025-11-25 18:00:28.982 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:00:30 compute-0 ceph-mon[74985]: pgmap v4130: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:30 compute-0 nova_compute[254092]: 2025-11-25 18:00:30.227 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:00:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4131: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:00:31 compute-0 nova_compute[254092]: 2025-11-25 18:00:31.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:00:31 compute-0 nova_compute[254092]: 2025-11-25 18:00:31.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 18:00:31 compute-0 nova_compute[254092]: 2025-11-25 18:00:31.499 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:00:32 compute-0 ceph-mon[74985]: pgmap v4131: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4132: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:34 compute-0 nova_compute[254092]: 2025-11-25 18:00:34.034 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:00:34 compute-0 ceph-mon[74985]: pgmap v4132: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4133: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:00:36 compute-0 ceph-mon[74985]: pgmap v4133: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4134: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:36 compute-0 nova_compute[254092]: 2025-11-25 18:00:36.502 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:00:37 compute-0 ceph-mon[74985]: pgmap v4134: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4135: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:39 compute-0 nova_compute[254092]: 2025-11-25 18:00:39.071 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:00:39 compute-0 ceph-mon[74985]: pgmap v4135: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:39 compute-0 nova_compute[254092]: 2025-11-25 18:00:39.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:00:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:00:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:00:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:00:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:00:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:00:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:00:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4136: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_18:00:40
Nov 25 18:00:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 18:00:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 18:00:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['volumes', 'images', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'backups', 'cephfs.cephfs.data']
Nov 25 18:00:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 18:00:40 compute-0 nova_compute[254092]: 2025-11-25 18:00:40.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:00:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 18:00:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:00:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:00:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:00:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:00:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:00:40 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #201. Immutable memtables: 0.
Nov 25 18:00:40 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:00:40.911517) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 18:00:40 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 125] Flushing memtable with next log file: 201
Nov 25 18:00:40 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093640911556, "job": 125, "event": "flush_started", "num_memtables": 1, "num_entries": 930, "num_deletes": 255, "total_data_size": 1270155, "memory_usage": 1292968, "flush_reason": "Manual Compaction"}
Nov 25 18:00:40 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 125] Level-0 flush table #202: started
Nov 25 18:00:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 18:00:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:00:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:00:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:00:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:00:40 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093640959176, "cf_name": "default", "job": 125, "event": "table_file_creation", "file_number": 202, "file_size": 1258100, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 84360, "largest_seqno": 85289, "table_properties": {"data_size": 1253480, "index_size": 2207, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 9852, "raw_average_key_size": 19, "raw_value_size": 1244236, "raw_average_value_size": 2415, "num_data_blocks": 99, "num_entries": 515, "num_filter_entries": 515, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764093554, "oldest_key_time": 1764093554, "file_creation_time": 1764093640, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 202, "seqno_to_time_mapping": "N/A"}}
Nov 25 18:00:40 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 125] Flush lasted 47700 microseconds, and 4298 cpu microseconds.
Nov 25 18:00:40 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 18:00:40 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:00:40.959216) [db/flush_job.cc:967] [default] [JOB 125] Level-0 flush table #202: 1258100 bytes OK
Nov 25 18:00:40 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:00:40.959234) [db/memtable_list.cc:519] [default] Level-0 commit table #202 started
Nov 25 18:00:40 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:00:40.962472) [db/memtable_list.cc:722] [default] Level-0 commit table #202: memtable #1 done
Nov 25 18:00:40 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:00:40.962485) EVENT_LOG_v1 {"time_micros": 1764093640962481, "job": 125, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 18:00:40 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:00:40.962502) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 18:00:40 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 125] Try to delete WAL files size 1265670, prev total WAL file size 1265670, number of live WAL files 2.
Nov 25 18:00:40 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000198.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:00:40 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:00:40.962970) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033373639' seq:72057594037927935, type:22 .. '6C6F676D0034303230' seq:0, type:0; will stop at (end)
Nov 25 18:00:40 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 126] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 18:00:40 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 125 Base level 0, inputs: [202(1228KB)], [200(9142KB)]
Nov 25 18:00:40 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093640963026, "job": 126, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [202], "files_L6": [200], "score": -1, "input_data_size": 10619657, "oldest_snapshot_seqno": -1}
Nov 25 18:00:41 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 126] Generated table #203: 9794 keys, 10520083 bytes, temperature: kUnknown
Nov 25 18:00:41 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093641129769, "cf_name": "default", "job": 126, "event": "table_file_creation", "file_number": 203, "file_size": 10520083, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10460620, "index_size": 33878, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24517, "raw_key_size": 259715, "raw_average_key_size": 26, "raw_value_size": 10291483, "raw_average_value_size": 1050, "num_data_blocks": 1297, "num_entries": 9794, "num_filter_entries": 9794, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764093640, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 203, "seqno_to_time_mapping": "N/A"}}
Nov 25 18:00:41 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 18:00:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:00:41.130183) [db/compaction/compaction_job.cc:1663] [default] [JOB 126] Compacted 1@0 + 1@6 files to L6 => 10520083 bytes
Nov 25 18:00:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:00:41.134997) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 63.6 rd, 63.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 8.9 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(16.8) write-amplify(8.4) OK, records in: 10316, records dropped: 522 output_compression: NoCompression
Nov 25 18:00:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:00:41.135018) EVENT_LOG_v1 {"time_micros": 1764093641135010, "job": 126, "event": "compaction_finished", "compaction_time_micros": 166970, "compaction_time_cpu_micros": 26357, "output_level": 6, "num_output_files": 1, "total_output_size": 10520083, "num_input_records": 10316, "num_output_records": 9794, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 18:00:41 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000202.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:00:41 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093641135322, "job": 126, "event": "table_file_deletion", "file_number": 202}
Nov 25 18:00:41 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000200.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:00:41 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093641137066, "job": 126, "event": "table_file_deletion", "file_number": 200}
Nov 25 18:00:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:00:40.962901) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:00:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:00:41.137200) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:00:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:00:41.137207) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:00:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:00:41.137208) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:00:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:00:41.137210) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:00:41 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:00:41.137212) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:00:41 compute-0 nova_compute[254092]: 2025-11-25 18:00:41.504 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:00:42 compute-0 ceph-mon[74985]: pgmap v4136: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4137: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:44 compute-0 nova_compute[254092]: 2025-11-25 18:00:44.074 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:00:44 compute-0 ceph-mon[74985]: pgmap v4137: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4138: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:44 compute-0 podman[465206]: 2025-11-25 18:00:44.64017533 +0000 UTC m=+0.049554756 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Nov 25 18:00:44 compute-0 podman[465205]: 2025-11-25 18:00:44.651379104 +0000 UTC m=+0.063965797 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251118, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 18:00:44 compute-0 podman[465207]: 2025-11-25 18:00:44.698333627 +0000 UTC m=+0.105778170 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 18:00:45 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:00:46 compute-0 ceph-mon[74985]: pgmap v4138: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4139: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:46 compute-0 nova_compute[254092]: 2025-11-25 18:00:46.505 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:00:48 compute-0 ceph-mon[74985]: pgmap v4139: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4140: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:49 compute-0 nova_compute[254092]: 2025-11-25 18:00:49.102 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:00:49 compute-0 sudo[465269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:00:49 compute-0 sudo[465269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:00:49 compute-0 sudo[465269]: pam_unix(sudo:session): session closed for user root
Nov 25 18:00:49 compute-0 sudo[465294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:00:49 compute-0 sudo[465294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:00:49 compute-0 sudo[465294]: pam_unix(sudo:session): session closed for user root
Nov 25 18:00:49 compute-0 sudo[465319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:00:49 compute-0 sudo[465319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:00:49 compute-0 sudo[465319]: pam_unix(sudo:session): session closed for user root
Nov 25 18:00:49 compute-0 sudo[465344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 25 18:00:49 compute-0 sudo[465344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:00:50 compute-0 podman[465440]: 2025-11-25 18:00:50.037225484 +0000 UTC m=+0.088141172 container exec 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:00:50 compute-0 podman[465440]: 2025-11-25 18:00:50.13435603 +0000 UTC m=+0.185271708 container exec_died 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 18:00:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4141: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:50 compute-0 ceph-mon[74985]: pgmap v4140: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:00:51 compute-0 nova_compute[254092]: 2025-11-25 18:00:51.594 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:00:51 compute-0 sudo[465344]: pam_unix(sudo:session): session closed for user root
Nov 25 18:00:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:00:52 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:00:52 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:00:52 compute-0 ceph-mon[74985]: pgmap v4141: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4142: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 18:00:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:00:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 18:00:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:00:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 18:00:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:00:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:00:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:00:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:00:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:00:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 18:00:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:00:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 18:00:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:00:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:00:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:00:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 18:00:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:00:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 18:00:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:00:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:00:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:00:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 18:00:52 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:00:52 compute-0 sudo[465593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:00:52 compute-0 sudo[465593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:00:52 compute-0 sudo[465593]: pam_unix(sudo:session): session closed for user root
Nov 25 18:00:52 compute-0 sudo[465618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:00:52 compute-0 sudo[465618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:00:52 compute-0 sudo[465618]: pam_unix(sudo:session): session closed for user root
Nov 25 18:00:52 compute-0 sudo[465643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:00:52 compute-0 sudo[465643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:00:52 compute-0 sudo[465643]: pam_unix(sudo:session): session closed for user root
Nov 25 18:00:52 compute-0 sudo[465668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 18:00:52 compute-0 sudo[465668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:00:53 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:00:53 compute-0 ceph-mon[74985]: pgmap v4142: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:53 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:00:53 compute-0 sudo[465668]: pam_unix(sudo:session): session closed for user root
Nov 25 18:00:53 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:00:53 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:00:53 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 18:00:53 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:00:53 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 18:00:53 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:00:53 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev ffa0e4cb-95fe-4786-9d56-dcc87ba90dd9 does not exist
Nov 25 18:00:53 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev c441c53c-4bb7-48eb-98f2-922b1e73c2c0 does not exist
Nov 25 18:00:53 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 5e2a8684-05f1-4f01-a52b-720ee8726403 does not exist
Nov 25 18:00:53 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 18:00:53 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:00:53 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 18:00:53 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:00:53 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:00:53 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:00:53 compute-0 sudo[465724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:00:53 compute-0 sudo[465724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:00:53 compute-0 sudo[465724]: pam_unix(sudo:session): session closed for user root
Nov 25 18:00:53 compute-0 sudo[465749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:00:53 compute-0 sudo[465749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:00:53 compute-0 sudo[465749]: pam_unix(sudo:session): session closed for user root
Nov 25 18:00:53 compute-0 sudo[465774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:00:53 compute-0 sudo[465774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:00:53 compute-0 sudo[465774]: pam_unix(sudo:session): session closed for user root
Nov 25 18:00:53 compute-0 sudo[465799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 18:00:53 compute-0 sudo[465799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:00:54 compute-0 podman[465865]: 2025-11-25 18:00:54.032322468 +0000 UTC m=+0.088088321 container create 8d2323bca6d74cf08328d2026eb15ed1b2bcd13852fe9e8123cebb0d092190db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_wilson, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 18:00:54 compute-0 podman[465865]: 2025-11-25 18:00:53.977921022 +0000 UTC m=+0.033686855 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:00:54 compute-0 nova_compute[254092]: 2025-11-25 18:00:54.104 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:00:54 compute-0 systemd[1]: Started libpod-conmon-8d2323bca6d74cf08328d2026eb15ed1b2bcd13852fe9e8123cebb0d092190db.scope.
Nov 25 18:00:54 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:00:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4143: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:54 compute-0 podman[465865]: 2025-11-25 18:00:54.422166945 +0000 UTC m=+0.477932848 container init 8d2323bca6d74cf08328d2026eb15ed1b2bcd13852fe9e8123cebb0d092190db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_wilson, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:00:54 compute-0 podman[465865]: 2025-11-25 18:00:54.433119243 +0000 UTC m=+0.488885056 container start 8d2323bca6d74cf08328d2026eb15ed1b2bcd13852fe9e8123cebb0d092190db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_wilson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:00:54 compute-0 systemd[1]: libpod-8d2323bca6d74cf08328d2026eb15ed1b2bcd13852fe9e8123cebb0d092190db.scope: Deactivated successfully.
Nov 25 18:00:54 compute-0 silly_wilson[465882]: 167 167
Nov 25 18:00:54 compute-0 conmon[465882]: conmon 8d2323bca6d74cf08328 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8d2323bca6d74cf08328d2026eb15ed1b2bcd13852fe9e8123cebb0d092190db.scope/container/memory.events
Nov 25 18:00:54 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:00:54 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:00:54 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:00:54 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:00:54 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:00:54 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:00:54 compute-0 podman[465865]: 2025-11-25 18:00:54.480875259 +0000 UTC m=+0.536641072 container attach 8d2323bca6d74cf08328d2026eb15ed1b2bcd13852fe9e8123cebb0d092190db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_wilson, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:00:54 compute-0 podman[465865]: 2025-11-25 18:00:54.481410263 +0000 UTC m=+0.537176076 container died 8d2323bca6d74cf08328d2026eb15ed1b2bcd13852fe9e8123cebb0d092190db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_wilson, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 18:00:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-da949b1ec941125a0c97dd7e23205ffca60ddb25ed9da9729bfbb563e5862428-merged.mount: Deactivated successfully.
Nov 25 18:00:54 compute-0 podman[465865]: 2025-11-25 18:00:54.59037394 +0000 UTC m=+0.646139753 container remove 8d2323bca6d74cf08328d2026eb15ed1b2bcd13852fe9e8123cebb0d092190db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_wilson, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Nov 25 18:00:54 compute-0 systemd[1]: libpod-conmon-8d2323bca6d74cf08328d2026eb15ed1b2bcd13852fe9e8123cebb0d092190db.scope: Deactivated successfully.
Nov 25 18:00:54 compute-0 podman[465906]: 2025-11-25 18:00:54.795724283 +0000 UTC m=+0.038847086 container create 0e2f151e2cc47a7016b9a11dbebb248179b871c3c1c8b4dd6a3d641a71ed547f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wilbur, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 18:00:54 compute-0 systemd[1]: Started libpod-conmon-0e2f151e2cc47a7016b9a11dbebb248179b871c3c1c8b4dd6a3d641a71ed547f.scope.
Nov 25 18:00:54 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:00:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/663b65db7604295ce446c12eb58e119ef502c72c9c67752f3bbd68e874413bdc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:00:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/663b65db7604295ce446c12eb58e119ef502c72c9c67752f3bbd68e874413bdc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:00:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/663b65db7604295ce446c12eb58e119ef502c72c9c67752f3bbd68e874413bdc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:00:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/663b65db7604295ce446c12eb58e119ef502c72c9c67752f3bbd68e874413bdc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:00:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/663b65db7604295ce446c12eb58e119ef502c72c9c67752f3bbd68e874413bdc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:00:54 compute-0 podman[465906]: 2025-11-25 18:00:54.779250375 +0000 UTC m=+0.022373198 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:00:54 compute-0 podman[465906]: 2025-11-25 18:00:54.88188008 +0000 UTC m=+0.125002913 container init 0e2f151e2cc47a7016b9a11dbebb248179b871c3c1c8b4dd6a3d641a71ed547f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:00:54 compute-0 podman[465906]: 2025-11-25 18:00:54.895609202 +0000 UTC m=+0.138732005 container start 0e2f151e2cc47a7016b9a11dbebb248179b871c3c1c8b4dd6a3d641a71ed547f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wilbur, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 18:00:54 compute-0 podman[465906]: 2025-11-25 18:00:54.899528889 +0000 UTC m=+0.142651692 container attach 0e2f151e2cc47a7016b9a11dbebb248179b871c3c1c8b4dd6a3d641a71ed547f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wilbur, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:00:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 18:00:55 compute-0 ceph-mon[74985]: pgmap v4143: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/973916735' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 18:00:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 18:00:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/973916735' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 18:00:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:00:55 compute-0 kind_wilbur[465922]: --> passed data devices: 0 physical, 3 LVM
Nov 25 18:00:55 compute-0 kind_wilbur[465922]: --> relative data size: 1.0
Nov 25 18:00:55 compute-0 kind_wilbur[465922]: --> All data devices are unavailable
Nov 25 18:00:55 compute-0 systemd[1]: libpod-0e2f151e2cc47a7016b9a11dbebb248179b871c3c1c8b4dd6a3d641a71ed547f.scope: Deactivated successfully.
Nov 25 18:00:55 compute-0 systemd[1]: libpod-0e2f151e2cc47a7016b9a11dbebb248179b871c3c1c8b4dd6a3d641a71ed547f.scope: Consumed 1.021s CPU time.
Nov 25 18:00:55 compute-0 podman[465906]: 2025-11-25 18:00:55.967086466 +0000 UTC m=+1.210209269 container died 0e2f151e2cc47a7016b9a11dbebb248179b871c3c1c8b4dd6a3d641a71ed547f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:00:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-663b65db7604295ce446c12eb58e119ef502c72c9c67752f3bbd68e874413bdc-merged.mount: Deactivated successfully.
Nov 25 18:00:56 compute-0 podman[465906]: 2025-11-25 18:00:56.035897903 +0000 UTC m=+1.279020706 container remove 0e2f151e2cc47a7016b9a11dbebb248179b871c3c1c8b4dd6a3d641a71ed547f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wilbur, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 18:00:56 compute-0 systemd[1]: libpod-conmon-0e2f151e2cc47a7016b9a11dbebb248179b871c3c1c8b4dd6a3d641a71ed547f.scope: Deactivated successfully.
Nov 25 18:00:56 compute-0 sudo[465799]: pam_unix(sudo:session): session closed for user root
Nov 25 18:00:56 compute-0 sudo[465963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:00:56 compute-0 sudo[465963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:00:56 compute-0 sudo[465963]: pam_unix(sudo:session): session closed for user root
Nov 25 18:00:56 compute-0 sudo[465988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:00:56 compute-0 sudo[465988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:00:56 compute-0 sudo[465988]: pam_unix(sudo:session): session closed for user root
Nov 25 18:00:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4144: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:56 compute-0 sudo[466013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:00:56 compute-0 sudo[466013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:00:56 compute-0 sudo[466013]: pam_unix(sudo:session): session closed for user root
Nov 25 18:00:56 compute-0 sudo[466038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 18:00:56 compute-0 sudo[466038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:00:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/973916735' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 18:00:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/973916735' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 18:00:56 compute-0 nova_compute[254092]: 2025-11-25 18:00:56.595 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:00:56 compute-0 podman[466103]: 2025-11-25 18:00:56.731913619 +0000 UTC m=+0.021697670 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:00:56 compute-0 podman[466103]: 2025-11-25 18:00:56.88708573 +0000 UTC m=+0.176869811 container create 4c58dd0ce50a5a488c49f636f287640e37261ab7d6185051d6f883716e55504f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_banach, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 18:00:57 compute-0 systemd[1]: Started libpod-conmon-4c58dd0ce50a5a488c49f636f287640e37261ab7d6185051d6f883716e55504f.scope.
Nov 25 18:00:57 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:00:57 compute-0 podman[466103]: 2025-11-25 18:00:57.237008654 +0000 UTC m=+0.526792705 container init 4c58dd0ce50a5a488c49f636f287640e37261ab7d6185051d6f883716e55504f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:00:57 compute-0 podman[466103]: 2025-11-25 18:00:57.249971496 +0000 UTC m=+0.539755567 container start 4c58dd0ce50a5a488c49f636f287640e37261ab7d6185051d6f883716e55504f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:00:57 compute-0 naughty_banach[466120]: 167 167
Nov 25 18:00:57 compute-0 systemd[1]: libpod-4c58dd0ce50a5a488c49f636f287640e37261ab7d6185051d6f883716e55504f.scope: Deactivated successfully.
Nov 25 18:00:57 compute-0 podman[466103]: 2025-11-25 18:00:57.60096222 +0000 UTC m=+0.890746291 container attach 4c58dd0ce50a5a488c49f636f287640e37261ab7d6185051d6f883716e55504f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_banach, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Nov 25 18:00:57 compute-0 podman[466103]: 2025-11-25 18:00:57.601752471 +0000 UTC m=+0.891536512 container died 4c58dd0ce50a5a488c49f636f287640e37261ab7d6185051d6f883716e55504f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_banach, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:00:57 compute-0 ceph-mon[74985]: pgmap v4144: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-e12758bbfd5fe5331cc3a74a7eb68890d142137527505a0438cffa9ca099ef88-merged.mount: Deactivated successfully.
Nov 25 18:00:57 compute-0 podman[466103]: 2025-11-25 18:00:57.878366277 +0000 UTC m=+1.168150308 container remove 4c58dd0ce50a5a488c49f636f287640e37261ab7d6185051d6f883716e55504f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 18:00:57 compute-0 systemd[1]: libpod-conmon-4c58dd0ce50a5a488c49f636f287640e37261ab7d6185051d6f883716e55504f.scope: Deactivated successfully.
Nov 25 18:00:58 compute-0 podman[466144]: 2025-11-25 18:00:58.051673669 +0000 UTC m=+0.025444181 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:00:58 compute-0 podman[466144]: 2025-11-25 18:00:58.16813621 +0000 UTC m=+0.141906652 container create 0afaa07a307f0e326bbb60c877cb0c6b533bcae8597af0d12f9daf2b8ac68f36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_williamson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 18:00:58 compute-0 systemd[1]: Started libpod-conmon-0afaa07a307f0e326bbb60c877cb0c6b533bcae8597af0d12f9daf2b8ac68f36.scope.
Nov 25 18:00:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4145: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:00:58 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:00:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e871238ad96508e82a9d0d630aa6eb1404dd6ba98597bacdd67f3266b0430af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:00:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e871238ad96508e82a9d0d630aa6eb1404dd6ba98597bacdd67f3266b0430af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:00:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e871238ad96508e82a9d0d630aa6eb1404dd6ba98597bacdd67f3266b0430af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:00:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e871238ad96508e82a9d0d630aa6eb1404dd6ba98597bacdd67f3266b0430af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:00:58 compute-0 podman[466144]: 2025-11-25 18:00:58.350288322 +0000 UTC m=+0.324058794 container init 0afaa07a307f0e326bbb60c877cb0c6b533bcae8597af0d12f9daf2b8ac68f36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_williamson, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:00:58 compute-0 podman[466144]: 2025-11-25 18:00:58.365223337 +0000 UTC m=+0.338993779 container start 0afaa07a307f0e326bbb60c877cb0c6b533bcae8597af0d12f9daf2b8ac68f36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_williamson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 18:00:58 compute-0 podman[466144]: 2025-11-25 18:00:58.449225937 +0000 UTC m=+0.422996339 container attach 0afaa07a307f0e326bbb60c877cb0c6b533bcae8597af0d12f9daf2b8ac68f36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 18:00:59 compute-0 nova_compute[254092]: 2025-11-25 18:00:59.108 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:00:59 compute-0 objective_williamson[466161]: {
Nov 25 18:00:59 compute-0 objective_williamson[466161]:     "0": [
Nov 25 18:00:59 compute-0 objective_williamson[466161]:         {
Nov 25 18:00:59 compute-0 objective_williamson[466161]:             "devices": [
Nov 25 18:00:59 compute-0 objective_williamson[466161]:                 "/dev/loop3"
Nov 25 18:00:59 compute-0 objective_williamson[466161]:             ],
Nov 25 18:00:59 compute-0 objective_williamson[466161]:             "lv_name": "ceph_lv0",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:             "lv_size": "21470642176",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:             "name": "ceph_lv0",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:             "tags": {
Nov 25 18:00:59 compute-0 objective_williamson[466161]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:                 "ceph.cluster_name": "ceph",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:                 "ceph.crush_device_class": "",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:                 "ceph.encrypted": "0",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:                 "ceph.osd_id": "0",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:                 "ceph.type": "block",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:                 "ceph.vdo": "0"
Nov 25 18:00:59 compute-0 objective_williamson[466161]:             },
Nov 25 18:00:59 compute-0 objective_williamson[466161]:             "type": "block",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:             "vg_name": "ceph_vg0"
Nov 25 18:00:59 compute-0 objective_williamson[466161]:         }
Nov 25 18:00:59 compute-0 objective_williamson[466161]:     ],
Nov 25 18:00:59 compute-0 objective_williamson[466161]:     "1": [
Nov 25 18:00:59 compute-0 objective_williamson[466161]:         {
Nov 25 18:00:59 compute-0 objective_williamson[466161]:             "devices": [
Nov 25 18:00:59 compute-0 objective_williamson[466161]:                 "/dev/loop4"
Nov 25 18:00:59 compute-0 objective_williamson[466161]:             ],
Nov 25 18:00:59 compute-0 objective_williamson[466161]:             "lv_name": "ceph_lv1",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:             "lv_size": "21470642176",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:             "name": "ceph_lv1",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:             "tags": {
Nov 25 18:00:59 compute-0 objective_williamson[466161]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:                 "ceph.cluster_name": "ceph",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:                 "ceph.crush_device_class": "",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:                 "ceph.encrypted": "0",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:                 "ceph.osd_id": "1",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:                 "ceph.type": "block",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:                 "ceph.vdo": "0"
Nov 25 18:00:59 compute-0 objective_williamson[466161]:             },
Nov 25 18:00:59 compute-0 objective_williamson[466161]:             "type": "block",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:             "vg_name": "ceph_vg1"
Nov 25 18:00:59 compute-0 objective_williamson[466161]:         }
Nov 25 18:00:59 compute-0 objective_williamson[466161]:     ],
Nov 25 18:00:59 compute-0 objective_williamson[466161]:     "2": [
Nov 25 18:00:59 compute-0 objective_williamson[466161]:         {
Nov 25 18:00:59 compute-0 objective_williamson[466161]:             "devices": [
Nov 25 18:00:59 compute-0 objective_williamson[466161]:                 "/dev/loop5"
Nov 25 18:00:59 compute-0 objective_williamson[466161]:             ],
Nov 25 18:00:59 compute-0 objective_williamson[466161]:             "lv_name": "ceph_lv2",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:             "lv_size": "21470642176",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:             "name": "ceph_lv2",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:             "tags": {
Nov 25 18:00:59 compute-0 objective_williamson[466161]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:                 "ceph.cluster_name": "ceph",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:                 "ceph.crush_device_class": "",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:                 "ceph.encrypted": "0",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:                 "ceph.osd_id": "2",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:                 "ceph.type": "block",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:                 "ceph.vdo": "0"
Nov 25 18:00:59 compute-0 objective_williamson[466161]:             },
Nov 25 18:00:59 compute-0 objective_williamson[466161]:             "type": "block",
Nov 25 18:00:59 compute-0 objective_williamson[466161]:             "vg_name": "ceph_vg2"
Nov 25 18:00:59 compute-0 objective_williamson[466161]:         }
Nov 25 18:00:59 compute-0 objective_williamson[466161]:     ]
Nov 25 18:00:59 compute-0 objective_williamson[466161]: }
Nov 25 18:00:59 compute-0 systemd[1]: libpod-0afaa07a307f0e326bbb60c877cb0c6b533bcae8597af0d12f9daf2b8ac68f36.scope: Deactivated successfully.
Nov 25 18:00:59 compute-0 podman[466144]: 2025-11-25 18:00:59.234952546 +0000 UTC m=+1.208723058 container died 0afaa07a307f0e326bbb60c877cb0c6b533bcae8597af0d12f9daf2b8ac68f36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_williamson, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:00:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e871238ad96508e82a9d0d630aa6eb1404dd6ba98597bacdd67f3266b0430af-merged.mount: Deactivated successfully.
Nov 25 18:00:59 compute-0 podman[466144]: 2025-11-25 18:00:59.354289855 +0000 UTC m=+1.328060257 container remove 0afaa07a307f0e326bbb60c877cb0c6b533bcae8597af0d12f9daf2b8ac68f36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_williamson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:00:59 compute-0 systemd[1]: libpod-conmon-0afaa07a307f0e326bbb60c877cb0c6b533bcae8597af0d12f9daf2b8ac68f36.scope: Deactivated successfully.
Nov 25 18:00:59 compute-0 sudo[466038]: pam_unix(sudo:session): session closed for user root
Nov 25 18:00:59 compute-0 sudo[466184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:00:59 compute-0 sudo[466184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:00:59 compute-0 sudo[466184]: pam_unix(sudo:session): session closed for user root
Nov 25 18:00:59 compute-0 sudo[466209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:00:59 compute-0 sudo[466209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:00:59 compute-0 sudo[466209]: pam_unix(sudo:session): session closed for user root
Nov 25 18:00:59 compute-0 sudo[466234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:00:59 compute-0 sudo[466234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:00:59 compute-0 sudo[466234]: pam_unix(sudo:session): session closed for user root
Nov 25 18:00:59 compute-0 sudo[466259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 18:00:59 compute-0 sudo[466259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:00:59 compute-0 ceph-mon[74985]: pgmap v4145: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:00 compute-0 podman[466325]: 2025-11-25 18:01:00.0370209 +0000 UTC m=+0.039993927 container create fe883c09372d3a40a86a4cc19667a4ebb6bcfbb80d481aaca71349756c2870bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_khorana, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:01:00 compute-0 systemd[1]: Started libpod-conmon-fe883c09372d3a40a86a4cc19667a4ebb6bcfbb80d481aaca71349756c2870bd.scope.
Nov 25 18:01:00 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:01:00 compute-0 podman[466325]: 2025-11-25 18:01:00.111293075 +0000 UTC m=+0.114266112 container init fe883c09372d3a40a86a4cc19667a4ebb6bcfbb80d481aaca71349756c2870bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:01:00 compute-0 podman[466325]: 2025-11-25 18:01:00.016576495 +0000 UTC m=+0.019549562 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:01:00 compute-0 podman[466325]: 2025-11-25 18:01:00.123362582 +0000 UTC m=+0.126335629 container start fe883c09372d3a40a86a4cc19667a4ebb6bcfbb80d481aaca71349756c2870bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:01:00 compute-0 intelligent_khorana[466342]: 167 167
Nov 25 18:01:00 compute-0 podman[466325]: 2025-11-25 18:01:00.129730726 +0000 UTC m=+0.132703813 container attach fe883c09372d3a40a86a4cc19667a4ebb6bcfbb80d481aaca71349756c2870bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_khorana, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:01:00 compute-0 systemd[1]: libpod-fe883c09372d3a40a86a4cc19667a4ebb6bcfbb80d481aaca71349756c2870bd.scope: Deactivated successfully.
Nov 25 18:01:00 compute-0 podman[466325]: 2025-11-25 18:01:00.131575225 +0000 UTC m=+0.134548272 container died fe883c09372d3a40a86a4cc19667a4ebb6bcfbb80d481aaca71349756c2870bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_khorana, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:01:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc63579565e7e4cff7554d7cbbbfab95b4a8dd5d8be9422c763b2ecfa66e94c3-merged.mount: Deactivated successfully.
Nov 25 18:01:00 compute-0 podman[466325]: 2025-11-25 18:01:00.187587305 +0000 UTC m=+0.190560342 container remove fe883c09372d3a40a86a4cc19667a4ebb6bcfbb80d481aaca71349756c2870bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_khorana, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:01:00 compute-0 systemd[1]: libpod-conmon-fe883c09372d3a40a86a4cc19667a4ebb6bcfbb80d481aaca71349756c2870bd.scope: Deactivated successfully.
Nov 25 18:01:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4146: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:00 compute-0 podman[466365]: 2025-11-25 18:01:00.4155044 +0000 UTC m=+0.079333484 container create 6f5397653d61d98b3a98317ba7e17f36a914f201109e3a0e16b4ed5b33d6ea37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_turing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 18:01:00 compute-0 podman[466365]: 2025-11-25 18:01:00.357042174 +0000 UTC m=+0.020871268 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:01:00 compute-0 systemd[1]: Started libpod-conmon-6f5397653d61d98b3a98317ba7e17f36a914f201109e3a0e16b4ed5b33d6ea37.scope.
Nov 25 18:01:00 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:01:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7480863634aa3749b5c9bfa0353ca07e598a01b86ea7e1432227216658771ef3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:01:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7480863634aa3749b5c9bfa0353ca07e598a01b86ea7e1432227216658771ef3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:01:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7480863634aa3749b5c9bfa0353ca07e598a01b86ea7e1432227216658771ef3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:01:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7480863634aa3749b5c9bfa0353ca07e598a01b86ea7e1432227216658771ef3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:01:00 compute-0 podman[466365]: 2025-11-25 18:01:00.59461024 +0000 UTC m=+0.258439314 container init 6f5397653d61d98b3a98317ba7e17f36a914f201109e3a0e16b4ed5b33d6ea37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_turing, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:01:00 compute-0 podman[466365]: 2025-11-25 18:01:00.602594857 +0000 UTC m=+0.266423931 container start 6f5397653d61d98b3a98317ba7e17f36a914f201109e3a0e16b4ed5b33d6ea37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_turing, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:01:00 compute-0 podman[466365]: 2025-11-25 18:01:00.631359507 +0000 UTC m=+0.295188581 container attach 6f5397653d61d98b3a98317ba7e17f36a914f201109e3a0e16b4ed5b33d6ea37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 18:01:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:01:01 compute-0 CROND[466390]: (root) CMD (run-parts /etc/cron.hourly)
Nov 25 18:01:01 compute-0 run-parts[466393]: (/etc/cron.hourly) starting 0anacron
Nov 25 18:01:01 compute-0 run-parts[466401]: (/etc/cron.hourly) finished 0anacron
Nov 25 18:01:01 compute-0 CROND[466388]: (root) CMDEND (run-parts /etc/cron.hourly)
Nov 25 18:01:01 compute-0 quirky_turing[466382]: {
Nov 25 18:01:01 compute-0 quirky_turing[466382]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 18:01:01 compute-0 quirky_turing[466382]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:01:01 compute-0 quirky_turing[466382]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 18:01:01 compute-0 quirky_turing[466382]:         "osd_id": 1,
Nov 25 18:01:01 compute-0 quirky_turing[466382]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 18:01:01 compute-0 quirky_turing[466382]:         "type": "bluestore"
Nov 25 18:01:01 compute-0 quirky_turing[466382]:     },
Nov 25 18:01:01 compute-0 quirky_turing[466382]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 18:01:01 compute-0 quirky_turing[466382]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:01:01 compute-0 quirky_turing[466382]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 18:01:01 compute-0 quirky_turing[466382]:         "osd_id": 2,
Nov 25 18:01:01 compute-0 quirky_turing[466382]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 18:01:01 compute-0 quirky_turing[466382]:         "type": "bluestore"
Nov 25 18:01:01 compute-0 quirky_turing[466382]:     },
Nov 25 18:01:01 compute-0 quirky_turing[466382]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 18:01:01 compute-0 quirky_turing[466382]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:01:01 compute-0 quirky_turing[466382]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 18:01:01 compute-0 quirky_turing[466382]:         "osd_id": 0,
Nov 25 18:01:01 compute-0 quirky_turing[466382]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 18:01:01 compute-0 quirky_turing[466382]:         "type": "bluestore"
Nov 25 18:01:01 compute-0 quirky_turing[466382]:     }
Nov 25 18:01:01 compute-0 quirky_turing[466382]: }
Nov 25 18:01:01 compute-0 systemd[1]: libpod-6f5397653d61d98b3a98317ba7e17f36a914f201109e3a0e16b4ed5b33d6ea37.scope: Deactivated successfully.
Nov 25 18:01:01 compute-0 podman[466365]: 2025-11-25 18:01:01.546465877 +0000 UTC m=+1.210294991 container died 6f5397653d61d98b3a98317ba7e17f36a914f201109e3a0e16b4ed5b33d6ea37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 18:01:01 compute-0 nova_compute[254092]: 2025-11-25 18:01:01.597 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:01:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-7480863634aa3749b5c9bfa0353ca07e598a01b86ea7e1432227216658771ef3-merged.mount: Deactivated successfully.
Nov 25 18:01:01 compute-0 podman[466365]: 2025-11-25 18:01:01.676505486 +0000 UTC m=+1.340334560 container remove 6f5397653d61d98b3a98317ba7e17f36a914f201109e3a0e16b4ed5b33d6ea37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 18:01:01 compute-0 systemd[1]: libpod-conmon-6f5397653d61d98b3a98317ba7e17f36a914f201109e3a0e16b4ed5b33d6ea37.scope: Deactivated successfully.
Nov 25 18:01:01 compute-0 sudo[466259]: pam_unix(sudo:session): session closed for user root
Nov 25 18:01:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:01:01 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:01:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:01:01 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:01:01 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev f1552ad7-3bc2-45ee-9475-0e308dbf295c does not exist
Nov 25 18:01:01 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev dc8cd95b-fbd5-45e9-9dcc-5589bc99b19c does not exist
Nov 25 18:01:01 compute-0 sudo[466438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:01:01 compute-0 sudo[466438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:01:01 compute-0 sudo[466438]: pam_unix(sudo:session): session closed for user root
Nov 25 18:01:01 compute-0 sudo[466463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 18:01:01 compute-0 sudo[466463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:01:01 compute-0 sudo[466463]: pam_unix(sudo:session): session closed for user root
Nov 25 18:01:01 compute-0 ceph-mon[74985]: pgmap v4146: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:01 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:01:01 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:01:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4147: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:03 compute-0 ceph-mon[74985]: pgmap v4147: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:04 compute-0 nova_compute[254092]: 2025-11-25 18:01:04.111 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:01:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4148: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:01:05 compute-0 ceph-mon[74985]: pgmap v4148: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4149: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:06 compute-0 nova_compute[254092]: 2025-11-25 18:01:06.599 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:01:07 compute-0 ceph-mon[74985]: pgmap v4149: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4150: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:09 compute-0 nova_compute[254092]: 2025-11-25 18:01:09.112 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:01:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:01:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:01:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:01:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:01:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:01:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:01:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4151: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:10 compute-0 ceph-mon[74985]: pgmap v4150: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:01:11 compute-0 nova_compute[254092]: 2025-11-25 18:01:11.604 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:01:12 compute-0 ceph-mon[74985]: pgmap v4151: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4152: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 18:01:13.706 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 18:01:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 18:01:13.707 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 18:01:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 18:01:13.707 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:01:14 compute-0 nova_compute[254092]: 2025-11-25 18:01:14.113 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:01:14 compute-0 ceph-mon[74985]: pgmap v4152: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4153: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:15 compute-0 ceph-mon[74985]: pgmap v4153: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:15 compute-0 podman[466488]: 2025-11-25 18:01:15.659626108 +0000 UTC m=+0.067639777 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 18:01:15 compute-0 podman[466489]: 2025-11-25 18:01:15.677952965 +0000 UTC m=+0.087530255 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 18:01:15 compute-0 podman[466490]: 2025-11-25 18:01:15.683573698 +0000 UTC m=+0.093491718 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:01:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:01:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4154: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:16 compute-0 nova_compute[254092]: 2025-11-25 18:01:16.659 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:01:17 compute-0 ceph-mon[74985]: pgmap v4154: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4155: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:18 compute-0 nova_compute[254092]: 2025-11-25 18:01:18.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:01:19 compute-0 nova_compute[254092]: 2025-11-25 18:01:19.116 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:01:19 compute-0 nova_compute[254092]: 2025-11-25 18:01:19.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:01:19 compute-0 ceph-mon[74985]: pgmap v4155: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4156: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:01:21 compute-0 nova_compute[254092]: 2025-11-25 18:01:21.659 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:01:21 compute-0 ceph-mon[74985]: pgmap v4156: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4157: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:23 compute-0 nova_compute[254092]: 2025-11-25 18:01:23.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:01:23 compute-0 nova_compute[254092]: 2025-11-25 18:01:23.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:01:24 compute-0 nova_compute[254092]: 2025-11-25 18:01:24.164 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:01:24 compute-0 ceph-mon[74985]: pgmap v4157: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4158: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:25 compute-0 ceph-mon[74985]: pgmap v4158: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:01:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4159: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:26 compute-0 nova_compute[254092]: 2025-11-25 18:01:26.731 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:01:27 compute-0 nova_compute[254092]: 2025-11-25 18:01:27.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:01:27 compute-0 nova_compute[254092]: 2025-11-25 18:01:27.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 18:01:27 compute-0 nova_compute[254092]: 2025-11-25 18:01:27.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 18:01:27 compute-0 nova_compute[254092]: 2025-11-25 18:01:27.515 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 18:01:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4160: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:28 compute-0 ceph-mon[74985]: pgmap v4159: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:28 compute-0 nova_compute[254092]: 2025-11-25 18:01:28.499 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:01:28 compute-0 nova_compute[254092]: 2025-11-25 18:01:28.499 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:01:28 compute-0 nova_compute[254092]: 2025-11-25 18:01:28.537 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 18:01:28 compute-0 nova_compute[254092]: 2025-11-25 18:01:28.538 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 18:01:28 compute-0 nova_compute[254092]: 2025-11-25 18:01:28.538 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:01:28 compute-0 nova_compute[254092]: 2025-11-25 18:01:28.538 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 18:01:28 compute-0 nova_compute[254092]: 2025-11-25 18:01:28.539 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 18:01:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 18:01:29 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3128121881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:01:29 compute-0 nova_compute[254092]: 2025-11-25 18:01:29.016 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 18:01:29 compute-0 nova_compute[254092]: 2025-11-25 18:01:29.167 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:01:29 compute-0 nova_compute[254092]: 2025-11-25 18:01:29.209 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 18:01:29 compute-0 nova_compute[254092]: 2025-11-25 18:01:29.210 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3609MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 18:01:29 compute-0 nova_compute[254092]: 2025-11-25 18:01:29.210 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 18:01:29 compute-0 nova_compute[254092]: 2025-11-25 18:01:29.211 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 18:01:29 compute-0 nova_compute[254092]: 2025-11-25 18:01:29.373 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 18:01:29 compute-0 nova_compute[254092]: 2025-11-25 18:01:29.373 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 18:01:29 compute-0 nova_compute[254092]: 2025-11-25 18:01:29.505 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing inventories for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 18:01:29 compute-0 nova_compute[254092]: 2025-11-25 18:01:29.655 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating ProviderTree inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 18:01:29 compute-0 nova_compute[254092]: 2025-11-25 18:01:29.655 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 18:01:29 compute-0 nova_compute[254092]: 2025-11-25 18:01:29.682 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing aggregate associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 18:01:29 compute-0 nova_compute[254092]: 2025-11-25 18:01:29.714 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing trait associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 18:01:29 compute-0 nova_compute[254092]: 2025-11-25 18:01:29.733 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 18:01:30 compute-0 ceph-mon[74985]: pgmap v4160: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:30 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3128121881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:01:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4161: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 18:01:30 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2093183224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:01:30 compute-0 nova_compute[254092]: 2025-11-25 18:01:30.370 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.638s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 18:01:30 compute-0 nova_compute[254092]: 2025-11-25 18:01:30.375 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 18:01:30 compute-0 nova_compute[254092]: 2025-11-25 18:01:30.393 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 18:01:30 compute-0 nova_compute[254092]: 2025-11-25 18:01:30.395 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 18:01:30 compute-0 nova_compute[254092]: 2025-11-25 18:01:30.395 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.184s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:01:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:01:31 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2093183224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:01:31 compute-0 nova_compute[254092]: 2025-11-25 18:01:31.766 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:01:32 compute-0 ceph-mon[74985]: pgmap v4161: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4162: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:34 compute-0 ceph-mon[74985]: pgmap v4162: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:34 compute-0 nova_compute[254092]: 2025-11-25 18:01:34.171 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:01:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4163: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:35 compute-0 nova_compute[254092]: 2025-11-25 18:01:35.392 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:01:35 compute-0 nova_compute[254092]: 2025-11-25 18:01:35.393 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 18:01:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:01:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4164: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:36 compute-0 ceph-mon[74985]: pgmap v4163: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:36 compute-0 nova_compute[254092]: 2025-11-25 18:01:36.769 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:01:37 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 18:01:37 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7800.1 total, 600.0 interval
                                           Cumulative writes: 18K writes, 85K keys, 18K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.02 MB/s
                                           Cumulative WAL: 18K writes, 18K syncs, 1.00 writes per sync, written: 0.11 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1308 writes, 5944 keys, 1308 commit groups, 1.0 writes per commit group, ingest: 8.54 MB, 0.01 MB/s
                                           Interval WAL: 1308 writes, 1308 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     40.6      2.54              0.37        63    0.040       0      0       0.0       0.0
                                             L6      1/0   10.03 MB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   5.3    129.2    110.3      4.90              1.69        62    0.079    462K    33K       0.0       0.0
                                            Sum      1/0   10.03 MB   0.0      0.6     0.1      0.5       0.6      0.1       0.0   6.3     85.2     86.5      7.44              2.06       125    0.059    462K    33K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   8.8     98.7    100.5      0.56              0.21        10    0.056     51K   2479       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   0.0    129.2    110.3      4.90              1.69        62    0.079    462K    33K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     41.3      2.49              0.37        62    0.040       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.1      0.05              0.00         1    0.047       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 7800.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.100, interval 0.006
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.63 GB write, 0.08 MB/s write, 0.62 GB read, 0.08 MB/s read, 7.4 seconds
                                           Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563cc83831f0#2 capacity: 304.00 MB usage: 72.38 MB table_size: 0 occupancy: 18446744073709551615 collections: 14 last_copies: 0 last_secs: 0.000964 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(4638,69.18 MB,22.7566%) FilterBlock(126,1.23 MB,0.405537%) IndexBlock(126,1.97 MB,0.647148%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 25 18:01:37 compute-0 ceph-mon[74985]: pgmap v4164: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4165: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:39 compute-0 nova_compute[254092]: 2025-11-25 18:01:39.203 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:01:39 compute-0 nova_compute[254092]: 2025-11-25 18:01:39.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:01:40 compute-0 ceph-mon[74985]: pgmap v4165: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:01:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:01:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:01:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:01:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:01:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:01:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4166: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_18:01:40
Nov 25 18:01:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 18:01:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 18:01:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['volumes', '.mgr', 'backups', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'images', 'default.rgw.control', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data']
Nov 25 18:01:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 18:01:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 18:01:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:01:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:01:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:01:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:01:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 18:01:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:01:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:01:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:01:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:01:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:01:41 compute-0 nova_compute[254092]: 2025-11-25 18:01:41.769 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:01:42 compute-0 ceph-mon[74985]: pgmap v4166: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4167: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:44 compute-0 ceph-mon[74985]: pgmap v4167: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:44 compute-0 nova_compute[254092]: 2025-11-25 18:01:44.207 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:01:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4168: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:01:46 compute-0 ceph-mon[74985]: pgmap v4168: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4169: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:46 compute-0 podman[466596]: 2025-11-25 18:01:46.66552657 +0000 UTC m=+0.077491423 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 18:01:46 compute-0 podman[466595]: 2025-11-25 18:01:46.701579998 +0000 UTC m=+0.115303709 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 18:01:46 compute-0 podman[466597]: 2025-11-25 18:01:46.766027137 +0000 UTC m=+0.163205579 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 18:01:46 compute-0 nova_compute[254092]: 2025-11-25 18:01:46.771 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:01:48 compute-0 ceph-mon[74985]: pgmap v4169: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4170: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:49 compute-0 nova_compute[254092]: 2025-11-25 18:01:49.212 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:01:50 compute-0 ceph-mon[74985]: pgmap v4170: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4171: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:01:51 compute-0 nova_compute[254092]: 2025-11-25 18:01:51.774 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:01:52 compute-0 ceph-mon[74985]: pgmap v4171: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4172: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 18:01:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:01:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 18:01:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:01:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 18:01:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:01:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:01:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:01:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:01:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:01:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 18:01:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:01:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 18:01:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:01:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:01:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:01:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 18:01:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:01:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 18:01:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:01:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:01:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:01:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 18:01:54 compute-0 nova_compute[254092]: 2025-11-25 18:01:54.215 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:01:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4173: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:54 compute-0 ceph-mon[74985]: pgmap v4172: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 18:01:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1971686985' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 18:01:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 18:01:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1971686985' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 18:01:55 compute-0 ceph-mon[74985]: pgmap v4173: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/1971686985' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 18:01:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/1971686985' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 18:01:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:01:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4174: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:56 compute-0 nova_compute[254092]: 2025-11-25 18:01:56.776 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:01:57 compute-0 ceph-mon[74985]: pgmap v4174: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4175: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:01:59 compute-0 nova_compute[254092]: 2025-11-25 18:01:59.219 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:01:59 compute-0 ceph-mon[74985]: pgmap v4175: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4176: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:02:01 compute-0 nova_compute[254092]: 2025-11-25 18:02:01.777 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:02:01 compute-0 ceph-mon[74985]: pgmap v4176: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:01 compute-0 sudo[466656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:02:01 compute-0 sudo[466656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:02:01 compute-0 sudo[466656]: pam_unix(sudo:session): session closed for user root
Nov 25 18:02:02 compute-0 sudo[466681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:02:02 compute-0 sudo[466681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:02:02 compute-0 sudo[466681]: pam_unix(sudo:session): session closed for user root
Nov 25 18:02:02 compute-0 sudo[466706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:02:02 compute-0 sudo[466706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:02:02 compute-0 sudo[466706]: pam_unix(sudo:session): session closed for user root
Nov 25 18:02:02 compute-0 sudo[466731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 18:02:02 compute-0 sudo[466731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:02:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4177: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:02 compute-0 sudo[466731]: pam_unix(sudo:session): session closed for user root
Nov 25 18:02:02 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:02:02 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:02:02 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 18:02:02 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:02:02 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 18:02:02 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:02:02 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 517bbcb8-3996-49fc-81d4-2fef00f1c9a5 does not exist
Nov 25 18:02:02 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 3debf604-0c90-4223-9a08-cf5d93a2dc63 does not exist
Nov 25 18:02:02 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev c11892a3-94a3-4a00-95f1-c2aed5aacefc does not exist
Nov 25 18:02:02 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 18:02:02 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:02:02 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 18:02:02 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:02:02 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:02:02 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:02:02 compute-0 sudo[466787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:02:02 compute-0 sudo[466787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:02:02 compute-0 sudo[466787]: pam_unix(sudo:session): session closed for user root
Nov 25 18:02:03 compute-0 sudo[466812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:02:03 compute-0 sudo[466812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:02:03 compute-0 sudo[466812]: pam_unix(sudo:session): session closed for user root
Nov 25 18:02:03 compute-0 sudo[466837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:02:03 compute-0 sudo[466837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:02:03 compute-0 sudo[466837]: pam_unix(sudo:session): session closed for user root
Nov 25 18:02:03 compute-0 sudo[466862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 18:02:03 compute-0 sudo[466862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:02:03 compute-0 podman[466927]: 2025-11-25 18:02:03.57745082 +0000 UTC m=+0.066634118 container create 78a0d2a78ece21e642be74857de75f8f67b6bd8ea2828898d78734c80101edf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Nov 25 18:02:03 compute-0 systemd[1]: Started libpod-conmon-78a0d2a78ece21e642be74857de75f8f67b6bd8ea2828898d78734c80101edf4.scope.
Nov 25 18:02:03 compute-0 podman[466927]: 2025-11-25 18:02:03.546045628 +0000 UTC m=+0.035228966 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:02:03 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:02:03 compute-0 podman[466927]: 2025-11-25 18:02:03.684846055 +0000 UTC m=+0.174029403 container init 78a0d2a78ece21e642be74857de75f8f67b6bd8ea2828898d78734c80101edf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_heisenberg, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:02:03 compute-0 podman[466927]: 2025-11-25 18:02:03.699457421 +0000 UTC m=+0.188640679 container start 78a0d2a78ece21e642be74857de75f8f67b6bd8ea2828898d78734c80101edf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_heisenberg, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 18:02:03 compute-0 podman[466927]: 2025-11-25 18:02:03.703762768 +0000 UTC m=+0.192946196 container attach 78a0d2a78ece21e642be74857de75f8f67b6bd8ea2828898d78734c80101edf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_heisenberg, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 18:02:03 compute-0 interesting_heisenberg[466943]: 167 167
Nov 25 18:02:03 compute-0 systemd[1]: libpod-78a0d2a78ece21e642be74857de75f8f67b6bd8ea2828898d78734c80101edf4.scope: Deactivated successfully.
Nov 25 18:02:03 compute-0 podman[466927]: 2025-11-25 18:02:03.710723216 +0000 UTC m=+0.199906494 container died 78a0d2a78ece21e642be74857de75f8f67b6bd8ea2828898d78734c80101edf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_heisenberg, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:02:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-040e8612f55c13e0e9a8e0e230790ce012a0312476b56294bb08c34a22039de2-merged.mount: Deactivated successfully.
Nov 25 18:02:03 compute-0 podman[466927]: 2025-11-25 18:02:03.761902985 +0000 UTC m=+0.251086273 container remove 78a0d2a78ece21e642be74857de75f8f67b6bd8ea2828898d78734c80101edf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 18:02:03 compute-0 systemd[1]: libpod-conmon-78a0d2a78ece21e642be74857de75f8f67b6bd8ea2828898d78734c80101edf4.scope: Deactivated successfully.
Nov 25 18:02:03 compute-0 ceph-mon[74985]: pgmap v4177: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:03 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:02:03 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:02:03 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:02:03 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:02:03 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:02:03 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:02:03 compute-0 podman[466968]: 2025-11-25 18:02:03.962129618 +0000 UTC m=+0.049080643 container create 0f0e6557e73cf6ec1d559abfe9f490b78d3ec5cea44dff81e0eb74fd8a1313eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_wing, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:02:03 compute-0 systemd[1]: Started libpod-conmon-0f0e6557e73cf6ec1d559abfe9f490b78d3ec5cea44dff81e0eb74fd8a1313eb.scope.
Nov 25 18:02:04 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:02:04 compute-0 podman[466968]: 2025-11-25 18:02:03.940373778 +0000 UTC m=+0.027324813 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:02:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dae245a1e1acd9858ff48ae1cf93359da218ea9db392fccb7da961ca2805815c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:02:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dae245a1e1acd9858ff48ae1cf93359da218ea9db392fccb7da961ca2805815c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:02:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dae245a1e1acd9858ff48ae1cf93359da218ea9db392fccb7da961ca2805815c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:02:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dae245a1e1acd9858ff48ae1cf93359da218ea9db392fccb7da961ca2805815c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:02:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dae245a1e1acd9858ff48ae1cf93359da218ea9db392fccb7da961ca2805815c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:02:04 compute-0 podman[466968]: 2025-11-25 18:02:04.046171259 +0000 UTC m=+0.133122274 container init 0f0e6557e73cf6ec1d559abfe9f490b78d3ec5cea44dff81e0eb74fd8a1313eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_wing, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 18:02:04 compute-0 podman[466968]: 2025-11-25 18:02:04.053306322 +0000 UTC m=+0.140257317 container start 0f0e6557e73cf6ec1d559abfe9f490b78d3ec5cea44dff81e0eb74fd8a1313eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:02:04 compute-0 podman[466968]: 2025-11-25 18:02:04.057139817 +0000 UTC m=+0.144090812 container attach 0f0e6557e73cf6ec1d559abfe9f490b78d3ec5cea44dff81e0eb74fd8a1313eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:02:04 compute-0 nova_compute[254092]: 2025-11-25 18:02:04.223 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:02:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4178: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:05 compute-0 suspicious_wing[466984]: --> passed data devices: 0 physical, 3 LVM
Nov 25 18:02:05 compute-0 suspicious_wing[466984]: --> relative data size: 1.0
Nov 25 18:02:05 compute-0 suspicious_wing[466984]: --> All data devices are unavailable
Nov 25 18:02:05 compute-0 systemd[1]: libpod-0f0e6557e73cf6ec1d559abfe9f490b78d3ec5cea44dff81e0eb74fd8a1313eb.scope: Deactivated successfully.
Nov 25 18:02:05 compute-0 podman[466968]: 2025-11-25 18:02:05.295097568 +0000 UTC m=+1.382048583 container died 0f0e6557e73cf6ec1d559abfe9f490b78d3ec5cea44dff81e0eb74fd8a1313eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_wing, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 18:02:05 compute-0 systemd[1]: libpod-0f0e6557e73cf6ec1d559abfe9f490b78d3ec5cea44dff81e0eb74fd8a1313eb.scope: Consumed 1.175s CPU time.
Nov 25 18:02:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-dae245a1e1acd9858ff48ae1cf93359da218ea9db392fccb7da961ca2805815c-merged.mount: Deactivated successfully.
Nov 25 18:02:05 compute-0 podman[466968]: 2025-11-25 18:02:05.391503433 +0000 UTC m=+1.478454438 container remove 0f0e6557e73cf6ec1d559abfe9f490b78d3ec5cea44dff81e0eb74fd8a1313eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_wing, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 18:02:05 compute-0 systemd[1]: libpod-conmon-0f0e6557e73cf6ec1d559abfe9f490b78d3ec5cea44dff81e0eb74fd8a1313eb.scope: Deactivated successfully.
Nov 25 18:02:05 compute-0 sudo[466862]: pam_unix(sudo:session): session closed for user root
Nov 25 18:02:05 compute-0 sudo[467025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:02:05 compute-0 sudo[467025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:02:05 compute-0 sudo[467025]: pam_unix(sudo:session): session closed for user root
Nov 25 18:02:05 compute-0 sudo[467050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:02:05 compute-0 sudo[467050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:02:05 compute-0 sudo[467050]: pam_unix(sudo:session): session closed for user root
Nov 25 18:02:05 compute-0 sudo[467075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:02:05 compute-0 sudo[467075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:02:05 compute-0 sudo[467075]: pam_unix(sudo:session): session closed for user root
Nov 25 18:02:05 compute-0 sudo[467100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 18:02:05 compute-0 sudo[467100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:02:05 compute-0 ceph-mon[74985]: pgmap v4178: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:06 compute-0 podman[467166]: 2025-11-25 18:02:06.03155224 +0000 UTC m=+0.035852313 container create 81326d1247bac2d5b16999d3fd6104378da6dbc0252cba23647b049734f8089c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_bouman, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 18:02:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:02:06 compute-0 systemd[1]: Started libpod-conmon-81326d1247bac2d5b16999d3fd6104378da6dbc0252cba23647b049734f8089c.scope.
Nov 25 18:02:06 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:02:06 compute-0 podman[467166]: 2025-11-25 18:02:06.016920543 +0000 UTC m=+0.021220636 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:02:06 compute-0 podman[467166]: 2025-11-25 18:02:06.116149086 +0000 UTC m=+0.120449179 container init 81326d1247bac2d5b16999d3fd6104378da6dbc0252cba23647b049734f8089c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 18:02:06 compute-0 podman[467166]: 2025-11-25 18:02:06.124127873 +0000 UTC m=+0.128427946 container start 81326d1247bac2d5b16999d3fd6104378da6dbc0252cba23647b049734f8089c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 18:02:06 compute-0 podman[467166]: 2025-11-25 18:02:06.127855574 +0000 UTC m=+0.132155717 container attach 81326d1247bac2d5b16999d3fd6104378da6dbc0252cba23647b049734f8089c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:02:06 compute-0 objective_bouman[467183]: 167 167
Nov 25 18:02:06 compute-0 systemd[1]: libpod-81326d1247bac2d5b16999d3fd6104378da6dbc0252cba23647b049734f8089c.scope: Deactivated successfully.
Nov 25 18:02:06 compute-0 podman[467166]: 2025-11-25 18:02:06.129556129 +0000 UTC m=+0.133856202 container died 81326d1247bac2d5b16999d3fd6104378da6dbc0252cba23647b049734f8089c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:02:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f5894d33a7dc684ace6732b1170470b77f6ba2dab502da704ea0688dce05c61-merged.mount: Deactivated successfully.
Nov 25 18:02:06 compute-0 podman[467166]: 2025-11-25 18:02:06.170739797 +0000 UTC m=+0.175039880 container remove 81326d1247bac2d5b16999d3fd6104378da6dbc0252cba23647b049734f8089c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:02:06 compute-0 systemd[1]: libpod-conmon-81326d1247bac2d5b16999d3fd6104378da6dbc0252cba23647b049734f8089c.scope: Deactivated successfully.
Nov 25 18:02:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4179: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:06 compute-0 podman[467208]: 2025-11-25 18:02:06.408748835 +0000 UTC m=+0.061808938 container create 0b919d5477cc294f544ae60d51fec7b8e19e5d460bd5682aeaaca872c2e0c628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_johnson, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Nov 25 18:02:06 compute-0 systemd[1]: Started libpod-conmon-0b919d5477cc294f544ae60d51fec7b8e19e5d460bd5682aeaaca872c2e0c628.scope.
Nov 25 18:02:06 compute-0 podman[467208]: 2025-11-25 18:02:06.382873133 +0000 UTC m=+0.035933336 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:02:06 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:02:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cf772c95c208b29200881f7d3a967866a55c7e907e8d6a637fcf63d113543ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:02:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cf772c95c208b29200881f7d3a967866a55c7e907e8d6a637fcf63d113543ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:02:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cf772c95c208b29200881f7d3a967866a55c7e907e8d6a637fcf63d113543ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:02:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cf772c95c208b29200881f7d3a967866a55c7e907e8d6a637fcf63d113543ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:02:06 compute-0 podman[467208]: 2025-11-25 18:02:06.520492167 +0000 UTC m=+0.173552370 container init 0b919d5477cc294f544ae60d51fec7b8e19e5d460bd5682aeaaca872c2e0c628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_johnson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:02:06 compute-0 podman[467208]: 2025-11-25 18:02:06.529707747 +0000 UTC m=+0.182767850 container start 0b919d5477cc294f544ae60d51fec7b8e19e5d460bd5682aeaaca872c2e0c628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_johnson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 18:02:06 compute-0 podman[467208]: 2025-11-25 18:02:06.53532878 +0000 UTC m=+0.188388973 container attach 0b919d5477cc294f544ae60d51fec7b8e19e5d460bd5682aeaaca872c2e0c628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 18:02:06 compute-0 nova_compute[254092]: 2025-11-25 18:02:06.778 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:02:07 compute-0 laughing_johnson[467225]: {
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:     "0": [
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:         {
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:             "devices": [
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:                 "/dev/loop3"
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:             ],
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:             "lv_name": "ceph_lv0",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:             "lv_size": "21470642176",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:             "name": "ceph_lv0",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:             "tags": {
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:                 "ceph.cluster_name": "ceph",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:                 "ceph.crush_device_class": "",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:                 "ceph.encrypted": "0",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:                 "ceph.osd_id": "0",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:                 "ceph.type": "block",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:                 "ceph.vdo": "0"
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:             },
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:             "type": "block",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:             "vg_name": "ceph_vg0"
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:         }
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:     ],
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:     "1": [
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:         {
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:             "devices": [
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:                 "/dev/loop4"
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:             ],
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:             "lv_name": "ceph_lv1",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:             "lv_size": "21470642176",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:             "name": "ceph_lv1",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:             "tags": {
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:                 "ceph.cluster_name": "ceph",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:                 "ceph.crush_device_class": "",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:                 "ceph.encrypted": "0",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:                 "ceph.osd_id": "1",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:                 "ceph.type": "block",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:                 "ceph.vdo": "0"
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:             },
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:             "type": "block",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:             "vg_name": "ceph_vg1"
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:         }
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:     ],
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:     "2": [
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:         {
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:             "devices": [
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:                 "/dev/loop5"
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:             ],
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:             "lv_name": "ceph_lv2",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:             "lv_size": "21470642176",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:             "name": "ceph_lv2",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:             "tags": {
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:                 "ceph.cluster_name": "ceph",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:                 "ceph.crush_device_class": "",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:                 "ceph.encrypted": "0",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:                 "ceph.osd_id": "2",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:                 "ceph.type": "block",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:                 "ceph.vdo": "0"
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:             },
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:             "type": "block",
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:             "vg_name": "ceph_vg2"
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:         }
Nov 25 18:02:07 compute-0 laughing_johnson[467225]:     ]
Nov 25 18:02:07 compute-0 laughing_johnson[467225]: }
Nov 25 18:02:07 compute-0 systemd[1]: libpod-0b919d5477cc294f544ae60d51fec7b8e19e5d460bd5682aeaaca872c2e0c628.scope: Deactivated successfully.
Nov 25 18:02:07 compute-0 podman[467208]: 2025-11-25 18:02:07.373279457 +0000 UTC m=+1.026339560 container died 0b919d5477cc294f544ae60d51fec7b8e19e5d460bd5682aeaaca872c2e0c628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 18:02:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-4cf772c95c208b29200881f7d3a967866a55c7e907e8d6a637fcf63d113543ae-merged.mount: Deactivated successfully.
Nov 25 18:02:07 compute-0 podman[467208]: 2025-11-25 18:02:07.54957311 +0000 UTC m=+1.202633253 container remove 0b919d5477cc294f544ae60d51fec7b8e19e5d460bd5682aeaaca872c2e0c628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:02:07 compute-0 systemd[1]: libpod-conmon-0b919d5477cc294f544ae60d51fec7b8e19e5d460bd5682aeaaca872c2e0c628.scope: Deactivated successfully.
Nov 25 18:02:07 compute-0 sudo[467100]: pam_unix(sudo:session): session closed for user root
Nov 25 18:02:07 compute-0 sudo[467247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:02:07 compute-0 sudo[467247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:02:07 compute-0 sudo[467247]: pam_unix(sudo:session): session closed for user root
Nov 25 18:02:07 compute-0 sudo[467272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:02:07 compute-0 sudo[467272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:02:07 compute-0 sudo[467272]: pam_unix(sudo:session): session closed for user root
Nov 25 18:02:07 compute-0 ceph-mon[74985]: pgmap v4179: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:07 compute-0 sudo[467297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:02:07 compute-0 sudo[467297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:02:07 compute-0 sudo[467297]: pam_unix(sudo:session): session closed for user root
Nov 25 18:02:07 compute-0 sudo[467322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 18:02:07 compute-0 sudo[467322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:02:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4180: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:08 compute-0 podman[467388]: 2025-11-25 18:02:08.368365588 +0000 UTC m=+0.062495107 container create c03a93ffbbb91c48bcae4a371f35e377178792b9d84f6806007f7a5c1234d08f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_moser, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:02:08 compute-0 systemd[1]: Started libpod-conmon-c03a93ffbbb91c48bcae4a371f35e377178792b9d84f6806007f7a5c1234d08f.scope.
Nov 25 18:02:08 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:02:08 compute-0 podman[467388]: 2025-11-25 18:02:08.348740946 +0000 UTC m=+0.042870505 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:02:08 compute-0 podman[467388]: 2025-11-25 18:02:08.45208761 +0000 UTC m=+0.146217149 container init c03a93ffbbb91c48bcae4a371f35e377178792b9d84f6806007f7a5c1234d08f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_moser, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:02:08 compute-0 podman[467388]: 2025-11-25 18:02:08.464074155 +0000 UTC m=+0.158203704 container start c03a93ffbbb91c48bcae4a371f35e377178792b9d84f6806007f7a5c1234d08f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:02:08 compute-0 podman[467388]: 2025-11-25 18:02:08.468613818 +0000 UTC m=+0.162743357 container attach c03a93ffbbb91c48bcae4a371f35e377178792b9d84f6806007f7a5c1234d08f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 18:02:08 compute-0 systemd[1]: libpod-c03a93ffbbb91c48bcae4a371f35e377178792b9d84f6806007f7a5c1234d08f.scope: Deactivated successfully.
Nov 25 18:02:08 compute-0 great_moser[467404]: 167 167
Nov 25 18:02:08 compute-0 podman[467388]: 2025-11-25 18:02:08.4705532 +0000 UTC m=+0.164682739 container died c03a93ffbbb91c48bcae4a371f35e377178792b9d84f6806007f7a5c1234d08f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:02:08 compute-0 conmon[467404]: conmon c03a93ffbbb91c48bcae <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c03a93ffbbb91c48bcae4a371f35e377178792b9d84f6806007f7a5c1234d08f.scope/container/memory.events
Nov 25 18:02:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea033297e667fa507de95a807fab2176a8a2b116d5c50bc46528c57bbaf0f24b-merged.mount: Deactivated successfully.
Nov 25 18:02:08 compute-0 podman[467388]: 2025-11-25 18:02:08.52321635 +0000 UTC m=+0.217345859 container remove c03a93ffbbb91c48bcae4a371f35e377178792b9d84f6806007f7a5c1234d08f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_moser, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:02:08 compute-0 systemd[1]: libpod-conmon-c03a93ffbbb91c48bcae4a371f35e377178792b9d84f6806007f7a5c1234d08f.scope: Deactivated successfully.
Nov 25 18:02:08 compute-0 podman[467427]: 2025-11-25 18:02:08.744216086 +0000 UTC m=+0.058738394 container create 711dc0fccb7972daa9ac0022a8fc15f6431d53e068b1efa00f9bef9f3360415d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_nash, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:02:08 compute-0 systemd[1]: Started libpod-conmon-711dc0fccb7972daa9ac0022a8fc15f6431d53e068b1efa00f9bef9f3360415d.scope.
Nov 25 18:02:08 compute-0 podman[467427]: 2025-11-25 18:02:08.725106978 +0000 UTC m=+0.039629266 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:02:08 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:02:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1815e7b8fe7397c74603564bcb46717bc61d5609e3e65c45a1a534d820f8941d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:02:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1815e7b8fe7397c74603564bcb46717bc61d5609e3e65c45a1a534d820f8941d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:02:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1815e7b8fe7397c74603564bcb46717bc61d5609e3e65c45a1a534d820f8941d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:02:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1815e7b8fe7397c74603564bcb46717bc61d5609e3e65c45a1a534d820f8941d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:02:08 compute-0 podman[467427]: 2025-11-25 18:02:08.847880609 +0000 UTC m=+0.162402907 container init 711dc0fccb7972daa9ac0022a8fc15f6431d53e068b1efa00f9bef9f3360415d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_nash, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 18:02:08 compute-0 podman[467427]: 2025-11-25 18:02:08.855879817 +0000 UTC m=+0.170402085 container start 711dc0fccb7972daa9ac0022a8fc15f6431d53e068b1efa00f9bef9f3360415d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_nash, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:02:08 compute-0 podman[467427]: 2025-11-25 18:02:08.863982896 +0000 UTC m=+0.178505204 container attach 711dc0fccb7972daa9ac0022a8fc15f6431d53e068b1efa00f9bef9f3360415d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_nash, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:02:09 compute-0 nova_compute[254092]: 2025-11-25 18:02:09.228 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:02:09 compute-0 ceph-mon[74985]: pgmap v4180: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:09 compute-0 unruffled_nash[467443]: {
Nov 25 18:02:09 compute-0 unruffled_nash[467443]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 18:02:09 compute-0 unruffled_nash[467443]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:02:09 compute-0 unruffled_nash[467443]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 18:02:09 compute-0 unruffled_nash[467443]:         "osd_id": 1,
Nov 25 18:02:09 compute-0 unruffled_nash[467443]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 18:02:09 compute-0 unruffled_nash[467443]:         "type": "bluestore"
Nov 25 18:02:09 compute-0 unruffled_nash[467443]:     },
Nov 25 18:02:09 compute-0 unruffled_nash[467443]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 18:02:09 compute-0 unruffled_nash[467443]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:02:09 compute-0 unruffled_nash[467443]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 18:02:09 compute-0 unruffled_nash[467443]:         "osd_id": 2,
Nov 25 18:02:09 compute-0 unruffled_nash[467443]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 18:02:09 compute-0 unruffled_nash[467443]:         "type": "bluestore"
Nov 25 18:02:09 compute-0 unruffled_nash[467443]:     },
Nov 25 18:02:09 compute-0 unruffled_nash[467443]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 18:02:09 compute-0 unruffled_nash[467443]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:02:09 compute-0 unruffled_nash[467443]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 18:02:09 compute-0 unruffled_nash[467443]:         "osd_id": 0,
Nov 25 18:02:09 compute-0 unruffled_nash[467443]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 18:02:09 compute-0 unruffled_nash[467443]:         "type": "bluestore"
Nov 25 18:02:09 compute-0 unruffled_nash[467443]:     }
Nov 25 18:02:09 compute-0 unruffled_nash[467443]: }
Nov 25 18:02:10 compute-0 systemd[1]: libpod-711dc0fccb7972daa9ac0022a8fc15f6431d53e068b1efa00f9bef9f3360415d.scope: Deactivated successfully.
Nov 25 18:02:10 compute-0 podman[467427]: 2025-11-25 18:02:10.019943572 +0000 UTC m=+1.334465850 container died 711dc0fccb7972daa9ac0022a8fc15f6431d53e068b1efa00f9bef9f3360415d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 18:02:10 compute-0 systemd[1]: libpod-711dc0fccb7972daa9ac0022a8fc15f6431d53e068b1efa00f9bef9f3360415d.scope: Consumed 1.170s CPU time.
Nov 25 18:02:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-1815e7b8fe7397c74603564bcb46717bc61d5609e3e65c45a1a534d820f8941d-merged.mount: Deactivated successfully.
Nov 25 18:02:10 compute-0 podman[467427]: 2025-11-25 18:02:10.079845398 +0000 UTC m=+1.394367676 container remove 711dc0fccb7972daa9ac0022a8fc15f6431d53e068b1efa00f9bef9f3360415d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_nash, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True)
Nov 25 18:02:10 compute-0 systemd[1]: libpod-conmon-711dc0fccb7972daa9ac0022a8fc15f6431d53e068b1efa00f9bef9f3360415d.scope: Deactivated successfully.
Nov 25 18:02:10 compute-0 sudo[467322]: pam_unix(sudo:session): session closed for user root
Nov 25 18:02:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:02:10 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:02:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:02:10 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:02:10 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 781a5a1b-9a50-4021-b937-c551150f0a7a does not exist
Nov 25 18:02:10 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev e37e3d93-b9cd-4232-a47b-aac9e10190aa does not exist
Nov 25 18:02:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:02:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:02:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:02:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:02:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:02:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:02:10 compute-0 sudo[467488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:02:10 compute-0 sudo[467488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:02:10 compute-0 sudo[467488]: pam_unix(sudo:session): session closed for user root
Nov 25 18:02:10 compute-0 sudo[467513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 18:02:10 compute-0 sudo[467513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:02:10 compute-0 sudo[467513]: pam_unix(sudo:session): session closed for user root
Nov 25 18:02:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4181: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:02:11 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:02:11 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:02:11 compute-0 nova_compute[254092]: 2025-11-25 18:02:11.780 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:02:12 compute-0 ceph-mon[74985]: pgmap v4181: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4182: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 18:02:13.708 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 18:02:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 18:02:13.708 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 18:02:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 18:02:13.708 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:02:14 compute-0 ceph-mon[74985]: pgmap v4182: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:14 compute-0 nova_compute[254092]: 2025-11-25 18:02:14.231 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:02:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4183: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:02:16 compute-0 ceph-mon[74985]: pgmap v4183: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4184: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:16 compute-0 nova_compute[254092]: 2025-11-25 18:02:16.791 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:02:17 compute-0 podman[467539]: 2025-11-25 18:02:17.657846919 +0000 UTC m=+0.068117140 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 18:02:17 compute-0 podman[467538]: 2025-11-25 18:02:17.670908433 +0000 UTC m=+0.091500183 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2)
Nov 25 18:02:17 compute-0 podman[467540]: 2025-11-25 18:02:17.685284373 +0000 UTC m=+0.101730741 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 18:02:18 compute-0 ceph-mon[74985]: pgmap v4184: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4185: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:19 compute-0 nova_compute[254092]: 2025-11-25 18:02:19.233 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:02:19 compute-0 ceph-mon[74985]: pgmap v4185: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4186: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:20 compute-0 nova_compute[254092]: 2025-11-25 18:02:20.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:02:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:02:21 compute-0 ceph-mon[74985]: pgmap v4186: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:21 compute-0 nova_compute[254092]: 2025-11-25 18:02:21.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:02:21 compute-0 nova_compute[254092]: 2025-11-25 18:02:21.834 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:02:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4187: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:23 compute-0 ceph-mon[74985]: pgmap v4187: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:23 compute-0 nova_compute[254092]: 2025-11-25 18:02:23.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:02:24 compute-0 nova_compute[254092]: 2025-11-25 18:02:24.262 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:02:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4188: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:25 compute-0 ceph-mon[74985]: pgmap v4188: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:25 compute-0 nova_compute[254092]: 2025-11-25 18:02:25.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:02:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:02:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4189: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:26 compute-0 nova_compute[254092]: 2025-11-25 18:02:26.837 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:02:27 compute-0 ceph-mon[74985]: pgmap v4189: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4190: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:29 compute-0 nova_compute[254092]: 2025-11-25 18:02:29.299 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:02:29 compute-0 nova_compute[254092]: 2025-11-25 18:02:29.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:02:29 compute-0 nova_compute[254092]: 2025-11-25 18:02:29.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 18:02:29 compute-0 nova_compute[254092]: 2025-11-25 18:02:29.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 18:02:29 compute-0 nova_compute[254092]: 2025-11-25 18:02:29.516 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 18:02:29 compute-0 nova_compute[254092]: 2025-11-25 18:02:29.517 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:02:29 compute-0 nova_compute[254092]: 2025-11-25 18:02:29.541 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 18:02:29 compute-0 nova_compute[254092]: 2025-11-25 18:02:29.541 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 18:02:29 compute-0 nova_compute[254092]: 2025-11-25 18:02:29.541 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:02:29 compute-0 nova_compute[254092]: 2025-11-25 18:02:29.542 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 18:02:29 compute-0 nova_compute[254092]: 2025-11-25 18:02:29.542 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 18:02:29 compute-0 ceph-mon[74985]: pgmap v4190: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 18:02:29 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2247302969' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:02:29 compute-0 nova_compute[254092]: 2025-11-25 18:02:29.995 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 18:02:30 compute-0 nova_compute[254092]: 2025-11-25 18:02:30.166 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 18:02:30 compute-0 nova_compute[254092]: 2025-11-25 18:02:30.167 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3593MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 18:02:30 compute-0 nova_compute[254092]: 2025-11-25 18:02:30.167 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 18:02:30 compute-0 nova_compute[254092]: 2025-11-25 18:02:30.167 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 18:02:30 compute-0 nova_compute[254092]: 2025-11-25 18:02:30.228 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 18:02:30 compute-0 nova_compute[254092]: 2025-11-25 18:02:30.229 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 18:02:30 compute-0 nova_compute[254092]: 2025-11-25 18:02:30.250 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 18:02:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4191: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:30 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2247302969' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:02:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 18:02:30 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/150636084' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:02:30 compute-0 nova_compute[254092]: 2025-11-25 18:02:30.667 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 18:02:30 compute-0 nova_compute[254092]: 2025-11-25 18:02:30.672 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 18:02:30 compute-0 nova_compute[254092]: 2025-11-25 18:02:30.684 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 18:02:30 compute-0 nova_compute[254092]: 2025-11-25 18:02:30.686 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 18:02:30 compute-0 nova_compute[254092]: 2025-11-25 18:02:30.686 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.519s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:02:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:02:31 compute-0 ceph-mon[74985]: pgmap v4191: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:31 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/150636084' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:02:31 compute-0 nova_compute[254092]: 2025-11-25 18:02:31.665 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:02:31 compute-0 nova_compute[254092]: 2025-11-25 18:02:31.877 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:02:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4192: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:33 compute-0 ceph-mon[74985]: pgmap v4192: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:34 compute-0 nova_compute[254092]: 2025-11-25 18:02:34.303 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:02:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4193: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:35 compute-0 nova_compute[254092]: 2025-11-25 18:02:35.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:02:35 compute-0 nova_compute[254092]: 2025-11-25 18:02:35.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 18:02:35 compute-0 ceph-mon[74985]: pgmap v4193: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:02:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4194: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:36 compute-0 nova_compute[254092]: 2025-11-25 18:02:36.878 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:02:37 compute-0 ceph-mon[74985]: pgmap v4194: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4195: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:39 compute-0 nova_compute[254092]: 2025-11-25 18:02:39.307 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:02:39 compute-0 ceph-mon[74985]: pgmap v4195: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:02:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:02:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:02:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:02:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:02:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:02:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4196: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_18:02:40
Nov 25 18:02:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 18:02:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 18:02:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['images', 'backups', '.rgw.root', 'default.rgw.meta', 'volumes', 'default.rgw.control', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data']
Nov 25 18:02:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 18:02:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 18:02:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:02:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:02:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:02:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:02:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 18:02:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:02:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:02:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:02:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:02:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:02:41 compute-0 nova_compute[254092]: 2025-11-25 18:02:41.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:02:41 compute-0 ceph-mon[74985]: pgmap v4196: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:41 compute-0 nova_compute[254092]: 2025-11-25 18:02:41.879 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:02:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4197: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:43 compute-0 ceph-mon[74985]: pgmap v4197: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:44 compute-0 nova_compute[254092]: 2025-11-25 18:02:44.311 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:02:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4198: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:44 compute-0 nova_compute[254092]: 2025-11-25 18:02:44.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:02:45 compute-0 ceph-mon[74985]: pgmap v4198: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:02:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4199: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:46 compute-0 nova_compute[254092]: 2025-11-25 18:02:46.881 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:02:47 compute-0 ceph-mon[74985]: pgmap v4199: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4200: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:48 compute-0 podman[467648]: 2025-11-25 18:02:48.672494537 +0000 UTC m=+0.080829294 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 18:02:48 compute-0 podman[467647]: 2025-11-25 18:02:48.677807731 +0000 UTC m=+0.085384638 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 18:02:48 compute-0 podman[467649]: 2025-11-25 18:02:48.715476073 +0000 UTC m=+0.113581272 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 18:02:49 compute-0 nova_compute[254092]: 2025-11-25 18:02:49.314 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:02:49 compute-0 ceph-mon[74985]: pgmap v4200: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4201: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:02:51 compute-0 ceph-mon[74985]: pgmap v4201: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:51 compute-0 nova_compute[254092]: 2025-11-25 18:02:51.882 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:02:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4202: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 18:02:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:02:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 18:02:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:02:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 18:02:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:02:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:02:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:02:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:02:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:02:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 18:02:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:02:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 18:02:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:02:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:02:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:02:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 18:02:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:02:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 18:02:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:02:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:02:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:02:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 18:02:53 compute-0 ceph-mon[74985]: pgmap v4202: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:54 compute-0 nova_compute[254092]: 2025-11-25 18:02:54.317 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:02:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4203: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 18:02:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2721422047' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 18:02:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 18:02:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2721422047' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 18:02:55 compute-0 ceph-mon[74985]: pgmap v4203: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/2721422047' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 18:02:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/2721422047' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 18:02:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:02:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4204: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:56 compute-0 nova_compute[254092]: 2025-11-25 18:02:56.885 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:02:57 compute-0 ceph-mon[74985]: pgmap v4204: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4205: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:02:59 compute-0 nova_compute[254092]: 2025-11-25 18:02:59.323 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:02:59 compute-0 ceph-mon[74985]: pgmap v4205: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4206: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:03:01 compute-0 ceph-mon[74985]: pgmap v4206: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:01 compute-0 nova_compute[254092]: 2025-11-25 18:03:01.886 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:03:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4207: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:02 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #204. Immutable memtables: 0.
Nov 25 18:03:02 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:03:02.735487) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 18:03:02 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 127] Flushing memtable with next log file: 204
Nov 25 18:03:02 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093782735518, "job": 127, "event": "flush_started", "num_memtables": 1, "num_entries": 1368, "num_deletes": 251, "total_data_size": 2152011, "memory_usage": 2186096, "flush_reason": "Manual Compaction"}
Nov 25 18:03:02 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 127] Level-0 flush table #205: started
Nov 25 18:03:02 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093782753166, "cf_name": "default", "job": 127, "event": "table_file_creation", "file_number": 205, "file_size": 2109738, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 85290, "largest_seqno": 86657, "table_properties": {"data_size": 2103241, "index_size": 3696, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13400, "raw_average_key_size": 19, "raw_value_size": 2090326, "raw_average_value_size": 3096, "num_data_blocks": 166, "num_entries": 675, "num_filter_entries": 675, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764093641, "oldest_key_time": 1764093641, "file_creation_time": 1764093782, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 205, "seqno_to_time_mapping": "N/A"}}
Nov 25 18:03:02 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 127] Flush lasted 17867 microseconds, and 10371 cpu microseconds.
Nov 25 18:03:02 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 18:03:02 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:03:02.753345) [db/flush_job.cc:967] [default] [JOB 127] Level-0 flush table #205: 2109738 bytes OK
Nov 25 18:03:02 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:03:02.753440) [db/memtable_list.cc:519] [default] Level-0 commit table #205 started
Nov 25 18:03:02 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:03:02.754876) [db/memtable_list.cc:722] [default] Level-0 commit table #205: memtable #1 done
Nov 25 18:03:02 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:03:02.754899) EVENT_LOG_v1 {"time_micros": 1764093782754892, "job": 127, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 18:03:02 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:03:02.754922) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 18:03:02 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 127] Try to delete WAL files size 2145928, prev total WAL file size 2145928, number of live WAL files 2.
Nov 25 18:03:02 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000201.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:03:02 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:03:02.756834) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038353334' seq:72057594037927935, type:22 .. '7061786F730038373836' seq:0, type:0; will stop at (end)
Nov 25 18:03:02 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 128] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 18:03:02 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 127 Base level 0, inputs: [205(2060KB)], [203(10MB)]
Nov 25 18:03:02 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093782756883, "job": 128, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [205], "files_L6": [203], "score": -1, "input_data_size": 12629821, "oldest_snapshot_seqno": -1}
Nov 25 18:03:02 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 128] Generated table #206: 9955 keys, 10907978 bytes, temperature: kUnknown
Nov 25 18:03:02 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093782842918, "cf_name": "default", "job": 128, "event": "table_file_creation", "file_number": 206, "file_size": 10907978, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10847048, "index_size": 34961, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24901, "raw_key_size": 263688, "raw_average_key_size": 26, "raw_value_size": 10674634, "raw_average_value_size": 1072, "num_data_blocks": 1338, "num_entries": 9955, "num_filter_entries": 9955, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764093782, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 206, "seqno_to_time_mapping": "N/A"}}
Nov 25 18:03:02 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 18:03:02 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:03:02.843301) [db/compaction/compaction_job.cc:1663] [default] [JOB 128] Compacted 1@0 + 1@6 files to L6 => 10907978 bytes
Nov 25 18:03:02 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:03:02.845236) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 146.6 rd, 126.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 10.0 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(11.2) write-amplify(5.2) OK, records in: 10469, records dropped: 514 output_compression: NoCompression
Nov 25 18:03:02 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:03:02.845266) EVENT_LOG_v1 {"time_micros": 1764093782845252, "job": 128, "event": "compaction_finished", "compaction_time_micros": 86125, "compaction_time_cpu_micros": 49666, "output_level": 6, "num_output_files": 1, "total_output_size": 10907978, "num_input_records": 10469, "num_output_records": 9955, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 18:03:02 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000205.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:03:02 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093782846286, "job": 128, "event": "table_file_deletion", "file_number": 205}
Nov 25 18:03:02 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000203.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:03:02 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764093782849822, "job": 128, "event": "table_file_deletion", "file_number": 203}
Nov 25 18:03:02 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:03:02.756711) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:03:02 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:03:02.849913) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:03:02 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:03:02.849919) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:03:02 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:03:02.849921) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:03:02 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:03:02.849924) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:03:02 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:03:02.849927) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:03:03 compute-0 ceph-mon[74985]: pgmap v4207: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:04 compute-0 nova_compute[254092]: 2025-11-25 18:03:04.326 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:03:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4208: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:05 compute-0 ceph-mon[74985]: pgmap v4208: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:03:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4209: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:06 compute-0 nova_compute[254092]: 2025-11-25 18:03:06.889 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:03:07 compute-0 ceph-mon[74985]: pgmap v4209: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4210: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:09 compute-0 nova_compute[254092]: 2025-11-25 18:03:09.330 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:03:09 compute-0 ceph-mon[74985]: pgmap v4210: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:03:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:03:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:03:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:03:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:03:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:03:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4211: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:10 compute-0 sudo[467709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:03:10 compute-0 sudo[467709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:03:10 compute-0 sudo[467709]: pam_unix(sudo:session): session closed for user root
Nov 25 18:03:10 compute-0 sudo[467734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:03:10 compute-0 sudo[467734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:03:10 compute-0 sudo[467734]: pam_unix(sudo:session): session closed for user root
Nov 25 18:03:10 compute-0 sudo[467759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:03:10 compute-0 sudo[467759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:03:10 compute-0 sudo[467759]: pam_unix(sudo:session): session closed for user root
Nov 25 18:03:10 compute-0 sudo[467784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 18:03:10 compute-0 sudo[467784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:03:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:03:11 compute-0 sudo[467784]: pam_unix(sudo:session): session closed for user root
Nov 25 18:03:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:03:11 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:03:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 18:03:11 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:03:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 18:03:11 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:03:11 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 651ca80a-56ef-4eba-844b-829f02288cc4 does not exist
Nov 25 18:03:11 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev e89301fb-ad9a-4eaf-840f-b45b3fff0f49 does not exist
Nov 25 18:03:11 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev ac5c2c7e-c51d-4639-9d5a-b8e46e5edd85 does not exist
Nov 25 18:03:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 18:03:11 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:03:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 18:03:11 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:03:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:03:11 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:03:11 compute-0 sudo[467840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:03:11 compute-0 sudo[467840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:03:11 compute-0 sudo[467840]: pam_unix(sudo:session): session closed for user root
Nov 25 18:03:11 compute-0 sudo[467865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:03:11 compute-0 sudo[467865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:03:11 compute-0 sudo[467865]: pam_unix(sudo:session): session closed for user root
Nov 25 18:03:11 compute-0 sudo[467890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:03:11 compute-0 sudo[467890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:03:11 compute-0 sudo[467890]: pam_unix(sudo:session): session closed for user root
Nov 25 18:03:11 compute-0 sudo[467915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 18:03:11 compute-0 sudo[467915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:03:11 compute-0 ceph-mon[74985]: pgmap v4211: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:11 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:03:11 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:03:11 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:03:11 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:03:11 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:03:11 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:03:11 compute-0 podman[467980]: 2025-11-25 18:03:11.866789808 +0000 UTC m=+0.039886363 container create 1c379613b704992f07ca3b5fffe2f70eb13892f214a1c14d7fd58ae0fd7d7af6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:03:11 compute-0 nova_compute[254092]: 2025-11-25 18:03:11.890 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:03:11 compute-0 systemd[1]: Started libpod-conmon-1c379613b704992f07ca3b5fffe2f70eb13892f214a1c14d7fd58ae0fd7d7af6.scope.
Nov 25 18:03:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:03:11 compute-0 podman[467980]: 2025-11-25 18:03:11.848368219 +0000 UTC m=+0.021464764 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:03:11 compute-0 podman[467980]: 2025-11-25 18:03:11.953861062 +0000 UTC m=+0.126957607 container init 1c379613b704992f07ca3b5fffe2f70eb13892f214a1c14d7fd58ae0fd7d7af6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:03:11 compute-0 podman[467980]: 2025-11-25 18:03:11.960884882 +0000 UTC m=+0.133981417 container start 1c379613b704992f07ca3b5fffe2f70eb13892f214a1c14d7fd58ae0fd7d7af6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 18:03:11 compute-0 podman[467980]: 2025-11-25 18:03:11.96414369 +0000 UTC m=+0.137240245 container attach 1c379613b704992f07ca3b5fffe2f70eb13892f214a1c14d7fd58ae0fd7d7af6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 18:03:11 compute-0 bold_brahmagupta[467996]: 167 167
Nov 25 18:03:11 compute-0 podman[467980]: 2025-11-25 18:03:11.968443157 +0000 UTC m=+0.141539692 container died 1c379613b704992f07ca3b5fffe2f70eb13892f214a1c14d7fd58ae0fd7d7af6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_brahmagupta, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 18:03:11 compute-0 systemd[1]: libpod-1c379613b704992f07ca3b5fffe2f70eb13892f214a1c14d7fd58ae0fd7d7af6.scope: Deactivated successfully.
Nov 25 18:03:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-252093b1dd2f67f6d4a95ec2fdc32acf2234ff029248c57ad345f042c7741901-merged.mount: Deactivated successfully.
Nov 25 18:03:12 compute-0 podman[467980]: 2025-11-25 18:03:12.006249313 +0000 UTC m=+0.179345838 container remove 1c379613b704992f07ca3b5fffe2f70eb13892f214a1c14d7fd58ae0fd7d7af6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_brahmagupta, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 18:03:12 compute-0 systemd[1]: libpod-conmon-1c379613b704992f07ca3b5fffe2f70eb13892f214a1c14d7fd58ae0fd7d7af6.scope: Deactivated successfully.
Nov 25 18:03:12 compute-0 podman[468022]: 2025-11-25 18:03:12.174847167 +0000 UTC m=+0.046026389 container create cf07074dbcc3d20c01e32e4d77378b0cb88626e0650ec462db7ef837a2a6f8f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 18:03:12 compute-0 systemd[1]: Started libpod-conmon-cf07074dbcc3d20c01e32e4d77378b0cb88626e0650ec462db7ef837a2a6f8f5.scope.
Nov 25 18:03:12 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:03:12 compute-0 podman[468022]: 2025-11-25 18:03:12.156529211 +0000 UTC m=+0.027708423 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:03:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e823d8198b78ec5e91c71bf4bb7a7c0e4ea320b73101261ba10cdf06d8f77ec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:03:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e823d8198b78ec5e91c71bf4bb7a7c0e4ea320b73101261ba10cdf06d8f77ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:03:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e823d8198b78ec5e91c71bf4bb7a7c0e4ea320b73101261ba10cdf06d8f77ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:03:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e823d8198b78ec5e91c71bf4bb7a7c0e4ea320b73101261ba10cdf06d8f77ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:03:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e823d8198b78ec5e91c71bf4bb7a7c0e4ea320b73101261ba10cdf06d8f77ec/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:03:12 compute-0 podman[468022]: 2025-11-25 18:03:12.272691232 +0000 UTC m=+0.143870444 container init cf07074dbcc3d20c01e32e4d77378b0cb88626e0650ec462db7ef837a2a6f8f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:03:12 compute-0 podman[468022]: 2025-11-25 18:03:12.281365018 +0000 UTC m=+0.152544220 container start cf07074dbcc3d20c01e32e4d77378b0cb88626e0650ec462db7ef837a2a6f8f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 18:03:12 compute-0 podman[468022]: 2025-11-25 18:03:12.28472496 +0000 UTC m=+0.155904182 container attach cf07074dbcc3d20c01e32e4d77378b0cb88626e0650ec462db7ef837a2a6f8f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kalam, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:03:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4212: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:13 compute-0 gallant_kalam[468039]: --> passed data devices: 0 physical, 3 LVM
Nov 25 18:03:13 compute-0 gallant_kalam[468039]: --> relative data size: 1.0
Nov 25 18:03:13 compute-0 gallant_kalam[468039]: --> All data devices are unavailable
Nov 25 18:03:13 compute-0 systemd[1]: libpod-cf07074dbcc3d20c01e32e4d77378b0cb88626e0650ec462db7ef837a2a6f8f5.scope: Deactivated successfully.
Nov 25 18:03:13 compute-0 systemd[1]: libpod-cf07074dbcc3d20c01e32e4d77378b0cb88626e0650ec462db7ef837a2a6f8f5.scope: Consumed 1.029s CPU time.
Nov 25 18:03:13 compute-0 podman[468022]: 2025-11-25 18:03:13.356395718 +0000 UTC m=+1.227574950 container died cf07074dbcc3d20c01e32e4d77378b0cb88626e0650ec462db7ef837a2a6f8f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kalam, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:03:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e823d8198b78ec5e91c71bf4bb7a7c0e4ea320b73101261ba10cdf06d8f77ec-merged.mount: Deactivated successfully.
Nov 25 18:03:13 compute-0 podman[468022]: 2025-11-25 18:03:13.438986339 +0000 UTC m=+1.310165581 container remove cf07074dbcc3d20c01e32e4d77378b0cb88626e0650ec462db7ef837a2a6f8f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kalam, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:03:13 compute-0 systemd[1]: libpod-conmon-cf07074dbcc3d20c01e32e4d77378b0cb88626e0650ec462db7ef837a2a6f8f5.scope: Deactivated successfully.
Nov 25 18:03:13 compute-0 sudo[467915]: pam_unix(sudo:session): session closed for user root
Nov 25 18:03:13 compute-0 sudo[468080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:03:13 compute-0 sudo[468080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:03:13 compute-0 sudo[468080]: pam_unix(sudo:session): session closed for user root
Nov 25 18:03:13 compute-0 sudo[468105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:03:13 compute-0 sudo[468105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:03:13 compute-0 sudo[468105]: pam_unix(sudo:session): session closed for user root
Nov 25 18:03:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 18:03:13.709 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 18:03:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 18:03:13.711 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 18:03:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 18:03:13.711 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:03:13 compute-0 sudo[468130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:03:13 compute-0 sudo[468130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:03:13 compute-0 sudo[468130]: pam_unix(sudo:session): session closed for user root
Nov 25 18:03:13 compute-0 sudo[468155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 18:03:13 compute-0 sudo[468155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:03:13 compute-0 ceph-mon[74985]: pgmap v4212: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:14 compute-0 podman[468221]: 2025-11-25 18:03:14.303197499 +0000 UTC m=+0.062270021 container create a6ad5deb6a8f1d8cf3c269aa0361dc414e77853cefcd52fdaee8f6979148d542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_ardinghelli, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 18:03:14 compute-0 systemd[1]: Started libpod-conmon-a6ad5deb6a8f1d8cf3c269aa0361dc414e77853cefcd52fdaee8f6979148d542.scope.
Nov 25 18:03:14 compute-0 nova_compute[254092]: 2025-11-25 18:03:14.334 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:03:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4213: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:14 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:03:14 compute-0 podman[468221]: 2025-11-25 18:03:14.375415998 +0000 UTC m=+0.134488520 container init a6ad5deb6a8f1d8cf3c269aa0361dc414e77853cefcd52fdaee8f6979148d542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_ardinghelli, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:03:14 compute-0 podman[468221]: 2025-11-25 18:03:14.282329363 +0000 UTC m=+0.041401935 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:03:14 compute-0 podman[468221]: 2025-11-25 18:03:14.382611893 +0000 UTC m=+0.141684435 container start a6ad5deb6a8f1d8cf3c269aa0361dc414e77853cefcd52fdaee8f6979148d542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_ardinghelli, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:03:14 compute-0 podman[468221]: 2025-11-25 18:03:14.385992676 +0000 UTC m=+0.145065228 container attach a6ad5deb6a8f1d8cf3c269aa0361dc414e77853cefcd52fdaee8f6979148d542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_ardinghelli, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:03:14 compute-0 beautiful_ardinghelli[468238]: 167 167
Nov 25 18:03:14 compute-0 systemd[1]: libpod-a6ad5deb6a8f1d8cf3c269aa0361dc414e77853cefcd52fdaee8f6979148d542.scope: Deactivated successfully.
Nov 25 18:03:14 compute-0 podman[468221]: 2025-11-25 18:03:14.389501311 +0000 UTC m=+0.148573843 container died a6ad5deb6a8f1d8cf3c269aa0361dc414e77853cefcd52fdaee8f6979148d542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:03:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-dbe82724a27541c7e8cdfad59aa08c0129962a5069f97814df5553623323f0ad-merged.mount: Deactivated successfully.
Nov 25 18:03:14 compute-0 podman[468221]: 2025-11-25 18:03:14.428441567 +0000 UTC m=+0.187514089 container remove a6ad5deb6a8f1d8cf3c269aa0361dc414e77853cefcd52fdaee8f6979148d542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 18:03:14 compute-0 systemd[1]: libpod-conmon-a6ad5deb6a8f1d8cf3c269aa0361dc414e77853cefcd52fdaee8f6979148d542.scope: Deactivated successfully.
Nov 25 18:03:14 compute-0 podman[468264]: 2025-11-25 18:03:14.603743143 +0000 UTC m=+0.040351015 container create f6b6a5c66621772fab24c617ad5ab32bbfbc48a85bdf1a51058fc2a0a0c73b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shannon, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 25 18:03:14 compute-0 systemd[1]: Started libpod-conmon-f6b6a5c66621772fab24c617ad5ab32bbfbc48a85bdf1a51058fc2a0a0c73b35.scope.
Nov 25 18:03:14 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:03:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bba706711e9d924f4c39bc951273d25cacbce329d1d149f91ea6b0afd76b57d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:03:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bba706711e9d924f4c39bc951273d25cacbce329d1d149f91ea6b0afd76b57d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:03:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bba706711e9d924f4c39bc951273d25cacbce329d1d149f91ea6b0afd76b57d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:03:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bba706711e9d924f4c39bc951273d25cacbce329d1d149f91ea6b0afd76b57d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:03:14 compute-0 podman[468264]: 2025-11-25 18:03:14.588087469 +0000 UTC m=+0.024695351 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:03:14 compute-0 podman[468264]: 2025-11-25 18:03:14.694700882 +0000 UTC m=+0.131308764 container init f6b6a5c66621772fab24c617ad5ab32bbfbc48a85bdf1a51058fc2a0a0c73b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 18:03:14 compute-0 podman[468264]: 2025-11-25 18:03:14.701027293 +0000 UTC m=+0.137635165 container start f6b6a5c66621772fab24c617ad5ab32bbfbc48a85bdf1a51058fc2a0a0c73b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shannon, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 18:03:14 compute-0 podman[468264]: 2025-11-25 18:03:14.703616234 +0000 UTC m=+0.140224156 container attach f6b6a5c66621772fab24c617ad5ab32bbfbc48a85bdf1a51058fc2a0a0c73b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shannon, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:03:15 compute-0 bold_shannon[468281]: {
Nov 25 18:03:15 compute-0 bold_shannon[468281]:     "0": [
Nov 25 18:03:15 compute-0 bold_shannon[468281]:         {
Nov 25 18:03:15 compute-0 bold_shannon[468281]:             "devices": [
Nov 25 18:03:15 compute-0 bold_shannon[468281]:                 "/dev/loop3"
Nov 25 18:03:15 compute-0 bold_shannon[468281]:             ],
Nov 25 18:03:15 compute-0 bold_shannon[468281]:             "lv_name": "ceph_lv0",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:             "lv_size": "21470642176",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:             "name": "ceph_lv0",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:             "tags": {
Nov 25 18:03:15 compute-0 bold_shannon[468281]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:                 "ceph.cluster_name": "ceph",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:                 "ceph.crush_device_class": "",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:                 "ceph.encrypted": "0",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:                 "ceph.osd_id": "0",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:                 "ceph.type": "block",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:                 "ceph.vdo": "0"
Nov 25 18:03:15 compute-0 bold_shannon[468281]:             },
Nov 25 18:03:15 compute-0 bold_shannon[468281]:             "type": "block",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:             "vg_name": "ceph_vg0"
Nov 25 18:03:15 compute-0 bold_shannon[468281]:         }
Nov 25 18:03:15 compute-0 bold_shannon[468281]:     ],
Nov 25 18:03:15 compute-0 bold_shannon[468281]:     "1": [
Nov 25 18:03:15 compute-0 bold_shannon[468281]:         {
Nov 25 18:03:15 compute-0 bold_shannon[468281]:             "devices": [
Nov 25 18:03:15 compute-0 bold_shannon[468281]:                 "/dev/loop4"
Nov 25 18:03:15 compute-0 bold_shannon[468281]:             ],
Nov 25 18:03:15 compute-0 bold_shannon[468281]:             "lv_name": "ceph_lv1",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:             "lv_size": "21470642176",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:             "name": "ceph_lv1",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:             "tags": {
Nov 25 18:03:15 compute-0 bold_shannon[468281]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:                 "ceph.cluster_name": "ceph",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:                 "ceph.crush_device_class": "",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:                 "ceph.encrypted": "0",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:                 "ceph.osd_id": "1",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:                 "ceph.type": "block",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:                 "ceph.vdo": "0"
Nov 25 18:03:15 compute-0 bold_shannon[468281]:             },
Nov 25 18:03:15 compute-0 bold_shannon[468281]:             "type": "block",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:             "vg_name": "ceph_vg1"
Nov 25 18:03:15 compute-0 bold_shannon[468281]:         }
Nov 25 18:03:15 compute-0 bold_shannon[468281]:     ],
Nov 25 18:03:15 compute-0 bold_shannon[468281]:     "2": [
Nov 25 18:03:15 compute-0 bold_shannon[468281]:         {
Nov 25 18:03:15 compute-0 bold_shannon[468281]:             "devices": [
Nov 25 18:03:15 compute-0 bold_shannon[468281]:                 "/dev/loop5"
Nov 25 18:03:15 compute-0 bold_shannon[468281]:             ],
Nov 25 18:03:15 compute-0 bold_shannon[468281]:             "lv_name": "ceph_lv2",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:             "lv_size": "21470642176",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:             "name": "ceph_lv2",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:             "tags": {
Nov 25 18:03:15 compute-0 bold_shannon[468281]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:                 "ceph.cluster_name": "ceph",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:                 "ceph.crush_device_class": "",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:                 "ceph.encrypted": "0",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:                 "ceph.osd_id": "2",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:                 "ceph.type": "block",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:                 "ceph.vdo": "0"
Nov 25 18:03:15 compute-0 bold_shannon[468281]:             },
Nov 25 18:03:15 compute-0 bold_shannon[468281]:             "type": "block",
Nov 25 18:03:15 compute-0 bold_shannon[468281]:             "vg_name": "ceph_vg2"
Nov 25 18:03:15 compute-0 bold_shannon[468281]:         }
Nov 25 18:03:15 compute-0 bold_shannon[468281]:     ]
Nov 25 18:03:15 compute-0 bold_shannon[468281]: }
Nov 25 18:03:15 compute-0 systemd[1]: libpod-f6b6a5c66621772fab24c617ad5ab32bbfbc48a85bdf1a51058fc2a0a0c73b35.scope: Deactivated successfully.
Nov 25 18:03:15 compute-0 podman[468264]: 2025-11-25 18:03:15.490856705 +0000 UTC m=+0.927464587 container died f6b6a5c66621772fab24c617ad5ab32bbfbc48a85bdf1a51058fc2a0a0c73b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:03:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-3bba706711e9d924f4c39bc951273d25cacbce329d1d149f91ea6b0afd76b57d-merged.mount: Deactivated successfully.
Nov 25 18:03:15 compute-0 podman[468264]: 2025-11-25 18:03:15.560735341 +0000 UTC m=+0.997343223 container remove f6b6a5c66621772fab24c617ad5ab32bbfbc48a85bdf1a51058fc2a0a0c73b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 18:03:15 compute-0 systemd[1]: libpod-conmon-f6b6a5c66621772fab24c617ad5ab32bbfbc48a85bdf1a51058fc2a0a0c73b35.scope: Deactivated successfully.
Nov 25 18:03:15 compute-0 sudo[468155]: pam_unix(sudo:session): session closed for user root
Nov 25 18:03:15 compute-0 sudo[468301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:03:15 compute-0 sudo[468301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:03:15 compute-0 sudo[468301]: pam_unix(sudo:session): session closed for user root
Nov 25 18:03:15 compute-0 sudo[468326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:03:15 compute-0 sudo[468326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:03:15 compute-0 sudo[468326]: pam_unix(sudo:session): session closed for user root
Nov 25 18:03:15 compute-0 sudo[468351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:03:15 compute-0 sudo[468351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:03:15 compute-0 sudo[468351]: pam_unix(sudo:session): session closed for user root
Nov 25 18:03:15 compute-0 ceph-mon[74985]: pgmap v4213: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:15 compute-0 sudo[468376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 18:03:15 compute-0 sudo[468376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:03:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:03:16 compute-0 podman[468441]: 2025-11-25 18:03:16.285056395 +0000 UTC m=+0.055035184 container create 5b20e2d6aee160fe392aaecf149acef3a0c69635e6623e227cd45699273ae6db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 18:03:16 compute-0 systemd[1]: Started libpod-conmon-5b20e2d6aee160fe392aaecf149acef3a0c69635e6623e227cd45699273ae6db.scope.
Nov 25 18:03:16 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:03:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4214: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:16 compute-0 podman[468441]: 2025-11-25 18:03:16.260801767 +0000 UTC m=+0.030780596 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:03:16 compute-0 podman[468441]: 2025-11-25 18:03:16.365909258 +0000 UTC m=+0.135888107 container init 5b20e2d6aee160fe392aaecf149acef3a0c69635e6623e227cd45699273ae6db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mahavira, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:03:16 compute-0 podman[468441]: 2025-11-25 18:03:16.377499304 +0000 UTC m=+0.147478093 container start 5b20e2d6aee160fe392aaecf149acef3a0c69635e6623e227cd45699273ae6db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mahavira, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:03:16 compute-0 podman[468441]: 2025-11-25 18:03:16.381592214 +0000 UTC m=+0.151571073 container attach 5b20e2d6aee160fe392aaecf149acef3a0c69635e6623e227cd45699273ae6db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mahavira, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 18:03:16 compute-0 stoic_mahavira[468457]: 167 167
Nov 25 18:03:16 compute-0 systemd[1]: libpod-5b20e2d6aee160fe392aaecf149acef3a0c69635e6623e227cd45699273ae6db.scope: Deactivated successfully.
Nov 25 18:03:16 compute-0 podman[468441]: 2025-11-25 18:03:16.38287668 +0000 UTC m=+0.152855459 container died 5b20e2d6aee160fe392aaecf149acef3a0c69635e6623e227cd45699273ae6db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 18:03:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e0aa0154967610bd61d527e30ae15f4c48efcf70768452a316925635f76ea3d-merged.mount: Deactivated successfully.
Nov 25 18:03:16 compute-0 podman[468441]: 2025-11-25 18:03:16.424297823 +0000 UTC m=+0.194276592 container remove 5b20e2d6aee160fe392aaecf149acef3a0c69635e6623e227cd45699273ae6db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mahavira, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 18:03:16 compute-0 systemd[1]: libpod-conmon-5b20e2d6aee160fe392aaecf149acef3a0c69635e6623e227cd45699273ae6db.scope: Deactivated successfully.
Nov 25 18:03:16 compute-0 podman[468480]: 2025-11-25 18:03:16.613296982 +0000 UTC m=+0.048767235 container create 88d17dc24aa82937a3ba8c3dc34b0bdfe883ecd3ff4c4960d40b7f98275b6c95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_ellis, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:03:16 compute-0 systemd[1]: Started libpod-conmon-88d17dc24aa82937a3ba8c3dc34b0bdfe883ecd3ff4c4960d40b7f98275b6c95.scope.
Nov 25 18:03:16 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:03:16 compute-0 podman[468480]: 2025-11-25 18:03:16.588293863 +0000 UTC m=+0.023764226 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:03:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41de73a1031e19a7ea0bc831a8a3c0f2ea9d6d1551929ea3508e0e165a6fbdb6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:03:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41de73a1031e19a7ea0bc831a8a3c0f2ea9d6d1551929ea3508e0e165a6fbdb6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:03:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41de73a1031e19a7ea0bc831a8a3c0f2ea9d6d1551929ea3508e0e165a6fbdb6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:03:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41de73a1031e19a7ea0bc831a8a3c0f2ea9d6d1551929ea3508e0e165a6fbdb6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:03:16 compute-0 podman[468480]: 2025-11-25 18:03:16.699899172 +0000 UTC m=+0.135369475 container init 88d17dc24aa82937a3ba8c3dc34b0bdfe883ecd3ff4c4960d40b7f98275b6c95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_ellis, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:03:16 compute-0 podman[468480]: 2025-11-25 18:03:16.707172478 +0000 UTC m=+0.142642741 container start 88d17dc24aa82937a3ba8c3dc34b0bdfe883ecd3ff4c4960d40b7f98275b6c95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:03:16 compute-0 podman[468480]: 2025-11-25 18:03:16.710681624 +0000 UTC m=+0.146151887 container attach 88d17dc24aa82937a3ba8c3dc34b0bdfe883ecd3ff4c4960d40b7f98275b6c95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:03:16 compute-0 nova_compute[254092]: 2025-11-25 18:03:16.931 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:03:17 compute-0 vigorous_ellis[468496]: {
Nov 25 18:03:17 compute-0 vigorous_ellis[468496]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 18:03:17 compute-0 vigorous_ellis[468496]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:03:17 compute-0 vigorous_ellis[468496]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 18:03:17 compute-0 vigorous_ellis[468496]:         "osd_id": 1,
Nov 25 18:03:17 compute-0 vigorous_ellis[468496]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 18:03:17 compute-0 vigorous_ellis[468496]:         "type": "bluestore"
Nov 25 18:03:17 compute-0 vigorous_ellis[468496]:     },
Nov 25 18:03:17 compute-0 vigorous_ellis[468496]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 18:03:17 compute-0 vigorous_ellis[468496]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:03:17 compute-0 vigorous_ellis[468496]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 18:03:17 compute-0 vigorous_ellis[468496]:         "osd_id": 2,
Nov 25 18:03:17 compute-0 vigorous_ellis[468496]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 18:03:17 compute-0 vigorous_ellis[468496]:         "type": "bluestore"
Nov 25 18:03:17 compute-0 vigorous_ellis[468496]:     },
Nov 25 18:03:17 compute-0 vigorous_ellis[468496]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 18:03:17 compute-0 vigorous_ellis[468496]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:03:17 compute-0 vigorous_ellis[468496]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 18:03:17 compute-0 vigorous_ellis[468496]:         "osd_id": 0,
Nov 25 18:03:17 compute-0 vigorous_ellis[468496]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 18:03:17 compute-0 vigorous_ellis[468496]:         "type": "bluestore"
Nov 25 18:03:17 compute-0 vigorous_ellis[468496]:     }
Nov 25 18:03:17 compute-0 vigorous_ellis[468496]: }
Nov 25 18:03:17 compute-0 systemd[1]: libpod-88d17dc24aa82937a3ba8c3dc34b0bdfe883ecd3ff4c4960d40b7f98275b6c95.scope: Deactivated successfully.
Nov 25 18:03:17 compute-0 systemd[1]: libpod-88d17dc24aa82937a3ba8c3dc34b0bdfe883ecd3ff4c4960d40b7f98275b6c95.scope: Consumed 1.026s CPU time.
Nov 25 18:03:17 compute-0 conmon[468496]: conmon 88d17dc24aa82937a3ba <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-88d17dc24aa82937a3ba8c3dc34b0bdfe883ecd3ff4c4960d40b7f98275b6c95.scope/container/memory.events
Nov 25 18:03:17 compute-0 podman[468480]: 2025-11-25 18:03:17.72585371 +0000 UTC m=+1.161323953 container died 88d17dc24aa82937a3ba8c3dc34b0bdfe883ecd3ff4c4960d40b7f98275b6c95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_ellis, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 18:03:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-41de73a1031e19a7ea0bc831a8a3c0f2ea9d6d1551929ea3508e0e165a6fbdb6-merged.mount: Deactivated successfully.
Nov 25 18:03:17 compute-0 podman[468480]: 2025-11-25 18:03:17.786881976 +0000 UTC m=+1.222352229 container remove 88d17dc24aa82937a3ba8c3dc34b0bdfe883ecd3ff4c4960d40b7f98275b6c95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_ellis, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 18:03:17 compute-0 systemd[1]: libpod-conmon-88d17dc24aa82937a3ba8c3dc34b0bdfe883ecd3ff4c4960d40b7f98275b6c95.scope: Deactivated successfully.
Nov 25 18:03:17 compute-0 sudo[468376]: pam_unix(sudo:session): session closed for user root
Nov 25 18:03:17 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:03:17 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:03:17 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:03:17 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:03:17 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 1ec295e1-33cb-4d55-bca3-c92a46241b5d does not exist
Nov 25 18:03:17 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev d5e2b15e-666e-4ff4-8b98-336365dcd4ed does not exist
Nov 25 18:03:17 compute-0 ceph-mon[74985]: pgmap v4214: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:17 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:03:17 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:03:17 compute-0 sudo[468543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:03:17 compute-0 sudo[468543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:03:17 compute-0 sudo[468543]: pam_unix(sudo:session): session closed for user root
Nov 25 18:03:18 compute-0 sudo[468568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 18:03:18 compute-0 sudo[468568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:03:18 compute-0 sudo[468568]: pam_unix(sudo:session): session closed for user root
Nov 25 18:03:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4215: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:19 compute-0 nova_compute[254092]: 2025-11-25 18:03:19.337 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:03:19 compute-0 podman[468593]: 2025-11-25 18:03:19.656042994 +0000 UTC m=+0.076794054 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 18:03:19 compute-0 podman[468594]: 2025-11-25 18:03:19.657422281 +0000 UTC m=+0.067800450 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Nov 25 18:03:19 compute-0 podman[468595]: 2025-11-25 18:03:19.681344991 +0000 UTC m=+0.099731018 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 25 18:03:19 compute-0 ceph-mon[74985]: pgmap v4215: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4216: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:20 compute-0 nova_compute[254092]: 2025-11-25 18:03:20.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:03:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:03:21 compute-0 nova_compute[254092]: 2025-11-25 18:03:21.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:03:21 compute-0 nova_compute[254092]: 2025-11-25 18:03:21.932 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:03:22 compute-0 ceph-mon[74985]: pgmap v4216: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4217: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:24 compute-0 ceph-mon[74985]: pgmap v4217: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:24 compute-0 nova_compute[254092]: 2025-11-25 18:03:24.340 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:03:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4218: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:25 compute-0 nova_compute[254092]: 2025-11-25 18:03:25.490 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:03:26 compute-0 ceph-mon[74985]: pgmap v4218: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:03:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4219: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:26 compute-0 nova_compute[254092]: 2025-11-25 18:03:26.976 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:03:27 compute-0 nova_compute[254092]: 2025-11-25 18:03:27.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:03:28 compute-0 ceph-mon[74985]: pgmap v4219: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4220: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:29 compute-0 nova_compute[254092]: 2025-11-25 18:03:29.344 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:03:30 compute-0 ceph-mon[74985]: pgmap v4220: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4221: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:30 compute-0 nova_compute[254092]: 2025-11-25 18:03:30.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:03:30 compute-0 nova_compute[254092]: 2025-11-25 18:03:30.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 18:03:30 compute-0 nova_compute[254092]: 2025-11-25 18:03:30.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 18:03:30 compute-0 nova_compute[254092]: 2025-11-25 18:03:30.519 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 18:03:30 compute-0 nova_compute[254092]: 2025-11-25 18:03:30.520 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:03:30 compute-0 nova_compute[254092]: 2025-11-25 18:03:30.552 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 18:03:30 compute-0 nova_compute[254092]: 2025-11-25 18:03:30.553 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 18:03:30 compute-0 nova_compute[254092]: 2025-11-25 18:03:30.553 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:03:30 compute-0 nova_compute[254092]: 2025-11-25 18:03:30.553 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 18:03:30 compute-0 nova_compute[254092]: 2025-11-25 18:03:30.554 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 18:03:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 18:03:30 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3197200820' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:03:30 compute-0 nova_compute[254092]: 2025-11-25 18:03:30.997 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 18:03:31 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3197200820' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:03:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:03:31 compute-0 nova_compute[254092]: 2025-11-25 18:03:31.128 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 18:03:31 compute-0 nova_compute[254092]: 2025-11-25 18:03:31.129 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3590MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 18:03:31 compute-0 nova_compute[254092]: 2025-11-25 18:03:31.129 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 18:03:31 compute-0 nova_compute[254092]: 2025-11-25 18:03:31.129 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 18:03:31 compute-0 nova_compute[254092]: 2025-11-25 18:03:31.201 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 18:03:31 compute-0 nova_compute[254092]: 2025-11-25 18:03:31.202 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 18:03:31 compute-0 nova_compute[254092]: 2025-11-25 18:03:31.231 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 18:03:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 18:03:31 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/884788986' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:03:31 compute-0 nova_compute[254092]: 2025-11-25 18:03:31.658 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 18:03:31 compute-0 nova_compute[254092]: 2025-11-25 18:03:31.664 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 18:03:31 compute-0 nova_compute[254092]: 2025-11-25 18:03:31.681 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 18:03:31 compute-0 nova_compute[254092]: 2025-11-25 18:03:31.683 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 18:03:31 compute-0 nova_compute[254092]: 2025-11-25 18:03:31.683 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.554s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:03:31 compute-0 nova_compute[254092]: 2025-11-25 18:03:31.978 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:03:32 compute-0 ceph-mon[74985]: pgmap v4221: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:32 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/884788986' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:03:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4222: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:32 compute-0 nova_compute[254092]: 2025-11-25 18:03:32.660 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:03:34 compute-0 ceph-mon[74985]: pgmap v4222: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:34 compute-0 nova_compute[254092]: 2025-11-25 18:03:34.346 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:03:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4223: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:03:36 compute-0 ceph-mon[74985]: pgmap v4223: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4224: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:36 compute-0 nova_compute[254092]: 2025-11-25 18:03:36.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:03:36 compute-0 nova_compute[254092]: 2025-11-25 18:03:36.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 18:03:36 compute-0 nova_compute[254092]: 2025-11-25 18:03:36.981 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:03:38 compute-0 ceph-mon[74985]: pgmap v4224: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4225: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:39 compute-0 nova_compute[254092]: 2025-11-25 18:03:39.349 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:03:40 compute-0 ceph-mon[74985]: pgmap v4225: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:03:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:03:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:03:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:03:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:03:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:03:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_18:03:40
Nov 25 18:03:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 18:03:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 18:03:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['backups', '.rgw.root', 'default.rgw.meta', '.mgr', 'default.rgw.control', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', 'vms']
Nov 25 18:03:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 18:03:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4226: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:40 compute-0 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 18:03:40 compute-0 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7800.2 total, 600.0 interval
                                           Cumulative writes: 45K writes, 182K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.02 MB/s
                                           Cumulative WAL: 45K writes, 16K syncs, 2.78 writes per sync, written: 0.17 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 264 writes, 396 keys, 264 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s
                                           Interval WAL: 264 writes, 132 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 18:03:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 18:03:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:03:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:03:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:03:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:03:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 18:03:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:03:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:03:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:03:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:03:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:03:42 compute-0 nova_compute[254092]: 2025-11-25 18:03:42.030 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:03:42 compute-0 ceph-mon[74985]: pgmap v4226: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4227: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:42 compute-0 nova_compute[254092]: 2025-11-25 18:03:42.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:03:44 compute-0 ceph-mon[74985]: pgmap v4227: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:44 compute-0 nova_compute[254092]: 2025-11-25 18:03:44.353 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:03:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4228: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:03:46 compute-0 ceph-mon[74985]: pgmap v4228: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4229: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:47 compute-0 nova_compute[254092]: 2025-11-25 18:03:47.031 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:03:48 compute-0 ceph-mon[74985]: pgmap v4229: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4230: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:49 compute-0 nova_compute[254092]: 2025-11-25 18:03:49.356 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:03:50 compute-0 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 18:03:50 compute-0 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7801.2 total, 600.0 interval
                                           Cumulative writes: 47K writes, 186K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.02 MB/s
                                           Cumulative WAL: 47K writes, 17K syncs, 2.79 writes per sync, written: 0.18 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 220 writes, 330 keys, 220 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
                                           Interval WAL: 220 writes, 110 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 18:03:50 compute-0 ceph-mon[74985]: pgmap v4230: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4231: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:50 compute-0 podman[468700]: 2025-11-25 18:03:50.693257423 +0000 UTC m=+0.091197905 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 25 18:03:50 compute-0 podman[468699]: 2025-11-25 18:03:50.69461422 +0000 UTC m=+0.100359694 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 18:03:50 compute-0 podman[468701]: 2025-11-25 18:03:50.734109502 +0000 UTC m=+0.125820315 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 25 18:03:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:03:52 compute-0 nova_compute[254092]: 2025-11-25 18:03:52.034 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:03:52 compute-0 ceph-mon[74985]: pgmap v4231: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4232: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 18:03:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:03:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 18:03:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:03:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 18:03:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:03:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:03:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:03:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:03:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:03:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 18:03:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:03:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 18:03:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:03:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:03:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:03:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 18:03:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:03:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 18:03:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:03:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:03:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:03:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 18:03:54 compute-0 ceph-mon[74985]: pgmap v4232: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:54 compute-0 nova_compute[254092]: 2025-11-25 18:03:54.360 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:03:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4233: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 18:03:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/531718264' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 18:03:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 18:03:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/531718264' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 18:03:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:03:56 compute-0 ceph-mon[74985]: pgmap v4233: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/531718264' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 18:03:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/531718264' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 18:03:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4234: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:57 compute-0 nova_compute[254092]: 2025-11-25 18:03:57.035 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:03:58 compute-0 ceph-mon[74985]: pgmap v4234: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4235: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:03:59 compute-0 nova_compute[254092]: 2025-11-25 18:03:59.364 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:04:00 compute-0 ceph-mon[74985]: pgmap v4235: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4236: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:04:02 compute-0 nova_compute[254092]: 2025-11-25 18:04:02.089 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:04:02 compute-0 ceph-mon[74985]: pgmap v4236: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4237: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:04 compute-0 ceph-mon[74985]: pgmap v4237: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4238: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:04 compute-0 nova_compute[254092]: 2025-11-25 18:04:04.380 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:04:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:04:06 compute-0 ceph-mon[74985]: pgmap v4238: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4239: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:07 compute-0 nova_compute[254092]: 2025-11-25 18:04:07.089 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:04:07 compute-0 ceph-mon[74985]: pgmap v4239: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:07 compute-0 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 18:04:07 compute-0 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7801.5 total, 600.0 interval
                                           Cumulative writes: 39K writes, 155K keys, 39K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s
                                           Cumulative WAL: 39K writes, 14K syncs, 2.76 writes per sync, written: 0.15 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 216 writes, 324 keys, 216 commit groups, 1.0 writes per commit group, ingest: 0.10 MB, 0.00 MB/s
                                           Interval WAL: 216 writes, 108 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 18:04:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4240: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:09 compute-0 nova_compute[254092]: 2025-11-25 18:04:09.423 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:04:09 compute-0 ceph-mon[74985]: pgmap v4240: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:04:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:04:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:04:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:04:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:04:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:04:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4241: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:04:11 compute-0 ceph-mon[74985]: pgmap v4241: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:12 compute-0 nova_compute[254092]: 2025-11-25 18:04:12.127 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:04:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4242: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:13 compute-0 ceph-mon[74985]: pgmap v4242: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 18:04:13.711 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 18:04:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 18:04:13.711 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 18:04:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 18:04:13.711 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:04:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4243: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:14 compute-0 nova_compute[254092]: 2025-11-25 18:04:14.457 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:04:15 compute-0 ceph-mon[74985]: pgmap v4243: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:04:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4244: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:17 compute-0 nova_compute[254092]: 2025-11-25 18:04:17.127 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:04:17 compute-0 ceph-mon[74985]: pgmap v4244: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:18 compute-0 sudo[468760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:04:18 compute-0 sudo[468760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:04:18 compute-0 sudo[468760]: pam_unix(sudo:session): session closed for user root
Nov 25 18:04:18 compute-0 sudo[468785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:04:18 compute-0 sudo[468785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:04:18 compute-0 sudo[468785]: pam_unix(sudo:session): session closed for user root
Nov 25 18:04:18 compute-0 sudo[468810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:04:18 compute-0 sudo[468810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:04:18 compute-0 sudo[468810]: pam_unix(sudo:session): session closed for user root
Nov 25 18:04:18 compute-0 sudo[468835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 18:04:18 compute-0 sudo[468835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:04:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4245: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:18 compute-0 sudo[468835]: pam_unix(sudo:session): session closed for user root
Nov 25 18:04:18 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:04:18 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:04:18 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 18:04:18 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:04:18 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 18:04:18 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:04:18 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev a11f2b8f-0d81-4c28-9d17-245a5ee62d6c does not exist
Nov 25 18:04:18 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 921b8258-52fa-480d-b278-c17ba9aab9b3 does not exist
Nov 25 18:04:18 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 61eeedc0-7110-4414-a6de-76ba25fd6eac does not exist
Nov 25 18:04:18 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 18:04:18 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:04:18 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 18:04:18 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:04:18 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:04:18 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:04:19 compute-0 sudo[468891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:04:19 compute-0 sudo[468891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:04:19 compute-0 sudo[468891]: pam_unix(sudo:session): session closed for user root
Nov 25 18:04:19 compute-0 sudo[468916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:04:19 compute-0 sudo[468916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:04:19 compute-0 sudo[468916]: pam_unix(sudo:session): session closed for user root
Nov 25 18:04:19 compute-0 sudo[468941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:04:19 compute-0 sudo[468941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:04:19 compute-0 sudo[468941]: pam_unix(sudo:session): session closed for user root
Nov 25 18:04:19 compute-0 sudo[468966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 18:04:19 compute-0 sudo[468966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:04:19 compute-0 nova_compute[254092]: 2025-11-25 18:04:19.459 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:04:19 compute-0 podman[469030]: 2025-11-25 18:04:19.566919575 +0000 UTC m=+0.057026878 container create d820853bea1fd4ed1ec937bf96cb6ba322c83b3e3a99d3e0840e580d38734ac8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 18:04:19 compute-0 systemd[1]: Started libpod-conmon-d820853bea1fd4ed1ec937bf96cb6ba322c83b3e3a99d3e0840e580d38734ac8.scope.
Nov 25 18:04:19 compute-0 podman[469030]: 2025-11-25 18:04:19.543601723 +0000 UTC m=+0.033709006 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:04:19 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:04:19 compute-0 podman[469030]: 2025-11-25 18:04:19.676470888 +0000 UTC m=+0.166578241 container init d820853bea1fd4ed1ec937bf96cb6ba322c83b3e3a99d3e0840e580d38734ac8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 18:04:19 compute-0 podman[469030]: 2025-11-25 18:04:19.6857274 +0000 UTC m=+0.175834703 container start d820853bea1fd4ed1ec937bf96cb6ba322c83b3e3a99d3e0840e580d38734ac8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:04:19 compute-0 podman[469030]: 2025-11-25 18:04:19.689991395 +0000 UTC m=+0.180098668 container attach d820853bea1fd4ed1ec937bf96cb6ba322c83b3e3a99d3e0840e580d38734ac8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_roentgen, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 18:04:19 compute-0 great_roentgen[469046]: 167 167
Nov 25 18:04:19 compute-0 systemd[1]: libpod-d820853bea1fd4ed1ec937bf96cb6ba322c83b3e3a99d3e0840e580d38734ac8.scope: Deactivated successfully.
Nov 25 18:04:19 compute-0 conmon[469046]: conmon d820853bea1fd4ed1ec9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d820853bea1fd4ed1ec937bf96cb6ba322c83b3e3a99d3e0840e580d38734ac8.scope/container/memory.events
Nov 25 18:04:19 compute-0 podman[469030]: 2025-11-25 18:04:19.698477925 +0000 UTC m=+0.188585228 container died d820853bea1fd4ed1ec937bf96cb6ba322c83b3e3a99d3e0840e580d38734ac8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 18:04:19 compute-0 ceph-mon[74985]: pgmap v4245: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:19 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:04:19 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:04:19 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:04:19 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:04:19 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:04:19 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:04:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a9a702558cef9d4da448bfc26aab5bb675ca5fd9ddeb3a99a026fb98d30a4c9-merged.mount: Deactivated successfully.
Nov 25 18:04:19 compute-0 podman[469030]: 2025-11-25 18:04:19.752026368 +0000 UTC m=+0.242133651 container remove d820853bea1fd4ed1ec937bf96cb6ba322c83b3e3a99d3e0840e580d38734ac8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_roentgen, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:04:19 compute-0 systemd[1]: libpod-conmon-d820853bea1fd4ed1ec937bf96cb6ba322c83b3e3a99d3e0840e580d38734ac8.scope: Deactivated successfully.
Nov 25 18:04:19 compute-0 podman[469070]: 2025-11-25 18:04:19.964793931 +0000 UTC m=+0.067247365 container create de05f3f3ad5ee0aab3709cb2ed87bd8395ad68fab09e95e5f79a16d4e4fb077e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_euler, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:04:20 compute-0 systemd[1]: Started libpod-conmon-de05f3f3ad5ee0aab3709cb2ed87bd8395ad68fab09e95e5f79a16d4e4fb077e.scope.
Nov 25 18:04:20 compute-0 podman[469070]: 2025-11-25 18:04:19.942858036 +0000 UTC m=+0.045311510 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:04:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:04:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/464d24d5b3f406dd9d815bfd15ef272a1b7d51af28a2619016c24132c1229cd9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:04:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/464d24d5b3f406dd9d815bfd15ef272a1b7d51af28a2619016c24132c1229cd9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:04:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/464d24d5b3f406dd9d815bfd15ef272a1b7d51af28a2619016c24132c1229cd9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:04:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/464d24d5b3f406dd9d815bfd15ef272a1b7d51af28a2619016c24132c1229cd9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:04:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/464d24d5b3f406dd9d815bfd15ef272a1b7d51af28a2619016c24132c1229cd9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:04:20 compute-0 podman[469070]: 2025-11-25 18:04:20.086755171 +0000 UTC m=+0.189208615 container init de05f3f3ad5ee0aab3709cb2ed87bd8395ad68fab09e95e5f79a16d4e4fb077e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_euler, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:04:20 compute-0 podman[469070]: 2025-11-25 18:04:20.097093161 +0000 UTC m=+0.199546585 container start de05f3f3ad5ee0aab3709cb2ed87bd8395ad68fab09e95e5f79a16d4e4fb077e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:04:20 compute-0 podman[469070]: 2025-11-25 18:04:20.100881294 +0000 UTC m=+0.203334758 container attach de05f3f3ad5ee0aab3709cb2ed87bd8395ad68fab09e95e5f79a16d4e4fb077e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_euler, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:04:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4246: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:20 compute-0 nova_compute[254092]: 2025-11-25 18:04:20.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:04:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:04:21 compute-0 practical_euler[469087]: --> passed data devices: 0 physical, 3 LVM
Nov 25 18:04:21 compute-0 practical_euler[469087]: --> relative data size: 1.0
Nov 25 18:04:21 compute-0 practical_euler[469087]: --> All data devices are unavailable
Nov 25 18:04:21 compute-0 systemd[1]: libpod-de05f3f3ad5ee0aab3709cb2ed87bd8395ad68fab09e95e5f79a16d4e4fb077e.scope: Deactivated successfully.
Nov 25 18:04:21 compute-0 systemd[1]: libpod-de05f3f3ad5ee0aab3709cb2ed87bd8395ad68fab09e95e5f79a16d4e4fb077e.scope: Consumed 1.196s CPU time.
Nov 25 18:04:21 compute-0 podman[469116]: 2025-11-25 18:04:21.400799397 +0000 UTC m=+0.032996237 container died de05f3f3ad5ee0aab3709cb2ed87bd8395ad68fab09e95e5f79a16d4e4fb077e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 18:04:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-464d24d5b3f406dd9d815bfd15ef272a1b7d51af28a2619016c24132c1229cd9-merged.mount: Deactivated successfully.
Nov 25 18:04:21 compute-0 podman[469116]: 2025-11-25 18:04:21.789708989 +0000 UTC m=+0.421905799 container remove de05f3f3ad5ee0aab3709cb2ed87bd8395ad68fab09e95e5f79a16d4e4fb077e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_euler, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:04:21 compute-0 ceph-mon[74985]: pgmap v4246: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:21 compute-0 systemd[1]: libpod-conmon-de05f3f3ad5ee0aab3709cb2ed87bd8395ad68fab09e95e5f79a16d4e4fb077e.scope: Deactivated successfully.
Nov 25 18:04:21 compute-0 sudo[468966]: pam_unix(sudo:session): session closed for user root
Nov 25 18:04:21 compute-0 podman[469124]: 2025-11-25 18:04:21.866755099 +0000 UTC m=+0.474228938 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:04:21 compute-0 podman[469117]: 2025-11-25 18:04:21.90619919 +0000 UTC m=+0.519420925 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 25 18:04:21 compute-0 sudo[469171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:04:21 compute-0 sudo[469171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:04:21 compute-0 sudo[469171]: pam_unix(sudo:session): session closed for user root
Nov 25 18:04:22 compute-0 podman[469127]: 2025-11-25 18:04:22.0064369 +0000 UTC m=+0.604145994 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 18:04:22 compute-0 sudo[469221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:04:22 compute-0 sudo[469221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:04:22 compute-0 sudo[469221]: pam_unix(sudo:session): session closed for user root
Nov 25 18:04:22 compute-0 sudo[469249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:04:22 compute-0 sudo[469249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:04:22 compute-0 sudo[469249]: pam_unix(sudo:session): session closed for user root
Nov 25 18:04:22 compute-0 nova_compute[254092]: 2025-11-25 18:04:22.129 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:04:22 compute-0 sudo[469274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 18:04:22 compute-0 sudo[469274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:04:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4247: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:22 compute-0 nova_compute[254092]: 2025-11-25 18:04:22.500 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:04:22 compute-0 podman[469340]: 2025-11-25 18:04:22.559403954 +0000 UTC m=+0.026772357 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:04:22 compute-0 podman[469340]: 2025-11-25 18:04:22.680009585 +0000 UTC m=+0.147377958 container create 49e761ab1480727232dc78fa756d5f1c98b37ac80b3b71197348d2fec6b59e50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_perlman, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:04:22 compute-0 systemd[1]: Started libpod-conmon-49e761ab1480727232dc78fa756d5f1c98b37ac80b3b71197348d2fec6b59e50.scope.
Nov 25 18:04:22 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:04:22 compute-0 podman[469340]: 2025-11-25 18:04:22.818552246 +0000 UTC m=+0.285920629 container init 49e761ab1480727232dc78fa756d5f1c98b37ac80b3b71197348d2fec6b59e50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_perlman, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:04:22 compute-0 podman[469340]: 2025-11-25 18:04:22.827764096 +0000 UTC m=+0.295132459 container start 49e761ab1480727232dc78fa756d5f1c98b37ac80b3b71197348d2fec6b59e50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_perlman, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:04:22 compute-0 podman[469340]: 2025-11-25 18:04:22.831489666 +0000 UTC m=+0.298858029 container attach 49e761ab1480727232dc78fa756d5f1c98b37ac80b3b71197348d2fec6b59e50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Nov 25 18:04:22 compute-0 hopeful_perlman[469357]: 167 167
Nov 25 18:04:22 compute-0 systemd[1]: libpod-49e761ab1480727232dc78fa756d5f1c98b37ac80b3b71197348d2fec6b59e50.scope: Deactivated successfully.
Nov 25 18:04:22 compute-0 podman[469340]: 2025-11-25 18:04:22.836922914 +0000 UTC m=+0.304291277 container died 49e761ab1480727232dc78fa756d5f1c98b37ac80b3b71197348d2fec6b59e50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_perlman, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 18:04:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-66ac467caedf2c76abe744e634a0df367b747d121c3d38cddf3f1670d3cdff87-merged.mount: Deactivated successfully.
Nov 25 18:04:22 compute-0 podman[469340]: 2025-11-25 18:04:22.874826933 +0000 UTC m=+0.342195296 container remove 49e761ab1480727232dc78fa756d5f1c98b37ac80b3b71197348d2fec6b59e50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:04:22 compute-0 systemd[1]: libpod-conmon-49e761ab1480727232dc78fa756d5f1c98b37ac80b3b71197348d2fec6b59e50.scope: Deactivated successfully.
Nov 25 18:04:23 compute-0 podman[469381]: 2025-11-25 18:04:23.090169436 +0000 UTC m=+0.060866963 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:04:23 compute-0 podman[469381]: 2025-11-25 18:04:23.280122371 +0000 UTC m=+0.250819848 container create aa77a6f3ba6d2acdb7700b075c78169e7a7fc417e1c8855b39ac864ba94312af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_driscoll, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:04:23 compute-0 systemd[1]: Started libpod-conmon-aa77a6f3ba6d2acdb7700b075c78169e7a7fc417e1c8855b39ac864ba94312af.scope.
Nov 25 18:04:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:04:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edf665cd0f4a727c12ba508eba4913e5bdf10cf2e1c77e0b9b3127fc5ba370aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:04:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edf665cd0f4a727c12ba508eba4913e5bdf10cf2e1c77e0b9b3127fc5ba370aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:04:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edf665cd0f4a727c12ba508eba4913e5bdf10cf2e1c77e0b9b3127fc5ba370aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:04:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edf665cd0f4a727c12ba508eba4913e5bdf10cf2e1c77e0b9b3127fc5ba370aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:04:23 compute-0 podman[469381]: 2025-11-25 18:04:23.521912041 +0000 UTC m=+0.492609538 container init aa77a6f3ba6d2acdb7700b075c78169e7a7fc417e1c8855b39ac864ba94312af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:04:23 compute-0 podman[469381]: 2025-11-25 18:04:23.537080313 +0000 UTC m=+0.507777790 container start aa77a6f3ba6d2acdb7700b075c78169e7a7fc417e1c8855b39ac864ba94312af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_driscoll, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:04:23 compute-0 podman[469381]: 2025-11-25 18:04:23.63980052 +0000 UTC m=+0.610498007 container attach aa77a6f3ba6d2acdb7700b075c78169e7a7fc417e1c8855b39ac864ba94312af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_driscoll, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 18:04:23 compute-0 ceph-mon[74985]: pgmap v4247: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]: {
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:     "0": [
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:         {
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:             "devices": [
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:                 "/dev/loop3"
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:             ],
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:             "lv_name": "ceph_lv0",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:             "lv_size": "21470642176",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:             "name": "ceph_lv0",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:             "tags": {
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:                 "ceph.cluster_name": "ceph",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:                 "ceph.crush_device_class": "",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:                 "ceph.encrypted": "0",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:                 "ceph.osd_id": "0",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:                 "ceph.type": "block",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:                 "ceph.vdo": "0"
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:             },
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:             "type": "block",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:             "vg_name": "ceph_vg0"
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:         }
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:     ],
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:     "1": [
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:         {
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:             "devices": [
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:                 "/dev/loop4"
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:             ],
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:             "lv_name": "ceph_lv1",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:             "lv_size": "21470642176",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:             "name": "ceph_lv1",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:             "tags": {
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:                 "ceph.cluster_name": "ceph",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:                 "ceph.crush_device_class": "",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:                 "ceph.encrypted": "0",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:                 "ceph.osd_id": "1",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:                 "ceph.type": "block",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:                 "ceph.vdo": "0"
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:             },
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:             "type": "block",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:             "vg_name": "ceph_vg1"
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:         }
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:     ],
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:     "2": [
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:         {
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:             "devices": [
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:                 "/dev/loop5"
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:             ],
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:             "lv_name": "ceph_lv2",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:             "lv_size": "21470642176",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:             "name": "ceph_lv2",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:             "tags": {
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:                 "ceph.cluster_name": "ceph",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:                 "ceph.crush_device_class": "",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:                 "ceph.encrypted": "0",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:                 "ceph.osd_id": "2",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:                 "ceph.type": "block",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:                 "ceph.vdo": "0"
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:             },
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:             "type": "block",
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:             "vg_name": "ceph_vg2"
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:         }
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]:     ]
Nov 25 18:04:24 compute-0 condescending_driscoll[469398]: }
Nov 25 18:04:24 compute-0 systemd[1]: libpod-aa77a6f3ba6d2acdb7700b075c78169e7a7fc417e1c8855b39ac864ba94312af.scope: Deactivated successfully.
Nov 25 18:04:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4248: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:24 compute-0 podman[469407]: 2025-11-25 18:04:24.388454934 +0000 UTC m=+0.025786701 container died aa77a6f3ba6d2acdb7700b075c78169e7a7fc417e1c8855b39ac864ba94312af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_driscoll, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 18:04:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-edf665cd0f4a727c12ba508eba4913e5bdf10cf2e1c77e0b9b3127fc5ba370aa-merged.mount: Deactivated successfully.
Nov 25 18:04:24 compute-0 podman[469407]: 2025-11-25 18:04:24.44579365 +0000 UTC m=+0.083125347 container remove aa77a6f3ba6d2acdb7700b075c78169e7a7fc417e1c8855b39ac864ba94312af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 18:04:24 compute-0 systemd[1]: libpod-conmon-aa77a6f3ba6d2acdb7700b075c78169e7a7fc417e1c8855b39ac864ba94312af.scope: Deactivated successfully.
Nov 25 18:04:24 compute-0 nova_compute[254092]: 2025-11-25 18:04:24.492 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:04:24 compute-0 sudo[469274]: pam_unix(sudo:session): session closed for user root
Nov 25 18:04:24 compute-0 sudo[469422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:04:24 compute-0 sudo[469422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:04:24 compute-0 sudo[469422]: pam_unix(sudo:session): session closed for user root
Nov 25 18:04:24 compute-0 sudo[469447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:04:24 compute-0 sudo[469447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:04:24 compute-0 sudo[469447]: pam_unix(sudo:session): session closed for user root
Nov 25 18:04:24 compute-0 sudo[469472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:04:24 compute-0 sudo[469472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:04:24 compute-0 sudo[469472]: pam_unix(sudo:session): session closed for user root
Nov 25 18:04:24 compute-0 sudo[469497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 18:04:24 compute-0 sudo[469497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:04:25 compute-0 podman[469564]: 2025-11-25 18:04:25.233101623 +0000 UTC m=+0.058658102 container create 4fe14a0f67014c08a1b597b62ffda46a0e4263a34876c9447cd5a5f9ec41f52c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_kare, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:04:25 compute-0 systemd[1]: Started libpod-conmon-4fe14a0f67014c08a1b597b62ffda46a0e4263a34876c9447cd5a5f9ec41f52c.scope.
Nov 25 18:04:25 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:04:25 compute-0 podman[469564]: 2025-11-25 18:04:25.215426033 +0000 UTC m=+0.040982502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:04:25 compute-0 podman[469564]: 2025-11-25 18:04:25.319199579 +0000 UTC m=+0.144756038 container init 4fe14a0f67014c08a1b597b62ffda46a0e4263a34876c9447cd5a5f9ec41f52c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_kare, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 18:04:25 compute-0 podman[469564]: 2025-11-25 18:04:25.329789266 +0000 UTC m=+0.155345695 container start 4fe14a0f67014c08a1b597b62ffda46a0e4263a34876c9447cd5a5f9ec41f52c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:04:25 compute-0 podman[469564]: 2025-11-25 18:04:25.332615733 +0000 UTC m=+0.158172212 container attach 4fe14a0f67014c08a1b597b62ffda46a0e4263a34876c9447cd5a5f9ec41f52c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_kare, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:04:25 compute-0 elastic_kare[469581]: 167 167
Nov 25 18:04:25 compute-0 systemd[1]: libpod-4fe14a0f67014c08a1b597b62ffda46a0e4263a34876c9447cd5a5f9ec41f52c.scope: Deactivated successfully.
Nov 25 18:04:25 compute-0 podman[469564]: 2025-11-25 18:04:25.335987534 +0000 UTC m=+0.161544013 container died 4fe14a0f67014c08a1b597b62ffda46a0e4263a34876c9447cd5a5f9ec41f52c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 18:04:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-55b17501fbf72ac47780c5b8474875682b22587f245f9959ea309b19f6d3de92-merged.mount: Deactivated successfully.
Nov 25 18:04:25 compute-0 podman[469564]: 2025-11-25 18:04:25.379860515 +0000 UTC m=+0.205416954 container remove 4fe14a0f67014c08a1b597b62ffda46a0e4263a34876c9447cd5a5f9ec41f52c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_kare, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 18:04:25 compute-0 ceph-mgr[75280]: [devicehealth INFO root] Check health
Nov 25 18:04:25 compute-0 systemd[1]: libpod-conmon-4fe14a0f67014c08a1b597b62ffda46a0e4263a34876c9447cd5a5f9ec41f52c.scope: Deactivated successfully.
Nov 25 18:04:25 compute-0 podman[469605]: 2025-11-25 18:04:25.585144575 +0000 UTC m=+0.062155987 container create 5e25669e8e1ee516b027a2cffc2133053dec96424f1e2394d0c5767b9a50745a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_morse, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:04:25 compute-0 systemd[1]: Started libpod-conmon-5e25669e8e1ee516b027a2cffc2133053dec96424f1e2394d0c5767b9a50745a.scope.
Nov 25 18:04:25 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:04:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93db0796d39bac4223ad13ddf080bd9b23d4cbd183dcf9aea6ce816918022021/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:04:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93db0796d39bac4223ad13ddf080bd9b23d4cbd183dcf9aea6ce816918022021/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:04:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93db0796d39bac4223ad13ddf080bd9b23d4cbd183dcf9aea6ce816918022021/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:04:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93db0796d39bac4223ad13ddf080bd9b23d4cbd183dcf9aea6ce816918022021/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:04:25 compute-0 podman[469605]: 2025-11-25 18:04:25.564395492 +0000 UTC m=+0.041406904 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:04:25 compute-0 podman[469605]: 2025-11-25 18:04:25.673460412 +0000 UTC m=+0.150471924 container init 5e25669e8e1ee516b027a2cffc2133053dec96424f1e2394d0c5767b9a50745a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_morse, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:04:25 compute-0 podman[469605]: 2025-11-25 18:04:25.681398427 +0000 UTC m=+0.158409879 container start 5e25669e8e1ee516b027a2cffc2133053dec96424f1e2394d0c5767b9a50745a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_morse, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:04:25 compute-0 podman[469605]: 2025-11-25 18:04:25.685439907 +0000 UTC m=+0.162451319 container attach 5e25669e8e1ee516b027a2cffc2133053dec96424f1e2394d0c5767b9a50745a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:04:25 compute-0 ceph-mon[74985]: pgmap v4248: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:04:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4249: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:26 compute-0 clever_morse[469621]: {
Nov 25 18:04:26 compute-0 clever_morse[469621]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 18:04:26 compute-0 clever_morse[469621]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:04:26 compute-0 clever_morse[469621]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 18:04:26 compute-0 clever_morse[469621]:         "osd_id": 1,
Nov 25 18:04:26 compute-0 clever_morse[469621]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 18:04:26 compute-0 clever_morse[469621]:         "type": "bluestore"
Nov 25 18:04:26 compute-0 clever_morse[469621]:     },
Nov 25 18:04:26 compute-0 clever_morse[469621]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 18:04:26 compute-0 clever_morse[469621]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:04:26 compute-0 clever_morse[469621]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 18:04:26 compute-0 clever_morse[469621]:         "osd_id": 2,
Nov 25 18:04:26 compute-0 clever_morse[469621]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 18:04:26 compute-0 clever_morse[469621]:         "type": "bluestore"
Nov 25 18:04:26 compute-0 clever_morse[469621]:     },
Nov 25 18:04:26 compute-0 clever_morse[469621]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 18:04:26 compute-0 clever_morse[469621]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:04:26 compute-0 clever_morse[469621]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 18:04:26 compute-0 clever_morse[469621]:         "osd_id": 0,
Nov 25 18:04:26 compute-0 clever_morse[469621]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 18:04:26 compute-0 clever_morse[469621]:         "type": "bluestore"
Nov 25 18:04:26 compute-0 clever_morse[469621]:     }
Nov 25 18:04:26 compute-0 clever_morse[469621]: }
Nov 25 18:04:26 compute-0 systemd[1]: libpod-5e25669e8e1ee516b027a2cffc2133053dec96424f1e2394d0c5767b9a50745a.scope: Deactivated successfully.
Nov 25 18:04:26 compute-0 systemd[1]: libpod-5e25669e8e1ee516b027a2cffc2133053dec96424f1e2394d0c5767b9a50745a.scope: Consumed 1.199s CPU time.
Nov 25 18:04:26 compute-0 podman[469605]: 2025-11-25 18:04:26.877500393 +0000 UTC m=+1.354511815 container died 5e25669e8e1ee516b027a2cffc2133053dec96424f1e2394d0c5767b9a50745a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 18:04:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-93db0796d39bac4223ad13ddf080bd9b23d4cbd183dcf9aea6ce816918022021-merged.mount: Deactivated successfully.
Nov 25 18:04:26 compute-0 podman[469605]: 2025-11-25 18:04:26.934784087 +0000 UTC m=+1.411795489 container remove 5e25669e8e1ee516b027a2cffc2133053dec96424f1e2394d0c5767b9a50745a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_morse, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:04:26 compute-0 systemd[1]: libpod-conmon-5e25669e8e1ee516b027a2cffc2133053dec96424f1e2394d0c5767b9a50745a.scope: Deactivated successfully.
Nov 25 18:04:26 compute-0 sudo[469497]: pam_unix(sudo:session): session closed for user root
Nov 25 18:04:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:04:26 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:04:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:04:26 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:04:26 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 30e845d9-3870-43f9-ab6c-5db9693adddf does not exist
Nov 25 18:04:26 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 2cb5da74-21fc-46b0-a652-aa5ac53be2cd does not exist
Nov 25 18:04:27 compute-0 sudo[469664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:04:27 compute-0 sudo[469664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:04:27 compute-0 sudo[469664]: pam_unix(sudo:session): session closed for user root
Nov 25 18:04:27 compute-0 sudo[469689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 18:04:27 compute-0 sudo[469689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:04:27 compute-0 nova_compute[254092]: 2025-11-25 18:04:27.158 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:04:27 compute-0 sudo[469689]: pam_unix(sudo:session): session closed for user root
Nov 25 18:04:27 compute-0 nova_compute[254092]: 2025-11-25 18:04:27.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:04:27 compute-0 nova_compute[254092]: 2025-11-25 18:04:27.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:04:27 compute-0 ceph-mon[74985]: pgmap v4249: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:27 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:04:27 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:04:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4250: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:29 compute-0 nova_compute[254092]: 2025-11-25 18:04:29.544 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:04:29 compute-0 ceph-mon[74985]: pgmap v4250: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4251: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:30 compute-0 nova_compute[254092]: 2025-11-25 18:04:30.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:04:30 compute-0 nova_compute[254092]: 2025-11-25 18:04:30.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 18:04:30 compute-0 nova_compute[254092]: 2025-11-25 18:04:30.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 18:04:30 compute-0 nova_compute[254092]: 2025-11-25 18:04:30.521 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 18:04:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:04:31 compute-0 nova_compute[254092]: 2025-11-25 18:04:31.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:04:31 compute-0 nova_compute[254092]: 2025-11-25 18:04:31.538 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 18:04:31 compute-0 nova_compute[254092]: 2025-11-25 18:04:31.539 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 18:04:31 compute-0 nova_compute[254092]: 2025-11-25 18:04:31.540 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:04:31 compute-0 nova_compute[254092]: 2025-11-25 18:04:31.540 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 18:04:31 compute-0 nova_compute[254092]: 2025-11-25 18:04:31.541 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 18:04:31 compute-0 ceph-mon[74985]: pgmap v4251: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 18:04:31 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/151221612' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:04:31 compute-0 nova_compute[254092]: 2025-11-25 18:04:31.994 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 18:04:32 compute-0 nova_compute[254092]: 2025-11-25 18:04:32.139 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 18:04:32 compute-0 nova_compute[254092]: 2025-11-25 18:04:32.140 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3575MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 18:04:32 compute-0 nova_compute[254092]: 2025-11-25 18:04:32.140 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 18:04:32 compute-0 nova_compute[254092]: 2025-11-25 18:04:32.140 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 18:04:32 compute-0 nova_compute[254092]: 2025-11-25 18:04:32.159 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:04:32 compute-0 nova_compute[254092]: 2025-11-25 18:04:32.203 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 18:04:32 compute-0 nova_compute[254092]: 2025-11-25 18:04:32.203 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 18:04:32 compute-0 nova_compute[254092]: 2025-11-25 18:04:32.220 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 18:04:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4252: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 18:04:32 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2520834223' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:04:32 compute-0 nova_compute[254092]: 2025-11-25 18:04:32.635 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 18:04:32 compute-0 nova_compute[254092]: 2025-11-25 18:04:32.641 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 18:04:32 compute-0 nova_compute[254092]: 2025-11-25 18:04:32.659 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 18:04:32 compute-0 nova_compute[254092]: 2025-11-25 18:04:32.660 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 18:04:32 compute-0 nova_compute[254092]: 2025-11-25 18:04:32.661 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.520s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:04:32 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/151221612' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:04:32 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2520834223' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:04:33 compute-0 ceph-mon[74985]: pgmap v4252: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4253: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:34 compute-0 nova_compute[254092]: 2025-11-25 18:04:34.592 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:04:34 compute-0 nova_compute[254092]: 2025-11-25 18:04:34.661 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:04:35 compute-0 nova_compute[254092]: 2025-11-25 18:04:35.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:04:35 compute-0 nova_compute[254092]: 2025-11-25 18:04:35.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 18:04:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:04:36 compute-0 ceph-mon[74985]: pgmap v4253: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4254: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:37 compute-0 nova_compute[254092]: 2025-11-25 18:04:37.194 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:04:37 compute-0 nova_compute[254092]: 2025-11-25 18:04:37.511 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:04:37 compute-0 nova_compute[254092]: 2025-11-25 18:04:37.511 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 18:04:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4255: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:38 compute-0 ceph-mon[74985]: pgmap v4254: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:39 compute-0 ceph-mon[74985]: pgmap v4255: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:39 compute-0 nova_compute[254092]: 2025-11-25 18:04:39.627 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:04:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:04:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:04:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:04:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:04:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:04:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:04:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_18:04:40
Nov 25 18:04:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 18:04:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 18:04:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', '.rgw.root', 'images', 'default.rgw.log', 'vms', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', 'backups', 'default.rgw.control']
Nov 25 18:04:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 18:04:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4256: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 18:04:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:04:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:04:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:04:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:04:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 18:04:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:04:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:04:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:04:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:04:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:04:41 compute-0 ceph-mon[74985]: pgmap v4256: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:42 compute-0 nova_compute[254092]: 2025-11-25 18:04:42.225 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:04:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4257: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:43 compute-0 nova_compute[254092]: 2025-11-25 18:04:43.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:04:43 compute-0 ceph-mon[74985]: pgmap v4257: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4258: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:44 compute-0 nova_compute[254092]: 2025-11-25 18:04:44.683 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:04:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:04:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4259: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:46 compute-0 nova_compute[254092]: 2025-11-25 18:04:46.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:04:46 compute-0 ceph-mon[74985]: pgmap v4258: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:47 compute-0 nova_compute[254092]: 2025-11-25 18:04:47.228 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:04:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4260: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:48 compute-0 ceph-mon[74985]: pgmap v4259: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:49 compute-0 nova_compute[254092]: 2025-11-25 18:04:49.726 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:04:50 compute-0 ceph-mon[74985]: pgmap v4260: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4261: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:04:52 compute-0 nova_compute[254092]: 2025-11-25 18:04:52.230 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:04:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4262: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:52 compute-0 ceph-mon[74985]: pgmap v4261: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 18:04:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:04:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 18:04:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:04:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 18:04:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:04:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:04:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:04:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:04:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:04:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 18:04:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:04:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 18:04:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:04:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:04:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:04:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 18:04:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:04:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 18:04:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:04:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:04:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:04:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 18:04:52 compute-0 podman[469759]: 2025-11-25 18:04:52.690924306 +0000 UTC m=+0.087163016 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 25 18:04:52 compute-0 podman[469758]: 2025-11-25 18:04:52.705398968 +0000 UTC m=+0.102066550 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd)
Nov 25 18:04:52 compute-0 podman[469760]: 2025-11-25 18:04:52.744317834 +0000 UTC m=+0.133758409 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 18:04:54 compute-0 ceph-mon[74985]: pgmap v4262: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4263: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:54 compute-0 nova_compute[254092]: 2025-11-25 18:04:54.766 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:04:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 18:04:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2342693728' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 18:04:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 18:04:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2342693728' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 18:04:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:04:56 compute-0 ceph-mon[74985]: pgmap v4263: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/2342693728' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 18:04:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/2342693728' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 18:04:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4264: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:56 compute-0 nova_compute[254092]: 2025-11-25 18:04:56.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:04:56 compute-0 nova_compute[254092]: 2025-11-25 18:04:56.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 18:04:56 compute-0 nova_compute[254092]: 2025-11-25 18:04:56.514 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 18:04:57 compute-0 nova_compute[254092]: 2025-11-25 18:04:57.289 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:04:57 compute-0 ceph-mon[74985]: pgmap v4264: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4265: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:04:59 compute-0 nova_compute[254092]: 2025-11-25 18:04:59.770 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:04:59 compute-0 ceph-mon[74985]: pgmap v4265: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4266: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:05:02 compute-0 nova_compute[254092]: 2025-11-25 18:05:02.290 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:05:02 compute-0 ceph-mon[74985]: pgmap v4266: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4267: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:03 compute-0 nova_compute[254092]: 2025-11-25 18:05:03.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:05:03 compute-0 ceph-mon[74985]: pgmap v4267: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4268: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:04 compute-0 nova_compute[254092]: 2025-11-25 18:05:04.828 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:05:05 compute-0 ceph-mon[74985]: pgmap v4268: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:05:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4269: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:07 compute-0 nova_compute[254092]: 2025-11-25 18:05:07.340 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:05:07 compute-0 ceph-mon[74985]: pgmap v4269: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4270: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:09 compute-0 nova_compute[254092]: 2025-11-25 18:05:09.832 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:05:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:05:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:05:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:05:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:05:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:05:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:05:10 compute-0 ceph-mon[74985]: pgmap v4270: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4271: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:05:11 compute-0 ceph-mon[74985]: pgmap v4271: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:12 compute-0 nova_compute[254092]: 2025-11-25 18:05:12.341 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:05:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4272: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:12 compute-0 sshd-session[469824]: Connection closed by authenticating user root 171.244.51.45 port 56566 [preauth]
Nov 25 18:05:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 18:05:13.712 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 18:05:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 18:05:13.713 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 18:05:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 18:05:13.713 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:05:13 compute-0 ceph-mon[74985]: pgmap v4272: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4273: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:14 compute-0 nova_compute[254092]: 2025-11-25 18:05:14.835 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:05:15 compute-0 ceph-mon[74985]: pgmap v4273: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:05:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4274: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:17 compute-0 nova_compute[254092]: 2025-11-25 18:05:17.342 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:05:18 compute-0 ceph-mon[74985]: pgmap v4274: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4275: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:19 compute-0 ceph-mon[74985]: pgmap v4275: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:19 compute-0 nova_compute[254092]: 2025-11-25 18:05:19.839 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:05:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4276: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:05:21 compute-0 ceph-mon[74985]: pgmap v4276: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:21 compute-0 nova_compute[254092]: 2025-11-25 18:05:21.511 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:05:22 compute-0 nova_compute[254092]: 2025-11-25 18:05:22.346 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:05:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4277: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:23 compute-0 ceph-mon[74985]: pgmap v4277: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:23 compute-0 podman[469827]: 2025-11-25 18:05:23.676370238 +0000 UTC m=+0.081940624 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 18:05:23 compute-0 podman[469826]: 2025-11-25 18:05:23.684343595 +0000 UTC m=+0.089916871 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 18:05:23 compute-0 podman[469828]: 2025-11-25 18:05:23.710982687 +0000 UTC m=+0.114668332 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 18:05:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4278: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:24 compute-0 nova_compute[254092]: 2025-11-25 18:05:24.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:05:24 compute-0 nova_compute[254092]: 2025-11-25 18:05:24.841 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:05:25 compute-0 ceph-mon[74985]: pgmap v4278: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:05:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4279: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:26 compute-0 ceph-mon[74985]: pgmap v4279: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:27 compute-0 sudo[469888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:05:27 compute-0 sudo[469888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:05:27 compute-0 sudo[469888]: pam_unix(sudo:session): session closed for user root
Nov 25 18:05:27 compute-0 nova_compute[254092]: 2025-11-25 18:05:27.347 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:05:27 compute-0 sudo[469913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:05:27 compute-0 sudo[469913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:05:27 compute-0 sudo[469913]: pam_unix(sudo:session): session closed for user root
Nov 25 18:05:27 compute-0 sudo[469938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:05:27 compute-0 sudo[469938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:05:27 compute-0 sudo[469938]: pam_unix(sudo:session): session closed for user root
Nov 25 18:05:27 compute-0 sudo[469963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 18:05:27 compute-0 sudo[469963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:05:28 compute-0 sudo[469963]: pam_unix(sudo:session): session closed for user root
Nov 25 18:05:28 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:05:28 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:05:28 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 18:05:28 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:05:28 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 18:05:28 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:05:28 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 48204fc0-ff82-4230-8d6a-588023046200 does not exist
Nov 25 18:05:28 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 2e1a5f77-b5f1-4cee-869e-9ce2962f16b2 does not exist
Nov 25 18:05:28 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev b84f05aa-26cb-430e-9c43-5b656b2b0817 does not exist
Nov 25 18:05:28 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 18:05:28 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:05:28 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 18:05:28 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:05:28 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:05:28 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:05:28 compute-0 sudo[470021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:05:28 compute-0 sudo[470021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:05:28 compute-0 sudo[470021]: pam_unix(sudo:session): session closed for user root
Nov 25 18:05:28 compute-0 sudo[470046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:05:28 compute-0 sudo[470046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:05:28 compute-0 sudo[470046]: pam_unix(sudo:session): session closed for user root
Nov 25 18:05:28 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:05:28 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:05:28 compute-0 sudo[470071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:05:28 compute-0 sudo[470071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:05:28 compute-0 sudo[470071]: pam_unix(sudo:session): session closed for user root
Nov 25 18:05:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4280: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:28 compute-0 sudo[470096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 18:05:28 compute-0 sudo[470096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:05:28 compute-0 nova_compute[254092]: 2025-11-25 18:05:28.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:05:28 compute-0 podman[470162]: 2025-11-25 18:05:28.74858245 +0000 UTC m=+0.023822578 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:05:29 compute-0 podman[470162]: 2025-11-25 18:05:29.003149637 +0000 UTC m=+0.278389735 container create 633d1a5bbe0e8c6a9e0325561bb129d6eb4ecaaabca069c7f56e2cead980eaf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 18:05:29 compute-0 systemd[1]: Started libpod-conmon-633d1a5bbe0e8c6a9e0325561bb129d6eb4ecaaabca069c7f56e2cead980eaf9.scope.
Nov 25 18:05:29 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:05:29 compute-0 nova_compute[254092]: 2025-11-25 18:05:29.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:05:29 compute-0 podman[470162]: 2025-11-25 18:05:29.510165724 +0000 UTC m=+0.785405852 container init 633d1a5bbe0e8c6a9e0325561bb129d6eb4ecaaabca069c7f56e2cead980eaf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_khorana, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 18:05:29 compute-0 podman[470162]: 2025-11-25 18:05:29.518407268 +0000 UTC m=+0.793647376 container start 633d1a5bbe0e8c6a9e0325561bb129d6eb4ecaaabca069c7f56e2cead980eaf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_khorana, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 18:05:29 compute-0 quizzical_khorana[470178]: 167 167
Nov 25 18:05:29 compute-0 systemd[1]: libpod-633d1a5bbe0e8c6a9e0325561bb129d6eb4ecaaabca069c7f56e2cead980eaf9.scope: Deactivated successfully.
Nov 25 18:05:29 compute-0 podman[470162]: 2025-11-25 18:05:29.806124995 +0000 UTC m=+1.081365143 container attach 633d1a5bbe0e8c6a9e0325561bb129d6eb4ecaaabca069c7f56e2cead980eaf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_khorana, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:05:29 compute-0 podman[470162]: 2025-11-25 18:05:29.807847932 +0000 UTC m=+1.083088060 container died 633d1a5bbe0e8c6a9e0325561bb129d6eb4ecaaabca069c7f56e2cead980eaf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_khorana, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 18:05:29 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:05:29 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:05:29 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:05:29 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:05:29 compute-0 ceph-mon[74985]: pgmap v4280: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:29 compute-0 nova_compute[254092]: 2025-11-25 18:05:29.845 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:05:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b1d3184f67005f6cad05e2c5780b12361da75c6983d1f4b0cb966d56b749fdd-merged.mount: Deactivated successfully.
Nov 25 18:05:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4281: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:30 compute-0 podman[470162]: 2025-11-25 18:05:30.776920176 +0000 UTC m=+2.052160314 container remove 633d1a5bbe0e8c6a9e0325561bb129d6eb4ecaaabca069c7f56e2cead980eaf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_khorana, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:05:30 compute-0 systemd[1]: libpod-conmon-633d1a5bbe0e8c6a9e0325561bb129d6eb4ecaaabca069c7f56e2cead980eaf9.scope: Deactivated successfully.
Nov 25 18:05:30 compute-0 ceph-mon[74985]: pgmap v4281: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:31 compute-0 podman[470204]: 2025-11-25 18:05:30.99414669 +0000 UTC m=+0.031974199 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:05:31 compute-0 podman[470204]: 2025-11-25 18:05:31.122974685 +0000 UTC m=+0.160802144 container create 08e940ad1b5930ced8de9f07377f2da6f85bc00507cd388236402db239846dbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:05:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:05:31 compute-0 systemd[1]: Started libpod-conmon-08e940ad1b5930ced8de9f07377f2da6f85bc00507cd388236402db239846dbe.scope.
Nov 25 18:05:31 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:05:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/521d678e201d600c055526ca39243a4bcfa7fd009474d62bb1a6311b4b8889db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:05:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/521d678e201d600c055526ca39243a4bcfa7fd009474d62bb1a6311b4b8889db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:05:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/521d678e201d600c055526ca39243a4bcfa7fd009474d62bb1a6311b4b8889db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:05:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/521d678e201d600c055526ca39243a4bcfa7fd009474d62bb1a6311b4b8889db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:05:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/521d678e201d600c055526ca39243a4bcfa7fd009474d62bb1a6311b4b8889db/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:05:31 compute-0 nova_compute[254092]: 2025-11-25 18:05:31.509 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:05:31 compute-0 nova_compute[254092]: 2025-11-25 18:05:31.511 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 18:05:31 compute-0 nova_compute[254092]: 2025-11-25 18:05:31.511 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 18:05:31 compute-0 nova_compute[254092]: 2025-11-25 18:05:31.533 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 18:05:31 compute-0 nova_compute[254092]: 2025-11-25 18:05:31.533 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:05:31 compute-0 nova_compute[254092]: 2025-11-25 18:05:31.559 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 18:05:31 compute-0 nova_compute[254092]: 2025-11-25 18:05:31.560 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 18:05:31 compute-0 nova_compute[254092]: 2025-11-25 18:05:31.560 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:05:31 compute-0 nova_compute[254092]: 2025-11-25 18:05:31.560 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 18:05:31 compute-0 nova_compute[254092]: 2025-11-25 18:05:31.561 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 18:05:31 compute-0 podman[470204]: 2025-11-25 18:05:31.561295409 +0000 UTC m=+0.599122848 container init 08e940ad1b5930ced8de9f07377f2da6f85bc00507cd388236402db239846dbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_brattain, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 18:05:31 compute-0 podman[470204]: 2025-11-25 18:05:31.567861717 +0000 UTC m=+0.605689136 container start 08e940ad1b5930ced8de9f07377f2da6f85bc00507cd388236402db239846dbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_brattain, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:05:31 compute-0 podman[470204]: 2025-11-25 18:05:31.700307421 +0000 UTC m=+0.738134870 container attach 08e940ad1b5930ced8de9f07377f2da6f85bc00507cd388236402db239846dbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_brattain, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:05:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 18:05:31 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3472023277' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:05:32 compute-0 nova_compute[254092]: 2025-11-25 18:05:32.014 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 18:05:32 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3472023277' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:05:32 compute-0 nova_compute[254092]: 2025-11-25 18:05:32.175 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 18:05:32 compute-0 nova_compute[254092]: 2025-11-25 18:05:32.176 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3561MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 18:05:32 compute-0 nova_compute[254092]: 2025-11-25 18:05:32.176 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 18:05:32 compute-0 nova_compute[254092]: 2025-11-25 18:05:32.176 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 18:05:32 compute-0 nova_compute[254092]: 2025-11-25 18:05:32.246 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 18:05:32 compute-0 nova_compute[254092]: 2025-11-25 18:05:32.246 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 18:05:32 compute-0 nova_compute[254092]: 2025-11-25 18:05:32.273 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 18:05:32 compute-0 nova_compute[254092]: 2025-11-25 18:05:32.349 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:05:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4282: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:32 compute-0 zen_brattain[470221]: --> passed data devices: 0 physical, 3 LVM
Nov 25 18:05:32 compute-0 zen_brattain[470221]: --> relative data size: 1.0
Nov 25 18:05:32 compute-0 zen_brattain[470221]: --> All data devices are unavailable
Nov 25 18:05:32 compute-0 systemd[1]: libpod-08e940ad1b5930ced8de9f07377f2da6f85bc00507cd388236402db239846dbe.scope: Deactivated successfully.
Nov 25 18:05:32 compute-0 podman[470204]: 2025-11-25 18:05:32.607707362 +0000 UTC m=+1.645534781 container died 08e940ad1b5930ced8de9f07377f2da6f85bc00507cd388236402db239846dbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_brattain, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Nov 25 18:05:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-521d678e201d600c055526ca39243a4bcfa7fd009474d62bb1a6311b4b8889db-merged.mount: Deactivated successfully.
Nov 25 18:05:32 compute-0 podman[470204]: 2025-11-25 18:05:32.668027889 +0000 UTC m=+1.705855308 container remove 08e940ad1b5930ced8de9f07377f2da6f85bc00507cd388236402db239846dbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 18:05:32 compute-0 systemd[1]: libpod-conmon-08e940ad1b5930ced8de9f07377f2da6f85bc00507cd388236402db239846dbe.scope: Deactivated successfully.
Nov 25 18:05:32 compute-0 sudo[470096]: pam_unix(sudo:session): session closed for user root
Nov 25 18:05:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 18:05:32 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1869174271' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:05:32 compute-0 nova_compute[254092]: 2025-11-25 18:05:32.739 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 18:05:32 compute-0 nova_compute[254092]: 2025-11-25 18:05:32.745 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 18:05:32 compute-0 nova_compute[254092]: 2025-11-25 18:05:32.757 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 18:05:32 compute-0 sudo[470304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:05:32 compute-0 nova_compute[254092]: 2025-11-25 18:05:32.759 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 18:05:32 compute-0 nova_compute[254092]: 2025-11-25 18:05:32.760 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:05:32 compute-0 sudo[470304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:05:32 compute-0 sudo[470304]: pam_unix(sudo:session): session closed for user root
Nov 25 18:05:32 compute-0 sudo[470331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:05:32 compute-0 sudo[470331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:05:32 compute-0 sudo[470331]: pam_unix(sudo:session): session closed for user root
Nov 25 18:05:32 compute-0 sudo[470356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:05:32 compute-0 sudo[470356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:05:32 compute-0 sudo[470356]: pam_unix(sudo:session): session closed for user root
Nov 25 18:05:32 compute-0 sudo[470381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 18:05:32 compute-0 sudo[470381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:05:33 compute-0 ceph-mon[74985]: pgmap v4282: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:33 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1869174271' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:05:33 compute-0 podman[470446]: 2025-11-25 18:05:33.269836429 +0000 UTC m=+0.043849251 container create 5b76f4a483790bc84f8d48ee4368d45da3ccc29edd1f4073f30140840c6a7f10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_clarke, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 18:05:33 compute-0 systemd[1]: Started libpod-conmon-5b76f4a483790bc84f8d48ee4368d45da3ccc29edd1f4073f30140840c6a7f10.scope.
Nov 25 18:05:33 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:05:33 compute-0 podman[470446]: 2025-11-25 18:05:33.248826358 +0000 UTC m=+0.022839280 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:05:33 compute-0 podman[470446]: 2025-11-25 18:05:33.357805945 +0000 UTC m=+0.131818767 container init 5b76f4a483790bc84f8d48ee4368d45da3ccc29edd1f4073f30140840c6a7f10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_clarke, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:05:33 compute-0 podman[470446]: 2025-11-25 18:05:33.364050224 +0000 UTC m=+0.138063046 container start 5b76f4a483790bc84f8d48ee4368d45da3ccc29edd1f4073f30140840c6a7f10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_clarke, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:05:33 compute-0 podman[470446]: 2025-11-25 18:05:33.36719177 +0000 UTC m=+0.141204592 container attach 5b76f4a483790bc84f8d48ee4368d45da3ccc29edd1f4073f30140840c6a7f10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 18:05:33 compute-0 bold_clarke[470462]: 167 167
Nov 25 18:05:33 compute-0 systemd[1]: libpod-5b76f4a483790bc84f8d48ee4368d45da3ccc29edd1f4073f30140840c6a7f10.scope: Deactivated successfully.
Nov 25 18:05:33 compute-0 podman[470446]: 2025-11-25 18:05:33.36902391 +0000 UTC m=+0.143036762 container died 5b76f4a483790bc84f8d48ee4368d45da3ccc29edd1f4073f30140840c6a7f10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 18:05:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-e48c688e9f74013a24ea66527c90e25c698d42cddb59da6faab0a4bfd7245060-merged.mount: Deactivated successfully.
Nov 25 18:05:33 compute-0 podman[470446]: 2025-11-25 18:05:33.416501928 +0000 UTC m=+0.190514780 container remove 5b76f4a483790bc84f8d48ee4368d45da3ccc29edd1f4073f30140840c6a7f10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_clarke, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 18:05:33 compute-0 systemd[1]: libpod-conmon-5b76f4a483790bc84f8d48ee4368d45da3ccc29edd1f4073f30140840c6a7f10.scope: Deactivated successfully.
Nov 25 18:05:33 compute-0 podman[470485]: 2025-11-25 18:05:33.571426311 +0000 UTC m=+0.043419648 container create dfb03b8e624864947c4cf6a28533bd5e215716bca845cf44028d0644b519ffb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 18:05:33 compute-0 systemd[1]: Started libpod-conmon-dfb03b8e624864947c4cf6a28533bd5e215716bca845cf44028d0644b519ffb2.scope.
Nov 25 18:05:33 compute-0 podman[470485]: 2025-11-25 18:05:33.550192396 +0000 UTC m=+0.022185723 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:05:33 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:05:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c11efeca6bc212aa7dccab10c4ffd5bd4a69dd5ad28f4cef48933c2e1181eae0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:05:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c11efeca6bc212aa7dccab10c4ffd5bd4a69dd5ad28f4cef48933c2e1181eae0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:05:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c11efeca6bc212aa7dccab10c4ffd5bd4a69dd5ad28f4cef48933c2e1181eae0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:05:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c11efeca6bc212aa7dccab10c4ffd5bd4a69dd5ad28f4cef48933c2e1181eae0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:05:33 compute-0 podman[470485]: 2025-11-25 18:05:33.726110778 +0000 UTC m=+0.198104085 container init dfb03b8e624864947c4cf6a28533bd5e215716bca845cf44028d0644b519ffb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ride, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:05:33 compute-0 podman[470485]: 2025-11-25 18:05:33.738177216 +0000 UTC m=+0.210170513 container start dfb03b8e624864947c4cf6a28533bd5e215716bca845cf44028d0644b519ffb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 18:05:33 compute-0 podman[470485]: 2025-11-25 18:05:33.754277543 +0000 UTC m=+0.226270840 container attach dfb03b8e624864947c4cf6a28533bd5e215716bca845cf44028d0644b519ffb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ride, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 18:05:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4283: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:34 compute-0 nice_ride[470502]: {
Nov 25 18:05:34 compute-0 nice_ride[470502]:     "0": [
Nov 25 18:05:34 compute-0 nice_ride[470502]:         {
Nov 25 18:05:34 compute-0 nice_ride[470502]:             "devices": [
Nov 25 18:05:34 compute-0 nice_ride[470502]:                 "/dev/loop3"
Nov 25 18:05:34 compute-0 nice_ride[470502]:             ],
Nov 25 18:05:34 compute-0 nice_ride[470502]:             "lv_name": "ceph_lv0",
Nov 25 18:05:34 compute-0 nice_ride[470502]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:05:34 compute-0 nice_ride[470502]:             "lv_size": "21470642176",
Nov 25 18:05:34 compute-0 nice_ride[470502]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:05:34 compute-0 nice_ride[470502]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 18:05:34 compute-0 nice_ride[470502]:             "name": "ceph_lv0",
Nov 25 18:05:34 compute-0 nice_ride[470502]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:05:34 compute-0 nice_ride[470502]:             "tags": {
Nov 25 18:05:34 compute-0 nice_ride[470502]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:05:34 compute-0 nice_ride[470502]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 18:05:34 compute-0 nice_ride[470502]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 18:05:34 compute-0 nice_ride[470502]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:05:34 compute-0 nice_ride[470502]:                 "ceph.cluster_name": "ceph",
Nov 25 18:05:34 compute-0 nice_ride[470502]:                 "ceph.crush_device_class": "",
Nov 25 18:05:34 compute-0 nice_ride[470502]:                 "ceph.encrypted": "0",
Nov 25 18:05:34 compute-0 nice_ride[470502]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 18:05:34 compute-0 nice_ride[470502]:                 "ceph.osd_id": "0",
Nov 25 18:05:34 compute-0 nice_ride[470502]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:05:34 compute-0 nice_ride[470502]:                 "ceph.type": "block",
Nov 25 18:05:34 compute-0 nice_ride[470502]:                 "ceph.vdo": "0"
Nov 25 18:05:34 compute-0 nice_ride[470502]:             },
Nov 25 18:05:34 compute-0 nice_ride[470502]:             "type": "block",
Nov 25 18:05:34 compute-0 nice_ride[470502]:             "vg_name": "ceph_vg0"
Nov 25 18:05:34 compute-0 nice_ride[470502]:         }
Nov 25 18:05:34 compute-0 nice_ride[470502]:     ],
Nov 25 18:05:34 compute-0 nice_ride[470502]:     "1": [
Nov 25 18:05:34 compute-0 nice_ride[470502]:         {
Nov 25 18:05:34 compute-0 nice_ride[470502]:             "devices": [
Nov 25 18:05:34 compute-0 nice_ride[470502]:                 "/dev/loop4"
Nov 25 18:05:34 compute-0 nice_ride[470502]:             ],
Nov 25 18:05:34 compute-0 nice_ride[470502]:             "lv_name": "ceph_lv1",
Nov 25 18:05:34 compute-0 nice_ride[470502]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:05:34 compute-0 nice_ride[470502]:             "lv_size": "21470642176",
Nov 25 18:05:34 compute-0 nice_ride[470502]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:05:34 compute-0 nice_ride[470502]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 18:05:34 compute-0 nice_ride[470502]:             "name": "ceph_lv1",
Nov 25 18:05:34 compute-0 nice_ride[470502]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:05:34 compute-0 nice_ride[470502]:             "tags": {
Nov 25 18:05:34 compute-0 nice_ride[470502]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:05:34 compute-0 nice_ride[470502]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 18:05:34 compute-0 nice_ride[470502]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 18:05:34 compute-0 nice_ride[470502]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:05:34 compute-0 nice_ride[470502]:                 "ceph.cluster_name": "ceph",
Nov 25 18:05:34 compute-0 nice_ride[470502]:                 "ceph.crush_device_class": "",
Nov 25 18:05:34 compute-0 nice_ride[470502]:                 "ceph.encrypted": "0",
Nov 25 18:05:34 compute-0 nice_ride[470502]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 18:05:34 compute-0 nice_ride[470502]:                 "ceph.osd_id": "1",
Nov 25 18:05:34 compute-0 nice_ride[470502]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:05:34 compute-0 nice_ride[470502]:                 "ceph.type": "block",
Nov 25 18:05:34 compute-0 nice_ride[470502]:                 "ceph.vdo": "0"
Nov 25 18:05:34 compute-0 nice_ride[470502]:             },
Nov 25 18:05:34 compute-0 nice_ride[470502]:             "type": "block",
Nov 25 18:05:34 compute-0 nice_ride[470502]:             "vg_name": "ceph_vg1"
Nov 25 18:05:34 compute-0 nice_ride[470502]:         }
Nov 25 18:05:34 compute-0 nice_ride[470502]:     ],
Nov 25 18:05:34 compute-0 nice_ride[470502]:     "2": [
Nov 25 18:05:34 compute-0 nice_ride[470502]:         {
Nov 25 18:05:34 compute-0 nice_ride[470502]:             "devices": [
Nov 25 18:05:34 compute-0 nice_ride[470502]:                 "/dev/loop5"
Nov 25 18:05:34 compute-0 nice_ride[470502]:             ],
Nov 25 18:05:34 compute-0 nice_ride[470502]:             "lv_name": "ceph_lv2",
Nov 25 18:05:34 compute-0 nice_ride[470502]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:05:34 compute-0 nice_ride[470502]:             "lv_size": "21470642176",
Nov 25 18:05:34 compute-0 nice_ride[470502]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:05:34 compute-0 nice_ride[470502]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 18:05:34 compute-0 nice_ride[470502]:             "name": "ceph_lv2",
Nov 25 18:05:34 compute-0 nice_ride[470502]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:05:34 compute-0 nice_ride[470502]:             "tags": {
Nov 25 18:05:34 compute-0 nice_ride[470502]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:05:34 compute-0 nice_ride[470502]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 18:05:34 compute-0 nice_ride[470502]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 18:05:34 compute-0 nice_ride[470502]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:05:34 compute-0 nice_ride[470502]:                 "ceph.cluster_name": "ceph",
Nov 25 18:05:34 compute-0 nice_ride[470502]:                 "ceph.crush_device_class": "",
Nov 25 18:05:34 compute-0 nice_ride[470502]:                 "ceph.encrypted": "0",
Nov 25 18:05:34 compute-0 nice_ride[470502]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 18:05:34 compute-0 nice_ride[470502]:                 "ceph.osd_id": "2",
Nov 25 18:05:34 compute-0 nice_ride[470502]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:05:34 compute-0 nice_ride[470502]:                 "ceph.type": "block",
Nov 25 18:05:34 compute-0 nice_ride[470502]:                 "ceph.vdo": "0"
Nov 25 18:05:34 compute-0 nice_ride[470502]:             },
Nov 25 18:05:34 compute-0 nice_ride[470502]:             "type": "block",
Nov 25 18:05:34 compute-0 nice_ride[470502]:             "vg_name": "ceph_vg2"
Nov 25 18:05:34 compute-0 nice_ride[470502]:         }
Nov 25 18:05:34 compute-0 nice_ride[470502]:     ]
Nov 25 18:05:34 compute-0 nice_ride[470502]: }
Nov 25 18:05:34 compute-0 systemd[1]: libpod-dfb03b8e624864947c4cf6a28533bd5e215716bca845cf44028d0644b519ffb2.scope: Deactivated successfully.
Nov 25 18:05:34 compute-0 podman[470485]: 2025-11-25 18:05:34.543255621 +0000 UTC m=+1.015248998 container died dfb03b8e624864947c4cf6a28533bd5e215716bca845cf44028d0644b519ffb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ride, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 18:05:34 compute-0 ceph-mon[74985]: pgmap v4283: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:34 compute-0 nova_compute[254092]: 2025-11-25 18:05:34.723 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:05:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-c11efeca6bc212aa7dccab10c4ffd5bd4a69dd5ad28f4cef48933c2e1181eae0-merged.mount: Deactivated successfully.
Nov 25 18:05:34 compute-0 nova_compute[254092]: 2025-11-25 18:05:34.849 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:05:34 compute-0 podman[470485]: 2025-11-25 18:05:34.862071342 +0000 UTC m=+1.334064639 container remove dfb03b8e624864947c4cf6a28533bd5e215716bca845cf44028d0644b519ffb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ride, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:05:34 compute-0 sudo[470381]: pam_unix(sudo:session): session closed for user root
Nov 25 18:05:34 compute-0 systemd[1]: libpod-conmon-dfb03b8e624864947c4cf6a28533bd5e215716bca845cf44028d0644b519ffb2.scope: Deactivated successfully.
Nov 25 18:05:34 compute-0 sudo[470521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:05:34 compute-0 sudo[470521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:05:34 compute-0 sudo[470521]: pam_unix(sudo:session): session closed for user root
Nov 25 18:05:35 compute-0 sudo[470550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:05:35 compute-0 sudo[470550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:05:35 compute-0 sudo[470550]: pam_unix(sudo:session): session closed for user root
Nov 25 18:05:35 compute-0 sudo[470575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:05:35 compute-0 sudo[470575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:05:35 compute-0 sudo[470575]: pam_unix(sudo:session): session closed for user root
Nov 25 18:05:35 compute-0 sudo[470600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 18:05:35 compute-0 sudo[470600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:05:35 compute-0 podman[470666]: 2025-11-25 18:05:35.506143559 +0000 UTC m=+0.062979170 container create a0ba8e955daba448130f51b133ed7d06a81df497f1e1b484aef8bf767eb8a536 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_chaum, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:05:35 compute-0 systemd[1]: Started libpod-conmon-a0ba8e955daba448130f51b133ed7d06a81df497f1e1b484aef8bf767eb8a536.scope.
Nov 25 18:05:35 compute-0 podman[470666]: 2025-11-25 18:05:35.477718078 +0000 UTC m=+0.034553709 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:05:35 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:05:35 compute-0 podman[470666]: 2025-11-25 18:05:35.606037839 +0000 UTC m=+0.162873540 container init a0ba8e955daba448130f51b133ed7d06a81df497f1e1b484aef8bf767eb8a536 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 18:05:35 compute-0 podman[470666]: 2025-11-25 18:05:35.613404039 +0000 UTC m=+0.170239690 container start a0ba8e955daba448130f51b133ed7d06a81df497f1e1b484aef8bf767eb8a536 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_chaum, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:05:35 compute-0 podman[470666]: 2025-11-25 18:05:35.617597303 +0000 UTC m=+0.174432944 container attach a0ba8e955daba448130f51b133ed7d06a81df497f1e1b484aef8bf767eb8a536 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:05:35 compute-0 elastic_chaum[470682]: 167 167
Nov 25 18:05:35 compute-0 systemd[1]: libpod-a0ba8e955daba448130f51b133ed7d06a81df497f1e1b484aef8bf767eb8a536.scope: Deactivated successfully.
Nov 25 18:05:35 compute-0 conmon[470682]: conmon a0ba8e955daba448130f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a0ba8e955daba448130f51b133ed7d06a81df497f1e1b484aef8bf767eb8a536.scope/container/memory.events
Nov 25 18:05:35 compute-0 podman[470666]: 2025-11-25 18:05:35.622506926 +0000 UTC m=+0.179342537 container died a0ba8e955daba448130f51b133ed7d06a81df497f1e1b484aef8bf767eb8a536 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_chaum, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:05:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-7662d7b0b3797a57be45d01f30bdc3164ef69b4f09398d50295ed1d5466bf55b-merged.mount: Deactivated successfully.
Nov 25 18:05:35 compute-0 podman[470666]: 2025-11-25 18:05:35.860099843 +0000 UTC m=+0.416935464 container remove a0ba8e955daba448130f51b133ed7d06a81df497f1e1b484aef8bf767eb8a536 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:05:35 compute-0 systemd[1]: libpod-conmon-a0ba8e955daba448130f51b133ed7d06a81df497f1e1b484aef8bf767eb8a536.scope: Deactivated successfully.
Nov 25 18:05:36 compute-0 podman[470707]: 2025-11-25 18:05:36.025196553 +0000 UTC m=+0.047738616 container create 2d2e9297b86dc88818c2e8873411d6b262eb5ddeb1973c07522f527f9506d7ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:05:36 compute-0 systemd[1]: Started libpod-conmon-2d2e9297b86dc88818c2e8873411d6b262eb5ddeb1973c07522f527f9506d7ec.scope.
Nov 25 18:05:36 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:05:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ce57b4601c458822ab3fa76b854e1745042d578d1cdf290838ce63fe0464b0c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:05:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ce57b4601c458822ab3fa76b854e1745042d578d1cdf290838ce63fe0464b0c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:05:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ce57b4601c458822ab3fa76b854e1745042d578d1cdf290838ce63fe0464b0c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:05:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ce57b4601c458822ab3fa76b854e1745042d578d1cdf290838ce63fe0464b0c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:05:36 compute-0 podman[470707]: 2025-11-25 18:05:36.002168398 +0000 UTC m=+0.024710551 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:05:36 compute-0 podman[470707]: 2025-11-25 18:05:36.098008418 +0000 UTC m=+0.120550511 container init 2d2e9297b86dc88818c2e8873411d6b262eb5ddeb1973c07522f527f9506d7ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_archimedes, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 18:05:36 compute-0 podman[470707]: 2025-11-25 18:05:36.104732262 +0000 UTC m=+0.127274325 container start 2d2e9297b86dc88818c2e8873411d6b262eb5ddeb1973c07522f527f9506d7ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_archimedes, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 18:05:36 compute-0 podman[470707]: 2025-11-25 18:05:36.107678681 +0000 UTC m=+0.130220774 container attach 2d2e9297b86dc88818c2e8873411d6b262eb5ddeb1973c07522f527f9506d7ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_archimedes, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:05:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:05:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4284: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:37 compute-0 pedantic_archimedes[470724]: {
Nov 25 18:05:37 compute-0 pedantic_archimedes[470724]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 18:05:37 compute-0 pedantic_archimedes[470724]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:05:37 compute-0 pedantic_archimedes[470724]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 18:05:37 compute-0 pedantic_archimedes[470724]:         "osd_id": 1,
Nov 25 18:05:37 compute-0 pedantic_archimedes[470724]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 18:05:37 compute-0 pedantic_archimedes[470724]:         "type": "bluestore"
Nov 25 18:05:37 compute-0 pedantic_archimedes[470724]:     },
Nov 25 18:05:37 compute-0 pedantic_archimedes[470724]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 18:05:37 compute-0 pedantic_archimedes[470724]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:05:37 compute-0 pedantic_archimedes[470724]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 18:05:37 compute-0 pedantic_archimedes[470724]:         "osd_id": 2,
Nov 25 18:05:37 compute-0 pedantic_archimedes[470724]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 18:05:37 compute-0 pedantic_archimedes[470724]:         "type": "bluestore"
Nov 25 18:05:37 compute-0 pedantic_archimedes[470724]:     },
Nov 25 18:05:37 compute-0 pedantic_archimedes[470724]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 18:05:37 compute-0 pedantic_archimedes[470724]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:05:37 compute-0 pedantic_archimedes[470724]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 18:05:37 compute-0 pedantic_archimedes[470724]:         "osd_id": 0,
Nov 25 18:05:37 compute-0 pedantic_archimedes[470724]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 18:05:37 compute-0 pedantic_archimedes[470724]:         "type": "bluestore"
Nov 25 18:05:37 compute-0 pedantic_archimedes[470724]:     }
Nov 25 18:05:37 compute-0 pedantic_archimedes[470724]: }
Nov 25 18:05:37 compute-0 systemd[1]: libpod-2d2e9297b86dc88818c2e8873411d6b262eb5ddeb1973c07522f527f9506d7ec.scope: Deactivated successfully.
Nov 25 18:05:37 compute-0 podman[470707]: 2025-11-25 18:05:37.09827965 +0000 UTC m=+1.120821713 container died 2d2e9297b86dc88818c2e8873411d6b262eb5ddeb1973c07522f527f9506d7ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_archimedes, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:05:37 compute-0 nova_compute[254092]: 2025-11-25 18:05:37.363 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:05:37 compute-0 ceph-mon[74985]: pgmap v4284: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ce57b4601c458822ab3fa76b854e1745042d578d1cdf290838ce63fe0464b0c-merged.mount: Deactivated successfully.
Nov 25 18:05:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4285: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:38 compute-0 nova_compute[254092]: 2025-11-25 18:05:38.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:05:38 compute-0 nova_compute[254092]: 2025-11-25 18:05:38.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 18:05:38 compute-0 podman[470707]: 2025-11-25 18:05:38.605537229 +0000 UTC m=+2.628079292 container remove 2d2e9297b86dc88818c2e8873411d6b262eb5ddeb1973c07522f527f9506d7ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:05:38 compute-0 systemd[1]: libpod-conmon-2d2e9297b86dc88818c2e8873411d6b262eb5ddeb1973c07522f527f9506d7ec.scope: Deactivated successfully.
Nov 25 18:05:38 compute-0 sudo[470600]: pam_unix(sudo:session): session closed for user root
Nov 25 18:05:38 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:05:38 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:05:38 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:05:38 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:05:38 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 3d8530e5-ee00-4604-bd7d-165ef41befb0 does not exist
Nov 25 18:05:38 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 7686b04a-10ec-4a8b-bcc5-bdf131cdef65 does not exist
Nov 25 18:05:38 compute-0 sudo[470771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:05:38 compute-0 sudo[470771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:05:38 compute-0 sudo[470771]: pam_unix(sudo:session): session closed for user root
Nov 25 18:05:38 compute-0 sudo[470796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 18:05:38 compute-0 sudo[470796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:05:38 compute-0 sudo[470796]: pam_unix(sudo:session): session closed for user root
Nov 25 18:05:39 compute-0 ceph-mon[74985]: pgmap v4285: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:39 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:05:39 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:05:39 compute-0 nova_compute[254092]: 2025-11-25 18:05:39.852 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:05:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:05:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:05:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:05:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:05:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:05:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:05:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_18:05:40
Nov 25 18:05:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 18:05:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 18:05:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'images', 'vms', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta']
Nov 25 18:05:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 18:05:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4286: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 18:05:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:05:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:05:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:05:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:05:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 18:05:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:05:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:05:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:05:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:05:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:05:41 compute-0 ceph-mon[74985]: pgmap v4286: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:42 compute-0 nova_compute[254092]: 2025-11-25 18:05:42.367 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:05:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4287: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:43 compute-0 nova_compute[254092]: 2025-11-25 18:05:43.498 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:05:43 compute-0 ceph-mon[74985]: pgmap v4287: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4288: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:44 compute-0 nova_compute[254092]: 2025-11-25 18:05:44.854 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:05:45 compute-0 ceph-mon[74985]: pgmap v4288: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:05:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4289: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:47 compute-0 nova_compute[254092]: 2025-11-25 18:05:47.368 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:05:47 compute-0 ceph-mon[74985]: pgmap v4289: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4290: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:49 compute-0 nova_compute[254092]: 2025-11-25 18:05:49.858 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:05:50 compute-0 ceph-mon[74985]: pgmap v4290: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4291: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:05:52 compute-0 ceph-mon[74985]: pgmap v4291: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:52 compute-0 nova_compute[254092]: 2025-11-25 18:05:52.371 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:05:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4292: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 18:05:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:05:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 18:05:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:05:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 18:05:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:05:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:05:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:05:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:05:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:05:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 18:05:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:05:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 18:05:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:05:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:05:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:05:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 18:05:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:05:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 18:05:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:05:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:05:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:05:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 18:05:54 compute-0 ceph-mon[74985]: pgmap v4292: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4293: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:54 compute-0 podman[470822]: 2025-11-25 18:05:54.649687352 +0000 UTC m=+0.063785971 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 18:05:54 compute-0 podman[470821]: 2025-11-25 18:05:54.651361448 +0000 UTC m=+0.068307014 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 18:05:54 compute-0 podman[470823]: 2025-11-25 18:05:54.686341048 +0000 UTC m=+0.091970127 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 25 18:05:54 compute-0 nova_compute[254092]: 2025-11-25 18:05:54.861 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:05:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 18:05:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2034984102' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 18:05:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 18:05:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2034984102' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 18:05:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:05:56 compute-0 ceph-mon[74985]: pgmap v4293: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/2034984102' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 18:05:56 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/2034984102' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 18:05:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4294: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:57 compute-0 nova_compute[254092]: 2025-11-25 18:05:57.374 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:05:58 compute-0 ceph-mon[74985]: pgmap v4294: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4295: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:05:59 compute-0 nova_compute[254092]: 2025-11-25 18:05:59.865 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:06:00 compute-0 ceph-mon[74985]: pgmap v4295: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:06:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4296: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:06:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:06:02 compute-0 nova_compute[254092]: 2025-11-25 18:06:02.376 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:06:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4297: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:06:02 compute-0 ceph-mon[74985]: pgmap v4296: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:06:03 compute-0 ceph-mon[74985]: pgmap v4297: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:06:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4298: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:06:04 compute-0 nova_compute[254092]: 2025-11-25 18:06:04.480 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:06:04 compute-0 nova_compute[254092]: 2025-11-25 18:06:04.868 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:06:05 compute-0 ceph-mon[74985]: pgmap v4298: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:06:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:06:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4299: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 9 op/s
Nov 25 18:06:07 compute-0 nova_compute[254092]: 2025-11-25 18:06:07.380 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:06:07 compute-0 ceph-mon[74985]: pgmap v4299: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 9 op/s
Nov 25 18:06:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4300: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 9 op/s
Nov 25 18:06:09 compute-0 ceph-mon[74985]: pgmap v4300: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 9 op/s
Nov 25 18:06:09 compute-0 nova_compute[254092]: 2025-11-25 18:06:09.872 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:06:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:06:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:06:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:06:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:06:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:06:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:06:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4301: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 47 op/s
Nov 25 18:06:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:06:11 compute-0 ceph-mon[74985]: pgmap v4301: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 47 op/s
Nov 25 18:06:12 compute-0 nova_compute[254092]: 2025-11-25 18:06:12.382 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:06:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4302: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 18:06:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 18:06:13.713 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 18:06:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 18:06:13.714 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 18:06:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 18:06:13.714 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:06:13 compute-0 sshd-session[470884]: Received disconnect from 150.95.85.24 port 51988:11:  [preauth]
Nov 25 18:06:13 compute-0 sshd-session[470884]: Disconnected from authenticating user root 150.95.85.24 port 51988 [preauth]
Nov 25 18:06:13 compute-0 ceph-mon[74985]: pgmap v4302: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 18:06:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4303: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 18:06:14 compute-0 nova_compute[254092]: 2025-11-25 18:06:14.876 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:06:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:06:16 compute-0 ceph-mon[74985]: pgmap v4303: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 18:06:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4304: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 18:06:17 compute-0 nova_compute[254092]: 2025-11-25 18:06:17.438 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:06:18 compute-0 ceph-mon[74985]: pgmap v4304: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 25 18:06:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4305: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 50 op/s
Nov 25 18:06:19 compute-0 nova_compute[254092]: 2025-11-25 18:06:19.880 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:06:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4306: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 50 op/s
Nov 25 18:06:20 compute-0 ceph-mon[74985]: pgmap v4305: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 50 op/s
Nov 25 18:06:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:06:21 compute-0 ceph-mon[74985]: pgmap v4306: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 50 op/s
Nov 25 18:06:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4307: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.1 KiB/s rd, 0 B/s wr, 11 op/s
Nov 25 18:06:22 compute-0 nova_compute[254092]: 2025-11-25 18:06:22.439 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:06:23 compute-0 nova_compute[254092]: 2025-11-25 18:06:23.516 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:06:23 compute-0 ceph-mon[74985]: pgmap v4307: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.1 KiB/s rd, 0 B/s wr, 11 op/s
Nov 25 18:06:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4308: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:06:24 compute-0 nova_compute[254092]: 2025-11-25 18:06:24.497 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:06:24 compute-0 nova_compute[254092]: 2025-11-25 18:06:24.883 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:06:25 compute-0 podman[470887]: 2025-11-25 18:06:25.673546461 +0000 UTC m=+0.081941394 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 18:06:25 compute-0 podman[470886]: 2025-11-25 18:06:25.680003146 +0000 UTC m=+0.095085921 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 25 18:06:25 compute-0 podman[470888]: 2025-11-25 18:06:25.69006469 +0000 UTC m=+0.093957751 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 25 18:06:25 compute-0 ceph-mon[74985]: pgmap v4308: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:06:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:06:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4309: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:06:27 compute-0 nova_compute[254092]: 2025-11-25 18:06:27.441 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:06:27 compute-0 ceph-mon[74985]: pgmap v4309: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:06:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4310: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:06:29 compute-0 nova_compute[254092]: 2025-11-25 18:06:29.887 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:06:29 compute-0 ceph-mon[74985]: pgmap v4310: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:06:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4311: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:06:30 compute-0 nova_compute[254092]: 2025-11-25 18:06:30.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:06:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:06:31 compute-0 nova_compute[254092]: 2025-11-25 18:06:31.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:06:32 compute-0 ceph-mon[74985]: pgmap v4311: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:06:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4312: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:06:32 compute-0 nova_compute[254092]: 2025-11-25 18:06:32.443 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:06:33 compute-0 nova_compute[254092]: 2025-11-25 18:06:33.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:06:33 compute-0 nova_compute[254092]: 2025-11-25 18:06:33.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 18:06:33 compute-0 nova_compute[254092]: 2025-11-25 18:06:33.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 18:06:33 compute-0 nova_compute[254092]: 2025-11-25 18:06:33.514 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 18:06:33 compute-0 nova_compute[254092]: 2025-11-25 18:06:33.514 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:06:33 compute-0 nova_compute[254092]: 2025-11-25 18:06:33.537 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 18:06:33 compute-0 nova_compute[254092]: 2025-11-25 18:06:33.538 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 18:06:33 compute-0 nova_compute[254092]: 2025-11-25 18:06:33.538 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:06:33 compute-0 nova_compute[254092]: 2025-11-25 18:06:33.538 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 18:06:33 compute-0 nova_compute[254092]: 2025-11-25 18:06:33.538 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 18:06:33 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 18:06:33 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3937580322' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:06:33 compute-0 nova_compute[254092]: 2025-11-25 18:06:33.966 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 18:06:34 compute-0 ceph-mon[74985]: pgmap v4312: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:06:34 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3937580322' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:06:34 compute-0 nova_compute[254092]: 2025-11-25 18:06:34.148 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 18:06:34 compute-0 nova_compute[254092]: 2025-11-25 18:06:34.149 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3626MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 18:06:34 compute-0 nova_compute[254092]: 2025-11-25 18:06:34.150 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 18:06:34 compute-0 nova_compute[254092]: 2025-11-25 18:06:34.150 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 18:06:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4313: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:06:34 compute-0 nova_compute[254092]: 2025-11-25 18:06:34.438 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 18:06:34 compute-0 nova_compute[254092]: 2025-11-25 18:06:34.439 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 18:06:34 compute-0 nova_compute[254092]: 2025-11-25 18:06:34.525 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing inventories for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 18:06:34 compute-0 nova_compute[254092]: 2025-11-25 18:06:34.638 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating ProviderTree inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 18:06:34 compute-0 nova_compute[254092]: 2025-11-25 18:06:34.638 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 18:06:34 compute-0 nova_compute[254092]: 2025-11-25 18:06:34.652 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing aggregate associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 18:06:34 compute-0 nova_compute[254092]: 2025-11-25 18:06:34.669 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing trait associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 18:06:34 compute-0 nova_compute[254092]: 2025-11-25 18:06:34.689 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 18:06:34 compute-0 nova_compute[254092]: 2025-11-25 18:06:34.890 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:06:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 18:06:35 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2295506503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:06:35 compute-0 nova_compute[254092]: 2025-11-25 18:06:35.162 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 18:06:35 compute-0 nova_compute[254092]: 2025-11-25 18:06:35.167 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 18:06:35 compute-0 nova_compute[254092]: 2025-11-25 18:06:35.181 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 18:06:35 compute-0 nova_compute[254092]: 2025-11-25 18:06:35.183 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 18:06:35 compute-0 nova_compute[254092]: 2025-11-25 18:06:35.183 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.033s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:06:36 compute-0 ceph-mon[74985]: pgmap v4313: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:06:36 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2295506503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:06:36 compute-0 nova_compute[254092]: 2025-11-25 18:06:36.165 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:06:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:06:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4314: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:06:37 compute-0 nova_compute[254092]: 2025-11-25 18:06:37.514 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:06:38 compute-0 ceph-mon[74985]: pgmap v4314: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:06:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4315: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:06:38 compute-0 nova_compute[254092]: 2025-11-25 18:06:38.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:06:38 compute-0 nova_compute[254092]: 2025-11-25 18:06:38.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 18:06:38 compute-0 sudo[470993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:06:38 compute-0 sudo[470993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:06:38 compute-0 sudo[470993]: pam_unix(sudo:session): session closed for user root
Nov 25 18:06:38 compute-0 sudo[471018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:06:38 compute-0 sudo[471018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:06:38 compute-0 sudo[471018]: pam_unix(sudo:session): session closed for user root
Nov 25 18:06:38 compute-0 sudo[471043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:06:38 compute-0 sudo[471043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:06:38 compute-0 sudo[471043]: pam_unix(sudo:session): session closed for user root
Nov 25 18:06:39 compute-0 sudo[471068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 18:06:39 compute-0 sudo[471068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:06:39 compute-0 sudo[471068]: pam_unix(sudo:session): session closed for user root
Nov 25 18:06:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:06:39 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:06:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 18:06:39 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:06:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 18:06:39 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:06:39 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev f454e515-bd6f-4b07-b53c-0f04ae3a2697 does not exist
Nov 25 18:06:39 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev b6a12a9c-7783-41c3-b6d1-d6d03497f321 does not exist
Nov 25 18:06:39 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 7e160866-51b7-42cc-bd12-929a20f91bcc does not exist
Nov 25 18:06:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 18:06:39 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:06:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 18:06:39 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:06:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:06:39 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:06:39 compute-0 sudo[471124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:06:39 compute-0 sudo[471124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:06:39 compute-0 sudo[471124]: pam_unix(sudo:session): session closed for user root
Nov 25 18:06:39 compute-0 sudo[471149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:06:39 compute-0 sudo[471149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:06:39 compute-0 sudo[471149]: pam_unix(sudo:session): session closed for user root
Nov 25 18:06:39 compute-0 sudo[471174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:06:39 compute-0 sudo[471174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:06:39 compute-0 sudo[471174]: pam_unix(sudo:session): session closed for user root
Nov 25 18:06:39 compute-0 nova_compute[254092]: 2025-11-25 18:06:39.894 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:06:39 compute-0 sudo[471199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 18:06:39 compute-0 sudo[471199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:06:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:06:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:06:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:06:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:06:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:06:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:06:40 compute-0 podman[471265]: 2025-11-25 18:06:40.223430581 +0000 UTC m=+0.021534246 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:06:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_18:06:40
Nov 25 18:06:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 18:06:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 18:06:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.control', 'images', 'vms', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'backups', 'volumes', '.rgw.root', 'cephfs.cephfs.meta']
Nov 25 18:06:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 18:06:40 compute-0 podman[471265]: 2025-11-25 18:06:40.409808517 +0000 UTC m=+0.207912192 container create c82854b4c0603015fcf13dfb7487ecd149605673f02d528593a70a8b025ab4d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:06:40 compute-0 ceph-mon[74985]: pgmap v4315: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:06:40 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:06:40 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:06:40 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:06:40 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:06:40 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:06:40 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:06:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4316: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:06:40 compute-0 systemd[1]: Started libpod-conmon-c82854b4c0603015fcf13dfb7487ecd149605673f02d528593a70a8b025ab4d5.scope.
Nov 25 18:06:40 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:06:40 compute-0 podman[471265]: 2025-11-25 18:06:40.536426493 +0000 UTC m=+0.334530158 container init c82854b4c0603015fcf13dfb7487ecd149605673f02d528593a70a8b025ab4d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 18:06:40 compute-0 podman[471265]: 2025-11-25 18:06:40.544500572 +0000 UTC m=+0.342604217 container start c82854b4c0603015fcf13dfb7487ecd149605673f02d528593a70a8b025ab4d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:06:40 compute-0 sweet_herschel[471281]: 167 167
Nov 25 18:06:40 compute-0 systemd[1]: libpod-c82854b4c0603015fcf13dfb7487ecd149605673f02d528593a70a8b025ab4d5.scope: Deactivated successfully.
Nov 25 18:06:40 compute-0 podman[471265]: 2025-11-25 18:06:40.574367863 +0000 UTC m=+0.372471538 container attach c82854b4c0603015fcf13dfb7487ecd149605673f02d528593a70a8b025ab4d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_herschel, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 18:06:40 compute-0 podman[471265]: 2025-11-25 18:06:40.575837513 +0000 UTC m=+0.373941168 container died c82854b4c0603015fcf13dfb7487ecd149605673f02d528593a70a8b025ab4d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:06:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-f28ee7151e7418d3f8eabe72e35873505166ffd1c910326ad7504a5a0abd30ae-merged.mount: Deactivated successfully.
Nov 25 18:06:40 compute-0 podman[471265]: 2025-11-25 18:06:40.818860217 +0000 UTC m=+0.616963862 container remove c82854b4c0603015fcf13dfb7487ecd149605673f02d528593a70a8b025ab4d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_herschel, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 18:06:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 18:06:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:06:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:06:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:06:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:06:40 compute-0 systemd[1]: libpod-conmon-c82854b4c0603015fcf13dfb7487ecd149605673f02d528593a70a8b025ab4d5.scope: Deactivated successfully.
Nov 25 18:06:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 18:06:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:06:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:06:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:06:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:06:41 compute-0 podman[471303]: 2025-11-25 18:06:41.027691963 +0000 UTC m=+0.102736029 container create 38e95b7cd76d648e459aa53d46cc9b3601e635f85e739b425dbcf61dbdd6619b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bhaskara, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 18:06:41 compute-0 podman[471303]: 2025-11-25 18:06:40.947448636 +0000 UTC m=+0.022492732 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:06:41 compute-0 systemd[1]: Started libpod-conmon-38e95b7cd76d648e459aa53d46cc9b3601e635f85e739b425dbcf61dbdd6619b.scope.
Nov 25 18:06:41 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:06:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0fdfc45cf574a3e877ba9f8bdcf0aa9003a286d94d71b83ed0e16df409158f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:06:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0fdfc45cf574a3e877ba9f8bdcf0aa9003a286d94d71b83ed0e16df409158f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:06:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0fdfc45cf574a3e877ba9f8bdcf0aa9003a286d94d71b83ed0e16df409158f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:06:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0fdfc45cf574a3e877ba9f8bdcf0aa9003a286d94d71b83ed0e16df409158f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:06:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0fdfc45cf574a3e877ba9f8bdcf0aa9003a286d94d71b83ed0e16df409158f8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:06:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:06:41 compute-0 podman[471303]: 2025-11-25 18:06:41.220320731 +0000 UTC m=+0.295364817 container init 38e95b7cd76d648e459aa53d46cc9b3601e635f85e739b425dbcf61dbdd6619b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bhaskara, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 18:06:41 compute-0 podman[471303]: 2025-11-25 18:06:41.226314563 +0000 UTC m=+0.301358629 container start 38e95b7cd76d648e459aa53d46cc9b3601e635f85e739b425dbcf61dbdd6619b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:06:41 compute-0 podman[471303]: 2025-11-25 18:06:41.250923341 +0000 UTC m=+0.325967427 container attach 38e95b7cd76d648e459aa53d46cc9b3601e635f85e739b425dbcf61dbdd6619b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bhaskara, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:06:41 compute-0 ceph-mon[74985]: pgmap v4316: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:06:42 compute-0 pedantic_bhaskara[471320]: --> passed data devices: 0 physical, 3 LVM
Nov 25 18:06:42 compute-0 pedantic_bhaskara[471320]: --> relative data size: 1.0
Nov 25 18:06:42 compute-0 pedantic_bhaskara[471320]: --> All data devices are unavailable
Nov 25 18:06:42 compute-0 systemd[1]: libpod-38e95b7cd76d648e459aa53d46cc9b3601e635f85e739b425dbcf61dbdd6619b.scope: Deactivated successfully.
Nov 25 18:06:42 compute-0 systemd[1]: libpod-38e95b7cd76d648e459aa53d46cc9b3601e635f85e739b425dbcf61dbdd6619b.scope: Consumed 1.006s CPU time.
Nov 25 18:06:42 compute-0 podman[471303]: 2025-11-25 18:06:42.290997293 +0000 UTC m=+1.366041369 container died 38e95b7cd76d648e459aa53d46cc9b3601e635f85e739b425dbcf61dbdd6619b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bhaskara, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 18:06:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0fdfc45cf574a3e877ba9f8bdcf0aa9003a286d94d71b83ed0e16df409158f8-merged.mount: Deactivated successfully.
Nov 25 18:06:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4317: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:06:42 compute-0 nova_compute[254092]: 2025-11-25 18:06:42.515 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:06:42 compute-0 podman[471303]: 2025-11-25 18:06:42.759907646 +0000 UTC m=+1.834951712 container remove 38e95b7cd76d648e459aa53d46cc9b3601e635f85e739b425dbcf61dbdd6619b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:06:42 compute-0 sudo[471199]: pam_unix(sudo:session): session closed for user root
Nov 25 18:06:42 compute-0 systemd[1]: libpod-conmon-38e95b7cd76d648e459aa53d46cc9b3601e635f85e739b425dbcf61dbdd6619b.scope: Deactivated successfully.
Nov 25 18:06:42 compute-0 sudo[471361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:06:42 compute-0 sudo[471361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:06:42 compute-0 sudo[471361]: pam_unix(sudo:session): session closed for user root
Nov 25 18:06:42 compute-0 sudo[471386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:06:42 compute-0 sudo[471386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:06:42 compute-0 sudo[471386]: pam_unix(sudo:session): session closed for user root
Nov 25 18:06:43 compute-0 sudo[471411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:06:43 compute-0 sudo[471411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:06:43 compute-0 sudo[471411]: pam_unix(sudo:session): session closed for user root
Nov 25 18:06:43 compute-0 sudo[471436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 18:06:43 compute-0 sudo[471436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:06:43 compute-0 podman[471502]: 2025-11-25 18:06:43.528075928 +0000 UTC m=+0.106240644 container create 01425ef91ddea1ce7d2864c365618c373ca7237753104353370863a5a0d9e04c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shaw, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:06:43 compute-0 podman[471502]: 2025-11-25 18:06:43.460148435 +0000 UTC m=+0.038313191 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:06:43 compute-0 systemd[1]: Started libpod-conmon-01425ef91ddea1ce7d2864c365618c373ca7237753104353370863a5a0d9e04c.scope.
Nov 25 18:06:43 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:06:43 compute-0 podman[471502]: 2025-11-25 18:06:43.765340645 +0000 UTC m=+0.343505361 container init 01425ef91ddea1ce7d2864c365618c373ca7237753104353370863a5a0d9e04c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shaw, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 18:06:43 compute-0 podman[471502]: 2025-11-25 18:06:43.780708403 +0000 UTC m=+0.358873099 container start 01425ef91ddea1ce7d2864c365618c373ca7237753104353370863a5a0d9e04c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shaw, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 18:06:43 compute-0 podman[471502]: 2025-11-25 18:06:43.788801642 +0000 UTC m=+0.366966348 container attach 01425ef91ddea1ce7d2864c365618c373ca7237753104353370863a5a0d9e04c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 18:06:43 compute-0 thirsty_shaw[471519]: 167 167
Nov 25 18:06:43 compute-0 systemd[1]: libpod-01425ef91ddea1ce7d2864c365618c373ca7237753104353370863a5a0d9e04c.scope: Deactivated successfully.
Nov 25 18:06:43 compute-0 podman[471502]: 2025-11-25 18:06:43.79093257 +0000 UTC m=+0.369097306 container died 01425ef91ddea1ce7d2864c365618c373ca7237753104353370863a5a0d9e04c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shaw, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:06:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a2d84a715a990e0b0a073408daaf9893970e086097a03e285a79b79bb7a6179-merged.mount: Deactivated successfully.
Nov 25 18:06:43 compute-0 ceph-mon[74985]: pgmap v4317: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:06:44 compute-0 podman[471502]: 2025-11-25 18:06:44.019624066 +0000 UTC m=+0.597788772 container remove 01425ef91ddea1ce7d2864c365618c373ca7237753104353370863a5a0d9e04c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 18:06:44 compute-0 systemd[1]: libpod-conmon-01425ef91ddea1ce7d2864c365618c373ca7237753104353370863a5a0d9e04c.scope: Deactivated successfully.
Nov 25 18:06:44 compute-0 podman[471545]: 2025-11-25 18:06:44.224421523 +0000 UTC m=+0.061234803 container create 2b036adbc1001972fb5e43e6a0a88e4476d0daf4ea134eed1ed72a5c30551fe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 18:06:44 compute-0 systemd[1]: Started libpod-conmon-2b036adbc1001972fb5e43e6a0a88e4476d0daf4ea134eed1ed72a5c30551fe1.scope.
Nov 25 18:06:44 compute-0 podman[471545]: 2025-11-25 18:06:44.189168346 +0000 UTC m=+0.025981646 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:06:44 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:06:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7773b518d0c3903cab4f11121f87aeaceac7503ca7035331b3bb77671240400e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:06:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7773b518d0c3903cab4f11121f87aeaceac7503ca7035331b3bb77671240400e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:06:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7773b518d0c3903cab4f11121f87aeaceac7503ca7035331b3bb77671240400e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:06:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7773b518d0c3903cab4f11121f87aeaceac7503ca7035331b3bb77671240400e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:06:44 compute-0 podman[471545]: 2025-11-25 18:06:44.330966953 +0000 UTC m=+0.167780253 container init 2b036adbc1001972fb5e43e6a0a88e4476d0daf4ea134eed1ed72a5c30551fe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 18:06:44 compute-0 podman[471545]: 2025-11-25 18:06:44.33969861 +0000 UTC m=+0.176511860 container start 2b036adbc1001972fb5e43e6a0a88e4476d0daf4ea134eed1ed72a5c30551fe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_heisenberg, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 18:06:44 compute-0 podman[471545]: 2025-11-25 18:06:44.426228878 +0000 UTC m=+0.263042148 container attach 2b036adbc1001972fb5e43e6a0a88e4476d0daf4ea134eed1ed72a5c30551fe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 18:06:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4318: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:06:44 compute-0 nova_compute[254092]: 2025-11-25 18:06:44.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:06:44 compute-0 nova_compute[254092]: 2025-11-25 18:06:44.899 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]: {
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:     "0": [
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:         {
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:             "devices": [
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:                 "/dev/loop3"
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:             ],
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:             "lv_name": "ceph_lv0",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:             "lv_size": "21470642176",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:             "name": "ceph_lv0",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:             "tags": {
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:                 "ceph.cluster_name": "ceph",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:                 "ceph.crush_device_class": "",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:                 "ceph.encrypted": "0",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:                 "ceph.osd_id": "0",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:                 "ceph.type": "block",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:                 "ceph.vdo": "0"
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:             },
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:             "type": "block",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:             "vg_name": "ceph_vg0"
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:         }
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:     ],
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:     "1": [
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:         {
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:             "devices": [
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:                 "/dev/loop4"
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:             ],
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:             "lv_name": "ceph_lv1",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:             "lv_size": "21470642176",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:             "name": "ceph_lv1",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:             "tags": {
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:                 "ceph.cluster_name": "ceph",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:                 "ceph.crush_device_class": "",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:                 "ceph.encrypted": "0",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:                 "ceph.osd_id": "1",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:                 "ceph.type": "block",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:                 "ceph.vdo": "0"
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:             },
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:             "type": "block",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:             "vg_name": "ceph_vg1"
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:         }
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:     ],
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:     "2": [
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:         {
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:             "devices": [
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:                 "/dev/loop5"
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:             ],
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:             "lv_name": "ceph_lv2",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:             "lv_size": "21470642176",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:             "name": "ceph_lv2",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:             "tags": {
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:                 "ceph.cluster_name": "ceph",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:                 "ceph.crush_device_class": "",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:                 "ceph.encrypted": "0",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:                 "ceph.osd_id": "2",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:                 "ceph.type": "block",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:                 "ceph.vdo": "0"
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:             },
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:             "type": "block",
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:             "vg_name": "ceph_vg2"
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:         }
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]:     ]
Nov 25 18:06:45 compute-0 distracted_heisenberg[471562]: }
Nov 25 18:06:45 compute-0 systemd[1]: libpod-2b036adbc1001972fb5e43e6a0a88e4476d0daf4ea134eed1ed72a5c30551fe1.scope: Deactivated successfully.
Nov 25 18:06:45 compute-0 podman[471545]: 2025-11-25 18:06:45.12614111 +0000 UTC m=+0.962954360 container died 2b036adbc1001972fb5e43e6a0a88e4476d0daf4ea134eed1ed72a5c30551fe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:06:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-7773b518d0c3903cab4f11121f87aeaceac7503ca7035331b3bb77671240400e-merged.mount: Deactivated successfully.
Nov 25 18:06:45 compute-0 podman[471545]: 2025-11-25 18:06:45.697422011 +0000 UTC m=+1.534235261 container remove 2b036adbc1001972fb5e43e6a0a88e4476d0daf4ea134eed1ed72a5c30551fe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Nov 25 18:06:45 compute-0 systemd[1]: libpod-conmon-2b036adbc1001972fb5e43e6a0a88e4476d0daf4ea134eed1ed72a5c30551fe1.scope: Deactivated successfully.
Nov 25 18:06:45 compute-0 sudo[471436]: pam_unix(sudo:session): session closed for user root
Nov 25 18:06:45 compute-0 sudo[471585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:06:45 compute-0 sudo[471585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:06:45 compute-0 sudo[471585]: pam_unix(sudo:session): session closed for user root
Nov 25 18:06:45 compute-0 sudo[471610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:06:45 compute-0 sudo[471610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:06:45 compute-0 sudo[471610]: pam_unix(sudo:session): session closed for user root
Nov 25 18:06:45 compute-0 sudo[471635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:06:45 compute-0 sudo[471635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:06:45 compute-0 sudo[471635]: pam_unix(sudo:session): session closed for user root
Nov 25 18:06:46 compute-0 sudo[471660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 18:06:46 compute-0 sudo[471660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:06:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:06:46 compute-0 ceph-mon[74985]: pgmap v4318: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:06:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4319: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:06:46 compute-0 podman[471724]: 2025-11-25 18:06:46.453367933 +0000 UTC m=+0.120608584 container create 244e80be70292791c8d20a963a343916d2b789a71ab4ddec7d52f537a8912b43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kepler, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 18:06:46 compute-0 podman[471724]: 2025-11-25 18:06:46.362347453 +0000 UTC m=+0.029588134 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:06:46 compute-0 nova_compute[254092]: 2025-11-25 18:06:46.493 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:06:46 compute-0 systemd[1]: Started libpod-conmon-244e80be70292791c8d20a963a343916d2b789a71ab4ddec7d52f537a8912b43.scope.
Nov 25 18:06:46 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:06:46 compute-0 podman[471724]: 2025-11-25 18:06:46.759010695 +0000 UTC m=+0.426251366 container init 244e80be70292791c8d20a963a343916d2b789a71ab4ddec7d52f537a8912b43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:06:46 compute-0 podman[471724]: 2025-11-25 18:06:46.767015883 +0000 UTC m=+0.434256534 container start 244e80be70292791c8d20a963a343916d2b789a71ab4ddec7d52f537a8912b43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kepler, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:06:46 compute-0 zealous_kepler[471741]: 167 167
Nov 25 18:06:46 compute-0 systemd[1]: libpod-244e80be70292791c8d20a963a343916d2b789a71ab4ddec7d52f537a8912b43.scope: Deactivated successfully.
Nov 25 18:06:46 compute-0 podman[471724]: 2025-11-25 18:06:46.877258454 +0000 UTC m=+0.544499135 container attach 244e80be70292791c8d20a963a343916d2b789a71ab4ddec7d52f537a8912b43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kepler, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:06:46 compute-0 podman[471724]: 2025-11-25 18:06:46.878034665 +0000 UTC m=+0.545275326 container died 244e80be70292791c8d20a963a343916d2b789a71ab4ddec7d52f537a8912b43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kepler, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 18:06:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-63d5a71ef4aea83e3fab189396a4383b485f86e0224a0a30cd9a4dcd6871dace-merged.mount: Deactivated successfully.
Nov 25 18:06:47 compute-0 podman[471724]: 2025-11-25 18:06:47.09528479 +0000 UTC m=+0.762525441 container remove 244e80be70292791c8d20a963a343916d2b789a71ab4ddec7d52f537a8912b43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:06:47 compute-0 systemd[1]: libpod-conmon-244e80be70292791c8d20a963a343916d2b789a71ab4ddec7d52f537a8912b43.scope: Deactivated successfully.
Nov 25 18:06:47 compute-0 podman[471766]: 2025-11-25 18:06:47.277818153 +0000 UTC m=+0.052292300 container create d8069a693005846f4ccb4821edd041f6c2826183aa966801c3f9f75d493d1c3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 18:06:47 compute-0 systemd[1]: Started libpod-conmon-d8069a693005846f4ccb4821edd041f6c2826183aa966801c3f9f75d493d1c3e.scope.
Nov 25 18:06:47 compute-0 podman[471766]: 2025-11-25 18:06:47.25597478 +0000 UTC m=+0.030448937 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:06:47 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:06:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f58d05f18f0aa28ba3460c7f4daf03c67ee80da9817e2e0c1ac90edfc8e2705/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:06:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f58d05f18f0aa28ba3460c7f4daf03c67ee80da9817e2e0c1ac90edfc8e2705/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:06:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f58d05f18f0aa28ba3460c7f4daf03c67ee80da9817e2e0c1ac90edfc8e2705/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:06:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f58d05f18f0aa28ba3460c7f4daf03c67ee80da9817e2e0c1ac90edfc8e2705/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:06:47 compute-0 podman[471766]: 2025-11-25 18:06:47.423406563 +0000 UTC m=+0.197880720 container init d8069a693005846f4ccb4821edd041f6c2826183aa966801c3f9f75d493d1c3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:06:47 compute-0 podman[471766]: 2025-11-25 18:06:47.430075854 +0000 UTC m=+0.204549991 container start d8069a693005846f4ccb4821edd041f6c2826183aa966801c3f9f75d493d1c3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 18:06:47 compute-0 podman[471766]: 2025-11-25 18:06:47.452984305 +0000 UTC m=+0.227458442 container attach d8069a693005846f4ccb4821edd041f6c2826183aa966801c3f9f75d493d1c3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_goldwasser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 18:06:47 compute-0 nova_compute[254092]: 2025-11-25 18:06:47.517 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:06:48 compute-0 ceph-mon[74985]: pgmap v4319: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:06:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4320: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:06:48 compute-0 elastic_goldwasser[471782]: {
Nov 25 18:06:48 compute-0 elastic_goldwasser[471782]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 18:06:48 compute-0 elastic_goldwasser[471782]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:06:48 compute-0 elastic_goldwasser[471782]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 18:06:48 compute-0 elastic_goldwasser[471782]:         "osd_id": 1,
Nov 25 18:06:48 compute-0 elastic_goldwasser[471782]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 18:06:48 compute-0 elastic_goldwasser[471782]:         "type": "bluestore"
Nov 25 18:06:48 compute-0 elastic_goldwasser[471782]:     },
Nov 25 18:06:48 compute-0 elastic_goldwasser[471782]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 18:06:48 compute-0 elastic_goldwasser[471782]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:06:48 compute-0 elastic_goldwasser[471782]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 18:06:48 compute-0 elastic_goldwasser[471782]:         "osd_id": 2,
Nov 25 18:06:48 compute-0 elastic_goldwasser[471782]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 18:06:48 compute-0 elastic_goldwasser[471782]:         "type": "bluestore"
Nov 25 18:06:48 compute-0 elastic_goldwasser[471782]:     },
Nov 25 18:06:48 compute-0 elastic_goldwasser[471782]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 18:06:48 compute-0 elastic_goldwasser[471782]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:06:48 compute-0 elastic_goldwasser[471782]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 18:06:48 compute-0 elastic_goldwasser[471782]:         "osd_id": 0,
Nov 25 18:06:48 compute-0 elastic_goldwasser[471782]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 18:06:48 compute-0 elastic_goldwasser[471782]:         "type": "bluestore"
Nov 25 18:06:48 compute-0 elastic_goldwasser[471782]:     }
Nov 25 18:06:48 compute-0 elastic_goldwasser[471782]: }
Nov 25 18:06:48 compute-0 systemd[1]: libpod-d8069a693005846f4ccb4821edd041f6c2826183aa966801c3f9f75d493d1c3e.scope: Deactivated successfully.
Nov 25 18:06:48 compute-0 systemd[1]: libpod-d8069a693005846f4ccb4821edd041f6c2826183aa966801c3f9f75d493d1c3e.scope: Consumed 1.077s CPU time.
Nov 25 18:06:48 compute-0 conmon[471782]: conmon d8069a693005846f4ccb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d8069a693005846f4ccb4821edd041f6c2826183aa966801c3f9f75d493d1c3e.scope/container/memory.events
Nov 25 18:06:48 compute-0 podman[471766]: 2025-11-25 18:06:48.506113501 +0000 UTC m=+1.280587638 container died d8069a693005846f4ccb4821edd041f6c2826183aa966801c3f9f75d493d1c3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_goldwasser, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 18:06:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f58d05f18f0aa28ba3460c7f4daf03c67ee80da9817e2e0c1ac90edfc8e2705-merged.mount: Deactivated successfully.
Nov 25 18:06:49 compute-0 podman[471766]: 2025-11-25 18:06:49.229503761 +0000 UTC m=+2.003977898 container remove d8069a693005846f4ccb4821edd041f6c2826183aa966801c3f9f75d493d1c3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_goldwasser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:06:49 compute-0 sudo[471660]: pam_unix(sudo:session): session closed for user root
Nov 25 18:06:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:06:49 compute-0 systemd[1]: libpod-conmon-d8069a693005846f4ccb4821edd041f6c2826183aa966801c3f9f75d493d1c3e.scope: Deactivated successfully.
Nov 25 18:06:49 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:06:49 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:06:49 compute-0 ceph-mon[74985]: pgmap v4320: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:06:49 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:06:49 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev e765ea6c-41af-44eb-af27-35efc63dff60 does not exist
Nov 25 18:06:49 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 62e94597-6654-44c6-bd80-af1b88c6e18a does not exist
Nov 25 18:06:49 compute-0 sudo[471827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:06:49 compute-0 sudo[471827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:06:49 compute-0 sudo[471827]: pam_unix(sudo:session): session closed for user root
Nov 25 18:06:49 compute-0 sudo[471852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 18:06:49 compute-0 sudo[471852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:06:49 compute-0 sudo[471852]: pam_unix(sudo:session): session closed for user root
Nov 25 18:06:49 compute-0 nova_compute[254092]: 2025-11-25 18:06:49.902 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:06:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4321: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:06:50 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:06:50 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:06:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:06:51 compute-0 ceph-mon[74985]: pgmap v4321: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:06:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4322: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:06:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 18:06:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:06:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 18:06:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:06:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 18:06:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:06:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:06:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:06:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:06:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:06:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 18:06:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:06:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 18:06:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:06:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:06:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:06:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 18:06:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:06:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 18:06:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:06:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:06:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:06:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 18:06:52 compute-0 nova_compute[254092]: 2025-11-25 18:06:52.553 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:06:53 compute-0 ceph-mon[74985]: pgmap v4322: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:06:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4323: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:06:54 compute-0 nova_compute[254092]: 2025-11-25 18:06:54.906 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:06:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 18:06:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1972582466' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 18:06:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 18:06:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1972582466' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 18:06:55 compute-0 ceph-mon[74985]: pgmap v4323: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:06:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/1972582466' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 18:06:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/1972582466' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 18:06:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:06:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4324: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:06:56 compute-0 podman[471877]: 2025-11-25 18:06:56.677498426 +0000 UTC m=+0.096242592 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:06:56 compute-0 podman[471879]: 2025-11-25 18:06:56.683852519 +0000 UTC m=+0.101092215 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 25 18:06:56 compute-0 podman[471878]: 2025-11-25 18:06:56.686362446 +0000 UTC m=+0.097423394 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 25 18:06:57 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #207. Immutable memtables: 0.
Nov 25 18:06:57 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:06:57.088846) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 18:06:57 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 129] Flushing memtable with next log file: 207
Nov 25 18:06:57 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094017088959, "job": 129, "event": "flush_started", "num_memtables": 1, "num_entries": 2056, "num_deletes": 251, "total_data_size": 3495026, "memory_usage": 3556944, "flush_reason": "Manual Compaction"}
Nov 25 18:06:57 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 129] Level-0 flush table #208: started
Nov 25 18:06:57 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094017127893, "cf_name": "default", "job": 129, "event": "table_file_creation", "file_number": 208, "file_size": 3429161, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 86658, "largest_seqno": 88713, "table_properties": {"data_size": 3419654, "index_size": 6064, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18742, "raw_average_key_size": 20, "raw_value_size": 3400926, "raw_average_value_size": 3649, "num_data_blocks": 269, "num_entries": 932, "num_filter_entries": 932, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764093783, "oldest_key_time": 1764093783, "file_creation_time": 1764094017, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 208, "seqno_to_time_mapping": "N/A"}}
Nov 25 18:06:57 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 129] Flush lasted 39231 microseconds, and 9222 cpu microseconds.
Nov 25 18:06:57 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 18:06:57 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:06:57.128091) [db/flush_job.cc:967] [default] [JOB 129] Level-0 flush table #208: 3429161 bytes OK
Nov 25 18:06:57 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:06:57.128183) [db/memtable_list.cc:519] [default] Level-0 commit table #208 started
Nov 25 18:06:57 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:06:57.138256) [db/memtable_list.cc:722] [default] Level-0 commit table #208: memtable #1 done
Nov 25 18:06:57 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:06:57.138285) EVENT_LOG_v1 {"time_micros": 1764094017138276, "job": 129, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 18:06:57 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:06:57.138319) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 18:06:57 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 129] Try to delete WAL files size 3486407, prev total WAL file size 3486407, number of live WAL files 2.
Nov 25 18:06:57 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000204.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:06:57 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:06:57.140717) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038373835' seq:72057594037927935, type:22 .. '7061786F730039303337' seq:0, type:0; will stop at (end)
Nov 25 18:06:57 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 130] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 18:06:57 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 129 Base level 0, inputs: [208(3348KB)], [206(10MB)]
Nov 25 18:06:57 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094017140766, "job": 130, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [208], "files_L6": [206], "score": -1, "input_data_size": 14337139, "oldest_snapshot_seqno": -1}
Nov 25 18:06:57 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 130] Generated table #209: 10373 keys, 12558627 bytes, temperature: kUnknown
Nov 25 18:06:57 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094017389737, "cf_name": "default", "job": 130, "event": "table_file_creation", "file_number": 209, "file_size": 12558627, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12493204, "index_size": 38366, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25989, "raw_key_size": 273003, "raw_average_key_size": 26, "raw_value_size": 12311777, "raw_average_value_size": 1186, "num_data_blocks": 1479, "num_entries": 10373, "num_filter_entries": 10373, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764094017, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 209, "seqno_to_time_mapping": "N/A"}}
Nov 25 18:06:57 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 18:06:57 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:06:57.390076) [db/compaction/compaction_job.cc:1663] [default] [JOB 130] Compacted 1@0 + 1@6 files to L6 => 12558627 bytes
Nov 25 18:06:57 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:06:57.447261) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 57.6 rd, 50.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 10.4 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(7.8) write-amplify(3.7) OK, records in: 10887, records dropped: 514 output_compression: NoCompression
Nov 25 18:06:57 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:06:57.447296) EVENT_LOG_v1 {"time_micros": 1764094017447284, "job": 130, "event": "compaction_finished", "compaction_time_micros": 249076, "compaction_time_cpu_micros": 43180, "output_level": 6, "num_output_files": 1, "total_output_size": 12558627, "num_input_records": 10887, "num_output_records": 10373, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 18:06:57 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000208.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:06:57 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094017448126, "job": 130, "event": "table_file_deletion", "file_number": 208}
Nov 25 18:06:57 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000206.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:06:57 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094017450509, "job": 130, "event": "table_file_deletion", "file_number": 206}
Nov 25 18:06:57 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:06:57.140554) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:06:57 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:06:57.450670) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:06:57 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:06:57.450682) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:06:57 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:06:57.450686) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:06:57 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:06:57.450689) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:06:57 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:06:57.450692) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:06:57 compute-0 nova_compute[254092]: 2025-11-25 18:06:57.556 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:06:58 compute-0 ceph-mon[74985]: pgmap v4324: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:06:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4325: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:06:59 compute-0 nova_compute[254092]: 2025-11-25 18:06:59.910 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:07:00 compute-0 ceph-mon[74985]: pgmap v4325: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4326: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:07:02 compute-0 ceph-mon[74985]: pgmap v4326: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4327: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:02 compute-0 nova_compute[254092]: 2025-11-25 18:07:02.558 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:07:04 compute-0 ceph-mon[74985]: pgmap v4327: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4328: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:04 compute-0 nova_compute[254092]: 2025-11-25 18:07:04.911 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:07:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:07:06 compute-0 ceph-mon[74985]: pgmap v4328: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4329: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:07 compute-0 nova_compute[254092]: 2025-11-25 18:07:07.599 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:07:08 compute-0 ceph-mon[74985]: pgmap v4329: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4330: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:09 compute-0 nova_compute[254092]: 2025-11-25 18:07:09.916 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:07:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:07:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:07:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:07:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:07:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:07:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:07:10 compute-0 ceph-mon[74985]: pgmap v4330: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4331: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:07:12 compute-0 ceph-mon[74985]: pgmap v4331: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4332: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:12 compute-0 nova_compute[254092]: 2025-11-25 18:07:12.601 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:07:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 18:07:13.715 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 18:07:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 18:07:13.715 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 18:07:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 18:07:13.716 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:07:14 compute-0 ceph-mon[74985]: pgmap v4332: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4333: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:14 compute-0 nova_compute[254092]: 2025-11-25 18:07:14.919 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:07:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:07:16 compute-0 ceph-mon[74985]: pgmap v4333: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4334: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:17 compute-0 ceph-mon[74985]: pgmap v4334: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:17 compute-0 nova_compute[254092]: 2025-11-25 18:07:17.602 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:07:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4335: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:19 compute-0 ceph-mon[74985]: pgmap v4335: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:19 compute-0 nova_compute[254092]: 2025-11-25 18:07:19.922 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:07:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4336: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:07:21 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #210. Immutable memtables: 0.
Nov 25 18:07:21 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:07:21.188548) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 18:07:21 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 131] Flushing memtable with next log file: 210
Nov 25 18:07:21 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094041188625, "job": 131, "event": "flush_started", "num_memtables": 1, "num_entries": 423, "num_deletes": 250, "total_data_size": 346672, "memory_usage": 353968, "flush_reason": "Manual Compaction"}
Nov 25 18:07:21 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 131] Level-0 flush table #211: started
Nov 25 18:07:21 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094041231226, "cf_name": "default", "job": 131, "event": "table_file_creation", "file_number": 211, "file_size": 264495, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 88714, "largest_seqno": 89136, "table_properties": {"data_size": 262105, "index_size": 489, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6307, "raw_average_key_size": 20, "raw_value_size": 257411, "raw_average_value_size": 825, "num_data_blocks": 22, "num_entries": 312, "num_filter_entries": 312, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764094018, "oldest_key_time": 1764094018, "file_creation_time": 1764094041, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 211, "seqno_to_time_mapping": "N/A"}}
Nov 25 18:07:21 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 131] Flush lasted 42813 microseconds, and 3130 cpu microseconds.
Nov 25 18:07:21 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 18:07:21 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:07:21.231321) [db/flush_job.cc:967] [default] [JOB 131] Level-0 flush table #211: 264495 bytes OK
Nov 25 18:07:21 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:07:21.231398) [db/memtable_list.cc:519] [default] Level-0 commit table #211 started
Nov 25 18:07:21 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:07:21.300326) [db/memtable_list.cc:722] [default] Level-0 commit table #211: memtable #1 done
Nov 25 18:07:21 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:07:21.300372) EVENT_LOG_v1 {"time_micros": 1764094041300362, "job": 131, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 18:07:21 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:07:21.300393) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 18:07:21 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 131] Try to delete WAL files size 344061, prev total WAL file size 344061, number of live WAL files 2.
Nov 25 18:07:21 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000207.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:07:21 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:07:21.300935) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033373631' seq:72057594037927935, type:22 .. '6D6772737461740034303132' seq:0, type:0; will stop at (end)
Nov 25 18:07:21 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 132] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 18:07:21 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 131 Base level 0, inputs: [211(258KB)], [209(11MB)]
Nov 25 18:07:21 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094041300964, "job": 132, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [211], "files_L6": [209], "score": -1, "input_data_size": 12823122, "oldest_snapshot_seqno": -1}
Nov 25 18:07:21 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 132] Generated table #212: 10184 keys, 9614052 bytes, temperature: kUnknown
Nov 25 18:07:21 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094041520063, "cf_name": "default", "job": 132, "event": "table_file_creation", "file_number": 212, "file_size": 9614052, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9554506, "index_size": 33001, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25477, "raw_key_size": 269245, "raw_average_key_size": 26, "raw_value_size": 9380898, "raw_average_value_size": 921, "num_data_blocks": 1255, "num_entries": 10184, "num_filter_entries": 10184, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764094041, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 212, "seqno_to_time_mapping": "N/A"}}
Nov 25 18:07:21 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 18:07:21 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:07:21.520413) [db/compaction/compaction_job.cc:1663] [default] [JOB 132] Compacted 1@0 + 1@6 files to L6 => 9614052 bytes
Nov 25 18:07:21 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:07:21.525834) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 58.5 rd, 43.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 12.0 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(84.8) write-amplify(36.3) OK, records in: 10685, records dropped: 501 output_compression: NoCompression
Nov 25 18:07:21 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:07:21.525872) EVENT_LOG_v1 {"time_micros": 1764094041525854, "job": 132, "event": "compaction_finished", "compaction_time_micros": 219201, "compaction_time_cpu_micros": 24900, "output_level": 6, "num_output_files": 1, "total_output_size": 9614052, "num_input_records": 10685, "num_output_records": 10184, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 18:07:21 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000211.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:07:21 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094041526162, "job": 132, "event": "table_file_deletion", "file_number": 211}
Nov 25 18:07:21 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000209.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:07:21 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094041530831, "job": 132, "event": "table_file_deletion", "file_number": 209}
Nov 25 18:07:21 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:07:21.300854) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:07:21 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:07:21.530942) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:07:21 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:07:21.530948) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:07:21 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:07:21.530949) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:07:21 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:07:21.530951) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:07:21 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:07:21.530953) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:07:22 compute-0 ceph-mon[74985]: pgmap v4336: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4337: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:22 compute-0 nova_compute[254092]: 2025-11-25 18:07:22.605 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:07:23 compute-0 ceph-mon[74985]: pgmap v4337: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4338: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:24 compute-0 nova_compute[254092]: 2025-11-25 18:07:24.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:07:24 compute-0 nova_compute[254092]: 2025-11-25 18:07:24.926 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:07:25 compute-0 nova_compute[254092]: 2025-11-25 18:07:25.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:07:26 compute-0 ceph-mon[74985]: pgmap v4338: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:07:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4339: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:27 compute-0 nova_compute[254092]: 2025-11-25 18:07:27.606 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:07:27 compute-0 podman[471940]: 2025-11-25 18:07:27.647920275 +0000 UTC m=+0.062563319 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd)
Nov 25 18:07:27 compute-0 podman[471941]: 2025-11-25 18:07:27.652694594 +0000 UTC m=+0.059392923 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 18:07:27 compute-0 podman[471942]: 2025-11-25 18:07:27.685065633 +0000 UTC m=+0.089443268 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 18:07:28 compute-0 ceph-mon[74985]: pgmap v4339: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4340: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:29 compute-0 ceph-mon[74985]: pgmap v4340: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:29 compute-0 nova_compute[254092]: 2025-11-25 18:07:29.930 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:07:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4341: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:07:31 compute-0 nova_compute[254092]: 2025-11-25 18:07:31.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:07:31 compute-0 ceph-mon[74985]: pgmap v4341: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4342: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:32 compute-0 nova_compute[254092]: 2025-11-25 18:07:32.606 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:07:33 compute-0 nova_compute[254092]: 2025-11-25 18:07:33.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:07:33 compute-0 nova_compute[254092]: 2025-11-25 18:07:33.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:07:33 compute-0 nova_compute[254092]: 2025-11-25 18:07:33.528 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 18:07:33 compute-0 nova_compute[254092]: 2025-11-25 18:07:33.529 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 18:07:33 compute-0 nova_compute[254092]: 2025-11-25 18:07:33.529 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:07:33 compute-0 nova_compute[254092]: 2025-11-25 18:07:33.529 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 18:07:33 compute-0 nova_compute[254092]: 2025-11-25 18:07:33.529 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 18:07:33 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 18:07:33 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2501295649' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:07:33 compute-0 nova_compute[254092]: 2025-11-25 18:07:33.960 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 18:07:34 compute-0 nova_compute[254092]: 2025-11-25 18:07:34.109 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 18:07:34 compute-0 nova_compute[254092]: 2025-11-25 18:07:34.111 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3619MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 18:07:34 compute-0 nova_compute[254092]: 2025-11-25 18:07:34.111 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 18:07:34 compute-0 nova_compute[254092]: 2025-11-25 18:07:34.111 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 18:07:34 compute-0 nova_compute[254092]: 2025-11-25 18:07:34.189 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 18:07:34 compute-0 nova_compute[254092]: 2025-11-25 18:07:34.190 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 18:07:34 compute-0 nova_compute[254092]: 2025-11-25 18:07:34.206 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 18:07:34 compute-0 ceph-mon[74985]: pgmap v4342: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:34 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2501295649' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:07:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4343: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 18:07:34 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4004929326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:07:34 compute-0 nova_compute[254092]: 2025-11-25 18:07:34.702 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 18:07:34 compute-0 nova_compute[254092]: 2025-11-25 18:07:34.709 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 18:07:34 compute-0 nova_compute[254092]: 2025-11-25 18:07:34.728 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 18:07:34 compute-0 nova_compute[254092]: 2025-11-25 18:07:34.729 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 18:07:34 compute-0 nova_compute[254092]: 2025-11-25 18:07:34.730 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:07:34 compute-0 nova_compute[254092]: 2025-11-25 18:07:34.932 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:07:35 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4004929326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:07:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:07:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4344: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:36 compute-0 ceph-mon[74985]: pgmap v4343: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:36 compute-0 nova_compute[254092]: 2025-11-25 18:07:36.730 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:07:36 compute-0 nova_compute[254092]: 2025-11-25 18:07:36.731 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 18:07:36 compute-0 nova_compute[254092]: 2025-11-25 18:07:36.731 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 18:07:36 compute-0 nova_compute[254092]: 2025-11-25 18:07:36.744 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 18:07:36 compute-0 nova_compute[254092]: 2025-11-25 18:07:36.745 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:07:37 compute-0 nova_compute[254092]: 2025-11-25 18:07:37.657 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:07:37 compute-0 ceph-mon[74985]: pgmap v4344: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4345: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:39 compute-0 nova_compute[254092]: 2025-11-25 18:07:39.936 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:07:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:07:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:07:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:07:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:07:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:07:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:07:40 compute-0 ceph-mon[74985]: pgmap v4345: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_18:07:40
Nov 25 18:07:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 18:07:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 18:07:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['.mgr', 'images', 'default.rgw.meta', 'default.rgw.log', 'volumes', '.rgw.root', 'cephfs.cephfs.data', 'vms', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta']
Nov 25 18:07:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 18:07:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4346: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:40 compute-0 nova_compute[254092]: 2025-11-25 18:07:40.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:07:40 compute-0 nova_compute[254092]: 2025-11-25 18:07:40.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 18:07:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 18:07:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:07:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:07:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:07:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:07:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 18:07:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:07:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:07:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:07:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:07:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:07:41 compute-0 ceph-mon[74985]: pgmap v4346: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4347: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:42 compute-0 nova_compute[254092]: 2025-11-25 18:07:42.657 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:07:44 compute-0 ceph-mon[74985]: pgmap v4347: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4348: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:45 compute-0 nova_compute[254092]: 2025-11-25 18:07:45.015 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:07:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:07:46 compute-0 ceph-mon[74985]: pgmap v4348: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4349: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:46 compute-0 nova_compute[254092]: 2025-11-25 18:07:46.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:07:47 compute-0 nova_compute[254092]: 2025-11-25 18:07:47.658 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:07:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4350: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:48 compute-0 ceph-mon[74985]: pgmap v4349: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:49 compute-0 ceph-mon[74985]: pgmap v4350: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:49 compute-0 sudo[472048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:07:49 compute-0 sudo[472048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:07:49 compute-0 sudo[472048]: pam_unix(sudo:session): session closed for user root
Nov 25 18:07:49 compute-0 sudo[472073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:07:49 compute-0 sudo[472073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:07:49 compute-0 sudo[472073]: pam_unix(sudo:session): session closed for user root
Nov 25 18:07:49 compute-0 sudo[472098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:07:49 compute-0 sudo[472098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:07:49 compute-0 sudo[472098]: pam_unix(sudo:session): session closed for user root
Nov 25 18:07:50 compute-0 sudo[472123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 25 18:07:50 compute-0 sudo[472123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:07:50 compute-0 nova_compute[254092]: 2025-11-25 18:07:50.019 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:07:50 compute-0 sudo[472123]: pam_unix(sudo:session): session closed for user root
Nov 25 18:07:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:07:50 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:07:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:07:50 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:07:50 compute-0 sudo[472166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:07:50 compute-0 sudo[472166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:07:50 compute-0 sudo[472166]: pam_unix(sudo:session): session closed for user root
Nov 25 18:07:50 compute-0 sudo[472191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:07:50 compute-0 sudo[472191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:07:50 compute-0 sudo[472191]: pam_unix(sudo:session): session closed for user root
Nov 25 18:07:50 compute-0 sudo[472216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:07:50 compute-0 sudo[472216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:07:50 compute-0 sudo[472216]: pam_unix(sudo:session): session closed for user root
Nov 25 18:07:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4351: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:50 compute-0 sudo[472241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 18:07:50 compute-0 sudo[472241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:07:50 compute-0 sudo[472241]: pam_unix(sudo:session): session closed for user root
Nov 25 18:07:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:07:50 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:07:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 18:07:50 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:07:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 18:07:50 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:07:50 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 561dd13c-5b87-4bb0-bd43-086fd34afe14 does not exist
Nov 25 18:07:50 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 661eebb7-9565-4bed-a345-daa46bb01a30 does not exist
Nov 25 18:07:50 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev fed85de5-8287-49e6-a8b1-d7a3e97c8538 does not exist
Nov 25 18:07:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 18:07:50 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:07:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 18:07:50 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:07:50 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:07:50 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:07:51 compute-0 sudo[472295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:07:51 compute-0 sudo[472295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:07:51 compute-0 sudo[472295]: pam_unix(sudo:session): session closed for user root
Nov 25 18:07:51 compute-0 sudo[472320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:07:51 compute-0 sudo[472320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:07:51 compute-0 sudo[472320]: pam_unix(sudo:session): session closed for user root
Nov 25 18:07:51 compute-0 sudo[472345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:07:51 compute-0 sudo[472345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:07:51 compute-0 sudo[472345]: pam_unix(sudo:session): session closed for user root
Nov 25 18:07:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:07:51 compute-0 sudo[472370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 18:07:51 compute-0 sudo[472370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:07:51 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:07:51 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:07:51 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:07:51 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:07:51 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:07:51 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:07:51 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:07:51 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:07:51 compute-0 podman[472434]: 2025-11-25 18:07:51.579208238 +0000 UTC m=+0.063360880 container create fc0909373b59f524249d8d1b5ed9ddf7ef54d5e5ef5845b966ed16ed9b9f5171 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_brattain, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:07:51 compute-0 systemd[1]: Started libpod-conmon-fc0909373b59f524249d8d1b5ed9ddf7ef54d5e5ef5845b966ed16ed9b9f5171.scope.
Nov 25 18:07:51 compute-0 podman[472434]: 2025-11-25 18:07:51.542539773 +0000 UTC m=+0.026692465 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:07:51 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:07:51 compute-0 podman[472434]: 2025-11-25 18:07:51.71971219 +0000 UTC m=+0.203864862 container init fc0909373b59f524249d8d1b5ed9ddf7ef54d5e5ef5845b966ed16ed9b9f5171 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_brattain, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 18:07:51 compute-0 podman[472434]: 2025-11-25 18:07:51.728731206 +0000 UTC m=+0.212883858 container start fc0909373b59f524249d8d1b5ed9ddf7ef54d5e5ef5845b966ed16ed9b9f5171 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_brattain, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 18:07:51 compute-0 brave_brattain[472450]: 167 167
Nov 25 18:07:51 compute-0 systemd[1]: libpod-fc0909373b59f524249d8d1b5ed9ddf7ef54d5e5ef5845b966ed16ed9b9f5171.scope: Deactivated successfully.
Nov 25 18:07:51 compute-0 conmon[472450]: conmon fc0909373b59f524249d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fc0909373b59f524249d8d1b5ed9ddf7ef54d5e5ef5845b966ed16ed9b9f5171.scope/container/memory.events
Nov 25 18:07:51 compute-0 podman[472434]: 2025-11-25 18:07:51.752919661 +0000 UTC m=+0.237072333 container attach fc0909373b59f524249d8d1b5ed9ddf7ef54d5e5ef5845b966ed16ed9b9f5171 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_brattain, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 18:07:51 compute-0 podman[472434]: 2025-11-25 18:07:51.754231097 +0000 UTC m=+0.238383779 container died fc0909373b59f524249d8d1b5ed9ddf7ef54d5e5ef5845b966ed16ed9b9f5171 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_brattain, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 18:07:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ebebd122ee90e1f02440da3f4e7810e8fc23f1656f79e390a07e61102ffd57c-merged.mount: Deactivated successfully.
Nov 25 18:07:52 compute-0 podman[472434]: 2025-11-25 18:07:52.015467475 +0000 UTC m=+0.499620157 container remove fc0909373b59f524249d8d1b5ed9ddf7ef54d5e5ef5845b966ed16ed9b9f5171 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_brattain, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:07:52 compute-0 systemd[1]: libpod-conmon-fc0909373b59f524249d8d1b5ed9ddf7ef54d5e5ef5845b966ed16ed9b9f5171.scope: Deactivated successfully.
Nov 25 18:07:52 compute-0 podman[472475]: 2025-11-25 18:07:52.244900101 +0000 UTC m=+0.064817120 container create 104cddefba46d89ec94248d3ad1d94ad9363d0d848f47258cb8699c25a21953b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_greider, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 18:07:52 compute-0 systemd[1]: Started libpod-conmon-104cddefba46d89ec94248d3ad1d94ad9363d0d848f47258cb8699c25a21953b.scope.
Nov 25 18:07:52 compute-0 podman[472475]: 2025-11-25 18:07:52.209588333 +0000 UTC m=+0.029505432 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:07:52 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:07:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/680af15f11a5c61822f5b5a9faa8b893cca751073e6b6ef34c0ade7b0fca3ade/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:07:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/680af15f11a5c61822f5b5a9faa8b893cca751073e6b6ef34c0ade7b0fca3ade/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:07:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/680af15f11a5c61822f5b5a9faa8b893cca751073e6b6ef34c0ade7b0fca3ade/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:07:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/680af15f11a5c61822f5b5a9faa8b893cca751073e6b6ef34c0ade7b0fca3ade/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:07:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/680af15f11a5c61822f5b5a9faa8b893cca751073e6b6ef34c0ade7b0fca3ade/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:07:52 compute-0 ceph-mon[74985]: pgmap v4351: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:52 compute-0 podman[472475]: 2025-11-25 18:07:52.367432396 +0000 UTC m=+0.187349425 container init 104cddefba46d89ec94248d3ad1d94ad9363d0d848f47258cb8699c25a21953b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_greider, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:07:52 compute-0 podman[472475]: 2025-11-25 18:07:52.37753077 +0000 UTC m=+0.197447779 container start 104cddefba46d89ec94248d3ad1d94ad9363d0d848f47258cb8699c25a21953b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_greider, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 18:07:52 compute-0 podman[472475]: 2025-11-25 18:07:52.388305202 +0000 UTC m=+0.208222241 container attach 104cddefba46d89ec94248d3ad1d94ad9363d0d848f47258cb8699c25a21953b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_greider, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Nov 25 18:07:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4352: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 18:07:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:07:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 18:07:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:07:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 18:07:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:07:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:07:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:07:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:07:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:07:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 18:07:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:07:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 18:07:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:07:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:07:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:07:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 18:07:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:07:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 18:07:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:07:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:07:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:07:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 18:07:52 compute-0 nova_compute[254092]: 2025-11-25 18:07:52.661 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:07:53 compute-0 quirky_greider[472491]: --> passed data devices: 0 physical, 3 LVM
Nov 25 18:07:53 compute-0 quirky_greider[472491]: --> relative data size: 1.0
Nov 25 18:07:53 compute-0 quirky_greider[472491]: --> All data devices are unavailable
Nov 25 18:07:53 compute-0 systemd[1]: libpod-104cddefba46d89ec94248d3ad1d94ad9363d0d848f47258cb8699c25a21953b.scope: Deactivated successfully.
Nov 25 18:07:53 compute-0 podman[472475]: 2025-11-25 18:07:53.375557471 +0000 UTC m=+1.195474490 container died 104cddefba46d89ec94248d3ad1d94ad9363d0d848f47258cb8699c25a21953b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_greider, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:07:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-680af15f11a5c61822f5b5a9faa8b893cca751073e6b6ef34c0ade7b0fca3ade-merged.mount: Deactivated successfully.
Nov 25 18:07:53 compute-0 ceph-mon[74985]: pgmap v4352: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:53 compute-0 podman[472475]: 2025-11-25 18:07:53.548514093 +0000 UTC m=+1.368431102 container remove 104cddefba46d89ec94248d3ad1d94ad9363d0d848f47258cb8699c25a21953b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_greider, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:07:53 compute-0 systemd[1]: libpod-conmon-104cddefba46d89ec94248d3ad1d94ad9363d0d848f47258cb8699c25a21953b.scope: Deactivated successfully.
Nov 25 18:07:53 compute-0 sudo[472370]: pam_unix(sudo:session): session closed for user root
Nov 25 18:07:53 compute-0 sudo[472534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:07:53 compute-0 sudo[472534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:07:53 compute-0 sudo[472534]: pam_unix(sudo:session): session closed for user root
Nov 25 18:07:53 compute-0 sudo[472559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:07:53 compute-0 sudo[472559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:07:53 compute-0 sudo[472559]: pam_unix(sudo:session): session closed for user root
Nov 25 18:07:53 compute-0 sudo[472584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:07:53 compute-0 sudo[472584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:07:53 compute-0 sudo[472584]: pam_unix(sudo:session): session closed for user root
Nov 25 18:07:53 compute-0 sudo[472609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 18:07:53 compute-0 sudo[472609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:07:54 compute-0 podman[472676]: 2025-11-25 18:07:54.225796231 +0000 UTC m=+0.090659060 container create 7f47857f80ae7b158f6f8bf01a2026f114fb4d1973fff666c83ce543db236d76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_sanderson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 18:07:54 compute-0 podman[472676]: 2025-11-25 18:07:54.155938196 +0000 UTC m=+0.020801035 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:07:54 compute-0 systemd[1]: Started libpod-conmon-7f47857f80ae7b158f6f8bf01a2026f114fb4d1973fff666c83ce543db236d76.scope.
Nov 25 18:07:54 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:07:54 compute-0 podman[472676]: 2025-11-25 18:07:54.33372885 +0000 UTC m=+0.198591689 container init 7f47857f80ae7b158f6f8bf01a2026f114fb4d1973fff666c83ce543db236d76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_sanderson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 18:07:54 compute-0 podman[472676]: 2025-11-25 18:07:54.343048813 +0000 UTC m=+0.207911642 container start 7f47857f80ae7b158f6f8bf01a2026f114fb4d1973fff666c83ce543db236d76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:07:54 compute-0 sweet_sanderson[472693]: 167 167
Nov 25 18:07:54 compute-0 systemd[1]: libpod-7f47857f80ae7b158f6f8bf01a2026f114fb4d1973fff666c83ce543db236d76.scope: Deactivated successfully.
Nov 25 18:07:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4353: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:54 compute-0 podman[472676]: 2025-11-25 18:07:54.515103451 +0000 UTC m=+0.379966300 container attach 7f47857f80ae7b158f6f8bf01a2026f114fb4d1973fff666c83ce543db236d76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 18:07:54 compute-0 podman[472676]: 2025-11-25 18:07:54.516072358 +0000 UTC m=+0.380935177 container died 7f47857f80ae7b158f6f8bf01a2026f114fb4d1973fff666c83ce543db236d76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_sanderson, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 18:07:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b0012523a69f04dadc2ca53afbf9adcddfe9c54c6b159bc620bcdb1dd6736e4-merged.mount: Deactivated successfully.
Nov 25 18:07:55 compute-0 nova_compute[254092]: 2025-11-25 18:07:55.022 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:07:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 18:07:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/105009306' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 18:07:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 18:07:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/105009306' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 18:07:55 compute-0 podman[472676]: 2025-11-25 18:07:55.480587459 +0000 UTC m=+1.345450278 container remove 7f47857f80ae7b158f6f8bf01a2026f114fb4d1973fff666c83ce543db236d76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 18:07:55 compute-0 systemd[1]: libpod-conmon-7f47857f80ae7b158f6f8bf01a2026f114fb4d1973fff666c83ce543db236d76.scope: Deactivated successfully.
Nov 25 18:07:55 compute-0 podman[472715]: 2025-11-25 18:07:55.669482344 +0000 UTC m=+0.079350014 container create 60c5861d1b78eee516edb70d859c8b78461430e27defe9289ec52950dfa4cfb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_heyrovsky, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 18:07:55 compute-0 podman[472715]: 2025-11-25 18:07:55.617788992 +0000 UTC m=+0.027656672 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:07:55 compute-0 ceph-mon[74985]: pgmap v4353: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/105009306' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 18:07:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/105009306' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 18:07:55 compute-0 systemd[1]: Started libpod-conmon-60c5861d1b78eee516edb70d859c8b78461430e27defe9289ec52950dfa4cfb8.scope.
Nov 25 18:07:55 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:07:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98c3055e2a0f11d324fdcf3b7f4f67f6f1c689e700768ea1a6911cdd6bbed5d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:07:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98c3055e2a0f11d324fdcf3b7f4f67f6f1c689e700768ea1a6911cdd6bbed5d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:07:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98c3055e2a0f11d324fdcf3b7f4f67f6f1c689e700768ea1a6911cdd6bbed5d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:07:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98c3055e2a0f11d324fdcf3b7f4f67f6f1c689e700768ea1a6911cdd6bbed5d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:07:56 compute-0 podman[472715]: 2025-11-25 18:07:56.002089859 +0000 UTC m=+0.411957519 container init 60c5861d1b78eee516edb70d859c8b78461430e27defe9289ec52950dfa4cfb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 18:07:56 compute-0 podman[472715]: 2025-11-25 18:07:56.008464712 +0000 UTC m=+0.418332362 container start 60c5861d1b78eee516edb70d859c8b78461430e27defe9289ec52950dfa4cfb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_heyrovsky, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:07:56 compute-0 podman[472715]: 2025-11-25 18:07:56.173999314 +0000 UTC m=+0.583866974 container attach 60c5861d1b78eee516edb70d859c8b78461430e27defe9289ec52950dfa4cfb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_heyrovsky, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 18:07:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:07:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4354: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]: {
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:     "0": [
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:         {
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:             "devices": [
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:                 "/dev/loop3"
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:             ],
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:             "lv_name": "ceph_lv0",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:             "lv_size": "21470642176",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:             "name": "ceph_lv0",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:             "tags": {
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:                 "ceph.cluster_name": "ceph",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:                 "ceph.crush_device_class": "",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:                 "ceph.encrypted": "0",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:                 "ceph.osd_id": "0",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:                 "ceph.type": "block",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:                 "ceph.vdo": "0"
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:             },
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:             "type": "block",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:             "vg_name": "ceph_vg0"
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:         }
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:     ],
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:     "1": [
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:         {
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:             "devices": [
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:                 "/dev/loop4"
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:             ],
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:             "lv_name": "ceph_lv1",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:             "lv_size": "21470642176",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:             "name": "ceph_lv1",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:             "tags": {
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:                 "ceph.cluster_name": "ceph",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:                 "ceph.crush_device_class": "",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:                 "ceph.encrypted": "0",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:                 "ceph.osd_id": "1",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:                 "ceph.type": "block",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:                 "ceph.vdo": "0"
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:             },
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:             "type": "block",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:             "vg_name": "ceph_vg1"
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:         }
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:     ],
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:     "2": [
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:         {
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:             "devices": [
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:                 "/dev/loop5"
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:             ],
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:             "lv_name": "ceph_lv2",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:             "lv_size": "21470642176",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:             "name": "ceph_lv2",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:             "tags": {
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:                 "ceph.cluster_name": "ceph",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:                 "ceph.crush_device_class": "",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:                 "ceph.encrypted": "0",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:                 "ceph.osd_id": "2",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:                 "ceph.type": "block",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:                 "ceph.vdo": "0"
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:             },
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:             "type": "block",
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:             "vg_name": "ceph_vg2"
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:         }
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]:     ]
Nov 25 18:07:56 compute-0 zen_heyrovsky[472732]: }
Nov 25 18:07:56 compute-0 systemd[1]: libpod-60c5861d1b78eee516edb70d859c8b78461430e27defe9289ec52950dfa4cfb8.scope: Deactivated successfully.
Nov 25 18:07:56 compute-0 podman[472715]: 2025-11-25 18:07:56.773063138 +0000 UTC m=+1.182930798 container died 60c5861d1b78eee516edb70d859c8b78461430e27defe9289ec52950dfa4cfb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_heyrovsky, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:07:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-98c3055e2a0f11d324fdcf3b7f4f67f6f1c689e700768ea1a6911cdd6bbed5d4-merged.mount: Deactivated successfully.
Nov 25 18:07:57 compute-0 podman[472715]: 2025-11-25 18:07:57.584707322 +0000 UTC m=+1.994574992 container remove 60c5861d1b78eee516edb70d859c8b78461430e27defe9289ec52950dfa4cfb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_heyrovsky, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:07:57 compute-0 sudo[472609]: pam_unix(sudo:session): session closed for user root
Nov 25 18:07:57 compute-0 systemd[1]: libpod-conmon-60c5861d1b78eee516edb70d859c8b78461430e27defe9289ec52950dfa4cfb8.scope: Deactivated successfully.
Nov 25 18:07:57 compute-0 nova_compute[254092]: 2025-11-25 18:07:57.662 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:07:57 compute-0 sudo[472753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:07:57 compute-0 sudo[472753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:07:57 compute-0 sudo[472753]: pam_unix(sudo:session): session closed for user root
Nov 25 18:07:57 compute-0 podman[472778]: 2025-11-25 18:07:57.757397188 +0000 UTC m=+0.060943095 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible)
Nov 25 18:07:57 compute-0 podman[472777]: 2025-11-25 18:07:57.759650469 +0000 UTC m=+0.063471063 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 18:07:57 compute-0 sudo[472790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:07:57 compute-0 sudo[472790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:07:57 compute-0 sudo[472790]: pam_unix(sudo:session): session closed for user root
Nov 25 18:07:57 compute-0 sudo[472843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:07:57 compute-0 sudo[472843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:07:57 compute-0 sudo[472843]: pam_unix(sudo:session): session closed for user root
Nov 25 18:07:57 compute-0 podman[472835]: 2025-11-25 18:07:57.855447598 +0000 UTC m=+0.078427559 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:07:57 compute-0 sudo[472890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 18:07:57 compute-0 sudo[472890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:07:58 compute-0 ceph-mon[74985]: pgmap v4354: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:58 compute-0 podman[472956]: 2025-11-25 18:07:58.204388025 +0000 UTC m=+0.052590087 container create 82c94259a0000d3bbfb96fd523ea09871ba5a7fc42f050d54c00e3a68e5cc491 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_poitras, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:07:58 compute-0 systemd[1]: Started libpod-conmon-82c94259a0000d3bbfb96fd523ea09871ba5a7fc42f050d54c00e3a68e5cc491.scope.
Nov 25 18:07:58 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:07:58 compute-0 podman[472956]: 2025-11-25 18:07:58.172205092 +0000 UTC m=+0.020407174 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:07:58 compute-0 podman[472956]: 2025-11-25 18:07:58.303771922 +0000 UTC m=+0.151974034 container init 82c94259a0000d3bbfb96fd523ea09871ba5a7fc42f050d54c00e3a68e5cc491 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_poitras, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 18:07:58 compute-0 podman[472956]: 2025-11-25 18:07:58.317033382 +0000 UTC m=+0.165235444 container start 82c94259a0000d3bbfb96fd523ea09871ba5a7fc42f050d54c00e3a68e5cc491 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:07:58 compute-0 laughing_poitras[472972]: 167 167
Nov 25 18:07:58 compute-0 systemd[1]: libpod-82c94259a0000d3bbfb96fd523ea09871ba5a7fc42f050d54c00e3a68e5cc491.scope: Deactivated successfully.
Nov 25 18:07:58 compute-0 podman[472956]: 2025-11-25 18:07:58.349135483 +0000 UTC m=+0.197337645 container attach 82c94259a0000d3bbfb96fd523ea09871ba5a7fc42f050d54c00e3a68e5cc491 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_poitras, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Nov 25 18:07:58 compute-0 podman[472956]: 2025-11-25 18:07:58.351123097 +0000 UTC m=+0.199325249 container died 82c94259a0000d3bbfb96fd523ea09871ba5a7fc42f050d54c00e3a68e5cc491 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_poitras, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 18:07:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e7c53a124dc0239d3a5ba7af44edbf34f18792f0a36b90e535f12f75b2af47b-merged.mount: Deactivated successfully.
Nov 25 18:07:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4355: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:07:58 compute-0 podman[472956]: 2025-11-25 18:07:58.507790788 +0000 UTC m=+0.355992860 container remove 82c94259a0000d3bbfb96fd523ea09871ba5a7fc42f050d54c00e3a68e5cc491 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_poitras, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 18:07:58 compute-0 systemd[1]: libpod-conmon-82c94259a0000d3bbfb96fd523ea09871ba5a7fc42f050d54c00e3a68e5cc491.scope: Deactivated successfully.
Nov 25 18:07:58 compute-0 podman[472998]: 2025-11-25 18:07:58.712280337 +0000 UTC m=+0.060742940 container create c93dd0327fd86086d27113c650c443b7dda0a923ed72db2093707bc8be270bdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_archimedes, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:07:58 compute-0 systemd[1]: Started libpod-conmon-c93dd0327fd86086d27113c650c443b7dda0a923ed72db2093707bc8be270bdb.scope.
Nov 25 18:07:58 compute-0 podman[472998]: 2025-11-25 18:07:58.674883681 +0000 UTC m=+0.023346304 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:07:58 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:07:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c8919248c49aa7473398f3ab8d4e02d4d9b64e5885dfb96f619abfb079c74f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:07:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c8919248c49aa7473398f3ab8d4e02d4d9b64e5885dfb96f619abfb079c74f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:07:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c8919248c49aa7473398f3ab8d4e02d4d9b64e5885dfb96f619abfb079c74f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:07:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c8919248c49aa7473398f3ab8d4e02d4d9b64e5885dfb96f619abfb079c74f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:07:58 compute-0 podman[472998]: 2025-11-25 18:07:58.863194941 +0000 UTC m=+0.211657644 container init c93dd0327fd86086d27113c650c443b7dda0a923ed72db2093707bc8be270bdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_archimedes, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:07:58 compute-0 podman[472998]: 2025-11-25 18:07:58.871694593 +0000 UTC m=+0.220157216 container start c93dd0327fd86086d27113c650c443b7dda0a923ed72db2093707bc8be270bdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 18:07:58 compute-0 podman[472998]: 2025-11-25 18:07:58.939685787 +0000 UTC m=+0.288148450 container attach c93dd0327fd86086d27113c650c443b7dda0a923ed72db2093707bc8be270bdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Nov 25 18:07:59 compute-0 flamboyant_archimedes[473014]: {
Nov 25 18:07:59 compute-0 flamboyant_archimedes[473014]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 18:07:59 compute-0 flamboyant_archimedes[473014]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:07:59 compute-0 flamboyant_archimedes[473014]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 18:07:59 compute-0 flamboyant_archimedes[473014]:         "osd_id": 1,
Nov 25 18:07:59 compute-0 flamboyant_archimedes[473014]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 18:07:59 compute-0 flamboyant_archimedes[473014]:         "type": "bluestore"
Nov 25 18:07:59 compute-0 flamboyant_archimedes[473014]:     },
Nov 25 18:07:59 compute-0 flamboyant_archimedes[473014]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 18:07:59 compute-0 flamboyant_archimedes[473014]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:07:59 compute-0 flamboyant_archimedes[473014]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 18:07:59 compute-0 flamboyant_archimedes[473014]:         "osd_id": 2,
Nov 25 18:07:59 compute-0 flamboyant_archimedes[473014]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 18:07:59 compute-0 flamboyant_archimedes[473014]:         "type": "bluestore"
Nov 25 18:07:59 compute-0 flamboyant_archimedes[473014]:     },
Nov 25 18:07:59 compute-0 flamboyant_archimedes[473014]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 18:07:59 compute-0 flamboyant_archimedes[473014]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:07:59 compute-0 flamboyant_archimedes[473014]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 18:07:59 compute-0 flamboyant_archimedes[473014]:         "osd_id": 0,
Nov 25 18:07:59 compute-0 flamboyant_archimedes[473014]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 18:07:59 compute-0 flamboyant_archimedes[473014]:         "type": "bluestore"
Nov 25 18:07:59 compute-0 flamboyant_archimedes[473014]:     }
Nov 25 18:07:59 compute-0 flamboyant_archimedes[473014]: }
Nov 25 18:07:59 compute-0 systemd[1]: libpod-c93dd0327fd86086d27113c650c443b7dda0a923ed72db2093707bc8be270bdb.scope: Deactivated successfully.
Nov 25 18:07:59 compute-0 podman[472998]: 2025-11-25 18:07:59.826550241 +0000 UTC m=+1.175012844 container died c93dd0327fd86086d27113c650c443b7dda0a923ed72db2093707bc8be270bdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_archimedes, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 18:07:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c8919248c49aa7473398f3ab8d4e02d4d9b64e5885dfb96f619abfb079c74f5-merged.mount: Deactivated successfully.
Nov 25 18:08:00 compute-0 nova_compute[254092]: 2025-11-25 18:08:00.025 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:08:00 compute-0 podman[472998]: 2025-11-25 18:08:00.154436918 +0000 UTC m=+1.502899521 container remove c93dd0327fd86086d27113c650c443b7dda0a923ed72db2093707bc8be270bdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_archimedes, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 18:08:00 compute-0 systemd[1]: libpod-conmon-c93dd0327fd86086d27113c650c443b7dda0a923ed72db2093707bc8be270bdb.scope: Deactivated successfully.
Nov 25 18:08:00 compute-0 sudo[472890]: pam_unix(sudo:session): session closed for user root
Nov 25 18:08:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:08:00 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:08:00 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:08:00 compute-0 ceph-mon[74985]: pgmap v4355: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4356: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:00 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:08:00 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev a73b9244-b586-4bc6-aac4-c584999d5f05 does not exist
Nov 25 18:08:00 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev d35225c1-eca7-44d5-9190-4ca8857e0ed1 does not exist
Nov 25 18:08:00 compute-0 sudo[473061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:08:00 compute-0 sudo[473061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:08:00 compute-0 sudo[473061]: pam_unix(sudo:session): session closed for user root
Nov 25 18:08:00 compute-0 sudo[473086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 18:08:00 compute-0 sudo[473086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:08:00 compute-0 sudo[473086]: pam_unix(sudo:session): session closed for user root
Nov 25 18:08:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:08:01 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:08:01 compute-0 ceph-mon[74985]: pgmap v4356: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:01 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:08:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4357: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:02 compute-0 nova_compute[254092]: 2025-11-25 18:08:02.664 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:08:04 compute-0 ceph-mon[74985]: pgmap v4357: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4358: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:05 compute-0 nova_compute[254092]: 2025-11-25 18:08:05.029 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:08:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:08:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4359: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:06 compute-0 ceph-mon[74985]: pgmap v4358: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:07 compute-0 nova_compute[254092]: 2025-11-25 18:08:07.668 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:08:07 compute-0 ceph-mon[74985]: pgmap v4359: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4360: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:10 compute-0 nova_compute[254092]: 2025-11-25 18:08:10.058 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:08:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:08:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:08:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:08:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:08:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:08:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:08:10 compute-0 ceph-mon[74985]: pgmap v4360: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4361: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:08:12 compute-0 ceph-mon[74985]: pgmap v4361: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4362: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:12 compute-0 nova_compute[254092]: 2025-11-25 18:08:12.712 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:08:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 18:08:13.715 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 18:08:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 18:08:13.716 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 18:08:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 18:08:13.716 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:08:14 compute-0 ceph-mon[74985]: pgmap v4362: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4363: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:15 compute-0 nova_compute[254092]: 2025-11-25 18:08:15.062 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:08:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:08:16 compute-0 ceph-mon[74985]: pgmap v4363: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4364: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:17 compute-0 nova_compute[254092]: 2025-11-25 18:08:17.715 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:08:18 compute-0 sshd-session[473111]: Received disconnect from 193.46.255.7 port 13836:11:  [preauth]
Nov 25 18:08:18 compute-0 sshd-session[473111]: Disconnected from authenticating user root 193.46.255.7 port 13836 [preauth]
Nov 25 18:08:18 compute-0 ceph-mon[74985]: pgmap v4364: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4365: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:20 compute-0 nova_compute[254092]: 2025-11-25 18:08:20.064 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:08:20 compute-0 ceph-mon[74985]: pgmap v4365: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4366: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:08:22 compute-0 ceph-mon[74985]: pgmap v4366: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4367: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:22 compute-0 nova_compute[254092]: 2025-11-25 18:08:22.759 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:08:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4368: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:24 compute-0 nova_compute[254092]: 2025-11-25 18:08:24.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:08:24 compute-0 ceph-mon[74985]: pgmap v4367: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:25 compute-0 nova_compute[254092]: 2025-11-25 18:08:25.066 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:08:25 compute-0 ceph-mon[74985]: pgmap v4368: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:08:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4369: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:27 compute-0 nova_compute[254092]: 2025-11-25 18:08:27.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:08:27 compute-0 nova_compute[254092]: 2025-11-25 18:08:27.761 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:08:27 compute-0 ceph-mon[74985]: pgmap v4369: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4370: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:28 compute-0 podman[473114]: 2025-11-25 18:08:28.66426702 +0000 UTC m=+0.071974083 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 18:08:28 compute-0 podman[473113]: 2025-11-25 18:08:28.675260448 +0000 UTC m=+0.081701217 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 18:08:28 compute-0 podman[473115]: 2025-11-25 18:08:28.710369421 +0000 UTC m=+0.116923433 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 25 18:08:30 compute-0 ceph-mon[74985]: pgmap v4370: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:30 compute-0 nova_compute[254092]: 2025-11-25 18:08:30.068 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:08:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4371: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:31 compute-0 ceph-mon[74985]: pgmap v4371: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:08:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4372: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:32 compute-0 nova_compute[254092]: 2025-11-25 18:08:32.763 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:08:33 compute-0 nova_compute[254092]: 2025-11-25 18:08:33.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:08:33 compute-0 ceph-mon[74985]: pgmap v4372: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4373: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:34 compute-0 nova_compute[254092]: 2025-11-25 18:08:34.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:08:34 compute-0 nova_compute[254092]: 2025-11-25 18:08:34.528 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 18:08:34 compute-0 nova_compute[254092]: 2025-11-25 18:08:34.529 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 18:08:34 compute-0 nova_compute[254092]: 2025-11-25 18:08:34.530 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:08:34 compute-0 nova_compute[254092]: 2025-11-25 18:08:34.530 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 18:08:34 compute-0 nova_compute[254092]: 2025-11-25 18:08:34.530 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 18:08:35 compute-0 ceph-mon[74985]: pgmap v4373: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:35 compute-0 nova_compute[254092]: 2025-11-25 18:08:35.072 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:08:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 18:08:35 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3111203199' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:08:35 compute-0 nova_compute[254092]: 2025-11-25 18:08:35.112 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 18:08:35 compute-0 nova_compute[254092]: 2025-11-25 18:08:35.290 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 18:08:35 compute-0 nova_compute[254092]: 2025-11-25 18:08:35.291 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3614MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 18:08:35 compute-0 nova_compute[254092]: 2025-11-25 18:08:35.291 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 18:08:35 compute-0 nova_compute[254092]: 2025-11-25 18:08:35.292 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 18:08:35 compute-0 nova_compute[254092]: 2025-11-25 18:08:35.342 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 18:08:35 compute-0 nova_compute[254092]: 2025-11-25 18:08:35.342 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 18:08:35 compute-0 nova_compute[254092]: 2025-11-25 18:08:35.357 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 18:08:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 18:08:35 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2697577250' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:08:35 compute-0 nova_compute[254092]: 2025-11-25 18:08:35.791 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 18:08:35 compute-0 nova_compute[254092]: 2025-11-25 18:08:35.797 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 18:08:35 compute-0 nova_compute[254092]: 2025-11-25 18:08:35.813 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 18:08:35 compute-0 nova_compute[254092]: 2025-11-25 18:08:35.815 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 18:08:35 compute-0 nova_compute[254092]: 2025-11-25 18:08:35.815 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.524s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:08:36 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3111203199' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:08:36 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2697577250' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:08:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:08:36 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #213. Immutable memtables: 0.
Nov 25 18:08:36 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:08:36.297222) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 18:08:36 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 133] Flushing memtable with next log file: 213
Nov 25 18:08:36 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094116297244, "job": 133, "event": "flush_started", "num_memtables": 1, "num_entries": 857, "num_deletes": 256, "total_data_size": 1149571, "memory_usage": 1169328, "flush_reason": "Manual Compaction"}
Nov 25 18:08:36 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 133] Level-0 flush table #214: started
Nov 25 18:08:36 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094116305893, "cf_name": "default", "job": 133, "event": "table_file_creation", "file_number": 214, "file_size": 1128054, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 89137, "largest_seqno": 89993, "table_properties": {"data_size": 1123712, "index_size": 1993, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9342, "raw_average_key_size": 19, "raw_value_size": 1115049, "raw_average_value_size": 2275, "num_data_blocks": 89, "num_entries": 490, "num_filter_entries": 490, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764094042, "oldest_key_time": 1764094042, "file_creation_time": 1764094116, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 214, "seqno_to_time_mapping": "N/A"}}
Nov 25 18:08:36 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 133] Flush lasted 8700 microseconds, and 2949 cpu microseconds.
Nov 25 18:08:36 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 18:08:36 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:08:36.305921) [db/flush_job.cc:967] [default] [JOB 133] Level-0 flush table #214: 1128054 bytes OK
Nov 25 18:08:36 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:08:36.305934) [db/memtable_list.cc:519] [default] Level-0 commit table #214 started
Nov 25 18:08:36 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:08:36.309239) [db/memtable_list.cc:722] [default] Level-0 commit table #214: memtable #1 done
Nov 25 18:08:36 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:08:36.309252) EVENT_LOG_v1 {"time_micros": 1764094116309249, "job": 133, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 18:08:36 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:08:36.309265) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 18:08:36 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 133] Try to delete WAL files size 1145332, prev total WAL file size 1145332, number of live WAL files 2.
Nov 25 18:08:36 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000210.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:08:36 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:08:36.309670) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034303139' seq:72057594037927935, type:22 .. '6C6F676D0034323731' seq:0, type:0; will stop at (end)
Nov 25 18:08:36 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 134] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 18:08:36 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 133 Base level 0, inputs: [214(1101KB)], [212(9388KB)]
Nov 25 18:08:36 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094116309693, "job": 134, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [214], "files_L6": [212], "score": -1, "input_data_size": 10742106, "oldest_snapshot_seqno": -1}
Nov 25 18:08:36 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 134] Generated table #215: 10150 keys, 10640519 bytes, temperature: kUnknown
Nov 25 18:08:36 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094116380081, "cf_name": "default", "job": 134, "event": "table_file_creation", "file_number": 215, "file_size": 10640519, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10579462, "index_size": 34576, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25413, "raw_key_size": 269443, "raw_average_key_size": 26, "raw_value_size": 10404669, "raw_average_value_size": 1025, "num_data_blocks": 1322, "num_entries": 10150, "num_filter_entries": 10150, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764094116, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 215, "seqno_to_time_mapping": "N/A"}}
Nov 25 18:08:36 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 18:08:36 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:08:36.380333) [db/compaction/compaction_job.cc:1663] [default] [JOB 134] Compacted 1@0 + 1@6 files to L6 => 10640519 bytes
Nov 25 18:08:36 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:08:36.382618) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.4 rd, 151.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 9.2 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(19.0) write-amplify(9.4) OK, records in: 10674, records dropped: 524 output_compression: NoCompression
Nov 25 18:08:36 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:08:36.382683) EVENT_LOG_v1 {"time_micros": 1764094116382669, "job": 134, "event": "compaction_finished", "compaction_time_micros": 70481, "compaction_time_cpu_micros": 24685, "output_level": 6, "num_output_files": 1, "total_output_size": 10640519, "num_input_records": 10674, "num_output_records": 10150, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 18:08:36 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000214.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:08:36 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094116383175, "job": 134, "event": "table_file_deletion", "file_number": 214}
Nov 25 18:08:36 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000212.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:08:36 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094116386610, "job": 134, "event": "table_file_deletion", "file_number": 212}
Nov 25 18:08:36 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:08:36.309595) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:08:36 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:08:36.386712) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:08:36 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:08:36.386719) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:08:36 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:08:36.386721) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:08:36 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:08:36.386723) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:08:36 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:08:36.386725) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:08:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4374: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:36 compute-0 nova_compute[254092]: 2025-11-25 18:08:36.816 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:08:36 compute-0 nova_compute[254092]: 2025-11-25 18:08:36.816 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 18:08:36 compute-0 nova_compute[254092]: 2025-11-25 18:08:36.817 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 18:08:36 compute-0 nova_compute[254092]: 2025-11-25 18:08:36.831 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 18:08:36 compute-0 nova_compute[254092]: 2025-11-25 18:08:36.831 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:08:36 compute-0 nova_compute[254092]: 2025-11-25 18:08:36.831 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:08:37 compute-0 ceph-mon[74985]: pgmap v4374: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:37 compute-0 nova_compute[254092]: 2025-11-25 18:08:37.766 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:08:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4375: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:39 compute-0 ceph-mon[74985]: pgmap v4375: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:40 compute-0 nova_compute[254092]: 2025-11-25 18:08:40.075 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:08:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:08:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:08:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:08:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:08:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:08:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:08:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_18:08:40
Nov 25 18:08:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 18:08:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 18:08:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'volumes', 'default.rgw.meta', 'default.rgw.log', 'images', '.rgw.root', 'backups', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control']
Nov 25 18:08:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 18:08:40 compute-0 nova_compute[254092]: 2025-11-25 18:08:40.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:08:40 compute-0 nova_compute[254092]: 2025-11-25 18:08:40.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 18:08:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4376: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 18:08:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:08:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:08:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:08:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:08:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 18:08:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:08:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:08:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:08:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:08:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:08:41 compute-0 ceph-mon[74985]: pgmap v4376: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4377: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:42 compute-0 nova_compute[254092]: 2025-11-25 18:08:42.770 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:08:43 compute-0 ceph-mon[74985]: pgmap v4377: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4378: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:45 compute-0 nova_compute[254092]: 2025-11-25 18:08:45.077 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:08:45 compute-0 ceph-mon[74985]: pgmap v4378: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:08:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4379: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:47 compute-0 nova_compute[254092]: 2025-11-25 18:08:47.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:08:47 compute-0 ceph-mon[74985]: pgmap v4379: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:47 compute-0 nova_compute[254092]: 2025-11-25 18:08:47.819 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:08:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4380: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:49 compute-0 ceph-mon[74985]: pgmap v4380: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:50 compute-0 nova_compute[254092]: 2025-11-25 18:08:50.125 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:08:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4381: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:08:51 compute-0 nova_compute[254092]: 2025-11-25 18:08:51.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:08:51 compute-0 ceph-mon[74985]: pgmap v4381: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4382: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 18:08:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:08:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 18:08:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:08:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 18:08:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:08:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:08:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:08:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:08:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:08:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 18:08:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:08:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 18:08:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:08:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:08:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:08:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 18:08:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:08:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 18:08:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:08:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:08:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:08:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 18:08:52 compute-0 nova_compute[254092]: 2025-11-25 18:08:52.823 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:08:53 compute-0 ceph-mon[74985]: pgmap v4382: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4383: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:55 compute-0 nova_compute[254092]: 2025-11-25 18:08:55.127 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:08:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 18:08:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2195361864' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 18:08:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 18:08:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2195361864' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 18:08:55 compute-0 ceph-mon[74985]: pgmap v4383: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/2195361864' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 18:08:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/2195361864' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 18:08:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:08:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4384: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:57 compute-0 ceph-mon[74985]: pgmap v4384: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:57 compute-0 nova_compute[254092]: 2025-11-25 18:08:57.827 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:08:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4385: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:59 compute-0 ceph-mon[74985]: pgmap v4385: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:08:59 compute-0 podman[473217]: 2025-11-25 18:08:59.637532804 +0000 UTC m=+0.055958409 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118)
Nov 25 18:08:59 compute-0 podman[473218]: 2025-11-25 18:08:59.63776442 +0000 UTC m=+0.052607538 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 18:08:59 compute-0 podman[473219]: 2025-11-25 18:08:59.664491475 +0000 UTC m=+0.076350872 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 18:09:00 compute-0 nova_compute[254092]: 2025-11-25 18:09:00.129 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:09:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4386: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:00 compute-0 sudo[473283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:09:00 compute-0 sudo[473283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:09:00 compute-0 sudo[473283]: pam_unix(sudo:session): session closed for user root
Nov 25 18:09:00 compute-0 sudo[473308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:09:00 compute-0 sudo[473308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:09:00 compute-0 sudo[473308]: pam_unix(sudo:session): session closed for user root
Nov 25 18:09:01 compute-0 sudo[473333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:09:01 compute-0 sudo[473333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:09:01 compute-0 sudo[473333]: pam_unix(sudo:session): session closed for user root
Nov 25 18:09:01 compute-0 sudo[473358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 18:09:01 compute-0 sudo[473358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:09:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:09:01 compute-0 sudo[473358]: pam_unix(sudo:session): session closed for user root
Nov 25 18:09:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:09:01 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:09:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 18:09:01 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:09:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 18:09:01 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:09:01 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 060efeea-a335-4e3b-bd7c-5961f63336b5 does not exist
Nov 25 18:09:01 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 406d7ba9-4fa3-48bd-a082-25ea142b1049 does not exist
Nov 25 18:09:01 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 64c32308-eaeb-490f-95d7-10797319a082 does not exist
Nov 25 18:09:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 18:09:01 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:09:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 18:09:01 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:09:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:09:01 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:09:01 compute-0 sudo[473415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:09:01 compute-0 sudo[473415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:09:01 compute-0 sudo[473415]: pam_unix(sudo:session): session closed for user root
Nov 25 18:09:01 compute-0 sudo[473440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:09:01 compute-0 sudo[473440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:09:01 compute-0 sudo[473440]: pam_unix(sudo:session): session closed for user root
Nov 25 18:09:01 compute-0 ceph-mon[74985]: pgmap v4386: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:01 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:09:01 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:09:01 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:09:01 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:09:01 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:09:01 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:09:01 compute-0 sudo[473465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:09:01 compute-0 sudo[473465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:09:01 compute-0 sudo[473465]: pam_unix(sudo:session): session closed for user root
Nov 25 18:09:01 compute-0 sudo[473490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 18:09:01 compute-0 sudo[473490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:09:02 compute-0 podman[473557]: 2025-11-25 18:09:02.113757545 +0000 UTC m=+0.062059214 container create 7a451e678e38a1c1111378e16ddcc3f9c40c1500b64b44798a2636d806c75ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_gates, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 18:09:02 compute-0 systemd[1]: Started libpod-conmon-7a451e678e38a1c1111378e16ddcc3f9c40c1500b64b44798a2636d806c75ffe.scope.
Nov 25 18:09:02 compute-0 podman[473557]: 2025-11-25 18:09:02.073448611 +0000 UTC m=+0.021750320 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:09:02 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:09:02 compute-0 podman[473557]: 2025-11-25 18:09:02.20940015 +0000 UTC m=+0.157701899 container init 7a451e678e38a1c1111378e16ddcc3f9c40c1500b64b44798a2636d806c75ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_gates, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:09:02 compute-0 podman[473557]: 2025-11-25 18:09:02.216813062 +0000 UTC m=+0.165114761 container start 7a451e678e38a1c1111378e16ddcc3f9c40c1500b64b44798a2636d806c75ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 18:09:02 compute-0 podman[473557]: 2025-11-25 18:09:02.221297073 +0000 UTC m=+0.169598772 container attach 7a451e678e38a1c1111378e16ddcc3f9c40c1500b64b44798a2636d806c75ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_gates, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 18:09:02 compute-0 agitated_gates[473573]: 167 167
Nov 25 18:09:02 compute-0 systemd[1]: libpod-7a451e678e38a1c1111378e16ddcc3f9c40c1500b64b44798a2636d806c75ffe.scope: Deactivated successfully.
Nov 25 18:09:02 compute-0 conmon[473573]: conmon 7a451e678e38a1c11113 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7a451e678e38a1c1111378e16ddcc3f9c40c1500b64b44798a2636d806c75ffe.scope/container/memory.events
Nov 25 18:09:02 compute-0 podman[473557]: 2025-11-25 18:09:02.22595337 +0000 UTC m=+0.174255029 container died 7a451e678e38a1c1111378e16ddcc3f9c40c1500b64b44798a2636d806c75ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:09:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-0621d5f16e8a9bc26ca411ed175018d606e4a68e6c39b74deef54c94698ff988-merged.mount: Deactivated successfully.
Nov 25 18:09:02 compute-0 podman[473557]: 2025-11-25 18:09:02.270630832 +0000 UTC m=+0.218932491 container remove 7a451e678e38a1c1111378e16ddcc3f9c40c1500b64b44798a2636d806c75ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_gates, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:09:02 compute-0 systemd[1]: libpod-conmon-7a451e678e38a1c1111378e16ddcc3f9c40c1500b64b44798a2636d806c75ffe.scope: Deactivated successfully.
Nov 25 18:09:02 compute-0 podman[473597]: 2025-11-25 18:09:02.483454237 +0000 UTC m=+0.099936653 container create a3ea39cd5d4512f072ab4eaacc3880b74032df87f408fa48121b6f593b7e6e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 18:09:02 compute-0 podman[473597]: 2025-11-25 18:09:02.406959241 +0000 UTC m=+0.023441667 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:09:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4387: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:02 compute-0 systemd[1]: Started libpod-conmon-a3ea39cd5d4512f072ab4eaacc3880b74032df87f408fa48121b6f593b7e6e1e.scope.
Nov 25 18:09:02 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:09:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed1d00f822e4af4817469cf1e1956c3efb2d7ba8aa0e2ebbeeb3ea4ff9a4fcf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:09:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed1d00f822e4af4817469cf1e1956c3efb2d7ba8aa0e2ebbeeb3ea4ff9a4fcf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:09:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed1d00f822e4af4817469cf1e1956c3efb2d7ba8aa0e2ebbeeb3ea4ff9a4fcf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:09:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed1d00f822e4af4817469cf1e1956c3efb2d7ba8aa0e2ebbeeb3ea4ff9a4fcf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:09:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed1d00f822e4af4817469cf1e1956c3efb2d7ba8aa0e2ebbeeb3ea4ff9a4fcf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:09:02 compute-0 podman[473597]: 2025-11-25 18:09:02.585163916 +0000 UTC m=+0.201646332 container init a3ea39cd5d4512f072ab4eaacc3880b74032df87f408fa48121b6f593b7e6e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 18:09:02 compute-0 podman[473597]: 2025-11-25 18:09:02.592797614 +0000 UTC m=+0.209280030 container start a3ea39cd5d4512f072ab4eaacc3880b74032df87f408fa48121b6f593b7e6e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_nightingale, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:09:02 compute-0 podman[473597]: 2025-11-25 18:09:02.59747611 +0000 UTC m=+0.213958536 container attach a3ea39cd5d4512f072ab4eaacc3880b74032df87f408fa48121b6f593b7e6e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_nightingale, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:09:02 compute-0 nova_compute[254092]: 2025-11-25 18:09:02.828 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:09:03 compute-0 trusting_nightingale[473613]: --> passed data devices: 0 physical, 3 LVM
Nov 25 18:09:03 compute-0 trusting_nightingale[473613]: --> relative data size: 1.0
Nov 25 18:09:03 compute-0 trusting_nightingale[473613]: --> All data devices are unavailable
Nov 25 18:09:03 compute-0 ceph-mon[74985]: pgmap v4387: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:03 compute-0 systemd[1]: libpod-a3ea39cd5d4512f072ab4eaacc3880b74032df87f408fa48121b6f593b7e6e1e.scope: Deactivated successfully.
Nov 25 18:09:03 compute-0 systemd[1]: libpod-a3ea39cd5d4512f072ab4eaacc3880b74032df87f408fa48121b6f593b7e6e1e.scope: Consumed 1.057s CPU time.
Nov 25 18:09:03 compute-0 podman[473597]: 2025-11-25 18:09:03.712854746 +0000 UTC m=+1.329337192 container died a3ea39cd5d4512f072ab4eaacc3880b74032df87f408fa48121b6f593b7e6e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_nightingale, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:09:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ed1d00f822e4af4817469cf1e1956c3efb2d7ba8aa0e2ebbeeb3ea4ff9a4fcf-merged.mount: Deactivated successfully.
Nov 25 18:09:03 compute-0 podman[473597]: 2025-11-25 18:09:03.786469023 +0000 UTC m=+1.402951429 container remove a3ea39cd5d4512f072ab4eaacc3880b74032df87f408fa48121b6f593b7e6e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_nightingale, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 18:09:03 compute-0 systemd[1]: libpod-conmon-a3ea39cd5d4512f072ab4eaacc3880b74032df87f408fa48121b6f593b7e6e1e.scope: Deactivated successfully.
Nov 25 18:09:03 compute-0 sudo[473490]: pam_unix(sudo:session): session closed for user root
Nov 25 18:09:03 compute-0 sudo[473657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:09:03 compute-0 sudo[473657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:09:03 compute-0 sudo[473657]: pam_unix(sudo:session): session closed for user root
Nov 25 18:09:03 compute-0 sudo[473682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:09:03 compute-0 sudo[473682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:09:03 compute-0 sudo[473682]: pam_unix(sudo:session): session closed for user root
Nov 25 18:09:03 compute-0 sudo[473707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:09:03 compute-0 sudo[473707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:09:03 compute-0 sudo[473707]: pam_unix(sudo:session): session closed for user root
Nov 25 18:09:04 compute-0 sudo[473732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 18:09:04 compute-0 sudo[473732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:09:04 compute-0 podman[473797]: 2025-11-25 18:09:04.392430495 +0000 UTC m=+0.040191541 container create 053f4563973dc3491011b313545e714f86c0d5e38582f405677a7245d4026ed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:09:04 compute-0 systemd[1]: Started libpod-conmon-053f4563973dc3491011b313545e714f86c0d5e38582f405677a7245d4026ed4.scope.
Nov 25 18:09:04 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:09:04 compute-0 podman[473797]: 2025-11-25 18:09:04.463079412 +0000 UTC m=+0.110840358 container init 053f4563973dc3491011b313545e714f86c0d5e38582f405677a7245d4026ed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:09:04 compute-0 podman[473797]: 2025-11-25 18:09:04.375019743 +0000 UTC m=+0.022780689 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:09:04 compute-0 podman[473797]: 2025-11-25 18:09:04.469183368 +0000 UTC m=+0.116944294 container start 053f4563973dc3491011b313545e714f86c0d5e38582f405677a7245d4026ed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 18:09:04 compute-0 podman[473797]: 2025-11-25 18:09:04.472353794 +0000 UTC m=+0.120114750 container attach 053f4563973dc3491011b313545e714f86c0d5e38582f405677a7245d4026ed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:09:04 compute-0 admiring_leakey[473813]: 167 167
Nov 25 18:09:04 compute-0 systemd[1]: libpod-053f4563973dc3491011b313545e714f86c0d5e38582f405677a7245d4026ed4.scope: Deactivated successfully.
Nov 25 18:09:04 compute-0 conmon[473813]: conmon 053f4563973dc3491011 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-053f4563973dc3491011b313545e714f86c0d5e38582f405677a7245d4026ed4.scope/container/memory.events
Nov 25 18:09:04 compute-0 podman[473797]: 2025-11-25 18:09:04.474844732 +0000 UTC m=+0.122605668 container died 053f4563973dc3491011b313545e714f86c0d5e38582f405677a7245d4026ed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 18:09:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-231e3300730934385c2a718ebdd11c5d03d96b262d2c63dc4ecdf35193784cad-merged.mount: Deactivated successfully.
Nov 25 18:09:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4388: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:04 compute-0 podman[473797]: 2025-11-25 18:09:04.510485118 +0000 UTC m=+0.158246044 container remove 053f4563973dc3491011b313545e714f86c0d5e38582f405677a7245d4026ed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_leakey, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 18:09:04 compute-0 systemd[1]: libpod-conmon-053f4563973dc3491011b313545e714f86c0d5e38582f405677a7245d4026ed4.scope: Deactivated successfully.
Nov 25 18:09:04 compute-0 podman[473837]: 2025-11-25 18:09:04.733230253 +0000 UTC m=+0.055830696 container create 33ff60a57a6190733eaa855fdab59fe7a94005801dfcf88e3775fd8ff9308188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_edison, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 18:09:04 compute-0 systemd[1]: Started libpod-conmon-33ff60a57a6190733eaa855fdab59fe7a94005801dfcf88e3775fd8ff9308188.scope.
Nov 25 18:09:04 compute-0 podman[473837]: 2025-11-25 18:09:04.708151432 +0000 UTC m=+0.030751935 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:09:04 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:09:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6375316e26e8f8d9a6679507b76be28e83dc1ba78e09714ee3299e1cc903b7a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:09:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6375316e26e8f8d9a6679507b76be28e83dc1ba78e09714ee3299e1cc903b7a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:09:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6375316e26e8f8d9a6679507b76be28e83dc1ba78e09714ee3299e1cc903b7a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:09:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6375316e26e8f8d9a6679507b76be28e83dc1ba78e09714ee3299e1cc903b7a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:09:04 compute-0 podman[473837]: 2025-11-25 18:09:04.827392407 +0000 UTC m=+0.149992830 container init 33ff60a57a6190733eaa855fdab59fe7a94005801dfcf88e3775fd8ff9308188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 18:09:04 compute-0 podman[473837]: 2025-11-25 18:09:04.835921679 +0000 UTC m=+0.158522082 container start 33ff60a57a6190733eaa855fdab59fe7a94005801dfcf88e3775fd8ff9308188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:09:04 compute-0 podman[473837]: 2025-11-25 18:09:04.839782884 +0000 UTC m=+0.162383287 container attach 33ff60a57a6190733eaa855fdab59fe7a94005801dfcf88e3775fd8ff9308188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_edison, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 18:09:05 compute-0 nova_compute[254092]: 2025-11-25 18:09:05.133 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:09:05 compute-0 jovial_edison[473854]: {
Nov 25 18:09:05 compute-0 jovial_edison[473854]:     "0": [
Nov 25 18:09:05 compute-0 jovial_edison[473854]:         {
Nov 25 18:09:05 compute-0 jovial_edison[473854]:             "devices": [
Nov 25 18:09:05 compute-0 jovial_edison[473854]:                 "/dev/loop3"
Nov 25 18:09:05 compute-0 jovial_edison[473854]:             ],
Nov 25 18:09:05 compute-0 jovial_edison[473854]:             "lv_name": "ceph_lv0",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:             "lv_size": "21470642176",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:             "name": "ceph_lv0",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:             "tags": {
Nov 25 18:09:05 compute-0 jovial_edison[473854]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:                 "ceph.cluster_name": "ceph",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:                 "ceph.crush_device_class": "",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:                 "ceph.encrypted": "0",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:                 "ceph.osd_id": "0",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:                 "ceph.type": "block",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:                 "ceph.vdo": "0"
Nov 25 18:09:05 compute-0 jovial_edison[473854]:             },
Nov 25 18:09:05 compute-0 jovial_edison[473854]:             "type": "block",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:             "vg_name": "ceph_vg0"
Nov 25 18:09:05 compute-0 jovial_edison[473854]:         }
Nov 25 18:09:05 compute-0 jovial_edison[473854]:     ],
Nov 25 18:09:05 compute-0 jovial_edison[473854]:     "1": [
Nov 25 18:09:05 compute-0 jovial_edison[473854]:         {
Nov 25 18:09:05 compute-0 jovial_edison[473854]:             "devices": [
Nov 25 18:09:05 compute-0 jovial_edison[473854]:                 "/dev/loop4"
Nov 25 18:09:05 compute-0 jovial_edison[473854]:             ],
Nov 25 18:09:05 compute-0 jovial_edison[473854]:             "lv_name": "ceph_lv1",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:             "lv_size": "21470642176",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:             "name": "ceph_lv1",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:             "tags": {
Nov 25 18:09:05 compute-0 jovial_edison[473854]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:                 "ceph.cluster_name": "ceph",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:                 "ceph.crush_device_class": "",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:                 "ceph.encrypted": "0",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:                 "ceph.osd_id": "1",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:                 "ceph.type": "block",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:                 "ceph.vdo": "0"
Nov 25 18:09:05 compute-0 jovial_edison[473854]:             },
Nov 25 18:09:05 compute-0 jovial_edison[473854]:             "type": "block",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:             "vg_name": "ceph_vg1"
Nov 25 18:09:05 compute-0 jovial_edison[473854]:         }
Nov 25 18:09:05 compute-0 jovial_edison[473854]:     ],
Nov 25 18:09:05 compute-0 jovial_edison[473854]:     "2": [
Nov 25 18:09:05 compute-0 jovial_edison[473854]:         {
Nov 25 18:09:05 compute-0 jovial_edison[473854]:             "devices": [
Nov 25 18:09:05 compute-0 jovial_edison[473854]:                 "/dev/loop5"
Nov 25 18:09:05 compute-0 jovial_edison[473854]:             ],
Nov 25 18:09:05 compute-0 jovial_edison[473854]:             "lv_name": "ceph_lv2",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:             "lv_size": "21470642176",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:             "name": "ceph_lv2",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:             "tags": {
Nov 25 18:09:05 compute-0 jovial_edison[473854]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:                 "ceph.cluster_name": "ceph",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:                 "ceph.crush_device_class": "",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:                 "ceph.encrypted": "0",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:                 "ceph.osd_id": "2",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:                 "ceph.type": "block",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:                 "ceph.vdo": "0"
Nov 25 18:09:05 compute-0 jovial_edison[473854]:             },
Nov 25 18:09:05 compute-0 jovial_edison[473854]:             "type": "block",
Nov 25 18:09:05 compute-0 jovial_edison[473854]:             "vg_name": "ceph_vg2"
Nov 25 18:09:05 compute-0 jovial_edison[473854]:         }
Nov 25 18:09:05 compute-0 jovial_edison[473854]:     ]
Nov 25 18:09:05 compute-0 jovial_edison[473854]: }
Nov 25 18:09:05 compute-0 systemd[1]: libpod-33ff60a57a6190733eaa855fdab59fe7a94005801dfcf88e3775fd8ff9308188.scope: Deactivated successfully.
Nov 25 18:09:05 compute-0 podman[473837]: 2025-11-25 18:09:05.658134699 +0000 UTC m=+0.980735172 container died 33ff60a57a6190733eaa855fdab59fe7a94005801dfcf88e3775fd8ff9308188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_edison, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:09:05 compute-0 ceph-mon[74985]: pgmap v4388: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-6375316e26e8f8d9a6679507b76be28e83dc1ba78e09714ee3299e1cc903b7a4-merged.mount: Deactivated successfully.
Nov 25 18:09:05 compute-0 podman[473837]: 2025-11-25 18:09:05.899525179 +0000 UTC m=+1.222125612 container remove 33ff60a57a6190733eaa855fdab59fe7a94005801dfcf88e3775fd8ff9308188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_edison, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 18:09:05 compute-0 systemd[1]: libpod-conmon-33ff60a57a6190733eaa855fdab59fe7a94005801dfcf88e3775fd8ff9308188.scope: Deactivated successfully.
Nov 25 18:09:05 compute-0 sudo[473732]: pam_unix(sudo:session): session closed for user root
Nov 25 18:09:06 compute-0 sudo[473878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:09:06 compute-0 sudo[473878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:09:06 compute-0 sudo[473878]: pam_unix(sudo:session): session closed for user root
Nov 25 18:09:06 compute-0 sudo[473903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:09:06 compute-0 sudo[473903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:09:06 compute-0 sudo[473903]: pam_unix(sudo:session): session closed for user root
Nov 25 18:09:06 compute-0 sudo[473928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:09:06 compute-0 sudo[473928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:09:06 compute-0 sudo[473928]: pam_unix(sudo:session): session closed for user root
Nov 25 18:09:06 compute-0 sudo[473953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 18:09:06 compute-0 sudo[473953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:09:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:09:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4389: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:06 compute-0 podman[474019]: 2025-11-25 18:09:06.678426804 +0000 UTC m=+0.112374730 container create 3e4f98cb498eaa13cf624a74e383d8b82c1d4dc42fd25b2fd3e95a11122fd97d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_raman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 18:09:06 compute-0 podman[474019]: 2025-11-25 18:09:06.589014228 +0000 UTC m=+0.022962194 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:09:06 compute-0 systemd[1]: Started libpod-conmon-3e4f98cb498eaa13cf624a74e383d8b82c1d4dc42fd25b2fd3e95a11122fd97d.scope.
Nov 25 18:09:06 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:09:06 compute-0 podman[474019]: 2025-11-25 18:09:06.811487585 +0000 UTC m=+0.245435541 container init 3e4f98cb498eaa13cf624a74e383d8b82c1d4dc42fd25b2fd3e95a11122fd97d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_raman, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 18:09:06 compute-0 podman[474019]: 2025-11-25 18:09:06.817830877 +0000 UTC m=+0.251778813 container start 3e4f98cb498eaa13cf624a74e383d8b82c1d4dc42fd25b2fd3e95a11122fd97d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_raman, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:09:06 compute-0 podman[474019]: 2025-11-25 18:09:06.820975662 +0000 UTC m=+0.254923598 container attach 3e4f98cb498eaa13cf624a74e383d8b82c1d4dc42fd25b2fd3e95a11122fd97d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 18:09:06 compute-0 upbeat_raman[474036]: 167 167
Nov 25 18:09:06 compute-0 systemd[1]: libpod-3e4f98cb498eaa13cf624a74e383d8b82c1d4dc42fd25b2fd3e95a11122fd97d.scope: Deactivated successfully.
Nov 25 18:09:06 compute-0 podman[474019]: 2025-11-25 18:09:06.829181915 +0000 UTC m=+0.263129851 container died 3e4f98cb498eaa13cf624a74e383d8b82c1d4dc42fd25b2fd3e95a11122fd97d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 18:09:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f22ffa39ae4eb3fc67177b96e18159f096d03bb94fab0195416c740bb39772c-merged.mount: Deactivated successfully.
Nov 25 18:09:06 compute-0 podman[474019]: 2025-11-25 18:09:06.878454982 +0000 UTC m=+0.312402918 container remove 3e4f98cb498eaa13cf624a74e383d8b82c1d4dc42fd25b2fd3e95a11122fd97d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_raman, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 18:09:06 compute-0 systemd[1]: libpod-conmon-3e4f98cb498eaa13cf624a74e383d8b82c1d4dc42fd25b2fd3e95a11122fd97d.scope: Deactivated successfully.
Nov 25 18:09:07 compute-0 podman[474059]: 2025-11-25 18:09:07.056812511 +0000 UTC m=+0.038801304 container create 4591a81ee8a428364af1f0df77dcab6cef3e0c9f82bfb1a29d76f4a39dbf8075 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lichterman, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 18:09:07 compute-0 systemd[1]: Started libpod-conmon-4591a81ee8a428364af1f0df77dcab6cef3e0c9f82bfb1a29d76f4a39dbf8075.scope.
Nov 25 18:09:07 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:09:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cf19ca189bd2f55c1dd695700faf5d1e71c6111496107298041b4ee33fdbac9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:09:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cf19ca189bd2f55c1dd695700faf5d1e71c6111496107298041b4ee33fdbac9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:09:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cf19ca189bd2f55c1dd695700faf5d1e71c6111496107298041b4ee33fdbac9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:09:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cf19ca189bd2f55c1dd695700faf5d1e71c6111496107298041b4ee33fdbac9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:09:07 compute-0 podman[474059]: 2025-11-25 18:09:07.039692607 +0000 UTC m=+0.021681420 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:09:07 compute-0 podman[474059]: 2025-11-25 18:09:07.142769544 +0000 UTC m=+0.124758397 container init 4591a81ee8a428364af1f0df77dcab6cef3e0c9f82bfb1a29d76f4a39dbf8075 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lichterman, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:09:07 compute-0 podman[474059]: 2025-11-25 18:09:07.149422974 +0000 UTC m=+0.131411767 container start 4591a81ee8a428364af1f0df77dcab6cef3e0c9f82bfb1a29d76f4a39dbf8075 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lichterman, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:09:07 compute-0 podman[474059]: 2025-11-25 18:09:07.15368158 +0000 UTC m=+0.135670403 container attach 4591a81ee8a428364af1f0df77dcab6cef3e0c9f82bfb1a29d76f4a39dbf8075 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lichterman, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:09:07 compute-0 ceph-mon[74985]: pgmap v4389: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:07 compute-0 nova_compute[254092]: 2025-11-25 18:09:07.829 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:09:08 compute-0 hardcore_lichterman[474074]: {
Nov 25 18:09:08 compute-0 hardcore_lichterman[474074]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 18:09:08 compute-0 hardcore_lichterman[474074]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:09:08 compute-0 hardcore_lichterman[474074]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 18:09:08 compute-0 hardcore_lichterman[474074]:         "osd_id": 1,
Nov 25 18:09:08 compute-0 hardcore_lichterman[474074]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 18:09:08 compute-0 hardcore_lichterman[474074]:         "type": "bluestore"
Nov 25 18:09:08 compute-0 hardcore_lichterman[474074]:     },
Nov 25 18:09:08 compute-0 hardcore_lichterman[474074]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 18:09:08 compute-0 hardcore_lichterman[474074]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:09:08 compute-0 hardcore_lichterman[474074]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 18:09:08 compute-0 hardcore_lichterman[474074]:         "osd_id": 2,
Nov 25 18:09:08 compute-0 hardcore_lichterman[474074]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 18:09:08 compute-0 hardcore_lichterman[474074]:         "type": "bluestore"
Nov 25 18:09:08 compute-0 hardcore_lichterman[474074]:     },
Nov 25 18:09:08 compute-0 hardcore_lichterman[474074]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 18:09:08 compute-0 hardcore_lichterman[474074]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:09:08 compute-0 hardcore_lichterman[474074]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 18:09:08 compute-0 hardcore_lichterman[474074]:         "osd_id": 0,
Nov 25 18:09:08 compute-0 hardcore_lichterman[474074]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 18:09:08 compute-0 hardcore_lichterman[474074]:         "type": "bluestore"
Nov 25 18:09:08 compute-0 hardcore_lichterman[474074]:     }
Nov 25 18:09:08 compute-0 hardcore_lichterman[474074]: }
Nov 25 18:09:08 compute-0 systemd[1]: libpod-4591a81ee8a428364af1f0df77dcab6cef3e0c9f82bfb1a29d76f4a39dbf8075.scope: Deactivated successfully.
Nov 25 18:09:08 compute-0 podman[474059]: 2025-11-25 18:09:08.166709737 +0000 UTC m=+1.148698540 container died 4591a81ee8a428364af1f0df77dcab6cef3e0c9f82bfb1a29d76f4a39dbf8075 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lichterman, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:09:08 compute-0 systemd[1]: libpod-4591a81ee8a428364af1f0df77dcab6cef3e0c9f82bfb1a29d76f4a39dbf8075.scope: Consumed 1.029s CPU time.
Nov 25 18:09:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-7cf19ca189bd2f55c1dd695700faf5d1e71c6111496107298041b4ee33fdbac9-merged.mount: Deactivated successfully.
Nov 25 18:09:08 compute-0 podman[474059]: 2025-11-25 18:09:08.363283342 +0000 UTC m=+1.345272135 container remove 4591a81ee8a428364af1f0df77dcab6cef3e0c9f82bfb1a29d76f4a39dbf8075 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:09:08 compute-0 systemd[1]: libpod-conmon-4591a81ee8a428364af1f0df77dcab6cef3e0c9f82bfb1a29d76f4a39dbf8075.scope: Deactivated successfully.
Nov 25 18:09:08 compute-0 sudo[473953]: pam_unix(sudo:session): session closed for user root
Nov 25 18:09:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:09:08 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:09:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:09:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4390: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:08 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:09:08 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 2fdf26f8-1fba-475e-b70c-77a856728538 does not exist
Nov 25 18:09:08 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 4323d5b8-ca6e-4767-ae8e-3e4cb47277d7 does not exist
Nov 25 18:09:08 compute-0 sudo[474122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:09:08 compute-0 sudo[474122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:09:08 compute-0 sudo[474122]: pam_unix(sudo:session): session closed for user root
Nov 25 18:09:08 compute-0 sudo[474147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 18:09:08 compute-0 sudo[474147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:09:08 compute-0 sudo[474147]: pam_unix(sudo:session): session closed for user root
Nov 25 18:09:09 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:09:09 compute-0 ceph-mon[74985]: pgmap v4390: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:09 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:09:10 compute-0 nova_compute[254092]: 2025-11-25 18:09:10.139 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:09:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:09:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:09:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:09:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:09:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:09:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:09:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4391: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:09:11 compute-0 ceph-mon[74985]: pgmap v4391: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4392: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:12 compute-0 nova_compute[254092]: 2025-11-25 18:09:12.831 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:09:13 compute-0 ceph-mon[74985]: pgmap v4392: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 18:09:13.716 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 18:09:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 18:09:13.717 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 18:09:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 18:09:13.717 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:09:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4393: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:15 compute-0 nova_compute[254092]: 2025-11-25 18:09:15.141 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:09:15 compute-0 ceph-mon[74985]: pgmap v4393: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:09:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4394: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:17 compute-0 ceph-mon[74985]: pgmap v4394: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:17 compute-0 nova_compute[254092]: 2025-11-25 18:09:17.832 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:09:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4395: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:19 compute-0 ceph-mon[74985]: pgmap v4395: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:20 compute-0 nova_compute[254092]: 2025-11-25 18:09:20.145 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:09:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4396: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:09:21 compute-0 ceph-mon[74985]: pgmap v4396: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4397: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:22 compute-0 nova_compute[254092]: 2025-11-25 18:09:22.873 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:09:23 compute-0 ceph-mon[74985]: pgmap v4397: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4398: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:25 compute-0 nova_compute[254092]: 2025-11-25 18:09:25.149 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:09:25 compute-0 nova_compute[254092]: 2025-11-25 18:09:25.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:09:25 compute-0 ceph-mon[74985]: pgmap v4398: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:09:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4399: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:27 compute-0 nova_compute[254092]: 2025-11-25 18:09:27.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:09:27 compute-0 ceph-mon[74985]: pgmap v4399: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:27 compute-0 nova_compute[254092]: 2025-11-25 18:09:27.877 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:09:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4400: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:29 compute-0 ceph-mon[74985]: pgmap v4400: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:30 compute-0 nova_compute[254092]: 2025-11-25 18:09:30.153 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:09:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4401: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:30 compute-0 podman[474173]: 2025-11-25 18:09:30.654126708 +0000 UTC m=+0.066399113 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 18:09:30 compute-0 podman[474172]: 2025-11-25 18:09:30.683513605 +0000 UTC m=+0.095904043 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 18:09:30 compute-0 podman[474174]: 2025-11-25 18:09:30.732487855 +0000 UTC m=+0.135627682 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:09:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:09:31 compute-0 ceph-mon[74985]: pgmap v4401: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4402: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:32 compute-0 nova_compute[254092]: 2025-11-25 18:09:32.879 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:09:33 compute-0 ceph-mon[74985]: pgmap v4402: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:34 compute-0 nova_compute[254092]: 2025-11-25 18:09:34.492 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:09:34 compute-0 nova_compute[254092]: 2025-11-25 18:09:34.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:09:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4403: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:34 compute-0 nova_compute[254092]: 2025-11-25 18:09:34.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 18:09:34 compute-0 nova_compute[254092]: 2025-11-25 18:09:34.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 18:09:34 compute-0 nova_compute[254092]: 2025-11-25 18:09:34.522 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:09:34 compute-0 nova_compute[254092]: 2025-11-25 18:09:34.522 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 18:09:34 compute-0 nova_compute[254092]: 2025-11-25 18:09:34.523 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 18:09:34 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 18:09:34 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1331853356' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:09:34 compute-0 nova_compute[254092]: 2025-11-25 18:09:34.965 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 18:09:35 compute-0 nova_compute[254092]: 2025-11-25 18:09:35.106 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 18:09:35 compute-0 nova_compute[254092]: 2025-11-25 18:09:35.108 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3617MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 18:09:35 compute-0 nova_compute[254092]: 2025-11-25 18:09:35.108 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 18:09:35 compute-0 nova_compute[254092]: 2025-11-25 18:09:35.108 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 18:09:35 compute-0 nova_compute[254092]: 2025-11-25 18:09:35.156 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:09:35 compute-0 nova_compute[254092]: 2025-11-25 18:09:35.701 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 18:09:35 compute-0 nova_compute[254092]: 2025-11-25 18:09:35.701 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 18:09:35 compute-0 ceph-mon[74985]: pgmap v4403: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:35 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1331853356' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:09:36 compute-0 nova_compute[254092]: 2025-11-25 18:09:36.112 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 18:09:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:09:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4404: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 18:09:36 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1642846535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:09:36 compute-0 nova_compute[254092]: 2025-11-25 18:09:36.549 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 18:09:36 compute-0 nova_compute[254092]: 2025-11-25 18:09:36.554 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 18:09:36 compute-0 nova_compute[254092]: 2025-11-25 18:09:36.566 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 18:09:36 compute-0 nova_compute[254092]: 2025-11-25 18:09:36.567 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 18:09:36 compute-0 nova_compute[254092]: 2025-11-25 18:09:36.568 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.459s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:09:36 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1642846535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:09:37 compute-0 nova_compute[254092]: 2025-11-25 18:09:37.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:09:37 compute-0 nova_compute[254092]: 2025-11-25 18:09:37.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 18:09:37 compute-0 nova_compute[254092]: 2025-11-25 18:09:37.497 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 18:09:37 compute-0 nova_compute[254092]: 2025-11-25 18:09:37.510 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 18:09:37 compute-0 nova_compute[254092]: 2025-11-25 18:09:37.510 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:09:37 compute-0 nova_compute[254092]: 2025-11-25 18:09:37.510 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:09:37 compute-0 nova_compute[254092]: 2025-11-25 18:09:37.511 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:09:37 compute-0 nova_compute[254092]: 2025-11-25 18:09:37.511 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 18:09:37 compute-0 ceph-mon[74985]: pgmap v4404: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:37 compute-0 nova_compute[254092]: 2025-11-25 18:09:37.881 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:09:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4405: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:39 compute-0 ceph-mon[74985]: pgmap v4405: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:09:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:09:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:09:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:09:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:09:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:09:40 compute-0 nova_compute[254092]: 2025-11-25 18:09:40.159 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:09:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_18:09:40
Nov 25 18:09:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 18:09:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 18:09:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'default.rgw.meta', 'images', 'backups', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes']
Nov 25 18:09:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 18:09:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4406: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 18:09:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:09:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:09:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:09:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:09:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 18:09:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:09:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:09:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:09:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:09:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:09:41 compute-0 nova_compute[254092]: 2025-11-25 18:09:41.504 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:09:41 compute-0 nova_compute[254092]: 2025-11-25 18:09:41.504 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 18:09:41 compute-0 ceph-mon[74985]: pgmap v4406: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4407: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:42 compute-0 nova_compute[254092]: 2025-11-25 18:09:42.883 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:09:43 compute-0 ceph-mon[74985]: pgmap v4407: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4408: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:45 compute-0 nova_compute[254092]: 2025-11-25 18:09:45.163 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:09:45 compute-0 ceph-mon[74985]: pgmap v4408: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:09:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4409: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:47 compute-0 ceph-mon[74985]: pgmap v4409: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:47 compute-0 nova_compute[254092]: 2025-11-25 18:09:47.885 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:09:48 compute-0 nova_compute[254092]: 2025-11-25 18:09:48.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:09:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4410: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:49 compute-0 sshd-session[474281]: Connection closed by authenticating user root 171.244.51.45 port 56944 [preauth]
Nov 25 18:09:49 compute-0 ceph-mon[74985]: pgmap v4410: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:50 compute-0 nova_compute[254092]: 2025-11-25 18:09:50.167 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:09:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4411: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:09:51 compute-0 ceph-mon[74985]: pgmap v4411: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4412: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 18:09:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:09:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 18:09:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:09:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 18:09:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:09:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:09:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:09:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:09:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:09:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 18:09:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:09:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 18:09:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:09:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:09:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:09:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 18:09:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:09:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 18:09:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:09:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:09:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:09:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 18:09:52 compute-0 nova_compute[254092]: 2025-11-25 18:09:52.888 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:09:53 compute-0 ceph-mon[74985]: pgmap v4412: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4413: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:55 compute-0 nova_compute[254092]: 2025-11-25 18:09:55.171 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:09:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 18:09:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/828155944' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 18:09:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 18:09:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/828155944' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 18:09:55 compute-0 ceph-mon[74985]: pgmap v4413: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/828155944' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 18:09:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/828155944' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 18:09:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:09:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4414: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:57 compute-0 nova_compute[254092]: 2025-11-25 18:09:57.889 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:09:57 compute-0 ceph-mon[74985]: pgmap v4414: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4415: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:09:59 compute-0 ceph-mon[74985]: pgmap v4415: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:00 compute-0 nova_compute[254092]: 2025-11-25 18:10:00.182 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:10:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4416: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:10:01 compute-0 podman[474284]: 2025-11-25 18:10:01.625761759 +0000 UTC m=+0.045151887 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 18:10:01 compute-0 podman[474283]: 2025-11-25 18:10:01.633724494 +0000 UTC m=+0.056642818 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 18:10:01 compute-0 podman[474285]: 2025-11-25 18:10:01.668966331 +0000 UTC m=+0.082257983 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 18:10:01 compute-0 ceph-mon[74985]: pgmap v4416: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4417: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:02 compute-0 nova_compute[254092]: 2025-11-25 18:10:02.893 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:10:03 compute-0 ceph-mon[74985]: pgmap v4417: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4418: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:05 compute-0 nova_compute[254092]: 2025-11-25 18:10:05.186 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:10:05 compute-0 ceph-mon[74985]: pgmap v4418: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:10:06 compute-0 nova_compute[254092]: 2025-11-25 18:10:06.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:10:06 compute-0 nova_compute[254092]: 2025-11-25 18:10:06.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 18:10:06 compute-0 nova_compute[254092]: 2025-11-25 18:10:06.509 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 18:10:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4419: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:07 compute-0 ceph-mon[74985]: pgmap v4419: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:07 compute-0 nova_compute[254092]: 2025-11-25 18:10:07.894 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:10:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4420: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:08 compute-0 sudo[474347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:10:08 compute-0 sudo[474347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:10:08 compute-0 sudo[474347]: pam_unix(sudo:session): session closed for user root
Nov 25 18:10:08 compute-0 sudo[474372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:10:08 compute-0 sudo[474372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:10:08 compute-0 sudo[474372]: pam_unix(sudo:session): session closed for user root
Nov 25 18:10:08 compute-0 sudo[474397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:10:08 compute-0 sudo[474397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:10:08 compute-0 sudo[474397]: pam_unix(sudo:session): session closed for user root
Nov 25 18:10:08 compute-0 sudo[474422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 18:10:08 compute-0 sudo[474422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:10:09 compute-0 sudo[474422]: pam_unix(sudo:session): session closed for user root
Nov 25 18:10:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 18:10:09 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 18:10:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:10:09 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:10:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 18:10:09 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:10:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 18:10:09 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:10:09 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev f885bc6e-55f0-4884-b7c4-12e2609749ec does not exist
Nov 25 18:10:09 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev b7d718a6-c3b5-467d-b275-5c426ca39ee1 does not exist
Nov 25 18:10:09 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev fcc65d9b-8af6-4034-a3bd-4445a377cb6f does not exist
Nov 25 18:10:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 18:10:09 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:10:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 18:10:09 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:10:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:10:09 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:10:09 compute-0 sudo[474479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:10:09 compute-0 sudo[474479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:10:09 compute-0 sudo[474479]: pam_unix(sudo:session): session closed for user root
Nov 25 18:10:09 compute-0 sudo[474504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:10:09 compute-0 sudo[474504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:10:09 compute-0 sudo[474504]: pam_unix(sudo:session): session closed for user root
Nov 25 18:10:09 compute-0 sudo[474529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:10:09 compute-0 sudo[474529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:10:09 compute-0 sudo[474529]: pam_unix(sudo:session): session closed for user root
Nov 25 18:10:09 compute-0 sudo[474554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 18:10:09 compute-0 sudo[474554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:10:09 compute-0 ceph-mon[74985]: pgmap v4420: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:09 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 18:10:09 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:10:09 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:10:09 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:10:09 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:10:09 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:10:09 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:10:09 compute-0 podman[474619]: 2025-11-25 18:10:09.97720101 +0000 UTC m=+0.040307355 container create 830860aa81d5236ae472511294d64ab04dded5dc9145002220cac14eb170bdfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 18:10:10 compute-0 systemd[1]: Started libpod-conmon-830860aa81d5236ae472511294d64ab04dded5dc9145002220cac14eb170bdfa.scope.
Nov 25 18:10:10 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:10:10 compute-0 podman[474619]: 2025-11-25 18:10:09.960790495 +0000 UTC m=+0.023896840 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:10:10 compute-0 podman[474619]: 2025-11-25 18:10:10.061474807 +0000 UTC m=+0.124581222 container init 830860aa81d5236ae472511294d64ab04dded5dc9145002220cac14eb170bdfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_chaplygin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 18:10:10 compute-0 podman[474619]: 2025-11-25 18:10:10.06970417 +0000 UTC m=+0.132810505 container start 830860aa81d5236ae472511294d64ab04dded5dc9145002220cac14eb170bdfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 18:10:10 compute-0 podman[474619]: 2025-11-25 18:10:10.073078582 +0000 UTC m=+0.136185007 container attach 830860aa81d5236ae472511294d64ab04dded5dc9145002220cac14eb170bdfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_chaplygin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:10:10 compute-0 mystifying_chaplygin[474635]: 167 167
Nov 25 18:10:10 compute-0 systemd[1]: libpod-830860aa81d5236ae472511294d64ab04dded5dc9145002220cac14eb170bdfa.scope: Deactivated successfully.
Nov 25 18:10:10 compute-0 podman[474619]: 2025-11-25 18:10:10.076747821 +0000 UTC m=+0.139854156 container died 830860aa81d5236ae472511294d64ab04dded5dc9145002220cac14eb170bdfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_chaplygin, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:10:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-4daaebc09c98882e3fb66e93c168fb0a8bc6f379d3f4dc7d80297cdd9b5ffab2-merged.mount: Deactivated successfully.
Nov 25 18:10:10 compute-0 podman[474619]: 2025-11-25 18:10:10.115349029 +0000 UTC m=+0.178455364 container remove 830860aa81d5236ae472511294d64ab04dded5dc9145002220cac14eb170bdfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_chaplygin, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 18:10:10 compute-0 systemd[1]: libpod-conmon-830860aa81d5236ae472511294d64ab04dded5dc9145002220cac14eb170bdfa.scope: Deactivated successfully.
Nov 25 18:10:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:10:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:10:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:10:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:10:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:10:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:10:10 compute-0 nova_compute[254092]: 2025-11-25 18:10:10.189 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:10:10 compute-0 podman[474661]: 2025-11-25 18:10:10.309319672 +0000 UTC m=+0.045192037 container create e1f60b40de19e9d9d664ba1dd81f477c03e9fcdd2554c71e0c12f8d9396ee14f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:10:10 compute-0 systemd[1]: Started libpod-conmon-e1f60b40de19e9d9d664ba1dd81f477c03e9fcdd2554c71e0c12f8d9396ee14f.scope.
Nov 25 18:10:10 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:10:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4241e94ae5e725a4311a736d8513d6cbb2ed757df6bb6c40998b7b0713380a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:10:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4241e94ae5e725a4311a736d8513d6cbb2ed757df6bb6c40998b7b0713380a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:10:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4241e94ae5e725a4311a736d8513d6cbb2ed757df6bb6c40998b7b0713380a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:10:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4241e94ae5e725a4311a736d8513d6cbb2ed757df6bb6c40998b7b0713380a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:10:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4241e94ae5e725a4311a736d8513d6cbb2ed757df6bb6c40998b7b0713380a7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:10:10 compute-0 podman[474661]: 2025-11-25 18:10:10.292324901 +0000 UTC m=+0.028197296 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:10:10 compute-0 podman[474661]: 2025-11-25 18:10:10.390883535 +0000 UTC m=+0.126755930 container init e1f60b40de19e9d9d664ba1dd81f477c03e9fcdd2554c71e0c12f8d9396ee14f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kowalevski, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 18:10:10 compute-0 podman[474661]: 2025-11-25 18:10:10.396718873 +0000 UTC m=+0.132591228 container start e1f60b40de19e9d9d664ba1dd81f477c03e9fcdd2554c71e0c12f8d9396ee14f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kowalevski, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:10:10 compute-0 podman[474661]: 2025-11-25 18:10:10.399559161 +0000 UTC m=+0.135431556 container attach e1f60b40de19e9d9d664ba1dd81f477c03e9fcdd2554c71e0c12f8d9396ee14f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 18:10:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4421: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:10:11 compute-0 youthful_kowalevski[474677]: --> passed data devices: 0 physical, 3 LVM
Nov 25 18:10:11 compute-0 youthful_kowalevski[474677]: --> relative data size: 1.0
Nov 25 18:10:11 compute-0 youthful_kowalevski[474677]: --> All data devices are unavailable
Nov 25 18:10:11 compute-0 systemd[1]: libpod-e1f60b40de19e9d9d664ba1dd81f477c03e9fcdd2554c71e0c12f8d9396ee14f.scope: Deactivated successfully.
Nov 25 18:10:11 compute-0 podman[474661]: 2025-11-25 18:10:11.406573255 +0000 UTC m=+1.142445610 container died e1f60b40de19e9d9d664ba1dd81f477c03e9fcdd2554c71e0c12f8d9396ee14f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kowalevski, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 18:10:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4241e94ae5e725a4311a736d8513d6cbb2ed757df6bb6c40998b7b0713380a7-merged.mount: Deactivated successfully.
Nov 25 18:10:11 compute-0 podman[474661]: 2025-11-25 18:10:11.581872352 +0000 UTC m=+1.317744717 container remove e1f60b40de19e9d9d664ba1dd81f477c03e9fcdd2554c71e0c12f8d9396ee14f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kowalevski, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:10:11 compute-0 systemd[1]: libpod-conmon-e1f60b40de19e9d9d664ba1dd81f477c03e9fcdd2554c71e0c12f8d9396ee14f.scope: Deactivated successfully.
Nov 25 18:10:11 compute-0 sudo[474554]: pam_unix(sudo:session): session closed for user root
Nov 25 18:10:11 compute-0 ceph-mon[74985]: pgmap v4421: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:11 compute-0 sudo[474720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:10:11 compute-0 sudo[474720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:10:11 compute-0 sudo[474720]: pam_unix(sudo:session): session closed for user root
Nov 25 18:10:11 compute-0 sudo[474745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:10:11 compute-0 sudo[474745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:10:11 compute-0 sudo[474745]: pam_unix(sudo:session): session closed for user root
Nov 25 18:10:11 compute-0 sudo[474770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:10:11 compute-0 sudo[474770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:10:11 compute-0 sudo[474770]: pam_unix(sudo:session): session closed for user root
Nov 25 18:10:11 compute-0 sudo[474795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 18:10:11 compute-0 sudo[474795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:10:12 compute-0 podman[474860]: 2025-11-25 18:10:12.197465605 +0000 UTC m=+0.037934240 container create c2553f2ae8f9f621f14f22f7ff8c97d9d359fcb90b4f9b10d3e813b04317dbf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:10:12 compute-0 systemd[1]: Started libpod-conmon-c2553f2ae8f9f621f14f22f7ff8c97d9d359fcb90b4f9b10d3e813b04317dbf1.scope.
Nov 25 18:10:12 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:10:12 compute-0 podman[474860]: 2025-11-25 18:10:12.18139505 +0000 UTC m=+0.021863705 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:10:12 compute-0 podman[474860]: 2025-11-25 18:10:12.285667179 +0000 UTC m=+0.126135834 container init c2553f2ae8f9f621f14f22f7ff8c97d9d359fcb90b4f9b10d3e813b04317dbf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_elion, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 18:10:12 compute-0 podman[474860]: 2025-11-25 18:10:12.293062899 +0000 UTC m=+0.133531534 container start c2553f2ae8f9f621f14f22f7ff8c97d9d359fcb90b4f9b10d3e813b04317dbf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_elion, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 18:10:12 compute-0 podman[474860]: 2025-11-25 18:10:12.295969618 +0000 UTC m=+0.136438273 container attach c2553f2ae8f9f621f14f22f7ff8c97d9d359fcb90b4f9b10d3e813b04317dbf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_elion, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:10:12 compute-0 elastic_elion[474876]: 167 167
Nov 25 18:10:12 compute-0 systemd[1]: libpod-c2553f2ae8f9f621f14f22f7ff8c97d9d359fcb90b4f9b10d3e813b04317dbf1.scope: Deactivated successfully.
Nov 25 18:10:12 compute-0 podman[474860]: 2025-11-25 18:10:12.298592629 +0000 UTC m=+0.139061264 container died c2553f2ae8f9f621f14f22f7ff8c97d9d359fcb90b4f9b10d3e813b04317dbf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_elion, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:10:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-77725b266f4ed036a4d935f98857517dd572a44abb7863c2f0d706cee2fc77e0-merged.mount: Deactivated successfully.
Nov 25 18:10:12 compute-0 podman[474860]: 2025-11-25 18:10:12.335756268 +0000 UTC m=+0.176224903 container remove c2553f2ae8f9f621f14f22f7ff8c97d9d359fcb90b4f9b10d3e813b04317dbf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_elion, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 18:10:12 compute-0 systemd[1]: libpod-conmon-c2553f2ae8f9f621f14f22f7ff8c97d9d359fcb90b4f9b10d3e813b04317dbf1.scope: Deactivated successfully.
Nov 25 18:10:12 compute-0 podman[474902]: 2025-11-25 18:10:12.506078859 +0000 UTC m=+0.057628564 container create cfcce6a7bf9c0f8b728f9d97f5f1730761a520cb7a7513e13aecb2f5a00fc028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_edison, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 18:10:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4422: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:12 compute-0 systemd[1]: Started libpod-conmon-cfcce6a7bf9c0f8b728f9d97f5f1730761a520cb7a7513e13aecb2f5a00fc028.scope.
Nov 25 18:10:12 compute-0 podman[474902]: 2025-11-25 18:10:12.478264265 +0000 UTC m=+0.029814070 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:10:12 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:10:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3bd7e8b930edc99c99c6b705c54ed79bf1851dffa5fa0c04fb2290934118645/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:10:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3bd7e8b930edc99c99c6b705c54ed79bf1851dffa5fa0c04fb2290934118645/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:10:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3bd7e8b930edc99c99c6b705c54ed79bf1851dffa5fa0c04fb2290934118645/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:10:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3bd7e8b930edc99c99c6b705c54ed79bf1851dffa5fa0c04fb2290934118645/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:10:12 compute-0 podman[474902]: 2025-11-25 18:10:12.654472165 +0000 UTC m=+0.206021900 container init cfcce6a7bf9c0f8b728f9d97f5f1730761a520cb7a7513e13aecb2f5a00fc028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_edison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 18:10:12 compute-0 podman[474902]: 2025-11-25 18:10:12.661957519 +0000 UTC m=+0.213507224 container start cfcce6a7bf9c0f8b728f9d97f5f1730761a520cb7a7513e13aecb2f5a00fc028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_edison, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 18:10:12 compute-0 podman[474902]: 2025-11-25 18:10:12.728029752 +0000 UTC m=+0.279579477 container attach cfcce6a7bf9c0f8b728f9d97f5f1730761a520cb7a7513e13aecb2f5a00fc028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:10:12 compute-0 nova_compute[254092]: 2025-11-25 18:10:12.896 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:10:13 compute-0 pensive_edison[474918]: {
Nov 25 18:10:13 compute-0 pensive_edison[474918]:     "0": [
Nov 25 18:10:13 compute-0 pensive_edison[474918]:         {
Nov 25 18:10:13 compute-0 pensive_edison[474918]:             "devices": [
Nov 25 18:10:13 compute-0 pensive_edison[474918]:                 "/dev/loop3"
Nov 25 18:10:13 compute-0 pensive_edison[474918]:             ],
Nov 25 18:10:13 compute-0 pensive_edison[474918]:             "lv_name": "ceph_lv0",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:             "lv_size": "21470642176",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:             "name": "ceph_lv0",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:             "tags": {
Nov 25 18:10:13 compute-0 pensive_edison[474918]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:                 "ceph.cluster_name": "ceph",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:                 "ceph.crush_device_class": "",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:                 "ceph.encrypted": "0",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:                 "ceph.osd_id": "0",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:                 "ceph.type": "block",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:                 "ceph.vdo": "0"
Nov 25 18:10:13 compute-0 pensive_edison[474918]:             },
Nov 25 18:10:13 compute-0 pensive_edison[474918]:             "type": "block",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:             "vg_name": "ceph_vg0"
Nov 25 18:10:13 compute-0 pensive_edison[474918]:         }
Nov 25 18:10:13 compute-0 pensive_edison[474918]:     ],
Nov 25 18:10:13 compute-0 pensive_edison[474918]:     "1": [
Nov 25 18:10:13 compute-0 pensive_edison[474918]:         {
Nov 25 18:10:13 compute-0 pensive_edison[474918]:             "devices": [
Nov 25 18:10:13 compute-0 pensive_edison[474918]:                 "/dev/loop4"
Nov 25 18:10:13 compute-0 pensive_edison[474918]:             ],
Nov 25 18:10:13 compute-0 pensive_edison[474918]:             "lv_name": "ceph_lv1",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:             "lv_size": "21470642176",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:             "name": "ceph_lv1",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:             "tags": {
Nov 25 18:10:13 compute-0 pensive_edison[474918]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:                 "ceph.cluster_name": "ceph",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:                 "ceph.crush_device_class": "",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:                 "ceph.encrypted": "0",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:                 "ceph.osd_id": "1",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:                 "ceph.type": "block",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:                 "ceph.vdo": "0"
Nov 25 18:10:13 compute-0 pensive_edison[474918]:             },
Nov 25 18:10:13 compute-0 pensive_edison[474918]:             "type": "block",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:             "vg_name": "ceph_vg1"
Nov 25 18:10:13 compute-0 pensive_edison[474918]:         }
Nov 25 18:10:13 compute-0 pensive_edison[474918]:     ],
Nov 25 18:10:13 compute-0 pensive_edison[474918]:     "2": [
Nov 25 18:10:13 compute-0 pensive_edison[474918]:         {
Nov 25 18:10:13 compute-0 pensive_edison[474918]:             "devices": [
Nov 25 18:10:13 compute-0 pensive_edison[474918]:                 "/dev/loop5"
Nov 25 18:10:13 compute-0 pensive_edison[474918]:             ],
Nov 25 18:10:13 compute-0 pensive_edison[474918]:             "lv_name": "ceph_lv2",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:             "lv_size": "21470642176",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:             "name": "ceph_lv2",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:             "tags": {
Nov 25 18:10:13 compute-0 pensive_edison[474918]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:                 "ceph.cluster_name": "ceph",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:                 "ceph.crush_device_class": "",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:                 "ceph.encrypted": "0",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:                 "ceph.osd_id": "2",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:                 "ceph.type": "block",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:                 "ceph.vdo": "0"
Nov 25 18:10:13 compute-0 pensive_edison[474918]:             },
Nov 25 18:10:13 compute-0 pensive_edison[474918]:             "type": "block",
Nov 25 18:10:13 compute-0 pensive_edison[474918]:             "vg_name": "ceph_vg2"
Nov 25 18:10:13 compute-0 pensive_edison[474918]:         }
Nov 25 18:10:13 compute-0 pensive_edison[474918]:     ]
Nov 25 18:10:13 compute-0 pensive_edison[474918]: }
Nov 25 18:10:13 compute-0 systemd[1]: libpod-cfcce6a7bf9c0f8b728f9d97f5f1730761a520cb7a7513e13aecb2f5a00fc028.scope: Deactivated successfully.
Nov 25 18:10:13 compute-0 podman[474902]: 2025-11-25 18:10:13.434527372 +0000 UTC m=+0.986077087 container died cfcce6a7bf9c0f8b728f9d97f5f1730761a520cb7a7513e13aecb2f5a00fc028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_edison, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:10:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3bd7e8b930edc99c99c6b705c54ed79bf1851dffa5fa0c04fb2290934118645-merged.mount: Deactivated successfully.
Nov 25 18:10:13 compute-0 podman[474902]: 2025-11-25 18:10:13.558457595 +0000 UTC m=+1.110007330 container remove cfcce6a7bf9c0f8b728f9d97f5f1730761a520cb7a7513e13aecb2f5a00fc028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_edison, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:10:13 compute-0 systemd[1]: libpod-conmon-cfcce6a7bf9c0f8b728f9d97f5f1730761a520cb7a7513e13aecb2f5a00fc028.scope: Deactivated successfully.
Nov 25 18:10:13 compute-0 sudo[474795]: pam_unix(sudo:session): session closed for user root
Nov 25 18:10:13 compute-0 sudo[474939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:10:13 compute-0 sudo[474939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:10:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 18:10:13.718 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 18:10:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 18:10:13.719 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 18:10:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 18:10:13.719 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:10:13 compute-0 sudo[474939]: pam_unix(sudo:session): session closed for user root
Nov 25 18:10:13 compute-0 sudo[474964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:10:13 compute-0 sudo[474964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:10:13 compute-0 sudo[474964]: pam_unix(sudo:session): session closed for user root
Nov 25 18:10:13 compute-0 ceph-mon[74985]: pgmap v4422: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:13 compute-0 sudo[474989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:10:13 compute-0 sudo[474989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:10:13 compute-0 sudo[474989]: pam_unix(sudo:session): session closed for user root
Nov 25 18:10:13 compute-0 sudo[475014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 18:10:13 compute-0 sudo[475014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:10:14 compute-0 podman[475082]: 2025-11-25 18:10:14.370682564 +0000 UTC m=+0.040813529 container create bc215e86c7dd18d17259088839f62651d22a126af240bb144e8fb52e7aa531a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hawking, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:10:14 compute-0 systemd[1]: Started libpod-conmon-bc215e86c7dd18d17259088839f62651d22a126af240bb144e8fb52e7aa531a3.scope.
Nov 25 18:10:14 compute-0 podman[475082]: 2025-11-25 18:10:14.353025815 +0000 UTC m=+0.023156820 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:10:14 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:10:14 compute-0 podman[475082]: 2025-11-25 18:10:14.481195593 +0000 UTC m=+0.151326598 container init bc215e86c7dd18d17259088839f62651d22a126af240bb144e8fb52e7aa531a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hawking, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 18:10:14 compute-0 podman[475082]: 2025-11-25 18:10:14.490241148 +0000 UTC m=+0.160372113 container start bc215e86c7dd18d17259088839f62651d22a126af240bb144e8fb52e7aa531a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hawking, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Nov 25 18:10:14 compute-0 podman[475082]: 2025-11-25 18:10:14.493087005 +0000 UTC m=+0.163217970 container attach bc215e86c7dd18d17259088839f62651d22a126af240bb144e8fb52e7aa531a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:10:14 compute-0 condescending_hawking[475098]: 167 167
Nov 25 18:10:14 compute-0 systemd[1]: libpod-bc215e86c7dd18d17259088839f62651d22a126af240bb144e8fb52e7aa531a3.scope: Deactivated successfully.
Nov 25 18:10:14 compute-0 podman[475082]: 2025-11-25 18:10:14.496926139 +0000 UTC m=+0.167057104 container died bc215e86c7dd18d17259088839f62651d22a126af240bb144e8fb52e7aa531a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hawking, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:10:14 compute-0 nova_compute[254092]: 2025-11-25 18:10:14.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:10:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-75dc6dfa47b30ec8b13779ec3d921d2457ade029da22c662d08116a099077f8b-merged.mount: Deactivated successfully.
Nov 25 18:10:14 compute-0 podman[475082]: 2025-11-25 18:10:14.528723052 +0000 UTC m=+0.198854017 container remove bc215e86c7dd18d17259088839f62651d22a126af240bb144e8fb52e7aa531a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:10:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4423: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:14 compute-0 systemd[1]: libpod-conmon-bc215e86c7dd18d17259088839f62651d22a126af240bb144e8fb52e7aa531a3.scope: Deactivated successfully.
Nov 25 18:10:14 compute-0 podman[475120]: 2025-11-25 18:10:14.710236127 +0000 UTC m=+0.064573283 container create 244ace2913f8da7ce4c5bc0104ec9e4d1526cbde7a5a702c02100d8d1d0dbca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mestorf, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:10:14 compute-0 podman[475120]: 2025-11-25 18:10:14.667314973 +0000 UTC m=+0.021652159 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:10:14 compute-0 systemd[1]: Started libpod-conmon-244ace2913f8da7ce4c5bc0104ec9e4d1526cbde7a5a702c02100d8d1d0dbca6.scope.
Nov 25 18:10:14 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:10:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f29b1040fd847b6d2a9ddd0e6672e9ed4ec3a5d91c2b756538edc93f9e9491f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:10:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f29b1040fd847b6d2a9ddd0e6672e9ed4ec3a5d91c2b756538edc93f9e9491f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:10:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f29b1040fd847b6d2a9ddd0e6672e9ed4ec3a5d91c2b756538edc93f9e9491f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:10:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f29b1040fd847b6d2a9ddd0e6672e9ed4ec3a5d91c2b756538edc93f9e9491f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:10:14 compute-0 podman[475120]: 2025-11-25 18:10:14.872630023 +0000 UTC m=+0.226967209 container init 244ace2913f8da7ce4c5bc0104ec9e4d1526cbde7a5a702c02100d8d1d0dbca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 18:10:14 compute-0 podman[475120]: 2025-11-25 18:10:14.881983778 +0000 UTC m=+0.236320934 container start 244ace2913f8da7ce4c5bc0104ec9e4d1526cbde7a5a702c02100d8d1d0dbca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mestorf, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 18:10:14 compute-0 podman[475120]: 2025-11-25 18:10:14.903753248 +0000 UTC m=+0.258090414 container attach 244ace2913f8da7ce4c5bc0104ec9e4d1526cbde7a5a702c02100d8d1d0dbca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mestorf, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 18:10:15 compute-0 nova_compute[254092]: 2025-11-25 18:10:15.192 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:10:15 compute-0 adoring_mestorf[475136]: {
Nov 25 18:10:15 compute-0 adoring_mestorf[475136]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 18:10:15 compute-0 adoring_mestorf[475136]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:10:15 compute-0 adoring_mestorf[475136]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 18:10:15 compute-0 adoring_mestorf[475136]:         "osd_id": 1,
Nov 25 18:10:15 compute-0 adoring_mestorf[475136]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 18:10:15 compute-0 adoring_mestorf[475136]:         "type": "bluestore"
Nov 25 18:10:15 compute-0 adoring_mestorf[475136]:     },
Nov 25 18:10:15 compute-0 adoring_mestorf[475136]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 18:10:15 compute-0 adoring_mestorf[475136]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:10:15 compute-0 adoring_mestorf[475136]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 18:10:15 compute-0 adoring_mestorf[475136]:         "osd_id": 2,
Nov 25 18:10:15 compute-0 adoring_mestorf[475136]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 18:10:15 compute-0 adoring_mestorf[475136]:         "type": "bluestore"
Nov 25 18:10:15 compute-0 adoring_mestorf[475136]:     },
Nov 25 18:10:15 compute-0 adoring_mestorf[475136]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 18:10:15 compute-0 adoring_mestorf[475136]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:10:15 compute-0 adoring_mestorf[475136]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 18:10:15 compute-0 adoring_mestorf[475136]:         "osd_id": 0,
Nov 25 18:10:15 compute-0 adoring_mestorf[475136]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 18:10:15 compute-0 adoring_mestorf[475136]:         "type": "bluestore"
Nov 25 18:10:15 compute-0 adoring_mestorf[475136]:     }
Nov 25 18:10:15 compute-0 adoring_mestorf[475136]: }
Nov 25 18:10:15 compute-0 systemd[1]: libpod-244ace2913f8da7ce4c5bc0104ec9e4d1526cbde7a5a702c02100d8d1d0dbca6.scope: Deactivated successfully.
Nov 25 18:10:15 compute-0 podman[475120]: 2025-11-25 18:10:15.847221488 +0000 UTC m=+1.201558634 container died 244ace2913f8da7ce4c5bc0104ec9e4d1526cbde7a5a702c02100d8d1d0dbca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Nov 25 18:10:15 compute-0 ceph-mon[74985]: pgmap v4423: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-f29b1040fd847b6d2a9ddd0e6672e9ed4ec3a5d91c2b756538edc93f9e9491f6-merged.mount: Deactivated successfully.
Nov 25 18:10:16 compute-0 podman[475120]: 2025-11-25 18:10:16.11040803 +0000 UTC m=+1.464745216 container remove 244ace2913f8da7ce4c5bc0104ec9e4d1526cbde7a5a702c02100d8d1d0dbca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 18:10:16 compute-0 systemd[1]: libpod-conmon-244ace2913f8da7ce4c5bc0104ec9e4d1526cbde7a5a702c02100d8d1d0dbca6.scope: Deactivated successfully.
Nov 25 18:10:16 compute-0 sudo[475014]: pam_unix(sudo:session): session closed for user root
Nov 25 18:10:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:10:16 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:10:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:10:16 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:10:16 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev ae17b0c7-7c63-4d49-a6dd-c603eff77393 does not exist
Nov 25 18:10:16 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 1fb7cf95-e341-4bfa-98dd-e7f4fba4d181 does not exist
Nov 25 18:10:16 compute-0 sudo[475184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:10:16 compute-0 sudo[475184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:10:16 compute-0 sudo[475184]: pam_unix(sudo:session): session closed for user root
Nov 25 18:10:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:10:16 compute-0 sudo[475209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 18:10:16 compute-0 sudo[475209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:10:16 compute-0 sudo[475209]: pam_unix(sudo:session): session closed for user root
Nov 25 18:10:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4424: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:17 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:10:17 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:10:17 compute-0 ceph-mon[74985]: pgmap v4424: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:17 compute-0 nova_compute[254092]: 2025-11-25 18:10:17.943 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:10:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4425: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:19 compute-0 ceph-mon[74985]: pgmap v4425: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:20 compute-0 nova_compute[254092]: 2025-11-25 18:10:20.195 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:10:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4426: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:10:21 compute-0 ceph-mon[74985]: pgmap v4426: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4427: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:22 compute-0 nova_compute[254092]: 2025-11-25 18:10:22.947 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:10:23 compute-0 ceph-mon[74985]: pgmap v4427: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4428: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:25 compute-0 nova_compute[254092]: 2025-11-25 18:10:25.198 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:10:25 compute-0 ceph-mon[74985]: pgmap v4428: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:10:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4429: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:27 compute-0 nova_compute[254092]: 2025-11-25 18:10:27.508 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:10:27 compute-0 ceph-mon[74985]: pgmap v4429: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:27 compute-0 nova_compute[254092]: 2025-11-25 18:10:27.950 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:10:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4430: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:29 compute-0 nova_compute[254092]: 2025-11-25 18:10:29.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:10:29 compute-0 ceph-mon[74985]: pgmap v4430: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:30 compute-0 nova_compute[254092]: 2025-11-25 18:10:30.203 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:10:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4431: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:10:31 compute-0 ceph-mon[74985]: pgmap v4431: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4432: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:32 compute-0 podman[475235]: 2025-11-25 18:10:32.67254046 +0000 UTC m=+0.078863141 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 18:10:32 compute-0 podman[475236]: 2025-11-25 18:10:32.692958445 +0000 UTC m=+0.103076908 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 25 18:10:32 compute-0 podman[475237]: 2025-11-25 18:10:32.697524548 +0000 UTC m=+0.103630113 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:10:32 compute-0 nova_compute[254092]: 2025-11-25 18:10:32.951 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:10:33 compute-0 ceph-mon[74985]: pgmap v4432: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4433: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:35 compute-0 nova_compute[254092]: 2025-11-25 18:10:35.206 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:10:35 compute-0 nova_compute[254092]: 2025-11-25 18:10:35.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:10:35 compute-0 nova_compute[254092]: 2025-11-25 18:10:35.526 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 18:10:35 compute-0 nova_compute[254092]: 2025-11-25 18:10:35.526 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 18:10:35 compute-0 nova_compute[254092]: 2025-11-25 18:10:35.527 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:10:35 compute-0 nova_compute[254092]: 2025-11-25 18:10:35.527 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 18:10:35 compute-0 nova_compute[254092]: 2025-11-25 18:10:35.527 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 18:10:35 compute-0 ceph-mon[74985]: pgmap v4433: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:35 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 18:10:35 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1092278912' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:10:35 compute-0 nova_compute[254092]: 2025-11-25 18:10:35.951 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 18:10:36 compute-0 nova_compute[254092]: 2025-11-25 18:10:36.087 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 18:10:36 compute-0 nova_compute[254092]: 2025-11-25 18:10:36.088 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3599MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 18:10:36 compute-0 nova_compute[254092]: 2025-11-25 18:10:36.088 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 18:10:36 compute-0 nova_compute[254092]: 2025-11-25 18:10:36.088 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 18:10:36 compute-0 nova_compute[254092]: 2025-11-25 18:10:36.164 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 18:10:36 compute-0 nova_compute[254092]: 2025-11-25 18:10:36.165 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 18:10:36 compute-0 nova_compute[254092]: 2025-11-25 18:10:36.180 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 18:10:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:10:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4434: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 18:10:36 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4234527518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:10:36 compute-0 nova_compute[254092]: 2025-11-25 18:10:36.608 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 18:10:36 compute-0 nova_compute[254092]: 2025-11-25 18:10:36.614 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 18:10:36 compute-0 nova_compute[254092]: 2025-11-25 18:10:36.649 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 18:10:36 compute-0 nova_compute[254092]: 2025-11-25 18:10:36.651 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 18:10:36 compute-0 nova_compute[254092]: 2025-11-25 18:10:36.651 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:10:36 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1092278912' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:10:36 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4234527518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:10:37 compute-0 nova_compute[254092]: 2025-11-25 18:10:37.646 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:10:37 compute-0 nova_compute[254092]: 2025-11-25 18:10:37.646 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:10:37 compute-0 nova_compute[254092]: 2025-11-25 18:10:37.647 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 18:10:37 compute-0 nova_compute[254092]: 2025-11-25 18:10:37.647 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 18:10:37 compute-0 nova_compute[254092]: 2025-11-25 18:10:37.659 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 18:10:37 compute-0 nova_compute[254092]: 2025-11-25 18:10:37.659 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:10:37 compute-0 nova_compute[254092]: 2025-11-25 18:10:37.659 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:10:37 compute-0 ceph-mon[74985]: pgmap v4434: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:37 compute-0 nova_compute[254092]: 2025-11-25 18:10:37.952 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:10:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4435: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:39 compute-0 ceph-mon[74985]: pgmap v4435: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:10:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:10:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:10:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:10:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:10:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:10:40 compute-0 nova_compute[254092]: 2025-11-25 18:10:40.209 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:10:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_18:10:40
Nov 25 18:10:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 18:10:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 18:10:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'volumes', 'images', '.rgw.root', 'vms', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta']
Nov 25 18:10:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 18:10:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4436: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 18:10:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:10:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:10:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:10:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:10:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 18:10:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:10:41 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:10:41 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:10:41 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:10:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:10:41 compute-0 nova_compute[254092]: 2025-11-25 18:10:41.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:10:41 compute-0 nova_compute[254092]: 2025-11-25 18:10:41.495 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 18:10:41 compute-0 ceph-mon[74985]: pgmap v4436: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4437: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:42 compute-0 nova_compute[254092]: 2025-11-25 18:10:42.953 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:10:43 compute-0 ceph-mon[74985]: pgmap v4437: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4438: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:45 compute-0 ceph-mon[74985]: pgmap v4438: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:45 compute-0 nova_compute[254092]: 2025-11-25 18:10:45.212 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:10:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:10:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4439: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:47 compute-0 ceph-mon[74985]: pgmap v4439: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:47 compute-0 nova_compute[254092]: 2025-11-25 18:10:47.957 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:10:48 compute-0 nova_compute[254092]: 2025-11-25 18:10:48.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:10:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4440: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:48 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #216. Immutable memtables: 0.
Nov 25 18:10:48 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:10:48.763066) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 18:10:48 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 135] Flushing memtable with next log file: 216
Nov 25 18:10:48 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094248763145, "job": 135, "event": "flush_started", "num_memtables": 1, "num_entries": 1268, "num_deletes": 251, "total_data_size": 1990005, "memory_usage": 2022336, "flush_reason": "Manual Compaction"}
Nov 25 18:10:48 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 135] Level-0 flush table #217: started
Nov 25 18:10:48 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094248876825, "cf_name": "default", "job": 135, "event": "table_file_creation", "file_number": 217, "file_size": 1960571, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 89994, "largest_seqno": 91261, "table_properties": {"data_size": 1954473, "index_size": 3428, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12656, "raw_average_key_size": 19, "raw_value_size": 1942322, "raw_average_value_size": 3044, "num_data_blocks": 154, "num_entries": 638, "num_filter_entries": 638, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764094117, "oldest_key_time": 1764094117, "file_creation_time": 1764094248, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 217, "seqno_to_time_mapping": "N/A"}}
Nov 25 18:10:48 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 135] Flush lasted 113817 microseconds, and 5854 cpu microseconds.
Nov 25 18:10:48 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 18:10:48 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:10:48.876892) [db/flush_job.cc:967] [default] [JOB 135] Level-0 flush table #217: 1960571 bytes OK
Nov 25 18:10:48 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:10:48.876922) [db/memtable_list.cc:519] [default] Level-0 commit table #217 started
Nov 25 18:10:48 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:10:48.884969) [db/memtable_list.cc:722] [default] Level-0 commit table #217: memtable #1 done
Nov 25 18:10:48 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:10:48.885028) EVENT_LOG_v1 {"time_micros": 1764094248885014, "job": 135, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 18:10:48 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:10:48.885053) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 18:10:48 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 135] Try to delete WAL files size 1984286, prev total WAL file size 1984286, number of live WAL files 2.
Nov 25 18:10:48 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000213.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:10:48 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:10:48.886021) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730039303336' seq:72057594037927935, type:22 .. '7061786F730039323838' seq:0, type:0; will stop at (end)
Nov 25 18:10:48 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 136] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 18:10:48 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 135 Base level 0, inputs: [217(1914KB)], [215(10MB)]
Nov 25 18:10:48 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094248886100, "job": 136, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [217], "files_L6": [215], "score": -1, "input_data_size": 12601090, "oldest_snapshot_seqno": -1}
Nov 25 18:10:49 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 136] Generated table #218: 10274 keys, 10869408 bytes, temperature: kUnknown
Nov 25 18:10:49 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094249153259, "cf_name": "default", "job": 136, "event": "table_file_creation", "file_number": 218, "file_size": 10869408, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10807234, "index_size": 35373, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25733, "raw_key_size": 272672, "raw_average_key_size": 26, "raw_value_size": 10629907, "raw_average_value_size": 1034, "num_data_blocks": 1351, "num_entries": 10274, "num_filter_entries": 10274, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764094248, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 218, "seqno_to_time_mapping": "N/A"}}
Nov 25 18:10:49 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 18:10:49 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:10:49.153539) [db/compaction/compaction_job.cc:1663] [default] [JOB 136] Compacted 1@0 + 1@6 files to L6 => 10869408 bytes
Nov 25 18:10:49 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:10:49.171441) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 47.2 rd, 40.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 10.1 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(12.0) write-amplify(5.5) OK, records in: 10788, records dropped: 514 output_compression: NoCompression
Nov 25 18:10:49 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:10:49.171484) EVENT_LOG_v1 {"time_micros": 1764094249171469, "job": 136, "event": "compaction_finished", "compaction_time_micros": 267255, "compaction_time_cpu_micros": 35471, "output_level": 6, "num_output_files": 1, "total_output_size": 10869408, "num_input_records": 10788, "num_output_records": 10274, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 18:10:49 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000217.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:10:49 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094249172062, "job": 136, "event": "table_file_deletion", "file_number": 217}
Nov 25 18:10:49 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000215.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:10:49 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094249174134, "job": 136, "event": "table_file_deletion", "file_number": 215}
Nov 25 18:10:49 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:10:48.885881) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:10:49 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:10:49.174214) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:10:49 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:10:49.174219) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:10:49 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:10:49.174220) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:10:49 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:10:49.174222) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:10:49 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:10:49.174223) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:10:49 compute-0 ceph-mon[74985]: pgmap v4440: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:50 compute-0 nova_compute[254092]: 2025-11-25 18:10:50.216 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:10:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4441: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:10:51 compute-0 nova_compute[254092]: 2025-11-25 18:10:51.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:10:51 compute-0 ceph-mon[74985]: pgmap v4441: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4442: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 18:10:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:10:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 18:10:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:10:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 18:10:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:10:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:10:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:10:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:10:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:10:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 18:10:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:10:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 18:10:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:10:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:10:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:10:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 18:10:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:10:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 18:10:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:10:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:10:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:10:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 18:10:52 compute-0 nova_compute[254092]: 2025-11-25 18:10:52.958 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:10:53 compute-0 ceph-mon[74985]: pgmap v4442: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4443: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:55 compute-0 nova_compute[254092]: 2025-11-25 18:10:55.219 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:10:55 compute-0 ceph-mon[74985]: pgmap v4443: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:10:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4444: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:57 compute-0 ceph-mon[74985]: pgmap v4444: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:57 compute-0 nova_compute[254092]: 2025-11-25 18:10:57.959 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:10:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4445: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:10:58 compute-0 ceph-mon[74985]: pgmap v4445: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:00 compute-0 nova_compute[254092]: 2025-11-25 18:11:00.222 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:11:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4446: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:11:01 compute-0 ceph-mon[74985]: pgmap v4446: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4447: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:02 compute-0 nova_compute[254092]: 2025-11-25 18:11:02.961 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:11:03 compute-0 podman[475346]: 2025-11-25 18:11:03.636264254 +0000 UTC m=+0.056234896 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:11:03 compute-0 podman[475347]: 2025-11-25 18:11:03.656685549 +0000 UTC m=+0.061777488 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 18:11:03 compute-0 ceph-mon[74985]: pgmap v4447: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:03 compute-0 podman[475348]: 2025-11-25 18:11:03.700360354 +0000 UTC m=+0.103193161 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2)
Nov 25 18:11:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4448: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:05 compute-0 nova_compute[254092]: 2025-11-25 18:11:05.225 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:11:05 compute-0 ceph-mon[74985]: pgmap v4448: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:11:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4449: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:07 compute-0 ceph-mon[74985]: pgmap v4449: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:07 compute-0 nova_compute[254092]: 2025-11-25 18:11:07.962 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:11:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4450: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:09 compute-0 ceph-mon[74985]: pgmap v4450: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:11:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:11:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:11:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:11:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:11:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:11:10 compute-0 nova_compute[254092]: 2025-11-25 18:11:10.228 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:11:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4451: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:11:11 compute-0 ceph-mon[74985]: pgmap v4451: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4452: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:12 compute-0 nova_compute[254092]: 2025-11-25 18:11:12.969 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:11:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 18:11:13.720 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 18:11:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 18:11:13.720 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 18:11:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 18:11:13.720 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:11:13 compute-0 ceph-mon[74985]: pgmap v4452: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4453: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:15 compute-0 ceph-mon[74985]: pgmap v4453: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:15 compute-0 nova_compute[254092]: 2025-11-25 18:11:15.232 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:11:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:11:16 compute-0 sudo[475411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:11:16 compute-0 sudo[475411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:11:16 compute-0 sudo[475411]: pam_unix(sudo:session): session closed for user root
Nov 25 18:11:16 compute-0 sudo[475436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:11:16 compute-0 sudo[475436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:11:16 compute-0 sudo[475436]: pam_unix(sudo:session): session closed for user root
Nov 25 18:11:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4454: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:16 compute-0 sudo[475461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:11:16 compute-0 sudo[475461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:11:16 compute-0 sudo[475461]: pam_unix(sudo:session): session closed for user root
Nov 25 18:11:16 compute-0 sudo[475486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 25 18:11:16 compute-0 sudo[475486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:11:17 compute-0 podman[475581]: 2025-11-25 18:11:17.268322158 +0000 UTC m=+0.239552919 container exec 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Nov 25 18:11:17 compute-0 podman[475581]: 2025-11-25 18:11:17.363984941 +0000 UTC m=+0.335215672 container exec_died 5ca41d12605897cafb06012409eb0512d6f36d2a7c1e16c7ee44a2852b4c6b3b (image=quay.io/ceph/ceph:v18, name=ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:11:17 compute-0 ceph-mon[74985]: pgmap v4454: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:18 compute-0 nova_compute[254092]: 2025-11-25 18:11:18.013 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:11:18 compute-0 sudo[475486]: pam_unix(sudo:session): session closed for user root
Nov 25 18:11:18 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:11:18 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:11:18 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:11:18 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:11:18 compute-0 sudo[475739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:11:18 compute-0 sudo[475739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:11:18 compute-0 sudo[475739]: pam_unix(sudo:session): session closed for user root
Nov 25 18:11:18 compute-0 sudo[475764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:11:18 compute-0 sudo[475764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:11:18 compute-0 sudo[475764]: pam_unix(sudo:session): session closed for user root
Nov 25 18:11:18 compute-0 sudo[475789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:11:18 compute-0 sudo[475789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:11:18 compute-0 sudo[475789]: pam_unix(sudo:session): session closed for user root
Nov 25 18:11:18 compute-0 sudo[475814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 18:11:18 compute-0 sudo[475814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:11:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4455: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:18 compute-0 sudo[475814]: pam_unix(sudo:session): session closed for user root
Nov 25 18:11:18 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:11:18 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:11:18 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 18:11:18 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:11:18 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 18:11:18 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:11:18 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 3f9b0789-35e5-48e9-be4b-f53909fc8f47 does not exist
Nov 25 18:11:18 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 2852e686-8346-47be-836a-a5f7d68ab795 does not exist
Nov 25 18:11:18 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev c17a1d97-e91b-428d-83e6-a2701e9c9804 does not exist
Nov 25 18:11:18 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 18:11:18 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:11:18 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 18:11:18 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:11:18 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:11:18 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:11:18 compute-0 sudo[475870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:11:18 compute-0 sudo[475870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:11:18 compute-0 sudo[475870]: pam_unix(sudo:session): session closed for user root
Nov 25 18:11:19 compute-0 sudo[475895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:11:19 compute-0 sudo[475895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:11:19 compute-0 sudo[475895]: pam_unix(sudo:session): session closed for user root
Nov 25 18:11:19 compute-0 sudo[475920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:11:19 compute-0 sudo[475920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:11:19 compute-0 sudo[475920]: pam_unix(sudo:session): session closed for user root
Nov 25 18:11:19 compute-0 sudo[475945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 18:11:19 compute-0 sudo[475945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:11:19 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:11:19 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:11:19 compute-0 ceph-mon[74985]: pgmap v4455: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:19 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:11:19 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:11:19 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:11:19 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:11:19 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:11:19 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:11:19 compute-0 podman[476011]: 2025-11-25 18:11:19.417885686 +0000 UTC m=+0.038308934 container create 0cdc2f49c1eb26199c4749ed46e1e60448e6c2fe83e414eec31d73122e83d19f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 18:11:19 compute-0 systemd[1]: Started libpod-conmon-0cdc2f49c1eb26199c4749ed46e1e60448e6c2fe83e414eec31d73122e83d19f.scope.
Nov 25 18:11:19 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:11:19 compute-0 podman[476011]: 2025-11-25 18:11:19.401186802 +0000 UTC m=+0.021610070 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:11:19 compute-0 podman[476011]: 2025-11-25 18:11:19.504443801 +0000 UTC m=+0.124867079 container init 0cdc2f49c1eb26199c4749ed46e1e60448e6c2fe83e414eec31d73122e83d19f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:11:19 compute-0 podman[476011]: 2025-11-25 18:11:19.516108379 +0000 UTC m=+0.136531617 container start 0cdc2f49c1eb26199c4749ed46e1e60448e6c2fe83e414eec31d73122e83d19f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_cannon, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Nov 25 18:11:19 compute-0 podman[476011]: 2025-11-25 18:11:19.519083059 +0000 UTC m=+0.139506297 container attach 0cdc2f49c1eb26199c4749ed46e1e60448e6c2fe83e414eec31d73122e83d19f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:11:19 compute-0 naughty_cannon[476026]: 167 167
Nov 25 18:11:19 compute-0 systemd[1]: libpod-0cdc2f49c1eb26199c4749ed46e1e60448e6c2fe83e414eec31d73122e83d19f.scope: Deactivated successfully.
Nov 25 18:11:19 compute-0 podman[476011]: 2025-11-25 18:11:19.521521896 +0000 UTC m=+0.141945144 container died 0cdc2f49c1eb26199c4749ed46e1e60448e6c2fe83e414eec31d73122e83d19f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_cannon, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 18:11:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b6a9c6744e5d021d780aca84c06b39515842bde5dee5d7c73e555dd15575160-merged.mount: Deactivated successfully.
Nov 25 18:11:19 compute-0 podman[476011]: 2025-11-25 18:11:19.558951124 +0000 UTC m=+0.179374362 container remove 0cdc2f49c1eb26199c4749ed46e1e60448e6c2fe83e414eec31d73122e83d19f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 18:11:19 compute-0 systemd[1]: libpod-conmon-0cdc2f49c1eb26199c4749ed46e1e60448e6c2fe83e414eec31d73122e83d19f.scope: Deactivated successfully.
Nov 25 18:11:19 compute-0 podman[476048]: 2025-11-25 18:11:19.713725215 +0000 UTC m=+0.041509120 container create 7bbdc80d6287a790bf95c1c0e09d1a967403d379ff5d76e622985844ebea7695 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 18:11:19 compute-0 systemd[1]: Started libpod-conmon-7bbdc80d6287a790bf95c1c0e09d1a967403d379ff5d76e622985844ebea7695.scope.
Nov 25 18:11:19 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:11:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf5a5015e76cec4e787357452cb1b84f7befaee84ad6ae1d17eb2872ae735ff9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:11:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf5a5015e76cec4e787357452cb1b84f7befaee84ad6ae1d17eb2872ae735ff9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:11:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf5a5015e76cec4e787357452cb1b84f7befaee84ad6ae1d17eb2872ae735ff9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:11:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf5a5015e76cec4e787357452cb1b84f7befaee84ad6ae1d17eb2872ae735ff9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:11:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf5a5015e76cec4e787357452cb1b84f7befaee84ad6ae1d17eb2872ae735ff9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:11:19 compute-0 podman[476048]: 2025-11-25 18:11:19.783513804 +0000 UTC m=+0.111297699 container init 7bbdc80d6287a790bf95c1c0e09d1a967403d379ff5d76e622985844ebea7695 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ishizaka, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:11:19 compute-0 podman[476048]: 2025-11-25 18:11:19.791728318 +0000 UTC m=+0.119512233 container start 7bbdc80d6287a790bf95c1c0e09d1a967403d379ff5d76e622985844ebea7695 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:11:19 compute-0 podman[476048]: 2025-11-25 18:11:19.697980127 +0000 UTC m=+0.025764052 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:11:19 compute-0 podman[476048]: 2025-11-25 18:11:19.795422918 +0000 UTC m=+0.123206843 container attach 7bbdc80d6287a790bf95c1c0e09d1a967403d379ff5d76e622985844ebea7695 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ishizaka, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 18:11:20 compute-0 nova_compute[254092]: 2025-11-25 18:11:20.235 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:11:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4456: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:20 compute-0 nostalgic_ishizaka[476065]: --> passed data devices: 0 physical, 3 LVM
Nov 25 18:11:20 compute-0 nostalgic_ishizaka[476065]: --> relative data size: 1.0
Nov 25 18:11:20 compute-0 nostalgic_ishizaka[476065]: --> All data devices are unavailable
Nov 25 18:11:20 compute-0 systemd[1]: libpod-7bbdc80d6287a790bf95c1c0e09d1a967403d379ff5d76e622985844ebea7695.scope: Deactivated successfully.
Nov 25 18:11:20 compute-0 podman[476048]: 2025-11-25 18:11:20.795624934 +0000 UTC m=+1.123408839 container died 7bbdc80d6287a790bf95c1c0e09d1a967403d379ff5d76e622985844ebea7695 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ishizaka, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 18:11:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf5a5015e76cec4e787357452cb1b84f7befaee84ad6ae1d17eb2872ae735ff9-merged.mount: Deactivated successfully.
Nov 25 18:11:20 compute-0 podman[476048]: 2025-11-25 18:11:20.847508275 +0000 UTC m=+1.175292180 container remove 7bbdc80d6287a790bf95c1c0e09d1a967403d379ff5d76e622985844ebea7695 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ishizaka, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:11:20 compute-0 systemd[1]: libpod-conmon-7bbdc80d6287a790bf95c1c0e09d1a967403d379ff5d76e622985844ebea7695.scope: Deactivated successfully.
Nov 25 18:11:20 compute-0 sudo[475945]: pam_unix(sudo:session): session closed for user root
Nov 25 18:11:20 compute-0 sudo[476104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:11:20 compute-0 sudo[476104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:11:20 compute-0 sudo[476104]: pam_unix(sudo:session): session closed for user root
Nov 25 18:11:20 compute-0 sudo[476129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:11:20 compute-0 sudo[476129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:11:20 compute-0 sudo[476129]: pam_unix(sudo:session): session closed for user root
Nov 25 18:11:21 compute-0 sudo[476154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:11:21 compute-0 sudo[476154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:11:21 compute-0 sudo[476154]: pam_unix(sudo:session): session closed for user root
Nov 25 18:11:21 compute-0 sudo[476179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 18:11:21 compute-0 sudo[476179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:11:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:11:21 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #219. Immutable memtables: 0.
Nov 25 18:11:21 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:11:21.324001) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 18:11:21 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:856] [default] [JOB 137] Flushing memtable with next log file: 219
Nov 25 18:11:21 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094281324041, "job": 137, "event": "flush_started", "num_memtables": 1, "num_entries": 511, "num_deletes": 251, "total_data_size": 511026, "memory_usage": 520528, "flush_reason": "Manual Compaction"}
Nov 25 18:11:21 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:885] [default] [JOB 137] Level-0 flush table #220: started
Nov 25 18:11:21 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094281328201, "cf_name": "default", "job": 137, "event": "table_file_creation", "file_number": 220, "file_size": 507627, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 91262, "largest_seqno": 91772, "table_properties": {"data_size": 504673, "index_size": 925, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 5886, "raw_average_key_size": 16, "raw_value_size": 498914, "raw_average_value_size": 1389, "num_data_blocks": 41, "num_entries": 359, "num_filter_entries": 359, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764094249, "oldest_key_time": 1764094249, "file_creation_time": 1764094281, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 220, "seqno_to_time_mapping": "N/A"}}
Nov 25 18:11:21 compute-0 ceph-mon[74985]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 137] Flush lasted 4230 microseconds, and 1794 cpu microseconds.
Nov 25 18:11:21 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 18:11:21 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:11:21.328232) [db/flush_job.cc:967] [default] [JOB 137] Level-0 flush table #220: 507627 bytes OK
Nov 25 18:11:21 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:11:21.328247) [db/memtable_list.cc:519] [default] Level-0 commit table #220 started
Nov 25 18:11:21 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:11:21.329370) [db/memtable_list.cc:722] [default] Level-0 commit table #220: memtable #1 done
Nov 25 18:11:21 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:11:21.329383) EVENT_LOG_v1 {"time_micros": 1764094281329378, "job": 137, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 18:11:21 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:11:21.329397) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 18:11:21 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 137] Try to delete WAL files size 508062, prev total WAL file size 508062, number of live WAL files 2.
Nov 25 18:11:21 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000216.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:11:21 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:11:21.329738) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323530' seq:72057594037927935, type:22 .. '6B7600353032' seq:0, type:0; will stop at (end)
Nov 25 18:11:21 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 138] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 18:11:21 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 137 Base level 0, inputs: [220(495KB)], [218(10MB)]
Nov 25 18:11:21 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094281329776, "job": 138, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [220], "files_L6": [218], "score": -1, "input_data_size": 11377035, "oldest_snapshot_seqno": -1}
Nov 25 18:11:21 compute-0 ceph-mon[74985]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 138] Generated table #221: 10119 keys, 10360719 bytes, temperature: kUnknown
Nov 25 18:11:21 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094281396125, "cf_name": "default", "job": 138, "event": "table_file_creation", "file_number": 221, "file_size": 10360719, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10299785, "index_size": 34517, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25349, "raw_key_size": 271142, "raw_average_key_size": 26, "raw_value_size": 10125088, "raw_average_value_size": 1000, "num_data_blocks": 1299, "num_entries": 10119, "num_filter_entries": 10119, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764085893, "oldest_key_time": 0, "file_creation_time": 1764094281, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "649eb26e-5d3e-4279-9fd6-2380e7a57b81", "db_session_id": "9WO1M855NQQLW0TZFZN0", "orig_file_number": 221, "seqno_to_time_mapping": "N/A"}}
Nov 25 18:11:21 compute-0 ceph-mon[74985]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 18:11:21 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:11:21.397134) [db/compaction/compaction_job.cc:1663] [default] [JOB 138] Compacted 1@0 + 1@6 files to L6 => 10360719 bytes
Nov 25 18:11:21 compute-0 podman[476244]: 2025-11-25 18:11:21.39785719 +0000 UTC m=+0.048853441 container create a7ce71c44b1b7dd5c15878115c69f220cb6b9ab105bfd7d8c8602ac3907c1bcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_banzai, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 18:11:21 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:11:21.398147) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.4 rd, 154.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 10.4 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(42.8) write-amplify(20.4) OK, records in: 10633, records dropped: 514 output_compression: NoCompression
Nov 25 18:11:21 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:11:21.398179) EVENT_LOG_v1 {"time_micros": 1764094281398167, "job": 138, "event": "compaction_finished", "compaction_time_micros": 67146, "compaction_time_cpu_micros": 34503, "output_level": 6, "num_output_files": 1, "total_output_size": 10360719, "num_input_records": 10633, "num_output_records": 10119, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 18:11:21 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000220.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:11:21 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094281398349, "job": 138, "event": "table_file_deletion", "file_number": 220}
Nov 25 18:11:21 compute-0 ceph-mon[74985]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000218.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 18:11:21 compute-0 ceph-mon[74985]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764094281400124, "job": 138, "event": "table_file_deletion", "file_number": 218}
Nov 25 18:11:21 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:11:21.329681) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:11:21 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:11:21.400185) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:11:21 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:11:21.400191) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:11:21 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:11:21.400192) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:11:21 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:11:21.400194) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:11:21 compute-0 ceph-mon[74985]: rocksdb: (Original Log Time 2025/11/25-18:11:21.400195) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 18:11:21 compute-0 systemd[1]: Started libpod-conmon-a7ce71c44b1b7dd5c15878115c69f220cb6b9ab105bfd7d8c8602ac3907c1bcf.scope.
Nov 25 18:11:21 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:11:21 compute-0 podman[476244]: 2025-11-25 18:11:21.36812085 +0000 UTC m=+0.019117011 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:11:21 compute-0 podman[476244]: 2025-11-25 18:11:21.477275231 +0000 UTC m=+0.128271372 container init a7ce71c44b1b7dd5c15878115c69f220cb6b9ab105bfd7d8c8602ac3907c1bcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_banzai, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 18:11:21 compute-0 podman[476244]: 2025-11-25 18:11:21.484868337 +0000 UTC m=+0.135864468 container start a7ce71c44b1b7dd5c15878115c69f220cb6b9ab105bfd7d8c8602ac3907c1bcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 18:11:21 compute-0 podman[476244]: 2025-11-25 18:11:21.487749296 +0000 UTC m=+0.138745457 container attach a7ce71c44b1b7dd5c15878115c69f220cb6b9ab105bfd7d8c8602ac3907c1bcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:11:21 compute-0 gracious_banzai[476260]: 167 167
Nov 25 18:11:21 compute-0 systemd[1]: libpod-a7ce71c44b1b7dd5c15878115c69f220cb6b9ab105bfd7d8c8602ac3907c1bcf.scope: Deactivated successfully.
Nov 25 18:11:21 compute-0 podman[476244]: 2025-11-25 18:11:21.490504661 +0000 UTC m=+0.141500792 container died a7ce71c44b1b7dd5c15878115c69f220cb6b9ab105bfd7d8c8602ac3907c1bcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_banzai, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:11:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-d99070f003d0f7a269b1365556165201f6f36c4ab57151d373531c750b22a317-merged.mount: Deactivated successfully.
Nov 25 18:11:21 compute-0 podman[476244]: 2025-11-25 18:11:21.523941441 +0000 UTC m=+0.174937572 container remove a7ce71c44b1b7dd5c15878115c69f220cb6b9ab105bfd7d8c8602ac3907c1bcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_banzai, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:11:21 compute-0 systemd[1]: libpod-conmon-a7ce71c44b1b7dd5c15878115c69f220cb6b9ab105bfd7d8c8602ac3907c1bcf.scope: Deactivated successfully.
Nov 25 18:11:21 compute-0 ceph-mon[74985]: pgmap v4456: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:21 compute-0 podman[476283]: 2025-11-25 18:11:21.670389925 +0000 UTC m=+0.035983590 container create 56bc8463ce4bc6cd29b3e22570ec704f329f2061ab60d90a3c344f7a49831209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_curran, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 18:11:21 compute-0 systemd[1]: Started libpod-conmon-56bc8463ce4bc6cd29b3e22570ec704f329f2061ab60d90a3c344f7a49831209.scope.
Nov 25 18:11:21 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:11:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8dbc8ce893891ed194e2316b0f53ac16aaba2aaed307b18a67b5f10113f3e29/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:11:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8dbc8ce893891ed194e2316b0f53ac16aaba2aaed307b18a67b5f10113f3e29/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:11:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8dbc8ce893891ed194e2316b0f53ac16aaba2aaed307b18a67b5f10113f3e29/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:11:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8dbc8ce893891ed194e2316b0f53ac16aaba2aaed307b18a67b5f10113f3e29/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:11:21 compute-0 podman[476283]: 2025-11-25 18:11:21.731665052 +0000 UTC m=+0.097258717 container init 56bc8463ce4bc6cd29b3e22570ec704f329f2061ab60d90a3c344f7a49831209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_curran, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 18:11:21 compute-0 podman[476283]: 2025-11-25 18:11:21.739817764 +0000 UTC m=+0.105411429 container start 56bc8463ce4bc6cd29b3e22570ec704f329f2061ab60d90a3c344f7a49831209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:11:21 compute-0 podman[476283]: 2025-11-25 18:11:21.742925719 +0000 UTC m=+0.108519384 container attach 56bc8463ce4bc6cd29b3e22570ec704f329f2061ab60d90a3c344f7a49831209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 18:11:21 compute-0 podman[476283]: 2025-11-25 18:11:21.655032087 +0000 UTC m=+0.020625772 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:11:22 compute-0 eager_curran[476300]: {
Nov 25 18:11:22 compute-0 eager_curran[476300]:     "0": [
Nov 25 18:11:22 compute-0 eager_curran[476300]:         {
Nov 25 18:11:22 compute-0 eager_curran[476300]:             "devices": [
Nov 25 18:11:22 compute-0 eager_curran[476300]:                 "/dev/loop3"
Nov 25 18:11:22 compute-0 eager_curran[476300]:             ],
Nov 25 18:11:22 compute-0 eager_curran[476300]:             "lv_name": "ceph_lv0",
Nov 25 18:11:22 compute-0 eager_curran[476300]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:11:22 compute-0 eager_curran[476300]:             "lv_size": "21470642176",
Nov 25 18:11:22 compute-0 eager_curran[476300]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:11:22 compute-0 eager_curran[476300]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 18:11:22 compute-0 eager_curran[476300]:             "name": "ceph_lv0",
Nov 25 18:11:22 compute-0 eager_curran[476300]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:11:22 compute-0 eager_curran[476300]:             "tags": {
Nov 25 18:11:22 compute-0 eager_curran[476300]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:11:22 compute-0 eager_curran[476300]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 18:11:22 compute-0 eager_curran[476300]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 18:11:22 compute-0 eager_curran[476300]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:11:22 compute-0 eager_curran[476300]:                 "ceph.cluster_name": "ceph",
Nov 25 18:11:22 compute-0 eager_curran[476300]:                 "ceph.crush_device_class": "",
Nov 25 18:11:22 compute-0 eager_curran[476300]:                 "ceph.encrypted": "0",
Nov 25 18:11:22 compute-0 eager_curran[476300]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 18:11:22 compute-0 eager_curran[476300]:                 "ceph.osd_id": "0",
Nov 25 18:11:22 compute-0 eager_curran[476300]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:11:22 compute-0 eager_curran[476300]:                 "ceph.type": "block",
Nov 25 18:11:22 compute-0 eager_curran[476300]:                 "ceph.vdo": "0"
Nov 25 18:11:22 compute-0 eager_curran[476300]:             },
Nov 25 18:11:22 compute-0 eager_curran[476300]:             "type": "block",
Nov 25 18:11:22 compute-0 eager_curran[476300]:             "vg_name": "ceph_vg0"
Nov 25 18:11:22 compute-0 eager_curran[476300]:         }
Nov 25 18:11:22 compute-0 eager_curran[476300]:     ],
Nov 25 18:11:22 compute-0 eager_curran[476300]:     "1": [
Nov 25 18:11:22 compute-0 eager_curran[476300]:         {
Nov 25 18:11:22 compute-0 eager_curran[476300]:             "devices": [
Nov 25 18:11:22 compute-0 eager_curran[476300]:                 "/dev/loop4"
Nov 25 18:11:22 compute-0 eager_curran[476300]:             ],
Nov 25 18:11:22 compute-0 eager_curran[476300]:             "lv_name": "ceph_lv1",
Nov 25 18:11:22 compute-0 eager_curran[476300]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:11:22 compute-0 eager_curran[476300]:             "lv_size": "21470642176",
Nov 25 18:11:22 compute-0 eager_curran[476300]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:11:22 compute-0 eager_curran[476300]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 18:11:22 compute-0 eager_curran[476300]:             "name": "ceph_lv1",
Nov 25 18:11:22 compute-0 eager_curran[476300]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:11:22 compute-0 eager_curran[476300]:             "tags": {
Nov 25 18:11:22 compute-0 eager_curran[476300]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:11:22 compute-0 eager_curran[476300]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 18:11:22 compute-0 eager_curran[476300]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 18:11:22 compute-0 eager_curran[476300]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:11:22 compute-0 eager_curran[476300]:                 "ceph.cluster_name": "ceph",
Nov 25 18:11:22 compute-0 eager_curran[476300]:                 "ceph.crush_device_class": "",
Nov 25 18:11:22 compute-0 eager_curran[476300]:                 "ceph.encrypted": "0",
Nov 25 18:11:22 compute-0 eager_curran[476300]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 18:11:22 compute-0 eager_curran[476300]:                 "ceph.osd_id": "1",
Nov 25 18:11:22 compute-0 eager_curran[476300]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:11:22 compute-0 eager_curran[476300]:                 "ceph.type": "block",
Nov 25 18:11:22 compute-0 eager_curran[476300]:                 "ceph.vdo": "0"
Nov 25 18:11:22 compute-0 eager_curran[476300]:             },
Nov 25 18:11:22 compute-0 eager_curran[476300]:             "type": "block",
Nov 25 18:11:22 compute-0 eager_curran[476300]:             "vg_name": "ceph_vg1"
Nov 25 18:11:22 compute-0 eager_curran[476300]:         }
Nov 25 18:11:22 compute-0 eager_curran[476300]:     ],
Nov 25 18:11:22 compute-0 eager_curran[476300]:     "2": [
Nov 25 18:11:22 compute-0 eager_curran[476300]:         {
Nov 25 18:11:22 compute-0 eager_curran[476300]:             "devices": [
Nov 25 18:11:22 compute-0 eager_curran[476300]:                 "/dev/loop5"
Nov 25 18:11:22 compute-0 eager_curran[476300]:             ],
Nov 25 18:11:22 compute-0 eager_curran[476300]:             "lv_name": "ceph_lv2",
Nov 25 18:11:22 compute-0 eager_curran[476300]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:11:22 compute-0 eager_curran[476300]:             "lv_size": "21470642176",
Nov 25 18:11:22 compute-0 eager_curran[476300]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:11:22 compute-0 eager_curran[476300]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 18:11:22 compute-0 eager_curran[476300]:             "name": "ceph_lv2",
Nov 25 18:11:22 compute-0 eager_curran[476300]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:11:22 compute-0 eager_curran[476300]:             "tags": {
Nov 25 18:11:22 compute-0 eager_curran[476300]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:11:22 compute-0 eager_curran[476300]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 18:11:22 compute-0 eager_curran[476300]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 18:11:22 compute-0 eager_curran[476300]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:11:22 compute-0 eager_curran[476300]:                 "ceph.cluster_name": "ceph",
Nov 25 18:11:22 compute-0 eager_curran[476300]:                 "ceph.crush_device_class": "",
Nov 25 18:11:22 compute-0 eager_curran[476300]:                 "ceph.encrypted": "0",
Nov 25 18:11:22 compute-0 eager_curran[476300]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 18:11:22 compute-0 eager_curran[476300]:                 "ceph.osd_id": "2",
Nov 25 18:11:22 compute-0 eager_curran[476300]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:11:22 compute-0 eager_curran[476300]:                 "ceph.type": "block",
Nov 25 18:11:22 compute-0 eager_curran[476300]:                 "ceph.vdo": "0"
Nov 25 18:11:22 compute-0 eager_curran[476300]:             },
Nov 25 18:11:22 compute-0 eager_curran[476300]:             "type": "block",
Nov 25 18:11:22 compute-0 eager_curran[476300]:             "vg_name": "ceph_vg2"
Nov 25 18:11:22 compute-0 eager_curran[476300]:         }
Nov 25 18:11:22 compute-0 eager_curran[476300]:     ]
Nov 25 18:11:22 compute-0 eager_curran[476300]: }
Nov 25 18:11:22 compute-0 systemd[1]: libpod-56bc8463ce4bc6cd29b3e22570ec704f329f2061ab60d90a3c344f7a49831209.scope: Deactivated successfully.
Nov 25 18:11:22 compute-0 podman[476283]: 2025-11-25 18:11:22.481767201 +0000 UTC m=+0.847360866 container died 56bc8463ce4bc6cd29b3e22570ec704f329f2061ab60d90a3c344f7a49831209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 18:11:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8dbc8ce893891ed194e2316b0f53ac16aaba2aaed307b18a67b5f10113f3e29-merged.mount: Deactivated successfully.
Nov 25 18:11:22 compute-0 podman[476283]: 2025-11-25 18:11:22.531071323 +0000 UTC m=+0.896664988 container remove 56bc8463ce4bc6cd29b3e22570ec704f329f2061ab60d90a3c344f7a49831209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_curran, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 18:11:22 compute-0 systemd[1]: libpod-conmon-56bc8463ce4bc6cd29b3e22570ec704f329f2061ab60d90a3c344f7a49831209.scope: Deactivated successfully.
Nov 25 18:11:22 compute-0 sudo[476179]: pam_unix(sudo:session): session closed for user root
Nov 25 18:11:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4457: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:22 compute-0 sudo[476322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:11:22 compute-0 sudo[476322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:11:22 compute-0 sudo[476322]: pam_unix(sudo:session): session closed for user root
Nov 25 18:11:22 compute-0 sudo[476347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:11:22 compute-0 sudo[476347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:11:22 compute-0 sudo[476347]: pam_unix(sudo:session): session closed for user root
Nov 25 18:11:22 compute-0 sudo[476372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:11:22 compute-0 sudo[476372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:11:22 compute-0 sudo[476372]: pam_unix(sudo:session): session closed for user root
Nov 25 18:11:22 compute-0 sudo[476397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 18:11:22 compute-0 sudo[476397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:11:23 compute-0 nova_compute[254092]: 2025-11-25 18:11:23.015 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:11:23 compute-0 podman[476463]: 2025-11-25 18:11:23.05096948 +0000 UTC m=+0.035544599 container create 9182db46f5ef328eec954358590a60f0627e62e58050627b40bd7a1398d0d4ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:11:23 compute-0 systemd[1]: Started libpod-conmon-9182db46f5ef328eec954358590a60f0627e62e58050627b40bd7a1398d0d4ab.scope.
Nov 25 18:11:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:11:23 compute-0 podman[476463]: 2025-11-25 18:11:23.123258826 +0000 UTC m=+0.107833975 container init 9182db46f5ef328eec954358590a60f0627e62e58050627b40bd7a1398d0d4ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:11:23 compute-0 podman[476463]: 2025-11-25 18:11:23.12999111 +0000 UTC m=+0.114566239 container start 9182db46f5ef328eec954358590a60f0627e62e58050627b40bd7a1398d0d4ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 18:11:23 compute-0 podman[476463]: 2025-11-25 18:11:23.03482983 +0000 UTC m=+0.019404969 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:11:23 compute-0 podman[476463]: 2025-11-25 18:11:23.133014382 +0000 UTC m=+0.117589511 container attach 9182db46f5ef328eec954358590a60f0627e62e58050627b40bd7a1398d0d4ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:11:23 compute-0 silly_jones[476480]: 167 167
Nov 25 18:11:23 compute-0 systemd[1]: libpod-9182db46f5ef328eec954358590a60f0627e62e58050627b40bd7a1398d0d4ab.scope: Deactivated successfully.
Nov 25 18:11:23 compute-0 podman[476463]: 2025-11-25 18:11:23.135851279 +0000 UTC m=+0.120426408 container died 9182db46f5ef328eec954358590a60f0627e62e58050627b40bd7a1398d0d4ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jones, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:11:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-da8e9da3f0a1358d316ae2895f9fcdcdfde6a34bbc77ad74cf58f418d9ed8f97-merged.mount: Deactivated successfully.
Nov 25 18:11:23 compute-0 podman[476463]: 2025-11-25 18:11:23.170365688 +0000 UTC m=+0.154940817 container remove 9182db46f5ef328eec954358590a60f0627e62e58050627b40bd7a1398d0d4ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jones, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 18:11:23 compute-0 systemd[1]: libpod-conmon-9182db46f5ef328eec954358590a60f0627e62e58050627b40bd7a1398d0d4ab.scope: Deactivated successfully.
Nov 25 18:11:23 compute-0 podman[476502]: 2025-11-25 18:11:23.320248057 +0000 UTC m=+0.039084595 container create 21ea15c6a96c6c6f652301aad66cacd6768e9054cb0570bb28f4c50709d9e74e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_yalow, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:11:23 compute-0 systemd[1]: Started libpod-conmon-21ea15c6a96c6c6f652301aad66cacd6768e9054cb0570bb28f4c50709d9e74e.scope.
Nov 25 18:11:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:11:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2d4c68cb59a0edb4b55a5729d53c9dcb593a967e92b78d06b7525d46acdc742/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:11:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2d4c68cb59a0edb4b55a5729d53c9dcb593a967e92b78d06b7525d46acdc742/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:11:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2d4c68cb59a0edb4b55a5729d53c9dcb593a967e92b78d06b7525d46acdc742/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:11:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2d4c68cb59a0edb4b55a5729d53c9dcb593a967e92b78d06b7525d46acdc742/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:11:23 compute-0 podman[476502]: 2025-11-25 18:11:23.396219034 +0000 UTC m=+0.115055602 container init 21ea15c6a96c6c6f652301aad66cacd6768e9054cb0570bb28f4c50709d9e74e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_yalow, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 18:11:23 compute-0 podman[476502]: 2025-11-25 18:11:23.302986117 +0000 UTC m=+0.021822695 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:11:23 compute-0 podman[476502]: 2025-11-25 18:11:23.402664529 +0000 UTC m=+0.121501067 container start 21ea15c6a96c6c6f652301aad66cacd6768e9054cb0570bb28f4c50709d9e74e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:11:23 compute-0 podman[476502]: 2025-11-25 18:11:23.405563918 +0000 UTC m=+0.124400466 container attach 21ea15c6a96c6c6f652301aad66cacd6768e9054cb0570bb28f4c50709d9e74e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_yalow, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 18:11:23 compute-0 ceph-mon[74985]: pgmap v4457: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:24 compute-0 vigorous_yalow[476518]: {
Nov 25 18:11:24 compute-0 vigorous_yalow[476518]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 18:11:24 compute-0 vigorous_yalow[476518]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:11:24 compute-0 vigorous_yalow[476518]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 18:11:24 compute-0 vigorous_yalow[476518]:         "osd_id": 1,
Nov 25 18:11:24 compute-0 vigorous_yalow[476518]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 18:11:24 compute-0 vigorous_yalow[476518]:         "type": "bluestore"
Nov 25 18:11:24 compute-0 vigorous_yalow[476518]:     },
Nov 25 18:11:24 compute-0 vigorous_yalow[476518]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 18:11:24 compute-0 vigorous_yalow[476518]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:11:24 compute-0 vigorous_yalow[476518]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 18:11:24 compute-0 vigorous_yalow[476518]:         "osd_id": 2,
Nov 25 18:11:24 compute-0 vigorous_yalow[476518]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 18:11:24 compute-0 vigorous_yalow[476518]:         "type": "bluestore"
Nov 25 18:11:24 compute-0 vigorous_yalow[476518]:     },
Nov 25 18:11:24 compute-0 vigorous_yalow[476518]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 18:11:24 compute-0 vigorous_yalow[476518]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:11:24 compute-0 vigorous_yalow[476518]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 18:11:24 compute-0 vigorous_yalow[476518]:         "osd_id": 0,
Nov 25 18:11:24 compute-0 vigorous_yalow[476518]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 18:11:24 compute-0 vigorous_yalow[476518]:         "type": "bluestore"
Nov 25 18:11:24 compute-0 vigorous_yalow[476518]:     }
Nov 25 18:11:24 compute-0 vigorous_yalow[476518]: }
Nov 25 18:11:24 compute-0 systemd[1]: libpod-21ea15c6a96c6c6f652301aad66cacd6768e9054cb0570bb28f4c50709d9e74e.scope: Deactivated successfully.
Nov 25 18:11:24 compute-0 podman[476502]: 2025-11-25 18:11:24.332739715 +0000 UTC m=+1.051576253 container died 21ea15c6a96c6c6f652301aad66cacd6768e9054cb0570bb28f4c50709d9e74e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:11:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2d4c68cb59a0edb4b55a5729d53c9dcb593a967e92b78d06b7525d46acdc742-merged.mount: Deactivated successfully.
Nov 25 18:11:24 compute-0 podman[476502]: 2025-11-25 18:11:24.380898115 +0000 UTC m=+1.099734663 container remove 21ea15c6a96c6c6f652301aad66cacd6768e9054cb0570bb28f4c50709d9e74e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_yalow, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 18:11:24 compute-0 systemd[1]: libpod-conmon-21ea15c6a96c6c6f652301aad66cacd6768e9054cb0570bb28f4c50709d9e74e.scope: Deactivated successfully.
Nov 25 18:11:24 compute-0 sudo[476397]: pam_unix(sudo:session): session closed for user root
Nov 25 18:11:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:11:24 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:11:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:11:24 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:11:24 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 87a1cd80-0b93-4f3f-8d52-9b6f465333d0 does not exist
Nov 25 18:11:24 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev ba4cbcd4-1145-43c4-b379-91b21880834f does not exist
Nov 25 18:11:24 compute-0 sudo[476564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:11:24 compute-0 sudo[476564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:11:24 compute-0 sudo[476564]: pam_unix(sudo:session): session closed for user root
Nov 25 18:11:24 compute-0 sudo[476589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 18:11:24 compute-0 sudo[476589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:11:24 compute-0 sudo[476589]: pam_unix(sudo:session): session closed for user root
Nov 25 18:11:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4458: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:25 compute-0 nova_compute[254092]: 2025-11-25 18:11:25.238 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:11:25 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:11:25 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:11:25 compute-0 ceph-mon[74985]: pgmap v4458: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:11:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4459: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:27 compute-0 ceph-mon[74985]: pgmap v4459: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:28 compute-0 nova_compute[254092]: 2025-11-25 18:11:28.016 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:11:28 compute-0 nova_compute[254092]: 2025-11-25 18:11:28.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:11:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4460: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:29 compute-0 ceph-mon[74985]: pgmap v4460: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:30 compute-0 nova_compute[254092]: 2025-11-25 18:11:30.243 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:11:30 compute-0 nova_compute[254092]: 2025-11-25 18:11:30.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:11:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4461: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:11:31 compute-0 ceph-mon[74985]: pgmap v4461: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4462: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:33 compute-0 nova_compute[254092]: 2025-11-25 18:11:33.018 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:11:33 compute-0 ceph-mon[74985]: pgmap v4462: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4463: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:34 compute-0 podman[476615]: 2025-11-25 18:11:34.668471073 +0000 UTC m=+0.071188419 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true)
Nov 25 18:11:34 compute-0 podman[476614]: 2025-11-25 18:11:34.669972273 +0000 UTC m=+0.081261892 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Nov 25 18:11:34 compute-0 podman[476616]: 2025-11-25 18:11:34.769577044 +0000 UTC m=+0.170452799 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 18:11:35 compute-0 nova_compute[254092]: 2025-11-25 18:11:35.244 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:11:35 compute-0 ceph-mon[74985]: pgmap v4463: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:11:36 compute-0 nova_compute[254092]: 2025-11-25 18:11:36.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:11:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4464: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:37 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 18:11:37 compute-0 ceph-mon[74985]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 8400.1 total, 600.0 interval
                                           Cumulative writes: 19K writes, 91K keys, 19K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.01 MB/s
                                           Cumulative WAL: 19K writes, 19K syncs, 1.00 writes per sync, written: 0.12 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1296 writes, 6150 keys, 1296 commit groups, 1.0 writes per commit group, ingest: 8.58 MB, 0.01 MB/s
                                           Interval WAL: 1296 writes, 1296 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     40.5      2.76              0.40        69    0.040       0      0       0.0       0.0
                                             L6      1/0    9.88 MB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   5.4    120.2    102.8      5.86              1.90        68    0.086    526K    36K       0.0       0.0
                                            Sum      1/0    9.88 MB   0.0      0.7     0.1      0.6       0.7      0.1       0.0   6.4     81.7     82.8      8.62              2.30       137    0.063    526K    36K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.9     59.9     59.8      1.19              0.25        12    0.099     64K   3081       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   0.0    120.2    102.8      5.86              1.90        68    0.086    526K    36K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     41.2      2.72              0.40        68    0.040       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.1      0.05              0.00         1    0.047       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 8400.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.109, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.70 GB write, 0.09 MB/s write, 0.69 GB read, 0.08 MB/s read, 8.6 seconds
                                           Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 1.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x563cc83831f0#2 capacity: 304.00 MB usage: 79.65 MB table_size: 0 occupancy: 18446744073709551615 collections: 15 last_copies: 0 last_secs: 0.000458 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(5090,76.03 MB,25.0108%) FilterBlock(138,1.39 MB,0.458742%) IndexBlock(138,2.22 MB,0.730168%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 25 18:11:37 compute-0 nova_compute[254092]: 2025-11-25 18:11:37.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:11:37 compute-0 nova_compute[254092]: 2025-11-25 18:11:37.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:11:37 compute-0 nova_compute[254092]: 2025-11-25 18:11:37.520 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 18:11:37 compute-0 nova_compute[254092]: 2025-11-25 18:11:37.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 18:11:37 compute-0 nova_compute[254092]: 2025-11-25 18:11:37.521 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:11:37 compute-0 nova_compute[254092]: 2025-11-25 18:11:37.521 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 18:11:37 compute-0 nova_compute[254092]: 2025-11-25 18:11:37.522 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 18:11:37 compute-0 ceph-mon[74985]: pgmap v4464: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:37 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 18:11:37 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1221870468' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:11:37 compute-0 nova_compute[254092]: 2025-11-25 18:11:37.997 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 18:11:38 compute-0 nova_compute[254092]: 2025-11-25 18:11:38.093 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:11:38 compute-0 nova_compute[254092]: 2025-11-25 18:11:38.213 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 18:11:38 compute-0 nova_compute[254092]: 2025-11-25 18:11:38.214 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3606MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 18:11:38 compute-0 nova_compute[254092]: 2025-11-25 18:11:38.214 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 18:11:38 compute-0 nova_compute[254092]: 2025-11-25 18:11:38.214 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 18:11:38 compute-0 nova_compute[254092]: 2025-11-25 18:11:38.271 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 18:11:38 compute-0 nova_compute[254092]: 2025-11-25 18:11:38.272 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 18:11:38 compute-0 nova_compute[254092]: 2025-11-25 18:11:38.285 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing inventories for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 18:11:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4465: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:38 compute-0 nova_compute[254092]: 2025-11-25 18:11:38.722 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating ProviderTree inventory for provider 4f066da7-306c-41d7-8522-9a9189cacc49 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 18:11:38 compute-0 nova_compute[254092]: 2025-11-25 18:11:38.723 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Updating inventory in ProviderTree for provider 4f066da7-306c-41d7-8522-9a9189cacc49 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 18:11:38 compute-0 nova_compute[254092]: 2025-11-25 18:11:38.799 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing aggregate associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 18:11:38 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1221870468' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:11:38 compute-0 nova_compute[254092]: 2025-11-25 18:11:38.830 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Refreshing trait associations for resource provider 4f066da7-306c-41d7-8522-9a9189cacc49, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE42,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 18:11:38 compute-0 nova_compute[254092]: 2025-11-25 18:11:38.844 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 18:11:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 18:11:39 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4291437619' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:11:39 compute-0 nova_compute[254092]: 2025-11-25 18:11:39.253 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 18:11:39 compute-0 nova_compute[254092]: 2025-11-25 18:11:39.260 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 18:11:39 compute-0 nova_compute[254092]: 2025-11-25 18:11:39.273 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 18:11:39 compute-0 nova_compute[254092]: 2025-11-25 18:11:39.275 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 18:11:39 compute-0 nova_compute[254092]: 2025-11-25 18:11:39.275 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.060s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:11:39 compute-0 ceph-mon[74985]: pgmap v4465: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:39 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4291437619' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:11:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:11:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:11:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:11:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:11:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:11:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:11:40 compute-0 nova_compute[254092]: 2025-11-25 18:11:40.246 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:11:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_18:11:40
Nov 25 18:11:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 18:11:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 18:11:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', 'default.rgw.log', 'vms', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', 'volumes', 'default.rgw.meta', '.mgr']
Nov 25 18:11:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 18:11:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4466: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 18:11:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:11:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:11:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:11:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:11:41 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 18:11:41 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:11:41 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:11:41 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:11:41 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:11:41 compute-0 nova_compute[254092]: 2025-11-25 18:11:41.276 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:11:41 compute-0 nova_compute[254092]: 2025-11-25 18:11:41.276 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 18:11:41 compute-0 nova_compute[254092]: 2025-11-25 18:11:41.276 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 18:11:41 compute-0 nova_compute[254092]: 2025-11-25 18:11:41.299 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 18:11:41 compute-0 nova_compute[254092]: 2025-11-25 18:11:41.299 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:11:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:11:41 compute-0 ceph-mon[74985]: pgmap v4466: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4467: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:43 compute-0 nova_compute[254092]: 2025-11-25 18:11:43.094 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:11:43 compute-0 nova_compute[254092]: 2025-11-25 18:11:43.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:11:43 compute-0 nova_compute[254092]: 2025-11-25 18:11:43.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 18:11:43 compute-0 ceph-mon[74985]: pgmap v4467: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4468: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:44 compute-0 ceph-mon[74985]: pgmap v4468: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:45 compute-0 nova_compute[254092]: 2025-11-25 18:11:45.249 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:11:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:11:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4469: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:47 compute-0 ceph-mon[74985]: pgmap v4469: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:48 compute-0 nova_compute[254092]: 2025-11-25 18:11:48.096 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:11:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4470: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:49 compute-0 ceph-mon[74985]: pgmap v4470: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:50 compute-0 nova_compute[254092]: 2025-11-25 18:11:50.253 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:11:50 compute-0 nova_compute[254092]: 2025-11-25 18:11:50.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:11:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4471: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:11:51 compute-0 ceph-mon[74985]: pgmap v4471: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4472: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 18:11:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:11:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 18:11:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:11:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 18:11:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:11:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:11:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:11:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:11:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:11:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 18:11:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:11:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 18:11:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:11:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:11:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:11:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 18:11:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:11:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 18:11:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:11:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:11:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:11:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 18:11:53 compute-0 nova_compute[254092]: 2025-11-25 18:11:53.098 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:11:53 compute-0 ceph-mon[74985]: pgmap v4472: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4473: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:55 compute-0 nova_compute[254092]: 2025-11-25 18:11:55.255 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:11:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 18:11:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3157365552' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 18:11:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 18:11:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3157365552' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 18:11:55 compute-0 ceph-mon[74985]: pgmap v4473: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/3157365552' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 18:11:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/3157365552' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 18:11:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:11:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4474: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:57 compute-0 ceph-mon[74985]: pgmap v4474: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:58 compute-0 nova_compute[254092]: 2025-11-25 18:11:58.099 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:11:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4475: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:11:59 compute-0 ceph-mon[74985]: pgmap v4475: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:00 compute-0 nova_compute[254092]: 2025-11-25 18:12:00.257 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:12:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4476: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:12:01 compute-0 ceph-mon[74985]: pgmap v4476: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4477: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:03 compute-0 nova_compute[254092]: 2025-11-25 18:12:03.145 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:12:03 compute-0 ceph-mon[74985]: pgmap v4477: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4478: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:05 compute-0 nova_compute[254092]: 2025-11-25 18:12:05.261 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:12:05 compute-0 podman[476723]: 2025-11-25 18:12:05.640697692 +0000 UTC m=+0.052097829 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 18:12:05 compute-0 podman[476722]: 2025-11-25 18:12:05.654050084 +0000 UTC m=+0.061590546 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 18:12:05 compute-0 podman[476724]: 2025-11-25 18:12:05.701358482 +0000 UTC m=+0.094753789 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 18:12:05 compute-0 ceph-mon[74985]: pgmap v4478: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:12:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4479: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:07 compute-0 ceph-mon[74985]: pgmap v4479: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:08 compute-0 nova_compute[254092]: 2025-11-25 18:12:08.147 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:12:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4480: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:09 compute-0 ceph-mon[74985]: pgmap v4480: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:12:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:12:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:12:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:12:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:12:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:12:10 compute-0 nova_compute[254092]: 2025-11-25 18:12:10.264 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:12:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4481: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:12:11 compute-0 ceph-mon[74985]: pgmap v4481: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4482: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:13 compute-0 nova_compute[254092]: 2025-11-25 18:12:13.148 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:12:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 18:12:13.721 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 18:12:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 18:12:13.721 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 18:12:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 18:12:13.721 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:12:13 compute-0 ceph-mon[74985]: pgmap v4482: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4483: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:15 compute-0 nova_compute[254092]: 2025-11-25 18:12:15.268 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:12:15 compute-0 ceph-mon[74985]: pgmap v4483: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:12:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4484: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:17 compute-0 ceph-mon[74985]: pgmap v4484: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:18 compute-0 nova_compute[254092]: 2025-11-25 18:12:18.148 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:12:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4485: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:19 compute-0 ceph-mon[74985]: pgmap v4485: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:20 compute-0 nova_compute[254092]: 2025-11-25 18:12:20.271 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:12:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4486: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:12:21 compute-0 ceph-mon[74985]: pgmap v4486: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4487: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:23 compute-0 nova_compute[254092]: 2025-11-25 18:12:23.150 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:12:23 compute-0 ceph-mon[74985]: pgmap v4487: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4488: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:24 compute-0 sudo[476787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:12:24 compute-0 sudo[476787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:12:24 compute-0 sudo[476787]: pam_unix(sudo:session): session closed for user root
Nov 25 18:12:24 compute-0 sudo[476812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:12:24 compute-0 sudo[476812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:12:24 compute-0 sudo[476812]: pam_unix(sudo:session): session closed for user root
Nov 25 18:12:24 compute-0 sudo[476837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:12:24 compute-0 sudo[476837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:12:24 compute-0 sudo[476837]: pam_unix(sudo:session): session closed for user root
Nov 25 18:12:24 compute-0 sudo[476862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 18:12:24 compute-0 sudo[476862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:12:25 compute-0 sudo[476862]: pam_unix(sudo:session): session closed for user root
Nov 25 18:12:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:12:25 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:12:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 18:12:25 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:12:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 18:12:25 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:12:25 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev ff0c8eb1-2831-4c4a-b717-3454a8ee23fb does not exist
Nov 25 18:12:25 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev e49c5826-e3a1-4a46-bcb6-e2511b11090f does not exist
Nov 25 18:12:25 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev c0cc8b11-0e75-48c0-a1b3-2d832e456cbf does not exist
Nov 25 18:12:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 18:12:25 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:12:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 18:12:25 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:12:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:12:25 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:12:25 compute-0 nova_compute[254092]: 2025-11-25 18:12:25.307 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:12:25 compute-0 sudo[476917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:12:25 compute-0 sudo[476917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:12:25 compute-0 sudo[476917]: pam_unix(sudo:session): session closed for user root
Nov 25 18:12:25 compute-0 sudo[476942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:12:25 compute-0 sudo[476942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:12:25 compute-0 sudo[476942]: pam_unix(sudo:session): session closed for user root
Nov 25 18:12:25 compute-0 sudo[476967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:12:25 compute-0 sudo[476967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:12:25 compute-0 sudo[476967]: pam_unix(sudo:session): session closed for user root
Nov 25 18:12:25 compute-0 sudo[476992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 18:12:25 compute-0 sudo[476992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:12:25 compute-0 podman[477057]: 2025-11-25 18:12:25.851993283 +0000 UTC m=+0.036968557 container create 63ccea8b3d439c86be5d29e5959a37a79483967fd2580449c29ddf7b0220d868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 18:12:25 compute-0 systemd[1]: Started libpod-conmon-63ccea8b3d439c86be5d29e5959a37a79483967fd2580449c29ddf7b0220d868.scope.
Nov 25 18:12:25 compute-0 ceph-mon[74985]: pgmap v4488: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:25 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:12:25 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:12:25 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:12:25 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:12:25 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:12:25 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:12:25 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:12:25 compute-0 podman[477057]: 2025-11-25 18:12:25.835569727 +0000 UTC m=+0.020545111 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:12:25 compute-0 podman[477057]: 2025-11-25 18:12:25.942001573 +0000 UTC m=+0.126976867 container init 63ccea8b3d439c86be5d29e5959a37a79483967fd2580449c29ddf7b0220d868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_dijkstra, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:12:25 compute-0 podman[477057]: 2025-11-25 18:12:25.948791047 +0000 UTC m=+0.133766321 container start 63ccea8b3d439c86be5d29e5959a37a79483967fd2580449c29ddf7b0220d868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:12:25 compute-0 podman[477057]: 2025-11-25 18:12:25.95182906 +0000 UTC m=+0.136804344 container attach 63ccea8b3d439c86be5d29e5959a37a79483967fd2580449c29ddf7b0220d868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_dijkstra, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Nov 25 18:12:25 compute-0 angry_dijkstra[477073]: 167 167
Nov 25 18:12:25 compute-0 systemd[1]: libpod-63ccea8b3d439c86be5d29e5959a37a79483967fd2580449c29ddf7b0220d868.scope: Deactivated successfully.
Nov 25 18:12:25 compute-0 podman[477057]: 2025-11-25 18:12:25.954933524 +0000 UTC m=+0.139908798 container died 63ccea8b3d439c86be5d29e5959a37a79483967fd2580449c29ddf7b0220d868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_dijkstra, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 18:12:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-1125b769e919610d0ddd2fbc5edf09cd6f68f035850719f0917eec825e691bdf-merged.mount: Deactivated successfully.
Nov 25 18:12:25 compute-0 podman[477057]: 2025-11-25 18:12:25.991835308 +0000 UTC m=+0.176810582 container remove 63ccea8b3d439c86be5d29e5959a37a79483967fd2580449c29ddf7b0220d868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_dijkstra, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 18:12:25 compute-0 systemd[1]: libpod-conmon-63ccea8b3d439c86be5d29e5959a37a79483967fd2580449c29ddf7b0220d868.scope: Deactivated successfully.
Nov 25 18:12:26 compute-0 podman[477097]: 2025-11-25 18:12:26.177314735 +0000 UTC m=+0.040096182 container create 05e920da6312c8f6b35deb6e26900277bf6830b33d551df6bb44075a53355aae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cori, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 18:12:26 compute-0 systemd[1]: Started libpod-conmon-05e920da6312c8f6b35deb6e26900277bf6830b33d551df6bb44075a53355aae.scope.
Nov 25 18:12:26 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:12:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5231ac02ee434c5a949b3669e4667ff7acf0e40925405040325fdf725bd765b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:12:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5231ac02ee434c5a949b3669e4667ff7acf0e40925405040325fdf725bd765b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:12:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5231ac02ee434c5a949b3669e4667ff7acf0e40925405040325fdf725bd765b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:12:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5231ac02ee434c5a949b3669e4667ff7acf0e40925405040325fdf725bd765b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:12:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5231ac02ee434c5a949b3669e4667ff7acf0e40925405040325fdf725bd765b2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:12:26 compute-0 podman[477097]: 2025-11-25 18:12:26.162022069 +0000 UTC m=+0.024803536 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:12:26 compute-0 podman[477097]: 2025-11-25 18:12:26.259353227 +0000 UTC m=+0.122134684 container init 05e920da6312c8f6b35deb6e26900277bf6830b33d551df6bb44075a53355aae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cori, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 18:12:26 compute-0 podman[477097]: 2025-11-25 18:12:26.269787892 +0000 UTC m=+0.132569339 container start 05e920da6312c8f6b35deb6e26900277bf6830b33d551df6bb44075a53355aae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 18:12:26 compute-0 podman[477097]: 2025-11-25 18:12:26.273301727 +0000 UTC m=+0.136083194 container attach 05e920da6312c8f6b35deb6e26900277bf6830b33d551df6bb44075a53355aae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cori, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 18:12:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:12:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4489: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:27 compute-0 quizzical_cori[477114]: --> passed data devices: 0 physical, 3 LVM
Nov 25 18:12:27 compute-0 quizzical_cori[477114]: --> relative data size: 1.0
Nov 25 18:12:27 compute-0 quizzical_cori[477114]: --> All data devices are unavailable
Nov 25 18:12:27 compute-0 systemd[1]: libpod-05e920da6312c8f6b35deb6e26900277bf6830b33d551df6bb44075a53355aae.scope: Deactivated successfully.
Nov 25 18:12:27 compute-0 podman[477143]: 2025-11-25 18:12:27.316289685 +0000 UTC m=+0.024401905 container died 05e920da6312c8f6b35deb6e26900277bf6830b33d551df6bb44075a53355aae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cori, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 18:12:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-5231ac02ee434c5a949b3669e4667ff7acf0e40925405040325fdf725bd765b2-merged.mount: Deactivated successfully.
Nov 25 18:12:27 compute-0 podman[477143]: 2025-11-25 18:12:27.36387424 +0000 UTC m=+0.071986420 container remove 05e920da6312c8f6b35deb6e26900277bf6830b33d551df6bb44075a53355aae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cori, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:12:27 compute-0 systemd[1]: libpod-conmon-05e920da6312c8f6b35deb6e26900277bf6830b33d551df6bb44075a53355aae.scope: Deactivated successfully.
Nov 25 18:12:27 compute-0 sudo[476992]: pam_unix(sudo:session): session closed for user root
Nov 25 18:12:27 compute-0 sudo[477156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:12:27 compute-0 sudo[477156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:12:27 compute-0 sudo[477156]: pam_unix(sudo:session): session closed for user root
Nov 25 18:12:27 compute-0 sudo[477181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:12:27 compute-0 sudo[477181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:12:27 compute-0 sudo[477181]: pam_unix(sudo:session): session closed for user root
Nov 25 18:12:27 compute-0 sudo[477206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:12:27 compute-0 sudo[477206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:12:27 compute-0 sudo[477206]: pam_unix(sudo:session): session closed for user root
Nov 25 18:12:27 compute-0 sudo[477231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 18:12:27 compute-0 sudo[477231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:12:27 compute-0 podman[477297]: 2025-11-25 18:12:27.925455251 +0000 UTC m=+0.042156738 container create 81fbfadd8234b54fe85dc5827f29d20edf84e7bdd082a05edaca86e0b3885805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jemison, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 18:12:27 compute-0 ceph-mon[74985]: pgmap v4489: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:27 compute-0 systemd[1]: Started libpod-conmon-81fbfadd8234b54fe85dc5827f29d20edf84e7bdd082a05edaca86e0b3885805.scope.
Nov 25 18:12:27 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:12:28 compute-0 podman[477297]: 2025-11-25 18:12:27.904361966 +0000 UTC m=+0.021063483 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:12:28 compute-0 podman[477297]: 2025-11-25 18:12:28.010418552 +0000 UTC m=+0.127120329 container init 81fbfadd8234b54fe85dc5827f29d20edf84e7bdd082a05edaca86e0b3885805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jemison, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:12:28 compute-0 podman[477297]: 2025-11-25 18:12:28.016479307 +0000 UTC m=+0.133180794 container start 81fbfadd8234b54fe85dc5827f29d20edf84e7bdd082a05edaca86e0b3885805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jemison, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:12:28 compute-0 kind_jemison[477313]: 167 167
Nov 25 18:12:28 compute-0 systemd[1]: libpod-81fbfadd8234b54fe85dc5827f29d20edf84e7bdd082a05edaca86e0b3885805.scope: Deactivated successfully.
Nov 25 18:12:28 compute-0 podman[477297]: 2025-11-25 18:12:28.023431667 +0000 UTC m=+0.140133174 container attach 81fbfadd8234b54fe85dc5827f29d20edf84e7bdd082a05edaca86e0b3885805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 18:12:28 compute-0 podman[477297]: 2025-11-25 18:12:28.02390946 +0000 UTC m=+0.140610967 container died 81fbfadd8234b54fe85dc5827f29d20edf84e7bdd082a05edaca86e0b3885805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:12:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8dae33bbe7c5adfeed93bfcdc01a767524a5f9384bdba3f23bcc2bfa951309a-merged.mount: Deactivated successfully.
Nov 25 18:12:28 compute-0 podman[477297]: 2025-11-25 18:12:28.081504396 +0000 UTC m=+0.198205883 container remove 81fbfadd8234b54fe85dc5827f29d20edf84e7bdd082a05edaca86e0b3885805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jemison, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:12:28 compute-0 systemd[1]: libpod-conmon-81fbfadd8234b54fe85dc5827f29d20edf84e7bdd082a05edaca86e0b3885805.scope: Deactivated successfully.
Nov 25 18:12:28 compute-0 nova_compute[254092]: 2025-11-25 18:12:28.151 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:12:28 compute-0 podman[477338]: 2025-11-25 18:12:28.260083385 +0000 UTC m=+0.050984658 container create 3aa73b4deba24d3395a5b9d8288b9257d205edb743f755fc24e508162cdd61ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hellman, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:12:28 compute-0 systemd[1]: Started libpod-conmon-3aa73b4deba24d3395a5b9d8288b9257d205edb743f755fc24e508162cdd61ba.scope.
Nov 25 18:12:28 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:12:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42eadf90b0f0167e75390dddd643fea6c2e4cff67b64c457cd0d65b6df080596/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:12:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42eadf90b0f0167e75390dddd643fea6c2e4cff67b64c457cd0d65b6df080596/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:12:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42eadf90b0f0167e75390dddd643fea6c2e4cff67b64c457cd0d65b6df080596/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:12:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42eadf90b0f0167e75390dddd643fea6c2e4cff67b64c457cd0d65b6df080596/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:12:28 compute-0 podman[477338]: 2025-11-25 18:12:28.231833237 +0000 UTC m=+0.022734520 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:12:28 compute-0 podman[477338]: 2025-11-25 18:12:28.34624567 +0000 UTC m=+0.137147093 container init 3aa73b4deba24d3395a5b9d8288b9257d205edb743f755fc24e508162cdd61ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 18:12:28 compute-0 podman[477338]: 2025-11-25 18:12:28.352734696 +0000 UTC m=+0.143635989 container start 3aa73b4deba24d3395a5b9d8288b9257d205edb743f755fc24e508162cdd61ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hellman, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:12:28 compute-0 podman[477338]: 2025-11-25 18:12:28.358964996 +0000 UTC m=+0.149866279 container attach 3aa73b4deba24d3395a5b9d8288b9257d205edb743f755fc24e508162cdd61ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 18:12:28 compute-0 nova_compute[254092]: 2025-11-25 18:12:28.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:12:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4490: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:29 compute-0 eager_hellman[477354]: {
Nov 25 18:12:29 compute-0 eager_hellman[477354]:     "0": [
Nov 25 18:12:29 compute-0 eager_hellman[477354]:         {
Nov 25 18:12:29 compute-0 eager_hellman[477354]:             "devices": [
Nov 25 18:12:29 compute-0 eager_hellman[477354]:                 "/dev/loop3"
Nov 25 18:12:29 compute-0 eager_hellman[477354]:             ],
Nov 25 18:12:29 compute-0 eager_hellman[477354]:             "lv_name": "ceph_lv0",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:             "lv_size": "21470642176",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:             "name": "ceph_lv0",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:             "tags": {
Nov 25 18:12:29 compute-0 eager_hellman[477354]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:                 "ceph.cluster_name": "ceph",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:                 "ceph.crush_device_class": "",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:                 "ceph.encrypted": "0",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:                 "ceph.osd_id": "0",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:                 "ceph.type": "block",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:                 "ceph.vdo": "0"
Nov 25 18:12:29 compute-0 eager_hellman[477354]:             },
Nov 25 18:12:29 compute-0 eager_hellman[477354]:             "type": "block",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:             "vg_name": "ceph_vg0"
Nov 25 18:12:29 compute-0 eager_hellman[477354]:         }
Nov 25 18:12:29 compute-0 eager_hellman[477354]:     ],
Nov 25 18:12:29 compute-0 eager_hellman[477354]:     "1": [
Nov 25 18:12:29 compute-0 eager_hellman[477354]:         {
Nov 25 18:12:29 compute-0 eager_hellman[477354]:             "devices": [
Nov 25 18:12:29 compute-0 eager_hellman[477354]:                 "/dev/loop4"
Nov 25 18:12:29 compute-0 eager_hellman[477354]:             ],
Nov 25 18:12:29 compute-0 eager_hellman[477354]:             "lv_name": "ceph_lv1",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:             "lv_size": "21470642176",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:             "name": "ceph_lv1",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:             "tags": {
Nov 25 18:12:29 compute-0 eager_hellman[477354]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:                 "ceph.cluster_name": "ceph",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:                 "ceph.crush_device_class": "",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:                 "ceph.encrypted": "0",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:                 "ceph.osd_id": "1",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:                 "ceph.type": "block",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:                 "ceph.vdo": "0"
Nov 25 18:12:29 compute-0 eager_hellman[477354]:             },
Nov 25 18:12:29 compute-0 eager_hellman[477354]:             "type": "block",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:             "vg_name": "ceph_vg1"
Nov 25 18:12:29 compute-0 eager_hellman[477354]:         }
Nov 25 18:12:29 compute-0 eager_hellman[477354]:     ],
Nov 25 18:12:29 compute-0 eager_hellman[477354]:     "2": [
Nov 25 18:12:29 compute-0 eager_hellman[477354]:         {
Nov 25 18:12:29 compute-0 eager_hellman[477354]:             "devices": [
Nov 25 18:12:29 compute-0 eager_hellman[477354]:                 "/dev/loop5"
Nov 25 18:12:29 compute-0 eager_hellman[477354]:             ],
Nov 25 18:12:29 compute-0 eager_hellman[477354]:             "lv_name": "ceph_lv2",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:             "lv_size": "21470642176",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:             "name": "ceph_lv2",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:             "tags": {
Nov 25 18:12:29 compute-0 eager_hellman[477354]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:                 "ceph.cluster_name": "ceph",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:                 "ceph.crush_device_class": "",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:                 "ceph.encrypted": "0",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:                 "ceph.osd_id": "2",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:                 "ceph.type": "block",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:                 "ceph.vdo": "0"
Nov 25 18:12:29 compute-0 eager_hellman[477354]:             },
Nov 25 18:12:29 compute-0 eager_hellman[477354]:             "type": "block",
Nov 25 18:12:29 compute-0 eager_hellman[477354]:             "vg_name": "ceph_vg2"
Nov 25 18:12:29 compute-0 eager_hellman[477354]:         }
Nov 25 18:12:29 compute-0 eager_hellman[477354]:     ]
Nov 25 18:12:29 compute-0 eager_hellman[477354]: }
Nov 25 18:12:29 compute-0 systemd[1]: libpod-3aa73b4deba24d3395a5b9d8288b9257d205edb743f755fc24e508162cdd61ba.scope: Deactivated successfully.
Nov 25 18:12:29 compute-0 podman[477338]: 2025-11-25 18:12:29.198738815 +0000 UTC m=+0.989640088 container died 3aa73b4deba24d3395a5b9d8288b9257d205edb743f755fc24e508162cdd61ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hellman, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 18:12:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-42eadf90b0f0167e75390dddd643fea6c2e4cff67b64c457cd0d65b6df080596-merged.mount: Deactivated successfully.
Nov 25 18:12:29 compute-0 podman[477338]: 2025-11-25 18:12:29.261334558 +0000 UTC m=+1.052235831 container remove 3aa73b4deba24d3395a5b9d8288b9257d205edb743f755fc24e508162cdd61ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hellman, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:12:29 compute-0 systemd[1]: libpod-conmon-3aa73b4deba24d3395a5b9d8288b9257d205edb743f755fc24e508162cdd61ba.scope: Deactivated successfully.
Nov 25 18:12:29 compute-0 sudo[477231]: pam_unix(sudo:session): session closed for user root
Nov 25 18:12:29 compute-0 sudo[477375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:12:29 compute-0 sudo[477375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:12:29 compute-0 sudo[477375]: pam_unix(sudo:session): session closed for user root
Nov 25 18:12:29 compute-0 sudo[477400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:12:29 compute-0 sudo[477400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:12:29 compute-0 sudo[477400]: pam_unix(sudo:session): session closed for user root
Nov 25 18:12:29 compute-0 sudo[477425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:12:29 compute-0 sudo[477425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:12:29 compute-0 sudo[477425]: pam_unix(sudo:session): session closed for user root
Nov 25 18:12:29 compute-0 sudo[477450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 18:12:29 compute-0 sudo[477450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:12:29 compute-0 podman[477514]: 2025-11-25 18:12:29.935293737 +0000 UTC m=+0.041956253 container create 0417b133a98360249b8c5967f1cccdd00b61f8846a6860c87f525556db8a95cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_austin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:12:29 compute-0 ceph-mon[74985]: pgmap v4490: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:29 compute-0 systemd[1]: Started libpod-conmon-0417b133a98360249b8c5967f1cccdd00b61f8846a6860c87f525556db8a95cf.scope.
Nov 25 18:12:30 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:12:30 compute-0 podman[477514]: 2025-11-25 18:12:29.916581107 +0000 UTC m=+0.023243713 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:12:30 compute-0 podman[477514]: 2025-11-25 18:12:30.01923137 +0000 UTC m=+0.125893896 container init 0417b133a98360249b8c5967f1cccdd00b61f8846a6860c87f525556db8a95cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_austin, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:12:30 compute-0 podman[477514]: 2025-11-25 18:12:30.026310084 +0000 UTC m=+0.132972600 container start 0417b133a98360249b8c5967f1cccdd00b61f8846a6860c87f525556db8a95cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_austin, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:12:30 compute-0 cool_austin[477530]: 167 167
Nov 25 18:12:30 compute-0 systemd[1]: libpod-0417b133a98360249b8c5967f1cccdd00b61f8846a6860c87f525556db8a95cf.scope: Deactivated successfully.
Nov 25 18:12:30 compute-0 podman[477514]: 2025-11-25 18:12:30.030806365 +0000 UTC m=+0.137468911 container attach 0417b133a98360249b8c5967f1cccdd00b61f8846a6860c87f525556db8a95cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_austin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 18:12:30 compute-0 podman[477514]: 2025-11-25 18:12:30.031653538 +0000 UTC m=+0.138316064 container died 0417b133a98360249b8c5967f1cccdd00b61f8846a6860c87f525556db8a95cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:12:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d9b3bde585c38faa1846b412a72ac237b3a3ec01657986504ddf75e0a18ffeb-merged.mount: Deactivated successfully.
Nov 25 18:12:30 compute-0 podman[477514]: 2025-11-25 18:12:30.066016164 +0000 UTC m=+0.172678680 container remove 0417b133a98360249b8c5967f1cccdd00b61f8846a6860c87f525556db8a95cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_austin, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:12:30 compute-0 systemd[1]: libpod-conmon-0417b133a98360249b8c5967f1cccdd00b61f8846a6860c87f525556db8a95cf.scope: Deactivated successfully.
Nov 25 18:12:30 compute-0 podman[477554]: 2025-11-25 18:12:30.247656166 +0000 UTC m=+0.041878600 container create e5d3573308780ffbf3f56514aadeb4cc396a2c5dc77530cd066924e4cb70679f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_neumann, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 18:12:30 compute-0 systemd[1]: Started libpod-conmon-e5d3573308780ffbf3f56514aadeb4cc396a2c5dc77530cd066924e4cb70679f.scope.
Nov 25 18:12:30 compute-0 nova_compute[254092]: 2025-11-25 18:12:30.310 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:12:30 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/934d17be01232fa717a416469f2bbaf179305687278840c1568fd6261d1f51d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/934d17be01232fa717a416469f2bbaf179305687278840c1568fd6261d1f51d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/934d17be01232fa717a416469f2bbaf179305687278840c1568fd6261d1f51d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/934d17be01232fa717a416469f2bbaf179305687278840c1568fd6261d1f51d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:12:30 compute-0 podman[477554]: 2025-11-25 18:12:30.228408682 +0000 UTC m=+0.022631136 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:12:30 compute-0 podman[477554]: 2025-11-25 18:12:30.337985674 +0000 UTC m=+0.132208128 container init e5d3573308780ffbf3f56514aadeb4cc396a2c5dc77530cd066924e4cb70679f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_neumann, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 18:12:30 compute-0 podman[477554]: 2025-11-25 18:12:30.343528135 +0000 UTC m=+0.137750569 container start e5d3573308780ffbf3f56514aadeb4cc396a2c5dc77530cd066924e4cb70679f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_neumann, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:12:30 compute-0 podman[477554]: 2025-11-25 18:12:30.346994819 +0000 UTC m=+0.141217253 container attach e5d3573308780ffbf3f56514aadeb4cc396a2c5dc77530cd066924e4cb70679f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_neumann, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 18:12:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4491: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:31 compute-0 magical_neumann[477570]: {
Nov 25 18:12:31 compute-0 magical_neumann[477570]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 18:12:31 compute-0 magical_neumann[477570]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:12:31 compute-0 magical_neumann[477570]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 18:12:31 compute-0 magical_neumann[477570]:         "osd_id": 1,
Nov 25 18:12:31 compute-0 magical_neumann[477570]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 18:12:31 compute-0 magical_neumann[477570]:         "type": "bluestore"
Nov 25 18:12:31 compute-0 magical_neumann[477570]:     },
Nov 25 18:12:31 compute-0 magical_neumann[477570]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 18:12:31 compute-0 magical_neumann[477570]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:12:31 compute-0 magical_neumann[477570]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 18:12:31 compute-0 magical_neumann[477570]:         "osd_id": 2,
Nov 25 18:12:31 compute-0 magical_neumann[477570]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 18:12:31 compute-0 magical_neumann[477570]:         "type": "bluestore"
Nov 25 18:12:31 compute-0 magical_neumann[477570]:     },
Nov 25 18:12:31 compute-0 magical_neumann[477570]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 18:12:31 compute-0 magical_neumann[477570]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:12:31 compute-0 magical_neumann[477570]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 18:12:31 compute-0 magical_neumann[477570]:         "osd_id": 0,
Nov 25 18:12:31 compute-0 magical_neumann[477570]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 18:12:31 compute-0 magical_neumann[477570]:         "type": "bluestore"
Nov 25 18:12:31 compute-0 magical_neumann[477570]:     }
Nov 25 18:12:31 compute-0 magical_neumann[477570]: }
Nov 25 18:12:31 compute-0 systemd[1]: libpod-e5d3573308780ffbf3f56514aadeb4cc396a2c5dc77530cd066924e4cb70679f.scope: Deactivated successfully.
Nov 25 18:12:31 compute-0 podman[477554]: 2025-11-25 18:12:31.289036351 +0000 UTC m=+1.083258815 container died e5d3573308780ffbf3f56514aadeb4cc396a2c5dc77530cd066924e4cb70679f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 18:12:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-934d17be01232fa717a416469f2bbaf179305687278840c1568fd6261d1f51d8-merged.mount: Deactivated successfully.
Nov 25 18:12:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:12:31 compute-0 podman[477554]: 2025-11-25 18:12:31.343201145 +0000 UTC m=+1.137423569 container remove e5d3573308780ffbf3f56514aadeb4cc396a2c5dc77530cd066924e4cb70679f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 18:12:31 compute-0 systemd[1]: libpod-conmon-e5d3573308780ffbf3f56514aadeb4cc396a2c5dc77530cd066924e4cb70679f.scope: Deactivated successfully.
Nov 25 18:12:31 compute-0 sudo[477450]: pam_unix(sudo:session): session closed for user root
Nov 25 18:12:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:12:31 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:12:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:12:31 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:12:31 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 183bea00-64fb-44eb-837f-599cacecafc8 does not exist
Nov 25 18:12:31 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 17ebbdac-f163-49ba-a541-06e5d601d13b does not exist
Nov 25 18:12:31 compute-0 sudo[477615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:12:31 compute-0 sudo[477615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:12:31 compute-0 sudo[477615]: pam_unix(sudo:session): session closed for user root
Nov 25 18:12:31 compute-0 sudo[477640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 18:12:31 compute-0 nova_compute[254092]: 2025-11-25 18:12:31.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:12:31 compute-0 sudo[477640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:12:31 compute-0 sudo[477640]: pam_unix(sudo:session): session closed for user root
Nov 25 18:12:31 compute-0 ceph-mon[74985]: pgmap v4491: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:31 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:12:31 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:12:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4492: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:33 compute-0 ceph-mon[74985]: pgmap v4492: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:33 compute-0 nova_compute[254092]: 2025-11-25 18:12:33.153 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:12:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4493: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:35 compute-0 nova_compute[254092]: 2025-11-25 18:12:35.313 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:12:35 compute-0 ceph-mon[74985]: pgmap v4493: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:12:36 compute-0 nova_compute[254092]: 2025-11-25 18:12:36.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:12:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4494: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:36 compute-0 podman[477666]: 2025-11-25 18:12:36.64152874 +0000 UTC m=+0.055874301 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 25 18:12:36 compute-0 podman[477665]: 2025-11-25 18:12:36.643590286 +0000 UTC m=+0.059504020 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 18:12:36 compute-0 podman[477667]: 2025-11-25 18:12:36.671477345 +0000 UTC m=+0.084429338 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 18:12:37 compute-0 ceph-mon[74985]: pgmap v4494: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:38 compute-0 nova_compute[254092]: 2025-11-25 18:12:38.155 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:12:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4495: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:39 compute-0 nova_compute[254092]: 2025-11-25 18:12:39.490 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:12:39 compute-0 nova_compute[254092]: 2025-11-25 18:12:39.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:12:39 compute-0 nova_compute[254092]: 2025-11-25 18:12:39.524 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 18:12:39 compute-0 nova_compute[254092]: 2025-11-25 18:12:39.524 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 18:12:39 compute-0 nova_compute[254092]: 2025-11-25 18:12:39.524 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:12:39 compute-0 nova_compute[254092]: 2025-11-25 18:12:39.525 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 18:12:39 compute-0 nova_compute[254092]: 2025-11-25 18:12:39.525 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 18:12:39 compute-0 ceph-mon[74985]: pgmap v4495: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 18:12:39 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/280637257' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:12:40 compute-0 nova_compute[254092]: 2025-11-25 18:12:40.017 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 18:12:40 compute-0 nova_compute[254092]: 2025-11-25 18:12:40.155 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 18:12:40 compute-0 nova_compute[254092]: 2025-11-25 18:12:40.156 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3544MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 18:12:40 compute-0 nova_compute[254092]: 2025-11-25 18:12:40.157 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 18:12:40 compute-0 nova_compute[254092]: 2025-11-25 18:12:40.157 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 18:12:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:12:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:12:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:12:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:12:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:12:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:12:40 compute-0 nova_compute[254092]: 2025-11-25 18:12:40.245 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 18:12:40 compute-0 nova_compute[254092]: 2025-11-25 18:12:40.246 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 18:12:40 compute-0 nova_compute[254092]: 2025-11-25 18:12:40.272 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 18:12:40 compute-0 nova_compute[254092]: 2025-11-25 18:12:40.316 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:12:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_18:12:40
Nov 25 18:12:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 18:12:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 18:12:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', 'images', 'vms', 'volumes', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups']
Nov 25 18:12:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 18:12:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4496: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 18:12:40 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/805166120' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:12:40 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/280637257' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:12:40 compute-0 nova_compute[254092]: 2025-11-25 18:12:40.688 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 18:12:40 compute-0 nova_compute[254092]: 2025-11-25 18:12:40.694 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 18:12:40 compute-0 nova_compute[254092]: 2025-11-25 18:12:40.707 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 18:12:40 compute-0 nova_compute[254092]: 2025-11-25 18:12:40.708 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 18:12:40 compute-0 nova_compute[254092]: 2025-11-25 18:12:40.709 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.552s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:12:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 18:12:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:12:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:12:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:12:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:12:41 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 18:12:41 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:12:41 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:12:41 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:12:41 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:12:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:12:41 compute-0 ceph-mon[74985]: pgmap v4496: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:41 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/805166120' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:12:41 compute-0 nova_compute[254092]: 2025-11-25 18:12:41.709 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:12:41 compute-0 nova_compute[254092]: 2025-11-25 18:12:41.710 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 18:12:41 compute-0 nova_compute[254092]: 2025-11-25 18:12:41.710 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 18:12:41 compute-0 nova_compute[254092]: 2025-11-25 18:12:41.731 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 18:12:41 compute-0 nova_compute[254092]: 2025-11-25 18:12:41.732 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:12:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4497: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:43 compute-0 nova_compute[254092]: 2025-11-25 18:12:43.156 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:12:43 compute-0 nova_compute[254092]: 2025-11-25 18:12:43.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:12:43 compute-0 nova_compute[254092]: 2025-11-25 18:12:43.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 18:12:43 compute-0 ceph-mon[74985]: pgmap v4497: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4498: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:45 compute-0 nova_compute[254092]: 2025-11-25 18:12:45.318 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:12:45 compute-0 ceph-mon[74985]: pgmap v4498: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:12:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4499: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:47 compute-0 ceph-mon[74985]: pgmap v4499: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:48 compute-0 nova_compute[254092]: 2025-11-25 18:12:48.159 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:12:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4500: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:49 compute-0 ceph-mon[74985]: pgmap v4500: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:50 compute-0 nova_compute[254092]: 2025-11-25 18:12:50.375 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:12:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4501: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:12:51 compute-0 ceph-mon[74985]: pgmap v4501: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:52 compute-0 nova_compute[254092]: 2025-11-25 18:12:52.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:12:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4502: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 18:12:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:12:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 18:12:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:12:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 18:12:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:12:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:12:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:12:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:12:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:12:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 18:12:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:12:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 18:12:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:12:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:12:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:12:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 18:12:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:12:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 18:12:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:12:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:12:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:12:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 18:12:53 compute-0 nova_compute[254092]: 2025-11-25 18:12:53.161 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:12:53 compute-0 nova_compute[254092]: 2025-11-25 18:12:53.491 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:12:53 compute-0 ceph-mon[74985]: pgmap v4502: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4503: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:55 compute-0 nova_compute[254092]: 2025-11-25 18:12:55.380 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:12:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 18:12:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3133286245' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 18:12:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 18:12:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3133286245' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 18:12:55 compute-0 ceph-mon[74985]: pgmap v4503: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/3133286245' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 18:12:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/3133286245' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 18:12:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:12:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4504: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:57 compute-0 ceph-mon[74985]: pgmap v4504: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:58 compute-0 nova_compute[254092]: 2025-11-25 18:12:58.163 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:12:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4505: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:12:59 compute-0 ceph-mon[74985]: pgmap v4505: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:00 compute-0 nova_compute[254092]: 2025-11-25 18:13:00.408 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:13:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4506: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:13:01 compute-0 ceph-mon[74985]: pgmap v4506: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4507: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:03 compute-0 nova_compute[254092]: 2025-11-25 18:13:03.170 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:13:03 compute-0 ceph-mon[74985]: pgmap v4507: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4508: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:05 compute-0 nova_compute[254092]: 2025-11-25 18:13:05.412 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:13:05 compute-0 ceph-mon[74985]: pgmap v4508: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:13:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4509: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:07 compute-0 podman[477776]: 2025-11-25 18:13:07.658182034 +0000 UTC m=+0.060151948 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 18:13:07 compute-0 podman[477775]: 2025-11-25 18:13:07.670955321 +0000 UTC m=+0.074862037 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, container_name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 18:13:07 compute-0 podman[477777]: 2025-11-25 18:13:07.714614939 +0000 UTC m=+0.110235270 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Nov 25 18:13:07 compute-0 ceph-mon[74985]: pgmap v4509: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:08 compute-0 nova_compute[254092]: 2025-11-25 18:13:08.171 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:13:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4510: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:09 compute-0 ceph-mon[74985]: pgmap v4510: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:13:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:13:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:13:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:13:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:13:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:13:10 compute-0 nova_compute[254092]: 2025-11-25 18:13:10.416 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:13:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4511: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:13:11 compute-0 ceph-mon[74985]: pgmap v4511: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4512: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:13 compute-0 nova_compute[254092]: 2025-11-25 18:13:13.174 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:13:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 18:13:13.722 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 18:13:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 18:13:13.722 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 18:13:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 18:13:13.722 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:13:13 compute-0 ceph-mon[74985]: pgmap v4512: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4513: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:15 compute-0 nova_compute[254092]: 2025-11-25 18:13:15.419 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:13:15 compute-0 ceph-mon[74985]: pgmap v4513: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:13:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4514: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:17 compute-0 ceph-mon[74985]: pgmap v4514: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:18 compute-0 nova_compute[254092]: 2025-11-25 18:13:18.176 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:13:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4515: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:19 compute-0 ceph-mon[74985]: pgmap v4515: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:20 compute-0 nova_compute[254092]: 2025-11-25 18:13:20.422 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:13:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4516: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:21 compute-0 ceph-mon[74985]: pgmap v4516: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:13:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4517: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:23 compute-0 nova_compute[254092]: 2025-11-25 18:13:23.180 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:13:23 compute-0 ceph-mon[74985]: pgmap v4517: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4518: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:25 compute-0 nova_compute[254092]: 2025-11-25 18:13:25.425 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:13:25 compute-0 ceph-mon[74985]: pgmap v4518: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:13:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4519: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:27 compute-0 ceph-mon[74985]: pgmap v4519: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:28 compute-0 nova_compute[254092]: 2025-11-25 18:13:28.181 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:13:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4520: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:29 compute-0 ceph-mon[74985]: pgmap v4520: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:30 compute-0 nova_compute[254092]: 2025-11-25 18:13:30.430 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:13:30 compute-0 nova_compute[254092]: 2025-11-25 18:13:30.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:13:30 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4521: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:31 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:13:31 compute-0 sudo[477839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:13:31 compute-0 sudo[477839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:13:31 compute-0 sudo[477839]: pam_unix(sudo:session): session closed for user root
Nov 25 18:13:31 compute-0 sudo[477864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:13:31 compute-0 sudo[477864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:13:31 compute-0 sudo[477864]: pam_unix(sudo:session): session closed for user root
Nov 25 18:13:31 compute-0 sudo[477889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:13:31 compute-0 sudo[477889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:13:31 compute-0 sudo[477889]: pam_unix(sudo:session): session closed for user root
Nov 25 18:13:31 compute-0 sudo[477914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 18:13:31 compute-0 sudo[477914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:13:32 compute-0 ceph-mon[74985]: pgmap v4521: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:32 compute-0 sudo[477914]: pam_unix(sudo:session): session closed for user root
Nov 25 18:13:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:13:32 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:13:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 18:13:32 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:13:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 18:13:32 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:13:32 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 506dfbc4-b02d-40e4-9f7b-12cae849f91e does not exist
Nov 25 18:13:32 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev e9c30f05-2e2c-437c-b716-fcf14667ac53 does not exist
Nov 25 18:13:32 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev ebd1655f-681e-4a6c-b367-c77b17a271ed does not exist
Nov 25 18:13:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 18:13:32 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:13:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 18:13:32 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:13:32 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:13:32 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:13:32 compute-0 sudo[477971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:13:32 compute-0 sudo[477971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:13:32 compute-0 sudo[477971]: pam_unix(sudo:session): session closed for user root
Nov 25 18:13:32 compute-0 sudo[477996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:13:32 compute-0 sudo[477996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:13:32 compute-0 sudo[477996]: pam_unix(sudo:session): session closed for user root
Nov 25 18:13:32 compute-0 sudo[478021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:13:32 compute-0 sudo[478021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:13:32 compute-0 sudo[478021]: pam_unix(sudo:session): session closed for user root
Nov 25 18:13:32 compute-0 sudo[478046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 18:13:32 compute-0 sudo[478046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:13:32 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4522: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:32 compute-0 podman[478110]: 2025-11-25 18:13:32.797657828 +0000 UTC m=+0.056340654 container create ce3270290fc65962c13119078741391c8470d518748e6e498ca6a876dd79292e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:13:32 compute-0 systemd[1]: Started libpod-conmon-ce3270290fc65962c13119078741391c8470d518748e6e498ca6a876dd79292e.scope.
Nov 25 18:13:32 compute-0 podman[478110]: 2025-11-25 18:13:32.766626863 +0000 UTC m=+0.025309799 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:13:32 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:13:32 compute-0 podman[478110]: 2025-11-25 18:13:32.89256987 +0000 UTC m=+0.151252776 container init ce3270290fc65962c13119078741391c8470d518748e6e498ca6a876dd79292e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 18:13:32 compute-0 podman[478110]: 2025-11-25 18:13:32.901261257 +0000 UTC m=+0.159944083 container start ce3270290fc65962c13119078741391c8470d518748e6e498ca6a876dd79292e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_turing, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:13:32 compute-0 podman[478110]: 2025-11-25 18:13:32.904556426 +0000 UTC m=+0.163239362 container attach ce3270290fc65962c13119078741391c8470d518748e6e498ca6a876dd79292e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_turing, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:13:32 compute-0 elated_turing[478126]: 167 167
Nov 25 18:13:32 compute-0 systemd[1]: libpod-ce3270290fc65962c13119078741391c8470d518748e6e498ca6a876dd79292e.scope: Deactivated successfully.
Nov 25 18:13:32 compute-0 conmon[478126]: conmon ce3270290fc65962c131 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ce3270290fc65962c13119078741391c8470d518748e6e498ca6a876dd79292e.scope/container/memory.events
Nov 25 18:13:32 compute-0 podman[478110]: 2025-11-25 18:13:32.90985409 +0000 UTC m=+0.168536916 container died ce3270290fc65962c13119078741391c8470d518748e6e498ca6a876dd79292e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Nov 25 18:13:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc69fb1fcff0c54633af0ccd9b9451cca57663cac7a7d0f05c1b8298f5b4ab59-merged.mount: Deactivated successfully.
Nov 25 18:13:32 compute-0 podman[478110]: 2025-11-25 18:13:32.954653619 +0000 UTC m=+0.213336445 container remove ce3270290fc65962c13119078741391c8470d518748e6e498ca6a876dd79292e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_turing, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 18:13:32 compute-0 systemd[1]: libpod-conmon-ce3270290fc65962c13119078741391c8470d518748e6e498ca6a876dd79292e.scope: Deactivated successfully.
Nov 25 18:13:33 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:13:33 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 18:13:33 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:13:33 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 18:13:33 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 18:13:33 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:13:33 compute-0 ceph-mon[74985]: pgmap v4522: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:33 compute-0 podman[478150]: 2025-11-25 18:13:33.144706581 +0000 UTC m=+0.056459157 container create 7174d7ff1ad02162626fabe34b829bf5a1fc8960a2ebfc7ed9e516a2b0009708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:13:33 compute-0 systemd[1]: Started libpod-conmon-7174d7ff1ad02162626fabe34b829bf5a1fc8960a2ebfc7ed9e516a2b0009708.scope.
Nov 25 18:13:33 compute-0 nova_compute[254092]: 2025-11-25 18:13:33.182 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:13:33 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:13:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/267ec38c5b1825b46f5f1c01186096c67032ac6b5c40c0368e97f480d9fecea6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:13:33 compute-0 podman[478150]: 2025-11-25 18:13:33.126966417 +0000 UTC m=+0.038719043 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:13:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/267ec38c5b1825b46f5f1c01186096c67032ac6b5c40c0368e97f480d9fecea6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:13:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/267ec38c5b1825b46f5f1c01186096c67032ac6b5c40c0368e97f480d9fecea6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:13:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/267ec38c5b1825b46f5f1c01186096c67032ac6b5c40c0368e97f480d9fecea6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:13:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/267ec38c5b1825b46f5f1c01186096c67032ac6b5c40c0368e97f480d9fecea6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 18:13:33 compute-0 podman[478150]: 2025-11-25 18:13:33.232694554 +0000 UTC m=+0.144447210 container init 7174d7ff1ad02162626fabe34b829bf5a1fc8960a2ebfc7ed9e516a2b0009708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 18:13:33 compute-0 podman[478150]: 2025-11-25 18:13:33.242881802 +0000 UTC m=+0.154634378 container start 7174d7ff1ad02162626fabe34b829bf5a1fc8960a2ebfc7ed9e516a2b0009708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_brahmagupta, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Nov 25 18:13:33 compute-0 podman[478150]: 2025-11-25 18:13:33.245986976 +0000 UTC m=+0.157739592 container attach 7174d7ff1ad02162626fabe34b829bf5a1fc8960a2ebfc7ed9e516a2b0009708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_brahmagupta, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 18:13:33 compute-0 nova_compute[254092]: 2025-11-25 18:13:33.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:13:34 compute-0 flamboyant_brahmagupta[478167]: --> passed data devices: 0 physical, 3 LVM
Nov 25 18:13:34 compute-0 flamboyant_brahmagupta[478167]: --> relative data size: 1.0
Nov 25 18:13:34 compute-0 flamboyant_brahmagupta[478167]: --> All data devices are unavailable
Nov 25 18:13:34 compute-0 podman[478150]: 2025-11-25 18:13:34.26305087 +0000 UTC m=+1.174803456 container died 7174d7ff1ad02162626fabe34b829bf5a1fc8960a2ebfc7ed9e516a2b0009708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_brahmagupta, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 18:13:34 compute-0 systemd[1]: libpod-7174d7ff1ad02162626fabe34b829bf5a1fc8960a2ebfc7ed9e516a2b0009708.scope: Deactivated successfully.
Nov 25 18:13:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-267ec38c5b1825b46f5f1c01186096c67032ac6b5c40c0368e97f480d9fecea6-merged.mount: Deactivated successfully.
Nov 25 18:13:34 compute-0 podman[478150]: 2025-11-25 18:13:34.34607624 +0000 UTC m=+1.257828856 container remove 7174d7ff1ad02162626fabe34b829bf5a1fc8960a2ebfc7ed9e516a2b0009708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_brahmagupta, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:13:34 compute-0 systemd[1]: libpod-conmon-7174d7ff1ad02162626fabe34b829bf5a1fc8960a2ebfc7ed9e516a2b0009708.scope: Deactivated successfully.
Nov 25 18:13:34 compute-0 sudo[478046]: pam_unix(sudo:session): session closed for user root
Nov 25 18:13:34 compute-0 sudo[478210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:13:34 compute-0 sudo[478210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:13:34 compute-0 sudo[478210]: pam_unix(sudo:session): session closed for user root
Nov 25 18:13:34 compute-0 sudo[478235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:13:34 compute-0 sudo[478235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:13:34 compute-0 sudo[478235]: pam_unix(sudo:session): session closed for user root
Nov 25 18:13:34 compute-0 sudo[478260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:13:34 compute-0 sudo[478260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:13:34 compute-0 sudo[478260]: pam_unix(sudo:session): session closed for user root
Nov 25 18:13:34 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4523: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:34 compute-0 sudo[478285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- lvm list --format json
Nov 25 18:13:34 compute-0 sudo[478285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:13:34 compute-0 podman[478350]: 2025-11-25 18:13:34.995443078 +0000 UTC m=+0.038559490 container create f0fd2f4fa2c66d0aac3bff7f15e3fd3b1727295f84bea2960fe8e82005819ded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:13:35 compute-0 systemd[1]: Started libpod-conmon-f0fd2f4fa2c66d0aac3bff7f15e3fd3b1727295f84bea2960fe8e82005819ded.scope.
Nov 25 18:13:35 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:13:35 compute-0 podman[478350]: 2025-11-25 18:13:35.071481636 +0000 UTC m=+0.114598098 container init f0fd2f4fa2c66d0aac3bff7f15e3fd3b1727295f84bea2960fe8e82005819ded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_allen, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 18:13:35 compute-0 podman[478350]: 2025-11-25 18:13:34.980331397 +0000 UTC m=+0.023447829 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:13:35 compute-0 podman[478350]: 2025-11-25 18:13:35.077178402 +0000 UTC m=+0.120294814 container start f0fd2f4fa2c66d0aac3bff7f15e3fd3b1727295f84bea2960fe8e82005819ded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_allen, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 18:13:35 compute-0 podman[478350]: 2025-11-25 18:13:35.079806893 +0000 UTC m=+0.122923355 container attach f0fd2f4fa2c66d0aac3bff7f15e3fd3b1727295f84bea2960fe8e82005819ded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_allen, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 18:13:35 compute-0 agitated_allen[478367]: 167 167
Nov 25 18:13:35 compute-0 systemd[1]: libpod-f0fd2f4fa2c66d0aac3bff7f15e3fd3b1727295f84bea2960fe8e82005819ded.scope: Deactivated successfully.
Nov 25 18:13:35 compute-0 podman[478350]: 2025-11-25 18:13:35.082601609 +0000 UTC m=+0.125718021 container died f0fd2f4fa2c66d0aac3bff7f15e3fd3b1727295f84bea2960fe8e82005819ded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 18:13:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-99e181816fd1e7c9d8522431c82cd7b57d0fb776ad50201c44eab9394e3c33da-merged.mount: Deactivated successfully.
Nov 25 18:13:35 compute-0 podman[478350]: 2025-11-25 18:13:35.119323419 +0000 UTC m=+0.162439831 container remove f0fd2f4fa2c66d0aac3bff7f15e3fd3b1727295f84bea2960fe8e82005819ded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_allen, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 18:13:35 compute-0 systemd[1]: libpod-conmon-f0fd2f4fa2c66d0aac3bff7f15e3fd3b1727295f84bea2960fe8e82005819ded.scope: Deactivated successfully.
Nov 25 18:13:35 compute-0 podman[478391]: 2025-11-25 18:13:35.298217586 +0000 UTC m=+0.049303892 container create 07016a794023488d1a936aef510ae0271d769d42dc929c3634ea815010059de1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 18:13:35 compute-0 systemd[1]: Started libpod-conmon-07016a794023488d1a936aef510ae0271d769d42dc929c3634ea815010059de1.scope.
Nov 25 18:13:35 compute-0 podman[478391]: 2025-11-25 18:13:35.272316361 +0000 UTC m=+0.023402677 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:13:35 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:13:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec7ef088f1e93fae3b32974b635ed20b9f6d04a2b0bae711e6639a2a83ced2fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:13:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec7ef088f1e93fae3b32974b635ed20b9f6d04a2b0bae711e6639a2a83ced2fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:13:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec7ef088f1e93fae3b32974b635ed20b9f6d04a2b0bae711e6639a2a83ced2fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:13:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec7ef088f1e93fae3b32974b635ed20b9f6d04a2b0bae711e6639a2a83ced2fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:13:35 compute-0 podman[478391]: 2025-11-25 18:13:35.410523202 +0000 UTC m=+0.161609488 container init 07016a794023488d1a936aef510ae0271d769d42dc929c3634ea815010059de1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hugle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:13:35 compute-0 podman[478391]: 2025-11-25 18:13:35.419070215 +0000 UTC m=+0.170156501 container start 07016a794023488d1a936aef510ae0271d769d42dc929c3634ea815010059de1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hugle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 18:13:35 compute-0 podman[478391]: 2025-11-25 18:13:35.422708664 +0000 UTC m=+0.173794990 container attach 07016a794023488d1a936aef510ae0271d769d42dc929c3634ea815010059de1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 18:13:35 compute-0 nova_compute[254092]: 2025-11-25 18:13:35.431 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:13:35 compute-0 ceph-mon[74985]: pgmap v4523: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:36 compute-0 frosty_hugle[478407]: {
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:     "0": [
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:         {
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:             "devices": [
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:                 "/dev/loop3"
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:             ],
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:             "lv_name": "ceph_lv0",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:             "lv_size": "21470642176",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ac347129-f7f8-46b6-8e4f-7d01f9efee15,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:             "lv_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:             "name": "ceph_lv0",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:             "tags": {
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:                 "ceph.block_uuid": "sWVecF-cHxf-Vmtk-ccZT-0cck-M0rg-bOvHwY",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:                 "ceph.cluster_name": "ceph",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:                 "ceph.crush_device_class": "",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:                 "ceph.encrypted": "0",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:                 "ceph.osd_fsid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:                 "ceph.osd_id": "0",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:                 "ceph.type": "block",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:                 "ceph.vdo": "0"
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:             },
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:             "type": "block",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:             "vg_name": "ceph_vg0"
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:         }
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:     ],
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:     "1": [
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:         {
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:             "devices": [
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:                 "/dev/loop4"
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:             ],
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:             "lv_name": "ceph_lv1",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:             "lv_size": "21470642176",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=29af116d-8685-452a-b634-7ce2a4974adc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:             "lv_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:             "name": "ceph_lv1",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:             "tags": {
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:                 "ceph.block_uuid": "FQ9nk0-U3j5-xJHt-V0Dj-xU4G-5QZs-qH65VD",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:                 "ceph.cluster_name": "ceph",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:                 "ceph.crush_device_class": "",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:                 "ceph.encrypted": "0",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:                 "ceph.osd_fsid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:                 "ceph.osd_id": "1",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:                 "ceph.type": "block",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:                 "ceph.vdo": "0"
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:             },
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:             "type": "block",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:             "vg_name": "ceph_vg1"
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:         }
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:     ],
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:     "2": [
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:         {
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:             "devices": [
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:                 "/dev/loop5"
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:             ],
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:             "lv_name": "ceph_lv2",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:             "lv_size": "21470642176",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d82baeae-c742-50a4-b8f6-b5257c8a2c92,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=9ec96548-6af1-446f-a950-cee8e59f7654,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:             "lv_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:             "name": "ceph_lv2",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:             "tags": {
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:                 "ceph.block_uuid": "KIYLpx-cAJh-dXdC-HBYK-DU7e-YztA-AYUAVR",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:                 "ceph.cluster_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:                 "ceph.cluster_name": "ceph",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:                 "ceph.crush_device_class": "",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:                 "ceph.encrypted": "0",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:                 "ceph.osd_fsid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:                 "ceph.osd_id": "2",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:                 "ceph.type": "block",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:                 "ceph.vdo": "0"
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:             },
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:             "type": "block",
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:             "vg_name": "ceph_vg2"
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:         }
Nov 25 18:13:36 compute-0 frosty_hugle[478407]:     ]
Nov 25 18:13:36 compute-0 frosty_hugle[478407]: }
Nov 25 18:13:36 compute-0 systemd[1]: libpod-07016a794023488d1a936aef510ae0271d769d42dc929c3634ea815010059de1.scope: Deactivated successfully.
Nov 25 18:13:36 compute-0 podman[478391]: 2025-11-25 18:13:36.170421698 +0000 UTC m=+0.921507964 container died 07016a794023488d1a936aef510ae0271d769d42dc929c3634ea815010059de1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hugle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Nov 25 18:13:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec7ef088f1e93fae3b32974b635ed20b9f6d04a2b0bae711e6639a2a83ced2fe-merged.mount: Deactivated successfully.
Nov 25 18:13:36 compute-0 podman[478391]: 2025-11-25 18:13:36.221810276 +0000 UTC m=+0.972896542 container remove 07016a794023488d1a936aef510ae0271d769d42dc929c3634ea815010059de1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hugle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:13:36 compute-0 systemd[1]: libpod-conmon-07016a794023488d1a936aef510ae0271d769d42dc929c3634ea815010059de1.scope: Deactivated successfully.
Nov 25 18:13:36 compute-0 sudo[478285]: pam_unix(sudo:session): session closed for user root
Nov 25 18:13:36 compute-0 sudo[478426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:13:36 compute-0 sudo[478426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:13:36 compute-0 sudo[478426]: pam_unix(sudo:session): session closed for user root
Nov 25 18:13:36 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:13:36 compute-0 sudo[478451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 18:13:36 compute-0 sudo[478451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:13:36 compute-0 sudo[478451]: pam_unix(sudo:session): session closed for user root
Nov 25 18:13:36 compute-0 sudo[478476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:13:36 compute-0 sudo[478476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:13:36 compute-0 sudo[478476]: pam_unix(sudo:session): session closed for user root
Nov 25 18:13:36 compute-0 sudo[478501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/d82baeae-c742-50a4-b8f6-b5257c8a2c92/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid d82baeae-c742-50a4-b8f6-b5257c8a2c92 -- raw list --format json
Nov 25 18:13:36 compute-0 sudo[478501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:13:36 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4524: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:36 compute-0 podman[478567]: 2025-11-25 18:13:36.830527839 +0000 UTC m=+0.042450406 container create 77bb1c7a261d38ac48b0b0501db34bb53cbe851058a619370ce09b0c0d4923cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_moore, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 18:13:36 compute-0 systemd[1]: Started libpod-conmon-77bb1c7a261d38ac48b0b0501db34bb53cbe851058a619370ce09b0c0d4923cf.scope.
Nov 25 18:13:36 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:13:36 compute-0 podman[478567]: 2025-11-25 18:13:36.810902485 +0000 UTC m=+0.022825042 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:13:36 compute-0 podman[478567]: 2025-11-25 18:13:36.919053308 +0000 UTC m=+0.130975845 container init 77bb1c7a261d38ac48b0b0501db34bb53cbe851058a619370ce09b0c0d4923cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_moore, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 18:13:36 compute-0 podman[478567]: 2025-11-25 18:13:36.932922066 +0000 UTC m=+0.144844633 container start 77bb1c7a261d38ac48b0b0501db34bb53cbe851058a619370ce09b0c0d4923cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_moore, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:13:36 compute-0 podman[478567]: 2025-11-25 18:13:36.93677035 +0000 UTC m=+0.148692907 container attach 77bb1c7a261d38ac48b0b0501db34bb53cbe851058a619370ce09b0c0d4923cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:13:36 compute-0 great_moore[478584]: 167 167
Nov 25 18:13:36 compute-0 systemd[1]: libpod-77bb1c7a261d38ac48b0b0501db34bb53cbe851058a619370ce09b0c0d4923cf.scope: Deactivated successfully.
Nov 25 18:13:36 compute-0 conmon[478584]: conmon 77bb1c7a261d38ac48b0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-77bb1c7a261d38ac48b0b0501db34bb53cbe851058a619370ce09b0c0d4923cf.scope/container/memory.events
Nov 25 18:13:36 compute-0 podman[478567]: 2025-11-25 18:13:36.940045689 +0000 UTC m=+0.151968266 container died 77bb1c7a261d38ac48b0b0501db34bb53cbe851058a619370ce09b0c0d4923cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_moore, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 18:13:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-942ef13ac88d04b5005221c2c538b4afb85d4a35a07f7993d4c83c6451c58d10-merged.mount: Deactivated successfully.
Nov 25 18:13:36 compute-0 podman[478567]: 2025-11-25 18:13:36.977513829 +0000 UTC m=+0.189436366 container remove 77bb1c7a261d38ac48b0b0501db34bb53cbe851058a619370ce09b0c0d4923cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 18:13:37 compute-0 systemd[1]: libpod-conmon-77bb1c7a261d38ac48b0b0501db34bb53cbe851058a619370ce09b0c0d4923cf.scope: Deactivated successfully.
Nov 25 18:13:37 compute-0 podman[478608]: 2025-11-25 18:13:37.185331554 +0000 UTC m=+0.056170370 container create 8fd7b42e41551c3922d71f96348d5f08fb7dbcf757f1e901e1ad7c38c26c991f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_antonelli, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 18:13:37 compute-0 systemd[1]: Started libpod-conmon-8fd7b42e41551c3922d71f96348d5f08fb7dbcf757f1e901e1ad7c38c26c991f.scope.
Nov 25 18:13:37 compute-0 podman[478608]: 2025-11-25 18:13:37.159198973 +0000 UTC m=+0.030037859 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 18:13:37 compute-0 systemd[1]: Started libcrun container.
Nov 25 18:13:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97cf8090b723629ddbd83e6cab75b51939302f49054b19a058ff995bd8033947/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 18:13:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97cf8090b723629ddbd83e6cab75b51939302f49054b19a058ff995bd8033947/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 18:13:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97cf8090b723629ddbd83e6cab75b51939302f49054b19a058ff995bd8033947/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 18:13:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97cf8090b723629ddbd83e6cab75b51939302f49054b19a058ff995bd8033947/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 18:13:37 compute-0 podman[478608]: 2025-11-25 18:13:37.284552173 +0000 UTC m=+0.155391059 container init 8fd7b42e41551c3922d71f96348d5f08fb7dbcf757f1e901e1ad7c38c26c991f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_antonelli, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 18:13:37 compute-0 podman[478608]: 2025-11-25 18:13:37.298176184 +0000 UTC m=+0.169015010 container start 8fd7b42e41551c3922d71f96348d5f08fb7dbcf757f1e901e1ad7c38c26c991f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 18:13:37 compute-0 podman[478608]: 2025-11-25 18:13:37.302159663 +0000 UTC m=+0.172998579 container attach 8fd7b42e41551c3922d71f96348d5f08fb7dbcf757f1e901e1ad7c38c26c991f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_antonelli, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 18:13:37 compute-0 ceph-mon[74985]: pgmap v4524: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:38 compute-0 nova_compute[254092]: 2025-11-25 18:13:38.184 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:13:38 compute-0 confident_antonelli[478624]: {
Nov 25 18:13:38 compute-0 confident_antonelli[478624]:     "29af116d-8685-452a-b634-7ce2a4974adc": {
Nov 25 18:13:38 compute-0 confident_antonelli[478624]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:13:38 compute-0 confident_antonelli[478624]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 18:13:38 compute-0 confident_antonelli[478624]:         "osd_id": 1,
Nov 25 18:13:38 compute-0 confident_antonelli[478624]:         "osd_uuid": "29af116d-8685-452a-b634-7ce2a4974adc",
Nov 25 18:13:38 compute-0 confident_antonelli[478624]:         "type": "bluestore"
Nov 25 18:13:38 compute-0 confident_antonelli[478624]:     },
Nov 25 18:13:38 compute-0 confident_antonelli[478624]:     "9ec96548-6af1-446f-a950-cee8e59f7654": {
Nov 25 18:13:38 compute-0 confident_antonelli[478624]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:13:38 compute-0 confident_antonelli[478624]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 18:13:38 compute-0 confident_antonelli[478624]:         "osd_id": 2,
Nov 25 18:13:38 compute-0 confident_antonelli[478624]:         "osd_uuid": "9ec96548-6af1-446f-a950-cee8e59f7654",
Nov 25 18:13:38 compute-0 confident_antonelli[478624]:         "type": "bluestore"
Nov 25 18:13:38 compute-0 confident_antonelli[478624]:     },
Nov 25 18:13:38 compute-0 confident_antonelli[478624]:     "ac347129-f7f8-46b6-8e4f-7d01f9efee15": {
Nov 25 18:13:38 compute-0 confident_antonelli[478624]:         "ceph_fsid": "d82baeae-c742-50a4-b8f6-b5257c8a2c92",
Nov 25 18:13:38 compute-0 confident_antonelli[478624]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 18:13:38 compute-0 confident_antonelli[478624]:         "osd_id": 0,
Nov 25 18:13:38 compute-0 confident_antonelli[478624]:         "osd_uuid": "ac347129-f7f8-46b6-8e4f-7d01f9efee15",
Nov 25 18:13:38 compute-0 confident_antonelli[478624]:         "type": "bluestore"
Nov 25 18:13:38 compute-0 confident_antonelli[478624]:     }
Nov 25 18:13:38 compute-0 confident_antonelli[478624]: }
Nov 25 18:13:38 compute-0 systemd[1]: libpod-8fd7b42e41551c3922d71f96348d5f08fb7dbcf757f1e901e1ad7c38c26c991f.scope: Deactivated successfully.
Nov 25 18:13:38 compute-0 podman[478608]: 2025-11-25 18:13:38.281959273 +0000 UTC m=+1.152798079 container died 8fd7b42e41551c3922d71f96348d5f08fb7dbcf757f1e901e1ad7c38c26c991f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 18:13:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-97cf8090b723629ddbd83e6cab75b51939302f49054b19a058ff995bd8033947-merged.mount: Deactivated successfully.
Nov 25 18:13:38 compute-0 podman[478608]: 2025-11-25 18:13:38.354223709 +0000 UTC m=+1.225062505 container remove 8fd7b42e41551c3922d71f96348d5f08fb7dbcf757f1e901e1ad7c38c26c991f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 18:13:38 compute-0 systemd[1]: libpod-conmon-8fd7b42e41551c3922d71f96348d5f08fb7dbcf757f1e901e1ad7c38c26c991f.scope: Deactivated successfully.
Nov 25 18:13:38 compute-0 sudo[478501]: pam_unix(sudo:session): session closed for user root
Nov 25 18:13:38 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 18:13:38 compute-0 podman[478670]: 2025-11-25 18:13:38.393488247 +0000 UTC m=+0.063987732 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 18:13:38 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:13:38 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 18:13:38 compute-0 podman[478658]: 2025-11-25 18:13:38.404223039 +0000 UTC m=+0.084485129 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 18:13:38 compute-0 ceph-mon[74985]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:13:38 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 47e8b7c1-35fb-4c4c-97a4-fa00e1a6d2f1 does not exist
Nov 25 18:13:38 compute-0 ceph-mgr[75280]: [progress WARNING root] complete: ev 85154e3c-0446-4bf2-83f8-4cdab9d54b5d does not exist
Nov 25 18:13:38 compute-0 podman[478676]: 2025-11-25 18:13:38.443870598 +0000 UTC m=+0.109933132 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 18:13:38 compute-0 sudo[478728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 18:13:38 compute-0 sudo[478728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:13:38 compute-0 sudo[478728]: pam_unix(sudo:session): session closed for user root
Nov 25 18:13:38 compute-0 nova_compute[254092]: 2025-11-25 18:13:38.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:13:38 compute-0 sudo[478753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 18:13:38 compute-0 sudo[478753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 18:13:38 compute-0 sudo[478753]: pam_unix(sudo:session): session closed for user root
Nov 25 18:13:38 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4525: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:39 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:13:39 compute-0 ceph-mon[74985]: from='mgr.14130 192.168.122.100:0/4023273529' entity='mgr.compute-0.mavpeh' 
Nov 25 18:13:39 compute-0 ceph-mon[74985]: pgmap v4525: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:39 compute-0 nova_compute[254092]: 2025-11-25 18:13:39.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:13:39 compute-0 nova_compute[254092]: 2025-11-25 18:13:39.529 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 18:13:39 compute-0 nova_compute[254092]: 2025-11-25 18:13:39.530 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 18:13:39 compute-0 nova_compute[254092]: 2025-11-25 18:13:39.530 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:13:39 compute-0 nova_compute[254092]: 2025-11-25 18:13:39.530 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 18:13:39 compute-0 nova_compute[254092]: 2025-11-25 18:13:39.530 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 18:13:39 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 18:13:39 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2614377039' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:13:39 compute-0 nova_compute[254092]: 2025-11-25 18:13:39.978 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 18:13:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:13:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:13:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:13:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:13:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:13:40 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:13:40 compute-0 nova_compute[254092]: 2025-11-25 18:13:40.178 254096 WARNING nova.virt.libvirt.driver [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 18:13:40 compute-0 nova_compute[254092]: 2025-11-25 18:13:40.179 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3534MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 18:13:40 compute-0 nova_compute[254092]: 2025-11-25 18:13:40.179 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 18:13:40 compute-0 nova_compute[254092]: 2025-11-25 18:13:40.179 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 18:13:40 compute-0 nova_compute[254092]: 2025-11-25 18:13:40.317 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 18:13:40 compute-0 nova_compute[254092]: 2025-11-25 18:13:40.318 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 18:13:40 compute-0 nova_compute[254092]: 2025-11-25 18:13:40.349 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 18:13:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Optimize plan auto_2025-11-25_18:13:40
Nov 25 18:13:40 compute-0 ceph-mgr[75280]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 18:13:40 compute-0 ceph-mgr[75280]: [balancer INFO root] do_upmap
Nov 25 18:13:40 compute-0 ceph-mgr[75280]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'vms', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'images', 'default.rgw.control', 'volumes', 'backups']
Nov 25 18:13:40 compute-0 ceph-mgr[75280]: [balancer INFO root] prepared 0/10 changes
Nov 25 18:13:40 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2614377039' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:13:40 compute-0 nova_compute[254092]: 2025-11-25 18:13:40.434 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:13:40 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4526: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:40 compute-0 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 18:13:40 compute-0 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 8400.2 total, 600.0 interval
                                           Cumulative writes: 45K writes, 183K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.02 MB/s
                                           Cumulative WAL: 45K writes, 16K syncs, 2.77 writes per sync, written: 0.17 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.10 MB, 0.00 MB/s
                                           Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 18:13:40 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 18:13:40 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2254113131' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:13:40 compute-0 nova_compute[254092]: 2025-11-25 18:13:40.799 254096 DEBUG oslo_concurrency.processutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 18:13:40 compute-0 nova_compute[254092]: 2025-11-25 18:13:40.805 254096 DEBUG nova.compute.provider_tree [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed in ProviderTree for provider: 4f066da7-306c-41d7-8522-9a9189cacc49 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 18:13:40 compute-0 nova_compute[254092]: 2025-11-25 18:13:40.843 254096 DEBUG nova.scheduler.client.report [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Inventory has not changed for provider 4f066da7-306c-41d7-8522-9a9189cacc49 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 18:13:40 compute-0 nova_compute[254092]: 2025-11-25 18:13:40.845 254096 DEBUG nova.compute.resource_tracker [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 18:13:40 compute-0 nova_compute[254092]: 2025-11-25 18:13:40.845 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:13:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 18:13:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:13:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:13:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:13:40 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:13:41 compute-0 ceph-mgr[75280]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 18:13:41 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 18:13:41 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 18:13:41 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 18:13:41 compute-0 ceph-mgr[75280]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 18:13:41 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:13:41 compute-0 ceph-mon[74985]: pgmap v4526: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:41 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2254113131' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 18:13:41 compute-0 nova_compute[254092]: 2025-11-25 18:13:41.846 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:13:41 compute-0 nova_compute[254092]: 2025-11-25 18:13:41.846 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:13:41 compute-0 nova_compute[254092]: 2025-11-25 18:13:41.846 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 18:13:41 compute-0 nova_compute[254092]: 2025-11-25 18:13:41.847 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 18:13:41 compute-0 nova_compute[254092]: 2025-11-25 18:13:41.871 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 18:13:41 compute-0 nova_compute[254092]: 2025-11-25 18:13:41.871 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:13:42 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4527: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:43 compute-0 nova_compute[254092]: 2025-11-25 18:13:43.186 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:13:43 compute-0 nova_compute[254092]: 2025-11-25 18:13:43.495 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:13:43 compute-0 nova_compute[254092]: 2025-11-25 18:13:43.496 254096 DEBUG nova.compute.manager [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 18:13:43 compute-0 ceph-mon[74985]: pgmap v4527: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:44 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4528: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:45 compute-0 nova_compute[254092]: 2025-11-25 18:13:45.438 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:13:45 compute-0 ceph-mon[74985]: pgmap v4528: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:46 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:13:46 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4529: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:47 compute-0 ceph-mon[74985]: pgmap v4529: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:48 compute-0 nova_compute[254092]: 2025-11-25 18:13:48.188 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:13:48 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4530: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:49 compute-0 ceph-mon[74985]: pgmap v4530: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:50 compute-0 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 18:13:50 compute-0 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 8401.3 total, 600.0 interval
                                           Cumulative writes: 47K writes, 186K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.02 MB/s
                                           Cumulative WAL: 47K writes, 17K syncs, 2.78 writes per sync, written: 0.18 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 18:13:50 compute-0 nova_compute[254092]: 2025-11-25 18:13:50.441 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:13:50 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4531: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:50 compute-0 sshd-session[478822]: Accepted publickey for zuul from 192.168.122.10 port 38446 ssh2: ECDSA SHA256:9KqzpXmppnMwGwVHF2wOKwwhXNcutlJnRXXU19Lreu4
Nov 25 18:13:50 compute-0 systemd-logind[791]: New session 55 of user zuul.
Nov 25 18:13:50 compute-0 systemd[1]: Started Session 55 of User zuul.
Nov 25 18:13:50 compute-0 sshd-session[478822]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 18:13:50 compute-0 sudo[478826]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Nov 25 18:13:50 compute-0 sudo[478826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 18:13:51 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:13:51 compute-0 ceph-mon[74985]: pgmap v4531: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 18:13:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:13:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 18:13:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:13:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 18:13:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:13:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:13:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:13:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:13:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:13:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 18:13:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:13:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 18:13:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:13:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:13:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:13:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 18:13:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:13:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 18:13:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:13:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:13:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:13:52 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 18:13:52 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4532: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:53 compute-0 nova_compute[254092]: 2025-11-25 18:13:53.189 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:13:53 compute-0 ceph-mon[74985]: pgmap v4532: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:54 compute-0 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23309 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:13:54 compute-0 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23311 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:13:54 compute-0 nova_compute[254092]: 2025-11-25 18:13:54.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:13:54 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4533: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:54 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 25 18:13:54 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/601491089' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 18:13:55 compute-0 nova_compute[254092]: 2025-11-25 18:13:55.445 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:13:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 18:13:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1958131460' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 18:13:55 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 18:13:55 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1958131460' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 18:13:55 compute-0 ceph-mon[74985]: from='client.23309 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:13:55 compute-0 ceph-mon[74985]: from='client.23311 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:13:55 compute-0 ceph-mon[74985]: pgmap v4533: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/601491089' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 18:13:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/1958131460' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 18:13:55 compute-0 ceph-mon[74985]: from='client.? 192.168.122.10:0/1958131460' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 18:13:56 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:13:56 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4534: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:57 compute-0 ceph-mon[74985]: pgmap v4534: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:57 compute-0 ovs-vsctl[479109]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 25 18:13:58 compute-0 nova_compute[254092]: 2025-11-25 18:13:58.191 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:13:58 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4535: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:58 compute-0 virtqemud[253880]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 25 18:13:59 compute-0 virtqemud[253880]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 25 18:13:59 compute-0 virtqemud[253880]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 25 18:13:59 compute-0 ceph-mds[102090]: mds.cephfs.compute-0.aidjys asok_command: cache status {prefix=cache status} (starting...)
Nov 25 18:13:59 compute-0 ceph-mds[102090]: mds.cephfs.compute-0.aidjys asok_command: client ls {prefix=client ls} (starting...)
Nov 25 18:13:59 compute-0 ceph-mon[74985]: pgmap v4535: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:13:59 compute-0 lvm[479444]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 18:13:59 compute-0 lvm[479444]: VG ceph_vg0 finished
Nov 25 18:13:59 compute-0 lvm[479448]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 25 18:13:59 compute-0 lvm[479448]: VG ceph_vg2 finished
Nov 25 18:14:00 compute-0 lvm[479482]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 25 18:14:00 compute-0 lvm[479482]: VG ceph_vg1 finished
Nov 25 18:14:00 compute-0 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23319 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:14:00 compute-0 ceph-mds[102090]: mds.cephfs.compute-0.aidjys asok_command: damage ls {prefix=damage ls} (starting...)
Nov 25 18:14:00 compute-0 nova_compute[254092]: 2025-11-25 18:14:00.447 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:14:00 compute-0 ceph-mds[102090]: mds.cephfs.compute-0.aidjys asok_command: dump loads {prefix=dump loads} (starting...)
Nov 25 18:14:00 compute-0 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23321 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:14:00 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4536: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:14:00 compute-0 ceph-mds[102090]: mds.cephfs.compute-0.aidjys asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 25 18:14:00 compute-0 ceph-mds[102090]: mds.cephfs.compute-0.aidjys asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 25 18:14:00 compute-0 ceph-mds[102090]: mds.cephfs.compute-0.aidjys asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 25 18:14:01 compute-0 ceph-mds[102090]: mds.cephfs.compute-0.aidjys asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 25 18:14:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 25 18:14:01 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2237477704' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 25 18:14:01 compute-0 ceph-mds[102090]: mds.cephfs.compute-0.aidjys asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 25 18:14:01 compute-0 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23327 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:14:01 compute-0 ceph-mgr[75280]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 25 18:14:01 compute-0 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T18:14:01.323+0000 7f2d477a6640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 25 18:14:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:14:01 compute-0 ceph-mds[102090]: mds.cephfs.compute-0.aidjys asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 25 18:14:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 18:14:01 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2175176537' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:14:01 compute-0 ceph-mds[102090]: mds.cephfs.compute-0.aidjys asok_command: ops {prefix=ops} (starting...)
Nov 25 18:14:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Nov 25 18:14:01 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2576232190' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 25 18:14:01 compute-0 ceph-mon[74985]: from='client.23319 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:14:01 compute-0 ceph-mon[74985]: from='client.23321 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:14:01 compute-0 ceph-mon[74985]: pgmap v4536: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:14:01 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2237477704' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 25 18:14:01 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2175176537' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 18:14:01 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2576232190' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 25 18:14:01 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Nov 25 18:14:01 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3295279610' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 25 18:14:02 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 25 18:14:02 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1302530583' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 18:14:02 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Nov 25 18:14:02 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/755463129' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 25 18:14:02 compute-0 ceph-mds[102090]: mds.cephfs.compute-0.aidjys asok_command: session ls {prefix=session ls} (starting...)
Nov 25 18:14:02 compute-0 ceph-mds[102090]: mds.cephfs.compute-0.aidjys asok_command: status {prefix=status} (starting...)
Nov 25 18:14:02 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 25 18:14:02 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3318995126' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 18:14:02 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4537: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:14:02 compute-0 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23341 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:14:02 compute-0 ceph-mon[74985]: from='client.23327 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:14:02 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3295279610' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 25 18:14:02 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1302530583' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 18:14:02 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/755463129' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 25 18:14:02 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3318995126' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 18:14:02 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 25 18:14:02 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/407612323' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 18:14:03 compute-0 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23345 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:14:03 compute-0 nova_compute[254092]: 2025-11-25 18:14:03.192 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:14:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 25 18:14:03 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3252542466' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 18:14:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 25 18:14:03 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1911525299' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 25 18:14:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 25 18:14:03 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3137598177' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 25 18:14:03 compute-0 ceph-mon[74985]: pgmap v4537: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:14:03 compute-0 ceph-mon[74985]: from='client.23341 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:14:03 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/407612323' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 18:14:03 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3252542466' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 18:14:03 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1911525299' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 25 18:14:03 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3137598177' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 25 18:14:03 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Nov 25 18:14:03 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3332770137' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 25 18:14:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 25 18:14:04 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1835828571' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 18:14:04 compute-0 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23357 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:14:04 compute-0 ceph-mgr[75280]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 25 18:14:04 compute-0 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T18:14:04.387+0000 7f2d477a6640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 25 18:14:04 compute-0 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23359 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:14:04 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4538: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:14:04 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Nov 25 18:14:04 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4089263276' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 25 18:14:04 compute-0 ceph-mon[74985]: from='client.23345 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:14:04 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3332770137' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 25 18:14:04 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1835828571' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 18:14:04 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4089263276' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 25 18:14:04 compute-0 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23363 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:14:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Nov 25 18:14:05 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2307039187' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 25 18:14:05 compute-0 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23367 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:14:05 compute-0 nova_compute[254092]: 2025-11-25 18:14:05.450 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:14:05 compute-0 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23369 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:14:05 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 25 18:14:05 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1013756043' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 18:14:05 compute-0 ceph-mon[74985]: from='client.23357 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:14:05 compute-0 ceph-mon[74985]: from='client.23359 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:14:05 compute-0 ceph-mon[74985]: pgmap v4538: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:14:05 compute-0 ceph-mon[74985]: from='client.23363 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:14:05 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2307039187' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 25 18:14:05 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1013756043' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 18:14:06 compute-0 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23373 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:39:45.499969+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300376064 unmapped: 58040320 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:39:46.500155+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300376064 unmapped: 58040320 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:39:47.500308+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300376064 unmapped: 58040320 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:39:48.500519+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300376064 unmapped: 58040320 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:39:49.500747+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300376064 unmapped: 58040320 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:39:50.500927+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300376064 unmapped: 58040320 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:39:51.501103+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300384256 unmapped: 58032128 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:39:52.501286+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300384256 unmapped: 58032128 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:39:53.501549+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300392448 unmapped: 58023936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:39:54.501753+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300392448 unmapped: 58023936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:39:55.501952+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300400640 unmapped: 58015744 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:39:56.502140+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300400640 unmapped: 58015744 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:39:57.502339+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300400640 unmapped: 58015744 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:39:58.502530+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300400640 unmapped: 58015744 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:39:59.502720+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300408832 unmapped: 58007552 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:00.502920+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300408832 unmapped: 58007552 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:01.503097+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300425216 unmapped: 57991168 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:02.503325+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300425216 unmapped: 57991168 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:03.503576+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300425216 unmapped: 57991168 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:04.503780+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300425216 unmapped: 57991168 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:05.503945+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300433408 unmapped: 57982976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:06.504127+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300433408 unmapped: 57982976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:07.504303+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300441600 unmapped: 57974784 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:08.504448+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300441600 unmapped: 57974784 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:09.504584+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300441600 unmapped: 57974784 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:10.504800+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300441600 unmapped: 57974784 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:11.504989+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300441600 unmapped: 57974784 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:12.505275+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300441600 unmapped: 57974784 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:13.505554+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300441600 unmapped: 57974784 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:14.505779+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300441600 unmapped: 57974784 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:15.505928+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300449792 unmapped: 57966592 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:16.506055+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300449792 unmapped: 57966592 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:17.506199+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300449792 unmapped: 57966592 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:18.506429+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300449792 unmapped: 57966592 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:19.506794+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300449792 unmapped: 57966592 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:20.506933+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300457984 unmapped: 57958400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:21.507107+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300457984 unmapped: 57958400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:22.507296+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300457984 unmapped: 57958400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:23.507476+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300457984 unmapped: 57958400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 ms_handle_reset con 0x55750c646400 session 0x55750d8dbc20
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: handle_auth_request added challenge on 0x557511c03c00
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:24.507710+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300457984 unmapped: 57958400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:25.507838+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 57942016 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:26.508012+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 57942016 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:27.508155+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 57942016 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:28.508296+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 57942016 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:29.508533+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 57942016 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:30.508715+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 57942016 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:31.508894+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 57925632 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:32.509170+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 57925632 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:33.509316+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 57925632 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:34.509443+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 57925632 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:35.509567+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 57925632 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:36.509700+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 57925632 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:37.509839+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 57925632 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:38.509976+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 57925632 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:39.510104+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300498944 unmapped: 57917440 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:40.510278+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300498944 unmapped: 57917440 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:41.510478+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300498944 unmapped: 57917440 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:42.510676+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300498944 unmapped: 57917440 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:43.510948+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300507136 unmapped: 57909248 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:44.511148+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300507136 unmapped: 57909248 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:45.511398+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300507136 unmapped: 57909248 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:46.511541+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300515328 unmapped: 57901056 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:47.511680+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300515328 unmapped: 57901056 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:48.511843+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300523520 unmapped: 57892864 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:49.512032+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300531712 unmapped: 57884672 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:50.512246+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300531712 unmapped: 57884672 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:51.512438+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300539904 unmapped: 57876480 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:52.512673+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300539904 unmapped: 57876480 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:53.512872+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300539904 unmapped: 57876480 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:54.512991+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300548096 unmapped: 57868288 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:55.513158+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300548096 unmapped: 57868288 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:56.513374+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300548096 unmapped: 57868288 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:57.513534+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300556288 unmapped: 57860096 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:58.513749+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300556288 unmapped: 57860096 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:59.513933+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300556288 unmapped: 57860096 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:00.514093+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300556288 unmapped: 57860096 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:01.514234+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300556288 unmapped: 57860096 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:02.514570+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300564480 unmapped: 57851904 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:03.514749+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300564480 unmapped: 57851904 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:04.515044+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300572672 unmapped: 57843712 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:05.515219+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300572672 unmapped: 57843712 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:06.515391+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300572672 unmapped: 57843712 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:07.515605+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300572672 unmapped: 57843712 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:08.515992+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300572672 unmapped: 57843712 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:09.516141+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300572672 unmapped: 57843712 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:10.516314+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300572672 unmapped: 57843712 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:11.516437+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300572672 unmapped: 57843712 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:12.516592+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300580864 unmapped: 57835520 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:13.516899+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300580864 unmapped: 57835520 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:14.517117+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300580864 unmapped: 57835520 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:15.517261+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300580864 unmapped: 57835520 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:16.517489+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300580864 unmapped: 57835520 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:17.517754+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300580864 unmapped: 57835520 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:18.517954+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300589056 unmapped: 57827328 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:19.518094+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300597248 unmapped: 57819136 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:20.518273+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300605440 unmapped: 57810944 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:21.518425+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300605440 unmapped: 57810944 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:22.518578+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300605440 unmapped: 57810944 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:23.518898+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300605440 unmapped: 57810944 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:24.519122+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300605440 unmapped: 57810944 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:25.519406+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300605440 unmapped: 57810944 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:26.519786+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300605440 unmapped: 57810944 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:27.519947+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300613632 unmapped: 57802752 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:28.520092+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300613632 unmapped: 57802752 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:29.520345+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300613632 unmapped: 57802752 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:30.520492+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300613632 unmapped: 57802752 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:31.520734+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300621824 unmapped: 57794560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:32.520899+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300621824 unmapped: 57794560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:33.521152+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300621824 unmapped: 57794560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:34.521326+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300630016 unmapped: 57786368 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:35.521533+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300630016 unmapped: 57786368 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:36.521711+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300638208 unmapped: 57778176 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:37.521960+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300638208 unmapped: 57778176 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:38.522205+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300638208 unmapped: 57778176 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:39.522438+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300646400 unmapped: 57769984 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:40.522744+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300646400 unmapped: 57769984 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:41.523051+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300646400 unmapped: 57769984 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:42.523318+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300654592 unmapped: 57761792 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:43.523581+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300654592 unmapped: 57761792 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:44.523830+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300662784 unmapped: 57753600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:45.524014+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300662784 unmapped: 57753600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:46.524194+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300662784 unmapped: 57753600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:47.524445+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300662784 unmapped: 57753600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:48.524729+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300662784 unmapped: 57753600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:49.524914+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300662784 unmapped: 57753600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:50.525107+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300670976 unmapped: 57745408 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:51.525319+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300679168 unmapped: 57737216 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:52.525507+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300679168 unmapped: 57737216 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:53.525741+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300687360 unmapped: 57729024 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:54.525913+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300687360 unmapped: 57729024 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:55.526054+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300687360 unmapped: 57729024 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:56.526269+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300687360 unmapped: 57729024 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:57.526418+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300687360 unmapped: 57729024 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:58.526617+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300687360 unmapped: 57729024 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:59.526830+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300687360 unmapped: 57729024 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:00.527063+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 57720832 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:01.527275+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 57720832 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:02.527496+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 57720832 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:03.527746+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 57720832 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:04.527966+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:05.528574+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 57720832 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:06.529257+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 57720832 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:07.529413+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 57720832 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:08.529604+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300703744 unmapped: 57712640 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:09.529835+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300703744 unmapped: 57712640 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:10.529987+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300703744 unmapped: 57712640 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:11.530136+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300703744 unmapped: 57712640 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:12.530320+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300703744 unmapped: 57712640 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:13.530518+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300703744 unmapped: 57712640 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:14.530691+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300711936 unmapped: 57704448 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:15.530875+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300728320 unmapped: 57688064 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:16.531068+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300728320 unmapped: 57688064 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:17.531234+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300728320 unmapped: 57688064 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:18.544275+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300728320 unmapped: 57688064 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:19.544463+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300728320 unmapped: 57688064 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:20.544629+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300744704 unmapped: 57671680 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:21.544894+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300744704 unmapped: 57671680 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:22.545131+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300744704 unmapped: 57671680 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:23.545407+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300744704 unmapped: 57671680 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:24.545590+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300744704 unmapped: 57671680 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:25.545775+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300752896 unmapped: 57663488 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:26.545965+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300752896 unmapped: 57663488 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:27.546171+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300752896 unmapped: 57663488 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:28.546325+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300752896 unmapped: 57663488 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:29.546554+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300752896 unmapped: 57663488 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:30.546749+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300752896 unmapped: 57663488 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:31.546964+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300761088 unmapped: 57655296 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:32.547200+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300777472 unmapped: 57638912 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:33.547561+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300777472 unmapped: 57638912 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:34.547768+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300777472 unmapped: 57638912 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:35.547996+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300777472 unmapped: 57638912 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:36.549205+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300777472 unmapped: 57638912 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:37.549416+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300777472 unmapped: 57638912 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:38.549777+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300785664 unmapped: 57630720 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:39.549984+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300785664 unmapped: 57630720 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:40.550238+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300785664 unmapped: 57630720 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:41.550435+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 57622528 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:42.550604+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 57622528 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:43.550832+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 57622528 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:44.551030+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 57622528 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:45.551313+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 57622528 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:46.551522+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 57622528 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:47.551722+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 57622528 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:48.551929+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 57622528 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:49.552153+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300810240 unmapped: 57606144 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:50.552365+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300810240 unmapped: 57606144 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:51.552717+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300810240 unmapped: 57606144 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:52.552956+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300810240 unmapped: 57606144 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:53.553247+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300810240 unmapped: 57606144 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:54.553473+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300810240 unmapped: 57606144 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:55.553608+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300810240 unmapped: 57606144 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:56.553746+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300818432 unmapped: 57597952 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:57.553939+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300818432 unmapped: 57597952 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:58.554145+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300818432 unmapped: 57597952 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:59.554366+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300818432 unmapped: 57597952 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:00.554559+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300818432 unmapped: 57597952 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:01.554755+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300818432 unmapped: 57597952 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:02.554926+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300818432 unmapped: 57597952 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:03.555139+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300826624 unmapped: 57589760 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:04.555279+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300826624 unmapped: 57589760 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:05.555442+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300843008 unmapped: 57573376 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:06.555602+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300843008 unmapped: 57573376 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:07.555834+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300843008 unmapped: 57573376 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:08.556134+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300843008 unmapped: 57573376 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:09.556316+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300843008 unmapped: 57573376 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:10.556529+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300843008 unmapped: 57573376 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:11.556715+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300851200 unmapped: 57565184 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:12.556962+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300851200 unmapped: 57565184 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:13.557204+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300859392 unmapped: 57556992 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:14.557359+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300859392 unmapped: 57556992 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:15.557543+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300859392 unmapped: 57556992 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:16.557763+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300859392 unmapped: 57556992 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:17.558566+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300859392 unmapped: 57556992 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:18.558717+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300859392 unmapped: 57556992 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:19.558864+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300859392 unmapped: 57556992 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:20.559141+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300859392 unmapped: 57556992 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:21.559317+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 57548800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:22.559547+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 57548800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:23.559779+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 57548800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:24.560006+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 57548800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:25.560183+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 57548800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:26.560385+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 57548800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:27.560596+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300875776 unmapped: 57540608 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:28.560826+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300883968 unmapped: 57532416 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:29.561036+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300883968 unmapped: 57532416 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:30.561185+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300892160 unmapped: 57524224 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:31.561337+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300892160 unmapped: 57524224 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:32.561561+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300892160 unmapped: 57524224 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:33.561901+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300892160 unmapped: 57524224 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:34.562136+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300892160 unmapped: 57524224 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:35.562326+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300892160 unmapped: 57524224 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:36.562476+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300900352 unmapped: 57516032 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:37.562608+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6601.5 total, 600.0 interval
                                           Cumulative writes: 39K writes, 155K keys, 39K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s
                                           Cumulative WAL: 39K writes, 14K syncs, 2.77 writes per sync, written: 0.15 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 230 writes, 420 keys, 230 commit groups, 1.0 writes per commit group, ingest: 0.17 MB, 0.00 MB/s
                                           Interval WAL: 230 writes, 108 syncs, 2.13 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300900352 unmapped: 57516032 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:38.562768+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300900352 unmapped: 57516032 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:39.562964+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300908544 unmapped: 57507840 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:40.563096+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300908544 unmapped: 57507840 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:41.563246+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300908544 unmapped: 57507840 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:42.563377+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300924928 unmapped: 57491456 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:43.563532+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300924928 unmapped: 57491456 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:44.563704+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300924928 unmapped: 57491456 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:45.563856+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300924928 unmapped: 57491456 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:46.563991+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300924928 unmapped: 57491456 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:47.564142+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 57483264 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:48.564350+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 57483264 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:49.564521+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 57483264 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:50.564756+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 57483264 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:51.564895+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 57483264 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:52.565125+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 57483264 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:53.565301+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 57483264 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:54.565516+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300941312 unmapped: 57475072 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:55.565760+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300941312 unmapped: 57475072 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:56.565933+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300941312 unmapped: 57475072 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:57.566056+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300941312 unmapped: 57475072 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:58.566205+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300941312 unmapped: 57475072 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:59.566403+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300949504 unmapped: 57466880 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:00.566541+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300949504 unmapped: 57466880 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:01.566772+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300965888 unmapped: 57450496 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:02.566976+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300965888 unmapped: 57450496 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:03.567199+0000)
Nov 25 18:14:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 25 18:14:06 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2877039370' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300965888 unmapped: 57450496 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:04.567316+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300965888 unmapped: 57450496 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:05.567462+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 57442304 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:06.567593+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 57442304 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:07.567787+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 57442304 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:08.567910+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 57442304 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:09.568058+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300982272 unmapped: 57434112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:10.568222+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300982272 unmapped: 57434112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:11.568358+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300982272 unmapped: 57434112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:12.568483+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300982272 unmapped: 57434112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:13.568699+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300982272 unmapped: 57434112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:14.568840+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300990464 unmapped: 57425920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:15.568979+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300998656 unmapped: 57417728 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:16.569152+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300998656 unmapped: 57417728 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:17.569284+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300998656 unmapped: 57417728 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:18.569401+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300998656 unmapped: 57417728 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:19.569570+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301006848 unmapped: 57409536 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:20.569711+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301006848 unmapped: 57409536 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:21.569854+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301006848 unmapped: 57409536 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:22.570027+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301006848 unmapped: 57409536 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:23.570784+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301006848 unmapped: 57409536 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:24.570920+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301006848 unmapped: 57409536 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:25.571087+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301015040 unmapped: 57401344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:26.571251+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301015040 unmapped: 57401344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:27.571416+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301015040 unmapped: 57401344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:28.571554+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301031424 unmapped: 57384960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:29.571742+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301031424 unmapped: 57384960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:30.571943+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301031424 unmapped: 57384960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:31.572139+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301039616 unmapped: 57376768 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:32.572337+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301039616 unmapped: 57376768 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:33.572585+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 57360384 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:34.572743+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 57360384 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:35.572919+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 57360384 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:36.573103+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 57360384 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:37.573320+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 57360384 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:38.573475+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 57360384 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:39.573633+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301064192 unmapped: 57352192 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:40.573875+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301072384 unmapped: 57344000 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:41.574008+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301072384 unmapped: 57344000 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:42.574137+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301072384 unmapped: 57344000 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:43.574411+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 57335808 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:44.574544+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 57335808 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:45.574701+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 57335808 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:46.574837+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 57335808 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:47.574983+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 57335808 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:48.575811+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 57335808 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:49.575949+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301105152 unmapped: 57311232 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:50.576203+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301105152 unmapped: 57311232 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:51.576382+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301105152 unmapped: 57311232 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:52.576544+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301105152 unmapped: 57311232 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:53.576724+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301105152 unmapped: 57311232 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:54.576924+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301105152 unmapped: 57311232 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:55.577078+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301105152 unmapped: 57311232 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:56.577206+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301105152 unmapped: 57311232 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:57.577372+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301113344 unmapped: 57303040 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:58.577542+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301113344 unmapped: 57303040 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:59.577693+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301113344 unmapped: 57303040 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:00.577840+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301113344 unmapped: 57303040 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:01.578067+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301113344 unmapped: 57303040 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:02.578223+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301113344 unmapped: 57303040 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:03.578403+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 57294848 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:04.578684+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 57294848 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:05.578899+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 57294848 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:06.579084+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 57294848 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:07.579237+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 57294848 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:08.579381+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 57294848 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:09.579536+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 57294848 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:10.579819+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301129728 unmapped: 57286656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:11.579964+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301129728 unmapped: 57286656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:12.580113+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301129728 unmapped: 57286656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:13.580315+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301129728 unmapped: 57286656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:14.580527+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301129728 unmapped: 57286656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:15.580709+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301137920 unmapped: 57278464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:16.580882+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301137920 unmapped: 57278464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:17.581068+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301137920 unmapped: 57278464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:18.581197+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301137920 unmapped: 57278464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:19.581763+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301146112 unmapped: 57270272 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:20.581886+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301146112 unmapped: 57270272 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:21.582019+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301146112 unmapped: 57270272 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:22.582187+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301146112 unmapped: 57270272 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:23.582358+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301154304 unmapped: 57262080 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:24.582488+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301154304 unmapped: 57262080 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:25.582630+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301154304 unmapped: 57262080 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:26.582788+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301162496 unmapped: 57253888 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:27.582930+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301162496 unmapped: 57253888 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:28.583097+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301178880 unmapped: 57237504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:29.583285+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301178880 unmapped: 57237504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:30.583429+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301178880 unmapped: 57237504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:31.583817+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301178880 unmapped: 57237504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:32.583990+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 599.576416016s of 600.132080078s, submitted: 90
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301178880 unmapped: 57237504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:33.584151+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301178880 unmapped: 57237504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:34.584286+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299909120 unmapped: 58507264 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:35.584399+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299933696 unmapped: 58482688 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:36.584573+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299933696 unmapped: 58482688 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:37.584711+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299933696 unmapped: 58482688 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:38.584894+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 58474496 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:39.585046+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 58474496 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:40.585235+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 58474496 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:41.585416+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 58474496 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:42.585628+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 58474496 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:43.585947+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 58474496 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:44.586131+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 58474496 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:45.586282+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 58474496 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:46.586497+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 58474496 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:47.586672+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 58466304 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:48.586862+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 58466304 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:49.587043+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 58466304 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:50.587203+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 58466304 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:51.587400+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 58466304 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:52.587556+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 58466304 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:53.587820+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 58466304 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:54.587998+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 58466304 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:55.588118+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 58466304 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:56.588292+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 58466304 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:57.588433+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 58466304 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:58.588602+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 58466304 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:59.588718+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 58466304 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:00.588899+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299958272 unmapped: 58458112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:01.589060+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299958272 unmapped: 58458112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:02.589209+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299958272 unmapped: 58458112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:03.589407+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299958272 unmapped: 58458112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:04.589566+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299958272 unmapped: 58458112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:05.589751+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299958272 unmapped: 58458112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:06.589947+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299958272 unmapped: 58458112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:07.590093+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299966464 unmapped: 58449920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:08.590252+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299966464 unmapped: 58449920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:09.590384+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299966464 unmapped: 58449920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:10.590513+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299966464 unmapped: 58449920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:11.590818+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299966464 unmapped: 58449920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:12.590997+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299966464 unmapped: 58449920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:13.591244+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299966464 unmapped: 58449920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:14.591394+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299974656 unmapped: 58441728 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:15.591560+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299974656 unmapped: 58441728 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:16.591748+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299974656 unmapped: 58441728 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:17.591944+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299974656 unmapped: 58441728 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:18.592217+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299974656 unmapped: 58441728 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:19.592418+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299974656 unmapped: 58441728 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:20.592597+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299974656 unmapped: 58441728 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:21.592696+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299974656 unmapped: 58441728 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:22.592911+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299982848 unmapped: 58433536 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:23.593151+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299982848 unmapped: 58433536 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:24.593339+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:25.593770+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299982848 unmapped: 58433536 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:26.593941+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299982848 unmapped: 58433536 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:27.594065+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299982848 unmapped: 58433536 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:28.594273+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 58425344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:29.594482+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 58425344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:30.594706+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 58425344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:31.594903+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299991040 unmapped: 58425344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:32.595082+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 299999232 unmapped: 58417152 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:33.595324+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300007424 unmapped: 58408960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:34.595483+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300007424 unmapped: 58408960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:35.595684+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300007424 unmapped: 58408960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:36.595818+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300007424 unmapped: 58408960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:37.595956+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300015616 unmapped: 58400768 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:38.596084+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300015616 unmapped: 58400768 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:39.596231+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300015616 unmapped: 58400768 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:40.596379+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300015616 unmapped: 58400768 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:41.596508+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300023808 unmapped: 58392576 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:42.596777+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300023808 unmapped: 58392576 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:43.597733+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300032000 unmapped: 58384384 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:44.597874+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300032000 unmapped: 58384384 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:45.598088+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300032000 unmapped: 58384384 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:46.598250+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300040192 unmapped: 58376192 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:47.598496+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300040192 unmapped: 58376192 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:48.598724+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300040192 unmapped: 58376192 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:49.598954+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300048384 unmapped: 58368000 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:50.599159+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300048384 unmapped: 58368000 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:51.599359+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300048384 unmapped: 58368000 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:52.599904+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300048384 unmapped: 58368000 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:53.600128+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300048384 unmapped: 58368000 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:54.600308+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300048384 unmapped: 58368000 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:55.600459+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300056576 unmapped: 58359808 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:56.600774+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300064768 unmapped: 58351616 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:57.600926+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300064768 unmapped: 58351616 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:58.601069+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300064768 unmapped: 58351616 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:59.601193+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300064768 unmapped: 58351616 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:00.601321+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300064768 unmapped: 58351616 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:01.601523+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300064768 unmapped: 58351616 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:02.601752+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300064768 unmapped: 58351616 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:03.601988+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300072960 unmapped: 58343424 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:04.602169+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300072960 unmapped: 58343424 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:05.602296+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300072960 unmapped: 58343424 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:06.602494+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300072960 unmapped: 58343424 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:07.602697+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300072960 unmapped: 58343424 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:08.602877+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300072960 unmapped: 58343424 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:09.603032+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300072960 unmapped: 58343424 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:10.603221+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300081152 unmapped: 58335232 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:11.603364+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300081152 unmapped: 58335232 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:12.603538+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300089344 unmapped: 58327040 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:13.603693+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300105728 unmapped: 58310656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:14.603933+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300105728 unmapped: 58310656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:15.604108+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300105728 unmapped: 58310656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:16.604241+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300105728 unmapped: 58310656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:17.604403+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300105728 unmapped: 58310656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:18.604540+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300105728 unmapped: 58310656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:19.604731+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300105728 unmapped: 58310656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:20.604860+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300113920 unmapped: 58302464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:21.604988+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300113920 unmapped: 58302464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:22.605157+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300113920 unmapped: 58302464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:23.605321+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300113920 unmapped: 58302464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:24.605489+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300113920 unmapped: 58302464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:25.605629+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300113920 unmapped: 58302464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:26.605816+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300113920 unmapped: 58302464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:27.605957+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300122112 unmapped: 58294272 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:28.606089+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300122112 unmapped: 58294272 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:29.606246+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300130304 unmapped: 58286080 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:30.606395+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300130304 unmapped: 58286080 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:31.606584+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300146688 unmapped: 58269696 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:32.606792+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300146688 unmapped: 58269696 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:33.606980+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300146688 unmapped: 58269696 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:34.607144+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300146688 unmapped: 58269696 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:35.607264+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300146688 unmapped: 58269696 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:36.607455+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300154880 unmapped: 58261504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:37.607617+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300154880 unmapped: 58261504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:38.607777+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300154880 unmapped: 58261504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:39.607903+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300154880 unmapped: 58261504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:40.608049+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300154880 unmapped: 58261504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:41.608234+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300154880 unmapped: 58261504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:42.608417+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300154880 unmapped: 58261504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:43.608697+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300154880 unmapped: 58261504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:44.608896+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300163072 unmapped: 58253312 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:45.609011+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300163072 unmapped: 58253312 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:46.609234+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300163072 unmapped: 58253312 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:47.609359+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300171264 unmapped: 58245120 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:48.609501+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300171264 unmapped: 58245120 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:49.609628+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300171264 unmapped: 58245120 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:50.609792+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300171264 unmapped: 58245120 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:51.609936+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300179456 unmapped: 58236928 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:52.610186+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300179456 unmapped: 58236928 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:53.610370+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 58228736 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:54.610485+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 58228736 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:55.610665+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 58228736 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:56.610881+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 58228736 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:57.611067+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 58228736 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:58.611274+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 58228736 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:59.611392+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300187648 unmapped: 58228736 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:00.611575+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300195840 unmapped: 58220544 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:01.611728+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300195840 unmapped: 58220544 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:02.611868+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300195840 unmapped: 58220544 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:03.612168+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300195840 unmapped: 58220544 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:04.612415+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300195840 unmapped: 58220544 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:05.612562+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300195840 unmapped: 58220544 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:06.612760+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300195840 unmapped: 58220544 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:07.612951+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300204032 unmapped: 58212352 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:08.613121+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300212224 unmapped: 58204160 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:09.613297+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300220416 unmapped: 58195968 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:10.613600+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300220416 unmapped: 58195968 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:11.613916+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300220416 unmapped: 58195968 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:12.614122+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300220416 unmapped: 58195968 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:13.614300+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300220416 unmapped: 58195968 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:14.614511+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300220416 unmapped: 58195968 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:15.614714+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300228608 unmapped: 58187776 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:16.614958+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300228608 unmapped: 58187776 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:17.615094+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300236800 unmapped: 58179584 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:18.615217+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300236800 unmapped: 58179584 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:19.615329+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300244992 unmapped: 58171392 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:20.615437+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300244992 unmapped: 58171392 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:21.615592+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300244992 unmapped: 58171392 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:22.615784+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300244992 unmapped: 58171392 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:23.615955+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300253184 unmapped: 58163200 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:24.616131+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300253184 unmapped: 58163200 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:25.616278+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300253184 unmapped: 58163200 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:26.616454+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300253184 unmapped: 58163200 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:27.616588+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300253184 unmapped: 58163200 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:28.616820+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300253184 unmapped: 58163200 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:29.616989+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300253184 unmapped: 58163200 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:30.617183+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300253184 unmapped: 58163200 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:31.617413+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300253184 unmapped: 58163200 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:32.617772+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300261376 unmapped: 58155008 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:33.618010+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300269568 unmapped: 58146816 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:34.618214+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300269568 unmapped: 58146816 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:35.618490+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300269568 unmapped: 58146816 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:36.618730+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300269568 unmapped: 58146816 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:37.618908+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300269568 unmapped: 58146816 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:38.619048+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300269568 unmapped: 58146816 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:39.619177+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300269568 unmapped: 58146816 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:40.619328+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300269568 unmapped: 58146816 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:41.619513+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300277760 unmapped: 58138624 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:42.619683+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300285952 unmapped: 58130432 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:43.619914+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300285952 unmapped: 58130432 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:44.620074+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300285952 unmapped: 58130432 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:45.620283+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300285952 unmapped: 58130432 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:46.620558+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300285952 unmapped: 58130432 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:47.620747+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300294144 unmapped: 58122240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:48.621001+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300294144 unmapped: 58122240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:49.621164+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300294144 unmapped: 58122240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:50.621320+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300294144 unmapped: 58122240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:51.621438+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300294144 unmapped: 58122240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:52.621564+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300294144 unmapped: 58122240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:53.621782+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300294144 unmapped: 58122240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:54.621942+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300294144 unmapped: 58122240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:55.622120+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300302336 unmapped: 58114048 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:56.622291+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300302336 unmapped: 58114048 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:57.622429+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300302336 unmapped: 58114048 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:58.622591+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300302336 unmapped: 58114048 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:59.622720+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 58105856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:00.622883+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 58105856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:01.623054+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 58105856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:02.623270+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 58105856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:03.623535+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 58105856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:04.623737+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300310528 unmapped: 58105856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:05.623911+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300318720 unmapped: 58097664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:06.624042+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300318720 unmapped: 58097664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:07.624175+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300318720 unmapped: 58097664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:08.624392+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300318720 unmapped: 58097664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:09.624573+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300326912 unmapped: 58089472 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:10.624775+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300326912 unmapped: 58089472 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:11.624916+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300326912 unmapped: 58089472 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:12.625170+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300326912 unmapped: 58089472 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:13.625394+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300343296 unmapped: 58073088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:14.625510+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300343296 unmapped: 58073088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:15.625660+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300343296 unmapped: 58073088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:16.625774+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300343296 unmapped: 58073088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:17.625896+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300343296 unmapped: 58073088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:18.626053+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300343296 unmapped: 58073088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:19.626225+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300343296 unmapped: 58073088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:20.626436+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300351488 unmapped: 58064896 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:21.626547+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300351488 unmapped: 58064896 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:22.626679+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300351488 unmapped: 58064896 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:23.626875+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300351488 unmapped: 58064896 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:24.627077+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300351488 unmapped: 58064896 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:25.627256+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300351488 unmapped: 58064896 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:26.627373+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300359680 unmapped: 58056704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:27.627575+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300359680 unmapped: 58056704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:28.627741+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300367872 unmapped: 58048512 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:29.627867+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300376064 unmapped: 58040320 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:30.628064+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300376064 unmapped: 58040320 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:31.628192+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300376064 unmapped: 58040320 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:32.628466+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300376064 unmapped: 58040320 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:33.628717+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300376064 unmapped: 58040320 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:34.628839+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300384256 unmapped: 58032128 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:35.629022+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300384256 unmapped: 58032128 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:36.629167+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300384256 unmapped: 58032128 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:37.629290+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300392448 unmapped: 58023936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:38.629453+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300392448 unmapped: 58023936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:39.629588+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300392448 unmapped: 58023936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:40.629740+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300392448 unmapped: 58023936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:41.629874+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300392448 unmapped: 58023936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:42.630090+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300392448 unmapped: 58023936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:43.630352+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300400640 unmapped: 58015744 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:44.630505+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300400640 unmapped: 58015744 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:45.630671+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300400640 unmapped: 58015744 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:46.630840+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300408832 unmapped: 58007552 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:47.631003+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300408832 unmapped: 58007552 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:48.631171+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300408832 unmapped: 58007552 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:49.631427+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300408832 unmapped: 58007552 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:50.631703+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 57999360 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:51.631921+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 57999360 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:52.632048+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300417024 unmapped: 57999360 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:53.632188+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300425216 unmapped: 57991168 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:54.632309+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300425216 unmapped: 57991168 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:55.632461+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300425216 unmapped: 57991168 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:56.889325+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300425216 unmapped: 57991168 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:57.889512+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300425216 unmapped: 57991168 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:58.889668+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300425216 unmapped: 57991168 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:59.889861+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300425216 unmapped: 57991168 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:00.889993+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300433408 unmapped: 57982976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:01.890198+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300433408 unmapped: 57982976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:02.890337+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300433408 unmapped: 57982976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:03.890510+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300433408 unmapped: 57982976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:04.890735+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300433408 unmapped: 57982976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:05.890852+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300433408 unmapped: 57982976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:06.891005+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300433408 unmapped: 57982976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:07.891140+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300441600 unmapped: 57974784 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:08.891299+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300441600 unmapped: 57974784 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:09.891433+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300441600 unmapped: 57974784 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:10.891544+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300449792 unmapped: 57966592 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:11.891696+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300457984 unmapped: 57958400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:12.891836+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300457984 unmapped: 57958400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:13.892033+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300457984 unmapped: 57958400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:14.892185+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300457984 unmapped: 57958400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:15.892327+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300466176 unmapped: 57950208 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:16.892470+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 57942016 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:17.892792+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 57942016 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:18.893024+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 57942016 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:19.893266+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 57942016 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:20.893478+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 57942016 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:21.893683+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 57942016 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:22.893894+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300474368 unmapped: 57942016 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:23.894114+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300482560 unmapped: 57933824 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:24.894263+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300482560 unmapped: 57933824 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:25.894460+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300482560 unmapped: 57933824 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:26.894711+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300482560 unmapped: 57933824 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:27.894940+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300482560 unmapped: 57933824 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:28.895093+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300482560 unmapped: 57933824 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:29.895250+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300482560 unmapped: 57933824 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:30.895439+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 57925632 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:31.895613+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300490752 unmapped: 57925632 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:32.895878+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300498944 unmapped: 57917440 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:33.896149+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300498944 unmapped: 57917440 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:34.896391+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300498944 unmapped: 57917440 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:35.896625+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300498944 unmapped: 57917440 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:36.896825+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300498944 unmapped: 57917440 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:37.897035+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300498944 unmapped: 57917440 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:38.897309+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300507136 unmapped: 57909248 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:39.897495+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300507136 unmapped: 57909248 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:40.897724+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300507136 unmapped: 57909248 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:41.898014+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300507136 unmapped: 57909248 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:42.898318+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300507136 unmapped: 57909248 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:43.898809+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300507136 unmapped: 57909248 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:44.899002+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300507136 unmapped: 57909248 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:45.899325+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300507136 unmapped: 57909248 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:46.899560+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300515328 unmapped: 57901056 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:47.900056+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300515328 unmapped: 57901056 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:48.900212+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300515328 unmapped: 57901056 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:49.900475+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300515328 unmapped: 57901056 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:50.900734+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300515328 unmapped: 57901056 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:51.900911+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300515328 unmapped: 57901056 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:52.901059+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300515328 unmapped: 57901056 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:53.901255+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300515328 unmapped: 57901056 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:54.901458+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300531712 unmapped: 57884672 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:55.901801+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300531712 unmapped: 57884672 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:56.901997+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300531712 unmapped: 57884672 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:57.902317+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300531712 unmapped: 57884672 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:58.902566+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300531712 unmapped: 57884672 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:59.902759+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300531712 unmapped: 57884672 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:00.902943+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300531712 unmapped: 57884672 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:01.903104+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300539904 unmapped: 57876480 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:02.903249+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300539904 unmapped: 57876480 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:03.903438+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300548096 unmapped: 57868288 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:04.903706+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300548096 unmapped: 57868288 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:05.903910+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300548096 unmapped: 57868288 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:06.904177+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300548096 unmapped: 57868288 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:07.904392+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300548096 unmapped: 57868288 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:08.904591+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300548096 unmapped: 57868288 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:09.904807+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300548096 unmapped: 57868288 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:10.904932+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300556288 unmapped: 57860096 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:11.905134+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300556288 unmapped: 57860096 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:12.905328+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300564480 unmapped: 57851904 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:13.905690+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300564480 unmapped: 57851904 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:14.905834+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300572672 unmapped: 57843712 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:15.906009+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300572672 unmapped: 57843712 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:16.906219+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300572672 unmapped: 57843712 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:17.906394+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300572672 unmapped: 57843712 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:18.906580+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300580864 unmapped: 57835520 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:19.906755+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300580864 unmapped: 57835520 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:20.906976+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300580864 unmapped: 57835520 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:21.907409+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300580864 unmapped: 57835520 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:22.907745+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300580864 unmapped: 57835520 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:23.908071+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300580864 unmapped: 57835520 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:24.908290+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300580864 unmapped: 57835520 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:25.908501+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300580864 unmapped: 57835520 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:26.908734+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300597248 unmapped: 57819136 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:27.908932+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300597248 unmapped: 57819136 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:28.909110+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300605440 unmapped: 57810944 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:29.909322+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300605440 unmapped: 57810944 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:30.909535+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300605440 unmapped: 57810944 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:31.909781+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300605440 unmapped: 57810944 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:32.909975+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300605440 unmapped: 57810944 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:33.910238+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300605440 unmapped: 57810944 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating renewing rotating keys (they expired before 2025-11-25T17:51:34.910379+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _finish_auth 0
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:34.911613+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300605440 unmapped: 57810944 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:35.910607+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300613632 unmapped: 57802752 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:36.910825+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300621824 unmapped: 57794560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:37.911015+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300621824 unmapped: 57794560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:38.911248+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300621824 unmapped: 57794560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:39.911395+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300621824 unmapped: 57794560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:40.911722+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300630016 unmapped: 57786368 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:41.911931+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300630016 unmapped: 57786368 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:42.912101+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300630016 unmapped: 57786368 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:43.912332+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300638208 unmapped: 57778176 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:44.912491+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300638208 unmapped: 57778176 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:45.912717+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300638208 unmapped: 57778176 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:46.912964+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300638208 unmapped: 57778176 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:47.913144+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300638208 unmapped: 57778176 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:48.913329+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300638208 unmapped: 57778176 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:49.913501+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300646400 unmapped: 57769984 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:50.913762+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300646400 unmapped: 57769984 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:51.914030+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300654592 unmapped: 57761792 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:52.914263+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300654592 unmapped: 57761792 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:53.914551+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300654592 unmapped: 57761792 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:54.914796+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300654592 unmapped: 57761792 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:55.915020+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300654592 unmapped: 57761792 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:56.915259+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300654592 unmapped: 57761792 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:57.915502+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300654592 unmapped: 57761792 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:58.915693+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300662784 unmapped: 57753600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:59.915909+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300662784 unmapped: 57753600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:00.916073+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300679168 unmapped: 57737216 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:01.916319+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300679168 unmapped: 57737216 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:02.916593+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300679168 unmapped: 57737216 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:03.916932+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300679168 unmapped: 57737216 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:04.917154+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300679168 unmapped: 57737216 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:05.917369+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300679168 unmapped: 57737216 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:06.917555+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300679168 unmapped: 57737216 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:07.917828+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300687360 unmapped: 57729024 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:08.917987+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300687360 unmapped: 57729024 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:09.918128+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300687360 unmapped: 57729024 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:10.918290+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300687360 unmapped: 57729024 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:11.918535+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300687360 unmapped: 57729024 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:12.918784+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300687360 unmapped: 57729024 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:13.919130+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300687360 unmapped: 57729024 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:14.919298+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 57720832 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:15.919542+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300695552 unmapped: 57720832 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:16.919772+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300703744 unmapped: 57712640 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:17.919965+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300703744 unmapped: 57712640 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:18.920270+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300703744 unmapped: 57712640 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:19.920520+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300703744 unmapped: 57712640 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:20.920856+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300711936 unmapped: 57704448 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:21.921142+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300711936 unmapped: 57704448 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:22.921443+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300711936 unmapped: 57704448 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:23.921760+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300711936 unmapped: 57704448 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:24.921930+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:25.922210+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300720128 unmapped: 57696256 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:26.922420+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300720128 unmapped: 57696256 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:27.922630+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300720128 unmapped: 57696256 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:28.922853+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300720128 unmapped: 57696256 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:29.923015+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300728320 unmapped: 57688064 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:30.923237+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300728320 unmapped: 57688064 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:31.923436+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300728320 unmapped: 57688064 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:32.923572+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300728320 unmapped: 57688064 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:33.923792+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300744704 unmapped: 57671680 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:34.923966+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300752896 unmapped: 57663488 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:35.924176+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300752896 unmapped: 57663488 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:36.924400+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300752896 unmapped: 57663488 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:37.924776+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300752896 unmapped: 57663488 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:38.925000+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300752896 unmapped: 57663488 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:39.925274+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300752896 unmapped: 57663488 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:40.925564+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300752896 unmapped: 57663488 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:41.925812+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300761088 unmapped: 57655296 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:42.926095+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300761088 unmapped: 57655296 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:43.926520+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300769280 unmapped: 57647104 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:44.926836+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300769280 unmapped: 57647104 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:45.927141+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300769280 unmapped: 57647104 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:46.927508+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300769280 unmapped: 57647104 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:47.927811+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300769280 unmapped: 57647104 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:48.928099+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300769280 unmapped: 57647104 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:49.928334+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300777472 unmapped: 57638912 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:50.928630+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300777472 unmapped: 57638912 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:51.928954+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300777472 unmapped: 57638912 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:52.929240+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300777472 unmapped: 57638912 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:53.929508+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300777472 unmapped: 57638912 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:54.929710+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300777472 unmapped: 57638912 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:55.930051+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300785664 unmapped: 57630720 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:56.930399+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300785664 unmapped: 57630720 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:57.930727+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300785664 unmapped: 57630720 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:58.931037+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300785664 unmapped: 57630720 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:59.931382+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 57622528 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:00.931625+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300793856 unmapped: 57622528 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:01.931977+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300802048 unmapped: 57614336 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:02.932278+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300810240 unmapped: 57606144 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:03.932584+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300810240 unmapped: 57606144 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:04.932911+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300818432 unmapped: 57597952 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:05.933131+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300826624 unmapped: 57589760 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:06.933406+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300826624 unmapped: 57589760 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:07.933709+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300834816 unmapped: 57581568 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:08.934081+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300834816 unmapped: 57581568 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:09.934300+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300834816 unmapped: 57581568 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:10.934531+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300851200 unmapped: 57565184 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:11.934787+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300859392 unmapped: 57556992 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:12.935090+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300859392 unmapped: 57556992 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:13.935450+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300859392 unmapped: 57556992 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:14.935765+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300859392 unmapped: 57556992 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:15.935967+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300859392 unmapped: 57556992 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:16.936146+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 57548800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:17.936394+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 57548800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:18.936539+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 57548800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:19.936693+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 57548800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:20.936838+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 57548800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:21.937027+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 57548800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:22.937206+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 57548800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:23.937452+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 57548800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:24.937688+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 57548800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:25.937858+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 57548800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:26.938015+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300867584 unmapped: 57548800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:27.938142+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300892160 unmapped: 57524224 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:28.938309+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300892160 unmapped: 57524224 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:29.938491+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300892160 unmapped: 57524224 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:30.938632+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300892160 unmapped: 57524224 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:31.938857+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300892160 unmapped: 57524224 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:32.939020+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300892160 unmapped: 57524224 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:33.939252+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300892160 unmapped: 57524224 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:34.939393+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300892160 unmapped: 57524224 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:35.940157+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300892160 unmapped: 57524224 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:36.940353+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300900352 unmapped: 57516032 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7201.5 total, 600.0 interval
                                           Cumulative writes: 39K writes, 155K keys, 39K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s
                                           Cumulative WAL: 39K writes, 14K syncs, 2.77 writes per sync, written: 0.15 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:37.940536+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300908544 unmapped: 57507840 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:38.940694+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300908544 unmapped: 57507840 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:39.940853+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300908544 unmapped: 57507840 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:40.940991+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300908544 unmapped: 57507840 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:41.941203+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300908544 unmapped: 57507840 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:42.941324+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300916736 unmapped: 57499648 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:43.941530+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300924928 unmapped: 57491456 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:44.941708+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300924928 unmapped: 57491456 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:45.941849+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 57483264 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:46.942099+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 57483264 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:47.942283+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 57483264 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:48.942463+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 57483264 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:49.942626+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 57483264 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:50.942821+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 57483264 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:51.943065+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 57483264 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:52.943236+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300933120 unmapped: 57483264 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:53.943417+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300941312 unmapped: 57475072 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:54.943795+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 ms_handle_reset con 0x55750e881000 session 0x55750cdc83c0
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: handle_auth_request added challenge on 0x557512508c00
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300941312 unmapped: 57475072 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: mgrc ms_handle_reset ms_handle_reset con 0x55750ddfd000
Nov 25 18:14:06 compute-0 ceph-osd[90994]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1812945073
Nov 25 18:14:06 compute-0 ceph-osd[90994]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1812945073,v1:192.168.122.100:6801/1812945073]
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: get_auth_request con 0x55750e883400 auth_method 0
Nov 25 18:14:06 compute-0 ceph-osd[90994]: mgrc handle_mgr_configure stats_period=5
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:55.943928+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300949504 unmapped: 57466880 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:56.944082+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300949504 unmapped: 57466880 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:57.944231+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300949504 unmapped: 57466880 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:58.944378+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300949504 unmapped: 57466880 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:59.944572+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300949504 unmapped: 57466880 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:00.944778+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300965888 unmapped: 57450496 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:01.944939+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300965888 unmapped: 57450496 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:02.945155+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300965888 unmapped: 57450496 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:03.945425+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300965888 unmapped: 57450496 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:04.945603+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300965888 unmapped: 57450496 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:05.945757+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300965888 unmapped: 57450496 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:06.945911+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300965888 unmapped: 57450496 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:07.946039+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 57442304 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:08.946282+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 57442304 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:09.946475+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 57442304 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:10.946629+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 57442304 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:11.946838+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 57442304 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:12.946995+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 57442304 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:13.947199+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300974080 unmapped: 57442304 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:14.947450+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300982272 unmapped: 57434112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:15.947604+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300990464 unmapped: 57425920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:16.947811+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300990464 unmapped: 57425920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:17.947973+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300990464 unmapped: 57425920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:18.948136+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300990464 unmapped: 57425920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:19.948312+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300990464 unmapped: 57425920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:20.948474+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300990464 unmapped: 57425920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:21.948958+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300990464 unmapped: 57425920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:22.949135+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300990464 unmapped: 57425920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:23.949290+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300990464 unmapped: 57425920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:24.949436+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300998656 unmapped: 57417728 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:25.949605+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300998656 unmapped: 57417728 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:26.949790+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300998656 unmapped: 57417728 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:27.950047+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 300998656 unmapped: 57417728 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:28.950339+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301006848 unmapped: 57409536 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:29.950491+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301015040 unmapped: 57401344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:30.950724+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301015040 unmapped: 57401344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:31.950923+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301015040 unmapped: 57401344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:32.951086+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301023232 unmapped: 57393152 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:33.951305+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301023232 unmapped: 57393152 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:34.951589+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301023232 unmapped: 57393152 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:35.951697+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301031424 unmapped: 57384960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:36.951845+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301031424 unmapped: 57384960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:37.952016+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301031424 unmapped: 57384960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:38.952170+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301031424 unmapped: 57384960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:39.952305+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301039616 unmapped: 57376768 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:40.952443+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 57360384 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:41.952580+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 57360384 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:42.952729+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 57360384 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:43.952951+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 57360384 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:44.953083+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 57360384 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:45.953208+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301056000 unmapped: 57360384 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:46.953367+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301064192 unmapped: 57352192 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:47.953539+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301064192 unmapped: 57352192 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:48.953756+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301064192 unmapped: 57352192 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:49.953936+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301064192 unmapped: 57352192 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:50.954099+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301064192 unmapped: 57352192 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:51.954279+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301064192 unmapped: 57352192 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:52.954488+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301064192 unmapped: 57352192 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:53.954709+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301064192 unmapped: 57352192 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:54.955073+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301064192 unmapped: 57352192 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:55.955255+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301072384 unmapped: 57344000 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:56.955474+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 57335808 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:57.955718+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 57335808 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:58.955898+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 57335808 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:59.956081+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 57335808 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:00.956280+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 57335808 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:01.956413+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 57335808 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:02.956594+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 57335808 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:03.956829+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301080576 unmapped: 57335808 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:04.957024+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301088768 unmapped: 57327616 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:05.957234+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301088768 unmapped: 57327616 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:06.957409+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301088768 unmapped: 57327616 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:07.957556+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301088768 unmapped: 57327616 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:08.957706+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301088768 unmapped: 57327616 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:09.957894+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301105152 unmapped: 57311232 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:10.958064+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301105152 unmapped: 57311232 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:11.958618+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 57294848 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:12.958875+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 57294848 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:13.959101+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 57294848 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:14.959336+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 57294848 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:15.959525+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301121536 unmapped: 57294848 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:16.959735+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301129728 unmapped: 57286656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:17.959901+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301129728 unmapped: 57286656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:18.960132+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301129728 unmapped: 57286656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:19.960309+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301129728 unmapped: 57286656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:20.960457+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301137920 unmapped: 57278464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:21.960743+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301137920 unmapped: 57278464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:22.960906+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301137920 unmapped: 57278464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:23.961088+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 ms_handle_reset con 0x557511c03c00 session 0x55750bb781e0
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: handle_auth_request added challenge on 0x55750de3dc00
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301137920 unmapped: 57278464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:24.961219+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301137920 unmapped: 57278464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:25.961341+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301137920 unmapped: 57278464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:26.961697+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301137920 unmapped: 57278464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:27.961990+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 352256
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301137920 unmapped: 57278464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:28.962282+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301154304 unmapped: 57262080 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:29.962448+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 596.978881836s of 597.216186523s, submitted: 90
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301178880 unmapped: 57237504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:30.962710+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301178880 unmapped: 57237504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:31.962921+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 57229312 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:32.963148+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 57229312 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:33.963344+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 57229312 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:34.963619+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 301187072 unmapped: 57229312 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:35.963874+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302284800 unmapped: 56131584 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:36.964077+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302292992 unmapped: 56123392 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:37.964248+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302292992 unmapped: 56123392 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:38.964473+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302292992 unmapped: 56123392 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:39.964682+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302292992 unmapped: 56123392 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:40.964844+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302292992 unmapped: 56123392 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:41.964963+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302292992 unmapped: 56123392 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:42.965106+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302292992 unmapped: 56123392 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:43.965279+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302292992 unmapped: 56123392 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:44.965404+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302292992 unmapped: 56123392 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:45.965559+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302292992 unmapped: 56123392 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:46.965716+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302292992 unmapped: 56123392 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:47.966015+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302301184 unmapped: 56115200 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:48.966253+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302301184 unmapped: 56115200 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:49.966428+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302301184 unmapped: 56115200 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:50.966597+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302301184 unmapped: 56115200 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:51.966990+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302301184 unmapped: 56115200 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:52.967192+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302301184 unmapped: 56115200 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:53.967377+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302301184 unmapped: 56115200 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:54.967628+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302301184 unmapped: 56115200 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:55.967908+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302301184 unmapped: 56115200 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:56.968300+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302301184 unmapped: 56115200 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:57.968419+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302301184 unmapped: 56115200 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:58.968768+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 56107008 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:59.968975+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 56107008 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:00.969210+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 56107008 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:01.969346+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 56107008 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:02.969767+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 56107008 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:03.970016+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 56107008 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:04.970294+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 56107008 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:05.970473+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302309376 unmapped: 56107008 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:06.970742+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 56098816 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:07.970901+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 56098816 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:08.971230+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 56098816 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:09.971356+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 56098816 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:10.971631+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 56098816 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:11.971845+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 56098816 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:12.972079+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 56098816 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:13.972243+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 56098816 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:14.972538+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 56098816 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:15.972710+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302317568 unmapped: 56098816 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:16.972960+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302325760 unmapped: 56090624 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:17.973129+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302325760 unmapped: 56090624 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:18.973420+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302325760 unmapped: 56090624 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:19.973566+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302325760 unmapped: 56090624 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:20.973727+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302325760 unmapped: 56090624 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:21.973895+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302325760 unmapped: 56090624 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:22.974159+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302325760 unmapped: 56090624 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:23.974417+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302325760 unmapped: 56090624 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:24.974688+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302325760 unmapped: 56090624 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:25.974843+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302325760 unmapped: 56090624 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:26.975095+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302325760 unmapped: 56090624 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:27.975308+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302325760 unmapped: 56090624 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:28.975722+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302325760 unmapped: 56090624 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:29.975911+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302333952 unmapped: 56082432 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:30.976130+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302342144 unmapped: 56074240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:31.976303+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302342144 unmapped: 56074240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:32.976586+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302342144 unmapped: 56074240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:33.976866+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302342144 unmapped: 56074240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:34.977119+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302342144 unmapped: 56074240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:35.977291+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302342144 unmapped: 56074240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:36.977455+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302342144 unmapped: 56074240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:37.977591+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302342144 unmapped: 56074240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:38.977811+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302342144 unmapped: 56074240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:39.977999+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302342144 unmapped: 56074240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:40.978191+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:41.978424+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302342144 unmapped: 56074240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:42.978706+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302342144 unmapped: 56074240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:43.978946+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302342144 unmapped: 56074240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:44.979131+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302342144 unmapped: 56074240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:45.979298+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302342144 unmapped: 56074240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:46.979545+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302342144 unmapped: 56074240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:47.979795+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302350336 unmapped: 56066048 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:48.980031+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302350336 unmapped: 56066048 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:49.980208+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 56057856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:50.980397+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 56057856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:51.980567+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 56057856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:52.980819+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 56057856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:53.981032+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 56057856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:54.981253+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 56057856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:55.981450+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 56057856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:56.981584+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 56057856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:57.981714+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 56057856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:58.981838+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 56057856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:59.981980+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 56057856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:00.982389+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 56057856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:01.982539+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 56057856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:02.982673+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302358528 unmapped: 56057856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:03.982846+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302366720 unmapped: 56049664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:04.983030+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302366720 unmapped: 56049664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:05.983193+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302366720 unmapped: 56049664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:06.983350+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302366720 unmapped: 56049664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:07.983475+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302366720 unmapped: 56049664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:08.983719+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302366720 unmapped: 56049664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:09.983879+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302366720 unmapped: 56049664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:10.984028+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302366720 unmapped: 56049664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:11.984195+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302366720 unmapped: 56049664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:12.984465+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302366720 unmapped: 56049664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:13.984733+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302366720 unmapped: 56049664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:14.984966+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302366720 unmapped: 56049664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:15.985269+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302366720 unmapped: 56049664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:16.985464+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302374912 unmapped: 56041472 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:17.985725+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302374912 unmapped: 56041472 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:18.985922+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302374912 unmapped: 56041472 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:19.986105+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302374912 unmapped: 56041472 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:20.986298+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302374912 unmapped: 56041472 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:21.986513+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302374912 unmapped: 56041472 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:22.986739+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302374912 unmapped: 56041472 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:23.986938+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302374912 unmapped: 56041472 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:24.987082+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302374912 unmapped: 56041472 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:25.987223+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302374912 unmapped: 56041472 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:26.987404+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302374912 unmapped: 56041472 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:27.987573+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302374912 unmapped: 56041472 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:28.987744+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302374912 unmapped: 56041472 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:29.987891+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302383104 unmapped: 56033280 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:30.988167+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302383104 unmapped: 56033280 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:31.988398+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302383104 unmapped: 56033280 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:32.988715+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302383104 unmapped: 56033280 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:33.988912+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302383104 unmapped: 56033280 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:34.989088+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302391296 unmapped: 56025088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:35.989279+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302391296 unmapped: 56025088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:36.989805+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302391296 unmapped: 56025088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:37.990005+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302391296 unmapped: 56025088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:38.990148+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302391296 unmapped: 56025088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:39.990308+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302391296 unmapped: 56025088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:40.990525+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302391296 unmapped: 56025088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:41.991113+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302391296 unmapped: 56025088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:42.991287+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302391296 unmapped: 56025088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:43.991552+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302407680 unmapped: 56008704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:44.991756+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302407680 unmapped: 56008704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:45.992004+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302415872 unmapped: 56000512 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:46.992324+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302415872 unmapped: 56000512 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:47.992503+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302415872 unmapped: 56000512 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:48.992700+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302415872 unmapped: 56000512 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:49.992868+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302415872 unmapped: 56000512 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:50.993046+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302415872 unmapped: 56000512 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:51.993176+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302415872 unmapped: 56000512 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:52.993506+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302415872 unmapped: 56000512 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:53.993759+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302415872 unmapped: 56000512 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:54.994039+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302415872 unmapped: 56000512 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:55.994204+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302415872 unmapped: 56000512 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:56.994433+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302415872 unmapped: 56000512 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:57.994585+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302415872 unmapped: 56000512 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:58.994707+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302415872 unmapped: 56000512 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:59.994837+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302415872 unmapped: 56000512 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:00.994959+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302415872 unmapped: 56000512 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:01.995133+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302424064 unmapped: 55992320 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:02.995352+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302424064 unmapped: 55992320 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:03.995722+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302424064 unmapped: 55992320 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:04.995883+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302424064 unmapped: 55992320 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:05.996068+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302424064 unmapped: 55992320 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:06.996268+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302424064 unmapped: 55992320 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:07.996396+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302424064 unmapped: 55992320 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:08.996706+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302432256 unmapped: 55984128 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:09.996863+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302432256 unmapped: 55984128 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:10.997041+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302432256 unmapped: 55984128 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:11.997203+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302432256 unmapped: 55984128 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:12.997347+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302432256 unmapped: 55984128 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:13.997592+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302432256 unmapped: 55984128 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:14.997757+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302432256 unmapped: 55984128 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:15.997988+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302440448 unmapped: 55975936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:16.998198+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302440448 unmapped: 55975936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:17.998382+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302440448 unmapped: 55975936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:18.998542+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302440448 unmapped: 55975936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:19.998793+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302440448 unmapped: 55975936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:20.998979+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302440448 unmapped: 55975936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:21.999138+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302448640 unmapped: 55967744 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:22.999317+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302448640 unmapped: 55967744 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:23.999505+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302448640 unmapped: 55967744 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:24.999712+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302448640 unmapped: 55967744 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:25.999886+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302448640 unmapped: 55967744 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:27.000252+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302448640 unmapped: 55967744 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:28.000453+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302448640 unmapped: 55967744 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:29.000698+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302448640 unmapped: 55967744 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:30.000883+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302448640 unmapped: 55967744 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:31.001032+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302456832 unmapped: 55959552 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:32.001235+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302456832 unmapped: 55959552 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:33.001417+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302456832 unmapped: 55959552 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:34.001708+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302456832 unmapped: 55959552 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:35.001903+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302456832 unmapped: 55959552 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:36.002063+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302456832 unmapped: 55959552 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:37.002261+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302456832 unmapped: 55959552 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:38.002431+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302465024 unmapped: 55951360 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:39.002731+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302481408 unmapped: 55934976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:40.002994+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302481408 unmapped: 55934976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:41.003260+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302481408 unmapped: 55934976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:42.003450+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302481408 unmapped: 55934976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:43.003613+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302481408 unmapped: 55934976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:44.003864+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302481408 unmapped: 55934976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:45.004122+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302481408 unmapped: 55934976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:46.004298+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302481408 unmapped: 55934976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:47.004451+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302481408 unmapped: 55934976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:48.004691+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302481408 unmapped: 55934976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:49.004878+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302481408 unmapped: 55934976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:50.005055+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302481408 unmapped: 55934976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:51.005205+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302481408 unmapped: 55934976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:52.005403+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302481408 unmapped: 55934976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:53.005613+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302481408 unmapped: 55934976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:54.005905+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302481408 unmapped: 55934976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:55.006111+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302481408 unmapped: 55934976 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:56.006271+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302489600 unmapped: 55926784 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:57.006436+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302505984 unmapped: 55910400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:58.006697+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302505984 unmapped: 55910400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:59.006854+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302505984 unmapped: 55910400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:00.007087+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302505984 unmapped: 55910400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:01.009546+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302505984 unmapped: 55910400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:02.010107+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302505984 unmapped: 55910400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:03.010276+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302505984 unmapped: 55910400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:04.010426+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302505984 unmapped: 55910400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:05.010569+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302505984 unmapped: 55910400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:06.010806+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302505984 unmapped: 55910400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:07.011009+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302505984 unmapped: 55910400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:08.011202+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302505984 unmapped: 55910400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:09.011423+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302505984 unmapped: 55910400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:10.011603+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302505984 unmapped: 55910400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:11.011785+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302505984 unmapped: 55910400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:12.011983+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302505984 unmapped: 55910400 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:13.012194+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302514176 unmapped: 55902208 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:14.012384+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302514176 unmapped: 55902208 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:15.012510+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302514176 unmapped: 55902208 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:16.012653+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302514176 unmapped: 55902208 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:17.012771+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302522368 unmapped: 55894016 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:18.012898+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302522368 unmapped: 55894016 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:19.013049+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302522368 unmapped: 55894016 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:20.013273+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302522368 unmapped: 55894016 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:21.013514+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302522368 unmapped: 55894016 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:22.013702+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302530560 unmapped: 55885824 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:23.013881+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302530560 unmapped: 55885824 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:24.014072+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302530560 unmapped: 55885824 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:25.014292+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302538752 unmapped: 55877632 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:26.014553+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302538752 unmapped: 55877632 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:27.014703+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302538752 unmapped: 55877632 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:28.017164+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302546944 unmapped: 55869440 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:29.017336+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302546944 unmapped: 55869440 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:30.017547+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302546944 unmapped: 55869440 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:31.017733+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302546944 unmapped: 55869440 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:32.017918+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302546944 unmapped: 55869440 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:33.018057+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302546944 unmapped: 55869440 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:34.018211+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302563328 unmapped: 55853056 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:35.018397+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302563328 unmapped: 55853056 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:36.018605+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302571520 unmapped: 55844864 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:37.018751+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302571520 unmapped: 55844864 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:38.018886+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302571520 unmapped: 55844864 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:39.019047+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302571520 unmapped: 55844864 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:40.019173+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302571520 unmapped: 55844864 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:41.019337+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302571520 unmapped: 55844864 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:42.019580+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302571520 unmapped: 55844864 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:43.019753+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302571520 unmapped: 55844864 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:44.019929+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302579712 unmapped: 55836672 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:45.020066+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302587904 unmapped: 55828480 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:46.020195+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302587904 unmapped: 55828480 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:47.020355+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302587904 unmapped: 55828480 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:48.020471+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302587904 unmapped: 55828480 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:49.020612+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302587904 unmapped: 55828480 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:50.020781+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302587904 unmapped: 55828480 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:51.020966+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302587904 unmapped: 55828480 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:52.021156+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302587904 unmapped: 55828480 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:53.021338+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302596096 unmapped: 55820288 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:54.021501+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302596096 unmapped: 55820288 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:55.021676+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302596096 unmapped: 55820288 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:56.021871+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302596096 unmapped: 55820288 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:57.022014+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302596096 unmapped: 55820288 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:58.022168+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302596096 unmapped: 55820288 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:59.022300+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302604288 unmapped: 55812096 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:00.022452+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302604288 unmapped: 55812096 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:01.022590+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302604288 unmapped: 55812096 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:02.022763+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302604288 unmapped: 55812096 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:03.022989+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302604288 unmapped: 55812096 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:04.033505+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302604288 unmapped: 55812096 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:05.033745+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302604288 unmapped: 55812096 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:06.033928+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302604288 unmapped: 55812096 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:07.034107+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302604288 unmapped: 55812096 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:08.034329+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302604288 unmapped: 55812096 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:09.034481+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302604288 unmapped: 55812096 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:10.034672+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302604288 unmapped: 55812096 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:11.034797+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302604288 unmapped: 55812096 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:12.034978+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302604288 unmapped: 55812096 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:13.035214+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302604288 unmapped: 55812096 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:14.035468+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302604288 unmapped: 55812096 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:15.035753+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302604288 unmapped: 55812096 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:16.035935+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302620672 unmapped: 55795712 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:17.036165+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302620672 unmapped: 55795712 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:18.036333+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302620672 unmapped: 55795712 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:19.036555+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302620672 unmapped: 55795712 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:20.036723+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302620672 unmapped: 55795712 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:21.036868+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302620672 unmapped: 55795712 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:22.037180+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302620672 unmapped: 55795712 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:23.037336+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302620672 unmapped: 55795712 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:24.037570+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302620672 unmapped: 55795712 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:25.037817+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302620672 unmapped: 55795712 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:26.038300+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302628864 unmapped: 55787520 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:27.038472+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302628864 unmapped: 55787520 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:28.038682+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302628864 unmapped: 55787520 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:29.038941+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302628864 unmapped: 55787520 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:30.039067+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302628864 unmapped: 55787520 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:31.039241+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302637056 unmapped: 55779328 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:32.039424+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302637056 unmapped: 55779328 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:33.039598+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302637056 unmapped: 55779328 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:34.039781+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302637056 unmapped: 55779328 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:35.040080+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302637056 unmapped: 55779328 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:36.040241+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302637056 unmapped: 55779328 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:37.040432+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302637056 unmapped: 55779328 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:38.040614+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302645248 unmapped: 55771136 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:39.040826+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302645248 unmapped: 55771136 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:40.040980+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302645248 unmapped: 55771136 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:41.041147+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302645248 unmapped: 55771136 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:42.041384+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302645248 unmapped: 55771136 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:43.041773+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302645248 unmapped: 55771136 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:44.042143+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302645248 unmapped: 55771136 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:45.042452+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302645248 unmapped: 55771136 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:46.042613+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302645248 unmapped: 55771136 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:47.042818+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302645248 unmapped: 55771136 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:48.042963+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302645248 unmapped: 55771136 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:49.043163+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302645248 unmapped: 55771136 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:50.043363+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302645248 unmapped: 55771136 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:51.043692+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302653440 unmapped: 55762944 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:52.043889+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302653440 unmapped: 55762944 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:53.044042+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302653440 unmapped: 55762944 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:54.044239+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302653440 unmapped: 55762944 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:55.044457+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302661632 unmapped: 55754752 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:56.044605+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302661632 unmapped: 55754752 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:57.044796+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302669824 unmapped: 55746560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:58.044984+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302669824 unmapped: 55746560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:59.045111+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302669824 unmapped: 55746560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:00.045244+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302669824 unmapped: 55746560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:01.045419+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302669824 unmapped: 55746560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:02.045577+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302669824 unmapped: 55746560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:03.045777+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302669824 unmapped: 55746560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:04.045972+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302669824 unmapped: 55746560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:05.046162+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302669824 unmapped: 55746560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:06.058198+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302669824 unmapped: 55746560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:07.058432+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302669824 unmapped: 55746560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:08.058627+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302669824 unmapped: 55746560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:09.058834+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302669824 unmapped: 55746560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:10.059017+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302669824 unmapped: 55746560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:11.059201+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302669824 unmapped: 55746560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:12.059340+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302669824 unmapped: 55746560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:13.059466+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:14.059666+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302669824 unmapped: 55746560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:15.059813+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302669824 unmapped: 55746560 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:16.059945+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302686208 unmapped: 55730176 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:17.060154+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302686208 unmapped: 55730176 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:18.060376+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302686208 unmapped: 55730176 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:19.060547+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302686208 unmapped: 55730176 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:20.060802+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302686208 unmapped: 55730176 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:21.060966+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302686208 unmapped: 55730176 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:22.061167+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302686208 unmapped: 55730176 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:23.061333+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302686208 unmapped: 55730176 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:24.061520+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302686208 unmapped: 55730176 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:25.061719+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302686208 unmapped: 55730176 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:26.062773+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302686208 unmapped: 55730176 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:27.062910+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302686208 unmapped: 55730176 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:28.063088+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302686208 unmapped: 55730176 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:29.063308+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302686208 unmapped: 55730176 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:30.063512+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302694400 unmapped: 55721984 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:31.063752+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302694400 unmapped: 55721984 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:32.063903+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302694400 unmapped: 55721984 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:33.064037+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302694400 unmapped: 55721984 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:34.064237+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302694400 unmapped: 55721984 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:35.064372+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302694400 unmapped: 55721984 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:36.064492+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302702592 unmapped: 55713792 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:37.064629+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302710784 unmapped: 55705600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:38.064841+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302710784 unmapped: 55705600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:39.065011+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302710784 unmapped: 55705600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:40.065139+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302710784 unmapped: 55705600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:41.065336+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302710784 unmapped: 55705600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:42.065470+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302710784 unmapped: 55705600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:43.065677+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302710784 unmapped: 55705600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:44.065867+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302710784 unmapped: 55705600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:45.066085+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302710784 unmapped: 55705600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:46.066238+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302710784 unmapped: 55705600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:47.066415+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302710784 unmapped: 55705600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:48.066536+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302710784 unmapped: 55705600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:49.066755+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302710784 unmapped: 55705600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:50.066917+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302710784 unmapped: 55705600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:51.067088+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302710784 unmapped: 55705600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:52.067277+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302710784 unmapped: 55705600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:53.067452+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302710784 unmapped: 55705600 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:54.067736+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 55697408 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:55.067878+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 55697408 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:56.068028+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 55697408 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:57.068206+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 55697408 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:58.068399+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 55697408 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:59.068538+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 55697408 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:00.068672+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 55697408 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:01.068799+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 55697408 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:02.068945+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 55697408 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:03.069137+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 55697408 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:04.069406+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 55689216 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:05.069619+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 55689216 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:06.069796+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 55689216 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:07.070009+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 55689216 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:08.070145+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 55689216 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:09.070334+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 55689216 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:10.070555+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 55689216 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:11.070735+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 55681024 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:12.070930+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 55681024 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:13.071243+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 55681024 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:14.071481+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 55681024 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:15.071657+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302743552 unmapped: 55672832 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:16.071858+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302743552 unmapped: 55672832 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:17.072111+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302743552 unmapped: 55672832 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:18.072346+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302743552 unmapped: 55672832 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:19.072495+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302743552 unmapped: 55672832 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:20.072695+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302743552 unmapped: 55672832 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:21.072860+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302743552 unmapped: 55672832 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:22.073055+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302743552 unmapped: 55672832 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:23.073240+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302743552 unmapped: 55672832 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:24.073463+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302743552 unmapped: 55672832 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:25.073746+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302743552 unmapped: 55672832 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:26.073951+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302751744 unmapped: 55664640 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:27.074175+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302751744 unmapped: 55664640 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:28.074366+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302751744 unmapped: 55664640 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:29.079534+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302751744 unmapped: 55664640 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:30.079771+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302751744 unmapped: 55664640 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:31.079953+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302751744 unmapped: 55664640 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:32.080124+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302759936 unmapped: 55656448 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:33.080306+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302759936 unmapped: 55656448 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:34.080525+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302759936 unmapped: 55656448 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:35.080730+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302759936 unmapped: 55656448 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:36.080875+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302759936 unmapped: 55656448 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:37.081050+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302759936 unmapped: 55656448 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:38.081265+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302768128 unmapped: 55648256 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:39.081526+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302768128 unmapped: 55648256 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:40.081749+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302768128 unmapped: 55648256 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:41.082028+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302768128 unmapped: 55648256 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:42.082191+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302768128 unmapped: 55648256 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:43.082328+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302768128 unmapped: 55648256 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:44.082756+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302768128 unmapped: 55648256 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:45.082930+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302768128 unmapped: 55648256 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:46.083060+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302776320 unmapped: 55640064 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:47.083181+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302776320 unmapped: 55640064 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:48.083449+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302776320 unmapped: 55640064 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:49.083977+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302776320 unmapped: 55640064 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:50.084199+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302776320 unmapped: 55640064 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:51.084360+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302776320 unmapped: 55640064 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:52.084510+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302784512 unmapped: 55631872 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:53.084728+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302784512 unmapped: 55631872 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:54.084981+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302784512 unmapped: 55631872 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:55.085166+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302784512 unmapped: 55631872 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:56.085360+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302792704 unmapped: 55623680 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:57.085565+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302792704 unmapped: 55623680 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:58.085711+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302792704 unmapped: 55623680 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:59.085879+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302792704 unmapped: 55623680 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:00.086028+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302792704 unmapped: 55623680 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:01.086153+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302792704 unmapped: 55623680 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:02.086323+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302792704 unmapped: 55623680 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:03.086514+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302792704 unmapped: 55623680 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:04.086800+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302792704 unmapped: 55623680 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:05.086959+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302800896 unmapped: 55615488 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:06.087162+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302800896 unmapped: 55615488 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:07.087313+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302800896 unmapped: 55615488 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:08.087461+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302800896 unmapped: 55615488 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:09.087606+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302800896 unmapped: 55615488 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:10.087765+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302800896 unmapped: 55615488 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:11.087986+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302809088 unmapped: 55607296 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:12.088161+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302809088 unmapped: 55607296 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:13.088328+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302809088 unmapped: 55607296 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:14.088536+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302809088 unmapped: 55607296 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:15.088717+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302809088 unmapped: 55607296 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:16.088855+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302809088 unmapped: 55607296 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:17.089025+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302809088 unmapped: 55607296 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:18.089209+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302809088 unmapped: 55607296 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:19.089361+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302809088 unmapped: 55607296 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:20.089515+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302809088 unmapped: 55607296 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:21.089659+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302809088 unmapped: 55607296 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:22.089829+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302817280 unmapped: 55599104 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:23.090002+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302817280 unmapped: 55599104 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:24.090187+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302817280 unmapped: 55599104 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:25.090391+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302817280 unmapped: 55599104 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:26.090545+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302817280 unmapped: 55599104 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:27.090720+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302817280 unmapped: 55599104 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:28.090858+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302817280 unmapped: 55599104 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:29.091029+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302817280 unmapped: 55599104 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:30.091223+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302817280 unmapped: 55599104 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:31.091360+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302817280 unmapped: 55599104 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:32.091530+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302825472 unmapped: 55590912 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:33.091726+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302825472 unmapped: 55590912 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:34.091944+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302825472 unmapped: 55590912 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:35.092121+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302825472 unmapped: 55590912 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:36.092346+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302833664 unmapped: 55582720 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:37.092506+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302833664 unmapped: 55582720 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7801.5 total, 600.0 interval
                                           Cumulative writes: 39K writes, 155K keys, 39K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s
                                           Cumulative WAL: 39K writes, 14K syncs, 2.76 writes per sync, written: 0.15 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 216 writes, 324 keys, 216 commit groups, 1.0 writes per commit group, ingest: 0.10 MB, 0.00 MB/s
                                           Interval WAL: 216 writes, 108 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:38.092701+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302833664 unmapped: 55582720 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:39.092884+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302833664 unmapped: 55582720 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:40.093023+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302833664 unmapped: 55582720 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:41.093173+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302841856 unmapped: 55574528 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:42.093320+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302841856 unmapped: 55574528 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:43.093447+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302841856 unmapped: 55574528 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:44.093614+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302841856 unmapped: 55574528 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:45.093823+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 55566336 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:46.094034+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 55566336 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:47.094214+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 55566336 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:48.094410+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 55566336 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:49.094657+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 55566336 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:50.094838+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 55566336 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:51.095032+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 55566336 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:52.095181+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 55566336 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:53.095373+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 55566336 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:54.095581+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 55566336 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:55.095720+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 55566336 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:56.095852+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 55566336 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:57.095986+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 55566336 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:58.096137+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 55566336 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:59.096315+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 55566336 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:00.096526+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 55566336 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:01.096731+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 55558144 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:02.096898+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 55558144 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:03.097037+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 55558144 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:04.097230+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 55558144 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:05.097397+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 55558144 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:06.097534+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 55558144 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:07.097675+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 55558144 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:08.097842+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 55549952 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:09.097984+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 55549952 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:10.098125+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 55549952 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:11.098261+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 55549952 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:12.098447+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 55549952 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:13.098684+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 55549952 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:14.098875+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 55549952 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:15.098990+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 55549952 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:16.099174+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 55549952 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:17.099324+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 55541760 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:18.099526+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 55541760 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:19.128186+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 55541760 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:20.128331+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 55541760 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:21.128472+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 55541760 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:22.128736+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 55541760 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:23.128875+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 55541760 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:24.129072+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 55541760 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:25.129237+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 55541760 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:26.129396+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302882816 unmapped: 55533568 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:27.129543+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302882816 unmapped: 55533568 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:28.129709+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302882816 unmapped: 55533568 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:29.129902+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302882816 unmapped: 55533568 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:30.130056+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302882816 unmapped: 55533568 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:31.130192+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302882816 unmapped: 55533568 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:32.130292+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302882816 unmapped: 55533568 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:33.130453+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302891008 unmapped: 55525376 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:34.130735+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302891008 unmapped: 55525376 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:35.130916+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302891008 unmapped: 55525376 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:36.131120+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302891008 unmapped: 55525376 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:37.131250+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302891008 unmapped: 55525376 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:38.131467+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302891008 unmapped: 55525376 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:39.131775+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302907392 unmapped: 55508992 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:40.131956+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302907392 unmapped: 55508992 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:41.132150+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302907392 unmapped: 55508992 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:42.132342+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302907392 unmapped: 55508992 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:43.132518+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302907392 unmapped: 55508992 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:44.132693+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302907392 unmapped: 55508992 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:45.132853+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302907392 unmapped: 55508992 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:46.133035+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302907392 unmapped: 55508992 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:47.133182+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302907392 unmapped: 55508992 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:48.133337+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302907392 unmapped: 55508992 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:49.133505+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 55500800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:50.133634+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 55500800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:51.133777+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 55500800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:52.133914+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 55500800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:53.134103+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 55500800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:54.134310+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 55500800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:55.134541+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 55500800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:56.406753+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:57.406929+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 55500800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:58.407084+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 55500800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:59.408622+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 55500800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:00.408913+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 55500800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:01.410521+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 55500800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:02.410682+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 55500800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:03.410808+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 55500800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:04.410974+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 55500800 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:05.411210+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302923776 unmapped: 55492608 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:06.411356+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302923776 unmapped: 55492608 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:07.412038+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302923776 unmapped: 55492608 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:08.412688+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302923776 unmapped: 55492608 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:09.412972+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302923776 unmapped: 55492608 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:10.413267+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302923776 unmapped: 55492608 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:11.413670+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302931968 unmapped: 55484416 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:12.414036+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 55476224 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:13.414290+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 55476224 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:14.414523+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 55476224 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:15.414832+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 55476224 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:16.414968+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 55476224 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:17.415301+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 55476224 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:18.415743+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 55476224 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:19.416106+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 55468032 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:20.416367+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 55468032 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:21.416618+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 55468032 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:22.416910+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 55468032 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:23.417505+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 55468032 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:24.417934+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 55468032 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:25.418340+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 55468032 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:26.418489+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 55468032 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:27.418733+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302956544 unmapped: 55459840 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:28.419031+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302956544 unmapped: 55459840 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:29.419284+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302956544 unmapped: 55459840 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:30.419442+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302956544 unmapped: 55459840 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:31.419725+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302956544 unmapped: 55459840 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:32.419895+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302956544 unmapped: 55459840 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:33.420143+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 602.693115234s of 603.017700195s, submitted: 108
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [0,0,0,0,0,1])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302964736 unmapped: 55451648 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:34.420326+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302964736 unmapped: 55451648 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:35.420514+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302972928 unmapped: 55443456 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [0,1])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:36.420629+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:37.420849+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:38.421223+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:39.421371+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:40.421727+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:41.421873+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:42.422033+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:43.422171+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:44.422441+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:45.422565+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:46.422693+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:47.422818+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:48.423009+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:49.423156+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:50.423326+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:51.423446+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:52.423823+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:53.424004+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:54.424221+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:55.424384+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:56.424509+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:57.424657+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:58.424821+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:59.424927+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:00.425149+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:01.425304+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:02.425434+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:03.425558+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:04.425731+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:05.425860+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:06.425974+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:07.426114+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:08.426251+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:09.426384+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:10.426516+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:11.426686+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:12.426808+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:13.426947+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:14.427114+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:15.427261+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:16.427389+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:17.427663+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:18.427845+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:19.428058+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:20.428265+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:21.428484+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:22.428664+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 55386112 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:23.428989+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 55377920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:24.429220+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 55377920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:25.429436+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 55377920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:26.429595+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 55377920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:27.429777+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 55377920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:28.429948+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 55377920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:29.430116+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 55377920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:30.430269+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 55377920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:31.430432+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 55377920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:32.430665+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 55377920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:33.430895+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 55377920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:34.431158+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 55377920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:35.431450+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 55377920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:36.431567+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 55377920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:37.431721+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 55377920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:38.431851+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 55377920 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:39.431989+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 55369728 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:40.432131+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 55369728 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:41.432300+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 55369728 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:42.432453+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 55369728 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:43.432600+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 55369728 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:44.432786+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 55369728 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:45.432922+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 55369728 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:46.433109+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 55369728 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:47.433246+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303054848 unmapped: 55361536 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:48.433396+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303054848 unmapped: 55361536 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:49.433632+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303054848 unmapped: 55361536 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:50.433878+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303054848 unmapped: 55361536 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:51.434017+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303054848 unmapped: 55361536 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:52.434165+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303054848 unmapped: 55361536 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:53.434310+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303054848 unmapped: 55361536 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:54.434491+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303054848 unmapped: 55361536 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:55.434613+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303054848 unmapped: 55361536 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:56.434743+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 55353344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:57.434900+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 55353344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:58.435083+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 55353344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:59.435233+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 55353344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:00.435441+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 55353344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:01.435623+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 55353344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:02.435792+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 55353344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:03.435981+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 55353344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:04.436181+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 55353344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:05.436490+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 55353344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:06.436704+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 55353344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:07.436869+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 55353344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:08.437009+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 55353344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:09.437172+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 55353344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:10.437538+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 55353344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:11.437712+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 55353344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:12.437864+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 55353344 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:13.438067+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 55345152 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:14.438261+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 55345152 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:15.438482+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 55345152 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:16.438771+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 55345152 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:17.438951+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 55345152 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:18.439120+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 55345152 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:19.439257+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 55345152 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:20.439386+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 55345152 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:21.439529+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 55345152 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:22.439685+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 55345152 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:23.439821+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 55345152 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:24.439979+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 55345152 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:25.440137+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 55345152 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:26.440269+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 55345152 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:27.440404+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303079424 unmapped: 55336960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:28.440549+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303079424 unmapped: 55336960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:29.440758+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303079424 unmapped: 55336960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:30.440935+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303079424 unmapped: 55336960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:31.441088+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303079424 unmapped: 55336960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:32.441289+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303079424 unmapped: 55336960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:33.441440+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303079424 unmapped: 55336960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:34.441696+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303079424 unmapped: 55336960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:35.441861+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303079424 unmapped: 55336960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:36.441999+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303079424 unmapped: 55336960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:37.442145+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303079424 unmapped: 55336960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:38.442288+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303079424 unmapped: 55336960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:39.442429+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303079424 unmapped: 55336960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:40.442597+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303079424 unmapped: 55336960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:41.442752+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303079424 unmapped: 55336960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:42.442972+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303079424 unmapped: 55336960 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:43.443118+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303087616 unmapped: 55328768 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:44.443283+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 55320576 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:45.443425+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 55320576 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:46.443591+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 55320576 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:47.443745+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 55320576 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:48.443904+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 55320576 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:49.444046+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 55320576 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:50.444257+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 55320576 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:51.444441+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 55320576 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:52.444772+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 55320576 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:53.444964+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 55320576 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:54.445190+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 55320576 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:55.445423+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 55320576 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:56.445570+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 55320576 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:57.445711+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 55320576 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:58.445898+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303104000 unmapped: 55312384 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:59.914992+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303112192 unmapped: 55304192 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:00.915176+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303112192 unmapped: 55304192 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:01.915329+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303112192 unmapped: 55304192 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:02.915518+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303112192 unmapped: 55304192 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:03.915752+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303112192 unmapped: 55304192 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:04.915976+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303112192 unmapped: 55304192 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:05.916134+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303112192 unmapped: 55304192 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:06.916281+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303112192 unmapped: 55304192 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:07.916419+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303112192 unmapped: 55304192 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:08.916702+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303112192 unmapped: 55304192 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:09.916847+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303112192 unmapped: 55304192 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:10.917002+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 55296000 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:11.917176+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 55296000 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:12.917377+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 55296000 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:13.917579+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 55296000 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:14.917839+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303120384 unmapped: 55296000 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:15.918014+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 55287808 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:16.918273+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 55287808 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:17.918436+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 55287808 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:18.918841+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 55287808 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:19.918982+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 55287808 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:20.919106+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 55287808 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:21.919240+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 55287808 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:22.919390+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303128576 unmapped: 55287808 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:23.919552+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303136768 unmapped: 55279616 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:24.919723+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303136768 unmapped: 55279616 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:25.919836+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303136768 unmapped: 55279616 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:26.919994+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303136768 unmapped: 55279616 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:27.920180+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303136768 unmapped: 55279616 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:28.920351+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303136768 unmapped: 55279616 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:29.920495+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303144960 unmapped: 55271424 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:30.920627+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303144960 unmapped: 55271424 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:31.920765+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303144960 unmapped: 55271424 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:32.920914+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303144960 unmapped: 55271424 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:33.921048+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303144960 unmapped: 55271424 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:34.921244+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303144960 unmapped: 55271424 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:35.921369+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303144960 unmapped: 55271424 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:36.921517+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303144960 unmapped: 55271424 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:37.921684+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303144960 unmapped: 55271424 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets getting new tickets!
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:38.921960+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _finish_auth 0
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:38.923047+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303153152 unmapped: 55263232 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:39.922093+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303153152 unmapped: 55263232 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:40.922256+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303153152 unmapped: 55263232 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:41.922371+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303153152 unmapped: 55263232 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:42.922532+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303153152 unmapped: 55263232 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:43.922706+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303153152 unmapped: 55263232 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:44.922983+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303153152 unmapped: 55263232 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:45.923162+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303153152 unmapped: 55263232 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:46.923336+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303161344 unmapped: 55255040 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:47.923525+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303161344 unmapped: 55255040 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:48.923745+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303177728 unmapped: 55238656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:49.923931+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303177728 unmapped: 55238656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:50.924197+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303177728 unmapped: 55238656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:51.924339+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303177728 unmapped: 55238656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:52.924704+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303177728 unmapped: 55238656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:53.924886+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303177728 unmapped: 55238656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:54.925106+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 ms_handle_reset con 0x557512508c00 session 0x55750bb783c0
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: handle_auth_request added challenge on 0x55750d8fa400
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303177728 unmapped: 55238656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: mgrc ms_handle_reset ms_handle_reset con 0x55750e883400
Nov 25 18:14:06 compute-0 ceph-osd[90994]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1812945073
Nov 25 18:14:06 compute-0 ceph-osd[90994]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1812945073,v1:192.168.122.100:6801/1812945073]
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: get_auth_request con 0x55750d8f9c00 auth_method 0
Nov 25 18:14:06 compute-0 ceph-osd[90994]: mgrc handle_mgr_configure stats_period=5
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:55.925249+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303177728 unmapped: 55238656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:56.925481+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303177728 unmapped: 55238656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:57.925732+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303177728 unmapped: 55238656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:58.925959+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303177728 unmapped: 55238656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:59.926138+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303177728 unmapped: 55238656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:00.926347+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303177728 unmapped: 55238656 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:01.926487+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303185920 unmapped: 55230464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:02.926721+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303185920 unmapped: 55230464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:03.926898+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303185920 unmapped: 55230464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:04.927094+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303185920 unmapped: 55230464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:05.927247+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303185920 unmapped: 55230464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:06.927440+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303185920 unmapped: 55230464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:07.927599+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303185920 unmapped: 55230464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:08.927773+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303185920 unmapped: 55230464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:09.927919+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303185920 unmapped: 55230464 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:10.928083+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303194112 unmapped: 55222272 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:11.928279+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303194112 unmapped: 55222272 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:12.928490+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303194112 unmapped: 55222272 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:13.928703+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303194112 unmapped: 55222272 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:14.928886+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303194112 unmapped: 55222272 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:15.929028+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303194112 unmapped: 55222272 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:16.929190+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303202304 unmapped: 55214080 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:17.929366+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303202304 unmapped: 55214080 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:18.929495+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303202304 unmapped: 55214080 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:19.929632+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303202304 unmapped: 55214080 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:20.929805+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303202304 unmapped: 55214080 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:21.929951+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303202304 unmapped: 55214080 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:22.930131+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303202304 unmapped: 55214080 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:23.930303+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303202304 unmapped: 55214080 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:24.930505+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303202304 unmapped: 55214080 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:25.930747+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303210496 unmapped: 55205888 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:26.930891+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303210496 unmapped: 55205888 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:27.931054+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303210496 unmapped: 55205888 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:28.931181+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303218688 unmapped: 55197696 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:29.931322+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303218688 unmapped: 55197696 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:30.931450+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303218688 unmapped: 55197696 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:31.931604+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303218688 unmapped: 55197696 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:32.931806+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303218688 unmapped: 55197696 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:33.931966+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303218688 unmapped: 55197696 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:34.932138+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303218688 unmapped: 55197696 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:35.932275+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303218688 unmapped: 55197696 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:36.932462+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303218688 unmapped: 55197696 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:37.932622+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303218688 unmapped: 55197696 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:38.932848+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303218688 unmapped: 55197696 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:39.933035+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303218688 unmapped: 55197696 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:40.933172+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303218688 unmapped: 55197696 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:41.933328+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303218688 unmapped: 55197696 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:42.933507+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303218688 unmapped: 55197696 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:43.933753+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303218688 unmapped: 55197696 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:44.933985+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 55189504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:45.934102+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 55189504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:46.934271+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 55189504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:47.934481+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 55189504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:48.934928+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 55189504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:49.935198+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 55189504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:50.935425+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 55189504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:51.935582+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 55189504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:52.935697+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 55189504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:53.935901+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 55189504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:54.936120+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 55189504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:55.936339+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 55189504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:56.936474+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 55189504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:57.936603+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 55189504 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:58.936789+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303235072 unmapped: 55181312 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:59.936908+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303243264 unmapped: 55173120 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:00.937114+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303243264 unmapped: 55173120 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:01.937291+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303243264 unmapped: 55173120 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:02.937442+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303243264 unmapped: 55173120 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:03.937696+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303243264 unmapped: 55173120 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:04.937914+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303243264 unmapped: 55173120 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:05.938024+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303251456 unmapped: 55164928 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:06.938148+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303251456 unmapped: 55164928 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:07.938293+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303251456 unmapped: 55164928 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:08.938408+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303251456 unmapped: 55164928 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:09.938543+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303251456 unmapped: 55164928 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:10.938703+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303251456 unmapped: 55164928 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:11.938823+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303251456 unmapped: 55164928 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:12.938967+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303251456 unmapped: 55164928 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:13.939101+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303251456 unmapped: 55164928 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:14.939238+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 55156736 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:15.939366+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 55156736 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:16.939500+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 55156736 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:17.939699+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 55156736 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:18.939853+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 55156736 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:19.940010+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 55156736 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:20.940140+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 55156736 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:21.940285+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 55156736 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:22.940418+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 55156736 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:23.940544+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 ms_handle_reset con 0x55750de3dc00 session 0x55750e844b40
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: handle_auth_request added challenge on 0x557511c02400
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303259648 unmapped: 55156736 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:24.940715+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303267840 unmapped: 55148544 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:25.940846+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303267840 unmapped: 55148544 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:26.941008+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303267840 unmapped: 55148544 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:27.941165+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303267840 unmapped: 55148544 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:28.941287+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303267840 unmapped: 55148544 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:29.941454+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303267840 unmapped: 55148544 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:30.941620+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 55140352 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:31.941806+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 55140352 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:32.941916+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 55140352 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:33.942049+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 55140352 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:34.942203+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 55140352 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:35.942321+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 55140352 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:36.942439+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 55140352 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:37.942563+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 55140352 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:38.942703+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 55140352 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:39.942839+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 55140352 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:40.942958+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 55132160 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:41.943199+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 55132160 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:42.943335+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 55132160 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:43.943501+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:44.943687+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 55132160 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:45.943822+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 55132160 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:46.943946+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 55132160 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:47.944138+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 55123968 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:48.944286+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 55123968 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:49.944426+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 55123968 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:50.944572+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 55123968 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:51.944705+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 55123968 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:52.944816+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 55123968 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:53.945066+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 55123968 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:54.945299+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 55123968 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:55.945415+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 55123968 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:56.945535+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 55123968 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:57.945717+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 55123968 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:58.945956+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 55123968 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:59.946186+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 55123968 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:00.946366+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 55115776 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:01.946545+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 55115776 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:02.946797+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 55115776 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:03.947025+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 55115776 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:04.947467+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 55115776 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:05.947689+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 55107584 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:06.947800+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 55107584 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:07.947906+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 55107584 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:08.948009+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 55107584 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:09.948132+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 55107584 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:10.948241+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 55107584 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:11.948356+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 55107584 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:12.948469+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 55107584 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:13.948594+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 55107584 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:14.948755+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 55107584 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:15.948866+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 55107584 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:16.948987+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 55099392 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:17.949116+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 55099392 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:18.949241+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 55099392 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:19.949365+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 55099392 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:20.949491+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 55099392 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:21.949632+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303325184 unmapped: 55091200 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:22.949831+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303325184 unmapped: 55091200 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:23.949980+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 55083008 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:24.950126+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 55083008 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:25.950285+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 55083008 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:26.950422+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 55074816 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:27.950545+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 55074816 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:28.950681+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 55074816 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:29.950794+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 55074816 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:30.950912+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 55074816 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:31.951024+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 55066624 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:32.951227+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 55066624 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:33.951415+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 55066624 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:34.951586+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 55066624 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:35.951714+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 55066624 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:36.951855+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 55066624 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:37.951970+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 55066624 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:38.952084+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 55066624 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:39.952198+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 55066624 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:40.952305+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303357952 unmapped: 55058432 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:41.952420+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303357952 unmapped: 55058432 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:42.952572+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303357952 unmapped: 55058432 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:43.952698+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303366144 unmapped: 55050240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:44.952836+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303366144 unmapped: 55050240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:45.952942+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303366144 unmapped: 55050240 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:46.953049+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303374336 unmapped: 55042048 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:47.953172+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303374336 unmapped: 55042048 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:48.953289+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303374336 unmapped: 55042048 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:49.953408+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303374336 unmapped: 55042048 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:50.953528+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303374336 unmapped: 55042048 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:51.953719+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303374336 unmapped: 55042048 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:52.953832+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303374336 unmapped: 55042048 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:53.954014+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303374336 unmapped: 55042048 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:54.954199+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303374336 unmapped: 55042048 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:55.954313+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303374336 unmapped: 55042048 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:56.954451+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303374336 unmapped: 55042048 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:57.954706+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303374336 unmapped: 55042048 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:58.954835+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303382528 unmapped: 55033856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:59.955107+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303382528 unmapped: 55033856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:00.955476+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303382528 unmapped: 55033856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:01.955692+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303382528 unmapped: 55033856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:02.955828+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303382528 unmapped: 55033856 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:03.955996+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303390720 unmapped: 55025664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:04.956135+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303390720 unmapped: 55025664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:05.956250+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303390720 unmapped: 55025664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:06.956356+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303390720 unmapped: 55025664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:07.956471+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303390720 unmapped: 55025664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:08.956685+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303390720 unmapped: 55025664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:09.956827+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303390720 unmapped: 55025664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:10.956956+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303390720 unmapped: 55025664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:11.957103+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303390720 unmapped: 55025664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:12.957233+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303390720 unmapped: 55025664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:13.957386+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303390720 unmapped: 55025664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:14.957526+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303390720 unmapped: 55025664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:15.957654+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303390720 unmapped: 55025664 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:16.957928+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303398912 unmapped: 55017472 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:17.958108+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303398912 unmapped: 55017472 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:18.958295+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303398912 unmapped: 55017472 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:19.958503+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303398912 unmapped: 55017472 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:20.958667+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303398912 unmapped: 55017472 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:21.958801+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303398912 unmapped: 55017472 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:22.958934+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303398912 unmapped: 55017472 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:23.959043+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303407104 unmapped: 55009280 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:24.959314+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303407104 unmapped: 55009280 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:25.959449+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303415296 unmapped: 55001088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:26.959592+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303415296 unmapped: 55001088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:27.959750+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303415296 unmapped: 55001088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:28.959908+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303415296 unmapped: 55001088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:29.960038+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303415296 unmapped: 55001088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:30.960175+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303415296 unmapped: 55001088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:31.960330+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303415296 unmapped: 55001088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:32.960471+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303415296 unmapped: 55001088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:33.960612+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303415296 unmapped: 55001088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:34.960805+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303415296 unmapped: 55001088 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:35.960947+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303423488 unmapped: 54992896 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:36.961125+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303423488 unmapped: 54992896 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:37.961325+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303423488 unmapped: 54992896 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:38.961458+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303423488 unmapped: 54992896 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:39.961705+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303423488 unmapped: 54992896 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:40.962014+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303423488 unmapped: 54992896 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:41.962228+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303423488 unmapped: 54992896 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:42.962412+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303423488 unmapped: 54992896 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:43.962571+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303423488 unmapped: 54992896 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:44.962812+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303423488 unmapped: 54992896 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:45.962966+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303423488 unmapped: 54992896 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:46.963099+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303423488 unmapped: 54992896 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:47.963293+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 54984704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:48.963498+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 54984704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:49.963732+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 54984704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:50.963936+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 54984704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:51.964142+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 54984704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:52.964348+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 54984704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:53.964520+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 54984704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:54.964729+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 54984704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:55.964851+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 54984704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:56.965007+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 54984704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:57.965120+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 54984704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:58.965251+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 54984704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:59.965388+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 54984704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:00.965529+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 54984704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:01.965681+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 54984704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:02.965787+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 54984704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:03.966185+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 54984704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:04.966337+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 54984704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:05.966479+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 54984704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:06.966687+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 54984704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:07.966811+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 54984704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:08.966929+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 54984704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:09.967082+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303431680 unmapped: 54984704 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:10.967204+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 54968320 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:11.967328+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 54968320 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:12.967443+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 54968320 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:13.967553+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 54968320 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:14.967686+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 54960128 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:15.967805+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 54960128 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:16.967960+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 54960128 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:17.968093+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 54951936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:18.968240+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 54951936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:19.968381+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 54951936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:20.968535+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 54951936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:21.968672+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 54951936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:22.968801+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 54951936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:23.968925+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 54951936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:24.969069+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 54951936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:25.969184+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 54951936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:26.969309+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 54951936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:27.969417+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 54951936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:28.969560+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 54951936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:29.969692+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 54951936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:30.969818+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 54951936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:31.969942+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:06 compute-0 ceph-osd[90994]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:06 compute-0 ceph-osd[90994]: bluestore.MempoolThread(0x55750a691b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095424 data_alloc: 218103808 data_used: 356352
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 54951936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:32.970058+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 54960128 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: do_command 'config diff' '{prefix=config diff}'
Nov 25 18:14:06 compute-0 ceph-osd[90994]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 25 18:14:06 compute-0 ceph-osd[90994]: do_command 'config show' '{prefix=config show}'
Nov 25 18:14:06 compute-0 ceph-osd[90994]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 25 18:14:06 compute-0 ceph-osd[90994]: do_command 'counter dump' '{prefix=counter dump}'
Nov 25 18:14:06 compute-0 ceph-osd[90994]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 25 18:14:06 compute-0 ceph-osd[90994]: do_command 'counter schema' '{prefix=counter schema}'
Nov 25 18:14:06 compute-0 ceph-osd[90994]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:33.970199+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 55345152 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:34.970381+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: osd.2 307 heartbeat osd_stat(store_statfs(0x4ec124000/0x0/0x4ffc00000, data 0xe5e83c/0x100a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12acf9c6), peers [0,1] op hist [])
Nov 25 18:14:06 compute-0 ceph-osd[90994]: prioritycache tune_memory target: 4294967296 mapped: 302440448 unmapped: 55975936 heap: 358416384 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: tick
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_tickets
Nov 25 18:14:06 compute-0 ceph-osd[90994]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:35.970532+0000)
Nov 25 18:14:06 compute-0 ceph-osd[90994]: do_command 'log dump' '{prefix=log dump}'
Nov 25 18:14:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:14:06 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 18:14:06 compute-0 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23377 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:14:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 25 18:14:06 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1432690294' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 18:14:06 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4539: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:14:06 compute-0 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23381 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:14:06 compute-0 ceph-mon[74985]: from='client.23367 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:14:06 compute-0 ceph-mon[74985]: from='client.23369 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:14:06 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2877039370' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 18:14:06 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1432690294' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 18:14:06 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 25 18:14:06 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3933937080' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 18:14:07 compute-0 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23385 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 18:14:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 25 18:14:07 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2237594970' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 18:14:07 compute-0 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 18:14:07 compute-0 ceph-osd[90994]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 8401.5 total, 600.0 interval
                                           Cumulative writes: 39K writes, 155K keys, 39K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s
                                           Cumulative WAL: 39K writes, 14K syncs, 2.76 writes per sync, written: 0.15 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 18:14:07 compute-0 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23389 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 18:14:07 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Nov 25 18:14:07 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2264492742' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 25 18:14:07 compute-0 ceph-mon[74985]: from='client.23373 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:14:07 compute-0 ceph-mon[74985]: from='client.23377 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:14:07 compute-0 ceph-mon[74985]: pgmap v4539: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:14:07 compute-0 ceph-mon[74985]: from='client.23381 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:14:07 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3933937080' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 18:14:07 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2237594970' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 18:14:07 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2264492742' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 25 18:14:08 compute-0 nova_compute[254092]: 2025-11-25 18:14:08.194 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:14:08 compute-0 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23397 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 18:14:08 compute-0 ceph-d82baeae-c742-50a4-b8f6-b5257c8a2c92-mgr-compute-0-mavpeh[75276]: 2025-11-25T18:14:08.430+0000 7f2d477a6640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 25 18:14:08 compute-0 ceph-mgr[75280]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 25 18:14:08 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4540: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:14:08 compute-0 podman[480860]: 2025-11-25 18:14:08.672470755 +0000 UTC m=+0.089225029 container health_status f539aca08d80d6277815734d42293bc4b4a68e5ee7884b1738500d4d6aeec1f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:14:08 compute-0 podman[480850]: 2025-11-25 18:14:08.672941317 +0000 UTC m=+0.089874436 container health_status 917aeaa36fc838d5639dbf12f6bb31f1f18e2da306e671d9c98db5f53efd1cbe (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 18:14:08 compute-0 podman[480861]: 2025-11-25 18:14:08.697044284 +0000 UTC m=+0.112192274 container health_status ff31d86e783b4007758f09d162d17e8107ad04f1d3f2e2abd4ea2ef080b11e0b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 25 18:14:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Nov 25 18:14:08 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/859644948' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 25 18:14:08 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Nov 25 18:14:08 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3320324554' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 25 18:14:09 compute-0 crontab[481034]: (root) LIST (root)
Nov 25 18:14:09 compute-0 ceph-mon[74985]: from='client.23385 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 18:14:09 compute-0 ceph-mon[74985]: from='client.23389 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 18:14:09 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/859644948' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 25 18:14:09 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3320324554' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 25 18:14:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Nov 25 18:14:09 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3814498474' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 25 18:14:09 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Nov 25 18:14:09 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1099905580' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 25 18:14:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Nov 25 18:14:10 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/677391739' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 25 18:14:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:14:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:14:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:14:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:14:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 18:14:10 compute-0 ceph-mgr[75280]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 18:14:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Nov 25 18:14:10 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2459415464' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 25 18:14:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Nov 25 18:14:10 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/200805878' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 25 18:14:10 compute-0 nova_compute[254092]: 2025-11-25 18:14:10.453 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:14:10 compute-0 nova_compute[254092]: 2025-11-25 18:14:10.496 254096 DEBUG oslo_service.periodic_task [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 18:14:10 compute-0 nova_compute[254092]: 2025-11-25 18:14:10.496 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 18:14:10 compute-0 nova_compute[254092]: 2025-11-25 18:14:10.497 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 18:14:10 compute-0 nova_compute[254092]: 2025-11-25 18:14:10.498 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:14:10 compute-0 nova_compute[254092]: 2025-11-25 18:14:10.498 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 18:14:10 compute-0 nova_compute[254092]: 2025-11-25 18:14:10.498 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 18:14:10 compute-0 nova_compute[254092]: 2025-11-25 18:14:10.498 254096 DEBUG oslo_concurrency.lockutils [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:14:10 compute-0 nova_compute[254092]: 2025-11-25 18:14:10.516 254096 DEBUG nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Nov 25 18:14:10 compute-0 nova_compute[254092]: 2025-11-25 18:14:10.524 254096 DEBUG nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Nov 25 18:14:10 compute-0 nova_compute[254092]: 2025-11-25 18:14:10.524 254096 WARNING nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169
Nov 25 18:14:10 compute-0 nova_compute[254092]: 2025-11-25 18:14:10.524 254096 WARNING nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6
Nov 25 18:14:10 compute-0 nova_compute[254092]: 2025-11-25 18:14:10.525 254096 INFO nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Removable base files: /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169 /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6
Nov 25 18:14:10 compute-0 nova_compute[254092]: 2025-11-25 18:14:10.525 254096 INFO nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/9e29bca11122733e2b34fccd9459097794a3a169
Nov 25 18:14:10 compute-0 nova_compute[254092]: 2025-11-25 18:14:10.525 254096 INFO nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/161d3f571cb73ba41fcf2bb0ea94bcb3953595f6
Nov 25 18:14:10 compute-0 nova_compute[254092]: 2025-11-25 18:14:10.525 254096 DEBUG nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Nov 25 18:14:10 compute-0 nova_compute[254092]: 2025-11-25 18:14:10.526 254096 DEBUG nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Nov 25 18:14:10 compute-0 nova_compute[254092]: 2025-11-25 18:14:10.526 254096 DEBUG nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Nov 25 18:14:10 compute-0 nova_compute[254092]: 2025-11-25 18:14:10.526 254096 INFO nova.virt.libvirt.imagecache [None req-f92d0973-fc84-4331-bd6b-be80aa37ac23 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Nov 25 18:14:10 compute-0 ceph-mon[74985]: from='client.23397 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 18:14:10 compute-0 ceph-mon[74985]: pgmap v4540: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:14:10 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3814498474' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 25 18:14:10 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1099905580' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 25 18:14:10 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/677391739' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 25 18:14:10 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2459415464' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 25 18:14:10 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/200805878' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 25 18:14:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Nov 25 18:14:10 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3825372936' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 25 18:14:10 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4541: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:39:54.788306+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352788480 unmapped: 90767360 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:39:55.788475+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352788480 unmapped: 90767360 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:39:56.788711+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352788480 unmapped: 90767360 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:39:57.788885+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352788480 unmapped: 90767360 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:39:58.789049+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352788480 unmapped: 90767360 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:39:59.789294+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352788480 unmapped: 90767360 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:00.789579+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352796672 unmapped: 90759168 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:01.789844+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352796672 unmapped: 90759168 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:02.790166+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352796672 unmapped: 90759168 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:03.790393+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352796672 unmapped: 90759168 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:04.790661+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352796672 unmapped: 90759168 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:05.790974+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352796672 unmapped: 90759168 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:06.791324+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352796672 unmapped: 90759168 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:07.791593+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352804864 unmapped: 90750976 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:08.791901+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352804864 unmapped: 90750976 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:09.792130+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352804864 unmapped: 90750976 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:10.792365+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352804864 unmapped: 90750976 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:11.792718+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352804864 unmapped: 90750976 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:12.793048+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352804864 unmapped: 90750976 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:13.793324+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352804864 unmapped: 90750976 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:14.793696+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 90734592 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:15.794016+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 90734592 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:16.794992+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352821248 unmapped: 90734592 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:17.795284+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 90726400 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:18.795516+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 90726400 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:19.795827+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 90718208 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:20.796028+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 90718208 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:21.796282+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 90718208 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:22.796527+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 90718208 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:23.796737+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 ms_handle_reset con 0x5618ddfa4c00 session 0x5618decc0780
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: handle_auth_request added challenge on 0x5618ded41c00
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 90718208 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:24.797049+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352845824 unmapped: 90710016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:25.797330+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352845824 unmapped: 90710016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:26.798382+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352845824 unmapped: 90710016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:27.798634+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352845824 unmapped: 90710016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:28.799064+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352845824 unmapped: 90710016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:29.799354+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352845824 unmapped: 90710016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:30.799733+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352854016 unmapped: 90701824 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:31.799915+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352862208 unmapped: 90693632 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:32.800071+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352870400 unmapped: 90685440 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:33.800284+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352870400 unmapped: 90685440 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:34.800513+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352870400 unmapped: 90685440 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:35.800764+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352870400 unmapped: 90685440 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:36.801051+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352870400 unmapped: 90685440 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:37.801288+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352870400 unmapped: 90685440 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:38.801527+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352870400 unmapped: 90685440 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:39.801765+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352870400 unmapped: 90685440 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:40.802006+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352886784 unmapped: 90669056 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:41.802263+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352886784 unmapped: 90669056 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:42.802469+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352886784 unmapped: 90669056 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:43.802697+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352886784 unmapped: 90669056 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:44.802984+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352886784 unmapped: 90669056 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:45.803205+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352886784 unmapped: 90669056 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:46.803379+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352894976 unmapped: 90660864 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:47.803556+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352894976 unmapped: 90660864 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:48.803718+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352903168 unmapped: 90652672 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:49.803878+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352903168 unmapped: 90652672 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:50.804049+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352911360 unmapped: 90644480 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:51.804208+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352911360 unmapped: 90644480 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:52.804353+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352911360 unmapped: 90644480 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:53.804483+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352911360 unmapped: 90644480 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:54.804667+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352911360 unmapped: 90644480 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:55.804812+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352911360 unmapped: 90644480 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:56.805010+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352919552 unmapped: 90636288 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:57.805189+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352919552 unmapped: 90636288 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:58.805493+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352919552 unmapped: 90636288 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:59.805724+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352919552 unmapped: 90636288 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:00.805933+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:01.806178+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352919552 unmapped: 90636288 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:02.806412+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352919552 unmapped: 90636288 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:03.806612+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352919552 unmapped: 90636288 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:04.806782+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352919552 unmapped: 90636288 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:05.807011+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 90619904 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:06.807270+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 90619904 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:07.807595+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 90619904 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:08.807747+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 90619904 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:09.807937+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 90619904 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:10.808189+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 90619904 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:11.808344+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352935936 unmapped: 90619904 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:12.808481+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352944128 unmapped: 90611712 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:13.808660+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352952320 unmapped: 90603520 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:14.808878+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352952320 unmapped: 90603520 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:15.809027+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352952320 unmapped: 90603520 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:16.809300+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352952320 unmapped: 90603520 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:17.809490+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352952320 unmapped: 90603520 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:18.809679+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352952320 unmapped: 90603520 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:19.809874+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352960512 unmapped: 90595328 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:20.810060+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352960512 unmapped: 90595328 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:21.810289+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352968704 unmapped: 90587136 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:22.810485+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352968704 unmapped: 90587136 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:23.810726+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352976896 unmapped: 90578944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:24.810971+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352976896 unmapped: 90578944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:25.811182+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352976896 unmapped: 90578944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:26.811483+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352976896 unmapped: 90578944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:27.811629+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352976896 unmapped: 90578944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:28.811818+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352976896 unmapped: 90578944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:29.812049+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 90570752 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:30.812250+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352985088 unmapped: 90570752 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:31.812464+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352993280 unmapped: 90562560 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:32.812687+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352993280 unmapped: 90562560 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:33.812873+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352993280 unmapped: 90562560 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:34.813129+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353001472 unmapped: 90554368 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:35.813323+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353001472 unmapped: 90554368 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:36.813631+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353001472 unmapped: 90554368 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:37.813968+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353009664 unmapped: 90546176 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:38.814353+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353009664 unmapped: 90546176 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:39.814559+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353017856 unmapped: 90537984 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:40.814826+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353017856 unmapped: 90537984 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:41.815029+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353017856 unmapped: 90537984 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:42.815336+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353017856 unmapped: 90537984 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:43.815502+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353017856 unmapped: 90537984 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:44.815733+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353026048 unmapped: 90529792 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:45.815885+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353026048 unmapped: 90529792 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:46.816100+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353026048 unmapped: 90529792 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:47.816278+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353026048 unmapped: 90529792 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:48.816475+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353026048 unmapped: 90529792 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:49.817179+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353026048 unmapped: 90529792 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:50.817393+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353026048 unmapped: 90529792 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:51.817572+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353026048 unmapped: 90529792 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:52.817709+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353042432 unmapped: 90513408 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:53.817847+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353042432 unmapped: 90513408 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:54.818004+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353042432 unmapped: 90513408 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:55.818141+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353042432 unmapped: 90513408 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:56.818783+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353042432 unmapped: 90513408 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:57.818955+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353042432 unmapped: 90513408 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:58.819086+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353050624 unmapped: 90505216 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:59.819252+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353050624 unmapped: 90505216 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:00.819392+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353058816 unmapped: 90497024 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:01.819585+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353058816 unmapped: 90497024 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:02.819769+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353058816 unmapped: 90497024 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:03.819996+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353067008 unmapped: 90488832 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:04.820154+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353067008 unmapped: 90488832 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:05.820291+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353067008 unmapped: 90488832 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:06.820476+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353067008 unmapped: 90488832 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:07.820669+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353067008 unmapped: 90488832 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:08.820865+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353075200 unmapped: 90480640 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:09.821011+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353075200 unmapped: 90480640 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:10.821134+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353075200 unmapped: 90480640 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:11.821321+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353091584 unmapped: 90464256 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:12.821557+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353091584 unmapped: 90464256 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:13.821815+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353091584 unmapped: 90464256 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:14.822016+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353091584 unmapped: 90464256 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:15.822156+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353091584 unmapped: 90464256 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:16.822334+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353099776 unmapped: 90456064 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:17.822492+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353099776 unmapped: 90456064 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:18.822770+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353099776 unmapped: 90456064 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:19.822977+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353099776 unmapped: 90456064 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:20.823219+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353099776 unmapped: 90456064 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:21.823433+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353099776 unmapped: 90456064 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:22.823785+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353107968 unmapped: 90447872 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:23.824030+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353107968 unmapped: 90447872 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:24.824195+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353107968 unmapped: 90447872 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:25.824401+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353107968 unmapped: 90447872 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:26.824701+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353107968 unmapped: 90447872 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:27.824908+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353116160 unmapped: 90439680 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:28.825102+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353116160 unmapped: 90439680 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:29.825268+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353116160 unmapped: 90439680 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:30.825422+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353124352 unmapped: 90431488 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:31.825604+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353124352 unmapped: 90431488 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:32.825834+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353124352 unmapped: 90431488 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:33.825979+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353132544 unmapped: 90423296 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:34.826180+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353132544 unmapped: 90423296 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:35.826461+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353132544 unmapped: 90423296 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:36.826712+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353132544 unmapped: 90423296 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:37.826906+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353132544 unmapped: 90423296 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:38.827089+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353140736 unmapped: 90415104 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:39.827269+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353140736 unmapped: 90415104 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:40.827476+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353140736 unmapped: 90415104 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:41.827722+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353140736 unmapped: 90415104 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:42.827945+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353140736 unmapped: 90415104 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:43.828178+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353148928 unmapped: 90406912 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:44.828308+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353148928 unmapped: 90406912 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:45.828434+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353148928 unmapped: 90406912 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:46.828594+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353148928 unmapped: 90406912 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:47.828796+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353148928 unmapped: 90406912 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:48.829012+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353165312 unmapped: 90390528 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:49.829211+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353165312 unmapped: 90390528 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:50.829413+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353165312 unmapped: 90390528 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:51.829593+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353165312 unmapped: 90390528 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:52.829721+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353165312 unmapped: 90390528 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:53.829917+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353165312 unmapped: 90390528 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:54.830059+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353165312 unmapped: 90390528 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:55.830204+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353173504 unmapped: 90382336 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:56.830367+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353173504 unmapped: 90382336 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:57.830732+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353181696 unmapped: 90374144 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:58.831038+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353181696 unmapped: 90374144 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:59.831214+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353181696 unmapped: 90374144 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:00.831463+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353181696 unmapped: 90374144 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:01.831935+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353181696 unmapped: 90374144 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:02.832186+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353189888 unmapped: 90365952 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:03.832383+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353189888 unmapped: 90365952 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:04.832688+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353198080 unmapped: 90357760 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:05.832933+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353206272 unmapped: 90349568 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:06.833280+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353206272 unmapped: 90349568 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:07.833510+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353206272 unmapped: 90349568 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:08.833789+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353206272 unmapped: 90349568 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:09.833969+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353206272 unmapped: 90349568 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:10.834237+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353206272 unmapped: 90349568 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:11.834437+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353214464 unmapped: 90341376 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:12.834620+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353222656 unmapped: 90333184 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:13.834797+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353222656 unmapped: 90333184 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:14.834944+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353222656 unmapped: 90333184 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:15.835134+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353222656 unmapped: 90333184 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:16.836013+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353222656 unmapped: 90333184 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:17.836974+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353222656 unmapped: 90333184 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:18.837695+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353230848 unmapped: 90324992 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:19.837918+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6601.2 total, 600.0 interval
                                           Cumulative writes: 47K writes, 185K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 47K writes, 16K syncs, 2.80 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 300 writes, 695 keys, 300 commit groups, 1.0 writes per commit group, ingest: 0.39 MB, 0.00 MB/s
                                           Interval WAL: 300 writes, 143 syncs, 2.10 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353230848 unmapped: 90324992 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:20.838088+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353230848 unmapped: 90324992 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:21.838336+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353230848 unmapped: 90324992 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:22.838561+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353239040 unmapped: 90316800 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:23.838759+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353247232 unmapped: 90308608 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:24.840049+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353247232 unmapped: 90308608 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:25.840262+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353247232 unmapped: 90308608 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:26.840463+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353247232 unmapped: 90308608 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:27.840760+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353247232 unmapped: 90308608 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:28.840918+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353255424 unmapped: 90300416 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:29.841064+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353255424 unmapped: 90300416 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:30.841264+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353255424 unmapped: 90300416 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:31.841474+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353255424 unmapped: 90300416 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:32.841805+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353255424 unmapped: 90300416 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:33.842017+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353263616 unmapped: 90292224 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:34.842346+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353271808 unmapped: 90284032 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:35.842718+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353271808 unmapped: 90284032 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:36.842996+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353280000 unmapped: 90275840 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:37.843171+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353288192 unmapped: 90267648 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:38.843422+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353288192 unmapped: 90267648 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:39.843669+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353288192 unmapped: 90267648 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:40.843851+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353288192 unmapped: 90267648 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:41.844036+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353288192 unmapped: 90267648 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:42.844247+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353288192 unmapped: 90267648 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:43.844487+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353288192 unmapped: 90267648 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:44.844768+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353288192 unmapped: 90267648 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:45.845009+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353288192 unmapped: 90267648 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:46.845329+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353288192 unmapped: 90267648 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:47.845730+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353296384 unmapped: 90259456 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:48.846010+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353296384 unmapped: 90259456 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:49.846309+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353296384 unmapped: 90259456 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:50.846527+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353296384 unmapped: 90259456 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:51.846784+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353296384 unmapped: 90259456 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:52.847069+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353312768 unmapped: 90243072 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:53.847268+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353312768 unmapped: 90243072 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:54.847462+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353320960 unmapped: 90234880 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:55.847920+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353320960 unmapped: 90234880 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:56.848166+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353320960 unmapped: 90234880 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:57.848345+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353320960 unmapped: 90234880 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:58.848619+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353329152 unmapped: 90226688 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:59.848899+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353329152 unmapped: 90226688 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:00.849070+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353329152 unmapped: 90226688 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:01.849331+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353337344 unmapped: 90218496 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:02.849621+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353337344 unmapped: 90218496 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:03.849910+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353337344 unmapped: 90218496 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:04.850239+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353337344 unmapped: 90218496 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:05.850490+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353337344 unmapped: 90218496 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:06.850813+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353337344 unmapped: 90218496 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:07.851097+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353337344 unmapped: 90218496 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:08.851310+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353345536 unmapped: 90210304 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:09.851523+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353345536 unmapped: 90210304 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:10.851777+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353345536 unmapped: 90210304 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:11.852018+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353345536 unmapped: 90210304 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:12.852276+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353345536 unmapped: 90210304 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:13.852538+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353345536 unmapped: 90210304 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:14.852781+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353361920 unmapped: 90193920 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:15.852963+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353361920 unmapped: 90193920 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:16.853181+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353361920 unmapped: 90193920 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:17.853375+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353370112 unmapped: 90185728 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:18.853583+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353370112 unmapped: 90185728 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:19.853787+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353370112 unmapped: 90185728 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:20.854075+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353370112 unmapped: 90185728 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:21.854318+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353370112 unmapped: 90185728 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:22.854536+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353370112 unmapped: 90185728 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:23.854701+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353370112 unmapped: 90185728 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:24.854907+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353378304 unmapped: 90177536 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:25.855061+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353378304 unmapped: 90177536 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:26.855246+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353378304 unmapped: 90177536 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:27.855471+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353386496 unmapped: 90169344 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:28.855725+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353386496 unmapped: 90169344 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:29.856079+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353386496 unmapped: 90169344 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:30.856300+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 90161152 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:31.856506+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 90161152 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:32.856819+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 90161152 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:33.857016+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353394688 unmapped: 90161152 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:34.857334+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353411072 unmapped: 90144768 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:35.857573+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353411072 unmapped: 90144768 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:36.857864+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353411072 unmapped: 90144768 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:37.858062+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353411072 unmapped: 90144768 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:38.858280+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353411072 unmapped: 90144768 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:39.858495+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353419264 unmapped: 90136576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:40.858722+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353419264 unmapped: 90136576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:41.858981+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353419264 unmapped: 90136576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:42.859138+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 90128384 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:43.859319+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 90128384 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:44.859457+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 90128384 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:45.859704+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 90128384 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:46.859899+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 90128384 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:47.860017+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353427456 unmapped: 90128384 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:48.860133+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353435648 unmapped: 90120192 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:49.860313+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353435648 unmapped: 90120192 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:50.860502+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353452032 unmapped: 90103808 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:51.860758+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353452032 unmapped: 90103808 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:52.861027+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353452032 unmapped: 90103808 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:53.861172+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353452032 unmapped: 90103808 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:54.861350+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353452032 unmapped: 90103808 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:55.861495+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353452032 unmapped: 90103808 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:56.861691+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353452032 unmapped: 90103808 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:57.861823+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353452032 unmapped: 90103808 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:58.862033+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353460224 unmapped: 90095616 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:59.862228+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353460224 unmapped: 90095616 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:00.862403+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353460224 unmapped: 90095616 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:01.862563+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353460224 unmapped: 90095616 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:02.862756+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353460224 unmapped: 90095616 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:03.863021+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353468416 unmapped: 90087424 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:04.863159+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353476608 unmapped: 90079232 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:05.863333+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353476608 unmapped: 90079232 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:06.863543+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353476608 unmapped: 90079232 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:07.863692+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353476608 unmapped: 90079232 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:08.863874+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353476608 unmapped: 90079232 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:09.864003+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353484800 unmapped: 90071040 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:10.864111+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:11.864262+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353484800 unmapped: 90071040 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:12.864412+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353484800 unmapped: 90071040 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:13.864560+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353492992 unmapped: 90062848 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:14.864807+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353492992 unmapped: 90062848 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:15.864980+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353501184 unmapped: 90054656 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:16.865140+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353501184 unmapped: 90054656 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:17.865296+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353501184 unmapped: 90054656 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:18.865442+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353501184 unmapped: 90054656 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:19.865571+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353501184 unmapped: 90054656 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:20.865790+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353509376 unmapped: 90046464 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:21.865955+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353509376 unmapped: 90046464 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:22.866141+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353509376 unmapped: 90046464 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:23.866304+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353509376 unmapped: 90046464 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:24.866469+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353509376 unmapped: 90046464 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:25.866661+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353509376 unmapped: 90046464 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:26.866826+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353517568 unmapped: 90038272 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:27.866954+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353517568 unmapped: 90038272 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:28.867141+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353517568 unmapped: 90038272 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:29.867317+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353525760 unmapped: 90030080 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:30.867521+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353525760 unmapped: 90030080 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:31.867803+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 90021888 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:32.867933+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353533952 unmapped: 90021888 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 599.755249023s of 600.130493164s, submitted: 90
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:33.868061+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353542144 unmapped: 90013696 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:34.868186+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353542144 unmapped: 90013696 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:35.868367+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 89964544 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:36.868530+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 89964544 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:37.868679+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 89964544 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:38.868865+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 89964544 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:39.868983+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 89964544 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:40.869124+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 89964544 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:41.869263+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353591296 unmapped: 89964544 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:42.869430+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 89956352 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:43.869593+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 89956352 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:44.869798+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 89956352 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:45.869939+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 89956352 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:46.870151+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353599488 unmapped: 89956352 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:47.870339+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1200128
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 89948160 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:48.870583+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 89948160 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:49.870860+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 89948160 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:50.871037+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 89948160 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:51.871211+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353607680 unmapped: 89948160 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:52.871492+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 89939968 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:53.871749+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 89939968 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:54.871963+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 89939968 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:55.872188+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 89939968 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:56.872478+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 89939968 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:57.872689+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 89931776 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:58.872902+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 89931776 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:59.873073+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 89931776 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:00.873220+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 89923584 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:01.873387+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 89923584 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:02.873578+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 89923584 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:03.873735+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 89923584 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:04.873946+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 89923584 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:05.874122+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 89923584 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:06.874339+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353640448 unmapped: 89915392 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:07.874479+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353640448 unmapped: 89915392 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:08.874727+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353640448 unmapped: 89915392 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:09.874857+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353640448 unmapped: 89915392 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:10.874966+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353640448 unmapped: 89915392 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:11.875130+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353648640 unmapped: 89907200 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:12.875319+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353648640 unmapped: 89907200 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:13.875483+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353648640 unmapped: 89907200 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:14.875699+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353648640 unmapped: 89907200 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:15.875826+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353648640 unmapped: 89907200 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:16.876000+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353648640 unmapped: 89907200 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:17.876142+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353648640 unmapped: 89907200 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:18.876285+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353648640 unmapped: 89907200 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:19.876440+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353648640 unmapped: 89907200 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:20.876564+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353648640 unmapped: 89907200 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:21.876705+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353648640 unmapped: 89907200 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:22.876838+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353656832 unmapped: 89899008 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:23.876999+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353656832 unmapped: 89899008 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:24.877166+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353656832 unmapped: 89899008 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:25.877301+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353665024 unmapped: 89890816 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:26.877589+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353665024 unmapped: 89890816 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:27.877769+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353665024 unmapped: 89890816 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:28.877938+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353665024 unmapped: 89890816 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:29.878072+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353665024 unmapped: 89890816 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:30.878210+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353673216 unmapped: 89882624 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:31.878338+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353673216 unmapped: 89882624 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:32.878462+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353681408 unmapped: 89874432 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:33.878583+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353681408 unmapped: 89874432 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:34.878703+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353681408 unmapped: 89874432 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:35.878830+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353681408 unmapped: 89874432 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:36.878992+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353681408 unmapped: 89874432 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:37.879147+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353681408 unmapped: 89874432 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:38.879275+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353689600 unmapped: 89866240 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:39.879451+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353689600 unmapped: 89866240 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:40.879578+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353689600 unmapped: 89866240 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:41.879729+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353689600 unmapped: 89866240 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:42.879892+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353689600 unmapped: 89866240 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:43.880088+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353697792 unmapped: 89858048 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:44.880278+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353697792 unmapped: 89858048 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:45.880473+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353697792 unmapped: 89858048 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:46.880736+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353697792 unmapped: 89858048 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:47.880883+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353697792 unmapped: 89858048 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:48.881027+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353714176 unmapped: 89841664 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:49.881190+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353714176 unmapped: 89841664 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:50.881332+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353714176 unmapped: 89841664 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:51.881518+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353714176 unmapped: 89841664 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:52.881726+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353714176 unmapped: 89841664 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:53.881871+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353714176 unmapped: 89841664 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:54.882223+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353714176 unmapped: 89841664 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:55.882499+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353714176 unmapped: 89841664 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:56.882739+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353722368 unmapped: 89833472 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:57.882864+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353722368 unmapped: 89833472 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:58.882993+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353722368 unmapped: 89833472 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:59.883128+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353722368 unmapped: 89833472 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:00.883314+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353722368 unmapped: 89833472 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:01.883466+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353722368 unmapped: 89833472 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:02.883587+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353722368 unmapped: 89833472 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:03.883766+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353722368 unmapped: 89833472 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:04.883859+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353730560 unmapped: 89825280 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:05.883980+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353738752 unmapped: 89817088 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:06.884174+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353738752 unmapped: 89817088 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:07.884394+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353738752 unmapped: 89817088 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:08.884517+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353738752 unmapped: 89817088 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:09.884755+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353738752 unmapped: 89817088 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:10.884896+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353746944 unmapped: 89808896 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:11.885041+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353746944 unmapped: 89808896 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:12.885253+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353746944 unmapped: 89808896 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:13.885432+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353746944 unmapped: 89808896 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:14.885602+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353755136 unmapped: 89800704 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:15.885724+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353755136 unmapped: 89800704 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:16.885947+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353755136 unmapped: 89800704 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:17.886175+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353755136 unmapped: 89800704 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:18.886301+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353755136 unmapped: 89800704 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:19.886466+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353763328 unmapped: 89792512 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:20.886675+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353771520 unmapped: 89784320 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:21.886884+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353771520 unmapped: 89784320 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:22.887011+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353771520 unmapped: 89784320 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:23.887169+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353779712 unmapped: 89776128 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:24.887336+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353779712 unmapped: 89776128 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:25.887461+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353779712 unmapped: 89776128 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:26.887734+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353779712 unmapped: 89776128 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:27.887884+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353779712 unmapped: 89776128 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:28.888009+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353796096 unmapped: 89759744 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:29.888224+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353796096 unmapped: 89759744 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:30.888378+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353796096 unmapped: 89759744 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:31.888508+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353796096 unmapped: 89759744 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:32.888740+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353796096 unmapped: 89759744 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:33.888859+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353796096 unmapped: 89759744 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:34.889032+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353796096 unmapped: 89759744 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:35.889236+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353804288 unmapped: 89751552 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:36.889474+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353804288 unmapped: 89751552 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:37.889753+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353804288 unmapped: 89751552 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:38.889945+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353812480 unmapped: 89743360 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:39.890118+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353820672 unmapped: 89735168 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:40.890252+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353820672 unmapped: 89735168 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:41.890391+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353820672 unmapped: 89735168 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:42.890590+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353820672 unmapped: 89735168 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:43.890736+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353820672 unmapped: 89735168 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:44.890866+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353820672 unmapped: 89735168 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:45.890999+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353820672 unmapped: 89735168 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:46.891168+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 89726976 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:47.891294+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 89726976 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:48.891445+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 89726976 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:49.891575+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 89726976 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:50.891731+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 89726976 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:51.891876+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353828864 unmapped: 89726976 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:52.892041+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353837056 unmapped: 89718784 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:53.892198+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353837056 unmapped: 89718784 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:54.892325+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353837056 unmapped: 89718784 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:55.892448+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353845248 unmapped: 89710592 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:56.892757+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353845248 unmapped: 89710592 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:57.892999+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353845248 unmapped: 89710592 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:58.893165+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353845248 unmapped: 89710592 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:59.893283+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353845248 unmapped: 89710592 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:00.893432+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353861632 unmapped: 89694208 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:01.893592+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 89686016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:02.893736+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 89686016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:03.893874+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 89686016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:04.894049+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 89686016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:05.894197+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 89686016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:06.894400+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 89686016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:07.894582+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 89686016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:08.894723+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 89686016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:09.894846+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353878016 unmapped: 89677824 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:10.895016+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353878016 unmapped: 89677824 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:11.895143+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353878016 unmapped: 89677824 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:12.895275+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353878016 unmapped: 89677824 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:13.895456+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353878016 unmapped: 89677824 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:14.895616+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353878016 unmapped: 89677824 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:15.895907+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353886208 unmapped: 89669632 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:16.896227+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353894400 unmapped: 89661440 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:17.896432+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353894400 unmapped: 89661440 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:18.896563+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353894400 unmapped: 89661440 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:19.896812+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353902592 unmapped: 89653248 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:20.896970+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353902592 unmapped: 89653248 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:21.897145+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353902592 unmapped: 89653248 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:22.897392+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353902592 unmapped: 89653248 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:23.897582+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353910784 unmapped: 89645056 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:24.897711+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353918976 unmapped: 89636864 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:25.897836+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353918976 unmapped: 89636864 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:26.898057+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353918976 unmapped: 89636864 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:27.898175+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353927168 unmapped: 89628672 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:28.898386+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353927168 unmapped: 89628672 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:29.898552+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353927168 unmapped: 89628672 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:30.898732+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353935360 unmapped: 89620480 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:31.898923+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353935360 unmapped: 89620480 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:32.899127+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353935360 unmapped: 89620480 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:33.899273+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353935360 unmapped: 89620480 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:34.899367+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 89612288 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:35.899541+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 89612288 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:36.899807+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 89612288 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:37.899975+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 89612288 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:38.900152+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 89612288 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:39.900295+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 89612288 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:40.900445+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353951744 unmapped: 89604096 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:41.900584+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353951744 unmapped: 89604096 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:42.900733+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353951744 unmapped: 89604096 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:43.900916+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353959936 unmapped: 89595904 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:44.901085+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353959936 unmapped: 89595904 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:45.901212+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353959936 unmapped: 89595904 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:46.901408+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353968128 unmapped: 89587712 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:47.901549+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353968128 unmapped: 89587712 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:48.901705+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353968128 unmapped: 89587712 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:49.901868+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353976320 unmapped: 89579520 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:50.902022+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353976320 unmapped: 89579520 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:51.902210+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353976320 unmapped: 89579520 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:52.902411+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353976320 unmapped: 89579520 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:53.902555+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353976320 unmapped: 89579520 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:54.902743+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353976320 unmapped: 89579520 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:55.902867+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353976320 unmapped: 89579520 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:56.903035+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353984512 unmapped: 89571328 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:57.903168+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353984512 unmapped: 89571328 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:58.903323+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353984512 unmapped: 89571328 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:59.903476+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353984512 unmapped: 89571328 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:00.903674+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353984512 unmapped: 89571328 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:01.903801+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353984512 unmapped: 89571328 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:02.904040+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353992704 unmapped: 89563136 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:03.904186+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 353992704 unmapped: 89563136 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:04.904331+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354000896 unmapped: 89554944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:05.904490+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354009088 unmapped: 89546752 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:06.904714+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354009088 unmapped: 89546752 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:07.904870+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354009088 unmapped: 89546752 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:08.905087+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354009088 unmapped: 89546752 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:09.905250+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354009088 unmapped: 89546752 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:10.905369+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354009088 unmapped: 89546752 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:11.905518+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354017280 unmapped: 89538560 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:12.905723+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354025472 unmapped: 89530368 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:13.905850+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354025472 unmapped: 89530368 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:14.906024+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354033664 unmapped: 89522176 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:15.906218+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354033664 unmapped: 89522176 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:16.906418+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354033664 unmapped: 89522176 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:17.906588+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354033664 unmapped: 89522176 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:18.906759+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354033664 unmapped: 89522176 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:19.906920+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354041856 unmapped: 89513984 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:20.907070+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354041856 unmapped: 89513984 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:21.907223+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 89505792 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:22.907366+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 89505792 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:23.907546+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 89505792 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:24.907732+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 89505792 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:25.908019+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 89505792 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:26.908269+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 89505792 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:27.908460+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354050048 unmapped: 89505792 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:28.908595+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354074624 unmapped: 89481216 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:29.908744+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354074624 unmapped: 89481216 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:30.908912+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354074624 unmapped: 89481216 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:31.909046+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354074624 unmapped: 89481216 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:32.909212+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354074624 unmapped: 89481216 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:33.909391+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354074624 unmapped: 89481216 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:34.909549+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354074624 unmapped: 89481216 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:35.909708+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354074624 unmapped: 89481216 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:36.909940+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354082816 unmapped: 89473024 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:37.910120+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354082816 unmapped: 89473024 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:38.910257+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354091008 unmapped: 89464832 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:39.910514+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354091008 unmapped: 89464832 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:40.910728+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354091008 unmapped: 89464832 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:41.910862+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354091008 unmapped: 89464832 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:42.910985+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354091008 unmapped: 89464832 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:43.911170+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354091008 unmapped: 89464832 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:44.911314+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354099200 unmapped: 89456640 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:45.911482+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354099200 unmapped: 89456640 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:46.911682+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354099200 unmapped: 89456640 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:47.911849+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354099200 unmapped: 89456640 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:48.911996+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354099200 unmapped: 89456640 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:49.912210+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354099200 unmapped: 89456640 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:50.912380+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354107392 unmapped: 89448448 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:51.912583+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354115584 unmapped: 89440256 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:52.912726+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354115584 unmapped: 89440256 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:53.912861+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354115584 unmapped: 89440256 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:54.913015+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354123776 unmapped: 89432064 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:55.913172+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354123776 unmapped: 89432064 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:56.913322+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354123776 unmapped: 89432064 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:57.913454+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354123776 unmapped: 89432064 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:58.913693+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354123776 unmapped: 89432064 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:59.913818+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354123776 unmapped: 89432064 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:00.913948+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354131968 unmapped: 89423872 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:01.914109+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354131968 unmapped: 89423872 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:02.914274+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 89415680 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:03.914521+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 89415680 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:04.914756+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 89415680 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:05.914886+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 89415680 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:06.915042+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354140160 unmapped: 89415680 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:07.915168+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354148352 unmapped: 89407488 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:08.915287+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354148352 unmapped: 89407488 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:09.915393+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354164736 unmapped: 89391104 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:10.915559+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354164736 unmapped: 89391104 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:11.915704+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354164736 unmapped: 89391104 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:12.915836+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354164736 unmapped: 89391104 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:13.915957+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354164736 unmapped: 89391104 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:14.916071+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354164736 unmapped: 89391104 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:15.916203+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:16.916380+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354172928 unmapped: 89382912 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:17.916562+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354172928 unmapped: 89382912 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:18.916739+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354172928 unmapped: 89382912 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:19.916954+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354181120 unmapped: 89374720 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:20.917141+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354181120 unmapped: 89374720 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:21.917313+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354181120 unmapped: 89374720 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:22.917451+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354181120 unmapped: 89374720 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:23.917592+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354181120 unmapped: 89374720 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:24.917805+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354189312 unmapped: 89366528 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:25.917930+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354197504 unmapped: 89358336 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:26.918171+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354197504 unmapped: 89358336 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:27.918341+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354197504 unmapped: 89358336 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:28.918596+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354197504 unmapped: 89358336 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:29.918773+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354197504 unmapped: 89358336 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:30.918977+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354197504 unmapped: 89358336 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:31.919169+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354197504 unmapped: 89358336 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:32.919365+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 89350144 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:33.919555+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 89350144 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:34.919694+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354205696 unmapped: 89350144 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:35.919892+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354213888 unmapped: 89341952 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:36.920123+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354213888 unmapped: 89341952 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:37.920332+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354213888 unmapped: 89341952 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:38.920543+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354222080 unmapped: 89333760 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:39.920704+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354222080 unmapped: 89333760 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:40.920852+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354222080 unmapped: 89333760 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:41.920979+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 89317376 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:42.921199+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 89317376 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:43.921339+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 89317376 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:44.921561+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 89317376 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:45.921718+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 89317376 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:46.921885+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 89317376 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:47.922098+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 89317376 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:48.922340+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 89317376 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:49.922543+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354246656 unmapped: 89309184 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:50.922736+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354246656 unmapped: 89309184 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:51.924230+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354246656 unmapped: 89309184 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:52.924510+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354246656 unmapped: 89309184 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:53.924693+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354246656 unmapped: 89309184 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:54.924882+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354254848 unmapped: 89300992 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:55.925092+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354254848 unmapped: 89300992 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:56.925334+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354254848 unmapped: 89300992 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:57.925494+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354263040 unmapped: 89292800 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:58.925730+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354263040 unmapped: 89292800 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:59.925947+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354271232 unmapped: 89284608 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:00.926155+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354271232 unmapped: 89284608 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:01.926322+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354271232 unmapped: 89284608 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:02.926453+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354271232 unmapped: 89284608 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:03.926589+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354271232 unmapped: 89284608 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:04.926732+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354271232 unmapped: 89284608 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:05.926919+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354279424 unmapped: 89276416 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:06.927126+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354287616 unmapped: 89268224 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:07.927301+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354287616 unmapped: 89268224 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:08.927469+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354287616 unmapped: 89268224 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:09.927654+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354287616 unmapped: 89268224 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:10.927779+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354295808 unmapped: 89260032 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:11.927957+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354295808 unmapped: 89260032 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:12.928166+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354295808 unmapped: 89260032 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:13.928317+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354304000 unmapped: 89251840 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:14.928455+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354304000 unmapped: 89251840 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:15.928826+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354304000 unmapped: 89251840 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:16.929068+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354304000 unmapped: 89251840 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:17.929297+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354304000 unmapped: 89251840 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:18.929532+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354304000 unmapped: 89251840 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:19.929740+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354312192 unmapped: 89243648 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:20.929935+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354312192 unmapped: 89243648 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:21.930109+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354320384 unmapped: 89235456 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:22.930304+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354320384 unmapped: 89235456 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:23.930557+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354320384 unmapped: 89235456 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:24.930758+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354320384 unmapped: 89235456 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:25.931015+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354320384 unmapped: 89235456 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:26.931345+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354328576 unmapped: 89227264 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:27.931578+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354328576 unmapped: 89227264 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:28.931796+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354328576 unmapped: 89227264 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:29.931999+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354328576 unmapped: 89227264 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:30.932192+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354328576 unmapped: 89227264 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:31.932358+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354336768 unmapped: 89219072 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:32.932528+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354336768 unmapped: 89219072 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:33.932781+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354336768 unmapped: 89219072 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating renewing rotating keys (they expired before 2025-11-25T17:51:34.932943+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _finish_auth 0
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:34.934609+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354336768 unmapped: 89219072 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:35.933188+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 89210880 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:36.933514+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 89210880 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:37.933725+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 89210880 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:38.933960+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 89210880 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:39.934177+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 89194496 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:40.934424+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 89194496 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:41.934622+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 89194496 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:42.934833+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 89194496 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:43.935022+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354361344 unmapped: 89194496 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:44.935264+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354369536 unmapped: 89186304 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:45.935459+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354369536 unmapped: 89186304 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:46.935781+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354369536 unmapped: 89186304 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:47.935995+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354369536 unmapped: 89186304 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:48.936156+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354369536 unmapped: 89186304 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:49.936352+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354369536 unmapped: 89186304 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:50.936509+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354369536 unmapped: 89186304 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:51.936741+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354377728 unmapped: 89178112 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:52.936954+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354385920 unmapped: 89169920 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:53.937158+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354385920 unmapped: 89169920 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:54.937399+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354385920 unmapped: 89169920 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:55.937575+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354394112 unmapped: 89161728 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:56.937847+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354394112 unmapped: 89161728 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:57.938072+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354394112 unmapped: 89161728 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:58.938274+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354394112 unmapped: 89161728 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:59.938470+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354394112 unmapped: 89161728 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:00.938713+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 89145344 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:01.939690+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 89145344 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:02.939944+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 89145344 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:03.940138+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 89145344 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:04.940324+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 89145344 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:05.940480+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 89145344 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:06.940752+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354410496 unmapped: 89145344 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:07.941155+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354418688 unmapped: 89137152 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:08.941421+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354418688 unmapped: 89137152 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:09.941615+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354418688 unmapped: 89137152 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:10.941818+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354426880 unmapped: 89128960 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:11.942008+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354426880 unmapped: 89128960 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:12.942222+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354426880 unmapped: 89128960 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:13.942424+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354426880 unmapped: 89128960 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:14.942590+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354426880 unmapped: 89128960 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:15.942797+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354426880 unmapped: 89128960 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:16.943045+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354435072 unmapped: 89120768 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:17.943206+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354435072 unmapped: 89120768 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:18.943395+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354443264 unmapped: 89112576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:19.943594+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354443264 unmapped: 89112576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:20.944371+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354443264 unmapped: 89112576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:21.944575+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354443264 unmapped: 89112576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:22.944737+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354443264 unmapped: 89112576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:23.944944+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354451456 unmapped: 89104384 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:24.945089+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354459648 unmapped: 89096192 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:25.945273+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354459648 unmapped: 89096192 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:26.945487+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354459648 unmapped: 89096192 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:27.945665+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354459648 unmapped: 89096192 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:28.945827+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354459648 unmapped: 89096192 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:29.945977+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354459648 unmapped: 89096192 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:30.946182+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 89088000 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:31.946379+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 89088000 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:32.946581+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354467840 unmapped: 89088000 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:33.946738+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354476032 unmapped: 89079808 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:34.946933+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354484224 unmapped: 89071616 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:35.947134+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354484224 unmapped: 89071616 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:36.947330+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354484224 unmapped: 89071616 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:37.947492+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354484224 unmapped: 89071616 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:38.947759+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354484224 unmapped: 89071616 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:39.947927+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354484224 unmapped: 89071616 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:40.948162+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 89063424 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:41.948362+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 89063424 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:42.948566+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354500608 unmapped: 89055232 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:43.948704+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354500608 unmapped: 89055232 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:44.948909+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354500608 unmapped: 89055232 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:45.949095+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354500608 unmapped: 89055232 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:46.949350+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354500608 unmapped: 89055232 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:47.949529+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354500608 unmapped: 89055232 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:48.949701+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354516992 unmapped: 89038848 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:49.949887+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354516992 unmapped: 89038848 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:50.950056+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354525184 unmapped: 89030656 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:51.950237+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354525184 unmapped: 89030656 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:52.950469+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354525184 unmapped: 89030656 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:53.950711+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354525184 unmapped: 89030656 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:54.950963+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354533376 unmapped: 89022464 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:55.951109+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354533376 unmapped: 89022464 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:56.951310+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354541568 unmapped: 89014272 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:57.951492+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354541568 unmapped: 89014272 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:58.951704+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354541568 unmapped: 89014272 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:59.951896+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354541568 unmapped: 89014272 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:00.952097+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354541568 unmapped: 89014272 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:01.952277+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354541568 unmapped: 89014272 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:02.952519+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354541568 unmapped: 89014272 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:03.952737+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354541568 unmapped: 89014272 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:04.952998+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354549760 unmapped: 89006080 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:05.953263+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354549760 unmapped: 89006080 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:06.953531+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 88997888 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:07.953719+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 88997888 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:08.953964+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 88997888 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:09.954166+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354557952 unmapped: 88997888 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:10.954316+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354566144 unmapped: 88989696 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:11.954478+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354566144 unmapped: 88989696 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:12.954686+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354566144 unmapped: 88989696 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:13.954860+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354566144 unmapped: 88989696 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:14.955026+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354574336 unmapped: 88981504 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:15.955193+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354574336 unmapped: 88981504 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:16.955387+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354574336 unmapped: 88981504 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:17.955575+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354574336 unmapped: 88981504 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:18.955762+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354574336 unmapped: 88981504 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:19.955955+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7201.2 total, 600.0 interval
                                           Cumulative writes: 47K writes, 185K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 47K writes, 16K syncs, 2.79 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354574336 unmapped: 88981504 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:20.956121+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354582528 unmapped: 88973312 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:21.956295+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354582528 unmapped: 88973312 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:22.956426+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354590720 unmapped: 88965120 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:23.956590+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354590720 unmapped: 88965120 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:24.956812+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354590720 unmapped: 88965120 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:25.956979+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354590720 unmapped: 88965120 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:26.957113+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354590720 unmapped: 88965120 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:27.957256+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354590720 unmapped: 88965120 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:28.957417+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 88948736 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:29.957615+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 88948736 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:30.957799+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354615296 unmapped: 88940544 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:31.958015+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354615296 unmapped: 88940544 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:32.958154+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354615296 unmapped: 88940544 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:33.958375+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354615296 unmapped: 88940544 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:34.958757+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354615296 unmapped: 88940544 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:35.958997+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354615296 unmapped: 88940544 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:36.959235+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354623488 unmapped: 88932352 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:37.959410+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354623488 unmapped: 88932352 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:38.959579+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354623488 unmapped: 88932352 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:39.959729+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354623488 unmapped: 88932352 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:40.959873+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354623488 unmapped: 88932352 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:41.960060+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354631680 unmapped: 88924160 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:42.960187+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354631680 unmapped: 88924160 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:43.960374+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354631680 unmapped: 88924160 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:44.960720+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354631680 unmapped: 88924160 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:45.960857+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354631680 unmapped: 88924160 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:46.961072+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354639872 unmapped: 88915968 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:47.961213+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354639872 unmapped: 88915968 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:48.961356+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354639872 unmapped: 88915968 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:49.961493+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:50.961692+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354639872 unmapped: 88915968 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:51.961876+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354639872 unmapped: 88915968 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:52.970036+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354639872 unmapped: 88915968 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 ms_handle_reset con 0x5618decf7800 session 0x5618ddf62960
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: handle_auth_request added challenge on 0x5618ddfa4c00
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:53.970194+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354656256 unmapped: 88899584 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: mgrc ms_handle_reset ms_handle_reset con 0x5618ded40c00
Nov 25 18:14:10 compute-0 ceph-osd[89991]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1812945073
Nov 25 18:14:10 compute-0 ceph-osd[89991]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1812945073,v1:192.168.122.100:6801/1812945073]
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: get_auth_request con 0x5618dcf1f400 auth_method 0
Nov 25 18:14:10 compute-0 ceph-osd[89991]: mgrc handle_mgr_configure stats_period=5
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:54.970447+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354721792 unmapped: 88834048 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 ms_handle_reset con 0x5618ded13800 session 0x5618de7d85a0
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: handle_auth_request added challenge on 0x5618eaf4fc00
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 ms_handle_reset con 0x5618e59c9c00 session 0x5618de7d81e0
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: handle_auth_request added challenge on 0x5618dc2a3800
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:55.970602+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354721792 unmapped: 88834048 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:56.970827+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354721792 unmapped: 88834048 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:57.971156+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354721792 unmapped: 88834048 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:58.971318+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354729984 unmapped: 88825856 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:59.971563+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354729984 unmapped: 88825856 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:00.971815+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354729984 unmapped: 88825856 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:01.971966+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354738176 unmapped: 88817664 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:02.972209+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354738176 unmapped: 88817664 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:03.972359+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354738176 unmapped: 88817664 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:04.972528+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354738176 unmapped: 88817664 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:05.972697+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354738176 unmapped: 88817664 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:06.972895+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354738176 unmapped: 88817664 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:07.973053+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354738176 unmapped: 88817664 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:08.973257+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354746368 unmapped: 88809472 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:09.973485+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354746368 unmapped: 88809472 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:10.973698+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354754560 unmapped: 88801280 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:11.973886+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354754560 unmapped: 88801280 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:12.974043+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354754560 unmapped: 88801280 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:13.974215+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354754560 unmapped: 88801280 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:14.974418+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354754560 unmapped: 88801280 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:15.974560+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354754560 unmapped: 88801280 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:16.974796+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354754560 unmapped: 88801280 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:17.974950+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354762752 unmapped: 88793088 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:18.975085+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354762752 unmapped: 88793088 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:19.975213+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354770944 unmapped: 88784896 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:20.975358+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354770944 unmapped: 88784896 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:21.975541+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354770944 unmapped: 88784896 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:22.975768+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354770944 unmapped: 88784896 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:23.975916+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354770944 unmapped: 88784896 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:24.976052+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354770944 unmapped: 88784896 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:25.976235+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354779136 unmapped: 88776704 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:26.976555+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354779136 unmapped: 88776704 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:27.976800+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 88768512 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:28.977108+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 88768512 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:29.977336+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 88768512 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:30.977511+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 88768512 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:31.977707+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 88768512 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:32.977849+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 88768512 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:33.978059+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354803712 unmapped: 88752128 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:34.978330+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354803712 unmapped: 88752128 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:35.978468+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354803712 unmapped: 88752128 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:36.978672+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354803712 unmapped: 88752128 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:37.978853+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354803712 unmapped: 88752128 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:38.979099+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354811904 unmapped: 88743936 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:39.979252+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354820096 unmapped: 88735744 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:40.979385+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354820096 unmapped: 88735744 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:41.979537+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354820096 unmapped: 88735744 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:42.979693+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354820096 unmapped: 88735744 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:43.979897+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354828288 unmapped: 88727552 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:44.980024+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354828288 unmapped: 88727552 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:45.980143+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354828288 unmapped: 88727552 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:46.980336+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354828288 unmapped: 88727552 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:47.980513+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354828288 unmapped: 88727552 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:48.980685+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354828288 unmapped: 88727552 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:49.980830+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354828288 unmapped: 88727552 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:50.980984+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354836480 unmapped: 88719360 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:51.981224+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354836480 unmapped: 88719360 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:52.981392+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354836480 unmapped: 88719360 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:53.981579+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354836480 unmapped: 88719360 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:54.981829+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354836480 unmapped: 88719360 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:55.982002+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354844672 unmapped: 88711168 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:56.982242+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354844672 unmapped: 88711168 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:57.982448+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354852864 unmapped: 88702976 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:58.982608+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354852864 unmapped: 88702976 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:59.982793+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354852864 unmapped: 88702976 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:00.982976+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354852864 unmapped: 88702976 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:01.983119+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354852864 unmapped: 88702976 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:02.983326+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354861056 unmapped: 88694784 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:03.983478+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 88686592 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:04.983749+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 88686592 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:05.983927+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 88686592 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:06.984140+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 88686592 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:07.984308+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 88686592 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:08.984556+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 88686592 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:09.984758+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 88686592 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:10.984967+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 88686592 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:11.985138+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354869248 unmapped: 88686592 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:12.985279+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354885632 unmapped: 88670208 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:13.985469+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354885632 unmapped: 88670208 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:14.985720+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354885632 unmapped: 88670208 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:15.985889+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354893824 unmapped: 88662016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:16.986102+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354893824 unmapped: 88662016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:17.986288+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354893824 unmapped: 88662016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:18.986453+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354893824 unmapped: 88662016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:19.986630+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354893824 unmapped: 88662016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:20.986816+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354893824 unmapped: 88662016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:21.987041+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354893824 unmapped: 88662016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:22.987263+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354893824 unmapped: 88662016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:23.987451+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 ms_handle_reset con 0x5618ded41c00 session 0x5618dcf27a40
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: handle_auth_request added challenge on 0x5618e77bc400
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354893824 unmapped: 88662016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:24.987633+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354893824 unmapped: 88662016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:25.987856+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354893824 unmapped: 88662016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:26.988110+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354902016 unmapped: 88653824 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:27.988405+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354902016 unmapped: 88653824 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:28.988696+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354918400 unmapped: 88637440 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:29.988851+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1204224
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 596.947326660s of 597.180541992s, submitted: 90
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 354926592 unmapped: 88629248 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e9063000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:30.989028+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351789056 unmapped: 91766784 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:31.989194+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351789056 unmapped: 91766784 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:32.989360+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351789056 unmapped: 91766784 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:33.989533+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351789056 unmapped: 91766784 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:34.989708+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351789056 unmapped: 91766784 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:35.989842+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351813632 unmapped: 91742208 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,1])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:36.990345+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351821824 unmapped: 91734016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:37.990581+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351821824 unmapped: 91734016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:38.990851+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351821824 unmapped: 91734016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:39.991075+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351821824 unmapped: 91734016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:40.991233+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351821824 unmapped: 91734016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:41.991427+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351821824 unmapped: 91734016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:42.991610+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351821824 unmapped: 91734016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:43.991766+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351821824 unmapped: 91734016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:44.991924+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351821824 unmapped: 91734016 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:45.992193+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351830016 unmapped: 91725824 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:46.992383+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351830016 unmapped: 91725824 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:47.992592+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351830016 unmapped: 91725824 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:48.992825+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351830016 unmapped: 91725824 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:49.993012+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351830016 unmapped: 91725824 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:50.993388+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351830016 unmapped: 91725824 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:51.993612+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351830016 unmapped: 91725824 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:52.993861+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351830016 unmapped: 91725824 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:53.994074+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351830016 unmapped: 91725824 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:54.994353+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 91717632 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:55.994725+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 91717632 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:56.995037+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 91717632 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:57.995242+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 91717632 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:58.995479+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 91717632 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:59.995703+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 91717632 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:00.995924+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 91709440 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:01.996159+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 91709440 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:02.996405+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 91709440 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:03.996632+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 91709440 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:04.997122+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 91709440 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:05.997342+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 91709440 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:06.997554+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 91709440 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:07.997825+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 91709440 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:08.998067+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 91709440 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:09.998345+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351854592 unmapped: 91701248 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:10.998561+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351854592 unmapped: 91701248 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:11.998798+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351854592 unmapped: 91701248 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:12.998973+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351854592 unmapped: 91701248 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:13.999177+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351854592 unmapped: 91701248 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:14.999345+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351854592 unmapped: 91701248 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:15.999523+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351854592 unmapped: 91701248 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:16.999774+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351862784 unmapped: 91693056 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:17.999971+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351862784 unmapped: 91693056 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:19.000133+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351862784 unmapped: 91693056 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:20.000294+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351862784 unmapped: 91693056 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:21.000460+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351862784 unmapped: 91693056 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:22.000617+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351862784 unmapped: 91693056 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:23.000830+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351862784 unmapped: 91693056 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:24.001010+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351870976 unmapped: 91684864 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:25.001276+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351870976 unmapped: 91684864 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:26.001477+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351870976 unmapped: 91684864 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:27.001734+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351870976 unmapped: 91684864 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:28.001924+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351870976 unmapped: 91684864 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:29.002150+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351870976 unmapped: 91684864 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:30.002373+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351870976 unmapped: 91684864 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:31.002553+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351870976 unmapped: 91684864 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:32.002763+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351870976 unmapped: 91684864 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:33.002928+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351870976 unmapped: 91684864 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:34.003117+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351870976 unmapped: 91684864 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:35.003291+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351870976 unmapped: 91684864 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:36.003509+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351870976 unmapped: 91684864 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:37.003738+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351870976 unmapped: 91684864 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:38.003934+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351870976 unmapped: 91684864 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:39.004128+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351879168 unmapped: 91676672 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:40.004299+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351879168 unmapped: 91676672 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:41.004479+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351887360 unmapped: 91668480 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:42.004630+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351887360 unmapped: 91668480 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:43.004834+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351887360 unmapped: 91668480 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:44.005007+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351887360 unmapped: 91668480 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:45.005141+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351887360 unmapped: 91668480 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:46.005332+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351887360 unmapped: 91668480 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:47.005549+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351887360 unmapped: 91668480 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:48.005719+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351887360 unmapped: 91668480 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:49.005913+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 91660288 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:50.006070+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 91660288 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:51.006259+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 91660288 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:52.006450+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 91660288 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:53.006675+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 91660288 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:54.006868+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 91660288 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:55.007038+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 91660288 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:56.007195+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 91660288 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:57.007361+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 91660288 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:58.007521+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351895552 unmapped: 91660288 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:59.007743+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 91652096 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:00.007875+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 91652096 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:01.008013+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 91652096 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:02.008184+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 91652096 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:03.008324+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 91652096 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:04.008466+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 91652096 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:05.008757+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 91652096 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:06.008983+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 91652096 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:07.009196+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 91652096 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:08.009419+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 91652096 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:09.009621+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 91652096 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:10.009815+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 91652096 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:11.009948+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 91652096 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:12.010189+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351903744 unmapped: 91652096 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:13.010362+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 91643904 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:14.010515+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 91643904 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:15.010689+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 91643904 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:16.010869+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 91643904 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:17.011095+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 91643904 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:18.011321+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 91643904 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:19.011561+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 91643904 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:20.011747+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351911936 unmapped: 91643904 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:21.011971+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 91635712 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:22.012132+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 91635712 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:23.012306+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 91635712 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:24.012579+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 91635712 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:25.012810+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 91635712 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:26.012989+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 91635712 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:27.013218+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 91635712 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:28.013464+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 91635712 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:29.013695+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 91635712 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:30.014000+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351920128 unmapped: 91635712 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:31.014243+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351928320 unmapped: 91627520 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:32.014444+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351928320 unmapped: 91627520 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:33.014735+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351928320 unmapped: 91627520 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:34.014913+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351928320 unmapped: 91627520 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:35.015175+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351928320 unmapped: 91627520 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:36.015432+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351928320 unmapped: 91627520 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:37.015687+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351928320 unmapped: 91627520 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:38.015881+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351928320 unmapped: 91627520 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:39.016085+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 91619328 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:40.016327+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 91619328 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:41.016578+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 91619328 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:42.016745+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 91619328 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:43.016939+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 91619328 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:44.017118+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 91619328 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:45.017289+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 91619328 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:46.017445+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 91619328 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:47.017731+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 91619328 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:48.017931+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 91619328 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:49.018116+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 91619328 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:50.018328+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 91619328 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:51.018541+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 91619328 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:52.018714+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 91619328 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:53.018847+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 91619328 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:54.018990+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351936512 unmapped: 91619328 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:55.019193+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351944704 unmapped: 91611136 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:56.019416+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351944704 unmapped: 91611136 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:57.019709+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351944704 unmapped: 91611136 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:58.019865+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351944704 unmapped: 91611136 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:59.020019+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351944704 unmapped: 91611136 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:00.020184+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351944704 unmapped: 91611136 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:01.020305+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 91602944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:02.020456+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 91602944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:03.020727+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 91602944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:04.020919+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 91602944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:05.021061+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 91602944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:06.021243+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 91602944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:07.022346+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 91602944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:08.022509+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 91602944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:09.023289+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 91602944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:10.023471+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:11.023948+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 91602944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:12.024106+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 91602944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:13.024335+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 91602944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:14.024527+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 91602944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:15.024820+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 91602944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:16.025055+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 91602944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:17.025364+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 91602944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:18.025598+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 91602944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:19.025788+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351952896 unmapped: 91602944 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:20.025952+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351961088 unmapped: 91594752 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:21.026177+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351961088 unmapped: 91594752 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:22.026436+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351961088 unmapped: 91594752 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:23.026622+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351961088 unmapped: 91594752 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:24.026871+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351961088 unmapped: 91594752 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:25.027034+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351961088 unmapped: 91594752 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:26.027226+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351961088 unmapped: 91594752 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:27.027488+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351961088 unmapped: 91594752 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:28.027694+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351961088 unmapped: 91594752 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:29.027890+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351961088 unmapped: 91594752 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:30.028113+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351961088 unmapped: 91594752 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:31.028368+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351961088 unmapped: 91594752 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:32.028541+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351961088 unmapped: 91594752 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:33.028711+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351961088 unmapped: 91594752 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:34.028941+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 91586560 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:35.029164+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 91586560 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:36.029327+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 91586560 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:37.029588+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 91586560 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:38.029818+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 91586560 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:39.030035+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351969280 unmapped: 91586560 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:40.030289+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351977472 unmapped: 91578368 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:41.030474+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351977472 unmapped: 91578368 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:42.030707+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351977472 unmapped: 91578368 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:43.030858+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351977472 unmapped: 91578368 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:44.031048+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351977472 unmapped: 91578368 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:45.031348+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351977472 unmapped: 91578368 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:46.031488+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351977472 unmapped: 91578368 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:47.031629+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351977472 unmapped: 91578368 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:48.031877+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351977472 unmapped: 91578368 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:49.032041+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351977472 unmapped: 91578368 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:50.032187+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351977472 unmapped: 91578368 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:51.032343+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351977472 unmapped: 91578368 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:52.032737+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 91570176 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:53.032935+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 91570176 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:54.033156+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 91570176 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:55.033313+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351985664 unmapped: 91570176 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:56.033475+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 91561984 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:57.033754+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 91561984 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:58.033942+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 91561984 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:59.034092+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 91561984 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:00.034283+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 91561984 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:01.034439+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 91561984 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:02.034699+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 91561984 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:03.034886+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 91561984 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:04.035029+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 91561984 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:05.035200+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 91561984 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:06.035399+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 91561984 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:07.035834+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 91561984 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:08.035985+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 91561984 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:09.036176+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 91561984 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:10.036383+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 91561984 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:11.036593+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 91561984 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:12.036708+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 351993856 unmapped: 91561984 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:13.036838+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 91553792 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:14.037021+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 91553792 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:15.037213+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 91553792 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:16.037355+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 91553792 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:17.037501+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 91553792 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:18.037701+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 91553792 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:19.037858+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 91553792 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:20.038010+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 91553792 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:21.038184+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352002048 unmapped: 91553792 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:22.038388+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 91545600 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:23.038559+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 91545600 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:24.038676+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 91545600 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:25.038856+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 91545600 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:26.039014+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 91545600 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:27.039257+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 91545600 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:28.039395+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352010240 unmapped: 91545600 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:29.039532+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 91537408 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:30.039706+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 91537408 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:31.039863+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 91537408 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:32.040055+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 91537408 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:33.040216+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 91537408 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:34.040367+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 91537408 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:35.040518+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 91537408 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:36.040698+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 91537408 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:37.040951+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 91537408 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:38.041078+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 91537408 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:39.041295+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 91537408 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:40.041436+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 91537408 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:41.041568+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 91537408 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:42.041694+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 91537408 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:43.041834+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 91537408 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:44.041962+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352018432 unmapped: 91537408 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:45.042122+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 91529216 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:46.042289+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 91529216 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:47.042470+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 91529216 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:48.042705+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 91529216 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:49.042865+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 91529216 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:50.043017+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352026624 unmapped: 91529216 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:51.043176+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 91521024 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:52.043334+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 91521024 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:53.043501+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 91521024 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:54.043708+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352034816 unmapped: 91521024 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:55.043838+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352043008 unmapped: 91512832 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:56.044010+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352043008 unmapped: 91512832 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:57.044186+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352043008 unmapped: 91512832 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:58.044350+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352043008 unmapped: 91512832 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:59.044499+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352043008 unmapped: 91512832 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:00.044623+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352043008 unmapped: 91512832 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:01.044763+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352043008 unmapped: 91512832 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:02.044929+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352043008 unmapped: 91512832 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:03.045077+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352043008 unmapped: 91512832 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:04.045258+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352043008 unmapped: 91512832 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:05.045443+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352043008 unmapped: 91512832 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:06.045680+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352043008 unmapped: 91512832 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:07.045940+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352043008 unmapped: 91512832 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:08.046123+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352051200 unmapped: 91504640 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:09.070548+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352051200 unmapped: 91504640 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:10.070711+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352051200 unmapped: 91504640 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:11.070840+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352051200 unmapped: 91504640 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:12.071049+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352051200 unmapped: 91504640 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:13.071313+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352051200 unmapped: 91504640 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:14.071557+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352051200 unmapped: 91504640 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:15.071728+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352051200 unmapped: 91504640 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:16.071960+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352051200 unmapped: 91504640 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:17.072293+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352051200 unmapped: 91504640 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:18.072532+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352051200 unmapped: 91504640 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:19.072716+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352051200 unmapped: 91504640 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:20.072884+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352051200 unmapped: 91504640 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:21.073013+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352051200 unmapped: 91504640 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:22.073147+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352051200 unmapped: 91504640 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:23.073273+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352051200 unmapped: 91504640 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:24.073476+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352051200 unmapped: 91504640 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:25.073624+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352051200 unmapped: 91504640 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:26.073781+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352067584 unmapped: 91488256 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:27.074275+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352067584 unmapped: 91488256 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:28.074424+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352067584 unmapped: 91488256 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:29.074618+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352067584 unmapped: 91488256 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:30.074788+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352067584 unmapped: 91488256 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:31.074920+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352067584 unmapped: 91488256 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:32.075088+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352067584 unmapped: 91488256 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:33.075261+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352067584 unmapped: 91488256 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:34.075459+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352067584 unmapped: 91488256 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:35.075594+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352067584 unmapped: 91488256 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:36.075810+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352067584 unmapped: 91488256 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:37.075988+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352067584 unmapped: 91488256 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:38.076129+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352067584 unmapped: 91488256 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:39.076345+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352067584 unmapped: 91488256 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:40.076544+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352067584 unmapped: 91488256 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:41.076701+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352075776 unmapped: 91480064 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:42.076957+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352075776 unmapped: 91480064 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:43.077135+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352075776 unmapped: 91480064 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:44.077414+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352075776 unmapped: 91480064 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:45.077610+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352075776 unmapped: 91480064 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:46.077796+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352075776 unmapped: 91480064 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:47.078040+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352075776 unmapped: 91480064 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:48.078290+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352075776 unmapped: 91480064 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:49.078455+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352075776 unmapped: 91480064 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:50.078678+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352075776 unmapped: 91480064 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:51.078872+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352083968 unmapped: 91471872 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:52.079041+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352083968 unmapped: 91471872 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:53.079219+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352083968 unmapped: 91471872 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:54.079471+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352083968 unmapped: 91471872 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:55.079712+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352092160 unmapped: 91463680 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:56.079895+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352092160 unmapped: 91463680 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:57.080084+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 91455488 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:58.080292+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 91455488 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:59.080457+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 91455488 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:00.080622+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 91455488 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:01.080811+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 91455488 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:02.080972+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 91455488 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:03.081135+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 91455488 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:04.081365+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 91455488 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:05.081581+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 91455488 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:06.081785+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 91455488 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:07.082015+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 91455488 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:08.082150+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 91455488 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:09.082321+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 91455488 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:10.082456+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 91455488 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:11.082595+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 91455488 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:12.082827+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 91455488 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:13.082975+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 91455488 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:14.083148+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352108544 unmapped: 91447296 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:15.083289+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352108544 unmapped: 91447296 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:16.083471+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352108544 unmapped: 91447296 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:17.083727+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352108544 unmapped: 91447296 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:18.083908+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352108544 unmapped: 91447296 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:19.084130+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352108544 unmapped: 91447296 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:20.084341+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352108544 unmapped: 91447296 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:21.084533+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352116736 unmapped: 91439104 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:22.084702+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352116736 unmapped: 91439104 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:23.084893+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352116736 unmapped: 91439104 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:24.085057+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352116736 unmapped: 91439104 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:25.085232+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352116736 unmapped: 91439104 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:26.085397+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352116736 unmapped: 91439104 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:27.085613+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352116736 unmapped: 91439104 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:28.085836+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352116736 unmapped: 91439104 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:29.086022+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 91430912 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:30.086203+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 91430912 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:31.086451+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 91430912 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:32.086607+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 91430912 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:33.086723+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 91430912 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:34.086928+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 91430912 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:35.087311+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 91430912 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:36.087447+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 91430912 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:37.087702+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 91430912 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:38.087815+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 91430912 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:39.088017+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 91430912 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:40.088185+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 91430912 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:41.088315+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 91430912 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:42.088453+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 91430912 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:43.088725+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 91430912 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:44.088914+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352124928 unmapped: 91430912 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:45.089080+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352141312 unmapped: 91414528 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:46.089246+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352141312 unmapped: 91414528 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:47.089433+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352141312 unmapped: 91414528 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:48.089619+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352141312 unmapped: 91414528 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:49.089799+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352141312 unmapped: 91414528 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:50.089945+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352141312 unmapped: 91414528 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:51.090116+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352141312 unmapped: 91414528 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:52.090245+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352149504 unmapped: 91406336 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:53.090384+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352149504 unmapped: 91406336 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:54.090543+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352149504 unmapped: 91406336 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:55.090693+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352149504 unmapped: 91406336 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:56.090867+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352149504 unmapped: 91406336 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:57.091014+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352149504 unmapped: 91406336 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:58.091141+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352149504 unmapped: 91406336 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:59.091291+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352149504 unmapped: 91406336 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:00.091453+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352149504 unmapped: 91406336 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:01.091627+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 91398144 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:02.091859+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 91398144 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:03.092001+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 91398144 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:04.092206+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 91398144 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:05.092376+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 91398144 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:06.092551+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 91398144 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:07.092745+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 91398144 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:08.092899+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 91398144 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:09.093197+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 91398144 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:10.093386+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 91398144 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:11.093557+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 91398144 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:12.093740+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 91398144 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:13.093889+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 91398144 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:14.094080+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 91398144 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:15.094260+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 91398144 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:16.094464+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352157696 unmapped: 91398144 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:17.094739+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 91381760 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:18.094968+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 91381760 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:19.095118+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 91381760 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:20.095269+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 91381760 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:21.095457+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 91381760 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:22.095690+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 91381760 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:23.095966+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 91381760 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:24.096150+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 91381760 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:25.096291+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 91381760 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:26.096510+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 91381760 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:27.096749+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 91381760 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:28.096885+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 91381760 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:29.097104+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 91381760 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:30.097241+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 91381760 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:31.097438+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 91381760 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:32.097600+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352174080 unmapped: 91381760 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:33.097763+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352182272 unmapped: 91373568 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:34.097946+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352182272 unmapped: 91373568 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:35.098139+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352182272 unmapped: 91373568 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:36.098306+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352182272 unmapped: 91373568 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:37.098525+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352182272 unmapped: 91373568 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:38.098700+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352182272 unmapped: 91373568 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:39.098898+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:40.099095+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352182272 unmapped: 91373568 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:41.099222+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352182272 unmapped: 91373568 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:42.099404+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352190464 unmapped: 91365376 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:43.099584+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352190464 unmapped: 91365376 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:44.099761+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352190464 unmapped: 91365376 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:45.099890+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352190464 unmapped: 91365376 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:46.100017+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352190464 unmapped: 91365376 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:47.100218+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352190464 unmapped: 91365376 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:48.100427+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352190464 unmapped: 91365376 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:49.100582+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352190464 unmapped: 91365376 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:50.100725+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352190464 unmapped: 91365376 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:51.100903+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352190464 unmapped: 91365376 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:52.101064+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352198656 unmapped: 91357184 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:53.101210+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352198656 unmapped: 91357184 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:54.101402+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352198656 unmapped: 91357184 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:55.101564+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 91348992 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:56.101770+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 91348992 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:57.101997+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 91348992 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:58.102224+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 91348992 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:59.102451+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 91348992 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:00.102726+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 91348992 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:01.102861+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 91348992 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:02.103044+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 91348992 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:03.103273+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 91348992 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:04.103474+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 91348992 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:05.103721+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 91348992 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:06.103906+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 91348992 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:07.104059+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 91348992 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:08.104252+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 91348992 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:09.104495+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 91348992 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:10.104714+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 91348992 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:11.104895+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 91348992 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:12.105025+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 91348992 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:13.105183+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352206848 unmapped: 91348992 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:14.105365+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352215040 unmapped: 91340800 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:15.105531+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352215040 unmapped: 91340800 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:16.105698+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 91332608 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:17.105997+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 91332608 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:18.106142+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 91332608 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:19.106263+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 91332608 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7801.2 total, 600.0 interval
                                           Cumulative writes: 47K writes, 186K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.02 MB/s
                                           Cumulative WAL: 47K writes, 17K syncs, 2.79 writes per sync, written: 0.18 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 220 writes, 330 keys, 220 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
                                           Interval WAL: 220 writes, 110 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:20.106413+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 91332608 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:21.106580+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 91332608 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:22.106761+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 91332608 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:23.106936+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 91332608 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:24.107160+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 91324416 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:25.107380+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 91324416 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:26.107535+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 91324416 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:27.107745+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 91324416 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:28.107907+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 91324416 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:29.108098+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 91324416 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:30.108246+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 91324416 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:31.108390+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 91324416 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:32.108573+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 91324416 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:33.108739+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 91324416 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:34.108919+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 91324416 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:35.109076+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 91324416 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:36.109279+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 91324416 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:37.109441+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 91324416 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:38.109594+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352231424 unmapped: 91324416 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:39.109799+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 91316224 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:40.109932+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 91316224 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:41.110074+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352239616 unmapped: 91316224 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:42.124992+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352256000 unmapped: 91299840 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:43.125175+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352264192 unmapped: 91291648 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:44.125364+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352264192 unmapped: 91291648 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:45.125564+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352264192 unmapped: 91291648 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:46.125753+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352264192 unmapped: 91291648 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:47.125942+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352264192 unmapped: 91291648 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:48.126081+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352264192 unmapped: 91291648 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:49.126208+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352264192 unmapped: 91291648 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:50.126368+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352264192 unmapped: 91291648 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:51.126491+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352264192 unmapped: 91291648 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:52.126665+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352264192 unmapped: 91291648 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:53.126795+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352264192 unmapped: 91291648 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:54.126945+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352264192 unmapped: 91291648 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:55.127086+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352264192 unmapped: 91291648 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:56.127258+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352272384 unmapped: 91283456 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:57.127444+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352272384 unmapped: 91283456 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:58.127609+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352272384 unmapped: 91283456 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:59.127824+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352272384 unmapped: 91283456 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:00.128067+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352272384 unmapped: 91283456 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:01.128255+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352272384 unmapped: 91283456 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:02.128387+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352272384 unmapped: 91283456 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:03.128507+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352272384 unmapped: 91283456 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:04.128689+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352272384 unmapped: 91283456 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:05.128862+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352272384 unmapped: 91283456 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Nov 25 18:14:10 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3619964825' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:06.129005+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352272384 unmapped: 91283456 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:07.129144+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352272384 unmapped: 91283456 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:08.129351+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352272384 unmapped: 91283456 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:09.129501+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352288768 unmapped: 91267072 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:10.129702+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352288768 unmapped: 91267072 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:11.129935+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352288768 unmapped: 91267072 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:12.130099+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352288768 unmapped: 91267072 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:13.130304+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352288768 unmapped: 91267072 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:14.130492+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352288768 unmapped: 91267072 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:15.130738+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352288768 unmapped: 91267072 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:16.130946+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352288768 unmapped: 91267072 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:17.131117+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352288768 unmapped: 91267072 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:18.131287+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352288768 unmapped: 91267072 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:19.131432+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352288768 unmapped: 91267072 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:20.131587+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352288768 unmapped: 91267072 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:21.131794+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352288768 unmapped: 91267072 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:22.132011+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352288768 unmapped: 91267072 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:23.132127+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352288768 unmapped: 91267072 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:24.132234+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352288768 unmapped: 91267072 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:25.132355+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352288768 unmapped: 91267072 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:26.132497+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352296960 unmapped: 91258880 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:27.132783+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352296960 unmapped: 91258880 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:28.132962+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352296960 unmapped: 91258880 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:29.133137+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352296960 unmapped: 91258880 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:30.133305+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352296960 unmapped: 91258880 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:31.133541+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352296960 unmapped: 91258880 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:32.133723+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352296960 unmapped: 91258880 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:33.133899+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352296960 unmapped: 91258880 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:34.134024+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352305152 unmapped: 91250688 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:35.134177+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352305152 unmapped: 91250688 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:36.134276+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352305152 unmapped: 91250688 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:37.134468+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352305152 unmapped: 91250688 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:38.134610+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352305152 unmapped: 91250688 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:39.134873+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352305152 unmapped: 91250688 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:40.135007+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352305152 unmapped: 91250688 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:41.135147+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352313344 unmapped: 91242496 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:42.135291+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352313344 unmapped: 91242496 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:43.135454+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352313344 unmapped: 91242496 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:44.135569+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352313344 unmapped: 91242496 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:45.135725+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352313344 unmapped: 91242496 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:46.135847+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352313344 unmapped: 91242496 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:47.136036+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352313344 unmapped: 91242496 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:48.136202+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352313344 unmapped: 91242496 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:49.136343+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352313344 unmapped: 91242496 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:50.136477+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352313344 unmapped: 91242496 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:51.136611+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352313344 unmapped: 91242496 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:52.136838+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352313344 unmapped: 91242496 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:53.137013+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352313344 unmapped: 91242496 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:54.137176+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352313344 unmapped: 91242496 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:55.137321+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352321536 unmapped: 91234304 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:56.137465+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352321536 unmapped: 91234304 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:57.137742+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352337920 unmapped: 91217920 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:58.137961+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352337920 unmapped: 91217920 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:59.138100+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352337920 unmapped: 91217920 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:00.138622+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352337920 unmapped: 91217920 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:01.138733+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352337920 unmapped: 91217920 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:02.138853+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352337920 unmapped: 91217920 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:03.139106+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352337920 unmapped: 91217920 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:04.139458+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352337920 unmapped: 91217920 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:05.139595+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352337920 unmapped: 91217920 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:06.139939+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352337920 unmapped: 91217920 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:07.140211+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352337920 unmapped: 91217920 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:08.140416+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352337920 unmapped: 91217920 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:09.140881+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352337920 unmapped: 91217920 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:10.141209+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352337920 unmapped: 91217920 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:11.141493+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352346112 unmapped: 91209728 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:12.142058+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352346112 unmapped: 91209728 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:13.142361+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352346112 unmapped: 91209728 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:14.142771+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352346112 unmapped: 91209728 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:15.143020+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352346112 unmapped: 91209728 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:16.143295+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352346112 unmapped: 91209728 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:17.143577+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352346112 unmapped: 91209728 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:18.143844+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352346112 unmapped: 91209728 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:19.144188+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352346112 unmapped: 91209728 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:20.145389+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352346112 unmapped: 91209728 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:21.145570+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352362496 unmapped: 91193344 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:22.145845+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352362496 unmapped: 91193344 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:23.146093+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352362496 unmapped: 91193344 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:24.146265+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352362496 unmapped: 91193344 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:25.146530+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352362496 unmapped: 91193344 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:26.146676+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352362496 unmapped: 91193344 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:27.146838+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352362496 unmapped: 91193344 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:28.147031+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352362496 unmapped: 91193344 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:29.147357+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352362496 unmapped: 91193344 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:30.147576+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352362496 unmapped: 91193344 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:31.147771+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352362496 unmapped: 91193344 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:32.148000+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352362496 unmapped: 91193344 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:33.148199+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 603.016723633s of 603.354736328s, submitted: 110
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352362496 unmapped: 91193344 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:34.148388+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352370688 unmapped: 91185152 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:35.148624+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352370688 unmapped: 91185152 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:36.148886+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 91160576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:37.149086+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 91160576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:38.149263+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 91160576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:39.149413+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 91160576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:40.149598+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 91160576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:41.149750+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 91160576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:42.149925+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 91160576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:43.150095+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 91160576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:44.150286+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 91160576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:45.150434+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 91160576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:46.150589+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 91160576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:47.150785+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 91160576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:48.151021+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 91160576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:49.151212+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 91160576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:50.157451+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 91160576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:51.157745+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 91160576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:52.157900+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 91160576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:53.158136+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 91160576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:54.158452+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 91160576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:55.158901+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 91160576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:56.159142+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 91160576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:57.159374+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 91160576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:58.159549+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 91160576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:59.159739+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 91160576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:00.159913+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 91160576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:01.160224+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 91160576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:02.160423+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 91160576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:03.160604+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 91160576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:04.160733+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 91160576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:05.160900+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 91160576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:06.161036+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 91160576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:07.161249+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 91160576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:08.161535+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 91160576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:09.161713+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 91160576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:10.161847+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 91160576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:11.161974+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 91160576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:12.162126+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 91160576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:13.162342+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 91160576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:14.162536+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 91160576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:15.162691+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352395264 unmapped: 91160576 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:16.162820+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352403456 unmapped: 91152384 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:17.162986+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352403456 unmapped: 91152384 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:18.163153+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352403456 unmapped: 91152384 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:19.163316+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352403456 unmapped: 91152384 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:20.163489+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352403456 unmapped: 91152384 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:21.164018+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352403456 unmapped: 91152384 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:22.164199+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352403456 unmapped: 91152384 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:23.164322+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:24.164545+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352403456 unmapped: 91152384 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:25.164743+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352403456 unmapped: 91152384 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:26.164926+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352403456 unmapped: 91152384 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:27.165235+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352403456 unmapped: 91152384 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:28.165461+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352403456 unmapped: 91152384 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:29.165612+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352403456 unmapped: 91152384 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:30.165733+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352403456 unmapped: 91152384 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:31.165979+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352403456 unmapped: 91152384 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:32.166121+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352411648 unmapped: 91144192 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:33.166316+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352411648 unmapped: 91144192 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:34.166818+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352411648 unmapped: 91144192 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:35.166985+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352411648 unmapped: 91144192 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:36.167183+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352411648 unmapped: 91144192 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:37.167425+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352411648 unmapped: 91144192 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:38.167588+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352411648 unmapped: 91144192 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:39.167765+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352411648 unmapped: 91144192 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:40.167896+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352411648 unmapped: 91144192 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:41.168156+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352411648 unmapped: 91144192 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:42.168422+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352411648 unmapped: 91144192 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:43.168733+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352411648 unmapped: 91144192 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:44.168921+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352411648 unmapped: 91144192 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:45.169163+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352411648 unmapped: 91144192 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:46.169316+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352411648 unmapped: 91144192 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:47.169508+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352419840 unmapped: 91136000 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:48.169724+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352419840 unmapped: 91136000 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:49.169946+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352419840 unmapped: 91136000 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:50.170138+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352419840 unmapped: 91136000 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:51.170298+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352419840 unmapped: 91136000 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:52.170437+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352428032 unmapped: 91127808 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:53.170606+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352428032 unmapped: 91127808 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:54.170828+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352428032 unmapped: 91127808 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:55.171074+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352428032 unmapped: 91127808 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:56.171278+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352428032 unmapped: 91127808 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:57.171587+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352428032 unmapped: 91127808 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:58.171704+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352428032 unmapped: 91127808 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:59.171918+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352428032 unmapped: 91127808 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:00.172108+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 91119616 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:01.172419+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 91119616 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:02.172578+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 91119616 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:03.172734+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 91119616 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:04.172894+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 91119616 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:05.173062+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 91119616 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:06.178053+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 91119616 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:07.178248+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 91119616 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:08.178473+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 91119616 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:09.178677+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 91119616 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:10.178828+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352436224 unmapped: 91119616 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:11.178996+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352444416 unmapped: 91111424 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:12.179161+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352444416 unmapped: 91111424 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:13.179325+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352452608 unmapped: 91103232 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:14.179457+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352452608 unmapped: 91103232 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:15.179609+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352452608 unmapped: 91103232 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:16.179785+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352452608 unmapped: 91103232 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:17.179968+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352452608 unmapped: 91103232 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:18.180134+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352452608 unmapped: 91103232 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:19.180271+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352452608 unmapped: 91103232 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:20.180435+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352452608 unmapped: 91103232 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:21.180589+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352452608 unmapped: 91103232 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:22.180706+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352452608 unmapped: 91103232 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:23.180871+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352452608 unmapped: 91103232 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:24.181025+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352452608 unmapped: 91103232 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:25.181348+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352452608 unmapped: 91103232 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:26.181491+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352452608 unmapped: 91103232 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:27.181682+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352452608 unmapped: 91103232 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:28.181823+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352452608 unmapped: 91103232 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:29.182078+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352452608 unmapped: 91103232 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:30.182540+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352452608 unmapped: 91103232 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:31.182730+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352452608 unmapped: 91103232 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:32.182929+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352468992 unmapped: 91086848 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:33.183130+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352468992 unmapped: 91086848 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:34.183293+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352468992 unmapped: 91086848 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:35.183505+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352468992 unmapped: 91086848 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:36.183675+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352468992 unmapped: 91086848 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:37.183968+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352468992 unmapped: 91086848 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:38.184146+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352468992 unmapped: 91086848 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:39.184289+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352468992 unmapped: 91086848 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:40.184432+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352468992 unmapped: 91086848 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:41.184584+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352468992 unmapped: 91086848 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:42.184831+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352468992 unmapped: 91086848 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:43.184968+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352468992 unmapped: 91086848 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:44.185104+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352468992 unmapped: 91086848 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:45.185303+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352468992 unmapped: 91086848 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:46.185496+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352468992 unmapped: 91086848 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:47.185786+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352468992 unmapped: 91086848 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:48.185990+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352468992 unmapped: 91086848 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:49.186148+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352468992 unmapped: 91086848 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:50.186356+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352468992 unmapped: 91086848 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:51.186626+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352468992 unmapped: 91086848 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:52.186890+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352468992 unmapped: 91086848 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:53.187117+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352485376 unmapped: 91070464 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:54.187315+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352485376 unmapped: 91070464 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:55.187561+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352485376 unmapped: 91070464 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:56.187724+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352493568 unmapped: 91062272 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:57.187926+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352493568 unmapped: 91062272 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:58.188081+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352493568 unmapped: 91062272 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:59.188387+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352493568 unmapped: 91062272 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:00.188686+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352493568 unmapped: 91062272 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:01.189020+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352493568 unmapped: 91062272 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:02.189253+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352493568 unmapped: 91062272 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:03.189484+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352493568 unmapped: 91062272 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:04.189728+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352493568 unmapped: 91062272 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:05.189904+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352493568 unmapped: 91062272 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:06.190066+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352493568 unmapped: 91062272 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:07.190343+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352493568 unmapped: 91062272 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:08.190552+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352493568 unmapped: 91062272 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:09.190796+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352493568 unmapped: 91062272 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:10.191090+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352493568 unmapped: 91062272 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:11.191361+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352493568 unmapped: 91062272 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:12.191545+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352493568 unmapped: 91062272 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:13.191852+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352493568 unmapped: 91062272 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:14.192063+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352493568 unmapped: 91062272 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:15.192258+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352493568 unmapped: 91062272 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:16.192428+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352493568 unmapped: 91062272 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:17.192746+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352501760 unmapped: 91054080 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:18.192937+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352501760 unmapped: 91054080 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:19.193090+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352501760 unmapped: 91054080 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:20.193229+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352501760 unmapped: 91054080 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:21.193439+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352501760 unmapped: 91054080 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets getting new tickets!
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:22.193739+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _finish_auth 0
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:22.194838+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352501760 unmapped: 91054080 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:23.193915+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352501760 unmapped: 91054080 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:24.194091+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352501760 unmapped: 91054080 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:25.194292+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352501760 unmapped: 91054080 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:26.194465+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352501760 unmapped: 91054080 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:27.194711+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352501760 unmapped: 91054080 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:28.194846+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352501760 unmapped: 91054080 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:29.195056+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352501760 unmapped: 91054080 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:30.195222+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352501760 unmapped: 91054080 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:31.195521+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352501760 unmapped: 91054080 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:32.195684+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352501760 unmapped: 91054080 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:33.195857+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352509952 unmapped: 91045888 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:34.195980+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352509952 unmapped: 91045888 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:35.196133+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352526336 unmapped: 91029504 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:36.196299+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352526336 unmapped: 91029504 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:37.196494+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352526336 unmapped: 91029504 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:38.196625+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352526336 unmapped: 91029504 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:39.196842+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352526336 unmapped: 91029504 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:40.196988+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352526336 unmapped: 91029504 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:41.197149+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352526336 unmapped: 91029504 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:42.197351+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352526336 unmapped: 91029504 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:43.197513+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352526336 unmapped: 91029504 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:44.197768+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352526336 unmapped: 91029504 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:45.197975+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352526336 unmapped: 91029504 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:46.198240+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352526336 unmapped: 91029504 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:47.198507+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352526336 unmapped: 91029504 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:48.198710+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352526336 unmapped: 91029504 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:49.198910+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352526336 unmapped: 91029504 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:50.199072+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352526336 unmapped: 91029504 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:51.199289+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352526336 unmapped: 91029504 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:52.199490+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352526336 unmapped: 91029504 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:53.199701+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352526336 unmapped: 91029504 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 ms_handle_reset con 0x5618ddfa4c00 session 0x5618dec44000
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: handle_auth_request added challenge on 0x5618ded41c00
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:54.199921+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352526336 unmapped: 91029504 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 ms_handle_reset con 0x5618eaf4fc00 session 0x5618ddf612c0
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: handle_auth_request added challenge on 0x5618decdac00
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 ms_handle_reset con 0x5618dc2a3800 session 0x5618decc1680
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: handle_auth_request added challenge on 0x5618dcf20000
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:55.200139+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352526336 unmapped: 91029504 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:56.200328+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352526336 unmapped: 91029504 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:57.200713+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352526336 unmapped: 91029504 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:58.200896+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352526336 unmapped: 91029504 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:59.201030+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352534528 unmapped: 91021312 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:00.201211+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352534528 unmapped: 91021312 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:01.201428+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352534528 unmapped: 91021312 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:02.201617+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352534528 unmapped: 91021312 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:03.201913+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352534528 unmapped: 91021312 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:04.202116+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352534528 unmapped: 91021312 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:05.202320+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352542720 unmapped: 91013120 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:06.202481+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352542720 unmapped: 91013120 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:07.203128+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352542720 unmapped: 91013120 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:08.203344+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352542720 unmapped: 91013120 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:09.203789+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352542720 unmapped: 91013120 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:10.203970+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352542720 unmapped: 91013120 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:11.204414+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352542720 unmapped: 91013120 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:12.204801+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352542720 unmapped: 91013120 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:13.205034+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352542720 unmapped: 91013120 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:14.205242+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352542720 unmapped: 91013120 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:15.205487+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352542720 unmapped: 91013120 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:16.205718+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352542720 unmapped: 91013120 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:17.206172+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352542720 unmapped: 91013120 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:18.206451+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352542720 unmapped: 91013120 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:19.206738+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352542720 unmapped: 91013120 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:20.206953+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352550912 unmapped: 91004928 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:21.207228+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352550912 unmapped: 91004928 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:22.207391+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352550912 unmapped: 91004928 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:23.207726+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352550912 unmapped: 91004928 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:24.207918+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352550912 unmapped: 91004928 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:25.208123+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352550912 unmapped: 91004928 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:26.208336+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352550912 unmapped: 91004928 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:27.208519+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352550912 unmapped: 91004928 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:28.208709+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352550912 unmapped: 91004928 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:29.208868+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352550912 unmapped: 91004928 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:30.209066+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352550912 unmapped: 91004928 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:31.209290+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352550912 unmapped: 91004928 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:32.209499+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352550912 unmapped: 91004928 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:33.209707+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352550912 unmapped: 91004928 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:34.209894+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352559104 unmapped: 90996736 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:35.210056+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352559104 unmapped: 90996736 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:36.210313+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352559104 unmapped: 90996736 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:37.210702+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352559104 unmapped: 90996736 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:38.210885+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352567296 unmapped: 90988544 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:39.211042+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352567296 unmapped: 90988544 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:40.211173+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352567296 unmapped: 90988544 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:41.211302+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352567296 unmapped: 90988544 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:42.211437+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352567296 unmapped: 90988544 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:43.211706+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352567296 unmapped: 90988544 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:44.211901+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352567296 unmapped: 90988544 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:45.212172+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352567296 unmapped: 90988544 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:46.212310+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352567296 unmapped: 90988544 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:47.212525+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352575488 unmapped: 90980352 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:48.212857+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352575488 unmapped: 90980352 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:49.213045+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352575488 unmapped: 90980352 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:50.213186+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352575488 unmapped: 90980352 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:51.213344+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352575488 unmapped: 90980352 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:52.213516+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352583680 unmapped: 90972160 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:53.213699+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352583680 unmapped: 90972160 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:54.213847+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352583680 unmapped: 90972160 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:55.214036+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352583680 unmapped: 90972160 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:56.214238+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352583680 unmapped: 90972160 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:57.214428+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352583680 unmapped: 90972160 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:58.214633+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352583680 unmapped: 90972160 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:59.214910+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352583680 unmapped: 90972160 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:00.215139+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352583680 unmapped: 90972160 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:01.215390+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352583680 unmapped: 90972160 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:02.215822+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352583680 unmapped: 90972160 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:03.215998+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352591872 unmapped: 90963968 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:04.216150+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352591872 unmapped: 90963968 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:05.216294+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352591872 unmapped: 90963968 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:06.216426+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352591872 unmapped: 90963968 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:07.216618+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352591872 unmapped: 90963968 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:08.216907+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352591872 unmapped: 90963968 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:09.217149+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352591872 unmapped: 90963968 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:10.217294+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352591872 unmapped: 90963968 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:11.217441+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352591872 unmapped: 90963968 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:12.217578+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352591872 unmapped: 90963968 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:13.217742+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352591872 unmapped: 90963968 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:14.217920+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352591872 unmapped: 90963968 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:15.218047+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352591872 unmapped: 90963968 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:16.218202+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352591872 unmapped: 90963968 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:17.218373+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352600064 unmapped: 90955776 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:18.218586+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352600064 unmapped: 90955776 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:19.218739+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352600064 unmapped: 90955776 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:20.218853+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352600064 unmapped: 90955776 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:21.218990+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352600064 unmapped: 90955776 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:22.219153+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352600064 unmapped: 90955776 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:23.219293+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352608256 unmapped: 90947584 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:24.219440+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 ms_handle_reset con 0x5618e77bc400 session 0x5618dec7d4a0
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: handle_auth_request added challenge on 0x5618ebb3d000
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352608256 unmapped: 90947584 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:25.219569+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352616448 unmapped: 90939392 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:26.219725+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352616448 unmapped: 90939392 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:27.219896+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352616448 unmapped: 90939392 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:28.220082+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352616448 unmapped: 90939392 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:29.220290+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352616448 unmapped: 90939392 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:30.220527+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352616448 unmapped: 90939392 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:31.220733+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352616448 unmapped: 90939392 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:32.220876+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352616448 unmapped: 90939392 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:33.221029+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:34.221180+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352616448 unmapped: 90939392 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:35.221324+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352616448 unmapped: 90939392 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:36.221459+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352616448 unmapped: 90939392 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:37.221625+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352616448 unmapped: 90939392 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:38.221781+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352616448 unmapped: 90939392 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:39.221905+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352616448 unmapped: 90939392 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:40.222062+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352616448 unmapped: 90939392 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:41.222216+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352616448 unmapped: 90939392 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:42.222346+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352616448 unmapped: 90939392 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:43.222479+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352616448 unmapped: 90939392 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:44.222686+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352616448 unmapped: 90939392 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:45.222796+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352616448 unmapped: 90939392 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:46.222988+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352616448 unmapped: 90939392 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:47.223171+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352616448 unmapped: 90939392 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:48.223313+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352624640 unmapped: 90931200 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:49.223434+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352624640 unmapped: 90931200 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:50.223589+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352624640 unmapped: 90931200 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:51.223715+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352624640 unmapped: 90931200 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:52.223843+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352624640 unmapped: 90931200 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:53.223991+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352632832 unmapped: 90923008 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:54.224120+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352632832 unmapped: 90923008 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:55.224296+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352632832 unmapped: 90923008 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:56.224492+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352632832 unmapped: 90923008 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:57.231222+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352632832 unmapped: 90923008 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:58.231348+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352632832 unmapped: 90923008 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:59.231518+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352632832 unmapped: 90923008 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:00.231748+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352632832 unmapped: 90923008 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:01.231949+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352632832 unmapped: 90923008 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:02.232103+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352632832 unmapped: 90923008 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:03.232277+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352632832 unmapped: 90923008 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:04.232409+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352632832 unmapped: 90923008 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:05.232530+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352632832 unmapped: 90923008 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:06.232745+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352632832 unmapped: 90923008 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:07.232930+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352641024 unmapped: 90914816 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:08.233053+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352641024 unmapped: 90914816 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:09.233195+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352641024 unmapped: 90914816 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:10.233313+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352641024 unmapped: 90914816 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:11.233447+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 90906624 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:12.233570+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 90906624 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:13.233691+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 90906624 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:14.233884+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 90906624 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:15.234130+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 90906624 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:16.234364+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 90906624 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:17.234601+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 90906624 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:18.234766+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 90906624 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:19.258537+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 90906624 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:20.258677+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 90906624 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:21.258825+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 90906624 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:22.258956+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 90906624 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:23.259127+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352649216 unmapped: 90906624 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:24.259254+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 90898432 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:25.259379+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 90898432 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:26.259548+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 90898432 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:27.259753+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 90898432 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:28.259890+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 90898432 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:29.260509+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 90898432 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:30.260657+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 90898432 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:31.260829+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 90898432 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:32.261768+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352657408 unmapped: 90898432 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:33.261916+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352665600 unmapped: 90890240 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:34.262058+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352665600 unmapped: 90890240 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:35.262175+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352665600 unmapped: 90890240 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:36.262332+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352665600 unmapped: 90890240 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:37.262537+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352673792 unmapped: 90882048 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:38.262716+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352673792 unmapped: 90882048 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:39.262866+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352673792 unmapped: 90882048 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:40.263020+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352673792 unmapped: 90882048 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:41.263181+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352673792 unmapped: 90882048 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:42.263342+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352673792 unmapped: 90882048 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:43.263487+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352673792 unmapped: 90882048 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:44.263630+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352673792 unmapped: 90882048 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:45.263858+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352673792 unmapped: 90882048 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:46.264063+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352673792 unmapped: 90882048 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:47.264255+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352673792 unmapped: 90882048 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:48.264392+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 90873856 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:49.264525+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 90873856 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:50.264692+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 90873856 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:51.264873+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 90873856 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:52.265011+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 90873856 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:53.265211+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 90873856 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:54.265392+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 90873856 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:55.265537+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 90873856 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:56.265691+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 90873856 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:57.265855+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 90873856 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:58.265985+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 90873856 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:59.266108+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 90873856 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:00.266307+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 90873856 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:01.266459+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352681984 unmapped: 90873856 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:02.266721+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352690176 unmapped: 90865664 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:03.266844+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352690176 unmapped: 90865664 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:04.266965+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352690176 unmapped: 90865664 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:05.267107+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352690176 unmapped: 90865664 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:06.267288+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352698368 unmapped: 90857472 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:07.267499+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352698368 unmapped: 90857472 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:08.267689+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352698368 unmapped: 90857472 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:09.267830+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352698368 unmapped: 90857472 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:10.267956+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352698368 unmapped: 90857472 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:11.268133+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352698368 unmapped: 90857472 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:12.268267+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352698368 unmapped: 90857472 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:13.268433+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352698368 unmapped: 90857472 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:14.268566+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352698368 unmapped: 90857472 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:15.268715+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352698368 unmapped: 90857472 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:16.268938+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 90849280 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:17.269197+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 90849280 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:18.269410+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 90849280 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:19.269532+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352706560 unmapped: 90849280 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:20.269748+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352714752 unmapped: 90841088 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:21.269963+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352714752 unmapped: 90841088 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:22.270170+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352714752 unmapped: 90841088 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:23.270413+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352714752 unmapped: 90841088 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:24.270558+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352714752 unmapped: 90841088 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:25.270718+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352714752 unmapped: 90841088 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:26.270890+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352714752 unmapped: 90841088 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:27.271107+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352714752 unmapped: 90841088 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:28.271331+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352722944 unmapped: 90832896 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:29.271487+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352722944 unmapped: 90832896 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:30.271682+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352722944 unmapped: 90832896 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:31.271834+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352722944 unmapped: 90832896 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:32.271965+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352722944 unmapped: 90832896 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:33.272133+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352722944 unmapped: 90832896 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:34.272305+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352722944 unmapped: 90832896 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:35.272720+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 90824704 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:36.272905+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 90824704 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:37.273138+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 90824704 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:38.273343+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 90824704 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:39.273590+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 90824704 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:40.273756+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 90824704 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:41.273930+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 90824704 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:42.274131+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 90824704 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:43.274276+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 90824704 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:44.274528+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 90824704 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:45.274743+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 90824704 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:46.274948+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 90824704 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:47.275252+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 90824704 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:48.275425+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352731136 unmapped: 90824704 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:49.275608+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352739328 unmapped: 90816512 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:50.275805+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352739328 unmapped: 90816512 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:51.275993+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352739328 unmapped: 90816512 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:52.276274+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352739328 unmapped: 90816512 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:53.276516+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352739328 unmapped: 90816512 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:54.276811+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352739328 unmapped: 90816512 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:55.277018+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352739328 unmapped: 90816512 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:56.277181+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352739328 unmapped: 90816512 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:57.277413+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352739328 unmapped: 90816512 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:58.277565+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352739328 unmapped: 90816512 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:59.277815+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 90808320 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:00.278022+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 90808320 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:01.278181+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 90808320 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:02.278303+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 90808320 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:03.278459+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352747520 unmapped: 90808320 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:04.278670+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 90800128 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:05.278865+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 90800128 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:06.278996+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 90800128 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:07.279134+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 90800128 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:08.279320+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 90800128 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:09.279483+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 90800128 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:10.279624+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 90800128 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:11.279748+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 90800128 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:12.279919+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 90800128 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:13.280100+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 90800128 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:14.280287+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 90800128 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:15.280439+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 90800128 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:16.280684+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 90800128 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:17.280988+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 90800128 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:18.281171+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352755712 unmapped: 90800128 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:19.281331+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352772096 unmapped: 90783744 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 8401.3 total, 600.0 interval
                                           Cumulative writes: 47K writes, 186K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.02 MB/s
                                           Cumulative WAL: 47K writes, 17K syncs, 2.78 writes per sync, written: 0.18 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:20.281496+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352772096 unmapped: 90783744 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:21.281675+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352772096 unmapped: 90783744 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:22.281802+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352772096 unmapped: 90783744 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:23.281968+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352772096 unmapped: 90783744 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:24.282157+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352772096 unmapped: 90783744 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:25.282318+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352772096 unmapped: 90783744 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:26.282486+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352772096 unmapped: 90783744 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:27.282719+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352772096 unmapped: 90783744 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:28.282924+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352772096 unmapped: 90783744 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:29.283219+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352772096 unmapped: 90783744 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:30.283358+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352772096 unmapped: 90783744 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:31.283489+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352772096 unmapped: 90783744 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:32.283603+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352772096 unmapped: 90783744 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:33.283689+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352772096 unmapped: 90783744 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:34.283828+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352772096 unmapped: 90783744 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:35.283957+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: osd.1 307 heartbeat osd_stat(store_statfs(0x4e8c53000/0x0/0x4ffc00000, data 0x125ea26/0x140b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352780288 unmapped: 90775552 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:36.284070+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352780288 unmapped: 90775552 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:37.284238+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352788480 unmapped: 90767360 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: do_command 'config diff' '{prefix=config diff}'
Nov 25 18:14:10 compute-0 ceph-osd[89991]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 25 18:14:10 compute-0 ceph-osd[89991]: do_command 'config show' '{prefix=config show}'
Nov 25 18:14:10 compute-0 ceph-osd[89991]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:38.284352+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: do_command 'counter dump' '{prefix=counter dump}'
Nov 25 18:14:10 compute-0 ceph-osd[89991]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 25 18:14:10 compute-0 ceph-osd[89991]: do_command 'counter schema' '{prefix=counter schema}'
Nov 25 18:14:10 compute-0 ceph-osd[89991]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352485376 unmapped: 91070464 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:39.284513+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:10 compute-0 ceph-osd[89991]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:10 compute-0 ceph-osd[89991]: prioritycache tune_memory target: 4294967296 mapped: 352067584 unmapped: 91488256 heap: 443555840 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:10 compute-0 ceph-osd[89991]: bluestore.MempoolThread(0x5618da82fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3444478 data_alloc: 218103808 data_used: 1216512
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: tick
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_tickets
Nov 25 18:14:10 compute-0 ceph-osd[89991]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:40.284656+0000)
Nov 25 18:14:10 compute-0 ceph-osd[89991]: do_command 'log dump' '{prefix=log dump}'
Nov 25 18:14:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Nov 25 18:14:11 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3450882529' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 25 18:14:11 compute-0 rsyslogd[1006]: imjournal from <np0005535469:ceph-osd>: begin to drop messages due to rate-limiting
Nov 25 18:14:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Nov 25 18:14:11 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1062558830' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 25 18:14:11 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 18:14:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:14:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Nov 25 18:14:11 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2768225190' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 25 18:14:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Nov 25 18:14:11 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3432358946' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 25 18:14:11 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3825372936' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 25 18:14:11 compute-0 ceph-mon[74985]: pgmap v4541: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:14:11 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3619964825' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 25 18:14:11 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3450882529' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 25 18:14:11 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1062558830' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 25 18:14:11 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2768225190' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 25 18:14:11 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3432358946' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 25 18:14:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Nov 25 18:14:11 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2498615759' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 25 18:14:11 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 25 18:14:11 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3265128924' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 18:14:12 compute-0 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23429 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 18:14:12 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Nov 25 18:14:12 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1096027067' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 25 18:14:12 compute-0 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23433 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 18:14:12 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4542: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:14:12 compute-0 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23435 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:14:12 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2498615759' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 25 18:14:12 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3265128924' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 18:14:12 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1096027067' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 25 18:14:12 compute-0 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23437 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 18:14:13 compute-0 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23439 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:14:13 compute-0 nova_compute[254092]: 2025-11-25 18:14:13.195 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:14:13 compute-0 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23441 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 18:14:13 compute-0 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23445 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 18:14:13 compute-0 ceph-mon[74985]: from='client.23429 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 18:14:13 compute-0 ceph-mon[74985]: from='client.23433 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 18:14:13 compute-0 ceph-mon[74985]: pgmap v4542: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:14:13 compute-0 ceph-mon[74985]: from='client.23435 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:14:13 compute-0 ceph-mon[74985]: from='client.23437 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 18:14:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 18:14:13.722 163338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 18:14:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 18:14:13.723 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 18:14:13 compute-0 ovn_metadata_agent[163333]: 2025-11-25 18:14:13.723 163338 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 18:14:13 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Nov 25 18:14:13 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/217814488' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 25 18:14:13 compute-0 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23449 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 18:14:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0) v1
Nov 25 18:14:14 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3664288614' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 25 18:14:14 compute-0 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23453 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 18:14:14 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4543: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:14:14 compute-0 ceph-mon[74985]: from='client.23439 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:14:14 compute-0 ceph-mon[74985]: from='client.23441 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 18:14:14 compute-0 ceph-mon[74985]: from='client.23445 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 18:14:14 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/217814488' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 25 18:14:14 compute-0 ceph-mon[74985]: from='client.23449 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 18:14:14 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3664288614' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 25 18:14:14 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 25 18:14:14 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3328389572' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336084992 unmapped: 75505664 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:39:55.500467+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336084992 unmapped: 75505664 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:39:56.500718+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336084992 unmapped: 75505664 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:39:57.500850+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336084992 unmapped: 75505664 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:39:58.501015+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336084992 unmapped: 75505664 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:39:59.501150+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336101376 unmapped: 75489280 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:00.501358+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336101376 unmapped: 75489280 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:01.501512+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336101376 unmapped: 75489280 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:02.501758+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336101376 unmapped: 75489280 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:03.502066+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336101376 unmapped: 75489280 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:04.502730+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336101376 unmapped: 75489280 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:05.503121+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336101376 unmapped: 75489280 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:06.503323+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336101376 unmapped: 75489280 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:07.503493+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336109568 unmapped: 75481088 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:08.503731+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336109568 unmapped: 75481088 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:09.503893+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336109568 unmapped: 75481088 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:10.504139+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336109568 unmapped: 75481088 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:11.504570+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336109568 unmapped: 75481088 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:12.504806+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336109568 unmapped: 75481088 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:13.505014+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336109568 unmapped: 75481088 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:14.505215+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336117760 unmapped: 75472896 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:15.505382+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336125952 unmapped: 75464704 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:16.505606+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336125952 unmapped: 75464704 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:17.505823+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336125952 unmapped: 75464704 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:18.505962+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336125952 unmapped: 75464704 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:19.506113+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336125952 unmapped: 75464704 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:20.506248+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336125952 unmapped: 75464704 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:21.506376+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336125952 unmapped: 75464704 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:22.506587+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336125952 unmapped: 75464704 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:23.506727+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336134144 unmapped: 75456512 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 ms_handle_reset con 0x563f6dcc6000 session 0x563f6620e5a0
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: handle_auth_request added challenge on 0x563f67e98400
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:24.506965+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336142336 unmapped: 75448320 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:25.507095+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336142336 unmapped: 75448320 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:26.507238+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336142336 unmapped: 75448320 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:27.507511+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336142336 unmapped: 75448320 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:28.507734+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336142336 unmapped: 75448320 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:29.507922+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336142336 unmapped: 75448320 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:30.508109+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336150528 unmapped: 75440128 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:31.508264+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336158720 unmapped: 75431936 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:32.508437+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336158720 unmapped: 75431936 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:33.508565+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336166912 unmapped: 75423744 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:34.508694+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336166912 unmapped: 75423744 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:35.508921+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336166912 unmapped: 75423744 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:36.509189+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336166912 unmapped: 75423744 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:37.509391+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336166912 unmapped: 75423744 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:38.509579+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336166912 unmapped: 75423744 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:39.509738+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336166912 unmapped: 75423744 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:40.509919+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336166912 unmapped: 75423744 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:41.510068+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336183296 unmapped: 75407360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:42.510235+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336183296 unmapped: 75407360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:43.510383+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336183296 unmapped: 75407360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:44.510714+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336183296 unmapped: 75407360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:45.510860+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336183296 unmapped: 75407360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:46.511029+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336183296 unmapped: 75407360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:47.511188+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336191488 unmapped: 75399168 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:48.511387+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336199680 unmapped: 75390976 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:49.511591+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336199680 unmapped: 75390976 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:50.512018+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336199680 unmapped: 75390976 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:51.512172+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336199680 unmapped: 75390976 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:52.512383+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336199680 unmapped: 75390976 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:53.512585+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336199680 unmapped: 75390976 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:54.512761+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336199680 unmapped: 75390976 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:55.512938+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336216064 unmapped: 75374592 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:56.513257+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336216064 unmapped: 75374592 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:57.513401+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336216064 unmapped: 75374592 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:58.513614+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336216064 unmapped: 75374592 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:40:59.513886+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336216064 unmapped: 75374592 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:00.514061+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336216064 unmapped: 75374592 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:01.514221+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336216064 unmapped: 75374592 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:02.514393+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336216064 unmapped: 75374592 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:03.514590+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336224256 unmapped: 75366400 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:04.514814+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336224256 unmapped: 75366400 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:05.515018+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336232448 unmapped: 75358208 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:06.515308+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336232448 unmapped: 75358208 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:07.515496+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336232448 unmapped: 75358208 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:08.515692+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336232448 unmapped: 75358208 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:09.515901+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336232448 unmapped: 75358208 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:10.516117+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336232448 unmapped: 75358208 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:11.516253+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336240640 unmapped: 75350016 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:12.516437+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336240640 unmapped: 75350016 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:13.516727+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336248832 unmapped: 75341824 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:14.516932+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336248832 unmapped: 75341824 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:15.517100+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336248832 unmapped: 75341824 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:16.517329+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336248832 unmapped: 75341824 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:17.517528+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336248832 unmapped: 75341824 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:18.517740+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336248832 unmapped: 75341824 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:19.517899+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336265216 unmapped: 75325440 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:20.518066+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336265216 unmapped: 75325440 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:21.518227+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336265216 unmapped: 75325440 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:22.518543+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336265216 unmapped: 75325440 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:23.518827+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336265216 unmapped: 75325440 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:24.519107+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336265216 unmapped: 75325440 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:25.519479+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336265216 unmapped: 75325440 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:26.519763+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336265216 unmapped: 75325440 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:27.520108+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336281600 unmapped: 75309056 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:28.520289+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336281600 unmapped: 75309056 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:29.520475+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336289792 unmapped: 75300864 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:30.520606+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336289792 unmapped: 75300864 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:31.520766+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336289792 unmapped: 75300864 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:32.520924+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336289792 unmapped: 75300864 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:33.521135+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336289792 unmapped: 75300864 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:34.521284+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336289792 unmapped: 75300864 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:35.521499+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336289792 unmapped: 75300864 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:36.521723+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336289792 unmapped: 75300864 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:37.521871+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336297984 unmapped: 75292672 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:38.521995+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336297984 unmapped: 75292672 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:39.522225+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336297984 unmapped: 75292672 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:40.522397+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336297984 unmapped: 75292672 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:41.522607+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336306176 unmapped: 75284480 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:42.522789+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336306176 unmapped: 75284480 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:43.522962+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336314368 unmapped: 75276288 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:44.523129+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336322560 unmapped: 75268096 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:45.523336+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336322560 unmapped: 75268096 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:46.523563+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336322560 unmapped: 75268096 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:47.523727+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336322560 unmapped: 75268096 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:48.524016+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336322560 unmapped: 75268096 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:49.524343+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336322560 unmapped: 75268096 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:50.524556+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336330752 unmapped: 75259904 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:51.524837+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336330752 unmapped: 75259904 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:52.525080+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336330752 unmapped: 75259904 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:53.525282+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336330752 unmapped: 75259904 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:54.525429+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336347136 unmapped: 75243520 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:55.525574+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336347136 unmapped: 75243520 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:56.525805+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336347136 unmapped: 75243520 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:57.525983+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336347136 unmapped: 75243520 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:58.526111+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336347136 unmapped: 75243520 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:41:59.526305+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336347136 unmapped: 75243520 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:00.526452+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336347136 unmapped: 75243520 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:01.526739+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336347136 unmapped: 75243520 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:02.526934+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336355328 unmapped: 75235328 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:03.527105+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336355328 unmapped: 75235328 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:04.527252+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336355328 unmapped: 75235328 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:05.527391+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336355328 unmapped: 75235328 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:06.527570+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336355328 unmapped: 75235328 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:07.527743+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336355328 unmapped: 75235328 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:08.527897+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336371712 unmapped: 75218944 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:09.528056+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336379904 unmapped: 75210752 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:10.528186+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336379904 unmapped: 75210752 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:11.528310+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336379904 unmapped: 75210752 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:12.528472+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336379904 unmapped: 75210752 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:13.528770+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336379904 unmapped: 75210752 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:14.528963+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336379904 unmapped: 75210752 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:15.529183+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336388096 unmapped: 75202560 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:16.529420+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336388096 unmapped: 75202560 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:17.529715+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336396288 unmapped: 75194368 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:18.529932+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336396288 unmapped: 75194368 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:19.530130+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336396288 unmapped: 75194368 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:20.530376+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336396288 unmapped: 75194368 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:21.530614+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336396288 unmapped: 75194368 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:22.530825+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336396288 unmapped: 75194368 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:23.530996+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336404480 unmapped: 75186176 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:24.531202+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336412672 unmapped: 75177984 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:25.531542+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336412672 unmapped: 75177984 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:26.531971+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:27.532404+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336412672 unmapped: 75177984 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:28.532601+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336412672 unmapped: 75177984 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:29.532814+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336412672 unmapped: 75177984 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:30.533012+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336412672 unmapped: 75177984 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:31.533234+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336412672 unmapped: 75177984 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:32.533452+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336412672 unmapped: 75177984 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:33.533716+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336429056 unmapped: 75161600 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:34.534161+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336429056 unmapped: 75161600 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:35.534363+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336437248 unmapped: 75153408 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:36.534571+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336437248 unmapped: 75153408 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:37.534818+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336437248 unmapped: 75153408 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:38.535113+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336437248 unmapped: 75153408 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:39.535449+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336437248 unmapped: 75153408 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:40.535771+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336445440 unmapped: 75145216 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:41.536005+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336445440 unmapped: 75145216 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:42.536178+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336445440 unmapped: 75145216 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:43.536368+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336445440 unmapped: 75145216 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:44.536809+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336453632 unmapped: 75137024 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:45.537058+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336453632 unmapped: 75137024 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:46.537279+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336453632 unmapped: 75137024 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:47.537470+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336461824 unmapped: 75128832 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:48.537730+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336461824 unmapped: 75128832 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:49.537964+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336478208 unmapped: 75112448 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:50.538180+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336478208 unmapped: 75112448 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:51.538423+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336478208 unmapped: 75112448 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:52.538613+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336478208 unmapped: 75112448 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:53.538844+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336478208 unmapped: 75112448 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:54.539024+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336478208 unmapped: 75112448 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:55.539215+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336478208 unmapped: 75112448 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:56.539409+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336486400 unmapped: 75104256 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:57.539583+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336486400 unmapped: 75104256 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:58.539765+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336486400 unmapped: 75104256 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:42:59.539896+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336486400 unmapped: 75104256 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:00.540049+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336486400 unmapped: 75104256 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:01.540190+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336486400 unmapped: 75104256 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:02.540364+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336486400 unmapped: 75104256 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:03.540520+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336494592 unmapped: 75096064 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:04.540719+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336494592 unmapped: 75096064 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:05.540892+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336502784 unmapped: 75087872 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:06.541071+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336502784 unmapped: 75087872 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:07.541211+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336502784 unmapped: 75087872 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:08.541334+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336510976 unmapped: 75079680 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:09.541484+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336510976 unmapped: 75079680 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:10.541742+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.2 total, 600.0 interval
                                           Cumulative writes: 45K writes, 182K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s
                                           Cumulative WAL: 45K writes, 16K syncs, 2.79 writes per sync, written: 0.17 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 307 writes, 622 keys, 307 commit groups, 1.0 writes per commit group, ingest: 0.27 MB, 0.00 MB/s
                                           Interval WAL: 307 writes, 149 syncs, 2.06 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336510976 unmapped: 75079680 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:11.541913+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336519168 unmapped: 75071488 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:12.542156+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336519168 unmapped: 75071488 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:13.542362+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336527360 unmapped: 75063296 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:14.542492+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336527360 unmapped: 75063296 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:15.542619+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336527360 unmapped: 75063296 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:16.542882+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336527360 unmapped: 75063296 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:17.543060+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336527360 unmapped: 75063296 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:18.543212+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336527360 unmapped: 75063296 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:19.543360+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336527360 unmapped: 75063296 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:20.543555+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336527360 unmapped: 75063296 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:21.543784+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336543744 unmapped: 75046912 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:22.543954+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336543744 unmapped: 75046912 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:23.544133+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336543744 unmapped: 75046912 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:24.544322+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336543744 unmapped: 75046912 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:25.544507+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336543744 unmapped: 75046912 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:26.544762+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336551936 unmapped: 75038720 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:27.544938+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336560128 unmapped: 75030528 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:28.545070+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336560128 unmapped: 75030528 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:29.545238+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336560128 unmapped: 75030528 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:30.545472+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336568320 unmapped: 75022336 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:31.545797+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336568320 unmapped: 75022336 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:32.546013+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336568320 unmapped: 75022336 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:33.546298+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336568320 unmapped: 75022336 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:34.546504+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336568320 unmapped: 75022336 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:35.546709+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336576512 unmapped: 75014144 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:36.547077+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336584704 unmapped: 75005952 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:37.547228+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336592896 unmapped: 74997760 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:38.547401+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336592896 unmapped: 74997760 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:39.547556+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336592896 unmapped: 74997760 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:40.547712+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336592896 unmapped: 74997760 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:41.547936+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336592896 unmapped: 74997760 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:42.548087+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336592896 unmapped: 74997760 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:43.548324+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336592896 unmapped: 74997760 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:44.548456+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336592896 unmapped: 74997760 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:45.548689+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336601088 unmapped: 74989568 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:46.548879+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336601088 unmapped: 74989568 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:47.549044+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336601088 unmapped: 74989568 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:48.549276+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336601088 unmapped: 74989568 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:49.549445+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336601088 unmapped: 74989568 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:50.549609+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336601088 unmapped: 74989568 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:51.549748+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336617472 unmapped: 74973184 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:52.549891+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336617472 unmapped: 74973184 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:53.550070+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336617472 unmapped: 74973184 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:54.550211+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336617472 unmapped: 74973184 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:55.550365+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336617472 unmapped: 74973184 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:56.550580+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336617472 unmapped: 74973184 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:57.550722+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336617472 unmapped: 74973184 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:58.550891+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336617472 unmapped: 74973184 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:43:59.551101+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336625664 unmapped: 74964992 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:00.551243+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336625664 unmapped: 74964992 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:01.551456+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336625664 unmapped: 74964992 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:02.551773+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336625664 unmapped: 74964992 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:03.551961+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336625664 unmapped: 74964992 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:04.552166+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336625664 unmapped: 74964992 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:05.552324+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336625664 unmapped: 74964992 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:06.552500+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336625664 unmapped: 74964992 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:07.552688+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336650240 unmapped: 74940416 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:08.552858+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336650240 unmapped: 74940416 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:09.553048+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336650240 unmapped: 74940416 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:10.553216+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336650240 unmapped: 74940416 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:11.553351+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336650240 unmapped: 74940416 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:12.553489+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336658432 unmapped: 74932224 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:13.553757+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336658432 unmapped: 74932224 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:14.553914+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336658432 unmapped: 74932224 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:15.554025+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336666624 unmapped: 74924032 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:16.554161+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336666624 unmapped: 74924032 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:17.554298+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336674816 unmapped: 74915840 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:18.554571+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336674816 unmapped: 74915840 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:19.554754+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336674816 unmapped: 74915840 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:20.554957+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336674816 unmapped: 74915840 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:21.555113+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336674816 unmapped: 74915840 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:22.555283+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336674816 unmapped: 74915840 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:23.555533+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336683008 unmapped: 74907648 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:24.555684+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336683008 unmapped: 74907648 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:25.555814+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336683008 unmapped: 74907648 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:26.555997+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336683008 unmapped: 74907648 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:27.556115+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336683008 unmapped: 74907648 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:28.556245+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336683008 unmapped: 74907648 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:29.556398+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336683008 unmapped: 74907648 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:30.556619+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336691200 unmapped: 74899456 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:31.556772+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336699392 unmapped: 74891264 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:32.556894+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336707584 unmapped: 74883072 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:33.557025+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336723968 unmapped: 74866688 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:34.557161+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336723968 unmapped: 74866688 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:35.557299+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336723968 unmapped: 74866688 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:36.557479+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336723968 unmapped: 74866688 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:37.557617+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336723968 unmapped: 74866688 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:14 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:38.557806+0000)
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:14 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:14 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336723968 unmapped: 74866688 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:39.557949+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336732160 unmapped: 74858496 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:40.558106+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336732160 unmapped: 74858496 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:41.558250+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336732160 unmapped: 74858496 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:42.558395+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336732160 unmapped: 74858496 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:43.558573+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336732160 unmapped: 74858496 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:44.558714+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336732160 unmapped: 74858496 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:45.558871+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336732160 unmapped: 74858496 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:46.559079+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336732160 unmapped: 74858496 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:47.559226+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336740352 unmapped: 74850304 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:48.559405+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336740352 unmapped: 74850304 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:49.559563+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336748544 unmapped: 74842112 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:50.559797+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336748544 unmapped: 74842112 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:51.560035+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336748544 unmapped: 74842112 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:52.560275+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336748544 unmapped: 74842112 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:53.560444+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336748544 unmapped: 74842112 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:54.560692+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336748544 unmapped: 74842112 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:55.560885+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336756736 unmapped: 74833920 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:56.561060+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336764928 unmapped: 74825728 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:57.561203+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336764928 unmapped: 74825728 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:58.561346+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336764928 unmapped: 74825728 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:44:59.561492+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336764928 unmapped: 74825728 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:00.561684+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336764928 unmapped: 74825728 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:01.561834+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336764928 unmapped: 74825728 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:02.562021+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336764928 unmapped: 74825728 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:03.562187+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336773120 unmapped: 74817536 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:04.562328+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336773120 unmapped: 74817536 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:05.562514+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336781312 unmapped: 74809344 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:06.562782+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336781312 unmapped: 74809344 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:07.563004+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336781312 unmapped: 74809344 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:08.563193+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336781312 unmapped: 74809344 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:09.563403+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336781312 unmapped: 74809344 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:10.563577+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336789504 unmapped: 74801152 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:11.563734+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336789504 unmapped: 74801152 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:12.563964+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336789504 unmapped: 74801152 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:13.564098+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336797696 unmapped: 74792960 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:14.564279+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336805888 unmapped: 74784768 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:15.564427+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336805888 unmapped: 74784768 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:16.564602+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336814080 unmapped: 74776576 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:17.564776+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336814080 unmapped: 74776576 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:18.564902+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336814080 unmapped: 74776576 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:19.565052+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336822272 unmapped: 74768384 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:20.565217+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336822272 unmapped: 74768384 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:21.565335+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336830464 unmapped: 74760192 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:22.565475+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336830464 unmapped: 74760192 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:23.565666+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336830464 unmapped: 74760192 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:24.565833+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336830464 unmapped: 74760192 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:25.565973+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336830464 unmapped: 74760192 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:26.566165+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336830464 unmapped: 74760192 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:27.566336+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336830464 unmapped: 74760192 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:28.566524+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336830464 unmapped: 74760192 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:29.566703+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336846848 unmapped: 74743808 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:30.566884+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336846848 unmapped: 74743808 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:31.567120+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336846848 unmapped: 74743808 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4e95c9000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:32.567269+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 599.788513184s of 600.127685547s, submitted: 106
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336789504 unmapped: 74801152 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:33.567447+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336789504 unmapped: 74801152 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:34.567613+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336814080 unmapped: 74776576 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:35.567799+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336830464 unmapped: 74760192 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:36.568025+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336830464 unmapped: 74760192 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:37.568152+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336830464 unmapped: 74760192 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:38.568297+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336830464 unmapped: 74760192 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:39.568452+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336830464 unmapped: 74760192 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:40.568615+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336830464 unmapped: 74760192 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:41.568788+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336830464 unmapped: 74760192 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:42.568930+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336830464 unmapped: 74760192 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:43.569109+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336838656 unmapped: 74752000 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:44.569321+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336838656 unmapped: 74752000 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:45.569494+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336838656 unmapped: 74752000 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:46.569700+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336838656 unmapped: 74752000 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:47.569904+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336838656 unmapped: 74752000 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:48.570109+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336838656 unmapped: 74752000 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:49.570291+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336838656 unmapped: 74752000 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:50.570822+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336838656 unmapped: 74752000 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:51.571010+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336846848 unmapped: 74743808 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:52.571233+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336846848 unmapped: 74743808 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:53.571414+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336846848 unmapped: 74743808 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:54.571726+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336855040 unmapped: 74735616 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:55.571868+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336855040 unmapped: 74735616 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:56.572051+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336863232 unmapped: 74727424 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:57.572182+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336863232 unmapped: 74727424 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:58.572342+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336863232 unmapped: 74727424 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:45:59.572466+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336863232 unmapped: 74727424 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:00.572718+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336863232 unmapped: 74727424 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:01.572849+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336863232 unmapped: 74727424 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:02.573005+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336863232 unmapped: 74727424 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:03.573174+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336871424 unmapped: 74719232 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:04.573297+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336871424 unmapped: 74719232 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:05.573492+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336871424 unmapped: 74719232 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:06.573758+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336871424 unmapped: 74719232 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:07.573906+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336879616 unmapped: 74711040 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:08.574064+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336887808 unmapped: 74702848 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:09.574185+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336887808 unmapped: 74702848 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:10.574324+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336887808 unmapped: 74702848 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:11.574479+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336887808 unmapped: 74702848 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:12.574615+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336887808 unmapped: 74702848 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:13.574834+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336887808 unmapped: 74702848 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:14.575002+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336896000 unmapped: 74694656 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:15.575148+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336896000 unmapped: 74694656 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:16.575366+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336896000 unmapped: 74694656 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:17.575521+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336904192 unmapped: 74686464 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:18.575711+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336904192 unmapped: 74686464 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:19.575970+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336904192 unmapped: 74686464 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:20.576177+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336904192 unmapped: 74686464 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:21.576315+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336904192 unmapped: 74686464 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:22.576469+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336904192 unmapped: 74686464 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:23.576726+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336904192 unmapped: 74686464 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:24.576905+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336912384 unmapped: 74678272 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:25.577061+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336912384 unmapped: 74678272 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:26.577244+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336912384 unmapped: 74678272 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:27.577373+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336912384 unmapped: 74678272 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:28.577533+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336912384 unmapped: 74678272 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:29.577725+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336920576 unmapped: 74670080 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:30.577939+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336920576 unmapped: 74670080 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:31.578079+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336928768 unmapped: 74661888 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:32.578253+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336928768 unmapped: 74661888 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:33.578409+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336928768 unmapped: 74661888 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:34.578548+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336928768 unmapped: 74661888 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:35.578695+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336928768 unmapped: 74661888 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:36.578882+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336928768 unmapped: 74661888 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:37.579032+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336928768 unmapped: 74661888 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:38.579187+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336945152 unmapped: 74645504 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:39.579297+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336945152 unmapped: 74645504 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:40.579541+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336953344 unmapped: 74637312 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:41.579659+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336961536 unmapped: 74629120 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:42.579764+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:43.579905+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336961536 unmapped: 74629120 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:44.580073+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336961536 unmapped: 74629120 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:45.580255+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336961536 unmapped: 74629120 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:46.580461+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336961536 unmapped: 74629120 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:47.580592+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336977920 unmapped: 74612736 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:48.580774+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336977920 unmapped: 74612736 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:49.580915+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336986112 unmapped: 74604544 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:50.581097+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336986112 unmapped: 74604544 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:51.581990+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336986112 unmapped: 74604544 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:52.582209+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336994304 unmapped: 74596352 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:53.582417+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336994304 unmapped: 74596352 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:54.582617+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336994304 unmapped: 74596352 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:55.582794+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336994304 unmapped: 74596352 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:56.582963+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 336994304 unmapped: 74596352 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:57.583102+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337002496 unmapped: 74588160 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:58.583281+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337002496 unmapped: 74588160 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:46:59.583419+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337002496 unmapped: 74588160 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:00.583566+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337002496 unmapped: 74588160 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:01.584235+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337002496 unmapped: 74588160 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:02.584861+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337002496 unmapped: 74588160 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:03.585026+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337010688 unmapped: 74579968 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:04.585181+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337010688 unmapped: 74579968 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:05.585324+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337018880 unmapped: 74571776 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:06.585487+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337018880 unmapped: 74571776 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:07.585622+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337018880 unmapped: 74571776 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:08.585786+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337018880 unmapped: 74571776 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:09.585956+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337018880 unmapped: 74571776 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:10.586149+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337018880 unmapped: 74571776 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:11.586294+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337027072 unmapped: 74563584 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:12.586477+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337027072 unmapped: 74563584 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:13.586598+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337043456 unmapped: 74547200 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:14.586774+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337043456 unmapped: 74547200 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:15.587193+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337043456 unmapped: 74547200 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:16.587404+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337043456 unmapped: 74547200 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:17.587568+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337043456 unmapped: 74547200 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:18.587684+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337043456 unmapped: 74547200 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:19.587879+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337043456 unmapped: 74547200 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:20.588033+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337051648 unmapped: 74539008 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:21.588169+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337059840 unmapped: 74530816 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:22.588312+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337059840 unmapped: 74530816 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:23.588430+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337059840 unmapped: 74530816 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:24.588601+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337059840 unmapped: 74530816 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:25.588744+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337059840 unmapped: 74530816 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:26.588927+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337068032 unmapped: 74522624 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:27.589069+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337068032 unmapped: 74522624 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:28.589237+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337068032 unmapped: 74522624 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:29.589383+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337076224 unmapped: 74514432 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:30.589544+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337076224 unmapped: 74514432 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:31.589693+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337076224 unmapped: 74514432 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:32.589853+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337076224 unmapped: 74514432 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:33.590013+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337076224 unmapped: 74514432 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:34.590174+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337076224 unmapped: 74514432 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:35.590302+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337084416 unmapped: 74506240 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:36.590479+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337084416 unmapped: 74506240 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:37.590700+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337092608 unmapped: 74498048 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:38.590870+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337092608 unmapped: 74498048 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:39.591020+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337092608 unmapped: 74498048 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:40.591164+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337092608 unmapped: 74498048 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:41.591332+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337092608 unmapped: 74498048 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:42.591499+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337092608 unmapped: 74498048 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:43.591670+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337092608 unmapped: 74498048 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:44.591845+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337092608 unmapped: 74498048 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:45.591963+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337108992 unmapped: 74481664 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:46.592129+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337108992 unmapped: 74481664 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:47.592261+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337108992 unmapped: 74481664 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:48.592424+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337108992 unmapped: 74481664 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:49.592608+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337108992 unmapped: 74481664 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:50.592781+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337108992 unmapped: 74481664 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:51.592892+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337125376 unmapped: 74465280 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:52.593073+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337125376 unmapped: 74465280 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:53.593295+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 74457088 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:54.593446+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 74457088 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:55.593604+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 74457088 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:56.593838+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 74457088 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:57.593998+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 74457088 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:58.594147+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 74457088 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:47:59.594273+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337133568 unmapped: 74457088 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:00.594502+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337141760 unmapped: 74448896 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:01.594716+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337141760 unmapped: 74448896 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:02.594921+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337141760 unmapped: 74448896 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:03.595159+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337141760 unmapped: 74448896 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:04.595303+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337141760 unmapped: 74448896 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:05.595475+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337141760 unmapped: 74448896 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:06.595707+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337141760 unmapped: 74448896 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:07.595898+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337149952 unmapped: 74440704 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:08.596102+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337149952 unmapped: 74440704 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:09.596250+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337158144 unmapped: 74432512 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:10.596412+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337158144 unmapped: 74432512 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:11.596556+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337158144 unmapped: 74432512 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:12.596734+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337158144 unmapped: 74432512 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:13.596881+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337166336 unmapped: 74424320 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:14.597038+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337166336 unmapped: 74424320 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:15.597169+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337174528 unmapped: 74416128 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:16.597359+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337174528 unmapped: 74416128 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:17.597547+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337182720 unmapped: 74407936 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:18.597734+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337182720 unmapped: 74407936 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:19.597872+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337182720 unmapped: 74407936 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:20.598029+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337182720 unmapped: 74407936 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:21.598225+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337182720 unmapped: 74407936 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:22.598397+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337182720 unmapped: 74407936 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:23.598579+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337182720 unmapped: 74407936 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:24.598742+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337182720 unmapped: 74407936 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:25.598883+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337207296 unmapped: 74383360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:26.599089+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337207296 unmapped: 74383360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:27.599245+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337207296 unmapped: 74383360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:28.599444+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337215488 unmapped: 74375168 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:29.599618+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337215488 unmapped: 74375168 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:30.599902+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337223680 unmapped: 74366976 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:31.600136+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337223680 unmapped: 74366976 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:32.600368+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337223680 unmapped: 74366976 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:33.600556+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337223680 unmapped: 74366976 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:34.600744+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337223680 unmapped: 74366976 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:35.600904+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337223680 unmapped: 74366976 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:36.601096+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337231872 unmapped: 74358784 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:37.601236+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337231872 unmapped: 74358784 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:38.601485+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337231872 unmapped: 74358784 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:39.601694+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337231872 unmapped: 74358784 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:40.601860+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337240064 unmapped: 74350592 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:41.601985+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337248256 unmapped: 74342400 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:42.602110+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337248256 unmapped: 74342400 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:43.602296+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337248256 unmapped: 74342400 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:44.602442+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337248256 unmapped: 74342400 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:45.602617+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337248256 unmapped: 74342400 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:46.602950+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337248256 unmapped: 74342400 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:47.603290+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337248256 unmapped: 74342400 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:48.603525+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337256448 unmapped: 74334208 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:49.603732+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337256448 unmapped: 74334208 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:50.603958+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337256448 unmapped: 74334208 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:51.604135+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337256448 unmapped: 74334208 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:52.604292+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337256448 unmapped: 74334208 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:53.604486+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337256448 unmapped: 74334208 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:54.604689+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337256448 unmapped: 74334208 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:55.604864+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337256448 unmapped: 74334208 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:56.605108+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337281024 unmapped: 74309632 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:57.605296+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337281024 unmapped: 74309632 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:58.605473+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337281024 unmapped: 74309632 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:48:59.605693+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337281024 unmapped: 74309632 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:00.606012+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337281024 unmapped: 74309632 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:01.606320+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337281024 unmapped: 74309632 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:02.606549+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337281024 unmapped: 74309632 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:03.606763+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337289216 unmapped: 74301440 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:04.606962+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337289216 unmapped: 74301440 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:05.607139+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337289216 unmapped: 74301440 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:06.607439+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337289216 unmapped: 74301440 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:07.607689+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337289216 unmapped: 74301440 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:08.607963+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337297408 unmapped: 74293248 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:09.608100+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337305600 unmapped: 74285056 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:10.608358+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337305600 unmapped: 74285056 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:11.608554+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337305600 unmapped: 74285056 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:12.608731+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337321984 unmapped: 74268672 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:13.608883+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337321984 unmapped: 74268672 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:14.609015+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337321984 unmapped: 74268672 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:15.609181+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337321984 unmapped: 74268672 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:16.609376+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337321984 unmapped: 74268672 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:17.609537+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337321984 unmapped: 74268672 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:18.609728+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337321984 unmapped: 74268672 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:19.609914+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337330176 unmapped: 74260480 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:20.610069+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337330176 unmapped: 74260480 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:21.610305+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337330176 unmapped: 74260480 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:22.610482+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337330176 unmapped: 74260480 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:23.610684+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337338368 unmapped: 74252288 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:24.610856+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337338368 unmapped: 74252288 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:25.611030+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337338368 unmapped: 74252288 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:26.611186+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337338368 unmapped: 74252288 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:27.611376+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337354752 unmapped: 74235904 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:28.611507+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337354752 unmapped: 74235904 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:29.611701+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337354752 unmapped: 74235904 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:30.611880+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337354752 unmapped: 74235904 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:31.612004+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337362944 unmapped: 74227712 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:32.612191+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337362944 unmapped: 74227712 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:33.612326+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337362944 unmapped: 74227712 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:34.612492+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337362944 unmapped: 74227712 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:35.612629+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337362944 unmapped: 74227712 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:36.612812+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337371136 unmapped: 74219520 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:37.612991+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337371136 unmapped: 74219520 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:38.613173+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337371136 unmapped: 74219520 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:39.613338+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337371136 unmapped: 74219520 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:40.613523+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337371136 unmapped: 74219520 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:41.613701+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337371136 unmapped: 74219520 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:42.613849+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337379328 unmapped: 74211328 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:43.614014+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337379328 unmapped: 74211328 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:44.614155+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337387520 unmapped: 74203136 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:45.614269+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337387520 unmapped: 74203136 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:46.614436+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:47.614570+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337387520 unmapped: 74203136 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:48.614774+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337387520 unmapped: 74203136 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:49.614960+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337387520 unmapped: 74203136 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:50.615184+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337387520 unmapped: 74203136 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:51.615435+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337403904 unmapped: 74186752 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:52.615599+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337403904 unmapped: 74186752 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:53.615742+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337403904 unmapped: 74186752 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:54.615888+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337403904 unmapped: 74186752 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:55.616038+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337403904 unmapped: 74186752 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:56.616203+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337403904 unmapped: 74186752 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:57.616321+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337428480 unmapped: 74162176 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:58.616575+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337436672 unmapped: 74153984 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:49:59.616718+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337436672 unmapped: 74153984 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:00.656253+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337436672 unmapped: 74153984 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:01.656381+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337444864 unmapped: 74145792 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:02.656516+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337444864 unmapped: 74145792 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:03.656741+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337444864 unmapped: 74145792 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:04.656938+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337444864 unmapped: 74145792 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:05.657220+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337444864 unmapped: 74145792 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:06.657483+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337444864 unmapped: 74145792 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:07.657611+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337444864 unmapped: 74145792 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:08.657752+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337444864 unmapped: 74145792 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:09.657884+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337461248 unmapped: 74129408 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:10.658077+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337469440 unmapped: 74121216 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:11.658346+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337469440 unmapped: 74121216 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:12.658558+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337469440 unmapped: 74121216 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:13.658804+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337477632 unmapped: 74113024 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:14.658989+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337477632 unmapped: 74113024 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:15.659123+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337485824 unmapped: 74104832 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:16.659280+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337485824 unmapped: 74104832 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:17.659434+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337494016 unmapped: 74096640 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:18.659622+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337494016 unmapped: 74096640 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:19.659960+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337494016 unmapped: 74096640 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:20.660260+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337494016 unmapped: 74096640 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:21.660451+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337494016 unmapped: 74096640 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:22.660583+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337494016 unmapped: 74096640 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:23.660729+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337494016 unmapped: 74096640 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:24.660865+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337502208 unmapped: 74088448 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:25.661059+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337510400 unmapped: 74080256 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:26.661266+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337510400 unmapped: 74080256 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:27.661405+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337510400 unmapped: 74080256 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:28.661580+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337510400 unmapped: 74080256 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:29.661725+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337510400 unmapped: 74080256 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:30.661947+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337510400 unmapped: 74080256 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:31.662196+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337510400 unmapped: 74080256 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:32.662469+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337518592 unmapped: 74072064 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:33.662746+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337518592 unmapped: 74072064 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:34.662995+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337518592 unmapped: 74072064 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:35.663162+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337518592 unmapped: 74072064 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:36.663400+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337518592 unmapped: 74072064 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:37.663695+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337518592 unmapped: 74072064 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:38.663941+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337518592 unmapped: 74072064 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:39.664111+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337526784 unmapped: 74063872 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:40.664256+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337526784 unmapped: 74063872 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:41.664400+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337543168 unmapped: 74047488 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:42.664578+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337543168 unmapped: 74047488 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:43.664723+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337543168 unmapped: 74047488 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:44.664933+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337543168 unmapped: 74047488 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:45.665093+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337543168 unmapped: 74047488 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:46.665387+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337543168 unmapped: 74047488 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:47.665561+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337543168 unmapped: 74047488 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:48.667484+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337543168 unmapped: 74047488 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:49.667749+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337551360 unmapped: 74039296 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:50.667909+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337559552 unmapped: 74031104 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:51.668060+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337559552 unmapped: 74031104 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:52.668211+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337567744 unmapped: 74022912 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:53.668424+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337567744 unmapped: 74022912 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:54.668624+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337567744 unmapped: 74022912 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:55.668927+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337575936 unmapped: 74014720 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:56.669493+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337575936 unmapped: 74014720 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:57.669686+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337584128 unmapped: 74006528 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:58.669862+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337584128 unmapped: 74006528 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:50:59.670025+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337592320 unmapped: 73998336 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:00.670176+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337592320 unmapped: 73998336 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:01.670323+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337592320 unmapped: 73998336 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:02.670470+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337592320 unmapped: 73998336 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:03.670595+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337592320 unmapped: 73998336 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:04.670782+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337592320 unmapped: 73998336 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:05.670933+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337592320 unmapped: 73998336 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:06.671140+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337592320 unmapped: 73998336 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:07.671387+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337592320 unmapped: 73998336 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:08.671708+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337600512 unmapped: 73990144 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:09.671897+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337600512 unmapped: 73990144 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:10.672067+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337600512 unmapped: 73990144 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:11.672262+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337608704 unmapped: 73981952 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:12.672472+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337608704 unmapped: 73981952 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:13.672617+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337616896 unmapped: 73973760 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:14.672803+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337616896 unmapped: 73973760 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:15.672962+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337625088 unmapped: 73965568 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:16.673227+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337625088 unmapped: 73965568 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:17.673412+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337625088 unmapped: 73965568 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:18.673723+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337625088 unmapped: 73965568 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:19.673920+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337633280 unmapped: 73957376 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:20.674132+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337633280 unmapped: 73957376 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:21.674294+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337633280 unmapped: 73957376 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:22.674472+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337633280 unmapped: 73957376 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:23.674778+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337633280 unmapped: 73957376 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:24.674999+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337633280 unmapped: 73957376 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:25.675196+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337633280 unmapped: 73957376 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:26.675428+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337633280 unmapped: 73957376 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:27.675622+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337649664 unmapped: 73940992 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:28.675891+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337649664 unmapped: 73940992 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:29.676151+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337649664 unmapped: 73940992 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:30.676369+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337649664 unmapped: 73940992 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:31.676600+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337657856 unmapped: 73932800 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:32.676835+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337657856 unmapped: 73932800 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:33.677039+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337657856 unmapped: 73932800 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating renewing rotating keys (they expired before 2025-11-25T17:51:34.677350+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _finish_auth 0
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:34.678608+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337657856 unmapped: 73932800 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:35.677703+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337674240 unmapped: 73916416 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:36.678006+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337674240 unmapped: 73916416 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:37.678332+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337674240 unmapped: 73916416 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:38.678623+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337674240 unmapped: 73916416 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:39.679039+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337674240 unmapped: 73916416 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:40.679351+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337674240 unmapped: 73916416 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:41.679598+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337674240 unmapped: 73916416 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:42.679898+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337674240 unmapped: 73916416 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:43.680109+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337690624 unmapped: 73900032 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:44.680299+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337690624 unmapped: 73900032 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:45.680514+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337690624 unmapped: 73900032 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:46.680742+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337690624 unmapped: 73900032 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:47.680909+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337690624 unmapped: 73900032 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:48.681081+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337690624 unmapped: 73900032 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:49.681206+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337690624 unmapped: 73900032 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:50.681390+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337690624 unmapped: 73900032 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:51.681532+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337698816 unmapped: 73891840 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:52.681764+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337698816 unmapped: 73891840 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:53.681936+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337715200 unmapped: 73875456 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:54.682153+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337723392 unmapped: 73867264 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:55.682331+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337723392 unmapped: 73867264 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:56.682535+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337723392 unmapped: 73867264 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:57.682700+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337723392 unmapped: 73867264 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:58.682865+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337723392 unmapped: 73867264 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:51:59.683045+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337731584 unmapped: 73859072 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:00.683259+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337731584 unmapped: 73859072 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:01.683430+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337731584 unmapped: 73859072 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:02.683609+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337739776 unmapped: 73850880 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:03.683741+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337739776 unmapped: 73850880 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:04.683893+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337739776 unmapped: 73850880 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:05.684028+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337739776 unmapped: 73850880 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:06.684210+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337747968 unmapped: 73842688 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:07.684358+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337747968 unmapped: 73842688 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:08.684507+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337756160 unmapped: 73834496 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:09.684712+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337764352 unmapped: 73826304 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:10.684863+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337764352 unmapped: 73826304 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:11.685045+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337764352 unmapped: 73826304 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:12.685275+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337764352 unmapped: 73826304 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:13.685493+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337764352 unmapped: 73826304 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:14.685718+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337764352 unmapped: 73826304 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:15.685884+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337772544 unmapped: 73818112 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:16.686079+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337772544 unmapped: 73818112 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:17.686206+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337772544 unmapped: 73818112 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:18.686355+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337780736 unmapped: 73809920 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:19.686474+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337780736 unmapped: 73809920 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:20.686615+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337780736 unmapped: 73809920 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:21.686779+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337780736 unmapped: 73809920 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:22.686937+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337780736 unmapped: 73809920 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:23.687067+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337780736 unmapped: 73809920 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:24.687196+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337780736 unmapped: 73809920 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:25.688391+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337788928 unmapped: 73801728 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:26.688748+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337788928 unmapped: 73801728 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:27.688960+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337788928 unmapped: 73801728 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:28.689126+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337788928 unmapped: 73801728 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:29.689356+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337788928 unmapped: 73801728 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:30.689553+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337797120 unmapped: 73793536 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:31.689792+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337797120 unmapped: 73793536 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:32.690302+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337813504 unmapped: 73777152 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:33.690406+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337813504 unmapped: 73777152 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:34.690580+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337813504 unmapped: 73777152 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:35.690745+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337813504 unmapped: 73777152 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:36.690948+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337813504 unmapped: 73777152 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:37.691142+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337813504 unmapped: 73777152 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:38.691316+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337813504 unmapped: 73777152 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:39.691527+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337829888 unmapped: 73760768 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:40.691736+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337838080 unmapped: 73752576 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:41.692001+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337838080 unmapped: 73752576 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:42.692201+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337838080 unmapped: 73752576 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:43.692419+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337838080 unmapped: 73752576 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:44.692627+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337838080 unmapped: 73752576 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:45.692946+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337838080 unmapped: 73752576 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:46.693209+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337846272 unmapped: 73744384 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:47.693405+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337846272 unmapped: 73744384 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:48.693741+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337846272 unmapped: 73744384 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:49.693923+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337854464 unmapped: 73736192 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:50.694211+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337854464 unmapped: 73736192 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:51.694358+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337854464 unmapped: 73736192 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:52.694579+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337854464 unmapped: 73736192 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:53.694781+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337854464 unmapped: 73736192 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:54.694951+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337854464 unmapped: 73736192 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:55.695215+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337862656 unmapped: 73728000 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:56.695451+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337870848 unmapped: 73719808 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:57.695681+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337879040 unmapped: 73711616 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:58.695945+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337879040 unmapped: 73711616 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:52:59.696146+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337879040 unmapped: 73711616 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:00.696358+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:01.696565+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337879040 unmapped: 73711616 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:02.696752+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337879040 unmapped: 73711616 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:03.696933+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337879040 unmapped: 73711616 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:04.697205+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337879040 unmapped: 73711616 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:05.697339+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337887232 unmapped: 73703424 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:06.697501+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337887232 unmapped: 73703424 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:07.697710+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337887232 unmapped: 73703424 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:08.697942+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337887232 unmapped: 73703424 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:09.698144+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337887232 unmapped: 73703424 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.2 total, 600.0 interval
                                           Cumulative writes: 45K writes, 182K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.02 MB/s
                                           Cumulative WAL: 45K writes, 16K syncs, 2.79 writes per sync, written: 0.17 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.10 MB, 0.00 MB/s
                                           Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:10.698324+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337887232 unmapped: 73703424 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:11.698536+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337887232 unmapped: 73703424 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:12.698719+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337895424 unmapped: 73695232 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:13.698965+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337911808 unmapped: 73678848 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:14.699141+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337911808 unmapped: 73678848 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:15.699332+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337911808 unmapped: 73678848 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:16.699578+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337911808 unmapped: 73678848 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:17.699731+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337920000 unmapped: 73670656 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:18.699899+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337920000 unmapped: 73670656 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:19.700100+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337928192 unmapped: 73662464 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:20.700336+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337928192 unmapped: 73662464 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:21.700550+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337928192 unmapped: 73662464 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:22.700821+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337928192 unmapped: 73662464 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:23.701004+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337928192 unmapped: 73662464 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:24.701165+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337928192 unmapped: 73662464 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:25.701361+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337928192 unmapped: 73662464 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:26.701567+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337928192 unmapped: 73662464 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:27.701738+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337928192 unmapped: 73662464 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:28.701894+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337936384 unmapped: 73654272 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:29.702065+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337944576 unmapped: 73646080 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:30.702311+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337944576 unmapped: 73646080 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:31.702491+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337944576 unmapped: 73646080 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:32.702666+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337944576 unmapped: 73646080 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:33.702877+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337944576 unmapped: 73646080 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:34.703047+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337960960 unmapped: 73629696 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:35.703321+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337969152 unmapped: 73621504 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:36.703583+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337969152 unmapped: 73621504 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:37.703775+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337977344 unmapped: 73613312 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:38.703973+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337977344 unmapped: 73613312 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:39.704137+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337977344 unmapped: 73613312 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:40.704317+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337977344 unmapped: 73613312 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:41.704489+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337977344 unmapped: 73613312 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:42.704716+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337977344 unmapped: 73613312 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:43.704881+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337985536 unmapped: 73605120 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:44.705085+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337985536 unmapped: 73605120 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:45.705293+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337993728 unmapped: 73596928 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:46.705511+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337993728 unmapped: 73596928 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:47.705733+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337993728 unmapped: 73596928 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:48.705895+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337993728 unmapped: 73596928 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:49.706115+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337993728 unmapped: 73596928 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:50.706409+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 337993728 unmapped: 73596928 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:51.706807+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338001920 unmapped: 73588736 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:52.707010+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338001920 unmapped: 73588736 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:53.707253+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338010112 unmapped: 73580544 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:54.707479+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338010112 unmapped: 73580544 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 ms_handle_reset con 0x563f66c1f400 session 0x563f680cba40
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: handle_auth_request added challenge on 0x563f6dcc6000
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:55.707732+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338010112 unmapped: 73580544 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:56.707917+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338018304 unmapped: 73572352 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:57.708081+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338018304 unmapped: 73572352 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:58.708220+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338018304 unmapped: 73572352 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:53:59.708427+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338026496 unmapped: 73564160 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:00.708747+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338026496 unmapped: 73564160 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:01.708895+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338034688 unmapped: 73555968 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:02.709065+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338034688 unmapped: 73555968 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:03.709291+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338034688 unmapped: 73555968 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:04.709508+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338034688 unmapped: 73555968 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:05.709669+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338034688 unmapped: 73555968 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:06.709902+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338034688 unmapped: 73555968 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:07.710102+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338034688 unmapped: 73555968 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:08.710346+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338042880 unmapped: 73547776 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:09.710897+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338042880 unmapped: 73547776 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:10.711142+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338051072 unmapped: 73539584 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:11.711299+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338051072 unmapped: 73539584 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:12.711494+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338059264 unmapped: 73531392 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:13.711696+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338059264 unmapped: 73531392 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:14.711899+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338059264 unmapped: 73531392 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:15.712063+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338067456 unmapped: 73523200 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:16.712298+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338067456 unmapped: 73523200 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:17.712496+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 73515008 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:18.712714+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 73515008 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:19.712866+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 73515008 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:20.713030+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 73515008 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:21.713178+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 73515008 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:22.713341+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 73515008 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:23.713530+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338092032 unmapped: 73498624 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:24.713708+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338092032 unmapped: 73498624 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:25.713871+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338092032 unmapped: 73498624 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:26.714064+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338092032 unmapped: 73498624 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:27.714261+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338092032 unmapped: 73498624 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:28.714508+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338092032 unmapped: 73498624 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:29.714790+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338092032 unmapped: 73498624 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:30.714961+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338092032 unmapped: 73498624 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:31.715218+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338100224 unmapped: 73490432 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:32.715360+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338100224 unmapped: 73490432 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:33.715562+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338108416 unmapped: 73482240 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:34.715757+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338108416 unmapped: 73482240 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:35.715947+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338108416 unmapped: 73482240 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:36.716585+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338108416 unmapped: 73482240 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:37.716716+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338108416 unmapped: 73482240 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:38.716919+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338108416 unmapped: 73482240 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:39.717073+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338116608 unmapped: 73474048 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:40.717191+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338116608 unmapped: 73474048 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:41.717326+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338132992 unmapped: 73457664 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:42.717461+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338132992 unmapped: 73457664 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:43.717673+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338132992 unmapped: 73457664 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:44.717915+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338132992 unmapped: 73457664 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:45.718076+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338132992 unmapped: 73457664 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:46.718289+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338132992 unmapped: 73457664 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:47.718528+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338132992 unmapped: 73457664 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:48.718777+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338132992 unmapped: 73457664 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:49.719004+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338141184 unmapped: 73449472 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:50.719278+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338141184 unmapped: 73449472 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:51.719533+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338149376 unmapped: 73441280 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:52.719688+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338149376 unmapped: 73441280 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:53.719894+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338149376 unmapped: 73441280 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:54.720095+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338149376 unmapped: 73441280 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:55.720213+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338157568 unmapped: 73433088 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:56.720498+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338165760 unmapped: 73424896 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:57.720708+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338173952 unmapped: 73416704 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:58.720908+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338173952 unmapped: 73416704 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:54:59.721071+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338173952 unmapped: 73416704 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:00.721281+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338173952 unmapped: 73416704 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:01.721409+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338173952 unmapped: 73416704 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:02.721584+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338173952 unmapped: 73416704 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:03.721817+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338173952 unmapped: 73416704 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:04.722016+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338182144 unmapped: 73408512 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:05.722192+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338182144 unmapped: 73408512 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:06.722374+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338190336 unmapped: 73400320 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:07.722555+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338190336 unmapped: 73400320 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:08.722760+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338190336 unmapped: 73400320 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:09.722979+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338190336 unmapped: 73400320 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:10.723225+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338198528 unmapped: 73392128 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:11.723485+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338206720 unmapped: 73383936 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:12.723715+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338206720 unmapped: 73383936 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:13.723870+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338206720 unmapped: 73383936 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:14.724034+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338206720 unmapped: 73383936 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:15.724233+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338206720 unmapped: 73383936 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:16.724466+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338206720 unmapped: 73383936 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:17.724706+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338206720 unmapped: 73383936 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:18.724861+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338214912 unmapped: 73375744 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:19.725095+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338214912 unmapped: 73375744 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:20.725281+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338231296 unmapped: 73359360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:21.725509+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338239488 unmapped: 73351168 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:22.725718+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338239488 unmapped: 73351168 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:23.725894+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 ms_handle_reset con 0x563f67e98400 session 0x563f664d4000
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: handle_auth_request added challenge on 0x563f6825a000
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338239488 unmapped: 73351168 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:24.726098+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338239488 unmapped: 73351168 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:25.726286+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1212416
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338239488 unmapped: 73351168 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:26.726609+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338239488 unmapped: 73351168 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:27.726873+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338239488 unmapped: 73351168 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:28.727083+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338247680 unmapped: 73342976 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:29.727223+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 596.903991699s of 597.162597656s, submitted: 106
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338247680 unmapped: 73342976 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:30.727436+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338264064 unmapped: 73326592 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:31.727593+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338264064 unmapped: 73326592 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:32.727838+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338272256 unmapped: 73318400 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:33.728015+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338272256 unmapped: 73318400 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:34.728195+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 338272256 unmapped: 73318400 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:35.728348+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 72220672 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:36.728689+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 72220672 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:37.728822+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 72220672 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:38.729006+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 72220672 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:39.729298+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 72220672 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:40.729432+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 72220672 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:41.729630+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 72220672 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:42.729832+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339369984 unmapped: 72220672 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:43.730020+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339378176 unmapped: 72212480 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:44.730180+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339378176 unmapped: 72212480 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:45.730452+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339378176 unmapped: 72212480 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:46.730700+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339378176 unmapped: 72212480 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:47.730917+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339378176 unmapped: 72212480 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:48.731113+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339378176 unmapped: 72212480 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:49.731325+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339378176 unmapped: 72212480 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:50.731528+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339378176 unmapped: 72212480 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:51.731667+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339386368 unmapped: 72204288 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:52.731900+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339386368 unmapped: 72204288 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:53.732228+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339386368 unmapped: 72204288 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:54.732363+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339386368 unmapped: 72204288 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:55.732557+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339386368 unmapped: 72204288 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:56.732829+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339386368 unmapped: 72204288 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:57.732973+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339386368 unmapped: 72204288 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:58.733187+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339386368 unmapped: 72204288 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:55:59.733423+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 72196096 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:00.733704+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 72196096 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:01.733906+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 72196096 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:02.734103+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 72196096 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:03.734299+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 72196096 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:04.734499+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 72196096 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:05.734757+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339394560 unmapped: 72196096 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:06.735016+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339402752 unmapped: 72187904 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:07.735236+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339402752 unmapped: 72187904 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:08.735503+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 72179712 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:09.735746+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 72179712 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:10.735890+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 72179712 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:11.736051+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 72179712 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:12.736262+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 72179712 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:13.736469+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 72179712 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:14.736604+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 72179712 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:15.736734+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339410944 unmapped: 72179712 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:16.736899+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339419136 unmapped: 72171520 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:17.737093+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339419136 unmapped: 72171520 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:18.737309+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339419136 unmapped: 72171520 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:19.737545+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339419136 unmapped: 72171520 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:20.737761+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339419136 unmapped: 72171520 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:21.737996+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339419136 unmapped: 72171520 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:22.738317+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 72163328 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:23.738510+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339427328 unmapped: 72163328 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:24.738766+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339435520 unmapped: 72155136 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:25.738998+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339435520 unmapped: 72155136 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:26.739243+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339435520 unmapped: 72155136 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:27.739507+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339435520 unmapped: 72155136 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:28.739789+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339435520 unmapped: 72155136 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:29.739959+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:30.740261+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339435520 unmapped: 72155136 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:31.740739+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339435520 unmapped: 72155136 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:32.740946+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339443712 unmapped: 72146944 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:33.741215+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339443712 unmapped: 72146944 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:34.741434+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339443712 unmapped: 72146944 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:35.741695+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339443712 unmapped: 72146944 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:36.741953+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339451904 unmapped: 72138752 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:37.742118+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339451904 unmapped: 72138752 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:38.742287+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339451904 unmapped: 72138752 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:39.742458+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339460096 unmapped: 72130560 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:40.742724+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339460096 unmapped: 72130560 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:41.742933+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339476480 unmapped: 72114176 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:42.743132+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339476480 unmapped: 72114176 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:43.743285+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339476480 unmapped: 72114176 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:44.743470+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339476480 unmapped: 72114176 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:45.743717+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339476480 unmapped: 72114176 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:46.743970+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339476480 unmapped: 72114176 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:47.744197+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339476480 unmapped: 72114176 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:48.744491+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339476480 unmapped: 72114176 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:49.744697+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 72105984 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:50.744858+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 72105984 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:51.745036+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 72105984 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:52.745162+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 72105984 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:53.745355+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 72105984 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:54.745561+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339484672 unmapped: 72105984 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:55.745731+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339492864 unmapped: 72097792 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:56.745930+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339492864 unmapped: 72097792 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:57.746084+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339509248 unmapped: 72081408 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:58.746243+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339509248 unmapped: 72081408 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:56:59.746387+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339509248 unmapped: 72081408 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:00.746533+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339509248 unmapped: 72081408 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:01.746682+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339509248 unmapped: 72081408 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:02.746833+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339509248 unmapped: 72081408 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:03.747063+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339509248 unmapped: 72081408 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:04.747211+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339509248 unmapped: 72081408 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:05.747459+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339517440 unmapped: 72073216 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:06.747685+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339517440 unmapped: 72073216 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:07.747868+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339517440 unmapped: 72073216 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:08.748044+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339517440 unmapped: 72073216 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:09.748198+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339517440 unmapped: 72073216 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:10.748348+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339517440 unmapped: 72073216 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:11.748524+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339525632 unmapped: 72065024 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:12.748712+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339525632 unmapped: 72065024 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:13.748884+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339533824 unmapped: 72056832 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:14.749017+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339533824 unmapped: 72056832 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:15.749157+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339533824 unmapped: 72056832 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:16.749357+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339533824 unmapped: 72056832 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:17.749491+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339533824 unmapped: 72056832 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:18.749624+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339533824 unmapped: 72056832 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:19.749785+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339533824 unmapped: 72056832 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:20.749937+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339533824 unmapped: 72056832 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:21.750087+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339550208 unmapped: 72040448 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:22.750274+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339550208 unmapped: 72040448 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:23.750482+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339550208 unmapped: 72040448 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:24.750710+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339550208 unmapped: 72040448 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:25.750837+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339550208 unmapped: 72040448 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:26.751012+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339558400 unmapped: 72032256 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:27.751145+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339566592 unmapped: 72024064 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:28.751277+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339566592 unmapped: 72024064 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:29.751457+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339566592 unmapped: 72024064 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:30.751680+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339566592 unmapped: 72024064 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:31.751843+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339574784 unmapped: 72015872 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:32.752024+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339574784 unmapped: 72015872 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:33.752253+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339574784 unmapped: 72015872 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:34.752437+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339574784 unmapped: 72015872 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:35.752607+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339574784 unmapped: 72015872 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:36.752840+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339574784 unmapped: 72015872 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:37.753019+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339591168 unmapped: 71999488 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:38.753240+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339591168 unmapped: 71999488 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:39.753427+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339591168 unmapped: 71999488 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:40.753617+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339591168 unmapped: 71999488 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:41.753808+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339599360 unmapped: 71991296 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:42.753982+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339599360 unmapped: 71991296 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:43.754171+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339599360 unmapped: 71991296 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:44.754302+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339599360 unmapped: 71991296 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:45.754435+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339615744 unmapped: 71974912 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:46.754621+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339615744 unmapped: 71974912 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:47.754838+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339615744 unmapped: 71974912 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:48.754983+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339615744 unmapped: 71974912 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:49.755181+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339615744 unmapped: 71974912 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:50.755366+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339615744 unmapped: 71974912 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:51.755595+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339623936 unmapped: 71966720 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:52.755762+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339623936 unmapped: 71966720 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:53.784444+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339632128 unmapped: 71958528 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:54.784585+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339632128 unmapped: 71958528 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:55.784846+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339632128 unmapped: 71958528 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:56.785040+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339632128 unmapped: 71958528 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:57.785193+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339632128 unmapped: 71958528 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:58.785344+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339632128 unmapped: 71958528 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:57:59.785677+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339648512 unmapped: 71942144 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:00.785784+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339648512 unmapped: 71942144 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:01.785897+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339648512 unmapped: 71942144 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:02.786071+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339648512 unmapped: 71942144 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:03.786259+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339648512 unmapped: 71942144 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:04.786441+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339648512 unmapped: 71942144 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:05.786619+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339648512 unmapped: 71942144 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:06.786831+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339648512 unmapped: 71942144 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:07.786982+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339664896 unmapped: 71925760 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:08.787233+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339664896 unmapped: 71925760 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:09.787432+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339673088 unmapped: 71917568 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:10.787594+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339673088 unmapped: 71917568 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:11.787717+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339673088 unmapped: 71917568 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:12.787984+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339673088 unmapped: 71917568 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:13.788288+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339673088 unmapped: 71917568 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:14.788428+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339673088 unmapped: 71917568 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:15.788690+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339673088 unmapped: 71917568 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:16.788900+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339673088 unmapped: 71917568 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:17.789099+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339673088 unmapped: 71917568 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:18.789322+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339673088 unmapped: 71917568 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:19.789491+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339673088 unmapped: 71917568 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:20.789705+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339673088 unmapped: 71917568 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:21.789888+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339673088 unmapped: 71917568 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:22.790713+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339673088 unmapped: 71917568 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:23.790927+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339681280 unmapped: 71909376 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:24.791154+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339681280 unmapped: 71909376 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:25.791390+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339697664 unmapped: 71892992 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:26.791682+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339697664 unmapped: 71892992 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:27.791884+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339705856 unmapped: 71884800 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:28.792117+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339705856 unmapped: 71884800 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:29.792292+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339705856 unmapped: 71884800 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:30.792566+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339705856 unmapped: 71884800 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:31.792738+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339705856 unmapped: 71884800 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:32.792872+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339705856 unmapped: 71884800 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:33.793025+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339714048 unmapped: 71876608 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:34.793183+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339714048 unmapped: 71876608 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:35.793375+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339714048 unmapped: 71876608 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:36.793627+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339722240 unmapped: 71868416 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:37.793971+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339722240 unmapped: 71868416 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:38.794198+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339722240 unmapped: 71868416 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:39.794436+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339722240 unmapped: 71868416 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:40.794627+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339722240 unmapped: 71868416 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:41.794867+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339722240 unmapped: 71868416 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:42.795038+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339722240 unmapped: 71868416 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:43.795246+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339722240 unmapped: 71868416 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:44.795732+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339722240 unmapped: 71868416 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:45.795914+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339722240 unmapped: 71868416 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:46.796126+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339722240 unmapped: 71868416 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:47.796292+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339730432 unmapped: 71860224 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:48.796478+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339746816 unmapped: 71843840 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:49.796703+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339746816 unmapped: 71843840 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:50.796888+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339746816 unmapped: 71843840 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:51.797045+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339746816 unmapped: 71843840 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:52.797220+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339746816 unmapped: 71843840 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:53.797427+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339746816 unmapped: 71843840 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:54.797574+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339746816 unmapped: 71843840 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:55.797698+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339746816 unmapped: 71843840 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:56.797852+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339746816 unmapped: 71843840 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:57.798037+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339746816 unmapped: 71843840 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:58.798216+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339746816 unmapped: 71843840 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:58:59.798397+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339746816 unmapped: 71843840 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:00.798530+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339746816 unmapped: 71843840 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:01.798706+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339755008 unmapped: 71835648 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:02.798924+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339755008 unmapped: 71835648 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:03.799218+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339755008 unmapped: 71835648 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:04.799421+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339755008 unmapped: 71835648 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:05.799621+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339755008 unmapped: 71835648 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:06.799861+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339755008 unmapped: 71835648 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:07.800072+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339755008 unmapped: 71835648 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:08.800292+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339755008 unmapped: 71835648 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:09.800523+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339755008 unmapped: 71835648 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:10.800786+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339771392 unmapped: 71819264 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:11.801072+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339771392 unmapped: 71819264 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:12.801233+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339779584 unmapped: 71811072 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:13.801401+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339779584 unmapped: 71811072 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:14.801575+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339779584 unmapped: 71811072 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:15.801717+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339779584 unmapped: 71811072 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:16.801881+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339779584 unmapped: 71811072 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:17.802019+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339779584 unmapped: 71811072 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:18.802214+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339779584 unmapped: 71811072 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:19.802373+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 71802880 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:20.802565+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 71802880 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:21.802775+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339787776 unmapped: 71802880 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:22.803019+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339795968 unmapped: 71794688 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:23.803173+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339795968 unmapped: 71794688 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:24.803349+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339795968 unmapped: 71794688 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:25.803562+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339795968 unmapped: 71794688 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:26.803777+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339795968 unmapped: 71794688 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:27.803957+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339804160 unmapped: 71786496 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:28.804298+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339812352 unmapped: 71778304 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:29.804486+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339812352 unmapped: 71778304 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:30.804720+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339812352 unmapped: 71778304 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:31.804880+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339812352 unmapped: 71778304 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:32.805054+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339812352 unmapped: 71778304 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:33.805219+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339812352 unmapped: 71778304 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:34.805360+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339812352 unmapped: 71778304 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:35.805504+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339812352 unmapped: 71778304 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:36.805687+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339812352 unmapped: 71778304 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:37.805828+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339812352 unmapped: 71778304 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:38.805968+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339820544 unmapped: 71770112 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:39.806139+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339820544 unmapped: 71770112 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:40.806290+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339820544 unmapped: 71770112 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:41.806493+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339820544 unmapped: 71770112 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:42.806656+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339828736 unmapped: 71761920 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:43.806800+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339828736 unmapped: 71761920 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:44.806996+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339828736 unmapped: 71761920 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:45.807155+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 71753728 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:46.807374+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 71753728 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:47.807532+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 71753728 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:48.807731+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 71753728 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:49.807866+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 71753728 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:50.808039+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 71753728 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:51.808199+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339836928 unmapped: 71753728 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:52.808367+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339845120 unmapped: 71745536 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:53.808538+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339845120 unmapped: 71745536 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:54.808693+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339845120 unmapped: 71745536 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:55.808840+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339845120 unmapped: 71745536 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:56.808975+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339845120 unmapped: 71745536 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:57.809121+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339845120 unmapped: 71745536 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:58.809323+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339845120 unmapped: 71745536 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T17:59:59.809476+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339845120 unmapped: 71745536 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:00.809620+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339845120 unmapped: 71745536 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:01.809828+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339845120 unmapped: 71745536 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:02.809990+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339845120 unmapped: 71745536 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:03.810201+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339845120 unmapped: 71745536 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:04.810375+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339845120 unmapped: 71745536 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:05.810582+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339845120 unmapped: 71745536 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:06.810811+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339845120 unmapped: 71745536 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:07.810994+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339845120 unmapped: 71745536 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:08.811189+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 71729152 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:09.811354+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 71729152 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:10.811505+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 71729152 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:11.811729+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 71729152 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:12.811897+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 71729152 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:13.812088+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 71729152 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:14.812290+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 71729152 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:15.812438+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339861504 unmapped: 71729152 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:16.812711+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339869696 unmapped: 71720960 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:17.812949+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339869696 unmapped: 71720960 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:18.813180+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339869696 unmapped: 71720960 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:19.813458+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339869696 unmapped: 71720960 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:20.813604+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339869696 unmapped: 71720960 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:21.813747+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339869696 unmapped: 71720960 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:22.813894+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339877888 unmapped: 71712768 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:23.814032+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339877888 unmapped: 71712768 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:24.814288+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339877888 unmapped: 71712768 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:25.814447+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339877888 unmapped: 71712768 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:26.814691+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:27.814859+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339877888 unmapped: 71712768 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:28.815083+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339877888 unmapped: 71712768 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:29.815319+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339877888 unmapped: 71712768 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:30.815462+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339877888 unmapped: 71712768 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:31.815588+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339877888 unmapped: 71712768 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:32.815931+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339877888 unmapped: 71712768 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:33.816087+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339894272 unmapped: 71696384 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:34.816240+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339894272 unmapped: 71696384 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:35.816375+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339894272 unmapped: 71696384 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:36.816620+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339894272 unmapped: 71696384 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:37.816974+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339894272 unmapped: 71696384 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:38.817138+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339902464 unmapped: 71688192 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:39.817293+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339902464 unmapped: 71688192 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:40.817472+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339902464 unmapped: 71688192 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:41.817623+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339902464 unmapped: 71688192 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:42.817786+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339902464 unmapped: 71688192 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:43.817980+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339902464 unmapped: 71688192 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:44.818113+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339902464 unmapped: 71688192 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:45.818275+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339910656 unmapped: 71680000 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:46.818489+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339910656 unmapped: 71680000 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:47.818665+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339910656 unmapped: 71680000 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:48.818819+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339910656 unmapped: 71680000 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:49.819019+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339918848 unmapped: 71671808 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:50.819165+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339918848 unmapped: 71671808 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:51.819344+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339918848 unmapped: 71671808 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:52.819808+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339918848 unmapped: 71671808 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:53.820041+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339927040 unmapped: 71663616 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:54.820278+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339927040 unmapped: 71663616 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:55.820484+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339927040 unmapped: 71663616 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:56.820781+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339927040 unmapped: 71663616 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:57.820929+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339927040 unmapped: 71663616 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:58.821073+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339927040 unmapped: 71663616 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:00:59.821230+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339927040 unmapped: 71663616 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:00.821362+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339927040 unmapped: 71663616 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:01.821512+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339927040 unmapped: 71663616 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:02.821675+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339927040 unmapped: 71663616 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:03.821865+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339943424 unmapped: 71647232 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:04.822030+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339943424 unmapped: 71647232 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:05.822164+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339943424 unmapped: 71647232 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:06.822336+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339951616 unmapped: 71639040 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:07.822481+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339951616 unmapped: 71639040 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:08.822667+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339951616 unmapped: 71639040 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:09.822876+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339951616 unmapped: 71639040 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:10.823030+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339951616 unmapped: 71639040 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:11.823162+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339951616 unmapped: 71639040 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:12.823292+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339951616 unmapped: 71639040 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:13.823459+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339951616 unmapped: 71639040 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:14.823700+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339951616 unmapped: 71639040 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:15.823848+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339951616 unmapped: 71639040 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:16.824078+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339959808 unmapped: 71630848 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:17.824306+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339959808 unmapped: 71630848 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:18.824769+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339959808 unmapped: 71630848 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:19.824982+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339959808 unmapped: 71630848 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:20.825224+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339959808 unmapped: 71630848 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:21.825369+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339959808 unmapped: 71630848 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:22.825544+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339959808 unmapped: 71630848 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:23.825867+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339959808 unmapped: 71630848 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:24.826030+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339959808 unmapped: 71630848 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:25.826187+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339959808 unmapped: 71630848 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:26.826416+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339959808 unmapped: 71630848 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:27.826598+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339959808 unmapped: 71630848 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:28.826805+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339959808 unmapped: 71630848 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:29.827011+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339976192 unmapped: 71614464 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:30.827226+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339976192 unmapped: 71614464 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:31.827364+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339976192 unmapped: 71614464 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:32.827545+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339976192 unmapped: 71614464 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:33.827717+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339976192 unmapped: 71614464 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:34.827874+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 71606272 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:35.828054+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 71606272 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:36.828235+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 71606272 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:37.828425+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 71606272 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:38.828670+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 71606272 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:39.828823+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 71606272 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:40.829015+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 71606272 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:41.829173+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 71606272 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:42.829355+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 71606272 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:43.829517+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 71606272 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:44.829724+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 71606272 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:45.829870+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 71606272 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:46.830095+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 71606272 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:47.830236+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 71606272 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:48.830364+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 71606272 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:49.830493+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 71606272 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:50.830558+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339984384 unmapped: 71606272 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:51.830770+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339992576 unmapped: 71598080 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:52.830947+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339992576 unmapped: 71598080 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:53.831078+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339992576 unmapped: 71598080 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:54.831281+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339992576 unmapped: 71598080 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:55.831431+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339992576 unmapped: 71598080 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:56.831588+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339992576 unmapped: 71598080 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:57.831739+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339992576 unmapped: 71598080 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:58.831882+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339992576 unmapped: 71598080 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:01:59.832012+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 339992576 unmapped: 71598080 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:00.832212+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340000768 unmapped: 71589888 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:01.832371+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340000768 unmapped: 71589888 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:02.832543+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340000768 unmapped: 71589888 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:03.832715+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340000768 unmapped: 71589888 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:04.832883+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340000768 unmapped: 71589888 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:05.833064+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340000768 unmapped: 71589888 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:06.833242+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340000768 unmapped: 71589888 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:07.833363+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340008960 unmapped: 71581696 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:08.833593+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340008960 unmapped: 71581696 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:09.833765+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340008960 unmapped: 71581696 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:10.833919+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340008960 unmapped: 71581696 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:11.834076+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340008960 unmapped: 71581696 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:12.834282+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340008960 unmapped: 71581696 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:13.834425+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340008960 unmapped: 71581696 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:14.834546+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340008960 unmapped: 71581696 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:15.834707+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340008960 unmapped: 71581696 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:16.834883+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340008960 unmapped: 71581696 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:17.835023+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340008960 unmapped: 71581696 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:18.835189+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340017152 unmapped: 71573504 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:19.835383+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340017152 unmapped: 71573504 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:20.835573+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340017152 unmapped: 71573504 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:21.835777+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340017152 unmapped: 71573504 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:22.835960+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340017152 unmapped: 71573504 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:23.836138+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340017152 unmapped: 71573504 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:24.836313+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340017152 unmapped: 71573504 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:25.836474+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340017152 unmapped: 71573504 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:26.836743+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340025344 unmapped: 71565312 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:27.836949+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340025344 unmapped: 71565312 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:28.837163+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340025344 unmapped: 71565312 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:29.837361+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340025344 unmapped: 71565312 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:30.837510+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340025344 unmapped: 71565312 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:31.837728+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340033536 unmapped: 71557120 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:32.837875+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340033536 unmapped: 71557120 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:33.838054+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340041728 unmapped: 71548928 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:34.838223+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340041728 unmapped: 71548928 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:35.838413+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340041728 unmapped: 71548928 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:36.838619+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340041728 unmapped: 71548928 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:37.838970+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340041728 unmapped: 71548928 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:38.839175+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340041728 unmapped: 71548928 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:39.839349+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340041728 unmapped: 71548928 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:40.839573+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340041728 unmapped: 71548928 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:41.839730+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340041728 unmapped: 71548928 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:42.839857+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340041728 unmapped: 71548928 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:43.840031+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340041728 unmapped: 71548928 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:44.840214+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340041728 unmapped: 71548928 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:45.840383+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340041728 unmapped: 71548928 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:46.840614+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340049920 unmapped: 71540736 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:47.840794+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340049920 unmapped: 71540736 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:48.840939+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340058112 unmapped: 71532544 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:49.841075+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340058112 unmapped: 71532544 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:50.841223+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340058112 unmapped: 71532544 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:51.841426+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340058112 unmapped: 71532544 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:52.841588+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340058112 unmapped: 71532544 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:53.841715+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340058112 unmapped: 71532544 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:54.841911+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340058112 unmapped: 71532544 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:55.842058+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340058112 unmapped: 71532544 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:56.842283+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340066304 unmapped: 71524352 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:57.842446+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340066304 unmapped: 71524352 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:58.842592+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340066304 unmapped: 71524352 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:02:59.842810+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340066304 unmapped: 71524352 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:00.843027+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340066304 unmapped: 71524352 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:01.843168+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340066304 unmapped: 71524352 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:02.843338+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340066304 unmapped: 71524352 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:03.843525+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340066304 unmapped: 71524352 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:04.843680+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340066304 unmapped: 71524352 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:05.843854+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340074496 unmapped: 71516160 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:06.844033+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340074496 unmapped: 71516160 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:07.844205+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340074496 unmapped: 71516160 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:08.844421+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340074496 unmapped: 71516160 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:09.844697+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340074496 unmapped: 71516160 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7800.2 total, 600.0 interval
                                           Cumulative writes: 45K writes, 182K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.02 MB/s
                                           Cumulative WAL: 45K writes, 16K syncs, 2.78 writes per sync, written: 0.17 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 264 writes, 396 keys, 264 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s
                                           Interval WAL: 264 writes, 132 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:10.844884+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340074496 unmapped: 71516160 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:11.845070+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340074496 unmapped: 71516160 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:12.845280+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340074496 unmapped: 71516160 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:13.845457+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340074496 unmapped: 71516160 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:14.845627+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340074496 unmapped: 71516160 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:15.845824+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340082688 unmapped: 71507968 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:16.845987+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340082688 unmapped: 71507968 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:17.846154+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340082688 unmapped: 71507968 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:18.846345+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340082688 unmapped: 71507968 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:19.846529+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340082688 unmapped: 71507968 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:20.846705+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340082688 unmapped: 71507968 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:21.846936+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340082688 unmapped: 71507968 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:22.847135+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340082688 unmapped: 71507968 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:23.847305+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340082688 unmapped: 71507968 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:24.847518+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340082688 unmapped: 71507968 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:25.847700+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340082688 unmapped: 71507968 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:26.847894+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340082688 unmapped: 71507968 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:27.848059+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340082688 unmapped: 71507968 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:28.848270+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340090880 unmapped: 71499776 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:29.848455+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340090880 unmapped: 71499776 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:30.848719+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340099072 unmapped: 71491584 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:31.848938+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340099072 unmapped: 71491584 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:32.849142+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340099072 unmapped: 71491584 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:33.849303+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340099072 unmapped: 71491584 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:34.849506+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340099072 unmapped: 71491584 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:35.849723+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340099072 unmapped: 71491584 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:36.849957+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340099072 unmapped: 71491584 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:37.850125+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340099072 unmapped: 71491584 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:38.850298+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340099072 unmapped: 71491584 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:39.850506+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340099072 unmapped: 71491584 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:40.850738+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340107264 unmapped: 71483392 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:41.850927+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340107264 unmapped: 71483392 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:42.851150+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340107264 unmapped: 71483392 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:43.851320+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340107264 unmapped: 71483392 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:44.851487+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340107264 unmapped: 71483392 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:45.851711+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340115456 unmapped: 71475200 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:46.852173+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340115456 unmapped: 71475200 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:47.852361+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340115456 unmapped: 71475200 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:48.852626+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340115456 unmapped: 71475200 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:49.852856+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340115456 unmapped: 71475200 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:50.853016+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340115456 unmapped: 71475200 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:51.853388+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340115456 unmapped: 71475200 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:52.853571+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340115456 unmapped: 71475200 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:53.853884+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340123648 unmapped: 71467008 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:54.854061+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340123648 unmapped: 71467008 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:55.854228+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340123648 unmapped: 71467008 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:56.854487+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340123648 unmapped: 71467008 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:57.854693+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340123648 unmapped: 71467008 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:58.854925+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340123648 unmapped: 71467008 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:03:59.855232+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340123648 unmapped: 71467008 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:00.855572+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340123648 unmapped: 71467008 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:01.855747+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340123648 unmapped: 71467008 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:02.855881+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340123648 unmapped: 71467008 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:03.856025+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340123648 unmapped: 71467008 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:04.856106+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340131840 unmapped: 71458816 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:05.856260+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340131840 unmapped: 71458816 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:06.856417+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340140032 unmapped: 71450624 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:07.856571+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340140032 unmapped: 71450624 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:08.856718+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340140032 unmapped: 71450624 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:09.856883+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340140032 unmapped: 71450624 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:10.857077+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340140032 unmapped: 71450624 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:11.857267+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340140032 unmapped: 71450624 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:12.857543+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340140032 unmapped: 71450624 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:13.857752+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340140032 unmapped: 71450624 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:14.858019+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340140032 unmapped: 71450624 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:15.858239+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340140032 unmapped: 71450624 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:16.858487+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340148224 unmapped: 71442432 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:17.858701+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340148224 unmapped: 71442432 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:18.858879+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340148224 unmapped: 71442432 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1175260771' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:19.859027+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340148224 unmapped: 71442432 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:20.859232+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340148224 unmapped: 71442432 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:21.859449+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340148224 unmapped: 71442432 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:22.859593+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340148224 unmapped: 71442432 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:23.859852+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340148224 unmapped: 71442432 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:24.860028+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340148224 unmapped: 71442432 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:25.860204+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340148224 unmapped: 71442432 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:26.860441+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340148224 unmapped: 71442432 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:27.860708+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340148224 unmapped: 71442432 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:28.862763+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340148224 unmapped: 71442432 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:29.862888+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340148224 unmapped: 71442432 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:30.863061+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340148224 unmapped: 71442432 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:31.863212+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340148224 unmapped: 71442432 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:32.863341+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340156416 unmapped: 71434240 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:33.863432+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340156416 unmapped: 71434240 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:34.863596+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340156416 unmapped: 71434240 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:35.863790+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340156416 unmapped: 71434240 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:36.863926+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340156416 unmapped: 71434240 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:37.864011+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340156416 unmapped: 71434240 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:38.864167+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340156416 unmapped: 71434240 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:39.864308+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340156416 unmapped: 71434240 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:40.864518+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340164608 unmapped: 71426048 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:41.864711+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340164608 unmapped: 71426048 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:42.864827+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340172800 unmapped: 71417856 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:43.864942+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340172800 unmapped: 71417856 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:44.865113+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340172800 unmapped: 71417856 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:45.865223+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340172800 unmapped: 71417856 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:46.865357+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:47.865484+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340172800 unmapped: 71417856 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:48.865621+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340180992 unmapped: 71409664 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:49.865779+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340180992 unmapped: 71409664 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:50.865903+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340180992 unmapped: 71409664 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:51.866061+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340180992 unmapped: 71409664 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:52.866193+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340180992 unmapped: 71409664 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:53.866369+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340180992 unmapped: 71409664 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:54.866509+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340180992 unmapped: 71409664 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:55.866713+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340180992 unmapped: 71409664 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:56.866926+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340180992 unmapped: 71409664 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:57.867089+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340180992 unmapped: 71409664 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:58.867390+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340180992 unmapped: 71409664 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:04:59.868052+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340180992 unmapped: 71409664 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:00.868268+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340180992 unmapped: 71409664 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:01.868906+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340180992 unmapped: 71409664 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:02.869111+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340180992 unmapped: 71409664 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:03.869376+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340180992 unmapped: 71409664 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:04.869569+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340180992 unmapped: 71409664 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:05.869786+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340189184 unmapped: 71401472 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:06.869941+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340189184 unmapped: 71401472 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:07.870187+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340189184 unmapped: 71401472 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:08.870517+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340197376 unmapped: 71393280 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:09.870834+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340197376 unmapped: 71393280 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:10.871135+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340197376 unmapped: 71393280 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:11.871360+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340197376 unmapped: 71393280 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:12.871521+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340197376 unmapped: 71393280 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:13.871687+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340197376 unmapped: 71393280 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:14.871825+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340197376 unmapped: 71393280 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:15.871997+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340197376 unmapped: 71393280 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:16.872245+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340197376 unmapped: 71393280 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:17.872391+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340197376 unmapped: 71393280 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:18.872581+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340197376 unmapped: 71393280 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:19.872827+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340197376 unmapped: 71393280 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:20.872983+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340197376 unmapped: 71393280 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:21.873183+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340197376 unmapped: 71393280 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:22.873441+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340197376 unmapped: 71393280 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:23.873741+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340197376 unmapped: 71393280 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:24.873904+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340205568 unmapped: 71385088 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:25.874090+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340213760 unmapped: 71376896 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:26.874304+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340213760 unmapped: 71376896 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:27.874518+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340213760 unmapped: 71376896 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:28.874741+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340213760 unmapped: 71376896 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:29.874867+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340213760 unmapped: 71376896 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:30.875006+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340213760 unmapped: 71376896 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:31.875264+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340213760 unmapped: 71376896 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:32.875516+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340221952 unmapped: 71368704 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:33.875695+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 603.206298828s of 603.590026855s, submitted: 132
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340230144 unmapped: 71360512 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:34.875861+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340230144 unmapped: 71360512 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:35.876041+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340254720 unmapped: 71335936 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:36.876282+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 71311360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:37.876447+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 71311360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:38.876587+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 71311360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:39.876707+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 71311360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:40.876874+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 71311360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:41.877026+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 71311360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:42.877182+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 71311360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:43.877357+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 71311360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:44.877527+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 71311360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:45.877707+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 71311360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:46.878015+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 71311360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:47.878221+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 71311360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:48.878480+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 71311360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:49.878723+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 71311360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:50.878947+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 71311360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:51.879167+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 71311360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:52.879361+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 71311360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:53.879721+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 71311360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:54.880005+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 71311360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:55.880253+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 71311360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:56.880505+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 71311360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:57.880717+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 71311360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:58.880920+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 71311360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:05:59.881075+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 71311360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:00.881217+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 71311360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:01.881400+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 71311360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:02.881680+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 71311360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:03.881821+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 71311360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:04.882005+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 71311360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:05.882154+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 71311360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:06.882358+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 71311360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:07.882543+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 71311360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:08.882697+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 71311360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:09.882859+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 71311360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:10.882992+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 71311360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:11.883125+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 71311360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:12.883305+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 71311360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:13.883621+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 71311360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:14.883802+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 71311360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:15.883928+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 71311360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:16.884104+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 71311360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:17.884239+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 71311360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:18.884387+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 71311360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:19.884595+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 71311360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:20.884819+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 71311360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:21.885298+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 71311360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:22.885561+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340279296 unmapped: 71311360 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:23.885744+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340287488 unmapped: 71303168 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:24.885964+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340287488 unmapped: 71303168 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:25.886169+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340287488 unmapped: 71303168 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:26.886393+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340287488 unmapped: 71303168 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:27.886596+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340287488 unmapped: 71303168 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:28.886744+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340287488 unmapped: 71303168 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:29.886906+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340287488 unmapped: 71303168 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:30.887049+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340287488 unmapped: 71303168 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:31.887302+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340287488 unmapped: 71303168 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:32.887522+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340287488 unmapped: 71303168 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:33.887726+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340287488 unmapped: 71303168 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:34.887941+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340287488 unmapped: 71303168 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:35.888101+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340287488 unmapped: 71303168 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:36.888263+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340287488 unmapped: 71303168 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:37.888393+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340287488 unmapped: 71303168 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:38.888578+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340287488 unmapped: 71303168 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:39.888751+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340295680 unmapped: 71294976 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:40.888946+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340295680 unmapped: 71294976 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:41.889122+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340295680 unmapped: 71294976 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:42.889318+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340295680 unmapped: 71294976 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:43.889473+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340295680 unmapped: 71294976 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:44.889627+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340295680 unmapped: 71294976 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:45.889805+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340295680 unmapped: 71294976 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:46.890054+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340295680 unmapped: 71294976 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:47.890247+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340295680 unmapped: 71294976 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:48.890458+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340295680 unmapped: 71294976 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:49.890733+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340295680 unmapped: 71294976 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:50.890991+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340295680 unmapped: 71294976 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:51.891204+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340295680 unmapped: 71294976 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:52.891356+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340295680 unmapped: 71294976 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:53.891562+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340295680 unmapped: 71294976 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:54.891731+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340303872 unmapped: 71286784 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:55.891895+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340303872 unmapped: 71286784 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:56.892072+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340303872 unmapped: 71286784 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:57.892193+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340303872 unmapped: 71286784 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:58.892354+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340303872 unmapped: 71286784 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:06:59.892502+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340303872 unmapped: 71286784 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:00.892682+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340303872 unmapped: 71286784 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:01.892852+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340303872 unmapped: 71286784 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:02.893012+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340303872 unmapped: 71286784 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:03.893152+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340303872 unmapped: 71286784 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:04.893290+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340303872 unmapped: 71286784 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:05.893451+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340312064 unmapped: 71278592 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:06.893699+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340312064 unmapped: 71278592 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:07.893876+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340312064 unmapped: 71278592 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:08.894055+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340312064 unmapped: 71278592 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:09.894181+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340312064 unmapped: 71278592 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:10.894336+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340312064 unmapped: 71278592 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:11.894470+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340312064 unmapped: 71278592 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:12.894610+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340312064 unmapped: 71278592 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:13.894735+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340312064 unmapped: 71278592 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:14.894893+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340312064 unmapped: 71278592 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:15.895052+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340312064 unmapped: 71278592 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:16.895221+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340312064 unmapped: 71278592 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:17.895397+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340312064 unmapped: 71278592 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:18.895533+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340312064 unmapped: 71278592 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:19.895673+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340312064 unmapped: 71278592 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:20.895848+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340312064 unmapped: 71278592 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:21.895974+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340320256 unmapped: 71270400 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:22.896164+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340320256 unmapped: 71270400 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:23.896369+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340320256 unmapped: 71270400 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:24.896501+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340320256 unmapped: 71270400 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:25.896621+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340320256 unmapped: 71270400 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:26.896809+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340320256 unmapped: 71270400 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:27.896987+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340320256 unmapped: 71270400 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:28.897149+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340320256 unmapped: 71270400 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:29.897545+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340320256 unmapped: 71270400 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:30.897765+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340320256 unmapped: 71270400 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:31.897966+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340320256 unmapped: 71270400 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:32.898163+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340320256 unmapped: 71270400 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:33.898508+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340320256 unmapped: 71270400 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:34.898762+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340320256 unmapped: 71270400 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:35.898949+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340328448 unmapped: 71262208 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:36.899139+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340328448 unmapped: 71262208 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:37.899323+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340328448 unmapped: 71262208 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:38.899470+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340336640 unmapped: 71254016 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:39.899661+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340336640 unmapped: 71254016 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:40.899924+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340336640 unmapped: 71254016 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:41.900153+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340336640 unmapped: 71254016 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:42.900327+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340336640 unmapped: 71254016 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:43.900470+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340336640 unmapped: 71254016 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:44.900617+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340344832 unmapped: 71245824 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:45.900805+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340344832 unmapped: 71245824 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:46.901043+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340344832 unmapped: 71245824 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:47.901236+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340344832 unmapped: 71245824 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:48.901417+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340353024 unmapped: 71237632 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:49.901614+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340353024 unmapped: 71237632 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:50.901764+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340353024 unmapped: 71237632 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:51.901898+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340353024 unmapped: 71237632 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:52.902071+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340353024 unmapped: 71237632 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:53.902279+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340353024 unmapped: 71237632 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:54.902513+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340353024 unmapped: 71237632 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:55.902716+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340361216 unmapped: 71229440 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:56.902966+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340361216 unmapped: 71229440 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:57.903098+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340361216 unmapped: 71229440 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:58.903233+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:07:59.903387+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340361216 unmapped: 71229440 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:00.903549+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340361216 unmapped: 71229440 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:01.903699+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340369408 unmapped: 71221248 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:02.903927+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340369408 unmapped: 71221248 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:03.904083+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340369408 unmapped: 71221248 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:04.904269+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340369408 unmapped: 71221248 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:05.904429+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340369408 unmapped: 71221248 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:06.904586+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340369408 unmapped: 71221248 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:07.904717+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340369408 unmapped: 71221248 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:08.904841+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340369408 unmapped: 71221248 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:09.905015+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340369408 unmapped: 71221248 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:10.905239+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340369408 unmapped: 71221248 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:11.905423+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340369408 unmapped: 71221248 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets getting new tickets!
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:12.905849+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _finish_auth 0
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:12.906721+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340377600 unmapped: 71213056 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:13.906077+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340377600 unmapped: 71213056 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:14.906248+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340377600 unmapped: 71213056 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:15.906466+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340377600 unmapped: 71213056 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:16.906729+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340377600 unmapped: 71213056 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:17.906926+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340377600 unmapped: 71213056 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:18.907093+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340377600 unmapped: 71213056 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:19.907276+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340377600 unmapped: 71213056 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:20.907422+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 71204864 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:21.907562+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 71204864 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:22.907745+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 71204864 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:23.907916+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 71204864 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:24.908055+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 71204864 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:25.908267+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 71204864 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:26.908486+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 71204864 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:27.908758+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 71204864 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:28.908971+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 71204864 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:29.909196+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 71204864 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:30.909361+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340385792 unmapped: 71204864 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:31.909489+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340393984 unmapped: 71196672 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:32.909704+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340393984 unmapped: 71196672 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:33.909866+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340402176 unmapped: 71188480 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:34.910095+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340402176 unmapped: 71188480 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:35.910726+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340402176 unmapped: 71188480 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:36.910989+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340402176 unmapped: 71188480 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:37.911141+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340402176 unmapped: 71188480 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:38.911322+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340402176 unmapped: 71188480 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:39.911493+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340402176 unmapped: 71188480 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:40.911665+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340402176 unmapped: 71188480 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:41.911823+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340410368 unmapped: 71180288 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:42.912014+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340410368 unmapped: 71180288 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:43.912161+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340410368 unmapped: 71180288 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: mgrc ms_handle_reset ms_handle_reset con 0x563f6719f400
Nov 25 18:14:15 compute-0 ceph-osd[88890]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1812945073
Nov 25 18:14:15 compute-0 ceph-osd[88890]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1812945073,v1:192.168.122.100:6801/1812945073]
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: get_auth_request con 0x563f6985e800 auth_method 0
Nov 25 18:14:15 compute-0 ceph-osd[88890]: mgrc handle_mgr_configure stats_period=5
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:44.912329+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340410368 unmapped: 71180288 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:45.912542+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340410368 unmapped: 71180288 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:46.912804+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340410368 unmapped: 71180288 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:47.912993+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340410368 unmapped: 71180288 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:48.913190+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340410368 unmapped: 71180288 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:49.913414+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340418560 unmapped: 71172096 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:50.913602+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340418560 unmapped: 71172096 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:51.913762+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340418560 unmapped: 71172096 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:52.914011+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340418560 unmapped: 71172096 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:53.914211+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340418560 unmapped: 71172096 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:54.914488+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 ms_handle_reset con 0x563f6dcc6000 session 0x563f68a64d20
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: handle_auth_request added challenge on 0x563f67e98400
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340418560 unmapped: 71172096 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:55.914630+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340418560 unmapped: 71172096 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:56.915024+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340426752 unmapped: 71163904 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:57.915308+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340426752 unmapped: 71163904 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:58.915541+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340426752 unmapped: 71163904 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:08:59.915701+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340426752 unmapped: 71163904 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:00.916030+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340426752 unmapped: 71163904 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:01.916355+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340426752 unmapped: 71163904 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:02.916541+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340434944 unmapped: 71155712 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:03.916712+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340443136 unmapped: 71147520 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:04.916941+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340443136 unmapped: 71147520 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:05.917286+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340443136 unmapped: 71147520 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:06.917727+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340443136 unmapped: 71147520 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:07.917927+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340443136 unmapped: 71147520 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:08.918214+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340443136 unmapped: 71147520 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:09.918530+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340443136 unmapped: 71147520 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:10.918834+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340443136 unmapped: 71147520 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:11.919104+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340443136 unmapped: 71147520 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:12.919369+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340443136 unmapped: 71147520 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:13.919714+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340443136 unmapped: 71147520 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:14.919907+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340443136 unmapped: 71147520 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:15.920228+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340443136 unmapped: 71147520 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:16.920523+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340443136 unmapped: 71147520 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:17.920776+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340443136 unmapped: 71147520 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:18.920897+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340443136 unmapped: 71147520 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:19.921125+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340451328 unmapped: 71139328 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:20.921332+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340451328 unmapped: 71139328 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:21.921610+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340451328 unmapped: 71139328 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:22.921882+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340451328 unmapped: 71139328 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:23.922063+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340451328 unmapped: 71139328 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:24.922234+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340459520 unmapped: 71131136 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:25.922412+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340459520 unmapped: 71131136 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:26.922725+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340459520 unmapped: 71131136 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:27.922877+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340459520 unmapped: 71131136 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:28.923146+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340459520 unmapped: 71131136 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:29.923388+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340459520 unmapped: 71131136 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:30.923715+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340459520 unmapped: 71131136 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:31.923898+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340459520 unmapped: 71131136 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:32.924065+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340475904 unmapped: 71114752 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:33.924220+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340475904 unmapped: 71114752 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:34.924438+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340475904 unmapped: 71114752 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:35.924708+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340475904 unmapped: 71114752 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:36.924928+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340475904 unmapped: 71114752 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:37.925135+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340475904 unmapped: 71114752 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:38.925285+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340475904 unmapped: 71114752 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:39.925438+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340475904 unmapped: 71114752 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:40.925698+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340475904 unmapped: 71114752 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:41.925868+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340475904 unmapped: 71114752 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:42.926080+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340475904 unmapped: 71114752 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:43.926333+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340475904 unmapped: 71114752 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:44.926581+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340475904 unmapped: 71114752 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:45.926776+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 71106560 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:46.927072+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 71106560 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:47.927372+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 71106560 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:48.927796+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 71106560 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:49.928064+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 71106560 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:50.928253+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 71106560 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:51.928489+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 71106560 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:52.928729+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 71106560 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:53.928926+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 71106560 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:54.929133+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 71106560 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:55.929322+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 71106560 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:56.929557+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 71106560 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:57.929964+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 71106560 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:58.930330+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 71106560 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:09:59.930807+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 71106560 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:00.931190+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 71106560 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:01.931558+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 71106560 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:02.931821+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 71106560 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:03.932054+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 71106560 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:04.932196+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 71106560 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:05.932330+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340484096 unmapped: 71106560 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:06.932524+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340492288 unmapped: 71098368 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:07.932741+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340500480 unmapped: 71090176 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:08.932899+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340500480 unmapped: 71090176 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:09.933052+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340500480 unmapped: 71090176 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:10.933224+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340500480 unmapped: 71090176 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:11.933439+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340500480 unmapped: 71090176 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:12.933687+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340500480 unmapped: 71090176 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:13.933890+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340500480 unmapped: 71090176 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:14.934062+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340500480 unmapped: 71090176 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:15.934211+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340500480 unmapped: 71090176 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:16.934430+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340500480 unmapped: 71090176 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:17.934784+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340500480 unmapped: 71090176 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:18.935011+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340500480 unmapped: 71090176 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:19.935210+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340500480 unmapped: 71090176 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:20.935358+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340500480 unmapped: 71090176 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:21.935594+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340500480 unmapped: 71090176 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:22.935814+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340500480 unmapped: 71090176 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:23.935943+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 ms_handle_reset con 0x563f6825a000 session 0x563f6625b4a0
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: handle_auth_request added challenge on 0x563f703c2400
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340508672 unmapped: 71081984 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:24.936087+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340508672 unmapped: 71081984 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:25.936398+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340508672 unmapped: 71081984 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:26.936719+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340516864 unmapped: 71073792 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:27.936905+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340516864 unmapped: 71073792 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:28.937034+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340516864 unmapped: 71073792 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:29.937182+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340516864 unmapped: 71073792 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:30.937363+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340516864 unmapped: 71073792 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:31.937502+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340516864 unmapped: 71073792 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:32.937721+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340516864 unmapped: 71073792 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:33.937865+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340516864 unmapped: 71073792 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:34.938063+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340525056 unmapped: 71065600 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:35.938221+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340525056 unmapped: 71065600 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:36.938391+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340525056 unmapped: 71065600 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:37.938546+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340525056 unmapped: 71065600 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:38.938727+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340525056 unmapped: 71065600 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:39.938871+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340525056 unmapped: 71065600 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:40.939095+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340525056 unmapped: 71065600 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:41.939247+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340525056 unmapped: 71065600 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:42.939419+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340525056 unmapped: 71065600 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:43.939591+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340525056 unmapped: 71065600 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:44.939719+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340525056 unmapped: 71065600 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:45.939857+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340525056 unmapped: 71065600 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:46.940018+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340533248 unmapped: 71057408 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:47.940161+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340533248 unmapped: 71057408 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:48.940304+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340533248 unmapped: 71057408 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:49.940447+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340533248 unmapped: 71057408 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:50.940664+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340533248 unmapped: 71057408 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:51.940870+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340533248 unmapped: 71057408 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:52.940991+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340533248 unmapped: 71057408 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:53.941143+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340533248 unmapped: 71057408 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:54.941329+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340533248 unmapped: 71057408 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:55.941468+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340533248 unmapped: 71057408 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:56.941618+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340533248 unmapped: 71057408 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:57.941775+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340533248 unmapped: 71057408 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:58.941919+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340533248 unmapped: 71057408 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:10:59.942043+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340533248 unmapped: 71057408 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:00.942174+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340533248 unmapped: 71057408 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:01.942305+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340533248 unmapped: 71057408 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:02.942452+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340541440 unmapped: 71049216 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:03.942605+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340541440 unmapped: 71049216 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:04.942761+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340541440 unmapped: 71049216 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:05.942972+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340541440 unmapped: 71049216 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:06.943277+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340541440 unmapped: 71049216 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:07.943405+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340541440 unmapped: 71049216 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:08.943523+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340549632 unmapped: 71041024 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:09.943700+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340549632 unmapped: 71041024 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:10.943823+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340549632 unmapped: 71041024 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:11.943953+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340549632 unmapped: 71041024 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:12.944100+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340549632 unmapped: 71041024 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:13.944228+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340549632 unmapped: 71041024 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:14.944365+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340549632 unmapped: 71041024 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:15.944541+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340549632 unmapped: 71041024 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:16.944714+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340549632 unmapped: 71041024 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:17.944855+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340549632 unmapped: 71041024 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:18.945069+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340549632 unmapped: 71041024 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:19.945239+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340549632 unmapped: 71041024 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:20.945361+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340549632 unmapped: 71041024 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:21.945552+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340549632 unmapped: 71041024 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:22.945744+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340549632 unmapped: 71041024 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:23.945912+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340549632 unmapped: 71041024 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:24.946025+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340549632 unmapped: 71041024 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:25.946175+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340549632 unmapped: 71041024 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:26.946400+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340557824 unmapped: 71032832 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:27.946542+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340557824 unmapped: 71032832 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:28.946671+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340566016 unmapped: 71024640 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:29.946845+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340566016 unmapped: 71024640 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:30.947039+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340566016 unmapped: 71024640 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:31.947201+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340566016 unmapped: 71024640 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:32.947337+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340566016 unmapped: 71024640 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:33.947485+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340566016 unmapped: 71024640 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:34.947612+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340566016 unmapped: 71024640 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:35.947780+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340566016 unmapped: 71024640 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:36.947980+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340574208 unmapped: 71016448 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:37.948130+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340574208 unmapped: 71016448 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:38.948263+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340574208 unmapped: 71016448 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:39.948426+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340574208 unmapped: 71016448 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:40.948552+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340574208 unmapped: 71016448 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:41.948702+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340574208 unmapped: 71016448 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:42.948855+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340574208 unmapped: 71016448 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:43.949031+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340574208 unmapped: 71016448 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:44.949167+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340582400 unmapped: 71008256 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:45.949311+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340582400 unmapped: 71008256 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:46.949473+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340582400 unmapped: 71008256 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:47.949618+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340582400 unmapped: 71008256 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:48.949804+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340582400 unmapped: 71008256 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:49.949933+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340582400 unmapped: 71008256 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:50.950079+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340582400 unmapped: 71008256 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:51.950213+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340582400 unmapped: 71008256 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:52.950350+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340590592 unmapped: 71000064 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:53.950571+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340590592 unmapped: 71000064 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:54.950705+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340590592 unmapped: 71000064 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:55.950842+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340590592 unmapped: 71000064 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:56.950992+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340590592 unmapped: 71000064 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:57.951692+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340590592 unmapped: 71000064 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:58.951851+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340590592 unmapped: 71000064 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:11:59.951990+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340590592 unmapped: 71000064 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:00.952114+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340598784 unmapped: 70991872 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:01.952246+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340598784 unmapped: 70991872 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:02.952416+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340598784 unmapped: 70991872 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:03.952556+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340598784 unmapped: 70991872 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:04.952670+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340598784 unmapped: 70991872 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:05.952822+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340598784 unmapped: 70991872 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:06.953020+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340598784 unmapped: 70991872 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:07.953154+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340598784 unmapped: 70991872 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:08.953286+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340606976 unmapped: 70983680 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:09.953414+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340606976 unmapped: 70983680 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:10.953536+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340606976 unmapped: 70983680 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:11.953709+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340606976 unmapped: 70983680 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:12.953845+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340606976 unmapped: 70983680 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:13.954043+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340606976 unmapped: 70983680 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:14.954158+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340606976 unmapped: 70983680 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:15.954342+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340606976 unmapped: 70983680 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:16.954589+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340615168 unmapped: 70975488 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:17.954983+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340615168 unmapped: 70975488 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:18.955163+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340615168 unmapped: 70975488 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:19.955567+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340615168 unmapped: 70975488 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:20.955724+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340615168 unmapped: 70975488 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:21.955918+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340615168 unmapped: 70975488 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:22.956086+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340615168 unmapped: 70975488 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:23.956260+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340615168 unmapped: 70975488 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:24.956495+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340623360 unmapped: 70967296 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:25.956758+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340623360 unmapped: 70967296 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:26.956981+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340623360 unmapped: 70967296 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:27.957125+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340623360 unmapped: 70967296 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:28.957236+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:29.957376+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340623360 unmapped: 70967296 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:30.957548+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340623360 unmapped: 70967296 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:31.957717+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340623360 unmapped: 70967296 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:32.957900+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340631552 unmapped: 70959104 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:33.958094+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340631552 unmapped: 70959104 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:34.958282+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340631552 unmapped: 70959104 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:35.958415+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340631552 unmapped: 70959104 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:36.958634+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340631552 unmapped: 70959104 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:37.958823+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340631552 unmapped: 70959104 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:38.958965+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340631552 unmapped: 70959104 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:39.959145+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340631552 unmapped: 70959104 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:40.959285+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340631552 unmapped: 70959104 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:41.959467+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340631552 unmapped: 70959104 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:42.959719+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340631552 unmapped: 70959104 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:43.959875+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340631552 unmapped: 70959104 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:44.960034+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340639744 unmapped: 70950912 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:45.960214+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340639744 unmapped: 70950912 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:46.960471+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340639744 unmapped: 70950912 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:47.960745+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340639744 unmapped: 70950912 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:48.960977+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340639744 unmapped: 70950912 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:49.961180+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340639744 unmapped: 70950912 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:50.961353+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340639744 unmapped: 70950912 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:51.961561+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340639744 unmapped: 70950912 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:52.961702+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340639744 unmapped: 70950912 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:53.961906+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340647936 unmapped: 70942720 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:54.962047+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340647936 unmapped: 70942720 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:55.962209+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340647936 unmapped: 70942720 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:56.962378+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340647936 unmapped: 70942720 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:57.962506+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340647936 unmapped: 70942720 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:58.962734+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340647936 unmapped: 70942720 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:12:59.962952+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340647936 unmapped: 70942720 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:00.963131+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340647936 unmapped: 70942720 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:01.963314+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340647936 unmapped: 70942720 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:02.963435+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340647936 unmapped: 70942720 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:03.963562+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340656128 unmapped: 70934528 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:04.963717+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340656128 unmapped: 70934528 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:05.963888+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340656128 unmapped: 70934528 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:06.965704+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340656128 unmapped: 70934528 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:07.965884+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340656128 unmapped: 70934528 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:08.966117+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340656128 unmapped: 70934528 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:09.966257+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340656128 unmapped: 70934528 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 8400.2 total, 600.0 interval
                                           Cumulative writes: 45K writes, 183K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.02 MB/s
                                           Cumulative WAL: 45K writes, 16K syncs, 2.77 writes per sync, written: 0.17 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.10 MB, 0.00 MB/s
                                           Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:10.966387+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340656128 unmapped: 70934528 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:11.966535+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340656128 unmapped: 70934528 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:12.966695+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340656128 unmapped: 70934528 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:13.966868+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340656128 unmapped: 70934528 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:14.966994+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340656128 unmapped: 70934528 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:15.967119+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340656128 unmapped: 70934528 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:16.967284+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340656128 unmapped: 70934528 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:17.967435+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340656128 unmapped: 70934528 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:18.967561+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340664320 unmapped: 70926336 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:19.967760+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340664320 unmapped: 70926336 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:20.967949+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340664320 unmapped: 70926336 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:21.968076+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340664320 unmapped: 70926336 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:22.968209+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340664320 unmapped: 70926336 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:23.968352+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340664320 unmapped: 70926336 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:24.968537+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340672512 unmapped: 70918144 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:25.968680+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340672512 unmapped: 70918144 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:26.968876+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340672512 unmapped: 70918144 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:27.969075+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340672512 unmapped: 70918144 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:28.969223+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340672512 unmapped: 70918144 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:29.969339+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340672512 unmapped: 70918144 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:30.969476+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340672512 unmapped: 70918144 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:31.969599+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340680704 unmapped: 70909952 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:32.969709+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340680704 unmapped: 70909952 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:33.969850+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340680704 unmapped: 70909952 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:34.969986+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340680704 unmapped: 70909952 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:35.970112+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340680704 unmapped: 70909952 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:36.970299+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340680704 unmapped: 70909952 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:37.970446+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340680704 unmapped: 70909952 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:38.970591+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340680704 unmapped: 70909952 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:39.970726+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340680704 unmapped: 70909952 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:40.970849+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: osd.0 307 heartbeat osd_stat(store_statfs(0x4ea609000/0x0/0x4ffc00000, data 0xe5e037/0x1005000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [1,2] op hist [])
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 18:14:15 compute-0 ceph-osd[88890]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 18:14:15 compute-0 ceph-osd[88890]: bluestore.MempoolThread(0x563f64959b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592258 data_alloc: 218103808 data_used: 1220608
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340680704 unmapped: 70909952 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:41.970963+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340762624 unmapped: 70828032 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: do_command 'config diff' '{prefix=config diff}'
Nov 25 18:14:15 compute-0 ceph-osd[88890]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 25 18:14:15 compute-0 ceph-osd[88890]: do_command 'config show' '{prefix=config show}'
Nov 25 18:14:15 compute-0 ceph-osd[88890]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 25 18:14:15 compute-0 ceph-osd[88890]: do_command 'counter dump' '{prefix=counter dump}'
Nov 25 18:14:15 compute-0 ceph-osd[88890]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 25 18:14:15 compute-0 ceph-osd[88890]: do_command 'counter schema' '{prefix=counter schema}'
Nov 25 18:14:15 compute-0 ceph-osd[88890]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:42.971111+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340271104 unmapped: 71319552 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: tick
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_tickets
Nov 25 18:14:15 compute-0 ceph-osd[88890]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T18:13:43.971246+0000)
Nov 25 18:14:15 compute-0 ceph-osd[88890]: prioritycache tune_memory target: 4294967296 mapped: 340017152 unmapped: 71573504 heap: 411590656 old mem: 2845415832 new mem: 2845415832
Nov 25 18:14:15 compute-0 ceph-osd[88890]: do_command 'log dump' '{prefix=log dump}'
Nov 25 18:14:15 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 18:14:15 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 18:14:15 compute-0 nova_compute[254092]: 2025-11-25 18:14:15.488 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:14:15 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0) v1
Nov 25 18:14:15 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/276515342' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 25 18:14:15 compute-0 ceph-mon[74985]: from='client.23453 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 18:14:15 compute-0 ceph-mon[74985]: pgmap v4543: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:14:15 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3328389572' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 18:14:15 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1175260771' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 25 18:14:15 compute-0 ceph-mon[74985]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 18:14:15 compute-0 ceph-mon[74985]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 18:14:15 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/276515342' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 25 18:14:15 compute-0 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23465 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:14:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:14:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Nov 25 18:14:16 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/50792344' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 25 18:14:16 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4544: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:14:16 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/50792344' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 25 18:14:16 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0) v1
Nov 25 18:14:16 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2772933123' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 25 18:14:17 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Nov 25 18:14:17 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/472784123' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 25 18:14:17 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Nov 25 18:14:17 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1836950125' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 25 18:14:17 compute-0 systemd[1]: Starting Hostname Service...
Nov 25 18:14:17 compute-0 ceph-mon[74985]: from='client.23465 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:14:17 compute-0 ceph-mon[74985]: pgmap v4544: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:14:17 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2772933123' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 25 18:14:17 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/472784123' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 25 18:14:17 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1836950125' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 25 18:14:17 compute-0 systemd[1]: Started Hostname Service.
Nov 25 18:14:18 compute-0 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23475 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:14:18 compute-0 nova_compute[254092]: 2025-11-25 18:14:18.197 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:14:18 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Nov 25 18:14:18 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3063033791' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 25 18:14:18 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4545: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:14:18 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3063033791' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 25 18:14:18 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Nov 25 18:14:18 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4188891248' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 25 18:14:19 compute-0 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23481 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:14:19 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Nov 25 18:14:19 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1430834046' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 25 18:14:19 compute-0 ceph-mon[74985]: from='client.23475 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:14:19 compute-0 ceph-mon[74985]: pgmap v4545: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:14:19 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/4188891248' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 25 18:14:19 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1430834046' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 25 18:14:19 compute-0 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23485 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:14:20 compute-0 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23487 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:14:20 compute-0 nova_compute[254092]: 2025-11-25 18:14:20.535 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:14:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Nov 25 18:14:20 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3779665899' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 25 18:14:20 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4546: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:14:20 compute-0 ceph-mon[74985]: from='client.23481 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:14:20 compute-0 ceph-mon[74985]: from='client.23485 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:14:20 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3779665899' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 25 18:14:20 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Nov 25 18:14:20 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/359933372' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 25 18:14:21 compute-0 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23493 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:14:21 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:14:21 compute-0 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23495 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:14:21 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:14:21 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 18:14:21 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:14:21 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 18:14:21 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:14:21 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:14:21 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:14:21 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:14:21 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:14:21 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 25 18:14:21 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:14:21 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 25 18:14:21 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:14:21 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:14:21 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:14:21 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 25 18:14:21 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:14:21 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 25 18:14:21 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:14:21 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 18:14:21 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 18:14:21 compute-0 ceph-mgr[75280]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 25 18:14:21 compute-0 ceph-mon[74985]: from='client.23487 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:14:21 compute-0 ceph-mon[74985]: pgmap v4546: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:14:21 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/359933372' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 25 18:14:22 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Nov 25 18:14:22 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3915361538' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 25 18:14:22 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Nov 25 18:14:22 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/237175417' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 25 18:14:22 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4547: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:14:22 compute-0 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23501 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:14:22 compute-0 ceph-mon[74985]: from='client.23493 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:14:22 compute-0 ceph-mon[74985]: from='client.23495 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:14:22 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3915361538' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 25 18:14:22 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/237175417' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 25 18:14:23 compute-0 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23503 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:14:23 compute-0 nova_compute[254092]: 2025-11-25 18:14:23.199 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:14:23 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 25 18:14:23 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/311084273' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 18:14:23 compute-0 ceph-mon[74985]: pgmap v4547: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:14:23 compute-0 ceph-mon[74985]: from='client.23501 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:14:23 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/311084273' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 18:14:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status"} v 0) v1
Nov 25 18:14:24 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/963530015' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 25 18:14:24 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0) v1
Nov 25 18:14:24 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2548909696' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Nov 25 18:14:24 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4548: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:14:24 compute-0 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23511 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 18:14:24 compute-0 ceph-mon[74985]: from='client.23503 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 18:14:24 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/963530015' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 25 18:14:24 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2548909696' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Nov 25 18:14:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 25 18:14:25 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/353340946' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 18:14:25 compute-0 ceph-mgr[75280]: [devicehealth INFO root] Check health
Nov 25 18:14:25 compute-0 nova_compute[254092]: 2025-11-25 18:14:25.537 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:14:25 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0) v1
Nov 25 18:14:25 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2666642064' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Nov 25 18:14:25 compute-0 ceph-mon[74985]: pgmap v4548: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:14:25 compute-0 ceph-mon[74985]: from='client.23511 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 18:14:25 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/353340946' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 18:14:25 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/2666642064' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Nov 25 18:14:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0) v1
Nov 25 18:14:26 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3407451854' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Nov 25 18:14:26 compute-0 ovs-appctl[483607]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 25 18:14:26 compute-0 ovs-appctl[483612]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 25 18:14:26 compute-0 ovs-appctl[483618]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 25 18:14:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 18:14:26 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0) v1
Nov 25 18:14:26 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1309367549' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Nov 25 18:14:26 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4549: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:14:26 compute-0 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23521 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 18:14:27 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3407451854' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Nov 25 18:14:27 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/1309367549' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Nov 25 18:14:27 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0) v1
Nov 25 18:14:27 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3890012439' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Nov 25 18:14:27 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0) v1
Nov 25 18:14:27 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3952440011' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Nov 25 18:14:28 compute-0 nova_compute[254092]: 2025-11-25 18:14:28.200 254096 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 18:14:28 compute-0 ceph-mon[74985]: pgmap v4549: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:14:28 compute-0 ceph-mon[74985]: from='client.23521 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 18:14:28 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3890012439' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Nov 25 18:14:28 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/3952440011' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Nov 25 18:14:28 compute-0 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23527 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 18:14:28 compute-0 ceph-mgr[75280]: log_channel(cluster) log [DBG] : pgmap v4550: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:14:28 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0) v1
Nov 25 18:14:28 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/628960594' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Nov 25 18:14:29 compute-0 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23531 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 18:14:29 compute-0 ceph-mon[74985]: from='client.23527 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 18:14:29 compute-0 ceph-mon[74985]: pgmap v4550: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Nov 25 18:14:29 compute-0 ceph-mon[74985]: from='client.? 192.168.122.100:0/628960594' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Nov 25 18:14:29 compute-0 ceph-mgr[75280]: log_channel(audit) log [DBG] : from='client.23533 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 18:14:29 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0) v1
Nov 25 18:14:29 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4087313325' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Nov 25 18:14:30 compute-0 ceph-mon[74985]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0) v1
Nov 25 18:14:30 compute-0 ceph-mon[74985]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/751060117' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
